Hi. You can check out my journal. I started system trading in the summer of 2017, and you can see the ups and downs of my system there.
1. Commissions. You need to add them into the calculation.
2. Add one tick of slippage to every trade to see if it survives (see the sketch after this list).
3. You need to go back to at least 2008 and 2009 to see whether the system survived and stayed profitable in a bear market.
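To make point 2 concrete, here is a minimal sketch of re-pricing per-trade results for commission and one tick of slippage. The tick value, commission, and trade list are hypothetical placeholders, not numbers from this thread:

```python
# Minimal sketch: re-price a backtest's per-trade results with commission
# and one tick of slippage per trade. All values below are hypothetical.

TICK_VALUE = 12.50   # e.g. ES: one tick = 12.50 USD per contract (assumed)
COMMISSION = 4.00    # round-turn commission per contract (assumed)

gross_pnls = [150.0, -75.0, 312.5, -50.0, 87.5]  # per-trade gross PnL, USD

# One tick of slippage plus commission comes off every trade.
net_pnls = [p - COMMISSION - TICK_VALUE for p in gross_pnls]

print(f"gross total: {sum(gross_pnls):.2f}")
print(f"net total after costs: {sum(net_pnls):.2f}")
```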
-How do you decide when a strategy is good enough to take live, and what data do you use to determine this?
At least 100% annual return on historic max drawdown, then forward test for a month.
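A rough sketch of that screen, assuming a daily equity curve in dollars; the numbers here are made up:

```python
# Minimal sketch of the "annual return vs. historic max drawdown" screen.

def max_drawdown(equity):
    """Largest peak-to-trough decline over the equity curve."""
    peak, worst = equity[0], 0.0
    for x in equity:
        peak = max(peak, x)
        worst = max(worst, peak - x)
    return worst

equity = [100000, 103000, 101000, 108000, 104000, 115000]  # one "year", made up
annual_return = equity[-1] - equity[0]
dd = max_drawdown(equity)

# Bar to clear: annual return at least 100% of historic max drawdown.
print(f"return {annual_return}, max DD {dd}, ratio {annual_return / dd:.2f}")
```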
-Is this simple method of backtesting a reliable qualifier for a strategy? (I'm worried I've simply optimized my strategy for this data set and this won't be representative of how it performs live.)
No, I would not trade those numbers at this time, but please feel free to forward test it on SIM.
Could you elaborate on this? I would have thought this was only necessary for very short time frame trading (my avg. time in market is 93 minutes). I must be missing something.
I'll definitely take a look at your journal. FYI, if I add one tick of slippage my net profit decreases by ~15%. Yikes.
Thank you everyone for your input. I was hoping to come to this forum and be humbled. It seems to be working.
I use a VPS because I cannot keep my internet connection stable for long stretches. A VPS also offers very low latency, but that shouldn't matter much for most systems.
A strategy doesn't have to go from $0 to 2X your net worth on margin.
I am interested in getting a strategy to real money ASAP because a strategy is like a bug: it is not going to have that long a life anyway.
As it performs better it gets more money to trade, and as it performs worse it gets less, until it dies.
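A minimal sketch of that feed-the-winners allocation rule; the lookback and the dollars-per-contract step are assumptions, not the poster's actual numbers:

```python
# Size each strategy by its recent net PnL; a strategy "dies" when its
# allocation reaches zero. Step size and base size are arbitrary.

def allocation(base_contracts, recent_pnls, step=1000.0):
    """Add/remove one contract per `step` dollars of recent net PnL."""
    adj = int(sum(recent_pnls) // step)
    return max(0, base_contracts + adj)   # 0 contracts == strategy retired

print(allocation(2, [600.0, 900.0, -200.0]))   # performing well -> 3
print(allocation(2, [-1500.0, -800.0]))        # bleeding -> 0, dead
```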
I do have a real problem with optimization in this context, though. All optimization does is tune the model more finely to past conditions that we know will not hold tomorrow. Most likely you are really optimizing the brittleness of the model.
What I want to see from my models is that the strategy does not completely fall apart when used on another highly correlated time series.
If something blows the doors off ES but gets killed on SPY, it is far more likely that your model is simply overfitted to one random realization of the ES time series than that it is capturing and exploiting a structure of ES absent from SPY, given that the two series share the same generating process. Then I want to see SPY vs QQQ, NQ vs ES, YM vs SPY, etc.
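A toy version of that robustness check, with synthetic data standing in for the real ES/SPY series (a real test would of course use actual market data):

```python
# Run one toy signal on two series that share a generating process and
# compare the results. Large divergence would point to overfitting.
import random

random.seed(7)
common = [random.gauss(0, 1) for _ in range(500)]        # shared process
es  = [c + random.gauss(0, 0.2) for c in common]         # stand-in for ES
spy = [c + random.gauss(0, 0.2) for c in common]         # stand-in for SPY

def strategy_pnl(returns):
    """Toy momentum rule: hold the next bar if the last bar was up."""
    return sum(r for prev, r in zip(returns, returns[1:]) if prev > 0)

print(f"ES-like PnL:  {strategy_pnl(es):.2f}")
print(f"SPY-like PnL: {strategy_pnl(spy):.2f}")
```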
You also need to look, as a baseline, at the profit/loss if you had just gone long SPY and held since 2011, then normalize for the leverage in ES. If you are not beating that, what is the point of trading? A system is not good because it is profitable; a trading system is good because it beats buy and hold over the same period.
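The baseline check itself is nearly a one-liner; the prices and the system's return below are placeholders:

```python
# Compare the system's return with buy-and-hold over the same window.
prices = [126.0, 131.5, 128.0, 145.0, 160.0]   # stand-in for SPY closes
system_return_pct = 18.0                       # hypothetical system return, %

buy_and_hold_pct = (prices[-1] / prices[0] - 1) * 100
print(f"buy and hold: {buy_and_hold_pct:.1f}%, system: {system_return_pct:.1f}%")
print("beats baseline" if system_return_pct > buy_and_hold_pct
      else "does not beat baseline")
```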
Don't forget, too, that a purely random bad strategy is going to land somewhere between a 40% and 60% win rate. I would love to find a strategy that loses 95% of the time, so I could just flip it. I agree with Ozquant's 65% rule of thumb, although I am OK with 60%. You need to give yourself a model error/noise cushion to stay above pure randomness.
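To put numbers on that cushion: two-sigma binomial bounds show how far a pure coin-flip strategy's observed win rate can drift from 50% by chance alone:

```python
# Standard error of a coin-flip win rate at various trade counts.
import math

for n in (50, 200, 1000):
    sigma = math.sqrt(0.25 / n)            # std error of win rate at p = 0.5
    lo, hi = 0.5 - 2 * sigma, 0.5 + 2 * sigma
    print(f"{n:5d} trades: random win rate roughly {lo:.0%}..{hi:.0%}")
```

At 50 trades a strategy with no edge at all can show a win rate in the low 60s, which is exactly why a cushion above 60-65% on a decent sample size matters.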
This. Building a strategy that optimizes well and gives you a great backtest is easy (and I agree with the previous posters, your result is not yet a great backtest). Building an edge that tests well on out-of-sample data using a walk-forward test is difficult. And a test that performs well on live data (in sim) is the final step before unleashing real money.
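A walk-forward test is just a rolling pair of optimize/test windows; a minimal layout sketch, with arbitrary window sizes:

```python
# Yield (train, test) index ranges that roll forward through the data.
def walk_forward_windows(n_bars, train=500, test=100):
    start = 0
    while start + train + test <= n_bars:
        yield (range(start, start + train),
               range(start + train, start + train + test))
        start += test

for tr, te in walk_forward_windows(800):
    print(f"optimize on bars {tr.start}..{tr.stop - 1}, "
          f"test on bars {te.start}..{te.stop - 1}")
```

Only the stitched-together test slices count as the out-of-sample result; the training windows have already been mined.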
I just finished reading Kevin's book a few weeks ago. It definitely put things in perspective and answered many of my questions.
I'm finding this to be very, very true. Since my original post I've increased backtest profitability and win rate substantially. However, as soon as I move to out-of-sample data it falls apart. Meanwhile, I've been live sim trading this strategy since the beginning of the year (I figured, why not?) and it's doing extremely well. It's infuriating and fascinating all at the same time.
I'm still convinced that my entry signal can be reliable. The two things I'm really struggling with are how to scale my entry parameters to match market volatility and how to let my winning trades run. I'm also still baffled as to why this strategy seems to hate short trades. If I can get these figured out, maybe I'll have something.
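One common way to scale entry parameters with volatility (an assumption, not necessarily the right fix here) is to express thresholds in ATR multiples rather than fixed points, so the same rule widens and narrows with the market's range:

```python
# Simple ATR and an ATR-scaled entry offset. Bars and multiplier are made up.
def atr(highs, lows, closes, period=14):
    """Average true range over the last `period` bars."""
    trs = [max(h - l, abs(h - pc), abs(l - pc))
           for h, l, pc in zip(highs[1:], lows[1:], closes)]
    return sum(trs[-period:]) / min(period, len(trs))

highs  = [101, 103, 102, 105, 104, 107]
lows   = [ 99, 100, 100, 102, 101, 104]
closes = [100, 102, 101, 104, 103, 106]

entry_offset = 0.5 * atr(highs, lows, closes)  # e.g. breakout = close + 0.5 * ATR
print(f"ATR-scaled entry offset: {entry_offset:.2f} points")
```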
It wasn't clear to me if you are still working on the same basic strategy, or whether you have moved on.
But just in case: if you test the in-sample against the OOS, modify, and re-test, then the OOS effectively becomes in-sample. You should only peek at the OOS once. If it doesn't work, start again from scratch.
The advantage of 'algos' is that you can have a lot of them, which keeps the individual bet size low, which in turn lowers max drawdown. Since you need a lot of them, you need to develop a process so you can churn them out, and you'll be relatively indifferent to whether any one of them works. It should be expected that most won't work. Keep in mind that you can get decent OOS results that are in reality still random noise; that's another reason you want multiple systems.
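A quick simulation of why that works: splitting the same risk budget across more independent systems leaves total PnL roughly unchanged while shrinking max drawdown. All parameters here are arbitrary:

```python
# Monte Carlo: N independent systems, each with a small edge, sharing one
# risk budget. Watch max drawdown fall as N grows.
import random

def max_dd(equity):
    peak, worst = 0.0, 0.0
    for x in equity:
        peak = max(peak, x)
        worst = max(worst, peak - x)
    return worst

random.seed(1)
for n_systems in (1, 4, 16):
    daily = [sum(random.gauss(0.02, 1.0) / n_systems for _ in range(n_systems))
             for _ in range(1000)]
    equity, total = [], 0.0
    for d in daily:
        total += d
        equity.append(total)
    print(f"{n_systems:2d} systems: PnL {total:6.1f}, max DD {max_dd(equity):5.1f}")
```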
That PnL is obviously overfitted. It is not reasonable for a system to 20-bag and have a lifespan that long. Of course, falling apart out of sample is a classic symptom of overfitting.
IMO testing back to 2011 just makes no sense in general.
At most, I would use something like whatever tick history your data provider back fills; that is your DB. View it as a rolling window that stale data falls out of. To me that kills two birds with one stone as far as data curation and dumping stale data.
If you start going back longer, IMO you need some kind of regime-switching classifier.
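A regime classifier can be as simple as tagging bars by rolling realized volatility and only fitting or trading within the matching regime; a bare-bones stand-in (window and threshold are arbitrary):

```python
# Tag each bar as a high- or low-volatility regime from rolling realized vol.
def vol_regimes(returns, window=20, threshold=1.5):
    regimes = []
    for i in range(len(returns)):
        chunk = returns[max(0, i - window + 1): i + 1]
        vol = (sum(r * r for r in chunk) / len(chunk)) ** 0.5
        regimes.append("high" if vol > threshold else "low")
    return regimes

rets = [0.2, -0.3, 0.1, 2.5, -3.0, 2.8, 0.2, -0.1]
print(list(zip(rets, vol_regimes(rets, window=3))))
```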