I have never used Sharkindicators, and the last time I checked, my understanding was that it could not be used for optimizations, which would be a problem in many situations. Maybe that has changed since then.
I also do not use Ninja for strategy development, so it is not an option for me.
If you do not like coding, life does get tougher, no matter what platform you use.
Right now, I am using a visual tool (Trade View) to create code for MT4. It is pretty slick, but even so, I find myself visually "coding" - I am still figuring out logic blocks (If...then), branches, etc.
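To illustrate (in Python rather than MQL4, and with invented names and conditions), the blocks a visual builder snaps together are the same if/then branches you would otherwise type by hand:

```python
# Hypothetical example of the logic a visual builder generates under the
# hood - still plain conditional branches, whatever the drag-and-drop UI shows.

def entry_signal(close, sma20, rsi14):
    """Return 'buy', 'sell', or None based on simple made-up conditions."""
    if close > sma20 and rsi14 < 30:   # "if...then" block: uptrend + oversold
        return "buy"
    if close < sma20 and rsi14 > 70:   # branch: downtrend + overbought
        return "sell"
    return None                        # no setup on this bar

print(entry_signal(close=101.0, sma20=100.0, rsi14=25.0))  # buy
```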
For backtesting, I do not use Ninja, so I cannot vouch for its reliability. I will say that, regardless of the platform, it is easy to trick and cheat the backtest engine into giving ridiculous results.
What do you mean exactly? I thought Tradingview was a web platform only. Do you mean that you can use it to generate MetaTrader 4-compatible code? Thanks
Thanks for the question. I primarily use Tradestation, so I incorporate their backtesting module in what I do. If you are using MT4, you could use their Strategy Tester.
Two keys:
1. Whatever software you use to help you test, you should know it well enough to fool it. That is, can you create a strategy that exploits the strategy engine and gives ridiculous results? If you know ways to fool it, you are less likely to be fooled by unrealistic results. Limit order touch fills are a good example in most software platforms (see the sketch after this list).
2. Running through a platform's strategy test engine is only a part of what I do. I also have quite a few other steps to help validate a strategy. You can likely find some info in old futures.io webinars I have done.
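As a concrete (hypothetical) illustration of the touch-fill problem, in Python with made-up bar data: an optimistic engine fills a resting limit order the moment its price is touched, while realistic fills usually require price to trade through the limit. Comparing the two rules on the same data shows how the optimistic one can be exploited:

```python
# Invented OHLC bars: (open, high, low, close).
bars = [(100.0, 101.0, 99.0, 100.5),
        (100.5, 102.0, 99.8, 101.5),
        (101.5, 101.8, 100.2, 100.4)]

TICK = 0.1

def naive_fill(limit, low):
    # Optimistic rule: a mere touch of the limit price counts as a fill.
    return low <= limit

def strict_fill(limit, low):
    # Conservative rule: price must trade through the limit by one tick.
    return low <= limit - TICK

for o, h, low, c in bars:
    limit = low  # exploit: rest a buy exactly at what becomes the bar's low
    print(f"limit {limit:.1f}: naive fill={naive_fill(limit, low)}, "
          f"strict fill={strict_fill(limit, low)}")
```

Under the touch-fill rule, this "strategy" buys the exact low of every bar and looks brilliant in the report; under the stricter rule it never fills at all. Knowing which rule your platform uses tells you how much to distrust limit-order results.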
Assume that when testing in-sample you find something that looks good. You then decide to try an additional filter, which drops the sample size but turns it into something much better (and the filter works across a range of nearest-neighbour values - i.e. it's not a complete fluke).
Would you cross your fingers and hope this extra filter works out of sample?
Would you test both out of sample, and trade whichever stacked up best?
Or would you test only the system with more signals, fewer parameters/filters, and fine-looking metrics - with a view to looking at the out-of-sample data only once?
My development process is different from the process you describe, so I can't say I'd do any of the things you describe.
Here is what I do, in relation to your questions:
1. If during my initial testing I find something that passes my criteria, I usually stop there and move to the next step. I don't add filters, etc., in the hope of making it even better.
2. If during my initial testing I find something that does NOT pass my criteria, I usually add filters, etc., and see if I can make it pass.
So, the general philosophy I have is: make the strategy as simple as possible, but not so simple it does not work.
Then, when I move on to the next step, walk-forward (out-of-sample) testing, it is really a one-shot deal. The strategy either passes, or it does not. Running the OOS test, not liking the results, going back and tweaking the strategy, then re-examining the OOS results is very dangerous. I try to avoid this whenever possible.
One more thing: if you run two OOS tests and pick the best one, you've just optimized on your out-of-sample data. Generally, not a good thing to do.
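A minimal, self-contained sketch of the one-shot idea (the P&L numbers are randomly generated and the 0.7 split is arbitrary): split the data once, do all tuning in-sample, then look at the out-of-sample segment a single time as a pass/fail gate:

```python
import random

random.seed(1)
# Invented daily P&L values standing in for a strategy's results.
daily_pnl = [random.gauss(0.05, 1.0) for _ in range(500)]

cut = int(len(daily_pnl) * 0.7)     # split once, up front
in_sample = daily_pnl[:cut]         # all tuning happens against this
out_of_sample = daily_pnl[cut:]     # touched exactly once, at the end

# ... develop, filter, and optimize using in_sample only ...

oos_total = sum(out_of_sample)
# One look: pass or fail. Tweaking the strategy after seeing this number
# quietly converts the OOS segment into more in-sample data.
print("pass" if oos_total > 0 else "fail", f"(OOS P&L: {oos_total:.2f})")
```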
It's interesting to evaluate some of the algo system data out there and the numbers those systems produce. As a gross generalization, a typical system might have a good JAN & FEB, then be not so great the next two months, but back in the black the months thereafter. Stepping back and taking a yearly view, the overall results might average, say, a 19% return. Well, that's not too shabby compared to investing in stocks, bonds, mutual funds, etc.
However, from a manual trader's point of view those two down months would be cause to stop trading altogether and begin some serious self-evaluation, most likely heavily dosed with a prescription of bourbon and water.
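To put rough (invented) numbers on that tension: a year with two red months can still compound to roughly the 19% figure above, so the monthly view and the yearly view are judging the same equity curve very differently:

```python
# Hypothetical monthly returns: good JAN & FEB, two down months, then green.
monthly = [0.05, 0.04, -0.03, -0.02, 0.02, 0.02,
           0.02, 0.02, 0.02, 0.01, 0.02, 0.01]

equity = 1.0
for r in monthly:
    equity *= 1 + r                        # compound month by month
print(f"annual return: {equity - 1:.1%}")  # ~19% despite MAR/APR losses
```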
Which raises the question: what are your expectations? From my own perspective, I expect one thing from my manual trading and quite another from an algo system. Why then do I keep trying to develop automated systems that mimic my manual trading?
Aren't they two completely different animals, with two different sets of expectations?
Thanks for the comment. If one of your goals with an algo is "make money every day/week/month/year" you can simply make that a requirement of anything you build, just like you'd have goals for certain returns, drawdowns, trades per year, etc.
Of course, you may not be able to hit that goal, or any of your other goals, with your strategy. And many times the goals will conflict, forcing you to choose one over the other (for example, accepting some losing months in return for some big winning months).
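A minimal sketch of treating goals as hard requirements (all threshold values here are invented): "make money every month" becomes just one more pass/fail test next to return, drawdown, and trade-count targets, which makes the conflicts between goals explicit:

```python
# Invented goal thresholds - substitute your own.
GOALS = {
    "min_annual_return": 0.15,    # at least 15% per year
    "max_drawdown": 0.10,         # drawdown no worse than 10%
    "min_trades_per_year": 50,    # enough trades to be meaningful
    "max_losing_months": 0,       # "make money every month" as a hard rule
}

def meets_goals(stats):
    """Pass/fail: every goal must hold, so goals can veto each other."""
    return (stats["annual_return"] >= GOALS["min_annual_return"]
            and stats["max_drawdown"] <= GOALS["max_drawdown"]
            and stats["trades_per_year"] >= GOALS["min_trades_per_year"]
            and stats["losing_months"] <= GOALS["max_losing_months"])

# A strong year overall still fails because of two losing months.
print(meets_goals({"annual_return": 0.19, "max_drawdown": 0.08,
                   "trades_per_year": 120, "losing_months": 2}))  # False
```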