Welcome to NexusFi: a trading community with over 150,000 members.
How much money for faster optimization backtesting? Willing to pay.
I am backtesting algos for an edge with Strategy Analyzer. Below is my current optimization time remaining. I have also attached my existing VPS hardware metrics.
I have $70,000 to invest in algos. Is there any way on earth I can get optimization results faster (in minutes), so I do not spend my entire life seeking an edge to make money running algos?
Is there anyone out there I can pay to help me speed up optimizations? Please let me know your fee and I will pay you today.
Is there a server I can buy to speed up optimizations?
Thank you so much for all the help.
...there are SO many ways to address this, but you might want to start by "throwing hardware/cores at it". AMD Ryzen Threadripper CPUs can be had with 64 cores / 128 threads, but those aren't cheap. If you're tied to cloud-based only, you can throw a lot of CPU horsepower there as well. Neither option is "cheap", but you didn't divulge exactly what you're trying to optimize (length/size of data, number of parameters being optimized, etc.).
I use Multicharts for my optimization, and there is an option for genetic algorithm optimization. It's a way to genetically pick the best combinations of inputs. For instance, I may have 1 million exhaustive optimization combinations; genetic optimization may reduce this to 5,000.
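The idea behind genetic optimization can be sketched in a few lines. This is a toy illustration, not Multicharts' actual optimizer: the `fitness` function here is a stand-in for a backtest score, and the parameter ranges are made up.

```python
import random

# Hypothetical fitness: pretend backtest score for (fast, slow) MA lengths.
# For illustration, the "best" settings are fast=12, slow=48.
def fitness(params):
    fast, slow = params
    return -((fast - 12) ** 2 + (slow - 48) ** 2)

def random_individual():
    return (random.randint(2, 50), random.randint(20, 200))

def mutate(ind):
    fast, slow = ind
    return (max(2, fast + random.randint(-3, 3)),
            max(20, slow + random.randint(-10, 10)))

def crossover(a, b):
    # Take one parameter from each parent.
    return (a[0], b[1])

def genetic_search(generations=40, pop_size=30, elite=5):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]  # keep the best, breed the rest from them
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - elite)]
        pop = parents + children
    return max(pop, key=fitness)

random.seed(1)
best = genetic_search()
# ~40 generations x 30 individuals = 1,200 evaluations,
# versus 49 x 181 = 8,869 for the full exhaustive grid.
```

The point is the same as in the post above: the search evaluates a small, evolving population instead of every combination, so the iteration count drops by orders of magnitude.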
Also, in Multicharts at least, optimization speed scales directly with core count. You can select how many cores to utilize during optimization (and change it on the fly, which is important for me). I see you have 2 cores available for optimization; I have 16 cores available, which means I can optimize 8 times faster than you, in Multicharts at least.
Another way to speed things up: for certain inputs, take larger steps initially as you home in on what may or may not affect performance. For example, optimizing a moving-average length from 100 to 200 in steps of 1 is not necessary. You may be able to get away with increments of 25 (and have better real-time performance), taking roughly 100 iterations down to 5.
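That coarse-then-fine idea in concrete terms, with a stand-in `score` function in place of whatever metric the optimizer reports:

```python
def score(ma_length):
    # Stand-in for a backtest metric; best value at 140 for illustration.
    return -abs(ma_length - 140)

# Coarse pass: 5 evaluations instead of 101.
coarse = range(100, 201, 25)           # 100, 125, 150, 175, 200
best_coarse = max(coarse, key=score)   # lands on 150

# Fine pass around the coarse winner: 11 more evaluations.
fine = range(best_coarse - 25, best_coarse + 26, 5)
best_fine = max(fine, key=score)       # refines to 140
```

Total: 16 evaluations instead of 101, and the fine pass still recovers the true optimum as long as the metric is reasonably smooth across the coarse steps.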
I can tell you how to reduce that time by a factor of 360: follow my advice and 48 hours becomes about 8 minutes.
I won't even charge you.
Reduce the number of iterations to no more than 100.
You have 36K iterations now; you are just curve-fitting and over-optimizing. You'll get a nice-looking backtest that falls apart in real time. The result with 100 iterations will look worse, for sure. But is that the goal here, a great-looking backtest?
Of course, most people will say "only 100 iterations? But I have 10 inputs to optimize!" or "I have 2 inputs with 190 values I want to test!" or some such nonsense. OK, so maybe do 200 iterations.
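The arithmetic behind the factor-of-360 claim checks out:

```python
iterations_now = 36_000
iterations_target = 100
speedup = iterations_now / iterations_target   # 360x fewer iterations

hours_now = 48
minutes_after = hours_now * 60 / speedup       # 2880 minutes / 360
```

So the 48-hour run shrinks to roughly 8 minutes purely by cutting the iteration count, before touching hardware at all.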
I appreciate your advice on keeping iterations to no more than 100, or even 200.
I think the premise behind this is:
1. Less is better. Adding too many inputs to the strategy leads to too much decision making.
2. Adding too many iterations leads to second-guessing after the test is complete. Then we go round and round again with "let me just try this and that."
3. If we get too deep into over-optimizing the strategy, we eventually forget what we are really doing, which is confirming whether the trading idea/method makes money with reasonable drawdown on in-sample data.
More processing power can help, but there are limits to that if you are using tick replay. When I'm testing a really large number of variations, I've found memory to be the bigger barrier. Even with 32 GB of RAM you can run into problems; once you've maxed out your memory, things slow to a crawl. I found that setting IncludeTradeHistoryInBacktest = false makes a big difference here. You can also try the generational optimization runs.
There are big problems with trying to do optimization in this manner anyway: you're just going to overfit. If it's a good idea, your model should just work; it shouldn't require insane amounts of fitting. So the optimization runs become more a way to run small experiments than a way to discover some magic settings. I usually only do a large number of iterations when I'm trying to thoroughly test a popular strategy for videos.
There are probably a lot of things that could be done with the Strategy Analyzer. I still have issues with some sort of memory leak from time to time. Expensive operations like tick replay could be saved to disk so that you only have to do that once, if not actually saving the live data. Ultimately, though, at that point you're probably better off diving deeper into the machine learning realm: get a system going in Python with TensorFlow or PyTorch, where you can use graphics cards to speed things up.
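The save-expensive-work-to-disk idea looks like this in plain Python. The cache file name and the stand-in for tick-replay reconstruction are hypothetical:

```python
import os
import pickle

CACHE_PATH = "tick_replay_cache.pkl"  # hypothetical cache file

def build_tick_series():
    # Stand-in for the expensive tick-replay reconstruction.
    return [(i, 100.0 + (i % 7) * 0.25) for i in range(100_000)]

def load_or_build():
    # The first run pays the full cost and writes the cache;
    # every later run just reads the pickle back from disk.
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH, "rb") as f:
            return pickle.load(f)
    data = build_tick_series()
    with open(CACHE_PATH, "wb") as f:
        pickle.dump(data, f)
    return data

ticks = load_or_build()
```

With the expensive step cached once, every optimization run afterward starts from the saved series instead of rebuilding it, which is where most of the tick-replay time goes.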