I've created a new Genetic Strategy Optimizer based loosely on Peter's GAOptimizer.
Installation Instructions:
- download the .ZIP file from the download link (above). NOTE: ignore the '1.09' in the filename - it's wrong, but i can't change it...
- in the main NT window, choose the "File->Utilities->Import NinjaScript" menu.
- when prompted, locate the downloaded .ZIP file
- in your strategy analyzer optimize dialog choose "PH Genetic" as the Optimizer.
- you can select whichever "Optimize on..." metric you like.
- when the "PH Genetic Options" dialog appears, hover over the controls to get a short description of what each option does.
- if you upgrade NT6.5, you'll need to re-install or uninstall the optimizer.
like most GA optimizers, it does tend to find a local maximum, although it'll attempt to kick out of one when it notices stability.
let me know if you have any problems/questions. i'll write some documentation soon, honest ;-)
- more info on what's new here.
- brief instructions on how to optimize Enum values in your strategies, here.
please run this first, else you'll get an error (you can also rename the file in Explorer):
C:\Users\trading\Documents\NinjaTrader 6.5\bin\Custom\Strategy>rename @Strategy.cs @Strategy.cs.bak
It looks nice, but what did you add/modify ?
Is the % of aliens a random element added during the selection process?
Anyway, I'll test it, thx Piersh.
it uses a different way of extracting the tested strategy's score. it uses a hack in base Strategy.Dispose to get around a design flaw in NT - they don't expose the value being optimized to the optimizer. this allows you to use whichever scoring metric you like without modification. i removed the logging/threshold stuff since it uses the same scoring as the analyzer. you can add this in your custom OptimizationTypes, if necessary.
it uses an integral representation of the parameter space that allows it to ensure that all children are unique - reducing the number of backtest runs that are required.
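The integral encoding can be sketched in Python (the optimizer itself is NinjaScript/C#; the mixed-radix scheme and names below are illustrative assumptions, not taken from the actual source):

```python
# Each parameter has a list of allowed values; a candidate is a tuple of
# indices into those lists. Encoding that tuple as a single integer (a
# mixed-radix number) makes the uniqueness check a cheap set lookup,
# so no parameter combination is ever backtested twice.
param_values = [
    list(range(5, 51)),    # e.g. fast MA period (hypothetical)
    list(range(20, 201)),  # e.g. slow MA period (hypothetical)
]
radices = [len(v) for v in param_values]

def encode(indices):
    """Fold a tuple of per-parameter indices into one integer key."""
    key = 0
    for idx, radix in zip(indices, radices):
        key = key * radix + idx
    return key

def decode(key):
    """Recover the per-parameter indices from an integer key."""
    indices = []
    for radix in reversed(radices):
        indices.append(key % radix)
        key //= radix
    return tuple(reversed(indices))

seen = set()

def make_unique_child(breed):
    """Keep breeding until we produce a candidate we haven't tested."""
    while True:
        child = breed()
        key = encode(child)
        if key not in seen:
            seen.add(key)
            return child
```

The encode/decode pair is lossless, so the optimizer can track tested candidates as plain integers and skip redundant backtest runs.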
it uses a fitness weighting function to more naturally choose the parents of the next generation. the chance that an individual gets to procreate is on a probability curve, not a boolean function. this increases the richness of the gene pool by allowing worse-performing individuals to procreate (albeit with lower probability than others)
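A minimal Python sketch of this kind of probability-curve selection (a roulette wheel over shifted scores; the function name and weighting details are assumptions, not the optimizer's actual code):

```python
import random

def select_parent(population, scores, rng=random):
    """Roulette-wheel selection: the probability of being picked is
    proportional to (shifted) fitness, so weak individuals still get
    to procreate occasionally, just less often than strong ones."""
    lo = min(scores)
    # Shift so the worst individual still gets a tiny positive weight.
    weights = [s - lo + 1e-9 for s in scores]
    total = sum(weights)
    pick = rng.uniform(0, total)
    acc = 0.0
    for individual, w in zip(population, weights):
        acc += w
        if pick <= acc:
            return individual
    return population[-1]
```

Over many draws, high scorers dominate but never monopolize the parent pool, which is exactly the "richness" effect described above.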
it makes an attempt to determine when a maximum has been found and the parameter space is stable (ie. subsequent generations are unlikely to produce any better individuals). it does this by checking the top 'stability size' % of the population - if they're the same between two generations, then it considers the parameter space to be stable. at this point it blows away (almost) the whole history and starts over with the top 'reset size' % of the previous lot, seeds the rest of the new generation with aliens and continues. if you find it's resetting too much, try increasing the 'stability size' percentage (a value of 100 means it'll never reset).
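The stability check and reset described above might look like this in Python (a sketch only; the dict-of-scores representation and helper names are hypothetical):

```python
def is_stable(prev_gen, curr_gen, stability_pct):
    """The parameter space is considered stable when the top
    `stability_pct` % of two consecutive generations (ranked by
    score) contain exactly the same individuals."""
    n = max(1, len(curr_gen) * stability_pct // 100)
    top = lambda gen: set(sorted(gen, key=gen.get, reverse=True)[:n])
    return top(prev_gen) == top(curr_gen)

def reset_population(gen, reset_pct, spawn_alien, size):
    """On stability, keep only the top `reset_pct` % and refill the
    rest of the population with randomly generated aliens."""
    keep = max(1, size * reset_pct // 100)
    survivors = sorted(gen, key=gen.get, reverse=True)[:keep]
    return survivors + [spawn_alien() for _ in range(size - keep)]
```

With `stability_pct` at 100, the top set is the whole population, which essentially never matches between generations, hence "a value of 100 means it'll never reset".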
between resets, it creates new generations from all previous generations. this is necessary since new children are now unique. i might add an aging function to reduce the probability of those crusty great-great-great-grandparents getting some action...
it adds aliens. aliens are completely randomly generated (none of their genes are derived in any way from other individuals). a certain configurable percentage of every new generation are aliens, the rest are normal (possibly mutated) children of the previous generations. I added this because I found that while it slows convergence somewhat, it allows a broader search of the parameter space, helping to avoid local maximums.
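Composing a generation from aliens plus bred children can be sketched like so (Python, illustrative names; the real optimizer is C#):

```python
import random

def next_generation(size, alien_pct, spawn_alien, breed, rng=random):
    """Build a new generation: `alien_pct` % are completely random
    aliens (no genes inherited from anyone), and the remainder are
    bred (and possibly mutated) children of previous generations."""
    n_aliens = size * alien_pct // 100
    gen = [spawn_alien() for _ in range(n_aliens)]
    gen += [breed() for _ in range(size - n_aliens)]
    rng.shuffle(gen)
    return gen
```

The alien fraction is the knob trading convergence speed against breadth of search: more aliens means slower convergence but a better chance of escaping a local maximum.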
i have found, at least with the strategy i'm using to test with and fine-grained parameter settings (2 billion individuals), that the parameter surface is very bumpy and the algorithm likes to get caught at the top of some local maximum without getting closer to the optimal set. it'll find a good solution quickly, but not necessarily the best.
Great work! I have a question - do you have a way of making this compatible with the NT walk forward function? Currently, every time it rolls forward, it stops and waits for user input to confirm the GO parameters
oops, sorry i hadn't tried with walkforward. i've uploaded a new version (to the original post above) which should help a little. please let me know if it still doesn't work for you.
thanks for the fix - runs ok except that after the first full run through, subsequent runs don't display the GO params form. It may be that the walk-forward flag persists, not being reset after the job is complete.
Piersh, thanks a lot for this. First time user of GO and am very very impressed. Could have saved me weeks in working with the NT built-in. Finally my computer can work with 6 parameters at once instead of just 3, and in a fraction of the time.
Yes indeed, I use Swig's neat trick in some of my strategies.
however, be warned that changing an MA (or MA combination) can have a drastic effect on the meaning of the other parameters, and the 'gene-splicing' of these other parameters makes little sense (in biological terms it's like cross-breeding completely different species). this results in a much bumpier optimization surface, and the optimizer is more likely to get stuck in a local maximum. you might find that different runs of the optimizer will give you completely different results. I'm thinking that some kind of covariance decomposition would help with this, but that's a whole different kettle of fish.
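For illustration, here is a minimal Python sketch of the int-to-MA-type mapping trick being discussed (the MA implementations below are simplified stand-ins, not the actual NinjaScript indicators, and the names are made up for this example):

```python
# The optimizer only handles numeric parameters, so an integer "gene"
# is mapped onto a list of moving-average implementations. Mutating
# that one int swaps the whole MA type in and out of the strategy.
def sma(values, period):
    """Simple moving average over the last `period` values."""
    return sum(values[-period:]) / period

def wma(values, period):
    """Linearly weighted moving average (newest value weighted most)."""
    window = values[-period:]
    weights = range(1, len(window) + 1)
    return sum(v * w for v, w in zip(window, weights)) / sum(weights)

MA_TYPES = [sma, wma]  # index 0 = simple, 1 = weighted, ...

def ma_value(ma_type, values, period):
    """`ma_type` is the optimizable int parameter selecting the MA."""
    return MA_TYPES[ma_type](values, period)
```

This also shows why splicing the other genes across MA types is risky: a `period` that works well for one entry in `MA_TYPES` can mean something quite different for another.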