I have always liked this thread, and the recent discussion got me thinking about it again. I am looking for a reasonable, if not statistically sound, way to apply the metrics to non-Bernoulli distributions by combining Monte Carlo simulation with the Risk Adjusted Optimal F template.
Would it not make sense to find the statistics of the low-performance outcomes from a Monte Carlo simulation and then run those numbers through the Risk Adjusted Optimal F?
While writing this post I have spent too much time implementing my idea to see if it was worthwhile...lol.
I used the results of the bot that I have been running this year as my starting point.
Take the win rate implied by the Low PnL on the Monte Carlo simulator, then apply that to the Risk Adjusted Optimal F template. In my example, the accuracy went from 63% to 48.5%. Granted, this only takes care of one variable, accuracy, with the win/loss ratio still potentially in flux.
W/L changes are much more subjective. I adjusted the average win down by the same percentage as the average loss, basically reducing my W/L ratio at the current accuracy until the profit reached the Low PnL on the Monte Carlo simulator. Again, in my example, the W/L ratio went from 1.65 to 0.95.
I then have 2 points that I can use for sensitivity analysis. One if I have adjustments in my system accuracy, the other for W/L ratio changes.
I made 2 copies of the Optimal F sheet. Put data tables below where I could feed in "Low Accuracy" and current W/L ratio to the model for one sheet and current accuracy and "Low W/L ratio" on the other. I can then evaluate impacts to accuracy given the "Low W/L" and impacts to the W/L assuming "Low Accuracy". Basically, conducting a sensitivity analysis on each variable given the "worst" situation for the other variable.
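The procedure above can be sketched in a few lines of Python. This is a minimal illustration, not the poster's spreadsheet: the trade statistics (WIN_RATE, AVG_WIN, AVG_LOSS, trade counts) are made-up placeholder numbers, and it assumes the simple two-outcome (Bernoulli) trade model used in the Optimal F sheet.

```python
import random

# Hypothetical trade statistics (placeholder values, not anyone's actual bot numbers)
WIN_RATE = 0.63
AVG_WIN = 165.0    # average win per trade, $
AVG_LOSS = 100.0   # average loss per trade, $  (W/L ratio = 1.65)
N_TRADES = 250     # trades per simulated year
N_SIMS = 5000      # Monte Carlo runs

random.seed(42)

def simulate_pnl():
    """Total PnL of one resampled Bernoulli trade sequence."""
    return sum(AVG_WIN if random.random() < WIN_RATE else -AVG_LOSS
               for _ in range(N_TRADES))

totals = sorted(simulate_pnl() for _ in range(N_SIMS))
low_pnl = totals[int(0.01 * N_SIMS)]   # ~1st-percentile ("Low PnL") outcome

# Back out the accuracy that would produce this PnL at the unchanged W/L ratio:
#   N_TRADES * (p * AVG_WIN - (1 - p) * AVG_LOSS) = low_pnl  =>  solve for p
low_accuracy = (low_pnl / N_TRADES + AVG_LOSS) / (AVG_WIN + AVG_LOSS)
print(f"Low PnL: {low_pnl:.0f}, implied accuracy: {low_accuracy:.1%}")
```

The "Low Accuracy" value can then be fed into one copy of the Optimal F sheet (with the current W/L ratio), and the analogous "Low W/L" value into the other, for the two-sided sensitivity analysis described above.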
To me this seemed a decent way to check system risk. In the end, my position sizing on the bot matches my analysis results. @Fat Tails is welcome to tell me how statistically unsound this is, so that I know how much statistical risk I have remaining.
First of all, I want to thank @Anagami for creating this thread and @Fat Tails for adding so much context and understanding to it.
In trying to take the content from the world of theory down to "how do I apply this?" I thought about some considerations I have, and would like to discuss them here.
Do all of the simulations here assume a mechanical approach, where the stop and target are always fixed? My assumption is yes, but I may be incorrect.
My issue with the mathematical content in this thread (besides the fact that my limited brain power prohibits me from fully grasping it after only one brief read) is that it assumes we are trading a "system": that we have a definite signal, that we take every one, and that the market is essentially the same at all times and on all days (at least that's what I think it assumes).
But as a day trader trading ES, with the trades I normally take, a 1:1 would absolutely kill me. I would have to modify my approach drastically to obtain the optimal F that FT is touting. My trades are based around the structure of the typical ES day, and taking the trade off simply at 1R when I know that it will yield more profit seems a bit unusual to me. In other words, I would be trading a system, and not the market.
I apologize for taking this from a great mathematical discussion to the gray areas of practicality, and for being less than eloquent with my words. But at the end of the day, we are still humans (though I think FT may be an android in disguise), and human behavior is not always a case of "optimal F". If humans were driven by mathematics, there would be no such thing as credit card debt, and we would not have sovereign debt problems on such a grand scale. What fool would spend more than he takes in? It's possibly the simplest math on the planet, yet humans fail to obey even that. So I think there is more to making money with a discretionary approach than accepting a 1:1 as optimal.
I trade a very mechanical system with a fixed 2:1 risk/reward. I generally take an average between 10 and 15 trades per day. At the end of every day I review all my signals and determine how the method would have done if I traded with a various risk/reward ratios. The 1:1 ratio generally always has more winners, but is also almost always less profitable. I have been doing this exercise for years just out of habit. I don't know how that fits in with the theoretical findings here, but this has been my experience with a real world mechanical method.
For the Risk Adjusted Optimal F spreadsheet, your assumption is correct. There are only 2 outcomes to a trade: win "x" or lose "y", giving you a ratio of x:y. Basically, you enter a trade, put on an ATM with a fixed target and fixed stop loss, and walk away.
I don't think it was said anywhere that a pure 1:1 is best. The outcome that Fat Tails noted was that higher accuracy and a lower W/L ratio allow more leverage than low accuracy and a high W/L ratio.
65% accurate and 1.5 : 1
50% accurate and 2.6 : 1
Those scenarios are almost equivalent in this fixed stop and fixed target world, with a slight edge going to the one with higher accuracy.
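The near-equivalence of the two scenarios can be checked directly with the Kelly formula for a two-outcome bet. A quick sketch (the formulas are standard; the two scenarios are the ones listed above):

```python
import math

def kelly_fraction(p, b):
    """Kelly optimal fraction for a Bernoulli bet:
    win b units with probability p, lose 1 unit with probability 1 - p."""
    return p - (1 - p) / b

def growth_rate(p, b, f):
    """Expected log growth per trade when betting fraction f of equity."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

for p, b in [(0.65, 1.5), (0.50, 2.6)]:
    f = kelly_fraction(p, b)
    g = growth_rate(p, b, f)
    print(f"p={p:.0%}, W/L={b}: optimal f={f:.3f}, growth/trade={g:.4f}")
```

The 65% / 1.5:1 scenario allows a larger fraction (about 0.42 vs. 0.31) and a slightly higher growth rate per trade, which matches the observation that the edge goes to the higher-accuracy scenario.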
For a discretionary guy, I think the only real takeaway is that a consistent trader should work on increasing accuracy before increasing the W/L ratio, to allow for an increase in leverage or a smoother equity curve. Really it is a balancing act between the two, trying to make sure you don't give up too much of one for the other.
@monpere Since your trading is directly applicable to this thread: when looking at the profitability of 1:1 versus 2:1, have you considered the impact of being able to increase leverage on the 1:1 due to its higher accuracy?
If I were trading 1 contract on both methods, and I now double the contracts on the 1:1 and also double the contracts on the 2:1, wouldn't the 2:1 still come out ahead? Or is there a piece of the puzzle I am missing?
@Luger: I think that we are talking about two different things here ....
(a) the risk that the trading system is correctly represented by the sample
(b) the risk derived from the variance of the sample
How good is the sample?
The sample trades are those backtested over the in-sample period. The question of whether the sample correctly represents the edge of the trading system is important. There is a systemic risk (a1) that the behaviour of the market participants has changed and the edge has since evaporated, which cannot be easily estimated with statistical tools. And then there is the risk that
(a2) the sample size is too small and represents too favourable an outcome
(a3) the trading system has been curve fitted to the sample
Your approach deals with (a2), if you analyze the low point of the Monte Carlo simulation. My approach ignores that type of risk. I assume that my sample has a sufficient size and that it correctly represents the edge of the system. So I focus on the risk (b), which comes with the variance of the sample.
Adjusting position size to risk of ruin
Example: I have an account of $100,000. I accept a risk of ruin of 1%. My definition of ruin is that the equity has dropped from $100,000 to $50,000. How many contracts should I trade to comply with the specified risk?
The risk of ruin is equivalent to a maximum drawdown combined with the probability that this drawdown is reached or exceeded.
What do I do now? I take the backtest of my trading system based on a single contract and run a Monte Carlo analysis producing 1,000 different equity curves. Then I look at the maximum drawdown that occurred during the in-sample period for each of them. I plot the distribution of the 1,000 drawdowns and take the lower 1st-percentile value, which is the worst remaining occurrence once I have eliminated the worst 10 drawdowns of the Monte Carlo analysis.
Now let us assume that the worst remaining occurrence produced a maximum drawdown of $12,500. Given my requirement of a 1% risk of ruin, defined as a drawdown from $100,000 to $50,000, I can leverage the system by trading 4 contracts ($50,000 / $12,500 = 4).
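The drawdown-percentile sizing above can be sketched as follows. This is a toy version: instead of real backtest trades it resamples from a made-up two-outcome distribution (WIN_RATE, AVG_WIN, AVG_LOSS are placeholder assumptions), but the percentile and sizing logic is the same.

```python
import random

random.seed(7)

# Hypothetical single-contract trade distribution (stand-in for a real backtest)
WIN_RATE, AVG_WIN, AVG_LOSS = 0.55, 400.0, 300.0
N_TRADES, N_SIMS = 500, 1000

RUIN_EQUITY_DROP = 50_000.0   # ruin = equity drops from $100k to $50k
RUIN_PROB = 0.01              # accepted risk of ruin

def max_drawdown():
    """Maximum peak-to-trough drawdown of one resampled equity curve."""
    equity = peak = dd = 0.0
    for _ in range(N_TRADES):
        equity += AVG_WIN if random.random() < WIN_RATE else -AVG_LOSS
        peak = max(peak, equity)
        dd = max(dd, peak - equity)
    return dd

dds = sorted(max_drawdown() for _ in range(N_SIMS))
# Worst remaining drawdown after discarding the worst 1% (10 of 1,000 runs)
dd_1pct = dds[-int(RUIN_PROB * N_SIMS) - 1]

contracts = int(RUIN_EQUITY_DROP // dd_1pct)
print(f"1st-percentile max drawdown: ${dd_1pct:,.0f} -> trade {contracts} contracts")
```

The contract count is just the tolerated equity drop divided by the single-contract drawdown at the chosen percentile, rounded down.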
I would not use the Optimal F sheet for anything other than Bernoulli distributions. I will have to check whether Ralph Vince has developed a more universal approach that can be used for this. The simple spreadsheet is probably not the best tool for position sizing in the real world.
TS sucks for certain capabilities, but for analysis like Monte Carlo, Walk Forward, etc., it's awesome.
TS's walk forward optimizer actually does a pretty good job and has built in rules for pass/fail for strategies. It rejects (fails) those that have a certain number of "runs" that aren't profitable or those that have an excessive drawdown, etc. It even gives you recommendations on the next optimization interval.
Interestingly, I wanted to discuss your comment about the in-sample period.
I find a lot of misconception about in-sample size. Contrary to popular belief, you can actually include TOO much data. If, for instance, the market has recently shifted significantly (e.g., a margin increase crushes volatility), then going further back with your in-sample analysis simply muddies the water.
Although it's helpful to try to find an in-sample period that includes multiple market structures and conditions, why go back any further than a couple of years on lower-time-frame strategies? On CL, for example, if you had the data, you could try to go back 35 years and you'd find a decade in there when oil hardly moved at all. Obviously, optimization or analysis including periods like that is going to sway your results, possibly in the wrong direction.
I find that it's best to find a happy medium of enough data to give you a trade size that's statistically significant (and I don't accept the college textbook n=30 value) and features a couple of recent relevant market structures, but doesn't give me so much history that it favors out of date/obsolete conditions.
The main purpose of the thread was not to promote 1:1 (or any other RR for that matter) or a mechanical approach. It was to simply see how trials and the capital curve emerge as one changes RR and the winning %. FT introduced position sizing into the picture, which is also paramount.
The Kelly formula assumes that the outcomes and probabilities are known, so it has certain inherent modelling limitations (particularly if we believe in 'let the profits run', as we cannot predict the outcome). In theory, it is actually impossible to fulfill the formula conditions, because the trade probabilities are never known with certainty.
However, you can still get a ballpark position sizing recommendation by averaging your winners, losers, and probabilities and plugging them in. It's not the same, but gives you some idea.
I should add that most people seem to prefer to position size less than Kelly suggests (say, half Kelly), as the capital curve can be a bit too rocky for comfort (giving up some gain for smoothness and minimizing the impact of unaccounted-for risks).
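The ballpark approach described above (averaged winners, losers, and win rate plugged into Kelly, then scaled back) can be sketched in a few lines. The input numbers here are hypothetical averages, not anyone's actual trade log:

```python
def kelly_from_stats(win_rate, avg_win, avg_loss):
    """Ballpark Kelly fraction from averaged trade statistics.
    b = avg_win / avg_loss ;  f* = p - (1 - p) / b
    This treats estimated averages as known quantities, which is
    exactly the modelling limitation noted above."""
    b = avg_win / avg_loss
    return win_rate - (1 - win_rate) / b

# Hypothetical averages pulled from a trade log
f_full = kelly_from_stats(0.55, 420.0, 300.0)
f_half = f_full / 2   # half Kelly: smoother equity curve, slower growth
print(f"full Kelly: {f_full:.1%}, half Kelly: {f_half:.1%}")
```

Since the inputs are noisy estimates rather than true probabilities, betting a fraction of full Kelly (half Kelly here) is a common hedge against the estimates being too optimistic.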