Yes, I also think the process of designing a strategy is very important. E.g. avoid curve fitting. I try to have as few rules as possible and I always focus on the KISS rule.
But for the statistical metrics/reports of my strategies, I also use more complex models to gain more insight into my risks.
For me, software tools are "helpers". I want to be able to focus on strategy design and development. I don't want to program the broker interface, the charts, etc. For these aspects, the tools save me months or even years of time, and for that I will pay up to $2,000 (and not more!) for a single software tool.
MultiCharts and @risk have saved me so much time and allowed me to actually implement my strategies. Without them, I would have a process plan and a concept, but wouldn't be able to apply them.
The overnight strategy uses 105 minute bars, and the US day session uses 60 minute bars.
Yes, you are absolutely correct that I use a quite simple version of Monte Carlo. You are also correct that the trades in my system do not have a normal distribution:
BUT, and here is where I may be wrong in my thinking, I am not making any assumptions about data normality regarding the input data or the output data.
Or am I?
Maybe you can help me answer that...
INPUT DATA:
For the input data (the daily trade results), as shown above, the data is not normal. But when I run the simulation, I only use the actual data as possible daily results. I do not assume the daily results could be different from what I input into the spreadsheet. I can see the value of the alternative approach (why should I assume future daily results can only be exactly what past daily results were? that is a very limiting assumption), but if I wanted to create a Monte Carlo model like that, then I'd have to assume some sort of distribution.
Example: I know the mean and standard deviation for my input daily results data. I could build a Monte Carlo simulator that would grab a random number from a normal curve generated from the aforementioned mean and standard deviation. If I did that, I would agree it would be incorrect - I would be assuming the data is normal, yet it clearly is not.
Can you comment on my use of input data? I do not think I am using a normal distribution anywhere. Am I making an assumption about data normality somehow that I do not realize?
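In sketch form, the resampling scheme described above (drawing only from the actual historical trades, with replacement) looks like this; Python instead of my VBA, and the daily results are made-up numbers for illustration:

```python
import random

# Hypothetical daily trade results standing in for the spreadsheet's
# "input data" -- the simulation only ever resamples these actual values,
# so no distributional (normality) assumption is made anywhere.
daily_results = [120.0, -80.0, 45.0, -200.0, 310.0, 15.0, -55.0, 90.0]

def simulate_year(results, trading_days=252, rng=random):
    """One Monte Carlo path: draw actual past results with replacement."""
    return [rng.choice(results) for _ in range(trading_days)]

path = simulate_year(daily_results)
total_return = sum(path)
```

Every simulated day is one of the historical values, never an interpolated or fitted one.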
OUTPUT DATA:
For the output data (the return and max drawdown over 1 year of simulated trades), the data is also decidedly not normal.
When I analyze this data, I use percentile functions. I look at things like "what is the probability that my return will be greater than X?" I use the non-normal distribution above to answer the query.
Can you comment on my use of output data? I do not think I am using a normal distribution anywhere. Am I making an assumption about data normality somehow that I do not realize?
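The percentile-style queries on the output data can be sketched like this (Python instead of VBA, toy numbers); the probability is read straight off the empirical results, with no curve fitted anywhere:

```python
import random

rng = random.Random(11)
# Toy "output data": total returns from 10,000 simulated years, each year
# built by resampling a handful of hypothetical daily results.
daily = [120, -80, 45, -200, 310]
sim_returns = [sum(rng.choice(daily) for _ in range(252))
               for _ in range(10_000)]

def prob_return_greater_than(returns, x):
    """P(return > x), read straight off the empirical (non-normal) data."""
    return sum(1 for r in returns if r > x) / len(returns)

p = prob_return_greater_than(sim_returns, 0)
```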
Looking back through the thread, I can see only one area where I assumed a normal distribution, although that was mainly for a visual look at the data, not for any Monte Carlo analysis.
I agree that the process is key. If you have no process, you have no repeatability, and that makes it really hard to create consistent systems.
I always wonder if my process is "right" or if there are better ways. @artesimo alluded to a different development process, and I hope he expands on how he does things.
I would dispute the "world class" moniker - I am just a retail guy trying to survive every day. "World class" is a goal worth striving for, of course!
I have no doubt this product (priced at $2,195 for 1 year) would provide a wealth of useful info and statistics. Back when I was in Quality Assurance management in aerospace, I recall one of my statisticians using this software for his work. He was savant-like (and now works at the world-renowned Cleveland Clinic).
If I gave you the raw data I use, and if we agree on 1 or more metrics to compare, would you be willing to run my data through your model?
I think it would be interesting to see the results - how a simple spreadsheet differs from a professional package.
Also, if I am presenting incorrect or misleading results, it would give me a solid reason to correct them.
Please PM me if you are willing, and we can proceed. I think this would be really interesting.
Ok, my bad. When I quickly went through this thread, I read "z-value", so I thought you used a normal distribution. I have checked your VBA code now. Indeed, you are not making the normal distribution assumption; you are using the actual historical discrete distribution.
Now, if we assume your historical results have predictive power for the future, then I would still prefer to use a fitted distribution, because other profit/loss outcomes are also possible, just as in the real world. Also, with a discrete distribution the results can be misleading, depending on how many trades you have made. With fewer trades, the results will be more extreme if you have several outliers in the discrete distribution used in your Monte Carlo simulation.
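To illustrate the outlier point with hypothetical numbers: in a discrete (resampled) distribution, a single large outlier is replayed at its full size with probability exactly 1/N on every draw, so a small trade sample containing outliers produces extreme simulated tails:

```python
import random

rng = random.Random(42)
# Nine ordinary trades plus one large outlier -- discrete resampling will
# replay the -2000 at full size, with probability exactly 1/10 per draw.
trades = [50, 40, -30, 60, -20, 55, -45, 35, 25, -2000]

def resample_total(trades, n_trades, rng):
    """Total P/L of one simulated run of n_trades resampled trades."""
    return sum(rng.choice(trades) for _ in range(n_trades))

totals = sorted(resample_total(trades, 50, rng) for _ in range(5_000))
worst_5pct = totals[len(totals) // 20]   # empirical 5th percentile
```

A fitted continuous distribution would instead spread the outlier's influence across a range of possible loss sizes.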
As for the random function in Excel: at least in academic circles, it is widely criticized (wrong implementation), at least up to Excel 2007. Google will tell you more. I don't know about newer versions, or whether the quality of that function is good enough for trading.
By the way, how many iterations do you use? 2,500? That value is quite low, because with each simulation you will get values that differ, maybe by more than you'd like. I use 100,000 iterations. With higher values, I don't see a significant difference.
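The iteration-count effect can be seen in a small experiment (Python sketch with toy normal draws; the exact numbers are only illustrative): estimate the same percentile repeatedly at each iteration count and compare the run-to-run spread.

```python
import random
import statistics

def estimate_p5(n, rng):
    """Estimate the 5th percentile from n simulated outcomes (toy draws)."""
    sample = sorted(rng.gauss(0.0, 1.0) for _ in range(n))
    return sample[n // 20]

rng = random.Random(7)
# Repeat each simulation 20 times and look at the run-to-run spread.
small_runs = [estimate_p5(2_500, rng) for _ in range(20)]
large_runs = [estimate_p5(100_000, rng) for _ in range(20)]

spread_small = statistics.stdev(small_runs)   # noticeably larger
spread_large = statistics.stdev(large_runs)   # much tighter
```

The Monte Carlo error of such estimates shrinks roughly with the square root of the iteration count, which is why 100,000 iterations give much more stable percentiles than 2,500.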
I'm also curious about the difference in result with your approach and mine. So just provide me with your raw data.
I will try to do it next week. (Will PM you later). Mainly I want to compare end value of equity/value at risk (x%-percentile values, value at risk for y trades) and expectancy. Currently, I'm building up my risk modelling with @risk for trading.
As for @risk: the price includes a non-time-limited (perpetual) version. The one year of maintenance means you also get major upgrades (not only minor bugfixes for the version you bought) free for one year. This is what I remember.
When I took a crash course in Monte Carlo simulation, I learned from a professor that the distribution of profit/loss in the financial markets is not normally distributed, but that the outcome (equity) is nearly normally distributed. So I'm wondering about your result.
Well, our comparison may shed more light on this issue.
Good to hear. It is always nice to have an expert (I consider you one, as you analyze risk as a profession) check my work.
I agree. This is one of those tradeoffs I made - complexity (and more power) versus simplicity. I chose simplicity, but I give up all the things you mention. It could very well be that my approach is too simple.
I agree. I don't know the impact, if any. I've tried some different random number generators over the years, and never noticed a big difference. But I realize random number generators are not truly 100% random. I might try a later version of Excel, to see if there is a difference measurable to me.
I agree again. I picked 2,500 iterations because the macro code (the way I wrote it) is terribly slow.
Great! I look forward to it.
A one time fee definitely makes it more palatable for retail traders.
Can you elaborate somewhat on what you mean with 'process', as applied to trading system development, Kevin? And perhaps give an example of how you approach the process of developing a strategy? (Though there is already a lot of information about that in this thread, so I'm not sure if that question is justified)
I'm asking since I suspect that you, coming from an aerospace background, may have a whole different definition of 'process' than I, from a social science background, have.
I should have pointed out that the chart you reference includes position sizing. That is why it has a non-normal shape.
Below is a histogram of 1 contract traded all the time. I also did 10,000 iterations to get smoother results.
As you predicted, it looks like a normal distribution.
The only difference is the spike at the lower end of the histogram. That is due to "risk of ruin" - when equity falls below a certain point, trading ceases. Without that real world restriction, the little spike would be gone.
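The stop-trading rule behind that spike is a one-line check in the simulation loop; a minimal sketch in Python with hypothetical trade numbers:

```python
import random

def equity_path(trades, start_equity, ruin_level, trading_days, rng):
    """Simulate one year of equity; trading ceases below ruin_level."""
    equity = start_equity
    for _ in range(trading_days):
        if equity < ruin_level:
            break                      # "risk of ruin": stop trading
        equity += rng.choice(trades)
    return equity

rng = random.Random(3)
trades = [150, -120, 80, -200, 60]     # hypothetical 1-contract results
finals = [equity_path(trades, 10_000, 5_000, 252, rng)
          for _ in range(2_000)]
# Ruined paths stop near the ruin level, so their final equities pile up
# there -- that pile-up is the small spike at the low end of the histogram.
```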
Note to all readers: If some of this discussion is confusing or overwhelming (normal curves, random number generators, etc.), PLEASE feel free to ask questions. My philosophy is that the only dumb question is the one you are afraid to ask. So ask away. I guarantee others have the same question(s) as you!
Yes, I have a defined process for developing a trading system. The basic flow chart is shown below. If you search nexusfi.com (formerly BMT) archives, you'll find a couple of webinars I did on the topic. It is a big topic, so much so that I could actually write a book on the subject.