"Yeah, I can't see the code you are posting, but just be aware that when you change the inputs you are actually creating an additional indicator that runs in parallel with the original one. So, that can easily get out of hand if you don't do some quantizing... like: instead of changing the SMA length each bar, only change it in length multiples of 5, or whatever."
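Richard's quantizing idea might look something like this (NinjaScript-style sketch; the adaptive-length calculation is a made-up placeholder, not from the thread):

```csharp
// Sketch only: snap a dynamically computed SMA length to multiples of 5,
// so NinjaTrader only ever caches a handful of SMA instances instead of
// spawning a new parallel indicator for every distinct length.
int rawLength = 23;                                 // stand-in for your adaptive logic
int quantized = Math.Max(5, (rawLength / 5) * 5);   // 23 -> 20, 57 -> 55, etc.
double value = SMA(quantized)[0];                   // at most a few cached SMAs ever exist
```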
Richard,
Is there a way to destroy the original one when I create the new one, so that only one instance will exist? Can I use Dispose() to do that?
Thanks to you we are making some great progress in this thread now. It's making a real difference.
I just want to make sure we're on the same page. Storing a reference to an existing DataSeries doesn't cost anything. Creating a new dataseries and filling it with values from another one is wasteful.
So, putting this in the init code:
... costs nothing, and leaves you with access to the DataSeries that the MACD is filling. On the other hand:
... stores copies of data that you already had access to, which is wasteful. In both cases, you end up with a DataSeries called macdavg that has the correct values in it, but the first version is practically free and the second one does a bunch of unnecessary work.
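The two patterns being contrasted might look roughly like this (NinjaScript-style sketch; the MACD parameters and the variable placement are assumptions, not the original snippets):

```csharp
// Version 1 (cheap): store a reference to the DataSeries that the MACD
// indicator is already maintaining. No values are copied.
private DataSeries macdavg;
protected override void OnStartUp()
{
    macdavg = MACD(12, 26, 9).Avg;      // reference only; practically free
}

// Version 2 (wasteful): allocate a fresh DataSeries and copy the same
// values into it bar by bar.
// In OnStartUp():   macdavg = new DataSeries(this);
// In OnBarUpdate(): macdavg.Set(MACD(12, 26, 9).Avg[0]);
```

Both leave you with a macdavg series holding the correct values, but the second does redundant allocation and copying on every bar.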
I don't think so, because ninja is caching them and you don't know who else is looking at that same indicator reference. In the same way, you can't just change an indicator's inputs on the fly, because someone else may be depending on the object continuing to compute the 5 SMA rather than the 10, or whatever.
The three easiest workarounds I know of are:
1. Quantize, so that you only ever create a few indicators to cover the input space and you don't have to worry about it.
2. Make an indicator that knows how to vary itself in the right way, rather than having the main indicator in control of adjusting the inputs (for bonus points, market it to people as an "adaptive indicator" for tons of cash to fund further development).
3. Suck the indicator code into a private function of the main indicator, so that you can call it with different inputs at will.
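The third workaround might be sketched like this (illustrative only; the helper name is made up):

```csharp
// Sketch: pull the SMA computation into a private helper, so it can be
// called with any period on the fly without instantiating indicator objects.
private double SmaAt(IDataSeries src, int period)
{
    int count = Math.Min(period, CurrentBar + 1);   // guard early bars
    double sum = 0;
    for (int i = 0; i < count; i++)
        sum += src[i];
    return sum / count;
}
```

Calling SmaAt(Close, 5) on one bar and SmaAt(Close, 10) on the next is then harmless, because no shared, cached indicator object is being mutated.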
Based on Richard's suggestion in Post 38, I changed DoubleStochasticsOptimized so that, when a new bar starts, the indicator itself checks whether the [Period + 1] value equals the minimum or maximum value of each of four DataSeries.
The external MIN or MAX function needs to be called only when that is true.
With this indicator, a 30-tick chart of the YM with 30 days of data refreshes in less than one second.
(Printing the value of CurrentBar to the Output window shows that this chart has 99,798 bars.) That seems pretty good.
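The check described above might be sketched as follows (names assumed; the actual code is in the thread's attachment):

```csharp
// Sketch: on each new bar, call the expensive MAX scan only when the value
// dropping out of the lookback window was the current maximum.
if (FirstTickOfBar)
{
    double expiring = series[Period + 1];           // value leaving the window
    if (expiring >= cachedMax)
        cachedMax = MAX(series, Period)[0];         // full rescan, rarely needed
    else
        cachedMax = Math.Max(cachedMax, series[0]); // O(1) update
    // ... mirror logic with <= and MIN for cachedMin ...
}
```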
In the Linux CLI world, we can easily time things to find which method is most CPU-efficient, just:
time <what you want to time>
It would be nice if a small block of code could be written to time these new optimized functions, so we can compare old vs. new and see what we are really accomplishing.
I'm all for writing efficient code and am just curious how much CPU time we're saving here. With the .NET framework, I think measuring CPU time is our only real option; measuring resource consumption such as memory isn't going to happen.
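The closest .NET equivalent to the Unix `time` command is System.Diagnostics.Stopwatch; something like this wrapped around the old and new calls would give comparable numbers (sketch only):

```csharp
// Sketch: time a block of code with Stopwatch, the .NET analogue of `time`.
// Requires: using System.Diagnostics;
Stopwatch sw = Stopwatch.StartNew();
for (int i = 0; i < 100000; i++)
{
    // ... the code under test, e.g. built-in MAX vs. the optimized version ...
}
sw.Stop();
Print("Elapsed: " + sw.ElapsedMilliseconds + " ms");   // NinjaScript Print()
```

Stopwatch measures wall-clock rather than CPU time, but for comparing two implementations under the same load it is usually good enough.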
Anecdotally, there is a definite difference in load time and workspace switching times since Zondor started his optimization project.
I am running 3 workspaces of 3 GOM Volume Ladders and 5 copies of BSV39 (both notoriously slow loaders). Load times for the attached chart went from 30+ seconds when hitting F5 to less than 8 seconds with 4 workspaces running. The fourth workspace has 12 GOM recorders: 2 each of YM, CL, and 6E, flat and binary, with 2 contract months each. Load time is less than 3 seconds if this is the only chart loaded in an empty workspace, and that includes data.
I had in mind more of a FastMAX and FastMIN indicator that you can use as replacements for the original MAX and MIN... so other indicators wouldn't need to know the difference. Like the attached.
I can run the new 89-max and 89-min on a chart with 2.8 million bars in 6 seconds, versus more like 19 seconds with the built-in max and min. It definitely makes a difference to avoid redundant computations.
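The core of a FastMAX-style drop-in replacement can be sketched like this (illustrative only; the actual attached indicator will differ):

```csharp
// Sketch: running maximum that rescans the window only when the expiring
// value was the maximum. Assumes it is called exactly once per closed bar.
private double runningMax = double.MinValue;

private double FastMax(IDataSeries src, int period)
{
    if (CurrentBar >= period && src[period] >= runningMax)
    {
        runningMax = double.MinValue;               // old max left the window:
        for (int i = 0; i < period; i++)            // rescan once
            runningMax = Math.Max(runningMax, src[i]);
    }
    else
        runningMax = Math.Max(runningMax, src[0]);  // O(1) in the common case
    return runningMax;
}
```

Since the expiring value is only rarely the extreme, the full rescan almost never fires, which is consistent with the 19-second-to-6-second improvement reported above.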
While we're on the topic of optimization, DoubleStochasticsOptimized has a line like this in it:
Whenever you see a line like this, it is a sign that you probably don't need p2 at all; just use pEMA.Value instead. Also, when using FastMAX and FastMIN, you can construct them once in OnStartUp() and avoid the per-bar cache lookup.