The LinReg indicator does not have much of a performance impact but this example shows how it can be made more efficient.
Calls to the external SUM class are eliminated; the SUM algorithm is moved into this indicator itself, which beats even a reusable instance of the external SUM class.
Variables that do not change intrabar are calculated only on FirstTickOfBar.
OnBarUpdate does not process redundant intrabar ticks.
Single-precision "float" variables reduce the arithmetic and memory load, since a float is half the size of a double.
If you want to use this, rename the file to @LinReg and REPLACE the existing file that has that name.
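For anyone curious what "moving the SUM algorithm into the indicator" buys you, here is a rough sketch of the idea in plain Java. Everything below is illustrative (RollingLinReg and update are made-up names, not NinjaTrader API): the point is that the window sums are updated incrementally per bar in O(1) instead of being re-summed by an external SUM class on every call.

```java
/**
 * Hypothetical sketch of the rolling-sum optimization, not NinjaTrader code.
 * x-values are the fixed indices 0..period-1, so sumX and sumX2 are constants
 * computed once; only sumY and sumXY need updating as each bar arrives.
 */
final class RollingLinReg {
    private final int period;
    private final double[] buf;  // circular buffer of the last `period` values
    private int head = 0;        // index of the oldest value once the window is full
    private int count = 0;
    private double sumY = 0.0;   // rolling sum of y over the window
    private double sumXY = 0.0;  // rolling sum of i * y_i, i = 0 (oldest) .. period-1
    private final double sumX;   // constant: 0 + 1 + ... + (period-1)
    private final double sumX2;  // constant: 0^2 + 1^2 + ... + (period-1)^2

    RollingLinReg(int period) {
        this.period = period;
        this.buf = new double[period];
        this.sumX = period * (period - 1) / 2.0;
        this.sumX2 = (period - 1) * period * (2.0 * period - 1) / 6.0;
    }

    /**
     * Feed one value; returns the least-squares line evaluated at the newest
     * bar, or NaN until the window has filled.
     */
    double update(double y) {
        if (count < period) {
            buf[count] = y;
            sumY += y;
            sumXY += count * y;
            count++;
            if (count < period) return Double.NaN;
        } else {
            double oldest = buf[head];
            buf[head] = y;
            head = (head + 1) % period;
            // Sliding the window shifts every x-index down by one and puts
            // the new value at index period-1, so both sums update in O(1):
            sumY += y - oldest;
            sumXY += period * y - sumY;
        }
        double n = period;
        double slope = (n * sumXY - sumX * sumY) / (n * sumX2 - sumX * sumX);
        double intercept = (sumY - slope * sumX) / n;
        return intercept + slope * (period - 1);  // value at the newest bar
    }
}

public class LinRegDemo {
    public static void main(String[] args) {
        RollingLinReg lr = new RollingLinReg(5);
        double v = Double.NaN;
        for (int x = 0; x < 7; x++) {
            v = lr.update(2.0 * x + 1.0);  // perfectly linear input y = 2x + 1
        }
        System.out.println(v);  // fit tracks the linear series exactly: prints 13.0
    }
}
```

On a linear input the regression endpoint reproduces the series exactly, which makes the incremental sums easy to sanity-check.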
"If we don't loosen up some money, this sucker is going down." -GW Bush, 2008
“Lack of proof that something is true does not prove that it is not true - when you want to believe.” -Humpty Dumpty, 2014
“The greatest shortcoming of the human race is our inability to understand the exponential function.” -Prof. Albert Bartlett
You actually lose performance by downcasting those doubles. C# double addition takes 0.11 ns more than float addition on a 2 GHz machine, all else being equal. So downcasting is more expensive for your algorithm. I would only do it if the memory bandwidth were comparable to the L2 cache.
Thanks @artemiso, I was wondering about that. The float downcasting is a sledgehammer approach.
I was brought up in the days of slide rules, so I am very aware that doing arithmetic with excessive non-significant precision is wasteful. But I am not sure there is any good way to curtail it. Math.Round, Trunc, and string formatting don't sound like good options either.
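For what it's worth, runtime cost isn't the only side of that tradeoff: a single-precision accumulator also drifts badly on long rolling sums, which is another reason the float sledgehammer is risky. A standalone demo (plain Java, nothing from the indicator):

```java
public class FloatDriftDemo {
    public static void main(String[] args) {
        float fsum = 0.0f;
        double dsum = 0.0;
        // Add 0.1 ten million times; the exact answer is 1,000,000.
        for (int i = 0; i < 10_000_000; i++) {
            fsum += 0.1f;
            dsum += 0.1;
        }
        // Each float addition is rounded to ~7 significant digits, so once
        // the running total is large, the per-add rounding error accumulates
        // to tens of thousands; the double total stays within a tiny
        // fraction of a unit of the true value.
        System.out.println(fsum);  // far from 1000000
        System.out.println(dsum);  // very close to 1000000
    }
}
```

The drift only shows up when the accumulator gets large relative to the summands, which is exactly the situation a long rolling SUM is in.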
Noticed that lots of people have looked at this post. You are probably admiring my necktie and wondering where you can get your own.
You're welcome. Unfortunately, modern processors are so fast that the primitives nearly all take 1 clock cycle. The extra precision is indeed wasteful, but it's here to stay.
@Richard closed it down to pursue other interests, but hopefully is still around on futures.io (formerly BMT).
Too bad, there was some great stuff posted there.