This is precisely my point about "prettiness" over functionality. As BigMike pointed out, Sierra is the ugly duckling; Ninja is no supermodel, but it's nothing to be ashamed of.
This is a fundamental part of a multi-broker product, in my opinion. It must be able to easily facilitate trades, perform position sizing, and select data providers across multiple brokers, not merely connect to them. This should be development priority one for a multi-broker product.
I'm hoping this is a simple programming fix.
I also want to move to automated position sizing by account/broker (and by asset class within the account), but you cannot extract account values by broker. This would be a valuable addition to the product as well, in my opinion, and I would appreciate NinjaTrader taking note of it.
I don't think NT can do anything to make GPGPU processing easier for its users. Achieving a speedup on a GPU lies mainly in code optimization: finding embarrassingly parallel tasks within a larger problem. However, parameter optimization logic is hard to parallelize. The same trading algorithm with a slightly different set of parameters has a different runtime from another over the same backtest period. Obvious examples: an optimization that decides between a 20- and a 30-period SMA, or one with several if's. There are two important characteristics of CUDA that do not tolerate these unsynchronized runs across backtests:
1. shared memory access
Since market data has to be accessed serially in time, you cannot simply erase a period of market data from memory until the batch of operations working on that specific period has been completed. Though I am guessing this is less of a problem nowadays with 6 GB Tesla cards being relatively cheap.
2. simultaneous execution across the 32 threads that form each warp
Meaning that a warp finishes only when its slowest thread does, so every thread's effective completion time is that of the slowest task.
Failing to account for these, you would be better off simply using the CPU. I'm not saying it's impossible to achieve a performance gain in optimization runs through the GPU. On the contrary, it is *definitely* possible - except that the work falls back on the user to write a strategy that cooperates with parallelized parameter-optimizer logic. This means you have to write your trading strategy specifically so that its parameter optimization runs can be sped up, e.g. with small bodies of if's. That is very restrictive. (Of course, you could also write the same strategy in two slightly different ways, one to take advantage of GPU-based parameter optimization and one simply for fast execution, but you will spend more time writing the two than your GPU saves.)
It is clear how password brute-forcing takes advantage of the GPU: each task has nearly the same runtime (or the code can easily be written so that this is the case), and even where it doesn't, the bodies of the if's are small and the progression in password length (and hence runtime) is straightforward, so branching is minimal and predictable.
In short - when it comes to backtesting logic, I don't think there's a lot they can do to take advantage of GPU processing.
At the strategy level, I suppose you could replace C# routines that involve heavy matrix manipulation with CUDA compute kernels. The driver functions that manage memory and launch the CUDA kernels must still be called from an unmanaged DLL - which some of you are familiar with doing in your NinjaScripts. However, each such call is expensive (a delay of roughly 10~30 instructions), and as long as NT strategies must be written for .NET, it's not immediately obvious how to eke out a performance gain from GPUs.
If speed is a concern, one easy thing NT can do is to allow compilation in the unsafe context (i.e. the /unsafe option in the command line).
From what I have seen, NinjaTrader strategies are generally coded using the same sloppy and inefficient techniques that are so popular with indicator programmers.
Seems to me that addressing this area would have major speed up benefits for back testing and optimization.
Because of the high resource demands, econo programming techniques should be aggressively applied to strategies. There could be a high return on the effort expended.
"If we don't loosen up some money, this sucker is going down." -GW Bush, 2008
“Lack of proof that something is true does not prove that it is not true - when you want to believe.” -Humpty Dumpty, 2014
“The greatest shortcoming of the human race is our inability to understand the exponential function.” Prof. Albert Bartlett
On a personal level, I am looking forward to volume profile on the Super DOM, I am sure many of us also appreciate the stored bid/ask as well.
On a strategic level, the shift of emphasis in NinjaScript toward optimal performance over ease of use is exactly the right decision at this point in the product's development path. Ray is making some good strategic decisions with NinjaTrader. It sounds like he is going to give us a fast and efficient platform from which we can leverage our collective talent and energy to expand and enhance its full potential. The more tools he can give us now, the further we can take this. The way I see it, Ray is not going to go down the mistaken path other vendors went down (mentioning no obvious 800-pound elephant names). I will take a jaguar over a hippo any day... control the bloat and keep it lean!
I agree with this, but the programmers of custom indicators need to do their part too.
Yes, but most of the performance gains in NT8 were out of our reach. They made back-end changes to memory optimization and garbage collection; we aren't talking about localizing MIN and MAX or minimizing repeated loops when data isn't changing, etc.
At least that was my impression.
Ray did mention and show a new keyword (I forget what it was now, but basically OnChange) which fires only when price actually changes, instead of on every tick. This will work well for most people not doing bid/ask analysis.
- Optimization of indicator code is helpful but it only contributed nominal amounts to the increased backtesting performance in NT8
- "Calculate on bar close" has been replaced with "Calculate" which has three options, "On bar close", "On price change" and "On each tick".