I would not expect to see 100% CPU usage regardless of increased multithreading. Moving data between threads is expensive, whatever mechanism is used, so there is often far less benefit than advertised. With much object-oriented software the hardware cores will frequently be stalled in the memory units, or even worse at the page pool if you have excessive data usage (worth checking, but less likely unless you have leak conditions).
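As a rough illustration of the point above, here is a minimal Java sketch (Java standing in for C#, since the mechanics are the same in any managed runtime) that processes the same "ticks" two ways: inline on the producing thread, and handed off one at a time to a worker thread through a queue. The class and method names are my own invention, not anything from NinjaTrader. Both paths produce the identical result, but the queued path pays a synchronization and data-movement cost per item, which for trivial per-tick work can easily exceed the work itself:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;

public class HandoffDemo {
    // Trivial per-tick "work" - far cheaper than a queue handoff.
    static long work(long v) { return v * 31 + 7; }

    public static void main(String[] args) throws Exception {
        final int N = 100_000;

        // Path 1: each tick processed inline on the producing thread.
        long inlineSum = 0;
        for (int i = 0; i < N; i++) inlineSum += work(i);

        // Path 2: each tick handed to a worker thread via a blocking queue.
        // Every put/take involves locking and cross-core data movement.
        BlockingQueue<Long> q = new LinkedBlockingQueue<>();
        ExecutorService ex = Executors.newSingleThreadExecutor();
        Future<Long> f = ex.submit(() -> {
            long s = 0;
            for (int i = 0; i < N; i++) s += work(q.take());
            return s;
        });
        for (long i = 0; i < N; i++) q.put(i);
        long queuedSum = f.get();
        ex.shutdown();

        // Identical answer either way - the extra thread bought correctness
        // of nothing new, only overhead, because the per-item work is tiny.
        System.out.println(inlineSum == queuedSum);
    }
}
```

If you time the two loops you will typically find the queued version slower for work this small; the handoff only pays off when each item carries substantial computation.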
Regardless of machine and software, it always comes back to asking whether the right type of chart is being employed for the job - e.g. do you need tick-level data for something that could just as easily be seen with a time-based series? Do you need to load 100 days when 4 would do? And so on. Maybe you already did that.
[As a comparison, I run multiple workspaces on several machines of varying ages, with truckloads of charts and symbols drawing from custom code under NT7, and average about 1.5% CPU pretty much all the time - I simply choose the level of analysis that is optimum and reasonable for each visualisation I want.]
It might be useful if you gave a better idea of the charts you use and the memory usage (e.g. working set size, etc.). If you use only Ninja indicators, then I would think their support department would be keen to know whether it is running properly or whether there are more optimal ways to use it.
When I start NT8 and all the charts are being opened, it uses 100% CPU. So why not during heavy-volume times?
I don't have many charts/indicators. When I get the problem, I reduce the number of charts and days loaded. It helps a bit.
As far as I understand, the problem happens on every CPU: one just has to increase the number of charts and indicators enough.
If data are moved between threads, shouldn't that demand even more CPU, pushing towards 100% usage?
I don't understand much of this multithreading topic.
Computing work is not just CPU cycles. Much of the work done in the memory units outside the CPU takes vastly more cycles to accomplish than a simple instruction; in those cases a core with nothing to do because it is waiting for data is simply idle. Adding extra threads makes no difference if the memory units are already at capacity, and that capacity limit exists at several levels (caches, memory controllers, bandwidth).
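The memory-stall point above can be seen with a small Java sketch of my own (illustrative only, not NinjaTrader code): summing the same array sequentially versus through a shuffled index order. Both loops execute the same number of additions and produce the identical result, yet the scattered version defeats the cache and prefetcher, so the core spends much of its time waiting on memory rather than computing. Timings are deliberately not asserted here since they vary by machine:

```java
import java.util.Random;

public class MemoryBoundDemo {
    public static void main(String[] args) {
        final int N = 1 << 20;              // 1M longs, ~8 MB: bigger than L1/L2 cache
        long[] data = new long[N];
        int[] order = new int[N];
        for (int i = 0; i < N; i++) { data[i] = i; order[i] = i; }

        // Fisher-Yates shuffle of the index array: same indices, random order.
        Random rnd = new Random(42);
        for (int i = N - 1; i > 0; i--) {
            int j = rnd.nextInt(i + 1);
            int t = order[i]; order[i] = order[j]; order[j] = t;
        }

        long seq = 0, scattered = 0;
        // Cache-friendly streaming read: prefetcher keeps the core fed.
        for (int i = 0; i < N; i++) seq += data[i];
        // Cache-hostile scattered read: same instruction count, many stalls.
        for (int i = 0; i < N; i++) scattered += data[order[i]];

        // Identical arithmetic result - the difference is purely memory behavior.
        System.out.println(seq == scattered);
    }
}
```

Wrap each loop in `System.nanoTime()` calls on your own machine and the scattered pass will normally come out several times slower, despite doing the same "work".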
Startup conditions may be different for many reasons - initial library building/optimising, and mostly loading historical data instead of live data (e.g. the ratio of drawn to off-chart bars). Again it depends on what type of charts you are using and the days/bars loaded - you haven't said.
I can write you code in NT7 that will use 100% of an 8-core machine, so long as the allocation of work and data to threads makes sense. Even so, I can also show that offloading OMD data, for example, into another thread achieves almost no benefit, because of the memory transfer cost versus the CPU cycles gained. In any case the market tick rate is still puny compared to the gigahertz of cycles available, except where grossly inefficient code has been employed.
System architecture is a complex issue, usually overlooked by proponents of OO software and parallel programming, who tend to focus on feature richness far above efficiency, and who use managed-memory languages like C# (which I favour for ease of deployment) that also come at a cost.
Working set size, soft faults, hard faults, etc. are metrics you can begin to look at in Task Manager / Resource Monitor to see how your system is running under various chart/load conditions.
In a single-(main-)thread product like NT7 you would not expect to see more than about 12.5% CPU usage on an 8-core machine (one core out of eight), except where other processes are contributing significantly or custom threads have been added.
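The 12.5% figure is just the arithmetic of one saturated thread on an 8-core machine, which a tiny sketch makes explicit (the core count of 8 is taken from the post above; substitute your own):

```java
public class SingleThreadCeiling {
    public static void main(String[] args) {
        int cores = 8;                      // assumed machine from the discussion
        // One fully busy thread can occupy at most one core, so total
        // CPU utilisation reported by Task Manager tops out at 1/cores.
        double maxPercent = 100.0 / cores;
        System.out.println(maxPercent);     // 12.5
    }
}
```

This is also why adding more cores does nothing for a single-threaded bottleneck: only a faster core, or restructured work, raises that ceiling.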
I do not yet have much NT8 experience (due to the early Alpha nature of the software I have avoided it), but 40 years of OS and embedded-systems architecture experience tells me exactly what reasonable assumptions I can make.
30% sounds typical to me, assuming the extra threads are sensibly restricted in function, bounded by Windows/WPF restrictions and developer choices, and modulated by data shifting at sensible levels, especially given that a large-scale, feature-rich OO/C# application is naturally a sparse-access, cache-busting architecture.
Would be good to see measured comparisons from others.
Do you use any custom made indicators?
I had a custom indicator that was very compute-intensive, and I would see the same behavior. But it was due to the indicator.
Do you have the same problem if you only have charts open without any indicators?
I see about 30% on my CPU. I have no experience producing videos, and it doesn't make much sense to produce one. Even if NT8 could reach 100% on some other CPU during heavy-volume times, it wouldn't solve the general problem, because if I increase the number of indicators even more, a delay would be inevitable.
One simply needs very fast CPUs, and not too many indicators in not too many charts.
That's what is becoming clear from this discussion.
That's not clear to me at all. I think your original post stated you had a 17-minute delay? A delay of that length is not due to the software being unable to keep up. I offered to help understand your issue and asked you to send us an email. We have not received one as of yet and still want to help.
Yes, I had a 17-minute delay on my first try on Wednesday and a 20-minute delay on my second try.
In my understanding, with a CPU having, say, 56 cores, each at least twice as fast as those in my 7-year-old CPU, I would expect that more volume could be processed, so either I'd have no delay at all or a much smaller one, right??
I already had a discussion with NT last year when I reported this problem. I was disappointed then, but hoped that NT would solve the problem. Yet in RC1 and RC2 there is still the same problem, and I am still disappointed.
So I wanted to hear whether buying a new PC could help solve the problem. I now think there is a general limit for NT8 that depends on the particular CPU.