Welcome to NexusFi: the best trading community on the planet, with over 150,000 members.
Btw Mike, care to share what you plan on getting hardware-wise?
I just got a 3930 with soon-to-be 32GB of RAM (16GB at the moment - the 8GB modules are hard to get here and I had those lying around). 6 cores, 12 with Hyper-Threading. Mild overclocking to 4GHz. Freaking monster.
Otherwise I am waiting for the new AMD chips to come out later this year (Piledriver - the current ones suck). So, just interested in where you plan on going.
I have been toying with a few ideas. One, I want to replace my aging primary workstation, which is an i7 920 running at 4GHz on air. Two, I was considering consolidating my three dedicated Xeon servers in Chicago into one monster box.
But I need to do some tests regarding how NT and MC behave with many cores at lower clocks/IPC vs. fewer cores at higher clocks/IPC. I've done tests on the futures.io (formerly BMT) server hardware and found that a faster clock rate and higher IPC beat having more cores.
So truth is, I don't know what I will be building. But I want to build something... it's been too long since my last crazy build.
The 3930 is an awesome chip. I was even considering a couple of 3930s in 1Us for futures.io (formerly BMT) equipment, as the faster clock frequency and IPC would benefit the latency of fetching forum pages. I'd rather use the Xeon, but at a $1000 price premium I think I can get over it, provided I can find a good IPMI solution. But for workstation use, I really, really like the 3930.
If you're running any multicore system backtesting in NT7, make sure you've cleaned up the strategy properly so it doesn't use any enums/bools as optimizable parameters - I learned the hard way a while ago that this limits the optimizer to a single core...
In the attached Task Manager and Resource Monitor screens, I'm running an optimization on my Perry strategy. It uses Perry's original 7 indicators (2 SMAs, 2 EMAs, ADX, DMI+, and DMI-) along with an additional 6 momentum calcs done on those indicators. The optimization covers 1 day and has 18 inputs with 5 to 10 steps each. It takes over 1 hour to run.
My EVGA SR-2 has 12 cores and 24 hyperthreads. As you can see, the optimization is using maybe 20 of the 24 hyperthreads. Each thread peaks at 80%, not 100%. So it's not CPU bound.
And for comparison's sake, you might try just the built-in MA Crossover strategy and post the same screenshots, to see if a simpler strategy gets better CPU utilization.
I think the reason it is not maxing out the CPU is all the DataSeries and objects in that 7-indicator strategy. If you really want to increase performance, I think you could eliminate the external SMA, EMA, ADX, and DMI calls and build them in as internal functions that return doubles, since you likely only need the current value, or perhaps the current and one prior value. That would be far less expensive than a DataSeries being called from a separate indicator.
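The "return a double instead of a full series" idea is language-agnostic, so here's a minimal sketch of an incremental SMA that keeps only a rolling window and a running sum - O(1) work per bar, no historical series object. (NT7 strategies are actually C#; Python and the `InlineSMA` name are just for illustration.)

```python
from collections import deque


class InlineSMA:
    """Incremental simple moving average: stores only the last
    `period` values plus a running sum, not a full data series."""

    def __init__(self, period):
        self.period = period
        self.window = deque()
        self.total = 0.0

    def update(self, price):
        """Feed one new bar's price; returns the current SMA value."""
        if len(self.window) == self.period:
            # drop the value falling out of the window
            self.total -= self.window.popleft()
        self.window.append(price)
        self.total += price
        return self.total / len(self.window)
```

The same pattern works for an EMA (one stored value) or Wilder-smoothed ADX/DMI (a handful of stored values), which is the cheap "current value only" approach suggested above.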
Are you using genetic or brute force exhaustive optimization?
Your arguments make no sense - I won't argue about whether it will perform slower, or whether data series are slower than building the calculations internally.
BUT: all of that is execution time; it would NOT show up as lower CPU utilization. Take Hyper-Threading as an example - a thread may have to wait on its sibling thread for resources, but it still properly shows up as 100% CPU utilization.
None of that explains Ninja's behavior. Ninja should be able to spin off enough parallel runs to keep the CPU at a stable 95% or higher. Whether or not a strategy runs internally optimally shows up in how many CPU cycles it uses per bar, not in how high the CPU utilization is.
Doing data loads (still fighting my NxCore library) I get the CPU stable at nearly 95% - I run 11 parallel threads on 12 cores. The reason is that... well... it keeps the UI more responsive. But those 11 cores run at a stable 100%.
Ninja could do the same.
Sorry, but the Task Manager graph showed a chip not even half working, not mostly working.