Just a few off-topic points on clock management in Windows, with (or without) NTP.
I ran a lot of tests with NTP, and my conclusion is that:
* It requires symmetric latency (not available on ADSL, because of the "A" for Asymmetric)
* It requires low latency jitter (not the case on a DSL network: if you make an NTP query while your PC is busy downloading, you'll be off by tens or hundreds of milliseconds)
* Because of its huge time constants, it also requires a stable clock speed, i.e. a constant temperature (not the case in a PC, because of room temperature and PC load)
I tried lots of builds and selected specific low-stratum NTP servers, but the NTP feedback loop never stabilized because of the non-ideal PC and network conditions.
So I bought a cheap GPS card and plugged it into my PC through the serial port (to get the GPS's very precise pulse-per-second signal, PPS).
To discipline my clock I built a C program that uses a very simple algorithm: if the PC clock is ahead of true time, put it in "slow" mode; otherwise put it in "fast" mode.
Before Windows 8 you had to use performance counters to build a precise clock estimate; since Windows 8 you can call GetSystemTimePreciseAsFileTime, which gives you the system time with 100-nanosecond resolution. It makes all the clock coding much easier.....
Of course it's much simpler to use an NTP daemon with the GPS clock, but I got fed up with NTP so I built my own...
Here's the result: you can get well below millisecond precision with a very basic GPS board (with PPS).
If you choose to go the hardware route, it's better to use a legacy serial port, which has direct IRQ access to the CPU, rather than going through the full serial-over-USB stack.
Thanks GOMI, that's exactly my current approach to this issue. In the meantime I've received everything I need to build my own stratum 1 GPS-based time source (with PPS, of course). To overcome Windows time granularity issues I access the time directly. The network path to my local device is very symmetric.
If your clock is correctly set up, you can use GomTimeMeasureV2 to watch the time distribution of tick arrivals.
3 tick modes : Last, BidAsk, All (Hotkey Space Bar)
This allows you to check whether the time distribution is constant depending on tick type.
2 visualisation modes : Basic and Convolution
The problem is that NT has 1-second timestamp resolution, so to find the real latency we use a sliding convolution window of 1 second and look for the window starting time that captures the maximum number of ticks. This gives the latency.
If network latency is constant, the convolution should be triangular.
In basic mode, the leftmost point is the minimum measured network lag (based on the data feed timestamp).
You can move the chart around using Ctrl+click and drag.
Here's an example
We can see that :
shortest lag on DAX is 200 ms
shortest lag on ES is 170 ms
mean network lag is 270 ms.
Bid/Ask ticks have greater lag than Price ticks, so you may need to throw all the delta stuff away ;-)
Thanks a lot @gomi for your contributions. I'll have to re-read your post a few times and look into the NT code. Until now I had no idea how to use NT for this issue, due to the lack of exchange timestamps. If I get access to the Rithmic API (pending for several days now) I want to compare my time to the R|API exchange timestamps. So it would be a big improvement if I could use a second data feed to make the measurements.
@gomi: I haven't grasped it yet. You said "we use a sliding convolution window of 1 second, and we try to see which starting time of the window captures the maximum ticks" -- is that related to the assumption that the most ticks arrive at the very beginning of a new second?
Suppose you have a constant latency of 150 ms.
Then the measured latency should be uniformly distributed between 0.150 s (ticks happening at mm:ss.000, timestamped mm:ss) and 1.150 s (ticks happening at mm:ss.999..., also timestamped mm:ss).
The idea is that you take the histogram of all measured latencies and accumulate it using a sliding window of 1 second length.
So you count how many ticks you have between
0 and 1
0.10 and 1.10
0.20 and 1.20
etc....
The window that catches the maximum of ticks will be the one that represents the latency.
Here's a chart; the blue rectangle is where the measured tick latencies are.