I appreciate your sharing your insight. I definitely don't have super speed at this point in the game, but my co-location (even still being at the retail level) is giving me a decent improvement over what I tested from my local machine. Here is a snapshot of some of the timestamps from my live trading.
By comparison, I tested canceling from my house (I don't have the greatest internet speed anyway) and it took around 100 to 300 milliseconds on average from Cancel Submitted to Cancelled. Co-located, I am getting this down to 10 milliseconds. So I would characterize 100-300 millisecond latency as nearly unworkable, but 10 milliseconds I can work with... though I have to run my cancellation logic more conservatively and look further out. In my world, if the volume on my side ever gets below 50, I should cancel. In your world, you could likely wait until this got near 1 and then pull the trigger.
So just out of curiosity, and I respect it if you don't want to reveal anything:
Are you using something like the R | Diamond API and co-locating with Rithmic, or are you at the direct market access level doing it fully custom? Any details you are willing to share in this regard, I would love to take notes on.
Obviously I can't afford to scale too heavily at this point, but if there is a different route you would recommend for someone in the < $500 per month co-location range, I would love to hear your thoughts. I am open to building my own trading platform via any number of APIs (I am a programmer by trade, so I wouldn't be too intimidated), but I haven't really seen a need for it yet. If you have any thoughts on this topic I am all ears.
A very simple strategy, which is not supposed to make money but to measure the delay in this process:
- order sent
- order received by the exchange
- order cancelled
- cancel received and acknowledged by NinjaTrader
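Assuming log lines in a NinjaTrader-like "timestamp state" format (the exact format here is hypothetical, not NT's actual log layout), the delay in each leg of that round trip can be computed with a few lines of Python:

```python
from datetime import datetime

# Hypothetical order-state log lines: "<timestamp> <state>"
log = [
    "12:34:56.789 Submitted",
    "12:34:56.801 Accepted",
    "12:34:56.805 CancelSubmitted",
    "12:34:56.817 Cancelled",
]

def parse(line):
    # Split each line into a timestamp and an order state
    ts, state = line.split(maxsplit=1)
    return datetime.strptime(ts, "%H:%M:%S.%f"), state

events = {state: ts for ts, state in map(parse, log)}

# Leg latencies in milliseconds
submit_to_accept = (events["Accepted"] - events["Submitted"]).total_seconds() * 1000
cancel_round_trip = (events["Cancelled"] - events["CancelSubmitted"]).total_seconds() * 1000
print(f"submit->accept: {submit_to_accept:.0f} ms, cancel round trip: {cancel_round_trip:.0f} ms")
```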
Just another 'timing' data point to put your 10 milliseconds in perspective. I use XTrader Pro and have servers at both Aurora and Cermak. I haven't looked at my actual message logs (will try tonight), but looking at my fill logs (for Aurora, trading CL) I can see that the Autospreader takes between 1 and 2 milliseconds to leg a spread. Meaning: I receive a fill message for one leg of the spread, XTrader reacts and sends the other leg, which in turn is filled within 1-2 milliseconds of the original fill. I think you'll find what I'm using is a pretty standard setup for professional/institutional traders, but to @artemiso I'm a dinosaur.
Also - less relevant to the conversation, but since you mentioned home desktop latency - I monitor my connection to my servers continually and for the last 240 pings/1 hour, round trip latency between my desktop (Houston) and Cermak (Chicago) is 31.2 milliseconds.
After considering a number of ways to go about this research, I determined that running this through any sort of retail trading platform doing countless SIM trades to model outcomes simply wouldn't cut it. So I took a different route.
I used NinjaTrader and wrote a script to extract the most granular output I could concerning level 1 events. I collected every trade with all resting volumes and all transacted volumes around each side. I did this over a few months' worth of data just to confirm the patterns and validate what I was seeing. I kept it simple: I used a 1 tick time frame and pulled all the resting volume data in the OnBarUpdate event handler. For transacted volume, I pulled this through the OnMarketData event handler. I could technically have gotten more granular with the resting volumes using OnMarketData, but as I was mostly only concerned with the starting volumes, I figured OnBarUpdate would be good enough.
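As a rough illustration, the per-event record described above might look like the following in Python. The field names and values are hypothetical, not the actual NinjaScript output:

```python
from dataclasses import dataclass

# Hypothetical mirror of the per-level record: resting volumes captured
# at the level change (via OnBarUpdate) plus transacted volumes
# accumulated while the level lives (via OnMarketData).
@dataclass
class LevelEvent:
    bid_price: float
    ask_price: float
    starting_bid_volume: int    # resting volume when the level first appears
    starting_ask_volume: int
    transacted_bid_volume: int  # total volume that traded at the bid
    transacted_ask_volume: int  # total volume that traded at the ask

# One illustrative (made-up) event:
ev = LevelEvent(2745.75, 2746.00, 180, 42, 95, 60)
print(ev)
```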
Let me back up and explain what exactly I am trying to do first, because this gets fairly technical fairly quickly. To lay the groundwork, you have to have a basic understanding of the following.
Strong Side / Weak Side: This may be common knowledge to some and new to others. My use of the terminology could also be throwing people off a bit (Strong/Weak... I just came up with this; I am not sure anyone else thinks of it this way).
The image above describes it, though. You have one side with resting orders 10-20 levels out and the other side with nothing. As you can't submit limit long orders above the market, or limit short orders below the market, your only option on this (weak) side is to wait until the new level is created and then submit. So the weak side will typically pick up new limit orders spontaneously, whereas the strong side will just bring down orders from levels further out as the market moves. What I have observed are the following two points that everyone should know.
1. The weak side will naturally have thinner volume, which considerably improves your chances of getting filled.
2. But that same thin volume means a much higher chance of this side breaking against you after you get filled.
So this double-edged sword creates a paradox: do you want to get the most possible fills/trades, or do you want to protect your P&L by not being on the side most likely to break? My quest in my recent project was to find a way to thread the proverbial needle of doom and have my cake and eat it too. Ambitious? Sure. Stupid? Probably. Advisable? No, I wouldn't recommend anyone try this. It is like playing Tetris on level 120 while drunk.
Now to the results of my research (Excel file enclosed)
I consolidated over 2,000 price level changes down to just a few specific variables that I could run simulations on in Excel.
1. Starting level change: was the newly created level up or down from the previous level? This identifies the weak side/strong side. If the new level moved up, it would be considered the weak side for submitting a buy order but the strong side for submitting a sell order. The image above, along with my previous explanation, will hopefully give this context.
2. Starting volume for both the bid and ask
3. Total transacted volume at the bid and ask separated out for each level.
4. Would I have gotten filled or not. With points 2-3, I can test the assumption that I submit my order immediately as the level change first occurs. To test whether I would have gotten filled, I just compare the total transacted volume to the starting volume: if TV > SV, I assume a fill. I work this for both the bid and ask independently.
5. The ending level. This is the second data point we need to understand what occurred. In this example we are always betting on the weak side. So if Starting Level = Up and Ending Level = Up, this produces a win, whereas Starting Level = Down and Ending Level = Up produces a loss. Any two "like" events is a win (Up, Up or Down, Down), but any alternating events is a loss (Up, Down or Down, Up).
6. The actual bet. The win/loss criteria just determine whether you get an entry up 1 tick or down 1 tick, based on how the first level clears after your starting level. This doesn't in and of itself translate to a real trading outcome; it simply measures how effectively the first sequence of your trade goes. If you were only aiming for 1 tick of profit or 1 tick of loss, this may be roughly a 75% proxy for it, but it is only meant to quantify whether you get a decent entry or not.
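The fill assumption in point 4 and the win/loss rule in point 5 boil down to two one-line tests. Here is an illustrative Python sketch (not the author's actual Excel model; the volumes are made up):

```python
def assume_fill(starting_volume, transacted_volume):
    # Point 4's fill model: the order is submitted the instant the level
    # appears, so it fills once total transacted volume exceeds the
    # volume that was already resting ahead of it (TV > SV).
    return transacted_volume > starting_volume

def weak_side_outcome(starting_level, ending_level):
    # Point 5: always betting the weak side, "like" moves win
    # (Up, Up or Down, Down); alternating moves lose (Up, Down or Down, Up).
    return "Win" if starting_level == ending_level else "Loss"

print(assume_fill(starting_volume=42, transacted_volume=60))  # True
print(weak_side_outcome("Up", "Up"))    # Win
print(weak_side_outcome("Down", "Up"))  # Loss
```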
So from these thousands of data points, I was able to easily run every possible permutation of different entry logic to see if I could find any way to beat the built-in bias against me. By the way, the built-in strong side/weak side bias in the ES is 75% favorable to the strong side, meaning that on every single price level, 75% of the time the weak side will break. This is something I think is critical for every trader to know. The primary reason is just the nature of the strong side/weak side dynamic I explained above. The other explanation I can think of is spoofing on the weak side. (I am doing a separate statistical analysis to quantify this in a future post.) For now, my initial hypothesis is that spoofing occurs at least twice as much on the weak side.
Now the conclusions from my testing: after testing every significant possible volume ratio and absolute volume setting, I have found several combinations that beat the natural odds. They don't do it by a huge margin, and truthfully, at best I can only cut this down to close to a 50%/50% win/loss ratio. But considering that with no filtering you would typically have a 25%/75% win/loss ratio, I would characterize this as a significant breakthrough.
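The kind of filter sweep described here can be sketched as a threshold scan over recorded level events. This is a hypothetical illustration: the event data, the ratio filter, and the threshold values are all invented, not the author's actual combinations:

```python
# Invented events: (starting_weak_volume, starting_strong_volume, weak_side_won)
events = [
    (10, 200, False), (35, 60, True), (15, 300, False),
    (40, 55, True), (12, 250, False), (30, 45, True),
]

def win_rate(events, max_ratio):
    # Keep only events where strong/weak resting volume <= max_ratio,
    # i.e. the weak side is not hopelessly outgunned, then compute the
    # weak-side win rate over the surviving trades.
    kept = [won for weak, strong, won in events if strong / weak <= max_ratio]
    return (sum(kept) / len(kept), len(kept)) if kept else (None, 0)

for r in (2, 5, 50):
    rate, n = win_rate(events, r)
    print(f"ratio <= {r}: win rate {rate}, trades {n}")
```

The trade-off the post describes shows up directly: a looser filter keeps more trades but drags the win rate back toward the unfiltered base rate.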
Now this isn't good enough in and of itself to constitute a true edge, but with proper canceling logic in place throughout the life of each trade, moving from Accepted > Working > Filled, you have a shot at getting the win/loss ratio higher. I would say 55% on the low side, up to as high as 75% if you nail every cancellation you can spot and you have perfect latency. I don't think you will ever do better than that, because most spoofing moves drop the entire level with no warning, so these will typically catch you with zero time to react.
As you can imagine, you can get significantly more fills on the weak side, but this is clearly a land mine, which is why the idea of threading this needle seems so intriguing. If one could pull this off, the prize would likely be at least 2x the number of fills compared to only submitting from the strong side multiple levels out to gain optimal queue placement.
In the coming days I am going to code this and post my market replay / Live SIM results.
Have you tried calculating the marginal value of those additional 190-290 milliseconds?
Everything is custom. On CME everyone half-serious is bypassing broker risk because the exchange has a good risk platform. I use a mix of FPGA and x86 for the hardware.
>=10 ms is low tier even for proximity hosting. The line that most people use between Equinix and CME is about 0.293 ms one-way host-to-host. If you're experiencing a lot more than that, it's because something is wrong at the ends. If for example you know the bandwidth provider for your data/execution provider is fixed, you could ask your hosting provider to see if they can patch you only onto that provider's network. If they can't do it, the worst case cost should be $250-300 per month on top of your existing costs.
If you really want the mental satisfaction of having nicer round trips, you can still get actual colo for <$500 per month but it just takes careful snooping. Some vendors stock up an entire rack with blades and just need a few remaining people to take out the slots that are unused so they can recoup their hosting costs.
Of course you can get the cheapest gains by replacing your application, but it's also the most time consuming.
And you don't have to thank me, I could be bullsh*tting completely for all you know. Test things out yourselves and be very skeptical of people offering advice online.
Another piece of advice: which method call are you using for your timing measurements? C# has a few timer calls but the trivial ones have poor accuracy.
I do feel compelled to thank you, because I have done my research on you, and you came voluntarily to this thread after being well recommended by several members here with great reputations. I honestly appreciate your insight because you are much further along in the game than I am, and the feedback you share will be taken seriously. That said, I am a lifetime skeptic, so I will also take your advice to test everything myself and continue to do my own research.
I have done some testing on my latency and how it impacts my strategies. I am going the VPS route at the moment, but I may move to a dedicated server next. I think this is likely why I am a little slow at the moment... that, and obviously I am using a platform that primarily caters to chart traders and moves slower than something that just executed my strategy and nothing else. I might consider eventually building my own system. I know that the CME has a number of APIs that contain standard market events and some of the event handlers I would need, but I would still have to go the retail route of either CQG/Continuum or Rithmic for data. So I would end up going Application > Data Provider > Broker Risk > Exchange, whereas DMA guys can go Application > Exchange. Maybe I am wrong... but I always assumed that any savings I picked up from a quicker application would be offset by the additional steps I am going through presently.
Below you mentioned something that very much interests me. When you say that I won't be able to land the cancellations that matter, I am envisioning a specific type of cancellation scenario, and I was hoping you could verify my thinking. I know that right now I can cancel based on volume-related criteria, provided I am looking far enough out. But the scenario I am very curious about is a price level change. Let's say the market moves the bid price down from 2746 to 2745.75 and I had a buy order at 2746. If you have an event handler catching this price change at the most granular level, would you ever have any chance at all of pulling off a cancel? I have never been able to nail one myself, but again, I am on the slow side. Obviously, if the market clears a level, it is assumed that all volume traded at that level; but with spoofing, flipping, and not knowing the exact sequence of events at the exchange, I figured I would ask, as this was the first scenario that came to mind. I would love to know if this type of cancellation is possible at all and, if so, where the line is on latency.
Thank you for your time and input.
Ian
Regarding the timing measurements, I am just using the standard NT logs. I am not programming anything myself to pick up the timestamps. NT is .NET, so that means C#.
You do need enough margin capital (YMMV but say 250-500k) and volume for your broker to let you bypass risk, because it takes up some attention span of their trading desk, risk and compliance.
There are 2 separate matters of staleness of your application and quality of your cancellation here.
1. A properly designed application should, by construction, in a pure FIFO market, never allow you to successfully cancel a bid that was resting at 2746 and show you that the best bid has gone below your order. By the time this scenario shows up on the feed, it's already too late.
The only edge case where you can successfully cancel in this scenario is that you placed that bid close enough to the price move (which is quantifiable, but that requires a longer post) that the feed has yet to disseminate that you and/or other parties have improved the best bid, however that doesn't seem like the scenario you're describing to me.
I said a pure FIFO market because there are some markets that allow you to retroactively cancel that bid but that's another issue.
2. The second issue that I'm talking about takes place in a different scenario, which is, your feed tells you that the BB is @ 2746 and your order has been accepted for long enough @ 2746, is there some way to predict that the market will move against you in the next dt time and pull that order in a way that improves your PnL substantially enough to pay off your marginal expenditure on latency improvements? It's possible, but you need to be faster than what's possible for proximity hosting + NT.
I've dealt with external platforms, data and libraries before. One of the less productive, but necessary evils when you use external components is that you need to be able to trust your timestamps.
What's 12:34:56.789 to NT? Exchange timestamp for the event? Vendor timestamp? PC local time? PC local time when it reaches which part of user space? You could, for example, timestamp it yourself in the strategy layer and estimate the duration between that and your own timestamp, which is useful for knowing how fast the application itself is (e.g. in book construction).
For .NET I believe you need to read the literature around the Win32 API (QueryPerformanceCounter) or home brew your own from unmanaged C++ and assembly.
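To illustrate why the trivial timer calls are a problem, here is an analogous comparison in Python (used for illustration only; the .NET counterpart of the high-resolution counter is `Stopwatch`/`QueryPerformanceCounter`): the ordinary wall clock and the monotonic performance counter can report very different resolutions.

```python
import time

# time.time() is the coarse wall clock (analogous to DateTime.Now in .NET);
# time.perf_counter() is the high-resolution monotonic counter
# (analogous to Stopwatch/QueryPerformanceCounter).
coarse = time.get_clock_info("time").resolution
fine = time.get_clock_info("perf_counter").resolution

print(f"wall clock resolution:   {coarse:.9f} s")
print(f"perf_counter resolution: {fine:.9f} s")

# Timing a short operation with the high-resolution counter:
t0 = time.perf_counter()
sum(range(100_000))
elapsed_ms = (time.perf_counter() - t0) * 1000
print(f"elapsed: {elapsed_ms:.3f} ms")
```

A coarse clock can tick in 10-15 millisecond steps on some systems, which is larger than the entire 1-2 millisecond latencies being discussed in this thread; measurements at that granularity are meaningless.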