If you still see reconstruction issues while using the OnMarketDepth callback, my best guess is a common design pattern on these platforms: they aggregate all book state changes before dispatching the callback that wraps your application logic (your strategy).
The reason for this design pattern is that it's the most sensible thing to do if you anticipate any queuing bottlenecks. Examples of these bottlenecks include:
- the expectation that the application logic is slow, which is often the case among retail users
- limited bandwidth between the market data source and the client
As a result, you might find gaps even in between calls of your OnMarketDepth handler.
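To make the coalescing pattern concrete, here is a minimal Python sketch of how such a platform might batch depth deltas before dispatching the user callback. This is an illustration of the design pattern, not NinjaTrader's actual implementation; all names here are hypothetical.

```python
from collections import OrderedDict

class CoalescingBookFeed:
    """Aggregates raw depth deltas and dispatches one coalesced update
    per drain, mimicking platforms that batch book changes before
    invoking the user's OnMarketDepth-style handler."""

    def __init__(self, on_market_depth):
        self.on_market_depth = on_market_depth
        self.pending = OrderedDict()  # (side, price) -> latest size

    def ingest(self, side, price, size):
        # Raw feed event: later deltas at the same level overwrite
        # earlier ones, so intermediate states are lost.
        self.pending[(side, price)] = size

    def drain(self):
        # Called by the platform's dispatch loop (e.g. when the strategy
        # thread is free). The handler sees only the net state change at
        # each level, hence the "gaps" between callbacks.
        batch, self.pending = self.pending, OrderedDict()
        for (side, price), size in batch.items():
            self.on_market_depth(side, price, size)

# Usage: three raw updates at one level collapse into a single callback.
seen = []
feed = CoalescingBookFeed(lambda s, p, q: seen.append((s, p, q)))
feed.ingest("bid", 4500.25, 120)
feed.ingest("bid", 4500.25, 80)   # intermediate state, will be lost
feed.ingest("bid", 4500.25, 150)
feed.drain()
print(seen)  # [('bid', 4500.25, 150)] -- only the final size survives
```

The strategy callback never observes the 120 or 80 states, which is exactly the kind of gap described above.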
This design pattern is even more common on the GUI side, because platform developers safely assume that no screen is supposed to keep up with the messaging traffic. You can try this yourself: call up your broker, or use a separate application, to click-trade a unique lot size on ES, and chances are it will not print on your GUI. I've done this dozens of times and I've never been able to match my fill to a trade print on any retail platform's book viewer.
Sounds about right, no comment.
Yes, we know this without even needing to resort to data analysis. Just read the dozens of cases where people get caught spoofing and it's always the classic technique where they place a large bid on the other side of a small offer they're actually trying to fill, and then they cancel the bid once they're filled. If you trade in less-regulated markets, this effect is even more rampant.
So the obvious answer is that you're more likely to get filled on the thinner side of an asymmetric book. However, this is also offset by a worse PnL on that fill.
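A toy expected-value calculation illustrates the trade-off: a higher fill probability on the thin side can still produce a worse edge per order if the fills there are more adversely selected. The probabilities and tick values below are illustrative assumptions, not measured numbers from the thread.

```python
# Toy expected-value comparison of resting on the thin vs. thick side.
# All probabilities are made-up for illustration; tick value is ES's $12.50.
TICK_VALUE = 12.50

def edge_per_attempt(p_fill, p_favorable_given_fill,
                     win_ticks=1, loss_ticks=1):
    """Expected dollars per order submitted, ignoring fees.
    p_fill: probability the resting order gets filled.
    p_favorable_given_fill: probability the fill works out, given a fill."""
    ev_given_fill = (p_favorable_given_fill * win_ticks
                     - (1 - p_favorable_given_fill) * loss_ticks)
    return p_fill * ev_given_fill * TICK_VALUE

# Thin side: fills often, but fills are adversely selected.
thin_side = edge_per_attempt(p_fill=0.60, p_favorable_given_fill=0.45)
# Thick side: fills rarely, but fills are better quality.
thick_side = edge_per_attempt(p_fill=0.30, p_favorable_given_fill=0.60)
print(round(thin_side, 2), round(thick_side, 2))  # -0.75 0.75
```

Under these assumed numbers, the easier fill is the worse trade, which is the offsetting effect mentioned above.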
Not an obvious conclusion. Back in the day, say 2008-2010, this would have been a useful heuristic. Nowadays market makers are more competitive, and a wiped level is more likely to replenish on the same side of the book than to be replenished on the other side.
-----
My best advice for you is to collect your own raw data and build your own platform. It's nearly impossible to do analysis at this granularity without an open-source peek into how the data is touched before it reaches your application logic. I know this sounds like a huge distraction, because building any meaningful platform is a 1+ year endeavor for a first-timer, but that's generally the upfront cost you have to pay if you want to beat 1 tick of slippage on your round trip. No free lunch.
Also you mentioned some way to estimate your fills by assuming the volume needs to accumulate to your initial position. There's a couple of edge cases that you're probably already aware of, e.g. that you need to short circuit that logic if there's a price trade-through, but do test this thoroughly. I still get the code subtly wrong every now and then even after having done it many times over for different exchanges.
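As a sketch of the estimation logic being discussed, here is one pessimistic way to simulate a fill for a resting limit order: assume all resting volume is ahead of you, decrement it with trade prints at your price, and short-circuit on a trade-through. This is a hypothetical single-order, buy-side-only illustration, not a complete queue simulator.

```python
def estimate_fill(queue_ahead, events, price, side="buy"):
    """Pessimistic fill estimator for a 1-lot resting limit order.

    queue_ahead: contracts resting at our price when we joined (assume
                 all of them are ahead of us -- the pessimistic case).
    events:      sequence of (trade_price, trade_size) prints.
    Returns the index of the event at which we estimate a fill, or None.
    A trade through our price fills us immediately -- the short-circuit
    mentioned above."""
    remaining = queue_ahead
    for i, (trade_price, trade_size) in enumerate(events):
        if side == "buy":
            if trade_price < price:      # traded through: level is gone
                return i
            if trade_price == price:
                remaining -= trade_size  # volume consumed ahead of us
                if remaining < 0:        # queue exhausted, we print too
                    return i
        # (the sell side is symmetric and omitted for brevity)
    return None

# Usage: 100 ahead of us; second print exhausts the queue.
print(estimate_fill(100, [(4500.25, 60), (4500.25, 50)], 4500.25))  # 1
# Trade-through below our bid fills us on the first event.
print(estimate_fill(100, [(4500.00, 5)], 4500.25))                  # 0
# Not enough volume trades: no fill.
print(estimate_fill(100, [(4500.25, 50)], 4500.25))                 # None
```

Even this simplified version has sharp edges (partial fills, modifies losing priority, per-exchange matching rules), which is consistent with the warning above to test it thoroughly.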
I'll save you a few months of work and tell you that the pessimistic case is very close to the actual case. So if you don't think you can make (save?) money using limit orders with your current analysis, you're probably not going to be able to do it with more accurate queue simulation.
Thanks for the feedback on everything; I appreciate you taking the time to share your insights. Here is where I am at with everything.
1. Regarding your comments about the MarketDepth event handler and the potential gaps: I have seen the gaps and have built a pretty neat workaround. NT support recently confirmed that the GetCurrentBidVolume() and GetCurrentAskVolume() methods are bar-size specific, so you only get these updates on the bar update, regardless of which event handler you call them from. When I moved my strategy from a 50-tick time series down to 1 tick, I did see more updates, but there were still gaps. So between updates, I am tracking the actual transacted volumes at the bid and ask via the OnMarketData event handler, which reports every change to the level 1 book independently of everything else, and deducting any transacted volume from my last GetCurrentBidVolume() and GetCurrentAskVolume() reading. Once I get a refreshed update, I reset my cumulative transaction count back to 0 and start again. I am able to capture levels breaking fairly accurately in between updates, and this accounts for everything except canceled volume, which unfortunately I can't see between updates.
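The deduction workaround in point 1 can be sketched in a few lines. This is a platform-agnostic Python illustration of the bookkeeping, not NinjaScript; as noted above, cancellations between snapshots are invisible, so the estimate is an upper bound on the true resting volume.

```python
class LevelVolumeTracker:
    """Approximates live bid volume between snapshot updates by
    deducting transacted volume, as described in point 1 above."""

    def __init__(self):
        self.snapshot = 0    # last GetCurrentBidVolume()-style reading
        self.transacted = 0  # volume traded at the bid since that reading

    def on_snapshot(self, bid_volume):
        # Fresh platform update: reset the cumulative deduction.
        self.snapshot = bid_volume
        self.transacted = 0

    def on_trade_at_bid(self, size):
        # OnMarketData-style level 1 trade print hitting the bid.
        self.transacted += size

    def estimated_bid_volume(self):
        # Upper bound: cancels between snapshots aren't observable.
        return max(self.snapshot - self.transacted, 0)

# Usage: snapshot of 500, then 120 + 90 trade at the bid.
tracker = LevelVolumeTracker()
tracker.on_snapshot(500)
tracker.on_trade_at_bid(120)
tracker.on_trade_at_bid(90)
print(tracker.estimated_bid_volume())  # 290
```

A symmetric tracker handles the ask side; clamping at zero catches the case where trades plus unseen cancels exceed the last snapshot.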
2. I have observed that you have a higher likelihood of getting on the right side of the action if you wait in the queue a few levels out: by the time your level comes up, you may be in the top 25% to 50% of the queue, so you increase your likelihood of not getting filled at the end of the line just as the level breaks against you. The unfortunate downside to this approach is that you end up with far fewer trades, because you spend too much time waiting in line a few levels out. My proposed workaround, and the point of this recent project, is to place orders on the weak side (the side where there was not already a queue working from other levels), and then cancel when my estimated place in the queue, relative to my side of the book and the opposite side, looks poor. When my queue position relative to both sides is attractive, I let it play out. I already have this sort of working, with around a 50/50 split between positive and negative outcomes (level breaks in my favor vs. level breaks against me). And typically, when submitting to the weak side, at least on the ES, you see the weak side break around 60% of the time or higher, so I think I am beating the odds currently, but I imagine with further optimization I can push this figure even higher.
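The cancel/stay rule in point 2 reduces to a small predicate. The version below is a hypothetical Python sketch in the spirit of that description; the threshold values are illustrative placeholders, not the tuned parameters from the project.

```python
def should_keep_order(contracts_ahead, my_level_size, opp_level_size,
                      max_queue_frac=0.5, min_ratio=1.0):
    """Toy cancel/stay rule: keep the resting order only if we're in
    the front portion of our level's queue AND our level isn't badly
    outsized by the opposite side. Thresholds are illustrative."""
    if my_level_size <= 0:
        return False  # our level is gone; nothing to keep
    queue_frac = contracts_ahead / my_level_size   # 0.0 = front of queue
    size_ratio = my_level_size / max(opp_level_size, 1)
    return queue_frac <= max_queue_frac and size_ratio >= min_ratio

# Usage: 40 contracts ahead of us at a 200-lot level vs. a 150-lot
# opposite level -> front 20% of the queue, keep the order.
print(should_keep_order(40, 200, 150))   # True
# Deep in the queue (180 of 200 ahead) -> cancel.
print(should_keep_order(180, 200, 150))  # False
```

In practice, `contracts_ahead` would come from a queue estimator like the fill-simulation logic discussed earlier, and the thresholds would be fit from recorded outcomes.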
3. I think you are right to recommend moving to an independent platform for testing. I have expertise in SQL and have built trading simulators this way in the past, and I will likely pursue that route again for this endeavor. With this approach, I can test almost every permutation of entry volume ratios and, across the various life cycles of trades, see where canceling or riding it out plays in my favor or runs the level through against me.
This is definitely a field of research that I think will prove extremely fruitful but as you can imagine it is no small undertaking.
For the most part it seems that you are going about this sensibly. A few remaining simple things:
1. It seems that you're thinking of this as a way to reduce transaction costs ("improve entries") for some more important, overarching strategy. That seems odd, because there are lower-hanging ways to reduce transaction costs. If your goal is to make market making your core strategy, then it makes more sense.
2. Yeah, you have practically zero hope of getting a good queue position from interacting with the top of the book with NT. Also, your cancels will be awful. There's a pretty sharp cutoff in round-trip latencies at which your cancels matter or not, and I can tell you that you're not close enough with NT.
3. PnL targets and stop losses should worsen your PnL but improve your cash flow; I don't see a reason for you to use them here.
Thank you for taking the time to follow up with me. I appreciate your insight.
As you can no doubt guess, I am applying this to a high-frequency trading strategy, so you can see why the 1 tick matters, as I am more on the market-making end of the spectrum.
I have fairly decent speed with NT, as I am co-located in close proximity to my data provider. So while I am not in the league of direct market access, I would say I am at the higher end of retail. My testing with canceling live orders has been surprisingly good thus far, so I think that outside of the 9:30-10:00 AM rush, I shouldn't hit too many snags. From your experience, where do you see the lines in the sand around latency and NinjaTrader? I think most people run NT off of at least 10-, 20-, or 50-tick bars or more, which creates a lag in and of itself. I run all of my execution-related code off of the OnMarketData event handler, which captures every level 1 change, so it executes faster than even a 1-tick time frame, for example. I see that you use thinkorswim; have you found it to have speed advantages over NinjaTrader? If you have any ideas that would help improve latency without breaking the proverbial bank, I am all ears.
But for sure, I could be on a fool's errand, putting in the work all for naught, but I've got to give it a shot, right?
That really doesn't matter. You're likely still taking a handful of BGP hops and paying garbage collection costs in your application. The people looking to lift your trades 90+% of the time (I'm not throwing this number out randomly) aren't the guys in your proximity hosting facility; they're the ones optimized to evaluate the value of your order and lift it a few thousand times between every two of your OnMarketData calls.
The line in the sand is around 2-3 orders of magnitude faster than what you can achieve with the setup you've described.
It's good, however, that you have the sensible judgment to call it a "line in the sand", because many people don't grasp that it's an all-or-nothing scenario: if you're not across the line, you're practically competing with everyone else on equal latency grounds, whether that's grandma on her iPhone app or someone in the same rack as you.
This isn't necessarily a bad thing, it's just that you are probably overspending on the wrong vendor services when you could've spent it on something with a more meaningful impact at your time scale, e.g. an alternate data source.
You'll have no problem canceling orders, no doubt, even during the US cash open; it's just that you won't land the cancellations that matter. Almost always. Even during the US T+1 session.
That's not a very meaningful optimization. Deallocation in .NET garbage collection likely takes more cycles than most things you can cram into 10 passes of your main event loop.
That's just a reference to an old joke that before my team had a working GUI, I needed to confirm our new platform was working properly, so someone fired up thinkorswim and compared our feed against that.
By the way, one cheap way to measure the marginal latency benefit of your proximity hosting and/or application is to submit orders in some benchmark pattern, e.g. replace -> acknowledged -> cancel.
Then ask your broker to look up those orders in their risk platform and give you the exact timestamps. They'll be able to see how fast your orders actually are from the frame of reference of the exchange (if they use CME's front end for risk) or of the execution vendor's gateway application.
Now you can actually modify your proximity hosting decisions and see the impact, i.e. run the same benchmark side-by-side on your local workstation.
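The side-by-side comparison boils down to simple arithmetic on the broker-reported round-trip times. This is a hypothetical sketch, assuming you've already extracted per-iteration round-trip times (in milliseconds) for the same benchmark run from both locations; the numbers below are made up.

```python
from statistics import median

def marginal_latency_ms(colo_round_trips, local_round_trips):
    """Estimate the marginal benefit of proximity hosting from two runs
    of the same replace->ack->cancel benchmark: one colocated, one from
    the local workstation. Inputs are per-iteration round-trip times in
    milliseconds, as reported by the broker's risk platform. Medians are
    used so a few outlier iterations don't skew the comparison."""
    return median(local_round_trips) - median(colo_round_trips)

# Hypothetical benchmark numbers: colocation saves ~9 ms at the median.
colo  = [3, 4, 3, 5, 4]       # proximity-hosted run
local = [12, 14, 11, 13, 15]  # same benchmark from the workstation
print(marginal_latency_ms(colo, local))  # 9
```

If the difference comes out in single-digit milliseconds while your application-side jitter is larger, that supports the point below about the hosting not being the binding constraint.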
CME's front-end risk platform unfortunately only has millisecond granularity, but that's good enough if you're just proximity hosting. You can get your broker to generate a report like this in CSV through Firmsoft. In this example, I cancel my order shortly after its modify has been acknowledged and some market data event has arrived to trigger the cancel, and from CME's perspective it all takes place within the same millisecond. It actually takes much less, but CME's clock isn't precise enough to show it.
With NT and proximity hosting at 350 E Cermak I estimate you're going to take several milliseconds. I doubt it's going to be meaningfully faster than running it on your own desktop. And I doubt 1 tick or 50 tick resolution is going to make a difference at all.
Inspiring to see the effort and the willingness to share. Those qualities, and the spirit they reflect, represent values that I wish were more prevalent in the world.
Kudos and respect to both of you guys.
Please leave some crumbs for the old school guys that still put their hands in the air and shout at the monitor for execution.
artemiso, are guys like you gonna swallow guys like me? Do I need to get my application in at Walmart?