NexusFi: Find Your Edge



 





Outside the Box and then some....


Discussion in Trading Journals

Top Posters
    1. iantg with 66 posts (325 thanks)
    2. SMCJB with 12 posts (15 thanks)
    3. artemiso with 12 posts (34 thanks)
    4. pen15 with 10 posts (2 thanks)
Best Posters
    1. iantg with 4.9 thanks per post
    2. wldman with 3.7 thanks per post
    3. artemiso with 2.8 thanks per post
    4. SMCJB with 1.3 thanks per post
Thread stats: 31,751 views, 410 thanks given, 64 followers, 137 posts, 52 attachments




 

  #81 (permalink)
 artemiso 
New York, NY
 
Experience: Beginner
Platform: Vanguard 401k
Broker: Yahoo Finance
Trading: Mutual funds
Posts: 1,152 since Jul 2012
Thanks Given: 784
Thanks Received: 2,685


iantg View Post
So I took my granularity level from 50 ticks down to 1 tick and tried to see if I could build a full sequence of trades from x contracts down to the last 1 contract before each level broke. After doing some research I have come to understand that this will never be achievable based on GetCurrentBidVolume() / GetCurrentAskVolume() alone. There are always going to be gaps in the data between each update as new contracts get added to the queue and transactions occur.

For starters, are you sampling on every trade ("1 tick" in NT lingo) or on every market depth update? I'm not familiar with the latest NT, but a quick search of the API docs suggests this is what you need: https://ninjatrader.com/support/helpGuides/nt8/en-us/?onmarketdepth.htm

If you still see reconstruction issues while using the OnMarketDepth callback, my best guess is a common design pattern in these platforms where they aggregate all book state changes before dispatching the callback that your application logic (strategy) is enclosed in.

The reason for this design pattern is that it's the most sensible thing to do if you anticipate any queuing bottlenecks. Examples of these bottlenecks include:
- the expectation that the application logic is slow, often the case among retail users
- limited bandwidth between the market data source and the client

As a result, you might find gaps even in between calls of your OnMarketDepth handler.
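To make the aggregation pattern concrete, here is a minimal sketch of a conflating feed in Python (hypothetical names; not NT's actual implementation). While the consumer is busy, updates to the same level overwrite each other, so intermediate book states never reach the callback:

```python
import threading
from collections import OrderedDict

class ConflatingBookFeed:
    """Sketch of the 'aggregate then dispatch' pattern described above.

    Raw depth updates are merged per (side, price) while the consumer
    is busy, so the callback only ever sees the latest state of each
    level -- intermediate states (and hence individual queue changes)
    are lost.
    """

    def __init__(self, callback):
        self.callback = callback
        self.pending = OrderedDict()   # (side, price) -> latest size
        self.lock = threading.Lock()

    def on_raw_update(self, side: str, price: float, size: int) -> None:
        # Producer side: overwrite any undelivered update for this level.
        with self.lock:
            self.pending[(side, price)] = size

    def dispatch(self) -> None:
        # Consumer side: deliver one conflated batch, if any.
        with self.lock:
            batch, self.pending = self.pending, OrderedDict()
        if batch:
            self.callback(batch)
```

This is why counting contracts from consecutive callback snapshots alone cannot reconstruct the full queue: two updates to the same level between dispatches collapse into one.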

This design pattern is even more common on the GUI side because platform developers safely assume that no screen is supposed to keep up with the messaging traffic. You can try this out yourself and call up your broker or use a separate application to click-trade a unique lot size for you on ES, and chances are that it will not print on your GUI. I've done this dozens of times and I've never been able to match my fill to a trade print on any retail platform's book viewer.



iantg View Post
In addition to this, I have implemented a queue position tracker to quantify where I am at in the queue. This works like this.

Sounds about right, no comment.
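For readers following along, this kind of tracker is usually built on a proration heuristic, since cancels ahead of and behind your order can't be distinguished from public data. A minimal sketch, with all names hypothetical and not the poster's actual code:

```python
class QueuePositionTracker:
    """Estimate contracts ahead of our resting limit order at one price level.

    Assumes we joined the back of the queue when the level showed
    `level_volume` contracts. Trades at our price consume the front of
    the queue (FIFO); cancels are prorated because we cannot tell
    whether they came from ahead of or behind us.
    """

    def __init__(self, level_volume: int):
        self.ahead = float(level_volume)  # contracts queued before ours
        self.behind = 0.0                 # contracts that joined after us

    def on_trade(self, size: int) -> None:
        # Executions eat the front of the queue.
        self.ahead = max(0.0, self.ahead - size)

    def on_add(self, size: int) -> None:
        self.behind += size

    def on_cancel(self, size: int) -> None:
        # Prorate: a cancel is ahead of us with probability ahead/(ahead+behind).
        total = self.ahead + self.behind
        if total > 0:
            frac = self.ahead / total
            self.ahead = max(0.0, self.ahead - size * frac)
            self.behind = max(0.0, self.behind - size * (1 - frac))

    def filled(self) -> bool:
        return self.ahead <= 0
```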



iantg View Post
2. Is there a specific ratio of bid / ask volumes present when entering a trade that will make it more likely or less likely to get filled in the following ways.

Yes, we know this without even needing to resort to data analysis. Just read the dozens of cases where people get caught spoofing and it's always the classic technique where they place a large bid on the other side of a small offer they're actually trying to fill, and then they cancel the bid once they're filled. If you trade in less-regulated markets, this effect is even more rampant.

So the obvious answer is that you're more likely to get filled on the thinner side of an asymmetric book. However this is also offset by worse PnL on that fill.



iantg View Post
1. How close am I to getting filled relative to the level breaking against me? For example if there are 20 contracts ahead of me that need to fill before I get filled, but only 30 contracts left total, then waiting in line to get filled would be bad. Because as soon as I get filled, the price level will change and I will be down 1 tick.

Not an obvious conclusion. Back in the day, like 2008-2010, this would be a useful heuristic. Nowadays market makers are more competitive, and a wiped level is more likely to be replenished on the same side of the book than on the other side.

-----

My best advice for you is to collect your own raw data and build your own platform. It's near impossible to do analysis at this granularity without an open source peek into how the data is touched before your application logic. I know this sounds like a huge distraction, because building any meaningful platform is a 1+ year endeavor for a first timer, but that's generally the upfront cost you have to pay if you want to beat 1 tick of slippage on your round trip. No free lunch.


 
  #82 (permalink)
 artemiso 
New York, NY
 
Experience: Beginner
Platform: Vanguard 401k
Broker: Yahoo Finance
Trading: Mutual funds
Posts: 1,152 since Jul 2012
Thanks Given: 784
Thanks Received: 2,685

Also, you mentioned some way to estimate your fills by assuming the volume needs to accumulate to your initial position. There are a couple of edge cases that you're probably already aware of, e.g. that you need to short-circuit that logic if there's a price trade-through, but do test this thoroughly. I still get the code subtly wrong every now and then, even after having done it many times over for different exchanges.
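As an illustration of that trade-through edge case, a pessimistic fill estimator for a resting limit buy might look like the sketch below (assumed conventions and hypothetical names, not the poster's code):

```python
def estimate_fill(order_price: float, queue_ahead: int, trades: list) -> bool:
    """Pessimistic limit-buy fill simulation over (price, size) trade prints.

    Normally we require strictly more than `queue_ahead` contracts to
    trade at our price before we count ourselves filled. But if any
    trade prints *through* our price (below it, for a buy), the market
    traded past our level and we must have been filled regardless of
    the remaining queue -- the short circuit mentioned above.
    """
    remaining = queue_ahead
    for price, size in trades:
        if price < order_price:      # trade-through: fill immediately
            return True
        if price == order_price:
            remaining -= size
            if remaining < 0:        # strictly more than the queue ahead
                return True
    return False
```

The strict inequality is what makes this the pessimistic case: volume exactly equal to the queue ahead is assumed to fill everyone except us.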

I'll save you a few months of work and tell you that the pessimistic case is very close to the actual case. So if you don't think you can make (save?) money using limit orders with your current analysis, you're probably not going to be able to do it with more accurate queue simulation.

  #83 (permalink)
 iantg 
charlotte nc
 
Experience: Advanced
Platform: My Own System
Broker: Optimus
Trading: Emini (ES, YM, NQ, etc.)
Posts: 408 since Jan 2015
Thanks Given: 90
Thanks Received: 1,148


Hi artemiso,

Thanks for providing me with some feedback on everything. I appreciate you taking the time to provide me with your insights. So here is where I am at with everything.

1. Regarding your comments about the market depth event handler and the potential gaps: I have seen the gaps and have built a pretty neat workaround. NT support recently confirmed that the GetCurrentBidVolume() and GetCurrentAskVolume() methods are bar-size specific, so you only get these updates on the bar update regardless of which event handler you call them from. When I moved my strategy from a 50-tick time series down to 1 tick, I did see more updates, but there were still gaps. So between updates, I am tracking the actual transacted volumes of the bid and ask via the OnMarketData event handler, which reports every change to the level 1 book independent of everything else, and deducting any transacted volume from my last GetCurrentBidVolume() and GetCurrentAskVolume() update. Once I get a refreshed update, I reset my cumulative transaction count back to 0 and start again. I am able to capture the levels breaking fairly accurately in between updates, and this accounts for everything except the canceled volumes, which unfortunately I can't see between updates.
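A rough sketch of that bookkeeping, in Python rather than NinjaScript (hypothetical names; only the snapshot-minus-transacted idea is taken from the post):

```python
class LevelVolumeEstimator:
    """Track estimated resting bid volume between snapshot refreshes.

    A snapshot (e.g. from GetCurrentBidVolume()) arrives only
    occasionally; between snapshots we subtract volume that traded at
    the bid, as seen via per-tick trade events. Cancellations between
    snapshots are invisible, so the estimate is an upper bound.
    """

    def __init__(self):
        self.snapshot = 0        # last known bid volume
        self.traded_since = 0    # volume transacted since that snapshot

    def on_snapshot(self, bid_volume: int) -> None:
        self.snapshot = bid_volume
        self.traded_since = 0    # reset the cumulative count

    def on_trade_at_bid(self, size: int) -> None:
        self.traded_since += size

    def estimated_bid_volume(self) -> int:
        return max(0, self.snapshot - self.traded_since)
```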

2. I have observed that you have a higher likelihood of getting on the right side of the action if you wait in the queue a few levels out: by the time your level comes up you may be in the top 25% to 50% of the queue, so you reduce the chance of getting filled at the end of the line just as the level breaks against you. The unfortunate downside to this approach is that you end up with far fewer trades, because you spend too much time waiting in line a few levels out. My proposed workaround, and the point of this recent project, is to place orders on the weak side (the side where there was not already a queue working from other levels), and then cancel when my estimated place in the queue, relative to my side of the book and the opposite side, looks poor. When my queue position relative to both sides is attractive, I let it play out. I already have this roughly working, with around a 50/50 split between positive and negative outcomes (level breaks in my favor vs. level breaks against me). And typically when submitting to the weak side, at least with the ES, you see the weak side break around 60% of the time or more, so I think I am beating the odds currently, but I imagine with further optimization I can get this figure even higher.

3. I think you are right with the recommendation of going to an independent platform for testing. I have expertise in SQL and have built trading simulators this way in the past, and will likely pursue this route again for this endeavor. With this approach I can test almost every possible permutation of entry volume ratios and, throughout the various life cycles of trades, see where canceling or riding it out plays in my favor or runs through my level against me.

This is definitely a field of research that I think will prove extremely fruitful but as you can imagine it is no small undertaking.

Thanks,

Ian


artemiso View Post
For starters, are you sampling on every trade ("1 tick" in NT lingo) or on every market depth update? I'm not familiar with the latest NT, but a quick search of the API docs suggests this is what you need: https://ninjatrader.com/support/helpGuides/nt8/en-us/?onmarketdepth.htm

If you still see reconstruction issues while using the OnMarketDepth callback, my best guess is a common design pattern in these platforms where they aggregate all book state changes before dispatching the callback that your application logic (strategy) is enclosed in.

The reason for this design pattern is that it's the most sensible thing to do if you anticipate any queuing bottlenecks. Examples of these bottlenecks include:
- the expectation that the application logic is slow, often the case among retail users
- limited bandwidth between the market data source and the client

As a result, you might find gaps even in between calls of your OnMarketDepth handler.

This design pattern is even more common on the GUI side because platform developers safely assume that no screen is supposed to keep up with the messaging traffic. You can try this out yourself and call up your broker or use a separate application to click-trade a unique lot size for you on ES, and chances are that it will not print on your GUI. I've done this dozens of times and I've never been able to match my fill to a trade print on any retail platform's book viewer.




Sounds about right, no comment.




Yes, we know this without even needing to resort to data analysis. Just read the dozens of cases where people get caught spoofing and it's always the classic technique where they place a large bid on the other side of a small offer they're actually trying to fill, and then they cancel the bid once they're filled. If you trade in less-regulated markets, this effect is even more rampant.

So the obvious answer is that you're more likely to get filled on the thinner side of an asymmetric book. However this is also offset by worse PnL on that fill.




Not an obvious conclusion. Back in the day, like 2008-2010, this would be a useful heuristic. Nowadays market makers are more competitive, and a wiped level is more likely to be replenished on the same side of the book than on the other side.

-----

My best advice for you is to collect your own raw data and build your own platform. It's near impossible to do analysis at this granularity without an open source peek into how the data is touched before your application logic. I know this sounds like a huge distraction, because building any meaningful platform is a 1+ year endeavor for a first timer, but that's generally the upfront cost you have to pay if you want to beat 1 tick of slippage on your round trip. No free lunch.


  #84 (permalink)
 
jackbravo's Avatar
 jackbravo 
SF, CA/USA
 
Experience: Beginner
Platform: SC
Broker: Stage 5
Trading: NQ...uh..ES actually
Posts: 1,337 since Jun 2014
Thanks Given: 4,362
Thanks Received: 2,400

*mind blown* hope you find much success with this effort

Sent using the https://nexusfi.com/NexusFi mobile app

"It does not matter how slowly you go, as long as you do not stop." Confucius
  #85 (permalink)
 artemiso 
New York, NY
 
Experience: Beginner
Platform: Vanguard 401k
Broker: Yahoo Finance
Trading: Mutual funds
Posts: 1,152 since Jul 2012
Thanks Given: 784
Thanks Received: 2,685


iantg View Post
Hi artemiso,

Thanks for providing me with some feedback on everything. I appreciate you taking the time to provide me with your insights. So here is where I am at with everything.

1. Regarding your comments about the market depth event handler and the potential gaps: I have seen the gaps and have built a pretty neat workaround. NT support recently confirmed that the GetCurrentBidVolume() and GetCurrentAskVolume() methods are bar-size specific, so you only get these updates on the bar update regardless of which event handler you call them from. When I moved my strategy from a 50-tick time series down to 1 tick, I did see more updates, but there were still gaps. So between updates, I am tracking the actual transacted volumes of the bid and ask via the OnMarketData event handler, which reports every change to the level 1 book independent of everything else, and deducting any transacted volume from my last GetCurrentBidVolume() and GetCurrentAskVolume() update. Once I get a refreshed update, I reset my cumulative transaction count back to 0 and start again. I am able to capture the levels breaking fairly accurately in between updates, and this accounts for everything except the canceled volumes, which unfortunately I can't see between updates.

2. I have observed that you have a higher likelihood of getting on the right side of the action if you wait in the queue a few levels out: by the time your level comes up you may be in the top 25% to 50% of the queue, so you reduce the chance of getting filled at the end of the line just as the level breaks against you. The unfortunate downside to this approach is that you end up with far fewer trades, because you spend too much time waiting in line a few levels out. My proposed workaround, and the point of this recent project, is to place orders on the weak side (the side where there was not already a queue working from other levels), and then cancel when my estimated place in the queue, relative to my side of the book and the opposite side, looks poor. When my queue position relative to both sides is attractive, I let it play out. I already have this roughly working, with around a 50/50 split between positive and negative outcomes (level breaks in my favor vs. level breaks against me). And typically when submitting to the weak side, at least with the ES, you see the weak side break around 60% of the time or more, so I think I am beating the odds currently, but I imagine with further optimization I can get this figure even higher.

3. I think you are right with the recommendation of going to an independent platform for testing. I have expertise in SQL and have built trading simulators this way in the past, and will likely pursue this route again for this endeavor. With this approach I can test almost every possible permutation of entry volume ratios and, throughout the various life cycles of trades, see where canceling or riding it out plays in my favor or runs through my level against me.

This is definitely a field of research that I think will prove extremely fruitful but as you can imagine it is no small undertaking.

Thanks,

Ian

For the most part it seems that you are going about this sensibly. A few remaining simple things:

1. It seems that you're thinking of this as a way to reduce transaction costs ("improve entries") for some more important, overarching strategy. This seems odd because there are lower-hanging ways to reduce transaction costs. If your goal is to make market making your core strategy, then it makes more sense.

2. Yeah, you have practically zero hope of getting a good queue position by interacting with the top of the book with NT. Also, your cancels will be awful. There's a pretty sharp cutoff in round-trip latencies at which your cancels matter or not, and I can tell you that you're not close enough with NT.

3. PnL targets and stop losses should worsen your PnL but improve your cash flow; I don't see the reason for you to use them here.

  #86 (permalink)
 iantg 
charlotte nc
 
Experience: Advanced
Platform: My Own System
Broker: Optimus
Trading: Emini (ES, YM, NQ, etc.)
Posts: 408 since Jan 2015
Thanks Given: 90
Thanks Received: 1,148

Hi artemiso,

Thank you for taking the time to follow up with me. I appreciate your insight.

As you can no doubt guess, I am applying this to an HF trading strategy, so you can see why the 1 tick matters, as I am more toward the market-making end of the spectrum.


I have fairly decent speed with NT, as I am co-locating in close proximity to my data provider. So while I am not in the league of direct market access, I would say I am at the higher end of retail. My testing with canceling live orders has been surprisingly good thus far, so I think outside of the 9:30-10:00 AM rush I shouldn't hit too many snags. From your experience, where do you see the lines in the sand around latency and NinjaTrader? I think most people run NT off of at least 10, 20, 50 ticks or more, so this creates a lag in and of itself. I run all of my execution-related code off of the OnMarketData event handler, which captures every change in level 1 events, so this executes faster than even running a 1-tick time frame, for example. I see that you use thinkorswim; have you found this to have speed advantages over NinjaTrader? If you have any ideas that would help improve latency without breaking the proverbial bank, I am all ears.

But for sure, I could be on a fool's errand, putting in the work all for naught, but I've got to give it a shot, right?

Thanks,

Ian



artemiso View Post
For the most part it seems that you are going about this sensibly. A few remaining simple things:

1. It seems that you're thinking of this as a way to reduce transaction costs ("improve entries") for some more important, overarching strategy. This seems odd because there are lower-hanging ways to reduce transaction costs. If your goal is to make market making your core strategy, then it makes more sense.

2. Yeah, you have practically zero hope of getting a good queue position by interacting with the top of the book with NT. Also, your cancels will be awful. There's a pretty sharp cutoff in round-trip latencies at which your cancels matter or not, and I can tell you that you're not close enough with NT.

3. PnL targets and stop losses should worsen your PnL but improve your cash flow; I don't see the reason for you to use them here.


  #87 (permalink)
 artemiso 
New York, NY
 
Experience: Beginner
Platform: Vanguard 401k
Broker: Yahoo Finance
Trading: Mutual funds
Posts: 1,152 since Jul 2012
Thanks Given: 784
Thanks Received: 2,685


iantg View Post
As you can no doubt guess, I am applying this to an HF trading strategy, so you can see why the 1 tick matters, as I am more toward the market-making end of the spectrum.


iantg View Post
I have fairly decent speed with NT as I am co-locating in close proximity to my data provider.

That really doesn't matter. You're likely still taking a handful of BGP hops and paying for garbage collection in your application. The people who're looking to lift your trades 90+% of the time (I'm not throwing this number out randomly) aren't the guys in the same proximity hosting facility as you; they're the ones optimized to evaluate the value of your order and lift it a few thousand times between every 2 OnMarketData calls.


iantg View Post
From your experience where do you see the lines in the sand around latency and NinjaTrader?

The line in the sand is around 2-3 orders of magnitude faster than what you can achieve with the setup you've described.

It's good, however, that you have the sensible judgment to call it a "line in the sand", because many people don't grasp that it's an all-or-nothing scenario: if you're not across the line, you're practically competing with everyone else on equal latency grounds, whether it's grandma from her iPhone app or someone in the same rack as you.

This isn't necessarily a bad thing; it's just that you are probably overspending on the wrong vendor services when you could've spent the money on something with a more meaningful impact at your time scale, e.g. an alternate data source.

You'll no doubt have no problem canceling orders, even during the US cash open; it's just that you won't land the cancellations that matter. Almost always. Even during the US T+1 session.



iantg View Post
I think most people run NT off of at least 10, 20, 50 ticks or more, so this creates a lag in and of itself. I run all of my execution-related code off of the OnMarketData event handler, which captures every change in level 1 events, so this executes faster than even running a 1-tick time frame, for example.

That's not a very meaningful optimization. Deallocation in .NET garbage collection likely takes more cycles than most things you can cram into 10 passes of your main event loop.



iantg View Post
I see that you use thinkorswim; have you found this to have speed advantages over NinjaTrader?

That's just a reference to an old joke that before my team had a working GUI, I needed to confirm our new platform was working properly, so someone fired up thinkorswim and compared our feed against that.

  #88 (permalink)
 artemiso 
New York, NY
 
Experience: Beginner
Platform: Vanguard 401k
Broker: Yahoo Finance
Trading: Mutual funds
Posts: 1,152 since Jul 2012
Thanks Given: 784
Thanks Received: 2,685

By the way, one cheap way to gauge the marginal latency benefit of your proximity hosting and/or application is to submit orders in some benchmark pattern, e.g. replace->acknowledged->cancel.

Then ask your broker to look up your orders in their risk platform and ask for the exact timestamps. They'll be able to see how fast your orders actually are from the frame of reference of the exchange (if they use CME's front end for risk) or of the execution vendor's gateway application.

Now you can actually modify your proximity hosting decisions and see the impact, i.e. run the same benchmark side-by-side on your local workstation.

CME's front-end risk unfortunately only has millisecond granularity, but that's good enough if you're just proximity hosting. You can get your broker to generate a report like this in CSV (see below) through Firmsoft. In this example, I'm canceling my order shortly after its modify has been acknowledged and some market data event arrived to trigger the cancel; from CME's perspective, it takes place within the same millisecond. It actually takes much less, but CME's clock isn't precise enough.



With NT and proximity hosting at 350 E Cermak, I estimate you're going to take several milliseconds. I doubt it's going to be meaningfully faster than running it on your own desktop. And I doubt 1-tick or 50-tick resolution is going to make a difference at all.
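To illustrate the arithmetic of the benchmark, suppose the broker's export is reduced to per-order send and acknowledge timestamps (column names here are hypothetical; Firmsoft's actual report layout differs). The marginal benefit of proximity hosting is then just the difference between the two round-trip distributions:

```python
import csv
from statistics import median

def round_trips_ms(csv_path: str) -> list:
    """Compute per-order round-trip times from a broker timestamp export.

    Expects one row per order with hypothetical columns 'sent_ms' and
    'acked_ms' (millisecond granularity, as in the text above, since
    the exchange-side clock is only millisecond-precise).
    """
    trips = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            trips.append(int(row["acked_ms"]) - int(row["sent_ms"]))
    return trips

def marginal_benefit_ms(colo_csv: str, desktop_csv: str) -> float:
    """Median latency saved by proximity hosting vs the local workstation."""
    return median(round_trips_ms(desktop_csv)) - median(round_trips_ms(colo_csv))
```

Running the same benchmark side by side, as suggested above, and comparing the medians tells you whether the hosting fee is buying anything measurable at millisecond granularity.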

  #89 (permalink)
 
wldman's Avatar
 wldman 
Chicago Illinois USA
Legendary Market Wizard
 
Experience: Advanced
Broker: IB, ToS
Trading: /ES, US Equities/Options
Frequency: Several times daily
Duration: Hours
Posts: 3,534 since Aug 2011
Thanks Given: 2,069
Thanks Received: 9,556

This is why I stay connected to FIO. @artemiso @iantg

Inspiring to see the effort and the willingness to share. Those qualities, and the spirit they reflect, represent values that I wish were more prevalent in the world.

Kudos and respect to both of you guys.

Please leave some crumbs for the old school guys that still put their hands in the air and shout at the monitor for execution.

artemiso, are guys like you gonna swallow guys like me? Do I need to get my application in at Walmart?

Dan

  #90 (permalink)
 artemiso 
New York, NY
 
Experience: Beginner
Platform: Vanguard 401k
Broker: Yahoo Finance
Trading: Mutual funds
Posts: 1,152 since Jul 2012
Thanks Given: 784
Thanks Received: 2,685



wldman View Post
artemiso, are guys like you gonna swallow guys like me? Do I need to get my application in at Walmart?

You're fine. I'm probably the one more in need of a new job, at the rate our industry is consolidating and volatility keeps shrinking.





Last Updated on June 23, 2018


© 2024 NexusFi™, s.a., All Rights Reserved.