with today's technology you can slice and dice your time frames in hundreds of different ways, so the question is: which time frame is the best? one way to decide is to emulate what other successful traders do: see what time frame they use and what indicators they use, if any. That is how 5m was selected as "the one", since it is the time frame most successful traders use.
I understand what you are trying to say and I agree with you for the most part... however, I think saying that 'any method based on walking forward from a given time/price/volume' is bogus or based on fantasy is going a bit too far.
Sampling is a fact of life in any practical system. Have you ever tried trading off raw tick data or a 1-second chart? It becomes cumbersome rather quickly (especially given NinjaTrader's inefficiencies, but that is another story).
Obviously when you do use any form of sampling or quantification you are losing some information, so it is very important to be aware of the consequences of that when you choose a sampling method. This topic is so important to the design of digital electronics that there is an entire branch of mathematics devoted to it called Information Theory.
One of the most important aspects of Information Theory is the concept of mutual information, which is closely related to Shannon entropy. In a nutshell, entropy is a measure of how much uncertainty is associated with a random/unpredictable variable (i.e. the next bar's close value). It also represents a bound on how much you can compress something before you start losing information; in other words, the more random it is, the less it can be compressed.
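To make the entropy idea concrete, here is a minimal Python sketch (the function name is my own, purely for illustration). It treats a series of discretized values as draws from an empirical distribution and measures their unpredictability in bits:

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """Shannon entropy (in bits) of the empirical distribution of samples."""
    counts = Counter(samples)
    n = len(samples)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

# a perfectly predictable series carries no information...
print(shannon_entropy([1, 1, 1, 1, 1, 1, 1, 1]))  # 0.0
# ...while 8 equally likely symbols need the full log2(8) = 3 bits
print(shannon_entropy([1, 2, 3, 4, 5, 6, 7, 8]))  # 3.0
```

The second result is the compression bound in action: a series where every value is equally likely cannot be encoded in fewer than 3 bits per sample without losing information.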
Obviously financial time series are highly random, so you cannot compress them very much before you start losing information. So there is a tradeoff between the difficulty of analysis (due to the overabundance of data) and how much detail is lost. In thinking about that tradeoff you must consider what is more important: the evolution of price, or the evolution of time/volume.
In my opinion the evolution of price is most important, that is why I use range bars. It can also be proven from a statistical/information theoretic standpoint that range bars are superior to all other types because they offer the maximum compression with the least amount of information loss. This is obvious from the fact that they reduce the variability between bars and are thus smoother looking than other charts (less randomness).
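For anyone who wants to experiment with this, here is a simplified range-bar builder in Python. Note that this is just one common definition (a bar closes once its high-low span reaches the chosen range); platforms differ in the details, and the function name is my own:

```python
def range_bars(prices, bar_range):
    """Compress a tick/price series into range bars: each bar closes once
    its high-low span reaches bar_range. Time drops out entirely; only
    price movement drives bar formation."""
    bars = []
    bar = None
    for p in prices:
        if bar is None:
            bar = {"open": p, "high": p, "low": p, "close": p}
            continue
        bar["high"] = max(bar["high"], p)
        bar["low"] = min(bar["low"], p)
        bar["close"] = p
        if bar["high"] - bar["low"] >= bar_range:
            bars.append(bar)
            bar = None
    if bar is not None:
        bars.append(bar)  # partial final bar, not yet closed
    return bars

ticks = [100.0, 100.2, 100.1, 100.5, 100.4, 99.9, 100.0]
for b in range_bars(ticks, 0.5):
    print(b)
```

Feed it tick closes and compare against a time-sampled series of the same data: range bars fire rapidly when price is moving and not at all when it is flat, which is exactly the smoothing effect described above.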
Some people will argue that you lose the sense of time evolution on range charts, but this is not necessarily true; the information is still there, it is just in a format that is not as simple to analyze (i.e. you have to look at the time elapsed between each bar). On a time-based chart, however, the detail of the price evolution is actually lost.
With regard to the fact that the starting point of a chart is arbitrary... that is true, but it makes no difference as long as you have a relevant amount of history on your chart and you concern yourself only with the relative difference instead of the absolute.
What the market is doing now may or may not be important from a relative standpoint. This is another reason why time-based bars are inferior to all other types (from a statistical standpoint): with all other types you are defining an activity threshold for creating a new bar based on something that is related to the price movement in some way. Time is independent of the price movement; it marches on whether price is moving or not.
yes indeed. it's especially important when sampling to retain as much of the original source signal as possible - once you've erased it there's no getting it back.
let's assume, for the sake of argument, that the highest resolution feed you can get is a 1-second bar. in order to make a 5-minute bar series from that 1-second feed you need to erase all but one of every 300 seconds. whatever indicators you're going to build on that 5-minute chart are never going to be able to use the other 299 1-second bars of information - you've erased them right off the bat.
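to see exactly what gets erased, here is a sketch of that aggregation in Python (a hypothetical function of my own, assuming a plain list of 1-second closes): out of every 300 values, only four summary numbers survive.

```python
def resample_ohlc(one_sec_closes, factor=300):
    """Collapse a 1-second close series into OHLC bars of `factor` seconds
    each. Of every `factor` closes, only four summary values survive; the
    path price took inside the bar is discarded."""
    bars = []
    for i in range(0, len(one_sec_closes), factor):
        chunk = one_sec_closes[i:i + factor]
        bars.append({
            "open": chunk[0],
            "high": max(chunk),
            "low": min(chunk),
            "close": chunk[-1],
        })
    return bars

# 600 one-second closes collapse into just 2 five-minute bars
print(len(resample_ohlc(list(range(600)))))  # 2
```

anything built downstream of `resample_ohlc` can never recover the discarded intra-bar path, which is the point being made above.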
it's like what Newton did: instead of looking at the tangent at discrete times, you take delta-t down to zero and look at the underlying curve.
try this: open a 1-second line chart, but set the time scale so you can see a couple of hours on the chart. then instead of EMA(17), add an EMA(17*300).
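if you want to try that comparison programmatically, a standard EMA is only a few lines. here is a minimal sketch using the usual alpha = 2/(period+1) convention (most platforms use this, but check yours); EMA(17*300) on 1-second closes plays the role of EMA(17) on 5-minute bars, except it updates every second and never throws away the intra-bar path:

```python
def ema(values, period):
    """Exponential moving average with the standard alpha = 2/(period+1).
    Seeded with the first value; each step blends the new value into the
    running average."""
    alpha = 2.0 / (period + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

one_sec_closes = [0.0] * 10 + [1.0] * 40  # a step in price
fast = ema(one_sec_closes, 17)            # reacts within seconds
slow = ema(one_sec_closes, 17 * 300)      # the 5-minute-equivalent smoothness
```

the slow EMA traces roughly the same curve the 5-minute EMA(17) would, but as a continuous line fed by every 1-second close rather than one update per 300 seconds.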
the idea that putting boxes on the chart somehow aligns you with other traders is a mystery to me - everyone is looking at different boxes (except those with silly-expensive feeds).
This is true, but as I mentioned in my last post it doesn't matter as long as you focus on the relative difference between bars on your chart. In that sense, if drawing boxes on your chart or trendlines or whatever else helps you to see the magnitude of one movement relative to another then it may be helpful.
Also, with respect to the sampling issue... one thing to consider is the difference between discretionary trading and automated trading. If you are trading discretionarily and watching the bars unfold in real time, then technically those 299 unused bars are not totally lost if they impacted the trader's 'feel' for the market. That is why I mentioned in an earlier post in this thread that IMO time-based bars are OK for discretionary trading, but I do not recommend using them in automated strategies.
But yes, one thing I have become keenly aware of recently is that there is no such thing as noise in the markets. In DSP it is often necessary to use smoothing filters to remove noise from the signal that is caused by imprecise measurements, interference or other hardware related factors. In other words it is real noise that was not generated by the signal being analyzed. Market 'noise' on the other hand, is not really noise at all in that sense because there is no measurement error.
The implication there is that there is valuable information in the high frequency 'noise' of the markets which very few people notice because they are smoothing over it, either by undersampling or by relying too heavily on moving averages.
This is one reason I yap about wavelets so much: they allow you to have a very small timeframe on your chart but analyze it at many different resolutions, so you can see both the details and the 'big picture' on the same chart... hence why some people refer to wavelets as 'the mathematical microscope'.
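A full wavelet treatment needs a proper library (e.g. PyWavelets), but the core 'microscope' idea fits in a few lines using the simplest wavelet, the Haar. This is my own illustrative sketch, not anyone's trading code: each level splits the series into a coarser 'big picture' (pairwise averages) and the high-frequency detail at that scale (pairwise differences), so the same data can be read at several resolutions at once.

```python
def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages form
    the approximation (the 'big picture'); pairwise differences form
    the detail (the high-frequency content at this scale)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def haar_decompose(signal, levels):
    """Multi-resolution analysis: repeatedly split the approximation,
    keeping the detail coefficients from each successive scale."""
    details = []
    approx = list(signal)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details

approx, details = haar_decompose([1, 3, 5, 7], 2)
print(approx, details)  # [4.0] [[-1.0, -1.0], [-2.0]]
```

Because each pair (average, difference) can be inverted back into the original two samples, nothing is erased here; the data is only reorganized by scale, which is exactly the contrast with undersampling or heavy moving-average smoothing.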