I looked into this Newton-Raphson method. It's really interesting and could give a really good approximation of the root of this function in a couple of steps, but I would have to differentiate the function again, and that makes it way too complicated; even just the first part of the derivative was already a mess.
Wouldn't it be more efficient to just increase/decrease the value of x in a loop in the original equation until the value of y stops getting smaller?
Whole numbers are fine for me, I don't need any decimals.
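Your stepping idea does work here, because a sum of absolute values of linear functions is convex, so a simple descent over whole numbers cannot get stuck in a false local minimum. A minimal Python sketch of that loop (the helper name `integer_descent` is my own illustration, not from any library):

```python
def y(x):
    # The function from this thread: a sum of absolute values of linear terms.
    return (abs(-22 - 16*x) + abs(44 - 22*x) + abs(21 - 25*x)
            + abs(40 - 22*x) + abs(6 - 11*x) + abs(12 - 4*x))

def integer_descent(start=0):
    """Move x one whole step at a time in whichever direction lowers y,
    and stop when neither neighbor is smaller."""
    x = start
    while True:
        if y(x + 1) < y(x):
            x += 1
        elif y(x - 1) < y(x):
            x -= 1
        else:
            return x
```

Starting from 0 this walks to x = 1, where y(1) = 95, which is the best whole-number answer (the true minimum sits at a fractional x, as discussed below in the thread).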
You need to distinguish between symbolic and numerical differentiation here. There are three different approaches, depending on what you're trying to do:
If you're trying to find the analytical form of the first derivative f' and solve for its root, and you know the analytic formula of f at compile-time, you can solve f' = 0 by hand and hard-code the result into your program. Or you can derive f' and then get quadratic convergence to the minimum with Newton-Raphson.
If you only learn the analytic form of f at runtime, there are different ways to perform symbolic differentiation; chain-rule-based approaches are popular, e.g. in the backpropagation literature for neural networks.
If you don't know the analytic formula of f at compile-time or at runtime, you can still use Newton-Raphson but approximate f' numerically with a finite-difference method (secant, forward difference, backward difference, central difference), then converge on f' = 0 iteratively.
Also, if you do know the analytic form and decide to solve the first-order necessary condition, do make sure that your function is strictly convex and differentiable everywhere. (For example, the derivative of f(x) = |x| does not exist at x = 0.)
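As an illustration of the last approach, here is a rough Python sketch of Newton-Raphson run on a central-difference approximation of f', applied to a made-up smooth, strictly convex function (not the kinked one from this thread):

```python
def newton_minimize(f, x0, h=1e-5, tol=1e-10, max_iter=100):
    """Find a stationary point of a smooth f by Newton-Raphson on
    finite-difference estimates of f' and f''.  A generic sketch,
    assuming f is twice differentiable and strictly convex near x0."""
    x = x0
    for _ in range(max_iter):
        d1 = (f(x + h) - f(x - h)) / (2 * h)          # central diff for f'(x)
        d2 = (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)  # central diff for f''(x)
        if d2 == 0:
            break
        step = d1 / d2
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = (x-2)^2 + 0.5*x^4 has f'(x) = 2(x-2) + 2x^3,
# whose only real root is x = 1.
xmin = newton_minimize(lambda x: (x - 2)**2 + 0.5 * x**4, x0=0.0)
```

The same caveat as above applies: on a function with kinks the finite-difference f' jumps around the kink and the iteration can bounce instead of converging.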
If I look at your function, it is built from six components, each component being the absolute value of a linear function. Now let us look at one component function, for example
y1 = |-22-16x|
This function has two linear legs, a descending leg on the left side and a rising leg on the right side. For x = -1.375 it takes the value zero.
The same is true for the other five component functions. Each of them is built from two legs and has a low point, where it takes the value 0. I conclude that the function y you have posted has six points where it is not differentiable. Newton-Raphson cannot be used.
However, your function has another property, which makes a solution easy. Away from the low points of the six component functions, your function is a sum of six linear functions, so it is itself linear between any two adjacent low points. As a consequence, one of the low points of the component functions is the minimum you are looking for.
Therefore you just need to calculate x1, x2, x3, x4, x5, x6, the low points of the component functions, and then find out for which of those the function
y = |-22-16x|+|44-22x|+|21-25x|+|40-22x|+|6-11x|+|12-4x|
takes its smallest value.
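That whole recipe is only a few lines of Python: compute the six low points x_i = a_i / b_i, evaluate y at each, and keep the best one (a sketch assuming the coefficients exactly as posted):

```python
# Each term |a - b*x| hits zero at x = a/b.
terms = [(-22, 16), (44, 22), (21, 25), (40, 22), (6, 11), (12, 4)]

def y(x):
    return sum(abs(a - b * x) for a, b in terms)

breakpoints = [a / b for a, b in terms]   # the six low points x1..x6
x_min = min(breakpoints, key=y)
# x_min = 21/25 = 0.84, where y is approximately 94.36
```

Note the first term is written as (a, b) = (-22, 16) so that a - b*x reproduces -22 - 16x, giving the low point -22/16 = -1.375 mentioned above.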
I had some similar reasoning in my head. I was sure it had to be somewhere between the biggest and smallest root of the components, like you explained.
I was experimenting with calculating the minimum point from all the roots somehow, but I wouldn't have thought it would be one of them.
Flawless logic!
Thank you, and everybody who contributed!