Logarithm Algorithm


Solution 1

Use this identity:

log_b(n) = log_e(n) / log_e(b)

Here log_e can actually be a logarithm in any base (the ratio comes out the same), n is the number, and b is the desired base. For example, in Java this will find the base-2 logarithm of 256:

Math.log(256) / Math.log(2)
=> 8.0

Math.log() uses base e, by the way. And there's also Math.log10(), which uses base 10.
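The identity above can be wrapped in a small helper. This is a minimal sketch; the class and method names are my own, not from the original answer:

```java
public class LogBase {
    // Change of base: log_b(n) = log_e(n) / log_e(b)
    static double logBase(double n, double b) {
        return Math.log(n) / Math.log(b);
    }

    public static void main(String[] args) {
        System.out.println(logBase(256, 2));   // 8.0
        System.out.println(logBase(1000, 10)); // same value as Math.log10(1000)
    }
}
```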

Solution 2

I know this is extremely late, but it may still be useful to some, since the matter here is precision. One way of doing this is to implement a root-finding algorithm that works, from the ground up, with whatever high-precision types you want to use, and that only needs the simple +, -, ×, / operations.

I would recommend implementing Newton's method, since it requires relatively few iterations and converges quickly. For this sort of application specifically, I believe it's fair to say it will always produce the correct result, provided good input validation is in place.

Let "a" be the constant we are solving for, defined by log_b(x) = a.

Finding a is then equivalent to the root-finding problem x - b^a = 0.

We can apply Newton's method iteratively to find "a" to within any specified tolerance, where the i-th iterate is

a_i = a_{i-1} - \frac{x - b^{a_{i-1}}}{-b^{a_{i-1}} \ln b}

and the denominator is the first derivative of the function,

\frac{\partial}{\partial a}\left(x - b^a\right) = -b^a \ln b,

as Newton's method requires. (The constant \ln b only needs to be computed once, to the working precision.) Once the iteration converges, "a" is the direct answer to the a = log_b(x) problem, obtained with simple +, -, ×, / operations, so you're already good to go.

"Wait, but there's a power in there?" Yes. If you can rely on your power function being accurate enough, there are no issues with going ahead and using it there. Otherwise, you can break the power operation down further into a series of +, -, ×, / operations: split the fractional exponent into an integer part, whose power can be computed with a series of multiplications, and a fractional part, which reduces to nth roots. Those roots can themselves be found with Newton's method. If you do go down that road, you can use this derivative for the Newton iteration:

\frac{\partial (\sqrt[b]{x})}{\partial x} = \frac{\sqrt[b]{x}}{bx}

which, as you can see, has to be applied recursively until you reach b = 1.
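As a sketch of that last step, here is Newton's method for the b-th root r = x^{1/b} using only +, -, ×, / operations: applying Newton to f(r) = r^b - x gives the iteration r_{i+1} = ((b - 1) r_i + x / r_i^{b-1}) / b. The names below are mine, and I use double for illustration where a high-precision type would go:

```java
public class NthRoot {
    // Integer power via repeated multiplication: uses only the * operation.
    static double ipow(double v, int n) {
        double p = 1.0;
        for (int i = 0; i < n; i++) p *= v;
        return p;
    }

    // Newton's method for the b-th root of x (x > 0, b >= 1):
    // solve f(r) = r^b - x = 0  =>  r_{i+1} = ((b - 1)*r + x / r^(b-1)) / b
    static double nthRoot(double x, int b, double tol) {
        double r = x; // crude initial guess; converges for positive x
        while (Math.abs(ipow(r, b) - x) > tol) {
            r = ((b - 1) * r + x / ipow(r, b - 1)) / b;
        }
        return r;
    }

    public static void main(String[] args) {
        System.out.println(nthRoot(8.0, 3, 1e-9)); // cube root of 8, ~2.0
    }
}
```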

Phew, but yeah, that's it. This is how you can solve the problem while keeping high-precision types the whole way through, with only +, -, ×, / operations. Below is a quick implementation I did in Excel to solve for log_2(3), compared with the result of the software's built-in function. As you can see, I can just keep refining "a" until I reach the tolerance I want by monitoring the value the objective function gives me. I used a = 2 as the initial guess, which should be fine for most cases.
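For reference, here is the same iteration as a minimal Java sketch, using double and Math.pow for illustration; a production version would substitute a high-precision type such as BigDecimal and its own power routine. The names are mine, not from the original answer:

```java
public class LogNewton {
    // Newton's method for a = log_b(x): solve f(a) = x - b^a = 0,
    // using the derivative f'(a) = -b^a * ln(b).
    static double logNewton(double x, double b, double tol) {
        double lnB = Math.log(b); // constant, computed once
        double a = 2.0;           // initial guess, as in the Excel example
        double f = x - Math.pow(b, a);
        while (Math.abs(f) > tol) {
            a -= f / (-Math.pow(b, a) * lnB); // Newton step
            f = x - Math.pow(b, a);           // monitor the objective
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(logNewton(3.0, 2.0, 1e-12)); // log_2(3) ~ 1.5849625
    }
}
```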

[Screenshot: Newton method iterations for the solution of a log operation]


Updated on July 09, 2022

Comments

  • Justin
    Justin almost 2 years

    I need to evaluate a logarithm of any base, it does not matter, to some precision. Is there an algorithm for this? I program in Java, so I'm fine with Java code.

    How to find a binary logarithm very fast? (O(1) at best) might be able to answer my question, but I don't understand it. Can it be clarified?

  • Justin
    Justin over 11 years
    I know that identity. I want to calculate a logarithm with more precision than a double can give.
  • Óscar López
    Óscar López over 11 years
    @user1896169 in that case, switch to BigDecimal and use this recipe for calculating the logarithm
  • someone
    someone over 11 years
    Math.log() default base is e
  • Justin
    Justin over 11 years
@Óscar López, I tried that, but my compiler (Eclipse) doesn't know what intRoot is, and I don't know what intRoot means
  • Óscar López
    Óscar López over 11 years
    @user1896169 a quick search in google yielded this link with a complete implementation of the functions