The actual issue here is more complex than it seems. The problem is exacerbated by the increase in floating point precision, but it's not caused by it. It's caused by a leaky abstraction in computing that cannot really be hidden by any programming language or tool. Put simply, what you're seeing is not a limitation of this program; it is a limitation of current computer hardware.

Computers (almost always) represent floating point values as approximations, not exact values. mIRC, in turn, can only represent approximations of floating point values. You can try as hard as you'd like, but 1.999999999999 will never actually be represented in memory as the literal digits you wrote in the program. You cannot treat arbitrary numbers as if they were literal strings, interpreted and computed as a human would; that's just not how floating point works in computing.
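
You can see this in any language built on IEEE 754 doubles; here's a small sketch in Python (chosen just for illustration -- the stored bits are the same regardless of the tool):

```python
from decimal import Decimal

# Passing a float to Decimal exposes the exact value the hardware
# actually stored, digit for digit:
x = 1.999999999999
print(Decimal(x))  # prints the nearest double, not the literal you typed
```

Running this shows a long string of digits that is close to, but not identical to, 1.999999999999, because no double has exactly that value.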

Now, you may not have run into this problem before, because mIRC used to have a much lower "precision", that is, a coarser approximation of floating point values. Your 1.999... was probably being approximated to something like 1.99997 in memory, which, as you can see, is still not 2.0. However, once you add more precision and still approximate, 1.999... eventually turns into 2.0 in memory. That's why, with the extra precision, what looks like "1.999999999999" is really being interpreted as 2.0.
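
To make the rounding effect concrete, here's a sketch in Python of what any fixed-precision engine does when it keeps fewer digits than you wrote (the digit counts here are purely illustrative, not mIRC's actual internals):

```python
x = 1.999999999999  # twelve nines

# Keep fewer decimal places than the literal has: the run of nines
# carries all the way up and the value collapses to 2.0.
print(round(x, 6))   # -> 2.0

# Keep at least as many places as the literal and it survives:
print(round(x, 13))  # -> 1.999999999999

# So whether this compares equal to 2.0 depends entirely on which
# approximation the engine kept:
print(round(x, 6) == 2.0)  # -> True
```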

This problem isn't specific to mIRC; it's a problem with all uses of floating point values across all programming languages and computers of all kinds. It's why you never use floating points for currency, and why all sorts of scientific simulations and video games run into weird errors at the boundaries of large numbers. Some programs support arbitrary precision for math operations, but these are typically highly specialized applications built for exactly those problem sets, and those operations are extremely expensive. mIRC does not support arbitrary precision-- you will always be stuck with the limitations of floating points. The increased precision in mIRC isn't a bug; it's only increasing the surface area of an age-old leaky abstraction.
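
The currency point is easy to demonstrate. In Python (again, the underlying doubles behave the same everywhere), binary floats can't represent most decimal fractions exactly, which is why money code reaches for a decimal type instead:

```python
from decimal import Decimal

# Ten cents plus twenty cents, in binary floating point:
total = 0.1 + 0.2
print(total)         # -> 0.30000000000000004
print(total == 0.3)  # -> False

# The same sum with an exact decimal type (correct, but slower):
exact = Decimal("0.10") + Decimal("0.20")
print(exact)                     # -> 0.30
print(exact == Decimal("0.30"))  # -> True
```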

For what it's worth, I'd highly recommend reading more about how floating points work if you're going to do math stuff with any computer. See below for one of many possible explanations. Basically, you should avoid floats unless you really need them. These kinds of weird scenarios will always crop up, even for basic values like 0.1.

More on floating points: http://blog.reverberate.org/2014/09/what-every-computer-programmer-should.html


- argv[0] on EFnet #mIRC
- "Life is a pointer to an integer without a cast"