Dear Khaled,

I've been playing with the new big-float features of the two recent betas, but I am slightly confused by the decision to arbitrarily truncate numbers to 30 decimal places when mIRC is capable of 50 digits of precision. Wouldn't it make more sense to truncate based on a number's total digit precision, i.e. the combined number of digits on both sides of the decimal point?

That is, if a number has nothing but a 0 to the left of the decimal point, it can have 50 digits of precision to the right. If it has 20 digits to the left of the decimal point, then only 30 remain on the right; if 30 on the left, then only 20 on the right.
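For illustration, here is a rough sketch of how that decimal-place budget could be computed in script. This is only a sketch of the proposal, not how mIRC actually behaves, and the alias name bf.places is made up; it uses only standard identifiers ($gettok, $remove, $len, $iif, $calc).

alias bf.places {
  ; $1 = the number as plain text, e.g. 31415.9265
  var %int = $gettok($1, 1, 46)
  ; a lone 0 before the decimal point uses none of the digit budget
  var %used = $iif(%int == 0, 0, $len($remove(%int, -)))
  ; whatever remains of the 50-digit budget goes to the right of the point
  return $calc(50 - %used)
}

For example, //echo -a $bf.places(31415.9265) would echo 45, i.e. five integer digits leave 45 decimal places available.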

For a visual demonstration, try running this single-line script in the editbox.

//var %i = 0 | while (%i <= 55) { var -s %pi.bf = $calc($pi * 10 ^ %i) | inc %i }

Above you can see precision building for the first 20 iterations, until decimal-place truncation automatically kicks in. Instead of an arbitrary cut-off at 30 places, truncation ought to start at 50 positions to allow for maximal precision. No?
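For anyone who prefers to read it in a remote script, here is the same loop written out with comments; the alias name pi.demo is only for illustration, and the .bf suffix is the same big-float switch already used in the one-liner above.

alias pi.demo {
  var %i = 0
  while (%i <= 55) {
    ; the .bf suffix enables big-float mode; -s echoes each assignment
    ; so the growing (then truncated) precision is visible
    var -s %pi.bf = $calc($pi * 10 ^ %i)
    inc %i
  }
}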

