Bigfloat feedback on 3rd beta.

Your list of identifiers affected by BF mode included $min $max, and since they behave like the various modes of $sorttok I assume that is also counted.

I do see an example where .bf mode does not make $max $min $sorttok return correct results at the border of the doubles range, due to evaluating $calc(2^53+1) the same as 2^53:

//var -s %smallr 9007199254740992 , %larger 9007199254740993, %a.bf sort lo-to-hi: $sorttok(%larger %smallr ,n,32) vs max: $max( %smallr %larger) vs min: $min( %larger %smallr )

* Set %a.bf to sort lo-to-hi: 9007199254740993 9007199254740992 vs max: 9007199254740992 vs min: 9007199254740993
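For reference, the same collapse can be reproduced with any IEEE-754 double. A quick Python sketch (Python floats are the same 64-bit doubles used in non-.bf mode):

```python
# 2^53 is the last point where every neighboring integer is representable
# as a 64-bit double; 2^53 + 1 rounds back down to 2^53.
smaller = 9007199254740992        # 2^53
larger  = 9007199254740993        # 2^53 + 1

print(float(larger) == float(smaller))    # True: both become 2^53 as doubles
print(max(float(smaller), float(larger)))
```

So any $max/$min/$sorttok path that converts through doubles can't tell these two apart; a .bf path would need to compare the full-precision values.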

---

Observation of the precision of $log2 for 2^n integers. I saw $log2 (using 2^n) mentioned in Talon's bitwise post, and it looks like the current level of precision means he will specifically need to ensure he uses it in doubles mode, unless that creates rounding issues for other integers he'd be using.

//var %i.bf 2 , %j %i.bf | while (%i.bf isnum 0-100) { echo -a %j $log2(%j) : $log2(%i.bf) | var %i.bf %i.bf * 2 , %j %i.bf }

2 1 : 0.99999999999999999489848895823
4 2 : 1.999999999999999989796977916459
8 3 : 2.999999999999999984695466874691
16 4 : 3.99999999999999997959395583292
32 5 : 4.99999999999999997449244479115
64 6 : 5.99999999999999996939093374938

This is a case where my suggested digits parameter for $calc and $log could help obtain good results without needing to wrap $round() around the output.
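As a point of comparison, an exact integer log2 for 2^n inputs never needs float math at all. A Python sketch (ilog2 via bit counting is my own stand-in here, not anything mIRC exposes):

```python
import math

def ilog2(n):
    # Exact integer log2 of a positive integer: counts bits, no rounding
    return n.bit_length() - 1

for n in (2, 4, 8, 16, 32, 64):
    # a doubles-mode log2 lands on the integer here, unlike the
    # high-precision ln(x)/ln(2) style results shown above
    print(n, ilog2(n), math.log2(n))
```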

--

Quote
2 types of BF mode $powmod() gpf crashes

The 1st type doesn't crash when in doubles mode, but in .bf mode it becomes non-responsive when there's a fraction in either the base or modulus:

//var -s %gpf_crash.bf $powmod(94906267.1,94906268,94906270.1)

In both .bf mode and doubles mode I also get a 2nd type of crash if allowing the exponent to be negative:

//var -s %gpf_crash.bx $powmod(4,-5,21)

Negatives for the base and modulus are not returning the correct values because they are being handled by other (value < 2) code intended for values 0 or 1, but that's my fault for assuming that negatives would not be fed here, and I describe the solution later below.

- -

//bigfloat off | echo -a $powmod(3,54,77) $calc(3^54 % 77) : $powmod(3.1,54,77) $calc(3.1^54 % 77) : $powmod(3.1,54,77.1) $calc(3.1^54 % 77.1)

15 66 : 40.79617 8 : 62.729731 61.683662

From these results in doubles mode, it appears that fractions for the base or modulus can be valid and still obtain accurate results by evading the doubles limit, because all the base is used for is to initialize one of the loop variables being multiplied by, and the modulus is just the divisor for the % operation that $calc uses safely every day. So a fraction for either the base or modulus shouldn't be crashing it in .bf mode, unless there's something in one of the MAPM functions being used that's designed to only work with integers and throws a fit when given a float. Perhaps I just didn't wait long enough for the non-responsive mIRC to finish whatever it's doing.

However, the story is completely different for why $powmod must validate that the exponent does not have a fraction. An exponent of 5.1 would have given $powmod the same result as 4.1 does here:

//echo -a $calc(2 ^ 4.1 % 99999) vs $powmod(2,4.1,99999)

17.148375 vs 32

But in .bf mode the issue is worse, because of how the design accesses the next bit of the exponent by assuming that (exponent //2) just divides by 2 while discarding the previously-lowest bit. The algorithm is not designed to handle a float as the exponent the way $calc(2^3.5) is the same as 8 * sqrt(2). The only reason I could think of for the fractional exponent to trigger a crash in .bf is if the (exponent //2) or (exponent %2) was using a function designed for integers only, or else if the result was keeping the fractional result by doing /2 instead of //2, in which case the extreme precision means that "while (exponent)" can never be false, by never reaching zero no matter how many times you divide by 2.

Since the algorithm isn't designed to handle exponent fractions, that should either return 0 or give a syntax error.

While it's possible for this algorithm to work correctly where base or modulus is a fraction, that would just slow it down for the true integer purpose if it needs to use different functions that can play nice with floats.
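For reference, the integer square-and-multiply loop being described looks like this in Python; the floor division on the exponent is exactly the step a fractional exponent breaks:

```python
def powmod(base, exp, mod):
    # Right-to-left binary exponentiation; assumes integer exp >= 0, mod >= 1
    ans = 1
    base %= mod
    while exp:
        if exp % 2:            # low bit of the exponent set?
            ans = ans * base % mod
        exp //= 2              # must discard the low bit; a float /2 of 4.1
                               # (2.05, 1.025, ...) never reaches exactly 0
        if exp:
            base = base * base % mod
    return ans

print(powmod(3, 54, 77))       # 15, matching the doubles-mode result above
```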

- -

The other issue to deal with in doubles mode: the $powmod algorithm works by squaring a number that for integer calculation can be as large as (modulus-1) squared, so you can lose precision in doubles mode for results involving a modulus that's only slightly greater than $sqrt(2^53):

//var -s %mod $calc( (2^53^0.5 + 5) // 1) , %base %mod - 3 , %exp 3 | echo -a doubles mode wrong $powmod(%base,%exp,%mod) vs %null.bf bf mode good $powmod(%base,%exp,%mod)

* Set %mod to 94906270
* Set %base to 94906267
* Set %exp to 3
doubles mode wrong 94906246 vs bf mode good 94906243

So, if this is going to be used in doubles mode, the user needs to be warned that using this in doubles mode can easily return bad results when the modulus exceeds sqrt(2^53), such as (odd,anything,even) returning an impossible even result $powmod(3,46,$calc(2^32)).
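The precision loss is reproducible by running the same loop with every product forced through a 64-bit double (a sketch mimicking doubles mode; math.fmod stands in for the % operator):

```python
import math

def powmod_doubles(base, exp, mod):
    # Same square-and-multiply loop, but every product is a 64-bit double,
    # so base*base silently loses low bits once it exceeds 2^53
    base, mod, ans = float(base), float(mod), 1.0
    while exp:
        if exp % 2:
            ans = math.fmod(ans * base, mod)
        exp //= 2
        if exp:
            base = math.fmod(base * base, mod)
    return int(ans)

print(powmod_doubles(94906267, 3, 94906270))  # 94906246 (wrong)
print(pow(94906267, 3, 94906270))             # 94906243 (exact)
```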

* *

It's my fault that $powmod() is not returning the correct results for negative parameters, because I saw:

// Assumed x, n, m all integers, n >= 0

... and I read it as if it meant that all negative inputs had been filtered out before calling the function. However, they're not, so I need to update the validation portion of the bf_modpow script to allow it to handle negatives correctly.

The fix for $powmod includes rejecting inputs that are not integers, which I did have as a simple regex trying to mimic that, and which also blocked fractions, which was my main intent. These next rules assume the inputs are integers regardless of whether they're positive, zero, or negative, so they do things like reject a technically valid modulus of 1.9 because it's less than 2. For reasons I explained earlier, I prefer to not have the .bf mode permit fractions for any parameters, but in both cases never permit the exponent to have a fraction.

So this is an updated validation and changing of parameters, because the handling of negatives is interleaved with handling =1 =0. I used $regsubex as a 'safe' method of $abs() that's guaranteed to not round any integers. I include an explanation prior to each step, and after each validation is a comment showing a summary of the surviving range for each of the 3 input integers:

Code
  var %base.bf $1 , %exponent.bf $2 , %modulus.bf $3
  var %answer.bf 1

  if (!$regex($1-3,^-?\d+ -?\d+ -?\d+$)) return $calc(0/0)
  ;now: (base=any integer, exp=any integer, mod=any integer) result always [0,modulus-1]

  ; modulus = abs(modulus)
  if (-* iswm %modulus.bf) var %modulus.bf $regsubex(foo,$v2,^(-*)(.*)$,\2)
  ;now: (base=anything, exp=anything, mod >= 0)

  ; (any % 1 = 0); (any % 0 = divide by zero)
  ; if (%modulus.bf == 0) return $calc(0/0)
  if (%modulus.bf < 2) return 0
  ;now: (base=anything, exp=anything, mod >= 2)

  ; if (base < 0) base = modulus - (abs(base) % modulus)
  if (-* iswm %base.bf) var %base.bf $calc(%modulus.bf - $regsubex(foo,$v2,^(-*)(.*)$,\2) )
  ; aka -N % mod -> [0,mod-1] -> (-N+mod) % mod
  ;now: (base >= 0, exp=anything, mod >= 2)

  if (%exponent.bf < 2) {
    ; any^0 = 1
    if (%exponent.bf == 0) return 1
    ; base is positive, so for base^1 return [0,mod-1] as (base % modulus)
    if (%exponent.bf == 1) return $calc( %base.bf % %modulus.bf)

    ; remaining case here is (exp < 0)
    if (%exponent.bf  < 0) {
      ; powmod(+base,-exp, +mod) -> powmod( mod_inverse(+base,modulus) , +exp, +mod)
      ; exp = abs(exp)
      var %exponent.bf $regsubex(foo,%exponent.bf,^(-*)(.*)$,\2)
      ; base = inverse(+base,modulus)
      var %base.bf $bf_ModInverse( %base.bf , %modulus.bf )
      ; 7^-2 mod 15 -> 7^(-1*2) mod 15 -> 7^(-1)^2 mod 15 -> (1/7)^2 mod 15 -> modinv(7,15)^2 mod 15 -> 13^2 mod 15 = 4
      ; if (%base.bf == 0) return $calc(0/0)
      ; assumes $ModInverse() return value = 0 when there is no inverse ie (3,-anything,21)
    }
  }
  ;now: (base >= 0, exp >= 2, mod >= 2)

  if (%base.bf < 2) {
    ; 0^any = 0; 1^any = 1; i.e. return val matches base
    return %base.bf
  }
  ;now: (base >= 2, exp >= 2, mod >= 2)
the rest of the alias goes here
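
For comparison, here is the same validation order as a Python sketch; pow(b, -1, m) (Python 3.8+) stands in for the scripted $bf_ModInverse, and Python's % already maps negative bases into [0, modulus-1]:

```python
def bf_powmod(base, exponent, modulus):
    # Validation mirrors the script above: integers only, result in [0, mod-1]
    if not all(isinstance(v, int) for v in (base, exponent, modulus)):
        raise ValueError("integers only")      # the $calc(0/0) branch
    modulus = abs(modulus)
    if modulus < 2:                            # any % 1 = 0; any % 0 = error
        return 0
    base %= modulus                            # handles negative bases
    if exponent < 0:
        # powmod(+base,-exp,+mod) -> powmod(modinverse(base,mod), +exp, +mod)
        base = pow(base, -1, modulus)          # raises if no inverse exists
        exponent = -exponent
    return pow(base, exponent, modulus)

print(bf_powmod(4, -5, 21))    # 4: inverse(4,21) = 16, then 16^5 mod 21
print(bf_powmod(-3, 3, 7))     # 1: (-3)^3 = -27, and -27 mod 7 = 1
```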

Notes:

The alias coded as $powmod() includes (%base.bf * %base.bf) as a replacement for $calc(%base.bf ^ 2), because the $calc(base^2) result still needs repair. If m_apm_square is pure integer and doesn't involve any float calculations like the other function that uses exponents other than 2, it should be safe to use. That command is called 'n' times for an 'n' bit modulus, so there might be a detectable speed change if _square is a lot faster than _multiply.

The above references a modular multiplicative inverse that I haven't mentioned yet, so (exp < 0) can return the $calc(0/0) condition if that feature will not be integrated into this function.

The above code lines include references to $calc(0/0). These are error conditions where a design decision would be made whether errors return an identifier syntax message, or return 0, or return some other invalid value outside the range of [0,modulus-1], such as -1. The commented-out lines only need to be uncommented if the design is changed to make modulus=0 or similar cases return the error condition instead of returning 0. Just as $calc(0/0) or $calc(0 % 0) return 0, identical to a legitimate return value, this identifier also has legit cases that return 0, such as powmod(same,positive,same) or powmod(zero,positive,positive).

Quote
$calc ^

The $calc ^ operator is doing better, and now the 2^103 and 2^120 examples work. But it looks like that was fixed by putting more precision into a float calculation, and the effect still appears at lengths slightly more than double that. Both powmod results here are correct, but $calc ^ now finally begins to drift at 2^259:

//.bigfloat on | var -s %exp 258 | while (%exp isnum 258-259) { echo -a exp %exp powmod: $powmod(2,%exp,1 $+ $str(0,%exp)) | echo -a exp %exp calc::: $calc(2^ %exp) | inc %exp }
Code
exp 258 powmod: 463168356949264781694283940034751631413079938662562256157830336031652518559744
exp 258 calc::: 463168356949264781694283940034751631413079938662562256157830336031652518559744
exp 259 powmod: 926336713898529563388567880069503262826159877325124512315660672063305037119488
exp 259 calc::: 926336713898529563388567880069503262823437618389757004607953675203850891427840
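Python's arbitrary-precision integers can serve as a reference for which of the two 2^259 lines is correct; they agree with the powmod line:

```python
# Exact big-integer reference values for the comparison above
assert str(2**258) == "463168356949264781694283940034751631413079938662562256157830336031652518559744"
assert str(2**259) == "926336713898529563388567880069503262826159877325124512315660672063305037119488"
print("powmod lines match exact 2^n")
```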
And it does appear that the issue with ^ is what affects the drift in the $base output, because each time the accuracy for $calc ^ changes, so does the size of the input number where $base begins to go astray.

//var -s %hex $left($sha512(test),66) , %good $qbase(%hex,16,36) , %nope $base(%hex,16,36)
* Set %hex to ee26b0dd4af7e749aa1a8ee3c10ae9923f618980772e473f8819a5d4940e0db27a
* Set %good to 167J1NURV0796I7EZBYO6RICQ7CELO8SQUCLOZCHHW1WI82RAFZU
* Set %nope to 167J1NURV0796I7EZBYO6RICQ7ZZBT07UT1HLTVZET7PY7I7XGU2
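A reference hex-to-base36 conversion is easy to sanity-check with big integers, because the round-trip must return the original value. A Python sketch (to_base36 is my own helper, not a mIRC identifier):

```python
def to_base36(n):
    # Minimal big-integer base-36 encoder, digits 0-9 A-Z, assumes n >= 0
    digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    out = ""
    while True:
        n, r = divmod(n, 36)
        out = digits[r] + out
        if n == 0:
            return out

hex66 = "ee26b0dd4af7e749aa1a8ee3c10ae9923f618980772e473f8819a5d4940e0db27a"
n = int(hex66, 16)
assert int(to_base36(n), 36) == n    # round-trip must be lossless
```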

* *

Quote
Possible partial fix for $calc ^

Since it appears that $powmod works fine for positive integers even when using a modulus of 8192 bits, which allows an internal result to be double that bit length, it seems that a scripted variant of powmod which does not use a modulus would solve the rounding errors above 2^258, at least for the subset of $calc(integer^(integer >= 0)), as long as the timing is not too awful, and I'm hoping it won't be.

This wouldn't be a slow thing like $powmod working against a 4096-bit modulus, it would be working against an exponent of literally 4096, which is a *12* bit number. Even at $maxlenl that would only be a 14-bit number for a max of 14 loops.

That means there would rarely ever be even a dozen loops, and each loop would be faster because there would not be any modular division against anything. The only housekeeping duty that this would face that $powmod does not comes from the fact that the modulus in $powmod throttles %answer.bf from ever being more than double the bit length of %modulus.bf, but $calc_^_operator would need something else to make sure that its own storage can't get crashed.

It wouldn't need to count bits every time until the number got large enough to be in the worry zone. Because %answer and %base are affected only by multiplying against %base, each 'round' can only increase the variable length by the round-entry bit length of 'base', so it should be able to estimate when/if it runs into trouble.

For example, $calc(3^31). An approximation of the bit length is:

exponent * log(base) / log(2)
aka exponent * log2(base)

result: 49.133853

The $ceil(result) indicates that storage of %answer would require 50 bits. However, since %base is squared each 'round', the bitlength for %base would keep doubling.

Also, your mileage may vary with a log fraction that's a little too close for comfort to the underside of an integer.
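The estimate checks out against an exact bit count; a sketch of the arithmetic, with Python's bit_length as the ground truth:

```python
import math

est = 31 * math.log2(3)              # ~49.1338, the approximation above
print(math.ceil(est))                # 50 bits of storage estimated
print((3**31).bit_length())          # 50 bits actually needed

# the 'too close to the underside of an integer' caveat: at exact powers the
# estimate is an exact integer and one extra bit is needed (2^31 needs 32 bits)
print((2**31).bit_length())          # 32
```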

I've modified the $powmod loop for a small optimization which probably benefits $powmod too. The optimization allows skipping the final squaring of %base as being un-necessary due to being the last command before falling out of the loop. It speeds up the last of 'n' loops, so it has a greater benefit for $calc ^ by avoiding 1 of 9 loops than it gives for powmod if it's optimizing 1 of 4096 loops, but every little bit helps. Plus, more importantly for the ^ operator, it chops in half the needed storage space for the %base.bf variable that's being doubled each loop.

old:
var %base.bf $calc(%base.bf * %base.bf)
var %exponent.bf $calc(%exponent.bf //2)
new:
var %exponent.bf $calc(%exponent.bf //2)
if (%exponent.bf) var %base.bf $calc(%base.bf * %base.bf)

The speed optimization depends on whether it's faster to check (bitlength of exponent) times whether a value is zero, or whether the library has an effective way to just go ahead and square the variable 'n' times instead of 'n-1' times and swallow the extra storage. For $powmod I'm pretty sure it's worth the trouble to save even 1 squaring in 1-of-many loops.

And also, the powmod alias had avoided squaring like base^2 because it's not the size of the exponent that was causing the error, but the size of the result. So if the underlying $powmod code is using m_apm_multiply to multiply %base.bf by itself, it's probably faster to use m_apm_square to square it, and it's likely to be just as reliable for $calc ^ as m_apm_multiply.

But the way to estimate the storage needed for %base is to determine that the final value of base after n-1 squarings is base^(2^(n-1)). 17 requires 5 bits to represent it, so that means 5 loops and n-1 = 4 squarings, making the 'fake' exponent 2^4 = 16.

So the storage bits for base 3 would be: 16 * log2(3) = 25.3594

If this result is an exact multiple of the storage space unit, it would need to increase + 1 unit for the same reason that 2^32 doesn't fit in a 32-bit variable.

I think, but am not certain, that there shouldn't be any combos for base/exponent where the answer is shorter than the scratchpad value for base, but if %answer is always the larger number, then that's the only one whose length needs checking.

To summarize, this is only a partial potential solution for cases where the ^ operator sees both parameters as integers and the exponent is not negative. The case of either of them being 0 or 1 is easy to handle with an early exit, and the alias has code for that. The code also handles negative bases by checking whether the exponent is odd or even.

For the other cases, where either the exponent or the base has a fraction, or the exponent is negative, I think it's good enough to let the existing routine handle things, since those are going to involve a float fraction that's either being handled correctly now, or the 'correct' answer would be a number that's beyond the MAPM precision.

i.e. $calc(2^-anything) will be a fraction that is already within precision
i.e. $calc(16^(0.25)) is already handled correctly, and otherwise fractions will produce a float.

It's a little difficult to test fully by comparing against $powmod() results, because of the need to put in a really long fake modulus to make sure parm3 doesn't alter the result, plus the strings get hard to compare. But this example compares a large result from the alias against a large result from $powmod, and either they match or I'm famous for finding a hash collision.

//var -s %a.bf $calc_^_operator(3,8192) , %b.bf $powmod(3,8192,$str(3,8192)) | echo -a $sha256(%a.bf) vs $sha256(%b.bf)

Code
;  $calc_^_operator(base,exponent)

alias calc_^_operator {
  var %base.bf $1 , %exponent.bf $2 , %answer.bf 1 , %minus $null
  if (%exponent.bf < 2) {
    if (%exponent.bf == 0) return 1 | if (%exponent.bf == 1) return %base.bf
    return should not call this with negative exponent
  }
  if (%base.bf < 2) {
    if (%base.bf isnum 0-1) return $v1
    ; only condition now is base < 0
    if (%base.bf < 0) var %minus - , %base.bf $mid(%base.bf,2)
    if (!$isbit(%exponent.bf,1)) var %minus $null
  }
  while (%exponent.bf) {
    if ($calc(%exponent.bf % 2)) var %answer.bf $calc(%answer.bf * %base.bf)
    var %exponent.bf $calc(%exponent.bf //2)
    ; the non-exponential _square function 'should' be safe
    if (%exponent.bf) var %base.bf $calc(%base.bf * %base.bf)
  }
  return %minus $+ %answer.bf
}
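The same alias translates almost line-for-line to Python, which makes it easy to test against a known-good ^ (Python's built-in **):

```python
def calc_pow(base, exp):
    # Integer-only ^ via square-and-multiply; assumes integer exp >= 0
    if exp < 0:
        raise ValueError("negative exponents are left to the float path")
    neg = base < 0 and exp % 2 == 1     # odd exponent keeps the minus sign
    base, ans = abs(base), 1
    while exp:
        if exp % 2:
            ans *= base
        exp //= 2
        if exp:                          # skip the final, unused squaring
            base *= base
    return -ans if neg else ans

assert calc_pow(3, 8192) == 3**8192     # same check as the $sha256 compare
assert calc_pow(-3, 5) == -243
```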

* *

In addition to $calc(2^n) returning a string ending with all zeroes at long lengths, I also find that including a -1 subtraction can change a huge result into zero. The following test returns zero, but removing the -1 returns a string ending with a large number of zeroes. I find that 17174 is the largest exponent to substitute for 19937 where this example does not change the large number into zero.

//var -s %mersenne_twister.bf $calc(2^19937 -1)

The same thing happens for the + operator, where this returns 0 instead of a large number ending with 1. This is just the obvious case, and I don't know whether there are lower numbers that show the behavior too.

//var -s %a.bf $+(1,$str(0,5170)) + 1
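For regression testing, exact reference values are easy to pin down with big integers (a Python sketch of what the two commands above should return):

```python
# 2^19937 - 1 ends in ...1 and has 6002 digits; it is nowhere near zero
m = 2**19937 - 1
assert m % 10 == 1
assert len(str(m)) == 6002

# 10^5170 + 1 should be 1, then 5169 zeroes, then 1
big = 10**5170 + 1
assert str(big) == "1" + "0" * 5169 + "1"
```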

* *

In looking at m_apm_powmod I see some of what they're doing differently than my bf_modpow alias does, and I think I'm finally starting to understand why they're messing around with fractions when all terms are required to be integers. And I suspect that if $powmod had been a wrapper around m_apm_powmod, instead of separate C code which mimics my bf_modpow script, it would probably eventually show the same kind of rounding going on in these other areas.

Instead of using something like m_apm_integer_div_rem to get the modulo remainder after division by the large integer 'm', m_apm_powmod instead pre-calculates 1/m as a really long fraction, and then multiplies each loop product by that really long 1/m fraction instead of just dividing by 'm' to get the actual remainder. And the iround is part of several functions whose job apparently is to massage the resulting float back into an integer that the code admits 'approximates' being modulo m.
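The reciprocal-multiply style of modulo can be sketched in a few lines; with a 53-bit double standing in for MAPM's long fraction, the failure mode is easy to see, and the remainder goes wrong by exact multiples of the modulus once the quotient loses precision:

```python
def recip_mod(x, m):
    # Modulo via a precomputed reciprocal: quotient = x * (1/m), truncated,
    # remainder = x - quotient*m. Exact only while the quotient is exact.
    q = int(float(x) * (1.0 / m))
    return x - q * m

print(recip_mod(100, 7))          # 2: fine at small sizes
r = recip_mod(2**200, 7)          # quotient is off, so r is far outside [0,7)
assert (r - 2**200 % 7) % 7 == 0  # but the error is an exact multiple of 7
```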

* *

So, it looks as if any function using math that involves this kind of reciprocal shenanigans is going to eventually start showing warts like $base and ^ do, and the only solutions seem to be:

* make these kind of reciprocal fractions so absurdly long that they slow everything down, which nobody wants

* have a separate tier for big integer calculations that doesn't touch fractions at all.

And I'm seeing a little bit of that Option#1 slowdown in the 2nd and 3rd betas. For my example editbox command which called my bf_modpow alias using a 2048-bit number, my time was 4.3 seconds in the 1275 beta. When you gave your time as 7.4 seconds, I thought that simply meant you had a slower computer, and that my time for the $powmod() identifier would continue to be faster than yours. However, in the 2nd and 3rd betas, my time for that 2048 command goes up to 7.1 seconds, which means my computer is a similar speed to yours. And sure enough, when I tested the same thing with the built-in $powmod identifier, instead of the 4.3*(2.9/7.4) = 1.7 seconds I was expecting for 2048, I'm getting 2.8 secs, which is just slightly shorter than your 2.9.

I'm guessing that the precision setting that you increased in order to obtain extra accuracy for $calc(^) is affecting the speed of the integer calculations used by $powmod and probably used by a lot of other things.

I later found a glitch in the 1275 beta that I couldn't find in the next 2 betas, where the % operator in .bf mode behaved as if it wasn't actually doing integer modulo either, but instead was making a long fraction out of (1/divisor) and then multiplying by that, which means increasing the precision would make it take longer to create the 1/divisor reciprocal and also take longer to multiply with it.

Without benchmarking it, it seems that creating a long fractional reciprocal and then multiplying by it to obtain a modulo remainder is a shortcut only for cases where it iterates repeatedly using the same modulus, such as what MAPM was doing in their version of powmod. For the one-shot usage of the % operator it's a 'long cut', not a 'short cut': instead of simply dividing by the divisor, it does a division to create the fractional reciprocal, then multiplies by that long fraction, followed by all the iround and related adjustments.

There might be cases where the function could cheat by avoiding the creation of the reciprocal by peeking to see if it's inverting the same number that was most recently inverted.

* *

From observation, I gathered that there are 2 types of interest in numbers that can't easily come from the doubles range.

On the one side, there's people wanting longer fractions from relatively small numbers coming from $sqrt $log $cos $calc(number1/number2) etc.

Then on the other side there's interest in accurate calculations involving big integers, where any support for fractions is of no interest, and is viewed as something which instead introduces slower actions and the potential for rounding errors in the results.

For example, while there have been lots of other threads from people wanting longer accurate fractions, the top of this thread is someone who was wanting accuracy when $base translates from hex to base36, and they were using a large integer, not a large number having a 30-digit fraction attached to it.

But rarely is there much demand for having both at the same time, where they would want an accurate representation of a number having many digits on both sides of the decimal.

A solution that can give everyone the kind of accuracy they need, while speeding things up for everyone, is to have

* doubles mode, range 2^53 with 6 digit fractions
* bigfloat mode with 30 digit fractions and no promises about how super huge the mantissa can get
* biginteger mode with no fraction at all, but with accurate results for big integers, and there's no such thing as a mantissa when there's no fraction

As an analogy for trying to obtain big integer results and big float results from the same library, it would be like a vehicle whose main goal is to carry heavy weights up a steep hill, so it was designed to have only a 'low gear'. And there's another vehicle whose main goal is as a racing car on a level grade. While the racing car can't be used for carrying the heavy weights, the 1st vehicle could be made to work as a race car, but only by accelerating the engine too fast. Using a big integer library for the integer results would allow identifiers to perform the racing car functions without changing the moving vehicle in ways that causes problems.

By separating big integer from big float, I believe this would allow a significant reduction in the precision for that reciprocal and other fractions heavily used by the trigs, logs, and other floating identifiers, while still retaining accuracy for the 30 digit fraction. Hopefully this could be a drop-in precision change that would restore faster speed and accuracy to integer exponents and other integer calculations, without affecting the speed and return values of the $cos $sqrt etc fractions where games want to do that stuff at many frames per second.

And on the other side, if the integer-only calculations were using completely different functions coming from a big _integer_ library instead of being an appendage to a big _float_ library, the calculations would be much faster coming from the big integer library, and it could be a win/win all around.

I'm not sure if this would need there to be a separate /biginteger ON mode alongside /bigfloat ON, because I think it might just be sufficient for some identifiers to be always in big-integer mode, and for others to key off the name of the %variable itself.

If variable names would be a flag for biginteger mode like .bf is for bigfloat, it could be something like %var.bi etc. From looking at your list of identifiers that enabled support for bigfloat mode, most of those should be basically ignored in .bi mode, or else they may not even belong in .bf mode once .bi exists.

A big integer library would have a lot of things that could be efficiently done without trying to make them work inside a floating point library. For example, the bitwise operators that Talon wanted aren't in MAPM because it doesn't really make sense to look at bit positions of a fraction that's stored in a mystery format. But bitwise functions are one of the key building blocks for a big integer library.

* *

Unless there's a better package out there, the easiest solution could be to use OpenSSL as a BigInteger library, considering how it is already used by many functions inside mIRC, and some of the relevant functions are probably imported already. For things that use only integers, OpenSSL can be dozens of times faster, because its functions are almost exclusively integer only, fractions need not apply. As far as speed, OpenSSL seems pretty fast compared to other things that can do .bi style of math.

As an example of how much speed can be obtained by having a big integer library do big integer things while a float library does float things, I reference the demo alias bf_modpow, where the editbox command uses 2048-bit integers and substituting the new $powmod() reduces the time from 7.4 seconds down to 2.9 seconds. Each time mIRC connects to a server using an RSA-4096 SASL certificate, it uses a shortcut that does a pair of $powmod calculations with numbers of this 2048-bit length, substituting a pair of 2.9 sec calculations for a single 23 sec 4096-bit calculation. OpenSSL does a pair of those same 2048-bit calculations using a subroutine that uses only integers, and it's able to do it much quicker than a floating precision library could possibly do. I've even used an RSA-8192 SASL cert, which does a pair of 23-second 4096-bit calculations instead of a 3-minute 8192-bit calculation each time it does a certificate handshake, and though I've tried looking for it, I've never detected any noticeable pause while it did the handshake. The timing can only be measured from the side using the private key, because the server side just uses the small public exponent that's much quicker.
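The 'pair of half-size powmods' shortcut is the standard RSA-CRT trick; a toy-sized Python sketch (real keys use 2048-bit primes, and pow(e, -1, m) needs Python 3.8+):

```python
# RSA-CRT: replace one full-size powmod with two half-size ones
p, q = 11, 13                        # toy primes standing in for 2048-bit ones
n, e = p * q, 7
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent
c = pow(9, e, n)                     # encrypt the message 9

dP, dQ = d % (p - 1), d % (q - 1)
qInv = pow(q, -1, p)
m1, m2 = pow(c, dP, p), pow(c, dQ, q)   # the two small powmods
m = m2 + q * (qInv * (m1 - m2) % p)     # recombine with the CRT

assert m == pow(c, d, n) == 9           # matches the single big powmod
```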

* *

In my separate post in the $base2 thread, I give examples of how $base can benefit from several speed optimizations if it can be in a mode which rejects input in an 'inbase' which contains digits that can't exist when 'inbase' is used as an outbase.

If $base still intends to support long fractions, that other post also shows a way that fractions can have better speed and precision by processing as if a 2nd 'fake' integer, which enables $base to pretend it doesn't know what a fraction is, but hey there's these 2 integers that I attach to this decimal here.

Without being required to support $base(ZZ,10,16) or fractions in .bi mode, there are quite a few inbase/outbase pairs that can make $base be much quicker, including being able to do string manipulation when people simply want to zero-pad their number to the same length by doing $base(number,same,same,10). Also, since internal variables are probably uint32, there should be quick efficiencies when 16 is either the inbase or outbase.

Something for down the road when $base gets better support for long strings is the ability to pad to longer string lengths. $base is still at the doubles default of treating the $4 padding as having a max of 100, which for a hex string is 400 bits. At the current level of 2^258 accuracy, that number would be a base10 string of only 80 digits or so, which hasn't quite reached that 100 barrier. But if $base is able to handle longer strings by fixing $calc ^, that can be easily exceeded. The 100 limit is also fixed regardless if outbase is 2 or 36, which means the max padding bit length at base=2 is 100 which is below the current accuracy, and at base=36 is 517.

Though to be fair, using $base(number,same,same,length) for zero padding is much slower than simply doing string manipulation.

//tokenize 32 $base($sha512(S),16,16,128) 36 36 128 | if ($len($1) < $4) var -s %output $str(0,$calc($v2 - $v1)) $+ $1 , %outlen $len($1) -> $len(%output)
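The string-manipulation version of zero-padding is a one-liner in most languages; a Python sketch of the comparison (zfill pads without any base math, leaving the value unchanged):

```python
def zero_pad(num_str, width):
    # Equivalent of $base(number,same,same,width) when only padding is wanted
    return num_str.zfill(width)

assert zero_pad("ee26b0dd", 16) == "00000000ee26b0dd"
assert int(zero_pad("ee26b0dd", 16), 16) == int("ee26b0dd", 16)
```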

* * *

This last is just for reference in case it helps with other issues, since it appears to have gone away. In addition to the rounding error affecting both $calc ^ and $base, and the rounding-to-zero mentioned above, I came across another $calc glitch that affects the % operator in the 1275 beta, but which I cannot detect in the next 2 betas, probably due to ramping up the precision of fractions. To demonstrate in 1275, paste the sample numbers into the next command, where the result of $calc(positive % small-positive) can be less than zero or greater than the divisor. I found these errors with a $base3 alias that imitated $base2 by using the subset of .bf mode calculations I thought was safe, excluding "^". When the number was 1024 bits long, there would be only a handful of digits that were wrong, and they looked like single items that appeared at far-separated ranges.

In all cases that I found, the errors are an exact multiple of the modulus in one direction or the other, so this looks like it is caused by the kind of modulo that m_apm_powmod was trying to do: making long_fraction = (1/divisor), multiplying by long_fraction, and then adjusting the result to be approximately correct, instead of dividing by the divisor to get the remainder. So these differences look like they would be caused by inaccuracies when generating certain long_fraction numbers. When the errors did rarely appear for various divisors against these 2 numbers, the result was further away from the correct answer when the number was larger.

//var -s %i 2 , %a.bf REPLACE_ME | while (%i isnum 2-99) { echo -ag a mod %i = $calc(%a.bf % %i) | inc %i }

2375367556404088850618648174520826561280160432840589702972085294732162134239579969864370734273995321418305033664988384807865935555938322586901096972029296805385174745353604865654752517095868844281580877080228836752924976062380370295865131511490650052293878941356520340648053365611547195104400734887585100

4880764126099366743837
7919469740387813605031256289423841237995399571684111272206998186170769885721416238571318279483520798098990001232855690606254165366075219118430826622030552451715827119443274220533030920865103504446738458680112065589481385303840571100725796559750403704097781428796416803222269605442046883432586397577234279697820643783939861375686447028897903660789517177623671447371011012146391141909217824049825006796875609169056493133980096497118239292592211941820601612285824760020244043454619516083873000

*
Edit:

Updated parameter checking on $gcd and $lcm

I discovered that these allow more than 2 parameters, which is great. However, they also allow 1 parameter, which should be 'invalid parameters', because the definition includes the requirement that there be 2 numbers and that they be integers.

//echo -a $gcd(5)
result: 5 should be 'invalid parameters'
//echo -a $gcd(1.23,1.23)
result: 1.23 should always be 0 if any term is a float

Same for LCM, where $lcm(1.23) should be insufficient parameters, and $lcm(1.23,1.23) should change from 1.23 to zero since this is not 2+ integers.
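The proposed rules are simple to state as code; a Python sketch of the validation (gcd_checked is a hypothetical wrapper, and math.gcd with more than 2 arguments needs Python 3.9+):

```python
import math
from numbers import Integral

def gcd_checked(*args):
    # Proposed rules: at least 2 parameters, and 0 whenever any term is a float
    if len(args) < 2:
        raise ValueError("invalid parameters")
    if not all(isinstance(a, Integral) for a in args):
        return 0
    return math.gcd(*args)

assert gcd_checked(12, 18, 24) == 6
assert gcd_checked(1.23, 1.23) == 0
```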

Last edited by maroon; 31/10/22 11:48 AM.