I think the Wikipedia article is right.

Your test isn't meaningful anyway, because it draws from too small a range for the results to actually look random.

A fairer illustration of $rand's true randomness would be an unbiased test of the raw $rand results. Here is an example alias:

Code:
randnumbers { 
  ; Usage: /randnumbers [count] [max]
  ; Draws <count> numbers from $rand(1,<max>) (max defaults to count * 10)
  ; and reports how many repeat an earlier draw.
  var %i = 1, %out, %reps = 0, %n = $iif($1,$1,100)
  var %max = $iif($2,$2,$calc(%n * 10))
  while (%i <= %n) {
    var %x = $rand(1,%max)
    if ($istok(%out,%x,32)) inc %reps
    %out = %out %x
    inc %i
  }
  echo -a %reps collisions of %n numbers (ratio $calc(%reps / %n * 100) $+ % $+ ): %out
}
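For anyone without mIRC handy, the same test can be sketched in Python. This is my own port of the alias's logic, with Python's random module (a Mersenne Twister) standing in for $rand:

```python
import random

def randnumbers(n=100, max_val=None):
    """Mirror of the mIRC alias: draw n numbers in [1, max_val]
    and count how many repeat an earlier draw."""
    if max_val is None:
        max_val = n * 10  # same default spread as the alias
    out = []
    reps = 0
    for _ in range(n):
        x = random.randint(1, max_val)
        if x in out:
            reps += 1
        out.append(x)
    print(f"{reps} collisions of {n} numbers (ratio {reps / n * 100:g}%)")
    return reps

randnumbers(100, 1000)
```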


Running the alias with 100 numbers over a 1000-number spread ($rand(1,1000)) gets me:

6 collisions of 100 numbers (ratio 6%): 908 849 246 70 245 168 113 78 856 449 931 706 644 619 384 703 943 127 705 502 912 331 374 91 56 26 119 491 446 690 880 702 60 146 913 543 862 855 247 117 173 222 674 100 85 408 519 142 809 839 572 702 942 213 410 965 18 568 227 601 713 169 172 646 908 992 58 797 379 647 113 768 271 394 846 745 36 880 214 510 396 100 586 486 351 197 985 610 659 231 742 360 477 634 861 121 290 78 136

Those are fairly reasonable results. Obviously, if we shrink the spread (as your example did, drastically), the collision ratio goes up. Running the same alias with 100 numbers over a spread of only 100 gets:

33 collisions of 100 numbers (ratio 33%): 94 15 30 38 16 12 80 40 8 19 80 3 66 86 84 52 71 56 49 55 41 16 36 35 76 53 8 42 90 2 62 43 46 35 53 58 84 10 83 42 78 17 58 5 41 57 21 48 51 46 15 12 96 17 32 6 67 60 1 88 41 17 62 61 80 15 24 13 52 6 89 38 5 8 58 50 1 57 41 54 59 92 62 26 80 37 11 72 27 69 30 22 24 63 83 33 8 4 20

This is also about what you'd expect, given that there are only 100 slots for 100 selections.
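Both ratios line up with what the birthday problem predicts. Drawing n values uniformly from m possibilities, the expected number of distinct values seen is m(1 - (1 - 1/m)^n), so the expected repeat count is n minus that. A quick check (my own calculation, not part of the original post):

```python
def expected_collisions(n, m):
    # Expected number of draws that repeat an earlier value when
    # drawing n times uniformly from m possibilities.
    distinct = m * (1 - (1 - 1 / m) ** n)
    return n - distinct

print(expected_collisions(100, 1000))  # ~4.8, vs. 6 observed above
print(expected_collisions(100, 100))   # ~36.6, vs. 33 observed above
```

So 6 and 33 collisions are both within normal variation for a uniform generator.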

I did you the favour of doing some extra research on measuring "randomness" and came across a utility that performs (among other things) a chi-square test on a generated result set to gauge its "randomness". To quote the page:

Quote:
The chi-square test is the most commonly used test for the randomness of data, and is extremely sensitive to errors in pseudorandom sequence generators. ... We interpret the percentage as the degree to which the sequence tested is suspected of being non-random. If the percentage is greater than 99% or less than 1%, the sequence is almost certainly not random. If the percentage is between 99% and 95% or between 1% and 5%, the sequence is suspect. Percentages between 90% and 95% and 5% and 10% indicate the sequence is “almost suspect”.
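The test itself is straightforward to reproduce: bucket the bytes into 256 bins, compute the chi-square statistic against a uniform expectation, and convert it to the "percent of the time a random sequence would exceed this value". The sketch below is my own, and uses the Wilson-Hilferty normal approximation for the chi-square tail (the utility may compute it differently), so the percentage is approximate:

```python
import math
import random

def chi_square_percent(data, bins=256):
    """Chi-square statistic over byte frequencies, plus the approximate
    probability (as a percentage) that a truly random stream would
    exceed it. Tail computed via the Wilson-Hilferty approximation."""
    n = len(data)
    expected = n / bins
    counts = [0] * bins
    for b in data:
        counts[b] += 1
    chi2 = sum((c - expected) ** 2 / expected for c in counts)
    df = bins - 1
    # Wilson-Hilferty: (chi2/df)**(1/3) is approximately normal with
    # mean 1 - 2/(9*df) and variance 2/(9*df).
    z = ((chi2 / df) ** (1 / 3) - (1 - 2 / (9 * df))) / math.sqrt(2 / (9 * df))
    exceed = 0.5 * math.erfc(z / math.sqrt(2))  # P(chi2 > observed)
    return chi2, exceed * 100

data = bytes(random.getrandbits(8) for _ in range(50000))
chi2, pct = chi_square_percent(data)
# A healthy generator should land well away from the 0% and 100% ends.
print(f"Chi square is {chi2:.2f}; randomly exceeded {pct:.2f}% of the time")
```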


Interestingly enough, the results from mIRC are very impressive:

Code:
Entropy = 7.996480 bits per byte.

Optimum compression would reduce the size
of this 50001 byte file by 0 percent.

Chi square distribution for 50001 samples is 245.14, and randomly
would exceed this value 66.01 percent of the times.

Arithmetic mean value of data bytes is 127.3520 (127.5 = random).
Monte Carlo value for Pi is 3.163806552 (error 0.71 percent).
Serial correlation coefficient is -0.003063 (totally uncorrelated = 0.0).


You'll notice that Unix's rand() function returns a percentage of 99.99 or above, meaning it is, per the site, almost certainly not random. Our result is 66%, which is far more impressive. In fact:

Quote:
Contrast both of these software generators with the chi-square result of a genuine random sequence created by timing radioactive decay events.

Chi square distribution for 500000 samples is 249.51, and randomly would exceed this value 40.98 percent of the times.


So ~60% is pretty damn random.

For reference, the code I used was:

Code:
ent { 
  ; Usage: /ent [count]
  ; Fills a binary variable with <count> bytes from $rand(0,255)
  ; (default 50000) and writes them to rand.ent for ent.exe to analyse.
  var %i = 1, %n = $iif($1,$1,50000), %max = 255
  while (%i <= %n) {
    bset &b %i $rand(0,%max)
    inc %i
  }
  bwrite rand.ent 1 -1 &b
}


The rand.ent file was then fed into ent.exe to get the above output.
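If you want to run the same experiment against Python's generator for comparison, an equivalent byte-file writer is easy (my own sketch; the rand.ent filename is kept from the alias above, and the output can be fed to ent the same way):

```python
import random

def write_rand_file(path="rand.ent", n=50000):
    # Write n random bytes (0-255), one per $rand(0,%max) call in the
    # mIRC alias above, ready to feed to ent.
    with open(path, "wb") as f:
        f.write(bytes(random.randint(0, 255) for _ in range(n)))

write_rand_file()
```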


- argv[0] on EFnet #mIRC
- "Life is a pointer to an integer without a cast"