At what point does it become laggy and/or inefficient to use a hash table? For example, using $hfind on a table with 40,000 entries and 4,000 slots (~10%, as suggested by the mIRC help file), or on even bigger tables: would single tables of this size be unwise to use? Would they cause noticeable speed/lag problems in the script?
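For context, this is roughly the kind of usage I mean; the table/item names are just examples, and filling the table in a tight loop like this will itself freeze mIRC briefly:

alias hfind.test {
  ; fill a 40,000-item table using 4,000 slots (~10%, per the help file suggestion)
  if ($hget(bigtable)) hfree bigtable
  hmake bigtable 4000
  var %i = 1
  while (%i <= 40000) {
    hadd bigtable item $+ %i data $+ %i
    inc %i
  }
  ; time a wildcard search across all item names
  var %t = $ticks
  echo -a matches: $hfind(bigtable, item39*, 0, w) -- took $calc($ticks - %t) ms
}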

What is the relevance of the slots value when creating a hash table? What exactly is it used for, and why does it affect how efficiently/quickly you can access, search, etc. a hash table?
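For reference, the slots value I'm asking about is the second parameter to /hmake (the table name below is just an example); items seem to be added and read back the same way whatever slots value is picked:

/hmake bigtable 4000
/hadd bigtable someitem some data here
//echo -a $hget(bigtable, someitem)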

What sort of memory requirements do hash tables have? Obviously the data is stored in some manner within mIRC for quick and easy access, but what is the ratio between the size of the table and the memory used to store it, and how does table size affect the processing time consumed when accessing/searching it?

Using a while loop to go through the entries in a hash table is obviously an inefficient use of them, and with such large tables it is very likely to cause script 'pausing' issues. What method would be better when it is necessary (or just desired) to, for example, list the information in the table inside mIRC (a combination of /hsave and then /fopen maybe?). A rough sketch of what I mean is below.
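Here are the two approaches as I picture them (names are just examples; I'm assuming the default /hsave format, which writes each item name and its data on alternating lines, and I've used /loadbuf rather than /fopen just to keep the sketch short):

alias listtable.slow {
  ; walks the table item by item; with 40,000 entries this loop is what causes the pausing
  var %i = 1, %total = $hget(bigtable, 0).item
  while (%i <= %total) {
    echo -a $hget(bigtable, %i).item => $hget(bigtable, %i).data
    inc %i
  }
}
alias listtable.fast {
  ; dump the whole table in one command, then display the file in a custom window
  hsave bigtable bigtable.tmp
  window @bigtable
  loadbuf @bigtable bigtable.tmp
  .remove bigtable.tmp
}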


"Allen is having a small problem and needs help adjusting his attitude" - Flutterby