At what point does it become laggy and/or inefficient to use a hash table? For example, using $hfind on a table with 40,000 entries and 4,000 slots (~10%, as suggested by the mIRC help file), or even bigger tables. Would using single tables of this size be unwise? Would it cause noticeable speed/lag problems in the script?

Since hash tables reside in memory (and assuming you have that memory to spare), I don't see that a hash table of any size will be laggy compared to any other method of accessing data. You used the example of $hfind: if you're attributing the lag to a series of $hfind calls used to produce a dataset from your whole hash table, then I guess other methods, such as a file plus /filter or an external SQL call, may become faster. But for a single $hfind, a hash table would beat anything else available.
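To illustrate the distinction being drawn here (one keyed lookup versus scanning the whole table for wildcard matches), here's a generic Python sketch; the dict stands in for a hash table and fnmatch for a wildcard match, not mIRC's actual code:

```python
# A keyed lookup hits one bucket directly; a wildcard search must
# visit every entry no matter how the table is stored internally.
import fnmatch

table = {"nick%d" % i: "host%d.example" % i for i in range(40000)}

exact = table["nick123"]                      # one hash lookup, O(1)
dataset = [k for k in table                   # full scan, O(n),
           if fnmatch.fnmatch(k, "nick12?")]  # like a wildcard $hfind
```

The exact lookup cost is independent of the 40,000 entries; the wildcard scan is proportional to them, which is why producing a whole dataset is where other methods can start to compete.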

What is the relevance of the slots value when creating a hash table? What exactly is it used for, and why does it affect how efficiently/quickly you can access/search a hash table?

I have no exact knowledge of the way it's used. I do, however, know of other tabling systems, also called hash tables, where the performance of the name/data lookup is directly related to the initial size of the table; these also used 10% as a default starting point. Those tables used a form of tree structure where similarly named items were placed in a tree below each other. It's somewhat hard to explain, and of course might not be the method mIRC uses at all. Since I'm just theorising about what mIRC uses, my explaining it wouldn't really settle anything, but ask if you want and I'll try to explain it briefly.
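In the classic chained hash table design, the slots value is simply the number of buckets, and lookup cost grows with the average entries-per-slot (the load factor). A toy Python sketch of that general idea; this is an illustration of the common technique, not mIRC's actual (undocumented) implementation:

```python
# Toy chained hash table: each item hashes into one of N "slots"
# (buckets); a lookup then walks that slot's chain. More slots means
# shorter chains and fewer comparisons per lookup.
def bucketize(items, slots):
    buckets = [[] for _ in range(slots)]
    for item in items:
        buckets[hash(item) % slots].append(item)
    return buckets

def avg_chain(items, slots):
    """Average number of entries a lookup walks in a non-empty slot."""
    buckets = [b for b in bucketize(items, slots) if b]
    return sum(len(b) for b in buckets) / len(buckets)

items = ["item" + str(i) for i in range(40000)]
print(avg_chain(items, 4000))  # roughly 10 comparisons per lookup
print(avg_chain(items, 100))   # roughly 400 -- about 40x the work
```

If mIRC uses something like this, the 10% guideline would mean each slot holds around ten entries on average, which keeps keyed lookups cheap without reserving a slot per item.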

What sort of memory requirements are related to hash tables? Obviously the data is stored in some manner within mIRC for quick and easy access, but what sort of ratio is there between the size of the table and the memory used to store it, as well as the processing time consumed when accessing/searching it?

Hard to answer exactly. I haven't really looked at what mIRC consumes, since I have other things running in it all the time; you would have to test on a plain mIRC, not connected and doing nothing else.

Using a while loop to go through the entries in a hash table is obviously an inefficient use of them, and with such large tables is very likely to cause script 'pausing' issues. What sort of method would be better in a circumstance where it is necessary (or just desired) to, for example, list the information in the table inside mIRC (a combination of /hsave then /fopen, maybe?)

An /hsave and a /filter pops into mind! Also the previously mentioned loop using iswm. However, to avoid pausing mIRC, I suggest something like this:

; this is one of the few times i think using gotos is ok
alias aliasname {
  if ($1 == reentry) { goto $2 }
  ; initialising code: schedule the first re-entry, then stop
  ; (one-shot timer: 1 repetition, 1 second)
  .timer 1 1 aliasname reentry label1 1
  return
  :label1
  if ($hget(HashTable,$3).item) {
    var %item = $v1
    ; schedule the next loop iteration before doing the work
    .timer 1 1 aliasname reentry label1 $calc($3 + 1)
    if (*wildcard* iswm %item) {
      ; Do stuff here with %item etc
    }
  }
}

Simply put, you run the alias; its parameters didn't start with "reentry", so it flows past the goto and runs the "initialising code". This simply sets a timer to run the alias again, with a re-entry point saved in $2 and anything else you might want to use in $3- (namely the loop counter).
The code is then called again on the timer. $1 is "reentry", so it gotos to $2, being "label1", which then checks whether there's a hash table item numbered $3 ("1") and, if so, enters the IF.
The first thing it does in the IF is set off a timer for the next loop around, changing "1" to "2", or n to n+1.
It then performs the iswm, does something with a matching item, and exits.
The next timer goes off, and loops 2, 3, 4, 5, 6, etc. occur.

mIRC, however, does not pause or freeze up the way doing 40,000 repetitions in one go would. Channels get text, DLs and ULs keep going, etc.

This method does have a downside: it's possible for the table to be altered by other scripts between iterations, causing a missed item. However, this matters more for an on-the-fly dataset, such as matching addresses or something.
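For what it's worth, the re-entry idea is language-independent: process one item per scheduled call instead of in one long loop, so queued events get serviced in between. A minimal Python analogue of the pattern, with a deque standing in for mIRC's timer queue (names hypothetical):

```python
# Minimal event-loop sketch of the re-entry pattern: handle one table
# entry per "tick", scheduling the next tick instead of looping, so
# other queued events can run in between (mIRC stays responsive).
import fnmatch
from collections import deque

events = deque()          # stands in for mIRC's timer queue
table = ["alice!a@host", "bob!b@host", "carol!c@host"]
hits = []

def step(i):
    if i >= len(table):
        return                            # past the last item: done
    if fnmatch.fnmatch(table[i], "*@host"):
        hits.append(table[i])             # "do stuff" with a match
    events.append(lambda: step(i + 1))    # re-entry: schedule next item

events.append(lambda: step(0))
while events:                             # other events could interleave here
    events.popleft()()
```

The trade-off is the same as in the mIRC version: the table can change between steps, so the scan sees a moving target rather than a snapshot.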