From what I can see you don't seem overly concerned about having a 50 meg hash table in memory, so based on this...

I would break it down into two data structures: the complete table (hash1), and a table of the users seen in the last X days (hash2).
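
Something along these lines would set the two tables up at startup (the table sizes and the hash1.dat / hash2.dat filenames are just placeholders I've made up):

hmake hash1 10000
hmake hash2 1000
if ($isfile(hash1.dat)) { hload hash1 hash1.dat }
if ($isfile(hash2.dat)) { hload hash2 hash2.dat }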

When checking, scan the recent table first (hash2), then the complete table (hash1) [unless accessing the complete table is slow].
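
A lookup wrapper in the same style as the _hadd alias further down might look something like this (the _hget name is made up, and I'm assuming the item name, e.g. the nick, is passed as $1):

alias -l _hget {
  ; check the small recently-seen table first
  if ($hget(hash2,$1) != $null) { return $hget(hash2,$1) }
  ; fall back to the complete table
  return $hget(hash1,$1)
}

So a lookup that's currently $hget(originalhashtable,$nick) would become $_hget($nick).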

When updating someone's information, update hash1 [unless just adding or altering it is slow as well] without saving it to file, and update hash2 with their new info, saving it to file.

Each time you start the program up, it runs through a reconciliation process where it applies the entries of hash2 to hash1 and then saves hash1. This might take longer than 6 seconds, but it only happens once. You may also wish to trigger a save of hash1 periodically, or after hash2 hasn't been changed for a set amount of time.
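
For the periodic save, a repeating timer is probably the simplest way (the half-hour interval and the hash1.dat filename are just my own guesses):

.timerhash1save 0 1800 hsave -o hash1 hash1.dat

If you'd rather save only after hash2 has gone quiet for a while, you could instead restart a one-shot timer (reps of 1 instead of 0) from inside the _hadd wrapper each time it runs, so the save only fires once the updates stop coming in.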

You can also make this change with little to no altering of your current code, by creating a set of wrapper aliases to handle both tables. For example, where your current code makes calls like...
/hadd originalhashtable $nick blah blah blah blah blah blah

you would change those calls to /_hadd originalhashtable (same parameters) and add the wrapper alias:

alias -l _hadd {
  hadd $reptok($1-,originalhashtable,hash1,1,32) | ; remove this line if just updating hash1 is too slow
  hadd $reptok($1-,originalhashtable,hash2,1,32)
}

Whether you need matching wrappers for the other hash table commands depends more on whether or not you're altering hash1 as well as hash2.
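
For example, a hypothetical _hdel wrapper in the same style (assuming the item name, e.g. the nick, is passed as the second parameter, the same way a /hdel originalhashtable call would pass it; the $hget checks only delete the item if it's actually in that table):

alias -l _hdel {
  ; remove it from the complete table (skip this line if touching hash1 is too slow)
  if ($hget(hash1,$2) != $null) { hdel hash1 $2 }
  ; remove it from the recently-seen table
  if ($hget(hash2,$2) != $null) { hdel hash2 $2 }
}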

Also, the reconciliation process could be sped up by simply loading the hash1 table file, then loading the hash2 table file into the same hash1 table (overwriting the stale entries), and then saving hash1 back to file.
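
As a rough sketch of that startup routine (the filenames and table sizes are again my own placeholders; it relies on /hload adding a file's items into an existing table rather than clearing it first, so the hash2 entries simply overwrite the stale ones in hash1):

on *:START: {
  hmake hash1 10000
  hmake hash2 1000
  if ($isfile(hash1.dat)) { hload hash1 hash1.dat }
  ; load the recent changes straight over the top of the complete table
  if ($isfile(hash2.dat)) { hload hash1 hash2.dat }
  hsave -o hash1 hash1.dat
  ; reload the recently-seen table too (or leave it empty here if you prefer to start it fresh)
  if ($isfile(hash2.dat)) { hload hash2 hash2.dat }
}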


[edit]
Was I right in reading that you felt just accessing the large hash table was slow, or was the only real problem saving it?

Last edited by DaveC; 17/03/05 10:23 PM.