Protopia · Pan-dimensional mouse · Joined: Aug 2003 · Posts: 318
Originally Posted By: Wims
What about adding a new switch to go with -m, to specify the name of the hashing algorithm to be used? Or maybe just a switch to specify the fnv1a algorithm, without supporting any others if they are not so great.

I am sure that Khaled could do this if it genuinely made sense, however:

1. Unless users have a way of knowing their data in advance, and also a way of determining which hash algorithm gives the best spread for that data, how would they know which flag to use? You would then have to provide an identifier for each hashing algorithm specifically so that users could pre-analyse their data to determine the best one, and then Khaled would no longer have any freedom to innovate to improve hash table performance.

2. The performance gains are, it seems, fairly marginal - at least when compared to using a binary-search algorithm instead of iterating down the entire linked-list of a slot. So if we are going to look for performance (and functional) improvements, my vote would be to spend the energy on that rather than on flags that let a user choose an alternative hashing algorithm.

P.S. If we are looking for ways to improve hash table performance (other than binary search), then here are a couple of ideas off the top of my head:

a. If not already in the code, other performance gains could be made by caching the results of the last $hget(table,n) or $hfind(table,needle,n) (and trashing the cache if /hadd is used), so that when you make a subsequent call with the same search but a later position n+m, you start from where you left off rather than from the beginning of the table. (Though this would have the oddity that an incrementing loop would be far faster than a decrementing one.)

b. Another possibility for massive performance improvements, which literally just occurred to me, would be to get mIRC to use the parallel processing of the 3D graphics chips in most modern PCs to do non-hashed lookups. (Though that is probably way too much programming effort to be worthwhile.)

westor (OP) · Hoopy frood · Joined: Dec 2008 · Posts: 1,515
Originally Posted By: Wims
What about adding a new switch to go with -m, to specify the name of the hashing algorithm to be used? Or maybe just a switch to specify the fnv1a algorithm, without supporting any others if they are not so great.


It would be good and interesting to be given the opportunity to specify the algorithm you want to use for hashing, but I "think" it would need a decade to be added because it will be very hard.

Example:

On /hadd and /hmake:

-aN = if N is not specified, use the default algorithm; if N = 1 use algorithm X, if N = 2 use algorithm Y, etc.
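For example (completely hypothetical switch letter and values, just to show the idea):

Code:
; hypothetical: -a2 = "use hashing algorithm 2" (e.g. fnv1a) - not a real switch
hmake -sa2 mytable 101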


Wims · Hoopy frood · Joined: Jul 2006 · Posts: 4,141
Quote:
I am sure that Khaled could do this if it genuinely made sense
Well, he said that the current algorithm was not great in some cases; that's where the other algorithm might perform better. Hash tables are almost never used with unknown data: you typically associate hardcoded, known strings with unknown data, but you also typically know the kind of data you're using, and that is what you would (typically) use to decide which 'flag' to use - no need to pre-analyse.

Your suggestion may not be bad, but this thread was mainly about the way items are distributed, which is related to the algorithm, so my suggestion seemed fair.


Protopia · Pan-dimensional mouse · Joined: Aug 2003 · Posts: 318
Originally Posted By: Wims
Your suggestion may not be bad, but this thread was mainly about the way items are distributed, which is related to the algorithm, so my suggestion seemed fair.
Whilst this thread was initially about why hashing means you lose the sequence in which you stored items, it very quickly became a discussion about performance.

I was not suggesting that you shouldn't propose a flag to allow the scripter to choose one of several hashing algorithms - just that if you are going to do this, then:

a. You need to give the scripter the tools to analyse their data in order to determine the best algorithm to use for it; and

b. You lose flexibility: at the moment Khaled can change the hashing algorithm to improve performance without breaking any scripts, but if you give users the choice and the tools to analyse, you create another set of constraints that have to be met in the future to maintain backwards compatibility; and

c. You may be optimising the wrong thing: a different hashing algorithm might make small differences to the performance of hashed lookups, but as I attempt to show below, a lot of lookups are unhashed, and both hashed and unhashed lookups have to iterate through linked lists, which is pretty slow and therefore has scope for significant performance improvements. In many cases scripts also run the same $hget(table,n) or $hfind lookup multiple times, which again has scope for significant performance improvements. IMO such improvements could dwarf those gained by tweaking the hashing algorithm.

Originally Posted By: Khaled
The older hashing algorithm distributes data unevenly across buckets in the hash table, so some buckets have far fewer/more items in them than others; however, it distributes individual items better. At least, with certain combinations of data. With some combinations of data, it leaves many buckets empty.

The fnv1a algorithm distributes items more evenly across all buckets, however, due to its lack of avalanching, more similar items are being stored in the same buckets. This means that when a hash matches a bucket and it has to be searched, even though there are fewer items in it (because they're more evenly distributed across buckets), the string comparison call takes longer as it is matching more characters per item because they are more similar, even though there are fewer string comparisons.

Unfortunately, every other hashing algorithm I have tried, eg. murmur3, xxhfast32, lookup3, takes more time to create the hash, at least for short strings. So it looks like we may just have to stick with the older algorithm.

Analysing my own scripts, I find that I use hashed lookups when I am storing data by nickname or channel - so the item names are relatively short strings - but much more often I am using tables to store requests in a FIFO queue (using $hget(table,n)) or to perform wildcard matches (using $hfind), both of which are non-hashed lookups.

So I think Khaled is right to look at the cost of calculating the hash for smaller strings (like nicks or channels) rather than longer ones, BUT the cost of the hash algorithm needs to be weighed against the benefit of having more evenly-sized lists to iterate over, rather than looking at the cost of the algorithm alone.

There are two different scenarios when the hashing algorithm is uneven:

a. It still spreads items over all slots, but the lengths are uneven;
b. As a. but worse so that some slots are left completely empty.

In a., the average length of the slots is the same as if they were evenly spread, but... because the number of hits on a particular slot is proportional to the length of its list, and the time taken to iterate over the list is also proportional to its length, the performance hit is likely to grow with the square of the variance in list length.

In b., the effect is exaggerated: empty slots are never searched at all, so the lists in the occupied slots are correspondingly longer.

So, the greater the variance in distribution, the bigger the performance hit and the more you can afford to increase the cost of the hashing algorithm.
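To put rough numbers on this: the expected iterations for a successful lookup are the sum of the squared list lengths divided by twice the item count, so (a toy example, easily checked in mSL):

Code:
; 2 slots, 200 items: an even 100/100 split vs an uneven 150/50 split
; even: 50 iterations on average; uneven: 62.5 - a 25% penalty from variance alone
//echo -a even: $calc((100 ^ 2 + 100 ^ 2) / (2 * 200)) uneven: $calc((150 ^ 2 + 50 ^ 2) / (2 * 200))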

I haven't looked at the specifics of the current hashing algorithm compared to the alternatives, but MurmurHash3 is the one people seem to recommend as being both even and fast.

HOWEVER... as I suggested above, I am not at all sure that we are focusing efforts on the areas where performance improvements will have a real-life benefit...

HASH TABLE SIZE
It seems to me that we should be less concerned (though definitely not unconcerned) about significant performance changes for small hash tables, which have quick lookups anyway - if, say, a 2ms lookup increases to 5ms or decreases to 1ms, is a 1ms-3ms change going to be noticeable to anyone in a real-life application (as opposed to a somewhat artificial benchmark)?

Instead, we should try to optimise lookups on large tables, which can take a significant time - a 50% reduction in a lookup on a huge hash table which takes 0.5s to run might well be noticeable to the user.

ITERATION - EXACT MATCH
As I pointed out before, iterating through a large linked list has a big cost. For hashed (i.e. exact match $hget) lookups, at least you are iterating only through one slot containing only a small portion of the data - yes, you want to optimise this by having an evenly distributed hash function, but the time needed for a hash lookup in a table with n items and s slots is O(n/s).

But for a sequential $hget(table,n) lookup, and for all $hfind lookups, you are iterating over the entire table, O(n), with a small extra overhead for iterating across slots as well as within each slot's linked list. So, on the basis of focusing on the functionality with the worst performance, it seems to me that unhashed lookups have the greatest scope for performance gains.

To put this in context:

* an iterative search is O(n), so an iterative $hfind search of a 10,000 entry table is going to take an average of 5,000 iterations when the item is matched and 10,000 iterations when it is not; but
* a search through a self-balancing binary tree is O($log2(n)), so a similar search would take c. 13 iterations to either find or not find a match (and going from 10,000 iterations to 13 is a 99.8% reduction).
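(To check the arithmetic on that 13: mSL has no base-2 logarithm identifier, but $log can stand in:)

Code:
//echo -a a balanced tree over 10,000 items needs ~ $round($calc($log(10000) / $log(2)),1) comparisons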

You might even find that an exact-match lookup on a binary tree is as fast in practice as a hashed lookup, because the time goes into searching rather than into calculating the hash.

ITERATING - NON-EXACT MATCH
For wildcard / regex matches you can't use a binary search directly, and you still have to iterate. But if the first character is not a wildcard character (* or ?), you can take the string up to the first wildcard and use the binary-tree search to find the first and last possible nodes you need to iterate over - so instead of iterating over the whole tree (10,000 iterations), you use 13 iterations to find the starting node and 13 more to find the ending node, perhaps leaving 100 nodes to iterate over, for a total of 126 iterations - still a saving of almost 99%.
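A sketch of the prefix extraction in mSL terms (a hypothetical $wildprefix helper for illustration; the real work would of course happen inside mIRC's own code):

Code:
; return the literal prefix of a wildcard pattern, e.g. item12*name -> item12
; an empty result means the pattern starts with a wildcard, so a full scan is needed
alias wildprefix {
  if ($left($1,1) isin *?) return
  return $gettok($replace($1,?,*),1,42)
}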

MIXED OPERATIONS
As stated in a previous post, you can even maintain multiple ways to access the data at the same time:

a. -m1 - turn hashing off (as now)
b. -i - turn on binary tree for item name (as well as hash)
c. -d - turn on binary tree for data (as well as hash)
d. -o - use a binary tree per hash slot instead of a linked list - for -m1 this gives you a table ordered by item name rather than by the sequence in which items were added - and it could even be the default for >1 slots, since order is not defined in that situation

Yes - there is more overhead in maintaining the multiple structures when you add items; but since adding an item requires a search to see whether the item already exists (currently an iteration), overall the performance of adding items could still end up better.
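For illustration, creating such tables might look like this (every switch here is hypothetical, per the list above):

Code:
; hypothetical: hash lookups plus a binary tree on item names
hmake -si mytable 101
; hypothetical: one slot whose per-slot binary tree keeps items ordered by name
hmake -so namesorted 1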

CACHED ITERATION
When you use $hget(table,n) or $hfind, you will often do so in a loop. In these situations where you are iterating (i.e. to find the n'th wildcard match or the n'th sequential item), you can cache the last search parameters and position, and provided the next call asks for the n'th or a later item, you can continue searching from the point previously reached rather than counting from the beginning again. This would also, for example, give you quick access to the data for the item you just found.
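For example, every call in this standard pattern currently rescans from the start of the table, making the loop O(n²) overall; with a cached cursor, each call would resume where the last one stopped (table and pattern names are illustrative):

Code:
var %n 1 | while ($hfind(mytable,*needle*,%n,w)) { echo -a $v1 => $hget(mytable,$v1) | inc %n }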

Last edited by Protopia; 19/10/18 11:43 PM.
maroon · Hoopy frood · Joined: Jan 2004 · Posts: 2,127
Originally Posted By: Khaled
Originally Posted By: maroon
I don't see how this can be blamed on fnv1a

Oh the irony :-) It turns out that it is in fact due to fnv1a.



I was trying to say that the slowdown can't be blamed on the speed of $hash() versus the speed of $fnv1a(), because - as you said - the time spent hashing is a small portion of the work done by /hadd or $hget(table,item). FNV1a should be faster than $hash(), because there's not much room for $hash to be quicker than FNV1a's 1 xor + 1 mul per byte, as long as the MUL isn't being calculated using floating-point math.

I had seen the lack of avalanching in FNV1a, where hashing 'a' through 'z' produces hashes whose top 20 bits rarely change. In spite of this, the hash MOD 101 looks like it does a good job of assigning them to a variety of buckets.

Quote:

string a fnv1a: E40C292C bucket 11
string b fnv1a: E70C2DE5 bucket 23
string c fnv1a: E60C2C52 bucket 19
string d fnv1a: E10C2473 bucket 100
string e fnv1a: E00C22E0 bucket 96
string f fnv1a: E30C2799 bucket 7
string g fnv1a: E20C2606 bucket 3
string h fnv1a: ED0C3757 bucket 47
string i fnv1a: EC0C35C4 bucket 43
string j fnv1a: EF0C3A7D bucket 55
string k fnv1a: EE0C38EA bucket 51


However, FNV1a is supposed to have a low collision rate (different strings producing the same hash), a property that $hash() doesn't have. It's easy to find many colliding strings for $hash():

Code:
//echo -a $base($hash(abcdef,32),10,16) same as $base($hash(abddee,32),10,16)


Can you confirm how mIRC assigned items to buckets prior to the change to FNV1a? Looking at the order in which $hget(table,N).item sorts the items, I would expect $calc(1+($hash(itemname,32) % 101)) to change sequentially as you work your way through that list from the first to the last N.

I do see the newest beta sorting items in $hget(table,N).item order according to this MOD calculation if it uses the FNV1a-32 hash, but it doesn't work for me in older versions when I substitute $hash(item,32) in place of FNV1a.

With items in 1897 assigned to buckets 1-101 using (1+(FNV1a(itemname) % 101)), I wouldn't think the lack of avalanche causes the hashes of similar strings to have a bias toward being the same value MOD 101.

However I still see evidence that FNV1a isn't responsible for the delay shown by hadd_test:

1. If the earlier /hadd_test alias is changed so that the identical portion of the string is lengthened and moved to the tail end of the strings, any FNV1a-vs-$hash() speed difference is still a small portion of the overhead time, and the long identical tail gives plenty of time for the avalanche to happen, making it even more unlikely that similar strings get placed into the same bucket and cause the slowdown. I'm still seeing the same 30% slowdown when the alias changes the item names from $+(item,%i) to $+(%i,evenlongeritemname), where the 18 extra characters surely give the avalanche a chance to spread the slight differences in input strings into diverse hashes that are even less likely to land on similar remainders MOD 101. In my alias, ALL the item names were short and similar, so there should be minimal time difference between telling apart item12345/item12445 and telling apart item12345/item12355. Surely any problem with avalanching should have been even worse with $hash:

Code:
//var %bits 32 , %a $hash(mIRC,%bits) , %b $hash(mIRD,%bits) | echo -a $base(%a,10,2,32) bucket $calc(%a % 101) | echo -a $base(%b,10,2,32) bucket $calc(%b % 101)



This example changed only 1 bit of the output, and it's not difficult to find many such examples. But since $hash(anything,32) always returns the lowest 8 bits as zeroes, dividing by 101 will usually shuffle such pairs to buckets far apart. Even if the hashing algorithm were $hash(string,24), where the changed bit is the last bit, the sequential output would have assigned them (MOD 101) to different buckets, and I would expect FNV1a to do even better.

2. As my prior post showed, I was seeing much more of the slowdown between 752.notbeta-vs-1438 than between betas 1438-vs-1897. Since 1897 shuffles $hget(table,N).item into a different sequence than the non-beta does, while 1438 uses the same sort order as the non-beta, the slowdown in 1438 can't be blamed on FNV1a.

3. If the slowness were caused by FNV1a shuffling more similar strings into the same bucket, that could explain an increase in the time required by $hget(table,item) to locate an item within a bucket, but it wouldn't explain the extra 30% time to /hadd items into the bucket - unless there's also some kind of bucket indexing that requires extra time when strings within a bucket are too similar.

I've not yet replicated the poor bucket shuffling you're seeing. For example, when I take the FNV1a hashes of 'a' through 'z' and MOD them by 101, I find the 26 hashes assigned to 26 different buckets.

-

FYI Background

Below is an alias that matches the test vectors at the FNV site. It would be a lot faster using the assembler code from that site, which doesn't need extra calculations to evade the 2^52 limit of floating-point math. The quote near the top of this post was created by:

Code:
//var %i $asc(a) | while (%i isnum 1-122) { var %string $chr(%i) | echo -a string %string fnv1a: $fnv1a-32(%string,h) bucket $calc(1+($fnv1a-32(%string) % 101)) | inc %i }



Code:
alias fnv1a-32 {
  noop $regsubex(temp,$1,,,&temp) | var %len $bvar(&temp,0) , %i 0 , %hash 2166136261
  while (%i < %len) {
    !inc %i
    !var %hash $xor(%hash,$bvar(&temp,%i))
    !var %hash $calc(( (%hash % 256) * 16777216 + %hash * 403) % 4294967296 )
  }
  if (h isin $2) return $base(%hash,10,16,8) | return %hash
} ; by maroon 2018
; If not for the 2^52 limit, the MUL could have been $calc((%hash * 16777619) % 4294967296)
; because the bits of the product above 2^32 aren't needed. $fnv1a-32(string,[h]) h=hash as hex


Below is how I see (FNV1a MOD 101) being used in beta 1897 to assign items to buckets; I can't reproduce the same thing in older versions using $hash(item,32). Regardless of whether the item names are random or similar, $hget(table,N).item in 1897 lists them in bucket order, whatever their order of creation. Similar input strings seem no more likely to be assigned to the same bucket than randomly different ones.

Code:
//hfree -sw test | hmake -s test 101 | var %i 20 , %N 1 | while (%i) { hadd test $base($rand(1,4503599627370495),10,36) | dec %i } | while ($hget(test,%N).item) { echo 4 -a $ord(%N) item is $v1 bucket: $calc(1+($fnv1a-32($v1) % 101)) | inc %N }


Khaled · Hoopy frood · Joined: Dec 2002 · Posts: 5,408
Thanks for your comments; unfortunately, I have already spent a lot of time testing this. As you know, the performance of a hash table depends on a combination of factors: hash table size, number of items added, hash algorithm, whether the table size needs to be prime/odd/even, item lengths, item content, and so on. You cannot test these in isolation. In your tests, FNV1a performed better/worse at certain table sizes. That is simply how it worked out for the type and amount of data you used. Remember, no other hash table code was changed, just the hash algorithm. I may in future add a switch to /hmake that allows you to set the hash algorithm. However, for now, the beta has reverted to the older hash algorithm.

maroon · Hoopy frood · Joined: Jan 2004 · Posts: 2,127
In the /hadd_test alias above, the 2230 beta is back to the performance of the 1438 beta, which is better than 1897 was, but still noticeably slower than the non-beta. Again, I don't believe the differences in performance were caused by the choice of hash function, because they affected the time needed to /hadd the items as well as to $hget() them later, and I'm assuming the choice of hash wouldn't be responsible for the /hadd delay.

* Regarding the change to forcing the bucket count to be a prime: would you consider either allowing N=1 to count as a prime, or adding a new switch that allows the hash table not to be forced away from the requested bucket count? (Other than perhaps requiring it to be odd, as currently.)

It is now not permitting buckets=1 for those wishing to keep items in their original order, because N=1 is now flipped to use buckets=3.

* Could you change the max from 10000 to 10007?

It is now forcing N=9994 through 10000 to use 10007 buckets. This would prevent "hmake table2 $hget(table1).size" from being an error if the .size prop reported 10007 buckets. I'm not familiar enough with hash tables to know whether the largest prime below 2^14 (16381) would be a more appropriate max.
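(A quick way to see the problem - table names invented:)

Code:
; table1's requested 10000 buckets get forced up to 10007, so the second
; hmake is passed 10007, which is over the 10000 maximum and errors
//hmake table1 10000 | hmake table2 $hget(table1).size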

Code:
//hfree -sw test | hmake -s test 101 | var %i 20 , %N 1 | while (%i) { hadd test $base($rand(1,4503599627370495),10,36) | dec %i } | while ($hget(test,%N).item) { echo 4 -a $ord(%N) item is $v1 bucket: $calc(1+($fnv1a-32-mod($v1) % 101)) | inc %N }



As for FNV1a being modified: I found a modification mentioned at Brett Mulvey's website, and when using the FNV1a-32-mod alias below, the calculated bucket numbers increase as you cycle through the sequential $hget(table,N).item's. When I use FNV1a-32 in place of FNV1a-32-mod in this alias, I get the same order the items were sorted in by beta 1897.

However, when I substitute $hash($v1,32) in place of $fnv1a-32-mod($v1), the calculated bucket numbers don't increase, indicating that unmodified $hash() isn't the method used to assign items to buckets.

* Was the $hash() identifier intended to be the same method used to assign items to buckets, or was the table hash intentionally modified? I mention this in case it might be a source of uneven assignment to buckets in the past.

* If FNV ends up making the cut, could we get some kind of $fnv() identifier to wean people away from using $hash()? It could either be a new identifier, or a new parameter where 0 is the default (the old hash) while 1 and 2 are fnv1a with/without the modification. True, it can be simulated, but a script alias is significantly slower.

Code:
alias fnv1a-32-mod {
  noop $regsubex(temp,$1,,,&temp) | var %len $bvar(&temp,0) , %i 0 , %hash 2166136261
  while (%i < %len) {
    !inc %i
    !var %hash $xor(%hash,$bvar(&temp,%i))
    !var %hash $calc(( (%hash % 256) * 16777216 + %hash * 403) % 4294967296 )
  }
  var %hash $calc((%hash * 8193) % 4294967296)
  var %hash $calc(($xor(%hash,$calc(%hash /128))    *    9) % 4294967296)
  var %hash $calc(($xor(%hash,$calc(%hash /131072)) *   33) % 4294967296)
  if (h isin $2) return $base(%hash,10,16,8) | return %hash
} ; by maroon 2018
; If not for the 2^52 limit, the MUL could have been $calc((%hash * 16777619) % 4294967296)
; because the bits of the product above 2^32 aren't needed. $fnv1a-32-mod(string,[h]) h=hash as hex.
; This is identical to original FNV1a except for adding the following operations after the string is hashed:
;  hash += hash << 13; (same as hash * 8193)
;  hash ^= hash >> 7;
;  hash += hash << 3; (same as hash * 9)
;  hash ^= hash >> 17;
;  hash += hash << 5; (same as hash * 33)

Protopia · Pan-dimensional mouse · Joined: Aug 2003 · Posts: 318
As I have said previously, there is nothing wrong with discussing how to make the hashing algorithm more efficient (either in terms of spread across buckets or speed), but this will have a relatively minor impact on hash table speed.

If we are going to expend effort on speeding up hash tables, then surely it makes sense to focus that effort on whatever area will speed up hash tables the most.

Linked-list iteration is probably THE SINGLE MOST EXECUTED ALGORITHM in mIRC's hash tables, and it is the one which contributes by far the most to slow performance on large tables, because it is O(n).

It is used in /hadd, /hdel and $hget(table,item) to iterate over a single bucket to find an item (or check whether it exists), and in $hget(table,n).item and $hfind to iterate over the entire table to count or find items.

So, if we are looking for significant real-world performance improvements for large hash tables (as opposed to small ones measured in artificial benchmarks), shouldn't we be looking at how to avoid iterating over linked lists?
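To get a feel for that O(n) cost, here is a rough timing sketch (table and item names invented for illustration):

Code:
alias hfind_bench {
  hfree -w hbench | hmake hbench 101
  var %i 50000 | while (%i) { hadd hbench item $+ %i data | dec %i }
  var %t $ticks | noop $hfind(hbench,item4999?,1,w)
  echo -a one wildcard lookup took $calc($ticks - %t) ms across $hget(hbench,0).item items
  hfree hbench
}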

Hoopy frood · Joined: Feb 2003 · Posts: 2,812
Your code reminded me: do you think $calc() would benefit from having bit-shifts and the other bitwise operators built in?
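At the moment they have to be simulated with $calc() arithmetic and the existing $xor()/$and()/$or() identifiers, e.g.:

Code:
; x << 3 is x * 8; x >> 7 is x / 128 truncated with $int()
//var %x 12345 | echo -a shl3: $calc(%x * 8) shr7: $int($calc(%x / 128)) xor255: $xor(%x,255)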


Khaled · Hoopy frood · Joined: Dec 2002 · Posts: 5,408
Quote:
In the /hadd_test alias above, the 2230 beta is back to the performance of the 1438 beta, which is better than 1897 was, but still noticeably slower than the non-beta.

This is actually due to item 42 in beta.txt. The memory change required to handle it results in a small slowdown in scripted hash tables.

Quote:
Again, I don't believe the differences in performance were caused by the choice of hash function, because they affected the time needed to /hadd the items as well as to $hget() them later, and I'm assuming the choice of hash wouldn't be responsible for the /hadd delay.

I have come to accept that you will not change your mind on this :-)

Quote:
Regarding the change to forcing the bucket count to be a prime: would you consider either allowing N=1 to count as a prime, or adding a new switch that allows the hash table not to be forced away from the requested bucket count?

Internally, the hash table has always been forced away from the requested bucket count ie. it was always made odd. This cannot be changed. $hget().size always returned the original requested size to minimize dissonance :-) Now it returns the actual size. As for N=1, see next answer.

Quote:
It is now not permitting buckets=1 for those wishing to keep items in their original order, because N=1 is now flipped to use buckets=3.

If I do this, this means that all future optimizations I make will need to cater for this behaviour, even though it is not meant to be a feature of hash tables. This is why I keep stating that you should not expect hash tables to preserve order.

In fact, one of the optimizations I was going to make, which repositions items closer to the top of the list when they are retrieved to make future retrieval faster, would undermine order even in a one slot hash table. I had planned to do this for the next beta.

Quote:
As for FNV1a being modified: I found a modification mentioned at Brett Mulvey's website, and when using the FNV1a-32-mod alias below, the calculated bucket numbers increase as you cycle through the sequential $hget(table,N).item's. When I use FNV1a-32 in place of FNV1a-32-mod in this alias, I get the same order the items were sorted in by beta 1897.

Yes, it is using FNV1a-32-mod, which performs better avalanching, the same as your script below.

Quote:
Was the $hash() identifier intended to be the same method used to assign items to buckets

$hash() has nothing to do with hash tables.

Protopia · Pan-dimensional mouse · Joined: Aug 2003 · Posts: 318
Originally Posted By: Khaled
Originally Posted By: maroon
It is now not permitting buckets=1 for those wishing to keep items in their original order, because N=1 is now flipped to use buckets=3.

If I do this, this means that all future optimizations I make will need to cater for this behaviour, even though it is not meant to be a feature of hash tables. This is why I keep stating that you should not expect hash tables to preserve order.

In fact, one of the optimizations I was going to make, which repositions items closer to the top of the list when they are retrieved to make future retrieval faster, would undermine order even in a one slot hash table. I had planned to do this for the next beta.

1. Changing how the linked list is ordered will break backwards compatibility for any scripts which rely on order.

2. Not allowing slots = 1 will break backwards compatibility for any scripts which rely on slots = 1 being ordered, e.g. for queues / stacks.

3. If you move an item in the linked list, then you will also change its position when iterating, so e.g. $hget(table,11).item will suddenly become $hget(table,1).item, and this will break backwards compatibility.

4. In real-world use of hash tables, is there any evidence that moving the most recently accessed item to the beginning of the linked list will improve performance?

As an alternative, how about caching, for each table, the last $hget / $hfind parameters and linked-list location, and intelligently deciding whether they can be used to speed things up?

In the real world you commonly get both item and data, so $hget(table,n).item followed by $hget(table,n).data would be faster because you can go straight to the cached linked-list location. Similarly, it is common to iterate through the table with $hget(table,n).item or $hfind, so $hget(table,n).data followed by $hget(table,n + m).data would be faster because you can start iterating from the point you reached last time. The same would work for $hfind.

You might also want to separately cache the parameters and results of any $hget(table,0).item or $hfind(...,0), so that loops which keep asking for these values would not need to iterate over the whole table on every call.

These cached parameters / locations / results would be invalidated any time you add, change or delete records in the hash table.

These optimisations would work for all loops which iterate forwards, i.e.
Code:
var %i 1 | while (%i <= $hget(table,0).item) { echo $hget(table,%i).item = $hget(table,%i).data | inc %i }
var %i 1 | while ($hget(table,%i).item) { echo $v1 = $hget(table,%i).data | inc %i }


Personally, I tend to code differently, i.e.
Code:
var %i $hget(table,0).item | while (%i) { echo $hget(table,%i).item = $hget(table,%i).data | dec %i }

and these optimisations would not work for any descending loop. But they would be unlikely to slow such loops down noticeably, and for loops where the sequence is unimportant, recoding to take advantage of the performance improvements would be simple.

Indeed, the only use case you couldn't recode is wanting to access items in the exact sequence they were added with slots=1. Because mIRC currently adds to the beginning of the linked list and holds no pointer to the last item (and possibly only a forward pointer in each item), it would be more difficult to optimise decreasing indexes. To do so you would need an /hmake or /hadd flag telling mIRC to keep a pointer to the end of each linked list and a backward pointer in each item, so the list can be iterated in both directions. mIRC could then decide, for $hget(table,n).item followed by $hget(table,n - m).item, whether to iterate backwards from the cached location or forwards from the beginning of the list. (That choice could be based e.g. on the value of n and whether n - m is closer to 0 or to n, but I suspect the scripter might want to influence this choice, so we would need syntax for it.)

All that said, in the real world I suspect that hash tables used e.g. as FIFO queues, where you want to iterate in the sequence added or to use e.g. $hget(table,$hget(table,0).item).data, tend to be relatively small compared with other tables where you want to iterate but don't care about sequence - so personally I wouldn't worry about optimising decreasing loops.

Protopia · Pan-dimensional mouse · Joined: Aug 2003 · Posts: 318
P.S. I am unclear what the benefit is of forcing the number of slots to be a prime number.

If (and it is a big if) the low-order part of the hash value is truly evenly distributed, then the value modulo anything should also be evenly distributed.

Of course, that "if" may well not be true.

However, again I would argue that we should not bother changing the slot count unless we can demonstrate significant real-world performance improvements.

Khaled · Hoopy frood · Joined: Dec 2002 · Posts: 5,408
Quote:
I am unclear what the benefit is of forcing the number of slots to be a prime number.

As you say, the hash values may not be evenly distributed; that is why.

Khaled · Hoopy frood · Joined: Dec 2002 · Posts: 5,408
Quote:
1. Changing how the linked list is ordered will break backwards compatibility for any scripts which rely on order.

Which is why I have stated many times that you should not depend on the order of hash table items.

Quote:
2. Not allowing slots = 1 will break backwards compatibility for any scripts which rely on slots = 1 being ordered, e.g. for queues / stacks.

Again, same as above.

Quote:
3. If you move an item in the linked list, then you will also change its position when iterating, so e.g. $hget(table,11).item will suddenly become $hget(table,1).item, and this will break backwards compatibility.

Yes, as above again.

Quote:
4. In real-world use of hash tables, is there any evidence that moving the most recently accessed item to the beginning of the linked list will improve performance?

It is an optimization that is generally recommended to speed up access for items that are retrieved more often than others.

If I decide to make my hash table implementation return items in a specific order in particular contexts, this means that I will need to support this in all future hash table improvements, or if/when I change to a new/better hash table implementation. That simply is not practical. Especially since I have repeatedly said, many times, you should not depend on the hash table preserving order since that is not a normal feature of hash tables.

Khaled · Hoopy frood · Joined: Dec 2002 · Posts: 5,408
Quote:
Linked-list iteration is probably THE SINGLE MOST EXECUTED ALGORITHM in mIRC's hash tables

There are no plans to make any changes to the core hash table routines, so this will continue to be a linked list hash table. For now, I am simply optimizing the hash table code. That is all.

Protopia · Pan-dimensional mouse · Joined: Aug 2003 · Posts: 318
Originally Posted By: Khaled
Quote:
I am unclear what the benefit is of forcing the number of slots to be a prime number.

As you say, the hash values may not be evenly distributed; that is why.

True, but will a prime modulus be better than any other modulus at distributing the items amongst slots?

I can see why an even modulus might give a bigger bias than an odd one - e.g. if the least significant bit of the hash were always 0 - but I cannot see why a prime modulus >2 would give any better distribution than any odd modulus >2.
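A quick check of the even-modulus case (if every hash were even, mod 4 would use only half the buckets, while mod 5 would use them all):

Code:
//var %i 0 | while (%i < 10) { echo -a hash $calc(%i * 2) -> mod 4: $calc((%i * 2) % 4) mod 5: $calc((%i * 2) % 5) | inc %i }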

Khaled · Hoopy frood · Joined: Dec 2002 · Posts: 5,408
Quote:
True, but will a prime modulus be better than any other modulus at distributing the items amongst slots?

That is the general theory.

Protopia · Pan-dimensional mouse · Joined: Aug 2003 · Posts: 318
Originally Posted By: Khaled
Originally Posted By: Protopia
1. Changing how the linked list is ordered will break backwards compatibility for any scripts which rely on order.

Which is why I have stated many times that you should not depend on the order of hash table items.

In the real world, people have relied on this order even though you have stated in some places (though NOT in the help file) that they should not. So in the real world this change will break scripts.

Originally Posted By: Khaled
Originally Posted By: Protopia
4. In real-world use of hash tables, is there any evidence that moving the most recently accessed item to the beginning of the linked list will improve performance?

It is an optimization that is generally recommended to speed up access for items that are retrieved more often than others.

I can see no reason why you shouldn't move an item to the first position in its slot when slots > 1. The cost of doing this is very small, though I am unsure just how often it will produce noticeable benefits in the real world (i.e. how often a script will do an $hget(table,item) for the same item in quick succession).

Originally Posted By: Khaled
If I decide to make my hash table implementation return items in a specific order in particular contexts, this means that I will need to support this in all future hash table improvements, or if/when I change to a new/better hash table implementation. That simply is not practical. Especially since I have repeatedly said, many times, you should not depend on the hash table preserving order since that is not a normal feature of hash tables.

The only thing I am suggesting is that the existing functionality of ordered recall with slots=1 is retained. I am not proposing any changes which would require items to be delivered in a particular sequence when slots > 1 - indeed, in a previous post I pointed out that users like me should NOT try to box you in to e.g. specific hash routines which would then restrict your ability to improve the code, so we are agreed on this.

However, depending on implementation, alternative search mechanisms enabled by flags might in practice deliver items in a sequence which people then come to rely on (as slots = 1 did) - if you ever decide to implement these, you should bear this in mind. That said, if such a sequence is intrinsic to how the items are stored, it is unlikely that future changes would affect it, and perhaps you could accept those restrictions.

It is of course your choice what you decide to implement - but in my defence I would say that all of my suggestions have been designed to be simple to implement and to preserve backwards compatibility whilst delivering significant performance improvements for large hash tables.

(I do not have any code which needs large hash tables, and therefore I am not seeing performance issues even with non-hashed access, so I am personally not that worried about whether you implement these or not. My suggestions are only for the benefit of anyone who does use large hash tables, and particularly anyone who uses unhashed access to such tables.)

Last edited by Protopia; 28/10/18 11:01 AM.
Protopia · Pan-dimensional mouse · Joined: Aug 2003 · Posts: 318
P.S. Has anyone looked at Paul Hsieh's SuperFastHash ( http://www.azillionmonkeys.com/qed/hash.html ) or Bob Jenkins' even later and faster lookup3, both of which claim to be both fast and very evenly distributed?

Protopia · Pan-dimensional mouse · Joined: Aug 2003 · Posts: 318
Originally Posted By: Khaled
Quote:
True, but will a prime modulus be better than any other modulus at distributing the items amongst slots?

That is the general theory.

I have been looking into this a little, and that theory seems to depend on:

a. the hashing algorithm being based on powers of a prime (often 31), in which case you don't want your modulus to be a multiple of that prime; and

b. the hash table using strides (open addressing) when a slot is already in use, rather than a linked list per slot as mIRC does.

Neither of these seems to apply in this discussion.
