In general, scripts are faster when you reduce the number of commands, because of the overhead involved in preparing the input and output; if you need extra identifiers to stitch items back together, that takes even more time. Splitting data across several items means there are more items to search past, but the approach that avoids stitching items together, or using string identifiers to hunt within an item's data for a token, is probably still faster.
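
For example, here's a minimal sketch of the two layouts, assuming a made-up table and item naming scheme: one item per field gives a direct $hget lookup, while packing fields into one item forces a $gettok hunt through the data afterwards.

; Layout A: one item per field, looked up directly (names are hypothetical)
hadd -m users Alice.phone 555-1234
hadd users Alice.email alice@example.com
echo -a phone: $hget(users,Alice.phone)
;
; Layout B: fields packed into one item, then $gettok hunts for the token
hadd -m users2 Alice 555-1234 alice@example.com
echo -a phone: $gettok($hget(users2,Alice),1,32)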

If you have a lot of items in your table, the biggest impact on $hget speed is the number of buckets. If you have 100k items split into 100 buckets, then each bucket has around 1000 items, and $hget needs to search past an average of 500 items to find the right one. If you have 1000 buckets, then each bucket has around 100 items, and $hget needs to search past an average of only 50. There is probably a little overhead involved in having more buckets, so it may not be wise to simply take the maximum of 10000.
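
As a sketch, assuming a hypothetical table name, the bucket count is set once at creation time with /hmake:

; create the table with 1000 buckets instead of the default 100
; (bigtable and the item name are made up for illustration)
hmake bigtable 1000
hadd bigtable some.item some data
; with ~100k items, each lookup now scans a ~100-item bucket on average
echo -a $hget(bigtable,some.item)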

Buckets probably don't help with $hfind, since it needs to look at every item rather than hashing straight to a specific bucket for a specific item.
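
For instance, a wildcard search with $hfind has to test the pattern against item names across the whole table, buckets or not; a sketch against the hypothetical table above:

; count and fetch item names matching a wildcard pattern
; (N = 0 returns the number of matches, N = 1 the first match)
echo -a matches: $hfind(bigtable,some.*,0,w)
echo -a first: $hfind(bigtable,some.*,1,w)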

https://en.wikichip.org/wiki/mirc/hash_tables