You're assuming the subset of matches from the hashtable will be small enough to justify manually looping over it instead of relying on $hfind()'s efficient built-in search. Based on the examples OrionsBelt has given, each token comes from a relatively small set and will frequently occur somewhere in the table, just not necessarily as the fifth token. If that's true for the entire table, then a vague first-pass match followed by a second verification pass will only make things slower.
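
To make the cost concrete, here's a rough sketch of that two-pass approach. The alias name, the table name mytable, and the assumption that the data is space-delimited are all mine for illustration, not from the original posts:

    alias find5th {
      ; $1 = token we want in fifth position
      ; first pass: wildcard search of the data to narrow the set
      var %i = 1, %n = $hfind(mytable, $+(*,$1,*), 0, w).data
      while (%i <= %n) {
        var %item = $hfind(mytable, $+(*,$1,*), %i, w).data
        ; second pass: confirm $1 really is the fifth space-delimited token
        if ($gettok($hget(mytable, %item), 5, 32) == $1) echo -a matched item: %item
        inc %i
      }
    }

If most items contain the token somewhere, %n approaches the table size and every $hfind() call in the loop rescans the table, which is exactly the slowdown described above.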

Ultimately the only solution that's likely to provide acceptable results is a reworking of the data into something easier to index.
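
As one sketch of what that reworking could look like (the fifthindex table and both aliases are hypothetical, again assuming space-delimited data): maintain a second hashtable keyed by the fifth token at write time, so the lookup becomes a single $hget() instead of any scan:

    alias addrecord {
      ; $1 = item name, $2- = data
      hadd -m mytable $1 $2-
      ; index the record under its fifth token for direct lookup later
      var %tok = $gettok($2-, 5, 32)
      hadd -m fifthindex %tok $addtok($hget(fifthindex, %tok), $1, 32)
    }

    alias find5th {
      ; returns a space-delimited list of item names whose fifth token is $1
      return $hget(fifthindex, $1)
    }

The trade-off is that edits and deletions have to update both tables, but each lookup drops from a full table scan to one hash lookup.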

