Thanks for the effort of figuring out this kind of workaround, but choosing such a messy approach just to save work (changing a script at a later stage of its development) is something I would never do; I would change the script instead, regardless of the amount of work.

The problem is that I don't have that choice.

Besides hashes, the script has a lot of custom windows (around 25), both configuration and session ones, for data that needs to keep its order and stay quickly accessible (on startup I filter disk > @window), and I really do need one identical token separator throughout. Without that I would simply have to abandon it completely; for example, a script part could not pass its parameters to another part (say, a queue) that other parts use as well, if those parts all used different token separators.
ChannelList does not need separator 215 because its current parameters cannot contain spaces, while UserData cannot use separator 32 because it stores channel lists and full names.
Such problems would arise everywhere; the sketch below shows the kind of shared part I mean.
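To make the queue example concrete (alias and table names here are placeholders, not my actual code): a shared queue only works if every caller agrees on one separator, in this sketch $chr(215), because queued items can themselves contain spaces:

alias queue.add {
  ; append $1- with separator 215, since items (full names,
  ; channel lists) can contain spaces and so rule out separator 32
  if ($hget(queue, items) != $null) hadd queue items $v1 $+ $chr(215) $+ $1-
  else hadd -m queue items $1-
}
alias queue.next {
  ; pop the first item, using the same separator as queue.add
  var %items = $hget(queue, items)
  if (!%items) return
  hdel queue items
  if ($numtok(%items, 215) > 1) hadd queue items $deltok(%items, 1, 215)
  return $gettok(%items, 1, 215)
}

If just one caller tokenized with 32 instead, queue.next would hand back the wrong item.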
A week ago I tried the idea of a hash format of systemname plus its own token separator, and I quickly ran into problems when aliases passed parameters between the different parts.

I will just abandon /filter (and also $fline) when I have to filter on token positions that are too far in, and go back to scripted loops with tokenize or $gettok(), as sketched below; that is in fact how I did it in the past, until I found that /filter was a bit faster on average. The most important reason for switching, though, was that /filter let me shrink the script, thanks to fewer conditions and loops and its built-in sorting on a token position; the script is already 920k across 13 files, with a carefully chosen tradeoff between readability/maintainability and size.
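The fallback looks roughly like this (window names and the token position are placeholders): loop over the window lines and compare one token directly, instead of wildcard-matching the whole line:

alias filtbytok {
  ; copy lines from @source to @target where token 14 equals $1,
  ; replacing a /filter -ww @source @target <pattern> call
  var %i = 1, %n = $line(@source, 0)
  while (%i <= %n) {
    if ($gettok($line(@source, %i), 14, 32) == $1) aline @target $line(@source, %i)
    inc %i
  }
}

It works, but I am back to more conditions and loops, and I lose /filter's built-in sorting on a token position.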

Now, to what is relevant here: if I can assume, as you state, that this extreme slowness is due to the way wildcard matching works and cannot be improved, then this is not a bug (it is simply what is required to get the job done), the discussion (and bug report) stops here, and I will use other methods with better performance.
But it surprises me that /filter, well known as the fastest method, becomes 1000 times slower than a fully scripted approach (using tokens) depending on the content of the wildcard string, especially because in my tests the wildcard string length remained the same the whole time; only the single literal string 'ON' shifted from left to right, with a huge impact on speed.
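For reference, the patterns I am talking about have this shape (an illustration only, with placeholder window names; the token position is reached with '&', mIRC's match-one-word wildcard):

; 23 '&' wildcards in total, the literal ON here at token position 12
filter -ww @source @target & & & & & & & & & & & ON & & & & & & & & & & & &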
Starbucks_mafia said:

Of course it's going to be slow for extremely convoluted wildcards like that. It takes a hell of a lot of processing to correctly match against a string with 23 wildcards in it.

The number of wildcards never changed; it was the same in both the fast and the extremely slow results. With 'ON' at positions 1, 2, and 12 matching was fast, while at most other positions it was really slow.
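For example (the same illustrative shape as above), this was fast:

filter -ww @source @target ON & & & & & & & & & & & & & & & & & & & & & & &

while the identical pattern with ON shifted to most other positions was extremely slow.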
