
JR_ (OP) · Pikka bird · Joined: Aug 2011 · Posts: 10
Hey,
Triggering $hfind repeatedly, for instance to get the first 50 matches out of a hash table, doesn't seem very convenient or fast. Calling it just once and having it put, let's say, the requested number of matches into a window would be more effective and way faster.
Shouldn't be hard to implement, I guess, since you would just have to redirect the output to a window. :P
Thanks for listening anyhow.
regards
jr

Hoopy frood · Joined: Oct 2004 · Posts: 8,330
Depending on what you're trying to do, it sounds like /filter would be a much better option for efficiency.

I'm sure $hfind could probably be improved somewhat, but even with a lot of improvements, /filter is likely to be a lot faster.
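
For example (a rough sketch; the table, file, and pattern names here are made up), you could /hsave the table and /filter the saved file straight into a hidden window:

Code:
; save the table to a temp file, then filter matching lines into @results
hsave mytable matches.tmp
window -h @results
filter -fw matches.tmp @results *some match*
echo -a Matched $filtered lines

/filter fills the window in a single pass, and $filtered gives you the number of matching lines.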


Invision Support
#Invision on irc.irchighway.net
Fjord artisan · Joined: Feb 2006 · Posts: 546
it would indeed be more efficient, and you could make the same case for identifiers such as $fline, $didwm / $didreg, $wildtok, etc.

Khaled has already given us this type of thing in identifiers such as $findfile (its command and window parameters) and $read (ability to start searching from a given line), but they involve much slower operations. this level of optimization is therefore less important with the identifiers mentioned above.

however, what might be nice to see is a more generalized solution to the problem of quickly iterating through these kinds of collections. a 'for each' structure has been brought up several times in the past, which would be quite helpful in these situations. it would probably require numerous deep changes to mIRC's interpreter in order to be implemented efficiently, though, i.e. in a way that improves on our current iterative methods.

anyway.. if you expect the number of matches with your $hfind() to be quite high compared to the number of items in the table, you could try looping through items with $hget() and using iswm or $regex() to perform the tests. or save the table in a suitable format with /hsave and try handling it with /filter. play around and see if you can find something better :P
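
for example, a sketch of the $hget()/iswm approach (table name and pattern invented for illustration):

Code:
; walk every item positionally and test it against a wildcard pattern
alias findmatches {
  var %i = 1, %total = $hget(mytable, 0).item
  while (%i <= %total) {
    var %item = $hget(mytable, %i).item
    if (*a match* iswm %item) echo -a %item
    inc %i
  }
}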


"The only excuse for making a useless script is that one admires it intensely" - Oscar Wilde
JR_ (OP) · Pikka bird · Joined: Aug 2011 · Posts: 10
My database is pretty huge ;) xx xxx - xxx xxx entries.
Having $hfind output a number of matches at once to, let's say, a new window would be the most efficient way to solve this problem.
I only suggested this because it wouldn't involve much implementation work. ;)
Using $hfind is much faster, by the way, than trying to find lines in a window, especially for such big databases.
Regards
$me

Hoopy frood · Joined: Oct 2004 · Posts: 8,330
I'm not sure that it would be as easy as you think. It could easily be a lot of work, and sending things to a window really isn't what most people are going to want. Other options, like the method jaytea mentioned, sound a lot more useful to more people.

Regarding the size of your table, chances are that /filter will be faster than any other method being thought of. It works very well and if you want it in a window, it can do that.

Anyhow, I would definitely like to see some kind of improvement in speed for searching for multiple results in large hash tables. But I'd rather not have it force the use of a window on people.


Invision Support
#Invision on irc.irchighway.net
JR_ (OP) · Pikka bird · Joined: Aug 2011 · Posts: 10
My database keeps changing a lot, so using a window + /filter wouldn't make sense. Try updating a window several times per second...
Anyhow, a window would be the most reasonable way to return multiple results at once and to handle them as well. I can't think of any other reasonable and fast way to return multiple results besides into files (I'm no fan of file I/O; it's slower).
/filter is simply no solution for people with highly dynamic and huge databases.

Hoopy frood · Joined: Jul 2006 · Posts: 4,145
I also would like to see some improvement for $hfind.
My idea is to pass a hash table name instead of N when calling $hfind(); in that case, mIRC would create a new hash table containing all the matches.
It wouldn't need any change to the syntax and shouldn't cause any problem, since a hash table name consisting only of numbers isn't something one can really use anyway: it wouldn't work with $hget().
It would let us do:
Code:
noop $hfind(original_table,*a match*,new_table,w)
var %a 1
while ($hget(new_table,%a).item != $null) { echo -a $v1 : $hget(new_table,$v1) | inc %a }
if ($hget(new_table)) hfree $v1
That way, it would also let you apply more than one match easily.


#mircscripting @ irc.swiftirc.net == the best mIRC help channel
Hoopy frood · Joined: Apr 2010 · Posts: 969
To add to this idea: instead of changing the way $hfind works, just add a command, /hfind:
Code:
/hfind -monwWrR <InTable> <OutTable> <match text>
; -m     : Create OutTable if it does not exist
; -o     : Clear OutTable if it exists (can be coupled with -m: clear it if it exists, create it if it does not)
; -nwWrR : Same as $hfind()'s search switches


Then you could use $hget(OutTable,0) to get the number of matches found


But, as jaytea said, a 'for each' loop would be nice:
Code:
for each -switches [other parameters] <input> {
  if ($feitem == ForEachItem in <input>) {
    /do stuff
  }
}


Last edited by FroggieDaFrog; 31/08/11 02:02 AM.

I am SReject
My Stuff
Hoopy frood · Joined: Oct 2004 · Posts: 8,330
Wims, I like the idea of putting the results in a new table automatically. That would work well for anyone and is much faster than using a window.

JR, have you tried /filter? I think you'd be surprised how well it works on large and dynamic data. I've used it on very large score tables to instantly display a top 10 from a hash table, for example. Filtering on matches requires a bit more work, but should still be very fast (definitely faster than window manipulation, which is slow).
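
As a sketch of that kind of use (table and file names are hypothetical, and the sort switches are worth double-checking against /help /filter): save the scores, sort descending on the score column, then read off the top 10:

Code:
; save "item score" pairs, then sort descending/numerically
; by column 2, using space (ascii 32) as the separator
hsave scores scores.tmp
filter -ffteu 2 32 scores.tmp sorted.tmp *
var %i = 1
while (%i <= 10) { echo -a $read(sorted.tmp, n, %i) | inc %i }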


Invision Support
#Invision on irc.irchighway.net
JR_ (OP) · Pikka bird · Joined: Aug 2011 · Posts: 10
I tried it, but my database is very huge; dumping it to a text file/window first and then filtering would take longer than just using $hfind. That said, /filter does work very fast if you already have the input where you want it (window/file). But if your database has a lot of I/O per second, a window/file is no real option... (so to answer your question: yes, I tried /filter, and yes, it's fast.)
And secondly, whether the multiple results of $hfind get redirected to a new hash table or a window, I couldn't care less. As long as we get the option to do so at all, that would be great!
regards
$me

Last edited by JR_; 31/08/11 11:36 AM.
Hoopy frood · Joined: Oct 2004 · Posts: 8,330
You don't have to "dump" it to anywhere. Just /hsave it. Unless /hsave is taking a long time, it should be fast.


Invision Support
#Invision on irc.irchighway.net
JR_ (OP) · Pikka bird · Joined: Aug 2011 · Posts: 10
You don't have to "dump" it to anywhere. Just /hsave it. Unless /hsave is taking a long time, it should be fast.

That's what I meant by "dumping"; it takes too long.
So I'd like to see $hfind do that stuff for me ;)

Last edited by JR_; 31/08/11 11:55 AM.
Hoopy frood · Joined: Jan 2003 · Posts: 2,523
It's a nice idea, but creating a new hash table with the results is not as efficient as putting them in a list-like structure. The reason is that you still have to use $hget().item to go through the items in the new hash table. As you know, this is relatively inefficient (O(n^2) overall, since each positional .item lookup is itself O(n)) and becomes very noticeable on large tables (in practice when N > 10,000). Something like $findfile's functionality (dumping to a window or calling a command on each item) would be more efficient.
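
A rough way to see this for yourself (table name invented): time a full positional walk via $hget().item at a couple of table sizes and watch it grow worse than linearly:

Code:
; time how long it takes to visit every item via $hget().item
alias timewalk {
  var %n = $hget(mytable, 0).item, %i = 1, %t = $ticks
  while (%i <= %n) { noop $hget(mytable, %i).item | inc %i }
  echo -a visited %n items in $calc($ticks - %t) ms
}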


/.timerQ 1 0 echo /.timerQ 1 0 $timer(Q).com
Fjord artisan · Joined: Nov 2009 · Posts: 295
It might be a little off-topic and not the answer you are looking for, but there is a DLL which lets you use SQLite databases in mIRC. It's really fast and great for things with tens of thousands of entries.


http://scripting.pball.win
My personal site with some scripts I've released.
Hoopy frood · Joined: Oct 2004 · Posts: 8,330
Originally Posted By: JR_
You don't have to "dump" it to anywhere. Just /hsave it. Unless /hsave is taking a long time, it should be fast.

That's what I meant by "dumping"; it takes too long.
So I'd like to see $hfind do that stuff for me ;)


Ok, just checking. Often when people talk about dumping data, they are talking about writing the data individually (as you'd have to do to go directly from a hash table to a window).

qwerty brings up a good point about putting data into a new hash table if the number of matched results is going to be very high. 50 results probably wouldn't be bad, but thousands would be. For the larger numbers, a window might actually work better. I just don't really like being forced into that method. If you know your results will be small, it's a lot nicer not to have a window created whenever you search. At least I think so.

Also, windows become a problem if you are already in a lot of channels, or have a lot of query windows open, and then have a script (or multiple scripts) automatically searching a variety of tables. There's a limit to the number of windows you can open at one time, and you could end up hitting that limit, at which point your search stops working until the number of windows is reduced. Whereas another hash table, or even a file output, would always work as long as you have memory/drive space.

That's a rare situation, of course. Just something to consider for the rare few who actually do keep that many windows open on a regular basis. It might be that so few fit that description that it's not worth worrying about; I just wanted to bring it up for consideration.

Also, if you have a lot of tables and a script that can automatically search them, you could easily end up with 10, 20, or more search windows. Not necessarily what I'd like to see. Even if the windows are hidden, you're still potentially creating a lot of windows just to search.

And pball also has a good option. Really large hash tables become cumbersome. Once you start working with significant amounts of data, a true database structure, like SQL, is often more useful and efficient. That being said, I've never used SQL with mIRC, so I can't personally say how easy or fast it is from mIRC.


Invision Support
#Invision on irc.irchighway.net
Hoopy frood · Joined: Jul 2006 · Posts: 4,145
Indeed, but it would still be faster than looping on $hfind, and it would let us "chain" matches.
If results are dumped to a window, how do you keep track of the item/data on the same line of the window, without losing spaces and without losing integrity? The suggestion of a callback alias would be better in this case, imo: $1 is the item and $2 the data.
Both ideas would be great. It could be done like this: if the third parameter starts with a '/', it is treated as a callback alias; otherwise, if it's not a number, it is treated as a hash table name; or else the current behaviour is used.

Edit: my bad, the first word on the window's line would be the item and the rest would be the data, since the name of an item cannot have spaces in it.
The third parameter could then start with a @ to specify a window name.
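
The callback variant could look something like this (purely hypothetical syntax, modelled on $findfile's command parameter; none of this exists today):

Code:
; hypothetical: call an alias once per match instead of returning the Nth match
alias matchfound { echo -a item: $1 - data: $2 }
noop $hfind(mytable, *a match*, /matchfound, w)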

Last edited by Wims; 01/09/11 01:17 AM.

#mircscripting @ irc.swiftirc.net == the best mIRC help channel
Fjord artisan · Joined: Feb 2006 · Posts: 546
Originally Posted By: Wims
Edit: my bad, the first word on the window's line would be the item and the rest would be the data, since the name of an item cannot have spaces in it.


actually:

Code:
//hmake a | write -c a item with spaces $crlf data | hload a a | echo -a $hget(a, 1).item


"The only excuse for making a useless script is that one admires it intensely" - Oscar Wilde
Hoopy frood · Joined: Jul 2006 · Posts: 4,145
Cheater :D
I guess my question still needs an answer.


#mircscripting @ irc.swiftirc.net == the best mIRC help channel
Fjord artisan · Joined: Feb 2006 · Posts: 546
Originally Posted By: Wims
Cheater laugh
I guess my question still need answers


the list would just contain item names (which is what $hfind() returns, after all). retrieving data from an item name is a fast operation with $hget(). this also takes care of any items that contain binary data.


"The only excuse for making a useless script is that one admires it intensely" - Oscar Wilde
Hoopy frood · Joined: Jul 2006 · Posts: 4,145
Right, well, my suggestion would also allow a @window to be used anyway.


#mircscripting @ irc.swiftirc.net == the best mIRC help channel
Fjord artisan · Joined: Feb 2006 · Posts: 546
an additional hash table would presumably mean unnecessary duplicate data, so a collection of item names in a @window or an alias callback seem to me to be the most sensible ideas.


"The only excuse for making a useless script is that one admires it intensely" - Oscar Wilde
