#256944 02/03/16 08:02 PM
Joined: Apr 2010
Posts: 969
Hoopy frood
OP Offline
I have found an inefficiency in mIRC I'd like to see addressed:
Code:
;; fill the bvar
/bset &adding ....

;; retrieve stored data as a bvar;
;; thus creating two copies of the stored data
noop $hget(name, item, &stored)

;; copy the data I wish to add to the storage bvar;
;; thus creating two copies of the data I wish to add
bcopy -c &stored $calc($bvar(&stored, 0) + 1) &adding 1 -1

;; store the updated storage bvar
hadd -mb name item &stored

;; free up the duplicate data
bunset &adding &stored

When working with small data sets this may seem purely cosmetic, but for cases such as buffering socket data to read/send, the sizes add up. I propose the following:
Code:
/hadd -bN name item &bvar
if N is not specified or 0, the current behavior is maintained: the bvar data is added to the hashtable, overwriting any data stored under the item
if N is 1, the bvar data is appended to any data stored under the item
if N is 2, the current behavior is maintained but the input bvar is unset afterwards
if N is 3, the bvar data is appended to any data stored under the item, and the input bvar is unset afterwards

As such, backwards compatibility would be maintained while extending support for hashtable bvars.
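To make the proposed flag semantics concrete, here is a minimal C sketch of an overwrite-vs-append store. All names (`item_t`, `store_bytes`) are invented for illustration; this is not mIRC's actual code, just the behavior the N flag would select between (unset handling omitted):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: an item holding a byte buffer,
   with flag 0 = overwrite (current behavior) and flag 1 = append. */
typedef struct {
    unsigned char *data;
    size_t len;
} item_t;

int store_bytes(item_t *it, const unsigned char *src, size_t n, int flag)
{
    if (flag == 0) {                       /* overwrite: one fresh copy */
        unsigned char *p = malloc(n);
        if (!p) return -1;
        memcpy(p, src, n);
        free(it->data);
        it->data = p;
        it->len = n;
    } else {                               /* append: grow the existing buffer */
        unsigned char *p = realloc(it->data, it->len + n);
        if (!p) return -1;
        memcpy(p + it->len, src, n);
        it->data = p;
        it->len += n;
    }
    return 0;
}
```

The append path copies only the new bytes into the grown buffer, which is the saving the proposal is after.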


I am SReject
My Stuff
FroggieDaFrog #256949 03/03/16 12:35 AM
Joined: Jul 2006
Posts: 4,149
Hoopy frood
Offline
Do you have any benchmarks showing it really gets slower on large data sets?
All the operations here are supposed to be fast, but perhaps a better feature would be a new -a switch that simply appends the data, binvar or not.
/bunset is as fast as you can get, so a switch in /hadd that unsets a binvar really seems cosmetic, but a way to append would avoid a call to $hget() + /bcopy + /hadd in the binvar case, which would be nicer indeed.


#mircscripting @ irc.swiftirc.net == the best mIRC help channel
Wims #259118 06/10/16 09:24 AM
Joined: Apr 2010
Posts: 969
Hoopy frood
OP Offline
You seem to have misunderstood. The inefficiency isn't in the speed of such operations; it's the memory usage that is inefficient.

To append/alter any hashtable stored bvars, the data has to be duplicated.

$hget(table, item, &bvar) creates a 2nd copy of the data: one copy resides in the hashtable while the other copy is stored in the specified bvar. That is where the inefficiency lies.


I am SReject
My Stuff
FroggieDaFrog #259123 06/10/16 01:31 PM
Joined: Jul 2006
Posts: 4,149
Hoopy frood
Offline
I think my previous message shows clearly that I understood perfectly; I'm saying it would be nicer to be able to avoid the extra command (meaning a second copy).


#mircscripting @ irc.swiftirc.net == the best mIRC help channel
FroggieDaFrog #259170 11/10/16 10:14 PM
Joined: Oct 2003
Posts: 3,918
Hoopy frood
Offline
If I'm understanding this correctly, it sounds like you're asking for a binary append feature in /hadd.

But I'm not sure your proposal is actually any more efficient.

You're suggesting the data is duplicated in $hget (which might be correct, but may not be; see below), but you're assuming that your proposed /hadd feature would not duplicate memory, which is likely inaccurate.

Given the typical architecture for storing data sets of unknown bounds, it's very likely that mIRC's own internal implementation would have to do its own malloc() / realloc() / memcpy() calls to move data around to a new memory slice of appropriate size. Specifically, mIRC's internal implementation of /hadd -bN would likely be equivalent to the steps you listed in your snippet, just at the C level.
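A rough C-level sketch of what the script workaround costs, assuming heap-allocated buffers (the function name is invented for illustration; this is not mIRC's actual code). Each script command maps to an allocate-and-copy, and an internal /hadd -bN would plausibly perform the same steps:

```c
#include <stdlib.h>
#include <string.h>

/* Sketch: the $hget / bcopy / hadd / bunset sequence expressed in C. */
unsigned char *append_via_copies(const unsigned char *stored, size_t stored_len,
                                 const unsigned char *adding, size_t adding_len,
                                 size_t *out_len)
{
    /* $hget(name, item, &stored): duplicate the table's data into a bvar */
    unsigned char *tmp = malloc(stored_len + adding_len);
    if (!tmp) return NULL;
    memcpy(tmp, stored, stored_len);

    /* /bcopy: duplicate the data being added onto the end of the bvar */
    memcpy(tmp + stored_len, adding, adding_len);

    /* /hadd -b: the table takes its own copy of the combined buffer */
    unsigned char *result = malloc(stored_len + adding_len);
    if (!result) { free(tmp); return NULL; }
    memcpy(result, tmp, stored_len + adding_len);

    /* /bunset: free the intermediate copy */
    free(tmp);

    *out_len = stored_len + adding_len;
    return result;
}
```

Whether done by script or internally, the combined buffer still has to be built and handed to the table, so the copies don't all disappear just by moving the work into /hadd.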

I think you might be suggesting that $hget is the real issue, since it theoretically creates one extra copy of your data. The problem is, we don't really know whether $hget actually does a memcpy() internally or whether it just hands back a pointer to associate with &binvar. Since hash table items are immutable, that is how I would implement it. If it's not already done that way, it probably should be, since it saves a copy. There's no actual reason for $hget to do any memcpy's until string data is actually returned to the user.

Based on the behavior I'm seeing ($hget(tab, item, &bvar) will completely overwrite an existing &bvar), it seems like it just replaces the pointer that &bvar points to, and doesn't actually do any copying. I could be wrong, of course.

Code:
//hadd -m test foo abc | bset &binvar 1024 0 | echo -a $bvar(&binvar,0) | noop $hget(test,foo,&binvar) | echo -a $bvar(&binvar,0)
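If hash table items really are immutable, the pointer-replacing behavior described above could be implemented with shared, reference-counted buffers, where $hget merely bumps a count instead of copying. A minimal sketch, with all names invented (this is speculation about the internals, not mIRC's actual code):

```c
#include <stdlib.h>
#include <string.h>

/* Sketch: an immutable byte buffer shared by pointer instead of copied. */
typedef struct {
    unsigned char *bytes;
    size_t len;
    int refs;
} shared_buf;

/* Create a buffer with one owner (the hash table item). */
shared_buf *shared_new(const unsigned char *src, size_t len)
{
    shared_buf *b = malloc(sizeof *b);
    if (!b) return NULL;
    b->bytes = malloc(len);
    if (!b->bytes) { free(b); return NULL; }
    memcpy(b->bytes, src, len);
    b->len = len;
    b->refs = 1;
    return b;
}

/* "$hget": hand back the same buffer, bumping the refcount (no memcpy). */
shared_buf *shared_retain(shared_buf *b) { b->refs++; return b; }

/* "/bunset": only the last holder actually frees the bytes. */
void shared_release(shared_buf *b)
{
    if (--b->refs == 0) { free(b->bytes); free(b); }
}
```

Under this scheme, retrieving an item into a &binvar costs a pointer assignment and a refcount bump, which would match the no-copy behavior the snippet above appears to show.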


This proposal is certainly a helpful convenience, but it's unclear whether it would be any more efficient, in either memory usage or speed. If it is more efficient, that just tells us that $hget should be optimized instead.


- argv[0] on EFnet #mIRC
- "Life is a pointer to an integer without a cast"
