If I'm understanding this correctly, it sounds like you're asking for a binary append feature in /hadd.

But I'm not sure your proposal is actually any more efficient.

You're suggesting that the data is duplicated in $hget (which may or may not be correct; see below), but you're also assuming that your proposed /hadd feature would not duplicate memory, which is likely inaccurate.

Given the typical architecture for storing data sets of unknown bounds, mIRC's internal implementation would very likely have to make its own malloc() / realloc() / memcpy() calls to move the data into a new memory block of the appropriate size. In other words, mIRC's internal implementation of /hadd -bN would probably be equivalent, at the C level, to the steps you listed in your snippet.
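
To make that concrete, here's a minimal C sketch of what such an internal append would likely involve. This is purely illustrative; the struct and function names are hypothetical, not mIRC's actual internals:

Code:
#include <stdlib.h>
#include <string.h>

/* Hypothetical internal representation of a binary hash table item. */
typedef struct {
    unsigned char *data;
    size_t len;
} hash_item;

/* Append n bytes to an item: grow the allocation, then copy the new
 * bytes in. Note that realloc() is free to move the block, in which
 * case the existing contents get copied to the new location anyway,
 * so the append is not necessarily copy-free. */
int item_append(hash_item *item, const unsigned char *src, size_t n)
{
    unsigned char *grown = realloc(item->data, item->len + n);
    if (grown == NULL)
        return -1;                     /* allocation failed; item unchanged */
    memcpy(grown + item->len, src, n); /* copy in the appended bytes */
    item->data = grown;
    item->len += n;
    return 0;
}

int main(void)
{
    hash_item item = { NULL, 0 };
    item_append(&item, (const unsigned char *)"abc", 3);
    item_append(&item, (const unsigned char *)"def", 3); /* may move the block */
    free(item.data);
    return 0;
}

So even a native append would end up doing the same realloc/memcpy dance; at best it saves the script-level round trip, not the copies.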

I think you might be suggesting that $hget is the real issue, since it could in theory create one extra copy of your data. The problem is, we don't actually know whether $hget does a memcpy() internally or whether it just hands back a pointer to associate with &binvar. Since hash table items are immutable, the pointer approach is how I would implement it; if it's not already done that way, it probably should be, since it saves a copy. There's no real reason for $hget to do any memcpy's until string data is actually returned to the user.

Based on the behavior I'm seeing ($hget(tab, item, &bvar) will completely overwrite an existing &bvar), it seems like it just replaces the pointer that &bvar refers to, and doesn't actually do any copying. I could be wrong, of course.

Code:
//hadd -m test foo abc | bset &binvar 1024 0 | echo -a $bvar(&binvar,0) | noop $hget(test,foo,&binvar) | echo -a $bvar(&binvar,0)
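
If the overwrite behavior described above holds, the two echoes should print 1024 and then 3: the old 1024-byte contents of &binvar are discarded entirely rather than partially overwritten in place, which is consistent with a pointer swap (though, as noted, it doesn't strictly rule out a copy into a fresh buffer).

To illustrate the two implementation strategies being compared, here's a rough C sketch. Again, all names are hypothetical and this is not mIRC's actual code:

Code:
#include <stdlib.h>
#include <string.h>

/* Hypothetical binvar: a pointer to some bytes plus a length. */
typedef struct {
    const unsigned char *data;
    size_t len;
} binvar;

/* Strategy 1: $hget copies the item's bytes into a fresh buffer.
 * This is the "one extra copy" case. */
int hget_copy(binvar *bv, const unsigned char *item, size_t len)
{
    unsigned char *buf = malloc(len);
    if (buf == NULL)
        return -1;
    memcpy(buf, item, len);
    bv->data = buf;
    bv->len = len;
    return 0;
}

/* Strategy 2: $hget just repoints the binvar at the item's data.
 * No copy at all; safe only because hash table items are immutable. */
void hget_alias(binvar *bv, const unsigned char *item, size_t len)
{
    bv->data = item;
    bv->len = len;
}

int main(void)
{
    static const unsigned char item[] = "abc";
    binvar bv = { NULL, 0 };

    hget_alias(&bv, item, 3);          /* no copy: bv aliases the item data */

    if (hget_copy(&bv, item, 3) == 0)  /* one extra copy */
        free((unsigned char *)bv.data);
    return 0;
}

Under strategy 2, a copy only needs to happen later, if and when the data is actually handed back to the script as a string.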


This proposal would certainly be a helpful convenience method, but it's unclear whether it would be any more efficient in either memory usage or speed. And if it did turn out to be more efficient, that would only tell us that $hget should be optimized instead.