Code:
alias fakearray {
  ; build a fake three-item array
  hadd -m array 1 a
  hadd -m array 2 b
  hadd -m array 3 c
  ; delete the middle index, leaving a gap at 2
  hdel array 2
  ; walk the "array" by numeric index
  var %i = 1
  while ($hget(array,%i) != $null) {
    echo -a $v1
    inc %i
  }
}


Running /fakearray echoes only a: the loop stops as soon as it hits the gap at index 2, even though item 3 still exists. This is one of the major problems with using a hash table to simulate an array. The only way to close the gap is to shift every item past the point of deletion down by one, and on big tables that takes a noticeable amount of time. I know because I wrote a script that, for all intents and purposes, lets you create perfect arrays using hash tables.
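
As an aside, the obvious workaround of walking the table by position instead of by numeric name is no substitute. A quick sketch (the alias name walkpositional is just something I made up for illustration):

Code:
alias walkpositional {
  ; visit every item, gaps or not, by table position rather than by item name
  var %i = 1
  while (%i <= $hget(array,0).item) {
    echo -a $hget(array,%i).item => $hget(array,%i).data
    inc %i
  }
}

This reaches every item even with gaps, but $hget(array,N).item returns items in the table's internal order, not in numeric order, so as soon as the order matters you are back to shifting indexes.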

I don't have the full script anymore, but I've whipped up a little test script to give you an idea of how slow it is to shift entries down in a hash table when you delete a numerical index.

Code:
alias -l create {
  ; fill table $1 with items named 1 to $2 (each item's data equals its name)
  var %i = $2
  while (%i) {
    hadd -m $1 %i %i
    dec %i
  }
}
alias -l delete {
  ; remove the item at index $2, then shift every item above it down one slot
  hdel $1 $2
  var %i = $2 + 1
  while ($hget($1,%i) != $null) {
    hadd -m $1 $calc(%i - 1) $v1
    inc %i
  }
  ; the old top index is now a stale duplicate; drop it
  ; (this relies on the item names being exactly 1..N, which is the case here)
  hdel $1 $hget($1,0).item
}
alias testspeed {
  var %ticks = $ticks
  create array 3000
  ; delete 100 items from the middle of the table, one at a time
  var %i = 1500, %j = 1400
  while (%i > %j) {
    delete array %i
    dec %i
  }
  hfree array
  echo -a $calc($ticks - %ticks) ms.
}
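
To reproduce this, load all three aliases into a script file and type /testspeed in any window; it builds the table, performs the 100 deletions, frees the table and echoes the elapsed time in milliseconds.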


This creates a hash table containing 3,000 items, then deletes 100 of them from the middle of the table. Each deletion shifts every index above the deleted one down by one, so that the "array" stays in order and can still be referred to as $hget(array,N). That works out to roughly 1,500 items re-added per deletion, or about 150,000 hadd calls in total.

On my 3 GHz dual-core CPU this takes around 4.6 seconds to execute, and it's not even an unrealistic example.

Having arrays built into the language would be both more efficient and more convenient.