It seems variable read/write times are heavily affected by the number of variables already set. My original benchmarks were run with only a couple of global variables in existence. Setting 5000 unrelated global variables first slows the original benchmark to about 1/20th of its former speed, which puts it a long way behind the hash table benchmarks.
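For anyone who wants to try reproducing this, here's a rough sketch of the test setup. The alias names, iteration counts, and the use of $ticks for millisecond timing are my own assumptions, not the exact original benchmark:

```mIRC
; Sketch only: pollute the variables file with 5000 unrelated globals,
; then time read/write cycles on one variable before and after.

alias fillvars {
  var %i = 1
  while (%i <= 5000) {
    ; dynamic variable name: %junk.1, %junk.2, ...
    set %junk. $+ %i 1
    inc %i
  }
}

alias benchvar {
  var %t = $ticks, %i = 1
  while (%i <= 10000) {
    set %bench.value abc        ; global write
    var %x = %bench.value       ; read back into a local
    inc %i
  }
  echo -a variable bench: $calc($ticks - %t) ms
}
```

Run /benchvar once on a clean variables file, then /fillvars followed by /benchvar again, and compare the two timings.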

As always, it looks like there's no straight answer on performance; it really depends on the situation. That said, read/write speed usually isn't the deciding factor in picking a storage method anyway; the choice tends to be driven by the features required (such as hash tables' powerful searching of large datasets, or custom windows' sorting capabilities).