You are confusing efficiency with speed. Those are two different things, although you will often find them to be related.

$readini also does disk access each time you call it, so from an efficiency viewpoint it is just as inefficient as $read, /write or /writeini.
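For reference, each of the following calls opens the file, reads or writes it, and closes it again; the file, section and item names here are just placeholders for the example:

alias diskdemo {
  ; each of these four lines touches the disk independently
  var %level = $readini(users.ini, access, John)
  writeini users.ini access John 5
  var %line = $read(notes.txt, n, 3)
  write notes.txt this line gets appended to the file
}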

This is not taking into account any buffering your OS might perform; chances are it caches the file if it notices the file is referenced a lot, but I'm not knowledgeable enough to speak about that.

Also, on computers from today and the last few years, the differences in speed are becoming smaller and smaller, even when using inefficient methods. You can read from an ini file 1000 times in a row and it would still take only around 300 milliseconds, depending on your hardware.

The point is, you are not going to be able to draw any valid conclusions about speed, since you are only doing 200 iterations on average. Yes, there are 400 items, but once your script finds a match, it doesn't need to continue looping, so on average it will do 200 iterations. In other words, it's not possible for you to notice any performance drawback, because 200 consecutive $readini calls don't take enough time for you to even notice.
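To make that concrete, here is a minimal sketch of such a lookup loop, assuming a hypothetical mydata.ini with an [items] section; it returns as soon as it finds a match, which is why random lookups average half the item count:

alias findmatch {
  ; scans the [items] section for a value equal to $1
  var %i = 1, %total = $ini(mydata.ini, items, 0)
  while (%i <= %total) {
    ; $ini(file,section,N) gives the Nth item name, $readini reads its value
    if ($readini(mydata.ini, items, $ini(mydata.ini, items, %i)) == $1) {
      echo -a Match found at item %i
      return %i
    }
    inc %i
  }
  echo -a No match.
}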

If you want to get a real feel for speed, compare the same data in a hash table to your ini file, and do 2000+ iterations; you will quickly see which one is faster.
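A minimal benchmark sketch along those lines, timing 2000 lookups of each kind with $ticks; the names mydata.ini, [items], item1 and the table name mydata are all assumptions for the example:

alias benchtest {
  ; build a hash table holding the same data as the ini section
  if ($hget(mydata)) hfree mydata
  hmake mydata 10
  hload -i mydata mydata.ini items

  ; time 2000 disk-based lookups
  var %t = $ticks, %i = 1
  while (%i <= 2000) {
    var %x = $readini(mydata.ini, items, item1)
    inc %i
  }
  echo -a $!readini x 2000: $calc($ticks - %t) ms

  ; time 2000 memory-based lookups on the same data
  var %t = $ticks, %i = 1
  while (%i <= 2000) {
    var %x = $hget(mydata, item1)
    inc %i
  }
  echo -a $!hget x 2000: $calc($ticks - %t) ms
}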

From an efficiency point of view, opening and closing the file on every access is simply the less preferable way. Accessing data from memory is much preferred: it gives you even greater speed, with no continuous disk access.
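The usual pattern is to pay the disk cost only twice per session: load the ini into a hash table when mIRC starts, and save it back when it exits. A sketch, again using the hypothetical mydata/mydata.ini names:

on *:start: {
  ; load [items] from disk into memory once
  if (!$hget(mydata)) hmake mydata 10
  if ($exists(mydata.ini)) hload -i mydata mydata.ini items
}
on *:exit: {
  ; write the table back to disk once
  hsave -i mydata mydata.ini items
}

In between those two points, every lookup is a $hget(mydata, itemname) straight from memory, and the disk is never touched.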

