I can reproduce some of your problem. $readini behaves much more slowly in newer versions of mIRC than in 6.35, so what you're experiencing may not be mIRC crashing, but just mIRC taking a really long time. You can do things like using $findfile() to display all the files on your hard drive, and this causes mIRC to display that same message as if it has crashed; however, it does eventually finish. All tests in this post used the same .ini file in the same folder, so nothing was affected by different folder locations on the disk behaving differently due to antivirus.

I repeated this command 9 times to create random item names varying in length from 4 to 12 characters.

Code:
//var %i 100000 | while (%i) { var %a $regsubex($str(x,$rand(4,12)),/x/g,$rand(a,z)) | writeini \path\dict.ini R %a %a | dec %i }



Because duplicate item names overwrite existing items instead of adding new ones, this reports 889390 items instead of 900000. The filesize is nearly 17 MB. Counting the items took over 16000ms, though the disk writes themselves were not slow: writing the 100k items per run took not much longer than retrieving just 1 item.

Code:
//var %ticks $ticks | echo -a $ini(\path\dict.ini,R,0) | echo -a $calc($ticks - %ticks) ms



mIRC 6.35 accessing this same file took only 219ms to count the 889390 items.

I then looked up an item near the end of the file, another near the middle, and another near the front. All 3 $readini commands took almost exactly the same time as the $ini command above, exceeding 16000ms.

Code:
//var %ticks $ticks | echo -a $readini(\path\dict.ini,R,ItemName) | echo -a $calc($ticks - %ticks) ms



I did attempt to read the same near-the-front item using the above command in mIRC 7.52, and it took 349 seconds to complete.

I had excluded that path from antivirus monitoring, so the issue in this thread shouldn't be affecting it, since that issue mainly affects disk writes and deletes: https://forums.mirc.com/ubbthreads.php/topics/260688/Re:_Windows_8.1_/write_very_ve#Post260688

If $readini is encountering Unicode symbols in your datafile, that could possibly slow it down too.

As a workaround, you can try loading the dictionary into a hashtable in memory. 890k items and 17 MB is a lot, but once it's in memory it should be manageable by your OS.

Code:
//if (!$hget(fout)) hmake -s fout 10000
/hload -ims fout dicc/fout.ini


Warning: this load did take an extremely long time, exceeding a minute, so you probably want to save your dictionary in the normal hashtable format instead of .ini. Also, if you had multiple [sections] in the same .ini, you can't load them all into the same hashtable while keeping the items identifiable as coming from different sections.
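As a sketch of the native-format approach: once the table is in memory, you can save it with /hsave's default format (item on one line, data on the next), which should reload far faster than parsing the .ini. The .dat filename here is just a hypothetical example.

Code:
//hsave -s fout dicc/fout.dat
//if (!$hget(fout)) hmake -s fout 10000
//hload -s fout dicc/fout.dat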

Once the dictionary is loaded in memory, access is much faster. $hget(fout,ItemName) reported taking 0 ticks to retrieve the same thing as $readini(dicc/fout.ini,R,itemname), and that was with the default 101 buckets. If you increase to 10k buckets as shown above, retrieval should be even more efficient. Counting total items isn't helped by having many buckets, but it's still much faster than disk: //echo -a $hget(fout,0).item took 78ms.
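You can time a retrieval the same way as the $readini tests above (ItemName here stands in for whichever item you tested; this is a sketch, not a measured result):

Code:
//var %ticks $ticks | echo -a $hget(fout,ItemName) | echo -a $calc($ticks - %ticks) ms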

If comparing 6.35's speed against the newest mIRC, be aware that some things that work in new mIRC aren't available in the old version, such as the syntax that lets hload define the number of buckets while creating the table.

More info about hashtables can be found at https://en.wikichip.org/wiki/mirc/hash_tables