The only thing that will greatly improve the speed of your searches is using a hash table, because its lookup mechanism is not linear but based on hashing. Any other solution, like file handling commands, /filter, $read, hidden windows, etc., won't cut it, as they all use a linear search.
You say you must not have doubles, because $ctime will screw things up? That's fine: hash table items must be unique, so this fits your case nicely.
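To give you an idea, here's a minimal sketch (untested; the table name nicks and the nick SomeNick are just examples) of what that looks like in a script:

  ; create a table with 10000 buckets
  hmake nicks 10000
  ; add an item: item = nick, data = timestamp
  hadd nicks SomeNick $ctime
  ; $hget jumps straight to the item via its hash instead of scanning
  echo -a SomeNick was added at $hget(nicks, SomeNick)
  ; adding the same item again just overwrites its data, so doubles can't exist
  hadd nicks SomeNick $ctime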
All I need to know now is what your data looks like. Does it have spaces anywhere in the data, or is each line just <word><space><word>? If it's the latter, it's easy to write a script that transforms your text file into a hash table.
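For example, assuming each line really is <nick><space><data> (if not, this needs adjusting), a one-off loader could look like this (untested sketch; loadnicks and the file handle src are just names I picked):

  alias loadnicks {
    if ($hget(nicks)) hfree nicks
    hmake nicks 10000
    ; /fopen + $fread is far faster than looping $read over a big file
    .fopen src file.txt
    while (!$feof) {
      var %line = $fread(src)
      if (%line != $null) hadd nicks $gettok(%line, 1, 32) $gettok(%line, 2-, 32)
    }
    .fclose src
    echo -a Loaded $hget(nicks, 0).item items.
  }

Once it's in the table, you can hsave -o nicks nicks.dat on exit and hload nicks nicks.dat on start, so you never have to parse the raw text file again.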
Btw, you are using two different text files for this. Why? And which of the two exactly do you want to be faster?
Also, you do a check with $read to see if a line matching <nickname>* is in the file, and if it isn't, you write to file.txt.
What happens if it does find a match? Will it overwrite the entry? Add another one?
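Whatever the answer, with a hash table that whole check collapses to something like this (again assuming a nicks table, and assuming a match means "leave the old entry alone"):

  if (!$hget(nicks, $nick)) {
    hadd nicks $nick $ctime
    ; the table lives in memory, so write it to disk now and then
    hsave -o nicks nicks.dat
  }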
You can't expect people to help you if you keep your queries so vague. Just posting some code doesn't cut it; you need to explain exactly where your problem lies, and give as much information as you can.
Btw, if your database really has 605,000 lines in it, then you shouldn't be using a text file, heh; use a real database that you can control with SQL or some such.