Compared to reloading 80 lines of a text file, this will hardly be worth it. If you're worried about the efficiency of this design, the easiest thing to do is switch to a more advanced database system anyway.
It could work. Reloading the whole thing could interrupt what the user is doing, but I'll try it.
If you're polling, I think checking $file().mtime would be sufficient. If you don't want to poll and want to keep it all in mIRC, you can have each instance listen on a different port (which it writes to a file), and then use sockets to notify the other instances when necessary.
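The polling approach could look something like this: a repeating timer that compares the file's modification time against the last one seen. This is a minimal sketch; the filename `db.ini` and the `reloaddb` alias are placeholders for whatever your script actually uses.

```
; Poll the shared db file every 5 seconds and reload it when
; its modification time changes.
; Assumptions: db.ini is the shared file, and a reloaddb alias
; (not shown) re-reads it into the window.
alias startpoll {
  set %db.mtime $file(db.ini).mtime
  .timerdbpoll 0 5 checkdb
}
alias checkdb {
  if ($file(db.ini).mtime != %db.mtime) {
    set %db.mtime $file(db.ini).mtime
    reloaddb
  }
}
```

`.timerdbpoll 0 5 checkdb` fires `checkdb` every 5 seconds indefinitely; tune the interval to how quickly the sessions need to stay in sync.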
The difficulty is that all three terminal server sessions are running the same installation of mIRC, so how would each instance know which port to listen on?
I don't see how this is any different from three clients visiting the same webpage and then one of them changing it. The other two won't see the update until they refresh the page in their browser.
So, following that same mentality, you would do just that and have your clients "refresh the page", or in this case, the window. You could expect it to be done manually, or you could do it automatically via a timer. I don't see why that is overkill. Indeed, just as HTTP has ETags and Cache-Control headers, you would use a basic check to verify that the db was actually modified between retrievals, so you don't have to re-read the entire thing on each timer cycle. Using $crc/$md5/$sha1 would work fine, as would a timestamp on your db, as suggested.
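The hash variant of that check might look like the sketch below: hash the file each cycle and only reload when the digest changes. As before, `db.ini` and `reloaddb` are hypothetical names standing in for your own.

```
; Reload only when the file's md5 digest changes.
; $md5(filename, 2) hashes the contents of a file (the 2 means
; the first parameter is a filename rather than plain text).
alias checkdbhash {
  var %new = $md5(db.ini, 2)
  if (%new != %db.hash) {
    set %db.hash %new
    reloaddb
  }
}
```

Compared to the mtime check, hashing costs a little more per cycle but won't produce a spurious reload if the file is rewritten with identical contents.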
Unless your db is extremely large, I would suggest just downloading the whole thing. Dealing with delta changesets is complicated and *that* would likely be overkill.
Expecting my colleagues to refresh the database before making a change isn't going to happen, so it needs to be automatic and foolproof :p
I'll have a play around with polling the file for changes and refreshing the db, and also try my own idea.
Thanks for the suggestions, everyone.