#130211 14/09/05 06:46 AM
snabbi (OP) · Ameglian cow · Joined: Sep 2003 · Posts: 38
I made a trivia bot a couple of years ago, and some friends asked me to run it on their server. Here is my problem.

The points from the trivia bot are stored in hash tables. These hash tables are saved to a file after each question, in case of a crash.

The hash tables are the more useful format because of some calculations I have to do in my script, which would be quite expensive if the data were read from an ini file, for example.

But saving my list of 300 users after each question is getting expensive as well. Looking for an alternative, I thought of the /write -a option. Testing it gave good results: after faking a crash, reloading the hash table picked up the last appended values.

Can anybody give me some more assurance that this construction will not fail?
My appending code is:
//write -a test.txt test3 $+ $crlf $+ 30

To keep the file small I will use a timer of about 5 minutes to save the full list with the hsave option.
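
For illustration, a minimal sketch of that combination, assuming a table called trivia, a file called scores.txt, and the default /hsave file format (item on one line, data on the next); the names are made up:

; append only the changed item after each question (hypothetical helper)
alias -l savescore {
  write -a scores.txt $1 $+ $crlf $+ $hget(trivia, $1)
}
; every 300 seconds rewrite the whole file so it stays small
alias -l startscoretimer {
  .timerscoresave 0 300 hsave -o trivia scores.txt
}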

R · Hoopy frood · Joined: Aug 2004 · Posts: 7,252
Why don't you use:
ON *:EXIT: .hsave -o <table_name> <file_name>
ON *:DISCONNECT: .hsave -o <table_name> <file_name>

That will save the hash table to the file when the bot exits or gets disconnected.

I · Vogon poet · Joined: Sep 2005 · Posts: 116
Because a crash doesn't trigger an exit?

D · Hoopy frood · Joined: Sep 2003 · Posts: 4,230
I'm saving hash tables all the time, i.e. at least once a second, sometimes 3 to 5 times a second, and the table is around 150k in size; this has never affected system performance at all. I wonder if you're doing something odd.

PS: I wouldn't use /write, as it opens and closes the file for each line written (it may also have to copy the whole file, but I'm not sure about that when it's an append).

R · Hoopy frood · Joined: Aug 2004 · Posts: 7,252
I'm not sure which of those two events doesn't fire on a crash, but I use that format all the time, my system has crashed, and the updated information has always been in the stored file.

snabbi (OP) · Ameglian cow · Joined: Sep 2003 · Posts: 38
I am saving my hash table about once every 4 seconds. It is relatively small, with about 300 items. I hadn't considered that the /write option rewrites the whole file; that could of course be avoided by using /fopen and /fwrite. My question really was whether this construction is reliable.

Besides storing the scores in the hash table and saving the entire hash table to a file, I am using a custom window in which all scores are kept. When somebody answers a question correctly, I remove his line from the window and add a new line containing his points. This window is sorted by number of points.
For example

person1 100
person2 50

Because windows cannot sort numbers correctly, I am using a while loop to determine the right position for the nick whose total has changed. I do not know how big the scores will get, which makes it impossible to use leading zeros (i.e. 050 instead of 50).
I am using this window to calculate a person's position in the score list; thus person2 is in 2nd place.

By just deleting the line of the nick who received points, I save myself the trouble of rebuilding the list entirely. This action is handled easily.
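
For what it's worth, a rough sketch of that delete-and-reinsert idea; the window name @scores and the helper name are hypothetical, and the window format is nick followed by points as shown above:

alias -l updatescore {
  ; usage: /updatescore <nick> <newtotal>
  var %nick = $1, %points = $2, %i = 1
  ; drop the nick's old line, if there is one
  var %old = $fline(@scores, %nick *, 1)
  if (%old) dline @scores %old
  ; walk down the window until we hit a line with a lower score
  while (%i <= $line(@scores, 0)) {
    if ($gettok($line(@scores, %i), 2, 32) < %points) break
    inc %i
  }
  ; insert before that line so the window stays sorted by points
  if (%i > $line(@scores, 0)) aline @scores %nick %points
  else iline @scores %i %nick %points
}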

Perhaps somebody can help me with an indication of how many leading zeros I need for this window to get the format:

00000100 person1
00000050 person2

which would sort faster.

D · Hoopy frood · Joined: Sep 2003 · Posts: 4,230
/filter -wwzcteu 1 32 @customwindow @customwindow *

That will sort a custom window with the format you showed into descending numeric order.

On the saving, I still don't see why you would use /fwrite and a loop when you have /hsave; it's going to be slower than using /hsave.

snabbi (OP) · Ameglian cow · Joined: Sep 2003 · Posts: 38
I think I discovered the problem. I forgot to add a parameter to an alias I wrote myself, resulting in all the hash tables being saved each time instead of just the scores (thus two hash tables of 992 and 1100 items were being stored every 3 seconds).

But I still think using /fwrite without an /fclose would be faster than writing the whole hash table each time (because only one line would be written instead of all of them).

PS
Thnx a lot for the filter command!

D · Hoopy frood · Joined: Sep 2003 · Posts: 4,230
You're saying that only NEW items are ever inserted into the hash table, and no item values are updated? If so, then writing each new value to the file using /fopen, /fwrite and /fclose would be the fastest, I would assume (always do the /fclose, as this is the only time some OSes update directory information such as file size). However, from what you said about the hash table I gathered that values in it were being updated all over; if that is the case, then simply appending entries to the file will be no use.
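
As a rough sketch of what that might look like, with made-up handle, table, and file names, and assuming scores.txt already exists from an earlier /hsave:

; append one item/data pair using the file handling commands
alias -l appenditem {
  ; usage: /appenditem <item>
  fopen score scores.txt
  if ($ferr) { echo -s Could not open scores.txt | return }
  ; move to the end of the file so the pair is appended
  fseek score $file(scores.txt).size
  fwrite -n score $1
  fwrite -n score $hget(trivia, $1)
  fclose score
}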

K · Hoopy frood · Joined: Apr 2003 · Posts: 701
Well, it's not really best practice or nice, but it does work: when doing a /hload on a file with multiple declarations of the same entry, the last one ends up in the hash table. And since we always append to the file, the last one is the most recent one.
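
A quick way to check that for yourself from the editbox (table and file names made up); per the behaviour described, the echo should show the second, most recent value:

//if (!$hget(dup)) hmake dup 10 | write -c dup.txt person1 $+ $crlf $+ 50 | write dup.txt person1 $+ $crlf $+ 100 | hload dup dup.txt | echo -a person1 = $hget(dup, person1)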

D · Hoopy frood · Joined: Sep 2003 · Posts: 4,230
Eeeewwwwwww, damn, it's an audit trail! I had wondered if that would be the effect of two duplicate ITEM entries in a file on /hload, but I had never checked it.

I guess for maximum save speed that's the answer then! With only one slower (longer) reload, that type of saving is fine to do, as long as whenever the script does a /hload it follows it with a new /hsave -o filename, to ensure the save file doesn't keep growing forever.
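
A sketch of that reload-then-compact step, again with made-up names:

on *:START: {
  if (!$hget(trivia)) hmake trivia 100
  if ($exists(scores.txt)) {
    ; pull in the appended history, then rewrite the file at once
    ; so the audit trail does not keep growing
    hload trivia scores.txt
    hsave -o trivia scores.txt
  }
}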

snabbi (OP) · Ameglian cow · Joined: Sep 2003 · Posts: 38
This is what I tried to explain. My original question actually was whether it was just dumb luck that my /hload got the latest values from the duplicate entries.

I was writing the updated values to the file by appending to it, and therefore adding duplicate entries. A timer called the hsave option every 5 minutes to prevent the file from growing excessively.

Actually, your filter command, in combination with finding the actual bug (I was saving multiple hash tables in full every 3 seconds), solved my problem.

