slow file operations #230493 11/03/11 02:39 PM
Vliedel (OP) · Ameglian cow · Joined: May 2007 · Posts: 27
It's not really a bug, but still: I heard that $read was slower in mIRC 7 than in mIRC 6, so I decided to test it. Here are my results:


mIRC 6.35:
/write with 100 lines: 109ms - 1.09ms per line
$read with 100 lines: 15ms - 0.15ms per line
/write with 1000 lines: 1560ms - 1.56ms per line
$read with 1000 lines: 608ms - 0.608ms per line

/fwrite with 100000 lines: 2293ms - 0.02293ms per line
$fread with 100000 lines: 3104ms - 0.03104ms per line

/hsave with 100000 items: 858ms
/hload with 100000 items: 1372ms


mIRC 7.19:
/write with 100 lines: 94ms - 0.94ms per line
$read with 100 lines: 78ms - 0.78ms per line
/write with 1000 lines: 1420ms - 1.42ms per line
$read with 1000 lines: 6022ms - 6.022ms per line

/fwrite with 100000 lines: 2699ms - 0.02699ms per line
$fread with 100000 lines: 3338ms - 0.03338ms per line

/hsave with 100000 items: 5413ms
/hload with 100000 items: 11544ms

To be honest, I'm most concerned about /hsave and /hload, since saving/loading 100k items is not that unusual. But I also don't understand why $read has become so much slower.

Re: slow file operations [Re: Vliedel] #230494 11/03/11 03:55 PM
Wims · Hoopy frood · Joined: Jul 2006 · Posts: 3,530
Could you post the benchmarks used to get these results?


Looking for a good help channel about mIRC? Check #mircscripting @ irc.swiftirc.net
Re: slow file operations [Re: Wims] #230495 11/03/11 06:57 PM
Vliedel (OP) · Ameglian cow · Joined: May 2007 · Posts: 27
sure

Code:
alias filetest {
  if ($isfile(filetest.txt)) { echo -ag The file filetest.txt already exists! | halt }
  if ($fopen(filetest)) { .fclose filetest }

  var %max = $iif($1 > 0,$1,1000), %str = $str(x,100)

  ;; WRITE-part

  var %x = %max, %t = $ticks
  .fopen -n filetest filetest.txt
  while (%x) {
    .fwrite -n filetest %str
    dec %x
  }
  .fclose filetest
  echo -ag /fwrite with %max lines: $calc($ticks - %t) $+ ms - $calc($calc($ticks - %t) / %max) $+ ms per line

  if ($2 != f) {
    .remove filetest.txt
    var %x = %max, %t = $ticks
    while (%x) {
      write filetest.txt %str
      dec %x
    }
    echo -ag /write with %max lines: $calc($ticks - %t) $+ ms - $calc($calc($ticks - %t) / %max) $+ ms per line
  }

  ;; READ-part

  var %t = $ticks
  .fopen filetest filetest.txt
  while (!$fopen(filetest).eof) {
    noop $fread(filetest)
  }
  .fclose filetest
  echo -ag $ $+ fread with %max lines: $calc($ticks - %t) $+ ms - $calc($calc($ticks - %t) / %max) $+ ms per line

  if ($2 != f) {
    var %x = %max,%t = $ticks
    while (%x) {
      noop $read(filetest.txt,%x)
      dec %x
    }
    echo -ag $ $+ read with %max lines: $calc($ticks - %t) $+ ms - $calc($calc($ticks - %t) / %max) $+ ms per line
  }
  .remove filetest.txt
}

alias hashtest {
  if ($hget(hashtest)) { hfree hashtest }
  if ($isfile(hashtest.hsh)) { .remove hashtest.hsh }
  hmake hashtest 100
  var %max = $iif($1 > 0,$1,100000), %str = $str(a,1000)
  var %i = %max, %t = $ticks
  while (%i) {
    hadd hashtest %i %str
    dec %i
  }
  echo -ag added %max items in $calc($ticks - %t) $+ ms
  var %t = $ticks
  hsave hashtest hashtest.hsh
  echo -ag saved %max items in $calc($ticks - %t) $+ ms
  hfree hashtest
  hmake hashtest
  var %t = $ticks
  hload hashtest hashtest.hsh
  echo -ag loaded %max items in $calc($ticks - %t) $+ ms
  hfree hashtest
  .remove hashtest.hsh
}
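
For reference, a couple of hedged usage examples for the aliases above (going only by the $1/$2 checks in the code: $1 is the line/item count, and an "f" as $2 skips the slower /write and $read loops):

Code:
;; time /fwrite, /write, $fread and $read on 1000 lines
filetest 1000
;; only the /fwrite and $fread loops (the "f" switch skips /write and $read)
filetest 100000 f
;; time hadd, /hsave and /hload on 100000 items
hashtest 100000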

Re: slow file operations [Re: Vliedel] #230497 12/03/11 07:57 AM
argv0 · Hoopy frood · Joined: Oct 2003 · Posts: 3,918
The issue is that mIRC now processes data as UTF-8, which means it reads in and converts data to Unicode. In previous versions, data was handled as raw text. This makes things slower, but it's necessary for mIRC to handle modern data sources.

You should benchmark /bwrite and /bread instead; that would give you a much better performance comparison, unaffected by Unicode conversion.
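
A rough sketch of such a binary I/O benchmark, modelled on the /fwrite loop posted above (the alias name bintest, the 100-byte buffer and the chunk layout are my own choices, not anything from this thread):

Code:
alias bintest {
  if ($isfile(bintest.dat)) { .remove bintest.dat }
  var %max = $iif($1 > 0,$1,1000)
  ;; build a 100-byte binary buffer to write repeatedly
  bset -t &buf 1 $str(x,100)
  var %len = $bvar(&buf,0)

  ;; write %max chunks of %len bytes each with /bwrite
  var %x = 0, %t = $ticks
  while (%x < %max) {
    bwrite bintest.dat $calc(%x * %len) %len &buf
    inc %x
  }
  echo -ag /bwrite with %max chunks: $calc($ticks - %t) $+ ms

  ;; read the same chunks back with /bread
  var %x = 0, %t = $ticks
  while (%x < %max) {
    bread bintest.dat $calc(%x * %len) %len &in
    inc %x
  }
  echo -ag /bread with %max chunks: $calc($ticks - %t) $+ ms
  .remove bintest.dat
}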

A 10x drop isn't so terrible. Moore's law should be able to fill that gap reasonably fast, even if mIRC does nothing to improve speed.

Also, I'd question your assertion that 100k items in a hash table is "not that weird". Call me crazy, but I don't think I've ever dealt with anything over 10k data points in any mIRC script I've used, let alone 100k. Dealing with that much data in a script really isn't all that common, but what do I know? My first guess is that you're either doing something not directly related to IRC, or organizing your data poorly.


- argv[0] on EFnet #mIRC
- "Life is a pointer to an integer without a cast"