I really doubt it, but feel free to actually test them both (or all three) on such a file of 100000 lines.

The point remains, $read and /write do the following:
- open file
- read characters and count $crlf until specified line reached
- read line in and return it
- close file

This means those files are opened and closed 100000 times, and /write still has to scan 1+2+3+4+...+5000 lines in the smaller files, which works out to 5000 * 5001 / 2 = 12502500 line scans per file...

Using /fopen, $fread, /fwrite and /fclose, you bring that number back to 1+20 times: $fread reads in sequence and remembers the last position it read, and /fwrite just appends at the end, so no searching is needed. That makes it very likely to be faster, a lot faster even.
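For the reading side, a minimal sketch of the sequential approach (the alias name readall and the handle name blub are just placeholders): open the file once, let $fread pull lines in order until $feof is set, then close once.

alias readall {
  ; open once, read every line in sequence, close once
  fopen blub delme.txt
  while (!$feof) {
    echo -s $fread(blub)
  }
  fclose blub
}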

Now do the same in native compiled code instead of a script language like mIRC script and you get the performance of /filter.
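For example, copying every line of one file to another in native code is a one-liner with the -ff (file in, file out) switches; the filenames here are just placeholders:

; copy all lines of delme.txt to copy.txt in one native pass
filter -ff delme.txt copy.txt *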

* Kelder goes for the /filter if possible

Since you'll probably not believe me, try these two scripts:
alias test1 {
  var %i = 1, %time = $ticks
  fopen -no blub delme.txt
  while (%i < 100000) {
    ; -n appends $crlf, so each call writes one full line
    .fwrite -n blub look! this is line number %i !
    inc %i
  }
  fclose blub
  echo -s time taken: $calc($ticks - %time) ms
}

alias test2 {
  var %i = 1, %time = $ticks
  while (%i < 100000) {
    write delme.txt look! this is line number %i !
    inc %i
  }
  echo -s time taken: $calc($ticks - %time) ms
}

Test1 runs in 10500 ms, test2 in 84200 ms, and this test covers only half the requirements...


ps: Look up /while. It might not be faster than /goto loops, but it has a much better chance of being readable and correct.
While you're at it, /var is nice too!