I'm thinking your list could be pretty long, and sorting through a list for duplicates by hand is both very time-consuming and likely to miss some, so the first idea that comes to mind is a couple of while loops that scan the file for matching lines.

I did briefly test it using:
Quote:
000.000.000.000:0000
111.111.111.111:1111
222.222.222.222:2222
333.333.333.333:3333
111.111.111.111:1111
222.222.222.222:2222
000.000.000.000:0000
111.111.111.111:1111
222.222.222.222:2222
000.000.000.000:0000
111.111.111.111:1111
222.222.222.222:2222
000.000.000.000:0000
111.111.111.111:1111
222.222.222.222:2222
000.000.000.000:0000
111.111.111.111:1111
222.222.222.222:2222
333.333.333.333:3333

And the file was left with only:
Quote:
000.000.000.000:0000
111.111.111.111:1111
222.222.222.222:2222
333.333.333.333:3333

The code I tossed together is:
Code:
alias sortlist {
  ; walk the file, comparing each line to every line below it
  var %v1 1
  while (%v1 !> $lines(1.txt)) {
    var %v2 $calc(%v1 + 1)
    while (%v2 !> $lines(1.txt)) {
      ; the n switch reads each line as plain text instead of evaluating it
      ; delete a duplicate in place; only advance %v2 when the lines don't match
      if ($read(1.txt, n, %v1) == $read(1.txt, n, %v2)) { write -dl $+ %v2 1.txt }
      else { inc %v2 }
    }
    inc %v1
  }
}

It's very simple, really: it goes through the file one line at a time, comparing the current line to every line below it and deleting any that match.
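If the list gets really large, those nested loops will start to crawl, since every line gets compared against every line below it. A quicker route is a single pass with a hash table: read each line, skip it if it's already been seen, otherwise remember it and write it out to a fresh file. Below is a rough sketch along those lines, not tested against your data; the alias name, the "seen" table, and the 1.txt.tmp temp file are just names I picked, and it assumes the lines contain no spaces (hash table item names can't have them):
Code:
alias fastsortlist {
  ; one-pass dedupe using a hash table instead of nested loops
  var %in = 1.txt, %out = 1.txt.tmp, %i = 1
  ; start clean: free any leftover table and temp file from a previous run
  hfree -w seen
  hmake seen 10
  if ($isfile(%out)) .remove %out
  while (%i <= $lines(%in)) {
    ; n switch reads the line as plain text
    var %line = $read(%in, n, %i)
    if (!$hget(seen, %line)) {
      ; first time this line has been seen, so keep it
      hadd seen %line 1
      write %out %line
    }
    inc %i
  }
  hfree seen
  ; replace the original file with the deduped copy
  .remove %in
  .rename %out %in
}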

Good luck.


I've gone to look for myself. If I should return before I get back, please keep me here.