Pikka bird (OP) · Joined: Sep 2006 · Posts: 11
I found a thread from a few years ago in which switching to Windows Schannel was considered but not implemented due to the lack of TLS 1.3 support. That has markedly changed in newer versions of Windows 10 and Windows 11. There are lots of benefits to moving to Schannel:

  • No need to manually update cacerts on a cadence.
  • Leverages the built-in Windows certificate store and its protections; DPAPI protects client certificates better than a flat .pem file on disk.
  • Schannel can use client certificates generated and stored in hardware via Microsoft's CNG key storage providers (smart cards, TPMs, etc.) -- see the sketch after this list.
  • OpenSSL continually has vulnerabilities and must be maintained by the application itself, independent of the built-in Windows Update mechanism that patches Schannel.
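
To make the certificate-store point concrete, here's a minimal sketch in C (illustrative only, not mIRC's code; the subject name is made up and error handling/cleanup are omitted) of handing a store-backed client certificate to Schannel through SSPI:

Code
/* sketch: pick a client certificate from the current user's "MY" store and
 * acquire an outbound Schannel credential with it; the private key never
 * leaves CNG/DPAPI.  Newer SDKs also offer SCH_CREDENTIALS for TLS 1.3. */
#define SECURITY_WIN32
#include <windows.h>
#include <wincrypt.h>
#include <security.h>
#include <schannel.h>
#pragma comment(lib, "crypt32.lib")
#pragma comment(lib, "secur32.lib")

SECURITY_STATUS get_client_cred(CredHandle *out)
{
    HCERTSTORE store = CertOpenSystemStoreW(0, L"MY");
    PCCERT_CONTEXT cert = CertFindCertificateInStore(
        store, X509_ASN_ENCODING | PKCS_7_ASN_ENCODING, 0,
        CERT_FIND_SUBJECT_STR, L"magamiako", NULL);   /* example subject */

    SCHANNEL_CRED cred = {0};
    cred.dwVersion = SCHANNEL_CRED_VERSION;
    cred.cCreds    = 1;
    cred.paCred    = &cert;
    cred.grbitEnabledProtocols = 0;     /* 0 = let Windows pick the protocol */

    TimeStamp expiry;
    return AcquireCredentialsHandleW(NULL, UNISP_NAME_W, SECPKG_CRED_OUTBOUND,
                                     NULL, &cred, NULL, NULL, out, &expiry);
}

From there, the standard InitializeSecurityContext loop drives the handshake.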


We're up to OpenSSL 1.1.1t now. Rather than moving to 3.0, we may as well just switch to using Schannel instead.

Hoopy frood · Joined: Jan 2004 · Posts: 2,127
One reason that MAPM was chosen for the .bigfloat functions was that it could be compiled by the compiler that mIRC uses, and I'm wondering if Schannel can do that, and whether it still supports the various OSes that mIRC runs on. Any switch-over would need to remain compatible with OpenSSL, because I'm pretty sure that's what the IRC networks are using.

I'm not sure exactly how TLS is used in IRC, but I know that servers often blow off a lot of the RSA settings for the certificate you create in options/ssl -- settings that are considered important elsewhere.

For example, only a few networks like Libera.Chat reject your certificate if it's expired, while the rest keep letting you use it for SASL. The RSA public exponent is usually 65537; NIST guidelines say it's OK for it to be as large as 2^256, but OpenSSL rejects it if it's larger than 2^64, except when you use a small enough key.
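
For reference, a minimal sketch (OpenSSL 1.1.1 API, deprecated in 3.0; the key size and function name are just for illustration) of where that exponent gets chosen when generating a key:

Code
/* sketch: generate an RSA key with an explicit public exponent */
#include <openssl/rsa.h>
#include <openssl/bn.h>

RSA *make_key(int bits)
{
    BIGNUM *e = BN_new();
    RSA *rsa  = RSA_new();

    BN_set_word(e, RSA_F4);                      /* 65537, the usual choice */
    if (RSA_generate_key_ex(rsa, bits, e, NULL) != 1) {
        RSA_free(rsa);
        rsa = NULL;
    }
    BN_free(e);
    return rsa;
}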

The IRC servers don't even check the signature on your self-signed client.pem certificate. You can generate a certificate, then use Notepad to alter one of the base64 characters near the tail end of the file, where the signature is located (without inserting or deleting anything); that ruins the signature and changes the fingerprint. But all the IRC servers will accept this certificate for SASL as long as you add the new fingerprint to NickServ.

One of the reasons that OpenSSL has more bugs is that it's a larger bag of features, so there's more that can go wrong, and my understanding is that most of the time these problems don't affect the things that mIRC uses.

I get the impression that switching over from 1.1.1 to 3.0 would require a lot of coding, and some things are no longer even available, as they keep shifting things into optional 'legacy' features. It was only v7.56 that did the upgrade from 1.0.x to 1.1.1.

v3.0 has also started doing sneaky things, like continuing to accept legacy parameters and then ignoring them, or changing an external function to use a different, more secure internal function without offering a way to get the old behavior back if you really want to keep it. There are quite a few commands in 1.1.1's command-line interface that are valid but won't work in 3.0.

For example, when testing whether a number is prime, OpenSSL had several external functions, and one of them that was fairly quick was suddenly switched over to using Miller-Rabin, which caused a DLL to take a LONG time to initialize when loading.

Miller-Rabin can't really prove something is a prime; it just detects most non-primes as not being primes, and by repeating the test multiple times it increases the confidence that a number isn't a composite that happened to pass the test(s). In 1.1.1 the external function lets you specify how many times you want the test repeated before it declares the number probably prime because it passed them all. You can even tell it to make just 1 test against the number, which rejects the vast majority of non-primes. The claim for Miller-Rabin is that the chance of it being wrong is 1 in 2^(number of tests), but when OpenSSL generates primes for RSA, it rarely finds a number that passes round #1 and then fails any other test. In searching for and finding thousands of safe primes, I have yet to find a case where both p and q passed 1 round but either of them then failed additional rounds because it wasn't prime.
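
In the 1.1.1 API that round count is literally a parameter the caller picks (the wrapper names below are made up; the BN_* calls are real 1.1.1 functions):

Code
/* sketch: the caller decides how many Miller-Rabin rounds to spend */
#include <openssl/bn.h>

/* one round only: cheap, and rejects the vast majority of composites */
int quick_look(const BIGNUM *n, BN_CTX *ctx)
{
    return BN_is_prime_ex(n, 1, ctx, NULL);   /* 1 = probably prime, 0 = composite */
}

/* let OpenSSL pick the round count from the bit length (BN_prime_checks) */
int confident_look(const BIGNUM *n, BN_CTX *ctx)
{
    return BN_is_prime_ex(n, BN_prime_checks, ctx, NULL);
}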

Well, for v3.0 they decided it wasn't secure for programs to test with just 1 round, so they modified the external function call to ignore the parameter specifying the number of tests whenever it's less than the number of rounds they think you should use, because they decided there's no valid reason to test fewer times than the minimum they chose. Doing things this way lets programs swap in the new OpenSSL modules without changing the source code where they invoke OpenSSL, even though the sneaky changes mean OpenSSL no longer behaves the way it used to.

Well, there is at least 1 valid reason to use just 1 round of Miller-Rabin, which is when trying to find a safe prime. That's where you are trying to find 2 numbers p and q, where p=2*q+1, and both p and q are primes. There's no point wasting time on more than 1 round against 'p' without first checking whether 'q' can pass even ONE round. So with 1.1.1 you can check 1 round against 'p', do the extra rounds against 'q', then come back to 'p' for the additional tests skipped earlier. But with 3.0 they make you do the whole 128 rounds against 'p' before letting you test 'q', even though more than 99% of the q's won't ever pass more than 1 round.
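
Here's a sketch of that search order against the 1.1.1 API (the bit size, the 64-round count, and the helper name are illustrative assumptions, not anyone's actual code):

Code
/* sketch: look for a safe prime p = 2q + 1 without wasting rounds on p
 * until q has shown some promise */
#include <openssl/bn.h>

int find_safe_prime(BIGNUM *p, int bits)
{
    BN_CTX *ctx = BN_CTX_new();
    BIGNUM *q   = BN_new();
    int found   = 0;

    while (!found) {
        /* random odd candidate with the top two bits set */
        if (!BN_rand(p, bits, BN_RAND_TOP_TWO, BN_RAND_BOTTOM_ODD))
            break;

        /* one cheap round (plus trial division) against p first */
        if (BN_is_prime_fasttest_ex(p, 1, ctx, 1, NULL) != 1)
            continue;

        /* q = (p - 1) / 2 */
        if (!BN_rshift1(q, p))
            break;

        /* now the expensive rounds: q in full, then the rounds skipped on p */
        if (BN_is_prime_ex(q, 64, ctx, NULL) == 1 &&
            BN_is_prime_ex(p, 64, ctx, NULL) == 1)
            found = 1;
    }

    BN_free(q);
    BN_CTX_free(ctx);
    return found;
}

Under 3.0 the same calls still compile (as deprecated functions), but the round counts you pass are ignored, which is exactly the behavior change described above.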

Pikka bird (OP) · Joined: Sep 2006 · Posts: 11
The beauty of TLS is that it's a standard that operates cross-platform :)

Specific to the client key fingerprints for authenticating to services: you can absolutely present an expired certificate in the TLS handshake, and it will absolutely work if the other side says not to care about that. The only thing that will happen is that the Windows event log will collect a bunch of events saying your certificate has expired. Realistically, the fingerprint concept in services is merely a form of certificate pinning, which itself obviates the need for any data in the certificate to be valid -- i.e. even today all client certificates performing authentication are self-signed. Most folks probably wouldn't know about the event log thing anyway.
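
As a sketch of what that pinning amounts to on the Windows side, this computes the fingerprint services compare against -- just a hash of the DER-encoded certificate (the helper name is made up, and it assumes a certificate context already picked out of the store as in the earlier snippet):

Code
/* sketch: fingerprint = hash of the encoded certificate, nothing more */
#include <windows.h>
#include <wincrypt.h>
#pragma comment(lib, "crypt32.lib")

BOOL certfp_sha256(PCCERT_CONTEXT cert, BYTE hash[32])
{
    DWORD cb = 32;
    /* SHA-256 here as an example; services also accept other hash choices */
    return CryptHashCertificate(0, CALG_SHA_256, 0,
                                cert->pbCertEncoded, cert->cbCertEncoded,
                                hash, &cb);
}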

I know this is IRC we're talking about, but it would be quite nice to have a key be presented and stored from a hardware device--whether virtual smart card via TPM, a TPM generated key, or a physical smart card.

Ideally, we'd have a platform that supports WebAuthN instead (I assume the fingerprinting could work the same here), but using x.509 today would offer the most compatibility, since that's what's being used anyway.

Code
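# Illustrative notes: "Microsoft Platform Crypto Provider" is the TPM key storage provider,
# and -KeyExportPolicy NonExportable keeps the private key from ever leaving it.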
New-SelfSignedCertificate -Type Custom -Provider "Microsoft Platform Crypto Provider" -Subject "CN=magamiako" -KeyExportPolicy NonExportable -KeyUsage DigitalSignature -KeyAlgorithm RSA -KeyLength 2048 -CertStoreLocation "Cert:\CurrentUser\My"

And then the certificate selection window could be updated to simply allow entering the fingerprint to use for authentication.

