Hacker News
The Dangers of SSL Certificates
dextercd
I have a simple Python script that runs every day and checks the certificates of multiple sites.
One time this script signaled that a cert was close to expiring even though I saw a newer cert in my browser. It turned out that I had accidentally launched another reverse proxy instance which was stuck on the old cert. Requests were randomly passed to either instance. The script helped me correct this mistake before it caused issues.
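A minimal sketch of that kind of daily check, using only the standard library; the host list and timeout are placeholders, and it reads notAfter from whatever certificate each site is serving right now:

    import datetime
    import socket
    import ssl

    SITES = ["example.com", "example.org"]  # hypothetical hosts to watch

    def days_until_expiry(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                not_after = tls.getpeercert()["notAfter"]
        expires = datetime.datetime.fromtimestamp(
            ssl.cert_time_to_seconds(not_after), tz=datetime.timezone.utc)
        return (expires - datetime.datetime.now(datetime.timezone.utc)).days

    for site in SITES:
        print(f"{site}: {days_until_expiry(site)} days left")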
dvratil
The main lesson we took from this was: you absolutely need monitoring for cert expiration, with an alert when (valid_to - now) becomes less than the typical refresh window.
It's easy to forget this, especially when it's not strictly part of your app, but it's essential nonetheless.
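For illustration, the condition can be as small as this; the 30-day window and the print-based "alert" are placeholders for whatever your automation and paging actually use:

    import datetime

    REFRESH_WINDOW = datetime.timedelta(days=30)  # renewal should happen before this point

    def check_expiry(valid_to, now=None):
        # valid_to is the cert's notAfter as a timezone-aware datetime
        now = now or datetime.datetime.now(datetime.timezone.utc)
        if valid_to - now < REFRESH_WINDOW:
            print(f"ALERT: cert expires {valid_to}, only {valid_to - now} left")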
Spivak
You'll have one of these things anyway, and I haven't seen one yet that doesn't let you monitor your cert and send you expiration notices in advance.
firesteelrain
You can update your cert to prepare for it by appending the new certificate block (-----NEW CERT-----) to the same file as the old one (-----OLD CERT-----).
But you also need to know where all your certificates are located. We were using Venafi for auto-discovery and email notifications; Prometheus ssl_exporter with Grafana integration and email alerts works the same way. The problem is knowing where all the hosts, containers, and systems that have certs are located. A simple nmap-style scan of all endpoints can help, but you might also have containers with certs, or certs baked into VM images. Sure, there are all sorts of things like storing the cert in a CI/CD global variable, bind-mounting secrets, Vault Secret Injector, etc.
But it's all rooted in maintaining a valid, up-to-date TLS inventory. And that's hard. As the article states: "There's no natural signal back to the operators that the SSL certificate is getting close to expiry. To make things worse, there's no staging of the change that triggers the expiration, because the change is time, and time marches on for everyone. You can't set the SSL certificate expiration so it kicks in at different times for different cohorts of users."
Every time this happens you whack-a-mole a change. You get better at it, but not before you lose some credibility.
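A rough sketch of that nmap-style sweep, as a crude seed for such an inventory; the hosts and ports here are made up, and anything that fails to connect or to validate still gets recorded, since for inventory purposes a failure is a useful signal too:

    import socket
    import ssl

    HOSTS = ["10.0.0.5", "10.0.0.8"]   # hypothetical addresses to sweep
    PORTS = [443, 636, 8443, 9443]     # common TLS ports worth probing

    def probe(host, port):
        ctx = ssl.create_default_context()
        try:
            with socket.create_connection((host, port), timeout=3) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    return tls.getpeercert()["notAfter"]
        except OSError as exc:  # refused, timed out, self-signed, name mismatch, ...
            return f"error: {exc}"

    for host in HOSTS:
        for port in PORTS:
            print(f"{host}:{port} -> {probe(host, port)}")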
loloquwowndueo
A certificate renewal process has several points at which failure can be detected and action taken, and it sounds like this team was relying only on a “failed to renew” alert/monitor.
A broken alerting system is mentioned ("didn't alert for whatever reason").
If this certificate is so critical, they should also have something that alerts if you're still serving a certificate with less than 2 weeks of validity; by that time you should have already obtained and rotated in a new certificate. This gives plenty of time for someone to manually inspect and fix.
Sounds like a case of the "nothing in this automated process can fail, so we only need this one trivial monitor, which also can't fail, so meh" attitude.
gmuslera
It is not about encryption (a self-signed certificate lasting until 2035 would suffice for that), but verification: who am I talking to? Reaching the right server can be messed up by DNS or routing, among other things. Yes, that adds complexity, but we are talking more about trust than technology.
And once you recognize that it is essential to have a trusted service, give it the proper instrumentation to ensure that it works properly, including monitoring, expiration alerts, and documentation, rather than just saying "it works" and dismissing it.
May we retitle the post as "The dangers of not understanding SSL Certificates"?
gmuslera
As I said above, SSL is about more than encryption; it's also about knowing that you are connecting to the right party. Maybe for a repository with multiple mirrors and DNS aliases, a layer of "knowing whom this comes from" is not that essential, but for most of the rest, even if the information is public, knowing that it comes from the authoritative source, or really from whom you think it comes from, is important.
flowerlad
Having only one SSL certificate is a single point of failure; we have eliminated single points of failure almost everywhere else.
woodruffw
Edit: but to be clear, I don’t understand why you’d want this. If you’re worried about your CA going offline, you should shorten your renewal period instead.
flowerlad
Update: looks like the answer is yes. So then the issue is people not taking advantage of this technique.
throw0101c
Both Apache (SSLCertificateFile) and nginx (ssl_certificate) allow for multiple files, though they cannot be of the same algorithm: you can have one RSA, one ECC, etc, but not (say) an ECC and another ECC. (This may be a limitation of OpenSSL.)
So if the RSA expires on Feb 1, you can have the ECC expire on Feb 14 or Mar 1.
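For nginx, the dual-certificate setup described above looks roughly like this; the paths are placeholders, and nginx picks the chain per connection based on what the client supports:

    server {
        listen 443 ssl;
        server_name example.com;

        # ECC chain and key
        ssl_certificate     /etc/nginx/certs/example.com.ecdsa.fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/example.com.ecdsa.key;

        # RSA chain and key, renewed on a deliberately offset schedule
        ssl_certificate     /etc/nginx/certs/example.com.rsa.fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/example.com.rsa.key;
    }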
superkuh
But for human persons and personal websites, HTTP+HTTPS fixes this easily and completely. You get the best of both worlds: fragile, short-lifetime pseudo-privacy if you want it (HTTPS), and long-term stable access no matter what via HTTP. HTTPS-only does more harm than good. HTTP+HTTPS is far better than either alone.
throw20251220
> There’s no natural signal back to the operators that the SSL certificate is getting close to expiry.
There is. The notAfter is right there in the certificate itself. Just look at it with openssl x509 -text and set yourself up some alerts… it's so frustrating having to refute such random bs every time when talking to clients, because some guy on the internet has no idea but blogs about their own inefficiencies.
Furthermore, their auto-renew should have been failing loud and clear; everyone should have known from metrics or logs… but nobody noticed anything.
ronsor
OpenSSL is still called OpenSSL. Despite "SSL" not being the proper name anymore, people are still going to use it.
By the way, TLS 1.3 is actually SSL v3.4 :)
RijilV
In TLS 1.3, the version in the record header is 3.1 (the one used by TLS 1.0), and later the client version is 3.3 (the one used by TLS 1.2). Neither is correct; they should be 3.4, or 4.0, or something incrementally larger than 3.1 and 3.3.
This number basically corresponds to the SSL 3.x branch from which TLS descended. There's a good website which visually explains this:
https://tls13.xargs.org/#client-hello/annotated
As for whether someone is correct for calling TLS 1.x "SSL 3.(x+1)", IDK how much it really matters. Maybe they're correct in some nerdy way, like I could have called Solaris 3 "SunOS 6" and maybe there were some artifacts in the OS to justify my feelings about that. It's certainly more proper to call things by their marketing name, but it's also interesting to note how they behave on the wire.
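You can see those bytes for yourself with Python's ssl module by generating a ClientHello into a memory BIO and never sending it; offsets follow the record layout, and the exact record version (0301 vs 0303) can vary by library:

    import ssl

    ctx = ssl.create_default_context()
    incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
    conn = ctx.wrap_bio(incoming, outgoing, server_hostname="example.com")
    try:
        conn.do_handshake()   # can't complete (no peer), but the ClientHello gets written
    except ssl.SSLWantReadError:
        pass

    hello = outgoing.read()
    print("record version:", hello[1:3].hex())    # typically 0301, i.e. SSL 3.1 / TLS 1.0
    print("client version:", hello[9:11].hex())   # 0303, i.e. SSL 3.3 / TLS 1.2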
deIeted
How long did it take for us to get to a "letsencrypt" setup? And exactly 100ms before that existed, you (meaning 90% of you) mocked and derided that very idea.
thecosmicfrog
This has given me some interesting food for thought. I wonder how feasible it would be to create a toy webserver that did exactly this (failing an increasing percentage of requests as the deadline approaches)? My thought would be to start failing some requests as the deadline approaches a point where most would consider it "far too late" (e.g. 4 hours before `notAfter`). At this point, start responding to some percentage of requests with a custom HTTP status code (599 for the sake of example).
Probably a lot less useful than just monitoring each webserver endpoint's TLS cert using synthetics, but it's given me an idea for a fun project if nothing else.
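Something like this would be a starting point, using only the standard library; the hard-coded deadline, the 4-hour window, and the 599 status are just the assumptions from above, and a real version would read notAfter from the cert it is actually serving:

    import datetime
    import random
    from http.server import BaseHTTPRequestHandler, HTTPServer

    NOT_AFTER = datetime.datetime(2026, 1, 1, tzinfo=datetime.timezone.utc)  # pretend notAfter
    PANIC_WINDOW = datetime.timedelta(hours=4)  # start failing requests inside this window

    def failure_probability(now):
        # 0.0 outside the window, rising linearly to 1.0 at expiry
        remaining = NOT_AFTER - now
        if remaining >= PANIC_WINDOW:
            return 0.0
        if remaining <= datetime.timedelta(0):
            return 1.0
        return 1.0 - remaining / PANIC_WINDOW

    class PanicHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            now = datetime.datetime.now(datetime.timezone.utc)
            if random.random() < failure_probability(now):
                self.send_response(599, "Certificate About To Expire")
                self.end_headers()
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok\n")

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), PanicHandler).serve_forever()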
loloquwowndueo
Just check the expiration of the active certificate; if it's under a threshold (say 1 week, assuming you auto-renew when it's 3 weeks to expiry; still serving a cert when it's 1 week to expiration is enough signal that something went wrong), then you alert.
Then you just need to test that your alerting system is reliable. No need to use your users as canaries.
johannes1234321
In real life, I guess there are people who don't monitor at all. For them, failing requests would go unnoticed... for the others, monitoring must be easy.
But I think the core thing might be to make monitoring SSL lifetime the "obvious" default: all the Grafana dashboards etc. should have such an entry.
Then, as soon as I set up a monitoring stack, I get that reminder as well.