I'll share a couple of thoughts, but do read EKR's blog first:
- Web PKI is inherently insecure and can't be fixed on its own. The root problem is that the CAs we "trust" can issue certificates without technical controls. The best we can do is ask them to be nice and force them to provide a degree of (certificate) transparency to enable monitoring. This is still being worked on. Further, certificates are issued without strong owner authentication, which can be subverted (and is subverted). [3]
- The (very, very) big advantage of Web PKI is that it operates online and supports handshake negotiation. As a result, iteration can happen quickly if people are motivated. A few large players can get together and effect a big change (e.g., X25519MLKEM768). DNSSEC was designed for offline operation and lacks negotiation, which means that everyone has to agree before changes can happen. Example: Kipp Hickman created SSL and Web PKI in 3 months, by himself [1]. DNSSEC took years and years.
- DNSSEC could have been fixed, but Web PKI was "good enough" and the remaining problem wasn't sufficiently critical.
- A few big corporations control this space, and they chose Web PKI.
- A humongous amount of resources has been spent on iterating and improving Web PKI in the last 30 years. So many people configuring certificates, breaking stuff, certificates expiring... we've wasted so much of our collective lives. There is a parallel universe in which encryption keys sit in DNS and, in it, no one has to care about certificate rotation.
- DNSSEC can't ever work end-to-end because of DNS ossification. End-user software (e.g., browsers) can't reliably obtain any new DNS resource records, be it DANE or SVCB/HTTPS.
- The one remaining realistic use for DNSSEC is to bootstrap Web PKI and, possibly, secure server-to-server communication. This is happening, now that CAs are required to validate DNSSEC. This change finally makes it possible to configure strong cryptographic validation before certificate issuance. [2]
> DNSSEC could have been fixed, but Web PKI was "good enough" and the remaining problem wasn't sufficiently critical.
People say this about every failed technology. If something could have been fixed at any point in the last 30 years but somehow never has been, I suspect it's not actually true.
> Further, certificates are issued without strong owner authentication
I don't think DNSSEC would fix this either and, quite frankly, I don't think it's a super important problem to solve.
No, DNSSEC can enforce strong cryptographic validation _today_. Here's how:
1. Configure a CAA record that restricts issuance to two CAs that support locking down issuance to specific customer accounts. For example, Let's Encrypt supports RFC 8657; DigiCert has a proprietary mechanism. After this, you can only issue certificates when you properly authenticate against your selected CAs.
2. Use only ACME validation methods that rely on DNS. Avoiding HTTP-01, for example, ensures that a MITM can't intercept that unencrypted network traffic and approve certificates with key material under their control.
3. Deploy DNSSEC. Your DNS is now cryptographically validated, meaning your CAA records can't be spoofed and the validation methods from step 2 can't be spoofed either.
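As a sketch, the records for step 1 might look like the following zone-file fragment. The syntax follows RFC 8657 (`accounturi` and `validationmethods` parameters); the account URI shown is a placeholder, not a real account:

```
; Only Let's Encrypt may issue, only for ACME account 12345,
; and only via the DNS-01 validation method (RFC 8657).
example.com. IN CAA 0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345; validationmethods=dns-01"
; Optionally forbid wildcard issuance entirely.
example.com. IN CAA 0 issuewild ";"
```

With DNSSEC on the zone (step 3), a resolver-validating CA can't be fed spoofed versions of these records.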
If you like this sort of thing, perhaps you'll enjoy my SSL/TLS and PKI history where I track a variety of ecosystem events starting with the creation of SSL in 1994: https://www.feistyduck.com/ssl-tls-and-pki-history/
Does the TLS working group know that? Pretty much all their design work apart from one guy representing email use via Postfix and a few guys working on a perpetual-motion-machine profile for telcos assumes the only possible use for TLS is the web.
Maybe you're onto something, but in what way do you think that TLS is not serving other protocols?
Personally, I think we have a bigger problem on the PKI side, where Web PKI is very strong, but Internet PKI has been neglected. The recent move to remove client authentication is a good example.
The design decisions seem to be completely oblivious to the fact that anything exists outside the web: WebPKI, always-on Internet connections with DNS and the ability to tie in to third-party services to do things like ECH, everything can be flipped over to support whatever trendy thing someone has pushed through the WG over a period of a few weeks, CPU and memory is free and near-infinite, etc. Now project that onto a TLS implementation that has to run on a Cortex M3 in some infrastructure device, little CPU, little RAM, no DNS, and the code gets updated when the hardware gets replaced after 10-20 years. The end result is the creation of a hostile environment in the WG where pretty much everyone not involved in web use of TLS has left, so it's become an echo chamber of web-TLS users inventing things for other web-TLS users to play with.
Well, WebPKI is for the web, if you need TLS for other purposes that don't fit with the goals of those looking to protect web users and web infrastructure you need a different PKI. It's not like it's technically hard to set up your own private PKI, and there are plenty of companies who are happy to provide those services if you don't want to do it yourself, but it is more complicated and costly than just using WebPKI so we of course see WebPKI resources getting used inappropriately and then those users complain when there's a need for revocations and/or changes.
> Now project that onto a TLS implementation that has to run on a Cortex M3 in some infrastructure device, little CPU, little RAM, no DNS, and the code gets updated when the hardware gets replaced after 10-20 years.
Also the OT world needs to accept that they can't have their cake and eat it too. If you need to be able to leave the same code running untouched for 10-20 years, you don't connect it to the internet. If you need it connected to the internet, you accept that it needs to be able to receive updates and potentially have those updates applied in a matter of days. Extremely strict external security controls can mitigate some of these situations but will never eliminate the need for there to be a rapid update process.
> Also the OT world needs to accept that they can't have their cake and eat it too. If you need to be able to leave the same code running untouched for 10-20 years, you don't connect it to the internet
Why on earth not? Just because most of the code that uses the web PKI is crap and needs constant patching doesn't mean there aren't developers writing code that isn't crap and that you can leave running for 10-20 years without any patching. Years ago someone who created a (at the time) widely-used security tool got asked why there hadn't been any updates in years, and whether it was abandonware. His response was "some people do things properly the first time".
And before you say "even if the code is fine it's old crypto, it's insecure", when was the last time someone got pwned because they ran 25-year-old TLS 1.0?
> Why on earth not? Just because most of the code that uses the web PKI is crap and needs constant patching doesn't mean there aren't developers writing code that isn't crap and that you can leave running for 10-20 years without any patching.
I never said that's not possible, I said you can't design your systems assuming yours will be one of them. It is certainly possible that after 10-20 years a system might never have needed an update, but you didn't know that when it was built, purchased, or implemented, and assuming that will be the case is undeniably irresponsible.
> And before you say "even if the code is fine it's old crypto, it's insecure", when was the last time someone got pwned because they ran 25-year-old TLS 1.0?
The correct answer there would be "none yet", and there's no guarantee it would ever happen, but there are known weaknesses so it's always a possibility. Again, not saying everything will need to be updated regularly, but it's not a good call to assume your thing will never need it.
Let's look at this from another angle. Presumably if you have a desire to expose a device to the internet as a whole it's because you either want it to be able to access external resources or you want external systems to be able to reach it, and the outside world has this tendency to move on over time if protocols are flawed, even if those flaws don't matter to your device. If there's a process for updating regularly, this is no big deal. If there isn't, your thing is going to get progressively more annoying to use wherever it needs to interact with systems outside of its control.
I wrote about ECH a couple of months ago, when the specs were still in draft but already approved for publication. It's a short read, if you're not already familiar with ECH and its history:
https://www.feistyduck.com/newsletter/issue_127_encrypted_cl...
In addition to the main RFC 9849, there is also RFC 9848 - "Bootstrapping TLS Encrypted ClientHello with DNS Service Bindings": https://datatracker.ietf.org/doc/rfc9848/
There's an example of how it's used in the article.
Thanks for the writeup, Ivan, I am a great fan of your work!
Now we need to get Qualys to cap SSL Labs ratings at B for servers that don't support ECH. Also those that don't have HSTS and HSTS Preload while we're at it.
Thanks! Sadly, SSL Labs doesn't appear to be actively maintained. I've noticed increasing gaps in its coverage and inspection quality. I left quite a while ago (2016) and can't influence its grading any more.
Yes, there is! After I left SSL Labs, I built Hardenize, which was an attempt to go wider and handle more of network configuration, not just TLS and PKI. It covers a range of standards, from DNS and email to TLS, PKI, and application security.
Although Hardenize was a commercial product (it was acquired in 2022 by another company, Red Sift), it has a public report that's always been free. For example:
As already noted on this thread, you can't use certbot today to get an IP address certificate. You can use lego [1], but figuring out the exact command line took me some effort yesterday. Here's what worked for me:
lego --domains 206.189.27.68 --accept-tos --http --disable-cn run --profile shortlived
https://github.com/certbot/certbot/pull/10370 showed that a proof of concept is viable with relatively few changes, though it was vibe coded and abandoned (but at least the submitter did so in good faith and collaboratively) :/ Change management and backwards compatibility seem to be the main considerations at the moment.
It allowed me to quickly obtain a couple of IP certificates to test with. I updated my simple TLS certificate checker (https://certcheck.sh) to support checking IP certificates (IPv4 only for now).
I wrote about OpenSSL's performance regressions in the December issue of Feisty Duck's cryptography newsletter [1]. In addition to Alex's and Paul's talk on Python cryptography, at the recent OpenSSL conference there have been several other talks worth watching:
Note: be careful with these recorded talks as they have a piercing violin sound at the beginning that's much louder than the rest. I've had to resort to muting the first couple of seconds of every talk.
Because "everybody uses RC4" (the sibling comment from dchest is correct). There was a lot of bad cryptography in that period and not a lot of desire to improve. The cleanup only really started in 2010 or thereabouts. For RC4 specifically, it was this research paper, released in 2013: https://www.usenix.org/system/files/conference/usenixsecurit...
Hello Peter. Thank you for your work. I really like this approach. I too have been following Temporal and I like it, but I don't think it's a good match for simpler systems.
I've been reading the DBOS Java documentation and have some questions, if you don't mind:
- Versioning: from the looks of it, it's either automatically derived from the source code (presumably bytecode?) or explicitly set for the entire application? This would be too coarse. I don't see automatic derivation working, as a small bug fix would invalidate the version. (It could be less of a problem if you looked only at method signatures... perhaps add that to the documentation?) Similarly, with the explicit setting, a change to the version number would invalidate all existing workflows, which would be cumbersome.
Have you considered relying on serialVersionUID? Or at least allowing explicit configuration on a per workflow basis? Or a fallback method to be invoked if the class signatures change in a backward incompatible way?
Overall, it looks like DBOS is fairly easy to pick up, but having a good story for workflow evolution is going to be essential for adoption. For example, if I have long-running workflows... do I have to keep the old code running until all old instances complete? Is that the idea?
- Transactional use: would it be possible to create a new workflow instance transactionally? If I am using the same database for DBOS and my application, I'd expect my app to do some work and create some jobs for later, reusing my transaction. Similarly, maybe when the jobs are running, I'd perhaps want to use the same transaction? As in, the work is done and then DBOS commits?
I know using the same transaction for both purposes could be tricky. In fact, I have used two in the past for job handling. Some guidance in the documentation would be very useful to have.
1. Yes, currently versioning is either automatically derived from a bytecode hash or manually set. The intended upgrade mechanism is blue-green deployments where old workflows drain on old code versions while new workflows start on new code versions (you can also manually fork workflows to new code versions if they're compatible). Docs: https://docs.dbos.dev/java/tutorials/workflow-tutorial#workf...
That said, we're working on improvements here--we want it to be easier to upgrade your workflows without blue-green deployments to simplify operating really long-running (weeks-months) workflows.
2. Could I ask what the intended use case is? DBOS workflow creation is idempotent and workflows always recover from the last completed step, so a workflow that goes "database transaction" -> "enqueue child workflow" offers the same atomicity guarantees as a workflow that does "enqueue child workflow as part of database transaction", but the former is (as you say) much simpler to implement.
Here's an example of a common long-running workflow: SaaS trials. Upon trial start, create a workflow that sends the customer onboarding messages, possibly inspecting the account state to influence what is sent, and finally closes the account if it hasn't converted. This will usually take 14-30 days, but could go on for months if manually extended (as many organisations are very slow to move).
I think an "escape hatch" would be useful here. On version mismatch, invoke a special function that is given access to the workflow state and let it update the state to align it with the most recent code version.
> transactional enqueuing
A workflow that goes "database transaction" -> "enqueue child workflow" is not safe if the connection with DBOS is lost before the second step completes safely. This would make DBOS unreliable. Doing it the other way round can work provided each workflow checks that the "object" to which it's connected exists. That would deal with the situation where a workflow is created, but the transaction rolls back.
If both the app and DBOS work off the same database connection, you can offer exactly-once guarantees. Otherwise, the guarantees will depend on who commits first.
Personally, I would prefer all work to be done from the same connection so that I can have the benefit of transactions. To me, that's the main draw of DBOS :)
Yes, something like that is needed, we're working on building a good interface for it.
> transactional enqueueing
But it is safe as long as it's done inside a DBOS workflow. If the connection is lost (process crashes, etc.) after the transaction but before the child is enqueued, the workflow will recover from the last completed step (the transaction) and then proceed to enqueue the child. That's part of the power of workflows--they provide atomicity across transactions.
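The recovery guarantee described above can be sketched in a few lines. This is an illustrative model, not the DBOS API: the in-memory dict stands in for the durable checkpoint table DBOS keeps in the database, and the simulated crash plays the role of a lost connection between the two steps.

```python
# Sketch of durable-workflow recovery: each step's completion is
# checkpointed, and a re-run after a crash skips completed steps and
# resumes from the first step that never finished.

completed = {}  # workflow_id -> set of completed step names (durable in a real system)

def run_step(workflow_id, name, fn):
    """Run fn at most once per workflow; on recovery, skip it if already done."""
    done = completed.setdefault(workflow_id, set())
    if name in done:
        return
    fn()
    done.add(name)  # checkpoint recorded only after fn succeeds

log = []
def transaction(): log.append("tx")
def enqueue_child(): log.append("enqueue")

# First run: the transaction commits, then the process "crashes"
# before the child workflow is enqueued.
try:
    run_step("wf1", "transaction", transaction)
    raise RuntimeError("crash before enqueue")  # simulated failure
except RuntimeError:
    pass

# Recovery: the workflow is re-executed from the top; the completed
# transaction is skipped and the child is enqueued exactly once.
run_step("wf1", "transaction", transaction)
run_step("wf1", "enqueue", enqueue_child)

print(log)  # the transaction ran once and the child was enqueued once
```

The key point is that the checkpoint and the re-run together make "transaction, then enqueue" atomic as a pair, even though the two steps commit separately.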
> > transactional enqueueing
> But it is safe as long as it's done inside a DBOS workflow.
Yes, but I was talking about the point at which a new workflow is created. If my transaction completes but DBOS disappears before the necessary workflows are created, I'll have a problem.
Taking my trial example, the onboarding workflow won't have been created and then perhaps the account will continue to run indefinitely, free of charge to the user.
Looking at the trial example, the way I would solve it is that when the user clicks "start trial", that starts a small synchronous workflow that first creates a database entry, then enqueues the onboarding workflow, then returns. DBOS workflows are fast enough to use interactively for critical tasks like this.
Those two don't really compete. DNSSEC provides authenticity/integrity without privacy and DoH does exactly the opposite. If anything, you need both in order to secure DNS.
They don't compete in any immediate way, but over the long term, end-to-end DNS secure transport would cut sharply into the rationale for deploying DNSSEC. We're not there yet (though: I don't think DNSSEC is a justifiable deployment lift regardless).
It's worth keeping in mind that the largest cause of DNS authoritative data corruption isn't the DNS protocol at all, but rather registrar phishing.
Honestly, I think this has been true for a long time, but in 2025 the primary (perhaps sole) use case for DNSSEC is as a trust anchor for X.509 certificate issuance. If that's all you need, you can get that without a forklift upgrade of the DNS. I don't think global DNSSEC is going to happen.
In what way does DoH provide end-to-end security? It doesn't, unless you adopt a different definition of "end-to-end" where the "server end" is an entity that's different from the domain name owner, but you're somehow trusting it to serve the correct/unaltered DNS entries. And even then, they can be tricked/coerced/whatever into serving unauthentic information.
For true end-to-end DNS security (as in authentication of domain owners), our only option is DNSSEC.
At best, you can argue that DoH solves a bigger problem.
If you have an end-to-end secure transport to the authority, you've factored out several of the attacks (notably: transaction-driven cache poisoning) that have, at times, formed the rationale for deploying DNSSEC. The most obvious example here is Kaminsky's attack, and the txid attacks that preceded it, which had mitigations in non-BIND DNS software but didn't in BIND specifically because DNSSEC was thought to be the proper fix. Those kinds of attacks would be off the table in a universal DOH/DOT/DOQ world, in some of the same sense that they would be if DNS just universally used TCP.
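A back-of-the-envelope calculation shows why the 16-bit transaction ID alone was such a weak defense (the specific numbers below are illustrative, not from the thread):

```python
# Off-path cache poisoning: an attacker racing the authority forges
# replies with guessed 16-bit txids; each forged reply has a 1/65536
# chance of being accepted (pre source-port randomization).

def spoof_success_probability(guesses_per_race, races):
    """P(at least one forged reply accepted) across repeated races."""
    per_race_miss = (1 - 1 / 65536) ** guesses_per_race
    return 1 - per_race_miss ** races

# Kaminsky's insight: query random subdomains to force a fresh race
# every time, giving the attacker an effectively unlimited number of
# races instead of one per TTL.
p = spoof_success_probability(guesses_per_race=100, races=1000)
print(f"{p:.2f}")  # success is likely within ~100k forged packets
```

An end-to-end secure transport to the authority removes the race entirely, which is the point made above.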
"True" DNS security isn't a term that means anything to me. Posit a world in which DNSSEC deployment is universal, rather than the single-digit (sub-5%) deployment it has today. There are still attacks on the table, most notably from DNS TLD operators themselves. We choose to adopt a specific threat model and then evaluate attacks against it. This is a persistent problem when discussing DNSSEC (esp. vs. things like DOH), because DNSSEC advocates tend to fall back on the rhetorical crutch of what "true" security of authoritative data means, as if that had some intuitively obvious meaning for operators.
In a world where DNS message transports are all end-to-end secure, there really isn't much of a reason at all to deploy DNSSEC; again: if you're worried about people injecting bogus data to get certificates issued, your real concern should be your registrar account.
[1] https://www.feistyduck.com/newsletter/issue_131_the_legend_o...
[2] https://www.feistyduck.com/newsletter/issue_126_internet_pki...
[3] https://redsift.com/guides/a-guide-to-high-assurance-certifi...