
This is why I've blocked all outgoing HTTP traffic from my machines.

A lot of people have brought this up over the years:

https://www.reddit.com/r/AMDHelp/comments/ysqvsv/amd_autoupd...

(I'm fairly sure I have even mentioned AMD doing this on HN in the past.)

AMD is also not the only one. Gigabyte, ASUS, and many other vendors' autoupdaters and installers fail without HTTP access. I couldn't even set up my HomePod without allowing it to fetch HTTP resources.

From my own perspective, allowing unencrypted outgoing HTTP is a clear indication of problematic software. Even unencrypted (but maybe signed) CDN connections are at minimum a privacy leak. Potentially it's even a way for a MITM to exploit the HTTP stack, some content parser or the application's own handling. TLS stacks are a significantly harder target in comparison.




> Potentially it's even a way for a MITM to exploit the HTTP stack, some content parser or the application's own handling. TLS stacks are a significantly harder target in comparison.

For signed payloads there is no difference: you're trusting <client>'s authentication code to read a blob and a signature and validate them against a public key. For package managers that usually means trusting only gpg - at the very least no less trustworthy than the many TLS and HTTP libraries out there.
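
To make that concrete, here's a minimal sketch of that verify-before-parse flow - assuming Ed25519 detached signatures and Python's cryptography package, purely illustrative rather than any package manager's actual format:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_blob(pubkey_raw: bytes, blob: bytes, signature: bytes) -> bool:
        # The only code touching untrusted bytes before authentication:
        # a fixed-size key load plus one signature check. The blob itself
        # is never parsed until this returns True.
        pub = Ed25519PublicKey.from_public_bytes(pubkey_raw)
        try:
            pub.verify(signature, blob)
            return True
        except InvalidSignature:
            return False

Only after that returns True does the blob get handed to any archive or metadata parser.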


> For signed payloads there is no difference: you're trusting <client>'s authentication code to read a blob and a signature and validate them against a public key.

Assuming this all came through unencrypted HTTP:

- you're also trusting that the client's HTTP stack is parsing HTTP content correctly

- for that matter, you're also trusting that the server (and any man-in-the-middle) is generating valid HTTP responses

- you're also trusting that the client's response parser doesn't have a vulnerability (and not, say, ignoring some "missing closing bracket" or something)

- you're also trusting that the client is parsing the correct signature (and not, say, some other signature that was tacked-on later)

It's trivially easy to disassemble software to find vulnerabilities like those, though. So it's a lot of trust given for an untrusted software stack.


> you're also trusting that the client's HTTP stack is parsing HTTP content correctly

This is an improvement: HTTP/1.1 alone is a trivial protocol, whereas the alternative is trusting the client's much more complicated TLS stack and its HTTP stack.

For technical reasons, unencrypted HTTP in practice also always means the simpler (and, for bulk transfers, more performant) HTTP/1.1, since standard HTTP/2 dictates TLS and the special non-TLS variant ("h2c") is not as commonly supported.
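
To illustrate the point, a complete plain-HTTP/1.1 fetch fits in a few lines over a raw socket (a sketch that deliberately ignores chunked encoding, redirects and status handling):

    import socket

    def http_get(host: str, path: str = "/") -> bytes:
        # One request line, two headers, a blank line - that's the whole
        # client side of a basic HTTP/1.1 exchange.
        with socket.create_connection((host, 80)) as s:
            request = (
                f"GET {path} HTTP/1.1\r\n"
                f"Host: {host}\r\n"
                "Connection: close\r\n"
                "\r\n"
            )
            s.sendall(request.encode("ascii"))
            chunks = []
            while chunk := s.recv(4096):
                chunks.append(chunk)
        return b"".join(chunks)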

> for that matter, you're also trusting that the server (and any man-in-the-middle) is generating valid HTTP responses

You don't, just like you don't trust a TLS server to generate valid TLS (and tunneled HTTP) messages.

> you're also trusting that the client's response parser doesn't have a vulnerability (and not, say, ignoring some "missing closing bracket" or something)

You don't. Authentication 101 (which also applies to how TLS works): authenticity is always validated before inspecting or interacting with content. These are the same rules TLS has to follow when it authenticates its own messages.

Furthermore, TLS does nothing to protect you against a server delivering malicious files (e.g., a rogue maintainer or mirror intentionally giving you borked files).

> you're also trusting that the client is parsing the correct signature (and not, say, some other signature that was tacked-on later)

You don't, as the signature must be authentic, from a trusted author (the specific maintainer of the specific package, for example). The server or attacker is unable to craft valid signatures, so anything "tacked on" just gets rejected as invalid - just like if you mess with a TLS message.

> It's trivially easy to disassemble software to find vulnerabilities like those, though. So it's a lot of trust given for an untrusted software stack.

The basis of your trust is invalid and misplaced: not only is TLS not providing additional security here, it is also the more complex, fragile and historically vulnerable beast.

The only non-privacy risk of using non-TLS mirrors is that a MITM could keep serving you old versions of everything your mirrors host (which are still valid and signed by the maintainers), withholding an update without you knowing. But such a MITM can also just fail your connection to a TLS mirror, and then you can't update either - so no: it's just privacy.
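
For completeness: the standard countermeasure to that freeze/rollback trick is an expiry date inside the signed metadata itself (Apt's Valid-Until field, for example). A sketch of the staleness check, assuming the timestamp was taken from an index whose signature has already been verified:

    from datetime import datetime, timezone

    def check_not_stale(signed_valid_until: str) -> None:
        # The expiry must come from the *signed* index (e.g. an Apt-style
        # "Valid-Until: Sat, 01 Jan 2028 00:00:00 UTC" line); anything a
        # MITM could rewrite is useless here.
        expiry = datetime.strptime(
            signed_valid_until, "%a, %d %b %Y %H:%M:%S %Z"
        ).replace(tzinfo=timezone.utc)
        if datetime.now(timezone.utc) > expiry:
            raise RuntimeError("index expired - possible freeze/rollback attack")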


> HTTP/1.1 alone is a trivial protocol

Eh? CWE-444 would beg to differ: https://cwe.mitre.org/data/definitions/444.html

https://http1mustdie.com/
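
To spell it out, the canonical CWE-444 case is a single request carrying two framing headers that disagree - a hypothetical payload, assuming a front-end that honors Content-Length chained to a back-end that honors Transfer-Encoding:

    # The front-end (Content-Length: 47) forwards the whole body as one
    # request. The back-end (Transfer-Encoding: chunked) ends the request
    # at the "0" chunk and treats "GET /admin ..." as the start of the
    # *next* request on the shared connection.
    smuggled = (
        b"POST / HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"Content-Length: 47\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
        b"0\r\n"
        b"\r\n"
        b"GET /admin HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"\r\n"
    )

"Trivial" only describes the happy path; the framing rules are where HTTP/1.1 parsers keep getting it wrong.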

> the alternative is trusting the client's much more complicated TLS stack and its HTTP stack.

An attacker doesn't get to attack the client's HTTP stack without first piercing the protection offered by TLS.


You seem to have forgotten all the critical TLS bugs we had. Heartbleed ring a bell?

> An attacker doesn't get to attack the client's HTTP stack without first piercing the protection offered by TLS.

You misunderstand: this means more attack surface.

The attacker can mess with the far more complex and fragile TLS stack, and any attacker controlling a server or server payload can also attack the HTTP stack.

Have you recently inspected who owns and operates every single mirror in the mirror list? None of these are trusted by you or by the distro, they're just random third parties - the trust is solely in the package and index signatures of the content they're mirroring.

I'm not suggesting not using HTTPS, but it is just objectively wrong to consider it to have reduced your attack surface. At the same time, most of its security guarantees are insufficient and useless for this particular task, so in this case the trade-off is solely privacy for complexity.


That was a long time ago, and it was specific to one implementation. In comparison, GnuPG has had many critical vulnerabilities, even recently. That's why Apt switched to Sequoia.

Modern TLS stacks are far from fragile, especially in comparison to PGP implementations. It's a significant reduction in attack surface when it's a MITM we're talking about.

Malicious mirrors remain a problem, but having TLS in the mix doesn't make it more dangerous. Potential issues with PGP, HTTP and Apt's own logic are just so much more likely.


If you believe TLS is more fragile than PGP and plain HTTP, then I have reason to believe you have never looked at any of those wire protocols/file formats and the logic required.

Adding TLS in front of HTTP when talking to an untrusted third-party server (and yes, any standard HTTPS server is untrusted in this context) can only ever increase your attack surface. The only scenario where it reduces the attack surface is if you are connected, with certificate pinning, to a trusted server implementation serving only trusted payloads - and neither is the case for a package repo. That's why we have file signatures in the first place.


I have implemented parts of all three. I doubt you have.

> Adding TLS in front of HTTP when talking to an untrusted third-party server can only ever increase your attack surface.

No, against a MITM it instantly subtracts the surface inside the TLS tunnel from the equation, which is the entire point.

> [...] that's why we have file signatures in the first place.

You still don't understand that even before the cryptographic operations done to verify the signatures, you have all those other layers. Layers that are complex to implement, easy to misinterpret, and to this day repeatedly found flawed. PGP is so terrible that no serious cryptographer even bothers looking at it anymore.

I start getting the feeling that you're involved in keeping the package repositories stuck in the past. I can't wait for yet another Apt bug where some MITM causes problems.


> I start getting the feeling that you're involved in keeping the package repositories stuck in the past.

I start getting the feeling that you have no actual experience in threat modelling.


This entire discussion has been about MITM attacks but you keep making arguments that are irrelevant in this context. A compromised web server that's serving malicious data is not a MITM attack.

Do you acknowledge this disconnect? Is there a good reason why you keep responding to discussion about MITM with ridicule and the type of responses I'd expect from someone who's severely confused what constitutes a MITM attack and what doesn't?


If you don't trust the http client not to do something stupid, this all applies to https, too. Plus, they can also bork the ssl verification phase, or skip it altogether.

TLS stacks are generally significantly harder targets than HTTP ones. It's absolutely possible to use one incorrectly, but then we should also count all the ways you can misuse an HTTP stack - there are a lot more of those.
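
For example, the canonical TLS misuse looks like this (a sketch using Python's standard ssl module):

    import socket
    import ssl

    # The misuse: explicitly turning verification off. TLS still encrypts,
    # but it no longer authenticates the peer.
    insecure = ssl.create_default_context()
    insecure.check_hostname = False
    insecure.verify_mode = ssl.CERT_NONE

    # The correct default: certificate chain and hostname both verified.
    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
            print(tls.version())  # e.g. "TLSv1.3"

That's one well-known foot-gun; HTTP offers a much longer list.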

This statement makes no sense. TLS is a complicated protocol whose implementations have had massive and quite public security issues, while HTTPS means you have both protocols and have to deal with a TLS server feeding you malicious HTTP responses.

Having to harden two protocol implementations, vs. hardening just one of those.

(Having set up letsencrypt to get a valid certificate does not mean that the server is not malicious.)


TLS may be complicated for some people. But unlike HTTP, it even has implementations formally proven correct. You can't say the same about HTTP, PGP or Apt.

> Having to harden two protocol implementations, vs. hardening just one of those.

We're speaking of a MITM here. In that case no, you don't have to harden both. (Even if you did have to, ain't nobody taking on OpenSSL before all the rest, it's not worth the effort.)

I find it kind of weird that you can't understand that if all a MITM can tamper with is the TLS, then it's irrefutably a significantly smaller surface than HTTP+PGP+Apt.


> We're speaking of a MITM here

We are speaking of the total attack surface.

1. When it comes to injecting invalid packets to break a parser, you can MITM TLS without problem. This is identical to the type of attack you claimed was relevant to HTTP-only: feeding invalid data that would be rejected by signature authentication.

2. Any server owning a domain name can have a valid TLS certificate, creating "trusted" connections - no MITM necessary. Any server in your existing mirrorlist can go rogue; any website you randomly visit might be evil. They can send you both validly encrypted but evil TLS records and malicious HTTP payloads.

3. Even if the server is good, it's feeding you externally obtained data that too could be evil.

There is no threat model here where you do not rely 100% on the validity of the HTTP stack and file signature checking. TLS only adds another attack surface, by running more exploitable code on your machine, without taking away any vulnerabilities in what it protects.


No, you want to move the goalposts, but we're not speaking of some arbitrary "total attack surface". The article itself is also about a potential MITM. You then list three cherry-picked cases, none of which actually touch on the concerns that a plaintext connection introduces or exposes. Please stop, it's silly.

There is fundamentally no reasonable threat model where a plaintext connection (involving all these previously listed protocols) is safer against a MITM than an encrypted and authenticated one.


You don't call it "cherry-picking" when a person lists fundamental flaws in your argument.

Constantly ignoring all the flaws outlined and just reiterating your initial opinion with no basis whatsoever is at best ignorance, at worst trolling.

HTTP with signed packages is by definition a protocol with authenticated payloads, and encryption exclusively provides privacy. And no, we're not singling out the least likely attack vector for the convenience of your argument - we're looking at the whole stack.


I do call it cherry-picking, because you chose scenarios that either apply equally without TLS or are (intentionally) extremely narrow in scope.

You have repeatedly ignored that we're speaking about protections against a MITM, not malicious endpoints. Because of that, your desperate "whole stack" talk is also nonsense. Even if you include it, a modern TLS stack is a very difficult target; the additional surface it adds that hasn't been inspected with a fine-toothed comb is microscopic.

As such, you've excluded the core of the problem: an unprotected connection means you have to simultaneously ensure that your HTTP, PGP and Apt code is bulletproof. This is an unavoidable result - signatures or no signatures, all that surface is exposed.

You've provided no proof or proper arguments that all three of those can achieve the same level of protection against a MITM. You've not addressed how the minuscule surface added by the TLS stack is not worth it considering the enormous surface of HTTP+PGP+Apt that gets protected against a MITM.

TLS also provides more than just privacy, I recommend you familiarize yourself with the Wikipedia page of TLS.


There's a massive difference. The entire HTTP stack comes into play before whatever blob is processed. GPG is notoriously shitty at verifying signatures correctly; only with the latest Apt is there some hope that Sequoia isn't as vulnerable.

In comparison, even OpenSSL is a really difficult target - it'd be massive news if you succeeded. Not so much for GPG. There are even verified TLS implementations if you want to go that far; PGP implementations barely compare.

Fundamentally, TLS is also tremendously more trustworthy (formally!) than anything PGP. There is no good reason to keep exposing all of that to potential middlemen instead of just TLS. There have been real bugs where captive portals unintentionally caused issues for Apt. It's such an _unnecessary_ risk.

TLS leaves any MITM very little to play with in comparison.


AFAIK a lot of Linux package repositories are http-only as well. Convenient for tracking what package versions have been installed on a given system.

They usually support both, but it's important to note that HTTPS is only used for privacy.

Package managers generally enforce authenticity through signed indexes and (directly or indirectly) signed packages, although be skeptical when dealing with new/minor package managers as they could have gotten this wrong.
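
As a sketch of what that signing chain buys you - Apt-style, where a signed InRelease file lists the hashes of the package indexes, which in turn list the hashes of the packages - assuming the index signature has already been verified:

    import hashlib

    def verify_package(index_sha256: str, package_bytes: bytes) -> None:
        # 'index_sha256' comes out of an already-verified signed index, so
        # integrity never depends on the transport - HTTP and HTTPS mirrors
        # are equivalent here, except for privacy.
        if hashlib.sha256(package_bytes).hexdigest() != index_sha256:
            raise RuntimeError("hash mismatch - corrupted or tampered package")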


Reducing the benefit of HTTPS to privacy only is dishonest. The difference in attack surface exposed to a MITM is drastic; TLS leaves so little available for any attacker to play with.

A MITM usually will not work in the case of package managers, since packages are signed. But still, an attacker can learn what kind of software is installed on the target. So I believe that HTTPS for privacy, in the case of Linux package managers, is fair enough.

The attacker can meddle with every step taken before signature verification: the way you handle HTTP responses, the way you handle the signature format, all of it. Captive portals have already caused corruption issues for Apt, signed packages be damned.

Saying it's "fair" is like saying engine maintenance does not matter because the tires are inflated. There are more components to it.

Ensuring the correctness of your entire stack against an active MITM is significantly more difficult than ensuring the correctness of just a TLS stack against an active MITM.


Doesn't this break CRL fetching and OCSP queries?

Nothing really cares, except for the likes of Prusa Connect.


