As a Chinese living abroad, I censor my internet voluntarily via custom ublock cosmetic filters that kick in automatically upon match of certain regex patterns. I call it my "outer great firewall". Very effective protection against hate speech
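A setup like that can be expressed with uBlock Origin's procedural cosmetic filters, where `:has-text()` accepts a `/regex/` literal (hostnames, selectors, and patterns below are placeholders, not the actual filters):

```
! Hide comments whose text matches a regex (case-insensitive)
news.example.com##.comment:has-text(/pattern1|pattern2/i)
! Same idea for a whole post container
forum.example.com##.post:has-text(/pattern3/i)
```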
Why doesn't C have an actual, formal semantic model?
"Furthermore, we argue that the C standard does not allow Turing complete implementations, and that its evaluation semantics does not preserve typing. Finally, we claim that no strictly conforming programs exist. That is, there is no C program for which the standard can guarantee that it will not crash." [1]
Most system administrators are only aware of TCP and UDP (which QUIC uses). It really hurts protocol adoption if a protocol doesn't work because third parties block it, or fail to handle it, on their network gear.
SCTP packets would need to be allowed through the networks. And networks don't bother if nobody is using it. So it's a chicken-and-egg problem.
But Google is big enough to push through that problem, IMHO. As long as browsers fall back to HTTP/2 or HTTP/1.1, widespread acceptance of SCTP would be a boon to e.g. VoIP and game developers. But alas, it's too late: QUIC has been in development for six years now.
> But google is big enough to push through that problem
Not really. There are hundreds of millions of home routers deployed that all do NAT, and none of them support NAT for SCTP. Many do NAT with hardware support, so it's probably not even fixable with a firmware upgrade, and an upgrade isn't even an option for a ton of unsupported devices anyway.
So, I think encapsulating in UDP is the only realistic option if you want to gain any adoption any time soon.
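For illustration, a minimal sketch (in Python; addresses and payload are arbitrary) of what UDP encapsulation amounts to: the inner protocol is just opaque bytes inside a datagram, which is all a NAT box or firewall ever sees.

```python
# Toy illustration of tunneling an arbitrary transport payload inside UDP,
# the same trick SCTP-over-UDP (RFC 6951) and QUIC rely on. To the network,
# this is just a UDP datagram.
import socket

def udp_tunnel_roundtrip(payload: bytes) -> bytes:
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))    # OS picks a free port
    addr = recv.getsockname()
    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send.sendto(payload, addr)     # the inner protocol is opaque bytes
    data, _ = recv.recvfrom(65535)
    send.close()
    recv.close()
    return data

assert udp_tunnel_roundtrip(b"\x13\x37 inner packet") == b"\x13\x37 inner packet"
```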
Also, SCTP has the same problem as TCP: the network can look inside the protocol, so you get protocol ossification. While Google does this for selfish reasons, I think establishing a protocol that is completely opaque to telcos is a really good idea that should ultimately benefit the public.

Telcos really don't want to be dumb pipes, and they tend to abuse any power they get, as they have demonstrated time and time again. The only way to force this issue is to make it impossible for them to see or manipulate anything at all.

So, while we may have to live with the UDP encapsulation forever, and as stupid as that is, it at least ensures that anyone in the future can trivially invent and deploy new protocols, since it is trivial to masquerade anything at all as QUIC. The adoption of QUIC for the web has the potential to get all ISPs to fix things so that QUIC actually works reliably over their networks. And since, as far as the network is concerned, it's just UDP packets filled with random data, any new protocol that is also UDP packets filled with random data will work as well, even if it uses completely different mechanisms for framing, flow control, or multiplexing.
It has taken 20 years to get IPv6 adoption to where it is now. That takes amazing dedication and is a much more fundamental change. Why can't SCTP adoption be a similar long-term project? A home router probably has a life span of less than a decade, so it would be realistic to get majority adoption of SCTP within approximately 15 years if there were a bit of a push in that direction. QUIC has been in the making for 6 years now? SCTP was standardized in 2000, so we could be 6 years into this 15-year project by now instead. And that is not considering the time it will take to finish QUIC, build compatible implementations, and deploy them.
More importantly: You are missing the point. Great, you have encapsulated your protocol in UDP. Now, what do you notice? Your ISP is throttling your protocol because for some braindead reason their traffic management has categorized it as some sort of unimportant protocol, probably P2P or something. Or they just throttle UDP in general, because who uses UDP besides DNS and VoIP? So, high bandwidth UDP usage is obviously a DoS, and we don't want that! If you are unlucky, maybe they even shut off your server because it is obviously part of a botnet?
OK, so enough people complain to their ISPs that SCTP over UDP doesn't work. What happens? Exactly, ISPs start putting in rules that "recognize SCTP over UDP". So next time you want to try out a new protocol or extend SCTP somehow, you run into the exact same problem.
The difficult part isn't stuffing bytes into UDP packets, the difficult part is making sure it actually works reliably and fast over the public internet.
1. While getting telcos to fix their network so that QUIC works optimally might also need some work, it should be easier than for unencapsulated SCTP for the simple reason that (a) there is a relevant proportion of the network that already does work fine with it, so it is easier to blame the bad providers when things don't work so well with some provider that throttles UDP, say, and (b) in the case of simple UDP throttling, QUIC is in competition with VoIP and DNS, which could lead to degradation of those other services, which should increase the pressure to fix things. And in any case, you at least don't need to touch all CPE, only middle boxes within the infrastructure.
2. Probably more importantly, SCTP is a plain text protocol, so getting it to work over telcos' networks does not help you at all the next time around when you want to evolve protocols. Once the work is done for QUIC, you won't have to do it ever again, because anything new that you could come up with is easily made indistinguishable from QUIC as far as the telco is concerned.
In the above post, I was talking about SCTP over UDP, which is an IETF standard already. You have moved the goal posts by talking about unencapsulated SCTP. We all agree that this is a tough problem. However, QUIC is also encapsulated over UDP, and when encapsulated, has the same advantages / disadvantages as SCTP over UDP.
SCTP over UDP over DTLS already constitutes a good portion of web traffic, since WebRTC (and hence technologies like hangouts, facebook video messaging, etc) is based on it.
I cannot find a single hint there that anything is encrypted, and I would have been very surprised if I had. The security considerations even explicitly acknowledge that it's up to the application to encrypt the payload if it wants to.
> In the above post, I was talking about SCTP over UDP, which is an IETF standard already. You have moved the goal posts by talking about unencapsulated SCTP.
No, I simply covered that case as well because SCTP on IP was mentioned as a supposedly viable alternative in this discussion as well, which supposedly would be preferable over QUIC because of lower overhead (which is true if it works, of course).
> However, QUIC is also encapsulated over UDP, and when encapsulated, has the same advantages / disadvantages as SCTP over UDP.
No, it then has the advantage of being nearly completely encrypted, so we don't get any protocol ossification.
> SCTP over UDP over DTLS already constitutes a good portion of web traffic, since WebRTC (and hence technologies like hangouts, facebook video messaging, etc) is based on it.
Well, yeah, but then what's the advantage over QUIC? It's also a complete flow control and encryption stack in user space if you want to run it over DTLS! I mean, I don't mind SCTP, but it's not like the kernel implementation is of any help when you want to run it over DTLS ...
> I can not find there a single hint that anything is encrypted, and I would have been very surprised if I had. The security considerations do even explicitly acknowledge that it's up to the application to encrypt the payload if it wants to.
Plain text is typically used as the opposite of 'binary'. You are intending to say 'unencrypted' or 'clear text', not 'plain text'. SCTP is not encrypted by default. It is a transport protocol, and it is intended that encryption, where needed, be provided by the application or another layer.
> Well, yeah, but then what's the advantage over QUIC? It's also a complete flow control and encryption stack in user space if you want to run it over DTLS! I mean, I don't mind SCTP, but it's not like the kernel implementation is of any help when you want to run it over DTLS ...
The advantage is that DTLS over SCTP is an internet standard that was created organically by a variety of motivated individuals and organizations. DTLS over SCTP does not require kernel encryption and is implementable in userspace, similar to how TLS is implemented in userspace but uses the kernel TCP drivers. SCTP has been a standard for almost two decades, and DTLS over SCTP for a similarly long time. There is no reason to replace something that works with something that is exactly the same, except that it's invented by Google.
You claim QUIC is encrypted. It is not. QUIC is a transport protocol and transport protocols need their headers to be inspectable to provide good routing, etc. The payloads are encrypted, like any other transport protocol.
The point is that there is no technical advantage to QUIC over DTLS over SCTP over UDP. Thus, we should reject the new standard in favor of the existing one.
This is similar to how, if someone made a POSIX-like specification that fulfilled all the same requirements as POSIX, we shouldn't adopt it simply because it's newer. If the new technology offers no advantage over the old, then the new technology should be rejected as offering nothing new.
> Plain text is typically used as the opposite of 'binary'. You are intending to say 'unencrypted' or 'clear text', not 'plain text'. SCTP is not encrypted by default. It is a transport protocol and it is intended that
Also, context. None of what I wrote makes any sense under the assumption that I was talking about a 'character string protocol'.
> the advantage is that DTLS over SCTP is an internet standard that was created organically by a variety of motivated individuals and organizations. DTLS over SCTP does not require kernel encryption, and is implementable in userspace. This is similar to how TLS is implemented in userspace but uses the kernel TCP drivers.
Which is true, but obviously does not apply to SCTP over DTLS, which is what you were talking about in your previous post.
> There is no reason to replace something that works, with something that is exactly the same, except for the fact it's invented by Google.
Well, true. Which is why it is relevant that QUIC is not exactly the same, as I have explained a dozen times now.
> You claim QUIC is encrypted. It is not. QUIC is a transport protocol and transport protocols need their headers to be inspectable to provide good routing, etc. The payloads are encrypted, like any other transport protocol.
Routing happens at the IP layer, so, no, a transport protocol does not need to be inspectable. And in fact it is an explicit design goal of QUIC to minimize what is inspectable. Which is why it is essentially uninspectable.
All that is inspectable is a connection ID and a packet sequence number. The sequence number doesn't really tell you anything as it is not even a sequence number of payload segments, but only of transport packets (so retransmits get new sequence numbers). The connection ID tells you which packets form a connection ... but then there is essentially one connection between two endpoints, so nothing to see there either that isn't obvious from the network layer anyway.
But none of the flow control machinery is inspectable by the network, and even the unencrypted header fields are still authenticated so as to prevent any meddling by middle boxes. Also, you know, you can read all of this in the specification, maybe that'd be better than removing more doubt?
SCTP can be encapsulated within UDP, has kernel support, and is already implemented in most major web browsers because of WebRTC, for which it is the standards-mandated protocol.
The mobile game is just a reskin of an existing NetEase (Chinese) game. Mobile games are also full of microtransactions, which players already protested in Diablo 3 with the real-money auction house. Players really are angry about a Diablo mobile game.
Is there any messenger that does end-to-end encryption with reliable, in-order delivery with decentralized group chat and conversation history syncing? Been looking for years but so far the double-ratchet is inadequate.
Mumble / Murmur are close to this, but without chat history syncing. End-to-end PFS encryption, though. Very decentralized, but you can tie it into LDAP and other authentication systems. I linked it in another part of this thread.
What is with all these email clients supporting folders but not labels for Gmail? I get that labels are not a supported abstraction in IMAP, but they are supported through the Gmail API.
I still don't understand how this device could steal login details. Everything should be encrypted and authenticated through PKI when using any website that accepts login details. Whenever I visit a website with an expired certificate, for example, Chrome gives me a big red warning banner before allowing me to continue to the site.
>Everything should be encrypted and authenticated through PKI when using any website that accepts login details.
Yes, everything SHOULD be like this. I should be able to trust my neighbors and leave my doors unlocked as well, and I should be able to have faith in my elected officials. And yet...
The other issue is that you can connect to a website that implements HTTPS correctly and still be borked if that site doesn't implement HSTS properly; there are tools on Kali that implement HTTPS downgrading.
>I still don't understand how this device could steal login details...Whenever I visit a website with an expired certificate, for example, Chrome gives me a big red warning banner before allowing me to continue to the site.
The problem comes when your corrupted router messes with DNS and sends you to https://evil.chase.com, which has a pixel perfect mock up of a chase bank login screen, and a perfectly valid cert.
That's not a downgrade, but a lack of upgrade. A few comments back said https://evil but it would have to instead be http://evil assuming no rogue root cert is installed.
And it requires that, if the user had visited chase.com, chase.com not have includeSubDomains in its HSTS header.
So to prevent a downgrade attack before a first connection is made, not only does the domain need includeSubDomains and a valid lifetime (maybe at least 31536000 seconds, or 1 year; this may just be a government standard), but it also has to send the preload directive in its HSTS header and have been preloaded by that browser platform. If the domain is not preloaded, that first connection is required to get the HSTS information to the client in the Strict-Transport-Security header.
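Putting those pieces together, a preload-eligible site sends a header along these lines (values per the requirements discussed above):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```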
Perfectly valid cert for the evil.com domain - someone below pointed out that I flipped the domain names.
In reality the "evil" page would look something like "https://www.login.chase/login?id=DEADBEEF/.evil.com". For a non-trivial number of users, that's enough - "I see the nice green lock, I see chase, and some crazy web address characters that are always there".
Unless you're doing something super clever with characters that I'm not understanding, that's not how URLs work. ".evil.com" is clearly part of the query parameter.
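This is easy to verify mechanically: everything after the `?` belongs to the query string, not the host. A quick check with Python's standard library, using the URL from the comment above:

```python
from urllib.parse import urlparse

url = "https://www.login.chase/login?id=DEADBEEF/.evil.com"
parsed = urlparse(url)

# The host the browser actually connects to:
assert parsed.hostname == "www.login.chase"
# ".evil.com" is just payload inside the query string:
assert parsed.query == "id=DEADBEEF/.evil.com"
```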
Assuming they're not doing anything weird with Unicode, the evil pi is probably running its own DNS server, intercepting the traffic intended for normal DNS, and basically creating its own TLD the same way you would normally do localdomain. The evil.com part is redundant.
For example, you can go to http://website.com. Normally the website has an HTTPS redirect on the home page; your router replaces that page and disables the redirect. Now it's up to you to notice you're on an HTTP connection.
If you think this is rare: some Fortune 500 FX and stock trading sites had this vulnerability a year ago (I haven't checked again).
Correct. HSTS does not protect against a first visit to a site. And to work around HSTS, there are many ways to get users to clear their caches, install new browsers, or use new devices to browse sites they've already visited.
Technically, if the domain had DNSSEC enabled, it might prevent this kind of attack, but no regular consumer is using a validating stub resolver, so even DNSSEC wouldn't work.
Now that browsers are saying "Not Secure" by default for HTTP pages, users are apparently expected to notice this popping up where it didn't before and realize they're on a phishing site.
Anyone can preload their domain in Chrome, Firefox and others that share the preload list. I'm not sure what vulnerabilities are left after your site has been preloaded.
No. If the domain (and its subdomains) are preloaded - then a first visit is not required. The HSTS requirement is then baked into a list supported by modern browsers such as Firefox and Chrome.
HSTS and Certificate Transparency, yes. Certificate Pinning is too easy to shoot yourself in the foot with, so it should only be considered for the most sensitive sites.
Dynamic pinning (HPKP header) is being rolled back from browsers because of the reasons you mention. Only a small set of static pins will remain (in Chrome, Google sites for example).
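For reference, a dynamic pin was delivered via an HTTP response header shaped like this (the base64 hashes here are placeholders); a mistake in either pin could lock returning visitors out for the whole max-age, which is the footgun in question:

```
Public-Key-Pins: pin-sha256="PRIMARY_KEY_HASH_PLACEHOLDER="; pin-sha256="BACKUP_KEY_HASH_PLACEHOLDER="; max-age=5184000; includeSubDomains
```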
Are Windows 0-days really that common? I thought they were usually saved for really serious attacks, e.g. from state-sponsored actors, not scams on the level of "pay some random person $15 a month to attach a mysterious device to their router".
Not only that, but because the device has unfettered access to the internet, an attacker can always update it with new ways of installing certificates on your machine.
any site that is loaded via http can have content mutated -- forcing users to http (and then acting as MITM), injecting javascript, other payloads.
If you can get a foothold on client computers you can also do things like inject trusted CA's to allow yourself to act as MITM without any cert issues raised.
DNS can be mutated.
Auto update software that does not check the cert chain and hash of the deliverable can be used to inject and run code.
...
Hundreds (if not thousands) of repeatable attack vectors given physical access to the network like this.
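The first vector is the classic HTTPS-stripping attack. A toy illustration in Python of the rewrite a MITM applies to pages it intercepts over plain HTTP (real tools also rewrite redirects and cookies; the page content here is made up):

```python
def strip_https_links(html: str) -> str:
    """Toy sslstrip-style rewrite: downgrade links so the victim keeps
    talking plain HTTP to the attacker, who proxies HTTPS upstream."""
    return html.replace("https://", "http://")

page = '<a href="https://bank.example/login">Log in</a>'
assert strip_https_links(page) == '<a href="http://bank.example/login">Log in</a>'
```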
> any site that is loaded via http can have content mutated -- forcing users to http (and then acting as MITM), injecting javascript, other payloads.
Which is why everyone is moving to HTTPS.
> If you can get a foothold on client computers you can also do things like inject trusted CA's to allow yourself to act as MITM without any cert issues raised.
If you get access to the client computer all bets are off. You can just force all their traffic through a MITM proxy, no router hacking needed.
> DNS can be mutated.
Which won't allow you to MITM HTTPS sites.
> Auto update software that does not check the cert chain and hash of the deliverable can be used to inject and run code.
Any auto update software which doesn't verify certificates has a major security vulnerability.
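As a sketch of the minimum an updater should do in addition to verifying the TLS certificate chain: compare the download against a digest pinned or obtained over an authenticated channel (names and payloads here are illustrative):

```python
import hashlib

def verify_download(payload: bytes, expected_sha256: str) -> bool:
    """Reject the update unless its SHA-256 matches the pinned digest
    (which must itself come from an authenticated channel)."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

update = b"pretend this is an update binary"
good = hashlib.sha256(update).hexdigest()
assert verify_download(update, good)
assert not verify_download(update + b" tampered", good)
```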
>HTTPS protects against all of these:
>Which is why everyone is moving to HTTPS.
Yes, but a MITM can block or hamper the conversion to HTTPS and mutate the content. HPKP and HSTS are not widely used yet (and even where they are, the first request can be bypassed given this topology). Given current "end user"-level protections, having a device such as this on your network basically ensures you can be hijacked if even one request is made over HTTP or to a site not currently pinned to HTTPS.
>If you get access to the client computer all bets are off. You can just force all their traffic through a MITM proxy, no router hacking needed.
FFS, the point is that the MITM gives a huge amount of attack surface to breach the client -- after which, yes, all bets are off. Everything from injecting code into zips/executables/etc. downloaded over HTTP, to using 0-day browser exploits and mutating requests. The device itself is physical access to your network, which makes access to the clients 1000x (if not more) easier.
> DNS can be mutated.
There are other protocols besides HTTPS.
>Any auto update software which doesn't verify certificates has a major security vulnerability.
Granted, yes. That does not make it rare or unusual; look at the CVEs. There are many developers who write (or enable) auto-updaters who should not be responsible for that, given their understanding of security.