For Kerberos, I was a minor foot soldier in the Crypto Wars.
Back in the mid 90s, Kerberos was restricted from export due to ITAR regulations (I had thought ITAR was obsolete, but just this year we had trouble purchasing some hardware in the USA due to ITAR restrictions).
Since, IIRC, Kerberos could be downloaded from MIT, it's not clear how well these restrictions worked.
Nevertheless John Gilmore and I (maybe proposed by another of the Cypherpunks?) decided that we should get an ITAR-legal version of Kerberos in global circulation.
Before I ever left or contacted anyone about this, a policy lawyer blessed it, and even talked about it with some people from the Commerce Dept, so everything was legal and above board. I'm pretty sure NSA knew about it too (this was back when NSA took Information Assurance seriously -- I think back then it might even have been a whole directorate).
The first step was to make a version with all the encryption stubbed out. That "worked" in that two copies of this code would connect to each other etc., with all communication in the clear: no encryption at all, and no pretend-encrypted anything either. I think it was OK to take a password, copy it into another buffer, and send the contents to the other side of the connection, but honestly I don't remember and actually (see below) don't know.
I then went to Switzerland, where I arranged for a local cryptographer to take our neutered code and make it compliant with the published papers on Kerberos. At that time I knew nothing about Kerberos's encryption and hadn't even looked at the source (neutered or original), so I wasn't even accidentally "exporting" anything in my head and could be sure I couldn't say anything to make the work easier. It was literally "can you make this package work as described in the literature".
After they got it working, they connected to some sites in the US and all worked properly. And then we "imported" the Swiss version and put it on our FTP server at Cygnus (Cygnus paid all the bills too) for other US people to use. Now there was a single copy of the source that could be used to protect people anywhere. Until then, even US companies couldn't use it to communicate with overseas divisions, even ones staffed entirely by US persons.
The Swedes also started their own from-scratch implementation, known as Heimdal (https://github.com/heimdal/heimdal), which has a bunch of nifty things in it including a from-scratch PKIX/x.509 implementation and a from-scratch ASN.1 compiler and library.
As far as I know, NSA still has an Information Assurance Directorate, they just seem to focus on government, military, and government and military contractors' information security, rather than the general public's.
A big contribution of the cypherpunks and hacker community has been cultivating the intuition that everybody deserves information security (even if information security efforts for the general public aren't always very well-funded). (And thank you for contributing to that!)
Not that the original article has anything to do with Kerberos other than design-by-analogy (not to knock the article, it's describing an interesting thing, and I'm always up for 90s nostalgia regardless of cause :-) but I think that version was more "MIT figured out that the change from ITAR to BXA and the published-code category made it reasonably safe for them to put anything up with a warning and possibly a registration". (CNS is the v4 release we shipped, which mostly got folded back in as an MIT krb4 patch release as well; we then shipped KerbNet which was V5 based, until we dropped the project entirely.)
Yeah, I went to Switzerland on one of your follow-up trips to meet with the arms-length-contract cryptographers. (Which was legal at the time, but a loophole that got closed a year or two later because Sun's "ELVIS+" project for funding a Russian team to do non-US crypto got too much attention...)
I worked at a commercial Kerberos vendor in the 90s. We had a lot of work to do to clean up after the various student projects and partial refactors that inhabited the codebase.
The single biggest pain point in deployed systems in the 90s: lack of reliable system time on authenticated clients (mostly PCs), leading to clock_skew errors.
Kerberos was clearly built for organizational networks with multi-user Unix hosts running software built from source, and a willingness to customize their services. The inbuilt support for major packages was often stuck on v4 even though v5 had been out for a while.
Kerberos was not built for the internet, or poorly-administered PCs, or closed-source commercial software services (which required painful kerberization).
> The single biggest pain point in deployed systems in the 90s: lack of reliable system time on authenticated clients (mostly PCs), leading to clock_skew errors.
Kerberos can be used to get time from the KDCs though! Sure, MIT's grad students didn't build a program for doing that, but they could (and should) have. Say you send an AS-REQ or TGS-REQ w/ incorrect time to the AS/TGS; you get back a KRB-ERROR telling you that your time is wrong, but also telling you the KDC's time. Well, OK, the KDC's time would not be authenticated, but you'd first get an initial ticket (AS), then you'd do this again using a TGS exchange, and if it works then you know you got a KDC time reading that's good enough. (You'd want to do this mainly with randomly-generated service keys, not with user passwords.) Now with the FAST extensions you get authenticated time.
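That bootstrap dance can be sketched in a few lines. The helper below is hypothetical (not from any Kerberos library); the `stime`/`susec` names follow the KRB-ERROR fields in RFC 4120, and a real client would wrap this in the AS-then-TGS retry loop described above:

```python
import datetime

def kdc_clock_offset(local_now, kdc_stime, kdc_susec=0):
    """Offset to add to local time to approximate KDC time.

    kdc_stime/kdc_susec mirror the stime/susec fields a KDC puts in a
    KRB-ERROR (e.g. on a clock-skew error); names per RFC 4120.
    """
    kdc_time = kdc_stime + datetime.timedelta(microseconds=kdc_susec)
    return kdc_time - local_now

# Sketch of the retry loop from the text (as comments):
#   1. send AS-REQ; on a skew KRB-ERROR, compute the offset above
#   2. re-stamp the request with local_now + offset and retry
#   3. repeat over a TGS exchange; if it succeeds, the offset is
#      good enough (and with FAST, the time is authenticated)
```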
Microsoft had a wonky, crappy NTP-secured-with-Kerberos scheme, too.
The VxWorks port I did (as a Cygnus contract for some part of NASA that had gotten hit with a "security mandate") used basically that trick, because we couldn't assume the hardware had any realtime clock at all, so we had to stash something and work with a tick counter. Hmm, https://web.mit.edu/eichin/www/embedded-kerberos.html mostly talks about porting and doesn't mention the clock problem - that was the right project though.
We had a ticket replay cache built around comparing times. I cannot recall whether the client time was a part of it.
But the reason I bring it up is that while Kerberos could be (and was) extended to the internet and trash PCs, it came from the same era as X-Windows and LDAP. The paradigm was that systems were large and reasonably well administered, basic trust was established at the host level, and the real problem was users.
> The paradigm was that systems were large and reasonably well administered, basic trust was established at the host level, and the real problem was users.
Well, not quite. The problem was that all the orchestration code ("domain joining", etc.) just wasn't written in the 90s at all. Eventually several orchestration systems were written, with varying levels of functionality:
- Windows 2000 (orchestrates only machine account and SPN credentials)
- OSKT (https://github.com/elric1)
- Wallet (https://www.eyrie.org/~eagle/software/wallet/readme.html)
- FreeIPA
A proprietary implementation I'm familiar with uses Heimdal's "virtual service principal namespaces" to orchestrate service credentials with no KDC writes, making "Negotiate" Kerberos operationally indistinguishable from JWT bearer tokens in that system.
JWTs don't require a library or API, while Kerberos does. JWT is easy enough to implement using only a straightforward crypto library (though one should be super careful, as it's very easy to accidentally not validate a JWT's signature!). Kerberos is not. Validated JWT claims are just JSON, and it's trivial to stuff extensions in there. Kerberos tickets are extensible enough, but getting at those extensions is a bear.
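To make the "straightforward crypto library" point concrete, here is a minimal HS256 verifier sketch using only Python's stdlib. The function names are mine, not from any particular library; note the explicit `alg` pin, which is exactly the check that's easy to forget:

```python
import base64, hashlib, hmac, json

def b64url_decode(s: str) -> bytes:
    # JWT uses unpadded base64url; restore padding before decoding
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify_hs256(token: str, key: bytes) -> dict:
    """Validate an HS256 JWT and return its claims.

    Sketch only: real code must also check exp/nbf/aud claims, and must
    never take the algorithm from the attacker-controlled header.
    """
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    if header.get("alg") != "HS256":          # pin the algorithm
        raise ValueError("unexpected alg")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(key, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload_b64))
```

There is no Kerberos equivalent of this twenty-line sketch; parsing a ticket means ASN.1 DER plus the RFC 3961 crypto framework.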
Well, secured with the machine secret (which happened to be also the Kerberos long-term key). There was no Kerberos in their SNTP protocol extension, at least when I last looked at it.
There was Kerberos in their SNTP! It was just NTP with 1DES auth keys. Which 1DES auth keys? The ones derived from the machine account password for this purpose.
I remember connecting my Linux Kerberos client and having issues with clock skew; anything over 5 minutes would stop the client from working. But I did have a Linux machine which would connect to the school's Kerberos and access the network from my house, instead of having to go to MU3NE (media unit 3, floor north-east). This brings back a lot of memories. I did still go, but mostly to hang out. Kinda similar to why people go to the office these days. I guess things don't change.
cryptonector and I also specified a Kerberos-like protocol [1] based on Mozilla's defunct BrowserID protocol. I demonstrated it as a drop-in replacement for Kerberos with Exchange, having built a Windows SSP and credential provider as well as a GSS-API implementation (the latter of which is open source [3]).
Unfortunately after Mozilla cancelled Persona, this never went anywhere.
Very cool. Back in ancient times, shortly after Kerberos for GSS-API came out, I was tasked with achieving interoperability between Hewlett-Packard's GSS-API implementation (based on MIT's) and popular commercial ones (Cygnus? Microsoft?). It was my introduction to the subtle art of debugging crypto protocols, handy when SSL came along. Implementing to the spec doesn't necessarily mean they'll interoperate. This brings back some memories: https://web.mit.edu/kerberos/krb5-devel/doc/appdev/gssapi.ht...
Yes, I've been seeing Needham-Schroeder as a possible optimization in a PQ world, but in order to make it so we'll need a true Kerberos-ish infrastructure.
I've played a lot (in my mind, mainly, though I've done some implementation work) with using PKINIT and DANE to build something like the PKCROSS that never happened. I.e., we could use PKI (DANE preferably, as opposed to x.509) to bootstrap Kerberos symmetric trusts as needed / on demand.
There's a lot to think about before jumping in, like:
- can we simplify Kerberos V5 and/or fix various problems in it, or start from scratch (and simpler)? (IMO: start from scratch)
- think that symmetrically-encrypted JWT is a lot like Kerberos' AP protocol
- we should strive to have an RFC 4559-style bearer token profile that can be used with much less code than Kerberos is traditionally implemented with
- we should strive to have Needham-Schroeder in TLS so that we don't need all of the awful krb5_...() APIs nor GSS-API
- though keep in mind that GSS-API is a good basis for a TLS API (that's what SChannel is: TLS as an SSP using the SSPI, and SSPI is the MSFT equivalent of GSS-API)
- let's make sure there's JSON-ish extensibility in token/ticket/whatever metadata
- let's make sure various Kerberos V5 mistakes get fixed / not remade
> Revocation by CAs is easy and can be immediately effective.
This is also true with DANE for the same reason: you're relying on a DNS RR being removed, both from its zone and from caches (so, low TTLs, though not excruciatingly low).
It's also true in Kerberos V5 because tickets are typically short-lived. PKIX could do this too (short-lived certificates).
> The client finds three of these CA records where the UUID matches a CA that the client trusts.
I'm really not fond of the idea that we should repeat the non-hierarchical mistakes of WebPKI. I'd rather we find a way to live with the DNSSEC PKI and increase trust in it by making it harder to subvert:
1) always use QName minimization when making DNS queries,
2) add CT to DNSSEC,
3) make clients detect and inform users about unexpected changes in zone pubkeys.
(1) means that an MITMing root zone or TLD zone doesn't get to know [from the query] the full name you're trying to resolve, which means they need to commit to MITMing before knowing how interested they might be in MITMing, and then this forces them to stay in the middle.
(2) is obvious: try to catch the root and TLDs MITMing.
(3) is also about clients noticing possible MITMing.
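To make (1) concrete, here is a sketch of the query sequence a QName-minimizing resolver (RFC 9156) could send. It assumes, unrealistically, a zone cut at every label; real resolvers discover the actual cuts dynamically:

```python
def minimized_queries(fqdn: str) -> list[str]:
    """QNAMEs a minimizing resolver sends, shortest first: each zone
    along the path sees only one more label than its parent did."""
    labels = fqdn.rstrip(".").split(".")
    return [".".join(labels[-i:]) + "." for i in range(1, len(labels) + 1)]
```

So for `www.example.com`, the root only ever sees `com.` and the `com` servers only see `example.com.`: a MITM at the root or TLD has to commit before knowing whether the full name is one it cares about.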
This isn't a rebuttal, because it's good to want things, and DNSSEC would be better with binding transparency, but the reason you don't have DNSSEC CT isn't that nobody has designed it yet, but rather that Mozilla and Google forced CAs to adopt CT and refuse to accept certificates without SCTs. No such pressure point exists to force DNS registrars --- which are generally controlled by state entities --- to adopt such a thing.
Registrars will do what the registries say. Registries typically implement DNSSEC even w/o state mandates. I expect that root and TLD zones would support SCTs eventually. I also expect that there are not many MITM attacks in DNSSEC by nation-states.
The single most important thing here is QName minimization, because it will force MITMing zones to commit to being and staying in the middle early, and that's very risky for them. QName minimization does come at a price: extra DNS round trips, though most of those will be very cacheable and have high cache hit rates; cold caches still hurt.
Why would .US or .IO support mandatory SCTs? These domains are under the de jure control of governments that routinely subvert the DNS for intelligence and criminal-justice reasons. It's not like the CAs volunteered to do mandatory CT: they had to get dragged into it, because Mozilla and Chrome could credibly threaten to dis-trust them. They can't do that with the DNS.
Because businesses that don't want those zones to MITM them will move to zones that do support SCTs. It's not as strong a pressure as Mozilla and Chrome arm-twisting WebPKI CAs, as sovereigns can resist longer, but eventually it will happen.
Just so we're clear, this is like saying that CT would have happened on its own, just because of market pressure from certificate customers, without Mozilla or Google intervening.
I'm not sure how CT for DNSSEC would happen. If we had a spec and implementations then we could get market pressure applied. Mozilla and Chrome might have security indicators on their browser that incentivize the use of TLDs that provide better security. And again, I think DNSSEC already has things that WebPKI doesn't have that give it some advantages in terms of security. CT is not a panacea either, since it requires that people pay attention to what CAs issue certificates for.
UofM popularized the use of Kerberos in academic settings. IIRC, Dug Song added support for Kerberos to SSH, and some other people in Honeyman's group (CITI) connected Kerberos to Andrew File System (AFS). But we'd have to ask honey to know for sure.
Yes, but Microsoft really did do the missing integration of Kerberos V5 into the rest of the system (LDAP and the login system). MSFT started this work ("NT5") in 1995, just a bare few years after MIT published the first implementation of Kerberos V5 (though Kerberos IV came half a decade earlier still). That took MSFT ~5 years to complete, and they shipped this as Windows 2000.
Sun Microsystems, Inc. (RIP), completely failed to respond appropriately to that, and Active Directory ate Sun DS's lunch eventually.
I was at the Windows 2000 unveiling at UofM where the copied tech was shown, and I had this feeling I'd seen this before. Not sure about the downvotes here?
LDAP is a UofM innovation. Tim Howes, a Peter Honeyman student, wrote his dissertation about it, and then went on to found Loudcloud with Ben Horowitz.
This is a nice reminder that symmetric cryptography is PQ-safe, but barebones Kerberos does not solve the initial authentication problem - the exact purpose of a TLS handshake.
Kerberos relies on the client and the Kerberos authority (KDC) having a pre-shared password; the KDC uses it to send encrypted messages to the client, and also to give the client a message encrypted for a service the client wishes to talk to (these are Kerberos tickets). In that sense, Kerberos does look a lot like a trusted third party facilitating a TLS handshake with all-symmetric cryptography! A very neat analogy.
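A toy sketch of that trusted-third-party flow, just to show the shape of it. The "encryption" here is a hash-keystream stand-in (NOT Kerberos' real cryptosystem, and not secure), and all names are hypothetical:

```python
import hashlib, json, secrets

def toy_encrypt(key: bytes, msg: bytes) -> bytes:
    """XOR against a hash-derived keystream. A stand-in for Kerberos'
    real authenticated encryption -- illustration only, NOT secure."""
    stream = b""
    ctr = 0
    while len(stream) < len(msg):
        stream += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(msg, stream))

toy_decrypt = toy_encrypt  # XOR with the same keystream inverts it

# The KDC shares a long-term key with every principal (made-up names)
LONG_TERM = {"alice": secrets.token_bytes(32), "http/web": secrets.token_bytes(32)}

def kdc_issue(client: str, service: str):
    """KDC step: mint a fresh session key; return (a) that key encrypted
    for the client and (b) a 'ticket' encrypted under the service's
    long-term key, naming the client and carrying the same session key."""
    session = secrets.token_bytes(32)
    for_client = toy_encrypt(LONG_TERM[client], session)
    ticket = toy_encrypt(LONG_TERM[service], json.dumps(
        {"client": client, "key": session.hex()}).encode())
    return for_client, ticket
```

The client decrypts `for_client` with its own key and forwards `ticket` untouched; the service decrypts the ticket with its key, and both ends now hold the same session key without the service ever talking to the KDC, which is the all-symmetric "handshake" the analogy points at.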
It would be ludicrous to require all clients to have a pre-shared secret password with the CA, and to require the CA be an available participant in every handshake in an uncacheable way. Still, it's fun to imagine this.
Anyway, some notes for completeness: there does exist a PKINIT extension for Kerberos, where an X.509 certificate from a trusted authority authenticates the Kerberos authority (KDC) to the clients, and authenticates the clients without a password. This relies on a signature system, so that's out of scope if we're talking about ditching TLS in a PQ setting. I suppose one could imagine distributing Kerberos keytabs instead of certificates.