This entertaining essay captures things well without getting distracted by some of the really funny stories, many of which cannot be repeated as the protagonists are still alive.
Programming in Japan is typically a low-status job, considered a blue-collar occupation (Nintendo and SCEI are notable exceptions), despite Japan having produced quite a few globally notable computer scientists. I remember visiting a 5 Gen project in 1984. I was shocked by the open plan office (how did anyone get any work done?). Along the rows of desks there was one ASCII terminal for every two desks (to a KL-20 I assume, but I can no longer remember). I asked one of the developers what he thought of the environment. "Great: I only have to share the terminal with one other person".
The next year I myself was employed at MCC in their AI group (they had a database group and a CAD group, and perhaps one other) working for Doug Lenat and designing the Cyc system (when I got there they were working in Interlisp on Dandelions, not surprisingly, as Doug's office had been around the corner from mine at PARC). The first thing I did was toss the Interlisp implementation and redo it for a 3600.
5Gen and MCC actually were numbers 2 and 3 alongside another transformational hype effort, the Centre Mondial Informatique et Ressources Humaines (World Center for Computer Science and Human Resources), launched in France in 1983 on the back of an influential book by a major French intellectual, Jean-Jacques Servan-Schreiber. They had a KL-20, a Vax (with a copy of BSD I carried over there in my luggage), and a few CADR Lispms (two or four, I can't remember). With swanky offices in central Paris around the corner from the presidential palace and a lobby full of Apple IIs that anyone could walk in and try out, they didn't do much for the third world I'm afraid, idealistic goals notwithstanding, but they did launch quite a few very good computer scientists, so were probably worth the effort.
So I worked for CMIRH and MCC, and spent some time at 5Gen. Fun times.
A funny thing is that both MCC and the Fifth Generation Project were fairly unsuccessful at actually launching the future. My hypothesis is that it's because they were trying to plan research, which is to say, come up with a plan up front for what they were going to do, and then carry it out, rather than improvising based on what was learned in the process. I suspect the massive amount of planning and investment required by modern drug approval is a major cause of the drug industry's dramatic lack of progress in recent decades. Or, as the Goblin Queen put it in her comment below, when industry takes over science, we lose the science.
The 1980s computer systems research projects I think of as the most successful were dramatic advances in semiconductor fabrication, TCP/IP, microcomputers, RISC, Macintosh, Emacs, Postgres, HDLs, GCC, SGI, MOSIS, SUN, TeX, Perl, Plan 9, Cedar, generational GC, OOP, NASDAQ, Pixar/CGI movies, and wavelength division multiplexing. (Maybe my vision is a bit limited and provincial, and I'd be interested to hear what others think.) A lot of these were government-funded, but not with top-down goals. (It might sound ridiculous, but I suspect Perl may have been the most important advance to come out of JPL in the 1980s.) Others were industry-funded. Two were mostly funded by one stubborn idealist, at least until you got involved.
As someone who's worked both on very successful research projects (BSD and GCC) and unsuccessful ones (Cyc) as well as biomedical research, what do you think about what makes the difference? Was it predictable in 1984 that Japan would fail?
It seemed likely to me that Japan, and in particular MITI, couldn't continue on as they had been, but I was an outlier and so thought I was probably wrong. Don't forget the US was just emerging from the stagflation of the '70s, and while we have addressed that one macroeconomic phenomenon, we haven't made any progress on any of the things that led to it.
So, having said that, 5Gen was a "moonshot" from a bunch of bureaucrats who could see the writing on the wall, but who mistakenly let it get hyped. Compare that to ARPA, which used to place a whole bunch of batshit crazy bets that were clearly nonsensical yet were being made by legitimately smart people: that gave us the Internet, among other things. They also funded the infrastructure to get there (e.g. MOSIS).
Since then, (the now-named) DARPA and the corporate labs have lost interest in much blue-sky work. I don't agree that "when industry takes over science, we lose the science." Corporate research gave us the transistor, the semiconductor, the SEM, the WIMP interface, etc. It could again. Pharma is one of the few fields where fundamental research is still done.
(I wouldn't say regulatory approval has had much of any influence on the cost or output of pharma research, BTW; getting meaningful results is hard, and if you want drugs that are better than the standard of care you need to do that work. I do think the regulatory framework around marketing and reimbursement has had a much larger impact on which API candidates get advanced into programs.)
I'm biased in what I consider fundamental; I'd agree with you on TCP/IP, microcomputers, RISC, the Mac, generational GC (another corporate research result, BTW), and fab. (WDM is another corporate invention, about 100 years old.) Some of the others I consider less fundamental or less influential (Cedar? CLU was far more influential even if no real programs were written in it).
The fact is, research is hard, and the skills of basic research and the skills of translational research are rarely present in the same person.
Do you think the ARPA approach is better, or better for certain cases? The accounts I've read of the invention of WIMP and the transistor resemble "a bunch of batshit bets by smart people," if I have the story right, but I don't know anything about the SEM or semiconductors. (Weren't semiconductors Bose in the 1890s?)
A lot of my list clearly was pretty applied; I didn't mean to suggest they were all basic.
Cedar in particular I mentioned because it inspired Oberon, Java (né Oak), in some sense Xi (and ropes in lots of other systems), and maybe sam and the plumber, though maybe that was convergent Blit evolution. But certainly that impact is small compared to the LSP. (Were Smalltalk iterator methods convergent evolution or were they taken from CLU?)
I'm very surprised at your view on pharma. Sounds like my opinions need to change.
Well, life is a huge parallel process that requires a diversity of inputs, so you need blue sky, "stupid" stuff, boring spadework, and directed development, all at once. (And those definitions change: when fields go through their "stamp collecting" modes, as taxonomy/biology did in the 18th and 19th centuries, even that can be exciting. High energy physics is stuck there at the moment, again.)
So yes, we do need someone like the ARPA of old. Licklider, Bob Taylor, and the like are heroes in my opinion and I don't really see their like today.
I dunno if it was clear or not, but it wasn't really industry driving the research: it was the various governments, and, as you say, they were planning things. Well, sort of. That's not what a real technological plan is: in real life, when we try to build advanced things, we identify the lacunae and develop a plan to fix those. Just like Apollo and the Manhattan Project. This one was a giant punt into "shyeeet, maybe this will work." Sort of like all fusion research to date, except worse, because unlike fusion, the gaps in building a 5th gen computer are not even identifiable. It might as well be "we plan on being the leader in the antigravity industry by 2030."
Thanks for working on BSD and GCC (and Cyc, which is also pretty damned cool)!
I worked at MCC too, but in the human interface group. It was a lot of fun using Symbolics machines all day (the ultimate brain-to-computer interface). Despite nothing much coming from my work at MCC, I still got to write Lisp and Prolog for pretty much my whole career, and MCC set me on that path. So I'll thank them for that.
I'm also an MCC alum - I worked in the Software Technology Program. My code there had a surprisingly long life - transitioning to a spin out startup and then other companies. (See gIBIS https://en.m.wikipedia.org/wiki/Compendium_(software)#Histor... ). I too learned Lisp (and Emacs!) there - two tools that have stuck with me ever since. Many of us also started to learn Japanese there.
Thanks for sharing that. After visiting the MCC offices when you were just getting started, I always maintained a curiosity about your work, enjoyed playing with Open CYC, etc.
My first degree was in Japanese, mostly because I already knew how to program and had, since I was a home computer kid, been doing so for years. I did a Japanese major because Japan looked like it was going to take over the world in 1989, especially in technology. It was a foolish, foolish decision.
However, it gave me a ton of exposure to the 5th generation project, and this article is a wonderful reminder of how crazy it was, though it doesn't quite capture the insane hype around the whole thing.
There's an excellent, if imperfect, book called "The Fifth Generation Fallacy" by J.M. Unger (who, in a weird circumstance that makes me think the writers of our narrative are just plain lazy, became the head of the Japanese department I was in a few years later) that talks about the project in terms that I think make quite a lot of sense.
The book talks about it, but I go even further: the 5th gen project was about Japanese beliefs about information technology and specifically that Japanese was such a complex language that the only way to have working IT was to have an intelligent machine.
Japan was still using pencils and xerox machines in the aftermath of the PC revolution in the US and Europe (in practice, this continued into the mid-90s), and had missed mainframes and minis entirely. The idea of a statistically-driven, convenient, modern input mechanism (what people use now when they use an IME, these started to show up in the mid-90s when enough of a corpus was analyzed to make them work) was not there yet; Japanese word processors of that era are incredibly cumbersome and terrible, just barely usable and not usable at all by large swaths of the work force. Japan was absolutely terrified of the technological progress in the rest of the world and a lot of it had (and has) to do with the simple inconvenience of the writing systems and the linguistic isolation.
In the cultural context, especially of the bubble that formed at the same time, and no small amount of vaguely racist beliefs about their superiority (especially linguistic, which had been going on for a very long time and played a role in Japanese relations and eventual behavior post-annexation of Korea), the 5th gen project makes perfect sense.
Unger's book doesn't get the attention it deserves, maybe because he is a humanities guy. I just wanted to add that response to his book was (predictably) very hostile, much like with Hubert Dreyfus' What Computers Can't Do. Unger offered well-reasoned criticism early in the hype cycle, but it seems to have had no influence on how people responded to the FGCP.
>> If you want to read about the actual tech they were harping as bringing MUH SWINGULARITY, go read Norvig's AIMA book.

This was still the main AI textbook in 2014 when I did my MSc in the subject. It is what is rightly called a "seminal text" and there is not a single trace of Singularitarianism in it. It's not just "Norvig's book": its authors are Peter Norvig and Stuart Russell. Is Stuart Russell ignored because he is not at Google?
The article really needs to cool down on the hindsight triumphalism. A lot of people were very optimistic about AI in the '80s (as earlier) because significant and continued progress had been made. Scientific progress, the continuation of the work of the Logicians in the '20s and Church and Turing's work in the '30s. Sure, people in the industry just wanted to make a quick buck, as they always do. But interesting science was being done and it is a big loss that the AI winter disrupted its course. Not because it could have led to autonomous vehicles and conversational agents. What we lost in the winter was the opportunity to discover new knowledge. That is a tragedy.

So I really don't understand where all the schadenfreude comes from, in the article. A big commercial project failed and took with it a whole branch of computer science. Who, exactly, benefited from this?
Ragging on things that failed with the benefit of hindsight is a good way to sound smart. I have never liked that genre much.
5gen failed but Apollo, the Manhattan Project, DARPANet, the Bell Labs transistor project, and SpaceX reusable boosters (for a contemporary example) succeeded. This piece argues that the ambitious megaproject successes were more decomposable but I find this rationale to be an unconvincing post-hoc just-so story.
The conclusion I draw when I look over the history of innovation is that it's like a rare animal that almost never mates in captivity or one of those exotic fungi we can barely culture. We know how to set up conditions where innovation can happen: get smart people, provide capital, give them an ambitious but not impossible goal. Beyond this it's serendipity. Sometimes the muse pays a visit and sometimes it doesn't. The dynamics seem almost identical to what you see in art and music, and I don't think that's coincidence. Science and engineering are perhaps more like the arts than we tend to think.
5gen probably would never have achieved all its goals, but it might have achieved something really notable had the muse chosen to pay a visit.
This is my favorite comment in this whole thread. Looking back on big failures in history, it's too easy to point out the obvious mistakes or logical reasons why it failed, how unrealistic their dreams were.
That narrative doesn't take into account that notable successes in history, major innovations, were born from basically the same ingredients in a shared cultural context. The difference was (at least seemingly) often just a slight chance variation in conditions, being in the right place at the right time, working on an idea that happened to be ready to blossom.
I love the parallels you draw with exotic fungi, how sensitive these ambitious ideas are to its environment. There are so many ways it can fail, and the reason for success is often elusive and serendipitous.
> Science and engineering are perhaps more like the arts than we tend to think
This seems related to how great art and music are often born of a "cultural milieu", like the jazz scene, or Paris cafe culture in the 1920's that fostered poets, painters, musicians. I imagine Xerox PARC was like that, a vibrant intellectual subculture of creative people who worked in the medium of technology.
Speaking of the muse, I like the popular expression, "The elves have left Middle Earth." There's a kind of indescribable "magic" that can inhabit a place and time, pulling everyone around under its spell. Then, as mysteriously as it appeared, it's gone - leaving behind relics of wonders, art works or artifacts, that reason alone cannot explain.
> So I really don't understand where all the schadenfreude comes from, in the article. A big commercial project failed and took with it a whole branch of computer science. Who, exactly, benefited from this?
It wasn't a big commercial system; it was a big government system (you probably can't imagine the degree of shock and awe that greeted MITI people, or even the mention of the ministry, in the US). This was when Japan was going to destroy America and, goddammit, Something Must Be Done.
Of course as happens with such projects it all comes tumbling down, but usually with a whimper. Apollo is another good example: they hit their goal and then what? 5Gen was the more common case: just wasted away.
The schadenfreude comes not just from the central-planning hubris but from the American freak-out response to what was really hype and a squib. The author says the same thing is happening today, with the hype-o-meter at 11, again in the AI field, and again with an "Asian menace" on the horizon, this time China.
5Gen just rode a hype machine though; it didn't take the field out -- we could all see that coming anyway.
>> It wasn't a big commercial system; it was a big government system (you probably can't imagine the degree of shock and awe that greeted MITI people or even the mention of the ministry in the US).
Sloppy turn of phrase on my part.
I do understand the reaction of the US, and not only the US. I did my CS degree in 2007 and made heavy use of my university's library, which had a solid selection of Prolog textbooks, which I devoured. At some point I noticed that a few of those textbooks (the more industry-oriented ones) had prefaces stating that Prolog was a very important language to know because it was poised for world domination thanks to the FGCP. Curious, I dug a bit further and found that book, the one from the American journalist woman and the man from the semiconductor business. If memory serves, it portended doom unless the US spent a few billion overtaking the Japanese. Later I found out what happened after, but I got a pretty good idea of how people in industry in the US saw the whole thing, and it has helped me understand why logic programming was relegated to a mere niche in CS following the abandonment of the FGCP.
The similarities with the present hype cycle are not lost on me, either. However, this is what happens when industry takes over science. We lose the science. There is nothing here to celebrate, or laugh at. Like I say above, missing an opportunity to discover new knowledge because some idiots want to make a quick buck is a tragedy.
I wonder if the hangover from this is behind Will Byrd's difficulties in getting tenure despite his truly groundbreaking work with miniKanren. Maybe people just identify miniKanren with Prolog and dismiss it?
(I know it's bad form to complain about downvotes, but this one is really puzzling me. Does someone think Will Byrd doesn't deserve tenure? How could anyone think that?)
Is there a good summary somewhere of what miniKanren is, and of the difficulties in finding academic support, for those of us who work in industry and don't know much about the tenure system?
(FWIW I tried to help correct the downvotes. I personally believe very very few comments should be in the gray.)
As for what miniKanren is, there are a couple of books (one being Byrd's dissertation) and a lot of papers. I think maybe the best summary I've read is actually the interview transcript at https://www.infoq.com/interviews/byrd-relational-programming.... But also, there's a summary on http://minikanren.org/, with links to the papers and talks, and an older summary on https://kanren.sf.net/. Most of these are a bit hard to understand the significance of if you aren't already familiar with logic programming. The talk on https://www.youtube.com/watch?v=OyfBQmvr2Hc is a popular description of one of the most astonishing results; I haven't watched it but people tell me it's good.
As for understanding academia, yeah, I don't know.
I think people today have a very hard time understanding what things were like during the Japan bubble from about 1987 to 1990. Coming out of the 1970s, Japan ended up dominating certain parts of electronics, especially personal electronics like radios and portable cassette players. They had failed terribly in _information technology_, however, but most people didn't know that; what they saw was Walkmen, MITI coordinating the wiping-out of American DRAM and television producers using dumping (among other things), and so on.
Another important event was that the US dollar was very high in the mid-1980s (Reagan deliberately devaluing the dollar was a frequent topic on the morning news). A side effect of this was the transfer of investment to other shores, with Japan the primary beneficiary. The BOJ dropped interest rates a great deal, and a speculative asset and property bubble was launched. This culminated in the purchase of many, many high-profile assets in the United States by the Japanese keiretsu and banks.
The Japanese were in a mania, and a lot of investors world-wide were as well, and the Americans, in particular, were experiencing a kind of crisis of faith in US-style capitalism.
Like most bubbles (try explaining the dotcom bubble), it is very hard to make someone understand what happened. You can recount the facts but not the emotional context.
James Burke had a climate special (After the Warming) in about 1989-90 where his ideal future was the Japanese running things in a world government. Seems to have captured the zeitgeist and the present (creepy to hear the bit in the beginning though about "full-scale evacuation of New Orleans").
I think this deserves an HN post of its own. Absolutely fascinating to see so much basically correct (climate refugee crisis, massive efficiency gains, an enacted carbon tax, free-falling PV prices, my efficient smart home setup) alongside so much off target (widespread telecommuting! 9 billion population! A Japanese-run world government!) that seems preposterous now.
I dunno, didn't look like it got anything right to me. 6 degrees of warming and two feet of ocean, though we're only a billion and change away from 9 billion.
Since when do we have a climate refugee crisis? People go to the tropics on vacation; it's fine there.
I think blaming the present-day migrations on "climate" somehow is "citation needed": they're very obviously (to me) caused by political problems and wars the US fomented. Blowing up Libya and Honduras were shitty ideas.
His optimism is touching. The bit about carbon trading struck me. Lately I've been questioning the dogma of one man, one vote. What if the only way to democratically resolve climate change is to inversely weight your vote by the amount of carbon you emit?
One of the hilarious things about 5th generation computers is how certain people were about all this.
More like painful. I went through Stanford in the mid-1980s, about when it was clear that expert systems were only marginally useful and strong AI was not around the corner. Many faculty were in serious denial about this. You can write rules for an expert system, and what you get out is pretty much what you put in. Too many rules, and you're likely to have contradictions and the inference system gets confused. The logicians were trying hard to fix that, which generated a lot of papers with obscure notations.
Eventually we did get massive parallelism, but it doesn't look anything like the Thinking Machines or LISP machine approach.
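For anyone who never used one of those shells: the rules-in, rules-out problem is easy to demonstrate with a toy forward-chainer. This is a minimal sketch of the general technique, not any particular 1980s system; all the rule and fact names are invented for illustration:

```python
def forward_chain(facts, rules):
    """Apply if-then rules to a set of facts until no new facts appear.

    Each rule is (set_of_premises, conclusion). This is the core loop of a
    1980s-style production system, stripped of conflict resolution.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Invented rules: what you get out is pretty much what you put in.
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "isolate_patient"),
    # Two rules added independently that fire into contradiction:
    ({"has_fever"}, "send_home"),
    ({"isolate_patient"}, "not_send_home"),
]

derived = forward_chain({"has_fever", "has_rash"}, rules)
# Both "send_home" and "not_send_home" are derived; nothing in the
# engine notices, which is roughly how the inference system "gets confused".
```

With enough rules, contradictions like the last pair become hard to avoid, which is what kept the logicians busy.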
>This was still the main AI textbook in 2014 when I did my MSc in the subject
FYI, seminal books don't rotate in and out over a five-year timespan; it is still the seminal book, and will likely continue to be, given all the fuel being burned on neural networks that generate more heat than light.
I had an intro AI class for which Leslie Kaelbling got use of a draft of Russell&Norvig (it was 2 thick stacks of photocopies, with comb bindings), before it became famous. Good call, LPK. Sadly, I donated&sold all my books when I did an extreme-minimalism housing move after grad school, or I'd still own this collector's item.
Wow that is cool, and I feel bad for you because that would be a cool collector's item! Though to be practical, I do prefer nicely bound and polished publications rather than the photocopied and combly bound.
I think Locklin is saying that AIMA is a good source for understanding GOFAI, and that GOFAI was touted as on the verge of achieving AGI, not that AIMA touted GOFAI as being on the verge of achieving AGI.
I agree with your distaste for the self-congratulatory tone.
> somehow they thought optical databases would allow for image search. It’s not clear what they had in mind here
Oh, well, it turns out that focusing a spatially modulated light beam through a lens gives you the Fourier transform of the original spatial modulation at the focal plane. So if your lens has a focal length of 30 cm you can calculate a Fourier transform of an image of whatever size (1024x1024, 2048x2048, 4096x4096) in 1 ns. This is about six or seven orders of magnitude faster than you can do it on a modern CPU core, and still about three orders of magnitude faster than you can do it on a NVIDIA Turing. It was about eleven orders of magnitude faster than you could do it on digital hardware at the time of the Fifth Generation Project.
But you lose the phase information (as far as I know) so you can't reverse the process. So what's it good for?
Correlation, that's what. If you have a photo of a tank from some angle and scale, and at some set of phase shifts it has a significant correlation with some other image, that probably means that tank is in that image. You can compute the correlation (disregarding phase, so it's an upper bound rather than the actual correlation) with an electrical SLM, which is slower than the lens but still a lot faster than a VAX. Or an i7.
So it was a plausible attack on the problem of image recognition using massively parallel analog computation. It hasn't panned out, but we had to try it to find that out.
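The correlation trick has a direct digital analogue, for anyone who wants to see it work. A minimal NumPy sketch (function name, image sizes, and the tank-free test setup are mine; also note the optical correlator described above has to disregard phase, while this FFT version keeps it):

```python
import numpy as np

def correlation_peak(scene, template):
    """Locate a template in a scene via FFT-based cross-correlation.

    The lens computes the Fourier transform "for free" at the focal plane;
    here we pay for it with two forward FFTs and one inverse.
    """
    # Zero-pad the template up to the scene size.
    padded = np.zeros_like(scene, dtype=float)
    th, tw = template.shape
    padded[:th, :tw] = template
    # Matched filter: scene spectrum times the conjugate template spectrum,
    # transformed back, gives the circular cross-correlation surface.
    corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(padded))).real
    # The brightest point of the correlation surface is the best match.
    return np.unravel_index(np.argmax(corr), corr.shape)
```

A 4096x4096 pair of FFTs is milliseconds on a modern CPU, which is exactly the gap the 1-ns optical version was meant to close.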
"I had friends on that deathstar," basically: I was working in the UK JANET community, with people I knew doing Alvey-funded 5th Generation work. The projects had a very strong "catch up or we will lose the war" feel to them, because the Japanese were pumping money into things which made ICL, Olivetti, CII-Honeywell-Bull, Ferranti, and GEC very nervous. Domestic strategic outcomes were vested in having strong, world-comparable computing manufacturing: what if they "out-thought" us?
The one I can remember (from 40 years' distance) is the GEC 4000 project at Newcastle University, which included GEC-Plessey and the Newcastle, York, and Glasgow Universities in a joint computing project. The GEC 4000 was targeting (hah) submarine and AWACS contexts, and this was a huge activity to get ahead of some issues in computing for synthetic radar. (Which, btw, GEC had some lead in: the work they did in the prior two generations of computing made British radar work world-class, which is in part why British materiel sells so well into many armed forces. It also explains why Schlumberger and the like were into the same hardware: synthetic aperture radar is the same problem as oil shockwave exploration. Guess which British companies sold to Oil? Ferranti (Argus) and GEC/Plessey.)
The standing joke about the GEC system was that it had a real "halt and catch fire" instruction, because thermal management on the CPU was out of hand and it needed constant monitoring to prevent fire risk in the machine room.
The UK also had two rounds of "AI is the solution to every problem"; remember the Lighthill report of the 1970s, which presaged many of the same pushbacks in the 1980s and 1990s on ubiquitous AI: the proponents made far too sweeping claims and got a rap over the knuckles about cost-vs-benefit, and the money was swept up.
Alvey made it very hard to fund non-Alvey projects. This kind of thing turns collaborative science into competitive science, with toxic qualities. Overall, I judge Alvey harshly.
BTW I was a very very junior CompSci research programmer: If this stuff leaked into first-job-out-of-Uni level staff, imagine how loud it was around the Professorial table?
It is so easy to make fun of the failed projects and "predict" the past, but there were other government projects which ended up working and which we now see as obviously right just because we know how history went; see for example ARPANET.
If the Fifth Generation project had worked to some degree, it would now have some status; and if ARPANET had failed, it would have been forgotten. This is how Anglo-Saxon culture often works: making fun and parodies of competitors' failures, etc. The same rhetorical approach is often seen in other parts of history.
I love history as much as the next guy, and this is exciting writing, but the issue I have with this is that we haven’t seen the end.
That is to say, AI is an unsolved problem, and we don’t really know what the solution will look like. We are in a (the?) golden age for statistical learning but in the end, symbolic processing and prolog like architecture might turn out to be the key.
I will point to someone much smarter than myself — Jerry Fodor — to make the arguments. But basically it amounts to this: intelligence is the ability to reason and our best model for reasoning is linguistic.
Perhaps the author has only worked on simple, predictable-outcome projects, but in my experience (personal, and from generally observing our industry) failure and restarts are often the path forward to success.
I lived through the period of AI in the 1980s, have been a vendor of expert system tools for Lisp machines and the Apple Macintosh, and when MCC was being founded the founder Admiral Bobby Inman gave me a tour of the place.
I also joke about the outlandish claims and thought the AI winters were well deserved but I also believe that the state of our field and business now is very good and those rough times in the 1980s and 1990s are what helped get us to where we are today.
>Saying you’re going to build an intelligent talking computer in 1982 or 2019 is much like saying you’re going to fly to the moon or build a web browser in 1492.
Alexa and Siri do a pretty good job of talking, and are almost up to the task of hearing. They are command-and-search interfaces only; the "intelligence" in them is very limited (hardly there), but it is clear that there is a jump between 1982 and 2019. These devices do exist, and to deny their reality casts a lot of doubt on the other perspectives presented.
Well, you should go read the original 5th gen specs; they're all on file sharing sites. They were not talking about Siri (which is a terrible example, and something that was basically possible even way back in the '80s using vector quantizers and HMMs). They were talking about Star Trek or HAL 9000 type computers that could understand meaning and do human-like abstract design work, beyond "hey google spyware, please turn on the air conditioning."
In Japanese pop culture of the '80s and '90s, you see references to 5th gen computing now and then, just as part of the technobabble in some sci-fi show. Watching sci-fi anime, once I heard it a few times I started to suspect it might be a real thing, and looked it up.
If anyone has a recommendation for a book or long article that covers this, some of the people involved, what they actually achieved, etc., I'd be interested.
In my opinion that is the stuff which influenced sci-fi anime.
Imagine... a specification exactly describing a common platform with different levels of capabilities, not down to the bits but more like an API, where anybody is free to implement it in whatever way one sees fit, as long as it conforms to the API.
Mandatory interoperability between products of different vendors, be it hard- or software.
FAST!
All networked.
All with code at least a magnitude smaller than contemporary counterparts.
So no BLOAT.
That was the spirit of the time there, I guess.
But the powers of Babylon intervened again, I guess.
We still talk about video game consoles (which I can't help but notice are predominantly sold, to this day, by Japanese manufacturers and designers) as being issued in generations, too.