Cloning a 6502 Apple-1 in just 930 logic gates (obsolescenceguaranteed.blogspot.com)
121 points by gioele on May 10, 2020 | hide | past | favorite | 45 comments


The C74-6502 is somewhat similar: a 6502 built from 74xx chips that runs 20x faster than the original 6502 (20 MHz vs 1 MHz).

And it's correct enough that he can drop it into both a VIC-20 and a C64, and it works.

https://c74project.com/

About 130 74xx chips vs the Gigatron's 33, and SMT instead of through-hole.

Edit: a link to an old-school webring of other homebrew (mostly TTL) logic CPUs: https://www.homebrewcpuring.org/ringhome.html


There is also a discrete-transistor version that runs much slower than the original 6502, but with the advantage of an LED on each signal, so you can watch it run.

https://m.youtube.com/watch?v=tQIwS2GzXLI

See also (not a 6502 though)

http://6502.org/users/dieter/mt15/mt15.htm

which runs at nearly half the speed of a real 6502.


"It needs just 930 logic gates (packed into 33 standard 7400-series ICs) to create a computer that beat 'complex' 1980s home computers like the VIC-20 in terms of both CPU power and graphics."

Given that 7400 chips existed at the time, why did no contemporary microcomputer go this route? Would it just have been cost prohibitive?


There were commercial computers that did. The PDP-8, as one example (not 74xx, but similar).

Edit: The Xerox Alto CPU was built with 74181 chips.

Cost, power consumption, space, etc, are all disadvantages. I assume the Gigatron is using modern 74HCxx CMOS chips, which consume less power than the original 74xx TTL chips.


Yes, 74HC didn't exist yet, and 4000-series CMOS was slow. Plus the PCB, assembly, and test costs for 40 DIP packages are significant.


Typical discrete-logic designs were much larger than the Gigatron, so they're not the same. Using 74LS, the single board draws 2.5 W.


The short answer is that computers with CPUs made from 7400 series TTL existed, but they weren't "micro".


The Kenbak-1 might be considered an exception to the rule: http://www.kenbak-1.net/index.htm


The simplicity of the Gigatron is compensated for by its large memory, which was extremely expensive back then.


This is a much more profound insight than is readily apparent. It is really easy to build a Turing machine -- except for the tape. You could probably build a universal TM in a lot less than 930 gates. All hardware design is essentially nothing more than taking common design patterns and moving them off the tape and into the engine, so that the tape can be more compact and the machine can run faster.

This is the main reason I personally find actual hardware re-implementations of old machines to be fairly uninteresting. There's really very little sport in it once you know the trick.


74 series gate golf to get the most compact hardware brainfuck implementation:

http://grapsus.net/74/


I have to admit that is pretty cool in a sick and twisted kind of way.


It doesn't need a large memory if you remove the video, or if you change it to B/W.


930 logic gates + 1K of microcode ROM (so how many gate equivalents is 1K of ROM?), and it only runs at 1/4 the speed of a 6502.


>"Turning 930 logic gates into a working 6502 Apple-1 clone? Like the venerable IBM 360/30, the Gigatron uses a form of microcode to elevate its spartan eight hardware instructions into a comfortable instruction set you can live with. Like the 8-bit IBM 360/30 CPU, the Gigatron normally pretends to be a 16-bitter using its microcoded instruction set. Unlike the IBM, though, the Gigatron's instruction set is not compatible with anything else.

Which sparked a discussion: could the microcode also contain a 6502 compatible instruction set? That would prove that a 6502 compatible system could be done with much, much less hardware, even back in the 70s.

Short answer: yes. In fact, you can make it into an entire Apple-1 clone without the use of a 6502.

[...]

Marcel wrote the Gigatron's 6502 microcode quickly (no bugs detected so far) but wrapping the Apple-1 around it took about a year. The machine has become dual-core: you either use its colourful native vCPU microcode to embarrass 1980s home computers, or you boot it into 6502/Apple-1 mode to demonstrate how a compatible Apple-1 including all its display hardware can be done in only 930 logic gates. Hmm!

The 6502 microcode takes up about 1K of ROM cells, and could fit inside a fast late-70s ROM. But the Gigatron cheats a bit by using a biggish 128K EPROM from the 1980s. That leaves enough space to tuck in the 6502/Apple-1 microcode next to all the other features of the latest Gigatron v5a ROM."


>"Like the venerable IBM 360/30, the Gigatron uses a form of microcode to elevate its spartan eight hardware instructions into a comfortable instruction set you can live with. Like the 8-bit IBM 360/30 CPU, the Gigatron normally pretends to be a 16-bitter using its microcoded instruction set."

Could someone elaborate on how exactly the IBM 360/30 and the Gigatron "elevate" their eight hardware instructions into a larger ISA via microcode?


If you want to implement a complicated computer you have two choices: you can create a complicated logic circuit that does what you need (this is called "hardwired") or you can create a far simpler computer and program it to emulate the complicated one (this is called "microcode").

https://en.wikipedia.org/wiki/Microcode

Since the simple computer will only ever run one program, the emulator for the complicated computer, it can be very specialized. That makes a microcoded IBM 360 very efficient compared to a Z80 emulating an IBM 360, for example. Normally microcoded machines have a tiny special memory to hold the microcode program, from less than a hundred to a few thousand words in size. The Gigatron has its "microcode" in the EPROM while the emulated code is in the RAM.

https://gigatron.io/?page_id=482
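To make the idea concrete, here is a minimal sketch in Python (purely illustrative — this is not the Gigatron's actual microcode, and the op names `LDI`, `ADD16`, `HALT` are invented): a simple machine with only byte-wide operations whose fixed "microcode" interprets a richer 16-bit virtual instruction set held in RAM:

```python
# Illustrative sketch: a simple CPU whose one fixed program (the
# "microcode") interprets a richer virtual ISA stored in memory.
# All instruction names here are made up for this example.

def run_vcpu(program, steps=100):
    """Interpret a tiny 16-bit virtual ISA using only 8-bit-wide ops."""
    acc = 0          # 16-bit virtual accumulator
    pc = 0           # virtual program counter
    for _ in range(steps):
        if pc >= len(program):
            break
        op, arg = program[pc]
        pc += 1
        if op == "LDI":        # load immediate
            acc = arg & 0xFFFF
        elif op == "ADD16":    # "microcode" does two 8-bit adds + carry
            lo = (acc & 0xFF) + (arg & 0xFF)
            hi = (acc >> 8) + (arg >> 8) + (lo >> 8)
            acc = ((hi & 0xFF) << 8) | (lo & 0xFF)
        elif op == "HALT":
            break
    return acc

# A 16-bit add built entirely from 8-bit pieces:
result = run_vcpu([("LDI", 0x12FF), ("ADD16", 0x0001), ("HALT", 0)])
```

The point is that the inner loop is the only "hardware": it can be extremely simple because it only ever runs this one interpreter.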


Thanks, I understand that microcode is one of two ways to implement the control unit of a CPU. My question was more: if a CPU only has "a spartan eight hardware instructions", and microcode is used to implement each of those eight instructions, how does that allow the ISA to be larger than the original eight?


The microcoded CPU is processing a richer virtual ISA. Have a look at SWEET16 for an analogous 16-bit virtual CPU on an 8-bit CPU: https://en.m.wikipedia.org/wiki/SWEET16

Edit: SWEET16 would run fine on this emulated 6502, too. Inception!

The Apollo guidance computer did something similar.


Also, Dann McCreary wrote an "8080 Simulator for the 6502" in 1978: https://www.pagetable.com/?p=824


I now want to use the Javascript-based Gigatron emulator[1] in a browser on a Windows 2000 VM under the jslinux emulator[2]. (I wonder how jslinux would handle a few-year-old version of Firefox...)

Then I can run the Gigatron-based 6502 emulator in that browser to run the 8080 simulator you referenced to run CP/M. Under CP/M I should be able to find a COBOL program to run. I would be achieving an immense coefficient-of-"Inception" and re-enacting "The Birth and Death of all Software" [3] simultaneously.

Doing all of this in Windows NT 4.0 or Linux on my DEC Multia w/ an Alpha CPU would just be icing on the cake.

[1] https://github.com/kervinck/gigatron-rom/tree/master/Contrib...

[2] https://bellard.org/jslinux/

[3] https://www.destroyallsoftware.com/talks/the-birth-and-death...


Very interesting link. I hadn't seen this before, this helps. Thanks.


The microcode here consists of sequences of the eight low-level gigatron hardware instructions, which implement the richer 6502 ISA.


Thanks for the insight. That makes sense. Cheers.


What's fascinating about this is that the home computer "revolution" could have started a few years sooner had people just realized they could make a CPU out of 33 74xx-series ICs. These were relatively low cost by the early to mid 70s. It took a generation of skill to go back and see what could be done with just these logic building blocks.


But they did build those systems. They were called minicomputers. They took up more room than a microcomputer and cost a bunch more money. Because building a system like this is expensive both in engineering and component costs.


For example, one thing about the Gigatron is that it uses static RAM. Static RAM is fast, but expensive, and back in the early 80s even dynamic RAM was dear, and static RAM was several times more expensive than that. All the early 8080 and 6502 computers used dynamic RAM. With static RAM, fetching instructions out of ROM, and 74-series parts that weren't available at the time, the Gigatron benefits from modern times.


The Original Commodore PET, up until the model "N" that was introduced in 1979, had static RAM. I had the 8k model back in 1978.


I wonder how big the difference is between current logic gate performance and what was available when the 6502 was 'new'.


Not a lot really, as far as the 74xx series goes. 10 ns gate delay for the original 74xx TTL chips, vs 6 ns gate delay for the current CMOS 74HCxx chips. The reduced power consumption is a bigger benefit.
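As a back-of-the-envelope illustration of why the gate-delay difference matters less than you'd think: maximum clock speed is roughly the reciprocal of the critical-path delay. The 25-gate path depth below is an assumed figure for illustration only, not the Gigatron's actual depth:

```python
# Rough max-clock estimate: f_max ≈ 1 / (path_depth * t_gate).
# The 25-gate critical-path depth is an assumption for illustration.

def max_clock_mhz(gate_delay_ns, path_depth):
    """Convert a per-gate delay and path depth into an approximate MHz ceiling."""
    return 1000.0 / (gate_delay_ns * path_depth)

ttl_mhz = max_clock_mhz(10, 25)   # original 74xx TTL: ~4 MHz ceiling
hc_mhz = max_clock_mhz(6, 25)     # modern 74HCxx CMOS: ~6.7 MHz ceiling
```

Under those assumptions the speedup is under 2x — consistent with "not a lot really".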


Is that due both to the gate chemistry and to the node size?


TL;DR: this is a very simple computer running an emulator. Cool project! (The approach is similar to the way many FPGA applications are done these days.)


It’s worth mentioning how simple the computer is. While the Gigatron is an 8-bit computer running at ~8 MHz (and probably more powerful than an 8086), it only has... seventeen instructions (with a few encodings for different operands). The instruction set is so limited that they created the Gigatron Control Language (GCL) to let you write in a slightly higher-level assembly.

So getting an emulator running is quite an achievement.


Getting a 6502 emulator up and running is not too difficult, and you saw that the author did it "quickly", which I took to mean in a couple of days to a week.


I don't really see this as similar to the way many FPGA applications are done these days. The approach on an FPGA is more along the lines of implementing the same logic as the original CPU, as opposed to microcoding/emulation.


I guess I could be more clear. In industrial FPGA applications the goal usually isn't the CPU; the goal is to implement the required logic while keeping the number of gates needed to a minimum, and implementing a simple CPU (a.k.a. PSM) is what makes that possible. Same here: the CPU only serves as the economical way to implement something else (another CPU, in this case).


I still recall the shock of discovering that Micro-soft BASIC on the Apple ][ did a linear search for the line number, from the beginning of the program, on each GOTO or GOSUB. It would have been super-easy to memoize the search result at the branch site, but Bill couldn't be bothered.


Sneer all you want, but Microsoft Basic was a reasonable achievement considering that:

1) it ran in under 4K bytes of memory on an Altair 8800. That means less than 4,096 bytes of memory. Not 4,096 kilobytes or megabytes. 4,096 bytes.

2) An enhanced version ran in under 8K bytes of memory. Less than 8,192 bytes. Let me repeat: less than 8,192 bytes.

When you're fitting an entire BASIC interpreter in 8,192 bytes, you're not spending a lot of effort to memoize a search result.

Bill had a lot on his plate at the time. The software started out on the Altair, but all sorts of manufacturers were soon beating the door down begging for versions for the Commodore PET, the Atari, and countless other computers.

https://en.wikipedia.org/wiki/Microsoft_BASIC


From what I've read, Microsoft BASIC was considered too bloated for a 4K machine, plus Bill Gates wrote an open letter to hobbyists suggesting they were all pirates, so a smaller BASIC was developed, called "Tiny BASIC":

"Tiny BASIC was published openly and later invented the term "copyleft" to describe this. This made it popular in the burgeoning early microcomputer market."

https://en.wikipedia.org/wiki/Tiny_BASIC


It did not fit in 4K on the Apple ][. PET and Atari were also 6502, like Apple, so mostly the same code. Altair and the others were 8080. So, only two interpreters. It was the same design on two instruction sets, with some simple customization per vendor. Some had a Z-80, with extra instructions available: using those made your program smaller, but slower. It is doubtful he did.

That it was so short also meant there just wasn't much to it. Bill doesn't need your unpaid defense. He didn't make his $billions from BASIC. That just gave him a customer list.


Doesn't sound that easy to me. You'd need an efficient way to forget all those memoized values on each program edit, and every byte of program storage counted when memory size was a whopping 4K.


Trust me, it would be easy. They could all be erased on the first edit, in a time too short to notice.

Essentially the same code would have been in each interpreter, so it would have been written just twice. The code to do it would live in ROM, so it would not take up any of your precious 4K.
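Roughly the scheme being described, sketched in Python (purely illustrative — not the actual interpreter; all names here are invented): GOTO does a linear scan the first time, caches the result at the branch site, and any program edit clears the whole cache in one cheap step:

```python
# Sketch of the memoization idea: a BASIC program stored as a list of
# (line_number, statement) pairs. resolve_goto() scans linearly once,
# then caches the index at the branch site; edit_program() clears the
# whole cache. All names are invented for illustration.

program = [(10, "..."), (20, "..."), (35, "..."), (100, "...")]
goto_cache = {}   # branch site -> index into program

def resolve_goto(site, target_line):
    if site in goto_cache:                       # fast path: memoized
        return goto_cache[site]
    for i, (num, _) in enumerate(program):       # slow linear search
        if num == target_line:
            goto_cache[site] = i
            return i
    raise RuntimeError("UNDEF'D STATEMENT ERROR")

def edit_program(line_number, statement):
    goto_cache.clear()   # one cheap invalidation on any edit
    # (actual insert/replace of the line omitted)

idx = resolve_goto(site=0, target_line=35)    # first call: linear scan
idx2 = resolve_goto(site=0, target_line=35)   # second call: cached hit
```

The cache itself would cost RAM at the branch sites, though, which is part of the objection above.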


In fact you could poke around and get the line numbers in a different order and the program would still run properly.

That's why subroutines were put at the beginning of "efficient" code.


This is cool. I feel like this has implications in metaprogramming, code generation, state machines, interposition, and stuff I haven't thought of.

This changes my understanding of the semantics of basic immensely.


Sorry, it is not cool at all.

A system that runs subroutines faster according to how close they are to the start of the program is just badly designed. Robustness against poking in different line numbers is not worth the performance trade-off.

He should be ashamed.



