Third Base – Ternary Notation (2001) (williams.edu)
95 points by sidcool on Feb 2, 2015 | 30 comments


In case you were wondering: "As in ordinary ternary numbers, the digits of a balanced ternary numeral are coefficients of powers of 3, but instead of coming from the set {0, 1, 2}, the digits are –1, 0 and 1."

e.g.

1 x 3^3 - 1 x 3^2 + 0 x 3^1 + 1 x 3^0 = 19
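
For instance, here is a quick conversion sketch (mine, not from the article; the function name is my own, and T is a common way to write the digit -1):

    # Sketch: convert an integer to balanced ternary, digits in {-1, 0, 1},
    # printing T for -1.
    def to_balanced_ternary(n):
        if n == 0:
            return "0"
        digits = []
        while n != 0:
            r = n % 3              # remainder 0, 1, or 2
            if r == 2:             # rewrite 2 as -1 plus a carry of 1
                r = -1
                n += 1
            digits.append({-1: "T", 0: "0", 1: "1"}[r])
            n //= 3
        return "".join(reversed(digits))

    print(to_balanced_ternary(19))  # 1T01, i.e. 27 - 9 + 0 + 1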

Imagine: trits instead of bits. I like the idea of base e, even if it's unrealistic.

(BTW: I have an old answer on Math.stackexchange that cheats by using base 3: http://math.stackexchange.com/a/734723/108109 )


If you like the idea of e as a base, I wrote a post about how you can do that, at least asymptotically, which might be of interest.

http://hummusandmagnets.tumblr.com/post/48829755849/how-to-u...


We'd all be more attuned to power laws if we thought in base 3. :-)

I like the -1, 0 and 1 concept, but is there any way to pull it off at the electrical engineering level? Or would it all have to be overlaid in software?



If you assume a current I=0 represents 0 and I=i represents 1, you could instead choose I={-i,0,i} to represent {-1,0,1} using just 1/6 more power.
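
To sanity-check that figure (my arithmetic, assuming equiprobable symbols and power proportional to the square of the current): binary {0, i} averages (0 + i^2)/2 = i^2/2, ternary {-i, 0, i} averages (i^2 + 0 + i^2)/3 = 2i^2/3, and 2/3 - 1/2 = 1/6.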


Galois field theory says there is no difference between using {0, 1, 2} and {-1, 0, 1}.


Is that "no difference" in the sense that all turing complete languages are "the same", or in some more practical sense?


For representing a number, it makes no difference at all. For binary you can use on/off, up/down, positive/negative etc. For trinary you can have on-mid-off, up-mid-down, positive-neutral-negative, and so on. Any symbols will do as long as you define the math in terms of the symbols you pick.


Erm not under multiplication, it seems:

Map the symbols {a,b,c} two ways: A: a -> 0, b -> 1, c -> 2, versus B: a -> -1, b -> 0, c -> 1. Then

b * c * c = 1 * 2 * 2 = 4 (under A) != 0 * 1 * 1 = 0 (under B)


Well, the problem is the notation, which is quite misleading here. First, sets don't have order, so {-1,0,1} == {0,1,-1} == {-1,1,0}. And the symbols "1", "-1" and "0" are quite misleading as well, especially if you are not familiar with finite fields. In reality each of the symbols "1", "-1" and "0" represents an (infinite) set, and the mathematics can be done using any "representative" of that set. In this particular example "-1" and "2" are members of the same set, and mathematics has this notion of "well-defined": if you choose to do your calculations in, say, {0, 1, 2} and I do them in {3, 4, 5}, we will get the same answer.


You get the same answer if you only use '+', but you may get a different answer if you use 'x'; that was my point. In other words, once you have a "canonical" addition table for the elements of a set, there are many multiplication tables to choose from which "extend" the addition in a straightforward way.

Sorry not very knowledgeable on this stuff so it took a long detour.


Ok, no problem. However, once again that is not true, and it has to do with the underlying mathematical structure (for which + and * are defined). Here is an easy way to think about it: imagine a filing cabinet with 3 drawers. Drawer 1 has papers "0", "3", "-3", "6", "-6" and so on. Drawer 2 has papers "1", "4", "-2", "-5" and so on. Drawer 3 has papers "2", "-1", "5", "-4", "8" and so on. So how do we add two papers? You simply look at the numbers, say "1" and "2", and add them: we get "3". Now which drawer does it go in? Drawer 1, as it has the "3" in it. Or say we want to multiply "4" and "2": we just multiply and get "8", so it goes into Drawer 3. Now I say Drawer 1 should be labeled "0", Drawer 2 "1", and Drawer 3 "-1". But you might call me a sinner and a liar and say that Drawer 3 is "2". But it doesn't matter! "-1" and "2" are both equally valid and can be used as labels. Heck, you could even call it "8".
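
Here is a tiny sketch of the drawer picture (my own illustration), using % 3 to find the drawer:

    # Sketch: mod-3 arithmetic does not care which representative of a
    # residue class (which paper in the drawer) you compute with.
    def drawer(x):
        return x % 3   # which of the three drawers x files under

    assert drawer(-1) == drawer(2)            # same drawer, two labels
    assert drawer(4 * 2) == drawer(-2 * -1)   # multiplying with either
    assert drawer(4 + 2) == drawer(-2 + -1)   # representative lands in the
    print("all consistent")                   # same drawer

The earlier mismatch came from mapping the symbol b to 1 in one set but to 0 in the other, which changes the residue class; it was never about multiplication itself.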


Oh so multiplication actually works nicely! Neat, thanks for the explanation.


There is a 1962 Texas Instruments patent for a tristable circuit (instead of a bistable flip-flop), using vacuum tubes, so it would be possible to build similar trinary computers in silicon. Though the same problem as with the old Russian Setun seems to remain: you need four transistors, which would ordinarily suffice for two binary bits, so any advantage is lost.

If you can design a tristable "flip-flop-flap" circuit with just three transistors, you might be in business!

https://encrypted.google.com/patents/US3051907



It is also possible to design, and implement on the same trinary hardware, a balanced representation with base phi = (sqrt(5)+1)/2, i.e. Fibonacci encoding, which has some additional advantages.
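
For reference, here is a sketch of the integer (Zeckendorf) flavor of Fibonacci encoding; the base-phi positional system rests on the same greedy, no-two-adjacent-ones idea. The function name is my own:

    # Sketch: Zeckendorf / Fibonacci encoding: write n as a sum of
    # non-consecutive Fibonacci numbers, chosen greedily from the top.
    def zeckendorf(n):
        fibs = [1, 2]
        while fibs[-1] < n:
            fibs.append(fibs[-1] + fibs[-2])
        bits = []
        for f in reversed(fibs):
            if f <= n:
                bits.append("1")
                n -= f
            else:
                bits.append("0")
        return "".join(bits).lstrip("0") or "0"

    print(zeckendorf(100))  # 1000010100, i.e. 89 + 8 + 3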


While balanced ternary is nice in all sorts of ways, there's an argument against it being the most efficient representation. You might consider 2 better in practice in the sense that it makes it easier to pick the optimal amount of state space for a word when designing hardware.

On any practical architecture a word has some fixed length, N, and by far the most efficient case is when your number fits within that space. That is, it is more efficient if the number can be represented within the r^N states the word provides. The amount of state an architecture gives you is dictated by many different considerations, but the radix that offers you the most control (that is, the most distinct achievable values of r^N) is 2. For instance, say you want a word that can store numbers up to 500; your options for the state-space size are

2^N: 2, 4, 8, 16, 32, 64, 128, 256

3^N: 3, 9, 27, 81, 243

So 2 gives you 8 choices for N whereas 3 gives you 5, leaving more design space to choose the most efficient representation in practice.
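
Here is a sketch of that count (500 is just the example target from above; the function name is my own):

    # Sketch: the state-space sizes r^N that stay at or under a target.
    def choices(r, limit=500):
        sizes, n = [], r
        while n <= limit:
            sizes.append(n)
            n *= r
        return sizes

    print(choices(2))   # [2, 4, 8, 16, 32, 64, 128, 256]  (8 choices)
    print(choices(3))   # [3, 9, 27, 81, 243]              (5 choices)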


I don't understand what is being optimized by minimizing the product rw. Reducing w reduces the number of digits, but how does reducing the radix r help with component count? Surely a memory cell that could store 1,000,000 different values in a base-1-million system is just one component, even if it is quite difficult to make. Are they talking about the components needed to perform calculations?


I'd say, your base-1-million system is virtually impossible to make...

But you are right: what counts is how much information you can store or process per square inch / per Watt, not really how many components you need to achieve this.


While w measures the number of cells you need, r is a coarse approximation of the complexity of each cell.

In practice nobody ever got a ternary cell that costs only 3/2 as much as the best binary cells (in space, power, and money). That's why research dried up. But there's no reason to think this is fundamental.
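
For concreteness, here is a sketch of the r*w figure of merit the article uses: w cells, each of complexity roughly proportional to r, sized to cover some range of M values (M and the function name here are arbitrary choices of mine):

    # Sketch: cost r*w, with w = base-r digits needed to cover M values.
    def digits_needed(M, r):
        w, span = 0, 1
        while span < M:
            span *= r
            w += 1
        return w

    M = 10**6
    for r in [2, 3, 4, 5, 10]:
        w = digits_needed(M, r)
        print(r, w, r * w)  # r=3 gives the smallest r*w here; the
                            # continuous optimum is at r = e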


My favorite thing about "trits" is that they can directly encode the notion of "true / false / unknown". This is attractive because a lot of computer science (and type theory in particular) seems to gravitate toward constructivist / intuitionistic logics.


After reading about ternary number notation, it shouldn't be hard to solve one of my favorite brain teasers:

Given a simple balance scale with pans on both sides, how many weights does a merchant need to be able to weigh any whole number of grams from 1 to 40?

Most people have trouble getting the right answer, but it's easy with balanced ternary. Try it!
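
(Spoiler ahead.) A brute-force check of the balanced-ternary answer, as a sketch:

    # Sketch: each weight can sit with the goods (-1), stay off the scale
    # (0), or sit on the opposite pan (+1): exactly a balanced ternary digit.
    from itertools import product

    weights = [1, 3, 9, 27]   # the powers of 3
    reachable = {sum(s * w for s, w in zip(signs, weights))
                 for signs in product((-1, 0, 1), repeat=len(weights))}
    print(all(n in reachable for n in range(1, 41)))  # True: 1..40 covered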


This is actually explained in the article.



Someone I worked for a while ago would have summed this up as: "a cure for no known disease".

I know a little about CMOS transistor and logic design, but I'm certainly no expert. Still, I'm not seeing the advantages of ternary numbers. I'd be interested in hearing otherwise.

At today's geometries we have a hard enough time keeping binary gates and memory cells working reliably. Ternary just doesn't seem to be practical. Perhaps with another supply voltage? E.g. +1.5, 0, -1.5? But today's CPUs very very aggressively vary supply voltage in order to conserve power. Now they'd have to fiddle with two voltages? Bah!


Ternary (or even higher n-ary) storage is already widely used: multi-level cell (MLC) SSDs store more than one bit per cell by using multiple voltages (most commonly 4, i.e. 2 bits per cell). I believe Ethernet also uses 3 voltage levels.


Communication systems in general routinely use up to 1024 modulation symbols (32 voltage levels each for the sine and cosine carriers).

http://en.wikipedia.org/wiki/Quadrature_amplitude_modulation...
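
For concreteness, here is a square 1024-QAM constellation as a sketch (my own illustration):

    # Sketch: 32 amplitude levels on each of the in-phase (cos) and
    # quadrature (sin) carriers give 32 * 32 = 1024 symbols, 10 bits each.
    levels = [2 * k - 31 for k in range(32)]   # -31, -29, ..., 29, 31
    constellation = [complex(i, q) for i in levels for q in levels]
    print(len(constellation))                  # 1024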


Let me elaborate on my post, since it has been misinterpreted. I was responding to the ideas in the article.

I of course understand the advantages of non-binary encodings in the many applications people mentioned, e.g. hard disks, flash memory, etc. Another common use is digital cable, which generally uses 256-QAM.

But let's look at the article itself, which was discussing ternary for general purpose CPU design. Here's a snippet:

   the first working ternary computer was ...
   designed by Nikolai P. Brusentsov and his
   colleagues at Moscow State University ...
   Some 50 machines were built between 1958
   and 1965. [It] operated on numbers composed
   of 18 ternary digits, or trits ...
I stand by what I first said. The above computers were probably wonderful academic toys, but otherwise were "a cure for no known disease".

Neither Intel nor anyone else will build a CPU with registers that store ternary numbers. Nor will they build cache memory that stores ternary numbers. Nor will there be any new ARM architecture forthcoming that switches from binary to ternary. Etc.


Think hard drives: bit density = capacity.


Makes you wonder what a ternary-based, memristor-built, functionally programmed CPU with 243 cores and roughly 4G of on-board RAM would be like.



