I love the care that goes into the presentation in these old videos.
Slow, with plenty of breaks to let your brain catch up, usually filled with a nice graphical example to keep your brain churning. Plenty of “why” explanation at the beginning. Authoritative, but not condescending. Just the right amount of jargon.
Is this showing up here because of a comment made about analog computers on that USB-C charger post? [1] Spending a bit too much time here, I keep seeing what look like logical connections between comments and posts over a few days. I would probably have overlooked this if I hadn't read about it a couple of days ago.
Anyway, I'm now intrigued enough to consider buying one.
It's a common karma harvesting technique but also a good way to get good content onto the front page. I often submit links I see in comments and they tend to get a lot of upvotes because others who think it's interesting submit the link too, and the first submission gets an automatic upvote when others submit it.
Submitter here! I hadn’t seen the post you linked to. This one originally disappeared immediately, but moderators thought it was interesting enough to pluck from the second chance pool. https://news.ycombinator.com/item?id=26998308
My first thought too, and most likely. I wouldn't say there's any ill intent behind it, just people finding subdiscussions interesting themselves.
We figured this out with guitar pedals. Analog circuits sound awesome. Taking one of these analog processors and layering a digital interface on top provides the best of both worlds. You can save presets, program the parameters using MIDI, etc.
Analog data plane, digital control plane. Yeah, it’s great for some circuits. Digital pots and other bridging tech doesn’t always work well for some especially funky analog circuits, e.g. the Fuzz Face. OTOH, those circuits usually have one good setting anyway.
This looks like so much fun. I’m into eurorack, too, and there’s something about patching music that feels so much more conducive to creativity than making music with a computer. I have ideas that I’d never have any other way. And there’s something about manually cobbling together an algorithm with your hands that is satisfying in a way that coding isn’t. Gonna have to try this.
Maybe you have no connection to the project, but yes, that is what the promised functionality would cost if you built it in analog and wanted to sell it with a markup that doesn't ruin you. If anything, this is actually quite affordable.
At first glance this product contains eight potentiometers and two encoders, knobs, 170+ connectors, patch cables, a display, various discrete summers, integrators and differentiators, and very likely power-rail management clean enough to do actual calculations with.
Maybe as a reality check, try finding the cheapest price for 170 connectors and 8 potentiometers with knobs ; )
I didn't see that they said it was non-profit, and besides, non-profit does not mean "no one gets paid". Compare the sale price of any modular synthesizer with a comparable number of gozintas and gozoutas to this and it looks only a little expensive. I'd guess Behringer could make one and sell it for about $300 give or take, but they have a lot more experience in product engineering, and buy in much higher volumes than I would expect these guys to.
You could build a lot of hardwired op-amp circuits dead-bug style for $500, though, and have money left over for a base model Rigol scope and a no-name linear bench supply.
Not knowing anything about the field, are there any practical uses left for analog computation? Is there anything they can do that isn't done just as effectively digitally by now?
No, there are no practical uses other than speed and energy efficiency, as analog computers can be fully simulated by digital computers today. There's no clock speed in analog computation: you feed raw voltage into the inputs and you get the resulting voltage at the outputs. So it's literally the fastest possible result achievable with our current electrical technology.
Analog computation is essentially functional programming.
You have inputs and outputs and for your results have to think in terms of piping analog data through different compositions of functional primitives to achieve your desired output.
There's no concept of "calling a function" either, so recursion is replaced with what we term "feedback loops".
Additionally, all forms of computational state (aka memory) other than initial values are essentially replaced by feedback loops. There's no discrete step to access a computational result in an analog computer, so the only way to access a previous result is to literally feed the output back into the input. At least in typical functional programming you have a stack where you can store and access previous state. With analog computing the "purity" goes to the next level.
From this description you can probably see that the obvious solution to this problem is some sort of hybrid machine that executes procedures but also stores and produces analog results. It doesn't exist yet, but I suspect it would look like an FPGA. Maybe call it an FPOA: field-programmable op amps.
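For intuition, here's a minimal Python sketch (my own toy model, not anything from the video or from a real machine) of the feedback-loop idea: a discrete-time stand-in for an analog integrator whose output is wired back, negated, into its own input, which solves dx/dt = -x with no stored state other than the signal itself.

```python
import math

def simulate(x0=1.0, dt=1e-4, t_end=1.0):
    """Toy stand-in for an analog integrator with negative feedback.

    The 'wire' carries x; the integrator's input at each instant is the
    negated output, so the loop computes dx/dt = -x. The only 'memory'
    is the running value on the wire itself.
    """
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (-x)  # feedback: output fed back into the input
    return x

# A real analog computer does this continuously; the simulation converges
# to the analytic solution x(1) = e^-1 as dt shrinks.
assert abs(simulate() - math.exp(-1)) < 1e-3
```

The point is purely structural: the loop's "state" is nothing but the signal fed back on itself, mirroring the claim above that feedback loops replace both recursion and stored memory.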
Don’t speed and efficiency make up the entire set of practical uses for digital computing? I don’t know what it means to say “No there are no practical uses other than speed”. That sounds like the answer summary is “Yes”. ;)
> There’s no concept of “calling a function either”
This is true for the THAT machine in the article, but your comment seems to be making a lot of assumptions about analog that aren’t necessarily true. Digital computing is an abstraction over analog circuits, and it can be done at a higher level than, say, CMOS logic gates. We definitely do know how to build analog computers that have function calls.
Analog circuits still have finite bandwidth and rise/response times. The engineering trade-offs are more complicated than the gp presents.
The big but mixed advantage of an analog control loop or signal processing chain is "no software".
There's no real reason to do simulations on an analog computer today, though I think using one for model-in-the-loop would be fun, and somebody is probably doing it somewhere.
Some people are building analog chip startups today because there are reasons to do analog computation. Among the applications are low-precision high-speed parallel math, neural networks, and image and audio processing.
>>> Don’t speed and efficiency make up the entire set of practical uses for digital computing?
A couple more uses. First is "noise immunity": the fact that you can perform a computation twice and get the same answer, and chain multiple processing steps together with no degradation. Though to be fair, this includes digital computation by pencil and paper.
Second, complex operations that simply can't be conceptualized in the analog domain.
You’re totally right about chaining and repeated computations; however, I think it’s possible to do this with analog machines as well by factoring in thresholds and precision tolerances. Digital floating point has precision limits too, they’re just a different kind (I’m referring to rounding, for example). I know that’s not at all what you meant about noise; I’m just saying the bigger picture is that both digital and analog computation have limited precision, and both can have increased precision and meet specific tolerances by adding more wires. Typical fp32, and especially fp64, has way higher precision than a typical single analog signal, but that doesn’t mean very high precision with analog isn’t possible, it just means we don’t often do it. The fundamental differences might be less black and white than you or the gp imagine.
>I think it’s possible to do this with analog machines as well by factoring in thresholds and precision tolerances.
With enough compositions, even the smallest tolerances will add up. This presents a scaling problem in analog electronics. With the miniaturization of electronics into nanoscale components, this noise is even more prevalent.
Additionally, given the way a transistor works, it's just easier to use these devices in saturation mode.
And one more thing. Yes, precision can be "equivalent", but exactness is not. An analog computer cannot represent the value 1 consistently; it can essentially never be precise, because there will always be noise on the voltage. A digital computer has finite precision, but it has exact finite precision, so it can represent the exact integer 1. And you can increase precision arbitrarily by using more bytes, up to the point where you use up all available memory. You don't technically have to use the default floating-point types, which have limited size. With an analog computer you cannot do this at all.
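The "more bytes" point is easy to see in a language with arbitrary-precision integers (a Python illustration of my own, not something from the thread):

```python
# Digital exactness scales: widen the representation and the value stays
# exact. A 100-digit integer is represented perfectly, with no noise and
# no rounding, which no single analog voltage could do.
x = 10**100 + 1

assert x - 10**100 == 1  # exact at over 100 digits
assert x % 10 == 1       # the low digit survives intact
```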
> With enough compositions even the smallest tolerances will add up.
This is true of digital floating point too, chained compositions lose precision.
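A tiny Python example of what chained precision loss looks like on the digital side (this is standard double-precision behavior, nothing specific to this thread):

```python
# 0.1 has no exact binary representation; each addition rounds the
# running sum, so ten chained additions drift away from exactly 1.0.
total = 0.0
for _ in range(10):
    total += 0.1

assert total != 1.0              # not exact...
assert abs(total - 1.0) < 1e-12  # ...but within a tight tolerance
```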
> A digital computer has finite precision but it has exact finite precision.. so it can represent the exact integer 1.
I’m not sure what you mean here exactly. Most real numbers cannot be represented exactly, and the idea of an exact number in digital integers and/or floating point still comes with a tolerance range. It’s only possible to have an exact number by construction or a-priori knowledge, but not in general and especially not when processing input data. It is possible to have an “exact” analog number, within a tolerance range.
> With an analog computer you cannot do this at all.
That’s incorrect. It’s true a single analog signal has noise and not incredible precision / resolution. That’s not a limitation of analog computing in general, it’s a limitation of choosing to use a single signal. You don’t have to use only a single signal. If you used, say, one analog signal per decimal digit, you can have as much practical precision as you can afford digits/signals. Then it becomes a bit more “digital” but the math doesn’t need to be implemented using gate logic. Digital lines also have analog noise, we just threshold and abstract it away. The line between digital and analog is always there but you can move it.
It is possible with analog circuitry to quantize a signal and have an analog ‘repeater’ that rectifies it into the interpreted value, correcting for line noise. This way it’s possible to do some amount of integer math without introducing noise into the result. This isn’t common or particularly practical, but it does help clarify what analog and digital really mean.
Hey you’re right about these being common issues, and about the typical advantages of digital logic, you’re just overstating the limitations and fundamental differences between digital and analog, and making assumptions about where that line is being drawn without considering all possibilities.
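To make the per-digit idea concrete, here's a hedged sketch (the names `noisy`, `encode`, and `decode` are hypothetical, and the thresholding step is exactly where this stops being purely analog): one noisy "line" per decimal digit, with digit d carried as roughly d volts and recovered by rounding to the nearest level.

```python
import random

def noisy(v, sigma=0.05):
    """An 'analog line' carrying value v volts, with Gaussian noise added."""
    return v + random.gauss(0, sigma)

def encode(n, width=4):
    """One line per decimal digit: digit d is carried as roughly d volts."""
    return [noisy(int(d)) for d in str(n).zfill(width)]

def decode(lines):
    """Recover each digit by rounding its line to the nearest whole volt."""
    digits = (max(0, min(9, round(v))) for v in lines)
    return int("".join(str(d) for d in digits))

# With noise far below the 0.5 V decision threshold, the value survives.
assert decode(encode(1234)) == 1234
```

More digits/lines buy more precision, at the cost of more hardware, which is the "adding more wires" trade mentioned above.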
Thought of as an analog signal, it looks like a mess. In fact, you don't know if it's an analog or digital signal from the picture alone. What makes it a digital signal is a convention for interpreting it as a discrete sequence of symbols. The convention is designed so that under reasonable conditions, the interpretation is unambiguous.
"Noise" is ill defined by itself. It means an unwanted fluctuation of a signal. If the digital signal is correctly transmitted and received, it is noise free, regardless of what it looks like on an oscilloscope.
On the other hand, "analog" doesn't just mean any continuous electrical signal. It means that a signal represents a quantity, such as temperature or acoustic pressure. It can't be noise free. If thresholds are applied to define discrete values of the quantity, then it's not analog any more.
There is a profound amount of confusion about this, so I'm sympathetic. I dabble in some audio electronics, and there are endless debates about whether Class-D amplifiers are "digital" or not. They are certainly marketed as digital. ;-)
The vagaries of floating point arithmetic, the possibility of errors in digital systems, and whether anything physical represents a real number, are interesting but basically sideshows.
> If thresholds are applied to define discrete values of the quantity, then it’s not analog any more.
I agree. It’s my fault I’m not describing this clearly, but I was referring to switching between digital and analog signals. One can do math using analog circuitry and the interpret the result as digital. The basic problem that I’ve been hinting at is that what we call “digital” has analog components, and what we call “analog” in computing usually turns into digital eventually. (Maybe not analog audio processing, but analog computing does.) It’s often a mix, and there’s a big gray area.
> there are endless debates about whether class-D amplifiers are “digital” or not.
Right! I believe there are debates about whether modern “analog” audio delay pedals are analog or digital too, right? They now use cheap digital components to simulate analog delay lines, which, if you think about it, is really switching between analog and digital multiple times before you ‘hear’ it. Does it even make sense to summarize it as either analog or digital?
> “Noise” is ill defined by itself. It means an unwanted fluctuation of a signal. [..] The vagaries of floating point arithmetic [..] are interesting but basically sideshows.
I guess it’s getting lost, but I’m making an analogy between noise and floating point, not claiming that FP rounding error is the same thing as noise. I’m perfectly familiar with noise, and I’m just saying that, in a way, floating point rounding isn’t entirely dissimilar from the net effects of noise: they both are accurate to within a certain tolerance, and they both grow larger errors with chained operations. Fp32 is way more accurate than a typical analog signal, but that’s because it has a lot of bits. Fp8 on the other hand might really have closer to the accuracy that a high quality analog signal has. Digital fp8 might be repeatable where analog isn’t, but that doesn’t necessarily make it more accurate, nor is it something we characterize as “exact” like @corethree was claiming.
My only point in this thread was to say analog computations do have some uses, which is why they’re still being used and studied. @corethree said there’s no practical use because analog can be simulated with digital. I’m sure as an audiophile, you’re aware that claim isn’t true. It’s true that analog can be simulated with digital but that’s a slow simulation and has no bearing on whether analog components are useful.
Another one is quantum computers, they’re quite analog, and definitely different from ‘digital’.
A third is fast low-precision AI training chips. There are a whole bunch of startups trying to make these come to market right now, all of them full of smart people who’ve studied the problem and believe that analog computation for AI training solves a real problem.
> what we call “analog” in computing usually turns into digital eventually.
uh no. This isn't strictly a rule that's true. It depends on the device whether it's an analog device or a digital device. Not all devices are digital. A DC motor is an analog component, a light bulb, a CRT oscilloscope display, an analog meter or gauge, an H-bridge. All these devices can live on the output side of an analog computer, or even a digital one with the right ADCs or DACs. You're wrong here.
>Digital fp8 might be repeatable where analog isn’t, but that doesn’t necessarily make it more accurate, nor is it something we characterize as “exact” like @corethree was claiming.
Why are you replying to my statements in a comment to another person? Makes no sense. Anyway, repeatable and exact are sort of similar concepts. If I can repeat a number that means I can replicate it, exactly. Hence the term exact.
I also never said digital fp8 is more accurate or exact. There's no such thing as "digital fp8" or "analog fp8"; it makes no sense. Digital and analog are orthogonal concepts to floating point. I said electrical analog devices cannot precisely represent a ONE, simply because you cannot remove all the noise from an analog signal to make it a perfect ONE. To help you understand, instead of floating point, think of an INT8. An analog computer cannot represent an exact instance of an INT8 the way a digital one can.
>@corethree said there’s no practical use because analog can be simulated with digital. I’m sure as an audiophile, you’re aware that claim isn’t true.
This is probably the only area where you might be sort of right. Sounds like the bang of a drum or the pluck of a string originate from analog components. Even though these sounds can be replicated digitally, the creation of unique sounds via analog synthesis is better suited to an actual person banging on the actual thing and making the sound he wants. But I wouldn't call this analog computing.
> It’s true that analog can be simulated with digital but that has no bearing on whether analog components are useful.
Analog components are useful. Analog is what powers everyone's home. Analog computing is the thing where analog has virtually no practical usage. Almost 100% of signal processing is now DSP. I'm sure there are really, really niche areas where analog is still used, but in general it's not practical. I never said analog components are not useful.
Let me just add to this. Your comment wasn't just about the usefulness of analog computing. It was about a "reinvention" of a digital computer that used a base-10 numeral system and parallel data lines, with one line for each digit. You misinterpreted that "invention" as an "analog computer." Your responses are more about you not understanding things than about analog computers and their practical usefulness.
>Another one is quantum computers, they’re quite analog, and definitely different from ‘digital’.
No. Quantum computers are not analog. They have qubits, so an aspect of them is discretized into symbols. They aren't digital either.
This is literally the area where I said analog could be better: speed and energy. Read the paper; that's where the improvements are. They reimplemented a feed-forward neural network (something usually done on digital computers) in an analog photonic chip, and the main improvements were speed and energy. Literally.
> repeatable and exact are sort of similar concepts
Oh my, no. Exact number is a term that makes a statement about precision and error, and repeatable is not. They are orthogonal terms.
> No. Quantum computers are not analog. They have qubits
Oof, you might want to read up a little.
> Analog computing is the thing where analog has virtually no practical usage. [..] in general it’s not practical [..] Literally the area where I said analog could be better. Speed and energy.
Wait, is analog compute practical and useful or not? Make up your mind; you’re contradicting yourself. Speed and energy are useful and practical. Those are the primary advantages for all computing hardware developments. Anabrid, the company that makes THAT, is building analog compute chips. Lots of companies are building analog compute chips for AI and signal processing, because there are practical uses. Just google “analog chip startup” and see how many companies you can find releasing new analog chips in the last 2 years alone.
>Oh my, no. Exact number is a term that makes a statement about precision and error, and repeatable is not. They are orthogonal terms.
Oh my yes. The action of "repeating" 100 percent requires the ability to create an exact number. Why don't you try repeating a 1 V value in a circuit where exact numbers can't be made. Your claim simply isn't in line with reality.
>Wait is analog compute practical and useful or not? Make up your mind
I "made up my mind" in literally the first statement in this thread. I'll quote it:
"No there are no practical uses other than speed and energy efficiency"
Every response you're making is essentially a response to this very first sentence that spawned this entire thread. What's going on here is you "forgot" the point. You're trying to come up with counterexamples of how analog computing is useful, and all your examples involve speed and energy, because you forgot that I caveated my first sentence with speed and energy efficiency. Hopefully you remember now.
Additionally as I said earlier you're also mistaken on what digital and analog computing is. I won't repeat myself. Just reread what I wrote earlier.
>Those are the primary advantages for all computing hardware developments.
No it's not. Memory, robustness, versatility, and more are all avenues of optimization trade-offs that can be made. Analog chips can only be faster and more energy efficient. Versatility is where they lose, as they have almost none.
>"analog chip startup” and see how many companies you can find releasing new analog chips in the last 2 years alone.
I use analog chips for my job, man. Accelerometers and other MEMS sensors are things I deal with daily. They are everywhere. But we are talking about analog computing. For computing, neural networks are really the only thing where it can sort of work, and that's only done, as I stated before, for speed and efficiency.
A regular GPU is plenty fast for most applications right now anyway.
Greetings, fellow traveler. Amusingly, my user name gives me away. ;-)
I have a side business that makes a gadget for musical amplification. I don't want to "out" myself by identifying my business, but all of my products are purely analog circuits, used in a largely analog signal chain. Things could change, and they will, but like you say, my analog circuits are simpler and cheaper, and consume less power, than reasonable digital counterparts that I have any hope of designing and sourcing myself. Also, at least in the US, lack of high frequency signals simplifies the regulatory approval process.
By day, I design scientific equipment. There's almost always an analog circuit in between each sensor and the ADC. This is partly because each sensor requires a unique physical interface, that a general purpose ADC doesn't provide. The design team needs an "analog person," even if they need only one. Also, analog knowledge is valuable in troubleshooting digital systems, because a person with analog skills tends to understand things like circuits, power supplies, measurements, and so forth.
> The action of “repeating” 100 percent requires the ability to create an exact number.
I’m not sure what you’re trying to say here. When you multiply two fp32 values, and the result is rounded, the answer is not exact, even if the inputs were.
If you repeat the process and produce the same rounded number, it’s still not exact and never will be. I didn’t claim analog circuits produce exact numbers, you’re not understanding me and constructing straw men.
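The distinction is easy to demonstrate in code (plain IEEE-754 double behavior, shown here in Python):

```python
# The same float operation repeated a thousand times yields one identical
# result -- perfectly repeatable -- yet that result is not the exact
# real number 0.3, only the nearest representable double.
results = {0.1 + 0.2 for _ in range(1000)}

assert len(results) == 1               # repeatable: one unique answer
assert 0.1 + 0.2 != 0.3                # but not exact
assert abs((0.1 + 0.2) - 0.3) < 1e-15  # accurate only to a tolerance
```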
> For computing neural networks is really the only thing where it can sort of work
>I’m not sure what you’re trying to say here. When you multiply two fp32 values, and the result is rounded, the answer is not exact, even if the inputs were.
I'm not talking about two fp32 values. I'm not talking about multiplying. I'm talking about repeating a value. If a wire was once 2.1234 volts and I turn it off, can I replicate the 2.1234 volts again exactly? No. There's noise. Repeating is NOT multiplying.
>If you repeat the process and produce the same rounded number, it’s still not exact and never will be. I didn’t claim analog circuits produce exact numbers, you’re not understanding me and constructing straw men.
Dude, you wrote this verbatim: "Oh my, no. Exact number is a term that makes a statement about precision and error, and repeatable is not." You said that "exact" has nothing to do with repeatable. I am Responding to an INCORRECT statement you made. Stay on topic.
>Disagree.
Sure, find me an alternative example. I don't mean what I said "exactly"; I mean it in the sense that in the overwhelming majority of cases analog isn't practical. I'm sure there are some obscure niche examples where it's preferable, but the case against it is so overwhelming you're going to have a hard time finding one.
Either way you haven't given me a legit example of an analog computer in a legit practical application.
I was making a statement about doing operations like multiply on fp32 digital values from the start, responding directly to your statements 8 comments up. You’re losing your own topics, and you seem unable to actually hear or understand what I’m saying, because you keep repeating your own separate straw man over and over.

I showed you the definition of exact number and you’re still arguing that it somehow means repeatable when it doesn’t. I see that you’re trying to make a point about digital operations being ‘exactly repeatable’, which is to say: repeatable. I understand what you’re trying to say, I agree that analog values have noise and are not repeatable, and I haven’t argued with that anywhere. It’s incorrect to call the repeatable results of a digital float operation an “exact number”, because despite being repeatable they are in general approximate, not exact. As you do repeatable operations on digital floats, they are no longer exact. With digital float, you can get exactly the wrong answer repeatedly. Exact still has nothing to do with repeatable, no matter how many capital letters you use. It’s you who’s incorrect, you just don’t know it.
> you haven’t given me a legit example of an analog computer in a legit practical application
Hahaha, SMH. I don’t need to prove anything, your assumptions and ignorance of “legit” analog compute applications are your problem not mine. I was trying to let you know they exist. @analog31 just tried to gently let you know too. I’ve listed multiple types of processors in active development and maybe a dozen companies making them, and you’re commenting on the article for one of them. If you think that nothing mentioned in this thread so far is “legit”, then it seems... nevermind. Pretty strong claims for someone who didn’t know that FPAAs exist...
Then why are you communicating with me? Isn't your goal to convince me? If not why are you wasting your time? My goal is to show you how wrong you are, so I'm not wasting my time. I think I'm failing in this regard because you're just unable to admit it.
>I was making a statement about doing operations like multiply on fp32 digital values from the start, responding directly to your statements 8 comments up. You’re losing your own topics, and you seem to be unable to actually hear or understand what I’m saying for some reason, because you keep repeating your own separate straw man over and over and over.
I'm not responding to that statement. I'm responding to your comment on repeatability. You REFERENCED me directly in that statement, so I am responding to it. Look at the chain:
YOU: Digital fp8 might be repeatable where analog isn’t, but that doesn’t necessarily make it more accurate, nor is it something we characterize as “exact” like @corethree was claiming.
ME: Anyway, repeatable and exact are sort of similar concepts. If I can repeat a number that means I can replicate it, exactly. Hence the term exact.
ME: I also never said digital fp8 is more accurate or exact. There's no such thing as "digital fp8" or "analog fp8"; it makes no sense. Digital and analog are orthogonal concepts to floating point. I said electrical analog devices cannot precisely represent a ONE, simply because you cannot remove all the noise from an analog signal to make it a perfect ONE. To help you understand, instead of floating point, think of an INT8. An analog computer cannot represent an exact instance of an INT8 the way a digital one can.
YOU: Oh my, no. Exact number is a term that makes a statement about precision and error, and repeatable is not. They are orthogonal terms.
ME: Oh my yes. The action of "repeating" 100 percent requires the ability to create an exact number. Why don't you try repeating a 1 V value in a circuit where exact numbers can't be made. Your claim simply isn't in line with reality.
YOU: I’m not sure what you’re trying to say here. When you multiply two fp32 values, and the result is rounded, the answer is not exact, even if the inputs were.
ME: I'm not talking about two fp32 values. I'm not talking about multiplying. I'm talking about repeating a value. If a wire was once 2.1234 volts and I turn it off, can I replicate the 2.1234 volts again exactly? No. There's noise. Repeating is NOT multiplying.
Read this chain. It looks like there are two flaws in your logic. First, you don't understand what the word repeatability means. You seem to think it's only associated with multiplication of fp32, and in your INITIAL comment you framed my claim as if it were about that.
Second, if I can't apply repeatability to a single value, then it is NOT applicable to fp32 multiplication either. Floating-point arithmetic is simply not associative. If the elements are combined in the exact same order, you CAN repeat the result. But if all these elements are noisy analog values, then you can never repeat the result EVEN if the order is repeated.
Essentially the logic here still applies EVEN when you mistakenly think repeatability is some phenomenon only attributed to floating point.
So in conclusion, two mistakes made exclusively by you: failure to understand the meaning of "repeatability," and failure to understand the nature of repeatability in the context of floating point, where it's actually an algebraic issue of lacking associativity.
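For what it's worth, the associativity point can be checked directly (standard double-precision behavior, shown in Python; addition is used here, but multiplication rounds the same way):

```python
# Floating-point addition is not associative: regrouping the same three
# values changes which intermediate result gets rounded away.
a, b, c = 1e16, -1e16, 1.0

assert (a + b) + c == 1.0  # cancel first, then add 1.0: exact
assert a + (b + c) == 0.0  # the 1.0 vanishes into -1e16 before cancelling
```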
>I’ve listed multiple types of processors in active development and maybe a dozen companies making them, and you’re commenting on the article for one of them. If you think that nothing mentioned in this thread so far is “legit”, then it seems... nevermind. Pretty strong claims for someone who didn’t know that FPAAs exist...
First.. me not knowing about FPAAs is orthogonal to my point. I'm still right and you're still wrong. Comments like this lend zero support for your argument.
Basically, let me illustrate to you what "legit" means. First, the application can't be about better performance and energy efficiency. I caveated the FIRST sentence in this thread saying energy and speed were the only practical use cases, so presenting such examples would be REDUNDANT to my point and useless to your counter.
That leaves but one application: Some practical usage that analog computing can do that digital can't. You've presented no example that fits this use case. So overall the conclusion here is that you're wrong.
I could be wrong. Perhaps you did present such an example and I missed it. If that is the case it would help convince me if you emphasized and elaborated on those specific examples. But you should only do this if your goal is to convince me that you're not wrong. If that's not your goal, then I suggest you just leave and not respond as you would be wasting your own time here if you're responding to me without the goal of convincing me. If that's the case, then you have wasted your time on this entire conversation.
>This is true of digital floating point too, chained compositions lose precision.
No. You're thinking in terms of f64 as maximum precision and how precision is lost in software as you add things up. That's a property of the mathematics behind floating point. The mathematical model of floating point is DESIGNED to lose precision this way; we chose to use this model, so we chose this trade-off.
The problem I'm talking about is INTRINSIC to the device itself. We cannot get rid of noise in the voltage, and it will add up as we compose devices together. This does not happen with logic gates: chain a million NOT gates together and the output will be a correct, clean voltage equivalent to a HIGH or a LOW.
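Here's a toy deterministic model of that contrast (my own sketch; the 5 V and 1.4 V numbers are just TTL-flavored placeholders): an analog buffer passes its small per-stage error along, while a logic gate thresholds its input and regenerates a clean level at every stage.

```python
def analog_buffer(v, err=0.001):
    """Unity-gain analog stage with a tiny systematic per-stage error."""
    return v + err

def digital_not(v, vdd=5.0, vth=1.4, err=0.001):
    """Inverter stage: thresholding discards the incoming error entirely."""
    return (0.0 if v > vth else vdd) + err

v_analog = v_digital = 5.0
for _ in range(1000):  # chain 1000 stages of each kind
    v_analog = analog_buffer(v_analog)
    v_digital = digital_not(v_digital)

assert abs(v_analog - 6.0) < 1e-9     # errors accumulated: 1000 * 0.001 = 1 V
assert abs(v_digital - 5.001) < 1e-9  # only the last stage's error remains
```

The restoration at each thresholding stage is exactly why the million-NOT-gate chain stays clean while the analog chain drifts.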
>I’m not sure what you mean here exactly. Most real numbers cannot be represented exactly, and the idea of an exact number in digital integers and/or floating point still comes with a tolerance range. It’s only possible to have an exact number by construction or a-priori knowledge, but not in general and especially not when processing input data. It is possible to have an “exact” analog number, within a tolerance range.
That's probably because you don't understand what it means to be digital vs. analog. In digital TTL, ANY voltage between 2V and 5V represents 1, and any voltage between 0V and 0.8V represents 0. There's an entire RANGE of voltages that symbolizes a 1 or a 0. The representation is exact because we CHOSE it to be exact.
In analog computing we choose an EXACT voltage to represent some multiple of an EXACT number. Thus 3.49394239045 volts equals some number: C * 3.49394239045. Unless we can ALWAYS get the voltage to be exactly 1V flat, we can never really represent the exact number 1. We can't do this, so it's practically impossible.
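A minimal sketch of that distinction (TTL-style thresholds; the noise magnitude is illustrative): a digital read maps any voltage in a band to a symbol, so in-band noise is discarded, while an analog read treats the voltage itself as the value, so the same noise corrupts the number.

```python
import random

def digital_read(v):
    # TTL-style interpretation: >= 2.0 V reads as 1, <= 0.8 V reads as 0
    if v >= 2.0:
        return 1
    if v <= 0.8:
        return 0
    raise ValueError("voltage in the undefined region")

random.seed(0)
noise = random.uniform(-0.3, 0.3)   # illustrative line noise

print(digital_read(5.0 + noise))    # still exactly 1: the range absorbs noise
print(1.0 + noise)                  # analog: the represented number has shifted
```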
> If you used, say, one analog signal per decimal digit,
Doing this is a form of digital computing. You are essentially setting thresholds on the voltage: each digit has the values 0-9, and you are choosing a range of voltages to represent each digit. Modern computing does the SAME thing, just with 0 and 1 instead of 0-9. We use binary to represent all numbers, but it doesn't have to be that way; digital computers are not necessarily binary.
Reread my reply. I mentioned this, you just didn't get it.
>It is possible with analog circuitry to quantize a signal and/or interpret and have an analog ‘repeater’ that will rectify the signal into the interpreted value, correcting for line noise.
The minute you "quantize" your signal you're no longer doing analog computing. The number is no longer continuous as you discretized it.
The "precision" you're talking about with your "multiple signal" technique is achievable not just with multiple signals but with multiple bits. Digital computers can do the EXACT same thing with just 1s and 0s.
> That sounds like the answer summary is “Yes”. ;)
Practically speaking, digital computers are so fast that a simulation of analog is more or less indistinguishable. In gaming we run entire physics simulations so fast they appear analog, and those are real-time. We can also run simulations slower, at resolutions beyond what's possible with analog.
First, analog is limited by noise, so there's a resolution limit. Digital offers virtually unlimited resolution by using symbols to represent numbers. The advantage of analog is that, at the highest resolution analog can reach, it will be faster than digital. That's all.
>We definitely do know how to build analog computers that have function calls.
No. A function call involves a stack and saving context onto that stack. An analog computer is a function in itself; there is no "call". I neglected to mention the "delay" from input to output, as one replier noted, but that's just details. The delay would be the closest thing to memory, since it means you're accessing a result from the past.
A function “call” is a software abstraction, and what you’re describing by defining it as stack-based is doubly so: a stack is one way to implement functions, but not the only way.
Capacitors can provide analog immediate access memory. Analog delay lines and feedback circuits can provide conceptually recursive functions, and those are common in analog circuits today.
It’s true that analog has limited precision, though there’s no reason you can’t represent higher precision via multiple analog signals, just as digital uses more bits. The rest still looks to me like you’re making incorrect assumptions about what’s possible with analog circuitry.
A function call is indeed a software abstraction. Outside of software nobody "calls" functions at all. That was my entire point.
In mathematics you have function application, which is close to the idea of a function call, and that idea also doesn't exist in analog computing. Ironically, mathematics does model what's going on in an analog computer, in a field called signal processing, where it's called a transform.
>It’s true that analog has limited precision, though no reason you can’t represent higher precision via multiple analog signals,
You can't. The only way to do this with multiple signals is to make each signal represent a digit, or a portion of the digits, of the final value. But if you did this you'd be going digital. It likely wouldn't be binary, but it's still digital.
Speak for yourself. ;) Depends on how you define “digital”. Digital is typically defined as being based on logic gates, not necessarily anything to do with your representation of numbers. I say if your adder is built with op-amps and multiple lines, it’s doing analog computation on digits, and it stays analog for longer than if you build it with CMOS gates.
This distinction is important when you start making signal integrators or matrix multipliers or do other computations with analog components, for example.
> Depends on how you define “digital”. Digital is typically defined as being based on logic gates, not necessarily anything to do with your representation of numbers.
It is not based on logic gates. It is computing based on discretized data and discretized steps. Given that computing is usually about numbers, in digital computing the numbers and the steps that process them become discretized.
In analog computing nothing is discretized. The numbers and the processing of numbers is continuous.
Dude, I don't define "digital"... the ENGLISH language defines the word. Read my other reply. You're confusing your made-up definition of digital with the one defined in English.
You're just reinventing a digital computer with parallel data lines and a non-binary base. It doesn't even get at the "fuzzy" part of what it means to be an "analog computer." It's just a misunderstanding on your part.
Hehehe. Huh?? English doesn’t define itself; people define words by how they’re used. Okay, if you want to make the case for the English definition, let’s see…
Digital
1. of, relating to, or utilizing devices constructed or working by the methods or principles of electronics : ELECTRONIC
digital devices/technology
also : characterized by electronic and especially computerized technology
the digital age
3: providing a readout in numerical digits
a digital voltmeter
a digital watch/clock
6: of or relating to the fingers or toes
Okay, got it: anything electronic, or anything with fingers and toes. Sounds like all analog electronic circuitry passes for digital according to the English language. :) I mean, that’s actually true, most people do use “digital” as synonymous with “electronic”. That is why this definition is listed first: it’s the most used and de facto the most correct, because that’s how the English language works, but you obviously know that already. Also sounds like the THAT computer in the article is defined as digital, because it has a digital readout, bonus! You should write to the IEEE and the Anabrid corporation to have them correct their titles.
I wasn’t really intending to talk about how to define the word digital, even though I see that’s what the sentence literally suggests out of context. I was trying to repeat the same thing I said earlier with slightly different words. I was talking about where you draw the line between analog and digital in order to summarize any given device as being one or the other, when in fact many devices are mixed. I’m not contradicting what you’re saying about digital (conversely, you’re not contradicting me either).
Here’s an example of research into purely analog computation with parallel data lines for the purpose of increasing precision, which is exactly what I was talking about, and it even has a name: the residue number system.
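For the curious, here's a toy Python sketch of the RNS idea (the moduli are illustrative, not taken from any paper): a value is carried as several small residues, one per pairwise-coprime modulus, and addition proceeds independently per channel with no carries between them, which is what makes one-low-precision-signal-per-channel attractive.

```python
# Toy residue number system: dynamic range is the product of the moduli.
MODULI = (5, 7, 9)  # pairwise coprime, range = 5 * 7 * 9 = 315

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    # addition happens channel by channel -- no carry propagation between them
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    # reconstruct via the Chinese Remainder Theorem (brute force for the toy)
    for x in range(315):
        if to_rns(x) == r:
            return x

a, b = to_rns(123), to_rns(88)
print(from_rns(rns_add(a, b)))  # 211
```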
If you slowed down a little instead of making so many assumptions and attacking, we could have a productive conversation. You’re misunderstanding me and stating things in binary terms when it comes to analog or digital, still not acknowledging or understanding that there is a lot of space between fully analog and fully digital, and a spectrum of cross-over points and hybrids.
You're cherry-picking definitions. We are talking about digital computers, not some colloquial usage of the word "digital" as in "digital watch." This is about digital computers and analog computers, and when people talk about these things in those terms they mean exactly the Britannica definition I gave you.
>If you slowed down a little instead of making so many assumptions and attacking
You are the one who needs to slow down, bro. I never made a single attack. I only remarked on your statements, saying whether they are right or wrong or whether you don't understand. I never attacked your personal character. This accusation came out of nowhere; if anything, that statement is the one closest to crossing the line. Take a chill pill and relax. There are no attacks here.
>You’re misunderstanding me and stating things in binary terms it comes to analog or digital, and still not acknowledging or understanding that there is a lot of space between fully analog and fully digital, and a spectrum of cross-over points and hybrids.
There is a lot of space, but this is where you are completely wrong, because we aren't just talking about digital and analog. We are talking about digital and analog COMPUTING. Key word: computing. I'm not being pedantic here. The colloquial usage of "digital computing" and "analog computing" means exactly the definition on Britannica. Most people who know what they're talking about use it this way, and I'm informing you that you are out of the loop and you don't get it.
Then you misinterpret that as an attack when it's anything but.
This paper isn't about the analog vs. digital computing debate we are having. It doesn't define what they're doing as digital or analog, so it doesn't lend support in either direction. It's simply optimizing an analog process by digitizing analog values into another number system. RNS is a mathematical concept; it is not the name of the overall technique they are using here.
This is a hybrid, which I also talked about earlier. But you are wrong to call this new thing an "analog computer." The benefits are in line with what I stated originally: only speed and energy. Remember, your earlier claim was about analog computing, but here they are literally feeding that analog line into an ADC. Which again supports my point: the only way to increase precision across multiple analog signals is to digitize them. This is not a purely analog computation.
> the only way to increase precision of multiple analog signals is to digitize it
What do you mean? Digitizing doesn’t increase the precision of anything, it always loses precision. I don’t think you’re summarizing that paper accurately, they are most certainly talking about using analog compute units, specifically analog matrix-vector-multipliers, and a method for increasing the analog signal precision before digitizing the signal. The components used are analog compute circuits, and the title of the paper has “analog” in it. This is absolutely relevant to what I was trying to say because RNS is used here to make multiple analog data lines representing a value provide increased precision compared to a single analog data line.
No, they digitize the signal to get the residues. Look at the diagram.
It's not relevant because digital operations are needed to get the final result BEFORE the final ADC conversion. It's like saying analog transistors are used to make digital gates, therefore all computers with gates are analog.
They are talking about an analog MVM in a system doing hybrid computation, both digital and analog. It completely fails to support your point about increasing precision in an analog computer with multiple analog signals. It's basically the same as using one analog signal per digit, except they're using one signal per residue.
Potentially, lots! Whenever your computation allows for some small error, or is performed on fractional values (e.g. floating point), you could instead imagine doing it in analog. Think of the possibilities of zero-cost atomic addition in parallel, for example. Think about having "semi-real" neuronal responses instead of simulating neural-network layers via precise binary computations. And then you can go into more exotic things like computing with wave interference patterns.
Is an analog computer more like a floating point operation, or more like a fixed-point operation (that is; not necessarily one without any decimal place, but maybe one without an exponent field)?
IMO it is surprising fixed-point values don’t come up more often… I think we’ve accidentally translated a hardware detail into our software. Floats are only necessary if we have huge dynamic range.
> Floats are only necessary if we have huge dynamic range.
Oh, it’s not so surprising if you’ve used both floats and fixed point a lot. There are very good reasons floats are much more widely used today, and fixed point predates floats, so if fixed were better it would come up more often. With fixed point you always have to think about both the range and the resolution you want to represent, which is often a hassle and may require difficult analysis (or, more likely, throwing away precision for safety). Fixed-point numbers also have non-uniform relative precision: the ratio between steps changes across the range, which can be a disadvantage for a lot of computations. The relative precision of fixed point is actually kind of the opposite of what most people need; it gets higher the larger the magnitude.
Floats are much easier because the relative precision is roughly constant, you always know you have ~7 decimal digits worth of mantissa, and because the range is huge so most people don’t have to worry about overflow or underflow (especially for intermediate results!), and the range goes really small as well, which is useful. Float precision, unlike fixed point, is kind-of scale invariant within the float’s range.
It’s not an accident at all that float usage dominates hardware and software, it’s very intentional and people like it. There are good uses for fixed-point too, and they’re used all the time, and it makes use of integer hardware, but on the whole fixed point numbers are much more difficult to use casually than floats.
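A small sketch of that relative-precision contrast (Q16.16 is an assumed fixed-point format chosen for illustration): the fixed-point step is a constant 2^-16 regardless of magnitude, so its relative step shrinks as values grow, while a float's step (ulp) scales with the value, keeping relative precision roughly constant.

```python
import math

def q16_step_rel(x):
    # Q16.16 fixed point: absolute step is always 2**-16,
    # so relative precision improves as |x| grows
    return (1 / 2**16) / x

def float_step_rel(x):
    # float64: the ulp scales with x, so this ratio stays near 2**-52
    return math.ulp(x) / x

for x in (0.01, 1.0, 1000.0):
    print(f"{x:>8}: fixed rel step {q16_step_rel(x):.2e}, "
          f"float rel step {float_step_rel(x):.2e}")
```

Note how at x = 0.01 the fixed-point relative step is enormous, which is the "opposite of what most people need" point above.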
> more like a floating point ... or more like a fixed-point ... ?
It really depends on what kind of analog hardware you use. Not exactly like either. You would have different causes of error: thermal noise, inherent indeterminism of interactions, decay/drift of the value over time, boundary breaches for values near the extrema, etc.
I saw the notification from YouTube but haven't watched it yet. I got the TD-03 and RD-6 for my birthday present to myself last year. They're really fun!
Nope, that should be fine. Most eurorack modules run on ±12V rails, but the convention for output signals is a maximum ±10V range (with trigger signals at 10V).
Actually posted here 2 years ago, but inexplicably overlooked at the time. I had forgotten about this but am glad of the reminder, since in the meantime I've gotten into embedded systems and microcomputing thanks to the ESP32; this suddenly looks a lot more useful for prototyping. Their pro offering seems very expensive in an age of very affordable modular synth tools, but I guess you're paying for the linearity and calibration, which aren't always a priority in devices aimed at the musical market.
Differential analysers often had components you would connect together to model ODEs. I suppose it is like an FPGA, except they weren't programmable.
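As a rough digital sketch of what such a patch computes (an assumed example, not any particular machine): two integrators in a feedback loop solve x'' = -x, the classic harmonic-oscillator setup a differential analyser would realize with integrator wheels instead of the loop below.

```python
import math

# Semi-implicit Euler stepping of x'' = -x, mimicking two chained integrators
# with the output fed back (negated) into the first one.
dt = 0.001
x, v = 1.0, 0.0                      # initial conditions: x(0) = 1, x'(0) = 0
for _ in range(int(math.pi / dt)):   # integrate over half a period
    a = -x                           # the "summer" feeds -x into integrator 1
    v += a * dt                      # integrator 1: acceleration -> velocity
    x += v * dt                      # integrator 2: velocity -> position

print(round(x, 2))                   # approximately cos(pi) = -1.0
```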
It would be cool if analog computers could be miniaturised as digital computers were. Then maybe they could also be programmed, and you would have your answer
Small signal ins and outs like this should be perfectly safe. It's when you hook up power supplies without impedance to inputs that you have issues. You won't have access to that part of the circuit from the front panel.
Anybody into analog things might be tempted to think this is by THAT Corporation, a company that is loved both for its analog audio ICs and its application notes.
Doesn't appear to be. Wait for the cease-and-desist I guess. The company is https://www.anabrid.com and they also have a very aesthetically appealing "professional" analog computer for teaching and "research", sort of like the old Comdyna units.
1936: Water integrator, used in USSR until the '80s: https://en.wikipedia.org/wiki/Water_integrator
1940s: Torpedo Data Computer: https://en.wikipedia.org/wiki/Torpedo_Data_Computer
1949: MONIAC, another water integrator: https://en.wikipedia.org/wiki/Phillips_Machine
1960s: Scanimate, of which there are still a couple in use: https://en.wikipedia.org/wiki/Scanimate
Modern day: Slime molds, other biocomputers, and domino computer: https://youtu.be/OpLU__bhu2w
And of course, quantum computers.