
A little off-topic.

There is something interesting to be said about what kind of awareness a future AI (or whatever we should call it) will have as it simulates the world.

Imagine what kind of perspectives are possible when thousands or millions of input sources are your senses.



A little pedantic: you already have thousands of different input sources, from the classic "five senses" to less conscious awareness of things like your limb placement and internal biochemistry (e.g., hunger). There are even people who've "added" artificial senses by doing things like placing magnets under their skin (to sense magnetic fields) or strapping a device to their leg that buzzes in the direction of North. Not to mention that we can do things like visually inspect a screen to access information feeds that aren't available to us biologically.

However, you're right that a robot/AI developed with the intention of feeding it all sorts of heterogeneous data will probably be able to process everything more effectively. In my own research (AI with a focus on reinforcement learning and robotics) I am sometimes surprised by how effective agents can be at making sense of their input streams. For example, an experiment might not go the way you expect because the robot can trivially solve a maze by sensing the current in the wiring beneath the floor.

Of course, there's a limit to how useful raw information can be. Humans don't need to see ultraviolet wavelengths because in general the visible spectrum of ~400-700nm provides all the information we need, and the brain is good at finding the salient aspects of what we see. If you just connect a new sensor to a robot, it might improve its ability to understand the world, or it might do nothing at all, either because the robot can't incorporate the new information into its representation effectively, or because the sensor adds nothing it couldn't have figured out from existing input streams.

For example, adding a stock ticker feed to your robot would probably not help it solve a particular task, unless your robot happens to be 50 feet tall and the task in question is "rampaging down Wall Street".
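
To make that concrete, here's a toy sketch (my own hypothetical names, plain numpy): hooking up a new sensor usually just means concatenating another channel onto the observation vector, and whether the learner gets anything out of those extra dimensions is a separate question entirely.

    import numpy as np

    def augment_observation(base_obs: np.ndarray, new_sensor: np.ndarray) -> np.ndarray:
        """Bolt a new sensor reading onto an existing observation vector.

        The agent just sees a longer vector; the extra dimensions may help,
        do nothing, or be redundant with what was already there.
        """
        return np.concatenate([base_obs, new_sensor])

    # Hypothetical example: camera features plus a (probably useless) stock ticker.
    camera_features = np.random.rand(64)
    stock_ticker = np.random.rand(4)
    obs = augment_observation(camera_features, stock_ticker)
    assert obs.shape == (68,)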


Hmm, sure, but we are talking about a completely different scale here.

I mean, as an AI you are connected to the whole planet's sensors; you have the whole world's knowledge in your possession and can cross-reference it with whatever you're getting as input. You can prototype, do scenario planning on the fly, calculate, and so on. Furthermore, you are potentially getting input from other humans too, and most likely have the ability to control a number of things, which again provide new input.


>Imagine what kind of perspectives are possible when thousands or millions of input sources are your senses.

Doesn't this exactly describe human sensory input? Though our brain is efficient because it throws out most of the data early in the signal chain (as research on vision and auditory processing has shown). Will future AI also need to be as efficient?
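
(Rough analogy in code, nothing to do with actual neural circuits: you can discard most of a raw signal up front with something as dumb as average pooling, and everything downstream only ever sees the reduced stream.)

    import numpy as np

    def average_pool(signal: np.ndarray, factor: int) -> np.ndarray:
        """Average non-overlapping windows, throwing away most raw samples
        early in the chain."""
        trimmed = signal[: len(signal) // factor * factor]
        return trimmed.reshape(-1, factor).mean(axis=1)

    raw = np.random.rand(48_000)         # e.g. one second of 48 kHz audio
    reduced = average_pool(raw, 16)      # 16x fewer samples for downstream use
    print(raw.size, "->", reduced.size)  # 48000 -> 3000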


>Doesn't this exactly describe human sensory input?

Well, consider having a 360° array of 30 cameras all integrated into a perfect spherical sensory experience. It's something we can't really imagine experiencing natively, but it would be trivial for eBrains to coalesce visual systems that way from eBirth.

Our bodies have lots of low-bitrate sensors, like the billions of individual sensory nerves distributed throughout our bodies (each individually addressable in the brain), but we don't think of "touch" as a sense to "computationalize" the way we do vision or sound or language.

One amusing thing about AI sensors: nobody ever talks about superhuman smell. Where are the quantum AI noses?


Biochemical sensors, like what the discredited blood-test company Theranos tried to build, are a sort of superhuman smell.


But at a completely different scale. You have to imagine a whole planet's sensors as your sources, combined with the whole planet's knowledge, and so on.

I don't think it would make much sense to compare that with the limited point of view from which we experience the world.


Deep neural nets do precisely this. Though the specifics change from implementation to implementation, in general there is a large drop in the number of hidden units at each layer of the architecture.
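
As a minimal sketch (plain numpy, arbitrary layer sizes of my choosing), the shrinking looks like this: each layer projects into fewer units than the last, so most of the input dimensionality is squeezed out early.

    import numpy as np

    rng = np.random.default_rng(0)
    layer_sizes = [1024, 256, 64, 16]  # note the steady drop per layer

    # Random weights stand in for a trained network.
    weights = [rng.standard_normal((m, n)) * 0.1
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(x: np.ndarray) -> np.ndarray:
        """Push input through successively narrower layers (ReLU in between)."""
        for w in weights:
            x = np.maximum(x @ w, 0.0)
        return x

    x = rng.standard_normal(layer_sizes[0])
    print(forward(x).shape)  # (16,): 1024 input dims squeezed down to 16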



