
> However, we are so far away from producing anything remotely close to that level of AI, that I fear their work is ungrounded. It strikes me as fun quasi-SF rather than serious engineering.

At what point would you allow research to go forth? And you know, there's a lot of work that can be done which is not 'directly coding an AI'.

> But I do believe they are seriously interested in world improvement, so my suggestion to them in their work on "benevolent goal architectures" is to study how existing goal architectures work, that is, to look at existing economic and social systems where disparate individual goals are coordinated into cooperative or conflicting action, and how the results are related to human goals and human flourishing.

But why would one expect any of that to transfer to AI? What do the internal dynamics of a Board of Directors have to do with, say, the Code Red worm? What lessons can we extract from 501(c)3 nonprofits which will tell us anything about deep-learning-based architectures? Do CEO salaries really inform our understanding of Moore's Law? Or can study of Congressional lobbying seriously help us better understand progress on brain scanning and connectomes? Do Marxian dialectics truly help us improve forecasts for when feasible investments in neuromorphic chips will match human brains?

The closer I look at corporations and modern economies, the more worthless they seem for understanding the possibilities of AI, much less for engineering safe or powerful ones. Modern economies are based on large assemblages of human brains, acculturated in very specific ways (remember Adam Smith's other major work: _The Theory of Moral Sentiments_), which are limited in many ways, and which are fragile and non-robust: consider psychopathy, or consider economics' focus on 'institutions'. Why do some economies explode like South Korea's, while others go nowhere at all? Even with millennia of human history and almost identical genomes and brains, outcomes are staggeringly different. (You complain about corporations; well, how 'friendly' is North Korea?)

And this is supposed to be so useful for understanding the issues that we should be focused on your favored political goals instead of directly tackling the issues?



> But why would one expect any of that to transfer to AI? What do the internal dynamics of a Board of Directors have to do with, say, the Code Red worm? What lessons can we extract from 501(c)3 nonprofits which will tell us anything about deep-learning-based architectures? Do CEO salaries really inform our understanding of Moore's Law? Or can study of Congressional lobbying seriously help us better understand progress on brain scanning and connectomes? Do Marxian dialectics truly help us improve forecasts for when feasible investments in neuromorphic chips will match human brains?

Studying human systems is one of the best ways of studying complex systems and systems engineering, which are already crucial for complex engineering projects, like developing a complex AI. Before we can even talk about our future binary overlords being hostile or friendly, we will have to study how basic, but constantly developing, AI integrates with and plays off of human social systems. We have to gather quantified data about how two distinct forms of intelligence interact and what conclusions, if any, can be generalized to a future where humans are no longer the species with the highest intelligence.

You have no data about how real AIs would behave in our society except for fiction, which contains no more guidance now than the Bible did for 16th-century astrophysics. We have no consistent models that explain our own intelligence, let alone an artificial one that has yet to exist. You can pontificate about Plato's ideal Terminator, but it won't make a bit of difference until we get our telescope.


> Studying human systems is one of the best ways of studying complex systems and systems engineering, which are already crucial for complex engineering projects, like developing a complex AI.

What does it mean to study a generic 'complex system' or 'systems engineering', and what does this have to do with estimating the potential risks and dangers?

> we will have to study how basic, but constantly developing, AI integrates and plays off of human social systems.

This presumes you already know what the AI will be like, and it puts the cart before the horse.

> We have to gather quantified data about how two distinct forms of intelligence interact and what, if any, conclusions can be generalized to a future where humans are no longer the species with the highest intelligence.

Consider an aborigine making this argument: 'we have observed their firearms and firewater, and know there are many unknowns about these white men in their large canoes; if we look at their capabilities, our best analyses and research and extrapolations certainly suggest they could be a serious threat to us, but we must reserve judgement and quantify data about how our forms of intelligence will interact with theirs'.

> You have no data about how real AIs would behave in our society except for fiction, which contains no more guidance now than the Bible did for 16th-century astrophysics.

Really? We know nothing about AI and our best guesses are literally as good as random tribal superstitions?

> We have no consistent models that explain our own intelligence, let alone an artificial one that has yet to exist.

Someone tell the psychologists and the field of AI they have learned nothing at all which could possibly inform our attempts to understand these issues.


I think you are homing in on a key philosophical difference between me and LessWrongsters. Don't really have time to get into it now, except to say that it is kind of arrogant to think you can design or think about superintelligences without reference to the best existing intelligent systems we have. Especially if you want to keep them goal-compatible. The pathologies of such systems are especially instructive.


> it is kind of arrogant to think you can design or think about superintelligences without reference to the best existing intelligent systems we have.

I don't think it's any more arrogant than, say, Dijkstra pointing out that submarines move in a completely different manner than a human swimmer. How arrogant were the Wright brothers in looking at birds and deciding to try to achieve some of the same goals by a completely different mechanism?

If computers thought the same way humans do, if they came with built-in moral sentiments (the product of a very unusual evolutionary history and social structure), if they had access to no novel capabilities, then humans and the forms of cooperation developed over the last few eyeblinks (centuries) might be relevant. But then no one would care about the issue in the first place...



