Going from zero pieces of software with human-level intelligence to one is the hardest part. Once you have one, you can duplicate it as much as you want, given resources. It can also be pointed inward to improve its own effectiveness.
Actually, there are a lot of good arguments for logistic growth. The only arguments for linear or sublinear growth I’ve heard are weak; they mostly rest on the implicit assumption “those alarmists and their exponential growth! They probably didn’t even consider that it could be slower, more incremental growth” rather than on fully fledged arguments.
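To make the logistic case concrete, here’s a minimal sketch in Python (the parameters L, k, and t0 are purely illustrative assumptions, not anything from the thread). The point: early on, a logistic curve is nearly indistinguishable from an exponential; it only saturates later.

```python
import math

def logistic(t, L=100.0, k=1.0, t0=10.0):
    """Standard logistic curve: near-exponential before t0, saturating toward L after."""
    return L / (1.0 + math.exp(-k * (t - t0)))

def linear(t, slope=5.0):
    """Linear growth, for comparison."""
    return slope * t

# Before t0, logistic values grow by a roughly constant multiplicative factor
# per step (exponential-looking); after t0 they flatten toward the ceiling L.
for t in range(0, 21, 4):
    print(f"t={t:2d}  logistic={logistic(t):8.3f}  linear={linear(t):6.1f}")
```

So “it’s only logistic, not exponential” is cold comfort if we’re still on the steep part of the curve.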
There’s also a meta-argument that I have yet to see addressed in anti-alarmist sentiments: which case demands more attention if it does happen? If there’s a 5% chance of the growth being exponential, how much attention should we devote to that case, given that its impact would be far higher than under linear or sublinear growth? This is a big deal - it’s like Pascal’s wager, but with a real possibility that I believe most would admit has at least a small chance of occurring.
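Here’s that arithmetic in its crudest form (the 5% is from the paragraph above; the 1000:1 impact ratio is purely an illustrative assumption):

```python
# Toy expected-impact comparison; all magnitudes are illustrative assumptions.
p_exponential = 0.05         # small chance the growth turns out exponential
impact_exponential = 1000.0  # impact in that case (arbitrary units)
impact_slow = 1.0            # impact under linear/sublinear growth

print(p_exponential * impact_exponential)   # 50.0  <- dominates...
print((1 - p_exponential) * impact_slow)    # 0.95  <- ...the likely-but-mild case
```

Even a small probability times a large enough impact dominates the expected value, which is why the tail case arguably deserves disproportionate attention.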
Apologies for any brashness coming across. I’m still figuring out how to communicate effectively about something I feel a lot of emotion about.
I didn't say linear, but to play devil's advocate, isn't that roughly how humans learn? We start out as "pre-intelligent" little creatures who slowly, methodically, with the help of others, develop aptitudes and learn about the world. Learning continues in this manner your entire life, should you keep at it: slow, incremental progress requiring teachers, peers, trial and error, crises, a third of your life spent unconscious in sleep, etc., in the absence of which no learning may happen at all. And the bot may face greater computational constraints than humans under current technology, as the brain is far more efficient than any computer today.
I'm not convinced that each of the arcs, elementary intelligence --> average intelligence --> superintelligence, wouldn't be painstaking and roughly linear.
>It can also be pointed inward to improve its own effectiveness.
Assumption. Intelligence (which isn’t defined here) may be something that can grow without bound; it may be something that plateaus just above the brightest human yet (again, ill defined - “IQ is a number, there are higher numbers, so intelligence must be able to grow” is about as much thought as some people put into it); or it may be something that can grow without bound while the effort required grows too.
Use "capability" instead of "intelligence" then. Defined as "ability to solve any problem dwighttk has ever dreamt up."
There's pretty much no reason to believe capability peaks just above the level of the brightest human.
Our brains aren't even integrated yet with hardware-optimized solvers onto which we could offload minimax or tree-search problems, simple game-theoretic situations, or any number of other things a computer system is much better and faster at than a human.
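For a sense of scale, here’s a toy minimax over a made-up game tree (the tree and payoffs are invented for illustration); a machine chews through searches like this millions of times faster than a human can:

```python
# Minimal minimax over a tiny two-player game tree (illustrative only).
# Leaves are payoffs for the maximizing player; internal nodes are lists of children.

def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):  # leaf node: just return its payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Depth-2 tree: the maximizer picks a branch, then the minimizer replies.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))  # 3 -- the minimizer's best replies are 3, 2, 0; max of those is 3
```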
It's just another one of those things you can believe if you'd rather not spend time worrying about the ethics problem.