Jordan argues that leaps in human-imitative AI are not necessary to solve IA/II problems -- "We need to solve IA and II problems on their own merits, not as a mere corollary to a human-imitative AI agenda."
However, achieving near-human-level accuracy on tasks such as classifying images of cars or road signs would be immensely useful to the proposed II-type system that handles large-scale self-driving transportation (individual cars would conceivably need the ability to understand their local environments and communicate this to the overall network).
I agree with his argument that there should be a shift in the way we think about problems in "AI", but I don't think progress on human-imitative AI problems and progress on IA/II problems are mutually exclusive.
This sounds like the longstanding debate between weak/narrow AI and strong AI: can improving the former make progress toward the latter? I'm inclined to agree with Jordan that we shouldn't expect the two to enhance each other, much less converge. Just as advances in one classical algorithm rarely improve another, I think it's unlikely that the next generation of object recognition will advance speech recognition or reading comprehension.
More fundamentally, until AI escapes its current dependency on pattern matching driven solely by the accumulation of probabilistic events, I see little chance that human-level general-purpose cognition will arise from our current foundations for AI, namely observing innumerable games of chess or watching millions of cars wander city streets.