Statements like that typically come from a best-selling author and keynote speaker. This kind of article seems valuable to some audiences, but it's probably something I will never understand.
"What if a large class of algorithms used today -- from the algorithms that help us avoid traffic to the algorithms that identify new drug molecules -- worked exponentially faster?" -- the problems used as examples in that opening line are only tangentially related to the submodular maximisation problem the paper is actually tackling. I also don't think submodular maximisation is widely used today for identifying new drug molecules or avoiding traffic.
He has already done his job of intellectually engaging the recruiter. If the other party is not interested, it would not be rude for him to update his resume and move on.
The guy writes well; the recruiter is the one with the bad attitude. But I would say that a practical communicator knows his audience, and in this case might want to give up explaining POSIX, change his resume, and move on.
I agree with this article more. Yes, $86M is certainly expensive, and there are plenty of inefficiencies. But the first article oversimplifies the technical aspect and misses the point about where the inefficiencies actually come from. The project can't just use OpenALPR at its current level, and it is hopeless as a starting point for getting decent accuracy.
So IBM Watson can do all the smart and complex stuff, but we still need humans to do the dumb stuff like importing Excel files, where the cost outweighs the benefit of getting Watson to do the smart stuff.
"I still think you're thinking of this as you'll take an existing product and add some AI to it. That’s not what we’re seeing. What we’re seeing is an entirely new kind of product that wasn't possible before."
The major difference from previous platform shifts is that the limits of frontier AI technology are extremely hard to gauge, even for experts. Unlike AI tech, previous platform shifts were easy for a person on the street to understand. For example, consumers can see that a smartphone allows messages to be sent over cellular data, which is better than SMS. But is it obvious whether the Echo is better than Siri, or whether Tesla's driver assist is more reliable than a competitor's?
I feel the recent breakthroughs in AI are concentrated in a small number of (but very important) areas, such as computer vision and, to some extent, machine translation. There need to be more advances, especially in decision making, before AI can have the broader and more meaningful impact described in the interview.
"Now that DeepMind knows the approach works, it also knows where its AI system lacks information, so it may ask Google to put additional sensors into its data centers to let its software eke out even more efficiency."
Sounds like active learning to me. It's a type of machine learning where the learner proactively asks for interesting data points to be labeled so that it can learn more about the system. :)
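To make that concrete, here's a minimal toy sketch of pool-based active learning with uncertainty sampling (entirely hypothetical, not from the article: the `oracle` function stands in for a human labeler, and the 1-D threshold model is just the simplest learner I could pick). The learner always asks for the label of the point it is least sure about, i.e. the one closest to its current decision boundary:

```python
import random

# Toy pool-based active learning via uncertainty sampling (illustrative sketch).
# The oracle stands in for a human labeler: y = 1 iff x >= 5. The learner keeps
# a threshold estimate and repeatedly queries the point it is least sure about.

random.seed(0)
pool = sorted(random.uniform(0, 10) for _ in range(100))  # unlabeled points

def oracle(x):
    # Hypothetical labeling function; in practice this is an expensive human.
    return 1 if x >= 5 else 0

# Seed with the two extreme points so both classes are represented.
labeled = [(pool[0], oracle(pool[0])), (pool[-1], oracle(pool[-1]))]
pool = pool[1:-1]

for _ in range(10):
    zeros = [x for x, y in labeled if y == 0]
    ones = [x for x, y in labeled if y == 1]
    threshold = (max(zeros) + min(ones)) / 2  # current decision boundary
    # Uncertainty sampling: query the pool point closest to the boundary.
    query = min(pool, key=lambda x: abs(x - threshold))
    pool.remove(query)
    labeled.append((query, oracle(query)))

print(round(threshold, 3))  # homes in on the true boundary at 5
```

With only 12 labels the estimate lands close to the true boundary, because each query bisects the remaining uncertain region, which is the whole appeal over labeling points at random.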
UCT is a very simple idea that works surprisingly well across very diverse domains. I can't emphasize its generality enough: you can throw different problems at it, and it will give you a decent (maybe not the best) result without using any additional domain knowledge.
It is also very easy to work with; you can easily tweak the algorithm and add heuristics for your specific domain.
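To show how little machinery it takes, here's a minimal UCT sketch (my own toy example, not from any paper): UCB1 selection over a search tree, with a made-up domain where you pick 5 binary digits and the reward is the fraction of 1s. UCT should learn to pick 1 at the root with no domain knowledge at all:

```python
import math
import random

# Minimal UCT: UCB1-guided Monte-Carlo tree search on a toy problem where a
# "state" is a tuple of moves in {0, 1} and reward = fraction of 1s chosen.

DEPTH = 5
C = math.sqrt(2)  # exploration constant

class Node:
    def __init__(self, state):
        self.state = state    # moves taken so far
        self.children = {}    # move -> Node
        self.visits = 0
        self.value = 0.0      # total reward backed up through this node

def reward(state):
    return sum(state) / DEPTH

def rollout(state):
    # Random playout to a terminal state (no domain knowledge used).
    while len(state) < DEPTH:
        state = state + (random.choice((0, 1)),)
    return reward(state)

def uct_select(node):
    # UCB1: average value plus an exploration bonus for rarely-tried children.
    return max(node.children.values(),
               key=lambda c: c.value / c.visits
                             + C * math.sqrt(math.log(node.visits) / c.visits))

def simulate(node):
    # One UCT iteration: select, expand, rollout, backpropagate.
    path = [node]
    while len(node.state) < DEPTH:
        untried = [m for m in (0, 1) if m not in node.children]
        if untried:
            m = random.choice(untried)
            node.children[m] = Node(node.state + (m,))
            path.append(node.children[m])
            value = rollout(node.children[m].state)
            break
        node = uct_select(node)
        path.append(node)
    else:
        value = reward(node.state)  # reached a terminal node via selection
    for n in path:
        n.visits += 1
        n.value += value

random.seed(1)
root = Node(())
for _ in range(2000):
    simulate(root)

best = max(root.children, key=lambda m: root.children[m].visits)
print(best)  # the root move UCT settles on
```

Swapping in a real domain only means replacing `reward`, the move set, and (optionally) the random `rollout` with a domain heuristic, which is exactly the "easy to tweak" property above.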
I've coded and tested sparse lookahead in a trading algorithm before, using a comprehensive example as a guide. Do you know of any comprehensive walkthrough examples implementing a UCT scenario that I could use as a guide and to verify my results against?
However, the tutorial doesn't work towards a working implementation. I think you can verify your results against benchmark problems. There are a number of good implementations around:
I think it could be the predictability of a self-driving car in an accident that makes people uncomfortable, and the accident rate doesn't capture that. For example, an accident might mean hitting a barrier; a human realizes the terrible situation and will swerve away from the cliff behind the barrier. But what would a self-driving car do? Its sensors might be too damaged to recognize the dire consequences.