
(1) Counterpoint: Apple has much better taste than other OEMs. They've hired designers from Burberry and Nike to assist with wearable design. Design is a core competency for Apple. Others can't claim the same thing.

(2) More data gets you a couple of percentage points on accuracy. It doesn't make Google Now infinitely better in perpetuity than Apple's offering (Siri + whatever is next). Apple is also investing heavily in building out Siri/Maps/NLP.



More data gets you way more than that, but it's not just the data anyway. Google has the personnel who can actually make use of that data. This isn't the kind of work you can easily outsource; you need a superb research team. (Source: I'm a professor of machine learning.) As far as anyone knows, Apple doesn't have one. They do fantastic product research, and it shows, but they're at least a decade behind the expertise inside Google and, increasingly, Facebook, Baidu, and a few others. Worse, that gap is widening, not shrinking.


What does more data get you beyond accuracy? I think it opens up certain model classes -- like online regression -- which have provably low error rates given lots of data, but my argument is that you don't need "the entire web," as another commenter suggests, to be good enough at, say, speech recognition. I could definitely be mistaken, though...
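As a rough illustration of that model class (a toy sketch, not anyone's production system): a streaming linear regressor that sees each example exactly once and whose estimate tightens as more data flows past. The data, learning rate, and true relationship here are all made up.

```python
import random

# Toy online (streaming) linear regression via SGD: each example is
# seen once, then discarded; the fit improves as more data streams by.
# Synthetic data; the true relationship is y = 2x - 1 plus noise.
random.seed(0)
w, b, lr = 0.0, 0.0, 0.01

for _ in range(10_000):
    x = random.uniform(-3, 3)
    y = 2.0 * x - 1.0 + random.gauss(0, 0.1)
    err = (w * x + b) - y   # prediction error on this one example
    w -= lr * err * x       # single gradient step on that error
    b -= lr * err

print(round(w, 2), round(b, 2))  # converges toward the true (2.0, -1.0)
```

The point of the sketch: the model is tiny and the algorithm is trivial; the quality of the estimate comes almost entirely from the volume of examples that stream past it.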

I agree that quantitative research must be a core competency -- I'm an ML engineer at a company that's heavily invested in its research team -- and it is most certainly not one of Apple's foci. What's stopping Apple from building that competency by acqui-hiring the talent, though? This is no different from what Google has done over the years...


At some level, it all boils down to increasing accuracy, but the point I was making was just that doing that seems, right now, to be best accomplished with loads of data. If you look at the deep learning work that's been big lately, you have models with millions of free parameters. By necessity, you need a lot of data to constrain a model that big.
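To put a number on "millions of free parameters," here's a back-of-envelope count for a small fully connected net; the layer sizes are illustrative, not any particular published model:

```python
# Parameter count for a fully connected net: each layer contributes
# (inputs x outputs) weights plus one bias per output unit.
layer_sizes = [784, 2048, 2048, 2048, 10]  # illustrative sizes

params = sum(n_in * n_out + n_out
             for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print(params)  # ~10 million free parameters, each needing data to constrain it
```

Even this modest architecture lands above ten million parameters, which is why small labeled datasets can't pin such models down.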

Even speech recognition has gotten a big boost recently from taking a "simple model with massive data" approach.

I'm not convinced that these approaches are sufficient to give you some sort of human-level AI. I'm pessimistic on the timeframes for that in general. And I'm sure there are areas where they fail, and maybe someone else comes along with a better idea, but Apple's not working on that either.

Certainly, they could acquire their way to competency, and I'm not about to chime in with the proverbial "Apple is doomed...DOOOOOOOOMED". The only thing really stopping them is interest. But it takes a while to go from getting results out of a research team to turning those results into a product.


(1) Right now they do, but it's only a matter of time before Samsung or HTC partners with Prada, Rolex, Omega, or another major brand and comes out with a line of watches that appeals. The point is that we might get one or two Apple Watches a year if we're lucky, while in the same time frame we'll get around 30-50 different Android Wear devices. Android Wear will be available in a ton of different shapes and sizes, and there's no way the top watch brands won't want in on the action.

(2) Data is everything in machine learning, and Google has the whole web. Google knows me better than my mother, girlfriend, and brother combined. Android, YouTube, Search, Gmail, Drive, Maps, Calendar, Hangouts, and Shopping Express are enough to tell them who I talk to, where I live, what I eat, where I'm going, and what I read.

A couple of percentage points is huge when your accuracy is above 90%. Apple doesn't have anyone as good as Geoff Hinton, Peter Norvig, or Jeff Dean. There's no way they'll be able to compete on AI with Google.
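One way to read that claim, as simple arithmetic: going from 90% to 95% accuracy halves the error rate, and errors are what users actually experience.

```python
# "A couple of points above 90%" in terms of error rate:
acc_low, acc_high = 0.90, 0.95
err_low, err_high = 1 - acc_low, 1 - acc_high  # 10% vs 5% error
print(round(err_low / err_high, 2))  # the better model makes half as many mistakes
```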


(1) Google has partnered with Diane von Furstenberg on Glass, and...there isn't a whole lot to show for it. Glass has ~zero mindshare among fashion-forward people. I cannot stress enough the importance of design as a core competency.

(2) You're the second person to respond with "data is everything" in machine learning, but it's more complicated than that. Google has Hinton working on deep nets and Kevin Murphy on knowledge representation, but that cutting-edge work is closer in spirit to MS Research or Bell Labs. Those models take years to make it into production.

My experience is that "a couple of percentage points" above 90% actually matters little--the marginal cost of obtaining those points is enormous (many, many hidden layers in your convolutional net really slow it down) with little real-world benefit (a user can't tell the difference between 90% and 95%; they'll just think of the product as "really, really good").

I believe Apple can approach the point where their NLP/speech tech is "good enough" relative to Google.



