Hacker News

I joked about this[1] last year; now we are seriously discussing it. Hilarious.

[1] https://www.linkedin.com/pulse/2015-technology-7-predictions...

EDIT:

I don't think we can trust an AI just yet. For example, I've had arguments with people who wanted me to feed motivation (cover) letters from job applicants into Watson to determine "cultural fit" (I'm in the tech recruitment business atm). IMO these technologies are way over-hyped for now, and we are walking down a very dangerous path, because marketing pushes in this direction while the technology is far from ready.

To prove my point I fed writings by Josef Mengele, Stalin and Bin Laden into Watson to see how it would evaluate the data. As expected, Watson had some "great things" to say about these characters.

Another feeling I get is that reading info about ourselves in this context is like reading a horoscope. People read two statements that are true (but vague), and when the third one may not be true they shrug it off with "oh, I didn't know this about myself yet ... I'll have to monitor myself in future to see if this is right". We are prone to be "open" to such statements as long as they sound like a positive trait. But is it true? In that sense machine learning might fool us into thinking we have removed bias, when we cannot remove bias like this. I honestly think this technology should come with a warning label, because people who have no idea how the data is prepared or analysed will interpret the output verbatim and take it at face value.

Here is the link: http://blog.valbonne-consulting.com/2015/06/13/using-big-dat...


