
> it's about the net benefit to society and we should be very careful what we wish for.

Seems like we have a classic trolley problem.

On one track, compensating copyright holders is required for LLMs, and it's going to be very expensive to acquire all of this copyrighted info, meaning only the biggest companies can afford to do it.

On the other track, compensating copyright holders is not required, LLMs (led by big tech) capture most of the economic value from every incremental piece of content created by humans in perpetuity, consolidating wealth in the hands of a few shareholders and insiders.

Neither seems ideal.



> On one track, compensating copyright holders is required for LLMs, and it's going to be very expensive to acquire all of this copyrighted info, meaning only the biggest companies can afford to do it.

There is also a third track: most of the abundant code out there is open source, and much of the rest is unlicensed content (which is still copyright-protected in the US, afaik). If corporations can't monetize it, we win, because models either need to be open source or we need payment for training.


I'm not sure it's certain yet whether AI is going to lead to more consolidation or actually have the opposite effect.

Whilst history tends to make me suspect the former, the recent leaked Google memo gave me pause for thought. AI is already out there and can already be trained on consumer hardware. It's ever so slightly possible that big tech won't be able to hoard the benefits this time.


I'd choose the second track without hesitation.

Shareholders can consolidate all the wealth they want, as long as they deliver the goods: LLMs that are trained on all of humanity's creative output.


Open source models are possible if we pick the second option. Lots of innovation in the AI scene is happening thanks to open source models being available to the general public.



