


> The issue isn't AI, it's effort asymmetry

Effort asymmetry is inherent to AI's raison d'être. (One could argue that's true for most consumer-facing technology.)

The problem is AI.


I’ve been thinking about this idea a lot, and I have a phrase I’ve taken to using: Leverage Engineering.

I think AI is going to create a whole new class of people who take a tiny input and turn it into an outsized output.

When this works, it is really nice. Think Cursor, Lovable, or OpenClaw.

When it doesn’t work, though, things get ugly. The same power that allows a small team to build a billion-dollar company also allows rogue agents to industrialize their efforts.

Combine this with the rise of headless browsers and you have a dangerous cocktail.

I wouldn’t be surprised if we see regulation or licensing around frontier AI APIs in the near future.


> I think AI is going to create a whole new class of people who take a tiny input and turn it into an outsized output.

And that's a problem because? People said very similar things about digital music and synthesizers: "anyone can make music with a synthesizer, so it will make music less special." But we're still waiting for music to decay away just because it's easier to make than it used to be.


To be fair, I didn’t see parent criticizing it. Just observing.

The issue is not AI. It's the incentives that make contributions to a well-known open source project a currency for getting a job.

> A "contributor must show they've read the contributing guide" gate (like a small quiz or a required issue link) would filter out 90% of drive-by LLM PRs.

Having a no brown M&Ms rule will only work temporarily.

The LLM can read the guidelines too, after all.

Better might be to move to emailed patches and ignore GitHub completely. The friction is higher, and email addresses are easier to detect and record as spam sources than GitHub accounts are.
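
(A minimal sketch of what "record as spammers" could look like on the maintainer's side, assuming an email-patch workflow: read an mbox of incoming git format-patch mail, drop anything from a plain-text blocklist of known spam senders, and queue the rest for review. The file names, blocklist format, and the "[PATCH" subject check are illustrative assumptions, not any project's actual tooling.)

```python
# Sketch: triage emailed patches against a sender blocklist.
# Paths and formats below are assumptions for illustration only.
import mailbox
from email.utils import parseaddr
from pathlib import Path

BLOCKLIST_FILE = Path("spam-senders.txt")  # one address per line (assumed)
INBOX_MBOX = Path("patches.mbox")          # mbox of incoming patch mail (assumed)
REVIEW_DIR = Path("to-review")

def load_blocklist() -> set[str]:
    if not BLOCKLIST_FILE.exists():
        return set()
    return {line.strip().lower()
            for line in BLOCKLIST_FILE.read_text().splitlines()
            if line.strip()}

def triage() -> None:
    REVIEW_DIR.mkdir(exist_ok=True)
    blocklist = load_blocklist()
    for i, msg in enumerate(mailbox.mbox(str(INBOX_MBOX))):
        sender = parseaddr(msg.get("From", ""))[1].lower()
        subject = msg.get("Subject", "")
        # Drop mail from known spam senders; keep only messages that look
        # like git format-patch submissions ("[PATCH ...]" in the subject).
        if sender in blocklist:
            print(f"dropping {subject!r} from blocklisted {sender}")
            continue
        if "[PATCH" not in subject:
            continue
        payload = msg.get_payload(decode=True)
        if payload is None:  # multipart mail; keep the sketch simple and skip it
            continue
        out = REVIEW_DIR / f"{i:04d}-{sender.replace('@', '_at_')}.patch"
        out.write_text(payload.decode("utf-8", errors="replace"))
        print(f"queued {subject!r} from {sender} -> {out}")

if __name__ == "__main__":
    triage()
```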


I suspect an LLM would read the instructions more thoroughly than a human.

So if there are only brown M&Ms greeting you in your dressing room, most likely they were put there by a robot.


I see the Van Halen reference you made there.

Nah; I could see any of the modern models blazing through that challenge.

What might be better is an option that developers can enable to disable opening new PRs via the API. That way, outside contributors can still create PRs if they're willing to spend a few seconds doing it in the browser.
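
(GitHub doesn't offer that option or expose whether a PR was opened via the API, so this is only a rough stand-in: a maintainer-side sketch that uses the public REST API to auto-close open PRs from very new accounts. The endpoints are real; the repo name, the 7-day threshold, and the token environment variable are assumptions.)

```python
# Rough stand-in for "no API-created PRs": close PRs from very new accounts.
# OWNER/REPO, MIN_ACCOUNT_AGE, and GITHUB_TOKEN handling are assumptions.
import os
from datetime import datetime, timedelta, timezone

import requests

OWNER, REPO = "example-org", "example-repo"  # hypothetical repository
MIN_ACCOUNT_AGE = timedelta(days=7)          # arbitrary threshold
API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def account_created(login: str) -> datetime:
    # GET /users/{username} returns the account's created_at timestamp.
    r = requests.get(f"{API}/users/{login}", headers=HEADERS, timeout=10)
    r.raise_for_status()
    return datetime.fromisoformat(r.json()["created_at"].replace("Z", "+00:00"))

def close_prs_from_new_accounts() -> None:
    r = requests.get(f"{API}/repos/{OWNER}/{REPO}/pulls", headers=HEADERS,
                     params={"state": "open"}, timeout=10)
    r.raise_for_status()
    for pr in r.json():
        login = pr["user"]["login"]
        age = datetime.now(timezone.utc) - account_created(login)
        if age < MIN_ACCOUNT_AGE:
            # Close the PR; a real bot would also leave an explanatory comment.
            requests.patch(f"{API}/repos/{OWNER}/{REPO}/pulls/{pr['number']}",
                           headers=HEADERS, json={"state": "closed"}, timeout=10)
            print(f"closed #{pr['number']} from {login} (account age {age.days}d)")

if __name__ == "__main__":
    close_prs_from_new_accounts()
```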


I honestly think LLMs are what will put the final nail in the coffin of weak identification.

We need to start putting real names behind our usernames, even if they are only used to link accounts to identities and not displayed to each other. This could in one fell swoop take care of sock puppets, astroturfing, ban evasion and anonymous bot slop.


Ah, and now we see one of the main motivations for pushing LLMs so hard.



