madrox's comments | Hacker News

I'm getting so exhausted by the "slop" accusation on new project launches. There are legit criticisms of EmDash in the parent comment that are overshadowed by the implication that it was AI-coded and thus of unusable quality.

The problem is there's no beating the slop allegation. There's no "proof of work" that can be demonstrated in this comment section that satisfies, which you can see if you just keep following the entire chain. I'd rather read slop comments than this.

The main engineer of this project is in the comments and all he's being engaged with on is the definition of vibes.


They called the project EmDash and launched it on April 1st with a blog post that brags about how little effort it took to write (because of agents) before even saying what it is.

If the product launch involves dressing the engineering team up in duck suits and releasing to a soundtrack of quacking, it's really not surprising people are asking the guy they hid behind the Daffy mask why he's dressed as a duck rather than what he learned about headless CMS architecture from being on the Astro core team...


I know that it's discourteous to write off a potentially valuable project because the release post showed a lack of self-awareness, but I think it's indicative of the larger struggle taking place: that trust is decaying.

It's decaying for a lot of the reasons displayed in the post, like you described, but the post also:

  - is overlong (probably LLM assisted)
  - is self-congratulatory
  - boosts AI
  - rewrites an existing project (vs contributing to the original)
  - conjures long-term maintenance doubt/suspicions
  - is functionally an advertisement (for Cloudflare)

So yeah, maybe EmDash is revolutionary with respect to WordPress, but it hasn't signaled trust, and that's a difficult hurdle to get past.

This is a great point. I wish we started from this.

There are plenty of other comments saying this. It isn't that I don't understand and need a clever metaphor.

But to run with your metaphor, can we, maybe, just ignore the quacking since we all know that's just how you get attention these days and instead focus on that other stuff? Because it seems like asking about the duck mask will never produce a satisfactory answer and instead turn into a debate on the merits of ducks.

Dare I suggest that this debate has become boring and beside the point. Unless someone on HN has been living under a rock, they've already made up their mind about ducks.


Obtuse and repetitive debates are what HN comments are for. :)

But in this case it feels less like somebody has launched a revolutionary new product and HN is debating the MIT licence and landing page weight, and more like somebody has announced they've a plug-in replacement for a popular repository with a troll post and HN chooses not to spend enough time on Github to discover the all-star team and excellent architectural decisions the blog didn't bother mentioning.

Plus Cloudflare deliberately signalling that at best they're not very invested in its success and it might well just be low-effort slop probably is more pertinent to whether a purported WordPress replacement actually gains any traction than its technical merit, and headless CMS with vendor lockin vs managing WordPress security isn't likely to be a more productive debate than one on "slop". The target audience for this product is much more 'HN crowd' than 'read about agentic solutions to workforce automation on Gartner crowd' too, so the quacking alienating HN is actually relevant.


> Obtuse and repetitive debates are what HN comments are for. :)

Fair


I am not implying unusability due to AI involvement.

I am implying that Cloudflare is publishing unusable one-off software without care because they have done it before and the blog post indicates that they are doing it again ("look how CHEAP it is to pump out code now").

I don’t need a proof of work, I need a proof of quality, and the blog post is the opposite of that.


This feels like a great example of a project that wouldn't exist if not for AI coding.

I am not Nick, but there are a few ways that world happens: the free tier goes away and what people pay for more correctly reflects what they use, this all becomes cheap enough that it doesn't matter, or we come up with an end-to-end method of determining whether usage is triggered by a person.

Another way is to just do better isolation as a user. That's probably your best shot without hoping these companies change policies.


This is so disingenuous. You literally clipped the full sentence that changes the context significantly.

> "Once I’ve proven to myself that rendering was feasible, I used Claude to create an approximate version of the game loop in JavaScript based on the original DOOM source, which to me is the least interesting part of the project"

This post is about whether you can render Doom in CSS not whether Claude can replicate Doom gameplay. I doubt the author even bothered to give the game loop much QA.


> I've always said this but AI will win a fields medal before being able to manage a McDonald's.

I love this and have a corollary saying: the last job to be automated will be QA.

This wave of technology has triggered more discussion about the types of knowledge work that exist than any other, and I think we will be sharper for it.


The ownership class will be sharper. They will know how to exploit capital and turn it into more capital with vastly increased efficiency. Everybody else will be hosed.

I'm not sure people will be more hosed than before. Historically, what makes people with capital able to turn things into more capital is their ability to buy someone's time and labor. Knowledge labor is becoming cheaper, easier, and more accessible. That changes the calculus for what is valuable, but not the mechanisms.

> Historically, what makes people with capital able to turn things into more capital is its ability to buy someone's time and labor.

You forgot to include resources:

What makes people with capital able to turn things into more capital is their ability to buy labor and resources. If people with more capital can generate capital faster than people with less capital, then (unless they are constrained, for example, by law or conscience) the people with the most capital will eventually own effectively all scarce resources, such as land. And that's likely to be a problem for everyone else.


Fair, though I don’t see how AI is really changing the equation here

AI doesn't change the equation; it makes the equation more brutal for people who don't have capital.

If you don't have capital, the only way to get it is by trading resources or labor for it. Most poor people don't have resources, but they do have the ability to do labor that's valued. But AI is a substitute for labor. And as AI gets better, the value of many kinds of labor will go towards zero.

If it was hard for poor people to escape poverty in the past, it's going to be even harder with AI. Unless we change something about the structure of society to ensure that the benefits of AI are shared with poor people.


Ok, I'm following you. You're saying because labor gets cheaper it will be harder to make a living providing labor. Not disagreeing, but I wonder how much weight to give this argument. History shows a precedent of productivity revolutions changing the workforce, but not eliminating it, and lifting the quality of life of the population overall (though it does also create problems). Mixed bag with the arc bending towards betterment for all. You could argue that this moment is unprecedented in history, but unless the human spirit changes, for better or worse, we will adapt as we always have, rich and poor alike.

If the value of many kinds of labor goes towards zero, those benefits also go to the poor. ChatGPT has a free tier. The method of escaping poverty will still be the same: grow yourself; provide value to your community.


Entire classes of workers have been put in the poorhouse on a near-permanent basis due to technological changes, many times during the past two centuries of industrial civilization. Without systemic structural changes to support the workforce, this will happen/is already happening with AI.

There is a fundamental problem with this thinking: you are making an assumption about scale. There is the apocryphal quote, "I think there is a world market for maybe five computers."

You have to believe that LLM scaling (down) is impossible or will never happen. I assure you that this is not the case.


But what if we succeed in gamifying the latent knowledge in LLMs to upload it to our human brains, by some kind of speed/reaction game?

Already enough comments about base rate fallacy, so instead I'll say I'm worried for the future of GitHub.

Its business is underpinned by pre-AI assumptions about usage that, based on its recent instability, I suspect are being invalidated by surges in AI-produced code and commits.

I'm worried, at some point, they'll be forced to take an unpopular stance and either restrict free usage tiers or restrict AI somehow. I'm unsure how they'll evolve.


Having managed GitHub enterprises for thousands of developers who will ping you at the first sign of instability, I can tell you there has not been one year pre-AI where GitHub was fully "stable" for a month or maybe even a week, and except for that one time with CocoaPods, that downtime has always been their own doing.

In a (possibly near) future where most new code is generated by AI bots, the code itself becomes incidental/commoditized and is nothing more than an intermediate representation (IR) of whatever solution it was prompt-engineered to produce. The value will come from the proposals, reviews, and specifications that caused that code to be produced.

GitHub is still code-centric, with issues and discussions being auxiliary/supporting features around the code. At some point those will become the frontline features, and the code will become secondary.


I'm definitely not an AI skeptic and I use it constantly for coding, but I don't think we are approaching this future at all without a new technological revolution.

Specifications accurate enough to describe the exact behaviors are basically equivalent to code, including in terms of length, so you're basically just changing languages (and current LLM tech is not on course to be able to handle such big specifications).

Higher-level specifications (the ones that make sense) leave some details and assumptions to the implementation, so you cannot safely ignore the implementation itself and you cannot recreate it easily (each LLM build could change the details and the little assumptions).

So yeah, while I agree that documentation and specifications are more and more important in the AI world, I don't see the path to the conclusions you are drawing


This is exactly what people said about the "low code revolution".

Not saying that you are wrong, necessarily. But I think it's still a pretty broad presumption.


I think you're directionally correct, but this stuff still has to live somewhere, whether the repo is code or prompts. GitHub is actually pretty well-positioned to evolve into whatever is next.

I don't think GitHub's product is at risk, but its business model might.


The instability is related to their Azure migration, isn't it? Cynically, you could say it hasn't been helped by the rolling RIFs at Microsoft.

I keep hearing this, and I know Azure has had some issues recently, but I rarely have an issue with Azure like I do with GitHub. I have close to 100 websites on Azure, running on .NET, mostly on Azure App Service (some on Windows 2016 VMs). These sites don't see the type of traffic or amount of features that GitHub has, but if we're talking about Azure being the issue, I'm wondering if I just don't see this because there aren't enough people dependent on these sites compared to GitHub?

Or instead, is it mistakes being made migrating to Azure, rather than Azure being the actual problem? Changing providers can be difficult, especially if you relied on any proprietary services from the old provider.


Running on Azure is not the same as migrating to Azure.

Making big changes, like to the tech that underpins your product, while still actively developing that product means a lot of things in a complicated system changing at once, which is usually a recipe for problems.

Incidentally, I think that is part of the current problem with AI-generated code. It's a fire hose of changes in systems that were never designed for, or are barely holding together at, their existing rate of change. AI is able to produce perfectly acceptable code at times, but the churn is high, and the more code, the more churn.


I agree with this. I have just seen a huge pile-on against Microsoft over Azure in regard to this GitHub migration. There are already plenty of legitimate reasons to be upset with Microsoft without needing to tackle Azure.

> It's a fire hose of changes in systems that were never designed or barely holding together

Yeah... my career hasn't been that long but I've only ever worked on one system that wasn't held together by duct-tape and a lot that were way more complicated than they needed to be.


Azure is fine, stability wise.

The assumption is it would be mistakes in their migration: edge cases that have to be handled differently either in the infrastructure code, config, or application services.


Does anyone actually know? So far I've just seen people guessing, and seeing that repeated.

I don't believe a sudden influx of a few million bots running 24/7, generating PRs and commits and invoking actions, does not impact GitHub.

It even sounds silly when you say it this way.


That is fair; in fact, I just came across their recent blog post on this. They're pointing to usage growth as the issue: https://github.blog/news-insights/company-news/addressing-gi...

Text is cheap to store and not a lot of people in the world write code. Compare it, for example, to email or something like iCloud.

Also I would guess there would be copy-on-write and other such optimizations at Github. It's unlikely that when you fork a repo, somewhere on a disk the entire .git is being copied (but even if it was, it's not that expensive).
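A toy sketch of why forks can be nearly free under content-addressed storage (this models the general idea only; names and details here are illustrative, and GitHub's actual storage layer is not public):

```python
# Toy content-addressed object store: objects are keyed by their hash,
# so a "fork" that adds nothing new shares every object with upstream
# and only needs to copy a small list of keys.
import hashlib

store = {}  # hash -> content, shared by every "repo"

def put(blob):
    key = hashlib.sha1(blob).hexdigest()
    store[key] = blob  # identical content always lands on the same key
    return key

upstream = [put(b"readme text"), put(b"int main() { return 0; }")]
fork = list(upstream)  # forking copies only the tiny key list

print(len(store))  # 2 -- two objects total on "disk", not four
```

Real git offers a related mechanism (shared object stores via alternates), which is one reason hosted forks don't have to duplicate the whole `.git`.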


That doesn’t make sense. Commits are all text. If YouTube can easily handle 4PB of uploads a day with essentially one large data center that can handle that much daily traffic for the next 20 years, GitHub should have no problems whatsoever.

My friend and I are usually pretty good at ballparking things of this nature, i.e. "approximately how much textual data is GitHub storing?" I immediately put an upper bound of a petabyte; there's absolutely no way that GitHub has a petabyte of text.

Assuming just text, deduplication, and not being dumb about storage patterns, our range is 40-100TB, and that's probably too high by 10x. 100TB means that the average repo is 100KB, too.

Nearly every arcade machine and pre-2002 console is available as a software "spin" that's <20TB.

How big was "every song on spotify"? 400TB?

The Eye is somewhere between a quarter and half a petabyte.

Wikipedia is ~100GB. It may be more now; I haven't checked. But the raw DB with everything you need to display the text contained in Wikipedia is 50-100GB, and most of that is the markup, i.e. information for the computer rather than information for us.

Common Crawl, with over 1.97 billion web pages in their archive: 345TB.

We do not believe this has anything to do with the "queries per second" or "writes per second" on the platform. Ballpark, GitHub probably smooths out to around ten thousand queries per second, median. I'd have guessed less, but then again I worked on a photography website database one time that was handling 4000 QPS all day long between two servers, 15 years ago.

P.S. Just for fun I searched GitHub for `#!/bin/bash` and it returned 15.3 million code results; assume you replace just that with 2 bytes instead of 12 and you save ~153MB on disk. That's compression; but how many files are duplicated? I don't mean forks with no action, but different projects. Also, I don't care to discern the median bash-script byte length on GitHub, but ballparked to a mean of 1,000 chars/bytes, that's ~15GB on disk for just bash scripts :-)

I have ~593 .sh files that everything.exe can see: 322 are 1KB or less, 100 are 1-2KB, 133 are 2-10KB, and the rest (38) are >11KB. Of the 1KB ones, a random sample shows they're clustering such that the mean is ~500B.
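Running the back-of-envelope numbers above (all inputs are the commenter's rough estimates from a GitHub code search, not measured values):

```python
# Back-of-envelope arithmetic using the rough inputs from the comment.
bash_files = 15_300_000    # files matching "#!/bin/bash" in GitHub code search
saved_per_file = 12 - 2    # bytes saved swapping the 12-byte shebang for a 2-byte token

shebang_savings_mb = bash_files * saved_per_file / 1e6
print(shebang_savings_mb, "MB saved")  # 153.0 MB saved

mean_script_bytes = 1_000  # assumed mean bash-script size
total_gb = bash_files * mean_script_bytes / 1e9
print(total_gb, "GB of bash scripts")  # 15.3 GB of bash scripts
```

Either way you slice it, plain-text source is a rounding error next to the petabytes of artifacts and binaries a host like GitHub carries.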


Veracity unconfirmed, but this article asserts that until they did some cleanup they were storing 19 petabytes.

https://newsletter.betterstack.com/p/how-github-reduced-repo...

maybe sourced from this tweet?

https://x.com/github/status/1569852682239623173

Edit: though maybe that data doesn't count as your "just text" data.


Yeah, I assume all the artifacts[0] and binaries greatly inflate that. I have no idea how git works under the hood as it is implemented at GitHub, so I can't comment on potential reasons there.

Is there some command a git administrator can issue to see granular statistics, or is "du -sh" the best we can get?

0: I'm assuming a site-rip that only fetches the files equivalent to clicking the "zip download" button: not the releases, not the wikis, images, workers, gists, etc.


I don't think the issue at hand is a technical challenge. It's merely a sign, imo, that usage has surged due to AI. To your point, this is a solvable scaling problem.

My worry is for the business and how they structure pricing. GitHub is able to provide the free services they do because at some point they did the math on what a typical free-tier user does before they grow into a paid user. They even did the math on what paid users do, so they know they'll still make money when charging whatever amount.

My hunch is AI is a multiplier on usage numbers, which increases OpEx, which means it's eating into GH's assumptions on margin. They will either need to accept a smaller margin, find other ways to shrink OpEx, or restructure their SKUs. The Spotifies and YouTubes of the world hosting other media formats have it harder than them, but they are able to offset the cost of operation by running ads. Can you imagine having to watch a 20 second ad before you can push?


> Common Crawl, with over 1.97 billion web pages in their archive: 345TB.

Common Crawl is 300 billion webpages and 10 petabytes. I suppose your number is from 1 of our 122 crawls.


Oh, I didn't see that the 1.97 billion pages were crawled in an 11-day period earlier this month. Either way, nearly 2,000,000,000 pages fit in about a third of a petabyte...

P.S. Thanks for correcting me; I was using this information for something else, and now it's correct!


I think the instability is mostly due to the CEO running away at the same time as a forced Azure migration where the VP of engineering ran away. There’s only so much stability you can expect from a ship that’s missing 2 captains.

I mean, the fish rots from the head, but at the end of the day that rot translates into an engineering culture that doesn't value craftsmanship and quality. Every GitHub product I've used reeks of sloppiness and poor architecture.

That's not to say they don't have people who can build good things. They built the standard for code distribution after all. But you can't help but recognize so much of it is duct taped together to ship instead of crafted and architected with intent behind major decisions that allow the small shit to just work. If you've ever worked on a similar project that evolved that way, you know the feeling.


This.

But also, GitHub profiles and repos were at one point a window into specific developers - like a social site for coders. Now it's suffering from the same problem that social media sites suffer from - AI-slop and unreliable signals about developers. Maybe that doesn't matter so much if writing code isn't as valuable anymore.


After Microsoft acquired it, they greatly expanded the free-tier allowances, and they still seem happy to dump money into it.

Counterpoint: AI coding without GitHub is like performing a stunt where you set yourself on fire, but without a fire crew to extinguish the flames.

> worried for the future of GitHub

Oh no, who would think about the big corporations? How is Micro$lop going to survive? /s


Fuck GitHub. It's a corporate attempt at owning git by sprinkling socials on top. I hope it fails.

If you need to host git + a nice GUI (as opposed to needing to promote your shit), Forgejo is free software.


The true value prop of GitHub isn't "hosted git + nice GUI"; it is the whole ecosystem of contributors, forks, and PRs. You don't get that by hosting your own forge.

Also, I wouldn't say GitHub is a corporate attempt to own git... GitHub is a huge part of why Git is as popular as it is these days, and GitHub started as a small startup.

Of course, you can absolutely say Microsoft bought GitHub in an attempt to own git, but I think you are really underselling the value of the community parts of GitHub.


Or they'll just keep forcing policies that let them steal the code you post on GitHub (for their AI training), and make everyone leave that way.

This feels like the only sane response. It's undoubtedly a useful idea for the mechanic. How it performs and if it can improve remains to be seen.

This is such a rorschach test for AI pessimism and optimism.


I've been using gstack for the last few days, and will probably keep it in my skill toolkit. There's a lot of things I like. It maps closely to skills I've made for myself.

First, I appreciate how he implemented auto-update. Not sure if that pattern is original, but I've been solving it in a different-but-worse way for a similar project. NOT a fan of how it's being used to present articles on Garry's List. I like the site, but that's a totally different lane.

The skills are great for upleveling plans. Claude in particular has a way of generating plans with huge blind spots. I've learned to pay close attention to plans to avoid getting burned, and the plan skills do a fair job at helping catch gaps so I don't have to ralph-wiggum later. I don't find the CEO skill terribly effective, but I do like the role it plays at finding delighters for features. This is also where I think my original prompting tends to be strong, which could be why it doesn't appear to have a huge impact like the other skills.

I think the design skills are great and I like the direction they're going. DESIGN.md needs to become a standard practice. I think it's done a great job at helping with design consistency and building UIs that don't feel like slop. This general approach will probably challenge lots of design-focused coding tools.

The approach to using the browser is superior to Claude's built-in extension in pretty much every way (except cookie management). It's worth it for that alone.

For people who don't understand this...think of each skill like a phase of the SDLC. The actual content, over time, will probably become bespoke to how your team builds software, but the steps themselves are all pretty much the same. All of this is still early days, so YMMV using these specific skills, but I like the philosophy.


Your answer reads like a weird mix of AI slop and astroturfing.

I took the time to read through your most recent posts, and it tracks with your attitude towards slop in general.


Is there something you don't like about the substance of my comments? Or is this just name calling? Is this not Hacker News? Aren't AI dev stacks supposed to be interesting to developers?

Say what you want about my comments, but at least I'm within bounds of comment guidelines: https://news.ycombinator.com/newsguidelines.html


I find that I don't have a lot of sympathy for people angry at this type of behavior, even though I share the disdain for someone else's AI output. The people doing this kind of thing are not the kind of people to be reading this manifesto. We've been creating bait content for a long time, and humans have never been given the tools to manage this in any sophisticated fashion. The internet was not a bastion of high quality content or discourse pre-AI. We need better tools as content consumers to filter content. Ironically, AI is what may actually make this possible.

I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.

I suspect the endgame of this is probably the fulfillment of Dead Internet Theory, where it's just AI creating content and AI browsing the internet for content, and users will never engage with it directly. That person who spent 10 seconds getting AI to write something will be consumed by AI as well, only to be surfaced to you when you ask the AI to summon and summarize.

And if that fills people with horror at the inefficiency of it all, well, like I said, it isn't like the internet was a bastion of efficiency before. We smiled and laughed for years that all of this technology and power is just being used to share cat videos.


> I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral.

Isn't it obvious? If I'd wanted to see AI response to my question, I'd ask it myself (maybe I already did). If I'm asking humans, I want to see human responses. I eat fast-food sometimes, but if I was served a Big Mac at a sit down restaurant I'd be properly upset.


> If I'm asking humans, I want to see human responses

I find this fascinating, honestly. It shouldn't matter as long as it addresses your ask, yet it does. I also wish I could filter social media on "it's not X. It's Y"

Because it's probably not actually about the content but the sense of connection. People want to feel like they're connecting to people. That they're worthy of someone else's time and attention.

And if that's what people are seeking, slack and social media are probably not the platforms for it (and, arguably, never were).


> It shouldn't matter as long as it addresses your ask, yet it does.

If the LLM output is concise and efficient I don’t actually care that it’s LLM output.

My problem is that much of the LLM prose feels like someone took their half-baked idea and asked the LLM to put a veneer of quality writing on top of it. Then you waste your time reading it to parse out the half-baked idea hiding among the wall of text.


Yes exactly

If a person has a shitty idea that sounds good, they start writing about it. If they exercise some care in their writing, the act of writing itself is enough to make them realize that their idea is shitty.

By the way, it happens to me all the time! Even just on HN, I’ve bailed halfway through writing a comment because I realized that I didn’t know what I was talking about, lol.

But an LLM will gladly take that shitty idea and expand it into a very plausible article/message/post, that seems reasonable if you don’t think very critically about it. And it’ll be done with such a high-seeming level of care that any human author would’ve been fact checking themselves the whole time.

So it forces the reader to think even more critically, rather than letting our subconscious try to judge authenticity of the writer through the language they use.

For example, someone who says "my WiFi is broken" when referring to the fact that their computer is dead, we can quickly judge as "not an expert at computers". But if they say "my M.2 drive has gone bad", we inherently assume they have some understanding. When the first person uses LLMs to write, they sound as informed as the second person, even if they are completely clueless and wrong.


In my case, it's because it doesn't address my ask, which is why I didn't ask an ai in the first place. The only person I know who does sloppypasta is my brother in law. I know he means well, but when I ask his opinion I want the perspective of someone in his demographic. If a generic ai response met my needs, I wouldn't be asking him.


> It shouldn't matter as long as it addresses your ask

But it doesn't? I'm more than capable of using Google and chatgpt myself. If I was looking for a machine generated answer to my question I would have already found it myself and never made the post in the first place. If I went to the effort of posting the question, it means that either the slop answer is not sufficient for some reason or that I want to hear from actual humans that have subjective experiences that an LLM cannot.

Posting an AI response verbatim basically says "I think you're too stupid to click a couple of buttons, so let me show you how it's done". I think it's very reasonable to get upset at the implication.


As an example of this, I am currently comparing two different models of Android e-readers from a Chinese brand where the tech specs are all published but there aren't a lot of good comparative reviews. Plus, specs like battery capacity are close to the same mAh, but for e-readers especially, Android optimization/drivers/etc. make a gigantic difference.

So I have been Googling for "Reader X vs Reader Y review"(/comparison/etc) hoping to find Reddit comments or non-spam blog posts from people who actually own both to compare screen and battery life. I found a reddit thread comparing them directly and lo and behold the first comment is someone saying "I own both but honestly you could just ask ChatGPT for this". Fortunately a couple other people responded...

When I ask Gemini or ChatGPT, all I get is regurgitation of the tech specs (that are all mostly identical) plus summarized SEO spam reviews (that were probably written by another LLM based on those same tech specs) and it's totally unhelpful. So for this, I absolutely do NOT want an OpenClaw bot to respond as if they've physically used the devices and it would be actively enraging to learn a "helpful" comment "answering" the question was actually just an LLM impersonator.


I think it is reasonable, yes, but I don’t think it’s ever been reasonable to expect reasonableness on the internet. We have a difficult enough time showing each other decency.


Then why even have this discussion in the first place? You weren’t expecting any reasonable responses to it, after all.


Do you only do stuff where you expect the outcome to be good?

Perhaps they did it for the off chance of a good response.


But then it'd be false that the internet is the wrong place. This person's account is from 2011, which shows they don't believe what they claim.

I think they are playing devil's advocate in the most irritating way. Good things, and good venues, are worth preserving against enshittification.

Let's not blandly accept this. Fight against it, even if it's a losing battle.

"It was always like this" is false, anyway.


> I think it is reasonable, yes, but I don’t think it’s ever been reasonable to expect reasonableness on the internet. We have a difficult enough time showing each other decency.

This is a disingenuous answer. You don't truly believe this. How do I know? Because you're having this conversation here and not with ChatGPT. So you do think the internet is reasonable enough to engage in this conversation.

Also, Sturgeon's Law applies. "The Internet" is as reasonable as humans are. Of course 90% of it is going to be garbage, but that's no reason to discard the other 10%.

I know you're playing devil's advocate. I wish people stopped doing this, and instead acknowledged human connection is worthwhile.


I'm purposely talking to a person and not a chatbot.

So it does not meet the bare minimum of addressing my ask, the premise of the ask hinges on a discussion with a real person.


I think it should matter. When you ask the AI something you are in a frame of mind, you have a specific context, the question also holds value and context that might completely change the parsing of the answer or at least the difficulty of it.

What I'm asking and the response from AI through an intermediary lose some context (the prompt). It's like the telephone game, where the data becomes more and more distorted; that's why people don't have an issue with their own AI-generated answers.

Another issue is that when I'm talking with someone and parsing through what they've said I'm considering them, as a person, taking all available context (some of this might happen unconsciously).

In any case I don't think there is an easy solution to the problem.


> shouldn't matter as long as it addresses your ask, yet it does. I also wish I could filter social media on "it's not X. It's Y”

The people copy-pasting slop almost never excerpt the relevant response. As a result, you get non-concise text you have to triple check. This is functionally useless to the point of being fine to skip.


Exactly. If you can find the answer for someone with AI, then by all means use it. But at least filter, curate, and verify it into an answer.


We can tell by your fury that you’re a slop poster.

I don’t want a random person’s use of an AI to be slopped at me. I don’t know what they asked it, a lot of the words are made up, and I have to go through the effort of decoding it.

If I wanted an AI answer I would ask an AI. AI slop is made up. It’s like handing me a paste of google search results. It’s creating work for me.


I agree with you. No one wants this.

But the internet has had slop long before AI. It's in the same class as clickbait. AI just made it worse and given the slop a distinct flavor. You can be furious about this if you want, but to me this seems like a waste of energy, which is the whole point of my original post.

We need better tools for managing our attention. Perhaps the effort of decoding it can be offloaded to AI.

Is that lame? Yes, but at least it's somewhat effective.


> AI just made it worse

Understatement of the century.


> People want to feel like they're connecting to people. That they're being worthy of someone's else's time and attention

They are achieving the exact opposite. I don't connect with the person who sends me slop. And they send me content that is a waste of my time and attention, because I have to vet it. Why would I trust someone - how can I ever connect with them - when the only thing I know about them is they take shortcuts?


>I find this fascinating, honestly. It shouldn't matter as long as it addresses your ask, yet it does. I also wish I could filter social media on "it's not X. It's Y". Because it's probably not actually about the content but the sense of connection.

It's also about the content. Generic slop I can get on demand from an LLM myself, vs a novel insight.


I am really into this approach of AI being used as a user-agent.

In particular, I've been thinking a lot about educational content, and what I'd love to ask educational providers for is not AI-generated content, but rather carefully human-built curricula offered in a structured manner, which my own AI could then use to create dynamic content for me.


> The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.

Reading AI generated prose, even if it’s my prompt, always gives me the same feeling as when I read a LinkedIn post: Like a simple concept was stretched into an unnecessarily long, formulaic format to trick the reader into thinking it was more than it was.

Everyone taking their scraps of thoughts and putting them into an LLM likes it because the output agrees with them. It’s flattering. But other people don’t like it because we have to read walls of text to absorb what should have been a couple of their scattered bullet points.

Just give me the bullet points. Don’t run it through the LLM expander. That just wastes my time.


Everybody wants to use LLMs to produce things and absolutely nobody wants to consume the things that LLMs produce and this is the fundamental reason this is all going to collapse unless we find a way for producers to pay consumers to consume their LLM output.


Gotta disagree. I've found several great new YouTube channels that clearly use AI for everything but the script writing. I assume it's an enthusiastic and smart niche expert who lacks the charisma to make videos in addition to doing the research. I'm very glad AI is filling in those people's weak spots.


How would you know it’s an enthusiastic and smart expert creating the content you’re consuming, do you have the subject matter expertise to judge that?

The odds are far higher it’s somebody who knows very little about anything but wants to make money from the gullible.


> Gotta disagree. I've found several great new YouTube channels that clearly use ai for everything but the script writing. I assume it's an enthusiastic and smart niche expert who lacks the charisma to make videos in addition to doing the research. In very glad ai is filling in those people's weak spots.

But why are you glad? There's no intrinsic right to be popular on YouTube, or to be successful. There's also no lack of YouTubers making interesting videos. Why not let a "natural selection" of sorts weed out these people who lack the charisma or whatever to make videos without AI?


How do you know the scripts aren't AI generated?


Brevity mostly. I'm sure they are partially ai generated, just not in a way that detracts from my enjoyment.


What are these youtube channels, care to share their names?


John ag and that survivalist raccoon were the two I had in mind.


hmm, those are definitely AI made. There's one channel I watch periodically that's hard to tell:

https://www.youtube.com/@MedievalWay

Here's a sample video.

https://www.youtube.com/watch?v=wKdKr4VONc4

I suspect the voice is AI, the thumbnails definitely are, but unsure of the rest of the content. It's really interesting and the content in the video seems legit (looked up the "forgotten vegetables" and was surprised to find that we did forgo a lot of tasty things to satisfy the machine harvesters).

Guessing it's likely a phd student with too much time on their hand.


Great case in point. I think the whole "forgotten/banned garden plant" genre appeared post ai. Probably too difficult to get together enough visuals for a full length video on an obscure plant before.


One person's slop is another person's treasure, I guess. I've seen a lot of slop on Youtube, and I block the channels putting it out. It's pretty awful. They use AI narration that can't pronounce simple common phrases correctly. I'm not wasting my time with that garbage, I'd rather give views to actual people producing good content. I don't have time for slop.


>I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.

The problem is that getting an AI to answer a question is trivial. If I wanted to know what an AI has to say about the topic, I would just ask myself. Sending AI output has, as the author writes, the same connotation as sending a LMGTFY link. It does not provide me any value at all, I know how to write a question to an AI, just as I know how to use Google.


I'm starting to realise I might have completely misunderstood the whole lmgtfy thing. I thought it was a semi-rude way to call someone out for asking lazy questions instead of trying to find the answer themselves.


No, you completely had it correct. And sending an AI response to a question is the same semi-rude way to respond.

The context here is that the person logging the ticket (or asking the original question by using AI to do it) is the one who is ALSO being a lazy piece of shit, and deserves an equally lazy, useless response in the form of a LMGTFY or AI response, because they were too lazy to actually think about their original query and spend the time to craft a succinct but useful ticket/query.


I am sorry, but in what way is everyone letting the "We've been creating bait content for a long time" comment slide?

Did you even read the article? It is about person-to-person interactions. The three examples were:

* Someone butting in to an ongoing discussion with a solution (but it's generic and misfitting AIslop)

* Someone being asked for their expertise and responding (but it's generic and misfitting AIslop)

* Someone comes with a problem thesis looking for help (but it's generic and misfitting AIslop)

The only one of these that existed prior to AI was the middle one, and the article very specifically calls out how transparent it used to be, because it had the shape of a google link.

The first one would be impossible, because the person would have to write an unhelpful response at length by hand, and they wouldn't find the words. You could ignore them or pick it apart easily. The last one would be impossible unless they were copy-pasting from a large PDF, which would look nothing like a chat message.

What kind of workplace hellscape do you work in where people posting low-effort bait on Slack was the norm? The premise of this reply is entirely nonsensical.


>I find that I don't have a lot of sympathy for people angry at this type of behavior, even though I share the disdain for someone else's AI output. The people doing this kind of thing are not the kind of people to be reading this manifesto. We've been creating bait content for a long time, and humans have never been given the tools to manage this in any sophisticated fashion. The internet was not a bastion of high quality content or discourse pre-AI.

Which is irrelevant. TFA is talking about personal communication (and the examples are from a business setting).

And their concern is not the mere quality or lack thereof, but also its origin, and this is something new.

>I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.

No, many of us hate "our AI" content too, and wouldn't impose it on other people, the same way we wouldn't fling shit at them.


> We smiled and laughed for years that all of this technology and power is just being used to share cat videos.

Well, cat videos make people happy.


This Firefox extension replaces Daily Mail pages by pictures of kittens https://addons.mozilla.org/en-US/firefox/addon/kitten-block/


Touche


I find your comment disingenuous at best.

> The internet was not a bastion of high quality content or discourse pre-AI.

I have read thousands upon thousands of pages of AI-related discourse, watched hundreds of videos since 2022, maybe even a thousand now on it. NEVER at any point in time did people opine for the "high quality" internet of before. They opined for the imperfect HUMAN internet of before. We are now seeing once pristine, curated corners of the internet being infected with sloppypasta.

This is quite a broad brush to paint the internet with. It's like saying The Earth is not a bastion of warzones/peaceful places to live. That is HIGHLY dependent on location.


Sorry, not related to your point, but the language:

To "opine" is to give an opinion on something.

To "pine" for something is to wish for it, usually in a nostalgic sense.

I get how the two are related and can be confused, especially when you're talking about comments on the web. Just thought I'd clarify.


Even before AI, the human social internet was loaded with bots and disingenuous actors. You want the imperfect human internet that is also pristine and curated. I've been socializing on the internet since 1994, and I feel fairly confident in sharing that this never existed, except in nostalgia.

If that's what you're pining for, you're going to have to find a highly protected part of the internet that is walled off from untrusted actors. However, that's always been the solution, and AI doesn't change that.


And since the foundation of the internet, the correct response to bots and disingenuous actors has been to a) ignore them, b) ban them, and c) ostracize them. We're talking about basic behaviors that have been understood since Usenet, something you surely should be aware of since you grew up in that era.


I absolutely agree with this. We did not tell bot operators to "do better" like this manifesto is trying to do, which is my whole point.


I think the difference was that before all this, there would be additional information embedded in the way a person types, or the way they'd written their code, that you could use to build a larger picture of the situation.

Right now it's as if everyone started wearing digital face masks that replaced their facial expressions with "better" ones. Sure, maybe everyone's faces weren't perfect before, but their expressions contained useful information.


I don't think that "it's more of the same" is a good way to think about it. The internet contained a lot of low-quality content, but even low-quality content used to be fairly expensive and time-consuming to produce. Further, you could immediately discern bottom-of-the-barrel content-farmed nonsense by the writing style alone. Now, LLMs make it practically free to generate unlimited amounts of slop that drowns out human-written stuff, and they can imitate the style hints we used to depend on for quick screening.


Yet how are the alternative ways of thinking about it better? Spending your time angry about what others can do? In any era, that’s a poor life philosophy.

The problem is the same as it has always been: figure out how to use your time and attention effectively.


Is it possible to be critical without being angry? Are the only options here misplaced ire or total quiescent fatalism? Does the site here even seem excessively angry?


A sufficient number of people being angry about something is how you end up with social norms. These norms will shape how the technology is used.

Conversely, if your take is that there's no point being angry and we should just take it in stride, that just emboldens the producers of slop.


You're reading too much into my words if you think I'm suggesting we should take it in stride.

I think we should accept that trying to enforce social norms is a waste of time as that will only work on the politest part of the internet. Instead, focus on what you can control: better mechanisms for managing your attention and time.


Strategic, directed anger is an important component of using your time effectively. It sends a clear signal that certain kinds of behavior are unacceptable and people who'd like continued access to your time had best not engage in them. You shouldn't go around yelling at people every time you get a bit frustrated, but you should and I do express anger when someone signs their name to LLM-generated Slack responses.


> I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral.

Somehow nobody that replied to you mentioned this. The issue is reciprocity. If I spend two hours manually researching and using my expertise to reply to a ticket, then 10 minutes later I get a novel-length AI reply in response...I now have no respect for the person replying with AI, because they can't even be bothered to spend a few minutes and summarize their "findings" and I suspect they didn't even read what their AI wrote. Especially in a professional setting, where you were hired for your (supposed) skillset, not your prompting skills.

If I'm sending out AI content, then sure, give me AI content in return.


The biggest vendor I work with uses "AI" for all email communications. It's like they use it to sanitize and corporate-speakify their communications, and I really hate it. They can never communicate like a real human being in email. But when we have actual zoom calls they speak like real humans, but in email it becomes so robotic. It's frustrating to feel like I'm speaking with a robot.


I acknowledge that those likely to copypaste slop aren't likely to find this article themselves, but I built the page to be shared or guide discussions around etiquette like nohello.net or dontasktoask.com. IMO a common understanding of AI etiquette would provide social pressure to halt some of these behaviors.

I honestly don't mind someone else's AI as long as I can trust it/them. One problem I have with sloppypasta specifically is that it reads as raw LLM output and the user isn't transparent about how they worked with the AI or what they verified. "ChatGPT says" isn't enough; for me to avoid inheriting a verification burden, I'd also need to understand what they were prompting for, if they iterated with the AI, and if/what/how they validated.

(the other problem is that dumping a multi-paragraph response in the midst of a chat thread is just obnoxious, but that's true even if it's artisanal human-written text)


Couple of expressions from pre-AI culture: "RTFM", "Google is your friend". These were well-used because they are directed, pithy, abrasive.

(n)amow(?): (not) All my own work ?


Good point: RTFM and (wall of slop) are two ways of telling someone that responding to them is not worth your time that are both ruder and more time-consuming than simply saying nothing. Explaining the culture of RTFM, i.e. "if there was any way you could possibly have found the answer otherwise, you should never have asked the question" to non-tech friends usually results in disbelief.

But the slop-wall is even worse, as it wastes the questioner's time in figuring out that they're just getting slop. At least RTFM is efficient.


Clickable links for URLs mentioned in parent comment:

https://nohello.net

https://dontasktoask.com


Yes, I can replace the link to nohello in my automated responses now :)


I think you will find you will get farther by offloading this unpleasantness to an AI and open sourcing it rather than teaching etiquette to the internet, a place not known for its decency.


There’s a certain very satisfying force to turning something into a static website that you can point people at. The Internet equivalent of “don’t make me tap the sign”; especially in an era of AI-slop.


> I don't have a lot of sympathy for people angry at this type of behavior

I ignore it. But if that isn’t an option, this sort of writing can help you convince someone in power around you it’s okay to ignore it.


>like they're being hoodwinked somehow

Because they are. It would be like if I bought some trinket off aliexpress and told you I made it by hand just for you. You wouldn't mind if you bought it yourself, but the fact that I lied about it to make it seem like I care is deceptive and immoral.

Sending someone AI generated text without disclosing so is incredibly offensive. It says you don't care about wasting the receivers time and don't care about honesty either.


I agree. I think it's interesting that, even if AI handles the conversation effectively, we're still repulsed.

I'm curious what will happen once AI generated text gets good enough that people can't tell the difference. Will we just assume everything is AI and remain suspicious, or will we stop caring?

My hunch is we'll all retreat to places of the internet where we can feel sure we're talking to real people and there are chains of trust. For example, I spend most of my time on discord servers where people are real life friends or friends of friends, and increasingly assume "public" internet to be AI default, and therefore use our own AI to browse and summarize for us.


I like to think that a lot of the current internet is just going to die and we will return to more in-person interaction. And I think awareness of this is continuously rising. Terms like "chronically offline," talking about quitting social media, reducing phone use, etc., are hot right now. But I'm still yet to see awareness and talk convert to action. People are as addicted to social media as they ever have been. We just widely recognize it's bad, much like junk food and cigarettes, but keep doing it anyway.

What I'm fairly confident of is that people will not enjoy being deceived into talking to bots. We have seen this before: companies and customer service platforms have been using templated messages to imitate real human conversation for a while. When you load a website and the Intercom chat box pops up with a message that looks like a real person from the company is trying to talk to you, initially it might have worked, but very quickly you learn it's fake and tune it out.


Talking about bait, good job getting 42 responses on hacker news! Your opinions are controversial enough to draw out people who need to correct them, yet genuine enough to not be passed off as a troll and downvoted.


It's been pretty amusing seeing the total upvotes for my comment go up and down.

I wasn't expecting it to be so controversial. Reading and responding to many of the replies, I think many people are strawmanning me as being in support of AI slop.


Not the author, but did a LOT of research on this during my time at Disney while working on Disney+ prior to its launch.

This is, effectively, no different than a carousel of algorithm-recommended content. However, UX studies have found users reluctant to watch something recommended to them. It requires making an affirmative decision on time investment. Most people have the experience of a friend recommending a movie or book and still being reluctant to dive in.

The problem is very similar to dating apps, if you think about it. This is why Tinder's innovation on "swipe left/right" took off the way it did. In UX terms it's better to drop users into something and make the cognitive effort be choosing to get out of it rather than choosing to get into it. It's a big part of why TikTok works.

The reason this isn't more common in video apps has more to do with UX norms at this point. Another important thing I learned about streaming at Disney was that no one really cares how innovative the browsing experience is. They just want to watch Frozen. They're used to carousels now, and they're easy to program. This, I think, speaks more to your sensibilities.


Tuning into a channel in channel-surfing mode also lets you hop in mid-story, which is its own experience.


I guess it's really not for me though. First thing I do is turn autoplay off, and I'd refuse to use a service that doesn't give me that option. OTOH, I do sometimes find it fun to hunt for good stuff among the recommendations.

