This is what I've been doing for a couple of years now: having AI help code and test projects from my long TODO list that I would never realistically have started, let alone completed. AI is now pretty capable of producing decent code if your specifications are decent.
I still think that non-programmers are going to have a tough time with vibe coding. Nuances and nomenclature in the language you are targeting, and programming design principles in general, help a lot in actually getting AI to build something useful.
A simple example is knowing to tell the AI that a window should be 'modal', or that null values should default to xyz.
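To make the "null values should default to xyz" part concrete, here's a minimal Python sketch of what that instruction amounts to in code (the field names and default values are invented for illustration):

```python
# Hypothetical config cleanup: replace missing or null (None) values
# with explicit defaults instead of letting None leak downstream.
DEFAULTS = {"timeout": 30, "retries": 3}

def normalize(config: dict) -> dict:
    return {key: config.get(key) if config.get(key) is not None else default
            for key, default in DEFAULTS.items()}

print(normalize({"timeout": None, "retries": 5}))
# {'timeout': 30, 'retries': 5}
```

Knowing to ask for this behavior explicitly, instead of discovering None crashes later, is exactly the kind of vocabulary advantage a programmer brings to the prompt.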
I have a similar history of AI use to yours, but every so often I simply describe what I want and try out what it creates. Honestly, the pace of improvement over these past two years has been stunning.
Yesterday I was inside one of the tools that a "just build it" prompt had created, and I asked it to use a NewType pattern for some of the internals.
It wasn't until I was in bed that I thought: why? If I'm never reading that code, and the agent doesn't benefit from it, why am I dragging my cognitive baggage into the codebase?
Would a future lay vibe coder care what a NewType pattern is? Why it helps? Who it helps?
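For anyone unfamiliar, the NewType pattern wraps a primitive in a distinctly named type so a type checker can catch accidental mix-ups, at essentially zero runtime cost. A minimal Python sketch (the identifiers here are invented for illustration):

```python
from typing import NewType

# UserId and OrderId are distinct types to the checker,
# but plain ints at runtime.
UserId = NewType("UserId", int)
OrderId = NewType("OrderId", int)

def load_user(user_id: UserId) -> str:
    # Hypothetical lookup; real code would hit a database.
    return f"user-{user_id}"

print(load_user(UserId(42)))  # user-42
# load_user(OrderId(42)) runs identically, but a checker like mypy
# flags it as a type error, which is the whole point of the pattern.
```

Whether that safety net still pays its way when only an agent ever reads the code is exactly the open question.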
I think the pedagogy of programming will change so that effective prompting will be more accessible.
I do think about this a lot. On the one hand, you might be right, and it may not matter at all. On the other, we often do that kind of stuff because it makes it harder to slip up by accident and/or makes the code easier to read and understand. These things are surely helpful to AI agents in the same way that they are useful to people?
I guess it depends on whether the extra time you could invest in that kind of thing pays back in terms of context windows, code quality or speed of AI code generation.
"Ok, here's a static bar on the top of the page." Except it disappears as you scroll, and now you can no longer click anything else on the page. Well, you did say "can't click away", and it did show up on top. As a 30-year coder who never did any UI, this is the shit I run into constantly. I can create awesomely cool and fast back-end stuff, and I can build better UIs than I've ever been able to, but not knowing the nomenclature trips me up constantly.
It's a massive accelerator for my dumb and small hobby projects. If they take too long I tend to give up and do something else.
Recently I designed and 3D printed a case for a Raspberry Pi, some encoders and buttons, and a touchscreen, just to control a 500 EUR audio effects pedal (Eventide H9).
The official Android app was one of the worst apps I've ever used. They even blocked paste in the login screen...
Few people have this FX box, and even fewer would need my custom controller for it; it's built for an audience of one. But thanks to LLMs it was not that big of a deal, and it let me concentrate on what was fun.
In the age of LLM-built side projects... what's the right venue for sharing these things with other people?
I feel like the expectations for a "Show HN" project are too high for passing around a silly little toy that I had the robot throw together. Product Hunt is for things that are actual products/businesses. So maybe you throw it in a targeted subreddit for a niche interest group?
Seems like there should be a marketplace for silly little side projects, but I'm not sure how you keep it from getting overrun.
Yeah, but that's not the same, as most readers will just skip over that. What I said is more similar to HN's monthly "who's hiring" threads or "what are you working on" threads. Like https://news.ycombinator.com/item?id=46937696. I find those much more interesting.
In theory, you write/vlog about the human side of making it, or lessons learned, or something else that people will find value in related to the thing you make. Over time, maybe a few people start to care.
Ironically, if people care about you, you can pretty much serve up hot buttered shit and get traction.
I think the problem with such places is, they just become a dump for self-promotion by people who otherwise don't participate at all. The opposite of an actual community. That's why even reddit used to have a 10-to-1 rule of thumb about posts like that (which would be very easily gamed today).
No login or signup is required so it's very easy to try out and quite fun to play with, which probably helped. I think the time people are willing to invest in something before getting some sort of reward is approaching sub-second territory.
I've always been unhappy with the way tasking/todo apps (don't) work for me. I just started building a TUI in Zig (with the help of Codex) to manage my daily tasks. And since I'm building it just for me, the scope is mine to determine too.
I have more than a few side projects that began as late-night discussions with an LLM. A couple of those projects reached a level of completion where I use the products daily, and one project reached production (a game you can find referenced on my profile).
I have had similar experiences to the author, and I’ve found that just working with a single agent in Antigravity (on the Gemini Pro subscription) is adequate. The extra perceived speed and power of multiple agents and/or Claude Code really didn’t match the output.
With a single Gemini agent (or sometimes switching to Claude Opus, which Google inexplicably provides a generous amount of for free via AG), I get incremental results so fast that I spend most of my time thinking about what I want (answering unplanned product questions or deciding how to handle edge cases).
In fact, sometimes I just get exhausted with so much decision making. However, that's what it takes to build something useful; we just aren't accustomed to iterating so fast!
> I don't think we'll ever manually write code again. It's just so much faster.
If velocity were the most important criterion, well, we could always have written tech debt faster; we just chose not to.
Unless the LLM/agent is carefully curated, it will produce tech debt faster than it can fix it.
For some products, that doesn't seem to be a problem: you just want to validate PMF (of course, you'll then have a new problem, which is that everyone with $20 to spare can do the same).
For others, a longer-lived product is preferable. We shall have to see how things shake out. My best guess is that we'll have more useless stuff that is free or close to free, and less useful stuff that is free or close to free.
I find this the least convincing argument ever. It's only a gotcha if you assume all or most of the people excited about one were excited about the other. Personally, I never met a real person who gave a shit about crypto, much less NFTs. But AI interest is everywhere; among people in my life it's roughly 50/50 between those who are uneasy with it and those who use it regularly.
I don't disagree about the monumental amounts of tech debt and risk being created. That's my hope for my own job and skills staying relevant going into the future. I do like playing with it and understanding it as a tool. But it is just a tool, not a machine god, and it's regularly fallible.
Actually, I did meet one guy who was somehow deep into NFTs when the Bored Ape collection took off, and he told me how much money he had (on paper). Then they were vaporized and he lost everything.
I was a doubter, but these things literally work 100x faster than you. They can one-shot 1kLOC across dozens of files in mere minutes while understanding the context.
You'll need to pay back a lot of those performance gains in reviewing the code, but the overall delta is a 2x speedup at minimum; I'd say it's closer to 4x. You can get a week's worth of work done in a day.
A human context-switches too much and cannot physically keep up with these models. We're at the chess takeoff moment: we're still good at reviewing and steering.
Right now I'm trying to get an AI (actually two: ChatGPT and Grok) to write me a simple HomeAssistant integration that blinks a virtual light on and off, driven by a random boolean virtual sensor. I just started using HomeAssistant and don't know it well. Two hours and a few iterations in, it still doesn't work. Winning.
HomeAssistant is probably doing too much for what you need. Imo it's not a good piece of software. https://nodered.org/ is maybe a better fit. Or just some plain old scripts.
I'm in the same camp. The last few months I've been building a couple of applications (editors) for my own work - and since it's so fast I've had Claude spin off to build Zig tools and libraries for markdown parsing, PDF generation, a Scheme implementation for embedding and more. (If anyone's interested they are at my Codeberg: https://codeberg.org/sicher)
> Sidenote: I wonder what's going to happen when the crazy money runs out and Anthropic, OpenAI & co have to start charging for more than it costs them to run the models. Hopefully by then the open source models will have caught up?
How brutal will the enshittification phase of these products be?
Will the 10x cost, or whatever it ends up being, be something that future employers have to pay, or will the impact be more visible for all of us? Assuming a no-AGI scenario here, where the investments have to be paid back with further subscription services like today's.
I really hope Open Source (Open Weights) keep up with the development, and that a continuation of Moore's Law (the bastardized performance per € version) makes local models increasingly accessible.
Is it proven that they serve the models at cost? Amodei has said that Anthropic's models make back their training cost; the reason they're so deep in the red is that they're investing substantially more in subsequent runs, and R&D dwarfs inference cost[1]. If the tech plateaus, I would expect to see a lot of that R&D spend move into just powering inference.
>The age of actually finishing side projects is here
This is a really good summary of how I've experienced AI, put into words. I'm not really sure how this can be monetized, though.
I'm not going to burn $200-1k per day on agents to work through some side projects that have been on the back burner. The only reason I'm doing it now is the heavily subsidized or free models available all over the place.
> Start with a conversation, and explore the problem space with the LLM. The idea here is to gather options and ideas. Once you have a clear vision of what you want to build, ask for a detailed specification. Iterate on the spec until you understand it fully and are happy with it.
Maybe it's just the specific language being used here, but I really hate talking to these things. They inject way too much personality (especially Claude), are still too sycophantic, and could lead you down a wrong path. I'd much rather just give them instructions.
I'm in agreement with the blog post. I've been treating AI more like a tool and less like a science experiment, and I've gotten some good results when working on my various side projects. In the past, much of my time was taken up by research and learning the various little parts of how everything works. What starts as a little Python project to play around with APIs ends with me spending 5 hours learning tkinter and barely making any API calls.
LLMs have finally freed me from the shackles of yak shaving. Some dumb inconsequential tooling thing doesn't work? Agent will take care of it in a background session and I can get back to building things I do care about.
I'm finding that in several kinds of projects ranging from spare-time amusements to serious work, LLMs have become useful to me by (1) engaging me in a conversation that elicits thoughts and ideas from me more quickly than I come up with them without the conversation, and (2) pointing me at where I can get answers to technical questions so that I get the research part of my work done more quickly.
Talking with other knowledgeable humans works just as well for the first thing, but suitable other humans are not as readily available all the time as an LLM, and suitably-chosen LLMs do a pretty good job of engaging whatever part of my brain or personality it is that is stimulated through conversation to think inventively.
For the second thing, LLMs can just answer most of the questions I ask, but I don't trust their answers for reasons that we all know very well, so instead I ask them to point me at technical sources as well, and that often gets me information more quickly than I would have gotten it by starting from a relatively uninformed Google search (though Google is getting better at doing the same job, too).