Hacker News

Maybe a sidetrack, but I find it difficult to see the productivity boost in asking an LLM to move some files rather than just do it myself. Is this a common use case?


It could be that the author was trying to make the agent do something wrong and the move operation has potential for that

I'll do even more sidetracking and just state that the behaviour of "move" in Windows as described in the article seems absolutely insane.

Edit: the article links to the documentation for "move" and claims the behaviour above is described there. I looked through that page and cannot find any such description - my spider sense is tingling, though I do not know why.


Knowing how to do things is passé.

I'm just waiting for vibe prompting, where it's arranged for the computer to guess what will make you happy, and then prompt AI agents to do it, no thinking involved at all.


That was my thought. More keystrokes with less certain results.


I actually think the keystrokes are strictly fewer and the feedback loop is faster and more robust, but I'm curious to read different points of view.


Someone else in the thread mentioned they used an LLM to remove a test.

The amount of energy wasted on these banal tasks is mind-boggling. So extremely wasteful.

And with Meta and OpenAI building 5 GW AI data centers, it looks like the wastefulness will only grow.


If you ask the agent to move the files, it knows they were moved.

After that it can continue to refactor the code if some imports need to be modified.
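As a rough illustration of the kind of follow-up work meant here, the sketch below moves a module into a package and then rewrites imports across the project. The file names (utils.py, helpers/) and the regex-based rewrite are hypothetical - a real agent would do something more sophisticated - but it shows why a tool that performs the move can also finish the refactor:

```python
import re
import shutil
from pathlib import Path

# Hypothetical example: move utils.py into a new "helpers" package,
# then update imports in every remaining source file to match.
OLD_MODULE = "utils"          # was:  project/utils.py
NEW_MODULE = "helpers.utils"  # now:  project/helpers/utils.py

def move_and_refactor(root: Path) -> None:
    # Step 1: perform the move itself.
    (root / "helpers").mkdir(exist_ok=True)
    shutil.move(str(root / "utils.py"), str(root / "helpers" / "utils.py"))

    # Step 2: rewrite "import utils" / "from utils import ..." everywhere.
    pattern = re.compile(rf"\b(from|import)\s+{re.escape(OLD_MODULE)}\b")
    for src in root.rglob("*.py"):
        text = src.read_text()
        updated = pattern.sub(rf"\1 {NEW_MODULE}", text)
        if updated != text:
            src.write_text(updated)
```

A naive text substitution like this can misfire (strings, comments, partially matching names), which is exactly the part where an agent that already knows what it moved has an edge over a blind search-and-replace.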



