nnevatie's comments

Introduces a lot of lag on the mouse pointer.

Someone's going to be grabbed by the president?

He does that in NYC.

No, that’s the drop off location, not the pick up.

Everyone has values, until they get punched with a billion dollar check.

It’s a lot easier to keep your values when you aren’t literally waiting for and counting on the billion dollar check, though.

Now we’re just haggling over the price.

Almost every person has his price.

Which is okay. But don't pretend otherwise.

Exactly. Any company promoting its values has lost all credibility. It's just corporate lies until they find an investor. And I hate being treated like a child and lied to.

Just tell me that you're waiting for the money shot and then I can take you seriously. Otherwise just F O.


> Who's going to pay your bills when the $20k is gone after 3 months?

And who's going to maintain this turd the LLM pushed out? It's a cool one-shot sort of thing, but let's not pretend this is useful as a real compiler, or something any human would want to maintain.

One could keep improving the implementation by vibing more, but I think that just takes you further down the wrong rabbit hole.


I'd be very interested in seeing some statistics on what could be considered confidential material pasted into ChatGPT's chat interface.

I think the results would be pretty shocking, mostly because the integrations to source services are abject messes.


https://www.theregister.com/2025/10/07/gen_ai_shadow_it_secr...

"With 45 percent of enterprise employees now using generative AI tools, 77 percent of these AI users have been copying and pasting data into their chatbot queries, the LayerX study says. A bit more than a fifth (22 percent) of these copy and paste operations include PII/PCI."


No worse than MS Office on the web, then?


If you get the same speeds for C++ and Java, I'd like to point out that the C++ implementation is likely very sub-optimal.

This can obviously be true for toy problems, but tends not to generalize.


That's because when the failure becomes the context, the model can clearly express the intent of not falling for it again. However, when the original problem is the context, none of this obviousness applies.

Very typical, and it gives LLMs the annoying Captain Hindsight-like behaviour.


All aboard the soul tra…erhm…drain!


The format of the article comes across as AI-sloppy. Each section is filled with numbered lists and there are several AI-isms, such as the omnipresent "not-only-x-but-y".


Thanks for the feedback on the formatting. While I do use tools to help structure thoughts and edit for clarity (which might explain the lists and phrasing you noticed), the core technical analysis regarding the challenges of optical flow vs. spatiotemporal AI stems directly from our actual engineering work in building video restoration models. The goal was to make complex concepts digestible, but I appreciate the note on style. I hope the substance of the technical argument still comes through.


Looks more like trimetric projection, but cool nevertheless.

