petetnt's comments

Whoa, I think GPT-5.3 Instant was a disappointment, but GPT-5.4 is definitely the future!

Every other minute some bot is creating an issue that another bot then tries to solve via a pull request, which is reviewed by multiple bots. The future is now, good luck and have fun.

If you want a picture of the future, imagine a bot stamping LGTM! :sparkles: :rocket: on a pull request - forever.

That’s the conclusion you reach when you sit on the boards of 20 companies where all the CEOs are telling you the same thing, but you don’t understand that you are all just selling the same golden shovel to each other. Obviously this can also be backed by their own experience: 100% of code is written by AI, because the last time they actually wrote code was in 2010.


Whoa, I think Claude Sonnet 4.5 was a disappointment, but Claude Sonnet 4.6 is definitely the future!


The history back/forward navigation is broken when trying to browse the historical downtimes which probably says everything that is needed.


I agree with the general statement: if you didn’t spend time writing it, I am not going to spend time reading it. That includes situations where the writer strips all personality by letting AI format the end product. There’s irony, though, in not wanting to read AI content while still using it for code and especially for documentation, where the same principle should apply.


I find AI is great at documenting code. It's a description of what the code does and how to use it - all that matters is that it's correct and easy to read, which it almost certainly will be in my experience.


I have quite a different take on that. As much as most people view documentation as a chore, there is value in it.

See it as code review, reflection, getting a bird's-eye view.

When I document my code, I often stop in between and think: that implementation detail doesn't make sense, is overly convoluted, can be simplified, lacks a sanity check, etc.

There is also the art of subtly injecting humor into it, e.g. through code examples.


Documentation is needed for intent. For everything else you could just read the code. With well-written code, “what the code does and how to use it” should be clear.


> all that matters is that it's correct and easy to read

Absolutely disagree. A lot of the best docs I've read feel more personal, and have little extra touches like telling the reader which sections to skip or to spend more time in depending on what your background is.

Formatting and layout matters too. Docs sites with messy navigation and sidenotes all over the place might be "easy to read" if you can focus on only looking at one thing, but when you try to read the whole thing, you just get a bunch of extra noise that could've been left out.


Whoa, I think GPT-5.3-Codex was a disappointment, but GLM-5 is definitely the future!


I find 5.3 very impressive TBH. Bigger jump than Opus 4.6.

But this here is excellent value if they offer it as part of their subscription coding plan. Paying by token could really add up: I did about 20 minutes of work and it cost me $1.50 USD, and it's more expensive than Kimi 2.5.

Still 1/10th the cost of Opus 4.5 or Opus 4.6 when paying by the token.


The Pro and Max plans can use it. Pro has 1 concurrent session.


I’m a big fan of your work (just checked your post history.)

All I’ve got to add is that GLM-5 is actually just the team at Z.ai getting started. I’m really bullish on this.


> I think GPT-5.3-Codex was a disappointment

Care to elaborate more?


GitHub has had customer visible incidents large enough to warrant status page updates almost every day this year (https://www.githubstatus.com/history).

This should not be normal for any service, even at GitHub's size. There's a joke that your workday usually stops around 4pm, because that's when GitHub Actions goes down every day.

I wish someone inside the house cared to comment on why the services barely stay up and what actions they are planning to take to fix this issue, which has been going on for years but has definitely accelerated in the past year or so.


It's 100% because the number of operations happening on GitHub has likely 100x'd since the introduction of coding agents. They built GitHub for one kind of scale, and the problem is that they've suddenly found themselves facing a new kind of scale.

That doesn't normally happen to platforms of this size.


A major platform lift and shift does not help. They are always incredibly difficult.

There are probably tons of baked-in URLs or platform assumptions that are very easy to break during their core migration to Azure.


> A major platform lift and shift does not help.

ISTR that the lift-n-shift started like ... 3 years ago? That much of it was already shifted to Azure ... 2 years ago?

The only thing that changed in the last 1 year (if my above two assertions are correct (which they may not be)) is a much-publicised switch to AI-assisted coding.


How strange. We have that joke too in our office for Azure pipelines. Do they use the same agents perhaps?


Whoa, I think GPT-5.2-Codex was a disappointment, but GPT-5.3-Codex is definitely the future!


There’s nothing old internet about these AI companies. Old internet was about giving out and asking for nothing in return. These companies take everything and give back nothing, unless you are willing to pay that is.


I get the sentiment, but if you can't acknowledge that AI is useful and currently a lot better than search for a great many things, then it's hard to have a rational conversation.


why do they need to acknowledge something outside of the point they're trying to make?


Because it was a middlebrow dismissal of the GP


because that's how conversations work. anything less is sparkling debate.


how is it useful to be fed misleading nonsense?

