No, within the same DC, network latency does not add that much. After all, EFS also manages ~600µs average latency.
It's really just S3 that's slow. I assume some large fraction of S3 is spread over HDDs, not SSDs.
Piece of free advice towards a better civilisation: people who didn't even read the comment they're replying to shouldn't be rewarded for their laziness.
I read his comment and still replied. I think his claim that nobody reads thinking blocks and that thinking blocks increase latency is nonsense. I am not going to figure out which settings I need to enable, because after reading this thread I cancelled my subscription and switched over to Codex; I had the exact same experience as many others in this thread.
Also, what is that "PR advice"? He might as well wear a suit. This is absolutely a nerd fight.
I tested because I was porting memories from Claude Code to Codex, so I might as well test. I obviously still have subscription days remaining.
There is another comment in this thread linking a GitHub issue that discusses this. The GitHub issue this whole HN submission is about even says that Anthropic hides thinking blocks.
I didn't use commands, only rules, memories, and skills. I asked Codex to read the rules and memories from where Claude Code stores them on the filesystem and merge them into `AGENTS.md`. This actually works better, because Anthropic prompts Claude Code to write each memory to a separate file: you end up with a main MEMORY.md that acts as a kind of directory, listing each individual memory with its file name and a brief description, in the hope that Claude Code will read them. The problem is that Claude Code never does. This is the same problem[0] that Vercel had with skills, I believe. Skills are easy to port because they appear to use the same format, so you can just do `mv ~/.claude/skills ~/.codex/skills` (or `.agents/skills`).
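For what it's worth, the mechanical part of the port can be sketched roughly like this. The directory and file names (`~/.claude`, `~/.codex`, MEMORY.md, a `memories/` subdirectory) are assumptions based on my setup; check where your installs actually keep things before running anything:

```shell
# Assumed default locations; override via env vars if yours differ.
CLAUDE_DIR="${CLAUDE_DIR:-$HOME/.claude}"
CODEX_DIR="${CODEX_DIR:-$HOME/.codex}"

mkdir -p "$CODEX_DIR"

# Fold the memory index plus every individual memory file into one
# AGENTS.md, so Codex sees the content directly instead of a directory
# listing it would have to follow (which, in my experience, it never does).
cat "$CLAUDE_DIR"/MEMORY.md "$CLAUDE_DIR"/memories/*.md \
  >> "$CODEX_DIR"/AGENTS.md 2>/dev/null || true

# Skills appear to use the same format, so moving the directory is enough.
if [ -d "$CLAUDE_DIR/skills" ]; then
  mv "$CLAUDE_DIR/skills" "$CODEX_DIR/skills"
fi
```

In practice I just asked Codex itself to do the reading and merging, which handles the "directory file pointing at other files" indirection better than a dumb concatenation.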
What I was pointing out in my comment about the PR advice is that someone responding from a corporation to customers should be providing information to help the customer, nothing more.
Customers may want to fight - you seem to be providing an example - but representatives shouldn't take the bait.
So far the studies point to study authors having a profound misunderstanding of what's happening. Which isn't surprising, since any study right now requires speculating about what's important and impactful in a new and fast-moving field. Very few people are good at that, and most of the ones who are good at it are not running studies.
There's another perspective on this, which is that the entire function of mature corporations is to codify what's needed to perform certain functions mechanistically, eliminating the need for expertise. Sure, you might have product development or R&D but they're not part of the daily customer-facing function of the corporation.
That's why when private equity buys a company, the first thing they often do is shut down any new product development or R&D. They want to run the machine and extract profit from what it does now - a cash cow - without taking risks on changing a working model. In this model, new product development is for startup ventures, not mature companies whose DNA doesn't tend to be a good fit for it anyway.
tl;dr: What you're describing is the system working as designed and intended. For better or worse.
This company and product line launched nearly 20 years ago (2007) and doesn’t seem to have changed much since. That’s quite a long time for something like this. If the owners had wanted the business to continue (perhaps they didn’t), some diversification could have achieved that relatively easily.
Your assumption here is that they wanted it to continue making money, rather than (for example) reacting to the influx of new orders from HA’s announcement by shutting it down. Perhaps a working source of revenue is being voluntarily terminated rather than having starved to death?
Contracts are negotiable at most companies once you get to a certain level. I negotiated my last contract, and I'm an IC, not an exec. In fact I made the non-disparagement clause mutual, among other things.
Agreed. The article bemoans the fact that AIs don’t need to work in the inefficient way that most humans prefer, getting micro-level feedback from IDEs and REPLs to reduce our mistake count as we go.
If you take a hard look at that workflow, it implies a high degree of incompetence on the part of humans: the reason we generally don’t write thousands of lines without any automated feedback is because our mistake rate is too high.
Aren't you comparing local in-process latency to network latency? That's multiple orders of magnitude right there.