The real win here isn’t just performance, it’s convergence.
When ReadableStream behaves the same in the browser, Workers, and other runtimes, stream-based code becomes portable and predictable. That reduces subtle backpressure bugs and eliminates “works here but not there” edge cases.
Standardization at the streams layer is a big deal for building reliable streaming systems across environments.
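To make the portability point concrete, here's a minimal sketch of WHATWG-standard ReadableStream code that runs unchanged in browsers, Cloudflare Workers, Node.js 18+, and Deno. The helper names (`makeStream`, `readAll`) are just illustrative:

```javascript
// Produce a standard ReadableStream; this constructor API is the same
// across all runtimes that implement the WHATWG Streams spec.
function makeStream() {
  return new ReadableStream({
    start(controller) {
      controller.enqueue("hello ");
      controller.enqueue("streams");
      controller.close(); // signal end-of-stream
    },
  });
}

// Consuming via a reader is likewise identical everywhere:
// read() resolves chunk by chunk, honoring backpressure.
async function readAll(rs) {
  const reader = rs.getReader();
  let out = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    out += value;
  }
  return out;
}

readAll(makeStream()).then((text) => console.log(text)); // logs "hello streams"
```

The same producer/consumer pair works whether the stream came from `fetch(url).body`, a Worker's request body, or a hand-built source like this one, which is exactly the "write once, run anywhere" property standardization buys you.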
I’d like to add an obvious point that is often overlooked.
LLMs take on a huge portion of the work related to handling context, navigating documentation, and structuring thoughts. Today, it’s incredibly easy to start and develop almost any project. In the past, it was just as easy to get overwhelmed by the idea that you’d need a two-year course in Python (or whatever the field was) and end up doing nothing.
In that sense, LLMs help people overcome the initial barrier, a strong emotional hurdle, and make it much easier to engage in the process from the very beginning.
I think the name mruby kind of makes sense; we have MRI (Matz’s Ruby Interpreter), so there’s the leading “M”, and we have jruby too. We also have truffleruby, which goes a bit against that naming scheme... but we could call it truby. Nobody does that, but we could. And MRI could also be called c-ruby. These are not great names, though. Murby is also not a great name; it reminds me of Murphy from RoboCop, though.
It would be a good idea to put an eye-catching example of a hotel room under the article headline, like an image of a shower without a door, just for visual impact.
As for me, I’ve come across hotels where the shower is visible from the bedroom, separated only by a glass wall. Lol, that’s probably the next level.
I'd say nothing kills the web more than hiding the “reject all cookies” button and covering the whole page with a popup until you accept. So I think we’re safe for now.
Honestly, this approach feels like it adds a lot of unnecessary complexity. It introduces a custom serialization structure that can easily lead to subtle UI bugs and a nightmare of component state tracking. The author seems to be solving two issues at once: large payloads and stream-structured delivery. But the latter only really arises because of the former.
For small to medium JSON responses, this won't improve performance meaningfully. It’s hard to imagine this being faster or more reliable than simply redesigning the backend to separate out the heavy parts (like article bodies or large comment trees) and fetch them independently. Or better yet, just use a proper streaming response (like chunked HTTP or GraphQL @defer/@stream).
In practice, trying to progressively hydrate JSON this way may solve a niche problem while creating broader engineering headaches.
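As a hedged sketch of the simpler alternative mentioned above: instead of a custom progressive-JSON serialization, the server can stream newline-delimited JSON (NDJSON) and the client can parse each complete line as it arrives. The source here is simulated with a hand-built stream; in practice it would be `fetch(url).body` from a chunked HTTP response. All helper names are hypothetical:

```javascript
// Simulate a chunked NDJSON response body (one JSON object per line).
function ndjsonSource(lines) {
  const enc = new TextEncoder();
  return new ReadableStream({
    start(controller) {
      for (const l of lines) controller.enqueue(enc.encode(l + "\n"));
      controller.close();
    },
  });
}

// Incrementally decode bytes, split on newlines, and yield each
// complete object as soon as its line has fully arrived.
async function* parseNDJSON(stream) {
  const dec = new TextDecoder();
  const reader = stream.getReader();
  let buf = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buf += dec.decode(value, { stream: true });
    let nl;
    while ((nl = buf.indexOf("\n")) >= 0) {
      const line = buf.slice(0, nl);
      buf = buf.slice(nl + 1);
      if (line.trim()) yield JSON.parse(line); // each line is valid JSON
    }
  }
}

// Convenience wrapper: collect every streamed object into an array.
async function collect(stream) {
  const out = [];
  for await (const obj of parseNDJSON(stream)) out.push(obj);
  return out;
}
```

Each object is usable the moment its line lands, so the UI can render progressively without any bespoke hydration format or component-level state tracking.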