Considering the many hundreds of technical comments over at the PR (https://github.com/nodejs/node/pull/61478), the 8 reviewers thanked by name in the article, and the stellar reputations of those involved, it seems likely.
My mistake, 19k lines. At 2 mins per line that's (19000*2)/60/7 ≈ 90 seven-hour days to review it all. Are you sure it was all read? I mean, they couldn't be bothered to write it, so what are the chances they read it all?
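For what it's worth, the arithmetic holds up (taking the stated assumptions of 2 minutes per line and 7-hour working days at face value):

```javascript
// Back-of-envelope review-time estimate.
// Assumptions (from the comment above): 19k lines, 2 min/line, 7 h/day.
const lines = 19000;
const minutesPerLine = 2;
const hours = (lines * minutesPerLine) / 60; // ≈ 633 hours
const days = hours / 7;                      // ≈ 90.5 days
console.log(Math.round(days)); // 90
```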
For someone's personal website or a single business, maybe the risk is worth it; for a widely used software project that many others build on, it is horrifying to see that much plausible-looking code generated by an LLM.
I probably review about 1k LoC worth of PRs / day from my coworkers. It certainly doesn't take me 33 hours (!!) to do so, so I must be one of those rockstar 10x superhero ninja engineers I keep hearing about.
I think that goes back to whether they are programmers vs engineers.
Engineers will focus on professionalism of the end product, even if they used AI to generate most of the product.
And I'm not going by "title", but by mindset. Most of my fellow engineers are not engineers in that sense - they are just programmers - as in, they don't care about the non-coding part of the job at all.
Depends - if it is from a human I find I can trust it a lot more. If it is large blobs from LLMs I find it takes more effort. But it was just a guess at an average to give an estimate of the effort required. I’d hope they spent more than 2 mins on some more complex bits.
Are you genuinely confident in a framework project that lands 19kloc generated PRs in one go? I’d worry about hidden security footguns if nothing else and a lot of people use this for their apps. Thankfully I don't use it, but if I did I'd find this really troubling.
It also has security implications - if this is normalised in node.js, it would be very easy to slip deniable exploits into large PRs. It is IMO almost impossible to properly review a PR that big for security and correctness.
usually yes, but that's why there are tests, and there's a long road before people start depending on this code (if ever). people will try it, test it, report bugs, etc.
and it's not like super carefully written code is magically perfect. we know that djb can release things that are close to that, but almost nobody is like him at all!
Appreciate the reminder (the lack of SemVer has thrown me in the past). In this case, 6.0 is a bigger change than normal:
> TypeScript 6.0 arrives as a significant transition release, designed to prepare developers for TypeScript 7.0, the upcoming native port of the TypeScript compiler. While TypeScript 6.0 maintains full compatibility with your existing TypeScript knowledge and continues to be API compatible with TypeScript 5.9, this release introduces a number of breaking changes and deprecations that reflect the evolving JavaScript ecosystem and set the stage for TypeScript 7.0.
The main selling point for me is that it has proper data types for dates, times, etc.
Most date/time libraries that I've seen have only a single "date/time" or "timestamp" type, so they have to do things like representing "January 13 2026" as "January 13 2026 at midnight local time" or "January 13 2026 at midnight UTC."
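You can see the ambiguity directly with JavaScript's single built-in Date type: a date-only ISO string is specified to parse as UTC midnight, while the numeric constructor produces local midnight, so the "same" calendar date can denote two different instants depending on how you write it:

```javascript
// JavaScript's Date has one type for everything, so a bare date must be
// pinned to some instant. Per the spec, date-only ISO strings parse as
// UTC midnight; the numeric constructor uses local midnight.
const utcMidnight = new Date("2026-01-13");   // midnight UTC
const localMidnight = new Date(2026, 0, 13);  // midnight local time
console.log(utcMidnight.toISOString()); // "2026-01-13T00:00:00.000Z"
// Unless the runtime's time zone is UTC, these are different instants:
console.log(utcMidnight.getTime() === localMidnight.getTime());
```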
Temporal has full-fledged data types representing the different concepts: an Instant is a point in time. A PlainDate is just a date. A PlainTime is just a time. ("We eat lunch at 11am each day.") A ZonedDateTime is an Instant in a known time zone. Etc.
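A minimal sketch of those types, assuming a runtime or polyfill that provides the `Temporal` global (e.g. the @js-temporal/polyfill package; built-in availability still varies across engines, hence the guard):

```javascript
// Guarded so the snippet degrades gracefully where Temporal isn't available.
if (typeof Temporal !== "undefined") {
  const instant = Temporal.Instant.from("2026-01-13T17:00:00Z"); // a point in time
  const date = Temporal.PlainDate.from("2026-01-13");            // just a date
  const lunch = Temporal.PlainTime.from("11:00");                // just a time
  const zoned = instant.toZonedDateTimeISO("America/New_York");  // instant + known zone
  console.log(date.toString(), lunch.toString()); // "2026-01-13 11:00:00"
}
```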
Temporal draws a lot of inspiration from Java's Joda-Time (which also went on to inspire .NET's Noda Time, Java's official java.time API, and JavaScript's js-joda). This is helpful; it means that some concepts can transfer if you're working in other languages. And, more importantly, it means that it benefits from a lot of careful thought on how to ergonomically and effectively represent date/time complexities.
In a follow-up tweet, Mark Atwood elaborates: "Amazon was very carefully complying with the licenses on FFmpeg. One of my jobs there was to make sure the company was doing so. Continuing to make sure the company was was often the reason I was having a meeting like that inside the company."
I interpret this as meaning there was an implied "if you screw this up" at the end of "they could kill three major product lines with an email."
Are you interpreting that as "if we violate the license, they can revoke our right to use the software" ?? And they use it in 3 products so that would be really bad. That would make sense to have a compliance person.
Yeah - Amazon Elastic Transcoder which they just shut down and replaced with Elemental MediaConvert is almost certainly just managed "ffmpeg as a Service" under the hood.
Twitch definitely. This whole brouhaha has been brewing for a while, and can be traced back to a spat between Theo and ffmpeg.
In the now-deleted tweet Theo trashed VLC codecs, to which ffmpeg replied basically "send patches, but you wouldn't be able to". The reply to that was:

> You clearly have no idea how much of my history was in ffmpeg. I built a ton of early twitch infra on top of yall.
This culminated in Theo offering a 20k bounty to ffmpeg if they removed the people running the ffmpeg Twitter account, which prompted a lot of heated discussion.
So when Google Project Zero posted their bug... ffmpeg went understandably ballistic
The company that I work at makes sure anything that uses a third-party library - whether in internal tools, shipped products, or hosted products - goes through legal review. And you'd better comply with whatever the legal team asks you to do. Unless you and everyone around you are as dumb as a potato, you are not going to do things that blatantly violate licenses, like shipping a binary with modified but undisclosed GPL source code. And you can be sure that (1) it's hard to use anything GPL or LGPL in the first place, and (2) even if you are allowed to, someone will tell you to be extra careful and do exactly what you are told to (or not to).
And as long as Amazon is complying with ffmpeg's LGPL license, ffmpeg can't just stop licensing existing code via an email. Of course, unless there is some secret deal, but again, in that case, someone in the giant corporation will make sure you follow what's in the contract.
Basically, at a company like Amazon where there are functional legal teams, the chance of someone "screwing up" is very small.
pnpm doesn't execute lifecycle scripts by default, so it avoids the particular attack vector of "simply downloading and installing an NPM package allows it to execute malicious code."
As phiresky points out, you're still "download[ing] arbitrary code you are going to execute immediately afterwards" (in many/most cases), so it's far from foolproof, but it's sufficient to stop many of the attacks seen in the wild. For example, it's my understanding that last month's Shai-Hulud worm depended on postinstall scripts, so pnpm's restriction of postinstall scripts would have stopped it (unless you whitelist the scripts). But last month's attack on chalk, debug, et al. only involved runtime code, so measures like pnpm's would not have helped.
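For reference, recent pnpm versions refuse to run dependency build scripts unless they are explicitly allow-listed. A hedged sketch of what that opt-in looks like in package.json (the esbuild entry is just an example of a package you might choose to trust):

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild"]
  }
}
```

Anything not on the list has its preinstall/postinstall scripts skipped, which is the mechanism that would have blunted a postinstall-based worm.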
The way you know is by running the full SQLite test suite, with 100% MC/DC coverage (slightly stricter than 100% branch coverage), on each new compiler, version, and set of flags you intend to support. It's my understanding that this is the approach taken by the SQLite team.
Dr. Hipp's position is paraphrased as, “I cannot trust the compilers, so I test the binaries; the source code may have UBs or run into compiler bugs, but I know the binaries I distribute are correct because they were thoroughly tested" at https://blog.regehr.org/archives/1292. There, Dr. John Regehr, a researcher in undefined behavior, found some undefined behavior in the SQLite source code, which kicked off a discussion of the implications of UB given 100% MC/DC coverage of the binaries of every supported platform.
(I suppose the argument at this point is, "Users may use a new compiler, flag, or version that creates untested code, but that's not nearly as bad as _all_ releases and platforms containing untested code.")
Yes. Sardar Biglari, who's an activist investor and the CEO and owner of Steak'n'Shake, has been pushing for more control over Cracker Barrel for several years. He amplified some of the backlash against Cracker Barrel.
Would you still say it's a small API surface? State, memos, callbacks (which are just memo functions), effects, effect events, reducers, context, external stores, special handling for any tags that go in the `<head>`, form actions, form status, action state, activities, refs, imperative handles, transitions, optimistic updates, deferred updates, suspense, server components, compiler, SSR?
Or maybe it's a small enough API but a lot of concepts. Or maybe I'm just grumpy that the days of "it's just a view layer" feel long ago.
> Or maybe I'm just grumpy that the days of "it's just a view layer" feel long ago.
That abstraction was always leaky, in that it raised many more questions that had to be answered.
Part of the appeal was that its perimeter was limited, and part of the current situation is that the community around React-the-library created the tools to answer those other questions, which means that React-the-ecosystem is much more complex than React-the-lib.
I assume you're referring to Web SQL? As I understand it, the argument against isn't just "there's only 1 implementation," it's "there's no standard and there's only 1 implementation," so the standard would have to devolve to "whatever that 1 implementation does."