Hacker News | sgt's comments

Seems like it, which is serious, but far from what I thought when I read the title. I suspect 90% of LinkedIn users don't even have a single browser extension installed.

I would dispute that. Most work computers have some extensions installed by default, and that's millions of laptops: Snow Inventory Agent, ad blockers, etc.

Liftoff! The planning that went into this is mind-boggling. Well done.

I can’t deploy a stupid little app at work without something breaking.

I'm impressed when people can build something so complex that works on the first try.


I flagged it now for this reason.


Have a look at the old Astrovan: https://en.wikipedia.org/wiki/Astronaut_transfer_van#/media/...

It's not the new one, but you'll enjoy this one more.


> It’s only when a user tries to install an unregistered app that they’ll require ADB or advanced flow, helping us keep the broader community safe while preserving the flexibility for our power users.

So we have a sideloaded app now, which has been increasingly tricky for our users to install, and the warning they get is hard to understand. Does this essentially mean the end of sideloading?


If you get 'verified' by Google and sign your app, sideloading shouldn't change. That means money and ID checks, or a free 'hobbyist' carve out if you have <20 users.

If you don't want to play their game, sideloading will get substantially harder.


Not even close. If you want to run this on PCs you need a GPU like the 5090, and that's still not the same cost per token; it will be less reliable and use a lot more power. Right now the Apple Silicon machines are the most cost-effective per token and per watt.

It's odd that no manufacturer has jumped on this bandwagon to offer a competitive alternative.

Is there even enough market for this?

These models are dumber and slower than API SoTA models and will always be.

My time and sanity are much more expensive than insurance against any risk of sending my garbage code to companies worth hundreds of billions of dollars.

For most, using local models is a downgrade on multiple fronts: total cost of ownership, software maintenance, the electricity bill, lost performance on the machine doing the inference, more hallucinations/bugs/lower-quality code, and slower iteration speed.


Actually, yes. For example, I run local models for ingested documents, summaries, etc. The local models are fine, and there is no need for me to pay for tokens; performance is adequate for that purpose as well. There are many other cases where I run at scale, time is flexible so things can move slower, and I'd rather keep it all in-house. I'm not even getting into areas where data cannot leave the premises for legal reasons. Right now I'm limited mostly by GPUs. But if that world of local models on Apple silicon is so "good", there is room to expand it to other fruits...

> These models are dumber and slower than API SoTA models and will always be.

Sure, but you're paying per-token costs on the SoTA models that are roughly an order of magnitude higher than third-party inference on the locally available models. So when you account for per-token cost, the math skews the other way.
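The rough arithmetic behind that claim, with purely illustrative prices (both numbers are assumptions for the sake of the sketch, not real quotes):

```python
# Back-of-the-envelope sketch of the per-token cost gap.
# Both prices are illustrative assumptions, not actual quotes.
sota_api_per_mtok = 15.00    # assumed SoTA API price, $ per 1M output tokens
hosted_open_per_mtok = 1.50  # assumed third-party price for an open model

ratio = sota_api_per_mtok / hosted_open_per_mtok
print(f"SoTA is ~{ratio:.0f}x more expensive per token under these assumptions")
```

If the gap really is around 10x, a workload that costs $150/month on a SoTA API runs for about $15/month on hosted open models, before accounting for quality differences.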


In Claude Code's /usage, it just hangs; I can't even see what my limits are, which is weird. Maybe a bug? I can't imagine I'm close to my limits, though. I'm on the Max 20x plan, using Opus 4.6.

>The NASA engineers wanted to understand what would happen if large chunks of the heat shield were stripped away entirely from the composite base of Orion. So they subjected this base material to high energies for periods of 10 seconds up to 10 minutes, which is longer than the period of heating Artemis II will experience during reentry.

> What they found is that, in the event of such a failure, the structure of Orion would remain solid, the crew would be safe within, and the vehicle could still land in a water-tight manner in the Pacific Ocean.

Indeed, this is a much more balanced take. And it turns out that the OP is an armchair expert assuming NASA doesn't know what it's doing, or is negligent.


The OP links a document from former astronaut Charles Camarda, whom NASA explicitly invited in to check their work, and who observed the press conference the Ars article comes from. He addresses every point in it, including that one. Just because an article contradicts a strident opinion doesn't make it 'balanced'; what matters is whether the actual facts are true or not.

https://docs.google.com/document/d/1ddi792xdfNXcBwF8qpDUxmZz...


This report from astronaut Camarda is indeed a bombshell. Scary.

I mean, it isn't like there aren't multiple precedents for NASA finding a surprise safety issue, talking it down, and then watching it literally blow up in their faces.

NASA is an institution, and the incentives align with launching despite risk in cases where the risk was completely unanticipated. The project has its own momentum, gathered over time as it rolls along collecting opportunity costs and people tie themselves to it. If you think an astronaut would pull out of a launch over a 5% risk of catastrophe... well, you are talking about a group of people that originated from post-WWII test pilot programs, where the chances of blowing up with the gear were much, much higher. Even though modern astronauts don't have the same direct experience, it isn't beyond reason to assume they inherit at least a bit of that bravado.


Is this an issue for those only using axios on the frontend side like in a VueJS app?

Absolutely. If you ever ran an npm install on a project using one of the affected axios versions, your entire system may be compromised.

> The malicious versions inject a new dependency, plain-crypto-js@4.2.1, which is never imported anywhere in the axios source code. Its sole purpose is to execute a postinstall script that acts as a cross platform remote access trojan (RAT) dropper, targeting macOS, Windows, and Linux. The dropper contacts a live command and control server and delivers platform specific second stage payloads. After execution, the malware deletes itself and replaces its own package.json with a clean version to evade forensic detection.

I strongly recommend you read the entire article.
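One quick triage step: even though the article says the malware cleans up its own package.json, the injected dependency can still show up in your lockfile. A minimal sketch of that check (the plain-crypto-js name comes from the article; the helper itself is hypothetical, not a real audit tool, and a clean lockfile is not proof you're safe):

```python
import json

def lockfile_has_dropper(path, needle="plain-crypto-js"):
    """Return True if an npm v2/v3 lockfile mentions the injected dependency."""
    try:
        with open(path) as f:
            lock = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return False
    # npm lockfiles (v2/v3) list every installed package under "packages",
    # keyed by paths like "node_modules/plain-crypto-js"
    return any(needle in name for name in lock.get("packages", {}))
```

If this returns True, assume the postinstall dropper already ran on that machine and treat it as compromised rather than just removing the package.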


nftables syntax is pretty tough to read. I wonder why they didn't go for an easier-to-read DSL. I do understand it's likely super fast to parse, though, and has a 1:1 relationship to its struct in the kernel.

I’ll pick nftables over iptables any day; it’s leagues better (granted, that's a low bar). The nftables wiki is great, as the syntax and modules are documented on a single, easy-to-read page.

As an added bonus, you get atomic updates of all chains for free.

For simple use cases, though, ufw or firewalld may be simpler still.
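For what it's worth, the readability complaint is easier to judge with a concrete ruleset in front of you. A minimal sketch (table, chain, and ports are illustrative); loading it with `nft -f ruleset.nft` is what gives you the atomic update, since the flush and the new rules are applied as a single transaction:

```
# Minimal nftables ruleset sketch; names and ports are illustrative.
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif "lo" accept
        tcp dport { 22, 80, 443 } accept
    }
}
```

Compare that to the equivalent pile of iptables -A lines duplicated for IPv4 and IPv6, and the "leagues better" claim is easier to see even if the DSL itself takes getting used to.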


Definitely an upgrade over iptables. I kinda miss ipchains though.

You can still use the iptables interface for nftables rules if you'd like, but I think you miss out on things like atomic application of rulesets, ranges, lists, and variables (not shell variables).

I personally stick to iptables. nftables does not seem to be an improvement at all. iptables is terse but logical.
