Hacker News | Jerrrrrrrry's comments

Everyone who disagrees with me is either a bot or in bad faith.

-99% of comments since 2020


You forgot they're being paid to say it.

I would like the same for _______.

If you were to slowly replace your brain with a cybernetic appliance, you could also have perfect continuity.

Not that it matters; we sleep and wake up, no one freaks out daily that they were unconscious for hours.

No reason to suspect waking up in 3030 after being unfrozen, or in 6045 after being cybernetically reanimated, would be any more disconcerting physiologically than what extended coma patients experience.

Your continuity is just as illusory as your free will.


> no one freaks out daily that they were unconscious for hours.

Speak for yourself! Every time I come to there's something to freak out about. Okay, not every time, but waking up is a lot.


Aberrations in solar activity coinciding with earthquakes were the go-to example here on HN for clowning.

Next year it will be canon that we always knew they were related.



Thoughtful comment.

Please read hn rules.


Created a voltage drop timed exactly to the key comparison, then a spike at the continuation.

An IRL NOP and forced execution control flow, to effectively return true.

B e a utiful


No? It is crowbar voltage glitching, but you're significantly underselling it here. The glitching does not affect key comparisons.

It's a double-glitch. The second glitch takes control of PC during a memcpy. The first glitch effectively disables the MMU by skipping initialization (allowing the second glitch to gain shellcode exec). (I am also skipping a lot of details here, the whole talk is worth a watch)
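The parameter-hunting that underlies glitches like these can be sketched in a few lines: sweep the pulse offset and width until the target misbehaves. Everything below (the class names, the "working" parameter window) is hypothetical scaffolding for illustration, not details from the talk:

```python
# Hypothetical sketch of a fault-injection parameter sweep. GlitchRig and
# reset_target_and_run() are invented stand-ins for real glitch hardware.
import itertools

class GlitchRig:
    """Stand-in for a crowbar-glitch controller (a MOSFET shorting Vcc)."""

    def arm(self, offset_ns, width_ns):
        # Schedule a crowbar pulse `width_ns` long, `offset_ns` after reset.
        self.offset_ns, self.width_ns = offset_ns, width_ns

    def reset_target_and_run(self):
        # Placeholder for real hardware: pretend only a narrow window of
        # parameters corrupts the targeted instruction without crashing.
        return 4_000 <= self.offset_ns <= 4_100 and 40 <= self.width_ns <= 60

def sweep(rig, offsets, widths):
    """Brute-force the (offset, width) space; return the first working hit."""
    for offset_ns, width_ns in itertools.product(offsets, widths):
        rig.arm(offset_ns, width_ns)
        if rig.reset_target_and_run():
            return offset_ns, width_ns
    return None

hit = sweep(GlitchRig(), offsets=range(3_900, 4_200, 10), widths=range(20, 80, 10))
```

A double-glitch like the one described would run two such pulses per boot, each with its own tuned offset and width.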


It's fascinating - how does one defend against an attacker or red-team who controls the CPU voltage rails with enough precision to bypass any instruction one writes? It's an entirely new class of vulnerability, as far as I can tell.

This talk https://www.youtube.com/watch?v=BBXKhrHi2eY indicates that others have had success doing this on Intel microcode as well - only in the past few months. Going to be some really exciting exploits coming out here!


> how does one defend against an attacker or red-team who controls the CPU voltage rails

The Xbox does have defences against this; the talk explicitly mentions rail-monitoring defences intended to detect that kind of attack. It had a lot of them, and he had to build around them. The exploit succeeds because he found two glitch points that bypassed the timing randomisation and containment model.


I hope Apple is paying attention, since their first gen AirTags are vulnerable to voltage glitching to disable the speaker and the tracking warning.

I don't see much motivation for fixing that when I can purchase an nRF52xx Bluetooth beacon on AliExpress for €4 and flash it with firmware that pretends to be 50 different AirTags, rotating every 10 minutes, thereby bypassing all tracker detections.

What's the battery life like on one of those?

Months if the firmware properly sleeps.

They're also, as it turns out, vulnerable to a drill bit.

It's pretty trivial to just open it up and disconnect the speaker too. I took one apart to make a custom wallet card out of it and broke the speaker in doing so; the rest of it worked perfectly fine (though obviously the warning would still work).

Apple has a team that works on glitching protection for their phones. Disabling the speaker on AirTags is a very different threat model.

Aren't AirTags completely and utterly broken, or has anything changed?

It's not new - fault injection as a vulnerability class has existed since the beginning of computing, as a security bypass mechanism (clock glitching) since at least the 1990s, and crowbar voltage glitching like this has been widespread since at least the early 2000s. It's extraordinarily hard to defend against but mitigations are also improving rapidly; for example this attack only works on early Xbox One revisions where more advanced glitch protection wasn't enabled (although the author speculates that since the glitch protection can be disabled via software / a fuse state, one could glitch out the glitch protection).

Just so you know, hardware hackers have been doing this for 20+ years. Hacking satellite TV (google smart card glitching) was done the same way.

It's more that it's really hard to do security when the attacker has unlimited physical access.


> It's an entirely new class of vulnerability, as far as I can tell.

It is known as voltage glitching. If you're interested, our research group applies it to Intel CPUs. https://download.vusec.net/papers/microspark_uasc26.pdf


You can't. Console makers have these locked-down little systems with all the security they can economically justify... embedded in an arbitrarily-hostile environment created by people who have no need to economically justify anything. It's completely asymmetrical and the individual hackers hold most of the cards. There's no "this exploit is too bizarre" for people whose hobby is breaking consoles, and if even one of those bizarre exploits wins it's game over.

And if you predict the next dozen bizarre things someone might try, you both miss the thirteenth thing that's going to work and you make a console so over-engineered Sony can kick your ass just by mentioning the purchase price of their next console. ("$299", the number that echoed across E3.)


> You can't

It's a moot point; they are not trying to prevent it. They only need to buy enough time to sell games over the lifespan of the hardware, which they did.

> all the security they can economically justify...

It seems like they did a perfect job, it lasted long enough to protect Microsoft game profits.


Well, they had better hope nobody notices how to use this flaw to chain into another one in the current generation.

This is a cat-and-mouse that can always be won by a sufficiently advanced cat. Whatever protection circuit you design, the attacker can decap the chip, put a wire on the right node in that circuit and force it to disabled. But that's really really hard, and most cats can't do it.

It's reassuring that the owner of a device will always own it, in the end.


The microcontrollers I worked on 15 years ago had low voltage detection:

https://en.wikipedia.org/wiki/Low-voltage_detect


Glitching attacks are typically performed by switching the supply voltage at quite high frequencies; a typical low-voltage detection won't trigger a reset under such conditions. This is also why glitching attacks are often performed by spiking higher voltages, not lower. See for example Joe Grand's latest video on breaking crypto wallets [0].

Low-voltage detection is usually implemented as a simple comparator that should trigger instantly, but often only on a single Vcc pin, and due to the decoupling caps found in a typical circuit design there is effectively an RC circuit that filters short fluctuations of supply voltage. So most low-voltage detection implementations only trigger on 'longer' periods of low voltage.

Traditionally low-voltage detection features (like brown-out detection) are there to guarantee functionality of the uC itself or the device the uC controls. It is typically not intended as a defence measure against these types of attacks. In fact, 15 years ago it may not have been much of a concern.

[0] https://www.youtube.com/watch?v=MhJoJRqJ0Wc
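The RC-filtering point above can be illustrated numerically. This is a toy first-order low-pass model; the time constant, threshold, and voltages are made up for illustration, not measurements from any real board:

```python
# Toy model of the decoupling-cap effect: the comparator watches Vcc through
# what is effectively a first-order RC low-pass, so a nanosecond-scale
# crowbar glitch barely moves the voltage it sees, while a sustained droop
# does. All component values here are invented.

def filtered_min(vcc=3.3, glitch_ns=50, tau_ns=1000, step_ns=1):
    """Lowest filtered voltage during a glitch that shorts Vcc to 0 V."""
    v = vcc
    for _ in range(0, glitch_ns, step_ns):
        # Forward-Euler step of dv/dt = (v_in - v) / tau, with v_in = 0 V
        v += (0.0 - v) * (step_ns / tau_ns)
    return v

BROWNOUT_THRESHOLD_V = 2.9                   # hypothetical comparator threshold

fast_glitch = filtered_min(glitch_ns=50)     # brief crowbar pulse
slow_droop = filtered_min(glitch_ns=5000)    # sustained low-voltage condition
```

With these invented numbers the 50 ns glitch only drags the filtered voltage down to roughly 3.1 V, never crossing the 2.9 V threshold, while the 5 µs droop decays toward 0 V and trips it, which is the comment's point about why brown-out detection misses fast glitches.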


Voltage glitching is an old technique. Here's a paper about it from 2 decades ago https://ieeexplore.ieee.org/document/1708651 but it is at least another decade older as an attack vector.

You can defend against it one way with voltage monitoring or physical intrusion detection, and another way with droop detection and countermeasures on the device itself. Both probably just increase the cost of hacking it by some orders of magnitude, but that may be enough.


Basically, if someone has physical access to the device, it's game over.

You can do things like efuses that basically brick devices if something gets accessed, but that becomes a matter of whether the attacker falls for the trap.


> Basically, if someone has physical access to the device, it's game over.

It took more than a decade to exploit this vulnerability and even then there are fairly trivial countermeasures that could have been used to prevent it (and that are implemented in other platforms.)

Nothing is unhackable, but it requires a very peculiar definition of "game over".

(And as others have pointed out: only early versions of the Xbox One were vulnerable to this attack.)


The incentives to hack the Xbox One were few. Easy sideloading. No exclusives. Not a great performance-per-dollar ratio either. It is the opposite of Nintendo consoles if you think about it, and Nintendo consoles are notorious for having a really quick homebrew scene.

Every time a console gets hacked, the checklist of SOC security architects grows a little longer. Boot ROMs are written in formally verifiable languages, there are hardware glitch detectors, CPUs running in lockstep to guard against glitches, checks against out-of-order completion of security phases, random delay insertion, and so forth.

When it comes to SOC security, the past is not a good predictor of the present. The previous Nintendo SOC was designed 15 years ago. A lot has been learned since. It's become increasingly harder to bypass these mechanisms.

The fact that it took 13 years to hack the Xbox One is not because it's not an attractive platform: because of its high profile, it has been a popular subject for security research grad students from the moment it was released. And if anything, the complexity of the current hack shows how much SOC security has progressed over the years.
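One item from the checklist above, random delay insertion, is easy to sketch. This is an illustrative toy, not any console's actual code; the delay bounds and the comparison routine are invented:

```python
# Sketch of random delay insertion: jittering when a security-critical check
# executes shifts its position in time on every boot, so a glitch pulse at a
# fixed offset rarely lands on it. Bounds and names are illustrative.
import random

def busy_wait(cycles):
    # Stand-in for a hardware delay; burns a random-length slice of time.
    for _ in range(cycles):
        pass

def secure_compare(expected, provided, rng=random.Random()):
    busy_wait(rng.randrange(0, 1000))   # random delay before the check
    ok = len(expected) == len(provided)
    acc = 0
    for a, b in zip(expected, provided):
        acc |= a ^ b                    # constant-time byte comparison
    busy_wait(rng.randrange(0, 1000))   # and after, to hide the exit point
    return ok and acc == 0

match = secure_compare(b"secret", b"secret")
```

The constant-time accumulator also avoids giving the attacker an early-exit timing signal to synchronise the glitch against.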


Only if they leave a door open, which they did here.

If your argument is that you can't hope to close every door, then AI will make it easier to close all the doors in the future.


>then AI will make it easier to close all the doors in the future.

AI could also make it easier to open the doors too.


This hasn't been true for the time a typical American high school senior has been alive. Please stop repeating things people said years ago.

Could a chip detect this and reset?

I'm not at all familiar with the Xbox One, but this is a feature that's generally available if you're designing "closed" hardware like a console. Most SoCs these days have some sort of security processor that runs in its own little sandbox and can monitor different things that suggest tampering (e.g. temperatures, rail voltages, discrete tamper I/O) and take corrective action. That might be as simple as resetting the chip, but often you can do more dramatic things like wiping security keys.

But this exploit shows that it's still almost impossible to protect yourself from motivated attackers with local access. All of that security stuff needs to get initialized by code that the SoC vendor puts in ROM, and if there's an exploit in that, you're hooped.
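The monitoring loop described above can be sketched minimally. The sensor names, thresholds, and escalation policy below are all invented for illustration; nothing here reflects the Xbox One's actual security processor:

```python
# Toy tamper-response policy: check each sensor against an allowed window,
# then escalate. All limits and responses are hypothetical.

TAMPER_LIMITS = {
    "rail_mv":   (3100, 3500),  # acceptable core-rail window, millivolts
    "temp_c":    (-20, 95),     # die temperature window, Celsius
    "tamper_io": (0, 0),        # discrete tamper pin must read 0
}

def check_sensors(readings):
    """Return the names of any sensors outside their allowed window."""
    return [name for name, (lo, hi) in TAMPER_LIMITS.items()
            if not lo <= readings[name] <= hi]

def respond(violations):
    """Escalate: reset on marginal readings, wipe keys on hard tamper."""
    if not violations:
        return "ok"
    if "tamper_io" in violations:
        return "wipe_keys"      # the dramatic option mentioned above
    return "reset_chip"

status = respond(check_sensors({"rail_mv": 2400, "temp_c": 40, "tamper_io": 0}))
```

The design choice worth noting is the tiered response: a brief rail excursion might be benign noise, so it only resets, while an unambiguous tamper signal destroys the keys outright.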


Yes, and the Xbox One has mechanisms to do just that. But they turned out to not be fully sufficient.

This attack is on the early models that didn't have those protections enabled. The researcher surmised that later models do indeed have anti-glitching mechanisms enabled.

Not a new vulnerability class.

Extremely impressive feat nonetheless!


The Xbox 360 was hacked in a simpler but nearly identical way [1]! Amazing that despite the various mitigations, the same process was enough to crack the Xbox One.

[1] https://consolemods.org/wiki/Xbox_360:RGH/RGH3


But it took them 4x as long to be successful against the xbone.

I think the security team would call their mitigations a success.


The earliest example I know of for this is CLKSCREW, but security hardware (like for holding root CA private keys) was hardened against this stuff way before that attack.

Has anyone heard of notable earlier examples?


In terms of fault injection as a security attack vector (vs. just a test vector, where it of course dates back to the beginning of computing) in general, satellite TV cards were attacked with clock glitching at least dating back into the 1990s, like the "unlooper" (1997). There were also numerous attacks against various software RSA implementations that relied on brownout or crowbar glitching like this - I found https://ieeexplore.ieee.org/document/5412860 right off the bat but I remember using these techniques before then.

This sounds like a way less crude version of the way many unlicensed NES cartridges got around the lockout chip. Just charge a capacitor and blast it at boot time.

Lol your PhD got you this far, keep appealing to your PhD gods

Apt username from a person suggesting that non-edible fiber is the nutrient causing illness, and that that's the presupposition we should argue against.

Why would more fiber help?


The mechanism behind why more fiber helps is pretty straightforward:

Insoluble fiber speeds up gut motility. Faster gut motility means less time for toxins to sit and absorb in your gut.

Also, fermentable fibers serve as substrate for gut microbes, producing short-chain fatty acids (butyrate is one - a primary fuel source for colonocytes - the cells that line your colon).

It also lowers colonic pH, inhibiting pathogenic bacteria.

Lastly, (although there are tons more benefits I'm not listing), soluble fiber is incredible for people trying to lose weight, as highly fibrous foods increase satiety, keeping you fuller for longer.


Uh, what? I have not made a presuppositional argument (I made no argument at all...). I made a statement about my epistemic state - ie: that I would "bet" on low fiber being the major contributor to colon cancer rates. Someone then asserted that it can't be that, and I asked "why?".

> Why would more fiber help?

Because there is an incredible amount of research into high fiber diets being good for your gut, including reduced colon cancer rates. This is the consensus of various organizations such as WHO - high fiber diets have lower risks of colon cancer.


My comment is that it is not ONLY low-fiber diets. There are a lot of other risk factors involved. Will high fiber help? Absolutely. Is it the be-all and end-all? No, I doubt it.

Western diet collapsed its fiber intake well over 80 years ago - it would have shown up already.


> My comment is that it is not ONLY low fiber diets.

Well, you said "can't" and I asked "why", which feels very reasonable to me. Your argument seems to be that it wouldn't account properly for the data - specifically, you're saying we would have seen colon cancer rates rise earlier.

> Western diet collapsed its fiber intake well over 80 years ago - it would have shown up already.

I don't really buy this for a lot of reasons. Probably the two most important are (a) the ability to screen historically and (b) the timing isn't particularly "off" for the fiber argument. We did see it already; we've been seeing increases in colon cancer risk for decades.

Now, I'm not married to it "just" being fiber whatsoever, but if I were to "bet" on the major contributing factor, naively, that's where my money would go. I think it's very reasonable to not place your bet there.


Yeah, I wonder what the fiber intake was for someone from Egypt or for hunter-gatherers. I get that in our modern diet, fiber is better than sugar and plastic stuff made in factories combining oils and sugar into something that looks like food. But if a person is regular and does not have any gut issues, how would more fiber help?

> Yeah, I wonder what the fiber intake was for someone from Egypt or for hunter-gatherers.

Very high.

> But if a person is regular and does not have any gut issues, how would more fiber help?

There is a ton of research about this and it's why WHO and other orgs state explicitly that fiber reduces rates of colon cancer.


99% of humans ate meat, and fruits occasionally.

Fiber does nothing.


lol this is such utter bullshit? I'm blown away by how confidently stated and how utterly incorrect this is.

1. Ancient Egyptians ate fucktons of wheat and barley, lentils, chickpeas, etc. They ate massive amounts of fiber lol I mean holy fuck I just can't believe how wrong you are?

2. Fiber is very, very well understood by ALL health organizations to be preventative for colon cancer.


You shouldn't feed the trolls.

Maybe, but the person they're responding to seemed to be genuine in their question, and I worry that they'll read a statement like "they mostly ate meat" and think it's plausible when it's insanely incorrect.

You should follow the HN rules.

Ancient Egyptians is less than 10,000 years ago.

Homo genus is 2 million years old.

Repeating "muh authorities" isn't an argument.

Your willful axiomatic dogmatism isn't science.

Cave paintings don't depict agriculture. They depicted hunting and agrarian nomadic lifestyles.


There should be a betting service for this kind of thing instead of sports betting. Maybe all the men betting on sports might read up and change their habits based on the betting outcomes (and improve their health).

I would also bet top reason is fiber but it isn't the only reason - multiple factors at play.


I think that's all very fair.

>"Otherwise"

Would could this possibly be referring to?


Many different viral infections and other immune pressures can kick off ME/CFS. We don't yet know what "otherwise" actually covers; Epstein-Barr virus and influenza are known triggers, but there are likely many others, and only 70% of patients with the condition say it was initiated by an infection. It's a question well worth good research; there just isn't much in the way of funding for ME/CFS.


Epstein-Barr virus (EBV), unknown genetic conditions or other unknown stressors.

