Hacker News | pwlb's comments

Many neurodivergent people are simply overwhelmed by the sound on the streets

The documentation actually reveals why this will most likely not work, given you are an expert on mobile security.

Oh, you don't say. The above was a link to the source module, but as with Magisk there are many ways to peel the potato.

I've seen countless users confirming it works for them, for example by using this workflow: https://magiskzip.com/how-to-pass-integrity-with-strong-chec...

But as an expert on mobile security you can assure us it's not possible to spoof a Google Play Integrity pass with Magisk, am I right?


The eIDAS 2 motivation is implicitly that eID failed in eIDAS 1. It simply either didn't take off or didn't work at all.

They are also trying to shoehorn in age verification with it.

This is necessary because the wallets contain an identity proofing functionality called PID (Person Identification Data). Showing these credentials basically proves you are you. There are high requirements for identity proofing that even pre-date wallets, and that makes sense, because the potential blast radius of identity theft is huge. Historically, these have been secured in smartcards, like eID cards or passports, and are now shifting to the smartphone. Verifying the security posture of your device and app is therefore crucial.

OK, but Google will happily confirm an Android device running Oreo is safe.

While it's dramatically worse than devices Google refuses to certify (i.e. those not running their spyware as privileged services).


What do you mean "shifting to smartphone"? It's not a natural process - it's a technical decision to shift them to the smartphone, and a really bad one. We already have smart cards, they work and do not depend on any corporations, even less foreign corporations.

We even have smartcards with e-ink displays, and I'd personally want them to succeed here instead of moving security-critical apps to smartphones.

Because Google then abuses its position to inject unremovable spyware with elevated privileges into the phone, which the user then can't defend against without making the phone "insecure" and thus unsuitable for these apps.

If these apps really need a smartphone, I'd at least want it to be free of ad-related garbage in the system. I'm fine with not being able to flash a custom ROM on the smartcard as it doesn't contain hostile software.

Now if even Apple starts showing ads, there's no other choice but to resist this.


Banks actually have high fraud rates today because of weak security mechanisms. If attackers steal your money, the bank will reimburse you. If attackers steal your identity, you are really screwed. Security requirements for banking and identity are simply different.

Mobile Google-account-based security is even weaker than the hardware tokens used by banks. Make of that what you will.

Please give some evidence that this is due to hardware tokens failing where a smartphone-based solution would have prevented it.

If they use SSN as a password, it doesn't mean you can't have something slightly more reasonable without going full cyberpunk dystopia.

Preventing credential duplication is a requirement for achieving a high level of assurance. One of its purposes is to limit the potential damage that can be done by attacks. If credentials are bound to hardware-bound keys, attackers will always need access to this key store for any misuse. If you don't prevent duplication, attackers may extract credentials and misuse them in a thousand places simultaneously.
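The binding boils down to a proof-of-possession check: the verifier sends a fresh nonce, and only the holder of the non-extractable key can answer it. A minimal sketch (real wallets use an asymmetric key pair in a secure element; HMAC stands in here purely to keep the example dependency-free, and all names are made up):

```python
import hashlib
import hmac
import secrets

class SecureElement:
    """Stand-in for hardware key storage: the key is generated inside
    and never exported. Real wallets hold an asymmetric key in a
    secure element; HMAC is used here only to avoid dependencies."""

    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)  # never leaves this object

    def sign(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

    def verify(self, challenge: bytes, proof: bytes) -> bool:
        expected = hmac.new(self._key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, proof)

wallet = SecureElement()

# The verifier challenges with a fresh nonce. Copying the credential
# payload alone does not help the attacker: a valid proof requires
# the key locked inside the secure element.
nonce = secrets.token_bytes(16)
proof = wallet.sign(nonce)

stolen_payload = b"name=Alice;dob=1990-01-01"  # copyable attribute data
forged_proof = hashlib.sha256(stolen_payload + nonce).digest()
```

An attacker who duplicated the credential data would still fail the `verify` step, which is exactly the damage-limiting property described above.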

Okay, but Google certifies phones which have not been updated for the last several years.

They can be trivially rooted; then they spoof the signature and get a pass in Play Integrity while being wide open to malware (or to copying the ID, I presume).


The documentation clearly outlines that there are multiple signals being analysed. Relying on Play Integrity alone is definitely not sufficient, as you state.

Okay, I meant that Google issuing a "pass" is worthless, yet it's being used as a mandatory signal.

This is due to many parts of the system being spread across multiple IETF RFCs, which happened as OAuth was improved and made more secure over time. Efforts are underway to combine all the important parts into OAuth 2.1; otherwise, have a look at the FAPI 2.0 security profile for high-assurance use cases.


You may have a look at this (still a Draft): https://datatracker.ietf.org/doc/draft-ietf-oauth-status-lis...


I don't think status lists solve the requirement for near-realtime revocations. The statuslist itself has a TTL and does not get re-loaded until that TTL expires. This is practically similar to the common practice of having a stateful refresh token and a stateless access token. The statuslist "ttl" claim is equivalent to the "exp" claim of the access token in that regard, and it comes with the same tradeoffs. You can have a lower TTL for statuslist, but that comes at the cost of higher frequency of high-latency network calls due to cache misses.

The classic solution to avoid this (in the common case where you can fit the entire revocation list in memory) is to have a push-based or pub/sub-based mechanism for propagating revocations to token verifiers.
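A sketch of that push-based pattern, where the hot-path token check is a local set lookup with no network call (here `queue.Queue` stands in for a real pub/sub broker such as Redis pub/sub or NATS; the names are illustrative):

```python
import queue
import threading

class RevocationCache:
    """In-memory revocation set kept current by a push channel, so
    verifiers never make a network call on the hot path. queue.Queue
    stands in for a real pub/sub broker."""

    def __init__(self, channel: "queue.Queue[str]") -> None:
        self._revoked: set = set()
        self._lock = threading.Lock()
        self._channel = channel
        # Background consumer applies revocations as they arrive.
        threading.Thread(target=self._consume, daemon=True).start()

    def _consume(self) -> None:
        while True:
            token_id = self._channel.get()  # blocks until next event
            with self._lock:
                self._revoked.add(token_id)
            self._channel.task_done()

    def is_revoked(self, token_id: str) -> bool:
        with self._lock:
            return token_id in self._revoked

channel: "queue.Queue[str]" = queue.Queue()
cache = RevocationCache(channel)

# The authorization server publishes the jti of a revoked token;
# every verifier's local cache applies it within milliseconds.
channel.put("jti-123")
channel.join()  # wait until the consumer has processed the event
```

This trades the statuslist's bounded staleness for near-real-time propagation, at the cost of running a broker and handling reconnect/replay on verifier restart.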


> The statuslist itself has a TTL (...)

If you read the draft, the TTL is clearly specified as optional.

> (...) and does not get re-loaded until that TTL expires.

That is false. The draft clearly states that the optional TTL is intended to "specify the maximum amount of time, in seconds, that the Status List Token can be cached by a consumer before a fresh copy SHOULD be retrieved."

> You can have a lower TTL for statuslist, but that comes at the cost of higher frequency of high-latency network calls due to cache misses.

The concept of a TTL specifies the staleness limit, and anyone can refresh the cache at a fraction of the TTL. In fact, some cache revalidation strategies trigger refreshes at random moments well within the TTL.
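For example, a refresh-ahead schedule with jitter (a sketch only; the draft merely bounds staleness via "ttl" and does not mandate any refresh schedule):

```python
import random

def next_refresh_delay(ttl_seconds: float,
                       early: float = 0.5,
                       late: float = 0.9) -> float:
    """Schedule the next status-list fetch at a random point between
    50% and 90% of the TTL, so the cached copy is refreshed before it
    can go stale and a fleet of verifiers does not hit the status
    endpoint in lockstep."""
    return ttl_seconds * random.uniform(early, late)

# With ttl=300s, the next fetch lands somewhere in the 150-270s window.
delay = next_refresh_delay(300)
```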

There is also a practical limit to how frequently you refresh a token revocation list. Some organizations have a 5-10 min tolerance period for basic, general-purpose access tokens, and fall back to shorter-lived and even one-time access tokens for privileged operations. So if you have privileged operations being allowed when using long-lived tokens, your problem is not the revocation list.


In that case, when and how would you reload the statuslist?

Again, it doesn't matter that TTL and caching are optional; what matters is that this specification has NOTHING to do with a pub/sub-based or push-based mechanism as described by GGGP. This draft specifies a list that can be cached and/or refreshed periodically or on demand. This means that there will always be some specified refresh frequency and you cannot have near-real-time refreshes.

> There is also a practical limit to how frequently you refresh a token revocation list. Some organizations have a 5-10min tolerance period for basic, genera-purpose access tokens, and fall back to shorter-lived and even one-time access tokens for privileged operations. So if you have privileged operations being allowed when using long-lived tokens, your problem is not the revocation list.

That's totally cool. Some organizations are obviously happy with delayed revocations for non-sensitive operations, which they could easily achieve with stateful refresh tokens, without the added complexity of revocation lists. Stateful and revocable refresh tokens are already supported by many OAuth 2.0 implementations such as Keycloak and Auth0[1]. All you have to do is set the access token's TTL to 5-10 minutes and you'll get the same effect as you've described above. The performance characteristics may be worse, but many apps that are happy with delayed revocation are happy with this simple solution.

Unfortunately, there are many products where immediate revocation is required. For instance, administrative dashboards and consoles where most operations are sensitive. You can force a token validity check through an API call for all operations, but that makes stateless access tokens useless.

What the original post above proposed is a common pattern[2] that lets you have the performance characteristics (zero extra latency) of stateless tokens together with the security characteristics of a stateful access token (revocation is registered in near-real-time, usually less than 10 seconds). This approach is supported by WSO2[3], for instance. The statuslist spec does nothing to standardize this approach.

[1] https://auth0.com/docs/secure/tokens/refresh-tokens/revoke-r...

[2] See "Decentralized approach" in https://dzone.com/articles/jwt-token-revocation

[3] https://mg.docs.wso2.com/en/latest/concepts/revoked-tokens/#...


> In that case, when and how would you reload the statuslist?

It only depends on your own requirements. You can easily implement pull-based or push-based approaches if they suit your needs. I know some companies enforce a 10min tolerance on revoked access tokens, and yet some resource servers poll them at a much higher frequency.

> Again, it doesn't matter if TTL and caching is optional (...)

I agree, it doesn't. TTL is not relevant at all. If you go for a pull-based approach, you pick the refresh strategy that suits your needs. TTL means nothing if it's longer than your refresh periods.

> This draft specifies a list that can be cached and/or refreshed periodically or on demand. This means that there will always be some specified refresh frequency and you cannot have near-real-time refreshes.

Yes. You know what makes sense for you. It's not for the standard to specify the max frequency. I mean, do you think the spec should specify max expiry periods for tokens?

Try to think about the problem. What would you do if the standard somehow specified a TTL and it was greater than your personal needs?


For this reason, I use a LaunchDarkly config flag for the revocation list. Updates to the config are pushed to all LD clients in near real time.


Where exactly do people move if they can only choose between Firefox and Chrome? There is not enough competition in the browser market.


First, which DID methods will be successful is a question of time; additionally, your wallet app could support multiple of these DID methods. Second, DIDs and the corresponding keys are supposed to be owned by the user or managed by a platform; any individual can choose whether he wants the convenience of managed keys or full privacy under his own control. Third, you can have a separate DID for every service, and they issue you a login credential for that particular service.

