I completely agree with everything you said, and I know a bit about OIDC too, but since I'm far from mobile/third-party app development, there's one thing I don't understand: client credentials (confidential client) in the case of a mobile app installed from a marketplace. How is that done?
You'd need dynamic client registration, right? I know there's a spec for that, and I think I understand the mechanics. That would let you identify the client, but I'm not sure you can ever identify the app developer with it (if that's needed for audit purposes). Or am I missing something?
I think for that you'd need even more robust client authentication: for example, letting a developer submit a CSR for their own intermediate CA, which can then sign CSRs generated by each app installation. The chain can be traced from an individual app installation, through the developer's CA, back to a trusted internal root certificate.
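The trust-chain walk described above can be sketched as a toy model (this is NOT real X.509 verification — no signatures are checked, the `Cert` records and names are hypothetical — it only illustrates tracing issuer links from an installation cert back to the internal root; a real system would use a proper PKI library):

```python
# Toy model of tracing trust: app-installation cert -> developer CA -> internal root.
# No cryptographic verification here, just the issuer-chain walk.
from dataclasses import dataclass

@dataclass(frozen=True)
class Cert:
    subject: str
    issuer: str  # subject of the cert that signed this one

def chain_to_root(leaf: Cert, store: dict, root_subject: str) -> list:
    """Walk issuer links from an installation cert up to the trusted root."""
    chain = [leaf.subject]
    current = leaf
    while current.subject != root_subject:
        issuer = store.get(current.issuer)
        if issuer is None:
            raise ValueError(f"broken chain at {current.subject!r}")
        chain.append(issuer.subject)
        current = issuer
    return chain

root = Cert("internal-root", "internal-root")     # self-signed trust anchor
dev_ca = Cert("dev-ca", "internal-root")          # developer's intermediate CA
install = Cert("app-install-42", "dev-ca")        # per-installation cert
store = {c.subject: c for c in (root, dev_ca, install)}

print(chain_to_root(install, store, "internal-root"))
# -> ['app-install-42', 'dev-ca', 'internal-root']
```

The point of the model: revoking or expiring the developer's CA cert cuts off every installation cert beneath it, which is what gives the developer (and the service) control over installation lifetime.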
That lets developers maintain key confidentiality (devs keep their CA private keys) and maintain control over the app installations' access to signed certs (as well as cert lifetime).
Even if it's not a full CA, OIDC has some brief words on client authentication by signing a JWT with a registered keypair, which gives a similar, though less robust, way to keep the private key secret.
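That keypair-based method amounts to the client signing a short-lived JWT assertion and presenting it at the token endpoint. Here's a minimal sketch of the assertion structure, with some caveats: it uses stdlib-only HMAC (HS256) purely as a stand-in for the asymmetric signature (the real mechanism signs with the client's registered private key, e.g. RS256/ES256), and the client ID, endpoint URL, and key are placeholders:

```python
import base64, hashlib, hmac, json, time, uuid

def b64url(data: bytes) -> str:
    """Base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def client_assertion(client_id: str, token_endpoint: str, key: bytes) -> str:
    """Build a JWT client assertion.

    NOTE: HS256 here is only a stdlib stand-in to show the structure;
    the OIDC keypair method signs with the client's private key instead.
    """
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    claims = {
        "iss": client_id,          # issuer: the client itself
        "sub": client_id,          # subject: the client itself
        "aud": token_endpoint,     # audience: the authorization server
        "jti": str(uuid.uuid4()),  # unique ID so the AS can reject replays
        "iat": now,
        "exp": now + 300,          # short-lived on purpose
    }
    signing_input = (
        b64url(json.dumps(header).encode())
        + "."
        + b64url(json.dumps(claims).encode())
    )
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

jwt = client_assertion("my-client", "https://as.example.com/token", b"demo-key")
print(jwt.count("."))  # three dot-separated JWT segments -> prints 2
```

The "less robust" part the comment mentions is real: unlike a CA hierarchy, there's no chain and no per-installation cert, just one registered key per client.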
No matter what, any of these scenarios still involves figuring out how to trust that the installed app is authorized by both the resource owner and the client developer to obtain a signed cert/token (thus shifting real financial liability onto them in OP's scenario). Which probably means requiring the end user to register for your service as well, validating the user again rather than the app.
The fundamental fact remains that the human mind is the only truly secret place, which is why passwords aren't going anywhere, and why DRM solutions have to rely on making it illegal to attempt to obtain the decryption key embedded in the device, or on making attempted recovery involve physical destruction of the key.
Yeah, I was thinking the same. But then I saw that the OP mentioned somewhere in the comments below that he's only thinking about a server-to-server scenario (2LO/client credentials); it was the comments above discussing fake login UIs that confused me into thinking this was about the 3-legged flow.
I had the same questions, and it's very hard to find the answer - it took me a very long time to piece this together, but this is how Google does it:
1) You create a "normal" client in the Google Developer console (i.e. a web client)
2) You create a native/Android client in the same project. This client is shared across all phones.
3) You add a scope of audience:server:client_id:$NORMAL_CLIENT_ID to auth requests from the mobile client.
4) You get back a token minted for the web client, from the native client!
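Step 3 above can be sketched as building the authorization request URL from the native client (the client IDs are placeholders; the `audience:server:client_id:` scope prefix is the Google-specific piece described in these steps):

```python
from urllib.parse import urlencode

# Placeholder client IDs for the two clients created in steps 1 and 2.
NORMAL_CLIENT_ID = "1234-web.apps.googleusercontent.com"     # web client
NATIVE_CLIENT_ID = "1234-native.apps.googleusercontent.com"  # native/Android client

params = {
    "client_id": NATIVE_CLIENT_ID,            # the request comes from the native client
    "redirect_uri": "http://localhost:8080",  # loopback redirect, per the mobile rules
    "response_type": "code",
    "scope": " ".join([
        "openid",
        "email",
        # Cross-client scope: ask for a token minted for the web client.
        "audience:server:client_id:" + NORMAL_CLIENT_ID,
    ]),
}

auth_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
print(auth_url)
```

The result of completing this flow is the surprising part of step 4: the code/token that comes back is bound to the web client's ID even though the native client initiated the request.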
The reason it's safe is that you can only do the cross-client stuff from a mobile client, which disallows any redirect URLs except localhost and a couple of other special URIs (see https://developers.google.com/identity/protocols/OAuth2Insta...)
It's ok that the secret is not really secret, because it can't be used to build a phishing site: the redirect URL is localhost.
I guess that doesn't answer your "how does it identify the app developer" question, but it does tell you how these things are deployed, and the important fact that there's just one client (not one per device).
I understand that. The problem is that I can "steal" another dev's app client_id and use it in my app, so it seems impossible to use such a client_id for auditing/evidence. With a web client I can't do that, since I don't own the domain, so I can be proven to be a party in some transaction.
They should allow for push notifications. That'd be more secure
At the end of the day, though, everyone has to sign their apps with certs that are pretty well validated, so it really cuts down on the kind of funny business you mention.