This is a very long-winded way of saying that libraries that are loaded via @rpath are vulnerable to having a malicious library inserted earlier in the rpath order than the actual library it wants to load.
But so what? That's not at all interesting. The given examples involve inserting the library into the application bundle (because the examples of @rpath-dependent loads are for frameworks embedded within the application). This is trivially detectable for any codesigned application (as the inserted library won't be part of the code signature), and for non-codesigned apps it should be just as easy to detect this as it would be to detect an attacker that simply swaps out the expected library with a new one.
Performing this sort of attack also likely requires administrator privileges, and an attacker that can get administrator privileges can perform all sorts of mischief on the system.
This also doesn't subvert the app store sandbox in any way, because malicious code injected into an application is still subject to the same sandbox restrictions that the application is normally subject to.
Their "fake" installer is a copy of an existing codesigned app (Instruments.app) and a directory structure that causes the app to load a library from outside its app bundle when launched.
In this case, since it's loading a library outside of the app bundle, the code signature is intact. But beyond that, it seems no different than just writing an application that explicitly loads something from outside its bundle. The only benefit here is it can use someone else's codesigned app instead of having to get a certificate to codesign the app themselves. But that doesn't seem particularly meaningful.
Assuming the library itself isn't codesigned, this does suggest an issue with Gatekeeper, wherein it doesn't validate all the libraries that are loaded by the application. If that's true, it seems like something that should be relatively easy for Apple to fix without affecting legitimate functionality (because any codesigned app has no need to load non-codesigned libraries).
The results are also wildly overblown. The entire attack requires the user to open a downloaded DMG and run a contained app without copying the app. This is not something people do very often, and if this really is a serious threat, then a simple rule of "don't launch apps from DMGs" suffices (merely moving the app out of the DMG neuters the attack).
As far as I can tell, this is not a particularly interesting finding.
Basically, if an application tries to load a library and it isn't there, you can create the library and it will load it.
On Windows, this may be a vulnerability because the program will look in the application directory and current directory for the library, which may not be secured locations.
On Linux, and I believe OSX too, the program will only look in system-specified locations (/lib, /usr/lib, etc.), which should be secured at the same or a higher level than the applications themselves. If you have write access there, you can just overwrite the executable.
The paper points out that security software may inspect the binary or its signature. However, such security software must also inspect the libraries; otherwise, one could simply modify a library (e.g. libc) to accomplish the same task with less effort. If the security software is going to inspect the libraries anyways, it can check whether any unauthorized libraries are being loaded.
OSX will verify that a bundle hasn't been tampered with, but (apparently) will automatically scan a folder next to the bundle for shared libraries.
This bypasses a number of security features (code signing, the Gatekeeper warning, network firewalls) and doesn't require the user to click any unusual buttons or type LD_PRELOAD into a terminal.
Note that you were always able to easily inject or specify which dylibs were loaded when invoking an executable. However, the OP describes a way of doing this that is difficult (perhaps impossible in the general case?) to detect.
Second, this would probably be the most effective attack against the sandbox of the Mac App Store. It effectively allows injection of arbitrary code into an arbitrary child process.
Third, to my understanding, this does not allow privilege escalation (e.g. no root), because the injected code is bound by the privileges of the process it is loaded into.
Honestly, though, this shouldn't be too difficult to patch.
There are some mitigations for this already built in. "Sensitive" processes are disallowed from linking libraries relative to @rpath and friends. Excerpt from dyld.cpp:
    else if (sProcessIsRestricted && (path[0] != '/' )) {
        throwf("unsafe use of relative rpath %s in %s with restricted binary", path, context.origin);
    }
A cursory glance suggests that sProcessIsRestricted is true for setuid binaries and processes with restricted entitlements. Which makes sense: these would otherwise be privilege escalation vectors.