only for 64-bit, but disclaimers might apply in (32-bit) IoT land:
> Our results suggest that, for current 32-bit architectures, address-space randomization is ineffective against the possibility of generic exploit code for a single flaw; and brute force attacks can be efficient, and hence effective
32-bit IoT usually has a lot of problems that are easier to exploit than brute-forcing ASLR, but yes: if you are able to run attacks quickly, it's pretty easy to get lucky and find the offset. There really isn't much you can do on 32-bit to fix this, unfortunately, unless you start doing fancy things like reordering the functions themselves to increase the number of bits of entropy (and even then, you're not getting much…)
Since IoT is somewhat less tied to existing architectures, it's not completely out of the question that somebody could produce a microcontroller that keeps separate stacks for return addresses and function arguments/results. If the former stack is not directly modifiable, that removes most of the need for ASLR.
A major selling point of most modern microcontroller architectures is that they don't have a separate hardware return-address stack, and are thus "C compatible". Many early microcontrollers had exactly that, as it also typically means a simpler hardware implementation.
"C compatible" strikes me as a very overblown way of describing the consequences of choosing whether to have a separate return address stack. In practice, I don't see it actually amounting to anything more than a need to use privileged instructions to implement setjmp, which is perfectly reasonable for a high-security environment.
And ASLR is bypassable as well if you have other bugs that leak information about the stack layout. Overall it just gets more annoying and more tricky because you need to find more bugs and they need to fit together.
Deterrence is a bit of Lord of the Flies though. I'm making myself less attractive so you go after some other poor SOB.
But with IoT I'm not necessarily after one person. I'm going after a million people who all have the same garbage device. If the payoff is bigger the incentive to keep pushing past obstacles is very high.
The halting problem only tells you that there can be no algorithm to prove this automatically for any given program.
It could still be the case that for all practically relevant programs there is a proof in each case that they are secure, even if there is no automatic way to find it.
That's only if the algorithm would actually halt when it has a condition to do so.
If you use a finite state machine, you can control the states and make mathematical assurances that your program terminates.
The solution to the halting problem is that you work backwards, instead of forwards. You don't let a program run forever and hope for the answer. We also don't give infinite memory to programs to eventually exit/halt or not - my machines have 8 GB and 32 GB of RAM. And the OOM killer does its job rather well.
That does not solve the halting problem. The halting problem is entirely framed around making a program that can specifically compute whether another arbitrary program will return.
Tests work because they are arranged by humans to verify the system under test for validity, but only for one specific system at a time.
The halting problem is basically an admission that one is unable to write a test to verify arbitrary programs on arbitrary, Turing-complete hardware. In short, you must be 10% smarter than the piece of equipment.
Killing something for taking too long or eating too much memory makes no substantive judgement on whether the computation at hand is ready to return or not; you (or your OS) just unilaterally made the decision that the computation wasn't worth continuing.
No. AWS Lambda allows up to 10s of processing before killing and returning a (killed, exceeded time) for the API call.
The halting problem depends on a Turing machine. As stated by Wikipedia, a Turing machine is:
"a mathematical model of computation that defines an abstract machine, which manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, given any computer algorithm, a Turing machine capable of simulating that algorithm's logic can be constructed."
The first description of how to build one is: "The machine operates on an infinite memory tape divided into discrete cells."
Infinite memory? Well, that's already trivial to dismiss then.
And I also said that things that need to be proved should be using finite state machines. Using an FSM means a dimensionality reduction to a model that doesn't suffer from the halting problem. One can make complicated graphs, yet still not introduce the same pitfalls as a Turing machine has.
This is why some scripting and configuration languages are now toying with the notion that "not Turing Complete" is a feature not a limitation of the system.
I'm kinda holding my breath that they turn out to be right.
Maybe, if operating under a lot of restrictions and using theorem proving, but even then the compiler and the OS (and the CPU) also need to be bug-free for complete protection.
Compilers have plenty of bugs, so do operating systems and CPUs as we have seen in recent years.
You can also insert stack-protecting canaries with some compilers (-fstack-protector on GCC). A bit more limited and it incurs a small runtime cost but you only need compiler support.
Stack canaries make these attacks much more annoying, but it’s still possible to perform these if you can overwrite a function pointer or perform a controlled read or write (rather than a buffer overflow).