> No they wouldn't. They still need to turn WebAsm/IR into assembly, which is the thing they already do today anyway. Nothing changes for compilers, other than the potential for optimizations gets much, much worse as the IR is comparatively crippled and restricted to the IR they already have.

Most compilers today maintain separate assembly backends for MIPS, ARM, and x86_64. They could instead compile source to WebAssembly and stop there, leaving the WebAssembly-to-native step to a separate architecture-specific compiler.
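The appeal is the classic M-languages-by-N-targets argument: with a shared intermediate format, each frontend and each native backend is written once. A toy count (the language and target lists are illustrative, not from the thread):

```python
# Without a shared IR: every (language, architecture) pair
# needs its own code generator.
languages = ["C", "Rust", "Go", "Swift"]        # M frontends
targets = ["x86_64", "ARM", "MIPS", "RISC-V"]   # N native targets

direct_backends = len(languages) * len(targets)  # M * N translators

# With WebAssembly in the middle: M frontends emit wasm, and
# N architecture-specific compilers lower wasm to native code.
via_wasm = len(languages) + len(targets)         # M + N translators

print(direct_backends, via_wasm)  # 16 vs. 8
```

The same economics are what motivated LLVM IR inside compiler toolchains; the thread's disagreement is over whether WebAssembly is a good enough IR to play that role across toolchains.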

> This has never been the result of CPU instructions. That's a library problem, not an IR problem. WebAsm does nothing to help with this, particularly as it intentionally has no real standard library to speak of.

If a language targets WebAssembly, then as long as you resolve your libraries within that language, you can deploy to any target that supports WebAssembly. This is pretty much the de facto solution to the library problem in a variety of ecosystems: in Java you build a fat JAR, and in C/C++/Rust you build a statically linked binary.

> WebAsm is an intermediate, not a source. Formally verifying it is about as useful as formally verifying assembly. Which is to say, not useful at all. That doesn't help you verify anything about your code, which was a compiler, optimizer, and god knows what else away from the webasm that was generated.

Are you familiar with binary analysis? Analyzing and verifying properties of compiled code, without the source, is an established practice.



But why use WebAsm for that when LLVM already does it better?


I'm not sure I understand: WebAssembly is the output of a (hopefully) optimizing compiler, and LLVM is one such compiler backend. If you use WebAssembly today, you are probably already going through LLVM.

Perhaps you meant: why not use LLVM IR instead of WebAssembly? If so, allow me to refer you to this comment[1] (from a bit further down in this thread).

[1]: https://news.ycombinator.com/item?id=16586239



