One thing that continues to amaze me is that WebAssembly isn't being discussed more outside of the context of the web. Think about it just as a format that (a) is low-level enough to support performance tricks, (b) is fast to turn into native code, and (c) is easy to check.
Compilers could stop worrying about obscure/old architectures. Deploying an application onto multiple platforms is no longer a problem. Sandboxing is much simpler. Formal verification becomes possible (the WebAssembly spec actually reads like a spec, unlike the C standard which reads more like a religious text).
I'm so excited about WebAssembly, but really not because of the web.
> Compilers could stop worrying about obscure/old architectures.
No, they wouldn't. They still need to turn WebAsm/IR into assembly, which is the thing they already do today anyway. Nothing changes for compilers, other than that the potential for optimizations gets much, much worse, as WebAsm is comparatively crippled and restricted next to the IRs they already have.
> Deploying an application onto multiple platforms is no longer a problem.
This has never been the result of CPU instructions. That's a library problem, not an IR problem. WebAsm does nothing to help with this, particularly as it intentionally has no real standard library to speak of.
Or put another way every compiled program is already on a perfectly portable IR called x86_64. Runs on just about every desktop, laptop, and nearly every server in the world. Yet good luck writing a portable "hello world" in it.
It marginally reduces your release artifacts, as you only produce a single webasm build instead of x86, x86_64, armv7, and armv8 builds, but using webasm instead comes at non-trivial costs, too. Instead of compiling once on a known toolchain, you're now compiling millions of times on uncontrolled, unknown toolchains. That's not a great trade-off in many, if not most, circumstances.
> Sandboxing is much simpler.
Sandboxing is already a solved problem using process isolation, which has the nice property of not caring how your process runs at all. What benefit does WebAsm add to this?
> Formal verification becomes possible (the WebAssembly spec actually reads like a spec, unlike the C standard which reads more like a religious text).
WebAsm is an intermediate, not a source. Formally verifying it is about as useful as formally verifying assembly. Which is to say, not useful at all. That doesn't help you verify anything about your code, which was a compiler, optimizer, and god knows what else away from the webasm that was generated.
It's good that the spec is actually a spec, but this isn't a unique trait to webasm and it won't help your code any since your code isn't in webasm. It's still in C/C++, Rust, or whatever else and they all remain just as verifiable (or not) as they always were.
> Or put another way every compiled program is already on a perfectly portable IR called x86_64. Runs on just about every desktop, laptop, and nearly every server in the world.
But not mobile.
(Besides, I'm also somewhat uneasy with accepting that the computing lingua franca will forever be proprietary to Intel, covered by innumerable patents, and backwards compatible to 1978.)
> Sandboxing is already a solved problem using process isolation
If only it were. Process isolation does not protect against kernel attacks or attacks against whatever IPC mechanism you use to call out to the privileged broker process.
> Formally verifying it is about as useful as formally verifying assembly. Which is to say, not useful at all.
You can verify memory safety of the compiled code, which is a useful and important property.
(I think nobody seriously doubts that Web Assembly is memory safe assuming a bounds checked heap, though, so it's not that practically interesting of a result. Maybe it'll be more so when GC support lands.)
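Concretely, the memory-safety property in question is the guarantee that every linear-memory access is bounds-checked. A minimal sketch of what an engine enforces (class and method names here are illustrative, not taken from any real engine — production engines typically use guard pages rather than an explicit branch, but the enforced property is the same):

```typescript
// Illustrative sketch of the bounds check a WASM engine performs on every
// linear-memory access. Any access past the end of memory traps (throws)
// instead of touching host memory.
class LinearMemory {
  private bytes: Uint8Array;

  constructor(pages: number) {
    // WASM linear memory grows in 64 KiB pages.
    this.bytes = new Uint8Array(pages * 65536);
  }

  // i32.load: trap on any out-of-bounds access.
  load32(addr: number): number {
    if (addr + 4 > this.bytes.length) {
      throw new RangeError("out of bounds memory access"); // the "trap"
    }
    return (
      this.bytes[addr] |
      (this.bytes[addr + 1] << 8) |
      (this.bytes[addr + 2] << 16) |
      (this.bytes[addr + 3] << 24)
    );
  }

  // i32.store: same check on the write path.
  store32(addr: number, value: number): void {
    if (addr + 4 > this.bytes.length) {
      throw new RangeError("out of bounds memory access");
    }
    this.bytes[addr] = value & 0xff;
    this.bytes[addr + 1] = (value >>> 8) & 0xff;
    this.bytes[addr + 2] = (value >>> 16) & 0xff;
    this.bytes[addr + 3] = (value >>> 24) & 0xff;
  }
}
```

The point of verifying the spec is proving that no sequence of instructions can bypass this check, not that the check itself is clever.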
> (Besides, I'm also somewhat uneasy with accepting that the computing lingua franca will forever be proprietary to Intel, covered by innumerable patents, and backwards compatible to 1978.)
Well, x86_64 is actually AMD's creation not Intel's. But replace x86_64 with ARM on mobile and it's the same thing - a portable IR/instruction set does not result in a portable application.
> Process isolation does not protect against kernel attacks or attacks against whatever IPC mechanism you use to call out to the privileged broker process.
Neither does webasm, other than by just not having any features currently. But that's obviously not a viable long-term strategy, certainly not for anything standalone-ish.
> You can verify memory safety of the compiled code, which is a useful and important property.
My native code is perfectly memory safe as well, enforced in hardware even. Has been that way for decades.
Of course that's not what anyone actually means by a "memory safe" language, but as soon as you go plop malloc/free on top of a single webasm allocation you're back to all the same memory-unsafeness of C despite the "memory safe" claims of webasm anyway. A memory safe IR is meaningless outside of the context of being embedded in another process. Aka, when used in a web browser. Or I guess as a massively overcomplicated replacement for Lua.
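The failure mode being described — a user-level allocator inside one flat linear allocation reintroducing C-style heap corruption — can be sketched without any WASM at all. The bump allocator below is a hypothetical stand-in for malloc implemented on top of linear memory:

```typescript
// Hypothetical bump allocator over one flat buffer, standing in for
// malloc/free built on top of a single WASM linear memory. WASM only
// bounds-checks the *outer* buffer; overruns between allocations
// inside it go completely undetected.
const heap = new Uint8Array(1024); // the "linear memory"
let brk = 0;

function alloc(size: number): number {
  const addr = brk;
  brk += size;
  return addr; // no per-allocation bounds, just like real malloc
}

const a = alloc(8); // e.g. a string buffer
const b = alloc(8); // an adjacent, unrelated allocation

// Write 12 bytes into the 8-byte allocation `a`: this stays inside the
// linear memory, so nothing traps — but it silently corrupts `b`.
for (let i = 0; i < 12; i++) heap[a + i] = 0xff;

console.log(heap[b]); // 255 (0xff): neighbor corrupted, no trap raised
```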
But the point is verifying the resulting webasm doesn't mean your code was correct, it means your code plus current toolchain selection happened to result in verified code. Whether or not that was the result of a fluke or well defined behavior is not something webasm has any impact on.
> Of course that's not what anyone actually means by a "memory safe" language, but as soon as you go plop malloc/free on top of a single webasm allocation you're back to all the same memory-unsafeness of C despite the "memory safe" claims of webasm anyway.
That's not true. Web Assembly semantics don't allow interpreting data as code or explicitly messing with the machine stack. This guarantees control flow integrity, preventing problems like ROP or traditional buffer overflows.
While webasm isn't vulnerable to traditional ROP attacks, it also pays at least as much of a performance penalty as native code hardened with G-Free-style indirect call/return protections.
Memory protection in terms of r/w/x has been solved for at least three decades now. It would be absolutely trivial to enforce via binary/IR distribution.
At the end of the day webasm is just a shitty IR that only exists because it's the path of least resistance on the web. There's really no point in going from lang -> llvm -> webasm -> llvm -> exec when you can just go from lang -> llvm -> asm.
However, it only partially addresses JOP (jump-oriented programming, i.e. hijacking calls to function pointers and virtual methods). And there are CFI designs that provide equivalent or better guarantees for native code, such as Clang CFI + SafeStack. In fact, I expect Clang CFI to be more widely adopted in the future… However, the main obstacle to increased adoption has always been overhead, yet WebAssembly has significantly higher overhead.
edit: not to mention that WebAssembly is currently missing security mitigations that long have been standard in native code, such as ASLR, and… maybe stack overflow protection? (It looks like emscripten handles the latter manually by checking STACKTOP against STACK_MAX, but I'm not sure LLVM's native WebAssembly target does.) Maybe these will be addressed in the future, but for now there are some interesting exploitation opportunities.
> However, the main obstacle to increased adoption has always been overhead, yet WebAssembly has significantly higher overhead.
But you get a lot more than security for your trouble using Web Assembly. So the performance-vs.-security tradeoff isn't the only part of the calculus here.
> And there are CFI designs that provide equivalent or better guarantees for native code, such as Clang CFI + SafeStack.
"No. We are currently looking at other alternatives (all look grim, though).
Before trying to proceed with SafeStack please get the agreement from
security folks, since SafeStack doesn't actually sounds too secure any more :("
Indirect calls as opposed to what? C++ virtual method calls are supported by Clang CFI. Direct calls are always safe because the destination address is fixed. (That is, unless you mess with the PLT, but that's what RELRO is for.)
Not sure what's up with SafeStack - though I bet it has to do with more hardware timing attacks, in this case to leak the address. The whole design is a bit of a hack since the only thing preventing the attacker from accessing the safe stack is their (theoretical) inability to guess the address. If only x86-64 hadn't gotten rid of segmentation, so normal memory accesses and stack accesses could actually use entirely separate memory regions… On the other hand, Intel CET should allow for some subset of that functionality on future hardware.
But again, to be fair, one should note that "grim" has a different meaning when the budget for acceptable performance loss is perhaps 1-5%, not 30-50% :P
> No they wouldn't. They still need to turn WebAsm/IR into assembly, which is the thing they already do today anyway. Nothing changes for compilers, other than the potential for optimizations gets much, much worse as the IR is comparatively crippled and restricted to the IR they already have.
Most compilers today have separate assembly generation for MIPS, ARM, and x86_64. They could turn source into WebAssembly and stop there (the job of WebAssembly -> native is left to some other architecture-specific compiler).
> This has never been the result of CPU instructions. That's a library problem, not an IR problem. WebAsm does nothing to help with this, particularly as it intentionally has no real standard library to speak of.
If any one language targets WebAssembly, then as long as you resolve your libraries within that language, you'll be able to deploy to any target that supports WebAssembly. This is pretty much the de facto solution to the library problem in a variety of ecosystems: in Java you'll make a fat JAR, and in C/C++/Rust you'll make a statically linked binary.
> WebAsm is an intermediate, not a source. Formally verifying it is about as useful as formally verifying assembly. Which is to say, not useful at all. That doesn't help you verify anything about your code, which was a compiler, optimizer, and god knows what else away from the webasm that was generated.
I'm not sure I understand: WebAssembly is the output of a (hopefully) optimizing compiler. LLVM is such a compiler backend. If you use WebAssembly today, you are probably going through LLVM.
Perhaps you meant: why not use LLVM IR instead of WebAssembly? If so, allow me to refer you to this comment[1] (from a bit further down in this thread).
> Nothing changes for compilers, other than the potential for optimizations gets much, much worse as the IR is comparatively crippled and restricted to the IR they already have.
The performance of an application that doesn't work at all is worse than that of a crippled one.
> I'm so excited about WebAssembly, but really not because of the web.
You and me both. I wrote a backend for the JVM [0] and there is one in development targeting native [1] (in Rust, using an interesting alt-llvm project [2]). I suspect where you'll start to see this really shine is when a popular language targets WASM as its primary target and lets second-level backends compile to a specific arch. The other problem is interoperability. Until host bindings come along and the community standardizes on things like string representation and syscalls, many of the uses are siloed by the frontend compilers. I think we can do better than libc+POSIX.
It's a true open standard developed collaboratively by multiple competing top-tier companies, has multiple interoperable implementations, has been designed from the ground up for running untrusted code in a sandboxed environment, has been designed to be a general-purpose compilation target rather than with tight coupling to a specific source language, has enough killer apps that it will likely continue to be developed and supported by these multiple companies, and it was developed with the hindsight of many of these previous failed or only partially successful efforts.
Some of these previous efforts might have one or two of these features, but none of them meet all of these criteria.
What other low-level format supports my points (a), (b), and (c)? And has an actual spec?
EDIT: you've since added some examples. Here are my (very subjective) opinions:
- The JVM isn't really all that low-level - it ekes out a lot of performance at runtime. Plus, you need to have GC, which tends to increase memory requirements and complicate real-time constraints.
- NaCL was interesting but its spec wasn't as good. IIRC Google didn't do too much in the way of asking for public input. I think some folks had some security concerns too. I really like WebAssembly's spec - I don't see any typing judgements or small-step rules for NaCL.
Yes, but you're still bound to Java semantics. Java is very high-level. Almost everything except the blessed primitives long/int/short/byte is an object. This is not a suitable target for lower-level languages like C.
While some of those were open standards, most/all of those only ever had a single implementation each. JVM's startup time is painful, and its sandboxing abilities have been frequently broken. Wasn't Microsoft p-code interpreted, or are you referring to something else? NaCL was still architecture-specific until PNaCL. PNaCL is pretty comparable to WebAssembly besides that PNaCL was driven by Google alone, and WebAssembly was worked on by many players after it evolved from asm.js.
* ANDF/TenDRA - Never really used except for research purposes, ie. no critical mass.
* JVM - Most heavily used implementation borked the sandboxing. It also forces a GC down your throat, which is great for 95% of use cases, but the other 5% are really important too.
* NaCl - Really cool, but ultimately reliant on weird CPU features for protection. PNaCl was trying to fit a square peg into a round hole with the whole trying to make LLVM IR platform independent thing. WebAsm took a lot of ideas from these projects (and asm.js).
* P-Code - Really cool, but ultimately probably a victim of its time. A little simplistic for modern architectures, as it has no real concept of a memory model or atomic primitives.
* TIMI - Unbelievably proprietary. Even if you were to document it well enough to build an implementation, IBM would almost certainly sue you into oblivion.
Yeah, the social factor is way more important. We might actually get WORA that can't be blocked by a walled garden, because it would be a suicidal move for whichever company does it.
it's not just the social factor, it's that there's an immediate benefit to using it - your code can be run in a web browser. that solves the bootstrapping problem with your shiny new WORA bytecode spec being unattractive until there are already a bunch of implementations, but there won't be a bunch of implementations until people are actively using it.
We've added Lua scripting into the Ceph distributed object storage system that lets you remotely compute on objects, or create I/O interfaces with new semantics [0]. I've been excited about WebAssembly as a replacement since I learned about it simply because LuaJIT can be a bit of a pain, and because the cross-compiling toolchains already exist that let me get a lot more existing code into a form that can be shipped off to run remotely.
As others have pointed out, platform independent IL is nothing new. There's nothing exciting about WebAssembly outside of the web that hasn't already been done with Java (or countless other technologies).
The true advantage of WebAssembly is that it's so sandboxed that it literally can't do anything on its own. It can only do something when hosted in a browser, where it can use JavaScript to interact with the outside world.
>There's nothing exciting about WebAssembly outside of the web that hasn't already been done with Java (or countless other technologies).
This is completely wrong. The JVM works at an extremely high level. Everything is a garbage collected JVM object. You will never see raw memory at the bytecode layer.
WebAssembly gives you access to the heap and stack at byte granularity. You have to bring your own memory allocator or garbage collector. It's in the damn name: "WebAssembly"
The closest equivalent is NaCL but Google didn't really put a lot of effort into pushing it for wide adoption. If I recall correctly it is also based on LLVM IR and LLVM IR isn't known for maintaining backwards compatibility which turns it into a dead end. Of course if LLVM IR was stable then WebAssembly would be redundant.
We're not living in that world. We're living in a world where WebAssembly is winning not only because of popularity but also because of strong technical fundamentals.
Honestly it seems like HN is blind to fundamentals and everything new is just some hyped up useless piece of shit.
In JS land new = always good. On HN new = always bad.
There is no middle ground. Both sides are equally bad.
> This is completely wrong. The JVM works at an extremely high level.
I'll give you that; it was just one example. There have been plenty of others in history; Pascal P-code is closer to WebAssembly, and that's from the '70s! The concept of a portable assembly language is neither new nor interesting. WebAssembly is just another compiler target -- if you can compile to it, you can compile to every native CPU directly. That's just not that interesting as an intermediate form. Java was slightly more interesting, as they abstracted the entire platform, not just the CPU.
I stand by my statement that WebAssembly is only valuable because it's sandboxed in the browser.
> Honestly it seems like HN is blind to fundamentals and everything new is just some hyped up useless piece of shit.
Maybe being blind to the fundamentals is not knowing the 50 years of technology that has already been done. Remixing old technology in new ways is valuable and in this case remixing portable assembly with the browser sandbox is the cool part.
The JVM also forces one to adopt the JVM's high-level object model and use the JVM's garbage collector. Plenty of languages have a large semantic mismatch with Java, and shoehorning them onto the JVM is cumbersome.
But so far, all those frankenstein hacks still had better interoperability between guest language and java than what you get between the natural peers of js and wasm. It's too early to point fingers that way.
I think someone in a chatroom I'm in is working on something like this; embed a WASM runtime in a fat ELF (or PE) that will also load a runtime from disk if it's newer and then compile/JIT the included WASM code.
IMO WASM can probably learn from the JVM mistakes a lot, especially when the runtime gets baked in so you don't need it on your computer just to run it (looking hard at you JVM, don't try to hide!)
Well maybe it's because these existing things do completely different things and are completely inadequate?
>What about the JVM?
too high level
>What about llvm IR and bitcode?
no backwards compatibility
>What about .net/CLR?
too high level
What the majority of people here do not comprehend is that if you look at each individual feature (low level, stable), each of them is nothing new. It's the combination of these existing features (low level and stable) that existing technologies simply don't offer.
LLVM had almost two decades to stabilise their IR and yet they didn't so someone else has to step up and fix what they failed to do.
not exactly, the idea is to have a normal cpu like arm or x86 but the kernel doesn't ever deal with native binaries, only web assembly, and the performance penalty of running stuff in a VM gets offset by disabling memory protection and its performance penalty, since memory protection is already guaranteed by the VM.
> One thing that continues to amaze me is that WebAssembly isn't being discussed more outside of the context of the web.
Probably because WASM is a runtime blackbox locked inside a web page running in a browser. There is a huge amount of overhead to that and the ability to interact with anything external to the blackbox is severely limited.
I get that you are excited that WASM is a compile target for many languages, but it comes with a heavy dose of isolation and performance costs.
Because a web browser is essentially an operating system running many internal applications. Simply open an empty browser with a single tab and watch how much memory it consumes. Compare that with a new command shell.
In this context, "native" and "machine-language" are synonymous, and V8/Node do in fact generate native code/machine-language from WASM. The GP did correctly answer the OP's question, but it's not clear if the OP had a more specific question in mind.
The phrase you're looking for is "Ahead-of-Time" or "AoT" compiler. It's not clear if the OP was specifically looking for an AoT compiler, or if a JIT suffices for her/his use case.
I don't really think there is any demand for that. The generated WASM code is already optimized. The JIT is basically a very fast AOT compiler. The only advantage you're gaining is faster startup time and that could perhaps be achieved by caching the generated machine code on disk.
WebAssembly is a dead end. I don't get the hype nor the interest. People tend to forget the huge number of systems out there with tons of active development in C and early versions of C++. Those systems are what Rust would be good for, not "WebAssembly". The vast majority of workhorse code is on the backend, not the front end and UI. I think the only people interested in WebAssembly are former PHP/JavaScript coders wanting to actually make their code something that runs properly rather than crawls.
A friend was recently trying to use WebAssembly from Rust and ran into an issue where he couldn't use anything that depended on Rust's random number generation, because it depended on a nanosecond-resolution clock from the system. With problems like Meltdown/Spectre, how could this even be resolved in the future? Would anything that depended on that need to use a random function from JS instead?
> This is because JS is a good choice for most things.
It really isn't, though. Maybe less bad with ES6. I feel like this was thrown in to mollify the scores of people who <strike>wasted their lives with</strike> invested a lot into javascript.
I know the unfortunate story of how javascript came to be the "web language". It's a shame even a tiny bit more thought didn't go into it.
> One of the hardest parts of working with WebAssembly is getting different kinds of values into and out of functions. That’s because WebAssembly currently only has two types: integers and floating point numbers.
So... The only overlap between Javascript and webassembly is floating point numbers? Are they the same type of floating point?
And is the conversion unicode-string > js float > ws int - or something else?
JS doesn't have integers, but its Number is really just a double, which has 53 bits of integer precision (a 52-bit stored mantissa plus an implicit leading bit), meaning it can hold any 8-, 16-, or 32-bit integer exactly. Only 64-bit integers can't always be stored in a double, which is why the WASM->JS interface doesn't allow them.
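The cut-off is exactly where IEEE 754 predicts: every integer up to 2^53 round-trips through a double exactly, and above that adjacent integers collapse onto the same value. Demonstrable in a couple of lines:

```typescript
// A JS/TS Number is an IEEE 754 double: 53 bits of integer precision.
const maxSafe = Number.MAX_SAFE_INTEGER; // 2^53 - 1 = 9007199254740991

// Every 32-bit integer fits exactly...
console.log(2 ** 32 - 1 === 4294967295); // true

// ...but beyond 2^53 adjacent integers collapse onto the same double,
// which is why the WASM<->JS boundary can't pass i64 values losslessly.
console.log(2 ** 53 + 1 === 2 ** 53); // true — precision already lost
console.log(maxSafe + 1 === maxSafe + 2); // true
```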
The article talks about being able to integrate into the JavaScript "ecosystem," but for me the excitement of WebAssembly is that I can avoid JavaScript and an object system that is fundamentally unsuitable for anything other than static word-processor-like documents. (Which, to be fair, is what HTML was designed to be.)
We know how to write a UI library; I think the basic components are pretty well-defined. Qt and NeXTSTEP (surviving to some extent in Cocoa) are well-regarded. Java Swing is okay. So if we could get a real UI library for use in the browser, instead of a JavaScript framework poorly re-inventing it, that would be wonderful. And, best of all, it doesn't have to be in JavaScript!
With WebAssembly you don't have to write a lick of JavaScript to build a GUI in some other language that uses a completely accessible DOM tree under the hood, provided your library provides the glue (like the Rust library Yew).
There's no reason one couldn't write a UI framework like the ones mentioned by the parent that uses the DOM under the covers.
I think bypassing the entire DOM would be ultimately a good thing. There's no real choice in webdev; we are all shackled to a rickety and cobbled together mountain of legacy code.
Browsers should always have been a well defined VM with everything else built on top. I'm hoping we can get away from being shackled to what Microsoft, Google, and even Mozilla think is necessary and important for developing software.
What we'll lose of course is interoperability. But I think it will be worth it.
Unless you're porting old applications you should always use the native UI toolkit of the platform you're developing for. In this case the UI toolkit is the DOM.
>> One big 2018 goal for the Rust community is to become a web language.
Without DOM access I fail to see how that would happen. So until WASM gets DOM access, no other language (except JS) can become a (first-class) web language.
You can access the DOM, but you have to call into JS, which adds some overhead. When people say “DOM access,” they mean native, not through JS. Check out the stdweb crate.
Even then, there's tons of logic you want to execute in the browser that isn't related to UI. For example, any sort of number crunching. That stuff is now feasible, which opens up all kinds of applications.
To add a specific example here, I've recently compiled a roguelike I'm working on to WASM, not knowing what to expect when I started.
This required only a handful of changes across the codebase (mostly handling external resources such as the filesystem and the random generator) and then implementing a bit of JavaScript that took the drawcalls from the WASM core and rendered them on the canvas. And vice-versa, a bit of JS that passed the browser input into the WASM game.
Since the original game had no concept of a DOM, it didn't matter to it. I think WASM can be huge for browser games.
But like Steve said, you can call any JavaScript function and manipulate the DOM that way. At least for Rust, there's also a library that does this for you. So you can have Rust code that uses a DOM API and the WASM <-> JS bridge is handled for you:
Then applications in WASM will have to reduce the number of these calls until DOM access becomes native.
This can be easily achieved by buffering calls to the DOM and merging batches of actions together (you could probably even divide DOM actions into priority queues to improve reaction time at expense of throughput in less important UI elements)
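That batching can be sketched in a few lines. Nothing here is a real binding API — `Mutation`, `queueMutation`, and `flush` are illustrative names, and in a browser you would trigger the flush from requestAnimationFrame:

```typescript
// Minimal sketch of batching boundary-crossing DOM calls: queue mutations
// cheaply on one side of the WASM<->JS bridge, then cross the bridge once
// to apply the whole batch.
type Mutation = { target: string; op: string; value: string };

const queue: Mutation[] = [];

function queueMutation(m: Mutation): void {
  queue.push(m); // cheap: no bridge crossing yet
}

// One expensive crossing applies the entire batch at once.
// Returns the number of mutations sent in this single call.
function flush(apply: (batch: Mutation[]) => void): number {
  const batch = queue.splice(0, queue.length);
  if (batch.length > 0) apply(batch);
  return batch.length;
}
```

The priority-queue variant mentioned above would simply keep several such queues and flush the high-priority one more often.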
> For example, any sort of number crunching. That stuff is now feasible, which opens up all kinds of applications.
Not really. You're still hamstrung by either being single threaded & time sliced with the UI thread (huzzah for cooperative multitasking still somehow not being dead), or you need to jump through the webworkers hoop if you're lucky enough to have a problem & dataset that's compatible with webworker's rather severe limitations.
webasm claims to have threading support on the roadmap but until that happens anything truly involving heavy number crunching is still largely infeasible. And with SharedArrayBuffer being killed by spectre there's a lot of rather major unknowns to deal with.
Virtual DOM springs to mind. React/Vue.js/Ember all implement a lightweight facsimile of the DOM used to support DOM diffing, which could be ported to WASM. Here's an example of someone who has written one already: https://github.com/mbasso/asm-dom
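A virtual-DOM diff is also exactly the kind of pure tree-walking code that ports cleanly, since it never touches the real DOM. A toy version (the node and patch shapes are illustrative, far simpler than what any real library uses):

```typescript
// Toy virtual-DOM diff: walk two trees in parallel and emit patch
// operations describing the minimal changes, addressed by path.
type VNode = { tag: string; text?: string; children?: VNode[] };
type Patch = { path: string; kind: "replace" | "setText" };

function diff(oldV: VNode, newV: VNode, path = "0", out: Patch[] = []): Patch[] {
  if (oldV.tag !== newV.tag) {
    out.push({ path, kind: "replace" }); // different element: replace subtree
    return out;
  }
  if (oldV.text !== newV.text) {
    out.push({ path, kind: "setText" }); // same element, changed text
  }
  const oldKids = oldV.children ?? [];
  const newKids = newV.children ?? [];
  for (let i = 0; i < Math.min(oldKids.length, newKids.length); i++) {
    diff(oldKids[i], newKids[i], `${path}.${i}`, out);
  }
  return out;
}
```

The patch list is plain data, so it's also a natural payload for the batched bridge crossing discussed elsewhere in this thread.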
I'm not sure if this quite counts as "web" (especially because of accessibility barriers), but I can totally see a future where people frustrated with CSS and the DOM just render their web page with a completely alternative UI toolkit on top of a giant <canvas>.
If you look at it as "compilation tends to converge on the lowest-level possible output", it all fits pretty well. Dynamic scripting languages flouted that for a long time, but now there's enough JITs floating around that show that even they tend in that direction.
So if JS is the lowest possible level language, then that will be the compilation target for a program going into that context.
x86 has its many, many critics, but I've never heard any of them say that x86 is bad because so many things compile to it....
Jeez, this horse always gets beaten to death, twice. We get it. You hate JS. ${language_of_choice} is better because [Reasons].
In the end, it doesn't matter. JS is here to stay. If you don't want to write it, don't write code for projects that need it. Constant complaining about its faults does nothing at all except produce noise in threads like this.
And, fwiw, I don't see too many C projects for front-end solutions like e.g. React. Maybe that means JS is a superior language in the context of the problem it was made to address. Holy shit! Who would have thought that "it depends" applies to development?!
> Maybe that means JS is a superior language in the context of the problem is was made to address. Holy shit! Who would have thought that "it depends" applies to development?!
Only in terms of access/availability but that was his point...
Suggesting that Node is only used by lazy front-end developers does nothing but show how condescending and uncharitable you are towards other people who picked different trade-offs than you.
Not wanting to learn a different language is a justifiable preference. Software is full of cost/benefit questions like that. Believing true things also doesn't make you uncharitable.
Because I'm rather tired of the silly claims. I write Rust. I write Kotlin. I write C++. I write Ruby. And I also write JavaScript. Horses for courses--and the mind-meltingly stupid sneering nonsense should just end. Just...end.
I used to smarm about it. I was wrong to. You should stop, because you are wrong now.
> Because I'm rather tired of the silly claims. I write Rust. I write Kotlin. I write C++. I write Ruby. And I also write JavaScript.
I never made any such 'claims', but I'm happy for you. I enjoy those things, too...
> and the mind-meltingly stupid sneering nonsense should just end. Just...end. I used to smarm about it. I was wrong to.
You're the one sneering, smarming, etc. and attempting to create drama. I never said anything about using Node being bad. Maybe you should re-read what I wrote?
> You should stop, because you are wrong now.
Hey, keep believing that if it makes you feel better.
Ryan Dahl wanted to replicate the nginx "async" model for normal web development. Since javascript was single-threaded, it pretty much forced an asynchronous programming model and many of the libraries were already built that way, so it proved to be a fertile ground for experimentation.
In other words, we have node.js and not node.xxx because other languages at that time did not provide a suitable ecosystem for building asynchronous programs. Today most languages have much better async support, or they have features (like goroutines in golang) which achieve the same objective in a different way.
Emphasis on ecosystem. I remember working with Netty and other platforms that provided async networking years before Node.js got popular and async became "trendy".
Doug Schmidt's C++ Reactor and Sam Rushing's Python Medusa did async networking 20 years ago. But in the late '90s and early 2000s a lot of J2EE-based server code was built in Java that used one blocking thread per client; that was a major wrong turn in mainstream SW design that took a long time to correct, IMHO.