> ...Even though JavaScript is a weakly typed language it doesn’t mean that it is inherently insecure. Yes, the programming language used plays an important role, but at the end of the day it is the developers obligation to write secure code in the first place...We chose JavaScript because it runs literally everywhere, is extremely popular & widespread, and has huge companies like Google or Microsoft working on its speed and security across a wide range of devices.
I don't mean to be rude, but that's the worst possible answer you could have given. Security should be your first priority, not an afterthought to popularity. That it isn't tells me everything I need to know about the seriousness of your project.
I agree. I don't understand why people say things like "You can create bugs and make mistakes in any language." That's not the issue. The issue is how likely someone is to make mistakes in that language, and whether the way the language is designed makes it easier to shoot yourself in the foot.
Please read "The Shocking Truth about static Types" by Eric Elliott, which includes links to studies on bug density by programming language. JavaScript is in the middle of the pack, and programs built on JS are on average less buggy than C++ and Java. https://medium.com/javascript-scene/the-shocking-secret-abou...
I didn't find that convincing at all and most studies like this are full of issues because coder productivity is notoriously hard to measure. The graph of bugs per language is just a simple count of how many GitHub issues were labelled "bug" which isn't meaningful.
I don't even know why you need a study for this anyway. Working in a language where it's impossible to mix up strings and numbers, impossible to call a function with the wrong number of arguments, etc. leads to fewer bugs by definition. Do you really need a study to prove you're more productive and your code is more secure in a language where mixing up types is impossible, compared to using a language that lets you make that mistake? You can write secure code in assembly if you want, but you're just making life hard for yourself.
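To make that concrete, here's a minimal sketch (TypeScript, with the any type standing in for what untyped JavaScript permits; the values are made up):

    // Plain JS (everything effectively `any`): all of this runs without complaint.
    const price: any = "10";      // came back from an input field as a string
    console.log(price * 3);       // 30    -- silently coerced to a number
    console.log(price + 3);       // "103" -- silently concatenated as a string

    function pay(to: any, amount: any) { /* ... */ }
    (pay as any)("alice");        // wrong number of arguments, still runs

    // With real annotations, the same mistakes stop compiling:
    // const typedPrice: number = "10";  // error: string is not assignable to number
    // pay("alice");                     // error: expected 2 arguments, but got 1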
Having worked with JS for years, even with large teams, I can tell you that type errors are relatively rare. They happen, sure, but they are a small minority of overall bugs that are possible to generate in any program.
Types are a trade-off. Sometimes, by explicitly defining interfaces and types ad nauseam, you spend more time defining and managing those types than you ever would fixing minor type errors, which, after all, generally only come from incorrectly calling a function.
I think JS code tends to be simpler than equivalent Java code and in my opinion, the simplicity of JS offsets its lack of explicit types.
"Simplicity is prerequisite for reliability." -Dijkstra
> Having worked with JS for years, even with large teams, I can tell you that type errors are relatively rare. They happen, sure, but they are a small minority of overall bugs that are possible to generate in any program.
> Types are a trade-off. Sometimes, by explicitly defining interfaces and types ad nauseam, you spend more time defining and managing those types than you ever would fixing minor type errors, which, after all, generally only come from incorrectly calling a function.
I wouldn't agree with that. Issues with string/number conversions, strings being treated like arrays, and unexpected undefined/null (these are particularly bad) are common in my experience in JavaScript. Even if they're not particularly common, do you really think having to define a few interfaces takes so much time that it nullifies the benefit of automatic error checking? Writing an interface is way less effort than writing a test as well, plus you get automatic refactoring tools, autocomplete and automatic error checking in return.
I really can't see why you'd want to give all that up to save a few keystrokes. I've done years of JavaScript after moving from typed languages like OCaml, C++, Java and Coq, and it's horrible trying to write large apps in plain JavaScript without types.
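For what it's worth, the "few interfaces" mentioned above are literally this much typing in TypeScript (Transfer is a hypothetical example, purely for illustration):

    interface Transfer {
      from: string;
      to: string;
      amount: number;   // a number, never a string or undefined
    }

    function applyTransfer(t: Transfer): void {
      // every call site, rename and refactor is now checked by the compiler
    }

    applyTransfer({ from: "alice", to: "bob", amount: 5 });       // ok
    // applyTransfer({ from: "alice", amount: 5 });               // error: 'to' is missing
    // applyTransfer({ from: "alice", to: "bob", amount: "5" });  // error: string amount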
Unexpected nulls are common in typed languages like Java and C# as well. I know some static functional languages go much further and offer superior protection and reliability. I'm willing to believe that.
However, I think that in a strongly typed language like Java, it's more than a few keystrokes. Also, because there's mutable state everywhere buried in Java classes, you get many more logic errors.
At the end of the day, I think simplicity is the best hedge against errors. However, I tend to agree that statically typed, functional languages are probably the best for preventing errors.
> However, I think that in a strongly typed language like Java, it's more than a few keystrokes.
Type inference solves the extra typing issue, and Java isn't a great example of a strongly typed language. For JavaScript, you can use TypeScript, which has type inference and non-null checking. There's also BuckleScript or Reason if you want to code in OCaml, so there are practical ways to get this safety in JavaScript.
> At the end of the day, I think simplicity is the best hedge against errors.
You can have static types + simplicity though which is better than just simplicity. Types and immutability constrain the behaviour of your program to make it simpler to reason about.
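A minimal sketch of what inference plus strictNullChecks buys you in TypeScript (findAccount is made up for illustration):

    // with "strict": true (which includes strictNullChecks) in tsconfig.json
    function findAccount(id: string): { balance: number } | undefined {
      return undefined;             // stand-in for a lookup that can miss
    }

    const acct = findAccount("42"); // type is inferred; no annotation needed
    // console.log(acct.balance);   // compile error: 'acct' is possibly 'undefined'
    if (acct !== undefined) {
      console.log(acct.balance);    // fine: the check narrows the type
    }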
This is true, but a study is useless if it can't measure the thing you want. Things in programming are notoriously difficult to quantify, compared to, say, physics. Perhaps that's why there are so many holy wars.
I read the studies he cited and I'm not sure we can conclude anything from them. As far as I can tell, "bug density by language" is not something you can measure in a meaningful way. It depends on the specific programmers at a specific point in their careers, their skill level, domain knowledge for the project, scope of the project, who the project manager is, their culture of testing, etc. For example, a great JavaScript programmer will write a less buggy program than a mediocre C++ programmer.
There is no silver bullet, though the article tries to pass off TDD as one. Also he does concede that static types can power tooling that might "feel like they make us more productive".
Disclaimer: I prefer having types but I also think JavaScript is great and don't shy away from using it.
I agree with all your points. I think, if anything, the study should show that JS is not inherently worse than any other language. It's got a bad rap that mostly has to do with weird quirks in the language (e.g. == vs ===), but those quirks are easy to avoid. I do think testing in general is a good way to build more reliable programs.
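For anyone who hasn't hit the == quirk, it looks like this (a small sketch; the values are typed loosely on purpose so it also compiles as TypeScript):

    const zero: any = 0;
    const empty: string = "";
    const zeroStr: string = "0";
    console.log(zero == empty);     // true  -- "" is coerced to 0 first
    console.log(zero == zeroStr);   // true  -- "0" is coerced to 0 as well
    console.log(empty == zeroStr);  // false -- two different strings
    console.log(zero === zeroStr);  // false -- === never coerces, so stick to it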
"...language design does have a significant, but modest effect on software quality. Most notably, it does appear that strong typing is modestly better than weak typing, and among functional languages, static typing is also somewhat better than dynamic typing.
"This is strong evidence that functional static languages are less error prone than functional dynamic languages … In order to strengthen this assertion we recode the model as above using treatment coding and observe that the Functional-Static-Strong-Managed language class is significantly less defect prone than the Functional-Dynamic-Strong-Managed language class with p = 0.034."
Dude, don't know what the ad hominem is all about, but if you have rational arguments to dispute the studies by UC Davis or the methodology behind Lebrero's bug count analysis, I'd love to hear them.
Fair point, but I have tried more programming languages than I can list, and "the way the language is designed makes it easier to shoot yourself in the foot" is a fairly accurate description of all of them.
The problem isn't that they wrote the blockchain in JavaScript, it's that the language they chose for user-written contracts and dapps (out of all the languages in the world, mind you) was JavaScript. Meaning that in the future, if this project really takes off (suspend disbelief here), an entire generation of blockchain apps that transfer money back and forth will be built on JavaScript when they didn't have to be. If they chose C as the language for smart contracts/dapps/whatever I would be ragging on them just the same. They had options, and chose one of the worst ones.
Paypal and Walmart are doing just fine handling huge numbers of transactions in JS. The majority of security breaches happen via targeted emails with attachments and have very little to do with the language services are written in.
Yes, but neither Paypal nor Walmart handles irreversible transactions. Nor are they platforms on which apps and custom transactions can be implemented. So I don’t think this is a good analogy.
Not sure what is making everyone upset. Paypal is doing 1.7 billion transactions per quarter (100 billion+ in volume).
80%+ of breaches at companies are via email as a vector.
Are Paypal and Walmart actually handling their transactions in JS or is that just the front end? In either case, when they transfer money, they hopefully use a sane database transaction mechanism. AFAIK, that would have prevented the DAO hack.
If you're interested in developing a better language for smart contracts, check out Viper (https://github.com/ethereum/viper). Although development is kind of slow right now because Vitalik is the only one who merges pull requests.
Quite frankly, I think Ethereum is a shameless power grab by a bunch of folks whose sole positive contribution to the medium is growing it.
It's a tribute to the essentially unskilled & uninformed investors and inventors in many parts of the blockchain ecosystem that Eth has been allowed to grow at all. It's a great idea for the owners of Eth and a terrible idea for everyone, everyone, EVERYONE else (except maybe certain classes of miners).
I've got 0 interest in contributing my time to a system that will only be used to extract wealth from other people's good ideas while simultaneously welding them to a release calendar that keeps them at the back of the pack of blockchain software. If I wanted to pitch in with that, zcash is way more competent anyways.
You don't standardize to a platform at the beginning of a boom. You standardize to protocols. And that is, by the way, exactly what every competent programmer is doing; standardizing on the protocols laid out by bitcoin as they develop their own chains and experiment with new, cheaper ways to sustain a digital currency.
* Most accessible language on the planet -> any script kiddie can write dapps right away without even bothering to study what the risks are and what is happening under the hood
* dapps manage real money in real time
* end users aren't able to evaluate the quality of the dapps they entrust their money to
> any script kiddie can write dapps right away without even bothering to study what the risks are and what is happening under the hood
It's not the contract author's responsibility to guarantee that the contract works properly -- they aren't forcing anyone to use it. Any script kiddie SHOULD be able to write a dapp. Most people don't [understand that they don't] understand the complexities of our modern financial system, but we don't require that one has a PhD in order to spend a dollar. A low barrier to entry when dealing with financial transactions is desirable.
> end users aren't able to evaluate the quality of the dapps they entrust their money to
This is entirely not true. Because dapps are publicly auditable, an end user can evaluate whether or not he/she wishes to participate in a contract based on his/her understanding of the code. Granted, it might take a lot of time to properly learn the language used to build a contract, but this would be true of any language. Even so, if a user does not understand a contract, he/she can choose not to use it.
I'd agree that the example of spending a dollar is a gross oversimplification. I'd go further to say that there are good reasons to not allow just anyone to issue an exotic derivative. If you're going to draw up contracts, you should have a good understanding of how the rules of the contract work.
Although using a language other than JavaScript may help prevent some errors, it does not fix the fundamental issue of responsibility. One should not enter into a complex contract, blockchain or not, without fully understanding the contract itself. If instead of understanding the contract, you are relying on the structure of the language to protect you, well, you're doing it wrong.
Correct me if I'm wrong, but I believe that auditing a Dapp is far from trivial. Only the bytecode is stored in the blockchain, and even if the source code is published, it's hard to see whether it corresponds to the published bytecode (and even then, FOSS still has bugs; remember OpenSSH and its latest security bugs). From [1]:
> Q: How can I verify that a contract on the blockchain matches the source code?
> A: AFAIK the best way to do this at the moment is to compile the source code again with the exact same compiler version the author used (so this is something that needs to be disclosed) and to compare the bytecode. (Thomas Bertani @ StackExchange)
In addition, in the source code itself a contract programmer could implement "contractual" backdoors... ahem.. loopholes. And you'll have no recourse.
I agree, this is not trivial, but as you point out, since only the bytecode is stored in the blockchain, examining it must be the only way to know what a contract does.
> In addition, in the source code [...]
Yes, the programmer could implement backdoors, no matter what the language is. As such, the end user should not rely on specific properties of a language to make guarantees as to whether or not the contract does what they expect. It can help, but it's not foolproof.
My point is that if you invest in a smart contract, it is unreasonable to rely on outside entities -- either the person who created the contract or the contract language itself -- to ensure that it will work as expected. As hard as it may be to do, if you can't reasonably understand what's going to happen when you enter into a contract, you shouldn't.
> Any script kiddie SHOULD be able to write a dapp. Most people don't [understand that they don't] understand the complexities of our modern financial system, but we don't require that one has a PhD in order to spend a dollar. A low barrier to entry when dealing with financial transactions is desirable.
I think that simple market approach works well when customers are able to understand and compare what they are buying (or investing in), but it doesn't work when customers don't have the means to do so. That's why the financial industry is one of those that needs to be heavily regulated. So, no, IMHO lowering the barrier to entry for writing complex financial instruments that can trade hundreds of millions of dollars is a recipe for disaster, not for useful innovation.
Of course I understand that, if you're a libertarian, you're not going to agree :)
JS unsafety comes from its loose coercion rules, from bugs that only pop up when the code is actually executed (sometimes under very specific conditions), etc.
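A tiny sketch of that second failure mode (a hypothetical order object; the any type stands in for untyped JS):

    const order: any = { amount: 250, currency: "USD" };

    function fee(o: any): number {
      return o.ammount * 0.01;   // typo: 'ammount' -- evaluates to undefined * 0.01
    }

    console.log(fee(order));     // NaN at runtime, possibly on a rarely-hit path
    // With an interface describing the order, the typo is a compile-time error instead.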
Why would they not use TypeScript instead of only "eventually" moving to it? The primary focus of blockchain languages should be safety and security.
Even something like Facebook's Reason (https://reasonml.github.io/guide/what-and-why) or BuckleScript (https://github.com/BuckleScript/bucklescript) would be a great starting point, since they are built on top of an OCaml -> JavaScript toolchain.
The main problem with Javascript cryptography called out in https://www.nccgroup.trust/us/about-us/newsroom-and-events/b... is that the Javascript is delivered to the user's browser from the server on every view. If someone makes a web app for doing end-to-end encrypted messages between users, it can't be secure from the web server operator: the server operator can update the javascript one day to leak users' private keys to the server. The server operator could even serve that malicious change to a subset of users so that it's even less likely to be noticed.
In systems where the Javascript isn't loaded on demand from a server on every use (like a server written in node.js, or an application built with electron), then that issue doesn't apply.
If it was just about type systems, then you'd expect similar outrage over web apps written in PHP, Python, and Ruby.
Personally I'm a huge fan of how Flow or Typescript add a type system on top of Javascript and think anyone writing a >500 line web app would hugely benefit from using one of them.
Your browser can auto-update, your OS can have a backdoor, drones and satellites can fly over our heads and observe our every move. The safest way is to hide in a cave and put on a tin foil hat. If one does not trust the service provider, then don't use their service. Similarly, if one cannot trust their ISP, then stay offline. Trust is critical in many things in life. If we cannot trust anyone anymore, then we have to build everything ourselves. Shit, is the cup of water in front of me safe to drink?
There's a huge range of possibilities between "I don't trust anyone and use only a CPU I made by hand and software I wrote bit by bit" and "I execute everything blindly that shadyapp.com sends me daily".
Debian for example has people review everything that goes into the package repositories, has policies about what types of things are allowed, and the history of packages on the repository can be inspected. An app developer couldn't selectively deliver a malicious key-leaking version of an application to an individual user running Debian with the application installed from Debian's repository.
If one is paranoid, why trust the "people" when View Source is simply a click away? As long as one is low enough in the overall software/hardware stacks, selectivity is really not that difficult to achieve. Protection comes in layers; there is no such thing as absolute security.
Don't see how that differs from any other language.
No matter what language a program is written in, the server you have to download it from can send you bad code.
You also have the javascript engine/c compiler to worry about, which can of course also be malicious.
Then you have the OS to worry about.
Then, given all that, you have people pretending to know what they are talking about and stating they take security seriously to worry about. When clearly, all your base are belong to us.
>Don't see how that differs from any other language. No matter what language a program is written in, the server you have to download it from can send you bad code.
:shrug: Javascript is the one language where for its most popular uses, people download the code from a server on every use, and few if any other languages have that as the popular runtime mechanism.
I'd prefer it if the article were more obvious about the issue being the download-on-run mechanism rather than being titled as if the problem were the language itself.
>You also have the javascript engine/c compiler to worry about, which can of course also be malicious. Then you have the OS to worry about. ...
You could say that in any discussion about security, but I'm not sure it's really useful because it seems the implication is to give up on any security problem because perfection isn't possible.
>You could say that in any discussion about security.
Exactly. But no one does. Which is why you need to worry about all these guys pretending to know security.
Better to assume you are not secure when you mostly are than to assume you are secure when you definitely are not. With baseband processors on mobile and the management engine on x64, all security is currently broken at the hardware level anyway. A major mindset shift is needed to fix that.
I'm curious: could any of the recently known smart contract bugs have been prevented through the use of a stricter type system?
I tend to think of type systems more as a hindrance myself. I mean, they can certainly help you catch bugs before the code even runs - but which of those bugs would you not catch during the testing phase anyway?
I'm genuinely curious: what types of bugs does a stricter type system catch that a reasonable test suite probably would not?
Note I'm not saying that tests guarantee bug-free code, or that you can't do both. I'm just wondering about which different kinds of bugs you might catch.
>I'm curious: could any of the recently known smart contract bugs have been prevented through the use of a stricter type system?
Short answer is yes, and there's been lots of work done to go even further than having static typing and have formal verification for smart contracts.
>I'm genuinely curious: what types of bugs does a stricter type system catch that a reasonable test suite probably would not?
Not sure what you mean by "reasonable" (is it extensive? testing pathological cases? where do you draw the line?). But type checking at compile time makes your code less inclined to exhibit a certain class of bugs that are the byproduct of ambiguity in the language semantics and logical mistakes by the programmer writing the code.
And with formal verification you can actually make sure that your logic meets the specs. You can hardly extract stronger correctness guarantees than that!
If that interests you, check out Tezos (https://tezos.com/) and github.com/tezos/tezos. The entire codebase is in OCaml.
> Short answer is yes, and there's been lots of work done to go even further than having static typing and have formal verification for smart contracts.
Yes, I've looked at some of these projects before. And I certainly think it's a good idea to use automated tools to try to prove properties about programs.
But I'm more excited about things like property testing than I am about things like strong type systems. And I'm wondering specifically what is the value they bring.
No offense, but all of the answers I've gotten so far are very vague, and don't really address my question. I don't doubt that strong type systems can catch bugs, I am wondering how their capabilities in catching bugs differ from test suites.
Let me give you an example, say we have a hypothetical language with a strict type system, and we declare a variable to be of type List[Foo]. Then later we use that variable as if it was really of type Foo. That's not gonna work, and a type-checker would catch that at compile-time. But a test suite (that covers the variable access) is going to catch that as well, because the code won't behave as it should.
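In TypeScript terms, that example looks roughly like this (Foo and firstId are made up for illustration):

    interface Foo { id: number }

    function firstId(foo: Foo): number {
      return foo.id;
    }

    const foos: Foo[] = [{ id: 1 }, { id: 2 }];
    console.log(firstId(foos[0]));   // 1
    // firstId(foos);                // compile error: Foo[] is not assignable to Foo
    // In untyped JS the bad call runs and quietly yields undefined, so only a test
    // that happens to cover that exact call site would notice.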
At which point is a strong type system going to surface a bug that a good test suite would not have? Like, can you give an example?
> Not sure what you mean by "reasonable" (is it extensive? testing pathological cases? where do you draw the line?).
The line is as variable as the strictness of the type system we would compare it to.
I guess one could argue that a type system will force the programmer to satisfy it, while a test suite can be written very sloppily. So maybe there is some kind of signalling value in using these types of languages.
> I am wondering how their capabilities in catching bugs differ from test suites.
Well, for one they can prove the absence of certain classes of bugs. Buffer overflows, for example. No amount of testing can do that.
Obviously most languages have "escape hatches" to do inherently 'unsafe' things like calling into C, but then at least you know exactly which bits to audit especially rigorously.
> But a test suite (that covers the variable access) is going to catch that as well, because the code won't behave as it should.
How many different List[X], where X != Foo do you need to test with to have the assurance you need? Are those tests that will actually get written? (IME it's pretty rare to see such "negative" tests, but then I mostly work in typed languages where such tests are usually unnecessary...)
There's also the really huge advantage to types that they actually document a machine-checked contract in a way that integrates seamlessly with the language. There's no such consistency in e.g. JS-land. Now, those contracts may be pretty vague (in e.g. Java or C#), but in Haskell for example they include such things as "does this function have any side effects?". That's extremely powerful, but it's hard to appreciate just how powerful until you have experience in those type systems.
EDIT: Also, don't forget that tests also have costs -- they have to be maintained just like the rest of the program, and static types can drastically cut down on the amount of tests you need to write+maintain.
> I don't doubt that strong type systems can catch bugs, I am wondering how their capabilities in catching bugs differ from test suites.
Type systems and test suites are complementary. When a test suite finds a bug, that proves that the program is incorrect for some inputs. When a type system doesn't reject a program, that proves that the program is correct for all inputs.
Which one to use depends on the impact of an error. In the case of a system that controls lots of money, you'll want a guarantee that all inputs lead to a correct balance. That suggests to use a type system.
On the other hand, if you're just writing an app to get data from a website and display it, you can probably afford it if the program doesn't work in some cases. If you can write a generator for realistic input, and check the output, that will give you a probabilistic estimate of correctness.
The main advantage that test suites have over type systems is the kind of properties they can easily check. If you have a test suite that only checks that the output values have the right structure (EDIT: https://news.ycombinator.com/item?id=15137691 points out that part of the Lisk test suite does exactly that), you'd probably benefit even from the C type system. But to formalize the correctness of values, for many programs you'd need a much more powerful type system e.g. using dependent types, that isn't quite so simple to use as writing the equivalent test.
I think a good compromise would be a language that allows you to annotate your types with arbitrary properties, but doesn't complain if it can't type-check them, so long as you write a test. (But it should complain when it can prove that the properties never hold, e.g. using success types, so that you don't waste time writing a test for that.)
> When a test suite finds a bug, that proves that the program is incorrect for some inputs. When a type system doesn't reject a program, that proves that the program is correct for all inputs.
> Which one to use depends on the impact of an error. In the case of a system that controls lots of money, you'll want a guarantee that all inputs lead to a correct balance. That suggests to use a type system.
When a test suite passes, that proves that the program is correct for a subset of all inputs. But I do not see how the same can be said about a type system.
Types do not usually capture semantics - unless we are talking about something a lot more powerful than what I'm used to seeing in real programming languages.
> I think a good compromise would be a language that allows you to annotate your types with arbitrary properties, but doesn't complain if it can't type-check them, so long as you write a test.
But why do I need types for this? Why can't I just assert the properties outright, writing assertions only in terms of the code interface? It's not the types that tell me what arbitrary properties my code should have!
> Types do not usually capture semantics - unless we are talking about something a lot more powerful than what I'm used to seeing in real programming languages.
You are right, I should have specified that the correctness proof only applies to properties actually given in the type system. So for a high-stakes financial application, your types should be strong enough to capture at least elementary arithmetic. This is definitely not something you'd see in a mainstream programming language.
> It's not the types that tell me what arbitrary properties my code should have!
When you give a variable in a program a type, you assert a property for all values that variable will ever take. The converse is also true: for any property you want to express, there is a type system that can encode it. This is called the Curry-Howard correspondence.
Unfortunately, most interesting properties one might want to formalize require either an undecidable type system, or you have to write a bunch of proof code just to convince the type checker that the rest of the code conserves the properties it should. That isn't too different from doing formal verification for all your code anyway, but it gets annoying when it is enforced everywhere, even when you'd deem it unnecessary otherwise. In that, it is similar to a policy of "unit tests everywhere", which probably catches some bugs, but also leads to lots of boilerplate stating the obvious.
> Yes, I've looked at some of these projects before. And I certainly think it's a good idea to use automated tools to try to prove properties about programs.
> But I'm more excited about things like property testing than I am about things like strong type systems. And I'm wondering specifically what is the value they bring.
By "property checking" you mean theorem proving.
Static type checkers are theorem provers. More powerful type systems allow more interesting and less intuitive proofs to be written. Sometimes, the type checker is integral to the compiler and is required to run on every build, and sometimes it's an external tool.
In any case, if you're adding annotations to your source code in order for a theorem prover to statically prove certain runtime properties, those annotations constitute a static type system which is type-checked by your theorem prover.
Maybe you don't like even a subset of your theorem prover to run on every build, but if you're in favor of machine-checked proofs of program behavior, you're in favor of static typing.
Property testing [1] is about semantics, the same as regular testing, and it does not require code annotations. [Edit: It's more like "fuzzing" than "model checking".] Type systems on the other hand aren't usually powerful enough to capture semantics.
They allow you to say things like, this function takes in a list of Foo objects, and returns one Foo object. But they don't really let you express whether the object returned is or is not part of the original list, and if it is, that it was chosen according to the right mechanism. That's what tests are for.
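That membership property is exactly the sort of thing property-based testing checks directly; here's a sketch using the fast-check library and a made-up pickLargest function (both assumptions for illustration, not anything from Lisk):

    import fc from "fast-check";

    // the function under test (illustrative)
    function pickLargest(xs: number[]): number {
      return xs.reduce((a, b) => (b > a ? b : a));
    }

    // property: the returned value is always a member of the input list --
    // something the List[Foo] -> Foo signature alone does not express
    fc.assert(
      fc.property(fc.array(fc.integer(), { minLength: 1 }), (xs) => {
        return xs.includes(pickLargest(xs));
      })
    );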
Without being able to express the semantics of code, I don't see how you can trust it. There may not be any type mismatches, but there sure can be lots of bad logic in there.
> I'm genuinely curious: what types of bugs does a stricter type system catch that a reasonable test suite probably would not?
Edge cases that you do not hit in your test cases.
One could also argue that a distributed computing platform coupled to a money system may not need to be Turing complete. State-of-the art type systems are capable of proving non-trivial properties of code which could be handy in the crypto world.
Assuming stricter typing does prevent smart contract bugs, something like Ada or SPARK would seem to be a good fit for cryptocurrency development, given their track record creating highly reliable systems:
That being said, I spent some time with Ada several years ago and did not enjoy the language; very verbose and anal. If that impression is widespread, such a language could end up hurting a blockchain project by drawing fewer contributors.
Hi, thanks for the link. I've seen this paper, but it has nothing to do with strongly typed languages, as far as I can tell. In fact, there is no mention of types in the paper at all, it's strictly automated analysis.
Would you prefer a different language or just a better answer? From my perspective, I don't see how the language makes a difference. In security, you have way more to worry about before even considering an application's underlying implementation language.
> we are planning an eventual transition to TypeScript
Just switch today. It's kind of silly to write a whole bunch of untyped code and then move to types, especially since you can do it over time with TS.
All these tests that something is a string... why not let a compiler do that for you?
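Roughly the difference between these two (a sketch; setName is hypothetical):

    // runtime check, only fails when this line actually executes
    function setNameChecked(name: any) {
      if (typeof name !== "string") throw new TypeError("name must be a string");
    }

    // compile-time check: the bad call never makes it into a build
    function setName(name: string) { /* ... */ }

    setName("lisk");
    // setName(42);   // error: Argument of type 'number' is not assignable to 'string'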
Yep, I love TypeScript! I've been using it with Angular 2 and Angular 4. I come from a C#/.NET background and don't want to go back -- I even co-founded a .NET user group and the name of my company has ".NET" in it! There's no reason not to use TypeScript. You can use what you want or continue writing JavaScript in the .ts files (you might need to change some default settings), and I've had no reason to give up all the autocomplete/IntelliSense/advice/errors/warnings/best-practices.
> You can now build a decentralized Internet of Things application which allows you to securely (with authorisation!) turn on gadgets with a simple transaction, which can be just a push on a button.
Why would it be necessary to use a transaction to turn something on/off securely?
With all of the pain I've had with Hyperledger Composer, I'm not sure JavaScript is a good idea for smart contracts. Even on private blockchains it's still painful.
> We chose JavaScript because it runs literally everywhere, is extremely popular & widespread, and has huge companies like Google or Microsoft working on its speed and security across a wide range of devices.
What about WASM? Seems like if we're future thinking "things that can run everywhere" just about any language can be compiled into a web assembly target and used. WASM is supported in most major browsers except IE.
The difference here is that JavaScript is a human-readable format, whereas WASM is byte-code (although it can be translated into a readable format). It is more practical to write contracts in a high-level language and translate them into byte-code than to write the byte-code itself.
Ethereum uses this concept by translating Solidity code into Ethereum byte-code, whereas Lisk appears to interpret the JavaScript directly without translation. WASM would be a candidate to replace the byte-code in Ethereum, but in order to avoid the impracticality of writing contracts directly in WASM, we'd also need a high-level language from which to compile.
The reason this is a good idea is that one could write contracts in any language for which a WASM compiler exists, but I imagine that could also be done with Ethereum byte-code. WASM currently has the '"runs literally everywhere" advantage' here, but it should be simple to create a compiler that translates from Solidity to WASM, or even directly from Ethereum byte-code itself.
Further, I don't buy in to the '"runs literally everywhere" advantage' for JavaScript because it loses this advantage to any language that can compile to it and this is becoming increasingly true for WASM as well.
> Even though JavaScript is a weakly typed language it doesn’t mean that it is inherently insecure. Yes, the programming language used plays an important role, but at the end of the day it is the developers obligation to write secure code in the first place.
Sorry, but I just don't agree. Javascript is a popular language, and has an important role to play in frontend development, but for financial transactions and critical contract code, the lack of safety almost guarantees catastrophic bugs and vulnerabilities will be made in production code. Maybe this is fine for hobby projects, but if you are talking about moving billions of dollars around in the real global economy, I don't see it being done safely in Javascript. I like that projects such as Tezos are integrating formal verification of smart contract code, that seems like the right way forward.
Not just that. One thing that worries me a lot about JS in this context is numbers: JS was designed to make it easy to use numbers and convert to/from strings without caring about the precision and exact internal representation. Which is perfectly fine for UI code and the like, but a terrible idea when you're transacting money.
I have good news for you! There's a BigInt proposal [0], and it's already in stage 3 of the process. Now they're asking for feedback and waiting for implementations.
Well, there are plenty of number libraries like BN.js that unambiguously convert from/to buffers and have unlimited precision. Numbers are the least of my worries with JS security code.
The point is that you have to know you need to use those libraries to get exact computations. Lisk chose JavaScript to make dapps available to the masses. How many JavaScript programmers do you think even know what a number on a computer really is? The risks of summing floats? Etc.
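A quick sketch of those pitfalls, and of the BigInt proposal mentioned above (assumes an ES2020+ runtime for the BigInt literals):

    // IEEE-754 doubles are fine for UI, risky for money:
    console.log(0.1 + 0.2);               // 0.30000000000000004
    console.log(0.1 + 0.2 === 0.3);       // false
    console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
    console.log(9007199254740993);        // prints 9007199254740992 -- off by one

    // BigInt keeps integer amounts exact, e.g. balances in the smallest unit:
    const balance = 10n ** 18n + 1n;
    console.log(balance.toString());      // "1000000000000000001"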
I'm probably one of the bigger JS fans, and I even love working with NodeJS (don't shoot me please) -- but I totally agree with you. I would certainly not trust myself, and I really wouldn't trust somebody else trying the same.
I don't see a huge problem using some JS blockchain library for some non-critical application if it somehow added to the end product, but something so highly critical... no thanks.
Except for the part about Tezos. I haven't looked into it.
There was a time when common internet protocols we take for granted today were a buzzword, too.
Buzzwords are buzzwords until implementation is (or isn't) proven useful. Lisk is making an attempt. Let's judge the attempt and the fruits or lack thereof.
Disclaimer: I am not an investor, stakeholder, or employee in Lisk.
I'm not calling blockchains a buzzword as a derogatory term towards blockchains. I'm more using the term to point out the way blockchains are being used right now (e.g. marketing bad ideas and non-ideas as groundbreaking tech).
That's not to say there aren't plenty of promising projects out there. There absolutely are.
Moderation works by random sample so it obviously isn't going to be consistent. But if you post like that repeatedly, you're eventually going to get asked not to.
Whatever the definition of unsubstantive is, snarky generic dismissals are certainly included.
In this specific case you call Lisk a terrible idea without substantiating why, other than the fact that it's related to a nascent technology (a buzzword, as you say).