That's a nice article. But I feel like tweaking something in it.
> The programmer should be able to use Unicode, so the source file _must_ declare its encoding.
Another great lesson we've learned is to avoid clutter. So a reasonable rule, IMHO, would be that any standard Unicode encoding can be declared, but the lack of such a declaration is equivalent to a declaration of UTF-8.
We also learned to only use UTF-8, so we should make it simpler still :) [1] Really, why declare the encoding of source code at all, given that we have a universal character set and an endian-agnostic, compact representation?
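For what it's worth, the endian-agnostic point is easy to see in a couple of lines of Python (used here purely for illustration):

```python
# UTF-8 needs no byte-order mark: the encoded bytes are identical on
# every platform, which is what "endian-agnostic" buys us.
s = "héllo"

utf8 = s.encode("utf-8")
utf16 = s.encode("utf-16")

print(utf8)        # b'h\xc3\xa9llo' -- the same bytes everywhere, no BOM
print(utf16[:2])   # a BOM (b'\xff\xfe' or b'\xfe\xff') marking byte order
```

UTF-16 has to spend two bytes telling you how to read the rest; UTF-8 never does.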
The author conflates quite a few unrelated concepts: monads with "interfaces and object models," and templates with type inference. Perhaps in the programmer's head these all serve similar functions (often erroneously; see, e.g., http://blog.tmorris.net/type-classes-are-nothing-like-interf...), but language designers need to make clearer distinctions.
The author seems mistaken about the purpose of monads in Haskell. The essence of monads is control flow, not encapsulating code or providing interfaces.
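A rough sketch of that point in Python (a toy Maybe-style bind, not real Haskell): the monad decides whether the next step runs at all, which is control flow, not an interface in the OO sense.

```python
from typing import Callable, Optional

# A Maybe-style bind: chain steps, short-circuiting on None.
def bind(value: Optional[int],
         step: Callable[[int], Optional[int]]) -> Optional[int]:
    return None if value is None else step(value)

def half(n: int) -> Optional[int]:
    return n // 2 if n % 2 == 0 else None  # "fails" on odd numbers

print(bind(bind(8, half), half))   # 2: both steps ran
print(bind(bind(6, half), half))   # None: second step skipped entirely
```

The second chain never runs the second `half` because the first produced `None`; that sequencing decision is exactly what the monad encapsulates.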
RubyMotion is built around a slightly restricted Ruby that compiles to native code via LLVM. A non-iOS-focused version would seem to fit the criteria.
Along similar lines, Unity uses a slightly restricted JavaScript that compiles to Mono's CLR implementation, as well as Boo, a Python-like language that also compiles to Mono. IronPython, with which I have no direct experience, is another example.
And of course there's Clojure if you're so inclined.
I can't recall a single instance of language being an actual issue when solving a business problem, and I've been programming for over 30 years. Discussions about language can be tiresome, but languages themselves haven't been a problem.
The only real problem I know of with languages is that they don't die off fast enough. If the perfect language arrived tomorrow, it would be lost in the sea of languages anyway.
A language debate is a sign that there is an extra language lurking and it should die off. Python vs. Ruby is a lively debate. I don't care which one survives, but do we really need both? The most curious debate of all is "Haskell?", a debate that seems to exist apart from any other language: the clearest sign of a lurking "extra" language.
Haskell isn't, and isn't trying to be, the next big language. Haskell is first and foremost a vehicle for programming language research, and its primary proponents aren't shy about that. As such, it has been extraordinarily successful, and necessary.
In this case, are you talking about a language as a semantic and conceptual entity, or are you including a language's implementation? I've had to rewrite code in C/C++ a couple of times because my Python solution wasn't fast enough.
Perfect example. There were no language problems. You wrote it in Python, then once you knew where to optimize you did so with C/C++.
Great use of the available tools.
Now to answer your question: I have no idea. Perhaps like you, I consider C and C++ to be pretty much one language because they occupy nearly the same slot in the toolbox. Maybe one's a regular screwdriver and the other is a power screwdriver? I dunno.
The most peculiar debates are about the latest versions of C and C++. They're really smart guys; how much nicer it would be if they listened to Peter Thiel and worked on real innovation. The economy needs it.
I think you might have too narrow a definition of "problem caused by a language". I love Python as much as the next guy, but if it isn't fast enough and forces him to rewrite part of his code in another language, that's a problem. Wasted effort.
It's hypothetically possible for a language to be both expressive and performant. If a language could prevent a problem, but doesn't, doesn't it share responsibility for the problem?
Languages that are Turing-complete can obviously solve any business problem that is solvable. The trick is the ease with which your goals can be accomplished, and there is a tremendous amount of variability there (if there weren't, we'd all be programming in assembly). I've seen bake-offs where different teams used different languages, and the results were dramatic enough that I'd say the loser was clearly a "problem".
The only Objective-C feature I miss when coding in other languages is named arguments (e.g. [self drawRectOfSize:size atPoint:point inColour:colour]). Doxygen, RDoc, Javadoc, and similar tools do a great job of generating API documentation from source, but I prefer the self-documenting nature of Objective-C method calls.
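For comparison, Python's keyword-only parameters (the names below are invented for illustration) give a roughly similar self-documenting call site:

```python
# Everything after the bare * must be passed by name, so the call site
# reads much like an Objective-C message send.
def draw_rect(*, size: float, at_point: tuple, in_colour: str) -> str:
    return f"rect {size} at {at_point} in {in_colour}"

print(draw_rect(size=2.0, at_point=(0, 0), in_colour="red"))
# draw_rect(2.0, (0, 0), "red") would raise TypeError: the names are required
```

It's an approximation, not equivalence: Objective-C bakes the names into the selector itself, while Python merely forbids positional calls.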
This sentiment seems to be at odds with the preference of
abs(val) {
over
int abs(int val) {
I'm all for type inference, I think it's wonderful. But if there's anywhere where I do want types to be explicit, it's in function signatures.
An interesting idea is for the types to be inferred not every time you compile the code, but only the first time; the compiler then edits the source and adds the inferred types. Or an IDE could do that.
I prefer my compiler to be a pure function. What if I'm generating source code and sending it over a pipe? I prefer not to need an IDE, either, though that's a tad more subjective.
On the contrary, what he really misses is the "self-documenting nature" of Objective-C source code. And when it comes to functions, the four bits of documentation that are more important than any other are:
1) the order (or names) of its arguments (how to call it)
2) the types of its arguments (what to feed it)
3) the type of its return value(s) (what to expect)
4) whether the function maintains referential transparency (whether it plays nice with the rest of your program)
Relying on type inference in the function signature abandons two of these and forces the casual reader to manually infer the manner in which the function may be safely used. Of course, in this position, one may always opt to use a documentation-generation tool (Doxygen, RDoc, Javadoc, etc.); however, at that point our code can no longer be considered self-documenting.
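To make that concrete, here is a sketch in Python (chosen only for illustration): an explicit signature carries points 1-3 directly, while point 4 (purity) has no type-level expression in Python and has to live in the docstring.

```python
def scale(values: list[float], factor: float) -> list[float]:
    """Return a new scaled list. Pure: does not mutate `values`
    (point 4, which the annotations cannot express, so we document it)."""
    return [v * factor for v in values]

# 1) argument order/names, 2) argument types, and 3) the return type are
# all visible at the definition, with no inference needed in your head.
print(scale([1.0, 2.0], 3.0))   # [3.0, 6.0]
```

Languages like Haskell can additionally encode point 4 in the type itself; Python cannot, which is exactly the gap the parent comment is pointing at.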
> I've no complaint with this, but it leads to the mistaken impression that great applications are written in dynamic languages. They aren't. Great prototypes are written in dynamic languages, but the lasting replacement will be written in a dull typed language like Java/C++/C#.
Common Lisp is the only programming language I know of that provides both the prototyping power of dynamic languages and the execution power of static languages. This is achieved through the Lisp declaration system.
The Maxima computer algebra system also offers a variety of algebraic declarations, such as linear, additive, multiplicative, outative, evenfun, oddfun, commutative, symmetric, antisymmetric, nary, lassociative, and rassociative. In my opinion, all programs should be written this way: the Lisp way.
> So far my fondness with Scheme has coexisted with disappointment at its unsuitability for real world tasks
I am fond of Scheme, Kernel, Arc, Shen and many other Lisp dialects. The unsuitability of these languages for real world tasks can be solved to some extent by embedding them in a larger host platform. Clojure does the embedding approach quite nicely.
> I've no complaint with this, but it leads to the mistaken impression that great applications are written in dynamic languages
Right, except for Facebook, Twitter, Reddit and Wikipedia (yes, Facebook has a different PHP runtime which uses gcc to get a performance boost, this is an irrelevant implementation detail).
The last thing I knew was that they moved from Ruby to Scala. Did they move to Java without my noticing, or do you count Scala as Java (since it's a JVM language too)?
EDIT: according to Wikipedia the service itself moved from Ruby to Scala while search moved from Ruby to Java.
I’ve come to see programming languages, like applications, as a collection of features bound by a narrative. (By ‘narrative’, I mean a set of styles and goals that allow one to make particular implementation decisions.)
Language design, like most engineering, is all about weighing tradeoffs. There may be a language that's ideal for a particular application but there will never be a perfect language.
The difference between Jython and CPython may tell a slightly different story: with them, you have the same language but different implementations that may be more appropriate in one circumstance than another. If you really want to utilize threads to accomplish your goal, use Jython (yes, there are ways to do the same thing in CPython). If you want very fast startup, use CPython.
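One of those "ways to do the same thing in CPython" is multiprocessing, which sidesteps the GIL by using processes instead of threads. A minimal sketch:

```python
from multiprocessing import Pool

def square(n: int) -> int:
    return n * n

if __name__ == "__main__":
    # Separate processes sidestep the GIL, so CPU-bound work can run in
    # parallel on CPython -- at the cost of process startup and pickling
    # overhead that Jython's real threads don't pay.
    with Pool(2) as pool:
        print(pool.map(square, [1, 2, 3, 4]))  # [1, 4, 9, 16]
```

That overhead is part of why "same language, different implementation" can still mean a genuinely different engineering choice.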
Maybe there could be a single, perfect language with different implementations providing the tradeoffs. The most difficult challenge may be bridging the performance/productivity gap between the C's and the Pythons of the world, but that may be a side effect of making multi-core programming easier. Amdahl's law[1] may prohibit that, though.
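For reference, Amdahl's law caps the speedup from n cores when only a fraction p of the work parallelizes:

```latex
S(n) = \frac{1}{(1-p) + p/n},
\qquad
\lim_{n \to \infty} S(n) = \frac{1}{1-p}
% e.g. p = 0.9 (90% parallelizable) caps the speedup at
% 1/(1 - 0.9) = 10x, no matter how many cores you add.
```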
Static vs. dynamic typing, OO vs. procedural, stateful vs. functional/pure, etc. all have intrinsic strengths and weaknesses. You can only compensate for so much in the implementation. Python will always be a bad language for writing kernels, for example.
"need for a multithreading-centric, statically typed, type inferred language that compiles to native code but possesses the feel, readability and programmer efficiency of Python."
Language designers have moved on to building on top of LLVM to get the native-code benefit. That includes even Apple, Objective-C's backer. Of these, the highly interesting languages that provide the other benefits demanded:
People have been searching for the One True Language for decades. Why is it so important that there be one? The interfaces we use to talk between components are so much more important, so that we can use what fits for different parts.
> People have been searching for the One True Language for decades.
The one true language is mathematics. John McCarthy (a math professor) discovered that programs could be represented as out-degree-one directed graphs (cons cells) that compose mathematical functions (lambda expressions). These simple mathematical principles form the basis of the functional programming paradigm, and they have led Lisp programmers to success for nearly half a century.
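That representation is easy to sketch even in Python: a cons cell is nothing but a closure over two values, i.e., composed lambda expressions (a loose, illustrative rendering, not McCarthy's actual formulation).

```python
# Church-style pairs: a cons cell represented purely as a function that
# remembers its two fields. car/cdr just ask for one field or the other.
def cons(head, tail):
    return lambda pick: head if pick == 0 else tail

def car(cell):
    return cell(0)

def cdr(cell):
    return cell(1)

lst = cons(1, cons(2, cons(3, None)))   # the list (1 2 3) as cons cells
print(car(lst), car(cdr(lst)))          # 1 2
```

Everything here is function definition and application, which is the sense in which "programs are mathematical functions" stops being a slogan and becomes a data structure.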
I don't know about the "One True" language, but it seems clear that people are casting about for the next-generation successor to C and C++; that is to say, a fast, compiled, statically typed language incorporating the lessons of the last 25 years and dropping various baggage. That's what the whole D/Rust/Go thing is about.
D is really great. The terseness, modeling power, and general ease of writing it correctly the first time seem to obviate the need for higher level languages, for me.
"Great prototypes are written in dynamic languages, but the lasting replacement will be written in a dull typed language like Java/C++/C#"
That is the gist.
Cross-platform support is king for me right now, which means Java and JavaScript are getting a good run even though I don't much like the languages themselves.
How easy is it to take a D program cross platform? Can it cross compile easily?
The gdc compiler sticks the D front end over gcc, so cross compiling is the same as with gcc/mingw, to the best of my understanding. LDC is another compiler that uses the LLVM backend and it might support cross compiling, I don't know.
Setting up a VirtualBox VM and building from that is pretty easy these days. And you need to test the executables somewhere anyway.
D will be perfectly portable by default, just as with modern C++. If you use the standard library it will compile and run everywhere.
What gives you the idea that statically typed languages are undesirable to "hardcore" users? If anything, I've seen the opposite--the really advanced programmers (especially those who specialize in programming languages) seem to prefer statically typed languages.
Of course, there is some serious selection bias here for me, largely because programming language design has been one of my primary interests lately, and, as I said earlier, the field seems generally biased towards static typing.
http://beza1e1.tuxen.de/articles/proglang_mistakes.html