I primarily use Java in my day job, and although there's a lot not to like about it, some standard complaints are somewhat overblown. For example, nothing really forces you to use long variable/method names, but even if you do, a modern IDE -- I use IntelliJ -- with auto-completion makes these fairly painless. It's not like you have to type every letter of "SomeClass.veryLongMethodName()" -- rather, you type "SomeClass.v" and then hit tab.
Similarly, I never type "for (int i = 0; i < blah; i++)." I type "cmd-J, fori, return." Etc....
So, I'm not particularly persuaded by the "you have to type a lot" complaints.
Similarly, Spring has simplified a lot of stuff that J2EE had previously made overly complicated.
That's not to say that Java doesn't have problems or isn't suboptimal for a certain set of problems. (In fact, it's unclear to me where Java might be optimal.)
Most standard complaints are over 10 years old; people should just get over it. The IDEs are a little fat and I/O hungry, but if you don't have a beefy machine, just get an SSD and that problem is gone.
Spring is great, but Java EE 6 simplified things further by standardising things like CDI, JPA, JSF, JAX-RS (Jersey's API), etc... (also, Java EE 7 was just released).
It's actually a pleasure to write web apps in Java nowadays. Great IDEs, debugging tools, lots of mature options, unlike some cool kids..
Jersey is a true joy. It's really a nice way to write restful apps.
I never liked Spring. And large enterprise apps' source code makes my eyes hurt. However, none of this is a reflection on Java or the JVM itself. I think we'll have another 40 years of people predicting the end of Java until we realize it's interwoven into every aspect of computing and can't be tossed out any more easily than C or C++ can be. The JVM will probably be healthy for a very long time, and as long as the JVM is around, Java will probably be chosen for new projects (and maintenance on Java apps will probably be a nice retirement income for many of us now).
> Jersey is a true joy. It's really a nice way to write restful apps.
It's actually mostly JAX-RS which is a joy, and allows you to swap out Jersey for e.g. RestEasy with very little work.
> I never liked Spring. And large enterprise apps' source code makes my eyes hurt. However, none of this is a reflection on Java or the JVM itself.
I agree. I hate Spring. Spring, J2EE and pre-JIT Java have probably done most damage to Java's public image. JAX-RS is very nice and lightweight. Play is brilliant, if you can deal with mixing in Scala here and there.
I think you hit the nail on the head... it's always been the "enterprise" apps that bugged me about Java. Java may be entirely capable, but, for example, getting an application set up in Eclipse + Java has always just seemed painful to me... I mean an existing app: Ant + Tomcat + X + actually getting a working debugging session.
VS has usually been: get latest from source control, open the .sln, and click debug... though for enterprise code it seems like you wait forever.
Probably why I've been so taken/enamored with nodejs lately. For the most part it's not enterprise platforms... it's small, well-tested modules stacked together like Lego blocks. Event streams and pipes are awesome.
It is sometimes surprising how many deeply nested versions of npm modules are in other modules... but it's still better than having to dig through a dozen projects to update a common dependency they all share.
Java and C# will be around for a very long time; COBOL is still pretty widely used... that doesn't mean I want to greenfield something in it.
Also, Gulp (or even Grunt) with npm is far less friction than anything I've seen in the Java space... and NuGet (.NET) doesn't really compare well.
> I'm not particularly persuaded by the "you have to type a lot" complaints.
I haven't used Java in anger in a long time, but my beef was never that you had to type a lot -- it was that the language is verbose. Verbosity impacts typing, sure, and as you've said, IntelliJ makes typing it out painless.
The flip side of verbosity, though, is reading the code. There's no "hit tab" or "Cmd-J, fori, return" to increase comprehension speed. And reading tends to be far more common than writing.
Well, there are programming languages and then there are conventions. I think a programming language is more like an alphabet, and programming conventions are the real languages.
I can scroll through code at high speed and understand it at a mere glance once I "get" both the language and the convention that was used (if only the doc and comments didn't get in the way all the freaking time...) Things that feel out of place immediately stand out for more careful inspection.
Autocomplete, code snippets, common refactoring options, etc. -- pretty much everything an IDE does automagically for programmers -- are a boon for enforcing conventions as well. Once you're proficient with them and with how they're configured for a project, you can pretty much "see" the history of the code without even looking at the commit history.
I once had a boss who used to say this: "I love looking at programmers reading code. It's like that guy from The Matrix, he's looking at undecipherable numbers and says 'my, my, look at that brunette in the sexy red dress'!"
> Well, there are programming languages and then there are conventions. I think a programming language is more like an alphabet, and programming conventions are the real languages.
I like this. As a thought experiment, consider the idea of transforming a convention into another "letter" in the alphabet. A simple example would be the natural numbers. If our alphabet is "a", then we can represent 1 as "a", 2 as "aa", 3 as "aaa", and so on (unary). If we expand our alphabet with a "b", then we basically have binary. And in the case of "aaaaaaaa", "baaa" is an improvement. But there will be an optimal point somewhere, where the load of expanding the alphabet is greater than the benefit of the more compact representation. This explains to a small extent why we (programmers) use hex, and didn't really go to higher bases (digits + alpha could get us to base 36!)
The analogy starts to fall apart when we think about the ability to add to the alphabet, using the conventions. We can't invent new letters, but we can create/repurpose words that encapsulate a lot of meaning.
I'm not really sure where I was going with this anymore!
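For what it's worth, the digit-count tradeoff above is easy to check in plain Java -- here's a quick sketch (class and method names are my own):

```java
// How many "letters" it takes to write a number as the alphabet (base)
// grows: unary needs n symbols, base b needs roughly log_b(n).
class BaseLengths {
    static int digits(int n, int base) {
        if (base == 1) return n;                   // unary: "aaaa..." n times
        return Integer.toString(n, base).length(); // positional, bases 2..36
    }

    public static void main(String[] args) {
        System.out.println(digits(8, 1));    // 8 symbols: "aaaaaaaa"
        System.out.println(digits(8, 2));    // 4 symbols: "baaa" in the a/b alphabet
        System.out.println(digits(255, 16)); // 2 symbols: "ff" (base 36 is Java's max radix)
    }
}
```

Note the guard for base 1: `Integer.toString` silently falls back to base 10 for radixes outside 2..36.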
> I can scroll through code at high speed and understand it at a mere glance once I "get" both the language and the convention that was used.
The issue for me is that the boilerplate is noise. I don't want to see the similar parts, I want to see the parts that differ. Pseudo-C example:
x = [1,2,3,4,5]
product = 1
for (int i = 0; i < x.length; i += 1) {
    product *= x[i]
}
sum = 0
for (int j = 0; j < x.length; j += 1) {
    sum += x[j]
}
vs pseudo-Haskell:
x = [1,2,3,4,5]
product = fold (*) x
sum = fold (+) x
With the latter example, there are objectively fewer places for bugs to be present. Sure, the definition of fold is somewhere else, but there's only one implementation, whereas the pseudo-C has two. Of course, I'm sure modern Java is much better than this, but it's still a step down.
I'm pretty sure we could configure an editor to expand the second one into the first, but then we come back around to writing-vs-reading.
An interesting experiment would be to have your raw input, eg. msluyter's example of "cmd-J, fori, return" as the editable source file rather than the generated Java output.
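For what it's worth, Java 8 streams can get pretty close to the fold version -- a sketch (class and method names are my own):

```java
import java.util.Arrays;
import java.util.List;

// A rough modern-Java equivalent of the pseudo-Haskell folds above,
// using the Stream API's reduce (Java 8+).
class Folds {
    static int product(List<Integer> xs) {
        // fold (*) over the list, starting from the identity 1
        return xs.stream().reduce(1, (a, b) -> a * b);
    }

    static int sum(List<Integer> xs) {
        // fold (+) over the list, starting from the identity 0
        return xs.stream().reduce(0, Integer::sum);
    }

    public static void main(String[] args) {
        List<Integer> x = Arrays.asList(1, 2, 3, 4, 5);
        System.out.println(product(x)); // 120
        System.out.println(sum(x));     // 15
    }
}
```

As in the Haskell version, there's one `reduce` implementation and only the operator and identity differ between the two call sites.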
Aye, I won't bother critiquing the example; I see what you mean with the boilerplate as noise, and part of me agrees with you.
One of the main points of programming is to reduce the noise to enable higher levels of code, while keeping an easy access to what's under the hood to fine tune or add new functionalities. A programming language is merely a list of things that will make you lose your temper while you do all that.
It's really cons lists that are harmful -- foldl and foldr are just ways to operate on them. GP's example would work fine with conc trees and a divide-and-conquer reduce instead.
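A sketch of the divide-and-conquer point in Java (names are my own): reduce with an associative operator doesn't care whether the data is walked left-to-right like a cons list or split into chunks and recombined like a tree, which is what a parallel stream does under the hood.

```java
import java.util.stream.IntStream;

class ParallelReduce {
    static int sumSequential(int n) {
        // left-to-right walk, like folding a cons list
        return IntStream.rangeClosed(1, n).reduce(0, Integer::sum);
    }

    static int sumParallel(int n) {
        // same operator, but the range is split, reduced in chunks,
        // and the partial results are combined tree-style
        return IntStream.rangeClosed(1, n).parallel().reduce(0, Integer::sum);
    }

    public static void main(String[] args) {
        System.out.println(sumSequential(100)); // 5050
        System.out.println(sumParallel(100));   // 5050 -- same, since + is associative
    }
}
```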
Back in college, when I was interviewing for jobs, interviewers would sometimes first ask you to write any executing Java class, because some interviewees who said they 'knew' Java couldn't do even that simple task.
A couple years later when Eclipse came out I told a coworker this anecdote and he joked that it was 'main' + ctrl-spacebar followed by 'sysout' + ctrl-spacebar. I lol'd.
You can shorten it even further to typing 'fori' then triggering the expansion by pressing tab.
Best shortcut that every IntelliJ user should know is "Find Action" (ctrl + shift + a on Win), press and start typing what you want to do. Will even present shortcuts in the list for next time.
Eclipse has something similar, which it calls "code templates." Features like these leave me wondering why "excessive typing" is often cited as a criticism of Java. As far as I can estimate on the fly, I type less than half of the characters that make up my Java code.
And as the article points out, for readability, verbosity is arguably a good thing. I sure believe it is, perhaps because I am more distracted by poor symbol selection and syntax than most programmers I know. Speaking of, I don't particularly like the C-lineage syntax, Java included. I don't like the braces and parentheses. I prefer a syntax with fewer symbols and more keywords. Crazy, I know.
Excessive typing isn't the issue. The problem with verbose code is that code gets read way more than it gets written. In Java you're often reading 15 lines of code for something that could be 2-3 in other languages.
And still, I have seen plenty of Java code which is perfectly readable and Haskell, Perl, or Ruby code (2-3 lines) that resembled line noise. And I am a Haskell fan :).
The worst offenders in Java are IMO anonymous inner classes. But IntelliJ shows them abbreviated -- e.g., IIRC Runnables are displayed in Java 8 lambda syntax.
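To illustrate (names are my own): the same Runnable written as a pre-Java-8 anonymous inner class and as a lambda -- the abbreviation IntelliJ shows is essentially the latter.

```java
class RunnableStyles {
    static int counter = 0;

    // Java 7 style: five lines of ceremony around one statement
    static Runnable oldStyle() {
        return new Runnable() {
            @Override
            public void run() {
                counter++;
            }
        };
    }

    // Java 8 style: the same behavior as a lambda
    static Runnable lambdaStyle() {
        return () -> counter++;
    }

    public static void main(String[] args) {
        oldStyle().run();
        lambdaStyle().run();
        System.out.println(counter); // 2 -- both did the same thing
    }
}
```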
> In fact, it's unclear to me where java might be optimal.
I'd say server side processing of whatever. That's what we use it at work for, and I don't see how any of the mainstream languages would be much, if any, better. Our software has to run on our customers' servers, so portability is of utmost importance since we don't want to need to compile it for random OS's. This rules out languages that need native binaries. Performance is also important, since people don't want to buy bigger machines just because you wrote your software in Python/Ruby/whatever and it's 10+ times slower than it needs to be. Also, our software is old enough that C# didn't even exist for many years.
Python/Ruby/whatever will most likely get faster as access to tools like LLVM and the JVM itself is getting more common these days. You'll still get the HUGE bump in development times (due to the inherent expressiveness of the languages) and at least a good portion of the speed benefits you'd reap from Java itself.
"You'll still get the HUGE bump in development times (due to the inherent expressiveness of the languages)"
Not sure if that is true. Maybe for small typical web projects or server-side scenarios that benefit from a thin scripting layer (like at Google). If it is true, though, then Haskell and OCaml/F# would be even better, because you get expressiveness without losing the obvious advantages of strong static type systems.
Those languages are too cryptic for people to adopt them en masse.
There's a reason why Ruby/PHP/Python are so pervasive, 95% of people simply do NOT care about monads and poly-variadic fix-point combinators for mutual recursion.