Regarding nil references, I now always think of this quote from Sir Charles Antony Richard Hoare when the topic arises:
"I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years. In recent years, a number of program analysers like PREfix and PREfast in Microsoft have been used to check references, and give warnings if there is a risk they may be non-null. More recent programming languages like Spec# have introduced declarations for non-null references. This is the solution, which I rejected in 1965."
So he's saying he rejected declarations that allow you to declare either nullable or non-nullable types?
Kotlin and Rust seem to have an interesting approach. In Kotlin, you have to explicitly declare nullable types, and Rust, if I am not mistaken, does not allow references/pointers to be null in the first place. I have just recently started learning more about Scala, and although it allows nulls to be passed, they are not idiomatic, and I am guessing it would not be too hard to have an automatic style checker that catches uses of null (at least those that do not interface with Java libraries).
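Something like this minimal sketch (findUser is just a made-up name) is what I mean; the Scala compiler happily accepts null anywhere a reference type is expected, which is exactly what such a checker would have to flag:

    // Scala accepts null for any reference type, so nothing stops
    // a Java-style API from returning it; this compiles fine:
    def findUser(id: Int): String =
      if (id == 42) "admin" else null // legal, but non-idiomatic

    // The idiomatic version makes absence explicit in the type:
    def findUserSafe(id: Int): Option[String] =
      if (id == 42) Some("admin") else None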
No, he's saying he now rejects references being nullable as the default state of all reference types. A concept of "nothing here" is sometimes necessary, so he presumably doesn't reject explicit nullability declarations (option types as in Haskell, the MLs, or Rust, for instance; not exactly an original approach either).
Scala's approach is sane because you can't disallow nulls when you're working with libraries that work with null values. On the other hand, Scala gives you the tools to deal with nulls very efficiently. You can easily lift nullable values to Options, and because Option is a monadic type, it can be used in for-comprehensions and is very composable.
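Here's a quick sketch of that lifting (javaLookup is just a stand-in for any null-returning Java API):

    // Option(x) lifts a possibly-null value: Option(null) == None.
    def javaLookup(key: String): String =
      if (key == "host") "localhost" else null

    // Because Option is monadic, lifted values compose in a
    // for-comprehension; the first None short-circuits the rest.
    val hostAndPort: Option[String] =
      for {
        host <- Option(javaLookup("host"))
        port <- Option(javaLookup("port"))
      } yield s"$host:$port"
    // hostAndPort == None here, because the "port" lookup returned null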
That's a bit harsh. JetBrains and the developers of the compiler are certainly aware of the Maybe/Option approach; they chose not to use it. Their approach is not as composable as Haskell's, but it is much more practical and, in my opinion, more readable.
If anything, the fact that Option is still used so rarely in Scala is an indication that maybe that experiment has failed.
Just out of curiosity, how is Kotlin's approach less composable than Option? (I'm new to this style of programming.) Is it because you can use Option types in maps and filters and the like?
Yes, but it goes deeper, and I can't do it justice typing this on my iPad.
The thing to understand about nulls is that they most often denote non-fatal errors that have to be handled explicitly, or else they get silently ignored. That's why nulls are evil: NullPointerExceptions take developers by surprise, at the worst possible time.
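A contrived sketch of that surprise: the mistake is made in one place, but the crash happens somewhere else entirely:

    val config: String = null               // the actual mistake happens here...
    def banner(s: String) = s.toUpperCase
    println(banner(config))                 // ...but the NullPointerException fires here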
Monads are useful for dealing with errors. They're a tool, or a design pattern if you will, that makes the developer either deal with the exceptional state or explicitly make it somebody else's problem. Developers who are not familiar with monads are doomed to implement them badly [1].
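For instance (a rough sketch; parseAge is made up for illustration), Option makes that choice explicit in the types:

    import scala.util.Try

    // The compiler won't let you treat an Option[Int] as a plain Int,
    // so the exceptional state can't be silently ignored.
    def parseAge(s: String): Option[Int] =
      Try(s.toInt).toOption.filter(_ >= 0)

    // Either deal with the error locally...
    val shown: String = parseAge("abc").map(a => s"age $a").getOrElse("unknown")

    // ...or make it somebody else's problem by passing the Option along.
    def oldest(inputs: List[String]): Option[Int] =
      inputs.flatMap(parseAge).sorted.lastOption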
If you're interested in more and are a little familiar with Scala, I've attached a URL to an awesome explanation of this topic [2].
"I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years. In recent years, a number of program analysers like PREfix and PREfast in Microsoft have been used to check references, and give warnings if there is a risk they may be non-null. More recent programming languages like Spec# have introduced declarations for non-null references. This is the solution, which I rejected in 1965."