
To clarify: I mean to say that 0 represents "nothing" in C in the same way that it does in math. This is one of the things I love about C--the ability to exploit the properties of numbers directly, rather than needing to build abstractions around them. Bitmasks, for example, are one of my favorite things ever.

But yeah, point well-taken. Perhaps I'll take the opportunity to wax on the other special values used in Foundation & CoreFoundation sometime.



0 isn't really "nothing", though. It's special, in that it's the additive identity of a lot of useful sets of numbers, but it's definitely something. An integer containing 0 is completely different from a pointer containing nil, conceptually speaking. To really have "nothing", you could use an int*, where a NULL pointer means "nothing", and a valid pointer means it contains a value. Or something more efficient with e.g. a separate flag. But in any case there's no real native support in the language for the concept.


Step back for a second and think about what the number 0 means. If I give you 0 dollars, what have I given you?

Or go to Wikipedia: 'The word zero came via French zéro from Venetian zero, which (together with cypher) came via Italian zefiro from Arabic صفر, ṣafira = "it was empty", ṣifr = "zero", "nothing".'


I'm well aware of both the meaning and etymology of the word.

Now, you should step back for a second and think about what the value 0 means in a program. Take the following function:

    int secondsUntilNextEvent()
Does 0 mean "the next event is immediate", or does it mean "there is no interval because there is no next event"? 0 here does not necessarily mean "nothing".

An option type gives you a different state for "there is no value here, at all". Putting a zero into an int certainly does not signify "no value". It signifies a value, and that value is zero.

What number is greater than three but less than one? Nothing. There is no such number. "0" is not a correct answer.


Glad you are aware that in at least some contexts, zero really is nothing, rather than '0 isn't really "nothing", though', as you claimed earlier.

You write: 'Putting a zero into an int certainly does not signify "no value". It signifies a value, and that value is zero.'

Again, the way you write this makes it seem like you say that this has always been the case and is the only way it could be, when in fact this idea of zero being "just another value" is a fairly recent one. And the way our machines handle scalar values (and where what you write is largely true) is even more recent and more of a special case.

'What number is greater than three but less than one? Nothing. There is no such number. "0" is not a correct answer.'

'Nothing' is also not the correct answer. What you have is a contradiction, which is not nothing. Or you could talk about the possible results as sets, in which case you have the empty set, which is also not nothing. It does have the cardinality 0, though.

"The empty set is not the same thing as nothing; rather, it is a set with nothing inside it and a set is always something."

Of course, the empty set can be used to signify "nothing", just as the number zero can.

"The number zero is sometimes used to denote nothing. The empty set contains no elements."

Again, I'm not arguing that you can't have contexts in which "nothing" and "zero" are distinct. Just pointing out that those contexts are hardly universal enough to justify the blanket claim that zero really isn't "nothing".

Things are a bit more complicated and less clear-cut than that.

Which is why I always liked C's and Objective-C's somewhat loose but intuitively (for me!) workable handling of NULL, 0, nil, false etc.


It comes down to this:

Zero is not nothing. Sometimes it means nothing, which is not the same thing.

C pointer types are always option types. They can hold "nothing" or a pointer. Code that deals with pointers always has to deal with this option nature of the types, whether it wants the feature or not.

C primitive types are never option types. They can never hold "nothing". Some of them can hold a value that is sometimes used to represent "nothing", but again, that is not the same thing. Code that deals with primitive types and wants to be able to represent "nothing" either has to keep a separate flag or borrow a value from the primitive type's range to indicate "nothing" without any language support for that.

In short: I object to C mixing option types with pointer types. Life would be simpler if they were separate. NULL (nil etc.) is how C indicates "none" for pointers. C has no built-in indicator for "none" for primitives.


"Zero is not nothing."

Yes. In a computer, there is nothing that is nothing. You always have something that signifies nothing.

C pointers also do not hold "nothing". What you refer to as "nothing" is the address 0, which is a special value that the language makes certain guarantees about that you interpret as being "nothing" because of these guarantees.


> C pointers also do not hold "nothing". What you refer to as "nothing" is the address 0, which is a special value that the language makes certain guarantees about that you interpret as being "nothing" because of these guarantees.

Of course. The point is that pointers have that "special value", while primitives do not. The integer value 0 is not the same thing.


You're both right. Zero does mean nothing. Also, zero is not a special value for integers like it is for pointers. Enough of this self-righteous debate.


Dude, that is exactly what I was saying...


I never claimed it was. Sheesh.



