Hacker News | peterkelly's comments

Unless a violation of that contract can lead to a crash or security vulnerability...


The post is about changing the serialization format so that enforcing contracts becomes easier. I am defending the post, so I don't understand what you're hinting at here.


Then reject the request if it is incomplete?


The most important one in the context of 2025 is this one:

On the foolishness of "natural language programming". https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...


Thanks for the link. Great read.

Apparently Dijkstra loved using the em-dash!


If you look at the scanned PDF of what looks like a typewritten document, he was using two hyphens everywhere. Links are at the top of the page.


I love that essay. It's such a joy to read, and even though it is very short and to the point it says so much both about the topic itself and society at large.

And it's just so obviously correct.


It is not obviously correct. The unstated premise is that programming is - or should be - similar to writing mathematical proofs.


The proofs-as-programs (Curry-Howard) correspondence has been fairly well established, I think.


It's the requirements discovery phase that always breaks every pure mathematical treatment of software development.

(And also, as Dijkstra points out, software is far more complex. But in this case, it's the requirements discovery.)


That’s because it’s not pure math, but applied math. Which also has the requirements discovery phase.


Yes, and constructing mathematical theories is completely different from finding proofs.


It is a bit like architecture is just physics or painting is just chemistry. Technically true in some reductionist sense, but not necessarily the most useful way to think about it.


No, the premise is that programming is the act of writing precise specifications, which is easier in a precise language. Similarly to mathematical proofs.


Suffers from philosophical liberalism, I'd say.

Which leads to the assumption that some people "just" don't want to be better.

This mental framework is how society justifies the superiority of some persons while ignoring the material realities of others.

This framework is the root of classism, the root of racism, the root of elitism, and finally it manifests in individuals as narcissism.


IMHO you are going way, way too far. Far into the weeds.


Nah, I think they're probably seeing warning signs. So much of Dijkstra's writing is pseudo-intellectual, pretentious word salad. Which is a shame, because he was an actual intellectual.


Setting aside the elephant in the room (modern coding LLMs are in some sense indeed compilers for natural language -- except they still "compile to" ordinary programming languages), it nonetheless seems to me that even conventional programming languages use too little, not too much, natural language.

Example:

- "&&" rather than "and",

- "||" rather than "or",

- "if (A) B" rather than "if A then B"

This only makes the code harder to read for beginners without apparent benefit. I'm not sure whether Dijkstra would have agreed.

Thankfully though, programming languages already use mostly explicit (English) language in function names. Which is a much better situation than in mathematics, where almost every function or operator is described by a single nondescript letter or symbol, often even in Greek or in a weird font style.

There is a tradeoff between conciseness and readability. Mathematics decided long ago to exclusively focus on the former, and I'm glad this didn't happen with programming. If we read Dijkstra as arguing that focusing only on readability (i.e., natural language) is a bad tradeoff, then he is right.


I suspect Dijkstra would have disagreed with you about "and" and "or", judging from his criticism of the technical report which had the line "even the standard symbols used for logical connectives have been avoided for the sake of clarity".

Personally I think one advantage of '&&' and '||' is that it's clear they're notation whose syntax and semantics you need to know. For instance, '&&' is typically "short-circuiting" and will not evaluate its RHS if the LHS is false; a natural-language "if a and b then ..." doesn't suggest that critical detail or necessarily nudge you to go and check. (Not that Dijkstra was in favour of short-circuiting logical operators, to judge by https://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD100... point 4...)
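For concreteness, a tiny Python sketch of short-circuiting (Python's `and` behaves like C's `&&` here; the helper name `rhs` is made up for illustration):

```python
def rhs():
    # Side effect lets us observe whether the right-hand side ran.
    rhs.called = True
    return True

rhs.called = False
result = False and rhs()   # LHS is False, so rhs() is never evaluated

print(rhs.called)  # False: the RHS was skipped entirely
print(result)      # False
```

A bare "a and b" in prose gives no hint that `b` might never be evaluated, which is exactly the detail the symbolic notation signals you to look up.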

More generally, I'm not sure of the benefit of tailoring the language syntax for beginners rather than experienced practitioners; the advantage of '&&', '||' and the rest of the "C-like" syntax stuff in a new language is that it's familiar to a wide base of existing experienced programmers.


We could have "and" and "cand" (conditional and, his terminology), which would be more explanatory. Or we use "and" (conditional) and "ncand" (non-conditional and). Depending on which should be the default.

> More generally, I'm not sure of the benefit of tailoring the language syntax for beginners rather than experienced practitioners; the advantage of '&&', '||' and the rest of the "C-like" syntax stuff in a new language is that it's familiar to a wide base of existing experienced programmers.

At one point in time every awkward and outdated syntax was the more familiar option, but that is of course not a good argument against improving it; otherwise we would be stuck with it forever.


Having worked a bit with some old programming languages that are very natural-language-like (FORTRAN's .AND. and .OR., or COBOL's IS GREATER THAN), I don't think the readability increases that much; maybe the benefit is more approachability. In a more symbolic language, skimming code is much easier because the symbols provide a visible distinction between syntactic control flow and the declared functions, variables, etc.


Note that only about a third of this critique of "conventional programming languages" (the last example, and with slightly different syntax in place of the preferred one) applies to the #1 language on the Tiobe index, which does in fact use "and" instead of "&&" and "or" instead of "||", but uses ":" instead of "then" in if statements.


Yeah that was more a criticism of the common C syntax. The use of the colon in Python is pretty good as it mirrors the usage in ordinary text. Though it still follows the pattern "if (A): B else: C" with parentheses around A. That seems superfluous.

(Unfortunately Python also follows the herd in using "=" for assignment and "==" for equality. It should arguably be "<-" or ":=" for assignment and "=" for equality. But that's a matter of asking for better symbols rather than of asking for better English operator names.)


> The use of the colon in Python is pretty good as it mirrors the usage in ordinary text. Though it still follows the pattern "if (A): B else: C" with parentheses around A.

Python doesn't require parens for the condition in an if-statement, and the default settings of black or Ruff formatters will remove them if present where not needed. (They can be needed if, e.g., you are breaking a condition across multiple physical lines.)

> Unfortunately Python also follows the herd in using "=" for assignment and "==" for equality. It should arguably be "<-" or ":=" for assignment and "=" for equality.

Note that Python uses ":=" for the assignment expression operator as well as "=" for the simple assignment statement operator.
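To make the distinction concrete, a small Python sketch contrasting the two operators (the regex and variable names are just for illustration):

```python
import re

# Statement assignment uses "=".
text = "order id: 12345"

# Expression assignment (the "walrus" operator) uses ":=",
# letting you bind and test a value in one place.
if (m := re.search(r"\d+", text)) is not None:
    order_id = int(m.group())

print(order_id)  # 12345
```

So Python ended up with both spellings: "=" for plain statements and ":=" where an assignment must appear inside an expression.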


For && and || I disagree: they are short-circuiting operators, and as such they deserve different names than just 'and' and 'or'. That said, '&' and '|' should have been named 'and' and 'or', which would have freed '&' and '|' for use as the short-circuit operators.


Friend, this filter is a feature not a bug.


This is gold! Thanks.


> when judging the relative merits of programming languages, some still seem to equate "the ease of programming" with the ease of making undetected mistakes.

This hits so hard. cough dynamic typing enthusiasts and vibe coders cough


> Another thing we can learn from the past is the failure of characterizations like "Computing Science is really nothing but X", where for X you may substitute your favourite discipline, such as numerical analysis, electrical engineering, automata theory, queuing theory, lambda calculus, discrete mathematics or proof theory. I mention this because of the current trend to equate computing science with constructive type theory or with category theory.

https://www.cs.utexas.edu/~EWD/transcriptions/EWD12xx/EWD124...


I’m not sure that is the focus of most serious dynamic languages. For me, it’s the polymorphism and code reuse they enable that the popular static languages generally aren’t close to catching up to.


I’m curious, can you give an example that wouldn’t be solved by polymorphism in a modern statically typed OO language? I would generally expect that for most cases the introduction of an interface solves for this.

Most examples I can think of would be things like “this method M expects type X” but I can throw in type Y that happens to implement the same properties/fields/methods/whatever that M will use. And this is a really convenient thing for dynamic languages. A static language proponent would call this an obvious bug waiting to happen in production when the version of M gets updated and breaks the unspecified contract Y was trying to implement, though.
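The "type Y happens to implement the same methods" pattern can be sketched in a few lines of Python (class and method names are invented for illustration):

```python
class Duck:
    def speak(self):
        return "quack"

class Robot:
    # Unrelated to Duck, but happens to implement the same method.
    def speak(self):
        return "beep"

def announce(thing):
    # Duck typing: any object with a .speak() method works here,
    # whether or not it declares any shared interface.
    return thing.speak().upper()

print(announce(Duck()))   # QUACK
print(announce(Robot()))  # BEEP
```

This is the convenience being described, and also exactly the implicit contract that a static-language proponent would want written down.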


That’s basically the main example I’d give. I think the static proponents with that opinion are a little myopic. Those sorts of relationships could generally be statically checked, it’s just that most languages don’t allow for it because it doesn’t fit in the OOP/inheritance paradigm. C++ concepts seem to already do this.

The “bug waiting to happen” attitude kind of sucks, too. It’s a good thing if your code can be used in ways you don’t originally expect. This sort of mindset is the same trap that inheritance proponents fall into. If you try to guess every way your code will ever be used, you will waste a ton of time making interfaces that are never used and inevitably miss interfaces people actually want to use.
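As a sketch of "those relationships could be statically checked": Python's `typing.Protocol` does structural checking outside the inheritance paradigm, much like the C++ concepts mentioned above (names here are invented for illustration):

```python
from typing import Protocol

class Speaker(Protocol):
    def speak(self) -> str: ...

class Robot:
    # Robot never names Speaker or inherits from it.
    def speak(self) -> str:
        return "beep"

def announce(thing: Speaker) -> str:
    return thing.speak().upper()

# A static checker (mypy/pyright) accepts this call because the
# shapes match structurally; no declared relationship is needed.
print(announce(Robot()))  # BEEP
```

The contract is explicit and checkable, yet callers don't have to anticipate every implementing type up front.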


> The “bug waiting to happen” attitude kind of sucks, too. It’s a good thing if your code can be used in ways you don’t originally expect.

Rather than call it myopic I would say this is a hard won insight. Dynamic binding tends to be a bug farm. I get enough of this with server to server calls and weakly specified JSON contracts. I don’t need to turn stable libraries into time bombs by passing in types that look like what they might expect but aren’t really.

> If you try to guess every way your code will ever be used

It’s not about guessing every way your code could be used. It’s about being explicit about what your code expects.

If I’m stuffing some type into a library that expects a different type, I don’t really know what the library requires, and the library certainly doesn’t know what my type actually supports. There’s a lot of finger crossing and hoping it works, and that it continues to work when my code or the library code changes.


> Rather than call it myopic I would say this is a hard won insight. Dynamic binding tends to be a bug farm.

I run into typing issues rarely. Almost always the typing issues are the most trivial to fix, too. Virtually all of my debugging time is spent on logic errors.

> It’s not about guessing every way your code could be used. It’s about being explicit about what your code expects.

This is not my experience in large OOP code bases. It’s common for devs to add many unused or nearly unused (<3 uses) interfaces while also requiring substantial refactors to add minor features.

I think what’s missed in these discussions is the massive silent incompatibility between libraries in static languages. It’s especially evident in numerical libraries, where there are highly siloed ecosystems. It’s not an accident that Python is so popular for this. All of the interfaces are written in Python. Even the underlying C code isn’t in C because of static safety. I don’t think any of that is an accident. If the community started over today, I’m guessing it would instead rely on JITs with type inference. I think designing interfaces in decentralized open-source software development is hard. It’s even harder when some library effectively must solely own an interface, and static typing requires tons of adapters/glue for interop.


> Almost always the typing issues are the most trivial to fix, too.

For sure. My issue is with the ones I find in production. Trivial to fix doesn’t change the fact that it shipped to customers. The chances of this increases as the product size grows.

> It’s common for devs to add many unused or nearly unused (<3 uses) interfaces while also requiring substantial refactors to add minor features.

I’ve seen some of this, too. The InterfaceNoOneUses thing is lame. I think this is an educational problem and a sign of a junior dev who doesn’t understand why and when interfaces are useful.

I will say that some modern practices like dependency injection do increase this. You end up with WidgetMaker and IWidgetMaker and WidgetMakerMock so that you can inject the fake thing into WidgetConsumer for testing. This can be annoying. I generally consider it a good trade off because of the testing it enables (along with actually injecting different implementations in different contexts).
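Sketching the WidgetMaker/WidgetMakerMock pattern from above in Python (same hypothetical names, dynamic version, so no separate IWidgetMaker interface is needed):

```python
class WidgetMaker:
    def make(self) -> str:
        return "real widget"

class WidgetMakerMock:
    # Same shape as WidgetMaker; stands in during tests.
    def make(self) -> str:
        return "fake widget"

class WidgetConsumer:
    def __init__(self, maker):
        # The dependency is injected rather than constructed here,
        # so tests can pass in the mock.
        self.maker = maker

    def describe(self) -> str:
        return f"using {self.maker.make()}"

print(WidgetConsumer(WidgetMakerMock()).describe())  # using fake widget
```

In a statically typed language the extra IWidgetMaker declaration buys compile-time checking of that substitution; here it comes for free, at the cost of nothing verifying the mock's shape.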

> I think what’s missed in these discussions is massive silent incompatibility between libraries in static languages.

What do you mean by this?

> It’s especially evident in numerical libraries where there are highly siloed ecosystems. It’s not an accident that Python is so popular for this. All of the interfaces are written in Python.

Are we talking about NumPy here and libraries like CuPy being drop-in replacements? This is no different in the statically typed world. If you intentionally make your library a drop in replacement it can be. If you don’t, it won’t be.

I am not personally involved in any numeric computing, so my opinions are mostly conjecture, but I assume that a key reason python is popular is that a ton of numeric code is not needed long term. Long term support doesn’t matter much if 99% of your work is short term in nature.


It is just a classical Dijkstra strawman - hiding a weak argument behind painting everybody else as idiots. In fact it is much easier to make dangerous undetected mistakes in C than it is in Python.


I downvoted you. First of all, your explanation of what a "classical Dijkstra strawman" is lacks substantiation. Second, your statement about C vs Python is a sort of strawman itself in the context of static vs dynamic typing. You should compare Python with things like Java, Rust or Haskell. (Or C with dynamic languages from a similar era: LISP, Rexx, Forth, etc.)


It is a strawman because nobody actually equates the ease of programming with the ease of making undetected mistakes.


Presumably no one thinks "I love using [dynamically-typed language] because I can make mistakes easier", but on the other hand, isn't it the case that large codebases are written with low initial friction but high future maintenance?


So you agree it is a strawman?


Perhaps Dijkstra was going for the former, but is it bad to consider a stronger argument along the lines of what he said?


A charitable interpretation would be that he is criticizing, e.g., JavaScript's silent type coercion, which can hide silly mistakes, compared to, e.g., Python, which will generally throw an error in the case of incompatible types.
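The contrast in one example: in JavaScript, `"1" + 1` silently coerces to the string `"11"`, while Python refuses the same mixed-type operation outright.

```python
# Python raises rather than coercing, so the mistake surfaces
# at the point of the bug instead of propagating as bad data.
try:
    result = "1" + 1
except TypeError:
    result = "error"

print(result)  # error
```

That early, loud failure is the sense in which a dynamically typed language can still make undetected mistakes hard.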


Flagged because this takes you to a random website so everyone is going to be reading a different article.

I love Kagi as a search engine, and this is a cool idea, but posting a "random web page selector" without explanation on HN is just confusing.


For some people it's a job.

For others it's a calling.

Nothing wrong with either - I just think it's worth being aware that people have different motivations.


Why do all the AI agents on the services page have human names and profile pictures if there are "no humans in the loop"?


It's the persona's name.


There was a post on HN the other day where someone was launching an email assistant that used AI to summarise emails that you received. The idea didn't excite me, it scared me.

I really wish the tech industry would stop rushing out unreliable misinformation generators like this without regard for the risks.

Google's "AI summaries" are going to get someone killed one day. Especially with regards to sensitive topics, it's basically an autonomous agent that automates the otherwise time-consuming process of defamation.


Create the problem, sell the solution.


I thought vibe coding tools did this already


If there's one thing I definitely don't want AI in the middle of, it's communication with other people. The potential for misunderstandings due to hallucination in summaries, both on my end and the recipient's, scares me. There were some pretty bad examples with Apple News.

Accuracy matters, especially when communicating with customers or between managers/employees, and I can imagine many kinds of scenarios where this goes wrong.


I'm honestly curious why Apple (and other OS vendors like MS and various Linux distributions) still feel the need to tweak their UIs many, many years after having reached maturity.

How many iterations does it take before you get it right?

I get that there's a certain sense of fashion to it, but so often these changes are either neutral or worse, and it just seems so pointless. I don't see any concrete benefits of this year's UI design over what was already there 10-20 years ago.


Operating systems from the biggest companies have been stable and feature-rich under the hood for years now, but that doesn't make them attractive in the long term. Certainly not for daily consumers. So they introduce all sorts of meaningless "featurettes" and UI/UX changes to pretend they are working hard and thoughtfully on their products, creating these never-ending "exciting experiences".

It's not about polishing to get it right nowadays, but rather making change for the sake of change, because that looks good in terms of marketing.

As for this particular Apple case, with all the "26" versions and Liquid Glass: the backlash they got in June puts their actions alongside Microsoft's with Windows 8/8.1.


No one gets a promotion for saying "yep, still looks good."


  > I'm honestly curious why Apple (and other OS vendors like MS and various Linux distributions) still feel the need to tweak their UIs many, many years after having reached maturity.
It's hard to market something like an OS to consumers and devs without some large, noticeable changes.

Though one then has to wonder: why do we need a new OS every year...

