
Well, why would it be taken as a given that programming should mimic the hardware?

Pure functions are easier to reason about (there may be exceptions of course), that's why they're interesting.

This paper is not related to side-effects - it's related to "totality", meaning all programs terminate. This could also be interesting for side-effecting programs, but it's of course much harder to verify in this case.

Due to the halting problem's existence, a total language cannot be Turing complete, but many useful programs can still be verified.
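As a rough illustration (a minimal Lean 4 sketch, not taken from the paper): termination checkers accept structurally recursive definitions, where the argument shrinks on every call, and reject general recursion.

```lean
-- Structural recursion: the Nat argument shrinks on every call,
-- so Lean's termination checker accepts the definition as total.
def sumTo : Nat → Nat
  | 0     => 0
  | n + 1 => (n + 1) + sumTo n

-- A definition like `def loop (n : Nat) : Nat := loop n` would be
-- rejected: every function must be shown total, so the language
-- gives up Turing completeness in exchange for verified termination.
#eval sumTo 10  -- 55
```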

I didn't see anything in the paper that claims that pure functions should be "all the way down", and the paper is not about side effects anyway.



>Well, why would it be taken as a given that programming should mimic the hardware?

Because that’s the environment programs run in; doing anything else is fighting against it. I would argue that humans have done this to great effect in some areas, but not in programming.

>Pure functions are easier to reason about (there may be exceptions of course), that's why they're interesting.

Prove it.


>Prove it.

Proofs are good evidence that pure functions are easier to reason about. Many proof assistants (Coq, Lean, F*) use the Calculus of Inductive Constructions, a language that only has pure, total functions, as their theoretical foundation. The fact that state-of-the-art tools for reasoning about programs use pure functions is a pretty good hint that pure functions are a good tool for reasoning about behavior. At least, they're the best way we have so far.

This is because of referential transparency. If I see `f n` in a language with pure functions, I can simply look up the definition of `f` and paste it into the call site with every occurrence of `f`'s parameter replaced with `n`. I can simplify the function as far as possible. Not so in an imperative language: there could be global variables whose state matters, or aliasing that changes the behavior of `f`. To actually understand what the imperative version of `f` does, I have to trace its execution. In the worst case, __every time__ I use `f` I must repeat this work.
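A minimal Python sketch of the difference (the function names here are made up for illustration):

```python
# Pure version: the call f(3) can be replaced by its definition
# (and ultimately by its value) anywhere, at any time.
def f(n):
    return n * n + 1

assert f(3) == 3 * 3 + 1 == 10  # substitution always holds

# Impure version: the same call gives different answers depending
# on hidden state, so f_impure(3) cannot be replaced by a value.
offset = 0

def f_impure(n):
    global offset
    offset += 1
    return n * n + offset

a = f_impure(3)
b = f_impure(3)
assert a != b  # same expression, different results
```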


And if I go to a flat earth conference, I will find that they produce lots of “proof” for flat earth.

I don’t really accept “this group of people whose heads are super far up the ‘pure functions’ ass choose purity for their solutions” as “evidence” that purity is better.

I’m not saying that purity is bad by any stretch. I just consider it a tool that is occasionally useful. For methods modifying internal state, I think you’ll have a hard time with the assertion that “purity is easier to reason about”.


>For methods modifying internal state, I think you’ll have a hard time with the assertion that “purity is easier to reason about”.

Modeling the method that modifies internal state as a function from old state to new state is the simplest way to accomplish this, i.e., via preconditions and postconditions.
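A minimal Python sketch of that modeling (the `Counter` type is hypothetical): the "method" becomes a pure function from old state to new state, and the postcondition can be checked directly.

```python
from dataclasses import dataclass, replace

# Immutable state: "mutation" is modeled as old state -> new state.
@dataclass(frozen=True)
class Counter:
    value: int

def increment(old: Counter) -> Counter:
    # Postcondition: new.value == old.value + 1
    return replace(old, value=old.value + 1)

s0 = Counter(0)
s1 = increment(s0)
assert s0.value == 0  # the old state is untouched
assert s1.value == 1  # the postcondition holds
```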


> doing anything else is fighting against the environment

Video game programming, where performance really matters, is a great way to see the cost of forcing the hardware to deal with human abstractions (mainly OOP). Rules like "never have a Boolean in a struct" or "an array with linear access can be faster than a tree with log access" wake you up to the reality of the hardware. :p

In academia, and the especially academic, utopian realm of functional programming, you're trained to live in dreamland.

If you can afford it, though, hey, it's a nice place to be.


> In academia, and the especially academic, utopian realm of functional programming, you're trained to live in dreamland.

OO outside of contexts where every little bit of performance matters suffers in exactly the same way.

> If you can afford it, though, hey, it's a nice place to be.

No arguments there! A huge majority of applications can afford to be written this way, even ones where performance is a concern (WhatsApp, for example).


> A huge majority of applications can afford to be written this way, even ones where performance is a concern

This is sometimes true for any one given app but it's not a good overall outcome.

It is why we today have multi-GHz CPUs with lots of cores and dozens of GB of RAM, and yet most actions feel less responsive than they did in 1995 with a 120MHz CPU, 1 core and 1MB.


My comment was in response to the need to squeeze out every last bit of performance you possibly can. You're talking about ignoring performance altogether, which is not what I'm talking about.


Then again, abstractions can be helpful too, including in game programming. Epic's heavily invested, for example. Or in databases, relational algebra often beats out an array with linear access. I agree that OOP-in-the-small lacks mechanical sympathy, though. That's one reason for the entity-component model, though another, I'd argue, is that it provides abstractions that are easier to reason about.


In Epic's case it helps that Tim Sweeney started his game company doing games in VB and always cared about a good developer experience; that is why Unreal Engine has always favoured good abstractions, and has a C++ GC, Blueprints and now Verse.

He was never a "let's keep doing C in games" kind of developer.

Thankfully, without the likes of Unreal, Unity and similar efforts, we would still be stuck in the "that's how it has always been done here" kind of mentality.


Andrew Kelley has a pretty good talk on gaining performance in the zig compiler.

In many cases performing math again is faster than memoization.

The general gist is to try to cram as much into the cache lines as possible, sometimes even at the "cost" of calculating values again.
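A rough Python sketch of that tradeoff (the class names are made up): recomputing a cheap derived value keeps each record smaller, which in a systems language means more records fit per cache line.

```python
# Two hypothetical particle representations: one stores a derived
# field (speed), one recomputes it on demand.
class Stored:
    __slots__ = ("vx", "vy", "speed")
    def __init__(self, vx, vy):
        self.vx, self.vy = vx, vy
        self.speed = (vx * vx + vy * vy) ** 0.5  # memoized at construction

class Recomputed:
    __slots__ = ("vx", "vy")
    def __init__(self, vx, vy):
        self.vx, self.vy = vx, vy
    @property
    def speed(self):
        # Recomputed every access: more math, but a smaller object,
        # so more of them fit in each cache line.
        return (self.vx * self.vx + self.vy * self.vy) ** 0.5

a, b = Stored(3.0, 4.0), Recomputed(3.0, 4.0)
assert a.speed == b.speed == 5.0  # same answer either way
```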


Jak and Daxter did pretty well for a game largely written in GOAL, a Lisp dialect.

And apparently Epic believes in this enough to have created Verse. To quote Tim Sweeney's point of view, see "The Next Mainstream Programming Languages":

http://lambda-the-ultimate.org/node/1277


A slight tangent: there's also a related notion to termination, that allows you to describe event loops.

Basically it's about the loop making progress on every iteration, but we can have an infinite number of iterations.

I think the distinction is related to the one between data and co-data.
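A rough Python sketch of the idea (productivity rather than termination, with a made-up generator name): the loop below never terminates, but every iteration produces its next value in finite time, the way codata is observed piecewise.

```python
import itertools

# A productive event loop as codata: it never terminates, but each
# step yields an observation in finite time (it "makes progress").
def ticks():
    n = 0
    while True:        # infinitely many iterations...
        yield n        # ...but every iteration produces a value
        n += 1

# Any finite observation of the infinite stream is well-defined.
first_five = list(itertools.islice(ticks(), 5))
assert first_five == [0, 1, 2, 3, 4]
```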



