As a still-learning programmer, can someone point me towards the ELI5 (Explain Like I'm Five) version of this? Some of the first questions that enter my mind after reading this intro --
Why is this relevant? Why do we as programmers want to know about pure functional programming? What are its advantages? What's the point? This seems to add complexity. Etc.
I know there are good answers to these, but this intro has done little more than whet my appetite a bit...
When you say "add complexity", I think you mean "add difficulty". For example, a language that has only one-argument functions, and builds multi-argument functions on top, is a simpler one than a language that has multi-argument functions built in.
Simplicity is hard -- and pure functional programming is a lot about simplicity, which does make it hard -- hard to learn, and hard to figure out how to tackle various problems.
But the simplicity has many rewards. Mathematical simplicity makes a lot of mathematical deductions tractable -- this is what is typically referred to as "ease of reasoning" about code.
Referential transparency adds a whole lot of ease to the reasoning process.
The pure functional approach also means that type safety goes much further. If you forget to hook up your functions together properly (forget an input, ignore an output) you're likely to get warnings/errors. In the imperative approach, if you forget a side-effecting statement that adds another input to some code, the compiler cannot help you.
Imagine testing -- imperative code requires tests to build up a mock world as input for the code, and then inspect the mock world that results as an output from the code.
Testing a referentially transparent function requires just calling it with differing inputs. It makes property-testing possible, too!
Imagine composition -- pure function composition of two functions composes the meanings of those functions. If those two functions work, so will the composed one.
Try to combine two pieces of imperative code -- that's not necessarily true at all, one might mutate data that's in use by the other. Aliasing issues may arise, etc.
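A minimal JavaScript sketch of the composition point above (the function names here are mine, chosen for illustration):

```javascript
// Two pure functions: each depends only on its input.
const double = x => x * 2;
const inc = x => x + 1;

// Composing them composes their meanings: no hidden state
// in either function can break the combination.
const compose = (f, g) => x => f(g(x));

const doubleThenInc = compose(inc, double);
console.log(doubleThenInc(5)); // inc(double(5)) = 11
```

If `double` and `inc` each work on their own, `doubleThenInc` works too; there is no shared mutable data for one to corrupt under the other.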
So functional code is easier to reason about, more composable, easier to test, and gets more safety from type checking. It is harder to learn to use, and harder to figure out how to use it to solve certain problems. With time, however, more and more problems yield to the pure functional programming approaches.
I would like to add that recursive calls can be nicer than loops, when multiple conditions or branching are possible in the loop body.
An equivalent implementation with while loops, if conditions, continues, and breaks is usually harder to design correctly than a good recursive function.
Functional programming encourages use of small simple datatypes. Tuples instead of small structs. Lists instead of more complex container types. First-class functions instead of classes.
However, IMO 'purity' is a bit of a buzzword when applied to functional programming. You may disagree, but that's how I feel about it.
I think recursive solutions are often very nice indeed. But at worst, recursive solutions are sometimes worse than structured loops, because you can use recursions very much like a free-style goto, allowing your code to have even less restrictive structure than loops. That's why directly recursive code is usually discouraged and reusing recursive combinators is encouraged.
To resolve this, a nice suggestion is to use the term "denotative programming". That is, programming where types and values have mathematically simple denotations that predictably compose exactly according to the syntactic composition being used.
Tail-recursive calls are essentially gotos, but a little more well behaved (if you try to put state into explicit parameters instead of mutable variables).
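To make the "state in explicit parameters" point concrete, here is a rough JavaScript sketch (names are mine):

```javascript
// Imperative loop: state lives in mutable variables.
function sumTo(n) {
  let acc = 0;
  for (let i = 1; i <= n; i++) acc += i;
  return acc;
}

// Tail-recursive version: the same state travels in an
// explicit accumulator parameter; each tail call is
// effectively a "goto" back to the top with new bindings.
function sumToRec(n, acc = 0) {
  if (n === 0) return acc;
  return sumToRec(n - 1, acc + n); // tail position
}

console.log(sumTo(5));    // 15
console.log(sumToRec(5)); // 15
```

(Note that JavaScript engines generally do not guarantee tail-call optimization, so in practice this pattern shines in languages that do.)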
> I would like to add that recursive calls can be nicer than loops
Having a good library of higher-order functions that encapsulate common recursion patterns is a lot nicer than doing recursion by hand, or having to read code where someone did recursion by hand and you have to not only figure out how the recursion works, but whether the author did anything wrong.
Higher-order functions are control structures, and we should encourage their use for all the same reasons we encourage the use of while loops in preference to people building their loop structures by hand out of gotos and conditional gotos.
As another nice effect, higher-order functions (or as we call them in their use as control structures: combinators) unify data structure and control structures. E.g. folding over a list is equivalent to various loops, the Monad instance of Maybe represents simple exception handling, trees and their traversal are identifiable in some sense, etc.
At least in lazy, statically typed functional languages, like e.g. Haskell.
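The fold-versus-loop equivalence mentioned above can be sketched in JavaScript with `reduce`:

```javascript
// A fold (reduce) is the higher-order counterpart of a loop
// over a list: the combining function is the loop body and
// the initial value is the loop's starting state.
const xs = [1, 2, 3, 4];

// Loop form:
let total = 0;
for (const x of xs) total += x;

// Fold form: the same control structure, named and reusable.
const total2 = xs.reduce((acc, x) => acc + x, 0);

console.log(total === total2); // true
```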
As someone who only discovered functional programming this year (and now do almost all my programming in OCaml), I would say that the biggest revelation that I came to (and didn't really appreciate until I'd studied the subject for a while) was the lack of mutability in (pure) functional programming. This allows you to maintain invariants and makes it much easier to reason about your code, which has huge advantages when you're working with concurrency or parallelism.
Here's one of my favorite examples. Say we have a set S and an element x, as in the following code:
let S = Set.empty () in
let S = Set.add S x in
f ();
if (Set.mem S x) then ...
If our code is purely functional, then it's impossible that x is no longer in S after our call "f ()". However, if we're using mutability, then maybe in our function f, we remove x from S. If that's the case, we really can't be sure what will happen when we reach our if-statement.
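A rough JavaScript analogue of this guarantee (the helper names and the stand-in for `f ()` are mine):

```javascript
// Pure update: produce a new set, leave the old one alone.
const addPure = (set, x) => new Set([...set, x]);

// Stands in for the opaque call "f ()" in the example above.
const someUnrelatedWork = () => {};

const s0 = new Set();
const s1 = addPure(s0, 42);

// someUnrelatedWork cannot touch s1 unless we hand it s1
// explicitly, so the membership test below must succeed.
someUnrelatedWork();
console.log(s1.has(42)); // true
console.log(s0.size);    // 0 -- the original set is untouched
```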
This is certainly one of my favorite properties of functional programming: there is simply less to worry about. Everything is either local or explicit--distant parts of my code cannot implicitly affect each other.
However, for me, the real realization was this combined with the fact that these languages also manage to be more expressive than the imperative ones I was using before. This might seem a bit counter-intuitive--after all, it feels like they give you less power, not more--but it has been the main draw to FP for me.
So the code is simpler, there is less to worry about and it's more expressive. What not to like?
Without understanding functional programming, you can't invent MapReduce, the algorithm that makes Google so massively scalable. The terms Map and Reduce come from Lisp and functional programming. MapReduce is, in retrospect, obvious to anyone who remembers from their 6.001-equivalent programming class that purely functional programs have no side effects and are thus trivially parallelizable.
Different languages are good at different things. Some are good for pattern matching, others are great for describing algorithms, others are fit well into a specific OS ecosystem, others are good at manipulating specific data structures (such as vectors, matrices, graphs, etc.), and so on.
The most powerful part about pure functional programming is that you get more powerful functions. Because your functions don't change the state of data, you can confidently combine them into new functions, producing a sort of chain effect. This can lead to very concise programs since there is no need to hold intermediate variables (i, count, temp, ...) in some place. What you sometimes end up with are one liners that -- if the functions are named well enough -- explain the entire logic of the program.
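One such one-liner in JavaScript (a made-up example, with names chosen to carry the logic):

```javascript
// Each step is a pure transformation; no i/count/temp
// variables are needed to thread intermediate results.
const shout = words =>
  words
    .filter(w => w.length > 3)   // drop short words
    .map(w => w.toUpperCase())   // transform each
    .join(' ');                  // collapse to one value

console.log(shout(['a', 'pure', 'functional', 'one', 'liner']));
// "PURE FUNCTIONAL LINER"
```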
There are certain problems that lend themselves well to the functional style, mainly those where you are piping a piece of data through various functions and applying different transformations to it. It's worth learning at least a couple of functional languages so you're not applying a hammer where you need a wrench.
> There are certain problems that lend themselves well to the functional style, mainly those where you are piping a piece of data through various functions and applying different transformations to it.
I don't quite understand this - how does this fit into the 'immutability' of fp? So functions can mutate data that goes into them, but they can't maintain internal state?
In FP, functions shouldn't mutate data structures that were passed by reference as input; instead, any input data should be copied before performing changes on it, to avoid modifying external state.
This is how e.g. Array.prototype.filter() works in JavaScript:
var numbers = [1,2,3,4,5,6,7,8,9,10];
var evenNumbers = numbers.filter( function(number) {
return number % 2 === 0;
});
I would like to add that it is perfectly possible to write purely functional programs that do mutation. You just need to be explicit about it.
Many purely functional data structures also have advanced implementations that need much less copying than "immutable" would at first make you think, since two values can share unchanged structure.
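The structural-sharing point can be sketched with a persistent linked list in JavaScript (a toy version; real libraries use more sophisticated structures):

```javascript
// A persistent singly linked list: "adding" an element
// creates one new cell that points at the old list, so the
// tail is shared, not copied.
const cons = (head, tail) => ({ head, tail });

const xs = cons(2, cons(3, null)); // the list [2, 3]
const ys = cons(1, xs);            // the list [1, 2, 3]

// ys reuses xs wholesale: only one new cell was allocated,
// and xs itself is still intact and usable.
console.log(ys.tail === xs); // true
```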
In f# for example you can write:
let result = list1
|> List.filter (fun e -> ...)
|> List.map (fun e -> ...)
|> ....
no mutation, but you are piping the intermediate result to the next operation. In C# you could achieve a similar result by using the dot operator (method chaining) instead of the pipe operator |>.
The answer is that currying reduces the cost of breaking apart and combining functions on their arguments, two things that are done frequently in functional programming.
Currying allows functions to be broken apart ("specialized") on certain arguments for virtually free because specialization and normal function application, in effect, become the same thing.
Consider Haskell's _map_ function for example:
map :: (a -> b) -> [a] -> [b]
You can specialize it into a function that adds one to every element in a list by simply applying it to its first argument:
incList = map (+1)
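As a rough JavaScript analogue (with the currying done by hand, since JS functions aren't curried by default):

```javascript
// A hand-curried map: applying it to just the function
// argument yields a specialized list transformer, exactly
// like the Haskell `map (+1)` above.
const map = f => xs => xs.map(f);

const incList = map(x => x + 1);   // partially applied
console.log(incList([1, 2, 3]));   // [2, 3, 4]
```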
Currying also reduces the cost of combining functions by allowing any function that returns a function to have the returned function applied to any following arguments directly, without need of intervening syntax. For example, _flip_ modifies a function of two arguments (or more, thanks to currying) to receive its first two arguments in "flipped" order, the second first and the first second:
flip :: (a -> b -> c) -> (b -> a -> c)
To flip _map_'s arguments, I don't need to write
let flipMap = flip map in flipMap [1,2,3] (+1)
but, thanks to currying, can simply write
flip map [1,2,3] (+1)
The value of these tools, however, is not to be able to write complete calls like those in my examples above but to quickly assemble from simple functions the exact functions needed to solve the problems at hand, usually by feeding those functions into the higher-order functions found in libraries.
In sum, currying makes some of the most frequently done things in functional programming virtually free.
I think the title is a bit misleading. Perhaps "a quick explanation of one common feature of functional languages in five minutes" would be more appropriate?
To be fair, the g and h names should be obvious to anyone who studied function composition in high school.
Also, the names you gave don't help at all, as the idea of calling those g and h is to treat them anonymously. By calling it "callTwice", you are thinking about it as some sort of utility function, which is not the case. It shows you haven't grasped the idea behind it yet.
Might be obvious to you and other FPers, but when you're trying to teach FP to programmers who may only "sort of" or "not at all" grasp FP, and who spend most of their daily life writing CRUD business-logic apps for a living but WANT to learn more about FP in order to be better programmers, it's totally useless. I found the other example immediately more understandable - even though I personally understood the content, I sympathize with the people who are struggling. The FP community needs to take the academic context out of a lot of its teaching mechanisms.
It's actually very easy to see what is happening with g and f in this example. I think the part that is pretty opaque to non-FPers is why you would want to do that. Until something clicks, it is hard to see why this construct is anything beyond a very convoluted way of calculating 3 to the fourth power.
In FP, you can treat all functions as "curryable" because that's how you achieve composition. So that would be the equivalent of naming all your functions "callTwice". Now that doesn't make sense.
Could you be a bit more specific as to what you find confusing? Is it:
- That you can use the name of a variable, h, as if it was a function? That's because Javascript has first class functions [1] - the language is defined to support passing them around as variables and calling them like that.
- That you can use h at all even though it's neither a local variable nor a parameter of the anonymous function? That's because functions in javascript aren't simply procedures in the traditional sense - i.e. description (function signature) + code - they are also closures [2]. If you declare one function inside another, it can capture (have a reference to) variables and parameters of the outer one. You are also guaranteed that local variables and parameters will not get cleaned up while a closure still exists that holds a reference to them.
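A small sketch of the closure point from the second bullet (the counter here is my own example, not from the slides):

```javascript
// The inner function captures `count` from the enclosing
// call; the binding survives after makeCounter returns.
function makeCounter() {
  let count = 0;                 // local to this call
  return function () {           // a closure over `count`
    count += 1;
    return count;
  };
}

const tick = makeCounter();
console.log(tick()); // 1
console.log(tick()); // 2 -- `count` was kept alive by the closure
```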
OOOOOOh, ok I see how this is working a bit, I think. I am very inexperienced with Javascript, and I think I only just "got" what was being done here, so I'll try to explain my thought processes as a newbie:
So, in Javascript, variables can be functions. Which means you can pass in a function as a variable to another function. And the part that says h(h(y)) basically says that h has to be a function. Which means you pass in a function, and then it gets applied twice in the way specified within that function, "g".
Another odd part is the function you pass in:
g(function(x) {
return x * x;
})(3);
because you are passing in a function, but I'd assumed that if you can only pass in one variable, and that variable has to be a function, then it seemed like you wouldn't be able to pass in an initial value for the function you want to apply. But I guess in Javascript you can pass in a value for an anonymous function defined inside a function call by using this syntax:
g(function(x) {
whatever it is that happens in this function;
})(some_value_to_be_passed_into_the_previously_defined_anonymous_function);
`g(f)` returns a function, which is then immediately called with another value. You're passing `3` to the result of `g`, not to the function you passed to `g`.
Just to help in case it isn't obvious - g also returns a function - which could be assigned to a variable. In this case it's called straight away but you could do it like
var powerOfFour = g(function(x) {return x*x;});
powerOfFour(3) === 81; //true
Yes, but that has an obvious lack of syntactic parallelism, one especially important when talking about currying since you might be tempted to relate that to the CoffeeScript
(x, y) -> x + y
It also has a subtly different performance characteristic in GHC, it turns out. Same semantics though.
This is a really excellent presentation. It got me to finally make the connection between currying and closures - I think that was the missing awesome piece I was missing for really understanding the importance of closures.
IMO currying is a bit too advanced an idea for non-functional programmers.
If I were to explain functional programming in 5 min, I would start from Functions and Values.
In FP, there is no difference between a Function and a Value. A function can be defined without a name. You can say everything is a function; a function takes an input and, after performing its magic (calculations), it must return you an output. All values are also functions. So technically there is no difference between x = 2 and x = function(){....}.
Now when you realize that all values are also functions or all functions are also values, you can pass functions inside other functions.
Typically in imperative programming your functions only accepts values (like integers, strings etc.) and returns you values. But in FP you can pass functions inside other functions and you can return functions from functions.
Let's define a function doSomething(input){ return input }. Now you can call it by passing x, where x could be a value or a function.
y = doSomething(x)
Now if x = function(){ ..} or x = 2, it really doesn't matter for doSomething because doSomething crunches an input and returns it.
By learning this technique, you can do a lot of "interesting things". And currying is one of those "interesting things". Everything in FP really boils down to functions/values. 5 mins are over now.
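A runnable version of the doSomething sketch above:

```javascript
// `doSomething` doesn't care whether its input is a plain
// value or a function: both are first-class values.
function doSomething(input) { return input; }

const x1 = 2;
const x2 = function () { return 2; };

console.log(doSomething(x1));   // 2
console.log(doSomething(x2)()); // 2 -- call the returned function
```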
function f(x, y) {
  return x + y;
}

function g(x) {
  return function(y) {
    return x + y;
  };
}

f(1, 2) === g(1)(2); // true
How does this work? Since the function g is itself also returning a function, the call g(x) is itself literally another function address (if you will) awaiting an additional parameter with an invisible (y) on the end.
I have been dabbling in functional programming for a short while now, but this one took me completely by surprise. I had no idea JS would pass a function like that.
The way functions work in JavaScript is awesome. Just think of a function as another value a variable can have. This might be more clear if we wrote it as:
var g = function(x){
var f = function(y){
return x + y;
}
return f;
}
Also because of the magic of Lexical Scoping, the new function f holds onto a reference to the variable x that it saw when it was created. If you run var myfunction = g(5), you effectively get this:
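In sketch form (reusing the g defined just above):

```javascript
var g = function (x) {
  return function (y) {
    return x + y;
  };
};

var myfunction = g(5);
// Thanks to lexical scoping, myfunction now behaves as if
// it were written:  function (y) { return 5 + y; }
console.log(myfunction(3)); // 8
```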
this is great! (both because of content and presentation tool)
so why is there a limitation of just one argument? since lambda expressions can receive multiple arguments, and simply resolve them in order, why can't functional programming do the same? is this just a stylistic thing?
Because lambda expressions of just one argument are the simplest possible thing that works.
One of the neat things you learn if you start playing with lambda expressions in languages with algebraic data types is that multiple argument functions are modeled perfectly as functions which take single, tuple-typed arguments.
f :: (Int, Int) -> Int
f = \(a, b) -> a + b
ghci> :t curry f
curry f :: Int -> Int -> Int
ghci> f(3,2)
5
So, if you let your lambdas be as simple as possible then you naturally get "multi-argument lambdas" as the combination of tuples and lambdas.
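The same "single argument, tuple-typed" idea can be mimicked in JavaScript with arrays and destructuring (a loose analogue, since JS has no real tuples):

```javascript
// A single-argument lambda plus a tuple (array here) models
// a multi-argument function: the "two arguments" are really
// one tuple argument taken apart by destructuring.
const add = ([a, b]) => a + b;

console.log(add([3, 2])); // 5
```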
Nah, the presentation framework is open source (http://github.com/hakimel/reveal.js) and we provide an export option that lets you easily get the HTML contents out for migration.
I have a habit of assuming bad faith whenever money is involved, but looking at your pricing information, you're really just charging for a service...
and oh god, look at that mark up.
<section data-markdown>
<script type="text/template">
## Markdown support
For those of you who like that sort of thing.
Instructions and a bit more info available [here]
(https://github.com/hakimel/reveal.js#markdown).
```
<section data-markdown>
## Markdown support
For those of you who like that sort of thing.
Instructions and a bit more info available [here]
(https://github.com/hakimel/reveal.js#markdown).
</section>
```
</script>
</section>
Can someone explain to me like a child what makes JavaScript more "functional" than any of the many, many, many other languages that offer anonymous code blocks? kthnx.
I suspect that the audience for these slides is Javascript programmers, which is why JS is heavily emphasized.
I think lambda functions are the only truly FP feature Javascript provides, but they're so darn useful that they really should be emphasized. Lately, I can't stand writing in languages that don't provide lambda functions (like PHP pre-5.3, or current Java), so at least Javascript has that going for it.
Check out a few Lisp tutorials on YouTube and you're good to go.
Don't try to apply functional principles to procedural languages if you are a beginner.
Using Lisp will force you to do and understand functional programming. And frankly, Lisp is a lot of fun and easy to start with. Don't be scared off by all these nasty (()) and weird (+ 4 5) operations; it will make sense quite fast.
The best functional lisps to use right now would be Clojure and Racket. They both use immutable data structures by default, though neither is "pure". Clojure if you want all the good JVM bits and maybe actually convincing your manager to use it and Racket if you just want to learn using a language that's more beginner-friendly (http://htdp.org/ and the DrRacket IDE) or (mostly) compatible with Scheme (SICP, Little Schemer, etc.).
I've found there aren't really any widely used purely functional languages out there except Haskell (which needs it due to laziness).
Purity is a false idol. CL is a functional programming language. Functional as in programming with functions. Maintenance of referential transparency is the duty of the programmer. As long as you have HOFs and some form of lexical closure as basic language features, you can call a language functional. Tail-call optimization, default immutability, automated structure cloning, parametric datatypes, default currying, and Hindley-Milner type inference are language features that make mathematical reasoning easier, but they are neither essential nor definitive requirements of a functional programming language.
Then Python is also a functional language, just like CL. Except no one calls Python a functional language, and everyone seems to call CL a functional language.
Sorry for my ignorance here, but that function takes two parameters right? The slides seemed to suggest that functional languages should use only one. So I guess my question is how come you're using two and not currying?
It's not a matter of should so much as can. In some languages (e.g. Haskell), functions are curried-by-default; in others (e.g. Lisps), functions are uncurried-by-default. In each of these, I could (if I wanted) write either version, e.g.
-- Curried and uncurried add in Haskell:
curriedAdd :: Int -> Int -> Int
curriedAdd x y = x + y
uncurriedAdd :: (Int, Int) -> Int
uncurriedAdd (x, y) = x + y
;; Curried and uncurried add in Scheme:
(define (curried-add x)
(lambda (y) (+ x y)))
(define (uncurried-add x y)
(+ x y))
Haskell chooses to make curried the default, but allows you to write uncurried functions if you'd like. Lisp chooses the opposite.
Also, just generally, currying is a transformation. You can do it by hand, but it can also be done mechanically, either automatically by the language in certain situations, or by applying (easy to define and usually included anyway) curry and uncurry functions where appropriate.
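The curry and uncurry helpers mentioned above are easy to define by hand in JavaScript (for two-argument functions):

```javascript
// curry and uncurry as ordinary functions: the
// transformation is mechanical in both directions.
const curry = f => x => y => f(x, y);
const uncurry = f => (x, y) => f(x)(y);

const add = (x, y) => x + y;
const curriedAdd = curry(add);

console.log(curriedAdd(1)(2));          // 3
console.log(uncurry(curriedAdd)(1, 2)); // 3
```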