Scala is statically typed through-and-through. Pyret is designed to always offer a dynamic account of the language. We do have a static type checker under development (it's all-but-ready to release), but it will always work over a language where type annotations are optional. We also have a radically different idea about type inference, which we are currently trying out. In short, Pyret and Scala are already quite different and will grow even more so soon.
I'd be interested in seeing some more explicit examples of the differences, because I get a very Scala-like vibe from all the examples in the article (I accept that dynamic vs. static is a fundamental difference that doesn't really show up in syntax).
Scala is a great language with a rich excursion into type system design (and other things). Pyret is designed to reinforce specification (more general than types) and to do so through a combination of static and dynamic means. That much I've already said.
Let me point to testing as an example of where we differ. Testing is really important to us. As you've seen, we have lightweight, in-place test cases. In addition:
1. We have a growing family of language support for writing tests well (e.g., see "satisfies" on the Pyret home page). I expect this to grow richer and richer.
2. We are working on some interesting ideas about the scope of testing blocks. We don't say this on the home page, but you can also write "test" blocks for data definitions, not only for functions. But now the functions that operate over those data are going to want to use instances of the data. We're working on a "scope conduit" to take definitions from the data definition to its functions.
3. As some readers have noted, in most languages, you simply cannot write unit tests for a function nested inside another, because you can't even name it. In Pyret, because definitions can include tests, this creates no problem at all.
4. We are now working on a type-inference approach that uses tests, rather than code, to infer types. The code is then checked against these inferred types.
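To make point 4 concrete, here is a minimal sketch in Python (Pyret's actual machinery is different, and all function names here are my own invention): take a function's test cases, infer a monomorphic signature from the types of the example inputs and outputs, then check the implementation against that signature.

```python
def infer_signature(test_cases):
    """Infer an (argument-types, result-type) signature from example tests.

    test_cases: a list of (args, expected_result) pairs.
    Returns None if the tests disagree on the types involved.
    """
    sigs = {(tuple(type(a) for a in args), type(expected))
            for args, expected in test_cases}
    return sigs.pop() if len(sigs) == 1 else None

def check_against(fn, signature, test_cases):
    """Check fn against the inferred signature by running it on the
    test inputs and confirming the observed types match."""
    arg_types, result_type = signature
    return all(
        all(isinstance(a, t) for a, t in zip(args, arg_types))
        and isinstance(fn(*args), result_type)
        for args, _ in test_cases)

tests = [((2, 3), 5), ((0, 7), 7)]
sig = infer_signature(tests)          # ((int, int), int)
assert check_against(lambda a, b: a + b, sig, tests)
```

A real checker would of course analyze the code statically; this dynamic check is only meant to convey the "tests, then inferred type, then check against the function" pipeline the list describes.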
Essentially, testing is a kind of metaprogramming, and metaprogramming works best when you integrate it into the language (as Racket has shown) rather than bolt it on as ad hoc external tools. Each of the above four examples is really just a specific instance of this general issue. Since testing is an especially important form of (dynamic) metaprogramming, integrating it well into the language from the very beginning is likely to lead to expressiveness, flexibility, and cleanliness that may be harder to achieve from the outside.
This is part of a more general philosophy. To me, testing is a form of _specification_, and we want to see rich descriptions of program invariants expressed with a gentle blending of static and dynamic methods. We don't want to be so grandiose as to call Pyret a "testing-oriented" programming language, but in our minds it is, with testing having both the base role of making sure we (as programmers) didn't screw up but also the exalted role of providing an alternate, independent description of the program we're trying to write. (This is something I emphasize a lot in my teaching: e.g., http://cs.brown.edu/courses/cs019/2012/assignments/sortacle, http://cs.brown.edu/courses/cs019/2012/assignments/oracle).
This is a philosophical position, and so hard to quantify through individual bits of code, but I already see small influences of this (as above) and I expect it to guide us to a very different point in the design space over time. It feels safe to say this is quite different from Scala's viewpoint.
That's pretty fascinating. (Static) type inference based on tests sounds like an ill-posed problem, or at least one that can't produce human-meaningful annotations. That is to say, if one function has spec A for a generic parameter, and another function has spec B, it doesn't seem possible in general to generate the spec for their composition in "closed form".
Like, what's the type spec of "map" where the monad argument is a list of even numbers and the function to be lifted maps numbers to uppercase strings? It doesn't seem like we can meaningfully talk about the "type" of the function's result with any sort of specificity beyond List[String] or maybe List[String[Uppercase]], even though the real type is quite a bit more specific.
How does inference work downstream of this point? Can we get reasonable errors before run-time?
Our philosophy is that the typed Pyret language is an explicitly-typed one. Inference is just a convenience to save you some typing. We want to do a good enough job of it, and if you want to get more specific, you do it yourself. For instance, in the foreseeable future we do not plan to ever infer a refinement. So I think that addresses your example.
Furthermore, we aren't going to play the cute trick of having an inference process so consistent with the type-checker that we can get rid of the type-checker. One nice thing about old-fashioned recursive-descent type-checkers is that they give simple and familiar error messages; I don't know of any research papers written in the past few decades about improving their error reporting. That's a feature. [NB: There's always an exception. Hopefully my point is clear.]
So, whatever type we infer is one that gets fed to the type-checker to check against the function. Typically, assuming the function has passed its tests, the function should be consistent with its inferred type, unless the tests didn't cover the code properly.
What happens with code that can apply to multiple types, i.e., parametrically polymorphic code? We made an explicit decision not to have union types; had we had them, this would have been stickier. But because we don't, we simply assume you mean to use the function polymorphically, and that's the type we "infer". [Put differently, if you have a polymorphic function, you should test it on at least two different types!]
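As a toy illustration of that last bracketed point (again in Python, and again my own sketch rather than Pyret's algorithm): given the monomorphic signatures observed at each test, one can anti-unify them, keeping positions where all tests agree concrete and generalizing positions that differ into shared type variables.

```python
def generalize(observed):
    """Anti-unify observed monomorphic signatures (tuples of argument and
    result types, one tuple per test) into a single, possibly polymorphic one."""
    columns = list(zip(*observed))   # one column per argument/result slot
    var_for, out, fresh = {}, [], iter("abcdefgh")
    for col in columns:
        if len(set(col)) == 1:
            out.append(col[0].__name__)      # every test agrees: stay concrete
        else:
            if col not in var_for:           # tests differ: a type variable,
                var_for[col] = next(fresh)   # shared by identically varying slots
            out.append(var_for[col])
    return tuple(out)

# identity tested on an int and on a str: "a -> a"
assert generalize([(int, int), (str, str)]) == ("a", "a")
# but tested only on ints, we can only infer plain "int -> int"
assert generalize([(int, int)]) == ("int", "int")
```

Note how the second case motivates the advice above: a test suite that only ever exercises one type gives the inferencer no evidence of polymorphism.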
For reporting errors, we intend to make a distinction between declared types and inferred ones -- a distinction that I don't believe is commonly made in other type inference error presentations. For instance, inconsistency between two inferred types should perhaps be treated a bit differently than inconsistency between two declared types or between an inferred and a declared type.
More broadly, types are specifications. The point of a spec is to provide a redundant statement of program behavior that can be checked against an implementation. From that perspective, inferring the type from the implementation is a strange idea: one shouldn't be inferring specs from code. [Yes, I know, it's been fashionable to infer "specs" from code for over a decade now. I would not call those specs -- they're more like summaries of the code.]
But tests, to my mind, are a lightweight form of specification, so it does make sense to infer one specification from another [just as people have gone the other way and used specifications to derive tests: the area of specification-driven test-generation]. Especially since it's being backed up by a true type-checker.
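The other direction mentioned here -- deriving tests from a specification -- can be sketched as a tiny property-based tester (in Python, in the spirit of QuickCheck; the names are mine):

```python
import random

def check_property(prop, generate, trials=200, seed=0):
    """Derive concrete test cases from a specification: sample random
    inputs and report the first counterexample to the property, if any."""
    rng = random.Random(seed)
    for _ in range(trials):
        case = generate(rng)
        if not prop(case):
            return case   # a counterexample to the spec
    return None           # no counterexample found

# A (partial) spec for sorting: idempotent and length-preserving.
def sort_spec(xs):
    ys = sorted(xs)
    return sorted(ys) == ys and len(ys) == len(xs)

def gen_list(rng):
    return [rng.randint(-50, 50) for _ in range(rng.randint(0, 10))]

assert check_property(sort_spec, gen_list) is None   # sorting meets this spec
```

The spec here plays exactly the dual role described above: a single predicate yields hundreds of concrete tests.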
As I said, these are all pretty novel and possibly heretical ideas. But this is the experiment we're trying.