Wednesday, May 5, 2010

Types à la Chart

I recently saw a debate on whether a certain language had a "weak" or "strong" type system. Naturally the debate ended in flames. As far as I could tell neither side knew what the other was talking about, which isn't surprising since I've seen almost as many definitions of "weak" and "strong" typing as there are developers who use the words. Frankly the words have too many meanings to be useful.

Ambiguity is only the start of the problem, though. Even if you pick something crisp like "static" vs "dynamic" you've still condensed away a lot of useful information. Agda and Java live in incredibly different spaces even though they are both statically typed. Or how 'bout the world of difference between E and Ruby, two dynamically typed languages?

In this article I'm going to try to convince you that the space is far more complicated than the typical debates would allow. The design space has a great many dimensions, perhaps infinitely many, but I'll boil it down to two important ones that nonetheless already show considerably more sophistication. Here's the setup: what can you know statically about an expression, like f(x), without executing or evaluating it, if all you know is static types? The two dimensions I use are 1) how "safe" is it: are there back channels that might violate abstractions? and 2) how rich and sophisticated can the static type be?

One big, fat caveat: this is almost certainly wrong in many details. I've listed a bunch of languages, and there just aren't enough "21 days" in a lifetime to understand in detail exactly what each one can promise and how one type logic does or does not contain another. Naturally if you are an expert on one of these languages and I got something a bit wrong you will conclude I'm an idiot even if everything else is 100% right. Such is the price of writing for a technical audience. In spite of any such errors I contend that even if some details are wrong the overall picture should be reasonably accurate and should convey my central thesis: this space is too complicated for binary distinctions.

Chart of Static Type Properties

Power

The horizontal axis is "power" or "sophistication" of the static type system's underlying logic. For this analysis I focused only on the ability of the static type system to describe actual data structures and computations. While it's interesting that C++, GHC Haskell, and Scala all have Turing complete type systems, none of their type systems seem to have the same ability to usefully describe computation as Agda's. Of course, I could be wrong. Feel free to bump the asterisked languages to the right accordingly.

The horizontal axis can also be read backwards in terms of type system simplicity.

The far left requires some explanation. The languages in this column tend to be dynamically typed which at first seems like a challenge for thinking about them in static terms. But there's a nice theoretical framework where you think of them as having one static type. A valid Scheme expression has "the Scheme type" and an invalid one does not. Different people who mean to emphasize different things will call these languages "untyped," "unityped", or "dynamically typed." For my purpose "unityped" gives the right connotation. This column corresponds to the untyped lambda calculus.
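To make the "unityped" view concrete, here's a minimal Haskell sketch (the value grammar and names are invented for illustration, not a faithful Scheme): model the whole dynamic language as one static sum type, with every primitive consuming and producing that single type and doing its real checking at runtime.

```haskell
-- Hypothetical sketch: a dynamically typed language viewed as having
-- exactly one static type. Every expression has type SchemeValue;
-- misuse is detected at runtime, not at compile time.
data SchemeValue
  = SNumber Double
  | SSymbol String
  | SNil
  | SPair SchemeValue SchemeValue
  | SProcedure (SchemeValue -> SchemeValue)

-- car accepts any SchemeValue as far as the static type is concerned;
-- applying it to a non-pair only fails dynamically.
car :: SchemeValue -> SchemeValue
car (SPair x _) = x
car _           = error "car: not a pair"
```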

Moving to the right there are simple first order type systems. Such type systems are a bit richer than the unityped ones, but don't have any form of type abstraction: they don't allow a user to define types that will be parameterized by other types. This column corresponds to the simply typed lambda calculus.

Further right are second order type systems. These allow type abstraction qualified with "forall", and correspond to System F. In between first and second order is HM (Hindley-Milner), which allows full type inference but at the price of not allowing some second order things like higher rank polymorphism. Moving further right are higher kinded languages, corresponding to System Fn up through System Fω.
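To give a feel for where HM stops and second order begins, here's a small sketch in GHC Haskell (it needs the RankNTypes extension; the function name is just for illustration). The argument f is itself polymorphic, which is exactly the kind of higher rank type plain Hindley-Milner gives up in exchange for full inference.

```haskell
{-# LANGUAGE RankNTypes #-}

-- The argument f must work at *every* type, so the body can use it at
-- both Int and String. Under plain HM, f would be forced to a single
-- monomorphic type and this definition would be rejected.
applyToBoth :: (forall a. a -> a) -> (Int, String)
applyToBoth f = (f 1, f "hello")

main :: IO ()
main = print (applyToBoth id)   -- prints (1,"hello")
```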

At the far right are dependently typed languages: languages that allow types to be parameterized by values.[1][2]
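Haskell isn't dependently typed, but as a rough approximation of the flavor, here's a sketch using GHC's GADTs and DataKinds extensions (well beyond anything on the chart's Haskell points; the names are invented): the length of a vector is reflected in its type, so a head function can demand a provably non-empty vector.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Length-indexed vectors: the length is part of the type, so safeHead
-- can only be applied to vectors the compiler knows are non-empty.
-- No runtime check is needed.
data Nat = Zero | Succ Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Zero a
  VCons :: a -> Vec n a -> Vec ('Succ n) a

safeHead :: Vec ('Succ n) a -> a
safeHead (VCons x _) = x
```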

Safety

The vertical axis is various kinds of "safety." For safety analysis I'm completely ignoring foreign function interfaces and the like. FFIs are useful and pragmatic but also make otherwise pretty safe languages no more safe than Assembly.[3]

You can also read this axis going the opposite direction in terms of a kind of operational power - Assembly isn't safe, but it lets you make the machine dance.

At the bottom end are memory unsafe languages. An expression like f(x) might very well totally hose memory.

A bit up from that are memory safe languages. Such languages have a few built in abstractions (like you can't write past the end of an array) that f(x) can't possibly violate.

A couple of steps up is abstraction safety. That means that one developer's abstractions can be protected from another developer in some fashion. For instance, private variables can only be accessed through functions defined in a scope with access to the variable. Many popular "dynamic languages" are not abstraction safe because their powerful dynamic meta-programming facilities allow abstractions to be smashed. In between I've listed a few JVM languages as being configurably safe: the Java standard includes some pretty powerful reflection capabilities that allow many developer-written abstractions to be violated, but the JVM standard includes a security mechanism that allows that to be turned off. Move these points up and down accordingly.
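As a tiny illustration of abstraction safety, here's a hypothetical Haskell module: the Account constructor is not exported, so client code can only change a balance through deposit, which enforces the invariant. In a language with unrestricted reflection or metaprogramming, client code could reach the representation anyway and smash the abstraction.

```haskell
-- Hypothetical module: the Account constructor stays private, so the
-- "balances never decrease" invariant can only be maintained (or broken)
-- inside this module.
module Account (Account, newAccount, deposit, balance) where

newtype Account = Account { balance :: Integer }

newAccount :: Account
newAccount = Account 0

-- Only positive deposits are allowed; clients cannot construct an
-- Account with an arbitrary balance because the constructor is hidden.
deposit :: Integer -> Account -> Maybe Account
deposit n (Account b)
  | n > 0     = Just (Account (b + n))
  | otherwise = Nothing
```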

At the next level are capability safe languages. In most languages, any function call can have all manner of harmful side effects including formatting your hard drive or launching missiles. But at the capability safe level the expression cannot possibly do such crazy things unless granted that power in the form of an "object" that represents the capability to do so. While you cannot statically know whether f(x) has received a missile launching capability, you can statically know that if f(x) lives in a program that has not been granted missile launching capability then it will not launch missiles.
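Here's a rough sketch of that capability idea using ordinary Haskell values (all the names are hypothetical, and Haskell's IO is actually ambient, so this only illustrates the shape of the discipline a capability-safe language enforces): the dangerous power is reified as a value, and only code that was explicitly handed that value can exercise it.

```haskell
-- Hypothetical capability sketch: the power to launch is a first-class
-- value. Code that was never handed a MissileCapability has no name by
-- which to invoke it.
newtype MissileCapability = MissileCapability { launch :: IO () }

-- Some trusted top level constructs the capability...
missileCapability :: MissileCapability
missileCapability = MissileCapability (putStrLn "missiles away")

-- ...and only functions that receive it can use it.
launchIfArmed :: MissileCapability -> Bool -> IO ()
launchIfArmed cap armed = if armed then launch cap else pure ()
```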

The pure level is for languages with a strict, statically enforced wall between side effecting expressions and non-side effecting ones. Examining the static type is sufficient to know the difference. Finally, the "pure & total" level is for languages where the expression f(x) is either guaranteed to terminate normally (no infinite loops, no exceptions)[4] or where static types indicate possible non-totality.
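In Haskell terms, for instance, the static type already draws that wall: an expression of type Int cannot perform I/O, while one of type IO Int advertises that it might.

```haskell
-- The type alone separates effectful from pure code: pureLength cannot
-- do I/O, while readLength says in its type that it may.
pureLength :: String -> Int
pureLength = length

readLength :: IO Int
readLength = do
  line <- getLine
  pure (length line)
```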

Random Notes

In many languages safety properties are enforced dynamically, so a knee jerk reaction is "why is this stuff in a discussion about static properties?" But remember the point was what I can statically know about an arbitrary expression f(x) without evaluating it. In Ruby I know with absolute conviction that, barring implementation bugs and foreign function interfacing, any arbitrary f(x) absolutely does not write past the end of an array. In C I don't know that.

It might seem odd to put Java further to the right than Haskell '98 but it's true. Haskell'98 has a slightly extended Hindley-Milner type system while Java has a kind of warped System F<: (use nesting to create rank-n polymorphism). Of course, a chart that ranked type systems on "bang for the buck", versatility vs ease of use, would rank things quite differently.

C# gets stuffed into the unsafe row because of the "unsafe" keyword and associated pointer machinery. But if you want to think of these as a kind of inline foreign function mechanism feel free to bump C# skyward.

I've marked Haskell '98 as pure but GHC Haskell as not pure due to the unsafe* family of functions. If you disable unsafe* then jump GHC Haskell up accordingly.
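For the curious, the escape hatch looks roughly like this (a minimal sketch; unsafePerformIO lives in System.IO.Unsafe): an expression with a perfectly pure-looking type that nonetheless performs a side effect when evaluated.

```haskell
import System.IO.Unsafe (unsafePerformIO)

-- looksPure has an ordinary pure type, yet evaluating it performs I/O.
-- This is the back channel that keeps GHC Haskell out of the "pure" row
-- unless the unsafe* functions are ruled out.
looksPure :: Int
looksPure = unsafePerformIO (putStrLn "side effect!" >> pure 42)
```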

Assembly is problematic. In a certain weird sense some Assemblies are "memory safe" because they do protect their language abstractions by virtue of having almost none.

Conclusion

"Weak" and "strong" mean so many things that they mean nothing. "Static" and "dynamic" mean something but don't really convey many differences. This chart is not the end-all-be-all of language categorization, but it is the beginning of the start of an inkling of a glimpse at how rich the space really is. Hopefully this will make static typing advocates understand that dynamic typing doesn't necessarily mean total insanity while dynamic typing advocates can begin to see the range of sophisticated expression afforded by many modern statically typed languages. I highly recommend "What To Know Before Debating Type Systems" for more on the subject.

Postscript

Inspired in part by this post and a follow up discussion, David MacIver is investigating the dimensions of language design that are actually useful in picking the right tool for the job.

Footnotes

  1. Type theory folk will note that I've pretty casually collapsed two legs of the "lambda cube" into one dimension of power. That's not strictly valid, but it seems to work out in practice. It's hard to find a sophisticated dependently typed language (types indexed by values) that isn't also very rich when it comes to types indexed by types.
  2. C++'s ability to parameterize templates by a few forms of literal shouldn't be confused with dependent typing.
  3. Technically there's no such thing as "Assembly;" it's a family of vaguely related languages with semantics tied extremely closely to underlying machines.
  4. A total language can of course crash due to resource exhaustion or hardware failure. There's no way to prevent that.

19 comments:

  1. Great analysis, I learned a lot reading this!

    One small point of clarification - if a language is "pure & total", can it still be considered Turing complete, given that the halting problem would be decidable for such a language?

  2. Toby, no, total languages are not Turing complete - you can't write an interpreter for Agda in Agda.

  3. Cool, this is some kind of lambda square, for practical purposes.
    Great article; thanks! The diagram will probably grow with more languages being added (from Basic to Coq).

    (a nitpick: JavaScript's official name is with capital 'S', afaik)

  4. There should be place for F# somewhere :)

  5. Nice but what about higher-order module systems?

    Maybe you should replace your SML with ML or F# and put SML and OCaml further to the right?

    Can Java express any static type that OCaml cannot?

    Should OCaml be at the bottom because it has the Obj module?

  6. My understanding was that, by virtue of kinds, Haskell '98 had a type system that would be right of second order. In lambda cube speak, second order is terms depending on types, which gives you roughly System F, but types depending on types, i.e. kinds, is considered a separate edge in the cube.

    I'd really like to have all of these things right in my mind, so any clarification/discussion would be much appreciated.

  7. Jon/Flying Frog,

    I did put SML a bit to the right of HM because it has a few things that are strictly more powerful than HM. But it's my understanding that SML, even with functors, can't express rank-n polymorphism. If it can then it should definitely move right.

    I don't know enough about OCaml and F# to know where to place them with certainty.

    As for Java, it has a straightforward (if painfully awkward) mechanism for expressing rank-k polymorphism for any k, and it has existentials with both upper and lower bounds. It's just a shame the Java language designers made all that stuff so hard to use.

    Anon,

    As far as I know, Haskell '98 does not have higher kinded types the way GHC Haskell and Scala do. The report mentions kinds but they are an implicit part of the metalogic used to describe type constructors.

  8. One reason this chart is helpful is that we can draw a reasonably clear line between languages that are often used to do "real work" and those that typically are not.

    The line moves over time; mostly up, but somewhat to the right. I'd say it doesn't currently encompass GHC Haskell but people keep trying to pull the line past that point.

    The incentives for moving the line upward and to the right are obvious. The reasons why we can't move the line further upward and to the right are typically not so obvious (to me at least). Cultural resistance plays a part but I think that's diminishing and anyway I'm not very interested in that issue.

    I'd be very interested in an analysis of what makes it feasible to pull the line past a given point. For example, the sticking points for Haskell seem to be the conceptual overhead of composing monads, and some of the messiness of the fancier type extensions, but I've never seen a good analysis of "things you might need to do that would be hard in GHC Haskell".

    I think after the point where the line falls right now, the vertical scale is different. Semantically Epigram seems a lot further from GHC Haskell than GHC Haskell is from C#.

    I can't understand Agda and Epigram well enough to know if this semantic distance will shrink as they are turned into more usable languages by better compilation techniques, or whether the difference is intrinsic and they'll always involve the same level of conceptual overhead.

    Based on the history of Haskell, I'd guess the latter but I would be interested in comments by people who understand this domain better.

  9. Given that the pure & total languages have type systems that keep them from expressing their own semantics, does it really make sense to say those type systems are "more powerful" than systems that can be used to express their own semantics?

    To ask this another way, obviously dependent typing is more expressive than just "higher kinded" etc. So that dimension seems coherent in its own terms. But it seems to omit something important since it doesn't reflect these other semantic restrictions.

    This line of thought suggests a question:

    Is there something intrinsic to dependently typed languages that requires them to be pure and total? Or that makes our current technology unable to check them if they are not pure and total? Or is this just a current design choice based on the preferences of those doing dependently typed language design?

  10. Jed,

    As you move to the right the type logic becomes very sophisticated and requires a programmer to think more and more at the meta-level of the type system. There's a real conceptual learning hurtle there. In the lower left area type = representation. That's especially true in language communities where the word "type" means "class" or perhaps "interface" or "protocol". As you move up and to the right type and representation become more and more decoupled. As an example of the hurtles: in Java programmers struggle mightily with such concepts as variance on bounded existentials because they have so little to do with representation or class.

    In the vertical dimension, increasing safety puts shackles on programmers. It wasn't that long ago that the goto wars were fought, or that most "real" programmers felt that you couldn't write "real" programs without pointers. Again there's a conceptual hurtle. Programmers really don't know how to get things done without ambient authority and mutation. Even before we talk about monad composition, we need to talk about how mainstream programmers tend to stick to fairly first order coding styles even though the reigning paradigm is very much geared towards higher order programming.

    You're right that the horizontal scale is weird. That's because it's formally invalid to do what I've done. I've implied a total ordering on something that is fundamentally partially ordered and then I've squeezed infinite ranges into finite segments. Scale definitely shouldn't be taken too seriously.

    As for dependently typed languages, they don't have to be total. Cayenne is an example of a non-total dependently typed language (I probably should put it on the chart). http://www.cs.chalmers.se/~augustss/cayenne/ Nor do dependently typed languages have to be pure. I think it's just that the researchers of dependently typed languages are interested in pushing the Curry-Howard analogy to the point where programming and proofs really are the same thing. Which segues nicely into the next point.

    Your point about the semantic weakness of totality is well taken, but remember that the horizontal axis doesn't care about totality. You can take any total language (e.g. a system F variant) that's on the horizontal axis and extend it with a fix point operation. In my presentation that would move it down vertically but wouldn't change its horizontal position.

    That said, if you look at the horizontal axis as describing a logic then adding a fix point operation fundamentally changes the logic in that it introduces inconsistency - with fix point you can prove nonsense. Mea culpa, I'm not going for rigor here, only trying to introduce some of the richness of the space.

  11. Thanks James! Replying only to your first point about conceptual hurdles:

    At the limit it is true that to fully understand sophisticated type logics one has to learn a lot of new skills and ideas. It is also true that programming with immutable entities requires new skills.

    However I think this can easily be over-emphasized. Clojure for example has generated a lot of interest because good runtime support for sophisticated immutable entities has substantially lowered the hurdle for less sophisticated programmers. Erlang is more demanding but lots of less sophisticated programmers learn to use it for immutable programming without much trouble.

    Similarly I think less sophisticated programmers can use some sophisticated type systems, e.g. ML or OCaml, with relatively little conceptual stress, again because the language design has lowered the hurdle.

    Perhaps an interesting contrast is optimization techniques. Modern optimization involves proving sometimes rather esoteric properties of programs, and carrying out sometimes quite unintuitive transformations. However in practice optimizations are implemented "transparently" to the programmer so there's no conceptual hurdle. (Of course this is not really true because small changes in programming style can dramatically change the optimizations possible. But turning on fancier optimization doesn't directly impose conceptual hurdles.)

    I pretty strongly believe that the right language design can create a slippery slope that encourages (but does not force) programmers to move into increasingly safer and more tightly typed styles of programming. They will be rewarded with more reliable, modifiable and optimizable code. I think we have enough examples to realistically imagine that during this "slide" a programmer will never be faced with a hurdle high enough to be daunting.

    Certainly those hurdles exist today. I suspect that the academic culture of sophisticated type system design doesn't provide any incentive to reduce the hurdles. In fact it has perverse incentives to maintain them, because getting over the hurdles is fun for those who like that sort of thing, and the difficulty of clearing the hurdles helps to emphasize that they have rare and valuable skills.

  12. "hurdle", not "hurtle", i think? :-)

  13. So, would you say that on the right side, a language should be on the upper half for the cost of the sophistication to be worth it because of enough safety? And that the lower right corner is deep red because huge sophistication/complexity buys no safety at all?

  14. James,

    some remarks:

    It's correct that SML's first-order modules give you rank-2 polymorphism only. Higher ranks require higher-order modules as e.g. in Ocaml.

    Haskell 98 has higher kinds. There is no explicit syntax for them, but kind inference figures them out (and there are some tricks to force specific kinds). Moreover, polymorphic abstraction can occur at higher kinds, unlike in ML.

    ML functors also give you the equivalent of higher kinds, but as with polymorphism, you again need HO functors to go beyond second order.

    As for dependent types plus general fixpoints: that's fine, and doesn't harm logical consistency, as long as the type system properly tracks non-termination as an effect, e.g. through monads.

    And finally, LF (aka Twelf) is a type theory that has dependent types but no higher kinds (and no polymorphism). This is a feature, because the logic would become too powerful otherwise for the inference machinery behind it.

  15. asflierl,

    I think a lot of people have thought that. I don't necessarily agree, though. When I'm hacking C I sometimes seriously miss parametric polymorphism.

    Andreas,

    Thanks for the info on LF/Twelf. It's a clear example of why my horizontal axis is an oversimplification. I think it would be very interesting to have a proper lambda cube plus one dimension for safety (a kind of lambda hypercube) and put languages into it. On the other hand, I don't know for certain that my safety axis would stand up under close scrutiny either.

    I'll have to study up on how higher kinds behave in Haskell'98. I was under the impression that Haskell's kinding was an artifact of the metalogic to describe the type system rather than something a user could use.

  16. Interesting article, but what is the definition of a type?

  17. Your chart showed up on programmers.stackexchange.com. I pointed out "I think C# is not well positioned. C# allows and promotes safer programming styles. It should not be measured by the worst thing it allows." C# should go exactly about where Java is (its type system is slightly richer and that's about it). There are Java -> native code compilers, so one could easily amend them with the ability to write inline assembly. Would that make Java a memory-unsafe language? Also managed Objective-C 2.0 is memory safe (if you use GC and really only make message invocations and no C stuff).

  18. Higher kinds in Standard ML (SML).

    Modular Type Classes, Dreyer, Harper, Chakravarty, section 2.1 Classes are signatures, instances are modules
