In this post, I'm going to argue that dynamic typing is not an inferior
tool that is superseded by strong static typing, but rather an important
tool in the toolkit of a programmer, and that different problem domains
are best served by different tool sets.

# Defining Terms

These discussions tend to result in a lot of tedium because of
terminological confusion, so let's start with this:

- *Static Typing* means that a programming language will reject
certain syntactically valid programs as semantically anomalous
according to a 'type system', thus preventing programs which would
cause certain classes of run-time errors from ever being run.

- *Dynamic Typing* means that a programming language will execute any
syntactically valid program, detecting errors at run time, generally
by consulting tags attached to run-time values which correspond to
those values' types. Bob Harper, following in the tradition of Dana
Scott, insists on calling these
[unityped languages](http://existentialtype.wordpress.com/2011/03/19/dynamic-languages-are-static-languages/),
but I will stick with the more common nomenclature.

- I also make a rough tripartite distinction between languages with
dynamic types, static types, and *dependent types*, even though
dependent types are a kind of static type. A *dependent type system*
allows types to depend on values, and I make the distinction because
programming in a dependently typed system has a very different flavor
than programming in a static but non-dependently-typed one.

# Why Do We Want To Use Type Systems?

Before defending dynamic languages, I want to start with something
basic: why do we want static languages in the first place?

Type systems are a lightweight but formal system for reasoning about
the correctness of programs. A type system is there to help the
programmer write correct programs by preventing various classes of
errors. Different type systems can catch different classes of
errors: for example, even a very weak type system like C's will warn
you if you attempt to take the square root of a string:

~~~~
tmp.c:5:5: error: incompatible type for argument 1 of 'sqrt'
...
note: expected 'double' but argument is of type 'char *'
~~~~

More powerful type systems can rule out more classes of bugs, while
still accepting programs that other type systems would reject. Most
ML-ish functional
languages make it impossible to execute programs which would
result in null pointer exceptions, which are still permitted by
Java's type system. Haskell's type system makes a distinction
between code that performs side effects and code that doesn't,
and doesn't allow calling the former from the latter. Rust uses
its type system to ensure memory safety while avoiding garbage
collection.
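As a minimal sketch of what this buys you (using the standard
`containers` library): a possibly-missing value must be wrapped in
`Maybe`, and the compiler forces every caller to handle absence before
the value can be used.

~~~~
import Data.Map (Map)
import qualified Data.Map as Map

-- 'Map.lookup' returns 'Maybe String', not a possibly-null 'String',
-- so forgetting the missing-key case is a compile-time error rather
-- than a null pointer exception at run time.
greet :: Map String String -> String -> String
greet names key =
  case Map.lookup key names of
    Just name -> "hello, " ++ name
    Nothing   -> "hello, stranger"
~~~~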

At the far end of the spectrum, there are the dependently
typed languages, which allow you to write executable mathematical
objects that prevent errors in a way which is impossible to express in
"weaker" statically typed languages. You encode your specification,
then you write your program, and if your program doesn't fit
the specification, it will fail to compile. This is sometimes
considered an ultimate goal of strongly statically typed languages:
the ability to express all errors as type errors so that they can
be statically prevented.
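Haskell is an awkward vehicle for full dependent types, but the
textbook length-indexed vector gives a rough sketch of the flavor;
nothing here is specific to any library:

~~~~
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- The length of a vector is tracked in its type.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- 'safeHead' only accepts non-empty vectors; 'safeHead Nil' is a type
-- error, so "head of empty list" is ruled out before the program runs.
safeHead :: Vec ('S n) a -> a
safeHead (Cons x _) = x
~~~~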

There are other advantages, as well: one is performance-related. The
more information we have about our program at _compile_ time,
the less we need to know about it at _run_ time. If our program is
constructed in such a way that we cannot dereference a null pointer, we
don't need to check for null before dereferences. If our program
is constructed in such a way that a given value is always a machine integer,
we can perform machine addition without first checking an associated
runtime tag. If our program is constructed in such a way that a given
list is non-empty, we can omit run-time emptiness checks.
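As a small sketch of that last point, using the standard
`Data.List.NonEmpty` type: once emptiness is ruled out by the type, the
corresponding run-time check simply disappears.

~~~~
import Data.List.NonEmpty (NonEmpty(..))
import qualified Data.List.NonEmpty as NE

-- The type of 'NE.head' guarantees its argument is non-empty, so no
-- run-time emptiness check (and no possible crash) is needed.
firstChoice :: NonEmpty String -> String
firstChoice = NE.head

example :: String
example = firstChoice ("red" :| ["green", "blue"])
~~~~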

Another is ergonomic: people who are new to typed languages will talk
about "fighting the type system", but people who are comfortable with
static type systems often express the opposite sentiment, describing
the types as a guide toward correct solutions. A basic practical example
is using types as a way of distinguishing sanitized from unsanitized
strings to prevent SQL injection. What was once a matter of thorough
testing and careful code discipline becomes simple: if you do it wrong,
your compiler will tell you.
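A minimal sketch of that idiom (the names and the escaping rule here
are invented for illustration; real code would lean on a vetted
library):

~~~~
-- The constructor would not be exported in a real module, so the only
-- way to produce a 'Sanitized' value is to go through 'sanitize'.
newtype Sanitized = Sanitized String

sanitize :: String -> Sanitized
sanitize = Sanitized . concatMap escape
  where
    escape '\'' = "''"   -- a toy escaping rule, for illustration only
    escape c    = [c]

-- A query runner that only accepts sanitized input: passing a raw
-- 'String' here is a type error.
runQuery :: Sanitized -> IO ()
runQuery (Sanitized query) = putStrLn ("running: " ++ query)
~~~~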

All of this is to say: type systems are wonderful! They are an invaluable
tool for writing code and verifying it correct, and I hope that everyone
learns and uses at least one strong statically-typed language.

# Why Might You _Not_ Want To Use Type Systems?

I can think of three big reasons:

- For certain problems, _stronger types are a hurdle to expressing the
program you want to express_.
- Type systems are _not always amenable to the type discipline you
want to impose_.
- A program _can still perform correctness checks without a static
type system_.

Let's go into these in detail:

## Expressing The Program You Want To Express

Some problems are very amenable to being expressed in a static setting!
It's not uncommon to develop in a static system by writing a bunch of type
signatures and gradually filling in their definitions, fitting pieces
together as a kind of framework and then adding what's missing to the
framework.
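As a small illustration of that style, here is a program sketched
entirely as signatures with stubbed-out bodies (all names invented);
the compiler checks that the framework fits together before any
definition is written:

~~~~
data Config = Config { port :: Int }   -- invented for illustration

-- Each body is a stub, but the program already type-checks: the
-- framework is in place, and definitions can be filled in one by one.
parseConfig :: String -> Either String Config
parseConfig = undefined

validate :: Config -> Either String Config
validate = undefined

run :: Config -> IO ()
run = undefined

main :: IO ()
main = either putStrLn run (parseConfig "port: 8080" >>= validate)
~~~~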

On the other hand, sometimes this is less helpful. Certain types are
difficult to express in a static way, and appeasing the typechecker
requires extra work which distracts from the program at hand. As
Bob Harper informs us, dynamic languages can be seen as the subset
of static languages in which all values share a single type. Among other
things, this means that dynamic languages often have idioms well-suited
to dispatch on a large or even unbounded set of runtime tags.
Consequently, if your problem involves dealing with data in which you
make decisions based on a lot of runtime characteristics (such as in
scripting or shell invocation) it might be a good idea to take advantage
of the structure afforded by dynamic languages!

One could, as Harper suggests, construct that unitype in a static
language, but look at the tedium involved in, say, manipulating JSON
in Haskell, and you can see that, at that point, you're not writing
the program you want: you're writing the boilerplate to deal with the
unitype. Your intention might be clear to you, but you also have to
look through layers of constructors and types that are there mostly
because the compiler needs them to be assured that your program is
safe. (For a great example of this, look at the library I wrote for
working with Activity Streams.)
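To make the tedium concrete, here is a hand-rolled JSON-ish unitype (a
sketch, not any particular library's API): every access must thread
through constructors that a dynamic language would handle implicitly.

~~~~
-- Every value carries an explicit tag, which is exactly what a
-- dynamic runtime does implicitly.
data Value
  = VNull
  | VBool Bool
  | VNumber Double
  | VString String
  | VArray [Value]
  | VObject [(String, Value)]

-- Reading one field means peeling off layers of constructors and
-- handling every failure case by hand.
lookupAge :: Value -> Maybe Double
lookupAge (VObject fields) =
  case lookup "age" fields of
    Just (VNumber n) -> Just n
    _                -> Nothing
lookupAge _ = Nothing
~~~~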


## The Type Discipline You Want To Impose

A program in a dynamic language is rarely typeless: the programmer
carries an informal type discipline in their head, in documentation,
and in runtime checks. Sometimes the discipline you want to impose is
one your static language cannot express, or can express only through
painful contortions. The story below gives a concrete example of both
this problem and the previous one.


# A Short Story From Real Life

A few months ago at my work, we did what amounted to a rewrite of a compiler
system that was written in Haskell. Haskell's type system is inching
towards being fully dependently typed, and already it can express
vast swaths of what dependent types can achieve. We took full
advantage of this, and the end result was pretty amazing. The AST was
constructed in such a way that the types expressed not just the result
types of the AST nodes, but also information about the use of those
values, and prevented invalid ASTs from being constructed at any
time.
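To give a flavor of the technique (with a toy example vastly simpler
than our actual AST): a GADT whose type parameter records each node's
result type makes ill-typed trees unrepresentable.

~~~~
{-# LANGUAGE GADTs #-}

-- A toy typed AST: the type parameter records the result type of each
-- node, so 'Add (IntLit 1) (BoolLit True)' is rejected at compile time.
data Expr t where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr t -> Expr t -> Expr t

-- The interpreter needs no run-time tag checks at all.
eval :: Expr t -> t
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add a b)   = eval a + eval b
eval (If c t e)  = if eval c then eval t else eval e
~~~~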

As this system was being constructed, the type machinery was nothing
short of invaluable. Multiple times, it caught subtle errors that would
have likely gone unnoticed for long periods of time. It ensured
end-to-end correctness of the program being written as we were writing
it. Because of our chosen types, we were literally incapable of
constructing invalid programs!

But as we continued to expand our compiler, this system started to be less
and less helpful. It was getting in the way of optimizations:
constructing the AST, in effect, required type machinery to 'prove'
it was constructed correctly, so certain optimization passes
required that one not merely rewrite the AST, but also perform
elaborate manipulations on the types in order to convince the
type-checker that the resulting ASTs were permissible. And frankly,
it required substantial mental effort just to understand these
type manipulations, to say nothing of the work involved in
writing them in the first place. An optimization pass, correct
on paper and conceptually straightforward, would take an inordinate
amount of time and effort to write _and_ understand. The code
had become brittle, difficult to reason about, and most importantly,
difficult to change.

We ended up removing that extra type information, going back to
an AST with no type parameters.

Except we didn't remove guarantees: we just encoded them with other
kinds of checks. We still rule out invalid ASTs, we just don't do so by
_construction_. We can still verify that an optimization pass produces
correct code: we just run manually-written checks on the AST.
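A rough sketch of what such a manually-written check looks like (the
AST and the property being checked are invented for illustration):

~~~~
-- An untyped AST: no type parameters, easy to rewrite during
-- optimization passes.
data Expr
  = Lit Int
  | Var String
  | Add Expr Expr
  | Let String Expr Expr

-- A well-formedness pass run after each transformation, instead of a
-- correctness-by-construction guarantee in the types.
checkScope :: [String] -> Expr -> Either String ()
checkScope _   (Lit _) = Right ()
checkScope env (Var x)
  | x `elem` env       = Right ()
  | otherwise          = Left ("unbound variable: " ++ x)
checkScope env (Add a b)      = checkScope env a >> checkScope env b
checkScope env (Let x e body) = checkScope env e >> checkScope (x : env) body
~~~~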

What's the summary here? Why did we ditch the stronger guarantees that
the previous edition provided us? Well, in short:

- The ergonomic advantages were gone. _Working with more static type
information took significantly more effort than working with the
more weakly-typed AST._
- The type system was _not amenable to the type information we wanted
to encode_. In this case, I suspect we would have been better suited
by a proper dependently typed language, as Haskell does not have the
necessary features for manipulating the proofs needed to convince the
type system of things.
- The constraints we wanted to express _were still otherwise expressible_
in ways that we could reason about: in fact, it was straightforward to
express them as simple manually-written passes. In the end, our program
_still checked the properties that needed to be checked_.

# How Does This Apply To Dynamic Typing?

After all, we didn't port our application from Haskell to Python. We
obviously were still reaping the benefits of Haskell's rather strong
static type system. So why am I citing this story in a blog post
about _dynamic_ types?

Well, look at the three criteria that led to us reducing the amount
of static type information. All three of these are also criteria that
might push you to choose a dynamic language for a particular problem
domain:

- The flip-side to Harper's argument that dynamic (or "unityped")
languages are in fact statically typed is that if you're writing a
program in which you will need to constantly check tags at runtime[^3], you
might as well be using a dynamic language. After all, it's what you'd
be doing in a static language anyway, only more concisely and
ergonomically, right? For some problem domains, _dynamic types
take less effort to write and understand than static types_.
- A program in a dynamic language generally has some kind of informal
type system applied by the programmer. Sometimes these disciplines can
be quite difficult to express even in sophisticated settings like
dependently typed languages[^1], and equally often multiple type
disciplines are applied simultaneously[^2].
These _informal type disciplines can always be expressed in a dynamic
language, but may not be straightforwardly expressible in particular
static languages_.
- Dynamic languages can still express types: sometimes at runtime, and
sometimes (given sufficiently powerful macros, as in the Lisp family)
even at compile-time. _You can still check the
properties you want to check_.

Does this mean that you ought to run out and ditch your types? Of course
not! Static typing is a powerful tool, and dependent static typing even
more so. Dynamic languages are, after all, doing these checks at runtime,
and there is _great_ value in using a tool that can verify properties
statically. And for software that absolutely must be correct, that
kind of static verification may well be non-negotiable.

But there is also great value in understanding that a particular type system
may or may not be amenable to the properties you want it to express, and that
there are problem domains in which doing things in a dynamically typed
language is reasonable or even advisable. It might even be possible to express
the type discipline you're using in a static way, but not in an existing
statically typed language. In that case, you could write it informally and
gradually layer your type discipline onto the code in a checkable way.

Additionally, there are shades of static verification: great inroads
have been made in the area of gradual typing, which allows a program
to be selectively dynamically and statically typed. Some systems would
benefit from this! Some would not.


# Appendices: A Few Specific Objections to Dynamic Types

## Large Systems In Dynamically Typed Languages Are Unmaintainable or Unwritable

Refuted by construction: there are many, many large codebases in
dynamically typed languages with varying degrees of maintainability.
The same is true of statically typed codebases.


[^1]: As a simple example, various idioms allow you to construct new types
based on runtime values, as in Python's `namedtuple`. Generating new types
in a dynamically typed language is incredibly straightforward, but doing
the corresponding work in (say) Haskell requires a great deal of effort,
including elaborate macro facilities and metaprogramming.

[^2]: This is exactly [what Matthias Felleisen says in a comment on Bob
Harper's blog](http://existentialtype.wordpress.com/2014/04/21/bellman-confirms-a-suspicion/#comment-1362):

    > The hidden blessing is that DT programmers can import, super-impose,
> and integrate many different static type disciplines, informally, and
    > use them to design their code and to reason about it. The result are
    > programming idioms that no one ordinary static type system can
    > uniformly check, and that is why DT programmers cannot accept the
> phrase "DT languages are statically typed."

    Note the similarity to how [Alan Kay describes his original conception of
objects](http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay_oop_en):

    > My math background made me realize that each object could have
    > several algebras associated with it, and there could be families of
    > these, and that these would be very very useful. The term
    > "polymorphism" was imposed much later (I think by Peter Wegner) and
    > it isn't quite valid, since it really comes from the nomenclature of
    > functions, and I wanted quite a bit more than functions. I made up a
    > term "genericity" for dealing with generic behaviors in a
    > quasi-algebraic form.

[^3]: One situation in which this is reasonable is in long-running
systems and scripting languages. Erlang is a good example: a
long-running distributed system is probably going to receive plenty of
malformed messages, so instead of a static type system in which you
constantly branch on various tags, it provides a dynamic system.
Similarly, attempts at statically typed operating system shells have
seen only limited success.


## But What About Curry-Howard?

The Curry-Howard Isomorphism is about the relationship between programs and
proofs in a logic. It is very cool. It has _almost nothing_ to do with
whether or not dynamic languages are worthwhile.

Firstly: the fact that programs correspond to a logic is a really strong
advantage for two reasons:

1. It gives you the ability to reason about programming languages themselves
using the tools of mathematics.
2. It has important ramifications for the construction of dependently-typed
languages, in which values and logic intermingle.

Notice that this doesn't say much about the programs written in a language
like Haskell: that's because the Curry-Howard Isomorphism has nothing to
say about whether they're "more correct" than programs written in another
language. In a language that's not dependently typed, the fact that
pairs correspond to conjunction doesn't have any bearing on their use.

You might argue that you get nice properties like

~~~~
 p: A * B     p: A * B
----------   ----------
 fst p: A     snd p: B
-----------------------
 (fst p, snd p): A * B
~~~~

out of the logic, but Scheme has the exact same law

~~~~
(pair? p) => p == (cons (car p) (cdr p))
~~~~

and has no type system to correspond to any logic!

As an aside: the Curry-Howard Isomorphism is often perceived to be
more magical than it is. People act as though it's some kind of
peek into the deep structure of reality, but (and this is a
bigger point) all mathematics is fundamentally a human invention.
A very useful and elegant one! But still the product of people.[^people]
Programming languages like the lambda-calculus came directly out of
the field of logic, and were invented by people steeped in the same
intellectual tradition: seeing the correspondence between the
lambda-calculus and logic as evidence that logic is an "ontological
ultimate" is like seeing the correspondence between French and
Spanish as evidence that Romance languages are inscribed in the
structure of the universe.

[^people]: One of the most disappointing things about the widespread
belief that math is some kind of deep universal truth is that people
tend to ignore the human factors such as culture and creativity that
go into creating mathematical formalisms.