14 — More Types
Friday, 21 February 2020
Presenters (1) William Victoria & Lenny Xie (2) Vadym Matviichuk, Oleksandr Litus
The topic of types is so large that some people spend the entire principles of programming languages course on types. Some people even go so far as to say that a programming language must be understood as the “sum of its types,” that is, the entire topic must be organized around the study of types and the linguistic constructs that introduce and eliminate certain forms of types.
While this idea has an elegant appeal and is worth studying to some extent, it would be wrong to view the entire area through this lens, as if programming languages weren’t organically grown artifacts.
In the spirit of looking at ideas that you are likely to encounter, here are two more from the world of structural typing that has so dominated research in this area.
Structural vs Nominal Subtyping
Typing and Subtyping à la Java
Take a look at figure 64, which shows two “parallel” class hierarchies. The extends clause merely indicates that the classes sit inside a fixed class hierarchy.
class Aplus extends Object {
  public int m(int x) {
    return 42; }
}

class Aprime extends Object {
  public int m(int x) {
    return 42; }
}

class ConsumeA {
  public int k(Aprime a) {
    return a.m(1); }
}

... new ConsumeA().k(new Aplus()) ...
What is the impact of this strictly name-based subtyping on software development? When programmers create their own separate hierarchy of classes, like the one for Aplus, they cannot simultaneously inject this class into the class hierarchy of Aprime. They are then forced to copy code, to write adapters (and lose some benefits of type checking), or to resort to other tricks. In the example, Java rejects the call new ConsumeA().k(new Aplus()) even though Aplus provides exactly the method that k needs.
Typing and Subtyping à la Typed Racket
At first glance, figure 65 presents code that is identical to figure 64 but written in Typed Racket.
#lang typed/racket

(define-type Aprime (Class (m (-> Integer Integer))))

(: Aprime% Aprime)
(define Aprime%
  (class object%
    (super-new)
    (define/public (m x) 42)))
#lang typed/racket

(define-type Aplus (Class (m (-> Integer Integer))))

(: Aplus% Aplus)
(define Aplus%
  (class object%
    (super-new)
    (define/public (m x) 42)))
(define ConsumeA
  (class object%
    (super-new)
    (define/public (k {x : (Instance Aprime)})
      (send x m 1))))

(send (new ConsumeA) k (new Aplus%))
In a structural type system, a method may accept any object that has the methods (and fields), properly typed, as specified in the parameter’s type. The type checker accepts such calls, and all method invocations will work out, as the following example confirms:
(define ConsumeAgain
  (class object%
    (super-new)
    (define/public (c {x : (Instance Aprime)}) : (Instance Aprime)
      (aux x))
    (define/private (aux {x : (Instance Aprime)}) : (Instance Aplus)
      (new Aplus%))))

(send (new ConsumeAgain) c (new Aprime%))
In short, structural typing makes a software developer’s life easier, because it enables more code reuse than nominal subtyping. But, as always, it imposes a burden on the implementor of the type checker (and of the rest of the language implementation).
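What does this burden look like? Here is a minimal sketch, in plain Racket, of the comparison a structural checker must perform. The representation of object types as (object (method-name type) ...) and the name subtype? are assumptions for illustration, not Typed Racket's actual implementation:

#lang racket
(require racket/match)

;; subtype? : Type Type -> Boolean
;; structural subtyping: instead of consulting declared class names,
;; the checker compares the members of the two types recursively
(define (subtype? s t)
  (match* (s t)
    [(s t) #:when (equal? s t) #t]
    ;; functions: contravariant in the domain, covariant in the range
    [(`(-> ,s-dom ,s-rng) `(-> ,t-dom ,t-rng))
     (and (subtype? t-dom s-dom) (subtype? s-rng t-rng))]
    ;; objects: s must offer every method that t demands, at a subtype
    [(`(object ,s-methods ...) `(object ,t-methods ...))
     (for/and ([t-m (in-list t-methods)])
       (match-define (list name t-type) t-m)
       (match (assq name s-methods)
         [(list _ s-type) (subtype? s-type t-type)]
         [#f #f]))]
    [(_ _) #f]))

;; an Aplus-style type with an extra method may stand in for an
;; Aprime-style type that demands only m:
(subtype? '(object (m (-> Integer Integer)) (n (-> Integer Integer)))
          '(object (m (-> Integer Integer))))  ; => #t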
Some Basic Rules
n >= 0
-----------------
TEnv |- n : (nat)

TEnv |- l : (nat)    TEnv |- r : (nat)
--------------------------------------
TEnv |- [tnode o l r] : (nat)

TEnv |- f : (-> t s)    TEnv |- a : t*    t* <: t
-------------------------------------------------
TEnv |- [tcall f a] : s
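To make the rules concrete, here is a minimal checker fragment in plain Racket. The term encodings and the names type-of and subtype? are assumptions for illustration; a rule for tfun* is included because the derivation in figure 66 needs one:

#lang racket
(require racket/match)

;; type-of : Term TEnv -> Type
;; a sketch of the rules above; TEnv is a hash from names to types
(define (type-of e tenv)
  (match e
    ;; a literal n with n >= 0 is a (nat)
    [(? exact-nonnegative-integer?) '(nat)]
    ;; a variable's type comes from TEnv
    [(? symbol? x) (hash-ref tenv x)]
    ;; tnode, reading the rule above: both operands must be (nat)s
    [`(tnode ,o ,l ,r)
     (unless (and (subtype? (type-of l tenv) '(nat))
                  (subtype? (type-of r tenv) '(nat)))
       (error 'type-of "tnode expects (nat) operands"))
     '(nat)]
    ;; fun rule: the parameter's annotation goes into TEnv
    [`(tfun* (,x : ,X) ,body)
     `(-> ,X ,(type-of body (hash-set tenv x X)))]
    ;; tcall: the side condition t* <: t becomes an explicit test
    [`(tcall ,f ,a)
     (match (type-of f tenv)
       [`(-> ,t ,s)
        (unless (subtype? (type-of a tenv) t)
          (error 'type-of "argument type incompatible with domain"))
        s]
       [_ (error 'type-of "calling a non-function")])]))

;; subtype? : Type Type -> Boolean, just enough for (<: (nat) (int))
(define (subtype? s t)
  (or (equal? s t)
      (and (equal? s '(nat)) (equal? t '(int)))))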
Take a look at the example in figure 66.
Yes, it really makes sense to permit a natural number to play the role of an integer during computation. And that’s what such rules are: predictors. This one feels okay.
STEP 1: 0 |- [tcall [tfun* [x : (int)] x] 4] : ________
STEP 2: apply the application rule to the open claim

0 |- [tfun* [x : (int)] x] : (-> ____ ____)    0 |- 4 : ______
--------------------------------------------------------------
0 |- [tcall [tfun* [x : (int)] x] 4] : __________________
STEP 3: apply the rule for literal numeric constants to the second
open claim above the line

                                               4 >= 0  check
                                               ---------------
0 |- [tfun* [x : (int)] x] : (-> ____ ____)    0 |- 4 : (nat)
--------------------------------------------------------------
0 |- [tcall [tfun* [x : (int)] x] 4] : _____________
STEP 4: apply the fun and variable rules to the first open claim
above the line

x is in the one-element TEnv  check
-----------------------------------
(x : (int)) |- x : (int)                       4 >= 0  check
--------------------------------------------   ---------------
0 |- [tfun* [x : (int)] x] : (-> ____ ____)    0 |- 4 : (nat)
--------------------------------------------------------------
0 |- [tcall [tfun* [x : (int)] x] 4] : ______________
STEP 5: fill in the function type

x is in the one-element TEnv  check
-----------------------------------
(x : (int)) |- x : (int)                         4 >= 0  check
----------------------------------------------   ---------------
0 |- [tfun* [x : (int)] x] : (-> (int) (int))    0 |- 4 : (nat)
----------------------------------------------------------------
0 |- [tcall [tfun* [x : (int)] x] 4] : ______________
STEP 6: recognize that while the domain and argument types differ,
they are compatible

x is in the one-element TEnv  check
-----------------------------------
(x : (int)) |- x : (int)                         4 >= 0  check
----------------------------------------------   ---------------
0 |- [tfun* [x : (int)] x] : (-> (int) (int))    0 |- 4 : (nat)

(<: (nat) (int))
----------------------------------------------------------------
0 |- [tcall [tfun* [x : (int)] x] 4] : ______________
STEP 7: hence, we have established the claim that the program itself
is of type (int):

0 |- [tcall [tfun* [x : (int)] x] 4] : (int)
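Under the same assumptions, the type-of sketch from above confirms this derivation mechanically:

;; the claim from STEP 7, checked by the type-of sketch above
(type-of '(tcall (tfun* (x : (int)) x) 4) (hash))  ; => '(int)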
The general rule that justifies STEP 6 is subsumption: an expression of type t may be used at any supertype s.

TEnv |- e : t    s :> t
-----------------------
TEnv |- e : s
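Typed Racket builds this very rule into its numeric tower: Natural is a subtype of Integer, so a Natural value may be used wherever an Integer is expected. A small illustration:

#lang typed/racket

(: double (-> Integer Integer))
(define (double n) (* 2 n))

(: four Natural)
(define four 4)

;; the argument has type Natural; subsumption lets it count as an Integer
(double four)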
Type Inference and let Polymorphism
Throughout the history of programming languages, people have expressed the idea that writing down types is painful and useless. They are correct with respect to pain. As language designers add power to the type system, types become complex pieces of code in their own right, and it is easy to make mistakes when writing them down. The problem becomes particularly acute with the addition of polymorphic types. In response, programming language people have invested a lot of energy into type inference.
As for the second part of the claim, the supposed uselessness of types, the discussion below will return to it.
Module-Level Inference
Take fully typed programs in a typed language. Erase some of the type specifications for variables (parameters, declarations). Find an algorithm that can restore the erased types in all cases.
Curry and Feys worked out the very basics first. Hindley expanded their work to a full typed lambda calculus at Swansea. Milner (who was in the same academic department at Swansea) re-invented the algorithm ten years later and made it famous as ML’s inference algorithm for let polymorphism. It is known as Hindley-Milner inference now.
The solution to the most well-known instance of this general problem is known as Hindley-Milner type inference. Two well-known languages rely on HM type inference: ML and Haskell. HM inference roughly applies to languages such as those used in this course, extended with type products, sums, and records, plus explicitly declared recursive datatypes (lists, trees). The algorithm also accommodates reference cells like those used in lecture 5.
Let’s assume for the rest of this section that tdecl does not introduce recursion.

1. We can attach a type variable to every position where a type annotation was erased and to every subterm whose type we must compute. In the following program, F stands for f’s declared type, X for the parameter’s type, O for the type of the literal 1, B for the type of the function body, and R for the type of tdecl’s right-hand side; the notation {e, T} pairs a term e with its type variable T:

[tdecl f F
  {[tfun* x X {[tnode + {x, X} {1, O}], B}], R}
  [tcall f 42]]
2. In mathematics we learn to set up equations that govern and constrain variables. In the context of type checking, we can use the type-checking rules to find equations.
TEnv |- f : (-> t s)   TEnv |- a : t      TEnv |- f : F   TEnv |- a : A   F = (-> A S)
------------------------------------     ---------------------------------------------
TEnv |- [tcall f a] : s                   TEnv |- [tcall f a] : S

This simple-looking rule imposes the constraint that the domain part of the function type is equal to the type of the argument expression. When we expressed this as code, we actually had to make this test explicit. The rule on the right side shows how we can express this idea as an equational constraint: F, A, and S are type variables, and the equation F = (-> A S) records what the call demands of them. (A code sketch of this step follows the list.)
Once the rules are written this way, it becomes straightforward to derive equations that govern the type variables we have attached to terms. Here is a table of equations with their justification:
F = R           tdecl demands the declared type is equal to the
                computed type of the right-hand side

R = (-> X B)    tfun* demands a function has an -> type, whose
                domain is the specified type (X) and whose result
                is the computed type of the body

X = (int)       tnode demands +’s arguments are of type (int)

O = (int)       tnode demands +’s arguments are of type (int)
3. We can solve equations to find out which values of variables solve them. Recall that solving means you can plug the values in for the variables and you get equations whose left-hand side is the same as the right-hand side.
F = (-> (int) (int))
R = (-> (int) (int))
X = (int)
O = (int)
B = (int)

The process yields one solution per type variable that we put into the program, and the values are the same as the ones we erased.
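Here is a minimal sketch, in plain Racket, of the first two steps together: walking the term, inventing type variables, and collecting equations. The term shapes and the names generate, fresh-variable, and add-equation! are assumptions for illustration:

#lang racket
(require racket/match)

(define equations '())                  ; collected (lhs rhs) pairs
(define (add-equation! lhs rhs)
  (set! equations (cons (list lhs rhs) equations)))

(define counter 0)
(define (fresh-variable)                ; invent a new type variable
  (set! counter (add1 counter))
  (string->symbol (format "T~a" counter)))

;; generate : Term TEnv -> Type
;; returns a type (possibly a variable) and records equations
(define (generate e tenv)
  (match e
    [(? exact-nonnegative-integer?) '(int)]
    [(? symbol? x) (hash-ref tenv x)]
    ;; tnode demands +'s arguments are of type (int)
    [`(tnode + ,l ,r)
     (add-equation! (generate l tenv) '(int))
     (add-equation! (generate r tenv) '(int))
     '(int)]
    ;; tfun* demands a function has an -> type
    [`(tfun* ,x ,X ,body)
     `(-> ,X ,(generate body (hash-set tenv x X)))]
    ;; tdecl demands the declared type equals the right-hand side's
    [`(tdecl ,f ,F ,rhs ,body)
     (add-equation! F (generate rhs tenv))
     (generate body (hash-set tenv f F))]
    ;; tcall equates the function's type with an arrow from the
    ;; argument's type to a fresh result variable
    [`(tcall ,f ,a)
     (define S (fresh-variable))
     (add-equation! (generate f tenv) `(-> ,(generate a tenv) ,S))
     S]))

This variant returns the types of literals and function bodies directly instead of introducing O, B, and R, so running it on the program above collects a slightly condensed version of the table’s equations.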
In calculation-oriented math courses, as are now common, all you see is matrix manipulations. But these manipulations are really just efficient representations of equality-preserving operations on equations.
Note What’s the difference between this and linear algebra? The constructor symbols have no (inverse) laws on them. We say they are uninterpreted. The process of solving relies on matching the parts of the type constructors (here ->), which really just generalizes the Gaussian elimination process for solving linear equations over numbers.
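The “matching of the parts” that the note mentions is first-order unification. Here is a minimal sketch in plain Racket; it omits the occurs check that a full solver needs, and the names unify and walk are illustrative:

#lang racket
(require racket/match)

;; walk : Type Substitution -> Type
;; resolve a type variable to its current binding, if any
(define (walk t subst)
  (if (and (symbol? t) (hash-has-key? subst t))
      (walk (hash-ref subst t) subst)
      t))

;; unify : (Listof (List Type Type)) Substitution -> Substitution
;; a substitution is a hash from type variables (symbols) to types
(define (unify eqs subst)
  (match eqs
    ['() subst]
    [(cons (list l r) rest)
     (let ([l (walk l subst)] [r (walk r subst)])
       (cond
         [(equal? l r) (unify rest subst)]
         ;; a variable unifies with anything: record the binding
         [(symbol? l) (unify rest (hash-set subst l r))]
         [(symbol? r) (unify rest (hash-set subst r l))]
         ;; same uninterpreted constructor: equate the parts
         [(and (pair? l) (pair? r)
               (eq? (car l) (car r)) (= (length l) (length r)))
          (unify (append (map list (cdr l) (cdr r)) rest) subst)]
         ;; mismatched constructors: the "no solution" case
         [else (error 'unify "no solution: ~a vs ~a" l r)]))]))

;; the table's equations, solved:
(unify '((F R) (R (-> X B)) (X (int)) (O (int))) (hash))
;; => a substitution binding F, R, X, and O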
a single solution
In middle school a single solution is the most desirable. When it comes to type inference, it means type checking succeeded in an ordinary manner.
no solution
Type checking would fail. There are no types that get this expression through the type checker. None. Don’t even look.
an infinite number of solutions
When you solve a system of three equations in three variables and two of them are dependent, you get an equation that describes a two-dimensional plane. In the context of type inference, you get a function whose type is compatible with several, seemingly distinct uses.
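In terms of the unify sketch above, this is a variable that no equation ever forces to a single type; every instantiation of it is a solution:

;; continuing the unify sketch: X1 is never pinned down, so
;; (-> X1 X1) describes infinitely many solutions for B1
(unify '((B1 (-> X1 X1))) (hash))
;; => a substitution binding B1 to (-> X1 X1), with X1 still free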
Milner called this idea let polymorphism, and it was (and by some OCaml and Haskell programmers still is) considered a practical and lightweight form of useful polymorphism. Consider this program, where f is used at two distinct types:
[tdecl f ___
  [tfun* x ___ x]
  [tcall [tfun* g ___ [tcall f 0]]
         [tcall f [tfun* x (int) x]]]]
TEnv |- bdy[x <- rhs] : s    TEnv |- rhs : t
--------------------------------------------
TEnv |- [tdecl x _ rhs bdy] : s
The premise on the left represents the new idea. Instead of assigning x a type in TEnv, it substitutes rhs for every occurrence of x in bdy. Intuitively, this copies the type variable for x and thus permits different solutions for each copy of the type variable.
The premise on the right merely confirms that rhs has a type. Without this premise, a tdecl expression whose bdy never references x might type check even though rhs itself does not. Since the interpreter must determine the value of rhs, this failure could cause a lack of soundness.
Here is the example again, with type variables attached:

[tdecl f _
  {[tfun* x X x], B}
  [tcall [tfun* g G [tcall f 0]]
         [tcall f [tfun* x (int) x]]]]
Substituting the right-hand side for f gives each copy of the function its own type variables:

[tcall [tfun* g G [tcall {[tfun* x X1 x], B1} 0]]
       [tcall {[tfun* x X2 x], B2} [tfun* x (int) x]]]
The equations:

B1 = (-> X1 X1)
X1 = (int)
B2 = (-> X2 X2)
X2 = (-> (int) (int))
G  = (-> (int) (int))

Their solution:

B1 = (-> (int) (int))
X1 = (int)
B2 = (-> (-> (int) (int)) (-> (int) (int)))
X2 = (-> (int) (int))
G  = (-> (int) (int))
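For comparison, here is roughly what this reuse looks like in Typed Racket, where the polymorphic type must be declared explicitly with All; let polymorphism discovers such a type without any annotation:

#lang typed/racket

(: f (All (a) (-> a a)))
(define (f x) x)

(f 0)                           ; f used at (-> Integer Integer)
(f (lambda ([x : Integer]) x))  ; f used at
                                ; (-> (-> Integer Integer) (-> Integer Integer))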
So type inference not only seems to make the use of types convenient; a slight modification also gives developers new powers. What’s not to like?
Why Type Inference is Wrong

Many things are wrong with it. Here are the two most important ones:
The second step of the design recipe for functions demands the equivalent of a type declaration. This signature is then used in subsequent steps to direct the function design. Cardelli makes this point very forcefully in a DEC SRC Technical Report titled Typeful Programming.
Programming people therefore dub a type signature a specification. Think of a specification as a blueprint, which shows the essential elements of a building but not exactly what the finished product looks like.
The function definition itself is an implementation. Roughly speaking, the code is to the type what a building is to a blueprint.
Type checking is making sure that the finished code adheres to the specification.
Type inference thus fails us in two ways. First, it removes an essential element of systematic program design. Second, it does not check the compatibility of an independently created blueprint with the finished product. Besides unit tests, type checking is one of the simplest ways to guarantee a few small things about our code before it runs.
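In Typed Racket terms, the idea looks roughly as follows; the example is a minimal sketch, not course code. The declared signature is the blueprint, written first, and the checker verifies the definition against it:

#lang typed/racket

;; the signature is the specification, written before the code ...
(: average (-> (Listof Real) Real))

;; ... and the definition is the implementation, checked against it;
;; a definition returning, say, a list would be flagged immediately
(define (average l)
  (/ (apply + l) (length l)))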
Stated in its basic form, the type inference problem is undecidable for almost all typed languages and ways of erasing types.
Technically, this means that there is no algorithm, that is, no always-terminating generative recursive function, that can compute what the missing types should be. Communities have pushed the boundary anyway: the OCaml community has succeeded in extending HM inference to an expressive record and object system, and the Haskell creators were able to deal with a rich degree of overloading, called type classes. From the perspective of language design, though, the consequence is much more dramatic. It means that type inference is a brittle, unstable design point. Every extension to a language must be carefully examined and is subject to stringent algorithmic implementation constraints. But coupling design with implementation constraints at an early stage is a bad idea.
Emeritus Professor Wand was one of the first to point out this problem and to report some early results. He did so over three decades ago.
One secondary but substantial problem is the let polymorphism that usually comes with type inference. To this day, this form of inference does not explain failures properly. In other words, when type inference fails because the equation system does not have a solution, the type inference algorithm often (not always) reports the error in terms that are rarely actionable for programmers.
Local Type Inference

Because of the above, most type checkers employ a form of type inference that is restricted to a single declaration, definition, expression, or statement. This is particularly useful for the application of polymorphic functions. When this form of inference fails, it is easy to pinpoint a small region of code and help the developer.
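Typed Racket works this way. In the following sketch, the instantiation of the polymorphic twice is inferred locally at the call site, while twice’s own signature is still written down; twice and inc are illustrative names:

#lang typed/racket

(: twice (All (a) (-> (-> a a) a a)))
(define (twice f x) (f (f x)))

(: inc (-> Integer Integer))
(define (inc n) (add1 n))

;; a = Integer is inferred from the arguments at this one call site;
;; if inference failed here, the error would point at this call alone
(twice inc 5)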