## 2013-03-15

### When I invented inner classes.

Over the years I've invented a number of useful things. Unfortunately, much of the time, someone else did it first. One I'm quite proud of is dynamic register renaming, which I invented in 1989, many years after engineers at IBM had.

A couple of months ago I decided to look at the report I had written as my undergraduate thesis. It was on the design and implementation of an object-oriented language. This was in 1985, and interest in object orientation was just starting to rise after years of O-O being an obscure field with small groups of enthusiasts using Smalltalk and Simula. Nineteen eighty-five was the year C++ was released commercially by Bell Labs. Even 6 years later, the first edition of Booch's Object-Oriented Design with Applications was largely about how to do O-O design and then implement in a non-O-O language such as Ada (which, at the time, was not object-oriented).

Anyway, reading my report, I was struck by the fact that I had invented inner classes. Somewhat dismaying was that I had not only invented inner classes, I had also apparently failed to notice that I had done so: at least they get no special mention in the report. Indeed, when Java introduced inner classes in 1997 or 1998, I saw them as a cool new idea, completely missing the fact that I had invented the same thing myself.

Then, a few days ago, I was struck by a very worrying thought: What if I had not only invented inner classes and failed to notice, but had been the first to invent them and failed to publish! After a bit of looking about, I realized that this was not the case. Simula seems to have had them since at least 1973, and the Beta language also had them.

You might wonder how I managed to invent something, but not notice. This language was essentially an O-O extension of a language very similar to a statically scoped Lisp -- but with a nicer syntax. Functions were written as lambda expressions, which evaluated to closures, which carried with them the context in which the lambda expression was evaluated. A context is simply a mapping from identifiers to locations, together with a pointer to a parent context. Thus, within a lambda expression written within another lambda expression, the parameters and other variables of the outer lambda expression could be referred to within the inner lambda expression -- just as in Scheme or Common Lisp. For example, the following defines a function that computes the composition of its arguments:

    var compose <= (fn (f, g) (fn (x) f(g(x)) fn) fn);
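For readers who don't want to decode my old syntax, here is the same closure-based composition sketched in Python (my translation, not the original language): the inner lambda captures `f` and `g` from the context of the enclosing call, just as the inner fn-expression did.

```python
# compose returns a closure; the lambda it returns carries with it the
# context of this particular call, mapping f and g to their locations.
def compose(f, g):
    return lambda x: f(g(x))

inc_then_double = compose(lambda x: x * 2, lambda x: x + 1)
print(inc_then_double(3))  # → 8, i.e. (3 + 1) * 2
```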


(The <= means permanent assignment.) Classes were essentially lambda expressions that implicitly returned a copy of their context as their value. Objects were just copies of contexts. (The advantage of making a copy is that the parent pointer is not needed in the object. Thus, in simple cases, such as an object with no methods, the context and its parent could potentially be garbage collected, even if the object was not. The object and the original context share the same locations, of course.) Methods were simply lambda expressions evaluated within a class, and thus, when a method was called, it could refer to the locations of the object, which were shared with its context.

    var Counter <= (class {init}
            var k <- init ;
            var incr <= (fn () k <- k+1 fn)
            var get <= (fn () k fn)
        class) ;
    var counterObj <= Counter{0} ;

(The <- means assignment. I used curly braces for constructor parameter and argument lists, so that allocation expressions look different from function applications.)
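The objects-as-context-copies idea can be sketched in Python too (again my translation, with a dict standing in for the copied context): calling `counter` creates a fresh context, and both methods close over the same location `k`.

```python
# A Python rendering of the Counter class above. The dict returned at
# the end plays the role of the copied context: it maps names to the
# methods, and the methods share the location of k with each other.
def counter(init):
    k = init
    def incr():
        nonlocal k      # incr writes to the shared location of k
        k = k + 1
        return k
    def get():
        return k        # get reads that same location
    return {"incr": incr, "get": get}

counter_obj = counter(0)
counter_obj["incr"]()
counter_obj["incr"]()
print(counter_obj["get"]())  # → 2
```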

Given this, a class written within a class or a class within a lambda expression just works. It would have been extra work to design and implement the language so that it didn't have inner classes. Unlike Java (but like Scala), there was no restriction that all captured local variables be constants (final, in Java lingo). This is because I was using trees of contexts already. Contexts were garbage collected when no longer needed, rather than being explicitly de-allocated like the frames on the Java stack. I was aware of the inefficiency of this scheme, but figured that a compiler could optimize by keeping variables that could not outlive their function invocation on a stack and all others on the heap.
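To make the capture point concrete, here is a Python sketch (hypothetical names, not from the report) of the very thing Java's final-capture rule forbids but context trees make trivial: an inner class whose method both reads and reassigns a mutable local of the enclosing call.

```python
# An inner class capturing a mutable local variable of the enclosing
# function. With a tree of heap-allocated contexts this needs no
# special machinery: the method simply refers to the outer location.
def make_tagged(prefix):
    count = 0
    class Tagged:               # inner class, defined per call
        def tag(self):
            nonlocal count      # mutate the captured local
            count += 1
            return f"{prefix}-{count}"
    return Tagged()

t = make_tagged("x")
print(t.tag())  # → x-1
print(t.tag())  # → x-2  (count outlives each call to tag)
```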