I find it interesting that everyone gets so upbeat about the ideology or philosophical debate about objects sending each other messages, hell even not replying to a message another object sends you until a later date, without even thinking about concurrency, state, and hell even the basics such as cyclic loops within an event-based system!
Though I keep hearing from Alan and other prominent language designers that we are still holding onto this old 1960s mental model of command and control, or structured design concepts. Even some well-spoken people suggest that the whole notion of objects is worthless to programmers because it takes place within the notion of classes and the constraints of the machine.
This whole philosophical debate about understanding/misunderstanding/correct usage of OOP is just a complete waste of time.
For what it's worth, I consider objects simply as a basic-level category of procedures and data tied to a namespace. If the category requires state then consider it an object; otherwise it's a module.
I've seen too many projects that have drunk the Kool-Aid, resulting in 5-10 level deep inheritance trees with their own branching logic trying to fit behavior to a specific taxonomy.
"For what its worth I consider object's simply as a basic level category of procedures and data tied to a namespace.“
I mean you're just using different words to talk about the same thing. In your terminology inheritance is just extending a namespace and overloading names.
How does that discredit or put into question the "mental model of command and control, or structured design concepts"?
If anything I feel the failure of OO languages is that they have been afraid of baking in proven design patterns/principles that recur over and over (i.e. Gang of Four and a few more since then). Yesterday I was staring at some undocumented OpenCV code trying to figure out why there was some pointer that kept showing up in random places. After a very confusing 30 minutes I figured out it was actually the pImpl idiom. Now why do we have to reimplement the pImpl idiom each time? Why can't this be a keyword? The amount of code bloat and confusion that a visitor introduces is the prime reason I never use them. (There was an interesting proposal at CppCon 2 years ago that addressed this, but as far as I know it hasn't gone anywhere.)
The problem is that most OO languages trace back to C, which IS a half-assed language from the perspective of "concurrency, state, and hell even the basics such as cyclic loops within an event-based system", and most languages that do OO have been half-baked on top of that foundation.
OOP has flaws, but it has certainly proven to be extremely versatile and adaptable over these past decades: even today it's still the dominant paradigm for solving modern problems in computing.
Seriously? Have you watched any of the demos Alan references? They were doing things with computers in the 1960s/70s that still haven't reached mainstream use. Computers as used today are still absolutely dumb machines that are little more than super-fast calculators, and in most cases increase the mental burden of their users instead of reducing it or augmenting them. Nicholas Negroponte had a great quote in '94 that's still true today - "those infra-red urinals in public restrooms know more about what we are doing than our computers today."
Douglas Engelbart's demo - too many innovations to list, but includes real time collaboration (he demoed in a convention center while the system ran 30 miles away in his lab, connected by a leased line operating at 1200 baud!). People think he just invented the mouse, but the overarching theme in his work was augmenting human capabilities... http://dougengelbart.org/firsts/dougs-1968-demo.html
See infinite8s' post above. The "Engelbart's demo" mentioned is called The Mother of All Demos, which you can find by that title on youtube.
I have to admit I never really got further than the first 5-10 minutes or so. My first computer experiences are from the late 80s (and indeed didn't have a mouse) but this demo is so old, I kept having to mentally translate for myself which bits might have been new or ground breaking, and which just plain archaic. I keep meaning to watch it in full some day, but the first bit that I saw, I keep feeling I'm missing something important. People are lyrical about this demo (hence the name!), but I don't quite feel it or something. Then again, while I appreciate the great work of engineering that went into it, I'm not really one to sit down and watch a documentary about the moon landing program either.
Maybe somebody here knows a good link or article about The Mother of All Demos that explains which things I should be in awe about and why they were so novel then?
Part of it kind of looks like a simple database table system, which reminds me of a stupid story from my childhood. When I was young (~1990 I guess), I was playing with the MSX BASIC on a friend's computer whose family had a farm. They wondered if I could write anything to keep track of their cows or something--sure thing! (though I really preferred to write graphics code, later joined the demoscene, way more fun). Now I had no idea about business or their requirements (let alone requirements engineering), so I just wrote some whatever that let them enter data, but in the back of my mind I wondered how this was going to be useful for anything. Neither did I know (yet) how to actually save the data so it'd be available after reboot, which was probably a big reason for my doubts. And they were still impressed (that it could repeat names of many cows, I guess). I never finished it (nor any idea of what "finished" would mean).
Sorry, I mean "drawbacks".
One of the recurring complaints about OO is that it leads to huge codebases. Lots of articles about how X rewrote some Java program in Y and it's now 10x smaller and 100x more maintainable. This is a consequence of languages being designed to be minimal in terms of keywords - trying to push off as much as possible onto library writers. Unfortunately this seems to have serious limitations, and all the boilerplate can't seem to be hidden in library wrappers.
"Huge code base" is a pretty vague metric. Huge in what, lines of code? Binary size?
I'd argue that neither of these is really a good measure; I'm more interested in the concepts that a source file captures than its size in kb, because these concepts correlate directly with how easy it will be to evolve that code base in the future. There is a point of diminishing returns in trying to make the code too compact, which basically means you're optimizing in one dimension to the detriment of all the others.
All that versatility and adaptability comes at the expense of various kinds of boilerplate, most of which can be avoided by employing other styles. In other words, it's not more versatile, it's only "Turing complete", and versatility would mean using a number of different styles and being efficient in each.
Yup. We were looking at alternate runtimes in some OS / userland projects at Apple in the late 80s. You don't need vtables, and you can do better than static struct layout. You have to do some work at code load-time.
C++'s initial lack of a real string type and simple collection types pretty much doomed the language. (Instead they did iostreams? I've been using C++ since early CFront days, and have never used that misbegotten API).
I guess I am one of the few people on the planet that enjoys using iostreams when I need to use C++. :)
Then again, I never needed to know all their little details.
Yeah, implementing string, array and collection types was a kind of rite of passage for every C++ developer, back when the compilers were still trying to catch up with the C++ ARM.
I believe appealing to the C culture was one of the reasons those types weren't initially available, as C++ was already considered too bloated by the C community even without them.
Nowadays their three major compilers are written in C++.
> I've seen too many projects that have drunk the Kool-Aid, resulting in 5-10 level deep inheritance trees with their own branching logic trying to fit behavior to a specific taxonomy.
I don't think you understand Alan Kay's version of OOP, if you think inheritance is a key part of OOP.
I was interested in this stuff during the summer and I wrote up the following question/answer if you're interested in what Alan Kay's version of OOP was:
I read your essay on programming languages. I found it awesome, btw.
I saw you made the comment `To be concrete - object oriented programming is easy in python, because it's possible to program methods of an object in a way that doesn't assume a particular representation.`
Quote from the article:
`Our findings show that the receiver in 97.4% of all call-sites in the average program can be described by a single static type`.
It looks like even when that restriction is removed it doesn't really help that much.
No but seriously, I think both you and hackits make it clear that there's a lot of discussion over what OOP actually is. Which makes me curious: is there any research on the characteristics of OOP as it is implemented in practice across languages? I'd be very interested in such research, because it seems to me that what is possible is often not practiced.
So, for example, Ruby might be all about 'message passing', as others and all the tutorials and books I've read indicate, but in practice it might be indistinguishable from just calling methods on objects, or whatever distinguishes message passing from other OOP styles.
If no such research exists, how difficult would it be to scan codebases on Github to analyse this? I might be wrong, but it seems like this would be a worthwhile investigation: how do people actually use languages and what is 'OOP' or 'FP' actually like in codebases that claim to be one or the other.
Otherwise we're just getting tangled up in definitions and it's all a moo point.
From 1960 to 2012, there were only 22 research papers that used empirical evidence with randomised crossover studies for language design.
Of those, class inheritance had 4 randomized controlled experiments, along with a collection of other studies. That is most of the information I have come across.
I think Google published another paper looking at development time, and I believe the result was that the biggest time consumer for programmers was dependencies, not necessarily the programming language.
I'm not sure measuring how programmers use programming languages would help you gauge whether those languages are any good. It's like measuring the way people rode bicycles in 1880 - that would never lead you to invent the car.
> Otherwise we're just getting tangled up in definitions and it's all a moo point.
I agree. I would say a lot of things people do in OOP aren't really OOP, they're just object modeling principles. You can apply the same principles in a different language without notions of objects/classes (JavaScript comes to mind).
Part of the issue is that people have differing perspectives on computing.
One half sees it in terms of computer and data science. For these people, data structures, algorithms and ADTs are paramount. Organization of the problem is through modularity and functional decomposition. An object (as in OO) might be useful for encapsulation, data and implementation hiding, and code organization.
On the other hand, there are people involved in business, commerce, industry etc, who need to model concepts, real and abstract, and the complex relationships and constraints between them, into code. For them, an object can be the best way to represent a concept and an OO approach can be the best way to represent how concepts interact with each other.
My gut feeling is that OOA/D/P will best be remembered as a revolution in analysis, not programming. As a programming concept it's a mixed bag. As an analysis tool I don't think it has an equal.
Well, try to remember that for Alan, inheritance is at most an optional feature of OOP, so maybe those "5-10 level deep inheritance trees" aren't as much about people drinking the OOP "Kool-Aid" as you think.
In both Simula and Smalltalk, inheritance was added later in the language's history. Yes, it's optional, but that doesn't stop people making a mess of things when they categorise their objects based on data type instead of behaviour.
Nothing can stop bad programmers from writing bad code. Take OOP away from them, and they'll just write bad procedural code or bad functional code; their bad code is not a stain on the paradigm they choose to abuse.
I don't really understand this obsession with messages.
We're moving away from it. This is not how we program in the 21st century. The last popular language that supported this paradigm (Objective-C) is being replaced and will probably be all but gone in just a few years as Swift (not message based) takes its place.
Besides, this idea of message passing is really not that useful for modern programming anyway but a lot of people still have this weird idea that message passing magically solves the problem of data integrity in parallelism. It doesn't. It does nothing to solve it. You can still have deadlocks and you can still have data corruption.
The only approach that has provably improved our problems in this area is immutability, but even that paradigm comes at a cost that sometimes doesn't make it worth the trouble
Erlang and Elixir may not meet your criteria of popular, but message passing is one of several closely linked features (including immutability) that collectively make distributed, robust systems much easier.
No, it's not a silver bullet, but neither is it some misbegotten relic of an earlier era as you posit.
We're accelerating towards messages at a very high rate of speed.
This is in fact how we're starting to program in the 21st century and all programming will be of that type in the 22nd century.
The last popular languages that support this paradigm (Go, Rust, Erlang, Elixir) are exploding in popularity and entering the mainstream.
Message passing does in fact magically solve the problem of data integrity in parallelism when the data is immutable. Languages like Rust and Pony are starting to solve the deadlock problem, and 'data corruption' has nothing to do with messages or lack of messages.
What about e.g. HTTP? The dynamics of server-to-server communication are all about message passing (REST is just a message conveying state, if you ignore the other rules). Look at things like Erlang and Elixir processes. Akka and the actor model. Map/Reduce to a certain extent. These are all founded on the idea of giving a task out with enough information that the worker can reasonably go and work on the task without having to look around for state. Also look at things like FRP and more event driven architectures. While it's not ThingA sending a message to ThingB, ThingA still broadcasts a message, possibly containing the relevant state.
Also almost forgot. Look at the wealth of message queues that exist (RabbitMQ, ActiveMQ, SQS, Kafka, ... the list goes on).
Message passing implies one way broadcast. Think GOTO. HTTP is a request / reply protocol, i.e. a function call. Map / Reduce is an implementation device for SQL-like collection-based processing; modern Map / Reduce systems [Google's Flume / Dataflow] offer the collection API directly, using Map / Reduce as an implementation detail. FRP is a device for adding a collection API on top of streams of events. Collection APIs are conceptually function calls over collections [duh].
Just because your compiler uses GOTO / messaging behind the scenes to implement function calls, it doesn't follow that GOTO / messaging is the right API level for designing applications. We know this since 1968.
There are two distinct meanings mushed together under the "messaging" umbrella:
A. One way communication, aka events, goto. Very useful as an implementation concept, but generally a poor idea for structuring applications, as it makes local reasoning about code harder than necessary.
B. Polymorphism, aka closures. This is a very useful device for modularizing code, albeit, in practice, vastly overused. It also happens to trivially fall out from the concept of "first class function". To have first class functions, we need to be able to create them at any point in the program, thus we need some form of variable capture to allow value capture, which was named a "closure"; in the example below, function g closes over variable x. Furthermore, we need to be able to return the function value we just created and invoke it later; in the example below, the invocation of q(4) actually invokes the closure of g over 3. Once we have first class functions, we can add any number of concrete syntactic sugar constructs [classes, vtables, etc, etc, etc].
def f(x: Int): Int => Int = {
  def g(y: Int): Int = x + y   // g closes over the variable x
  g                            // return g itself as a first-class function value
}
val q = f(3)                   // q is the closure of g over x = 3
q(4)                           // invoking q(4) evaluates 3 + 4 == 7
IMHO, the "misunderstanding of OOP" is strongly attributable to a propensity in OOP community for using ambiguous terminology to pretend it possesses some sort of silver bullet, instead of recognizing the rather elementary computation constructs obscured by said terminology. In some alternate universe, we'd use "events" and "closures", and there will be very little misunderstanding. "messaging" is a false friend between computer languages and real world experience.
In modern enterprise programming, "message" is a very different concept (probably more widespread, if we look at the popularity of Java and .NET compared to Objective-C), which is not related to OOP. It's an integration pattern in which systems exchange messages through a message broker that abstracts them from each other. By their nature such messages are asynchronous and unidirectional.
Smalltalk receivers returned values to the senders of the message. Plus, they were synchronous.
So, while indeed still messaging, I'm not sure about the similarity to message queues/brokers in modern enterprise, where, as the parent said "By their nature such messages are asynchronous and unidirectional".
HTTP is closer to RPC. You always know the address of the routine to call, so it's not actually a message (except in HATEOAS, in which you first need to learn the next address from the last API call).
The closest analogy is the message queue (e.g. an ESB) like ActiveMQ or RabbitMQ (and the whole JMS tech), in which you actually do send a message and the infrastructure actually does figure out who will receive and process it.
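To make that concrete, here's a minimal in-memory sketch (Python, purely illustrative, not JMS, ActiveMQ, or RabbitMQ) of the pattern being described: senders only publish to a named topic, and the broker, not the sender, decides which subscriber's code runs.

import queue

class Broker:
    # Toy message broker: routes by topic, so the sender never sees the receiver.
    def __init__(self):
        self.topics = {}        # topic name -> queue of pending messages
        self.handlers = {}      # topic name -> subscriber callback

    def publish(self, topic, payload):
        self.topics.setdefault(topic, queue.Queue()).put(payload)   # fire and forget

    def subscribe(self, topic, handler):
        self.handlers[topic] = handler

    def deliver_pending(self):
        for topic, pending in self.topics.items():
            handler = self.handlers.get(topic)
            while handler and not pending.empty():
                handler(pending.get())   # the broker picks the code that completes delivery

broker = Broker()
broker.subscribe("orders", lambda msg: print("billing saw:", msg))
broker.publish("orders", {"id": 42, "total": 9.99})   # asynchronous and one-way
broker.deliver_pending()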
>>> This is not how we program in the 21st century.
Actually, for many if not most, programming today means using procedures and subroutines, organized through functional decomposition, just like it was done in the 1950s and 60s. We haven't gotten that far, have we?
OO is a different paradigm from organizing the concepts in a domain through functional decomposition, and understanding how it is different is key to understanding OO and messaging.
> "Could you elaborate leading by example `how it is different`?"
I'll try.
By the end of the 1950's, modular programming based on procedures and subroutines had become problematic. It was hard to have large teams of programmers working on parts of the code at the same time, and it was hard to create large well-structured programs.
The answer in the 1960's was to break down a problem into pieces, then sub-components, then subroutines through functional decomposition. This better allowed large problems to be understood, and worked on by different teams at the same time. This approach is known as structured programming.
This approach has resulted in quality software over the years, but suffers from painful limitations. One of the big ones is that as one works on a problem and discovers its requirements, the decomposition used needs to be modified, often from the top down.
Object orientation (much as for the "data" movement) represented a different way to approach a problem. Let's break down a large problem or system into entirely independent concepts that know how to do things themselves. Let's not orchestrate concepts (objects) centrally, but instead have them work together themselves to solve a problem. This comes close to the way we think. We recognize and organize things in our minds around us as objects. We differentiate between them by their attributes, and we classify them. When we see a tree we recognize it as such, and differentiate it from others by its attributes and also by its place in the hierarchy of types of trees we know. Behavior on the other hand is only immediately important to us if it is causal - for example if it threatens us. That unknown thing moving fast towards you prompts changes in your body, but once you recognize the object is a beetle and not a spider, your response changes.
Functions have no internal state. Early languages used global variables to share state across functions without explicit message passing (everything was a singleton). This created worlds of pain. The alternative was passing state with each function call. This also got painful as people often passed data down through functions.
OOP lets chunks of functions share state and hide that state from the wider application. More importantly, you could have multiple instances of that shared state without explicit management, saving a lot of complexity and effort.
But closures do. And that is why closures are a poor man's objects, and objects are a poor man's closures. As an example, Java closures are really anonymous objects with the closed-over state as instance variables.
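A minimal sketch of that duality (Python, chosen only for brevity): the same stateful counter written once as an object and once as a closure over a local variable.

class Counter:
    # Object version: the state lives in an instance variable.
    def __init__(self):
        self.n = 0
    def bump(self):
        self.n += 1
        return self.n

def make_counter():
    # Closure version: the state lives in a captured local variable.
    n = 0
    def bump():
        nonlocal n
        n += 1
        return n
    return bump

c1, c2 = Counter(), make_counter()
print(c1.bump(), c1.bump())   # 1 2
print(c2(), c2())             # 1 2 -- same behaviour, different packaging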
> More importantly, you could have multiple instances of that shared state without explicit management, saving a lot of complexity and effort.
Technically that's what classes give you, not objects. But agreed on the sentiment.
If you apply this common understanding of "closure" where the values that closures close over are not really values but mutable objects, then closures don't count as (pure) functions.
If one thinks of them as proxies to the closed-over objects, they are quite like "procedures". Another way to think about them is simply as objects with only a single method.
They are definitely still "functions", barring the fact that Java doesn't have first-class functions and SAMs are the closest thing. C# does have first-class functions, and it is even more liberal in its closure rules. (Java can only close over `final` variables, which are immutable variable bindings. C# doesn't even have immutable variable bindings, only member bindings.)
Also, they can still be "pure functions". Purity is a relation of side effects, not of mechanisms. (Even then, it's a somewhat loose definition -- is allocating memory a side effect?) If you don't allow closures to escape their declared scope, you can handle closed values as extra parameters. So closures don't mean you suddenly can't write pure functions, it means that some input to the function is pre-determined. In other words, closures have the same implications to purity as parameter binding -- absolutely none.
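A tiny illustration of that last point (Python, my own example): closing over a value is just pre-binding a parameter, and purity depends on what the body does, not on the capture mechanism.

def make_adder(x):
    return lambda y: x + y             # closes over x, yet still a pure function of y

add3 = make_adder(3)
assert add3(4) == (lambda x, y: x + y)(3, 4)   # identical to passing x explicitly

def make_logger(sink):
    return lambda msg: sink.append(msg)   # impure because append mutates sink,
                                          # not because a variable was captured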
You are kind-of making my point. It's clearer to use "function" for pure, mathematical functions, and otherwise "procedure"/"object"/"proxy"/"closure" or whatever. Even if some popular languages misuse the term "function".
> Even then, it's a somewhat loose definition -- is allocating memory a side effect?
This is leading nowhere. Is loading a value into a register a side effect? If you care.
I mean, if your only point is that programmers use the term "function" to mean something other than a pure mathematical mapping from one set to another... You won't get any arguments from me.
> Technically that's what classes give you, not objects. But agreed on the sentiment.
No, that's what objects give you. Classes are merely one means of creating objects (others exist), but the benefits come from the objects, not their means of construction.
My point is that you can have an OOP language / system that consists entirely of singletons. The ability to create multiple object instances from a template is classes or prototypes or whatever other mechanism. It's not something strictly necessary to OOP.
Yea, I have no idea what you're trying to convey. Additionally, a system built entirely with singletons isn't object oriented, it's procedural with modules.
Sadly, how we program in the 21st century isn't much different from how we programmed in the mid-20th century when computers were first invented (at least to a first-order approximation).
Where are the Maxwell equations of Computer Science?
Message passing (and by extension Smalltalk/Squeak) suffers from LISP disease, the notion that because the system makes no constraints on program structure, it can do anything, and therefore is maximally powerful. But "can do anything" is different from "does anything". In real programs you have to solve performance, security, evolving product feature needs, a growing team of contributors, static analysis tools to make global changes to the axioms of the system, all those details that matter when software leaves the lab and enters industry.
Is message passing that you're referring to different than the Ruby concept of messages? Objects respond to messages in Ruby, and it is encouraged to think of it that way rather than calling functions.
Is that actually the case in practice? I haven't used Ruby enough to know, but my experience so far is that most of the time the approach is still mostly just plain method calling in practice. I vaguely recall Rails moving away from some usage of method_missing (which I'd argue is one explicit example of message passing vs method calling).
He said "[...] it is encouraged to think of it that way rather than calling functions."
This is what Sandi Metz talks about when she speaks about message/responsibility centric design vs data-centric design in OOP. The conceptual difference is said to lead to different designs. You don't start out your design by thinking about what data you hold but you think about which responsibilities/roles you have and create objects around those by thinking how they might talk to each other to reach their goals.
(I'm currently reading POODR by Sandi Metz, so this is still new to me, if I got anything wrong there then please feel free to correct me)
I personally don't know how it is implemented under the hood, but it is important to understand as a Ruby programmer that methods are not just functions. In fact functions are not really first class entities in Ruby. Instead you have a mishmash of procs, blocks, instance methods and class methods and they are all different to a certain degree. It's one of the big frustrations of programming in Ruby sometimes.
But when it comes down to it what really is message passing? You have a reference to an object, a method name and a collection of parameters. When you dispatch the message you look for the method on the object and run the function passing in the parameters. Does it matter if you optimise the situation by placing the parameters on a stack and using a dictionary with a reference to the method? Or if you optimise by doing static analysis to look up the method and replace the dispatch with a direct call to the method? Or if you optimise further by inlining the code altogether?
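Here's a rough sketch of that mechanical description (in Python rather than Ruby, purely for illustration): a "message send" is a lookup of the method name on the receiver followed by a call, and the receiver can even intercept names it doesn't define, loosely analogous to Ruby's method_missing.

def send(receiver, message, *args):
    # Dispatch: find the method named by `message` on the receiver, then call it.
    return getattr(receiver, message)(*args)

class Account:
    def __init__(self, balance):
        self.balance = balance
    def deposit(self, amount):
        self.balance += amount
        return self.balance
    def __getattr__(self, name):               # fallback for messages with no matching method
        return lambda *args: name + " not understood"

acct = Account(100)
print(send(acct, "deposit", 25))    # 125 -- same effect as acct.deposit(25)
print(send(acct, "withdraw", 10))   # "withdraw not understood"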
The point is not how it is implemented but how the programmer thinks of it. Object Oriented programming is a thing humans do, not a thing computers do. Object Oriented Programming Languages have features that make it more convenient to do Object Oriented programming. Whether the code is OO or not is only up to the programmer, not the compiler/interpreter.
Having said all that, is Ruby my first choice for (what I understand to be) Alan Kay's model of OOP? No. But then I really don't know what my first choice would be. It is possible that the language hasn't been written yet ;-)
>I don't really understand this obsession with messages.
>We're moving away from it.
On the contrary. We use it more than ever -- only similarly to Greenspun's tenth rule -- in "ad-hoc" and crappier implementations.
What do you think REST microservices are beneath the surface for example? Or AJAX, WebWorkers, SOAP. Etc. The bad parts of messaging without the flexibility (or even the performance of Smalltalk of the 90s) (Incidentally, Erlang/Elixir gets this right, Akka, etc). Or, all the Rabbit, Duck, Donkey MQs, ZeroMQs, pub/sub, event emitters etc out there.
>This is not how we program in the 21st century
That's more of a problem of the 21st century than of messages.
>Besides, this idea of message passing is really not that useful for modern programming anyway but a lot of people still have this weird idea that message passing magically solves the problem of data integrity in parallelism. It doesn't. It does nothing to solve it. You can still have deadlocks and you can still have data corruption.
Deadlocks are not related to data integrity, so they're another story. As for "data corruption", messages where values are copied (and thus immutable) are not really prone to it compared to "how we program in the 21st century".
No, message passing doesn’t solve all the problem of parallelism, but it is one of the most powerful and useful tools to have in our toolkit when attacking these problems.
I hope it doesn't dominate as OOP did in the 90s, as many other models such as actors and dataflow still have a lot of room for research and improvement.
Actors are based on message-passing as well. The paradigm favored by Go is CSP, which is a pretty interesting idea that also revolves around message-passing, but it doesn't have a monopoly on the practice.
Message-passing is a necessary but insufficient abstraction. It's not pixie dust, but if you want to build a robust and organized system you'll probably want to reach for it. For the data integrity issue, you need to have a strong data model implemented upon your message-passing architecture, like an immutable event log. Project useful aggregations and indices from that. I'm wrapping up an implementation of exactly this today (Node.js + Kafka = poor man's Erlang actors), AMA. Email is in my profile as well.
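For flavour, a minimal in-memory sketch of that idea (Python, purely illustrative; the real stack mentioned is Node.js + Kafka): an append-only, immutable event log with an aggregation projected from it.

events = []                          # append-only log; recorded events are never mutated

def record(event):
    events.append(dict(event, seq=len(events)))   # copy in and stamp a sequence number

def project_balances():
    # Rebuild an aggregation (account balances) purely from the log.
    balances = {}
    for e in events:
        delta = e["amount"] if e["type"] == "deposit" else -e["amount"]
        balances[e["account"]] = balances.get(e["account"], 0) + delta
    return balances

record({"type": "deposit", "account": "alice", "amount": 100})
record({"type": "withdraw", "account": "alice", "amount": 30})
print(project_balances())            # {'alice': 70} -- derived state; the log stays the source of truth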
I might be wrong, but I think people like Alan Kay and Christopher Alexander have in common that they:
* see that things can be done better (not as in upgrade but best);
* have a global idea/feeling of how this could be done;
* take a lot of ideas from nature.
And somehow I think they have a lot of trouble expressing the 'how'.
My take: we should take a good look at nature because this is closest to what we are.
For example the communication between objects could be learned from cells.
Maybe they're reluctant to pick "the" how, too soon.
If you're in a field from the early days, you probably have more of a pioneer or explorer mentality. Later come settlers, who pick areas for utility. Even later come residents, born and raised there, most of whom don't really question what is given.
This probably seems weird and frustrating to the explorers -- why settle so soon? Yes it seems comfortable and convenient, here, but there is so much left to explore.
That's how I understand Alan Kay. He doesn't want to sell us some snazzy next big thing. He wants more of us to spend more time imagining and trying new things.
Nothing wrong with looking at nature and trying to get inspiration from it (e.g. neural nets) but ever since we realized that flight was easier to implement with chemical combustion than by flapping wings, we know that just because something works in nature doesn't mean it will be easy to replicate for human use.
We probably won't, at least not at human-scale and not on Earth. Flapping wings don't scale; there's a reason you have to get to #12 of the heaviest birds list to get one that flies, at ~13% the maximum mass of the heaviest bird.
Alan was a Molecular Biology major in college and has stated before that the basis for OO messages was originally inspired by cellular communication:
> I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning -- it took a while to see how to do messaging in a programming language efficiently enough to be useful).
I'd love clarification on a few points, from anyone who understands everything Alan says here:
Think of the internet -- to live, it (a) has to allow many different kinds of ideas and realizations that are beyond any single standard
Is he referring here to alternative protocols from http, or to the fact that behind the single http protocol are servers written in myriad languages and styles?
If you focus on just messaging -- and realize that a good metasystem can late bind the various 2nd level architectures used in objects
What is a metasystem? Can you provide an example? Similarly, what are 2nd level architectures?
the realization that assignments are a metalevel change from functions, and therefore should not be dealt with at the same level
What does he mean here? What are the two "metalevels" of assignments vs functions? What are examples of other "metalevels"? I've just never heard this terminology in this context, so I don't know where to start in understanding it...
> Is he referring here to alternative protocols from http, or to the fact that behind the single http protocol are servers written in myriad languages and styles?
My interpretation is that the design of TCP/IP (and maybe HTTP?) focuses on the behavior of the components from the perspective of how they communicate with one another, not how they behave internally. They follow the Robustness Principle: Be conservative in what you do, and liberal in what you accept from others.
> What is a metasystem? Can you provide an example? Similarly, what are 2nd level architectures?
It took me a long time to understand what this means. Basically, a metasystem is a blurring of the language and the code you write within that language. Smalltalk is an example of such a system because you can change the language itself from within the language and the environment.
There are downsides to such a system, though, including the proliferation of many fragmented, inconsistent environments. To take a deliberately ridiculous example, let's imagine Ruby was a metasystem. You write some code in your rails app that allows a third method privacy mechanism beyond public, private, and protected. Or, that you decide you want multiple inheritance in ruby.
That's powerful -- and dizzying for many people not accustomed to thinking about their languages at that level. So, what Alan is suggesting is that we erect "fences" of some kind to both allow this power in our languages but in a protected manner, so that we mitigate the risk of the fragmented world that could result. (Since you've essentially created a new "flavor" of ruby by changing its behavior.)
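As a loose illustration of that blurring (a Python metaclass here, standing in for the idea rather than for anything Ruby- or Smalltalk-specific): ordinary user code reaching down and changing how the language itself constructs classes.

class Sealed(type):
    # Metaclass: user code altering how the language builds classes.
    def __new__(mcls, name, bases, namespace):
        for base in bases:
            if isinstance(base, Sealed) and not namespace.get("__allow_subclass__", False):
                raise TypeError(base.__name__ + " is sealed; subclassing is disabled")
        return super().__new__(mcls, name, bases, namespace)

class Config(metaclass=Sealed):      # a new "flavour" of class with extra rules baked in
    pass

# class MyConfig(Config): pass       # would raise TypeError: we changed the rules of the game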
There is a fantastic book on this topic called "The Art of the Metaobject Protocol" that I highly recommend. It guides you through building an object oriented language construct from within Common Lisp.
I don't know what he means by 2nd-level architectures, nor what he's saying about assignments being meta-level change from functions.
the realization that assignments are a metalevel change from functions, and therefore should not be dealt with at the same level
I have no idea what Kay himself meant by this, but I personally appreciate that they are very different (and maybe my perspective aligns with his).
A function prescribes how to transform input data and produce output data. An assignment manipulates the environment of the scope itself - you are updating some state somewhere in the runtime, the interpreter or main memory.
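One way to see the difference concretely (a Python toy of my own, not necessarily what Kay meant):

def scale(xs, k):
    return [k * x for x in xs]     # a function: new output computed from inputs, nothing else changes

prices = [1, 2, 3]
doubled = scale(prices, 2)         # prices itself is untouched

prices[0] = 99                     # an assignment: it reaches into the environment and changes
                                   # what every other reader of `prices` will now observe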
One interesting point from a talk of his - the internet has never broken or been unavailable since they turned it on, even though it has gone through about 3 successive generations of hardware/software.
So the idea, essentially, is that TCP/IP is like a "free form field" where people can write whatever they want, and then everyone is free to create their own structure on top of that, to suit their needs. In contrast to some protocol that accepted only, say, XML? His point, then, is that not forcing such structure was a crucial ingredient in the internet's success?
The TCP messaging analogy here is that the design was a protocol, which focused on how different components in a TCP system communicate with each other, rather than a design that focused on how each component worked internally.
The inter-communication was the central design, not the intra-communication.
Many OOP programmers today spend the bulk of their time thinking about how their objects should work rather than how they should communicate.
Well, TCP/IP was just about the mechanics of getting 2 machines to be able to communicate with each other. He hints at the end of that comment on how to expand that to include meaning and interpretation of the actual message. One way of thinking about it - how would we communicate with an alien civilization? In a way that's how 2 machines that never interacted before could negotiate common meaning.
> One way of thinking about it - how would we communicate with an alien civilization? In a way that's how 2 machines that never interacted before could negotiate common meaning.
Interesting. Actual attempts to solve this, such as the Arecibo message, always reference "universal" constants like prime numbers, atomic structure, the speed of light, etc, to establish a common language.
I'm not sure I understand what the analog would be for two computers?
Unfortunately, I'm not quite sure either. Alan Kay has a very socratic way of communicating (at least from reading through all his comments on the AMA) and I often feel like he's been sitting in a higher plane of thought for so long that the only way he is able to communicate his ideas is by 'forcing' the recipient to make the same jumps he has (a bit like the square in Flatland). I could be completely misunderstanding where he was going with those thoughts.
IMO the problem with object-oriented programming is that it turned into the standard curriculum for peoples' first semester of computer science, rather than being yet another interesting concept that advanced programmers would ponder. And the way we handle "the standard CS curriculum" sucks. (In the USA at least.)
For example, AP Computer Science requires Java and tries to teach stuff like designing inheritance hierarchies to people in their first two months of programming education. And then tests it mostly using multiple-choice quizzes. It's totally inappropriate - most of these students would fail fizzbuzz.
OOP should be something you only get into after you have written some practical programs, rather than something where you can't program, but you memorize the difference between "is-a" and "has-a" inheritance so that you can pass multiple choice tests.
OOP made sense to people who already knew structured programming and understood functional decomposition (what we now call 'refactoring'). Perhaps education should start with that and then justify OOP? I remember reading about the original LOGO experiments; one thing kids do not spontaneously do is break up their monolithic actions into sensible functions.
Yeah "breaking up a complex function into simpler functions" is a great example of something that isn't taught well today, yet is simpler and more critical than what we try to teach people in intro courses.
I think of it in analogy to math. OOP is like topology - it's definitely useful in some cases, it's not too hard for an experienced mathematician to get the basics, and yet in a lot of situations it's irrelevant and it doesn't really belong in the first class you take.
Being able to solve fizzbuzz is like doing arithmetic. If you can't solve fizzbuzz then you aren't going to be able to really get any advanced concepts. It's like you can't be a good mathematician if you can't figure out whether 351 is an odd number - you need to learn the basics first, even if "real math isn't about arithmetic".
Sadly, you can tell if you do a lot of phone screens that many people who graduate with CS degrees still can't code fizzbuzz. Our CS education is busted right at the beginning. It needs to get the basics right, like, can you write loops, can you write functions. Today it is failing at that.
Perhaps inheritance is easier to teach, which might be completely unrelated to how useful it is in practice? That would explain a lot.
It reminds me a bit of how I was taught basic economics, where societies go from barter to currency. Apparently this is not true at all, but it's a story that is easy to teach.
Breaking up a large function into smaller functions isn't even (strictly speaking) functional decomposition, because not every sub-function might be dependent on the previous function. "Extract function" is a kind of refactoring, but not refactoring itself.
When the ECE department at my university revised the undergrad Computer Engineering curriculum, they rejected the CS introductory courses partly for this reason. They also want to teach hardware first, but more importantly they teach "systematic decomposition" from the very beginning (before they teach programming) and leave OOP for future course work.
I have experience teaching kids Lua, and with the right metaphors and a little bit of backtracking and foundation-building, even complicated ideas like emulating classes and single inheritance can be understood and even implemented by young students.
If I taught them Lua reserved keywords, and a couple little math tricks here and there, they usually would all bunch up everything into a couple huge functions, and their game would (sort of) work but be impossible to reason about, and very painful to extend.
By introducing objects as a way to represent things that they want the game to do (draw things, shoot things, eat things, etc), it becomes clear to them that there is merit in structuring programs with objects beyond "shrugger says to do it like this!"
OOP on its own doesn't make much sense. You can't teach someone to drive stick shift if they don't know what a car is. Maybe you could, but that's probably even worse.
OOP should be something you only get into after you have written some practical programs
I'd modify that to "practical programs that are nontrivial enough to benefit from OOP", just like how I advocate not teaching about functions until they become useful. IMHO the CS curriculum should start with the low-level basics and work up from there in a natural progression:
- Representation and interpretation of binary data.
- Basic straight-line computation (more like calculation), concept of instruction execution
- Flow control: decisions, loops
- Functions/procedures/subroutines
- Grouping data together: Arrays/structures/records/etc.
- Basic OOP, grouping data and code together: Objects, composition, inheritance
- Advanced OOP: virtual functions/polymorphism/etc.
I have worked with some of that type of Java-student you mention, and it is astounding how many can create huge object-oriented monstrosities (including design patterns), the bulk of which are object creation and method calls, while not understanding arrays or even basics like why datatypes have a finite range. Their code has more object creation and method calls than branches, loops, or simple computational statements. Given that sort of inverted knowledge they have, that they fail fizzbuzz is not surprising at all. It also explains a lot of the inefficiency and bloat in most software. It's somewhat like teaching calculus to students who don't know arithmetic.
I've asked a few why OO is the first thing you learn - and the reasoning was twofold:
1) People that start off by learning imperative programming end up writing really, really shitty OO code. They're the type of people that end up copying and pasting stuff everywhere. It's really hard to get students to not take the "short cut" of copy-paste and to think in objects when they have homework deadlines and other things to worry about (and the teacher never actually spends time teaching you how to use programming toolchains...)
2) It's conceptually one of the biggest stumbling blocks for people. A lot of people have trouble wrapping their heads around OO. So sure, you could teach it later, but it'll never stick or sink in.
People have trouble understanding OO because usually it is presented informally with insufferable dog/cat/mouse examples.
Moreover, the fact that "subclassing==subtyping is unsound" is swept under the rug. A lot of people actually grasp the latter at an intuitive level, that's why they're never comfortable with objects.
OO should be taught as abstractly as possible (I recommend Didier Remy's writings). After grasping the concepts, many people will conclude that they'll be better off with FP and modules.
> It's totally inappropriate - most of these students would fail fizzbuzz.
Maybe this is related to the common occurrence of people who can provide stellar answers to interview questions about inheritance hierarchies and object composition... and yet can't pass a basic fizzbuzz style test?
It may actually be directly caused by it. If you want people to come to your university, you need good post-degree employment figures.
If every tech interviewer phone-screens by asking about multiple inheritance and red-black trees, it's really not surprising that universities might tip the curriculum towards that sort of thing at the expense of actual programming experience. After all, if you can't pass the phone screen you'll never get the opportunity to write code.
This is the consequence of not teaching domains of problem-solving through actual problem-solving. You see the same thing in math. I had an excellent practitioner of this method for AP calculus. I learned limits by attempting to find the area under a curve using ever-shrinking rectangles, until we got to pushing them to zero width. And now, 15 years later, I still remember the concept. I don't necessarily remember how to do any particular problem involving limits, but I know what limits are and when they apply to a problem.
His distinction reminded me of what I wrote in a comment regarding Dissipative Adaptation and its relationship with language [1].
If Alan Kay is right, and Wittgenstein is right, and Prof. England is right, then we should be focusing on the language that emerges from the necessary transactions between functions, and let those transactions define the objects. That would make the modern popular understanding of OOP completely backwards -- as backwards as our understanding of the role of language prior to Wittgenstein and the role of thermodynamics in biology prior to England.
Meaning, objects should not define (infer or induce or implement) the communication, nor the expression. Rather, all transactions (interactions, communications) should define (infer or induce or implement) the objects. And, given names, the objects will find a way to sort themselves based on rate of utility (the chosen objects, just as we choose the right words, from which refined definitions sort themselves).
Before Wittgenstein philosophers were obsessed with the factual nature of words and tried mapping everything correctly (logically) with the natural world. Except, they were failing.
Wittgenstein came in and basically said language was never designed to represent reality, but rather is what emerges from the use cases between people. Communication is a transaction ("game" in his words), and not some mathematical or logical construct. It may have such properties, and the people and the context are all real, so reality is involved, but language is not a direct output, nor does it need to directly correlate to resist contradiction or paradox -- which abound in philosophy.
Except, for those who speak it, language is their reality. Those who cannot overcome their own immersion can never see past their own words, which sums up much of his opposition. They are all correct in their world and in their words... except Wittgenstein was talking about how words and worlds worked.
In short, words can be arbitrary, and are constrained by the goal to communicate and transact. This exact phenomenon which Wittgenstein described as what we are doing is the phenomena England is describing as what biological systems are doing.
It's all Dissipative Adaptation, with language being the unique construct for every such system that emerges and sustains it all.
Are there any examples of Kay's ideal other than The Internet? Especially systems that I could actually inspect and learn about in detail, rather than read short stories about.
Hearing the way Alan Kay talks about "messaging" always reminds me of two technologies: the Flux/Redux architecture and Apache Samza & Kafka.
> The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be.
With React and Flux, this is exactly what's going on! You're defining view components, and when someone interacts with them, there's a very structured, discrete set of actions that can be fired. These actions are then dispatched and handled elsewhere, but somewhere at the core, there's a single source of truth for what happens upon any given message.
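A bare-bones sketch of that "single source of truth" shape (written in Python for consistency with the other examples in this thread, though Flux/Redux live in JavaScript): components only fire action messages, and one reducer decides what each action means for the state.

def reducer(state, action):
    # The single place that decides what any given message does to the state.
    if action["type"] == "ADD_TODO":
        return {**state, "todos": state["todos"] + [action["text"]]}
    if action["type"] == "CLEAR":
        return {**state, "todos": []}
    return state                     # unknown messages leave the state unchanged

state = {"todos": []}
for action in [{"type": "ADD_TODO", "text": "read the AMA"}, {"type": "CLEAR"}]:
    state = reducer(state, action)   # dispatch loop; views would re-render from `state`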
And for Apache Samza, I think this video[1] does a nice job of talking about stream processing as a programming architecture. There's a connection to be made between processing streams of information and processing messages between modules, and that video goes a good way towards exposing it.
Redux is definitely a step in that direction. Elm, which it's inspired by, is even further along that road in that its design is better handled (but then it doesn't require handling in JavaScript so I suppose that is why)
I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning -- it took a while to see how to do messaging in a programming language efficiently enough to be useful).
I found it really hard to understand what Alan Kay is actually talking about. Saying things like
> This is why "the objects of the future" have to be ambassadors that can negotiate with other objects they've never seen.
This seems like something that (based on my limited knowledge) isn't possible with Smalltalk (in any but the most trivial of cases), and so I don't really consider it an example of what he is talking about. Or, at least, all of the things that I know of that you can do with this you could just as easily implement in languages without the same dynamism as Smalltalk.
My conclusion after reading all of that is that Kay is talking about a system so completely unlike anything we have that there aren't even really examples of what it would look like.
I am really trying to understand what Kay is talking about, so any pointers would be interesting!
Here's my understanding of the general idea (let me know if I got something wrong!). Say you have a bunch of computers with different software and hardware. You come up with a cool new image format called "PJEG". To get the images to show up on all the computers, you typically do the following:
* Publish a PJEG spec
* Define a .pjeg extension and let everyone know that means it's a PJEG file
* Write a PJEG encoder/viewer for each kind of computer
* Distribute these programs to the computers
* Configure the computers to open .pjeg files using the programs you distributed
* Modify other programs (image editors, web browsers) so they can recognize, encode and view PJEG files as well
* etc...
This approach doesn't scale very well. I think the alternative Alan is suggesting is that you'd include an interpretable description of the file format in the file itself in a sort of meta-format. Once you have an interpreter for these meta-format descriptions, all you'd have to do is:
* Include an interpretable PJEG encoder/viewer with every PJEG file
And that's all! Any program able to read the meta-format could use this file without any extra software. This obviates file type metadata and reduces the amount of distribution you need to do, making the whole thing way more scalable. The practical problems with this solution (larger file size, slow encoding and decoding) can be solved in various ways:
* The larger file size can be mitigated by format negotiation (if I already know about the format, you don't have to tell me about it)
* Optimized encoders and decoders can be written to replace the slower interpreted ones included with the file
Having the interpreted format description makes it easier to validate the optimized version, too (you can generate a bunch of random examples and make sure they decode/encode the same way). And this doesn't have to just be for files, you can use the same technique for arbitrary "data types" in your program as well.
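Here's a toy sketch of that approach (Python; everything about "PJEG" here is invented for this comment): the file carries an interpretable description of its own decoder, so a host that has never heard of the format can still read it by running that description (which is also exactly why you'd want some kind of sandbox around it).

import json

# A self-describing "PJEG" file: payload plus the source of its own decoder.
pjeg_file = json.dumps({
    "format": "PJEG/0.1",
    "decoder_source": (
        "def decode(payload):\n"
        "    # toy run-length decoding: [value, count, value, count, ...]\n"
        "    pixels = []\n"
        "    for value, count in zip(payload[::2], payload[1::2]):\n"
        "        pixels.extend([value] * count)\n"
        "    return pixels\n"
    ),
    "payload": [255, 3, 0, 2],
})

# A host with no built-in PJEG support: it only understands the meta-format.
doc = json.loads(pjeg_file)
scope = {}
exec(doc["decoder_source"], scope)        # interpret the decoder that shipped with the file
print(scope["decode"](doc["payload"]))    # [255, 255, 255, 0, 0]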
Great example. I'll add that the "compile everything to Javascript" ecosystem shows that it's easily possible. Arguably, it's already being done for web stuff just not below a certain level of the stack. There were related fields that did this stuff called agent-oriented programming and meta protocols.
In agent-oriented programming, one could send the code and data packaged together to the remote site. It could do any necessary computation there through the platform's interface to prevent lots of data being transferred. It might also do typical RPCs on remote sites. As it was interpreted, it could in theory even modify itself to use a different communication or storage method.
For meta-objects, that was just ways to create other objects. Starts with stuff like CLOS and Smalltalk that's really about clever ways to specify systems. Also allows self-modifying or improving code. Later versions, applicable here, allowed one to specify protocols or interfaces that meta-tools would turn into libraries for one's programs. Some even had interpreters built-in so apps could negotiate arbitrary protocols during runtime. Such tech gets us closer to the idea of focusing on a higher level and how things communicate rather than their state and coding specific procedures for sharing it.
Not necessarily. That was one of the points Alan made in his AMA when suggesting 'send processes rather than messages'[1]. That thread used the example of how to find things and then what to do when you find them.
This would require that any new image format only did things that were foreseen by the creators of the metalanguage. If the metalanguage is Turing complete then it's just an interpreter. Python, for example, could be your metalanguage. Then I would argue that we already have this.
You are correct, Smalltalk as-is can't currently do that. I believe he was referring to a future language / evolution / system that he would like to see, not something that currently exists. These are the sorts of things I think he's expressing his frustration about when he says things like Smalltalk getting long in the tooth etc., in that it was never intended to be the be-all, end-all of languages... just one stop along a path.
But don't let this discourage you from looking into Squeak. I played around with Squeak in the 90's but couldn't see past things like the weird mouse button mappings and color scheme. Fortunately I came back to it again and found some of the glaring UI issues (by modern standards) had been smoothed over. It also probably didn't hurt that I was a little older and better able to stick with it until I could finally see, on the second go at it, what the original vision was. Also, there is the work from his STEPS project, which used Squeak as a platform to prototype some of the next-generation ideas he talks about.
If you want to understand better the next generation concepts, definitely look at the VPRI writings (esp. the STEPS annual reports) linked to in a previous post. To bridge the gap from here to there (i.e. to get to a tangible system that you can actually use that does any of this) I think you're probably going to want to learn Smalltalk as a starting point.
Regarding Smalltalk, I don't think Alan Kay has ever felt it was a pinnacle of language design (or even a complete implementation of his ideas). He has said in several talks that while Smalltalk was a PARC project it saw many re-imaginings, but once it went out into the world it basically was frozen and hasn't changed since 1976. Here's a good history of Smalltalk - http://worrydream.com/EarlyHistoryOfSmalltalk/.
When building complex information systems you end up with subtle drift in what something actually means from person to person or over time.
More directly in regards to having ambassadors working with unseen objects:
Seems like you would need some shared "language" the objects speak, otherwise communication is impossible. Maybe a better way of describing intent or information? Like some kind of logical or declarative model + messages.
And it also reminds me of https://en.wikipedia.org/wiki/Smart_contract and one can see how different block-chains need different interpreters to understand what computations are needed in order to "execute" the message.
>The essence of object orientation is that networks of collaborating objects work together to achieve a common goal.
> The common sense of object oriented programming should reflect this essence with code that specifies how the objects collaborate. Our industry has, unfortunately, chosen differently and code is commonly written in terms of classes.
> A class tells us everything about the properties of the individual objects that are its instances. It does not tell us anything about how these instances work together to achieve the system behavior.
Note this email and conclusion are from 2003-07-23:
> OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them.
The original article gives some important context to this sentence; otherwise it is slightly weird, as neither ST-80 (at least in comparison to earlier Smalltalks) nor CLOS has that much of an explicit concept of messaging, and in both cases it's essentially a late-bound function call.
One example is the HTTP GET request. This was originally conceived of as a file download, where the URL path is mapped directly to a filesystem path. GET as an RPC: "download the file at this location."
But in modern thinking, HTTP GET is a request with abstract semantics. The URL's path is abstract, and may be interpreted arbitrarily by the server. The client has no idea whether the request is serviced by a simple sendfile() or a fully dynamic program. HTTP GET is no longer an RPC, it is now a message.
A key difference is whether your function call is a request for concrete or abstract behavior. If you answer "abstract," then the call itself must be reifiable data, which can be sliced and diced in ways hidden from the caller.
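A small illustration of the difference (Ruby; the handler names are hypothetical): the client sends the same GET either way, and only the server decides whether the path is a concrete file name or an abstract request:

    require 'json'

    DOC_ROOT = "/var/www"

    # concrete: the path literally names a file on disk
    def handle_get_concrete(path)
      File.read(File.join(DOC_ROOT, path))
    end

    # abstract: the path is just data; the server interprets it however it likes
    def handle_get_abstract(path)
      case path
      when %r{\A/users/(\d+)\z}
        JSON.generate(id: Regexp.last_match(1).to_i, name: "example")
      else
        "not found"
      end
    end

    puts handle_get_abstract("/users/42")   # => {"id":42,"name":"example"}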
As defined in both RFC 1945 and 2616 (that is, as always has been defined): "The GET method means retrieve whatever information (in the form of an entity) is identified by the Request-URI."
The thing is, there's no difference between abstract and concrete semantics in terms of the definition of what is a function or a message and what is not. You can send a message or call a function with very concrete semantics ("please check if that file exists") or with something very abstract ("please execute a job").
The real difference between the message and function definitions is how they are executed. With a message you hand the information about what you expect to be done to the underlying infrastructure, which has to find the actual code to complete the delivery. With a function you are supposed to know exactly what code you are calling.
In the modern world, the discussion about these differences in the context of OOP classes does not make much sense: virtual methods of interfaces do the job just as well for local invocations (the JVM in the Java world, or the VMT in C++, does the binding), so it does not matter whether you call them messages (which may be right, considering the dynamic nature of the call) or functions (also correct, because in the C++ case there's no special mediator passing the call, and in Java the JVM is practically invisible to the application programmer). The cases when message delivery does not coincide with the invocation of a single method are rare and normally solved with application-level architecture (design patterns like Facade are a good example of the solution).
What's more important, IMHO, is that this talk about the importance of messages is no longer relevant to the current problems of object-oriented programming. I do not think we are really concerned about object interactions today; rather we have to fight the enormous complexity of big projects, finding better ways of generalization (by means of more expressive languages and metamodels) and of API contract clarification and enforcement (by eliminating side effects and correctly handling corner cases).
HTTP GET was initially defined to "Please transfer a named document back." [1]. The idea of interpreting the path to dynamically generate a document came later. If you like, you can think of it as a shift from Apache-circa-1999 servers that expect to deal primarily in files, to Rails-style servers that primarily route requests to dynamic code.
You're right that design patterns like Facade are often used instead of exploiting messaging. But this comes at a price: clients are necessarily aware that they are talking to a Facade. Discovery and negotiation are done statically, via the type checker. Versioning is static too. Everything is tightly coupled.
Say you have a String, and you want to concatenate it with another String. You check the docs, or StackOverflow, or IntelliSense, right? What you don't do is use Reflection to list all the methods, and pick based on their name. And you certainly don't feed an input to each method, and pick the one that gives the right output! (This strategy is routine in Smalltalk! [2])
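For the curious, a toy Ruby version of that "pick by behavior" trick (hedged: Smalltalk's MethodFinder does this far more carefully, and a real one would have to worry about side effects):

    # try every String method with one argument and keep the ones whose
    # result matches the output we want
    def methods_that_turn(input, arg, into:)
      input.methods.select do |name|
        begin
          input.dup.public_send(name, arg) == into
        rescue StandardError
          false
        end
      end
    end

    p methods_that_turn("foo", "bar", into: "foobar")   # e.g. [:+, :<<, :concat, ...]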
Now say you want to build a web search engine. Do you try to statically build up website descriptions? Maybe you check StackOverflow for the HN link structure, catalog it for everyone? No, you start at the root, interrogate it, and build the link graph as you go. You have a conversation with the remote site, and perform discovery dynamically.
That's messaging! And that's why URLs are a better example than most modern programming languages: the value of OO is best realized when you have loosely coupled components, like on the Internet. (Alan Kay wrote that every object should have a URL. [3])
Well, the remark about the acceptable format in the same definition from 1992 means there is dynamic processing of the request, which may at least involve converting the document. And it does not say "file", which means a database can serve as the document storage. In 1999 the web was mostly dynamic (I myself was using PHP, Informix WebConnect, Java and C for web development by that time), and the more abstract RFC 1945 came in 1996 (just 4 years later), 1 year after Amazon went online. The period when GET could only mean "give me a file" was in fact very short, if it ever existed.
Then, you write that a Facade is "tightly coupled". In fact, it's no more coupled than any native messaging. Say the client code wants an object representing a coffee machine to prepare a latte at a temperature of about 40C. There are other things the coffee machine can do (e.g. self-check), so you need to pass 3 facts to it: "prepare drink", "drink type is latte", "temperature is 40C". You can indeed send this object a message containing those 3 facts as message data. Or you can call a "prepare" method of the API with 2 parameters. See, there's no difference in the amount of information you use for this operation: either way there are 3 facts, for which you use 2 different syntactic forms. Coupling is by definition the amount of shared information about mutual state and behavior between components (the number of facts fixed in the interaction contract). Here it is the same.
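A sketch of those two syntactic forms in Ruby (the CoffeeMachine class and its method names are hypothetical):

    class CoffeeMachine
      # message form: the three facts arrive together as data
      def send_message(name, args = {})
        case name
        when :prepare    then prepare(args[:drink], args[:temperature])
        when :self_check then puts "all good"
        end
      end

      # method form: the same three facts, fixed in the call signature
      def prepare(drink, temperature)
        puts "preparing #{drink} at #{temperature}C"
      end
    end

    machine = CoffeeMachine.new
    machine.send_message(:prepare, drink: :latte, temperature: 40)  # message form
    machine.prepare(:latte, 40)                                     # method form

Either way the same three facts cross the boundary; only the packaging differs.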
With all that in mind, modern languages are so rich in syntax that there's almost no use case for messages as the means of interaction between objects at this moment. When you mention reflection, I'd say it's not the same. You use reflection to discover the address of the routine to call (that is, you do not rely on the infrastructure for delivery), rather than delivering the message to the object so it can dispatch it itself. Well, of course, one remark - this depends on how you define a message. Reflective calls are late binding in just the same way as dependency injection: at runtime some container or reflection API provides the address of the code implementing a given interface, which will be invoked. If reflection means messages, then DI means messages, then the VMT means messages, then messages = methods, and all this talk is about an outdated definition of what everyone already uses.
Web crawlers are a bit of a different story: search engines do not deal with objects, they always deal with documents, which do not expose any behavior. The only thing on the net that comes to mind is the semantic web. There was a lot of talk about it in the early 2000s, amid the hype about web services, runtime discovery, the web ontology language etc., but it's now as dead as CORBA.
Reifying means to take the abstract and make it concrete. Reflection is an example: take a language feature, like a class, and make it into real data. Many languages have some reflection capabilities for types; message sending takes this even further.
For example, in Java, I can take a class and make data out of it: dynamically look up fields, etc. I can also do that with a method. But I cannot take a method call and turn it into data. I can sort of do it with a lambda, but lambdas make poor data:
1. A lambda is concrete, not abstract. It's literally "run this code."
2. Even if your lambda merely invokes a method on an object, it's still opaque. I can't pick it apart, get the parameters or method name out, etc.
Here are some practical objects you can't make in Java:
1. Envelope: wraps an arbitrary remote object Contents. Any method you invoke on Envelope gets sent over the wire and invoked on Contents.
2. Delegator: represents the union of one or more objects, the Delegates. Any method invoked on Delegator gets re-sent to the first Delegate that understands it.
3. Tee: Any method you invoke on the Tee gets multicast to its wrapped objects.
4. Mapper: wraps a list. Any message you send to Mapper gets sent to each element of its list, and the resulting list is returned.
etc. Put directly, Java method invocations are procedure calls, not messaging. But once you reify method invocations into objects, it opens up all sorts of dynamic possibilities.
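A rough Ruby sketch of two of those (the class names are made up), leaning on method_missing so the invocation itself is available as data:

    # Tee: multicast any message to all wrapped objects
    class Tee
      def initialize(*targets)
        @targets = targets
      end

      def method_missing(name, *args, &block)
        @targets.each { |t| t.public_send(name, *args, &block) }
        nil
      end

      def respond_to_missing?(name, include_private = false)
        true
      end
    end

    # Delegator: re-send any message to the first delegate that understands it
    class FirstResponder
      def initialize(*delegates)
        @delegates = delegates
      end

      def method_missing(name, *args, &block)
        target = @delegates.find { |d| d.respond_to?(name) }
        target ? target.public_send(name, *args, &block) : super
      end

      def respond_to_missing?(name, include_private = false)
        @delegates.any? { |d| d.respond_to?(name) }
      end
    end

    Tee.new($stdout, $stderr).puts("goes to both streams")
    p FirstResponder.new("a string", [1, 2, 3]).upcase   # => "A STRING"

The Java equivalent needs either code generation or java.lang.reflect.Proxy against a fixed interface; there's no general way to intercept an arbitrary method call on an ordinary object.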
"reifiable" = "able to be reified", where "to reify" means "to make something abstract concrete or real".
So, for example, Scheme's call/cc function reifies continuations - it takes a continuation, an abstract control-flow concept, and turns it into a concrete object that you can pass around and manipulate in code. Therefore, continuations are reifiable in Scheme.
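Ruby exposes the same idea through its (deprecated but still working) continuation stdlib, so here's a hedged sketch in Ruby rather than Scheme:

    require 'continuation'

    saved = nil
    value = callcc { |k| saved = k; 1 }   # reify "the rest of the program" as an object
    puts "got #{value}"                   # prints "got 1", then "got 2" after the jump
    saved.call(2) if value == 1           # re-enter the continuation with a new value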
The other comments have already answered this well enough. If you want a little more reading, this is a nice example of an application of reification [0]. The first few paragraphs define it well, and the rest shows examples of implementing it for handling user/actor input and actions in a video game.
So... it's basically event-driven programming? Objects generate events, and other objects are free to subscribe to them and do something in response (or not)?
I thought of something similar a few years ago, I think it would indeed enable much better decoupling than the current OOP model - instead of object A telling object B "do this" (and thus having to know that object B exists and that it can do "this"), have object A emit an event "I just did this" and let object B, if it's interested, handle that.
There's still the problem of "now B has to know about A", but it can be solved with a common event bus and a series of messages defined in a different library - that way both A and B only know about the common library, they can be completely independent of each other.
I wrote something like this a few years ago - just added it to my GitHub at https://github.com/mdpopescu/public/tree/master/Snake - but I can't say I ever did something "serious" with it. It's one of the things I'd like to actually use in production, like CQRS/ES or Orleans :)
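A minimal sketch of that shared-bus idea in Ruby (EventBus and the event names are hypothetical):

    # shared library: the only thing both sides depend on
    class EventBus
      def initialize
        @handlers = Hash.new { |h, k| h[k] = [] }
      end

      def subscribe(event_name, &handler)
        @handlers[event_name] << handler
      end

      def publish(event_name, payload = nil)
        @handlers[event_name].each { |h| h.call(payload) }
      end
    end

    bus = EventBus.new

    # "object B" only knows the bus and the event name, not object A
    bus.subscribe(:order_placed) { |order| puts "shipping order #{order[:id]}" }

    # "object A" announces what it just did; it doesn't know who, if anyone, listens
    bus.publish(:order_placed, id: 42)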
There isn't any subscription with messages, which are sent to the object directly. There isn't any "event bus" or other complex structure. Messages are just a replacement for function calls (including accessor methods, which may be implied).
> decoupling
That's one of the primary goals. Traditionally function calls required coupling between the call and a single function or multiple functions with vtables or other polymorphism. With messages, the handling of what looks like a function call can be interpreted like an incoming event in event-driven programming.
> let object B, if it's interested, handle that
That's exactly right, but think of it as:
* Object A sends Object B an event (message)
* Object B can then handle that event in any way, such as:
- calling a function
- interpreting the event directly (i.e. all events handled the same)
- raising an exception
- ...whatever...
* Object B then returns the result, which becomes the "return value" that Object A sees. (this part is RPC-like)
> now B has to know about A
Not at all! I suspect you're thinking of this as a subscription model, which isn't correct.
One of the key benefits of messages is that objects never need to know about each other's type (in either direction). Ignoring types and simply sending messages to objects (regardless of their type) is called "duck typing"[1]. As long as an object responds in a useful way to the messages ["year", "month", "day"], it isn't important if the object is actually of type Date.
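For instance, in Ruby (the Timestamp class is made up for the example), anything that answers those three messages can stand in for a Date:

    require 'date'

    # not a Date, but it answers the same three messages
    class Timestamp
      def initialize(seconds)
        @t = Time.at(seconds)
      end

      def year;  @t.year;  end
      def month; @t.month; end
      def day;   @t.day;   end
    end

    def iso_date(obj)
      format("%04d-%02d-%02d", obj.year, obj.month, obj.day)
    end

    puts iso_date(Date.new(2016, 6, 10))   # a real Date works...
    puts iso_date(Timestamp.new(0))        # ...and so does anything that quacks like one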
I have to admit I still have basically no idea what he's talking about when he says 'messaging'. A practical example would be useful for us non-CS-degree programmers who don't speak any of the CS lingo.
One way has a main procedure that orchestrates itself and other code to work things out and do things. This is the way most people organize their problems into code.
The second is a set of independent intelligent concepts (objects) that work together to work things out and do things. The objects work together only by sending messages to each other. There is no central code orchestrating them into a solution.
If you want an example, consider two tennis players and a ball. How would your represent them hitting the ball to each other in code?
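One hedged way to sketch it (Ruby; all names are made up): there is no main loop driving the rally, each player just reacts to the message it receives and sends the next one:

    class Player
      def initialize(name)
        @name = name
        @hits = 0
      end

      attr_writer :opponent

      # receiving the ball is receiving a message; the player alone decides
      # what to do with it and which message to send next
      def receive(ball)
        @hits += 1
        if @hits > 3
          puts "#{@name} lets it go out"
        else
          puts "#{@name} hits the ball back"
          @opponent.receive(ball)
        end
      end
    end

    a = Player.new("Alice")
    b = Player.new("Bob")
    a.opponent = b
    b.opponent = a

    a.receive(:ball)   # the rally emerges from the exchange of messages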
For me there's a disconnect in the handling. So in a traditional approach, you write one function that takes in input and spits out an output (called, say, FUNCTION_A).
Then you have another function (FUNCTION_B) that maybe does some stuff and then calls FUNCTION_A with an internal variable and bam you're done.
In messaging, FUNCTION_B doesn't call FUNCTION_A. It just shouts out (or sends a message containing) something like "I need to turn my internal variable into a given output". Somewhere in the application will be a process that listens out for just that message and then does something in response to it.
So you have all these modules (or cells in some examples) that are shouting out things and then you have other cells (or a single brain) that take in all the messages that are being shouted and do stuff based on the content of the message. You could think of a stock market trading floor (with a whole bunch of people scrambling and shouting stuff, although obviously much more orderly in a program) or the cells in your body that send messages to your brain (like cells in your fingers sending "the human is putting his hand on a hot stove, tell him to take it off before us little cells die" and then your brain - and you - react).
The idea is to replace "calling a function" with "sending a message".
Traditionally a function call is fixed at compile time (with some exceptions such as function pointers). In C a function is just an address; if we want to call foo(), we must have that function available at address &foo. C++ added flexibility by allowing there to be multiple functions called foo(), with a set of vtables[1] that store the actual (C-style) address. Other variations exist, but all of these traditional styles map function calls to specific code that runs every time the function is called.
Messages change all of that. Instead of vtables (or similar)
obj = create_foo()
# instead of calling foo's bar() function directly, e.g.
foo_bar(obj)
# or perhaps
obj.bar()
... we send a message, named after the function we want to call, to the object itself
obj = create_foo()
obj.send_message("bar")
# if function args are needed, include them as an array
obj.send_message("baz", [42, "quux"])
The idea is that while the message can be effectively the same as a function call, it doesn't have to be. In a proper OO language, this message sending is handled automagically by the syntax.
# instead of handling the messages directly, e.g.
obj.send_message("bar")
# the language does that for you when you call
obj.bar()
# in some languages, these are equivalent
In many cases the "Foo" class above will handle the message "bar" by running the appropriate function, but that isn't required. For example, in Ruby, when no function exists for a given message, the raw message is sent to the #method_missing function.
class Foo
def method_missing(name, *args, &block)
puts "#{self.inspect} I was sent message #{name.inspect} with args #{args.inspect}"
end
end
>> obj = Foo.new
=> #<Foo>
>> obj.bar()
#<Foo> I was sent message :bar with args []
=> nil
>> obj.any_name_we_want()
#<Foo> I was sent message :any_name_we_want with args []
=> nil
>> obj.any_name_we_want("args", "are", "optional")
#<Foo> I was sent message :any_name_we_want with args ["args", "are", "optional"]
Thinking of "obj.method()" as a message instead of only a function is much more flexible.
Calling a method is "Command and Control" where "command" is about getting a thing to do something that you want, and "control" is about preventing a thing from doing something that you don't want. In any case, you're running the process.
Message passing is about negotiating with something that is already in process. It turns out that this is the key to building scalable¹ systems (for all the usual reasons: enforcing loose coupling, abstraction, decentralization, etc.)
Longer Answer:
1. The actual powerful thing about a general-purpose computer is that it can simulate anything, including a "better" general-purpose computer (think about what Universal Turing Machine means).
2. Recursion is about making the part as powerful as the whole
Putting those two together leads to the original insight behind OOP: Why not build systems out of (scaled-down) computers!
So, in de-jure OOP, objects are supposed to be computers. Sometimes they are general-purpose computers (i.e. they contain an interpreter for a "Turing Complete" programming language); oftentimes they are more limited, special-purpose computers (e.g. functions, procedures, programs, etc.). Crucially, the only way to interact with a computer/object is to send it input and receive output. It's completely up to the computer as to how to interpret the message (n.b. each object contains an interpreter). I like to think of OOP as being about scaling computer networks in both directions: scaling up gets you something like the Internet, scaling down can get you something like desktop publishing (I recall Alan Kay saying that desktop publishing was really just about getting rid of the borders between apps).
While, in theory, method calling and message passing are equivalent, the problem with method calling is that it tends to limit you to building systems out of mere data structures that just happen to have all of the functions/procedures conveniently "nearby". Data structures are good if you want to make a process, but lame when you need to deal with one.
¹Scaling to me means that, with respect to some metric, there is a point at which the difference between the addition of part_n and the later addition of part_(n+1) becomes negligible. A part can be lots of different things: e.g. a user (metric is performance), an edit to the codebase (metric is pain), a new compute node in a network (metric is cost), etc...
(For the mathematically inclined, I think scaling is about making sure that the sequence of steps for building a system is Cauchy.)
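Spelling that footnote out a little (my reading, not the poster's notation): write s_n for the system after its n-th part has been added, and d(·,·) for distance under the chosen metric (pain, cost, latency, ...). Then "the build sequence is Cauchy" is just

    \forall \varepsilon > 0 \;\; \exists N \;\; \forall m, n > N : \; d(s_m, s_n) < \varepsilon

i.e. past some point, adding further parts barely moves the system.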
Thanks! This is a great description of what I think Alan is trying to describe. The issue most people seem to be facing is that they are trying to map his ideas of messaging onto their current conceptions of what software should look like, instead of stepping back (way back) and saying - how would a bunch of Universal Turing Machines communicate? Software today is fixed functionality - once it's written it can only be changed with great pain. What if we made meaning and interpretation of messages 'late-bound'?
I think it could be different if the intention is that the message (or method body) is always executed in the context of the called object. E.g. in its own thread, which is used instead of the thread/call stack of the caller, and which is used exclusively even if the method is called from multiple threads - which would eliminate the need for synchronization.
However, as far as I understand, most implementations that talk about messaging (e.g. Objective-C) do exactly the same thing as plain method calls. The difference seems to be that there is more dynamism in the "messaging" (you can send any message to an object, or write a "method" that processes arbitrary messages) - but to me that sounds more like the difference between static and dynamic languages than like a completely different messaging concept instead of methods. I don't know Squeak and Smalltalk, so maybe I'm missing something here.
For dynamically typed languages, this distinction is less pronounced, but in general, the caller assumes less about the callee (i.e. you can send any message to any object).
It also allows transparent routing/delegating and, in some cases (void result type), transparent multiplexing.
That makes me wonder, though: if I send a message to the same object from two different senders, will the first message affect the outcome of the second? If not, then there is plainly little difference between the two, as either action gets a new pristine instance. All in all, this seems to be a whole lot of syntactical hair splitting.
Checking some links to Kay's responses elsewhere, I get the impression that unless we basically toss the notion of a program as a single compiled binary, and replace it with some kind of abstract notion of work that can happen on a single computer, or across the net as a whole, the distinction between a message and a method is academic at best.
Because for message as a concept to make sense, it has to be seen as someone standing on a rooftop shouting "can someone please hit that nail?!", and then wait around until someone shouts back "done!".
Without that you just end up with a carpenter talking to himself "hit nail, done, hit nail, done, hit nail, done".
> Why didn't he call it Message Oriented Programming then?
He eventually said much the same thing (many people remember him as using your exact phrase, but I'm less sure about that):
> [Kay:] I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea. The big idea is "messaging"...
Somewhere I read him saying that a key moment was when he was looking at (IIRC) Sussman's highly optimized Lisp dispatch of continuations, and realized that it was essentially the same thing as what Kay had in mind for Smalltalk. Something roughly like that.
It may have been the first "X is a poor man's Y" observation.
Calling a method on an object is a kind of message passing - please take these arguments, do your thing with them and send me back whatever return value you calculated. This really is just a synchronous exchange of two messages between the caller and the callee (plus you also lend your processor to the callee to do its work, because while you are waiting for the response you don't need it anyway).
Does anyone know what Alan means by "assignments are a metalevel change from functions, and therefore should not be dealt with at the same level"? In what way are assignments a metalevel change from functions?
I believe he means that variables and their assignments are at a lower level. Object interfaces should be defined in terms of the messages they can send and receive. No exposed variables: variables should always be private/protected and are just used by the objects to maintain their internal state.
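A small Ruby illustration of that reading (Counter is hypothetical): state only ever changes inside the object, in response to messages, never by assigning to its variables from outside:

    class Counter
      def initialize
        @count = 0        # internal state, never exposed as a variable
      end

      def increment       # state changes only via messages...
        @count += 1
        self
      end

      def count           # ...and even reads are messages, not variable access
        @count
      end
    end

    c = Counter.new
    c.increment.increment
    puts c.count          # => 2
    # c.@count = 5 is a syntax error; assignment from outside simply isn't expressible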
I'm thinking the closest mainstream thing to real messages is probably event handling. Events are nearly always represented as objects of some sort (rather than something simpler like a function call) and event routing is the heart of any UI framework.
So maybe the question is whether we should be using events (and streams of events) more?
But this doesn't seem like a neglected area - publish/subscribe is pretty common and there's plenty of recent discussion of reactive systems and event-handling architectures.
It is much broader than events. Anything where you sit and wait for some piece of data to happen and trigger code can be viewed as receiving messages. An object that sits in memory and has its methods called is receiving messages. A piece of code that calls methods on objects is sending messages. A variable containing data is a message at rest, and once passed to a method becomes a message in transit. If you then start thinking about things like generators, you see that this is just another way of sending messages. You can take any program and view it entirely as a collection of messages, with the objects and procedures being the glue between the messages.
The issue that languages have is that the syntax obscures this fact instead of highlighting it. There are many different ways of sending a message between objects, and none of them are labeled as such. But essentially, there are just two different models that matter: synchronous send + receive (receive the replied message, the method's return value, before moving on), and asynchronous send + receive (where the receive remembers the context of what was sent that is being replied to). A really simple OO language would just implement those two operations between objects, and would need nothing else.
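A rough sketch of just those two primitives in Ruby (send_sync / send_async are hypothetical names), to make the distinction concrete:

    # synchronous send + receive: block until the reply comes back
    def send_sync(obj, message, *args)
      obj.public_send(message, *args)        # the return value is the reply
    end

    # asynchronous send + receive: remember the context and handle the reply later
    def send_async(obj, message, *args, &on_reply)
      Thread.new do
        reply = obj.public_send(message, *args)
        on_reply.call(reply)                 # the block is the remembered context
      end
    end

    p send_sync([3, 1, 2], :sort)                      # => [1, 2, 3]
    send_async([3, 1, 2], :max) { |r| puts "got #{r}" }.join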
It is generally considered suited to a different class of problems. I think Kay's point is that decoupling the messages from an object's internal state is a smart approach to system design. Many of the other "features" of OOP get in the way of this design principle.
Isn't a lack of direct hardware/ISA support for message passing becoming apparent in today's world of multicore architectures? The Disruptor library used by HFT shops and much of Erlang seem to suggest a need for this.
Generally speaking anything that's essentially pointer chasing can't be moved to hardware, because they'd make it with microcode and that's just hard-to-update software.
This sounds a little bit like microservices, which are at the very least a heavy-handed way to enforce separation of concerns. (I know they can't possibly be what Alan Kay was talking about in 1998.)
I've used Smalltalk, I've used Java. They are not that different in terms of the object mechanism. The only difference one could imagine is that you can treat messages as data through doesNotUnderstand in Smalltalk (the analog of method_missing in Ruby). That is by far not the predominant use case, though.
Otherwise, what makes method calls "messages" in Smalltalk, but not in Java?
This mailing list entry has always sounded like bogus philosophistry trying to defend an imaginary high ground.