On Learning Smalltalk (robsayers.com)
152 points by xkriva11 on Jan 11, 2022 | 117 comments



I learned Smalltalk in the Elder Days (1985). Coming from C and Pascal, the power and majesty of true collections—dynamic lists! that you didn't have to build yourself with pointers! sets! dictionaries! iteration over objects not integers! selection, rejection, injection!—was astounding. The ability to define behaviors in a generic not intensely specific way, the ability to meta-program, the ability to twiddle your environment directly—just so many world-changing innovations. Thankfully most of these advances are now baked into Ruby, Python, JavaScript, etc. (either right into the language, or with ready-to-hand libraries like itertools, more-itertools, and Underscore.js).
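For anyone who hasn't seen them, those collection messages look roughly like this in modern Squeak/Pharo syntax (a minimal sketch with made-up data):

    | scores passing |
    scores := OrderedCollection withAll: #(72 45 88 91 60).
    passing := scores select: [:each | each >= 60].        "keep the matches"
    scores reject: [:each | each >= 60].                   "drop them instead => (45)"
    passing inject: 0 into: [:sum :each | sum + each].     "fold to a single value => 311"
    passing do: [:each | Transcript show: each printString; cr].   "iterate over objects, no index bookkeeping"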

The image—all your code, data/state, your entire OS and windowing system, your IDE, all rolled up into one—that was an interesting approach but honestly that is the worst thing about Smalltalk. No modularity. No version control. Your image persists your data and changes forever—including any mistakes you make, knocking around inside a big ball of ever-declining clarity and reliability. Over time every image collects entropy, mistakes, and bitrot like no one's business! Eventually it gets to be impossible to figure out how to fix it, or takes so much energy to maintain you won't bother. Then you have to chuck it and start afresh. Rinse and repeat! Give me today's alternative, non-persistent dev environments and version control, any day.


I learned Smalltalk in 1994, which was well past the Elder Days. I came from a strong C++ background. I had the same experience you had in that Smalltalk showed a completely different way of doing things. Unfortunately we were hampered by garbage collection. We tried to tune and optimize as much as the environment would allow to no avail - when it came time to garbage collect the system would freeze for several seconds, sometimes up to tens of seconds. That was completely unacceptable for commercial software. Marketing also wasn't thrilled the UI came bundled with the environment. It was rather ironic that Java would come out the next year and the entire industry would pour so many resources into solving the garbage collection problem. Nowadays on a multi-processor system with concurrent garbage collection enabled the garbage collection can run faster than on manually-managed memory! That sure didn't seem like it would ever be possible in 1994!


> [a] system with concurrent garbage collection enabled the garbage collection can run faster than on manually-managed memory

Maybe. But not predictably. It will have delays; they may be small, but there is no way to know when they will come. This can be managed, but it is all overhead. Manual memory management has none of that run-time overhead (but a tonne of programmer overhead).


Manual memory management is much more predictable (which is really nice for interactive/real-time programs), but it does have nontrivial runtime overhead: malloc and free aren't free (pun slightly intended), and are often costlier than the bump allocator most moving/copying GCs use (just increment a pointer, with a very branch-predictable conditional to check for OOM). Also, moving/copying GCs increase data locality by bringing live objects closer together, which is more cache friendly.

I agree that manual memory management is much more predictable than garbage collection, I just wanted to touch on the nuances of the memory overhead of manual memory management.

Also, if the program creates lots of short-lived objects (which is often the case), a generational garbage collector can be pretty fast because it will reclaim/move only a few live objects rather than deallocating many dead ones. An arena allocator or stack allocation (both manual techniques) might be better in this scenario, though.


> It will have delays, they may be small, but there is know way to know when they will come.

It was my understanding that C4 doesn't do pauses.


On the face of it, I do not believe it. "Mostly doesn't pause" is believable. Great strides have been made.

But what is it? Where can I find out about it? A deterministic garbage collector would (a) prove me wrong and (b) be a Very Good Thing


I taught myself Smalltalk in 1988 coming from Modula-2.

It's simply anachronistic to judge Smalltalk implementations of that past decade against version control of this decade, when there have been newer Smalltalk implementations with version control.

Even back in 1983 the blue book discussed "Storing the set of Changes on a File" (page 465) and "Commands that check for conflicts" ( http://rmod-files.lille.inria.fr/FreeBooks/TheInteractivePro... page 472).

"Whether your development team consists of one person, or many: Do understand and use the change manager. The change manager is one of the most important tools for software development in the Smalltalk-80 environment. … It allows you to selectively browse [changes], remove [changes] incorporate them into another version of the system, check for conflicts, and prepare the changes for release to other members of the development team or to end users." (page 500 "Smalltalk-80 Software Development Do's and Dont's")

> Your image persists your data and changes forever—including any mistakes you make…

Even back in 1988, source code changes made with the Smalltalk IDE were automatically logged to a text file (the change log) and we could choose which of the logged source code changes to apply later to an unchanged image file.

So no, mistakes were not forever.


I don't know about that. For most of the VisualWorks programmers I was tasked with helping in the 1990s, the change log was an error prone way of sharing changes amongst your team members - this was before ParcPlace implemented packages properly and whatnot.

All of it was pretty useless compared to using Envy/Developer which was a unique version control system that was based entirely around Smalltalk and collected up your changes in a way that made sense for sharing.

The guys at ParcPlace never seemed to quite understand what we were facing back then - your average departmental programmer trying to replace a green screen in a team of 20 other programmers not only had to cope with a complete change of paradigm for the programming language, but the version control systems were non-existent or useless even compared with simple systems like SCCS. Change logs just didn't cut it when you got past 2 developers.

The vanilla VisualWorks simply didn't scale to multiple users trying to share code. Envy/Developer fixed that (and how! Still the best version control system I have ever used).

The "forever" thing might be an exaggeration, but in practical terms I had a lot of guys destroy images accidentally by over-ridding something in Object or whatever while they were learning and causing havoc for themselves. Trying to piece together their good work from the change log was...effectively impossible until they learned what they were doing due to the total scattergun approach a lot of them used. So perhaps not "forever", but given the time constraints in a normal development environment, it was forever for certain values of forever.


> … didn't scale to multiple users trying to share code…

Just curious, what process did you give them to get past their "scattergun approach"?

I never looked at the 1984 "Smalltalk-80 The Interactive Programming Environment" book until about a year ago, so very much with the benefit of hindsight —

"At the outset of a project involving two or more programmers: Do assign a member of the team to be the version manager. … The responsibilities of the version manager consist of collecting and cataloging code files submitted by all members of the team, periodically building a new system image incorporating all submitted code files, and releasing the image for use by the team. The version manager stores the current release and all code files for that release in a central place, allowing team members read access, and disallowing write access for anyone except the version manager." (page 500)

http://rmod-files.lille.inria.fr/FreeBooks/TheInteractivePro...


Initially we followed that model: we would have a designated integrator who incorporated changes when you hit a milestone, and ways of filing that into other developers' images as they went, but it was very error prone and a lot of regressions would pop up for the integrator as they struggled to keep up.

Envy fixed that; you could start with a fresh Envy image and grab your updates. You could also isolate developers in their own branch, like git, and get proper warnings about integration issues when merging.

Basically the integration job went from full time to a couple of hours a week.


Thanks. FWIW, it seems to me that what you describe is different.

> … filing that into other developers images as they went…

Seems to me that after "building a new system image" all their developers worked with that same fresh system image (built from known archived sources).

(Perhaps I'm wrong about that.)


It was a bit of a combination. We tried using pre-built images, but managing them got out of hand pretty quickly and there was no easy way to diff images, so whoever did it had to be careful.

Then we settled on pre-built images with just the cross-team shared code. But that got difficult too, as some teams didn't want to upgrade the shared code base as quickly as others, so we were maintaining several versions of that.

The teams themselves had regular image consolidations saved, plus file-outs and some PVCS for those, but it got increasingly hard to make a consistent merged image no matter what, and devs were spinning their wheels in a kind of method-level code merge hell that is unique to Smalltalk.

Then we got envy and it fixed everything.

ParcPlace did eventually come up with a different structure for packaging and sharing code but it was too late and too simplistic to be useful.


> … code merge hell at the method level … envy and it fixed everything.

Puzzled. ENVY/Developer is still Smalltalk and code merge still method level?

Perhaps what was shared within teams and between teams changed?


Yeah, there were a lot of side channels for sharing code that made merging things more difficult than they could have been. Poor discipline when devs were in a mad rush.


Seems to me that "sharing changes amongst your team members" is much more about process than it is about tools; and the process (whatever it is) must be followed by everyone every-time.

(With the benefit of hindsight) seems to me that many teams made their mistakes with vanilla Smalltalk and avoided those same mistakes when they moved to ENVY/Developer.


Envy enforced discipline in a way, which helped a lot. Sure, the devs could have had that discipline with filed-out classes, but because Envy made sharing code easier, they inherited that process as well.


> … effectively impossible until they learned what they were doing due to the total scattergun approach a lot of them used.

You tell us the people didn't know what they were doing — so why blame the tools?

(Incidentally, I used Envy/Developer for years.)


I blame the tools because they were inadequate at the time and inferior to even rudimentary things like SCCS.

Not every programmer is a superstar. Visual Basic was a much better environment for average guys doing green screen replacement jobs.


> … replace a green screen in a team of 20 other programmers…

I was briefly involved with a project that sounds a bit like that — a 2 week training course for current staff, then rewrite everything from data model to UX in half the usual time because someone said OO is more productive.

That project used ENVY/Developer. The staff admin deleted the repo. The staff wouldn't have passed a phone screen for Smalltalk work (maybe some would have in 6 or 12 months). When the people don't know the basic tools, that's what matters.


It does help if they know the tools. The main struggle we had wasn't that they weren't smart; most of them were fantastic domain experts who just didn't have that inquisitive tech spark that sets different breeds of IT people apart. There is nothing wrong with being a domain expert who isn't that keen on cutting-edge tools, but back then giving them Smalltalk was a mistake. VB and PowerBuilder were theoretically inferior, but far more productive for guys like that.


This is like saying that using an OS with long term storage is a problem because it accumulates incorrect files, mistakes, and other long term choices that will hamper your productivity. It may be true, but it is not necessarily the case. You can use images in the way you describe, but you can also use them as a support technology that is available only when useful for development. Modern development in Smalltalk uses source control like any other language.


> an OS with long term storage is a problem because it accumulates incorrect files, mistakes, and other long term choices that will hamper your productivity

This is why containers became so successful, IMO. For a long time I thought being able to log in to a server and set up a new cron job to deal with some maintenance task was a strength. It turns out that over time it does become a problem.


I don't know about the current state of Smalltalk, but back in the late 90s there was a very elegant version control system for Smalltalk called Envy. It provided version control at the level of each method, and it was integrated nicely in the IDE.


I use Envy on VA Smalltalk professionally. Coming from Mercurial and SVN, when I first encountered it, I thought "WTF".

Now I regard it as the greatest version control tool I have ever used. (It is also a package manager and a build tool.) Version control at the method level!


I would second that. Incredible control aimed at the strengths of Smalltalk. I'm not sure the way it worked would ever really translate to other languages but I really miss it sometimes. It was very expensive though.



Any persistent storage will exhibit the cruft problem over time for any system of even limited complexity. You're describing people, not Smalltalk. And Smalltalk has had version control for more than a decade.

Smalltalk always felt like it was from the future to me. Its vision and respect for human thought and creativity is so grand it makes me blush. It represents a real belief in technology and what people might do with it. It's full of hope and humanity. It's basically the Shire of programming environments -- if the Shire were part of the Stanford campus.

If you haven't checked Smalltalk out, you should free your mind and give it a shot.


Most of the Smalltalk implementations I know support version control via Git or some other mechanism as a means of code distribution. And as I understand it, production code is built with CI/CD pipelines using a base image and checking out the code that's the app from a repo.


That's correct, while using an image for development is possible, it is not the preferred way to operate. Any industrial application in ST will use version controlled files, like in other languages.


> Over time every image collects entropy, mistakes, and bitrot like no one's business! Eventually it gets to be impossible to figure out how to fix it, or takes so much energy to maintain you won't bother.

That's an interesting perspective, thank you for sharing.

Smalltalk was trying hard to be different, and there were a lot of neat ideas in there, but while it seems like a very powerful concept to have your OS/editor and language all rolled up into one, that doesn't mean it's a good idea. Kind of like LISP macros. Seems like such a powerful and useful concept, but on the other hand, if you give programmers too much power, they are guaranteed to misuse it.


> That's an interesting perspective, thank you for sharing.

One person's perspective; here's another:

https://news.ycombinator.com/item?id=29895839


I will note that image-based deployments seem to be what the ML world has moved to.

Such that I'm curious whether moving models around for use would have been easier in some image-based languages.


That seems to be the common complaint about image-based systems like Smalltalk and (some versions of) Common Lisp: the difficulty of collaborating, auditing, deploying, doing continuous integration, and rolling back to specific versions of the code.

Seems like it makes operational tasks, and really anything that involves distributing the code to anyone who is not the developer, more difficult than with non-image-based languages.


There have been several different version control systems available for most Smalltalks for a while now. In the big open source Smalltalks (Squeak, Pharo, etc) there is even Git integration.


Can't we just deploy the Smalltalk or Common Lisp code and not make changes to the live image? Then we can deploy Smalltalk/Common Lisp projects just like we deploy Python/Ruby projects. Can't we?


For Common Lisp: sure we can. We can run a program from the sources, as a script, just like Python, and we can build a binary, with all dependencies baked in. CL has the feature to save the current image: it's excellent for development (see how Ravenpack ships images with several GBs of data baked in: the image will start instantly, you won't have to wait for your development data to be initialized again).

And we can run the Lisp program and not touch it. But… if we want, we can easily connect to the running Lisp image, introspect it, change a couple parameters… or do a software upgrade, which can involve installing Lisp libraries… and we could connect to the image on the remote server from our editor and develop the app (while still saving the changes in source control locally), but that would be silly and it isn't a standard way.


Yeah, just don't save the image. Whenever you restart the program use a fresh image. This is pretty much indistinguishable from a "normal" program's state when it is running.


Here are some libraries and a program in Smalltalk Interchange Format being loaded into a Smalltalk image, the image being saved, and the saved image being used to run the program on the command line —

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

    ./bin/visual nbody.vw_run.im -nogui -pcl MatriX -filein nbody.vw -doit 'ObjectMemory snapshotThenQuit'
    ...
    ...
    ./bin/visual nbody.vw_run.im -nogui -evaluate "BenchmarksGame do: 50000000"
    ...
    ...
    -0.169075164
    -0.169059907


What CL versions could they be? Even in versions with their own, very interactive IDE (LispWorks), we work with source files, with git and all, and we can run a program either from the sources as a script or from a pre-built binary (which is built from clean sources, not from our development image).


> … common complaint…

For sure! Is it a well-founded complaint or just an obvious difference?

What if the Smalltalk image is a cache not an archive?

What if source code is exported from the image as text files and archived, and each week the source code is loaded into a base image for test?

https://news.ycombinator.com/item?id=29911384


Pharo has all this solved quite well.


Can you elaborate?


Pharo uses a libgit2 binding and adds nice tools on top of it. The development of Pharo itself is based on Git/GitHub. So it uses the standard Git workflow with branches, pull requests, CI building and testing etc., but because it has method-based granularity, it is even better than plain Git for operations like merging and cherry-picking (it can, for example, easily resolve conflicts that plaintext-based Git cannot). Moreover, it allows things like listing the history of a particular method. Even CI building can benefit from the still-present image capabilities, because you can have a basic prebuilt image with project dependencies and then just load what you are really interested in in a few seconds (while still having the option to bootstrap it completely).


That sounds pretty cool.


I once interviewed James Gosling and one of the questions was precisely that - what did Smalltalk do wrong and what did Java do right for one to fail and the other to dominate the market?

The “doesn’t play well with others” problem of Smalltalk was kind of a killer.


Might he possibly have had an agenda to promote? :-)


At that point I doubt it. I was expecting that answer in fact. Smalltalk had a couple problems Java didn’t. Its syntax scared people used to C while Java made them feel comfortable. The tooling was also free, which was not the case for Smalltalk at the time. Finally, the idea of storing source code inside a memory image didn’t help with interoperability (something I also observed while working with Zope).


> … storing source code inside a memory image didn’t help with interoperability…

Here's some "play well with others” "interoperability" —

Sept 21, 1987 — "A version of the Smalltalk-80 object-oriented development environment … has been announced by Parcplace Systems.

Smalltalk-80 DE version 2.2 is said to provide Unix environments with interprocess communication via sockets and access to Unix C shells. On the Apple Macintosh, it provides access to Desk Accessories and provides the ability to cut and paste between the Smalltalk-80 system and the clipboard."

https://books.google.com/books?id=ldk7z4Q-WWYC&pg=RA4-PA4&lp...


"The Rise and Fall of Commercial Smalltalk" discussions provide a fuller picture:

https://news.ycombinator.com/item?id=29223880

> … storing source code inside a memory image didn’t help with interoperability…

"Interoperability" covers a lot of things, was there something particular you had in mind?


In this case it was about being adaptable to the tools programmers already used. Smalltalk required one to buy into the whole thing - it’s an IDE with its own OS and GUI.


What if those already-used tools weren't as good for writing Smalltalk programs?

----

> … its own OS and GUI.

Well, there are examples of bare metal Smalltalk (I'm guessing we could say the same of Java?), such as https://github.com/michaelengel/crosstalk, but that was kind of unusual and AFAIK commercial Smalltalk implementations targeted specific host OSes.

----

Similarly back in the '80s commercial Smalltalk implementations provided their own GUI L&F (and in the '90s Java provided their CrossPlatformLookAndFeel aka Metal).

When OS GUIs became available in the '80s, Smalltalk implementations either targeted specific host OS GUIs (so on MS Win use MS Win controls) or provided portable GUIs based on emulated L&Fs (OS/2, Motif, MacOS, MSWin) (and in the '90s Java provided AWT/Swing…).

IBM Smalltalk (Visual Age) GUIs were based on what would become in the '90s Java SWT.

----

So no, not really "its own OS and GUI" — when GUIs became provided by the host OS, Smalltalk implementations provided ways to use the platform L&F.


I agree. While there are many things to admire about Smalltalk (e.g. clean syntax, real Object Orientation, the feeling of a high-level language), the image idea is certainly my least favorite.

I am also not completely convinced that a bundled IDE is the best solution. On the one hand, it allows powerful operations, but on the other hand, every feature has to be developed explicitly for that language.

Instead, I prefer the modularity of Unix tools which can be used to develop tools by combining multiple technologies.


Text-based tools that can be called from the command line and piped to each other and to other Linux commands are best for me.


https://github.com/tonyg/squeaker

"Like Docker, but for Smalltalk images."


I did smalltalk professionally as my second job out of college, 1995-1997. It was cool learning it, and it definitely gets you into OOP. It also laid the groundwork for realizing that OOP isn't a great paradigm for software development. Plus it was really slow and clunky, in the end. For the time, the IDE (VisualAge and the other one I can't remember) was cool, although git crushes the model they used back then.

Overall, I'm glad I learned it, much like TCL and Lisp, but I'm definitely glad I don't use it professionally.

The languages basically take their premise to the extreme, where the premise for each is "What if everything, including the language itself, was ..."

TCL: ...a string?

Lisp: ...a list?

Smalltalk: ...an object?

I don't know enough about it, but I think with Haskell the suffix would be "...a function?"

Neat experiments, neat learning them, but I appreciate other languages more for actually implementing large projects.


I feel like Smalltalk demonstrated a better version of OOP than what you get in other languages that market themselves as object oriented. Firstly that it's consistent with itself, but mainly because it's incredibly awkward to do OOP in the Gang of Four/Clean Code/DDD style that you'd typically see in, say, a Java project.

You're essentially building up a catalogue of messages and answers and so it's not particularly ergonomic to jump through several layers of abstraction in the browser to get what you need. And there's no arbitrary limitation on your code such that a class can only have 10 methods in it, or 100 lines, or some other silly thing that forces you into creating a more convoluted architecture with more indirection.
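To make that concrete, a Smalltalk class really is just a small catalogue of messages and the answers they return. A rough Pharo-style sketch with a hypothetical class (methods shown in the usual Class >> selector book convention):

    Object subclass: #Account
        instanceVariableNames: 'balance'
        classVariableNames: ''
        package: 'Example'.

    Account >> initialize
        balance := 0

    Account >> deposit: anAmount
        balance := balance + anAmount

    Account >> balance
        ^ balance

    "Account new deposit: 100; balance.   => 100"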

I've grown weary of OOP for a long time because of how hard it is to find an effective application of it that doesn't grow into a collection of explicit design patterns with some business logic buried in the depths of the hierarchy. My brief experience with Smalltalk essentially revived my interest in it and encouraged me to be more thoughtful about my approach without throwing it all out and pining for a purer, more functional language that trades OOP's big problems with a set of its own.


> And there's no arbitrary limitation on your code such that a class can only have 10 methods in it, or 100 lines, or some other silly thing that forces you into creating a more convoluted architecture with more indirection.

I've never heard of such a restriction, even in very early programming languages. What language are you thinking of where you're limited to a specific number of methods or lines per class?


They're making a hyperbolic allusion to texts like Clean Code which promote short methods and relatively small source files (recommendations usually in the low 100s of SLOC). Not to actual language restrictions.


It's not particularly hyperbolic when using, say, RuboCop for Ruby. Those are pretty much the default settings (methods no longer than 5 or 10 lines, classes no longer than 100, etc.). You can change the defaults of course, but there is one class of software engineer which will treat those as gospel.


Things that are good from OOP/Smalltalk:

* Interfaces

* Messages

* Some combining of data and functions

* Some polymorphism

Not so good:

* Inheritance

* Open/closed

* Most Encapsulation

* Most everything else ;)


For Haskell, maybe it's:

"What if everything was immutable?"

Which I guess doesn't really include the language itself.

But it seems more fundamental to Haskell than first class functions, which exist in many other popular languages.


Not to misinterpret this comment: Lisp of course has other data structures, hash tables and all. The source code, though, can be seen as a list of lists, and it can be manipulated as-is in macros (but you can use an AST too).


Smalltalk shows a very opinionated type of OOP, which has little to do with the commonly accepted form of OOP done in C++ and Java. And I think it is “brave” to claim that the most commonly used and proven paradigm that does scale with complexity is not great for software development.


I worked with Smalltalk for a number of years, and was even present at discussion during the ESUG (European Smalltalk User Group) meetup in 2008 that was the very birthplace of Pharo, though admittedly was a little out of my depth with some of the technicalities (and politics) that caused the rift from Squeak.

I loved working with Smalltalk as a language, and developing inside the fully integrated VM. It's my fondest development experience to date.

I really didn't love deploying Smalltalk applications, and even collaborating was more difficult than with file-based languages/systems. Deployments (at the time) were a practice of shipping the image to the servers that needed to run it. There literally is no (separate) "runtime"; it's all in the image. This thing got bulky real quick.

I haven't touched Smalltalk in a while, so maybe (hopefully?) things have improved and deployments are no longer a problem.


I played around with Squeak over Christmas (basically trying to implement a ray tracer, although I started to lose interest after I rolled my own Matrix implementation with all the transforms).

As far as working with the code, debugging, reading and writing were concerned, it was great. It really gave me a taste for thoughtful OOP, simple documentation, and even TDD-style development again, because the whole environment just worked perfectly for it in a way that would be hard to reproduce in a traditional language where you're organising things on the file system.

Dealing with version control was a little more awkward (getting Git in place), and I'm glad I didn't have to worry about deployment or any kind of maintenance. Possibly GNU Smalltalk would be an ideal middle-ground there.

It's also one of the few occasions where I've not really had to rely on the internet so much to get help when I need it. It's been enough to have a book on Smalltalk and to rely on the documentation in the IDE.


In the Squeak/Pharo world, a built in tool called Iceberg handles VCS integration and is pretty slick. You can also easily load packages from external sources. In practice most packages are installed from their GitHub repo.

Deployments can start with a stock image, load your code and run, without you having to move around an actual binary blob from machine to machine.
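For example, pulling a project from its GitHub repo into a stock Pharo image is usually a one-liner along these lines (project name and repo path are made up, and a Metacello baseline is assumed):

    "Evaluate in a Playground of a stock image"
    Metacello new
        baseline: 'MyProject';
        repository: 'github://example-user/MyProject:main/src';
        load.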


My first Smalltalk was Smalltalk/V.

I fail to see how shipping a tree-shaken version of the image was any different than shipping a C++ application with MSVCRT.dll, for example.

As for source control, while it was indeed a problem with workarounds like text export/import, image-based SCMs were eventually developed.


I've played in Smalltalk systems over the years, but not used it for anything serious.

On one hand I like the idea of an image, being inspectable, adjustable, etc. Yet on the other hand I'm also a fan of stateless, pure-functional approaches where possible (e.g. Haskell, Nix, etc.).

I get a similar feeling from Emacs: it's really useful to inspect, override, debug, etc. the underlying Lisp; yet I prefer to keep my config as declarative as possible (e.g. via use-package). In fact, I rebooted recently and found that ANSI colour codes had stopped working in my Emacs terminals: turns out I'd forgotten to update my xterm-color config when switching from shell-mode to shx-mode several weeks ago!


You can manage the Smalltalk image the same way you do with Emacs by starting with a base image that loads only the extra packages you want.


Love these titles that are ambiguous enough to be either something interesting for the whole population (how to learn small talk) or something incredibly specific for one subset of the programmer population.


I think it's the difference between Smalltalk (one word) and small talk (two words).


smalltalk and lisp are both secret weapons; understand and appreciate them and even though you may not use them in any commercial or professional setting, knowledge of them makes becoming familiar with other languages much easier.


Not only that, learn about Mesa, XDE and Mesa/Cedar, and then discover how Smalltalk and Interlisp-D influenced what even modern IDEs look like.


Correct.


What is the current state of multi-core and multi-threading in Smalltalk? I remember Squeak wouldn’t use more than one core and one OS process and would implement threads in itself, but that was a very long while ago.


The 2008 RoarVM project allowed Squeak to run on a 56-core Tilera chip and was later ported to run on multiprocessor x86 machines:

https://stefan-marr.de/renaissance/

https://github.com/smarr/RoarVM

This was a research project so performance on a single core was poor compared to the official Squeak virtual machine, but it was an interesting exploration of the natural fit between objects/message passing and multiple cores.


Thanks, Jecel. Good to see you around.

It is indeed a natural fit to many core architectures. I frequently mused about giving every message process its own thread. I’ll read the papers you mentioned.


This is heartening. It was hugely influential on the Xerox Star and by extension, Lisa and Mac, even though we didn't actually write in it. It showed you a different way of thinking about things.

But for anyone saying they learned Smalltalk in the Elder Days (1980s):

No, you didn't.

In [1] I discuss learning it in 1978, and there was an icon prototype written in Smalltalk that same year.

[1] https://www.albertcory.io


Maybe - one day - somebody will make the image based virtual machine for Java... VisualAge for Java was an attempt in that direction, but the image part was abandoned later in Eclipse...


I think that the magic ingredient is not the image itself. Imagine that you have a language where every method/function is in its own file, and most of them are very short (two or three lines). Then the text-editing functionality of the IDE starts to be really unimportant: something on the level of Notepad is good enough. It makes compilation super fast; the IDE instead needs to invest in exploring the relations between these small code fragments (which have no defined order) and their organization, and code changes need to be more than just text-file editing (Smalltalk pioneered refactorings). This is one thing that makes Smalltalk so interesting (and what the mainstream does not get).


iirc Bill Opdyke's refactorings were for C++

https://ieeexplore.ieee.org/document/7274256


I like the idea of a live image, and have watched some Smalltalkers being amazingly productive, but I hope the next iteration of image-based IDEs thinks about some advances such as multi-images, round trips to text, and better scripting around the creation, destruction, and interaction of multiple VMs. Having checkpoints would be a nice touch. Image deltas and state copying would be nice.


The Smalltalk VCSes do all that, AFAIK, even as far back as Envy


Take a look at TruffleSqueak on the Graal VM.


Actually that is kind of the idea behind Eclipse's workspace concept.

It wasn't abandoned, rather rebuilt on top of a virtual file system.


Smalltalk was my introduction to OO in the 90s. It is the perfect introduction in my opinion, but as the author says, it might spoil you for anything else. If I remember right, the whole image thing took a bit of care, with odd forgotten stuff ending up in production. Someone may correct me.


The problem I have with Smalltalk (and I love the idea!) is that when I want to save things in Git, or when I want to deploy something minimal, things are so glued with each other that I find difficult to figure out what belongs to me.

There are also other problems:

- a desktop app cannot look like a regular one, at least in Pharo.

- I am not sure how to glue in native code, so when doing graphics stuff (this is what I really want to do with Pharo because of its interactivity) I did not know how to consume APIs for drawing.

I do not have a lot of training admittedly. The interactivity looks super, super cool! But also you have to change a lot of habits.


Pharo is bootstrapped, and its building process starts with a really minimal image (even the compiler is not included). It is possible to build custom images of any size. You need to take care of packages and dependencies, of course.

Pharo can make applications with a native look & feel - https://github.com/pharo-spec/Spec-Gtk.

Current Pharo has a very solid and powerful FFI, so using C libraries is easy.
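For a flavour of it, a uFFI binding to a plain C function is roughly this shape (a sketch from memory of the Pharo uFFI documentation; details vary between Pharo versions):

    "A method on some binding class; the class is also expected to
     define #ffiLibrary to answer the library to link against (e.g. LibC)."
    ticksSinceStart
        "Answer the result of the C library's clock() function"
        ^ self ffiCall: #( uint clock () )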


Pharo 10 will use a native GUI backend.

Also, git is well-integrated.


Yes, I saw git is integrated. The problem is that I am not sure how to use it. Any docs? Probably there were but my visit was some time ago.

It would rock to have a native GUI. Also, if I could learn how to draw graphics... that would be very nice, because having an interactive environment for graphics is the most awesome stuff that Pharo can offer (me).


There is a booklet about Git integration: http://books.pharo.org/booklet-ManageCode/pdf/2019-03-24-Man...

For 2D graphics, Pharo uses a Cairo binding named Athens (and Roassal as another convenient layer above that).

For 3D, you may be interested in Godotalk (https://www.youtube.com/watch?v=ncmERef0EFo, https://pharoweekly.wordpress.com/2021/02/03/godotalk/)


I started working on a raylib binding for Pharo using Pharo's FFI... I have to say Pharo's FFI system is very easy to use. It works and I got a few demos working, but I haven't put in the function calls for most of raylib. https://github.com/Zenchess/pharoRaylib

There's also native Pharo Smalltalk stuff like Athens, Woden, and maybe others... Personally I have moved to using Dolphin Smalltalk and raylib bindings there, as it's been more performant and I prefer Dolphin over Pharo. Dolphin also has good external interfacing support and is more of a native Windows OS Smalltalk distro. If you're looking to build native-looking Windows applications, Dolphin is probably your best bet on Windows at least, and it's free and open source.


> Pharo 10 will use a native GUI backend.

Will this mean HIDPI support too?


Yes please...


What do some of you smalltalksters do with it?

I've had a daydream that some sort of data Swiss-army-knife/visualizer toolkit would be a fun project, but it lacks the shareability of notebooks. Are there other operational sorts of things you use Smalltalk for? I have a set of scripts and tools I use daily for work (things that slice and dice our data, perform various operational tasks against AWS, etc.) that I could see cooking into some sort of Smalltalk system, but I haven't done it.




> It feels like you build the environment to match the problem, not figure out how to describe your problem in the terms of your language.

Yes. Correct. That was the intention of the language design. It is described in detail by Dan Ingalls: https://www.cs.virginia.edu/~evans/cs655/readings/smalltalk....


Smalltalk has the unrivaled ability to explore and change your environment graphically, and for the changes to persist. Only the web with the developer console comes close, and the changes aren't persistent.

Binary image-based development with serialisation, now that we aren't all trying to make exes, should be more of a thing.


I wrote my Bachelor's and Master's theses about Smalltalk stuff and I still love it to this date.


I remember learning Smalltalk. Or rather, beating the crap out of the bloody machine it ran on. This was back in 1989...Carleton U in Ottawa. The CompSci faculty was all-in on the glory of Smalltalk. We had to learn it on old (even then) Apple Lisa machines running some sort of Mac emulation to run Smalltalk-80. Occasionally, the Lisa would eat your diskette...and you had to rip the case apart to pry it out. We were told Smalltalk was the future...seemed no one was offering jobs in Smalltalk though...just a whole lot of dBaseIV. Must admit, it was fun, and cute...but it had no chance of paying my bills at the time.


I find Ruby to be enough of a Smalltalk for my taste. Besides that, I see people do stuff with Clojure, such as hot-replacing live code, that I had only seen Smalltalk devs do so far.


"pry" in many ways try to borrow even more of Smalltalk into Ruby. E.g. the ability to inspect a running system and bring up an editor to edit a specific method and then load the result into the running system, for example.

I often bootstrap experimental code with pry "built in" in various ways, e.g. with a small "prologue" with a few utility methods to hot-reload the code and set up basics to let me keep an editor in one window and a pry session in another, make changes and just load the changes without dropping the current state, so I can get instant feedback using the test data I already have loaded.

For the last two years or so I've used a text editor I wrote in Ruby instead of emacs. It spins up a server process on first start, exposed via DRb (an IPC library, for those not familiar with Ruby), which stores the contents of the buffers and persists them regularly. I did that both so I can implement multiple views on the same buffer without having the same client process keep everything open, and so that if anything crashes I'm unlikely to lose data.

But the side effect is that because DRb forwards exceptions too, and the editor itself is set up to catch all exceptions and throw me into pry, if something crashes my editor I get a pry prompt where I can live-debug it on the real buffers, edit the editor with another instance of itself talking to the same backend server instance, and then hot-replace the code to see if it fixes the bug. For hard-to-reproduce bugs in particular it's amazing to be able to keep the buggy instance running while fixing the problem and figuring out which conditions cause it.


I really like this, but I also like strong typing, and somehow it feels I cannot have both. Smalltalk, Ruby, LISPs (Clojure) in one corner A, and Rust, Haskell, OCaml/ReasonML/Rescript, Elm, PureScript in the other corner B.

A: quick to "compile", so fast write-run-try loops, very runtime inspectable/updatable, less noisy syntax; but also easier to make runtime errors, larger test suites, less easy to refactor.

B: refactorability (especially with good IDE support), strong runtime correctness guarantees; but also long compile times, and no cool runtime inspection/update wizardry.


Ruby is strongly typed. So is Smalltalk.

It is not statically typed, so given a variable you cannot know from the variable which type it currently holds. But in Ruby you can always know from the object itself which type it is.
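The same point in Smalltalk terms (a minimal Pharo-flavoured sketch): the variable carries no type, the object always knows its class, and a mismatched operation is a clean, catchable error rather than a silent coercion:

    | x |
    x := 42.
    x class.            "=> SmallInteger"
    x := 'forty-two'.   "same untyped variable, now holding a String"
    x class.            "=> ByteString"
    "No implicit conversion: the failed send below raises a recoverable error"
    [ 3 + 'abc' ] on: Error do: [:e | Transcript show: e messageText; cr].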

That said there's a lot of work going on in Ruby to allow you to get more certainty statically about types. See e.g. Sorbet


> Ruby is strongly typed. So is Smalltalk.

What "strongly typed" means is not well defined. But mostly people use it for languages that use the type system to prevent as many runtime errors as possible. Features like sum types and explicit nullability are often mentioned in that regard.

Ruby and Smalltalk have neither, and do not try so much to avoid runtime errors. They try to allow runtime inspection and runtime fixing.


I strongly disagree that this is how it is normally used.

Ruby, Smalltalk and e.g. Python are generally regarded as strongly typed on the basis that no conflicts can occur to trigger undefined behaviour or runtime errors that are unrecoverable because a lack of type safety corrupted the runtime environment, coupled with the explicit typing of all data and limited implicit conversions.

In fact, as someone who has worked on a Ruby compiler, merely defining what meaningfully constitutes an error in Ruby is one of the big challenges with compiling it, because a program is free to define how to respond to any kind of type error in whichever way makes the most sense, or to check and assert types however strictly you want.

While you can throw an exception you might call a runtime error, that does not imply a failed or crashed program unless you want it to, because the runtime environment does not get corrupted by it. A runtime error in Ruby can be part of the normal life cycle of an application that includes recovery.

My editor for example will not crash in the face of a "runtime error". A bug may trigger errors, sure, but they are all trapped and correctable from within the running editor itself. It's not unusual for me to keep my editor running for months at a time.

At the same time, nothing stops you from doing sum types or explicit nullability in Ruby if you want to, with some creative metaprogramming. People have built stricter type checks for Ruby for as long as it has been around. It was almost a rite of passage for some time, but people tend to stop when they learn to use Ruby properly. A key aspect of learning Ruby properly is to learn not to fight the type system and to only check the things that actually matter. When you do, you find a lot of type checks quickly become a smell.

But by your own criteria - the ability to completely trap, recover from, and prevent all runtime errors, and the ability to define arbitrarily powerful typing systems - Ruby is strongly typed.

Just not statically typed.

Of course with 3.1 you can increasingly do static type checking too, though that will take time to mature and is not a replacement for strong typing at runtime.


"a meaningless phrase"

"Programming Languages: Application and Interpretation" 2007, page 263-4

https://cs.brown.edu/~sk/Publications/Books/ProgLangs/2007-0...


And this one contradicts both cies and me, but it's in no way authoritative of anything.

Here are a few alternatives (equally unauthoritative, but demonstrating my use of the term):

"Strongly typed is a concept used to refer to a programming language that enforces strict restrictions on intermixing of values with differing data types. When such restrictions are violated and error (exception) occurs." ... "Examples of strongly typed languages in existence include Java, Ruby, Smalltalk and Python. In the case of Java, typing errors are detected during compilation Other programming languages, like Ruby, detect typing errors during the runtime." [1]

...

"Dynamically typed languages (where type checking happens at run time) can also be strongly typed. Note that in dynamically typed languages, values have types, not variables.

A weakly typed language has looser typing rules and may produce unpredictable or even erroneous results or may perform implicit type conversion at runtime." [2]

...

"A strongly-typed language is one in which variables are bound to specific data types, and will result in type errors if types do not match up as expected in the expression — regardless of when type checking occurs." [3]

This last one gives a clear delineation which mostly matches how I use it (e.g. C and PHP described as weakly typed - C allows you to override types and PHP does all kinds of nasty implicit conversions; conversely, while specific Ruby APIs may do conversions, the type system itself does not do implicit conversions and does not allow you to coerce your way past it).

You can find all kinds of other variants, so sure, it's not a term that has a definition that everyone agrees about. Your linked article has a point that it's a term one needs to define.

I however stand by my assessment that the definitions in most common use fits Ruby.

[1] https://www.techopedia.com/definition/24434/strongly-typed

[2] https://en.wikipedia.org/wiki/Strong_and_weak_typing

[3] https://medium.com/android-news/magic-lies-here-statically-t...


You are the only person who cares about your use of the term.

It's "a meaningless phrase", just stop.


So what are the similarities between Ruby and Smalltalk? I'm probably never gonna learn Smalltalk so it's hard to understand what Ruby borrowed.


Everything is an object. Being able to inspect/change running objects, etc...
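A few one-liners give the flavour (Pharo/Squeak-style, purely illustrative):

    3 class.                       "=> SmallInteger: even literals are objects"
    3 class class.                 "=> SmallInteger class: classes are objects too"
    42 respondsTo: #printString.   "=> true: ask a live object what it understands"
    42 inspect.                    "open an inspector on the object in the running image"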


I feel Ruby is mostly another syntax, but the type of OO (with bits of FP where it makes sense in OO) is very much like Smalltalk the language.


related, see this clojure talk on image-based programming in a functional context: https://www.youtube.com/watch?v=Kgw9fblSOx4


I don't have experience with Ruby or Clojure. While you're debugging an app, can you step to a break point, evaluate the next statement, navigate back up the call stack, change code that's already executed and try again in those languages?


Common Lisp lets you do that, but Clojure doesn't - I don't know about Ruby, but I expect not. That said, you have the wrong mental model. You don't run the whole program and set breakpoints. Instead you directly interact with your program-space. Imagine you have a function that retrieves something from an API. So you execute that function and see what it returns. You change that data around a little to fit your usecase, and then hand it off to the function that acts on the API result. That will produce more data; maybe this data is wrong, so you change the function, recompile it and load it into the program space, and try again with the same data you just changed (which is still right there in the variable you defined). Does that make it clearer?


To an extent. Things like "pry" in Ruby will let you edit methods and reload and resume and generally somewhat pretend you're working on an image. It's not as flexible as a typical Smalltalk environment, but with the new Ruby debugger in Ruby 3.1 I suspect we'll see another big leap towards that.



