
Developer here who has been programming for over 40 years (since I was a teenager in the late 1970s).

I know I am stretching things a bit here, but IBM mainframes, multi-user Forth systems, and distributed QNX systems from the 1970s and 1980s -- not to mention UNIX systems -- could all support remote procedure calls or interprocess/interapplication scripting across standard APIs to some extent (using a loose sense of "process" or "application", especially with Forth). Even Smalltalk back then could do that to an extent, though mostly from a single-user perspective, since Smalltalk is mostly about message-passing objects. Essentially, you could have a system that could talk to itself or to other similar systems in standard ways.

Yeah, there have been so many cycles of forgetting and reinventing with new generations of programmers -- although it is true that some things improve even as other things decay, in a constantly changing kaleidoscope of opportunities and risks (a bit like host/parasite arms races in evolutionary cycles).

https://en.wikipedia.org/wiki/History_of_CP/CMS

https://www.forth.com/resources/forth-programming-language/

https://en.wikipedia.org/wiki/QNX https://www.qnx.com/developers/docs/qnx_4.25_docs/tcpip50/pr...

And also from the 1960s-1970s: https://en.wikipedia.org/wiki/PLATO_%28computer_system%29 "Although PLATO was designed for computer-based education, perhaps its most enduring legacy is its place in the origins of online community. This was made possible by PLATO's groundbreaking communication and interface capabilities, features whose significance is only lately being recognized by computer historians. PLATO Notes, created by David R. Woolley in 1973, was among the world's first online message boards, and years later became the direct progenitor of Lotus Notes."

And from a different perspective, what is email but a standard way to do a remote procedure call to hopefully invoke some behavior -- even if a human may often be in the loop? https://en.wikipedia.org/wiki/History_of_email

And from the 1930s and earlier, Paul Otlet developed the idea of using standard 3x5 inch index cards to store and transmit information (mainly metadata): https://en.wikipedia.org/wiki/Paul_Otlet "Otlet was responsible for the development of an early information retrieval tool, the "Repertoire Bibliographique Universel" (RBU) which utilized 3x5 inch index cards, used commonly in library catalogs around the world (now largely displaced by the advent of the online public access catalog (OPAC)). Otlet wrote numerous essays on how to collect and organize the world's knowledge, culminating in two books, the Traité de Documentation (1934) and Monde: Essai d'universalisme (1935)."

For another example of cycles, my current favorite UI technology is Mithril+HyperScript+Tachyons for JavaScript (although Elm is great too conceptually, and likely inspired Mithril and React in part). It is so easy to use from a developer-ergonomics point of view in part because (simplifying with a very broad brush) it re-invents the OpenGL video game paradigm of redrawing everything from essentially a global state tree whenever the UI is considered "dirty" because someone touched it, with behind-the-scenes VDOM optimizations. Mithril is so much easier to use than UI systems that are all about creating dependencies (like most Smalltalk systems) or that require storing and updating state in carefully managed components (like React) or similar constrained models. But sadly React+JSX+SCSS has so far won the mindshare war despite overall worse developer ergonomics. I hope that cycle turns someday and the Mithril approach wins out (if maybe in some other implementation by then). https://github.com/pdfernhout/choose-mithril
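
To make that concrete, here is a tiny sketch of the style I mean (not taken from the linked repo, just a guess at typical usage): the whole view is a plain function of a global state object, and Mithril re-renders it after event handlers run.

    // Minimal Mithril + HyperScript sketch: the view is a function of a
    // global state object; Mithril redraws (diffing a virtual DOM) after
    // each DOM event handler, so there is no per-component state to manage.
    const state = { count: 0 };  // the "global state tree" for this tiny app

    const Counter = {
      view: () =>
        m("div.pa3",  // "pa3" is just a Tachyons padding class, purely cosmetic
          m("p", "Clicked " + state.count + " times"),
          m("button", { onclick: () => { state.count++; } }, "Click me")),
    };

    m.mount(document.body, Counter);  // auto-redraw happens after the onclick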

Frankly, it has been frustrating over the decades to see great ideas lose out for a time to lesser ones with better marketing, institutional advantages, or other non-technical edges (Forth vs. DOS, CP/M vs. DOS, Smalltalk vs. Java, Mithril vs. React), or to ones that better fit developers' familiarity with earlier systems (HyperScript vs. JSX, Lisp vs. C++). Yet I can also still be hopeful that things may improve as social dynamics and technical dynamics change over time in various ways. As was said about JavaScript, which I mainly program in now: "It is better than we deserve..."



Regarding multi-user Forth, I have a question, and you may know the answer. In the Forth history on forth.com, they write:

"By the late 1980s, polyFORTH users such as NCR were supporting as many as 150 users on a single 80386-based PC."

Do you have any idea how that was done? I do not know of any hardware from that era that could connect 150 terminals to a single PC.

For anyone interested, there is a nice book about Paul Otlet: "Cataloging the World: Paul Otlet and the Birth of the Information Age."


That's a great question. I don't know the exact answer, but as one possibility, here is a PCI card (I know, probably not the right bus for an 80386-era PC) that supports 16 serial ports: https://www.startech.com/en-us/cards-adapters/pex16s550lp

So if you had 6 of those, you could support 96 users. You could get expansion units too for the main bus: https://www.reddit.com/r/retrobattlestations/comments/dpt47y... "It takes up one ISA slot in the main PC, and then hauls the signal to the external box, where you can plug in up to 7 more cards, plus some RAM"

Which mentions: https://en.wikipedia.org/wiki/IBM_Personal_Computer_XT#Expan...

So, using 6 slots in the first box and 6 slots in the next, with 16-port serial cards, that's in theory 192 users on RS-232 lines. Anyway, this is just a guess. I vaguely remember hearing of some actual (smaller) systems with lots of RS-232 ports, but don't recall exactly how they worked.

One thing about Forth is that it could cooperatively multitask essentially by just switching a dictionary (user-area) pointer to the one for each current user, along with a small per-user terminal input buffer of, say, 80 characters. https://groups.google.com/g/comp.lang.forth/c/Rh3stETjMls https://forth-standard.org/proposals/multi-tasking-proposal
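
Here is a rough sketch of that idea in JavaScript rather than Forth (all names are made up, just to illustrate why the per-user overhead can be so small when tasks yield cooperatively):

    // Cooperative round-robin multitasking sketch (illustrative only):
    // "switching users" is just switching which per-user record
    // (dictionary pointer, input buffer) is current, then resuming its loop.
    const users = Array.from({ length: 3 }, (_, id) => ({
      id,
      inputBuffer: "",        // roughly an 80-character terminal buffer per user
      dictionaryPointer: 0,   // where this user's private definitions would grow
      task: null,             // the user's interpreter loop as a generator
    }));

    // Each user's "interpreter" yields after every small step of work,
    // which is what makes the multitasking cooperative rather than preemptive.
    function* interpreterLoop(user) {
      while (true) {
        user.dictionaryPointer += 1;  // stand-in for interpreting/compiling one word
        yield;                        // give the other users a turn
      }
    }

    for (const user of users) user.task = interpreterLoop(user);

    // The scheduler is nothing more than a round-robin loop over the users.
    for (let step = 0; step < 9; step++) {
      users[step % users.length].task.next();  // "switch the user pointer" and run
    }

    console.log(users.map(u => "user " + u.id + ": dp=" + u.dictionaryPointer));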

So, that's 12K to support 150 input buffers, plus probably at least 1K for each user dictionary on top of the shared dictionary (12K + 150K = 162K total). That is probably low -- if users want 4K each, that's 600K. Throw in 28K for an extensive base system to round things off, and that is 640K for a great low-latency system supporting 150 users all simultaneously having a command line, assembler, compiler, linker, and editor. And I'd guess probably a database too on a shared 10MB hard disk. And it might even feel more responsive than many modern single-user systems (granted, expectations were lower back then for what you could actually do with a computer). So, yes, "640K of memory should be enough for 150 anyones." :-)
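
As a quick arithmetic check of that back-of-the-envelope budget (the per-user numbers are guesses, as noted above):

    // Rough memory budget for 150 users, echoing the guesses above.
    const numUsers = 150;
    const inputBuffers = numUsers * 80;        // 12,000 bytes, about 12K
    const smallDicts   = numUsers * 1 * 1024;  // ~150K, so ~162K with the buffers
    const bigDicts     = numUsers * 4 * 1024;  // ~600K if users want 4K each
    const baseSystem   = 28 * 1024;            // shared dictionary, compiler, etc.
    console.log(Math.round((inputBuffers + bigDicts + baseSystem) / 1024));  // ~640K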

Related: "Why Modern Computers Struggle to Match the Input Latency of an Apple IIe" https://www.extremetech.com/computing/261148-modern-computer... "Comparing the input latency of a modern PC with a system that’s 30-40 years old seems ridiculous on the face of it. Even if the computer on your desk or lap isn’t particularly new or very fast, it’s still clocked a thousand or more times faster than the cutting-edge technology of the 1980s, with multiple CPU cores, specialized decoder blocks, and support for video resolutions and detail levels on par with what science fiction of the era had dreamed up. In short, you’d think the comparison would be a one-sided blowout. In many cases, it is, but not with the winners you’d expect. ... The system with the lowest input latency — the amount of time between when you hit a key and that keystroke appears on the computer — is the Apple IIe, at 30ms ... This boils down to a single word: Complexity. For the purposes of this comparison, it doesn’t matter if you use macOS, Linux, or Windows. ..."

Thanks for the Otlet book reference! Got a copy just now: https://www.amazon.com/Cataloging-World-Otlet-Birth-Informat...



