In 1997 at OOPSLA Alan Kay said "every object on the internet should have a URL". The response at that time (and many years afterward) could be generally described as smugnorant mockery. A bunch of smart-asses decided that "it doesn't work", mostly by sharing gossip about CORBA failures, rather than looking into relevant research and experiments.
20+ years later we get the same idea presented as some groundbreaking revelation, except it's made orders of magnitude more complicated by virtue of being an ad-hoc accretion of "hip" technologies. All created with zero overarching vision by people who clearly aren't even aware of past research or ideas. The crowds applaud - themselves.
Let me just point out some ideas that seem to have been forgotten:
- The idea that machines can exchange live objects, which can bring their "friends" along. (Before anyone says something about this not working - this is very similar to how we load JavaScript in the browser on nearly every page on the web. Except way more formalized and universal.)
- The idea that concurrency can be achieved by versioning object changes and reconciling different versions of reality when you need to. Pseudo-time, etc.
- The idea that objects that respond the same to the same messages are equivalent. Which extends to the notion that a node on the network can be modeled as an object and conversely a local object can be made a node on the network.
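To make that last point concrete, here's a minimal TypeScript sketch (the MessageTarget interface, the counter object, and the wire format are all invented for illustration): the calling code can't tell whether it's talking to a local object or a node on the network, because both respond to the same messages.

```typescript
// Anything that accepts a message and (eventually) answers it.
interface MessageTarget {
  send(selector: string, ...args: unknown[]): Promise<unknown>;
}

// A plain local object behind that interface.
class LocalCounter implements MessageTarget {
  private count = 0;
  async send(selector: string, ..._args: unknown[]): Promise<unknown> {
    if (selector === "increment") return ++this.count;
    if (selector === "value") return this.count;
    throw new Error(`does not understand: ${selector}`);
  }
}

// A node on the network behind the exact same interface.
// (Endpoint and wire format are hypothetical.)
class RemoteObject implements MessageTarget {
  constructor(private url: string) {}
  async send(selector: string, ...args: unknown[]): Promise<unknown> {
    const res = await fetch(this.url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ selector, args }),
    });
    return res.json();
  }
}

// Caller code is identical either way.
async function bumpTwice(target: MessageTarget) {
  await target.send("increment");
  await target.send("increment");
  return target.send("value");
}
```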
...
Joe Armstrong had a lot of good intuitions when designing Erlang. It's a pity that after a brief surge in popularity Erlang/Elixir have been downgraded to "not hip enough" status.
...
Another thing. If you like Unix pipes, you have to love late-bound dynamically typed objects that communicate through message passing, because Unix pipes can be generalized as such. Yet I routinely see people who claim to like the former and hate the latter.
It's especially depressing how many people don't see the point of "message passing" as the fundamental building block of programming.
Just a couple examples why it matters:
- This progression: object interface -> inter-object protocol (which extends to sequences and state) -> object communication language (which allows objects/the system to derive protocols).
- If objects communicate through messages, then an object's interactions with its environment are also messages. Which means you can create virtualized environments for an object by restricting or manipulating those messages.
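A minimal sketch of that second point in TypeScript (the file-store object and the sandbox policy are invented for illustration): wrap an object so that every message it receives passes through a policy, and you've given it a virtualized environment without touching its code.

```typescript
// Some object that only talks to its "environment" through messages
// (modeled here as ordinary method calls).
const fileStore = {
  read(path: string): string { return `contents of ${path}`; },
  write(path: string, _data: string): void { console.log(`wrote ${path}`); },
};

// Virtualize it: intercept every message and apply a policy.
// Here: a read-only sandbox that also rewrites paths under a chroot-like prefix.
function sandboxed<T extends object>(target: T): T {
  return new Proxy(target, {
    get(obj, prop, receiver) {
      const original = Reflect.get(obj, prop, receiver);
      if (typeof original !== "function") return original;
      return (...args: unknown[]) => {
        if (prop === "write") {
          throw new Error("write denied inside sandbox");
        }
        if (typeof args[0] === "string") {
          args[0] = `/sandbox${args[0]}`; // rewrite the path argument
        }
        return original.apply(obj, args);
      };
    },
  });
}

const jailed = sandboxed(fileStore);
console.log(jailed.read("/etc/passwd")); // "contents of /sandbox/etc/passwd"
// jailed.write("/etc/passwd", "x");     // throws: write denied inside sandbox
```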
> Another thing. If you like Unix pipes, you have to love late-bound dynamically typed objects that communicate through message passing, because Unix pipes can be generalized as such. Yet I routinely see people who claim to like the former and hate the latter.
Sorry if this is a dumb question, but could you explain this a bit more?
The reason people still care about Unix pipes is that you can take a bunch of arbitrary commands and string them together in a way that produces some useful results. Moreover, you can configure each stage to do different things depending on what you need at the moment.
What stops you from stringing together a bunch of arbitrary objects to achieve the same effect? Well, a bunch of things:
1. Verbose syntax of the language.
2. The need to compile new code.
3. Early binding that stops you from calling things you don't know about before compiling code.
4. The fact that objects can't just take arbitrary method arguments.
#1 is self-imposed.
#2 is solved by JIT compilation and similar techniques.
#3 is not a problem in dynamically typed languages.
Most importantly, #4 is explicitly what polymorphism in OOP was supposed to solve. Poly = many. Morphos = shapes. The idea that the object adapts to the messages you send it.
With this in mind, ls -a | grep "gear" can be interpreted as code that does the following:
1. Send the object representing 'ls' a message with the -a argument. This will presumably construct a listing.
2. Send the 'grep' object a message with the "gear" argument. This will construct a configured grep instance.
3. Send the response from #1 as a message (or a stream of messages) to #2. This will generate a filtered list.
4. Send the object resulting from #3 a message requesting a console-friendly representation (e.g. a string with color codes for the console).
5. Display the result of #4.
This is (with some caveats) very close to how OOP was envisioned in the '70s and early '80s. Except pipelines are a very crude way of stringing things together. You can get way more sophisticated with objects. E.g.:
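Something like this minimal TypeScript sketch (the Ls, Grep, and Pipe objects and their messages are made up for illustration; this is not any real shell's or framework's API):

```typescript
// Every stage of the pipeline is an object that understands a small set of messages.
type Line = string;

interface Producer { lines(): Line[]; }
interface Filter { accepts(line: Line): boolean; }

// Step 1: "ls -a" -> an object configured to list everything.
class Ls implements Producer {
  constructor(private showHidden = false) {}
  lines(): Line[] {
    const entries = ["gearbox.txt", "notes.md", ".git"];
    return this.showHidden ? entries : entries.filter(e => !e.startsWith("."));
  }
}

// Step 2: grep "gear" -> a configured filter instance.
class Grep implements Filter {
  constructor(private pattern: string) {}
  accepts(line: Line): boolean { return line.includes(this.pattern); }
}

// Step 3: pipe the producer's output through the filter.
class Pipe implements Producer {
  constructor(private source: Producer, private filter: Filter) {}
  lines(): Line[] { return this.source.lines().filter(l => this.filter.accepts(l)); }
}

// Steps 4 and 5: ask the result for a console-friendly representation and display it.
const result = new Pipe(new Ls(true), new Grep("gear"));
console.log(result.lines().join("\n"));
```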
Yep, the complexity of deploying Docker and k8s versus uploading a WAR or EAR file to a JEE application server.
One of the nice things about being in Java and .NET land since day one, and C++ since its kindergarten years, is seeing all the cool kids come up with "totally new stuff", watching them re-learn what we already did, eventually collecting some improvements, and seeing those stacks adopt whatever was actually new since the last reboot.
In the long run the turtle still seems a better option.
Sounds like a nightmare to me. Something that could've been a module in a regular application is split across multiple services, introducing multiple network calls that can fail (not to mention the response time once the workflow grows).
If people who don't know how to write modular code start using microservices as a solution, the only thing that changes is that you end up with a distributed big ball of mud instead of a regular big ball of mud.
It's worth clarifying that workflows are generally used to coordinate long-running processes that can last up to days - think coordinating receiving an order, picking, dispatch, etc. (rough sketch below). It's not intended to fit within the space of an HTTP request/response.
As for whether everything should be in a module? Idk, but it's a much nicer place to start than microservices.
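To illustrate the long-running part, here's a rough TypeScript sketch (event names, states, and the in-memory store are invented; a real workflow engine would persist this durably): the workflow is just persisted state that reacts to events which may arrive days apart.

```typescript
// A long-running workflow modeled as a persisted state machine.
type OrderState = "awaiting_order" | "awaiting_picking" | "awaiting_dispatch" | "complete";

interface OrderWorkflow { orderId: string; state: OrderState; }

// Pretend persistent store: the workflow sleeps between events.
const store = new Map<string, OrderWorkflow>();

function handleEvent(
  orderId: string,
  event: "OrderReceived" | "OrderPicked" | "OrderDispatched",
): OrderState {
  const wf: OrderWorkflow = store.get(orderId) ?? { orderId, state: "awaiting_order" };

  switch (event) {
    case "OrderReceived":
      wf.state = "awaiting_picking";
      // here you'd send a "pick this order" command to the warehouse service
      break;
    case "OrderPicked":
      wf.state = "awaiting_dispatch";
      // here you'd send a "dispatch this order" command to the carrier service
      break;
    case "OrderDispatched":
      wf.state = "complete";
      break;
  }

  store.set(orderId, wf); // persist; nothing lives inside an HTTP request's lifetime
  return wf.state;
}

handleEvent("order-42", "OrderReceived");  // days may pass between these calls
handleEvent("order-42", "OrderPicked");
handleEvent("order-42", "OrderDispatched");
```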
Microservices are an organizational pattern: "we have 500 programmers working on one application, how the hell do we do it."
Solution: break up the application into independent projects, each team of developers works on one of them, and have an extra team that builds tooling to make everyone happy.
Using microservices with a small team is usually a terrible idea, because you're switching to a distributed system when you don't need to. And distributed systems are hard to get right. And you're not even following the organizational pattern, because you have one team with N microservices.
That organizational pattern used to be addressed by proper design, engineering and project management. IMO, allowing the hell that is Agile to dictate architecture design is the tail wagging the dog.
You need a (good) programmer/architect for this to work, in my experience. If you don't have someone who can accurately keep the entire 500-piece workflow in his/her head, success is much harder to achieve. If you have that person, though, the 500 developers will feel very focused, and you end up getting where you intend to go. That architectural role is clutch.
Agreed. If you don't have the discipline to maintain libraries and their interfaces, then you won't have the discipline to maintain microservices either. I am sure there are good use cases for microservices, but using them to compensate for bad coordination won't help.
Kind of. If you need to scale compute across multiple nodes, starting out with some kind of service abstraction over a method call will help. But most people don't need that scale.
I think RPC is a dreadful network programming abstraction. Anything that makes you think it has the same QoS as local calls is just a trap: stuff that works in development and fails horribly in production. Errors, latency, timeouts, retries, etc. are first-class concerns once you bring in the network, far more so than for local calls.
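As a rough sketch of what treating those as first-class looks like (TypeScript; the endpoint, timeout, and retry numbers are made up), the return type forces the call site to handle timeouts and failures instead of pretending it's a local call:

```typescript
// Make the network's failure modes part of the call's type instead of hiding them.
type RemoteResult<T> =
  | { ok: true; value: T }
  | { ok: false; error: "timeout" | "network" | "bad_status" };

async function callRemote<T>(
  url: string,
  timeoutMs = 2000,
  retries = 2,
): Promise<RemoteResult<T>> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      const res = await fetch(url, { signal: controller.signal });
      if (!res.ok) return { ok: false, error: "bad_status" };
      return { ok: true, value: (await res.json()) as T };
    } catch {
      // timeout or connection failure: retry, then give up explicitly
      if (attempt === retries) {
        return { ok: false, error: controller.signal.aborted ? "timeout" : "network" };
      }
    } finally {
      clearTimeout(timer);
    }
  }
  return { ok: false, error: "network" }; // unreachable; satisfies the compiler
}

// The caller cannot ignore that this might fail.
async function showPrice() {
  const result = await callRemote<{ price: number }>("https://example.invalid/api/price");
  if (!result.ok) {
    console.error("remote call failed:", result.error);
    return;
  }
  console.log("price:", result.value.price);
}
```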
Yes, but I also think it isn't a bad idea to separate your applications. Just because each application is a monolith doesn't mean that all of your code has to be in one repository.
Can your admin system be a separate system? Can they be three separate systems because three separate groups will use them? Even if you don't immediately split them into separate repositories, it may make sense to segment them as if they were, because if successful, you will start running into problems at around 40 programmers.
The workflow/orchestrator pattern is key to keeping services decoupled and simplifying your code. I'm working on a Typescript library to encapsulate all the technical detail in https://github.com/node-ts/bus/tree/master/packages/bus-work... so that you only worry about message inputs and outputs, keeping it easy to see the logical flows of your system.
The author also hints at an alternative, commonly known as "routing slips" (https://www.enterpriseintegrationpatterns.com/patterns/messa...), where the "next step" is loaded into the message header and gets passed down to the next handler as part of a broader process. This removes the need for a centralised orchestrator, but has the huge downside of making it tough to handle failure paths, compensating logic, or switching logic.
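For anyone unfamiliar with it, a minimal TypeScript sketch of the routing-slip idea (handler names and message shape are invented; this is not the API of any particular library):

```typescript
// Routing slip: the message itself carries the list of remaining steps,
// so there is no central orchestrator deciding what happens next.
interface Message {
  routingSlip: string[];              // remaining steps, in order
  payload: Record<string, unknown>;
}

// Invented handlers keyed by step name.
const handlers: Record<string, (m: Message) => void> = {
  reserveStock: (m) => { m.payload.reserved = true; },
  chargeCard:   (m) => { m.payload.charged = true; },
  shipOrder:    (m) => { m.payload.shipped = true; },
};

// Each node pops its own step, does its work, then forwards the message
// to whatever step is now at the head of the slip.
function handle(message: Message): void {
  const step = message.routingSlip.shift();
  if (!step) return;                  // slip exhausted: process complete
  handlers[step](message);
  // In a real system this would be published to the next handler's queue;
  // here we just recurse. Note there's no obvious place to hang compensation
  // logic if a later step fails -- the downside mentioned above.
  handle(message);
}

handle({
  routingSlip: ["reserveStock", "chargeCard", "shipOrder"],
  payload: { orderId: "order-42" },
});
```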
I feel like you missed the point of the article. Programs like find were taking the functions of two (or more) smaller, more finite subsystems and rolling them into one, more complicated system. That added complication assumes all the functionality you would ever need is baked in -- in this case it might be, but this was just an example to start out with.
> 20+ years later we get the same idea presented as some groundbreaking revelation, except it's made orders of magnitude more complicated by virtue of being an ad-hoc accretion of "hip" technologies. All created with zero overarching vision by people who clearly aren't even aware of past research or ideas. The crowds applaud - themselves.
This is depressing.