Phoenix is practically built with this idea in mind.
My startup is built out of a single Phoenix monolith. The only external dependency is Postgres.
Despite this, I've managed to completely avoid all the typical scaling issues of a monolith. Need a service or background worker? I just make a GenServer or Oban worker and mount it in the supervision tree. The BEAM takes care of maintaining the process, pubsub to said service, and error handling in the event of a crash.
I'm basically getting all the best advantages of a monolith:
* one codebase to navigate, our team is TINY
* all functionality is in libraries
And all the advantages of microservices without the associated costs (orchestration, management, etc.):
* bottleneck use cases can live on their own nodes
* publish subscribe to dispatch background jobs
* services that each focus on doing one thing really well
We've accomplished all this through just libraries and a little bit of configuration. At no point so far have we had to:
* set up a message broker (Phoenix PubSub has been fine for us; we can send messages between different processes on the server and to the clients themselves over channels - see the sketch after this list)
* set up external clusters for background workers (Oban was drop-in and just works. I'm able to add a new worker without having to tell devops anything; the supervision tree allocates the memory for the process and takes care of fault tolerance for me. That leaves me to focus on solving the business problems)
* set up a second GitHub repo. It's a monorepo, and at our scale it's fine. We have one library for communicating with the database that every system just uses.
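To make the PubSub point concrete, here's a minimal sketch of the pattern (the PubSub name, topic, and message shape are all invented):

    # any process can subscribe to a topic
    Phoenix.PubSub.subscribe(MyApp.PubSub, "orders:created")

    # any other process can broadcast to it, same node or not;
    # subscribers receive the payload as an ordinary message
    Phoenix.PubSub.broadcast(MyApp.PubSub, "orders:created", {:order_created, 42})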
Eventually we'll probably have to start rethinking and building out separate services. But I'm happy that we're already able to get some of the benefits of a microservice architecture while sticking to what makes monoliths great for MVPs. It will be a while before we need to think about scaling our web service. It just works. That leaves me more time to work on tuning our database to keep up!
In my experience, this is also a major benefit of running Akka on Scala or Java. I had the realization that it's basically a single-language Kubernetes, with some really nice abstractions built on top.
By using Kubernetes, you get a scalable infrastructure.
By using OTP/Akka, you get a scalable application.
While there are problems common to both domains, they are still two different domains.
For example, using only Kubernetes, you won't have the ability to react to a Pod restart within your application (unless your application is aware of Kubernetes).
Using only OTP/Akka, you still need a workflow for deployment and infrastructure management, and you still need to implement node discovery for clustering.
NB: For Elixir, you have libcluster[1] that can use different strategies for node discovery, including using a Kubernetes Service to discover the nodes (as Pods).
EDIT: Using Kubernetes, libcluster, and Horde[2], you get the best of both worlds IMHO.
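For reference, a libcluster topology using the Kubernetes strategy looks roughly like this (a sketch; the selector and basename are placeholders for your own deployment):

    # config/runtime.exs
    import Config

    config :libcluster,
      topologies: [
        k8s: [
          strategy: Cluster.Strategy.Kubernetes,
          config: [
            kubernetes_selector: "app=myapp",
            kubernetes_node_basename: "myapp"
          ]
        ]
      ]

    # then start it from the supervision tree:
    #   {Cluster.Supervisor, [topologies, [name: MyApp.ClusterSupervisor]]}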
Thanks! I was just going to point out that Elixir/Phoenix + Libcluster + K8s is like a match made in heaven. I haven't tried Horde yet but I'm quite intrigued now.
Horde uses a CRDT[1] (Conflict-free Replicated Data Type) to provide a distributed Supervisor/Registry. This allows you to run an OTP supervision tree only once in your cluster, with automatic takeover. Basically, you start the supervisor on every node, but only one node will run any given process (thanks to the CRDT).
I find it very useful because "Distributed OTP Applications" are a pain IMHO (they must be started during the boot of the BEAM).
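A rough sketch of the pattern (module names are illustrative): every node starts the same Horde registry and supervisor, and Horde decides where each child actually runs.

    children = [
      {Horde.Registry, name: MyApp.GlobalRegistry, keys: :unique, members: :auto},
      {Horde.DynamicSupervisor,
       name: MyApp.GlobalSupervisor, strategy: :one_for_one, members: :auto}
    ]

    # called from any node: the child runs on exactly one node in the
    # cluster and is restarted elsewhere if that node goes down
    Horde.DynamicSupervisor.start_child(MyApp.GlobalSupervisor, MyApp.TenantSocket)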
wow, thank you! this actually handles a use case we're trying to solve in an upcoming sprint (each tenant needs to maintain a persistent websocket connection to a third-party API, but through us). I was trying to use Registry to do it but was wondering how I'd scale it once it got big enough.
this sounds really neat and I'm going to go read about it!
I also wanted to call out that this ~sentence has just... a bunch of things that seem like jargon/names within the community? As an outsider, I have no idea what they mean:
> make a Genserver or Oban worker and mount it in the supervision tree. The Beam takes care of maintaining the process
Elixir inherits a library called OTP from Erlang, which is a set of primitives for building massively concurrent systems.
A GenServer is sort of like a base class for an object that runs as an independent process that plugs into OTP. By implementing the GenServer behaviour, you create an independent process that can be mounted in an OTP supervision tree, which runs at the top of your application and monitors everything below it. Out of the box, that means your process can be sent messages by other processes and can send messages itself. If a crash happens, the supervisor will kill it, resurrect it, and redirect messages to the new process.
Creating a GenServer is as easy as adding a "use GenServer" line and implementing a few callbacks.
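A minimal sketch of what that looks like (module and message names are invented), a counter with a tiny client API and three callbacks:

    defmodule MyApp.Counter do
      use GenServer

      # client API
      def start_link(opts), do: GenServer.start_link(__MODULE__, 0, opts)
      def increment(pid), do: GenServer.cast(pid, :increment)
      def value(pid), do: GenServer.call(pid, :value)

      # server callbacks
      @impl true
      def init(count), do: {:ok, count}

      @impl true
      def handle_cast(:increment, count), do: {:noreply, count + 1}

      @impl true
      def handle_call(:value, _from, count), do: {:reply, count, count}
    end

    # mounting it in the supervision tree is one line:
    #   children = [{MyApp.Counter, name: MyApp.Counter}]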
GenServers are the base that a lot of other systems build on. Oban, a job-processing library, essentially builds on GenServer to use a Postgres table as a job queue. Since it's just GenServers with some added behavior, adding a background worker is as simple as adding a file that uses Oban's worker behaviour and specifying how much concurrency its queue gets in a config file. The result is that adding a background worker is about as much work for me as adding a controller. No additional work for devops either.
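For illustration, a hypothetical Oban worker; the queue name, args, and config values are all made up:

    defmodule MyApp.Workers.SendEmail do
      use Oban.Worker, queue: :mailers, max_attempts: 3

      @impl Oban.Worker
      def perform(%Oban.Job{args: %{"to" => to, "subject" => subject}}) do
        IO.puts("delivering #{subject} to #{to}")  # real delivery goes here
        :ok
      end
    end

    # concurrency lives in config:
    #   config :my_app, Oban, repo: MyApp.Repo, queues: [mailers: 10]
    #
    # and enqueueing is just an Ecto insert:
    #   %{to: "a@example.com", subject: "hi"}
    #   |> MyApp.Workers.SendEmail.new()
    #   |> Oban.insert()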
And yes, it is very sci-fi. Honestly, I'm shocked Elixir isn't more widespread. There's very little hype behind it, but the engineering is pretty solid. Every scaling bottleneck we've had so far has been in the database (only because we make particularly heavy use of stored procedures).
In the world of OTP (Open Telecom Platform), a process is the term for what is essentially a green thread, not an OS process!
So it is:
a) much, much more lightweight (IIRC ~1 KB; see the sketch after this list)
b) scheduled by the Erlang virtual machine's (the so-called BEAM's) schedulers, which run one per OS thread, inside the BEAM's OS process
c) independently garbage collected, no mutable memory sharing
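A quick illustration of how cheap they are: this spawns a hundred thousand of them from an iex shell without drama.

    # spawn 100,000 processes that each wait for a :stop message
    pids =
      for _ <- 1..100_000 do
        spawn(fn ->
          receive do
            :stop -> :ok
          end
        end)
      end

    # ...and tear them all down again
    Enum.each(pids, &send(&1, :stop))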
This isn't a knock against Elixir or the Erlang ecosystem but I would definitely say that Elixir gets a decent amount of hype. Each time a new release comes out it invariably shoots to the front page of HN.
1. If I'm calling directly, I can just use a raw SQL query (points 1 and 2 are sketched after this list)
2. Views can be backed with a read-only Ecto schema
3. Triggers can be set to run without your Ecto code even being aware of them
4. For custom errors, you can add overrides for the error handling in Ecto to transform things like deadlocks into 400 errors
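Minimal sketches of points 1 and 2, assuming a repo named MyApp.Repo and a Postgres view called order_totals (both hypothetical):

    # 1. calling directly with raw SQL
    {:ok, result} =
      Ecto.Adapters.SQL.query(
        MyApp.Repo,
        "SELECT order_id, total FROM order_totals WHERE total > $1",
        [100]
      )

    # 2. a read-only schema backed by the view
    defmodule MyApp.OrderTotal do
      use Ecto.Schema

      @primary_key false
      schema "order_totals" do
        field :order_id, :integer
        field :total, :decimal
      end
    end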
Some of these ideas are written up in the late Joe Armstrong's dissertation about Erlang and OTP "Making reliable distributed systems in the presence of software errors". It's a few hundred pages but quite readable to a programming audience -- it isn't filled with acres of formal proofs or highly specialised jargon.
See chapter 4, "Programming Techniques", section 4.1, "Abstracting out concurrency", and chapter 6, "Building an Application", section 6.2, "Generic server principles".
They do sound very sci-fi. I would say it made me more or less inclined to dig a little deeper myself hahahaha.
GenServers are an abstraction encapsulating the typical request/response lifecycle of what we would consider a "server", but applied to a BEAM-specific process. Like "generic server".
Oban allows for job processing, instrumented much like you would any other process in the BEAM. This is an external library, while GenServer is built in.
Really great talk! He talks about some problems with BEAM distribution but didn't get into details about them. Do you have any idea what those problems are?
I have done much the same in plain old Java, at a couple of jobs, going back ten years or so.
No OTP, GenServer, etc. Just a webserver and some framework for scheduled jobs, third party or write your own. Config enables individual routes and jobs at the top level. You can deploy one instance with everything enabled, or a hundred instances with the customer-facing web routes enabled in 60, API routes in 20, admin routes in 5, report generation jobs in 10, maintenance jobs in 5, or anywhere in between.
The only discipline you have to stick to is that you must pass information between components in a way which will work when the app is distributed. A shared database is the most obvious, and may be enough. We also used message queues, and at one point one of those distributed caches that were all the rage (Hazelcast / Terracotta / Infinispan / EHCache / etc - anyone remember JavaSpaces and Jini?).
We have a similar philosophy with our Python code base. Redis and Postgres are our only real dependencies. We use celery but it’s basically just Python. Maybe takes a little more work setting it all up, but once done provides similar benefits.
How do you keep track of background jobs? Is the queue persisted on disk somewhere? If it's not and a background worker crashes for some reason, then is the job lost?
That's a very nice library. BEAM + Elixir + OTP supervision trees + Oban for background jobs + PostgreSQL for persistent storage seems a killer combination. And like you said, you can scale horizontally naturally with the Erlang platform without even using k8s. Really interesting.
This was possible because the BEAM allows you to do the things you describe and enjoy the advantages of both worlds (monolithic and microservices). If someone does not use Elixir/Erlang, however, or if the product consists of parts written in multiple languages (for whatever reasons), then it's simply not possible to have the advantages of microservices in a monolithic approach.
We moved to microservices, despite my love for a monolith with libraries:
- to enable different teams to deploy their smaller microservice more easily (without QA, database migrations etc affecting the whole app);
- to solve the human temptation of crossing abstraction boundaries.
Not black and white, and both approaches can solve these problems. Open to any feedback!
Yeah I moved to microservices for the exact same reasons. People are undisciplined, and making it harder (and much more obvious) for them to do the wrong thing is more important than having all your code in one place. Plus, if you need to upgrade some dependency or if you want to try a new language or library or idiom, you can do so without the risk, effort and sunk cost of upgrading the entire application.
> And all the advantages of microservices without the associated costs. (orchestration, management etc)
Really curious about this! What’s the deployment experience? What environment do you use for your production? How do you run/maintain the erlang VM and deploy your service?
We use EKS with some customizations specific to Elixir, and an autodiscovery service for new nodes to connect to the mesh.
One of the big changes we had to make was allocating one node per machine, as the BEAM likes to have access to all resources. In practice this isn't a problem, because its internal scheduler is way more efficient and performant than anything managing OS-level processes.
That said, a more complex deployment story is definitely one of the downsides. But the good news is that once set up, it's pretty damn resilient.
Now of course, our deployment setup is more complex specifically to take advantage of BEAM features such as distributed pubsub and state shared across the cluster. If you don't need that, you could use Dokku or Heroku.
To add to this, we use Phoenix in a typical dockerized, stateless setup (load balancer, N web servers, Postgres/Redis) and it works great. Deployment is exactly the same as any other dockerized webapp. What OP is using is the "next level" that allows you to really leverage the BEAM, but you don't have to.
I hadn't heard of this, but it looks cool. Looks like something that will do the job when we do need it. So far we just store the parameters from certain heavily used mutations in a job table and run the actual insertion in a background worker. We're nowhere near needing this yet, but it'll be good to have it on hand when we (hopefully) get to that point.
A zero-cost alternative that has worked well for me so far is to use a front-end load balancer to distribute requests to multiple Phoenix instances (in k8s), and then just let those requests' background tasks run on the node that starts them.
The whole app is approximately a websocket-based chat app (with some other stuff), and the beauty of OTP + libcluster is that the websocket processes can communicate with each other, whether or not they're running on the same OTP node.
Not automatically, but it's pretty easy to configure in your supervision tree file. I don't know the details, because whatever happens by default has taken care of our needs so far.
It does automatically distribute processes across all the cores of a CPU, though.
What I like about the Erlang platform is that it seems like it has the most sensible “microservice” story: deploy your language runtime to all the nodes of a cluster and then configure distribution in code. Lambdas, containers, etc. all push this stuff outside your code into deployment tooling that is, inevitably, less pleasant to manage than your codebase.
Write libraries AND services, where it makes sense.
I wrote a Python library to scrape google news [0]
We also have it as a service [1]
Want to know why? Because devs who can't pay won't pay. Businesses who can pay will rather pay for a service (API in our case), and not care about maintaining it.
The hidden message of the mantra "just self host your own" is that it takes you time to set that up. If you aren't familiar with that process from past experiences, you now have to get those man hours somehow for selfhosting, and suddenly that free self-hosted solution is not so free if you value your time over $0/hr.
That's where I think services should come in. Only with no phoning home crap or cruft, just give me a hosted version of a bare bones cli app that I can access anywhere from any device. Give me my own instance of tiny tiny RSS or something else that's very lightweight and commonly selfhosted for $1 a month, and I'll happily pay up to avoid having to spend a few hours vetting hosting options and setting this up myself.
What does the licence agreement with Google look like? Do you just pay per query? Does Google prohibit caching? Just curious, as I remember reading nightmare stories of companies basing their business model on Google services (Maps).
This is a bad take and a false dichotomy. The reasons for choosing a library or a service are so myriad and context-dependent that any generalization about which is "better" is just silliness. It's like saying, "If you have to get somewhere, running is better than walking." Sure, in the case of a race or escaping a predator, running is probably the best option. But what if you want to take in the sights or you can't sweat in your clothes? Running would be a bad choice, even though it's faster. There are just too many variables in the real world to make any sort of blanket statement about which approach is "better."
I agree. But if we steel-man this, it could make sense for services that can be replaced by libraries, i.e. simple services that could be replaced by a slim library on the user's machine.
Some actual examples could go a long way towards making a better point. Obviously some things are better as services and others better as libraries. The question is under what circumstances to choose one model or the other. I didn't think the article really shed any light on that question.
I also think that the author (and other commenters in this thread) underestimate how much the move to SaaS is driven by what users want as opposed to what the providers want. I'm old enough to remember the pre-SaaS/pre-cloud days when everything was a library. And it was a nightmare. You had to run all of your own infrastructure, and the pace of development in the products themselves was terrible. And of course it makes sense. Need to store some data? A SaaS provider can pick a storage layer that meets their needs and be done with it. But of course a purveyor of enterprise software has to support every possible storage layer under the sun.
I have a hard disagree here. Though, I suspect it is a matter of what your code is doing.
First, my objection, though. Pushing this "to the users" is in no way easier to support. It is easier to abandon, but support is a different thing. You will have support contacts. And, due to the distributed nature of the deployment, you will have a much harder time isolating your code from the environment it is in.
With a service, it would be a lie to claim this is easier, but it is a bit more approachable.
Now, a difference, I believe, really comes down to whether or not there is state associated with what you do. If there is, and it is not something that makes sense to move close to the user, then a service is pretty much required. If there is no state, make sure that isn't achieved artificially by just pushing all state management onto the user.
>If one user can't have a negative impact on other users, then you don't care if some users are slow to upgrade; they're only hurting themselves.
This quote looks like it was written by someone who has never had to support users. I've had that pleasure, and these slow-to-upgrade users will become the bane of your existence. You'll spend an inordinate amount of time trying to support the "If only this f*ing user would upgrade!" customer.
I don't hear people talk about the design side of services vs libraries enough, but I think it's a huge part of the picture.
Many people reach for services too fast because they feel more comfortable thinking about APIs through the lens of HTTP verbs and resource URLs than they do in the comparatively infinite garden of options inside a program.
The REST paradigm, despite most people not understanding, needing, or utilizing it fully, has out-competed most other software design memes. It was well-positioned to do so, with its creator being an author on the HTTP RFCs and the internet exploding in popularity and use right as the work was published.
REST (and earlier, SOAP) also had good business/network reasons to become very well known: a business who wants you to integrate with them must tell you about their integration pattern. They need to document it and motivate it. Then people have to actually write a lot of software that works that way, over and over. Exposure spreads at the speed of business, and a broad culture of REST-knowledge was inevitable, as was an industry of teaching it. Learning "REST" has long been a compulsory part of learning web-interfacing development.
By contrast, can you name a similarly restrictive organizational zeitgeist for internal program structure?
Domain Driven Design, maybe? The broad concept of "design patterns"? "OO"? None of these are anywhere near prescriptive enough to answer the classic question, "how do I organize this greenfield project from scratch?"
Maybe someone could take a restrictive set of best practices for internal API design, write a dissertation on it, give it a great name (like NICE) and then proselytize it effectively enough to gain mindshare and improve the design options available. (This seems like an interesting marketing problem as much as it is a software design one!)
As it is though, most people just don't know any other consistent, repeatable way to break a complex system down.
> As it is though, most people just don't know any other consistent, repeatable way to break a complex system down.
I think MVC is that. It's just vague enough that you can square-peg-round-hole almost anything into it, but it's also pretty prescriptive in that there are exactly three "components" to your program and each has a specific role and relationship to the others.
I think the flow architecture (e.g. Redux) is sufficiently prescriptive for how to organize a project, at least compared to REST. There is one giant state object which is essentially a struct, trivially serializable to JSON. This state can only be changed via actions/events which are themselves only structs, not full objects. Reducer functions transform the old state into a new state using these events (in practice people use multiple reducer functions which are chained together).
This isn't even message passing really since you do have global state, it's just that this global state is changed via messages/events.
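The shape of it, sketched here in Elixir to match the rest of the thread (event names and state shape are invented):

    defmodule Store do
      # events are plain data; the reducer maps (state, event) -> new state
      def reduce(state, {:add_todo, text}), do: %{state | todos: state.todos ++ [text]}
      def reduce(state, :clear_todos), do: %{state | todos: []}
    end

    # replaying a list of events over the initial state
    events = [{:add_todo, "write docs"}, {:add_todo, "ship"}, :clear_todos]
    Enum.reduce(events, %{todos: []}, fn event, state -> Store.reduce(state, event) end)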
"REST" these days is a defacto code name for "(JSON?) calls over HTTP", with loosey-goosey adherence to anything close to what the original REST author meant.
At this point, we might as well call it RPC-style organized calls over HTTP using JSON representations and defined by "Swagger" files.
Likewise, most of SOAP was "RPC-style organized calls over HTTP (or TCP) using mostly XML serialization and defined by XSDs".
Yes, I'm disillusioned with the unnecessary reinvention of the wheel here. And "REST" interfaces defined with gigantic OpenAPI documents aren't really much better than SOAP, just with hip and still-in-development toolchains around them. It's all marketing and popularity contests, and SOAP/XML/XSD lost out.
> It's all marketing and popularity contests, and SOAP/XML/XSD lost out.
I believe the triumph of REST over SOAP was mainly piggy-backed off of the triumph of JSON over XML (even though neither has technically anything to do with REST).
In practice, in most people's minds, REST == JSON and SOAP == XML and JSON > XML. Therefore REST > SOAP.
Though as you note, what most people today call REST falls way short of the whole singing and dancing HATEOAS bag that Roy had in mind.
(Personally I think hypermedia is way overrated for most APIs, though that's not an opinion many REST zealots are open to).
It was both JSON and the "pretty" URLs, which also don't have anything specific to do with REST but people immediately associate them with it, to the point some people call them "RESTful URLs".
I think JSON is an improvement over XML for RPC, though they are similar. Generic JSON serialization/deserialization is trivial for any language with arrays and string maps. With XML, are fields serialized as attributes or as tags? Unless you are doing something really weird, JSON tells you just to use a JSON object. And for lists: do you use a special <li />-like tag for the items, or do you skip the intermediate step and just assume each child element is a list item?
JSON is definitely not perfect and has similar problems with things like serializing maps with objects as keys, but it does have more opinionated ways for serializing things than XML does.
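To illustrate the ambiguity: both of the following are reasonable XML encodings of the same record, while the JSON version really has only one obvious shape.

    <user name="ada" role="admin" />

    <user>
      <name>ada</name>
      <role>admin</role>
    </user>

    {"user": {"name": "ada", "role": "admin"}}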
The main product I work on has a specification that dictates the use of SOAP/XML. So I'm probably more familiar with SOAP than REST (by which I mean JSON/HTTP). Although I don't really delve deep into the RPC code when it can be avoided.
The answer to pretty much all your questions is "depends on the schema". There are XSD (XML Schema Definition) files that you can use to define types. You can define sequences, etc.
Then you can define a WSDL (Web Services Description Language) file which describes the RPC operations, arguments, return types, etc.
After that, you can use something like gSOAP to generate C/C++ out of the WSDL and XSDs. I think Java has a lot more variety in turning a WSDL into code, but last time I looked on the Java side of things it was super confusing. None of the tools seemed to work well together. They would never shut up about beans, and there always seemed to be a bunch of tool specific annotations. Pretty disappointing, and definitely makes me lean towards REST for personal projects, if only from a KISS perspective.
My impression is that SOAP and XML could probably make a much more complex web service than REST (again, abusing the term to mean JSON/HTTP). But after having worked on a pretty confusing contract-first web service, I am kinda jaded about it.
XML guarantees that the tag precedes attributes, which in turn precede contents, which is a great help when any sort of polymorphism is involved. Heck, anything beyond a tree of untyped arrays, maps, and primitive values adds ugly complexity to structuring and interpreting the JSON.
IMO much of that could be fixed with a JSON derivative that allows optional type identifiers before values. While you're there, guarantee support for comments and trailing commas, because despite design ideals, humans will write JSON manually, even using JSON for configuration files that are expected to be hand-modified!
I now genuinely dislike XML. We ditched all that XML tooling around RPC over the wire and couldn't be happier. JSON is far better for what we do. It's faster, easier to parse, easier to store, easier to create/modify/delete. For us, we even found JSON over HTTP uses less electricity.
We still use XML, but it's gone from RPC / everywhere to now only being a storage mechanism. I remember the network transfers dropping significantly and being very happy. Haven't looked back.
Services allow you to interop with all languages, they let you avoid worrying about the threading model of the program you're operating within, and they sandbox the code being run, letting you avoid worrying about the code doing something nefarious with the access it would get from running inside your process. On top of this, people just seem to be far more okay with not having source access to a service than they do using proprietary libraries (which from a security perspective makes sense to me).
If there were a standard threading model, language-agnostic interop with clear evolutionary properties (perhaps a channel-based one), and a generic mechanism for sandboxing libraries folks could use, then maybe they would consider providing libraries more. Some sort of hybrid between COM, wasm, and optimized in-process gRPC is more or less what I'm trying to describe. This doesn't exist, however, and moving your library out of process solves these issues neatly, so it's not a surprise that's what's happening.
    An object in a type-safe language can contain capabilities for
    resources which it uses to implement its methods, without those
    capabilities being available to the code calling those methods.

    Java-style stack inspection can restrict user or library code to
    deny access at runtime to unauthorized methods.

    Capability-safe architectures such as CHERI prevent code from
    accessing memory that it doesn't have an explicit capability for.

    Software fault isolation can allow Multics-style "call gates",
    where a library has a different privilege level from other code.
Aside from CHERI, which requires specialized hardware, none of these actually work, and none of the implementations you will find in the wild are anything but snake oil. Just use a service; it's the only local security boundary current OSes even pretend to enforce.
>none of these actually work and none of the implementations you will find in the wild are anything but snakeoil
This is wrong. The foundation of the modern web is safe JavaScript implementations. BPF/eBPF is another widespread technology using isolation mechanisms like this.
All of these techniques work. As I mentioned in the very next sentence, only the first in that list is common, but that doesn't mean the rest don't work. If you have found some fundamental hole in NaCl or CHERI or JVM security which makes them ineffective, feel free to publish your results.
Huh? NaCl is dead, and as far as I know nothing relies on the JVM sandbox for security anymore (and it was a disaster when they tried).
These also rely on wrapping the entire userspace into a sandbox, which is hardly relevant to the libraries vs. services thing. You can't inspect a stack from inside the attacker's process and expect that to do anything at all, let alone control access to a resource.
>You can't inspect a stack from inside the attacker's process and expect that to do anything at all, let alone control access to a resource.
See the article:
>If user code is running on a sufficiently advanced platform, one not administered by the user, then a library can safely manipulate resources that aren't accessible to the rest of the program. For example:
I think you are talking about different things. I have absolutely decompiled obfuscated third-party Java libraries, fiddled with a few variables or method signatures, and recompiled + used them.
You're correct that obfuscation of Java libraries has nothing to do with Java stack inspection/JVM security... but the latter is what the article talks about and what is cited above, so not sure why you have just now brought up decompiling obfuscated Java libraries, which is, as you say, totally unrelated...
"Java-style stack inspection can restrict user or library code to deny access at runtime to unauthorized methods.",
which doesn't seem true to me, if the user has access to the library binaries. It's very possible for the user to just patch the library and use it however they want. It's possible (although I'm not especially convinced) that you can prevent the user from reflectively doing this at runtime, but that's not the only way to access unauthorized methods.
Yes, the OP has this backwards. The Java sandbox would allow an application to limit the access granted to a library to protect from a malicious library. It does not protect a library from being used by an application in way that was not intended by the library creator. It is certainly not a means of enforcing a licensing scheme.
A service is great when you want to let the service's dev cycle run independently of yours. There are good use cases for this:
- an auth service that keeps up with security bugs
- a recsys that keeps deploying newer and better versions of itself into your app
In these cases the service continues to fulfill the same contract, but they keep getting better independent of your dev cycle.
In effect a service is a kind of “push” model of dependency. I push my improvements into your app as soon as they’re ready. Crucially you let me do this because it’s an area you’ve given complete trust to another team to manage for you for a variety of reasons.
Libraries, OTOH, are a “pull” model. You pull the dependency into your code base when you’re ready to absorb the changes. There’s less benefit to letting the dependency evolve on its own and you want to decide when to pull in updates.
Crucially in services the promise is more abstracted. I’ll give you “good recommendations” whereas libraries can be more “add two numbers”. With the service it’s more about the interface being stable over time, and while this is true with libraries, the API can evolve more fluidly.
These aren't hard and fast lines; I think both are valuable and have their pros and cons.
100% agree with this. Determining the "direction" of dependency like this is one of the main considerations around implementing something as a library or a service.
I've built services that expose the latest/current version of an underlying library, with various versions of that library used in products across the company.
There is also a question of overhead - calling a library is almost always SO much faster and lighter than calling out to some service, especially if we're talking JSON or XML over HTTP.
What I was alluding to is that one of the first comments on this thread was "how do I make money out of this?" That indicates the priorities of the commenter, if nothing else.
All aspects of life in the US seem to be increasingly focused on, driven by, or dictated by money. It's not just tech, but wherever there is more money to be had, there is more focus on money.
At some point, I wonder if our social and economic system becomes a close enough proxy to darwinistic survival of the fittest through capital ownership that a large enough portion of the population realize they would do better outside this construct rather than inside our societies and will begin to reject the system at large.
Right now we are able to provide stable food and housing, and ERs can't refuse people medical care, but stable housing is in decline and healthcare costs that lead to bankruptcy are on the rise. I feel like we're really testing how low many will go before they reject the system at hand, and at what point that number of people is large enough to be critical to the existing system's stability.
At some point the effort for comfortable success may become so high people just abandon it entirely.
The communist party was pretty successful in helping to overthrow the Portuguese dictatorship in the early '70s, but they were also democrats. They're still one of the major political parties in Portugal, and you see ads for the communist party everywhere.
The Indian state of Kerala is the most advanced, well run, and successful region of the country. On development indexes it ranks with the poorer regions of the EU. It's also largely a communist state, but once again the communists are democrats.
We have voluntarily destroyed lots of economic growth and plunged millions into extreme poverty because of a virus that is not especially bad when compared to historical pandemics.
Oh that's a burning question for me. I'm finishing a C lib that - if all goes well - will be very useful.
I worked hard on it and now I don't know how to license it.
I'm okay with non-profit OSS projects using it freely, but I need others to pay for it. How? How do you express such a dual license? And how do you ensure for-profit users reward you?
GPLv3 with a commercial exception. Corporate suits don't like the GPL, so if they buy a commercial licence they support the project without having to redistribute (You can also offer support)
I think AGPL is the much more effective open source license if your intention is to have corporations too afraid to use it under the open license and to negotiate a commercial exception license.
Plenty of companies will use GPL code, especially for hosted software.
Notably, the FSF recommends against[1] modifying the GPL. And, you'd want to hire a lawyer to go over whatever scheme you're thinking of.
It's easier to draft a separate license agreement if you're the sole copyright holder.
If you're not the sole rights holder, if your project contains other GPL code, you'll need permission from the authors to license under terms other than the GPL. That would include patches from other authors, unless they sign the rights over to you.
The way they were monetized before SaaS was a thing. You sell licenses to use them. You can't 100% prevent people from using them without a license but for the most part you don't really have to. A company with real assets is generally not going to use a cracked version of proprietary libraries and expose themselves to litigation.
It's easy: you release it open source... then make sure languages "officially" promote it in their documentation and have people adopt it thinking it's a great and actively developed project that's going to be around forever, etc... And then fork it, stop releasing security updates for the older (open-source) versions, and turn it into a product that you charge people for, because corporates are now dependent on it.
Maybe if you code up a library that meets a specific set of requirements, but you do it in a way so that using the library seems simple on the surface but, the deeper the end-user-developer goes, the more illogical stuff becomes. At that point you have a developer too invested in the work they've already done to abandon it: a developer in the market for a "... The Good Parts" or "... Cookbook" book.
I was not thinking of Actionscript when I wrote this comment.
It basically allows you to share your code with others in exchange for a fee. If someone builds an app or service that uses your library, and it is paid within Fluence, you will receive a fee. This works transitively.
You put it on your resume and make your bread doing other jobs or freelance work, where you are now the top candidate in the pile thanks to the work you've done on your publicly available library, where anyone can inspect the source and see your handiwork.
Engineers in tech always complain that there is no proof of good work, like someone in biochemistry with a stack of papers on their CV would have. Well, this is how you do it. You spend effort sharing ideas with the community knowing you won't necessarily make a dime off of them, just like the biochem person who doesn't get paid per paper, and not spend 100% of your personal efforts working for someone else in private on a for-profit project you aren't at liberty to intimately discuss.
Not really. Your market is different. Instead of selling to business managers seeking to solve a problem, you're selling to developers. Developers tend to like solving problems themselves, and you're competing versus free.
There's an ecosystem of paid libraries and tools around Java and C#. While developers might like to solve problems of their own, standards compliance is one place that they likely wouldn't want to tread their own path if they can help it. They've got work to do, and deadlines.
The definition being used here for libraries and services has both being marketed to developers. We're talking about APIs, not a web product with a UI etc.
Services, in this article, doesn't refer to things that can only be built as services (e.g. because it queries a proprietary dataset) but those that might optionally be built as services (e.g. fetching and processing otherwise accessible data) and could, as easily, be built as a library.
You can totally sell libraries to business managers. You can sell sauteed rat's assholes on a stick to business managers, you just need the right sales pitch.
Including a stolen library in commercial software that you distribute is easy to do but also easy to detect, so it only means that you're willing to pay triple the price in damages.
One example of this is computer games. There are games like StarCraft 2 and Diablo 3 where a server is required even for solo play, and it's not possible to play offline. I think that one of the reasons is to make it harder to crack a game (as you would need to re-implement the server) and to make a cracked game objectively worse (as your implementation will not be as polished).
Gamedev middleware is an example of that - be it individual libraries (for example audio: FMOD, CRIWARE, Wwise, Miles, etc.) or full-blown game engines (Unity, Unreal, etc.). Just a few examples.
After building embedded library codebases that have had version skew across clients (sometimes over several years worth of code), the library approach just stops evolving after a certain point - the backward compatibility mess is a drowning morass. There is no control on the upgrade cycle, old infra can't be turned down, performance is a risk, upgrades are a risk, every deploy has "foreign code" and the potential for a dependency mismatch. Services work - use them. Unless performance is of the utmost concern, say no to libraries.
> A service has constant administration costs which are paid by the service provider. A properly designed library instead moves these costs to the users of the library.
I agree with the article's point, but this introduction, right there, is why it's not happening. SaaS turns your startup into a unicorn and yourself into a rich person. Or at least that's what you're hoping/aiming for. A library is not going to make you a billionaire. Sad, but reality.
You're right. The fact that people can make outsize rewards from subscription services is the reason that even small single-use tools are now 'services'.
I have a piece of software that removes the background from my video feed. It's a subscription, and it regularly phones home.
I have a piece of software that lets me visually combine, rotate and reorder pdfs. It's a subscription.
Even the simplest things, like a 'torch' are ad-supported on android these days.
It's miserable fighting off a million people who want to help me for 'less than the daily price of your coffee'.
And that's ignoring the fact that so many of these subscriptions are really quite expensive - they adopt the standard cloud pricing of 9.99 per month, even when the service they're offering is not much different from what a $5 piece of shareware would have provided in the distant past.
I don't think either of those subscription services are unicorns or making billionaires. If anything I'm happy to see indie developers able to maintain a decent income making useful (but often niche) tools that other people can use. Of all the SaaS services I pay for, it's one like these that I'm the least apprehensive spending money on.
Video codecs are hard. PDFs are a labyrinth of a file format with traps everywhere. I bet whatever open source equivalent you'd find of these services either require some serious additional tooling to get what you want or are too outdated to be useful anymore because the maintainer moved on from the project.
Video codecs are hard, but the reason video utils become services is because the best (by a mile) video codec libraries are (L)GPLed. I suspect the same is true for the best PDF libraries that would be integrated into desktop applications (meaning compiled like SumatraPDF's library, since there are plenty of JS/Python permissive licensed libraries).
All the developers I know immediately jump down a level to ffmpeg command line utils or up a level to proper video editing software (often pirated, no less), so there's no incentive to develop a high quality simple video editor.
Yeah, I wonder if things like ffmpeg would have been created in the current culture, or if it would have been a VC funded webservice paid for with a monthly subscription, and when the company eventually failed, all the code would have vanished with it, rather than being progressively improved by a large number of people through the years and having a phenomenal positive impact on the world.
> It's miserable fighting off a million people who want to help me for 'less than the daily price of your coffee'.
Is it also miserable "fighting off" a million companies who want to sell you stuff for the daily price of a coffee? Stuff like T-Shirts, books, newspapers, candy, etc?
You seem to have a patronizing attitude towards software products/services. You seem to think that they are so simple, that they should be free. And you seem to be upset that the software's author would dare to charge money for their work.
I would never begrudge someone wanting to get paid for their labor. I probably wouldn't hire them... just like I don't buy the vast majority of things that people try to sell to me. But I would never begrudge someone for wanting to get paid for their time either. If you don't think a piece of software is deserving of its subscription fee, you can always... not buy it.
Complaining that subscriptions are a bad deal for me as a user is a valid, if weak, market signalling mechanism.
And they are a bad deal, because with each subscription comes a business relationship you have to manage, which is annoying and prone to being forgotten (which is what many companies offering subscriptions, particularly the "less than coffee" types, are very much counting on).
> Is it also miserable "fighting off" a million companies who want to sell you stuff for the daily price of a coffee? Stuff like T-Shirts, books, newspapers, candy, etc?
Yes, it is. Advertising is cancer, and quality of life improvements can be measured in your ability to limit your exposure to it.
Also, like the other commenter mentioned, that stuff isn't subscription-based. It's either consumables or usable until it wears down, which is something I can control. And most importantly, none of these examples involve me having to manage relationships with any of the vendors.
(Technically I have a relationship with each of the vendors, but it's entirely mediated by consumer protection laws. I have nothing to actively manage here, except saving receipts - I only need to know which government agency to write to if the vendor fucks up and doesn't want to reimburse me for it.)
> with each subscription comes a business relationship you have to manage, which is annoying and prone to being forgotten
Though, this is one advantage of centralized services like Paypal and the iOS ecosystem.
At any time, I can see the iOS apps that are auto-billing me and I can rescind the contract from that UI. I don't need to call anyone or hope their <button>Cancel</button> actually does anything. I don't need to watch my statement like a hawk just to make sure they actually stopped billing me or that the 7-day trial that I canceled actually canceled.
Anyone can complain about yet another subscription. But tools that help us stay on top of our subscriptions are essential for subscriptions that aren't a bad deal. The banking/financial industry stopped evolving long ago and should have built ubiquitous tools for this.
Finally, the complaints about subscription services in this thread aren't very compelling. Nobody wants to buy a subscription yet the app transaction they want on their terms (e.g. buy once, never expire) presumably doesn't exist. It's a pebble's throw from just complaining that you'd prefer if everything was free so that you could keep your hard earned money.
Aside, iOS doesn't go far enough. Just so I don't seem like I'm too kind to Apple and subscription services here, they still have a long way to go. If they cared more about consumer protection, they would enact these changes:
1) An iOS notification every time we get auto-billed. Every time we get charged, we should be reminded to consider whether we actually want the subscription, so that people aren't just getting taken advantage of by forgetting. iOS does nicely show you your auto-renewing subscriptions, but my parents don't know how to access that screen.
2) nuke the ability for weekly charges. A monthly billing cycle should be the minimum because that's what people are used to. It's kinda disgusting that an app can charge $7/wk when 99.9% of auto-renew cycles are monthly, and the user has to happen to notice the "wk" when they agree to it. And if weekly billing is allowed, then the iOS pricing page should standardize it showing you how much that costs per month to make it clear that it's not $7/mo.
3) An app shouldn't be able to default to the yearly billing cycle, it should default to monthly and the user can choose a yearly cycle if they want to, ugh. So many apps will default to the yearly cycle (so, 12*fee upfront) and even require you to pick that one if you want the 7-day free trial. It's hard to see how Apple could design the system to allow this behavior without knowing it's going to make people commit to a billing cycle they don't actually want.
4) You shouldn't be able to display a full-screen interstitial that makes it seem like you have to subscribe to use the app. I was just looking for a good daily workout iPhone app this week and every app had a full-screen splash page where you had to notice the tiny "X" to skip.
That said, still better than the US system where giving someone your debit card number in 2021 lets them pull money from your account for years just because you bought a $3 hotdog from them once.
> Nobody wants to buy a subscription yet the app transaction they want on their terms (e.g. buy once, never expire) presumably doesn't exist
It used to. The subscription model is pretty new and has only become common in the last decade or so (generously; it's probably even more recent). Buying a perpetual license to use a copy of software was the way to buy software up until fairly recently.
You can characterize this shift as malicious, as a result of corporate greed and a desire to protect IP. Or you can characterize it as simply companies struggling to generate stable, predictable revenue with the old model, and finding subscription revenue to be more healthy. Regardless, subscription models are new, not the long-time status quo.
> It's a pebble's throw from just complaining that you'd prefer if everything was free so that you could keep your hard earned money.
No, it's not, and it's disingenuous of you to suggest that's where people are going with this.
It’s not hard to find a T-shirt I can wear over and over again without having to keep paying for it. Nowadays it’s nearly impossible to find software that lets you pay once at a reasonable price. So yeah, it is very frustrating to fight off subscriptions.
I think the problem is that people are overestimating the benefits of subscriptions.
Yes, subscriptions make recurring revenue. But so does pay-once software, as long as you don't stop getting new users. But your SaaS subscribers also churn, so you can't stop getting new customers there either.
Pay-once has the huge advantage that the barrier to entry is much lower. I'm pretty sure that it's much easier to sell a 50€ app than a 10€ subscription. And that 10€ subscription means you need to keep your customers for at least 5 months. If they don't need your app anymore for some reason before that, you would have made more with the pay-once app.
I mean, if you have recurring costs per user, please go for a subscription. But folks shouldn't assume that pay once is unsustainable. You just get the full lifetime value of the customer up front and you don't have to worry about them churning!
There is a hidden problem here, which is app store policy (all of them, afaik).
Setting up a subscription? Easy. Selling software once... and just once? Also easy.
But if you want to follow the classic version model, where users pay for upgrades? Now you have problems. The app stores see a new version as a completely new piece of software, so you have to build up reputation for it from the ground up, and if you want to overlap the old and new version for a while, which is usually a good idea, you run the risk of people buying the old one by accident and getting mad at you.
It's just not a use case which is supported anymore, and it should be.
This, a million times this! The app stores made the most sensible model extremely cumbersome! I want to buy software and own it forever. On the other hand, forcing the developer to provide upgrades for free forever is not sustainable.
Maybe there is a clever way to work around this issue using in-app purchases...
You are overthinking it. A lot of software just doesn't need elaborate updates.
For one utility app that I sell, I just build a new version every couple of years when it's no longer compatible with the latest OS, and people still buy it. There really is no need for updates for some apps.
If there is demand for updates (eg. because customers want new features), then you can just sell it as a new app. People who want to upgrade can get the new version. And people who are happy with the old version can just continue using that.
The folks who complain that they can't sell yearly updates on the app store are basically just trying to sell subscriptions without calling them subscriptions. They should just sell their apps as a subscription instead. It's not really pay-once if users have to buy an upgrade every year.
I don't think that's really the same concept, though.
Yes, a utility app might be "done" at some point and only need updates when the App Store requires you to build against a newer iOS SDK. Fine. But many apps go through large changes over time and accumulate large improvements that might be worth paying more for.
Selling a new major version as a new app is a clumsy experience for users. They need to find the new app, install it, somehow transfer all the data and settings from the old app to the new, and then delete the old app. Most of that isn't something the app author can automate or do for the user.
I agree that some people might "abuse" this sort of functionality to sell subscriptions without selling subscriptions. But so what? Under this imaginary App Store upgrade flow, the user could also choose to keep using the old version and not upgrade. That gives the user more choices, not less.
> Pay-once has the huge advantage that the barrier to entry is much lower. I'm pretty sure that it's much easier to sell a 50€ app than a 10€ subscription.
I think you've got it backwards. I can't think of many examples that suggest that this is true.
The iOS app store is a very hard place to sell an expensive $50 app, yet it's a very easy place to sell 7-day trials that turn into $10/month subscriptions. Immediate cash outlays are always harder to sell than pay-over-time deals for various reasons.
One reason being the customer's attempt to avoid the feeling of overspend: that you can always stop subscribing when you're done rather than getting "stuck" with the product, even if the one-time cost is a better deal. Another reason just being that you're asking for less money upfront which is always easier.
In my own experience, people will even stick with a pricier monthly billing option over the yearly billing option just to avoid the larger hit, even when they've been a customer for five years and know they'll still be subscribing a year from now.
I certainly appreciate how us HNers might prefer a one-time cost over a subscription, but I wouldn't try to generalize that to customer behavior.
>I'm pretty sure that it's much easier to sell a 50€ app than a 10€ subscription.
Very much this for me, I will go to great lengths to avoid using software with subscription fees. For example, I was fine with paying several thousand for the Adobe suite and upgrades in CS4-6.5 days but no way will I pay $50/month for the same software. The cost may be less upfront but I have an easier time justifying a single one time purchase than a continual unending string of fees. I don't use the software enough to justify the expense month after month so seeing the bill again and again wears on me until I cancel.
> I mean, if you have recurring costs per user, please go for a subscription.
I sometimes see software that is running 'in the cloud' for no discernable reason. Yes, those companies have recurring costs per user, but that's entirely their choice, the more natural way of writing a simple file transformation tool is as an application, not a webpage.
So write your own software for your needs and sell it for a one off price?
But you presumably won't because the incentive isn't great enough. Which is exactly why people milk subs. And if you're not willing to do it for the incentive of a one off price, why should anyone else be?
> So write your own software for your needs and sell it for a one off price?
Should he tailor his t-shirt too? It is totally fine to complain about something without fixing the industry yourself.
I for one don't like the movement of commercial software to forced cloud integration. Big or small business. What would have been shareware or naggware would today be SaaS and be gone the day the server shuts down.
No, because t-shirts are already available for one-off prices. He's already willing to pay enough to motivate people to make them for him. He and/or wider society are not willing to pay enough to motivate people to make him one-time-purchase software, apparently.
And the end result of this whole thing is too-expensive software that no one is willing to pay for and doesn't get used, and no one's problem is actually solved. And then when the "startup" fails, the code is of course not turned into a library, and it all starts again.
If you think an $x one-time payment is a great reward for making y, then make y and charge $x. Seems like a free opportunity to me. If it's really so simple, you can easily undercut those doing the subscription model.
Unless, of course, $x is not actually that motivating a fee for the work. In which case, it's a bit entitled to expect others to work for a fee that doesn't spur you into action either.
Feels like the reason one-off pricing is less common is that the model has felt broken since the time of Kazaa (a file-sharing app from way back), Napster, etc. For some reason software is almost exclusively sold through app stores. I don't remember the last time I bought software that wasn't subscription-based for a non-mobile device. OK, maybe one app.
Well that and probably the pricing psychology of paying what would be your LTV as a subscriber up front.
That's also why I don't agree subscription software is categorically detrimental; it's more of a trade-off. The benefit is there for both parties: for the seller, it's easier to convert people because you're only asking for the one-month or one-year price; for the buyer, you're at most out that same reduced price if the service doesn't work out.
The negative is of course that you don’t own it outright, but it seems a lot of people are happy with the trade off. Apart from HN.
The parent isn't arguing against paying for software, they're arguing against the subscription model.
I was a professional designer once. I happily purchased Adobe software to do my job; it was the best out there. Adobe switched to a subscription model and I no longer use their products. Thankfully, alternatives have presented themselves, and there are now capable software suites that I can pay a fair price for.
That's fair - but if you tear that shirt through your own usage or because it was poorly made - that's on you.
There is a very fair expectation of continued support when it comes to software, this doesn't need to include new features, but security issues in libraries should continue to be fixed. That implies an ongoing cost that might not have an ongoing revenue stream to sustain it.
When it comes to shirts there is no similar expectation of ongoing maintenance - Nike isn't going to come to your house and resew a seam because they did a shoddy job the first time.
This was the attitude that made me not use the graphic design degree that I got and move into software development (other than the fact that I had been building and using computers since I was 12). People look at the arts as a "thing you should just do because you enjoy it". "Can you make me a logo in your spare time?" is a common refrain. It seems to have trickled down to software development.
Anything anyone has a true passion for, eventually, will be seen as a free commodity by interested parties, unfortunately. I wish it weren't so, but business acumen and true passion are often enemies.
This. However, the burden of dealing not only with dozens of subscriptions, but with changes in subscription models (none->monthly, annual-only like Prime, or per-usage) is daunting to many.
In particular, if I can't get a family subscription for services like VPN, video, etc., I simply don't bother, because I'm not going to maintain 2-3x subs and I'm not going to ask close family to pay for something they may not need.
So I choose not to pay and look for simpler alternatives in that case.
It's not about "getting paid for labour". It's the attitude and environment a whole generation of devs is brought up in.
In the olden days, if you encountered a problem and had an idea how to solve it, you sat down and hacked a solution, no matter whether at work or in the evening at home. You had fun doing it, it improved your life, and you open sourced it to share it with the community, with the goal of having other people help you improve it or even helping your peers. That's how most of the small tools in the GNU toolchain were created, as well as Linux itself. And many of them live on today even as the original maintainers left or stopped improving them, by way of forks.
Today, if somebody has such an idea, they sit down make a MVP, find a cofounder, apply for YC, move to the valley, wonder about seed funding and product/market fit of their SaaS. And hope for getting acquired for big money. And most don't, they just fail and die. It's about money first and making it big.
> You seem to think that they are so simple, that they should be free.
I don't think anyone is claiming that; the opposite of subscription SaaS is not freeware. At some point companies realized that they could have more predictable, long-term, higher revenue streams if instead of selling you a bit of client-side software a single time (with uncertain future business from upgrades), they sold you a subscription to their software, which is often (unnecessarily?) cloud-based. This all feels weird to those of us who have been around for a while and got used to buying software the "old way".
Building a cloud/web-based app means it's (mostly) effortlessly multi-platform. You don't have to build separate apps for Windows, Mac, and (occasionally) Linux. (But there's also the e.g. Electron option.) And meanwhile, you can push bugfixes and new features out to your customers near-instantly. Your release cycles are small, and are often measured in days or weeks, not quarters or years. You can justify charging on a subscription model because you are constantly working for your customers. On the flip side, if a customer wants to cancel their subscription, what happens to any data generated with your app? If it's in a proprietary format, IMO it's unethical to hold a user's data hostage like that.
Selling software by the download is a hard business to be in, and the incentives are not always aligned well. You're expected to fix bugs and release patch versions for "free". But usually you can charge for new major version upgrades. So the incentive is to skimp on bugfixes and instead work on new, big features. Beyond that, there's an incentive to make big, sweeping changes to your app so you can justify calling it a new major release, which triggers an upgrade fee, even if those changes don't actually benefit users. (Then again, this phenomenon, for some reason, exists with subscription apps too.)
But many people just see it as a money grab, especially for products that used to not require a subscription. In 1998 I could go and buy a copy of MS Office in a box from my local store, and it was then mine. I could use it as long as I could find an OS that would run it. Likely I could still run it today under wine or something if I still had a copy. But now we have Office 365. I have to sign up for a subscription. If at any point I want to cancel, I can't use the software anymore. The data I've created with it is still mine, but I have to deal with imperfect format conversions done by other office apps. You could perhaps draw a similar parallel with Adobe's creative software.
Regarding money, I think there's also an aversion to having to pay indefinitely. Sure, MS Office was expensive to buy, at several hundred dollars or whatever it was. But when I forked over that cash, I knew I was done paying for that version. Even if the SaaS version is priced at a few dollars a month, and I'm unlikely to ever subscribe long enough to pay the old "full price", there's still an irrational feeling of getting a raw deal. I think most people are more comfortable with known, one-time costs than with recurring costs, which may change if the company later decides to charge more.
Is there a standard way to get at "turn the screen white and the brightness up"? I used to have a torch app that had that mode (presumably, lower power than driving the flash LED), along with a few others (like a night-vision-preserving red screen).
Hardly the most challenging app in the world, but it was worth the $.99 I paid for it. I probably could have written it myself, and while I personally would likely have made it free, I felt ok giving somebody a tiny tip for it.
App store monopoly and monopsony status is a separate (albeit related) problem from app stores having no or shitty quality control. If a non-monop'y app store wants to control peoples' business... well, it can't; people will just use a different app store; that's the point.
MS Office is a great example of this because they squeeze money out of folks when the features added seem minimal. The only interesting aspect is cloud collaboration, but that could be P2P instead. I'm willing to wager most folks still use the same subset of Office functionality: page layouts and fonts have been around for a while; financial formulas rarely change; etc. Yet they are charged an arm and a leg for the "cloud."
And then there's Amazon with their lambdas—trying to convince people that they should forget how to program and rely on a plethora of beautiful, shiny one-liners.
MS Office is a good example, and like you mentioned Word is pretty much Word from 10 years ago.
But you know who else does this? Book publishers. Specifically, textbook publishers. Every year there is a new edition of a calculus or algebra book. So this is a business model that has been around for a while, and it takes a variety of shapes, such as planned obsolescence.
Software has it easy today, though. They can just cry "security updates" and instantly have a solid case for the subscription model. Even if it is nonsense.
> They can just cry "security updates" and instantly have a solid case for the subscription model
or they can cry "changes in browser and OS!" stuff that worked 5 years ago may not work the same today, or at all. Having a business model around it to help keep up with changes that are largely outside the control of that vendor helps ensure the value still stands. Or... new value can be unlocked - want your useful service to be able to handle that new video format, or compression, or audio format? I seem to remember something as 'trivial' as Apple moving MacBooks to "retina" displays caused a lot of problems and non-trivial amount of work for a lot of tools and services to be able to work 'correctly' with the new formats.
Lambda is absolutely the worst thing to happen to software engineering in recent memory, IMO. I've seen it used well, but only a tiny percentage of the time. The rest of the time it's tortured and abused and the project turns into a sadistic exercise in forcing the problem to fit the desired solution, instead of the other way around.
I think this is especially annoying on the app stores, as there is no way to filter for a specific price (range) or, in Apple's case, for apps that don't sell your data to everyone and your neighbor.
Pretty sure there is normal software for those uses, and you can get free/open source apps with no ads from F-Droid for all basic needs (such as a "torch").
There's a bible app called YouVersion that's by far the most popular app on both iOS and Android.
"YouVersion Bible is notorious for privacy violations and dangerous data collection. Yet, here it is: still seated firmly in the Play Store, racking up over 100 million installs with a whopping 22 permission requests."
Great comment about SaaS, ganafagol. It's running software that you make money from, not code. Code is just the leverage you have over changing the running software. The running service itself is the ultimate "concrete object" that, for generations, software has been moving away from. There has never been a "compute fabric" as robust as the modern cloud, so long-lived mutable data-structures are becoming more common, and will be even more so.
(We try to have our cake and eat it too in an iterated game where the unit of deployment is immutable and reproducible. That game is called devops and more specifically, continuous delivery.)
A few libraries in the Java world have this model. They haven't produced unicorns but seem to be pretty stable businesses - jOOQ(1), hibernate(2) etc. I'm researching DB libraries for work and so those are the ones I recalled immediately, but I think there are some commercial UI ones too.
I'm very happy that alternate universe doesn't exist. Libraries outnumber SaaS products 100 to 1 and I remember wrangling with software licenses on library implementations in the early 2000s. It sucked.
Distribution (the internet) and open source disrupted that business out of existence.
I think we need something like that for the cloud. Pay an interchangeable cloud provider a monthly pittance, and they host your choice of services as turnkey solutions. No more centralization of data, and no more paying $5/mo for a service wrapper around some FOSS CLI tool.
Why do you think it's limited to the FOSS world? Surely most SaaS companies (and certainly the most profitable) are business-to-business companies, and presumably they're quite a lot more valuable than the average library vendor. I would also hazard a guess that onprem services occupy an intermediate tier both in terms of profitability and in terms of integration model: they're a whole service (as opposed to a lib) but the customer is on the hook for integrating and operating (as opposed to SaaS).
> Only on the FOSS world, because it is the only way to force devs to pay.
What about the approach that the Qt library uses, where they have a free GPL version and a paid commercial license? People might pay to avoid GPL obligations while using the library.
The Qt Company recently changed their publishing model and provide only recent versions as (L)GPL. Thus open source users have to migrate to Qt 6 or run an outdated version of 5.7, missing bugfix releases. Migrating to Qt 6 isn't easy, however, as some components aren't available for Qt 6 yet, so open source users requiring those modules can't go anywhere.
Aside from that the Qt company restricted access to their builds behind a registration wall.
And if you are willing to pay, they created a pricing model which isn't easy to understand and can become quite expensive ($233/month/developer). As it's a subscription, you can't simply pay for a license and go from there; you have to subscribe, and the moment you terminate the agreement you are forbidden from distributing your application with Qt any further.
The result: unhappy open source users, and many users who at least claim they would like to buy at a sensible cost but can't afford it.
There is another alternative universe where commercial software, including libraries, gets sold.
I remember old issues of Dr. Dobb's Journal with full-page ads for libraries you could buy to add features to your shrink-wrap desktop application. It was a viable business model once.
They certainly exist, but I don't think they are remotely as common. I can think of dozens of paid services that I use at work but one paid library. Maybe 5 if you include things like database SDKs where we paid for the database.
A lot of stuff in the automotive world has paid libraries. They can't be services because they need to be real-time, but they're not free because implementing a complex IEEE standard as software is not trivial.
>SaaS turns your startup into a unicorn and yourself into a rich person. Or at least that's what you are hoping/aiming for. A library is not going to make you a billionaire.
The article's author seems to be making an indirect reference to Moxie Marlinspike (Signal) "ecosystem is moving" essay[0].
If so, Moxie Marlinspike's method for becoming a billionaire by creating non-profit 501(c)(3) organization and providing Signal's source code is a very strange way to cash out of a unicorn.
And btw... even though users/developers have the Signal source[1] which enables them to create an alternate chat universe that's not dependent on Signal's official service/servers, that isn't good enough. They still want to federate[2] with Moxie's servers. This aspect isn't addressed by op's (catern) article.
In other words, having a library (or even the full client+server source code) doesn't really solve the users end needs. It turns out that many place more importance on the service than the library.
EDIT to reply: The first IKEA business in 1943 was for-profit. The non-profit foundation (Stichting Ingka Foundation) was formed later, in 1982, so the owner could take advantage of tax efficiencies. I don't see how IKEA's opposite timeline has any relevance to Marlinspike's playbook to become a billionaire. Is there a real case study of a 501(c)(3) non-profit entity tricking everyone into a bait & switch and minting a new billionaire?
> If so, Moxie Marlinspike's method for becoming a billionaire by creating non-profit 501(c)(3) organization and providing Signal's source code is a very strange way to cash out of a unicorn.
On the contrary:
Step 1: The service is already centralized
Step 2: Make the central service closed source: happening now.
Step 3: A non-profit can be turned for-profit or used together with a for-profit (e.g. IKEA)
Signal can become a perfect example of bait-and-switch market capture.
It doesn't even take oodles of money from a SaaS platform to motivate turning a library into a service.
Even in in-house development, developers are often motivated to build services and have internal customers take dependencies on them so that they can expand influence and demonstrate ownership in a way that gets noticed by senior leadership and put them in line for promo. It also opens up the possibility for stakeholders to build their own little fiefdoms with access controls, intake processes, and a justifiable source of funding.
That seems like an overly cynical explanation. There are many reasons why you would want a service instead of a library:
1. Even if the service is just a CRUD API, you can isolate the storage layer from external users. If you just have a library, then every application needs to be able to connect to the DB. (A minimal sketch of this contrast follows after this list.)
2. You can protect mission critical resources through rate-limiting in a way that is way harder with a library.
3. Even if those are not problems, if someone has a DB connection then there is nothing really stopping them from just going around your library entirely. So random service X gets popped by an attacker. Now they can execute arbitrary queries against your DB. With a service they are still constrained to the operations exposed through the API.
4. You have a lot more freedom to change internal implementation details for a service. Need to change your DB schema (or migrate between postgres and mysql)? Then you can hide that behind the service interface. If you have a library out there, you have limited control over when people take version updates, and it is virtually impossible to synchronize the update across all consumers of said library.
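To make point 1 concrete, here's a minimal sketch in Python; all names are hypothetical, and the contrast is simply who needs to hold database credentials:

    # Library style: every caller needs a live DB connection (and credentials).
    def get_user(conn, user_id):
        row = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,))
        return row.fetchone()

    # Service style: callers only ever see an HTTP interface, e.g.
    #   GET http://users.internal/users/42  ->  {"name": "..."}
    # and only the service process holds the connection string.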
You've just described requirements that belong to a service. Congratulations, you made the right (obvious) decision.
I'm talking about writing entire services that are just wrappers around ffmpeg, pdf2html, parquet-tools, Olson tzdata, or a 10-parameter logistic regression. No stateful behavior, storage, or authoritative source of truth involved. The worst case I've seen was a service that just enumerates a bunch of values of constants (that actually never change).
There may have been some future-proofing in mind at the time, but more likely it was a solution in search of a problem.
Fair enough, but that is why the question of service vs library is not really a question that has ONE answer. It depends on the use case.
But to push back (slightly) on your chosen examples: dealing with binary codecs is actually something where it can make a lot of sense to wrap it in a service (even if you're just using the open source tools under the hood). It is a space that is notoriously prone to security vulnerabilities, up to and including RCE vulns (https://www.cvedetails.com/vulnerability-list/vendor_id-3611...). So doing it in its own sandbox can be a smart move. Maybe not a SaaS product per se, but still something you might want to isolate as a service separated from your application server.
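There's also a cheap middle ground short of a full service: just isolate the codec in a subprocess. A sketch in Python (the ffmpeg flags are real; the timeout and error propagation are the point):

    import subprocess

    def transcode(src_path: str, dst_path: str) -> None:
        # A crash or hang in the codec takes down this child process, not ours.
        subprocess.run(["ffmpeg", "-i", src_path, dst_path],
                       check=True, timeout=300)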
It is a reality. Another point is that the code alone isn't everything: run on an optimised platform, it delivers much more value (a multi-core algorithm, for example), and as a service that platform can be guaranteed. Also, a library and the application that uses it normally run in the same memory space, so the lib code can cause crashes or violate privacy. Another thing: your secret sauce is public now, so it can be copied or reverse engineered.
I detest that people have accepted this business model as being the norm. I think the bar has been lowered drastically in terms of what people are willing to pay for. If somebody launched Notepad as a Service with premium backgrounds instead of that boring, conventional white color, then you'd probably find enough people willing to pay $9.99/month for it. It's just absurd.
It is a possibility that could happen, but it is not very probable.
I am certain that millions of services are written and die in lonely obscurity.
According to a Hulu documentary I watched recently some woman makes over 150K a month from OnlyFans.
Lots of people want that and sign up and the vast majority will not make anything.
Or the more classic: moving to Hollywood to become a famous actor making $$$$, or to be a rock star. The chances your service will make billions are slim.
If you write a good library, you can sell it.
I would much rather buy a library than a service.
(I may be in the minority for sure).
If you give it away and it does prove highly popular, then wrapping it in a service and offering it that way will create some income.
With that strategy you can iterate over functionality and find out what the market wants the most and create a product that is more mature as a service.
It still applies for internal usage. You can have 18 microservices, but if you're strict about versioning and start treating them like libraries when possible, you're going to save yourself a lot of headache.
Recurring revenue is much more stable than license purchases and also less prone to piracy. Adobe's incredibly successful transition (from a share price perspective) is a testament to the value ascribed to recurring revenue models.
In fact, companies that sell services with a recurring revenue component are generally seen as higher margin and more stable across industries. For instance, Aerospace & Defense companies that have a strong "Aftermarket" presence (which really means maintenance, spares, repairs, etc.) generally command higher valuations.
The SaaS hype just takes this to the ultimate level. Digital services have very low costs (relative to their physical counterparts). Recurring digital revenue equates to mind-boggling numbers like Salesforce having >70% gross margins, which is probably the most prominent reason why Tech valuations have skyrocketed in the cloud era.
> For instance, Aerospace & Defense companies that have a strong "Aftermarket" presence (which really means maintenance, spares, repairs, etc.) generally command higher valuations.
Not sure whether that's a good example, because that might just be exploiting some weirdness in how government projects get funding approval, and might not be relevant to the wider (and software) world?
I suspect it's the same reason game devs are hot for streaming. When the code only runs on machines you control, piracy becomes impossible. By contrast, trusting people to respect licenses on code on their computers is how piracy happens.
In my opinion this is a feature, not a bug, but the business reason is clear for why all commercial software is moving towards the SaaS model.
How much game piracy happens these days? Since steam became such a good distribution platform and my internet connection got fast I haven’t even thought about pirating a game. AAA titles are expensive, but there aren’t a ton of them released every year, and downloading cracked installers is a huge security risk.
Been a while since I saw a license without recurring payment, particularly for code components. B2B, all libraries I've dealt with were paid per-developer-seat-year.
> SaaS turns your startup into a unicorn and yourself into a rich person
If the function could be served by a library? Probably not. Because someone else will write the library, and then your high-latency, internet-required, for-pay service will be competing against a free (presuming the library is available that way, which if the first one isn't is still likely eventually), low-latency, offline-capable library.
Wrong; you can have both. Several successful open source projects have libraries released under friendly open source licenses, with SaaS coming in to provide added benefits over the DIY approach. Both realities can happily coexist, and in fact open source growth drives SaaS business models.
Don't worry. If the functionality is useful, then eventually there will be a free open source library that will prevent the original author from becoming a billionaire.
There is also the fact that companies might see a smaller operating expenditure as preferable to a larger capital expenditure. It depends on the situation of the company, but it could be an easier sell.
> I don’t think it is possible to create a Python library that is not publicly available through pypi/pip, but can be installed in cloud functions
Of course it is possible. You can set up your own PEP 503 [0] compliant repository, even one unreachable outside your VPC, and use pip to install from it. One caveat: on GCF you can -- if I'm not mistaken -- only install pure-Python libraries; but if you use Cloud Run you can install anything inside your container.
There are servers for hosting your own PEP 503 repo, such as devpi [1], but ultimately you only need a web server to serve the file index.
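For the record, a sketch of what pointing pip at such a private index looks like; the URL here is a hypothetical internal host, not a real service:

    pip install --index-url https://pypi.internal.example/simple/ mylib
    # or persistently, in pip.conf:
    #   [global]
    #   index-url = https://pypi.internal.example/simple/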
Good advice. I will take a modular monolith over a frontend that consumes a series of services any day. Black boxes are fun until funny things start to happen inside them.
>People say, "services are easy because you can upgrade them centrally, so you can avoid slow-to-upgrade users making everyone's lives worse."
> But this assumes that slow-to-upgrade users can have negative effects on everyone else.
Is the above a reference to Moxie Marlinspike's comments[1]?
So would a concrete example of your abstract essay be that communication should be a client utility that uses an XMPP library[2] instead of Signal's services?
Totally agree that a library is easier to maintain and for developers to use. However, services are a lot easier to monetize and allow the company to collect any data they want. I can't think of any billion-dollar companies that release their core library to users.
Libraries can also be a PITA to maintain when you need to support a wide range of platforms, including legacy/crufty systems. I quite enjoy the freedom of owning the underlying platform with cloud deployments vs my on-prem software days.
If it is a common service used across the company, writing a service makes sense, as teams can use different programming languages to consume it. Libraries expect everyone to use the same language, or the library has to be recreated in other languages and managed there too.
That being said, it's incredibly hard to follow this advice in environments which encourage a polyglot stack and a 'best tool for the job' mindset. The moment you have more than one language in your stack, it becomes easier to build services than to maintain libraries in each language. You can of course write native C libraries with bindings to different languages, but that's a bit weird.
It also requires more discipline to maintain abstractions and boundaries properly with libraries in large code bases, but that's another story.
This is a good overview of the trade-offs between service and library. Highlighting a couple more things that I didn't notice the author mentioning:
- the author questions why one cares if users are slow to upgrade your library. That depends on what your library is doing. If you really don't need to concern yourself, you're fine. If, however, your library includes a security hole that makes it possible for people to turn software consuming your library into a botnet that attacks third parties, you aren't legally liable, but it doesn't feel good to go down in history as the people who facilitated that exploit. And it's worth remembering that even PNG encoding and decoding libraries have been in this category.
- in general, a library is going to constrain my choice of language to something compilable against a stable API. And the ecosystem of compilation tools and software vending being what it is, it is still, in many ways, quite a bit easier to get functionality in front of users by running it as a service behind an HTTPS API than by releasing it as a compilable binary and trusting all of your potential audience to go to the hassle of gluing your bespoke build tooling to their bespoke build tooling. If you constrain the problem to a narrower ecosystem you can avoid this hassle, but then you've constrained your user base to a narrower ecosystem.
What I think is missing in this take is the separation of concerns and encapsulation of dependencies that services bring over libraries.
For example: a PDF transformation library may require a Linux machine with custom PDF software, maybe even accelerated graphics hardware too. With a library I have to deal with all that, but a service can encapsulate it behind an HTTP interface.
Alternatively, with a pdf transformation library, you need to deal with those dependencies and hardware requirements yourself.
I kinda wish that the people behind the Language Server Protocol understood this. The whole idea of that makes no sense to me at all. None. I mean, I get how it works, and why people think it's a good thing, but libraries are the better way to go in every situation that I can imagine. There is no need for the Language Server Protocol or language servers in general, and its existence only makes things more complicated than they need to be.
1. There are N editors in the world (vim, emacs, vscode, ...) and there are M programming languages (c, c++, java, python, ...).
2. Most programming language developers write tooling to make their language ecosystem nice. (gofmt, cargo, ...)
3. Most programming language developers like writing in their programming language.
4. Not all N editors or M languages are written in the same language. In addition the programming languages that our editors/tools are written in cannot always interface with each other clearly (cffi is not supported in every language).
5. "All" programming languages that people use today have some networking stack that can be used to open TCP sockets and send data.
Given these facts it would be easy to conclude that:
If you want editor developers to focus on writing text editors and tooling developers to focus on writing tooling, but you also want your text editors to support advanced functionality that is already implemented in your tooling, the easiest way to send that information is over TCP.
The other options are:
1. Don't have advanced language support for "All" languages.
2. Force everyone in the world to use one programming language.
3. Force all tooling in the world to be written in one programming language.
4. Force everyone to implement another cross-language communication system (ex: cffi) in "All" languages.
Unfortunately these options are all more complex than just defining an API and sending messages back and forth.
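And the wire format this boils down to really is small. A sketch in Python of framing an LSP "initialize" request (the framing and JSON-RPC shape follow the LSP base protocol; the paths are made up):

    import json

    def lsp_frame(payload: dict) -> bytes:
        # LSP messages are JSON-RPC bodies behind a Content-Length header.
        body = json.dumps(payload).encode("utf-8")
        header = "Content-Length: %d\r\n\r\n" % len(body)
        return header.encode("ascii") + body

    msg = lsp_frame({"jsonrpc": "2.0", "id": 1, "method": "initialize",
                     "params": {"rootUri": "file:///tmp/project",
                                "capabilities": {}}})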
So why can't you just write your language support library in whatever language you like, wrap that in something that supports the C ABI if it doesn't already, then call that from your editor? If you're going to use a language server, then you have to write code to call the LSP. Why not just call a library?
Why does there need to be a server involved? Why do pipes or network traffic need to be involved? LSP defines that JSON-RPC is to be used as the communications layer. Why does JSON have to be involved?
A library naming convention could have just as easily been written which allows all the things an LSP allows. Just name the methods in your language support library the same way everyone else is and then anyone can call your library and gain support for your language. This is the same thing that's happening with language servers, except it's much cleaner and more straightforward than language servers.
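As an illustration of what that convention could look like from the editor's side, a hedged sketch in Python using ctypes; the library name and entry point here are hypothetical, not any existing standard:

    import ctypes

    # The editor loads the conventionally-named support library...
    lib = ctypes.CDLL("./libpython_support.so")
    # ...and calls conventionally-named entry points through the C ABI.
    lib.ls_complete.argtypes = [ctypes.c_char_p, ctypes.c_int]
    lib.ls_complete.restype = ctypes.c_char_p

    completions = lib.ls_complete(b"/tmp/project/main.py", 42)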
What people are doing is writing their language support library in whatever language they like, then wrapping that in a JSON-RPC wrapper with the proper LSP stuff on it. Now the editors have to implement an LSP client when they already had the ability to call libraries provided via the C ABI.
Sure editors only need one piece of code to call any LSP server, but they didn't need any more code to call a library.
If those editors don't implement the LSP client code themselves and instead rely on a library for it, they still end up calling a library, which is the very thing the LSP was supposed to let them avoid in the first place.
Nothing about LSPs enables anything that wasn't possible before. Nothing about LSPs makes anything that was possible before any simpler. It all just adds crap to the chain and everyone is calling it a "win." It's not a win. It's a loss. It's adding complexity and layers where they don't need to be, for no discernible benefit. Language support people still have to write language support code. Editor people still have to write editor support code. Except now they do it with new protocols they can add to their resume. This is resume-oriented development, that's all.
Language support is tricky. Libraries and services implementing it will have bugs, memory leaks, etc. ultimately leading to crashes. I'd prefer my text editor not to crash erasing my unsaved changes. So probably the language support should be isolated in a process separate from the editor.
Of course, an LSP client will also have bugs. When I tried LSP support in Kate editor last year, it crashed together with the language server. Emacs at some point (could be after a language server crash too) locked its UI. However, the client code has a limited scope; the editor developers can and will fix bugs there, in contrast with dozens of exotic language support libraries in dozens of different languages.
Probably we could think of better editor design (isolate the core and let the other things crash — like in Xi editor, or something Acme-like with most tooling being external), but we already have a lot of editors. It is also definitely possible to design a better IPC protocol than JSON-RPC, but compared to the idea of isolating plugins (for me, it is one of the primary advantages of VSCode over any other editor with plugins I've used: it almost never crashes, and when it does, it preserves the state), and considering the resource-intensity of the language support itself, I think, JSON-RPC overhead is not as large as it looks.
Finally, I'd like to agree with the sibling comment: unfortunately, C FFI interface to a library is probably not easier to implement nowadays than JSON-RPC interface to a service in most languages, excluding C/C++, assembly and so Forth :)
> So why can't you just write your language support library in whatever language you like, wrap that in something that supports the C ABI if it doesn't already, then call that from your editor?
You can! This is called "defining an API", and this is basically what the LSP is. The downside of using the C ABI, like I said, is that not all programming languages use the C ABI. To work around this you have suggested writing a wrapper that transforms to/from this API following the C ABI calling conventions in each language you want to support. To see how fun an endeavor this is, you can look at things like libgit2 [0], which spends a lot of time maintaining bindings for each language. While these bindings do absolutely work, they are:
1. Difficult to maintain (look at some issues [1, 2, 3])
If you instead separate things into services, the LSP team could maintain a whole test suite that your service could be run against: you would provide an LSP implementation plus a corpus with certain features, and the test framework could run a set of operations against the system.
If you use a DSL to describe the protocol, you can automatically generate server/client libraries to talk to or implement an LSP (and other services!). I'm a huge fan of gRPC for this reason: write a single DSL and now everyone can talk to/implement your service.
You can define shared debugging tools. For example, eBPF can be used to debug any networked application on Linux regardless of what it's implemented in. Similar tools can now be developed at the application-protocol level for all LSPs, making development easier without tying the infrastructure to a single language or ABI.
The crux of the issue is: service boundaries solve the exact same thing that C ABI/FFI solves, with the following benefits:
1. No dependency on any language-specific implementation of a protocol or API.
2. TCP is supported everywhere and you can get a bunch of free monitoring features from using it. It's also pretty darn fast now especially to localhost.
3. Easy to plug into a TCP server regardless of your runtime environment. Need to host your source code on a Linux system while your dev environment runs in Windows in Visual Studio? No problem!
> Language support people still have to write language support code. Editor people still have to write editor support code.
Correct! Except it's a question of which code gets duplicated. Could LSPs have been implemented as .so & dlls that followed C ABI calling conventions, passing HSOURCE_FILE* back and forth in process? Yes! Would it have been easy to implement that for all languages in a safe and secure way that can run in an adversarial environment and allow different people to manage and debug different implementations while sharing standardized tooling? No, not easily.
This still doesn't seem superior to library calls from a complexity point of view.
On one hand you have a library to secure. On the other hand you have the same library (or at least the same logic and methods) now with a JSON-RPC server wrapping it, and both need to be secured.
I would feel a lot better if it weren't JSON, I guess. Binary protocols are just SO MUCH FASTER and require so much less memory. Parsing JSON is fast, sure. Reading and writing binary is probably 3 orders of magnitude faster, and it's easier to read and write binary, in my experience. (I also don't understand why protobuf exists.)
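Whatever the exact speed difference, the compactness gap at least is easy to see. A toy illustration in Python (size only, not a benchmark; the message shape is made up):

    import json, struct

    # One (request_id, line, column) message, encoded both ways.
    binary = struct.pack("!III", 1, 42, 7)                       # 12 bytes, fixed layout
    text = json.dumps({"id": 1, "line": 42, "col": 7}).encode()  # ~31 bytes, self-describing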
I think context is really important here. Multiple comments in this thread jump to the concern that it is much harder to monetize a library -- but in an era where many developers are enthusiastic about service oriented architecture, it's also worth considering the decision between services and libraries for functionality which will only be exposed inside of an organization.
One other dimension not discussed in the article is how the scale of use varies over time. A service operator needs to ensure the service scales to support its use. But some use cases are extremely spiky (e.g. when something is invoked by a batch job processing many TB of data among many workers). Even if the service is able to dynamically scale based on load, that's not frictionless or without cost to the team operating the service, and can cause a degradation of service provided to other users. By contrast, if one provides a library, any given use case can be responsible for providing the capacity to support themselves.
A last dimension of libraries vs services within an organization is attributing value and cost. This is a double-edged sword. When providing a service, the team that operates it may have some clear costs to continue to run it; just providing the service can make you look like a cost center. If you do the extra work to make sure use cases are distinguished and trackable (e.g. use cases have separate credentials used in calling the service), then perhaps these costs (and "value" in the form of request volume) can be tied to callers. When providing a library, the team that provides it doesn't appear as a cost center, but it also may not have a straightforward way of knowing how intensively the library is being used, and therefore how much value it has provided to the org.
Yeah, the "everything service" mentality is getting a little out of hand. The most recent, egregious example I heard of is the "language server" in VS Code. What used to be a plugin/library functionality has now been turned into a distributed computing problem with all its warts and gotchas. I've no idea who makes these decisions on a project like that and how they justify them.
I’ve seen many unnecessary services-that-should-have-been-libraries. Usually the result of organizational structure and career ambition; it’s more high-profile to launch a new service than to write a new library. Coworker jokingly called one egregious case the “Promotion Service”, as that seemed to be the main function of the service for the team which developed it.
Curious what people's experiences are wrt library development within an enterprise. I've been at organizations where basically everything was done via services and there was very little library usage, and I always thought that was suboptimal. Now I'm at a place where we utilize lots of libraries and they seem to pose pretty significant costs of their own.
A typical example is if we need to do something that the library doesn't support, we're faced with an unpleasant choice of updating the library itself or doing some sort of workaround/hack. The former shouldn't be that difficult, but in cases where a library was written by a different team, updating can be painful if you don't have someone with experience with the library/domain on your team, or if you have to jump through approval hoops.
I suppose this is more an ownership/organizational problem and applies to services as well, but somehow dealing with library dependencies feels more onerous.
Exactly. Integrating external libraries into another system is much harder than leveraging e.g. a REST API. The OP seems to be saying that service-oriented architecture (SOA) is worse. It's definitely not. There's no "better" or "worse" here; it's just two different options with different costs of integration and interoperability. The service architecture is going to have additional maintenance overhead but provide cheaper and easier interoperability.
A comment on the website. I know on HN it's popular to have minimal sites, but this is just ridiculous. While I also dislike Medium, they get the spacing right. This site is awful to read.
Generally lines shouldn't be longer than 700px or so (+/- 100px). It's harder for the eye to stay on the same line when it's longer than that.
One line of CSS can fix this and vastly improve the experience.
The font size is tiny; I have to zoom in 50% to read easily. While I can do that, it's annoying to have to.
That's another one-line CSS fix. And then one final line of CSS to center everything.
I'm not asking for an entire site redesign, but literally 3 lines of CSS can vastly improve the end user experience.
CSS needed:
body {
  max-width: 700px; /* keep lines short enough for the eye to track */
  font-size: 20px;  /* readable without zooming */
  margin: auto;     /* center the content column */
}
> But if you didn't have the service in the first place - if there was only the library, containing all the functions, doing whatever the service was supposed to do in the first place - you wouldn't have this problem. Users who don't upgrade would suffer whatever problem exists in the initial version of the library, and everyone else would be fine.
While I tend to prefer a library over a service where possible, there’s an implication I didn’t see mentioned: if your use case involves communication between library instances (or collaboration on shared data) at runtime, you might have to choose between requiring users to only run matching versions of the library (complicates library use) or implementing very strong backwards compatibility (preferable, but can seriously complicate iteration, especially during early stages).
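A minimal sketch of the first option, assuming instances can exchange a version number before touching shared data (names here are hypothetical):

    PROTOCOL_VERSION = 3

    def check_peer(peer_version: int) -> None:
        # Refuse to interoperate rather than silently corrupt shared state.
        if peer_version != PROTOCOL_VERSION:
            raise RuntimeError("peer speaks v%d, this library speaks v%d"
                               % (peer_version, PROTOCOL_VERSION))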
Libraries are better for end-users, services are better for the businesses building them. When a service could just as easily be a library (which isn't always the case, to be fair), it's not a question of best-practice, it's a question of power-balance and incentives.
My interpretation here is that this makes a ton more sense within an organization vs externally facing. The other comments regarding saas are on point. But the way you build your saas should - imo - take this approach whenever possible.
Clearly, it makes sense on the "sellers" side to make it into a service; the article pretty much concedes that. However, this issue is also created by the "buyers" side. To give a concrete example: Prometheus is an open source alternative for observability, but if you have a small team and want to release early, you will still go to the Datadogs and Splunks of the world.
Of course, you can say that's fine. But there are others that could have been a library. Yes, but libraries also come with a maintenance overhead (security, patches, etc.).
I don't really understand this comparison. The point of having a service is not only exposing computation; more usually, it's exposing a centralized database. How is a library useful in this context? Unless the author is talking about writing, for example, a number-multiplication library vs a number-multiplication service... but then the advice is obvious.
EDIT: also, this stuff about accessing memory and denying access at runtime makes me think it was written for a different time. Nowadays, who creates new stuff that's not isolated by a VM or container layer?
Meanwhile, any time I receive a piece of functionality as a blackbox .so library from a vendor, the first thing I do with it is wrap it in a gRPC host so I can easily call it from the rest of my codebase.
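The pattern is simple enough to sketch. A hedged version in Python, using ctypes plus the stdlib HTTP server as a stand-in for gRPC; the .so name and its transform() function are hypothetical:

    import ctypes, json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    lib = ctypes.CDLL("./libvendor.so")      # the vendor blackbox
    lib.transform.argtypes = [ctypes.c_int]
    lib.transform.restype = ctypes.c_int

    class Handler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read an integer body, call into the blackbox, return JSON.
            n = int(self.rfile.read(int(self.headers["Content-Length"])))
            body = json.dumps({"result": lib.transform(n)}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("localhost", 8080), Handler).serve_forever()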
> A service has constant administration costs which are paid by the service provider. A properly designed library instead moves these costs to the users of the library.
What really happens is that users pay money to some service provider that has said administration costs and margins for providing said library as a service. Users don't want costs related to maintaining a library or infrastructure that it needs to provide its service to them. That's why things like AWS and Azure exist.
Unless of course SaaS providers are your "users" in this case ...
1990s advice on a 1990s website. It's the opposite. We need to move away from package hell by splitting our work in two opposite directions: on the one hand, we should move back to large, effective standard libraries; on the other hand, we should move toward more services, fewer library packages. A project shouldn't need more than a few packages, usually for something like a database connection protocol. Working with libraries is the worst part of the job.
This is one of the worst ideas I've ever read on this site. But I'm glad you wrote it, as this is the epitome of a sentiment I've many times seen lurking.
- Standard libraries always go bad over time as idioms change and they don't have a policy for breaking changes
- Services do not compose as easily as libraries. Composition is key to code reuse.
I understand most of the industry deals with terrible libraries, and many of you long for simpler times, but the proliferation of complexity will occur with or without the NPMs of the world, as the industry grows and functions in an economy that doesn't care about externalities, including endemic stupid extra complexity.
I am trying to maintain an absolute pile of shite because the previous lead dev needed to reinvent every wheel in an undocumented, untested way. Onboarding developers takes longer. Bugs are more difficult to fix.
You are in my opinion a bad developer for having that attitude. Being able to know when and when not to use a library is an important skill. As with everything, it's a trade off and you seem to not be aware of one side.
This isn't a new concept at all, but I can't remember where I've read this before.
It is quite sensible. I'm a believer in SDKs. An SDK abstracts the backend. Gives everybody a lot more control over their own domain.
Of course, the requirement, is that the SDK needs to be super-high quality. It also needs to expose a fairly generic API that may not actually map directly to the service it abstracts/replaces.
I think the cost of maintaining an upgrade path for libraries that doesn't piss off your users is non-trivial compared to a service.
I agree you shift work to users, but at what cost?
Either an angry or resentful user base, or jumping through countless complex edge cases to deal with usage problems you didn't foresee or intend to happen.
I'd happily take managing and maintaining a centralised service over a lib in most cases.
I remember people talked about libraries a decade or so ago, but now people use the term package instead. Can someone shed some light on the difference between the two if any and when they started diverging?
I don't get it, what if you want to use a different language to your clients? What if you want to have state shared across an entire company and spin up/down bits individually?
A service is a bundle of code and resources.
Where possible, unbundle code and resources.
Wherever you can provide the resources externally, you can now reuse the code.
Decent engineering advice, but not such good economic advice. And therein lies the rub. It's very hard to get people to do that which will make them less money.
I'm not so sure about that. The great thing about software before SaaS (as a business model) was that selling another copy was basically zero marginal cost. And of course there were the extremely lucrative "professional services" you could reap because installing an enterprise software package on-prem was a nightmare.
Maybe it's just that the open source ecosystem covers most of the low-hanging fruit for things that could be libraries. And things that are big and complicated and require their own databases and such are easier to run as services for the actual user.
This makes no sense. In order to build a service, you need a library (or something very much like it) supporting it.
If you need a library, and can use a library, yay, you're done, it's the simplest and fastest thing possible. If, for some reason, the logic you need can't be run locally, then you need a service. Find or build one and use it. You will pay in application complexity, dealing with the possibility that the service is down/unreachable. You will also pay in latency.
Why is this "not even wrong" article at the top of Hacker News?
Yes, we are both saying "use a library when possible". I am mystified by the need to write or read such an article. It seems akin to "don't use an airplane to go to the corner store for a dozen eggs".
You’re right but your service could be exposed with a public API and still have your library public.
I think the best of both worlds is to have an open-source library and a service. People can choose the one they prefer. And you can still make money from selling the service.
Got it: buy the cheapest possible $600 server, pay $200/month to colocate it (also do a cost-benefit analysis of different facilities), and be responsible for all hardware failures and availability issues.
One issue I have seen in distributed systems is where a library runs on another customer's or team's hosts and brings them down due to a bug in the library. The other customer/team has no way to fix the bug and is dependent on getting the attention of the company/team who vended the lib in the first place. One I remember is where a library had a bug which created 0-byte files and exhausted the inodes, causing an outage.
Cloud systems have become cheap enough and flexible enough that protecting your customers by not putting your bugs on their hosts is not as big a lift as it used to be, back when you had to buy bare-metal servers or explicit VMs.
Libraries don't make you money; services do. Libraries get your name into a license in the "open source licenses" screen tucked 20 menus deep in an app's hidden dev options.
I don't think there's a one size fits all solution here, and there are constraints that the author isn't seeing.
In favor of a service are cases where users want plugins, and the upstream doesn't want to maintain them. An example is anything that touches DNS, like cert-manager, external-dns, things like that. There are thousands of DNS hosts, and they each have their own API. So nobody wants to provide support for all of them out of the box -- the upstream maintainer's entire life will become approving PRs for obscure DNS services. The maintainer can't test them, because they don't have an account, so can't own the quality of the final product anymore. Therefore, nobody does this. Services are an answer to this problem -- the thing you actually want to use defines an API for DNS services, and the DNS provider gives you a program that can receive messages to update DNS, and everyone is happy. You just run it as a sidecar, and you can update it when your DNS provider changes their API, and update the other thing whenever they add a new feature that you want. The coupling is loose, so you don't get the same perfect reliability as a purpose-built "update foocorp dns whenever an ingress rule adds a hostname", but it's pretty good.
(I honestly don't know if this is actually how cert-manager and external-dns work. I use cert-manager and it builds in libraries for the DNS providers I use, with the caveat that they aren't accepting any more. I don't use external-dns, but I heard some rumblings about making a standard a few years back for this sort of thing. Didn't check in on the status of either. It's just a hypothetical example ;)
Libraries are unhelpful here because there are a billion programming languages, and the upstream provider will never choose your programming language of choice as their priority. Additionally, container builds are hard and I've never heard of anyone with a reliable way of taking some sort of upstream project and building it with local add-ons at every upstream commit, tagging stable versions in parallel with the upstream, etc. Definitely possible, but out of reach of the average container operator. Things like Go without dynamic linking (a feature I agree with, BTW), and container builds make services more critical for the plugin type use case. (My dream is to just write libraries in something that compiles to WebAssembly and use that to implement a plugin system in applications that need it. Some people are kind of sort of doing this; Envoy for example.)
On the other hand, sometimes you need the deep integration that only libraries can provide. It's popular to use sidecars to add network features like distributed tracing and mTLS; linkerd and Istio are examples. But these are exactly the use cases where you should use libraries instead of services. You need to see the TLS handshake so that your application can make authorization decisions based on the peer that you're connected to. To do distributed tracing, you need to copy the X-B3-Trace-Id header out of the incoming HTTP request into the outgoing HTTP request. Libraries can do that for you, but services can't.
In summary, the answer on library or service is "it depends". Hopefully this comment adds a little more nuance than the original article.
> One consequence of the emphasis that the Unix programming style put on modularity and well-defined APIs is a strong tendency to factor programs into bits of glue connecting collections of libraries, especially shared libraries (the equivalents of what are called dynamically-linked libraries or DLLs under Windows and other operating systems).
>If you are careful and clever about design, it is often possible to partition a program so that it consists of a user-interface-handling main section (policy) and a collection of service routines (mechanism) with effectively no glue at all. This approach is especially appropriate when the program has to do a lot of very specific manipulations of data structures like graphic images, network-protocol packets, or control blocks for a hardware interface. Some good general architectural advice from within the Unix tradition, particularly applicable to the resource-management challenges of this sort of library is collected in The Discipline and Method Architecture for Reusable Libraries [Vo].
>Under Unix, it is normal practice to make this layering explicit, with the service routines collected in a library that is separately documented. In such programs, the front end gets to specialize in user-interface considerations and high-level protocol. With a little more care in design, it may be possible to detach the original front end and replace it with others adapted for different purposes. Some other advantages should become evident from our case study.
>There is a flip side to this. In the Unix world, libraries which are delivered as libraries should come with exerciser programs.
>APIs should come with programs, and vice versa. An API that you must write C code to use, which cannot be invoked easily from the command line, is harder to learn and use. And contrariwise, it's a royal pain to have interfaces whose only open, documented form is a program, so you cannot invoke them easily from a C program — for example, route(1) in older Linuxes. -- Henry Spencer
>Besides easing the learning curve, library exercisers often make excellent test frameworks. Experienced Unix programmers therefore see them not just as a form of thoughtfulness to the library's users but as an indication that the code has probably been well tested.
>An important form of library layering is the plugin, a library with a set of known entry points that is dynamically loaded after startup time to perform a specialized task. For plugins to work, the calling program has to be organized largely as a documented service library that the plugin can call back into.