
The years when Rails monoliths were the de facto web stack were some of the best of my career. As I progressed and the popular tech stack shifted to things like microservices, document DBs, serverless functions, Node, importing tiny npm packages for everything, Docker containers, React, and GraphQL, the sheer cognitive overhead of getting a simple app up and running gradually took all the fun out of the job for me. The fast, satisfying feedback loop of writing a feature in Rails was replaced with weeks-long coordination efforts between independent microservices, constant discussions about tooling, and doubts over whether or not we had chosen the “right” flavor-of-the-week framework or library for our product. Every time I started a new project or joined a new company, we had to reinvent the wheel for our bespoke Node/serverless stack and have the same tiring conversations about naming conventions, logging, data consistency, validation, build scripts, etc., all of which Rails gives you by default. I ended up spending more time on tooling setup than actual business logic.

I eventually gave up and switched to a semi-technical product management role.




At Arist (YC S20) we've found that we can have our cake and eat it too by using Ruby on Jets, which is a nearly 100% drop-in replacement for Rails that runs on AWS Lambda. Our service sends messages to hundreds of thousands of people at scheduled moments in the day, and traffic is incredibly spiky, so combining the productivity of Rails and the Ruby ecosystem with the cost effectiveness and scalability of Lambda was a no-brainer. It also passed muster recently when we subjected our platform to a comprehensive penetration test.

We also get the benefits of a mono-repo and the benefits of microservices in the same application footprint, because every controller method is automatically deployed as its own independent lambda (this is core to how Jets works), but we're still in the usual Rails mono-repo format we all know and love. The very strong integration between ApplicationJob and background lambdas has also been killer for us.
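For readers unfamiliar with Jets: a controller reads just like a Rails one, and the framework carves each action into its own function at deploy time. A sketch (the class and model names here are made up for illustration; the Rails-style DSL is per the Jets docs):

```ruby
# Each public action below is deployed by Jets as an independent Lambda
# function that scales on its own, but the source lives in one ordinary
# Rails-shaped repo.
class MessagesController < ApplicationController
  def index          # becomes its own Lambda function
    render json: Message.order(:scheduled_at)
  end

  def create         # a separate, independently scaled Lambda function
    message = Message.create!(message_params)
    render json: message, status: 201
  end
end
```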

One thing I've always said is the real difficulties in software development happen at the seams where services connect with other services. This sort of strategy (and particularly, the mono-repo format) minimizes the number of seams within your application, while still giving you the scalability and other benefits of microservices and serverless.


Is AWS lambda really cost effective? It has been many years since I was part of a team that was assessing AWS Lambda as _workers_ but the resource limitations at the time alongside cost calculation made PHP+VMs the cost-effective choice by orders of magnitude.


It's loved because it means your technical costs are always guaranteed to be a percentage of your total computing needs, and your computing needs should (theoretically) always be proportional to your total company revenue.

From a business standpoint, that's a pretty great pitch.

Is it actually the most cost effective solution? No more so than any other tool. It depends on exactly what you're building, how, and how you measure the cost. AWS can be extremely costly, or cheap, depending on your engineering needs, constraints, and practices.


I have been at exactly one (web) company in my life where the cost of computing resources actually mattered, and compared to bandwidth, even that didn't matter.

The companies I have been at where computing costs did matter were doing extremely specialized, long-running calculations that are inappropriate for lambda.

I'm sure there are rare exceptions, but this all smells like premature optimization. Developer salaries are, 9.99 times out of ten, the cost you want to optimize.


Another area that this workflow really empowers is it frees you up to have infinite and arbitrary staging environments. In our case we have a GitHub action set up that creates a fully deployed version of the app for every PR automatically, and a bot that comments on the PR with a link to the deployment. You can of course do this with any kind of infrastructure, but only with a serverless sort of architecture is it virtually free to do this kind of thing. On the db side, we allocate extremely tiny instances so it is still quite affordable to just have one running for all of our branches. They are automatically destroyed when the PR is closed so the overhead is basically $5/month for each PR that is open for a whole month (in practice they are open for a few days at most).


This is an interesting use case, thanks for sharing, although these days setting up Kubernetes environment emulators is supported by most CI systems out of the box.


With a spiky workload, it's almost always worth it because we have to scale up 1000x and then 10 minutes later scale back down to 1x for the rest of the day, and this can happen at any time, randomly. You can also do this with load balancers, but in practice I've found those to be much slower to scale and much more costly for the same workload.

For reference, 90% of our bill is database related currently.


> 90% of our bill is database related currently.

Huh. With the context of the rest of the comment, I realize (the very obvious comparison) that a database engine designed to shard to many thousands of small workers could potentially be a very attractive future development path.

Iff the current trends in cloud computing (workers, lambda, etc.) continue and some other fundamental shift doesn't come along and overtake them.

Which is probably (part of) the reason why this doesn't exist, since I think I've basically just described the P=NP of storage engineering :)


> Huh. With the context of the rest of the comment, I realize (the very obvious comparison) that a database engine designed to shard to many thousands of small workers could potentially be a very attractive future development path.

Yes. I've been patiently waiting for the database community to realize this for the last 5 years now :)


I have no delusions I'd be able to viably make a dent in the area (at least anytime soon), but I do wonder how this would actually work.

The optimal solution would of course be to shard both the compute I/O and the storage footprint, so each worker only needed to hold onto maybe 1-100MB of data.

Perhaps some existing (simpler) designs could be modified to "hyper-shard" the compute angle, but would still likely require carrying around a large percentage of the database.

In any case, you'd need an internal signalling fabric capable of (cost-effectively) handling very bursty many-to-many I/O across thousands of endpoints to make consensus work in realtime.

It would honestly be really interesting to see how something like this would work in practice.


I have worked with many teams and found lambda to be far more cost-effective. Did your calculations include the time lost waiting to deploy solutions while infrastructure gets spun up, the cost of staff or developers spending time putting infrastructure together instead of building solutions, the time spent maintaining the infrastructure, and the cost of running servers at 2am when there is no traffic? Perhaps even the cost of running a fat relational database scaled for peak load that bills you by the hour, again even when there is no traffic.

Serverless as an architectural pattern is about more than just Lambda and tends to incorporate a multitude of managed services to remove the cost of management and configuration overhead. When you use IaC tools like the Serverless Framework that are built to help developers put together an application as opposed to provisioning resources, you can get things up fast, be billed only for usage, and scale amazingly.


Funnily enough, I have had almost the opposite experience. In my experience, IaC and serverless bring all the troubles of DevOps to "regular" developers. Your plain vanilla mid-level full-stack dev now needs to understand not only a bunch about FE & BE code, but also a much bigger bunch about servers, networking, VPCs, etc. than in a non-serverless setup.

How do you resolve this in your projects? (Serious question).

This is such a big problem for some of the projects that they are now only able to hire senior developers (which brings its own set of problems).


But VPCs and networking and distributed computing aren't serverless. Serverless is using AWS Lambdas or GCP functions and not dealing with VPCs beyond having an endpoint to hit.

There's not getting around the networking and such - that's the full part of full-stack (FS) - it's more than simply FE+BE. Maybe call them distributed systems engineers instead.

What it sounds like, though, is that your organization (regardless of what we call it) is large enough to organize into FE, BE, and FS roles, with FS running the platform, owning the fleet, and working on the system itself so that FE and BE can work without having to know about the fleet; FS folk build internal tools that the rest of the org uses to do their jobs, shielding them from the implementation details of your fleet.


Again, though, on my framework-ification point: in our case, we have a full VPC setup, and Jets actually allocates the VPC for us; we just configure it in our application.rb file. I'm sure the Serverless Framework has something similar. Either way, we have gotten away without a dedicated devops person or persons because of how much the framework does for you.


Yeah, framework-ification of this process has been the real differentiator in the last 5 years that has taken lambda from being obtuse and glitchy to work with to quite a joy if you just lean on your framework.

All of that said, I vastly prefer google cloud functions personally and would switch to that in a heartbeat if they had capabilities like API gateway but it's not there yet.

I also regret that there isn't a better cross-cloud solution currently, but that's something I have a lot of open source ambitions about creating soon. I don't like serverless that much so stay tuned for something from me in the coming years, probably in Rust.


Deploying a Lambda function with Terraform or Pulumi seems to not require knowledge about servers or networking to me


They bill by the millisecond for usage now, which has helped bring down the cost a ton (at least for web services). And if you don't need fancy stuff, they have a cheaper version of API Gateway now too.

Lambda also supports Docker and is pretty damn fast now, so it's less painful to use.
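To make the per-millisecond billing point concrete, here is a back-of-envelope sketch. The rates are AWS's published list prices as I recall them ($0.20 per million requests, $0.0000166667 per GB-second) and the 512 MB memory size is an assumption, so check both against the current pricing page:

```ruby
# Rough Lambda monthly cost: compute (GB-seconds) plus request fees.
GB_SECOND_RATE   = 0.0000166667  # assumed USD per GB-second
PER_MILLION_REQS = 0.20          # assumed USD per 1M requests

def monthly_cost(requests:, avg_ms:, memory_gb:)
  compute  = requests * (avg_ms / 1000.0) * memory_gb * GB_SECOND_RATE
  req_fees = (requests / 1_000_000.0) * PER_MILLION_REQS
  (compute + req_fees).round(2)
end

monthly_cost(requests: 3_000_000, avg_ms: 100, memory_gb: 0.5)
# => 3.1 (USD/month, before the free tier)
```

The contrast is with an always-on instance billed by the hour whether or not any traffic arrives; with Lambda, idle time simply costs nothing.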


Most seed stage startups don't even surpass the free tier in my experience, and depending on the product this can be true for a lot of Series A startups as well.


My biggest issues with lambda are these two limitations.

1. You have a maximum of 30 seconds to reply to the request

2. You cannot stream data from lambda through the API gateway, though you can stream to S3, etc.


Max limit is 15 minutes now.

I agree streaming needs work, but if I had to deal with that I'd probably just use naked endpoints.


Not when triggered by the API gateway; that is still 30 seconds. For things triggered from SQS, S3, etc., it's 15 minutes.


Needing anything more than a few seconds out of API gateway is just bad application design though unless we're talking about websockets or streaming as you said. If it's a webhook, you risk the client hanging up if you do all that work up front (you should instead be scheduling a background lambda that will do it based on the webhook params). If it's an actual user, I'm assuming you're not expecting to have a user wait 30+ seconds for a page to load, so probably some JSON API endpoint with a progress bar on the frontend. In those cases there's really no need to use API gateway as it isn't visible to the user anyway (you could just use your lambda naked in that case).
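The "ack fast, schedule the work" pattern for webhooks described above can be sketched framework-agnostically. In Jets the queue here would be a background lambda invoked via ApplicationJob; all names below are illustrative:

```ruby
require "json"

# Stand-in for a background lambda / job queue; in a real app this would
# be an ApplicationJob.perform_later-style call.
WORK_QUEUE = Queue.new

def handle_webhook(payload)
  WORK_QUEUE << JSON.parse(payload)  # enqueue the heavy work for later
  { status: 200, body: "accepted" }  # ack immediately so the client never hangs up
end

resp = handle_webhook('{"event":"message.sent"}')
resp[:status]  # => 200; the slow processing happens off the request path
```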

If you're talking about web sockets, yeah, I feel ya. I haven't had to navigate that situation yet but my plan of action is to bypass API gateway entirely and use a naked lambda running at its full execution time, and when we approach the 15 minute boundary, send a command that resets the connection. Don't know if it would work well but I haven't had a need to use web sockets for any projects I work on for a while.

For streaming, like you said, maybe an elastic beanstalk cluster is more the way to go depending on your workflow. If you can find a way to get it all to work in Lambda though, would probably be a game changer cost-wise I would expect as long as you figure out a way to deal with resetting every 15 mins.


I'd say that's still true in the majority of cases. Which is a shame, because I like this idea of having hardware limits for your software and being able to efficiently maximize hardware at the datacenter level for redundant, trivial applications (like serving web requests). Most web servers are half idle "just in case" because hardware is cheap and engineers' time is expensive.

I migrated a mid-sized business's (500-1000 person company) lambda setup to EC2 spot instances and a standard app, and the costs dropped.

It would have been even cheaper on something like Hetzner, but good luck getting buy-in from the infra team.

I venture most small businesses will have less load than them. Certainly it's not worth it for my tiny side businesses.

I was running some calculations for a project though, and if you "abuse" them with lengthy / expensive calculations run very infrequently (think like a cron job), you may end up being quite cost effective.


We trialed switching over to lambda for our worker queues just last month. We went from $500 a month in EC2 costs (auto-scaling spot fleet) to $500 a day.


I have always wondered if Jets was ready for production. Have you found the documentation & community to be ready for mainstream?


So what I would say is, we've gone ahead and done a lot of the upfront investment of making Jets much more production-ready than it was a year ago. Pretty much every feature path a typical startup Rails app would hit, we've used in production with Jets at this point (and in some cases had to get issues fixed, etc.). We also sponsor the Jets project at $1k/month, and have a great relationship with Tung, the creator of Jets. We're at a point now where we haven't had to get anything fixed for a few months, and everything is stable and working the way we want.

While we did have to spend some $$ to get things up to snuff (particularly so we could pass SOC 2), this pales in comparison to how much we would have had to spend to do devops and the usual server wrangling to get our use-case up and running in a typical elastic beanstalk sort of situation.

One area of difficulty was OAuth, so if you ever need help implementing that in Jets, feel free to reach out or read the public issue history of how we got it working.


Hey, really appreciate the reply.


That sounds great!

Is there a blog post about the tech stack? My main concern with serverless/lambda is cold start time. How do you deal with it? What does the p99 latency look like?

Also how do you scale the usual bottleneck which is the database?


Discovered Ruby on Jets in this thread and started watching the video on the project page.

The author addressed the cold start problem here (timestamped): https://youtu.be/a0VKbrgzKso?t=439

Hope it helps :)


I've been meaning to put together a blog entry on our whole stack and will post it on HN probably in the next few months!


nice! All the best to Arist and the team!


There is an auto-warming option -- we keep it warmed up every 5 seconds so it's always super peppy; requests are served within 100ms generally, sometimes much faster. Apdex hovers around 0.996, but some webhooks are included in there so it's probably faster in reality.
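For reference, the auto-warming is just configuration in Jets. A sketch of config/application.rb; the exact option names here are from memory of the Jets docs, so treat them as assumptions to verify:

```ruby
# config/application.rb: keep Lambda functions warm so requests never hit
# a cold start (option names assumed from the Jets docs).
Jets.application.configure do
  config.prewarm.enable = true        # periodically ping functions to keep them warm
  config.prewarm.rate = "30 minutes"  # how often the warming event fires
  config.prewarm.concurrency = 2      # warmed instances kept per function
end
```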


DB-wise we use the usual PostgreSQL cluster setup with read replicas in several regions. We could easily partition by course or by org if we had to, but honestly we could probably scale up to Series C+ before needing to do that.
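For anyone replicating this setup, Rails 6+ ships the replica routing piece natively. A sketch per the Rails guides; the :primary and :replica keys are placeholders for entries in config/database.yml:

```ruby
# Route writes to the primary and reads to a replica (Rails 6+ multi-DB API).
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true
  connects_to database: { writing: :primary, reading: :replica }
end

# Queries inside this block go to a read replica:
# ActiveRecord::Base.connected_to(role: :reading) { Message.count }
```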


Out of interest, have you considered moving to a "serverless" DB like Aurora Postgres or even DynamoDB, to avoid the cost of unused database capacity at idle times?


I'd love to, but when I've tried to set up Aurora, it seems impossible to do multi-regional with postgresql (not multi-zonal, but multi-regional). Would love to hear how to do this if anyone has got it working. Last time I tried was about 2 years ago.


Ah ok, interesting - I haven't tried Aurora, my company uses a mixture of RDS postgres and Dynamo. Cockroachdb and Yugabyte also seem like good options but a harder sell for us not being AWS native.

More generally though, all of these "newsql" offerings feel a little too good to be true for me, I can't see how you could really have all the relational integrity of postgres with the elastic scalability of a distributed DB without trading something off. Am I too cynical?


I can relate. I was just about ready to exit programming until 2 things came along that removed all the analysis paralysis for me.

1. Elixir + Phoenix

2. TailwindCSS

Elixir because you can build a monolith with it and everything is naturally separated by design. The runtime makes it impossible to hurt yourself. It addresses virtually every web use case in an ideal fashion that balances development speed with long term maintenance and scalability because of the language design trade offs.

TailwindCSS because it removed the decision about which of the million frontend stacks I should be using. It makes it so easy to do anything I can dream up without feeling like I need to hire somebody to help with a pet project.

And honestly, even though I like DevOps and database work just using Heroku/Fly/Gigalixir so I don’t have to worry about it makes my life easier.


Elixir is kinda nice. I like its macros, DB querying layer, mix, and low latency.

But I feel the lack of a type system would bite me in a bigger project.


The type system problem is somewhat mitigated by ubiquitous use of pattern matching. In python, the function

  def foo(mymap):
     blah
Has no real information about what mymap is. In contrast, a statically typed language like Java would have

  public static HashMap<String,Integer> foo(HashMap<String,Integer> mymap){
    return mymap;
  }
which encourages you to have a lot of classes defining different types of things, which is rigid, but at least makes it clear what exactly the function is expecting. Elixir is somewhere in between, where you'd write

  def foo(%{"username" => username, "password" => password}) do
     {username,password}
  end
which makes it abundantly clear what the function expects (a map with keys "username" and "password"), and handily unpacks those values for you, making it feel more immediately useful than "in the future it'll be nice to have clearer code", which helps with adoption.


In practice, the Python example is likely to have considerable information hidden in the “blah” from which existing tooling can often infer much of what would be declared in a language with mandatory explicit static typing; and it has existing tooling for optional explicit static typing, too.


So does Elixir with typespecs and Dialyzer, but I'm talking about quick, at a glance analysis of a function definition. Which is really the only benefit from a brittle type system like C or Java's, in contrast to a proper type system like Rust's or Haskell's, which have their place and cannot be adequately replaced merely with pattern matching.


Python 3.10 introduces pattern matching: I've not had the chance to really play with it yet so can't say how fully featured it is, but you can do something like:

    def foo(login_data):
        match login_data:
            case {"username": "foo", "password": "blah"}:
                return True
            case _:
                return False
In addition you can use type hinting:

    def foo(login_data: dict[str, str]) -> bool:
        match login_data:
            case {"username": "foo", "password": "blah"}:
                return True
            case _:
                return False
Unfortunately I don't think you can yet do the neat unpacking in the function head as per your Elixir example (although a mapping pattern like `case {"username": username}:` does bind captured values inside a match), but it's some of the way there.


I don't know that that's the big issue with dynamism, at least not for me. In Ruby the main thing that bugs me once in a few months is a silly typo: calling a misspelled ivar or method. It happens maybe once in half a year, but it's still humbling when it happens. It's rare because usually your IDE is very good at spotting these mistakes, even in Ruby, and decently tested code should break right away. But I do wish and hope the tools will become better, whether it's Rubocop or something completely new, so that the chance of this happening will be almost zero. It's not there yet.


I worried about this too, but I found that 90% of the places I want "types" are elegantly handled with pattern matching and what other langs would call "function overloading." I also love that you can treat structs like a map, which I feel is the best feature from javascript.


I feel there's more to the dynamic language world that's not clearly explained by the lack of types. I have used both Python and Elixir for production apps, and I get an order of magnitude more type errors with Python than Elixir, for some reason.

IMO it's a disservice to compare Elixir's typing to languages like Python, and we need a new category to distinguish the two.


The fact that there are a lot of standard behaviours (GenServer, Supervisor, Agent, etc.) and conventions followed makes this not such a big issue for me. Dialyzer is also a great tool which catches a lot of errors for me.


I'm in exactly the same boat, and hoping that others follow suit. It feels like "the good old days", but the community is still small and resources are scattered, so sometimes doing simple things mean a bit of trial and error at first. Still, it's actually been enjoyable again.


Elixir & Phoenix comes across as a "full-stack" framework with tons of potential, much more so than Ruby on Rails.


I agree that Elixir & Phoenix are the natural evolution in a post-Rails world, but looking further ahead, I think we've reached a point where compiled, typed languages are starting to have all the goodies that used to be reserved for only the highest of the high-level languages. With WebAssembly support on the rise, it's going to become possible to use languages like these for both backend and frontend in a unified way. So for me, the next natural evolution is something like Rust or Crystal but with very solid Railsy conventions and intelligent code sharing between frontend and backend. I think this, combined with something like Phoenix LiveView but in Rust, is at least where I want my future to be, and a workflow like this could become the dominant pattern before, say, Elixir/Phoenix overtakes Rails.

I tip my hat though to people diving into Elixir --- it is definitely a cool direction and one I would follow if I weren't already so sold on Rust/Crystal/etc.


Well the game has also changed for dynamic languages. Writing JS or Ruby with a powerful IDE today isn't what it used to be 5-10 years ago and it will keep getting better. The tools and linters will become smarter about catching silly typos. It will never be like writing in a compiled language but the gap will narrow. Also - I don't think we'll ever agree on any language. Sure, some languages will be dominant - Java maybe or Rust or whatever, but there will always be room for alternative approaches. Humans don't fully agree on much.


yes, it's true that with sophisticated gradual typing and expressive static langs, dynamic and static languages are far more similar than they used to be.

personally, I still enjoy the extreme dynamism of lisp and lisp-like langs (julia), but we may never see a resurgence of them in popularity


Where Elixir makes everything work is the runtime that allows millions of concurrent processes to coexist without taking over the processor.

I can’t think of another language, even Rust, where you could run a full database inside your codebase without negatively impacting everything else.

That BEAM runtime just makes things possible that aren’t in other languages.


I get that for sure. I've been avoiding learning React because I keep waiting for the space to settle down. I was very interested to see how the Rails folks are talking about their latest version: https://rubyonrails.org/2021/12/15/Rails-7-fulfilling-a-visi...

In particular there's a theme of JS being totally optional: "Rails 7 takes advantage of all of these advances to deliver a no-Node default approach to the front end – without sacrificing neither access to npm packages nor modern JavaScript in the process." "The same approach is taken with CSS bundlers that rely on Node. [...] If you’re willing to accept the complexity of a Node dependency, you can pre-configure your new Rails app to use any of them"

That seems like a smart goal. They're not trying to go it alone, but they're clearly trying to draw some boundaries that keep the JS chaos/complexity at the edges.


React feels pretty settled now, it's been out for 10+ years, hooks and functional components have become the standard.

If you want to learn it I think it's at a mature enough place.

Popular libraries like redux have been rebuilt to use hooks and simplify the integration.

I'd also check out Remix[0] if you want to get into a React framework. It's fairly new but extremely promising, easy to get up and running, and deployable almost anywhere (Express server, Cloudflare Workers, Deno, etc.).

[0] remix.run


> hooks and functional components have become the standard

Now if only the React ecosystem settled on standards too, that would be wonderful.

But alas. There are flux, redux, mobx, hooks, routes, sagas, thunk, observable, styled-components, emotion, tailwind, react-antd, axios, fetch, and so on and so forth. Edit: on top of, obviously, webpack, grunt, npm, yarn, etc.

Contrary to e.g. Rails, none of these come with React. You'll need to organize it all yourself (or go with something like react-boilerplate). You'll need at least some of these pieces to have something workable very early on. Things like redux or saga are not some "we've grown out of vanilla React and need additional tooling"; they are essential for things that practically every app needs: pages, communication with a backend, some styling, some consistency, and deploying it to a public server.


There's a lot here that feels like you're purposefully conflating to make it seem more confusing than it is and a lot of what you mention is also an issue in Ruby on Rails. It's also a false comparison to begin with. Ruby on Rails is more equivalent to Nextjs or Remix.

When I had to learn a little ruby on rails I found the convention over configuration far harder to wrap my head around. It's quite nice to be able to see for yourself exactly how pieces are wired up and configured instead of having it magically done for you and dictated to you that it has to be done this way.

There are standards in some of what you mentioned. Mainly hooks, fetch, npm, and css.

Ruby doesn't "solve" CSS either; you're just using regular CSS files, which you can do trivially in React as well (<Foo style={myStyles}/>, with myStyles defined as a plain old JavaScript object wherever you want, same file or not), or just include the stylesheet in the server response.

The rest of it (state management, webpack vs. grunt, etc.) is pretty simple to select from, and even there there are obvious choices: redux, webpack, and npm are certainly considered standard.


With Rails, there are really only a handful of conventions, and while they take a little time to learn (table names plural, model names singular, controllers plural, view files named after controller/action, foreign key names in the obvious way), that's about it. Once you have that, you can look at a URL and know exactly where the code is. That is really valuable and worth a little effort.

There isn't much magic in Rails, and just stepping through the methods in a debugger makes everything pretty clear if you do want to know exactly how everything is wired up.
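Those conventions are mechanical enough to sketch in a few lines. This is a toy version; real Rails uses ActiveSupport::Inflector, which also handles irregular plurals like person/people:

```ruby
# Naive CamelCase -> snake_case (no acronym handling, unlike ActiveSupport).
def underscore(name)
  name.gsub(/([a-z0-9])([A-Z])/, '\1_\2').downcase
end

# Derive the conventional names Rails infers from a model name.
def conventions(model)                         # e.g. "BlogPost"
  plural = underscore(model) + "s"             # naive pluralization
  {
    table:      plural,                        # blog_posts
    controller: "#{model}sController",         # BlogPostsController
    view_dir:   "app/views/#{plural}",         # app/views/blog_posts
    fk:         "#{underscore(model)}_id"      # blog_post_id
  }
end

conventions("BlogPost")[:table]  # => "blog_posts"
```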


> Ruby on Rails is more equivalent to Nextjs or Remix.

That was my entire point, yes.


I'll make it really easy for you - ignore everything in that list except fetch.

The rest is nasty, nasty bloat. Saga in particular is one of the most pointless libraries I've ever seen. A total solution looking for a problem.


I agree.

My point, however, was that this is not obvious for anyone starting with React.

This is caused by the fact that React is "no batteries included", which means you have to find the right batteries at a moment when you lack all knowledge about batteries in the first place. You search "how to send to backend" and sagas pop up, explaining why they are better than useEffect or redux or whatever (at a moment when you may not even know useEffect or its downsides).

Compare that to Rails, which has "all batteries included" (and which is a nightmare in itself too, though), where all those choices are made for you. You can choose to ignore Rails' testing suite and instead erect an rspec setup next to it, but you'll do so consciously. Because at the moment you asked yourself "how do I test in Rails", the One True Way was there: configured, ready for use, and documented.

Both have tradeoffs and pros and cons. But the React community (with its tutorials, this week's best practices, and breakneck iterations of tooling) is not helping here. At all.


Exactly. It's discussions like this that are exactly what I'm thinking of when I say that React still doesn't feel settled to me. There are quite a lot of people who say, "Oh, just do it this way". But I don't have much sense that they are all saying the same thing and will still be saying the same thing in a year.

There are some default choices in Rails that I disagree with. E.g., I think it's too database-focused, and I build things that often aren't. But I still know I can get started with Rails and expect it all to just work. I know that people with Rails experience will be able to jump in without much pain. And then if I depart from the standard path, I have a good sense of where I might get into trouble.


Working on a project with a significant number of sagas, I have to heartily agree with you.

There are some very clever things you can do with them, but you really need to know all the ins and outs of them first, and by the time you've learned them, you've implemented a giant nightmare.


We've been trying to generally discourage Redux users from using sagas in most cases. I've always felt that they were absolutely overkill for basic data fetching scenarios, and this is even more true now that data fetching libraries like RTK Query and React Query exist.

Where sagas _do_ still make sense is highly complex async workflows, including responding to dispatched actions, "background thread"-type behavior, and lots of debouncing/throttling/etc.

But yes, I've heard of plenty of cases where sagas made a codebase unreadable, and it's a shame that they get so heavily pushed by some early Redux users.

FWIW, I've actually been working on designing a new "action listener middleware" that we'd like to ship in an upcoming version of Redux Toolkit. It started off as very simple callbacks, but by adding a few key primitive functions like `take`, `condition`, and `delay`, I think we've been able to come up with something that can handle maybe 75% of what sagas can do with a much smaller API surface and bundle size. I'd love to have you or anyone else using Redux take a look and give us some feedback on the current API design and let us know if there are other use cases it ought to cover:

https://github.com/reduxjs/redux-toolkit/discussions/1648


The last time I did web development, I used Angular 1.2 or something like that. I had some forms and many different views that talked to some REST APIs.

It was never clear to me why/when I would need React. I read and worked through some tutorials years ago, and while the reactive/one way flow pattern was nice, I never had a problem with Angular.

I recognize there have been many versions of Angular since then. And there are several web development frameworks, complex build tooling, etc. etc.

Meanwhile, I’m still writing services in Java that handle 20,000 requests per second for a service that brings in revenue in the billions of dollars a year. And I’m writing C++ (very straightforward, no templates, few uses of pointers, etc) for cross platform mobile code. I have some Spark pipelines all written in Scala.

Meanwhile the web stack continues to evolve… new technologies all just for rendering web pages. I don’t understand why. Admittedly there’s far more complexity in our use cases today than the HTML I was writing in 1999, but much of it is unnecessary bloat from bored developers.


Old Angular wasn't such a bad framework actually...


Sometimes running into AngularJS was not too bad of an experience, since it's relatively simple to get running (no complicated toolchain, such as with TS -> JS).

But most of the time it's a pain, because it doesn't scale well to larger projects. I've actually seen projects where controllers grow to thousands of lines because the developers didn't feel comfortable introducing new ones into the pre-existing codebase: the scoping rules, the need to set up the data models and handle initialization of everything, plus custom integrations with validation libraries and other JS cruft (since it isn't quite "batteries included" like the new Angular is).

Now personally, i really liked how they did error messages (which gave you URLs that you could navigate to, with a bit of context in them; though it would be better with a locally hosted docs server), but a lot of things were awkward, such as their binding syntax: https://blog.krawaller.se/posts/dissecting-bindings-in-angul...

Luckily, AngularJS seems to finally be dead, so we'll get to migrate to something else soon, hopefully. And the projects that won't will be red flags enough for the people who don't like such challenges to turn the other way - truly a litmus test of sorts.


> it's been out for 10+ years

Oh god, I'm so old.


It hasn’t been out for 10+ years. Even if you count the prototype FaxJS predecessor it’s barely 10 years old, and that was only a Facebook internal project.


do you mind sharing your thoughts on vue?

in particular, someone said react is better for building complex systems. do you agree with this opinion?


vue and react are relatively close

if you want to try something different I'd go with svelte


thanks for the reply!

which is better for building complex systems?


Any of them will allow you to build a complex system - vue, svelte, react and others.

It's more important to pick one and just get started with it. You can do some reading about how they're different or their different philosophies but they all work.

It's more important to get started. If you want to build something to learn a skill to get a job, react is the most popular. If you want to build something to learn something new, any of them are a good choice. Just get started.


Yea, I'm really glad they dropped Webpacker. It was the worst "Webpack abstraction" I've had to deal with, and the only documentation was always out of date and confusing.


I think it's important to be a little careful with the terminology here: JS itself is not optional, really. If you want in on Hotwire and the rest, it's inevitable that it's there.

What's optional is whether you need to use node or npm itself as a developer-facing tool, and needing ESM support in the browser for the way they've done it means it's only relatively recently that that's been practical.


Fair point. Thanks.


At this point React is old news. You need to learn something like Solid.js or Svelte.


Solid.js feels like what React should have been in the first place (but even Preact would have been a nicer default for the community)


I couldn't agree more.

Recently I switched back to Rails after 10 years. I can't say I enjoy the whole asset pipeline business but I yearn for a simpler time and Rails gives that (to a degree).

My only wish is that it were simpler to host. I don't necessarily want to buy into render.com or Heroku.


Why do you think it isn't simple to host? If you yearn for the simple times, no one is stopping you from just spinning up a VPS with Ruby, Passenger, Nginx and your favorite database.

Of course the larger your project is, the more hassle, but that's not specific to Rails.


I have been doing this for upwards of 10 Ruby projects for over a decade.

It is not simple. Python or Node.js suffer the same issues, though.

Compared to dropping a Go binary or Rust build somewhere and firing it up, Rails/Ruby (Rack, Sinatra, Rails, etc.) are a mess.

In no particular order, the issues I had to deal with in the last three months:

* asset pipelines crashed randomly; turned out the ~2GB build VPS had too little memory to run the asset pipeline.

* puma, unicorn, and Phusion cannot be run and configured easily on one server.

* rbenv builds are hard to manage.

* getting the correct gems on your server is a mess. There are bundler, rbenv, vendored gems, rubygems.org, gem, Gemfile, Gemfile.lock, the `ruby` version in the Gemfile, and the `.ruby-version` file, all of which can (and will) conflict at some point.

Add to that that a typical Rails app requires at least Redis, PostgreSQL/MySQL, quite a few lib-* headers to build native gems, ImageMagick, etc., and you have a very complex setup.

This isn't more complex than your typical Django, Laravel or Express.js setup. But it is far more complex than deploying a Go, Rust, or Java bundle.


This hasn't been my experience at all. Deploying Ruby-based projects in recent years is streamlined and repeatable. The term soup you're throwing out there makes me wonder whether you're either overthinking this or not streamlining your own processes.


One cause of the problems is that the servers are handling multiple Ruby daemons, among them some Rails apps.

I know things get simpler when you keep "one app/thread - one server". But price-wise that doesn't make sense, especially since many common Rails apps hardly run on the cheapest VPS (a throttled dual-core 500MB server). If you run ten tiny services, ten servers add up quickly.

I also host some larger apps on their own server, some loadbalanced over multiple servers.

But throwing 10+ ruby services on a VPS is hard. Especially if you don't want to, or cannot constrain their ruby-versions, app-server, etc.

It is hard for the exact same reason that setting up a local env may be hard. I recently had to help a co-worker on an old macOS machine who broke her entire desktop because she installed RVM, since somehow that was in the tutorial for getting the app running locally.

Anything with a runtime is harder than anything without that runtime. This is too obvious, but does warrant some highlighting. Ruby's runtime is one of the hardest of all runtimes because it comes with so many moving parts. Python is a close second, and on Ubuntu (my main driver) probably worse than Ruby, because f-ing up Node is fine, but f-ing up Python can render your machine unusable, unrecoverably so without a good understanding of all things Python.


I started prebuilding assets in CI or locally and rsyncing them to the server. Much faster, and it avoids the issues with constrained resources on the VPS. It also saves me from installing asset-related gems and node modules on the server. Let me know if you're interested and I'll share my Capistrano setup.
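The general shape of that approach, sketched in plain Ruby with a hypothetical host and app directory (a real setup would live in a Capistrano task), is roughly:

```ruby
# Hypothetical sketch of the prebuild-and-sync approach: compile assets
# in CI (or locally), then rsync only the output to the server so the
# VPS never runs the asset pipeline itself. Host and paths are made up.
def asset_sync_commands(host, app_dir)
  [
    # Runs in CI, where memory is not constrained:
    "RAILS_ENV=production bundle exec rails assets:precompile",
    # Ships only the compiled output to the server:
    "rsync -az --delete public/assets/ #{host}:#{app_dir}/public/assets/"
  ]
end

asset_sync_commands("deploy@example.com", "/var/www/myapp").each { |cmd| puts cmd }
```

With this split, the memory-hungry precompile step happens off the box and the server only ever receives finished files.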


I would love to see how you’re doing this. Could help a lot with Docker builds as well.


I've been doing Rails for about 12 years and at our shop we have it down. `cap production deploy` and it's done. Essentially zero downtime for deploys.

Some things I'd note:

* Don't use puma or unicorn. Phusion Passenger and Apache is just fine performance-wise and is much more robust IMO.

* To avoid ruby version conflicts use .ruby-version as your sole source of versioning. I use this in all my Gemfiles: `ruby File.read('.ruby-version').strip`

* RVM isn't as popular as rbenv but it's amazing and you won't run into the issues you mentioned. It integrates with Capistrano and Passenger+Apache really well.

* 2GB might be a bit light for a production app even with light load. We start our new VMs at 8GB and then adjust up or down as makes sense.
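As a sketch of that single-source-of-truth tip: the Gemfile line simply reads `.ruby-version` at bundle time, so the two can never disagree. The version string below is only an example:

```ruby
# Demonstrates the `.ruby-version`-as-single-source pattern mentioned
# above. In a real Gemfile the whole trick is this one line:
#   ruby File.read('.ruby-version').strip
# The version number below is arbitrary, for illustration only.
require 'tmpdir'

Dir.mktmpdir do |dir|
  File.write(File.join(dir, '.ruby-version'), "3.2.2\n")
  # .strip removes the trailing newline that editors usually add:
  version = File.read(File.join(dir, '.ruby-version')).strip
  puts version
end
```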

We still do a single Ubuntu LTS VM for each app rather than anything fancy. Once you've added the stuff you need via apt it just works. Our sysops folks use Ansible playbooks that automate everything so there's no mystery when we need to do an OS upgrade or spin up a new VM.


I've been using Rails for 12 years, over at least a couple dozen apps. I use RVM to manage my environments, and use Capistrano to deploy. I've hosted on full servers, on shared servers, and on Heroku. With the exception of the classic bugaboo of getting the older execjs compiled in different environments, I have never had the sort of problems you say you're having trying to keep gems sorted. In fact, I would have said that was one of the stack's greatest strengths, to keep that sorted out for you. I guess my takeaway is that RVM > rbenv, but I REALLY don't intend to start a flamewar about it.

And please don't try to tell us that a real-world Java web application is simpler than, well, anything. ;-)


Are you hosting multiple ruby versions and multiple apps on one server? Because as I explain below, that is the main (but not the only) cause of many issues.

ExecJS, libreadline, libsqlite3, lib-postgres, (older) nokogiri, etc. Quite a lot of the headers keep being a problem. Not major, often a mere ddg+apt-get away, but still annoying. It does get problematic when you have multiple apps depending on different libreadlines, for example.

And I'm not often deploying java, but at least the runtime and tooling has become omnipresent. The latest deploys were a mere "move all files there, and restart the service". With Rails you at least need to rebuild the gems.

Often Java services are an `apt-get install jenkins` away. There is no equivalent `apt-get install gitlab` or `apt-get install mastodon` that sets up the stack entirely. But that is probably due to their popularity more than the technology.


The solution you're looking for is Docker. Gitlab distributes as a Docker image.


> And please don't try to tell us that a real-world Java web application is simpler than, well, anything. ;-)

Tongue in cheek, but honestly, that's probably debatable.

Under the hood, Java apps (at least in my experience) are Eldritch horrors with hundreds of beans/proxies/servlets/God knows what, needless layers upon layers of abstractions and dangerous amounts of reflection and dynamic behavior, all to launch and initialize a web application in a "simple" manner. I've always seen the exact same horror, be it working with Spring Boot, Spring, JavaEE, Struts, or even something like Dropwizard (though it seemed a bit more sane). Only the microframeworks seemed decent in comparison, something like Vert.X (has a different paradigm, though), or Quarkus/Helidon.

But when it comes to running them... well, historically they were still a nightmare, if packaged as a .war and relying on certain application servers/JDK versions being present. But packaging them as executable .jar files (think Spring Boot) makes them about as easy as Go, at least as long as you have the JDK version that you need. You just drop the file in a folder and if you have some configuration (which your Go app would also need in one way or another), you can probably launch it.

> I have never had the sort of problems you say you're having trying to keep gems sorted.

Ruby suffers from the same problem that Python, Node, PHP and other non-statically compiled technologies suffer from - messing around with dependencies. If i develop on Ruby locally against version X.Y.Z, but only X.Y.W is available on the server due to Debian packaging older versions, then i'll run into problems because of the project refusing to build/run. I've also run into situations where building the native stuff (DB drivers in this case) will fail, for example, when libpq-fe.h headers were missing and pg couldn't be installed, so the gem's native extension couldn't build. Also, on Windows, Ruby 2.7 downloaded the sqlite gem with 3 different trojans (Win64.Alien.eg, Win64.Alien.ef, Win32.Agent.xahigh) in the extensions (btreeinfo.dll, memstat.dll, memvfs.dll), as picked up by Kaspersky. No idea how that happened, or whether that could have been a false positive, but i didn't appreciate that much either.

That said, i have a folder that has about 112 images of all sorts of software breaking in various ways to date, and the number is only so low because i don't screenshot things on my work computer and not even every small instance of something breaking. In my experience, all of the technologies out there are bad in some ways, it's just about identifying and managing these tradeoffs towards whatever is suitable for your circumstances.


Well... Java developer here, 20+ years exp... yada yada yada... I haven't deployed a war file in over a decade. No, you do not NEED a specific kind of server; just like any programming language there are LOADs of options for deployment. Nowadays we have NGINX + a Docker container. The image just contains an executable JAR. Without Docker it is not much different, you just have a Spring Boot executable JAR. And even if you loathe Spring, you can build an executable JAR using a built-in webserver (Tomcat, Jetty, whatever). For Ruby, PHP and Python my guess is Docker is the solution as well.

And if you are building applications daily with it, you know the best ways of deployment. I think we should all stop bashing ANY programming language; they all have their key selling points. There is always someone willing to stand up against the bashing of the programming language they are working with. I do not program in Java exclusively; I use BASH, Groovy, Scala, TypeScript, and JavaScript as well. I get angry at Node sometimes too, but then I remember that I do not use it nearly as much as I do Java... and go to stackoverflow...

Just my 2 cents


> If i develop on Ruby locally against version X.Y.Z, but only X.Y.W is available on the server due to Debian packaging older versions

Either you use RVM locally to develop with Ruby X.Y.W or you install RVM on the server to use Ruby X.Y.Z there. Of course if the OS on the server is too old or too new there could be some versions of Ruby that won't work. It happened to me because of versions of openssl.


Have you not heard of docker?


Docker is amazing for deploying things on servers and i use it extensively - nowadays almost everything on my homelab is containerized since i no longer need to have that many separate VMs anymore...

That said, Docker is also oftentimes a poor option for local development, unless you're talking about external dependencies the app needs (e.g. MariaDB/PostgreSQL, RabbitMQ, Redis etc.), or maybe very specific setups that just won't be achievable otherwise (specific userland needed).

For example, when trying to run Ruby/PHP/Node/Python/Java apps in Docker containers, i've run into the following:

  - problems with file permissions
  - problems with file line endings
  - problems with Docker Desktop bind mounts (e.g. directory won't be mounted, path parsed incorrectly etc.)
  - problems with Docker Desktop bind mount performance
  - problems with networking (e.g. host.docker.internal sometimes refusing to route to non-containerized stuff on the host)
  - problems with run profiles (e.g. your IDE + Rails integration just won't work, at best you can just launch a different container/Compose stack with some other parameters)
  - problems with test runs and coverage (if you use the IDE integration, though that depends on the stack and how deeply your IDE is integrated, e.g. Java with IntelliJ IDEA)
  - problems with remote debugging, breakpoints, flame graphs etc. (most of those either don't work when the code is inside of a container, or need additional setup, e.g. remote debugging, where supported)

In my eyes, whenever possible, develop locally with the native runtimes for the IDE integration etc., and use Docker for deployment or maybe QA/test environments as well - anything that will go on a server somewhere.


I find RVM much easier to manage than docker, for local development. To deploy on servers, it depends. Deploying with Capistrano to one server is easier than building a docker container and push / pull it. For cloud / serverless environments... the number of companies that really need that is surprisingly low. Most businesses are small and are well served by a single VPS.


What is complex about a real-world Java web app? Asking seriously.


You should try Phoenix and Elixir. I've only been playing with them for less than a week, and as a long-time Rails/Ruby developer, this is what it feels like again.


I bounced from Rails to Phoenix and couldn't be happier. I've done lots of functional programming before my Ruby days but never saw it coordinated in such a way that made sense to me. And since I only do web dev it was a perfect fit.

That said, I'm _really_ excited to get into all the goodness that Rails 7 has to offer. I really thought they lost their way with the whole webpacker debacle. I recently installed a fresh Rails 7 app and the word webpack doesn't even exist anymore! It's like a breath of fresh air to be done with that. Those were some dark days. (I'm exaggerating only a little)


Currently using Jumpstart Rails ( https://jumpstartrails.com/ ) with esbuild for a personal project and it's so fast..


I'm also a Rails dev, currently doing Advent of Code in Elixir, I haven't tried Phoenix yet, but writing Elixir feels a lot like writing Ruby for the first time. It's the same kind of sexy, of-course-you-can-do-that feeling.


For me Elixir didn't feel too much like Ruby, but Phoenix sure felt a lot like Rails. YMMV since Phoenix has gradually become more and more of its own thing than it felt like in 2016 when I learned it.

There's really nothing I miss about Ruby for web dev and Elixir feels like a complete win to me these days. Ruby is still far nicer for unix-y scripting, though.


That's really funny, my perception of Elixir is generally the sentiment "of course you can't do that". (that's a good thing).


Funny, you're also right. Of course you can't do that weird hacky thing you could totally do in Ruby but maybe shouldn't.

I meant something more like "of course there's a function for that with the name you'd expect".


Elixir and Ruby have nothing in common as languages and ecosystems, I really don't get why the 2 keep being brought up together. Oh I do get it - whenever someone mentions Ruby some Elixir user will say - hey have u tried Elixir?


Ruby and Elixir do have surface syntax in common, and Rails and Phoenix are clearly from the same conceptual direction. They're more similar than, say, Rails and Spring Boot.

This is not surprising: the Elixir folks were fairly high-visibility in the Ruby community before they did Elixir and Phoenix.


I'll give you surface syntax in common, other than that not much. I think a Rails dev would have a much easier time picking up Django or Laravel than Phoenix. How hard is the switch from Rails to Django? Probably trivial. But Ruby to Elixir is a mind shift.


Ruby to Elixir is a mind shift. Rails to Phoenix...I don't think so.

But most Rails programmers write Rails, not Ruby, and it's important to differentiate them. Ruby is a big playground. Rails is...not, by design. (This is why I do not like Rails, personally--if I am choosing to write code in Ruby, it's because I expect to need to get weird with it.)

Rails->Phoenix omits a lot of the stuff that makes Elixir cool, both in itself and in OTP, but that's not a bad thing and I do think the transfer isn't too onerous for somebody coming purely from Rails.


> But most Rails programmers write Rails, not Ruby

You're really exaggerating.


I'm really not. I've built tools and systems around Rails for a long time. Most Rails programmers I've worked with live in the controller and the view. Absolutely there are those who do not--and they're often building libraries and tooling for those who do. IME, at the hireable junior/mid levels, Rails driving is pretty tightly coupled to the framework--this is often a positive, to be clear, and not a dunk on anybody. This is the baseline way to get a web application out the door.

Roughly equivalent interfaces exist in Phoenix, and the mapping isn't quite 1:1 but it's close enough that the adjustment is an adjustment, I think, and not a re-learning.


I partly agree with what you're saying but still - you're exaggerating. Doing Rails without a decent knowledge in Ruby is exactly what you said - it's being a junior. You'll rely on the existing codebase without any ability to provide your own ideas, only shifting code from here to there without fully understanding what you're doing and will always need a senior whenever things go awry. That's what being a junior is like in any language.

I don't think we can compare languages based solely on the experience of juniors. I'm not saying it's irrelevant, it is relevant, but it's only a part of what writing software is. At the end of the day you need the seniors and the people who really get it to maintain the code and steer the ship in the right direction.

But I do agree that Rails is a kind of dialect of Ruby and that you can get pretty far with Rails without being great in Ruby (you still need to be decent though, no way of getting around that).

> Most Rails programmers I've worked with live in the controller and the view

Where is the business logic at? They also write services or something similar or stick it in the Model. Whatever isn't ActiveRecord will be pure Ruby - Rails has no opinions there.


> You'll rely on the existing codebase without any ability to provide your own ideas, only shifting code from here to there without fully understanding what you're doing and will always need a senior whenever things go awry.

Absolutely, and I would submit that this is most developers' experience up until a senior-ish role. Not title, you can have juniors doing this, but the pyramid of tool-users versus tool-makers.

> In the end of the day you need the seniors and people who really get it to maintain the code and steer the ship in the right direction.

Again agreed--but jeez, there are just so many fewer tool-makers at most places I've found myself. (This worked out great for me early in my career because there were tons of vacuums to start doing that work even in my first year out of college--building systems that other people could rely on to go faster, because nobody else was doing it!) Thinking about it, this impression perhaps sticks more because of the businesses I consulted for, which is perhaps where my perspective comes from--hiring uncritically 'til you're in a pit where you're paying the medium bucks to get somebody to come dig you out.

> Where is the business logic at? They also write services or something similar or stick it in the Model.

Found the guy who hasn't seen too many 10KLOC controllers in his time out there. ;)

You may be right that I am grim about this, but at the same time, reading your description makes me go "I haven't seen too many places that actually do-it-right". The truth's probably somewhere in the middle, and outliers exist on both sides. But yeah, I've absolutely been consulting for more than one company whose primary product was thin models, gargantuan controllers, and no service abstraction to speak of. They make a lot of money, too. (Fortunately for me, they brought me in for devops stuff, not "please save our megamonolith".)


> But yeah, I've absolutely been consulting for more than one company whose primary product was thin models, gargantuan controllers, and no service abstraction to speak of.

I don't get why this is still happening. It's kinda common wisdom now not to let your controllers get fat. I'm not saying that fat models are great, but as a community we kinda agree that most business logic is easier to test on the model/service and should be placed there. CTOs/team leaders have to be really sloppy to let teams build code as you describe. At the same time - being a consultant, don't you more often than not see codebases in a bad state? I mean, those are the ones that usually need consulting. So maybe you have an overly negative view of how Rails projects usually look.


> Rails->Phoenix omits a lot of the stuff that makes Elixir cool, both in itself and in OTP, but that's not a bad thing and I do think the transfer isn't too onerous for somebody coming purely from Rails.

I'm not following you here. Could you please explain your position a bit more?

The reason I'm confused by your commentary is that Phoenix makes extensive use of OTP and since it is just Elixir + macros, there is nothing at all limiting you from employing every Elixir (and Erlang, for that matter) feature you want to in your Phoenix apps.


Sorry - omits is the wrong word, because of course Phoenix uses that stuff. More that in my experience, for the kind of "in the framework" developer that I've run into in the Rails space, the mapping to using Phoenix is very simple and doesn't demand of the end user the conscious use of processes, genservers, etc. -- it can be, and from my POV often seems to be, used as plug-and-chug. This totally isn't a criticism either, I want to stress; it's the mark of a good framework, that you don't necessarily have to care about that stuff to ship a thing.

At my current company, we use Phoenix and we use a lot of its bells and whistles. But we also have a lot of people who understand OTP--definitely way more than I do. I also see people ship Phoenix apps without getting deep into that when their needs are relatively small.


> Rails->Phoenix omits a lot of the stuff that makes Elixir cool, both in itself and in OTP

How do you think Phoenix could better take advantage of Elixir and OTP? IMO, it already is doing plenty to take advantage of what's cool about OTP, through channels and especially LiveView. I suppose Phoenix apps could use Mnesia instead of relational databases. But would the benefits actually outweigh the costs?


I have extensive experience with Elixir/Phoenix, Ruby on Rails, Laravel, and Django and I have to say I disagree. Rails and Phoenix are both opinionated in similar ways, both have similar generators, both have excellent/similar tooling, and many of the same ideas exist in both (e.g., Channels and Action Cable). I view Elixir/Phoenix as a natural evolution of Ruby/Rails.


The creator of Elixir was on the Rails core team and involved in the Ruby community, even for a little while after Elixir was released (until about the time Phoenix came around). The Ruby community is where Elixir gained a big chunk of its initial attention from.


That doesn't mean much. He came from Rails and then switched direction to a functional, Erlang based new language. The historical "initial traction" also doesn't mean much. The two languages aren't similar at all.


In terms of ease of hosting?

I'm considering migrating to Phoenix and my biggest concern is ease of deployment


It's ridiculously easy.

Even 6 years ago when I started, you could just run it in production with `mix phx.server`, much like running `rails server` and use the exact same deployment patterns. Or you could build a release and copy it up to the server however you saw fit. At that time it was painful for some coming from different habits that involved pulling in config at run-time rather than compile time, but those patterns have also been supported since at least 2019.

Since then, the team has been continually making it easier.

Now there's a single command line to do a deploy to Fly and also one to build a docker container for you if that's what you want: https://twitter.com/chris_mccord/status/1468998944009166849


I've only tried Heroku so far; it took me about 30 min to deploy my prototype, but I read there are limitations. Go take a look at fly.io, and I think there's also another one that specializes in Elixir hosting. Once I understand Elixir/Erlang better, I will probably want to host there.


I got around to trying https://render.com/ recently; it's also pretty easy and cheaper than Heroku. I think they support Phoenix/Elixir, but I've never tried it. Also, side note: Heroku seems to have stalled and atrophied as a product since the Salesforce acquisition, though it's still one of the easiest hosts.


Yes, they support Elixir (including clustered apps) and it's very easy. I wrote a tutorial and did a 6 minute screencast on that a while back: https://alchemist.camp/articles/elixir-releases-deployment-r...

I generally use a VPS, but Render is a top-notch host if you want something managed. In many ways, I think it's like Heroku was before Salesforce bought them.


(Render founder) Lots of Phoenix/Elixir apps are hosted natively (no Docker) on Render, and of course we support Dockerized apps as well.

https://render.com/docs/deploy-phoenix


On the topic of Render, there was a really nice podcast episode on Software Engineering Daily with the founder talking about their product.

https://softwareengineeringdaily.com/2021/12/07/render-with-...


You can deploy pretty much anything on Render, AFAIK it’s all container based.

Here’s the docs for Phoenix deployments: https://render.com/docs/deploy-phoenix


what a horrible name for a site, I can't imagine googling for how to add x extension to postgres with render.


what type of results do you imagine you'd get for that kind of query rather than what you'd want to get? especially given the context of postgres extension, i'd assume if there was content out there that google would find it with that query.

A friend of mine who worked at an SEO agency once had a client whose company name was a misspelling of some noun that google would automatically fix, which i found hilarious.


Query + render.com seems to do the trick.


https://www.gigalixir.com/ is likely what you're referencing. Since they're focused on Elixir first, it supports a bunch of the nice things the BEAM supports, such as native clustering, hot upgrades, and opening a REPL to a running cluster.


I've deployed Rails at a few different companies. I wouldn't qualify it as hard by any means; there's just a lot of configuration to read through and verify every time. Realistically, the defaults are completely sane and could be deployed as-is without much change.


At a minimum, you can use Heroku. That said, you won't be able to have your nodes communicate with each other, which restricts you from some of the more advanced features. For example, at my startup, we use Horde to dynamically spin up individual websocket connections per customer for an integration partner. This was easy to do using Horde and simply connecting the nodes together. Additionally, we can use the built-in key store, which is global to the cluster.

I would take a look at render.com; they support connected nodes and autodiscovery.


I hear fly.io is good.


It's excellent, and has a pretty generous free tier, i think.


It's great


Agreed. Immutable functional Elixir is the very opposite programming model of mutable state Ruby OOP.


It really is a nice experience. The BEAM family of languages was the first for me where doing multiple things at once came naturally. It doesn't require pulling in a library, or adding Redis, or having to worry about shared state. You just add a process to a supervisor.


Yeah, and then how do you deploy a new version? In practice you still need Redis for most stuff. Also, if you have to deploy containers anyway, the value of BEAM is diminished and you're probably better off going with Go.


Could you be more specific about the issue you're having with deployments?

I've never used Redis in any of my Elixir/Phoenix apps. In cases where I might need something like Redis to store state, I'd reach for a GenServer or maybe Mnesia instead. I've never had an issue with deployments using this strategy. Also, K8s and BEAM work together just fine using libcluster. I'm not a fan of K8s or Docker and don't use either one, but you easily can.

The benefit of BEAM is that it can surgically handle a vast array of failures without restarting the node. K8s, on the other hand, will take the pod down and restart it. In many cases, solving a problem with BEAM is like using tweezers to remove a splinter whereas K8s is more like using a sledgehammer.


You don't need Redis if you learn the BEAM and OTP. Elixir runs on top of a distributed VM with builtin clustering. Use it.

And I see nothing wrong with having the BEAM run in Docker or K8s. There's some overlap in concepts, but they work at totally different layers. Using the BEAM is like having resilient microservices _inside_ your app with none of the downsides.


As someone in his third year at a startup built on Elixir: you're making the right choice. Our system is faster than an equivalent Node setup, while building features is almost as productive as Rails, and in some ways better. Plus, if your goal is realtime or anything heavily websocket-based, it's going to easily outperform Node or Rails.


I simply cannot understand why people care about "fast" anymore, with cheap containerization, CI/CD infra, and no more dedicated servers. Who cares if it's fast? Just add another box. I have a suspicion that anyone who still cares about backend speed is stuck in the vertical-scaling mindset, where they need to keep adding cores and processes to a single machine.


Not everything scales horizontally. Your database for instance is going to be scaled vertically to its max long before you think about using federated write servers. Sure you can use planetscale but that comes with its own considerations and tradeoffs.

Also, higher performance translates to lower machine costs. Our server costs would easily be 4x if we ran Rails. Given that Elixir is 80-90% as productive for writing code in the short term, it's a fair deal. Over the long term, we've found Elixir to be more maintainable: there's very little magic, and the immutable data structures make it super easy to debug.

Thanks to the performance of our system, we've also managed to get by without setting up caching. (We'll probably need it eventually, but our performance gains have come mostly through fine-tuning our SQL queries. Ecto gives us way better control over the output SQL than Active Record or Sequelize.)


A lot of engineers take great pride in improving performance even if it's mostly meaningless to the company. The users don't really care if the number of instances went down from 30 to 20. It is what it is, engineers will be engineers.


>I simply cannot understand why people care about "fast" anymore with cheap containerization, CI/CD infra and no more dedicated servers. Who cares if its fast?

People who pay for those extra boxes? Not everybody has your budget (especially when they're building things themselves, on the side, so they aren't even paid).


Getting to that scale on the side is a nice problem to have. Maybe it shouldn't be on the side anymore.


A cheap micro container is $5 a month. A cheap dedicated server can be had for $20 a month. Budget is not the issue here.


An expensive dedicated server can also be had for $1300 a month.

Scaling vertically can take you a long long way before you actually have to start separating services.


Ok? That's a non sequitur. No one is forcing you to buy a dedicated server for $1300 a month which probably has the same performance as $250 worth of separate boxes. And separating services isn't what we're talking about here. It's adding horizontal processes across machines instead of on one machine.


Doesn't "fast" directly translate into shorter page loading times? Static pages (JAMStack) load so fast it's delightful every time.


It doesn't. They mean "fast" as in one box can handle more load which in their opinion is cheaper because you don't need to containerize and horizontally scale. I find that unlikely.


What's Phoenix/Elixir like for deploying these days? When I last seriously played with it a few years ago the development was a lot of fun (and Elixir has influenced how I write and architecture on other platforms) but I remember deployment to production being a pain.


"mix release" makes it pretty much a breeze. I find myself spending more time setting up a Postgres server than mucking about launching my backend.


It's pretty easy if you use something like fly or render, anything dockerized really.


It's not harder to Dockerise than any other language.


> My only wish it was simpler to host

I think it can be simple if you keep it simple :). In my book Deployment from Scratch I show people how to run Rails on a cheap VPS with a couple hundred lines of Bash.

Just Ruby + chruby + Puma + systemd (and optional systemd socket activation).

I include a scripted demo (so you don't have to do it yourself) which demonstrates all of this:

- Setting up a git-push deployment for Ruby applications

- Using chruby to automate installing requested Ruby versions

- Configuring NGINX as a proxy using UNIX sockets

- Setting up systemd socket activation for graceful restarts

- Configuring NGINX to serve static assets

- Configuring NGINX to proxy WebSockets connections for Action Cable

- Automatic SSL/TLS certificates with Let's Encrypt

- Redirecting HTTP traffic and subdomains to main domain over HTTPS

- Running PostgreSQL and Redis on the same server

- Building a custom SELinux module

- Configuring the firewall

- Setting up automatic weekly system updates

- Setting up log rotation for NGINX and PostgreSQL logs and a size limit for the system log

- Doing application backups and restores

- Creating admin tasks

Although it sounds like a lot, the demo is reasonably small and clean so you can go through all the files in 2 hours.

I think people often complicate what they don't need to complicate...
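For anyone curious about the socket-activation piece: systemd keeps the listening socket open while the app service restarts behind it, which is what makes the restarts graceful. A minimal sketch of the two unit files involved follows; the unit names, paths, and user are my own illustrations, not taken from the book:

```ini
# /etc/systemd/system/app.socket
# systemd owns the UNIX socket and starts the service on the first connection.
[Unit]
Description=Example Rails app socket

[Socket]
ListenStream=/run/app.sock

[Install]
WantedBy=sockets.target

# /etc/systemd/system/app.service
# Puma supports inheriting the already-open socket from systemd,
# so in-flight connections queue on the socket during a restart.
[Unit]
Description=Example Rails app
Requires=app.socket
After=network.target

[Service]
User=deploy
WorkingDirectory=/srv/app
ExecStart=/srv/app/bin/bundle exec puma -C config/puma.rb
Restart=always

[Install]
WantedBy=multi-user.target
```

NGINX then proxies to /run/app.sock over the UNIX socket.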


> I think it can be simple if you keep it simple :).

> a couple hundreds lines of Bash.

One or the other. It can't be both.


I'm sorry, but how is this simple?


That sounds fucking complicated


You're welcome to adapt the AWS deployment scripts I set up for Haven[1]. I tend to adapt them when deploying other personal projects, like the sites I've built for my family tree or for privately hosting/sharing old family home movies.

[1]: https://github.com/havenweb/haven/tree/master/deploymentscri...


(Render founder; genuinely curious) what about Render makes you not want to use it?


Python/Django developer here, but I've found Digital Ocean+Dokku to be a pretty decent cheap alternative with more flexibility than Heroku, especially for an early-stage side project where a single server is enough for your needs.


I used to use Dokku a lot for hosting Flask APIs; I'm now using CapRover, and the added GUI is great.


Take a look at Cloud66. Been using it for every project over the last 10 years or so... It's like Heroku except you can control a lot more and not get locked into "enterprise" pricing when you need to scale or simply don't want to expose your database to the open internet (imagine that).


Check out https://www.hatchbox.io/, it lets you easily host stuff on various hosting platforms (using your own account) with similar ease to Heroku. Which also means it's WAY cheaper. Heroku's pricing has always seemed insane to me.


I think it used to be harder to host than it is now. The only thing that might be hard now is sifting through all the tools for hosting and the analysis paralysis that can come with that. Heroku can get pricey at scale, but Heroku and others are options that make it dead simple. Then there are a ton of options for self-hosting: plenty of guides for getting Docker set up, open-source Heroku-like alternatives (most using Docker under the hood), and plenty of resources (maybe too many).


Did you try uwsgi? I have been hosting everything with this thing for over a decade and I still love it: https://uwsgi-docs.readthedocs.io/en/latest/Ruby.html
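In case anyone wants to try it with Ruby: uWSGI has a Rack plugin that loads the app straight from its config.ru. A hypothetical minimal ini config (paths and process count are illustrative):

```ini
[uwsgi]
; load the Ruby/Rack plugin and point it at the app's Rack entry point
plugin = rack
rack = /srv/app/config.ru
; serve HTTP directly; use `socket` instead when sitting behind NGINX
http-socket = :8080
master = true
processes = 4
```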


It’s been a long time, but I used to use convox for this back in the day, may still be worth checking out: https://docs.convox.com/example-apps/rails/


I've had good success on my side-business with Piku: https://piku.github.io It's essentially a lightweight PaaS that you can run on your own hardware (even ARM)


I have been using a piece of software called Dokku for about six years. It's like Heroku, but you self-host. It even uses the same buildpacks. You just run the Dokku installer and off you go.


If you don't mind hosting on AWS, Elastic Beanstalk is a pretty streamlined way to host Rails apps, and it doesn't cost anything more than the AWS resources you consume.


See, emteycz? Popularity is not the same thing as satisfaction. ;)


I'm using hatchbox.io on Linode!

And it's great!

Hatchbox takes the maintenance and setup out of DevOps for me!

Using Hatchbox makes it easier to treat servers like cattle and less like pets!


Agree on the hosting part. I would like an official docker compose for production that had everything tied together nicely.


Just because React, Webpack, GraphQL backends, and millions of other packages and pipeline tools are on npmjs.com does not mean you have to use them. You can still create super-slim, to-the-point, and timeless backends using Express.js (which was inspired by Ruby's Sinatra, after all) and vanilla frontends. It doesn't get you brownie points on your CV though, which I think is the actual problem.


That, and the OP mentions they have a business background. If you don't come from a "write code starting with no batteries, form your own opinions" background, and all the tutorials are "you need to put some spaces before a string? We'll include leftpad!"-style dependency hells, I can see Rails feeling like a breath of fresh air, since it's batteries-included and heavily opinionated.
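To the leftpad point: string padding has been in the JavaScript standard library since ES2017, so that particular micro-dependency is purely historical at this point.

```javascript
// No leftpad dependency needed: String.prototype.padStart is built in.
const padded = '42'.padStart(5, '0');
console.log(padded); // "00042"
```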


True, as long as the people mindfully not using them drive the team. But the fact that they are there, and the fact that visible folks in the community usually either make the tools or endorse them (no wonder, since they have to do personal marketing), means that it only takes so many new team members until one of them regrets "not having" their favourite mini-tool on that particular team, on that particular project. It takes extreme will (and some unpleasant conversations) to push against that trend, and in the end folks might leave because they were denied the chance to use what they wanted to try. So while you can "not use" all the new fancy tooling, you will carry the weight of convincing your peers that they don't need it either.


By contrast, Node was a breath of fresh air for me. With Rails, you had to start with this already complex system and bend it to do what you want. With a greenfield node server, it only does what you tell it to and nothing more. No surprises.


Building a complex SaaS app or backoffice app on a "greenfield Node server" is a supreme exercise in frustration.


There's a tool for every job, but for most of my use-cases, I'd rather have a webserver which is just a webserver, and get my front-end elsewhere. But I'm sure there are places where Rails is a great fit in the right hands.

I'm not even using node very much these days. Serverless is what I would reach for for most professional applications these days.


Sinatra covers that use case in Ruby.


I think this is also a function of team size.

Suppose you have been given 100,000 developers to work on a website, but it will have millions of customers and things cannot fail.

Now it is actually useful to be able to parcel out the database layer to a group of 1000, and they can split up each service into groups of 100 etc.

There are enough people to decide what conventions, logging, validation people ought to use and perhaps measure them. If all of them hack on the same Rails codebase they might be running into each other all the time.

Of course, for a solo developer, it makes sense to use Rails/Django/etc.

Now there is a little spot in between, from 2 to 99,999. It is possible that there is a team or two at this level, and I think that gear shift or pathway has not yet been discovered. Typically you start on Rails, suspect you should switch, and then switch way too late.

A pathway might look like this:

Django -> Django API + VueJs -> Redis + update database?

It probably depends on the challenges.

If someone has an idea, or something they have tried, it would be interesting to take a look so please share.

Edited for clarity.


This is an interesting example of the 80-20 rule (or some variant of it)

The vast majority of businesses/projects are small, and will only have one or a small number of developers working on it. So Ruby on Rails (or PHP or Wordpress) is a suitable choice for maybe 80% of projects.

At the same time, a very small number of businesses require 100,000+ (or even just 1000+) developers. These businesses, because they have so many developers, employ the vast majority of professional developers. Thus, maybe only 20% of professional developers work on products that are typical of business needs.

This disparity explains why communities on Hacker News are typically so negative towards Ruby on Rails. It also explains part of what makes Ruby on Rails so remarkable. Even after all these years, it has a thriving community and has resisted the pull to become more "enterprisy." It's still a tool that is targeted at and well suited for a large swath of business cases, the vast majority of which will likely never need to migrate to something "better".


"from HELLO WORLD to IPO" as dhh said today https://twitter.com/dhh/status/1471267036793765888?s=20


Micro services are usually overkill unless you’re working at a significant scale. For most projects such as corporate apps, it’s just unnecessary.

It’s good to start simple and only add complexity when it’s really needed. It’s unfortunate the current trend is to start with massive complexity.


Excellent advice! I recently worked at a company where the "architect" insisted on using Broadway and Kafka for what should have been basic CRUD operations.

He got fired.


That's refreshing to see. In a lot of startups he would've been promoted to some high level position, hired dozens of engineers to maintain the new monster and then invited to an AWS conference to brag about how they solve their (self-inflicted) problems, all while the business is bleeding money in cloud bills and engineer salaries.


> I ended up spending more time on tooling setup than actual business logic.

This is exactly why, in 2004, I moved from Java (Struts/Spring/Websphere/XML/...) to Rails. History repeats but we never learn. Kudos to Rails for remaining relevant.


Right, but a big project in Rails has all the problems you can think of:

- slow to compile and run

- easy target for spaghetti code

- test suites get sluggish

- hard to upgrade

- won't scale well

- with no static typing, large scale collaboration can easily break things

- metaprogramming hell


A big project in any stack has many problems. Vanilla Rails is going to scale way better than some wacky bespoke configuration of stitched-together NPM packages, which in my experience is what a lot of folks build now because they don't know any better.


I've successfully run Java/Go projects with huge amounts of code without much degradation in any direction, whereas Rails seems to quickly hit a stopping point compared to other platforms. I've written my side project, which is now huge, in Elixir, and I'm yet to see any degradation so far. I'm sure Rails would have been a pain by now. Sorry, I don't have much Node experience, so I can't comment on that.


Simply hasn't been my experience at all, and I've worked with numerous Rails projects large and small since 2008. If you prefer Java/Go/Elixir, knock yourself out, but I'm thoroughly satisfied with Ruby-based solutions.


Got it. It's not impossible, but I've been at multiple jobs involving monolithic Rails apps, and they all really sucked from a maintenance and development perspective.

P.S. I would have been happier if you had made a better choice of words than "knock yourself out".


Yes, I'm at one of those companies now: spaghetti code galore, and the project is becoming increasingly hard to maintain.

As the company scales they really need to break things into more services to distribute work and accommodate work happening in other languages, but now that's nearly impossible because of the mess they've created.

Rails is great if you're building a basic "dumb" web app, but Rails folks seem to think it's more than that and need to get their heads out of the clouds.


I've worked on a lot of legacy Rails apps. Most of the problems of "spaghetti code" occur because people write bucketloads of un-Ruby-like code that betrays a serious lack of understanding around how Rails really works. Whenever I hear "aw man, Rails sucks. I switched to writing (X) in Rust/Go/Haskell/TypeScript/whatever…"

Let me stop you right there. If you tell me Rails wasn't meeting your needs so you rewrote (X) in a different manner using Ruby + something else, I'd totally understand. Otherwise, all it tells me is that there was a tragic lack of deep Ruby knowledge, respect even, to begin with as the project unfolded. Often I muse on how in some ways it was unfortunate that Rails became such a darling in the startup community early on. It meant that a ton of people jumped into Rails web dev thinking they were writing "Rails". No, you're writing Ruby, and Rails just provides a nice set of base classes and some decent defaults and assumptions. If you end up with a giant wad of spaghetti mess, that's on you. It's totally feasible not to end up there, and plenty of projects end up in far better shape.


> won't scale well

Really?

https://twitter.com/ShopifyEng/status/1465806691543531525

GitHub is written in Rails. I mean, it seems to scale just fine.


It takes a lot more effort and housekeeping to make Rails scale well. When GitHub was built, Rails didn't have many decent alternatives. As of today, there are a good number.


> It takes a lot more effort and housekeeping in Rails to make it scale well.

It takes increasing the number of pods on your k8s cluster, which is how almost all teams deploy nowadays. Bigger Amazon bills, yes, but the devops overhead isn't really larger in Rails.


Somehow Rails seemed to me like a short fad.

I knew a few people at university who were pretty hyped about it. That was in 2008, I think. They used it at some of the companies where they worked part-time and found it too limiting.

I was building customized stuff with bare PHP in those days and found it quite flexible. Later, the Rails hype came to PHP, but I had already left for Node.js and never looked back.


GitHub and GitLab use Rails. Many UK government websites do.

Like anything else, Rails is good for some things & terrible at others. Rails tries to minimize developer time, at the cost of computing time (Ruby is not a fast language), and focuses on CRUD-type applications. Whether or not that's a good match depends on what you're trying to do.


Somehow Rails-like frameworks didn't make it big on Node.js, which was released way later than Ruby.

I saw this as a sign that the time of these fully fledged frameworks was over and devs were craving modularity.

But, yes, I saw a bunch of recent projects done with Rails. Forem is built with it, for example.


Out of those, though, I have to blame document DBs (and the start of NoSQL craze) on Rails :(

To cut a long story short, Active Record (especially at the time) wasn't great at generating SQL that pushed as much work as possible into the database, and Ruby 1.9 arrived with stable ordered hashes. Then someone implemented a basic ActiveRecord-style wrapper on top of a key/value store (I think it started with Berkeley DB or similar), and from there NoSQL was off to the races as the "new hip thing"[1].

[1] Meanwhile, IBM IMS had been selling pretty well for the previous 40 years at that point ;)


ActiveRecord was, even then, a much better ORM than exists in the Node world today. It was full featured and well hardened, and required basically no setup other than a class declaration. Plus, you could always drop down to the SQL if you needed to. Generally, it was a joy to use, and the performance was usually good enough.

At some point, I gave up on even trying to use any Node ORMs, because they were clunky and offered no value. It was easier to just drop straight into a knex session and write the queries natively.


You should check out what the Prisma team has been doing lately.


Oh man, so much this. TypeORM is such a disaster when you know how to write SQL. So many times while fighting with TypeORM I thought, "I'd be done by now if I just wrote the SQL directly."


Objection (knex-based) is well designed and maintained. It clearly takes inspiration from ActiveRecord. It's light-years ahead of Sequelize in usability and footgun avoidance.


I don't fully understand your argument. Document DBs never fully got traction in the Rails community. Then Node.js became popular because it did concurrent IO better, and it put document DBs front and center for no apparent reason.


It didn't get a lot of traction because it required some effort, but the groups that jumped on document DBs were also highly correlated with those that soon jumped to Node.js, helped along by JSON serialization being rather obvious in a JS context.

Though I would also consider their exodus to be a good thing for Rails ;)


They were only a de facto web stack at certain types of shops. I have been doing Java, .NET, and C++ all along; Rails came and went, and I kept doing boring technology.

I guess by now Rails could also be considered boring technology.

It is tiring to keep track of the latest fads, especially those that don't happen to stick around, so I only move when the dust settles.

Yeah, one might lose business opportunities by not being a first mover; on the other hand, there are business opportunities in porting the surviving projects back into classical development stacks.

For me, still much better than dealing with management.


Sounds like the managers that came before you failed you! Not to sound accusing but if this happened with multiple jobs then maybe you could have asked more questions at the interview stage. You can usually see dysfunction like that a mile away.

There's no need to chase the latest Node JS shenanigans. It feels like it's all we talk about on HN but a huge, huge number of developers are out there happily coding away in Java, C# and all the rest. Even within Node world the majority are using the exact same React stack (the homogeneity of which is the reason for using it in the first place).

IMO Ruby has faded because it never found its niche. Java and C# thrive in the corporate world. Python already exists as a dynamically typed server language. As much as people complain, writing server-side web stuff in JS does make sense given it's what you're using in the client. So where does Ruby go?


The language is less important than the supporting tooling. IMHO we are just now, a decade later, seeing frameworks crop up in Node that offer anything like the out-of-the-box usefulness of the Rails and Django apps I was first exposed to. My honest opinion from several startups is that most sub-ten-year vets just don't know that stuff already existed, and/or just how much you can ask of your tech stack. I generally agree with your point though.


100% this. I've always said that the ecosystem, the tooling, the editor support, the libraries, the frameworks, and the availability of people to hire are far more important than the language. For example, it irks me when people say they like React because of JSX, or Vue because of templates... those are the smallest of reasons to choose one or the other!

Same thing with Rails/Laravel/etc. People don't choose them because of PHP or Ruby; the language doesn't matter much at all, but the tooling matters a great deal.


Paul Graham, founder of YC would disagree. He claimed Common Lisp was the secret weapon behind Viaweb's success.


Ruby's niche has been startups: Stripe, Coinbase, Instacart, Teespring, Podia, Twitch, GitLab, GitHub, etc. Despite never having been the most popular language, or even close, it's been absolutely dominant as an initial back-end language for top-tier startups.

https://twitter.com/logicmason/status/1371255029412233218


>IMO Ruby has faded because it never found its niche. Java and C# thrive in the corporate world. Python already exists as a dynamically typed server language. As much as people complain, writing server-side web stuff in JS does make sense given it's what you're using in the client. So where does Ruby go?

On the web server side, helping you build web apps faster. That has been its niche (with Rails), and it's still doing well there.


> Python already exists as a dynamically-typed server language.

Ruby isn't new; it's only slightly younger than Python, and it brought its own things to the table. Rails and Django are roughly the same age (and Django started off more as a CMS). That Python exists isn't an argument against Ruby. Lots of other languages exist, each with their own cost/benefit tradeoffs.

> [...] writing server-side web stuff in JS does make sense given it's what you're using in the client.

I don't understand the argument. Matching client/server language is a different goal than matching task/language, and the task isn't the only driving factor.

There's some crossover in that the language is the same, which means you could standardize on JS developers, but the ecosystems are not the same, and the required overall skillset is different. In wee shops the tradeoff may be worth it, but at scale, the benefit is oversold.


> JS does make sense given it's what you're using in the client.

Well, JS is forced upon us in the browser, and not everyone is tremendously happy about it. It has improved, but still. Plenty of folks out there will seek other languages; we will never all fully agree on what's good on the backend.

BTW, are we positive JS will have a monopoly on browsers 5-10 years from now? (I'm talking about WebAssembly.) Because if the monopoly is gone, I think JS will die a violent death.


> BTW - are we positive JS will have a monopoly on browsers 5-10 years from now?

Yes.

I will assume your question arose from the exuberance of youth.

"Good enough" + inertia = Yes

It's not enough to be better; it has to be so much better, existing infrastructure and training must become moot. And preferably an obvious best to avoid balkanization.

See: Fortune 500 and their continued reliance on COBOL and mainframes rather than opting for replacement and retraining costs.


Ruby captured a lot of the webdev market before Python became mainstream, and you can see it in the maturity of the tools around this domain (Ruby's are way more evolved). The tables turned because Python became the de facto ML language, and everyone and your engineering manager will be afraid of betting on anything other than Python for fear of hampering future ML-based optimizations (which is in practice utterly wrong, but try telling that to management...).


Agree! But isn't it kind of weird that we accept that ML happens only in Python? I'm biased because I'm working on a startup that makes machine learning SDKs in Ruby, Elixir, Golang, PHP, JavaScript, etc. Also, to clarify what you are saying: you think companies are choosing to write their entire stack in Python just because the ML tools are in Python?


> you think companies are choosing to write their entire stack in python just because ml tools are in python?

Going with the masses (e.g. Java/Node/Python) is the easy thing to do. It feels "safe". But who knows where Python/Node will be 10 years from now; people are restless and will come up with new things (Deno?), and the incumbents could start declining. Java is pretty much unavoidable though.


Eh, I've been down this road, and the tooling in Python is just so evolved that it's hard to replicate. Julia is having a hard enough time; I see ML libraries pop up in other languages and they usually fizzle out or are used only by hobbyists.

As an ML engineer, it's really hard to justify spending your time in a language you will be handicapped by instead of just using Python.


"the popular tech stack shifted": this says it all. Choice of a stack shouldn't be about popularity but rather about meeting the cultural,organizational,functional and non functional requirements of the solution. I said it already and I will continue saying it: our industry is driven more by fashion rather than good engineering practices. Every new trend solves a problem of the old generation but also introduces additional costs and new trade-offs. Adopting something blindly is not a good engineering approach. Software engineering is the art of balancing carefully your trade-offs and not to go with the latest fashion.


> and have the same tiring conversations about naming conventions, logging, data consistency, validation, build scripts, etc.

Sounds like bike-shedding to me, i.e. starting by solving trivial problems instead of the hard ones.

Usually the first thing decided is which framework to use, often by a non-technical person who heard from a friend that it was good, rather than having the engineers sit down and think about what the biggest challenges are and which framework is best suited to solving them.


OP never once stated that they started with those problems. What makes you think the engineering challenges weren't tackled first?


> cognitive overhead of getting a simple app up and running gradually took all the fun out of the job for me

This video was recently linked in other thread on HN, but I feel this comment really resonates with the main conclusion: https://www.youtube.com/watch?v=pW-SOdj4Kkk


Because of this unnecessary complexity in the webdev ecosystem, I switched my development career to other areas[^]. If I have to do webdev for a hobby project, I use Django and Postgres, with no JS frameworks whatsoever. It's liberating and fun.

[^] Mostly data engineering at work, and low level os/system dev in my spare time.


I hear ya. I perversely yearn for my days with Perl and CGI::Application. When I get up in the morning my eyes just happen to land on a stack of O'Reilly Perl books and it sets me off.


Ditto for me, except swap RoR for PHP monoliths. Gone are the days when my only cognitive overhead was deciding between Less or Sass, and CodeIgniter or "the fancy new" Laravel. Likewise, I got fed up with where dev was going in the workplace and moved into a Solution Architecture / Product Manager role. I still get a lot of satisfaction creating side projects in the tools of old (which IMHO work much nicer than 95% of "modern" applications), but thinking about doing development in my day job would give me a headache from the outset.


> I eventually gave up and switched to a semi-technical product management role.

Could you let me know what your job title is? I am looking for something similar, where I can work on semi-technical stuff and at the same time manage developers, but I'm not sure what title I'm supposed to look for.


Search for "Project managment". There is "Product management" which sounds similar but it's a bit different: project manager talks with developers and manages task complition while product manager talks to product owner(and other business people) and team lead and manages product money making and overall future strategy.


“Solution Architect” is normally the role that lies between Product Owners and the Dev team. It’s somewhat technical, especially if you don’t have a Technical Architect counterpart.



I like monoliths... which are modular inside... no need for microservices, easy deployment even without Docker and Kubernetes... just a single binary...


This sounds more like you couldn't adapt to more people joining the company. The stack reflects your org structure.


I started as a rails developer. This is just looking at the past through rose-tinted glasses.

Given the popularity of rails at the time, if node.js were as unproductive as you say it is, it would never have caught on.

The reality is that express was considerably simpler and easier than rails and that's what made many of the internet companies we have today even possible.


Node productivity changes after a few years. I can give you examples from my work just this week (not internet stories) where npm packaging bullshit wasted my time, and I had to hack my way to a fix and postpone the inevitable: a giant cleanup of our dependencies.

After the stuff I've seen with my own eyes (not internet stories), I am convinced that there are packages like "red" that define the color RED="#ff0000", a "colors" package that depends on all the color packages, and probably a few big dev tools that depend on that "colors" one. This node/npm ecosystem is crazy, but we are paid to maintain and fix other people's stupidity.
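To make the pattern concrete, here is a sketch of the kind of micro-package layering being described. The "red" and "colors" package names and contents are hypothetical illustrations, not real npm packages:

```javascript
// --- the entirety of a hypothetical "red" package's index.js ---
const red = "#ff0000";

// --- a hypothetical "colors" package: it would declare dozens of
// one-constant packages like "red" as dependencies and re-export them ---
const colors = { red /* , green, blue, ... */ };

// Any dev tool that depends on "colors" now transitively pulls in
// every one-constant package in the chain.
console.log(colors.red); // "#ff0000"
```

Each layer adds another entry to the dependency tree that can break, be abandoned, or be compromised, which is the maintenance burden the comment above is complaining about.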

EDIT: forgot to mention why things get complex over time. You hit cases where you need a new version of package A, but A needs, say, a new node version or some other new shit, while package B fails on that new node version with some stupid error, and B was abandoned or the only fix is upgrading to a new major version. You'll also notice, when you inherit an old project, that your packages are abandoned and might have security issues. It is a big mess.


Adding lots of small, trivial packages is totally optional though.


For some reason I inherited a project that has tons of such packages, including frameworks and dev tools that were popular at the time. Didn't leftpad prove that the node community was, and still is, doing this bad thing? Sure, I have the option of zero dependencies, but if you inherit an old node project, the chance that it depends on a few bad packages is 100%.


Not as optional as it would be if Node had a standard library to speak of.
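For what it's worth, some of those gaps have since closed in the language itself: the canonical left-pad use case has been built in since ES2017, so no package is needed at all. A minimal sketch:

```javascript
// left-pad's core functionality is now String.prototype.padStart
// (standard since ES2017), so the micro-package is redundant.
const padded = "42".padStart(5, "0");
console.log(padded); // "00042"
```

But that only covers one famous case; the broader point about Node's thin standard library pushing people toward tiny packages still stands.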


Have you tried implementing a well-structured, well-developed migration system for node? How about a unified folder structure, best practices and patterns? Scheduled cron jobs? Templating and asset bundling? All of this came with Rails out of the box, with sane defaults.

I guess I should be thankful that re-inventing the wheel on all of these aspects in node has given me hundreds, if not thousands, of hours in compensation.


>Given the popularity of rails at the time, if node.js was unproductive like you say it is, it would have never caught on.

Never underestimate the power of fads, tech pop culture, hype, and trying the shiny new thing...


The assumption that "better thing always wins" (for whatever definitions of better) is just plain wrong.


For sure. The longer somebody has been coding, the more examples they can list where the worse technology won out.

Exhibit A here is the JS language itself. It became popular not because it was the best language, just the most ubiquitous one. You could read the 25 years since as trying to turn it into a solid language.

Or look at the rise of PHP. It started out as a tool for making a Personal Home Page, which is a fine use case and was certainly a popular one in the mid/late 1990s. It eventually turned into the foundation for one of the world's largest companies. But nobody can argue it was a particularly good language. E.g. https://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/

Same thing with Java, really. Bunch of interesting ideas, some of which worked out and some didn't. Even at launch a lot of the things that have turned out to be problems were criticized. They say that Java is the new COBOL, and that seems pretty fair to me.

I could go on all day. The most popular technologies are not always the best. That applies to movies and music and pretty much everything. And personally, I've accepted it. It's part of how the world works. The trick for those of us who like "better" things is to figure out how to get them to become mainstream.


When all dimensions are considered, the better thing is the thing that wins.

All the counterexamples in the other replies focus on one aspect of a piece of tech that was bad while completely ignoring the rest. PHP might have been bad as a language, but for its time it was easier and more productive than the alternative (J2EE): the better thing won, until it was supplanted by something better.


That is an error in the other direction: focusing only on a single good aspect of a piece of tech. PHP wasn't up against J2EE in that first wave, it displaced perl and cgi-bin. And the only thing it had going for it was easier deployment. Arguably first-class templating too, but I don't think that was the killer feature, and really perl wasn't that far behind. Perl had better OS support, a massive ecosystem that PHP didn't catch up with for about 15 years, and a community that actually cared about code quality. And it was renowned as a "getting stuff done" high-productivity language.

The easier deployment story for PHP3 meant you saw massive expansion at the bottom of the market as cheap webhosts were able to offer very locked-down accounts for peanuts, which meant a generation of developers came along whose only experience was with PHP. They didn't have anything to compare to and nobody was really offering them anything else at anything like the same scale. The virtualisation revolution hadn't happened yet, so these were all shared accounts on physical boxes, and they couldn't install their own stuff either.

It's just not reasonable to say PHP was "better" except in a very limited, and largely accidental sense. It just so happened that that one advantage was enough to catapult it into first place.


Is express still easier than Rails? I'm a noob, and the reason I'm asking is that I'm getting somewhat stuck with a project in Spring; I find their docs very fragmented, and the community more advanced/intermediate or business-oriented.

It’s just me building this and I have no one in my circle that’s familiar with spring.


I've been a software engineer for 32 years (I'm in management now, but I still write non-critical-path code). During my career, I've used Express, Koa, Rails, Django, Laravel, Pyramid, Sinatra, Flask, Pedestal, Luminus, Phoenix, and several other backend frameworks. If I were starting out as a "noob" today, I'd learn Elixir and Phoenix.


node.js was a lot more performant and used a language people already knew from writing browser code



