Hacker News new | past | comments | ask | show | jobs | submit | valenterry's comments login

That's where Scala shines. I wrote about this here a bit: https://valentin.willscher.de/posts/contextual-syntax/

Rust is heavily inspired by Scala, but I guess achieving something like the examples in my post is difficult. I really hope Rust finds one way or another to make it work. Because simply forbidding everything all the time isn't even the safest way - it drives many people to just avoid it altogether and use unsafe code.
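To make the "contextual syntax" point concrete, here is a minimal Scala 3 sketch (the RequestId type and all names are made up for illustration): a dependency is supplied once via `given` and threaded through calls implicitly, instead of being passed at every call site.

```scala
// Scala 3 contextual parameters: RequestId is supplied once via `given`
// and flows through every `using` parameter automatically.
case class RequestId(value: String)

def log(msg: String)(using id: RequestId): String =
  s"[${id.value}] $msg"

def handle()(using RequestId): String =
  log("handling request") // RequestId is passed along implicitly

given RequestId = RequestId("req-42")

val line = handle() // "[req-42] handling request"
```

Rust has no direct analogue of contextual parameters today; you pass the value explicitly everywhere, which is roughly the gap the linked post is about.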


While you're at it, could you give a few examples that illustrate how Rust was inspired by Scala proper, and not by Haskell / SML / OCaml (which also influenced Scala)?

This page on the official Rust website listing its influences does not mention Scala: https://doc.rust-lang.org/reference/influences.html

In fact this is the first time I've seen anyone say Scala influenced Rust, let alone "heavily". Seems like a stretch.


It says here:

https://en.wikipedia.org/wiki/ML_(programming_language)

that ML influenced Rust, Scala, Haskell and OCaml, so that's the common denominator.

However, although Rust wants to be an ML-style language, many of the idioms break down because there's no GC.


So I regularly jump between Scala and Rust, and it's more of a feeling.

Overall, the way option types, pattern matching, FP operations (e.g. map/filter), immutability by default, and type parameters have been implemented is very similar to Scala. And since these make up a large percentage of the everyday code you write, you perceive the similarity as larger than it maybe is.
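For what it's worth, here is a sketch of those overlapping features on the Scala side (hypothetical User type); each line has a near-literal Rust counterpart (Option<T>, match, iterator adapters):

```scala
// Immutable data, Option instead of null, pattern matching, map/filter.
case class User(name: String, age: Int)

val users = List(User("ada", 36), User("bob", 17))

// map/filter over an immutable collection (cf. Rust's iter().filter().map())
val adults: List[String] = users.filter(_.age >= 18).map(_.name)

// Option consumed via pattern matching (cf. Rust's Option<T> and match)
val greeting = users.headOption match {
  case Some(u) => s"hello, ${u.name}"
  case None    => "nobody here"
}
```

Modulo syntax, the Rust version is close to a line-for-line translation (struct + Option + match + iterator chain), which is probably why the two feel so similar day to day.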


All of these are just functional features you find in every functional language, though. Rust was highly influenced by OCaml and other ML languages.

That's why I said it was a feeling.

And I think the fact that the overall style of the language (e.g. braces, semicolons, method signatures, loop handling) is so similar to Scala, versus OCaml, contributes to this.


Crazy project.

Personally I find that it's an interesting indicator of the capability of the programming languages. Moving from language A to B can be extremely easy if B is as powerful or more powerful in terms of expressiveness than A. It can be an absolute horror if it is less powerful.

Not being null-safe in fact brings additional expressiveness. I guess most would argue that it's not a good kind of expressiveness. Nonetheless, it is a kind of expressiveness that can cause trouble during such a transition.

In general it feels like Java is making faster progress than Kotlin nowadays. I wonder where Kotlin would be if it weren't for Android. I hope those folks don't have to migrate back before they've even finished.


Without Android, Kotlin would just be Cool Alternative Language #4721. Java has been a safe, non-controversial pick for decades. Who is going to endorse writing their hot new project in Kotlin just because some IDE uses it? When Google says they support a technology going forward, that gives the project some cachet it otherwise never would have received.

Not even support, now they’ve completely migrated all Android libraries to be Kotlin-first/Java-never.

It's not like you can't decorate every variable access with !! to retain this expressiveness. This is not done in practice because it would be a waste of verifiable type safety. You can always `lateinit var` in cases where the initialization and lifetime of a particular value is trivial, but in cases where you can't do this, you probably benefit from the null safety guarantees.

> This is not done in practice because it would be a waste of verifiable type safety.

Why not? The current application is working, so I'd start with that and then gradually move away from it. But it seems they chose to go the opposite way.


I like it. At first glance, they chose the "proper" order (from -> where -> select) over the classical order (select -> from -> where), probably because that improves/enables autocomplete and type handling. This is good.


Using (from -> where -> select), how would you provide type hints on the where clause when your select includes non-table columns?

  SELECT
    COUNT(col_a) as count
  WHERE
    count > 0
 
Kysely uses (from -> select -> where), and allows joins and selects in multiple places, like (from -> join -> select -> join -> select -> where).


Maybe I'm mistaken, but I think your query is invalid. You cannot refer to a select alias (here "count") in a WHERE condition; a condition on an aggregate belongs in HAVING (e.g. HAVING COUNT(col_a) > 0) or in an outer query.


> Frameworkism isn't delivering.

Correct. The reason is not the frameworks but the languages. What is needed is a much more high-level and expressive language.

Just look at React. Passing down dependencies/values is a pain. So what did React do? It introduced contexts. Similarly, other React add-ons try to solve this problem. But they all sacrifice type safety in the process.

They simply cannot solve this problem; only a better language can. TypeScript maybe could, but it restricts itself to not generating code (or only in very limited cases; I think it does for enums).

Without a change here, nothing can ever improve.


This is straight up not true. React state management is very flexible and requires you to think about what your problem is and the nature of state.

There is absolutely nothing preventing you from keeping that type safety; I’m not sure what you mean by that.

The TypeScript type system is very advanced, maybe even too advanced! I disagree that more sophisticated languages are required.


See what Peter Kelly wrote, for example.


The React team should have built their own language, rather than making it a framework/library. Then they could have ensured the absence of side effects in rendering code and had a better way of detecting updates. The knowledge of how to do this was already out there for many years before they started: https://en.wikipedia.org/wiki/Functional_reactive_programmin...


Yes, but if they did their own language it would have almost certainly died in obscurity.

React won because it solved a real pain point for a group of people, using the techniques they were already using... but better!

Having built large apps in angular, handlebars, and jquery... it was (and still is) a godsend for building highly interactive, stateful web applications.

I am deeply skeptical of some of the stuff that people are doing now wrt. putting backend logic into React components, but the fundamentals of React are still very reasonable to me.


Isn’t that ReScript?

https://rescript-lang.org/


That's exactly my point.


This new language. Do you mean something that compiles down to JS, like TypeScript, CoffeeScript and Dart? Or a language that compiles to WASM, like C, C++ or Rust?


Doesn't matter, as long as the language directly supports the required features.


Don’t use state, except within a component.

I build all my react this way.

No prop drilling, no Redux, no context, no mobx no state.

If you need to communicate between components then send a document event.

Your React application will be dramatically simpler without state.


So, let's say you want to deploy server instances. Let's keep it simple and say you want to have 2 instances running. You want to have zero-downtime-deployment. And you want to have these 2 instances be able to access configuration (that contains secrets). You want load balancing, with the option to integrate an external load balancer. And, last, you want to be able to run this setup both locally and also on at least 2 cloud providers. (EDIT: I meant to be able to run it on 2 cloud providers. Meaning, one at a time, not both at the same time. The idea is that it's easy to migrate if necessary)

This is certainly a small subset of what kubernetes offers, but I'm curious, what would be your goto-solution for those requirements?


That's an interesting set of requirements though. If that is indeed your set of requirements then perhaps Kubernetes is a good choice.

But the set seems somewhat arbitrary. Can you reduce it further? What if you don't require 2 cloud providers? What if you don't need zero-downtime?

Indeed, given that you have 4 machines (2 instances × 2 providers), could a human manage this? Is Kubernetes overkill?

I ask this merely to wonder. Naturally if you are rolling out hundreds of machines you should use something like Kubernetes, and no doubt by then you have significant revenue (and are thus able to pay for dedicated staff), but where is the cross-over?

Because to be honest most startups don't have enough traction to need 2 servers, never mind 4, never mind 100.

I get the aspiration to be large. I get the need to spend that VC cash. But I wonder if DevOps is often just premature, and whether that focus would be better spent getting paying customers.


> Can you reduce it further? What if you don't require 2 cloud providers? What if you don't need zero-downtime?

I think the "2 cloud providers" criteria is maybe negotiable. Also, maybe there was a misunderstanding: I didn't mean to say I want to run it on two cloud providers. But rather that I run it on one of them but I could easily migrate to the other one if necessary.

The zero-downtime one isn't. It's not necessarily so much about actually having zero-downtime. It's about that I don't want to think about it. Anything besides zero-downtime actually adds additional complexity to the development process. It has nothing to do with trying to be large actually.


I disagree with that last part. By default, having a few seconds of downtime is not complex. The easiest thing you can do to a server is restart it. It's literally just a restart!


It's not. Imagine there is a bug that stops the app from starting. It could be anything, from a configuration error (e.g. against the database) to a problem with warmup (if necessary) or any kind of other bug like an exception that only triggers in production for whatever reasons.

EDIT: and worse, it could be something that just started happening and would occur even when trying to deploy the old version of the code. Imagine a database configuration change that allows the old connections to stay open until they are closed but prevents new connections from being created. In that case, even an automatic rollback to the previous code version would not resolve the downtime. This is not theory; I've had those cases quite a few times in my career.


I managed a few production services like this and it added a lot of overhead to my work. On the one hand I'd get developers asking me why their stuff hasn't been deployed yet. But then I'd also have to think carefully about when to deploy and actually watch it to make sure it came back up again. I would often miss deployment windows because I was doing something else (my real job).

I'm sure there are many solutions but K8s gives us both fully declarative infrastructure configs and zero downtime deployment out of the box (well, assuming you set appropriate readiness probes etc)

So now I (a developer) don't have to worry about server restarts or anything for normal day to day work. We don't have a dedicated DevOps/platforms/SRE team or whatnot. Now if something needs attention, whatever it is, I put my k8s hat on and look at it. Previously it was like "hmm... how does this service deployment work again..?"


"Imagine you are in a rubber raft, you are surrounded by sharks, and the raft just sprung a massive leak - what do you do?". The answer, of course, is to stop imagining.

Most people on the "just use bash scripts and duct tape" side of things assume that you really don't need these features, that your customers are ok with downtime and generally that the project that you are working on is just your personal cat photo catalog anyway and don't need such features. So, stop pretending that you need anything at all and get a job at the local grocery store.

The bottom line is there are use cases, that involve real customers, with real money that do need to scale, do need uptime guarantees, do require diverse deployment environments, etc.


Yep. I'm one of 2 DevOps at an R&D company with about 100 employees. They need these services for development; if an important service goes down you can multiply that downtime by 100, turning hours into man-days and days into man-months. K8s is simply the easiest way to reduce the risk of having to plead for your job.

I guess most businesses are smaller than this, but at what size do you start to need reliability for your internal services?


You can scale servers just as well without k8s: use good practices, script your deployments in bash, and keep them documented and in version control.

Equating bash scripts and hand-run servers with duct tape and poor engineering, while treating k8s YAML as "proper engineering", is just wrong.


The question is why solve a solved problem?


I think you are proving the point; there are very, very few applications that need to run on two cloud providers. If you do, sure, use Kubernetes if that makes your job easier. For the other 99% of applications, it’s overkill.

Apart from that requirement, all of this is very doable with EC2 instances behind an ALB, each running nginx as a reverse proxy to an application server with hot restarting (e.g. Puma) launched with a systemd unit.


To me that sounds harder than just using EKS. Also, other people are more likely to understand how it works, can run it in other environments (e.g. locally), etc.


Hmm, let's see, so you've got to know: EC2, ALB, Nginx, Puma, Systemd, then presumably something like Terraform and Ansible to deploy those configs, or write a custom set of bash scripts. And after all that, you're still tied to one cloud provider.

Or, instead of reinventing the same wheels for Nth time, I could just use a set of abstractions that work for 99% of network services out there, on any cloud or bare metal. That set of abstractions is k8s.


Sorry, that was a misunderstanding. I meant that I want to be able to run it on two cloud providers, but one at a time is fine. It just means that it would be easy to migrate/switch over if necessary.


My personal goto-solution for those requirements -- well 1 cloud provider, I'll follow up on that in a second -- would be using ECS or an equivalent service. I see the OP was a critic of Docker as well, but for me, ECS hits a sweet spot. I know the compute is at a premium, but at least in my use-cases, it's so far been a sensible trade.

About the 2 cloud providers bit: is that a common thing? I get wanting to migrate away from one for another, but needing to run on more than 1 cloud simultaneously just seems alien to me.


Last time I checked, ECS was even more expensive than Lambda but without the ability to fast-start your container, so I really don't get the niche it fits into, compared to Lambda on one side and self-hosting Docker on minimal EC2 instances on the other.


I may need to look at Lambda closer! At least way back, I thought it was a no-go since the main runtime I work with is Ruby. As for minimal EC2 instances, definitely, I do that for environments where it makes sense and that's the case fairly often.


Actually, I totally agree. ECS (in combination with Secrets Manager) basically fulfills all the needs, except that it's not so easy to reproduce/simulate locally, and of course there's the vendor lock-in.


Do you know of actual (not hypothetical) cases, where you could "flip a switch" and run the exact same Kubernetes setups on 2 different cloud providers?


I run clusters on OKE, EKS, and GKE. Code overlap is like 99% with the only real differences all around ingress load balancers.

Kubernetes is what has provided us the abstraction layer to do multicloud in our SaaS. Once you are outside the k8s control plane, it is wildly different, but inside is very consistent.


Yes. I've worked on a number of very large banking and telco Kubernetes platforms.

All used multi-cloud and it was about 95% common code with the other 5% being driver style components for underlying storage, networking, IAM etc. Also using Kind/k3d for local development.


Both EKS (Amazon) and GKE (Google Cloud) run Cilium for the networking part of their managed Kubernetes offerings. That's the only real "hard part". From the user's point of view, the S3 buckets, the network-attached block devices, and the compute (CRI-O container runtime) are all the same.

If you are using some other cloud provider, or want uniformity, there's https://Talos.dev


Yes, but it would involve first setting up a server instance and then installing k3s :-)


I actually also think that k3s probably comes closest to that. But I have never used it, and ultimately it also uses k8s.


If you are located in Germany and run critical IT infrastructure (banks, insurance companies, energy companies), you have to be able to deal with a cloud provider completely going down within 24 hours. Not everyone who has to can really do it, but the big players can.


I'm just happy to see the tl;dr at the TOP of the document.


I've worked at tiny startups before. Tiny startups don't need zero-downtime-deployment. They don't have enough traffic to need load balancing. Especially when you are running locally, you don't need any of these.


Tiny startups can’t afford to lose customers because they can’t scale though, right? Who is going to invest in a company that isn’t building for scale?

Tiny startups are rarely trying to build projects for small customer bases (e.g. little scaling required). They’re trying to be the next unicorn. So they should probably make sure they can easily scale away from tossing everything on the same server.


> Tiny startups can’t afford to lose customers because they can’t scale though, right? Who is going to invest in a company that isn’t building for scale?

Having too many (or too big) customers to handle is a nice problem to have, and one you can generally solve when you get there. There are a handful of giant customers that would want you to be giant from day 1, but those customers are very difficult to land and probably not worth the effort.


Startups need product-market fit before they need scale. It’s incredibly hard to come by and most won’t get it. Their number one priority should be to run as many customer acquisition experiments as possible for as little as possible. Every hour they spend on scale before they need it is an hour less of runway.


While true, zero-downtime deployments are... trivial... even for a tiny startup. So you might as well do it.


Zero downtime deployments were a thing long before K8S


Tiny startups don't have money to spend on too much PaaS or too many VMs or faff around with custom scripts for all sorts of work.

Admittedly, if you don't know k8s, it might be a non-starter... but if you have some knowledge, k3s plus a cheap server is a wonderful combo.


Why does a startup need zero-downtime-deployment? Who cares if your site is down for 5 seconds? (This is how long it takes to restart my Django instance after updates).


Because it increases development speed. It's maybe okay to be down for 5 seconds. But if I screw up, I might be down until I fix it. With zero-downtime deployment, if I screw up, then the old instances are still running and I can take my time to fix it.


If you're doing CD where every push is an automated deploy a small company might easily have a hundred deploys a day.

So you need seamless deployments.


I think it's a bit of an exaggeration to say a "small" company easily does 100 deployments a day.


Not necessarily. Some companies prefer to have a "push to master -> auto deploy" workstyle.


We’ve been deploying software like this for a long ass time before kubernetes.

There’s shitloads of solutions.

It’s like minutes of clicking in a UI of any cloud provider to do any of that. So doing it multiple times is a non-issue.

Or automate it with like 30 lines of bash. Or chef. Or puppet. Or salt. Or ansible. Or terraform. Or or or or or.

Kubernetes brings in a lot of nonsense that isn’t worth the tradeoff for most software.

If you feel it makes your life better, then great!

But there’s way simpler solutions that work for most things


I'm actually not using kubernetes because I find it too complex. But I'm looking for a solution for that problem and I haven't found one, so I was wondering what OP uses.

Sorry, but I don't want to "click in a UI". And it is certainly not something you can just automate with 30 lines of bash. If you can, please elaborate.


> And it is certainly not something you can just automate with 30 lines of bash. If you can, please elaborate.

Maybe not literally 30... I didn't bother actually writing it. Also, bash was just a single example; it's way less Terraform code to do the same thing. You just need an ELB backed by an autoscaling group, which is not all that much to set up. That gets you the two load-balanced servers and zero-downtime deploys. When you want to deploy, you create a new scaling group and launch configuration, attach them to the ELB, and ramp down the old one. Easy peasy. For the secrets, you need at least KMS, and maybe Secrets Manager if you're feeling fancy; that's not much to set up. I know for sure AWS and Azure provide nice CLIs that would let you do this in not that many commands. Or just use Terraform.

Personally, if I really cared about multi-cloud support, I'd go with Terraform (or whatever it's called now).


> You just need an ELB backed by an autoscaling group

Sure, and then you can neither 1.) test your setup locally nor 2.) easily move to another cloud provider. So that doesn't really fit what I asked.

If the answer is "there is nothing, just accept the vendor lock-in" then fine, but please don't reply with "30 lines of bash" and raise my expectations. :-(


A script that installs some dependencies on an Ubuntu vm. A script that rsyncs the build artifact to the machine. The script can drain connections and restart the service using the new build, then onto the next VM. The cloud load balancer points at those VMs and has a health check. It's very simple. Nothing fancy.

Our small company uses this setup. We migrated from GCP to AWS when our free GCP credits from YC ran out and then we used our free AWS credits. That migration took me about a day of rejiggering scripts and another of stumbling around in the horrible AWS UI and API. Still seems far, far easier than paying the kubernetes tax.


I guess the cloud load balancer is the most custom part. Do you use the ALB from AWS?


For something this simple, multi-cloud seems almost irrelevant to the complexity. If I’m understanding your requirements right, a deployment consists of two instances and a load balancer (which could be another instance or something cloud-specific). Does this really need fancy orchestration to launch everything? It could be done by literally clicking through the UI to create the instances on a cloud, and by literally running three programs to deploy locally.


Serverless containers.

Effectively using Google and Azure managed K8s. (Full GKE > GKE Autopilot > Google Cloud Run). The same containers will run locally, in Azure, or AWS.

It's fantastic for projects big and small. The free monthly grant makes it perfect for weekend projects.


0 downtime. Jesus Christ. Nginx and HAProxy solved this shit decades ago. You can drop out a server or group. Deploy it. Add it back in. With a single telnet command. You don’t need junk containers to solve things like “0 downtime deployments”. That was a solved problem.


Calm down my friend!

You are not wrong, but that only covers a part of what I was asking. How about the rest? How do you actually bring your services to production? I'm curious.

And, PS, I don't use k8s. Just saying.


Cloud Run. Did you read the article?

Migrating to another cloud should be quite easy; there are many PaaS solutions. The hard parts will be things like migrating the data, making sure there's no downtime AND no drift/diff in the underlying data when some clients write to Cloud-A and some write to Cloud-B, etc. But k8s does not fix these problems, so...


Came here to say the same thing: PaaS. Intriguing that none of the other 12 sibling comments mention this… each in their bubble I guess (including me). We use Azure App Service at my day job and it just works. Not multi-cloud obviously, but the other stuff: zero downtime deploys, scale-out with load balancing… and not having to handle OS updates etc. And containers are optional, you can just drop your binaries and it runs.


There are no studies for anything, and if there were, they would be outdated by the time they are published.

So if you ask such a question, do you really expect anything more than people's firsthand experiences?

But I'll tell you my own side as well: doing anything with concurrency in any other language (yes, including Go and Rust) is just much much harder in comparison to Scala.

No wait, that's not true. There are languages that are better suited for that. But they are either academic or extremely niche and come with other problems (such as lack of ecosystem or much worse performance).
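To make "easier" slightly less hand-wavy, here is the kind of composition plain standard-library Futures already give you (fetchUser/fetchKarma are made-up stand-ins; production Scala often uses cats-effect or ZIO instead, but the composition style is the same):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

def fetchUser(): Future[String] = Future { "ada" }
def fetchKarma(): Future[Int]   = Future { 42 }

// Both futures start immediately and run concurrently...
val user  = fetchUser()
val karma = fetchKarma()

// ...and are combined declaratively, with failures propagating for free.
val summary: Future[String] =
  for {
    u <- user
    k <- karma
  } yield s"$u has $k karma"

val result = Await.result(summary, 5.seconds)
```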

> The fact is there are fewer FP devs. Why is that if FP is "better"?

The fact is also that the number of FP devs has been and is growing. And so has the number of languages with those features.

> There's fewer Scala devs, because Scala isn't better.

What metrics are you using to make the claim that it "isn't better"?


Fewer devs, fewer libs, fewer tools, fewer projects.

Those are all quantified with out a research study.


Your question is a bit like someone asking "so what does the garbage collector actually do? Does it X or Y? What impact does it have?"

And the answer is: there's no need to care about it, unless you really need to optimize for high performance (not necessary in 99% of cases; otherwise you'd have used a different language from the beginning anyway).

> Also, the original strong need for immutable data in the first place is safety under concurrency and parallelism?

One of the reasons is that you really can just completely stop thinking about it. Just like you can stop thinking of (de)allocations. Except for some edge-cases when performance matters a lot.
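A small sketch of what "stop thinking about it" means in practice with Scala's immutable collections (toy numbers, nothing domain-specific): concurrent readers need no locks, and an "update" is just a new value:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

val prices = List(10, 20, 30)

// Two tasks read the same list concurrently; no locks, no defensive copies.
val doubledSum = Future(prices.map(_ * 2).sum)
val plainSum   = Future(prices.sum)

val results = Await.result(Future.sequence(List(doubledSum, plainSum)), 5.seconds)

// "Updating" builds a new list; `prices` itself is never touched.
val withTax = prices.map(p => p * 110 / 100)
```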


Yes it does, otherwise the code would actually not compile.


> The idea that LLMs are going to advance software in any substantial way seems implausible to me

I disagree. They won't do that for existing developers. But they will make it so that tech-savvy people will be able to do much more. And they might even make it so that one-off customization per person becomes feasible.

Imagine you want to sort Hacker News comments by number of characters, inline in your browser. Tell the AI to add this feature and maybe it will work (just for you). That's one way I can see substantial changes happening in the future.


Exactly that. LLMs generate a lot of simple and dumb code fast. Then you need to refactor it and you can't because LLMs are still very bad at that. They can only refactor locally with a very limited scope, not globally.

Good luck to anyone having to maintain legacy LLM-generated codebases in the future, I won't.


I’ve noticed LLMs quickly turn to pulling in dependencies and writing complicated code.


I'm sure they do great for scripts and other stuff. But the few times I tried, they always go for the most complicated solutions. I prefer my scripts to grow organically. Why automate something if I don't even know how it's done in the first place? (Unless someone else is maintaining the solution)

