The Untouched Goldmine of F# (rm4n0s.github.io)
38 points by rmanolis 22 days ago | 56 comments



> The question is: Why companies moved from monolithic to microservices? What do they try to avoid?

One of the main reasons why companies move from monoliths to microservices is to promote ownership and accountability in large codebases.

In a monolith where everyone owns the code, developers can break each other's code.

With microservices, each team becomes responsible for one part, and (as long as they keep their SLAs) they can't break each other's code.

When something fails it's easier to identify who needs to fix what.

Microservices don't make much sense for small teams if they don't need or don't have the headcount to split responsibilities.


Same-process modularization should solve that problem. Splitting the architecture into separate processes means a crash in one module doesn't propagate to other modules, but the whole system still breaks down if the process is essential.


What same-process modularization does _not_ solve is independent lifecycle management. In a monolithic system your change rate often becomes tied to your slowest, most bug-prone module or team. If you have an integration test that bakes some piece of important code for 48 hours, you can only ship a change _everywhere_ else every 48 hours.

Now sometimes folks (the Windows team famously did this) build systems which identify which tests to run based on changes that occur in the codebase, but that's _not_ easy.


Yes. With the microservice as the unit of deployment, the design is about team structure as much as it is anything else. The team of 4-12 people owns and deploys a service at their own pace, without needing to be part of a larger and slower batch.

If you're interested in iterative development and continuous deployment, which of course I am, it is a natural fit. It is productive in that case.


Not all bugs are crashes. It could be a bug in copy text or in a price calculation.


A system can break without crashing. Incorrectly processing financial transactions due to a bug in price calculations would be a good example of a serious breakage that may well be worse than crashing.


For bugs that don't cause crashes the effect is the same regardless of how you architect the system.


Yes. And the little-understood reason why microservices became the answer is that the tools for defining clear ownership boundaries in a single service were (and to an extent still are) lacking.

Defining a clear public contract in most existing programming languages that cannot be violated by a lazy programmer is tricky. Java's package-private should be the answer but is too blunt and ends up requiring a team to make things public so they can use them. C# has a bit more control that might make clear contracts possible (I haven't tried), but not everyone wants to use it because Microsoft. And most of the popular languages at the time of the rise of microservices had basically no mechanism for defining public versus private interfaces.

Microservices allow you to do that in any language— your HTTP API is your public interface. Anything inside of that is physically impossible to access without adding an explicit build-time dependency, which would be trivial to catch in code review.

Yes, code review could theoretically solve the same problems, but in practice no organization is that disciplined. You need tooling, and those tools didn't exist yet when microservices came to be. We're seeing them develop now (Rust's visibility modifiers are very powerful, and monorepo tools are starting to develop lints that can impose module boundaries on languages that lack them natively), and it's not a coincidence that around the same time we're seeing a lot of people moving back to monoliths.


> With microservices, each team becomes responsible for one part, and (as long as they keep their SLAs) they can't break each other's code.

Well, they still can and do break someone else's code. It is just breaking someone's code with more steps in between. And who is at fault isn't necessarily as clear-cut.


Sounds like extra bureaucracy.


Yes and no. Would you rather ask for deployment of a microservice owned by a team of 10 people, or of a monolith that carries changes from a department of 100 people? Which deployment do you think has "extra bureaucracy"? Which one do you think can be done this week?

There are co-ordination costs to microservices. But there are also independence benefits.


It definitely is; but this bureaucracy can be useful when the company has thousands of developers.


> With microservices, each team becomes responsible for one part, and (as long as they keep their SLAs) they can't break each other's code.

When done well, microservices work as a unit of deployment. The team that owns the service code deploys it at their own pace. Sure, there is process to follow, but it involves the 10 people on that microservice team, not 100 people for the monolithic app. This allows for smaller batches and independence of development.


Having microservices doesn’t protect against affecting another team’s code/service. Think of a service changing some behavior or contract on either side of your service. Or are you thinking of a person changing the actual code of another team? That can be solved without microservices.


Yet some companies use monorepos, so everybody has access to everything.


> It may not have the information on the filename or the line of code like the ordinary stack traces, but these are useless information anyway.

That is the weirdest and most crazy thing I've read in years.


The whole post feels like it came from a parallel universe. People adopted microservices because stack traces are too long! Exposing the implementation details of a function in its type signature is good! The most meaningful way to specify a function is by how it can fail (this one is fun, though)!


In soviet F# the monads... uh....


the endofunctors operate on the category of you!


I’ve been told by HR I’m not allowed to show people my endofunctor anymore.


No doubt it’s harder to write structured code and assertions against a string stacktrace, but as a human reading it, the information is immensely valuable.


Depends on how your app is structured, but it very well can be.

    badvalue = dobadthing(); dostuffwith(badvalue);

Depending on the indirection between dobadthing() and the place that chokes on badvalue the stack trace can be completely worthless. It is one of the reasons people hated the original Spring. Claimed it decoupled stuff but all it really did was obfuscate.

I have seen similar claims made about Clojure. Bad data coming from somewhere, but no way to trace it through the system. Sure, you can filter bad data into the garbage can, but that just masks the issue rather than fixing it.


Yeah, that's also my main problem with the article. Otherwise it's great to promote TSTs.

I think the TSTs should also contain the stacktrace. Not X or Y, but both X and Y.


So… if I ever decide I want to change the call stack of a lower level function… I’m going to break types that all my callers and callers’ callers and callers’ callers’ callers’ are depending on? Like, my call stack is enshrined in a type so that to change it means a refactor all the way up to main()?


Ultimate job security. Write the code that no one will ever dare to touch. And you pass on your job security to those few brave souls who do.


Is TST essentially the validation monad[0]?

Glad to see more folks finding and appreciating FP languages like F#. It's good stuff. Scott Wlachin has a great site to discover more F# goodness[1].

[0]: https://hackage.haskell.org/package/monad-validate-1.3.0.0/d...

[1]: https://fsharpforfunandprofit.com/


It seems a step on the path to learning both the Either monad and the Validation monad, especially for someone coming from golang, which desperately needs a proper Either monad. Go has essentially baked in the worst of both the imperative and FP worlds: everything returns as if there were an Either monad, but without enough useful combinators and without any semblance of do-notation/Computation Expressions.

Which is fun to see, really, because it is interesting watching someone rediscover FP principles from first practice.


> only Odin, F#, OCaml and Zig have [Tagged Unions or Discriminated Unions]

The good old missing sum type story.

Don't Rust, Haskell, Elm, Kotlin (with sealed classes), etc also have them?


I got to this part of the article and concluded it wasn’t worth my time, nor probably anyone else’s

1. Absurd claim that file names aren’t important in a stack trace

2. Claims they’re sharing a feature of F# that is never used, then later reveals they’ve been studying F# for 3 months

3. Then this statement about tagged unions


3 months, and he's already deep in acronym soup touting DDD, CDD, TDD, and TST. I'm getting major architecture astronaut vibes.


bit of a weird article to find on the top of HN, to be sure..

But I felt similarly when I picked up Haskell coming from Ruby and Java, so I can understand the attitude. :D


Most languages have them, or allow them to be expressed. The name "Tagged Unions" literally comes from C.


Just having sum types is not enough: you need some level of type safety and exhaustivity checking in match/switch statements to truly benefit from them.

Go does not. Java did not (maybe it does now, with sealed types and exhaustivity checks on switch statements, but I'm not sure if those have already landed).


Java does have exhaustiveness checking on switch expressions over sealed types in the release JDK, but not switch statements (due to the requirement for backwards compatibility). I'll often find myself writing var _ = <some switch>; to get around this, a very Java idiom if ever there was one, heh


Java does have exhaustiveness checking on switch statements for sealed types, but not enums (for backward compatibility), which is probably what you're using.

Try this:

    sealed interface Shape permits Circle, Rectangle { }
    record Circle(double radius) implements Shape { }
    record Rectangle(double length, double width) implements Shape { }

    void test() {
        Shape shape = new Circle(5);
        switch (shape) {
            case Circle _ -> System.out.println("Circle");
            case Rectangle _ -> System.out.println("Rectangle"); // Comment out this line to get an error.
        }
    }


Did it? Didn't Pascal have tagged unions?


I looked it up and apparently it actually comes from Algol, but I digress. Point being sum types and their equivalents are found all over the place. I wouldn't be surprised to find out modern prologs have them.


> It may not have the information on the filename or the line of code like the ordinary stack traces, but these are useless information anyway.

These are useless information?


God I remember having to deal with F# when I was an intern, what a nightmare. It's like the worst parts of Java and C# smashed into one esoteric language.


C# has a much-awaited and active proposal to add DUs, so I suspect C# will also support this once it lands (continuing its legacy of plucking great features from F#).

https://github.com/dotnet/csharplang/blob/main/proposals/Typ...


Another reason why companies moved to microservices is because it is easier to manage smaller projects. Services have stronger boundaries, they can be swapped with newer and alternative implementations, it is possible to combine the power of different technologies.

I came from the other side: I own a huge monolithic web behemoth which is almost unbearable to maintain now. It was built using a now outdated technology, but: it serves customers, it runs the business, it brings profits. Nevertheless, it is a huge pain to even try to change something in it. The project is at a dead end now; it is a one-trick pony that has become too old.

Nowadays I build newer parts using separate well-defined services. It is not textbook microservices; I would call it just services. Imagine building a mini-product that is not publicly available and only used internally. These kinds of productized services work well enough to never look back at the fragile monolithic approach.


I upvoted this because I love F#, but I can't say I agree with the conclusions of the author.


This is really about making stack traces easier to understand.


I don't think so; TST and stacktraces are different.

Stacktraces show me the call stack up to the point the error happened, something TST does not always show: different call stacks can result in the same (or a very similar) TST stack.

It is possible to return the stacktrace as part of the TST error (not just an error message but also a stacktrace).


I really like F#, but I really miss having a full-fledged F#-native ORM


No, you don't. Sorry for the blunt answer.

ORMs are a bad idea, even in OO languages: they make simple queries slightly simpler (`Users.getById(id: Long)`), but they don't help you with hard queries (ORM-using codebases of any size usually end up with hard SQL queries "in strings").

Most users of FP langs know this and hence will not even try to implement ORMs.

Look into jOOQ, LINQ method syntax (or whatever it is called, without the funny SQLish syntactic sugar), SQLDelight or sqlx for non-ORM options that improve embedding SQL in general-purpose languages.


I like the idea of F# type providers, but last time I tried (two years ago) they were pretty shit compared to any mainstream ORM/alternative. Has that improved?

Also, ORM for simple row selects is a straw man: their use case is deserializing and updating relationship graphs, where they hide away a bunch of code. Whether that code should be hidden is a different question, but pretending they're there to save you from typing SELECT * FROM foo is kind of disingenuous. Editing graph data structures in most FP languages is just not compatible with the OOP approach, hence no ORM.


Most of the time we do simple CRUD operations, and I want this plumbing to be as straightforward as possible. Don't be patronising...


I am always confused by this wish. If the operations are simple, why the need for ORM?


And the notion of lazy associations, life cycles, clean/dirty, attached/detached, ... etc, etc.

The book of Hibernate is thicker than the book for SQL.

The only thing that works better in ORMs is "save". Just call save on an ORM-managed object and it is saved; that's quite a bit harder in SQL.


F# has a secret superpower that no one has discovered for 30 years, and it will change its popularity in enterprise software.


I like TST. We use it in our code (FP'ish Kotlin).

But I also like stack traces, as they show me the call stack up to the point the error happened. Something TST does not always show: different call stacks can result in the same (or a very similar) TST stack.


its secret superpower is to allow you to give up on trying to get your code to compile and then just shim against some C# bindings


C# bindings?? F# and C# build on top of the same type system and can transparently access each other's types.


Maybe I misunderstand, but IIRC I had to compile the C# first into a separate DLL.


I must say this take seems a bit... grandiose... to me.

People have been banging on about type systems and similarly "better" capabilities for at least two decades [0], and Enterprise continues with a stark preference for language "practicality" and a low barrier to entry.

IMO it's because it's best to keep engineers superficially interchangeable rather than having a highly stable system (perhaps stable over spec) and a costly workforce with lots of negotiation leverage. But I digress.

[0] https://en.wikipedia.org/wiki/Worse_is_better



