
I go a little back and forth on this with my experience in F#, which relies heavily on inferred types. You can write a lot of F# before you need to add type annotations, but eventually things become a spiderweb. The key issue is that when you make a 'small' change to some method/value, the changes ripple through the program, sometimes creating confusing errors where the compiler is trying to knit things together.

After a while, I found myself adding back types in a decent number of places to "anchor" the type inference, indicating that a certain type/signature is fixed and a change should be carefully considered.
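The same "anchoring" trick translates to other inference-heavy settings; here's a minimal TypeScript sketch (function names are hypothetical, just for illustration):

```typescript
// Without an annotation the return type is inferred, so changing the
// body silently changes the type that every caller sees.
function parsePortInferred(s: string) {
  return Number.parseInt(s, 10); // inferred return type: number
}

// An explicit annotation "anchors" the signature: if a later edit makes
// the body return something else, the error appears here at the
// definition, not as a confusing ripple at distant call sites.
function parsePort(s: string): number {
  return Number.parseInt(s, 10);
}
```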

I still don't know how folks deal with these kinds of changes in weakly typed languages without bugs pouring into their code over time. But I do love the "move fast" and low-boilerplate aspects of "typeless" coding.




I think what a lot of people miss is that this ripple effect always exists. Whether you have strong types or weak types, it's hard and often impossible to avoid.

All strong types do is make it explicit at compile time (and with static analysis), instead of leaving it to be discovered in tests or at runtime.
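A tiny TypeScript sketch of that point (the type and function are hypothetical): the same ripple exists either way, but static types surface it at the definition instead of at runtime.

```typescript
// A "small" change ripples: suppose this field was renamed from `name`.
interface User {
  fullName: string;
}

function greet(u: User): string {
  // Any consumer still reading `u.name` now fails to compile,
  // instead of producing "Hello, undefined" at runtime.
  return `Hello, ${u.fullName}`;
}
```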

This is why I'll use Python for small one-off scripts, but for anything that will be in regular use I prefer Java, even though I have many more years of experience with Python.


As someone who works in C# and TypeScript, I suspect that people who work in weakly typed languages either:

- Don’t have big projects.

- Have lots of unit tests to cover some of what a C# or Java compiler would have caught through static analysis.

- Don’t even consider doing certain kinds of refactoring (which would be trivial in a strongly typed language) because they have no way of knowing what will break.


4th option, they are just better than you are at managing dynamism.


Even ignoring for a moment that I consider being highly practiced at dynamism a largely pointless skill while strongly typed languages exist: sooner or later, a code base that keeps growing will reach a size where even the very best person in the world at dynamism will have to resort to unit tests to cover what a compiler does for free, or accept that certain refactors or changes to the code base are unreasonably expensive to make with any reasonable level of confidence.


I'm the guy with a huge javascript codebase, written before typescript existed, and I have none of the problems you describe. Huge refactors are also not a necessity in every codebase, if the code was written well to begin with. And so far using typescript in other projects has not produced the supposed benefits a lot of people say are inherent with typescript. There is no magic happening that saves me from writing bad code, because I wasn't writing bad code before typescript. Refactoring isn't all that difficult either, even without tests. But I guess YMMV.


Good code becomes bad when the requirements change sufficiently in ways the original design didn't anticipate.


Great, but that doesn't describe every Javascript project or use of Javascript. If you're describing big changes, chances are a refactor isn't what's needed; a rewrite is. And even when requirements change, it doesn't mean Javascript can't be refactored. It depends on the skill of the team. If you want to hire idiots, then you're going to need more than strong types to get anything shipped. I doubt types would really help that much in some places, because programmers love to invent their own footguns.


The fact you think that big changes more often than not necessitate a rewrite makes you sound like someone who’s never worked on a very large strongly typed code base where big changes and refactors are absolutely possible, and happen in a reasonable timeframe, without having to resort to a rewrite.

Of course it’s not impossible to have a good JavaScript code base, it’s just much harder to have a very large one where it’s still economically feasible to make significant changes to it without having to resort to a rewrite or needing to write tests which would be covered by the compiler in a strongly typed language.


>makes you sound like

Too bad you don't know me so you're left to ad-hominem attacks on my expertise.

>without having to resort to a rewrite.

Glad you get to move the goalposts anywhere you want to justify any comment you make. You haven't worked on every codebase that ever existed, even though it sounds like you think you have.

In many cases, yes, it is worth a rewrite instead of trying to shoehorn something that exists, only because it exists but is the wrong solution for the new requirements. We're not going to get into the weeds of every kind of refactor in every kind of codebase here on HN, so instead you can move the goalposts and I'll just stop replying here. We can agree to disagree and call it a day.


> In many cases, yes, it is worth a rewrite instead of trying to shoehorn something that exists

I also haven't seen every codebase in existence, but I have seen enough "it's only going to take a year" rewrites that absolutely should have been refactorings, so I remain rather skeptical of the claim that rewrites make sense in "many cases".


Thanks for your anecdotal comment, but it really only means something to you.


I could say the same about your comments. Your snarkiness is not appropriate for this website. Don't start commenting if you're not open to having your views challenged.


> Huge refactors are also not a necessity in every codebase, if the code was written well to begin with.

If dynamically typed languages only worked well in absolutely pristine codebases that never saw any hacks or poorly thought out solutions, I'm not sure they'd be actually useful in most real-life settings. Of course, people refactor all the time in dynamically typed languages as well.


They probably are also more beautiful too huh?


Is it really too much to ask to use the correct terms here, static and dynamic typing?

Strong and weak are sort of coherent as a spectrum, but they aren't a typology, and they do not in any sense or in any case reduce to static vs. dynamic types. Conflating them is not useful: I can make a good case that Julia's type system is stronger than C's, but Julia is dynamically typed and C is statically typed.

There's no reason to keep doing this. It's a malapropism; we could just... not say that. Especially in a thread which is specifically about types.


Sorry, you're right of course. I was on the phone and in a hurry and just took over the terminology of the comment I was responding to without thinking about it. Static and dynamic typing are the correct terms. Strong/weak typing is the difference between Python (strong) and C++ (weak), which are, inversely, dynamically and statically typed respectively.


I agree with this sentiment, but I view it as an inherent feature of languages like F# and Haskell: that small ripple at compile time makes trusting refactors easier. After having experienced this, languages like Python become really challenging to grok without heavy unit testing.


There's also a middle ground which is to support function-local type inference, but not inter-function type inference, which guarantees that such an "anchor" is never all that far away (especially never in another file). This is the approach Rust uses.
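TypeScript (with `noImplicitAny`) enforces something loosely similar for parameters; a rough sketch of the "anchor is never far away" discipline (the function is hypothetical):

```typescript
// Rust-style discipline sketched in TypeScript: the signature is fully
// annotated, while locals inside the body are inferred. Any inference
// confusion is contained within this one function.
function mean(xs: number[]): number {
  const total = xs.reduce((acc, x) => acc + x, 0); // inferred: number
  return total / xs.length;
}
```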


> You can write a lot of F# before you need to add type annotations, but eventually, things become a spiderweb. The key issue is when you make a 'small' change to some method/value, the changes ripple through the program creating confusing errors sometimes where the compiler is trying to knit things together.

I found that working with type inference is its own skill. If you know how to place type annotations well, the ripple effect only affects one or two call sites, and then type inference just continues to infer what you meant in the subsequent code. Though we may be approaching the structuring of the code differently, so YMMV. But I haven't had issues even with complicated member constraints which replicate dependent typing.


Having worked on a large JS codebase back before TypeScript, Flow, or the Closure Compiler existed, I found that the process was not very different from any other code: you check your preconditions on entry to code that will be called from elsewhere; it's just that those preconditions in a dynamically typed language may include the types of your arguments. If you do that, then type errors typically cause your code to fail fast, often on first load, in easy-to-understand ways. The overhead is just a couple of extra easy-to-understand lines at the top of about half your functions (probably less boilerplate than Go's error handling forces on you).
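The entry-point precondition style described above might look like this (a sketch with a hypothetical function; `unknown` parameters stand in for untyped JS arguments):

```typescript
// Validate argument types at the boundary so bad calls fail fast with a
// clear message, instead of surfacing as a confusing error much later.
function scale(values: unknown, factor: unknown): number[] {
  if (!Array.isArray(values) || !values.every((v) => typeof v === "number")) {
    throw new TypeError("scale: values must be an array of numbers");
  }
  if (typeof factor !== "number") {
    throw new TypeError("scale: factor must be a number");
  }
  return values.map((v) => v * factor);
}
```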

Given the fantastic iteration speed that the Web platform had (and still does to some extent) I didn't really miss a compiler for catching errors. The main improvement is that IDEs find it easier to support big refactoring.

I also worked on a mid sized scala codebase a little later on. Scala is probably better now than it was then, but despite (perhaps because of?) the cleverer type system, everything was so slow that it actually took longer for many bugs to be highlighted by the compiler than they would have been found by hitting f5 in a browser window with a good js codebase. That was when I realised that as a developer I care a lot about when an error is highlighted in wall clock time and not at all about which compiler phase it was discovered in.


I think Scala has a good compromise here, where all function signatures require explicit types and (almost) everything else can be inferred. Maybe F# does the same? I find that this is enough "anchoring" to allow for sensible error messages most of the time, although occasionally I'll sprinkle in more annotations if the inferred types would be particularly confusing to readers (usually because they're too general).
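TypeScript shows a small version of the "too general" problem mentioned here (variable names are hypothetical):

```typescript
// Inference alone picks the most general type it can justify:
let retryMode = "exponential"; // inferred: string

// An annotation documents the intended, narrower contract for readers
// and lets the compiler reject typos like "exponentail":
let retryModeNarrow: "exponential" | "linear" = "exponential";
```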


> the changes ripple through the program creating confusing errors sometimes where the compiler is trying to knit things together.

F# is derived from OCaml, and both support interface files (in F#'s case, with the .fsi extension), which are designed to solve exactly this problem. After settling on the surface area of your module, you nail it down by writing an interface file with the exact types you want to enforce. The compiler then uses this to check the usage of the module by its consumers, and provides much more accurate type errors.

It works really well.
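For readers coming from TypeScript, declaration files (`.d.ts`) play a loosely similar role; a rough single-file sketch of the idea, pinning the surface down with an explicit interface (names are hypothetical):

```typescript
// Loose analogue of an .fsi file: declare the module's intended surface
// as an explicit type, then check the implementation against it. If the
// implementation drifts, the error points here, not at every consumer.
interface Counter {
  increment(): number;
  value(): number;
}

function makeCounter(): Counter {
  let n = 0;
  return {
    increment: () => ++n,
    value: () => n,
  };
}
```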


Glad F# was mentioned; my mind went there immediately as well. The ripple is definitely intentional, and from what I've found the .NET compiler is quite performant, so I don't experience the feedback lag others mentioned regarding Scala's compile times. The thing I appreciate about F# is that if it compiles, it probably "just works".



