I don't use Ruby, I am genuinely interested - why is it great? I'm assuming that if it were ever allowed, it would be a use-at-will feature and wouldn't affect anyone who didn't use it. TypeScript has probably doubled, if not more, my speed and accuracy since I adopted it - yet I still do plenty of things in plain JavaScript. These days I'm usually unhappy when something doesn't have typings, because that can make it terribly difficult to discover things.
It's great because Ruby is an Object-Oriented Programming language. Just saying that is an understatement; Ruby lives and breathes Object Oriented philosophies. It was made for them.
The conflict here is that object oriented philosophies aren't actually about objects. They're about communication between objects. The messaging between objects. As per Alan Kay himself:
> I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea. The big idea is "messaging".
The goal of object oriented design is to focus on the communication between objects, not the objects themselves. Part of that is that the type of the object receiving a message doesn't matter so long as it understands the message and knows how to respond. If the object looks like a duck, swims like a duck, and quacks like a duck, that's good enough--even if the duck turns out to be a chicken with an identity crisis. It understood the message and responded, and that's all we want in object oriented programming: objects that can communicate with each other.
Adding type checking flies in the face of this philosophy. Instead of type being irrelevant as long as the receiver of a message can understand that message, suddenly it's front and center. The code will accept or reject objects based on their type even if they're fully capable of upholding their end of the conversation.
Type-less-ness is core to Ruby. But some people may still prefer to include typing. We all want to use the tools and practices that best enable us to deliver, so that's a fair want. But since Ruby as a philosophy doesn't care about type, it's important to keep type checking as an accessory to the language, not a feature of it - something that can be layered on top of Ruby for those who want it, but ignored by those who don't.
Bravo. Let dynamic languages be dynamic. Why does every *damn* language have to approximate Java in the long run? PHP is nothing more than pseudo-Java, and JavaScript is heading in the same direction now that classes have become firmly established. At least there's still Clojure.
The philosophical argument in the Ruby community is basically that Ruby is not a statically typed language, period. And a strong contingent, myself included, do not want a hybrid world where type annotations are optional, spattering redundancies all over our syntax. Mostly because I see that as a step in the direction of some kind of "strict" mode that will ultimately enforce type annotations and type-checking and destroy most of what I love about Ruby.
That's why the approaches being used keep the type annotations out of the source files themselves.
> TypeScript has probably doubled, if not more, my speed and accuracy since I adopted it
TypeScript has never done anything for me other than give me third-party dependency integration headaches. I love strongly typed languages and compile-time checking, but TypeScript has never seemed worth the trade-off, due to its broken interoperability with normal JavaScript and the terrible state of crowd-sourced typedefs. I'm either fighting some badly defined third-party typedef, spending a lot of time creating typedefs myself, or dealing with a version issue because the typedef isn't compatible with the version of the library I'm using.
When I use plain JavaScript I hardly ever run into issues that static typing would have prevented, and I have zero TypeScript issues to deal with.
Honestly how has it improved the speed at which you get things done? Were you just constantly running into JavaScript bugs due to the lack of typing?
This was my experience with TypeScript. Nothing I actually wanted to use had first-class support for TypeScript, and everything I settled for had endless compiler errors that had more to do with the tsconfig than with my actual types.
Then at the end of the day, it was still JavaScript (an interesting word for "not Ruby"), but with types slapped on top.
I ended up switching to Crystal, which is basically Ruby + types (inferred when possible, but I actually wanted the types) with the performance of Go.
Most of the improvement is from the typings that other libraries come with, if, like you said, they are complete. Now I can just ctrl-click into an object to view its methods, and from there view the interfaces those methods accept, and the interfaces those interfaces reference, and so on.
Honestly, I rarely refer to documentation for these things, because every project is a snowflake and the documentation gradient runs from no documentation to perfect documentation. By that I don't just mean the words; I mean the website or framework used to document, as well as the style (more like flavor?) of the documentation. TypeScript is the great equalizer that makes a project with no documentation (but decent comments or method/variable names) just as documented as one that has it.
I can also hit ctrl-space to get a list of methods in case I forgot which one I needed, or if I want to discover what's available. That's enormous in my style of programming. It sure beats going to someone else's documentation page and trying to read it.
Some of the improvement is not necessarily that I had JavaScript bugs due to lack of typing, but rather that with TypeScript I don't get those bugs, which means I no longer have to reason about avoiding them like I did with JavaScript. Sort of a reduced cognitive load.
Also, I have a few coworkers who are not JavaScript/TypeScript savvy whom I was able to get up to speed with TypeScript fairly easily, thanks to the ease of using the types. There are, of course, hard parts - such as Partials, understanding tsconfig.json, or generating types - that I don't cover with them; I just have them come get me when they're ready.
For most things without types I just do a `declare module` in a .d.ts file - though I will first try to find another package that does the same thing with types. Most popular packages these days do include types, some better than others.
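For anyone who hasn't done it, a minimal sketch of that approach; "some-untyped-lib" and its exports are hypothetical names invented for the example:

    // types/some-untyped-lib.d.ts
    declare module "some-untyped-lib" {
      export interface Options {
        retries?: number;
      }
      export function fetchThing(id: string, opts?: Options): Promise<unknown>;
    }

Once that file is picked up by the compiler, imports of the package stop producing the "could not find a declaration file" error and you get completion on the bits you bothered to describe.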
After re-reading the above, I realized that a lot of this depends on the IDE. If I were still using vim or Kate/gedit, it probably wouldn't be a huge time saver. Fortunately, I settled on one of the IntelliJ editors.
I'm having a really hard time understanding this "I need types forced down my throat" and "I like typing 3x as much as I would otherwise need to" and "yes, I want half my screen obscured by the types of everything I'm doing, not the actual code" and the "adding types now means bugs are impossible" mass cult hysteria that's running so rampant. Typing very occasionally prevents bugs that are generally easy to catch/fix or show up straight away when running an app. It's mostly a documentation system. And it slows development down.
Especially in Ruby which is such an elegant "programmer's language" I think it would just be silly.
If your type definitions are 3x longer than the functions implementing them, something is wrong. In languages with complete type inference, you actually don't have to write types at all if you don't want to, though in practice you end up doing so to clarify your intentions.
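TypeScript's inference isn't complete in the Hindley-Milner sense, but even there most annotations can be left off; a small sketch:

    // Only the parameter needs an annotation; the local and the return
    // type are both inferred as number.
    function totalWithTax(prices: number[]) {
      const subtotal = prices.reduce((sum, p) => sum + p, 0);
      return subtotal * 1.2;
    }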
Static types do make certain classes of bugs impossible - missing-method bugs, typos, and the like. You can eliminate a large group of defensive programming techniques and trivial unit tests that you would need in a dynamic language to reach a similar level of confidence in a program. Obviously they don't make all bugs impossible; there will be bugs as long as there are programs, because we write programs without perfect knowledge of the requirements, and that is an unavoidable pitfall of software.
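A tiny TypeScript illustration of the typo/missing-method class of bug (the names are invented):

    interface User {
      name: string;
      email: string;
    }

    function greet(user: User): string {
      // A misspelling like `user.nmae` fails to compile
      // ("Property 'nmae' does not exist on type 'User'"), so the trivial
      // test that would only exist to catch it isn't needed.
      return `Hello, ${user.name}`;
    }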
This can depend really heavily on what you mean by "development." If it's just getting the first version banged out, sure. If it includes coming back to code a couple years later in order to incorporate a new business requirement, having that documentation present can be a really big deal. 2 seconds spent typing out a type hint now might, down the line, save several minutes on average. Even in a recent Python project I did over the course of just a couple weeks, when I got to the "clean the code up and get it ready to put on the shelf for now" phase of the project, I ended up wishing that I had bothered to use type hints just a wee bit more when I was banging it out in the first place. It would have been a net time saver.
I don't love static typing in all cases, because it makes it hard to do data-level programming, which I find to be the true productivity booster in dynamic languages. But optional typing seems to hit the sweet spot for a great many purposes.
For example, JSON describes a logical structure of nested lists and dictionaries. If you were doing data-level programming, you would just map the JSON into actual nested lists of dictionaries and get on about your business.
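A sketch of that style in TypeScript terms, with the payload and field names invented for illustration:

    // Parse it and walk it as plain nested data, touching only what the
    // task at hand needs.
    const payload = '{"metrics":[{"id":"1","value":"999","tags":["myapp"]}]}';
    const doc = JSON.parse(payload);               // typed as `any`
    const firstTag = doc.metrics?.[0]?.tags?.[0];  // "myapp"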
The alternative, which is more common in static languages like Java, is to transform it all into some set of domain model objects, and probably validate it up-front, too. Even the bits you don't actually need to look at in order to accomplish the job at hand. IMO, that approach tends to mean creating a lot of unnecessary work for oneself. It also makes it harder to obey Postel's law.
(The corollary to that last bit is that it is also possible for static typing to create bugs.)
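For contrast, a rough sketch of the up-front domain-object style described above, where one absent field rejects the whole payload even if nothing downstream ever reads it (names again invented):

    interface Metric {
      id: string;
      timestamp: string;
      value: string;
      tags: string[];
    }

    function parseMetric(raw: unknown): Metric {
      const m = raw as Partial<Metric>;
      if (!m.id || !m.timestamp || !m.value || !m.tags) {
        // Rejects the message over fields this code path may never look at.
        throw new Error("invalid metric");
      }
      return m as Metric;
    }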
I'm skeptical of Postel's law: if you deviate from the spec, how can the meaning be clear? It seems to me like you would have to go out of your way to implement a buggy version of the spec?
A personal example of this: httpd used to accept standard headers with spaces instead of dashes, which leads to strange behavior if you accidentally include both. So they decided to stop doing that in a major version. That major version was opaquely rolled into our base images by ops, which led to a very long day of debugging on our end.
Point is, by being liberal with what you accept you create ambiguity, which you may not totally understand at the time. Once that's out in the wild, you're basically forced to keep this ambiguous, undocumented spec alive, or you'll no doubt end up breaking some client.
That's definitely a concern, but it's also way outside of what I was talking about. I would also expect any JSON parser, even one in a dynamic language, to fail on JSON that is straight-up malformed. And ambiguous formats are always bad news.
I'm talking about situations where the JSON is formatted fine, it's just that some field wasn't specified, so then the entire input gets rejected. Even though there was zero need to read the contents of that field in the first place. It just happened to be included in some domain object that gets re-used everywhere, including some other places where the field's contents do matter.
Keep in mind that, when we're dealing with anything that might be transmitted in JSON, thinking that there might be a published spec, and that it manages to accurately cover all these details, is really optimistic. I've honestly never seen it happen in the wild. Oftentimes, any validation rules you might try to impose are guesswork as much as they are anything else. So complaining that a piece of data didn't conform to the spec might not even be a valid thing to do. All you can say for sure is that the data didn't meet the needs of some piece of business logic.
It's not perfect, but it's life. This tension, for example, is at the heart of why proto2 got replaced with proto3, and why using proto3 is strongly encouraged if you're looking to build a robust infrastructure.
There are huge debates at Google internally over required vs optional in proto2 and proto3.
Beyond that, I think you're operating from a misconception about JSON parsing in static languages. There's no requirement to convert to domain objects and reject data over a trivial mismatch; you're just required to specify explicitly what happens when you encounter unexpected structure or data.
Sorry if I wasn't being clear. I'm not saying that's the only way it can work in static languages. I'm saying that that's the way it tends to work out in practice, because the ergonomics of most popular static languages tend to discourage a less brittle approach.
Whereas the ergonomics of popular dynamic languages tend to favor an approach that I find, for this specific purpose, to be both less verbose and more robust.
> For example, suppose we have JSON that represents a set of metric data (this isn't our real JSON, this is just a thought experiment) that should look like this, with "tags" being an optional attribute:
{ "id": "1", "timestamp":"12:30pm", "value":"999", "tags": [ "myapp" ] }
> Suppose a Python client sends tags but calls the attribute "tag" rather than "tags" (it's missing the "s"). It's an optional attribute, so the server won't consider it an error if the "tags" attribute is missing. But it also won't fail on this unknown attribute called "tag" - it will just silently ignore it. The Python developer is left wondering why his tags aren't being stored - he gets no errors, they are just silently dropped. He would need to figure out that he is sending the wrong attribute name, with no error messages to help him out.
> That's the use-case I'm asking about - the "silent error" that will occur due to malformed JSON messages.
What is the difference in approach between these? I've programmed extensively in dynamic and static languages, and don't understand what you're talking about. Less verbose, I might concede. More robust though, I need some more evidence.
Reminds me of Rich Hickey's "Maybe Not" talk, which I understood as suggesting that programming with "sets" is better than programming with "records" that may contain optional values.
Yes, I know it and he seems to mostly ignore the fact that you can still fall back to manual typechecking in a statically typed language. That’s the part I don’t get. There’s nothing stopping you from manipulating JSON structurally in a static language.
You can definitely still do this kind of programming in a statically typed language. There are a few ways to go about it.
One way is to treat the JSON as a generic JSON structure, and traverse it manually. Of course, you will have to be explicit about what should happen when children are of different types from what you expect, though this explicitness could just be throwing an exception or ignoring it. Haskell's Aeson and Rust's serde_json both support this, as does .NET's JsonElement type.
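A TypeScript-flavored sketch of the same idea (field names invented): parse to `unknown` and narrow step by step, stating explicitly what a mismatch means:

    function tagsOf(raw: string): string[] {
      const doc: unknown = JSON.parse(raw);
      if (typeof doc === "object" && doc !== null && "tags" in doc) {
        const tags = (doc as { tags: unknown }).tags;
        if (Array.isArray(tags) && tags.every(t => typeof t === "string")) {
          return tags;
        }
      }
      return [];  // the explicit choice here: anything unexpected is ignored
    }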
Unfortunately, this means you're passing around a lot of objects called something like "JSON" without any information at the type level about what they contain. As a middle ground between that approach and creating domain objects, there are row-polymorphic records, which let you write functions that accept any record having certain fields, while specifying that it may also contain other fields you don't handle. This lets you program to what you know about the types you're ingesting without having to write a lot of new types.
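TypeScript's structural typing is a loose approximation of this: a function declares only the fields it reads, and any record carrying at least those fields is accepted. It isn't full row polymorphism (the extra fields aren't tracked through the function's type), but it covers the "accept anything with these fields" half. The names below are invented:

    interface HasTags {
      tags: string[];
    }

    function tagCount(x: HasTags): number {
      return x.tags.length;
    }

    // Extra fields are fine as long as the declared ones are present.
    const metric = { id: "1", timestamp: "12:30pm", value: "999", tags: ["myapp"] };
    console.log(tagCount(metric));  // 1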
Clojure is strongly typed. I think you mean statically typed.
They're orthogonal concerns. C is statically and weakly typed. Clojure is dynamically and strongly typed. PHP is dynamically and weakly typed. Haskell is statically and strongly typed. Java, as the most design-by-committee language ever, manages to be a mix of all four.
Weak typing is when types get automatically converted, e.g. `2 + "3" == 5` (PHP) or `"2" + 3 == "23"` (JavaScript). Strong typing doesn't do these automatic conversions; it throws an exception or produces a compiler error instead.

Static typing: types are checked at compile time.

Dynamic typing: types are checked at runtime.
"Strong" typing doesn't mean much of anything and I generally try to avoid using it but slipped up here. When I do use it, I use it as a synonym for static languages with expressive type systems. I prefer statically typed languages.
Strong typing generally does not mean much and everyone seems to be using a different definition. Would you consider Javascript weakly typed? What about Python?
I'd consider JavaScript to be toward the weak end of things, because it does lots of automatic conversions with surprising results (see, for example, Gary Bernhardt's "Wat" lightning talk). I don't think I'd consider it as weak as C, which has things like unions and pointers that let you just sort of fall out of the type system entirely.
I'd consider Python to be more strongly typed than JavaScript. It doesn't do quite so many automatic conversions. For example, in Python, `1 + "foo"` is a TypeError. In JavaScript, it's "1foo". Sadly, `1 == True` in Python, so it certainly doesn't get full marks.
    -- A toy Add class that reproduces JavaScript-style mixed-type (+)
    -- on top of a statically, strongly typed language.
    {-# LANGUAGE MultiParamTypeClasses, TypeSynonymInstances, FlexibleInstances #-}

    import Prelude (String, (++), show, Int, (==))
    import qualified Prelude

    class Add x y where
      (+) :: x -> y -> y

    instance Add Int String where
      (+) x y = show x ++ y

    instance Add Int Int where
      (+) x y = x Prelude.+ y

    instance Add String String where
      (+) x y = x ++ y

    a = ((1 :: Int) + (1 :: Int)) == 2
    b = ((1 :: Int) + "aa") == "1aa"
    c = ("a" + "aa") == "aaa"
Examples like the last one about Python are why I think it’s approximately meaningless as a descriptor. I don’t see why dynamic languages should have any implicit conversions at all.
Where you store the type information and when you do the type check is a separate question from whether you do the type conversions automatically or not.
I think a more interesting question is typecasts, as happen in languages like Java and C#. These languages are nominally statically typed, but they retain some type information at run time, so that you can perform run-time type conversions, which requires run-time type checking - which is the defining feature of dynamic typing.
C# is a little bit more straightforward about being a hybrid static/dynamic language, with its reified generics and dynamic references. But teasing out the details of where, how, and the extent to which Java is statically or dynamically typed would make a decent topic for a master's thesis.
It also hints at a deeper thing that one must be mindful of: static/dynamic and strong/weak are not binary categories. They're not even the extremes of two binary scales. They are somewhat vague descriptions that are meant to serve as useful shorthands for certain sets of choices that one must make when designing a language's type discipline.
But the fact that they're not cut-and-dry terms does not mean that they're meaningless. It just means that one must disabuse oneself of the notion that they're cut-and-dry before one can have a conversation about type discipline that goes beyond a certain level of detail.
You’re muddying the waters. Static and dynamic have a much clearer distinction between them than “strong” and “weak” typing do. These things aren’t binary but that doesn’t mean they are equally descriptive terms.
Java is a statically typed language with late binding implemented through subtype polymorphism and its type system has been explored pretty extensively in the literature.
> Typing very occasionally prevents bugs that are generally easy to catch/fix or show up straight away when running an app.
This is not true. You could paint almost every language feature aimed at producing correct software this way: "writing tests makes me type more, and they only catch bugs that would have shown up when running my app anyway". (Or, as an ex-coworker once told me, "I don't need to write tests because I never have any bugs".)
And what are types if not a kind of test/proof that the computer writes for you?
> And it slows development down.
There's a software development adage that goes like this: "I don't like writing tests, because they make me waste time I need to fix bugs on production that weren't caught because I don't write tests."
> It's mostly a documentation system. And it slows development down.
Well, I guess this is also a matter of perspective.
From where I'm standing, I'd rather you slow down and "document" your code. Code written at the speed of thought makes for an awesome MVP and for an awful legacy for your co-workers.
In the course of my job I write Swift for iOS and Ruby for server APIs and our web-based UIs.
Type issues are about 0% of my Ruby bugs, but dealing with all the damn type requirements in Swift regularly costs me dozens of minutes tracking down whatever weird, esoteric error message pops up. And God help you if you try to use generics.
If you want strong typing, then good for you. Just pick a language that fits that mold.
So much of what I love about Ruby is what it doesn't make me do.
Type issues are 0% of your Ruby bugs because you're not using a typechecker. I guarantee you have type errors somewhere if your codebase is large enough.
My point is that imposing a big ass type system on developers as a "solution" to a trivial number of actual problems is overkill.
I'm sure there are developer/projects that both enjoy and benefit from static typing and strict type systems of various kinds. I just want Ruby to remain a place for those of us who aren't in those positions.
I'm not sure what a "big ass type system" is, and I disagree that the number of actual problems is trivial. However, I'm in no more position to say what Ruby should be than you are, and I'm sorry you're so opposed to static types that even attempting to support them is a minus in your book.
However, even with TypeScript ascendant, the vast majority of people programming JavaScript write vanilla dynamic JS. I don't think dynamically typed Ruby is ever going to die. Whether large enterprise codebases will standardize on requiring type signatures is a different matter, because past a certain scale the benefits outweigh whatever downsides you see in static typing.
> Swift's type system is what I have in mind: strict, complex, required, and in my experience, often petty.
I do hear a lot of complaints about Swift's type system. I wonder what the specific problems are, because I do not hear similar complaints about Rust. I wonder if it's the combination of subtyping with a lot of type inference and also a full-on trait system with protocols and extensions and such.
My biggest complaints all center around the intersection of custom types with protocols and extensions, especially when trying to get a generic approach to something working.
In my experience at least 70% of bugs are ones that you'd catch by using types - things like x instead of y, possibly-empty list instead of known-nonempty list, user ID instead of group ID. Logic errors that couldn't be caught by typing do exist, but they're very much the minority.
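For the user-ID-versus-group-ID example, a sketch of one common way to encode it in TypeScript, using branded types (names invented):

    type UserId = string & { readonly __brand: "UserId" };
    type GroupId = string & { readonly __brand: "GroupId" };

    const userId = (s: string) => s as UserId;
    const groupId = (s: string) => s as GroupId;

    function loadUser(id: UserId): void {
      // ...look the user up...
    }

    loadUser(userId("u-7"));       // fine
    // loadUser(groupId("g-42"));  // compile error: GroupId is not assignable to UserId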
Maybe we just work on different kinds of problems.
70%+ of bugs I deal with are business logic issues that no type system could solve.
Sure, as I code I run into an occasional nil object or NoMethodError, but those last as long in Ruby as they do in Swift (about 2-5 minutes while working on that specific part of the code).
I've worked across a wide range of industries over several years, and it's always been pretty similar. You should be building the business constraints into your types so that errors in the business logic become errors in the types - in my experience if you actually work with the type system then most errors become type errors. If you've got examples of the kind of errors you're talking about then I could try to be more specific.
Not the GP, but here is a scenario that I am interested in understanding from the perspective of types.
A calculation that involves 21 parameters (in a particular insurance-underwriting scenario) yields a number. A threshold is read from the database. This threshold could change every month.
Suppose that the current value of the threshold is 0.78. The calculation above can yield an `x` with the following cases:
(i) x <= 0.78,
(ii) x > 0.78.
We have hundreds of test cases for the combinations of the 21 parameters, leading to hundreds of values for `x`. It is a bug for `x` to be > 0.78 when it should be the other way.
Is there a way this can be encoded in types? That would be very interesting.
This description doesn't quite make sense. If the threshold is regularly changing, the calculation can output the same result number for the same 21 parameters and have it be a bug or not a bug from month to month, depending on the threshold. How can you write a test for that without locking in the threshold? Indeed, without hard-coding the threshold in the calculation itself?
Sure. Create a type that represents x being <= that threshold, with a private constructor. Only allow constructing it via a factory method that requires it to be an x that should be <= the threshold. Then whenever you have a value of that type, you know that it's legitimately <= the threshold, and the bug becomes impossible.
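A minimal TypeScript sketch of that suggestion, with the class and method names invented:

    class BelowThreshold {
      private constructor(readonly value: number) {}

      // The only way to obtain a BelowThreshold is to pass the check, so any
      // function that accepts one never needs to re-verify it.
      static check(x: number, threshold: number): BelowThreshold | undefined {
        return x <= threshold ? new BelowThreshold(x) : undefined;
      }
    }

    function record(result: BelowThreshold): void {
      console.log(result.value);
    }

    const r = BelowThreshold.check(0.5, 0.78);
    if (r) record(r);  // the x > threshold case has to be handled explicitly

The threshold itself can still come from the database at runtime; what the type records is that the comparison was actually performed before the value reached the rest of the code.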
Don't you see the irony in your own comment? If you never create type-related bugs in Ruby, then you shouldn't encounter them in a typed language either, because you are infallible. The truth is probably that you see all the type errors at runtime instead and don't recognize them as such.
Sure, and I get compile time errors in Swift. Each last about 2-5 minutes.
The actual bugs I have to fix are nearly always business logic issues. Edge cases around 3rd party integrations, incomplete implementations, unintended side effects, etc.
Types are great for tooling, which is a much bigger drive for me to use them than soundness guarantees. I can’t stand opening up API docs in a separate tab (or god-forbid browser window) once I got used to having literally everything I could want to know about how I can use a value available with a simple Cmd+Space.
> I like typing 3x as much as I would otherwise need to
3x? Even in languages that do not support type inference I would say it is at most 1.1x. And type inference exists.
> adding types now means bugs are impossible
I usually see that as a misrepresentation of what type advocates say. Rather, the claim is just that types reduce the number of bugs.
> or show up straight away when running an app
Or bugs that show up after you've had said app running for a while, when you get a run-time type error that appears only after certain actions. This is the main reason I avoid languages like Lua and Python.
(In addition, languages with more advanced type systems allow you to catch bugs such as buffer overflows or division by zero at compile time.)