
I would split use #3 into two parts: 3a) disambiguation for builtin operators - e.g. a float local needs to go in a float register, adding 16-bit integers uses 16-bit addition, a struct local needs to go in an appropriately-sized memory location. This is basically the same as #2. 3b) schema-ed data storage. This allows laying out structured data in a compact way in memory (e.g. `struct x { int a; int b; }` consists of 2 integers one after the other). This functionality is useful even in unityped code.



You are right that standard arithmetic operators are typically polymorphic, but that doesn't mean there is another use case. Another example I had in my head was little- and big-endian integers, or binary and decimal integers. An example of a language where arithmetic operators are not polymorphic is assembler.

What I would say, perhaps, is that #2 and #3 are in a way complementary. In #2, we describe a common operation with a more abstract type, while in #3, we describe the specific representation with a more concrete type. So both #2 and #3 address naming things at different levels of abstraction, which isn't directly related to the program specification or its correctness (use #1).


The arithmetic operators in x86/x86-64 are certainly polymorphic (over word-length, plus integer vs. x87 vs. SSE).

I think the distinction is that #3b types, which denote the "encoding" of a value (mapping it to its meaning), are often used as the basis for a #2-style type system (to parameterize operations).


> The arithmetic operators in x86/x86-64 are certainly polymorphic

I disagree, at least for machine code: there, the instruction code determines the type of the operands. If the operators were polymorphic, it would be the other way around: the type of the operands would determine which specific instructions are used (so you could, for example, reuse the same code for different word lengths). Maybe modern assemblers can do that (and have a generic instruction name for addition, say); it's been 20 years since I programmed in x86. I have recently used mainframe assembler, where it works as I describe.

Polymorphism is all about names. You want the same name (and by extension, the same code) to refer to potentially different operations on data.

> I think the distinction is that #3b-types

I am still not sure how that differs from #3a in your definition. When I say 16-bit signed integer, I can also mean it as an encoding of the abstract "integer number". The whole point of declaring a type in the sense of #3 is to be specific about the encoding. In mathematics, you (typically) don't care about that; but you sometimes care about use case #2, which is typically dealt with by mapping to more abstract concepts via some morphism.




