
Pretty much all languages designed for mathematics use 1-based indexing: Mathematica, R, Matlab, Fortran, etc. Either the designers of all these languages made the same mistake, or it simply makes more sense for mathematical computing to follow mathematical convention.



Is it possible that mathematics got it slightly wrong? The whole concept of 0 is relatively recent, and plenty of mathematics predates its inclusion, so presumably the same argument for maintaining convention applied to successive mathematicians too.


It's not about right or wrong; they simply work for different things, but a programming language, unlike math or human language, has to pick one as the default. 1-based indexing is good for counting: if I want the first element up to the 6th element, I write 1:6, which is more natural than 0:5 (from the 0th to the 5th). 0-based indexing is good for offsets: I'm born in the first year of my life, but I wasn't born as a 1-year-old, I was born as a "0-year-old".

And since pointer arithmetic is based on offsets, it wouldn't make sense for C to use anything other than 0-based indexing. But mathematical languages aren't trying to map onto the hardware; they're trying to map onto the mathematics, which already uses 1-based indexing for vectors and matrices. You can see which languages use which convention in [1].

If you want to write generic array code in Julia, you shouldn't use direct indexing anyway, but iterators [2], which let you work with arrays using whatever offset fits your problem; and for things that are awkward with 1-based indexing, like circular buffers, the standard library already provides helpers such as mod1() (see the sketch after the links below).

[1] https://en.wikipedia.org/wiki/Comparison_of_programming_lang...

[2] https://julialang.org/blog/2016/02/iteration
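
To make that concrete, here is a minimal Julia sketch (mysum and next are names I made up for illustration; eachindex and mod1 are real Base functions):

  # Generic iteration: eachindex walks an array's own indices,
  # whatever offset that array happens to use.
  function mysum(a)
      s = zero(eltype(a))
      for i in eachindex(a)
          s += a[i]
      end
      return s
  end

  # mod1(x, n) maps onto 1:n rather than 0:n-1, which is exactly
  # what circular access into a 1-based array needs.
  buf = [10, 20, 30, 40]
  next(i) = mod1(i + 1, length(buf))  # step a ring index, wrapping 4 -> 1
  buf[next(4)]                        # == buf[1] == 10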


> 1:6 which is more natural than 0:5 (from the 0th to the 5th)

This is again just begging the question. When you want to refer to the initial element as the "1st", it is due to the established convention of starting to count from 1. The point is that the reasoning for starting from 1 might be only that: conventional, not based on some inherent logic.


You start counting with 1 because 0 is a term created later to indicate an absence of stuff to count. If I have one kid, I start counting at one; if I have 0 kids, I don't have anything to count.

That said, I agree there is no inherent logic: math is invented, not discovered, and you could define it any way you want. If we all had 8 fingers, we would probably use base 8 instead of base 10, after all.


Actually we naturally count from 0, because that's the initial value of the counter.

It just so happens that the edge case of 0 things doesn't come up when we actually need to count something. Starting from 1 is kind of like how head is a partial function (bad!) in some functional programming languages. Practicality beats purity.
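
A quick illustration of the partial-function point, using Julia's first in place of head (my substitution; the comment is about functional languages in general):

  first([1, 2, 3])  # 1
  first(Int[])      # throws BoundsError: undefined on empty input, i.e. partial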


Does it matter if it's wrong? In mathematics it's a pretty standard, if unwritten, convention that, for example, the top-left corner of a matrix has position (1, 1) and not (0, 0). If I read an equation and see an "a3" in it, I can safely assume that an a1 and an a2 exist, all three of them constants of some sort. I can also safely assume that there is no a0, because that just isn't the convention. Furthermore, when I do encounter a 0 subscript (e.g., v0), it implicitly denotes something special: a reference point or an original starting value. This is different from seeing a 1 subscript, such as v1. For example, consider the equations

f = v0 + x

f = v1 + x

Those are the same equation, right? Sure, but when I see v1 I'm not really sure what it is or could be, whereas if I see v0 I can assume it's probably the initial velocity, which I can look up.




