How do you deal with namespaces (or anything analogous like package names) there?
E.g. if you have a function named "+", it can likely clash with other functions named "+"
If the programming language supports function overloading that can be resolved through the types
By the way, I actually like RPN calculators, and low level stack based programming languages, where operators and functions also work the same way, but that is mostly interactive or for small self contained programs
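A minimal sketch of that uniformity (in Python, since the thread has no single language): in an RPN evaluator, symbolic operators and named functions are looked up the same way, and both just pop operands off the stack and push a result.

```python
# Minimal RPN evaluator: "+" and "max" are treated identically --
# every word is just a binary function applied to the top of the stack.
import operator

WORDS = {
    "+": operator.add,
    "*": operator.mul,
    "max": max,  # a named "function" used exactly like an operator
}

def eval_rpn(source: str) -> list[float]:
    stack: list[float] = []
    for token in source.split():
        if token in WORDS:
            b, a = stack.pop(), stack.pop()
            stack.append(WORDS[token](a, b))
        else:
            stack.append(float(token))
    return stack

print(eval_rpn("3 4 + 2 max 5 *"))  # -> [35.0]
```

Adding a new "operator" is just one more dictionary entry, which is exactly the operators-are-ordinary-functions property the comment is pointing at.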
This is a problem of the language's model of polymorphism. It's orthogonal to the function/operator distinction and doesn't arise from having that distinction or not.
In Haskell you use typeclasses (you can think of them as Go interfaces or Rust traits), and without them you cannot introduce name clashes. In Scheme (the Lisp that I know)... you don't have anything. You import things from libraries, define variable bindings, and depending on the order of all this your variable ends up bound to something, most likely a procedure... but which one? Depends on the order. Not great. I prefer the Haskell approach to this, but then Haskell is a bit complicated in other areas. :/
But again, this has little to do with "operators vs functions".
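To make the dispatch-by-type idea concrete, here's a rough Python analogue (the `Vec` class is made up for illustration): the same "+" symbol is resolved by the type of its operands rather than by one global binding, which is loosely what a typeclass instance buys you in Haskell.

```python
# One "+" symbol, many implementations, selected by operand type --
# no clash, because each type carries its own definition.
from dataclasses import dataclass

@dataclass
class Vec:
    x: float
    y: float

    def __add__(self, other: "Vec") -> "Vec":
        return Vec(self.x + other.x, self.y + other.y)

print(1 + 2)                  # int "+"  -> 3
print("ab" + "cd")            # str "+"  -> ab + cd concatenated
print(Vec(1, 2) + Vec(3, 4))  # Vec "+"  -> Vec(x=4.0, y=6.0) ... componentwise
```

Python resolves this dynamically per object, while Haskell resolves the typeclass instance statically, but the user-facing effect is similar: "+" never has to be one single globally-bound procedure.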
> But again, this has little to do with "operators vs functions".
It has to do with it in the sense that you can add package names or namespaces to function names, which are already text, but it looks bad to do this with symbolic infix operators like "+".
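A quick Python illustration of the asymmetry: named functions take a namespace qualifier naturally, while the infix spelling of an operator has nowhere to put one.

```python
# Named functions namespace cleanly; symbolic infix operators do not.
import math
import operator

print(math.sqrt(9.0))      # a qualified function name reads fine
print(operator.add(2, 3))  # "+" can only be qualified in prefix form
# There is no infix spelling like `2 operator.+ 3` -- the syntax
# has no place to hang a namespace on an infix symbol.
```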
It would be nice if everything that operates looked exactly the same, but maybe the field of mathematics could start by fixing its notation then instead: mathematics uses a bit of everything you can imagine:
Addition, subtraction, multiplication, division, powers, roots and logarithms are all binary operations, and instead of making them all look the same, mathematicians have chosen:
* an infix symbol for addition and subtraction
* no symbol at all (usually) for multiplication
* a superscript for powers
* a horizontal bar with one expression stacked on the other for division
* another horizontal bar with a superscript on the left for roots
* function-name notation with a subscript for logarithms
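The list above, rendered in the conventional notation:

```latex
a + b,\quad a - b \qquad \text{(infix symbols)}
ab                \qquad \text{(juxtaposition for multiplication)}
a^{b}             \qquad \text{(superscript for powers)}
\frac{a}{b}       \qquad \text{(horizontal bar for division)}
\sqrt[n]{a}       \qquad \text{(radical with a left superscript)}
\log_{b} a        \qquad \text{(function name with a subscript)}
```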
Maybe this is a global optimum that math has converged to because this mix of notations is actually the easiest way for the human brain to read formulas quickly. E.g. the grouping that the division bar creates eliminates the need for some parentheses, and the sharp visual difference between all the notations may really help. In some programming languages, if you've got dozens of parentheses or deeply nested indentation, can you easily tell which argument belongs to which function?
Or it's just a historically grown monster that could just as well have turned out entirely differently, and other forms might be faster to interpret.
A lot of these notations were established long before mathematicians started talking about functions, let alone gave a robust definition of what a function is. That history may have obscured the fact that these operations are very similar to each other.
Around the time there was an effort to formalize these things, ideas like RPN started popping up.
Mathematics also deals with a much more flexible medium. It's written on a plane, in 2 dimensions, whereas programmers confine themselves to just one. Mathematicians are also free to make up their own notation and symbols, whereas programmers are confined to a single alphabet and most often can't extend the syntax.
EDIT: Case in point: some mathematicians argued for a while if abs (absolute value) is a function. Some argued that it's not, because you need two formulas to define it. When somebody pointed out that (for real numbers) it's just "sqrt(x^2)", the first ones agreed. Sounds silly in retrospect now that we have a set-theoretic definition, but at the time it wasn't obvious and all these functions looked very different from each other.
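The two definitions being equated, side by side (with sqrt taken as the principal, non-negative root):

```latex
|x| \;=\;
\begin{cases}
  x  & \text{if } x \ge 0 \\
  -x & \text{if } x < 0
\end{cases}
\qquad\text{versus}\qquad
|x| \;=\; \sqrt{x^{2}}
```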
> Case in point: some mathematicians argued for a while if abs (absolute value) is a function. Some argued that it's not, because you need two formulas to define it. When somebody pointed out that (for real numbers) it's just "sqrt(x^2)", the first ones agreed.
Isn't that a circular definition, though? sqrt(x²) = ±x, which isn't a function since there are two values in the range for each value in the domain (other than zero). The version they're equating with abs(x) is the absolute value of the square root, or abs(x) = abs(sqrt(x²)), which is true but wouldn't help prove that abs(x) is a function.
Of course, the underlying problem was the premise that a function must be defined by exactly one formula.
When people take the square root of a number while working with real numbers, they almost always mean the non-negative root. It's a convention. Only when they start considering complex numbers does this convention break, and the square root ceases to be a function (even according to the modern definition), because "non-negative" no longer narrows it down to a single number in the general case.
Isn't that basically what I said? The convention is that sqrt(x) is generally read as abs(sqrt(x)). The definition is that sqrt(x²) = x, which has positive and negative solutions in x for any x² > 0. You can choose to ignore the negative solutions (or the positive solutions) to make it a function, but I wouldn't consider that any simpler or closer to a single formula than the piecewise-defined version of abs(x). It's an arbitrary restriction, much like the mistaken idea that a function must be defined by exactly one formula.
Why not just say that abs(x) = ±x, ignoring the negative solutions? If you'll accept "the non-negative square root of x²" then I see no reason to reject "the non-negative component of ±x". Both are single formulas with positive and negative solutions combined with a qualifier rejecting the negative solutions.
> The definition is that sqrt(x²) = x, which has positive and negative solutions in x for any x² > 0.
When people write "sqrt", they mean the function, i.e. the "principal square root". "A square root" is a different thing. Saying "the definition" makes sense only within the context where the definition is established; otherwise we have to fall back to the common usage. Wittgenstein would laugh at this conversation.
Also, I'm not arguing for this convention ("it needs to have a single formula to be a function"). And when you start substituting symbols, almost nothing withstands this test, because you can always twist the formula into several cases. And generally, thinking about functions as something that is defined by formulas is a very limiting view. Almost all functions cannot be defined by formulas.