This article gets at the basic idea of a subfield of economics called "welfare economics". The general problem is: how do you combine individual well-beings into an aggregate in order to make decisions that affect multiple people? We can also answer questions like "Given certain assumptions about bargaining outcomes (Nash equilibria, etc.), what aggregation function will rational actors come to on their own?"
This article presents a "priority view" as a contrasting moral view to "pure utilitarianism" and wonders why the view hasn't caught on outside of moral philosophy. The answer is that it has, and outside of philosophy we have models of aggregate utility that subsume both of the moral views in the article. This was a very active field from the 1930s to the 1970s, and now most of the interesting work here IMO is done in the cryptocurrency space (trying to find ways to prevent forks or incentivize participants to be pro-social).
The two points of view described in this article are just two specific "social welfare functions" we could optimize for. There are many others.
The "priority view" in this article is known as the "Kalai egalitarian bargaining solution" in economics and game theory, or the "Rawlsian" social welfare function (maximize the minimum individual utility):
The "pure utilitarianism" view is not a bargaining solution, but it's known as the "Benthamite" social welfare function (maximize the sum of individual utilities).
I'm curious about whether anyone has studied using the harmonic mean (or at least, the reciprocal of the sum of reciprocals) as a way of aggregating utilities. I haven't had time to research this, but the thought repeatedly occurs to me because I have gifted kids in a school district that is especially concerned with the gap in achievement between the highest and lowest performers. I can't shake the feeling that what the district really should want is a measure that prioritizes helping the lowest performers but does not treat the performance of the highest performers as having literally negative value, and the harmonic mean fits that bill.
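A minimal sketch of that idea (utility numbers made up; note the harmonic mean assumes all values are strictly positive):

```python
def harmonic_mean(utilities):
    """Reciprocal of the mean of reciprocals; dominated by the smallest values.

    Assumes every utility is strictly positive.
    """
    return len(utilities) / sum(1.0 / u for u in utilities)

baseline   = [2, 5, 8]
raise_low  = [3, 5, 8]  # +1 to the lowest performer
raise_high = [2, 5, 9]  # +1 to the highest performer
print(harmonic_mean(baseline))    # ~3.64
print(harmonic_mean(raise_low))   # ~4.56 -> large gain from helping the lowest
print(harmonic_mean(raise_high))  # ~3.70 -> small but still positive gain
```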
> The "priority view" in this article is known as the "Kalai egalitarian bargaining solution" in economics and game theory, or the "Rawlsian" social welfare function
No, that is a common misunderstanding, especially understandable here since some of OP's formulations also conflate the two. If you're an econ person, a good text that puts the prioritarian view in context is Matthew Adler's 2020 Measuring Social Welfare: An Introduction. I'd also recommend that book to anyone here who claimed e.g. "we can't compare across persons!" but who is open to reading a case against that claim.
I don't think I'm misunderstanding. Here's a quote from the article.
"The point being made here is that we do not assign any moral value to decreasing the well-being of those who are better off ..., but we do assign more moral value to increasing the utility of those who start from a lower base."
That's explicitly 0 weight on those who are "better off", leaving all the weight for those with the lowest utility. That's precisely the Rawlsian function. Traditionally Rawls only described the most extreme version, but softer versions, with weights that are not all zero or one but follow the same pattern, are in the same family and are also generally called "Rawlsian".
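One standard way to formalize those softer versions is a rank-weighted SWF: sort utilities worst-first and put more weight on lower ranks. All the weight on the first rank recovers maximin; equal weights recover the plain utilitarian sum. A minimal sketch (the weights are illustrative, not canonical):

```python
def rank_weighted(utilities, weights):
    """Weighted sum over utilities sorted worst-first.

    weights=[1, 0, 0] gives maximin; equal weights give the Benthamite sum.
    """
    return sum(w * u for w, u in zip(weights, sorted(utilities)))

profile = [1, 6, 7]
print(rank_weighted(profile, [1, 0, 0]))        # 1: pure maximin
print(rank_weighted(profile, [1, 1, 1]))        # 14: pure sum
print(rank_weighted(profile, [0.6, 0.3, 0.1]))  # 3.1: a "softer" in-between
```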
No, the OP claim "we do not assign any moral value to decreasing the well-being of those who are better off" is different from your claim "0 weight on those who are 'better off', leaving all the weight for those with the lowest utility".
The former is a way, albeit too compressed by OP, to distinguish prioritarianism from the kind of egalitarian view where interpersonal relative differences affect the value of increasing/decreasing one individual's well-being, which makes egalitarianism the target of the so-called levelling-down critique. See Parfit's article that OP discusses for more on that distinction and critique.
Prioritarianism does not give 0 weight or value to increasing the well-being of those who are already better off - on the contrary, a distinctive feature of prioritarianism is that increasing someone's well-being always has some value. Only how much value varies with the absolute level the increase starts from. In jargon, the most prominent version of the prioritarian social welfare function is continuous, strictly increasing and strictly concave. In contrast, a leximin SWF (sometimes called "rawlsian" because of some points of similarity with the difference principle, one component of Rawls's systematic theory of justice) gives absolute lexical priority to increasing the worst-off position's well-being.
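To illustrate the contrast, a minimal sketch: a prioritarian SWF sums a strictly concave transform of each utility (square root here, an arbitrary choice of mine), while leximin compares sorted profiles lexicographically:

```python
import math

def prioritarian(utilities):
    """Sum of a strictly concave transform g of each utility; here g = sqrt.

    Continuous, strictly increasing, strictly concave: a gain always adds
    some value, but less so the higher the level it starts from.
    """
    return sum(math.sqrt(u) for u in utilities)

def leximin_key(utilities):
    """Sorted-ascending profile; Python compares these lexicographically."""
    return sorted(utilities)

raise_low  = [2, 9]   # +1 to the worse-off of [1, 9]
raise_high = [1, 10]  # +1 to the better-off of [1, 9]
print(prioritarian(raise_low))   # ~4.41: helping the worse-off counts for more...
print(prioritarian(raise_high))  # ~4.16: ...but helping the better-off still counts
print(max([raise_low, raise_high], key=leximin_key))  # [2, 9]: leximin's pick
```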
All this said, perhaps there is a way to relax or modify a leximin SWF so as to make it functionally equivalent to a prioritarian SWF, but I'm not familiar with anyone doing so (happy to learn if you have a reference, though!). Regardless, to call that "rawlsian" would be unfair to Rawls, since he definitely did not propose or argue for such a view in any of his texts on political philosophy, and the view he did argue for is definitely incompatible with it.