No, it isn't. This is what you need to get through your head: the brains (really, the souls) are always different and there is a huge amount of subjectivity involved in experiencing a migraine, for example. You are trying to make moral reasoning into neat and tidy math, using precise-sounding terms like "twice", but it simply isn't that.
Basic utilitarian reasoning leads to obviously incorrect moral positions, such as liquidating a small, troublesome and annoying minority being fine so long as it increases "the greater good". This is where the math leads, and it's all built on a foundation of clay, namely the assumption that there exists a mathematically precise measure of general "utility" that can be compared, summed up, etc. between moral agents.
The whole thing would be pure comedy, were it not for the fact that intelligent people take it seriously.
You conflate the question of interpersonal wellbeing comparison with both adjacent and orthogonal issues I have said nothing about (level of precision, mathematical modeling, deontic criteria, ...). What I describe is practiced daily in health care priority work and in risk mitigation for e.g. macro-level infrastructure planning. Meanwhile the utilitarians I know are busy working to end world poverty, eradicate malaria, minimize global pandemic risk, improve health access and end factory farming. The evidence on the severest forms of migraines does not support huge experiential variation among the people afflicted - they max out suffering for virtually everyone. https://www.preventsuffering.org/cluster-headaches/
Yes, many utilitarians end up being good people despite their utilitarianism. They take a best guess at what is right, in a murky, conflicted world, and then work on that, perhaps backfilling a utilitarian just-so story to make themselves feel better. I applaud them for that.
But utilitarianism as a philosophy is obviously, and will continue to be, ridiculous for the simple reason that there is no such thing as general utility.
I have already given simple examples of interpersonal wellbeing comparisons of a type that are both common-sense and in fact widely put to practical use in health care and risk mitigation in all countries in the world. If you want to dispute that then do present evidence; saying "no" is not enough.
Here is a simple test: imagine you can press either button A or button B. If you press A, 10000 people are spared the severest migraines. If you press B, one person is spared a mild scratch. Which should you morally do and why? My answer is: you should do A, and the reason you should do A is that we can compare the outcomes in terms of wellbeing, and outcome A is comparatively much better than outcome B. What is your answer and why? Anyone denying that interpersonal wellbeing comparisons are possible, or that we can in any way aggregate wellbeing ("there is no such thing as general utility"), seems forced to say that there is no way to discern a moral difference between choice A and B. Is that really what you believe? If yes, how do you put that into practice with regard to choices in everyday life?
As for backfilling, it seems to me that some people express aversion to utilitarianism, and portray it as very different from what it really is, as a way to shy away from taking moral responsibility for the consequences of what they do and omit to do. In fact utilitarianism is in the world as it is making similar demands as several other ethical theories https://twitter.com/ben_j_todd/status/1463867302089302019
As you well know, that example is ridiculous because of the obvious magnitude difference between the two. Moral questions aren't interesting at that scale: nearly every moral system will give you the same answer.
A more interesting question from the utilitarian perspective is this: imagine you can press a button A. If you press A, 10000 people are spared the severest migraines, but one person dies. Do you press it? If yes, what about two people? If yes, what N is too high? Ah, easy, just sum up the total negative utility of 10000 migraines, then the negative utility of being dead multiplied by N, and you have your answer. Laughable, of course.
The idea that people opposed to utilitarianism are shying away from their own moral responsibility is ridiculous. Utilitarianism can be used to justify obviously wrong acts, such as, again, the liquidation of small and annoying minorities in order to increase the total utility. The fact that most utilitarians would be horrified by this proposition isn't an argument in favor of utilitarianism. Rather, it is a testament to the moral common sense of even utilitarians.
Your "obvious magnitude difference between the two" is an interpersonal wellbeing comparison claim, the thing you earlier thought impossible. Progress.
"A more interesting question from the utilitarian perspective is ..."
In general, intervention priority in actual health care is guided by utilitarian-inspired health economics calculations. Because even if we wanted to treat all migraines fully and prevent all premature death, there are limits to time, labor and resources. Consult any health economics textbook for an overview, for example "Cost-Effectiveness Analysis in Health: A Practical Approach, 3rd Edition". Can you reply with specific critique of where such established methods go wrong, and what alternative you suggest we replace them with in day-to-day health care priority work? Focus on the sections about QALYs and, since your hypothetical involved a comparison with death, look especially at the value estimation method called "standard gamble" https://www.sciencedirect.com/topics/medicine-and-dentistry/...
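For concreteness, here is a minimal sketch of the kind of QALY arithmetic such textbooks describe. All utility weights, durations and costs below are made-up illustrative numbers, not real clinical estimates:

```python
# Hypothetical cost-per-QALY comparison between two interventions.
# A QALY (quality-adjusted life year) weights years lived by a
# health-state utility: 0 = dead, 1 = full health.

def qalys(utility_weight: float, years: float) -> float:
    """Quality-adjusted life years for a given health state and duration."""
    return utility_weight * years

# Intervention A (illustrative): treats severe migraine, raising
# utility from 0.4 to 0.9 over 10 years, at a hypothetical cost.
gain_a = qalys(0.9, 10) - qalys(0.4, 10)   # 5.0 QALYs gained
cost_a = 20_000

# Intervention B (illustrative): life-extending treatment giving
# 2 extra years at utility 0.7.
gain_b = qalys(0.7, 2)                     # 1.4 QALYs gained
cost_b = 30_000

# Cost per QALY gained is the standard comparison metric; the
# intervention with the lower ratio buys more wellbeing per dollar.
print(cost_a / gain_a)   # 4000.0
print(cost_b / gain_b)
```

This is exactly the kind of interpersonal, cross-condition wellbeing comparison (including against death) that health systems perform routinely; the "standard gamble" method mentioned above is one way the utility weights themselves are elicited.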