Delegating the structure of engagement to a pattern matcher doesn't change the fundamentals. Consider Arrow's Impossibility Theorem: no ranked social choice function over three or more alternatives can satisfy all the desirable properties (unrestricted domain, Pareto efficiency, independence of irrelevant alternatives) without a dictator. So your AI needs higher-level definitions in its objective to achieve any allocative efficiency. Examples abound; common ones are utilitarianism (don't use this one; it leads to bad outcomes) and egalitarianism. Fortunately, we can choose these objectives with both eyes open.
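To make the impossibility concrete, here is a minimal sketch in Python of the Condorcet paradox, the classic intuition behind Arrow's theorem. The alternatives and ballots are hypothetical; the point is that pairwise majority voting, which looks perfectly fair, can produce a cycle with no consistent aggregate ranking.

```python
# A minimal sketch of the Condorcet paradox: three voters, three
# alternatives, and pairwise majority voting yields a preference cycle,
# so no transitive "group ranking" exists. Ballots are hypothetical.
ballots = [
    ("A", "B", "C"),  # voter 1 ranks A > B > C
    ("B", "C", "A"),  # voter 2 ranks B > C > A
    ("C", "A", "B"),  # voter 3 ranks C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    votes_for_x = sum(1 for b in ballots if b.index(x) < b.index(y))
    return votes_for_x > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# Prints True for all three pairs: A beats B, B beats C, C beats A.
# The group preference is cyclic, so majority rule cannot pick a winner.
```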
The field that studies this type of problem is Mechanism Design, the inverse of Game Theory: instead of analyzing a given game, you design the game's incentives to produce a desired outcome.
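A textbook example of designing incentives for a desired outcome is the second-price (Vickrey) auction, sketched below. The bidder names and values are hypothetical; what the mechanism guarantees is that bidding your true value is a dominant strategy, so truthful revelation falls out of the rules rather than being assumed.

```python
# A minimal sketch of a second-price (Vickrey) auction. The highest
# bidder wins but pays the second-highest bid, which makes truthful
# bidding a dominant strategy. Bidders and values are hypothetical.

def vickrey_auction(bids):
    """bids: dict mapping bidder -> bid. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the runner-up's bid
    return winner, price

bids = {"alice": 30, "bob": 25, "carol": 10}
winner, price = vickrey_auction(bids)
print(f"{winner} wins and pays {price}")  # alice wins and pays 25
# Bidding below your true value can only lose auctions you'd have
# profited from; bidding above it can only win auctions at a loss.
```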
Would it be correct to say that your suggestion to delegate game design to AI means you believe people are ineffective at identifying when certain game types, such as zero-sum games, are the only ones possible?