This is a very low-effort blog post on this subject. I'm not sure it adds anything new to the topic; it feels like the same sort of low-quality blog spam that shows up at the top of Google search results instead of a high-quality introduction to the subject.
It's mentioned in the article, but what I find neat about multi-objective optimization is that (for a certain type of well-behaved problem) the "solution" is not a single point (0-dimensional) like in normal optimization, but is (N-1)-dimensional, where N is the number of objective functions. So if you have 2 objective functions the best solutions all lie on some 1d curve, and if you have 3 they fall on some 2d surface, and so on. This is called the Pareto front, and Wikipedia has some nice visualizations[1]. It is then left as an additional exercise to pick out the best solution to your problem from the Pareto front.
A common example from engineering is optimizing for strength and weight. You may want an airplane wing to be very strong and very light; the Pareto front represents the best achievable trade-offs between strength and weight, and you can then use other information to pick a particular solution.
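To make that concrete, here's a minimal sketch in Python (the designs and numbers are made up for illustration) that filters a set of candidate wing designs down to the Pareto front for those two objectives, maximizing strength and minimizing weight:

    import random

    # Hypothetical wing designs as (strength, weight) pairs; the values
    # are made up. We want strength high and weight low.
    designs = [(random.uniform(0, 100), random.uniform(0, 100))
               for _ in range(200)]

    def dominates(a, b):
        # a dominates b if it is at least as strong and at least as light,
        # and strictly better on at least one of the two.
        return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

    # The Pareto front is every design that no other design dominates.
    front = [d for d in designs if not any(dominates(o, d) for o in designs)]

    # Sorted by strength, the front shows the trade-off: more strength
    # means accepting more weight.
    for strength, weight in sorted(front):
        print(f"strength={strength:6.2f}  weight={weight:6.2f}")

With 2 objectives the printed front traces out exactly the 1d curve described above.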
There are plenty of scientific papers and Wikipedia articles for any complex topic. The point of the article is instead to introduce the topic in plain English, without extensive mathematical notation or expressions. The idea _is_ simple. Perhaps I should have added some visualizations of the Pareto front, but I think these graphs are sometimes shown unnecessarily early. Besides that, what would you add to an introduction that is paramount to one's understanding?
I think a worked example of a simple problem with some accompanying visualizations would make this a more complete introduction. Links to learn more would be nice too. As it stands it feels half done.
Alright, I see your point. My idea was to create a second article or part 2 with some more practical work using MOGAs if there was any interest. But I can see the benefit of adding a simple example here too.
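To give a flavor of what that could look like, here is a rough sketch of a MOGA loop on a toy two-objective problem. It is not any particular published algorithm, just the general shape: mutate, score, and keep the non-dominated individuals.

    import random

    # Toy objectives to minimize: f1(x) = x^2 and f2(x) = (x - 2)^2.
    # Every x in [0, 2] is Pareto-optimal for this pair.
    def evaluate(x):
        return (x * x, (x - 2) ** 2)

    def dominates(a, b):
        # a dominates b if it is no worse on every objective and
        # strictly better on at least one.
        return all(ai <= bi for ai, bi in zip(a, b)) and a != b

    population = [random.uniform(-5, 5) for _ in range(50)]
    for generation in range(100):
        # Variation: mutate every individual, then pool parents and children.
        pool = population + [x + random.gauss(0, 0.1) for x in population]
        scored = [(x, evaluate(x)) for x in pool]
        # Selection: keep the non-dominated individuals. (A real MOGA such
        # as NSGA-II also works to keep them spread out along the front.)
        front = [x for x, fx in scored
                 if not any(dominates(fy, fx) for _, fy in scored)]
        population = front[:50]

    # The survivors should cluster inside [0, 2], i.e. on the Pareto set.
    print(sorted(population)[:10])

A real implementation would add crossover and a diversity-preserving selection step, but even this bare loop converges toward the front.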
[1] https://en.wikipedia.org/wiki/Pareto_front