You can do all of these things in software, and it is done. It's important to have control over the process so you can get quantitative data at the other end, and not just a pretty picture. Also, noise should not be discounted as a very good reason to use lower-megapixel sensors. If you want a pretty picture, by all means use a cellphone, but you can't really use or trust the result for many scientific purposes.
I think this is just not realistic. "Pretty pictures" actually matter more, even in science. The vast majority of the time you're not using pixel values or exact color characteristics in a scientific sense. You just want a clear, high-res image of what you're looking at so that you can ID the pollen, plankton or whatever it is. The algorithms in phone cameras are some of the most advanced available. Sure, you could in theory reproduce it in software, but realistically there are no open-source code bases that can recreate the same level of dynamic range that a high-end phone company's software stack does.
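To be fair, some of the individual building blocks do exist in open source. Here's a minimal sketch of exposure fusion with OpenCV (Mertens merge over a bracketed set of shots), assuming you have OpenCV and NumPy installed; the file names are hypothetical, and this is nowhere near a full phone ISP/computational-photography stack, just the kind of piece you'd start from:

```python
# Minimal exposure-fusion sketch using OpenCV's Mertens merge.
# This is a building block, not a phone-grade pipeline.
import cv2
import numpy as np

# Hypothetical bracketed exposures of the same slide field of view.
paths = ["slide_dark.jpg", "slide_mid.jpg", "slide_bright.jpg"]
images = [cv2.imread(p) for p in paths]

# Roughly align the bracket (handles small shifts between shots).
cv2.createAlignMTB().process(images, images)

# Mertens fusion blends the exposures without needing camera response curves.
fused = cv2.createMergeMertens().process(images)  # float32, roughly in [0, 1]

cv2.imwrite("slide_fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```

That gets you better highlight/shadow detail than a single frame, but it does nothing about per-frame alignment under motion, noise-aware merging, or the tone mapping that phone stacks tune per sensor.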
I take a pic with my iPhone and I can ID the things on my slide much better than with the color-accurate, high-end Olympus scientific sensor. And in the end that's what matters most.