I agree with "for now" but not "for many years". Right now, most if not all automatic summarizers do extraction, which is just lifting sentences verbatim from the original article. That's different from the human notion of a summary, which is abstraction: taking the most important parts of the article and paraphrasing them for easy reading.
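To make the distinction concrete, here's a minimal sketch of the extractive approach (not any particular tool's algorithm, just the classic frequency-scoring idea): score each sentence by how often its words appear in the whole text, then return the top sentences unchanged, in their original order. Function and variable names are my own illustration.

```python
import re
from collections import Counter

def extractive_summary(text, n=2):
    """Toy extractive summarizer: lifts the n highest-scoring
    sentences verbatim from the source text. Nothing is paraphrased,
    which is exactly why this isn't abstraction."""
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s.strip()]
    # Word frequencies over the whole document.
    freq = Counter(re.findall(r'\w+', text.lower()))
    # Score each sentence by the summed frequency of its words.
    score = lambda s: sum(freq[w] for w in re.findall(r'\w+', s.lower()))
    # Pick indices of the n best sentences (stable sort breaks ties
    # in favor of earlier sentences), then restore original order.
    ranked = sorted(range(len(sentences)), key=lambda i: -score(sentences[i]))
    return ' '.join(sentences[i] for i in sorted(ranked[:n]))

summary = extractive_summary(
    "Cats are great. Cats purr. Dogs bark loudly sometimes.", n=2)
```

Note that every sentence in the output appears word-for-word in the input; an abstractive system would instead generate new sentences.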
Right now, abstraction (paraphrasing) is hard for a computer to do, but I think, and hope, it will be possible within a few years, maybe two or three. There are various open-source and academic tools that can do some pretty good NLP; I'm looking into Apache OpenNLP and WordNet.
Changing the sentences adds bias. Maintaining the author's intent is important.
Generating news highlights from lots of sources might be cool as computer-generated content. But rewriting an author's story in new words isn't adding value; it's just ripping them off.
Thanks, those are good insights. Bias hadn't crossed my mind. So you're saying multi-document summarization may be the next step for consumer automatic summarization? There's a lot of research on multi-document summarization; I'll look into it.
BTW, I have an app similar to your tldr.io. Check my HN comment (https://news.ycombinator.com/item?id=5523770) for more info about it. ;)