Humans can't disambiguate in a sea of words as effectively as machines do. There are just too many results to sift through every time you want to look up some little detail.
That is exactly Google's problem: it displays too many results for a human to sort through effectively, and that is why ddg flattens Google.
Wikipedia is humans disambiguating words; ddg is an abstraction of Wikipedia.
This ddg setup combines the power of humans disambiguating words into a tree structure with the power of machines indexing and querying that tree on behalf of humans.
I believe Wikipedia would do well to copy ddg's interface.
Generally, finding information in a well-structured tree is faster than scanning one long hodgepodge of a list - hence the perceived better results.
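A rough sketch of why that is, under a simplifying assumption: descending a balanced tree (here stood in for by binary search over a sorted index, with made-up entry names) halves the candidates at each step, while a flat list forces a linear scan.

```python
import bisect

# Hypothetical index of 100,000 entries; the names are invented for
# illustration. Zero-padding keeps lexicographic order == numeric order.
entries = sorted(f"term{i:05d}" for i in range(100_000))
target = "term73412"

# Flat list: linear scan, counting how many entries we had to look at.
linear_checks = 0
for entry in entries:
    linear_checks += 1
    if entry == target:
        break

# Tree-like lookup: binary search over the sorted index, which takes
# roughly log2(100,000) ~ 17 comparisons instead of tens of thousands.
i = bisect.bisect_left(entries, target)
found = entries[i] == target

print(linear_checks)  # 73413
print(found)          # True
```

The same asymmetry - O(n) scan versus O(log n) descent - is why a disambiguation tree feels so much faster to a human than paging through a ranked flat list.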