Normally, with big-O notation, the goal is to reduce complexity. The author's wording kind of reverses that assumption only to "surprise" you at the end, which reads as somewhat forced irony.
You learn about it in real analysis too, but it's worth noting that in analysis you pretty much always use little-o, which is the one that makes the guarantee you actually need in that context: the error term has to vanish relative to the comparison function, not just stay within a constant factor of it.
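For concreteness, a rough sketch of the distinction as it's usually stated in analysis (the limit point a and the differentiability example are just my illustration, not something taken from the article):

```latex
% Big-O near a: f is bounded by a constant multiple of g
f(x) = O(g(x)) \text{ as } x \to a
  \iff \exists\, C > 0 :\ |f(x)| \le C\,|g(x)| \text{ for } x \text{ near } a

% little-o near a: f vanishes relative to g
f(x) = o(g(x)) \text{ as } x \to a
  \iff \lim_{x \to a} \frac{f(x)}{g(x)} = 0

% The guarantee analysis needs, e.g. for differentiability:
% f'(a) = L exactly when the remainder is o(h), not merely O(h):
f(a + h) = f(a) + L\,h + o(h) \quad \text{as } h \to 0
```

An O(h) remainder would only say the error stays within a constant factor of h, which isn't enough to pin down the derivative; the o(h) condition is what forces the difference quotient to converge.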