One of the authors here. Thanks for the comment! Yes, we mention this work and a number of others in the blog post and paper. This isn't the first (or the last) paper on the topic, but I think we've clarified the large-batch training situation significantly by connecting gradient noise directly to training speed, and by measuring it systematically across a range of ML tasks and characterizing its behavior.