The arXiv paper here analyzes the nonlinearities in a network's learning dynamics, exploring why training time and error rates do not vary linearly over the course of training.
They note:
"Here we provide an exact analytical theory of learning in deep linear neural networks that quantitatively
answers these questions for this restricted setting. Because of its linearity, the input-output map of a deep
linear network can always be rewritten as a shallow network."
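The quoted claim, that a deep linear network's input-output map can always be rewritten as a shallow network, follows from the fact that a composition of linear maps is itself linear. A minimal numpy sketch (the layer sizes here are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Weight matrices of a deep linear network: y = W3 @ W2 @ W1 @ x
W1 = rng.standard_normal((8, 4))   # input dim 4 -> hidden dim 8
W2 = rng.standard_normal((6, 8))   # hidden dim 8 -> hidden dim 6
W3 = rng.standard_normal((3, 6))   # hidden dim 6 -> output dim 3

# The equivalent shallow network: a single weight matrix,
# the product of the deep network's layers
W_shallow = W3 @ W2 @ W1

x = rng.standard_normal(4)
deep_out = W3 @ (W2 @ (W1 @ x))    # forward pass through the deep network
shallow_out = W_shallow @ x        # forward pass through the shallow network

# The two outputs agree up to floating-point error
assert np.allclose(deep_out, shallow_out)
```

The interesting point the paper builds on is that while the *expressivity* of the deep and shallow networks is identical, their gradient-descent *learning dynamics* are not, which is what makes deep linear networks a useful analytical model.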