bfloat16 is probably familiar only to ML practitioners; it's a reduced-precision floating-point format designed for ML workloads, keeping float32's 8-bit exponent but truncating the mantissa from 23 bits to 7. I was surprised to learn that the "b" stands for "brain", as in Google Brain, the team at Google that developed it along with many other advances in machine learning.
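
Because bfloat16 shares float32's exponent layout, a minimal sketch of the conversion is just keeping the top 16 bits of the float32 bit pattern (real implementations usually round-to-nearest rather than truncate; the function names here are illustrative, not any particular library's API):

```python
import struct

def float_to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to its top 16 bits: sign, 8-bit exponent, 7-bit mantissa."""
    # Pack as IEEE 754 float32, then keep only the high 16 bits.
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float(b: int) -> float:
    """Widen bfloat16 bits back to float32 by zero-filling the low 16 bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# Dropping 16 mantissa bits makes values coarser: pi round-trips to 3.140625.
print(bfloat16_bits_to_float(float_to_bfloat16_bits(3.14159265)))
```

The appeal for ML is that the exponent range matches float32 exactly, so gradients that would underflow in IEEE float16 survive, while the storage and bandwidth cost is halved.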