For all practical purposes, subnormal is the same as denormal (a non-normalized number with the smallest possible exponent).
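A quick sketch in C (assuming IEEE 754 doubles) shows that halving the smallest normal double produces a value the standard library classifies as subnormal:

```c
#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    /* DBL_MIN is the smallest positive *normal* double. */
    double smallest_normal = DBL_MIN;

    /* Half of it can no longer be represented with an implicit leading
       1 bit at the minimum exponent, so it becomes subnormal/denormal. */
    double sub = smallest_normal / 2.0;

    printf("%a is %s\n", smallest_normal,
           fpclassify(smallest_normal) == FP_NORMAL ? "normal" : "not normal");
    printf("%a is %s\n", sub,
           fpclassify(sub) == FP_SUBNORMAL ? "subnormal" : "not subnormal");
    return 0;
}
```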
I think this is a case where imprecise terminology caused confusion, so usage has shifted to a new term that is applied consistently. Subnormal is now the preferred term for denormal, I suspect because of uncertainty over whether unnormal numbers also counted as denormal (I've never heard the term used that way, but I'm also not a great connoisseur of non-IEEE 754 floating-point).
There's a similar push to replace the term "mantissa" with "significand", since "mantissa" implies a value in the range [0, 1), whereas floating-point types usually keep it in the range [1, base).
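As an illustration of the two conventions (a sketch, again assuming IEEE 754 doubles): C's frexp decomposes a value with a fraction in [0.5, 1), close to the old mantissa sense, while the %a format prints the stored significand in [1, 2):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    int exp;

    /* frexp: 6.0 = 0.75 * 2^3, fraction in [0.5, 1). */
    double frac = frexp(6.0, &exp);
    printf("frexp: %g * 2^%d\n", frac, exp);

    /* %a:    6.0 = 0x1.8p+2, i.e. 1.5 * 2^2, significand in [1, 2). */
    printf("hex:   %a\n", 6.0);
    return 0;
}
```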