"Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical."
This is a convenient explanation and you will find a lot of evidence for it, but I think it's superficial.
The real question is, why can they afford to take this approach without being outperformed by companies that use better languages and better developers?
For those of us who enjoy exploring more powerful or innovative languages, there is a convenient explanation and a not-so-convenient one.
The convenient one is that the success of these "Blub companies" is largely determined by factors unrelated to software development.
The inconvenient one is that all these interesting and powerful language features do not significantly improve software development outcomes.
I think it is remarkable that some of the companies on the Blub side of software development are among the most cutting-edge, research-oriented companies in other regards, such as AI.
Also, companies like Google can not only choose among the best candidates; they also have a lot of influence on what people learn before they even apply for a job there. So I simply don't buy the familiarity excuse, at least not for the entire workforce.
I also don't buy the excuse of large existing codebases, as it would apply equally to new Blub languages and to new languages based on PL research.
There is simply very little faith in any positive impact of PL research innovations on software development outcomes. And this lack of faith cannot be explained away by claiming that all those lacking faith are simply incompetent.
The likes of Google and Microsoft can and do hire competent PL researchers, just as they hire competent AI researchers. And they do have a massive incentive to improve software development quality and productivity.
I think the truth is that PL researchers simply haven't made the case for the effectiveness of their work.
In part because it's extremely difficult to make that case empirically: all the empirical studies I have seen suffer from huge and mostly unfixable confounders.
But I think another reason is that PL researchers seem to largely ignore cognitive science and sociology. I haven't seen much discussion about the impact of particular PL features on developers' state of mind in any realistic context.
Nor have I seen much debate about programming languages as group communication tools or about the different roles in which people interact with code as part of a software development process. All of that is apparently considered out of scope.
And that, I think, is why it is so easy for many practitioners to dismiss PL research out of hand. Even where PL research is picked up decades later, its adoption appears more fashion-driven than evidence-based.
If we look at blub engineering throughout mankind's history, impressive works were built by resorting to thousands of workers, each doing his or her little part over several years, even generations.
Yet nowadays we would most likely use heavy construction machinery instead.
Niklaus Wirth admits in one of his papers that he expected developers to pick up Oberon because they cared about tools for quality software engineering, but he was wrong about that.
Hoare makes a similar remark in his Turing Award speech.
So from my point of view it is a mix of sociology, fashion-driven development, and office politics.
"Type system tyranny", page 8 - https://talks.golang.org/2009/go_talk-20091030.pdf
"Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical."
https://talks.golang.org/2012/splash.article