The attitude is often quite strange. A few days ago on Twitter I saw many people riled up about the fact that a prof would ask how to solve Ax=b (linear algebra) for a deep learning / computer vision PhD position. People were calling this unnecessary gatekeeping... I'm like, that's just a warm-up question to get comfortable... But apparently the loud online hive-mind opinion is that all that should count is soft skills and that everyone is equally able. People are even calling out professors on Twitter (who then engage, for some reason) for saying they are looking for "outstanding" PhD candidates, because that language is apparently too exclusionary; everyone is equally outstanding, or something...
To be fair, you would not actually need to know how numpy.linalg.solve works to use PyTorch. Solving linear equations is an extremely deep subject in its own right, full of difficult and sophisticated issues around numerical stability. Entire books have been written about how to solve Ax=b with awareness of the nature of floating point arithmetic. Machine learning researchers are generally unconcerned with those topics.
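To make the stability point concrete, here's a minimal sketch (assuming NumPy and SciPy; the Hilbert matrix is just a stock ill-conditioned example, not anything from the interview) comparing a factorization-based solve against the naive explicit inverse:

    import numpy as np
    from scipy.linalg import hilbert  # classic ill-conditioned test matrix

    n = 10
    A = hilbert(n)                  # cond(A) is around 1e13 at n=10
    x_true = np.ones(n)
    b = A @ x_true

    x_solve = np.linalg.solve(A, b)   # LU factorization with partial pivoting
    x_inv = np.linalg.inv(A) @ b      # naive: form A^-1 explicitly, then multiply

    print("cond(A):              ", np.linalg.cond(A))
    print("residual via solve:   ", np.linalg.norm(A @ x_solve - b))
    print("residual via inverse: ", np.linalg.norm(A @ x_inv - b))

On a well-conditioned matrix the two give essentially the same answer; the gap opens up as cond(A) grows, which is roughly where those entire books come in.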
As with everything in the world of knowledge, this topic too has a fractal nature. There's a difference between what you just said and "well, optimize it with gradient descent, yoloswag" or "just invert the matrix A". If you can say something about pivot selection, Gaussian elimination, iterative methods, QR and LU factorization, the backslash operator, the pseudoinverse, and under- and overdetermined systems, and can handwave your way through roughly explaining what these things are about, that's probably already very good for this job. The prof probably wasn't interested in the tiniest numerical stability details.
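For anyone who wants to check their own handwaving, here's roughly what that list maps to in NumPy (a sketch, not the prof's rubric; which routine is appropriate depends on the shape and conditioning of A):

    import numpy as np

    rng = np.random.default_rng(0)

    # Square, well-posed system: LU with partial pivoting under the hood;
    # roughly what MATLAB's backslash (A \ b) dispatches to for this case.
    A = rng.standard_normal((4, 4))
    b = rng.standard_normal(4)
    x = np.linalg.solve(A, b)

    # Overdetermined (more equations than unknowns): generally no exact
    # solution, so minimize ||Ax - b|| by least squares (QR/SVD based).
    A_tall = rng.standard_normal((10, 4))
    b_tall = rng.standard_normal(10)
    x_ls, res, rank, sv = np.linalg.lstsq(A_tall, b_tall, rcond=None)

    # Underdetermined (fewer equations than unknowns): infinitely many
    # solutions; the pseudoinverse picks the minimum-norm one.
    A_wide = rng.standard_normal((3, 5))
    b_wide = rng.standard_normal(3)
    x_mn = np.linalg.pinv(A_wide) @ b_wide

Being able to say which of these cases you're in, and why, is probably exactly the kind of warm-up answer the question was fishing for.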
Except solving linear systems has a clear role in any field that makes use of linear algebra. Interviews that require memorizing volumes of pointless trivia unrelated to the work at hand are a completely different matter.
Yes, in practice this interview style now achieves exactly the opposite of the goal it was created for. That is, originally people wanted to base hiring on general problem-solving skill: mapping out a problem domain with relevant questions, proposing solutions, identifying tradeoffs, reacting to a change in the problem statement, and so on, all grounded in a pure computer science foundation, independent of ever-changing software frameworks, libraries, and APIs. Precisely because memorizing the specific steps of creating a CRUD mobile app in today's workflow is not useful later on. The problem is that Goodhart's law kicked in: performance on such puzzles became the target to optimize for, so it stopped being a good measure. People now explicitly study for it from books, online courses, and practice sites, so it measures the effort you're willing to put into jumping through hoops more than actual problem-solving aptitude.
An alternative could be to ask the applicant about a recent real-world project, but that's also easy to rehearse.