Nope, I mean it in the first sense. That happened with the GeForce 256 in 1999, and programmable vertex shaders (the first programmable vector math on consumer cards) were introduced with the GeForce 3 in 2001. Before that, 3D graphics accelerators -- the term GPU had not yet been coined -- simply handled rasterization of triangles and texture look-ups. Transformation & lighting was handled on the CPU.
(I will use "GPU" because "3D accelerator" was a very gaming-PC-oriented term, predated by 3D graphics hardware by a decade.)
Only in the consumer market - which is why the GeForce 256 release left game devs with GL-based engines smug, since they immediately benefited from hardware T&L. T&L was the original function of earlier GPUs, to the point that more than one professional "3D GPU" was an i860 (or a few of them) with custom firmware and some DMA glue, doing... mostly vector ops on transforms (and a bit of lighting, as a treat).
The consumer PC market looked different because games wanted textures, and the first truly successful 3D accelerator was the 3Dfx Voodoo, which was essentially a rasterizer chip plus a texture-mapping chip, with everything else done on the CPU.
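Concretely, the "everything else" was per-vertex transform & lighting done in software every frame. A minimal C sketch of the idea (the Vec3/Mat4 types and function names here are made up for illustration, not any real engine's or API's):

    /* CPU-side T&L, Voodoo-era style: the card only ever sees the
       final projected, pre-lit vertices. Perspective divide and
       normal transformation are omitted for brevity; lighting is
       done in object space. All names are illustrative. */
    typedef struct { float x, y, z; } Vec3;
    typedef struct { float m[4][4]; } Mat4;  /* row-major */

    static Vec3 transform(const Mat4 *m, Vec3 v) {  /* w assumed 1 */
        Vec3 r;
        r.x = m->m[0][0]*v.x + m->m[0][1]*v.y + m->m[0][2]*v.z + m->m[0][3];
        r.y = m->m[1][0]*v.x + m->m[1][1]*v.y + m->m[1][2]*v.z + m->m[1][3];
        r.z = m->m[2][0]*v.x + m->m[2][1]*v.y + m->m[2][2]*v.z + m->m[2][3];
        return r;
    }

    static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* Per-vertex diffuse shade, baked into the vertex colour that the
       rasterizer merely interpolates across the triangle (Gouraud). */
    static float diffuse(Vec3 normal, Vec3 light_dir) {
        float d = dot3(normal, light_dir);
        return d > 0.0f ? d : 0.0f;
    }

    /* The per-vertex "T" and "L" the CPU ran for every vertex, every
       frame -- exactly the work the GeForce 256 later moved on-chip. */
    static void tnl_vertex(const Mat4 *mvp, Vec3 pos, Vec3 normal,
                           Vec3 light_dir, Vec3 *out_pos, float *out_shade) {
        *out_pos   = transform(mvp, pos);
        *out_shade = diffuse(normal, light_dir);
    }

Which is also why hardware T&L was such an easy win for GL engines: they were already handing per-vertex positions and normals to the driver, so the same loop just moved onto the card.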
Fully programmable GPUs were also a thing in the 2D era, with things like TIGA, where at least one package I heard of pretty much implemented most of X11 on the GPU.
This was of course all driven by what the market demanded. The original "GPUs" were driven by the needs of professional work - CAD, military, etc. - where most of the time you were operating in wireframe, and Gouraud/Phong-shaded triangles were for fancier visualizations.
Games, on the other hand, really wanted textures (though the limitations of consoles like the PSX meant some games, like Crash Bandicoot, were mostly simple colour-shaded triangles), and offloading texture mapping to hardware was a major improvement for gaming.