No, you only need to go into the driver when you need to do work that cannot be done in userland. What you are talking about is just a shared library in the address space of your app, and the code you are showing is literally just writing bytes into a buffer.
Nobody forbids you from calling it a "driver", of course, but then the whole point of "going into the driver" does not make sense, since there is no syscall and it's just a regular function.
What I've linked to is a driver. Everybody calls it that.
When you go download a driver for a graphics card, whether on Linux or Windows, that driver actually consists of multiple components, some of them running in kernel mode and some of them running in user mode. It's basically the exokernel principle, but without feeling the need to give it a fancy name :)
There's a broader history of user-mode drivers not just for graphics, and not just in the obvious case of micro-kernels. User-mode USB drivers used to be a thing, for example (and I guess they still are for some more obscure hardware).
As I said, you can call it whatever you want: "driver", "kernel", or even "linux". The point about "going into the driver" being expensive only makes sense if it's a syscall, which it is not, as we both seem to agree.
Not really. OpenGL in particular has to do a surprising amount of work on every call, to make sure the state hasn't changed and to update all sorts of things if it has, to manage the various buffers and make sure they're mapped in the right place, and then to go down through all the abstraction layers until you end up in the code that actually writes stuff into the command buffer. It's not going to be a syscall level of overhead, though it may end up being that if stuff needs to get mapped into the GPU address space, but it's definitely going to be more than the dozen instruction overhead of going from JITed to native C++ code.
"Not really" what? Is there a syscall? Then you say yourself there is not... I only argue that there is no syscall in glDraw*, or in the vast majority of the API. Sure, the driver/OpenGL does whatever it does, and some calls are more expensive than others, but adding more overhead is not going to make it any better, and it's already pretty bad without the extra overhead. That's why they developed Vulkan in the first place.
You know, you're talking to somebody who writes graphics drivers for a living :)
If you don't believe me or crzwdjk, just go ahead and actually profile a system running an OpenGL application. The syscall overhead -- as in, the overhead of transitioning between user and kernel mode -- is laughably negligible compared to everything else. Also, the vast majority of driver CPU time is spent in user space building up command submissions. The final command submission itself isn't free of course, but clearly more time is spent processing precisely those glDraw*() calls that you seem to think don't matter.
> You know, you're talking to somebody who writes graphics drivers for a living :)
That's great. Why do you think you are the only one?
And what exactly should I believe here? That there is a syscall in every OpenGL API call? If you are writing drivers, you know yourself that's not true. The syscall overhead is not laughable; it's tens of thousands of clocks.
>Also, the vast majority of driver CPU time is spent in user space building up command submissions.
Exactly. The OpenGL system (if you want to call it a "driver", be my guest; DirectX does not call it that, for example, and neither do other APIs) works mostly in user space.
> The final command submission itself isn't free of course, but clearly more time is spent processing precisely those glDraw*() calls that you seem to think don't matter.
??? I don't even know what you are arguing here. Let's rewind.
Someone said that "driver calls" are expensive. And that's true for people who understand a driver as a part of the OS, not as a "user-mode driver", which is just a shared lib. I corrected them, saying that there is no actual driver call in the sense most people understand, i.e. there is no OS call or "syscall", since the OpenGL "driver" is mostly a shared library in user space. You seem to agree with me.
Now, I am well aware that some calls are expensive. I even know why. Some are not, though. On some, the overhead of moving data from a managed language to the GPU will be much greater than the call itself. E.g. setting an index buffer.
It still does not make it true that there are syscalls in the OpenGL calls anyway.
The userspace part of the driver is still called the driver.
A driver does not mean a kernel module. It's often that, but it does not exclusively mean that. Userspace drivers are still drivers.
The library that gets loaded into the process is part of the driver. It's provided by the GPU vendor and it's specific to the hardware you're running. It maps API calls to hardware-specific commands. Aka, it's a driver. It just happens to be implemented as a userspace library for most of the work.