Nope. Fairly low utilization on most of the 8 cores. It's only halving the GPU resources if all of the GPU resources are being used. And that's kind of the point: adding more GPU power won't help for most VR because the games simply won't use it.
So SLI automatically bridges two or more GPUs into a single logical card, with twice everything except RAM. Apps don't have to explicitly support it; they see a single GPU with twice (or more) the texture units, cores, etc. of each underlying card.
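You can actually see this from the app side on Windows: as I understand it, with SLI enabled the bridged cards enumerate as a single DXGI adapter. A quick sketch (plain DXGI calls, nothing SLI-specific; link against dxgi.lib):

    #include <dxgi.h>
    #include <cwchar>

    // Lists GPUs as an application sees them. With SLI enabled, the
    // bridged pair should show up here as one adapter, which is why
    // apps don't need explicit support.
    int main() {
        IDXGIFactory* factory = nullptr;
        if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
            return 1;

        IDXGIAdapter* adapter = nullptr;
        for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
            DXGI_ADAPTER_DESC desc;
            adapter->GetDesc(&desc);
            wprintf(L"Adapter %u: %ls (%llu MB VRAM)\n", i, desc.Description,
                    (unsigned long long)(desc.DedicatedVideoMemory >> 20));
            adapter->Release();
        }
        factory->Release();
        return 0;
    }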
If your VR doesn't scale up its performance with added resources, perhaps it's artificially throttling itself to a max framerate? Can't push pixels to the headset fast enough? Waiting for sensor data from the headset before rendering?
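That last one is pretty much how an OpenVR loop works, as far as I can tell: WaitGetPoses blocks until the compositor hands back a fresh head pose just before vsync, so the loop is capped at the headset's refresh rate no matter how much GPU headroom there is. Skeleton (error handling and the actual drawing omitted):

    #include <openvr.h>

    void frame_loop() {
        vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
        while (true) {
            // Blocks on the compositor: fresh head pose in, vsync pacing out.
            // A faster GPU just means more idle time here, not more frames.
            vr::VRCompositor()->WaitGetPoses(
                poses, vr::k_unMaxTrackedDeviceCount, nullptr, 0);

            // ...render both eyes with the new pose, then
            // vr::VRCompositor()->Submit() each eye's texture...
        }
    }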
I'm not sure, really. I've searched a bit and don't see many good sources explaining exactly why it doesn't work well, but the vast majority of user opinion is the same.
This is a bit anecdotal, and I don't have expertise in the underlying technology, but I've read mentions that getting better performance out of an SLI or dual-GPU setup requires specific driver functionality, e.g. using a dedicated GPU to render each eye independently instead of rendering both images together on a bridged virtual card. Allegedly it's a bit of extra effort to do this (a game would need to support both AMD's LiquidVR and Nvidia's VRWorks) and would only benefit a minimal audience, so other aspects of the game get prioritized for development effort.
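If I've understood the descriptions right, the shape of it is roughly this. To be clear, every function below is a made-up placeholder, not an actual LiquidVR or VRWorks entry point; both vendors expose their own equivalents:

    // Hypothetical sketch of per-eye multi-GPU VR rendering.
    struct Pose { float position[3]; float orientation[4]; };
    enum class Eye { Left, Right };

    // Placeholder stubs standing in for vendor/driver calls.
    void set_render_gpu(int gpu_index);   // route subsequent draw calls to one GPU
    void render_scene(const Pose&, Eye);  // draw one eye's view of the scene
    void submit_eye(Eye, int gpu_index);  // hand a finished eye image to the compositor

    void render_frame(const Pose& head_pose) {
        // Pin each eye's command stream to its own GPU so the two halves
        // of the frame render in parallel, instead of letting the driver
        // treat the pair as one bridged virtual card.
        set_render_gpu(0);
        render_scene(head_pose, Eye::Left);

        set_render_gpu(1);
        render_scene(head_pose, Eye::Right);

        // Only the two finished images cross GPUs, not every draw call.
        submit_eye(Eye::Left, 0);
        submit_eye(Eye::Right, 1);
    }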
There are a couple of implementations that do. E.g., Nvidia's VR Funhouse was a showcase app for what implementing their SDK could do, and it used it. I think Serious Sam VR used it too, but I haven't tried that game.