Perhaps another way to do gather with mixed sizes would be with a combination of the vector and packed SIMD extensions, using a SIMD register for the lookup table. The instruction I was thinking of is pshufb. An example ‘weird’ use can be found for detecting white space in simdjson: https://github.com/simdjson/simdjson/blob/24b44309fb52c3e2c5... This works as follows:

1. Observe that each ASCII whitespace character ends with a different nibble.
2. Make a 16-byte vector where byte i holds the whitespace character whose final nibble is i, or, where no whitespace character ends in that nibble, some filler byte whose final nibble differs from the index (e.g. the first element is space = 0x20; the next could be, say, 0xff, but not 0xf1, as that ends in the same nibble as its index).
3. For each block where you want to find whitespace, compute pcmpeqb(pshufb(whitespace, input), input). The rules of pshufb mean that (a) non-ASCII bytes (i.e. bit 7 set) map to 0, so they compare unequal, and (b) every other byte is replaced by the element of whitespace selected by its last nibble, so it compares equal only if it is that whitespace character (see the sketch below).
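Here's a minimal C sketch of the trick using SSE intrinsics (my own illustration; the filler bytes and names are made up, and simdjson's actual table uses different fillers):

    #include <tmmintrin.h> /* SSSE3, for _mm_shuffle_epi8 (pshufb) */
    #include <stdint.h>

    /* Byte i holds the whitespace char whose low nibble is i, or a
       filler whose low nibble differs from i so it can never match. */
    static const uint8_t ws_table[16] = {
        ' ', 2, 1, 1, 1, 1, 1, 1,       /* 0x20 at index 0 */
        1, '\t', '\n', 1, 1, '\r', 1, 1 /* 0x09, 0x0a, 0x0d */
    };

    /* Bit i of the result is set iff in[i] is one of the four chars. */
    static int whitespace_mask(__m128i in) {
        __m128i t = _mm_loadu_si128((const __m128i *)ws_table);
        return _mm_movemask_epi8(_mm_cmpeq_epi8(_mm_shuffle_epi8(t, in), in));
    }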
I’m not sure how easy it would be to do such tricks with vrgather.vv. In particular, the length of the input doesn’t matter (it could be longer), but the length of the whitespace table must be 16 bytes. I’m not sure how the whole VLEN business interacts with tricks like this, where you (a) require certain fixed lengths and (b) may have different lengths for tables and input vectors. (And indeed there might just be better ways; e.g. you could imagine an operation with a 256-bit register that permutes a vector of bytes by sign-extending the nth bit of the 256-bit register into each result lane whose input byte is n.)
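That hypothetical operation, modeled in scalar C just to pin down the semantics (the name and signature are entirely made up):

    #include <stdint.h>
    #include <stddef.h>

    /* For each input byte n, produce 0xff if bit n of the 256-bit
       table is set, else 0x00 (a 256-entry one-bit lookup). */
    static void bitperm256(uint8_t *dst, const uint8_t *src,
                           const uint8_t table[32], size_t len) {
        for (size_t i = 0; i < len; i++) {
            unsigned n = src[i];
            dst[i] = (table[n >> 3] >> (n & 7)) & 1 ? 0xff : 0x00;
        }
    }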
Aside on LMUL, if you haven't encountered it yet: RVV allows you to group vector registers when setting the vector configuration with vsetvl, such that vector instructions operate on multiple vector registers at once. That is, with LMUL=1 you have v0,v1,...,v31. With LMUL=2 you effectively have v0,v2,...,v30, where each vector register is twice as large; with LMUL=4, v0,v4,...,v28; with LMUL=8, v0,v8,...,v24.
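A quick way to see the grouping, assuming the standard V intrinsics are available:

    #include <riscv_vector.h>
    #include <stdio.h>

    int main(void) {
        /* At LMUL=2 each instruction operates on a register pair, so
           the maximum element count is double that of LMUL=1. */
        printf("e8/m1 max vl: %zu\n", __riscv_vsetvlmax_e8m1());
        printf("e8/m2 max vl: %zu\n", __riscv_vsetvlmax_e8m2());
        return 0;
    }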
In my code, I happen to read the data with LMUL=2. The trivial implementation would just call vrgather.vv with LMUL=2, but since we only need a 128-bit (16-byte) lookup table, LMUL=1 is enough to hold it (V requires a minimum VLEN of 128 bits).
So I do six LMUL=1 vrgather.vv's instead of three LMUL=2 ones, because no lane crossing is required and this should run faster in hardware (see [0] for a relevant micro benchmark):
    # codegen for the equivalent of that function
    vsetvli a1, zero, e16, m2, ta, ma
    vsrl.vi v16, v10, 4                # shift high nibbles down; the bits that
    vsrl.vi v12, v12, 4                # leak across bytes at e16 are masked below
    vsetvli zero, a0, e8, m2, ta, ma
    vand.vi v16, v16, 15               # mask each index vector to 4 bits,
    vand.vi v10, v10, 15               # giving the nibbles used as indices
    vand.vi v12, v12, 15               # into the 16-byte tables
    vsetvli a1, zero, e8, m1, ta, ma   # drop to LMUL=1 for the gathers
    vrgather.vv v18, v8, v16           # six LMUL=1 gathers (two per lookup,
    vrgather.vv v19, v8, v17           # one for each half of an LMUL=2 group)
    vrgather.vv v16, v9, v10           # against the tables in v8, v9, v14
    vrgather.vv v17, v9, v11
    vrgather.vv v8, v14, v12
    vrgather.vv v9, v14, v13
    vsetvli zero, a0, e8, m2, ta, ma   # back to LMUL=2
    vand.vv v10, v18, v16              # AND the three lookup results together
    vand.vv v8, v10, v8
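In intrinsics terms, the splitting trick looks roughly like this (a hypothetical helper, not the actual source behind the listing above):

    #include <riscv_vector.h>

    /* One 16-entry table lookup over an LMUL=2 vector, done as two
       LMUL=1 vrgathers so no lane crossing is needed. */
    static vuint8m2_t lut16(vuint8m2_t idx, vuint8m1_t table) {
        size_t vlmax = __riscv_vsetvlmax_e8m1();
        vuint8m1_t lo = __riscv_vrgather_vv_u8m1(table,
                            __riscv_vget_v_u8m2_u8m1(idx, 0), vlmax);
        vuint8m1_t hi = __riscv_vrgather_vv_u8m1(table,
                            __riscv_vget_v_u8m2_u8m1(idx, 1), vlmax);
        vuint8m2_t r = __riscv_vundefined_u8m2();
        r = __riscv_vset_v_u8m1_u8m2(r, 0, lo);
        return __riscv_vset_v_u8m1_u8m2(r, 1, hi);
    }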
This works for every VLEN of at least 128 bits, but implementations with larger VLENs do have to perform a theoretically more complex operation.
I don't think this will be much of a problem in practice, though, as I predict most implementations with a smaller VLEN (128, 256, 512 bits) will have a fast LMUL=1 vrgather.vv.
Implementations with very long VLENs (e.g. 4096 bits, like ara) could have special fast-path optimizations for smaller lookup ranges, although it remains to be seen what the hardware ecosystem will converge on.
I'm still contemplating whether or not to add a non-vrgather version, with runtime dispatch based on a large VLEN or a quick performance measurement. In my case this would take around 30 instructions or more when done trivially. Your example would require about 8 eq + 8 and vs 4 shuffle + 4 eq; that isn't too bad.
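For the whitespace case, the trivial non-vrgather version would look something like this (a sketch with the standard V intrinsics; the function name is made up):

    #include <riscv_vector.h>

    /* Compare against each whitespace char and OR the masks. */
    static vbool4_t is_ws(vuint8m2_t v, size_t vl) {
        vbool4_t m = __riscv_vmseq_vx_u8m2_b4(v, ' ', vl);
        m = __riscv_vmor_mm_b4(m, __riscv_vmseq_vx_u8m2_b4(v, '\t', vl), vl);
        m = __riscv_vmor_mm_b4(m, __riscv_vmseq_vx_u8m2_b4(v, '\n', vl), vl);
        return __riscv_vmor_mm_b4(m, __riscv_vmseq_vx_u8m2_b4(v, '\r', vl), vl);
    }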
vrgather.vv is probably the most decisive instruction when it comes to scaling to larger vector lengths.
PS: I just looked over my optimized strlen implementation and realized it had a bug. That's fixed now; the hot path didn't change, it was just the setup that didn't work correctly.