Hacker News

I'm actually doing something quite similar in my in-progress Unicode conversion routines.

For UTF-8 validation there is a clever algorithm that uses three 4-bit table look-ups to detect UTF-8 errors: https://github.com/simdutf/simdutf/blob/master/src/icelake/i...

Aside on LMUL, if you haven't encountered it yet: RVV allows you to group vector registers when setting the vector configuration with vsetvl, such that vector instructions operate on multiple vector registers at once. That is, with LMUL=1 you have v0,v1,...,v31. With LMUL=2 you effectively have v0,v2,...,v30, where each vector register is twice as large; with LMUL=4 v0,v4,...,v28; and with LMUL=8 v0,v8,...,v24.

In my code, I happen to read the data with LMUL=2. The trivial implementation would just call vrgather.vv with LMUL=2, but since we only need a 128-bit lookup table, LMUL=1 is enough to store it (V requires a minimum VLEN of 128 bits).

So I do six LMUL=1 vrgather.vv's instead of three LMUL=2 vrgather.vv's, because no lane crossing is required and this will run faster in hardware (see [0] for a relevant micro benchmark):

        # codegen for equivalent of that function
        vsetvli a1, zero, e16, m2, ta, ma
        vsrl.vi v16, v10, 4
        vsrl.vi v12, v12, 4
        vsetvli zero, a0, e8, m2, ta, ma
        vand.vi v16, v16, 15
        vand.vi v10, v10, 15
        vand.vi v12, v12, 15
        vsetvli a1, zero, e8, m1, ta, ma
        vrgather.vv     v18, v8, v16
        vrgather.vv     v19, v8, v17
        vrgather.vv     v16, v9, v10
        vrgather.vv     v17, v9, v11
        vrgather.vv     v8, v14, v12
        vrgather.vv     v9, v14, v13
        vsetvli zero, a0, e8, m2, ta, ma
        vand.vv v10, v18, v16
        vand.vv v8, v10, v8



This works for every VLEN of at least 128 bits, but implementations with larger VLENs do have to perform a theoretically more complex operation.

I don't think this will be much of a problem in practice, though, as I predict most implementations with a smaller VLEN (128, 256, 512 bits) will have a fast LMUL=1 vrgather.vv. Implementations with very long VLENs (e.g. 4096 bits, like Ara) could have special fast-path optimizations for smaller lookup ranges, although it remains to be seen what the hardware ecosystem will converge to.

I'm still contemplating whether or not to add a non-vrgather version and runtime dispatch, based either on detecting a large VLEN or on quick performance measurements. In my case this would require more than 30 instructions when done trivially. Your example would require about 8 eq + 8 and vs 4 shuffle + 4 eq, which isn't that bad.

vrgather.vv is probably the most decisive instruction when it comes to scaling to larger vector lengths.

[0] https://camel-cdr.github.io/rvv-bench-results/canmv_k230/byt...

PS: I just looked over my optimized strlen implementation and realized it had a bug. That's fixed now; the hot path didn't change, only the setup was incorrect.




Perhaps another way to do gathers with mixed sizes would be a combination of the vector and packed-SIMD extensions, using a SIMD register for the lookup table.


I think I'll propose a vrgatherei4 (analogous to vrgatherei16, but with 4-bit indices) for LUTs when I eventually finish my project with a blog post.



