Code generators are autoregressive (code->code) not labeled (text->image) so the attack wouldn't work. Also, you can actually tell if a code model works or not while you're training it, by running tests on the output.
You could put up a lot of misleading code where the comments are wrong or there's bugs in it… seems bad for obvious reasons though.
wasn't there an attack a while ago that used hidden characters to break compilers? i'll see if i can dig it up. maybe something like that could be used for github. you'd have to have a pre commit and pull hook that would encode / decode the malicious characters