Yes, but then by definition they aren't communicating, and their effectiveness is massively reduced. They could, for instance, be attacking the same target or even a decoy (wasting ammo), or fly into a trap that other drones would have warned them about. There are also no drones currently that can make firing decisions on their own, thank god, this small-scale test notwithstanding. The problems AI has to solve are hard enough even without radioelectronic warfare or adversarial, AI-specific countermeasures. With those, they become even less tractable.
I agree that it will be an arms race between AI researchers in competing militaries: one side finding ways around defenses, the other making attacks ineffective.
Of course any country not in on it will be far, far, far, far behind. A drone-swarm-based air force going up against a less capable foe seems overwhelmingly superior. I'm sure there will be give and take as people figure out how to disrupt swarms better, but I also have no reason to think there won't be ways to compensate for a countermeasure once it's understood.
Or we could just, I dunno, stop having so many wars... but I'm kind of at the point where it's obvious enough that every major world power is working on it that I'm not sure what else the US can do.
Not building $1B fighters seems like a good decision all around if there's any other way. No offense intended to anyone working on them, but they seem unreasonable on the face of it when you could build hundreds of unmanned drones instead (or pay teachers more, of course...).