As if that would even have any effect in that situation. No amount of audits and rules would prevent TikTok from collecting data and manipulating public opinion.
Why not monitor it? Create thousands of read-only accounts that "prefer" content with all kinds of ideological viewpoints and statistically analyze whether the algorithm is biased toward promoting certain viewpoints. I'm not smart enough to implement something like that, but it sounds like a solvable problem to me.
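To make the idea concrete, here's a rough sketch of what the analysis step could look like. Everything in it is hypothetical: the persona names, topic labels, and feeds are made-up stand-ins for data a real audit would have to collect from seeded, read-only accounts over a long observation window.

```python
# Hypothetical audit sketch: simulated stand-in data, invented persona and
# topic names. A real study would scrape the feeds actually served to
# thousands of seeded accounts instead of generating them here.
import random
from collections import Counter
from scipy.stats import chi2_contingency

PERSONAS = ["left_leaning", "right_leaning", "apolitical"]
TOPICS = ["protest_coverage", "government_praise", "entertainment", "other"]

def observed_feed(persona: str, n_videos: int = 500) -> list[str]:
    """Stand-in for the measurement step: which topics does the feed serve
    to an account seeded with this persona's preferences?"""
    random.seed(persona)  # deterministic fake data, for illustration only
    return random.choices(TOPICS, k=n_videos)

def topic_counts_by_persona() -> list[list[int]]:
    """Contingency table: one row per persona, one column per topic."""
    rows = []
    for persona in PERSONAS:
        counts = Counter(observed_feed(persona))
        rows.append([counts.get(topic, 0) for topic in TOPICS])
    return rows

if __name__ == "__main__":
    table = topic_counts_by_persona()
    chi2, p_value, dof, _ = chi2_contingency(table)
    # A tiny p-value would mean the topic mix differs across personas more
    # than chance explains -- evidence of skew, though not of intent.
    print(f"chi2={chi2:.1f}, dof={dof}, p={p_value:.3g}")
```

A significant result would only show that the feeds diverge across personas, not why, so this kind of audit produces grounds for further scrutiny rather than proof of manipulation.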
I thought about this too. In no way do I suggest it's an actual solution, but I wonder if some kind of reporting could be used as leverage to move US leaders toward a solution that doesn't require banning the app or handing it over to them.
I don't think I need a citation to say that it's feasible for China to inject malware via the TikTok app on people's phones. Would it be difficult? I imagine so. But I think the risk is such that the onus is on proving that it's not possible, not the other way around. China is a hostile power and an authoritarian regime. It's a different risk calculus than Facebook, which is not controlled by a dangerous foreign adversary.
The alleged national security implications of Tiktok are not based on spreading propaganda, but on gaining access to information about Americans. A privacy law would address that issue, as well as protect Americans' privacy from other companies, regardless of where they are based.
> The alleged national security implications of Tiktok are not based on spreading propaganda, but on gaining access to information about Americans
What? Doesn't the opinion itself literally say that the threat of "covert manipulation of the content" was one of the government's justifications? Never mind the millions of times that Chinese control over the content people view has been brought up as a rationale both inside and outside of Congress. Hasn't this been beaten to death already?
There is no "algorithm": the policies of a service like Tiktok are spread throughout the entire system. The only meaningful way to "release the algorithm" would be to release the whole source code.
Furthermore, releasing the source code wouldn't help, since regular people aren't able to understand what it means; and there is no way to verify that the released source code corresponds to what is actually being run.
It would be great if there was some way to verify that a service you're using matches some published code, but we don't have that.
> Furthermore, releasing the source code wouldn't help, since regular people aren't able to understand what it means
Releasing the code does help. Joe can't open up his car and fix the engine control code, but the local repair shop can. They can understand it and flag to a journalist, "huh, this manufacturer pushed a new version that'll make the car stop driving if you service it at a competitor's workshop", or whatever the car equivalent of this TikTok algorithm concern would be.
The second problem you mention, I fully agree with: verifying whatever they publish. Client source code you barely even need, because it'll just be a front end for what the servers decide to show you. Verifying that the code they publish is really what the servers run, that's the hard bit. But claiming to be open could be a start: something we can find discrepancies in and use to push for further openness.
Whether this would solve the national security concerns or help with the youth mental health crisis that's often linked to social media is way beyond my expertise, and I have no opinion on the matter. Just that, in general, not everyone needs to understand everything in the world for it to be useful to publish it.
The code has been available for years. ByteDance published their recommendation engine as an arXiv paper in 2021, and the code is available on GitHub: https://github.com/bytedance/monolith. The power is in the weights of the live-trained model.
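To illustrate what "the power is in the weights" means, here's a toy ranking sketch. It is not Monolith, and the features and weights are invented; the point is only that the scoring code can be identical while different learned weights surface different content.

```python
# Toy example: same ranking code, different weights, different feed.
import numpy as np

# Each candidate video as a small feature vector (made-up features):
# [predicted_watch_time, topic_is_political, topic_is_entertainment]
videos = {
    "political_clip":     np.array([0.6, 1.0, 0.0]),
    "entertainment_clip": np.array([0.7, 0.0, 1.0]),
}

def rank(weights: np.ndarray) -> list[str]:
    """The 'algorithm' never changes: score = w . x, sort descending."""
    return sorted(videos, key=lambda v: float(weights @ videos[v]), reverse=True)

weights_a = np.array([1.0, 0.1, 0.3])   # one plausible training outcome
weights_b = np.array([1.0, 0.8, -0.2])  # another, trained (or nudged) differently

print(rank(weights_a))  # entertainment_clip ranked first
print(rank(weights_b))  # political_clip ranked first
```

Auditing the published code tells you nothing about which of those weight vectors the live model has learned, which is why open-sourcing the repo doesn't settle the argument.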
What would have been a solution to the problem that people would have appreciated?