Hacker News

TikTok is just being used as an excuse to grab power, because this bill will do immeasurably more than allow the government to ban TikTok. It's quite similar to how terrorism was used as an excuse to pass the Patriot Act. The Patriot Act did little to nothing to ensure society's security, but gave the government an immense range of powers that, now more than two decades later, have still not been entirely rescinded. And this bill, unlike the Patriot Act, lacks even the pretext of being temporary.

This will also set the stage for even more draconian actions. Obama passing a law allowing for indefinite detention, including of US citizens, without trial or even legal representation [1] would never, in a million years, have been accepted in saner times. But people lose their minds when the government stirs up enough fear, which is then actively exploited to seize even more powers.

This bill will directly lead to the development of the "Great Firewall of America," as that's the only practical way to enforce censorship on the scale that the bill envisages, and it also grants the Secretary endless powers to carry it out. This bill is one of the purest embodiments of the absolute decline of America, and everything it, and we, once stood for.

Becoming China is not how you compete against China.

[1] - https://en.wikipedia.org/wiki/Indefinite_detention#United_St...




> Becoming China is not how you compete against China.

This proposal is actually substantially worse than the Great Firewall of China. The Chinese government doesn't particularly care if you use VPNs to get around the firewall, because they know that most people won't, and that's enough to meet their main goal, protecting domestic tech companies from competing with established US tech companies. The US is definitely going to be prosecuting individuals for using VPNs to read TeleSUR or whatever, though, because it's about controlling the narrative, not about protecting local industry.


> and that's enough to meet their main goal, protecting domestic tech companies from competing with established US tech companies.

You haven't experienced the GFW if you think that's the main goal. Oh, and please try telling your Chinese friends that "the govt doesn't particularly care about VPNs" and listen to what they yell at you.

Try reading this 13-year-old blog post on what the GFW was (at that time): https://gfwrev.blogspot.com/2009/10/gfw.html#:~:text=%E6%95%...

... and revisiting it this time, I realized it even recommended a tptacek publication. I didn't realize who this guy was the last time I read it.


Can you show me where it allows this in the actual bill, not a news report?

Because when I read the bill, it appears that penalties only apply to covered entities and covered transactions which are fairly narrowly defined, and don't include individuals accessing content. I question whether something like that could pass constitutional muster anyway.

My read of the text of the bill [1] is that the intent is to grant the Secretary of Commerce, in partnership with the DOJ, the ability to review, inspect, and prohibit equipment and technology that is used in critical infrastructure or that impacts national security (an overly broad term) and that is substantially owned by a foreign adversary.

On its face, that doesn't seem like a bad thing, but I agree that some of the language is too broad. For example, I think it should require congressional review to deem a country a foreign adversary.

There's also concerning language that exempts all of it from judicial review, which I don't understand - is this even possible?

I've been seeing a lot of reporting on this that is inconsistent with the actual text of the bill. It's hard to tease out the truth of the legislation from the partisan flag carrying.

[1] https://www.congress.gov/bill/118th-congress/senate-bill/686...


There's one really tricky clause in there that I also missed on my first pass: "The term “covered transaction” includes ANY OTHER TRANSACTION, the STRUCTURE of which is designed or intended to evade or circumvent the application of this Act, subject to regulations prescribed by the Secretary."

That clause adds a vast amount under the blanket of "covered transactions." At a minimum, this would include the usage or offering of VPNs, Tor, or any sort of technology or connection that might be able to circumvent the methods the government will try to use. The "or" in the bill's clause also seems quite meaningful, as it suggests that offering a service which CAN circumvent the restrictions is just as illegal as offering a service INTENDED to do so.

Beyond that, this thing's a monster of obfuscation. To even begin reading it you need to remove 'weak' OR statements. For one simple example, from Section 3, all the Secretary needs to do to justify 'imposing mitigation measures' against something is to show that it poses an "undue" risk to the "safety of United States persons." The language about national security or critical infrastructure, which you mentioned from the same section, is irrelevant, as both are subsets of 'undue risk to the safety of United States persons.'


> (a) In general.—The Secretary shall identify and refer to the President any covered holding that the Secretary determines, in consultation with the relevant executive department and agency heads, poses an undue or unacceptable risk to the national security of the United States or the security and safety of United States persons.

This means the Secretary of Commerce makes a determination about any entity that "poses an undue or unacceptable risk".

Where "covered holding" can be any non-US entity, or any entity "directed or controlled" by a non-US entity, so essentially every entity, but especially foreign entities.

> (1) IN GENERAL.—A person who willfully commits, willfully attempts to commit, or willfully conspires to commit, or aids or abets in the commission of an unlawful act described in subsection (a) shall, upon conviction, be fined not more than $1,000,000, or if a natural person, may be imprisoned for not more than 20 years, or both.

So you might think subsection (a) refers to something really terrible, but actually:

> (a) Unlawful acts.—

> (1) IN GENERAL.—It shall be unlawful for a person to violate, attempt to violate, conspire to violate, or cause a violation of any regulation, order, direction, mitigation measure, prohibition, or other authorization or directive issued under this Act, including any of the unlawful acts described in paragraph (2).

Where "paragraph (2)" goes on to describe every possible technical channel (mobile, LAN, satellite, cable, back-haul networks, etc).

This gives the Secretary of Commerce the ability to say that reading Russian or Chinese news, for example, is unlawful, because it is a risk to national security. If you use a VPN to access that news, you are guilty of this crime.

It seems pretty clear to me.


But using a VPN to do something that isn’t illegal (say, logging in to work) would not be criminalized under the bill, correct?


As long as there's 0 connection to restricted countries, that's probably the case.

But if your work involves logging into Alibaba for procurement, or if you're a journalist who reads Weibo or something, you might be a felon, should the Secretary of Commerce deem you to be one.


> [1]

> The most controversial provisions to receive wide attention were contained in subsections 1021–1022 of Title X, Subtitle D, entitled "Counter-Terrorism", authorizing the indefinite military detention of persons the government suspects of involvement in terrorism, including U.S. citizens arrested on American soil.

> Although the White House and Senate sponsors maintain that the Authorization for Use of Military Force (AUMF) already grants presidential authority for indefinite detention, the Act states that Congress "affirms" this authority and makes specific provisions as to the exercise of that authority.

https://en.m.wikipedia.org/wiki/National_Defense_Authorizati...

AUMF:

https://en.m.wikipedia.org/wiki/Authorization_for_Use_of_Mil...


From the AUMF article:

> Today, the full list of actors the U.S. military is fighting or believes itself authorized to fight under the 2001 AUMF is classified and therefore a secret unknown to the American public.


>Becoming China is not how you compete against China.

I don't disagree, but I also don't know what alternative works without weaponizing our own ideals against us. Free speech is wonderful, but, since we are irrational apes, it can also amplify divisiveness. I worry that the alternative at a technological scale requires a civically engaged and rationally-minded populace, and my optimism is waning. So what do you suggest as a workable alternative?


Treat people as being smarter than you give them credit for, and more able to see through BS. There's a reason scammers try to put time pressure on you - given time, lies are easy to spot.


It’s not just “other people” though; it’s all of us. I think the crux is that there is abundant research showing humans are not particularly rational, but your take seems to assume they are. How do you reconcile those two diverging viewpoints?


Humans aren't necessarily rational, but they are good at social stuff - humans are wired for living in social situations. You only need the latter to give people credit for having a "BS radar."


Do we lump social media into this "social stuff"? Because I think there's mounting evidence that we aren't particularly good at managing it rationally. As I read it, that's really what the article and the law are about: how social media can be exploited by adversarial actors who leverage our irrational/unhealthy relationship with it.

Or, alternatively, maybe that's how the veiled power grab is being framed. But we'd probably still need to undermine that claim about irrationality to offer an alternative.


I question whether a study can accurately capture rationality in a comparable/aggregatable way between multiple people. Measuring the process used rather than the end result sounds quite challenging.

I expect that what those studies really measure is how well social media users align and agree with the researchers.


The alternative theory is almost certainly correct. If you actually read the bill, there is nothing in it about quality of information or misinformation. At the same time, the government has been warning about "malinformation" recently, which it defines roughly as "true facts that do not fit the broader narrative." If that definition sounds like something from the Soviet Union, that's because it is.

The talk about controlling the flow of bad information is a naked power grab, and always has been ever since the first governments tried to do it in the ancient era. What history has taught us is that the truth always catches up eventually, and that it is better to provide people with more information than less. I honestly don't think that the social media era has changed that at all, and it would require some very strong evidence that it did (which is completely lacking).

Remember that before Facebook, news/gossip junkies were subscribing to papers and tabloids of varying quality. We have just eliminated the dead trees. The content was just as bad, emotionally-charged, and false, and there were so many of these papers that you could create a Facebook-style information bubble.


I’ll be more blunt: how do you then undermine the claim that such a bill is necessary due to the irrationality of humans which can be more easily leveraged in the age of social media for bad outcomes?

It seems like you either need to make the case that:

1) humans aren’t irrational or

2) the risk of such irrationality at scale is not really a major risk or

3) social media isn’t uniquely poised to capitalize on that risk or

4) we already have tools capable of mitigating that risk

It sounds like you’re debating 3) but there seems to be some evidence that social media tends to spread “bad” information faster and farther than “good” information by hijacking innate human psychological traits more easily.


I'm suggesting primarily that 3 is true, and we (as individuals in a society which is and has always been full of misinformation) have developed pretty good BS detectors to compensate. Bad information has always been faster to spread than good information. The only difference today is that all information is faster.

Also, to some degree, 1 is fairly true. You see this in economics, for example, where individuals are generally irrational about their choices, but modeling populations as a whole by treating them as 100% rational individuals gives pretty similar results to real life.


>only difference today is that all information is faster.

This is exactly the point being made (although I’m not sure I am completely convinced). The analogy could be: “People have always killed each other. Nuclear weapons just make it faster to kill lots of people. Hence there is no need to ban nuclear weapons.” The idea being that there is a tipping point where the scalability of technology outpaces our biologically evolved innate ability to control it.

I agree with #1 but only in some domains. You can see it with guessing the weight of a bull at a fair or the number of jelly beans in a jar. But there’s other examples where it breaks down. In economics we have bubbles that are destructive when corrected. I’m not sure we want that short term extreme volatility in something like governance.


The alternative to that short-term volatility is long-term tyranny. No thank you. I'll take the volatility.


I think that’s a false dichotomy and not particularly helpful. The question is more about what level of volatility can be tolerated while still maintaining a stable society.


> given time, lies are easy to spot.

Really? Maybe small lies. But not bigger ones.

"The election was rigged" is a binary statement that should be resolvable. In 2022[1], when asked about the 2020 US election, more than 25% of US citizens polled thought it was rigged. More than 55% thought it was not. (Some were unsure). Certainly it's not "easy to spot" lies in that basic example.

[1] https://www.surveymonkey.com/curiosity/axios-january-6-revis...


25% is not a majority opinion. In fact it's a minority opinion.

So the majority of people seemingly are able to discern lies, by that data.

Which is a good thing for free speech and democracy.


"Good thing for freedom and democracy" is a bit optimistic.

Suppose 40% of voters think social security should be eliminated, 35% of voters think social security should be saved, and 25% of voters vote to save social security because they think leprechauns are poisoning babies, and only the save-social-security party is willing to do something about it.

This would show that social security's future is now determined by who tells the best lies about leprechauns.

Whatever this thing is, I don't think "good" is the right word to describe it.


I believe it is fair to say all societies rely on some form of trust in others, and in the institutions set up within them, in order to function.

Regardless of whether it is a Democracy, Republic, Monarchy, etc.

Regardless of its means of resource distribution, such as capitalism, communism, socialism, etc.

America, for example, despite at times in the past having legal slavery, committing genocide, practicing segregation, stealing land, etc.

Despite being majority one religion.

Still expanded the vote. Expanded freedoms. Expanded access to education. And has shown an ability to let people that were previously expected to hide to live out in the open and freely.

Really taking one survey result at one moment in time as gospel to say that American voters can't be trusted seems counter to the examples of the past where, despite the majority of American voters feeling one way, opinions were changed and progress was made. Slowly. Painfully. Yet still in some positive direction.

Thanks for reading my rant and I'll leave with one last thing:

The public trust is sadly a fragile thing. When that trust is abused and poisoned by the very elected officials who have sworn to protect it, the voters' faith in the system becomes eroded. It unfortunately then takes longer and more work to rebuild that trust again. To move back to a society where arguments are made in good faith, not just to troll or wage culture wars.


It’s a good rant. But apropos to the article, what should a society do if an adversarial actor deliberately uses a tool to erode that trust?


Trust is an important word there. In general I think people tend to overestimate the ability of any given entity, let alone individual, to unilaterally influence society. When "revolutionary" action is taken it often tends to be a result of little more than already preexisting sentiment finally reaching a climax. This [1] is a poll of public trust in government, which is a proxy for much of this. The "Trump years" literally don't even cause a notable shift in the trend.

The point I make here is that the issues are more fundamental than e.g. China or Trump or whatever else. We're like flagellants of ancient times looking to the Heavens for the cause of the plague, while the answer lay somewhat more mundanely with the rats scurrying about our feet. So what IS causing the massive decline in trust? It's no more complex than those fleas: it's the government behaving in an untrustworthy way. The decline started shortly after everything around the assassination of JFK ended up being shrouded in secrecy and lies, which was just the start of a never-ending series of deceitful-looking, if not plainly deceitful, actions.

And now? The en vogue "solution" is supposed to be censorship, even more authoritarian levels of social control, and a borderline obsession with making sure people can only access the "right" message, which is little more than a euphemism for propaganda kept at arm's length. Do you genuinely think these sorts of things will help improve trust? Look to the countless countries which executed such mechanisms in times of far less external influence, like the USSR. The government had complete control not only over absolutely all messaging, but even over who could enter or leave the country. And it resulted not only in extreme distrust, but in a system that predictably collapsed [long] before even reaching its centennial.

Quite simply, if the government wants to regain trust, the best way to do that is to start acting in a transparent and trustworthy fashion. Yet we're going in the exact opposite direction, hard.

[1] - https://www.pewresearch.org/politics/2022/06/06/public-trust...


This is a very good post, but I think it misses part of the point. You talk exclusively about the erosion of trust between the people and the government. And I agree with your point. But I think the real distinction (and current issue regarding social media) isn’t erosion between the populace and the government, but between the populace and itself. The “othering” and divisive nature of social media, which erodes the “united” in United States.


This bill is only about the external, and a general framework for the establishment of a national firewall. If the government wants to ban all social media, more power to them. It would never survive 1st Amendment scrutiny, but if it could, I'd even support it! But that's tangential, as this bill isn't that.

As for what's causing divides, the one thing I'd observe is that if it were the internet then we'd expect to see divides of a similar magnitude throughout the world, and that is not the case. Some Western democracies, such as Norway, are even becoming less divided. They have no censorship whatsoever besides a handful of file-sharing sites being blocked, and universal access to the internet. Beyond this there are so many possible, and completely viable, explanations for the divisions in America that I think it turns into a Rorschach test. It's like trying to explain the fall of Rome.


You're right; the bill is aimed at adversarial threats to national security. Social media isn't the end, just the means in this case.

>we'd expect to see divides of a similar magnitude throughout the world, and that is not the case.

I'm not sure this premise holds. For one, I think it's missing the nuance about the size and heterogeneity of the U.S. Take, for example, the thesis that the country is more accurately comprised of multiple different dominant sub-cultures.[1] I think when you expand the other Western democracies similarly, we see analogous divisions (see: Brexit).

[1] https://www.npr.org/2013/11/11/244527860/forget-the-50-state...


That wasn't the result of informed opinion but cult-like affiliation. Some politicians work hard to radicalize their voters so that they become almost religiously attached to given ideologies. The principle is that an idea can be changed, but a faith cannot, so once they're hooked they'll fight on behalf of their "god", and call any evidence against as fabricated.


Surely TikTok's or other social media companies' algorithms could be tricked into provoking cult-like affiliations, even if they didn't actively promote them. And social media actively promoting informed opinions over cult-like affiliations is probably not its goal.


That’s central to my point. “well-informed and rational” is beginning to seem like too high of a bar to expect, especially when people deliberately take advantage of that aspect of human nature.


What does rigged mean?

Does gerrymandering count as rigged?


I think there is enough evidence of attempts to rig every single US election in my lifetime (between gerrymandering, voting machine malfunctions (hanging chads), unconstitutional election law changes in some states, organized smear campaigns, burying harmful news stories, etc.) that it's reasonable to say that any of those elections was rigged. I also think that there's no evidence that any of those rigging attempts changed the outcome, so it's reasonable to say that any of those elections were not rigged.

As far as I can tell, the only unreasonable position is that US elections are always completely and perfectly clean. People commit fraud to win county clerk elections - every more significant position almost certainly has some dirty games.


In this case, the specific question was "Joe Biden as having legitimately won the 2020 election".

Gerrymandering has no impact on presidential elections, so it does not count for this survey.


We’ve all learned that is not the case, unfortunately.


>"So what do you suggest as a workable alternative?"

Do not ever limit the freedoms of your own citizens under the false pretense of "saving the children." As George Carlin said on this particular subject: fuck the children. Rights are supposed to be increasing if you are a healthy society.

If you do not like China you can kick them out. You already have enough laws and discretion to do so. There will of course be consequences, but that's the price.


I'm inclined to personally agree, but we need to at least acknowledge the counterpoint. That being, in a cliched phrase, that "the Constitution isn't a suicide pact."

Or, in the words of Jefferson, "The laws of necessity, of self-preservation, of saving our country when in danger, are of higher obligation."

That's the claim being made by lawmakers: that certain trade constitutes a threat to national security. I'm not really seeing much debate about that claim.

If I understand your point correctly, you're instead saying we already have the tools to ban TikTok (presumably under commerce laws) and we don't need any other means.


Yes we already have tools to ban TikTok if this is what we need. Instead they want a law which would allow them more and more ways to criminalize our own people. Message to them - GFY.

>"the Constitution isn't a suicide pact."

Acknowledged a long time ago - the "do not yell fire in a crowded theater" example.


The same bill, but without criminalizing the use of VPNs to access foreign networks that are deemed "bad" by random politicians.


100% agree. Power can only be paraded around until it ends up in the wrong hands. Recent examples suffice: when there was an attempt to overturn an entire election, even simple sources of truth proved crucial.



