Hacker News new | past | comments | ask | show | jobs | submit | vadfa's comments login

So when H.266 or whatever comes out, you can't watch video anymore because your CPU can't decode it in software even if it tried?


An FPGA can be reprogrammed, and we really do this for standards with better longevity than video standards (e.g. cryptographic ones like AES and SHA). For standards like video codecs, we just use GPUs instead, which I assume is what OP had in mind for "specialized hardware" (specialization can still be pretty general :-)).


Hardware video decoding is done by a single-purpose chip on the graphics card (or dedicated hardware inside the GPU), not via software running on the GPU. Adding support for a new video codec requires buying a new video card which supports that codec.


SystemC bloat will require you to upgrade to a bigger FPGA!


What if that new variant is also engineered?


most "artificial origin" hypotheses centre around a lab-leak.

The number of scenarios where the new variant is engineered is vastly reduced, since it's less likely that this variant is a lab leak.

Other scenarios can still be within the realm of plausibility, but there are fewer of them.


What if new variants inevitably emerge given global mutations and lackluster vaccinations?


Exactly what I was going to ask. They wrote the UI in JavaScript, with obvious consequences. The day they rewrite it in a normal language, they will see me again.


It makes sense since there aren't enough houses for all the people who want to live in an area.


"iwr https://chocolatey.org/install.ps1 | iex" installs chocolatey without having to open IE


I see. That's why my comment was worded so carefully: though I'm sure one _could_ install chocolatey from the Windows CLI, a tiny minority ever would do so.

But thanks for introducing me to the iwr command. I don't use Windows often, but it's good to know what tools are available in the rare instances that I need to. Now I need a mnemonic to remember it... "I Want Real (curl)"


Invoke Web Request is another good one, considering iwr is just an alias for the PowerShell cmdlet Invoke-WebRequest.


C:\Windows\System32\curl.exe exists if you want real curl. (Or just run curl from a command prompt; curl in PowerShell is an alias for iwr.)
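Since the aliasing can be confusing, here's a small sketch of how to check it from a prompt. (This assumes Windows PowerShell 5.1 on Windows 10 1803 or later, which is when curl.exe started shipping in System32; note that PowerShell 7 removed the curl alias entirely.)

```powershell
# In Windows PowerShell 5.1, both `curl` and `iwr` are aliases for Invoke-WebRequest:
Get-Alias curl, iwr

# Call the binary by name to bypass the alias and get real curl:
curl.exe --version

# Or remove the alias for the current session so bare `curl` runs the binary:
Remove-Item Alias:curl
curl --version
```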


It's 5 years outdated though; better to use this, which is always up to date:

https://community.chocolatey.org/packages/curl


Masks work. I don't dispute that. I just don't want to wear one for the rest of my life. I don't want my government telling me what to wear.

I got vaccinated months ago because I thought that would be the end of masks. That was the entire point of vaccinating: not needing a mask because the virus could do nothing to you. The vaccination rate in my country is almost 80% and we still have a mask mandate. I will not even consider taking a booster until they do away with the mask.


While I agree with your general sentiment, saying "I will not even consider taking a booster until they do away with the mask" seems like cutting off your nose to spite your face. Wouldn't you want to get a booster just to lower your risk of getting a potentially severe illness?


That is not necessary because I'm forced to wear a mask. If we are forced to wear masks, it's because masks work. Right?

On the other hand, I fell ill for an entire day after taking the second dose, and I refuse to go through that again.


I don't think it's binary like that (I'm willing to be proven wrong).

Vaccines are effective at preventing infection and particularly severe infections.

Masks are only effective at preventing you from spreading your own infection. If you wear an N95 it will prevent you from getting infected, but there is user error involved in tightness/compliance/etc.

My understanding has been that vaccines are more effective at preventing infection than masks so if you're picking between the two for your own safety, it would seem that the vaccine is the better choice for yourself.

I will agree with you that the side effects from the vaccine are certainly a counter-argument for getting a booster every 6 months (or whenever a variant comes out that needs a different vaccine). I got mine last month and was asking myself, "I am pretty fit, in my 30s, I work from home, and my leisure activities are all outdoors (running/hiking/climbing)... Am I just going to get knocked out for 1-2 days every 6 months when my perceived risk level of catching covid is so low?"

My hope is that we're spending time working on better vaccines with fewer side effects. If the side effects are like the flu shot's, I'll gladly get a booster whenever needed without thinking twice.


Aww look, a coward.


Yeah, Germany has the best laws regarding speech: https://www.courthousenews.com/german-nationalist-wins-injun...


On the other hand, there are numerous cases where Facebook has been compelled by German courts to restore posts they have deleted under their content policy to protect the user's free speech rights. Something like that is unthinkable in the US.


Calling her a Nazi or fascist would have been fine, as that's understood as an expression of opinion; calling her a swine is clearly an insult, and insulting someone is a crime under civil law (not sure how this translates to the US, but it's not something the police will arrest you for; it's something the insulted party can take you to court over).

There's a common misconception in Germany that there is a law about "Beamtenbeleidigung" (insult of a public official) but the truth is that public officials have no special protection in that regard per se, it's just usually easier for them to sue people (esp. when you insult a police officer as they're literally the police). There are some caveats when insulting government officials, especially foreign government officials, but insulting a Nazi politician on Facebook is not any different from insulting a celebrity on Facebook.

The problem with social media is that it can be difficult to find out who to sue and compelling a foreign company to release the likely incomplete information they have on a user in an attempt to identify them isn't great. I'm not saying the law in question (NetzDG, requiring social media companies to block such content in Germany) is a good solution to this problem but it's certainly not the worst.

If anything, the problem with NetzDG is that it lets users who manage to avoid unambiguously revealing their identity engage in Holocaust denialism or Volksverhetzung[0]: their posts remain visible via a proxy when blocked (allowing Nazi groups to organize and operate hidden in plain sight), and when the content is deleted for those crimes, it is swept under the rug, making it harder to report to the actual authorities rather than to the social media company. Social media companies like Twitter have also made it nearly impossible to report ToS violations in Germany, as the report button immediately funnels you into NetzDG technicalities users aren't meant to understand, like which specific law you believe the offending post violates.

[0]: https://en.wikipedia.org/wiki/Volksverhetzung


[flagged]


I say this as an Austrian... the fact that Nazi ideology and symbols are forbidden in our country is a godsend. We have a small but persistent problem with militant far-right Nazi sympathisers, and the "Verbotsgesetz" is an invaluable tool in dealing with them.

The law is extremely clear, nobody breaks it accidentally, and it makes sure that dangerous extremists are taken seriously by the police.

A couple of times a year the police discover illegal weapon and ammo stashes when investigating neo-Nazis. These guys are dangerous, and pretending it's just about "free speech" is stupid.


So impeding the speech of two people is a better outcome than impeding the speech of none? I don't get it.

>the situation in America where everyone is a fucking edgelord in their spare time

So don't read them, as easy as that.


> So don't read them, as easy as that.

Hate speech and radicalizing speech isn't meant for those that aren't reading or listening to that speech, but rather to motivate those who do listen to act out the things that the speakers are saying.

The speakers hide behind "I didn't do anything, I just said something" and count on those who take their words into their heart and convert them into action. This is the danger of hate speech. It's not enough for good people to just ignore it. It requires more effort to prevent the talking from becoming doing. If the term "hate speech" doesn't sit well, I prefer to use the term "rhetorical violence". Basically, rhetorical violence is speech using the imagery and terminology of violence, intended to inspire violent thoughts in others.

The video posted below by another commenter shows how radicalizing speech is used to motivate others to commit acts that the speaker themselves would not commit or would claim not to support. In essence, the speakers are claiming the rights to rhetorical violence while being disconnected from actual violence that the speech might incite, inspire, or support.


We already have laws against violence.


We also have laws against threatening violence.


Reality doesn't fit so neatly into these categories that you're trying to construct, where speech is perfectly harmless unless it's direct incitement to violence and then suddenly it's harmful. That might be how the legal system works but it's not how reality works.

Motivating radicals and spewing racism might not be direct incitement to violence, but history shows that it can have significant negative consequences. The causal pathway is usually non-linear and hard to attribute. But, behind many genocides is racial hate speech that's been allowed to fester for years. Behind many lone wolf terrorist attacks is propaganda, even if nobody directly incited it.

I'm not arguing for or against any specific hate speech law here. Just trying to point out there's a grey area that your categorical thinking isn't good at addressing.


Wanna say, vadfa's two answers:

>We already have laws against violence.

>So don't read them, as easy as that.

are both "demand-side" solutions, which conservatives are well aware don't work when there's people dealing poison in the street.

Still, rexreed, I'll always fight for free speech, even when the people exercising it are abhorrent. And even knowing they'll take advantage of that to the fullest effect they can. Because if we really restrict it, the worst possible people will take control of who gets to say what. And it won't be the people we'd like to be making that decision. Every encroachment on free speech is like feeding steroids to the nazis.


Free speech and fighting rhetorical violence are not mutually exclusive. There are ways to reduce the visibility and spread of rhetorical violence without imposing on the rights of everyone to speak.

Let's use another mental construct if this is helpful. Imagine at your place of work, one person every day comes into the office, points at you and says "I hate this guy. Someone should beat the crap out of them". This person then posts messages on the company chat about how much of a terrible person you are, spreading all sorts of half-lies and untruths. This person goes as far as to put a message on the bulletin board in the cafeteria saying that you are a rotten person and someone should slash your tires or make your life a living hell.

One day you come to work. Your tires are slashed. Someone has trashed your desk. When you leave work at night someone assaults you, punches you and throws you on the ground. You can't see who it is.

You can point your finger and say "this person has been verbally harassing me". Would it be right for the company to say "any speech is allowed, therefore, this person has the right to continue that speech. Any actions are the fault of the perpetrator and not the speaker."

How long would you be willing to put up with that and defend that right even though it is causing you direct harm? There are indeed laws against violent and harassing speech, even though the words themselves aren't the harm, because of the direct harm that can be linked to them. I agree that the line between annoying, controversial speech and overtly violent speech is not well defined, but the lack of a well-defined boundary does not mean that there is no boundary at all. Clearly some things are beyond the pale.

Now the company can't tell the verbal harasser that they are not allowed to think or express their abhorrent views. That harasser, as abhorrent as their views are, is using protected free speech. But the company can tell the harasser they are not allowed to communicate those views on company grounds, in company chat rooms, in the company cafeteria, or in any capacity as a company employee. Basically, the company can impose limits on the spread of those views. And in the vast majority of cases, it's imposing limits on the spread of views that acts to dampen actual violence.


Definitely. I think the main problem modern society (post-internet) is having is that people have conflated the right to speak with the right (or the recent privilege they've been granted) to be heard, and assumed that if you have one you should automatically have the other. It's never been so.

[edit] since you updated... so, it's often been said that "speech" for nazis is a boot to the face, and that's all the words they need. And the truth is that if violence takes over it eradicates speech. A societal commitment to free speech is what allows the victims of threats and harassment and violence to speak out where they would otherwise be afraid to - especially if the intimidating environment is not just one company, but society as a whole. And this is why it's very dangerous, and can possibly breed more violence, to ever say that speech==violence [edit2: people reading "revolutionary books" in prison can be equated with violence by the prison guards]. Yes, incitement is beyond the pale, but in the example you just delineated it's very possible to separate incitement from opinion. Remove "someone should..." &c.

Now imagine you're born and everyone you're related to is accused of horrible crimes against humanity, controlling the media, stealing from honest people and drinking babies' blood, and your grandparents' families were murdered by people who said the same thing, and you hear people saying stuff like that every day which is clearly intended to incite people to, you know, kill you. And then imagine coming to the point where you know that preserving their right to say whatever they want about you, however disgusting and evil, is the only chance you have to preserve your own rights as an individual. If you can put yourself there, mazel tov, you're Jewish.

And it's natural to wonder whether all that free speech is a terrible idea, so, like all important things it's open to debate. But it's why my grandparents came to America, and they wouldn't like the idea of a law against nazi speech any more than I do.

Twitter, of course, is a whole other story. Private enterprise and should be held accountable for every word on their platform. They should banhammer anyone they feel like.


100% this is the case. People are conflating the rights of those who have rhetorically violent speech to express those views with the supposed "right" of those violent speakers to use a given platform to spread that rhetorical violence. From the perspective of the social media outlets: I can't stop you from expressing your abhorrent views, if it's protected speech, but you do not have the right to use my platform or my loudspeaker or my venue or my publication or my social network to spread that rhetorical violence. The rhetoric might or might not be protected, but the platforms have no obligation to spread that rhetoric.

Long story short, your speech might or might not be a protected right, but your use of a given platform to spread that speech, and any obligations to spread that speech or provide visibility or virality to that speech is not a protected right. One cannot be arrested or detained or sued for simply expressing their opinions, and I agree that even that abhorrent speech is protected. However, a platform can opt to not publish hateful speech, pull the plug on the loudspeakers, prevent the use of their venues, and refuse to promote abhorrent speech. The most effective means for combating hate speech and rhetorical violence is not to suppress the speech, but rather to prevent its spread. In this way the rights are protected without increasing the harm.

You're right that not too long ago, those with rhetorically violent speech would have little access to mass media. They would have to literally stand on street corners with megaphones to shout their messages or print their own publications and then find ways to distribute those publications. Nowadays, everyone has instant and immediate access to mass media whose viewership, ease of spread, and total audience size rivals even the very largest of mass media publications 100 years ago. In the current age where a single viral Tiktok or Tweet can get millions of impressions, the power (and responsibility) of media companies is far greater than ever.


> They would have to literally stand on street corners with megaphones to shout their messages

This is the primary problem. "Speaker's corner" has always been the place for insane people to shout. Social media has elevated it to the mainstream. (And made a handsome profit).

Insanity is contagious. What I mean by that is: Mental instability, FUD, conspiracy theories, propaganda, and simple sociopathic narcissism are viruses. No one who has witnessed 2016-present could doubt that. But anyone who knows about 1932-1945 already understood it.

Individuals with violent and malevolent personality disorders are very capable of spreading their mentality to others. All they need is a channel. Radio and television, in the wrong hands, were used to mobilize millions of people to their deaths. And suddenly we open a channel for the craziest of crazies, and think their mental afflictions won't affect billions of people around the world?

There is no right to be heard. Over all of human history, being heard by the masses has been an extremely rare privilege. Creating a technology that allows crazy people to be heard is frankly the definition of insanity breeding more insanity. Speech is not the problem. Proliferation is.


>Insanity is contagious. What I mean by that is: Mental instability, FUD, conspiracy theories, propaganda, and simple sociopathic narcissism are viruses. No one who has witnessed 2016-present could doubt that. But anyone who knows about 1932-1945 already understood it.

What an implicitly condescending, shitty thing you state so casually: that obviously the only reason Trump won in 2016 is because he "spread" his sociopathic narcissism to others, who also likely happen to be mentally unstable and possibly conspiracy nuts. No chance that maybe, just maybe, millions of people voted for him of their own volition, no less rationally than those who voted for a frankly terrible Democratic candidate like Clinton. No, the Trump voters were just mentally infected, weak-minded idiots, I suppose?


I'm not talking about everyone who voted for Trump. His is not the only or even the most important species of insanity that's been allowed to spread like a virus. Yes, people have all sorts of reasons for voting in populist demagogues without needing to specifically buy their insanity wholesale. Trump's madness is a symptom and a vector, a stop on the road between Alex Jones shouting on a corner and Adolf Hitler in a bunker. The door just keeps opening wider, though.


Enough with the absurd hyperbole already. Trump's presidency was neither an Alex Jones conspiracy nutfest nor an Adolf Hitler madhouse of dictatorship. It was mostly mediocre but hardly worse than many previous presidencies. Possibly better than some, even. I'm no fan of that guy in so many ways, but he lived up to very few of the insane worst expectations that were created when he entered office. The world certainly didn't go to hell because of it. If anyone promoted idiotic unfounded conspiracies during his presidency, it was the media, endlessly harping about Russian collusion in his victory but never quite able to provide solid evidence of a single aspect of that particular conspiracy theory. Or the obsessive fixation on the new boogeyman of "misinformation", which has suddenly become a global problem according to many media sources and politicians because, oh god forbid, a candidate they didn't give their formal benediction to happened to win a major election.


> So impeding the speech of two people is a better outcome than impeding the speech of none? I don't get it.

I mean, it's not a better strategy and it's not right - what I'm trying to say is that impeding one person's speech leads to impeding another person's speech, and that's how you end up with totalitarianism, regardless of who's in control.

The trouble is that whoever speaks loudest never respects the mechanism that allowed them to speak in the first place, or extends that right to anyone else.

So as to what leads to a better outcome, I'd say the results aren't in yet.


> So don't read them, as easy as that.

If only I knew the content of something before I read it. I would have to limit my internet use to Signal conversations with my dog to avoid most of tech’s poison machine.


That is a good idea; it is what I did. I don't visit any social networks, I don't read the news, and I stopped talking to those who send me information that I'm not interested in.


Yet here you are, commenting along with everybody else on HN. Unfortunately, real-world situations are not that black and white, so they cannot be solved with such black-and-white solutions...


I'm just going to leave this here: https://www.youtube.com/watch?v=P55t6eryY3g


There is a difference between having a set of rules and doing ad hoc moderation.


So writing a rule "don't show results from domains on list X" makes it ad hoc and not a rule?

I don't think there's somebody manually removing each result from search queries by hand. That wouldn't meet latency constraints.
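The distinction being argued here can be made concrete: a domain blocklist is a rule applied mechanically to every result, not per-result moderation. A toy sketch in Python (the domain names and function are hypothetical illustrations, not Google's actual mechanism):

```python
from urllib.parse import urlparse

# Hypothetical blocklist -- the "list X" from the rule above.
BLOCKED_DOMAINS = {"example-blocked.org"}

def filter_results(urls):
    """Apply the blocklist rule mechanically to every result; no manual review."""
    return [u for u in urls if urlparse(u).netloc not in BLOCKED_DOMAINS]

print(filter_results([
    "https://example.com/page",
    "https://example-blocked.org/torrent",
]))
# → ['https://example.com/page']
```

Because the rule is evaluated per query with a simple set lookup, it adds effectively no latency, which is exactly why it scales where manual removal could not.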


The set of rules includes "results that are illegal to divulge in a jurisdiction will not be shown."


Since Google is doing this voluntarily, many of the blocked results were legal to divulge, but Google is choosing not to, which does make it ad hoc.


This just becomes a question of pragmatism at that point - Google lacks the capacity to determine which of the blocked results are legal and which are not without incurring cost, so the most realistic approach is to recognize that a majority of results from that domain are illegal and block the domain. This is just the simplest way to enforce a particular rule in a particular case that can't otherwise be cheaply codified programmatically.


Yet Google doesn't generally block most content that is illegal in one jurisdiction in all others where it is not illegal. If Google is deciding to do that just with TPB, then that is indeed an ad hoc decision.


Women also have advantages in some other regards. I think it overall evens out.


I don't see the difference between the algorithm putting those posts in your feed and putting ads. They are both unwanted content.

