
Do not underestimate audio circuit tuning based on listening tests. Good ears and patience can substitute a lot of lab equipment dollars, especially for a hobbyist.


I would say it's the precise inverse: very few people can do it well by ear, and it takes a lot of practice and experimentation. At the most basic level, try matching the volume level of a subwoofer to the main speakers - even this is already very hard to do by ear, for reasons of human auditory perception and psychoacoustics.

You don't even need a lot of "lab equipment dollars"; measuring the basics can be done with a ~100€ calibrated USB mic. As a rule of thumb, you cannot develop a good speaker without measuring, unless you have advanced modelling tools and the experience to use them correctly, in which case the measurements will mostly match the simulations.


The best acoustic devices in the world have been made and tuned by hand for thousands of years. "You cannot develop a good speaker without measuring [modelling, or simulations]" is fundamentally untrue, despite what the cybernetic totalists may want to believe.


>The best acoustic devices in the world have been made and tuned by hand for thousands of years.

Says who? You should really acquaint yourself with current research, starting with Toole's famous "Sound Reproduction: The Acoustics and Psychoacoustics of Loudspeakers and Rooms".


> Says who?

Anyone who has read about or experienced instruments and music halls that were made before the commodification of electricity?


That book answers exactly that sort of confusion. Audio production (here, music creation) is very different from audio reproduction. There is no real way of judging the former, except for room acoustics, which have been scientifically guided for a long time.


Yeah, but there's a pretty important caveat here. The person who's actually doing the listening is the only one whose opinion about the quality matters. This is very much a case of "if it sounds good, it is good". That's all you're doing here, because you are literally the only person who needs to be satisfied with this arrangement.


That's valid.

My point was more along the lines of "a layman reading this might draw the conclusion that this is the proper way to solve this problem, because the text is written with an engineering mindset and demonstrates a certain degree of sophistication", and I wanted to caution people that this is not the case.


I assume you mean AD85050 (rather than AD8255). And yes, the last paragraph before "Going all-in" is about the idea of driving the I2S. But the I2C config sent to the ESMT chip would have had to be reverse-engineered as well...


Fixed, thanks. Somehow the title of the datasheet PDF is AD8255, despite the chip being an AD85050.


There are tools to do just that; see, for example, banks2ledger: https://git.hq.sig7.se/banks2ledger.git


The escape velocity on the surface of the Earth is approx. 11.2 km/s, not 11,000 km/s. That's a factor-of-1000 error. But nice video :)
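
For reference, a quick sanity check with the standard constants (just a sketch; nothing here is from the video):

    import math
    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
    M = 5.972e24   # mass of the Earth, kg
    R = 6.371e6    # mean radius of the Earth, m
    v_esc = math.sqrt(2 * G * M / R)  # escape velocity, m/s
    print(round(v_esc))               # ~11186, i.e. about 11.2 km/s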


Tom has really mastered the skill of casual misdirection; he adds these "wait, what?" moments to every video. A more notorious example is his explanation of how optical reflection works; check out his "30 Weird Chess Algorithms" (https://youtu.be/DpXy041BIlA) around 10:20.

Edit: spelling.


I made this to serve my own need (which it does), to allow my Synology NAS a clean shutdown in case of a power cut. My UPS is not recognized if plugged into the NAS via USB, so it is plugged into a different (SBC) computer instead, which runs this.

Speaking of prior art, the only one I am aware of is https://github.com/luizluca/nut-snmpagent. I am not a fan of Ruby and I only trust software I wrote (a.k.a. I prefer my own bugs in critical pieces of my infra), so here it is.

Posting in case someone else finds it useful, but will answer more general questions (should they come up) as well.


> I prefer my own bugs

In which case, you might wanna double check these return values https://github.com/tomszilagyi/upsc-snmp-agent/blob/8f35379d...


Not sure what problem you see with that, care to elaborate and/or open an issue?


It never returns 1, and returns 4 in two cases. This differs from the comment before the function, which I presume documents the return values (though at first glance I thought the "name(number)" format was a function call example).


Oh, ok, I guess that might be a bit confusing. :)

The comment documents the interpretation (by the UPS MIB) of possible return values. The same is true for the surrounding few functions (upsAutoRestart, upsBeeperStatus and upsOutputSource) that are similar in form and purpose. Obviously, there is no requirement that all possible output values are generated (exactly once or at all).

The function upsBatteryStatus returns 4 in two cases, because in both of those cases the battery is deemed 'depleted'. And it does not return 1 because there is no input case from upsc (that I know of, at least) where it would make sense to report the battery status as 'unknown'.
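
For illustration, a minimal sketch of that mapping logic (hypothetical code, not the actual function from the repo; the numeric values follow the UPS-MIB upsBatteryStatus convention: unknown(1), batteryNormal(2), batteryLow(3), batteryDepleted(4)):

    # Hypothetical sketch, not the actual code; "LB"/"RB" are standard
    # NUT ups.status tokens (low battery / replace battery).
    def ups_battery_status(charge, flags):
        if "RB" in flags or charge == 0:  # replace-battery, or fully drained
            return 4                      # batteryDepleted (the two cases)
        if "LB" in flags:
            return 3                      # batteryLow
        return 2                          # batteryNormal
        # unknown(1) is never returned: upsc always reports some status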


I find it pretty interesting that Martin does not mention the kind of community member-driven up/downvote mechanism found on this site (and elsewhere) as an example of decentralised content moderation.

Edit: now I see Slashdot and Reddit mentioned at the end in the updates section (I don't remember seeing them on my first read, but that might just be me).


"We vote on values, we bet on beliefs" - Robin Hanson.

Voting tells us what we value, but that isn't necessarily what is good for us. It also treats all content as somewhat equivalent, which isn't true. A call to (maybe violent) action isn't the same thing as sharing a cute cat video.


How would up/down votes work on a decentralized platform? Wouldn't it be easy to game by standing up your own server and wishing up a legion of sockpuppets?

There's a whole moonshot of spam resistance that's going to need to happen in Mastodon/Matrix/whatever.


Decentralized networks need trust, and trust is not a Boolean value.

With a centralized service, trust is simple: how much you trust the single entity that represents the service.

In a distributed network, nodes need to build trust with each other. In the best-known federated network, email, domain reputation is a thing. Various blacklists and graylists pass around trust values in bulk.

So a node with a ton of sock puppets trying to spam votes (or content) is going to lose the trust of its peers fast, so the spam from it will end up marked as such. A well-run node will gain considerable trust with time.

This, of course, while helpful, does not guarantee "fairness" of any kind. If technology and people's values clash, the values prevail. You cannot alter values with technology alone (even weapon technology).
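
To make that concrete, here is a toy sketch of per-peer reputation tracking (purely illustrative; the names and numbers are made up, and real reputation systems are far more involved):

    from collections import defaultdict

    class PeerTrust:
        """Toy reputation tracker: one score per peer node, 0.0 to 1.0."""
        def __init__(self):
            self.score = defaultdict(lambda: 0.5)  # new peers start neutral

        def observe(self, peer, was_spam):
            # Exponential moving average: spam erodes trust fast,
            # clean traffic rebuilds it slowly.
            old = self.score[peer]
            self.score[peer] = 0.9 * old + 0.1 * (0.0 if was_spam else 1.0)

        def accept_votes_from(self, peer, threshold=0.3):
            return self.score[peer] >= threshold

A node flooding sock-puppet votes would see its score decay towards zero after a handful of spam observations, at which point its traffic gets marked (or dropped) wholesale - the same bulk treatment blacklists apply to email domains.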


The problem you describe is called Sybil resistance and is known to be hard, but there are some working example systems, such as Bitcoin.


FTA: "even though such filtering saves you from having to see things you don't like, it doesn't stop the objectionable content from existing". He doesn't want upvoting/downvoting, he wants complete eradication (of whatever the majority happens to object to right now).


> of whatever the majority happens to objects to right now

I don't see that Martin Kleppmann is using 'democracy' to mean 'majoritarianism' here. He makes considered points about how to form and implement policies against harmful content, and appears to talk about agreement by consensus.

Democracy and majoritarianism are (in general) quite different things. This might be more apparent in European democracies.


He plays a little trick by saying "ultimately it should be the users themselves who decide what is acceptable or not". This has two meanings, somewhat contradictory.

The straightforward meaning is that ultimately I decide what is acceptable or not for me, and you decide what is acceptable or not for you. We can, and likely will, have a different opinion on different things.

But the following talk of "governance" and "democratic control" suggest that the one who ultimately decides are not users as individuals, but rather some kind of process that would be called democratic in some sense. Ultimately, someone else will make the decision for you... but you can participate in the process, if you wish... but if your opinions are too unusual, you will probably lose anyway... and then the rest of us will smugly congratulate ourselves for giving you the chance.

> Democracy and majoritarianism are (in general) quite different things.

Sure, a minority can have rights as long as it is popular, rich, well organized, able to make coalitions with other minorities, or too unimportant to attract anyone's attention. But that still means living under the potential threat. I don't see a reason why online communities would have to be built like this, if instead you could create a separate little virtual universe for everyone who wished to be left alone... and then invent good tools for navigating these universes, to make it convenient, from a user perspective, to create their places, to invite and be invited, and to exclude those who don't follow the local rules (who in turn can create their own places and compete for popularity).


> The straightforward meaning is that ultimately I decide what is acceptable or not for me, and you decide what is acceptable or not for you.

I disagree that this is straightforward in meaning. Even if I do have a good idea of what is unacceptable to me, I need someone external to screen for that. If the point is to avoid personally facing the content that I find unacceptable, it's impossible for me to adequately perform this screening on my own behalf.

I can instruct or employ someone (or something) to do this, but then ultimately they will make the decision for me. It's only plausible to do this at scale, unless I'm wealthy enough to employ my own personal cup-bearer who accepts the harm. So, it makes sense to band together with other users with similar requirements.

Your claim seems to be that delegating these decisions is a bad thing that should be avoided, but it is an essential and inevitable part of this service - I have to delegate that decision to someone else, or I won't get that service.

This is not to mention legal restrictions on content in different jurisdictions, which define a minimum standard of moderation and responsibility, that may include additional risk wherever they are not fully defined.


> I can instruct or employ someone (or something) to do this, but then ultimately they will make the decision for me. It's only plausible to do this at scale, unless I'm wealthy enough to employ my own personal cup-bearer who accepts the harm. So, it makes sense to band together with other users with similar requirements.

And here we run into the issue that economists and political scientists call "the Principal-Agent problem"[0].

Whether we're talking about the management of a firm acting in the interests of owners, elected officials acting in the interests of voters, or moderators of communication platforms acting in the interest of users, this isn't a solved problem.

And in fact, that last has extra wrinkles, since there is no agreement on just whose interests the moderator is supposed to prioritize (there can be similar disagreement regarding company management, but at least there the disagreement itself is far better defined).

This is deeply messy, and as hard as it is now, it is only going to get worse with every additional human that is able to access and participate in these systems.

[0] https://en.m.wikipedia.org/wiki/Principal%E2%80%93agent_prob...


This is the crux of censorship. If anything, it hinges on hubris: the censor assumes to know which content deserves to exist at all.

The need for censoring content still exists because certain kinds of content are deemed illegal, and failure to remove them may end in jail time.

On the other hand, moderation is named very aptly.

That said, I fully support the right of private companies to censor content on their premises as they see fit. If they do a poor job, I can just avoid using their services.


- Devil's advocate:

> I fully support the right of private companies to censor content on their premises as they see fit.

Those private companies don't have the right to censor content on their premises 'as they see fit' without giving up protections afforded to them in law as 'platforms'. The question is at what level of moderation and/or bias do they become a 'publisher', not a 'platform'.

> If they do a poor job, I can just avoid using their services.

Issues arise when the poor job spills over outside their service. As an example, the people who live around the US Capitol were endangered by pipe bombs, in part because of incitement organised on Twitter.


> Those private companies don't have the right to censor content on their premises 'as they see fit' without giving up protections afforded to them in law as 'platforms'.

Not only do they, but there’s no such thing as “protections afforded to them in law as ‘platforms’”: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

> The question is at what level of moderation and/or bias do they become a 'publisher', not a 'platform'.

This idea of “publisher vs. platform” has been entirely made up by people with no understanding of the state of the law. [1] “Bias” doesn’t play into it – they can do what they want, in good faith, on their website. Hacker News (via its moderators) has a bias against low-effort “shitposting” and posts which fan racial flames. It’s so frequent and well-known that it could become a tagline, “Hacker News: Please Don’t Do This Here”. At what level of curation of non-flamey posts does it become a publisher due to this bias?

[1]: https://www.eff.org/deeplinks/2020/12/publisher-or-platform-...


They don't have the common carrier protections. That is, phone companies are not required to censor hate speech, and ISPs are not required to censor unlawful content that passes through their pipes, because they are just, well, pipes, oblivious to the bytes they pass.

Platforms are in the business of making content available, so they are forced to control the availability, and censor unlawful content. They choose to censor any "objectionable content" along the way, without waiting for PR attacks or lawsuits. I can understand that.

(What is harder for me to understand is when these same platforms extol the freedom of expression. I'd like them to be more honest.)


Moderation is not upvoting/downvoting.

For example, when you moderate a debate you do not silence opinions you disagree with, you simply ensure that people express themselves within 'acceptable' boundaries, which usually means civility.

To me this means that 'decentralised content moderation' is largely a utopia: whilst the rules may be defined by the community, letting everyone moderate will, in my view, always end up being similar to upvoting/downvoting, which is a vote of agreement/disagreement.


On the other hand, decentralised filtering out of objectionable content might go hand-in-hand with replicating and thus preserving the most valuable content. Empirically, 90% of the content in most decentralized systems (e-mail, the Web etc.) is worthless spam that 99.99999% of users or more (a rather extreme majority if there ever was one) will never care about and could be eradicated with no issues whatsoever.


Isn't it just an example of democratic content moderation? We upvote, downvote and flag content. We don't get the ability to do so unless we are a community member of some tenure. It's augmented by centralized moderation by a handful of moderators.

How well it works is always a topic here.


We don't see all of the countless hours spent by mods like dang to keep the quality high. It's a thankless job most of the time!


Having moderated some large forums in the past, I know!


> Isn't it just an example of democratic content moderation

A democracy makes great efforts to ensure 1 person = 1 vote. Online platforms do not.


Up/downvote mechanisms always end up as agree/disagree votes.

Moderation is not the same. It is not about agreeing, but about filtering out content that is not acceptable (off-topic, illegal, insulting).

Article quote: "In decentralised social media, I believe that ultimately it should be the users themselves who decide what is acceptable or not"

In my view that is only workable if it means users define the rules, because, as said above, I think 'voting' on individual pieces of content always leads to echo chambers and to censoring dissenting views.

Of course this may be fine within an online community focused on one topic or interest, but probably not if you want to foster open discussions and a plurality of views and opinions.

We can observe this right here on HN. On submissions that are prone to trigger strong opinions, downvotes and flagging explode.


He mentions Reddit at the end of the article, which is close enough in mechanism to Hacker News.


There are various ways to take advantage of the GPU -- the so-called GPU-accelerated terminals contain bespoke OpenGL programs ("shaders") to do rendering on the GPU. Here's an interesting way of doing it: https://tomscii.sig7.se/2020/11/How-Zutty-works


Most "modern" terminals are surprisingly buggy (for lack of a better word); that is, they do not conform to the VT100-series specs (and their less documented, but well-known quirks). Here's a look at some of them: https://tomscii.sig7.se/2020/12/A-totally-biased-comparison-...


A very simple web application (aimed at being self-hosted) to collect videos for watching them (repeatedly) later: https://github.com/tomszilagyi/copycat

It invokes youtube-dl under the hood, but the user can add videos (to be downloaded) via the browser. It is quite usable as is, but pretty slim on features. Maybe someone here wants to take it further.
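
For the curious, the shell-out is roughly what you would expect; a hypothetical sketch (not the actual copycat code):

    import subprocess

    def fetch(url, dest_dir="videos"):
        # Download one video; -o sets youtube-dl's output filename template.
        subprocess.run(
            ["youtube-dl", "--no-progress",
             "-o", f"{dest_dir}/%(title)s.%(ext)s", url],
            check=True,  # raise if youtube-dl exits non-zero
        )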

