Hacker News | zilti's favorites

“I Got It from Agnes” is the funniest song ever written and I will refuse to socialize with anyone who thinks otherwise after hearing it. What a brilliant mind.

The first website is still more user-friendly, faster, and prettier than most of the Web today.

“Web4 should run on LaTeX. The World Wide Web is broken: it is dominated by a handful of websites, nearly everything is financed by ads, bloated tech needlessly slows down surfing, NFTs and blockchain are digital cancer, et cetera. Stop it. Just stop it.” —https://www.cynicusrex.com/file/web4.html

--

POSSE: Publish (on your) Own Site, Syndicate Elsewhere (https://indieweb.org/POSSE)


Good article, but misses one very interesting detail.

E.g. in the example with 司る (tsukasadoru, "be in charge"): the article says they "gave" the phrase a kanji. However, I would assume it happened the other way around: the kanji was approximated with two Japanese words.

What's the difference? Let's go back to when kanji was adopted. The article notes Japanese writers approximated sounds with Chinese kanji readings, but there's another overlooked part: they also approximated Chinese text with Japanese words.

That is, traditionally they would often write in classical Chinese, but read it out loud in Japanese. Indeed, they developed a system[0] that let them retrofit an entire language, with a completely different sentence structure, phonetics, etc., into their own. Or, in short: they could read Chinese in Japanese.

This is likely where 司る comes from; some classical text using 司 in a way that was at some point best approximated by the Japanese word tsukasadoru in that context.

[0]: Example from https://en.m.wikipedia.org/wiki/Kanbun (abridged):

> 楚人有(下)鬻(二)盾與(一)(レ)矛者(上)

> [...] the word 有 'existed' marked with 下 'bottom' is shifted to the location marked by 上 'top'. Likewise, the word 鬻 'sell' marked with 二 'two' is shifted to the location marked by 一 'one'. The レ 'reverse' mark indicates that the order of the adjacent characters must be reversed.

> Following these kanbun instructions step by step transforms the sentence so it has the typical Japanese subject–object–verb argument order.

> Next, Japanese function words and conjugations can be added with okurigana, [...]

> The completed kundoku translation reads as a well-formed Japanese sentence with kun'yomi:

> 楚(そ)人に盾と矛とを鬻(ひさ)ぐ者有り

Obviously, the system comes with limitations; it's more of a system to analyze classical Chinese text than a way to magically translate it into Japanese. Still, I find it the most fascinating part of the language, because you can view it as a sort of "machine translation" from a millennium before computers existed, simply by abusing the fact that they used the same sort-of-semantic alphabet.

This is also where the "many readings of a single word" property of kanji comes from. Modern Japanese writing is the fusion of the phonetic and semantic interpretation of kanji - kana being the simplification of phonetic forms, and kanji's weird readings being derived from kanbun-kundoku.


> Japanese has a lot of compound words of Chinese origin, where two or more kanji appear as a set.

In the original Chinese language, a "word" mostly consists of a single character. Interestingly, many of the compound words commonly seen in modern Chinese were in fact coined by Japanese scholars during their attempts to translate Western writings around the 19th century and were later "imported" back into the Chinese language. The two examples in the article, "art" (美术) and "science" (科学), are both of Japanese origin, though one can still tell that whoever coined the terms chose the individual characters because their meanings are relevant to the concepts the words describe.


There's an interesting piece of trivia regarding the title "War and Peace". The title in Russian is "Война и Мир", where "Мир" can mean both "peace" and "world", depending on context. However, there's some debate regarding which meaning was intended by Tolstoy.

I couldn't find anything about this on English wikipedia, but here's a rough translation from the Russian page:

Before the 1917-1918 language reform, "peace" was written as "миръ", and "world" as "мiръ". There's a legend claiming that Tolstoy initially intended the "world" meaning. Indeed, the second part of the epilogue contains some thoughts about why wars happen and how they affect the world as a whole.

Despite this, every edition of the novel published during Tolstoy's life was titled "Война и миръ" (= peace), and the French version of the title as written by Tolstoy was "La guerre et la paix". There are different explanations of this legend. (explanations follow, can't be bothered to translate)

https://ru.m.wikipedia.org/wiki/%D0%92%D0%BE%D0%B9%D0%BD%D0%...


"If you ever feel bad about yourself, consider that three days trapped in a cabin with lord byron was enough to traumaspawn two distinct schools of supernatural horror story"

https://nitter.privacydev.net/Sotherans/status/1507504361093...


One of the more interesting perspectives I gained on Lord Byron came from reading the book Blood in the Machine[0] and learning of his support for the Luddites. I'm surprised the posted article never mentions it, since it seemed to be a significant part of his fame, especially during his time in the House of Lords[1].

[0]: https://www.hachettebookgroup.com/titles/brian-merchant/bloo...

[1]: https://www.smithsonianmag.com/smart-news/byron-was-one-few-...


I've seen a similar idea implemented with a lot of tech teams. In particular, I've seen companies try to be "flat," meaning that the software developers don't have managers and are instead expected to self-organize.

But all of the normal tasks of a manager still exist: someone has to coordinate the work of multiple teams when those teams have zones of concern that overlap, and someone needs to be able to assign a budget, spend a budget, and take full responsibility for both the good and the bad that arise from spending that budget. If money is spent poorly, someone has to take the blame. If money is invested wisely, someone has to get the credit.

What tends to happen (in "flat" organizations) is that a lot of the coordination work gets pushed down to the individual software engineers, so that they now need to spend more of their time on coordination activities, and they spend less time actually writing code. I've seen "flat" organizations where senior engineers spend as much as 25 hours a week in meetings, because they've taken over all of the coordination work that would have previously been handled by an engineering manager.

Decisions about budget are rarely extended down to individual software engineers, so instead those decisions go up the hierarchy: you've now got the CEO making small-scale spending decisions that should have been passed down to some middle manager. For instance, at Futurestay.com, the CEO was dragged into an argument about what managed hosting service to use for MongoDB, a decision where the difference was maybe $200 a month. Obviously the CEO should not get dragged into spending decisions of that scale (unless you're talking about a 5 person startup that is just getting started).

If it were possible to wave a magic wand and make all management work cease to be necessary, then every company in the world would do it. Instead, many companies make the managers cease to exist while the management work is still there. And the overall result tends to be a loss of productivity, either because essential coordination activities are left undone, or because talented specialists are forced to do management work for which they have no training.

Also, if I might comment on a controversial issue, so-called "flat" organizations tend to be especially weak at enforcing discipline. If a worker is lazy, or if a worker does poor work, then they would normally run the risk of being fired, but in a "flat" organization they can often get away with poor performance for a long time, because fewer people are tracking their performance.

But I do think Bayer has a grasp on at least one important idea: they claim they are doing this to save $2.5 billion. That implies they think the management work can be done by other employees who are paid less than the managers. And that implies that the managers were overpaid, relative to the value they delivered. While I think Bayer is making a mistake by getting rid of its managers, I also think that managers are probably overpaid relative to the value they deliver.

When I was at ShermansTravel.com we had a very competent project manager who oversaw the tech team. She did a fantastic job of estimating tickets, prioritizing tickets, and keeping engineers focused on the right tickets. But she was paid less than any of the software engineers. And I think that is the right model for most companies, including Bayer. The default assumption, everywhere, is that managers need to be paid more than the people they manage, but why is that? I think there are many cases where the managers should be paid less than the people they manage.


For a while my company tried a “matrix organization”, which separates the people leader from the work leader.

So a person would be on a team, and the work the team did would be guided by the product owner and kept on track by the scrum master. Then a person’s actual boss was someone else. This meant an employee would really only talk to their boss for HR-type issues, which in practice meant most people never talked to their boss.

During this time I was a product owner. This whole setup had its weaknesses. First, it was confusing, and it felt a bit like the Office Space joke of having multiple bosses. Although I very much tried to maintain that I was there to serve the team and kept it democratic, not every product owner took that approach.

The other issue was when it came time for reviews. The people leaders gave the reviews, which would impact a person’s compensation. The people on my team reported to one of 5 different people leaders. Of those 5, only one actually reached out to me to ask how their employee was doing when it came time for reviews. People’s reputations preceded them, and their reviews were essentially based on those reputations, which sucks. Though I don’t think anyone made any radical changes during this period. If they had, I likely would have made it a point to make sure their boss was aware.

We also did have sprint reviews that should have been attended by each of the people leaders. We put a lot of effort into making those good and making sure everyone was able to show off what they had been working on, so that probably helped. Most teams really phoned it in, but my view was that we were only as good as what we showed. If we made something amazing but didn’t show anyone, or talked over people’s heads, it was as good as not doing it at all.


> “You say you were the chief, how big was your tribe?”

"Big enough to hunt woolly mammoth"


Great article! I've posted it in other comments before, but it's worth repeating:

The best explanation I've seen is in the book "The Secret Life of Programs" by Jonathan E. Steinhart. I'll quote the relevant passage verbatim:

---

Computer programming is a two-step process:

1. Understand the universe.

2. Explain it to a three-year-old.

What does this mean? Well, you can't write computer programs to do things that you yourself don't understand. For example, you can't write a spellchecker if you don't know the rules for spelling, and you can't write a good action video game if you don't know physics. So, the first step in becoming a good computer programmer is to learn as much as you can about everything else. Solutions to problems often come from unexpected places, so don't ignore something just because it doesn't seem immediately relevant.

The second step of the process requires explaining what you know to a machine that has a very rigid view of the world, like young children do. This rigidity in children is really obvious when they're about three years old. Let's say you're trying to get out the door. You ask your child, "Where are your shoes?" The response: "There." She did answer your question. The problem is, she doesn't understand that you're really asking her to put her shoes on so that you both can go somewhere. Flexibility and the ability to make inferences are skills that children learn as they grow up. But computers are like Peter Pan: they never grow up.


Poster I saw on a men's restroom wall in a restaurant in Brisbane:

Fly 1: Bet you I can run up this wall faster than you.

Fly 2: Bet you you can't.

They say Australians will bet on two flies running up a wall. If you have a gambling problem, call this number yada yada yada.

Other poster from the same restaurant:

Win a trip for two to Las Vegas.


I believe they're right about there being only one single electron. I tried to start my Pathfinder yesterday and it was dead as a doornail. None of the dash lights lit and there was not even a click from the starter. Someone else must've been using that electron though since I tried again a few minutes later and it cranked right up. I had my turn with it so I'm not mad at all.

I love that game, along with Fate of Atlantis and Day of the Tentacle - but this has to be my favorite one.

For some reason I never quite understood, it wasn’t that well received when it came out. My stance on this one hasn’t changed in 27 years and I firmly believe it is the pinnacle of the genre: incredible art, hilarious, crazy but somehow logical puzzles, great music, and, above all, extraordinarily good writing. Very, very few games have made me laugh in front of my computer. And don’t forget the brilliant voice acting.

I’ve just bought the latest title in the series but haven’t played it yet - I somehow fear it won’t be able to match COMI. I will also be introducing my kids to this incredible work of art - they will never hear about it from their friends at school, and I feel this is exactly the kind of thing I can and probably should share as a parent.

Lucas folks, thanks for making this game.


I'm on my phone so I can only do so much digging, but from the usbip sourceforge page that's linked above, it says that development has moved into the Linux kernel:

  For Linux, the source code of usbip was merged into the staging tree, and finally has been moved to the mainline since Linux-3.17. Development is ongoing in the kernel community, not here. Linux distributions will provide binary packages of usbip.*

Sounds a bit like USB/IP (https://wiki.archlinux.org/title/USB/IP).

Is this a new thing?


Possibly because a developer hired to write something around usbip would cost a lot less. https://usbip.sourceforge.net/

I'm running VirtualHere on thousands of Raspberry Pis, sharing various USB devices to cloud machines over VPN. It's been working without issues for years now. It seems to be a solo developer in Australia who's been working on it for a really long time. https://www.virtualhere.com/
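
In case it's useful context for the usbip route, here's a minimal sketch of how the kernel usbip tooling is typically driven, going from the project docs and the Arch wiki page linked above. The hostname and the bus ID 1-1.2 are placeholders, so treat this as a rough outline rather than copy-paste instructions.

    # On the machine with the physical USB device ("server"):
    sudo modprobe usbip-host
    sudo usbipd -D                 # run the usbip daemon in the background
    usbip list --local             # find the bus ID of the device to share
    sudo usbip bind --busid=1-1.2  # export that device (placeholder bus ID)

    # On the machine that should see the device ("client"):
    sudo modprobe vhci-hcd
    usbip list --remote=server.example.org
    sudo usbip attach --remote=server.example.org --busid=1-1.2

After the attach, the client sees the device as if it were plugged in locally; "usbip detach --port=<n>" releases it again.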

First time I heard about Wireless USB was just the other day, in a video looking at a luggable computer from 2006.

https://youtu.be/OO5hYhdxIuk

The video is worth a watch, although it also doesn’t give an answer as to why Wireless USB actually disappeared.

I was wondering after watching that video, if it could be due to security concerns? Like, is the Wireless USB protocol encrypted? And if so, does it use sufficiently strong encryption?

I did find a document that talks a bit about Wireless USB encryption.

https://cdn.teledynelecroy.com/files/appnotes/wireless_usb_e...


Such a shame they didn't illustrate the article with specific pictures of some of the actual markings the author is referring to. The small picture at the top isn't really conclusive.

Edit to add: That string quartet (Quartet No. 15 in A Minor, Op. 132) is amazing though[1], so it's great to be reminded to listen to it again.

[1] Aren't they all? But that one for sure is.


Define "Useless."

My team worked on a project for about 18 months. I won't go into detail about what it was, but it was (and still is) badly-needed.

We worked with the top interaction and graphic design folks in the world for the aesthetics and interaction. We had many meetings, flying the whole team to San Francisco, several times a year.

When the project was in its final testing, the company got cold feet, and canceled the project. I had to lay off two of my engineers.

It would still, to this day, be the best of breed. It was designed about twenty years ago, in the early aughts.


As long as we don't get the cyberpunk dystopian outcomes like tailored molecules dumped in the water supply to assassinate a specific individual.

> They sent a slamhound on Turner's trail in New Delhi, slotted it to his pheromones and the color of his hair. It caught up with him on a street called Chandni Chauk and came scrambling for his rented BMW through a forest of bare brown legs and pedicab tires. Its core was a kilogram of recrystallized hexogene and flaked TNT.

-- Count Zero, by William Gibson



I don't know why, but this reminded me of going to school in the early 90's: we'd go through the university's voicemail system, inputting random phone numbers and trying the default password, '0000'; if that worked, the voicemail on that number had never been set up. When we found one, we'd record a song as the greeting. We then posted notes by the various public phones on campus for our 'dial a song' directory so anybody could enjoy a song, like a big public jukebox.

The entire thing worked well for a semester, until some killjoy updated the phone system default to disable voicemail for unused numbers and blew away all our songs.


I've used paste all my life, but I never knew you could do

  paste - -
to convert stdin into 2 columns (or "paste - - - -" to get 4 columns!). TIL ...
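
To make that concrete, here's what it looks like with a throwaway seq input (the output columns are tab-separated by default; paste's -d option changes the delimiter):

    $ seq 6 | paste - -
    1	2
    3	4
    5	6
    $ seq 8 | paste - - - -
    1	2	3	4
    5	6	7	8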

Full list:

    SOS  - Shiny Object Syndrome
    MDD  - Magpie Driven Development
    HDD  - Hype Driven Development
    JDD  - Jargon Driven Development
    BWDD - BuzzWords Driven Development
    HNDD - Hacker News Driven Development
    RDD  - Resume Driven Development
    CVDD - CV Driven Development
    EDD  - Epoch Driven Development
    PDD  - Promotion Driven Development
    BPDD - Blog Post Driven Development
    MDD  - Medium.com Driven Development
    HDD  - Headline Driven Development

In the neon-lit, digitized colosseum of the 21st century, two titans lock horns, casting long shadows over the earth. Google and Microsoft, behemoths of the digital age, engaged in an eternal chess match played with human pawns and privacy as the stakes. This isn’t just business; it’s an odyssey through the looking glass of corporate megalomania, where every move they make reverberates through society’s fabric, weaving a web of control tighter than any Orwellian nightmare.

Google, with its ‘Don’t Be Evil’ mantra now a quaint echo from a bygone era, morphs the internet into its own playground. Each search, a breadcrumb trail, lures you deeper into its labyrinth, where your data is the prize – packaged, sold, and repackaged in an endless cycle of surveillance capitalism. The search engine that once promised to organize the world’s information now gatekeeps it, turning knowledge into a commodity, and in its wake, leaving a trail of monopolized markets, squashed innovation, and an eerie echo chamber where all roads lead back to Google.

Meanwhile, Microsoft, the once-dethroned king of the digital empire, reinvents itself under the guise of cloud computing and productivity, its tentacles stretching into every facet of our digital lives. From the operating systems that power our machines to the software that runs our day, Microsoft's empire is built on the sands of forced obsolescence and relentless upgrades, a Sisyphean cycle of consumption that drains wallets and wills alike. Beneath its benevolent surface of helping the world achieve more lies a strategy of dependence, locking society into a perpetual embrace with its ecosystem, stifling alternatives with the weight of its colossal footprint.

Together, Google and Microsoft architect a digital Panopticon, an invisible prison of convenience from which there seems no escape. Their decisions, cloaked in the doublespeak of innovation and progress, push society ever closer to a precipice where freedom is the currency, and autonomy a relic of the past. They peddle visions of a technocratic utopia, all the while drawing the noose of control tighter around the neck of democracy, commodifying our digital souls on the altar of the algorithm.

The moral is clear: in the shadow of giants, the quest for power blurs the line between benefactor and tyrant. As Google and Microsoft carve their names into the annals of history, the question remains – will society awaken from its digital stupor, or will we remain pawns in their grand game, a footnote in the epic saga of the corporate conquest of the digital frontier?


I always think of this from Aaron Swartz:

> But let’s say you can narrow it down to one good one, and you can find the time to read it. You plunk down an absurd $30 (of which, I’m told, less than $3 goes to the author) for a bulky hardcover and you quickly discover that the author doesn’t have all that much to say. But a book is a big thing, and they had to fill it all up, so the author padded it. There are several common techniques.

> One is to repeat your point over and over, each time slightly differently. This is surprisingly popular. Writing a book on how code is law, an idea so simple it can fit in the book’s title? Just give example after example after example.

> Another is to just fill the book with unnecessary detail. Arguing that the Bush administration is incompetent? Fill your book up with citation after citation. (Readers must love being hit over the head with evidence for a claim they’re already willing to believe.)

> I have nothing against completeness, accuracy, or preciseness, but if you really want a broad audience to hear what you have to say, you’ve got to be short. Put the details, for the ten people who care about them, on your website. Then take the three pages you have left, and put them on your website too.

Source: http://www.aaronsw.com/weblog/001229


One of my DND groups did an Everyone is John (EIJ) session when we had some absent members. It was a lot of fun. We ended up discovering that Canadian McDonalds was a cover for a human meat smuggling racket to supply Canadian cannibals. Fun times.

I've played Everyone is John for years now, and groups of friends I've played with have gone on to play with even more groups of friends. It's been a fantastic middle ground between role playing and party game, with no preplanning required and very little prep.

I was surprised to find that how I've interpreted and run the game differs from videos I've seen of others' games. Not to suggest there's a right way or a wrong way, but this seems to strongly impact the flavor of gameplay.

The way I've run it is guided by this paragraph:

> Everyone is John is a humorous, competitive roleplaying game about playing the various personalities of John, an insane man from Minneapolis. One participant is the GM, or, in Everyone is John lingo, “Everyone Else.” All of the other players are Voices in John’s head.

Hence, the Voices are John, directly making choices and taking actions. The GM, despite the name of the game, is not John. The GM handles all other characters, describes the world, calls for rolls, adjudicates, and does the usual GM stuff.

However, in videos of games I've seen online, players will act only as literal Voices in John's head, while the GM plays the voice of "sane" John, whom the other players get to boss around.

IMHO, I'm running it correctly, and the results speak for themselves, with tales of misadventures told and retold years later. The other version, to me, is more one-note and less worthy of replay. To each their own, of course.

