
I work from home with a young child. I think if I put a sign up, it'd just prompt more questions from her lol

Use the ASCII bell character "\a" and turn off the "visual bell" or whatever option it's called in the terminal (I hate those things) so you can actually hear it beep and find joy.
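
For what it's worth, a minimal sketch of what I mean in C (the message text is made up; any program that writes BEL to a terminal with the audible bell enabled behaves the same way):

    #include <stdio.h>

    int main(void) {
        /* "\a" is ASCII BEL (0x07); the terminal decides whether it becomes
           an audible beep or a "visual bell" flash. */
        printf("Long task finished.\a\n");
        fflush(stdout); /* push the bell to the terminal right away */
        return 0;
    }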


What I find offensive about this post is using "neo"vim and neomutt. What, were the original AgentSmithMutt and TrinityVim not good enough? ...Is it because they're old?

Well anyway, I mostly agree with this post. Another similar thing I hate to see is stuff like "Really? In 2024?", and I'm just like: that's the laziest critique of anything ever.


neo means new


So does modern. So the same author who hates the term "modern" uses neo - synonymous with modern for all practical purposes - vim and mutt. Why is that? Might those reasons also apply to other "modern" things?


If you're forking software to introduce change… it is factually newer.


The recent push to Wayland in 2024 is an interesting choice, given how productive and usable X11 is.


I'm very glad people made the push! My configuration has been much nicer based on Wayland for the last 4 years or so than it ever was on X. Screen tearing and limited refresh rates on mixed-Hz setups are now a thing of the past :)


By that measure we'd still be using DOS these days (which was also productive and usable... and indeed the initial backlash against this newfangled "Windows 95" thing kept going for a while, not very dissimilar to the X11 vs Wayland debates)


I mean you're free to fork and continue developing X11; right now there is nobody with both the capability and the desire to do so.

I'd wager that once I get hardware made in 2024, Wayland may work well for me (in its defense, it does work fine on my one machine with an Intel integrated GPU). But for now none of my (very old) discrete GPUs work reliably with Wayland: 2 GPUs and 3 drivers (nvidia vs nouveau for my old GeForce, and "radeon" (not amdgpu) for my old AMD card) give me 3 symptoms:

1. Crashes immediately on login

2. Black Screen

3. Kind-of sort-of works, but sometimes the screen just freezes for no reason, and switching VTs sometimes fixes it and sometimes doesn't.


> I mean you're free to fork and continue developing X11; right now there is nobody with both the capability and the desire to do so.

OpenBSD Xenocara


Last I heard, Xenocara was downstream of X11 rather than being a hard fork?


The era of a single machine is over. We need remote rendering for services on datacenter fleets without GPUs, so X11 is more often replaced by JavaScript in a browser (with support for a user's local GPU) than by Wayland.


Have fun yelling at that cloud for the rest of time.


X11 is deprecated. It has no active maintainers and barely even qualifies for "maintenance mode" status; the push to remove Xorg can be justified by enumerating the security issues and nothing else.

Strictly speaking, Linux is "productive and usable" with nothing but a terminal multiplexer and a shell to work with. With expectations as high as they are in 2024, I don't think former Windows or Mac users will feel at home with an X11 session. Switching away from bazaar-style software development is a prerequisite for the Year of the Linux Desktop.


I really do like Gnome and Wayland. I use them every day. That being said,

Bazaar-style software development is the sole advantage free desktop has over macOS and Windows.


Cathedral-style development doesn't necessarily mean closed-source, but instead reflects the less modular nature of Wayland relative to X11. There aren't multiple desktops all using the same display server; instead, each desktop implements its own around a common spec. Plug-and-play software has fewer and more restrictive interfaces to rely on. Modern desktop Linux is decidedly pared back, which is a good thing when you consider how scarily open Linux is in the right hands.

"sole advantage" isn't correct either - there's a plethora of reasons to use Linux. In the enterprise, people pay companies money to keep their Linux away from bazaar-level patches and randomly packaged repos. More casually, a lot of people don't use desktop Linux for a particularly advanced purpose and just treat it like a Mac/Windows/Chrome machine with fewer advertisements. Some people do very much get a lot of value out of the bazaar-side of Linux, but the comparison between the two styles wouldn't exist at all if Linux didn't entertain both philosophies.


> It only takes a little electricity to power this process, which can raise the refrigerant’s temperature by many degrees Celsius.

And the same electricity can raise the temperature by even more degrees Fahrenheit!


Heat in F, chill in C, et voilà! Free energy.


Unit arbitrage. I love it.


I'd buy that for a dollar!


Using a temperature system built around water to measure air temperature. I mean, I can use it, but the range of Fahrenheit is more useful.

What we really need is a combination of the two. Something that measures air temperature and water content, because 68°F at 5% humidity is a lot different than the same temp at 40%.


> Several accounts of how he originally defined his scale exist, but the original paper suggests the lower defining point, 0 °F, was established as the freezing temperature of a solution of brine made from a mixture of water, ice, and ammonium chloride (a salt). The other limit established was his best estimate of the average human body temperature, originally set at 90 °F, then 96 °F (about 2.6 °F less than the modern value due to a later redefinition of the scale).

Nothing beats scientific accuracy and thoroughness, right? So it then actually ended up being tied to water as well:

> For much of the 20th century, the Fahrenheit scale was defined by two fixed points with a 180 °F separation: the temperature at which pure water freezes was defined as 32 °F and the boiling point of water was defined to be 212 °F

(from https://en.wikipedia.org/wiki/Fahrenheit)


So that's why one degree Celsius is roughly 2 degrees Fahrenheit!

For some reason I never noticed there are exactly 180 degrees between freezing and boiling points on the Fahrenheit scale. 100°C is a nice "round" number, and 180°F divides evenly into a lot of smaller numbers.
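
A quick sketch of the arithmetic, since the thread is about relating the two scales (plain textbook conversion, nothing from the article):

    #include <stdio.h>

    /* A 100-degree span in C equals a 180-degree span in F, so a *change*
       of 1 degree C is exactly a change of 1.8 degrees F. */
    double c_to_f(double c) { return c * 9.0 / 5.0 + 32.0; }

    int main(void) {
        printf("0 C   -> %.1f F\n", c_to_f(0.0));   /* 32.0, freezing  */
        printf("100 C -> %.1f F\n", c_to_f(100.0)); /* 212.0, boiling  */
        printf("1 C step = %.1f F step\n", c_to_f(1.0) - c_to_f(0.0)); /* 1.8 */
        return 0;
    }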


> What we really need is a combination of the two. Something that measures air temperature and water content because 68F at 5% humidity is a lot different than the same temp at 40%

The "feels like" apparent temperature accounts for things like humidity and windchill[1].

Many weather apps provide the "feels like" temp, including my app: https://uw.leftium.com

I was going to drop the "feels like" reading in my new weather app (I just didn't notice a major difference), but maybe I'll keep it...

[1]: https://www.wikiwand.com/en/Apparent_temperature
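
If anyone wants to see what goes into a number like that, here's a rough sketch of the wind-chill half of "feels like" (the Environment Canada / NWS regression, coefficients quoted from memory, so double-check before relying on them; the humidity-based heat index is a separate, longer regression):

    #include <math.h>
    #include <stdio.h>

    /* Wind chill per the 2001 Environment Canada / NWS formula (from memory):
       temp in degrees C, wind speed in km/h; only defined for temps <= 10 C
       and winds above ~4.8 km/h. */
    double wind_chill_c(double temp_c, double wind_kmh) {
        if (temp_c > 10.0 || wind_kmh <= 4.8)
            return temp_c; /* outside the formula's range; report air temp */
        double v = pow(wind_kmh, 0.16);
        return 13.12 + 0.6215 * temp_c - 11.37 * v + 0.3965 * temp_c * v;
    }

    int main(void) {
        /* e.g. -5 C air with a 30 km/h wind "feels like" about -13 C */
        printf("feels like %.1f C\n", wind_chill_c(-5.0, 30.0));
        return 0;
    }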


Wet bulb temperatures account for evaporation, but I don't know if weather stations report wet bulb or dry bulb.


> In years of doing this not a single person has ever complained

I've never used your website, but if I did and the side arrow changed things, I'd immediately close it and never come back. You wouldn't get a complaint from me; you'd just lose me instantly and permanently.

It drives me absolutely nuts when sites do this; it is so disorienting.


> I'd immediately close it and never come back. You wouldn't get a complaint from me; you'd just lose me instantly and permanently.

Do you think this concerns me?

I don't make this site for you, someone who admits they've "never used" my site.

I make my site for my regular readers, some of whom have been reading the site for over 10 years.

The people who regularly email me comments and feedback on my posts. People who _love_ the fact that they can flip through the entire blog in seconds using the arrow keys.

> when sites do this

My site is not like other sites on the web. In fact, it is such an outlier, that I've recently started a new successor to the WWW, called the World Wide Scroll, to start aggregating more sites like mine.

My site is entirely public domain, has no advertising, trackers, or cookies, can be downloaded in 1 click and used entirely offline, is written in a new language (Scroll) that is mathematically shown to be the simplest/most powerful language yet invented and that compiles to HTML/CSS/JSON/XML/RSS/plain text, and is fully tracked by git so you can see the history of every line in every file.


> a new language (Scroll) that is mathematically shown to be the simplest/most powerful language yet invented

This is, at best, ambiguous. To show something mathematically, you have to have a precise definition of it, and neither "simple" nor "powerful" admits a precise definition that is widely agreeable enough for any mathematical proof based on it to be worth anything.



> Now that they're detaching from Serenity they can start reaping the benefits of the existing work in the FOSS ecosystem, which should enable a faster pace of development.

now they could embed chromium LOL


> All the rendering happens server side, and bitmaps are sent over the wire. It's basically a crappy VNC.

Even if this were true (which it isn't), there's a lot more to a GUI than the G. A lot of nice interoperability is provided too, like clipboard integration, dragging and dropping, mixed windows on the same taskbar, etc. Far more pleasant to use than awkwardly going to a full screen thing to get a window out.


> X protocol requires a lot of round trips that waste a lot of time.

This isn't very true. The X protocol is very async and lets you batch plenty of things when a response is required.
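
For a concrete picture of what "batching" means here, a sketch using XCB's cookie model (the atom names are just examples I picked): you queue a pile of requests first, then collect the replies, so N lookups cost roughly one round trip instead of N.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <xcb/xcb.h>

    int main(void) {
        xcb_connection_t *conn = xcb_connect(NULL, NULL);

        const char *names[] = { "WM_PROTOCOLS", "WM_DELETE_WINDOW", "_NET_WM_NAME" };
        enum { N = 3 };
        xcb_intern_atom_cookie_t cookies[N];

        /* Phase 1: queue all requests; nothing blocks yet. */
        for (int i = 0; i < N; i++)
            cookies[i] = xcb_intern_atom(conn, 0, strlen(names[i]), names[i]);

        /* Phase 2: collect replies; the requests went out together, so this
           waits on roughly one round trip rather than N. */
        for (int i = 0; i < N; i++) {
            xcb_intern_atom_reply_t *r = xcb_intern_atom_reply(conn, cookies[i], NULL);
            if (r) { printf("%s -> atom %u\n", names[i], (unsigned)r->atom); free(r); }
        }

        xcb_disconnect(conn);
        return 0;
    }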


You can't pipeline X11 operations in the presence of anything but a perfect network because

1. TCP streams require stalling when packets are dropped to keep the stream in order

2. X _requires_ by design that commands are performed in order

Which means that something that can do UDP and manages its own sequence ordering can do significantly better. This is why things like RDP, PCoIP, etc. could do full-frame-rate HD video 15 years ago and you still can't with the X protocol over the network.

Breaking up the screen into small 16x16 chunks or so, encoding on the GPU, and shipping that turns out to be significantly faster.

Especially when you take into account that virtually _nothing_ draws with X using X drawing primitives. It's almost all using Xshm for anything non-trivial.


Most modern Ethernet LANs are effectively lossless. You're not going to see a single dropped packet unless you saturate the link.

> Which means that using something that can do UDP and manages it's own sequence ordering can do significantly better. This is why things like RDP, PCoIP, etc could do full frame rate HD video 15 years

No it isn't. It's because those things actually compress the video, and X-forwarding generally doesn't. The transport protocol is completely irrelevant, it's just a bandwidth problem.

I've X-forwarded Firefox between two desktops on 10G Ethernet. I can watch 4K video, and I genuinely can't tell the difference between local and remote.
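
Some back-of-the-envelope numbers on why 10G makes that work (my own arithmetic, not from the comment above):

    #include <stdio.h>

    int main(void) {
        /* Uncompressed 4K RGBA frames at 30 fps. */
        double bytes_per_frame = 3840.0 * 2160.0 * 4.0;        /* ~33 MB/frame */
        double bits_per_second = bytes_per_frame * 8.0 * 30.0; /* ~8 Gbit/s    */
        printf("uncompressed 4K@30 is ~%.1f Gbit/s\n", bits_per_second / 1e9);
        printf("fits in 10G Ethernet: %s\n", bits_per_second < 10e9 ? "yes, barely" : "no");
        return 0;
    }

So a 10G link can carry the raw pixels, while a 1G link can't get anywhere close without a codec, which is the whole bandwidth-vs-transport point.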


If you're including TCP ACKs as part of the "chatty"/"required round trips" of a higher-level protocol, that's bad news for a lot of things. (Which, granted, is why they made those QUIC protocols etc., but still, it seems unreasonable to single out X's protocol for this, especially since RDP and VNC are commonly used over TCP as well.)

But:

> This is why things like RDP, PCoIP, etc could do full frame rate HD video 15 years ago and you still can't with X protocol over the network.

Compression is going to have a much bigger impact over a large motion than most anything else; you can stream video over HTTP 1.1 / TCP thanks to video codecs, but X (sadly, I think; it seems like such an easy thing that should have been in an extension, but even PNG or JPEG never made it in) doesn't support any of that.

> It's almost all using Xshm for anything non-trivial.

Xshm is not available over a network link and it is common for client applications to detect this and gracefully degrade.
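
Roughly what that detection looks like, as a sketch (not any particular app's code; real clients often just attempt XShmAttach and fall back when it errors out):

    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/XShm.h>

    /* MIT-SHM only helps when client and server share a machine, so apps
       typically gate it on the extension being present *and* the display
       looking local, then degrade to plain XPutImage over the socket. */
    int shm_looks_usable(Display *dpy) {
        if (!XShmQueryExtension(dpy))
            return 0;                          /* server doesn't offer MIT-SHM */
        const char *name = DisplayString(dpy); /* ":0" locally, "host:0" remotely */
        return name && name[0] == ':';         /* crude locality check */
    }

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;
        printf(shm_looks_usable(dpy) ? "using XShm images\n"
                                     : "degrading to XPutImage over the socket\n");
        XCloseDisplay(dpy);
        return 0;
    }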


> This is fun and all, but if you lose your connection, your windows will go away and your program will usually exit.

Interestingly, that is xlib's behavior more so than something inherent in the protocol. xlib assumes a lost connection is a fatal event, but if you're doing your own socket, you can choose to handle it differently. (Or, even with xlib, you can throw an exception from the connection-lost callback and regain control before letting it abort.)

Some server state is lost, but it is possible, with some care, to recreate that from your client upon reconnecting. You can even connect to a different server and somewhat seamlessly migrate windows over.

But yeah it isn't commonly done.
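
For the curious, the xlib escape hatch described above looks roughly like this (a sketch, not code from any real program; longjmp is the C stand-in for throwing an exception out of the callback):

    #include <setjmp.h>
    #include <stdio.h>
    #include <X11/Xlib.h>

    /* Xlib calls the IO error handler when the connection dies and will
       exit() if the handler returns, so the trick is to never return:
       longjmp (or throw, in C++) back out to code that can reconnect. */
    static jmp_buf reconnect_point;

    static int on_connection_lost(Display *dpy) {
        (void)dpy;
        longjmp(reconnect_point, 1); /* skip Xlib's fatal-exit path */
        return 0;                    /* never reached */
    }

    int main(void) {
        XSetIOErrorHandler(on_connection_lost);

        for (;;) {
            if (setjmp(reconnect_point)) {
                fprintf(stderr, "lost the X server; reconnecting...\n");
                /* the old Display* and all server-side resources are gone;
                   a real client would recreate its windows/pixmaps here */
            }
            Display *dpy = XOpenDisplay(NULL);
            if (!dpy) return 1; /* nothing to connect to; give up */

            /* ...normal event loop; a disconnect inside XNextEvent fires
               the IO error handler, which jumps back to setjmp above... */
            XEvent ev;
            for (;;) XNextEvent(dpy, &ev);
        }
    }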


You can survive disconnects even with xlib; I think the abort is just the default behavior.

Emacs is the one program I know of that can actually do this. It can pop up frames (Emacs lingo for windows) on multiple displays at the same time and even mix tty and X11 frames. The Emacs session survives connection loss fine; frames on other displays continue working, and you can reconnect if desired.

The one caveat is that Gtk used to have a bug that caused it to uncontrollably abort on connection loss, but I build my Emacs with --without-x-toolkit (so it uses raw xlib, no toolkit) and that configuration has always been robust and performant. If I remember correctly, the Gtk bug might be fixed now too.


Gtk also has an annoying misfeature here: it literally calls abort() when the connection is lost.

