pzmarzly's comments

Interesting, I went the other way about 7 years ago - switched from fish to zsh (initially with oh-my-zsh). The interactive experience was similar enough on both shells, and the performance was great on fish and okay-ish on zsh, but two things won me over:

1. With zsh, I can copy-paste some bash snippet and in 99% of cases it will just work. Aside from copy-pasting from StackExchange, I also know a lot of bash syntax by heart by now, and can write some clever one-liners. With zsh, I didn't need to learn everything from scratch. (I guess this matters less now that you can ask an AI to convert a bash one-liner into a fish one-liner?)

2. For standalone scripts... well, I think it's best to reach for a proper programming language (e.g. Python) instead of any shell language, but if I had to use one, I would pick bash. Sure, it has many footguns, but I know them pretty well. And the fish language is also not ideal - e.g. IIRC it doesn't have an equivalent of `set -e`; you have to add `; or return 1` to each line.
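To make the comparison concrete, here is a minimal sketch (the file and service names are made up):

    #!/usr/bin/env bash
    set -e                               # any failing command aborts the script
    cp config.json /etc/myapp/
    systemctl restart myapp

    # fish has no equivalent mode; each line needs its own guard:
    function deploy
        cp config.json /etc/myapp/; or return 1
        systemctl restart myapp; or return 1
    end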


I use fish, and on the very, very rare occasion I need to copy and paste bash from the internet, it's pretty easy to just type 'bash' into fish and paste it in. It's not like bash and fish conflict; you can have them both installed.

FWIW, fish is much more bash-compatible these days. In the last few releases we've introduced support for a lot of bash-isms that don't completely break the fish spirit or clash with its syntax.

I personally liked "; and" but... "&&" solves around half of the problems with copy-pasting and does not look terrible, so it was probably the right thing to add.

Thanks! I try fish once in a while, and currently the following are missing:

    $ cat <(echo "Hello")
    Hello
    $ cat <<<"Test"
    Test
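(I know fish's `psub` builtin can stand in for the first one, but it's not the same muscle memory:)

    $ cat (echo "Hello" | psub)
    Hello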

> 2. For standalone scripts... well, I think it's best to reach for a proper programming language (e.g. Python) instead of any shell language, but if I had to use one, I would pick bash. Sure, it has many footguns, but I know them pretty well. And the fish language is also not ideal - e.g. IIRC it doesn't have an equivalent of `set -e`; you have to add `; or return 1` to each line.

I'm sure you know this, but: no particular reason the interactive shell you use has to match the shell you use for scripts. All of my scripts are in bash, but I haven't used bash interactively in decades now, at least on purpose.


I write all my scripts with the hash bang as "#! /bin/bash" so even though fish is my interactive shell, I still use bash for all shell scripts. I think the restrictions you mention only apply if you use "#! /bin/sh" rather than bash specifically.

Just FYI, you should use `#!/usr/bin/env bash` instead of `#!/bin/bash` or whatever, because you can't assume the location of bash (but the location of `env` is indeed portably fixed). e.g. FreeBSD has bash at `/usr/local/bin/bash`.

And NixOS has bash somewhere in the Nix store... :)

Clarification: /usr/bin/env should be used for pretty much every shebang since it looks up the binary on $PATH.
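A minimal sketch of the pattern (the `set -euo pipefail` line is just a common habit, not required):

    #!/usr/bin/env bash
    # env walks $PATH, so whichever bash comes first wins:
    # /opt/homebrew/bin/bash, /usr/local/bin/bash, /bin/bash, ...
    set -euo pipefail
    echo "running under bash $BASH_VERSION"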


That assumes you care about portability. Not everybody does.

Writing portable software is difficult, and doing it for shell scripts even more so. Blindly pursuing portability for its own sake is not worth it. Weigh the cost of portability against the odds that the software will ever run on different systems.

For me personally it is never worth it to write my personal programs portably. This would require that I test them on different systems that I do not even use. Pointless.


It’s not so much a portability thing IMO as it is a utility thing. If I have a newer bash in my PATH than what is in /bin/bash, I want to use it.

bash is /bin/bash on macOS, unless the user really likes bash, in which case it's probably /opt/homebrew/bin/bash or /opt/local/bin/bash

I wouldn't say I particularly like bash, but it has seen a ton of improvements since Apple stopped updating the vendored version. Using that old bash, which is frozen for non-technical reasons, just seems stupid to me.

If you don't want bash-specific features, you might as well use zsh or dash or whatever lives in /bin/sh. If you do want bash-specific features, you might as well take advantage of the latest and greatest.

On that note, on my Macs, the bash I want is usually /opt/pkg/bin/bash or /run/current-system/sw/bin/bash :)


In any of those cases, using `/usr/bin/env bash` gets what the user probably wants

Yeah I'm just commenting on what the path for that would be

Can confirm. Often, when you run a script on more than just your own computer, bash is located in unexpected places.

For me, for example: `/data/data/com.termux/files/usr/bin/bash`

In such cases, scripts with the absolute path to bash in the shebang do not run correctly.


I “devolved” mostly along the same path. Bespoke shell to OMZSH to Zsh to Bash.

Zsh has a few nasty footgun incompatibilities with Bash. If I remember correctly, the worst one is how globbing / "*" works, which is why that behavior is guarded with an option.
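A quick illustration (option name from memory, so double-check):

    $ echo *.doesnotexist        # bash: unmatched glob is passed through literally
    *.doesnotexist
    $ echo *.doesnotexist        # zsh: hard error by default
    zsh: no matches found: *.doesnotexist
    $ setopt NO_NOMATCH          # restores the bash behavior in zsh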

My main reason for sticking with Bash is that it’s everywhere, and the places where it isn’t try very hard to support the most-used featureset of Bash.

A stock Bash shell does feel a little naked without my dotfiles though :)


Bash on macOS is pretty old (stuck at 3.2) because Apple avoids GPLv3. zsh has been the default login shell there since Catalina.

True. But it’s easy to install Bash 5 via Homebrew or MacPorts.
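For example:

    $ brew install bash                     # Homebrew
    $ sudo port install bash                # MacPorts
    $ /opt/homebrew/bin/bash --version      # verify (Apple Silicon Homebrew prefix)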

Reading the associated issue (https://github.com/fish-shell/fish-shell/issues/510) about the lack of "set -e" was interesting as it highlighted how weird Bash, and shell scripting in general, is from a programming language perspective. Imagine programming in any other environment where every function you call could either succeed or fail catastrophically. There's some talk about adding exception handling to Fish, but maybe the sensible thing to do is to have a mode where Fish ensures that you've dealt with each possible error before moving on. Which is what you would do anyway if you were invoking external programs from a non-shell language (like Python's subprocess.check_call).

In any case the discussion in that issue made a convincing (to me) argument that if you're doing the sort of scripting for which "set -e" makes sense, which is most of it, you should be using Bash. That doesn't mean you need to use Bash interactively though, as others have pointed out.


> Imagine programming in any other environment where every function you call could either succeed or fail catastrophically

There's not much to imagine since that's pretty much every other language?

Sure you can recover with error handlers (sometimes[0]), but by default all of them will hard abort in case of exceptions.

In our modern language landscape shells are very much the odd ones, where errors are completely silent by default and the thing just carries on oblivious that the world around it might be crumbling completely.

[0]: https://doc.rust-lang.org/book/ch09-01-unrecoverable-errors-...


> Imagine programming in any other environment where every function you call could either succeed or fail catastrophically

Laughs in client-side JS.


Hmm? It’s not like a JavaScript exception crashes the entire browser tab.

Client-side JS is event-driven. An unhandled exception stops processing for that event, but doesn’t block other events.


But scripting languages are not general-purpose programming languages: they are made to run commands, and by default a script should halt if a command fails, at least in the CLI execution context. The problem is that scripting languages mix the programming context and the scripting context, so a condition written in the script shouldn't be treated as a CLI exit status. Anyway, the lack of exit-on-command-error is exactly why I don't use fish for scripts; it's essential when scripting.

I think that oilshell is aimed at people like you. I’ve never used it, but their website does make some interesting points about how a shell ought to work and how this could be compatible with bash.

As a go programmer, "; or return" makes a lot of sense to me
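It's the same shape as `if err != nil { return err }`, just inline. A sketch (URL and names made up):

    function fetch_and_unpack
        curl -fsSLO https://example.com/data.tar.gz; or return 1
        tar xzf data.tar.gz; or return 1
    end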

This. To add some words on why this is important:

Given the remote-first, container-based world we're heading towards, decoupling the UI (terminal emulator) from its "backend" (tmux, code-server) is a great design decision, one which I think will ultimately define the "next generation" of terminal emulators. Imagine being able to open tabs directly on a remote host, reconnect without losing state, etc., all while using native UI (so Cmd+T to open a new tab, Cmd+F to search, etc.). A productivity game changer, which currently only iTerm2 users can fully enjoy.

Ptyxis (putting its backend in running containers), WezTerm (native handling of ssh sessions) and VSCode's terminal (starting a proprietary code-server binary and connecting to its TCP port) have achieved some of this functionality, but by design they need out-of-band mechanisms to handle the connections, ultimately limiting the scenarios they can support.

Meanwhile tmux -CC [0] and ht [1] are sending both their control channel and data channel over the opened terminal itself (in-band), making them flexible enough to support any configuration. Something complex like `ssh jumpbox -- ssh prod -- podman exec -it prod /bin/bash -- tmux -CC` should just work, as if everything was running on your local machine.
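To make that concrete, connecting through such a chain looks something like this (session name made up; iTerm2 is what interprets the -CC protocol and draws native tabs):

    $ ssh jumpbox -t -- ssh prod -t -- tmux -CC new-session -A -s work
    # -CC                    : control mode, spoken in-band over stdin/stdout
    # new-session -A -s work : attach to session "work", creating it if needed
    # Drop the connection, rerun the command, and everything is still there.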

[0] https://github.com/tmux/tmux/wiki/Control-Mode

[1] https://github.com/andyk/ht


> Given the remote-first container-based world we're heading towards

You might want to expand on who "we" are here, because to me it seems that only a very small section of developers want a "remote-first" experience, and most (if not all) of the people I speak with want software and tools that work local/offline-first, including our dear development environment.

If a tool requires an internet connection to work (as in this "remote-first container-based world" you mention), I don't think I or many of the developers I know would even give it a try, because we need to be able to use our tools regardless of whether we have a working internet connection or the service provider has downtime.

In fact, the only ones I know who are using a "remote-first" environment are developers who are forced to do so by their employer, and the list of complaints about it is not short when we meet over beers.


Good point, I also want things to work offline. By "remote" here I was also thinking of scenarios like:

- Connecting from Windows/macOS host to a local VM

- Connecting to a build server over LAN (I have a beefy PC but prefer to work from my couch)

Both work offline just fine, but from a tooling perspective you are connecting to a remote host over the network.


Not very small; most corporate non-Windows development uses containers. I'd bet the ratio is above 90%.

It's not that developers want a remote-first experience, but they face it. Containers in this regard behave like remote systems, even when run locally. A tool that helps juggle multiple remote contexts in a sane way can be very helpful. tmux, say, does a lot to help, but more is of course possible.


This is a trace from the BIOS; it is not uncommon to have them printed over the serial console. The BIOS is potentially based on EDK2 source code, in which case you can take a look here for the implementation of the trace-printing logic: https://github.com/tianocore/edk2/blob/9e6537469d4700d9d793e...

How does Vulkan Video differ from VA-API and VDPAU? Does it replace them, supplements them, or are these completely unrelated?


It basically does seem to replace them, albeit only for more modern codecs (which should be OK; I suspect nobody really cares about MPEG-2 acceleration anymore.) Like others are pointing out, it is cross-platform and vendor-neutral.

I am curious to see how this evolves over time. I'm not sure there are Vulkan Video implementations for things that are not GPUs yet; if we start to see them, this interface may eventually also wind up replacing v4l2-based codec acceleration. With increasingly robust support for Vulkan Video in FFmpeg, it seems likely that it will soon be the de facto way of decoding and encoding modern codecs with hardware acceleration on Linux, at least on GPUs.
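For anyone who wants to poke at the decode side, something like this should work in a recent FFmpeg built with Vulkan support (flags from memory, so double-check):

    $ ffmpeg -hwaccel vulkan -hwaccel_output_format vulkan -i input.mkv -f null -
    # -hwaccel vulkan               : decode via Vulkan Video
    # -hwaccel_output_format vulkan : keep decoded frames in GPU memory
    # -f null -                     : discard output, useful as a decode benchmark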


Does this essentially bring AMD video enc/dec up to scratch with Intel's Quick Sync?

That's long been one of my reasons for having a home server run on Intel, and if that moat is now crossed then there is basically no reason for me to go Intel in the future.


> Not sure if there are Vulkan Video implementations for things that are not GPUs yet

Qualcomm video acceleration is stateful, and as such doesn't fit with the design of the Vulkan Video API (which is stateless)


I'm afraid I don't know enough about hardware-accelerated video encoding/decoding to really understand what this implies. From what I gather, the Vulkan API gives you the ability to queue video encode/decode operations, which pertain to e.g. a frame of h264 video, passing in all of the required state. So I assume that at a low level, a driver would have to handle this by processing these commands, dispatching units of work to the relevant hardware units, flipping registers etc. as needed, and doing in software whatever can't be done in hardware. And I guess what you mean when you say Qualcomm's video acceleration is "stateful" is that you can't simply dispatch a single-frame encode job: the video encoding process is some stateful thing where the per-codec state is not all exposed and cannot be processed out of order, so there's no logical way to implement a Vulkan driver.

If so, that's a bummer.

I wonder if it might be possible to "cheat" by having a driver that tries to use stateful acceleration in the happy path, but falls back to software encoding/decoding when things go out of order or there are no more hardware resources. Probably not. At least, hopefully, future hardware designs will account for this. Until those existing devices are outside of their useful lifespan, though, it seems Vulkan Video will probably not be the one hardware video coding API to rule them all.


> And I guess what you mean when you say Qualcomm's video acceleration is "stateful", it means that you can't just simply dispatch a single-frame encode job

Yup it's exactly that


Article says:

> Great to see this milestone for better exposing this cross-vendor, multi-platform open video encode/decode API.

The others were designed separately by each vendor. https://wiki.archlinux.org/title/Hardware_video_acceleration feels like a decent enough, brief explainer.


Seems like an effort to put video en/decoding behind graphics drivers as a vendor-neutral approach instead of relying on VDPAU or VA-API. These efforts should apply equally to Windows and Android as well as Linux.


Vulkan Video isn't allowed to be enabled in Android Vulkan drivers, though perhaps that will change some day.


Too much new attack surface, or do you know what's likely holding it back?


Apparently Android's multimedia security system and driver model are incompatible with Vulkan Video. https://issuetracker.google.com/issues/293419320?pli=1


> or do you know what's likely holding it back

The lack of AMD GPUs in Android devices?


Gotcha, thank you, I misread the OP


Unless you're targeting flagship devices, OpenGL ES drivers are most likely more stable than Vulkan ones.

Hence OpenGL ES on Vulkan has been a thing since Android 14; it is the only way Google has found to force OEMs to up their game on Vulkan drivers.


To expand on this, the author is describing the so-called "Docker-out-of-Docker (DooD)" pattern, i.e. exposing Docker's Unix socket into the container. Since Docker was designed to work remotely (the CLI on a different machine than DOCKER_HOST), this works fine, but it essentially negates all isolation.

For many years now, all major container runtimes have supported nesting. Some make it easy (podman and runc just work), some hard (systemd-nspawn requires setting many flags to work nested). This is called "Docker-in-Docker (DinD)".
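Roughly, the two patterns look like this (image tags are the official Docker ones, IIRC; details vary by runtime):

    # DooD: mount the host's socket. The "nested" containers are actually
    # siblings managed by the host daemon, so there is no isolation.
    $ docker run -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps

    # DinD: run a completely separate daemon inside the container.
    $ docker run --privileged -d --name dind docker:dind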


FreeBSD has supported nesting of jails natively since version 8.0, which dates back to 2009.

I prefer FreeBSD to K8s.


The jQuery UI Datepicker widget[0] remains my favourite date picker implementation, even though it sadly never supported mobile phones that well.

Non-confusing design, live-reacting to user typing the date with keyboard, accessible, configurable, and offering 0-effort localisation in 60+ languages[1].

Is there anything like this in React world?

[0] https://jqueryui.com/datepicker/#dropdown-month-year

[1] https://github.com/jquery/jquery-ui/tree/main/ui/i18n


That date picker is really good.

> Is there anything like this in React world?

None in any world, to be honest.

It's amazing that most UI libraries go out of their way to make month and year selection as awkward and non-intuitive as possible.


It's as if today's UI designers can't imagine why someone would pick a date 20 years in the past.

I wonder if they ever had to enter their own date of birth on a form somewhere. Surely most of them are over 20 years old?


My guess is they've never built a birthday input, and possibly never had to use one.

Or rather their LLM has never.


The current top comment links to PrimeVue, and I feel like it solves month/year selection very similarly (except not with a dropdown): https://primevue.org/datepicker/

This is the first UI component lib I checked. I'm curious what's missing here in your opinion?


It's not obvious you can click month/year. Unclear how to back out if you think you made a mistake.

One of the better ones I've seen is https://vaadin.com/docs/latest/components/date-picker


Fair! Thanks

EDIT: ooh wow that Vaadin one is super nice indeed!


Unless you require something special, just a regular <input type="date"> or datetime-local is a good choice these days. There are week and time types as well.


It’s amazing how far you can get with just HTML and a gentle sprinkle of CSS these days.


The author set up a redirect based on the Referer header. You need to copy-paste the link, or (in some browsers) use the Open in New Tab option.
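An easy way to see it for yourself (URL and output are hypothetical; `-e` sets the Referer header):

    $ curl -sI -e "https://news.ycombinator.com/" https://example.com/post | head -1
    HTTP/1.1 302 Found
    $ curl -sI https://example.com/post | head -1      # no Referer, loads fine
    HTTP/1.1 200 OK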


Good for you; my experience with Jekyll is closer to OP's experience with Node. I have a big website that I built in 2014, with tons of custom plugins, that is now stuck on Jekyll 2.x and Ruby 2.x, and it has a ton of hidden C++ dependencies. The way I build it now is with a Dockerfile based on Ubuntu 18.04, as sketched below. I could probably update it given enough effort, but I was rather thinking of rewriting it in Astro.js or Next.js.
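Something like this (a sketch; the image name and paths are made up):

    # Dockerfile.jekyll pins the whole 2014-era toolchain on ubuntu:18.04
    $ docker build -t oldsite -f Dockerfile.jekyll .
    $ docker run --rm -v "$PWD:/site" -w /site oldsite jekyll build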


This is the issue I have with the "build vs buy (or import)" aspect of today's programming.

There are countless gems, libraries or packages out there that make your life easier and development so much faster.

But software (in my experience) always lives longer than you expect it to, so you need to be sure that your dependencies will be maintained for that lifetime (or have enough time to do the maintenance or plug in the replacements yourself).


If you're looking for a stable target you should not even consider Next.


Just avoid JavaScript frameworks altogether.


Yes indeed, that is the solution to modern IT problems - never update your Ubuntu 18 containers and you're set.

(Wish I was joking, but sadly I'm serious.)


> new solution will have only 70% of cups' features 15 years

Which sounds fine? Most people don't want LPT printer support; they want AirPrint and WSD to just work.


What percentage is "most" people? What about enterprise users with complex setups/requirements; will they be supported or out of luck? Typically you'll have print servers with centralized authentication, possibly logging/auditing/billing, and this might depend on "the" component they'll leave out in the new product because, well, most people don't care about it...


> What percentage is "most" people? What about enterprise users with complex setups/requirements; will they be supported or out of luck? Typically you'll have print servers with centralized authentication, possibly logging/auditing/billing, and this might depend on "the" component they'll leave out in the new product because, well, most people don't care about it...

But the old, complex cups doesn't go away if a new, sandboxed version is developed, so the people who want the complexities can evaluate whether the security trade-off is worth it, and use it anyway if so.


TIL about Element X. Is this app ready to use or not? The website says:

> Element X is the mobile messaging app that almost all Element users should now be using.

But the commenters here disagree.

Also, neither the Element app nor its App Store listing indicates there is another version I should switch to, so it looks like they don't want people to switch after all?

Honestly, this is infuriating. I keep my Matrix account just for that one friend who prefers using that platform, so I grit my teeth and suffer from bugs (lost media attachments, missing notifications), and now it turns out all of that could have maybe been avoided with this new app that they aren't telling us anything about? I am not going to read developer blogs for an app that I use once a month...

They could have gone the Google Hangouts route. Rename the existing app to Element Classic, put up the new one as Element.


The commenters are wrong (unless you consider threads or spaces to be absolutely critical to your current workflows).

> and now it turns out all of that could have maybe been avoided with this new app that they aren't telling us anything about?

Element X was only just declared ready for primetime in the last few weeks. It can't replace the classic app entirely yet because it doesn't have spaces/threads.

