I had the same problem on a Pixel 4a: it worked perfectly until I performed a factory reset, and immediately afterwards it refused to boot due to a "battery issue" (I don't remember the exact message). So I think the Pixel 4a is also affected by the "brick on factory reset" issue.
That's funny, I had the same issue. I always update then factory reset new phones after I buy them. The 4a was the only one that bricked after doing that. Returned it, and haven't really trusted the brand since.
Unexpected keyboard [^1] has been a game changer for me. As far as I know, it's the only full-size keyboard that features swiping towards the corners of keys for numbers and special characters instead of tapping and holding. As a bonus, it weighs in at under a megabyte.
Edit: another comment mentioned Thumb-Key [^2], which has a similar swipe feature.
Heliboard works really well, but for swiping you need to manually add Google's lib (which I did), and support for alternative language layouts is very basic (if that's something you need).
It's nice, but I've gotten too used to seeing the long-press keys in my previous keyboards, and to having a number row. Let's see how the layouts for other languages (mainly Japanese) shake out, though.
Interesting. Still no number row, but much better. And having voice input integrated means that I can remove the dedicated app. Still really just missing Japanese keyboard layout support.
The whole area is shockingly underdeveloped. For instance, there is no open-source keyboard that supports Pinyin Chinese input. Some very low-hanging fruit...
In France, I pay 27.48€ (~$29) per month for 1 Gbps down and 500 Mbps up (in theory; in practice it's more like 500-600 Mbps down and 250-300 Mbps up). This includes a TV option for 2€ (without it, it's 25.48€).
My provider is SFR (the only one giving access to optical fiber in the small village where I live).
EDIT: I'm talking about home internet. For mobile internet, I pay 19.99€/month for unlimited access (5G), but I haven't done a speedtest.
For comparison, I live in Washington State 50km away from Seattle, and I get 1200 Mbps down and 200 Mbps up (in practice more like 900/100) for $115/month. This is just pure Internet, no TV or anything else.
The ISP that I currently use - Comcast Xfinity - is also the only cable provider in this area. I can get some mobile and satellite options, but they are all more expensive for lower speed and higher latency.
It's very common in autoconf codebases because the idea is that you untar and then run `./configure ...` rather than `autoreconf -fi && ./configure ...`. But to do that either you have to commit `./configure` or you have to make a separate tarball (typically with `make dist`). I know because two projects I co-maintain do this.
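Roughly, the two sides of that workflow look like this (the project and tarball names here are made up):

```sh
# Maintainer side (autoconf/automake required):
autoreconf -fi    # regenerate ./configure and friends from configure.ac
./configure
make distcheck    # builds and verifies a release tarball (make dist also works)

# User side (no autotools needed, just a shell and a C toolchain):
tar xf example-1.0.tar.gz
cd example-1.0
./configure && make && make install
```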
It's common but it's plain wrong. A "release" should allow you to build the project without installing dependencies that are only there for compilation.
Autotools are not guaranteed to be installed on any system. For example, they aren't installed on the macOS runners of GitHub Actions.
It's also a UX issue: autoreconf failures are pretty common. If you don't make it easy for your users to actually use your project, you lose out on some of them.
> [...] A "release" should allow you to build the project without installing dependencies that are only there for compilation.
Built artifacts shouldn't require build-time dependencies to be installed, yes, but we're talking about source distributions. Including `./configure` is just a way of reducing the configuration-/build-time dependencies for the user.
> Autotools are not guaranteed to be installed on any system. [...]
Which is why this is common practice.
> It's common but it's plain wrong.
Strong word. I'm not sure it's "plain wrong". We could just require that users have autoconf installed in order to build from sources, or we could commit `./configure` whenever we make a release, or we could continue this approach. (For some royal we.)
But stopping this practice won't prevent backdoors. I think a lot of people in this thread are focusing on this as if it were the source of all evil, but it's really not.
Autotools are not backwards-compatible. Often only a specific version of autotools works. Only the generated configure is supposed to be portable.
It's also not the distribution model for an Autotools project. Project distributions originally included a handwritten configure script that users would run: the usual `./configure && make && make install`. Since those configure scripts became more and more complex to support diverse combinations of compiler and OS, the idea of Autotools was to let maintainers generate the script instead. Autoconf itself was never meant to be run by the end user: https://en.wikipedia.org/wiki/GNU_Autotools#Usage
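To give a sense of how little the maintainer writes versus what gets generated, here's a minimal sketch (project name and file contents are made up, and it assumes autoconf and automake are installed):

```sh
# Minimal maintainer-side input; autoreconf expands it into a large portable script.
cat > configure.ac <<'EOF'
AC_INIT([example], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
EOF

cat > Makefile.am <<'EOF'
bin_PROGRAMS = example
example_SOURCES = main.c
EOF

touch main.c     # placeholder source file
autoreconf -fi   # generates ./configure, Makefile.in, and the aux scripts
wc -l configure  # typically thousands of lines of generated POSIX shell
```

The generated `configure` only needs a POSIX shell (plus standard tools like sed) to run, which is the whole point of shipping it.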
For running autoreconf you need to have autotools installed and even then it can fail.
I have autotools installed and despite that autoreconf fails for me on the xz git repository.
The idea of having configure as a convoluted shell script is that it runs everywhere without any additional dependencies. If it isn't committed to the repository, you're burdening your consumers with installing compilation dependencies that aren't needed for running your software.
Yes... For running gcc you need to have gcc installed.
You don’t need gcc to run the software. It’s not burdening anyone that gcc was needed to build the software.
It’s very standard practice to have development dependencies. Why should autoconf be treated exceptionally?
If they fail despite being available, it's a sign of either a fragile tool or a badly maintained project. Both can be fixed without shipping a half-precompiled, half-source repo.
The configure script is not a compilation artifact.
The more steps you add to get the final product, the more errors are possible. It's much easier for you as the project developer to generate the script, so you should do it.
If it's easier for you to generate the binary, you should do it as well (reproducible binaries of course). That's why Windows binaries are often shipped. With Linux binaries this is much harder (even though there are solutions now). With macOS it depends on whether you have the newest CPU architecture or not.
> If it's easier for you to generate the binary, you should do it as well (reproducible binaries of course).
I think that's the crux of what you're saying. But consider that if Fedora, Debian, etc. accepted released, built artifacts from upstreams, then it would be even easier to introduce backdoors!
Fedora, Debian, Nix (all the distros) need to build from sources, preferably from sources taken from upstreams' version control repositories. Not that this would prevent backdoors (it wouldn't!), but it would at least make it easier to investigate later, since the sources would all be visible to the distros (assuming non-backdoored build tools).