I migrated most of my self-hosted services from multiple RPis (I've been using several of them for years) to a single cheap Intel N100 NUC that I purchased last year: 16GB RAM/512GB SSD for 156€, and I've been very pleased with it.
You lose some resilience when you do that: if the NUC fails, you lose everything, whereas if you distribute your services across multiple RPis, the failure of one RPi is not catastrophic.
venv + requirements.txt has worked for every single Python project I've made over the last two years (I'm new to Python). The only issue I had was using a newish Python version before a specific library had been released for it, but downgrading Python solved that.
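For anyone even newer than me, a minimal sketch of that whole workflow (these are the standard commands; the freeze step is optional):

    python -m venv .venv                 # create an isolated environment in ./.venv
    source .venv/bin/activate            # activate it (on Windows: .venv\Scripts\activate)
    pip install -r requirements.txt      # install the pinned dependencies
    pip freeze > requirements.txt        # re-pin exact versions after adding packages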
Being new to the ecosystem, I have no clue why people use Conda or why it matters. I tried it, but was left bewildered, not understanding the benefits.
The big thing to realise is that when Conda was first released it was the only packaging solution that truly treated Windows as a first-class citizen, and for a long time it was really the only way to easily install Python packages on Windows. This got it a huge following in the scientific community, where many people didn't have a solid programming/computer background and generally still ran Windows on their desktops.
Conda also not only manages your Python interpreter and Python libraries, it manages your entire dependency chain down to the C level in a cross-platform way. If a Python library is a wrapper around a C library, pip generally won't also install the C library; Conda (often) will. If you have two different projects that need two different versions of GDAL, or one that needs OpenBLAS and one that needs MKL, or two different versions of CUDA, then Conda (attempts to) solve that in a way that transparently works on Windows, Linux and macOS. With venv + requirements.txt you're out of luck and will have to fall back on doing everything in its own Docker container.
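To make that concrete, here's a sketch of an environment.yml that pins the non-Python pieces directly (the project name is made up; the gdal version and the libblas pin follow conda-forge conventions):

    name: geo-pipeline            # hypothetical project name
    channels:
      - conda-forge
    dependencies:
      - python=3.11
      - gdal=3.8                  # pulls in the GDAL C library plus its Python bindings
      - libblas=*=*mkl            # conda-forge idiom for selecting the MKL BLAS backend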
Conda also lets you mix private and public repos, as well as mirroring public packages on-prem, in a transparent way that is much smoother than pip, and it has tools for things like audit logging, fine-grained access control, package signing, and centralised controls and policy management.
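For example (the URLs here are made up), a .condarc can list internal channels ahead of the public ones, and channel order determines priority:

    channels:
      - https://conda.internal.example.com/private      # hypothetical private repo
      - https://conda.internal.example.com/conda-forge  # hypothetical on-prem mirror
      - conda-forge                                     # public fallback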
Conda also has support for managing multi-language projects. Does your Python project need Node.js installed to build the front-end? Conda can manage your Node.js install. Using R for some statistical analysis in part of your data pipeline? Conda will manage your R install. Using a Java library for something? Conda will make sure everybody has the right version of Java installed.
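A sketch of what that looks like in a single environment.yml (versions are illustrative, but nodejs, r-base and openjdk are all real conda-forge package names):

    name: polyglot-pipeline       # hypothetical project name
    channels:
      - conda-forge
    dependencies:
      - python=3.11
      - nodejs=20                 # front-end build toolchain
      - r-base=4.3                # R interpreter for the stats step
      - openjdk=17                # JVM for the Java library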
Also, it at least used to be common for people writing numeric and scientific libraries to release Conda packages first and only eventually publish to PyPI once the library was 'done' (which could very well be never). So if you wanted the latest cutting-edge packages in many fields, you needed Conda.
Now there is obviously a huge class of projects where none of these features are needed and mean nothing. If you don't need any of that, Conda is no longer the best answer. But there are still a lot of niche things Conda does better than any other tool.
> it manages your entire dependency chain down to the C level in a cross-platform way.
I love Conda, but this isn't true. You need to opt in to a bunch of optional export flags to get a portable yml file, and then it can often fail on different OSes/versions anyway.
I haven't done much of this since 2021 (I gave up and used containers instead), but it was a nightmare getting Windows/Mac builds to work correctly with Conda back then.
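If it helps anyone hitting the same wall, the flags I mean are on the export side (both are real conda subcommands/flags; the file names are arbitrary):

    # full export: pins exact builds, and often fails to solve on another OS
    conda env export > environment.lock.yml

    # history-only export: records only what you explicitly asked for; far more portable
    conda env export --from-history > environment.yml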
> it was a nightmare getting Windows/Mac builds to work correctly
I think both statements can be true. Yes, getting cross-platform Windows/Mac/Linux builds to work using Conda could definitely be a nightmare, as you say. At the same time, it was still easier with Conda than with any other tool I've tried.
Anecdata, but I found the very same issue (the Bermuda-triangle gap between radio buttons or checkboxes and their labels) in a project a few months ago.
It seemed a pretty big deal to me, especially because I always clicked on the gap and got frustrated and angry about it. So I reported it to the UX team managing the design system, and to the developers implementing it, and nobody really cared. Some people even tried to convince me this behaviour was OK (because other design systems worked that way too, or because they were planning to refactor it in the distant future and didn't want to spend time on it now).
I think the industry is now filled with people who just don't care, especially at big companies where, if it's not in a ticket, and the ticket isn't prioritized as critical, nobody cares. All they care about are metrics (test coverage, line count of a function, whatever). Pretty sad, actually.
Most software engineering has become the modern equivalent of assembly-line work, which brings about the concept of alienation from our work products, per Marx. It is all about productivity metrics, and nobody actually cares about unmeasured forms of quality and artisanship.
That's fine and I appreciate this progress, but right now performance ranks way below energy efficiency for me when choosing a laptop. My major question is: how long will the battery last between charges? I just don't want to worry about charging anymore when travelling for a few hours.
I’ll echo that. Laptops have been “fast enough” for a long time, which is why I can break out a Core 2 Duo laptop from 2008 and still get a lot done with it, but somehow despite far better process nodes and battery tech, battery life is on average worse today than it was in 2014, which is just goofy.
> but somehow despite far better process nodes and battery tech, battery life is on average worse today than it was in 2014, which is just goofy
Not trying to make excuses for shitty laptop vendors but the CPU power usage is just a fraction of a laptop's power envelope. The GPU, RAM, controllers, storage, and display (including driver circuitry) all eat up power. Their usage is exacerbated if the OS can't sleep or otherwise throttle those components during times of low demand.
Apple's pretty good about managing whole system power. It's gotten "easier" for them with Apple Silicon since everything is a SoC that they control. On x86 machines the vendor is just leveraging the components they get from Intel or AMD. The vendor's control is limited to what the drivers expose.
Oftentimes different components' behavior or stability can change across power modes. Vendors will just lock the device to high-power mode to avoid those pitfalls, at the cost of battery life for the user.
This is probably great to use, but by comparison it also highlights the beautiful simplicity of SQL. Doing something simple is hard, and SQL is so simple that I think most people find little value in abstractions built on it.
The X-COPY user interface[1] was very powerful; watching the little squares slowly fill with color while hearing the floppy drive painfully squeak and grind was an experience in itself.
Operating system standardization has killed this kind of creative interface. It's probably for the best, but I'm pretty sure something bold and colorful can still happen and make a difference to users.
> Operating system standardization has killed this kind of creative interface
It's coming back with Electron and other frameworks for apps in the browser. Of course these must also be responsive to multiple window sizes and input styles, whereas on home computers you could always rely on a fixed-size screen and standard mouse/keyboard/gamepad as inputs.
One scene where this may still happen is the PICO-8 community: everyone works on a single 'device' with fixed dimensions, resolution and capabilities. The demoscene for that one is also pretty impressive.
There are a few similar "fantasy consoles" with different sets of constraints. Oh, and there's the Playdate console, which also seems to have a pretty vibrant community - they're limited to greyscale graphics though.
> Operating system standardization has killed this kind of creative interface
Clearly there is a yearning among users — even technical users — for more creative, colorful, and interesting interfaces.
That's why we have multiple variations on sixels, dozens of libraries for drawing charts in the command line, hundreds of colors added to terminals, and thousands of tutorials for customizing command prompts.
Not really; there is this "limited access"[1] with content filtering and abuse monitoring: "Customers who wish to modify content filters and modify abuse monitoring after they have onboarded to the service are subject to additional scenario restrictions and are required to register here."