> To be fair, the quality of software has dramatically dropped: apps now take 10 seconds to load, memory usage is maxed, games crash, and people need to reinstall their OS so frequently that Microsoft literally added a "reset PC" option.
Are you talking about the 90s or now? Because those were all at least as true then as now. Everything took forever. You needed more RAM every month. Everything crashed constantly. I had to reinstall Win98SE so many fucking times that I can still type F73WT-WHD3J-CD4VR-2GWKD-T38YD from memory.
The amount of suck in commercial software is constant. Companies always prioritize adding the shiny-looking features that sell software to rubes over improving things like memory use, response time, and general quality of life until the quality of life is actually bad enough to drive customers to another vendor, so it's perpetually bad enough to keep the average customer right on the edge of "oh fuck this, I'm switching to something else."
And even then, people were never against most of it. Scrollbar thumbs with grip stipple? Checkboxes that fill in with a roundrect rather than a checkmark? Buttons and tabs that have an inline ring-highlight "intent" color to them, akin to the fill color on modern Bootstrap theme buttons? These were all parts of the Luna theme as well — and people liked them. (And, IIRC, they were often sad that these parts got deactivated when reverting to the Windows Classic theme, and often asked if there was some hybrid theme that kept these.)
With Luna, I think people were mainly just reacting negatively to two things:
1. the start button being big and green and a weird blob shape; the start menu it opens having a huge, very rounded forehead and chin — and both of these having a certain "pre-baked custom PNG image 8-way sliced in Photoshop and drawn by parts" look that you'd see used on web pages in this era. This made the whole UI feel very "non-brutalist" — form not following function, the way it did in Windows Classic (where the theme was in part designed to optimize for as few line-draw GDI calls as possible.)
2. both the taskbar and window title bars being vertically thicker, and having a vaguely-plastic-looking sheen to them to "add dimensionality."
And my hypothesis is that, of these, it was mainly the "vertically thicker" taskbar+window decorations that upset so many people.
This was an era where many screens were still 1024x768, even as monitor sizes were growing; so "small was cool" [and legible!]. Websites baked their text into images using 8x5 pixel fonts; Linux users used tiny fonts and narrow themes in fvwm/blackbox/fluxbox, etc. In that era, a title bar stealing thirty whole pixels was almost blasphemy. (Same problem with the Office 2007 ribbon. Microsoft's visual designers must have been too far ahead of the curve in what resolutions their own graphics cards supported, I think.)
I think, if there was an alternate version of Luna that also shipped with XP, that just narrowed the taskbar and window caption bar to the Windows Classic dimensions... then Luna would have been universally acclaimed.
What urge? The urge to understand what the software you're about to build upon is doing? If so, uh... no. No thanks.
I've seen some proponents of these code-generation machines say things like "You don't check the output of your optimizing compiler, so why check the output of Claude/Devon/whatever?". The problem with this analogy is that the output from mainstream optimizing compilers is very nearly always correct. It may be notably worse than hand-generated output, but it's nearly never wrong. Not even the most rabid proponent will claim the same of today's output from these code-generation machines.
So, when these machines emit code, I will inevitably have to switch from "designing and implementing my software system" mode into "reading and understanding someone else's code" mode. Some folks may actually be able to do this context-shuffling quickly and easily. I am not one of those people. The studies from a while back that found folks take something like a quarter-hour to really get back into the groove after being interrupted during a technical task suggest that not that many folks are able to do this.
> Think in interfaces...
Like has been said already, you don't tend to get the right interface until you've attempted to use it with a bunch of client code. "Take a good, educated stab at it and refine it as the client implementations reveal problems in your design." is the way you're going to go for all but the most well-known problems. (And if your problem is that well-known, why are you writing more than a handful of lines solving that problem again? Why haven't you bundled up the solution to that problem in a library already?)
> Successive rendering, not one-shot.
Yes, like nearly all problem-solving, most programming is and always has been an iterative process. One rarely gets things right on the first try.
Dangle-trains are one of those things that appeal to me for unknown reasons; they just look so cool. But I am unable to really quantify the appeal, so here is my attempt.
Advantages:
- Keeps your electrical plant out of the weather.
- Allows the track to be out of the road while allowing street-level access to the train. This one is a bit iffy, as the dangle-train will usually be put above street traffic.
Disadvantages:
- Look at how much steel it takes to make that box beam.
- Everything is in tension, leading to a complicated structure to contain it; joints can be much simpler in compression.
Anyhow, as a dangle-train connoisseur, I leave you with two additional videos.
I agree with the last paragraph about doing this yourself. Humans have a tendency to take shortcuts while thinking. If you see something resembling what you expect for the end product, you will be much less critical of it. Looks/aesthetics matter a lot in finding problems in a piece of code you are reading. You can verify this by injecting bugs into your code changes and seeing if reviewers can find them.
On the other hand, when you have to write something yourself, you drop down to a slow, deliberate thinking state where you pay attention to details a lot more. This means that you will catch bugs you wouldn't otherwise think of. That's why people recommend writing toy versions of the tools you are using: writing it yourself teaches a lot better than just reading materials about it. This is related to how our cognition works.
The reason for this limit, at least on modern systems, is that select() has a fixed FD_SETSIZE limit (usually 1024), so an fd numbered higher than that would cause issues.
The correct solution is basically (a sketch of step 1 is below):
1. On startup, every process should set the soft limit to the hard limit.
2. Don't use select(), ever.
3. Before exec'ing any process, set the limit back down (in case the thing you exec uses select()).
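For step 1, a minimal sketch of what that can look like (assuming the libc crate on a Unix-like system; raise_fd_limit is just an illustrative name, not an established API):

```rust
// Minimal sketch: raise the soft RLIMIT_NOFILE to the hard limit at startup,
// returning the original limits so they can be restored before exec'ing a
// child that might still use select().
use libc::{getrlimit, rlimit, setrlimit, RLIMIT_NOFILE};

fn raise_fd_limit() -> std::io::Result<rlimit> {
    unsafe {
        let mut lim = rlimit { rlim_cur: 0, rlim_max: 0 };
        if getrlimit(RLIMIT_NOFILE, &mut lim) != 0 {
            return Err(std::io::Error::last_os_error());
        }
        let original = lim;          // keep the old limits around for step 3
        lim.rlim_cur = lim.rlim_max; // soft limit := hard limit
        if setrlimit(RLIMIT_NOFILE, &lim) != 0 {
            return Err(std::io::Error::last_os_error());
        }
        Ok(original)
    }
}
```

The saved original limits can then be handed back to setrlimit() right before exec, which is step 3.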
There are so many possible scripts that could be based on what has happened since 2022 in Ukraine. There would be no need for exaggeration of the heroism, bravery, and loss. The only issue might be that people would not believe it actually happened.
From Zelenskyy, a former comedic actor, refusing to flee, "I need ammunition, not a ride," to the defense of Snake Island, "Russian warship, go fuck yourself," to all the brave women who volunteered, the farmers towing abandoned Russian tanks, the constant drone attacks on residential and commercial areas, the 40,000 stolen Ukrainian children, this most recent attack on Russian air bases...
If this was a movie, I would probably think it was a bit much myself, but this all happened. We witnessed it.
Well, I'd think of it like being car-dependent. Sure, plenty of suburbanites know how to walk, they still have feet, but they live somewhere that's designed to only be practically traversable by car. While you've lived that lifestyle, you may have gained weight and lost muscle mass, or developed an intolerance for discomfort to a point where it poses real problems. If you never got a car, or let yourself adapt to life without one, you have to work backwards from that constraint. Likewise with the built environment around us; the cities many people under the age of 40 consider to be "good" are the ones that didn't demolish themselves in the name of highways and automobiles, in which a car only rarely presents what we'd think of as useful technology.
There are all kinds of trades that the car person and the non-car person makes for better or worse depending on the circumstance. The non-car person may miss out on a hobby, or not know why road trips are neat, but they don't have the massive physical and financial liabilities that come with them. The car person meanwhile—in addition to the aforementioned issues—might forget how to grocery shop in smaller quantities, or engage with people out in the world because they just go from point A to B in their private vessel, but they may theoretically engage in more distant varied activities that the non-car person would have to plan for further in advance.
Taking the analogy a step further, each party gradually sets different standards for themselves that push the two archetypes into diametrically opposed positions. The non-car owner's life doesn't just not depend on cars, but is often actively made worse by their presence. For the car person, the presence of people, especially those who don't use a car, gradually becomes over-stimulating; cyclists feel like an imposition, people walking around could attack at any moment, even other cars become the enemy. I once knew someone who'd spent his whole life commuting by car, and when he took a new job downtown, had to confront the reality that not only had he never taken the train, he'd become afraid of taking it.
In this sense, the rise of LLMs does remind me of the rise of frontend frameworks, bootcamps that started with React or React Native, high-level languages, and even things like having great internet; the only people who ask what happens in a less ideal case are the ones who've either dealt with those constraints first-hand, or have tried to simulate them. If you've never been to the countryside, or a forest, or a hotel, you might never consider how your product responds in a poor-connectivity environment, and these are the people who wind up getting lost on basic hiking trails, having assumed that their online map would produce relevant information and always be there.
Edit: To clarify, in the analogy, it's clear that cars are not intrinsically bad tools, and are worthwhile inventions; but had excitement for them been tempered during their rise in commodification and popularity, the feedback loops that ended up all but forcing people to use them in certain regions could have been broken more easily.
This entire section reads like, oddly, the reverse of the "special pleading" argument that I usually see from artists. Instead of "Oh, it's fine for other fields, but for my field it's a horrible plagiarism machine", it's the reverse: "Oh, it's a problem for those other fields, but for my field get over it, you shouldn't care about copyright anyway".
I'm all for eliminating copyright. The day I can ignore the license on every single piece of proprietary software as I see fit, I'll be all for saying that AIs should be able to do the same. What I will continue to complain about is the asymmetry: individual developers don't get to violate individual licenses, but oh, if we have an AI slurp up millions of codebases and ignore their licenses, that's fine.
No. No, it isn't. If you want to ignore copyright, abolish it for everyone. If it still applies to everyone else, it should still apply to AIs. No special exceptions for mass-scale Open Source license violations.
> "For art, music, and writing? I got nothing. I’m inclined to believe the skeptics in those fields."
You've already lost me, because I view programming as an art form. I would no more use AI to generate code than I would use it to paint my canvas.
I think the rest of the article is informative. It made me want to try some things. But it's written from the perspective of a CEO thinking all his developers are just salt miners; miners go into the cave and code comes out.
I think that's actually what my hangup is. It's the old adage of programmers simply "copying and pasting from stack overflow" but taken to the extreme. It's the reduction of my art into mindless labor.
>simple fact that you can now be fuzzy with the input you give a computer, and get something meaningful in return
I got into this profession precisely because I wanted to give precise instructions to a machine and get exactly what I want. Worth reading Dijkstra, who anticipated this, and the foolishness of it, half a century ago:
"Instead of regarding the obligation to use formal symbols as a burden, we should regard the convenience of using them as a privilege: thanks to them, school children can learn to do what in earlier days only genius could achieve. (This was evidently not understood by the author that wrote —in 1977— in the preface of a technical report that "even the standard symbols used for logical connectives have been avoided for the sake of clarity". The occurrence of that sentence suggests that the author's misunderstanding is not confined to him alone.) When all is said and told, the "naturalness" with which we use our native tongues boils down to the ease with which we can use them for making statements the nonsense of which is not obvious.[...]
It may be illuminating to try to imagine what would have happened if, right from the start our native tongue would have been the only vehicle for the input into and the output from our information processing equipment. My considered guess is that history would, in a sense, have repeated itself, and that computer science would consist mainly of the indeed black art how to bootstrap from there to a sufficiently well-defined formal system. We would need all the intellect in the world to get the interface narrow enough to be usable"
Welcome to prompt engineering and vibe coding in 2025, where you have to argue with your computer to produce a formal language that we invented in the first place so as not to have to argue in imprecise language.
>If you were trying and failing to use an LLM for code 6 months ago †, you’re not doing what most serious LLM-assisted coders are doing.
Here’s the thing from the skeptic perspective: This statement keeps getting made on a rolling basis. 6 months ago if I wasn’t using the life-changing, newest LLM at the time, I was also doing it wrong and being a luddite.
It creates a never ending treadmill of boy-who-cried-LLM. Why should I believe anything outlined in the article is transformative now when all the same vague claims about productivity increases were being made about the LLMs from 6 months ago which we now all agree are bad?
I don’t really know what would actually unseat this epistemic prior at this point for me.
In six months, I predict the author will again think the LLM products of 6 month ago (now) were actually not very useful and didn’t live up to the hype.
"It pains me to say this, but I think that differentiating humans from bots on the web is a lost cause."
Ah, but this isn't doing that. All this is doing is raising friction. Taking web pages from 0.00000001 cents per load to 0.001 cents is a huge shift at scale for people who just want to slurp up the world, yet for most human users, the cost is lost in the noise.
All this really does is bring the costs into some sort of alignment. Right now it is too cheap to access web pages that may be expensive to generate. Maybe the page has a lot of nontrivial calculations to run. Maybe the server is just overwhelmed by the sheer size of the scraping swarm and the resulting asymmetry of a huge corporation on one side and a $5/month server on the other. A proof-of-work system doesn't change the server's costs much but now if you want to scrape the entire site you're going to have to pay. You may not have to pay the site owner, but you will have to pay.
If you want to prevent bots from accessing a page that it really wants to access, that's another problem. But, that really is a different problem. The problem this solves is people using small amounts of resources to wholesale scrape entire sites that take a lot of resources to provide, and if implemented at scale, would pretty much solve that problem.
It's not a perfect solution, but no such thing is on the table anyhow. "Raising friction" doesn't mean that bots can't get past it. But it will mean they're going to have to be much more selective about what they do. Even the biggest server farms need to think twice about suddenly dedicating hundreds of times more resources to just doing proof-of-work.
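To make the cost asymmetry concrete, here is a toy sketch of the kind of puzzle these systems hand out (assuming the sha2 crate; real deployments differ in the details): finding a valid nonce costs roughly 2^difficulty hashes on the client, while the server verifies a submitted nonce with a single hash.

```rust
// Toy proof-of-work sketch: find a nonce so that SHA-256(challenge || nonce)
// starts with `difficulty` zero bits. Solving costs ~2^difficulty hashes;
// verifying a proposed nonce costs one hash.
use sha2::{Digest, Sha256};

fn leading_zero_bits(hash: &[u8]) -> u32 {
    let mut bits = 0;
    for &byte in hash {
        if byte == 0 {
            bits += 8;
        } else {
            bits += byte.leading_zeros();
            break;
        }
    }
    bits
}

fn solve(challenge: &[u8], difficulty: u32) -> u64 {
    let mut nonce: u64 = 0;
    loop {
        let mut input = challenge.to_vec();
        input.extend_from_slice(&nonce.to_le_bytes());
        if leading_zero_bits(Sha256::digest(&input).as_slice()) >= difficulty {
            return nonce; // negligible for one page view, ruinous at scraper scale
        }
        nonce += 1;
    }
}
```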
It's an interesting economic problem... the web's relationship to search engines has been fraying slowly but surely for decades now. Widespread deployment of this sort of technology is potentially a doom scenario for them, as well as AI. Is AI the harbinger of the scrapers extracting so much from the web that the web finally finds it economically efficient to strike back and try to normalize the relationship?
1. Banner ads made more money. This stopped being true a while ago, which is why newspapers all have annoying soft paywalls now.
2. People didn't have payment rails set up for e-commerce back then. Largely fixed now, at least for adults in the US.
3. Transactions have fixed processing costs that make anything <$1 too cheap to transact. Fixed with batching (e.g. buy $5 of credit and spend it over time).
4. Having to approve each micropurchase imposes a fixed mental transaction cost that outweighs the actual cost of the individual item. Difficult to solve ethically.
With the exception of, arguably[0], Patreon, all of these hurdles proved fatal to microtransactions as a means to sell web content. Games are an exception, but they solved the problem of mental transaction costs by drowning it in intensely unethical dark patterns protected by shittons of DRM[1]. You basically have to make someone press the spend button without thinking.
The way these proof-of-work systems are currently implemented, you're effectively taking away the buy button and just charging someone the moment they hit the page. This is ethically dubious, at least as ethically dubious as 'data caps[2]' in terms of how much affordance you give the user to manage their spending: none.
Furthermore, if we use a proof-of-work system that's shared with an actual cryptocurrency, so as to actually get payment from these hashes, then we have a new problem: ASICs. Cryptocurrencies have to be secured by a globally agreed-upon hash function, and changing that global consensus to a new hash function is very difficult. And those hashes have economic value. So it makes lots of sense to go build custom hardware just to crack hashes faster and claim more of the inflation schedule and on-chain fees.
If ASICs exist for a given hash function, then proof-of-work fails at both:
- Being an antispam system, since spammers will have better hardware than legitimate users[3]
- Being a billing system, since legitimate users won't be able to mine enough crypto to pay any economically viable amount of money
If you don't insist on using proof-of-work as billing, and only as antispam, then you can invent whatever tortured mess of a hash function is incompatible with commonly available mining ASICs. And since they don't have to be globally agreed-upon, everyone can use a different, incompatible hash function.
"Don't roll your own crypto" is usually good security advice, but in this case, we're not doing security, we're doing DRM. The same fundamental constants of computing that make stopping you from copying a movie off Netflix a fool's errand also make stopping scrapers theoretically impossible. The only reason why DRM works is because of the gap between theory and practice: technically unsophisticated actors can be stopped by theoretically dubious usages of cryptography. And boy howdy are LLM scrapers unsophisticated. But using the tried-and-true solutions means they don't have to be: they can just grab off-the-shelf solutions for cracking hashes and break whatever you use.
[0] At least until Apple cracked Patreon's kneecaps and made them drop support for any billing mode Apple's shitty commerce system couldn't handle.
[1] At the very least, you can't sell microtransaction items in games without criminalizing cheat devices that had previously been perfectly legal for offline use. Half the shit you sell in a cash shop is just what used to be a GameShark code.
[2] To be clear, the units in which Internet connections are sold should be kbps, not GB/mo. Every connection already has a bandwidth limit, so what ISPs are doing when they sell you a plan with a data cap is a bait and switch. Two caps means the lower cap is actually a link utilization cap, hidden behind a math problem.
[3] A similar problem has arisen in e-mail, where spammy domains have perfect DKIM/SPF, while good senders tend to not care about e-mail bureaucracy and thus look worse to antispam systems.
> But Celsius as a temperature scale is no more logical than Fahrenheit
Celsius is more logical:
(1) the endpoints of Celsius are the boiling and melting points of water (at standard atmospheric pressure). The lower endpoint of Fahrenheit is the lowest temperature Fahrenheit could achieve using a mixture of water, ice, and ammonium chloride. Using the freezing point of pure water is more logical than using the freezing point of an ammonium chloride solution: water is fundamental to all known life, while ammonium chloride solutions don't have the same significance. (And why ammonium chloride instead of sodium chloride or potassium chloride? Of the salts readily available to Fahrenheit, the ammonium chloride solution had the lowest freezing point.)
(2) Fahrenheit initially put 90 degrees between his two "endpoints" (ammonium chloride solution freezing point and human body temperature), then he increased it to 96. Celsius having 100 degrees between its endpoints is more logical than 90 or 96.
(3) while for both Celsius and Fahrenheit there is error in the definition of their endpoints (the nominal values differ from the real values, because our ability to measure these things accurately was less developed when each scale was originally devised, and some unintentional error crept in), the magnitude of that error is smaller for Celsius than for Fahrenheit
(4) nowadays, all temperature units are officially defined in terms of Kelvin, and Celsius has a simpler relation to Kelvin than Fahrenheit (purely additive versus requiring both addition and multiplication; see the sketch after this list)
(5) Celsius is the global standard for everyday (non-scientific) applications, not Fahrenheit, and it is more logical to use the global standard than a rarely used alternative whose advantages are highly debatable at best
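To illustrate point (4), a quick sketch of the two relations (Rust used purely for illustration):

```rust
// Celsius is a pure offset from Kelvin; Fahrenheit needs a scale factor too.
fn kelvin_from_celsius(c: f64) -> f64 {
    c + 273.15
}

fn kelvin_from_fahrenheit(f: f64) -> f64 {
    (f - 32.0) * 5.0 / 9.0 + 273.15
}

fn main() {
    println!("{}", kelvin_from_celsius(100.0));    // 373.15, water boils
    println!("{}", kelvin_from_fahrenheit(212.0)); // ~373.15, same point
}
```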
This is incorrect. Machine Learning is a term that refers to numerical as opposed to symbolic AI. ML is a subset of AI as is Symbolic / Logic / Rule based AI (think expert systems). These are all well established terms in the field. Neural Networks include deep learning and LLMs. Most AI has gone the way of ML lately because of the massive numerical processing capabilities available to those techniques.
One of the biggest problems I've noticed with younger computer users is that they have no idea where they saved a file. Having to drag it to a specific folder seems like it would be harder to forget in that case.
If it paid for people's lives and sustained itself, that sounds like a huge success to me. There's a part of me that thinks, maybe we'd all be better off if we set the bar for success of a business at "sustains the lives of the people who work there and itself is sustainable."
The one thing that sold me on Rust (going from C++) was that there is a single way errors are propagated: the Result type. No need to bother with exceptions, functions returning bool, functions returning 0 on success, functions returning 0 on error, functions returning -1 on error, functions returning negative errno on error, functions taking an optional pointer to bool to indicate error (optionally), functions taking a reference to std::error_code to set an error (and having an overload with the same name that throws an exception on error if you forget to pass the std::error_code)... I understand there's 30 years of history, but it still is annoying that even the standard library is not consistent (or striving for consistency).
Then you top it off with the `?` shortcut and the functional interface of Result, and suddenly error handling becomes fun and easy to deal with, rather than just "return false" with a "TODO: figure out error handling".
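For anyone who hasn't run into it, a minimal sketch of what that single error path looks like in practice (ConfigError and read_port are made-up names for illustration):

```rust
// Every fallible call returns a Result, and `?` forwards the error to the
// caller, converting it through the From impls along the way.
use std::fs;
use std::num::ParseIntError;

#[derive(Debug)]
enum ConfigError {
    Io(std::io::Error),
    Parse(ParseIntError),
}

impl From<std::io::Error> for ConfigError {
    fn from(e: std::io::Error) -> Self { ConfigError::Io(e) }
}

impl From<ParseIntError> for ConfigError {
    fn from(e: ParseIntError) -> Self { ConfigError::Parse(e) }
}

// Hypothetical helper: read a port number from a file.
fn read_port(path: &str) -> Result<u16, ConfigError> {
    let text = fs::read_to_string(path)?;   // io::Error converted via From
    let port = text.trim().parse::<u16>()?; // ParseIntError converted via From
    Ok(port)
}

fn main() {
    match read_port("port.txt") {
        Ok(p) => println!("port: {p}"),
        Err(e) => eprintln!("failed to read port: {e:?}"),
    }
}
```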
There's a difference between data transport and data hosting. Modern expectations of messengers seem to blur this line and it's better if it's not blurred.
Incidentally: The reason why they blur it is because of 2 network asymmetries prevalent since the 1990's that enforced a disempowering "all-clients-must-go-through-a-central-server model" of communications. Those 2 asymmetries are A) clients have lower bandwidth than servers and B) IPv4 address exhaustion and the need/insistence on NAT. It's definitely not practical to have a phone directly host the pictures posted in its group chats, but it would be awesome if the role of a messaging app's servers was one of caching instead of hosting.
In the beginning though: the very old IRC was clear on this; it was a transport only, and didn't host anything. Anything relating to message history was 100% a client responsibility.
And really I have stuck with that. My primary expectation with messaging apps is message transport. Syncing my message history on disparate devices is cool, and convenient, but honestly I don't really need it in a personal capacity if each client is remembering messages. I don't understand how having to be responsible for the management of my own data is "less control of my life"; it seems like more control. And ... I'm not sure I care about institutional entitlement to archive stuff that is intended to be totally personal.
I understand companies like to have group chats, and history may be more useful and convenient there, but that's why I'm not ever going to use Teams for personal purposes. But I'm not going to scroll back 10 years later on my messaging apps to view old family pictures. I'm going to have those saved somewhere.
Fun fact that was dredged up because the author mentions Australia: GPS points change. Their example coordinates give 6 decimal places, accurate to about 10-15 cm. Australia a few years back shifted all locations 1.8 m because of continental drift (they're moving north at ~7 cm/year). So even storing coordinates as a source of truth can be hazardous. We had to move several thousand points for a client when this happened.
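A rough back-of-the-envelope check on that precision figure (the ~111 km per degree of latitude is an approximation; longitude degrees shrink with latitude):

```rust
// The sixth decimal place of a latitude in degrees is on the order of 10 cm.
fn main() {
    let metres_per_degree_lat = 111_320.0_f64; // approximate; varies slightly
    println!("{:.3} m", 1e-6 * metres_per_degree_lat); // ~0.111 m
}
```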
> If that asset generates revenue for 120 years, then it’s slightly more valuable than an asset that generates revenue for 119 years, and considerably more valuable than an asset that generates revenue for 20 years.
Not so, because of net present value.
The return from investing in normal stocks is ~10%/year, which is to say ~670% over 20 years, because of compounding interest. Another way of saying this is that $1 in 20 years is worth ~$0.15 today. A dollar in 30 years is worth ~$0.05 today. A dollar in 40 years is worth ~$0.02 today. As a result, if a thing generates the same number of dollars every year, the net present value of the first 20 years is significantly more than the net present value of all the years from 20-120 combined, because money now or soon from now is worth so much more than money a long time from now. And that's assuming the revenue generated would be the same every year forever, when in practice it declines over time.
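A rough sketch of that arithmetic, assuming a flat $1/year of revenue and a 10% discount rate:

```rust
// Back-of-the-envelope: net present value of $1/year at a 10% discount rate,
// first 20 years versus the following 100 years.
fn npv(years: std::ops::RangeInclusive<u32>, rate: f64) -> f64 {
    years.map(|t| 1.0 / (1.0 + rate).powi(t as i32)).sum()
}

fn main() {
    let rate = 0.10;
    println!("years 1-20:   ${:.2}", npv(1..=20, rate));   // ~$8.51
    println!("years 21-120: ${:.2}", npv(21..=120, rate)); // ~$1.49
}
```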
The reason corporations lobby for copyright term extensions isn't that they care one bit about extended terms for new works. It's because they don't want the works from decades ago to enter the public domain now, and they're lobbying to make the terms longer retroactively. But all of those works were already created and the original terms were sufficient incentive to cause them to be.
Usenet, as far as I remember, used to be a fucking hell to maintain right. With each server having to basically mirror everything, it was a hog on bandwidth and storage, and most server software in its heyday was a hog on the filesystems of its day (you had to make sure you had plenty of inodes to spare).
The other day, I logged into Usenet using eternalseptember, and found out that it consisted of about 95% zombies sending spam you could recognize from the start of the millennium. On one hand, it made me feel pretty nostalgic. Yay, 9/11 conspiracy theories! Yay, more all-caps deranged Illuminati conspiracies! Yay, Nigerian princes! Yay, dick pills! And an occasional on-topic message which strangely felt out of place.
On the other hand, I felt like I was in a half-dark mall bereft of most of its tenants, where the only places left are an 85-year-old watch repair shop and a photocopy service at the other end of the floor. On still another hand, it turns out I haven't missed much by not being on Usenet, as all-caps deranged conspiracy shit is quite abundant on Facebook.
I would welcome a modern replacement for Usenet, but I feel like it would need a thorough redesign based on modern connectivity patterns and computing realities.
Buggy unsafe blocks can affect code anywhere (through Undefined Behavior, or breaking the API contract).
However, if you verify that the unsafe blocks are correct, and the safe API wrapping them rejects invalid inputs, then they won't be able to cause unsafety anywhere.
This does reduce how much code you need to review for memory safety issues. Once it's encapsulated in a safe API, the compiler ensures it can't be broken.
This encapsulation also prevents combinatorial explosion of complexity when multiple (unsafe) libraries interact.
I can take zlib-rs, and some multi-threaded job executor (also unsafe internally), but I don't need to specifically check how these two interact.
zlib-rs needs to ensure they use slices and lifetimes correctly, the threading library needs to ensure it uses correct lifetimes and type bounds, and then the compiler will check all interactions between these two libraries for me. That's like (M+N) complexity to deal with instead of (M*N).
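A toy illustration of that encapsulation (not actual zlib-rs code, just the shape of the idea): the unsafe block sits behind a bounds check, so no caller, and no combination with other libraries, can reach the UB path through the safe API.

```rust
// Hypothetical example of encapsulating unsafe behind a safe API: the unsafe
// block lives in one place, and the wrapper enforces the invariant (index in
// bounds) so callers can't trigger UB through it.
pub struct FixedBuf {
    data: Vec<u8>,
}

impl FixedBuf {
    pub fn new(len: usize) -> Self {
        FixedBuf { data: vec![0; len] }
    }

    /// Safe wrapper: checks the bound, then uses the unchecked access.
    pub fn get(&self, i: usize) -> Option<u8> {
        if i < self.data.len() {
            // SAFETY: `i` was just checked to be in bounds.
            Some(unsafe { *self.data.get_unchecked(i) })
        } else {
            None
        }
    }
}
```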
With huge respect, I'm a European SWE making a little more than 3k net (meaning that's what's left after all taxes).
I own a nice house in a countryside village that I bought recently (so at the current market price), 10 min walking distance from the train station. I can afford premium quality food, I have enough money (and time !) to go on vacation 4 to 5 weeks per year (not just holidays but going abroad as a tourist). I own two cars. I’ll have a retirement.
Life hasn't been kind to me over the last decade: I had to undergo a 100k+ surgery, and I now take a treatment costing about 150€/month. My grandmother had a stroke and is now living, hospitalized, under my dad's roof. I had a burnout and stayed at home for a year to recover. And you know what? None of this had any real impact on our finances. Everything health-related: zero impact.
Now everything is fine: my health is better, I still have strong savings, I still own my house, and my grandmother is well taken care of…
I would never exchange that for the extra 4k I could lose at any moment, without notice, because life happens.
I made the switch to Kotlin around 2018, after having used Java since 1995. Java is a choice these days, not a necessity. Kotlin is a drop-in replacement for Java in 100% of its core use cases. No exceptions that I know of. Doesn't matter whether you do Android or Spring Boot. It kind of does it all. And it's actually really good at dealing with creaky old Java APIs. Extension functions (one of the party tricks modern Java hasn't yet picked up from Kotlin) are amazing for adapting old code to be a bit less tedious to deal with.
You don't really lose anything (APIs, tools, etc.); but you gain a lot. I still use Spring. But I've now used it longer with Kotlin than with Java.
And the nice thing with Kotlin is that it is gaining momentum as its own ecosystem. I'm increasingly biased toward not touching old-school Java libraries. Those lock me into the JVM, and I like having the option of running things in wasm, in the browser, or on mobile. Our frontend is kotlin-js and it runs a lot of the same code we run in our Spring Boot server. Kotlin Multiplatform is nice. I've published several libraries on Github that compile for platforms that I don't even have access to. Supposedly my kt-search client (for opensearch and elasticsearch) should work on an Apple Watch. I've not gotten around to testing that just yet, and I don't see why you'd want that. But it definitely builds for it. I had one person some time ago trying it out on iOS; same thing (I'm an Android user).
Ecosystems are important. But it's also important to not get too hung up on them. I say that as somebody that made the bet on Java when there was essentially no such thing as a Java ecosystem. It was all new. Kind of slow and wonky. And there were a lot of things that didn't quite work right. But it had lots of people working on fixing all of those things. And that's why it's so dominant now. People are more important than the status quo.
Sometimes you just have to cut loose from the past and not go with something safe but very boring like Delphi, Visual Basic, Perl, and several other things that were quite popular at the time and didn't quite make it. They're still around and there's a half decent library ecosystem even probably. But let's just say none of those things are obvious things to pick for somebody doing something new that was born this century.
Go as an ecosystem is definitely large enough at this point that I would label it as low risk. Same with Rust. Neither is going anywhere and there are plenty of well motivated people working on keeping all that going. Same with Java. All valid reasons for using any of those. But nothing is set in stone in this industry. A lot of things people were using in the nineties did not age well. And it will be the same in another 30 years. Most of those old people that will never change retire at some point. Java projects are pretty depressing to work on these days for me. Lots of grumpy old people my age. I've had a few of those in the last few years. Not my idea of fun. The language is one thing but the people haven't aged well.
It's beyond wrong.
For example, at the core of plenty of numerical libraries is 30+ year-old Fortran code from netlib that works great.
It's just done.
It does what it's supposed to. It does not have meaningful bugs. There is no reason to change it.
Obviously, one can (and people do) rewrite it in another language to avoid having to have a fortran compiler around, or because they want to make some other tradeoff (usually performance).
But it's otherwise done. There is no need to make new releases of the code. It does what it is supposed to do.
That's what you want out of libraries past a certain point - not new features, you just want them to do what you want and not break you.
> Instead, think of your queries as super human friendly SQL.
I feel that comparison oversells things quite a lot.
The user is setting up a text document which resembles a question-and-response exchange, and executing a make-any-document-bigger algorithm.
So it's less querying for data and more like shaping a sleeping dream of two fictional characters in conversation, in the hopes that the dream will depict one character saying something superficially similar to mostly-vanished data.