Modern software development involves a ton of boilerplate and focus on tooling: how many hours do we waste on IDEs, text editors, compilers, build tools... whatever?
With APL or K, you don't really have a lot of that. Once someone is proficient, they can do a ton with just a tiny amount of code. Aaron Hsu has a bunch of videos online and comments on HN (user arcfide, I think) about the strengths of APL in general and how he built a compiler to run APL on the GPU and got it down to about a page of the alien glyphs. His point is that with APL you can eliminate huge amounts of things like helper functions and see the entire codebase at a more macro level. He literally codes in Windows Notepad, so a lot less time is wasted on learning, managing, and configuring all the tools in the ecosystem.
Supposedly there have been studies showing that people can be taught APL pretty easily if they go in with an open mind. I can code a bit in it after only playing around briefly. The symbols are a LOT more intuitive than you'd think (a lot of effort went into making them represent the operations they perform). The big problem is that professional devs don't want to get locked into a niche technology or admit there are other ways to do things that don't involve Java-like hell. This is less of a problem with finance workers using Kdb+ (a very fast array database that you access with the k or q languages) who get paid $$$$$$$ to write stock-ticker analysis software.
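For a flavor of how the glyphs map to operations: APL famously writes "average" as the three-glyph train +⌿÷≢, read as "sum divided by tally". Here's a pure-Python sketch of that same composition (the Python rendering is mine, just to show what each glyph does):

```python
# Each line maps one APL glyph onto its plain-Python equivalent.
def mean(xs):
    total = sum(xs)       # +⌿ : sum-reduce down the array
    count = len(xs)       # ≢  : tally (number of items)
    return total / count  # ÷  : divide
```

In APL the whole thing stays a single tacit expression, `avg ← +⌿÷≢`, which is the kind of density the comment above is getting at.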
As a CTO, I'm not sure what the lessons are except keep code more reasonable. Don't write stuff you don't need... etc. I'm guessing OP has gotten used to the expressive power of APL, the interactive nature of the software, and keeping things lightweight, and now sees a lot of modern software as overly bloated. It reminds me of a vendor that shipped us what should have been a page-long application (pretty simple data transformations), but instead it was something like 50 class files and all sorts of other junk. I was blown away by the inefficiency and extreme complexity of what we later rewrote ourselves as a short script with a few functions. Assuming it wasn't outright on purpose, I think certain industry tools encourage over-architecture.
I think the problem is that most "big data" problems aren't really that big, or the latency requirements aren't that stringent. So people are OK with much slower solutions they can throw mid-level Python devs and parallelized AWS compute at. With true time-series problems you do get many "big data" problems that you genuinely can't parallelize (think sequential event-based operations where the previous result is needed to decide what to do with the next event).
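To illustrate the kind of inherently sequential workload meant here, a minimal Python sketch (the trading rule and numbers are invented for illustration): each step's decision depends on the previous result, so the fold can't be split across workers the way an embarrassingly parallel job can.

```python
from itertools import accumulate

# Hypothetical stream of trade amounts (positive = deposit, negative = debit).
trades = [50, -30, -40, 20, -25]

def step(balance, trade):
    # State-dependent decision: reject any trade the balance can't cover.
    # Because this choice depends on the running balance, event N cannot be
    # processed until events 1..N-1 are done.
    return balance if balance + trade < 0 else balance + trade

balances = list(accumulate(trades, step, initial=0))
# balances -> [0, 50, 20, 20, 40, 15]  (the -40 trade is rejected at 20)
```

This is exactly the shape of computation where "throw more AWS nodes at it" doesn't help, and raw single-stream speed (the K/kdb+ pitch) does.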
With data frames you get some of the "close to the data" interactivity of an APL/J/K/Q stack. Of course, K is ~100x faster than Polars, which is itself ~10x faster than Pandas. Meanwhile most people are still using Pandas in a Jupyter notebook despite Polars being right there.
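A toy pure-Python rendition of that "close to the data" columnar style (the data is made up): whole-column operations compose into a single expression, which is the interactivity data frames and array languages share.

```python
# Hypothetical tick data as columns rather than rows of objects.
prices = [101.0, 99.5, 102.3, 98.7, 103.1]
sizes  = [200, 150, 300, 120, 250]

# Volume-weighted average price over trades priced above 100, written as
# whole-column expressions instead of a row-by-row loop:
kept = [(p, s) for p, s in zip(prices, sizes) if p > 100]
vwap = sum(p * s for p, s in kept) / sum(s for _, s in kept)
```

In q or Polars the same query stays one short expression over columns; the point is the shape of the thinking, not this particular implementation.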
You'd think that as people look at their AWS bills they'd be considering solutions that use 1/10th, 1/100th, or even 1/1000th the compute time... and yet.
If you mean "free" as in open source, there is J, which has its own built-in database. I'm assuming it's similar to k/kdb, but that's just a guess.
Learning an APL variant... not a free lunch. Takes a while and commitment to grasp.
Arguably, "crazy expensive" software that minimizes crazy expensive hardware is worth considering. Firms pay tens of times more for things like DataDog than they do for a KDB+ license.
Also, I found the argument that KDB+ devs are expensive laughable once I saw how much (the same, or even more) we started paying AWS/Python devs.
Crazy expensive may be affordable to firms like JP Morgan, but other industries just won't pay that when Postgres is free. It's not as good for the kind of analysis Kdb+ does IMO, but free is free and it's easier to just get a VM from IT.
I'm no expert, but they're all pretty darn expressive.
Personally, I'd love to work with Kdb+: you get a lightning-fast database, and you typically use the q language instead of k. Q gives you a lot of the power of both SQL and a general-purpose array language.
The Dyalog variant of APL does have support for common data formats like CSV and JSON, but I think it's all a lot more natural in Kdb+.
This might sound strange, but the most salient lesson I've gotten is that value lies 10% in the software and 90% in understanding the problem domain. My customers don't usually care about _how_ we solve a problem for them, they just want a solution. They're often willing to discuss nitty-gritty details, but their ability to just get on with the stuff they're good at scales directly with our ability to simply remove a problem from their life.
Much miscommunication and incidental complexity stems from inverting that relationship, IMHO. Treating the product, i.e. the software, as the valuable asset and curating it thusly encourages us to focus on issues that are less relevant. It's harder to freely explore the problem domain and update our understanding when the landscape changes.
How this boils down in my day-to-day is that I have started giving my devs much more agency, allowing them to make bigger mistakes and learn harder-earned lessons, as long as it's clear that they're engaging directly with the core problem. In nitty-gritty discussions, I make sure we never lose sight of the core business problem and try to keep us aligned on that core aspect.
It's a bit strange that simply writing a YAML parser could connect so directly with business management-level sentiments, but I went into the project knowing that the problem itself isn't information-theoretically massive and that APL should let me converge on some "set of equations" that distilled out its essential nature.
The process involved many rounds of completely throwing away my entire codebase and starting from scratch. Each time I found myself thinking about details not endemic to the problem, I would throw out the code and try again. It's psychologically easier to do that with 100 lines of APL than with the 10,000 line equivalent Python or C. Now everything is at a point where pretty much every character in the implementation externalizes some important piece of my understanding about YAML parsing.
That process contrasted starkly with the habits and culture at my work and got me thinking about how much time, energy, and emotion is spent on potential irrelevancies, which then led to thinking about where my company's true value lies and how we can keep that at the forefront.