One issue with this is that arbitrary Python code can have arbitrary side-effects.
Your suggestion reminds me a lot of fine-grained reactivity like in SolidJS, which makes sense, since spreadsheets are basically reactive programming. There are some great articles by Ryan Carniato on the topic.
The side-effects thing comes in if a user puts in some side-effect in a dependent cell, which is equivalent to adding side-effects in a memo in reactive-speak.
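To make the failure mode concrete, here's a minimal, hypothetical sketch (not any real implementation) of a spreadsheet-style dependency graph. The point is that a side effect inside a dependent cell fires on every recomputation, not just when the user "meant" it to run:

```python
# Hypothetical minimal "spreadsheet" dependency graph, purely for illustration.
class Cell:
    def __init__(self, compute, deps=()):
        self.compute = compute
        self.deps = list(deps)
        self.value = None

    def recompute(self):
        # Re-evaluate dependencies first, then this cell.
        for dep in self.deps:
            dep.recompute()
        self.value = self.compute()

a = Cell(lambda: 2)

def derived():
    # Side effect buried in a dependent cell: it runs every time the graph recomputes.
    print("writing report to disk...")
    return a.value * 10

b = Cell(derived, deps=[a])
b.recompute()  # prints once
b.recompute()  # prints again, even though nothing changed
```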
The actual underlying models run in a lower-level language (not Python).
But with the right toolchain, you already can do this. You can use Pyodide to embed a Python interpreter in WASM, and if you set things up correctly you should be able to compile the underlying C/FORTRAN/whatever extensions to WASM as well and link them up.
TFA is compiling a subset of actual raw Python to WASM (no extensions). To be honest, I think applications for this are pretty niche. I don't think Python is a super ergonomic language once you strip out all the dynamism to let it compile down. But maybe someone can prove me wrong.
We implemented an in-browser Python editor/interpreter built on Pyodide over at Comet. Our users are data scientists who need to build custom visualizations quite often, and the most familiar language for most of them is Python.
One of the issues you'll run into is that Pyodide only works by default with packages that have pure Python wheels available. The team has developed support for some libraries with C dependencies (like scikit-learn, I believe), but frameworks like PyTorch are particularly thorny (see this issue: https://github.com/pyodide/pyodide/issues/1625 )
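For anyone who hasn't used it, the Python side of package loading inside Pyodide looks roughly like this (a sketch: whether a given package works depends on whether it ships a pure Python wheel or whether Pyodide provides a WASM build of it):

```python
# Runs inside Pyodide (e.g. via runPythonAsync); top-level await is available there.
import micropip

# Pure-Python wheels install straight from PyPI.
await micropip.install("attrs")

# Packages with compiled dependencies only work if the Pyodide distribution ships
# a WASM build (scikit-learn does, as far as I know); PyTorch doesn't, hence the issue above.
await micropip.install("scikit-learn")

import sklearn
print(sklearn.__version__)
```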
I think one element to recognise is that politics is the fundamental way we make collective decisions. It is extremely important, both in a rational sense and (with humans being social animals) emotional sense.
The article focuses on the irrational or pathological aspect of toxic politics, and that is definitely relevant. But it's also important to recognise that a strong emotional response is not irrational, even if the resulting thoughts are. The outcome of the political process _does_ seriously affect our lives.
I think that by acknowledging this, it's easier for us to notice and mindfully accept our emotional reactions to politics, see through the filter of our experience, and therefore make more informed decisions — and ultimately that's our only way out of a trapped prior.
> I think one element to recognise is that politics is the fundamental way we make collective decisions. It is extremely important, both in a rational sense and (with humans being social animals) emotional sense.
"the moment God crapped out the 3rd caveman a conspiracy was hatched" -- all decisions are eventually politics.
Quantum mechanics postulates that the state space of a system has the structure of a Hilbert space. To investigate the statistical properties of a collection of N particles, we can take the state space of each individual particle and form their tensor product to get the collection's state space. This is called a Fock space.
However, experimentally, we find that the Fock space of a system composed of N identical particles is actually smaller than this full tensor product. Specifically, the particles must be "indistinguishable"; this is formalized using permutation operators, which are defined as the natural action of re-ordering the tensor product. A composite system's states are then restricted to the intersection of the +1/-1-eigenspaces of all the permutation operators (more on the +1/-1 thing later).
For example, a 2-particle system where the single-particle state space is spanned by a basis {a, b} will have a tensored state space spanned by {aa, ab, ba, bb}. The permutation operator on this space exchanges particles 1 and 2, meaning Paa := aa, Pab := ba, Pba := ab, and Pbb := bb. The indistinguishability criterion is then that for any state x, Px = +/-x. For 2 particles, this is satisfied by aa, bb, and ab+ba for +1, and ab-ba only for -1.
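Spelled out (with ab shorthand for the tensor product a ⊗ b, and normalising), the eigenvalue statement for the two-particle case is:

```latex
P(a \otimes b) = b \otimes a, \qquad P(a \otimes a) = a \otimes a, \qquad
P\left(\frac{a \otimes b \pm b \otimes a}{\sqrt{2}}\right)
  = \pm\,\frac{a \otimes b \pm b \otimes a}{\sqrt{2}} .
```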
Now, if the criterion is indistinguishability, a natural question would be why we don't just take the +1 eigenspace. This is because the Hilbert space is actually too large; states that differ only by a (complex) scalar factor represent the same physical state. Though we work in the Hilbert space for the conveniences of linearity, the actual physical state space requires it to be projectively reduced. (Actually, it's even more complicated because of density matrices, but I'll skip over that.) Reducing to the -1 eigenspace also produces a self-consistent theory of indistinguishable particles, and it happens to correctly describe fermions, while the +1 eigenspace describes bosons.
The physical reason why this indistinguishability criterion applies is that constructing a multi-particle state from the single-particle states is actually an artifice. There really are no particles; they are just excitations of a common underlying quantum field, and that is the cause of these "quantum correlations". Particles do drop out of the QFT formalism, but only in certain limiting cases, which is why you do end up with experimentally verifiable theories from the Fock approach.
I never studied anyons in detail, so the following is just my high-level understanding. In the Fock approach, the only eigenvalues allowed for a permutation operator are +1 and -1. But by going down to the level of the quantum field, you can construct anyonic theories, where the equivalent of the permutation operator can be any arbitrary phase.
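In other words (and again, this is just my high-level picture), exchange is no longer restricted to a sign: exchanging two abelian anyons can pick up an arbitrary phase, and non-abelian anyons act with a unitary on a degenerate state space.

```latex
\psi \;\longmapsto\; e^{i\theta}\,\psi, \qquad
\theta = 0 \ \text{(bosons)}, \quad \theta = \pi \ \text{(fermions)}, \quad
\theta \ \text{arbitrary (abelian anyons)} .
```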
What's the physical relevance of these? I see them as explorations of some of those projective aspects of quantum mechanics, in a similar vein to the Aharonov-Bohm effect.
For some collection of non-abelian anyons, the total state of the system will depend on how those anyons have been moved around each other. Thus, the paths encode a computation and the state, as measured by anyon fusion, is the result of the computation.
It looks like polars uses PyO3, which provides Rust bindings for Python. Python's reference implementation is in C, so I imagine it interacts with that API [0] through FFI. Common Python extension modules (as these are called) are compiled as dynamically linked libraries, and the binaries (or build instructions) are included in Python wheels.
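You can poke at this from the Python side without touching any Rust. A quick sketch (assuming polars is installed locally):

```python
import importlib.machinery
import pathlib
import polars

# The filename suffixes CPython accepts for compiled extension modules on this platform,
# e.g. ['.cpython-312-x86_64-linux-gnu.so', '.abi3.so', '.so']
print(importlib.machinery.EXTENSION_SUFFIXES)

# List the compiled shared libraries the installed polars wheel actually ships.
pkg_dir = pathlib.Path(polars.__file__).parent
for path in pkg_dir.rglob("*"):
    if path.suffix in {".so", ".pyd"}:
        print(path.relative_to(pkg_dir))
```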
I'm currently early in my career and "the software guy" in a non-software team and role, but I'm looking to move into a more engineering direction. You've pretty much got my dream next job at the moment — if you don't mind me asking, how did you manage to find your role, especially being "still pretty Jr."?
The things I did to get here are honestly kind of stupid. I started out at a defense contractor after graduating and left in the first six months because all the software devs were jumping ship. Went to a small business defense contractor (yep that's a thing) and learned to build web apps with React and Django. Then the pace of business slowed so after about 18 months I got on the Leetcode grind and got into a FAANG. Realized I hated it, so I quit after about 9 months with no job lined up.
While unemployed I convinced myself I was going to get a job in robotics (I actually got pretty close, I had 3 final level interviews with robotics companies), but the job market went to shit pretty much the exact day I quit my job lol. I spent about 6 months just learning ROS, Inverse Kinematics, math for robotics, gradient descent and optimization, localization, path planning, mapping etc. I taught at a game development summer camp for a month and a half, that was awesome. Working with kids is always a blast. Also learned Rust and built a prototype for a multiplayer browser-based coding game I had been thinking about for a while. It was an excuse to make a full stack application with some fun infrastructure stuff.
The backend is no longer running, but originally users could see their territory on the galaxy grow as their code won battles for them.
For the current role, I really just got lucky. The previous engineer was on his way out for non-job related reasons. He had read a lot of the books I had (Code Complete, Domain Driven Design) and I think we just connected over shared interests and intellectual curiosity.
I think that in the modern day, so many people are really just in this space for the paycheck-- and that's okay! Everyone needs to make a living. But I think that if you have that intellectual curiosity and like making stuff, people will see that and get excited. It ends up being a blessing and a curse.
I have failed interviews because of honesty: "I would Google the names of books and read up on that subject", or "I think if I was doing CSS then I would be in the wrong role" (I realize how douchey that sounds, but I just was not meant to design things; I have tried). But I have also gone further in interviews than I should have because I was really engrossed in a particular problem like path planning or inverse kinematics and was able to talk about things in plain terms.
I think it's easier to learn things quickly if they're something you're actually interested in; it becomes effortless. Basically I just try to do that so I can learn optimally, and then I try to get lucky.
EDIT: Oh, I just thought of more good advice. Find senior devs to learn from. They can be kind of grumpy in their online presence, but they help you avoid so many tar pits. I am in a Discord channel with a handful of senior engineers. The best way to get feedback is to naively say "I'm going to do X"; they will immediately let you know why X is a bad idea. A lot of their advice boils down to KISS and using languages with strong typing.
I did this myself for a good 15 years or so, but eventually with a family, money became a bit more of a priority, and it's hard to get a good job if all you've worked at is small shops. Any next role in a larger tech company will likely be a downgrade until you can prove yourself out, which of course you may not be able to because things are so different, and motivation will run low because you're being tasked with all the stuff that caused you to leave big tech in the first place. It can be quite miserable to be grouped with a bunch of kids with 3-5 YOE that have no idea how to build something from scratch, and they're outperforming you because they know the system.
In my case it took a good five years and a couple job hops to rebalance. But eventually you get back to a reasonable tech leadership role and back to making some of the bigger decisions to help make the junior devs' lives less miserable.
No regrets, but the five years it takes to rebalance can be pretty hard.
I think that my work is honestly the most important factor in my happiness. I spend 8 hours a day at work (probably for the rest of my life), so it's going to be the thing that impacts me the most psychologically.
After realizing that, I decided I'd try as hard as I possibly could to never have to work at a job that I didn't like. I already didn't want kids so that part is easy. The other part of the equation is saving lots of money. I'm not an ascetic by any means, but I live well below my means on a SWE salary which means I can save quite a bit of money each year.
I also recognize that not wanting to go corporate severely limits my options down the line. But capitalism is all about making money for other people. If I can make someone a lot of money, they're not going to care whether I have the chops to stand up a Kubernetes cluster or write a Next.js app or whatever (I hope).
I don't think I'm super smart, I'd say I'm pretty average for this line of work. But I reckon that most SWEs are focused on learning new technologies to get to their next job, or are overly concerned with technical problems. I like to think that I am pragmatic enough about only doing things that are going to deliver business value to make up for being average in smarts.
Anyways, there's not really a point to this rant. These are just some thoughts I have had about optimizing my career for my own happiness, and how I hope I can stay a hot commodity even though I hate working in the cloud and my software skills aren't bleeding edge.
I expect _global_ here refers to the language-level semantics, not internal interpreter semantics.
Python's module system results in no implicit modifications to Python-level state between modules — at least, for a well-behaved module. Since Python is so dynamic, you can of course jerry-rig all sorts of global consequences if you really want to. But without dynamic magic, all state is namespaced to the imported module, and must be explicitly given names within the importer, with no global consequences for parallel modules.
In your os.getcwd() example, the state is scoped to the importer module, so it's not an example of global interpreter state.
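A generic illustration of the namespacing claim (the upthread example isn't reproduced here, so the file names below are made up):

```python
# helpers.py (hypothetical)
import os                      # binds the name `os` in helpers' namespace only
startup_dir = os.getcwd()      # module-level state, held inside helpers

# main.py (hypothetical)
import helpers
print(helpers.startup_dir)     # reachable only through the name you imported it under
# print(startup_dir)           # NameError: nothing leaked into main's namespace
# print(os.getcwd())           # NameError too: helpers' import of os didn't leak either
```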
I don't know what Ruby's module system looks like, so I don't know for sure if that is the point the GP is making. It seems to me that you are in agreement and just disagree about the meaning of "global" in this context.
Ruby's modules are namespaces (well, objects that are deeply similar to class objects, but not identical), but Ruby files aren't modules. Files are executed in the global environment and may (or may not) define or redefine modules (modules are mutable), and any such modules may or may not have any relationship to the file's name and relative path.
But by this interpretation, the description of #include is incorrect, and `require` wouldn't be analogous to `#include`. That #included code appears at the highest scope in a file is a coding convention, not a fact about the preprocessor. That's just the place where people like to put their #include directives.
The reason they do that, of course, is that they want to be able to access #included code from the highest scope in their file. But once that's the requirement, we're back to saying that this is the goal of all module systems, and all of them will modify the global namespace in this way.
> all state is namespaced to the imported module, and must be explicitly given names within the importer, with no global consequences for parallel modules.
This is just false; state appears within the interpreter in the form of registering the imported module under the provided name. You are free to register a name that's already in use, in which case you will lose the ability to refer to the module that was registered with the same name earlier. Becoming unreachable is ordinarily considered a "consequence"; you can find people complaining that this has happened to them.
Why does the problem arise? Because importing means making changes to the state of the interpreter.
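Concretely (a small example; the exact failure people hit varies, but this is the shape of it):

```python
import sys
import json

print("json" in sys.modules)   # True: the import mutated interpreter-wide state

json = {"not": "the module"}   # reusing the name in this namespace...
# json.dumps({})               # ...would now be an AttributeError; the module is unreachable here

import importlib
json = importlib.import_module("json")   # re-binding works; sys.modules never lost the module
```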
I hope you will forgive me for this wall of text, but this is a topic that is quite close to my heart and that I've gone back and forth on many times before settling on my current perspective.
I appreciate that especially for mathematicians and programmers, making a clean distinction between a function and its evaluation is a key conceptual point, and Leibniz notation obscures this fact. However, there are good reasons why physicists use Leibniz notation, and this answer really glosses over that.
The reason is that the particularities of the mathematical structures used to model a physical problem matter a lot less than the relationships between the actual underlying physical quantities. And there can be a lot of them. The answer evokes thermodynamics, and I couldn't think of a better example. Are we really to introduce a distinct symbol for _every_ possible functional relation between _every_ state variable? If I have T = f(P, V) and P = g(T, S), do I need to remember exactly which argument sits in which position of each function before I can write down "how P varies with T when S is fixed"?
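To make the contrast concrete: in the thermodynamic convention the held-fixed variable is written explicitly as a subscript, while a function-slot notation (one common way of writing it, not necessarily the article's) forces you to remember which slot is which.

```latex
\left(\frac{\partial P}{\partial T}\right)_{S}
\qquad\text{versus}\qquad
(\partial_{1} g)(T, S), \quad\text{where } P = g(T, S) .
```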
Leibniz notation, although formally tricksy, is just the best tool for communicating the _intent_ of a physical relationship without getting bogged down in the mathematical detail. The purpose of an equation is to express that relationship to the reader. Think of it like code — is it so bad if my code does a bit of magic behind the scenes to allow for a clearer reading, even if the semantics aren't immediately obvious? Well, ultimately, it depends on the situation, and a balance must be reached. I don't believe that expressing everything the way TFA suggests is striking the right balance.
Notational abuse happens all the time in physics, and this is certainly not the most egregious example. Just compare it to the path integral. It's easy to assume that this is because of a lack of sophistication or rigour by physicists. (Certainly I did throughout my physics education, being more mathematically or pedantically inclined.) But it's a simplistic view.
Now, while I'll defend the usage of these sort of unrigorous conventions even if they are strictly speaking meaningless mathematically, what I won't defend is the slapdash approach that is often used to _teach_ partial derivatives to physicists. Some exposure to concepts like distinguishing real variables/quantities from functions is needed, or, as TFA does mention at the end, the student won't be able to unpack the notational convenience into clear semantics, which can lead to unclear reasoning. I used to share the views of the author for a period when first introduced to Leibniz partial derivative notation in my first thermodynamics course, and, probably like them, found it to be totally incomprehensible symbol soup. But for myself, I see now that it was mostly a failure of teaching rather than a failure of the notation itself.
I'll add one last thought. There is a degree of "primitive obsession" at work here, trying to fit everything into positional functions and real numbers. I've thought that a formalism that better reflects the structure of a "physical quantity" (as opposed to treating them as plain real numbers) may help bridge the gap between rigour and conceptual convenience. The tools are really already there. We need two key concepts: first, borrow from programming the idea of keyword arguments (there is a book, whose name I sadly can't remember, that does as much to formalize Einstein notation for tensors in a coordinate-free way); second, model quantities not as real numbers but as differentiable homomorphisms from a state space (modeled as a manifold, coordinate-free) to the reals. This is how physicists already think about it; it only needs to be formalised.
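A toy version of the first idea in code rather than maths (everything here is made up purely for illustration): treat a quantity as a function of named state coordinates, and make the held-fixed variables part of the derivative's specification rather than an argument position.

```python
# Hypothetical sketch: quantities as functions of *named* state coordinates.
def P(*, T, S):
    # some made-up equation of state, written against names rather than positions
    return T ** 2 / (1.0 + S)

def partial(quantity, wrt, holding, at, h=1e-6):
    """Numerical d(quantity)/d(wrt) with the named variables in `holding` kept fixed."""
    up = dict(at, **{wrt: at[wrt] + h})
    down = dict(at, **{wrt: at[wrt] - h})
    assert all(up[k] == at[k] for k in holding)  # the held variables really are unchanged
    return (quantity(**up) - quantity(**down)) / (2 * h)

# (dP/dT)_S at a particular state point
print(partial(P, wrt="T", holding=["S"], at={"T": 300.0, "S": 1.5}))
```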
I work in an R&D environment with a lot of people from scientific backgrounds who have picked up some programming but aren't software people at heart. I couldn't agree more with your assessment, and I say that without any disrespect to their competence. (Though, perhaps with some frustration for having to deal with bad code!)
As ever, the best work comes when you're able to have a tight collaboration between a domain expert and a maintainability-minded person. This requires humility from both: the expert must see that writing good software is valuable and not an afterthought, and the developer must appreciate that the expert knows more about what's relevant or important than them.
> As ever, the best work comes when you're able to have a tight collaboration between a domain expert and a maintainability-minded person. This requires humility from both: the expert must see that writing good software is valuable and not an afterthought, and the developer must appreciate that the expert knows more about what's relevant or important than them.
I do work in such an environment (though in some industry, and not in academia).
An important problem, in my opinion, is that many "software-minded people" have a very different way of using a computer than typical users, and are always learning and thinking about new things, while the typical user is much less willing to be permanently learning (both in their subject-matter area and with computers).
So the differences in mindset and computer usage are, in my opinion, much larger than your post suggests. What you list are, in my experience, differences that are much easier to resolve and, if both sides are open, not really a problem in practice.