Oh, it's a dot. Dots, diamonds, the absence of an operator: anything is multiplication, it seems. While this comment might look like a paragraph, it's actually a lot of maths.
Maybe it was just a fancy dot. But it looked like ◆ on the page.
> That never happened
Bit harsh; I don't see the need for gaslighting. Sure, I might be losing my mind, but I specifically remember it because it took me so long to find a symbol online that matched it.
A scary (if not particularly original) thought: if people become utterly reliant on LLMs and no longer embrace any new language (or similar) for which there is insufficient LLM training data, new languages will no longer be created.
If languages stop being created, it will be because there won't be a need for them. That's not necessarily a bad thing.
Think of programming languages as you currently think of CPU ISAs. We only need so many of those. And at this point, machine-instruction architecture has diverged so far from traditional ISAs that it no longer gets called that. Instead of x86 and ARM and RISC-V we talk about PTX and SASS and RDNA. Or rather, hardly anyone talks about them, because the interesting stuff happens at a higher level of abstraction.
Possible, but I think unlikely. New languages already face this uphill battle because they don't yet have a community to do Q&A the way entrenched languages do; their support is the documentation, the source code of implementations, and whatever dedicated userbase they have as a seed for a future community. People are currently utterly reliant on community-based support like StackOverflow, and new languages continue to be born.
Why do you say it's practically impossible to motivate matrix multiplication? The motivation is that it represents composition of linear functions, exactly as you go on to mention.
It's a disservice to anyone to tell them "Well, that's the way it is" instead of telling them from the start "Look, these represent linear functions. And look, this is how they compose".
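For what it's worth, here's a tiny sketch of that motivation in NumPy (the matrices are arbitrary, just for illustration): the row-times-column rule is exactly what makes "apply B, then A" equal "apply AB".

```python
# A minimal sketch: composing two linear functions is the same as
# multiplying their matrices. The matrices here are made up.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # f(x) = A @ x
B = np.array([[0.0, 1.0],
              [5.0, 6.0]])   # g(x) = B @ x

x = np.array([7.0, -2.0])

composed = A @ (B @ x)       # f(g(x)): apply the two linear functions in turn
product  = (A @ B) @ x       # (AB) x:  one linear function, the matrix product

assert np.allclose(composed, product)
```

Working out why that assert holds for every x is exactly the derivation of the row-times-column rule.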
Sure, that's a way to approach it. All you have to do is stay interested in "linear functions" long enough to get there. It's totally possible -- I got there, and so did many many many other people (arguably everyone who has applied mathematics to almost any problem has).
But when I was learning linear algebra all I could think was "who cares about linear functions? It's the simplest, dumbest kind of function. In fact, in one dimension it's just multiplication -- that's the only linear function, and the class of scalar linear functions is completely specified by the factor that you multiply by". I stuck with it because that was what the course taught, and they wouldn't teach me multidimensional calculus without making me learn this stuff first, but it was months and years later when I suddenly found that linear functions were everywhere and I somehow magically had the tools and the knowledge to do stuff with them.
Yeah, concepts can make a student reject them with passion.
I remember in a differential geometry course, when we reached "curves on surfaces", I thought "what stupidity! what are the odds a curve lies exactly on a surface?"
Linear functions are the ones that we can actually wrap our heads around (maybe), and the big trick we have to understand nonlinear problems is to use calculus to be able to understand them in terms of linear ones again. Problems that can't be made linear tend to be exceptionally difficult, so basically any topic you want to learn is going to be calculus+linear algebra because everything else is too hard.
The real payoff though is after you do a deep dive and convince yourself there's plenty of theory and all of these interesting examples and then you learn about SVD or spectral theorems and that when you look at things correctly, you see they act independently in each dimension by... just multiplication by a single number. Unclear whether to be overwhelmed or underwhelmed by the revelation. Or perhaps a superposition.
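To make that "one number per dimension" picture concrete, here's a throwaway NumPy check (my own sketch; the matrix is random and nothing special):

```python
# In the bases the SVD picks out, the map acts on each direction
# independently, by multiplication by a single number (the singular value).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # an arbitrary 4x3 matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)
V = Vt.T

for i in range(len(s)):
    # A sends the i-th right singular vector to s[i] times the i-th left one.
    assert np.allclose(A @ V[:, i], s[i] * U[:, i])
```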
> But when I was learning linear algebra all I could think was "who cares about linear functions? It's the simplest, dumbest kind of function. In fact, in one dimension it's just multiplication -- that's the only linear function, and the class of scalar linear functions is completely specified by the factor that you multiply by".
This seems to make it good motivation for an intellectually curious student—"linear functions are the simplest, dumbest kind of function, and yet they still teach us this new and exotic kind of multiplication." That's not how I learned it (I was the kind of obedient student who was interested in a mathematical definition because I was told that I should be), but I can't imagine that I wouldn't have been intrigued by such a presentation!
None of what I mentioned is revolutionary at the technological level - it's all existing techniques used to offer services to a large population. The technological revolution is realized only when enough people use the technology and society changes along with it.
I agree, but under the current administration, the FTC isn't going to do anything to impede a megacorporation's profits. We're fucked, at least for the time being.
The election won by the billionaire who was prominently campaigned for by the richest megabillionaire in the world proves that money plays little role in politics.
Odd to use Berlekamp-Massey to recover a linear recurrence, when Cayley-Hamilton already directly gives you a linear recurrence whose characteristic polynomial is that of the matrix.
But to get that polynomial you need to take the determinant of A - λI, which runs in n^3. Next question, then: why doesn't this Berlekamp-Massey method effectively give you determinants in n^2?
I think it could generate the minimal polynomial instead. Though it is curious that this would still make it faster for almost all matrices, just not guaranteed to be correct.
Note that the article describes this Berlekamp-Massey approach as involving a step of complexity on the order of EV, which is V^3 in the worst case. So this is only beneficial for sparse matrices. It does seem like Berlekamp-Massey is used to efficiently (though without guarantees) compute determinants for sparse matrices, as described at https://en.wikipedia.org/wiki/Block_Wiedemann_algorithm
Any recurrence that holds on the matrix also holds on each individual element (and vice versa, in that a recurrence holds on the matrix just in case it holds on every individual element).
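To make both of those points concrete, here's a small NumPy sketch (my own, with an arbitrary random matrix): the characteristic polynomial gives, via Cayley-Hamilton, a recurrence satisfied by the powers of A, and the same recurrence then holds for the scalar sequence formed by any single entry.

```python
# Check that the characteristic-polynomial recurrence holds for the matrix
# powers (Cayley-Hamilton) and, entry by entry, for a single element of them.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))   # arbitrary example matrix
n = A.shape[0]

coeffs = np.poly(A)   # characteristic polynomial, highest degree first: [1, c3, c2, c1, c0]

powers = [np.linalg.matrix_power(A, k) for k in range(2 * n)]

# Cayley-Hamilton: A^4 + c3*A^3 + c2*A^2 + c1*A + c0*I = 0
p_of_A = sum(c * powers[n - i] for i, c in enumerate(coeffs))
assert np.allclose(p_of_A, 0)

# The same coefficients give a length-n linear recurrence on the powers,
# hence on the scalar sequence of any fixed entry, e.g. the (0, 0) entry:
s = [P[0, 0] for P in powers]
for k in range(n, len(s)):
    predicted = -sum(coeffs[i] * s[k - i] for i in range(1, n + 1))
    assert np.isclose(s[k], predicted)
```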
Your description here does not quite match your linked code, in that it is not that the N-th pack contains integers spaced out by N. Rather, packs on the N-th row contain integers spaced out by N. For example, the third pack does not contain "every third integer", but rather draws alternating integers just like the second pack, because it is on the second row. The second pack (first cell of the second row) contains {101, 103, 105, ..., 299} and the third pack (second cell of the second row) contains {102, 104, 106, ..., 300}.
My one quibble with the comment I linked is about asymptotics. By the Prime Number Theorem, the density of black squares should asymptotically approach zero and the density of red squares should approach 100% (including along the left diagonal, which is entirely black in the displayed window, and including the loss of the regular appearance of rows that are entirely black except for their last cell). These black-line patterns in the displayed window are both small-number phenomena, caused by (1 - 1/ln(R))^100 being nearly zero for small R; that stops and then goes the other way for large R.
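A quick numeric sanity check of that heuristic (my own back-of-the-envelope; the R values are arbitrary):

```python
# Near R, a "random" integer is prime with probability about 1/ln(R), so a
# pack of 100 integers around R misses every prime with probability roughly
# (1 - 1/ln(R))^100: essentially zero for small R, climbing toward 1 as R grows.
import math

for exponent in [2, 4, 8, 50, 500]:
    R = 10 ** exponent
    p_no_prime = (1 - 1 / math.log(R)) ** 100
    print(f"R = 10^{exponent}: P(pack of 100 has no prime) ~ {p_no_prime:.3f}")
```

For the window the page displays, R is small enough that this probability is essentially zero, which is why those all-black rows and the left diagonal show up; asymptotically it goes the other way.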
If the writing does the job it needs to do--in this case, a deft summary of an article--why is it better if it comes from a human vs. AI? Analysis, sure. But summary? This is the whole point of the article...do you actually prefer to read bad writing because it was written by a real person?