Matrix Multiplication (matrixmultiplication.xyz)
438 points by jonbaer on Jan 17, 2022 | 97 comments



The animation looks really neat, but I did not find it very helpful from a pedagogical perspective, which is why I built an alternative version inspired by the OP. [1]

/edit: I just noticed the OP is built by andrestaltz with Cycle.js. Cycle is a really nice reactive programming JS framework that is great to work with.

[1]: https://static.laszlokorte.de/matrix-multiplication/


I like it!

If you put matrix B over (or under) the Result matrix, the relationship between rows in A, columns in B, and elements in Result would be even more directly visualized.

You could even show the intermediate products to be summed as a diagonal between matrices A and B as you step through them. Also with direct visual alignment.


> If you put matrix B over (or under) the Result matrix, the relationship between rows in A, columns in B, and elements in Result would be even more directly visualized.

This is the way I multiply matrices on paper - it's a lot easier to keep track of the current row and column that way.


I added an option for that.


This is a nice visualization. Although visually obvious, it might be nice to also show the matrix dimension adjacent to or beneath the Matrix A and Matrix B labels.


Thanks for the idea! I added that :)


This is great! I think if you swap the order to show (row x column) you'll help people grasp the intuition behind the need for matching up Matrix A's column count with Matrix B's row count.


Yes, I wanted to comment that it's just taking dot products of the rows of the left matrix and the columns of the right one. That's also easier to understand since there's just one thing going on at a time, and you don't have to move, flip and overlay matrices. This is nice to animate but almost impossible to do in your head.


I agree, your version is much better and a lot less confusing.


It'd be really nice if it could be parametrized so I could bookmark a specific size, like say https://static.laszlokorte.de/matrix-multiplication/?size=2x....


Thanks for your suggestion. I have now added a share link that allows you to share/bookmark the full state (size, cells, step).


I like both of them. I think yours is definitely more comprehensive and robust as a teaching tool, but there is something cool about watching the matrices overlay each other in a smooth transition.


Your version has the leftmost term cut off on Android mobile; the page centers too far to the left to see it for some reason.


Thank you for your feedback. Should be fixed now :)


This cleared up matrix multiplication after a long period of interest/confusion around the subject, great work and derivation.


Really nice set of educational utils!


In place of the slider, a button labeled with the current step would be nice.


Nice one! (Just as it is.)


This is incredibly confusing. I also don't like the dot-product explanation for individual entries of the resultant matrix. When I was in college, it was hammered into my head that matrix multiplication was a linear combination of column vectors. It took a while for that to sink in, but once it did, it made a lot more geometric sense than just taking a bunch of inner products of column and row vectors.


Totally yes. The mechanistic illustrations of matrix multiplication, including the OP, are easy enough but don't help me with motivation or intuition.

I always start with Ax, which is just a linear combination of the columns, like the first figure here https://eli.thegreenplace.net/2015/visualizing-matrix-multip...

For those who want to make their own cheat sheet:

Matrix $\times$ vector is a linear combination of the columns:

\begin{align}
  Ax &= \begin{pmatrix}
    a_{1,1} & a_{1,2}\\
    a_{2,1} & a_{2,2}\\
    a_{3,1} & a_{3,2}
  \end{pmatrix}
  \begin{pmatrix}
    x_1\\x_2
  \end{pmatrix}\\
  &= x_1\begin{pmatrix}
    a_{1,1}\\
    a_{2,1}\\
    a_{3,1}
  \end{pmatrix}
  +
  x_2\begin{pmatrix}
    a_{1,2}\\
    a_{2,2}\\
    a_{3,2}
  \end{pmatrix}
\end{align}

Intuitively, this example maps a point in $\mathbb{R}^2$ to $\mathbb{R}^3$

Another way to see $Ax$: Columns of $A$ form a basis, and $x$ is a coordinate vector in that basis.
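
A quick numerical check of that column view (a minimal numpy sketch; the 3x2 matrix and the vector are made-up values):

  import numpy as np

  A = np.array([[1., 2.],
                [3., 4.],
                [5., 6.]])   # 3x2: maps R^2 to R^3
  x = np.array([7., 8.])

  # Ax as a linear combination of the columns of A
  by_columns = x[0] * A[:, 0] + x[1] * A[:, 1]

  assert np.allclose(A @ x, by_columns)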


The link you posted perfectly depicts what I was saying. It's good to see a clear example of that online, when almost every other resource is showing the dot-product interpretation.


> a linear combination of column vectors

Isn’t that pretty much the same as the dot-product explanation?


To clarify, I mean if the columns of a 3x3 matrix A are A1, A2, and A3, and the scalar elements of vector x are <x1, x2, x3> then Ax = x1*A1 + x2*A2 + x3*A3. Each column of A is scaled by an element of x and then added together.

Is that what you had in mind by the dot-product explanation? To me, the dot product explanation is that in Ax = b, b1 = <row1 of A> dot x, b2 = <row2 of A> dot x, and b3 = <row3 of A> dot x.

Of course these (and all other valid) interpretations of matrix multiplication are "the same", but this is less geometrically intuitive to me.
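
For what it's worth, a tiny numpy sketch (values invented) showing the two readings agree:

  import numpy as np

  A = np.array([[1., 2., 3.],
                [4., 5., 6.],
                [7., 8., 9.]])
  x = np.array([1., 2., 3.])

  # column view: scale each column of A by the matching entry of x, then sum
  col_view = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]

  # row view: each b_i is the dot product of row i of A with x
  row_view = np.array([A[i, :] @ x for i in range(3)])

  assert np.allclose(col_view, row_view)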


A good example of using technology to make the simple complicated.


I was going to say, this is about the least intuitive I've seen matrix multiplication. Not only is the flip confusing, but they fill more than one result cell simultaneously.

Slick animations, but it needs to be slower and simpler.


I find this very clear. I would have loved having this kind of visualization when I studied linear algebra in college (which was a struggle for me).

In my experience, math is taught by people who are good at understanding abstract concepts. It's hard for me to pick up abstract concepts, so a lot of traditional math education / exercises / tutorials work poorly for me.


it could be helpful to visual learners


You are not a visual learner https://www.youtube.com/watch?v=rhgwIhB58PA


Other visual approaches are better. This one's too complicated.

Maybe what's missing is a slick animation...


My advice:

- Learn how to multiply a matrix M by a column vector v.

- Generalise that by noticing that M[u|v|...] = [Mu|Mv|...].

That's matrix multiplication.
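
A minimal numpy sketch of that generalisation (matrix and vectors are arbitrary):

  import numpy as np

  M = np.array([[1., 2.],
                [3., 4.]])
  u = np.array([5., 6.])
  v = np.array([7., 8.])

  # M [u | v] = [Mu | Mv]: multiplying column by column
  left  = M @ np.column_stack([u, v])
  right = np.column_stack([M @ u, M @ v])

  assert np.allclose(left, right)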



This is my favorite way to think about it as well. I think I learned it first in one of Stephen Boyd's astounding convex optimization lectures https://www.youtube.com/watch?v=McLq1hEq3UY


After all it's called linear algebra for a reason :)


also related: the left multiplication of a matrix M by a row vector: https://dzone.com/articles/visualizing-matrix


Visually, the rotation part makes it much less intuitive than writing the result matrix next to the first operand and the second operand above the result matrix


Denis Auroux explains that way of seeing it, https://youtu.be/bHdzkFrgRcA?t=1404


so useful - thanks!


I agree. Matrix multiplication can pretty succinctly be defined in terms of the dot product between the row and column of the inputs at the corresponding spot in the output. When I first learned it I already knew a little about the properties of dot products, so it made the “magical” properties of matrix multiplication more understandable. Maybe not the way a mathematician would define it — they might look at dot products as a special case of matrix multiplication and thus teach the latter first — but it was at least intuitive.
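
That entry-by-entry definition, as a short numpy sketch (shapes chosen arbitrarily):

  import numpy as np

  A = np.random.rand(2, 3)
  B = np.random.rand(3, 4)

  C = np.empty((2, 4))
  for i in range(2):
      for j in range(4):
          # each output entry is the dot product of a row of A and a column of B
          C[i, j] = A[i, :] @ B[:, j]

  assert np.allclose(C, A @ B)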


I think mathematicians will usually be aware of many ways of defining something. Because different definitions will suggest different approaches to a problem, and suggest different ways to generalize a concept.

Yours is an important one.


It can be nice to help visualize, though, for example if you have a block matrix and you know some blocks have particular properties, it can be helpful to imagine the steps of an algorithm as various blocks sliding across each other.


I like the website but don’t find the animation intuitive, sorry! I always position (for Ax = b) the A matrix with the x matrix to the top right of it. To the right of the A matrix, under the x matrix, I draw the resulting b matrix. I then draw a line through the (1,1) position of x downwards to b. It hits (1,1) of b, then I do the same horizontally from A(1,1). I take the dot product of the numbers which the two lines intersect, and repeat for all other positions. This is the only way I can remember matrix multiplication! It made my life so much easier and I am always on the lookout for intuitive visual tools for mathematics concepts. I don’t know if this is a really common tool, it was taught to me by my university professor Burkard Polster (the Mathologer on YouTube).



Thanks! Macroexpanded:

Matrix Multiplication - https://news.ycombinator.com/item?id=24141472 - Aug 2020 (1 comment)

Matrix Multiplication - https://news.ycombinator.com/item?id=13036386 - Nov 2016 (131 comments)

Matrix Multiplication - https://news.ycombinator.com/item?id=13033725 - Nov 2016 (2 comments)


Interesting to compare this discussion vs the discussion 5 years ago. Although the consensus is the same (that it's not very intuitive) people seem to be much more blunt with their criticism now...


Is the connotation that this is the most intuitive way to think about or visualize matrix multiplication? If so, I reject.


Yeah I found this confusing personally. For me, this image from learnopengl.com has been the most intuitive way: https://learnopengl.com/img/getting-started/matrix_multiplic... from their page on transformations https://learnopengl.com/Getting-started/Transformations


Agreed.

Posts like this seem to get upvoted on HN just because "oooh, a pretty animation" and/or they struggled with college math "See? This is how they should've taught it! Pictures are better than lectures!"


I understand the rotation part and some uses, but what's the intuitive explanation for why it's done this way? Convention?

IOW, what does matrix multiplication try to achieve by doing this manipulation, which visually looks like a rotation? Why not do it without rotation instead (with the precondition that the number of columns in the two matrices should be equal, instead of columns vs rows)?


When mathematicians think about matrix multiplication (and matrices in general), they don't really think about "rotating" matrices like in the animation above, but rather about operators and their composition. The matrix multiplication is what it is, because that's how function composition works.

Look: consider two functions, f(x, y) = (x + 2y, 3x + 4y), and g(x, y) = (-x + 3y, 4x - y). What's f(g(x, y))? Well, let's work it out, it's simple algebra:

f(g(x, y)) = f(-x+3y, 4x-y) = ((-x+3y)+2(4x-y), 3(-x+3y) + 4(4x-y)) = (-x + 3y + 8x - 2y, -3x + 9y + 16x - 4y) = (7x + y, 13x + 5y).

Whew, that was some hassle to keep track of everything. Now, here's what mathematicians typically do instead: they introduce matrices to make it much easier to keep track of the operations:

Let e_0 = (1, 0), and e_1 = (0, 1). Then f(e_0) = f(1, 0) = (1, 3) = e_0 + 3 e_1, and f(e_1) = f(0, 1) = (2, 4) = 2 e_0 + 4 e_1. Thus, mathematicians would write that f in basis e_0, e_1 is represented by the matrix

  [1 2]
  [3 4]

so that when you multiply it by the (column) vector [x, y], you get

      [x]
    * [y]   
 [1 2][x + 2y]
 [3 4][3x + 4y]
 
Similarly, g(e_0) = (-1, 4) = -e_0 + 4e_1, g(e_1) = (3, -1) = 3e_0 - e_1, so it's represented by the matrix:

  [-1  3]
  [ 4 -1]

Now, let's multiply matrix of f by matrix of g:

      [-1        3]
    * [ 4       -1]
 [1 2][-1*1+2*4  3*1-1*2]  = [7  1]
 [3 4][-1*3+4*4  3*3-1*4]    [13 5]
and when we multiply the resulting matrix by column vector [x, y]:

       [x]
     * [y]
 [7  1][7x + y]
 [13 5][13x + 5y]
So what we got was in fact our original calculation of f(g(x, y)) = (7x + y, 13x + 5y).

The conclusion here is that matrix multiplication is what the function composition forces it to be.
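
Here's a small numpy check of that composition argument, using the same f and g (the test point is arbitrary):

  import numpy as np

  F = np.array([[1., 2.],
                [3., 4.]])    # matrix of f(x, y) = (x + 2y, 3x + 4y)
  G = np.array([[-1., 3.],
                [ 4., -1.]])  # matrix of g(x, y) = (-x + 3y, 4x - y)

  p = np.array([2., 5.])      # arbitrary test point

  # composing the functions agrees with multiplying their matrices
  assert np.allclose(F @ (G @ p), (F @ G) @ p)
  print(F @ G)                # [[ 7.  1.]
                              #  [13.  5.]]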


Thanks to everyone in the thread for the explanations. I had matrix multiplication at uni (IT faculty, many years ago) as part of algebra courses; I memorized it, passed the exam, and forgot the topic. Though I don't remember anyone explaining back then why it's used or useful.

My early understanding, after reading your responses and the wiki article, is that it's useful if we have some input data (a vector) which then undergoes sequential manipulation by several functions, and we want to know the result in one step instead of many?


That's right, yeah.


The short answer is that it leads to a more consistent notation.

The longer answer is that matrix multiplication is essentially 2 different operations. Consider the linear functions:

  f :: R3 -> R3
  h :: R3 -> R3
As well as the point

  x :: R3
If we fix a set of basis vectors, we can represent f and h as 3x3 matrices, and x as a 3x1 matrix.

The product [f][x]=[f(x)] then represents the result of a function application, while [f][h] = [f∘h] represents function composition.

For the function application portion, there is no problem with your proposal. We simply represent the point as a 1x3 vector instead of a 3x1 vector (or similarly transpose the convention for representing a function as a matrix).

The problem comes with the function composition use-case. For your proposal to work, we would need to transpose only one of the two matrices, which means that the matrix representation of a function is determined both by the basis vectors, and a transposition parity bit. With multiplication as composition only making sense when the transposition parities don't match.


A different convention would make matrix-matrix multiplication no longer express composition of linear maps. So the geometric -- or deeper meaning -- would be lost.

The operation you're describing is nevertheless equal to A^T B. In other words, it can be expressed using a combination of matrix multiplication and matrix transpose. I don't see what it could be used for, though.


I'd guess that the "sliding window" metaphor means you could model matrix multiplication as a convolution. Does this give any intuition for matrix multiplication via the Fast Fourier Transform?

https://en.m.wikipedia.org/wiki/Convolution_theorem

> the convolution theorem states that under suitable conditions the Fourier transform of a convolution of two functions (or signals) is the pointwise product of their Fourier transforms.


The fastest known matrix multiplication algorithm is the Coppersmith-Winograd algorithm with a complexity of O(n^2.3737). FFT is O(n log n). So probably not.


Apologies; apparently I was thinking of integer multiplication using FFT:

https://en.wikipedia.org/wiki/Sch%C3%B6nhage%E2%80%93Strasse...

But my question (which wasn't about asymptotic complexity) stands: can we think about matrix multiplication as a convolution? If so, we can do pointwise multiplication sandwiched between Fourier transforms -- I don't expect it to be fast, I just expect it to be possible.


> Fastest known matrix multiplication algorithm...

...for general matrices. Fourier transforms do indeed diagonalize circulant matrices[1], making their multiplications O(2 n log n + n) -> O(n log n).

[1]: https://en.wikipedia.org/wiki/Circulant_matrix


Yes, no doubt that there are faster algorithms for specific matrices. That matrix you are bringing into the discussion is only n-dimensional, so not surprising that you can multiply it faster than a matrix that is n×n dimensional.


You can't model matrix multiplication as a convolution, but you can model convolution as matrix multiplication (with a Toeplitz matrix), which is why the discrete Fourier transform can be modeled as a matrix multiplication.
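
A small sketch of that direction (numpy; the kernel and signal are made up): building the Toeplitz matrix of a kernel and checking it reproduces np.convolve.

  import numpy as np

  h = np.array([1., 2., 3.])        # convolution kernel
  x = np.array([4., 5., 6., 7.])    # input signal
  n, m = len(x), len(h)

  # Toeplitz matrix of the kernel: T[i, j] = h[i - j], zero outside the kernel
  T = np.zeros((n + m - 1, n))
  for i in range(n + m - 1):
      for j in range(n):
          if 0 <= i - j < m:
              T[i, j] = h[i - j]

  assert np.allclose(T @ x, np.convolve(h, x))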


> You can't model matrix multiplication as a convolution

Hmm. I'm not convinced yet...

The "convolution" of two functions M and N -- let's notate it as M <> N, because ASCII -- will itself be a function, where for each index `i` we consider indices of M and N that sum to `i`. At each index `i`, we "sum" the "products" of M and N at indices summing to `i`; so if `i` is 2 (and we start indexing at 0, for sanity), we have `(M <> N)(2) = M(0)*N(2) + M(1)*N(1) + M(2)*N(0)`.

https://betterexplained.com/articles/intuitive-convolution/

Now consider a matrix multiplication M * N. We can think of M as an array of row vectors, and N an array of column vectors; in both cases, they're functions from (a subset of) the naturals to matrix elements.

To instantiate convolution on our matrices, we can take multiplication as dot-product, and addition as a formal (uninterpreted) sum. (So, technically, we need to lift the elements of our input matrices to these sums as well, but there's a canonical way to do this.) Then M <> N is an array of formal sums of vectors; each formal sum gives an anti-diagonal band of elements in the matrix product, and the whole array runs along the main diagonal.

There are almost certainly faults in the remaining details -- for example, it's not really clear that this is associative yet, due to the formal sum -- but this seems to cast matrix multiplication in the shape of a convolution.


The process itself is visualised well here, but can someone explain why anyone would want to multiply matrices? Yes, I know where it is used and I did my fair share back in college, but it never really clicked for me in an intuitive manner.

Why not simply go procedural and skip the matrix representation altogether? Using matrices in physics calculations, for example, feels like disassociating the intuitiveness of the physics for the sake of the matrix calculation tools. When I look at a matrix of values I don't see anything.


Most of the time, the objects you are actually interested in are not NxM grids of numbers, but linear functions. For simplicity, I will assume we are working with real numbers.

Suppose you have two linear functions:

  f :: R3 -> R3
  h :: R3 -> R2
and a point:

  x :: R3
The first problem you have if you want to work with these functions is how to represent them. As it turns out, assuming you have a set of basis vectors, any linear function Rn -> Rm can be represented by exactly n * m real numbers. Further, if we arrange these numbers in a grid it is easy to fill in the grid if you know how the function acts on the basis vectors, and it is easy to read off how the function acts on the basis vectors from the grid.

Similarly, a point in Rn can be represented by exactly n real numbers. Again, this representation depends on a choice of basis vectors.

For simplicity, let [?] represent the matrix representation of ?.

The second problem you want to tackle is how to compute function application y = f(x). Given the representations defined above, y is the point in R3 that is represented by the result of the matrix multiplication [f][x].

The third problem you want to solve is how to compute the function composition g = h∘f. Again, g is the function given by the matrix representation [g] = [h][f].


The other responses to your question are classic mathematicians giving math answers to someone who is asking why the math is useful. The most straightforward application is solving systems of equations. From early high school, we have problems like x+y=2 and x-y=3 and are asked to solve them. We then add a z (and maybe a w) and solve systems of 3 (or 4) equations before realizing it gets annoying very quickly. Linear algebra is simply the generalization of solving these equations. Instead of writing out all those equations line by line, let's make a new notation:

[[1,1],[1,-1]] @ [[x,y]] = [[2,3]]

@ is of course matrix multiplication, but what it does doesn't really matter here, only that it performs some "function". But is this notation helpful? It turns out this function is really useful for answering other questions. From here, we can go on to show very interesting things like how to find the answers for x and y. It turns out there is a recursive process similar to solving it by hand (isolate one of the variables, plug into another equation, isolate another, ...), and part of this recursive process has a pattern we will call the determinant. Turns out that when the determinant is 0, funny things happen. The point is that it all started by trying to solve these equations, which are interesting from a mathematical point of view but also show up naturally in many business applications. See Leontief's model [1] for how this could be useful. Consumption equaling production and bills being linear (#apples * $apples + #bananas * $bananas + ... = total spent) make linear systems arise naturally. All the things that come later (eigenvalues, SVD, etc.) were discovered as mathematical curiosities, and only later were they found to have very useful applications such as solving OLS and compression.

[1] - https://www.math.ksu.edu/~gerald/leontief.pdf
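
For concreteness, the toy system above in code (a minimal numpy sketch):

  import numpy as np

  A = np.array([[1.,  1.],
                [1., -1.]])    # coefficients of x + y = 2 and x - y = 3
  b = np.array([2., 3.])

  print(np.linalg.solve(A, b))  # [ 2.5 -0.5]
  print(np.linalg.det(A))       # -2.0; the "funny things" happen when this is 0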


The benefit is reusing the matrix theory - matrix knowledge(!) - in different contexts.

In some cases making a matrix in physics is a quite sterile representation, like you suggest, like plugging in the numbers in an otherwise generally stated question. The matrix values depend on choices of coordinate system and everything, quite inelegant in that way.

Why practically use matrices on the computer? It's efficient to compute in blocks instead of one by one.


One example: Physics can sometimes involve pretty gnarly integrals. Some of the intuitiveness of physics comes from the fact that we study the cases where we can get a closed-form solution -- nice geometric shapes for example, where we can set up a nice mathematically described surface integral or whatever.

Unfortunately, engineers insist on designing devices which are neither perfect spheres nor perfectly flat planes (ridiculous!), and they might only have equations to describe how properties change in time or space. In this case, it can be easier to discretize the thing into a mesh, and use a matrix to describe how the physical phenomena at the points of that mesh relate to each other.


Matrix math is just a standardized, regularized way of doing a whole hell of a lot of multiplication and addition en masse. Disassociation is arguably the goal: instead of remembering to add this, but multiply that, but divide this other thing, and subtract that... you just take the dot product of every row of the left matrix vs every column of the right matrix. It is procedural math... if you optimize for simplicity of procedure.

Optimizing for simplicity of procedure can make computers quite happy, and reduce errors by humans (they can still make arithmetic errors, but will make fewer errors of procedure)... and also lets you build reusable optimizations to that simple procedure (math theory about matrices) that can then be shared across multiple domains (a lot of work has been put into optimizing e.g. sparse matrices).

Figuring out an intuitive understanding of matrix values can be done, although it depends on the problem domain you're applying them to. My problem domain of choice is computer graphics. Rotating a vertex of a triangle can be done using a 3x3 matrix:

                    [ ux_x, ux_y, ux_z ]
    [v_x, v_y, v_z] [ uy_x, uy_y, uy_z ] = [o_x, o_y, o_z]
                    [ uz_x, uz_y, uz_z ]

    v_{...} is an input vertex position
    o_{...} is an output vertex position
Can this be visualized? Yes!

   ux_{...} is where a unit x vector (1,0,0) will end up at
   uy_{...} is where a unit y vector (0,1,0) will end up at
   uz_{...} is where a unit z vector (0,0,1) will end up at
What if I want to add an offset/position to all the vertices? Well, we don't want the multiplication bits then, so we need a 1 to have a no-op multiply:

   [v_x, v_y, v_z, 1.0]
And then we can use that 1.0 to add a new row to our matrix which will get added to the final output:

    [ ux_x, ux_y, ux_z ]
    [ uy_x, uy_y, uy_z ]
    [ uz_x, uz_y, uz_z ]
    [  t_x,  t_y,  t_z ]
Bam, we added [t_x, t_y, t_z] to [o_x, o_y, o_z]. t_{...} is then easily visualized as the position where vertices that were at the origin (0,0,0) will end up, and u_{...} becomes the position of the unit vectors relative to that origin, instead of in absolute coordinates.

EDIT: Adding a 4th column becomes useful for projection matrices in a way I don't have a super great intuition for... but it also lets us multiply transformation matrices together. This is quite useful: we can simplify "transform this vertex from the original mesh, into its position relative to this larger model, into its position in worldspace, into its position relative to this camera" to "multiply against this single 4x4 matrix that we constructed using matrix multiplication."
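
A minimal numpy sketch of the row-vector convention described above (the rotation and translation values are invented for illustration):

  import numpy as np

  # rows are where the unit x, y, z vectors end up; the last row is the translation
  # (here: a 90-degree rotation about z, followed by a translation by (10, 0, 0))
  M = np.array([[ 0., 1., 0., 0.],    # image of (1, 0, 0)
                [-1., 0., 0., 0.],    # image of (0, 1, 0)
                [ 0., 0., 1., 0.],    # image of (0, 0, 1)
                [10., 0., 0., 1.]])   # translation row

  v = np.array([1., 2., 3., 1.])      # a vertex in homogeneous coordinates

  print(v @ M)                        # [8. 1. 3. 1.]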


A major application is Markov chains.


How do you think about it in an intuitive way?


In a Markov chain, the matrix (or matrices) are similar to the matrix representation of a graph (without brackets):

    a b c d
  a 1 0 1 0
  b 1 1 0 0
  c 0 0 1 1
  d 1 0 0 1
Written another way, the above graph has these directed edges:

  a -> a, a -> c
  b -> a, b -> b
  c -> c, c -> d
  d -> a, d -> d
In a Markov chain, the possible state transitions (graph edges) are represented in the same way, but instead of a 1 or 0 (connects or doesn't connect) a probability is assigned to each possible transition.

https://en.wikipedia.org/wiki/Examples_of_Markov_chains - Several examples with a mix of prose, graphical presentation, and the matrix representation.
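
For a concrete feel (numbers invented): a two-state weather chain where each row of P holds the transition probabilities out of a state, and repeated multiplication gives multi-step probabilities.

  import numpy as np

  # rows: current state (sunny, rainy); columns: next state
  P = np.array([[0.9, 0.1],
                [0.5, 0.5]])

  start = np.array([1.0, 0.0])                   # start in the sunny state

  print(start @ P)                               # distribution after one step
  print(start @ np.linalg.matrix_power(P, 7))    # distribution after seven steps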


Connectedness between words.


I think there are two parts to matrix multiplications:

1) How you do it by hand, which this page offers a very handy and easy to remember recipe for.

2) And why you do it in such a weird and not very multiplicative way.

Matrix multiplication is so arbitrary when first introduced. Every time I see it work for some real life application I am surprised how mathematicians came up with it in the first place.


If you want some intuition, this explanation by Kalid Azad may help:

https://betterexplained.com/articles/matrix-multiplication/


It looks so innocuous when you write it as

(A B)_ij = A_ik B_kj.


Using the Einstein convention*. Otherwise that can be read wrong.

* https://en.wikipedia.org/wiki/Einstein_notation
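
numpy's einsum spells out exactly that index expression (a small sketch with arbitrary shapes):

  import numpy as np

  A = np.random.rand(3, 4)
  B = np.random.rand(4, 5)

  # (A B)_ij = A_ik B_kj, summing over the repeated index k
  C = np.einsum('ik,kj->ij', A, B)

  assert np.allclose(C, A @ B)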


This is how I've always visualised matrix multiplication in my head, I find it quite intuitive.


This is what you should show students before sitting them through a 1 hour class lecture.


I passed a class on linear algebra, and I think most of it made sense to me at the time. I can't remember anything of it now, except "Abelian" was a funny sounding word that was used a lot. I clicked around on this thing. I didn't find it illuminating at all.


I understand, people gain intuition through different things.


Have you looked into Strassen multiplication (https://en.wikipedia.org/wiki/Strassen_algorithm) and Karatsuba's algorithm for multiplication (https://en.wikipedia.org/wiki/Karatsuba_algorithm)? Apparently the latter is the one Python uses. I just recently learned it and I find it very interesting.


I needed this a bit more than 10 years ago, but thanks anyway :-)


This annoys me because it is the opposite of the way I do matrix multiplication myself. I work from left to right, taking the rows of the left-hand matrix, transposing them, then taking their scalar product with each column of the right-hand matrix in turn. This animation starts with the columns of the left-hand matrix and it feels like it is "working backwards" by going from right to left.


I think the view of matrices as a transform of space is more intuitive, for a visual approach. Ie, I think a nice addition to this webapp would be demonstrating (At least for 2 and 3 dimensions) what the transforms look like; this algorithmic approach it's showing feels less useful.


I was going through the ray tracer challenge book a while ago and this is the one thing I struggled with a little bit more than others.

The underlying data structure made all the difference, but I still wasn't quite happy with how I approached it, with a bunch of nested loops and iterators.


Whenever I talk about matrix multiplication, I just say RC multiplication or row-column dot product. Linear algebra is difficult because of how literal it is and the lack of emphasis on misinterpretations. A vector is different from most people's understanding of it.


Wow this is a much easier way to remember for me. Thank you!


I like the visual look of the sliding window.


I did until I realized it's computing more than one dot product at a time. Then it was suddenly not great. It falls somewhere between standard matrix multiplication and what might eventually be a good way to understand a systolic array multiplier.


Visually it looks very cool, but I agree it's not very helpful.

At the very least each result and RC should be in a different colour.


No problem in computer science is really optimised, until you can solve it with matrix multiplication ;)


I would never do it like this if I only had pen and paper but I admit the animation is pretty awesome :)


Well, now I understand matrix multiplication well enough to do it longhand in a pinch. Thanks for that!


This makes me think of a different but less intuitive way of matrix mul. Great jobs.


This visualisation really shows you how unintuitive matrix multiplication is.


This is absolutely, amazingly cool! Awesome job!


Absolutely beautifully animated! Nice


[flagged]


The domain name suggests it's someone called Laszlo Korte who owns the site and not jonbaer.

Also, what makes you think this is a "data gathering" exercise? Neither Privacy Badger nor uBlock Origin appears to have blocked any third-party cookies.

I think your comment is somewhat misguided.


Maybe OP just wanted to share something interesting and is unaffiliated with the site?



