I used to work with a carpenter from Trinidad. This was in the southern US, at an employer where I was probably the only college-educated person. I recall them as being very professional but able to socialize easily with everyone else, and they were also insanely good at chess.
Yeah one of the first open-source recommendation engines I ever worked with was called Voogo[1] and I believe it was based on k-means. This was back in 2008 or so?
For someone who had never been exposed to any of the math behind this kind of thing, it was an interesting implementation, and the source code was very readable.
The original website seems to be gone and I couldn't find a Git link, so apologies for SourceForge.
I'm no expert in ML, but beyond research and emerging work in unsupervised learning, clustering seems to be the most common approach, and there's been nothing conceptually new here in the past 10+ years. Don't get me wrong: computationally we can handle a ridiculous number of dimensions that would have been hard or impossible before, and there are new algorithms. But I was running k-means, DBSCAN/HDBSCAN, and Gaussian mixtures in grad school 15 years ago in the context of databases, had never heard of "machine learning", and we were in the glacial stage of the AI winter. There's some newer work that relies on human judgment to validate results, but clustering still seems to be the mainstream "data-validated" approach...
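For anyone who hasn't seen it, the core k-means loop mentioned above is tiny. A minimal sketch in pure Python (deterministic initialization and toy 2-D points, both chosen here just for illustration; real implementations use smarter seeding like k-means++):

```python
def kmeans(points, k, iters=20):
    """Plain k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    centroids = list(points[:k])  # naive deterministic init, for the sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # nearest centroid by squared Euclidean distance
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster goes empty
                centroids[i] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centroids

# Two obvious blobs; k-means should land one centroid in each.
pts = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
cents = sorted(kmeans(pts, 2))
```

That's the whole conceptual algorithm; everything since has mostly been about scaling it, not changing it.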
Might have been k-nearest-neighbors rather than k-means. k-NN can be used for "recommended because you bought X" or "users like you also bought X" style recommendations, relating user to user or item to item.
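The item-to-item flavor is easy to sketch: represent each item as a vector of which users bought it, then recommend the nearest items by cosine similarity. The purchase data below is made up for illustration:

```python
import math

# Toy user-item purchase data: user -> set of items bought (hypothetical).
purchases = {
    "alice": {"book", "pen"},
    "bob":   {"book", "pen", "lamp"},
    "carol": {"lamp", "mug"},
    "dave":  {"mug"},
}

USERS = sorted(purchases)

def item_vector(item):
    """Binary vector over users: did this user buy the item?"""
    return [1.0 if item in purchases[u] else 0.0 for u in USERS]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def nearest_items(item, k=1):
    """The k items most similar to `item` by co-purchase pattern."""
    items = sorted({i for s in purchases.values() for i in s})
    v = item_vector(item)
    scored = [(cosine(v, item_vector(o)), o) for o in items if o != item]
    return [o for _, o in sorted(scored, reverse=True)[:k]]
```

Here `nearest_items("book")` returns `["pen"]`, since the same users bought both, which is exactly the "people who bought X also bought Y" signal.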
K-means could also help group common users/items together if, e.g., you're memory-constrained and don't want to give each user a fully unique embedding entry, so that's also possible.
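The memory-saving idea above amounts to a codebook: keep k centroid rows and store only a small cluster id per user instead of a full embedding row. A sketch with hypothetical centroids (in practice they'd come from running k-means over the full user-embedding matrix):

```python
# Hypothetical centroids from a prior k-means run over user embeddings.
centroids = [
    [0.0, 0.0],  # cluster 0
    [1.0, 1.0],  # cluster 1
]

def assign(user_vec):
    """Index of the nearest centroid (squared Euclidean distance)."""
    return min(
        range(len(centroids)),
        key=lambda i: sum((a - b) ** 2 for a, b in zip(user_vec, centroids[i])),
    )

# Hypothetical per-user embeddings we no longer want to store individually.
user_embeddings = {"u1": [0.1, -0.1], "u2": [0.9, 1.2]}
user_cluster = {u: assign(v) for u, v in user_embeddings.items()}
# Now each user needs only an int; centroids[user_cluster[u]] serves as
# the shared embedding row for everyone in that cluster.
```

The trade-off is resolution: everyone in a cluster gets identical recommendations, which is exactly the memory-for-precision swap described above.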
It's still widely used and can be validated without significant human judgment. Implementations are more efficient, and the computational scale we can handle is through the roof, but the approach is still legit.
My only counter is that for a real-time collaborative app the server should be the single source of truth about application state, which would make htmx a good choice for that type of application.
Yeah, I've played hours of Minecraft with my own dithering renderer, but I'm sure going to 1-bit would give me a headache within minutes.
I wonder if rendering the in-game graphics as if they were shown on a virtual monitor would provide some respite. If the virtual monitor sat on a desk with other items, that would give players somewhere to rest their eyes.
The notes within are interesting, though it's not clear to me whether Automat itself meets those ideals, or even tries to. From what I understood, the notebook first praises tech that is ubiquitous and enduring, but then rejects web apps due to bloat?
I think they are referring to the author of the article losing their job because they weren't focused on AI:
> the Dean of my college told me...that I should look for long-term academic employment elsewhere. My research and practice was not centered enough on “AI” and “emerging technology” to fit within the institution...
Think of it more as a feature that provides accessibility to people who don't have the ability or the time, or who simply can't be bothered to work it out themselves.