Looking at the HN demo, I'm impressed. There are definitely relevant tags being generated. Unfortunately, there are also some noisy tags that clutter the results. To take one example, the post "DevOps? Join us in the fight against the Big Telcos" was given the tags "phone tools sendhub we're news experience customers comfortable"; I would say that "we're" is unarguably noise. Another example, "Questions for Donald Knuth" with tags "computer programming don i've knuth taocp algorithms i'm": I would call out "i've" and "i'm".
There are other words in both examples that I personally would not use as tags, but I can't really say they would be universally useless. I think a vast improvement could be made just by having a dictionary blacklist filled with things like these; from this tiny sampling, contractions seem to be a big loser.
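The blacklist idea is straightforward to sketch. This is a minimal, assumed example (not the demo's actual code) of post-filtering LDA output against a small list of contractions and non-tag words:

```python
# Hypothetical post-filter for LDA-generated tags. The blacklist entries
# are just examples pulled from the noisy tags discussed above.
BLACKLIST = {"we're", "i've", "i'm", "don't", "it's", "can't"}

def filter_tags(tags):
    """Drop blacklisted words and, as a catch-all, anything with an apostrophe."""
    return [t for t in tags if t.lower() not in BLACKLIST and "'" not in t]

print(filter_tags(["phone", "tools", "sendhub", "we're", "news"]))
# ['phone', 'tools', 'sendhub', 'news']
```

The apostrophe rule alone would catch most contractions without maintaining a long list.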
Agreed. Actually, we could turn up the number of iterations the LDA algorithm runs and those tags would get cleaned up, but it affects performance. This was just a quick and dirty example (built with an expectation of high traffic).
You can also seed LDA with a whitelist of words, which we didn't do either; again, all in the name of a quick and dirty solution to show.
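One way to read "seeding with a whitelist" is restricting the vocabulary the model ever sees, so only approved words can surface as tags. A minimal sketch of that pre-filtering step (an assumption about the approach, not the platform's API):

```python
# Hypothetical vocabulary restriction before LDA training: tokens outside
# the whitelist never enter the model, so they can never become tags.
WHITELIST = {"computer", "programming", "knuth", "taocp", "algorithms"}

def restrict_vocab(tokenized_docs):
    """Keep only whitelisted tokens in each tokenized document."""
    return [[w for w in doc if w.lower() in WHITELIST] for doc in tokenized_docs]

docs = [["questions", "for", "donald", "knuth", "taocp", "i'm"]]
print(restrict_vocab(docs))  # [['knuth', 'taocp']]
```

The trade-off is that a whitelist has to be curated per domain, which is exactly the kind of tuning a quick demo skips.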
That, and the questionable use of stopwords, makes it sound like they're just slapping some marketing on an out-of-the-box LDA implementation (not that I blame them, it's a dense algorithm).
And this doesn't really strike me as much of a victory for the idea that it's just the implementation of an algorithm that's the sticking point in practice.
We are just showing the versatility of the platform through a real-world use case. LDA is hard to implement/scale for the untrained, the same as many other machine learning, optimization, graph traversal, etc. algorithms. What we are building is a crowd-sourced, generalized API where all these algorithms can be combined and used together to really make any application smarter.
The demo we show here is a version of how we used our platform to generate tags for all entries in our API by combining algorithms that already existed in Algorithmia (modified for performance over quality due to the volume that HN would bring).
It's only hard because most libraries I've seen have so little documentation available. It's simple once you understand the library. We need people picking these libraries up, implementing them on weekends in fun projects, documenting their work and code, and publishing it for everyone to learn from.
This could be really useful in ecommerce for creating search keywords for category pages. The noise in the results doesn't matter much; so long as it gets 'T-Shirt' and someone searches for 'T-shirt', all is well and good.
Are you looking to plug what you have into something such as the Magento e-commerce platform? The right clients could pay proper money for this functionality. It is something I would quite like to speak to you about.
LDA is very impressive. But it might be better to have an iterative algorithm that forms a linear-algebraic basis from several tags (and lets people add more tags as vectors into the mix). Then, every time people upvote something, you update their interests (points in that linear-algebraic space), and every time an article gets upvoted you update ITS tags ...
After a while the system converges to a very useful structure: new members see correctly tagged articles, and the system learns their interests by itself.
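A toy sketch of that mutual-update idea (my own guess at what it might look like, not an established algorithm): users and articles live in the same tag-vector space, and each upvote nudges the two vectors toward each other.

```python
# Toy mutual update: an upvote moves the user's interest vector toward the
# article's tag vector and vice versa. The learning rate is an assumption.
def upvote(user_vec, article_vec, lr=0.1):
    new_user = [u + lr * (a - u) for u, a in zip(user_vec, article_vec)]
    new_article = [a + lr * (u - a) for u, a in zip(user_vec, article_vec)]
    return new_user, new_article

user = [1.0, 0.0]     # axes could stand for e.g. (programming, hardware)
article = [0.0, 1.0]
for _ in range(50):   # repeated upvotes pull the pair together
    user, article = upvote(user, article)
print(user, article)  # both converge toward (0.5, 0.5)
```

Since each step shrinks the gap between the two vectors by a constant factor, the pair converges, which is the "converges to a useful structure" behavior described above, in miniature.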
After watching "Enough Machine Learning to Make Hacker News Readable Again"[1], I thought of a recommendation-engine/machine-learning based link-sharing/discussion system (e.g. HN/Reddit style). Your frontpage would be continuously shaped by your up/down-votes. I'm not sure if the same could be applied to comment threads too, essentially creating automatic moderation. Algorithmic tagging would certainly be useful for that kind of site.
Not too impressed, to be honest; singular/plural forms are not treated as equal. I'm not familiar with LDA, but I've written an LSA implementation in the past, and it did a lot better than what is shown here.
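The singular/plural issue is usually handled by stemming or lemmatizing tokens before modeling. A deliberately naive sketch of the idea (a real pipeline would use a proper lemmatizer, e.g. NLTK's, rather than these suffix rules):

```python
# Naive singularization: collapse common plural suffixes so "tags" and
# "tag" count as the same token. Intentionally crude; English has many
# irregular plurals this misses.
def naive_singularize(word):
    if word.endswith("ies"):
        return word[:-3] + "y"
    if word.endswith("s") and not word.endswith("ss"):
        return word[:-1]
    return word

print([naive_singularize(w) for w in ["algorithms", "tags", "class", "libraries"]])
# ['algorithm', 'tag', 'class', 'library']
```

Running this (or a real lemmatizer) over the corpus before LDA would merge the duplicate singular/plural tags being complained about here.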
Lol, this seriously took me by surprise. I'm currently developing a HackerNews clone with tags (you can self-host it). I quickly generated this Google Form, if you are interested in being a beta user in the near future.
Yeah, I know lobste.rs. But I go much further than HN or lobste.rs (not limiting it to only URLs or text is just one feature). It's more a "Document Management System" with HN influence for larger businesses (or public websites) with (> 30 users) than an HN copy.
from gensim.models import ldamodel
# Load a 40-topic model trained on lemmatized HN data.
lda = ldamodel.LdaModel.load("hn40_lemmatized.ldamodel")
# Older (2012-era) models stored a scalar alpha; newer gensim expects one
# value per topic, so expand it to a 40-element list.
lda.alpha = [lda.alpha for _ in range(40)]
lda.show_topics()
LDA/Topic Modeling is interesting stuff. I always feel like the way this data gets surfaced as "tags" is very ineffective. Any non-tech person would look at this and generally be confused. So this item is triggering my rants against tagging:
- Tagging is like trying to predict the future. What word will help some future person to get to this content?
- Tagging often tries to fill the hole left by bad search
- There is no evaluation method to measure how good a set of tags is
- Tags create a lot of UI clutter.
Some of these points are related to encouraging users to tag content, but auto-tagging also seems problematic.
To me, something more along the lines of entity extraction is more useful, because it is a well-defined problem and can be used to improve a lot of other applications.
It seems like you would want to run k-means over the comments and the tags to pull out semantically meaningful words for tags, and then reduce the total number of tags over the corpus. Then, say, use Wikipedia to generate an automatic taxonomy where those extracted words occur.
Tagging is useful to summarize the content. It's like saying: describe this article in 3 nouns. A lot of HN articles are cryptic, and if you can pull out good tags it can be helpful in prioritizing what to read. It's even more helpful when there are hundreds of comments and you want to know the key topics of discussion. The problem is that generated tags are often very poor quality.
To understand the utility of tagging, look at some article, read it, and then put down the 3 words that best describe the topics. I bet most others would find human-generated tags very useful. Machine-generated tags are usually nowhere close to what humans would generate.
I would like to see this kind of tagging used to improve search results while simply hiding the tags themselves. You could increase rank when there are more tag hits. Although I guess that is essentially what good search is.
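The "increase rank on tag hits" idea can be sketched as a simple score boost. The scoring function and boost weight here are made-up assumptions, just to make the idea concrete:

```python
# Hypothetical ranking tweak: boost a result's base score by how many
# query terms overlap with its hidden tags.
def score(base_score, query_terms, tags, boost=0.5):
    hits = len({t.lower() for t in query_terms} & {t.lower() for t in tags})
    return base_score + boost * hits

print(score(1.0, ["lda", "tagging"], ["LDA", "topic-modeling"]))  # 1.5 (one hit)
```

The tags never need to appear in the UI; they only contribute as a hidden field in the ranking function.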
Tags, while cluttering the UI, do help you find similar content. Still not as good as a good recommendation system, but a decent stopgap measure in some instances.
I like this project (I am creating something like this, so I'm pretty serious).
But doesn't the auto-tagging feature make too much noise for a business use case? For example, it tags an article about Amazon and includes Google in the tags. Whitelisting words wouldn't fix this (Google is a whitelisted word if Amazon is).
I don't know about LDA, though. Perhaps proper tag administration would fix this, but then you'd have to remove tags on the go.
Has anyone seen Open Calais [1]? It does tagging and categorization. It's been around for years and seems pretty powerful. It's a bit lower-level than Algorithmia (not href aware), but it seems more powerful, and a system like Algorithmia could be built on it.
You can try it yourself at the bottom of the blog post, or you can send me a URL and I can try a bunch for you: diego at algorithmia dot com or @doppenhe.