Ask HN: Identifying duplicate data from a large dataset?
2 points by gerenuk on March 18, 2019 | 3 comments
Hi,

We have a dataset of around 150 million URLs plus other metadata in Elasticsearch, and we're looking for an efficient way to identify duplicate URLs/titles in it. We tried the Elasticsearch terms aggregation, but it becomes very slow, returns at most 10,000 buckets, and often misses URLs.
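
A composite aggregation that pages through every bucket might get around the 10,000-bucket cap; here is a minimal sketch with the Python client (elasticsearch-py 8.x style; the index name, field name, and page size are placeholders, not our real mapping):

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")
    # "pages" index and "url" keyword field are placeholders
    agg = {"dupes": {"composite": {
        "size": 1000,
        "sources": [{"url": {"terms": {"field": "url"}}}],
    }}}

    after = None
    duplicates = []
    while True:
        if after is not None:
            agg["dupes"]["composite"]["after"] = after
        resp = es.search(index="pages", size=0, aggs=agg)
        buckets = resp["aggregations"]["dupes"]["buckets"]
        if not buckets:
            break
        # keep only URLs that appear in more than one document
        duplicates.extend(b["key"]["url"] for b in buckets if b["doc_count"] > 1)
        after = resp["aggregations"]["dupes"].get("after_key")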

Currently we keep a Redis sorted set: before indexing a URL, we check whether it's already in the set.
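
A minimal sketch of that kind of check with redis-py (the key name and score choice are placeholders, not our exact code):

    import time
    import redis

    r = redis.Redis()

    def seen_before(url: str) -> bool:
        # zscore returns None when the member is not in the sorted set
        if r.zscore("crawled_urls", url) is not None:
            return True
        r.zadd("crawled_urls", {url: time.time()})
        return False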

Options we have explored (rough sketches of both below):

1. ClickHouse: store all the URLs there and run aggregations on them later.

2. Store the URLs in Redis together with a Bloom filter.
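
Rough sketches of both options, with made-up table/key names (option 2 assumes the RedisBloom module is loaded):

    from clickhouse_driver import Client
    import redis

    # Option 1: aggregate in ClickHouse (assumes a `urls` table with a `url` String column)
    ch = Client("localhost")
    duplicate_urls = ch.execute(
        "SELECT url, count() AS copies FROM urls GROUP BY url HAVING copies > 1"
    )

    # Option 2: Bloom filter in front of Redis (requires the RedisBloom module)
    r = redis.Redis()
    def probably_new(url: str) -> bool:
        # BF.ADD returns 1 only if the item was not already (probably) in the filter
        return bool(r.execute_command("BF.ADD", "url_filter", url))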

If you have worked on something similar, we'd love to hear your feedback.

Thanks.




This is easier than deduplicating the many different URLs which have the same content. A harder problem awaits you!

ML & basic stats
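
As a hedged sketch, the most basic version of that is an exact fingerprint of the normalized content (near-duplicates would need MinHash/SimHash or the stats/ML mentioned above):

    import hashlib

    def content_fingerprint(body: str) -> str:
        # collapse whitespace and case so trivial formatting differences
        # don't change the hash; identical fingerprints = duplicate content
        normalized = " ".join(body.split()).lower()
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()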


What approach would you go for initially for deduplicating the same URLs?


You might look at a real data processing system, like something from the Apache projects.
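
For example, a rough PySpark sketch, assuming the URLs get exported from Elasticsearch to Parquet first (paths and column names are made up):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("url-dedup").getOrCreate()
    urls = spark.read.parquet("urls.parquet")  # hypothetical export of the ES index
    # group by URL and keep only the ones that occur more than once
    dupes = urls.groupBy("url").count().filter(F.col("count") > 1)
    dupes.write.mode("overwrite").parquet("duplicate_urls.parquet")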



