Tracking supermarket prices with Playwright (sakisv.net)
467 points by sakisv 3 months ago | 210 comments



I have been doing something similar for New Zealand since the start of the year with Playwright/TypeScript, dumping parquet files to cloud storage. I've just been collecting the data; I have not yet displayed it. Most of the work is getting around reverse proxy services like Akamai and Cloudflare.

At the time I wrote it I thought nobody else was doing this, but now I know of at least 3 startups doing the same in NZ. It seems the inflation really stoked a lot of innovation here. The patterns are about what you'd expect. Supermarkets are up to the usual tricks of arbitrarily making pricing as complicated as possible, using 'sawtooth' methods to segment time-poor people from poor people. Often they'll segment brand-loyal vs price-sensitive people; there might be 3 popular brands of chocolate and every week only one of them will be sold at a fair price.


Can anyone comment on how supermarkets exploit customer segmentation by updating prices? How do the time-poor and the plain poor generally respond?

“Often they'll segment brand-loyal vs price-sensitive people; there might be 3 popular brands of chocolate and every week only one of them will be sold at a fair price.”


Let's say there are three brands of some item. Each week one of the brands is rotated to $1 while the others are $2. And let's also suppose that the supermarket pays 80c per item.

The smart shopper might buy in bulk once every three weeks when his favourite brand is at the lower price, or switch to the cheapest brand every week. A hurried or lazy shopper might just pick their favourite brand every week. If they each buy one item a week, over three weeks the lazy shopper will have spent $5, while the smart shopper has only spent $3.

They've made 60c off the smart shopper and $2.60 off the lazy shopper. By segmenting out the lazy shoppers they've made an extra $2. The whole idea of rotating the prices has nothing to do with the cost of goods sold; it's all about making shopping a pain in the ass for busy people and catching them out.
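To make the arithmetic concrete, here's a quick sketch using the hypothetical numbers above (the sale price, full price and 80c cost are just the example figures, not real data):

```typescript
// Rough sketch of the three-week rotation described above.
const salePrice = 1.0;   // the one brand "on special" each week
const fullPrice = 2.0;   // the other two brands
const cost = 0.8;        // what the supermarket pays per item
const weeks = 3;

// Smart shopper: always buys whichever brand is on special.
const smartSpend = weeks * salePrice;                  // $3.00

// Lazy shopper: sticks to one brand, which is only on special 1 week in 3.
const lazySpend = salePrice + (weeks - 1) * fullPrice; // $5.00

const margin = (spend: number) => spend - weeks * cost;
console.log(margin(smartSpend).toFixed(2)); // "0.60"
console.log(margin(lazySpend).toFixed(2));  // "2.60"
```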


Bingo. An extra 50 cents to 2 dollars per item in a grocery order adds up quickly.

Also, in-store pricing can be cheaper or not advertised as a sale, while the online price is often higher, even for pickup rather than delivery.

"In-Store Pricing" on websites/apps are an interesting thing to track as well, it feels the more a grocery store goes towards "In-Store pricing', the higher it is.

This would be great to see in other countries.


Legality of this is rocky in Australia. I dare say that NZ is the same?

There are so many scrapers that come and go doing this in AU but are usually shut down by the big supermarkets.

It's a cycle of usefulness and "why doesn't this exist?", except it has existed many times before.


I think with the current climate wrt the big supermarkets in AU, now would be the time to push your luck. The court of public opinion will definitely not be on the supermarkets' side, and the government may even step in.


Agreed. Should we make something and find out?


Agreed. Hopefully the gov's price gouging mitigation strategy includes free flow of information (allowing scraping for price comparison).

I've been interested in price comparison for Australia for a while. I'm a product designer/manager with a concept prototype design, looking for others interested in working on it. My email is on my profile if you are.


Aussie here. I hadn't heard that price scraping is only quasi-legal here and that scrapers get shut down by the big supermarkets - but then again I'm not surprised.

I'm thinking of starting a little price comparison site, mainly to compare select products at Colesworths vs Aldi (I've just started doing more regular grocery shopping at Aldi myself). But as far as I know, Aldi don't have any prices / catalogues online, so my plan is to just manually enter the data myself in the short term, and to appeal to crowdsourcing the data in the long term. The plan is to make it a simple SSG site (e.g. Hugo-powered), with the data all in simple markdown/JSON files, all sourced via GitHub pull requests.

Feel free to get in touch if you'd like to help out, or if you know of anything similar that already exists: greenash dot net dot au slash contact


> But as far as I know, Aldi don't have any prices / catalogues online

There are a few here, but more along the lines of a flyer than a catalog:

https://www.aldi.com.au/groceries/

Aldi US has a more-or-less complete catalog online, so it might be worth crowdsourcing suggestions to the parent company to implement the same in Australia.


Ironic that I get a few responses saying "I want to do this project".

I did it around 10 years ago and I think I've seen one a year since then. I didn't bother once I saw that the people shipping were doing well and increasing their dataset, only for it to be severely reduced later (I assume due to threats of legal action).


> get shut down by the big supermarkets

How do they shut them down?


Threaten legal action if scraping continues, or something similar, since the scraping ends abruptly for every site while the prices still sit online at the supermarket's site.


For the other commenters here - looks like this site does the job? https://hotprices.org/

With the corresponding repo too: https://github.com/Javex/hotprices-au


> Legality of this is rocky in Australia. I dare say that NZ is the same?

You might be breaking the site's terms and conditions but that does not mean it's illegal.

Dan Murphy's does a similar thing; they have their own price-checking algorithm.


Breaking the ToS is 100% what's happening, but that alone wouldn't stop people scraping. In every case the people scraping receive something that makes them stop while the data is still available, continuing the "why doesn't someone do this" cycle.


If it's just a matter of (not) breaking ToS, then my idea of manually collating prices shouldn't be able to be shut down (or credibly threatened with being shut down). It might be a painstaking (and unsustainable) effort, but price tags in a physical store have no ToS.


I built one called https://bbdeals.in/ for India. I mostly use it to buy just fruits and it's saved me about 20% of spending, which is not bad in these hard times.

Building the crawlers and the infra to support them took not more than 20 hours.


Does this work for HYD only?


Yes. Planning to expand it to other major cities.


As a Kiwi, are you able to share any of these (or your) projects? I'm quite interested.


Those who order grocery delivery online would benefit from price comparisons, because they can order from multiple stores at the same time. In addition, there's only one marketplace that has all the prices from different stores.


>Those who order grocery delivery online would benefit from price comparisons, because they can order from multiple stores at the same time.

Not really, since the delivery fees/tips that you have to pay would eat up all the savings, unless maybe you're buying for a family of 5 or something.


Instacart, Uber Eats and DoorDash all sell $100 gift cards for $80 basically year-round; when you combine that 20% with other promotions I often have deliveries that are cheaper than shopping in person.


Uber Eats and Deliveroo both have list prices that are 15-20+% above the shelf price in the same supermarket. Plus a delivery fee, plus the "service charge"; I've _never_ found it to be competitive, let alone cheaper.


Some vendors offer "in-store prices" on Instacart.


Are those usually discounted via coupon sites?


No, I buy them from Costco, Sam’s Club mostly.


I think the fees they tack on for online orders would ruin ordering different products from different stores. It mostly makes sense with staples that don't perish.

With fresh produce I find Pak n Save a lot more variable in quality, making online orders riskier despite the lower cost.


For those who have to order online (e.g. elderly), they are paying the fees anyway. They can avoid minimum order fees with bulk purchases of staples. Their bot/app can monitor prices and when a staple goes on sale, they can order multiple items to meet the threshold for a lower fee.


I was planning on doing the same in NZ. I would be keen to chat with you about it (email in HN profile). I am a data scientist.

Did you notice anything pre and post the Whittaker's price increase(s)? They must have a brilliant PR firm on retainer for every major news outlet to more or less push the line that increased prices are a good thing for the consumer. I've noticed more aggressive "sales" recently, but I'm unsure if I'm just paying more attention.

My prediction is that they will decrease the size of the bars soon.


I think Whittaker's changed their recipe some time in the last year. Whittaker's was what Cadbury used to be (good), but now I think they have both followed the same course: markedly lower quality. This is the 200g blocks, fwiw; not sure about the wee 50g peanut slab.


Nice writeup. I've run into similar problems to yours with my contact lens price comparison website https://lenspricer.com/ that I run in ~30 countries. I have found, like you, that websites changing their HTML is a pain.

One of my biggest hurdles initially was matching products across 100+ websites. Even though you think a product has a unique name, everyone puts their own twist on it. Most can be handled with regexes, but I had to manually map many of these (I used AI for some of it, but had to manually verify all of it).

I've found that building the scrapers and infrastructure is somewhat the easy part. The hard part is maintaining all of the scrapers and figuring out, when a product disappears from a site, whether it's because my scraper has an error, my scraper is being blocked, the site made a change, the site was randomly down for maintenance when I scraped it, etc.

A fun project, but challenging at times, and annoying problems to fix.


Doing the work we need. Every year I get fucked by my insurance company when buying a basic thing - contacts. Pricing is all over the place and coverage is usually 30% done by mail in reimbursement. Thanks!


Thanks for the nice words!


I'm curious, can you wear contact lenses while working? I notice my eyes get tired when I look at a monitor for too long. Have you found any solutions for that?


I use contact lenses basically every day, and I have had no problems working in front of screens. There's a huge difference between the different brands. Mine is one of the more expensive ones (Acuvue Oasys 1-Day), so that might be part of it, but each eye is compatible with different lenses.

If I were you I would go to an optometrist and talk about this. They can also often give you free trials for different contacts and you can find one that works for you.


FWIW, that is the same brand that I use and was specifically recommended for dry-eyes by my optometrist. I still wear glasses most of the time because my eyes also get strained from looking at a monitor with contacts in.

I'd recommend a trial of the lenses to see how they work for you before committing to a bigger purchase.


> Acuvue Oasys 1-Day

I don't often wear contacts at work but I can second that these are great for "all day" wear.


Age is an important factor here, not just contact brands.

As you get older, your eyes get drier. Also, having done Lasik and needing contacts many years later is a recipe for dry eyes.


This is very likely age-dependent.

When I was in my 20s, this was absolutely not a problem.

When I hit my 30s, I started wearing glasses instead of contacts basically all the time, and it wasn't a problem.

Now that I'm in my 40s, I'm having to take my glasses off to read a monitor and most things that are closer than my arm's reach.


Wait until you get to 50 and you have to take OFF your glasses to read things that are small or close.

This is the most annoying part of all my vision problems.


Hah, I'm already there in my 40s! I'm seriously considering getting a strap for my glasses - right now I just hook them into my shirt, but they'll occasionally fall out when I bend over for something, and it's only a matter of time before they break or go into a sewer.


My eye doctor recommended wearing "screen glasses". They are a small prescription (maybe 0.25 or 0.5) with blue blocking. It's a small difference but it does help; I wear normal glasses at night (so my eyes can rest) and contacts + screen glasses during the day, and they are really close.


Go try an E-Ink device. B&N Nooks are small Android tablets in disguise, you just need to install a launcher. Boox devices are also Android.

I can use an E-Ink device all day without my eyes getting tired.


I cannot, personally. They dry out


For Germany, below the prices it says "some links may be sponsored", but it does not mark which ones. Is that even legal? Also there seem to be very few shops, are maybe all the links sponsored? Also idealo.de finds lower prices.


When I decided to put the text like that, I had looked at maybe 10-20 of the biggest price comparison websites across different countries because I of course want to make sure I respect all regulations that there are. I found that many of them don't even write anywhere that the links may be sponsored, and you have to go to the "about" page or similar to find this. I think that I actually go further than most of them when it comes to making it known that some links may be sponsored.

Now that you mention idealo, there seems to be no mention at all on a product page that they are paid by the stores, you have to click the "rank" link in the footer to be brought to a page https://www.idealo.de/aktion/ranking where they write this.


Fair enough, I had assumed the rules would be similar to those for search engines.


> One of my biggest hurdles initially was matching products across 100+ websites. Even though you think a product has a unique name, everyone puts their own twist on it. Most can be handled with regexes, but I had to manually map many of these (I used AI for some of it, but had to manually verify all of it)

In the U.S. at least, big retailers will have product suppliers build slightly different SKUs for them to make price comparisons tricky. Costco is somewhat notorious for this: almost all electronics (and many other products) sold in their stores are custom SKUs, often with slightly different product configurations.


Costco does this for sure, but Costco also creates their own products. For instance there are some variations of a package set that can only be bought at Costco, so you aren't getting the exact same box and items as anywhere else.


Would that still matter if you just compare by description?


Isn’t this a use-case where LLMs could really help?


Yeah, it is to some degree. I tried to use it as much as possible, but there are always those annoying edge cases that make me not trust the results, so I have to check everything; it ended up being faster to just build a simple UI where I can easily classify the names myself.

Part of the problem is simply due to bad data from the websites. Just as an example - there's a 2-week contact lens called "Acuvue Oasys". And there's a completely different 1-day contact lens called "Acuvue Oasys 1-Day". Some sites have been bad at writing this properly, so both variants may be called "Acuvue Oasys" (or close to it), and the way to distinguish them is to look at the image to see which actual lens they mean, look at the price etc.

It's true that this could probably also be handled by AI, but in the end, classifying the lenses takes like 1-2% of the time it takes to make a scraper for a website so I found it was not worth trying to build a very good LLM classifier for this.


> It's true that this could probably also be handled by AI, but in the end, classifying the lenses takes like 1-2% of the time it takes to make a scraper for a website so I found it was not worth trying to build a very good LLM classifier for this.

This is true for technology in general (in addition to specifically for LLMs).

In my experience, the 80/20 rule comes into play in that MOST of the edge cases can be handled by a couple of lines of code or a regex. There is then this asymptotic curve where each additional line of code handles a rarer and rarer edge case.

And, of course, I always seem to end up on projects where even a small, rare edge case has some huge negative impact if it gets hit, so you have to keep adding defensive code and/or build a catch-all bucket that alerts you to the issue without crashing the entire system, etc.


Do you support Canada?


I created a similar website which got lots of interest in my city. I scrape both app and website data using a single Linode server with 2GB of RAM, 5 IPv4 addresses and 1000 IPv6 addresses (which are free), and every single product is scraped at most every 40 minutes, never more than that, with an average interval of 25 minutes. I use curl-impersonate and scrape JSON as much as possible, because 90% of markets serve prices via Ajax calls; for the other 10% I use regex to easily parse the HTML. You can check it at https://www.economizafloripa.com.br


> I scrape even app and websites data

And then try to sell it back to businesses, even suggesting they use the data to train AI. You also make it sound like there’s a team manually doing all the work.

https://www.economizafloripa.com.br/?q=parceria-comercial

That whole page makes my view of the project go from “helpful tool for the people, to wrestle back control from corporations selling basic necessities” to “just another attempt to make money”. Which is your prerogative, I was just expecting something different and more ethically driven when I read the homepage.


Where does this lack ethics? It seems that they are providing a useful service, that they created with their hard work. People are allowed to make money with their work.


[flagged]


That was not the argument at all. Please don’t strawman. From the guidelines:

> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.


> Where does this lack ethics?

I didn’t say it lacked ethics, I said I expected it to be driven by ethics. There’s a world of difference. I just mean it initially sounded like this was a protest project “for the people”, done in a way to take back power from big corporations, and was saddened to see it’s another generic commercial endeavour.

> People are allowed to make money with their work.

Which is why I said it’s their prerogative.

If you’re going to reply, please strive to address the points made, what was said, not what you imagine the other person said. Don’t default to thinking the other person is being dismissive or malicious.


I’m curious to know why you thought it “sounded like this was a protest project ‘for the people’”?

I’ve read the parent post above and looked at the website and see nothing that would make me think it’s a “protest for the people”.

It just seems a little strange when you then go on to say “strive to address… what was said, not what you imagine the other person said”.


I had the same thought. This is why:

- 'I created a similar website', so it compares to https://pricewatcher.gr/en/.

- a big part of the discussion is in the context of inflation and price gouging

- pricewatcher presents its data publicly for all consumers to see and use, it is clearly intended as a tool for consumers to combat price gouging strategies

- 'pricewatcher.gr is an independent site which is not endorsed by any shop', nothing suggests this website is making money off consumers

- the 'similar website', however, is offering exclusive access to data to businesses, at a price, in order for those businesses to undercut the competition and become more profitable

So the goals are almost opposite. One is to help consumers combat price gouging of supermarkets, the other is to help supermarkets become (even) more profitable. It is similar in the sense that it is also scraping data, but it's not strange to think being similar would mean they would have the same goal, which they don't.


> I’m curious to know why you thought it “sounded like this was a protest project ‘for the people’”?

See the sibling reply by another user, which I think explains it perfectly.

https://news.ycombinator.com/item?id=41179628

> It just seems a little strange when you then go on to say “strive to address… what was said, not what you imagine the other person said”.

It’s not strange at all if you pay attention to the words. I did not mischaracterise the author or their goals, I explained what I expected and what I felt regarding what I experienced reading the post and then the website.

In other words, I’m not attacking or criticising the author. I’m offering one data point, one description of an outside view which they’re free to ignore or think about. That’s it.

Don’t take every reply as an explicit agreement or disagreement. Points can be nuanced, you just have to make a genuine effort to understand. Default to not assuming the other person is a complete dolt. Or don’t. It’s also anyone’s prerogative to be immediately reactive. That’s becoming ever more prevalent (online and offline), and in my view it’s a negative way to live.


I would argue that you imagined something that the other person said - in this context, the other website. This is why your "strive to address... what was said, not what you imagine the other person said" comment sits uneasily for me.

I'm not sure if your highly condescending and somewhat reactive tone is intended or not, perhaps it's satirical, but in case you are unaware, your tone is highly condescending. Saying things like "if you pay attention to the words" and "you just have to make a genuine effort to understand" and your didactic "default to this" comes across rather badly.


Respectfully, I don’t relish the idea of wasting more time failing to impart the difference between criticism and expressing personal disappointment. It is possible to dislike something without thinking that thing or its author are bad. Not everything is about taking a side. We used to be able to understand that.

My tone did succumb to the frustration of having the point be repeatedly strawmanned (you’ll have to turn on show dead to see every instance), which is worryingly becoming more common on HN. I accept and thank you for calling that out. While in general I appreciate HN’s simple interface, it makes it easy to miss when the person we’re talking to on a new reply is not the same as before, so I apologise for any negativity you felt from me.

I may still see your reply to this (no promises) but forgive me if I don’t reply further.


It’s almost like people try to do valuable services for others in exchange for money.


A good few people can't imagine doing anything for any other reason. The fascinating aspect is that (combined with endless rent seeking) everything gets more expensive and eventually people won't have time to do anything for free. What is also fascinating is that by [excessively] not accounting for things done for free, people take them for granted. We are surrounded by useful things done for us by generations long gone.

I started thinking about this when looking at startup micro nations. Having to build everything from scratch turns out to be very expensive and a whole lot of work.

Meanwhile we are looking to find ways to sell the price label separately. haha

When I worked at a supermarket I often proposed rearranging the shelves into a maze. One could replace the parking lot with a hedge maze with many entrances and exits from the building. Special doorways on timers and remote control so that you need only one checkout. Add some extra floors, mirrors and glass walls.

There are countless possibilities: you could also have different entrances and exits with different fees, doorways that only open if you have all varieties of a product in your basket, crawling tunnels, cage fights for discounts, double-or-nothing buttons, slot machines to win coupons.

valuable services for others?


That was not the argument. Please don’t strawman. Saying “I was just expecting something different” does not mean “what you are doing is wrong”.

I also expected replies to be done in good faith. That they would cover what I said, not that the reader would inject their own worst misconceptions and reply to those. I generally expect better from this website. I am also disappointed when that isn’t the case.


How does the ipv6 rotation work in this flow?


Nice article!

> The second kind is nastier. > > They change things in a way that doesn't make your scraper fail. Instead the scraping continues as before, visiting all the links and scraping all the products.

I have found that it is best to split the task of scraping and parsing into separate processes. By saving the raw JSON or HTML, you can always go back and apply fixes to your parser.
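As a minimal sketch of that split (the paths and the parse hook are made up for illustration):

```typescript
// Sketch of "scrape first, parse later": store the raw response so the
// parser can be re-run after fixes without re-scraping anything.
import { promises as fs } from "fs";
import * as path from "path";

async function saveRawPage(url: string, body: string): Promise<string> {
  const file = path.join("raw", `${Date.now()}-${encodeURIComponent(url)}.html`);
  await fs.mkdir("raw", { recursive: true });
  await fs.writeFile(file, body, "utf8");
  return file;
}

async function parseSavedPages(parse: (html: string) => unknown[]): Promise<unknown[]> {
  // A separate pass over everything in ./raw; if the parser had a bug,
  // just fix it and run this step again.
  const results: unknown[] = [];
  for (const name of await fs.readdir("raw")) {
    const html = await fs.readFile(path.join("raw", name), "utf8");
    results.push(...parse(html));
  }
  return results;
}
```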

I have built a similar system and website for the Netherlands, as part of my master's project: https://www.superprijsvergelijker.nl/

Most of the scraping in my project is done with simple HTTP calls to JSON APIs. For some websites, a Playwright instance is used to get a valid session cookie and circumvent bot protection and captchas. The rest of the crawler/scraper, parsers and APIs are built using Haskell and run on AWS ECS. The website is NextJS.

The main challenge I have been working on is linking products from different supermarkets, so that you can list prices in a single view. See for example: https://www.superprijsvergelijker.nl/supermarkt-aanbieding/6...

It works for the most part, as long as at least one correct barcode number is provided for a product.


Thanks!

> I have found that it is best to split the task of scraping and parsing into separate processes. By saving the raw JSON or HTML, you can always go back and apply fixes to your parser.

Yes, that's exactly what I've been doing and it saved me more times than I'd care to admit!


Awesome, have been looking for something like this!


This is interesting because I believe the two major supermarkets in Australia can create a duopoly in anti-competitive pricing just by each employing price-analysis algorithms; the algorithms will likely end up cooperating to maximise profit. This can probably be done legally through publicly obtained prices, and illegally by sharing supply-cost or profit-per-product data. The result is likely to be similar. Two trained models will maximise profit in weird ways using (super)multidimensional regression analysis (which is all AI is), and the consumer will pay for maximised profits to ostensible competitors. If the pricing data can be obtained like this, not much more is needed to implement a duopoly-focused pair of machine learning implementations.


Here in Norway, what is called the "competition authority" (https://konkurransetilsynet.no/norwegian-competition-authori...) is frequently critical of open and transparent (food) price information for that exact reason.

The rationale is that if all prices are out there in the open, consumers will end up paying a higher price, as the actors (supermarkets) will end up pricing their stuff equally, at a point where everyone makes a maximum profit.

For years said supermarkets have employed "price hunters", which are just people that go to competitor stores and register the prices of everything.

Here in Norway you will oftentimes notice that supermarket A will have sale/rebates on certain items one week, then the next week or after supermarket B will have something similar, to attract customers.


The word I was looking for was collusion, but done with software and without people-based collusion.


Compusion.


> They change things in a way that doesn't make your scraper fail. Instead the scraping continues as before, visiting all the links and scraping all the products. However the way they write the prices has changed and now a bag of chips doesn't cost €1.99 but €199. To catch these changes I rely on my transformation step being as strict as possible with its inputs.

You could probably add some automated checks to avoid syncing changes to prices/products if a sanity check fails, e.g. no price should change by more than 100%, and the number of active products shouldn't change by more than 20%.
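A rough sketch of what such a gate could look like, assuming a simple product-id-to-price snapshot; the thresholds are just the example numbers above and would need tuning:

```typescript
// Sanity gate between "scraped" and "synced": reject suspicious snapshots.
type Snapshot = Map<string, number>; // product id -> price

function looksSane(previous: Snapshot, current: Snapshot,
                   maxPriceJump = 1.0, maxCountDrift = 0.2): boolean {
  // Reject the sync if the catalogue size swings too much...
  const countDrift = Math.abs(current.size - previous.size) / previous.size;
  if (countDrift > maxCountDrift) return false;

  // ...or if any individual price more than doubles or halves.
  for (const [id, price] of current) {
    const old = previous.get(id);
    if (old !== undefined && Math.abs(price - old) / old > maxPriceJump) {
      return false;
    }
  }
  return true;
}
```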


Sanity checks in programming are underrated: not only are they cheap performance-wise, they catch bugs early that would otherwise poison the state.


Yeah, I thought about that, but I've seen cases where a product jumped more than 100%.

I use this kind of heuristic to check whether a scrape was successful, by checking that the number of products scraped today is within ~10% of the average of the last 7 days or so.


The hard thing is not scraping, but getting around the increasingly sophisticated blockers.

You'll need to constantly rotate residential proxies (highly rated ones) and make sure not to exhibit data-scraping patterns. Some supermarkets don't show the network requests in the network tab, so you cannot just grab that API response.

Even then, MITM-ing the mobile app (to see the network requests and data) will also get blocked without decent cover-ups.

I tried, but realised it isn't worth it due to the costs and constant dev work required. In fact, some of the supermarket price comparison services just have (cheap labour) people scrape it manually.


I wonder if we could get some legislation in place to require that they publish pricing data via an API so we don't have to tangle with the blockers at all.


Perhaps in Europe. Anywhere else, forget about it.


I'd prefer that governments enact legislation that prevents discriminating against IP addresses, perhaps under net neutrality laws.

For anyone with some clout/money who would like to stop corporations like Akamai and Cloudflare from unilaterally blocking IP addresses, the way that works is you file a lawsuit against the corporations and get an injunction to stop a practice (like IP blacklisting) during the legal proceedings. IANAL, so please forgive me if my terminology isn't quite right here:

https://pro.bloomberglaw.com/insights/litigation/how-to-file...

https://www.law.cornell.edu/wex/injunctive_relief

Injunctions have been used with great success for a century or more to stop corporations from polluting or destroying ecosystems. The idea is that since anyone can file an injunction, that creates an incentive for corporations to follow the law or risk having their work halted for months or years as the case proceeds.

I'd argue that unilaterally blocking IP addresses on a wide scale pollutes the ecosystem of the internet, so can't be allowed to continue.

Of course, corporations have thought of all of this, so have gone to great lengths to lobby governments and use regulatory capture to install politicians and judges who rule in their favor to pay back campaign contributions they've received from those same corporations:

https://www.crowell.com/en/insights/client-alerts/supreme-co...

https://www.mcneeslaw.com/nlrb-injunction/

So now the pressures that corporations have applied on the legal system to protect their own interests at the cost of employees, taxpayers and the environment have started to affect other industries like ours in tech.

You'll tend to hear that disruptive ideas like I've discussed are bad for business from the mainstream media and corporate PR departments, since they're protecting their own interests. That's why I feel that the heart of hacker culture is in disrupting the status quo.


Thankfully I'm not there yet.

Since this is just a side project, if it starts demanding too much of my time too often I'll just stop it and open both the code and the data.

BTW, how could the network request not appear in the network tab?

For me the hardest part is to correlate and compare products across supermarkets


If they don't populate the page via Ajax or network requests, i.e. it's rendered server side, then no requests for supermarket data will appear.

See Next.js server-side rendering; I believe they mention that as a security benefit in their docs.

In terms of comparison, most names tend to be the same, so a similarity search within the same category matches well enough.


Couldn't you use OCR and simply take an image of the product list? Not ideal, but difficult or impossible to track depending on your method.


You'll get blocked before even seeing the page most times.


Crowdsource it with a browser extension


It would be nice to have price transparency for goods. It would make processes like this much easier to track by store and region.

For example, compare the price of oat milk at different zip codes and grocery stores. Additionally track “shrinkflation” (same price but smaller portion).

On that note, it seems you are tracking price, but are you also checking the cost per gram (or ounce)? A manufacturer or store could keep the price the same but offer less to the consumer. I wonder if your tool would catch this.


I do track the price per unit (kg, lt, etc.) and I was a bit on the fence about whether I should show and graph that number instead of the price someone would pay at the checkout, but I opted for the latter to keep it more "familiar" to the prices people see.

Having said that, it's definitely something I could add, and it would show when shrinkflation occurred, if any.


Grocers not putting per unit prices on the label is a pet peeve of mine. I can’t imagine any purpose not rooted in customer hostility.


In my experience, grocers always do include unit prices…at least in the USA. I’ve lived in Florida, Indiana, California, and New York, and in 35 years of life, I can’t remember ever not seeing the price per oz, per pound, per fl oz, etc. right next to the total price for food/drink and most home goods.

There may be some exceptions, but I’m struggling to think of any except things where weight/volume aren’t really relevant to the value — e.g., a sponge.


What they often do is put different units on the same type of good. Three chocolate bars? One will be in oz, one in lbs, one in "per unit."

They're all labelled, but it's still customer-hostile to create comparison fatigue.


This is such a shame; anywhere this is mandated, they should mandate it by mass, and for medicine/vitamins by mass of active ingredient.


Worse, I've seen CVS do things like place a 180-count package of generic medication next to an identically-sized 200-count package of the equivalent name brand, with the generic costing a bit less, but with a slightly higher unit price due to the mismatched quantities.


In Canada I think they are legally required to, but sometimes it can be frustrating when they don't compare like units: one product will be priced per gram or per 100 grams, and another per kg. I've found with online shopping, the unit prices don't take into account discounts and sale prices, which makes it harder to shop sales (in store seems to be better for this).


I doubt it. Seems totally optional here where I am in BC.


I live in BC; it's common to not see unit pricing.


Or when they change what unit to display so you can’t easily cross compare.


It's required by law in Australia, which is nice


Imagine mandating transparent cost of goods pricing. I'd love to see farmer was paid X, manufacturer Y, and grocer added Z.


We have been doing this for the Swedish market for more than 8 years. We have a website, https://www.matspar.se/, where customers can browse all the products of all major online stores, compare prices and add the products they want to buy to their cart. At the end of the journey the customer can compare the total price of that cart (including shipping fee) and export the cart to the store they want to order from.

I'm also one of the founders and the current CTO, so there's been a lot of scraping and maintaining over the years. We scrape over 30 million prices daily.


On the business side, what's your business model and how do you generate revenue? What are the longer-term goals?

(Public data shows the company has a revenue of ≈400k USD and 6 employees: https://www.allabolag.se/5590076351/matspar-i-sverige-ab)


We sell price/distribution data about the products we scrape. We also run some ads and have affiliate deals.

The insight I can share is that the main (tech) goal is to make the product more user-friendly and more aligned with customer needs, as it has many pain points and we have gained some insights into the preferred customer journey.


Do you have a technical writeup of your scraping approach? I'd love to read more about the challenges and solutions for them.


Unfortunately no, but I can share some insights that I hope are of value:

- Tech: Everything is hosted in AWS. We use Golang in Docker containers for the scraping. They run on ECS Fargate spots when needed, triggered by a cron job. The scraping result is stored as parquet in S3 and processed in our RDS PostgreSQL. We need to be creative and have methods to identify that a particular product A in store 1 is the same as product A in store 2, so they are mapped together. Sometimes it needs to be verified manually. The data that is of interest for the user/site is indexed into Elasticsearch.

Things that might be of interest:

- We always try to avoid parsing the HTML and instead call the sites' APIs directly to reduce scraping time. We also try to scrape the category listings to get multiple prices in one request; this can reduce the total requests from over 100,000 to maybe fewer than 1,000 (see the sketch below).

- We also try to avoid scraping the sites during peak times, and we respect their robots.txt. We add some delay to each request. The scrapes are usually done during the night/early morning.

- The main challenge is that stores can redesign or make changes that break our scrapers, so we need to be fast at adapting to them.

- Another major hidden challenge is that the stores have different prices for the same product depending on your zip code, so we have ways of identifying the stores' different warehouses and which zip codes belong to which warehouse, and we do a scrape per warehouse. A store might have 5 warehouses, so we need to scrape it 5 times with different zip codes.

There is much more, but I hope that gave you some insight into the challenges and some of the solutions!
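As a rough illustration of the category-listing idea mentioned above, a sketch along these lines; the endpoint and response shape are hypothetical:

```typescript
// One paginated category-listing call returns many prices at once,
// instead of fetching each product page individually.
interface CategoryListing {
  products: { id: string; name: string; price: number }[];
  nextPage?: string;
}

async function scrapeCategory(baseUrl: string, categoryId: string) {
  const prices: { id: string; price: number }[] = [];
  let url: string | undefined = `${baseUrl}/api/categories/${categoryId}?page=1`;
  while (url) {
    const res = await fetch(url);                    // one request, many prices
    const listing = (await res.json()) as CategoryListing;
    prices.push(...listing.products.map(p => ({ id: p.id, price: p.price })));
    url = listing.nextPage;
    await new Promise(r => setTimeout(r, 1000));     // be polite between pages
  }
  return prices;
}
```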


Interesting stuff, thanks for the reply!

Do you run into issues where they block your scraping attempts, or are they quite relaxed about this? Circumventing bot detection often forces us to use Puppeteer so we can fully control the browser, but that carries quite a heavy cost compared to a simple HTTP requester.


We have been blocked a couple of times over the years; usually using a proxy has been enough. We try to reach out to the stores and establish a friendly relationship. The reception has been mixed depending on which store we are talking to.


I'm unfamiliar with the parquet format and trying to understand - are you storing the raw scraped data in that format or are you storing the result of parsing the scraped data?


We store the result of the parsed scrape as parquet. I would advise storing the raw data as well, in a different S3 bucket. The database should only have the data it needs and not act as storage.


Have the sites tried to shut you down?


We received some harsh words at the start, but everything we are doing is legal and by the book.

We try to establish good relationships with the stores, as customers don't always focus on the price; sometimes they want a specific product. We are helping both the stores and the customers find each other. We have sent millions of users to the stores over the years (not unique, of course, as there are only 9 million people living in Sweden).


I used to price track when I moved to a new area, but now I find it way easier to just shop at 2 markets or big box stores that consistently have low prices.

In Europe, that would probably be Aldi/Lidl.

In the U.S., maybe Costco/Trader Joe's.

For online, CamelCamelCamel/Amazon. (for health/beauty/some electronics but not food)

If you can buy direct from the manufacturer, sometimes that's even better. For example, I got a particular brand of soap I love at the soap's wholesaler site in bulk for less than half the retail price. For shampoo, buying the gallon size direct was way cheaper than buying from any retailer.


> In the U.S., maybe Costco/Trader Joe's.

Costco/Walmart/Aldi in my experience.

Trader Joe's is higher quality, but generally more expensive.


Walmart is the undisputed king of low prices, and honestly, in my experience the quality of their store-brand stuff is pretty damn solid - and usually about half the price of comparable products. I've been living off their Greek yogurt for a while now. It's great.


I've found Sam's Club beats Costco in some areas, but for some items Costco absolutely undercuts them like crazy. Cat litter at Sam's is twice the price when not on sale.

I pretty much exclusively shop at Aldi/Walmart as they have the best prices overall. Kroger-owned and Albertsons-owned stores are insanely overpriced. Target is a good middle ground, but I can't stand shopping there now with everything getting locked up.


Trader Joe's also only carries Trader Joe's-branded merchandise, aside from the produce. So if you're looking for something in particular that isn't a TJ item, you won't find it there.


Occasionally you can get the same Trader Joe’s private label products rebranded as Aldi merchandise for even cheaper at Aldi.


You can find Aldi stores in the USA, but they are regional. Trader Joe's is owned by the same family as Aldi, and until recently (the past 10 years) you wouldn't see them in the same areas.


I'd usually associate the term "regional" with chains like Meijer, Giant Eagle, and Winn-Dixie.

With 2,392 stores in 38 states plus DC[1], I'm not sure Aldi US qualifies.

[1] https://stores.aldi.us


One problem that the author notes is that so much rendering is done client side via javascript.

The flip side to this is that very often you find that the data populating the site is in a very simple JSON format to facilitate easy rendering, ironically making the scraping process a lot more reliable.


Initially that's what I wanted to do, but the first supermarket I did sends back HTML rendered on the server side, so I abandoned this approach for the sake of "consistency".

Lately I've been thinking of biting the bullet and Just Doing It, but since it's working I'm a bit reluctant to touch it.


For your purposes scraping the user-visible site probably makes the most sense since in the end, their users' eyes are the target.

I am typically doing one-off scraping and for that, an undocumented but clean JSON api makes things so much easier, so I've grown to enjoy sites that are unnecessarily complex in their rendering.


Ah, I love this. Nice work!

I really wish supermarkets were mandated to post this information whenever the price of a particular SKU updated.

The tools that could be built with such information would do amazing things for consumers.


Thanks!

If Greece's case is anything to go by, I doubt they'd ever accept that as it may bring to light some... questionable practices.

At some point I need to deduplicate the products and plot the prices across all 3 supermarkets on the same graph as I suspect it will show some interesting trends.


fyi, I posted this on /r/greece


Thanks!


As someone who actively works on these kinds of systems, it's a bit more complicated than that. For the past few years we have been migrating from an old system from the '80s designed for LAN use only to a cloud-based item catalogue system that finally allows us to easily make pricing info more available to consumers, such as through an app.


[flagged]


no - UPCs are Universal for a reason.

the only thing that is needed is to be able to query the UPC for the price at the location.

The only way to scan the UPC for a price per location is... at the register...

Instead - what we need is an app where every single person uploads their receipt.

Several things here - have AI scan the receipts for the same item from the same [VARIABLE] tree and look for gouging, fraud, misprints etc.

How much is that spaghetti in the ghetto vs the grotto?

etc.

If only we could have every single receipt from EVERY supplier include a QR code linking to a digital JSON version of that receipt, kept scannable for as long as the return policies/laws allow.

--

I did this for cannabis labels for a company a few years ago: The cannabis had to have a publicly available PDF of the lab results for chemical testing.

I made a label with a tiny.url for each product and dispensary that led to the lab's PDF posting of the results.

Then when the QR was scanned, you could see which products from which dispensaries were checked against the lab results, and also report the sales location for them....

(There is an amazing labeling software that's free from Seagull Scientific, called BarTender (as in Barcodes):

[0] https://www.seagullscientific.com/software/starter/


Scraping tools have become more powerful than ever, but bot restrictions have become equally more strict. It's hard to scrape reliably under any circumstance, or even consistently without residential proxies.


When I first started, there were a couple of instances where my IP was blocked - despite being a residential IP behind CGNAT.

I then started randomising every aspect of the scraping process that I could: the order in which I visited the links, the sleep duration between almost every action, etc.

As long as they don't implement a strict fingerprinting technique, that seems to be enough for now
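A minimal Playwright sketch of that kind of randomisation (illustrative only; the URLs and timings are placeholders):

```typescript
// Shuffle the visit order and jitter the sleeps so each run looks different.
import { chromium } from "playwright";

const randomDelay = (minMs: number, maxMs: number) =>
  minMs + Math.random() * (maxMs - minMs);

// Fisher-Yates shuffle so pages are visited in a different order each run.
function shuffle<T>(items: T[]): T[] {
  const a = [...items];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

async function scrape(categoryUrls: string[]) {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  for (const url of shuffle(categoryUrls)) {
    await page.goto(url);
    // ... extract products here ...
    await page.waitForTimeout(randomDelay(2_000, 10_000)); // human-ish pause
  }
  await browser.close();
}
```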


This reminds me a bit of a meme that said something along the lines of "I don't want AI to draw my art, I want AI to review my weekly grocery shop, work out which combinations of shops save me money, and then schedule the deliveries for me."


Something I was talking over with a friend a while ago was along these lines.

You could set a list of various meals that you like to eat regularly - a list of, say, 20 meal options. The app then fetches the pricing for all the ingredients and works out which meals are the best value that week.

You kind of end up with a DIY HelloFresh / meal in a box service.


Yes, that would work.

"Dave, the cheapest meals for you this week are [LIST OF DINNERS]. Based on your preferred times, deliveries from Waitrose, Tesco and Sainburys are turning up at 7pm on Monday. Please check you still have the following staples in stock [EG pasta, tinned tomatoes etc]".


Ha, you can't imagine how many times I've thought of doing just that - it's just that it's somewhat blocked by other things that need to happen before I even attempt to do it


> My CI of choice is [Concourse](https://concourse-ci.org/) which describes itself as "a continuous thing-doer". While it has a bit of a learning curve, I appreciate its declarative model for the pipelines and how it versions every single input to ensure reproducible builds as much as it can.

What's the thought process behind using a CI server - which I thought was mainly for builds - for what is essentially a data pipeline?


Well, I'm just thinking of Concourse the same way it describes itself: "a continuous thing-doer".

I want something that will run some code when something happens. In my case that "something" is a specific time of day. The code will spin up a server, connect it to Tailscale, run the 3 scraping jobs, then tear down the server and parse the data. Then another pipeline runs that loads the data and refreshes the caches.

Of course I'm also using it for continuously deploying my app across 2 environments, for its monitoring stack, for running terraform, etc.

Basically it runs everything for me so that I don't have to.


I'm building something similar for 7 grocery vendors in Canada and am looking to talk with others who are doing this - my email is in my profile.

One difference: I'm recording each scraping session as a HAR file (for proving provenance). mitmproxy (mitmdump) is invaluable for that.


Very cool! I did something similar in Canada (https://grocerytracker.ca/)


Similar for Austria: https://heisse-preise.io


Love your site! It was a great source of inspiration with the amount of data you collect.

I did the same and made https://grocerygoose.ca/

Published the API endpoints that I “discovered” to make the app https://github.com/snacsnoc/grocery-app (see HACKING.md)

It’s an unfortunate state of affairs when devs like us have to go to such great lengths to track the price of a commodity (food).


Was looking for one in Canada. Tried this out and it seems like some of the data is missing for where I live (Halifax). Got an email I can hit you up at? Mine's in my HN profile - couldn't find yours on HN or your site.


For sure, just replace the first dot in the url from my profile with an @


Oh nice!

A thorny problem in my case is that the same item is named in 3 different ways across the 3 supermarkets, which makes it very hard and annoying to do a proper comparison.

Did you have a similar problem?


I have built a similar system for myself, but since it's small scale I just have "groups" of similar items that I manually populate.

I have the additional problem that I want to compare products across France and Belgium (the Dutch-speaking side), so there is no hope at all of grouping products automatically. My manual system allows me to put together, say, the 250g and 500g packaging of the same butter, or two of the butters that I like to buy, so I can always see easily which one I should get (it's often the 250g that's cheaper by weight these days).

Also the 42,000 or so different packagings for Head and Shoulders shampoo: 250ml, 270ml, 285ml, 480ml, 500ml (the 285ml is usually cheapest). I'm pretty sure they do it on purpose so each store doesn't have to price-match the others, because it's a "different product".


Absolutely! It’s made it difficult to implement some of the cross-retailer comparison features I would like to add. For my charts I’ve just manually selected some products, but I’ve also been trying to get a “good enough but not perfect” string comparison algorithm working.


Would maintaining a map of products, product_x[supermarket], with 2-3 values work? I don't suspect supermarkets would be very keen to change the names (but they might play other dirty games).

I am thinking of doing the same thing for Linux packages in Debian and Fedora.


Ah yes.

My approach so far has been to first extract the brand names (which are also not written the same way, for some fcking reason!), update the strings accordingly, and then compare the remainder.

If they have a high similarity (e.g. >95%) they can be merged automatically, and anything between 75% and 95% can be reviewed manually.
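A rough sketch of that thresholding using a simple bigram (Sørensen-Dice) similarity; the 95%/75% cut-offs are the ones above, everything else is illustrative:

```typescript
// Bigram similarity between two (brand-stripped) product names.
function bigrams(s: string): Set<string> {
  const clean = s.toLowerCase().replace(/[^a-z0-9]+/g, " ").trim();
  const grams = new Set<string>();
  for (let i = 0; i < clean.length - 1; i++) grams.add(clean.slice(i, i + 2));
  return grams;
}

function similarity(a: string, b: string): number {
  const ga = bigrams(a), gb = bigrams(b);
  if (ga.size === 0 && gb.size === 0) return 1;
  let overlap = 0;
  for (const g of ga) if (gb.has(g)) overlap++;
  return (2 * overlap) / (ga.size + gb.size);
}

function classifyMatch(nameA: string, nameB: string): "merge" | "review" | "distinct" {
  const score = similarity(nameA, nameB);
  if (score > 0.95) return "merge";   // safe to merge automatically
  if (score > 0.75) return "review";  // queue for manual review
  return "distinct";
}
```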


I am not an expert by any means, but maybe using some LLMs or a sentence transformer could help do the job here?


I gave it a very quick try with ChatGPT, but wasn't very impressed with the results.

Granted it was around January, and things may have progressed...

(But then again, why take the easy approach when I can waste a few afternoons playing around with string comparisons?)


Excellent work.


Nice article, enjoyed reading it. I’m Pier, co founder of https://Databoutique.com, which is a marketplace for web scraped data. If you’re willing to monetize your data extractions, you can list them on our website. We just started with the grocery industry and it would be great to have you on board.


This looks like a really cool website, but my only critique is: how are you verifying that the data is actually real and not just generated randomly?


Do you have data on which data is in higher demand? Do you keep a list of frequently-requested datasets?


In the US, retail businesses offer individualized and general coupons via their phone apps. I wonder if this pricing can be tracked, as it results in significant differences.

For example, I recently purchased fruit and dairy at Safeway in the western US, and after I had everything I wanted, I searched each item in the Safeway app, and it had coupons I could apply for $1.5 to $5 off per item. The other week, my wife ran into the store to buy cream cheese. While she did that, I searched the item in the app, and “clipped” a $2.30 discount, so what would have been $5.30 to someone that didn’t use the app was $3.

I am looking at the receipt now, and it is showing I would have spent $70 total if I did not apply the app discounts, but with the app discounts, I spent $53.

These price obfuscation tactics are seen in many businesses, making price tracking very difficult.


I wrote a Chrome extension to help with this. It clips all the coupons so you don't have to do individual searches. It has resulted in some wild surprise savings when shopping: www.throwlasso.com


This looks amazing. Do you have plans to support Firefox and other browsers?


It's published as a Firefox extension and you should be able to find it by searching for Lasso but I think I need to push the latest version and update the website. Thanks for the reminder. Which other browsers would you like?


Personally I only care about Firefox, but I think it's pretty standard to support Firefox, Chromium, and Safari.

Tried it out and it works well after I figured out how to start clipping, but it didn't work for a couple of sites I tried, mostly the financial ones like Chase and PayPal. Looking forward to the update!


Looks like this is the Firefox extension: https://addons.mozilla.org/en-US/firefox/addon/lasso-coupon-...


Ha! I have the same thing as a bookmarklet for specific sites. It’s fun to watch it render the clicks.


Could you share the bookmarklet?


It's specific to each market but took all of 3 min to write. It's just a [...document.querySelectorAll('a.class-for-action')].map(x => x.click())

No smarts. No rate limiting. Just barrage everything with a click.


Wow! This is amazing, thank you. I usually use Safari, but will give it a try.


Nice job getting through all this. I kind of enjoy writing scrapers and browser automation in general. Browser automation is quite powerful and under explored/utilized by the average developer.

Something I learned recently, which might help your scrapers, is Playwright's ability to sniff the network calls made by the browser (basically, a programmatic API to the browser's Network tab).

The boost is that you let the website/webapp make its API calls and then the scraper focuses on the data (rather than waiting for the page to render DOM updates and scraping those).

This approach falls apart if the page is doing server side rendering as there are no API calls to sniff.
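A minimal sketch of that approach using Playwright's response events (the URL filter and response shape are placeholders):

```typescript
// Let the page make its own XHR/fetch calls, then grab their JSON payloads.
import { chromium } from "playwright";

async function sniffProducts(pageUrl: string) {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  const products: unknown[] = [];

  // Listen for the app's own API responses instead of parsing the DOM.
  page.on("response", async (response) => {
    if (response.url().includes("/api/products") && response.ok()) {
      products.push(await response.json());
    }
  });

  await page.goto(pageUrl, { waitUntil: "networkidle" });
  await browser.close();
  return products;
}
```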


...or worse, if there _is_ an API call but the response is HTML instead of a json


Playwright is basically necessary for scraping nowadays, as the browser needs to do a lot of work before the web page becomes useful/readable. I remember scraping with HTTrack back in high school and most of the sites kept working...

For my project (https://frankendash.com/), I also ran into issues with dynamically generated class names which change on every site update, so in the end I just went with saving a cropped area of the website as an image and showing that.


HTTrack was fantastic, and it still was a couple of years ago when I used it for a small project.


A few years ago, we had a client and built a price-monitoring app for women's beauty products. They had multiple marketplaces, and like someone mentioned before, it was tricky because many products come in different sizes and EANs, and you need to be able to match them.

We built a system for admins so they can match products from Site A with products from Site B.

The scraping part was not that hard. We used our product https://automatio.co/ where possible, and where we couldn't, we built some scrapers from scratch using simple cURL or Puppeteer.

Thanks for sharing your experience, especially since I haven't used Playwright before.


Cloudflare Workers has a Browser Rendering API.


It's pretty good actually. Used it in a small scraping site and it worked without a hitch.


I did something very similar, but for the price of wood from sellers here in the UK, and instead of Playwright, which I'd never heard of at the time, I used Node-RED.

You just reminded me, it's probably still running today :-D


> I went from 4vCPUs and 16GB of RAM to 8vCPUs and 16GB of RAM, which reduced the duration by about ~20%, making it comparable to the performance I get on my MBP. Also, because I'm only using the scraping server for ~2h the difference in price is negligible.

Good lesson in cloud economics. Below a certain threshold you get a roughly linear performance gain from a more expensive instance type. The total spend stays essentially the same, but you save wall-clock time by running the same workload on a pricier machine for a shorter period.
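To make it concrete (the hourly prices below are made up for illustration):

    4 vCPU at $0.04/h, job takes 2.5 h  -> $0.10
    8 vCPU at $0.08/h, job takes 1.25 h -> $0.10

Same spend, half the wall-clock time, assuming a perfectly linear speedup. The ~20% reduction reported above is sub-linear, so the bigger box actually costs somewhat more per run, but at ~2 h of use per day the absolute difference is negligible, as the author notes.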


Great article and congrats on making this! It would be great to have a chat if you like, because I’ve built Zuper, also for Greek supermarkets, which has similar goals (and problems!)


We should mutualize scraping efforts, creating a sort of Wikipedia of scraped data. I bet a ton of people and cool applications would benefit from it.


Haha all we have to do is agree on the format, right?


We already did. The format supports attaching related content, such as the scraped info, to the archive itself. So you get your data along with the means to regenerate it yourself if you want something different.

https://en.m.wikipedia.org/wiki/WARC_(file_format)
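Roughly, a WARC file is just a sequence of records like these (fields abridged; the metadata record is a hypothetical example of attaching extracted prices to the capture they came from via WARC-Concurrent-To):

    WARC/1.1
    WARC-Type: response
    WARC-Record-ID: <urn:uuid:aaaaaaaa-...>
    WARC-Target-URI: https://example-supermarket.test/product/123
    WARC-Date: 2024-09-01T06:00:00Z
    Content-Type: application/http;msgtype=response
    Content-Length: ...

    [raw HTTP response: status line, headers, HTML body]

    WARC/1.1
    WARC-Type: metadata
    WARC-Concurrent-To: <urn:uuid:aaaaaaaa-...>
    WARC-Date: 2024-09-01T06:00:00Z
    Content-Type: application/json
    Content-Length: ...

    {"name": "Chocolate 100g", "price": 1.89, "currency": "EUR"}

So anyone holding the archive can re-extract from the raw response if they disagree with your parsing.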


Honestly I don't think that matters a lot. Even if all sites were scraped in a different format, the collection would still be insanely useful.

The most important part is being able to consistently scrape every day or so for a long time. That isn't easy.


I heard that some e-commerce sites will not block scrapers, but instead poison the data shown to them (e.g. subtly wrong prices). Does anyone know more about this?


I never poisoned data, but I have implemented systems where clients who made requests too quickly got served data from a snapshot that only updated every 15 minutes.
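Not the same system, but a minimal sketch of that idea in TypeScript/Express: remember when each client last hit you and quietly serve the fast ones a snapshot that only refreshes every 15 minutes (the threshold, endpoint, and data source are made up):

    import express from 'express';

    const app = express();

    // Snapshot of the "expensive" data, refreshed on a 15-minute timer.
    let snapshot: unknown = null;
    async function fetchLiveData(): Promise<unknown> {
      return { updatedAt: new Date().toISOString() }; // stand-in for the real query
    }
    async function refreshSnapshot() {
      snapshot = await fetchLiveData();
    }
    refreshSnapshot();
    setInterval(refreshSnapshot, 15 * 60 * 1000);

    // Remember when each client last made a request.
    const lastSeen = new Map<string, number>();
    const MIN_INTERVAL_MS = 2000; // faster than this and you get the snapshot

    app.get('/prices', async (req, res) => {
      const key = req.ip ?? 'unknown';
      const now = Date.now();
      const tooFast = now - (lastSeen.get(key) ?? 0) < MIN_INTERVAL_MS;
      lastSeen.set(key, now);

      // Quick clients silently get stale-but-valid data; no error, no block.
      res.json(tooFast ? snapshot : await fetchLiveData());
    });

    app.listen(3000);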


This HN post had me playing around with Key Food's website. A lot of information is wrapped up in a cookie, but it looks like there isn't too much JavaScript rendering.

When I hit the URLs with curl, without a cookie, I get a valid-looking page, but it's just a hundred listings for "Baby Bok Choy." Maybe a test page?

After a little more fiddling, the server just responded with an empty response body. So, it looks like I'll have to use browser automation.


Yeah, by far the most reliable way of preventing bots is to silently poison the data. The harder you try to fight them in a visible fashion, the harder they become to detect. If you block them, they just come back with a hundred times as many IP addresses and user-agent fingerprints.


Hey, thanks for creating https://pricewatcher.gr/en/, very much appreciated.

Nice blog post and very informative. Good to read that it costs you less than 70€ per year to run this; I hope the big supermarkets don't block it somehow.

Have you thought of monetizing this? Perhaps with ads from the 3 big supermarkets you scrape ;-)


Thanks for your kind words!

I haven't thought about monetizing it, because honestly I don't see anything about it that could be monetized in its current form.


Looks great. Comparisons over more than 30 days would be interesting, or even customizable ranges; it should be fast enough with a DuckDB backend.


When you click on a product you get its full price history by default.

I did consider adding 3- and 6-month buttons, but for some reason I decided against it; I don't remember why. It wasn't performance, because I'm heavily caching everything, so it wouldn't have made a difference. Maybe aesthetics?


Can someone name the South American country that has a government price-comparison website? Listing all products was required by law.

Someone showed me this a decade ago. The site had many obvious issues, but it did list everything. If I remember correctly, it was started to stop merchants pricing things based on who is buying.

I forget which country it was.



> My first thought was to use AWS, since that's what I'm most familiar with, but looking at the prices for a moderately-powerful EC2 instance (i.e. 4 cores and 8GB of RAM) it was going to cost much more than I was comfortable to spend for a side project.

Yep, AWS is hugely overrated and overpriced.


If you were thinking of making a UK supermarket price comparison site, IIRC there's a company who owns all the product photos, read more at https://news.ycombinator.com/item?id=31900312


I would be curious if there were a price difference between what is online and physically in the store.


In Denmark there often is, with things like localised sales the 4-8 times a year a specific store celebrates its birthday or similar. You can scan their PDF brochures, but you would need image recognition for most of them, and well-trained models at that, since they often alter their layouts and list prices differently.

The biggest sales come from the individual store “close to expiration” sales where items can become really cheap. These aren’t available anywhere but the stores themselves though.

Here I think the biggest challenge might be the monopoly the supermarket chains have on the market. We basically have two major corporations with various brands. They are extremely similar in their pricing, and even though there are two low-price competitors, these don't seem to affect the competition between the two major corporations at all. What is worse is that one of the two is "winning", meaning we're heading more and more toward what will basically be a true monopoly.


Next step: monitoring the updates to those e-ink shelf edge labels that are starting to crop up.


The few random checks that I did on a few products as I was shopping didn't show any difference.

Either I was lucky or they don't bother, who knows


I live in the Netherlands, where we are blessed with a price comparison website (https://tweakers.net/pricewatch/) for gadgets.


> The data from the scraping are saved in Cloudflare's R2 where they have a pretty generous 10GB free tier which I have not hit yet, so that's another €0.00 there.

Wonder how the data from R2 is fed into the frontend?


This is great! It would be even better if the website gave a summary of which shop was actually cheapest (e.g. based on a basket of comparable goods that all retailers stock).

Although that might be hard to do with messy data.


I've worked with similar solutions for decades (for a completely different need), and in the end web changes made the solution unscalable. A fun idea to play with, but with too many error scenarios.


Some stores don’t have an interactive website but instead send out magazines to your email with news for the week.

How would one scrape those? Anyone experienced?


An IMAP library to dump the attachment, pandoc to convert it to HTML, then a DOM library to parse it statically.

Likely easier than website scraping.
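A rough sketch of that pipeline in TypeScript, assuming the attachment has already been dumped to disk by the IMAP step and is something pandoc can read (e.g. docx); the file name and table selector are layout-specific guesswork:

    import { execFileSync } from 'node:child_process';
    import { JSDOM } from 'jsdom';

    // 1. Attachment already saved by your IMAP client of choice.
    const attachmentPath = 'weekly-offers.docx'; // hypothetical file name

    // 2. Let pandoc turn it into HTML.
    const html = execFileSync(
      'pandoc', ['-f', 'docx', '-t', 'html', attachmentPath],
      { encoding: 'utf8' },
    );

    // 3. Parse the HTML statically with a DOM library.
    const doc = new JSDOM(html).window.document;
    for (const row of doc.querySelectorAll('table tr')) {
      const cells = [...row.querySelectorAll('td')].map((td) => td.textContent?.trim());
      if (cells.length >= 2) {
        console.log({ product: cells[0], price: cells[1] });
      }
    }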


I'll try this approach, thanks! Most magazines I've noticed use a grid design, so my first thought was to somehow detect each square and then OCR the product name with its price.



What if you add all products to your shopping cart, save it as "favourites", and scrape that every other day?


You would still need a way to add all items and to check if there are new ones


> While the supermarket that I was using to test things every step of the way worked fine, one of them didn't. The reason? It was behind Akamai and they had enabled a firewall rule which was blocking requests originating from non-residential IP addresses.

Why did you pick Tailscale as the proxy solution vs scraping with something like AWS Lambda?


Didn't you answer your own question with the quote? It needs to originate from a residential IP address.
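Not the author's exact setup, but for reference: if you already have a machine on a residential connection (reachable over Tailscale or otherwise) running a small HTTP proxy such as tinyproxy, Playwright can route all browser traffic through it with its built-in proxy option. The address below is a placeholder:

    import { chromium } from 'playwright';

    (async () => {
      const browser = await chromium.launch({
        // e.g. the Tailscale IP of a box sitting on a home connection
        proxy: { server: 'http://100.64.0.42:8888' },
      });
      const page = await browser.newPage();
      await page.goto('https://example-supermarket.test/');
      // ...scrape as usual; requests now exit from the residential IP.
      await browser.close();
    })();

Lambda, by contrast, always egresses from datacenter IP ranges, which is exactly what that Akamai rule blocks.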


What about networking costs? Is it free in Hetzner?


Depends on the server.

Most have at least 20 TB of bandwidth included in the price, even the lowest $5/mo shared cpu machines. 20 TB is a gigantic amount unless you're serving videos or some such.

Some have unlimited bandwidth (I mean they are effectively limited by the speed of network connection but you don't pay for amount).


Anyone know of one of these for Spain?



