> When the reward of being dishonest is (seemingly) greater than the risk, no person or company is accountable: The entire system is at fault.
This dynamic is a major factor in many overarching issues today. In politics, it's so easy to get stuck on ideology, ethics or morality. But in practice, it comes down to incentive structure. And the carrot usually works better than the stick, for several reasons.
Consider the problem of tax evasion in retail. Greece is imposing fines on those who conduct a large enough ratio of their transactions in cash[0]. Taiwan solved this more elegantly in 1951, IMO, by introducing a lottery with a number printed on each receipt, thereby creating an incentive for consumers to ask for a receipt[1].
1. Have a way for buyers to report when a seller is bribing them to fake reviews. If the report turns out to be genuine, give the buyer a small reward ($5–$10).
2. Have Amazon occasionally pretend to bribe people to do fake reviews. If a user does the fake review, their account is banned for one month. If the user reports the fake review, they get the small reward.
Between the carrot of reporting fake reviews, and the stick of being banned for falling for the request, you'd "crowdsource" a lot of the issues.
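To make the scheme concrete, here's a toy model of the outcomes. The $5 reward and one-month ban come from the comment above; everything else (names, the "undetected" branch) is made up:

```python
# Toy model of the carrot-and-stick scheme for fake-review bribes.
# The reward amount and ban length come from the proposal above;
# the function and action names are invented for illustration.
REWARD_USD = 5
BAN_DAYS = 30

def resolve(user_action: str, is_sting: bool) -> str:
    """What happens to a user who is offered a fake-review bribe."""
    if user_action == "report":
        # Carrot: reporting pays out whether the bribe was real or a sting.
        return f"rewarded ${REWARD_USD}"
    if user_action == "accept" and is_sting:
        # Stick: falling for the platform's own sting gets you banned.
        return f"banned {BAN_DAYS} days"
    # Accepting a real bribe still has to be caught by other means.
    return "undetected"
```

The key property is in the first branch: reporting is always the dominant strategy, because the user can't tell a sting from a real bribe.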
> This still requires the platform to detect _genuine_ transgressions which is very difficult.
I had in mind situations where a seller contacts a buyer who has left a negative review, and offers free merchandise for changing the review. If it happens within Amazon's system, they should be able to verify it.
Obviously an attack you have to defend against is unscrupulous seller E posting fake review requests on Facebook (or wherever) "on behalf of" honest seller A, causing Amazon to ban / punish A in the rankings.
> Greece is imposing fines on those who conduct a large enough ratio of their transactions in cash
That's extremely unethical. But more importantly, doesn't it cause huge perverse incentives?
Correct me if I'm wrong, but the way this works is that you pretend the cash transactions didn't happen, and write the inventory off as lost or whatever. Or you buy the inventory for cash as well.
So if you have 50% cash and 50% card, you might report only 50% of the cash and reduce your reported gross income by 25%.
In other words, the legitimate but cash-eager businesses will have to pay fines. The frauds can boast about their 100% cashless ratio and pay no taxes as well as no fines.
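To make the 25% figure concrete (all numbers hypothetical):

```python
# Hypothetical shop with a 50/50 cash/card split; figures invented.
revenue_card = 50_000   # traceable, fully reported
revenue_cash = 50_000   # only half reported, rest written off

reported = revenue_card + revenue_cash * 0.5   # 75,000
true_total = revenue_card + revenue_cash       # 100,000
understatement = 1 - reported / true_total     # 0.25, i.e. 25%
```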
Taiwan's solution is brilliant! Incentivizing the customer to ask for the receipt instead of putting the onus on the store who is incentivized to not book the sale at all.
I hope policymakers will consider creative ideas like that instead of punishing consumers who use cash (still legal tender, for now) to conduct business.
There's a simpler solution: remove high-denomination notes from circulation and provide all citizens with free bank accounts. Eventually, just move to cashless transactions entirely.
The bigger problem in countries like Greece, though, is that the tax burden is immense; for example, the sales tax rate is 24%. This creates a huge incentive to avoid the tax. The revenue is then either wasted on corruption or spent on healthcare and pensions for the elderly - i.e. not services that the taxpayer can immediately or personally access.
Sales tax, company tax, and flat income tax rates around 10%, with no payroll taxes, seem like a better and more sustainable way to go. Flat taxes in Eastern Europe have actually seen tax payments and compliance go up, even with lower and non-progressive rates.
There seems to be a lot less gaming of the negative reviews than the positive ones. So on Amazon etc., I like to read the worst reviews and see if they talk me out of buying.
I suppose if this approach catches on, scammers will spend more time denigrating competitors than boosting their own products, and it'll become less effective.
I don't know if gaming negative reviews is less common than positive ones (I assume it is), but there are plenty enough fake negative reviews to cause me to stop paying any real attention to them too.
I use reviews for only one thing -- to find out about specific tips and tricks about using whatever the product is. I pay very little attention to how the reviewer actually rated the product.
Yeah, the entire ads industry exists because of information asymmetry, but what is new is how bad actors are using technology in these imperfect markets to abuse it.
Even in real life, you're never sure when someone recommends a product, be it a salesman in a brick and mortar store ("this one here is a lot better than that one there" - even at the same price, the two could mean different profits). With platforms like YouTube, you have at least minimal trust in a few YouTubers you follow regularly (and this literally means "a few"), but doing a random search means an 80% chance the reviewer got sponsored or at least received the product for free in exchange for a good review. Star systems on seller pages have their own sets of issues... usually people don't write reviews at all if the product is good, and complain only after it breaks (unless they're 'paid' for the review).
They can do what legacy retailers have done for ages, which is to not stock goods from fly-by-night sellers. Offer customers choice among established brands, and eschew becoming the Western version of DealExtreme, AliExpress or Wish.
I don't see fake reviews of TVs or headphones on Best Buy's site, and I've begun purchasing from brick and mortar establishments (buying online or doing instore pickup) more often now. Amazon's supply chain can't be trusted.
I'm less worried about possibly astroturfed reviews if the brand is something I've already bought from, like Samsung or Sennheiser, or a cheaper but otherwise reliable one like Anker.
> Even in real-life, you're never sure, when someone recommends a product, be it a salesman in a brick and mortar store...
This is real life too, but the difference is that the influencers' dishonesty is more damaging than the next-door salesman's, while at the same time the platforms afford those influencers more convenience with proportionally less accountability.
> What should we/they do?
Force on-demand transparency and information disclosure? I am not really sure, to be honest, but regulation might be one way to curb or limit the game to a few enterprising entities (which comes with its own downsides)?
Exactly. This problem is not new. The issue is that people feel like they should have easy access to truth and that is just not the case, ever. The best we can ask for is a good way to do research to try to discover what is true, which is what the internet is for and basically what we have now.
Because they're knowledgeable about their domain and want to make a sale, so want to push you towards a product that suits your needs? It's a positive-sum interaction.
If most salespeople did that, then there wouldn't be a problem. But that's not what most salespeople do -- they're much more likely to push you towards products that will earn them the greatest commission, or will help them hit whatever sales targets they have.
> Two months before the collapse of Bear Stearns, the first investment bank claimed by The Great Recession, The New York Times ran a story entitled, "Can banks self-regulate?" Clearly, the answer was no. Yet, in the face of new regulations that followed, the surviving big banks have grown bigger and more powerful than before. A similar situation may play out with big tech companies.
I turned skeptical when I read this plus the Upton Sinclair quote. Nothing about Amazon's marketplace or ad business or even fake reviews is even remotely close to the financial system pre-financial crisis. AAFG are not holding the bill for trillions in potential liabilities on highly leveraged positions in complex derivatives. They're operating a market that doesn't involve any credit whatsoever outside of payments. Seems like the author has something of an axe to grind rather than a rational assessment of the risk and problems.
I don't think it's good that this system is the way it is, but caveat emptor seems like a perfectly fine solution. I've become more skeptical about what I buy on Amazon, and I imagine most other consumers have done the same. If Amazon wants more business from me again, they can improve the quality of offerings.
How is an emptor supposed to caveat if the so-called review system is systemically dishonest?
Caveat emptor doesn't work, because Amazon has no incentive to improve review or product quality.
Bezos is rewarded whenever an item is sold. Most items won't be returned unless grossly faulty or misdescribed. Accurate reviews and more selective product filtering would decrease his earnings - a strong negative incentive, if you're someone like Bezos.
With current legislation, the only thing that might improve the situation for buyers is aggressive competition. As a de facto monopoly, Amazon doesn't need to worry about that.
Putting it crudely, e-commerce automates and amplifies scamming and dishonesty. There's limited accountability for sellers and virtually no accountability for the owner of the marketplace.
In the non-virtual world there are trading standards and other consumer protection mechanisms. They're not infallible, but they do at least exist.
The online equivalent would be some kind of blanket consumer-good-faith system which punished fake news, manipulated reviews, and substandard products.
Good luck getting that written and passed as coherent legislation with adequate enforcement.
Amazon isn't a de-facto monopoly by any reasonable definition. Most big box retailers have online shopping, and Walmart even offers the same delivery speeds in many areas.
One of the biggest long term risks for Amazon's retail business is that they don't really have a moat. They can beat people logistically, but eventually, other companies will catch up, and there's probably a diminishing returns effect on money poured into that. They have Amazon Prime, which was probably created to try to be a moat, but I think they've had more difficulty making that a moat over the long term than they may have initially hoped. Why would I pay for faster shipping if Wal-Mart pretty much offers it by default? Tacking on a lot of mediocre streaming services is a weird way to add value to that.
(AWS has a lock-in moat. Once you're in, you will find it harder and harder to leave over time. That's true of all the clouds, but it plays in favor of the biggest, which at the moment is AWS.)
If you look, you can kinda see a lot of flailing on Amazon's part over the past several years trying to figure out how to build a moat around their retail in a lot of ways; Amazon Basics, various games around Prime, trying to convince people to buy retail through Alexa which is hard-coded to Amazon, of course, etc. Almost everything the retail side is doing amounts to that in the end.
I don’t trust Amazon for buying important things. Like expensive electronics. Or things where the risk of counterfeits is too high. I’d rather buy from Target, and pay more, because I can get access to reliable customer support.
>Two months before the collapse of Bear Stearns, the first investment bank claimed by The Great Recession, The New York Times ran a story entitled, "Can banks self-regulate?" Clearly, the answer was no.
To nitpick on this quote further: recessions are the market correcting itself. A recession can mean that a bubble was growing in the market and it burst. In that sense, if banks are allowed to fail then they are self-regulating: the market decides which banks can survive and which can't. The downside is that due to how fractional reserve banking is set up and how the banks are interconnected that the failure of one can cause a cascade.
In regards to tech companies, I wonder if they're not simply overvalued. If the value of tech companies gets cut in half then I can see it causing quite a few problems.
There is no proof that a semi-capitalist market "corrects itself" in all cases, nor that a recession is even an event that brings "the market" (for some arbitrary quality) into alignment with some other arbitrary baseline or expectation.
As of the middle of 2019, Google continued to be inundated with fake business listings, some of them outright scams run by fraudsters who harm consumers. All of them are intended to harm local businesses by making them invisible next to the fake listings. I have not heard of Google finding a solution to this problem.
Some specific fields, like dentistry, seem to be flooded with SEO that is meant to prevent price-comparison research. I'm no expert in ad fraud, but I suspect the most polluted categories are the ones with very high click-through revenue.
While there are many things Google gets right, including fighting SEO where revenue is not much at stake, they have obvious perverse incentives to ignore these problems as well as the way fake reviews contribute to these problems.
We need a review site linked to hard identity + purchase verification + ability to see all other reviews by that reviewer (how many things do they review, in what categories, and are they potentially biased? Also, for complex products, do they have any authoritative credentials that might make them subject matter experts?) + ability for other reviewers to review reviewers, sort of like a peer rating system...
Unfortunately this would make the barrier to entry high enough that even for "qualified" individuals, people won't go through the effort without a separate incentive.
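As a sketch, the per-reviewer record such a system might keep could look like this (all class and field names are invented, not any real platform's schema):

```python
from dataclasses import dataclass, field

# Illustrative only: a minimal data model for the verified-review
# system described above. Every name here is hypothetical.
@dataclass
class Reviewer:
    identity_verified: bool                                 # hard identity check passed
    credentials: list[str] = field(default_factory=list)    # e.g. "audiologist" for headphones
    peer_ratings: list[int] = field(default_factory=list)   # ratings of this reviewer by others

    def reputation(self) -> float:
        """Average peer rating; 0.0 for reviewers nobody has rated yet."""
        if not self.peer_ratings:
            return 0.0
        return sum(self.peer_ratings) / len(self.peer_ratings)

@dataclass
class Review:
    reviewer: Reviewer
    purchase_verified: bool   # tied to a confirmed transaction
    rating: int               # e.g. 1-5 stars
    text: str
```

Even this minimal version shows the barrier-to-entry problem: every field is a verification step the reviewer has to sit through before writing a word.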
We can criticise the regulators as not doing enough and complain that they lack resources, but the reality is, they don’t need to compete with (and beat) tech giants, they only need teeth. Ideally the regulator (FTC or whatever) would just write some sensible laws (e.g. no fake reviews or no lies in ads), then burden the tech giants themselves with enforcing these laws... then investigate not necessarily non-compliance (i.e. evidence of fake reviews), but also lack of process, i.e. the inability to detect potential fake reviews... and of course impose humongous fines (% of worldwide revenue, a la GDPR but actual fines, not just threats).
This is exactly how things are currently done (DMCA, “sex work” search results), and it leads to more people being outraged by big tech companies acting as judge jury and executioner.
Only to people judging what "works" means with similar criteria as yourself.
I'm fairly certain GP does not share your criteria; ergo, your statement that "it works" degenerates to "it is, despite the protests of those opposed."
This is a failure caused by a lack of regulation, as human nature is to push boundaries in order to find out how far one can go.
Therefore, some of us will always take it too far until we hit a wall. This wall cannot be built from ethics or morals or any "self-regulation". It needs to be externally built, and well enforced.
It shocks me we don't have a place to read honest reviews.
Where professional reviewers who get paid will test things and give an honest opinion, anonymously so they can't be bribed, and you can go there to read what they say and see their ratings.
In the UK we have 'Which?', a private organisation that does not allow advertising in its magazine or on its website, to avoid conflicts of interest. I understand that their reviews are impartial. They prioritise features that do not always overlap with my own preferences, so their 'best buy' recommendations are not always aligned with my personal preference, BUT their reviews are very thorough and I purchase a membership whenever I plan to buy a significant item.
> Machines only follow the instructions humans tell it to follow.
I suggest that you read something about AI and ML. The "instructions" that go into these pipelines are many levels removed from the outputs. It's like saying that cells only follow the DNA instructions given to them - ignoring evolution, mutagens, exogenetics, etc. The original instructions don't tell anywhere near the whole story.
Is the opacity of these algorithms a problem? Yes, it surely is, precisely because the machines are not just following instructions in any meaningful sense.
Humans are still the prime movers who cultivate the parameters and models on which the model is trained, and more importantly decide whether or not the results are credible.
Thinking like yours is called Mathwashing; and it's a serious problem if we as a society are willing to accept it as a valid justification for dismissing culpability for our choices/actions.
Before flinging labels, you should have read the part where I said the opacity of the algorithms is a problem. The opacity/non-determinism definitely does not wash away responsibility, and in no way did I imply that it does. The problem is that now people end up being responsible for something they don't fully control or understand, and that's a sure recipe for disaster. Promoting further misunderstanding by putting your own words in others' mouths is not the solution.
Just because the instructions are convoluted and implemented by people who don't code the math themselves doesn't mean the outcomes are removed from the inputs. It's still a machine.
Heuristics use random number generators and produce random (non-deterministic) results, is that not following instructions written by programmers?
> is that not following instructions written by programmers
Not in any useful sense. Everything's deterministic with perfect knowledge, but nobody has perfect knowledge. The universe is still a machine too, and yet somehow manages to surprise us anyway. As soon as you inject non-replicable random numbers into the process, "following instructions" isn't entirely true any more.
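A small standard-library example of the point: the "instructions" below are fixed, but whether the output is replicable depends entirely on how the generator is seeded.

```python
import random

# A tiny randomized "heuristic": pick values at random, as a
# randomized algorithm might pick pivots. The code never changes;
# only the seeding does.
def pick(rng: random.Random, n: int = 3) -> list[int]:
    return [rng.randrange(100) for _ in range(n)]

# Seeded: deterministic and replicable -- "following instructions"
# in the strong sense. Same seed, same trace, every run.
seeded_a = pick(random.Random(42))
seeded_b = pick(random.Random(42))
assert seeded_a == seeded_b

# Seeded from OS entropy: every run differs, so nobody can predict
# this output from the source code alone, even though every
# instruction is known.
unseeded = pick(random.Random())
```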
Also, even if AI/ML are not being used to generate fake reviews, they sure as heck are being used to determine the search ranking of products stuffed with fake reviews. Amazon has created a game for less-scrupulous sellers to play, and even they don't know the rules.
I admit that I don't. I am guessing that such things are at present beyond AI/ML. An AI/ML-generated review I would expect to read like... well, like obviously-machine-generated text.
> Also, even if AI/ML are not being used to generate fake reviews, they sure as heck are being used to determine the search ranking of products stuffed with fake reviews.
How do you know that? They have a ranking algorithm, sure, but why do you think it's AI/ML (by normal definitions of AI/ML)?
Luckily products from well established companies are still free from fake reviews (on Amazon) by and large. As long as you avoid the hordes of 5 star products with very generic brand names that you have never heard of, you can still trust reviews to have some degree of accuracy.
Not to say that the suspiciously cheap product with near perfect reviews that always shows up at the top is always the wrong choice. I bought a knock off dremel for $20, and later ended up using it to cut quite a bit of 1.8in mild steel and it performed great. Only a little bit of smoking from the internals. On the other hand, I bought a cheap microphone from the same style of seller, and it was an unmitigated piece of garbage.
0: https://fortune.com/2019/12/16/greece-digital-economy-cashle...
1: https://en.wikipedia.org/wiki/Uniform_Invoice_lottery