"But reputation is useless as a hedge against the real nightmare of a setup like Ebay: the long con. It doesn’t cost much, nor does it take much work, to build up sleeper identities on Ebay, fake storefronts that sell unremarkable goods at reasonable prices, earning A+++ GREAT SELLER tickmarks, even for years, until one day, that account lists a bunch of high-value items on the service, pockets the buyers’ funds, and walks off."
That was the business plan of at least half the Bitcoin exchanges.
Of course, in both cases, that is a bug of the particular reputation system, not of reputation systems in general:
1) In the Ebay case, reputation can be conditioned on the type of items and the flow of money the merchant processes. Someone who is highly reputable for $50 transactions isn't necessarily so for $5,000 ones. You could also build "anomaly detection" into the system, which kicks in whenever a seller moves too quickly from $50 items to $5,000 items and either warns users or pro-rates the seller's reputation (if nothing else, a radical behavior change could mean the seller's account was compromised...).
2) In the Bitcoin exchange case, there is currently no real reputation score for Bitcoin exchanges as far as I know. But you could have a proof-of-stake-style system that requires exchanges to keep X% of their funds available, similar to reserve requirements for banks. You could also deal with an exchange in a reduced-risk way if, say, to convert $1,000 USD to BTC you sequentially post 1,000 $1 USD transactions and don't send the next dollar until the corresponding BTC amount clears on the network and is safely stored in your hardware or multi-server wallet.
Not saying that there aren't fundamental issues with reputation economies, but let's give the dystopia a fair trial and assume that the non-essential problems will be solved.
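The tier-conditioning and anomaly-detection idea in (1) can be sketched in a few lines. This is a toy illustration; the `jump_factor` threshold and the pro-rating rule are invented for the example, not any marketplace's actual algorithm:

```python
from statistics import median

def adjusted_reputation(reputation, past_sale_prices, new_price,
                        jump_factor=10.0):
    """Pro-rate a seller's reputation when a new listing is priced far
    above their historical range (a hypothetical heuristic)."""
    if not past_sale_prices:
        return reputation  # no history to condition on
    typical = median(past_sale_prices)
    if new_price <= typical * jump_factor:
        return reputation  # within the seller's established tier
    # Radical jump from (say) $50 items to $5,000 items: scale the
    # carried-over reputation down proportionally and/or warn buyers.
    return reputation * (typical * jump_factor) / new_price
```

A seller with a history of $50 sales would keep their full score for a $60 listing, but carry only a fraction of it into a sudden $5,000 listing.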
In response to (1), plenty of exit scams involve exactly the same product that the company previously sold with 5-star ratings. Either an expensive product from the beginning, or long fulfillment times that allow a large backlog of orders before anyone takes notice, or both.
(No, that $100 ounce never came, and I lost my money just like fifty other fools on Agora.)
EDIT: oh yeah, this is also how I got scammed out of $200 trying to purchase a fake ID. Kids, friends make friends use an escrow service.
> But you could have a proof of stake system that requires exchanges to hold X% of their funds available, similar to the requirements for banks
In Bitcoin's case you can even do that provably and without trusted third parties: https://github.com/olalonde/proof-of-liabilities. I haven't worked on this in a while so there might be more recent proposals.
#2 is pointless. People want to keep bitcoins on an exchange because every bitcoin-to-bitcoin transaction has overhead.
So, now the exchange has account X with Y money in it. Once Y gets to $100,000,000 (or whatever), they defect.
PS: The Bitcoin network only handles ~7 transactions per second, so 1,000 $1 transactions would take a couple of minutes even if you're willing to pay more in fees than anyone else. However, if you wait for each transaction to clear, that's ~10 minutes per transaction. So 1,000 $1 transactions would take about a week.
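The back-of-the-envelope numbers, assuming ~7 tx/s network-wide and ~10 minutes per confirmation:

```python
TX_RATE = 7        # approximate network throughput, transactions/second
CONFIRM_MIN = 10   # approximate time for one confirmation, minutes
N = 1000           # number of $1 transactions

# Best case: your transactions win every fee auction and fill the blocks.
best_case_minutes = N / TX_RATE / 60          # roughly 2.4 minutes
# Sequential case: wait for each transaction to confirm before sending
# the next dollar, as the proposal above requires.
sequential_days = N * CONFIRM_MIN / 60 / 24   # roughly 6.9 days, about a week
```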
The problem with 2) is that the USD transactions would be too hard to do with the current banking system. While in many cases you can already do instant and free wire transfers, virtually no banks offer APIs good enough to program such a scenario.
Also, I'd be afraid of triggering some kind of banking system warnings if I sent a hundred transactions within one day.
With crypto to crypto exchanges a better solution would be to do it as a DAO/smart contract.
Why should an exchange have any less than 100% of the funds available? The exchange should be holding all customer funds in segregation, not mingling and floating them.
It's strange to think about, but realistically the exchange does not need 100% of the funds available, since they will never all be withdrawn at once, even in the event of a massive crash.
The actual proportion an exchange needs to keep to be able to pay everyone (who asks for it) at all times is about as predictable as the bitcoin market itself.
Would we trust "100% funds available" exchanges more? Possibly, but the "cost" would be very high so I would actually be a little wary of an exchange which does this, as to me it would be "trying too hard" to look nice (in reality there's always some trust involved, and so some risk that the owners run off with your coins/money).
> It's strange to think about, but realistically the exchange does not need 100% of the funds available, since they will never all be withdrawn at once.
This was never true, and in the aftermath of 2008 it makes a really poor argument. Two points worth mentioning without going into details:
1. Bitcoin doesn't have a central bank to ease a bank run.
2. When financial businesses go bust, it's extremely difficult for accountants to quantify the loss a priori.
Any sane financial business with half a brain in this day and age would keep at least some percentage of the funds intact.
> Possibly, but the "cost" would be very high so I would actually be a little wary of an exchange which does this
You lost me there. Why would that cost be high? All you need is a wallet (or two, hot/cold) which holds your customer deposits and which you are not allowed to spend from.
With fractional reserve banking, the institution has assets greater than liabilities. With insolvency, the institution has a plan to someday have assets greater than liabilities.
Because then you get all the "features" of the current banking system that bitcoin wanted to avoid.
Bitcoin is by design a deflationary currency. Fractional reserve banking is by design inflationary. No matter where you sit on how much inflation and what monetary policy is best, it is obviously against the intentions of the original adopters and creators of the BTC network.
Also partly how pirateat40's big Ponzi scheme worked. He built up a strong reputation in the Bitcoin trading community's web-of-trust-based feedback system and enlisted a bunch of other highly-rated people from there to resell his "investment" program. The people who accurately pointed out that it had all the markings of a Ponzi naturally had much lower reputations there, especially after they'd done so.
What's really funny is that EVE Online has proven this exact phenomenon dozens of times and is essentially an accurate model of what 'anonymous digital currency' leads to.
That doesn't sound right to me. Scammy exchanges run away with the balances people carry at them, right? That seems very different (and much more lucrative and viable) than a retail storefront changing overnight from reputable practices to theft. Obviously the former is a real problem and a big threat, although it has little to do with reputation systems. The latter doesn't sound like that big of a threat. It seems like the cost to establishing a retail storefront with a solid reputation is not worth the quick payout from theft. In fact, if you have the initial capital and business acumen to run a stable retail storefront, it's probably more profitable and less risky to just keep doing that.
Or a ponzi scheme. The initial investors get paid out which builds up the schemer's reputation, then when the pot is big enough (or new investments slow down) the schemer walks away with the pot.
They only protect you if the seller has money in his account. My friend bought a video card that was never sent. Ebay said the seller did not have money in his account, so there was nothing they could do. He was out $500 with no recourse.
Exactly this. I had to go through this situation. PayPal said they'd flag the account, but these accounts are already abandoned once the money is withdrawn.
In addition to my other comment: you want to pay by credit card since it has fraud protection. Never use PayPal credits, bank transfers, or debit cards. Debit cards do have some protection, but usually a shorter time frame in which you need to report.
Well, that's kinda fine by me. If I'm out $1000 from a fraudulent lens purchase on eBay, and they/PayPal won't refund, then I'm fine not doing business with them anymore and recovering my $1k via a credit card chargeback.
Perhaps if we had some formal description of all the store fronts we could write a program to calculate the "effective rating" for the item being sold and that could help prevent attacks like these.
In "Liars and Outliers", Bruce Schneier talks about reputation as a tool that society uses to force its members to behave well. In small societies, reputation by itself is usually enough, but the problem is that it doesn't scale because people can ordinarily only keep track of about a hundred to a hundred and fifty people (Dunbar's number).
I'm of the opinion that, since Dunbar's number is a purely biological limitation, there's no reason to think that computer-assisted reputation can't scale to much larger groups of people. I think Cory Doctorow is right to be worried that an authoritarian government could use reputation as a tool of coercion (and this is why we should worry about things like government collection of phone metadata). However, I think it's also worth considering that reputation could be used as a tool by the people to obtain some accountability from the rich and powerful, and to help distinguish grass-roots support from astroturf. (I went so far as to create a site, http://polink.org, as a sort of collaborative graph database of how powerful people and organizations are connected to each other. It's one of those side projects I feel like I should get back to and work on one of these days.)
One of the reasons I think computer-assisted reputation hasn't taken off is that it's actually kind of hard to implement a good reputation engine, or even agree on what sort of properties a good reputation system ought to have. A lot of awful reputation systems have taught the technology industry to have very low expectations, which is a shame because I think they can be very powerful if implemented well and used correctly.
> I think Cory Doctorow is right to be worried that an authoritarian government could use reputation as a tool of coercion
Not just "could": China is already doing it. It sounds awful:
> But Peeple is a modest effort compared to "Citizen Scores," the for-now-voluntary service run by the Chinese government in partnership with Tencent (a huge social media and games company) and Alibaba (China's answer to Amazon). Your citizen score is visible to everyone, and it goes up when you do things the government wants – buying socially approved items, undertaking approved leisure activities, adhering to rules and regulations, and socializing with other high-score individuals. Of course, not doing these things makes your score go down. Just being friends with low-scoring individuals drags your own score down, creating a powerful incentive to conform.

> Mandatory Citizen Scores are being phased in over the next decade [...]
..tangentially, why even bother with reputations? The big issue with a money-based economy is that money is all-cleansing. All the shitty things you did to get your money can be forgotten, and your money can be used to buy yourself into a position of power, prestige, or even good standing.
If we could tag money's history of exchange, we could maybe avoid both of these issues with reputations (a big "if", obviously). Individual transactions could be judged by the prior history of exchanges. Though this could be used as a proxy for reputation, or to construct a reputation, the reputation is not required to complete the transaction as a seller. The seller only needs to judge the transaction on the history of the money offered itself.
Ideally this would create a situation where 'clean' currency would progressively become more valuable than 'dirty' currency, encouraging clean wealth creation.
I think it's pretty important for money to be fungible. Your idea is intended to solve a very real problem, but I think your proposed solution is worse than the problem.
I agree regarding the importance of fungibility, but not all the way. If a majority of reputation engines only advised rejection of payments offered by the bottom 1% of ranked teams, then 99% of teams would still experience the fungibility expectation.
Otherwise, if the reputation engines are too strict, no one would trust the currency. Too loose, and there is no effective deterrence to unsustainable activity. There's an ideal balance for the targeted overall rejection rate among all teams, most likely erring on the side of being too loose (0.5% rejection?), filtering out only the worst of the worst. This targeting to a metric value reminds me of Fed actions to indirectly influence the economy, such as setting interest rates.
I like the idea of being able to reliably track how money was earned or used at the time that it is being offered as payment. However, it seems (to me, at least) that there is no practical way to do this for physical currency notes, for various reasons.
Fortunately, for digital currency there are easier ways to implement such tracking ideas. One approach is for a team to use its planned budgets directly as digital currency. When two teams transact, corresponding amounts of expense and revenue budgets get cancelled, so there is no transfer of currency between teams. Thus, each payment offer is always traceable to the reputation of the team who issued the currency (as budgets).
Illustrated explanations, more details, and a working prototype are available at: https://tatag.cc/ui/. (Disclaimer: I'm the developer of the linked site.)
Do not accept payments from teams who have budget balances that are way beyond typical. The Tatag platform provides accurate and current summary statistics about the paying team, so a reputation engine could factor those metrics to evaluate the concerns that are important to a payment recipient.
As an aside, traditional systems do not prevent similar situations from happening. You have ultra-wealthy people that are each effectively holding an endless supply of money. Should we not be asking that same question whenever and wherever we see it happen today?
> this is why we should worry about things like government collection of phone metadata). However, I think it's also worth considering that reputation could be used as a tool by the people to obtain some accountability from the rich and powerful
I find myself in agreement with much of what you say, but the above argument always makes me pause in these discussions. Essentially, the argument is: "on the one hand, the rich can use this tech to control the poor. That's bad. On the other, the poor can control the rich. That's good."
Yeah, well, which one are you? Where do you draw that line between who is "rich" and "poor"? Why is one side coercive, while the other is obtaining accountability?
We're all, here, the poor and oppressed, of course, only willing to use our new-found power for good.
It's about finding the right balance. If members of society weren't accountable at all to the rest of society, we would have absolute anarchy. If we had perfect compliance, we would have a totalitarian state. Schneier makes this point very well in his book - it's good that society can force cooperation most of the time, but it's also good that it's not able to force cooperation from everyone, all the time.
Reputation is a powerful (and rather dangerous) tool and software-assisted reputation calculation is something that's going to exist and can't be un-invented. Therefore, I think it would be best if that tool were accessible to all sides of society so it isn't exclusively applied by the powerful against the weak. I don't mean to imply that any use of reputation systems by the powerful is bad and any use by the less-powerful is inherently good, I just think that abuse is less likely if those wielding the tool are fully aware that the same tool can and will be used against them if they get enough people mad.
The approach I prefer is to let each market participant or team decide which reputation engine they want to use when evaluating a payment offer. So there does not have to be a global consensus on what reputation engine everyone should use.
I have prototyped a reputation currency platform [1] where third-party 'advisor' applications offer real-time advisory on whether a payment recipient should approve or reject a payment offer. Data science/big data techniques would help improve and adapt such reputation engines over time, much like web search engines improving incrementally over time.
I like the idea of making reputation data available to everyone and letting them run their own calculations however they want. Unfortunately (or fortunately, if you're wary of the privacy implications), most online applications that have some notion of reputation don't share reputation data or allow you to just grab the whole database.
A team's reputation does not need to involve divulging personal information about its members. Although, if a team is small enough, there's a good chance that generalized transactions reported between teams may still be matched to a particular team member.
The bigger picture is that a team in the proposed system would not have to constantly worry about funding, since it could use its budgets directly as currency as long as it maintains its reputation. I think that for most participants that is a more than good enough trade-off for taking on a small privacy risk. In other words, would a team rather worry about funding or privacy? With good platform design, those concerns might not even be mutually exclusive in most cases.
With regards to not having access to reputation data from online applications, there are work-arounds. For example, tweets, blog posts, or product reviews about a team could be crawled for sentiment detection and used in reputation metric calculations. These work-arounds might seem difficult to do, but so was search engine technology in the early days of the web.
I wonder if there is another aspect in play, the mutability of identities.
In small, local communities, you have one thing you can never get rid of: your face.
But online there is no face, nothing to tie your claimed identity back to your biological self.
While pondering this, I was reminded of an MMORPG that may illustrate it.
It originated in South Korea, where the player account is tied to the player's national identity number. So once banned, one is banned for life.
But when it was exported to the States, anyone could sign up multiple times. The end result was that various mechanics originally meant for the community to police itself were used to harass other players.
That is an important characteristic of online communities: someone with a bad reputation can usually (depending on the application) throw away their old account and start over with a new identity; therefore, having no reputation is more-or-less indistinguishable from having a negative reputation. That's not really a problem per se, it's just a characteristic of the problem domain to consider when implementing a reputation system (and in particular, in deciding how much leeway the system should give to new users).
"If this sounds familiar, it’s because that’s how money works."
Whuffies sound a whole lot more like politics than money. As a medium of exchange, to get something of value you must usually give up, or at least risk, some money. Yes, there will be a power law distribution (as with any network phenomenon), but it will be tempered by the self-redistributive, non-zero-sum property of exchange.
Whuffies don't have that, so obviously the inequalities will be greater.
The non-zero-sum aspect you describe is commonly described as return-on-investment in a capitalist society. The redistribution is taxes and government spending. The power law is created by the preferential attachment of capital to existing capital. Even a weak preference creates an extreme inequality.
I'm not sure if it'll help you to think about it this way, but you can always model the system as zero-sum if you normalize the total amount of wealth in the system to 1 after every transaction. Or track participants' share of total wealth instead of absolute wealth.
> but you can always model the system as zero-sum if you normalize the total amount of wealth in the system to 1 after every transaction
Yes, if you "normalize" the world to fit your conclusion, you'll generally find that your conclusion is true. If we just normalize everybody's wealth to 1, then all wealth inequality disappears. Problem solved.
The fact that you need to normalize the total amount of wealth suggests that you recognize that the total amount of wealth changes, which means the system isn't zero sum.
> The power law is created by the preferential attachment of capital to existing capital. Even a weak preference creates an extreme inequality.
> If we just normalize everybody's wealth to 1, then all wealth inequality disappears. Problem solved.
You misunderstood the use of the word "normalize". The point of normalization is to change the absolute magnitude but keep the relative magnitudes the same. Perhaps I should be more specific: if you have two values, 6 and 4, which sum to 10, and you'd like to normalize the total to 1, then you multiply each by 1/10 so that you end up with 0.6 and 0.4.
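In code, the same normalization just divides each value by the total, which pins the sum to 1 while leaving every ratio intact:

```python
def normalize(wealth):
    """Rescale wealth values so they sum to 1, preserving relative shares."""
    total = sum(wealth)
    return [w / total for w in wealth]

shares = normalize([6, 4])  # [0.6, 0.4]: same 3:2 ratio, total now 1
```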
> The evidence doesn't really bear this one out.
Write some code to simulate this. I think you'll find that preferential attachment does in fact create a rich-get-richer phenomenon. For example, take a look at this simulation coded in NetLogo (http://ccl.northwestern.edu/netlogo/models/PreferentialAttac...).
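If you don't have NetLogo handy, here is a minimal Python sketch of the same preferential-attachment process (the agent count, step count, and seed are arbitrary choices for the demo):

```python
import random

def simulate(agents=100, steps=10_000, seed=42):
    """Award each unit of new wealth to one agent with probability
    proportional to that agent's current wealth."""
    random.seed(seed)
    wealth = [1.0] * agents  # everyone starts equal
    for _ in range(steps):
        winner = random.choices(range(agents), weights=wealth)[0]
        wealth[winner] += 1.0
    return wealth

wealth = simulate()
top_share = sum(sorted(wealth, reverse=True)[:10]) / sum(wealth)
# With equal luck, the top 10 of 100 agents would hold 10% of the wealth;
# even this weak preference concentrates substantially more than that.
```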
Honestly, it's the self-redistributiveness that's more important than the non-zero-sumness, but being non-zero-sum in utility will taper the steepness of the power law.
I'm not sure what you mean by "self-redistribution". I at first assumed you meant economic "market forces" as opposed to government intervention. If that's true, then those market forces are almost entirely rich-get-richer. Sure, there's the "creative destruction" as some fortunes are lost, but that's actually part of the driving force for inequality -- that's what forces capital-owners to seek growth as opposed to allowing them to stay static.
"it’s a score that a never-explained set of network services calculate by directly polling the minds of the people who know about you and your works, reducing their private views to a number. The number itself is idiosyncratic, though: for me, your Whuffie reflects how respected you are by the people I respect. Someone else would get a different Whuffie score when contemplating you and your worthiness."
This is exactly what I've designed with github.com/neyer/respect
It's basically the same as pagerank, but it has an important feature that the author doesn't seem to have considered.
Look at the notion of 'soundness of respect' in my code, and see how different it is from anything out there. It provides multiple parties who aren't directly part of a conflict with an incentive to find a resolution.
> Gamergate could use to destroy your employment and personal life, possibly permanently, just by mass-one-starring you.
This is precisely what the respect matrix works against: people who one-starred someone else for spurious reasons end up dinging _themselves_ if they are at all connected, which the social graph says we all are. You'll lower your soundness score if you give someone else a shitty rating while you have an implied positive rating of them through other links.
I don't think his analogy to reputation economies is quite right: having been a Lyft driver, I did observe that it was difficult to get out of ruts when you have a low score (a passenger entering a car with a low score is going to be more vigilant and critical). Yet, scores are constrained to being between 1 and 5 so it's relatively easy, with the right strategy, to create upward momentum and bring oneself back into a high range. There is no power law distribution in the scores because they are constrained to a small range.
I don't think the "inequalities" really generally apply to constrained reputation economies.
That's kind of like Nigel from Spinal Tap's argument that his amp goes to 11.
I doubt that internally Lyft stores your score on a scale of 1 to 5. It's almost certainly derived from a bunch of other data, or at the very least stored as a floating point number. You can approximate whatever curve you like with either, and then just round.
I'd believe that Lyft doesn't currently have a power law curve, but the 1-5 system isn't what's stopping them.
You can't have a power law over 1..5? Yes you can. Power law works perfectly fine with finite ranges. It works perfectly fine with discrete values as well. The only question is how accurate a power law model would be in this case.
That's rather obvious. Due to Dunbar's number (the rough maximum number of human relationships you can maintain), reputation is a very scarce resource.
To the extent that corporations and brands are virtual individuals, I think this explains why so many Internet markets are winner take all. There is only one Facebook because Facebook occupies a "Dunbar slot" in the minds of its users and those slots are scarce.
Also means that more B2B or behind the scenes Internet companies may not be as winner-take-all as B2C Internet fronts and gateways. To an extent the 'portal' hype from the original dot.com boom was correct-- portals are super-valuable and tend to be winner-take-all or at least a few winners take most.
There is a chapter in Tom Slee's (excellent) book "What's Yours is Mine: Against the Sharing Economy" which goes into depth about rep systems and product ratings, why they are different, and why rep systems are a bad way to measure things. There are a bunch of footnotes. I recommend it if you are looking for an even more critical take.
Excellent perspective, particularly so because it comes from someone who did so much to describe where a reputation economy could go.
Science fiction writing is amazing in how it gets us to imagine, be bewildered, or be alarmed by things that have not happened yet, or could never happen.
It is a starting point, giving us the confidence to keep building -- but also the foresight to be ready for the ills that our creations will no doubt introduce.
>"The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design."
I think you bring up an interesting parallel between Hayek's conception of the purpose of economics and the common view of science fiction. Being an avid reader of both, this seems to be a great synergy!
I thought it interesting because despite the idea of reputation-based currency being by far the most convincing and fully-realised part of Doctorow's Down and Out in the Magic Kingdom, it wasn't until reading this essay I realised how much contempt he had for the concept.
(without giving too much of the plotline away, the main distinguishing factor of the reputation-based currency is that people lose it very quickly when they upset people, and in the book people are usually upset for perfectly understandable reasons. Otherwise, people achieve high reputations with the same blend of shrewdness, endeavour and inheritance as required to get rich in today's economy).
> it wasn't until reading this essay I realised how much contempt he had for the concept.
That's because he didn't. Cory published DaOitMK in 2003, meaning it was in the publisher's pipeline in 2002 and written probably 2000-2001. This essay was published in 2016. He's had 15 years of cumulative experiences upon the basis of which to change his opinion on reputation economies -- is it surprising that he's done so?
(Similarly: I disagree violently with the Charlie Stross who wrote Accelerando, circa 1998-2004, on the subject of the singularity. But hey, it's been close to 20 years; younger Charlie had less data and experience to go on, is all.)
((Disclaimer: I have no insight into Cory's mind on this topic other than that which comes from having known him for about 15 years and written a book with him.))
I'm about as skeptical about the singularity these days as Cory is skeptical about Whuffie.
(When you look at the pattern of beliefs around the singularity, they structurally resemble the complex of beliefs held by pre-millenarian Christian fundamentalists. They're equally difficult to falsify, too. If you conditionally accept the theory that atheism as understood in the West is a product of the Enlightenment, and thus a Protestant heresy, then it looks like a bunch of the avowedly rationalist atheists have come full circle and accidentally built themselves a tree house that looks uncannily like a church.)
I wonder what the Gini coefficient of karma scores is. (In the sense of "legitimately curious" rather than "attempting to make a joke which will further concentrate karma in the hands of the 0.001%.")
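For anyone who wants to satisfy that curiosity against a dump of karma scores, the Gini coefficient is a short computation (this is the standard sorted-values formula, nothing HN-specific):

```python
def gini(values):
    """Gini coefficient: 0 = perfect equality, (n-1)/n = one holder has all.
    Uses G = 2*sum(i*x_i) / (n*sum(x)) - (n+1)/n, with x sorted
    ascending and i counted from 1."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

gini([1, 1, 1, 1])  # 0.0: perfectly equal karma
gini([0, 0, 0, 1])  # 0.75: one account holds everything
```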
Just like "Post a comment; people shower it with karma" is positive-sum activity, many efforts in the wider economy are positive-sum. That's why we don't eat grubs and use leeches for treatment.
Did I miss something or does the article start by making an argument about what works or doesn't work in the real world by invoking a fictional novel that he himself wrote?
Not really. The fictional world is used only to contrast it with how badly it mirrors the reality. It's invoked merely for the author to have a chance to say "mea culpa" (and maybe to help illustrate that these wrong ideas are seductive).
And it ends with imagining how a proposed but yet-to-exist system will function... while ignoring historical examples of many societies that organized themselves by reputation and worked out quite well, without anywhere near the type of inequality that currently exists. And, of course, some that organized themselves that way and didn't work out that well, although still not with the current level of inequality.
For that matter, he doesn't actually show that even his imaginary reputation system is worse than our current monetary system, just that some of the same types of issues can happen.
"... Citizen Scores are a near-perfect expression of reputation economics: like most other forms of currency, they are issued by a central bank that uses them to try and influence social outcomes."
It seems to me, and I could have misread, that the biggest problem with the examples of currency systems in the article is that the 'pooled' currencies are unquestionably accepted by participants. Take that guarantee away, and someone could design a reputation currency where participants could reject payments from disreputable participants. In which case accumulation of units does not imply the long-term ability to use them; instead, there will be a long-term incentive for participants to maintain a good reputation.
See this overview of a counter-example to the article's point, a reputation currency system without a central issuer: https://tatag.cc/ui/home-about. (Disclaimer: I'm the developer of the linked site.)
> In which case accumulation of units does not imply the long-term ability to use them; instead, there will be a long-term incentive for participants to maintain a good reputation.
There's an important distinction here: the score of your peers versus the score of your government/authority.
It's possible your peers love you while your government hates you. Maybe you do great work, but are a pain to the power structure. In that case, you end up with high peer ratings, low "political office" ratings.
If the goal is "orderly citizens," then the political ratings become more viral (as in, the people you associate with get boosted or discounted based on your Official Political Office Rating) and reinforce conformity to how The Political Office wishes people would behave.
We can even make this local to HN. HN has secret "political rankings" on accounts to restrict the reach of people the "HN establishment" doesn't like (HI DANG), but the public vote counts are more peer-oriented rankings (which still aren't perfect because HN has 3 to 5 distinct sub-cultures fighting against each other, so a +100 from the "VCs are evil bastards" subculture in the morning gets canceled out in the afternoon once SF wakes up and you get -200 from the "VCs are genius darlings and we should all kiss their feet" subculture).
I think the best way to address the issue of how reputations are determined is to let "sellers" or payment recipients decide on who they want to benefit from their goods and services.
So, in the digital currency system that I have prototyped [1], each team decides which recommender system they want to provide advice, in real time, on whether to accept or reject a payment offer from another team. Borrowing from your examples, one team could use an "advisor" developed by the HN staff, another team could use an "advisor" endorsed by the subculture you identify with, etc.
There are many issues that I have worked out in the prototype, such as making sure payments are always traceable to the issuer and inflation is decentrally regulated, and most of the solution comes from the budgets-as-currency approach. I'm still in the process of improving the advisor options with better data-science techniques (hopefully with contributions from others). [2]
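The pluggable-advisor idea reads like a strategy pattern. The sketch below is an assumption about the shape of such a system, not code from the linked prototype; the advisor names and policies are made up for illustration.

```python
# Sketch of pluggable payment "advisors": each team binds itself to a
# recommender that decides, per offer, whether to accept a payment.
# The Advisor signature and both sample policies are hypothetical.

from typing import Callable

Advisor = Callable[[str, float], bool]  # (payer_id, amount) -> accept?

def blocklist_advisor(payer_id: str, amount: float) -> bool:
    # e.g. an advisor maintained by a trusted group consults a blocklist
    return payer_id not in {"known_scammer"}

def cautious_advisor(payer_id: str, amount: float) -> bool:
    # e.g. an advisor that refuses unusually large offers from anyone
    return amount <= 50.0

class Team:
    def __init__(self, advisor: Advisor):
        self.advisor = advisor  # each team picks its own advisor

    def consider(self, payer_id: str, amount: float) -> bool:
        return self.advisor(payer_id, amount)

team_a = Team(blocklist_advisor)
team_b = Team(cautious_advisor)
assert team_a.consider("alice", 500.0) is True   # not on the blocklist
assert team_b.consider("alice", 500.0) is False  # too large for this policy
```

The point of the pattern is that "who counts as reputable" is decided at the edge, per recipient, rather than by one central scorer.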
No, that's not quite how Sesame Credit works in China.[1][2] The system computes a social score for each citizen. But it's not run by the Government or a bank. It's run by Tencent and Alibaba. It tracks what you say online, what you buy, and who your friends are. Having a low-scoring friend brings your score down. New feature: Baihe, China's largest dating service, now uses Sesame Credit scores in matching. It's voluntary now, but will become mandatory in 2020.
This is the successor to the Dang'an, a permanent record kept for each citizen of China since it went Communist. The record was typically maintained in a book by the employer, in conjunction with the Public Security Organization. The Dang'an system became less relevant as more non-state employers appeared. Now it's getting an upgrade.
Thanks for the clarification. I think by "central bank", the article just meant a non-decentralized currency system.
It's good that you pointed out the possibility for scores to go down in that system, and I assume that would affect the ability of low scoring participants to transact in the system. If that's the case, it makes me doubt the article's strong assertion regarding that example of a reputation currency.
> Take that guarantee away, and someone could design a reputation currency where participants could reject payments from disreputable participants. In which case accumulation of units does not imply the long-term ability to use them; instead, there will be a long-term incentive for participants to maintain a good reputation.
+1, considering that the empowerment of individuals to issue the currency themselves creates a trust graph that should be enough to mitigate many of the downsides he brings up. Here's my attempt at social, reputation cryptocurrency (but it's currently being rewritten from scratch): https://github.com/sunny-g/whuffie/blob/master/ABOUT.md
I had to dig a bit which founder you were referring to. No controversy here, but just want to give a background: I always think of Ryan Fugger as the person who initially proposed the network of IOUs idea and used the name Ripple for it, and later on worked with Jed McCaleb's team who implemented Ripple as it's known today. [1]
Just amazing how alt-currency/alt-payment ideas have grown so much from being on the fringe to more or less widely accepted. It was hard to imagine back in 2006 how these ideas would finally take off.
If you want to see a reputation currency treated seriously in SF, read "Daemon" and "Freedom™" by Daniel Suarez. Augmented reality meets level grinding.
I caught a panel this past summer where Suarez, Doctorow, and Hugh Howey (Wool) talked about their approaches to writing, taking ideas into the "what's next" stage, and their inspirations. It was fascinating... unfortunately, recordings were not allowed (possibly part of the reason it was fascinating).
"Whuffie has all the problems of money, and then a bunch more that are unique to it. In Down and Out in the Magic Kingdom, we see how Whuffie – despite its claims to being ‘‘meritocratic’’ – ends up pooling up around sociopathic jerks who know how to flatter, cajole, or terrorize their way to the top. Once you have a lot of Whuffie – once a lot of people hold you to be reputable – other people bend over backwards to give you opportunities to do things that make you even more reputable, putting you in a position where you can speechify, lead, drive the golden spike, and generally take credit for everything that goes well, while blaming all the screw-ups on lesser mortals.
If this sounds familiar, it’s because that’s how money works."
"As anyone who's ever tried to figure out the he said/she said campaigns run by the US climate denial lobby can attest, doubt is much more powerful than outright suppression."
I don't know about the campaigns, but the point about doubt is as it should be: if you put forward a hypothesis, it's vital (à la Popper) to subject it to tests to see how well it matches reality.
The CO2 versus global temperature relationship is anything but simple. Between 1998 and 2013, the Earth's surface temperature rose at a rate of 0.04°C a decade, far slower than the 0.18°C increase in the 1990s. At the same time, CO2 levels rose uninterruptedly as they did earlier. Someone should compute the enforced degree of depression of industrial activity across the world which would, according to the theory, have reduced global temperatures to be consistent with those figures. What a pointless and tragic exercise that would have been.
This will come as no surprise to anyone who has studied game theory. The rational move in a prisoner's dilemma if you know (or control) when the game will end is to defect. This is also why term limits don't work.
It's a limit on the utility of term limits where the term-limited officer is unlikely to seek another elective office when termed out, but that concern does not apply to most holders of term-limited positions. It does apply to the US President, which tends to be someone's last elected position. But state legislators, for instance, often seek other, usually higher, offices with constituencies overlapping those of their prior office when termed out, so the concern doesn't really apply to them.
Actually, it does apply because there are vastly fewer higher positions than lower ones. Most politicians who term-out go into private industry, and they know it.
Yes, of course. The implication here is that someone would stop running in order to enable a defection, but that's a fallacy. People can (and do) defect for reasons other than knowing when the game will end. The problem is not the defections per se (those happen all the time), it's the fact that if the end of your term is known then not only does that force you to defect (if you are rational), it also forces everyone else to defect if they are rational.
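The backward-induction argument in this subthread can be made concrete. This is a toy sketch of the standard game-theory result for the finitely repeated prisoner's dilemma with a known horizon, not anything specific to politics; the function name is invented.

```python
# Toy backward induction for a finitely repeated prisoner's dilemma:
# in the known last round a rational player defects (no future to
# protect); given that, cooperation in the round before buys nothing,
# and the reasoning unravels all the way back to round 1.

def equilibrium_moves(rounds):
    """Each round's rational move when the horizon is known in advance."""
    moves = ["defect"]  # final round: no future reciprocity, so defect
    for _ in range(rounds - 1):
        # The following round is already a defection, so cooperating
        # now cannot be rewarded later: defect in this round too.
        moves.append("defect")
    return list(reversed(moves))

assert equilibrium_moves(10) == ["defect"] * 10  # cooperation fully unravels
```

With an unknown or probabilistic end point, this unraveling argument no longer goes through, which is why reputations (and repeated dealings generally) can sustain cooperation only while the relationship might continue.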
Reputation only works if you effectively can't disappear. One reason why a good reputation can be trusted somewhat is that you can guess how likely it is that the person enjoying it will protect it in the future.
Interesting application of reputation to market-based economies. Doctorow's article won't be a surprise to anyone who lived behind the Iron Curtain during socialism.
The State held information on every citizen, and all official transactions resulted in stamps and signatures in little booklets. The stamps and signatures were a proxy to reputation. Don't have the 40 years worth of stamps in your employment booklet? That's a problem for your retirement. Know the right people with the right connections? You get a better apartment than everyone else.
> attempt to establish a basis for strangers to trust one another
That's a perfect problem description for a new business. Existing reputation systems don't do it well, but then again, search engines before Google didn't do search well.
No, he's saying that "merit" is an incredibly easily-gamed metric. We can measure results all right, and we can measure ability rather less so, but "merit" implies a causal link between ability and results that is very hard to unambiguously prove and shockingly easy to fake (even if just by disregarding beneficial external factors).
More importantly: creating a scalar metric of "merit" is always an exercise in power, because it simplifies a hugely multidimensional space to a scale, and in collapsing all those dimensions expresses in a hidden way the preferences of the definer of that metric.
This happens even in notionally quantifiable domains like finance. What asset gives the best return? Should be easy, right? Well, what about volatility? Oops, turns out the naive rank ordering of assets by expected return contains an implicit value judgment about volatility.
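Here is a small numeric illustration of that hidden value judgment, using made-up return series (the numbers are invented for the example, and the mean/stdev ratio is just a crude Sharpe-like score with a zero risk-free rate):

```python
# Ranking assets by "best return" embeds an implicit judgment about
# volatility: a naive mean-return ranking and a volatility-adjusted
# ranking can disagree. Data below is fabricated for illustration.
from statistics import mean, stdev

returns = {
    "A": [0.05, 0.04, 0.06, 0.05],    # steady, modest returns
    "B": [0.30, -0.18, 0.25, -0.10],  # higher mean, wild swings
}

by_mean = sorted(returns, key=lambda a: mean(returns[a]), reverse=True)
by_risk_adj = sorted(
    returns,
    key=lambda a: mean(returns[a]) / stdev(returns[a]),  # crude Sharpe-like ratio
    reverse=True,
)

assert by_mean == ["B", "A"]      # naive "best return" picks B
assert by_risk_adj == ["A", "B"]  # volatility-aware ranking flips the order
```

Collapsing the two dimensions (return, volatility) into either single scale forces a preference between them; the chooser of the metric decides which asset "wins."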
See upthread for a mention of eBay seller scores being unadjusted for transaction size for another example.
Where Whuffies meet the real world is in reputation systems being used in society today: Amazon and eBay seller reviews, Uber and Lyft driver reviews, Facebook and LinkedIn profiles.
Does that mean you only care about the inequality of people who vote for politicians who promise open borders? Not having open borders is also reinforcing inequality globally, which is even bigger (hence worse) than just in America.
If you don't try to reason out your political opinions objectively, they're no better than the opinion of a screaming mob.
Well, it's really conflating reality with a hypothetical situation. The hypothetical is that money pools around sociopathic jerks who know how to flatter, cajole, or terrorize their way to the top, and if that hypothesis turns out to be true, then I don't feel sorry for those people who caused their own problems by supporting jerks.