Bit more context as a CS researcher: the Symposium on Theory of Computing (STOC) and the Foundations of Computer Science (FOCS) are the two main computer science theory conferences; they're not random journals or something.
People who aren't familiar with the Sokal affair may miss the critical point that word salad was the author's whole intent, to expose the problem.
'a demonstrative scholarly hoax performed by Alan Sokal...to test the journal's intellectual rigor, specifically to investigate whether "a leading North American journal of cultural studies...[would] publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors' ideological preconceptions."
'The article, "Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity",[3] was published in the journal's spring/summer 1996 "Science Wars" issue. It proposed that quantum gravity is a social and linguistic construct. The journal did not practice academic peer review and it did not submit the article for outside expert review by a physicist.'
IMHO it is the most amazing academic comeuppance in history.
Edit: See the author's great book: Sokal, Alan; Bricmont, Jean (1998), Fashionable Nonsense: Postmodern Intellectuals' Abuse of Science (1st ed.), New York: Picador USA, ISBN 0-312-19545-1
But they also weren't treating it like a normal paper, and the lack of peer review is part of that.
More importantly, that being the big example, decades later, helps show just how rare something like that is for major conferences and journals.
An estimate of it happening 1% of the time would be much too high, but even if 1% was accurate it would still mean that endorsements are quite convincing.
If the general consensus is that the org is legit and hosts quality work/research, then yes an appeal to authority is warranted. Does that mean they’re above scrutiny? No. But it does mean the person dismissing the work needs to present a concrete reason given it’s been validated/vetted by an org folks generally trust based on a proven track record.
How efficient is it now? The last time I checked, FHE required minutes of computation and gigabytes of memory to store tiny amounts of data, and since it is not IND-CCA secure, I could not find any use cases.
Very inefficient. Like wildly so. Specifically if you have a very small database and you preprocess it with their techniques, the resulting database is petabytes in size. But the results are very beautiful.
There are no obvious ways to improve on this work, so it is not a matter of engineering. We really do need another breakthrough result to get us closer to practicality.
Pretty efficient! E.g. a recent paper describes a system to do fully private search over the common crawl (360 million web pages) with an end to end latency of 2.7 seconds: https://dl.acm.org/doi/10.1145/3600006.3613134
It used 145 core-seconds of compute, according to the paper. That's not 100% unless they were mostly 1 core servers.
However, the search used almost 50MiB of traffic before the query was even sent by the client. I imagine that will add up fast.
You’re right, I did my math with a 1-core assumption. Still, it’s 800ms of 100% utilization of all 4 vCPUs across the entire cluster. That’s 40% capacity if the 2s latency was purely CPU bound and the client was next to the server, but we know the initial phase involves some round trips to communicate that 50 MiB, so the 40% is a floor. My point remains that this is still several orders of magnitude off in terms of cost (and, as you mention, the network bandwidth required is off by similar orders of magnitude as the CPU).
Many many orders of magnitude. It really depends on the search algorithm. If you're looking up a single term, there are many standard ways to index this, but for instance using a trie or hash table, the lookup cost would be a number of operations proportional to the search term length (e.g. for "foo", it's a key of length 3, times some small number of operations for each letter). Then if you want to combine the results and sort by frequency, to estimate the cost of this, add up the number of pages from each term, p_1 + p_2 + ... = P. Then the number of operations might be P*log(P). This dominates the cost. So if your search terms appear in 1,000,000 of these pages, you're talking on the order of ~6,000,000 small operations. An implementation would be able to execute that many operations on a single CPU in 10s or 100s of milliseconds maybe, for an elementary unoptimized algorithm.
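For a rough sense of what that looks like in code, here's a minimal plaintext sketch (made-up index and page IDs, purely illustrative) of the hash-table lookup plus the P*log(P) ranking step described above:

    from collections import Counter

    # Hypothetical inverted index: term -> list of page ids containing it.
    index = {
        "foo": [1, 4, 7],
        "bar": [2, 4],
    }

    def search(terms):
        postings = []
        for t in terms:                  # hash lookup, ~O(len(term)) per term
            postings.extend(index.get(t, []))
        counts = Counter(postings)       # P postings in total
        # O(P log P) step: rank pages by how many query terms they contain.
        return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

    print(search(["foo", "bar"]))        # page 4 matches both terms, ranks first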
The benchmarks listed there are approximately 100 times slower than the original Intel 8088 microprocessor released in 1979, on which the original IBM PC was based.
That microprocessor was efficient enough for many applications of general purpose computing, but we still need Moore's law to give us a 100-fold increase in compute power to reach this level.
This level of performance seems comparable to (and IMHO slower than) the very first electronic stored-program computer, the 1948 Manchester Baby.
In other words, this is comparable to doing the computation manually, but with the guarantee that whoever is performing the computation cannot glean anything from it.
This can be OK for some relatively rare and important computations, which you for some reason must run in a software-untrusted environment. I can imagine using it for some high-stakes, small-scale automated voting. All the intermediate results may be posted in the encrypted form as an audit trail. After the process is completed and results are declared, the encryption keys are released, and any party can check that every step of the computation determining the winner was correct, there was no stuffing.
Not an answer, but a question that I hope someone can answer. Is the lack of speed because of a lack of optimization or compute? Is this something that could be fixed by an accelerator?
It's often hard to compare things in an accurate way. I mean, many current methods might already have hardware-specific acceleration, and researchers aren't always good at optimization (why should they be?). So is the problem with FHE more a lack of specialists putting in time to optimize it, or is it so computationally intensive that we need hardware to get better before it becomes practical? I mean, look at llama.cpp and how much faster it is, or Stable Diffusion. Sure, both are kinda slow (diffusion's no GAN and SD's not straight diffusion), but very usable for many applications.
Based on my recollection of a conversation with the authors after their STOC talk: the RAM scheme is not efficient enough to be competitive with circuit-based FHE schemes; for problems with low RAM usage, existing circuit-based methods are more efficient, and problems with higher RAM usage are infeasible to run on any FHE method right now.
They were 50/50 on whether or not this technique could be made feasible in the same way that, say, the CKKS scheme is.
I remain unconvinced of the practical applications of solutions like this.
These seem like academic toys.
Pre-processing a dynamic database is a non-starter. No database that is used for any real world use case is static at scale, especially not internet traffic.
I’m not expecting FHE that’s fast in my lifetime, but I’ve been wrong before.
Context: I led a team which built the first open source production-grade implementation of hardware-accelerated PIR which is currently at use at scale.
I don't think the purpose of this sort of research is to be immediately applicable, it more shows a direction that could be useful in the future. Shor's algorithm has not been used practically, but it's hard to imagine modern cryptography without "post quantum" being an important topic.
I think you are right that this paper isn't practically useful. But that is much like saying that making one chip off of a block of granite isn't art. It isn't, but the first chip enables further research until it is practical.
The first time I came across PIR was via the 2014 blog post from Signal on how Private Contact Discovery was a hard problem and how it required PIR to be solved. https://signal.org/blog/contact-discovery/
Maybe this will help Signal get to a viable solution in a few years.
> Unfortunately, for a reasonably large user base, this strategy doesn’t work because the bloom filters themselves are too large to transmit to mobile clients. For 10 million TextSecure users, a bloom filter like this would be ~40MB, requested from the server 116 times a second if every client refreshes it once a day.
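(For reference, the 116/second figure is just the daily refresh volume spread over a day: 10,000,000 requests / 86,400 seconds ≈ 116 requests per second.)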
They decided to run computations inside a 'secure' hardware environment instead (SGX specifically) so that they can't get access to the computation themselves but it also doesn't need to be run client side. I assume you meant the former thing, but the approach they actually use is fundamentally different from homomorphic encryption / PIR.
Unless you have an electron microscope, work at Intel, or manage to find a hardware exploit, you are not getting the private key out of that chip, which, short of breaking the underlying cryptography, is the only way you're getting at that data.
Except, of course, if you put malware in the next build of your mobile app, and grab it before it's encrypted. Which Signal easily could, and it probably wouldn't be spotted for weeks. Fundamentally, it's all about trusting other people.
I know of the concept of zero-knowledge proofs, but didn't know that the blockchain industry advanced cryptography a lot in that area. What are the practical applications of those new things? Or which new things are there to begin with? The Wikipedia article on zero-knowledge proof doesn't seem to say
One of the applications is ZK-Rollups [1], which allow developers to move heavy computation off a blockchain. The blockchain receives the results and only verifies proofs that they are valid. This is especially useful on Ethereum because its computational throughput is pretty low.
There's also ZCash [2], which is a cryptocurrency that lets you make untraceable transactions. This is in stark contrast to Bitcoin or Ethereum, where transaction information is available publicly to everyone. They have a series of blog posts [3] on the math that actually makes it work under the hood.
With zk you can prove you own something or sign transactions without people tracking your entire history. It is also used to help scale chains by rolling up multiple transactions into one proof.
> But using their private lookup method as scaffolding, the authors constructed a new scheme which runs computations that are more like the programs we use every day, pulling information covertly without sweeping the whole internet.
Doesn’t Google already accomplish this? Or is the key here that Google doesn’t do the sweeping in a covert way? So search boils down to two problems: building the index and using the index.
I understand it's inefficient, but could it be used by a well-resourced organization where confidentiality is extremely high-value and the database is small, such as intelligence, military plans, Apple's product plans, etc.?
How do people with pen and paper operate on 10k pieces of data? Which is honestly a rather small number. There's a reason we use computers and why statistics accelerated after its invention.
There are two use cases: (1) when the computation is proprietary so the user can’t run it, or (2) when the user doesn’t even want to trust their own hardware.
You run Google’s algorithm yourself and secretly pull data from the internet when necessary.
Yeah, isn't this the same as offline viewing? Why do you need a new algorithm for that? We've known about this forever. Obviously, if you download the database and access it, this is more secure than being online, but way slower. Regarding the library example, this too has easy solutions: check out many books at once, have a stranger check out the book, etc.
This article seems to do a poor job explaining what exactly this solves, only that it's revolutionary. The metaphors are not helpful.
How I interpreted this is that you download the Google search algorithm, then scan their database over the internet with that algorithm, which would take ages or millennia depending on how much data Google has.
You wouldn't need to download the entire database, and there would be a lot of uncertainty over which search result you actually used.
So: not private, not efficient, and not a solution to homomorphic data.
Basically a polynomial-factor hash of the index keys... basically you will need the entire DB plus the larger hashed keys. It doesn't help at all with implementing any of the outlandish claims.
I guess the merit is in proving the polynomial-factor construction they build on is secure, but that is not a game changer, as there are better alternatives and nobody has solved the claimed problems with them.
Wouldn't ChatGPT in homomorphic encryption mode do something like that already?
You put in a query - the system processes it in mysterious ways, and returns the answer in constant time.
This is super misleading and sort of ambiguous. Are they claiming they eliminated any side-channel information that can be used? I am confused here. The problem is that anyone can collect various side-channel information and use it as metadata… Fully homomorphic encryption only helps after you’ve established secure links, and to do that you will inevitably have to use a less secure link or system, which can itself be used to make inferences about queries… The real issue is we don’t know if the “secure” systems we use that we don’t control actually give us the privacy they claim to…
Homomorphic encryption and zero knowledge proofs are the most exciting technologies for me for the past bunch of years (assuming they work (I'm not qualified enough to know)).
Having third parties compute on encrypted, private data, and return results without being able to know the inputs or outputs is pretty amazing.
The general concept is "I want to keep this data private and confidential, but I want to be able to rent elastic compute for it". Previously, at the end of the day, the processor could only do useful work on unencrypted data.
But something like "analyze this list of customer records for fraud" or "analyze this proprietary data for trends" has previously either required a lot of trust that your cloud provider isn't going to siphon your data, or just required on-prem hardware that you cannot scale as easily.
If "math on encrypted data" works, we could keep our data confidential while still sending of batches of it to be number-crunched at a cloud provider.
Or even start talking about distributed / peer-to-peer computing, where a swarm of users (or, say, businesses that wish to cooperate) can send compute jobs to each other without having to trust that the other members weren't going to go snooping through the data that was sent.
And for that matter if I were a provider of some kind of computational service for hire, I (and my insurers) might feel a great deal of relief at the idea that we’re no longer having to sit on a big attractive ransomable pile of our clients’ data.
The idea is the result is encrypted too such that, when the person that holds the key gets it back they can decrypt it and see the result. So you can run an inference algorithm on data without ever having to decrypt it and see what the data actually is.
It seems to me that this intrinsically is vulnerable to side-channel attacks, but it will be interesting to see if we can avoid those with constant-time algorithms or something.
No, FHE is not intrinsically vulnerable to any side-channel attacks, by definition; the properties hold no matter how the "executor" executes the operations and they include the assumption that obviously they can see (or change!) everything that happens during the execution.
However, that effectively means that any FHE approach has to be "anti-optimized", always touching all the data that might get touched, always taking all the conditional paths, not just the worst case path as in constant time algorithms, since the execution must return the correct result for any encrypted input data.
It's not that you can "see if we can avoid those with constant-time algorithms or something", the basic table stakes (which all FHE methods actually implement) is the constant-time algorithm approach taken to the extreme and then squared.
It does have "some" unavoidable performance impact because of that.
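As a plaintext sketch of that "take every path" style (my own illustration, not any particular FHE library's API): instead of branching on data, both branch results are computed and combined with an arithmetic select, so the executor does exactly the same work for every input:

    def oblivious_max(a, b):
        # Under FHE, a, b and cond would all be ciphertexts and the
        # comparison would itself be a (costly) encrypted circuit.
        cond = int(a >= b)                   # 1 if a >= b, else 0
        return cond * a + (1 - cond) * b     # multiplexer: both inputs always used

    print(oblivious_max(3, 9))               # 9, with no data-dependent branch taken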
I suspect that you’re right, but my concern is more about unknown unknowns here: new tech like this often has failure modes we can’t imagine until the tech itself becomes commonplace.
Because, if you have a mechanism to run arbitrary computations on encrypted data, I’d be a bit concerned about an attacker running carefully crafted computations on the data and deducing information about the encrypted data based on how the ciphertext changes or the time/memory usage of the computation.
This isn’t really a particularly well-informed suspicion: it’s partly based on a sense that FHE is a “have your cake and eat it too” sort of technology that’s too good to be true, and partly based on the recent discoveries that fairly well-understood things like speculative execution in CPUs were vulnerable to serious side-channel attacks.
If running computations on encrypted data is enough to compromise the integrity of the encryption then that encryption algorithm is not even EAV secure, and EAV secure is a pretty low bar.
zk proofs are used in public blockchains to transmit millions in value so I would assume they aren't vulnerable to side channel attacks at least not on a superficial level.
A homomorphism is a function such that f(ab) = f(a) f(b). So you can imagine the cloud customer wants to compute ab, but they use a homomorphic encryption function, f, to ask the cloud provider to compute f(ab) given f(a) and f(b).
Then the customer decrypts f(ab). This doesn’t imply any weakness in encryption.
FHE is a bit stronger than what I’ve described, but that’s the idea.
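A toy way to see the f(ab) = f(a)f(b) idea concretely: textbook RSA is multiplicatively homomorphic. Tiny parameters, no padding, completely insecure, just to show the algebra:

    n, e, d = 3233, 17, 2753        # 3233 = 61 * 53, e*d = 1 mod phi(n)

    def f(m):                        # "encrypt": m^e mod n
        return pow(m, e, n)

    def f_inv(c):                    # "decrypt": c^d mod n
        return pow(c, d, n)

    a, b = 12, 7
    c = (f(a) * f(b)) % n            # the provider only ever multiplies ciphertexts
    print(f_inv(c))                  # the customer decrypts and sees 84 = 12 * 7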
The issue with this use-case is that the "I want to be able to rent elastic compute for it" is not a fundamental need or desire, but effectively a desire to save costs for hardware and other infrastructure.
Like, for a true need you could imagine someone saying "I really want this, no matter what it costs" - but you wouldn't do that for elastic compute if doing it locally was 100x cheaper. In a similar way, the "wish to cooperate" by sending compute jobs to each other is meaningful if and only if you can actually save effort this way. If there's a 100x overhead cost for FHE (and in reality it's far worse than a mere 100x), that doesn't make sense, and you'd simply be better off skipping distributed / peer-to-peer / cloud computing and buying and maintaining your own non-shared hardware, even if it's used only 5% of the time.
And I'm not sure if FHE can ever get to overhead rates so low that these use cases would start to make sense, like, there's no good reason to assume that a mere 10x overhead FHE is even theoretically possible.
No, because this is the very bleeding edge of the technology, there are only very limited (but very cool) tech demos. Nothing at scale yet.
One paper I used showed how X-rays can be analyzed without exposing the X-ray data itself.
This can be generalized more to any situation where one party owns a proprietary algorithm and another party owns sensitive input data but they don't trust each other.
Modern LLMs are actually the perfect example. You want to use chatgpt but they don't want to give you their model and you don't want them to see your input. If HME was more efficient, you could use it so that the person executing the model never sees your real input.
If it’s just that the parties don’t trust each other then the cost of HME has to be compared to the current “state of the art” which is contracts and enforcement thereof.
In practice, I don’t think those costs are that high because the rate of incident is low and the average damage is also low.
Yes there are outlier instances of large breaches but these seem like high profile aircraft crashes considering how many entities have sensitive data.
I feel like trust is a spectrum, and the promise of these techniques is that they reduce the need for trust in the first place.
We should consider what kinds of computational tasks today’s responsible parties (or their regulators, or their insurers) think of as too risky to casually trust to third parties under the status quo. For example with my block storage provably unintelligible if you don’t have the HSM I keep safely in my corporate dungeon, I’m comfortable not caring whose racks the encrypted blocks sit on. I’d have to vet those vendors a lot harder if they could read all my super secret diaries or whatever.
And, for that matter, it’s on the service provider side too, right? Even the contractual, spit-and-handshake pinky-swear-based mode of enforcement comes with significant compliance costs for service providers, especially ones operating in regulated industries. Perhaps it’s not too much to hope that effective and efficient HME techniques might reduce those service providers’ compliance costs, and lower the barrier to entry for new competitors.
I’m reminded how even non-tech people in my life became much more willing to trust their credit card details to online retailers once they felt like a little green lock icon made it “safe”. Of course a LOT changed over that same period, but still: the underlying contractual boundaries didn’t substantially change—in the US the customer, then as now, has only ever been responsible for a certain amount of fraud/theft loss—but people’s risk attitudes updated when the security context changed, and it opened up vast new efficiencies and lines of business.
It’s not too much to hope that HME reduces those compliance costs. However, I believe it is too much to assume there will be any material adoption before it can demonstrate that reduction.
Reduction of trust is not a value add, it is a cost reduction. Maybe that cost is blocking a valuable product/service but either that product/service’s value is less than the current cost of trust OR trust has to be far more costly in the context of the new product/service.
It’s only the latter that I find interesting, which is why I tend to be pretty hard on suggestions that this will do much for anything that currently exists. At best, it will improve profits marginally for those incumbents.
What is something where the price of trust is so catastrophically high in modern society AND HME can reduce that cost by orders of magnitude? Let’s talk about that rather than HME.
Data incidents cause more problems than can easily be resolved with a contract lawsuit. Perhaps the data was siphoned by a 3rd party that hacked your vendor, or a malicious insider at your vendor sold it to a competitor. Sure, you can recoup some losses by suing your vendor for breach of contract, but once the data is leaked, it's never secret again.
And then there's the example of businesses that work with lots of confidential customer data, like banks or doctors. Again, you can sue your vendor for breach of contract if they behave irresponsibly with your data, but your customers may not care; you're going to suffer a hit to your reputation regardless of whether or not the breach was your fault.
You can say it’s insufficient but it is what it costs them today.
I guess the better comparison is that cost in a financial statement plus some expected increase in revenue due to a “better” product.
Again, I think you are correct in your analysis of the improvements but that contributes little to the revenue as explaining the benefit to most customers requires framing your existing product as potentially harmful to them. Educating them will be hard and it may result in an offsetting realization that they were unsafe before and as a result were paying too much.
Not really, you would phrase it to your customers or investors as a way of mitigating risk. You can probably apply a price tag to that risk by estimating the impact of a data incident vs. the likelihood of one happening. Different businesses have different risk appetites, and I would hope that a board or C-Suite is thinking about what level of risk is acceptable for their business.
Mitigating risk is covered in the cost reduction side.
Yes the C-Suite is thinking about and mitigating risk. They probably know the exact number for a given class of risk in terms of current mitigation costs. You have to beat that by a margin wide enough for them to take action.
Even if you know their numbers and know you beat it by enough to warrant the deployment you will still get bumped if someone sells them a path to increasing revenue.
The out I gave was to frame it as value added (more revenue) and that is where you risk devaluing your current product.
If you frame it as cost reduction you are capped in both price and interest by the current, necessarily acceptable, levels of risk and cost of mitigations.
> One paper I used showed how X-rays can be analyzed without exposing the X-ray data itself.
Is there any hope of getting the performance to the point where something like that would be feasible? I’d imagine the raw data itself would be so big that the performance for anything non trivial would be unworkable.
It lets you have someone add up your numbers, and give you the sum, without knowing the input numbers or their sum. Basically, any time you want someone else to host the data, while you can also do queries/computations on it.
That now generalizes to any computation (not just addition), because there is a logically complete set of primitive operations for which fully homomorphic encryption (FHE) can be done.
Caveats:
1) You can't actually do e.g. while loops with run time unknown. All such FHE computations are done by generating a fixed size boolean circuit for a given input size. It's "Turing-complete" in the sense that you can size up the circuit to any input size, but it wouldn't directly implement an unbounded while loop -- you have to generate a different "program" (circuit) for each input size.
2) All such computations must have a fixed output size -- else it leaks information about the data inside. So allowable computations would be like, "give me the first 30 bytes of the result of querying for names in the set that begin with A". If there are no results, the output still has to be 30 bytes.
3) For similar reasons, any FHE computation must look at all bits of the input (otherwise, it leaks info about what's in them). "See if this value is in the set" can't just jump to the relevant section (see the sketch after this list).
4) The untrusted third party knows what computation you're doing (at least at a low level), just not the data it's being performed on or the result.
5) As you might have expected from 3), there's a massive blowup in resources to do it. There can be improvements, but some blowups are inherent to FHE.
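To make 2) and 3) concrete, here's a tiny plaintext sketch (my own illustration, not from the paper) of the access pattern an FHE circuit is forced into: every element gets touched and the output is always the same fixed size:

    def oblivious_contains(haystack, needle):
        found = 0
        for item in haystack:               # every element is visited,
            found |= int(item == needle)    # no early exit on a match
        return found                        # output is always exactly one bit

    print(oblivious_contains([3, 9, 27], 9))    # 1
    print(oblivious_contains([3, 9, 27], 10))   # 0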
In addition to the other answers, any situation where you want to prove identity but want to preserve privacy at the same time. For example you could prove you're an adult citizen with ID without revealing your ID number, picture, or any private information whatsoever.
There are examples in cryptocurrency, where ZK proofs are all the rage. One use case is validating ownership of some amount of currency without knowing the actual amount, or verifying that a smart contract was executed correctly without revealing the specifics of the contract's internal state. Some blockchains use this in production, such as Polygon's zkEVM.
There are a few that I've thought about but the first one that comes to mind is proving you're 21+ to an alcohol store cashier without handing over your ID with all your personal info. You could just give them a ZK proof that returns a true/false and they could check it against encrypted data (signed by the DMV).
Why is that preferable over a message attesting “over 21” signed by the DMV?
The hard parts here are retrofitting society to use a digital ID and how to prove that the human in front of you is attached to that digital ID.
The solutions there all seem like dystopias where now instead of a bouncer looking at your ID for a few seconds, technology is taking pictures of you everywhere and can log that with location and time trivially.
It doesn't have to be a digital ID, it can just be encrypted data encoded on a regular ID on a QR code.
Age depends on timestamp. The encrypted data is stored on the ID and signed by the DMV, with a function that can be run by the bouncer's scanning machine that plugs in a now() timestamp, and receives a boolean in return. The DMV doesn't even need to be involved after the issuance of the ID and no network access is needed for this calculation.
No one's location was tracked and no one's picture was taken and now a bouncer who fancies you can't turn up at your house after glancing at your ID.
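For comparison with the signed-attestation baseline raised a few comments up (a message attesting "over 21" signed by the DMV), a minimal sketch of that simpler flow might look like the following. The key names and attestation format are made up, and unlike the ZK or encrypted-data variants it reveals whatever is in the signed message:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Issuance side (DMV): sign a fixed attestation and embed it in the QR code.
    dmv_key = ed25519.Ed25519PrivateKey.generate()
    attestation = b"holder:4f2a...;claim:over_21"     # hypothetical format
    signature = dmv_key.sign(attestation)

    # Scanner side (bouncer): verify offline with only the DMV's public key.
    def scanner_accepts(attestation, signature, dmv_public_key):
        try:
            dmv_public_key.verify(signature, attestation)   # raises if forged
        except InvalidSignature:
            return False
        return b"claim:over_21" in attestation

    print(scanner_accepts(attestation, signature, dmv_key.public_key()))   # True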
Age verification (without leaking other PII) was the illustrative scenario for W3C Verified Credentials (it lets you use a validating authority to sign specific subset of your schema).
There are lots of other ways to solve the problem for verification/signing use cases tbh. Homomorphic encryption shines best when you are looking at more complex calculations than just a Boolean result - such as tax calculations.
Can you submit your financial information and have your taxes calculated without revealing the amounts involved? Can you apply filters to an image without having the image readable by the server? It essentially allows us to “trust a remote server” in scenarios where one wouldn’t usually.
Still doesn’t quite justify homomorphic encryption if the computations involved are mundane (like tax calculations). The true use case is when the computation itself is proprietary, so that users don’t want to reveal their data but the provider also doesn’t want to divulge their algorithm. This is why ML models or finely tuned search engines are the examples most commonly cited, but of course those are also far outside the scale that could be achieved.
> No one's location was tracked and no one's picture was taken
I assume you mean the bouncer didn't take a photo, they just looked at the DMV photo embedded in the ID and did a visual comparison in their meat brain.
If there's no photo anywhere, how does the bouncer know I'm not using someone else's ID?
How do you know that the bouncers scanning machine didn’t log the interaction?
The whole value prop is built on not trusting that bouncer and by extension their hardware.
Everything would have to be encrypted leading to the bouncer also needing to establish that this opaque identifier actually belongs to you. This is where some picture or biometric comes into play and since the bouncer cannot evaluate it with their own wetware you are surrendering more data to a device you cannot trust.
They also cannot trust your device. So, I don’t see a scenario where you can prove ownership of the ID to a person without their device convincing them of it.
All sorts - anything where you today give your data to some SaaS (and they encrypt it at rest, sure), and they then provide you some insight based on it or allow you to do something with it (by decrypting it, crunching numbers, spitting out the answer, and then encrypting the source data again) could potentially be done without the decryption (or the keys to make it possible).
Concretely (and simplistically) - I give you two numbers, 2 & 2, but they're encrypted so you don't know what they are, and you add them together for me, but that's also encrypted so you don't even know that the sum of the inputs I gave was 4. It's 'computation on encrypted data', basically.
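A minimal sketch of exactly that "2 + 2 on ciphertexts" example, using a toy Paillier-style scheme (tiny parameters, not secure, only to show that multiplying ciphertexts corresponds to adding the hidden plaintexts):

    import random
    from math import gcd

    p, q = 11, 13                                   # toy primes
    n, n2 = p * q, (p * q) ** 2
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)
    g = n + 1

    def L(x):
        return (x - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)             # decryption constant (Python 3.8+)

    def enc(m):
        r = random.randrange(2, n)
        while gcd(r, n) != 1:
            r = random.randrange(2, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        return (L(pow(c, lam, n2)) * mu) % n

    a, b = 2, 2
    c = (enc(a) * enc(b)) % n2    # "you" add them for me, seeing only ciphertexts
    print(dec(c))                 # I decrypt and see 4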
For example, the Numerai hedge fund's data science tournament for crowdsourced stock market prediction gives out their expensive hedge-fund-quality data to their users, but it's transformed enough that the users don't actually know what the data is, yet the machine learning models still work on it. To my knowledge it's not homomorphic encryption, because that would still be too computationally expensive, but it would be an ideal application for this.
Whatever Numerai has done to their data, there is no indication that they actually transformed it in a way that would hold up to a theoretical cryptographic adversary.
It would come in handy with not requiring any trust of the cloud that is running your servers. For protected health information this could potentially be a major breakthrough in patient privacy.
It would be hideously expensive due to the compute required and the total inability to monetize the data. Some people value their privacy that much; most don’t.
In an extreme case could Google use a mechanism like this to deny themselves direct access to the data they collect while still bidding in ad auctions based on that information?
As most likely lots of cryptographers read this, I have a question. What's an efficient way to store encrypted data in a database (only specific table columns) and decrypt them on the fly while querying the data? Using postgres with pgp_sym_encrypt(data, key) seems very slow.
The problem with all these privacy preserving cryptographic breakthroughs is they are never deployed in practice.
Just look at cryptocurrency. We've known how to create a privacy-preserving, truly distributed, cryptographic replacement for cash for decades, and what we end up with instead is Bitcoin and the like, which is only pseudonymous and ends up being centralised anyway in order to interact with the fiat world.
There's no demand for this tech in current society.
David Chaum [1], a famous cryptographer, founded the International Association for Cryptologic Research (IACR). He published many articles on digital cash, anonymous cash, etc. He had patents on them; he even founded a company on that concept. However, that company failed.
Blind signatures are nice and verifiably unlinkable, but digital bearer certificates require a trusted central "mint" to issue and reissue them. This central point of failure can and inevitably will... fail, as Chaum's company (DigiCash) did. And it can also inflate the currency without anyone knowing.
Bitcoin was the result of cypherpunks going back to the drawing board to create a decentralized solution. Unfortunately bitcoin sacrificed unlinkability for decentralization. Modern "privacy" cryptocurrencies utilizing zero-knowledge proofs are advancing the state-of-the-art in terms of having both properties.
A decentralized DBC "mint" is theoretically possible. However, there are two more downsides to the blind signature approach: (1) auditing is impossible because there is no history, so detecting whether mint node(s) have colluded to cheat, or catching an inflation bug, is an unsolved problem. (2) Arbitrary amounts are not supported, so it is necessary to create fixed-denomination "notes", which then add size and complexity to every transaction.
source: been there, done that. bought the t-shirt.
That's not true. Sometimes these technologies get pulled up to the high side, or are only developed there, so the public doesn't hear about them. You should read the IETF paper on the crypto wars. Crypto and ZKPs are among the known examples of attempts to keep these technologies out of the public's hands.
Doubly Efficient Private Information Retrieval and Fully Homomorphic RAM Computation from Ring LWE
https://eprint.iacr.org/2022/1703