One interesting thing about this: You are performing known operations on unknown data. But theoretically you could simulate a generic computer whose program is encrypted data as well, thus enabling unknown operations on unknown data. However, at speeds measured in milliseconds per gate, we are a long way from that being practical right now.
I wonder if you could write your programs in such a way that the bulk of the computation was done publicly, and only sensitive ops were shunted out to the secure processing network.
Maybe in a language similar to Erlang, but instead of writing code that's amenable to sharing between multiple CPUs, you'd be sharing between multiple degrees of privacy.
> I wonder if you could write your programs in such a way that the bulk of the computation was done publicly, and only sensitive ops were shunted out to the secure processing network.
Do you have some example of what such an application may be?
Encode computations as NP-hard graph optimization problems. Use the public computation resources to solve the hard part, but keep the labels of the nodes and edges on the slower encrypted computing resource.
1) Secretly compute a list of encrypted tags and associated, unencrypted scores.
2) Sort by scores.
3) Convert the highest N tags back to encrypted data.
As a plus, steps 1 and 3 should be embarrassingly parallel (a rough sketch of this split follows below).
You are, of course, leaking a count, which can be important (but your opponent can already make inferences based on the amount of data you have stored...).
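To make the split concrete, here is a minimal sketch of steps 1-3 above. The `fhe_encrypt`/`fhe_decrypt` helpers are throwaway stand-ins (a toy XOR, not a real FHE API); the only point is which side sees what: the public side handles opaque tags plus plaintext scores, the private side does the encryption and final decryption.

```python
# Toy sketch of the public/private split. The "encryption" here is a throwaway
# XOR stand-in, NOT real FHE -- only the data flow matters.

def fhe_encrypt(tag_id, key):      # hypothetical stand-in for FHE encryption
    return ("ct", tag_id ^ key)

def fhe_decrypt(ct, key):
    return ct[1] ^ key

KEY = 0x5A
records = [(1, 17), (2, 42), (3, 5)]   # (tag_id, score)

# Step 1 (private side): encrypted tags paired with plaintext scores.
pairs = [(fhe_encrypt(tag, KEY), score) for tag, score in records]

# Step 2 (public side): sort on the plaintext scores; the tags stay opaque.
pairs.sort(key=lambda p: p[1], reverse=True)

# Step 3 (private side): decrypt only the top-N tags.
top_tags = [fhe_decrypt(ct, KEY) for ct, _ in pairs[:2]]
print(top_tags)   # [2, 1] -- the count of results is all that leaks publicly
```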
For a layman like me, downloading and skimming the reference papers at the bottom of the page, starting with Craig Gentry's 2013 paper [1], really helped.
What operations can I do homomorphically with this library? The page says
"With the cloud-keyset, the library can evaluate a net-list of binary gates homomorphically at a rate of about 50 gates per second per core, without decrypting its input. It suffices to provide the sequence of gates, as well as ciphertexts of the input bits. And the library computes ciphertexts of the output bits."
but what does "evaluating a net list of binary gates" come to in practice? What operations could I expect to be able to perform?
"The library supports the homomorphic evaluation of the 10 binary gates (And, Or, Xor, Nand, Nor, etc…), as well as the negation and the Mux gate."
So you'd program it by designing a digital circuit using AND, OR, and NOT gates, somewhat similar to how you would make a circuit with physical components.
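As a concrete (and hedged) illustration of that, here is a plaintext model of a ripple-carry adder built only from two-input gates, the kind of netlist you would hand to such a library. These are ordinary Python bit operations, not the TFHE API; the 50 gates/second figure is from the page, everything else is an illustrative assumption.

```python
# Plaintext model of a gate-level adder. In the homomorphic setting, each gate
# call below would be one gate evaluation on ciphertext bits.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, cin):
    """One bit of addition, expressed as 5 two-input gates."""
    p = XOR(a, b)
    s = XOR(p, cin)
    cout = OR(AND(a, b), AND(p, cin))
    return s, cout

def ripple_add(a_bits, b_bits):
    """Add two equal-length bit lists (least-significant bit first)."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# e.g. ripple_add([1, 1, 0, 1], [1, 0, 1, 1]) -> ([0, 0, 0, 1], 1)   (11 + 13 = 24)
# A 64-bit add is 64 * 5 = 320 gates; at ~50 homomorphic gates/second per core
# that is on the order of 6-7 seconds for a single addition.
```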
You have millions, maybe billions of these gates in your CPU, each capable of doing millions of calculations for each tick of your homomorphic gate, so the "fast" in the title should be taken with a grain of salt. Your homomorphic circuit could take several seconds just to add two 64-bit numbers.
It's fast relative to the previous state of the art in homomorphic encryption. But the path to practical applications is always paved with incremental improvements.
Why would we expect to need a full modern CPU? An 8080 had a few thousand gates, and that's still general purpose. Something built for a specific task might well be substantially smaller. Still slow, but possibly realistic for some tasks.
It's a long way from even an 8080. The 8080 had a couple thousand gates and ran at 2+MHz. At 50 gates per second, this would run an 8080 at maybe 20 millihertz.
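The back-of-the-envelope arithmetic behind that estimate (the gate count per clock is a rough assumption):

```python
# Ballpark only; the real number depends on how the 8080's logic maps to
# two-input gates and how much of it must be re-evaluated per clock.
gates_per_cycle = 2500                       # assumed: "a couple thousand gates"
homomorphic_gate_rate = 50                   # gates per second (original paper)
seconds_per_cycle = gates_per_cycle / homomorphic_gate_rate   # 50 s per clock
clock_hz = 1 / seconds_per_cycle             # 0.02 Hz, i.e. about 20 millihertz
print(clock_hz)
```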
Yeah, I didn't mean to imply that it'll happily emulate an 8080 at useful speeds. I wanted to illustrate the orders of magnitude between "minimal something useful" and "a modern CPU", and an 8080 is a reasonably familiar point on that spectrum.
Anything you can do in 10 or 20 cycles of an 8080, you can now do to secret data in a few hours. And you can probably do much, much faster than that if you design a special purpose circuit. It's still sloooow by the computational standards we're used to. But it might be fast enough that there are useful applications.
I wonder how feasible it would be to implement this on an FPGA to accelerate the whole thing. This could be a killer use of the FPGA instances on AWS and similar cloud services.
Well, no, more likely the other way around - the most practical way of building programs for these systems would probably be through VHDL/Verilog with a specialized toolchain.
(edit: although actually FPGAs could be useful simply because they can be very good at running FFTs (as are GPUs), not because of the gate-programmable nature of them)
HDLs have no special advantage in evaluating boolean operations in simulation. Most simulators are significantly slower than native code unless they use the same compiler infrastructure as mainstream languages, and even then they still carry the overhead of maintaining the simulation environment, which serves no purpose in this application.
They don't describe logic gates directly. They are used to formulate boolean expressions, which are mapped into gates or an equivalent structure by synthesis software. Outside of special circumstances, no one is manually crafting individual gates when writing in an HDL; that is left to the back-end tooling. If the target isn't hardware, then normal programming languages are just as good. There is no magic speedup when HDLs are simulated, and you just pay the penalty of keeping track of simulated time and the sequencing of events, which don't matter if you just want to compute an answer.
I haven't studied this much, but to speculate, it sounds like each gate is a boolean expression (AND, OR, XOR, NOT). So this lets you compute boolean functions on bits. I don't see a way to compute a loop, though, other than perhaps to unroll it and send lots of instructions.
Also, 50ms per bit operation, though apparently an improvement, sounds extremely inefficient compared to normal programming.
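For what it's worth, a loop whose bound is known when the circuit is built can simply be unrolled into a fixed sequence of gates. A minimal plaintext sketch (ordinary Python, not the library's API):

```python
# Unrolling a bounded loop into a straight-line gate sequence. A loop whose
# trip count depends on encrypted values has no direct circuit equivalent;
# you must unroll to the worst-case bound.

def OR(a, b):
    return a | b

def any_bit_set(bits):
    """OR-reduction: the loop unrolls into len(bits) - 1 OR gates,
    every one of which is evaluated regardless of the data."""
    acc = bits[0]
    for b in bits[1:]:
        acc = OR(acc, b)
    return acc

print(any_bit_set([0, 0, 1, 0]))   # 1, computed by 3 OR gates
```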
In general it's much easier to reason about Boolean circuits because they always have an answer, unlike arbitrary programs, which may never halt, for example, so you see a lot of theoretical results focusing on Boolean circuits (sometimes with limitations on the depth) instead.
Early on it was thought that considering circuits instead of regular programs could result in "polynomial with advice" algorithms for NP-complete problems, but results such as the Karp-Lipton theorem have shown this to be unlikely.
The current version of the library achieves 13ms per binary operation on a core-i7 instead of 50ms in the original paper.
The improvements are described in the new paper: https://eprint.iacr.org/2017/430. Hopefully, the packing techniques will also be present in the next version of TFHE.
You can compute a loop by essentially building a CPU out of gates. But you're right that this would be impractically slow even with this latest improvement in performance.
It's not clear to me that this is still "impractically slow" for all purposes. An 8080 is a few thousand gates. What can you do on an 8080 in 100 cycles? You can now do that in about an hour on secret data by naively simulating the chip. That actually sounds like it might make some things practical that weren't before...
Nor I, TBH, but I've not been thinking about it very hard or very long. It just surprised me that we're no longer talking about months or years, where things are more obviously impractical for just about everything.
You can compute anything, given enough time of course.
The CPU running on your device boils down to a netlist describing a bunch of gates and their interconnections. This is somewhat of an oversimplification, of course.
I think it means that this implementation of homomorphic encryption relies on polynomial computations being done on the plane of a geometrical torus? See the relevant paper here:
Sounds very interesting! I'm going to have to look at it in more detail. I'm just wondering how it compares to something like https://github.com/shaih/HElib
The best analogy is that HElib is a homomorphic GPU while TFHE is a homomorphic CPU. Elementary operations (addition, multiplication modulo p) with HElib are slower (especially bootstrapping), but are performed on a huge vector of data simultaneously. Conversely, elementary operations with TFHE (binary gates) are extremely fast, but deal with a single bit.
In other words, if the application you are aiming at is suitable for running on a GPU, go for HElib; if it would be faster on a CPU, use TFHE.
I'm not an expert on FHE, but is there a way to leverage this on current RDBMS systems, for example?
It says the library can evaluate binary gates. If we would like to run a SQL query for example, how do we translate it to a series of gates? Is it possible?
Or is this so low level that we basically would need to build our own "processor" with binary gates and then build the rest of the stack on top of it so we can, in the end, run a query?
Can anyone shed some light on how exactly can we take advantage of this library?
Fully homomorphic encryption isn't tremendously useful for database queries - you end up having to put the entire database in a massive FHE ciphertext, then expressing the query as a circuit which requires time linear in the size of the database to return a result.
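A hedged sketch of why that circuit is linear in the database size (plaintext Python modelling the gates; in FHE every bit operation below would be a homomorphic gate, and no row can be skipped):

```python
# Encrypted "SELECT value WHERE key = q" as a circuit: every row gets an
# equality test and a select, because the circuit cannot branch on which row
# matched. Toy plaintext model; keys are assumed distinct.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b
def NOT(a):    return 1 - a

def eq_bits(xs, ys):
    """1 iff the two bit lists are identical (XNORs folded with ANDs)."""
    acc = 1
    for x, y in zip(xs, ys):
        acc = AND(acc, NOT(XOR(x, y)))
    return acc

def lookup(query_bits, table):
    """table: list of (key_bits, value_bit) rows. Touches every row."""
    result = 0
    for key_bits, value_bit in table:
        hit = eq_bits(query_bits, key_bits)
        result = OR(result, AND(hit, value_bit))
    return result

table = [([0, 0], 1), ([0, 1], 0), ([1, 0], 1)]
print(lookup([1, 0], table))   # 1
```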
Although slower, it feels like TFHE would provide better security against an active adversary. So, at the least, simple server-side encrypted queries should be possible.
Can you explain how you would use FHE instead of garbled circuits in the Arx range query data structure? I don't see how that would work - wouldn't you have to (re-)introduce interaction to let the server learn intermediate results?
TFHE is not even in the same galaxy as CryptDB. Comparing the two is like comparing an apple and a 2007 Honda Civic. They're polar opposite approaches to executing queries on encrypted databases.
That's a seriously cool thing to have in the toolbox!
Does it produce only encrypted output, or can it optionally produce unencrypted results also? Can it optionally use public data as an input?
Also I wonder whether it could be accelerated on GPUs. I worked with a guy who accelerated a standard FFT on CUDA by 100-1000x for scientific computations (and later NVIDIA copied his code, lol). I wonder if something similar can be done here.
The point is to be able to give encrypted data to a third party and have them do operations on that data (ex. sum all the values) and give you an encrypted result back.
tl;dr: computations in the cloud with encrypted, private data
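As a toy illustration of that "sum the values for me" workflow, here is an additively homomorphic one-time pad, where E(m) = (m + k) mod M. This is deliberately not TFHE; it only shows the shape of the exchange: the third party adds ciphertexts it cannot read, and only the client can decrypt the total.

```python
import random

# Toy additively homomorphic scheme with a fresh pad k per value.
# Illustrates the workflow only; a real deployment would use a proper scheme.

M = 2**32
values = [12, 7, 30]                                    # the client's private data

# Client: encrypt each value under its own pad.
pads = [random.randrange(M) for _ in values]
ciphertexts = [(v + k) % M for v, k in zip(values, pads)]

# Third party: sums the ciphertexts without learning anything about the values.
encrypted_sum = sum(ciphertexts) % M

# Client: removes the pads from the result.
plain_sum = (encrypted_sum - sum(pads)) % M
print(plain_sum)   # 49
```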
a) smart contracts controlling something within the encrypted data based on publicly available data;
b) encrypted key-value store where you traverse the tree structure based on encrypted query and encrypted tree buckets, but get a publicly available result about which bucket is next (similarly to how it was done in Arx paper using garbled circuits).
> This work leaves much room for improvement, however. For example, the throughput and latency can be significantly improved by using GPUs and FPGAs to accelerate the computation.
> We demonstrate CryptoNets on the MNIST optical character recognition tasks. CryptoNets achieve 99% accuracy and can make more than 51000 predictions per hour on a single PC. Therefore, they allow high throughput, accurate, and private predictions.
It looks like the slowest routines are FFT and GEMM (CPU-bound).
I wonder if one can find DSPs easily for racked servers. Maybe hardware H.264 encoders can be repurposed that way?
I obviously don't know what I am talking about! Would an FPGA implementation accelerate execution?
Open Mined (http://openmined.org/) are looking at this from the other angle - sharing the encrypted neural network with their users so that they can train it without sharing their data at all - the encrypted gradients are computed by the user and collated and decrypted by the company that wants to train the neural network.
Yes, but not to the same degree. Numerai uses structure-preserving encryption / neural encryption. This allows people to use any existing machine learning algorithm on the data. For fully homomorphic encryption you would need specialized algorithms. These are way more difficult to design. They also run slower.
Worth pointing out that Numerai actually doesn't use encryption in any standard sense (including structure-preserving encryption), but instead seems to be using some heuristic method of obfuscating their data.
Their (closed-source) method of obfuscating their data apparently does have the property that it preserves the structure of the data, but calling it "structure-preserving encryption" is misleading imo since it risks confusing it with standard notions of encryption and structure-preserving encryption which have much stronger security guarantees.
(Their marketing seems to encourage this conflation by, for example, citing academic advances in standard notions of homomorphic encryption and SPE and implying that these advances have enabled Numerai's technology)
They switched from structure-preserving encryption to neural encryption (using a neural net's layer activations).
> Just a few months ago this package was released by Louis Aslett at Oxford http://www.louisaslett.com/HomomorphicEncryption/. Louis helped me use his package to do Fan and Vercauteren homomorphic encryption on my dataset. Because the ciphertexts are polynomials it's not too easy for an average data scientist to use the data. That's why I came up with more chill ways of encrypting Numerai's data that the article mentions, like order-preserving encryption. There's a security vs. ease of use trade-off, for sure. But homomorphic encryption is a real thing.
Thanks for that link! I was never able to find any details from someone who works for Numerai (or claims to at least)
I still don't think it's fair to market their method as comparable to Aslett's scheme or "standard" notions of homomorphic/order-preserving encryption, no matter how "chill" they are :)
Do you think neural encryption is closer to encryption? GAN-style: One network encrypts while preserving structure, another network tries to reverse engineer to the original features.
Edit: No specific sources for what Numerai is using, but in general: https://arxiv.org/abs/1610.06918 "Learning to Protect Communications with Adversarial Neural Cryptography".
Edit2: Yes, in general. I would say "yes, this is a valid form of encryption". But I do agree that their marketing was perhaps a bit too optimistic. I have no problem calling it "obfuscation" either (I just think their method of "obfuscation" is way more advanced than removing headers and normalizing within 0-1).
I've never heard of neural encryption before, do you have a good source for me?
From the Wikipedia page for "Neural cryptography", it seems like there's some success in using NN's for cryptanalysis, but not for constructions...
Edit: Do you mean the Google GAN experiment? (https://arxiv.org/pdf/1610.06918v1.pdf) Ahh ok, well at least for this there looks like an attempt at defining a security model (security against some other NN). I don't really believe the security model is realistic (how do we know NN's are really that effective as adversaries?), but at least there is a model, so calling that "encryption" sits somewhat better with me. I'm pretty sure this is not what's being used by Numerai since it seems like it would not result in ciphertexts with the structure necessary to perform ML operations on.
Edit2: Maybe you're right and it is more advanced. In any case, as a crypto nerd I wish they would disclose what they are actually doing / the rationale instead of tantalizingly suggesting that they have made (what would be) a breakthrough in a practical use case of advanced encryption schemes, but not saying how.
This is a great and really important point. But shouldn't warranting the "encryption" label come down to the security guarantee of the algorithm in question, in the use cases it's meant for? I don't see why non-traditional data obfuscation algorithms can't be equally valid methods of "encryption" just because they are architecturally different. After all, there are plenty of traditional encryption approaches that have been deprecated and deemed insufficiently secure. So it'd be a bigger problem if the definition of encryption became so narrow that practitioners assume security guarantees based on use of the term alone.
IMO the key is that new encryption methods demonstrate levels of security in their intended use cases that users deem sufficiently strong. I agree that all closed-source methods leave room for misleading statements (and I won't speak to Numerai as I don't have any additional insight on their approach). But any algorithm can and should be benchmarked--generally and vs. common standards--to avoid confusion. Serious commercial & public sector users will be quick to ensure so, especially in the absence of massive social proof.
My thought here is that if we draw the line on what warrants "encryption" by saying it's got to be a traditional key-based system, or sufficiently similar to existing approaches, we risk stifling innovation in the space by denying new entrants an industry-standard term that buyers are trained to seek. Non-traditional doesn't necessarily mean non-secure. Would love to hear your thoughts on this.
I think it's appropriate to restrict usage of the word "encryption" to methods that come with some justification for their security.
I don't think that doing this stifles innovation (In fact, I think requiring crypto innovations to have security justifications is probably better overall for innovation in the field)
Your specific statement was valid and good-natured, so I didn't mean to attack you here. All I'm arguing against is the adoption of a legal or de-facto definition of "encryption" based on architectural similarity to currently popular methods, rather than actual evaluation in situ. Most all practical applications of crypto trade security for functionality to some extent, and requirements for one or the other vary based on the use case. Future security gains, moreover, won't likely all be made on traditional grounds (e.g. increased susceptibility to brute force hacks). That's why I'm hesitant to support statements like "X company is misleading us by using the term 'encryption' because its closed-source approach doesn't appear to adhere to conventional notions on cursory inspection."
Much practical innovation comes from closed-source applications whose peer review comes in the form of commercial lab tests. Overly ossified technology standards & labels often force CIOs / CTOs / CISOs to build artificial barriers into their corporate procurement processes for optical reasons. In addition to engendering "check-the-box" complacency, these barriers absolutely stifle startup-driven innovation.
This is very interesting from an academic/theory point of view.
There currently aren't a lot of practical use cases where we can afford a performance loss of ~100,000,000x (your homomorphic crypto algorithm is going to run on the order of ~Hz on a ~GHz CPU).
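Roughly where a figure of that size comes from (both rates below are assumptions, not measurements):

```python
# Order-of-magnitude comparison only.
fhe_seconds_per_gate = 13e-3          # ~13 ms per bootstrapped binary gate
native_seconds_per_bitop = 1e-9       # ~1 ns for a native logic op on a GHz core
per_gate_slowdown = fhe_seconds_per_gate / native_seconds_per_bitop
print(per_gate_slowdown)              # ~1.3e7 per gate
# Emulating one word-sized native instruction takes tens to hundreds of gates,
# which is how you end up in the ~1e8-1e9 range.
```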
Homomorphic programs are designed to run without knowledge of the key. You have to process every piece of data in basically the same way. So I'm not entirely sure where you think the side-channel attack would arise.
My first guess would be conditionals, but I'd guess there is no real branching in the execution of operations on data. Another would be evaluating comparisons, but those aren't very easy to do in bitwise terms, and you can't read the output either.
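A hedged sketch of the "no real branching" point: a conditional is compiled into a Mux, so both arms are evaluated for every input and the work done never depends on the secret selector. This is a plaintext analogue of the library's Mux gate, not its API.

```python
def MUX(sel, a, b):
    """1-bit multiplexer: a if sel == 1 else b, built from AND/OR/NOT."""
    return (sel & a) | ((1 - sel) & b)

# "out = x if cond else y" with no data-dependent control flow: the same gates
# run in the same order whatever cond is, so timing reveals nothing about it.
cond, x, y = 1, 0, 1
out = MUX(cond, x, y)
print(out)   # 0
```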
But the thing about this setting is that you don't possess the secrets, so you can't reveal the secrets by noticing how long things take.
An analogy to think about might be blind signatures. In blind signatures you sign a blinded token and then the other party can unblind it to get a valid signature from you over a message whose content you don't know. This is classical public-key cryptography. In that case there is a secret key, and a timing attack against someone who can observe the signature creation might reveal that key. The connection between the blinded and unblinded message is also meant to be secret, and a timing attack against someone performing the blinding or unblinding operations might also reveal that relationship.
However, there is no timing attack that the signer can perform against itself to reveal the relationship between the blinded and unblinded messages, and there is no timing attack that the other party can perform against itself to reveal the secret key.
I think this analogy holds up for most purposes (indeed, blind signatures can be viewed as an extremely specific, narrow kind of homomorphic encryption), but I'd be happy to hear corrections if someone can see a way that it doesn't.
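For reference, the analogy in code: a textbook RSA blind signature with toy parameters (no padding, tiny key; illustration only). The signer produces a valid signature on m without ever seeing m unblinded, so no amount of self-measurement during signing can reveal it.

```python
import math

# Textbook RSA blind signature with toy parameters -- NOT secure as written.
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))        # signer's private exponent

m = 42                                    # requester's secret message
r = 7                                     # blinding factor, coprime to n
assert math.gcd(r, n) == 1

blinded = (m * pow(r, e, n)) % n          # requester blinds m
blind_sig = pow(blinded, d, n)            # signer signs the blinded value
sig = (blind_sig * pow(r, -1, n)) % n     # requester unblinds

assert sig == pow(m, d, n)                # a valid RSA signature on m
```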
the "side channel" i am referring to is in however this impliments this mixing you refer to.
If you can observe how the internal state is changing given a cloud keyset, you should be able to infer the secret key, in the same way you can infer the IV used in a Mersenne Twister from like 27 cycles (or whatever).
Downvotes and promises definitely won't reduce my skepticism.
The "cloud keyset" is another name for "public key". Usually, public key enables encryption but no decryption, here, the cloud key enables computations on ciphertexts, but not decryption.
Like in public key cryptography, you can give an RSA public key to an adversary; he can use it as much as he wants, and he will not be able to get any information on the private key (in any practical time). Most importantly, the cloud key is not secret, nor obfuscated: the cloud already knows each bit of it (side-channel attacks are not relevant).
The "cloud key" is the public part of the key given to the cloud so that he can perform his operations (circuit, bootstrapping). There are strong security proofs that show that it is impossible to recover the secret key from the cloud key.
In the case of TFHE, if there was a polynomial algorithm that would recover even a single bit of the secret key given the cloud key, this algorithm could be used to break the LWE problem, and also worst case instances of lattice problems (which are much harder than factorization or discrete log for instance).
Because the simple fact that you can process the data and examine the output reveals untold quantities of information about the key.
The known-plaintext attack breaks pretty much every crypto system; mix that with statistical analysis of this "processing" and I'm sure whatever is in the cloud will surrender its secrets pretty quickly.
And all that risk for what benefit? None of this processing will ever be faster than doing the processing in place.
You can examine the output and so on, but that's not a side-channel attack, which is what I was responding to.
And yes, it is a malleable cryptosystem, but that doesn't imply you can learn the key from an input/output pair, nor does it imply you can read the input given to you. It just means you can change the output in a way that can still be decrypted by the same key.
Usually people use padding and IVs and such to prevent that, but this system is made to allow computation with it instead.
Certainly there may be some flaw that we can't see yet, but to say you're "sure" some secrets will be quickly surrendered seems foolhardy.
> And all that risk for what benefit?
It's very interesting from a theoretical perspective, and it's good to know that if you need to carry out computation on devices you don't trust, it is at least possible to prevent your input and output from being stolen.
This is already a 30x increase in efficiency from the last generation. It may never be faster, but there's always a chance someone will find other reasons to use it.
You don't need to get the key, you just need to decrypt the data.
They do not need to be one and the same if, statistically, you can infer the input given the output.
And I never said "sure" in any way, shape, or form. What I said is that I am skeptical of the motivations driving a project like this, when we know for an absolute fact that bad men are trying to steal our secrets. It seems like a lot of work and very heavy math for something that will never tip the balance that is holding the cloud back, which is people not trusting it to keep their secrets.
For example, if you can do something as simple as compute a hash of the unencrypted data, you can instantly decrypt all data with known hashes with a high degree of certainty, without ever knowing the keys - a side-channel attack.
In homomorphic encryption, the cloud performs operations on encrypted data, and gets only an encrypted result. He does not have any information on the decrypted result.
That's the difference between FHE and iO (indistinguishability obfuscation) or multilinear maps. The latter are almost all broken, mostly because of the attacks you mention. FHE, on the other hand, has semantic security.
None of the other AES finalists were vulnerable to known-plaintext attacks. Nor are 3DES or ChaCha20. DES and RC4 are now considered insecure, but in their day they were resistant to known-plaintext attacks as well.
Other than some toy examples, and cryptosystems that were used before cryptography was studied academically, I can't think of any that are vulnerable to known-plaintext attacks.
> And all that risk for what benefit? none of this processing will ever be faster than doing the processing in place.
The benefit comes into play when you mix data from different sources that don't trust each other (to the point where they would never agree to one of them doing the processing in place). Homomorphic encryption allows combining the data without ever revealing it to the one doing the computation.
For example when two parties are interested in making a deal, but don't want to reveal it unless the other party is interested as well. (The example is usually a date, but it could be applicable to voting or other situations where both privacy and unanimity are desired.)
By performing a homomorphically encrypted computation, they can set it up to only reveal the final decision, but not the individual inputs that determined it, so nobody loses face.
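A runnable mock of that flow. The MockFHE class below performs no real encryption at all; it only mimics the shape of an FHE API so the data flow (who needs keys, who sees what) is explicit. In a real scheme the ciphertexts would actually hide the bits and the AND would be evaluated with a public cloud key.

```python
# Mock only: real FHE ciphertexts would hide the bit; the evaluator would use a
# public "cloud key" and never any secret key.

class MockCiphertext:
    def __init__(self, bit):
        self._bit = bit                      # hidden in a real scheme
    def __repr__(self):
        return "<ciphertext>"

class MockFHE:
    @staticmethod
    def encrypt(bit):
        return MockCiphertext(bit)
    @staticmethod
    def AND(ca, cb):                         # evaluator: needs no secret key
        return MockCiphertext(ca._bit & cb._bit)
    @staticmethod
    def decrypt(ct):
        return ct._bit

ct_a = MockFHE.encrypt(1)                    # party A: interested
ct_b = MockFHE.encrypt(0)                    # party B: not interested
ct_decision = MockFHE.AND(ct_a, ct_b)        # untrusted evaluator
print(MockFHE.decrypt(ct_decision))          # 0 -- only the joint decision is revealed
```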