graphene's comments | Hacker News

Isn't it true though that there would be an O(n^2)-type difficulty in adding extra qubits, since they all need to interact?

Or is that an oversimplified view?


They do not all need to interact directly with one another. You can create full entanglement even if the qubits only interact with their nearest neighbours in a linear chain; it just means you pay a penalty when compiling your program, because the compiler has to insert extra SWAP operations to shuttle distant qubits next to each other.

Architectures with higher two-qubit connectivity are merely an optimization.
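To make the compilation penalty concrete, here's a toy Python sketch of naive SWAP routing on a 1-D chain; the routing strategy and numbers are made up for illustration and aren't tied to any real compiler or hardware.

```python
# Toy illustration of the compilation penalty on a nearest-neighbour (linear)
# architecture: a two-qubit gate between distant qubits has to be preceded by
# SWAPs that shuttle one qubit next to the other. This naive router is purely
# illustrative; real compilers route far more cleverly.

def swaps_needed(a: int, b: int) -> int:
    """SWAPs required to make qubits a and b adjacent on a 1-D chain."""
    return max(abs(a - b) - 1, 0)

def extra_swaps_linear(two_qubit_gates):
    """Extra SWAPs a naive router inserts, moving qubits back each time."""
    return sum(2 * swaps_needed(a, b) for a, b in two_qubit_gates)

if __name__ == "__main__":
    # A circuit that entangles qubit 0 directly with qubits 1..7:
    gates = [(0, t) for t in range(1, 8)]
    print("extra SWAPs on a linear chain:", extra_swaps_linear(gates))  # 42
    # With all-to-all connectivity the same circuit needs zero extra SWAPs;
    # that saving is the "optimization" higher connectivity buys you.
```

Either way the circuit computes the same entangled state; you just pay in gate count and depth.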


Yes, according to the first law of thermodynamics, energy is conserved; regardless of whether any computation is being done, 100 W of electrical power will end up as 100 W of heat dissipated.

About heating with computers: there is a Dutch company doing this, https://cloud.nerdalize.com/. It's interesting to think about how their economics work, because the heaters will be switched off for large parts of the year, and for much of the day. They must be banking on computation not getting much cheaper over time (in terms of FLOPS/$), because otherwise it would be hard to recoup their initial investment. In a sense, this almost looks like a bet against Moore's law!
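To make that bet concrete, here's a rough back-of-the-envelope sketch in Python; every number in it (hardware cost, duty cycle, price, rate of decline) is a made-up assumption for illustration, not an actual Nerdalize figure.

```python
# Back-of-the-envelope model of a "heater that computes". All numbers are
# hypothetical assumptions for illustration, not actual Nerdalize figures.

HARDWARE_COST = 3000.0   # upfront cost of one compute-heater (EUR), assumed
UTILISATION   = 0.35     # fraction of the year heat (and thus compute) is wanted
PRICE_PER_HR  = 0.25     # what the compute sells for today (EUR/hour), assumed

def years_to_recoup(annual_price_drop, max_years=15):
    """Years until cumulative compute revenue covers the hardware cost."""
    revenue, price = 0.0, PRICE_PER_HR
    for year in range(1, max_years + 1):
        revenue += 365 * 24 * UTILISATION * price
        if revenue >= HARDWARE_COST:
            return year
        price *= 1.0 - annual_price_drop  # competing compute keeps getting cheaper
    return None  # never breaks even within max_years

print("break-even with flat prices:     ", years_to_recoup(0.00), "years")
print("break-even with 20%/year decline:", years_to_recoup(0.20), "years")
print("break-even with 40%/year decline:", years_to_recoup(0.40), "years")
```

If the price of compute falls quickly, the box never pays for itself, which is why the business model really does look like a bet that it won't.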


They trained on a computer model of the optical circuit and only ran the feed-forward step on the real hardware. The rationale is that deployed models spend far more time (and energy) on inference than on training, so that is the step you would most want to optimize.

I can't help but think it would be really cool to automatically produce a circuit that outputs the gradient of the network's error, so you could optimize on the hardware directly.
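For context on that first point, here's a minimal numpy sketch of the train-in-simulation / infer-on-hardware split; the "device" is just a stub function, and none of this reflects the paper's actual setup.

```python
import numpy as np

# Minimal sketch of "train on a software model, run only the forward pass on
# the device". The differentiable simulator stands in for the computer model
# of the optical circuit; run_on_hardware() is a stub for the physical device.

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))              # toy inputs
y = (X.sum(axis=1) > 0).astype(float)      # toy binary labels
W = rng.normal(scale=0.1, size=(4, 1))     # trainable "circuit" parameters

def simulate_forward(X, W):
    """Differentiable software model of the device (here: a logistic layer)."""
    return 1.0 / (1.0 + np.exp(-X @ W))

# Training happens entirely in simulation, where gradients are available.
for _ in range(500):
    p = simulate_forward(X, W)
    grad = X.T @ (p - y[:, None]) / len(X)  # gradient of the cross-entropy loss
    W -= 0.5 * grad

def run_on_hardware(X, W):
    """Stub for the physical optical circuit: forward pass only, no gradients."""
    return simulate_forward(X, W)           # pretend this call hits real hardware

# Deployment: only the cheap, energy-efficient inference step runs on-device.
preds = run_on_hardware(X, W)[:, 0] > 0.5
print("accuracy of the deployed forward pass:", (preds == (y > 0.5)).mean())
```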


Not entirely what you're describing, but pandoc goes a long way towards being a sort of LLVM for text documents. In order to do all the format conversions, it transforms inputs into a tree-based internal representation, and then translates that into the output format.

Unfortunately it doesn't have a (pure) TeX reader yet, but that could be implemented relatively easily.
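You can actually poke at that internal representation from the command line: pandoc will emit its AST as JSON and accept it back as input. A small Python sketch (it just shells out, and assumes the pandoc binary is on your PATH):

```python
# Peek at pandoc's tree-based internal representation by asking it to emit
# the AST as JSON, then render the same tree to LaTeX.
import json
import subprocess

doc = "# Hello\n\nSome *emphasised* text with $e^{i\\pi}+1=0$.\n"

ast_json = subprocess.run(
    ["pandoc", "--from", "markdown", "--to", "json"],
    input=doc, capture_output=True, text=True, check=True,
).stdout

ast = json.loads(ast_json)
print("top-level block types:", [blk["t"] for blk in ast["blocks"]])
# e.g. ['Header', 'Para'] -- the same tree every reader produces and every
# writer consumes.

latex = subprocess.run(
    ["pandoc", "--from", "json", "--to", "latex"],
    input=ast_json, capture_output=True, text=True, check=True,
).stdout
print(latex)
```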


If it could be implemented easily, chances are it would have been by now. One big issue is that TeX doesn't run in traditional compiler-like layers (lexing, parsing, etc.). In TeX, the meaning of the next token (at the lexer level) can be changed by something happening in the guts of the engine in response to the previous token; for example, a macro can change a character's category code (\catcode), which alters how every subsequent occurrence of that character is tokenized. So, just as compiling LISP requires an ability to interpret LISP, compiling TeX into some sort of tree structure would require implementing a big chunk of the TeX engine itself in the process.
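Here's a toy Python sketch of the shape of the problem (it has nothing to do with real TeX internals, and the control sequence is invented): once a token in the stream can rewrite the tokenizer's own tables, the way \catcode does, lexing can't be separated from execution.

```python
# Toy illustration (not real TeX) of why you can't lex TeX up front: a token
# in the stream can change how *subsequent* characters are tokenized, so the
# lexer has to be interleaved with execution. \catcode is the real-world case;
# the \makeletter control sequence below is made up for this example.

def tokenize_and_run(src: str):
    specials = {"%": "comment", "$": "math-shift"}   # mutable "catcode" table
    tokens, i = [], 0
    while i < len(src):
        ch = src[i]
        if src.startswith("\\makeletter", i):
            # Executing this token *changes the lexer*: the character that
            # follows it loses its special meaning from here on.
            target = src[i + len("\\makeletter")]
            specials.pop(target, None)
            i += len("\\makeletter") + 1
            continue
        tokens.append((ch, specials.get(ch, "letter/other")))
        i += 1
    return tokens

print(tokenize_and_run("$a$ \\makeletter$ $a$"))
# The first '$' pair is tokenized as math-shift; after \makeletter$ the very
# same character is plain text. A fixed lex -> parse pipeline cannot know
# this without running the program.
```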


Well, yes and no. You are absolutely right that a complete implementation of TeX would be difficult, but you could read a subset of the language that is big enough to be useful, including simple macro definitions and commonly used commands, which is exactly what pandoc's LaTeX reader already does.


Yes, it would be really helpful to get some examples of the counterproductive advice.


The counterargument is that as more and more industries start to do their engineering at the nanoscale (whether coming from above, as in materials and electronics, or from below, as in biochemistry and pharmaceuticals), the physics of their systems will become more similar, and so will their design rules. It's therefore plausible that you could end up with a small number of players whose engineering expertise applies to a very wide variety of sectors.


I don't think it's a counterargument; it's more a prediction for the late game of manufacturing technology. For the mid game, I suspect it will be like it is today with chemistry, agriculture, pharmaceuticals and biotechnology: there is no unified sector encompassing all four areas, but there are big companies, like Monsanto, 3M or DuPont, doing all four at once, exploiting the relative similarities of the fields.


There is the fact that proteins are constrained to function as part of organisms that are capable of self-replication. This constraint means that proteins are (mostly) not very stable with respect to oxidation and UV degradation, and only work properly in aqueous solution.

It's anyone's guess how significant these constraints will be from the viewpoint of developing artificial protein machines (maybe rapid (bio)degradation is a good thing!), but there are definitely large swathes of chemical design space outside of arbitrary chains of known amino acids, and we might be able to discover entire classes of molecular machines that don't have the drawbacks of bio-inspired proteins.


Probably. But to do that, we need to be able to do work on that level; existing proteins seem to be just about enough for us to run proper experiments and eventually bootstrap production of nanocomponents better suited for our needs.


I don't know if this is what you're referring to, but a common application for government-owned supercomputers is simulating the degradation of nuclear warheads. How the fissile material and its surroundings degrade is critical to a nation's security, and also very hard to model well.

Of course in an ideal world those cycles would be used to help cure cancer, but given that these warheads exist, it's probably a good idea to invest resources into getting an idea of what shape they're in.


The old way to do that was to blow one of them up every so often. The Nuclear Test Ban Treaty put a stop to that.


You could compare raw FLOPS (floating-point operations per second), but that would only tell part of the story. These supercomputers are highly engineered for low network latency between nodes, which is necessary for many scientific workloads. Google and other companies are generally able to express their algorithms in highly parallel ways, which greatly reduces the need for communication between nodes.

Therefore, even if the raw performance in terms of FLOPS sounds similar, the two systems will have widely differing performance on real workloads.
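As a rough illustration of how much that matters, here's a toy latency model in Python; all the numbers (FLOPs per step, latencies, bandwidths) are made-up order-of-magnitude guesses, not measurements of any real system.

```python
# Toy model of time-to-solution for one iteration of a distributed job:
# each node computes, then exchanges messages with its neighbours. All numbers
# below are made-up order-of-magnitude guesses, purely for illustration.

def time_per_iteration(flops_per_node, node_gflops, msgs, latency_s, msg_bytes, bw_gbps):
    compute = flops_per_node / (node_gflops * 1e9)
    comms = msgs * (latency_s + msg_bytes / (bw_gbps * 1e9 / 8))
    return compute + comms

coupled = dict(flops_per_node=5e8, node_gflops=500, msgs=6, msg_bytes=2e6)

# Supercomputer-style interconnect: ~1 microsecond latency, 100 Gb/s links.
hpc = time_per_iteration(**coupled, latency_s=1e-6, bw_gbps=100)
# Commodity datacentre network: ~100 microsecond latency, 10 Gb/s links.
dc = time_per_iteration(**coupled, latency_s=100e-6, bw_gbps=10)
print(f"tightly coupled simulation: {hpc*1e3:.1f} ms/iter (HPC) vs {dc*1e3:.1f} ms/iter (commodity)")

# An embarrassingly parallel job (no messages) sees almost no difference:
parallel = dict(coupled, msgs=0)
hpc_p = time_per_iteration(**parallel, latency_s=1e-6, bw_gbps=100)
dc_p = time_per_iteration(**parallel, latency_s=100e-6, bw_gbps=10)
print(f"embarrassingly parallel:    {hpc_p*1e3:.1f} ms/iter (HPC) vs {dc_p*1e3:.1f} ms/iter (commodity)")
```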


Depends on what you mean by a "real" workload.

Capturing and indexing the entire web is certainly a real workload, even if it is massively parallelizable, so it would probably run about as well on Google's infrastructure as on a supercomputer, because those fast interconnects wouldn't provide much advantage, right?

However, when simulating a nuclear explosion or a weather system (maybe that's what you mean by "real" workloads?), the heavy node-to-node communication makes the supercomputer much, much better suited.


Super interesting.

If you read the Feynman speech that he references at the beginning, he actually mentions that as you scale machines down, things like mechanical rigidity will degrade and you will need to change your design rules accordingly. I always assumed that when you reach the molecular level, thermal motion and the constant bombardment by water molecules would mean that the only viable option is to use proteins, just like nature does, so it's very interesting to see that this guy is aiming to use more rigid structures at the molecular level. I guess this is a way to reduce the complexity (degrees of freedom) compared to designing protein tertiary structure.

I wonder if this is too constraining, though: he admits he has yet to figure out how to build mechanical machines using this approach, and intuitively I'd expect that to be very difficult with this degree of rigidity. You might need the additional flexibility of peptide chains to do many of the interesting things that are possible.

He does point out the advantage of durability, but this raises the obvious issue that one of the questioners alluded to, namely toxicity/pollution risk. I'd think degradation by biological or other means would be a feature, not a bug, since, as he points out, even conventional plastics are a huge pollution problem.

Fascinating stuff nonetheless.


Christian Schafmeister here - thanks! You make some very good points that I can address:

(1) Things made with "Molecular Lego" still move, but they keep their shape just enough to organize atoms/groups to do things like speed up reactions and bind other molecules. We've explored dynamics in several of our papers. They aren't too constrained - they are constrained "just enough" - and we can build in flexibility and hinges wherever we want.

(2) We don't know how to make mechanical molecular machines with them yet; we need to start making a lot of them and explore their properties to figure that out. The first folks who smelted iron didn't know how to build motors with it - it took a lot of people playing around with iron for a long time to figure that out. I think we can get there in less time, but it will take work.

(3) Re: toxicity/pollution - this is the first non-natural molecular technology that contains the solution to any problems that it generates. We need to learn how to make catalysts well, and then we can build catalysts that break these molecules down. Conceivably, we could build materials that contain the catalysts that break them down and are activated by an external signal. Or we make materials out of bigger bricks (built from Molecular Lego), fish them out of the environment, tear them apart from each other, check them for defects, build new materials with the good ones and recycle the broken ones. We can also build catalysts that break down every other indestructible material that we've been dumping into the environment for 100 years. Regarding toxicity, these molecules are made out of carbon, nitrogen, oxygen and hydrogen - the same atoms you are made out of. They are inherently non-toxic (there are qualifiers on this).


Hi Christian, great to have you replying directly like this; I hope my criticism came across as constructive, since I'm super excited about and impressed with this work, as someone also chasing the dream of molecular nanotechnology.

Re: rigidity; I'm curious (apologies for not having read your papers) how you define "just enough" flexibility, and how your design tools take freely moving components into account. Would you agree with my intuitive feeling that there's a tradeoff between designability and functionality, and that your spiroligomer work sits between rationally designed protein structures (a very hard problem) and Drexlerian molecular-scale gears and ratchets (similar deterministic design rules to those of macroscopic systems)? Or do you feel that anything protein "machines" can do, spiroligomer machines can do too?

I recently started a startup that has molecular nanotechnology as the end goal. My thinking has been that the flexibility of proteins is an essential element in achieving the capability to design and manufacture with atomic precision, and that the concomitant complexity of the large number of degrees of freedom can be tamed with a data-driven approach leveraging machine learning algorithms. I'd love to hear if you have any thoughts on this, and how it relates to the spiroligomer approach.

