Feynman: Simulating Physics with Computers (1981) [pdf] (cs.berkeley.edu)
187 points by merrier on May 7, 2016 | 22 comments



For those who don't realize, this is (arguably) the first extended discussion of quantum computers.

The other founding paper of the field is David Deutsch's superb "Quantum theory, the Church-Turing principle and the universal quantum computer": http://folk.uio.no/ovrum/articles/deutsch85.pdf

Both papers are excellent reading, and surprisingly accessible.


By coincidence, I just finished reading John Preskill's introductory chapter on quantum computing, which mentions Feynman and Deutsch and is also surprisingly accessible: http://www.theory.caltech.edu/people/preskill/ph229/notes/ch...


If you enjoyed this, may I also recommend Feynman's Lectures on Computation. [1] It consists of transcripts of a lecture series that begins with computer architecture and the theory of computation, and takes up interesting subjects like the physical limits of computation--all through the unique lens of Feynman's mind.

Feynman developed this material while working on the Connection Machine supercomputer. [2][3]

[1] http://www.amazon.com/Feynman-Lectures-Computation-Richard-P...

[2] https://en.wikipedia.org/wiki/Connection_Machine

[3] http://longnow.org/essays/richard-feynman-connection-machine...


Simulating new technologies with computers, quantum or otherwise, and thus predicting their performance will be the ultimate afterburner toward the singularity. Recent progress has been shown in:

  - Protein folding [1]
  - Neuroscience [2]
  - Battery technology [3]
All of these projects have in common that they do (or potentially will) scale to contributors around the globe in near real time, sparing us expensive real-world experiments. The frontier of computational crowd-science heralds truly exciting times.

PS: Berkeley has been a major force in this area, and many similar projects originated there (cf. the BOINC project [4]).

[1] https://fold.it

[2] http://www.openworm.org

[3] http://newscenter.lbl.gov/2015/04/15/electrolyte-genome-coul...

[4] https://en.wikipedia.org/wiki/List_of_distributed_computing_...


It would seem useful to start with the question of representing the position of a particle with other particles. For any situation, you will need some dynamic range of measurement to describe the properties of a particle. A classical position might want 10 bits, interpreted as meters. With 6 numbers (position and momentum in 3 directions), that's 60 bits per particle. Even at 1 particle per bit, that's still a ratio of 60:1. And 10 bits is certainly not enough for some phenomena!

The bottom line is that if you converted the universe to computronium, the absolute best you could do is simulate a universe 1/60th as complex. In real life we do many orders of magnitude worse; memory cells require millions of atoms. This also doesn't deal at all with the computation required to advance the state of the system.

Perhaps by focusing on the experience of life you can make the ratio make sense again. E.g., don't simulate particles, simulate minds (for which a particle is only an abstraction anyway), and you recover that ratio, and more.
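
In code, that back-of-envelope tally looks like this (a minimal Python sketch; the 10-bit dynamic range and the 1-particle-per-bit ideal are the assumptions above, not physical constants):

  bits_per_quantity = 10        # assumed dynamic range per number
  quantities_per_particle = 6   # x, y, z plus momentum in 3 directions

  bits_per_particle = bits_per_quantity * quantities_per_particle

  # Ideal "computronium": one particle stores one bit, so each
  # simulated particle costs bits_per_particle real particles.
  print(bits_per_particle)      # 60 -> at best a 60:1 ratio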


> You put a counter out there and you find "clunk," and nothing happens for a while, "clunk," and nothing happens for a while. It's not discretized at all, you never can measure such a tiny field, you don't find a tiny field, you don't have to imitate such a tiny field, because the world that you're trying to imitate, the physical world, is not the classical world, and it behaves differently. So the particular example of discretizing the electric field, is a problem which I would not see, as a physicist, as fundamentally difficult, because it will just mean that your field has gotten so small that I had better be using quantum mechanics anyway, and so you've got the wrong equations, and so you did the wrong problem!

Things that far away are tiny, in a way. :)


Any good books with a more modern take on this? I.e., discussing discrete differential forms, multigrid, etcetera?


You might be interested in Sussman's classical mechanics course:

http://groups.csail.mit.edu/mac/users/gjs/6946/index.html

It might not be the most modern approach, but he's still teaching and updating it.


Feynman wrote one: "Lectures on Computation." It's excellent.

https://www.scribd.com/doc/52657907/Feynman-Lectures-on-Comp...


Interesting link, thanks. But I don't think it totally covers the machinery that people use nowadays for numerically solving complicated physics problems. E.g., the topics that I mentioned, multigrid and discrete differential forms, seem to be missing here.


You might like "Fundamentals of Scientific Computing," "Mathematical Experiments on the Computer," and "Discrete Mathematics for Computing." The first one in particular may have what you're looking for.


That's not what Feynman is talking about at all. He touches on it, pointing out that what you get from numerical methods is an approximation, whereas he wants to talk about how you would get an exact simulation.


Numerical Recipes: The Art of Scientific Computing is a go-to reference for most people I know doing physics simulations. I think they've got a copy up on their website. Not sure if it covers everything you want, but the range is pretty good.


(Possibly dumb question) As a topologist, not a computer scientist - is "discrete differential forms" just the practice of computing things in De Rham cohomology by passing to simplicial homology via Poincare duality?


"De Rham cohomology", "simplicial homology" and "Poincare duality?" might all be simple concepts -- but the jargon will need to be unpacked for most readers of HN.


> Quantum mechanics can't seem to be imitable by a local classical computer.

So is this a fatal flaw in Wolfram's "new kind of science" based on cellular automata?


I'm not sure if this is meant as an update or merely my misinterpretation of a brief passage, but while the NKS book's chapter on the principle of computational equivalence said the principle applies to any process of any kind, natural or artificial, the current definition on Wolfram's MathWorld defines it as applying only to systems found in the natural world. https://twitter.com/_nict_/status/727513301274529792


As far as I'm aware, Wolfram doesn't think cellular automata underpin our physical universe; they're just a very simple form of universal computation, which makes them interesting to explore.

He thinks the Universe is more like a graph of nodes and edges, with particles, interactions, spacetime, etc. emerging from the rewriting of simple patterns in the network: http://blog.stephenwolfram.com/2015/12/what-is-spacetime-rea...
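
For anyone who hasn't played with one, here is a minimal elementary cellular automaton in Python (Rule 110, known to be Turing-universal). This is just the textbook toy, not Wolfram's network-rewriting model:

  RULE = 110
  cells = [0] * 63 + [1]  # one live cell at the right edge

  for _ in range(30):
      print(''.join('#' if c else '.' for c in cells))
      # Each cell's next state is the RULE bit indexed by its 3-cell
      # neighborhood (left, self, right), wrapping at the edges.
      cells = [(RULE >> (4 * cells[i - 1]
                         + 2 * cells[i]
                         + cells[(i + 1) % len(cells)])) & 1
               for i in range(len(cells))]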


One of many flaws. Another is that his cellular automaton model violates special relativity.


I found this: https://magnusopium.wordpress.com/2011/05/12/deleuzes-bergso...

I have no idea if it's a fair summary of D&G.

I don't really want to touch on anything having to do with scriptures in the most general sense (Vedas, chakras, nadis, and the works that describe them); I don't think they are likely to have any utility in understanding gravitation, acceleration, or uniform linear motion at all, and if there is anything in there, it is probably easier to re-discover in a modern theoretical framework than to translate.

Instead, the interesting thing is the Achilles-vs-tortoise argument in the context of approximately 1920: before the work by Lemaître, Friedmann, Robertson and Walker that led to the underpinnings of the standard cosmology, and most particularly before the late 1920s, when the Hubble flow was described and found to apply to all then-known distant galaxies.

Einstein in 1920 had a personal bias towards a static universe for a variety of reasons, although in part that is because evidence at the time did not disfavour one. In such a universe, making some assumptions about the behaviour of its non-gravitational content, there is probably no "universal clock", and so a resort to GR in the Achilles-vs-tortoise argument likely would not prove illuminating (and would be much harder to do quickly).

However, our universe is so close to being isotropic and homogeneous (as far as we can tell) that we almost certainly can rely upon the scale factor from the Friedmann-Lemaître-Robertson-Walker model to be equally valid for all observers. There are additionally relic fields which manifest the scale factor (e.g., the average temperature of the CMB radiation).

The resolution to Achilles-vs-tortoise is that both can agree on the scale factor at the boundary conditions, namely, when they are together at the start of the race and when they are together again after the race has ended. What they will disagree on is only the amount of wristwatch time that has elapsed.

The tortoise has simply travelled much further in spacetime than Achilles, and all observers at both boundary conditions will agree with that, no matter how they have travelled from the start to the end. Even more strongly, any observer who can place the pair of them together at the start of the race at a(t_start) and the pair of them together at a(t_end) will agree that the tortoise has travelled further in spacetime, although their count of the elapsed time, in, say, picoseconds, between a(t_start) and a(t_end) may be unique.

But even if we drop the examination of the scale factor and resort only to Special Relativity, we can see with a Minkowski diagram (available in 1920!) that the slopes of the two worldlines are different: the tortoise's slope is more vertical (where the vertical y axis is the timelike axis). Choose convenient units where c=1 and use seconds as the coordinates (actual seconds on the y axis, light-seconds on the x axis, with light travelling at one light-second per second by choice of units), and use a metric like ds^2 = c^2 dt^2 - dx^2. If Achilles takes 300 seconds of coordinate time to run from start to finish while the tortoise takes 3000 seconds, "s" is much bigger for the tortoise from start to finish, but approximately the same when you trace from boundary condition (the pair together at the start) to boundary condition (the pair together at the end). Using just SR, the less time Achilles takes to run the race, the smaller his "s" is compared to the tortoise's: he travels a shorter distance in spacetime, even though the number of ticks along the x axis is the same for him and for the tortoise, and we can calculate the exact difference using the Lorentz formula.
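
Concretely, here is a numerical sketch in Python with c = 1 (the 100-light-second race distance is made up for illustration; the argument above doesn't fix one):

  import math

  race_distance = 100.0   # light-seconds (assumed)
  t_achilles = 300.0      # coordinate seconds, start to finish
  t_tortoise = 3000.0

  def proper_time(dt, dx):
      # s^2 = c^2 dt^2 - dx^2, with c = 1
      return math.sqrt(dt ** 2 - dx ** 2)

  print(proper_time(t_achilles, race_distance))   # ~282.8 s
  print(proper_time(t_tortoise, race_distance))   # ~2998.3 s

The slower tortoise accrues the larger "s", as claimed.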

We are not required to use the flat-space metric, or Euclidean coordinates (heck, what's the difference between "x" and "r" (from polar coordinates) in the example above?); general covariance (from General Relativity) guarantees that no matter what set of smooth coordinates we use, or what units we use, the faster Achilles traverses less spacetime between the boundary conditions than the slower tortoise.

The critical point that I do not see in the debate is that the fixing of the boundary conditions is important (and we now know this because of work done since Einstein's death, particularly since the development of 3+1 formalisms). The critical boundaries are when the pair are together again, not when either of the pair is at the start gate or the end ribbon.

To us in 2016, this is a simple Cauchy problem. However, in 1920 it would have been at least novel (IIRC, Hadamard's lectures hadn't happened yet, for example) and certainly not a first choice tool to resolve a seeming paradox.

edit: I reread the article at the top and realize I should substitute April 1922 for 1920 (and various approximations) above. I don't think the exact date is terribly important; the key thing is that April 1922 is before the Hubble flow was understood, and before initial-values-surface/boundary-conditions approaches and close relatives were in use in a GR context.


I was reading through this and stopped at this line in the second paragraph on the first page:

Therefore my question is, Can physics be simulated ...

The C in the word "Can" is capitalized after a comma?


Imagine quotes around the question. It's a sentence in a sentence.




