Physics in 100 Years (arxiv.org)
127 points by jeffreyrogers on March 27, 2015 | 44 comments



My speculation for the next 100 years, one that Wilczek does not propose, is that a notation revolution will happen.

The standard way of representing mathematics today is tedious to write and extremely non-intuitive in many ways. Unfortunately, alternative representation systems for physical processes (e.g. block diagrams, Feynman diagrams, etc.) don't yet provide a way for the user to operate on higher-level objects while staying at an abstracted level. One has to tear open all the black boxes and rewrite them as integral/sigma/matrix/bra-ket soup before they can be operated upon.
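To sketch what I mean by staying at the abstracted level (a toy example of my own, nothing more): if blocks were first-class objects, composing them would never require opening them up. In Scheme:

  ;; Each "block" is just a procedure from input to output.
  (define (gain k)         (lambda (u) (* k u)))
  (define (series b1 b2)   (lambda (u) (b2 (b1 u))))    ; b1 feeding b2
  (define (parallel b1 b2) (lambda (u) (+ (b1 u) (b2 u))))

  ;; A composite block, built without ever writing out its internals:
  (define system (series (gain 2) (parallel (gain 3) (gain 4))))
  (system 5)  ; => 70, i.e. (3 * 10) + (4 * 10)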

Most of the time when I read a physics paper, even in my own field of research, I spend about 95% of my time and brain power parsing and 5% understanding. This should be reversed.

In addition, a startling problem is that it now takes about 25-30 years of education from birth before an individual can be productive in physics research. They are subsequently productive for only another 30-40 years at most. As fundamental science continues to advance, the growing number of years of education required to catch up with all of human history, even in a highly narrow, specialized field, is not a sustainable trend. Either modifying the brain or adopting a new framework of thought will be necessary to sustain progress; a notation revolution may facilitate the latter.


I don't know if I'd predict it, but I agree that it's hard to imagine a more transformative technology than a dramatically improved notation for something as broad as mathematics. It might mean that "notation" becomes a higher-fidelity, more dynamic medium for communicating mathematical ideas, possible only within computing. (See Bret Victor, etc.) One has to wonder what the world would be like if physics knowledge (at all levels) were disseminated via a medium beyond static text.


Sussman tried to use Scheme as a mathematical notation (http://mitpress.mit.edu/sites/default/files/titles/content/s...) to avoid ambiguities. I don't know how many people saw this, or what the criticisms were.

The traditional static symbolic notation of math may be the most efficient one. It's constrained, but that constraint forces abstraction to emerge. If people had had Excel back then, we'd never have discovered Gauss's formula for the sum of a sequence; we'd have let the system do the work for us.
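To make the Gauss example concrete (a toy sketch in Scheme):

  ;; The "Excel" route: just grind out 1 + 2 + ... + n.
  (define (sum-to n)
    (if (= n 0) 0 (+ n (sum-to (- n 1)))))

  ;; The abstraction the static notation pushes you toward:
  ;; 1 + 2 + ... + n = n(n+1)/2
  (define (gauss n) (/ (* n (+ n 1)) 2))

  (= (sum-to 100) (gauss 100))  ; => #t, both give 5050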


> The traditional static symbolic notation of math may be the most efficient one... If people had had Excel back then, we'd never have discovered Gauss's formula for the sum of a sequence; we'd have let the system do the work for us.

I think some people in this thread must be using "notation" in a different sense than I am accustomed to. I don't think of the computer's ability to automate addition as an innovation in "notation", per se.


That wasn't Sussman's point. The problem is that the same notational forms -- the same orthographies, if you will -- can mean different things in different contexts. It doesn't help at all that one must, on occasion, slip from context to context in the same argument, producing adjacent or near-adjacent statements that have similar general forms but wildly different meanings. His solution to the problem was to use a notation whose expressions also happen to be valid statements in a computer language he had a large hand in designing. That part isn't necessary, but disambiguating mathematical notation by some means is.
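A trivial illustration of the kind of ambiguity being removed (my example, not Sussman's): in conventional notation, f(x) can denote either the function or its value at x. Scheme forces the distinction:

  (define (square x) (* x x))

  square       ; the function itself -- an unambiguous, first-class object
  (square 3)   ; its value at 3 => 9

  ;; Operators then act on functions rather than on expressions, so
  ;; "differentiate with respect to what?" becomes an explicit question
  ;; about argument positions instead of a guess about letters.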


My analogy was borderline. I just think there's value in terse static glyphs, and that going away from them would be detrimental.


I skimmed that book to figure out how to convert formulas to procedures in Scheme while learning from Spivak's Calculus and a few OCW math courses. I also picked up Doing Math with Python from No Starch Press to see how other people do this, and to learn Python, since I've never really used it. It was definitely awesome to debug logic errors in Emacs in formulas I had translated; coding it all really does give you a structural interpretation of the material you are trying to learn. The end result is a library of clear procedures I can use for reference, whereas I've already forgotten most of the notation I've learned over the years. (SICM uses notation similar to Spivak's Calculus on Manifolds.)
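For anyone curious, here's roughly the flavor of "formulas as procedures" (a minimal sketch in plain Scheme, not the scmutils system the book uses):

  ;; The derivative as a higher-order procedure: it takes a function
  ;; and returns a function, just as the textbook definition says.
  (define (deriv f)
    (let ((dx 1e-6))
      (lambda (x)
        (/ (- (f (+ x dx)) (f (- x dx))) (* 2 dx)))))

  (define (cube x) (* x x x))
  ((deriv cube) 2.0)  ; => ~12.0, since d/dx x^3 = 3x^2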


All languages have, at least, nouny things and verby things, with a finite (though expandable) number of them composable in infinite ways, and I'd say our brains are hardwired for this. So any notation good for humans would have to have this property, which would mean being "equivalent" (in a way) to the language that already exists.

I don't see how any improvement can be anything more than cosmetic.


Perhaps Geometric Algebra is a big piece of that in the works?

http://en.wikipedia.org/wiki/Geometric_algebra

Re Scheme as notation (or APL/J many years ago) -- when I hear a programming language proposed as a replacement for math, I always wonder "how does that make proving theorems easier?" I haven't read Sussman's book, and he is way smart, but I have listened to lots of APL weenies say "Really, it's just like math" but... no, it isn't...

EDIT: I just glanced at the first part of Sussman's book on classical mechanics, and it looked like ordinary Lagrangian mechanics to me; no new notation in the first few sections.


GA does not quite do the job.


What does that mean? This is Hacker News; you can't just say "that sucks" or "that's great" like on reddit...


Oh right, this is the place where you make a crazy claim and back it up by a Wikipedia link instead of an argument! ;)

What I meant is that, no, I don't think that GA is a substantial improvement in notation, even though a vocal minority might have made you believe otherwise. One piece of evidence is that physicists would have happily picked up the notation otherwise (and no, this is not an appeal to authority ;). Also, I am not sure what you mean by "in the works", since it is quite an old concept.


GA isn’t really a difference in notation exactly. It’s a difference in model.

Anyway, the model is great, and IMO should be taught in high schools. It would replace a lot of the redundant and obnoxiously inconsistent models we currently use, and ultimately save students a lot of time.

It takes a bit of work to get used to reasoning in GA vs. the standard vector model, or via trigonometry, etc., but there’s a pretty big payoff, I find. Many problems are substantially easier to reason about in GA terms, and many bits of bookkeeping can be avoided compared to the standard approaches.
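To make the bookkeeping point concrete, here's a toy sketch of 2D GA in Scheme (my own illustration, using the usual convention e1^2 = e2^2 = 1). One geometric product does all the work, the complex unit falls out as the bivector e12, and rotations are rotor sandwiches R v R~ with no rotation matrices:

  ;; A 2D multivector a + b*e1 + c*e2 + d*e12, stored as a list (a b c d).
  (define (mv a b c d) (list a b c d))

  ;; The geometric product, from e1e1 = e2e2 = 1 and e1e2 = -e2e1 = e12:
  (define (gp x y)
    (let ((a1 (car x)) (b1 (cadr x)) (c1 (caddr x)) (d1 (cadddr x))
          (a2 (car y)) (b2 (cadr y)) (c2 (caddr y)) (d2 (cadddr y)))
      (mv (+ (* a1 a2) (* b1 b2) (* c1 c2) (- (* d1 d2)))     ; scalar part
          (+ (* a1 b2) (* b1 a2) (- (* c1 d2)) (* d1 c2))     ; e1 part
          (+ (* a1 c2) (* c1 a2) (* b1 d2) (- (* d1 b2)))     ; e2 part
          (+ (* a1 d2) (* d1 a2) (* b1 c2) (- (* c1 b2))))))  ; e12 part

  ;; The bivector e12 squares to -1: the complex "i" for free.
  (gp (mv 0 0 0 1) (mv 0 0 0 1))  ; => (-1 0 0 0)

  ;; Rotation by theta is the sandwich R v R~:
  (define (rotor theta)  (mv (cos (/ theta 2)) 0 0 (- (sin (/ theta 2)))))
  (define (rev x)        (mv (car x) (cadr x) (caddr x) (- (cadddr x))))
  (define (rotate v theta)
    (gp (gp (rotor theta) v) (rev (rotor theta))))

  (rotate (mv 0 1 0 0) (/ 3.141592653589793 2))  ; e1 -> (~0 ~0 1 ~0), i.e. e2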

More generally, the study of geometry has been systematically deemphasized and devalued over the past 100 years in favor of analysis. This has IMO caused some serious problems for students’ and scientists’ geometric fluency and reasoning. GA is IMO one way of helping reunify algebraic/analytic reasoning with geometric reasoning.

By “in the works” he probably means that GA seems to be picking up steam: David Hestenes has put in a career’s worth of work reintroducing and popularizing ideas that were mostly ignored after Grassmann’s time, with the singular exception of Clifford (and outside of highly technical pure-mathematics contexts, plus a few physicists studying general relativity), and especially in the last 20 years or so a slightly bigger community has gotten involved. Many more people have heard of or thought about it today than 10 years ago, there are now a handful of textbooks, etc. In particular, some computer graphics / computer vision / robotics folks have found GA very useful in their work.

As for why physicists haven’t been champing at the bit: GA as a model doesn’t let people solve any new problems that they couldn’t solve before with existing models; it just makes understanding what’s going on a bit easier. It’s sort of like refactoring all your code: it takes a lot of effort to reframe everything in terms of a new model, and the payoff is mostly for the people who are learning the material for the first time rather than for current professional mathematicians and physicists. Radical changes to low-level models take a long time to propagate through society, or are sometimes altogether impossible (which is why we still use base 10 instead of base 12, why America is stuck with imperial units, why we have a completely hacked-together calendar of unequal-length months, why we use π as a circle constant instead of either 2π or π/2, why we haven’t normalized English spelling, etc.).


I don't know if this will start a revolution, but Structure and Interpretation of Classical Mechanics attempts to address the problem of notation in physics. The preface walks through the classical notation for the Lagrange equations and points out how inadequate that notation is. The whole book essentially makes the case that a functional notation that can be implemented (in Scheme) is better for doing physics:

http://mitpress.mit.edu/sites/default/files/titles/content/s...
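For a taste, the harmonic oscillator in the book's notation looks roughly like this (reproduced from memory of the early chapters' scmutils forms, so treat the details as illustrative):

  ;; A Lagrangian is a function of the local state (t, q, qdot):
  (define ((L-harmonic m k) local)
    (let ((q (coordinate local))
          (v (velocity local)))
      (- (* 1/2 m (square v)) (* 1/2 k (square q)))))

  ;; The Lagrange equations are an operator applied to the Lagrangian,
  ;; then to an unknown path q, then to the time t -- every application
  ;; is explicit, with no ambiguous d/dt or dL/dq:
  (((Lagrange-equations (L-harmonic 'm 'k))
    (literal-function 'q))
   't)
  ;; => (+ (* k (q t)) (* m (((expt D 2) q) t))), i.e. m q'' + k q = 0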


> My speculation for the next 100 years, one that Wilczek does not propose, is that a notation revolution will happen.

I thought that I was inured to future speculation (except in the sense of "what wrong things are intelligent people thinking today?"—a comment on general such speculation, not Wilczek's in particular), but I think that this is one of the most exciting ideas I have ever read. I'm being quite serious.


One thing that we have begun using to simplify some quantum-computational analysis is graphical tensor networks, as presented in this paper: http://arxiv.org/abs/1111.6950. It hasn't caught much traction yet, but it and similar ideas could be the way forward.


Frank Wilczek is both a highly accomplished physicist and an exceptional writer, two things that don't often go together. You can find all of his 2014 writings here:

  http://frankwilczek.com/2014
And 2015 in progress:

  http://frankwilczek.com/2015


I like his suggestion to replace 'Standard Model' with the name 'Core Theory':

"The quantum revolution gave this revelation: we’ve finally learned what Matter is. The necessary equations are part of the theoretical structure often called the Standard Model. That yawn-inducing name fails to convey the grandeur of the achievement, and does not serve fundamental physics well in its popular perception. I’m going to continue my campaign, begun in The Lightness of Being, to replace it with something more appropriately awesome: Standard Model → Core Theory"


Is this a joke? Sounds like some bureaucratic decision where you'd create a committee to come up with New Coke. Most people aren't going to care.


You'd be surprised. Yes, it must be grounded in soundness, but a cool name never hurts.


I bet the folks who coined the term "global warming" sure wish they'd thought of "climate change" first.


You do know the history of those terms and the replacement of one by the other, right?


Science Sparring Society had an interesting (and short) episode on this recently:

http://www.acmescience.com/2015/02/sss-14-global-warming-vs-...


I was at first put off that he seemed to focus heavily on fundamental particle physics. I was delighted, however, that he did mention other discoveries we'd like to see happen in other subfields of physics, like the hoped-for realization of the quantum computer.

One thing that is rather hot now and will likely become a richer field over the next decade or so (and will likely leave its imprint on future technology and life in 100 years) is the merging of physics with biology. Namely, the use of physics and quantitative methods in biology is the next area of innovation for this century, in my opinion. That might fall more under "Biology in 100 Years," however.


He's a particle theorist, so it's not too surprising he focuses on things he knows more about.


Bret Victor also has some interesting ideas about the symbology of math: http://worrydream.com/KillMath/


I know that English is the wrong way of expressing the ideas, but the text on page 6 about bleaching and colours and the tasteful application of rules producing a scheme is straight out of 16th Century alchemy.


This is an exercise in paradigm paralysis. How about: I predict that if I were teleported 100 years into the future, I would have no fucking clue what was going on, you wouldn't be able to explain it to me, and I would probably die from the culture shock. You have to evolve and develop and turn into the future. You can't just go there willy-nilly, nor, at this point, can you confidently make predictions about it. That too is a prediction, though. For all I know Rudy Rucker is right and we will be a black hole in 100 years. But even if that prediction is right, our guesstimate of the phenomenology is all wrong.


It doesn't seem to me that if someone from 1915 were transported to the present day, it would be impossible to explain to them what was going on, or that they would die from culture shock.

What's so hard to explain? Airplanes and cars? You had them then, they're just more common now. TV? Yeah, it's like radio and movies combined. Cellphones? Ok, phones are portable now. Computers? Alright, just tell them what a computer is.

They can handle it. They were human then too, not some alien species of comparative primitive idiots.

(If the argument were instead that it's unreasonable to expect someone to accurately predict the world of 100 years hence, I would agree; long-term prediction has not historically been something anyone has been all that good at. But the difficulty of predicting the future doesn't mean one couldn't handle exposure to it.)


There's some decent science fiction written along these same lines.

In 1915, radio was far more primitive than you are thinking, and very few people had one. 'Talking pictures' had not even been invented yet. If you took a more technically minded person, from New York perhaps, they might be able to wrap their head around what is occurring. If you took a farmer from the Midwest, they would be totally lost. WWI had not been going on long, and the world was beginning a rapid set of changes from manpower to machine power. 1914 is commonly considered the end of the Steam Age.

Next, it would be far harder for them to handle socially than strictly intellectually. Think of the common language we use when talking to each other. People would almost be speaking your language, but they wouldn't be speaking it at all dude, post that to your blog and tweet it. Even common things like food would seem totally foreign, in the most literal sense. Foods were very regional and always made at the time of eating. Now you can get food from practically anywhere, anytime, frozen solid, and you can heat it up in magic beeping boxes with some kind of magic rays inside.

Lastly, unless you were from one of the always-on cities of 1915, the pace of the modern world would wear you down so quickly you'd be in a state of constant exhaustion. The rate of change we consider normal these days is unprecedented. The speed at which we travel, and the copious amount of communication we are expected to maintain with large numbers of people, are not things that commonly occurred in the past. Now imagine a future where brain implants allowed you to instantly connect to devices or other people's minds all around you. You could understand it, but the feed of information to and from you would likely be something your mind would have trouble dealing with -- to say nothing of the implications for privacy, something we focus on quite often these days.


Let me up the ante, so to speak: I would say that if an educated Greek or Roman from between roughly 430 B.C. (Periclean Athens) and 180 A.D. (Marcus Aurelius) were to show up in 2015, they would spend about a week tripping on the gadgetry (and pissing themselves -- literally, in fear -- the first time they rode in a car), but then they would probably think: oh, yeah, there are a few changes, but nothing big. Except for the steam engine, fertilizer (finally, enough food), and birth control, maybe, but only for their social implications.

The social/cultural changes are what matter, I think; the technology is actually much less important.

On the other hand, I think if you brought an Afghan tribesman from the 1950s into the modern US, they would be more blown away and have a harder time adjusting (EDIT: even though they would be more familiar than an ancient Greek or Roman with telephones, cars, etc.).


> Computers? Alright, just tell them what a computer is.

Do you really believe that it would be so simple? I think that it's hard to appreciate how much computers have changed the shape of our lives in large and small ways. For example, imagine bringing Turing to the present day. Would he have any trouble understanding what a computer is, as a mechanical or mathematical device? Probably not. Would he be baffled by, and initially uncomprehending of, the way that computers have changed our lives? I'd say almost certainly. (I often am, just thinking back on the way things were as recently as 20 years ago, and I lived through the change.)


Yes, I really believe it. That's why I said it.

Computers have changed our lives in many large and small ways. Great. Tell that to the traveler from 1915. Show them. They'll understand; they've witnessed the same phenomenon with the technological developments of their own time. They won't be used to our modern world, but their mind won't violently reject exposure to the concept.


I agree. Somebody from 1915 would already understand the paradigm shift brought by the industrial revolution. The notion of decomposing problems into small, specialized tasks and delegating to people and other resources was familiar.

Consider these technologies and innovations that had already been developed:

* Jacquard loom (http://en.wikipedia.org/wiki/Jacquard_loom)

* Player pianos (http://en.wikipedia.org/wiki/Player_piano)

* Widespread use of interchangeable parts (http://en.wikipedia.org/wiki/Interchangeable_parts#Late_19th...)

* Assembly lines (http://en.wikipedia.org/wiki/Assembly_line)

* Tabulating machine for the 1890 US Census (http://en.wikipedia.org/wiki/Tabulating_machine)

* Transoceanic telegraph networks (http://en.wikipedia.org/wiki/Electrical_telegraph)

* Use of humans for distributed computation (http://en.wikipedia.org/wiki/Human_computer)

Automated computation is an amazing concept, but--at the risk of historical bias--what we have now is merely an optimized and widely available form of what we had then.

What I imagine they'd have a really difficult time accepting is modern physics, particularly quantum theory. Then again, the vast majority of the public today (including me) struggles with it.


I was about to disagree with you (citing that the change of 1815-1915 is much less than the change of 1915-2015). But then I realized that for certain places on Earth, the norm of technology might not be that much more advanced than in 1915 (this is a bit of a stretch, but you get the point) -- and I'm not sure those people wouldn't be able to adapt if they moved to the US.


The technological change of 1815 to 1915 is pretty drastic: railways, the automobile, the airplane, the telegraph, the telephone, the phonograph, photography, moving pictures, radio transmission, refrigeration, plastics, lightbulbs, x-rays, anesthetic surgery, the work of Semmelweis/Snow/Pasteur/Lister on medical hygiene/sanitation(/the germ theory of disease!), the spread of indoor plumbing and the flush toilet, ...

And culturally? Well, for one (U.S.-focused) example, there's that whole Civil War/end-of-slavery thing smack dab in the middle of it.

It's by no means clear to me that the changes of 1915-2015 somehow completely outstrip the changes of 1815-1915.

(I see that you agree with my general point, and with good illustration, too. I just wanted to point this out as well.)


Problem: a lot of those items/techniques were exclusively available to the very rich in 1915.

The UK still had slums with limited indoor plumbing during the 1950s. It wasn't until the slum clearances of the 1960s that the standard became an indoor bathroom in a house with central heating.

The big change between 1915 and 2015 is that the technological Gini coefficient has shrunk so much.

Being rich gets you few technological perks. You can buy a supercar or a private jet, but that's a difference of degree, not a difference in absolute access.

Most people have cars, and almost everyone can afford to fly. And all but the very poorest have Internet and electronic media access.

Likewise for cultural differences. Sexual morality - at least as publicly presented - is completely different now.

Work culture is somewhat different. Politics and finance have probably changed least of all.

The point being that someone from 1915 may just about be able to understand the Internet and computing. But they're going to have a really tough time learning how to parse Buzzfeed or TechCrunch or Reddit. Never mind what happens when they find PornHub or Tinder.

There are three things to learn, not one. The first is what the technology does. The second is the vocabulary of new names and the new concepts used to describe it. The last is the social scripting that defines appropriate and inappropriate behaviour.

Those last two will take longest and be hardest.

I'd expect similar challenges in 2115, with the difference that there's likely to be much more social and political change, and not less.

It would surprise me if the definition of "human" hadn't changed fundamentally by then, together with almost everything we think we know today about culture, politics, and economics.


By the way, just a small tone clarification: I originally said (and you may have read) "Do you really believe this?", which looked inflammatory (but wasn't meant that way). I didn't mean to question your sincerity, only whether you had perhaps made a statement without fully considering its ramifications.

Also, I didn't mean to imply full agreement with chaosfactor (https://news.ycombinator.com/item?id=9279631)'s "die from culture shock", which I think overstates it. I just meant to say that there's a wide gap between being told the facts (about anything profound, not just computers) and really coming to terms with their implications, and that I think that achieving the first is easy and the second hard.


I'm not sure how I would explain (as only one example among many) Facebook's market cap to our 1915 visitor. I think s/he would find it enigmatic, not having been along for the evolution of events.


The market cap concept would be known pretty well to an educated visitor from 1915; exchanges had been in existence for more than two hundred years by then. The number may seem big at first, but after explaining the impact of inflation, and maybe expressing the number as a percentage of GDP, it, too, would not seem far-fetched.


I expect them to die in approximately five minutes from culture shock. Ah, and also getting hit by a car.


I guess a vehicular injury could be considered an extreme case of culture shock.


Here's the story of real people who were cut off from the outside world for 40 years:

http://www.smithsonianmag.com/history/for-40-years-this-russ...

> “What amazed him most of all,” Peskov recorded, “was a transparent cellophane package. ‘Lord, what have they thought up -- it is glass, but it crumples!’”


Nah, you'd probably die from dirty air... or drowning depending on where you land :-)

Everything else can be learned - after all, even dogs get used to Roombas...



