Arguing about what science and engineering are is all fine and dandy, and far be it from me to tell people what to discuss, but I would highlight the actual point of the piece, which is the suggestion that we are overinvesting in science in certain areas where we should be investing in engineering.
I find myself thinking back to an article earlier today: http://news.ycombinator.com/item?id=1986640 , about the virus that could improve lithium batteries' capacity by up to 10 times... from scientists at the University of Maryland. As some people say in the comments, they're tired of hearing about these advances that never make it to market. Perhaps this is part of the reason why? If nobody takes this to engineers or funds engineers, this was, if not entirely a waste of time, certainly a suboptimal use of time.
(Certainly part of the reason is that some of these ideas simply don't pan out in practice; batteries with 10 times the capacity but at 100x the cost may have such a limited market as to be effectively no market. But it seems like some of these things ought to be happening. The various promised-but-never-materializing advances in the field of solar energy particularly come to mind.)
I think that's a rather shortsighted view of science. There are many discoveries that, for reasons of funding or just exposure, are reported by the media as groundbreaking or revolutionary... and never make it to market.
There are many reasons for this - maybe there are issues that are not solvable using today's technology or knowledge. Maybe it's not economical, or maybe it's just waiting for the proliferation of something else before becoming feasible.
The knowledge is not wasted - there are many instances where things discovered decades ago, with no real applications at the time, became incredibly relevant later.
There's a mantra I heard while in school: math drives science 50 years into the future, and science drives engineering 50 years into the future.
I think you may have accidentally read a frequently-expressed idea into my message, that "science is worthless when it is not practical", that I did not actually say, nor do I believe. The point of the article is I believe more accurately phrased as: If we actually want to solve our energy problems, then we should expect to need to invest in engineering, not just science. The discussion was framed not as an abstract consideration of the virtues of science, but, given that we have important problems that need solving, are we approaching the solution in the best manner? Or, by conflating "science" and "engineering", are we accidentally satisfying ourselves that we are making progress when in fact we are investing too much in science and not enough in engineering, a mistake we might not make if we did not freely conflate the two so frequently?
In the specific context of solving real world problems, this question has meaning, and it's not the generic "useless science is useless" argument, it's actually grounded in very real considerations.
This is part of the reason I posted in the first place: as people get drawn to the strange attractor of the generic science vs. engineering question, I felt they were missing out on the more interesting and more subtle question raised by the article itself. (And personally I think "science vs. engineering" is just a boring and irrelevant definition debate when it comes down to it, with everybody citing their personal definitions at each other and arguing about which definition is "real" as if that actually mattered. This article raises an actually interesting question with teeth in it, and it works perfectly well with rather standard definitions of the two terms.)
I think one could also argue that a lot of science research needs to improve its engineering methodology.
One obvious problem in a lot of even hard-science research is that the end product is a piece of paper with some formulas and graphs. In this era of computer use, a lot of software often went into its construction, and that software is generally not released.
Just as one example, a lot of science uses Matlab, which pretty much inherently makes the results closed to further use. This has been argued about here before, and I'm well aware a lot of results can be too preliminary for general use. But it's worth improving this.
After all, the degree of mathematical sophistication of physicists and biologists has increased over the years. It seems reasonable to ask that their software engineering sophistication improve also (not that software engineering is as exact as science of course).
(The following is based around a United States perspective, would be interested in more international views)
In the US, as I see it, there seems to be a view of engineering as a capitalistic endeavor powered by private investment, whereas science is an academic endeavor powered primarily by government investment. (Exceptions seem to be defense contractors and medical research. The first is normally the government buying a service, the second is typically the transition from academic to commercial prospects.)
I wonder how many people here consider themselves scientists versus engineers. I think computer science is in a pretty unique position because people in our field solve both scientific and engineering problems. Indeed, the heritage of our field comes from both mathematics/physics and companies that made things like typewriters. Building computational systems has always required solving problems of both types; I'm not sure this has been the case for a while in other engineering or science fields.
The main distinction that I've seen between EE (at least the part focused on computational things), CompE, and CompSci is what part of the stack you focus on; does this reflect others' experiences, particularly those in EE or CompE?
I'd like to make a point here that Computer Science is not really a science. This is not an insult - I'm a computer scientist myself. Computer science derives from mathematics, which also is not a science. My premise here is that for something to be a science, it needs to be guided by the scientific method. Science generally deals with empirical observations and the change in the observation when one single controlled factor is changed.
Mathematics on the other hand already works in an idealized world where everything is logically connected to the base premises, and so experimentation is not required because we can formally prove things, which is strictly better than experimentation.
I have a computer science degree, although I wouldn't refer to myself as a computer scientist. However, as a programmer there frequently are times when I do resort to a basic scientific approach, in terms of hypothesis, experimentation, analysis of results... Someone might say that programming is supposed to be axiomatic, but things are typically complex enough that even with basic understanding of the axioms, beyond a basic level there is some level of trial and error that goes along with it.
Programming more generally is about solving problems. That involves a certain degree of scientific knowledge, mathematical knowledge, and some engineering since we generally build things that people will use. There is also a certain amount of creativity, both in expression and in terms of the ways we find solutions to problems as well. In my view, it doesn't exactly mimic any of those disciplines, but can be a mixture of all, and in varying degrees depending on the person and task at hand.
I have the impression that today too many programmers use an engineering 'trial and error' approach in preference to formal reasoning. While the laws of nature do not change, the runtime environment of a program changes rather frequently with every update of the OS and libraries, which is why the prevalence of the 'trial and error' approach results in the overall poor quality of software.
Perhaps, but as I mentioned in my last comment, formal reasoning often breaks down when dealing with anything more than the simplest of systems. What kind of axioms can you reason from in software? "The network is always slow": not always. "Memory is always abundant": not always. "The disk is always slow": with virtual memory, not always. "Exponential time complexity is always bad": not always. And especially, factor in cost, which is one of the most important aspects; that severely impacts even the ability to reason formally, since the most correct version can also be the most expensive and time-consuming to implement.
Anyways, I didn't mean to downplay formal reasoning, but just to indicate that it is not often practical to use those methods exclusively when building software.
It makes sense to reason from the specifications of the runtime environment. For example, the spec for memcpy says that its behavior is undefined when the memory areas overlap. It would then not be correct to use it for overlapping memory areas, even if it happens to work in some particular implementation, as trial and error may show.
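To make that concrete, here is a minimal C sketch (the buffer contents and the one-byte shift are made up for illustration): memmove is specified to handle overlapping source and destination regions, while memcpy is not, even when a particular implementation happens to tolerate the overlap.

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char buf[] = "abcdef";

        /* Shifting the string right by one byte means the source and
           destination regions overlap. memcpy's behavior is undefined
           here, even if it appears to work on some implementations:
           memcpy(buf + 1, buf, 5);   -- undefined behavior */

        /* memmove is specified to copy correctly even when the
           regions overlap. */
        memmove(buf + 1, buf, 5);
        printf("%s\n", buf);   /* prints "aabcde" */
        return 0;
    }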
Definitely agreed. On the other hand, the act of programming itself is not quite Computer Science either, it's rather a feature you get as a result of it. It's more like applied math in my opinion.
Dijkstra said: "Computer Science is no more about computers than astronomy is about telescopes."
An alternate viewpoint is that mathematicians discover truths and laws about the mathematical realm, just as physicists, chemists and others discover them about the physical realm.
Mathematicians do in fact do "experiments" before proving a theorem - it's just that those experiments are often thought experiments, in terms of asking "what does (or would) this mathematical object look like, or how would it behave?", and those experiments are not mentioned in the final formal write-up.
I agree that your definition of science is the proper one, but then it makes distinguishing between engineering and science very difficult, doesn't it? I would guess that the classical scientific method (observe, hypothesize, test, iterate) is actually used on an every day basis more often by engineers than most basic scientists.
That said, as a physicist quite far removed from any applications, I staunchly agree with the thesis of the article: "Although a good deal is already known about those things, it certainly would not hurt to know more, but what would really move things forward would be investments in engineering."
I think most people outside of a particular corner of academia don't realize how much less compelling the case for basic scientific research leading to applications is now than it was (say) 50 years ago.
I can't agree more. I think computer science is somewhat unique amongst fields that are typically called engineering. While mechanical and electrical engineers build things based on scientific theories, computer scientists build things based on mathematical theories.
I generally agree with you -- I avoided stating that CS is more similar to math than the physical sciences for the sake of conciseness, and to highlight the science vs. engineering aspect.
That said, I find CS has some elements of what we generally think of as science. We can make hypotheses about unknown (and sometimes even quasi-natural) phenomena, and then test those hypotheses with empirical evaluation. The networking and Internet measurement literature is a good example of this, and I'm sure there are others as well (any measurement study, really). I don't think that's really the case in math.
The way I see it, any time you are building a system, you're an engineer. Theoretical computer scientists aren't really building systems; they're uncovering the mathematical foundation for the work that we do, so they don't count as engineers.
In contrast, someone implementing an algorithm is, by this definition, an engineer. A good algorithm necessarily requires you to get your hands dirty with the nitty-gritty of the system the algorithm is going to exist in. So, these guys are engineers.
I also think the folks who are deciding our taxes and studying economic policy and government should all be considered engineers, because they are also building systems, with the caveat that these systems have human beings inside them.
Interestingly, my degree was in Computer Science (from a respected British university), yet it was part of the Engineering faculty. That seems quite apt for the duality of Computing, which I agree definitely straddles both.
I think most Computing studies cover both the engineering (eg. programming), and science (eg. algorithms, grammars etc.) sides.
there's always been a lot of debate about whether software engineering is part of computer science or a separate profession.
looking at it through my own personal lens ... PREfix and PREfast (the static analysis tools i architected) were primarily software engineering -- we published in the Journal of Empirical Software Engineering. others working in the same space took more of a computer science approach, developed far more precise models and made breakthroughs in SAT and BDD. over time the two streams merged, which is a big win all around.
on the other hand, some of my other work was very CS-y -- looking at formalizing security algebras, for example, and applying standpoint epistemology to graph theory. so in the end i consider myself both a computer scientist and a software engineer.
My computer science education has been a schizoid mixture of classes geared to the practical, which I think of as "engineering" classes, and classes geared to the theoretical, which I think of as "computer science" classes. The latter are not about a science though, they are about a branch of mathematics.
In the US, you can call yourself an engineer whenever you feel like it, and you can put it on your resume and tell your employers that you're an engineer.
Here in Canada, it's illegal to call yourself an engineer without having a license.
And it's understandable, since accredited university engineering programs are rigorous. All engineers need to study chemistry, physics, and math, and you have to go through "how to be an engineer" courses that teach you ethics, engineering tools, and the safety regulations that need to be followed. The programs are not easy, and you end up having spent 4-5 years of your life mostly studying. My friends in science programs are off playing around or partying all the time, but most of the people in my engineering program are studying, with the occasional day off where they go drinking.
I'm in Computer Engineering, and when I compare my schedule to my friends' in Computer Science, it's a complete joke. I have 30 hours of lectures and labs a week, while the computer science people only have 15-20. My roommate, in second-year computer science, just games all the time. At 2 AM, when I'm trying to sleep, I can still hear him playing with my other roommate, yelling across the hall.
You can also double major computer science with business. My friend is doing that right now. You can't double major engineering with anything, because there's not enough time in a day to fit in that many courses.
Most people are under the impression that the difference between computer science and computer engineering is that one is theoretical, and the other is more practical. (which is what my guidance counselor told me) But when you actually enter the program, you realize computer engineering is far more than just about computers.
The running joke in Canada is that if you graduate computer engineering, you can go build a bridge. (legally, because you have an engineering stamp!)
Yes, computer science solves scientific and engineering problems. But do you really call a toddler an engineer when he builds lego stairs to reach the cookie jar on the shelf?
If so, I hope you're the only person on the bridge he builds.
True, but here in the US "engineer" is not the term that matters. "Professional Engineer" is the term given to licensed and accredited engineers - generally identified by the initials "P.E." after their name, usually with some indication of the state they are licensed in.
Since graduating with an engineering degree is not enough to become a "Professional Engineer" in the US, you can actually argue it makes some sense to differentiate the terms - being licensed as a PE requires 6 years of professional engineering experience before you can even take the exam. So simply listing a job title of "engineer" might help demonstrate relevant engineering experience but means nothing with respect to your actual license.
If you can't double major in engineering and something else, you're either lazy or not very smart. Especially at a Canadian university... I mean, come on we're not talking MIT here.
Computer programming used to be applied maths. You followed the correct lemmas in the language and if you didn't make any errors you had a correct program.
Now there are so many layers of indirection between you and the world (huge complex APIs, graphics drivers, optimizing compilers, hyperthreading, cache, etc.) that the actual behaviour of a program - especially in the high performance, real time world - is no longer possible to calculate from the published specs.
It becomes experimental science, you adjust various parameters and measure the result. Certainly to get performance but sometimes just to get APIs to work together.
> It becomes experimental science, you adjust various parameters and measure the result. Certainly to get performance but sometimes just to get APIs to work together.
This confuses the issue. The relevant question is --- as a programmer, would you consider yourself a "scientist" or an "engineer"?
In that context, we would not want to conflate the two, for example by trying to define why engineering ("adjust various parameters ... to get APIs to work together") is in fact science ("it becomes experimental science").
During the process of engineering, we of course employ the scientific method to accomplish our goals. But we would not want to consider engineering a purely scientific endeavor, in the way that, say, theoretical physics is a purely scientific endeavor.
Some universities call their program "computer and information sciences," which I think captures the notion that there's a distinction between the technical and theoretical training. What I've done since leaving school hardly qualifies me as a scientist, but I certainly draw on the principles.
A good book on this topic is "Designerly Ways of Knowing" published by Birkhauser. It defines Design as a third discipline, distinct from Humanities (which deals with the human experience) and Science (which deals with the natural world). Design with a capital D meaning anything related to creating new things, usually with technology.
The book goes on to explain how the three disciplines differ as per the skills required to practice it: scientists need analytical thinking, designers need synthetic thinking.
You learn this by page 10. The rest of the book is just as interesting as its first chapter.
> A scientist can create a new measure or a new tool.
Don't you mean, engineer a new tool :P
A Scientist is someone who accrues knowledge using the scientific method, that's it! Do you hypothesize, test and measure? Well then congrats, you're a scientist! All other distinctions are semantic and egotistically driven and unscientific.
That's a good point and reminds me of a professor I had in grad school. I remember having this discussion with him over beers once, and while he could explain how any algorithm worked (or figure out how to explain it), he didn't really care. What got him excited was taking all these low level pieces and using them to build things that never existed before, solving higher order problems.
Sigh. Don't understand why it's such a big deal. Case in point in the intersection between CS/Biology/EE:
Programming has completely revolutionized Biology, not only in terms of introducing databases for volumes of genomic data but also in terms of forcing geneticists to think in terms of algorithms and coding practices in DNA transcription/translation.
Advances in Biology in neural imaging and evolutionary genetics have informed artificial neural networks and evolutionary computing algorithms, some of which, ironically, are used to solve Biology problems such as protein folding and disease modeling.
Advances in biomedical engineering drawn from applied silicon wafer design have made expensive equipment for DNA replication (PCR machines), sequencing, and genetic expression (DNA microarrays) accessible to every lab. To reciprocate, Biologists are building organic circuits to eventually build a self-replicating bio-computer.
Sometimes as a scientist, you have to do engineering work to build the tools to investigate a new phenomenon. Sometimes as an engineer, you have to do some science research to build a new widget that's not been built before. It's all the same to me.
Would you hire a physicist and material scientist to design you a bridge? Or the civil engineer?
You are right, there are degrees of each profession in the other. No one (not least the IEEE) will deny that. But it's a huge leap from there to "It's all the same to me."
This is like saying "oh, as a sysadmin I periodically have to write scripts and small programs" and concluding there's no big deal in conflating being a sysadmin with being a programmer/developer.
Also, to nitpick a bit... biologists aren't building organic circuits. Biologists along with bio-engineers (and a whole buttload of other engineers) are building organic circuits.
Yes, there will be exceptional people who so perfectly blend scientist and engineer, that in that person, the two become the same. But as a whole, they are two intertwined, but separate things.
I agree entirely. I imagine the reason why the media lumps science and engineering together is because they really are inseparable in driving technological progress. The article suggests that not recognizing the difference between the two could lead to misinformed decision-making, but if a politician for example decided to exclude one or the other from a decision, that's an even bigger mistake.
People in engineering have to solve scientific problems too... and the author of the article doesn't completely understand what he wrote about.
Science is not biology nor physics nor chemistry and it's not truly about understanding the "universe and all it contains".
Science is a process of seeing, understanding, and confirming.
The word science does not have a claim on that process.
And almost every scientist today certainly does not have a valid claim on understanding "all" that the world contains.
If engineers didn't use science then they couldn't understand anything new and thus they couldn't engineer stuff because, as the author states, engineering is about understanding and solving problems.
Therefore there are some things in the world that engineers have to find out through a trustworthy process of confirmation, and if they use controls in order to see and test things of the world, then this falls squarely into the scientific process.
The author acts like someone can't be a scientist and an engineer at the same time but neither practice would be able to exist like it is today without the other one.
In the book he discusses both how engineers use science and how scientists use engineering. It's not that he doesn't know what he's talking about, just that this is a very short article.
I guess I should read his book... but his definitions of engineering and science are not good enough, from my point of view, because there is no way to verify them. A simple example is that you could interchange the definitions and they would not be 100% incorrect either.
I think a lot of the confusion stems from the fact that both scientists and engineers use lots of (often somewhat esoteric) math. This places us firmly on the math (science and engineering) side of the occupational divide, as opposed to the non-math (law, medicine, everything else) side.
Programming a piece of software is very similar to writing a book. You're just telling a computer what to do in a somewhat unusual language. So software development, in my opinion, is neither science nor engineering, it's art.
What kind of book? A math book? biography? romance fiction? In that respect, programming is also like cooking. For cooking, you can go from one extreme of food science, like NASA practices for food going up in space, to popping a frozen pizza into the oven, to many points in between. Programming allows a wide range of discipline to be executed in the process.
Software is a logical construction. It is a structure of parts that have determinate behaviours, put together to serve a particular purpose. It realises intent by assembling objective materials. When you build software you are building a machine. This is pretty much nothing like writing a book.
The only rationale you give is to imply both activities are telling something what to do. But does this stand up? When you write a book, are you 'telling' the book 'what to do'? No. Or maybe telling the person reading it what to do? In which case, are you expecting them to behave exactly according to your 'instructions'? No, not really.
Literature and software are two very different things.
That entirely depends on the purpose of the software that's being written.
If you're writing, say, the Amiga Boing Ball demo, or Dali Clock, you might be leaning more towards art.
But if you're writing software that controls transactions for a bank, you're definitely more an engineer.
What does it matter anyway? In my rather short career so far, I've been an artist and an engineer, a scientist and a craftsman. Sometimes all in the same day.
The process of science and engineering is art too. They both are "writing books" in their own way. Engineers write bridges, electrical circuits, environmentally friendly housing, etc. Scientists write new paradigms to understand the world with.
I've always seen engineering more like an art than a science. I don't mean to be condescending or anything... I don't mean it in the sense that we coders make works of art and everyone should kiss the keyboard we type on. Engineering differs from science in that it's driven by a selfish want to achieve something, albeit something technical which could probably benefit from scientific knowledge. An engineer is always trying to create something new, either from some theory or previous knowledge, or from freaking thin air. Science is akin to reading a book and then trying to prove theorems based on what you learnt, while engineering is akin to grabbing a blank sheet of paper and drawing what's in your head until it makes sense.
Robert Hooke wrote Micrographia about his observations using a microscope. Today the field is called Biology. The field of Computer Science is still stuck with the name of its primary tool. It's like calling cosmology "Telescope Science".
The distinguishing feature of a scientist is the ability to observe and generalize. The engineer is good at constructing things. In the early days of a science you need both skills to get things going, so it is easy for outsiders to confuse the two.
But the word "computer" is different than the word "telescope" in that the computer is actually fundamental to computer science. Computer science is about what information can be expressed with a computing device. It is fundamental information science that is as transcendental as physics, and frankly probably more transcendental than biology (as we know it today).
You can do biology w/o the existence of microscopes. But there is no computer science w/o the existence of computers (models where computation is performed). "Computer" is to computer science what "life" is to life science or "physical" is to physical science. It's the fundamental underpinning on which it is built. It's just happenstance that we have physical devices that manifest a specific representation of a computational model.
> Computer science is about what information can be expressed with a computing device.
This kind of proves the point that the name is not adequate. Let's pick an example. Kolmogorov complexity is a tool used to discuss randomness and the limits of data compression. It cannot be calculated on a computing device. So let's kick it out of CS. Meanwhile all the javascript programmers who are diligently engineering the next banking application UX can call themselves computer scientists instead of engineers if they want to. And maybe there is a whiff of science in A/B testing.
My point is that the name of the field shows its immaturity. Hooke wasn't looking for chromosomes and DNA through the microscope. So at that stage of the game his field would have been limited to "stuff that can be seen with a microscope". The stuff we are going to discover with computers has only been glimpsed. And, as you say, it won't be about computers.
> Kolmogorov complexity is a tool used to discuss randomness and the limits of data compression. It cannot be calculated on a computing device.
What are you talking about? Kolmogorov complexity is all about calculating on a computing device (but let's be clear, a computing device is not that thing under your desk or in datacenters -- it can just as easily be a Turing machine or binary lambda calculus or digital circuits). All "complexity" theory is about calculations on a computing device. Otherwise how are you compressing the data? Where does the data live? How do you measure the programs that interpret the strings? What do the programs run on? Kolmogorov complexity is simply a subset of information theory, all of which lies in the field of computer science, as it is all about how one expresses information over a computational model.
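For what it's worth, the standard textbook definition makes that dependence on a machine explicit (this is a general statement, not a quote from the parent comment): the Kolmogorov complexity of a string x relative to a universal machine U is the length of the shortest program that makes U output x,

    K_U(x) = min { |p| : U(p) = x }

so the quantity is defined over a computational model, even though K_U(x) itself is not computable in general, which is presumably what the grandparent meant by "cannot be calculated".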
Now, just because you use a computer doesn't mean you're doing computer science. Here's a simple test: if you're answering the question "for some model of computation, is X true?" then you're doing computer science. If you're not, then you're likely doing engineering. Building a banking application is generally not attempting to answer that question.
Everything in all complexity theory, algorithms, programming languages, AI, quantum computing, etc... require the existence of computers.
The way I like to phrase it is that computer science is everything that involves the theory behind, and the implementation of, the concept of computation.
Yeah, but it turns out one of the best things you can teach someone to help them write good software is a bunch of theory of computation. It both develops abstract thought and teaches them how things work.
I think you're getting hung up on conflating the degree, which is called "Computer Science", with the professions that employ those who have the degree: mainly, all the varieties of software construction efforts, which go by lots of names.
The thinking required for engineering and that for science are different spaces for solving problems. Understanding real world problems uses scientific thought processes. And to obtain solutions one has to adopt engineering -- the world has too many variables which only a deployed solution can capture. Of course, getting to the final solution requires multiple iterations between the two spaces.
"Throughout history, a full scientific understanding has been neither necessary nor sufficient for great technological advances"
Although this is true, it is also true that a full understanding (garnered through science) is necessary to fully grasp all of the implications of a new technology.
I cite for example the internal combustion engine. I suspect that although it was known that gasoline powered engines produced harmful exhaust fumes, it was never fully understood at the time what effect those fumes would have on the atmosphere and environment once the number of automobiles began to go up by orders of magnitude (until there was a significant enough number of them to produce a measurable effect, which no doubt a scientist found).
With that said, both fields are due an ample amount of respect. The article seems to favor engineering more, and indeed it does result in remarkable things. However, it is equally important to have the full picture that can only be garnered through empirical scientific research.
This reminds me of Rod Adams' argument that the US nuclear program focuses too much on scientific research, when the more practical thing to do would be to focus more on the engineering challenges of mass reactor production, plant construction scheduling and logistics, and so on. That engineering approach could get us huge amounts of inexpensive, safe green power, as China is discovering already. But instead the US remains preoccupied with research, when they ought to be building with the technology we've already got.
Related to the CS science vs engineering discussion:
There is a video of a lecture somewhere online where a professor writes "Computer Science" on the blackboard. He then crosses out the "Science" part and says the class is about engineering. He proceeds to cross out the "Computer" part too, and says it is not about computers either. I no longer remember where exactly this was, though.
EDIT: Now that I think about it, I'm pretty sure it was on MIT OpenCourseWare.
I've always made sense of it all as follows:
Science describes reality with theory.
Applied science moves science into practical application.
Engineering composes/commercializes applied science.
I've always considered engineering an application of science to solve problems (to make humanity better, if you want the pure, idealistic definition).
One definition of engineering I read a while back (and haven't been able to find again) was something to the effect of "engineering is the application of science to produce devices to perform useful work" so by that definition, I'd agree with you.
He shouldn't forget that a lot of the "scientists" that are using his "engineering" money are really "applied scientists", which are on the very blurry boundary with basic engineering.
I always thought engineering was the application of science. A scientist observes, experiments, and records his findings. An engineer uses these findings to build stuff.
When science isn't driven at least in part by problem solving, there's a tendency to wander off into interesting but arcane fields with little potential for application any time soon.
You can argue that eventually all science will be useful somehow or that as long as someone finds it interesting, it doesn't really matter. But, it seems a waste for brilliant minds to pursue largely irrelevant questions when they could dramatically make the world a better place now just by shifting their interests a bit.
I strongly disagree with this. Consider astrophysics and the work that is being done on the big questions. It's unclear how to apply most of the progress being made in these fields, and yet the understanding of these things has the potential to affect everything. If it takes 1000 years, it still might mean more fundamentally than all the intermediate applied science funded by companies for short-term profit.
You need a mix of practical (short-term) and seemingly impractical (long-term). The former drives the evolution of current ideas while the latter leads to completely new ones.
And don't easily dismiss the "interesting but arcane fields". In and of themselves, they may hold no direct applications but how you go about the research does. Think searching for the Higgs boson and the construction of the LHC.