This is my open source app, and it has grown massively recently. We're using the barometers in some new Android phones to build a global weather network of hopefully unprecedented scale. We just launched version 3.0 this week, which livestreams our data to an atmospheric science researcher at the University of Washington. Our primary goal is to grow the network and use the data to improve short-term weather forecasting methods.
I really love the idea of crowdsourced meteorological data. I can envision a world where every smartphone has a low-power barometer plus ambient temperature and humidity sensors. Combine this with low-power GPS and you could have the majority of the world's citizens constantly transmitting basic atmospheric data 24/7. As sensors become more advanced and cheaper, one can even envision a time when smartphones also include things like air quality sensors and so forth.
I can't imagine how accurate our weather forecasting would become if we had constant access to this incredible amount of real-time data.
I don't think ambient temperature and humidity sensors would work very well, because people usually aren't outside, and when they are, the phone is usually in a pocket.
I love this idea too, though. I'm guessing pressure is one of the measurements least affected by being inside or in a pocket, while still being highly useful for weather prediction.
It would be interesting to study this. I wonder if there are ways to normalize the data and still extract a useful signal even if the devices are in users' pockets.
Longer term, I can see these sensors being embedded in glasses or contacts, which could be an easier problem.
The problem is that the effect of being in a pocket is not zero-biased when it comes to temperature (and possibly also humidity). You'd see temperatures being pulled towards normal body temperature on average.
Yes, but in my rudimentary understanding of weather, what matters is the pressure differential rather than the absolute pressure.
I believe the data in aggregate will provide a pretty good map of the pressure gradient, which could then be anchored to the more accurate dedicated weather stations. Think of it like the 10,000-year clock, which uses a clock known to drift in conjunction with a solar time fix to calibrate.
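Roughly the sketch below is what I have in mind (all numbers and names are made up for illustration): pin the swarm's average to a nearby trusted station, keeping the relative differences, i.e. the gradient, intact.

    import numpy as np

    # Hypothetical illustration: anchor noisy crowdsourced pressure
    # readings to a trusted reference station, "drift plus fix" style.

    # Crowdsourced readings near the reference station (hPa), each with
    # an unknown per-device offset (uncalibrated sensor, altitude, ...).
    phone_readings = np.array([1009.2, 1014.8, 1011.5, 1013.1])
    station_reading = 1012.3  # trusted dedicated weather station (hPa)

    # Estimate the aggregate bias of the local swarm and remove it.
    bias = phone_readings.mean() - station_reading
    calibrated = phone_readings - bias

    # Differences between calibrated readings (the gradient) are
    # preserved; only the absolute level gets pinned to the station.
    print(calibrated, calibrated.mean())  # mean now equals the station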
You may be right about pressure, but btown is talking about temperature and that's going to have much worse systematic bias exactly as he says. Your comment and his are nearly completely unrelated.
>one can even envision a time when smartphones also include things like air quality sensors and so forth.
How about ambient light sensors, so we can accurately predict potential solar power in a given area? Correlated with NOAA data, this could be very valuable to companies looking for good solar data.
Luminosity, temperature, and humidity inside a house or office are usually very different from the values outside. So these data are useful for the user but not for a weather modeler.
But pressure inside a house is usually almost equal to the pressure outside. So it can be crowdsourced.
Sadly, even if we had one sensor per cubic foot of the troposphere, nonlinearity would quickly swamp any predictive value; I'm not sure we will ever have accurate forecasts past a few days.
Right, it's generally believed that forecasting will never be effective beyond about 15 days, due to the amplification of small uncertainties by the nonlinear dynamics of the atmosphere. Predictability is better in the tropics by a couple of days, and better in the northern hemisphere winter by a couple of days.
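For anyone who hasn't seen this effect directly, here's a toy demonstration. The Lorenz system isn't a weather model, but it exhibits the same exponential amplification of tiny uncertainties:

    import numpy as np

    # Toy demo of sensitive dependence on initial conditions using the
    # Lorenz system, integrated with simple Euler steps.

    def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return s + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-9, 0.0, 0.0])  # a billionth of a unit of "error"

    for step in range(5001):
        if step % 1000 == 0:
            # Separation grows exponentially until it saturates at the
            # size of the attractor -- i.e., the forecast becomes useless.
            print(step, np.linalg.norm(a - b))
        a, b = lorenz_step(a), lorenz_step(b)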
It's worth noting that the pressures measured, even if they could be calibrated, would be almost entirely on land, and only at the surface of the Earth, not at higher altitudes (it is, of course, a 3D problem). And also, a lot more than just pressure is needed -- temperature, wind velocity, clouds, aerosols, irradiance, ocean currents, wave height, soil moisture, ...
I think I might have been the person who first told Cliff Mass about your app. Of course, I wouldn't have known about it had it not been for Hacker News. I'm really happy to see that this has grown so much! :)
From: Aaron Brethorst <aaron@xx.yy>
Subject: Announcing pressureNET 2.0 | Cumulonimbus
Date: February 14, 2012 10:45:50 AM PST
To: cliff@aa.bb
I'm not affiliated with this project, but figured you'd find it interesting.
http://www.cumulonimbus.ca/announcing-pressurenet-2/?
Do you have a plan to monetize pressureNET? I've been testing it for a few weeks. It's a very interesting project!
Also, pressureNET's privacy settings say, "The data we collect is the location of your device, the time, and your atmospheric pressure." Is a unique device ID shared with researchers or the public?
We most definitely have a plan to monetize pressureNET. We're working towards that goal right now by building a customer-facing API that will allow us to livestream our data to professional forecasters. We've already seen decent customer interest, and we haven't even reached out to anyone yet - just general media coverage is showing us that there's a definite market for our data.
A hashed unique device ID is currently shared with the researchers in order for them to do calibration work. We have not shared it with anyone else yet, and due to privacy concerns we would be very nervous about sharing it publicly. Removing it does remove some utility of the dataset though, so we're going to try to find another way.
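To give a rough idea of what "hashed" means here (this is just an illustrative sketch, not our exact scheme, and the salt is a placeholder):

    import hashlib

    # A salted hash gives a stable pseudonym per device (so researchers
    # can calibrate per-sensor) without exposing the raw hardware ID.
    SECRET_SALT = b"placeholder-secret"

    def pseudonymize(device_id: str) -> str:
        return hashlib.sha256(SECRET_SALT + device_id.encode()).hexdigest()

    print(pseudonymize("example-device-id"))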
I'll make sure to be more clear about the privacy settings as we move forward. :)
This is one of those things that really highlights the disruptive power of ubiquitous computing + sensors. I expect to see more of it in the future.
I wonder if we could get Randall Munroe to do a "What if" treatment of a million people taking a picture at the same time with their camera phones pointed toward Sirius. Given EXIF data of time and GPS location, could you use that data set to create high-resolution images of astronomical entities? If they were all 5MP cameras and you had a million participants, that is a 5-terapixel image with an image surface across thousands of miles.
This is a really interesting idea. I believe, however, in this form it wouldn't quite work as expected. What you'd end up with is five terapixels of data, but not a 5TP image. The way you'd get that 5TP picture is to arrange all those sensors in a grid and focus the image of your desired object onto that grid. In this form you'd end up with a very, very high resolution image because only a tiny fraction of the image would be falling on each sensor. Each fraction of the total image could then be combined into one mega image, and voila, your 5TP image!
If you have a million people all focusing their 5MP cameras at the same object, you'd end up with a million photos of the same object, but without anything more than 5MP because the entire image would fall on each sensor.
Perhaps people should donate their old phones to science and equivalent sensors could be arranged into a giant grid with a powerful lens attached. The only difficulty then would be atmospheric distortion, so perhaps a cheap trip out of our atmosphere would be in order! I propose we call it the Hacker Telescope.
Well, outside the atmosphere we've already got a plan: it's called the James Webb Space Telescope [1] :-)
One of the weird things about light is that the photons that hit the camera sensor in California are not the same photons that hit the camera sensor in New York. So you could, if you chose to, add the two pictures together which would increase its brightness (more photons) and not change the content of the picture. The trick of course is figuring out which pixels in the camera sensor were getting the same (or nearly the same) photons.
Since you are taking a picture of the stars, which are far enough away that parallax effects won't change their relative positions, it should be possible to map the positions of the stars in each image and combine that with the pointing vector from the accelerometer to create a projection matrix that would allow you to back-project the camera pixels onto an idealized focal plane.
Now you have a map of all of the various image pixels with respect to their projection onto this plane, and you can then add together like pixels. Or generate a pseudo-5MP image where each pixel is comprised of a million sub-samples. (sometimes I wish I could draw in this editor)
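Since I can't draw, here's the shape of it in code instead. This is a rough sketch: the star detection and matching are assumed to happen elsewhere, and OpenCV's homography fit stands in for the projection-matrix machinery:

    import numpy as np
    import cv2  # OpenCV

    def stack_frames(frames, ref_points, frame_points_list):
        """Register each frame onto a reference sky plane by matching
        star positions, then accumulate like pixels.

        frames: list of HxW float32 arrays (one per phone)
        ref_points / frame_points_list: Nx2 star coordinates, assumed
        already detected and matched elsewhere.
        """
        h, w = frames[0].shape
        acc = np.zeros((h, w), dtype=np.float64)
        for frame, pts in zip(frames, frame_points_list):
            # Homography mapping this frame's pixels onto the reference
            # plane -- the "projection matrix" described above.
            H, _ = cv2.findHomography(np.float32(pts),
                                      np.float32(ref_points), cv2.RANSAC)
            acc += cv2.warpPerspective(frame, H, (w, h))
        # Summed registered frames: more photons, noise averages down.
        return acc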
The atmosphere is a problem, but there is an interesting effect in computational photography [1] where you take many pictures of the same thing through interference and computationally remove the interference. In theory each camera could provide its GPS co-ordinates (accurate to 20m or so), the time of day (accurate to the second at least), camera orientation with respect to the gravity vector (3-axis accelerometer), and possibly the orientation of the magnetic field. The million-dollar question then is how much you can use that information, and the image data, to construct a computational model of the light field as it is incident on the planet at a particular time, and from that identify and display sources of that light.
Clearly you'd need a significant chunk of computer power to post process that data. But it might be interesting.
I did want to see what the state of the art was though, and they use rigid steel on concrete foundations and "path compensation" to deal with the alignment problems (quotes because I am quoting from outside my vocabulary...).
Check out EM reconstruction. Individual electron microscopy images are taken at terrible signal-to-noise. But you pick out tens of thousands of them, from different angles, and you can average them to get a very high-resolution reconstruction.
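The statistics of that averaging are easy to demo. A toy example with a 1-D "image" in place of a micrograph:

    import numpy as np

    # Averaging N noisy copies of the same signal improves the
    # signal-to-noise ratio roughly as sqrt(N).

    rng = np.random.default_rng(0)
    signal = np.sin(np.linspace(0, 2 * np.pi, 256))

    for n in (1, 100, 10000):
        noisy_copies = signal + rng.normal(0, 5.0, size=(n, signal.size))
        average = noisy_copies.mean(axis=0)
        rms_error = np.sqrt(np.mean((average - signal) ** 2))
        print(n, rms_error)  # shrinks roughly as 1/sqrt(n)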
I think the difference between EM reconstruction and ... "stellar reconstruction", as it were, is the relative amounts of parallax. When we take EM images of sub-microscopic objects, we can take them from appreciably different angles. When we take cell phone camera images of a star, we can't.
And, the irregular lenses on all those crappy cameras. (Crappy by astronomical standards.)
There's no way you'd be able to calibrate that out, so you'd never know which pixel on the camera mapped to a given patch on the sky.
Not to mention that the camera body is so compliant that even if you perfectly characterized the lens, once you put the camera in your pocket and took it back out again, it would all be different.
And what about thermal effects? It boggles the mind.
I love this project and the idea of linking together the sensors on so many devices. I wonder if something similar could be used for earthquake detection using the motion sensors in the phone, since many phones rest on flat surfaces at least part of the day (most of the day for me). It could be activated when the phone is resting on a flat surface with no movement.
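Something like this sketch, maybe. The thresholds are made up, and a real detector would have to reject footsteps, passing trucks, vibrating ringers, etc. and corroborate triggers across many nearby phones:

    import numpy as np

    # Sketch of the rest-then-trigger idea with made-up thresholds.

    REST_STD = 0.02   # g; below this, treat the phone as "at rest"
    QUAKE_STD = 0.15  # g; a spike past this while resting is suspicious

    class QuakeWatcher:
        def __init__(self):
            self.resting = False

        def update(self, window):
            """window: recent accelerometer magnitudes (g), gravity removed."""
            s = float(np.std(window))
            if self.resting and s > QUAKE_STD:
                self.resting = False
                return "possible quake"  # report for cross-device checks
            self.resting = s < REST_STD
            return "resting" if self.resting else "moving"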
The Quake-Catcher Network (http://qcn.stanford.edu/) has been doing this for years with laptops and USB motion-detection modules. They say that laptops are less than ideal because in larger earthquakes laptops bounce around and give less accurate results. Mobile phones would suffer even more from this.
I used to have QCN on my laptop (old MacBook) and the sensors were quite sensitive, to the point where it could detect footsteps a couple of feet away.
Presumably this will matter more in places like Africa, India, etc., which lack the same level of highly developed weather-sensing infrastructure as the US. It seems likely low-end Android phones could be just as pervasive there at some point, and would make a huge difference to farmers, public safety, travel, etc.
At the moment, only the newer, high-end Android devices have barometers in them. A regularly available data or wifi connection is also necessary for the data to be real-time. So, this should work just fine right now in the major cities of India and the more developed countries of Africa, but for the most part it's probably a bust. Their lack of one infrastructure mirrors another.
Very neat technology, and I think it could be of value in the developing world and sparsely populated areas, but is there that much need for this data in many parts of, say, the US or Europe? Here in San Francisco, I've got highly accurate weather stations at SFO and OAK, plus dozens of other stations on Weather Underground around the city. The variability of these readings over, say, 10 miles is quite low. Are there specific ways in which a more dense grid of barometric pressure measurements can give us further insight into weather patterns?
This would be a great supplement in areas where the density of observation stations is much lower. Wind speed, which varies a lot more due to terrain and structures, would be neat to map this way, but it's hard to measure the wind through our pockets on cell phones.
I am planning to build a barometer (and ultimately gather temperature and humidity data as well) on an Arduino, and upload the data feed to my computer. Does pressureNET have an API to accept data from devices other than Android phones?
Almost. I'm working on this as almost top-priority right now. There are two APIs/SDKs that I'm building for pressureNET this week. The first is a customer-facing API to access the livestream data, and this is what we have in Beta right now streaming just to Cliff Mass. We'll finish that up in a few days, and move on to API number 2, which is what you're asking for. My first step is allowing other app developers to include pressureNET inside their apps, in order to increase the value of both projects. I will follow this by accepting data from other sources as well.
All of us who work on pressureNET are doing it in our free time, as we all have day jobs, and this project currently does not generate any revenue. So you may have to wait a little while for the second part of that second API to get done. But on the other hand, we're open source, so if you're itching to get it ready you can help us out. :) The project is split into three repos on GitHub; we're going to merge the two servers into one sometime soon.
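Once it exists, submitting a reading from something like an Arduino bridge shouldn't be more complicated than this sketch. The endpoint URL and field names below are placeholders, not the final API:

    import json
    import time
    import urllib.request

    # Hypothetical submission sketch -- endpoint and fields are
    # placeholders, since this API isn't finalized yet.

    reading = {
        "latitude": 49.2827,
        "longitude": -123.1207,
        "pressure_hpa": 1012.3,
        "timestamp": int(time.time()),
        "source": "arduino-bmp085",
    }

    req = urllib.request.Request(
        "https://example.com/pressurenet/submit",  # placeholder URL
        data=json.dumps(reading).encode(),
        headers={"Content-Type": "application/json"},
    )
    # urllib.request.urlopen(req)  # uncomment against a real endpoint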
I've got all that working on Arduino as part of my coffee grinder data logger. Two temp sensors (ambient and burr temps), humidity, barometer, and a digital compass to log grind adjustments. It logs all that and motor on/off times to an SD card and squirts it out over USB. Let me know if you'd like to see my design/code. (email in profile)
Really awesome app/idea. I have been waiting for someone to do this. One suggestion: create hourly (or whatever increment would work best) heatmap-style maps of this data.
I understand this is (probably) meant as a scientific tool for data retrieval, but adding something like this would show users how the data is being used and what their readings actually mean in comparison to everyone else's.
An isobar display instead of individual info points has been a goal in pressureNET for a long time. Giving the user enticing visual feedback is part of making the app more broadly appealing.
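For anyone curious, the rendering side is not the hard part. Something like matplotlib's triangulation-based contouring gets you isobars from scattered points (random data below, standing in for real readings):

    import numpy as np
    import matplotlib.pyplot as plt

    # Isobar sketch from scattered readings; the data is random,
    # standing in for a real pressureNET feed.

    rng = np.random.default_rng(1)
    lon = rng.uniform(-123.3, -122.9, 200)
    lat = rng.uniform(49.1, 49.4, 200)
    pressure = 1012 + 3 * np.sin(lon * 20) + rng.normal(0, 0.3, 200)

    contours = plt.tricontour(lon, lat, pressure, levels=10)
    plt.clabel(contours, fmt="%.1f hPa")
    plt.xlabel("longitude")
    plt.ylabel("latitude")
    plt.show()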
Glad to hear it. I did some work on this, plotting PM & ozone data on an Android device. I had never done anything remotely similar before and had a hard time finding any good options besides going out and writing my own charting system. I ended up using a heatmap-style plotting system. It looked good, but it was very hard to get the colors to match up to any particular scale. I hope you end up with better results than I did :).
Some Google employees have mentioned that it's there to improve GPS lock times. The devices already have a latitude+longitude estimate from cell towers and WiFi, and the barometer adds an altitude estimate.
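The conversion is the standard barometric formula; presumably something like this is what the assist uses, though the exact implementation isn't public:

    # Standard barometric formula for the lower troposphere. Sea-level
    # pressure varies with the weather, so absolute altitude needs a
    # local reference; 1013.25 hPa is the standard-atmosphere default.

    def pressure_to_altitude_m(p_hpa, p0_hpa=1013.25):
        return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

    print(pressure_to_altitude_m(1000.0))  # roughly 111 m above sea level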
You would have thought that "ground level" (based on the latitude and longitude estimate) would be a pretty good starting estimate for altitude in most cases.
In a modern urban environment I think this is becoming less and less so.
Living in a metropolitan area, for me at least, means I spend quite a lot of my time in an Nth-floor apartment or an Mth-floor office, and commute by subterranean subways and elevated train tracks, walkways and highways.
The difference in altitude between the subway and the top floor of an apartment building isn't really very far though, when observed from a GPS satellite that's 20,000 km above you.
Trying the app, I saw there's an option for choosing who you want to share your data with.
I'm inclined to choose 'public', but in the end the lack of insight into whether and/or how you anonymize the data made me choose otherwise.
Care to elaborate a bit on how and what you do with the data?
Excellent question. I do realize that our sharing options are a bit vague, and I'm going to write up official descriptions and examples to eliminate all the confusion. We have not taken any action on the 'Public' option yet, so for now that option is equivalent to 'Us, Researchers and Forecasters'. There is currently no anonymization done as our only customer so far (Cliff Mass) requires the full dataset for calibration. For the Public data set, we will look into how to deal with this - it should probably not include unique identifiers, and so we'll likely remove those. This does, however, remove some of the utility of the dataset, so perhaps there is another option.
I'll make sure to be more open and clear about this as we make the decisions. Thanks for your comment.
Thanks. Yes, we have some very clear plans on how to grow. This was a big problem for us about a year ago, when our only method of growth was by posting to r/android and r/xoom. Currently, the project has solved the original chicken-and-egg problem in that we already have enough users that our existing userbase provides enough content to be interesting to new users.
We have a new method of very large growth coming up, though. There have been at least four different app developers that have contacted us recently, asking if they could include pressureNET inside their own apps. These are typically very popular apps with 1,000,000+ installs. Now, most of these users don't have barometers, but even 1% would be very large growth for this project. So there's that. This requires me to build a simple pressureNET SDK, which is a priority for this weekend.
Beyond this short-term, large growth, we have another plan which is to contact local Samsung and carrier offices and hopefully work towards a goal of having pressureNET included in next-generation phones. Contacting Google is a very clear long-term goal, but I want to grow with my current opportunities a bit more first.
Yes, definitely. I'm working on it right now, actually. We're building an API that Cliff Mass is using right now, so everything is in place and working. I have to finish some features in code, document everything, ensure there are no privacy breaches, and then we'll go live with it. I'll keep you updated.
Edit: We are featured in MIT Technology Review right now: http://www.technologyreview.com/news/510626/app-feeds-scient...
And here's our 3.0 launch blog post with more information: http://www.cumulonimbus.ca/pressurenet-3-0-sharing-visualiza...