Fascinating. This reminds me of the WWII novel All The Light We Cannot See where the young German soldier is tasked with locating illegal radio transmissions by placing receivers at different places and using trigonometry calculations to estimate the distance and direction of the source. Of course, the culprits would invariably be killed when found, leaving the boy with a deep sense of guilt.
I did similar long baseline acoustic localization for my dissertation a few years ago. We had about 15 microphones deployed in a wetland for bioacoustic monitoring.
Rather than manually finding the timestamps in the individual audio files, you can do pretty well by using cross-correlation to find the relative delays between the mic signals. Any particular delay between a pair of microphones corresponds to a hyperbola through the mic array. If you plot the hyperbolas from all the pairs of microphones, the correct location pops out as their intersection. The cross-correlation peak can get pretty smeared out, though, because each microphone hears a different version of the source signal.
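For anyone who wants to try this, the delay estimate for one microphone pair can be sketched in a few lines of numpy. This is a toy example on an artificial click; real field recordings would need band-pass filtering and windowing first:

```python
import numpy as np

def relative_delay(sig_a, sig_b, sample_rate):
    """Estimate how many seconds sig_b lags sig_a via cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    # Re-centre the peak index so 0 means "no delay".
    lag_samples = np.argmax(corr) - (len(sig_a) - 1)
    return lag_samples / sample_rate

# Synthetic check: the same click arriving 50 samples later at mic B.
rate = 48_000
click = np.zeros(1024)
click[100] = 1.0
delayed = np.roll(click, 50)
print(relative_delay(click, delayed, rate))  # ~0.00104 s
```

With real signals you would look at the whole correlation function, not just the argmax, since the peak can be smeared or ambiguous.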
Interesting. That looks like a system that uses phase difference.
Last weekend I localized an explosion to within 20m when two of the microphones were 3 km apart. I went to the location and they were firing fireworks as part of some religious ceremony.
The landing page doesn’t describe what TAK is and doesn’t even link to a page which does, even in a hidden menu.
To everyone: if you make something and put it on the web, please spend 10 minutes to write a single paragraph of introduction for people who just learned that your thing exists.
The "about" box doesn't show on mobile. On desktop it says this:
The Android Team Awareness Kit (ATAK), for civilian use, or Android Tactical Assault Kit (also ATAK) for military use - is a suite of software that provides geospatial information and allows user collaboration over geography.
ATAK was originally developed by the Air Force Research Laboratory (AFRL) and is now maintained by the TAK Product Center (TPC).
There's more information on the About page (accessible via the navigation bar on desktop, or the menu on mobile): https://www.civtak.org/atak-about/
That’s a nice read, thank you for that. If he sets up three or more recorders, then next time he can localize the source to within meters, within minutes of the first bang.
But his article presents a fun challenge. He doesn’t say where his receiver locations are, but he does give the triangle’s dimensions. I wonder if it’s possible to map this onto the street names and then work out which farm it was. It seems likely that there is only one way to place that triangle so that each vertex lies on the street it is reported to be on.
Then you could invent a time, any time at all, and add the numbers of seconds that he mentions to get the two other times. Now that you have the times and the co-ordinates, within the constraints of the timing inaccuracies, you could run the docker version of my localization program (https://github.com/hcfman/sbts-aru) on this input and it would output a map link to the offending farm, with a largish degree of error.
Note: from the diagram, the sound source doesn’t fall within the triangle of receivers. Localization can still work, but convergence drops off with distance, so you can only practically localize the source if it is not too far outside the triangle. But if you build battery-powered receivers, you can choose a better spot and move one of the recorders so that the source does fall within the triangle.
Note also that although my project details the construction of a Raspberry Pi recorder for high-accuracy localization, if you are localizing over long distances and you are happy with a second or two of inaccuracy in the arrival times, then you can indeed run the docker version of the program listed on GitHub and perform localizations based on mobile-phone determinations of the time of arrival. You just have to match the correct input format as detailed in the article. Specifically, like the following:
$ docker run -i localize_event 20
Enter GPS coordinates and timestamps. Press enter twice to finish.
Note also, you can simulate this easily by choosing locations on a map and calculating the expected times of arrival. Then you can introduce one or more seconds of error and see what difference it makes.
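A minimal sketch of that simulation, assuming a haversine great-circle distance and a nominal speed of sound (both my choices, not from the article; the example source position is made up):

```python
import math

SPEED_OF_SOUND = 340.0  # m/s, rough value; adjust for temperature

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def arrival_times(source, receivers):
    """Seconds of sound travel time from source to each receiver."""
    return [haversine_m(*source, *rx) / SPEED_OF_SOUND for rx in receivers]

# Example: a made-up source near the three posts discussed in this thread.
src = (44.4889, -123.3122)
posts = [(44.52421, -123.33492), (44.53796, -123.29172), (44.52975, -123.25571)]
times = arrival_times(src, posts)
rel = [t - times[0] for t in times]  # relative to the first post
```

Adding a second or two of random jitter to `rel` and re-running the localization then shows how sensitive the answer is to timing error.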
It’s lovely to work things out for yourself, but he could have saved himself some weeks of sleep by googling sound localization, if that was his primary interest :-)
It is weird that you can let off air cannons all night for three weeks and not have the authorities on your head, though.
Actually, chatgpt says that for 1 mm resolution you need 9 decimal places anyway :-) Note that an RTK GNSS can resolve your location to 1 cm. I don't know how much jitter there is in the USB buffering, but for this purpose let's assume none. Then, if I want to do experiments with short-distance localization, it would be good to have one decimal place more precision than I'd use in practice, hence the 1 mm. So I could format this to a maximum of 9 decimal places. I see the above is still a little more. Something for the next time I make a push. Thanks for your concern :-)
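For what it's worth, the back-of-envelope version, assuming roughly 111.32 km per degree of latitude, suggests 8 decimals already lands at about a millimetre:

```python
# One degree of latitude is roughly 111.32 km everywhere on Earth.
METERS_PER_DEGREE = 111_320.0

for decimals in range(6, 10):
    step_m = 10 ** -decimals * METERS_PER_DEGREE
    print(f"{decimals} decimals -> {step_m * 1000:.3f} mm per step")
```

So 8 decimals is ~1.1 mm per step and 9 decimals ~0.11 mm; longitude steps shrink further with cos(latitude).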
I guess I could ask chatgpt how many decimals get you to cm accuracy and fix the decimal places to that. I haven't found it a priority yet, but if it makes the detail-oriented folks happy... You are not the first to say this.
In reference to that "Finding the Air Cannon" article. The author indicated the following facts:
* His listening posts were part of a triangle with sides of length 3.8km, 3.0km and 6.3km
* The vertices were on the roads in Corvallis, Bellfountain Rd, Brooklane Dr and Rivergreen Ave
* The relative times of arrival are 0, 4, 6
* The temperature was 35 °F (about 1.7 °C)
The only way I can make a triangle fit on those three roads yields the likely co-ordinates of his listening posts shown below. Adding the relative times to those co-ordinates, in a form that my localization program can work with, and running it through gives the following result:
$ localize_event.sh 1
Enter GPS coordinates and timestamps. Press enter twice to finish.
44.52421,-123.33492 2024-05-27_12-00-00.0
44.53796,-123.29172 2024-05-27_12-00-04.0
44.52975,-123.25571 2024-05-27_12-00-06.0
Location: 44.488866130770134,-123.31219694386117
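For anyone who wants to reproduce a number like this without the docker image, the underlying multilateration can be sketched as a small Gauss-Newton fit. This is my own toy version in a flat local x/y frame (metres), not the algorithm the program above actually uses, and the positions in the example are synthetic:

```python
import numpy as np

C = 332.0  # m/s, speed of sound near 1 degree C

def tdoa_locate(receivers, rel_times, guess, iters=50):
    """Gauss-Newton fit of a 2-D source position to relative arrival times.

    receivers: (N, 2) array of x/y positions in metres
    rel_times: arrival times in seconds, relative to receiver 0
    """
    x = np.asarray(guess, dtype=float)
    p = np.asarray(receivers, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(x - p, axis=1)                    # range to each receiver
        r = (d[1:] - d[0]) - C * np.asarray(rel_times)[1:]   # TDOA residuals
        u = (x - p) / d[:, None]                             # unit vectors to source
        J = u[1:] - u[0]                                     # Jacobian of residuals
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x - step
        if np.linalg.norm(step) < 1e-6:
            break
    return x

# Synthetic check: place a source, derive the delays, recover the source.
rx = np.array([[0.0, 0.0], [3000.0, 1000.0], [1500.0, 2500.0]])
src = np.array([800.0, 900.0])
dists = np.linalg.norm(src - rx, axis=1)
rel = (dists - dists[0]) / C
est = tdoa_locate(rx, rel, guess=rx.mean(axis=0))
```

With only three receivers the two TDOA equations exactly determine the 2-D position, which is why convergence degrades when the source sits far outside the triangle.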
I did this for my senior design project a few years ago.
We had 4 raspberry pis with mic arrays. They were time synced over a wifi mesh network (indoor so no GPS). Each had a detector running, and would send detection segments to a base station to run generalized cross correlation.
It worked pretty well, but we were doing it in room-scale (~20 ft) environments. Our biggest issue was that we could not disambiguate echoes, and had to hack in a hold-off period.
We did not end up having time, but I wish we could have gotten beamforming working on each Pi's microphone array; then we could have combined angle of arrival with TDOA and potentially handled the echoes better.
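For reference, the PHAT-weighted flavour of generalized cross correlation, which whitens the cross-spectrum so the correlation peak stays sharp in reverberant rooms, fits in a few lines. A sketch, verified here only on a synthetic click:

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Time delay of `sig` relative to `ref` via GCC with PHAT weighting."""
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(ref, n=n)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-15       # PHAT: keep the phase, drop the magnitude
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Synthetic check: the same click heard 40 samples later at the second mic.
fs = 16_000
ref = np.zeros(1024)
ref[100] = 1.0
sig = np.roll(ref, 40)
tau = gcc_phat(sig, ref, fs)  # ~0.0025 s
```

The `max_tau` bound is worth setting to the physical mic spacing divided by the speed of sound, so echoes arriving at impossible delays can't win the argmax.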
Indoor, and thus smaller, distances mean time accuracy becomes much more important. In my software chain there is a very small amount of uncorrected latency, due to the unknown buffering size of the USB driver, which potentially adds to the time inaccuracy.
I've had an unrealized pet project on my personal to-do list for quite some time in a similar area: estimating the speed of neighborhood traffic from the Doppler shift of the traffic noise as it passes my house. I can't use radar/IR devices since I'm not directly adjacent to the road, the intervening land doesn't belong to me, and the substantial hedges make any line-of-sight sensing impractical.
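The nice thing about the pass-by geometry is that the speed falls out of the approach and recede pitches exactly: f_approach = f0·c/(c−v) and f_recede = f0·c/(c+v), so v = c·(f_approach − f_recede)/(f_approach + f_recede), with no need to know the source pitch f0. A sketch (the example frequencies are made up):

```python
SPEED_OF_SOUND = 343.0  # m/s at ~20 C

def speed_from_doppler(f_approach, f_recede):
    """Vehicle speed in m/s from the pitch heard approaching vs receding."""
    return SPEED_OF_SOUND * (f_approach - f_recede) / (f_approach + f_recede)

# E.g. a tonal tyre-noise line heard at 1060 Hz approaching and 946 Hz
# receding corresponds to roughly 19-20 m/s (about 70 km/h).
```

The hard part in practice is tracking a stable spectral line through the pass-by, not the formula itself.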
How do you time-synchronize the 4 RPi?
When you power them off, you lose the sync. So it is difficult to do it correctly.
Also it may be possible that the different RPi have different clock drift over time?
i.e. 1.0000 hour on one of them = 1 hour + 20 milliseconds on another one
It uses a GPS to synchronise the time, and then even when running completely disconnected from any network, the clocks stay accurate to real time with less than 1 microsecond of error. Typically the system time hovers within 100 ns or less of real time. I've tested this by triggering interrupts on GPIOs on two devices with the same switch and printing the times.
Syncing to sub microsecond accuracy is usually achieved in under a minute.
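For the curious, the usual recipe on a Pi is gpsd plus chrony, with the PPS edge doing the precise part. A chrony.conf fragment might look like the following; the offset/delay numbers depend on your receiver and are just placeholders:

```text
# /etc/chrony/chrony.conf fragment (assumes gpsd publishing NMEA time to
# shared memory segment 0, and a PPS signal exposed as /dev/pps0)
refclock SHM 0 refid NMEA offset 0.2 delay 0.2   # coarse seconds from NMEA
refclock PPS /dev/pps0 lock NMEA refid PPS       # precise edge from PPS
```

The NMEA sentences label which second it is; the PPS pulse pins down exactly when the second starts, which is where the sub-microsecond accuracy comes from.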
Oh, I’d like to point out that the localization algorithm I use comes from the lovely opensoundscape project from Tessa Rhinehart and friends at the University of Pittsburgh. A big thanks to them for their lovely project.
That's useful. It would probably be more useful as a phone app, so that a few people with phones could locate a sound. This is especially useful for low-pitched noise, where human ears are not far enough apart to get a usable phase difference.
Over really long distances the time inaccuracy makes less difference so indeed a phone can do interesting things.
I’m a person who likes extremes, so the extremely accurate clock time I can get with a GPS-synched Pi really appeals to me. Although it’s not really necessary, I want to get an RTK GNSS next year to experiment with. Together with ultrasonic microphones, I’m pretty sure it could do a great job localizing bats.
If I remember correctly, when you have two RPis located at F and F', the solutions M lie on a hyperbola: MF = MF' + delta_distance, where delta_distance is simply computed from the delta_t between the two signals.
Would this work for music? I assume it's a bit more complicated because it's not clear which bass you are getting at which location. Maybe if you wait for a new track? Has anyone tried something like that?