The one Achilles' heel here is getting the data out and doing some real analysis: what your sleep cycles look like, how your fitness levels trend, and so on.
Hopefully they'll ship something soon, but so far the data you can actually see is quite limited.
I managed to extract the raw minute-by-minute data from the Band with some hacky tricks. I was thinking of posting a howto, but I'm first checking whether it runs afoul of the CFAA/DMCA. I might post something tomorrow after making sure it's legally sound.
I'm curious whether you decoded the binary protocol/blob or have just been poking at the API (where just about everything is pretty convenient JSON). I've done the latter but haven't broken out my Ubertooth yet for the former.
Re: DMCA - there's a reverse-engineering carve-out for interoperability. While I suppose MS could abuse either the CFAA or the DMCA to slap down RE, it seems like it'd be a pretty bad PR move.
A lot of my notes aren't super well organized (and wouldn't make much sense without the captured flows, which need scrubbing), but I just posted a comment on your thread covering the BTLE side as well as the remote API stuff.
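To give a concrete flavor of the remote-API poking (as opposed to the BTLE side), here's roughly what a request looks like in C with libcurl. The endpoint, query parameters, and token below are placeholders, not a documented Microsoft API -- substitute whatever shows up in your own captured flows:

    /* Sketch of the "poking at the API" approach, assuming you've pulled a
     * bearer token out of the app's HTTPS traffic with a MITM proxy. The URL
     * below is a placeholder, not a real documented endpoint. */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl)
            return 1;

        struct curl_slist *hdrs = NULL;
        hdrs = curl_slist_append(hdrs, "Authorization: Bearer <captured-token>");

        /* Hypothetical endpoint for minute-granularity activity data. */
        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://example.invalid/v1/activities?granularity=minute");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

        /* libcurl's default write callback dumps the JSON response to stdout. */
        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return res == CURLE_OK ? 0 : 1;
    }

From there it's just walking JSON; the interesting part is figuring out which endpoints exist and what auth the app actually sends.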
That site is run by The Onion, which, for a few weeks, claimed to have been bought out by Yu Wan Mei Salvage Fisheries and Polymer Injection Corporation.
Not really an RTOS. I hear it's more of an "event loop and some interrupt handlers" setup, much like the firmware in the Kinect.
(We went through a "we need an RTOS" phase on the Kinect. Eventually we ditched the threading code we'd written, which made things a lot simpler -- if you don't absolutely need threads and can get away with computing things in a main loop, do that. Threads introduce all kinds of synchronization and execution-timing nonsense that you should avoid if you can).
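For anyone who hasn't built firmware this way, the pattern is roughly the following -- a toy sketch of the general technique, not the actual Kinect code. ISRs do the bare minimum (set a flag), and all the real work happens in one main loop:

    /* "Event loop plus interrupt handlers": no RTOS, no threads. */
    #include <stdbool.h>

    static volatile bool frame_ready; /* set by the (hypothetical) camera DMA ISR */
    static volatile bool usb_pending; /* set by the (hypothetical) USB ISR        */

    void camera_dma_isr(void) { frame_ready = true; }
    void usb_isr(void)        { usb_pending = true; }

    /* Hypothetical work functions, stubbed so the sketch compiles. */
    static void forward_frame_to_usb(void) { /* pass-through video to the host */ }
    static void service_usb_request(void)  { /* handle a host control transfer */ }

    int main(void)
    {
        for (;;) {
            if (frame_ready) {
                frame_ready = false;
                forward_frame_to_usb();
            }
            if (usb_pending) {
                usb_pending = false;
                service_usb_request();
            }
        }
    }

Because there's exactly one thread of execution doing work, there's nothing to lock and the worst-case loop time is easy to reason about.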
Fascinating. I assume that was the first Kinect. Was the vision processing separated from the USB interface on different chips? Sounds like a fun project.
- A "DSP" based camera system that just does transfers from the sensor system to USB, and that does some other monitoring and housekeeping. It does no vision processing, it's just pass-through of video from the chips to the host.
- An ARM-based audio system that handles mic data, runs an echo cancellation algorithm against host-provided speaker data, and provides both the raw and echo-canceled mic data to the host (a toy sketch of that kind of algorithm is below).
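For the curious, here's a toy normalized-LMS echo canceller in C to give a flavor of what that audio chip does. This is the textbook algorithm, not the actual Kinect implementation; the tap count and step size are arbitrary:

    /* Toy NLMS echo canceller: adapt an FIR estimate of the echo path from
     * speaker samples, subtract the predicted echo from the mic signal. */
    #include <stddef.h>

    #define TAPS 128         /* echo-path length, chosen arbitrarily */

    static float w[TAPS];    /* adaptive filter weights          */
    static float x[TAPS];    /* recent speaker (far-end) samples */

    /* One sample in, one echo-canceled sample out. */
    float nlms_step(float speaker, float mic)
    {
        /* Shift the speaker history and insert the newest sample. */
        for (size_t i = TAPS - 1; i > 0; i--)
            x[i] = x[i - 1];
        x[0] = speaker;

        /* Predict the echo as w . x; track input energy for normalization. */
        float echo = 0.0f, energy = 1e-6f;
        for (size_t i = 0; i < TAPS; i++) {
            echo   += w[i] * x[i];
            energy += x[i] * x[i];
        }
        float err = mic - echo;   /* what's left after removing the echo */

        /* Normalized LMS weight update; mu controls adaptation speed. */
        const float mu = 0.1f;
        for (size_t i = 0; i < TAPS; i++)
            w[i] += mu * err * x[i] / energy;

        return err;
    }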
Early Kinect versions used yet another processor for managing the tilt motor and accelerometer. That stuff was moved to the ARM later (that a tilt-motor processor existed in year-1 units is a fine example of team structure affecting product structure).
All of these CPUs have their own USB interfaces, and there's an internal USB hub so there's only one wire going to the host. :-)
None of these chips use an RTOS; that would just get in the way.
The vision processing is done on the host, where you have heavy lifting capability with GPUs and tons of memory and so on. Doing that processing on the camera would be quite expensive in terms of hardware and power, and it wouldn't be able to adapt as well to new algorithms.