It's unclear if that is a limitation of the Oceanic app or the sensors. The watch's waterproofing is rated to 100m. 40+ meters is beyond the limits of recreational diving, so it would make sense that the app no longer gives you decompression info beyond that range.
When freediving, 40 meters is not unusually deep. Many divers can get past 30m after a long weekend of training. I'd be disappointed if a different app couldn't track beyond that limit.
I highly recommend Processing and The Coding Train [0]. The visual feedback is both engaging and illuminating to folks who have no prior experience with procedures or spreadsheets.
Just to add to this great recommendation, https://p5js.org/ (JS port of Processing) is really great as they can get started right away and the docs are super simple. Dan uses it in most of his videos and really starts at a basic level, but works up to really complex concepts as well. My kids are younger, but they like the graphical part of it way more than me showing them the terminal and Ruby.
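To make that concrete, here's roughly what a first sketch looks like (a generic example, not one of Dan's; setup() and draw() are the real p5.js entry points, everything else is just illustration):

    // Paste into the online editor at https://editor.p5js.org/
    // setup() runs once; draw() runs every frame.
    function setup() {
      createCanvas(400, 400);            // 400x400 pixel drawing surface
    }

    function draw() {
      background(220);                   // clear to light grey each frame
      ellipse(mouseX, mouseY, 50, 50);   // circle that follows the mouse
    }

Kids see a circle chasing the mouse within a minute, which beats staring at a terminal prompt.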
The picture is from 2011 and was originally hosted on Picasa (but I'm pretty certain it was never shared publicly), and I don't have a Google Plus account. If I go to the picture itself on Photos, there are no sharing/privacy controls, so I'd assume it is private.
The wording on the email makes it even worse, as if I had intentionally done something overnight to share them on Maps.
I can't find any evidence that it has been. I use a long and strong password and have 2-step verification enabled on my account. I haven't received any new account activity emails, and I don't see anything suspicious on myaccount.google.com.
As far as I remember, I've never used Panoramio (which a comment above suggests as the cause). I'd agree that this seems limited to some specific circumstances.
AOL Reader was what I settled on as well after the Google Reader shutdown. I was never a power user, so AOL Reader feels like a perfect replacement for Google Reader's feature set. It can be a bit flaky at times with managing read state, though that seems to be a UI issue that resets itself fairly quickly.
Awesome news! Prometheus does fill a void in the monitoring, time series, and visualization niche. I found it more straightforward to set up and use (see the minimal config sketch below) than alternatives such as statsd, collectd, InfluxDB (which seems to be heading the wrong way with its monetization), and Grafana et al.
I always thought that Hashicorp would someday fill this void with their polished products and straightforward community and monetization strategies, especially through consul [0]. I hope it can polish the rough edges while being incubated by the Cloud Native Computing Foundation.
PS: I've prepared an Ansible role for Prometheus for those interested, albeit an outdated one [1]
> NOTE: We recommend Grafana for visualization of Prometheus metrics nowadays, as it has native Prometheus support and is widely adopted and powerful. There will be less focus on PromDash development in the future.
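To give a sense of how little is needed to get going, here's a minimal prometheus.yml that scrapes a single target; the node_exporter address is just an example, not something from the thread:

    global:
      scrape_interval: 15s    # how often to scrape targets

    scrape_configs:
      - job_name: 'node'      # label attached to the scraped series
        static_configs:
          - targets: ['localhost:9100']  # example node_exporter address

Start Prometheus pointing at this file and the target shows up in the built-in web UI.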
I thought the same thing! Although he is definitely right to complain about Namecheap, the biggest takeaway is that your email is the most important service you have on the internet:
> I’m pretty careful to use 2FA for any service that I consider important
That's a neat way to tie together both worlds, and I can see it being useful in cases like testing.
Nonetheless, it is important to distinguish between the need to communicate between programs and the need to programmatically run a piece of software like ffmpeg and get its output.
For the second case, especially in more complex architectures where you need to "interact with software written in another language", it makes sense to explicitly separate this interaction, for example through a broker [0]. In the end, all you need is a way to communicate from Program A that Program B can do some sort of job, and this can be a simple string pointing to a raw video file in storage like S3, not necessarily the raw file itself (as sketched below).
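Here's a minimal sketch of that idea in Node.js, with a trivial in-process queue standing in for a real broker; the job shape and S3 path are made up for illustration:

    // The message is just a pointer to the data, never the data itself.
    const queue = [];

    // Program A: announce that there is work, and where the input lives.
    function submitJob(videoUrl) {
      queue.push(JSON.stringify({ task: 'transcode', input: videoUrl }));
    }

    // Program B: pick up the job; only the *location* reaches ffmpeg.
    function worker() {
      const msg = queue.shift();
      if (!msg) return;
      const job = JSON.parse(msg);
      console.log('would run: ffmpeg -i ' + job.input + ' output.mp4');
    }

    submitJob('s3://my-bucket/raw/video-123.mov');  // hypothetical location
    worker();

Swap the array for RabbitMQ or SQS and the two programs no longer even need to run on the same machine, let alone be written in the same language.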
Nah. Reputation is hard to build, but eeeeasy to destroy; saying "trust us that you can trust us again" doesn't magically revert it to the previous version (at the least, a suspicion remains of "...until we have another Wonderful Idea at an indeterminate point in the future").
It is an old state of things, but the data is not an executable installer (which is usually the problem, as SF has a bad habit of infecting them with malware); it's an ISO image. For nightly (trunk) builds ReactOS uses its own servers, so I guess peak bandwidth is the main concern here, especially when a new version is released, and SF may address exactly that.
> (which is usually the problem, as SF has a bad habit of infecting them with malware)
SF is under new ownership as of last week, and they've already removed the most problematic behavior. If you find malware you should let them know, as the new ownership seems much more on the ball.
Running a VPS / web server for an initial (web) seed isn't free - and then you're relying on popularity to keep the files available. I should check whether the Internet Archive will host things.
I've grabbed 5-year-old tarballs off SourceForge before. I know there are 5+ year old torrents that still have peers, but that's the exception to the rule.