I've been hoping that Fleet would emerge as a true multi-language IDE. I code in Go and Python regularly. I currently have the Python plugin in GoLand, which is not the professional plugin. If I want the professional features I have to use a different IDE, and switching back and forth is a pain.
Also, since Fleet is a rewrite, I've hoped that remote development would be less buggy than it currently is with GoLand. It's laggy, and you see weird screen flashes. Sometimes certain features don't even work over remote.
The reason is that people will click on them thinking it is an interesting story. The publisher (ESPN/Yahoo/etc.) just cares that you click on the page so the ads load and they get the impressions. Some PM probably ran an analysis showing that machine-generated articles from a stats table would get X number of clicks, which would generate Y dollars in ad revenue. There was likely no consideration that the overall quality of the site would decrease. After a while people stop clicking on them because they know it will be a machine-generated article, so eventually the publisher will stop putting them on their website.
I think the industry term for this is "made-for-advertising content".
The PM, or whoever's idea it was: "we don't do long-term experiments, so nobody can prove it is ultimately destructive, and imma be outta here next year anyway".
I hope that the grocery store that is splitting the wheel removes the rind area with the label. I don't like the idea of ingesting a chip, even if it's very, very small.
> While stopped at an intersection on the way to his office, a woman plowed into me and 2 other cars because she wasn't paying attention, and was likely on her phone. That wreck generated $45K of medical bills for me and a 21-month settlement process.
Sorry to hear that, and I hope you recovered fully.
Fundamentally, our roads are unsafe, and since the pandemic road deaths in the US have been on the rise. Locally, where I live in SF, the number of driving citations is significantly down over the last 10 years. I see incredibly risky maneuvers when I'm driving my car or on my bike.
Many levels of government are not addressing road accidents, a serious risk to our health. If our roads were declared a public health hazard, to be avoided at all costs, it might draw some attention and move us toward finding solutions.
Around once a week in the Bay I'm exposed to severely reckless driving (drivers going 60+ mph downhill through residential stop signs around blind curves, drivers going 120+ who won't move into any of the 3 empty passing lanes until they're within spitting distance, drivers actively swerving in front of me to throw trash at my windshield and then slamming on the brakes while zig-zagging to prevent passing, ...).
Police are sometimes helpful but usually won't bother to even make a report. I take less psychotic roads nowadays even if they're slower. I'm not sure what to do other than stop driving or leave. Do you have any advice for surviving SF roads?
On my bike I only ride routes that I have either ridden, walked, or driven through before, so I have some sense of safety.
Otherwise, I stay calm on the road and never overreact to the overly aggressive people driving around me. I also never make an overly aggressive lane change, because I worry someone might have road rage.
Hi! Thanks for asking. Basically, status pages get updated manually, and people decide whether and when an outage is sufficiently bad to warrant a status page update. We monitor actual functionality and will capture smaller glitches that either escape human attention altogether or never get escalated to the point where the status page is updated.
In more detail, this can be for three reasons:
1.) We use functional testing, so we're simply showing which aspects of the platform are working and which aren't (see the short sketch after this list). Due to the definitions of "outages" and such in SLAs, vendors like Datadog might not disclose or categorize certain dysfunctions as outages, so they won't show them on their status page. In other words, some outages might be considered too "minor" to include on the status page.
2.) Status pages are manual, Metrist is automatic. DD might not have updated their page yet, or might not even be fully aware of the outage. Our tests just show the objective data as it happens.
3.) Everyone experiences outages differently. This data from the demo is Metrist's experience with Datadog and can be slightly different from other people's (another reason why status pages can be vague). That's why we have an orchestrator that allows people to set up personalized monitoring, so they can know exactly how a vendor is affecting them in real time, and whether an outage is relevant to and affecting them.
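Since a lot of this thread is about functional testing versus manually updated pages, here's a minimal sketch of what one of these checks can look like in practice. The endpoint URL, latency threshold, and status labels are all made up for illustration; this is not Metrist's actual implementation:

```python
import time
import urllib.request

# Hypothetical vendor endpoint and latency threshold (illustration only).
CHECK_URL = "https://api.example-vendor.com/v1/ping"
DEGRADED_AFTER_SECONDS = 2.0

def run_check(url: str = CHECK_URL) -> str:
    """Exercise real functionality and classify the result ourselves,
    instead of trusting a manually updated status page."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            elapsed = time.monotonic() - start
            if resp.status != 200:
                return "down"
            return "degraded" if elapsed > DEGRADED_AFTER_SECONDS else "up"
    except Exception:
        # Timeouts, DNS failures, and HTTP errors all count as "down".
        return "down"

if __name__ == "__main__":
    print(run_check())
```

The point is just that the classification happens on the observer's side, from observed behavior, with no human in the loop.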
Does that answer your question? LMK if I can follow up with more info. :)
This bugs me to no end. I don't want to name names but I had a devops service that was returning an odd error implying I was doing something wrong. Status page said everything was good. After several hours I emailed to be told it was actually down, they were aware, and were working on it. It eventually gets fixed, they email back, and all is well. The status page never did show any downtime.
Unless the status page shows response times and is automatically updated when stuff stops working, assume that the status page is really a marketing page. Companies that have nothing to hide run proper status pages; the rest, who want to appear proper, run marketing status pages that take 30-60 minutes to even be updated in the first place.
In most cases, a vendor like Datadog may keep manually reporting its service as down, even if it's pretty much up and running, just to make sure they don't speak too soon about being up again. But our tests can see that things are working even before the vendor is ready to announce they are functioning again. What a vendor reports usually isn't a real-time reflection of what's happening in their software. Updating the status page is like a press release about someone important recovering from an illness, while we're like the medical equipment that monitors that person's health. The press has to take some time to craft a message once they know the person is healthy, and they wait a moment before reporting to make sure they don't announce health too soon, only for the person to relapse. The medical equipment, on the other hand, is just there to measure health, and it can show recovery way sooner than the press release.
In other cases: Metrist mostly monitors essential functions right now, and in the demo we monitor them from our point of view. So a minor part we don't monitor could be down while the major parts we do monitor are up, and a status page may report that part of the service as down while we simply don't monitor it. Further, since users experience outages differently and the demo reflects our experience with the software, other users could be experiencing an outage while we aren't. That's why it's important for Metrist users to set up personalized monitoring, so they know exactly how an outage is affecting them.
My guess would be that Metrist made one or more API calls that failed within a time slice (hopefully more than one failure). They then mark the entire day orange or red and compare it to AWS's green. Which is true: for the entire day, AWS's status symbol probably was green.
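If that guess is right, the aggregation might look something like the sketch below. The thresholds and colors are my assumptions, not anything Metrist has documented:

```python
def day_color(probe_results: list[bool]) -> str:
    """Collapse a day's worth of functional-check results into one badge.

    A single failed probe is enough to color the whole day, which is how a
    day can show orange or red here while AWS's own symbol stayed green.
    """
    failures = sum(1 for ok in probe_results if not ok)
    if failures == 0:
        return "green"
    return "orange" if failures == 1 else "red"

# e.g. minute-level checks: 1438 successes and 2 failures across one day
print(day_color([True] * 1438 + [False] * 2))  # -> "red"
```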
The AWS team has a hard challenge of reporting availability and deciding when a system is not green across dozens of API use cases per service, hundreds of services, hundreds of data centers, dozens of availability zones, and millions of clients.
Metrist has no visibility into a service's internal SLAs, SLOs, and SLIs. [1]
Thanks for pointing that out! Since status pages are updated manually, we monitor actual functionality instead. We often see that pages functionally recover long before the status page is updated to say everything is in working order. Again, that's because it's manual, and status pages are often more for marketing than for development purposes.
And also we're in "Show HN" and may not be 100% perfect ;) but we stick to the above explanation :)
That would explain the scenario when Metrist says something is down but the actual service doesn't say it, because it's manually updated.
But what about the reverse? In what scenario would the platform say something is down while Metrist says it's up? Metrist is fully automated as I understand it, so it should detect outages faster and more reliably than their manually updated status pages, right?
> 99.9% of those visitors never travel more than 50 feet from the main road. This means that most of those visitors experience less than 1/10th of 1% of the actual park.
I'd say that's a win. We should be preserving as much as we can, which really means most of us shouldn't be exploring more of the park, most of which is off trail.
I do get your point that people are coming to Yosemite and are not even taking advantage of the trails.
One point worth making here is that many people who come to the park don't really have the fitness, skills, or motivation to explore more of its trails. Similarly, many people aren't going to go explore at their companies because of skills, motivation, and time. Time is the major blocker for me: I could do more in areas outside my focus, but there are other life obligations and the need to rest to avoid burnout.
I don’t think the tradeoff really exists at most companies. Sure, I could make our data department’s life more difficult by pointing out that they do a few dumb things, but what is the outcome for me and my fellow engineers? At best I’ll get a pat on the back and have the data team be at least somewhat pissed off at me. Some random executive might get a bonus... but why do I care?
(saves this HN post to Pocket to come back to it later to see replies)