What if my OTP base data is exported to a publicly-readable datastore? I could be tricked into exporting the QR codes from Google Authenticator, for example. Though I see that there are significantly better 2FA methods, it does seem like the biggest flaws with SMS 2FA are in the insecure implementations, not the actual concept.
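For context on why that export matters: TOTP apps like Google Authenticator derive codes purely from the shared secret plus the current time, so anyone who can read the exported secret can mint valid codes indefinitely. A minimal RFC 6238 sketch (the base32 secret below is just a placeholder):

```python
# Minimal RFC 6238 TOTP sketch: whoever holds the exported base32
# secret can generate the same codes the authenticator app shows.
import base64, hashlib, hmac, struct, time

def totp(base32_secret: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(base32_secret.upper())
    counter = int(time.time()) // period              # moving factor
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret, not a real one
```

So yes: leaking the base data is equivalent to leaking the second factor itself.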
When "run a SQL query against an implicit file database" is a huge improvement for anything but the edgiest of cases (and I believe you that it is), that's a scathing indictment of the normal UI.
What I don't understand is how the AWS log inspection tools are still as bad as they are. Even if it's just to prepare public-facing material, AWS clearly dogfoods them a little bit, so surely there would be glory and accolades to be won by implementing search that was half-assed (instead of quarter-assed)? Or is the AWS culture so broken that it net punishes core improvements? Come to think of it, that would explain a lot.
I probably should have been clearer on that: it is very possible with CloudTrail and Athena, and I find myself doing it pretty regularly.
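Roughly the kind of thing I mean, as a sketch: this assumes you've already created the cloudtrail_logs table the AWS docs walk you through, and the database name and results bucket below are placeholders.

```python
# Sketch: hunt for recent permission errors in CloudTrail via Athena.
# Assumes a cloudtrail_logs table already exists (per the AWS docs);
# the database name and results bucket are placeholders.
import boto3

athena = boto3.client("athena")

QUERY = """
SELECT eventtime, eventsource, eventname, errorcode, errormessage
FROM cloudtrail_logs
WHERE errorcode IN ('AccessDenied', 'UnauthorizedOperation')
  AND eventtime > '2023-01-01T00:00:00Z'
ORDER BY eventtime DESC
LIMIT 100
"""

resp = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(resp["QueryExecutionId"])  # then poll get_query_execution() for status
```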
But there are also times that it is inconsistent at best, especially when trying to look at some nested permission problem. More than a few times I have had to get on a call with AWS support because the actual error just was not in CloudTrail anywhere. Or it is related to something that doesn't log to CloudTrail, like S3 object access (unless you enable data events).
Which was kinda more my point: it isn't IAM itself that is the problem.
Big balloon tethered to the sea floor. Pump air in when you've got spare energy; hydrostatic pressure forces the air back out when needed. The neat bit is that the air pressure doesn't tail off as the bag empties. Site it near offshore windfarms and you don't need much more infrastructure.
The main issue as I understand it is that you lose a lot of energy to heat if you just pump air straight in. To be efficient you need to chill the compressed air down to water temperature and store the heat before you pump the air underwater. Then add the heat back to the air when you release it from the balloon. This is all doable, but with all those heat pumps it's not quite as simple/cheap as it might seem.
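Some back-of-envelope numbers for scale, assuming an ideal isothermal cycle (i.e., the heat capture and return handled perfectly):

```python
# Back-of-envelope numbers for an underwater air-storage bag.
# Pressure is set by depth (and stays constant as the bag empties);
# ideal isothermal work per m^3 of air stored at depth is P * ln(P / P0).
import math

RHO_SEAWATER = 1025.0    # kg/m^3
G = 9.81                 # m/s^2
P0 = 101_325.0           # surface pressure, Pa

def storage_pressure(depth_m: float) -> float:
    """Absolute pressure at depth, in Pa."""
    return P0 + RHO_SEAWATER * G * depth_m

def isothermal_energy_per_m3(depth_m: float) -> float:
    """Ideal isothermal work, in J, per m^3 of air stored at depth."""
    p = storage_pressure(depth_m)
    return p * math.log(p / P0)

depth = 500.0  # m; a plausible siting depth, chosen for illustration
p = storage_pressure(depth)
e = isothermal_energy_per_m3(depth)
print(f"{p / 1e6:.1f} MPa, {e / 3.6e6:.1f} kWh per m^3")  # ~5.1 MPa, ~5.6 kWh/m^3
```

Every joule of that ideal figure that escapes as uncaptured compression heat is round-trip efficiency lost, which is exactly why the heat handling matters.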
I've wondered if there will be emerging business models for medium-sized, amenities-included (AP wiring, coffee, snack service, etc.), short-lease office spaces. Basically for when a company wants to downsize but also wants to retain a little bit of physical space for meetings, events, and folks who live nearby. Nobody wants to sign a three-year lease for their whole company, nor do they want a WeWork space where you'll be surrounded by random people. I think companies will want to hedge with short-term accommodations. I don't have the real estate knowledge to know if this is even possible.
One challenge about office space is that it can be a very narrow margin business. It’s often cheaper to not operate than to slash rent to get anyone in. High turnover is even more expensive.
I do like the idea of a no-frills WeWork. The Holiday Inn of office space. Clean. Beige. Functional. No “culture.” No common areas. Just shared entrances and hallways, like your traditional office complex.
I thought WeWork-style was mostly hot desks or hot offices? I was thinking something semi-permanent with additional privacy. Maybe that exists and I’m out of the loop.
I'd forgotten the bit in here about the storage hierarchy, especially optical disks and immutable storage. I'm finding the modern "big data" stack with blob stores, immutable storage, and neutral formats (Parquet, Avro) to be useful, especially when it can be seamlessly read from a DBMS instance with its own optimized storage (Redshift, Vertica, and others can do this). However, I miss PostgreSQL's types and planner. From afar, it seems like PostgreSQL would be a perfect fit for this tiered ecosystem. Having a "hot set" in PostgreSQL and the "cold set" in a blob store for cost optimization in particular... as long as PG could use parallel query to maximize throughput over the higher-latency connection.
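Today that means doing the split by hand. A rough sketch of the pattern, assuming psycopg2 and pyarrow; the table, bucket, and column names are placeholders:

```python
# Manual version of the hot/cold split: recent rows stay in PostgreSQL,
# older partitions live as Parquet in a blob store and are scanned with
# pyarrow. All names below are placeholders.
from datetime import datetime

import psycopg2
import pyarrow.dataset as ds

CUTOFF = datetime(2023, 1, 1)  # hot/cold boundary

def hot_rows():
    # Hot set: PostgreSQL, with its types and planner.
    with psycopg2.connect("dbname=analytics") as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT event_id, event_time, payload "
                "FROM events WHERE event_time >= %s", (CUTOFF,))
            return cur.fetchall()

def cold_rows():
    # Cold set: Parquet in a blob store; pyarrow pushes the filter down.
    dataset = ds.dataset("s3://my-archive/events/", format="parquet")
    return dataset.to_table(filter=ds.field("event_time") < CUTOFF).to_pylist()

rows = hot_rows() + cold_rows()
```

What I'd want from PG itself is for the planner to do that split transparently, with parallel workers hiding the blob store latency.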
Interesting point. I don't think I have that particular use case, but I can definitely imagine it. I'd think it matters especially with dates in the far future, where the timezone's offset rules change between the time of recording and the time the value refers to?
Individual timezones are pretty stable but a handful change every year in some way, often switching how they observe DST or something similar. If you have a truly global userbase and this actually matters, you'll definitely hit them.
Lunch service 11am to 3pm Weekdays, 11 to 2pm Weekends
How would that dynamically map to timezone changes mandated by law while still reflecting the correct event times in a scheduling system? Spreadsheet systems developed the $ prefix for cell addresses as a shorthand to lock a reference in place while relative offsets get adjusted in copy/paste and duplicate operations.
That’s a weird use case, but you could just create a UDT to do it. You need the timestamptz value, the timestamp, and the timezone name. Super expensive, but ¯\_(ツ)_/¯
Ha, I had a need for this yesterday. I just added a timestamptz column and a text column. In my use case the text column preserves the input value for display to the user, while the timestamp itself is used programmatically. But a “TIMESTAMP WITH TIME ZONE PRESERVING TIME ZONE” data type would be cool.
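The pattern in miniature, using Python's zoneinfo purely for illustration (the zone and values are made up):

```python
# Two-column pattern in miniature: keep the instant (what timestamptz
# stores) plus the original zone name, so you can order/compare on the
# instant and still re-render the local time the user meant.
from datetime import datetime
from zoneinfo import ZoneInfo

zone_name = "America/Santiago"        # a zone whose DST rules have shifted
local = datetime(2030, 11, 2, 11, 0, tzinfo=ZoneInfo(zone_name))

# "Column 1" (timestamptz): normalized instant, good for ordering/joins.
instant_utc = local.astimezone(ZoneInfo("UTC"))

# "Column 2" (text): the zone the user meant, good for display and for
# recomputing local time if that zone's rules change later.
stored = (instant_utc, zone_name)

shown = stored[0].astimezone(ZoneInfo(stored[1]))  # render back for the user
print(shown.isoformat())
```

For the lunch-hours question upthread you'd arguably flip it: store the wall-clock time plus the zone name and resolve to an instant lazily, so a legally mandated rule change is picked up automatically at read time.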
Half my job is doing this in reverse by accumulating deltas over streams and materializing the current state in various caching tiers. I'd love this to be a native feature in PG, but for any of my workloads it would need to support a lower-cost, archival-grade storage tier à la a blob store.
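The shape of it, heavily simplified, with a dict standing in for the caching tier:

```python
# Heavily simplified shape of "accumulate deltas over a stream and
# materialize current state": fold (key, delta) events into a dict.
from collections import defaultdict

def materialize(events):
    """events: iterable of (key, delta) tuples read off some stream."""
    state = defaultdict(int)
    for key, delta in events:
        state[key] += delta   # in production: write-through to the cache tier
    return dict(state)

print(materialize([("a", 5), ("b", 2), ("a", -3)]))  # {'a': 2, 'b': 2}
```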
I admit to being a little frustrated with systems engineers telling me I should never need shell access to a production system again, and that web-based metrics and tracing should be enough to debug all problems. I have twenty years of muscle memory using strace, dtrace, lsof, blah blah blah to troubleshoot complex problems. Furthermore, I'm only brought in when the problem is sufficiently complex. I understand that it should be a break-glass exception, but I don't want Linux abstracted away completely.
I'm experimenting with using this VSCode feature to edit code on a Spark master, with the code storage on an EFS volume. So far, this seems to be allowing me to have a local-feeling environment, with a well-factored codebase that I can reference from a Jupyter notebook (inside VSCode), that has high-bandwidth and low-latency access to our data repositories.
I can also choose to temporarily vertically scale the remote host if I want to run single-node operations (e.g., Pandas).
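Since the code lives on EFS, nothing on the box needs to survive the resize; the scale-up itself is just a stop/modify/start cycle, e.g. with boto3 (the instance id and target type below are placeholders):

```python
# Sketch of the temporary vertical scale-up for single-node work.
import boto3

ec2 = boto3.client("ec2")
INSTANCE = "i-0123456789abcdef0"   # placeholder instance id

ec2.stop_instances(InstanceIds=[INSTANCE])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE])

ec2.modify_instance_attribute(
    InstanceId=INSTANCE,
    InstanceType={"Value": "r5.8xlarge"},  # placeholder big-memory type
)

ec2.start_instances(InstanceIds=[INSTANCE])
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE])
```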
I recall this article about the total maintenance costs of a Model X cab with 400k miles. It’s a very interesting look at total cost of ownership over time.
That is, unfortunately, anecdata. Everyone keeps telling me that electric cars have fewer moving parts and such low maintenance costs that they cost 90% less to maintain, at which point I turn around and ask why Tesla doesn't offer a bumper-to-bumper 20-year warranty, because they can benefit from the law of large numbers. If the repair costs are so much better, this should be easy to do, right? That's the point where my interlocutor usually walks away, as they have no answer.
One way you can try to estimate the real maintenance cost in the first X years is to look at what automakers set aside for warranties. On that measure (the last I looked), Tesla seemed in the middle of the pack vis-à-vis major ICE makers. But then you have the 20−X years of service after that, and the dirty secret is that those years are also paid for by the original owners: they pay those repairs forward as depreciation when they sell. So then you look at depreciation curves to see if Tesla is holding up much better than ICE vehicles due to lower expected maintenance costs, and there, too, Tesla appears to be right in line with other major producers. So, bottom line, I can't find any evidence for the thesis that getting rid of all these components will significantly reduce lifetime maintenance costs, while the battery costs remain a big unknown.
Now part of this is just not having enough data. In 20 years we'll have a lot more, and maybe then the warranty policies and depreciation curves will look very different. But this goes back to my point: why isn't Tesla insuring buyers against this risk by selling massive 20-year warranties to stand behind these claims of long service life and very low maintenance costs? Why leave people searching for anecdata about a new car whose service costs they don't have the data to estimate?
For most people a car is a major portion of their net worth, and they tend to be conservative in making this purchase. Sure, high-income buyers can afford to take risks, but most buyers can't. So why doesn't Tesla do more to insure prospective buyers against this risk? It seems like such a no-brainer, and yet many companies insist on pushing risk onto the customer. This isn't just an issue with Tesla; I see it in many industries where the producer is the one with the survivorship data, benefits from the law of large numbers, and has the financial backing. They are in a position to sell insurance, and people would buy it, but the insurance just isn't being offered, and if it is offered, it's on absolutely terrible terms rather than as something to remove purchase friction.
But if you think Tesla is averse to taking on that risk, how much more averse should the customers be, who are left holding it instead? Tesla has the law of large numbers and the technical data available to them. They are in a position to arbitrage that and collect (expected) free money by selling long-term insurance to buyers for whom the insurance is worth far more than it costs Tesla, so why wouldn't they do that?
If indeed the EVs are so much more durable and have such lower maintenance costs but are surrounded by a cloud of doubt, why not remove that cloud? Even if Tesla doesn't ramp up production faster, the increased demand would allow them to command a higher price until production was ramped up.
So if indeed the market is wrong and depreciation curves are too steep, Tesla can arbitrage that. Why they don't should raise some questions; at least it does for me.
I'm not finding a hole in this argument. It's especially interesting because Elon has a reputation as a risk-taker, so I would expect him to steer Tesla in this direction if it were possible and profitable.
Even if it’s not 90% less maintenance cost, I can tell you that in the 2 years I’ve owned my Model 3 (20k miles) I’ve brought it in for maintenance zero times. In an ICE car that would have been something like 6 annoying oil changes by now? That’s more than enough to convince me.
In Europe, on a modern car (diesel, but gasoline wouldn't be much different) lubricated with semi-synthetic oil, that would be one oil change at around 30,000 km, or at most two (the first at 15,000 km and the second at 45,000 km, or similar).
As a side note, it depends, but "no maintenance", unlike what many people think, is not such a good idea overall. I'll try to explain myself.
Many years ago, fully synthetic oils came out. They were awfully expensive but guaranteed something like 80,000 km on diesel and 120,000 km on gasoline without any change; you only had to top up to level and change the oil filter at double the normal interval (usually 15,000 or 20,000 km × 2 = 30,000 or 40,000 km).
And, people with older cars might remember this: lamps burned out much more often than modern LEDs, non-electronic distributors needed maintenance, as did carburetors and spark plugs (or, on diesels, pumps/injectors), and to this you add the (normal for mineral oil) 10,000-15,000 km oil change.
This meant that every three to six months your car normally spent a day in the hands of a professional who - besides doing these maintenance chores - tested your car, made sure that the brakes and suspension were in good working condition, could notice and repair minor leaks and loose bolts/parts, and could (much better than you normally can) "feel" whether anything in the wheels, suspension, or steering (and its servo) was off, etc.
The adoption of fully synthetic oil meant that the car, unless you yourself noticed an issue/defect, was seen/tested by a professional mechanic only once every 1.5 to 2 years, and this was not a good thing for the overall "health" (and safety) of the car.
I've always been told I need oil changes every 3-5,000 miles; if that’s not true, it’s news to me. Also, my state mandates annual inspections, so I assume a professional is checking for those things during the inspection.
The manufacturer is the one that (obviously) knows the most about the engine, so you should stick to the recommended type of oil and the recommended oil change interval; doing it more often than that is simply a waste.
Depreciation on an electric car is driven more by advances in batteries and other new technology, which are moving relatively rapidly compared to ICE cars.