I hate the network switch debate. Most of the arguments center on noise shaping the sound.
It is ones and zeros: every TCP segment is checksummed, and the stack either delivers the correct data or drops the segment and retransmits it. Also, I do not know many audio streamers that do not buffer, since TCP moves data far faster than a song plays back. There is plenty of time to make sure the packets are correct.
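Back-of-the-envelope numbers make the buffering point concrete. A rough sketch, where the bitrate and link speed are illustrative assumptions rather than anyone's actual setup:

```python
# Rough sketch: how long a CD-quality track takes to download vs. play.
# The bitrate and link speed below are illustrative assumptions.
song_seconds = 4 * 60              # a 4-minute track
stream_kbps = 1411                 # uncompressed CD-quality PCM, ~1.4 Mbit/s
link_mbps = 100                    # a modest wired LAN

song_megabits = song_seconds * stream_kbps / 1000
download_seconds = song_megabits / link_mbps

print(f"play time:     {song_seconds} s")
print(f"download time: {download_seconds:.1f} s")
# The whole track arrives in a few seconds, so the player can checksum
# and buffer every byte long before it is needed.
```

On these numbers the track downloads in roughly 3.4 seconds against 240 seconds of playback, which is why the switch has no bits left to "shape" by the time the DAC sees them.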
I have no idea why audiophile network switches irritate me the most. At least with audio cables I guess you can do measurements that show the signal is cleaner, even though there is zero chance of you hearing a difference most of the time (I just make sure cables have good shielding, and if the cable has to be seen it had better look cool too, haha).
Ya know, there are even audiophile USB cables... not a joke, google it.
When I am writing code for something, it is fun to see what it comes up with compared to what I did. I also find it fun for bouncing ideas off of while brainstorming (it does not judge me for stupid ideas). Most importantly, I taught it the proper response for when I type... who ya gonna call...
Hindsight is always 20/20; we all know now that it would have been a bad idea. I do think this, along with Project Orion, is great outside-the-box thinking by people trying to use terribly destructive weapons for peaceful solutions.
Again, we know now this would be bad and the consequences would not outweigh the benefits. I do think it is vital to have this kind of thinking (I mean outside the box). Some people may find it crazy that this was even considered, but I find it an interesting read.
Listen, when you break something, tell us. Explain what you did and we can get it fixed. You will make mistakes; we all still make mistakes.
That is what I always do and expect my coworkers to do. Yeah, we talk a little smack afterwards (only jokingly), but we were able to resolve most self-inflicted wounds quickly. Nowadays, what scares me more is a coworker who hides mistakes rather than one who confesses to them.
The issues of Byte magazine from the beginning until about 1984 are really fun to read. The ads were probably annoying at the time of release, but now they are fun and interesting to read. I like the 1983 issue on imaging where they talk about movie effects for "Tron" and "Revenge of the Jedi".
I wish they had had more UNIX in the issues, but it was usually unaffordable. Still, it is a fun look, at least for me, at the history of computing. And who does not like an ad for an 8MB, $3999 disk drive subsystem?
The ads were not annoying at the time of release. I really enjoyed reading those ads and finding out about product specs and prices; remember that we didn't have the Web back then, and Byte magazine was my main source of information about what was available.
Really! The articles were the main point. I just wish I had had the nerve to build Steve Ciarcia's $25 DIY modem for one project instead of paying £300 for an answer-only modem and £600 for an answer/originate one.
His BYTE electronics DIY project articles in his Circuit Cellar column were so well-written that I used to enjoy browsing them even though I'm not an electronics guy, gleaning what I could about the topic. He even did one about a 32-bit computer, when 8 and 16 bits were still the standard! It was about the Definicon motherboard/CPU, I think.
I think it was Creative Computing or Nibble magazine that had a postcard where you could check off interests and send it back. I did this in middle school and would occasionally get mailings about products (and credit card offers).
That was a standard system in the magazine trade back then. It was part of the service to advertisers. They would collect all the queries for each advertiser every month and pass them on.
Those ads were unbelievably lucrative. Each page cost thousands of dollars, and there were hundreds and hundreds of them, plus smaller classifieds. Freelancers were paid fairly generously, but that outgoing was a drop in the ocean compared to the monthly ad income. Byte was posting profits of around $10m/year in the 80s.
Edit to add: many readers saw the ads as informative rather than intrusive and distracting. This was partly because many were creative and fun, but also because they went into far more technical detail than you'd see today, so they were almost a supplement to the official written copy.
Serious question, since this is out of my swim lane: the author keeps talking about the effects of galactic cosmic rays. Do we have any usable materials we can build habitats out of that can block them? I mean usable in the sense that we can produce them at scale and they are non-toxic to humans.
For cosmic rays the only thing that works is just a lot of atomic nuclei. You need pure mass. This is why a transit to Mars or Saturn is going to be hard for us to pull off in the near term; all those nuclei are expensive to get into orbit right now.
Outside of that, there is a 'graded Z' shield. Here, you make a layer cake of various atomic nuclei (the Zs), going from heavier to lighter, typically tantalum down to tin and down to aluminum. The physics here isn't super important, but for lower-energy radiation you can get up to a 60% mass reduction for similar shielding protection.
The problem is that it's the higher-energy radiation you are worried about: the cosmic rays. Graded-Z shields work pretty much like anything else at those energies. Under our current physics mumbo-jumbo, you just need nuclei.
Yeah, for the less energetic stuff. The same issue applies: the very energetic stuff just sails right on in. Big EM fields also can't deflect neutral stuff like neutrons and gamma rays, and the most energetic cosmic rays barely bend anyway. Radiation shielding is hard, and it'll take multiple methods working in tandem.
Any material will work if it's thick enough: thick concrete, water/ice. It just has to be thick enough to also handle the secondary radiation from the higher-energy stuff. We're talking several feet of concrete or several inches of lead.
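The "thick enough" point can be sketched with areal density (mass per unit area), which is the figure shielding usually gets quoted in. The 200 g/cm² target below is purely an illustrative assumption, not a mission spec:

```python
# Sketch: for a fixed areal-density target, the required thickness
# scales inversely with material density. Target value is illustrative.
target_g_per_cm2 = 200.0

densities_g_per_cm3 = {
    "water/ice": 1.0,
    "regolith (packed)": 1.8,
    "concrete": 2.3,
    "lead": 11.3,
}

for material, rho in densities_g_per_cm3.items():
    thickness_cm = target_g_per_cm2 / rho
    print(f"{material:18s} ~{thickness_cm:6.1f} cm")
# Dense materials need inches where light ones need feet -- but the
# mass per square meter you have to launch is the same either way.
```

With these numbers concrete comes out around 87 cm (a few feet) and lead around 18 cm (several inches), which lines up with the rule of thumb above; the catch is that the launch mass is identical, which is why digging into local regolith wins.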
For a rocky body like Mars the 'easiest' solution is simply to excavate trenches to put your habitats in and then pile several meters of regolith over the top, or to use lava tubes/caves. If excavating were too difficult, you could similarly just make bricks of compressed/fused regolith and pile them up. Titan has a rocky core but is mostly ice where a manned mission would be; there you'd probably just carve out large blocks of ice and place them around your habitat.
The best universal solution would probably be some sort of sandwich of materials that was still decently thick.
Every project I have been on where we built for availability and resilience has inevitably had at least one single point of failure. Usually it is something deemed non-critical that can somehow still bring the infrastructure down. (A single DNS server at one of our production sites did this. We had two more accessible via a VPN tunnel, and the assumption was that if the production DNS went down the other two would still be reachable. Too bad the day it happened the tunnel was down too.)
Also you have to deal with sysadmin error. I know we sysadmins are practically perfect in every way, but occasionally we make mistakes... big mistakes. ;)
Funny story: about 6 years ago we got an HP DL980 server with 1TB of memory to migrate off an Itanium HP-UX server. The test database was Oracle and about 600GB in size. We loaded the data, and they had some query test they would run; the first run took about 45 minutes (which was several hours faster than the HP-UX). They made changes, and all the rest of their test runs took about 5 minutes to complete. Finally someone asked me, and I manually dropped the buffers and cache, and it was back to about 45 minutes.
Their changes did not do anything; everything was getting cached. It was cool, but you need to know what is happening with your data. I am just glad they asked before going to management saying their tests only took 5 minutes.
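For anyone who wants to reproduce that kind of cold-cache test on Linux, a sketch of the OS-level half (requires root; flushing Oracle's own buffer cache is a separate step done inside the database itself):

```python
import subprocess

def drop_linux_page_cache():
    """Flush dirty pages to disk, then drop the page cache, dentries,
    and inodes so the next query reads from storage again. Needs root."""
    subprocess.run(["sync"], check=True)
    # Writing "3" drops page cache plus dentries/inodes; "1" would drop
    # only the page cache.
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")

# Pattern: run the timed query once cold, once warm, then call
# drop_linux_page_cache() and re-run to see which number is real.
```

Without a step like this, every run after the first is really measuring RAM, not the change you just made.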
Those were some poorly built systems. I worked on probably 10 of them, and they not infrequently had major issues. HP's support model was to send a tech out with 2 sticks of RAM and try them in different places to trace memory failures... across 4 (or 8?) cassettes, 64 sticks of RAM, and 20+ minute POST times.
We eventually had one server entirely replaced at HP's cost after yelling at them long enough, and that one never worked well enough to ever use in production, either. I'd say we had maybe a 70-80% success rate with those servers. They were beasts, though, with 4TB of RAM as I recall, and 288 cores.
After 15 years of being a DoD contractor, it is frustrating to see yet another sole source entity getting the contract. Prices will inflate and there is no competition.
HP and Oracle have been reaping the benefits of this for at least two decades now. We put databases on Oracle that should be handled by Postgres or MariaDB since DoD prefers Oracle. We would buy useless HP software because we were an HP shop. I fought to get a non-HP solid-state array for our data, and it was an epic battle (in the end I won: at the extreme we had 6-to-7-hour processes cut down to under an hour, which the HP equivalent could not replicate at the time).
So I can see DoD moving to Azure, getting vendor lock-in, and in 10 years if they want to move, the cost will be so extreme it will either cost taxpayers a ton of money or not be realistic.
As impossible as it sounds, and somewhat impractical, I would rather see a vendor-agnostic approach with DoD spread across multiple gov clouds. I guess the years have gotten me jaded about government spending (wait, what, how did we buy 2 extra $50k Cisco chassis and then keep them in storage for 3 years...).
> We put databases on Oracle that should be handled by Postgres or MariaDB since DoD prefers Oracle.
I mean, if you’ve already budgeted the CapEx for some additional Oracle licenses, the OpEx efficiencies of having unified tooling and a unified ops doctrine are no joke.
I haven’t worked with an Oracle DBMS, but I think this is analogous: I’d sure hate to have to manage a cloud infrastructure where parts were on AWS, parts on GCP, and parts on Azure. Sure, there are generic tools that treat all three the same, or over-layers like K8s that don’t care about substrate—but what if the projects on each platform were taking advantage of that platform’s specialties? What if I was using SNS on AWS, or BigQuery on GCP?
To bring that back through the analogy, what if our Oracle projects were tuned using Oracle-specific query-planner hints, while our Postgres projects did their ETL using PG-specific Foreign Data Wrapper connectors?
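To make that flavor of lock-in concrete, a sketch of what "platform-specific" looks like in each dialect. The table and foreign-server names here are invented purely for illustration:

```python
# Hypothetical snippets showing dialect-specific features that don't
# port between stacks. All object names are made up.

# Oracle: an optimizer hint pinning a specific index -- the /*+ ... */
# comment syntax means nothing to Postgres.
oracle_query = """
SELECT /*+ INDEX(orders orders_cust_idx) */ *
FROM orders
WHERE customer_id = :cust_id
"""

# Postgres: a foreign table via the postgres_fdw extension, pulling ETL
# data from another server -- no Oracle equivalent with this syntax.
postgres_fdw_ddl = """
CREATE FOREIGN TABLE staging_orders (
    order_id    bigint,
    customer_id bigint
)
SERVER legacy_server
OPTIONS (schema_name 'public', table_name 'orders');
"""
```

Moving either workload to the other stack means rewriting queries and DDL, not just re-pointing a connection string, which is exactly the O(N)-teams cost described above.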
In both cases, the only real solution is hiring and retaining O(N) specialized ops headcount, one team for each stack. And that cost gets a lot higher than just paying for another darn Oracle license.
Even with an abstraction layer like Kubernetes you still have a lot of duplicate work if you're multi-cloud. Services need to be exposed with load balancers, and those will have different configurations to set up. Same with any volumes. And then you have platform updates on both sides, bugs, quirks.
Plus the maintenance and upkeep of two clouds: two bills to inspect, two account managers to deal with, two sets of permissions and overall account configuration to set up, maintain, and audit.
Multi-cloud is really a set of requirements that changes the whole game in terms of operational overhead. And we limited our example to Kubernetes. I imagine any non-fictional-for-the-sake-of-example company would want to make use of other platform-specific tools, as you mention.
I think a lot of this speculation requires inside knowledge of what the DoD's use cases actually are.
Are they doing a lot of compute, a lot of ingestion, a lot of output, and a ton of networking? Are they primarily just doing one of these things?
Who knows?
There's a lot of cases where having multiple clouds could be fine -- maybe even a big benefit. There's also a lot of cases where it could be a major headache.
I know a little about it from a previous employer.
Even the narrow slice I saw was all of "a lot of compute, a lot of ingestion, a lot of output, and a ton of networking", and more.
I think the internal inefficiencies in the DoD datacenters are so enormous that any kind of move to something more 'standardized', no matter what company it is, even with all of the artificial overhead, will likely be a big win.
I'm sure these deals are complicated at the size they're going at, and maybe layman's pricing models just get tossed aside, but one of the biggest things I spend time on in cloud architecting is data ingress/egress, in order to control costs. I simply don't understand how it's possible to go multi-cloud and control those costs; I feel like you'd either blow costs out by not caring, or blow costs out throwing engineers at very complex solutions.
Not just governments. I worked at an F100 company that had a special type of internal funding called SQP for spending on (mostly) IT people-hours on projects. Year after year, the biz would penny-pinch on their SQP to the point that we struggled to keep core contractors on staff. Then around Oct/Nov, the biz would come to us begging to spend their SQP on anything (else they could lose it next year), as long as that project stopped billing at fiscal year end (not a day afterwards, as that would be charged to next year's SQP). That meant we then struggled because we didn't have enough people on staff to burn up all the SQP. Famine, then feast! I would suggest (half jokingly) that we should head over to the nearest retirement home and get a bus load of old folks to join us at $150/hr for two months to just sit there doing nothing but burn SQP.
We are always seeing threads with junior people/new grads talking about how they are having so much trouble breaking into the industry. Just hire them and give them interesting tiny projects. Would be super helpful to so many people to get going and still allow you to spend all your money.
What's hysterical to me is that Microsoft was actually making this point in their proposal, because they very much expected AWS to be the sole recipient of this contract.
As a former longtime contractor I totally agree. One thing often overlooked is this 'Silver Tsunami' that will wreak havoc on the old guard (existing giant contracting companies). The existing workforce is aging, the new blood is uninterested in the old ways (cruft, old tech) and frankly there's tremendous opportunity for a 'Space-X' style small contractor to get a serious foothold.
Some of these companies need to replace 40% of their workforce in the next decade. Who would a new grad choose?
> there's tremendous opportunity for a 'Space-X' style small contractor to get a serious foothold.
Sure, but the procurement process is incredibly difficult for small companies to crack into. There is a tremendous amount of red tape designed to keep other players out.
> Some of these companies need to replace 40% of their workforce in the next decade. Who would a new grad choose?
Most will go to where the money is...unless you go FAANG, it's harder to find the kind of high salaries that defense contractors can throw around in the private sector.
> Most will go to where the money is...unless you go FAANG, it's harder to find the kind of high salaries that defense contractors can throw around in the private sector.
There is a reason the DC suburbs of Northern VA and Southern MD have some of the richest counties in the US...
Is it really unusual or inefficient for the Department of Defense logistics chain to have lots of spare capacity all over the place w.r.t the Cisco chassis? Especially with global force projection, I'd expect there would be massive, unavoidable waste just to maintain operational capabilities, stretching into every corner of the industry.
This was an accident, they meant to buy 2 blades for more SAN ports and they ordered the wrong parts.
They did this with a few database servers as well: they were supposed to order 8-core processors but ordered 10-core instead (it was a typo). With 4 processors per machine it was a $40k extra mess, not including the extra Oracle licenses we needed to buy, since Oracle licenses by core (8 extra cores per machine across 3 machines).
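The damage from a typo like that compounds quickly. A quick sketch of the core math from the story above (the per-core license cost is a placeholder for illustration, not Oracle's actual price list):

```python
# Sketch of how a 2-cores-per-socket ordering typo multiplies out.
cores_ordered = 10   # what arrived
cores_needed = 8     # what was specced
sockets_per_machine = 4
machines = 3

extra_cores = (cores_ordered - cores_needed) * sockets_per_machine * machines
print(f"extra cores to license: {extra_cores}")

# Placeholder per-core license cost, purely for illustration:
cost_per_core = 10_000
print(f"extra license exposure: ${extra_cores * cost_per_core:,}")
```

Two unwanted cores per socket turns into 24 extra licensable cores across the fleet before anyone notices the order was wrong.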
But i do understand the spend it or lose it, which is another reason no one tries to be efficient.
My cubicle at my last job was surrounded by a mountain of expensive printers, because they would lose some funding if they didn't spend all of their budget each year.
If we ordered a server it would show up in 6 to 12 months and sometimes would be sent to the wrong place. The people in charge of ordering would replace SSDs with spinning disk and we wouldn't know about the change until the wrong parts arrive months later. Fun times.
I understand that the existing approach results in a worse and more expensive product, but doesn't this approach also allow agencies to focus their efforts on ensuring the vendors of their critical infrastructure aren't compromised by bad actors, etc.?
(This isn't my area of expertise, so I'm open to the idea of being super wrong.)
You can lock down YOUR infrastructure, but then you are 100% dependent on the cloud environment to maintain and patch theirs. Since you have no insight into what the underlying infrastructure consists of, you really have no way of knowing if it is secure. Do their storage arrays have open CVEs? Are they employing people who are mentally sane? You just have to trust them.
So the cloud just migrates a lot of the security to another team. I do not know for a fact, but I am pretty sure the DoD cannot just show up at AWS or Azure facilities and start auditing them (maybe they can and it is in the contract; someone else might know).
Considering AWS already has 2 entire airgapped datacenters for the US government, I'm pretty sure this contract will entail Microsoft building entirely separate airgapped datacenters for the DoD, and thus they probably will be able to just show up and audit, because nobody else is going to be running anything in those datacenters.
Correct, but those air-gapped data centers already exist: there are separate Azure regions for certain restricted civilian government workloads, DoD unclassified work, and cleared work.