Keep in mind this is relatively new. The press seems to be spinning it as an "aha, old phones really are getting slower!" Well, yes, now. I mean as of iOS 10.2.1, so a year ago, but people have been thinking their phones were getting slower since smartphones became a thing.
Likewise with old computers.
And as someone else commented, "apple will never add a MAKE MY PHONE UNSTABLE switch."
That this is a problem at all is ridiculous. It's on freaking rails! There is absolutely no excuse for this. GPS, inertial nav, or simply integrating speed over time: it should be absolutely dead simple to calculate exactly where the train is at all times.
The article mentioned crap like differential GPS. Completely unnecessary. You don't need to be that accurate; good grief, +/- 100 feet would be just fine for such gross overspeed detection.
I can't believe this is actually a problem. Pure politics/bureaucracy. It's certainly not a technical problem.
You don't even need GPS, or an external signal at all. All you need is an odometer.
1) Engineer selects the train's starting position (the station and departure gate) and selects the route.
2) Train software presses the equivalent of a 'trip reset' button that our cars have had forever. Current Position = 0.0
3) The train and the engineer each determine a maximum speed. The train determines its maximum speed by selecting the route entry with the highest Position that is still less than currentPosition and using its maxSpeed. The engineer determines the maximum speed as she normally would.
4) The train's speed must not exceed the lower of the two configured maximum speeds.
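The steps above can be sketched in a few lines. This is a toy illustration of the lookup, not anything from a real train control system; the route table positions and limits are made-up numbers.

```python
# Hypothetical route table: (position_miles, max_speed_mph), sorted by
# position. Each entry is the limit that applies from that point onward.
ROUTE_LIMITS = [
    (0.0, 30),    # leaving the station
    (2.5, 79),    # open track
    (42.0, 50),   # curve ahead
    (45.0, 79),
]

def track_limit(current_position):
    """Pick the entry with the highest position <= current_position."""
    limit = ROUTE_LIMITS[0][1]
    for position, max_speed in ROUTE_LIMITS:
        if position <= current_position:
            limit = max_speed
        else:
            break  # table is sorted; nothing further applies yet
    return limit

def allowed_speed(current_position, engineer_limit):
    """Step 4: never exceed the lower of the two configured limits."""
    return min(track_limit(current_position), engineer_limit)
```

So at mile 43 in this made-up table, the train's limit is 50 mph regardless of what the engineer configured above that.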
> probably runs into issues with accumulated error
Typically there's neither single nor double integration required for a wheeled ground vehicle; distance (revolutions times wheel circumference) is the primary measurement, with speed trivially computed from that plus an accurate clock.
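Concretely, a minimal sketch of that measurement (the wheel diameter here is an assumed placeholder, not a real spec):

```python
import math

WHEEL_DIAMETER_M = 0.92  # assumed value for illustration only
CIRCUMFERENCE_M = math.pi * WHEEL_DIAMETER_M

def distance_m(revolutions):
    # Distance is measured directly from the revolution count:
    # no integration involved.
    return revolutions * CIRCUMFERENCE_M

def speed_mps(rev_start, rev_end, dt_seconds):
    # Speed is a simple difference quotient over a clock interval,
    # so a one-off counting error doesn't compound over time.
    return (distance_m(rev_end) - distance_m(rev_start)) / dt_seconds
```

Compare that with an accelerometer, where a constant bias gets integrated twice and the position error grows quadratically with time.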
That assumes there's insignificant slippage between the wheel and ground. A reasonable assumption for a rubber tire on a concrete road, with a 20 sq. in. contact patch. Less reasonable for steel on steel with a 2 sq. in. contact patch.
I certainly agree that there are still errors, wheel slip being a major one. However, that does not cause an accumulating, ever-compounding error in speed or distance travelled the way that, say, a one-time error from an IMU's accelerometer does.
The lack of integration common in wheeled ground vehicles is highly beneficial to long-term speed and distance accuracy; it's a main reason why odometers were accurate for literally thousands of years before anyone knew how to build an equally accurate aircraft/spacecraft IMU. (The Romans achieved <0.5% errors over hundreds of miles by the first century AD, something that no aircraft IMU could consistently match until after WWII.)
The downvotes seem weird to me. Maybe it felt off topic?
Dr. Francis S. Collins appears in several of the videos, and the third video is all about this particular controversy over dangerous experiments at the dawn of genetic engineering (for example, one researcher wanted to splice a cancer-causing gene into the E. coli bacterium).
It seemed relevant to me. It is what the article is talking about, after all.
I don't really understand your point. This is what he means:
> I often write a placeholder title, write the essay, and then at the very end, spend a good chunk of time iterating on titles until there’s a good one.
It isn't uncommon for me to handle blog posts that way as well, or even paid freelance articles if no title was provided. You put in a title that suggests what you are talking about, then you write the piece, then you try to figure out what the most important detail is or the most compelling hook or the shortest way to make the essential point.
Writing good titles is quite hard and good titles often grow out of the piece after it is written. You get to the end and you have written multiple paragraphs to give a good lead up and then your final paragraph draws a conclusion in a way you didn't have in mind when you started. And therein lies clues to a good title.
I fairly often pull ideas for a title from the last paragraph or two of a blog post. And I usually don't know what that last paragraph will be until I have written the entire piece.
I must have missed that. Thanks for pointing it out, I was bothered by the seemingly contradictory advice :)
all I noticed was:

> "Titles are 80% of the work, but you write it as the very last thing. It has to be a compelling opinion or important learning"

followed by

> "Most of my writing comes from talking/reading deciding I strongly agree or disagree. These opinions become titles. Titles become essays."

and then the example

> "The best example of this in my work is 'Growth Hacker is the new VP Marketing' which started out as a tweet with 20+ shares, and then was developed into an essay afterwards."
I agree with the "because it's fun to learn" answers.
But I'd also add the reliability factor of keeping a quorum of containers running. It's a tool for dealing with the unreliability of cloud infrastructure, and unexpected process death.
There aren't enough altruistic people to make the blockchain trustworthy. Profit motive is the entire driving force behind the trust model of bitcoin, et al.
You've got that exactly backwards. The profit motive being predictable is what every market-based economy heavily counts on. Which is to say, most of the world's economic activity relies on the profit motive as the foundation of trust.
Hundreds of trillions in global wealth and $100 trillion in annual economic output are powered by very predictable self-interest. You can build trust networks around it and you can build regulations based on it, precisely because it's extraordinarily predictable and universal to the extent necessary for the whole system to function with billions of participants.
I think you're both partially right. The profit motive is what drives people, but profit margins get squeezed, so in an established mining ecosystem like bitcoin's you probably aren't going to make any profit (after accounting for opportunity costs).
That's interesting. I switched back from emacs to vim (neovim), and fugitive was recommended. It just looked like :commands wrapping the normal git CLI... maybe I missed something.
So even though I edit in vim, I jump back to emacs for magit for bigger commits or multiple smaller commits (where I need to see diffs to be sure I capture my changes).
I have desired something as useful in vim, but I didn't think there was anything. I'll take another look at fugitive.
As soon as I started working with git, I installed fugitive. My fugitive learning curve has been slow, and I have never been able to stage efficiently with it.
And then a colleague showed me magit. I waited a year for something similar to come to vim, trying to push this idea to fugitive (https://github.com/tpope/vim-fugitive/issues/569), without success.
Finally, some first experiments showed that staging partial hunks was feasible, and I created vimagit.
As you will see, it is far from covering magit's whole feature set. For the moment, it "only" focuses on the stage/unstage and commit features (which are the main use case for me). The current workflow is quite robust: you can easily navigate through all the diffs to review them, stage by file/hunk/line/part of line, write the commit message (or amend the last commit), jump to the diff locations in their files... I continue to use fugitive for Gblame and Gdiff.
Next major features should be git stash support (being able to prepare a stash like a commit, by file/hunk/line) and some git-log-related features (to easily git commit --fixup a chosen commit from a log, for example).
Yeah, fugitive seems pointless to me. Might as well just use the git cli...which I do, but when I want something quicker to navigate/visualize, magit is my go to.
See my comment in the sibling thread. I am not familiar with Magit, but there's no equivalent on the CLI to Fugitive's flow for rapidly traversing history. It's really, really useful, and lets me answer questions that make other devs on the team throw up their hands, because a line may have traversed several different files over dozens of commits throughout its history. The reason it can't be done on the CLI is that you need multiple buffers and window management to make it viable.
All that said, I don't really use it for committing, mostly because I have a shell open right next to the editor anyway.
I wonder if this would be feasible at the neighborhood level via a homeowners association. The neighborhood gets a tower and a microwave link to a backhaul station, and provides internet to the houses via wifi or wires.
I think our neighborhood is about 130 houses. Probably not enough to make it cost effective.
On the flip side, maybe starting a local company to provide LOS microwave hookups to the various neighborhoods in the area could make it work.
If you can somehow convince your HOA to let you put up a tower, then yes, it's feasible. And if you are doing microwave link only, it's pretty cheap.
You can rent space on a nearby cell tower for a pricy-but-not-insane monthly fee, and they'll usually have decent backhaul already present. (American Tower had a WISP sales program specifically for this at one point, I'm not sure if they still do). Run point-to-point from there to your neighborhood via some microwave WISP gear.
If you had a volunteer from the HOA willing to setup and manage it (a bigger ask than it sounds like), and if all 130 houses would agree to pay $50/month, then the math would work out OK (at least, using pricing I got in suburban Michigan about 4 years ago).
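As a back-of-envelope check on that math (the revenue side comes from the comment; all the cost figures are illustrative placeholders, not quotes from any real provider):

```python
# Revenue side, from the comment above.
houses = 130
monthly_fee = 50               # $/house/month
revenue = houses * monthly_fee

# Cost side: assumed round numbers for illustration only.
tower_rent = 1500              # $/month for space on a nearby tower
backhaul = 2000                # $/month for commercial backhaul
maintenance = 500              # $/month gear amortization + repairs
costs = tower_rent + backhaul + maintenance

print(revenue, costs, revenue - costs)  # 6500 4000 2500
```

With numbers in that ballpark the HOA clears a modest surplus, but it's thin enough that losing a couple dozen subscribers, or paying anyone for the admin work, could sink it.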
> If you can somehow convince your HOA to let you put up a tower.
You don't have to convince them; let the FCC do that. I lived in an area with a heavy-handed HOA. The only decent broadband was a WISP. They had a few go rounds with the HOA, but they can't regulate antennas. In the end the WISP put a tower on my roof; I never heard a word. They may try, but they don't have the authority to regulate it.
> They had a few go rounds with the HOA, but they can't regulate antennas.
That's a little bit of an overstatement. HOAs can regulate antennas unless the FCC (or Congress) makes an exception.
In the case of WISP, there is an exception that applies: 47 CFR 1.4000 [1]. WISPs would fall under the exception for antennas for "fixed wireless signals". A "fixed wireless signal" is "any commercial non-broadcast communications signals transmitted via wireless technology to and/or from a fixed customer location".
I think you have a flawed assumption that the big telcos that own the tower and backhaul aren't going to charge content providers for access to that tower.
> you have a flawed assumption that the big telcos that own the tower and backhaul
Most cell towers are owned by a third party (not a big telco), and they'll lease to anyone if you have the cash, and the site has the capacity (physical space, weight/wind requirements, etc). You can lease from American Tower, Crown Castle, SBA, etc.
The existing backhaul is often owned by existing monopoly telecom providers. But not always. And competitive non-big-telco commercial operators will often install service to a site for you, if you are willing to pay for it. For example, I'm looking at a cell site in Michigan right now, that's deep in AT&T territory, but Sprint fiber is actually the installed backhaul provider, and four other commercial providers will install service there for a price.
You can know all of this upfront, before you sign anything, so there's very little risk in terms of tower space or backhaul availability. People have been doing this for decades now, it's not as ill-defined as it might seem.
Speaking from today's perspective, you're correct. But it won't be long until the third parties et al. figure out they too can get into the paid-access game. Contracts will be revised. Rents will be extracted. Because there are no regulations to put a check on greed.
Contrary to the claims made, if one removes municipal blocks, fiber is very easy and very cheap to install. What makes fiber installation expensive is municipal regulation.
Clearly I’m not the guy to talk to about Tor/onion routing. I’m wondering if a network could somehow be set up across thousands of homes, and that network could purchase priority for its traffic. Basically, there’s always a way to add another layer of abstraction to circumvent a lower layer's restrictions.
Yes, a mesh network would work for that; then you just need a method to measure how much traffic each node serves and pay the node operators accordingly.
I know there were mesh networks with wireless microwave transmitters deployed in some rural areas, but I can't find the articles. It's probably going to get more and more attention though, along with distributed electricity and similar things as technology progresses.
Does this mean you think all viewers are democrats (or even most) and that republicans are the enemy?