I started at MS during Vista and I've been involved (sometimes tangentially) with Windows ever since. This is all my opinion, but it's been very interesting seeing the decision-making process change over time.
If I had to summarize the change, I'd say that it's evolved from an expertise-based system to a data-based system. The reason eight people were present at every planning meeting is that their expert opinion was the primary tool used in decision making. In addition to poor decisions, this had two very negative outcomes:
1) Reputation was fiercely fought for. Individuals feared that if they were ever incorrect, the damage to their reputation would limit their ability to impact future decisions and eventually lead to career death. Whether this actually happened or not is irrelevant; the fear itself caused excessive caution and consensus-seeking.
2) In the absence of data, an eloquent negotiator is often able to obtain their desired outcome, no matter how sub-optimal that outcome might be.
Nowadays, I see data used a lot more, hence the telemetry. A while ago, in response to criticism of Windows telemetry habits, I wrote:
"Telemetry is, by now, a fundamental part of the engineering process. Products that don't incorporate it are going to be clobbered by products that do. Microsoft didn't start this paradigm, but I think they had to incorporate it in order to stay competitive."
I understand that telemetry is still a delicate issue, but the expertise-based decision making of yesteryear truly sucked for many, and I don't see a viable alternative.
Putting on my asbestos suit... I started the telemetry project (aka SQM or CEIP) at Microsoft when I was on the MSN Explorer team. (Then Office got excited about it, then SQL Server and Windows and nearly everyone else.)
The original inspiration was being in a meeting much like the one described, where people were arguing about whether we should optimize the UI for 640x480, 800x600, or 1024x768 screens. The "argument" consisted almost entirely of anecdotes. I thought "Why don't we just measure how big the screens actually are, and then I won't have to attend any more meetings like this?" It turned out a lot of other people felt that way.
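For the curious, that particular data point is trivial to gather; here's a minimal sketch in C using the documented Win32 GetSystemMetrics API. The log_datapoint function is a made-up stand-in, not the actual SQM upload pipeline:

```
#include <windows.h>
#include <stdio.h>

/* Hypothetical telemetry sink; the real SQM had its own upload pipeline. */
static void log_datapoint(const char *name, int value)
{
    printf("%s=%d\n", name, value);
}

int main(void)
{
    /* Primary display resolution: the data point that would have
       settled the 640x480 vs. 800x600 vs. 1024x768 argument. */
    log_datapoint("screen_width",  GetSystemMetrics(SM_CXSCREEN));
    log_datapoint("screen_height", GetSystemMetrics(SM_CYSCREEN));
    return 0;
}
```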
There's a right way and a wrong way to do telemetry.
The right way: When the Windows 7 RC was released, you guys made it available for free, with telemetry, as the final stage of real-world testing. It was up-front, it was open, it was optional. I was happy to join in, bought Windows 7 when it came out, and it was good.
The wrong way: When Windows 10 was released, a whole lot of nebulously defined, mandatory telemetry was baked into the commercial release. The fact that it was "free" is irrelevant - this was a clear transition from "you own the software" to "MS, via the software, owns and monetizes you." Then the "Get Windows 10" push started and for over a year we had a pitched battle between MS trying to force users to upgrade, and users who didn't want Windows 10 with all its invasive marketing, telemetry (weird how that slider keeps changing itself back from "minimal" to "full", huh?), and loss of control of their systems. I've disagreed with MS's customer relations in the past but never before have I felt that they were actively hostile. I will never use Windows 10.
Win 10 is way after my time. When I was the SQM architect we were very strict about not collecting any identifying information, the collection was always opt-in, and the intention was to publish a KB article with a list of the data points we collected (not sure if that happened).
I don't know what the internal MSFT rules are now, but the conservative position I took back then does seem rather quaint given what happens on mobile platforms nowadays...
Because the desktop software is executed on my desktop - and since I'm the owner I'd like to have full control over what is happening on it instead of giving up that power for the benefit of a faceless entity and a foggy promise of "better user experience".
A lot of telemetry gathering done by websites is also executed on your desktop. I don't think it makes a difference that it's written in JavaScript and not C++.
That's also bad, but at least it is reasonably well documented, and there are a variety of tools (Ghostery, uBlock Origin, uMatrix, Disconnect, Privacy Badger, AdBlockPlus, ....) that can all help you with controlling it.
Unlike websites, your OS has access to every file, every key press, etc.; some of this gets sent in the telemetry. Also, Win10 is mostly a black box in these respects: you don't know what gets sent, and there is hardly any documentation either.
The problem is that the Web exists in a sandbox. They can collect data about my usage of their website, full stop (large sites like Facebook and Google get around this, but that's not really telemetry). With desktop software there's no way of knowing what's collected as "telemetry".
Furthermore, when people choose to "go online" or "use the Internet" that's a (mostly) conscious decision --- they know, more or less, that what they're doing is actually reaching out to some other machine somewhere else, and adjust their behaviour accordingly.
The related blurring of the distinction between what's local and what's remote also irritates me greatly. E.g. making one UI search both local file names and contents as well as the Internet.
Very interesting and thanks for the post. Does Windows 10 actually conduct ongoing A/B tests and use telemetry to evaluate them? (for example the interface used by two people on two different computers might look different, or something)
I'm intrigued but still very skeptical about the effectiveness of the data-based approach to UI design. That seems to be an area where local, hillclimbing optimization can lead to a mess of conflicting decisions across the OS.
Yes. The blog for the Windows Insider Program makes frequent references to beta features that were shipped in different forms to different users, with a decision then made based on the telemetry from those flights.
> Telemetry is, by now, a fundamental part of the engineering process. Products that don't incorporate it are going to be clobbered by products that do.
I am not sure this is correct. Telemetry can show what users do, but it won't show why they do it. If users don't click a button, maybe they don't need it, but maybe they don't understand what it does. Or maybe there are too many buttons and users get lost.
Also, making a decision based on telemetry is making a decision based on the opinions of non-experts. There is a quote attributed to Ford: "If I had asked people what they wanted, they would have said faster horses." Can you make something innovative this way?
For example, take a code editor like Sublime Text, which in my opinion has a very good UI, and which has no toolbar: can you use telemetry as an argument for removing a toolbar? No, because if there is a toolbar, there will be users who use it without realising that they could do better without it. You need an expert to make a better UI, not telemetry.
I think it would be inaccurate to say you can't learn things from telemetry. You can easily run A/B tests, or even in the absence of proper A/B tests, you can see if a newly added feature is actually being used or not.
The other thing is the timing of actions. I bet you could learn a lot from that, especially if you take a nice group of people, watch and quiz them, line up your learnings with their telemetry, and then use the telemetry of your general user base to infer how that user base is responding.
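As an illustration of the mechanics (not any actual MS pipeline; the names here are invented), A/B assignment is often done by hashing a stable device or user ID so that each machine consistently lands in the same experiment arm:

```
#include <stdint.h>
#include <stdio.h>

/* FNV-1a hash: stable across runs, so a given device always
   lands in the same experiment arm. */
static uint32_t fnv1a(const char *s)
{
    uint32_t h = 2166136261u;
    for (; *s; s++) { h ^= (uint8_t)*s; h *= 16777619u; }
    return h;
}

/* Assign a device to arm 'A' or 'B' for a named experiment;
   hashing the experiment name together with the ID keeps
   different experiments' assignments uncorrelated. */
static char assign_arm(const char *device_id, const char *experiment)
{
    char key[256];
    snprintf(key, sizeof key, "%s:%s", experiment, device_id);
    return (fnv1a(key) % 100 < 50) ? 'A' : 'B';
}

int main(void)
{
    printf("arm=%c\n", assign_arm("device-1234", "new-start-menu"));
    return 0;
}
```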
Standard flame-war prevention disclaimer: I'm not saying we should add telemetry to everything. I'm just saying it seems like it could be another useful tool in product creation.
I don't work at Microsoft, but you've eloquently described issues I deal with at my own workplace. I think your outcome 1) is almost inevitable in many large software orgs as ego/reputation fills the decision making vacuum left by the absence of 'good' data/telemetry.
Telemetry isn't magic. It might take a few contentious issues off the table, but for the ambiguous long-term stuff, it still boils down to someone showing some balls and putting their neck on the line. Lumia is a good example where the numbers overruled basic self-confidence.
Can you elaborate? I remember hearing mainly good things about the Lumia UI. The problem was that it appeared in the middle of an already heated competition between iOS and Android and didn't put much else on the table.
Yes, and, I might be misunderstanding your question, but I view that as a good thing. I don't ever want to spend my time in a meeting witnessing people bicker over where to position a dialog button, when you could just run an A/B test and let data guide the decision process.
"Based on the data collected, most users seem to prefer..." is better than, "Based on my training and personal experiences, I think users would prefer..."
It is a lot easier to come to agreement on something as objective as statistical significance than on personal feelings and opinions.
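To make "statistical significance" concrete: with two arms of an A/B test, a two-proportion z-test on, say, click-through counts is a common approach. A rough sketch with illustrative numbers only (compile with -lm):

```
#include <math.h>
#include <stdio.h>

/* Two-proportion z-test: did arm B's conversion rate differ
   from arm A's by more than chance would explain? */
static double two_prop_z(long succ_a, long n_a, long succ_b, long n_b)
{
    double pa = (double)succ_a / n_a;
    double pb = (double)succ_b / n_b;
    double p  = (double)(succ_a + succ_b) / (n_a + n_b); /* pooled rate */
    double se = sqrt(p * (1 - p) * (1.0 / n_a + 1.0 / n_b));
    return (pb - pa) / se;
}

int main(void)
{
    /* Illustrative counts: 1200/10000 clicks in arm A, 1320/10000 in B. */
    double z = two_prop_z(1200, 10000, 1320, 10000);
    /* Two-sided p-value from the normal CDF. */
    double p = erfc(fabs(z) / sqrt(2.0));
    printf("z = %.3f, p = %.4f\n", z, p);
    return 0;
}
```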
What is the telemetry optimizing? In the web world, most A/B testing is focused on increasing engagement. Half of the time people are trying to build a better Skinner box.
Is this software actually better for users? I would be encouraged if MS was, for instance, optimizing for a user being able to complete the same tasks in less time.
Cannot speak for all the teams in Windows, but my team has used it successfully to improve the battery life of machines through A/B tests.
We're able to (on Insider builds) try out postponing various operations until the machine is plugged in, to see how it affects the user experience and power draw. It helped us tune a few operations, in ways I wouldn't have anticipated, to improve battery life without degrading the experience.
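The client-side check for that kind of deferral can be very simple. A hedged sketch using the documented Win32 GetSystemPowerStatus API; the "deferred maintenance" idea is just a placeholder, not the team's actual implementation:

```
#include <windows.h>
#include <stdio.h>

/* Returns nonzero if the machine is on AC power, so deferrable
   maintenance work (indexing, updates, uploads) can proceed. */
static int on_ac_power(void)
{
    SYSTEM_POWER_STATUS sps;
    if (!GetSystemPowerStatus(&sps))
        return 1; /* no battery info: assume desktop, run the work */
    return sps.ACLineStatus == 1; /* 1 = online, 0 = offline, 255 = unknown */
}

int main(void)
{
    if (on_ac_power())
        printf("plugged in: run deferred maintenance now\n");
    else
        printf("on battery: postpone until AC power\n");
    return 0;
}
```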
One of the other comments mentioned this, but the blog will also highlight some of the more user-visible tests that are run, generally around optimizing UI flows for usability.
I don't see how telemetry would have fixed the organizational problems described in the article. You still have to agree on a handful of features to implement for the A/B tests. If cross-team communication across the kernel, shell, and menu teams is bureaucratic and slow, and it takes months for dependent code changes to trickle through the repository tree, you're still going to end up with lowest-common-denominator feature implementations.
I can relate to this. It was a common theme across Microsoft. In one instance, the team I was part of was responsible for the initial integration of Bing and Facebook. We had about 24 dedicated people on the team (a few partner/principal group managers too!). Our team was considered agile because we released most features every 4 months (this was around 2009 I think)!!!
Anyway, we had a hackathon at Facebook with Zuckerberg and the FB team. There were all these big (redundant) talks by every manager up the chain about how this was the most important integration, etc.
Guess what came out 4 months later??? A Like button on the Bing home page! The kicker was that the code for the widget was picked from the Facebook developers portal.
This isn't purely a software engineering mess though. The PC architecture for "off" is a mess at the hardware level too.
I mean, long ago you just had a physical switch on the power supply. But then filesystem authors invented buffering and that wasn't safe anymore. So now there were two ways to shut the machine off: the "safe" way and the old way (where the "old" way was still available via e.g. holding the power button down for 4 seconds).
Then, we wanted to start using laptops and carrying them around. But booting the whole computer again and again every time you changed your seat became a chore, so software people invented this cool new trick where you could dump memory to disk and restore it later, so "hibernation" was born and we now had three ways to shut down.
Then the hardware people jumped in and pointed out that really the problem here is just the CPU. DRAM refresh is cheap, so there's really no need to dump the RAM at all. Let's just shut the CPU off and come up with a hardware/firmware/OS/driver hack (yeah, it touches basically everything) for powering on into a known DRAM configuration. Much faster! And now we had a fourth way to shut down.
(OK, this is a little spun. In fact suspend to disk and suspend to RAM landed nearly simultaneously in the PC world, with different manufacturers picking different horses. Then of course ACPI came in and standardized both, forever locking us into not one but two kinds of suspend.)
Then of course, we had a paradigm shift where "mobile" OSes revisited this whole scheme and threw it out the window. The hardware people making mobile chips designed the clock and power gating logic such that the "suspend to RAM" happens essentially every time the CPU reaches an idle state, and never has to be "entered" explicitly the way ACPI S3 is. And now PCs are shipping with this scheme too, even on systems where ACPI still works in a traditional way. So, yeah. FIVE.
I mean, Microsoft surely made a UI mess out of this. But it's not like they were handed a simple problem to begin with.
In reality, CPUs and SoCs have multiple levels of sleep, some of which are standardized for a platform. ACPI at least tries to specify what should happen on x86/x64 systems. ARM has some standards, and then each SoC implements a lot of their own, which, to be fair, are sometimes really cool.
For lots of the power saving modes, hardware drivers have to opt in. This is where a lot of problems happen: if a single driver doesn't work, a laptop won't go to sleep, and the battery dies. Or, on mobile, a single bug in some component (which may manifest itself only under certain situations, which is why a reset can fix some problems) can prevent a good sleep state from being entered.
Of course software can also present problems. For the longest time, if WebGL content was loaded in any tab in Firefox (I think; either that or Chrome, heh), the GPU would stay on and the system would never go to sleep.
Fun stuff like that.
The behavior people really want is "turns on quickly, uses little battery." That is harder to do on PC due to legacy, but part of the problem is also different usage patterns from mobile. Users check their mobile frequently, if something is draining the battery, odds are it'll be noticed in a couple hours and the phone will get a charge (and some apps potentially force killed, based on how knowledgeable the user is.) Laptops have less periodic usage patterns, so a single problem program may not get noticed before the battery is completely dead.
Sure, but the interfaces are more or less as described, and that's what MS has to deal with. I mean, yeah, you could make it into even more of a messy description, but from the start menu's perspective the PMIC or EC interface to power management isn't particularly important.
> Sure, but the interfaces are more or less as described, and that's what MS has to deal with. I mean, yeah, you could make it into even more of a messy description, but from the start menu's perspective the PMIC or EC interface to power management isn't particularly important.
Nowadays, the majority use case is "shut the lid". If software works (hah!), then users shouldn't ever have to manually shut down or reboot, or worry about power state at all.
If I don't like what a computer is doing, I will frequently just yank the plug, pull the battery, flip the circuit breaker, and fuck the rest.
All this buffering and fault tolerance is bullshit at the consumer level. If the disk's file table gets fucked up, then let it burn.
I'll format the disk and re-install your stupid operating system as I see fit, whenever I want, and keep my actual data safe and sound far away from someone else's stupid hung process, until it produces actual results that I can copy into place, whenever appropriate.
These are lessons I learned throughout the late 90's, in the face of countless blue screens, before migrating to linux.
There was a nice paper a while back on the topic of 'crash-only systems'. That is, the idea would be that no software system was permitted to have a dignified shutdown path at all. Every system would be turned off with the equivalent of the power switch or "kill -9".
The point that was made was that frequently the recovery paths were on net faster (for a shutdown/reboot cycle) than the "durpee dur, I am slowly shutting myself down" paths, and that you have to build a good crash recovery path anyhow.
I remember working at a rather large company (that you've heard of) in the 90s. It was one guy's sole job to reboot all the NT servers every night for stability.
A common practice (I believe from MS) was to have C: dedicated to your OS and D: dedicated to everything else. The logic being: a rebuild would be a lot less complicated.
Ah the good ole days.
FWIW, I usually do a complete Windows reinstall every 6 months or so.
I worked as a lawyer on a Microsoft acquisition once. We had once a week 40-person conference calls for status updates. The company being acquired had a total of 9 employees I think.
The calls were scheduled for an hour. But, with 40 people on the line, the calls always ran late. At least 4 of the people on that call were billing $400+ / hr.
Most of the things I had to present during the call could have been resolved with a quick email to the relevant party that said "Hey, this looks funny. Do you care about this?" But the person running the entire thing insisted that I prepare Powerpoint slides for the meeting. And after I showed off my slides and everyone on the call had a chance to ask questions and discuss, the conclusion was almost invariably, "No, we don't really care about that."
"I was on the team responsible for improving performance for Outlook 98. Right after Outlook 97 shipped, my lead printed out every single function call made when Outlook started up. It was a stack of paper about a foot high. He spent about a week going through this printout with a highlighter, looking for "stupid shit", as he called it. Turned out, there was plenty of it. It further turned out that most of this "stupid shit" wasn't a result of any one programmer making a dumb decision; most of it was a result of an architecture and a mindset which tried to prevent developers from shooting themselves in the foot by gratuitously abstracting away "dangerous" things like memory management. Again: needless abstraction will bite you in the ass if you're not careful. My lead and I spent months going through the Outlook code exorcising "stupid shit" -- removing code which hid what it was actually doing, getting the code cleaner and closer to the machine, making everything more explicit, tighter, and less generic."
The above is excerpted from a later blog post by the same author.
On the flip side, removing safety from MS Office devs has led to a pretty significant quality and security decline. Pretty much every Office version we upgrade to is tested to hell and back, and we usually delay adoption for three years; by then it's barely functional even after three years of service packs and hotfixes. We just migrated 100 people to 2013 and are still finding issues. Worse, this mentality spread to updates, which we have to delay by 2 weeks or more because they break things, ironically usually Outlook.
Would I trade a slower start time and less perky performance for security and stability? Absolutely. Maybe those old-timers knew what they were doing. Maybe the care-vs-quick equation makes more sense to lean toward care for many types of software, especially in our connected world. Maybe the trade-off didn't make much sense during the age of the Pentium II, but in the age of i5's and i7's on every desktop with 8+ GB of RAM? I doubt it would be as noticeable.
I think we're dangerously experimenting with a "do it quick, ship, and maybe fix it later" mentality that might make sense for startups hurting for dollars and an MVP, but for dinosaurs like Microsoft, it's a liability. I wonder how many "previous employees didn't know what they were doing so we fixed everything" anecdotes end up with unhappy customers in the end? Perhaps most.
Microsoft fired a huge number of their SDETs (and they no longer actively recruit them). Since then, I've seen a huge decline in quality in Office and Windows. Devs will tell you they're just as good at testing as the testers, but I don't buy it. Also, if you're a dev, do you really like testing enough to give it just as much attention as you do feature implementation? Again, I don't buy it.
Yes, I'm a dev and I know I don't have the patience for testing. Devs tend to only test the happy path, or just off of it. I didn't realize how unexpectedly software can be used until I attended one of our company's user groups.
> in the age of i5's and i7's on every desktop with 8+ GB of RAM
Is that really so? Large corporations probably don't want to overspend on excess hardware power. An Intel Celeron or Atom + 1-2 GB of RAM should be enough for office work.
Once you add the standard set of IT shitware, Serious Enterprise Endpoint Security, legacy line-of-business apps, and poorly written drivers for the cheap components in those systems, 8GB/i5 is closer to a reasonable minimum for usability.
And if it comes with spinning drives, just have it shipped directly to the landfill.
I have decided to stick with writing code rather than progress my career.
Consequently I am in that minority of the team delivering features that customers use. I take personal pride in that and keep quiet about it. Those reports, those plans, the documentation, the mock-ups, the meetings and the emails are not in the final product; all that work counted for next to nothing. Ultimately it was mostly two of us working away and refining the specs by collaboration with stakeholders on an informal basis. Because we understand the problem space we are able to ask the important questions, and I think stakeholders prefer working directly with programmers rather than having messages relayed via management channels with costs plucked out of the air and relayed back.
With stakeholders I find I am talking their language, and a lot of ground gets covered in a working meeting, where some fixes are done during the meeting itself and anything requiring more time gets spec'd out. Management intermediaries assume everyone thinks the way they do, so they can't imagine that it would be safe to have the programmer talk a mix of trade jargon with a client. Usually this is part of the gig; many people make a cushty living coasting between meetings. What do we do to get on with the job without enabling the coasting of freeloading management types?
I've heard the rumors, but I haven't had a problem with it yet. I've been building systems with, not bleeding edge, but relevant technology for 20 years. By building systems, I mean I either do it solo or lead the team with the design. I specialize in back-end automation engines, but can do a good front end as well. I usually don't adopt a technology until it's obvious it's stuck. I primarily do Microsoft stacks. I talk well on my feet, am good talking with clients, and understand the needs of business. I haven't had to use a recruiter in 16 years.
I hope it never happens, but I haven't seen any evidence of it. I'm also never the oldest person in the group. I hope this helps.
Most people in programming entered it recently, so while an ageism problem is possible, programmers skewing young is not, by itself, evidence of ageism.
I'm not in SV. Just a general observation. Younger people know _how_ to do things, but they don't know _what_ to do, and even more importantly, what _not_ to do.
As you age, you will discover that junior devs are much faster at accomplishing well defined tasks, but, as a rule, utterly helpless with anything open ended/complicated/requiring foresight and experience. Your value will be in helping them navigate things they can't navigate themselves.
I'd like to read this, but Blogspot is absolutely the most reading-hostile service of which I'm aware. Its "dynamic" formats most especially.
Not only do these defy all readability within a browser, they break fallbacks: console clients (w3m, elinks, etc.), or readability alternatives (Pocket, Instapaper, Outline.com).
Please just don't use Blogspot. And if you must, at least don't use the dynamic styles.
Anybody who currently works on the Windows team know if things are still like this? I know I've read about improvements to the version control system for Windows teams, and some other blogs about improvements in management practices, but I'm wondering whether, even after 11 years, this has been fixed.
The Win10 UI works. It's stable. It's reasonably efficient at getting things done. It's reasonably compatible with people's expectations of how Windows works. (no Win8-style revamp of Start, etc.)
There are parts of it I dislike greatly, personally (Cortana, the Settings/Control Panel split). It's annoying and unpolished in parts like that, but I can't say it's a _travesty_.
It's ugly (especially the dialog boxes) and obviously targeted at tablets. As the Surface doesn't perform well, I hope one day Microsoft will come back to its roots and release an edition of Windows tailored for the PC.
That's exactly right. It is so ugly! And in the recent builds I've been seeing blurred fonts everywhere. I do like that they are moving fast and most changes are good. For example the pseudo transparency effects alleviate some of the ugliness concerns.
And they still managed to fuck it up in the latest version of Windows Server; logging out of the GUI is perilously similar to shutting down the system. Oh, you get some more warnings, but I can easily see muscle memory doing the wrong thing.
I have always thought of the shutdown crapfest as not being able to shut down for, like... 25 reasons? I believe there is a KB entry somewhere listing a bunch of them.
Almost impossible to troubleshoot (network connections!).
Is it just me, or shouldn't an OS be able to kill its own processes? When I kill -9 in Linux, the application is dead, gone... doesn't hang, doesn't sit there frozen.
Task Manager... don't even get me started. It feels like half the time when I try to end a process, it won't end. It'll hang up Task Manager or something else will freak out before you actually end a non-responsive process.
This is why I feel Windows can't shut down half the time. It doesn't actually seem to be able to kill and shut down its own processes. So when shutdown time comes, one little app hangs and you'll just sit there forever.
`kill -9` on a process blocked on a syscall does kill the process, it just doesn't clean the process up. The result of SIGKILL-ing (i.e., killing with -9) a blocked process is a zombie process. Zombies are dead, right?
You're right: signal delivery does not interrupt an atomic operation, so if a syscall takes an unusually long time (as network filesystems sometimes cause), the process hasn't received its SIGKILL yet.
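A minimal C demonstration of the zombie point above: SIGKILL a child and, until the parent reaps it with waitpid, it shows up as <defunct> in ps:

```
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {          /* child: just wait to be killed */
        pause();
        _exit(0);
    }
    sleep(1);
    kill(pid, SIGKILL);      /* child dies immediately... */
    sleep(5);                /* ...but stays a zombie until reaped;
                                run `ps -o pid,stat,comm` now to see Z */
    waitpid(pid, NULL, 0);   /* reap: the zombie disappears */
    return 0;
}
```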
This 10000X over. I have an SSD that I have to assume is bad, which had abysmal read/write speeds; processes constantly got stuck in state D (uninterruptible sleep) when they touched it. It would bring the system to its knees with load averages of 40+ while my 8c/16t CPU sat there doing nothing.
AFAIUI, Windows took the philosophy that the user should be the god of their system. By running software, the user has indicated to the OS that "This code is blessed and I want it to run".
Where this becomes relevant is that when you shut down, Windows doesn't just kill -9 everything (which it is totally capable of doing), but instead, politely informs all running processes of the imminent shutdown so that they can clean themselves up and finish anything important. If things take too long, Windows shows the screen to the user with the "Shut Down Now" button which, when pressed, indicates to the OS that the god of the realm has given Windows the authority to purge everything and shut down.
The problem is that Windows seems to consider "takes too long" to be just a few seconds in some cases, and is reluctant to begin culling everything of its own accord, so instead it asks for permission and authorization. Windows assumes that any program the user runs must be doing "Important" work; otherwise, why would you run it?
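For reference, the documented Win32 mechanism behind this is the WM_QUERYENDSESSION / WM_ENDSESSION message pair. A bare-bones sketch of the window-procedure side (a fragment, not a complete program):

```
#include <windows.h>

/* Window procedure fragment: Windows asks before shutdown, then tells. */
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg) {
    case WM_QUERYENDSESSION:
        /* Returning TRUE means "fine by me". Returning FALSE (ideally
           after ShutdownBlockReasonCreate, to say why) blocks shutdown
           and is what produces the "this app is preventing..." screen. */
        return TRUE;
    case WM_ENDSESSION:
        if (wp) {
            /* The session really is ending: flush state quickly here. */
        }
        return 0;
    default:
        return DefWindowProc(hwnd, msg, wp, lp);
    }
}
```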
This "User is God" mindset seems to be gone with the 8/10 versions though, which makes me rather sad
> AFAIUI, Windows took the philosophy that the user should be the god of their system. By running software, the user has indicated to the OS that "This code is blessed and I want it to run".
I really don't buy this. Aside from it being an incredibly dumb way for them to think that any running process was intentionally blessed by the user (have they _used_ a computer at any point in the last couple decades?), it's in direct conflict with the idea that they'll do things like restart your computer without your consent in order to install updates[1] (and of course, the restart takes several minutes due to the updates). My roommate and I were in the middle of a LAN Starcraft game in college when this happened. That was a trivial case that just meant our team lost the game, but there are far more serious situations in which taking a user's computer out of commission at an entirely arbitrary time could cause serious damage. I can't imagine what kind of deranged lunatic at Microsoft thought this was an okay thing to do, and I can't fathom what kind of Stockholm Syndrome drives people to continue to use Windows when there are so many aggressively anti-user decisions like that peppered through the OS.
[1] All you have to do is click Postpone often enough when asked to install updates
The forced restart experience indicates that windows is quite capable of killing any process it wants to. Otherwise you could just run an app to block that feature.
If you read Windows Internals you'll get some insight into how shutdown works. There are lots of things that want to do special work on shutdown, and Windows gives them time. Consider, for example, having an unsaved document in Word: Windows won't shut down until you save the document and close Word, or until Windows gets tired of waiting and prompts you to shut down anyway. This is configurable through the registry, btw.
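One commonly cited knob is the WaitToKillAppTimeout value under HKCU\Control Panel\Desktop, a string holding milliseconds. A hedged sketch of setting it via the Win32 registry API (verify the value name applies on your Windows version before relying on it; link with advapi32):

```
#include <windows.h>
#include <stdio.h>
#include <string.h>

/* Set how long Windows waits for apps at shutdown before prompting.
   WaitToKillAppTimeout is a REG_SZ holding milliseconds. */
int main(void)
{
    const char *ms = "5000";
    LSTATUS rc = RegSetKeyValueA(HKEY_CURRENT_USER,
                                 "Control Panel\\Desktop",
                                 "WaitToKillAppTimeout",
                                 REG_SZ, ms, (DWORD)strlen(ms) + 1);
    if (rc == ERROR_SUCCESS)
        printf("set\n");
    else
        printf("failed: %ld\n", (long)rc);
    return 0;
}
```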
I'd say the n times you hit 'postpone' count as windows giving you your fair warning.
> I'd say the n times you hit 'postpone' count as windows giving you your fair warning.
I'm sorry, but this is absolutely and utterly insane. It's beyond idiotic to suggest that an arbitrary point hours or days in the future will magically be appropriate for your computer to shut down just because you passed over the chance to update at several unrelated times in the past. I agree that it was ill-advised for my roommate to skip the updates so consistently, but the idea that this justifies a forced restart at an arbitrary later moment simply doesn't follow.
You apparently only use your computer for Facebook games or something, but some of us (meaning like.....99.9% of computer users) use it for important things at least occasionally, and many use computers for things that are absolutely critical. The idea that Windows is justified in ignoring whatever important thing you might be doing at this moment in time because at several moments in the past you postponed updates is probably the stupidest thing I've ever heard.
I feel like that mindset is still there, but the goalposts have moved. As you point out, in Vista and 7 the "takes too long" was often just a few seconds (the reported goal in Vista was listed in milliseconds). 8/10 continue to tune that based on telemetry and aggregate user decisions on one side, and developer/application opt-in to more aggressive app shutdowns on the other. The "this app is taking too long to shut down" screen still appears in 8/10; it's just slower to appear and more likely to show up only when an application has truly stalled out.
I meant the mindset seems to have changed on a grander scale, i.e. with respect to forcing updates on users and aggressive telemetry collection.
I understand how Microsoft is between a rock and a hard place when it comes to OS updates, as a majority of systems being up to date helps with herd immunity against worm-scale attacks like the past few ransomware attacks. But at the same time, I hate desktop systems coming closer to mobile phones with respect to being locked down. With some updates even changing system settings, it's obvious that Microsoft no longer trusts the user, and that unnerves me.
When it comes to telemetry, I was always for it before, purposely selecting "yes, send telemetry" for any Microsoft product I installed; I was very much a fanboy for most of my life and I enjoyed helping to make the product better. But the way they've handled the privacy aspect of it, along with not allowing you to opt out in any real way other than disconnecting the network, makes my hair stand on end.
I dual boot. I also love new features and updates. On Fedora, I install updates very frequently; on Windows, I go to some extremes to prevent certain types of (telemetry) updates from being installed.
On my phone, every app from F-Droid is updated daily; I've avoided installing updates to preinstalled Samsung crap from GPlay for months.
Instead of forcing updates on me, Microsoft might first try convincing me that they're in my best interest.
With the telemetry, though, it's hard to argue it isn't about putting users first, when so much of its aggressiveness seems driven by a desire to get as large a sample size as possible, in order to do the most good for the most users at a time.
I think with recent updates Microsoft continues to listen to the privacy concerns and to seek a balance between a large sample size and genuine privacy explanations and opt-out powers. The new privacy OOBE ("out-of-the-box experience") for the Fall Creators Update, from what I saw of it in Windows Insider build pushes, looks like a positive step in that direction.
As for updates, that is a rock and a hard place. I've not had trouble scheduling updates for times that are convenient to me, and this is also something I think each Windows 10 update has gotten better at, but I can appreciate how upsetting it can be for those who have had Windows schedule updates for inconvenient times.
> With some updates even changing system settings
Many of these seem to be accidents, and this is also something that Microsoft (from watching Insider builds) seems aware of and is working to get better at. Some of these accidental settings wipes felt like artifacts of the move to the new "agile" update infrastructure of Windows 10. (Increasingly, "roll up" updates are like what a Windows upgrade used to be: creating a new registry and C:\Windows entirely, copying important bits from the old ones before replacing them, and it's easy to forget things to copy/patch from the old install, especially new settings. Some of those should have been caught by more Insiders at the time, because almost all Insider builds are installed that way, but the Insiders are a self-selected subset of Windows users with their own biases.)
> When I kill -9 in Linux, the application is dead, gone... doesn't hang, doesn't sit there frozen
Desktop Linux user here -- while there are lots of advantages to Linux, this is not among them. At least, not without qualification. If something goes wrong with the graphics system and X hangs, SIGKILL bounces right off your window manager. But at least under Linux you can still ssh into the box and reboot it.
That works sometimes, and when it works I can usually get X unstuck without a reboot. But sometimes X gets stuck in a way that makes me unable to switch to another virtual console. Then ssh becomes my only recourse, and the only way to kill X is to reboot. This has only happened to me on systems with nvidia graphics, although it doesn't seem to matter whether I'm using nouveau or proprietary drivers.
> But sometimes X gets stuck in a way that makes me unable to switch to another virtual console
This is the situation where I recommend the Magic-SysRq[0] keys. Alt+SysRq+r (on a system where /proc/sys/kernel/sysrq is configured) transfers keyboard control back to the kernel. There are other Magic-SysRq keys. My personal favourite is the one that invokes the OOMKiller; and there's one for remounting all filesystems read-only, and one for unconditionally rebooting the system.
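Enabling the feature is just a write to procfs; a tiny sketch, equivalent to `echo 1 > /proc/sys/kernel/sysrq` (must run as root):

```
#include <stdio.h>

/* Enable all Magic-SysRq functions; must run as root.
   Equivalent to: echo 1 > /proc/sys/kernel/sysrq */
int main(void)
{
    FILE *f = fopen("/proc/sys/kernel/sysrq", "w");
    if (!f) { perror("sysrq"); return 1; }
    fputs("1\n", f);
    fclose(f);
    return 0;
}
```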
Oh god, I remember setting a whole song as my logoff sound somewhere in the Win9x days, and it would totally play the whole thing. I had to hear the whole damn song just to reboot, by which time I had forgotten to change it. Annoyed my family for weeks.
Entertaining blogpost. And reading the 2006 comment thread below is also pretty entertaining. All of the comments seem to be about ten years old. Most of them tend to agree that Microsoft seriously botched this feature. But there is also a very vocal minority who argue that Joel is a moron who should shut up and the thing that makes Windows so great is its flexibility, and all those 15 menu options make sense, and people should just shut up about their Macs and deal with the fact that Mac has just a teeny weeny market share.
Well, fast forward to 2017. MacBooks are all over the place. And when trying to shut down Windows 10, I'm faced with just three options: Sleep, Shut down, and Restart.
Vista really was the nadir of Microsoft in terms of UX, no matter how furiously some people contested this back in the day. And Joel was spot on.
> "Those new hybrid hard drives can make this super fast."
- Vista RTM'd - 8th November 2006
- Date on that blog post - 21st November 2006
... and the Wikipedia page he links from the blog post about hybrid drives says "In 2007, Seagate and Samsung introduced the first hybrid drives".
I don't think making UI decisions based on hardware unreleased at the time is really the right way to do it.
(The same Wikipedia page also lists those early hybrid drives as "featuring 128 MB or 256 MB NAND flash memory options", and given that Vista required 512 MB and recommended 1 GB, I don't think they would have helped that much, as most of the RAM would still need to be written to the spinning-platter part of the disk anyway.)
Blogspot's UI is awful. They probably tried to follow trends and make an app out of a website. While loading the page it first shows you a preloader with gears, then loads the top post, and only later loads the post you were going to read. It probably gives a good score in automated load-speed tests, but in fact you can start reading the text much later than if it were a plain HTML page.
And when I clicked the search box, the post I was reading scrolled itself away (no kidding!). I tried to remove focus from the search box, but the post didn't come back.
And that is Google's product. It might have some advanced code inside but the UI is awful.
I think it's pretty terrible that Blogspot requires JavaScript in order to display text. There's already a really great way to display text on the web: HTML. It just makes no sense to me to change that.
It's much like how the original wiki is so badly broken now.