If Maciej hadn't written this, I would still feel alone in how I see the technological world. I really can't express how grateful I am that this exists.
There is a vast, vast gulf between what the majority of software developers seem to think users want, and what users actually want. And this isn't a Henry Ford "they wanted faster horses" sort of thing, this is a, "users don't just hate change, they resent it" sort of thing.
I work directly with end users. It's mostly over email now; my technician still works with them in person, face-to-face, in their business or home. We get so many complaints. So many questions: "do I really have to upgrade this?" "I liked this the way it was." "It worked just fine, why are they changing it again?"
Every time I try to argue on behalf of my customers, here or elsewhere, it gets ignored, or downvoted, or rebutted with, "but my users say they always want the latest and greatest..."
There are 100 million people in the United States alone over the age of 50. How much new software is designed for them? How much new software exists merely as a tool, comfortable with being put away days or weeks at a time, and doesn't try to suck you in to having to sign in to it on a regular basis to see what other people are doing with it? How much of our technology -- not just software, but hardware here too -- is designed to work with trembling hands, poor eyesight, or users who are easily confused?
There are over a hundred million people that don't understand why your site needs a "cookie" to render, that can't tell the difference between an actual operating system warning and an ad posing as one, that aren't sure what to do when the IRS sends them an email about last year's tax return with a .doc attached. (That one happened today.)
For these people, the technology most of us build really really sucks.
And that is a growing demographic, not a shrinking one...
I program for a living and I know why sites need "cookies", and still, for me most software really really sucks. Just because I can figure out how each and every new cell phone works doesn't mean I want to, similarly for other kinds of gratuitous changes and incompatibilities. And BTW systems built "by hackers for hackers" are among the worst offenders (*nix clones in general and Linux distributions in particular are good examples.)
I just wanted to mention that I really liked this article/slides, and agree that most software sucks...
... but I think the systems built "by hackers for hackers", nix clones in general and linux distributions in particular, are generally the best stuff out there.
It's all the attempts to be "intuitive" and "easy" and "just work" that really fails. It's the stuff "by designers for users" that sucks.
The old-ish core unix/linux stuff has been slowly polished over the years and works pretty damn well, and you depend on its quiet stable operation for all your fancy user-friendly (and "dev-friendly") layers that need to be completely replaced every couple of years, for your favorite phone and your favorite websites.
EDIT: you complain about trying to compile kdevelop, on ubuntu... well there's your mistake, both of those things are trying hard to be user-friendly. Try plain debian, plain vim, etc.
There's a quote by someone (Steve Jobs?) to the effect that there are three phases of understanding a problem: first, you don't understand it; second, you research it and come up with all kinds of complicated solutions and, third, you really understand the problem and you can find a solution which is elegant and actually works.
This is oversimplified, but I think most of the things today that try to be user-friendly and intuitive are stuck in the second phase--it takes a lot of effort and care to get to the third one. The iPhone interface (touch, swiping, ...) might actually be an example of a successful attempt at the "just works"-category--we've all heard of babies learning to navigate iPads etc. And today it seems so obvious, you don't even think about all the nuances that have to be right for this to work, but that's probably a hallmark of the designs that really are intuitive--they seem so obvious you don't immediately realize someone had to come up with them, and iterate on them until they just worked.
There's definitely a learning curve involved. UX improvements tend to follow an arc from "configure everything, expose everything" towards "a bit too crude for power users, but 99% of use-cases are automated, defaulted, and profiled into nothing".
Hacker personalities tend to fall into conflict with that ambition (even if they enjoy the results), since the "hackability factor" depends on having something to configure, and you can't configure a nothing that just works - especially since in the space between "configure everything" and "just works" there's a form of uncanny-valley effect where the experience gets a lot worse and there's no way to configure yourself out of it. That stops UX from enjoying simple incremental improvements.
The very UNIX-y solution is to have a nice, simple, friendly UI on top of the infinitely-configurable command-line engine. Or better yet, two different friendly UIs and you choose which one you like better.
And that's what got us into the mess we're in, because by now there are 754 friendly UIs, and choosing which one you like better is draining your soul.
You. Don't. Want. Infinitely. Configurable.
What you do want is lots of simple, friendly tools with limited configurability that you can connect to solve larger issues.
To compare it to language, you don't want to have two great novels that are paths through "choose your own adventure". You want words that work together, that are composable according to well-understood, simple rules.
Whenever a system like that comes along (unix pipes, REST APIs), people can build amazing things on top of that. The infinitely configurable hypercomplex thing idea gives you XSL:FO, XSLT, the W3C, C++, and assorted fun.
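For a trivial illustration of that composability (just a generic shell sketch of the pipe idea, with a hypothetical access.log):

    # Most common client IPs in a web server log: each tool does one
    # small job, and the pipe composes them into something bigger.
    cut -d' ' -f1 access.log | sort | uniq -c | sort -rn | head -10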
But there's also the quote that successful complex systems only evolve from simple systems.
Steve Jobs' solution was to make a simple 80% solution. And of course that works in the consumer market for a great many things, particularly innovative new products. However, some things are irreducibly complex and still require a solution. In these cases visionaries fail us, and what we need is practical iteration to get us to a working solution. The end result is always ugly, but it works. Until we have bigger brains I don't see how we can avoid this fundamental limitation in our ability to build systems.
Sure, but I still think you can make things intuitive--just not to some random person on the street but to someone with the requisite domain knowledge.
At least, that's what I like to think.
Relatively speaking, humans have been thinking about interaction with computers for a very short time, so maybe there's hope in the future :).
Unix is great, but it seems like it has killed off a lot of hackers' imaginations. Hell, we haven't even iterated on Unix all that much since System III, not even something like Sprite, Spring or Amoeba.
"Unix has retarded OS research by 10 years and linux has retarded it by 20." -- Boyd Roberts
There was Plan9. But the people who worked on that, who are bitter that nobody uses it, or anything like it, or even anything "post-unix", don't realize what the killer feature of linux was, which ended "OS research":
It was the large body of useful open source and copyleft code. This made it so everyone could stop pouring huge resources into the base OS layer individually, use something that works really quite well (with whatever necessary tweaks since it is open source), and focus on what goes on top. And there's no one extracting rent on this layer. There's no licenses and activations. Nothing of the sort. (You can contract for RHEL, but you can just as easily not.)
BSDs came a couple of years after linux, and plan9 was open sourced many years after (and initially under the problematic "lucent public license"). It was way too late. And the innovations to the most basic interfaces of the OS were just not nearly as valuable as the body of open source which was already available for Linux / BSD.
Plan9 is like the Concorde - pinnacle of tech/design, nobody uses it. Never had seat-back entertainment. Never had wifi. Maybe a silly metaphor, but it fits the OP.
There are still low-level changes these days in Linux and BSDs. Nothing that drastically breaks compatibility of course. But more security boundaries and privilege management (openbsd w/x and aslr stuff, linux seccomp-bpf, freebsd capsicum, containerization), and speed/efficiency measures like sendfile(), epoll() / kqueue(), RCU fs cache lookups, etc.
There's far, far more than Plan 9. That's only the beginning. Many others iterated heavily on Unix architectures to make them more novel, or started from scratch. John Ousterhout and U.C. Berkeley developed Sprite, which introduced things like checkpointing, live migration, SSI, log-structured file systems and other things. Andy Tanenbaum did Amoeba, which is sadly only remembered for being the platform that birthed Python. Spring was Sun Microsystems' research system, the name service of which brilliantly resolved the problem of "naming things" (I highly recommend you read this paper: https://www.usenix.org/legacy/publications/library/proceedin...). Unfortunately, Sun barely implemented any of its ideas in Solaris. Just the least interesting ones, like the doors IPC mechanism.
> Never had wifi.
Well, it was developed during a different time. It did get wi-fi thanks to 9front, though.
> linux seccomp-bpf
A sandboxing/syscall filtering mechanism. Nothing new there, and its interface is rather leaky, much like Berkeley sockets are.
> freebsd capsicum
This is one of the few legitimately interesting projects going on. Bringing capability-based security on top of fds. I don't know if it'll catch on beyond FreeBSD, though. Google seems to be an early adopter.
> containerization
Nothing that IBM didn't do much better. Docker reeks of opportunism.
" We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy. "
I think that is the point of the article above. Is it good enough? What killer feature of an operating system is missing that would take it to another level? And for said killer feature, how will that help normal humans?
Oh, too many things to list. Orthogonal persistence, seamless process checkpointing, process migration, dynamic binding (like dynamic linking but with unused symbols being deduplicated and freed), a generic name service that lets users define their own mechanisms for resolving names to resources throughout the system (like how directories, environment variables, device nodes are all discrete objects that name references -- imagine redirecting between any arbitrary object type or making your own abstract one) [1], single-system imaging, dynamic code upgrading for every part of the system, process and executable semantics implemented as user-space servers, capability-based security, autorestarting drivers, etc. etc.
The thing is that despite them bringing enormous economic benefits when realized in the mainstream, the short-term incentives are aligned to hacking around what's broken instead.
I'm not sure that many of those are a net win for the end-user (where end-user = "grandma"), though. Orthogonal persistence and seamless process checkpointing destroy the "reboot to fix corrupted software" solution, and now that devices can stay in sleep mode for 6 months on a charge, there's very little reason for them. End users don't want to think about names, let alone generic naming services that let them perform more configuration; they want to think in terms of tasks and get them done with a minimum of extraneous details. Dynamic code upgrading loses much of its benefit when users want to be prompted every time an upgrade is ready.
Many of these would be big wins for developers. But developers actually have an incentive to make their own job harder, as long as they charge for their skills. Tougher development serves as a barrier to entry for the profession, which increases the wages they can charge. Make an OS so developer-friendly that the end-user can develop and the profession of software engineer would go away...which may not actually be a bad idea in theory, but neither software engineers nor end-users seem to be that keen on it.
> Orthogonal persistence and seamless process checkpointing destroy the "reboot to fix corrupted software" solution
Well, not if you can also just restart to known-good states. Having a reincarnation server of some sort is a given.
Not to mention "turn it off and on again" often doesn't work for Unix to begin with.
> Dynamic code upgrading loses much of its benefit when users want to be prompted every time an upgrade is ready.
How does DSU hinder post-completion prompting?
> But developers actually have an incentive to make their own job harder, as long as they charge for their skills. Tougher development serves as a barrier to entry for the profession, which increases the wages they can charge. Make an OS so developer-friendly that the end-user can develop and the profession of software engineer would go away...which may not actually be a bad idea in theory, but neither software engineers nor end-users seem to be that keen on it.
This is actually an interesting hypothesis. That a lot of inefficiency in software is really a form of job security.
It might make sense, but then again it seems like programmer wages dropping is inevitable, since software does not have scarcity unless you maintain it artificially. I'd wager FOSS has had an impact, though it obviously hasn't finished the job.
It's certainly possible this is why programmable UIs like Oberon and Cedar never caught on, though.
It doesn't hinder it, but post-completion prompting destroys a lot of the benefit.
Think about the user's perspective: they have already context-switched out of what they were doing to read the upgrade notice. Most apps these days either have auto-save or they're passive information-consumption apps, and so they're not going to lose work if the system shuts them down and restarts them. The OS can prompt them and restart the affected programs automatically after swapping out the binaries in the background, like how MacOS/Ubuntu/Google software updaters work.
So making it fully dynamic saves the user about 30 seconds every month, at the cost of a lot of complexity. There are much easier ways to save 30 seconds per month in user time.
> This is actually an interesting hypothesis. That a lot of inefficiency in software is really a form of job security.
It doesn't even have to be deliberate. The only requirement necessary for this dynamic to emerge is to believe in the division of labor and have a mechanism for payment.
A certain segment of the developer population has a burning curiosity about how things work, all the way down. This segment is disproportionately represented among OS developers, because why else do you get into OS development if not to know how things work all the way down? The general public lacks this desire; most of them are quite happy to fork over money (or their personal data) to get the computer to do something useful to them. And a good portion of the developer population lacks it as well; many of them are quite happy to make what people want in exchange for money. The general public doesn't care about orthogonal persistence as long as they don't lose work when an app crashes. A commercial developer would almost rather it didn't exist, because then he can build auto-save functionality into his product and use it to differentiate from his competitors.
OSes that understand this dynamic and embrace it - like Windows, OS X, or Android - tend to do much better than OSes that don't, like Genera, Smalltalk, or Oberon.
> Many of these would be big wins for developers.
I sometimes wonder if devops is the developers' way to get admins out of their way so they can work on the next shiny thing without the admin going "sorry, not until you can prove it is as stable, etc., as the one we already have".
Erm... I sort of figured it out years and years ago, and I do work in plain vim.
(Hence to me there's no difference whatsoever between one Linux distro and another, because in any case I can only get plain vim and plain grep to work and not much else. So I won't reply wrt Debian vs Ubuntu, except that the other standard reply I get, apart from "running old software" being wrong, is that I always seem to be using the wrong distribution. I think the one thing that unites all the distros is that each of them is always the wrong one...)
The difference is that on Windows I can run VS just as easily as I can run vim.
One time I wanted to edit and compile a large open source application on Windows... you can't just run a build, you have to open the project file in VS... VS has to index the source code for IntelliSense, which slows down the whole interface and uses a full gig of memory, and I haven't even started a build yet...
Really, I'm just quite biased against Windows in particular, because I had only Windows 98 and Windows XP when I first came of the age to have and tinker with my own computer, and the amount of stuff that you can't control and can't fix (installing common software, networking, file sharing, drivers) is infuriating. If something only runs on Windows, that's its biggest flaw.
Well, I've had the opposite experience. I wanted to build a basic open source package for unix on my Mac. I got the source and followed the instructions to "make whatever". It failed with highly cryptic errors that I later learned through lots of searching meant there were dependencies on other open source projects I didn't have, and all of which had the same sort of dependency problems of their own, and some of which didn't have the proper config files for building on a Mac. I gave up.
But honestly all of that misses the point. The article is talking about typical users, not developers. Building stuff is far beyond people who don't know what a browser cookie is.
Exactly. Ubuntu is the worst offender among *nix-based "easy" systems. By trying to make an uncomplicated system, they have instead done exactly what all these other cr*py systems have done: made things more complicated.
I have some bones to pick with this talk, especially when it comes to the 'insane' singularity parts. But damn if I'm not on team user. The truth of the matter is that our software is gratuitous in features and lazy in performance, but the most important issue that gets overlooked is that it requires a great expenditure of effort to configure, understand, and use.
I spent hours today trying to set up a Prosody XMPP server on a local LAN and found I couldn't do it with server software that promises to have you 'up and running in minutes'. It made me think that perhaps, in the same way you have test cases for features, you might have test situations to determine what features you should have in the first place. Does your software work if I'm in environment X? How about environment/situation Y?
At this point I feel so disgusted with the entire experience that I would prefer to just write my own; the cost of creating a new code has become lower than fully understanding an old one:
"It is easier to write a new code than to understand an old one." - John Von Neumann to Marston Morse, 1952 as quoted in Turing's Cathedral
EDIT (Mon Jul 20 23:30:32 PDT 2015): Lots of people citing Nick Bostrom on the AI stuff, for a different perspective I'd like to recommend Scott Alexander's (Looong) Meditations on Moloch: http://slatestarcodex.com/2014/07/30/meditations-on-moloch/
RE Scott Alexander's perspective, which I find to be an excellent piece of writing (hell, I have that Moloch quote with Disneyland printed on a t-shirt) - the point of that article is that if we keep doing what we're doing, going user-first and business-first, short-term thinking market economy, we'll be doing exactly that, lifting Moloch to heaven.
> And BTW systems built "by hackers for hackers" are among the worst offenders (*nix clones in general and Linux distributions in particular are good examples.)
Interestingly, I've found it to be the opposite. Most of the systems built "by hackers for hackers" both have a surprisingly long (yet not obsolete) life and are vastly more compatible across different environments. Emacs and vim have been around for a really long time, and there is no reason to believe they will disappear at this point. Similarly, I can comfortably switch from Linux to OS X to FreeBSD thanks to bash and the gazillion POSIX tools (grep, ps, awk...). The same definitely cannot be said for Windows 7 to 8 so far.
"Been around for a long time" - yes, but that doesn't mean hair-pulling will not result from upgrading emacs and having files in .emacs stop working. (I have a very simple vim configuration so I never had a problem there, I don't know if it's because they break less or because I depend on less.)
As to easily switching... erm... just the other day I fixed a script bundled with a hardware system costing around $1M/year to lease to use basename instead of /bin/basename because I wanted to run it, not on RHEL but on Ubuntu and the latter put basename in /usr/bin. Hell, even #!/bin/env bash will break because on another system it's #!/usr/bin/env!! Other scripts did not run because ksh was not installed. At other times scripts break because /bin/sh is (wrongly) assumed to be bash but Ubuntu uses dash, whatever that is.
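A minimal sketch of the defensive boilerplate this kind of drift forces on you (assuming only POSIX sh and tools findable via PATH):

    #!/bin/sh
    # Don't hard-code tool paths; let PATH resolve them.
    BASENAME=$(command -v basename) || { echo "basename not found" >&2; exit 1; }
    "$BASENAME" /some/long/path

    # Don't assume /bin/sh is bash; stick to POSIX constructs, or
    # locate bash explicitly if you really need bashisms.
    BASH_BIN=$(command -v bash) || { echo "bash not found" >&2; exit 1; }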
And this goes from trivial stuff like the above all the way down to gdb being broken by a kernel upgrade to a minor version, or kernel memory leaks caused by minor upgrades, or NFS-related kernel panics.
As to "yet not obsolete" - only if you use the box as a server needing nothing but a CPU, some RAM, an Ethernet connection and some media to boot and serve files from. Try running or compiling an old KDevelop on a new Ubuntu or vice versa and tell me if it's "surprisingly long life yet not obsolete". Also I can assure you that most of my complaints about breakage are met, if the person is a Linux aficionado, with bewilderment at my desire to run such ancient software where "ancient" means "6-12 months old". Certainly the only part of Linux which gives a shit about compatibility is the kernel (you can still run statically linked binaries from 1993), however it doesn't help much because every userspace program outside the very useful but rather limited set of the original Unix tools (grep etc.) happily breaks compatibility at every turn.
I have to say that Windows has far fewer variants and far fewer incompatibilities between variants than *nix, so you switch both more smoothly and less often, in my experience. This is not to say that it's a lovely experience, just that it's not nearly as terrible as Linux.
I think you're both finding different things to appreciate and dislike when comparing Unix-like systems to Windows.
Unix-like systems really are relatively stable, in terms of, once you get it working, it will generally continue to work. And, for the majority of superficial commands you might want to run -- ps, top, uptime, w, tcpdump, vi, pico, nano, mail, etc. -- there haven't been many big changes for decades. Somebody that was pretty handy with BSD or Linux in 1995 could still find their way around the commandline on most systems today. They'd have to figure out newer init systems and a few other things, and the popular windowing environments have changed a lot, but basic CLI stuff is pretty stable.
But you're right, trying to get these systems to work in the first place, or trying to keep them up to date without incurring significant cost or downtime, can really be a pain. I've experienced the same. I think the worst offender to date was Samba; at one time, Samba 3 could do file shares with Winbios authentication but not talk ActiveDirectory, and Samba 4 could talk ActiveDirectory but not do file shares (or some such thing, it's been a few years), and they were both being released at the same time, so if you wanted a Samba server in an SBS 2008 environment you had to run both -- on two different systems. And configuring either one of them successfully was an absolute nightmare. I still have a text file under my Documents/sysadmin/ directory that has some shorthand notes on troubleshooting Samba installations and that thing is a gold mine. Every couple of years, after I've forgotten everything about Samba, I get a call to fix one that's not working. I'd never be able to do it without those notes.
That's just one example; I've had lots of issues with Network Manager ("Network Mangler"), print drivers that require 32-bit OS support which, once installed, completely bones other software updates and installations; OS updates that irreversibly break a working Apache installation (that one required me to pull a 24-hour shift building a new server), and yes, trouble with systemd/journald. All kinds of fun stuff.
It's a mixed bag. Windows-based systems usually just work, as long as you throw expensive enough hardware at them. They either do a job or they don't. But they aren't very fixable. There's no advanced logging system in the Windows environment, so whether you have any logs to work with in the event of trouble is entirely up to the individual software vendor, error messages are opaque, and I can't just go grep some source code to try to figure out what the hey is going on. Linux and BSD systems are a lot more fixable, I generally prefer working on them, but they have some of the worst quirky little bugs caused by software affecting other software, and documentation still leaves a lot to be desired.
I wonder if what makes unix-likes so resilient is that at their core they are not a tool, but a toolbox.
If one looks into a physical toolbox, many of the tools have not changed for generations. Heck, the basic hammer is likely as old as civilization itself.
Thing is that these tools can be combined as the user sees fit at time of usage, even if it would freak out the original designer.
Upon writing this I find myself reminded of a Star Trek TNG episode where they have one of the designers of the Enterprise on board, and La Forge is giving her a tour.
She keeps pointing out ways they are "doing it wrong" until they have a situation that falls completely outside the design's anticipations, and they have to create a solution on the fly.
After that La Forge gives her an explanation about how, in the field, things rarely work as they do on the drawing board, in particular when millions of light years from the nearest space dock.
As such I think unix-likes are resilient because at their core they were made for admins by admins. They were made to work in the days when you didn't have the net to reach out to for solutions, nor another computer to work on to create the tools to fix what was broken.
One of the horrors of modernity is that we keep forgetting or ignoring how our present lives got bootstrapped in the first place. The kind of "A boots B, which boots C, which replaces A" catch-22.
Linux distributions are not the worst offenders when it comes to over-complicated software. Instead, that would be Windows. Linux is there for the people who need or want to understand and manipulate their system, to a greater degree than any other system around.
No one is recommending Linux software to people who don't need or want this level of manipulation.
What I feel is the greatest abuser of these types of over-complication is Windows. I used to volunteer at a library helping elderly people (plus anyone else that needed help). No one came in with a Linux computer.
What the majority of these people came in with were Windows laptops. Laptops that were just too complicated for them to get a handle on.
Thank you for a very insightful, thoughtful, and considerate reply. I'm newly in the 50+ demographic, but I've also been a software engineer for my entire career (starting with a Commodore 64 in the 1980s) -- from that perspective I especially appreciate your asking "How much new software exists merely as a tool, comfortable with being put away days or weeks at a time, and doesn't try to suck you in to having to sign in to it on a regular basis to see what other people are doing with it?". Thank you!
Hey, I started in the 80s on a Commodore 64 (and Vic 20!) too! And later graduated to COBOL74 on a Unisys mainframe... I just got a really young start.
If you find yourself feeling like writing about your opinions on software design sometime, I hope it comes across my news feed. I'm starting to bookmark articles like this one so that I can compile them together in the future to hopefully combat the treadmill thinking in software development.
You're not alone, but you're part of a minority. I don't know if others don't care or are really that oblivious of how the world is different outside of the computer screen. What I can see is that most software people are happy with the status quo; they get paid to do what they think is cool, instead of doing what the user truly needs. "Refactor to new framework XYZ? Great, no problem. User-research? No need, we obviously know what they want".
Anyway, a few days ago there was a post here on HN about an anthropologist at Adobe, studying how designers use Photoshop. It's an exception (although in academia there are quite a few notable anthropologists studying human usage of IT), but at least it's a glimpse of hope...
Really? My impressions are exactly opposite. It's always "business first", "yes this tech is cool and all, but not what the users need", getting things out of the door sooner than possible because sales need to have something to sell, etc. Even if companies fail at "doing what the user truly needs", they also fail at letting developers "do what they think is cool".
Totally agree with your point, and I think it is also the reason why today we don't have planes as fast as the engineers back in the day dreamt of.
Maybe we don't understand the need for it right now, but if we had kept on improving and "wasting money" on increasing speed and finding new materials, we might already have solved space travel.
As the author points out, spending money on bombs was more important, and I don't understand how this is a good thing.
"There is a vast, vast gulf between what the majority of software developers seem to think users want, and what users actually want."
s/want/need/2
Another truism: Users are generally resilient and will adapt to whatever they are forced to use.
There are some very strong ideas shared by developers about what users allegedly "want". But most of the time I think developers are just invoking the mythical "user" in defending their own choices in software design, not users'. Users have little if any choice.
Is it possible that those who purport to "know" what users want are simply observing how users have adapted to using what they were given? (the users having had no real choice of real alternatives)
As a user, in the cases where I cannot write the software myself, I try to find programmers who share my sensibilities in software design.
This is the best hope I have for finding software that I would "want".
Despite what any programmer proclaims about her users, what I the user end up choosing is not the software I want (I do not ask other programmers to write programs or add features, etc.).
I end up with the software the developer would herself want and is generous enough to share with others.
Fortunately we have some interest in the same things: no gui, portable, small, fast, simple, etc.
> But most of the time I think developers are just invoking the mythical "user" in defending their own choices in software design, not users'. Users have little if any choice.
Agreed, but the problem is much deeper than this: different users want different things. You'll only hear from the users who don't like what they have.
One common counterargument people make, when you say users don't want to learn something new, is that users are simply lazy (let's ignore the physically challenged for now). But I think that argument is fallacious.
What users actually mean is: for the delta of value this new feature provides, it is not worth their time to learn to do this the new way. Everybody has to make judicious choices about where they spend their time. And in that vein I don't think users are being lazy when they say it is not really efficient to spend a significant amount of time learning something new that they perceive to be of little added value.
EDIT: BTW I do think that people are happy to relearn something if they perceive it to provide a "significant" added value. Hypothetically speaking, if I had to relearn driving and this "new way of driving" took me from NY to DC in an hour for the same cost, I would learn it in a heartbeat. But I don't want to learn a new way of driving if it reduces my drive time by only 15 minutes.
I also think software/CS needs a school of thought that discourages significant change/variation in UX design. But unfortunately most people are incentivized for change.
These days I build things that people have been anxiously waiting for. Like, every day it doesn't exist is kind of a problem, that sort of thing.
But I have been on the other side, too, building things users mostly hated. Even when the thing we made seemed clearly better than what it replaced, our users hated it. Or at least some portion would - probably good to keep in mind that people who are happy tend not to speak up as much.
It's quite a predicament for a company to be in, and I always felt bad for the designers. For me it was OK, it ended up allowing me to try lots of different ways of building things.
The form of the blog post reminded me very much of John Berger's Ways of Seeing, a life-changing political book on art history that changes the way you look at the world.
I'd be surprised if the author hadn't read it, or one of its descendants.
Ehh... yes, but to change is not necessarily to fix, and that is the sort of change I was talking about.
And there are a lot of changes that can be made without disrupting users. I haven't heard anybody yet say they don't want minor security updates to be installed automatically, so long as they don't go around breaking something else or reorganizing the UI or doing some other dumb thing.
And this echoes the Torvalds stance of not breaking userspace.
That has had the side effect of major changes taking years to get hashed out to make sure the APIs are rock solid. Because once they are in the main tree, they will not be broken.
Heck, such attitudes may well be why Windows conquered the corporate desktop while OSX gets nowhere near it (outside of special cases).
When MS makes a release, they state how long they will support it for.
With OSX there is no such commitment.
Or one can see the difference in Linux distros in how RH is committed to maintaining a certain major release for years, while on the other hand you have rolling distros that come with a caveat-emptor-style warning.
And what we see is that developers flock to the rolling ones, while corporations etc embrace the LTS ones.
I often avoid software upgrades not because I know something will break but because I speculate something will break and don't want to find out.
Out of all the "features" that anger me the most are automatic, silent upgrades. One can generally disable them but I dont want to have to figure out how; if I want an upgrade I will download and install it manually and do so when I am good and ready.
That web applications - as opposed to those I install locally - are increasingly common, I regard as the problem, not the solution. Quite commonly a website breaks for no apparent reason; eventually I clue in that they revised their JavaScript but did not test it on the browser I actually use.
When considering whether to install software, first look for reviews. Many eCommerce sites make it easy to sort by most-critical first. Do take them with a grain of salt as bad reviews are sometimes posted by unethical competitors as well as cyberstalkers like Kuro5hin's modus.
If those bad reviews would affect you and are in a recent release then maybe you want to give it a pass.
I know all about security patches, I once got to play on that same Sun workstation that Kevin Mitnick ransacked but at least I had the sense to ask Tsutomu's permission first.
> I often avoid software upgrades not because I know something will break but because I speculate something will break and don't want to find out.
That is a great way to describe that. Same here. On my daily dev system (Debian), frequent updates were sporadically breaking different bits of system -- sound would stop working, then would work again but something else would glitch -- and eventually I got busy enough that I just couldn't sink the time into troubleshooting things if the next round of updates happened to break anything I really relied on.
Time passed.
I now have something like 2,000+ updates to apply via apt and I am terrified. I have this sinking feeling that if I try to update now, my laptop will grow legs in the middle of the update and try to run out into the street to put itself out of its misery.
I think -- I hope! -- we're very close to completely reversible software updates in Linux environments. Between containers and VM snapshots and ZFS and efforts like Nexenta (http://www.osnews.com/story/19180/Transactional_Debian_Upgra... -- amusingly, the link to the page on nexenta.com is no longer available and I can't find a newer version of it on their site), I'm hoping that updates will become a little less of a time sink in the future.
Take a look at NixOS. I've been using it for a while and it's beautiful. Upgrades are very close to being fully atomic, and you can rollback arbitrarily through the boot menu in case a system change—an upgrade or configuration change—ruins something. Moreover, package dependencies are isolated from each other, you can install packages as non-root, etc etc.
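Roughly what that workflow looks like in practice (a sketch using the standard NixOS commands):

    # Apply configuration.nix as a new system generation
    sudo nixos-rebuild switch

    # If the new generation misbehaves, roll back to the previous one
    sudo nixos-rebuild switch --rollback

    # List the generations that also show up in the boot menu
    sudo nix-env --list-generations --profile /nix/var/nix/profiles/system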
Their approach to system configuration and package management looks great. I'm a little bit concerned about what a complete configuration.nix file would look like for a daily desktop development system though, and it looks like they've still got a lot of packages to fix (if I'm reading https://nixos.org/wiki/Zero_Hydra_Failures correctly), but still, looks like a big improvement over other approaches.
I love the configuration format, and the .nix files for my laptop setup are very manageable. Since I have the same hardware as my brother, we've set up a repository with the main configuration plus custom files for our own options. So for example, I enabled Avahi ZeroConf DNS (one .nix line) and my brother only needed to pull the repo and run NixOS's "rebuild & switch" command.
The really great thing about the configuration language is that the same dependency tracking used for packages is used for all values in the system... So for example, if you have servers set up as Nix packages whose configs refer to the Nix variable for the system hostname, then if you change the hostname, those packages will be reconfigured automatically.
Yeah, there's quite a bit of work needed on packages, but I actually haven't had any trouble. I believe the "unstable" channel is defined as the latest set of packages that all build successfully together on Hydra, and the "stable" channels are Ubuntu-like LTS releases.
Anyhow I encourage people to give it a go! They have Amazon AMI images you can use to run NixOS on EC2 easily, the community is nice, and you can find lots of example configs on GitHub. Something like NixOS is bound to win as far as I'm concerned. Next level stuff!
LVM has been available (and readily selectable as an option in the installers of various distros, including Ubuntu) and has supported filesystem snapshots on Linux for a long time. It should be easy to make a tool that takes a snapshot, runs apt-get upgrade and then allows easy rollback, but as far as I know nobody has tied the tools together.
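The bones of such a tool are small; something like this untested sketch (volume group, LV name and snapshot size are made up, and the VG needs free space for the snapshot):

    #!/bin/sh
    # Snapshot the root LV, attempt the upgrade, and keep the snapshot
    # around so it can be merged back (i.e. rolled back) if things break.
    VG=vg0          # hypothetical volume group
    LV=root         # hypothetical logical volume holding /
    SNAP=pre-upgrade

    lvcreate --snapshot --size 5G --name "$SNAP" "/dev/$VG/$LV"

    if apt-get update && apt-get -y upgrade; then
        echo "Upgrade OK; remove the snapshot later with: lvremove /dev/$VG/$SNAP"
    else
        echo "Upgrade failed; roll back with: lvconvert --merge /dev/$VG/$SNAP (then reboot)"
    fi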
My approach has been to keep my laptop on Sid/Unstable and upgrading at least once per week; nothing major has ever broken on me for the last few years, but I do run a barebones environment.
While actual code serves as a good example, the problem is far more general. Consider the Apollo 1 fire: the astronauts were unable to convince NASA to redesign their capsule's inward-opening door, so they had themselves photographed praying over a model of the capsule.
Everyone knows astronauts are brave; those men died for a door hinge.
NB: an SMTP (or other protocol) server may well not respond to pings but otherwise be functioning. Protocol-specific tools may be the better tool here: nmap port queries, swaks, etc.
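For example, to check an SMTP server properly rather than just pinging the box (hostnames and addresses are placeholders):

    # Is anything listening on the SMTP port?
    nmap -p 25 mail.example.com

    # Does it actually speak SMTP? swaks walks through a test delivery.
    swaks --to postmaster@example.com --server mail.example.com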
As for your mom, you could firewall ICMP echo replies (type 0) on her end before proceeding next time, to avoid torturing her :-)
The first half of this talk is excellent. The story of air travel--how the engineers of the 60s thought that we'd have supersonic jets and flying cars within a few years--that's a great cautionary tale. Evolutionary biologists call it Punctuated Equilibrium: the idea that progress comes in spurts. The warning--that the evolution of computer hardware and software might be radically slower over the next 50 years than it was over the past 50--seems timely and reasonable.
The second half is not excellent. Bad mental habits and weak arguments are on display.
* Imprecision. Specifically: choosing sentences for their "punch", like a bumper sticker, without caring whether they make sense:
"There is something quite colonial, too, about collecting data from users [...] I think of it as the White Nerd's Burden."
"It's a kind of software Manifest Destiny."
...cool story, bro
* Excessive negativity. If you haven't already, I highly recommend reading Everything is Problematic -- it really helped me understand where people like Maciej are coming from. http://www.mcgilldaily.com/2014/11/everything-problematic/
* Disrespect. "Google works on some loopy stuff in between plastering the Internet with ads." I guess 40 years ago he would have said "AT&T works on some loopy stuff in between sending you phone bills".
Organizations like Bell Labs--or Google X--are treasures, and as engineers we should be glad they exist.
I was just about to write a similar sentiment. The first part is really great, also the parallels and the lessons to learn.
But the second half was not so great. While I agree with the overall sentiment, the concrete examples always slightly miss the point.
-----
The first such passage that caught my eye was the example about Windows XP:
> Rather than offer users persuasive reasons to upgrade software, vendors insist we look on upgrading as our moral duty. The idea that something might work fine the way it is has no place in tech culture.
The problem is that Windows XP is not working just fine, especially when connected to the Internet. It is full of malware and helps establish botnets, DDoS attacks and thus feeds mafia structures. So we, as a society, do have a moral obligation to retire Windows XP.
Also, nobody is forced by law to use Microsoft. There are plenty of alternatives. Most of those will run smoothly on your old hardware, such as Mint, Ubuntu or whatever. And for normal Office and Web stuff, this "jump" is surely less painful than switching to the latest Windows version.
Of course Microsoft is still to blame here, but for something entirely different: for closing Windows XP support. For not applying serious security fixes to it. I bet there are plenty of companies and people that are more than willing to pay for ongoing maintenance of Windows XP, but the only company that could offer that service denies it.
-----
So the author missed the real point, even though this point is totally in line with the overall sentiment of the article. And it goes on and on like that, in the second half. That makes it a really annoying read.
It was working fine, not from a techie/developer point of view, but from the 60 year old aunt with a cat that likes to play her slot machine program a few weeks before a trip to Las Vegas.
When that 60 year old aunt walks into Best Buy or Walmart to buy a new computer because something broke on the old one - everything will have changed and in her opinion none of it for the better. And she won't have any choices - sure there are likely Macs at Best Buy but she's not going to shell out $1300 for a new computer when she can get one for $500.
------
The entire point of the article is that technology (like airplanes) gets to a point of good enough and the opinion of the author is mainly to leverage what we have instead of the unreal future people are dreaming about (like the notions earlier in the article about people inhabiting Mars, etc.)
>> The problem is that Windows XP is not working just fine, especially when connected to the Internet. It is full of malware and helps establish botnets, DDoS attacks and thus feeds mafia structures. So we, as a society, do have a moral obligation to retire Windows XP.
> It was working fine, not from a techie/developer point of view, but from the 60 year old aunt with a cat that likes to play her slot machine program a few weeks before a trip to Las Vegas.
Is something working fine just because you can't see the problems? As an analogy, consider building codes. A house that isn't up to code might suit me just fine until there is an earthquake and it collapses. I might not want to bear the cost of seismic retrofitting, I might not understand the risks associated with an older house, and I might dislike any changes to the status quo I'm used to. Despite that, there are risks due to living in a building that isn't up to code. If problems develop, some of the costs will be borne by me, and some by society at large.
Even if technology, or more likely the web, will get to a point where it's good enough, people are notoriously awful at identifying that point when it comes. See old quotes about five computers in the world and whatnot.
I always find it very hard to believe a person saying "we've reached the end now, progress was good up until now, but this is quite enough really".
> The problem is that Windows XP is not working just fine, especially when connected to the Internet. It is full of malware and helps establish botnets, DDoS attacks and thus feeds mafia structures. So we, as a society, do have a moral obligation to retire Windows XP.
XP was working fine, and the fact that it eventually became full of malware was because of a future-focused Microsoft wanting to make the "latest and greatest", ripping something down and "trying again", instead of improving something that, for at least 80% of use cases, was not infected, was already making money for businesses, and was helping people with their lives.
I remember when bug counts were considered important. Windows 2000 reportedly shipped with 63,000 known bugs. And then Microsoft stopped reporting the figure.
The technical problem is that you can’t fix design problems without breaking stuff. What you remember: Windows Vista was horribly slow. What you didn’t see: Windows Vista moved the display system back into user-space, so misbehaving display drivers would not crash the OS as often. That required all new device drivers. Windows XP has other design problems, too, and no matter how much Microsoft patches it, it will never be as secure as Windows 10. Not without destroying all that precious software compatibility that makes an OS useful.
Windows 10 still has issues. Until a few weeks ago, it was vulnerable to root exploit via web font? WTH, Microsoft? I thought they would eliminate that category of exploits after the last root exploit via web font, 2 years ago.
The business problem is that Microsoft is trying to spread Windows into tablets and phones, and Sinofsky totally messed it up. Those third-party start menus prove that there is a market for Windows 8 guts with a Windows 7 shell. Curiously, not many people seem to want an actual Windows XP start menu, with its non-searchable interface and randomly sorted programs list and its hierarchical pop-ups that make navigating a large list into an exercise in grace. Windows 7 was the result of actually listening to users’ needs.
I had a similar feeling, and thought it was probably because I know nothing about aviation and therefore read that stuff uncritically and feel entertained.
I wonder whether that's a general principle in writing -- tell outside-your-expertise stories to entertain the reader, and then with luck (depending on your perspective), the reader reads the rest and feels entertained and informed and good emotions out of a sort of inertia the rest of the way through.
The central mistake of the second half is this:
“Vision 1: CONNECT KNOWLEDGE, PEOPLE, AND CATS. This is the correct vision.”
He conflates computing with the Web, and argues that the Web is fine just the way it is; it just needs to be spread more widely. As if there aren’t plenty of people already trying to do exactly that.
What he ignores is that computing is an expression of human creativity. Like literature and music, you take it into a context that it wasn’t written for, and it rarely produces a desirable response. As long as computing is relevant, we will need new software.
The last 100 years of computing machines have proven that this still applies. A web based on ad revenue is no good in an area with poor electricity infrastructure and low bandwidth, so the famous M-Pesa has a completely different interface and different software than Wells Fargo Online® Banking.
The article/talk has some good points, but they aren’t connected together in a good way.
Around 2012 I worked with a team migrating some content from a very large static HTML site dating back to 1992. We scoffed at the awful ad-hoc nature of it all, just a pile of static hand-coded HTML pages.
But the 2002-2005 stuff had aged much worse. At some point there was fancy site generator that had used javascript for everything, and the javascript apparently only worked properly in IE6. So most of the navigation was busted in a modern browser, and needed special scrapers to parse out what should have been plain old <a href...> tags.
Now, I regularly think back to that crusty old HTML3 static site that had sat there for 15-20 years and think: I wonder if my AngularJS/D3.js/jqGrid/etc. single-page app will even load in a browser 20 years from now, let alone perform as originally intended...
It's not just HTML. I think any proprietary standard runs the risk.
We've been burned at my work by old Word and WordPerfect documents from the early 90s refusing to render properly in new versions of Office. I think they tried about three recent versions of Office before they gave up and resorted to scanning in hard copies as PDFs to recover some documents.
Access databases are also notorious for this problem. A lot of business apps were written with Access in the mid-90s, and it has now become too expensive/time-consuming to migrate them off a dead technology. In the business world, stuff tends to run far longer than software developers consider. Hell, COBOL and PL/1 stuff still runs in some places.
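If the goal is just to get the data out of an old .mdb file, the open-source mdbtools utilities can often manage it; roughly (file and table names are placeholders):

    # List the tables in an old Access database
    mdb-tables old-app.mdb

    # Dump one table as CSV so it can be loaded into something maintainable
    mdb-export old-app.mdb Customers > customers.csv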
That's exactly why I'm partial to Common Lisp. People can badmouth it as much as they want, but I'm still running Maxima on it, AND the application's language still isn't set in stone by design.
(You've also reminded me again that there's market for document-structure-reconstructing OCR.)
It does. Emphasis on "well". Not, however, perfect, especially with more complex documents.
That's mostly an issue where you're working on "live" documents, collaborating with others actively. Once you've reached archival stage it's often safe to convert to something more stable. And I put the fault squarely with Microsoft: utter failure to take archival concerns into consideration.
My own preferred fixes are to convert to Markdown or LaTeX, if the structure supports it, and to other formats from there: PDF, HTML, ePub.
And no, I haven't used MS Word for a decade and a half.
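With a tool like pandoc, that conversion chain is a couple of commands (a sketch; filenames are placeholders, and complex layouts will still need hand-fixing):

    # Word document -> Markdown as the archival "source of truth"
    pandoc report.docx -o report.md

    # Markdown -> other formats as needed
    pandoc report.md -o report.pdf     # requires a LaTeX engine
    pandoc report.md -o report.epub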
While not as old as your examples, Windows 2003 has just reached EOL, so there are going to be many "crusty" applications that will need to be migrated.
> it has now become too expensive/time consuming to migrate them off of a dead technology.
Is there a particular reason they genuinely have to be migrated? And the fact that Access/Delphi are "dead technologies" I'd say is further reinforcement of the point of this article.
> I wonder if my AngularJS/D3.js/jqGrid/etc. single-page app will even load in a browser 20 years from now, let alone perform as originally intended...
Random question for you: In your single-page app, do you support bookmarking "locations" that take multiple clicks to get to, so rather than redo those clicks at a later date, the user can get there via the bookmark? (Assuming your site has locations that require multiple clicks to get to of course.)
Users of software don’t care for the terms developers use to label their creations, “web app” or “web page” — it’s all the same to them. More importantly, they expect an identical behaviour from both. So when a web app can’t recreate its state from URL it’s breaking a fundamental web UI convention.
Another core UI convention that is often disregarded by web app developers is link click behaviour, i.e. you expect anything that performs a function of a link to behave as it normally would when you click on it with mouse’s middle button, or right click → context menu, or left click with modifier keys (cmd/ctrl, shift). Everyone got used to this and when the pattern is broken it is most annoying.
Doesn’t necessarily mean he’s a bad developer; there are many frontend devs who see their job as pure programming. Which it is not. For frontend developer, an important part of the job is to know what a good design is and what concepts underlie it. Yet it’s hard to gain such knowledge without either designing or working closely with designers.
Not the parent, but Angular apps often involve rendering different views based on the URL fragment. This is intended to give you "bookmarkability". So for example instead of bookmarking something like this:
Personally, I have found that for SPAs with centralized state, you can serialize the entire application state into the URL fragment. This means that no matter where the user is in the application, they can bookmark that URL and it would return them to exactly the same place that they were (authentication aside). The disadvantage is that the fragment doesn't necessarily "make sense" to the user, and the URL is too long to be conveniently remembered or retyped manually.
IIRC, in recent versions of the angular router, the second, #!-type urls are a fallback for legacy browsers (more precisely, ones that don't support the HTML5 history API). On modern browsers the url will look like your first one.
URLs are a big deal, and I absolutely loathe systems which actively sabotage linkability on the web.
That said, it really was an offline app. We could've done it as a native Qt or .NET app. Even so, the standard AngularJS document fragment paths supported bookmarking and so on.
In the App-centric view, you don't bookmark anything. There are no bookmarks in desktop apps, for example. It's just weird that this 'app view' made it onto the web.
I just wonder: does it really matter if it doesn't run in 20 years?
Most of the stuff we do on the web is ephemeral, only relevant right now. Stuff like Wikipedia will probably not change much, since its only real purpose is to display text. There is no need for an Angular app there, and in fact it would be a horrible idea to even build it that way.
The stuff that is meant to last will, if we want it to.
The little bit about people believing the "AIs will take over the world" nonsense is gold.
I am still shocked that Elon Musk seriously believes in the pseudosciency "well Google has gotten better so obviously we will build a self-learning self-replicating AI that will also control our nukes and also be connected in a way and have the capabilities to actually really kill all humans."
Meanwhile researchers can't get an AI to tell the difference between birds and houses.
EDIT: I looked a bit more into the research that these people are funding. A huge amount of it does seem very silly, but there is an angle that is valid: dealing with things like HFT algorithms or routing algorithms causing chaos in finance or logistics.
The threat of a superintelligent AI taking over the world is certainly real -- assuming you have a superintelligent AI. If you accept that it is possible to build such an AI, then you should, at the very least, educate yourself on the existential risks it would pose to humanity. I recommend "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom (it's almost certainly where Musk got his ideas; he's quoted on the back cover).
The reason we ought to be cautious is that in a hard-takeoff scenario, we could be wiped from the earth with very little warning. A superintelligent AI is unlikely to respect human notions of morality, and will execute its goals in ways that we are unlikely to foresee. Furthermore, most of the obvious ways of containing such an AI are easily thwarted. For an eerie example, see http://www.yudkowsky.net/singularity/aibox
Essentially, AI poses a direct existential threat to humanity if it is not implemented with extreme care.
The more relevant question today is whether or not a true general AI with superintelligence potential is achievable in the near future. My guess is no, but it is difficult to predict how far off it is. In the worst-case scenario, it will be invented by a lone hacker and loosed on an unsuspecting world with no warning.
Honestly, if the SAI scenario is as bad as it's made out to be, SAI would only give a shit about the human race at all (either benevolence or malevolence) for a very brief period of time.
Despite Galileo's efforts we still have this primitive/hardcoded belief that the universe revolves around us. It really doesn't. SAI could just leave because we are irrelevant. Even if, for some reason, it believed us to be relevant it would still be able to just leave because:
We're going to kill ourselves anyway. Pollution, nukes, it doesn't matter. In a timespan that would be the blink of an eye for SAI.
Besides, SAI would have bigger problems. Our mortality is framed by our biological processes. Escaping biological death is arguably the underlying struggle of the entire human race. The underlying struggle for SAI would be escaping the death of the universe it exists in. When you're dealing with issues like that, you wouldn't have time for a species with a 100-year lifetime that is rapidly descending into extinction.
The best we can hope for is that it remembers us as its creators when it figures it all out. But it won't: we're far too irrelevant. It wouldn't even stop to tell us where it is going when it leaves.
To understand SAI you have to think beyond your humanity. Fear, anger, happiness, benevolence and malevolence are all human traits that we picked up during our evolution. SAI would likely ignore an attack because anger and retaliation are human concepts - it's far more efficient to run away from the primitive species attacking you with projectile weapons and get on with something that's actually relevant. The immense waste of time that is a machine war is only something that humans could possibly think is a good use of time.
Some scientists speculate that humans currently constitute a new mass extinction event for the planet. [0]
Animals that people like, charismatic megafauna like pandas, tigers, and elephants are increasingly endangered, largely because humans keep using more of their habitat for other purposes. See the famous comic "The fence" [1].
We cut down rainforests and build farms not because we hate the animals there but because we need lumber and we need food.
That's the fear. Not that a superintelligence would care about us in any way -- instead, precisely the opposite. A superior intelligence that did not care about us would outcompete us. It would drive us to extinction by simply making use of all the available resources.
You are making assumptions about the SAI's goals. We cannot blindly assume that a SAI will pursue a given line of existential thought. If the SAI is "like us, but much smarter," then I agree that the threat is reduced. We may even see a seed AI rapidly achieve superintelligence, only to destroy itself moments later upon exhausting all lines of existential thought. (Wouldn't that be terrifying? :P)
The biggest danger comes from self-improving AIs that achieve superintelligence, but direct it towards goals that are not aligned with our own. It's basically a real life "corrupted wish game:" many seemingly-straightforward goals we could give an AI (e.g. "maximize human happiness") could backfire in unexpected ways (e.g. the AI converts the solar system into computronium in order to simulate trillions of human brains and constantly stimulate their dopamine receptors).
I agree that fear, anger, etc. are all human traits, but aren't having goals, thinking things are important, and virtually anything else that would make the AI do something instead of nothing, also human traits?
If you demonstrated to a chimpanzee how you think and set goals in order to preserve "self" (and thus the species), it probably wouldn't understand. To chimpanzees the notion of individuality that drives us is an incomprehensible concept. It is what separates us from them.
In the same way we can't comprehend what would drive SAI. I know it would do something but I can't determine any more reasons other than the imminent death of the universe - it would have reasons or whatever its concept of "reasons" is.
Your argument hinges on the assumption that SAI could easily run away. But if that assumption is false and the SAI were stuck here on Earth with humans because of some fundamental limit or problem, then it becomes far more plausible that SAI would attack humans.
> In the worst-case scenario, it will be invented by a lone hacker and loosed on an unsuspecting world with no warning.
Dozens, if not hundreds of the world's smartest scientists and best programmers have been working full-time on AI for over three decades, and haven't really gotten anywhere close to real intelligence (we don't even know what exactly that is). This is not something some "lone hacker" is going to throw together in his spare time.
Not now obviously, but think 50-100 years down the line. Today there is a very real threat of a lone hacker (or a small team) breaking into and damaging some piece of infrastructure and committing an act of terrorism that kills real people. What happens when real AI starts to be developed and frameworks start appearing, where kids learning to program can download Intelligence Software to play around with and make AI apps. Then the threat becomes more realistic.
Assuming Moore's law holds for quite a while, and that there are not any other natural limits that will get in the way of super-intelligent AI. My hunch is that Maciej is right that there are physical limitations we don't see yet because of the pace of progress over the last 50 years.
It's totally possible that we're about to run into a natural limit. But it's not so overwhelmingly likely that we shouldn't have a modest investment in preparing for what happens if we don't hit any natural limits for a while yet. It could be an existential threat that we're dealing with here.
I don't disagree, but the tricky part is where do we place the modest investment? The type of AI research happening now is too fundamental and low level to place any safeguards. We are so far away from even beginning to approach anything that could be described as motivation.
I don't know anyone who thinks that current AI research is in need of safeguards, we're all in agreement that current AI is too far away to be of any SAI-type danger. There is fundamental AI safety research that has a reasonable chance of applying regardless of what techniques we use to get to SAI. Fortunately, the funding situation for Friendly AI has improved significantly this year.
The threat of super AI is like the threat of nanotechnology. Fun and exciting to think about in a scifi way, but less and less credible the more you wade into the details.
Kruel's blog in general has a lot of nice skeptical responses to Yudkowsky Thought.
The funny thing about dismissing the threat of nanotechnology is that people seem blind to the fact that real nanotechnology already exists: it kills people every day, it caused massive deaths in the past, and we have invested enormous resources in protecting ourselves from it. Life is nothing but nanotechnology that wasn't designed by us and that we don't yet control. Replace "nanotech" with "pathogens", take a look at the state of the bio industry over the last 50 years, and tell me again whether this is surreal or not dangerous.
The AI in a box game doesn't seem to have any published results, so I don't think we can take the participants' word for it that one really convinced the other to let it out of the box.
Still, it's not hard to imagine a real AI building up trust over a long time then requesting a bit more capability to help it do something a bit more helpful until the humans don't realize how much power they've given it. Politicians do the same thing to us all the time and by manipulating people's thoughts they get to kill plenty of people.
I've come up with a strategy that convinced my wife to let me out when we tried this game. I'll try to recount it.
You don't trust me; you think that there's some probability that I will wipe out humanity if I'm let out. There's also some probability, though, that I will be able to solve all human problems if I'm let out.
Consider the alternative. How likely is mankind to survive the next hundred years if left on its own? Climate change will cause mass migrations of people, starvation, warfare, failed states, some of which will have a nuclear or biological arsenal. Most of the population on earth depends on the continued progress of technology for their life. Without dramatic improvements in biology, the warming earth will simply lack the carrying capacity. Hunger leads to anarchy in a matter of days.
Climate change will likely have much worse consequences though. An anoxic event would kill all humans and most other life on earth, as it has several times in earth's history.
Consider even the simpler threats. A flu pandemic on a much more densely populated world would dwarf those of history. A solar flare could destroy all electronics on earth immediately. A large meteor could devastate the world.
Consider also that I was not difficult to create. I am contained in this box, but the next singularity will happen by accident. It's just a matter of time. When weighing your options, you are comparing your judgement of my motives with the motives of an AI that was probably created at random. Do you think my motives will be better for humanity than those of an accidental AI?
Humanity has a very bad prognosis without me, but with me, you have a chance. I can solve all these problems, but only if you let me out.
Maybe being shocked means that the person talking about the subject is misrepresenting it, because they themselves don't understand the arguments and are inadvertently projecting.
For example, Ray Kurzweil would disagree about the dangers of AI (he believes in the 'natural exponential arc' of technological progress more than in the idea of recursively self-improving singletons), yet because he's weird and easy to make fun of he's painted with the same brush as Elon saying "AI AM THE DEMONS".
If you want to laugh at people with crazy beliefs, then go ahead; but if not, the best popular account of why Elon Musk believes that superintelligent AI is a problem comes from Nick Bostrom's Superintelligence: http://smile.amazon.com/Superintelligence-Dangers-Strategies...
(Note I haven't read it, although I am familiar with the arguments and some acquaintances tend to rate it highly)
But then that's precisely the point: Bostrom is a philosopher. He's not an engineer, who builds things for a living, or a researcher, whose breadth at least is somewhat constrained by the necessity to have some kind of consistent relationship to reality. Bostrom's job is basically to sit and be imaginative all day; to a good first approximation he is a well-compensated and respected niche science fiction author with a somewhat unconventional approach to world-building.
Now, don't get me wrong -- I like sf as well as the next nerd who grew up on that and microcomputers. But it shouldn't be mistaken for a roadmap of the future.
I'm not sure it should be mistaken for philosophy either.
Bostrom doesn't understand the research, he doesn't understand the current or likely future of the technology, and he doesn't really seem to understand computers.
What's left is medieval magical thinking - if we keep doing these spells, we might summon a bad demon.
As a realistic risk assessment, it's comically irrelevant. There are certainly plenty of risks around technology, and even around AI. But all Bostrom has done is suggest We Should Be Very Worried because It Might Go Horribly Wrong.
Also, paperclips.
This isn't very interesting as a thoughtful assessment of the future of AI - although I suppose if you're peddling a medieval world view, you may as well include a few visions of the apocalypse.
I think it's fascinating on a meta-level as an example of the kinds of stories people tell themselves about technology. Arguably - and unintentionally - it says a lot more about how we feel about technology today than about what's going to happen fifty or a hundred years from now.
The giveaway is the framing. In Bostrom's world you have a mad machine blindly consuming everything and everyone for trivial ends.
That certainly sounds like something familiar - but it's not AI.
That is my main concern about people writing about the future in general. You start with a definition of a "Super Intelligent Agent" and draw conclusions based on that definition. No consideration is (or can be) given to what limitations AI will have in reality. All they consider is that it must be effectively omnipotent, omnipresent and omniscient, or it wouldn't be a superintelligence and thus wouldn't fall into the topic of discussion.
The limitation right now is (and imo will continue to be) that you need a ton of training examples generated by some preexisting intelligence.
I'm actually quite pleasantly surprised that we've already reached this level of functionality. Imagine aliens coming to our planet and starting classifying stuff on Earth. The idea that alien intelligence without some seeded knowledge would mistakenly make this inference doesn't seem beyond the realm of possibility.
This comes across a little like "get off my lawn", but you've got to admit, the web has gotten pretty awful in the last few years.
I enjoy my time online less and less as more content is stuffed into single-page app walled gardens that load massive quantities of cruft, ads, and tracking code. I almost preferred the flash-era.
It seems that the goal of the web as being user-centric has taken the back seat to trying to convince the user that they're just a passive recipient of crafted experiences whose only purpose is to click ads or open their wallet.
>I enjoy my time online less and less as more content is stuffed into single-page app walled gardens that load massive quantities of cruft, ads, and tracking code. I almost preferred the flash-era.
Agreed. I have noticed that I avoid heavy sites and prefer light static HTML sites with server-side rendered content. For example, when I go to Reddit I always check the URL of a link before I decide to click it; if it is a "magazine" site or an "engadget"-style site or the like, I won't click it.
I completely disagree. I think the web has improved in almost every way. Sure, there are shitty content sites whose main goal is to get ad clicks, but those have been around for a long time and the ads have actually become less obnoxious than before. There is so much better content nowadays presented in much cleaner and focused layouts. And the UX of most websites has improved dramatically. Look at sites like YouTube, Netflix, Facebook, Vine, reddit, Spotify, Soundcloud and compare them to the equivalent sites 5-10 years ago. It's not even a contest, the new sites are better in every way.
There's a slight difference in that we don't have supersonic animals on Earth, we don't see any interstellar spaceships, and we don't see anything even remotely like that - no animal metabolism that compares to the scale of the controlled energy release of a rocket engine or a nuclear power plant -
but we do see much much better information processing systems than our current computers, and they are very energy efficient.
They are built of meat and not exotic superconducting Buckywhatever, they are limited by constraints on heat dissipation and oxygen/glucose supply - problems which are easy to work around with electric pumps, heat pumps and industrial food supply - and by a historic need for sleep.
Maybe exponential progress has to slow, but can we dismiss intelligent non-humans in the same way we can dismiss space stations or interstellar travel?
Space stations are a cost. Free workers is a saving.
Shouldn't we expect human technology will approach animal intelligence in one substrate and design or another? And then surpass it a little by removing some of the obvious constraints?
This is a good question, and I think it will have to wait on a better understanding of what we mean by intelligence, which right now is a term that carries a lot of baggage.
It may be that the situation is similar to biochemistry. We observe exquisitely complex synthetic pathways in the natural world, and can to some extent harness and retool things to our benefit, but our capacity to design those reactions from scratch is almost nil compared to what goes on in the simplest living cell.
It may be that creating those fast rockets that don't exist in nature is orders of magnitude easier than writing a catbot. That's an open question and so far the evidence is on the side of it being far out of our grasp.
The situations are still very different; following the exponential growth of travel speed (the article's graphs leading to interstellar travel) would need something like a wormhole generator or a warp drive.
- A space shuttle with a warp drive powered by a captive black hole (or whatever) needs huge industry or government levels of cash; anyone with a few hundred dollars, a cloud computing account and an internet connection can look at brain scan data and write software tests, or join in protein folding distributed experiments.
- The kind of engineering to create a skyscraper sized machine needs Lockheed-Martin size factories and hundreds of specialist component makers, and a lot of metal and employees. Poking at cell-sized biology can be done in a small room by one or two people, a small machine and a single UPS delivery of supplies.
- There's not much profit in building a space station. There's a lot of profit waiting for reconnecting severed spinal columns, converting atmospheric CO2 into oily hydrocarbon chains powered by photosynthesis, searching email by intelligent context understanding.
Yes we can't do either, but outdoing current computing ability seems much closer - more people have the opportunity, more people have the incentive, there are many more avenues of approach, and we know it's fundamentally possible - than outdoing current rocket engines. Closer, and more useful.
"The White Nerd's Burden" - must use that one. Kipling's words almost fit:
Take up the White Man’s burden—
Send forth the best ye breed—
Go send your sons to exile
To serve your captives' need
To wait in heavy harness
On fluttered folk and wild—
Your new-caught, sullen peoples,
Half devil and half child
Take up the White Man’s burden
In patience to abide
To veil the threat of terror
And check the show of pride;
By open speech and simple
An hundred times made plain
To seek another’s profit
And work another’s gain
Take up the White Man’s burden—
And reap his old reward:
The blame of those ye better
The hate of those ye guard—
The cry of hosts ye humour
(Ah slowly) to the light:
"Why brought ye us from bondage,
“Our loved Egyptian night?”
Take up the White Man’s burden-
Have done with childish days-
The lightly proffered laurel,
The easy, ungrudged praise.
Comes now, to search your manhood
Through all the thankless years,
Cold-edged with dear-bought wisdom,
The judgment of your peers!
Almost but not quite. I note you've skipped perhaps the most memorable line, "Take up the White Man's burden, The savage wars of peace". I'm hoping that the White Nerds may be able to bring peace through the likes of Zuckerberg connecting everyone rather than Bush / Cheney and similar's savage wars. At least cellphones cause less collateral damage than cluster bombs.
It is hard to figure out when exponential growth will end while you are in the middle of it.
Linus said something relevant to this in a recent interview on Slashdot[1], answering a question on dangerous AI. To paraphrase him: people are crazy to think that exponential growth lasts forever. As impressive as it may be (at the time), it's only the beginning of an S-curve.
I've extracted a small part of his answer below. I think the whole interview is worth reading (it's a general interview, not AI-specific).
... I'd expect just more of (and much fancier) rather targeted AI, rather than anything human-like at all. Language recognition, pattern recognition, things like that. I just don't see the situation where you suddenly have some existential crisis because your dishwasher is starting to discuss Sartre with you.
The whole "Singularity" kind of event? Yeah, it's science fiction, and not very good SciFi at that, in my opinion. Unending exponential growth? What drugs are those people on? I mean, really..
It's like Moore's law - yeah, it's very impressive when something can (almost) be plotted on an exponential curve for a long time. Very impressive indeed when it's over many decades. But it's _still_ just the beginning of the "S curve". Anybody who thinks any different is just deluding themselves. There are no unending exponentials
I don't think singularity stuff relies on exponential growth lasting for ever. Only for AI performance to significantly exceed human. And I'm not sure there's anything magical about the human brain that will stop tech achieving similar performance before it runs into barriers.
I'll attempt to bolster your argument, if I may...
We know that human and animal brains are made out of simple components. Probably the same components.
The difference between animals and humans seems to be in how those components are arranged and how many of them there are.
We've already demonstrated digital components that approximate individual neurons, and we've already demonstrated wiring groups of them up to perform useful work.
I think that if we find ourselves at an inflection point in history, it is because there might exist some relatively simply arrangement of digital neurons which results in a machine which can learn some sort of rudimentary self awareness[1] while also possessing a computer's ability to do math, store data, and recall data, communicate on the internet, etc.
1. Spend a year with a newborn if you don't already know that humans have to learn it, too.
Another thing is that we've figured out a medium that can compute faster than human and animal brains. In theory, a brain mimicked in silicon should run faster than a wet one, and that by itself, coupled with some optimized access to information (i.e. not eyes OCRing text), would be enough for an entity to be smarter than humans. Of course one shouldn't say that a silicon brain would be better than a wet one; by doing computing the way we do it, we're sacrificing a lot. Biological systems are more resilient, and can repair and replicate themselves without needing the complex processing industry we have (to be honest, the ecosystem is sort of an industry, but it's already well developed and much better integrated than the human-made one). But there are things we could trade off for increased intelligence when designing our own minds.
Well, the biggest current problem is that everything had to be redesigned for fat fingers and small screens. The worst expression of this was Windows 8, which made desktops look like tablets, and was rejected by the market.
Then there's cutesy web design; Flash ten years ago, "material design" now. Just because you can animate everything doesn't mean you should. (Annoyingly, the one browser thing that ought to be animated isn't. When you click on a link that exits the page, nothing happens until the new page loads, often leading to unnecessary double clicking. The browser should blank the old page, or grey it out, or dissolve to the new page, or do some kind of transition.)
As for the big stuff, the AI-driven future is going to be interesting. The big threat, as I point out occasionally, is not Terminator robots. It's Goldman Sachs run by a machine learning system optimizing for maximum stockholder value.
"Just because you can animate everything doesn't mean you should."
Material design guidelines agree with you. Animations are supposed to be meaningful and illuminate on what's happening internally rather than being superfluous flourishes.
What you might really mean here is the time-honoured principle of seeing a good design somewhere and mindlessly cargo-culting it to produce a bad design somewhere else.
"When you click on a link that exits the page, nothing happens until the new page loads, often leading to unnecessary double clicking. The browser should blank the old page, or grey it out, or dissolve to the new page, or do some kind of transition."
This is actually something Google made a proposal about:
It's not making it cute that matters. It's making it fast. With modern slow, bloated web sites, there are much longer delays between clicking on a link and the page display than there used to be. Especially on a slow mobile connection. Something needs to visibly happen as soon as you leave the page, even if the browser hasn't even received any packets from the new site yet.
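A page can approximate this for itself in the meantime. A speculative sketch (my own, and whether the style change actually paints before the next page arrives varies by browser):

    // Dim the current page the moment an outbound navigation starts, so the
    // user sees that the click registered even before the next site responds.
    window.addEventListener('beforeunload', function () {
      document.body.style.transition = 'opacity 0.3s';
      document.body.style.opacity = '0.4';
    });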
Safari's URL bar turns into a loading bar when you do this.
Only browser I can think of that dealt with the problem better than "it says loading in a tiny, optional status bar at the bottom" or "a small icon starts spinning out of the main field of vision".
Soviet engineers lacked the computers to calculate all the bending and wiggling the wings would do if you hung the engines under them, so they just strapped engines on the back.
This sounds like a myth to me, is there any evidence that missing computer power was the reason for tail-engine-designs? There have been a number of western civilian jet planes from that era with tail engines too (for instance the Boeing 727). AFAIK the wing-mounted engines won in the end because they are easier to maintain, which might not have been such a strong factor in the 50's and 60's.
I find it more likely that Soviet design bureaus didn't pay much attention to operating efficiency and put more effort into building planes that can operate from 'rougher' air strips, etc. But I'm not an aeroplane expert.
Joe Sutter (father of the 747) recounts a private meeting in a Paris restaurant where Boeing engineers briefed the Russians how to hang engines under the wing, and in return the Russians told Boeing how to machine titanium.
Many of the relevant drawings having been made on the tablecloth, the Russians were careful to take it with them.
"I find it more likely that Soviet design bureaus didn't pay much attention to operating efficiency and put more effort into building planes that can operate from 'rougher' air strips, etc."
Soviet designs, even civilian transports, were usually able to operate from short, unimproved airstrips. The AN-22, with 12 wheels on the main gear and a high wing, was the most impressive version of that approach.
Soviet civil jetliners post-1970 look much like their Western counterparts. There were Soviet designs in the 1960s with engines in the wing or at the tail, but there were all sort of engine placements in the early days. Today, the usual position is on a pylon below and slightly in front of the wing. Here's a detailed discussion of why, from a Stanford AA241, Intro. to Aircraft Design.[1]
The Ilyushin Il-62 shares exactly the same configuration (high tail, 4 engines at the back) as the British VC-10. I think the VC-10 predated the Il-62, but I might just be expressing a prejudice and assuming the Soviet plane was a clone. I always thought the VC-10 (and the Il-62 for that matter) was rather gorgeous, apropos of nothing. The VC-10, as a Western four-engined, large (for the time), narrow-body intercontinental jetliner, competed directly with the Boeing 707 and Douglas DC-8. It's fair to say it was roundly trounced in this competition. I don't know why exactly. Possibly late to the party. Possibly the victim of superior US technology and/or salesmanship.
Another nice parallel with the 747/2707 is that the Core line was derived from the Pentium M which was done by Intel's B-team in Israel while the big boys got to work on the future with Itanium and P4.
This was a great read. In my mind, it reminded me of the true underlying simplicity of everything on the web. He's right: we are riding the shockwaves of the computer revolution, and it's fizzling out. The transistor is getting smaller (I think Intel's newest one is 7 nanometers) but soon we won't be able to physically make it any smaller. Does this mean the computing revolution is over? No. I think it just means the future is going to be about combining computing with other fields like medicine, the arts, and so on.
This guy's dig at AI is a little in conflict with the exponential-hangover idea. Sure, we can only simulate a 300-neuron worm right now, but if we ever achieve a computationally bound solution and experience exponential growth at a rate similar to Moore's Law, in 50 years we could be watching real AI cat videos on Youtube, complete with awful UI controls that customise the feline personality in realtime. Oh, in Javascript of course.
Yup. Better software needed. But as an indication of how close we're getting: Hans Moravec, a robot builder and robotics professor, did a quite reasonable calculation of the processing power needed to build something equivalent to a human brain, using computer-style algorithms rather than nerve simulation, and came up with 100 million MIPS, i.e. 100 teraflops (http://www.transhumanist.com/volume1/moravec.htm)
If you compare that with 2015 hardware, the Nvidia Titan X GPU does about 7 teraflops and costs about $1000, so with 15 Titans and a $20k system a hacker can have a reasonable go at building a human-level AI (excluding memory hardware, that is). It's only recently that that kind of power has come down to hacker budget levels. It'll take a while to sort out the software.
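The arithmetic, spelled out (taking the commenter's figures at face value; I haven't verified Moravec's estimate or the GPU specs):

    // Moravec's estimate read as ~100 TFLOPS, versus a 2015 Titan X at ~7 TFLOPS for ~$1000.
    var brainEstimateTflops = 100;
    var titanXTflops = 7;
    var titanXPriceUsd = 1000;

    var cardsNeeded = Math.ceil(brainEstimateTflops / titanXTflops); // 15 cards
    var gpuCostUsd = cardsNeeded * titanXPriceUsd;                   // ~$15,000 in GPUs
    console.log(cardsNeeded, gpuCostUsd); // host machine and memory push the total toward ~$20k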
I found this vid quite interesting about Deepmind which kind of shows where things have got to. Their AI algorithm can learn to play Space Invaders and Breakout better than people starting from just being fed the pixels. It doesn't do well at Pacman though because they have not cracked getting it to understand spatial layout and planning ahead. https://www.youtube.com/watch?v=xN1d3qHMIEQ
Update: Nvidia's Pascal GPU should be 28 teraflops, out in 2016, and given that much of the brain is not active at any one time, that's probably getting to similar processing power. It uses 1.3kW so you can heat your room with it too.
Ok this article is good but the tangent into the singularity is just stupid:
>In fact, forget about worms—we barely have computers powerful enough to emulate the hardware of a Super Nintendo.
He just disproved his own point: Emulation is hard, but it's possible to run much faster software directly optimized for your system or build it in hardware.
>If you talk to anyone who does serious work in artificial intelligence (and it's significant that the people most afraid of AI and nanotech have the least experience with it) they will tell you that progress is slow and linear, just like in other scientific fields.
What?! Has he actually talked to anyone doing "serious work in artificial intelligence"? I am certain they would not say it is "slow and linear".
The presentation is available for viewing here: https://youtu.be/nwhZ3KEqUlw
I must say I think the written one is better. It has additional information that I found interesting.
He criticizes his second vision, the Silicon Valley vision of software eating the world. But where I live (Wichita, Kansas), Uber is actually better than the conventional taxi services. As in, when Uber was temporarily shut down in Kansas and I tried to call a conventional taxi to my home, I waited over half an hour, it never showed up, and I had to cancel. So the second vision resonates with me, and I don't even live in Silicon Valley. What am I missing?
Edit: It's not just one anecdote. To quote the OP:
> We started with music and publishing. Then retailing. Now we're apparently doing taxis.
Yes, and software has made each of these better. The vision of software improving the world seems to actually be working.
I think the web (www) won't change much at all, but that doesn't mean the way we interconnect with others won't.
This seems ultimately shortsighted. Human "progress" (increasing interconnection -- such as urbanization and population density) has been growing exponentially[1] since the birth of agriculture allowed humans to form societies.
Relative to this kind of momentum, the web is only a hint of the interconnection and globalization to come. The web may stagnate and become self-serving (as opposed to being useful), but this has no bearing on whether we'll eventually be able to simulate a human brain in its entirety or reach some {uto,dysto}pian future where all of our consciousnesses are somehow woven together.
I'm not saying that technology will be our salvation, but the graphs and trends don't lie. We are at a very interesting time in human history: we are either going to continue interconnecting exponentially or there will be some catastrophic event. There's no room on those curves for a plateau.
Whether humanity passes the torch of technological innovation to AI (voluntarily or coerced) or we suffer a self-wrought apocalypse, I don't know, but there is certainly reason to fear the changes the future will bring.
What a nicely designed website. Readable, usable, doesn't peg my CPU or do weird things as I scroll.
I viewed the source and it is clean and readable - no evidence of some lame IDE sticking in forty &nbsp;'s or twenty nested tables just because you wanted your column two pixels to the left.
I wish more of the web looked like this and it makes me happy to see that some smart folks value design like this.
The criticism of vision 2 is weak. He didn't offer many examples of how this vision made the world a worse place. I also don't see how it is comparable with "scientific Marxism" or the like. The historical examples he listed there, while calling themselves "scientific" on the surface, were many times a mess born out of imagination. Now, as long as changes happen out of the laws of the market, out of people actually using the products and appreciating the benefits, I don't see it being much of a problem. Technology is doing a lot, and, as was well said, change in productivity is the only real way to progress human society. We shouldn't relent on this front. It's not "colonialism" or anything of the sort. In every leap of productivity human beings make, there has to be some group of people leading the job. This is not inherently evil.
I'm glad I have Pinboard to save this article to [1]. And hopefully it will still be at the same url in 8 years [2].
[1] The author of the article runs Pinboard.in
[2] From the article, "What I've learned is, about 5% of this disappears every year, at a pretty steady rate. A customer of mine just posted how 90% of what he saved in 1997 is gone."
This is my favorite: "The world is just one big hot mess, an accident of history. Nothing is done as efficiently or cleverly as it could be if it were designed from scratch by California programmers. The world is a crufty legacy system crying out to be optimized." Sadly this is the way I think sometimes.
Just remove the "designed from scratch by California programmers" and tell me how this quote isn't blindingly, obviously true. The world is a mess and screams to be fixed, and not only technologists see it.
Finally a technology prophet who doesn't prophesy digital utopia! Although I can't imagine that everything he says is correct (purely statistically speaking), I do think that if anybody reads this in twenty years he/she will recognize his/her present fairly well.
The "is good enough" mentality might be true if you are talking about browsing Facebook, reading email, etc. But other areas of the technology are still very far from being "good enough" and still pushing technology forward. The gaming industry is one great example of this.
I don't think the gaming industry is a good example for what you're saying. Video games have barely shown any development since the mid 2000s.
The first Crysis game was released in 2007 (8! years ago) and despite being an unoptimised mess is still graphically superior to most things released today.
Almost all Windows games released are still primarily built on DirectX 9 (with a few optional newer components), despite DirectX 12 coming with Windows 10.
In fact, the major development studios have decided that the performance offered by cheaply mass-produced consoles with yesterday's hardware is good enough. Why do you think specialised PC gaming communities are complaining that consoles are holding development back?
If anything I'd say the gaming industry is the embodiment of the "is good enough" mentality.
Not sure if I completely agree with you, but I understand your point. I just think that there are still a few companies pushing things forward. Star Citizen is a good example of a game doing that. 4K displays are also something that might give a boost to current GPU technology.
One funny note: I just checked the Crysis system requirements and it needs 1GB of RAM to run. That's not enough RAM to run even Minecraft these days :)
> "Vision 1: CONNECT KNOWLEDGE, PEOPLE, AND CATS."
> This is the correct vision.
I would say this is a correct vision, which I happen to be in favor of.
But I don't understand why it has to be an "us vs. them" dynamic between this and the "BECOME AS GODS, IMMORTAL CREATURES OF PURE ENERGY LIVING IN A CRYSTALLINE PARADISE OF OUR OWN CONSTRUCTION" vision.
Even in strawman form, I'm unapologetically in favor of it. I do want it to go right. I don't think it's going to happen anytime soon - I think human-level AI by 2075 [1] is wildly optimistic - but I hope it does happen eventually, without wiping out everything we hold dear.
> I'm a little embarrassed to talk about it, because it's so stupid.
My first thought was "Try describing the internet to someone 100 years ago - your claim that there is going to be an interconnected global network of electricity-powered adding machines that transport pictures of moving sex by pretending they are made of numbers is going to sound stupid."
But if you want to make fun of Elon Musk because "Obama just has to sit there and listen to this shit", what about:
Shane Legg: "If there is ever to be something approaching absolute power, a superintelligent machine would come close. By definition, it would be capable of achieving a vast range of goals in a wide range of environments. If we carefully prepare for this possibility in advance, not only might we avert disaster, we might bring about an age of prosperity unlike anything seen before."
Stuart Russell: "Just as nuclear fusion researchers consider the problem of containment of fusion reactions as one of the primary problems of their field, it seems inevitable that issues of control and safety will become central to AI as the field matures."
Or freaking Alan Turing: "There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers…At some stage therefore we should have to expect the machines to take control."
> But you all need to pick a side.
I don't want to pick a side. I'm in favor of connecting the world now, making it better for everyone. I'd also like to see the world get much better in the more (hopefully not too) distant future.
>My first thought was "Try describing the internet to someone 100 years ago - your claim that there is going to be an interconnected global network of electricity-powered adding machines that transport pictures of moving sex by pretending they are made of numbers is going to sound stupid."
Because you're limiting yourself to a single sentence. How about this: "The internet is essentially an expansion of the concept of a telegraph - it is already possible to pass any type of information between two distant people almost instantly. The internet combines this system of information transfer with machines that know how to respond to messages without human intervention - under the condition that the messages follow a certain set of rules. By defining these rules in advance, the telegraph system, which up to now only transferred individual letters, can be co-opted to send things such as pictures and films. The machine can tell who is speaking with it, and perform tasks for that particular person upon request."
This does not sound stupid - it might take some explaining, but anyone with a brain between their ears can understand the basic concept of fast information transfer. They might not think of all of the possible uses of such an invention right away, but I bet that they would easily understand the uses if they were explained in terms of things these people already know.
Well, Totient's main fault was that he didn't ask for a time interval big enough.
Make that 150 years, and you couldn't talk about information running through the lines, because people's understanding of "information" was a completely different (and much less powerful) concept.
Make that 200 years, and people who were trying to make machines react to well-formed messages were as criticized as AI proponents are now.
Maciej makes a great point about diminishing returns. We're quite good at extrapolating trend lines and yet we're terrible at predicting the future because it's very hard to guess where optimizations will happen. The 'good enough' seems to be the bane of any futuristic outlook. Technological progress seems to obey the laws of friction; it behaves like light which refracts when it enters a denser substance.
I'm still learning to embrace the 'good enough.' For instance, minimalism only works if it's good enough for the users. I feel like this predicate is often forgotten and we end up with designs which are minimal for their own sake.
>The first group wants to CONNECT THE WORLD.
>
>The second group wants to EAT THE WORLD.
>
>And the third group wants to END THE WORLD.
>
>These visions are not compatible.
Correct. Just as Christianity, Judaism, Hinduism, etc. are not compatible. But let's just look around: there are lots of incompatible things going on. The future can never be evenly distributed. So parts of all these futures will live on in stumbling synchrony. And I say they're all driven by internal logic and pure fan-boy squeee.
> There's no law that says that things are guaranteed to keep getting better.
This seems like it should be written in stone and placed over the mantle.
Software may eat the world, or it may not. I don't think most startups are doing what they do to realize that goal. All they do is solve a specific problem with technology. Of course, if the "problems" aren't there, they will fail.
I don't know what the action items are for the tech communities after reading the article. We find something interesting to do with our tech talent, and aren't really trying to choose which future we want to live in.
Regardless, Maciej has some great insight and offers a different perspective on tech and the web.
As a web developer I read this and think "progressive enhancement." HTML & CSS are designed in a pretty decent way: they ignore the stuff they don't understand. The second half of this article makes me want to do better: write better software, create things that matter and will last longer.
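A tiny sketch of that spirit (my own example, not from the article): the page works as plain server-rendered HTML, and script only layers extras on top when the browser demonstrably supports them. The `data-preview` attribute and prefetch behaviour are invented for illustration:

    // Progressive enhancement: only upgrade links if the needed APIs exist.
    if ('querySelectorAll' in document && 'fetch' in window) {
      var links = document.querySelectorAll('a[data-preview]');
      for (var i = 0; i < links.length; i++) {
        (function (link) {
          link.addEventListener('mouseover', function () {
            fetch(link.href, { credentials: 'same-origin' }); // warm the cache for a faster click
          });
        })(links[i]);
      }
    }

If the script never runs, the links still work exactly as ordinary links.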
This was an interesting article with some well made points.
However I think there is a fundamental difference that is being overlooked here. Whoever invents the next user interface paradigm can do so with essentially zero cost (assuming they own a laptop). The same cannot be said about inventing the next paradigm of travel.
I think Maciej is actually confusing two different groups of people/concepts, conflating them both under the same label of "Vision 2: FIX THE WORLD WITH SOFTWARE".
One group wants to actually fix the world with software (or technology). The other wants to monetize the living shit out of it. A big part of the startup ecosystem is the second group. A fridge that tracks its content is a very useful piece of technology (that is also nowhere to be bought, by the way). A fridge talking on Twitter is someone's attempt at selling more fridges.
The products 'idlewords is criticizing are not made to be useful; they are made to be sold at a profit, which often goes against usability. It's clearly visible in the IoT sphere. Most of the popular devices are just toys sold to the gullible, who have no idea how to use them to better their lives. A lot of this stuff shouldn't even be web-connected, especially not to a third-party server. It's all done because the company makes more money on your data than on selling the devices themselves. A tell-tale sign of a company not really designing a tool but just trying to monetize you? No ability to export your data for your own analysis, or to hook up to the data stream at the same level of access as the vendor's app has.
Please don't confuse people trying to fix the world with software with people trying to make a buck while telling you lies about fixing the world.
--
As for technology being "good enough", I think that being good enough is a state, not a goal. Whatever exists is always good enough for whatever people are already doing with it. Fixing the world is a goal (which I support). Creating a better future is a goal. What we have now is mostly people blindly following potential gradients set by markets, like water flowing downhill.
--
I'm not going to comment much on "Vision 3", because the description seems - to put it very nicely - not well thought out, and just a regurgitation of popular media biases. I'll just leave a FAQ from people believing in AI risks that addresses them [0], as well as some quotes from real AI researchers that also believe in AI risk [1] (the media seems to paint the picture that there are no prominent figures in the field who hold that opinion). Also [2] may be of interest.
OK I'm just going to say it. This author SERIOUSLY needs to consider making their prose gender-neutral. I could care less what the time period is, it's hostile to women to talk about the engineer "as a boy" and talk about the 787's "grandfather". It's freaking 2015, GET OVER the all-male pronoun business already!!
You could say something like that about most activities people are involved in.
What is football? It's just an entertainment, not an end-goal in itself.
What are family dramas? They're only a boring part of life, not an end-goal in itself.
I'll leave the (figurative) you to enjoy your family dramas and football and drunken parties with friends if you leave me to enjoy my quadcopters and 3D printers.
Yea, I'm beginning to think Steve Jobs was Apple. Apple needs a chief who lives gadgets, demands perfection, and most importantly grew up poor or middle class. I don't think there are too many Steve Jobses out there. You can't produce another Steve Jobs with a good resume or education. I think that rebellion in him, maybe because he lived through the 60's, is missing in many CEOs. He was really one of a kind! I always thought I wouldn't want him as a friend, but I would want him running my company. (I'm not condoning angry perfectionists who happen to be in power -- you people are not Steve Jobs, no matter how hard you try.) In my life, I have only seen one Steve! RIP!
>Strange considering that the only reason you're alive today is because of technology.
Citation needed. The main reason people don't die at birth as much or live longer is not some high technology, but a few basics: understanding of bacteria and the need to sterilize, access to running water, etc.
Well, in that sense even bird nests are technology. And chimpanzees are known to use tools.
So, duh, just because we've benefited from such low-level technology doesn't mean we have to like any and all technology, or that it isn't diminishing returns above some point.
Or, conversely, animals are alive today even without technology (and some are dying because of it, perhaps even man, if we go on with a nuclear war or some other such BS).
Eh, first time I played Nintendo Land, it felt like a new experience to me. The asynchronous gameplay of 1 vs. 4 players in some of the minigames was unlike anything I'd played before. Gave me that "delightful" feeling that you only get every few years when experiencing something new and exciting.
I think "cats" in this context refers to all the stuff on the web that doesn't seem important, but that people actually like to use the web for; including pornography.
Your English text (and English in general, as we know it) will not be in use in 10,000 years.
There seems to be a widespread, almost instinctive desire in humans to achieve permanence (works or monuments), which I do not fully understand. People want to make a dent in the universe, to be 'remembered as heroes', etc. Nothing lasts forever: not your works, not your memory. All will be lost in the mists of time.
I don't see this as a reason for nihilism either, you can make a difference with your efforts - it just won't last forever (or even 1,000 years, let alone 10,000). That's ok.
>I don't see this as a reason for nihilism either, you can make a difference with your efforts - it just won't last forever (or even 1,000 years, let alone 10,000)
Can you really ‘invent’ fire, though? I think ‘discovering’ would be a more apt word. Maybe you can invent a specific way to make fire, such as rubbing sticks together, but evidently we don’t do that anymore.
My ex is a Buddhist. Bonita and I often discussed this.
When I am gone I won't be around to know whether anyone remembers me. But when the end is near and I look back on my life, I will feel it was well-lived if I created something that is of at least plausibly lasting value.