Fraser Speirs's iPad commentary: Future Shock (speirs.org)
82 points by cpr on Jan 29, 2010 | 39 comments



While that's true, the orthogonal skills can sometimes make the real work easier. Knowing even a small amount of VBA could save you hours in Excel. Knowing a small amount of Python could let you rename every mp3 file on your hard drive in 5 minutes. When you remove access to the low levels of the computer, you remove a lot of the ability to solve one-off problems. There are people who put panels inside panels on WinForms just to get a border color other than black, because they aren't aware they can override Panel.OnPaint. Having only high-level access may make easy things easier, but it can also make hard things much harder.
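
For a sense of scale, a rename job like the mp3 example above really is a handful of lines. A rough Python sketch (the folder location and the renaming rule are made-up examples):

    # Walk an assumed music folder and tidy every .mp3 filename,
    # here just by turning underscores into spaces.
    import os

    music_root = os.path.expanduser("~/Music")  # assumed location

    for dirpath, _dirnames, filenames in os.walk(music_root):
        for name in filenames:
            if name.lower().endswith(".mp3"):
                cleaned = name.replace("_", " ").strip()
                if cleaned != name:
                    os.rename(os.path.join(dirpath, name),
                              os.path.join(dirpath, cleaned))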

There was a great submission here recently called "Programming as a super power" (http://blogs.msdn.com/alfredth/archive/2010/01/20/programmin...), where the author said that "there are more programs that should be written than professional programmers can ever write." This could be abstracted to "there are more things that should be done at a low level than superusers can ever do." So if the iPad doesn't let you do things at a low level, there will be negative consequences.


Let's take it as a given that the iPad and like devices won't kill normal computers any time soon, if ever, so those of us who know we want to tinker will have a place to do it.

The question, then, is if one is designing a mass-market computing device, is the ability to tinker with it going to produce enough good in the world to offset all the misery it causes to other people? For every person that saves ten hours using VBA to program their spreadsheet, how many other people lose hours because they get a virus or install incompatible programs or corrupt some obscure system file?

In other words: you're 100% correct that there will be negative consequences, but don't forget that there will be positive consequences as well. It's a judgment call as to which side you think weighs more heavily and where the overall balance will be. Yes, you lose something important and valuable when you lose the ability to tinker. Yes, the world would be better if you could have that AND not have people frustrated and crippled by their technology. But no one has figured out how to do both yet.

I think it's fair to say that you have to make the choice, and you can't have it both ways. And given that choice, my personal accounting tells me that we gain a lot more by removing the frustration than we lose by removing the power. While we lose something, I think we gain a lot more, and I don't see any way to gain it without giving something up.


I think there are more solutions than just open or closed. I just recently tried my hand at Blackberry development. It runs two types of code: signed and unsigned. Accessing certain APIs requires signed code.


The question, then, is if one is designing a mass-market computing device, is the ability to tinker with it going to produce enough good in the world to offset all the misery it causes to other people?

Excellent question!

I would answer yes. I would also add that I think geeks often underestimate how much simple tinkering non-geeks sometimes do.

The number of competent Windows and Macintosh users is constantly increasing. Even for those outside geek circles, this basic competence can be a jumping-off point for further tinkering.


Programming is a super power – but if you want to be able to use it, you need years of training. Not in the strict sense. But you need to be motivated, you need to invest time, you have to fight with the (foreign and abstract – non-intuitive) basic concepts and many people won’t have the time to do that. Becoming a lawyer is quite hard enough, thank you.

Just as it’s hard to be a doctor and a lawyer at the same time, so is it hard to be a programmer and a lawyer at the same time. Even if you only know a bit about medicine. Even if you only know a bit about programming. That’s not to say you can’t do it, but it’s not easy and many won’t do it.

It’s frustrating: I (not being a programmer) spent a few hundred hours of my life understanding programming and programming. Heck, I can explain bisection search or bubble sort to you. I can write cute little (clumsy) programs that do stuff (useless stuff, that is). But that’s not enough to save me hours in Excel. The most frustrating part about this is that I know what kind of automation is theoretically possible but I can’t actually do it. That knowledge is so often so unbearable. But I don’t have the time to invest a few days’ worth of work into understanding how to go from theoretical to practical, not for this one specific problem.
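
(For reference, a bisection search of the kind mentioned really is only a few lines; a rough Python sketch:)

    def bisect_search(items, target):
        # Classic bisection (binary) search over a sorted list.
        lo, hi = 0, len(items)
        while lo < hi:
            mid = (lo + hi) // 2
            if items[mid] < target:
                lo = mid + 1
            elif items[mid] > target:
                hi = mid
            else:
                return mid
        return -1  # not found

    assert bisect_search([1, 3, 5, 8, 13], 8) == 3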

But that’s just how it is. You can’t be an expert at everything.


> many people won’t have the time to do that.

The basic concepts and main ideas aren't that hard: certainly no harder than understanding high-school-level mathematics. Ever see what Alan Kay was able to teach children to do in the 70s, with an intuitive enough development environment and programming language?

Being able to write straightforward little algorithms to solve data-munging tasks is more and more a matter of basic literacy, especially for people like lawyers, or scientists, who need to sort and sift through masses of data, find the interesting patterns, and explain them to the world.


Maybe, maybe not. I can certainly write straightforward little algorithms, but I always get lost when I want to hook them up to anything useful (writing algorithms is still hard, though, especially if you don’t do it constantly but only from time to time).

There is no easy way to do this at the moment. Not if you want to ignore details, just write something and have it work. The best I can do at the moment is guess if something is possible to do, guess the amount of work and tell someone (who’s an expert) what to do.

You are probably right about scientists, though. It’s a pain to see social scientists working with the mess that is called SPSS when r is so cool. I don’t know why that is but when toying around with r I feel for the first time that I am able to actually do something useful with what I know about programming.


Oh come on. A couple years ago I was able to teach a college friend, a political science student, who had never done a lick of programming, and who had last taken a math course in high school, enough Python to scrape down data from a bunch of websites and do some simple analysis on it, in about 2 hours.

Granted, he's a smart guy, and obviously he should really spend a few weeks or months learning the ins and outs if he wants to get especially sophisticated, but seriously, these things are essential for anyone with any kind of research or curation job ... and frankly for anyone: so much of our society runs on searching and sorting through masses of data that it’s important to understand just to know what’s going on, even for those uninterested in answering their own questions.
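
For a rough idea of what that kind of two-hour lesson covers, here is a minimal Python sketch of the "scrape a page, then do something simple with it" pattern (the URL and the thing being counted are invented for illustration):

    import urllib.request
    from html.parser import HTMLParser

    class LinkCounter(HTMLParser):
        # Count the <a> tags on a page -- a stand-in for "simple analysis".
        def __init__(self):
            super().__init__()
            self.links = 0

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links += 1

    html = urllib.request.urlopen("http://example.com").read().decode("utf-8", "replace")
    counter = LinkCounter()
    counter.feed(html)
    print("links found:", counter.links)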


"While that's true, the orthogonal skills can sometimes make the real work easier. Knowing even a small amount of VBA could save you hours in Excel"

"""It is a very important lesson in rationality, that at any time, the Environment may suddenly ask you almost any question, which requires you to draw on 7 different fields of knowledge. If you missed studying a single one of them, you may suffer arbitrarily large penalties up to and including capital punishment. You can die for an answer you gave in 10 seconds, without realizing that a field of knowledge existed of which you were ignorant.

This is why there is a virtue of scholarship."""

- Good Ol' EY - http://lesswrong.com/lw/qx/timeless_identity/

This is the answer to the endless questions from my schooldays - "when are we ever going to use this?" and "will it be on the test?". If you know something, you can recognise when you are in a situation where it is useful and benefit from knowing it (or choose not to use it). If you don't, you cannot recognise when you are in a situation where it would be useful, will suffer arbitrary consequences from the lack of it, and won't even realise that there was a missed opportunity.


The corollary is that it ought, then, to be more important to teach how to recognize which specialisms different situations require than to teach the specialisms themselves. That way, more people benefit from the powers of delegation and specialization.


The iPad is great technology, to be sure. And a lot of that technology may be incorporated into the future of computing. However, I'd suggest that the shock comes from the fact that this particular iPad is a computer for the past, not the future. It's great for moms and grandpas--people who grew up without computers. The moms and dads of my generation--people who are in their teens and twenties now--are pretty comfortable with the computers we have already. We want a computer that incorporates the technology of the iPad to enhance the power and flexibility that we are used to, not cripple it. That's why the iPad is disappointing--because we wanted the future, not an indicator of the future.


Anecdote: Wednesday evening, my younger sister's laptop crashed while shutting down. She pressed the off button and the screen went dark. The next day, she pushed the power button; the screen lit up and displayed the crashed logging-off screen. She had to call me up so I could teach her how to use the power button to restart the computer - 1 sec for sleep, 10 sec for really off.

A few people in their teens and twenties are pretty comfortable, many more are not.


Of course, but I'm not saying that computers shouldn't become easier to use and more elegant--they absolutely should. I'm saying that they should become more capable at the same time, not less capable.


Capable is a tricky word; without a user who can use it, the computer is capable of nothing.


It's more of a computer for the present, then, not the past.

It will make money for Apple in a similar fashion to the way the Wii made money for Nintendo. If it truly is for the masses, then success or failure depends only on whether those masses adopt and use it.


That's a very good strawman. Nobody's complaining because it's too simple. As far as I can tell they're complaining because it's not open.


Yes, I agree with this idea 100%. Somehow all the features of the iPad have been deemed interconnected and necessary, as if it would be impossible to create a machine that is both easy to use AND doesn't have Apple arbitrarily rejecting apps.

I wish we would approach these two subjects in a completely separate fashion: yes, the iPad proposes a new interface to computing that is quite revolutionary and, I think, ultimately healthy. Separately, it continues to push an ecosystem that is closed and unhealthy.


I think they are connected. Part of the reason people are so comfortable with the iPhone is they really can't screw it up. They can't download any apps that are spyware, or viruses or are going to steal their credit cards. They don't have to worry about what attachments they can and can't open. The very locked down nature of the device means it's a controlled environment that you can turn people loose in and say go nutz. That most certainly is not something you can do with an open computer.

So I don't think you can separate the environment from the equation; it's part of the entire end-user experience.

Personally I think it's putting the onus back in the right place, which is on developers to make things work and make them work dead simple.


You're generalizing from one (incompetent) example, and that's despite lots of counterexamples. Someone earlier mentioned Blackberry development.

In theory, yeah, if you run your own code you can run bad code, but it should be up to Apple to ensure that newbies never have to leave the App Store. I can't remember the last time I went hunting for code instead of using apt-get. Sandboxing isn't hard either.

With luck Apple will look aside as everyone jailbreaks it. After all they sell hardware, not software.


Can software that is open without bound remain simple (non-complex), logistically, politically, technically?


If a user is confused about all the choice they can just stick to the software Apple and a handful of popular vendors provide. It stays simple by the fact that they can just ignore all the other choices.


See: Paradox of Choice. Ignoring something is also a choice.

To actually make things simpler you've got to make it easier to choose. When you download a desktop app, you've got to trade off between "is this app useful to me?" and "will this app break my computer?". Since apps on the iPhone are sandboxed, and are guaranteed to uninstall cleanly, you don't need to worry about the second question.

Also have a look at what Microsoft are doing with permissions in .NET; they are slowly making it easier for Windows applications to be sandboxed. In a secure Linux environment, it is standard practice to give different apps their own users, with tailored permissions, e.g. the www user.
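
A rough Python sketch of that "own user per app" idea (the account name and command are hypothetical, and the launcher has to start as root to be allowed to drop privileges):

    import os
    import pwd

    def run_as(username, command):
        # Look up the dedicated low-privilege account, fork, drop
        # privileges in the child, then exec the application there.
        info = pwd.getpwnam(username)
        pid = os.fork()
        if pid == 0:
            os.setgid(info.pw_gid)
            os.setuid(info.pw_uid)
            os.execvp(command[0], command)
        os.waitpid(pid, 0)

    # Hypothetical usage: run_as("www", ["/usr/sbin/httpd"])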


How many years has Linux been trying? 10? More? And they're still not there. Android is probably the next best hope.


"""There's another reality distortion field at work, though, and everyone that makes a living from the tech industry is within its tractor-beam. That RDF tells us that computers are awesome, they work great and only those too stupid to live can't work them"""

Not everyone. It's more and more obvious to me that I only thrive in computing where I can control my environment. I do make a living from the tech industry, and I find myself in the same position as "normals" do when faced with "software for normals", and I hate it as much as "they" do.

Error messages are meaningless to "them" but meaningful to me. So called 'friendly' error messages are also meaningless to "them" and worse they are meaningless to me as well.

Hiding complexity isn't simplifying things. Arjan van de Ven said about the 5-second Linux boot project: "Don't settle for 'make boot faster'; it's the wrong question. [..] It's not about booting faster, it's about booting in 5 seconds". - http://lwn.net/Articles/299483/

Apple are saying "it's not about a nice GUI over the top of a complex system, it's about a simple system". They've thrown out everything they can get away with and then a bit more; which Linux distros can't or won't do. Linux is 'simplified' by people who don't want or need it to be simple, Windows is 'simplified' by people hobbled by backwards compatibility, inertia and lack of focused obsessive direction.


"They've thrown out everything they can get away with and then a bit more; which Linux distro's can't or wont do."

Bravo. I would hazard a guess that ChromeOS is Google's stab at this. There's also Jolicloud and some others for netbooks, so maybe there's hope in that direction.


Android's openness has already been a thorn in its side. Right now we're seeing the splintering of the Android platform thanks to meddling by individual hardware manufacturers - consumers are getting wildly different user experiences, some worse than others, depending on who they buy an Android phone from. This is disastrous for a platform trying to gain mindshare.


The thing to consider is who Android is meant to gain mindshare with. If it is manufacturers and service providers, the splintering isn't disastrous at all. It has achieved great market penetration because it is a ready-to-go affordable option. The customizability mainly serves to reassure skittish manufacturers who might choose differently if that wasn't a "feature."


Very true, and note that the Nexus One page (http://www.google.com/phone) makes no mention of Android. If you look at the tech specs page you'll see it listed as "Android Mobile Technology Platform"; that's some serious vendor-oriented naming.

I think this is a very significant point that is often misunderstood. Android isn't something that competes with the iPhone OS, it's the platform you build that competitor on.

I'm not sure that competitor exists. Not for lack of quality in devices, but for lack of the right approach. Google is saying "write apps for Android" when they need to be saying "write apps for the Nexus One (and the Nexus Two will be the same, but better)".


People are already confused which Google technology to bank on for mobile: Android, or Chrome OS?


As the author wrote it, he's also criticizing Windows, OS X, and anything that's not the iPad. See his Finder example. Instead of focusing on people who want to run their own programs, he says they don't want to make easy OSes. (It doesn't make sense to me either.)


Here's a Star Trek prop from 1987, called a PADD.

http://imgur.com/e3fju.jpg


Funny that 80s and 90s sci-fi is beginning to look just as archaic as 60s sci-fi (blinking light bulbs and many switches) looked then. We’re not quite there yet, but soon we will be.


Very true--my old phone was essentially Kirk's communicator, and the iPhone surpassed it.

http://images2.fanpop.com/external/860814


Heh, this picture kinda gives an answer to the complaints about why the iPad's bezel is so wide :)


Will the iPad actually be useful for the things people actually need to get done to do their jobs, and contribute to a better world because of it? I think in very limited cases it will, but will mainly help non technical people consume more media. Is this a good thing? I personally think not, because I like nature too much. I am happy that some people do not spend their lives glued to a screen. Others may differ on this. In terms of useful applications, for doctors and for others who need convenient information retrieval, the iPad will be great, and I welcome it. For the most part, though, it will just be a gadget that people will get addicted to and waste more of their worthwhile lives on. Real work will be done on real computers (with a keyboard).


but will mainly help non technical people consume more media

Why is "non technical" there? Do electronics engineers construct TV and Radio sets for themselves, or just grab one off the shelf and are happy with it?

I do not want to write a browser; when I want to consume the web I will be more than happy to use one someone already wrote — given that it is good enough for the task. What the iPad comes with is more than good enough for the things it is intended for.


By non technical, all that was meant was, people who currently find technologies hard to use. I agree it will help everyone consume more media, but it will particularly help these people.


Reading all the comments about the iPad, I can see how people fantasize and abstract things. I mean, the iPad is just the result of a trend that derived from the success of the iPhone, the Kindle, etc. The iPad as computer killer, simplicity/complexity that helps/destroys the world, et cetera.

I'm not talking only about the product itself, but about derivations of the product, the mission, and all this stuff. By fantasy I mean that they overestimate, creating tales out of what is simply a business decision following current trends.

Not just hype, but fantasy.

EDIT: I'm not saying that it's a bad thing; I really like to learn new perspectives. Fantasy in the sense of fanboys of Star Trek, Apple, WoW, Dungeons & Dragons.


Sigh. Why does everyone have to labor under the belief that background processes are somehow impossible for iPhone/iPad hardware or Apple? Apple can create an API to register short functions with hard real-time constraints. That allows OS X (Apple) to dole out milliwatts to background apps exactly as it sees fit while satisfying 90% of devs' needs. I predict it will happen.
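
Purely as a sketch of the idea, not any real Apple API: apps hand the system short callbacks plus a time budget, and the system cuts each one off when its budget expires. In Python (SIGALRM is Unix-only and stands in for whatever enforcement the OS would really use):

    import signal

    _tasks = []

    def register_background_task(func, budget_seconds):
        # Hypothetical registration call: a short function plus the
        # maximum whole seconds it is allowed to run.
        _tasks.append((func, budget_seconds))

    def run_pending():
        # The "OS" side: run each registered task under a hard cutoff.
        def cut_off(signum, frame):
            raise TimeoutError("budget exceeded")
        signal.signal(signal.SIGALRM, cut_off)
        for func, budget in _tasks:
            signal.alarm(budget)
            try:
                func()
            except TimeoutError:
                pass  # the task simply gets cut off
            finally:
                signal.alarm(0)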



