One truism of AI research (and 'human augmentation' sits squarely in the AI space) is that the best brains have overestimated our ability to write useful rules. In a sense our legal systems and national economies are large-scale structures for organizing human effort into something greater. In the 50s people thought white-collar bureaucracy was the crowning achievement of the 20th century (see White Collar by C. Wright Mills).
Other than 'faster mail', it's not clear that any rule systems of the consumer internet make people more efficient workers. No futurists predicted that grindr & SMS would be the best our century had to offer re: automating human resources.
Neural nets were certainly hampered until the 2000s by weaker CPUs, but I wonder if part of the problem was the expectation and hope that humans would be able to write useful rules. This is a very old illusion.
Try: e-commerce and search engines. We are pretty good at getting used to technology, to the point that we forget we ever had to search for anything in books and encyclopedias, or call a travel agent to book hotel reservations halfway across the globe, or comb the yellow pages for a distributor of a part we needed. Now, it's not that I disagree that the march toward automation advances more slowly than predicted in a lot of areas, but we have definitely come up with systems that significantly improve how things are produced and distributed compared to the mid 20th century...
I still relish this true story of progress: my father, who started programming in high school with punch cards that he had to send off to the nearest university and then wait a week for results, now has a computer in his pocket with orders of magnitude more computing power and capacity, plus it's connected to a storehouse of information that one could not exhaust in a lifetime. Sure, optimists will predict more favorable outcomes, but as XKCD (rightly) says to those asking "where's my flying car?", the answer is: we have helicopters.
e-commerce is cool but not because we've invented smarter rules for it. amazon isn't that different from sears 116 years ago; collab filtering is a difference, but that's a stat algo, not something humans design. They also use TLA+ for their distributed systems -- i.e. they're solving problems at the limit of unassisted human understanding.
(TLA and CF are definitely solid examples of human augmentation; but that doesn't mean e-commerce is. the merchant is being augmented, not the customer).
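To make the 'stat algo' point concrete, here's a minimal item-item collaborative filtering sketch. The ratings matrix and numbers are made up for illustration and this is nothing like Amazon's actual system; the point is that the similarity "rules" are computed from the data rather than written by a human.

```python
# Toy item-item collaborative filtering (hypothetical data, not any real system).
import numpy as np

# rows = users, cols = items; 0 means "not rated"
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    # Similarity between two item columns, derived purely from the ratings.
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

n_items = ratings.shape[1]
sim = np.array([[cosine_sim(ratings[:, i], ratings[:, j])
                 for j in range(n_items)] for i in range(n_items)])

def predict(user, item):
    # Weighted average of the user's existing ratings on similar items.
    rated = ratings[user] > 0
    weights = sim[item, rated]
    total = weights.sum()
    return float(weights @ ratings[user, rated] / total) if total else 0.0

print(predict(0, 2))  # predicted rating of user 0 for item 2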
grindr as a proxy for any service that says 'match me with an arbitrary person with these characteristics at this place and time'. grindr was one of the early successful ones. stackoverflow careers (or monster.com, god help us) also belongs on this list.
and SMS as an alternative to making plans and sticking to them; remember when you had to be on time and couldn't edit plans on the way?
Taken individually, all the improvements might seem merely like changes in degree -- Amazon is an improved Sears, Roebuck; Wikipedia is an online version of Britannica, etc. -- but in the aggregate it creates a difference in kind. By making things easier and faster, it becomes possible to do more.
I'm sure I'm not the only person who has done more -- and I don't necessarily just mean work at my job, I mean hobbies and projects and artwork and things that I want to do, completely disconnected from needing to do them -- because it's a lot easier to find information, order things, talk to other people, etc. Projects that would have just been too complex to get off the ground a few decades ago, because they would have involved multiple library trips and probably bunches of ILLs and perhaps correspondence with various people, each letter having a significant roundtrip time, are the work of a few nights of reading online, a couple of online orders, and a weekend.
It's not AI, but it's certainly an enhancement. I'm not sure that there are any entirely new categories of things that people, as a group, can do that we weren't able to do before computerization, but it's certainly possible for an individual to do more, and a number of artificial limitations have basically disappeared.
On the wiki page for intelligence amplification, they say that it is often contrasted with AI (IA vs. AI, quite convenient acronyms). That is, we can focus on making computers smarter through AI, or focus on making computers enhance our own intellect (IA). It seems like an easier problem, but actually, we still haven't come very far with this (as you might agree in your post). Computers seem to have a problem with abstract reasoning, but that is where we struggle also.
I like to think that the next step for programming languages is to become the anti-AI, since programming allows us to exert incredible power over a computer, but we need a lot of assistance in wielding that power (e.g. through live programming). Anyways, researching that is how I came across this article.
> It seems like an easier problem, but actually, we still haven't come very far with this.
I think to some extent there's a variant of the AI Effect (the IA Effect?) happening here. There are many ways that computers, through their ability to reliably store and manipulate huge datasets and to rapidly execute algorithms, have allowed us to do many things that would be impractical or infeasible without them.
We tend to take those things for granted now because we've forgotten how onerous they would be to do by hand. I'm thinking of:
* IDEs with code completion, refactoring tools, context dependent documentation popups etc.
* The knowledge-base searching power of Google search and the like.
All of these things allow humans to tackle massively greater cognitive tasks than we can handle unassisted, even if on their own they're not what you'd consider "intelligent". Even just pencil and paper have the ability to amplify human intelligence to a remarkable degree.
It is truly remarkable, at least to those of us who lived through it - who wrote code on green terminals and eventually found the power amplification of a good IDE, and remembered when documentation was something on paper. Now we have people who cannot program without that IDE, and copy 'n paste from other people's dodgy code. (This may just be a variant of "Get off my lawn" of course)
ONLY increasing communication throughput and decreasing prices and latency by orders of magnitude. That's got to be the understatement of the year :)
Imagine it's 1980 and you want to create wikipedia and provide it to 3 300 000 000 users (roughly the amount of people on Earth that have internet now). How would you price that?
> the best brains have overestimated our ability to write useful rules
Nah, we (all humans, not just the brightest) can come up with useful rules just fine. What we can't do is write useful rules that describe systems we don't understand. This is the crucial part: understanding. Somehow people ended up buying a false promise that computing power can replace human insight as a tool for taming complexity. The art of making models, of coming up with the right simplifying assumptions, of getting the biggest explanatory bang for your complexity buck, has largely been lost.
> What we can't do is write useful rules that describe systems we don't understand.
Absolutely. But there are other limitations: We have trouble coming up with rule sets that are complete, non-contradictory, and that really span the domain, even on domains that we understand. (You could argue that we only mostly understand those domains...)
> We have trouble coming up with rule sets that are complete,
That's okay. Incomplete rules are better than no rules at all. There are modeling techniques that require fewer assumptions about the domain. For example, https://en.wikipedia.org/wiki/Nonparametric_statistics can deal with data sets about whose distribution you want to make no assumptions. Also, using an incomplete model and measuring its inadequacy can help us construct a better model.
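To make "no assumptions about the distribution" concrete, here's a tiny illustration using a rank-based test; the sample values are made up and scipy is assumed to be available.

```python
# A nonparametric test makes no normality assumption about the data.
# Sample values are invented purely for demonstration.
from scipy.stats import mannwhitneyu

group_a = [1.2, 3.4, 2.2, 5.1, 2.8, 4.0]
group_b = [2.9, 6.3, 5.8, 7.1, 4.4, 6.0]

# Mann-Whitney U compares the two samples using ranks only,
# so it stays valid even if the underlying distributions are skewed or unknown.
stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(stat, p_value)
```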
You can split the domain into regions that you understand in isolation, then patch everything together, possibly using some smoothing transformation to avoid pathological behaviors at the boundaries.
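A minimal sketch of that idea, with hypothetical data and an arbitrary boundary at x = 5: fit a simple model per region, then blend them with a smooth weight instead of a hard switch.

```python
# "Split the domain, model each region, blend at the boundary" (toy example).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.where(x < 5, 2 * x, 10 + 0.5 * (x - 5)) + rng.normal(0, 0.3, x.size)

# Fit a simple linear model in each region we "understand in isolation".
left = x < 5
cl = np.polyfit(x[left], y[left], 1)     # left-region coefficients
cr = np.polyfit(x[~left], y[~left], 1)   # right-region coefficients

# Smooth blending weight around the boundary instead of a hard switch,
# which avoids a jump (pathological behavior) at x = 5.
w = 1.0 / (1.0 + np.exp(-(x - 5) / 0.5))  # ~0 on the left, ~1 on the right
y_hat = (1 - w) * np.polyval(cl, x) + w * np.polyval(cr, x)
print(np.abs(y - y_hat).mean())  # rough fit error of the patched model
```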
So did we make any progress in the last 50 years? Is the Flynn effect (https://en.wikipedia.org/wiki/Flynn_effect) a natural side-effect of the increasing complexity of a society, or the result of active work and research toward increasing society's intelligence? I'm very fascinated by this area, as I'm sure many of you here on HN are as well.
> a natural side-effect of the increasing complexity of a society, or the result of active work and research toward increasing society's intelligence?
I think the two are interrelated, and maybe one might be a function of the other -- that is, "society's intelligence" is a function of the operations we can perform unconsciously. We are more intelligent than before because we don't have to reinvent everything from scratch, we can build on what others have learnt and discovered before us.