Having a billion transistors is not 10 times as useful as 100 million transistors. The diminishing return kicked in even before that. Unless a killer application needs a billion transistors, the value created over a previous iPhone is incremental.
The denominator is how much the invention makes our life better as measured by (the amount of time * the extent it helps).
By this measure, smartphones are not bad but probably lose out to the Internet, washing machines, and air conditioners/heaters in hot/cold climates. Google is a pretty big improvement over older search engines and is worth quite a lot, although the price we pay in privacy is not obvious.
> Having a billion transistors is not 10 times as useful as 100 million transistors. The diminishing return kicked in even before that.
I think this is far more difficult to evaluate than your comment admits.
More transistors -> more software that can do more things, written by developers with a wider range of skills, in a society with increased tech permeation, and so on...
The trade-off is usually ~same functionality but better graphics. Compare Office 1997 with Office 2017 and the average user sees a very modest improvement relative to how much better a computer you need.
I don't think that's entirely clear. At least for CPUs, fewer transistors in practice means you get rid of superscalar architectures and over-engineered microcode decoding and branch-prediction units. This leads to a modest reduction in processing speed with far larger gains in battery life for mobile devices (think x86 versus ARM). Client-side applications are mostly unaffected in terms of end-user functionality.
So true! Sadly, the relevant Econ agencies work opposite to your logic. The worst example was when a Fed banker insisted that faster iPads cancel out food inflation:
You're right, 10x the transistors is less than 10x as useful. 10x faster CPU in an iPad is not 10x better, because you won't be able to harness all that speed before you hit another bottleneck. And you can't buy a fraction of an iPad. And you definitely can't eat one.
your first point about diminishing returns posits a reasonable (if ambiguous) definition of (incremental) value to the consumer: that it should improve the lives of users.
but then you say...
> Google is a pretty big improvement over older search engines and is worth quite a lot, although the price we pay in privacy is not obvious.
which is puzzling in light of your very own definition of incremental value.
is google's search engine so much better than bing or yandex or anything else that it deserves a much higher valuation? if you apply your own incremental definition of value to search itself, the billions of searches done on other search engines seems to contradict that conclusion. it seems the other search engines are just fine for many situations, so the incremental value of google's search is not high on either your "time" or "improvement to well-being" metric (price is hidden in ad revenue, but likely higher for google).
google's valuation broadly comes from (1) its ad business and (2) speculation that google will conquer another industry, not from its search technology per se, although search is certainly a core component of the ad business.
it seems to me the way that search contributes to value is that it exhibits winner-take-all characteristics (not considered in your model), so that even a slight (perceived) improvement leads people to flock to the better search engine en masse (which is how google won search in the first place, then they developed/borrowed the ad auction model and became a runaway financial success). by garnering more search queries, google has greater and better ad placement inventory (i.e., your eyeballs) to sell to advertisers.
I compared Google to "older search engines" like Altavista and Excite in the 90's. Bing and other current engines are indeed much closer to the current Google in quality than those older engines to the initial Google at its inception. The delta in quality that Google provided when it was invented was a huge factor in getting people to switch from well-established engines then.
One can build a computer using just gears, pulleys, and other mechanical devices, without using any electricity at all. It would be mind-boggling to compare the cost of building a mechanical iPhone X in the 1600s.
Offtopic: The second of the three pictures at the bottom of the post, the one with the big yellow circle monitor looking thing, is a radar monitoring station. It was meant for long shifts, requiring dedicated attention from a soldier. In support of that purpose, it has a built in cigarette lighter and ash tray.
The picture is from the Computer History Museum. Well worth a visit.
Another quite interesting item in this museum is a former ballistic missile [1] that, after its retirement, started an academic career as a university general-purpose computer [2].
Comparisons like that always assume that the price stays constant with increased demand, even when the entire world GDP is spent on something. What would likely happen is that the price would first go up and later come down as world manufacturing shifted to vacuum tube production.
Yes, if a government started buying trillions of vacuum tubes, companies would quickly get much more efficient at producing them and there would be a huge investment into researching alternatives.
But that brings up the question, what tech could we be subsidizing today so we can get it faster?
Energy comes to my mind: mass battery storage, solar, wind and various other renewable fuel sources. I have a buddy who was working for the Dept of Energy until just recently and was saying that the funding for renewable energy is being wound down very quickly right now, many in that sector (working for DoE that is) will be without a job soon, probably to be scooped up by the private sector. He said it was also a huge shame because those guys were working on some amazing tech that would be hugely beneficial.
We already subsidize renewables, and that is encouraging research into them. The current projection of price per watt for solar is set to plummet in the next decade and a half. It's possible that further subsidies would encourage higher levels of production and economies of scale would reduce the price quicker.
The fossil fuels industry pays extremely high tax rates on their profits and has for decades (Exxon and Chevron typically average near 35%, among the highest corporate tax rates anywhere). They've been subsidizing the research into renewables for a very long time.
While I agree there should be zero subsidies for fossil fuels, the corporate taxes they've paid drastically outweigh the modest subsidies the fossil fuel corporations have received. The only way that isn't the case is if one includes extremely fuzzy environmental damage estimates (in which case nearly every industry that has ever existed has been theoretically massively subsidized).
> The fossil fuels industry pays extremely high tax rates on their profits and has for decades (Exxon and Chevron typically average near 35%, among the highest corporate tax rates anywhere).
And yet they have still been doing more than fine for all these decades, they still do really good.
> modest subsidies the fossil fuel corporations have received
"Modest" is an odd word to use considering that subsidies come in more forms than straight up sending money to a company, companies can also be subsidized by taxing them less on certain behavior by weighting factors differently, like on long-term environmental impacts [0].
I've seen a way more detailed breakdown of this issue before in another study, but I just can't find it anymore.
> But that brings up the question, what tech could we be subsidizing today so we can get it faster?
As an American, I’d say internet deployment and infrastructure upgrades, but multiple government entities have tried that and the ISPs end up doing nothing.
It's worth investigating what Utah did, they have some nice internet. http://archive.sltrib.com/article.php?id=3062258&itype=CMSID (their internet has improved since that article, too). Those lucky people have multiple high speed providers.
It seems to be something that's easier to do at a local level.
Construction. The industry has low margins, is risk-averse, and is slow to change.
Mobile wireless - we already have theoretical innovations that could fully solve it and make it practically free.
Cargo transport. Hybrid airships come to mind.
On-demand ride-sharing (multiple people in minibuses). It wouldn't even take money, just a law demanding that all transport requests be done through a single app, and then share the data and create a market.
Nuclear for sure.
Microcontroller-based products - so much complexity is added to the field by the games MCU makers play to increase profit (limiting RAM, flash, and peripherals when they cost nothing).
Industries where planned obsolescence seems to be occurring, so we could get cars that last a million miles (Toyota had that but fixed that problem), more reliable appliances, etc.
In the 50s, that might have worked. But today, I imagine the effect would be to jack up investment in gaming the procurement procedures so the contracts would go to the most well connected "suppliers", who would actually cut corners since they know how to game the acceptance tests, and 90% wouldn't even work.
"what tech could we be subsidizing today so we can get it faster?"
That's one of Apple's brilliant strategies: instantly commoditize new technology, paying cash for new factories that produce next-gen tech components at such high volume that prices immediately drop to commodity levels.
The US is participating in the ITER project [0]; that's pretty much as good as it gets right now unless a country starts its own, very expensive, project.
There can be benefits to running multiple projects in parallel. For example, there are at least two competing reactor designs (tokamak and stellarator) where we (IIRC) still don't know which one is more promising. ITER is a tokamak, whereas Germany (which also participates in ITER) has a research stellarator in Greifswald, the Wendelstein 7-X.
If we had that then the governments would build supermassive flying submarine aircraft carriers with massive railguns and lasers and armor that is almost impervious to anything perhaps even nuclear weapons.
OT: thank you for not saying "But that begs the question." Two reasons: (1) it would have been incorrect, and (2) the correct use of the expression "beg the question" is doomed, and we should put it out of its misery.
Ha, I didn't expect anyone to notice that. I was totally going to go with "begs the question", but wasn't sure if it was correct and was too lazy to look it up.
People don't normally use the word "beg" in that sense. "I'm begging you, please shut up" is the closest to an ordinary usage of the word -- and even that's being a little melodramatic. If they're talking about highlighting something for discussion, they'd say "you make a good point," or "you raise another issue," or "that brings up a question," or "which leads to another thing," etc.
This leaves "begs the question" as a weird, pretentious-sounding, overly emphatic expression that automatically annoys some percentage of the audience and will slightly confuse the rest (for at least the next decade or so until the purists give up as they did with "irregardless," "could care less," and "literally").
Of course, if your goal truly is to unpredictably annoy semi-arbitrary subsets of people with something as neutral as your speech, then by all means, beg away. Personally, I don't have a choice. I'm a parent, and as is the case with any parent, my kids internalize everything I do. So I can't have fun adding auto-troll expressions into my communication, because they'll do the same without realizing the consequences. I don't like that the world is full of people who both (1) have the power to decide my fate, and (2) set the bozo bit from small things like the quality of my communication, but I'm pragmatic enough to accept that those people do exist and will judge my kids' fate. That's why I speak as if it matters.
Trying to answer the following might be fun as well: How much is a 1000 USD from today in that era in terms of purchasing power? What could you have bought with the equivalent amount of money, and how many people could have afforded the iPhone equivalent then?
Yep. I just checked with http://www.usinflationcalculator.com/. That site has a calculator that comes up with almost the exact same conclusion as you.
Or, you could assume an average rate of inflation and calculate this using the rule of 72. I like to keep that formula in my pocket for things like this.
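For anyone who wants the arithmetic spelled out, here's a minimal sketch of both approaches, assuming an illustrative 3.5% average annual inflation rate (the rate is my assumption, not a figure from the thread):

    # Rough purchasing-power comparison, 1957 -> 2017 (60 years),
    # assuming an illustrative 3.5% average annual inflation rate.
    rate = 0.035
    years = 2017 - 1957

    # Rule of 72: approximate doubling time in years.
    doubling_time = 72 / (rate * 100)              # ~20.6 years
    approx_factor = 2 ** (years / doubling_time)   # ~7.5x

    # Exact compounding for comparison.
    exact_factor = (1 + rate) ** years             # ~7.9x

    print(f"$1000 today is roughly ${1000 / exact_factor:.0f} in 1957 dollars")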
The factor that many seem to have overlooked with the $1000 price of the iPhone X is that, if it proves successful, it sets a precedent for future smartphones.
The spec improvements of the iPhone X do not line up with the accompanying price increase. Even by iPhone standards it's an expensive device. If you justify the price increase, you're just telling Apple you're prepared to be short-changed when the iPhone X's successor comes out.
You misunderstand. The iPhone X is the new flagship line. Next year there's most likely to be an iPhone 9 and an iPhone X2, though considering Apple's naming strategy it'll probably be called something like 'the new iPhone X'.
My point wasn't about the name but about the new product line. iPhone X is a new product line. Apple have made fundamental changes to the user interface. The iPhone 9 would be the successor to the iPhone 8. Whatever name Apple comes up with for iPhone X's successor, it's almost certain to keep the new 'all screen' interface.
Well I'm sure it's Apple's intention to not give up a super fat margin after establishing it. It remains to be seen whether in the medium term the market would keep its enthusiasm for a "legacy" iPhone range which becomes implicitly old and busted in the shadow of the "real" iPhone and competitor products.
My argument is not that Apple is looking to lose their fat margins, my argument is that they're looking to extend those margins.
The iPhone X is almost certainly higher margin than the regular iPhone. My argument was that if consumers tolerate the $1000 price this time, they shouldn't be surprised to see future high-end phones at similar prices.
It's unfair to point to one exponentially increasing technology and use that as an index for general technology.
On the other hand, it's interesting they chose to use the semiconductor industry as their example, because progress is slowing in the semi industry due to the laws of physics with respect to standard CMOS, and innovative rescues in either architectural form or through novel device physics are very unlikely due to the industry's glacial pace and apparent allergies to innovation. A field that scorns young entrants is bound to die someday.
The fact that I can, just within the last year, buy a $200 256GB SD memory card to plug into my RPi in lieu of any need for other non-volatile storage, and that I can move all my primary servers and repos onto it, or indeed that the RPi exists and appears with significant speed bumps for essentially constant size, price, and mean power consumption, shows that lots of innovation is still happening, and at a pace.
I've been in electronics and computing ~40Y, and worked with some moderately big iron on the way, eg in banking.
That iPhone has more oomph by orders of magnitude than an oil company's exploration division had not so many years ago. I worked there and looked after the machines.
> glacial pace and apparent allergies to innovation
Glacial pace? It followed an exponential, doubling the number of transistors roughly every two years for 40 years. You would be hard-pressed to come up with another field with the same pace of innovation and progress. The amount of innovation they had to come up with to sustain that is staggering.
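To make the scale concrete, here's a quick back-of-the-envelope using the commonly cited two-year doubling period (the period is an assumption on my part, not a figure from the article):

    # Transistor-count growth under an assumed two-year doubling period.
    years = 40
    doubling_period = 2                      # years per doubling (assumed)
    growth = 2 ** (years / doubling_period)  # 2^20
    print(f"~{growth:,.0f}x more transistors after {years} years")  # ~1,048,576x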
> A field that scorns young entrants is bound to die someday.
Try to make a chip, you'll quickly realize that your challenges will be primarily non technical in nature. It wasn't always this way, the semi industry was once incredibly innovative and open to new ideas, and that helped drive exponential progress.
>Who will produce your chips then?
I meant "die" to mean stagnate as the common euphemism in tech.
> I studied microelectronics. I am aware of the technical challenges. Can you explain those challenges that are primarily non-technical?
We spent far more time buying EDA software, installing it, talking to foundries, getting the PDKs, signing NDAs, dealing with buggy EDA software, dealing with slow EDA response times, etc. than actually working on our chip.
>Things like Silicon-on-insulator, high-k dielectrics, finfets, extreme ultraviolet lithography are not innovative or new ideas?
I'm not saying they aren't, but I have noticed that the general level of openness, and following that, innovation and open-mindedness, has dropped dramatically in the past decade or so. I do have to say that the general semi industry has stayed fairly innovative, and much of my criticism is directed primarily at the rest of the industry. That being said, the pace is still glacial.
Example of a real conversation I had with an engineer at one of the major (can't name the exact one) foundries about a device that's actually pretty close to reality:
Me: "Why don't you use this X device?"
Him: "Because it's still research"
Me: "Sure, but it's very promising, why aren't there at least any industrial research efforts to commercialize it?"
Him: "Because it's still research"
Me: -__-
SOI is innovative, but it's been held back by cost and the self-heating effect, both things that really aren't that much of a problem.
FinFETs were launched by a DARPA initiative.
High-k dielectrics I will say are the single most interesting (if not innovative) innovation in the last decade in the semi industry, although I have some bias there.
EUV is a feat to engineering no doubt, but again, my grievances aren't really focused in that area.
Maybe not a simple typo, but with two errors it's plausible. The iPhone X's A11 CPU is supposedly slightly faster than the A10, which is 2.3 GHz. So, place a decimal between the 2 and the 4, and change the M to a G, and you have a reasonable figure.
The original benchmarks the author likely used had a screenshot of some benchmark tool's output. It showed a 24 MHz processing speed, which I also thought was weird. Could just be an anomaly in the benchmark tool.
Either way the tool they used (geekbench) apparently looks into chip-specific optimizations for tests and the rest of their process is not disclosed so it's possible all of their benchmarks are non-uniform (again based on an optimization ticket here: http://support.primatelabs.com/discussions/geekbench/18305-o...)
Multiple milliseconds or seconds? Ridiculous! Those of us old enough to remember 9600 baud know the disappointment (in our lifetimes) of waiting many minutes to download a JPEG just to find it wasn't what we wanted. Or that MP3 track from some obscure FTP server....
But this problem applies equally to local software.
I believe there was a name for it but I cannot recall the term. The idea is that as hardware gets faster, software developers add bloat^W features so that the user always experiences the same speed. "Speed" means something like the time taken to perform some routine user task.
Microsoft has been usurping user resources like this since at least the 1990's. They always had the new hardware before it hit the market. New software was tested on new hardware that no consumer yet had.
By the time the user purchased a new computer, the company had a new software version that already uses up whatever resource gains the new hardware provided. Pre-installed. The end result is the user experiences the same speed.
If the user ran the old version on new hardware, they might see a speed increase.
But the company makes it very difficult to do this and/or wages a relentless marketing campaign to convince users to try the new version and ditch the old one.
The new software version did the same basic tasks the older version did, but because of the bloat^W features the speed was not any faster for the user. Not to mention how the new version would usurp gains in storage space and RAM as well.
Incidentally, regarding downloading images over dialup, didn't you abort unwanted images before they completed?
I recall images rendered slowly line by line across the screen.
And I remember bandwidth being too slow even on a corporate network to blindly batch download images and then delete the unwanted ones.
It was more efficient to view each one as it was downloading and abort if unwanted.
Essentially Parkinson's Law: any system will bloat to consume the resources available to it and then some.
But some of us enjoy NOT indulging in that, and having a Web page served in milliseconds from low-powered hardware, or having MIPS-speed IoT hardware run on microwatts and do a decent job using modern compiler smarts to help get there...
"By the time the user purchased a new computer, the company had a new software version that already uses up whatever resource gains the new hardware provided. Pre-installed. The end result is the user experiences the same speed."
This is true, but there's a couple of other factors involved. One is the recent gains in storage efficiency, and the other is familiarity over time with the new gains. When SSDs hit the scene, you could migrate your existing installation to solid state storage and see immediate, measurable gains. Then PC manufacturers started shipping SSDs as standard equipment, and folks got used to that level of performance. Essentially, it became the new normal. A while later NVMe hit the scene and suddenly even traditional SSDs began to feel "too slow". Again, manufacturers are beginning to ship NVMe-based units, pushing the tolerance levels even further out.
I've experienced this recently; my main workstation has a Skylake CPU, DDR4 RAM, and an NVMe OS drive, with a SSHD (solid state hybrid drive) for storage. I recently revived an older but still very fast workstation with a standard HDD, and it felt like I'd gone back to the "Windows Vista Capable" days of the late 2000s. Same OS (Windows 10 and Elementary OS), same software, technically faster CPU on the older workstation, but it was so "slow" I could barely stand to use it. I felt like I was waiting ages for applications to start or web pages to load, even though it was usually less than a few seconds difference. But oh, what a difference those precious seconds can make in human perception!
"When SSDs hit the scene, you could migrate your existing installation to solid state storage."
If by "installation" you mean the OS, I have not needed SSDs. I migrated my installation to RAM. I stopped using disk for the OS and data I am working with. I can fit everything I need in RAM. I still use disks sometimes for long term storage of infrequently accessed data, but it surprises me how infrequently I need them. For me the recognition of "immediate gains" in speed was not with the advent of SSDs it was with the availability of >= 500MB RAM. Diskless became too easy.
But I do understand what SSDs have done for other users with different requirements and I think that has been a great improvement.
Indeed, latency is very important. For a while I used an SD card + adapter in my laptop as the main boot drive (mainly for the shock resistance), and although sequential accesses were quite a bit slower than the HDD it replaced, the near-zero access time meant it was noticeably more responsive in practice with lots of random access I/O.
To be fair there has been somewhat of a shift in certain industry sectors. Some gaming companies finally recognized the importance of 60 fps gaming for many people, PS4Pro giving the user the option to choose between eye-candy or responsiveness is a very good thing to happen, with PC gaming there's been quite a boom with high-fps gaming with supporting displays becoming more affordable.
VR put latency on the general radar for many people previously completely oblivious to the issue, imho another good thing to happen.
> Those of us old enough to remember 9600 baud know the disappointment (in our lifetimes) of waiting many minutes to download a JPEG just to find it wasn't what we wanted.
It was also pretty bad with a 56k modem as far as I remember -- definitely not minutes though! The wait becomes worse when the JPEG is of a... critical nature, if you know what I mean.
Definitely for worse in many cases. Every time I buy a new phone with better specs, I expect that things will be that much better, but they never are. Instead, the power is wasted on useless things like animation, apps that I can't shut off running in the background, etc.
> animation, apps that I can't shut off running in the background, etc.
The greatest performance issues stem from the attitude of the implementers. If you assume you have more memory than you'll ever need, chances are you'll need more than you'll ever have. Also, somehow no matter how much better our collective and public knowledge of cache-aware programming gets, we manage to make the data so much larger that it doesn't matter how cache aware we are.
disregarding "apps that I can't shut off running in the background", there is no good reason that animations, even the currently selected ones, should account for nearly as much power consumption as they do.
People often claim this, but I've never really found it to be the case. Faster processors and more memory let me run the same software much faster, or better software the same speed as the old, in general. So, perhaps you are seeing the second factor happening more, but don't want or need the improvements?
I think he's lamenting that the new hardware tends to come with (or mandate) the newer software, which is generally slower, but trades this slowness for nothing of value. That is, there is no legitimate reason that the same tasks should take longer on faster hardware; but (usually) the exact same software will tend to run faster.
I don't have this problem on my workstations (generally) except when I browse the web, though recently I've had to make the practical tradeoff of owning a smartphone, where this problem is rampant.
Worse than things not getting faster, the same product tends to get slower because the software designed for the new hardware is the only maintained branch, and the versions for older hardware are just backports from future hardware.
Smartphones were solved back in 2011 with the iPhone 4s and the Galaxy S2. I still see some around, especially the iPhones. The next iterations are obviously better and faster, but they are more about keeping money funnelling out of our pockets and powering (maybe not purposely) the ever-increasing complexity of tracking and marketing tools.
I think this is related in part due to a focus on throughput metrics rather than latency metrics. Throughput is easy to measure and easy to market, latency is unfortunately less intuitive, so less important for selling consumer goods.
For the punchline, the author should have followed up:
>taken up 100 billion square meters of floor space
that is (with a three-meter ceiling height per floor): a hundred-story square building 300 meters high, and 3 kilometers long and wide
by saying,
"Oh and by the way this device has an edge to edge display that is so real you can hold it up seamlessly against a background while it invents a made-up image and draws it on, pretending it's part of reality.[1] It knows its position and can adjust to your movements. It also includes a photography studio that makes billboard-size full-color photos. It fits in your pocket and needs to have its battery recharged once per day. Also it's an entire telephone including video which it has a camera facing the front for. And you can talk to it, it has a built-in assistant. Basically, magic."
If I'm reading http://jcmit.net/memoryprice.htm right, core memory in 1960 cost about $0.60/bit (with ~10 microsecond cycle time). It does list an only slightly faster transistor memory for 1957, maybe for use in the CPU?
I'm guessing these are historical prices, not inflation-adjusted.
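As a sanity check on the scale, here's the naive arithmetic for pricing the iPhone X's 256 GB at that 1960 core-memory rate (treating flash bits as if they were core bits, which of course they aren't, and using the historical, non-inflation-adjusted price):

    # Naive cost of 256 GB at the ~$0.60/bit 1960 core-memory price.
    bytes_total = 256 * 2**30     # 256 GiB
    bits_total = bytes_total * 8  # ~2.2e12 bits
    cost_1960 = bits_total * 0.60
    print(f"~${cost_1960 / 1e12:.1f} trillion in 1960 dollars")  # ~$1.3 trillion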
This comparison seems fairly fantastical and irrelevant. Something from within the history of consumer electronics would be more useful to consider, in my opinion.
For example, in 1982, the wildly popular low-end home entertainment computer, the Commodore 64, was released at $595, or about $1500 after inflation adjustment. On release in 1977, the Apple II was $1298, or over $5000 in today's terms. You could pay $400 per 4k of ram, so $6000 if you wanted one with 12k ram. If the iPhone X came out in 1985 for this price, it would have been $400 in 1982 dollars, or 40% of the C64 - with over 4 million times the storage capacity, to say nothing of the other capabilities.
That our modern mobile devices are popularly called 'phones' misses the point that they are used as general purpose computing devices, not primarily phones or even necessarily primarily for communication. They are also GPS navigators, cameras, calculators, recipe files, photo albums, alarm clocks, book readers, walkmans, home stereos, encyclopedias, wallets, and a lot more. People used to frequently pay decent sums to buy dedicated devices to perform many of those services.
I talked to a guy with a long career in emergency management. According to him, earth.nullschool.net, which is available to any anyone anywhere today for free, is superior to many classes of tools available to folks in the field for his entire career.
His opinion was that a similar thing would literally be worth multiple billions to the Federal government in the 80s/90s.
That's the thing -- not only are phones technology marvels in their own right... they open the door to capability with almost incalculable value.
Did he give you any ideas of how that tool is actually used by people in the field? It's a nifty visualization, but I'm struggling to come up with how it could be worth billions of dollars. What kinds of scenarios is it providing answers to that would otherwise be really difficult to get?
What I think is also important to consider is median income and net worth of US citizens at that time.
In the early 50s, much of Europe and Asia was still rebuilding from war, and the US held an effective monopoly on multiple industries.
From the data I could find for 1952 in the US, a house was 5 years of income and a new car was half a year's income.
For many people in the US today, a new car (post tax dollars) is either completely unattainable (multiple years of income) or a complete joke (one month's income).
The numbers are different for 1982, but still closer to the 1952 US situation than today's 2017 situation.
So, I guess what I am trying to say is that $1,000 for a phone is still fabulously expensive for the 99.3% of the world today who aren't millionaires.
EDIT :
Thank you for your insight :)
I am thinking that maybe I am just very different from most consumers, or that our definition of "afford" is very hazy.
I am writing to you either from my $50 cracked screen wifi only iphone 5c, or my $40 2009 lenovo thinkpad running lubuntu.
What is strange to me is that I'm not buying this because I am price sensitive, but rather because "it is enough for what I want to do".
I have a hard time believing that a large percentage of the US market will buy an iphone X. Or if they are buying it, then they don't fit my definition of "affording it". Here's why:
Just 36.8% of US adults have a net worth of over $100k.
That means 63% of US adults would be spending more than 1% of their net worth on a mobile computer (phone) that is not very much different from a $300 phone. How many of these users will Apple capture?
Additionally, many people's net worth in the USA is tied up in real estate or retirement accounts. So these funds are not available to be spent, and it manifests itself in the following stat:
In 2016, 63% of Americans said they could not come up with $500 to cover an emergency purchase.
So, tons of americans can't even afford basic expenses, let alone luxury phones.
That is why I feel this is a millionaire's phone only.
$1000 USD is 1.8 weeks full time of the minimum wage in my country (Australia).
It might be dumb for a minimum wage earner to spend 1.8 weeks of their income on a mobile phone, but I absolutely believe a certain subset of working-class Australians will, with some difficulty, manage to scrape together enough to buy one.
Or, more realistically, they'll finance one on a $100/month phone plan, which I also think is a bit ridiculous and unwise, but not so far fetched as to be impossible for many lower-income people who really want to do it.
Now, if you're middle class... let's just say that as someone who earns a salary that's within the same ballpark as the national median, I could go out and buy several iPhone Xs tomorrow if I had to...
Here's a chart of how a middle class American family currently spends their money.
Please note 74k in income is pre tax, and 57k in expenses is post tax.
So, the average American family is actually spending every dollar they earn.
I don't know where technology fits in to this current budget, but you can see it's pretty much pegged due to housing and healthcare.
$1,000 for one phone is out of the budget for all middle class Americans, period, unless they begin to cut back on cars or food.
Which, you can assume that some might, but on the aggregate, they won't.
I could go out and buy literally 100 motorcycles tomorrow, but I wouldn't even buy one, and I damn sure wouldn't buy one that was 5x more expensive than a similar motorcycle that isn't "luxury".
I disagree. I expect the median middle class American to have an iPhone X or other phone that costs within 15% of that. I ride the public transportation in a not-so-well-off area and new, high end phones are ubiquitous. I think the miscalculation that you're making is underestimating how important phones are for people, both in their usefulness and in their status. People will absolutely save money for an iPhone that won't/can't save money for emergency costs. The only question is whether the iPhone 8 will be enough to keep people from buying the X.
My iphone 5c, cracked screen, was like $40 off of ebay or something. When it came out, the original price was like 200-400 depending on contracts and gb capacity.
I think you will see the same thing happening with iphone 6s too.
But if you told me that you see a large number of fast food employees and janitorial employees and etc riding the bus to work, with iphone 7s in their hand in the first year of launching, I'd definitely be surprised and interested.
I too rode the bus for 3 years and never really noticed that trend among those types of workers who are struggling the most. I just saw a ton of galaxy s3 phones, which are awesome phones from 2012 or whatever.
I don't doubt your experiences however! It is a really interesting thought to have. If they are affording it, I can only assume they aren't purchasing something else, like a car or high end laptop or house.
It's true, I don't really know enough to tell how recent the phone is (I'm an Android user and don't even know that market very well). I do think people spend "more than they can afford" on their phones. Compared to past generations that spent more than they could afford on cars, say, it's a relatively cheap thing to splurge on. And if they're going to splurge on one thing, lots of people will pick their phone these days.
I don't earn insane SV money but I could buy an iPhone X tomorrow; I save approx 50% of my salary every month, but then I don't drive, own a house, or have any debts.
My total owed to everyone is <1mth income.
I grew up poor though so my attitude to money is different to most people my age.
If I don't have a minimum of 6mths income in savings I feel vulnerable.
FWIW I won't be buying the iPhone X, my Moto G5 Plus is an ideal phone for me and only a few months old.
I did drop 1400 quid on a Thinkpad T470P though recently - I will spend money where it makes sense.
I suppose I have two points to make from your comments:
1) You are similar to most Americans in that you already have a phone that is ideal for your needs, and that phone isn't an iPhone X
2) You are different, statistically, from hundreds of millions of Americans, in that you have any money saved at all, and don't have large financial responsibilities (house, debts)
off topic, sorry.... I am about 1 year in on the T460P as a personal machine - first laptop purchase in 10 years. looks like we came to the same conclusions.
I love it, fast, lightweight, the 2560x1440 screen at 14" is lovely.
Not cheap, but an i7-7700HQ in this form factor is a decent amount of grunt, and with the NVMe it's faster than my old i5-2500K desktop at home by a fair margin. It doesn't feel noticeably slower in practice than my Ryzen desktop for most things (until I spin up 3-4 VMs, and then I do start to notice, but really, 4 VMs with 4GB of RAM each on a laptop is just crazy).
My family were not nearly millionaires when we bought a $1500-equivalent computer in 1982, but this phone costs less than that. And an iPhone X isn't a phone, it's a computer. I agree you do need to be in an upper tier economically to afford one, but only speaking globally. People bought plenty of laptops when they averaged well over $1000.
The way phones are financed makes them much more attainable, also. The way people will get that $1000 phone today is paying $200-400 up front, and then $35 a month, folded into their phone bill, for 2 years. That's reasonable for anyone with decent enough credit to get standard wireless service. Not most of the US population, perhaps, but definitely the same demographic as the people already buying $800 Samsungs or previous iPhones, which is tens of millions of people.
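A rough sketch of that financing arithmetic (the $300 down payment and 24-month term are illustrative assumptions, not carrier figures):

    # Rough total cost of a financed flagship phone.
    down_payment = 300  # assumed up-front payment
    monthly = 35        # per-month device charge folded into the bill
    months = 24         # typical two-year term
    print(f"Total paid over {months} months: ${down_payment + monthly * months}")  # $1,140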
At the risk of getting into a completely different debate, the iPhone is not a computer. A computer allows the user to write and run any program of their choosing. An iPhone can only run programs approved by Apple, and the user cannot use the iPhone to generate new programs.
So while technically the iPhone is a computer, with a microprocessor that executes instructions, primary and secondary storage, and so on, the device has more in common with the traditional telephone and television, because it can be used for communicating and consuming content, but not general purpose computing.
True that it is limited by software to less functionality than a traditional computing device. In the grand scheme of things, though, that definition is rather specific and is less meaningful to people outside our field. The iPhone performs functions far more similar to a computer than to a television or phone. For instance, I am writing this on an iPhone. 12 years ago, I would have been writing this on a desktop computer and certainly not a television. The other things I use this phone for - photography and music creation - are similarly not exclusively content consumption activities. So while the device is geared towards that, it can also be used for creation.
I'm not an iPhone developer, but from what I understand you can write apps to do about anything and run them yourself - just not distribute them on the App Store.
Also people pay and have paid $1000 for iPhones for years now, there's absolutely nothing new here and people think this is some sort of a new era beginning. It's absolutely not.
If I tried to carry a desktop computer and monitor around with me all day, it would probably get broken. Thankfully, mobile devices are... mobile. I'm glad it even fits in my pocket! You can mitigate the issues you detail with a heavy-duty case, a waterproof phone, and/or insurance.
Yea, I bought an Atari St 1200 I believe in 1983 for college.
It did seem like a big purchase initially. I used it for 6 years of college. I mainly used it for word processing. It never skipped a beat.
Once every two years, I would need to re-ink the ribbon with ink myself. Just dipped the ribbon into the bottle, and rewind. I was too cheap to buy a new ribbon.
Besides the initial outlay, I spent $2.00 on ink in six years.
If I had foresight, I never would have thrown away a great computer. I miss the word processor program. I definitely miss that dot matrix printer. (I never told anyone at school about my computer. I felt like I was cheating. Not one professor noticed the dot matrix print type.)
What people miss about old hardware from those days is that everything was built like a tank.
The thought of buying a new computer every two years was unthinkable. In my world, it's still unthinkable.
I really believe we have been duped by the industry--on all levels.
A big difference is that the Atari probably wasn't connected to the internet 24-7. Your ST wasn't being probed around the clock by hackers across the country and around the world. It likely wasn't tied into your banking and probably didn't hold a lot of your personal communications and health data. The Atari probably had a lot of security holes but it didn't matter that nobody was fixing things and sending a frequent stream of updates.
I am not an iPhone user, or ever bought one. But from the perspective of novelty, I would prefer to buy an iPhone x over an iPhone 8, for the exclusive features. Otherwise I'd stick to an affordable and feature-filled Android.
That makes sense, and I almost wonder if all of this talk about the price is somehow sneaky marketing. The iPhone has traditionally been positioned as a moderate luxury phone... this price is only marginally more than the current top-end phones. But if you're going to buy a phone for status, don't you want to buy the most expensive? And all of this talk in the media about how expensive the iPhone is serves to cement that notion in people's minds.
Another approach would be to ask, if you were going back to the year x in a time machine, how much would you have to spend on trade goods to make your fortune? For example, if you were going back to 1970 it would certainly be less than $500, because the processing power you can get for $500 today would have been worth at least $150 million in 1970. The function goes asymptotic fairly quickly, because for times over a hundred years or so you could get all you needed in present-day trash. In 1800 an empty plastic drink bottle with a screw top would have seemed a miracle of workmanship.
This flies directly into the face of mainstream economists who claim that deflation is killing an economy and therefore requires positive inflation because otherwise "people would stop spending".
Here's the thing: Deflation got people spending money on an iPhone.
Now include the failure rate of electron tubes, and the literal bugs that would gum up the works, and this theoretical machine becomes impossible even ignoring the power consumption.
The comparison is inaccurate. The iPhone X has 256GB of flash, not RAM. Vacuum tubes were used for RAM, while smaller and cheaper ferrite cores were used for permanent storage.
The comparison is a bit unfair because the technology of 1957 was primarily analog. A few rolls of film can store the same HD videos as an iPhone for instance. Video phones existed in the 1960s and just didn't catch on.
It's sort of an apples and oranges comparison. Obviously there are many things that couldn't be done with analog tech. But it's not as bad as you would expect with just naive comparisons based on the cost of vacuum tubes.
...the creation of FORTRAN? Looking at the entry for 1956, the site lists MIT creating the TX-0, the "first general-purpose programmable computer built with transistors", and IBM's shipment of "RAMAC", the first computer based on "The new technology of the hard disk drive".
In the mid 50s silicon transistors started to replace vacuum tubes. I assume tubes were at their cheapest in the late 50s. I'm not sure why the article uses '57 in particular though.
these comparisons are fine and fun. but a single iphone isn't that valuable if you're the only person on all the different networks an iphone gives you access to
I am demonstrating that OP is cherrypicking an example and that a broader view of technological progress, such as the Performance Curve Database, shows that very gradual improvements are the norm.
I keep seeing these economic treatments of "how much progress has accelerated" (and have been seeing them for some 30-40 years now myself), and ... I'm starting to feel a case of three-card monte or the travelling dime problem -- keep moving the pieces quickly enough so that the audience^Wmarks don't spot the trick.
First off: yes, absolutely, the cost of provisioning and operating electronic memory data storage and processing has fallen phenomenally. DeLong makes that point abundantly clear:
in 1957, the transistors in an iPhoneX alone would have ... cost 150 trillion of today's dollars: one and a half times today's global annual product ... taken up a hundred-story square building 300 meters high, and 3 kilometers long and wide ... drawn 150 terawatts of power—30 times the world's current generating capacity
But let's look at those comparisons right there.
The iPhone X costs $1,000, and for easy math I'll assume all of that is the memory storage (this is wrong, but it's not horribly wrong, on an orders-of-magnitude basis). If the 1950 cost was $150 trillion, then the price has fallen by at least 150 billion fold. (And in fact it's fallen more, because there's more than just memory in the device, so my easy math understates the case.)
Global GDP in 1955, or more accurately, GWP, was $5.4 trillion. As of 2016 it was about $80 trillion, or, just for round numbers, let's call that $5 trillion and $100 trillion.[1]
The multiplier is a factor of 20. Which, if I check maths, is somewhat less than 150 billion. Which is to say that whatever's been strapping white lightning to our capacity to chunk out memory circuits has not been strapped to the global economy as a whole.
Measures of the total built environment are difficult to come by, and even proxies for that seem at best obscure. Since DeLong specifies the idea of a 100-story-tall building, though, there is at least one interesting statistic that can be readily produced. Up until 1970, there was precisely one such building, and it was the Empire State Building, which held that record from 1931 until 1972 (at which time the newly completed World Trade Centers in New York City claimed the crown).
Naturally, there's been some contention for that prize since. A total of four additional tallest structures are listed: the Sears Tower (completed in 1974), the Petronas Towers, Taipei 101, and the Burj Khalifa. If we look at the list of the world's tallest buildings and use the ESB's 381 meter height as a minimum qualification, there are by my count 37 such structures. Again, this seems slightly less than 150 billion.[2]
Finally, energy consumption. In 1955, this was, roughly, 100 exajoules. In 2017 it is, roughly, 500 exajoules. The multiplier would then be ... 5. A number somewhat less than 150 billion.[3]
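Pulling those round numbers into one place, since the contrast is the whole point (all figures are the rough ones used in this comment, not precise data):

    # Rough growth multipliers, ~1955 -> ~2017, using this comment's round numbers.
    multipliers = {
        "memory/compute price drop": 150e12 / 1000,  # $150T of 1957 kit for ~$1000 -> ~1.5e11x
        "gross world product":       100e12 / 5e12,  # ~20x
        "ESB-height skyscrapers":    37 / 1,         # ~37x
        "energy consumption (EJ)":   500 / 100,      # ~5x
    }
    for name, factor in multipliers.items():
        print(f"{name}: ~{factor:,.0f}x")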
The question which arises out of this is what is it about information technology that allows for a 150-billion-plus increase in capabilities, whilst total GWP (20x), skyscrapers (37x), and energy (5x) have seen far, far, far less expansion?
There's another question which asks if we're actually including full costs, which I'll note but leave off the table for this discussion.[4]
But the question I would like to ask is what additional service value is being provided for all that the iPhone offers?
Consider that it is, ultimately, an information delivery device. And that the information end-consumer, the human tethered to it, has an almost ludicrously low consumption capability. Sure, you can deliver gigabytes or terabytes of source data to a human, but the amount of that which is absorbed, over the course of a day, amounts to ... a few megabytes, at most. And we're talking single digit values here.[5] What the iPhone can deliver is video, audio, images, and text. Through a viewport roughly the size of a 3x5 index card. The equivalent 1955 technologies it replaces are a notebook, a telephone (and probably some form of answering service or secretarial pool), the not-yet-invented transistor radio, a deck of cards or pocket game, a newspaper and/or magazine, a paperback book, a letter. A pile of index cards itself.[6]
And ... the iPhone X carries any number of unintended consequences: the loss of liberal democracy, undermining a century-old tradition of advertising + subscriber based print media, journalism, adtech, concentration, possibly an entire generation.[7] Unintended consequences are a real bitch.
4. Much of this revolves around the question of natural capital accounting. The good news is that this is entering mainstream economics, see the World Bank for example. The bad news is that it's still improperly founded. Steve Keen's work on energy in production functions is also of interest, though that admits yet another factor.
5. Consider audio. The human limit of perception is roughly 20 impulses per second, a/k/a 20 Hz, which is the threshold at which beats become a tone. Given 86,400 seconds/day, at 20x, we've got 1.7 million bits of data, or 216 KB of audio-encoded pulses. For printed material, a 250 words/min reading pace is fairly typical, which works out to 2.16 MB/day, sustained for 24 hours.
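Spelling out that footnote's arithmetic (the ~6 bytes per word used for the text figure is an assumed average, not stated above):

    # Audio: ~20 perceptible impulses/second, treated as 1 bit each.
    audio_bits_per_day = 20 * 86_400                   # 1,728,000 bits
    audio_kb_per_day = audio_bits_per_day / 8 / 1000   # ~216 KB

    # Reading: 250 words/min around the clock, ~6 bytes per word (assumed).
    words_per_day = 250 * 60 * 24                      # 360,000 words
    text_mb_per_day = words_per_day * 6 / 1e6          # ~2.16 MB

    print(f"audio: ~{audio_kb_per_day:.0f} KB/day, text: ~{text_mb_per_day:.2f} MB/day")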
A modern phone actually makes a good attempt at replacing:
Notepad
Telephone
Letter post
Still camera
Movie camera (not common in 1957)
Security camera monitor (needs external hardware, but still)
Audio recorder
Cinema
Library
Music collection
Games arcade
Map and navigation system
Magazine and newspaper stand
Store catalogues (concierge shopping, to some extent)
News and weather on TV
TV (partly...)
Job search
Travel guide
Restaurant and hotel finder
Flight booking and checkin tool
Business memo distribution system
Classroom toys and child entertainment
Textbook and trainer for older/adult students (limited, but hardly non-existent)
Quick notes for friends and family
Clearly there's quite a bit more value than just "information delivery."
Source data doesn't need to be absorbed. No one in 1957 seriously expected readers to memorise the written text of newspapers or magazines, and no one seriously expects Facebook or Twitter users to memorise entire feeds today.
So the actual volume of useful information consumed daily has increased by a huge amount, and it's presented in a far more accessible and interactive/participative form than it used to be.
The point is that mobile devices connected to an open public data network generate huge economic synergies. Processing speed and memory are far less relevant than automation of old applications and the development of whole classes of new applications. Both have literally been transformative.
As for liberal democracy and journalism - those are no more endangered now than they used to be. Technology is a social multiplier, and if the roots of a culture aren't sound, media of all kinds will reflect that - but that's a political problem, not one caused by technology.
Thank you for that list, all of which is precisely information delivery, or capture. Having lived in both eras, I'm rather familiar with the general set, and not entirely unsympathetic to some of the benefits. And costs.
You might consider how else such needs, or in many cases, wants, were previously satisfied. Or accommodated, or in which activities worked around their lack. And how, often in initially subtle ways, the smartphone's presence has changed the structures and institutions it interacts with.
But as I've expanded on this elsewhere, the supposed economic analysis of DeLong is missing key insights.
Data and computation, much as work, expand to fill available time. It's less "how much computation can you buy" and far more "how much are you willing to spend." A curious aspect of computers is that the price points have remained remarkably resilient. In nominal currency, even. The original IBM PC cost $1,565. A current-generation Lenovo (successor to IBM), say, the P310SFF, is priced at about half that, $710, plus $160 for the monitor, or a total of $870. It seems that for typical end-use the question is more "how much computing can I buy for a given budget" than "how much will X amount of computing power cost me."
As for technology (and especially communications) not changing or disrupting democracy or society, I'd very much suggest you reassess that premise as it seems to me that every communications revolution, dating to speech itself, has had profound and often highly disruptive effects. Elizabeth Eisenstein captures some of that in The Printing Press as an Agent of Change. The role of cheap press, radio, audio tape, microphones and public address, photoreproduction, and cinema in the rise and spread of fascism is another hugely instructive episode.
Economists like DeLong are obsessed with the idea of ever-increasing growth to justify the assault on the lifestyle of working-class people. A 1957 US middle-class family with a sole breadwinner (union job) makes for a very uncomfortable comparison with 2017. Thus the need to focus attention on products and technologies that weren't very advanced 60 years ago.
Frankly, though, modern houses, automobiles, indoor plumbing, central heat, washing machines, vaccines, and 50's-era medicine were a vastly larger jump in living standards than smartphones.
Then that justification for the price is the justification for being willing to pay that price; same thing.
Also, "free market" is kind of a joke as a mantra. For example, "without interference" kind of would require a whole lot of marketing people stopping what they're doing. As it is now, many companies themselves are doing their best to interfere with the decisions of consumers and workers, in some cases even get in bed with each other and wage outright war on those they extract money from; so that's not a free market by a long shot.
In a way I love that this comment is greyed out. It's the only one that has merit. Think of it: how often do you see articles of this kind on HN, that could be written in the thousands easily, all of which you could call "amusing" or "interesting" with just as much merit? For me personally the answer is "not that I can remember", so yeah, it's kind of obvious.
HN, do you want to be a meeting point for minds, or a marketing tentacle connection node?
Amazing prediction... Back then the A11 Bionic chip would have been alien science fiction, and in real life it might have raised concerns such as privacy issues, the rise of communists, or power consumption.
Still, that is not an argument about the iPhone X's high price... I like the Raspberry Pi analogy more.
Well, okay. But the monopoly would have also kept costs down. Yes - all iPhones would have looked identical, but everyone would have had one!
Another plus - the software stack would have been far smaller. You wouldn't need a 32 Gig phone just to install some apps. All processing would have been done in the cloud, on the mainframe. The apps would all have been dumb and only screen viewers for the mainframe.
Honestly, who cares. A ton of modern inventions would've cost way more money in 1956. I'm not gonna go to the store and say, wow this microwave would've cost millions in the 50s, that's a super reasonable price. Compare 2017 phone prices to other 2017 phones.
It impresses the shit out of me that I can now walk around with a processor in my pocket that outperforms the leading supercomputer from when I was coming up, the Cray-2, by several hundred times.