Very disappointed by this generation of processors from Qualcomm. It seems like they were caught off guard by Apple's introduction of an ARMv8 processor last year, and soon after they scrambled to release an ARMv8 chip - any ARMv8 chip - so they wouldn't look as far behind as they really are.
That's how they announced the Cortex A53-based chip as soon as possible early this year (that one was probably already planned, since they use stock ARM CPUs in the mid-range), and now a Cortex A57-based one, even though they licensed the ARMv8 architecture a few years ago, and they were supposed to ship Krait's successor 2 years after its introduction, which would be right about now, and before ARM's own stock designs. What's the point of buying the architecture license if ARM itself can outcompete you and force you to buy its stock CPU designs anyway?
The only explanation is that they were trying to milk Krait for at least 3-3.5 years before launching their own ARMv8-based Krait successor. Apple seems to have wrecked those plans and forced them to buy the Cortex A57 for the high end.
I'm even less impressed with the GPU. By the looks of it, the high-end one will probably score around 200 GFLOPS, while Nvidia's mobile Kepler is already over 300. And the 4K support and "live HDR" for video? Please, that's old news. Everything about these chips screams "rushed" and "repackaged" (even their names).
Anand stated in a previous article that Chinese customers were basically demanding 64-bit 4 or 8-core SoCs, just to tick checkboxes on a spec sheet.
Of course it's pretty well established that 2 fast/wide cores (like Apple and Qualcomm's in-house designs) are better for current mobile software than 4 or 8 slower ones. And Android doesn't take any advantage of ARMv8 right now.
Another sad example of marketing trumping engineering reality.
There's real value in having ARMv8 chips as soon as possible. First, the architecture is significantly improved over ARMv7 - it's not just the 64-bit support. Second, the transition to 64-bit is going to take 3-4 years once it begins anyway, which means developers will have to support both 32-bit and 64-bit (and ARMv7 and ARMv8) for 3-4 more years, instead of dumping ARMv7 as soon as possible and supporting only ARMv8. The sooner everyone moves to ARMv8, the better.
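For concreteness, a minimal sketch of what carrying both ABIs looks like in a C codebase (__arm__ and __aarch64__ are the standard GCC/Clang target macros; the rest is illustrative):

    /* Sketch: one codebase, two ABIs. Built once per target,
       the preprocessor picks the right path. */
    #include <stdio.h>

    int main(void) {
    #if defined(__aarch64__)
        /* ARMv8/AArch64 build: 64-bit pointers */
        printf("AArch64 build, %zu-byte pointers\n", sizeof(void *));
    #elif defined(__arm__)
        /* ARMv7 build: 32-bit pointers */
        printf("ARMv7 build, %zu-byte pointers\n", sizeof(void *));
    #else
        printf("non-ARM build\n");
    #endif
        return 0;
    }

With the Android NDK you'd list both armeabi-v7a and arm64-v8a in APP_ABI and ship a fat APK - exactly the double build-and-test burden described above.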
The 8-core thing is indeed mainly a gimmick, and I don't know if Anand has real data on "Asians wanting 64-bit", or if he just lumped it in with "Asians want 8 cores" (where there is some data to support that). But I think almost everyone wants 64-bit as soon as possible, because 32-bit chips are on their last legs, and especially if you only upgrade every 2-3 years, it wouldn't be wise to buy a 32-bit one right now, or you'll see some (not all) apps start dropping ARMv7 support midway through your phone's life.
I think games that take advantage of high parallelism would do better on the 8-core devices. We just don't see much of that kind of game in the mobile market.
Aren't games of all stripes typically GPU bound rather than CPU bound these days, if they're complicated enough to be limited by the hardware at all? If so, dedicating more area to GPU cores would be better for them.
Depends on the game really; but generally yeah, the GPU will be the bottleneck. Games with a large number of calculations will be CPU bound though; StarCraft 2 is a good example of that.
We also don't see many 8 core chips on the desktop market. Chicken and egg problem.
Very few people have the incentive or funds to get the hyperthreaded i7s (which expose 8 threads but only have 4 physical cores), the 8-core Xeons, or the high-end 8-core AMD parts.
Because both CPU and GPU are generally on the same bus in mobile devices, maybe you could use those extra CPU cores as fragment/pixel shaders, if the memory and cache architecture doesn't prevent it.
On PCs that is not feasible, because of bandwidth and latency issues over PCIe. On mobile SoCs, who knows, maybe it'd work.
Just like current PS3 titles, which offload shading from the GPU to the SPUs.
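A very rough sketch of the idea, assuming a unified-memory SoC; shade_pixel and the interleaved row split are made up for illustration, not any shipping engine's technique:

    /* Hypothetical software "fragment shading" across CPU cores
       over a shared framebuffer (plain C + pthreads). */
    #include <pthread.h>
    #include <stdint.h>

    #define WIDTH  1280
    #define HEIGHT 720
    #define NCORES 4

    static uint32_t framebuffer[WIDTH * HEIGHT];

    /* Stand-in for a real per-pixel shading function */
    static uint32_t shade_pixel(int x, int y) {
        return (uint32_t)(x ^ y) * 0x9E3779B9u;
    }

    static void *shade_rows(void *arg) {
        int id = (int)(intptr_t)arg;
        /* Each core takes every NCORES-th row */
        for (int y = id; y < HEIGHT; y += NCORES)
            for (int x = 0; x < WIDTH; x++)
                framebuffer[y * WIDTH + x] = shade_pixel(x, y);
        return NULL;
    }

    int main(void) {
        pthread_t t[NCORES];
        for (int i = 0; i < NCORES; i++)
            pthread_create(&t[i], NULL, shade_rows, (void *)(intptr_t)i);
        for (int i = 0; i < NCORES; i++)
            pthread_join(t[i], NULL);
        return 0;
    }

Whether this beats just using the GPU depends entirely on the cache and coherency behavior mentioned above; on a PC the PCIe round trip alone kills it.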
PS3 games don't offload fragment shading to the SPUs, that would be too slow and leave the pixel shading hardware on the GPU idle. Remember that the PS3 has separate vertex and pixel shaders. What they usually offload is post-processing, mostly for anti-aliasing.
I was talking about deferred rendering: in modern PS3 games, the GPU is used to render surface normals, material index, Z-buffer, texture color, etc. into a buffer, typically 128 bits per pixel.
This buffer is then DMA-transferred to the SPUs, where final lighting and fragment shading are performed.
The final step is to transfer the 32-bit RGB data back to the GPU's framebuffer for display.
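For illustration, a 128-bit G-buffer pixel might be laid out like this (field names and packing are guesses, not Sony's actual format):

    /* Assumed 128-bit (16-byte) G-buffer pixel for deferred
       shading; real PS3 layouts vary per engine. */
    #include <stdint.h>

    typedef struct {
        uint32_t normal_xy;     /* packed surface normal (z reconstructed) */
        uint32_t albedo_mat;    /* RGB texture color + material index */
        float    depth;         /* Z-buffer value, used to rebuild position */
        uint32_t spec_misc;     /* specular power, emissive, flags */
    } GBufferPixel;

    _Static_assert(sizeof(GBufferPixel) == 16, "128 bits per pixel");

    /* An SPU job would DMA tiles of GBufferPixel in, run the
       lighting loop, and DMA 32-bit RGBA results back out. */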
No one was caught off guard by Apple (ARM was previewing 64-bit ARMv8 designs over a year before the A7), nor has Apple's move to 64-bit even really mattered at all: Realistically Apple could have iterated their existing core and they'd be exactly where they are now. There is absolutely no reason for Qualcomm to need to respond to Apple, as it isn't like Apple is going to get some design wins with Qualcomm customers.
Qualcomm has the ability to modify ARM designs. That doesn't mean they should do so, especially given some of the metrics that show the A57 to be a stellar design. We'll see how it performs in practice, and sure, Qualcomm will then start iterating, but ARM's own literature shows it as being a much more significant performance boost than was seen from the A6 to the A7.
EDIT: To indirectly respond to mrpippy (given that HN can penalize if you reply more than once) - two fast cores are better than four slow cores. But four fast cores are even better than two fast cores. The A57 is, on paper at least, an absolutely formidable design, which is exactly why Qualcomm didn't foolishly pursue a NIH agenda. It will, if ARM's promises come to pass, yield significantly better performance than the Apple A7, which is part of the reason most other vendors waited on 64-bit. I'm fairly certain Google will have 64-bit support ready when the market needs it.
The technical aspects don't matter. (Although ObjC is getting significant benefit from 64-bit pointer tricks.) Now that 64-bit exists, many customers demand 64-bit. https://www.youtube.com/watch?v=UAOtC9QfXac
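(The "pointer tricks" are tagged pointers: on 64-bit, small values like short NSNumbers can live inside the pointer word itself, skipping heap allocation and refcounting entirely. A simplified sketch in C - the real ObjC runtime's bit layout is different and private:)

    /* Simplified tagged-pointer scheme, NOT Apple's actual layout:
       low bit marks a tag, remaining 63 bits hold the payload. */
    #include <stdint.h>
    #include <stdio.h>

    static uintptr_t make_tagged_int(int64_t v) {
        return ((uintptr_t)v << 1) | 1u;        /* payload + tag bit */
    }
    static int is_tagged(uintptr_t p)        { return p & 1u; }
    static int64_t tagged_value(uintptr_t p) { return (int64_t)p >> 1; }

    int main(void) {
        uintptr_t obj = make_tagged_int(42);
        if (is_tagged(obj))  /* no dereference, no malloc, no refcount */
            printf("inline value: %lld\n", (long long)tagged_value(obj));
        return 0;
    }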
Bull. I've seen numerous comments in articles that they were caught flat-footed. The end of the Anandtech article notes that Samsung has been shipping 64-bit chips for a few months now, but their roadmap from right before Apple shipped didn't even mention 64-bit this year.
Samsung shouldn't need to chase specs, but that's what their customers want, so that's what they do. They bump chip speeds, miscount cores (sadly Apple did this too), add extra cores, etc. They need checkbox features, so they make them. If they had 64-bit earlier they would have shipped it for the PR/spec value.
> Qualcomm has the ability to modify ARM designs. That doesn't mean they should do so [...]
If they don't modify the cores, then they're just a fab house. That makes them easier to replace. They need differentiation.
> I'm fairly certain Google will have 64-bit support ready when the market needs it.
The benefit of what Apple did is that when the market needs 64 bit it will have been deployed for years. Transition issues are already sorted out.
Furthermore, Apple isn't going to start shipping new 32 bit devices. What they've got left is probably the last of them. That means developers can count on going 64 bit only sooner.
Given Android OEMs' histories of shipping ancient versions it wouldn't surprise me if brand new 32 bit Android phones were selling in the millions two years from now.
> The benefit of what Apple did is that when the market needs 64 bit it will have been deployed for years. Transition issues are already sorted out.
But iOS apps are also more likely to have issues on a 64-bit CPU, since they are written in (Objective-)C. Since nearly all Android apps are written in Java, the transition will likely be far less eventful.
> Bull. I've seen numerous comments in articles that they were caught flat-footed.
Oh, I'm sure you've seen lurid tales from mysterious, unnamed sources who will regale you with how flat-footed Qualcomm -- one of the largest ARM processor makers in the world, closely intertwined with ARM's 64-bit efforts -- was caught by Apple's introduction. Might you also be in the market for a bridge?
If Qualcomm is "responding" to anyone it is other chip vendors (such as MediaTek), because those are who Qualcomm competes with. Qualcomm does not compete with Apple. Again, Apple could have come out with a new, improved A6 and they would have gotten just as many iPhone 5s sales, and they would have taken exactly the same number of design wins from Qualcomm (0).
> If they don't modify the cores, then they're just a fab house.
So they should change things just because? The A57 is a fantastic design, and is poised to offer an incredible performance-to-power-consumption ratio (and, if preliminary analysis is correct, Apple will have gone down the wrong path by jumping the gun on ARMv8, having effectively put themselves a lap behind). Qualcomm will likely offer their own take on things later, but right now they might just go with what gets them a great design with little risk.
> Given Android OEMs' histories of shipping ancient versions it wouldn't surprise me if brand new 32 bit Android phones were selling in the millions two years from now.
I would not be surprised at all. And it doesn't matter at all. Why should it matter? That is a complete non-problem.
> If Qualcomm is "responding" to anyone it is other chip vendors (such as MediaTek)
MTK is one of the vendors that seems to have little (or even negative) reputation in the West, but their turnkey reference designs are extremely popular in Southeast Asia. They're not (currently) competing on performance but on value, although that may start changing soon. I also have a feeling that their "officially proprietary/confidential but unofficially semi-open" stance - see http://www.bunniestudios.com/blog/?p=3040 - may be helping their popularity. It's true that they're not so GPL-friendly with Android sources and such, but when I found over 1GB of very detailed register-level documentation... the decision was pretty easy.
> I would not be surprised at all. And it doesn't matter at all. Why should it matter? That is a complete non-problem.
Agreed, as long as they ship with 4GB of RAM or less (2^32 bytes is all a 32-bit address space can reach), which is probably still plenty for many applications.
Are you calling Anandtech a mysterious, unnamed source?
"What's interesting to me is just how quickly Qualcomm has shifted from not having any 64-bit silicon on its roadmap to a nearly complete product stack. Qualcomm appeared to stumble a bit after Apple's unexpected 64-bit Cyclone announcement last fall. Leaked roadmaps pointed to a 32-bit only future in 2014 prior to the introduction of Apple's A7"
> Are you calling Anandtech a mysterious, unnamed source?
No, I was referring to the general claim by articles that "sources" from "inside Qualcomm" gave them such information.
However, Anand's comment was... dubious. Qualcomm doesn't generally publish a roadmap, and there was a single leak of uncertain origin that actually showed Qualcomm's own 64-bit IP (their own designed cores) in Q4 of 2014. So not only was Anand wrong about what the supposed roadmap contained, the roadmap itself has absolutely no credibility to begin with (even if it were made within Qualcomm, it could easily have come from a division such as marketing that only commits the most certain of plans to paper, while engineering may very well have planned to adopt ARM's design for iteration 1 and then customize for iteration 2).
Making any broad claims about Qualcomm from that is ill-informed rumor mongering, which Anand is not always above.
I like that there are at least 3 different technologies for accurate styluses on Android: there's the active-digitizer-based pen, like Samsung's S-pen; there's this ultrasonic one from Qualcomm; and there's Nvidia's inexpensive chip-based solution. Too bad Google itself doesn't seem too interested in this area, even though it could be a pretty big competitive advantage against Apple's iPad, so we're only seeing random tablets that support them.
Sigh. It's true. I'm guessing they sit in the same set of neurons or something. When I'm reading about modems and RF gear I always read it as Qualcomm, but for CPUs I tend to transliterate Qualcomm into Broadcom way too frequently.
If you're going to be changing the linked articles in submissions regularly now, you should change the site to automatically mark comments which were in response to an article which is no longer the linked article.
We've always made these changes regularly. The only new thing is that we've been commenting about it, hopefully making things a little clearer than before.
We're not closed to implementing something to reduce the intelligibility gap you're talking about. But it would have to be very simple, i.e. unobtrusive and fit in with the existing design. If anyone knows a simple way to do it, please email it to hn@ycombinator.com.
Step back for a minute and realize how astonishing these devices are. As someone who was in the semiconductor industry in the late 80s and 90s I am just blown away.
Like for like? I was programming on Acorn Archimedes 310s back in 1987: an ARM 2 clocked at 8 MHz. The memory and video controllers were discrete chips. Acorn claimed that their ARM 2's pipeline design let it run integer operations at the same speed as a 20-25 MHz 386.
IIRC it only had 23 instructions, making it the purest RISC design available off the shelf.
That's a lot of die area for video decoding! It's substantially larger than their entire set of CPUs.
I've got a feeling that full-hardware video decoding is going to die off, having worked on it myself. It doesn't make a whole lot of sense to leave so much silicon idle most of the time, when you could just accelerate a more general purpose processor or GPU to have very wide integer paths.
These chips look pretty cool. That's quite a bit of power in one package.
On a side note - for a press release it seemed kind of casual: "And because the only thing sadder than a selfie is a selfie that cannot be shared, remember you can upload these high-res images and HDR videos faster with the Snapdragon processor’s 4th generation 4G LTE solution."
The link is for a corporate blog post, not the official presser (which is linked at the bottom). The blog appears to be aimed more at consumers, while the press release itself is as formal as any other tech release.
EDIT: Mod has changed URL to Anandtech review from original link