If AAPL thinks QCOM doesn't add proportional value to their phone, let them sell the phone without QCOM IP.
Because we all know AAPL takes its proportional pound of flesh from app developers for having the audacity to add value to iOS, whilst simultaneously holding back the web.
Sewell is directly quoted as saying cellular "isn't as important as it used to be." When they thought the 3.5mm TRS jack wasn't as important as it used to be, they cut it. Seems like cellular is perhaps still more crucial to the viability of the phone than they want to admit.
In what ways did Qualcomm violate FRAND terms? Nokia, Ericsson, and pretty much everyone else use the entire device as the royalty basis. This is fairly standard industry practice -- and has been for decades.
While Qualcomm is the largest contributor to wireless standards, it is far from a monopoly.
Also note that Apple accuses every wireless patent holder of some sort of unfair pricing and FRAND violation whenever it is up for license renewal. This from a company that audaciously asked Samsung for about $30 per device for a handful of frivolous design and utility patents.
First, Apple doesn't directly pay Qualcomm. Apple has refused to take a Qualcomm license, though I'm pretty sure Qualcomm would love to have Apple as a customer and start collecting license fees based on its retail prices.
Second, Foxconn, Pegatron, and Apple's other contractors are the ones paying for Qualcomm licenses. Their licensing agreements with Qualcomm likewise precede Apple's 2007 iPhone release. In other words, those contract manufacturers pay the same royalty rate to Qualcomm whether their end products are for Apple, HTC, or whoever -- they all pay the same rate. Apple's effective rates are probably lower, given the various "collaboration" agreements (and rebates) Apple imposed on Qualcomm.
If you are trying to say Qualcomm unfairly charges Apple more, you need to bring some facts.
Directly from Apple. They have said this a number of times in public. Apple hasn't brought out anything to prove it, so I'm going to take their word that they're not lying.
Those agreements also precede LTE.
You can't NOT use Qualcomm patents in LTE, so if Qualcomm were allowed to charge whatever they wanted, that would be a monopoly case, and somebody has to define what a fair price is -- hence Qualcomm being subject to FRAND.
All these patent fees are one reason the HEVC licensors started charging a combined $100M/year for their video codec, 20 times more than AVC/H.264: they saw what 4G patents were capable of charging.
1) Can you cite your source? Apple is known for its sleazy wordsmithing and, having followed its lawsuits over the last several years, for throwing completely unsubstantiated accusations at its opponents (see my comment about a 2012 USITC case against Samsung where Apple's own witness came out testifying against Apple). I'd like to read it myself, as I'm pretty sure there are a lot of footnotes and modifiers that aren't conveyed in one-liners.
2) Whether the contract manufacturers' licensing with Qualcomm predates LTE is immaterial here. Any LTE handset maker sourcing from those contract manufacturers will (indirectly) pay the same rates. Apple and Qualcomm had business "collaboration" agreements in which Qualcomm provided additional technical and support resources and monetary compensation for sticking with Qualcomm (see Qualcomm's lawsuit). Apple is likely paying far less than smaller handset makers without such agreements.
3) "You cant NOT use Qualcomm patents in LTE" <-- not sure what you mean. Qualcomm like many wireless patent holders routinely publishes their (initial) FRAND rates and if the company is engaged in unfair licensing practices, it would be easy to find that out. I'd like to emphasize that, contrary to Apple's view on FRAND, FRAND doesn't mean cheap and SEP patent holders are under no obligation to license their patents. (ETSI IPR Guide, Section 1.11 (http://www.etsi.org/images/files/IPR/etsi-guide-on-ipr.pdf):
The purpose of the ETSI IPR Policy is to facilitate the standards making process within ETSI. In complying with the Policy the Technical Bodies should not become involved in legal discussion on IPR matters. The main characteristics of the Policy can be simplified as follows:

• Members are fully entitled to hold and benefit from any IPRs which they may own, including the right to refuse the granting of licenses.
4) MPEG LA's licensing schemes are fundamentally different from the wireless industry's. For starters, theirs is a fixed cost per unit that caps at $90M per year, whereas Qualcomm's is a percentage of the end-user device price with no cap on quantity -- Apple is allegedly paying something like $2B per year to Qualcomm as a result. Further, Apple is an active contributing member of the MPEG LA pool, and most patent holders pay nowhere close to the publicized figures due to various sales and cross-licensing agreements.
How is Apple holding back the web? Is there an assumption that every application ought to be on the web and that a browser is the best means to use an application?
As a counterpoint to some platitudes expressed here, let me say this: I unequivocally hate my dad. The reasons are complicated, and personal. But there is something I know with absolute certainty. Everything he ever did in relation to me was with the best of intentions, and he always placed my well-being (as seen by him) before his.
Sure, I mostly feel like an ungrateful wretch. And yes, now that he's an old man, I do treat him very poorly and wish he were dead, mainly because it would spare me the guilt I feel over how I treat him.
I wonder if he resents having me. I will never ask him. I fear his response: that he'll deny it.
If you're weighing whether to have a kid, I'm one of the ugly corner cases.
I don't have any kids. I'm beyond the age where they're on my radar. Perhaps a relationship with someone a fair bit younger would change that, but I doubt it.
I'm sorry to hear that. Just to clarify, you're saying you hate your father despite him having done nothing intentionally to deserve that hatred? You're obviously not obliged to go into more detail, but I guess I'm wondering if your message is: "Be warned that you can have kids and do your best as a parent and they might end up hating you anyway"?
To the specific part of your comment: yes, that's true. I am essentially warning that you can do your absolute best to raise a child, put their interests (as you see them) ahead of your own, and they may still end up hating you for it.
I'm not, however, going to comment on any other part of your response. I just don't want to get into it.
I'm probably the only weirdo who thinks this, but if you support byte addressing you'd better be happy with byte alignment too; atomics are the only place where it's reasonable to differ.
Which brings me to padding. I wonder what percentage of the memory in the average 64-bit user's system is padding? I'm afraid of the answer. The heroes of yesteryear could've coded miracles in the ignored spaces in our data.
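To get a feel for where it comes from, here's a minimal C sketch (the struct and its members are made up for illustration) comparing a naively ordered struct against the same members sorted largest-first on a typical 64-bit ABI:

    #include <stdio.h>

    /* Naive ordering: each small member forces padding
       before the next, larger one. */
    struct naive {
        char   a;  /* 1 byte + 7 bytes padding */
        double b;  /* 8 bytes, needs 8-byte alignment */
        char   c;  /* 1 byte + 3 bytes padding */
        int    d;  /* 4 bytes, needs 4-byte alignment */
    };

    /* Same members, sorted largest-first: only tail
       padding remains. */
    struct sorted {
        double b;
        int    d;
        char   a;
        char   c;
    };

    int main(void) {
        size_t payload = 1 + 8 + 1 + 4;  /* 14 bytes of real data */
        printf("naive:  %zu bytes (%zu padding)\n",
               sizeof(struct naive), sizeof(struct naive) - payload);
        printf("sorted: %zu bytes (%zu padding)\n",
               sizeof(struct sorted), sizeof(struct sorted) - payload);
        return 0;
    }

On x86-64 the naive layout comes out to 24 bytes, 10 of them padding; sorting by size gets it down to 16 bytes with 2 bytes of tail padding.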
> if you support byte-addressing you'd better as well be happy with byte-alignment
All ARM processors do this. The concept is called "natural alignment" and it's pretty common on non-x86. See e.g. http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.... . The problem here is that a lot of code written for x86 wants more than that, e.g. byte addressing for non-byte-wide values.
I understand. What I mean is that if your word size is not your addressing size, you'd better not have a concept of misaligned accesses. It's trouble you brought on all by yourself.
The Cray did this, IIRC, and the result was that char pointers were extra fat because they needed to include both the word address and the byte offset within the word. That's not an efficiency improvement.
Alignment requirements are and have historically been very common -- you can see them on the PDP-11, the 680x0, and so on. It's only because a few very popular architectures like x86 have had very loose or no alignment requirements that we've ended up with a lot of code that assumes there is no alignment requirement, and this has dragged other architectures down the "we need to support this" path. If your architecture faults on misaligned accesses it's really not hard to deal with -- you have to be doing something a bit odd to even run into the problem usually.
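For the most common "a bit odd" case -- pulling fixed-width values out of a byte stream -- the portable C idiom is to go through memcpy; compilers lower it to a single load on ISAs that permit misalignment and to byte loads on ones that fault. A quick sketch:

    #include <stdint.h>
    #include <string.h>

    /* Read a 32-bit value from a possibly misaligned position
       without risking an alignment fault. */
    uint32_t load_u32(const unsigned char *p) {
        uint32_t v;
        memcpy(&v, p, sizeof v);  /* optimized to one load where legal */
        return v;
    }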
I can understand the historical requirements for alignment -- the necessary transistors and whatnot. But, much like branch-delay slots, there is no modern reason to expose this to the programmer. I did carve out an exception for atomics, but, if you will, they're like memory-mapped communication; and now that all I/O is memory-mapped, with no concept of ports, the (ordering) semantics of memory access become really important.
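To make that ordering point concrete, here's a minimal C11 sketch (the names are hypothetical) of the publish/consume pattern, where explicit ordering is what carries the semantics:

    #include <stdatomic.h>
    #include <stdbool.h>

    int payload;        /* plain data */
    atomic_bool ready;  /* publication flag, starts false */

    void producer(void) {
        payload = 42;
        /* release: the payload write cannot be reordered
           past the flag store */
        atomic_store_explicit(&ready, true, memory_order_release);
    }

    int consumer(void) {
        /* acquire: once the flag is observed, the payload
           write is guaranteed visible */
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;
        return payload;
    }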
I'm also the weirdo that feels process isolation, memory management, and I/O mechanisms need a rethink. But that's something that would take me forever to get into.
One thing I will say, though, is that alignment issues "infect" everything. Assume your architecture doesn't allow misaligned access. Now all your data has to be naturally aligned, and your structs have to be aligned to the alignment of their largest member. This is all because code is alignment-sensitive: given a pointer to a struct, generic code is unnecessarily larger. And why would we care? Communication, of course. If we're exchanging data between systems, then idiosyncrasies like this suddenly become globally visible.
Endianness must be little, byte alignment a non-issue, and network bit order should run from bit zero up, with any upper-layer need -- say, for cut-through forwarding -- expressed as a data-ordering requirement. So, for example, an IPv4 address is not a blind 32-bit word; it specifies the structure of those 32 bits.
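As a sketch of what "specify the structure" buys you: the shift-based decode below (assuming a little-endian wire format) is correct C on any host endianness and needs no alignment, because the layout is spelled out rather than inherited from the host:

    #include <stdint.h>

    /* Decode a little-endian 32-bit field from a byte stream.
       Host endianness and alignment are irrelevant. */
    uint32_t read_le32(const unsigned char *p) {
        return (uint32_t)p[0]
             | ((uint32_t)p[1] << 8)
             | ((uint32_t)p[2] << 16)
             | ((uint32_t)p[3] << 24);
    }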
Even today, allowing unaligned accesses is still not free -- there is an implementation cost in transistors and in design complexity. There's a tradeoff here, as usual. There are a lot of places within a CPU architecture where there's a choice of "do we handle this in hardware, at the cost of having more hardware, or do we say it's software's job to deal with it, and hardware provides either nothing or just some helpful tools?" You can see this, for instance, in whether software has to perform icache/dcache maintenance vs. the CPU doing a lot of snooping to present the illusion of a completely coherent system; in whether hypervisor virtual-machine switching is done with a single "switch all my state" operation or by letting hypervisor software switch register state itself; and in many other places.

x86 has, in my view, generally ended up on the "handle things in hardware and make software's life easier" end of the spectrum, which it's been able to do because its natural territory is desktop/server, where extra transistors don't hurt much. Other architectures tend toward different points on the spectrum because their constraints differ -- in embedded systems, the extra power and area cost of more transistors can really matter. "Tend to prefer that software do something" is also a strand of the original RISC philosophies.
Practically speaking, the world is not going to converge on a single endianness or on a no-alignment-restrictions setup any time soon, so we have to deal with the world as it is. If you're programming in a sensible high-level language, it will deal with this kind of low-level nit for you. If you're programming in a low-level language (like C), well, I think you wouldn't be doing that if you didn't have fun at some level in feeling like you had a mastery of the low-level nits :-)
You sound like an architecture person (I'm not, btw), so maybe you can give the lowdown on this.
Why registers? I haven't studied the Tomasulo algorithm in any detail, but if you're going to do "register renaming", why have registers at all? You could, for example, treat memory as an if-needed-only backing store, and then add a "commit" instruction that commits memory (takes an address, or a range). Sure, you'd need to change how you do memory-mapped I/O and protection, but at a basic level: why registers?
I'm glad FPGAs are becoming a thing, and I think we're about a decade or two away from ASICs-as-a-service, because if you're not beholden to tradition, you really can work some magic. Of course I'll be pretty rusty by then, but who knows, maybe medicine will keep me feisty.
Because that would require longer instructions and thus more memory.
Instructions on a CPU look something like the following (this is based on MIPS, since x86 is a mess): the first 6 bits are the opcode; the rest is instruction-specific. For a register-to-register add, the next 15 bits are three 5-bit register fields -- two sources and a destination -- followed by a shift amount and a function field (which selects, for example, the overflow-trapping vs. non-trapping add).
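If it helps, here's a small C sketch that pulls those fields out of a MIPS R-type word (field widths per the MIPS32 encoding):

    #include <stdint.h>
    #include <stdio.h>

    /* Decode a MIPS R-type instruction:
       opcode(6) rs(5) rt(5) rd(5) shamt(5) funct(6) */
    void decode_rtype(uint32_t insn) {
        unsigned opcode = (insn >> 26) & 0x3F;
        unsigned rs     = (insn >> 21) & 0x1F;  /* source 1 */
        unsigned rt     = (insn >> 16) & 0x1F;  /* source 2 */
        unsigned rd     = (insn >> 11) & 0x1F;  /* destination */
        unsigned shamt  = (insn >>  6) & 0x1F;  /* shift amount */
        unsigned funct  =  insn        & 0x3F;  /* selects add/addu/... */
        printf("op=%u rs=%u rt=%u rd=%u shamt=%u funct=%u\n",
               opcode, rs, rt, rd, shamt, funct);
    }

    int main(void) {
        decode_rtype(0x012A4020);  /* add $t0, $t1, $t2 */
        return 0;
    }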
If instructions instead worked only on memory, you'd have room for a lot more possible opcodes -- though there isn't enough room on a CPU to implement that many instructions anyway, so who cares -- followed by all three memory addresses. This means every instruction needs to read roughly three times as much memory before doing anything. Worse, most of those operands are pointers: when you compile the code you don't know the final addresses, so in most cases it's read the instruction from the program, then go to the stack to read the addresses of the operands, then read those locations. That is a lot of memory access, and memory access is expensive. You can say "just use caching", but cache is expensive, and now you need about three times as much -- too big for the fast level-one cache, so you end up expanding the level-two cache and taking a lot more level-one misses.
The above would all be okay, but it turns out that, given enough registers (x86 fails here), in most cases you're operating on the same small set of values all the time (indeed, the stack locations above probably refer to a small set of variables), so if the compiler is careful it can manage all of that. The compiler has better information about when things need to be committed to memory anyway, so let it handle that.
Not really. The TMS9900 [1] used memory as registers and had a fairly compact instruction set. Yes, it does have registers, but only three (a program counter, a status register, and a pointer to the current "register set" in memory). At the time, it was regarded as a slow machine, probably because of all the memory-to-memory operations.
Sure, that's one technique. You could also, akin to jump instructions, have a concept of data locality versus instruction locality. You can do this in a lot of ways without resorting to something like segmentation, which everybody hates. The trivial version would be a current "data pointer" that gets useful implicit updates as well as explicit ones (akin to a long jump).
Not all CPUs do register renaming, and almost all architectures will have started out being defined for a CPU which didn't do renaming. Even today, lower end CPUs (think the embedded market) don't do register renaming. If you want your architecture to be able to cover down to the low end then anything that drops the idea of a register file is a non-starter. Also, a register-based architecture is well understood, in terms of how to implement it effectively, how to exploit it in compiler design, and how to hand-code assembly for it when necessary. You need a really strong argument to justify taking the weird and innovative route, usually.
I used to be a big champion of RISC-V -- just look at my submission history -- but I've become increasingly wary due to SiFive's dubious leadership.
1) It's still impossible for anyone to get their hands on an FE310 chip, over half a year on from the release of the HiFive board.
2) They promised open-source cores, but somehow backtracked due to "customer requests". How does that make any sense? And if it does, just have an open-source version and a closed-source one that, I dunno, has a SiFive logo on the mask.
I was really inspired by them; now I'm mostly dejected. Still, I'm hoping someone like ST takes their peripherals and makes an MCU with a RISC-V core.
> I hope they support Google's VM threads while they're at it.
I saw the slides on the VM threads concept. I'm unsure as to the point. A process essentially sees a virtual CPU, multiplexed by a kernel. The point of a hypervisor could be understood as multiplexing things (aka OSes) that weren't written to share hardware.
What exactly is the point of a VM thread, which I understand as something which makes a VM look to a kernel as a process?
There doesn't seem to be any "win", apart from perhaps making Type II hypervisors, like KVM, easier to implement. My critique of those is that their TCB is far larger than the Type I kind's.
And all this for an architecture that doesn't have any legacy binary software... very strange.
I say that not because I'm convinced about the concept, but because it seems interesting enough to make sure it's not excluded. I'd like them to be able to run with it a bit and show us more evidence of the advantages. OTOH, if supporting it is a bigger deal than I thought, then no.
"I'm really impressed by Apple's engineering. It's so easy to repair and recycle these phones. I've gotta think that Apple's really proud that their phones don't really end up in landfills."
He then adds, "But there's also credit due to the many thousands of people here who have figured out how to turn trash like this [shows mangled screen assembly] back into beautiful working phones."
Dear friends in Shenzhen: not all Westerners are as shallow and fantastically, well, douchey as this asshole. We praise you for your ingenuity and unwavering work ethic. Thank you.
Calling "shallow" the OP but also calling all people in Shenzhen your friends ¯\_(ツ)_/¯
The first-person framing is a bit jarring, and he does seem a bit naive at points, but going from that to calling Scotty Allen a "shallow and fantastically douchey asshole" is totally uncalled for.
To clarify, the author uses "CDNs" to mean the content silos -- GOOG, AAPL, AMZN, FB, etc. -- which is not the traditional usage.
But in light of this, I see him as essentially correct. Any one of them could demand payment from ISPs for carrying their content (end users would revolt otherwise). The new willingness to dismantle "net neutrality" doesn't hurt any of them, only startups. And as for the carriers, cablecos, telcos, etc.: their window of monopoly is over; they should have invested in content when they had the chance.
> Because we all know AAPL takes its proportional pound of flesh from app developers for having the audacity to add value to iOS, whilst simultaneously holding back the web.
Hypocrisy all around.