I know nothing about the actual underlying architecture, but from the (credible) sources I've heard from, apparently it's a really nice API hiding an absolutely unholy mess, and that's why emulation has been so poor for so long.
If the API is nice, shouldn't the emulation be easy? The emulator doesn't have to care about security / hacks.
Or do you mean that the API is only nice in theory, but actually not nice in practice (lots of undocumented corner cases, or bugs, that games might rely on)?
I have no idea about the specifics, but from what I've heard developers really liked the XBOX (despite its unholy internals), so I'd say the API was actually nice.
As for HLE based on the API, it's definitely possible, but there are probably tons of nasty corner cases, and that's my theory on why it's never been done. That, and the chance that some games use their own proprietary APIs and thus wouldn't work with that solution.
I'm just looking at the motherboard layout at the very top of the article, and it's really strange to my non-expert eyes. The CPU is off to one side, and the GPU is the thing in the middle. And the SDRAM is all split up and far away from the CPU! Is this some sort of game-console specific thing?
The Xbox (like many consoles: N64, GameCube, Xbox 360, Wii, Wii U, Xbox One, PS4, Switch, PS5, XSX) has unified memory, as in the CPU and GPU share the same SDRAM.
The only way to do this is to have one chip (it's always the GPU, since the GPU needs more memory bandwidth) connected directly to the DRAM, while the other chip (the CPU) has to send its memory requests through it.
Though, this console dates to a time when CPUs didn't typically have DRAM controllers onboard. PCs usually relied on a northbridge chip to house the DRAM controller, along with the routing to all peripherals (PCI/AGP), and to present a nice tidy front-side bus that the CPU understands. In the case of the xbox, the GPU is acting as a combined Northbridge/GPU (a design that was common at the time in low-cost desktops and laptops).
Unified memory has a large number of advantages for consoles. It lowers cost, it gets rid of copying delays between GPU and CPU memory, and it allows the game developer to dynamically allocate memory to the GPU or CPU depending on their needs.
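To make that concrete, here's a tiny, purely hypothetical C sketch (none of these calls are real Xbox or NV2A APIs, and the "VRAM" is just simulated with malloc): with split memory the data takes an extra trip across the expansion bus, while with unified memory the GPU can consume the very buffer the CPU just wrote.

    /* Hypothetical illustration only: these are not real Xbox or NV2A APIs,
       and "VRAM" is just simulated with malloc. The point is the extra hop
       a split-memory design needs versus a unified pool. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Discrete-card style: stage in system RAM, then copy across the bus. */
    static void draw_split(const float *verts, size_t bytes) {
        float *staging = malloc(bytes);   /* system RAM copy */
        memcpy(staging, verts, bytes);
        float *vram = malloc(bytes);      /* stand-in for card-local memory */
        memcpy(vram, staging, bytes);     /* stand-in for the PCI/AGP copy */
        printf("split: drew %zu bytes after 2 copies\n", bytes);
        free(staging);
        free(vram);
    }

    /* Unified memory: the GPU reads the very buffer the CPU wrote, so only
       a pointer changes hands; no copy, no second allocation. */
    static void draw_unified(const float *verts, size_t bytes) {
        const float *shared = verts;      /* same SDRAM serves both chips */
        (void)shared;
        printf("unified: drew %zu bytes with 0 copies\n", bytes);
    }

    int main(void) {
        float tri[9] = {0};
        draw_split(tri, sizeof tri);
        draw_unified(tri, sizeof tri);
        return 0;
    }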
> In the case of the xbox, the GPU is acting as a combined Northbridge/GPU (a design that was common at the time in low-cost desktops and laptops)
As I recall, the earliest Intel integrated graphics added almost nothing to the production cost of the chipset. The die needed a certain perimeter to support all the IO connections, and that left quite a bit of unused silicon in the middle. Putting GPU logic in that space was almost free (minus R&D), and only slightly increased the total pin count. Intel got to capture slightly more revenue per PC and deny a lot of revenue to competing chip companies making discrete GPUs.
The situation today is very different, with GPU and CPU on the same die and the GPU blocks taking up far more space than the CPU cores. The integrated GPU is an important part of the chip cost, and that means desktop processors often have worse (smaller) iGPUs than laptop chips.
Interestingly, this shows one of the major compelling reasons for putting the northbridge functionality on a separate chip. That large fixed-area cost component (the pads have a lot of analog components that don't shrink like logic does) can be manufactured on whatever tried-and-true older process node gets amazing yields for the area cost, rather than the cost per gate (which is what you want for your CPUs).
It's a lot like what you see today with AMD's chiplets at 7nm connected to a central I/O die on 14nm. Just that the classic systems were integrated at the board level instead of on a special interposer.
EDIT: I wonder if the switch to SerDes links instead of parallel buses (Infinity Fabric instead of HyperTransport) is a large part of what made this idea useful again, reducing the number of off-chip signals for the CPU dies again. I wonder if we'll therefore see a replacement for Intel's QPI if they switch to chiplets too.
Wikichip [1] says there are two versions of Infinity Fabric (which is a super-set of HyperTransport): one optimised for on-package communication that is 32 bits wide, and one optimised for inter-socket communication that is 16 bits wide.
I'm not a hardware person, just a software person who dabbles in hardware, so I don't know if the term "SerDes" strongly implies serial and AMD have misused it here, or if SerDes is generic enough to apply to any SERialisation/DESerialisation.
But still, I think you are on the right track. The investment and development into low power, high-speed, and low-latency on-package links is probably what has enabled chiplets to become relevant now.
Probably all modern fast interconnects work by interleaving some kind of "physical frames" (often bytes with some 8B10B encoding) across multiple distinct serial links that are synchronized as if these frames were bits of a parallel interface.
The primary reason for this design is that at current frequencies it is essentially impossible to manufacture a physically parallel interface with equal-enough wire lengths. Interestingly, memory interfaces (like DDR4) use the opposite approach: the interface is still mostly parallel, but the memory controller measures the delays and mismatches of the physical wires and compensates for them in its timing.
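As a rough illustration of the lane-striping idea, here's a toy C model (my own sketch, ignoring 8B10B encoding, scrambling, and lane training) that deals one byte stream across four "serial lanes" and reassembles it as if they were one parallel interface:

    /* Toy model of lane striping, not any particular link: bytes of one
       logical stream are dealt round-robin across four "serial lanes" and
       reassembled on the far side. Real interconnects add 8B10B (or
       64b/66b) encoding, scrambling, and per-lane deskew/training. */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    #define NLANES 4
    #define MAXLEN 64

    int main(void) {
        const uint8_t stream[] = "interleaved across four serial lanes";
        size_t len = sizeof stream;
        uint8_t lanes[NLANES][MAXLEN];
        uint8_t out[NLANES * MAXLEN];

        /* Transmit side: lane i carries every NLANES-th byte. */
        for (size_t i = 0; i < len; i++)
            lanes[i % NLANES][i / NLANES] = stream[i];

        /* Receive side: because training aligned the lanes to a common
           phase, byte k of every lane belongs to the same "frame", and the
           original order is restored by reading the lanes in lockstep. */
        for (size_t i = 0; i < len; i++)
            out[i] = lanes[i % NLANES][i / NLANES];

        printf("%s\n", memcmp(stream, out, len) == 0 ? "streams match" : "mismatch");
        return 0;
    }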
Really? That's crazy! I thought DDR was serial connections. In fact, I thought parallel connections had mostly gone the way of the dodo. Serial is just so much less complicated.
> I wonder if we'll therefore see a replacement for Intel's QPI if they switch to chiplets too.
I think Intel's trying to ensure they have the advanced packaging/interposer/bridge tech to handle wide parallel connections between chiplets. If it works out, they might even end up moving in the opposite direction—toward wider interconnects rather than narrower.
One notable exception from that list is the PS3. That thing was a real bastard to write for, but if you managed to vectorize things properly it really did go at a good clip.
One fun thing was that optimizing for the SPUs meant your code was really cache-friendly and usually saw significant gains on all platforms. Of course most people at that time wrote for PC+360 first and entered a world of pain when the PS3 came along.
If there's one lesson to take away, it's to always build for your most constrained platform first. There are still a few funky architectures out there (RPi, I'm looking at you [1]) so it's always worth understanding where the hardware constraints in your system can come back to cause havoc.
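To show what "cache-friendly like the SPUs wanted" tends to mean in practice, a small C sketch with made-up types: the same position update written against an array-of-structs versus a struct-of-arrays. The SoA form streams through memory linearly, which is roughly what SPU DMA wanted and what also keeps PC/360 caches happy.

    /* Sketch with made-up types: the same position update against an
       array-of-structs versus a struct-of-arrays. */
    #include <stddef.h>

    struct EntityAoS {
        float x, y, z;
        float vx, vy, vz;
        int state;
        char name[32];        /* cold data dragged through the cache anyway */
    };

    void update_aos(struct EntityAoS *e, size_t n, float dt) {
        for (size_t i = 0; i < n; i++) {   /* strided access, poor locality */
            e[i].x += e[i].vx * dt;
            e[i].y += e[i].vy * dt;
            e[i].z += e[i].vz * dt;
        }
    }

    struct EntitiesSoA {
        float *x, *y, *z;
        float *vx, *vy, *vz;
        size_t count;
    };

    void update_soa(struct EntitiesSoA *e, float dt) {
        for (size_t i = 0; i < e->count; i++) {   /* contiguous, vectorizes cleanly */
            e->x[i] += e->vx[i] * dt;
            e->y[i] += e->vy[i] * dt;
            e->z[i] += e->vz[i] * dt;
        }
    }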
It works for the Amiga because the Amiga has simple DRAM.
It gets harder and harder to do such external muxing as the RAM gets more and more complex. With multiple banks, row-open delays, bursts, and more complex signalling (faster and faster dual-data-rate transfers at lower and lower voltages), it's near impossible to control modern DRAM without a proper controller.
And that controller has to live inside a single chip. It would be insanity to try to have two different DRAM controllers multiplexing the same DRAM chips.
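For what it's worth, here's a trivial software model of that point (not real hardware, priorities picked arbitrarily): every request from either client funnels through the one controller that owns the DRAM pins.

    /* Trivial software model, not real hardware: every request from either
       client funnels through the one arbiter/controller that owns the DRAM
       pins, so bank state, refresh, and timing live in exactly one place.
       The priority here is arbitrary; real controllers schedule far more
       cleverly. */
    #include <stdio.h>

    struct Request { unsigned addr; const char *client; };

    int main(void) {
        struct Request gpu_q[] = { {0x1000, "GPU"}, {0x1040, "GPU"}, {0x1080, "GPU"} };
        struct Request cpu_q[] = { {0x8000, "CPU"}, {0x8004, "CPU"} };
        size_t gi = 0, ci = 0;

        /* One command slot per iteration: the controller picks whose request
           hits the pins next; neither client drives the DRAM directly. */
        while (gi < 3 || ci < 2) {
            struct Request r = (gi < 3) ? gpu_q[gi++] : cpu_q[ci++];
            printf("issue %s access @ 0x%x\n", r.client, r.addr);
        }
        return 0;
    }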
The Amiga worked exactly the same way. All memory access to the so-called "Chip" RAM had to go through Agnus. The CPU address lines didn't touch the RAM chips directly. The CPU was just a passenger riding on the back of a powerful GPU.
Low-level implementation detail: Agnus is the memory controller here, handling refresh and addressing. The block diagram's tristate latch (74LS244 & 74LS373 in real hardware) should be considered part of the chipset (controlled by Gary). Take away Agnus (or even Gary) and the CPU can't do anything, so you can't really say there is any ">1 chip accesses the SDRAM" here. We would have to go back all the way to the C64 to say the CPU and graphics chip share the same DRAM bus ~equally.
The Amiga "chip" memory was actually quite slow to use, precisely due to it being accessed by both the CPU and chipset - the solution was to add "fast", CPU-only RAM. The address space was unified between "chip" and "fast" memory, but the underlying hardware arrangement was different.
The chip mem slowness only came with the later Amiga models, with faster CPUs and 32-bit memory buses, I think? Part of the constant trend of the custom chipset falling behind after the first machine. (With a few increasingly laggard refreshes.)
> The only way to do this is to have one chip (It's always the GPU. The GPU needs more memory bandwidth) connected directly to the dram, and the second chip (CPU) has to send memory requests to the second chip.
I don't think that's true; in embedded architectures it's not uncommon to have dual-port RAM.
Fair enough, I should have been more precise. I wanted to point out that it is not a hard technical limitation, but should have added that in that specific use case it is economically unfeasible.
CPUs didn't have DRAM controllers on-die back then. The GPU is performing that function for the system, and the link between the CPU and GPU is the CPU's front side bus rather than PCI(e). The CPU and GPU are also close together so they can both be cooled effectively by the same fan.
It looks like the CPU and GPU are really close, which makes sense because there's a ton of pins shared between the two.
Ultimately designing a circuit board layout is an optimization problem. You usually have some constraints, like how close chips can be before they start interfering with each other magnetically, where the I/O will be, and where you need holes to mount the board. Then you either try to be a pathing optimizer yourself or you run a program that will lay out your board for you.
I'm not sure about the XBox, but game consoles sometimes have faster Memory->GPU pipelines than normal PCs to speed up render times, which might be why the GPU is the most central component.
The architecture here is similar to old PCs (roughly before 2010) that had integrated graphics in the northbridge. The memory controller also resides in the northbridge. The CPU communicates with the northbridge through the front-side bus. Incidentally the northbridge also used to be responsible for high-speed I/O such as PCIe, so even if you had a discrete GPU it would not be connected directly to the CPU.
Over time CPUs have integrated all those features on-die, resulting in today's SoC-like processors where the "chipset" is merely an I/O expander connected over a PCIe-like link.
That design is called Unified Memory Architecture or 'UMA' and can save a lot of production costs at the expense of greater memory latency. The Xbox is not the first one to implement it (the N64 is another good example).
For the Xbox 360, they ditched Intel and went for PowerPC. Microsoft then bought a bunch of PowerPC Power Macs (G5s) from Apple for development since they shared the same ISA :D
That entire generation of consoles, the Xbox 360, PS3 and Wii (and then arguably the Wii U) were all some form of PowerPC.
The following generation though everything switched to more commodity hardware, with the PS4 and Xbox One using x86_64 processors and the Switch using an almost off-the-shelf SoC from nvidia.
> It is speculated that Microsoft may have left that code from prototype/debug units, so for the purposes of his research (possibly accidental, since this block exposes the types of algorithms that Microsoft applied). In conclusion, this was considered garbage code [...]
* Some emulators do exist. The earlier attempts were just API translation layers that work a bit like Wine: translate the function calls to native system APIs on Windows (see the sketch after this list). As time went on, tricks and workarounds piled up, especially as some games used lower-level HW functionality (writing to registers, etc.), which proved difficult to emulate, and game executables had to be patched, thus making the emulators a collection of special cases. Such emulators include Xenia, Cxbx (and derivatives such as shogun's version, dxbx, etc).
* More recently, efforts turned to low-level emulation, with complete emulation of the Xbox GPU, using a codebase derived from QEMU: XQEMU, and more recently XEMU (mborgeson's fork, focused on trying less-proven tricks and workarounds to maximize compatibility). Both are being developed in the open (XQEMU's development process might be slightly more open), and reverse-engineering is ongoing.
* Big names (among others) on the emulation scene: mborgeson, JayfoxRox, Espes, Shogun
* Bunnie Huang’s Hacking The Xbox was mentioned by another commenter, but 17 Mistakes Microsoft Made in the Xbox Security System is also an interesting read about working around the Xbox security mechanisms: https://xboxdevwiki.net/17_Mistakes_Microsoft_Made_in_the_Xb...
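To give a flavor of the Wine-style translation-layer approach from the first bullet, here's a purely hypothetical C sketch (invented names, not actual Cxbx/xemu code): the emulator patches a guest API entry point and redirects it to a host-side stub that redoes the work with host resources.

    /* Purely illustrative, with invented names: this is only the flavor of
       a Wine-style HLE layer, not actual Cxbx/xemu code. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    static uint8_t guest_ram[64 * 1024];   /* simulated guest address space */

    /* Hypothetical guest-side struct, laid out the way the game expects. */
    struct GuestSurface { uint32_t width, height, format, guest_ptr; };

    /* Guest address -> host pointer (trivial here, paged/banked in reality). */
    static void *guest_to_host(uint32_t guest_ptr) { return &guest_ram[guest_ptr]; }

    /* Stub installed over the guest's D3D-like "clear" call: instead of
       poking GPU registers, it does the equivalent work on the host side. */
    static void hle_ClearSurface(struct GuestSurface *s, uint8_t value) {
        void *pixels = guest_to_host(s->guest_ptr);
        memset(pixels, value, (size_t)s->width * s->height * 4);
        printf("cleared %ux%u surface at guest 0x%x\n",
               (unsigned)s->width, (unsigned)s->height, (unsigned)s->guest_ptr);
    }

    int main(void) {
        struct GuestSurface surf = { 64, 64, 0, 0x1000 };
        hle_ClearSurface(&surf, 0x00);
        return 0;
    }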
> Every game console since the first Atari was more or less designed to prevent the piracy of games and yet every single game console has been successfully modified to enable piracy. However, this trend has come to an end. Both the Xbox One and the PS4 have now been on the market for close to 6 years, without hackers being able to crack the system to enable piracy or cheating. This is the first time in history that game consoles have lasted this long without being cracked to enable piracy. In this talk, we will discuss how we achieved this for the Xbox One. We will first describe the Xbox security design goals and why it needs to guard against hardware attacks, followed by descriptions of the hardware and software architecture to keep the Xbox secure. This includes details about the custom SoC we built with AMD and how we addressed the fact that all data read from flash, the hard drive, and even DRAM cannot be trusted. We will also discuss the corresponding software changes we made to keep the system and the games secure.
But I can say that I lost interest in piracy when they made getting games more convenient than pirating them. When they made using the hardware closer to its full potential part of the default experience. When they got the pricing right for these "premium" but pretty basic features. And of course, personally having the disposable income to afford the content, because I would never have been a customer when I was pirating, only an unpaid evangelist of the franchise.
"And of course, personally having the disposable income to afford the content"
I think this is most of the story.
I mean, it's fun to hack, but a lot of people's ideology about a lot of things goes out the window as soon as they have a regular job and can afford to buy regular stuff and see these things as pretty much regular products and services.
Where I would disagree: when I think about the Xbox and PS2, they had the hardware to be media centers. They had compelling region-locked content that just couldn't be used. Hacking any of that meant hacking all of that, and now you could also download games, which was faster and more convenient than going to the store. And play Japanese games you couldn't get anyway.
Future generations of consoles made that default behavior, and games are released in multiple continents at the same time with their respective localization. American flagship games are now more appealing and engaging than their Japanese counterparts.
No, not really. Things that annoy me are not going to get my money, even though I have some to spend now. What really happened once I started earning was I started paying for things that I had appreciated back when I was running their free trials, or ones that I had ahem "extended" the trials for (or added one :P).
It's still fun to hack, by the way, I just have less time to do it.
When the currency of your third-world country is worth less than 1/5 of the US dollar and a single game amounts to a quarter of many people's incomes, piracy doesn't register as something even remotely wrong.
I don't think MS is hugely concerned about piracy that doesn't affect sales.
In fact, if they're smart, they'll be fine with it, because it's a form of 'perfect price discrimination'.
That said, the question is to what extent piracy from other areas bleeds into the higher-value markets. Which I believe does happen, ergo 'a crack is a crack is a crack' probably.
It's funny because if the cost of getting a 'hack' of a game literally entails buying a 'CD-ROM' and going through the pain of copying it like in the 'old days', which is a small pain but more than many want to bear, it puts a natural bound on the copying/hacking: it will happen a lot in poor countries, less so in rich countries.
But if a hack is 'discovered once', wherever, and then easily exploited everywhere ... not so good.
Personally I'd call myself a "hybrid consumer." I'll pirate a game as an offline demo if the only way to purchase it doesn't allow refunds. I'll also pirate a game if I already paid for it on a different platform. There are quite a few PC games still in their plastic wrap on my shelf because they required obnoxious add-on software like uPlay and pirating something I already bought was preferable to installing it.
Also multiplayer. One does not simply go online with pirated games! This didn’t matter to pirates when games were mostly singleplayer with local multi as a bonus.
My partner has a PS4 and some subscription thing (PS Now?). It ends up being about $6/month and new games are available each month (including some AAA). If each month you add them to your library, you quickly end up with a backlog of more games than you can play.
Games seem to lose value rapidly after they are released, so some are very affordable.
I just can't get over the idea that a game I'm playing today might be removed from Game Pass tomorrow. As a result, I don't want to play anything on Game Pass, or subscribe. Although I think I might have somehow accidentally subscribed anyway. I'll stick to Gold.
If it gets pulled and you enjoy it that much, take advantage of the discount (every game on Game Pass has a somewhat discounted price for members) and buy it. There's been a handful of games here or there I didn't want to lose when they were retiring, and others that I played but didn't really miss. I still save a lot of money compared to buying every game I ever want to play, especially given many of them I enjoy but not enough to shell out full price for.
Not games but music. One of the things Spotify and Apple Music bring to the table is search and cataloging. It's less of an issue for video games where you might be seriously interested in 20 titles, but for songs, someone else organizing it for you is a huge value-add.
Sadly, due to licensing agreements, nothing can beat the breadth, depth, and curation of the classic not-legit sites that were shut down. I pay for Spotify, but several albums I have on e.g. CD are not available there, and you can't put local music into a Spotify playlist or vice versa.
No service will ever compare to how awesome Oink’s Pink Palace was. It just worked and it had everything. No blackouts, no licensing windows, no format issues. Just all music.
I should have clarified that I mean music that is stored on my phone while I'm not at home. Also, the default Spotify player in Ubuntu is in a SNAP container that can't access my music folder and won't follow symlinks from the SNAP's designated music folder to other folders.
But it is entirely possible. I paid money for a Git client, for example, even though what it can do is a strict subset of what I can do with the command line tools.
Why bother? Those who "pirate" will never convert to paid customers so why not let kids from some landlocked African country or rural India enjoy games?
A) A lot of the people who pirate on consoles are current customers. They spend quite a bit of money to get access to "every game ever" and then stop buying new games.
B) A lot of others hack the console in order to play hacked copies of games in multiplayer, which ruins the experience for paying customers.
Uh, I'm confused. The PS4 has been cracked. Jailbreaks exist. And yes, they enable piracy. But like any console that is still being actively supported/updated by the manufacturer, it requires a certain firmware version (or below).
That's true, but Sony has been successful in making jailbreaking unfeasible for most people. The only path to a jailbroken PS4 today is to find one that's been sitting in a drawer without being updated for over 3 years, and even if you get that far, there's no workaround for the version check on newer games, so you can't play anything (legitimately or otherwise) that requires firmware newer than the last exploitable version from 3 years ago, which rules out quite a few killer-app exclusives.
Unlike past generations you can't just brute force it with hardware modifications either, the security is so deeply embedded in the SoC now that there's no way to touch it. Without a software exploit you're shit out of luck.
Yes. The talk was given by an xbox architect, and iirc he mentions he's not sure about the state of the PS4. It's a great talk and if you're interested in security you should give it a watch.
Sony even managed to pre-emptively fix that last exploit a few months before it became known to the public, so users who were actively using their PS4 and allowing it to update unknowingly locked themselves out of jailbreaking before they had the chance. The prospects of jailbreaking the next gen consoles really aren't looking good given the PS4 was this much of a PITA and the Xbox One has been completely bulletproof for its entire life.
https://bunniefoo.com/nostarch/HackingTheXbox_Free.pdf