
This is not true. The great complexity of modern GPUs is generally a much larger problem than emulating a CPU. Furthermore, the previous x86-equipped console, the original Xbox, never saw any decent progress in emulation, whereas its far more exotic competitors have really good emulators.

That's not to say the exotic Cell architecture is not a major problem for PS3 emulation. It is. But having an x86 CPU does not seem to increase the emulatability of a console at all.




Do you think this is because of the difficulty of emulation, or because the Xbox was so easy to hack? It was not very expensive to buy a used Xbox and mod it, even back when new games were still coming out for it, so maybe writing an emulator just didn't seem like a worthwhile investment.


I'd recommend reading this post [1] by the dev of an existing Xbox emulator. It summarizes the issues inherent in emulating the original Xbox, highlighting the major technical hurdles that would have to be overcome.

[1] http://www.neogaf.com/forum/showpost.php?p=48088464&postcoun...


Having spent a lot of time on Xbox emulation, I can say the biggest issue is simple: statically linked XDKs meant that doing high-level emulation was next to impossible. You had to find signatures for every XDK, which isn't viable considering that most of them were never publicly released.

LLE/MME can work, but it's significantly more effort to pull off; that's the approach I was taking when I was still working on it.
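
For illustration, here's a minimal sketch in C++ (hypothetical names, not from any real emulator) of the byte-signature matching that HLE of statically linked XDK code requires. Wildcard positions stand for bytes the linker relocated, which differ between games even for the same XDK version - and you'd need a signature set per XDK version, which is exactly the unviable part:

    // Minimal sketch (hypothetical names): matching a statically linked
    // XDK routine by byte signature so it can be replaced with an HLE stub.
    // Wildcard positions (mask == 0) stand for linker-relocated bytes.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Signature {
        std::vector<uint8_t> bytes;  // expected opcode bytes
        std::vector<uint8_t> mask;   // 1 = must match, 0 = wildcard
    };

    bool matches_at(const uint8_t* code, const Signature& sig) {
        for (size_t i = 0; i < sig.bytes.size(); ++i)
            if (sig.mask[i] && code[i] != sig.bytes[i])
                return false;
        return true;
    }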


"The real problem is that any modern x86 processor including the Pentium III can execute multiple instructions at once. So it's not like emulating a Z80 doing one instruction at a time. The actual algorithm and how x86 does this is undocumented and still unknown. In short, the Xbox's CPU can be emulated, but not accurately."

How is that relevant? Is there any code compiled for the Xbox that actually relies on how the P3 handles micro-ops and pipelining? Because if the argument is that "X = A + B + C + D" might be computed in several different ways, the answer is that it doesn't matter. So I'm not sure why the author brings this up.

The Wii's CPU is also out-of-order, but Dolphin seems to do a fantastic job.


Perhaps they know the PowerPC's algorithm for out-of-order execution? It might have been available for a long time, or shared with the public after IBM started opening up the Power architecture in 2004.


Or, most probably, it doesn't matter at all, since compilers don't generate code that depends on such details.


CPUs (x86 or PPC) only execute instructions out of order when it is provably the case that it doesn't matter. Running any given sequence of instructions in lock-step or out of order should produce exactly the same result. If not, you've found a CPU bug!

That said, timing matters in tight inner loops, and that is where the details of the CPU pipeline matter to an emulator. How many nanoseconds would the emulated CPU have taken to execute those instructions? Answering that requires knowledge of how the CPU works. And sometimes, unfortunately, game code stupidly depends on these details.
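
As a toy illustration (made-up cycle costs, not real Pentium III numbers), an interpreter can charge each emulated instruction an approximate cost; getting those numbers right, including how a superscalar pipeline overlaps instructions, is exactly the hard part:

    // Toy sketch with assumed costs: a naive interpreter's cycle counter.
    #include <cstdint>

    struct Instr { int opcode; };

    int cycle_cost(const Instr& in) {
        switch (in.opcode) {
            case 0:  return 1;  // reg-reg ALU op (assumed cost)
            case 1:  return 3;  // load hitting cache (assumed cost)
            default: return 2;  // everything else (assumed cost)
        }
    }

    void step(uint64_t& cycles, const Instr& in) {
        // execute(in) would apply the architectural effects here.
        cycles += cycle_cost(in);  // naive: ignores multi-issue overlap
    }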


With newer-generation consoles and their more high-level development environments, is that really the case? Cycle- and instruction-accurate emulation only seems to be an issue for older consoles (PS2 and before?), where programmers actually relied on all sorts of tricks to eke out every cycle's worth.

On the Xbox, is that really the case?


Why doesn't the developer's post address the possibility of virtualizing the CPU rather than emulating it?


I don't think you can get accurate emulation that way. You may end up with games running at different speeds depending on the host CPU; some may be too fast or too slow to be playable.
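
A sketch of why (hypothetical pacing code): an interpreter can pace itself because it counts guest cycles, but virtualized code runs at whatever speed the host delivers, with no equivalent counter to pace against:

    // Hypothetical pacing code: sleep so that counted guest cycles track
    // wall-clock time. Natively executed (virtualized) guest code gives
    // you no such counter, so game speed drifts with the host CPU.
    #include <chrono>
    #include <cstdint>
    #include <thread>

    void throttle(uint64_t guest_cycles, double guest_hz,
                  std::chrono::steady_clock::time_point start) {
        using namespace std::chrono;
        auto elapsed_guest = duration<double>(guest_cycles / guest_hz);
        auto target = start + duration_cast<steady_clock::duration>(elapsed_guest);
        std::this_thread::sleep_until(target);  // no-op if we're behind
    }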


I did not know that the GPUs are that different from their desktop cousins. Do you know the reason for this?


They are not; the problem is how you use them. For example, fetching texture data on the PS3 takes about 50 clock cycles, while on a PC it takes 1000-2000 cycles, because the request has to go through all the driver layers first. The PS3 has its own version of OpenGL (PSGL) with much lower-level access to the hardware. That level of access is simply not provided on PC hardware, so it has to be emulated somehow - and for the high-performance GPUs we have nowadays, that's rather difficult.
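
For a sense of the difference, here is an illustrative sketch only - hypothetical command words, not real PSGL or LibGCM calls. On the console, issuing a texture command can be a couple of stores into a memory-mapped command buffer the game owns; on PC the same request crosses the API and driver layers first:

    // Illustrative sketch (hypothetical values): console-style direct
    // command submission, a couple of plain stores with no driver between.
    #include <cstdint>

    void bind_texture_direct(volatile uint32_t* cmd_buf, uint32_t tex_addr) {
        cmd_buf[0] = 0x0042;    // hypothetical "bind texture" command word
        cmd_buf[1] = tex_addr;  // the GPU reads this directly, no driver
    }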


I agree with your general point, but PSGL is pretty much never used. I only know of a couple of small indie PSN titles that used it, mostly to ease porting from PC. It's way too slow for the big games, which use a lower-level API called LibGCM.


Do you think DX12 or Mantle will help with overcoming the high-level API overhead?


The 360 emulator (Xenia) dev seems to think so.


I imagine that timing and synchronisation between components have to be emulated too for various operations to work as expected, further complicating things.


That was actually a big problem with PS2 emulation: some parts of the system were hardcoded to wait only a certain amount of time for a function to finish, because that function ran on a different chip and would always take the same amount of time. Emulating that behaviour was incredibly difficult, especially when all of it had to run on a single CPU. The problem has been mitigated somewhat by multi-core CPUs, with each core emulating a different part of the PS2, but it's still far from ideal.
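
A minimal sketch of the multi-core approach (hypothetical structure, not PCSX2's actual code): each thread emulates one chip, and neither may run more than a slice of cycles ahead of the other, so hardcoded "the other chip takes N cycles" assumptions in game code still hold:

    // Sketch: lock-step-ish sync between two emulated chips on two threads.
    #include <atomic>
    #include <cstdint>
    #include <thread>

    constexpr uint64_t SLICE = 512;  // max allowed drift, in cycles (assumed)

    std::atomic<uint64_t> cpu_cycles{0}, vu_cycles{0};

    void run_cpu_slice() {
        // Wait until the other emulated chip has caught up to within SLICE.
        while (cpu_cycles.load() > vu_cycles.load() + SLICE)
            std::this_thread::yield();
        // step_cpu(SLICE);  // emulate up to SLICE cycles of guest code
        cpu_cycles.fetch_add(SLICE);
    }

Tighter slices give more accurate cross-chip timing but more synchronization overhead, which is why this is still far from ideal.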


Maybe it's possible to pre-load a lot of the textures and such, since PC GPUs have so much RAM.



