RPCS3: An open-source PlayStation 3 emulator for Windows written in C++ (github.com/dhrpcs3)
164 points by CrystalCuckoo on March 24, 2014 | 48 comments



Uhh... it's also for Linux. Comments in the forums suggest the Linux version is just as mature as the Windows one. Also, the FAQ says nothing about "Windows only."

Gotta make sure Windows doesn't get all the gaming glory, eh? Eh?! Eh?!?!?!


Top lines of the FAQ currently:

> What is RPCS3?

> RPCS3 is an open-source Sony PlayStation 3 emulator for Windows written in C++. It currently runs only small homebrew applications. The source code for RPCS3 is hosted at: https://github.com/DHrpcs3/rpcs3/


It doesn't say "only," and at the bottom of the page:

> As long as this platform is powerful enough to emulate the PlayStation 3, yes, there is a high chance that RPCS3 will be ported there. However, at the moment we only target Windows and Linux.

Boo-ya-ka-sha? FAQ is probably just outdated. Never seen that on an open source project ;-) luckily I read the whole thing <3


Here is a video of it running Disgaea 3: https://www.youtube.com/watch?v=IQEv6B6fIgA


Don't miss the comment in the video description that the video itself is sped up by a factor of 8.

It's definitely a start, though!


An Xbox 360 emulator in the same vein: https://github.com/benvanik/xenia


The first public version was released in 2011 [1]; they've come a long way. Kudos to them for keeping at it all this time. It's always a little encouraging to hear of progress on such projects.

[1] http://code.google.com/p/rpcs3/source/detail?r=1


I wonder if we'll see "emulators" for the PS4 and Xbox One soon (maybe even sooner than usable PS3 emulators), given that they both use the x86-64 CPU architecture.

It seems we wouldn't even need emulation, "just" an API emulation layer like Wine. And I think the PS3 used OpenGL, so it might not be too difficult to map that to native calls. And unlike with emulators, we'd get near-native speeds.
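
To make that concrete, here's the Wine idea in miniature: the loader resolves the guest binary's imports to host-side "thunks" instead of console libraries, so guest calls land straight in native code with no CPU emulation involved. Every name below ("gpuFlip", host_flip) is made up purely for illustration:

    #include <cstdio>
    #include <string>
    #include <unordered_map>

    // Table mapping guest import names to host implementations ("thunks").
    // Wine does essentially this at the DLL boundary; names are invented.
    using Thunk = void (*)();

    void host_flip() { std::printf("present frame via native swapchain\n"); }

    int main()
    {
        std::unordered_map<std::string, Thunk> imports = {
            {"gpuFlip", &host_flip},   // hypothetical console entry point
        };
        // When the loader resolves the guest's imports, it patches in the
        // host thunk instead of a console library address.
        imports.at("gpuFlip")();       // guest call lands in host code
    }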

Actually that sounds a bit too simple to be true, what am I missing?


A few things:

Neither system's encryption has been cracked yet.

Both systems have "hUMA", where the GPU and CPU have a coherent view of memory (at least on the "Onion" bus) and the former can even page fault. This likely makes them rather difficult to emulate properly on computers that don't have similar AMD cards, and I think AMD hasn't even properly exposed APIs for this for those cards on PC.

And, of course, writing something like Wine is a huge amount of work.

But it's definitely possible.


Also worth pointing out that while AMD does have PC hardware with hUMA (Kaveri series APUs), the unified memory pool only exists when you're using the integrated graphics processor. If you hook up a discrete GPU (which most gamers do) you get two separate memory pools that are in a unified address space, but don't have the magical consistency benefits that hUMA can give a CPU+GPU working on the same data. Not my area of expertise, but I doubt that kind of setup will help with PS4/XBOne emulation.

For drivers, proper support for hUMA isn't available yet on PC. There's some pretty recent information on AnandTech: http://www.anandtech.com/show/7825/amd-updates-driver-roadma...


Neither PS3 nor PS4 (if that's what you meant) use OpenGL. I don't know why this myth is so prevalent.


I was talking specifically about the PS4. I don't know why I thought they'd use OpenGL; I just naively assumed Xbox == DirectX and PS4 == OpenGL. It seemed easier for devs to port their games to a well-known and documented API.

What do they use instead? A custom API?


The PS4 uses a couple of APIs for game graphics, GNM and GNMX [1]. I've heard that their shader language is similar to GLSL (corrected by lunixbochs below: it's closer to Cg/HLSL). The menus are built in WebGL running inside a stripped-down WebKit browser. Developers are unable to use WebGL for games, but it isn't out of the question that it will be supported in the future.

[1] https://en.wikipedia.org/wiki/PlayStation_4_system_software


HLSL is Microsoft's shading language. GLSL is the OpenGL shading language.


Your point?


The shading language is similar to Cg and HLSL, not very similar to GLSL.


Yes, custom APIs. LibGCM on PS3, GNM and GNMX on PS4.


I wouldn't bet on it. The original Xbox was also x86, and there still isn't an emulator available for it.


I doubt this will run well on anything but the latest high-base-clock PCs. I have a 2.2 GHz Sandy Bridge laptop with Turbo Boost up to 3.1 GHz, but Turbo Boost is mainly a marketing scam by Intel, so it will run as if it's 2.2 GHz. Plus, Haswell only has a ~15 percent IPC improvement over SNB. With the PS2 emulator I can't even get 60 fps in most games.


Does it matter, though? This is an elegant hack in the true sense of the word.

The Cell processor's architecture pretty much rules out high-fidelity emulation directly on an x86 chip any time soon.

Not all hacks have to be useful (even for a given value of useful).


I disagree. High-speed emulation of the processor would be possible using dynamic recompilation, à la QEMU. Someone would just have to write the code to do it.
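
A toy version of what dynamic recompilation buys you, assuming a made-up two-opcode guest ISA: decode each guest block once, cache the translation keyed by guest PC, and re-run it on later visits. A real JIT like QEMU's emits actual host machine code; here the "translation" is just an array of function pointers, which already removes the per-visit decode cost:

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct Cpu { uint32_t pc = 0; long long acc = 0; };
    using Op = void (*)(Cpu&);

    void op_inc(Cpu& c) { c.acc += 1; c.pc += 1; }
    void op_dbl(Cpu& c) { c.acc *= 2; c.pc += 1; }

    const uint8_t guest_code[] = {0, 1, 0, 0, 1};   // 0 = inc, 1 = dbl

    std::vector<Op> translate(uint32_t pc)
    {
        std::vector<Op> block;
        while (pc < sizeof guest_code)              // decode once, to block end
            block.push_back(guest_code[pc++] == 0 ? op_inc : op_dbl);
        return block;
    }

    int main()
    {
        std::unordered_map<uint32_t, std::vector<Op>> cache;
        Cpu cpu;
        auto it = cache.find(cpu.pc);               // translated before?
        if (it == cache.end())
            it = cache.emplace(cpu.pc, translate(cpu.pc)).first;
        for (Op op : it->second)
            op(cpu);                                // execute the cached block
    }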


I disagree. Dynamic recompilation wouldn't help with the fact that x86 just doesn't have an equivalent of the SPUs. They can be emulated, but nowhere near as fast as the real thing. You can reliably emulate the PPU, but that's about it. I've programmed for the PS3, and the architecture is so different from x86 that any emulation would have to somehow make up for hardware that doesn't exist in x86. Early games for the PS3 didn't use the SPUs, or barely used them, running mostly on the PPU, and those games should run fairly decently fairly soon. But the latest games like The Last of Us, where the SPUs have been pushed to the limit? I wouldn't count on those being emulated anytime soon.
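
For a sense of where the cost actually lies: a single SPU vector op maps fine onto SSE, since both operate on 128-bit registers, so the raw arithmetic isn't the hard part. It's the six SPUs available to games running concurrently, each with its own 256 KB local store and DMA engine, that has no cheap host equivalent. A minimal sketch of the easy half:

    #include <xmmintrin.h>   // SSE intrinsics
    #include <cstdio>

    int main()
    {
        // An SPU register is 128 bits wide, so one SPU float add across
        // four lanes becomes a single SSE instruction on the host:
        __m128 a = _mm_set_ps(4.f, 3.f, 2.f, 1.f);
        __m128 b = _mm_set_ps(40.f, 30.f, 20.f, 10.f);
        __m128 r = _mm_add_ps(a, b);

        float out[4];
        _mm_storeu_ps(out, r);
        std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    }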


It's not "just" a software problem.

The SPUs are a fundamentally different architecture from x86, being something akin to a graphics card: heavily optimised for a very particular kind of floating-point calculation (somewhat similar to SSE but much more task-focused).

A more likely approach is to use something like CUDA/OpenCL, but even then, while closer in architecture, the SPUs are incredibly optimised for one thing.


I think the best chance for emulating the SPUs would be with an Intel processor equipped with Iris Pro and the shared L4 cache. The latency and memory sharing issues involved with using a dedicated GPU would impose a huge bottleneck and the AMD APUs are just so weak in IPC terms.


It won't run well, period. It's very early stages and isn't even handling textures properly, let alone running games smoothly.


As it stands right now, the chance of seeing a playable PS3 emulator is much lower than the chance of seeing a playable PS4 emulator. This is of course due to the x86 processor in the PS4.

EDIT: DCKing corrected me on this; I erroneously assumed that the CPU architecture was the main hurdle. It turns out the GPU is a much bigger problem. The more you know. Thanks, DCKing.


This is not true. The great complexity of modern GPUs is generally a much larger problem than emulating a CPU. Furthermore, the previous x86-equipped console, the original Xbox, never saw any decent progress in emulation, whereas its far more exotic competitors have really good emulators.

That's not to say the exotic Cell architecture is not a major problem for PS3 emulation. It is. But having an x86 CPU does not seem to increase the emulatability of a console at all.


Do you think this is because of difficulty in emulation, or because the Xbox was so easy to hack? It was not super expensive to buy a used Xbox and mod it even back when new games were still coming out for it, so maybe it just didn't seem like a worthwhile investment to spend time writing an emulator?


I'd recommend reading this post [1] by the dev of an existing Xbox emulator. It's a summary of the issues inherent to the emulation of the original Xbox, highlighting some major hurdles and technical difficulties that would have to be overcome.

[1] http://www.neogaf.com/forum/showpost.php?p=48088464&postcoun...


Having spent a lot of time doing Xbox emulation, the biggest issue is simple: statically linked XDKs meant that doing high-level emulation was next to impossible. You had to find signatures for every XDK, which isn't viable considering that most of them weren't publicly released.
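
For anyone unfamiliar, signature matching boils down to scanning the statically linked binary for known byte patterns, with wildcards for bytes that vary between links (relocated addresses, immediates). A rough sketch with an invented pattern; a real signature database needs one entry per routine per XDK build, which is exactly why it doesn't scale:

    #include <cstdint>
    #include <cstddef>
    #include <vector>

    constexpr int WILD = -1;
    using Sig = std::vector<int>;   // byte values 0-255, or WILD

    // Scan guest memory for the pattern; a hit marks where the statically
    // linked library routine lives, so a high-level handler can replace it.
    size_t find_sig(const uint8_t* mem, size_t len, const Sig& sig)
    {
        for (size_t i = 0; i + sig.size() <= len; i++) {
            bool hit = true;
            for (size_t j = 0; j < sig.size(); j++)
                if (sig[j] != WILD && mem[i + j] != sig[j]) { hit = false; break; }
            if (hit) return i;
        }
        return SIZE_MAX;            // unknown XDK build: no HLE for this routine
    }

    int main()
    {
        const uint8_t image[]  = {0x55, 0x8B, 0xEC, 0x83, 0xEC, 0x10};
        const Sig    prologue  = {0x55, 0x8B, 0xEC, WILD, WILD, 0x10};  // made up
        return find_sig(image, sizeof image, prologue) == SIZE_MAX;
    }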

LLE/MME can work, but it's significantly more effort to pull off; that's the approach I was taking when I was still working on it.


"The real problem is that any modern x86 processor including the Pentium III can execute multiple instructions at once. So it's not like emulating a Z80 doing one instruction at a time. The actual algorithm and how x86 does this is undocumented and still unknown. In short, the Xbox's CPU can be emulated, but not accurately."

How is that relevant? Is there any code compiled for the Xbox that actually relies on how the P3 handles micro-ops and pipelines things and all that? 'Cause if your argument is that "X=A+B+C+D" might have several different ways of happening, the answer is that it doesn't matter. So I'm not sure why the author brings this up.

The Wii chip is also OOO but Dolphin seems to do a fantastic job.


Perhaps they know the PowerPC's algorithm for performing out-of-order execution? It might have been available for a long time, or it may have been shared with the public after IBM started opening up the Power architecture in 2004.


Or most probably: it doesn't matter at all since compilers aren't generating code that depends on such details?


CPUs (x86 or PPC) only do out-of-order execution when it is provably the case that it doesn't matter. The result of running any given sequence of instructions in lock-step or out of order should be exactly the same. If not, you've found a CPU bug!

That said, timing matters in these tight inner loops, and that is where the details of the CPU pipeline matter, with respect to emulators. How many nanoseconds would it have taken for the emulated CPU to execute those instructions? That requires knowledge of how the CPU works. And sometimes, unfortunately, game code stupidly depends on these details.
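
A sketch of what that looks like inside an emulator's inner loop: charge an approximate cycle cost per instruction and deliver scheduled events (vblank, timer interrupts) when the running total crosses a deadline. All costs and numbers below are invented for illustration; if they drift too far from the real pipeline's, guest code that waits a fixed number of cycles for hardware breaks:

    #include <cstdint>
    #include <cstdio>

    static uint64_t cycles = 0;

    void step_one_instruction()
    {
        // decode/execute elided; charge this instruction's approximate cost
        cycles += 2;                  // e.g. loads would cost more than ALU ops
    }

    int main()
    {
        const uint64_t vblank_due = 1000;   // next scheduled event, in cycles
        while (cycles < vblank_due)
            step_one_instruction();
        // deliver the interrupt at (approximately) the right guest time
        std::printf("vblank delivered at cycle %llu\n",
                    (unsigned long long)cycles);
    }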


With newer-gen high-level consoles and emulators, is that really the case? Having cycle/instruction-accurate emulation only seems to be an issue for older consoles (PS2 and before?) where programmers actually relied on all sorts of tricks to eke out every cycle's worth.

On the Xbox, is that really the case?


Why doesn't the developer's post address the possibility of virtualizing the CPU rather than emulating it?


I don't think you can get accurate emulation that way. You may end up with games running at different speeds depending on what CPU you have. Some may be too fast or too slow to be playable.


I did not know that the GPUs are that different from their desktop cousins. Do you know the reason for this?


They are not; the problem is how you use them. For example, fetching texture data on the PS3 takes about 50 clock cycles, while on a PC it takes 1000-2000 cycles, because the request has to go through all the driver layers first. The PS3 has its own version of OpenGL (PSGL), which provides much lower-level access to the hardware. That level of access is simply not provided on PC, so it has to be somehow emulated, and for the high-performance GPUs we have nowadays, that's rather difficult.


I agree with your general point, but PSGL is pretty much never used. I only know of a couple of small indie PSN titles that used it, mostly to ease the porting process from PC. It's way too slow for the big games, which use a lower-level API called LibGCM.


Do you think DX12 or Mantle will help with overcoming the high-level API overhead?


The 360 emulator (Xenia) dev seems to think so.


I imagine that timing and synchronisation between components have to be emulated too for various operations to work as expected, further complicating this.


That was actually a big problem with PS2 emulation: some parts of the system were hardcoded to wait only a certain amount of time for a function to finish, because that function was run on a different chip and would always take the same amount of time. Emulating that behaviour was incredibly difficult, especially if all of it had to run on a single CPU. The problem has been mitigated somewhat by multi-core CPUs, with each core emulating a different part of the PS2, but it's still far from ideal.
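
Roughly how emulators handle that, in sketch form (the component names are generic stand-ins, not real PS2 units): run each chip for a short "slice" of cycles, then re-synchronise. Shrinking the slice buys timing accuracy and costs performance, which is the trade-off at the heart of it:

    #include <cstdint>

    struct Chip {
        uint64_t cycles = 0;
        void run(uint64_t n) { cycles += n; }   // actual emulation elided
    };

    int main()
    {
        Chip cpu, coprocessor;
        const uint64_t slice = 512;   // maximum allowed drift, in cycles

        for (int i = 0; i < 1000; i++) {
            cpu.run(slice);           // run one chip ahead...
            coprocessor.run(slice);   // ...then let the other catch up
            // cross-chip reads inside a slice see slightly stale state,
            // which is exactly where timing-sensitive guest code breaks
        }
    }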


Maybe it's possible to pre-load a lot of textures and stuff, since PC GPUs have so much RAM.


Interesting tidbit: if you search 'turbo boost is a marketing scam' on Google, your comment is the first result.


How will this finesse the (hardware-based) DRM on PS3 games?


Most of the encryption done on the PS3 is actually performed in software on an isolated SPU core. The crypto code has been dumped and the algorithms reversed. This emulator leverages naehrwert's excellent scetool to decrypt the SELF files.



