Uhh... it's also for Linux. Comments in the forums make it look like the Linux version is just as mature as the Windows one. Also, the FAQ says nothing about "Windows only."
Gotta make sure Windows doesn't get all the gaming glory, eh? Eh?! Eh?!?!?!
> RPCS3 is an open-source Sony PlayStation 3 emulator for Windows written in C++. It currently runs only small homebrew applications. The source code for RPCS3 is hosted at: https://github.com/DHrpcs3/rpcs3/
It doesn't say "only," and at the bottom of the page:
> As long as this platform is powerful enough to emulate the PlayStation 3, yes, there is a high chance that RPCS3 will be ported there. However, at the moment we only target Windows and Linux.
Boo-ya-ka-sha? FAQ is probably just outdated. Never seen that on an open source project ;-) luckily I read the whole thing <3
The first public version was released in 2011 [1]; they've come a long way. Kudos to them for keeping at it all this time, it's always a little encouraging to hear of progress on such projects.
I wonder if we'll see "emulators" for the PS4 and Xbox One soon (maybe even sooner than usable PS3 emulators), given that they both use the x86-64 CPU architecture.
It seems we wouldn't even need emulation, "just" an API emulation layer like Wine. And I think the PS3 used OpenGL, so it might not be too difficult to map that to native calls. And unlike emulators, we'd get near-native speeds.
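Roughly the shape I'm imagining, Wine-style: re-implement the console's library entry points on top of native host calls instead of emulating hardware. Everything below is hypothetical, just to show the idea:

    #include <cstdio>

    // Native host-side renderer we'd forward to (stand-in for real
    // OpenGL/Vulkan calls on the host).
    void host_draw_triangles(int count) { std::printf("host draws %d tris\n", count); }

    namespace console_api {  // the guest-facing surface the game was built against
        // Hypothetical console draw call: translate guest-API semantics to
        // host-API calls. No CPU emulation involved, so it runs at native
        // speed -- *if* the CPU architectures already match, as on the PS4.
        void DrawPrimitives(int count) {
            host_draw_triangles(count);
        }
    }

    int main() { console_api::DrawPrimitives(100); }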
Actually that sounds a bit too simple to be true, what am I missing?
Both systems have "hUMA", where the GPU and CPU have a coherent view of memory (at least on the "Onion" bus) and the former can even page fault. This likely makes them rather difficult to emulate properly on computers that don't have similar AMD cards, and I think AMD hasn't even properly exposed APIs for this for those cards on PC.
And, of course, writing something like Wine is a huge amount of work.
Also worth pointing out that while AMD does have PC hardware with hUMA (Kaveri series APUs), the unified memory pool only exists when you're using the integrated graphics processor. If you hook up a discrete GPU (which most gamers do) you get two separate memory pools that are in a unified address space, but don't have the magical consistency benefits that hUMA can give a CPU+GPU working on the same data. Not my area of expertise, but I doubt that kind of setup will help with PS4/XBOne emulation.
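For what it's worth, the nearest PC-side analogue I can point to is OpenCL 2.0's fine-grained SVM, where (on supporting hardware and drivers) the CPU and GPU really do share one coherent pointer. A minimal sketch, error handling omitted:

    // Requires an OpenCL 2.0 platform; link with -lOpenCL.
    #define CL_TARGET_OPENCL_VERSION 200
    #include <CL/cl.h>
    #include <cstdio>

    int main() {
        cl_platform_id plat; cl_device_id dev;
        clGetPlatformIDs(1, &plat, nullptr);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
        cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);

        // One allocation, directly dereferenceable by both the CPU and GPU
        // kernels -- roughly what hUMA gives the consoles for free.
        int* shared = (int*)clSVMAlloc(
            ctx, CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER, 4096, 0);
        shared[0] = 42;  // CPU write; a kernel could read it with no copy

        std::printf("shared[0] = %d\n", shared[0]);
        clSVMFree(ctx, shared);
        clReleaseContext(ctx);
    }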
I was talking specifically about the PS4. I don't know why I thought they'd use OpenGL; I just naively assumed Xbox == DirectX and PS4 == OpenGL. It seemed easier for the devs to port their games to a well-known and documented API.
The PS4 uses a couple of APIs for game graphics, GNM and GNMX [1]. I've heard that their shader language is similar to HLSL (corrected by lunixbochs). The menus are built in WebGL running inside a stripped-down WebKit browser. Developers are unable to use WebGL for games, but it isn't out of the question that it will be supported in the future.
I doubt this will run well on anything but the latest high-base-clock PCs. I have a 2.2 GHz Sandy Bridge laptop with Turbo Boost up to 3.1 GHz, but Turbo Boost is mainly a marketing scam by Intel, so it will run as if it's 2.2 GHz. Plus Haswell only has a ~15 percent IPC improvement over SNB. With the PS2 emulator I can't even get 60 fps for most games.
I disagree. High-speed emulation of the processor would be possible using dynamic recompilation, à la QEMU. Someone would just have to write the code to do it.
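To make that concrete, here's a toy of the translate-once/run-many structure a dynarec gives you. A real one emits host machine code; std::function stands in for that here, and the two-opcode guest ISA is invented for the example:

    #include <cstdint>
    #include <cstdio>
    #include <functional>
    #include <unordered_map>
    #include <vector>

    struct Cpu { uint32_t pc = 0; int32_t r0 = 0; };
    enum : uint32_t { INC = 0, HALT = 1 };
    std::vector<uint32_t> guest = { INC, INC, INC, HALT };  // fake guest program

    using Block = std::function<bool(Cpu&)>;    // false = halted
    std::unordered_map<uint32_t, Block> cache;  // keyed by guest pc

    Block translate(uint32_t pc) {              // decode a whole block, once
        std::vector<Block> ops;
        for (uint32_t p = pc; ; ++p) {
            if (guest[p] == INC) ops.push_back([](Cpu& c){ c.r0++; c.pc++; return true; });
            else                { ops.push_back([](Cpu&  ){ return false; }); break; }
        }
        return [ops](Cpu& c){ for (auto& op : ops) if (!op(c)) return false; return true; };
    }

    int main() {
        Cpu cpu;
        while (true) {
            auto it = cache.find(cpu.pc);
            if (it == cache.end()) it = cache.emplace(cpu.pc, translate(cpu.pc)).first;
            if (!it->second(cpu)) break;  // cached block runs many instrs per lookup
        }
        std::printf("r0 = %d\n", cpu.r0);  // prints 3
    }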
I disagree. Dynamic recompilation wouldn't help with the fact that x86 just doesn't have an equivalent of SPUs. They can be emulated, but they will be nowhere near as fast as the real thing. You can reliably emulate the PPU but that's about it. I've programmed for the PS3 and the architecture is so different to x86 that any emulation would have to somehow make up for the hardware that doesn't exist in x86. Early games for the PS3 didn't even use the SPUs, or barely used them, running mostly on the PPU, and those games should run fairly decently fairly soon. But latest games like The Last Of Us, where the SPUs have been pushed to the limit? I wouldn't count on that being emulated anytime soon.
The SPUs are a fundamentally different architecture from x86, being something akin to a graphics card, heavily optimised for a very particular kind of floating-point calculation (somewhat similar to SSE but much more task-focused).
A more likely approach is to use something like CUDA/OpenCL, but again, while closer in architecture, the SPUs are incredibly optimised for one thing.
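To be fair, the per-instruction mapping for the friendly cases is pleasant, since the SPU's 128-bit registers line up with SSE. A sketch of emulating the SPU's fa (vector float add) instruction; the state here is simplified, and the hard part is everything around it (local store, DMA, channels), not the arithmetic:

    #include <cstdio>
    #include <xmmintrin.h>  // SSE intrinsics

    struct SpuState {
        // The SPU really has 128 general-purpose registers, 128 bits each;
        // everything else about the state is omitted here.
        alignas(16) float gpr[128][4];
    };

    // fa rt, ra, rb -> four single-precision adds in one host SSE instruction.
    void emu_fa(SpuState& s, int rt, int ra, int rb) {
        __m128 a = _mm_load_ps(s.gpr[ra]);
        __m128 b = _mm_load_ps(s.gpr[rb]);
        _mm_store_ps(s.gpr[rt], _mm_add_ps(a, b));
    }

    int main() {
        SpuState s{};
        for (int i = 0; i < 4; ++i) { s.gpr[1][i] = float(i); s.gpr[2][i] = 10.0f; }
        emu_fa(s, 3, 1, 2);
        std::printf("%g %g %g %g\n", s.gpr[3][0], s.gpr[3][1], s.gpr[3][2], s.gpr[3][3]);
    }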
I think the best chance for emulating the SPUs would be with an Intel processor equipped with Iris Pro and the shared L4 cache. The latency and memory sharing issues involved with using a dedicated GPU would impose a huge bottleneck and the AMD APUs are just so weak in IPC terms.
As it stands right now, the chance of seeing a playable PS3 emulator is much lower than the chance of seeing a playable PS4 emulator. This is of course due to the x86 processor in the PS4.
EDIT: DCKing corrected me on this; I erroneously assumed that the CPU architecture was the main hurdle. Turns out the GPU is a much bigger problem. The more you know. Thanks, DCKing.
This is not true. The great complexity of modern GPUs is a much larger problem generally than emulating a CPU. Furthermore, the previous x86-equipped console, the original Xbox, never saw any decent progress in emulation whereas its far more exotic competitors have really good emulators.
That's not to say the exotic Cell architecture is not a major problem for PS3 emulation. It is. But having an x86 CPU does not seem to increase the emulatability of a console at all.
Do you think this is because of difficulty in emulation, or because the Xbox was so easy to hack? It was not super expensive to buy a used Xbox and mod it, even back when new games were coming out for it, so maybe it just didn't seem like a worthwhile investment to spend time writing an emulator?
I'd recommend reading this post [1] by the dev of an existing Xbox emulator. It's a summary of the issues inherent to the emulation of the original Xbox, highlighting some major hurdles and technical difficulties that would have to be overcome.
Having spent a lot of time doing Xbox emulation, the biggest issue is simple: statically linked XDKs meant that doing high-level emulation was next to impossible. You had to find signatures for every XDK, which isn't viable considering that most of them weren't publicly released.
LLE/MME can work, but it's significantly more effort to pull off; that's the approach I was taking when I was still working on it.
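For anyone curious what "finding signatures" means in practice: you scan the game image for byte patterns (with wildcards for bytes that vary between builds) that identify each statically linked XDK routine. A minimal sketch, all patterns invented:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct Signature { std::vector<uint8_t> bytes; std::vector<bool> wildcard; };

    // Returns offset of first match, or -1. Every XDK version shifts these
    // patterns around, which is why unreleased XDKs make this approach painful.
    long find_signature(const std::vector<uint8_t>& image, const Signature& sig) {
        for (size_t i = 0; i + sig.bytes.size() <= image.size(); ++i) {
            bool match = true;
            for (size_t j = 0; j < sig.bytes.size(); ++j)
                if (!sig.wildcard[j] && image[i + j] != sig.bytes[j]) { match = false; break; }
            if (match) return (long)i;
        }
        return -1;
    }

    int main() {
        std::vector<uint8_t> image = {0x90, 0x55, 0x8B, 0xEC, 0x83, 0xEC, 0x40};
        Signature prologue{{0x55, 0x8B, 0xEC}, {false, false, false}};  // push ebp; mov ebp, esp
        std::printf("found at %ld\n", find_signature(image, prologue)); // prints 1
    }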
"The real problem is that any modern x86 processor including the Pentium III can execute multiple instructions at once. So it's not like emulating a Z80 doing one instruction at a time. The actual algorithm and how x86 does this is undocumented and still unknown. In short, the Xbox's CPU can be emulated, but not accurately."
How is that relevant? Is there any code compiled for the Xbox that actually relies on how the P3 handles micro-ops and pipelines things and all that? Because if your argument is that "X=A+B+C+D" might have several different ways of happening, the answer is that it doesn't matter. So I'm not sure why the author brings this up.
The Wii chip is also OOO but Dolphin seems to do a fantastic job.
Perhaps they know the PowerPC's algorithm for performing out-of-order execution? It might have been available for a long time, or have been shared with the public after IBM started opening up the Power architecture in 2004.
CPUs (x86 or PPC) only do out-of-order execution when it is provably the case that it doesn't matter. The result of running any given sequence of instructions in lock-step, or out-of-order should be exactly the same. If not, you found a CPU bug!
That said, timing matters in these tight inner loops, and that is where the details of the CPU pipeline matter, with respect to emulators. How many nanoseconds would it have taken for the emulated CPU to execute those instructions? That requires knowledge of how the CPU works. And sometimes, unfortunately, game code stupidly depends on these details.
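A cartoon of what that looks like inside an emulator: charge every emulated instruction a cycle cost and advance emulated time off that counter, not the host clock. The cost table below is made up; real emulators tune these numbers against hardware:

    #include <cstdint>
    #include <cstdio>

    enum Op : uint8_t { ADD, MUL, LOAD, BRANCH };

    // Hypothetical per-op cycle costs for the emulated CPU.
    constexpr uint64_t kCycles[] = { 1, 3, 5, 2 };

    int main() {
        Op program[] = { LOAD, ADD, MUL, BRANCH };
        uint64_t cycles = 0;
        for (Op op : program)
            cycles += kCycles[op];  // time advances with the *emulated* CPU, not the host
        std::printf("emulated cycles: %llu\n", (unsigned long long)cycles);
        // Scheduled events (vblank, timers, DMA completion) fire when `cycles`
        // crosses their deadline, so game timing loops behave as on hardware.
    }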
With newer-gen high-level consoles and emulators, is that really the case? Having cycle/instruction-accurate emulation only seems to be an issue for older consoles (PS2 and before?) where programmers actually relied on all sorts of things to eke out every cycle's worth.
I don't think you can get accurate emulation that way. You may end up with games running at different speeds depending on what CPU you have. Some may be too fast or too slow to be playable.
They are not; the problem is how you use them. For example, fetching texture data on the PS3 takes about 50 clock cycles, while on a PC it takes 1000-2000 cycles, because the request has to go through all the driver layers first. The PS3 uses its own version of OpenGL (PSGL), which gives much lower-level access to the hardware. That level of access is simply not provided by PC hardware, so it has to be somehow emulated, and for the high-performance GPUs we have nowadays, it's rather difficult.
I agree with your general point, but PSGL is pretty much never used. I only know of a couple of small indie PSN titles that used it, mostly to ease the porting process from PC. It's way too slow for the big games, which use a lower-level API called LibGCM.
That was actually a big problem with PS2 emulation: some parts of the system were hardcoded to wait only a certain amount of time for a function to finish, because that function was run on a different chip and would always take the same amount of time. Emulating that behaviour was incredibly difficult, especially if all of it had to run on a single CPU. The problem has been mitigated somewhat by multi-core CPUs, with each core emulating a different part of the PS2, but it's still far from ideal.
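The usual shape of the mitigation: run each emulated chip for a small cycle budget, then resynchronise, so hardcoded cross-chip timing still lines up. A sketch with invented numbers (EE/GS being the PS2's main CPU and graphics synthesizer):

    #include <cstdint>
    #include <cstdio>

    struct Chip { const char* name; uint64_t cycles = 0; };

    void run_for(Chip& c, uint64_t budget) {
        // ...execute this chip's instructions until `budget` cycles are consumed...
        c.cycles += budget;
    }

    int main() {
        Chip cpu{"EE"}, gpu{"GS"};
        const uint64_t kSlice = 512;  // smaller slice = tighter sync, slower emulation
        for (int i = 0; i < 4; ++i) {
            run_for(cpu, kSlice);     // on multi-core hosts these two calls can become
            run_for(gpu, kSlice);     // two threads meeting at a barrier each slice
            std::printf("sync @ %llu cycles\n", (unsigned long long)cpu.cycles);
        }
    }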
Most of the encryption done on the PS3 is actually performed in software on an isolated SPU core. The crypto code has been dumped and the algorithms reversed. This emulator leverages naehrwert's excellent scetool to decrypt the SELF files.