I doubt this will run well on anything but the latest high-base-clock PCs. I have a 2.2 GHz Sandy Bridge laptop with Turbo Boost up to 3.1 GHz, but Turbo Boost is mainly a marketing scam by Intel, so it will run as if it's 2.2 GHz. Plus Haswell only has a ~15 percent IPC improvement over SNB. With the PS2 emulator I can't even get 60 fps in most games.
I disagree. High-speed emulation of the processor would be possible using dynamic recompilation, a la QEMU. Someone would just have to write the code to do it.
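For anyone unfamiliar with the technique: a dynarec translates each block of guest code to host machine code once, caches the result, and thereafter jumps straight into the cached native code instead of interpreting instruction by instruction. A bare-bones sketch of that outer loop (all names are made up, not taken from QEMU or any real emulator):

    #include <cstdint>
    #include <unordered_map>

    // Sketch of a dynarec's dispatch loop. HostBlock is a pointer to
    // JIT-compiled host code for one guest basic block; calling it
    // returns the guest PC to continue from. All names invented.
    using HostBlock = uint32_t (*)();

    std::unordered_map<uint32_t, HostBlock> block_cache;

    // Stub for illustration: a real recompiler would emit native x86
    // into executable memory here and return a pointer to it.
    HostBlock translate_block(uint32_t pc) {
        return []() -> uint32_t { return 0; };  // placeholder block
    }

    void run(uint32_t pc) {
        for (;;) {
            auto it = block_cache.find(pc);
            if (it == block_cache.end()) {
                // Compile each guest block only once...
                it = block_cache.emplace(pc, translate_block(pc)).first;
            }
            // ...then every later visit jumps straight into host code.
            pc = it->second();
        }
    }

The win is that hot loops pay the translation cost once and then run at near-native speed, which is why dynarecs scale so much better than interpreters.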
I disagree. Dynamic recompilation wouldn't help with the fact that x86 just doesn't have an equivalent of the SPUs. They can be emulated, but nowhere near as fast as the real thing. You can reliably emulate the PPU, but that's about it. I've programmed for the PS3, and the architecture is so different to x86 that any emulation would have to somehow make up for hardware that simply doesn't exist on x86. Early PS3 games barely used the SPUs, if at all, running mostly on the PPU, and those games should run decently fairly soon. But the latest games like The Last of Us, where the SPUs have been pushed to the limit? I wouldn't count on those being emulated anytime soon.
The SPUs are a fundamentally different architecture to x86, being something akin to a graphics card: heavily optimised for a very particular kind of floating-point calculation (somewhat similar to SSE, but much more task-focused).
A more likely approach is to use something like CUDA/OpenCL, but again, while closer in architecture, the SPUs are incredibly optimised for one thing.
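To give a rough feel for the gap, here's what even the friendly case looks like: a single SPU vector multiply-add maps to two dependent SSE instructions on pre-FMA x86, and that's before you account for the SPU's 256 KB local store and DMA engine, which have no x86 equivalent at all. Illustration only, not any emulator's actual strategy:

    #include <xmmintrin.h>  // SSE intrinsics

    // One SPU fma instruction (vector float multiply-add across a
    // 128-bit register) becomes two dependent SSE ops on pre-FMA x86.
    // The expensive part of SPU emulation (local store, DMA, dual
    // pipelines) isn't even touched here.
    __m128 emulate_spu_fma(__m128 a, __m128 b, __m128 c) {
        return _mm_add_ps(_mm_mul_ps(a, b), c);  // (a * b) + c, per lane
    }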
I think the best chance of emulating the SPUs would be an Intel processor equipped with Iris Pro and its shared L4 cache. The latency and memory-sharing issues involved in using a dedicated GPU would impose a huge bottleneck, and AMD's APUs are just too weak in IPC terms.
As it stands right now, the chance of seeing a playable PS3 emulator is much lower than the chance of seeing a playable PS4 emulator. This is of course due to the x86 processor in the PS4.
EDIT: DCKing corrected me on this. I erroneously assumed that the CPU architecture was the main hurdle; it turns out the GPU is a much bigger problem. The more you know. Thanks, DCKing.
This is not true. The sheer complexity of modern GPUs is generally a much larger problem than emulating a CPU. Furthermore, the previous x86-equipped console, the original Xbox, never saw any decent progress in emulation, whereas its far more exotic competitors have really good emulators.
That's not to say the exotic Cell architecture is not a major problem for PS3 emulation. It is. But having an x86 CPU does not seem to increase the emulatability of a console at all.
Do you think this is because of difficulty in emulation, or because the Xbox was so easy to hack? It was not super expensive to buy a used Xbox and mod it even back when new games were still coming out for it, so maybe writing an emulator just didn't seem like a worthwhile investment?
I'd recommend reading this post [1] by the dev of an existing Xbox emulator. It's a summary of the issues inherent in emulating the original Xbox, highlighting some of the major hurdles and technical difficulties that would have to be overcome.
Having spent a lot of time on Xbox emulation, I can say the biggest issue is simple: statically linked XDKs meant that high-level emulation was next to impossible. You had to find signatures for every XDK, which isn't viable considering that most of them were never publicly released.
LLE/MME can work, but it's significantly more effort to pull off; that's the approach I was taking when I was still working on it.
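For anyone wondering what "finding signatures" involves: you scan the statically linked executable for byte patterns that identify known XDK library functions, using wildcards for the bytes that vary between builds (relocated addresses and the like). A toy version of the matcher; the approach is real, but everything here is simplified and the names are mine:

    #include <cstddef>
    #include <cstdint>
    #include <optional>
    #include <vector>

    // A signature byte: either a concrete value or a wildcard for
    // bytes (addresses, link-time constants) that differ per build.
    struct SigByte { uint8_t value; bool wildcard; };

    // Return the offset of the first match of the pattern in the
    // executable image, i.e. where a known XDK function lives.
    std::optional<size_t> find_signature(const std::vector<uint8_t>& image,
                                         const std::vector<SigByte>& sig) {
        if (sig.empty() || sig.size() > image.size()) return std::nullopt;
        for (size_t i = 0; i + sig.size() <= image.size(); ++i) {
            bool match = true;
            for (size_t j = 0; j < sig.size(); ++j) {
                if (!sig[j].wildcard && image[i + j] != sig[j].value) {
                    match = false;
                    break;
                }
            }
            if (match) return i;
        }
        return std::nullopt;
    }

And that's exactly the problem described above: you need a pattern set per XDK version, and since most XDK builds were never public, the signature database can never be complete.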
"The real problem is that any modern x86 processor including the Pentium III can execute multiple instructions at once. So it's not like emulating a Z80 doing one instruction at a time. The actual algorithm and how x86 does this is undocumented and still unknown. In short, the Xbox's CPU can be emulated, but not accurately."
How is that relevant? Is there any code compiled for the Xbox that actually relies on how the P3 handles micro-ops and pipelining? Because if the argument is that "X = A + B + C + D" might be computed in several different ways, the answer is that it doesn't matter. So I'm not sure why the author brings this up.
The Wii chip is also OoO, but Dolphin seems to do a fantastic job.
Perhaps they know the PowerPC's algorithm for out-of-order execution? It might have been available for a long time, or been shared with the public after IBM started opening up the Power architecture in 2004.
CPUs (x86 or PPC) only execute instructions out of order when it provably doesn't change the result: running a given sequence of instructions in lock-step or out of order should produce exactly the same outcome. If not, you've found a CPU bug!
That said, timing matters in tight inner loops, and that is where the details of the CPU pipeline matter to emulators: how many nanoseconds would the emulated CPU have taken to execute those instructions? Answering that requires knowledge of how the CPU works. And sometimes, unfortunately, game code stupidly depends on these details.
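A concrete (made-up) example of the kind of dependence meant here: a delay loop calibrated by instruction count on the real CPU instead of by reading a timer. Get the emulated pipeline timing wrong and the delay silently changes:

    #include <cstdint>

    // Hypothetical timing-sensitive game code: the programmer measured
    // this loop on real hardware and relies on it burning a fixed
    // number of microseconds. An emulator that retires instructions
    // faster or slower than the original pipeline breaks it silently.
    volatile uint32_t sink;

    void delay_calibrated_on_real_hardware() {
        for (uint32_t i = 0; i < 33000; ++i) {  // count tuned by hand
            sink = i;  // volatile store stops the compiler deleting the loop
        }
    }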
With newer-gen high-level consoles and emulators, is that really the case? Cycle- and instruction-accurate emulation only seems to be an issue for older consoles (PS2 and before?) where programmers actually relied on all sorts of tricks to eke out every cycle's worth.
I don't think you can get accurate emulation that way. You may end up with games running at different speeds depending on what CPU you have. Some may be too fast or too slow to be playable.
They are not; the problem is how you use them. For example, fetching texture data on the PS3 takes about 50 clock cycles, while on a PC it takes 1000-2000 cycles, because the request has to go through all the driver layers first. The PS3 uses its own version of OpenGL (PSGL), which gives much more low-level access to the hardware. That level of access simply isn't provided on PC, so it has to be emulated somehow, and with the high-performance GPUs we have nowadays, that's rather difficult.
I agree with your general point, but PSGL is pretty much never used. I only know of a couple of small indie PSN titles that used it, mostly to ease porting from PC. It's way too slow for the big games, which use a lower-level API called LibGCM.
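To make the "driver layers" point concrete: on PC a state change goes app, GL runtime, driver, kernel transition, GPU ring buffer, while LibGCM-style code writes the hardware command words into the ring buffer directly. Purely hypothetical sketch, with invented opcodes and layout (real LibGCM obviously differs):

    #include <cstdint>

    // Hypothetical illustration of direct command-buffer access, in
    // the spirit of LibGCM. Opcode and layout are made up.
    struct CommandBuffer {
        uint32_t* put;  // write pointer the GPU consumes from
    };

    constexpr uint32_t CMD_SET_TEXTURE = 0x1A00;  // invented opcode

    // "Binding" a texture is just appending two words to the ring
    // buffer: no user/kernel transition, no driver validation.
    inline void set_texture(CommandBuffer& cb, uint32_t tex_addr) {
        *cb.put++ = CMD_SET_TEXTURE;
        *cb.put++ = tex_addr;
    }

An emulator has to take writes like these and translate them into whatever the host's heavyweight graphics API allows, which is where those 1000-2000 cycles come back.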
That was actually a big problem with PS2 emulation: some parts of the system were hardcoded to wait only a certain amount of time for a function to finish, because that function ran on a different chip and would always take the same amount of time. Emulating that behaviour was incredibly difficult, especially when all of it had to run on a single CPU. The problem has been mitigated somewhat by multi-core CPUs, with each core emulating a different part of the PS2, but it's still far from ideal.
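The standard mitigation is a cycle-stamped event scheduler: each emulated chip runs ahead for a small budget of cycles, and cross-chip completions are posted as events timestamped in emulated cycles, so they land after the same delay the real hardware had regardless of how fast the host happens to be. A bare-bones sketch (all names invented):

    #include <cstdint>
    #include <functional>
    #include <queue>
    #include <vector>

    // Minimal event scheduler of the kind emulators use to preserve
    // the fixed inter-chip latencies that games were hardcoded against.
    struct Event {
        uint64_t fire_cycle;           // emulated-cycle timestamp
        std::function<void()> action;  // e.g. "signal DMA complete"
        bool operator>(const Event& o) const { return fire_cycle > o.fire_cycle; }
    };

    class Scheduler {
        uint64_t now_ = 0;  // current emulated cycle
        std::priority_queue<Event, std::vector<Event>, std::greater<Event>> q_;
    public:
        // Post work that must complete after a fixed cycle delay,
        // matching what the real chip would have taken.
        void post(uint64_t delay_cycles, std::function<void()> action) {
            q_.push({now_ + delay_cycles, std::move(action)});
        }
        // Advance emulated time, firing anything that comes due.
        void run_for(uint64_t cycles) {
            uint64_t end = now_ + cycles;
            while (!q_.empty() && q_.top().fire_cycle <= end) {
                Event ev = q_.top();
                q_.pop();
                now_ = ev.fire_cycle;
                ev.action();
            }
            now_ = end;
        }
    };

With one of these per emulated component (or a shared one), the "other chip" always appears to finish in exactly the expected number of cycles, even if the host threads actually run wildly out of step.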