Normally, an emulator strives for accuracy. But since so few (if any) new Wii and GameCube games are coming out, the library is effectively fixed. So I think it might be a better choice to hard-code fixes for these known crashes in specific games rather than emulating the crashes accurately.
That's a philosophical question more than a pragmatic one: even if they can manually hard-code crash fixes for specific games, is the role of an emulator to provide cycle-perfect emulation or to provide the best possible game experience?
It depends on the philosophy that guides the emulation author. For example, the creator of bsnes believes in accuracy over everything. He's proud that his emulator will show the same bugs as the console when other emulators don't.
I would also add that practicality definitely comes into play. For example, there's a GBC emulator for the GBA that is written entirely in ARM assembly because that was the only way to make it run fast enough. A certain amount of accuracy was sacrificed to make it work, but the fact that it actually runs games at playable speed is notable in and of itself. It probably could have been done more accurately, but that doesn't really matter to most people if the games then run at completely unplayable speeds.
One thing I always wonder about low-resource-usage emulation projects for old consoles with fixed game libraries (e.g. a GBA emulating the GBC) is why you can't just throw some heavy ahead-of-time analysis at the problem and statically recompile the whole game library for the native destination architecture. Why write an "emulator" at all, when you know exactly what code you want to run, and know exactly what code it should translate into on the target CPU?
I guess a potential problem is that this would require shipping an (IP-infringing) copy of the recompiled game library with the emulator. But it doesn't quite: instead of a library of ROMs, you could just ship a library of hint files that turn the static-analysis step of the recompilation into a few seconds of work, such that you still need an input ROM and the dest-ISA program is only ever generated in memory.
In this case, the question is not between emulating a crash and allowing the game to proceed, but rather between emulating an abort() to land the VM in a sane terminated state, or skipping the abort() and continuing on until the emulator itself is pushed into an undefined state and crashes.
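To make that concrete, here is a minimal sketch (in C++, with entirely made-up names like GuestState and block_0100) of what a statically recompiled game could boil down to: every guest basic block becomes a native function, and the runtime is just a table mapping guest program counters to those functions. A shipped "hint file" would essentially be the block boundaries needed to rebuild that table from the user's own ROM.

```cpp
// A minimal sketch of the static-recompilation idea; every name here is
// hypothetical and the guest "instructions" are simplified for illustration.
#include <cstdint>
#include <unordered_map>

struct GuestState {
    uint8_t  a = 0;              // accumulator of the emulated CPU
    uint16_t pc = 0x0100;        // guest program counter
    uint8_t  ram[0x10000] = {};  // flat guest address space
};

// One native function per guest basic block; it returns the next guest PC.
using NativeBlock = uint16_t (*)(GuestState&);

// What the ahead-of-time pass might emit for a guest block like:
//   LD A,(0x1234) ; INC A ; LD (0x1234),A ; JP 0x0200
static uint16_t block_0100(GuestState& st) {
    st.a = st.ram[0x1234];
    st.a += 1;
    st.ram[0x1234] = st.a;
    return 0x0200;
}

static uint16_t block_0200(GuestState&) { return 0x0100; }  // loops back

// The offline analysis ("hint file") boils down to this dispatch table.
static const std::unordered_map<uint16_t, NativeBlock> kBlocks = {
    {0x0100, block_0100},
    {0x0200, block_0200},
};

void run(GuestState& st) {
    for (int i = 0; i < 1000; ++i)   // bounded so the sketch terminates
        st.pc = kBlocks.at(st.pc)(st);
}
```

The usual catch is code an offline pass can't see, such as computed jumps and self-modifying code, which is what pushes most real projects back toward dynamic recompilation.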
People often complain about the legality issues surrounding emulation and ROMs in general. One thing to remember about the proliferation of the ROM and emu scene is convenience. People nowadays tend to appreciate having less stuff and more digital content: streaming, Steam, etc. If Nintendo created some sort of digital game marketplace, odds are good the emu and ROM scene would deteriorate. The fundamental issue is the scene itself and how gamers nowadays want to consume content.
It's also important to remember that oftentimes the older consoles and cartridges no longer function well; the physical hardware deteriorates and, unfortunately, things wear out.
Fun story about the legality problems. Way back in the day, I worked for a company that created a "play old videogames" service. Our lawyers went out and hunted down the owners of old Commodore 64 titles and such. According to the internal rumor mill, some of these companies required us to convince them that they really were the owners of these games, due to a long series of acquisitions, before they'd agree to sell us the rights to the games.
Anyway, once we had the rights, we of course asked for a copy of the game. Well, naturally, they didn't have one. And several of these old systems had some form of copy protection. Well, fortunately, the 1980s demo scene had led to cracked copies of these games being easily downloaded (bonus fun fact: I may be the only person in history who clicked 'agree' on a 'you must have written permission from the publisher to download this ROM' and meant it). But of course the crackers had included crack intros which we certainly couldn't use. So we ended up including the cracked version, with the cracks, and just initializing the games to a memory state just after the crack intro played.
Without those crackers cracking the game, we might never have found a copy that we could have published.
And that's why copyright should lapse and die when the registered copyright owner can't offer the Library of Congress a copy of the work, for archival and eventual entry into the public domain.
Ah, the C64... Once a friend had an original disk of a game that wouldn't work. Just for the heck of it we copied it and applied the protection (some copy software had it as an extra step, as I recall), and it worked.
I'm not sure if this was protection related, but once I rented a C64 game and it wouldn't work. The store wanted me to pay for it until we booted it on their system and saw it work. Years later, I read in a magazine that that particular game wouldn't work with my particular printer. :( Weird.
I'm no lawyer, but I imagine that if you sued us for copying your demo, we'd sue you back for distributing the cracked game in the first place, and the comparison of relative damages would probably not work out in your favor.
Nintendo does have a digital game marketplace, and it's stuffed with quite good games. However, even if a game has previously been published on Nintendo hardware, that does not automatically give Nintendo the rights to release it again on their virtual console. Those deals all need to be renegotiated, which is never going to happen (or may be downright impossible) for more obscure games.
There are plenty of first-party Nintendo Virtual Console titles, though, and most likely plenty of people (like me!) who now refuse to buy them because they have to be bought over and over again for each new piece of hardware.
"Digital" is an interesting distinction here. Nintendo has a service that lets you play old Nintendo games... but only on other new(er) Nintendo consoles: a physical product you must first buy from them. Basically, if you want their service, you need a $300 hardware dongle.
And even then, the licenses for the old games that you're buying from Nintendo's service are actually licenses to the particular port that runs on whatever console generation you buy them on. If you buy the Wii Virtual Console port of Mario 64, that doesn't entitle you to download and run the Wii U Virtual Console port of Mario 64. You either accumulate an ever-expanding pile of hardware dongles that will themselves break one day, or you keep buying and re-buying ports of your favourite old games for each new console generation, never knowing whether they'll actually bother to port any given game to any given console, whether or not they did it for a previous generation.
This doesn't quite match the meaning of "convenient" in the sense that Steam is convenient: with Steam, when you buy a game, you're buying a license to play that game on anything—if the game starts out only on Windows, but then is ported to run on Mac and Linux, you don't have to pay again. You just own "the game", and are granted the ability to download+run whatever ports of it exist on Steam.
It's too bad they never covered distribution in their original game licensing (that allowed publishers to get the Nintendo official seal on the packaging). I wonder if consoles/companies of today are correcting that to account for future distribution models.
> odds are good the emu and ROM scene would deteriorate
This is a rather limited point of view - the draw to emulation is much more than convenience.
There are thriving communities based around homebrew, preservation, ROM hacking, translations, prototype collecting, debugging, tool-assisted speedrunning, enhancements, music, artwork and reverse engineering.
There are many parallels to the music piracy controversies with Napster, etc. iTunes gave people a convenient alternative and turned into a big success. If they would just make more old games available people would probably be willing to pay for them.
I love that this project documents its development so thoroughly and entertainingly. I first used Dolphin 5+ years ago (craving some Double Dash) and even then I remember being intrigued by the under-the-hood stuff. Congratulations to them for reaching this milestone.
> Because the CPU can't directly map the auxiliary RAM to the address space due to a missing hardware feature, the game has to read or write to an invalid memory address to invoke an exception handler. This exception handler would then use Direct Memory Access (DMA) to move data from the auxiliary RAM into a game designated cache in physical memory.
So essentially the game leverages exception handlers as part of its normal operation, treating the exception as "desired" behaviour.
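As a rough sketch of that pattern (this is not GameCube SDK or Dolphin code; the address window, block size, and cache policy below are all invented for illustration), the handler's job comes down to working out which block of auxiliary RAM the faulting address refers to, copying it into a cache slot in main memory, and letting the access retry against real RAM:

```cpp
// Illustrative only: a fault handler that services accesses to an "invalid"
// window by copying the wanted block in from auxiliary RAM.
#include <cstdint>
#include <cstring>

constexpr uint32_t kBlockSize  = 0x1000;       // granularity of one "DMA"
constexpr uint32_t kAramWindow = 0x7F000000;   // made-up unbacked address range
static uint8_t  aux_ram[16u * 1024 * 1024];    // 16MB auxiliary RAM stand-in
static uint8_t  cache[8 * kBlockSize];         // small cache in main RAM
static uint32_t cache_tag[8] = {~0u, ~0u, ~0u, ~0u, ~0u, ~0u, ~0u, ~0u};

// Called when the game touches an address with no RAM behind it.
// Bounds checks are omitted for brevity.
uint8_t* fault_handler(uint32_t bad_addr) {
    uint32_t aram_offset = bad_addr - kAramWindow;
    uint32_t block       = aram_offset / kBlockSize;
    uint32_t slot        = block % 8;              // trivial mapping policy
    if (cache_tag[slot] != block) {
        std::memcpy(&cache[slot * kBlockSize],     // stands in for the DMA
                    &aux_ram[block * kBlockSize], kBlockSize);
        cache_tag[slot] = block;
        // A real handler would also update the page tables (or BATs) so that
        // the faulting virtual address now resolves to this cache slot.
    }
    return &cache[slot * kBlockSize + (aram_offset % kBlockSize)];
}
```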
>> Secondly, the GameCube only has 24MB (and some specialized areas) of RAM across a 4GB address space, meaning most memory addresses have no RAM backing them!
Is this common with consoles or anywhere else? I knew that there's very little RAM on consoles, but is there a reason for having the large address space available? (possibly convenience?)
I don't know for sure, but I believe that memory addresses are generally represented as an unsigned integer of one of a few standard sizes. 24MB is well over the maximum value of a 16-bit integer (approx. 65K), so they simply used the next larger size -- a 32-bit integer (max value approx. 4 billion) -- and figured that most of the address space being unmapped wouldn't be a problem. This is common in most computers because addressable memory grows exponentially with the number of address bits.
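Just to illustrate the arithmetic (nothing console-specific here):

```cpp
#include <cstdio>

int main() {
    std::printf("16-bit addresses reach %llu bytes (~64KB)\n", 1ull << 16);
    std::printf("GameCube main RAM:     %llu bytes (24MB)\n",  24ull << 20);
    std::printf("32-bit addresses reach %llu bytes (~4GB)\n",  1ull << 32);
}
```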
Yes, this reminds me of older computers. My Atari 1040ST has something like 2MB of RAM, but I'm sure the addressing is 32-bit, meaning there are addresses it can't ever reach.
In fact, doing some research shows that 8 bits of the address are often ignored on the Motorola 68000, leaving a 24-bit memory address.
I remember reading on HN that developers would use those unused bits to carry more data around. There was so little RAM available that the unused bits in pointer addresses were useful.
Last time I looked, current 64-bit Intel CPUs only use the lower 48 bits for userspace virtual addresses. This leaves 16 bits available if you want to tag your pointers with some additional information, which can be useful for saving space or for fitting a tag and a pointer into a single 8-byte cmpxchg.
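A minimal sketch of that tagging trick, assuming x86-64 userspace pointers whose top 16 bits are zero (the pack/ptr_of/tag_of helpers are made up, and the trick breaks on systems using 57-bit addressing):

```cpp
#include <cassert>
#include <cstdint>

struct Node { int value; };

// Pack a 16-bit tag (e.g. an ABA counter for a lock-free stack) into the
// unused high bits so pointer + tag still fit in one 8-byte CAS word.
uint64_t pack(Node* p, uint16_t tag) {
    return (reinterpret_cast<uint64_t>(p) & 0x0000FFFFFFFFFFFFull) |
           (static_cast<uint64_t>(tag) << 48);
}

Node* ptr_of(uint64_t packed) {
    return reinterpret_cast<Node*>(packed & 0x0000FFFFFFFFFFFFull);
}

uint16_t tag_of(uint64_t packed) { return static_cast<uint16_t>(packed >> 48); }

int main() {
    Node n{42};
    uint64_t packed = pack(&n, 7);
    assert(ptr_of(packed) == &n && tag_of(packed) == 7);
    return ptr_of(packed)->value;  // the pointer survives the round trip
}
```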
This kind of trick is easier to get away with if you know exactly which CPUs your software is going to run on. Otherwise your software could break horribly when a new generation of CPU comes around.
It's just an artifact of 32-bit addressing. Addresses are just an integer from 0 to (2^32 - 1). Using a smaller address space may be possible (although it would still be very limited), but dealing with anything other than exactly 32 bits is a TON of extra work, so it's not reasonable to accommodate what is really a non-issue.
The CPU "thinks" in 32 bits, so anything that isn't 32 bits needs to be processed first. You would need to convert your memory address to 32 bits before the CPU could use it, which turns something that would normally be a single operation into more than that (it's slightly more complicated than that, but only in ways that make it worse). And since compilers don't normally do this for you, you would end up writing a wrapper for every possible memory access, and writing it in assembly.
On top of that, you can't use those wasted bits for anything else anyway, so what's the point?
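As a toy illustration of the overhead being described (the 3-byte "packed" address and the function names are made up): forming an address the CPU can actually use means reassembling and masking it on every access, whereas a native 32-bit address is usable as-is.

```cpp
#include <cstdint>

// Native 32-bit address: already in the form the CPU wants.
uint32_t effective32(uint32_t addr) {
    return addr;
}

// Hypothetical 3-byte "24-bit" address: it has to be reassembled (and masked)
// into a 32-bit value before the CPU can use it for any load or store.
uint32_t effective24(const uint8_t packed[3]) {
    return (uint32_t(packed[0]) |
            (uint32_t(packed[1]) << 8) |
            (uint32_t(packed[2]) << 16)) & 0x00FFFFFFu;
}
```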
The R&D needed to invent an entirely new CPU architecture isn't worth it for a game console, so they all use some off-the-shelf architecture - in this case, PPC, which is a 32-bit architecture like most other PC/workstation architectures of its vintage.
There are relatively few architectures which have address sizes that are not a power of two, mostly because this means an array of addresses requires wasteful padding for alignment. And for 32-bit architectures, even at the time it was realistic for high end workstations to have 4G of physical RAM, so there was a good reason to support this at the architectural level. Removing this for a game console after it was developed would have been more expensive than leaving it in.
Assuming you're typing this on a modern 64-bit computer, you theoretically have 16 exabytes of address space available, but I'm going to assume you don't have that much RAM.
> I had a quick look at the BAT mappings it was setting up.
> The data BATs were set up in a regular manner, with a 1:1 mapping from virtual memory to physical memory. But the instruction BATs were a weird 1:1 mapping with a bunch of holes. They mapped 128KB, followed by a 1.85MB gap, then 512KB followed by a 1.5MB gap, then 1MB followed by an 11MB gap, and a final 2MB.
> My guess is that the important code that would run multiple times a frame was positioned into areas backed by BATs (with linker scripts), as BAT mappings are nice and fast. Then they would use page tables to fill in the gaps with more uncommon code.
> Most of the uncommon code would be backed by invalid memory, until something jumped to it. Then a pagefault handler would decompress and/or copy that code from auxiliary ram into a cache backed by real memory, and set up page tables to map the code into place.
> Complicated, but would allow them to squash a large executable into the limited ram of the gamecube.
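A toy model of the translation order being described (the start offsets and names below are illustrative, not the game's actual layout): an instruction fetch first checks the 1:1 BAT regions; only addresses in the gaps fall through to the page table, where a miss is the fault that triggers copying code in from auxiliary RAM.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

struct Region { uint32_t start; uint32_t size; };

// Region sizes taken from the quote above; start offsets are illustrative.
static const Region kInstrBats[] = {
    {0x00000000,  128 * 1024},   // 128KB  (~1.85MB gap follows)
    {0x001F0000,  512 * 1024},   // 512KB  (~1.5MB gap follows)
    {0x003F0000, 1024 * 1024},   // 1MB    (~11MB gap follows)
    {0x00FF0000, 2048 * 1024},   // final 2MB
};

static std::unordered_map<uint32_t, uint32_t> page_table;  // virtual page -> physical page

// Returns the physical address, or nullopt for the fault case where a handler
// would copy/decompress the code from ARAM and install a mapping.
std::optional<uint32_t> translate_instruction(uint32_t vaddr) {
    for (const Region& r : kInstrBats)       // BAT hit: 1:1, nice and fast
        if (vaddr - r.start < r.size) return vaddr;
    auto it = page_table.find(vaddr >> 12);  // otherwise consult the page table
    if (it == page_table.end()) return std::nullopt;
    return (it->second << 12) | (vaddr & 0xFFFu);
}
```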
I have an interesting anecdote about GC development. I know of a game that almost released on GC that would break the drive on every console it was played on: a developer responsible for the loading code used an undocumented method to increase the read speed from the optical drive, so that data would stream in sufficiently fast for the game. Unfortunately, that was more than the drive was designed for, and it would actually die after a few hours of use. QC reported some of their test consoles dying, but Nintendo ruled they must have been defective and replaced them, and the game actually went to manufacturing like that. The whole release was halted a week before it was due, because someone spoke to the guy and asked how he got the data streaming to work quickly enough.
You could do something similar with PS1, PS2 and PSP - use undocumented methods to overclock the CPU and make them run quicker.
As someone who has used undocumented methods on many consoles and ALWAYS been caught by QA and told to remove them I'd love to know how they got away with this.
Even on the PS1 where I used an undocumented graphics command to get a 'free' screen clear (nothing dodgy and we learned about it by watching Tekken in the PA) I got told to remove it.
Crash did not really do anything weird with the CD drive; it just read from the disc a lot. So much, in fact, that a single playthrough would hit the disc more times than the drive was rated for (the drive was rated for some 70,000 reads, but completing the game required something like 120,000). [1]
Yeah. It looks like a memory optimisation to squash more code and data into the GameCube's 24MB of memory.
It was one of the earlier GameCube games, and it looks like they rolled their own custom solution rather than using one of the standard Nintendo libraries for advanced memory management that most other games use.
Although, it's worth noting that most OG Xbox homebrew (that I know of, at least) was made using leaked official SDKs (XDK), not open-source reverse engineered SDKs.
IMO the PSP's homebrew community was the best without relying on leaked SDKs - the open source PSPSDK is very good and was used in lieu of leaked SDKs by almost all homebrew authors.
> Easier access to memchecks means that Dolphin can accurately emulate well known crash glitches in games without Dolphin itself crashing!