As someone who has decided to do all (or most of) his personal coding in public, I must say the idea of spending time and effort making sure no one reverse engineers your stuff is kind of funny.
This has a lot to do with when the article was written. 2008 was a time when ring0 had gone out of fashion (SoftICE was dead, with nothing usable to replace it), and virtual machines weren't yet all that common for debugging.
Today you'll find lots of anti-VM tricks, and if you go a few more years back, lots of ring0/SoftIce tricks.
That said, they are what they are: tricks. Good protections can't rely on tricks alone, which only temporarily inconvenience whoever isn't aware of them.
To be fair he does say clearly that all of these would be reasonably easy to bypass.
Perhaps he needed a better, but less catchy, title. "Some tricks that will slightly delay reverse-engineering" or "What I know about making reverse engineering a little bit harder than it needs to be".
If you read past the title and into the article content, you'll see that he says "In this article, I plan to travel a bit deeper into the interesting world of reverse engineering and explore some more intermediate level techniques for annoying reverse engineers."
He's right - these techniques basically just annoy any mildly competent reverse engineer.
Of course, there is the 'knowing you're being reverse engineered and doing something else' angle. I don't doubt for a minute that people who write sensitive code, be it malware or DVD decoders, might simply act differently if they thought a debugger was attached, rather than refusing to act at all. Some of these techniques could be used there.
That being said, the more interesting thing is poking around in the inner bits of the machine and seeing how it comes together. Highly recommended for anyone serious about wanting to know how the machine does what it does.
If you want to practice on code that is easily obtained, I suggest you poke around the World of Warcraft rootkit code that Blizzard uses to prevent people from cheating at WoW.
Detecting debuggers and altering behavior is so much the oldest trick in the book that it is actually covered in depth in this Codeproject article (Codeproject is often unusually well written, and so is this article, but be clear that this is really basic stuff he's talking about).
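For the curious, here is a minimal sketch of that oldest trick, detect-then-misbehave (Windows-only and purely illustrative, not taken from the article): IsDebuggerPresent() just reads the PEB's BeingDebugged flag, so it's trivial to patch or spoof, which is exactly the point being made above.

```c
/* Minimal sketch of "detect a debugger, then quietly do something else".
 * Windows-only, purely illustrative. IsDebuggerPresent() just reads the
 * PEB's BeingDebugged flag, so it is trivial to patch or spoof. */
#include <windows.h>
#include <stdio.h>

static void real_work(void)  { puts("doing the real work"); }
static void decoy_work(void) { puts("doing something subtly wrong"); }

int main(void)
{
    if (IsDebuggerPresent())
        decoy_work();   /* misbehave quietly instead of bailing out,
                           so the reverser isn't immediately tipped off */
    else
        real_work();
    return 0;
}
```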
In the old days (the '80s and early-to-mid '90s), when people would distribute small and simple patches to disable protections (i.e. "crack" executable files), a fast release cycle for the software thwarted the simple cracks. This situation did not last long. The crackers started using more sophisticated patching techniques, like search-string patching and key generators.
The task of maintaining an ongoing disassembly across multiple release versions of some software is actually straightforward. The "dumb" (but useful) way to do it is by fingerprinting all of the subroutines in the old disassembly, and then using the fingerprints to identify the similar routines in the new disassembly (IDB2PAT). The "smart" way to do it is the graph-theoretic approach of Halvar Flake.
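To make the "dumb but useful" approach concrete, here is a toy sketch of the idea (not IDB2PAT itself; the opcode handling and the hash are deliberately simplistic): hash a routine's leading bytes while skipping the relative operands of call/jmp instructions, so the fingerprint survives the code being relinked at a different address.

```c
/* Toy fingerprint of a routine: FNV-1a over its leading bytes, skipping
 * the 4-byte relative operands of E8/E9 (call/jmp rel32) so the hash
 * survives relinking. Deliberately simplistic; real tools wildcard far
 * more than this. */
#include <stdint.h>
#include <stddef.h>

uint32_t fingerprint(const uint8_t *code, size_t len)
{
    uint32_t h = 2166136261u;                /* FNV-1a offset basis */
    for (size_t i = 0; i < len; i++) {
        uint8_t b = code[i];
        h = (h ^ b) * 16777619u;
        if (b == 0xE8 || b == 0xE9)          /* call/jmp rel32: skip the */
            i += 4;                          /* relocatable displacement */
    }
    return h;
}

/* Matching across versions is then just: build a fingerprint -> name map
 * from the old disassembly, and look up each routine of the new one. */
```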
Anyone in the Anti-Virus or compatibility industries can confirm both the capacity and the need to maintain disassemblies across multiple versions of software.
Pumping out a relentless stream of new versions of your software is no longer a deterrent, and hasn't been for over a decade.
I think it's an unfair question. The creators are always at a disadvantage, since the replicators always leverage and reuse the efforts of the creators.
I'm sorry, I did not mean to be unfair. I was curious whether companies were able to distribute updates pre-broadband. I can remember downloading the twenty-something floppies for OS/2 over a dial-up connection.
At one point in time, software companies sent updates on magnetic tape through the postal mail. One of the most clever hacks I've read about was when a group doing penetration testing mailed a fake (backdoored) update tape to the target.
When it comes to the efficiency of distribution, it's best to think of it in terms of the constraints and requirements.
Without a way to duplicate and distribute their products to customers, software companies could not exist, so the capacity to duplicate and the ability to distribute are both requirements.
Those very same duplication and distribution methods used by the company can also be used by others to further (re)distribute additional copies.
The difference is that the software companies are operating under the constraint of needing to make a living by selling copies of their products, so there's really no way to make a fair comparison of the efficiency of the methods used by the companies versus those of the people making additional copies. You're essentially comparing farmers to chefs; one produces food, while the other prepares it.
Well, nothing new. If it runs, it can be cracked. I say this as a (legal) reverser with more than 10 years of experience. So, instead of investing money and time into protection mechanisms, it's better to use those resources to improve your software. A well-thought-out custom protection (i.e. not ASProtect, Armadillo, etc.) can be harder to break, but it's still crackable all the same.
It is true that the techniques in the article are nothing new. However, this comment is a knee-jerk reaction and largely misleading. The notion that something being crackable makes it a failure is incorrect. The time a protection scheme needs to hold up varies by industry, but the economics of DRM do not require it to last forever. For video games it's a month or two (StarCraft 2 sold 75% of its total copies sold to date within the first month). Also, there are several DRM schemes deployed in production today that are acknowledged by those in the field as quite effective, namely BD+ and DirecTV's scheme. Indeed, as far as I know there hasn't been a break in DirecTV for over half a decade.
As far as malware goes, virtualization obfuscators are the current state of the art. They are a fundamental advancement in packing. Up until virtualization obfuscators, all other packers had the weakness that at some point the unprotected program would end up in memory. This weakness is easily exploitable (à la VxClass; see Recon 2010 for an easily digestible talk). Virtualization obfuscators are still beatable with manual reverse engineering (see Rolles, WOOT 2009), and some effort has been made to automate the process (see Wenke Lee's group at Oakland 2009). But when a packing scheme forces you to hire reverse engineers who know what symbolic execution is, you know that you've substantially raised the bar.
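To illustrate why that's different from ordinary packing: with a virtualization obfuscator the protected logic only ever exists as custom bytecode walked by an interpreter, so there is no point at which a clean native image sits in memory waiting to be dumped. A deliberately tiny, made-up example of the general shape of such an interpreter (the opcodes and encoding here are invented for illustration, not from any real protector):

```c
/* Tiny bytecode VM: the "protected" logic exists only as custom bytecode,
 * never as native x86 instructions, so there is no unpacked image to dump.
 * Opcodes and encoding are invented purely for illustration. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const uint8_t *bc)
{
    int64_t stack[64];
    int sp = 0;
    for (size_t ip = 0;;) {
        switch (bc[ip++]) {
        case OP_PUSH:  stack[sp++] = bc[ip++];                    break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];          break;
        case OP_PRINT: printf("%lld\n", (long long)stack[--sp]);  break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* "2 + 3" exists only as bytecode; a reverser has to understand the
       dispatcher above before this means anything to them */
    const uint8_t program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}
```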
To add to the 'quite effective' schemes: Cinavia. It adds watermarks to the audio in movies and is resilient enough to be maintained even when a cam-rip is done of a movie. Brilliant scheme, and the patent application (US7369677) is really well written. There hasn't been a break yet, but it'll come eventually.
Haha, yeah, that paper is how I got started reversing VM-based schemes. A short paper that does use symbolic execution and theorem proving is BinHunt, although it's a blatant ripoff of Halvar published 4 years later. Their only claimed contributions are (1) symbolic execution and theorem proving for basic-block equivalence and (2) backtracking for their maximum common subgraph isomorphism algorithm (in contrast to Halvar, who I believe used direct instruction comparison for basic-block equivalence and a greedy subgraph algorithm). These could be meaningful contributions, but they provide no data to prove that the posited accuracy increase from symbolic execution and backtracking is worth the large performance hit.
I'm not sure DirecTV's scheme is entirely relevant, because it revolves around protected hardware rather than protected software.
The idea behind DirecTV is that the crypto code runs entirely in hardware the user can never see, heavily protected physically - a protection method which isn't possible for software on most modern x86 machines. Plus, satellite providers have a distinct advantage in that their content needs to be protected only in real time.
I do give kudos to DirecTV for managing to create a technology that's less of a sieve than Nagravision (although that might also have to do with DirecTV suing everyone who dared go near their smartcards into oblivion in the early 2000s), but they play a totally different (and, IMO, much easier) game than software protectors do.
As for BD+, with HDCP broken so widely I don't really see a point to breaking it. Scene groups can source movies of equal or greater quality from many other sources, even using decrypted HDMI as a last resort, before needing to care about actually exploiting the BD+ VM.
I have a very little bit of insight into how DirecTV's modern cards are implemented, and the fundamental technique does not rely on hardware --- in other words, if you had already been outfitted with a lab that could decap and image chips well enough to generate simulators, the fundamental technique involved would still be expensive to unwind.
BD+ has produced multiple titles which, during their new-release window, had no high-quality HD rips torrented. But that's beside the point: if antidebugging and antireversing are such a lost cause, it stands to reason that BD+ should be completely broken by now. But, of course, it is not.
"Plus, satellite providers have a distinct advantage in that their content needs to be protected only in real time."
I could be wrong here, and it's been a while since I messed with Echostar/Dish hardware, but DVR recordings are stored on the hard drive in raw, encrypted form and then played back through the decryption hardware.
Yet another mistake Dish made - IMO, every iteration of Nagravision is end-to-end pretty poorly implemented.
At any rate, I was speaking more to the practical aspects than the technical ones. In the eyes of a DRM writer, a game needs to be protected for at least a few weeks (launch purchase window), and once it's cracked once, it's pretty much the end - the game is in the wild, and the damage is done, because what's being pirated is the game itself.
On the flip side, what satellite providers are protecting isn't really the content - it's the ability to display the content at a certain point of presence, as a stream. Public exhibition (bars, clubs) and live PPV fights are the big game for satellite encryption, not Joe Public (or Joe Pirate) watching his shows. They'll be available to pirates immediately after they're aired through other means (stations, screeners, stripping HDCP off of HDMI) anyway.
That's why I think the satellite game is easier - not technically, but practically. As a satellite TV provider, even if your DRM can be removed post facto, the benefit to the pirate is greatly diminished (and the downside to you, as well).
I believe 99.99% of those who pirate will never buy a game just because they don't want to wait several days (or weeks, which is very rare). DRM only harasses those who don't pirate.
>> StarCraft 2 sold 75% of its total copies sold to date within the first month.
And who says that without the DRM it would be lower? Maybe it would be even higher! Many techy people just don't buy those new games with draconian DRM/spyware that requires an internet connection. Or they buy the game and then download a NoDVD patch or cracked .exe.
Note the instructions. It's just capturing the video stream via analog source, as far as I can tell.
I believe your comment's parent was referring to their over-the-air stream DRM (i.e. the stream to the dish receiver), which hasn't been publicly broken.
And since cracking follows the path of least resistance, it's likely that the DRM would have been cracked if that was the only way to get at the content.
This is a case of deadbolted door, window left open.
I agree with you there - all HD video DRM is "deadbolted door, window left open" at this point since HDCP is fully opened (both by the ability to clone or purchase the HDCP hardware from a real TV and by the leaking of the HDCP master keys).
There's still some incentive to crack video DRM, since ripping through HDMI requires a re-encode and degrades quality, but the approach is good enough that the payoff is reduced substantially.
Breaking DRM is not the only relevant aspect of the video game industry. Punkbuster and other anticheat daemons need to work for the lifetime of the game.
I do not know a lot about reversing but I was always under the impression that nothing was impenetrable and most schemes were easily defeated. In light of this I am always puzzled by the relative efficacy of punkbuster and related daemons.
Who are the best 'reversers' out there? I work in bioengineering which is largely a reverse-engineering discipline (i.e. we do a lot of tweaking natural organisms that we didn't design) -- just curious if there are overlapping ways of thinking / general approaches. Anyone do serious CS research on reverse engineering methodologies?
I've been out of touch and out of practice for years now, so I'm curious to know the answer to this question. I was fascinated with reverse engineering back when Fravia[1] hadn't yet completely moved away from cracking to "search lore".
There is the 0-day Scene, with groups that are pretty much closed/private (to name some: CORE, BliZZARD, EMBRACE, HERiTAGE, SSG, DiGERATi...); these are invitation-only. And there are the Web scene groups like FFF, TSRh, and RES, who often ask newcomers to solve a keygenme/crackme to be accepted.
No, my business is continuing to grow year on year. People seem to love installing my dead software on their dead PC's running that dead operating system Windows and they are quite happy to pay me for it.
Sorry to go off topic, but I have to ask. How did you get involved in that industry? Malware analysis seems like something I'd love to do. If you don't mind, could you share any tips or career advice you would give to someone who might want to pursue a career in malware analysis/antivirus industry?
Well, in my case I was enjoying unpacking protected software (just for fun; I wasn't distributing cracked software, just writing tutorials on how to bypass protections and releasing tools/unpackers/unprotectors for common types of protectors). Then somebody contacted me and offered me the job.
What can you do? I would start from tuts4you.com, teach yourself x86 assembly language, download a debugger and/or disassembler, and dig into something or follow tutorials.
Thanks for your reply. I guess I tend to shy away from things like tuts4you and crackmes.de because they seem legally questionable, although I haven't looked into it much. I mostly "reverse" my own code in gdb just to understand what's going on in the assembly. These types of things are hard to show on a resume, so I'm not really sure how to make myself an attractive job candidate to some of these antivirus companies.
I don't see anything illegal in disassembling shareware or any other type of software. Compare it with disassembling your smartphone/alarm clock/wristwatch/... to find out how it works. Why should it be illegal?
There is something about high school seniors and low-level code like this; I've seen it in numerous observations over the last 15 years or so. I think it might be because your first 5 years in the profession serve in part to teach you what technical issues to be "scared" of, and high school kids haven't learned that yet.
I certainly did my lowest level stuff (building PC ISA hardware from scratch) in high school. Mainly it was out of necessity as I didn't have the money for the real hardware. Other folks I've known who have gotten into ASM in high school did so for game cracking/cheating/modding, also a form of necessity.
Any* sane professional development organization is going to try to minimize the amount of time their developers spend writing ASM by hand. There's almost always a higher level tool that's more productive.
* Actually, I worked at a place where the main DOS product had been hand-written entirely in x86 ASM. The sanity of continuing such a practice into the '90s is an open question. Rumor had it that even the Windows versions of WordPerfect were written in ASM.
The thing high school students have is time. Time to experiment, time to go deep, without external pressure. Once you are in University or the work force or have dependents, the pressure is on to get results, and you no longer have time.
The irony is that the applied pressure probably doesn't produce a superior result to being left to your own devices.
I entirely agree - as a high schooler, I was very, very good at reversing.
It's helped me a lot professionally, actually - I'm much faster at debugging compiled languages than my coworkers are, and the mental patterns I developed while peeling away the layers of control flow in disassembly map very well to understanding large code bases even when I do have the source.
I think it's mostly a function of time rather than being scared, though - learning how to reverse takes quite a while, and once you've got a professional job where reversing seems entirely irrelevant (and even potentially dangerous), it's hard to justify taking the time to learn how.
This is probably a very naive question but can't any of these measures be easily bypassed by running the observed software in a VM like KVM and just attaching a debugger like GDB directly to the VM?
They absolutely can be. At the time this article was written (2008), VMs weren't as in vogue as they are today, VM-attached debugging wasn't as mature, and the leading ring-0 (kernel-level) debugger, SoftICE, had just been discontinued.
Today, there are a host of new measures designed to thwart in-VM debugging, but the playing field differs substantially from the one present when this article was written, and your observation is a good one.
What about the timer-based approach that he mentions? It seems that regardless of how you debug through a VM, it will still take additional time, unless you are altering the processor clock somehow.
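For reference, the timer-based idea looks roughly like this (a sketch, not the article's exact code; the workload and the cutoff are arbitrary): read the time-stamp counter before and after a trivial stretch of code and treat an implausibly large delta as evidence of single-stepping or a slow emulated environment. As you say, a VM that virtualizes or offsets the TSC can lie about it.

```c
/* Rough sketch of an RDTSC timing check (x86, GCC/Clang, x86intrin.h).
 * The workload and the cutoff are arbitrary; a VM that virtualizes or
 * offsets the TSC, or a reverser who patches the branch, defeats it. */
#include <x86intrin.h>
#include <stdio.h>

static int probably_being_traced(void)
{
    unsigned long long start = __rdtsc();
    volatile int x = 0;
    for (int i = 0; i < 1000; i++)    /* trivial work between the two reads */
        x += i;
    unsigned long long delta = __rdtsc() - start;
    return delta > 100000ULL;         /* arbitrary threshold, for illustration */
}

int main(void)
{
    puts(probably_being_traced() ? "suspiciously slow" : "looks clean");
    return 0;
}
```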