BESIDES there being no particular way to validate a token without asking the service (which makes the realistic rate a hell of a lot slower than 2^64 tokens per second, lol wut), doesn’t it also assume that there are 2^46 valid tokens in existence? Isn’t that 70 TRILLION valid tokens, or nearly 9,000 tokens per human on earth?
I'm sure Aseprite could be further optimised to press down the constant factor, but among other things it's performing rect packing, which is an NP-hard problem.
The bin packing problem is NP-hard, but rectangle packing is "merely" NP.
However, most people just use recursive biggest-fit-first¹, a simple heuristic that is surprisingly hard to beat for most workloads.
I couldn't figure out what Aseprite is doing from their website or documentation, but if it's not that, then it might be worth writing your own sprite packer.
I say- it’s like they hooked a computer to a car. At first I thought “oh man, look! They hooked a computer to a car! Think of all the cool things you can do!”
A few months later I had learned how to reboot the driver’s-side door latch because of a software failure, somewhere… and I thought “oh yeah, right. It’s like they hooked a computer to a car.”
BTW that code has a security vulnerability in it: the size computation can overflow, so you can end up shrinking your buffer instead of growing it, and then you'll smash your heap, and then bad things.
I know. I don't care. Partly because it's only an example and the requirements are not spec'ed out. Partly because you have potential overflows basically everywhere, if you disregard the context. Let's just say the caller is assumed to make sure there will be no overflow. (Or just add a no-overflow assertion - I don't care).
If you are subscribed, the footer of the page reads as follows:
You can share this article! This URL can be posted anywhere you like, including in public. Anyone clicking on it can read the article without logging in. The URL is specific to your subscribed account, but no one outside of Destroy All Software can identify your account simply from the URL.
The game Braid originally shipped to the world in 2008, and after some ports in 2009, I have only worked significantly with the code on a few occasions. But I want to maintain this game indefinitely into the future; in the back of my mind, there have always been some clean-ups that I have wanted to perform on the code. Often when shipping a game, the best answer to a problem isn’t evident, and we are under time pressure, so we solve the problem in some way that is sufficient but sub-optimal. Other times, we need to design the game to meet technical constraints of systems we want to deploy on, but as the years go on, these systems become irrelevant, so the code can be cleaned up. I figured it would be interesting to talk about some of these things in a blog (and the blog will help motivate me to think about these situations and clean some of them up!)
After a few days of working on it, it seems like he’s been able to cut about 25k lines of code from the original ~95k, taking it down to about 70k lines. Pretty nice!
Throwing away never-called functions, chopping out hacks for now-outdated platforms, de-duplicating subsystems, switching to easier-to-reason-about data structures, making assets and related code use more standard formats, etc. must be pretty satisfying, like peeling old faded cracking paint off a wall and refinishing it.
"Let's suppose that down in the bowels of some particular version of some particular toolkit library, there lurks a bug. Let's suppose that the nature of this bug is something relatively obscure: say that it's something like, if you hold down 5 keys on the keyboard for 10 seconds then drag the middle mouse button, the text entry widget gets a SEGV. (In fact, I'm not making this up: I saw this very bug once, years ago.)
Now, that's the sort of bug that is not likely to be noticed or fixed, because it's the sort of thing that people "never" do. If that bug was reported against, say, a web browser, nobody would much care: User: "I can crash my web browser by doing this crazy thing!" Developer: "Uh, don't do that then." And that's not a totally unreasonable response.
However, in the context of security software, it matters, because then it's not merely a cute trick that crashes the program: now it's a backdoor password that unlocks the screen."
Also from that article, some good advice in general:
Bugs like that will exist in GUI libraries; it's inevitable. The libraries are big, and do many different things. So one way to protect against that problem is to keep the number of libraries used by the xscreensaver daemon to an absolute minimum.
If you want to make less buggy software, you should aim to reduce complexity as much as possible, not just hide that complexity behind a library. The coordination of multiple processes described there is a good demonstration of how splitting functionality into multiple pieces can add complexity overall, because of the need to coordinate how the pieces interact.
This sounds like a job for Erlang, not JWZ staring at C code until he's convinced it won't segfault. A small monitor process could do something reasonable when the big GUI codebase fails, and then everyone wins.
The design of X screen lockers makes that kind of difficult. Exclusive keyboard control is exclusive, meaning the secure locking process and the pretty crashing process aren't going to share very well.
From a personal email, jwz said I could quote: "Every time [Hacker News drops by] someone launches a DDoS. This time it's a SYN flood. Pretty much like clockwork." This looks like a response to that.
The strange thing is that it does work for me if I copy-paste the link (as opposed to ctrl-clicking it). There really shouldn't be much that's different between those two requests except for the Referer header.
Appears to be one of those bizarre people who get offended when people link to them. They were more common back when bandwidth was as precious as diamonds, and I see this article dates from that era. On the modern Web, I've never been sure if they don't quite understand what the internet is for, or what.
Anyone who actually gets angry when they're linked to (that image seems calculated to offend, not merely deflect bandwidth usage) has some kind of fundamental disconnect with how the internet is used on a day-to-day basis. It may be a social/emotional disconnect rather than a technical one, but it's still a disconnect.
I suppose I should acknowledge the possibility that he's just kind of a dick and likes trolling people for lulz. That's not really better, though.
It's his server. He gets to decide how it responds when people send it requests. If there's a disconnect, I think it's the people telling jwz how he needs to configure his server.
Sending drastically different content based on where your URL was clicked from should indeed count as one definition of "not understanding how the hypertext web works." It's also something you're free to do, and the rest of the web is free to stop linking to you in response.
I'm sorry to say that this argument is not even wrong.
As programmers, it is not useful for us to think about that 99.9% of the time. The 0.1% of the time is literally our entire job.
"Most of the universe is not Earth, so why do we spend so much time thinking about things on Earth?"