Is this a good or bad thing? I don’t really know much about Kotaku.
But I do know game journalists don’t exactly produce the greatest content, everything from “the game is too hard, 1/10” to IGN’s obviously paid-for scores to “this game has a male character, therefore it must be sexist”.
This is a pretty hyperbolic take on games reviewers, apart from the paid reviews, which do exist but are pretty easy to identify and ignore. I've seen games criticised for being too hard, but even Elden Ring, Lies of P et al. aren't getting 1/10 for that. I've seen people lamenting the same lazy copy-pasted male character being the protagonist again and again, but never decrying that it's outright sexist that a male character exists in a game (and you're kind of telling on yourself with this one, tbh).
There's plenty of great games journalism out there. I'm similarly unsure about Kotaku, but let's not get silly here.
The gaming community is often delusional and will rant online about a journalist giving a game a 9/10 instead of a 10/10. Good reviews, where the journalist actually played the game and has the experience to compare it to others, go unnoticed. Garbage articles, like "Nintendo used a disability slur in a song" (turns out they're speaking Japanese), get huge attention.
Yeah, I’ve seen that a few people get very upset over one reviewer’s subjective take on a game, to the extent that they’ll hound them online. James Stephanie Sterling’s 7/10 on Tears of the Kingdom springs to mind (though they seem to be a bit of a lightning rod for hate overall). I can’t fathom it myself; I’ll read and watch a few reviews to get an overall feel for the game and go from there. Come to think of it, most of the reviews I encounter don’t give a score.
Isn’t there a discussion to be had about journalists (and activists) needing to insert intersectionality into fantasy worlds in a genre for a demographic that didn’t ask for it?
I’m detecting an air of elevated ego in your reply. I understand the toxicity that has existed and still does (now to a much lesser degree), but can’t we recognize when social justice jumps the shark? I think we can. It’s OK to admit when activism goes too far, wandering into the territory of enforcing morals onto other people and making the same mistake the religious right has made…
If you want to have that discussion, feel free to find the people who think the existence of 1 (one) male character indicates sexism, like the GP described, and discuss it with them. Tbh it sounds more like you want to use the GP's hyperbole as a wedge to open up a (ugh) "culture war" debate and frankly I'm not interested.
> Isn’t there a discussion to be had about journalists (and activists) needing to insert intersectionality into fantasy worlds in a genre for a demographic that didn’t ask for it?
Don't we need to have a discussion first to debate whether you should be writing comments on such topics? I mean, I didn't ask for your opinions, so I think it's a discussion worth having.
There's some lazy, parasitic, and axe-grinding writing in games journalism, as in all kinds of journalism. I am personally annoyed by how often I see Reddit posts and YouTube videos summarized like they're a story. But the good stuff is good, and this is IMO a good trend. Aftermath itself might suck, but I'd like to see more passionate video game writers get an opportunity to run their own outlet rather than churn out stupid bullshit about Twitter posts to create more surface for ads.
"We're trying something different, thanks for joining us"
A comma splice isn't the end of the world, but if this is the post that's advertising your new enterprise to the rest of the world, then it tells me you have some inexperienced writers.
That's an interesting observation - I'd never have noticed it that way. Reminds me of "Nobody. Understands. Punctuation." [0], which is often reposted on Hacker News.
I do not agree. It's readable, but not well written.
ex. "Fortnite saw 3.9 concurrent players on OG’s launch day, besting recent records, and huge subsequent numbers. It was the number one category on Twitch, bolstered by a 24-hour stream by Ninja, who made his name on the game."
From my experience, "great content" to gamers is just content that confirms their pre-existing opinions and biases. Anything that goes beyond that is labeled "bad journalism".
This part is true for most people and most topics, including politics. A lot of the gaming audience would be young(er) people that are part of the scene - so the strong feelings part is a given.
Exactly, it's not too hard to implement in C. The one I made never copied data; instead it saved the pointer/length to the data.
The user only had to memory-map the file (or equivalent) and pass that data to the parser.
The only memory allocation was for the JSON nodes.
This way the user only paid the parsing tax (decoding doubles, etc.) if they actually used that data.
The first line of the article explains the context of the talk:
> This talk is a case study of designing an efficient Go package.
The target audience and context are clearly Go developers. Some of these comments are focusing too much on the headline without addressing the actual article.
Yup, and if your implementation uses a hashmap for object key -> value lookup, then I recommend allocating the hashmap after parsing the object, not during, to avoid continually resizing it. You can implement this by using an intrusive linked list to track your key/value JSON nodes until the time comes to allocate the hashmap. Basically, when parsing an object:

1. Use a counter 'N' to track the number of keys.
2. Link the JSON nodes representing key/value pairs into an intrusive linked list.
3. After parsing the object, use 'N' to allocate a perfectly sized hashmap in one go.

You can then iterate over the linked list of JSON key/value pair nodes, adding them to the hashmap. You can use this same trick when parsing JSON arrays to avoid continually resizing a backing array. Alternatively, never allocate a backing array and instead use the linked list to implement an iterator.
> The user only had to Memory Map the file (or equivalent)
Having done this myself, it's a massive cheat code because your bottleneck is almost always i/o and memory mapped i/o is orders of magnitude faster than sequential calls to read().
But that said it's not always appropriate. You can have gigabytes of JSON to parse, and the JSON might be available over the network, and your service might be running on a small node with limited memory. Memory mapping here adds quite a lot of latency and cost to the system. A very fast streaming JSON decoder is the move here.
> memory mapped i/o is orders of magnitude faster than sequential calls to read()
That’s not something I’ve generally seen. Any source for this claim?
> You can have gigabytes of JSON to parse, and the JSON might be available over the network, and your service might be running on a small node with limited memory. Memory mapping here adds quite a lot of latency and cost to the system
Why does mmap add latency? I would think that mmap adds more latency for small documents, because the cost of doing the mmap is high (a cross-CPU TLB shootdown to modify the page table) and there’s no chance to amortize it. Relatedly, there’s minimal to no relation between SAX- vs DOM-style parsing and mmap - you can use either with mmap. If you’re not aware, you do have some knobs with mmap to hint to the OS how the mapping is going to be used, although it’s very unwieldy to configure it to work well.
Experience? Last time I made that optimization it was 100x faster, ballpark. I don't feel like benchmarking it right now; try it yourself.
The latency comes from the fact you need to have the whole file. The use case I'm talking about is a JSON document you need to pull off the network because it doesn't exist on disk, might not fit there, and might not fit in memory.
> Experience? Last time I made that optimization it was 100x faster, ballpark. I don't feel like benchmarking it right now, try yourself.
I have. Many times. There's definitely not a 100x difference given that normal file I/O can easily saturate NVMe throughput. I'm sure it's possible to build a repro showing a 100x difference, but you have to be doing something intentionally to cause that (e.g. using a very small read buffer so that you're doing enough syscalls that it shows up in a profile).
> The latency comes from the fact you need to have the whole file
That's a whole other matter. But again, if you're pulling it off the network, you usually can't mmap it anyway unless you're using a remote-mounted filesystem (which will add more overhead than mmap vs buffered I/O).
I also really like this paradigm. It’s just that in old, crusty, null-terminated C style this is really awkward, because the input data must be copied or modified. It’s not an issue when using slices (a length and a pointer), but unfortunately most of the C standard library and many operating system APIs expect null-terminated strings.
We are in a second, bigger 'tech' bubble. I hear lots of funny things said comparing that era and now: "today is nothing like the dot-com bubble, pets.com ... PETS DOT COM!" And I'm like: pets.com had a max valuation of around $400 million for a hot minute. Today we have over 1,000 tech unicorns that have never made a profit. One thousand companies with a market cap of $1 billion+ that have never (and probably never will) made a profit.
"You just got to find a company with a strong balance sheet, one that makes money, one that sells the shovels." Had you invested your money in Cisco back then, you'd still be in the red over 23 years later.
I've long had the idea for trying to write a Valgrind tool to help with this by analyzing struct usage. Something to profile how hot and cold the various fields of my structs are, and also to correlate which fields in a struct are frequently accessed together (i.e., within N cycles of each other). A tool for the profile part of "profile before optimizing" to go with the optimizations you mentioned.
I'm not sure how feasible this is. But if someone else wants to steal this idea and implement it for me, be my guest. :-)
The problem with this kind of instrumentation is that it is very expensive to collect, which affects the data collected in a way that may skew it from true runtime performance. Maybe that is still good enough! (It also feels difficult to implement.)
He shows the complete opposite: how to separate data so it won't land on the same cache line. This is of course pointless for single-threaded accesses, but beneficial for concurrent accesses.