
Those are some very good points, but I think some of them are wrong or do not apply to most use cases.

1) "Do not serialize JSON, or use msgpack or protobuf - write your own protocol." - Why should you write your own protocol instead of protobuf? If your game has more than 10-20 types of messages/objects, then it becomes very hard to implement your own binary encoding for each object. Plus, protobuf is easy to use and the packets it generates are really small (most likely smaller than your own protocol's).
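For context on what "your own protocol" means here, a hand-rolled encoder for a single hypothetical message type might look like this in Node.js (the message layout, field names, and sizes below are made up for illustration, not taken from either project):

```javascript
// Hand-rolled binary encoding for one hypothetical message type:
// a position update carrying entityId (uint16) and x, y (uint32 each).
// Layout: [msgType:uint8][entityId:uint16][x:uint32][y:uint32] = 11 bytes.
const MSG_POSITION = 1;

function encodePosition(entityId, x, y) {
  const buf = Buffer.allocUnsafe(11);
  buf.writeUInt8(MSG_POSITION, 0);
  buf.writeUInt16LE(entityId, 1);
  buf.writeUInt32LE(x, 3);
  buf.writeUInt32LE(y, 7);
  return buf;
}

function decodePosition(buf) {
  return {
    type: buf.readUInt8(0),
    entityId: buf.readUInt16LE(1),
    x: buf.readUInt32LE(3),
    y: buf.readUInt32LE(7),
  };
}
```

The upside is zero per-message framing overhead; the downside, as the comment says, is writing and maintaining one such pair of functions per message type.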

2) "Measure everything with realistic load - i.e. I've found out that it's more costly to calculate world entity deltas to send to client, than to send everything in his view." - Profiling is something that you should always do, with both synthetic and real loads. In your case, my guess is that your diffing code was inefficient or that the load was so small that both computing the delta and sending the message were almost instant, so comparing them would give inaccurate results.
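A minimal sketch of how one might compare the two costs under a synthetic load (entity shape, sizes, and the naive delta logic here are all assumptions for illustration):

```javascript
// Compare the cost of computing an entity delta vs. serializing the
// full state, under a synthetic load of 1000 entities.
function makeEntities(n) {
  return Array.from({ length: n }, (_, id) => ({ id, x: id * 3, y: id * 7 }));
}

// Naive delta: collect entities whose position changed since last tick.
function computeDelta(prev, curr) {
  const delta = [];
  for (let i = 0; i < curr.length; i++) {
    if (prev[i].x !== curr[i].x || prev[i].y !== curr[i].y) delta.push(curr[i]);
  }
  return delta;
}

function timeNs(fn) {
  const start = process.hrtime.bigint();
  fn();
  return Number(process.hrtime.bigint() - start); // elapsed nanoseconds
}

const prev = makeEntities(1000);
const curr = makeEntities(1000);
curr[5].x += 1; // one entity moved this tick

const deltaNs = timeNs(() => computeDelta(prev, curr));
const fullNs = timeNs(() => JSON.stringify(curr));
console.log({ deltaNs, fullNs });
```

At small entity counts both numbers are tiny and noisy, which is exactly why a single synthetic run like this can mislead; realistic load and repeated measurement matter.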

3) "Prefer fixed entity speeds." - I think this is a game design decision and you shouldn't limit your game to it if you don't have to. Also, I don't think there's any issue with entities being able to have a different speed each tick.

And, a question. What VPS are you using? Do you recommend any specific VPS for hosting Node.js game servers?




1) Demonstrably not so, in my case. Devising and implementing a protocol flexible enough for me was a joy (along with test-driven implementation of that particular aspect) and a more performant solution than anything out of the box. Yes, I've tested against msgpack/protobuf (and I use protobuf in production on other projects) and found them less optimal, both for implementation (you're never as flexible as with a blank board) and for runtime resource usage, across the board.

2) Sure, I might go the way of deltas one of these days - it does still smell like good design. That's why I was underwhelmed to see it underperform the first time around.

3) Yeah, design and implementation are never separate. Neither are content and presentation. :)

DigitalOcean works for me for now. Wouldn't mind slightly better CPUs, as even High Compute droplets are not all they're cracked up to be.


Just curious about your own binary protocol. If your object has an int32 value, but the value could actually be represented with only 12 bits, do you send the entire int value (32 bits) or only the 12 bits needed to reconstruct the number?

I also used DO for a low-tickrate game (2 ticks/second), but it doesn't seem like the droplets are able to handle multiple game servers running at 60FPS.

If you were to implement the game again, would you rethink your networking so you don't have 2TB of data transfer for only 200 players?


I pick the smallest value type needed, and some values encode multiple fields (i.e. type/state/frame are all in a uint8). Node.js casts numbers to 32-bit integers for bitwise operations, so bundling doesn't pay off as much in CPU as it does in bandwidth savings.
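A sketch of that kind of field bundling (the specific bit widths for type/state/frame are my assumption, not the commenter's actual layout):

```javascript
// Pack three small fields into one uint8:
// type in bits 6-7 (2 bits), state in bits 4-5 (2 bits),
// frame counter in bits 0-3 (4 bits).
function pack(type, state, frame) {
  return ((type & 0x3) << 6) | ((state & 0x3) << 4) | (frame & 0xf);
}

function unpack(byte) {
  return {
    type: (byte >>> 6) & 0x3,
    state: (byte >>> 4) & 0x3,
    frame: byte & 0xf,
  };
}
```

Three fields that would otherwise take three bytes (or three JSON keys) travel as one byte, at the cost of a few bitwise operations per message.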

2TB of data is really not that much. Say you run at 30fps, streaming only a 2D position (say, 2 x uint32) for 100 entities to 200 clients...

(64 * 100 * 200 * 30 * 60 * 60 * 24 * 30) / 8 / 1024^4 ~= 11TB per month.

You bring some sanity to that with fewer frames, fewer entities, a naive delta that skips updates for unchanged entities, etc. It's really not a surprisingly big number.
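The estimate above, spelled out as code (same numbers as the comment: 64 bits of position per entity, 100 entities, 200 clients, 30 updates/s, a 30-day month):

```javascript
// Back-of-the-envelope monthly bandwidth for broadcasting full state.
const bitsPerEntity = 64;               // 2 x uint32 position
const entities = 100;
const clients = 200;
const updatesPerSecond = 30;
const secondsPerMonth = 60 * 60 * 24 * 30;

const bitsPerMonth =
  bitsPerEntity * entities * clients * updatesPerSecond * secondsPerMonth;
const terabytes = bitsPerMonth / 8 / 1024 ** 4;

console.log(terabytes.toFixed(1)); // ≈ 11.3 TB per month
```

Note the linear dependence on every factor: halving the tick rate or the entity count halves the bill, which is why deltas and interest management pay off so directly.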

I'll definitely explore world-state deltas in the next game, but this works well enough for now.



