tridentboy's comments

The thing is that this is the first time they've gotten this kind of money. And it was unexpected, since Dwarf Fortress met its eight-month sales objective by the second week. So they weren't in a position to know who that someone would be or how to call them.


You've missed the fact that there are two attributes, rank and regiment. So it's something like this for a 3x3:

1A - 2B - 3C

2C - 3A - 1B

3B - 1C - 2A


so, something like:

a6 - b2 - c4 - d3 - e2 - f1

f5 - a4 - b3 - c2 - d1 - e6

e4 - f3 - a2 - b1 - c6 - d5

d3 - e2 - f1 - a6 - b5 - c4

c2 - d1 - e6 - f5 - a4 - b3

b1 - c6 - d5 - e4 - f3 - a2

?


Close. Pretty sure you have to use every letter-and-number combination. For example, you don't have f2.


You have two 2s in the second column


Ok, thank you. After studying the article again, it becomes clear.


A great FAQ on ECS is available here:

https://github.com/SanderMertens/ecs-faq#what-is-ecs


Something I've had trouble wrapping my head around with ECS -- how does it fundamentally differ from an in-memory relational database?

They describe a "sparse set" designed for sequential identifiers -- how does that compare to something like an in-memory B+ tree?


ECS is less about access patterns (relationships and queries) and more about efficient storage. Games still care about page faults, so making your tight loops execute quickly over many smaller pieces of data and "do the right thing" is a win. Also, from an architectural perspective, it's easier to write abstract code like `Health.value -= 10` instead of `PlayerHealth.value -= 10`. By coding against a single component's interface you get consistent behavior across all entities with that component. You could also do this with inheritance, but that becomes complex and (the real killer) very slow. jblow has some rants about this in The Witness.
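As a rough sketch (hypothetical names, not any particular engine's API), a damage system coded against a Health component might look like:

    // Hypothetical sketch, not any real engine's API.
    #include <vector>

    struct Health { int value; };

    // One component type, stored contiguously; element i belongs to entity i.
    std::vector<Health> healths;

    // The "system" runs the same tight loop for every entity that has a
    // Health component, whether it's a player, an enemy, or a crate.
    void damage_system(std::vector<Health>& hs, int amount) {
        for (Health& h : hs)
            h.value -= amount;
    }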


Some compilers also use the ECS "pattern" quite heavily.

Case in point: rustc:

https://www.youtube.com/watch?v=N6b44kMS6OM&t=1263s


I don't think entities/systems are really that relational. Data is not denormalized in the same way, and joins don't really exist. Indices are an afterthought. Full sequential scans are the main access pattern.

But it's easy to think of the data for a component system as a table where each row is associated with an entity, so I can see why you'd draw the similarity.


I don't think there is a fundamental difference. ECS is the application of relational databases in a dynamic real-time context with relatively modest amounts of data, but lots of sequential processing. Sparse sets work well in this context, but they are ultimately an implementation detail. There are other ECS implementations that don't make use of them but can still handle high throughput.
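For the curious, a minimal sparse-set sketch (my own simplification, not any specific library): a packed dense array you iterate over, plus a sparse array for O(1) lookup by entity id.

    // Simplified sparse set, illustrative only.
    #include <cstdint>
    #include <vector>

    struct SparseSet {
        std::vector<uint32_t> sparse;  // entity id -> slot in `dense`
        std::vector<uint32_t> dense;   // packed entity ids; component data
                                       // would live in a parallel array

        bool contains(uint32_t e) const {
            return e < sparse.size() &&
                   sparse[e] < dense.size() &&
                   dense[sparse[e]] == e;
        }

        void insert(uint32_t e) {
            if (contains(e)) return;
            if (e >= sparse.size()) sparse.resize(e + 1, 0);
            sparse[e] = static_cast<uint32_t>(dense.size());
            dense.push_back(e);
        }
    };

    // Iteration is a straight walk over `dense` (and the parallel component
    // array), which keeps per-frame loops cache friendly; a B+ tree would be
    // chasing pointers instead.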


> ECS is the application of relational databases in a dynamic real-time context with relatively modest amounts of data, but lots of sequential processing.

This is probably the best explanation that I can imagine -- that they're not fundamentally different, just that ECS tends to make a sufficiently different set of design choices that it warrants some of its own terminology.

I can't help but wonder what a more generalized take on ECS might look like, if it were to continue drawing from relational databases. For example, support for multiple indices to assist with different query patterns. Or triggers to abstract away cascading updates. Or perhaps materialized views to keep query patterns simple.

I've never had the opportunity to use an ECS system, especially in a performance sensitive context, so I don't have a good sense of where any pain points are in practice versus what my imagination can conjure up.

I also wonder what it might look like to use SQL to generate an optimal C++ representation while keeping the data definition high level.

Just idle musings - maybe one day I'll take the time to experiment.


>Or triggers to abstract away cascading updates.

Triggers might be counterproductive. Games usually have a fundamental concept of a game loop. Changes can be processed in the loop, and side effects can be processed in the following iteration. Triggers would cause this processing to be unpredictable (or at least harder to predict). ECS provides a clean way to define the order in which systems are processed each loop. Triggers might disrupt this ordering.
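To sketch what I mean (illustrative only, not a real engine):

    // Fixed-order game loop with side effects deferred to a known point.
    #include <functional>
    #include <vector>

    std::vector<std::function<void()>> deferred;  // queued during this frame

    void input_system()   { /* read input ... */ }
    void physics_system() { deferred.push_back([] { /* apply knockback next step */ }); }
    void render_system()  { /* draw ... */ }

    void run_one_frame() {
        // Systems always run in the same explicit order.
        input_system();
        physics_system();
        render_system();

        // Queued side effects fire here, not trigger-style mid-system.
        for (auto& effect : deferred) effect();
        deferred.clear();
    }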

Maybe that's still desired, maybe not. I just thought it would be interesting to mention.


That's a great insight. I wonder if it'd be practical to defer them until after the causal operation is complete --

say, translating all positions, then calling all of the triggers for the position component.

That'd keep everything in tight, single-purpose loops and preserve cache lines.

Fair enough that it'd probably make execution order harder to predict, but in theory it would also be possible to generate a plan and print out the order in which things would happen.

(I'm not positing that this is actually worth doing or that the pros outweigh the cons -- just toying with the idea.)
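Roughly the shape I'm imagining (hand-wavy sketch, hypothetical names):

    // Run the whole position pass first, remember who moved, then flush the
    // position "triggers" in one batch afterwards.
    #include <cstdint>
    #include <vector>

    struct Position { float x, y; };

    void move_system(std::vector<Position>& positions,
                     std::vector<uint32_t>& moved) {
        for (uint32_t i = 0; i < positions.size(); ++i) {
            positions[i].x += 1.0f;   // tight, single-purpose loop
            moved.push_back(i);       // defer the notification
        }
    }

    void flush_position_triggers(const std::vector<uint32_t>& moved) {
        for (uint32_t entity : moved) {
            (void)entity;  // react to the move here, after the whole pass
        }
    }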


A lot of game engines have the concept of messaging that tends to happen synchronously or with a delayed dispatch step, which sounds very similar to triggers. The latter comes with a verification cost because the message might not be valid anymore.

The thing with the actual gameplay layer is that you're often processing mainly heterogeneous elements rather than homogeneous ones, so all the worry and focus on cache is largely academic for most kinds of game.


Completely agree on the different design choices. Curious if the analogy of how different GUI packages work might have some parallels: FLTK uses a "retained mode" paradigm, whereas IMGUI uses immediate mode. More idle musings...


I had the impression that a goal of ECS was parallel rather than sequential processing.


Strict ordering and sequential layout allow you to efficiently split the work and process these systems in parallel (or with SIMD).
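For example (hypothetical, using the standard parallel algorithms rather than any engine's scheduler):

    // Because each component type is one flat array and systems run in a
    // known order, the per-component loop parallelizes trivially.
    #include <algorithm>
    #include <execution>
    #include <vector>

    struct Health { int value; };

    void damage_all(std::vector<Health>& healths, int amount) {
        std::for_each(std::execution::par_unseq,
                      healths.begin(), healths.end(),
                      [amount](Health& h) { h.value -= amount; });
    }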


Well, it seems that his point still stands: the overhand shuffle is terrible for actually shuffling a deck of cards.


Assuming 4 shuffles per second, 3000 shuffles would take about 12.5 minutes, while 10000 shuffles would take more than 40 minutes. One of those sounds feasible during a break on a casual game night.
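For reference, the arithmetic:

    3000 shuffles / 4 per second = 750 s ≈ 12.5 min
    10000 shuffles / 4 per second = 2500 s ≈ 41.7 min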

Of course, those numbers apply only if the shuffle is done literally—I personally try to mix individual cards by letting one hand's batch cut in between the other's (and I think I'm not the only one)—but I'm still curious where those 10000 came from.


I'm sorry, could you please elaborate? I was always under the assumption that hash functions have to be deterministic, and thus, that "every input has a unique output" was a correct statement.

AFAIK the converse is not valid, i.e. it is not the case that every output is the result of one and only one input.


A function being deterministic means that any given input will always produce the same output. But that output is not unique to one input for any hash function, SHA-256 included. A hash function is any function that takes an arbitrary-length input and produces an n-bit output for some fixed value of n. Since there are infinitely many inputs and only finitely many outputs, the outputs cannot all be unique.

Generally when people make this claim, what they're actually referring to is what's called Collision Resistance (CR) and/or Weak Collision Resistance (WCR), which instead make claims about the difficulty of finding such collisions (of which infinitely many exist).

WCR, necessary for almost any cryptographic use, states that for any given input it should be difficult to find a different input which hashes to the same value. CR, generally desirable for cryptographic hash functions, states that it should be difficult to find two different inputs such that their hashes are equal. CR implies WCR, but WCR does not imply CR -- for example, SHA-256 (currently) exhibits CR but SHA-1 only exhibits WCR.
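In symbols (my own paraphrase), for a hash function H:

    WCR: given x, it is hard to find x' != x with H(x') = H(x)
    CR:  it is hard to find any pair x != x' with H(x) = H(x')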


There are 2^256 potential outputs for SHA-256, while the number of potential inputs is infinite. Therefore, the same output can be generated with different inputs, although finding such "collisions" by chance is extremely unlikely.


The claim is not that every output has a unique input, which would not be correct, and seems to be what you are addressing.


At 1:08 in the video, that is exactly what he claims:

"So every piece of data in the world has its own unique hash digest."

This is false for the reasons apeescape describes: every piece of data in the world has its own hash digest, but these hash digests are not unique.


Yes, that sentence is technically incorrect, but practically correct. We've never found a collision, and though we expect them to be theoretically possible (even common, if you consider "all possible inputs" and the pigeonhole principle), for practical purposes hash outputs are unique, because nobody is dealing with "all possible inputs" when evaluating probabilities.

I'm saying that for a layman explanation, it's reasonable to say that hash outputs are unique. Because following that with "technically, it's more 'practically' unique; theoretically there are collisions, but you'll only encounter one with probability on the order of 2^-256" (or whatever it is) just confuses the topic for them more than just summarizing. You have to admit that most people won't go on a 200h adventure to learn about the state space of 256+ bits and how to conceptualize tiny statistical probabilities, so there must be a point where you have to cut the explanation to an approximation of the truth. This is true in every field.


I don't like to leave holes like this in people's comprehension. It's OK if people don't end up with an intuitive feeling for how relatively unlikely different things that don't actually happen are, but I want them to be aware of that category as distinct from things which can't happen because the type of argument needed is different.

The air molecules in the room you're in can't all gather in one corner because that's not possible, it's forbidden by conservation rules.

But they won't gather in two opposite corners only because that's so tremendously unlikely, it would be allowed by conservation but statistically it's ludicrous.

The same is true at the opposite end of the spectrum. Almost all real numbers are normal (in all bases), but the nature of "almost all" in mathematics is different in an important way from "all", and I want people to grasp this difference when I'm discussing properties of numbers. It definitely is not true that all real numbers are normal; you probably rarely think about any normal numbers at all.


> I don't like to leave holes like this in people's comprehension.

I agree. I think this wording would be better than in my previous comment; what do you think?

    it's reasonable to say that hash outputs are *almost surely* unique


> I'm saying that for a layman explanation, it's reasonable to say that hash outputs are unique. [...] theoretically there are collisions but you won't encounter them

You could have said exactly the same thing about MD5 right up until you couldn't. Then you could have said "oh yeah well MD5 is broken, but it's safe to assume you'll never find one for SHA-1", right up until we did. So if you say "oh yeah well SHA-1 is broken, but it's safe to assume you'll never find one for SHA-256", I disagree.

It would be one thing if collisions in hash functions were found by just repeatedly hashing things until you find a collision. If that were the case, then yes, I'd agree with you on those 1-in-2^256 odds, at least for a while. But by and large, that's not what happens. Over time, weaknesses are found in algorithms which allow you to shrink the search space, which significantly changes your odds.


Kind of agree with you, but I still feel that adding a few words by way of a disclaimer about collisions is much better than presenting as plain truth something that merely approaches it.


On the other hand, if we can count "every piece of data in the world" then we can estimate the probability of having a collision.


I see what you mean, but it sounds like the output is unique, and we probably agree that in this field you need to use sentences that cannot be easily misinterpreted.


I know it's not exactly related to what you do, but do you have any recommendations of books or online classes for learning C?


There is a big difference between the interventions made by the US in East Asian countries and the interventions made by superpowers in other countries.

I'm not saying that South Korea and Japan didn't make good use of this intervention, but the intervention in those countries was more akin to financing/investment, in line with the US's Marshall Plan during the Cold War.

In other countries, such as those in Africa and South America, it was direct exploitation of people and resources without really giving anything back.


There was not that sort of investment in South Korea, and of course in Vietnam the opposite happened since the communists won the war there.


I don't think "these days" really apply to this situation. People have always tried to take advantage of flaws in any system.


Statistically speaking, 4000 is a good enough number. I mean, election polls in the US usually use fewer than that for a population of 300 million, and they're usually close to the real number.
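Back of the envelope (assuming a simple random sample, which is admittedly a big assumption):

    margin of error ≈ 1 / sqrt(n) = 1 / sqrt(4000) ≈ 1.6 percentage points at 95% confidence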


Except in 2016. And that 4000 is a biased sample of only game devs who attended a certain conference. Maybe if it were a totally random sample of 4000 drawn from all game devs, it would be representative.


>Except in 2016

In 2016 the national polls were very close to the final result--within a couple points. Remember that Clinton won the popular vote.

State polls were less accurate, but the general consensus is they were wrong due to undecided voters breaking more for Trump than is normal in the final few days--after the final state polls were conducted.

That being said, your point stands. Sampling method is generally more important than sample size, and 4,000 game devs who self-selected to attend a conference are very unlikely to be a representative sample.


When they do these election polls, do they stand in the street randomly choosing people coming out of the conference centre where one of the major parties is having their AGM?


>> Statistically speaking 4000 is a good enough number.

It seems to me the GP comment meant that because it was a survey among people who attended GDC, it's not representative. If that is why they questioned it, I'd say it's an even more relevant survey, as those are the people who care more about the industry and have more influence. No matter how you slice it, I'd say it's an indicator not to be ignored.


It’s an indicator, but just unlikely to be an indicator for the population of game developers.


They are taking very good care to select the right 4000 people.

