
I've written many 48 hour games. I've also written games that drag on for months or longer without apparent progress.

The difference literally comes down to whether you are doing easy things or not. Having an engine or framework does help make a variety of things easy, but it does nothing for the one or two features that aren't. Eventually you hit a wall where it takes forever, and that's your next month. You get over the wall and then a flood of other new features comes in almost instantly. Also in the same ballpark are features that you have coded before and are familiar with, vs. ones you aren't. You can get a lot done "from scratch" by spamming preexisting knowledge at the problem, but it still takes time and it isn't exactly easy either.

Last of all, at first clone-and-modify is enough to feel interesting. So you go very quickly, because you care little about the result. But after a few dozen times doing that, you're done, and you want to expand the parts you care about. That creates more barriers to get over, more months where progress is slow because your ambition is big enough to no longer follow the easy path. More months where problems are on the content development side, not the runtime. That part is always difficult. Scope is deceptive.


I think the potential for a game to provoke some new legislation is very real. The number of people changing their use of time, space and infrastructure so suddenly is unprecedented, and even if it fades over time, the chances of more games like this one are high.


It seems to be part of our habit in mathematics to first make many discoveries using brute-force and large enumerations, and then later extend the results with more sophisticated methods if we can find them. For one practical example, if you were doing a lot of computation in the 1960's you might carry around a book of trig functions, or a slide rule, but after microprocessors came about a pocket calculator could do all those functions with better precision. With proofs, many questions are proven up to some number n, which is only further extended by feeding the algorithm into a powerful computer. But occasionally we discover a way of reframing the problem so that it can be solved with relatively little computation.
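
To make the "proven up to some number n" pattern concrete, here is a minimal Python sketch of that style of brute-force verification. Goldbach's conjecture is just my own choice of stand-in example (it isn't mentioned above), and the bound is deliberately tiny:

    import math

    # Brute-force "verified up to n": check Goldbach's conjecture
    # (every even number greater than 2 is a sum of two primes)
    # for all even numbers up to a bound. Extending the result is
    # then purely a matter of throwing more compute at a larger n.

    def primes_up_to(n):
        sieve = [True] * (n + 1)
        sieve[0] = sieve[1] = False
        for p in range(2, math.isqrt(n) + 1):
            if sieve[p]:
                for multiple in range(p * p, n + 1, p):
                    sieve[multiple] = False
        return {i for i, is_prime in enumerate(sieve) if is_prime}

    def first_counterexample(n):
        primes = primes_up_to(n)
        for even in range(4, n + 1, 2):
            if not any(even - p in primes for p in primes if p <= even // 2):
                return even
        return None  # the conjecture holds for every even number up to n

    print(first_counterexample(10000))  # prints None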

I believe programming has some analogous quality to it: It's much easier to solve just one problem and gradually find ways to generalize it.


> if you were doing a lot of computation in the 1960's you might carry around a book of trig functions, or a slide rule, but after microprocessors came about a pocket calculator could do all those functions with better precision

The pocket calculator is doing a huge amount of computation to find those results, an amount which would be impractical for the human to do (hence the use of slide rules instead of laborious pen-and-paper arithmetic). It’s just a different flavor of brute force. The calculator is basically going back to the pre-logarithm method, carrying out elementary school arithmetic algorithms very fast.

To be honest, the slide rule method – converting multiplication problems to addition problems via a logarithm lookup table encoded on a stick – is quite a bit more “elegant” than what the calculators are doing. The invention of logarithms in ~1600 was one of the most important advances in the history of science and technology.
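
For anyone who hasn't pushed numbers through a log table: the whole trick is that log(a*b) = log(a) + log(b), so one addition plus a couple of lookups replaces a long multiplication. A rough Python sketch of the idea follows; the table is generated here rather than printed in a book, which is the only liberty taken:

    import math

    # Slide-rule / log-table multiplication: look up the logs of the
    # mantissas, add them, then take the antilog. Precision is limited
    # by the table, exactly as it was for the historical user.

    LOG_TABLE = {round(x / 100, 2): math.log10(x / 100) for x in range(100, 1000)}

    def log_multiply(a, b):
        # Split each factor into a mantissa in [1, 10) and a power of ten.
        exp_a, exp_b = math.floor(math.log10(a)), math.floor(math.log10(b))
        mant_a, mant_b = a / 10 ** exp_a, b / 10 ** exp_b
        # Two lookups and one addition stand in for the multiplication.
        log_sum = LOG_TABLE[round(mant_a, 2)] + LOG_TABLE[round(mant_b, 2)]
        # Antilog: back from the sum of logs to the product.
        return 10 ** (log_sum + exp_a + exp_b)

    print(log_multiply(3.14, 27.2))  # about 85.408, to within table precision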

* * *

The same is true in many other kinds of mathematical problem solving. In the past, we only had access to manual effort and limited human time/attention, so the available brute computation was quite limited and many problems were entirely intractable, and great cleverness was required to solve others. The goal of symbolic reasoning was to reframe problems to eliminate as much manual computation as possible. For that reason, it was necessary to learn how to manipulate trigonometric identities, solve nasty integrals by hand, etc. We had to be able to rewrite any problem in a form where each concrete computation only required a few simple arithmetic steps plus as few table lookups as possible. Despite such simplifications, actually performing computations often required teams of people mechanically performing arithmetic algorithms all day. https://en.wikipedia.org/wiki/Human_computer

Now that computation is cheap, we can dispense with many of the clever/elegant methods of the past, and just throw silicon at our problems instead. This lets us treat a wider variety of problems in a uniform way, and get away from doing nearly so much tricky algebra.


> The pocket calculator is doing a huge amount of computation to find those results, an amount which would be impractical for the human to do (hence the use of slide rules instead of laborious pen-and-paper arithmetic). It’s just a different flavor of brute force. The calculator is basically going back to the pre-logarithm method, carrying out elementary school arithmetic algorithms very fast.

I think the point is more that all those computations are packaged up into a black box where the user doesn't need to think about its internals. Elegant/short proofs are often like this too: they build on deep/high-power/complicated-to-prove results, using them as black boxes. Of course the actual proofs of those theorems might be ugly (e.g. a proof that uses the four colour theorem), but the statement can still be neat.


Your mention of "invention of logarithms" raises a couple of questions.

First: is there a good, accessible (college calculus, some diff eq, some linear algebra) history of mathematics you might recommend?

Second: I've been kicking around an ontology of technological dynamics (or mechanisms) for a few months. In it I classify mathematics under symbolic representation and manipulation, along with what I see as related tools of speech, language, writing, logic, programming, and AI. If that sets off any lights, bells, or whistles, I'd be happy to hear ideas or references.

https://ello.co/dredmorbius/post/klsjjjzzl9plqxz-ms8nww


Stillwell’s book is pretty good. http://www.springer.com/us/book/9781441960528 https://amzn.com/144196052X (Unfortunately recent Springer books are printed on demand, and printing/binding quality can be iffy; in particular quality of books bought via Amazon seems to be quite poor.)


Thanks. Fortunately there are local library copies:

https://www.worldcat.org/title/mathematics-and-its-history/o...


Most calculators implement lookup tables.
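
If it helps to see the shape of that, here is a minimal table-plus-interpolation sine in Python. Real calculator firmware is more involved (CORDIC-style shift-and-add routines are common), so treat this as a sketch of the lookup-table idea only:

    import math

    # Sine from a coarse lookup table: entries at 1-degree steps over
    # the first quadrant, linear interpolation in between, and the
    # usual symmetries to cover the rest of the circle.

    SINE_TABLE = [math.sin(math.radians(d)) for d in range(91)]  # 0..90 degrees

    def table_sin(degrees):
        d = degrees % 360
        if d > 180:
            return -table_sin(d - 180)   # sin(x) = -sin(x - 180)
        if d > 90:
            d = 180 - d                  # sin(x) = sin(180 - x)
        lo = int(d)
        if lo == 90:
            return SINE_TABLE[90]
        frac = d - lo
        # Linear interpolation between the two nearest table entries.
        return SINE_TABLE[lo] + frac * (SINE_TABLE[lo + 1] - SINE_TABLE[lo])

    print(table_sin(37.5), math.sin(math.radians(37.5)))  # agree to ~4 decimals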


"Ugliness has no permanent place in mathematics"

-Paul Erdos


For the manufacturer, it really does make a difference to their bottom line. The "software tie ratio" of console games in the 360/PS3 era was modest - somewhere between 3 and 8 according to this graph [0]. Getting unit cost down matters a lot when you aren't selling much additional content per console, so the hardware got optimized around whatever resources game developers could soak up most effectively.

As such, it was conventional for game consoles to have fast-but-small RAM. The reasoning is that console games mostly bottleneck on rendering a scene at acceptable framerates, rather than on simulating every aspect of a complex scene or achieving maximum detail as a movie would. Since ROM cartridges were fast and optical discs allowed data to be streamed in "fast enough", there were plenty of ways to achieve the right effect under tight RAM conditions. One exceptional case where tight RAM did not play out well is the N64's 4 KB texture cache, which imposed a large burden on the entire art pipeline (if you wanted a high-res texture, you had to resort to tricks like tiling it across additional geometry).

Today what is demanded from a console is much more in lockstep with every other consumer device - consoles do more computer-like things, they multitask a bit, and scenes are doing more memory-intensive work, so they're built to be more well-rounded and get more RAM.

[0] http://vignette2.wikia.nocookie.net/vgsales/images/c/ce/Esti...


The answer is: it depends. They will silently bend the "TRC" (the platform holder's technical requirements checklist) a little if there is a business case for releasing something now and not later.

But most of the requirements center on nitpicks of software polish: using the specific words and phrases mandated for discussing the device, making sure loading screens aren't just a black screen, ensuring the game doesn't crash if the user mashes the optical eject button, etc. These things add a level of consistency, but they aren't the same as "solid 60 Hz" or "no input lag". Issues of the latter sort can usually be shipped anyway; they just degrade the experience everywhere.


Installing a bouncer means following technical documentation and having a server free. The first requirement kills the interest of people who want a single app install. The second kills the interest of people who want the service free and run by a third party.

These are not enormous barriers but they were enough to put me off of setting up Quassel on a VPS for a few years. Now that I've done it I don't want to go back, of course, and I don't see it as a huge chore to do it again. But that's what's making it "not actionable" - the perception that this is going to end in a nightmare of configuration files and Stack Overflow searches.


That's part of the reason why "Quassel as a service" would be a very powerful tool.

Currently, though, we have to tell users who want that to use IRCCloud instead - about half of those people come back after the first week of free IRCCloud usage, when it asks them to pay, and start using Quassel from then on.


I totally understand that installing a bouncer isn't desirable or easy for everybody. But to call it unhelpful and non-actionable is just ridiculous.


Historically, most governance switches between "business as usual" and "urgent crisis" without much gradient. This gives the appearance of stability most of the time, since in the business-as-usual mode most of the political effort happens behind the scenes: horse-trading, serving the high bidders, driving a wedge on an issue to create a new support base, or shutting down challengers. But when a crisis hits, everyone's plans go out the window and chaos ensues. When it finally settles, there is a new order and a new set of policy issues, not always for the better.

What we have at this moment, across many nations, is a set of crises that none of the existing governments have the resolve or imagination to solve. That's why the parties are tearing themselves apart - they are realigning everything.

At ground level this manifests as partisan politics in part because the remnants of the old platforms are in do-or-die mode; with no stable middle to appeal to, they have to pick a place to move to, and it is going to be left or right of their old position.


IME, having completed undergraduate studies in economics, it has a huge indoctrination blind spot: as typically taught, the theories are presented as hard rules, and students generally aren't exposed to more than one economic theory in depth. Study time is spent in a performance of mathematical theatre, extrapolating broad notions of how society behaves from simplified models. Some of it has useful explanatory power - especially microeconomic theories that have plenty of experimental backing - but just as often there is a design constraint of "our options as policy makers are x, y, and z because those are what are in our model." And this is not actively probed at the undergraduate level - to do the homework and pass the tests you have to answer "yes, of course z is the best policy, because our textbook model says so." Which, I suppose, is like high school history and its tendency to use a singular narrative of cause and consequence, but with some symbols thrown in. It is deceptively universalizing.


Have some DSP resources:

Richard G. Lyons, Understanding Digital Signal Processing [0]

Gareth Loy, Musimathics: The Mathematical Foundations of Music (volume 2) [1]

r8brain-free-src (high quality sample rate conversion algorithms) [2]

KVR's DSP forum, frequented by actual pro audio developers [3]

[0] https://www.amazon.com/Understanding-Digital-Signal-Processi...

[1] https://www.amazon.com/gp/product/026251656X/ref=pd_cp_0_1?i...

[2] https://github.com/avaneev/r8brain-free-src

[3] https://www.kvraudio.com/forum/viewforum.php?f=33


To put it another way, the British Empire had a "mercantilist" foundation - maintain a positive balance of trade and surplus material assets by forcing its colonies to engage in trade on preferential terms - while the U.S. has been "free trade" oriented in its imperial years, with balance of trade being less important than retaining GDP.

Both have imperial dynamics, but for the U.S. the important part is being the "world's policeman" as it comes with the authority to destabilize regions and install puppets where their sovereign governments may act against U.S. interests. With this more limited administrative footprint, they're free to focus on use of violence and propaganda, while taking up domestic market policies that benefit net importer businesses and thus justify maintaining the empire.

Where people say that the Pax Americana is ending, it is in part because the dynamic has grown more multipolar since the end of the Cold War, with a diverse group of nations asserting their interests without being overthrown.

