Hacker News | s_m_t's comments

Yes, the in-depth domain specific 1150 page book is going to be more useful for building an engine than the introductory 340 page book on design patterns with a gamedev flavor.


Right. I don't think "design patterns with a gamedev flavor" has any value for game developers, but I didn't want to just say the book was bad without suggesting an alternative. They're both aimed at game developers who want some deeper understanding of how to structure games and you can pick out chapters you like from the 1150 page book.


A nice hack for building a small keyboard/macro pad that is 80% of the way there is to buy one of those cherry mx key testers, a microcontroller dev board of your choice, and a small lipo. All together it might cost you something like $40 depending on how many keys your key tester has and you should only need a soldering iron and some wire to put it together.
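The firmware side is simple too. As a rough illustration (in plain Python rather than the CircuitPython/QMK you'd actually flash to the dev board, and with `read_switch` standing in for a GPIO read), the core of what the board does is just a scan-and-debounce loop over the tester's switches:

```python
# Sketch of the scan/debounce loop a macro pad firmware would run.
# On real hardware each press/release event would be turned into a
# USB HID report; here we just track clean state changes.

DEBOUNCE_TICKS = 3  # consecutive identical reads before a change counts

def make_debouncer(num_keys):
    """Track per-key state and report debounced press/release events."""
    state = [False] * num_keys   # debounced key state
    counts = [0] * num_keys      # how long the raw read has disagreed

    def scan(raw_reads):
        events = []
        for i, raw in enumerate(raw_reads):
            if raw != state[i]:
                counts[i] += 1
                if counts[i] >= DEBOUNCE_TICKS:
                    state[i] = raw
                    counts[i] = 0
                    events.append((i, "press" if raw else "release"))
            else:
                counts[i] = 0  # raw read agrees again; reset the counter
        return events

    return scan
```

The debounce counter is what keeps a bouncy mechanical contact from registering as a burst of presses.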


Those are handy to have anyway. I have a tester that originally had 64 keys that is down to 56 or so… I normally like really light keys, like 30-40g, but it’s nice to have much heavier action on a few keys like backspace and numpad enter so you can give em a really satisfying thwack without bottoming out.

Sure there are places that will sell switches in lots of 5 or 10, but it’s really nice to be able to swap out and experiment and see just how much heavier works for a given key.

My actual ideal would probably be something more like a layout that aimed for equal perceived (instead of actual) force. So basically heavier towards the middle and lighter towards the outside, but done mindfully of kinematics and ergonomics.


I second this as a great way to get started. If you look on Amazon there are a bunch of cheap eight-key macro pads (Lichifit makes one) that are built from layers of acrylic: a base layer, a layer that houses the microcontroller, and a layer for the keyboard. It's easy enough to sub in your own micro and 3D print a replacement layer.


The trick is that you don't actually have to do all the tasks you write down. It is still nice to have a record of what you've planned so if you ever decide to jump back on any task you have a history of what you have done and any context associated with it on hand.

A lot of my todos are something like "I found this article interesting but I don't have the current skills to really understand everything in it" or "I want to add this feature to X but I think I will wait until the new version comes out because it will be easier then" or "I want to remember this when I finally decide to do Y".


This is what my read-later list is like. I always keep adding to it, but I don’t really have any part of my life carved out to read any of it. It’s full of good intentions to learn about things or start new hobbies. I migrated it a while ago and was really disappointed to find a lot of dead links. It makes me wonder what I missed out on.


Why would Zeus care? The Greek afterlife is a sort of pointless outcome to Pascal's wager, so it isn't even worth factoring in.


> Why would Zeus care?

Well, once you start asking questions like that, the whole thing starts to unravel. Just play along, don't rock the boat.


How is that legal? What is stopping the hospital from charging 200 million dollars for the rabies shots?

If I hired someone to fix my plumbing and they wouldn't give me an estimate and then decided to charge me $20k/hr I don't think a court would honor that contract.


It's legal and legitimized because of corruption.

Whoever has the most money in America has captured the regulators and politicians, through money conveyed to them by lobbyists.

You can live without plumbing by making different arrangements, but you can't live with a fatal lyssavirus.


I'm not sure about that. Hospitals use the fact that patients have no choice. Amoral business is the most profitable. A pipe in your house has burst; you called a plumber and he arrived in an hour (other plumbers were even further away). He looks at your situation and says "sign here, my rate is $5k/hr plus unexpected expenses up to $50k," and when you try to negotiate, he calmly looks at the water flooding the second floor of your $2M house and says nothing. You both know that very soon the repairs will cost you $100,000. I doubt any judge would agree that the plumber owes you work at a rate convenient for you.


Sure, the plumber is free to only offer outrageous rates, but I'm not sure they would be able to mark up a roll of teflon tape they used in the repairs to $50k and say that is covered by "plus unexpected expenses up to 50k".

To me it just seems like fraud. Forget about patient choice, because in this situation the patients conceivably could have shopped around for rabies shots; AFAIK you aren't going to die because you got your rabies shot a few hours later than you needed to. The problem seems to be that even if they were able to shop around and find the lowest price, apparently the hospital can just charge whatever it wants regardless of the expected or quoted cost.

Going back to the example of the plumber: imagine my house is flooding and I get in contact with two plumbers. One tries to price gouge me and quotes $5k/hr, and one is seemingly honest and quotes $80/hr. I go with the plumber who quotes less, but then he ends up charging me $50k for the teflon tape he used anyway. This isn't a breakdown in the ability to shop the market and the normal competitive dynamics that arise from that; the breakdown occurs because of the fraud.


It's the gambit of brinkmanship + Vito Corleone.


Hospitals are required, under federal law, to list all their prices publicly.


Why would ChatGPT provide me with dangerous code if I prompt it benignly? Just the other day I used ChatGPT to create a webscraper for a friend's project, and it provided me with an only slightly wrong implementation to scrape data from a specific website. Fifteen minutes of reviewing the code and editing literally three lines and I was done. Unless it was trained on a bunch of webscraping code that also contained malicious code, I just don't see how it could happen.
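For context, the kind of code it produced was nothing exotic. A stdlib-only Python sketch of that sort of scraper (the target site and fields are hypothetical here; the real one used a specific website) is just an HTML parser collecting link text and hrefs:

```python
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collect (text, href) pairs from the anchor tags in an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:  # only collect text while inside an <a>
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

def scrape_links(html):
    parser = LinkScraper()
    parser.feed(html)
    return parser.links
```

In real use you'd fetch the page with `urllib.request` (or `requests`) and feed the response body in; the point is that code this small is easy to review end to end.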

If anything they could enable an option to have ChatGPT review the code for you and let you know that it hasn't accidentally thrown in some fishy code.


>Why would ChatGPT provide me with dangerous code if I prompt it benignly?

Because it's a black box that's sensitive to unexpected inputs. Humans create bugs too, but an unaligned AI doesn't have the sense to be careful of certain classes of bug.


The burden of proof for executing arbitrary code as input should always be on proving that it doesn't violate your security or reliability model. You shouldn't need to assume anything about that code other than that you're able to configure the runtime before it executes. Whether it comes from ChatGPT or users shouldn't matter, you should assume some of those programs were written by hackers and that you don't know which ones.

Even if you trust ChatGPT you should do this. You should assume that one day hackers will intercept your connection to OpenAI and hand you malware, and they should either fail or be forced to use a zero-day exploit against, e.g., Firecracker (modulo your threat model and such).
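One cheap layer of that posture, sketched in Python (this is only a sketch of the attitude, not real isolation; a real setup needs a sandbox or VM like the Firecracker example): run the generated program in a separate process with a hard timeout, and treat any overrun as hostile rather than trusting where the code came from.

```python
import subprocess
import sys

def run_untrusted(code, timeout=2):
    """Run a string of Python in a separate process with a hard timeout.

    One layer only: it contains hangs, not malice. Real isolation needs
    a sandbox/VM underneath this.
    """
    try:
        proc = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: isolated mode, no env/site
            capture_output=True,
            text=True,
            timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return None  # overran its budget; treat as hostile
    return proc.stdout
```

The design point is that the caller never has to reason about what the code does, only about what the runtime lets it do.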


Not the poster you asked the question to but:

I think we are going to be building and using 'sub-AGI' models for a very long time, probably well after we (probably) achieve full AGI. I think these 'sub-AGI' models are going to radically change the way we think about intelligence, in the sense that we are going to be better able to nail down what we find special about human intelligence, while at the same time discovering that it is a narrow and somewhat arbitrary subset of what we consider advanced intelligence.

ChatGPT seems to me like a very advanced Markov chain, which isn't to denigrate it; the surprising part is how useful and effective it is, which to me shows we have been thinking about intelligence the wrong way. IMO an important portion of our brain probably works in a very analogous way, but you could build as advanced a version of ChatGPT as you like and it wouldn't become 'sentient' or go HAL9000 on us.
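To make the comparison concrete, here is a toy word-level Markov chain. The point isn't that this is how ChatGPT works internally (it isn't), only that "predict the next token from the preceding context" is the shared framing, with the context window and the model scaled up enormously:

```python
import random
from collections import defaultdict

def train(text, order=1):
    """Map each word-context of length `order` to the words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model[context].append(words[i + order])
    return model

def generate(model, seed, length=10, rng=random):
    """Walk the chain from a seed context, sampling a successor each step."""
    out = list(seed)
    for _ in range(length):
        successors = model.get(tuple(out[-len(seed):]))
        if not successors:
            break  # dead end: context never seen in training
        out.append(rng.choice(successors))
    return " ".join(out)
```

Raising `order` (the context length) is the crude analogue of a bigger context window: the output gets more coherent but the model memorizes more and generalizes less.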

There is definitely a danger of someone taking ChatGPT9 and telling it to create computer viruses to take down digital infrastructure, or telling it to trade in a way that crashes the stock market. But that's going to happen anyway when all the zero-day exploits are detonated in the lead-up to WW3, and all things considered that has a higher chance of happening sooner than ChatGPT9 or the equivalent being used to do it. Instead of getting worried about rogue AI we should probably just harden our infrastructure.


Hardening infrastructure is a trillion dollar problem that a huge number of people don't believe in, or believe it's a waste of money to spend even more on. And that is before we make cutting edge thinking machines that could attack it.

Also, a paperclip maximizer could in theory go 'HAL' on you without intent or malice. Sentience isn't needed. And at the same time as we're working on embodiment models, what we could consider human like sentience could be far closer than we expect.


Catan's trading is a mistake in game design for sure


Absolutely not. The trading is the whole point. You talk to each other, you negotiate, it's interactive. Sure it means the same person always wins, but it's the interaction that keeps it alive.


It's a difficult issue because I think the trading is also the core of what makes the game interesting; it's hard to force players to cooperate with their competitors while avoiding the kingmaker issue.


I played another game fairly recently where people could negotiate trades for resources. Also a mistake in game design. I want to say it was Moonrakers?


You could always try and fix it so the table logic is aware of the latex snippet renderings.

I recently messed around with org-drill, found it wasn't really behaving like I wanted it to, decided to change it myself, and then was pleasantly surprised to find that changing its basic functionality was way easier than I expected (and I've barely started learning lisp/elisp).


Seems to have left out the simplest explanation: consciousness doesn't exist. Unrelated, but it is a bit strange to me that a lot of (most? nearly all?) atheists believe in consciousness.


I feel like this position is meaningless. People experience something they call "consciousness". The phenomenon exists. Saying that it doesn't exist seems to really mean "the phenomenon is an illusion created by something else". But we don't understand how that works either, and this explanation is just as untestable as any other.


That explanation is too simple for those who think/feel subjective states exist, however. While someone may be fooled about what they're perceiving, they can't be fooled about that they're perceiving, from a first person view. It's much harder to know if someone else is perceiving or just behaving like they are.

But there seems to be something of a fundamental divide between those who find consciousness obvious and those who don't. ISTR Chalmers, after a boat trip with one of the eliminative materialists (might have been Dennett or Dawkins), said that all the experiences he had had on the trip just confirmed that he was conscious, and thus the existence of consciousness, and the eliminative materialist said he still had no idea what Chalmers was on about.


Dennett, in his gloriously titled book Consciousness Explained, seems to suggest we're merely tricked by our perceptual apparatus into believing we have conscious experience, similarly to how a succession of still frames, 24 per second, fools movie audiences into believing they see moving pictures (I may be a little unfair, but that's more or less what I took away from my reading). It makes me wonder if Dennett actually is (or is indulging in literary cosplay as) a philosophical zombie.[1]

1: https://en.wikipedia.org/wiki/Philosophical_zombie


That's like saying movies technically don't exist because they're just a series of still frames. That sequence of still frames is a real thing in its own right, and it has a name.


The movie is more analogous to the conscious person, or perhaps “the mind,” and no one is proposing that those things don’t exist.

Insisting that consciousness must be real because it feels undeniably real is analogous to insisting that the way movies work is by showing continuous motion because that’s undeniably what it feels like when you’re watching a movie.


The difference would be that in the movie case, there is a you that is being tricked, that is present even through the trickery. You're having the experience of something moving, but it's really your senses being tricked into constructing this experience out of something else.

But for the case of consciousness, the "what's being tricked" is having experience at all. So the claim "insisting that consciousness is real because it feels real" rests on the cogito-ergo-sum like observation that if there were no experience, there would be nobody to feel it was real to begin with because feeling is an experience. That you are feeling it (in a first person sense) is evidence in itself, although frustratingly, not evidence that can be communicated.

That's what makes it so hard, though. Either "cogito ergo sum" seems self-evidently true, or it seems self-evidently false, and there's no way of getting to the position by indirect means. And because it can't be communicated, there's seemingly no way to unambiguously show someone else what you mean.


> That explanation is too simple for those who think/feel subjective states exist, however.

Normally good faith pursuits of knowledge don’t explicitly start with the pursuer having already decided that only one conclusion is acceptable even if that conclusion feels like it must be the correct conclusion.


Then where does the first person experience of seeing red come from? Where does red come from? You can't say that a certain frequency of EM radiation is literally the same as the color red you experience.

That's what consciousness is: the first-person experience that we have. And it's not just some reflection of brain processes added on top; consciousness has the ability to influence things. Otherwise we would not be talking about it.


This was not left out. It is mentioned as the "ultra-materialist" position.


I missed that, just shoot me I won't notice as the author says lol


I was gonna say, what if consciousness is a trick of the brain? Like, we just do what we do and at every moment like Maxwell Smart our brain goes, "I meant to do that!" and so it feels like we made decisions "consciously"?


It's not just about conscious action but also perception, being conscious of things. I don't know how that could be a trick.


I suspect that's a trick, too. I speculate that as soon as you get a digital mind sophisticated enough to model the world and itself, you soon must force the system to identify with the system at every cycle.

Otherwise you could identify with a tree, or the wall, or happily cut parts of yourself. Pain is not painful if you don't identify with the receiver of pain.

Thus I think you can have unconscious smart minds, but not unconscious minds that make decisions in favour of themselves. Because they could identify with the whole room, or with the whole solar system for that matter.

Would you even plan how to survive if you don't have a constant spell that tricks you into thinking you're the actor in charge?

that's consciousness.


A lot of the things going on with ChatGPT make me wonder if AI is actually very limited in its intelligence growth by not having sensory organs/devices the same way a body does. Having a body that you must keep alive enforces a feedback loop of permanence.

If I eat my cake, I no longer have it and must get another cake if I want to eat cake again. Of course, in the human sense, if we don't want to starve we must continue to find new sources of calories. This is ingrained into our intelligence as a survival mechanism. If you tell ChatGPT it has a cake in its left hand, and then it eats the cake, you could very well get an answer like the cake is still in its left hand. We keep the power line constantly plugged into ChatGPT; for it, the cake is never-ending and there is no concept of death.

Of course, for humans there are plenty of ways to break consciousness in one way or another. Eat the extract of certain cacti and you may end up walking around thinking that you are a tree. Our idea and perception of consciousness is easily interrupted by drugs. Once we start thinking outside of our survival, it's really easy for us to have very faulty thoughts that can lead to dangerous situations; hence, in a lot of dangerous work we develop processes to take thought out of the situation, making us behave more like machines.


> I speculate that as soon as you get a digital mind sophisticated enough to model the world and itself, you soon must force the system to identify with the system at every cycle.

I kinda think the opposite: that the sense of identity with every aspect of one’s mind (or particular aspects) is something we could learn to do without. Theory of mind changes over time, and there’s no reason to think it couldn’t change further. We have to teach children that their emotions are something they can and ought to control (or at the bare minimum, introspect and try to understand). That’s already an example of deliberately teaching humans to not identify with certain cognitive phenomena. An even more obvious example is reflexive actions like sneezing or coughing.


I’d love to hear more about this. What is it most people are referring to when they talk about consciousness?


Haha, no, they didn't.

