
It baffles me that people ignore the fact that they have subjectivity when making these claims. It makes a certain amount of sense, since science itself aims to remove any taint of subjectivity from its practice, so folks who want to be thoroughly 'scientific' in other aspects of their lives (e.g. their personal philosophies of reality) carry over this ideal.

But it's also a contradiction, since scientific methodology demands we check our results empirically: the fact that you have subjectivity (i.e. that qualia exist), and what it's like, is the most direct empirical fact we have access to, and one which needs reconciling in any discussion of the universe's constitution.

I think people sort of push this point aside by saying that qualia/subjectivity could be generated by math (or a computational process), and that's what really matters. But we have no examples of abstract math generating anything; physical processes generate things and we use math to describe their behavior—but that's it! And furthermore, even if math generated subjectivity—it's still there in all of its un-mathematical obviousness at the end of the day.

I've yet to see an argument in favor of math having higher ontological status than this which wasn't every bit as unrestrainedly imaginative and unfounded as Plato's 'ideal forms'. I would expect a thoroughgoing adherent to scientific methodology to instead say something along the lines of, "this is outside the domain of science to judge one way or the other; the most we can say is the idea that reality is made of math is one untestable hypothesis among many whose probabilities of correctness each approach zero by dint of their many peers."




> It baffles me that people ignore the fact that they have subjectivity when making these claims.

Do they? Everyone is entitled to their unquestioned assumptions as long as they state them outright and describe their parameters. I have yet to see dualists (and dualism is ultimately at the base of this whole line of argumentation) admit that they posit subjective experience _without evidence_, since they equate asking questions and "seeing" things with subjective experience. It's worthwhile debating and deconstructing just exactly what that means before accepting it and its hokey mystical follow-ons. The fact remains that no entity can successfully prove to another entity that it has inner experience, so how would proving you have subjectivity to _your own mind_ work? You can't, and probably, you don't bother, because it is somehow "self evident" that you have this property. The whole qualia and consciousness debate is riddled with circular reasoning about something that dualists just "feel." Well sometimes I feel like I might have the same thoughts as a jelly donut, go ahead and prove me wrong with math!

> I think people sort of push this point aside by saying that qualia/subjectivity could be generated by math (or a computational process), and that's what really matters. But we have no examples of abstract math generating anything;

This is a bizarre and unsupportable statement. Clearly physical processes can carry out computation, which is a special case of mathematics (or vice versa, depending on what you got your doctorate in), and computation can produce statements in whatever language you desire, even English. Yes, chips and software stacks produce outputs, even in English. How is that not "generating" something? Ah, but you ask what is inside? Tons! There are gazillions of bits inside computers, and similarly gazillions of flashes in your neural circuitry. That's a whole lot of "inside". How do you know that it _doesn't_ feel like something to be all those twinkling bits? If you still accept without question that you do in fact have subjective experience, doesn't it stand to reason that if the neural network which absolutely definitely does correlate with your consciousness did feel like _anything_, then by definition it would have to feel like this?


> The fact remains that no entity can successfully prove to another entity that it has inner experience, so how would proving you have subjectivity to _your own mind_ work?

If you are trying to 'prove' subjectivity, in the sense of providing an argument for it, then you are missing the point: you already experience it directly by definition, so there is nothing to prove.

> This is a bizarre and unsupportable statement. Clearly physical processes can carry out computation, which is a special case of mathematics ...

I am arguing against the idea that the computation (i.e. the math part) carried out the physical process—not the other way around.


> If you are trying to 'prove' subjectivity, in the sense of providing an argument for it, then you are missing the point: you already experience it directly by definition, so there is nothing to prove.

I can also perceive that glasses of water magically break and reconstitute pencils, in what we now know to be an optical illusion. "Self-evident" observations stand only so long as they are consistent with all other observations. The premise that subjectivity exists is not consistent with everything else we know without introducing additional structure to the universe that has no purpose but to stroke our egos. Therefore, it should be discarded.


> The premise that subjectivity exists is not consistent with everything else we know without introducing additional structure to the universe that has no purpose but to stroke our egos.

We must be using different definitions of subjectivity. I'm just referring to whatever one experiences at some point in time. Do you experience anything? Then you have subjectivity.


> We must be using different definitions of subjectivity. I'm just referring to whatever one experiences at some point in time. Do you experience anything? Then you have subjectivity.

We are, because an ontology that takes subjectivity as primitive/irreducible is different than an ontology that reduces apparent subjectivity via third-party objective facts. You subscribe to the former, I subscribe to the latter. It is precisely this distinction that differentiates materialists and dualists in the philosophy of mind.


I don't subscribe to either (and I'd venture to say neither does westoncb). My view is that both are flawed interpretations of reality. Both dualists and materialists assume "objects" to have some sort of independent existence and the argument between them is that the former claims there is something else that also exists while the latter claims nothing else exists. However ask yourself, what sort of existence test can any object (or concept for that matter) pass without the subjective knowing of it? The moment an ontology presumes either objects or subjects as the core premise of reality, it is already one step removed from it.


> However ask yourself, what sort of existence test can any object (or concept for that matter) pass without the subjective knowing of it?

This question doesn't yield any insight, because it still leaves us with the question of the nature of subjectivity, a term you use with impunity but without definition!

You seem to be implying that subjectivity and objects must form some sort of recursive relation, but this then already assumes subjectivity is irreducible and the question at hand is precisely whether such an elaborate construct is necessary. There's plenty of evidence hinting that we're just special pleading about consciousness's special properties.


I did not mean to imply that subjectivity is irreducible, but rather to say that it is a mistaken belief that is the flip side of another mistaken belief (i.e. that reality is the objects/concepts we use to describe it).

>it still leaves us with the question of the nature of subjectivity, a term you use with impunity but without definition!

I didn't try to promote or define subjectivity, I challenged you to define an object without making any references to something subjective (as this is the ontology you said you subscribe to). What I was getting at is that you'll be hard pressed to present any object that is not a concept created by a subject. Mind you, I'm not saying objects and subjects are real, but that they are co-created by the mind and as such are equally unreal as far as representing what actually is.


> I didn't try to promote or define subjectivity, I challenged you to define an object without making any references to something subjective

I can easily posit an ontology that references no subjects. Just because I formulated said ontology from sense perception doesn't entail that it references subjective concepts.

> Mind you, I'm not saying objects and subjects are real, but that they are co-created by the mind and as such are equally unreal as far as representing what actually is.

Except as per Moore, this sort of skepticism necessarily presupposes the very knowledge that it attempts to undermine, and therefore it contradicts itself and is literally false.


>Just because I formulated said ontology from sense perception doesn't entail that it references subjective concepts.

My point is that calling it sense perception will not make the subjective pre-conceptual experience of it go away. I'm not advocating a ghost in the machine here, but trying to show that neither the ghost nor the machine can survive as primitives when put under deep scrutiny; to do that, I first need to demonstrate how neither can independently exist as a valid concept.

>this sort of skepticism necessarily presupposes the very knowledge that it attempts to undermine, and therefore it contradicts itself and is literally false.

Not sure I follow how this relates to the quoted sentence, but the problem with any explanation (physical or philosophical) is that it is always limited to discussing concepts and relationships between them and can never break the boundaries of what can be grasped by thought (this explanation not excluded). My attempt is to show that thought (and thus any object derived from it) is not the pristine reality, but something that comes after reality is processed. At most, thought can point in the right direction (e.g., understand its own limitations).


I'm not talking about an ontology--that concept presupposes too much already. What I'm talking about is more basic: if someone blows in your ear, do you experience anything? Whatever it is that you experience is what I mean by subjectivity. If you do not experience anything, then this universe is a much weirder place than I already thought:)


> What I'm talking about is more basic: if someone blows in your ear, do you experience anything? Whatever it is that you experience is what I mean by subjectivity.

You're evading the issue. You initially claimed that there is nothing to prove about subjectivity. This is patently false.

The subjectivity you apparently experience by observation, if taken at face value as you initially suggested (i.e. "there is nothing to prove"), is logically irreducible to third-party objective facts.

And yet, third-party objective facts are the best explanation we have for everything else.

Therefore, you do have something to prove regarding subjective experience, contra your original claim.


> I am arguing against the idea that the computation (i.e. the math part) carried out the physical process—not the other way around.

Ah, I see. But that follows directly from the computable universe hypothesis.


Right, which if I accepted we wouldn't be having this conversation :)


Colors, smells, pains, etc. are not part of the mathematical or physical description, so you can't simply dismiss the problem of qualia by labelling it as dualism. Where do the colors, tastes, feels come from? What sort of math would result in those experiences?

Here's another problem. You know about the physical/mathematical properties of the world by abstracting away from your experience of a world with color, sound, etc. in it. Remove the colors and how would you know about shape, extension, mass?


>Colors, smells, pains, etc. are not part of the mathematical or physical description, so you can't simply dismiss the problem of qualia by labelling it as dualism. Where do the colors, tastes, feels come from? What sort of math would result in those experiences?

Predictive coding in the interoceptive and control networks of the brain, actually. Sure, it doesn't explain everything, but it pretty well dispels the intuition that makes p-zombies conceivable and explains the vast majority of the empirical and phenomenological facts to be explained.
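
To be concrete about what I mean by predictive coding, here's a toy sketch of the general idea only (Python, with made-up numbers; not a model from any particular paper): the system keeps an internal estimate, predicts its input, and updates the estimate to shrink the prediction error.

    # Toy predictive-coding loop: prediction error drives belief updates.
    # The 'signal' values are hypothetical, chosen just for illustration.
    signal = [0.20, 0.25, 0.30, 0.90, 0.92, 0.95]
    estimate, rate = 0.0, 0.5
    for x in signal:
        error = x - estimate          # prediction error (surprise)
        estimate += rate * error      # move the internal estimate toward the evidence
        print(f"input={x:.2f}  error={error:+.2f}  estimate={estimate:.2f}")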


How do you get anything other than likely correlation from "Predictive coding in the interoceptive and control networks of the brain", and what does it have to do with physics or math? At most, you've provided a biological basis for consciousness. But why would any of that result in color, pleasure, etc?


> But why would any of that result in color, pleasure, etc?

Because it doesn't exist in a vacuum: it exists inside an agent, which is inside the world. And the agent has to achieve a goal, to maximize utility, by learning to act intelligently. This creates a complex value system for states and actions. The values are used to select our actions, and we experience them as "qualia".

So colors and pleasures are just sensory experiences accompanied by their action-values. In the end it's all a game, and qualia is generated by the game. We're mistakenly searching for qualia in the brain, while it is all around us, in the game itself. The brain is but one part of it.
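
To make "the values are used to select our actions" concrete, here is a minimal sketch (Python, with hypothetical numbers; it illustrates the mechanism I'm claiming, it obviously doesn't settle the qualia question): the agent keeps learned action-values, picks the highest-valued action, and updates the values from reward.

    # Tabular action-values for one state; the numbers are made up.
    action_values = {"eat": 0.9, "flee": 0.2, "ignore": 0.1}

    def select_action(values):
        # Greedy selection: act according to the learned value system.
        return max(values, key=values.get)

    print(select_action(action_values))  # -> "eat"

    # One temporal-difference-style update after observing a reward (toy numbers):
    reward, lr = 1.0, 0.1
    action_values["eat"] += lr * (reward - action_values["eat"])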


> But why would any of that result in color, pleasure, etc?

Why wouldn't it? Not everyone shares your underinformed intuitions. Predictions about light are colors, predictions about reward are pleasure, etc.


How would anybody ever get anything other than likely correlation?


In order to check a correlation we test its causal predictive power in new situations. Causation has a special property that it can be used to simulate the future, while correlation does not.


In physics there are causal explanations, and in math there are proofs. If the best you can do is correlation, then Y isn't explained by X.


> Predictive coding in the interoceptive and control networks of the brain, actually. Sure, it doesn't explain everything, but it pretty well dispels the intuition that makes p-zombies conceivable and explains the vast majority of the empirical and phenomenological facts to be explained.

Could you point to something related to predictive coding and control networks explaining anything phenomenological (or give an argument)? From what I can tell the two are completely orthogonal.

Edit: I should add that my suspicion is you are mistaking a system which is isomorphic to relationships between phenomenological entities with something which could account for the presence of the entities themselves.

I would recommend checking out the comment here from user 'theoh' on the "relations are what matters" perspective.


To build a system like that, you need:

1. A physical phenomenon, spectrum of light in this case.

2. An entity with a retina with light perceiving cells at the right bandwidth.

3. A network that represents an invariant concept at that bandwidth without too many dependencies on object and context. However, there might be visual illusions that are hard to get rid of.

4. If the entity needs to describe this concept to others you need a(nother) network that maps the concept to a sequence of utterances, which in turn drive the vocal cords or the muscle movements for a keyboard.

5. If the entity has to be able to reason about the concept it needs to be able to store it for longer times and have flexible ways to combine it with other concepts.

6. If the entity needs retrospection of this behavior it needs to have a representation of itself.


> Remove the colors and how would you know about shape, extension, mass?

The same way that blind people do?

> Colors, smells, pains, etc. are not part of the mathematical or physical description, so you can't simply dismiss the problem of qualia by labeling it as dualism. Where do the colors, tastes, feels come from?

I'm curious: what is a concrete answer that you would find satisfying? Even if it's unrealistic, or would only work in a different universe, what sort of answer are you looking for?


> The same way that blind people do?

Color is a stand-in for any subjective (or creature-dependent) sensation, which would include sounds and feels. Remove those and epistemology collapses.

> I'm curious: what is a concrete answer that you would find satisfying? Even if it's unrealistic, or would only work in a different universe, what sort of answer are you looking for?

I have no idea. How to explain the subjective in terms of the objective? Why does such a division exist? It's a deep metaphysical question.


> Remove the colors and how would you know about shape, extension, mass?

Most of abstract mathematics and philosophy deal in concepts that have no color, shape, or mass. And yet we deal with them. We don't need to have physical grounding for every concept.


You need sensory experiences to bootstrap your way to abstract concepts. Shape has no meaning without objects. Justice has no meaning without human interaction.


Look, as far as I know, I definitely have an actively inferring predictive-coding central nervous system, so I know I'm in here. But you keep insisting you're a ghost, so I can't really be sure you're not a zombie. After all, I only get access to your behavior, so I can't see that you've got a proper predictive system in there at all.

Whenever I try to cut you open and get at it, you keep yelling about qualia and human-rights violations.


> But you keep insisting you're a ghost

I think there has been some miscommunication if what you got from my comment is an insistence that I (and others) are less than what's suggested in the article. What I'm speaking of is strictly addition. If anything, the ghost metaphor would be more applicable to someone claiming 'we are only' abstraction X (e.g. mathematics). Or maybe I'm not following your intention there.


>Or maybe I'm not following your intention there.

That's probably because without a proper cognitive system, you don't have intentionality ;-).


You're right, and I also see how this obvious observation is consistently and elegantly ignored in most scientific or philosophical arguments that try and promote some totally "objective" account of reality.

However, it's not really that surprising: Science over the centuries has steadily (and for its time, justifiably) eroded the role of subjectivity in order to uncompromisingly get at what is consistently true (regardless of what we'd like to be true), and this process required being very suspicious of any subjective accounts that can't be measured.

That being said, perhaps the time has come to dig a bit deeper and, in the same persistent and uncompromising way, ask what objectivity or subjectivity mean. What do they arise from, and to whom? I don't know if any new technologies can be derived from having clearer insights into these questions, but I do feel it would be time better spent than on any untestable mind-made artifact about what reality ultimately is.


I agree it's worth looking more closely at the relationship between objectivity and subjectivity, especially the mapping from the 'nearest' objective structure we know to various aspects of subjectivity. I think this could be a decent route to finding a brain structure tightly related to conscious experience.

However, my suspicion is that the experience itself will remain not amenable to conceptual analysis—that it will remain fundamentally resistant to systematization. For instance, what would the mathematical description of the experience of a fever chill look like? (Hint: the question requires that you don't take a reductive approach: it's not asking about a mechanism which produces or is responsible for the experience, it refers to the experience itself.)

That said, I think it would be very valuable to understand better what the limits of our conceptual faculty are: as impressive as cognition is relative to other things we and other animals are able to do, we're being unduly anthropocentric in assuming it to be boundless in capability. More recently I've come to consider concepts as almost like another sense: they are symbolic patterns consistently formed when we are exposed to certain stimuli (like our experience of smells consistently reappearing when exposed to similar molecules). Our conceptual faculty augments the pure pattern-correspondence of our more primitive senses in that the patterns can be associated with other internal patterns, and in that we can generate new patterns purely from existing patterns (using logical and analogical processes), which at some future time may be usefully associated with some never-encountered external stimulus.

Finding the limits to our conceptual faculty is almost synonymous with finding its 'structure', which I think would have clear benefits.


>However, my suspicion is that the experience itself will remain not amenable to conceptual analysis

I'm in total agreement with this as well. When I suggested looking into what is it that objectivity and subjectivity arise from, I wasn't referring to the "physical" explanation (which I assume will eventually be resolved, though not in the near future) but to the core of the hard problem of consciousness. My take on this is that when looked at closely, it will not be found within the domain of anything that can be conceptualised simply because it precedes concepts. Any concept requires consciousness to exist but not vice versa. Of course the usual argument at this point is something in the line of "so the sun didn't exist before any thinking animals existed?" To which my answer is that the sun as a concept didn't exist. Neither did hydrogen, electrons or wave functions. Only pure pre-concept reality existed, neither objective nor subjective, "waiting" for a mind to evolve and eventually form subjects, objects and concepts that with increasing accuracy describe how this reality functions, and then proceed to confuse this description of reality with reality...


Heh, yeah, that's pretty much exactly how I think of it :) I didn't mean to state my previous comment as a disagreement, just an elaboration.


It seems similar to the appeal of platonism over the centuries. The human mind evolved to categorise things to make sense of the world, and to dislike uncertainty. So the concepts of some abstract, ideal essence of things are easily understood and liked.


Have you considered reinforcement learning as a paradigm that can explain the interplay of subjectivity vs objectivity? Artificial RL agents have perception, values, and can act in the world, learning how to maximize utility. They have a subjective view of the world, given by their perception and their (previously learned) value system. I think RL opens subjectivity up for examination.


AI in general (whether via RL or whatever new development surfaces) opens up a fascinating new arena for learning about how agents operate and for investigating the soft problem of consciousness (i.e., what types of processes, and what level of complexity, are required for having an inner experience), but I stand by my assertion that it will never explain the hard problem of consciousness, and no conceptual explanation ever will.


By saying "no conceptual explanation ever will" you put consciousness in a category of its own, supernatural or mysterious, like Chalmers. That's just a soft form of dualism.

You have sense organs and a sensory cortex that create representations. Those representations are evaluated subjectively for their utility, thus generating emotion. This is qualia. What else is there? The fact that it feels like something to see red or blue? That's just sensation + emotional response.

I know it feels like I just gave you the soft problem of consciousness instead of the hard problem, but I don't think the hard problem is anything beyond a loop of perceive - evaluate - act - learn in the world.

If you want the ontological cause of consciousness, then it must be the environment because the environment teaches the agent perception and values. Values and emotions appear from the game the agent is playing. So the substrate of consciousness is not the brain, but the "game".


>By saying "no conceptual explanation ever will" you put consciousness in a category of its own, supernatural or mysterious, like Chalmers. That's just a soft form of dualism.

I think you just pointed out the very paradox which illustrates my point: In which category do you place something that cannot belong to any category?

At this point you might want to say that I'm just making up an artificial nonexistent construct that has no bearing on reality and my task would be to show that not only does it exist, but it is the very basis necessary for all categories to exist. This thread would probably not suffice to achieve that :-) And by the way, I don't see it as supernatural - if anything, the process of concepts and objects created by an agent to approximate reality and then taking these approximations to be reality is what I find to be supernatural. It amazes me how evolution managed to pull this off...

>The fact that it feels like something to see red or blue? That's just sensation + emotional response.

I'm suggesting that "just sensation" is a placeholder for "I have no idea what that is"... There's a big difference between the explanation of a process and the experiencing of a process. We are so quick to categorise experiences into concepts and relationships between them that we overlook the fact that the sensory experience exists before any conceptualisation and categorisation, and in a way is much more real than any story we mould it into some milliseconds later (i.e "this is red", "this is cold" etc.)

>So the substrate of consciousness is not the brain, but the "game".

By now you probably realise (though I assume still disagree with me) that I see it the other way around: i.e that no concept can exist without knowing it. This is not the same as saying nothing exists without our knowledge, but it does mean that what exists before our knowledge of it remains by definition undefinable and inscrutable.


> In which category do you place something that cannot belong to any category?

How could something created by a body not belong to any category? It belongs to the category of self-regulated systems adapting to their environment.

Creating concepts has been demonstrated since 2012 (Google 'cat detector' https://www.wired.com/2012/06/google-x-neural-network/). In natural language processing, Andrej Karpathy demonstrated in his char-RNN blogpost the emergence of neurons specialized in various concepts (http://karpathy.github.io/2015/05/21/rnn-effectiveness/). It happens naturally in deep networks, as lower layer features are combined into higher level features.

> I'm suggesting that "just sensation" is a placeholder for "I have no idea what that is"...

Ah sorry, I didn't mean "sensation" as in "placeholder for no idea". I meant it as a neural net, the kind we have in our vision system, or the kind used in Deep Learning. They are made of millions of neurons and trained on millions of images to distinguish and localize objects and object relations in images. We have had thousands of papers optimizing on this kind of architecture - Convolutional Neural Network (CNN) - arguably the most successful part of deep learning. Not only is "sensation" a neural net; "emotional response" is also a neural net, one that computes utility scores for states and actions.

So I was referring to concrete concepts. That's what I like about RL compared to philosophy - it replaces vague concepts with concrete models.
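
For instance, here's a minimal sketch of that two-part picture (plain numpy, with toy random weights; nothing to do with any real cortex or published model): a "sensation" net maps a raw input to features, and a separate "emotional response" net scores the utility of what was perceived.

    import numpy as np

    rng = np.random.default_rng(0)

    # "Sensation": a tiny feed-forward net mapping a raw input vector to features.
    W_sense = rng.normal(size=(8, 4))
    # "Emotional response": a value head scoring the features for utility.
    w_value = rng.normal(size=4)

    def perceive(x):
        return np.tanh(x @ W_sense)        # feature representation

    def evaluate(features):
        return float(features @ w_value)   # scalar utility score

    x = rng.normal(size=8)                 # a made-up stimulus
    print(evaluate(perceive(x)))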


It looks like we share a similar enthusiasm for the potential of artificial neural networks (and I also assume a similar distrust in the ability of philosophy to answer these questions), and I agree that NNs are currently the best candidate to eventually create self-conscious machines (though unlike some prominent celebrities, I believe it's still in the far future). The point where we depart is that you seem to be saying there is no real difference between the hard problem and the soft problem of consciousness.

We obviously mean different things by "sensation": I've implemented neural nets to solve problems using ML; I understand the theory and am certainly impressed with the results. I assume that by "sensations" you are referring to the process of inputs entering the input layer of the neural net. This is exactly part of a model for solving the soft problem of consciousness. What I'm getting at is that you know what a sensation feels like before you give it any label - it has a certain "is"ness to it that no equation or concept can convey. The usual take is that this becomes possible when a system becomes complex enough to be self-conscious, attributing it to some sort of emergent property created by the system, whereas I claim it is much deeper than that, and in fact it is the other way around.

Obviously this correspondence format is not really suited to dive deeper, but I'm always happy to discuss this to find if and where I'm mistaken.


I know of a nice argument against simulationism that I think applies here:

If simulated universes can be "real," then we're assuming that there's a mapping between patterns of electrons in a computer (or provable theorems, if you're brave enough), and the real universe they represent. (Say, interpreting two numbers that may be stored very far apart on the chip as the positions of two particles that are, in the universe, very close together.)

The problem that this introduces is that there are no well-motivated ways that we could decide which mappings are "real," and could be "lived in." For example, we might claim that the state of the simulation must be map-able in polynomial time to some charts and pictures that a human could interpret - but an inhabitant wouldn't care.

So, since all mappings are equally motivated as far as an inhabitant is concerned, allow me to choose one that maps this grain of sand to a universe the same as ours.

Well, there you go. Our world in a grain of sand.


Not sure why this is an argument against simulationism. As far as I can tell, you're just making the point that any system could be regarded as simulating any other system given a sufficiently complex mapping. This seems more like an argument for simulation than against it. In fact, I first heard this argument a long time ago in high school as part of a discussion on panpsychism - the speaker claimed that a cup of tea was sentient because interpreted in a particular way, its thermal states could represent a self aware entity.

The fact that you have to consider the complexity of the encoding scheme and not just the input and the output bit patterns is well understood in information theory. If I have an encoding scheme that has included in it a complete simulation of the universe, I can encode any video into 0 bits since the decoder will always know what I am about to request of it. The more the mapping is customized to the information you're searching for, the less complexity is needed in the input bit patterns to produce the desired output. If you go searching PI for a message, it'll probably take you longer to find it if you use the ASCII values of characters than if you use base64. You'll find it even quicker if you specify that 3='h', the 1st 1='e', 4='l', the 2nd 1='l', 5='o'.

This all ultimately comes back to Ockham and parameters on a model. Models with more parameters can fit more things spuriously, so you want the model with the fewest parameters that can do the job. Minimum Description Length (and MML) is a great concept https://en.wikipedia.org/wiki/Minimum_description_length
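
The "encode any video into 0 bits" point can be made concrete with a toy two-part-code comparison (Python; zlib stands in for a generic decoder, and the exact numbers don't matter): whatever you save on the input you pay for in the decoder, which is exactly what MDL keeps score of.

    import zlib

    message = b"hello world " * 100

    # Scheme A: a generic decoder (zlib), treated as shared background knowledge,
    # plus a compressed input carrying the content.
    cost_a = len(zlib.compress(message))

    # Scheme B: a decoder customized to this message (the message is baked in),
    # so the input is 0 bits and the decoder carries the content instead.
    cost_b = len(message) + 0

    print(cost_a, cost_b)   # the complexity moved into the decoder, it didn't disappear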


It's possible to think about the complexity of going between encoding schemes, but unless you've got a special representation blessed as the right way to write things, you won't be able to build a tier system where some encoding schemes are more awkward than others. Normally we have no problem doing this because we can recognize a well-written answer when we see it, but that's based on no more than the way our own brains work (at best, on the parts of our brains that are the way they are because of the universe we live in.)


Even encoding schemes themselves have an information theoretic entropy level regardless of universe. Encoding schemes with less entropy can be considered more 'natural' - it's possible to come up with a measure of the 'naturalness' of encoding schemes in a pure mathematical sense without having to take into account entities beyond the mathematical realm.
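
One crude way to cash that out (a Python sketch that uses compressed size as a rough stand-in for description length, so take it as an illustration only): a regular encoding table has a much shorter description than an arbitrary permutation, so it can be ranked as more 'natural' without appealing to any particular observer.

    import random
    import zlib

    n = 256
    identity_map = bytes(range(n))          # a very regular encoding table

    random.seed(0)
    perm = list(range(n))
    random.shuffle(perm)
    scrambled_map = bytes(perm)             # an arbitrary permutation table

    # Shorter compressed size ~ shorter description ~ "more natural" mapping.
    print(len(zlib.compress(identity_map)), len(zlib.compress(scrambled_map)))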


If my understanding is right, a bijective function won't change the entropy of the strings it acts on. So, as long as I can find enough states in my grain of sand (quite possibly undoable, so imagine I'm talking about a few photons in space instead!), I can map them on to my universe and be no worse-off, from an information entropy standpoint.


A mapping is a computation.

If your mapping is very complex compared to the ostensible simulator, then the mapping is actually doing most of the simulation. So the simulation is not mostly running when the ostensible simulator runs, it is running when you perform the mapping.

If you are inside the universe being simulated and think you can do that mapping, it seems unlikely that there is enough time and space for it.


Right, but if the role of the mapping is just to make the status of the simulation make sense to you, (after all, that's the only thing you would be able to observe it doing), then the inhabitant wouldn't care if the mapping was ever computed. So, it's fine if the mappings would be ridiculously impossible to compute. (Extra note: it is extremely hard to simulate the chaotic motion of nearly every physical system. Does that mean they are doing lots of hard computation? If so, does that make them "more likely" to be interpretable as simulating universes?)


It's not that "the mapping" makes "the simulation" make sense. You have moved most of "the simulation" into "the mapping". So if you don't do the mapping, the simulation mostly does not happen, so there is nothing to make sense of.


Scott Aaronson wrote a great paper arguing around what you mentioned[1]. He tackles the issue of why a waterfall does not simulate a universe if we transform the waterfall in an appropriate way. He argues that the waterfall would indeed simulate a universe, if and only if the transformation function is of polynomial complexity. However! He argues that it is more likely that such a transformation is of exponential complexity, or of an even higher order, and thus the waterfall does not simulate a universe. The existence of a polynomial transformation function that turns a grain of sand into a universe seems like quite a strong assumption.

[1] https://www.scottaaronson.com/papers/philos.pdf


Aaronson is one of my favorite philosophers.

But, what if there's a law of physics inside of the simulated universe that solves exponential problems in polynomial time? You could pack a more limited universe inside of that one, if you didn't want your inhabitants to have access to it. I think that computational complexity absolutely does determine how useful a simulation is to us, but it's not meta-universal enough to do this kind of philosophy with.


Interesting point! All I have to bring in is that even if computational complexity is of no interest to the inhabitants of a simulation, very strong assumptions about the nature of the simulating universe still need to be made. This, I think, means that you're saying time is for all intents and purposes unbounded/unlimited/infinite in the simulating universe, as the law of physics that turns exp->poly problems would still need to be simulated and, if I understand correctly, is not necessarily polynomial in the simulating universe?

If I'm interpreting it correctly it does seem to put some constraints on the nature of the simulating universe, but does not necessarily disprove the existence of it?


Well, I can try to lay it out more formally:

We can divide up the task of running a physical simulation into the part where the state is computed, and the part where the results are made visible to us in a way we understand.

We can also divide computation up into the same two pieces: the actual solving of the problem, and then the task of reading symbols off the Turing tape. Usually, we set up our definitions so that a program isn't thought to "solve" a problem unless the reading-the-answer part is trivial.

Now, for a person living inside a simulation, the reading-the-answer part would become insignificant. Why do they care if we can see in to their universe?

So, if all we want is to make a world for simulated people to live in, why not stuff all of the computational complexity into the reading-the-answer part, and then not do it? They would be perfectly happy with a jumbled-up, uninterpretable state in our world, because they would be living on the inside where it made sense.

So, if that is convincing, then allow me to pitch my universe simulating computer:

Launch two photons into space. The distance between them is the state of the simulation. If you want to look in to the state, you have to solve an incredibly hard exponential problem to map the distance to a certain corresponding wavefunction that you could make sense out of.

However, the inhabitants wouldn't care about whether or not you interpreted their state. So, as far as the simulated are concerned, two photons flying apart do make a simulation!

In my opinion the easiest way to make this reductio-ad-absurdum go away is to reject the idea that the simulated is real.


Not to say that I think that the world is a simulation or anything like that (I don't), but I think that computational complexity and related things probably do induce a natural notion of "naturality" on the different mappings.

Even if there might not be a single most natural mapping, I think it makes sense to say that some mappings are more natural than others. I think there is a natural sense which is independent from human perception and intent, in which a computer running, say, a graph coloring algorithm, is doing that more than a waterfall is.

Even if there is a bijection between a collection of initial states of the waterfall and the inputs of the program, one that fits together with a bijection between the program's outputs and the waterfall's resulting states.


Computational complexity is an insufficient motivator, because while in our current universe some problems are hard and others are easy, in the hypothetical universe there might be a law of physics that solves NP-hard problems. In fact, if the problem is right, you might even end up in a situation where they can't hook their NP-hard problem solvers together in a way that solves problems that we would consider polynomial.


> in the hypothetical universe there might be a law of physics that solves NP-hard problems

I agree with this part.

I wasn't thinking specifically about P vs NP though. (If I were, I think that would be a kind of weak argument, because we don't have a proof that P isn't equal to NP? At least not yet.)

I was thinking more in terms of, like, Solomonoff probability and Kolmogorov complexity and things like that.

And even if you suppose that the universe in question has a Turing oracle, I think you would just have to go up enough levels in that hierarchy in order to solve that.

Even without that though, looking at the space of permutations on N, I think it is likely that there is a natural way to talk about what things are true of "most" things in that space, and things like that, which could, I think, support at least a weak sense of degrees of naturality between mappings, even without appealing to computational complexity.

Edit: I figured I should respond to this part also:

> In fact, if the problem is right, you might even end up in a situation where they can't hook their NP-hard problem solvers together in a way that solves problems that we would consider polynomial.

I think that whatever process is needed to get the output in a form that can be used as input for other things in general should be considered to be part of the computational process. Maybe that should even be implied by the meaning of "compute".

Like, say that some computation takes a number as input in a binary representation, and outputs a number, but outputs it in ternary. If you are talking about how fast the process is on numbers, it should include the time needed to convert back to the binary representation. (Unless, of course, the computation that you are considering the process to be doing includes the "input is in binary, output is in ternary" part. If it is being considered "on numbers" though, I think it should include the conversion time to and from whatever representation of numbers is being considered to be canonical.)


> It baffles me that people ignore the fact that they have subjectivity when making these claims.

The author does not ignore that but states his assumption on the first page:

> The foundation of my argument is the assumption that there exists an external physical reality independent of us humans.

EDIT: Clarification: "external physical reality" means to me that we can rule out subjectivity here.


Yes, I thought that part of the article was kind of funny: just because we admit an external reality, is that supposed to imply there is no internal reality?

Here's the one other part where he references subjectivity/qualia:

> In this example, the frog itself must consist of a thick bundle of pasta whose highly complex structure corresponds to particles that store and process information in a way that gives rise to the familiar sensation of self-awareness.

This is too big an assumption to make: processing something gives rise to sensations? Just because the processing is mathematically describable, is the sensation also? What would the mathematical description of the experience of a fever chill look like? This is the typical omission made by the many people who have formed similar arguments to Tegmark.


> just because we admit an external reality, is that supposed to imply there is no internal reality?

Do we? Where? I don't see that in the essay.

> This is the typical omission made by the many people who have formed similar arguments to Tegmark.

Do you have any links? I'd like to read more.


> Do we? Where? I don't see that in the essay.

Actually, I think you are correct—I misread part of it the first time. He is pretty meticulous about saying this only applies to 'external physical reality,' and the only reference to an internal part is in the excerpt I gave above about the sensation of self-awareness.

However, it's a similar kind of trick. He goes ahead and makes statements like:

> Everything in our world is purely mathematical — including you.

—which isn't really so significant if you are implicitly ignoring any interior aspects.

Instead, it's just kind of a tautology: the 'exterior' aspects of reality (well, the selection of them which we know of/discuss anyway) are precisely the ones approachable to conceptual/linguistic/mathematical formulation. So I don't see much more being said here than "the mathematically describable parts of reality are mathematically describable."

> Do you have any links? I'd like to read more.

I've already spent far too much time on this thread, but if you look around at adherents of digital physics, computational universe, etc. you'll find a similar elision repeating itself: either subjectivity/qualia are ignored completely, or it is assumed that they would be a by-product of some kind of generative mathematical/computational process. I've seen several articles related to the idea just on HN.

Edit: also: this is just a recent manifestation of the idea, which has been around at least since Plato.


>> Everything in our world is purely mathematical — including you.

> —which isn't really so significant if you are implicitly ignoring any interior aspects.

You are quoting the quite catchy climax of the introduction. The paragraph that this quote ends starts with "So here is the crux of my argument." The author is trying to pave the way for the actual argument and a context-free quote of the buildup does not help the discourse.

>> Do you have any links? I'd like to read more.

> I've already spent far too much time on this thread

I am sorry to have wasted your time with requests for foundations of your claims.

> Edit: also: this is just a recent manifestation of the idea, which has been around at least since Plato.

Which aspect? I actually find it very refreshing to read an essay that goes down to foundations of philosophy. If your valuable time allows, could you maybe find the time to explicitly point out congruent arguments in philosophy "since Plato"?


> The author is trying to pave the way for the actual argument and a context-free quote of the buildup does not help the discourse.

The reason I quoted it is to show that he makes assertions meant to sound as if they include interior aspects of reality, even though he says previously in the article that he is only talking about external aspects. There is nothing that happens in that paragraph which affects that fact at all.

To be more clear, the pattern is like: Let us redefine the universe to just refer to exterior aspects of the universe. Then for the rest of the article he can just talk about anything and the reader is supposed to call to mind the initial disclaimer. Statements like "Everything in our world is purely mathematical — including you" gain their force from the fact that he stated his limitation only early on, so that 'everything' has been redefined to mean something else.

> I am sorry to have wasted your time with requests for foundations of your claims.

If I were deferring justification that would be another matter. Instead, I was declining to give yet more elaboration of which there's plenty in this thread already.

> If your valuable time allows, could you maybe find the time to explicitly point out congruent arguments in philosophy "since Plato"?

https://en.wikipedia.org/wiki/Theory_of_forms

The common thread is that folks since Plato have tried to make arguments that general categories of things are more real than particular things themselves. The means of going about this have been various. The author's particular tack is called 'mathematical platonism': https://plato.stanford.edu/entries/platonism-mathematics/


> people sort of push this point aside by saying that qualia/subjectivity could be generated by math

It's not math that generates qualia, it's embodiment: the fact that "math" is inside an agent, which is inside a world that evolves. The agent can move about and affect the world, and it has a utility principle (some needs to fulfill) that guides its actions and perceptions - that is what generates qualia. The source is being in the world, not math. Math can describe anything; qualia describe the world as experienced by the agent.

If you take "being in the world, maximizing utility" as an explanation for consciousness, then you can easily classify various AIs and entities as conscious or not-conscious.


> the idea that reality is made of math is one untestable hypothesis among many whose probabilities of correctness each approach zero by dint of their many peers.

I would like to (gently) challenge the assertion that all untestable hypotheses are "peers" of (neo)Platonic notions. What sets mathematics apart is this: https://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.htm...


Hey eternalban, I am familiar with that article. I posted something else here which I think explains why I find it insufficient for the author's uses (Tegmark, not Wigner):

> imagine some complex 3d object changing in time. Now imagine it's wrapped in a kind of net, like you see in 'wireframes' in computer graphics. If that complex 3D object is our universe, I see math as like that wrapping wireframe: there is a strong correspondence of some kind, and it covers the full extent of what's out there in some sense, but there are still gaps and extents into other directions etc. which are not math (or any form of human description).

I agree with you though, that it should be set apart from many more absurd hypotheses because of the additional merit it has—although for the specific question of whether reality is literally comprised of mathematics (rather than aspects of it being accurately describable by mathematics), it's a binary question, true or false, and while in some sense it's closer than the others, as far as I can tell it still lacks justification.

Edit: I should also add that my jab about probabilities approaching zero etc. etc. was more kind of a joke/flourish which probably could've been left out :)


I'll also add where I see math fitting in: imagine some complex 3d object changing in time. Now imagine it's wrapped in a kind of net, like you see in 'wireframes' in computer graphics. If that complex 3D object is our universe, I see math as like that wrapping wireframe: there is a strong correspondence of some kind, and it covers the full extent of what's out there in some sense, but there are still gaps and extents into other directions etc. which are not math (or any form of human description).


I strongly suspect that I can dramatically affect or entirely 'turn off' your experience of qualia through purely physical means, so I think your objection is much more to do with your skepticism that mathematical or computational processes can generate physical processes than anything to do with qualia.

Of course, you only have to have a virtual object thrown at your head in VR to discover that your brain will mistake computational processes for physical ones rather easily.


Your brain doesn't experience computational processes any more directly through VR: VR works by sending controlled physical sensations to your senses using the same routes as when you ordinarily interact with the world.


Sure, but your brain mistakes lights flashing according to a computational process for light flashing according to a physical process. It's not an argument that you are directly experiencing maths, just an example of how easily our brains can get maths and physics mixed up with the hint of a suggestion that if it's so easy, then perhaps there's not as big a distinction as we might think.


> But it's also a contradiction, since scientific methodology demands we check our results empirically: the fact that you have subjectivity (i.e. that qualia exist), and what it's like, is the most direct empirical fact we have access to, and one which needs reconciling in any discussion of the universe's constitution.

I hereby directly challenge your subjectivity and qualia. It is most absolutely not an empirical fact.


It doesn't matter whether I also have it—as long as at least you have subjectivity/qualia, then it's a part of reality and must be accounted for.


I like the approach of declaring maths as the ultimate construct through which the universe can be modeled and understood. At best it will approach truth, as with Eratosthenes's calculation of Earth's shape and dimensions or the Copernican derivations for the solar system; at worst it will help enlarge the definitions and constructs within mathematics.

So shut up. And calculate.


Learning creates subjectivity.


a comment I made earlier, with Hannah Arendt quoting Heisenberg on the matter: https://news.ycombinator.com/item?id=15634222

In the prologue of "Vita Activa" she wrote this:

> This future man, whom the scientists tell us they will produce in no more than a hundred years, seems to be possessed by a rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking), which he wishes to exchange, as it were, for something he has made himself. There is no reason to doubt our abilities to accomplish such an exchange, just as there is no reason to doubt our present ability to destroy all organic life on earth. The question is only whether we wish to use our new scientific and technical knowledge in this direction, and this question cannot be decided by scientific means; it is a political question of the first order and therefore can hardly be left to the decision of professional scientists or professional politicians.

> While such possibilities still may lie in a distant future, the first boomerang effects of science's great triumphs have made themselves felt in a crisis within the natural sciences themselves. The trouble concerns the fact that the "truths" of the modern scientific world view, though they can be demonstrated in mathematical formulas and proved technologically, will no longer lend themselves to normal expression in speech and thought. The moment these "truths" are spoken of conceptually and coherently, the resulting statements will be "not perhaps as meaningless as a 'triangular circle,' but much more so than a 'winged lion' " (Erwin Schrodinger). We do not yet know whether this situation is final. But it could be that we, who are earth-bound creatures and have begun to act as though we were dwellers of the universe, will forever be unable to understand, that is, to think and speak about the things which nevertheless we are able to do. In this case, it would be as though our brain, which constitutes the physical, material condition of our thoughts, were unable to follow what we do, so that from now on we would indeed need artificial machines to do our thinking and speaking. If it should turn out to be true that knowledge (in the modern sense of know-how) and thought have parted company for good, then we would indeed become the helpless slaves, not so much of our machines as of our know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is.

> However, even apart from these last and yet uncertain consequences, the situation created by the sciences is of great political significance. Wherever the relevance of speech is at stake, matters become political by definition, for speech is what makes man a political being. If we would follow the advice, so frequently urged upon us, to adjust our cultural attitudes to the present status of scientific achievement, we would in all earnest adopt a way of life in which speech is no longer meaningful. For the sciences today have been forced to adopt a "language" of mathematical symbols which, though it was originally meant only as an abbreviation for spoken statements, now contains statements that in no way can be translated back into speech. The reason why it may be wise to distrust the political judgment of scientists qua scientists is not primarily their lack of "character" — that they did not refuse to develop atomic weapons — or their naivete — that they did not understand that once these weapons were developed they would be the last to be consulted about their use — but precisely the fact that they move in a world where speech has lost its power. And whatever men do or know or experience can make sense only to the extent that it can be spoken about. There may be truths beyond speech, and they may be of great relevance to man in the singular, that is, to man in so far as he is not a political being, whatever else he may be. Men in the plural, that is, men in so far as they live and move and act in this world, can experience meaningfulness only because they can talk with and make sense to each other and to themselves.

As for the paper:

"It is important to remember, however, that it is we humans who create these concepts" vs. "Modern mathematics is the formal study of structures that can be defined in a purely abstract way."

What's the definition of "to define"? Create a concept of something?

"the only properties of integers are those embodied by the relations between them. That is, we don’t invent mathematical structures — we discover them"

That doesn't follow at all. What's worse, you could also use an example sentence about a child and her mother translated into various languages, and say the actual nouns don't matter, just the relation between them. That's how you discover biological relations, that's how you discover what a priest is and what their function is.

Last, but never least: http://www.bartleby.com/370/55.html


The "relations are what matters" perspective is a structuralist thing, which can be traced to Saussure's linguistics on the philosophical side of things. Henri Poincaré was among the first to affirm a similar view in the sciences.

The question of whether it's a workable perspective or not is way above my pay grade. Some call it differential ontology http://www.iep.utm.edu/diff-ont/

The joke about category theorists being unable to distinguish isomorphic objects seems relevant (saying that as an appreciator of category theory). Also relevant is the hoary question of the identity of indiscernibles. If every distinguishable entity must stand in a unique relation to the universe, couldn't that solve the apparent ontological problem?



