
Articles like this annoy me: it seems to want to comment on philosophy of mind, but shows zero awareness of the classic debates in that discipline - materialism vs idealism vs dualism vs neutral monism, and the competing versions of each of those, e.g. substance dualism vs hylemorphic dualism, eliminativist vs reductionist/emergentist materialism, property dualism, epiphenomenalism, panpsychism, Chalmers’ distinctions between different idealisms, such as realist vs anti-realist and micro-idealism vs macro-idealism…

Add to that the typical journalistic fault of forcing one to read through paragraph after paragraph of narrative before actually explaining the thesis being presented. I'd much prefer to read a journal article where they state their central thesis upfront.




As a linguist, articles like this also annoy me with claims that "X [whales, dolphins, parrots, crows...] uses language." We have known since 1957 that there is a hierarchy of "grammars", with finite-state "languages" near the bottom and transformational grammars at the top. Human languages are, at a minimum, at the level of context-free phrase structure grammars. My point is that by using the word "language" loosely, almost anything (DNA codons, for example) can be considered a language. But few if any other animals can get past the finite-state level--and perhaps none gets even that far.
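To make the hierarchy point concrete, here is a toy Python sketch (entirely my own illustration, not anyone's formal definition): a finite-state recognizer remembers nothing but its current state, so it can check a pattern like (ab)*, but a^n b^n - equal counts of a's then b's - needs an unbounded counter, which is exactly the step up to context-free power.

```python
def accepts_ab_star(s):
    """Finite-state recognizer for the regular language (ab)*.
    Only the current state is remembered; nothing is counted."""
    state = 0  # 0: expecting 'a', 1: expecting 'b'
    for ch in s:
        if state == 0 and ch == 'a':
            state = 1
        elif state == 1 and ch == 'b':
            state = 0
        else:
            return False
    return state == 0

def accepts_anbn(s):
    """Recognizer for a^n b^n, which is context-free but not regular:
    it needs an unbounded count (standing in for a pushdown stack)."""
    n = len(s)
    if n % 2:
        return False
    half = n // 2
    return s[:half] == 'a' * half and s[half:] == 'b' * half

print(accepts_ab_star("ababab"))  # True
print(accepts_anbn("aaabbb"))     # True
print(accepts_anbn("aabbb"))      # False
```

No finite number of states suffices for a^n b^n (the pumping lemma makes this precise); the length-based check above is just the cheapest stand-in for a counter.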

And an article or book that uses the word "communicate" is even more annoying, since "communicate" seems to mean virtually anything.

End of my rant...


As a scientist, articles like this also annoy me, because right off the bat they assume that there are no degrees of consciousness. Just because we don't experience those degrees, and animals can't convey what they feel through language, we assume it is "emergent", or "suddenly there". I think we are too caught up in believing that consciousness must be something really special, some magic discontinuity of spacetime.

I don't believe it is. Somewhere inside our brain there is some perception of self related to the outside world. So we can project self into the future using information from the now and make better choices and survive better ("Information is that which allows you [who is in possession of that information] to make predictions with accuracy better than chance"- Chris Adami). Why do we need all these difficult words?

I bet animals also have some image of self inside there somewhere, and make decisions based on simulated scenarios. Perhaps to a lesser degree, perhaps because of a lack of language they experience it in a different way? Not being able to label any of the steps in the process... Who knows?

Perhaps when we get to simulate a whole brain we can get some idea. But then there is the ethics. We do attribute great value to organisms that have this image of self.


Add to this that a lot of people presume that other people's experience is the same as their own.

E.g. I have aphantasia - I don't picture things in my "inner eye". Through discussions with people about that, a lot of people have described their own differences from the perceived norm, and it is clear a not insignificant number also have other differences, such as not thinking consciously in words.

A lot of people then tend to express disbelief, and question the whole thing on the basis of a belief that if people's inner life does not match theirs, people couldn't possibly be conscious or reason.

People make far too many assumptions about the universality of their own experience of consciousness.


I'm curious about aphantasia - if I asked you to draw a floorplan of your home/office etc. would you not be able to do it?

If you could wouldn't that involve you imagining what the building looks like from the inside and how the rooms are connected together?

Or would you do it some other way?

(I'm assuming you could do it, since if you couldn't, I think that would be a majorly debilitating condition rather than just some amusing fact)


Yes, in fact my ability to draw places I've seen from memory is, if anything, substantially above average, in my experience. However, when I draw from memory, my drawings do tend to be more stylized.

I can "imagine" what a building looks like, but I can't see it. Even our language makes this hard to describe, which is part of why I didn't realise this isn't how other people do it: most of our language for remembering how something looks uses terms that imply we see it, so until a few years ago (and I turn 50 this year) I just assumed that was a metaphor to everyone else too.

How do you imagine (there's that word again...) blind people remember places they can't see?

To me, the notion that I'd need to "see" what the building looks like to draw it is bizarre because I know where everything is in relation to each other, and their shapes, so why would I need to see it?

EDIT: increasingly, and in part driven by split-brain experiments, I tend to think that a whole lot of what we see as our conscious decision-making is instead a shallow retroactive attempt by parts of the brain to rationalise decisions largely already taken autonomously into a cohesive self/ego.


I am largely in the same boat -- I've taken to telling people that I think very well spatially (that is my primary method of thought) but not at all visually. I am much better than most people seem to be when it comes to tasks involving space, direction, and so on -- basically abstractions of reality, not images of it. But if you want me to actually visualize an apple in my head? Yeah, I can't do that at all. Not even a "stylized" apple.

If you wanted me to draw an abstract floorplan of my house, or the layout of my town, I can do that easily. I can navigate an extended road trip unassisted by simply looking at a map ahead of time and mentally storing an abstraction of where I need to go. But if you wanted, say, an image of my house, or the main intersection in the middle of town, or anything like that -- I'm no good. I can't "see" it, and I would not be able to draw anything remotely accurate.

To circle back to your "imagine" vs. "see" problem -- my response to the often-referenced "imagine a ball rolling on a table" exercise often elicits some confusion from people. I can imagine a ball rolling on a table just fine. What color is the ball, though? It has no color. What size is it? It has no size. It is just "a ball", in the abstract -- its only property is shape (spherical), which is ultimately all that is necessary for imagining rather than seeing. If you wanted me to visualize a large red ball on a green table, though, that's beyond me.


This seems similar to how mine works too.

It's as though the connections among things are dimensional rather than visual. I can "render" blueprints as this thread discussed, or know which way I'm facing deep in an intricate tube station, but I'm not "picturing" it - it's just how these things relate in spacetime. I can do the same with exterior architecture (is that an image?) or street corners. I don't think I've ever "seen" anything: "picture yourself on a beach", "counting sheep" - I always assumed those were metaphorical expressions.

I also feel concepts are dimensional, and I can tell if a piece of a concept is missing because where it should be is empty. Think of how the periodic table of elements worked - the gaps predicted missing elements.

A red ball on a green table maps to billiards in dimensional concept space so I can render that too.


Yeah, I also sometimes describe it as spatial thinking. I'm pretty sure that the extent to which I can draw from memory is down to having spent quite a lot of time drawing as a child and getting good at drawing from spatial recollection.

E.g. if I draw a fantasy drawing, it will be closer to impressionism in style, the same way as if I draw something in front of me. It'll be messy. If I draw from memory, the lines are clear, and stylized.

The clearest example of that was an art class at school where we were asked to draw our shoes first from memory and then while looking at it, and both were highly detailed, but without any conscious decision, the line-work was entirely different.

Same as you when it comes to the "ball rolling" scenario. It "has" the properties people add to it, but if they're not affecting the behaviour, they're just verbal labels attached to an abstract concept - I'll remember them, but they won't change anything about the "imagined" scenario.


You have probably heard this before, but this is really bizarre and fascinating to someone like me, who spells difficult words correctly by seeing them (visualizing the word) in my head and judging whether they look right. It is as if it all has to be visual to make any sense.

As a molecular biologist I often deal with minute quantities of substances in small containers, but I just picture them there, the molecules, what happens to them as I dilute a sample or add some enzyme. Reading a book is like watching it play out.

I have a colleague who is also much more text oriented. I really wondered how she can function without seeing all the departments in an overview and the connections between them, and how governance structures overlap with those departments, etc. For me it's a lot to hold in, overwhelming at times. But she's absolutely great in these types of things, just masterful, very structured as well. Her mind must be very alien to me, I always wonder what her experience of life looks like (or not "looks like", apparently... But how it is to her...)


I write a lot, including a couple of novels, and I realised as I found out about aphantasia that it explains a lot about what I read and write, and what I like. E.g. I skim or even entirely skip sections that focus on how something looks, unless it is beautifully written. I care about the language and the ideas much more than about any visual description, because I don't get much value from the visual description. I could, if I wanted to, sketch out a picture of things I've read a description of. E.g. I remember drawing some scenes from Lord of the Rings when I read it for the first time as a child, but I don't get the visual part of the satisfaction of those descriptions until I've drawn them.

But with some works - like Tolkien's, I will still enjoy the descriptions because the language itself is beautiful.

Just to make this weirder: I remember many things by appearance. E.g. I can find my place in a paper I've read years ago by what the page looks like. But I can't see it.

I do, however, see things when I'm dreaming, and I have one solitary experience of, I think, seeing things while awake, during meditation. I say "I think" because there is the possibility that I fell asleep even though I don't believe I did, and the imagery was far clearer than during my dreams.

But I also can't recall images from my dreams while awake, and usually don't remember dreams at all past the first 30 seconds or so awake.


When I close my eyes (and especially if I rub them when they are closed) I see a grey/brown empty space with letters and numbers inscribed everywhere within the field, although the letters and numbers are oriented randomly and not from any script or language I’m familiar with.

Also fifty. Realized this when I was about seven and it hasn’t materially changed since. I have no problem drawing things like blueprints of my house but when it comes to more curvy objects I’m a terrible artist.


> assumes that there are no degrees in consciousness

And I don't get why; it seems quite self-evident that you yourself can experience reduced-consciousness states, whether half-asleep or quite drunk.


The problem for me is that we haven't conceptualized what we mean by "reduced" consciousness. Reduced in what way?

If we create an analogy with audio: audio has two properties, frequencies and volume. Frequencies are the content of the audio, and volume is how "present" the sound is.

Well, the same could be applied to consciousness. When we say "reduced consciousness", do we mean that the mind experiences less content (frequencies), or do we mean that all the frequencies are there but at a reduced volume?
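For what it's worth, the two senses of "reduced" in that analogy are easy to write down. A toy sketch (the dict-of-frequencies representation is purely my own illustration):

```python
# A signal as frequency components: {frequency_hz: amplitude}
signal = {100: 1.0, 440: 0.8, 1000: 0.5}

def reduce_volume(sig, gain):
    """Every frequency still present, each at a lower amplitude."""
    return {f: a * gain for f, a in sig.items()}

def reduce_content(sig, cutoff_hz):
    """Amplitudes untouched, but some frequencies removed entirely."""
    return {f: a for f, a in sig.items() if f <= cutoff_hz}

print(reduce_volume(signal, 0.5))    # all three components, halved
print(reduce_content(signal, 500))   # the 1000 Hz component is gone
```

The two operations are genuinely different transformations of the same signal, which is the parent's point: "reduced" is ambiguous until you say which one you mean.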


Reduced in what way?

In any.

Consciousness is a complex thing, and it can fail down to lower levels in myriad ways. You have to align millions* of things to make it work. The biological (in our case) nature of it allows for some slack rather than immediate breakdown, so there are thousands of parameters to play with.

* numbers arbitrary


Personal experience would tell me "both" - less complex thoughts with less intensity and with worse SNR.


And these aren't even generally low-functioning or undesirable! The much-sought-after 'state of flow' is defined in part by a lack of self-aware consciousness.


The article’s real contribution is in highlighting evidence of complex behavior in living systems that often get excluded from definitions of "intelligence". In doing so, it invites deeper philosophical reflection, even if it doesn’t mount that reflection itself.


> We do attribute great value to organisms that have this image of self.

My impression is we attribute great value to organisms that can effectively push back against us.

respect based on force.

Not saying I want things to be that way.


Does that mean LLMs can already be considered conscious at some level since they are able to reason and self reflect?


tbf, many materialists dislike the "degrees of consciousness" idea because a theory that posits "consciousness is on a spectrum" is one that starts to resemble panpsychism, which they consider magical woo.


This. It nicely encapsulates why AI aficionados use words like "hallucinate", which become secret clues to belief around the G part of AGI. If it's just a coding mistake, how can the machine be "alive"? But if I can re-purpose "hallucinate" as a term of art, I can also make you, dear reader, imbue the AI with more and more traits of meaning that point to "it's alive".

It's language, Jim, but more as Chomsky said. Or maybe Chimpsky.

I 100% agree with your rant. This time fly likes your arrow.


The correct term isn't "hallucinate", it's "bullshit". I mean that in the casual sense of "a bullshitter" - every LLM is a moderately knowledgeable bullshitter that never stops talking (in a literal sense: even the end of a response is just a magic string from the LLM that cues the containing system to stop it; if not stopped like that, it would just keep going after the "end"). The remarkable thing is that we ever get correct responses out of them.


You could probably s/LLM/human/ in your comment. Essentially all intelligent life is a pachinko machine that takes a bunch of sensory inputs, bounces electricity around a number of neurons, and eventually lands them as actions, which further affect sensory inputs. In between there may be thoughts, like 'I've no more to say', or 'Shut your mouth before they figure you out'. The question is, how is it that humans are not a deterministic computer? And if the answer is that actually they are, then what differs between LLMs and actual intelligence?


> Essentially all intelligent life is a pachinko machine that takes a bunch of sensory inputs, bounces electricity around a number of neurons, and eventually lands them as actions, which further affect sensory inputs.

This metaphor of the pachinko machine (or Plinko game) is exactly how I explain LLMs/ML to laypersons. The process of training is the act of discovering through trial and error the right settings for each peg on the board, in order to consistently get the ball to land in the right spot-ish.
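For the curious, that trial-and-error version of the metaphor fits in a few lines. A toy sketch of my own (a hill-climbing caricature of training, not how real gradient-based training works):

```python
import random

random.seed(0)  # deterministic run for illustration

def landing_spot(pegs, drop_point):
    """Where the ball ends up: the drop point plus every peg's deflection."""
    return drop_point + sum(pegs)

def train(pegs, drop_point, target, steps=2000, nudge=0.01):
    """Trial and error: randomly nudge one peg at a time and keep
    the change only if the ball lands closer to the target."""
    for _ in range(steps):
        i = random.randrange(len(pegs))
        before = abs(landing_spot(pegs, drop_point) - target)
        delta = random.choice([-nudge, nudge])
        pegs[i] += delta
        after = abs(landing_spot(pegs, drop_point) - target)
        if after > before:  # worse than before: undo the nudge
            pegs[i] -= delta
    return pegs

pegs = [0.0] * 10
train(pegs, drop_point=0.0, target=1.0)
print(round(landing_spot(pegs, 0.0), 2))  # close to 1.0
```

Real training replaces the blind nudge-and-check with gradients that say which way to move every peg at once, but the "discover peg settings so the ball lands in the right spot-ish" framing is the same.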


There’s something meta about your comment and my reaction. Metaphor. The pachinko metaphor seems so apt it made me pause and probably internalize it in some way. It’s now added to my brain’s dataset specifically about LLMs. It’s an interesting moment to be hyper aware of in the context that you’re also describing (definition of intelligence). Far out.


It's survival in reality. Bullshit doesn't survive (at least at the lower levels of existence; corporate and cultural BS easily does), and that's why people are so angry at it. We hate absurdity because acting on absurd results yields failures and wastes time or resources that were important to staying fed and warm.

People can also waste your time or resources (first-line support, women's shopping, etc.), and the reaction is the same.

I don't know why there's still no LLM with a constant reality feedback loop. Maybe there's a set of technical issues that prevents it. But until that happens, pretrained AI will bullshit itself and everyone, because there's nothing that could hit it on the head.


Well, in some ways it does: some insect species use various strategies to fool other insect species - for example, a type of caterpillar that fools ant colonies in order to live at their expense.



> This time fly likes your arrow.

And fruit flies like bananas. :)


Yes, it's such a waffle. Instead of the unnecessary title "A Radical New Proposal For How Mind Emerges From Matter", a more appropriate one would be "On plant intelligence (and possible consciousness)", given that the entirety of the article is devoted to plant intelligence. There is nothing radical here, nor is it very deeply related to the mind/matter problem. If an author can't get something that simple correct, they don't deserve our time. A shame one has to get paragraphs deep into the article to find out it is a spiel about plants, not about mind.


Here lies the promised land: the possibility of a precise and concise nomenklatura that assigns each thing a unique name, perfectly matching its unique position in the world, derived from the complete determination of the laws governing what it is and how it interacts with others. The laws of what is shall dictate how things ought to be named. What a motivating carrot—let’s keep following these prescriptions, for surely, in the end, the harmony of their totality will prove they were objective descriptions all along. Above all, let’s not trust our own linguistic ability to distinguish between the subtle nuances hidden within the same word, or at least, let’s distrust the presence of this ability in our fellow speakers. That should be enough to justify our intervention in the name of universality itself.

Imagine this: language is an innate ability that all speakers have mastered, yet none are experts in—unless they are also linguists. And what, according to experts, is the source of such mastery? A rigid set of rules capturing the state of a language (langue) at a given time, in a specific geographical area, social class, etc., from which all valid sentences (syntactically and beyond) can supposedly be derived. Yet this framework never truly explains—or at best relegates to the background—our ability to understand (or recognize that we have not understood) and to correct (or request clarification) when ill-formed sentences are thrown at us like wrenches into a machine. Parsers work this way: they reject errors, bringing communication to an end. They do not ask for clarification, they do not correct their interlocutors, let alone engage in arguments about usage, which, under the effect of rational justification, hardens into "rules."

Giving in to the temptation of an objective description of language as an external reality—especially when aided by formal languages—makes us lose sight of this fundamental origin. In the end, we construct yet another norm, one that obscures our ability to account for normativity itself, starting with our own.

Perhaps this initial concealment is its very origin.


I'm a linguistics layman, but can't you make an even stronger claim about human language? Apparently certain constructs in Swiss German (cross-serial dependencies) are not even context-free.


From another viewpoint, all human language is only at the finite-state level - a finite state automaton can recognise a language from any level in the Chomsky hierarchy provided you constrain all sentences to a finite maximum length, which of course you can - no human utterance will ever be longer than a googolplex symbols, since there aren’t enough atoms in the observable universe to encode such an utterance

Really the way people use the Chomsky hierarchy in practice (both in linguistics and computer science) is “let’s agree to ignore the finitude of language” - a move borrowed from classical mathematics
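Concretely: cap the sentence length and even a^n b^n - the textbook non-regular language - collapses into a finite list of strings, recognizable by a plain regex alternation with no counting. A sketch of my own, assuming a cap of n ≤ 3:

```python
import re

# a^n b^n is context-free in general, but capped at n <= 3 it is just
# a finite set of four strings, and any finite language is regular.
capped_anbn = re.compile(r"^(|ab|aabb|aaabbb)$")

print(bool(capped_anbn.match("aabb")))    # True
print(bool(capped_anbn.match("aaabbb")))  # True
print(bool(capped_anbn.match("aabbb")))   # False
```

The alternation grows with the cap, which is the catch the parent alludes to: the finite-state description is possible but explodes in size, so everyone agrees to idealize away the bound.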


This is why the classical notion of competence vs. performance in linguistics is important. We describe programming languages as Turing-complete even though every computer is finite in practice, because in principle a program could run on a computer with more memory. Likewise, language seems to be bounded by language-external facts about memory, not intrinsic facts about how language is processed.


It's philosophy. It doesn't concern itself with facts or knowledge.


So, I think you are wrong about this: the article is discussing a change in perspective that would largely make the classic debates in philosophy of mind irrelevant, in the same way that heliocentrism made classic debates within the geocentric paradigm (about epicycles and such) irrelevant. Highly worthwhile, if insufficiently in-depth.


It seems like this might be missing, or just talking past, the parent's point. Sure, new paradigms can make old lines of research irrelevant. But otoh, something like idealism vs materialism vs dualism can't just be dismissed, because the categories are exhaustive! New paradigms might shed light on the question, but they can't cancel it out. So, to the parent's point, some stuff is fundamental, and if you're not acquainted with the fundamentals it's hard to discuss the deep issues coherently. It's possible, I guess, that a plant biologist or a journalist or an ML engineer is going to crack the problem wide open with new lines of inquiry while completely ignorant of the historical debates in philosophy of mind, but in that case it will still probably take someone else to explain what/how that actually happened.


I agree in spirit, that it's possible for a field to be so lost that a new paradigm fundamentally obviates it and frees you from having to recapitulate the entire thicket. But still, a minimal test for whether you've actually obviated them would be whether you can double back and show how the new paradigm resolves/makes them look naive and confused. And so I'd expect an author to do that for at least one such school of thought as part of their exposition.


I bailed 10 words in and came to the comments to see if the article was worth reading. Thanks for confirming it's a skip.


So many adjectives... and "radical" in the title is a doozy considering this article is essentially a stoner summary of LeDoux's "Deep History of Ourselves" with the science replaced with thesaurus suggestions.


I held out a bit longer, and then skimmed, but started to become offended by the whole thing, though I couldn't be bothered to be that annoyed by so many different things. That said, I am very much into being outside in the company of many animals, working with them, and just being in wild environments. But language fails when it comes to describing how species interact, and in the end we can't describe our own mental processes in a clear way, intelligible by all. There is yet to be a genius of the mind, where the definition of genius is: an idea that, once explained, makes everyone else go "of course". My main issue is that these sorts of philosophical projects reek of money and hidden agendas, and shade very quickly into policy decisions and quasi-religious bureaucratic powers.


The gp is wrong. They are just being supercilious with a word salad of their own which is best ignored.

The article is pretty good at making you rethink all your preexisting concepts of mind/intelligence based on what we know from biology today. It is not an article on various theories of mind, but on how scientific research (conveyed through pointers from various researchers) is advancing rapidly on so many fronts that we are forced to confront our most fundamental beliefs.

Absolutely worth reading at least a couple of times.


Yes.

The article is really about conceptual framing — how clinging to outdated or vague definitions prevents progress in understanding biological and cognitive processes.

People keep forcing everything into the vague, overloaded concept of "intelligence" instead of just using the right terms for the phenomena they're studying. If we simply describe behaviors in terms of adaptation, computation, or decision-making, the whole debate evaporates.

https://chatgpt.com/share/67c07325-3140-8007-8177-c56a89b257...


Is there a philosopher or philosophical school that identifies intelligence as a being's capacity to deploy a capability towards some end with intention? If so, what is this called? Or who is associated with it?

Edit: I'd expect this thinker/school also to argue that the being needs to be able to experience its intention as an intention (as opposed to a dumb, inarticulate urge); in other words, intelligence would require an agent to be aware of itself as an intelligent agent.

Edit 2: I strongly recommend Peter Watts' Blindsight to anybody who's on the market for sci fi that deals with these issues.


This is _not_ about defining intelligence - it's about positing intentionality as a tool for explaining the behavior of a thing.

https://en.wikipedia.org/wiki/Intentional_stance


Agree. The last couple of years of AI have been a wellspring of people/articles re-inventing philosophy, as if all these subjects hadn't been debated and studied for a few hundred years. At least acknowledge them, even if we aren't going to build on what has come before.


Wait. If words fail to capture the truth, why do we keep making more words about it?


[flagged]


Your position is self-refuting: you are dismissing philosophy, yet simultaneously making philosophical claims in doing so. Claiming that all knowledge is empirical is itself a philosophical claim (empiricism).


This is a philosophical argument, therefore dismissible by science.

Unless you can rephrase your argument as something testable, it's philosophy and thereby not relevant.


> This is a philosophical argument, therefore dismissible by science.

The claim I just made - that dismissing philosophy on empirical grounds is self-refuting because it relies on the philosophical position of empiricism - is not “dismissible by science” - there is no experiment or observation capable of proving or disproving that claim.

Also, the assumption you seem to be making - that all genuine knowledge comes from empirical science - can be countered with the argument that mathematical theorems (e.g. those of Gödel) are true and can be known to be true, but they are not known or knowable by means of empirical science.

> Unless you can rephrase your argument as something testable, it's philosophy and thereby not relevant.

What you just said is not testable, hence by its own terms is philosophy and thereby not relevant - it condemns itself as irrelevant


You're missing the point - your claim of "hypothesis has to be testable otherwise can be dismissed" is itself philosophical (philosophy of science). You're claiming that your claim can be dismissed.


There's no contradiction. Philosophy is something everybody does after a beer. No point in pretending it's a relevant profession; it hasn't been one for hundreds of years already.


Philosophy can be safely ignored until you start making philosophical mistakes.


There's no such thing as a philosophical mistake, because there's no correct philosophy.


There is such a thing as a philosophical mistake – almost everyone agrees that logical positivism was a failure, despite the fact that there was a period in the 1950s when it was all the rage in philosophy departments in the English-speaking world.

There are still a few philosophers who will try to raise logical positivism from the dead – but even they'll all acknowledge that in its classic formulation it doesn't work, so any attempt to do so will require significant revisions.

Philosophers may never agree on who is right, but sometimes they can reach a consensus on who is wrong.


A lot of philosophy is testable, and has been the inspiration for scientific enquiry. Philosophy contains logic as a major area of study, and the results of that work are core tenets of mathematics, which in turn enables rigorous science.

Science itself is the product of philosophical enquiry.


Science is the product of philosophical inquiry if you ask philosophers.

Just like the Internet is the work of Al Gore, if you ask Al Gore.


The difference is that what we now call “science” actually did start out as a branch of philosophy, and only gradually became separated from it; and there are figures who made substantial contributions to both disciplines (e.g. Henri Poincaré, who made significant contributions to both mathematical physics and philosophy of science)

Al Gore’s relationship to the Internet can’t be compared


> The difference is that what we now call “science” actually did start out as a branch of philosophy, and only gradually became separated from it;

It was hundreds of years ago, when the philosophers of nature - i.e. the people who thought about useful stuff - left to become scientists. What remains in philosophy since then is all the useless stuff.


There are practising scientists who take philosophy of science much more seriously than you do.

To give a random example, I'm quite a fan of Lynn Waterhouse's Rethinking Autism: Variation and Complexity (Elsevier Academic Press, 2013) which seeks to provide a critical evaluation of the strength of the scientific evidence behind the theory of autism, in its various incarnations (from Kanner's early infantile autism and Asperger's autistic psychopathy through to DSM-5 ASD). And Waterhouse actually draws on philosophy of science in the process, as this quote demonstrates (p. 24):

> In fact, the orphaned and disconfirmed theories that have failed to explain variation in autism are, in large part, examples of theory underdetermination. Science philosopher Peter Lipton (2005) argued, “Theories go far beyond the data that support them; indeed, the theories would be of little interest if this were not so. However, this means a scientific theory is always ‘underdetermined’ by the available data concerning the phenomena” (p. 1261). The theory of underdetermination, called the Duhem–Quine principle, is a formal acceptance that theories make claims that data do not fully support. Theory succession, as from Hippocrates’ theory of pangenesis to Darwin’s gemmules, to DNA and transcriptomes, moves from one underdetermined theory to the next. However, Stanford (2001) pointed out that not all theory underdetermination is acceptable. It can be a Devil’s bargain: a serious threat to scientific discovery.

And another quote from the same book (p. 27):

> Equally problematic, autism brain research findings have not uncovered the underlying complexity of the phenomena. Bogen and Woodward (1988) argued that what can be measured is “rarely the result of a single phenomenon operating alone, but instead typically reflect the interaction of many different phenomena ….Nature is so complicated that … it is hard to imagine an instrument which could register any phenomenon of interest in isolation” (pp. 351–352).

And the citations referenced in those quotes:

> Lipton, P. (2005). The Medawar Lecture 2004: The truth about science. Philosophical Transactions of the Royal Society of London Series B, 360, 1259–1269

> Stanford, P.K. (2001). Refusing the devil’s bargain: What kind of underdetermination should we take seriously? Philosophy of Science, 68, 3. Supplement: Proceedings of the 2000 Biennial Meeting of the Philosophy of Science Association. Part I: Contributed Papers, pp. S1–S12

> Bogen, J., & Woodward, J. (1988). Saving the phenomena. Philosophical Review, 97, 303–352.

The book only cites three philosophy of science papers, compared to dozens and dozens of papers from neuroscience, genetics, psychology, etc – as you'd expect for a scholarly book focused on the science of autism. Still, the fact it cites philosophy papers at all is an example of how many practising scientists are more positive about philosophy of science than you yourself are.


> I don't care what your ideas are. If they aren't testable, they're your opinion.

Most of life and the world involves beliefs that aren't testable. The lack of testability doesn't mean they're arbitrary; testing is just one tool (the most effective one, I believe). But if we restricted ourselves to that, with no other form of qualification or judgment, we couldn't achieve anything.


Philosophy is precisely the domain of things that cannot be objectively tested because they are grounded in experience. You cannot prove that you are conscious (especially in a world of LLMs), but you know it to be true. Should I assume you are an automaton simply because I can't prove you are conscious?


Consciousness is not philosophy. In the past, it was. People didn't have the tools or theoretic background to even approach it in an empiric way.

However, we now have at least the basics to hypothesize and perform experiments. So it's no longer philosophy; it's in the realm of working out what the mechanisms of consciousness actually are.


We can find "neural correlates" of consciousness, but nothing we do right now can prove that experience/awareness is taking place somewhere other than where oneself is experiencing/being aware.


So you’re an empiricist.

Is science “reality” or is it just a series of models/conceptual frameworks?


Keep walking into a glass door a few dozen times. Is the door a reality even though you can't see it, or just a conceptual framework you keep bouncing off of?

At some point you have to stop theorizing in your head about why you can't walk through the window, and start testing and figuring out what keeps breaking your nose.


You're describing the scientific method. Philosophy is not for "understanding the universe". Surely it cannot be blamed for not fulfilling a goal you only imagine it to have.


It can be blamed for being pretentious useless ramblings.


Nah, that is strictly the domain of political punditry.


> Philosophy has little to do with reality.

Translation: I’ve never read Aristotle.


You just reinvented Wittgenstein a priori.


That itself is a philosophical position


Well said. The gp's comment was mere snobbish verbiage, since the article is not about various theories of mind, but about biological research that is making us rethink our most fundamental assumptions. It is a pretty good one.


"Propose hypothesis"?

Where do you get the hypothesis?

That is where philosophy comes in: it is pre-testing, thinking of the things to test.

(Also, note: by your definition, string theory is an 'opinion', not science.)


Sir, where do hypotheses come from?



