>Wetness = the sensory brain state generated when thermo and tactile receptors fire as the hand touches water. Connect your simulator to these nerves and you've got actual wetness.
Only this "wetness" wouldn't soak an actual napkin.
Nothing linguistic about it.
Physical objects have physical properties: you can simulate those, but then you have to simulate the whole surroundings (or, in the extreme, the universe) to get the effects of those properties on other items.
>Only this "wetness" wouldn't soak an actual napkin.
No "wetness" will ever do it. That would require real water, right? What causes the soaking are electrical forces, there's no such thing as "wetness" in nature. I believe it's just a word that humans invented for the properties of water.
>you have to simulate the whole surroundings (or, in the extreme, the universe) to get the effects of those properties on other items.
Agreed, so what's the point? The important thing is that the agent can communicate its internal state to us. That's what humans do with each other (the other-minds problem), and we typically assume that there is "understanding".
>No "wetness" will ever do it. That would require real water, right?
You don't say! (as the meme goes).
I mean, of course, I'm using the word "wetness" to refer to the physical implications of the presence of water.
So, to return to the actual thing under discussion: we can simulate stuff from the physical world, and the simulation might capture some of the same information and calculations (e.g., down to the individual positions of particles, exchanges of energy, etc.), but it doesn't have the same properties unless you simulate the whole of the environment.
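To make that concrete (a toy sketch of my own, not anything from upthread, with all names and parameters made up for illustration): even a simulation that explicitly tracks particle positions and energy exchanges is just arithmetic on arrays. A minimal Lennard-Jones sketch in Python:

```python
import numpy as np

# Toy "down to the particles" simulation: 8 Lennard-Jones atoms on a small
# lattice. Positions, velocities, and energy exchanges are all just numbers
# in arrays; nothing outside the program's memory gets wet.
eps, sigma, dt = 1.0, 1.0, 0.001
grid = np.array([0.0, 1.5])  # lattice spacing of 1.5 * sigma
pos = np.array([[x, y, z] for x in grid for y in grid for z in grid])
vel = np.zeros_like(pos)

def forces(pos):
    """Pairwise Lennard-Jones forces: F = 24*eps*(2*(s/d)^12 - (s/d)^6) * r/d^2."""
    f = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = pos[i] - pos[j]
            d = np.linalg.norm(r)
            mag = 24 * eps * (2 * (sigma / d) ** 12 - (sigma / d) ** 6) / d ** 2
            f[i] += mag * r
            f[j] -= mag * r
    return f

# Velocity-Verlet integration: the "exchanges of energy" are array updates
f = forces(pos)
for _ in range(1000):
    pos += vel * dt + 0.5 * f * dt ** 2
    f_new = forces(pos)
    vel += 0.5 * (f + f_new) * dt
    f = f_new

print(f"kinetic energy after 1000 steps: {0.5 * np.sum(vel**2):.6f}")
```

Of course, anything that looked like water would take vastly more particles and quantum-level chemistry; the sketch only illustrates that "capturing the information" and "having the property" are different things.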
>Agreed, what's the point? The important thing is that the agent is able to communicate its internal state with us.
What I'm implying is that you might not be able to get an intelligent agent to even have an "internal state" advanced enough unless you mimic and simulate the whole thing. Not just "this neuron fires now", but also things like the neuron's materials, physical characteristics, and responses. Those could be essential to things like how accurately (or not) information such as memories and thoughts is stored, how it is recalled, the timing of neuron firings, etc.
>What I'm implying is that you might not be able to get an intelligent agent to even have an "internal state" advanced enough unless you mimic and simulate the whole thing
You might not, but current research is more hopeful. The current consensus is that you've simulated a neuron well enough once you get down to the level of chemical reaction kinetics, and that description appears to be accurate enough to recreate the electrical properties of neurons. So far there are no neuronal phenomena that can't be explained within this framework, so the consensus is more like "we might" than "we might not".
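For a sense of what "down to the level of reaction kinetics" looks like in practice (my own sketch, not something anyone upthread posted): the textbook Hodgkin-Huxley model describes ion-channel gating with rate equations and reproduces a neuron's spiking from nothing but that chemistry. A minimal forward-Euler version in Python, using the standard squid-axon constants:

```python
import numpy as np

# Classic Hodgkin-Huxley constants (squid giant axon; mV, mS/cm^2, uF/cm^2)
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Gating-variable rate functions: the channels' chemical reaction kinetics
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0           # time step and duration (ms)
V = -65.0                    # resting membrane potential (mV)
m, h, n = 0.05, 0.6, 0.32    # gating variables near their resting values
I_ext = 10.0                 # constant injected current (uA/cm^2)

trace = []
for _ in range(int(T / dt)):
    # Ionic currents from channel conductances and driving forces
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)
    # Forward-Euler update of membrane voltage and gate kinetics
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    trace.append(V)

print(f"peak membrane potential: {max(trace):.1f} mV")  # spikes near +40 mV
```

The gating variables m, h, and n are pure kinetics, yet the voltage trace they produce matches what Hodgkin and Huxley recorded from real axons, which is the sense in which the kinetic level seems to be "enough".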