Executive summary: The argument of David Weinberger at the start of a lengthy discussion on matter and consciousness (Joho the Blog: Why matter matters) is completely false. In fact I can't believe I didn't figure this out immediately, but sometimes you need to rephrase in your own terms to properly understand an argument.
For details, read on.
I have gone back and forth in my estimation of Searle's Chinese Room argument and Weinberger's concise version of it. I have now come full circle and returned to my original position: the argument is completely false and holds no merit.
This is best seen by writing the argument down in concise notation.
The position Weinberger attacks is that if we can establish a 1-1 mapping (at a sufficient level of detail) from a system R' to a conscious system R (in short, if R' can be said to be a simulation of R), then R' too is conscious. In mathematical notation:
If there exists a mapping c (an interpretation as a simulation) from R' to R, and R is conscious, then R' is conscious.
We call this the 'strong AI position'.
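Spelled out, in a rough logical shorthand of my own (I write conscious(X) for 'X is conscious' and sim(c, R', R) for 'c interprets R' as a simulation of R'):

\[
\bigl(\exists\, c : \mathrm{sim}(c, R', R)\bigr) \;\wedge\; \mathrm{conscious}(R) \;\Longrightarrow\; \mathrm{conscious}(R')
\]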
Weinberger counters that this cannot be the case, since we can construct another mapping n from R' to R that is not viewable as a simulation.
Weinberger concludes that we have proved at the same time that R' is conscious and that it is non-conscious, and thus that the strong AI position is meaningless. His claim is obviously false. Weinberger is confusing a contrary statement with the contradictory, i.e. the logical negation. The existence of a mapping that is not interpretable as a simulation is not the negation of the original claim; that would be the non-existence of any simulating mapping.
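In the same shorthand as above, what Weinberger exhibits and what he would actually need are two different statements:

\[
\exists\, n : \neg\,\mathrm{sim}(n, R', R)
\qquad\text{versus}\qquad
\neg\,\exists\, c : \mathrm{sim}(c, R', R)
\]

Only the right-hand statement negates the premise of the strong AI claim; the left-hand one is perfectly compatible with it.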
Weinberger further says that consciousness of a physical object cannot be a matter of our interpretation of that object as conscious. This is obviously true, but fortunately the strong AI position is completely consistent with it, as should be evident from the discussion above. The existence of the mapping from R' to R does not change R' in any way, and nobody is saying that it does. The difference of opinion is not on this question, but rather on whether consciousness is an observable quality of R'. The strong AI position is that it is: the simulating mapping constitutes observation of consciousness. Weinberger's position on the matter is not apparent from his argument. From his comments in the discussion and various other posts it would appear that he does admit that consciousness could be observable.
Posted by Claus at June 29, 2003 08:50 PM

I am not saying that because the M&M/Chinese Room mapping fails, all 1:1 mappings must fail to be conscious. I'm saying that a symbolic mapping can't be conscious. (And, fwiw, I do think that an atomic mapping, resulting in living flesh, would be conscious.)
The analogies to Chinese Rooms and M&M bins aren't meant to show that sometimes 1:1 mappings fail to yield consciousness so it must fail in all cases. The M&M mapping is the same as the silicon mapping in all relevant respects. It isn't a different case at all. (I.e., I was speaking loosely when I called them analogies in the first line of this paragraph.) Since the Kurzweilian AI position says that consciousness is a pattern that is independent of the matter in which it is instantiated, the Chinese Room and M&M examples simply instantiate the pattern in a medium in which the symbolic nature of the relationship is more apparent than it is in a computer.
I may be making some other obvious mistake, but I don't think I'm making the one you ascribe to me.
Posted by: David Weinberger on June 29, 2003 11:49 PM

I think your characterization of the Kurzweilian position is wrong. I understand Kurzweil to be saying that the existence of a particular pattern (defined in Kurzweil's writings as a sufficiently detailed map of an actual brain) implies consciousness. This is exactly a statement that if we can find a mapping from a real object to a conscious object, then we should consider the object we map from conscious.
This is how I think the strong AI position (Kurzweil included) is to be understood.
Inasmuch as we have an objective characterization of the pattern/mapping that we consider sufficiently detailed, it would, according to this position, suffice to establish this particular pattern in the M&M's candy bin, since that defines for us the desired mapping. The pattern is independent of matter to the extent that recognizing the pattern defines for us a mapping to an actual reference consciousness. If you like, you may say that this means we consider consciousness to be a pattern unrelated to the matter it is instantiated in, but obviously this is merely an abuse of notation. The proper phrasing is: 'Any physical system in which we recognize the pattern is conscious'.
I think my post above clearly shows that your argument does not prove this notion of consciousness self-contradictory.
I don't think the open question on the observability of consciousness (i.e. whether we will ever find out what 'a suitable level of detail' is) is really addressed by your argument or the Chinese Room argument, and I think you will have to address that question to argue against the strong AI position. I don't believe an argument from the symbolic/real distinction, or along the lines that our symbolic understanding of a computer is arbitrary, can succeed, for the reasons I have given.
Posted by: Dee on June 30, 2003 12:29 AM

An additional question one might ask (along the lines of your example), when thinking about the observability of consciousness, is whether there can be a useful distinction between two physical systems S1 and S2, from both of which we can establish a mapping to a third phenomenon R, such that the mapping is 'real' in the case of S1 and 'symbolic' in the case of S2. I am undecided on that issue, but leaning towards the position that there is not.
In either case we are constructing (in language) a model of the systems and verifying that this very same model may be established in R.
To give an example as much in my disfavor as I can think of, let's assume S1 and S2 are two calculators, that R is also a calculator, and that S1 and R are both instances of the same type of calculator, whereas S2 is another kind of calculator.
As an example, assume S1 and R are able to add two numbers. They work by transferring glass pellets (or M&M's!) from the 'first number' jar and the 'second number' jar to the 'results' jar and then counting out the resulting number. This is as physical a version of numbers and addition as I can think of. In contrast, S2 is a digital calculator, representing numbers as strings of bits. What you seem to be saying, applied to this example, is that the bit sequence 1000 isn't 8. From my point of view, what you're doing is adding properties to 8 that you require. A possible property could be that '8' be physically divisible into two copies of '4'. To add this property we would have to devise a representation of the jars as a certain area of memory in our digital calculator. This would establish a more direct mapping, equating 'one bit's worth of charge' with 'one glass pellet'.
You could then require the property 'made of glass'. Obviously that particular property cannot be added to our digital calculator.
I don't see an important difference between this simplistic calculator example and the AI example.
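To make the mapping concrete, here is a toy sketch of the two adders in Python (the class names and the pellet-to-bit correspondence are my own illustration, not something taken from the discussion above):

```python
# Two physically different adders realizing the same pattern.

class PelletCalculator:
    """S1/R: adds by pooling glass pellets into a results jar and counting them."""
    def add(self, a, b):
        first_jar = ['pellet'] * a
        second_jar = ['pellet'] * b
        results_jar = first_jar + second_jar   # transfer both jars' contents
        return len(results_jar)                # count out the result

class BitCalculator:
    """S2: adds by manipulating bit patterns, the way a digital calculator does."""
    def add(self, a, b):
        while b:
            carry = (a & b) << 1   # positions where both bits are 1 carry over
            a = a ^ b              # add the bits without carries
            b = carry
        return a

# The mapping the strong AI position relies on: one pellet in the one system
# corresponds to one unit of the count ('one bit's worth of charge') in the
# other, and corresponding inputs yield corresponding results.
assert PelletCalculator().add(4, 4) == BitCalculator().add(4, 4) == 8
```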
Of course there exist properties that are non-simulatable. I just haven't seen any argument that consciousness is one of them. I can think of three ways in which this could be the case:
It could turn out that consciousness requires certain physical properties that we happen to have. This is Roger Penrose's position; we'll call it 'consciousness as soulstuff'. I don't happen to believe in it, but of course it would rule out simulation.
The second way strong AI could be wrong is that language is just not enough: no finite representation of an actual brain is sufficient for consciousness.
This position is more likely. I still don't believe it, but it is extremely difficult to argue about. The argument against it is that the brain is finite. Possible arguments in favor are numerous, and some of them may be found in pure mathematics, which is full of examples of ideas that cannot be expressed in terms of simple ideas through any finite description (e.g. non-algebraic numbers).
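One standard way to make that last point precise (my gloss, not part of the original argument): descriptions in any fixed language are finite strings over a finite alphabet, and there are only countably many of those, while there are uncountably many real numbers, so almost every real number has no finite description at all:

\[
\#\{\text{finite strings over a finite alphabet}\} = \aleph_0 \;<\; 2^{\aleph_0} = \#\,\mathbb{R}
\]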
There's a third argument that is not about possibility but practicability:
It is possible to find numerical problems that are believed to be fundamentally hard for a digital computer to solve: solution times grow exponentially or worse with problem size.
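To illustrate that kind of scaling with a toy example of my own (not taken from the original discussion): a brute-force search over all subsets of n items visits 2^n candidates, so every added item doubles the work.

```python
from itertools import combinations

def subset_sum_bruteforce(numbers, target):
    """Try every subset; there are 2**len(numbers) of them."""
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return subset
    return None

# 20 numbers -> roughly a million subsets; 40 numbers -> roughly a trillion.
print(subset_sum_bruteforce([3, 9, 8, 4, 5, 7], 15))   # finds (8, 7)
```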
Some of these problems are known to be solved by physical systems almost instantaneously, i.e. in a qualitatively much faster way. If it turns out that strong AI is successful, but that consciousness relies on algorithms that can only be carried out in reasonable time using direct physical simulation, then that would effectively bring us back to the 'soulstuff' position.