It's been a while, but if I recall, Penrose basically attacks the notion of "strong AI". Others, e.g. Scott Aaronson, have ripped his argument to shreds. But not to dismiss Penrose's argument too quickly: Aaronson did devote an entire university course to presenting the counterargument http://www.scottaaronson.com/democritus/. I think there is a case to be made against strong AI along the lines of "just because a posteriori you can model anything digitally does not make it so", but apparently Penrose failed to make it. (And don't get me wrong, I'm not the guy to make the case either.)
I'm always suspicious of phrases such as "ripped his arguments to shreds", especially in such a young and contested field as AI, and doubly so when dealing with the difficult issues of sentience and Strong AI...
I'm not accusing you of anything, but as a general comment, and without necessarily supporting his arguments, the responses to Penrose on this issue often seem to involve overly emotive language.
Normally I regret using inflammatory language, but in this case I think it invokes the fervor I have seen in almost every defense of strong AI I have ever read. Personally I am not a proponent of strong AI. Proponents have, however, built a strong case that tends to put opponents in the position of having to prove a negative.
I don't think it's fair to say that Aaronson's course was devoted to debunking Penrose. It just happens that Penrose is very good at explaining these concepts, so his (wrong) book on consciousness (which contains a lot of physics) was used as the "textbook".
Yes, devote was too strong a word. He could have covered everything in the course without using Penrose as his straw man. The point I meant to make is Aaronson builds a considerable intellectual and theoretical case before he uses it to knock down Penrose.
As another poster mentioned, Aaronson has done a great job of covering this -- way better than I ever could. But the TL;DR is that Penrose believes there's something quasi-magical about thinking that can't be done with Turing machines, and that it therefore needs all kinds of special physics to get going. It's just so patently naive and absurd to a working computer scientist that it makes me suspect his ego/judgment ratio is a bit high.
Is your claim that we understand consciousness and thus we are certain that it can be implemented algorithmically or that it can be implemented algorithmically no matter what or something else entirely?
Pretty much "it can be implemented algorithmically no matter what". There is just no evidence that the physical brain takes advantage of subtle quantum effects to do what it does; it's extremely unlikely that any machine could sustain such effects at normal body temperature.
Intuitively, I don't see any vital need to introduce special types of computation to explain consciousness. It seems very likely to be a phenomenon that emerges from the right kind of information processing, regardless of the physical nature of the computation.