I know our (Kris’s and my) topic is classical AI, but I just want to mention something interesting about connectionism that I came across in an online paper by David Chalmers. I think it will shed light on the seemingly simple but really-hard-to-answer question 'What is a Symbol?' It’s been hard to find material on this topic that corresponds with our lecture content, isn’t too difficult to understand, and is online so we can all read it. Anyway, I think this one’s OK, though I haven’t yet got all the way through it, and the questions I’m raising will probably be answered by the end of the paper… but this is meant to be an interactive way of learning, right? Check it out via the link to the right so we can join the pieces together. (It’s the paper on 'Symbolic/Subsymbolic Computation'. Read the Classical AI post below first, though.)
The distinction between the two schools – classical and connectionist – is not really to be found in their architecture. That is, they don’t differ simply because classical systems run programs written in programming languages that admit a Turing-machine interpretation while connectionist systems work through neuron-like units and synaptic-like firing. Chalmers notes that both kinds of system are at bottom universal: what one can do, the other can also do. He points out that some connectionist models are written in programming languages and run on symbolic computers. They’re still connectionist systems though.
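To see what he means, here’s a tiny sketch I put together myself (not something from the paper – the weights and the sigmoid are just made up for illustration): a little 'connectionist' unit simulated as an ordinary Python program on a symbolic computer.

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # Two input units feeding one output unit through weighted connections.
    weights = [0.8, -0.5]
    bias = 0.1

    def forward(inputs):
        # Activation spreads along the connections, a bit like synaptic firing,
        # yet every step here is executed as ordinary symbol manipulation.
        total = bias + sum(w * x for w, x in zip(weights, inputs))
        return sigmoid(total)

    print(forward([1.0, 0.0]))  # the unit's "response" to one input pattern

The fact that this runs as a program on a conventional computer doesn’t make it any less connectionist – which is exactly why the real distinction can’t be architectural.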
The real difference is not in syntax but in semantics. In classicism, a symbol is both an atomic computational token and a designator. In connectionism, the computational tokens are the nodes and connections, and no single unit carries any meaning in itself. So the computational tokens here lie at a different level from the system’s representational level, which is contained in “distributed patterns of activity over such tokens.” The contrast, then, is that in classical systems the computational level coincides with the representational level, whereas in connectionist systems computation sits at a lower level than semantics. For this reason, Chalmers calls connectionist models subsymbolic, to better contrast them with symbolic (classical) systems. He notes that the term ‘connectionism’ seems to imply neural-like architecture as the distinguishing feature, which is somewhat misleading.

Now the point of all this is: does it put new pressure on Searle’s objection? Now that connectionist systems are defined as separating syntax from semantics, are they as vulnerable as classical systems to the ‘syntax is not enough for semantics’ argument?
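Before anyone tackles that, here’s a toy illustration of the symbolic/subsymbolic contrast I just described (again my own invented example with made-up values, not something from the paper): in the classical case a single token designates a concept, whereas in the connectionist case the 'meaning' lives only in a whole pattern of activity.

    # Classical style: one atomic token is both the computational unit
    # and the designator.
    classical_cat = "CAT"                  # the token itself does the representing

    # Connectionist style: a distributed pattern of activity over several units.
    cat_pattern = [0.9, 0.1, 0.7, 0.3]     # meaning lives in the whole vector
    dog_pattern = [0.8, 0.2, 0.1, 0.9]     # overlapping units, different pattern

    # Looking at one unit on its own tells you next to nothing:
    print(cat_pattern[0], dog_pattern[0])  # 0.9 vs 0.8 -- not meaningful alone

    # Resemblance only shows up at the level of whole patterns (dot product):
    overlap = sum(a * b for a, b in zip(cat_pattern, dog_pattern))
    print(overlap)

So the representational level sits above the level of the individual computational tokens, which is what makes the system subsymbolic rather than symbolic.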
This is going into connectionism a lot, but I feel the more we understand one type of computation, the more we understand the other.