Tuesday, 22 May 2007

The Future Of AI?


This is probably going to be my last post on a topic. It really has nothing to do with our unit material, so DON'T study it for your essay. It's just a fun topic for anyone who's interested in how paranoid people like to get. I thought some of you might like to see some of the actual ethical discussions taking place regarding the potential threat of AI advancing to the point of becoming "self-aware". Actually, to be honest, I just wanted an excuse to use this picture. I'm not sure about the copyright though... might have to take it down later.

Try these sites out if you're interested:
Ethical & Social Implications
Ethical Issues In Advanced AI

Why is AI Dangerous?

Sunday, 20 May 2007

Rival Theories on the Language of Thought


Philosophers and cognitive scientists use the Language of Thought (LOT) hypothesis to claim that the mind is a device that operates according to strict rules of symbol manipulation. However, there are two rival theories concerning LOT.

The first theory claims that the medium of thought is an inborn language that is separate from human spoken language. Fodor calls it ‘mentalese’ and claims that it is the basis of our thoughts and meanings. Mentalese operates below conscious awareness.

The second view is that LOT is not innate and that linguistic thoughts actually occur in, and depend on, the languages that we speak. So, if you are a Japanese speaker, your LOT would be Japanese.

Fodor’s hypothesis is generally segmented into five parts (a toy sketch of the first part follows the list):

Representational Realism: people have explicit representational systems: to have the belief that smoking causes cancer is to have a representational token with the content ‘smoking causes cancer’ in one's belief box.
Linguistic Thought: The representational system that underlies human thought is similar to spoken human languages (semantically and syntactically).
Distinctness: LOT is not the same as any spoken language.
Nativism: LOT is genetically determined and it is possessed by all humans.
Semantic Completeness: LOT is expressively semantically complete. Anything we are able to understand is expressible in this language.
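To make Representational Realism a bit more concrete, here is a minimal sketch in Python of an agent with a "belief box" holding explicit representational tokens. Everything here (the Agent class, the box names) is my own illustrative invention, not anything from Fodor's actual writing:

# Toy sketch of Representational Realism: believing something just is
# having an explicit token of that content sitting in your belief box.
class Agent:
    def __init__(self):
        self.belief_box = set()   # explicit tokens the agent believes
        self.desire_box = set()   # explicit tokens the agent desires

    def believes(self, content):
        # To believe P is simply to have a token of P in the belief box.
        return content in self.belief_box

agent = Agent()
agent.belief_box.add("smoking causes cancer")   # tokening a belief
print(agent.believes("smoking causes cancer"))  # True

The point of the toy is only that beliefs are explicit, discrete tokens that the system stores and manipulates, rather than vague dispositions.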

However, the fact is that we cannot locate mentalese, or any brain system responsible for its implementation. But then, we don't have to locate our natural language either; it's with us all the time. Some people even talk out loud to themselves, especially when attempting to carry out complex tasks. In my opinion this suggests that thinking in natural language is taking place; surely such individuals aren't using it to communicate, as no-one else is around.

Introspection also points to natural language playing a major role in our thinking. When we focus inwards, we notice that our thoughts are being formed in sentences of natural language; this monologue is our inner speech. Of course, there is always the possibility that we think in mentalese and it is then translated into natural language.

The dispute between these competing theories has not yet been resolved, although a recent study at Harvard University, which refers to a ‘language-independent system for thinking about objects’, seems to support Fodor’s theory.
http://www.hno.harvard.edu/gazette/2004/07.22/21-think.html

Views on AI

There are two major positions regarding current and future artificial intelligence:

Weak AI Position
This hypothesis states that a machine program is at most capable of simulating real human behavior and consciousness. Machines will not be able to think, though they will be able to act intelligently, because there are things that computers cannot do, no matter how much effort we put into programming them. Given the complexity of thinking, writing such programs is impractical and bound to fail.
Supporters of weak AI include Roger Penrose and John Searle, who used his Chinese Room thought experiment to argue for it.

Strong AI Position
Strong AI claims that a correctly written program running on a machine will not only act intelligently but will actually think and have a conscious mind. There will be no real difference between a program emulating the workings of the brain and the consciousness of a human being. Such a program will be able to truly reason and become self-aware; it will be a mind. Although strong AI's dramatic reduction of consciousness to an algorithm is difficult for many to accept, it does have its supporters, such as Douglas Hofstadter and Daniel Dennett.

Although current technology does not yet allow us to write such a program, a lot of work is being done in this field. As an example, consider a recent report by the BBC on an experiment simulating a mouse brain (well, actually half of one):
http://news.bbc.co.uk/1/hi/technology/6600965.stm
Research like this will eventually lead to an artificially intelligent machine that is conscious and is a mind.
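The BBC article doesn't give the model details, but simulations like this typically run huge numbers of simple spiking cells. Purely as my own illustration (not the researchers' actual method), here is a toy leaky integrate-and-fire neuron, a common textbook model, in Python:

# Toy leaky integrate-and-fire neuron. All parameter values here are
# illustrative placeholders, not from the reported experiment.
v, v_rest, v_thresh = -70.0, -70.0, -55.0   # membrane potentials (mV)
tau, dt = 10.0, 1.0                          # time constant and step (ms)

for t in range(100):
    current = 20.0 if 20 <= t < 80 else 0.0  # injected input while "stimulated"
    v += dt / tau * (v_rest - v + current)   # leak toward rest, plus input
    if v >= v_thresh:                        # threshold crossed: the cell spikes
        print(f"spike at t={t} ms")
        v = v_rest                           # reset after spiking

A real brain simulation wires millions of units like this together; the philosophical question is whether scaling this up ever amounts to a mind.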

Tuesday, 15 May 2007

What's in a Symbol?

I know our topic (Kris's and mine) is about classical AI, but I just want to mention something of interest regarding connectionism that I came across in an online paper by David Chalmers. I think it will shed light on the seemingly simple but really-hard-to-answer question 'What is a symbol?' It's been hard to find stuff on this topic that corresponds with our lecture material, isn't too difficult to understand, and is also online so we can all see it. Anyway, I think this one's OK, but I've yet to get all the way through it, and the questions I'm raising will probably be answered by the end of the paper... but this is meant to be an interactive way of learning, right? Check it out on the link to the right so we can join the pieces together. (The paper on 'Symbolic/Subsymbolic Computation'. Check out the Classical AI post below first though.)


The distinction between the two schools, classical and connectionist, is not really to be found in their architecture. That is, they don't differ simply because classical systems run programs written in programming languages with a Turing-machine interpretation while connectionist systems work through neuron-like units firing across synapse-like connections. Chalmers notes that both systems are at bottom universal: what one can do, the other can also do. He highlights that some connectionist models are written in programming languages and run on symbolic computers; they're still connectionist systems though.

The real difference is not in syntax but semantics. In classicism, a symbol is both an atomic computational token and a designator. In connectionism, the computational tokens are the nodes and connections, and no single unit carries any meaning in itself. The computational tokens here lie at a different level from the system's representational level, which is contained in "distributed patterns of activity over such tokens." So the contrast is that in classical systems the computational level coincides with the representational level, whereas in connectionist systems computation sits at a lower level than semantics. For this reason, Chalmers calls connectionist models subsymbolic, to better contrast them with symbolic (classical) systems. He notes that the term 'connectionism' seems to imply neural-like architecture as the distinguishing feature, which is somewhat misleading. Now, the point of all this is: does it put new pressure on Searle's objection? If connectionist systems are defined by separating syntax from semantics, are they as vulnerable as classical systems to the 'syntax is not enough for semantics' argument? (A toy sketch of the contrast follows.)
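Here's how I picture the contrast, as a minimal Python sketch. The particular symbols, activation values, and the dot-product similarity measure are all my own illustrative choices, not anything from Chalmers' paper:

# Classical: the computational token IS the representational token.
# 'dog' is an atomic symbol that both gets manipulated and designates.
knowledge = {("dog", "is_a"): "animal"}
print(knowledge[("dog", "is_a")])   # rule-following over meaningful atoms

# Connectionist: the computational tokens are units and activations;
# the concept lives in a distributed pattern, and no single unit
# means 'dog' on its own.
dog = [0.9, 0.1, 0.8, 0.3]          # activation pattern over four units
cat = [0.8, 0.2, 0.7, 0.9]          # overlapping pattern for a related concept
similarity = sum(a * b for a, b in zip(dog, cat))
print(similarity)                   # semantics shows up only at the pattern level

In the first case, computation and representation coincide at the symbol; in the second, computation happens on the individual numbers while representation lives a level up, in whole patterns.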

This is going into connectionism a lot, but I feel the more we understand one type of computation the more we understand the other.

What is LOT?

Same idea as Classical AI post. So what is the Language of Thought Hypothesis?

There is a mental language that we all tap into to produce thoughts. The lowest level of processing is at an atomic or symbolic level (probably lower-level stuff than words and letters). The number of little symbols we have access to is finite, but the symbols are able to combine according to certain mentalese rules (syntax), forming more complex symbols out of the 'primitives'.

It is the relation one has with a particular complex symbol or proposition that determines its meaning. For instance, the proposition 'It will rain tomorrow' alone has syntax and semantics (content) but no meaning in the world. It is only when you realise a token of the symbol/proposition (represent the content in your brain) in a certain way that it has meaning directed back in, or towards, the world in which it exists, e.g. 'I believe it will rain tomorrow.' This is a propositional attitude, which has intentionality or 'aboutness'.

Further, not only do these propositional attitudes combine to form more complex ones, e.g. PA 'P' combines with PA 'Q' to form PA 'P & Q', but LOT maintains that the mental states of complex PAs have constituent structure. Of course, the final feature of LOT (a feature that any theory of the mental should have) is that mental states have causal properties: a certain PA will be the cause of a particular physical event, or another mental state, or both.

This all sounds quite abstract, but I think the credibility of LOT comes from empirical evidence. The main argument is from the productivity and systematicity of thought, which LOT explains nicely. Productivity is the fact that endless sentences are capable of being formulated by thought from only a finite set of primitives; this is accounted for by treating thought as a language. Systematicity is the feature that fills in any gaps in your ability to think thoughts related to other thoughts. That is, systematicity ensures that to be able to think 'A likes B' is to be able to think 'B likes A', regardless of truth value. This corresponds with empirical evidence and again indicates that thought is a language. Thus, mental states generally have constituent, composite structure. (A toy sketch of both features follows.)
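To see why treating thought as a language delivers both features, here's a tiny Python sketch. The names and the two combination rules are purely my own placeholders for whatever the real mentalese primitives and syntax might be:

def likes(x, y):                # one mentalese combination rule (syntax)
    return f"({x} likes {y})"

def conj(p, q):                 # another rule: conjunction of complex symbols
    return f"({p} & {q})"

# Systematicity: the very rule that yields 'Alice likes Bob' automatically
# yields 'Bob likes Alice', whatever the truth values turn out to be.
p = likes("Alice", "Bob")
q = likes("Bob", "Alice")

# Productivity: rules apply to their own outputs, so a finite stock of
# primitives generates endlessly many complex symbols.
r = conj(p, conj(q, p))
print(r)   # ((Alice likes Bob) & ((Bob likes Alice) & (Alice likes Bob)))

Because complex symbols are literally built out of their constituents, having the parts and the rules guarantees you can think all the recombinations, which is exactly what productivity and systematicity demand.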

What does Classical AI mean?

Let’s start by describing what classical artificial intelligence is. Feel free to add/comment on whatever is missing/wrong. We can build up a complete picture as we go.

This type of AI is dedicated to the position that the appropriate level at which to model intelligence/cognition is the level of the symbol. In fact, in the version that Searle calls 'Strong AI', anything that manipulates symbols in accordance with appropriate formal rules gives rise to a mind, and anything that is a mind manipulates symbols in such a fashion. This Physical Symbol System Hypothesis implies that we (our brains) are digital computers. Now, the symbol is defined as having both a syntax and a semantics. The syntax comes from defining the way a particular symbol is processed and interacts with other symbols (its rules), and the semantics is derived from the content that the symbol contains (what 'thing' the symbol stands for in the world, not how the thinker is related to the symbol's content; contrast with propositional attitudes in LOT). It should be noted that the symbol itself is not a physical thing, but tokens of symbol-types can be realised physically (in your brain, say). In this way, the conjunction of syntax and semantics plus physical tokening can allow mental content to have causal powers.

If I'm on the right track, I'd say most classicists are functionalists on some level. Anyway, symbolic-level AI has problems. As Mitch has highlighted in lectures, the way we think is not like this: we can handle inconsistent, incomplete information. But the real problem is described by Searle: syntax alone is insufficient for semantics. A digital (symbolic) computer can be defined fully by reference to its rules of symbol manipulation, and no amount of computation can account for semantics; no amount of atomic symbol shuffling produces meaning. Is he right? I feel he's missed the part of the definition where a symbol is something 'atomic' that both follows rules and represents something. (A toy symbol-shuffler in the spirit of his Chinese Room is sketched below.)
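To make Searle's point vivid, here's a minimal Python sketch of a Chinese-Room-style symbol shuffler. The rulebook entries are made-up token strings of my own; nothing in the program has access to what any token means:

# Toy Chinese Room: the program is defined entirely by formal rules
# over uninterpreted tokens. Swap every token for a different
# meaningless placeholder and the computation is unchanged.
rulebook = {
    ("NI", "HAO"): ("NI", "HAO", "MA"),   # arbitrary shape-matching rules
    ("XIE", "XIE"): ("BU", "KEQI"),
}

def room(symbols):
    # Pure syntax: match input shapes, emit output shapes.
    return rulebook.get(tuple(symbols), ("MO", "REN"))

print(room(["NI", "HAO"]))   # looks like conversation from the outside

Searle's claim is that no matter how large and clever the rulebook gets, the system is still only doing this kind of shape-matching, so understanding never enters the picture; the open question is whether adding the representational side of symbols changes that.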

Monday, 16 April 2007

What is Artificial Intelligence?

(Kris' post)

"The study of intelligent behavior in machines."
From BBC Hot Topics - The science behind the news (21 July, 2003).

"The scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
From the AAAI.

"It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."
By J. McCarthy, Computer Science Department, Stanford University, one of the founders of the field of AI.

"Artificial intelligence is the science of how to get machines to do the things they do in the movies."
By Astro Teller - http://www.astroteller.net/


Do you agree? What is your opinion? ...