Chatterbot

==External links==
*{{dmoz|Computers/Artificial_Intelligence/Natural_Language/Chatterbots/|Chatterbots}}
*[http://www.simonlaven.com Chatterbot Central] at The Simon Laven Page
*[http://www.chatbots.org Chatbots.org] International chatterbots directory
*[http://www.aidreams.co.uk/chatterbotcollection/index.php The Chatterbot Collection]
*[http://www.aihub.org AI Hub] - A directory of news, programs, and links all related to chatterbots and Artificial Intelligence
*[http://knol.google.com/k/william-wynn/chatterbot/3fegkfxlkmrqb/2# Knol about Chatterbots]

[[Category:Chatterbots|*]]

Revision as of 15:55, 10 January 2009

A chatterbot (or chatbot) is a type of conversational agent: a computer program designed to simulate an intelligent conversation with one or more human users via auditory or textual methods. These programs are also known as Artificial Conversational Entities (ACE) and, though many appear to interpret the human input intelligently before providing a response, most chatterbots simply scan the input for keywords and pull from a local database the reply with the most matching keywords or the most similar wording pattern. Chatterbots may also be referred to as talk bots, chat bots, or chatterboxes.

However, the 2008 Loebner Prize entry Eugene Goostman[1] is able to respond in an impressive manner. For example, under interrogation by Loebner 2008 preliminary-phase judge Scott Jensen ("My car is red. What color is my car?"), Eugene later recalled its answer when asked only: "What is the color of my car?"
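The kind of recall described in the anecdote can be achieved without any real understanding, simply by storing pattern-extracted facts for later lookup. The following is a hypothetical minimal sketch (the patterns and function names are illustrative, not taken from Eugene Goostman's actual implementation):

```python
import re

# Statements of the form "My X is Y" are stored in a dictionary,
# so a later question about X can be answered from memory.
memory = {}

def converse(utterance: str) -> str:
    utterance = utterance.strip().rstrip(".?")
    stated = re.fullmatch(r"my (\w+) is (\w+)", utterance, re.IGNORECASE)
    if stated:
        memory[stated.group(1).lower()] = stated.group(2)
        return "I'll remember that."
    asked = re.fullmatch(r"what (?:is the )?color (?:is|of) my (\w+)",
                         utterance, re.IGNORECASE)
    if asked and asked.group(1).lower() in memory:
        return f"Your {asked.group(1).lower()} is {memory[asked.group(1).lower()]}."
    return "I don't know."

converse("My car is red.")
print(converse("What is the color of my car?"))  # -> Your car is red.
```

The bot never models what "red" or "car" means; it only replays a stored token, which is enough to pass the judge's memory probe.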

Method of operation

A good understanding of a conversation is required to carry on a meaningful dialog, but most chatterbots do not attempt this. Instead they "converse" by recognizing cue words or phrases in the human user's input, which allows them to use pre-prepared or pre-calculated responses that move the conversation along in an apparently meaningful way without the program needing to know what it is talking about.

For example, if a human types, "I am feeling very worried lately," the chatterbot may be programmed to recognize the phrase "I am" and respond by replacing it with "Why are you", adding a question mark at the end, giving the answer, "Why are you feeling very worried lately?" A similar keyword-based approach would be for the program to answer any comment mentioning (name of celebrity) with "I think they're great, don't you?" Humans, especially those unfamiliar with chatterbots, sometimes find the resulting conversations engaging. Critics of chatterbots call this engagement the ELIZA effect.
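The substitution trick described above can be sketched in a few lines. This is a hypothetical illustration of the general technique, not ELIZA's actual rule set; the rule list and fallback string are invented for the example:

```python
import re

# (pattern, template) pairs; \1 reuses whatever followed the cue phrase.
RULES = [
    (re.compile(r"i am (.*)", re.IGNORECASE), r"Why are you \1?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), r"Why do you feel \1?"),
]

def reply(user_input: str) -> str:
    text = user_input.strip().rstrip(".!")
    for pattern, template in RULES:
        if pattern.fullmatch(text):
            # Reflect the user's own words back as a question.
            return pattern.sub(template, text)
    return "Tell me more."  # fallback when no cue phrase matches

print(reply("I am feeling very worried lately."))
# -> Why are you feeling very worried lately?
```

Note that the program performs no analysis of meaning at all; the apparent empathy comes entirely from echoing the user's phrasing.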

Some programs classified as chatterbots use other principles. One example is Jabberwacky, which attempts to model the way humans learn new facts and language. ELLA attempts to use natural language processing to make more useful responses from a human's input. Some programs, like Jeeney AI, use natural language conversation, while others, such as SHRDLU, are not generally classified as chatterbots because they link their speech ability to knowledge of a simulated world. This type of link requires a more complex artificial intelligence (e.g., a "vision" system) than standard chatterbots have.

Early chatterbots

The classic early chatterbots are ELIZA (1966) and PARRY (1972).[1][2][3][4] More recent programs are Racter,[1] Verbots, A.L.I.C.E., and ELLA.

The growth of chatterbots as a research field has created an expansion in their purposes. While ELIZA and PARRY were used exclusively to simulate typed conversation, Racter was used to "write" a story called The Policeman's Beard is Half Constructed. ELLA includes a collection of games and functional features to further extend the potential of chatterbots.

The term "ChatterBot" was coined by Michael Mauldin (creator of the first Verbot, Julia) in 1994 to describe these conversational programs.[5]

Malicious chatterbots

Malicious chatterbots are frequently used to fill chat rooms with spam and advertising, or to entice people into revealing personal information, such as bank account numbers. They are commonly found on Yahoo! Messenger, Windows Live Messenger, AOL Instant Messenger and other instant messaging protocols. There has been a published report of a chatterbot used in a fake personal ad on a dating service's website.[6]

Chatterbots in modern AI

Most modern AI research focuses on practical engineering tasks. This is known as weak AI and is distinguished from strong AI, which would require sapience and reasoning abilities.

One pertinent field of AI research is natural language. Usually, weak AI fields employ specialized software or programming languages created for them. For example, one of the "most-human" natural language chatterbots, A.L.I.C.E., uses a programming language called AIML that is specific to it and its various clones, named Alicebots. Nevertheless, A.L.I.C.E. is still based on pattern matching without any reasoning. This is the same technique ELIZA, the first chatterbot, used back in 1966.
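AIML-style pattern matching boils down to matching a user sentence against word patterns containing wildcards. The sketch below is a hypothetical simplification in Python (real AIML is an XML dialect with additional features such as `<srai>` recursion and topic handling, none of which are modeled here):

```python
# A pattern is a sequence of words where "*" matches one or more words.
# match() returns the words captured by each "*", or None on failure.
def match(pattern: str, sentence: str):
    pw, sw = pattern.upper().split(), sentence.upper().split()

    def walk(i, j):
        if i == len(pw):
            return [] if j == len(sw) else None
        if pw[i] == "*":
            # "*" must consume at least one word; try every split point.
            for k in range(j + 1, len(sw) + 1):
                rest = walk(i + 1, k)
                if rest is not None:
                    return [" ".join(sw[j:k])] + rest
            return None
        if j < len(sw) and pw[i] == sw[j]:
            return walk(i + 1, j + 1)
        return None

    return walk(0, 0)

print(match("MY NAME IS *", "my name is Alice"))  # -> ['ALICE']
```

A bot built this way answers by plugging the captured words into a canned template, which is why, as the paragraph above notes, no reasoning is involved.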

Australian company MyCyberTwin also deals in strong AI, allowing users to create and sustain their own virtual personalities online. MyCyberTwin also works in a corporate setting, allowing companies to set up Virtual AI Assistants. Another notable program, Jabberwacky, likewise deals in strong AI, as it is claimed to learn new responses based on user interactions rather than being driven from a static database like many other existing chatterbots. Although such programs show initial promise, many of the existing results in tackling the problem of natural language still appear fairly poor, and it seems reasonable to state that there is currently no general-purpose conversational artificial intelligence. This has led some software developers to focus more on the practical aspect of chatterbot technology: information retrieval.

A common rebuttal within the AI community to criticism of such approaches asks, "How do we know that humans don't also just follow some cleverly devised rules?" (in the way that chatterbots do). Two famous examples of this line of argument against the rationale for the Turing test are John Searle's Chinese room argument and Ned Block's Blockhead argument.

Chatterbots/virtual assistants in commercial environments

Automated Conversational Systems have progressed and evolved far from the original designs of the first widely used chatbots. In the UK, large commercial entities such as Lloyds TSB, Royal Bank of Scotland, Renault, Citroën and One Railway are already utilizing Virtual Assistants to reduce expenditures on Call Centres and provide a first point of contact that can inform the user exactly of points of interest, provide support, capture data from the user and promote products for sale.

In the UK, new projects and research are being conducted to introduce a Virtual Assistant into the classroom to assist the teacher. This project is the first of its kind and the chatbot VA in question is based on the Yhaken [2][failed verification] chatbot design.

The Yhaken template provides a further move forward in Automated Conversational Systems with features such as complex conversational routing and responses, well defined personality, a complex hierarchical construct with additional external reference points, emotional responses and in depth small talk, all to make the experience more interactive and involving for the user.

Annual contests for chatterbots

Many organizations try to encourage and support developers all over the world in developing chatterbots that are able to perform a variety of tasks and compete with each other through Turing tests and other challenges. Annual contests are organized at the following links:

Chatterbots in culture

  • Bots generated by TheGreatHatsby since the mid-2000s have randomly connected people by seeding messages with opposing usernames.

See also

Citations

  1. ^ a b Güzeldere & Franchi 1995
  2. ^ Computer History Museum 2006
  3. ^ Sondheim 1997
  4. ^ Network Working Group 1973 - Transcript of a session between Parry and Eliza. (This is not the dialogue from the ICCC, which took place October 24-26, 1972; this session is from September 18, 1972.)
  5. ^ Mauldin 1994
  6. ^ "From Russia With Love" (PDF). Retrieved 2007-12-09. Psychologist and Scientific American: Mind contributing editor Robert Epstein reports how he was initially fooled by a chatterbot posing as an attractive girl in a personal ad he answered on a dating website. In the ad, the girl portrayed herself as being in Southern California and then soon revealed, in poor English, that she was actually in Russia. He became suspicious after a couple of months of email exchanges, sent her an email test of gibberish, and she still replied in general terms. The dating website is not named. Scientific American: Mind, October-November 2007, pages 16-17, "From Russia With Love: How I got fooled (and somewhat humiliated) by a computer." Also available online.

References

External links
