17 August 2008

Project VagaBot Update August 2008

Following up on my previous post of January 2008, “Corpus linguistics & Concgramming in Verbots and Pandorabots”, you can now see the demo of this VagaBot at http://www.mendicott.com . The results of this trial were not satisfying, due to a limitation of the VKB engine at verbotsonline.com: it cannot serve consecutive, or random, responses to identical inputs or triggers, basically tags. In other words, responses to identical input hang on the first response and do not cycle through the series of alternatives. Apparently a commercial implementation of the Verbots platform does allow the consecutive firing of related replies. Thanks to Matt Palmerlee of Conversive, Inc. for increasing the online knowledgebase storage to accommodate this trial and demo.
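By contrast, AIML handles this case natively with its random tag; a minimal sketch, with an invented pattern and replies:

  <category>
    <pattern>GREEN TRAVEL</pattern>
    <template>
      <random>
        <li>Green travel means minimizing your impact as you go.</li>
        <li>Have you considered going overland instead of flying?</li>
        <li>Slow travel is green travel.</li>
      </random>
    </template>
  </category>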

Dr. Rich Wallace has recently blogged a very helpful post, “Saying a list of AIML responses in order”, on his Alicebot blog at http://alicebot.blogspot.com . After considerable fiddling, I have successfully installed Program E on my Windows desktop under Wampserver (Apache, MySQL, PHP). I have also found a very easy commercial product for importing RSS feeds into MySQL. Next I will try to bridge the RSS database and the Program E AIML database with Extensible Stylesheet Language Transformations (XSLT) using the previously mentioned xsl-easy.com database adapters… as well as implement Dr. Wallace’s "successor" function on the Program E AIML platform. Once I get the prototype working on my desktop, I will then endeavor to replicate it on a remote server for public access.
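For the curious, the successor approach can be sketched in plain AIML 1.x roughly like this; this is my own reconstruction of the idea, not Dr. Wallace’s code, with invented tips and a RESET TIPS input to initialize the counter predicate:

  <category><pattern>RESET TIPS</pattern>
  <template><think><set name="counter">1</set></think>OK, starting over.</template></category>

  <category><pattern>TIP 1</pattern><template>Pack light.</template></category>
  <category><pattern>TIP 2</pattern><template>Travel overland where you can.</template></category>
  <category><pattern>TIP 3</pattern><template>Learn the local greetings.</template></category>

  <category><pattern>SUCCESSOR 1</pattern><template>2</template></category>
  <category><pattern>SUCCESSOR 2</pattern><template>3</template></category>
  <category><pattern>SUCCESSOR 3</pattern><template>1</template></category>

  <category><pattern>NEXT TIP</pattern>
  <template><srai>TIP <get name="counter"/></srai>
  <think><set name="counter"><srai>SUCCESSOR <get name="counter"/></srai></set></think></template></category>

Each NEXT TIP input says the current item and then advances the counter, so the list is spoken in order rather than hanging on the first reply.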

The long-term goal of Project VagaBot is to create a conversational agent that can “read” not only books but also web feeds, and “learn” to reply intelligently to questions, in this case on “green travel”. In effect, it would be an anthropomorphic frontend, with not only my book, "Vagabond Globetrotting 3", but also my entire http://meta-guide.com feed resources as backend. I am not aware of another project that currently makes the contents of a book available through a conversational agent, nor one that “learns” from web feeds. I hope eventually to send the VagaBot avatar into smartphones, with both voice output and input. I would be very interested in hearing from anyone interested in investing in or otherwise supporting this development.

15 June 2008

Twitter, Bots & Twitterbotting

Micro-blogging is a form of blogging that allows users to write brief text updates, which may be viewed by anyone or restricted to a user group. Such messages can be submitted by a variety of means, including SMS, IM, Email or Web.

Twitter is the prototypical micro-blogging service and allows users to send text-based posts of up to 140 characters, called "tweets", to the Twitter web site. One of the main advantages of using Twitter is that it provides a functional gateway between the web and the mobile phone via SMS text messaging compatibility. Christina Laun recently posted a handy primer, “Twitter for Librarians: The Ultimate Guide”.

There are now a growing number of Twitter applications for travel and tourism:
  • The Multimap Twitter bot helps you access maps, directions, and local information by sending messages via Twitter.
  • The Nelso Twitter bot helps you find bars, restaurants, hotels, shopping, and other businesses in Europe.
  • The Twanslate Twitter bot translates anything you throw at it, handy for on-the-go translation when all you have is your phone.
I’ve now added feeds from Twitter for all 234 countries to my Destination Meta-Guide.com 2.0 semantic mashup, for instance at:
I’ve also created two Twitterbots already:
Twitter bots are actually special Twitter users that provide information, either upon request or as it becomes available. There are at least two good web sites about Twitter bots:
  • twitterbotting.com is a site to help folks get quick info about creating new Twitterbots.
  • retweet.com helps to discover Twitter, one bot at a time.

A web feed is a data format used to provide users with frequently updated content. RSS is the most common web feed format, used to publish content such as blog entries, news headlines, and podcasts. Yahoo! Pipes is a web application for building applications that aggregate web feeds, web pages, and other services. A combination of data from more than one source in a single integrated application is called a mashup.
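For reference, a single RSS 2.0 item carries just a few fields, which is what makes feeds such tempting chatbot fodder (the content here is invented):

  <item>
    <title>Overland from Istanbul to Kathmandu</title>
    <link>http://example.com/overland</link>
    <description>Notes on crossing Asia by bus and rail.</description>
    <pubDate>Sun, 15 Jun 2008 12:00:00 GMT</pubDate>
  </item>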

Web feeds or mashups can be sent into Twitter with twitterfeed.com . And, feeds can be sent out of Twitter with loudtwitter.com . Feeds can also be exported from Twitter using sites like tweetscan.com or summize.com .

Using the Twitter Facebook application I’ve managed to get Twitter talking to the Facebook status message. I’ve also added the Twitter Badge for Blogger to my blog (at right). And thanks to a new ping.fm beta account, I’ve been able to add my LinkedIn status message into this loop.

Now if I can just send Twitter feeds into a chatbot knowledgebase….
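A rough sketch of what that bridge might look like in Python, on the assumption that each tweet arrives as an RSS item; the feed URL is hypothetical, and the title-to-pattern normalization is the crude part:

  import re
  import xml.etree.ElementTree as ET
  from urllib.request import urlopen
  from xml.sax.saxutils import escape

  FEED_URL = "http://twitter.com/statuses/user_timeline/vagabot.rss"  # hypothetical feed

  def normalize(text):
      # AIML patterns are uppercase words with no punctuation
      return " ".join(re.sub(r"[^A-Z0-9 ]", " ", text.upper()).split())

  def feed_to_aiml(url):
      """Emit one AIML category per RSS item: title becomes the pattern, description the reply."""
      tree = ET.parse(urlopen(url))
      cats = []
      for item in tree.findall(".//item"):
          title = (item.findtext("title") or "").strip()
          desc = (item.findtext("description") or "").strip()
          if title and desc:
              cats.append("<category><pattern>%s</pattern><template>%s</template></category>"
                          % (normalize(title), escape(desc)))
      return '<aiml version="1.0.1">\n%s\n</aiml>' % "\n".join(cats)

  if __name__ == "__main__":
      print(feed_to_aiml(FEED_URL))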

20 January 2008

Corpus linguistics & Concgramming in Verbots and Pandorabots

One definition of semantic, as in Semantic Web or Web 3.0, is the property of language pertaining to meaning, where meaning is significance arising from relation, for instance the relation of words. I don’t recall hearing about corpus linguistics before deciding to animate and make my book interactive. Apparently there is a long history of corpus linguistics attempting to derive rules from natural language, such as the work of George Kingsley Zipf. As someone with a degree in psychology, I do know something of cognitive linguistics and its reaction to the machine mind paradigm.

John McCarthy, who coined the term, called artificial intelligence "the science and engineering of making intelligent machines,” which today is referred to as "the study and design of intelligent agents." Wikipedia defines intelligence as “a property of the mind that encompasses… the capacities to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn.” Computational linguistics has emerged as an interdisciplinary field involved with “the statistical and/or rule-based modeling of natural language.”

In publishing, a concordance is an alphabetical list of the main words used in a text, along with their immediate contexts or relations. Concordances are frequently used in linguistics to study the body of a text. A concordancer is a program that constructs a concordance. In corpus linguistics, concordancers are used to retrieve sorted lists from a corpus or text. The concordancers I looked at included AntConc, Concordance, WordSmith Tools, and ConcApp. I found ConcApp, and in particular the companion program ConcGram, to be most interesting. (Examples of web-based concordancers include KWICFinder.com and the WebAsCorpus.org Web Concordancer.)
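As a toy illustration of what a concordancer actually does, here is a minimal keyword-in-context (KWIC) listing in Python; real tools like ConcApp add sorting, frequency statistics, and corpus-scale indexing:

  def kwic(text, keyword, width=4):
      """Print every occurrence of keyword with `width` words of context on either side."""
      words = text.split()
      for i, w in enumerate(words):
          if w.lower().strip(".,;:!?") == keyword.lower():
              left = " ".join(words[max(0, i - width):i])
              right = " ".join(words[i + 1:i + 1 + width])
              print("%35s  %s  %s" % (left, words[i], right))

  kwic("The vagabond travels light. A seasoned vagabond learns the local ways.", "vagabond")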

Concgramming is a new computer-based method for categorising word relations and deriving the phraseological profile or ‘aboutness’ of a text or corpus. A concgram constitutes all of the permutations generated by the association of two or more words, revealing all of the word association patterns that exist in a corpus. Concgrams are used by language learners and teachers to study the importance of the phraseological tendency in language.
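A crude sketch of two-word concgram counting in Python, assuming a five-word co-occurrence window and a stand-in stopword list; real concgram software also tracks the positional and constituency variation that this ignores:

  from collections import Counter

  STOPWORDS = {"the", "a", "of", "to", "and", "in", "is", "it"}  # stand-in for the top-100 list

  def concgrams(text, window=5):
      """Count unordered word pairs that co-occur within `window` words of each other."""
      words = [w.strip(".,;:!?").lower() for w in text.split()]
      words = [w for w in words if w and w not in STOPWORDS]
      pairs = Counter()
      for i in range(len(words)):
          for j in range(i + 1, min(i + window, len(words))):
              pairs[tuple(sorted((words[i], words[j])))] += 1
      return pairs

  sample = "Green travel means slow travel, and slow travel usually means overland travel."
  for pair, n in concgrams(sample).most_common(5):
      print(pair, n)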

I was in fact successful in stripping out all the sentences from my latest book, VAGABOND GLOBETROTTING 3, simply by reformatting them as paragraphs with MS Word. I then saved them as a CSV file, actually just a text file with one sentence per line. I made a little utility that ran all those sentences through the Yahoo! Term Extraction API, extracting key terms and associating them with their sentences in the form of XML output, with the terms as title and the sentences as description. Using the great XSLT editor xsl-easy.com, I could then convert that XML output quickly and easily into AIML with a simple template.
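That template can be very small. A sketch of the sort of thing I mean, assuming the intermediate XML carries each sentence as an item, with the extracted term in title and the sentence in description (the element names are my assumption):

  <?xml version="1.0"?>
  <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output method="xml" indent="yes"/>
    <xsl:template match="/">
      <aiml version="1.0.1">
        <xsl:for-each select="//item">
          <category>
            <!-- the extracted term becomes the pattern; AIML wants it uppercased -->
            <pattern><xsl:value-of select="translate(title,
              'abcdefghijklmnopqrstuvwxyz', 'ABCDEFGHIJKLMNOPQRSTUVWXYZ')"/></pattern>
            <!-- the source sentence becomes the reply -->
            <template><xsl:value-of select="description"/></template>
          </category>
        </xsl:for-each>
      </aiml>
    </xsl:template>
  </xsl:stylesheet>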

The problem I encountered was that all those key terms extracted from my book sentences, when tested, formed something like second-level knowledge that you couldn’t get out of the chatbot unless you already knew the subject matter…. So I then decided to try adding the concgrams to see if that could bridge the gap. I had to get someone to create a special program to marry the two-word concgrams from the entire book (minus the 100 most common words in English) to their sentences in a form I could use.

It was only then that I began to discover some underlying differences between the verbotsonline.com and pandorabots.com chatbot engine platforms. I've been using verbotsonline because it seemed easier and cheaper than adding a mediasemantics.com character to the pandorabot. However, there is a 2.5 MB limit on verbotsonline knowledgebases, which I've hit three times already. Also, verbotsonline.com does not seem to accept multiple SAME patterns with different templates; at least, the AIML-Verbot Converter apparently removes the “duplicate” categories.

In verbots, spaces automatically match zero or more words, so wildcards are only necessary to match partial words. In effect, words are automatically wildcarded, which makes matches much easier to achieve. So far, I have been unable to replicate this simple system in AIML, which makes AIML more precise, or controllable, but perhaps less versatile, at least in this case. Even with the AIML knowledgebase replicated eight times with the following patterns, I could not duplicate in pandorabots the results verbots achieves with one file, wildcarding on all words in a phrase or term:

dog cat
dog cat *
_ dog cat
_ dog cat *

dog * cat
dog * cat *
_ dog * cat
_ dog * cat *

The problem I encountered with AIML when trying to “star” all words was that a star at the beginning of a pattern accepted only one word rather than several, and replacing the star with the underscore apparently affects pattern prioritization. (In AIML, underscore patterns match ahead of exact words, which in turn match ahead of star patterns, so swapping the wildcard changes which category fires first.) So there I am at the moment, stuck between verbots and pandorabots, unable to do what I want with either: verbotsonline for lack of capacity and its inability to convert “duplicate” categories into VKB, and pandorabots for its inability to conform to my fully wildcarded spectral word association strategy….

08 January 2008

Books, metadata and chatbots… in search of the XML Rosetta Stone

I am an author and I build chatbots (aka chatterbots). A chatbot is a conversational agent, driven by a knowledgebase. I am currently trying to understand the best way to convert a book into a chatbot knowledgebase.

A knowledgebase is a form of database, and the chatbot is actually a type of search… an anthropomorphic, and therefore ergonomic, form of search. This simple fact is usually shrouded by the jargon of “natural language processing”, which may or may not involve actual voice input or output.

According to the ruling precepts of the “Turing test”, chatbots must be as close to conversational as possible, and this is what differentiates them from pure “search”…. With chatbots there is a significant element of “smoke and mirrors” involved, which introduces the human psychological element into the machine in the form of cultural, linguistic and thematic assumptions and expectations, becoming in a sense a sort of “mind game”.

I’m actually approaching this from two directions. I would also like to be able to feed RSS into a chatbot knowledgebase. There is currently no working example of this available. Parsing RSS into AIML (Artificial Intelligence Markup Language), the most common chatbot dialect, is problematic and yet to be cracked effectively. So, my thinking arrived at somehow breaking a book into a form that resembles RSS. The Wikipedia List of XML markup languages revealed a number of attempts to add metadata to books.

Dr. Wallace, the originator of AIML, recently responded on the pandorabots-general group that RSS title fields would usually be too specific to be useful as chatbot concept triggers. However, I believe utilities such as the Yahoo! Term Extraction API could be used to create tags for feed items, which might then prove more useful when mapped to AIML patterns….
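A rough sketch of that tagging idea in Python; the endpoint and parameters follow my reading of the Yahoo! developer documentation, and YOUR_APP_ID is a placeholder for a real application ID:

  import xml.etree.ElementTree as ET
  from urllib.request import urlopen
  from urllib.parse import urlencode

  API = "http://search.yahooapis.com/ContentAnalysisService/V1/termExtraction"

  def extract_terms(text, appid="YOUR_APP_ID"):
      """POST a passage to the Term Extraction service and return its suggested terms."""
      data = urlencode({"appid": appid, "context": text}).encode("utf-8")
      tree = ET.parse(urlopen(API, data))
      # per the V1 docs, each term comes back in a <Result> element inside a <ResultSet>
      return [el.text for el in tree.iter() if el.tag.endswith("Result")]

  print(extract_terms("Vagabonding means long-term, low-budget, overland travel."))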

My supposition is that a *good* book index is in effect a “taxonomy” of that book. Paragraphs would generally be too large to meet the specialized “conversational” needs of a chatbot. The results of a conventional concordance would be too general to be useful in a chatbot…. If RSS as we know it is currently too specific to function effectively in a chatbot, what if that index were mapped back to the referring sentences as “tags”, somewhat like RSS?

I figure that if you can relatively quickly break a book down into a sentence “concordance”, you could then point that at something like the Yahoo! Term Extraction API to quickly generate relevant keywords (or “tags”) for each sentence, which could then be used in AIML as triggers for those sentences in a chatbot…. Is there such a beast as a “sentence parser” for a corpus such as a common book? All I want to do at this point is strip out all the sentences and line them up, as a conventional concordance does with individual words.
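Lacking such a beast, a crude first pass can be had with a regular expression; a sketch that will stumble over abbreviations like “Dr.” and “e.g.”:

  import re

  def sentences(text):
      """Split running text into sentences at terminal punctuation followed by whitespace."""
      text = " ".join(text.split())  # collapse line breaks and runs of spaces
      return [s for s in re.split(r"(?<=[.!?])\s+(?=[A-Z\"'])", text) if s]

  book = "Travel light. Ask questions! Where to next? The road decides."
  for s in sentences(book):
      print(s)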

There are a number of examples of desktop chatbots using proprietary Windows speech recognition today; however, to my knowledge there are currently no chatbots available online or via VoIP that accept voice input (*not* IM or IRC bots)…. So, I’ve also spent some time lately looking into VoiceXML (VXML), CCXML and Voxeo’s CallXML, as well as the Speech Recognition Grammar Specification (SRGS) and the mythical voice browser…. The only thing I could find that actually accepts voice input online for processing is Midomi.com, which accepts a hummed tune for tune recognition…. Apparently GOOG-411, which is basically interactive voice response (IVR) rather than true speech recognition, is as close as it gets to a practical hybrid online/offline voice search application at this time. So, what if Google could talk?