20 January 2008

Corpus linguistics & Concgramming in Verbots and Pandorabots

One definition of semantic, as in Semantic Web or Web 3.0, is the property of language pertaining to meaning, where meaning is significance arising from relation, for instance the relation between words. I don’t recall hearing about corpus linguistics before deciding to animate and make my book interactive. Apparently there is a long history of corpus linguistics trying to derive rules from natural language, such as the work of George Kingsley Zipf. As someone with a degree in psychology, I do know something of cognitive linguistics and its reaction to the machine mind paradigm.

John McCarthy, the man who coined the term, called artificial intelligence "the science and engineering of making intelligent machines,” which today is referred to as "the study and design of intelligent agents." Wikipedia defines intelligence as “a property of the mind that encompasses… the capacities to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn.” Computational linguistics has emerged as an interdisciplinary field concerned with “the statistical and/or rule-based modeling of natural language.”

In publishing, a concordance is an alphabetical list of the main words used in a text, along with their immediate contexts or relations. Concordances are frequently used in linguistics to study the body of a text. A concordancer is a program that constructs a concordance. In corpus linguistics, concordancers are used to retrieve sorted lists from a corpus or text. Concordancers that I looked at included AntConc, Concordance software, WordSmith Tools and ConcApp. I found ConcApp, and in particular the additional program ConcGram, to be most interesting. (Examples of web-based concordancers include KWICFinder.com and the WebAsCorpus.org Web Concordancer.)
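
To make that concrete, here is a rough sketch in Python of the core of what a concordancer does, a keyword-in-context (KWIC) listing; the function name and the commented-out corpus file are just my own illustration, not taken from any of the tools above.

import re

def kwic(text, keyword, width=4):
    """Minimal keyword-in-context (KWIC) concordance: list each
    occurrence of `keyword` with `width` words of context on
    either side."""
    words = re.findall(r"[a-z']+", text.lower())
    lines = []
    for i, w in enumerate(words):
        if w == keyword.lower():
            left = " ".join(words[max(0, i - width):i])
            right = " ".join(words[i + 1:i + 1 + width])
            lines.append(f"{left:>40} | {w} | {right}")
    return lines

# Example: every context of "travel" in a plain-text corpus file
# corpus = open("vagabond.txt").read()
# print("\n".join(kwic(corpus, "travel")))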

Concgramming is a new computer-based method for categorising word relations and deriving the phraseological profile or ‘aboutness’ of a text or corpus. A concgram constitutes all of the permutations generated by the association of two or more words, revealing all of the word association patterns that exist in a corpus. Concgrams are used by language learners and teachers to study the importance of the phraseological tendency in language.
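
As a rough illustration of the idea (my own sketch, not the ConcGram program itself), a two-word concgram groups every co-occurrence of a word pair within some span, in either order, with or without intervening words:

from collections import Counter
from itertools import combinations
import re

def two_word_concgrams(sentences, span=5):
    """Count unordered word pairs that co-occur within `span` words
    of each other, in either order, with or without intervening
    words -- a rough two-word concgram profile of a corpus."""
    counts = Counter()
    for sentence in sentences:
        words = re.findall(r"[a-z']+", sentence.lower())
        for i, j in combinations(range(len(words)), 2):
            if j - i <= span and words[i] != words[j]:
                counts[frozenset((words[i], words[j]))] += 1
    return counts

# profile = two_word_concgrams(["The dog chased the cat.",
#                               "A cat and a dog slept."])
# The pair {'dog', 'cat'} is counted in both sentences,
# despite the different word order and the intervening words.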

I was in fact successful in stripping out all the sentences from my latest book, VAGABOND GLOBETROTTING 3, simply by reformatting them as paragraphs with MSWord. I then saved them as a CSV file, actually just a text file with one sentence per line. I was able to make a little utility which ran all those sentences through the Yahoo! Term Extraction API, extracting key terms and associating those terms with their sentences in the form of XML output, with terms as titles and sentences as descriptions. Using the great XSLT editor xsl-easy.com, I could then convert that XML output quickly and easily into AIML with a simple template.
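
The conversion step itself is almost trivial; sketched here in Python rather than the XSLT I actually used, and with illustrative element and file names, it amounts to something like this:

from xml.sax.saxutils import escape

def terms_to_aiml(pairs):
    """Turn (term, sentence) pairs -- term as the title, source
    sentence as the description -- into AIML categories where the
    uppercased term is the pattern and the sentence is the reply."""
    categories = []
    for term, sentence in pairs:
        categories.append(
            "<category>\n"
            f"  <pattern>{escape(term.upper())}</pattern>\n"
            f"  <template>{escape(sentence)}</template>\n"
            "</category>"
        )
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<aiml version="1.0">\n' + "\n".join(categories) + "\n</aiml>")

# open("book.aiml", "w").write(terms_to_aiml(
#     [("vagabond globetrotting", "Vagabond globetrotting is ...")]))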

The problem I encountered was that all those key terms extracted from my book sentences, when tested, formed something like second-level knowledge that you couldn’t get out of the chatbot unless you already knew the subject matter…. So I then decided to try adding the concgrams to see if that could bridge the gap. I had to get someone to create a special program to marry the 2 word concgrams from the entire book (minus the 100 most common words in English) to their sentences in a form I could use.
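
Roughly what that special program had to do, sketched under my own assumptions (the stop-word list below is just a stand-in for the 100 most common words, and the names are illustrative):

import re
from itertools import combinations

# Hypothetical stop list standing in for the 100 most common words
STOPWORDS = {"the", "of", "and", "a", "to", "in", "is", "you", "that", "it"}

def concgram_index(sentences, span=5):
    """Map each two-word concgram (stop words excluded) to the
    sentences in which the pair co-occurs within `span` words."""
    index = {}
    for sentence in sentences:
        words = re.findall(r"[a-z']+", sentence.lower())
        for i, j in combinations(range(len(words)), 2):
            a, b = words[i], words[j]
            if (j - i <= span and a != b
                    and a not in STOPWORDS and b not in STOPWORDS):
                index.setdefault(frozenset((a, b)), []).append(sentence)
    return index

# Each pair, e.g. {"dog", "cat"}, can then be pointed at the book
# sentences it came from when generating chatbot categories.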

It was only then that I began to discover some underlying differences between the verbotsonline.com and pandorabots.com chatbot engine platforms. I've been using verbotsonline because it seemed easier and cheaper than adding a mediasemantics.com character to the pandorabot. However, there is a 2.5 MB limit on verbotsonline knowledgebases, which I've reached three times already. Also, verbotsonline.com does not seem to accept multiple identical patterns with different templates; at least, the AIML-Verbot Converter apparently removes the “duplicate” categories.

In verbots, spaces automatically match zero or more words, so wildcards are only needed to match partial words. In effect, every word in a verbot pattern is automatically wildcarded, which makes it much easier to achieve matches. So far, I have been unable to replicate this simple system with AIML, which makes AIML more precise or controllable, but perhaps less versatile, at least in this case. Even with the AIML knowledgebase replicated eight times with the following patterns, I could not duplicate in pandorabots the results the verbots achieve with one file, wildcarding on all words in a phrase or term.

dog cat
dog cat *
_ dog cat
_ dog cat *

dog * cat
dog * cat *
_ dog * cat
_ dog * cat *

The problem I encountered with AIML when trying to “star” all words was that a star at the beginning of a pattern accepted only one word rather than several, and replacing it with the underscore apparently affects pattern prioritization. So there I am at the moment, stuck between verbots and pandorabots, not able to do what I want with either: verbotsonline for lack of capacity and its inability to convert “duplicate” categories into VKB, and pandorabots for its inability to conform to my fully wildcarded spectral word association strategy….
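
To spell out what that eightfold replication looks like in practice, here is the kind of generator I mean (a sketch only; the helper name is mine):

def wildcard_variants(term):
    """All eight AIML pattern variants for a two-word term:
    optional leading underscore, optional internal star,
    optional trailing star."""
    first, second = term.upper().split()
    variants = []
    for lead in ("", "_ "):
        for mid in (" ", " * "):
            for tail in ("", " *"):
                variants.append(f"{lead}{first}{mid}{second}{tail}")
    return variants

# wildcard_variants("dog cat") ->
# ['DOG CAT', 'DOG CAT *', 'DOG * CAT', 'DOG * CAT *',
#  '_ DOG CAT', '_ DOG CAT *', '_ DOG * CAT', '_ DOG * CAT *']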

4 comments:

mpalmerlee said...

Marcus-

Cool post. As I let you know on the verbots forums, we've upped the quota to 10 MB. Have you looked at systems like FrameNet or ontology systems like Protege? I think FrameNet's output is the result of concgramming, according to linguist Sam Fisher, a really smart guy who works for us and is really into this stuff.

-Matt

Noggy said...

Hi Marcus !
Your research is very interesting. I'm studying for my master's degree in informatics engineering and I was wondering if I could efficiently and meaningfully transform RDF to AIML... I found the OpenCalais project from the Reuters news agency, which you might like to check.
To avoid the limitations of verbots I suggest you use RebeccaAIML, which seems great.

- Carlos -

Noggy said...

I forgot to ask in my previous reply... when you say "I had to get someone to create a special program to marry the 2 word concgrams from the entire book (...)", what exactly do you mean? Did you actually create 2 concgrams from your book, the first being the result of manual indentation of the book, and the second the result of a concgramer program?

- Carlos -

Marcus L Endicott said...

http://rebecca-aiml.sourceforge.net

Carlos,

Thanks very much for suggesting RebeccaAIML!

I left the part about custom programming intentionally vague. ;^)

But you are correct, the trial custom programming did join the "manual indentation" of my book with the "result of the concgramer program".

The results of this trial were not satisfying due to the limitations of both chatbot platforms.

The next steps for me will be to create a fully relational database of my entire book, "married" to the entire concgram spectrum, and then to pursue another bot platform. Apparently a commercial implementation of the Verbots platform allows for the consecutive firing of related replies....

Cheers from Oz,

- Marcus Endicott
http://www.mendicott.com