16 March 2009

Feedbots & Feeding Chatbots

With a degree in Psychology and a background in technology, I'm starting to feel like a psychologist for robots....

I am presently working on two lines of research intended to converge on conversational agents, or chatbots, for the mobile market.  I have been working on technology to convert books into knowledgebases (Project VagaBot), and I have been developing feedbots to feed realtime, prefiltered information into knowledgebases (Twitter, Bots & Twitterbotting).

Knowledgebases may take different forms, but in each case they form part of the conversational agent's, or chatbot's, "brain".  The off-the-shelf conversational agents I have been working with include Conversive Verbots and various AIML platforms, including Pandorabots.  Lately I have also been looking beyond the so-called stimulus-response systems to the new generation of semantic systems, such as Stephen Reed’s texai.org, Sherman Monroe’s monrai.com and Ben Goertzel's novamente.net.
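To make the stimulus-response idea concrete, here is a minimal sketch using the open-source PyAIML interpreter.  The category file, its name and its content are invented for illustration rather than taken from any of my bots: the pattern is the stimulus, and its template is the canned response.

```python
# A minimal sketch of the stimulus-response model behind AIML platforms
# such as Pandorabots, using the open-source PyAIML interpreter.
# The category below (file name and content) is purely illustrative.
import aiml

AIML_CATEGORY = """<?xml version="1.0" encoding="UTF-8"?>
<aiml version="1.0">
  <category>
    <pattern>WHAT IS GREEN TRAVEL</pattern>
    <template>Green travel is tourism that minimises environmental impact.</template>
  </category>
</aiml>
"""

with open("green_travel.aiml", "w") as f:
    f.write(AIML_CATEGORY)

kernel = aiml.Kernel()
kernel.learn("green_travel.aiml")            # load the stimulus-response rules
print(kernel.respond("What is green travel"))  # input is normalised, then matched
```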

Most basically, the semantic systems strive to convert natural language into SPARQL queries and to run those queries against knowledgebases.  (Note that relational databases may be converted into RDF, and so made accessible to SPARQL, with D2RQ.)  Goertzel's OpenCog Project is notable for attempting to lay out a long-term roadmap, or blueprint, for the creation of what he calls "Artificial General Intelligence", otherwise known as Strong AI.  The project is at least partially funded by Google, and this line of work points toward what Ray Kurzweil refers to as a possible technological singularity, the point at which robots will, in effect, begin to build themselves.
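As a rough illustration of what such a system bottoms out in, the sketch below runs a hand-written SPARQL query against a tiny RDF knowledgebase using the rdflib Python library.  The vocabulary and data are invented for this example, and a real semantic system would generate the query from a natural-language question rather than have it hard-coded.

```python
# Illustrative only: a hand-written SPARQL query standing in for the kind
# of query a semantic system might generate from a question such as
# "Which destinations offer ecotourism?".  The ex: vocabulary is invented.
from rdflib import Graph

TURTLE_DATA = """
@prefix ex: <http://example.org/travel#> .
ex:CostaRica  ex:offers ex:Ecotourism .
ex:Bhutan     ex:offers ex:Ecotourism .
ex:LasVegas   ex:offers ex:CasinoTourism .
"""

g = Graph()
g.parse(data=TURTLE_DATA, format="turtle")   # load the toy knowledgebase

QUERY = """
PREFIX ex: <http://example.org/travel#>
SELECT ?destination WHERE {
  ?destination ex:offers ex:Ecotourism .
}
"""

for row in g.query(QUERY):
    print(row[0])   # ex:CostaRica, ex:Bhutan
```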

So-called Twitter bots (Twitterbots) are most basically feed bots (feedbots), although a wide variety of bots are referred to as Twitterbots, not least the infamous friend-adder or follow bots.  Feedbots feed web feeds into or out of Twitter, currently the most popular feed exchange, or feed interchange.  I don't really count a simple blog feed ported into Twitter as a true "Twitterbot"; for me, a real Twitterbot must actually "do" something, must have some unique functionality.  The hands-down favorite tool for feed manipulation is Yahoo Pipes.  I have been working with Yahoo Pipes for a number of years and have become a skillful Pipes developer, creating hundreds of Pipes.  However, Yahoo Pipes alone is not enough to create a "brain" or "artificial intelligence"....
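For the sake of illustration, here is a minimal feedbot sketch in Python using the feedparser library.  My own pipelines run in Yahoo Pipes; the feed URL and keywords below are placeholders, and the actual posting to Twitter is left out.

```python
# A minimal feedbot sketch: pull a web feed, keep only the entries that
# match a keyword filter, and format them as tweet-sized updates.
# The feed URL and keywords are placeholders; posting to Twitter is omitted.
import feedparser

FEED_URL = "http://example.com/travel-news.rss"     # placeholder feed
KEYWORDS = {"ecotourism", "sustainable", "carbon"}  # placeholder filter terms

def matching_entries(url, keywords):
    """Yield feed entries whose title or summary mentions a keyword."""
    feed = feedparser.parse(url)
    for entry in feed.entries:
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if any(keyword in text for keyword in keywords):
            yield entry

for entry in matching_entries(FEED_URL, KEYWORDS):
    # Trim to Twitter's 140-character limit of the day.
    tweet = (entry.get("title", "") + " " + entry.get("link", ""))[:140]
    print(tweet)
```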

I have found the Zoho Creator web-based software-as-a-service a convenient way to host my databases "in the cloud".  These databases generally consist of what is sometimes referred to as a "taxonomy", but is more precisely a "faceted-classification".  The faceted-classification, held as a database, forms the basic "intelligence" of intelligent feedbots, or Twitterbots.  Multiple databases may also be used in tandem, a technique I refer to as "dual iteration", to sharpen or increase that intelligence.  And specific feedbots can be combined to create cumulative meta-bots.
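To suggest how a faceted-classification can serve as that "intelligence", the following Python sketch matches incoming text against two invented facets and, as a rough stand-in for "dual iteration", keeps an item only when a match in one facet is confirmed by a match in a second.  My actual facets live in Zoho Creator and are far more elaborate.

```python
# A sketch of a faceted-classification acting as a feedbot's "intelligence".
# The two facets and their terms are invented for illustration.
GREEN_TRAVEL_FACETS = {
    "activity": {"ecotourism", "birdwatching", "trekking", "agritourism"},
    "concern":  {"carbon", "conservation", "fair trade", "wildlife"},
}

def classify(text, facets):
    """Return the facets (and the matched terms) found in a piece of text."""
    text = text.lower()
    hits = {}
    for facet, terms in facets.items():
        matched = {term for term in terms if term in text}
        if matched:
            hits[facet] = matched
    return hits

def dual_iteration(text, facets):
    """Rough stand-in for running two vocabularies in tandem:
    keep an item only if at least two facets match."""
    hits = classify(text, facets)
    return hits if len(hits) >= 2 else {}

# Two facets match here ("trekking"; "wildlife", "conservation"), so the item is kept.
print(dual_iteration("Trekking tours fund wildlife conservation in Nepal",
                     GREEN_TRAVEL_FACETS))
```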

I have previously blogged about developing my proprietary “green travel taxonomy” over many years; it is in fact a complex faceted-classification in the form of a database, and it currently drives the @greentravel1 Twitterbot. greentravel1 is also available on Blogspot as greentravel1.blogspot.com.  It currently consists of four primary “channels”:
  • #GTNews consists of Google News searches based on the green travel faceted-classification.
  • #GTRetweet consists of analysis of the Twitter public timeline based on the same green travel faceted-classification.
  • #GTVideo currently searches Google Video using an abbreviated set of key terms, for scalability.
  • #GTFeeds consists of an accumulated set of closely related feeds added manually. 
In short, greentravel1 delivers a continuous feed of all English-language green travel news and all green travel related Twitter commentary, plus all new green travel videos and related blog feeds.  greentravel1 effectively enables realtime monitoring of the bulk of cyberspace for the critical issues facing the sustainability of tourism today.  (And to see this sustainable tourism intelligence presented dynamically on a country-by-country basis, for all 240 “countries”, simply visit the Destination Meta-Guide.com 2.0.)
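Structurally, the bot amounts to a merge of the four channels into a single chronological stream, along the lines of the Python sketch below.  The real pipeline runs in Yahoo Pipes, and the channel feed URLs shown are placeholders rather than actual endpoints.

```python
# A structural sketch of merging the four greentravel1 channels into one
# chronological stream.  The feed URLs are placeholders, not real endpoints.
import feedparser
from time import mktime

CHANNELS = {
    "#GTNews":    "http://example.com/gtnews.rss",
    "#GTRetweet": "http://example.com/gtretweet.rss",
    "#GTVideo":   "http://example.com/gtvideo.rss",
    "#GTFeeds":   "http://example.com/gtfeeds.rss",
}

def merged_stream(channels):
    """Pull every channel feed and return its items newest-first, tagged by channel."""
    items = []
    for tag, url in channels.items():
        for entry in feedparser.parse(url).entries:
            published = entry.get("published_parsed")
            stamp = mktime(published) if published else 0.0
            items.append((stamp, tag, entry.get("title", ""), entry.get("link", "")))
    return sorted(items, reverse=True)

for stamp, tag, title, link in merged_stream(CHANNELS)[:20]:
    print(tag, title, link)
```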


Special thanks to Prof Dr Marc Cohen of the Royal Melbourne Institute of Technology and the RMIT Master of Wellness Program for support of this research.

2 comments:

jbx028 said...

Hello,

As a big fan of AIML, Yahoo Pipes, and RDF/OWL, I can only commend your excellent blog.

I wrote a script for Mozilla Ubiquity which allows me to talk to my Nabaztag using the Pandorabot API.

If you are interested, you can look at this article : http://blog.makezine.com/archive/2009/02/nabaztag_rabbit_and_pandorabots_ai.html

Regards,
Johnny Baillargeaux...from France

Andrew Weatherly said...

Marcus,
As a teacher, I appreciate these explorations of language. Nice to see your site, Best--Andy in Asheville