20 January 2008
One of the definitions of semantic, as in Semantic Web or Web 3.0, is the property of language pertaining to meaning, where meaning is significance arising from relation, for instance the relation between words. I don’t recall hearing about corpus linguistics before deciding to animate my book and make it interactive. Apparently corpus linguistics has a long history of trying to derive rules from natural language, going back to work such as George Kingsley Zipf’s. As someone with a degree in psychology, I do know something of cognitive linguistics and its reaction to the machine-mind paradigm.
John McCarthy, who coined the term, called artificial intelligence “the science and engineering of making intelligent machines,” which today is often referred to as “the study and design of intelligent agents.” Wikipedia defines intelligence as “a property of the mind that encompasses… the capacities to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn.” Computational linguistics has emerged as an interdisciplinary field concerned with “the statistical and/or rule-based modeling of natural language.”
In publishing, a concordance is an alphabetical list of the main words used in a text, along with their immediate contexts or relations. Concordances are frequently used in linguistics to study a body of text. A concordancer is a program that constructs a concordance. In corpus linguistics, concordancers are used to retrieve sorted lists from a corpus or text. Concordancers that I looked at included AntConc, Concordance, WordSmith Tools and ConcApp. I found ConcApp, and in particular the companion program ConcGram, to be the most interesting. (Examples of web-based concordancers include KWICFinder.com and the WebAsCorpus.org Web Concordancer.)
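For anyone curious what a concordancer actually does, here is a rough Python sketch of a keyword-in-context (KWIC) listing; the file name book.txt and the window size are placeholders, and this is only an illustration, not how any of the programs above works internally.

import re

def kwic(text, keyword, window=5):
    # List each occurrence of `keyword` with up to `window` words of context on either side.
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    lines = []
    for i, tok in enumerate(tokens):
        if tok == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left:>40} | {keyword} | {right}")
    return lines

with open("book.txt", encoding="utf-8") as f:  # placeholder file name
    for line in kwic(f.read(), "travel"):
        print(line)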
Concgramming is a new computer-based method for categorising word relations and deriving the phraseological profile or ‘aboutness’ of a text or corpus. A concgram constitutes all of the permutations generated by the association of two or more words, revealing all of the word association patterns that exist in a corpus. Concgrams are used by language learners and teachers to study the importance of the phraseological tendency in language.
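To make “all of the permutations generated by the association of two or more words” concrete, here is a rough sketch that finds every co-occurrence of a word pair, in either order and with intervening words allowed; the four-word span is my assumption, not ConcGram’s actual setting.

import re

def concgram_hits(text, w1, w2, span=4):
    # Find each place w1 and w2 co-occur within `span` words of each other,
    # in either order -- the raw material of a two-word concgram.
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    hits = []
    for i, tok in enumerate(tokens):
        if tok in (w1, w2):
            other = w2 if tok == w1 else w1
            window = tokens[i + 1:i + 1 + span]
            if other in window:
                hits.append(" ".join(tokens[i:i + 1 + span]))
    return hits

print(concgram_hits("the dog chased the cat and the cat ignored the dog", "dog", "cat"))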
I was in fact successful in stripping all the sentences out of my latest book, VAGABOND GLOBETROTTING 3, simply by reformatting it in MS Word so that each sentence became its own paragraph. I then saved the result as a CSV file, in this case just a text file with one sentence per line. I was able to make a little utility that ran all those sentences through the Yahoo! Term Extraction API, extracting key terms and associating each term with its sentence in XML output, with terms as titles and sentences as descriptions. Using the great XSLT editor xsl-easy.com, I could then convert that XML output quickly and easily into AIML with a simple template.
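In place of the XSLT, roughly the same transformation can be sketched in a few lines of Python; the (term, sentence) pair below is made up, standing in for whatever the term extraction actually returned.

from xml.sax.saxutils import escape

def to_aiml(pairs):
    # Each key term becomes a pattern; its source sentence becomes the template.
    categories = []
    for term, sentence in pairs:
        categories.append(
            "  <category>\n"
            f"    <pattern>{escape(term.upper())}</pattern>\n"
            f"    <template>{escape(sentence)}</template>\n"
            "  </category>"
        )
    return ('<?xml version="1.0" encoding="UTF-8"?>\n<aiml version="1.0.1">\n'
            + "\n".join(categories) + "\n</aiml>\n")

pairs = [("sustainable tourism",
          "Sustainable tourism means minimizing your footprint wherever you go.")]
print(to_aiml(pairs))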
The problem I encountered was that all those key terms extracted from my book sentences, when tested, formed something like second-level knowledge that you couldn’t get out of the chatbot unless you already knew the subject matter…. So I then decided to try adding the concgrams to see if that could bridge the gap. I had to get someone to create a special program to marry the two-word concgrams from the entire book (minus the 100 most common words in English) to their sentences in a form I could use.
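I am not the one who wrote that program, but in rough outline it would look something like this; the ten-word stoplist here is only a stand-in for the 100 most common words.

import re
from itertools import combinations
from collections import defaultdict

STOPWORDS = {"the", "of", "and", "a", "to", "in", "is", "you", "that", "it"}  # stand-in stoplist

def concgram_index(sentences):
    # Map each two-word concgram (unordered pair, stopwords removed)
    # to every sentence in which both words occur.
    index = defaultdict(list)
    for sentence in sentences:
        words = {w for w in re.findall(r"[A-Za-z']+", sentence.lower())
                 if w not in STOPWORDS}
        for pair in combinations(sorted(words), 2):
            index[pair].append(sentence)
    return index

sentences = ["The dog chased the cat.", "A cat and a dog can be friends."]
for pair, found in concgram_index(sentences).items():
    print(pair, "->", found)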
It was only then that I began to discover some underlying differences between the verbotsonline.com and pandorabots.com chatbot engine platforms. I've been using verbotsonline because it seemed easier and cheaper than adding a mediasemantics.com character to the pandorabot. However, there is a 2.5 MB limit on verbotsonline knowledgebases, which I've reached three times already. Also, verbotsonline.com does not seem to accept multiple categories with the same pattern but different templates; at least, the AIML-Verbot Converter apparently removes the “duplicate” categories.
In verbots, spaces automatically match zero or more words, so wildcards are only necessary to match partial words. In effect, every word in a verbot pattern is automatically wildcarded, which makes matches much easier to achieve. So far I have been unable to replicate this simple scheme in AIML, which makes AIML more precise or controllable, but perhaps less versatile, at least in this case. Even with the AIML knowledgebase replicated eight times using the following patterns (a sketch for generating these permutations follows the list), I could not duplicate in pandorabots the results the verbots achieve with one file, wildcarding on every word in a phrase or term.
dog cat
dog cat *
_ dog cat
_ dog cat *
dog * cat
dog * cat *
_ dog * cat
_ dog * cat *
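Here is the sketch mentioned above: generating those eight permutations for any two-word term and wrapping each in a category; the example response is invented.

from itertools import product
from xml.sax.saxutils import escape

def wildcard_patterns(phrase):
    # The eight permutations: optional leading "_", optional medial "*", optional trailing "*".
    w1, w2 = phrase.upper().split()
    return [f"{lead}{w1}{mid}{w2}{tail}"
            for lead, mid, tail in product(["", "_ "], [" ", " * "], ["", " *"])]

def categories(phrase, response):
    return ["<category><pattern>" + escape(p) + "</pattern>"
            "<template>" + escape(response) + "</template></category>"
            for p in wildcard_patterns(phrase)]

for c in categories("dog cat", "Dogs and cats are both mammals."):
    print(c)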
The problem I encountered trying to “star” all words in AIML was that a star at the beginning of a pattern accepted only one word rather than several, and replacing it with the underscore apparently affects pattern prioritization. So there I am at the moment, stuck between verbots and pandorabots, unable to do what I want with either: verbotsonline for lack of capacity and the inability to convert “duplicate” categories into VKB, and pandorabots for the inability to conform to my fully wildcarded spectral word-association strategy….
08 January 2008
Books, metadata and chatbots… in search of the XML Rosetta Stone
I am an author and I build chatbots (aka chatterbots). A chatbot is a conversational agent, driven by a knowledgebase. I am currently trying to understand the best way to convert a book into a chatbot knowledgebase.
A knowledgebase is a form of database, and the chatbot is actually a type of search… an anthropomorphic form of search and therefore an ergonomic form of search. This simple fact is usually shrouded by the jargon of “natural language processing”, which may or may not be actual voice input or output.
According to the ruling precepts of the “Turing test”, chatbots must be as close as possible to conversational, and this is what differentiates them from pure “search”…. With chatbots there is a significant element of “smoke and mirrors” involved, which introduces the human psychological element into the machine in the form of cultural, linguistic and thematic assumptions and expectations, so becoming in a sense a sort of “mind game”.
I’m actually approaching this from two directions. I would also like to be able to feed RSS into a chatbot knowledgebase. There is currently no working example of this available. Parsing RSS into AIML (Artificial Intelligence Markup Language), the most common chatbot dialect, is problematic and yet to be cracked effectively. So, my thinking arrived at somehow breaking a book into a form that resembles RSS. The Wikipedia List of XML markup languages revealed a number of attempts to add metadata to books.
Dr. Wallace, the originator of AIML, recently responded on the pandorabots-general group that RSS title fields would usually be too specific to be useful as chatbot concept triggers. However, I believe utilities such as the Yahoo! Term Extraction API could be used to create tags for feed items, which might then prove more useful when mapped to AIML patterns….
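As a rough sketch of what I mean, using the feedparser library; the extract_terms function below is only a crude stand-in for a call to the Yahoo! Term Extraction API, not the real thing.

import feedparser  # third-party: pip install feedparser

def extract_terms(text):
    # Stand-in for a real term-extraction call; here, just capitalized words over three letters.
    return sorted({w.strip(".,") for w in text.split() if w[:1].isupper() and len(w) > 3})

feed = feedparser.parse("http://rss.groups.yahoo.com/group/green-travel/rss")
for entry in feed.entries:
    tags = extract_terms(entry.get("title", "") + " " + entry.get("summary", ""))
    print(entry.get("title", ""), "->", tags)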
My supposition is that a *good* book index is in effect a “taxonomy” of that book. Paragraphs would generally be too large to meet the specialized “conversational” needs of a chatbot. The results of a conventional concordance would be too general to be useful in a chatbot…. If RSS as we know it is currently too specific to function effectively in a chatbot, what if that index were mapped back to the referring sentences as “tags”, somewhat like RSS?
I figure that if you can relatively quickly break a book down into a sentence “concordance”, you could then point that at something like the Yahoo! Term Extraction API to quickly generate relevant keywords (or “tags”) for each sentence, which could then be used in AIML as triggers for those sentences in a chatbot…. Is there such a beast as a “sentence parser” for a corpus such as a common book? All I want to do at this point is strip out all the sentences and line them up, as a conventional concordance does with individual words.
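There may be no off-the-shelf “sentence parser”, but a naive one is only a few lines; the file name is a placeholder, and something like NLTK’s punkt tokenizer would handle the awkward cases (abbreviations and so on) better.

import re

def split_sentences(text):
    # Naive splitter: break after ., ! or ? when followed by whitespace and a capital letter.
    text = re.sub(r"\s+", " ", text).strip()
    return re.split(r"(?<=[.!?])\s+(?=[A-Z\"'])", text)

with open("book.txt", encoding="utf-8") as f:  # placeholder file name
    for sentence in split_sentences(f.read()):
        print(sentence)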
There are a number of examples of desktop chatbots using proprietary Windows speech recognition today; however, to my knowledge there are currently no chatbots available online or via VoIP that accept voice input (*not* IM or IRC bots)…. So, I’ve also spent some time lately looking into voiceXML (VXML), ccXML and the Voxeo callXML, as well as the Speech Recognition Grammar Specification (SRGS) and the mythical voice browser…. The only thing I could find that actually accepts voice input online for processing is Midomi.com, which accepts a hummed tune for tune recognition…. Apparently goog411, which is basically interactive voice response (IVR) rather than true speech recognition, is as close as it gets to a practical hybrid online/offline voice search application at this time. So, what if Google could talk?
Labels:
AIML,
callXML,
ccXML,
Chatbot,
concordance,
goog411,
IVR,
knowledgebase,
Midomi,
Pandorabots,
RSS,
speech recognition,
SRGS,
tags,
taxonomy,
term extraction,
Turing test,
voiceXML,
XML
30 December 2007
AIML <-> OWL ??
Since I posted my original query to the pandorabots-general list in July, I'm beginning to understand the concepts involved a little better, thanks also to replies from this group and others, such as the protege-owl list.
In a comment to my recent blog entry ("I'm dreaming of RSS in => AIML out"), Jean-Claude Morand has mentioned that RSS 1.0 would probably be more conducive to conversion into RDF or AIML than RSS 2.0. He also mentioned that the Dublin Core metadata standard may eventually overtake RSS in primacy....
So, can XSL transforms really be used to translate between RSS and RDF, and between RDF and AIML?? My understanding at this point is that talking about AIML and OWL is a bit like apples and oranges.... Apparently the output from an OWL Reasoner would be in RDF? I have by now discovered the Robitron group and am finding that archive to be a rich resource....
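Whether one stylesheet can carry you all the way from RSS to RDF to AIML I still don't know, but XSL transforms themselves are easy enough to run. Here is an illustrative (not standard) stylesheet, applied with the lxml library, that maps RSS 2.0 item titles to patterns and descriptions to templates; the feed file name is a placeholder.

from lxml import etree  # third-party: pip install lxml

XSLT = b"""<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>
  <xsl:template match="/rss/channel">
    <aiml version="1.0.1">
      <xsl:for-each select="item">
        <category>
          <pattern><xsl:value-of select="translate(title,
            'abcdefghijklmnopqrstuvwxyz', 'ABCDEFGHIJKLMNOPQRSTUVWXYZ')"/></pattern>
          <template><xsl:value-of select="description"/></template>
        </category>
      </xsl:for-each>
    </aiml>
  </xsl:template>
</xsl:stylesheet>"""

transform = etree.XSLT(etree.XML(XSLT))
rss = etree.parse("feed.xml")  # placeholder: any saved RSS 2.0 file
print(str(transform(rss)))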
What does this have to do with Pandorabots? I would like to address a brief question, in particular to Dr. Wallace... what do you see as the impediments to upgrading the Pandorabots service to include an OWL Reasoner (or in chaining it to another service that would provide the same function)? Surely you've considered this.... Where are the bottlenecks (other than time and money of course)? Is it an unreasonable expectation to be able to upload OWL ontologies much the same as we can upload AIML knowledgebases today?
As I have mentioned previously, one of my interests is creating knowledgebases on the fly using taxonomies. My belief is that quick and dirty knowledgebases are a more productive focus than pouring time and energy into trying to meet the requirements of the Turing test (another rant for another day....) Certainly with chatbots there is a substantial element of smoke and mirrors involved in any case.... One can always go back and refine as needed based on actual chat logs.
The next step for me will be to try and convert my most recent book, VAGABOND GLOBETROTTING 3, into a VagaBot.... I would like to hear from anyone with experience in converting books into AIML knowledgebases! My supposition is that a *good* book index is in effect a "taxonomy" of that book.... My guess is that I can use the index entries as patterns, and their referring sections as templates... to create at least the core of a knowledgebase. If more detail is needed then a concordance can always be applied to the book.
After that I hope to tackle creating quick and dirty AIML knowledgebases on the fly from RSS feed title and description fields... not in pursuit of the chimera of the Turing test, but simply to build a better bot. (Now, I wonder if anyone has ever created RSS from a book?!? ;^))
Labels:
AIML,
Artificial Intelligence,
Chatbot,
chatterbot,
OWL,
Pandorabots,
protege,
RDF,
robitron,
RSS,
Turing,
XSL,
XSLT
22 December 2007
I'm dreaming of RSS in => AIML out
I am still trying to get my head around the relationship between chatbots and the Semantic Web, or Web 3.0.... Any thoughts or comments on the precise nature of this relationship are welcome.
Converting from VKB back into AIML was my first crash course in working with XML dialects.... Since then the old lightbulb has gone off, or rather "on" I should say, and it suddenly dawned on me that the whole hullabaloo about Web 2.0 largely centers on the exchange of metadata, most often in the form of RSS, another XML dialect.
I was really stoked to learn of the work of Eric Freese, apparently processing logic using the Jena framework then manually(?) converting that RDF into AIML; however, I continue to wait for word of his "Semetag/AIMEE" example at http://www.semetag.com .
My understanding is that it is quite do-able, as in off the shelf, to pull RSS into a database and accumulate it there.... Could such a database of RSS not be used as a potential knowledgebase for a chatbot?
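My rough picture of the off-the-shelf part, with feedparser and the sqlite3 module; the feed URL, file name and table layout are placeholders.

import sqlite3
import feedparser  # third-party: pip install feedparser

conn = sqlite3.connect("knowledgebase.db")  # placeholder file name
conn.execute("CREATE TABLE IF NOT EXISTS items (title TEXT, description TEXT, link TEXT UNIQUE)")

feed = feedparser.parse("http://rss.groups.yahoo.com/group/green-travel/rss")
for entry in feed.entries:
    conn.execute("INSERT OR IGNORE INTO items VALUES (?, ?, ?)",
                 (entry.get("title", ""), entry.get("summary", ""), entry.get("link", "")))
conn.commit()

# A bot could then answer from whatever has accumulated, e.g. the most recent items:
for title, description, link in conn.execute(
        "SELECT title, description, link FROM items ORDER BY rowid DESC LIMIT 5"):
    print(title, "->", link)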
The missing element seems to be the processing, or DL Reasoner(?).... I have been unable to find any reference to such a web-based, modular DL Reasoner yet....
http://www.knoodl.com seems to be the closest thing to a "Web 2.0-style" collaborative ontology editor, which is fine for creating ontologies collectively, but falls short of meeting the processing requirement.
In short, I'm dreaming of RSS in => AIML out. At this point I would be happy with a "toy" or abbreviated system just to begin playing around with all this affordably (not least time-wise). So it seems what's still needed is a simple, plug and play "Web 2.0-style" (or is that "Web 3.0" style?) web-based DL Reasoner that accepts common OWL ontologies, then automagically goes from RDF into AIML....
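To be clear, nothing in the sketch below does any description-logic reasoning; it only shows the last hop, reading RDF with the rdflib library and emitting AIML categories. The ontology file name is a placeholder, and using rdfs:comment as the reply text is my own assumption.

from rdflib import Graph, RDFS  # third-party: pip install rdflib
from xml.sax.saxutils import escape

g = Graph()
g.parse("ontology.rdf")  # placeholder: any RDF/XML file rdflib can read

categories = []
for subject, _, comment in g.triples((None, RDFS.comment, None)):
    # Local name of the subject becomes the pattern; its rdfs:comment becomes the reply.
    name = str(subject).rsplit("#", 1)[-1].rsplit("/", 1)[-1]
    categories.append(f"<category><pattern>{escape(name.upper())}</pattern>"
                      f"<template>{escape(str(comment))}</template></category>")
print('<aiml version="1.0.1">' + "".join(categories) + "</aiml>")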
01 November 2007
Destination Meta-Guide.com 2.0 Update, November 2007
I spent a few weeks last month in the Australian bush on Aboriginal land with a group of old friends.
Since I’ve come back to my ecological niche in the green Shire, I’ve been inspired to overhaul the “Destination Meta-Guide.com 2.0” (http://www.meta-guide.com/). It is now effectively a “daily green travel newspaper” for virtually every country on Earth. The Destination Meta-Guide.com 2.0 combines elements of both the collaborative “Web 2.0” and the semantic web or “Web 3.0”.
Besides maps and photos for every country, it contains relevant news stories based on my improved green travel taxonomy and the latest green travel news for all countries listed in the right hand sidebar.
The Destination Meta-Guide.com 2.0 also contains a green-travel mini-guide for each country, consisting of up to ten of the most recent postings to the green-travel group for that country. This represents the collaborative or “Web 2.0” element - though the green-travel group has been in existence since 1991, which predates not only the graphical web we know today but also the concept of “Web 2.0” itself.
Further, the Destination Meta-Guide.com 2.0 contains an automated selection of green travel links for each country, drawn specifically from that country’s national domain. So, if you operate in a specific country and do not yet have a web presence under its national top-level domain, you should get one in order to be listed here.
The biggest additions lately have been four pages for every country, searching respectively Development Agencies, Development Banks, UN Agencies, and international NGOs for green travel topics. In particular, this application represents semantic web or “Web 3.0” technology, in that taxonomic filtering at multiple levels effectively creates semantic relationships, increasing relevancy.
These pages are highly configurable, so any suggestions you might have of what not to include or what to include will be most welcome! Feedback of any kind is encouraged, public or private.
I need your help now to continue this project. I am requesting donations of any amount via paypal.com toward sponsoring my research and development. In return, I will include a link of your choice in recognition of your contribution - and have created a detailed Sponsors page for this purpose at http://www.mendicott.com/meta-guide/sponsors.asp
Labels:
Australia,
destination,
development,
green tourism,
green travel,
guide,
NGOs,
search,
taxonomy,
web 2.0,
web 3.0
08 June 2007
green-travel taxonomy
Over the past 10 years or so I’ve been gradually developing a taxonomy, or classification system, for “green travel”, or more accurately green or sustainable tourism, an extension of my work with globetrotting and backpacker tourism. In other words, what are the key concepts involved in responsible tourism? A two dimensional taxonomy becomes an ontology when applied in three dimensions, as relationships among the concepts emerge. Taxonomies and ontologies are useful in artificial intelligence applications, such as bots.
I’ve spent much of the past decade tinkering with and tweaking the http://meta-guide.com which emerged from the old green-travel.com site. Today this would be called a “mashup”. Lately, I seem to have hit on a particularly useful algorithm, and have in effect taught the meta-guide to tell me everything in the popular press about “green travel” happening in our world today, in a more useful format, country by country… for nearly every “country” on Earth…. In particular, it returns the latest information about climate change and global warming in relation to tourism, in addition to ecotourism and sustainable tourism developments, etc.
I recommend trying the random country feature; let me know what you think, either in the green-travel group at http://groups.yahoo.com/group/green-travel or directly to me!
Marcus Endicott http://mendicott.com
05 May 2007
Vagabond Globetrotting for Green Travelers: Save a tree! A Classic Travel Book Becomes Available For Digital Audio Players
Fans of audiobooks can now enjoy Marcus Endicott's "Vagabond Globetrotting 3," the classic how-to book for global travelers, in Audible format for iPods and other portable devices, including Creative, Palm, and Windows Mobile. "Vagabond Globetrotting 3" delivers in a concise and practical narrative the wisdom gained from Endicott's 30 years of global travel and his unparalleled expertise in sustainable tourism and green-travel. Read by the author himself, the 20th anniversary edition of the book appears on Audible Wordcast at http://wordcast.audible.com/wordcast/page/mlendicott
Byron Bay, Australia (PRWEB) May 5, 2007 -- The 20th anniversary edition of green-travel expert Marcus Endicott's classic how-to book, "Vagabond Globetrotting 3," is now available in audiobook format via Audible Wordcast, read by the author himself. The Audible format plays on more than 160 varieties of portable digital audio players, including the iPod, Creative, Palm, and WindowsMobile devices.
Ron Mader of Planeta.com said of the print book: "Travelers seeking an eco-friendly, people-friendly vacation need to read this new edition of Vagabond Globetrotting. Based on 30 years of global travel and the reflections generated in the popular green-travel forum, this book stimulates and educates."
As a teenager, Endicott cut his teeth on the legendary hippie trail overland from Europe to Asia, through Iran, Afghanistan, Pakistan, and India. Ever since, he has been educating people about globalization and sustainability. An early contributor to travel resources on what evolved into the world wide web, Endicott is now regarded as a pioneer in the field of sustainable tourism.
Rick Steves of "Europe Through The Back Door" says in praise of the author: "It's easy to say 'travel is a vital force for peace', but Marcus Endicott has lived and taught that important vision like no one else for more than 20 years."
Including everything from papers to money to gear, from health to transport to work, "Vagabond Globetrotting 3" not only offers guidelines for budget travelers but also lays out the basics of green-travel, or how to be a good world citizen by minimizing your footprint.
Buy the audiobook now at http://wordcast.audible.com/wordcast/page/mlendicott
The print book is available at http://www.lulu.com/content/54996
Contact the author through http://mendicott.com
###
http://www.prweb.com/releases/2007/05/prweb524017.htm
Labels:
audible,
audiobook,
backpacking,
europe,
globetrotting,
marcus endicott,
tourism,
travel,
vagabond,
vagabond globetrotting
17 April 2007
Google is my back-end
My Travel & Tourism Destination Meta-Guide.com (http://www.meta-guide.com) has a number of new features.
I finally got around to converting the whole site from FrontPage Shared Borders to what is alternately known as a “Dynamic Web Template” or “Dreamweaver Template File” (dwt file).
I’ve really been enjoying the “Random country” option (http://www.mendicott.com/meta-guide/random.htm) I added to the new “Select country” dropdown menu.
I put my Easter holiday to use by adding Google maps to some 240 "countries", which took me about one afternoon to learn. The Google maps really add another dimension. It's fun to surf into the linked zooms for all the capital cities. Since I had previously spent a lot of time and effort geocoding all the country pages, it was much easier to set up without having to use the Google geocoder.
I've also added a flickr.com widget keyed to the tags "travel" and "countryname", which displays infinite random images of faces of our world... (not only entertaining when used together with the random country feature, but iconically informative as well in an oracular sort of way).
I reduced the number of searches available in the “Search” section, and focused the initial capital city search further by adding the travel and tourism parameter.
The geobot on the home page (http://www.mendicott.com/meta-guide/) is now talking again, and with a more Commonwealth accent, after a little break… though it still takes a while to load its large knowledgebase.
14 February 2007
Marcus Endicott's Second Life Travel Guide
Lately, I've been working on porting my ByronBot (http://www.mendicott.com/byronbay/) over to the currently popular Second Life (http://www.mendicott.com/secondlife/) 3D virtual world.
My ByronBot is built on the Verbots Online (http://www.verbotsonline.com/) platform.
Metaverse Technology (http://www.metaversetech.com/) has built a Second Life Chatbot product based on the Pandorabots (http://www.pandorabots.com/) platform.
Pandorabots is based on Artificial Intelligence Markup Language, rather than the proprietary Verbot KnowledgeBase files.
The Metaverse Technology S-Bot product combines the Second Life Linden Scripting Language with the Pandorabots AIML.
There is an open source AIML-Verbot Converter tool going from AIML to VKB, but not vice versa....
So, I've been using the recommended GaitoBot AIML editor to recreate the ByronBot KnowledgeBase in Pandorabots AIML.
You should be able to find my new Second Life ByronBot near Byron Bay @ 5 O'Clock Somewhere at http://slurl.com/secondlife/Plush%20Omega/101/21/22 .
Ideally, the Second Life ByronBot will emulate a tourist information officer of, for instance, a DMO (Destination Marketing Organization) inside the 3D virtual world... providing information about Byron Bay, Northern New South Wales, Australia.
Labels:
AIML,
Australia,
Byron Bay,
Chatbot,
LSL,
Metaverse,
Pandorabots,
Second Life,
Verbots,
virtual world,
VKB
01 January 2007
Travel Technology History Project: 1991 - 2001
I have started a basic Wikipedia page on Travel technology at http://en.wikipedia.org/wiki/Travel_technology
I am particularly interested in documenting the history of travel technology from the advent of the web in 1991 through the September 11, 2001 attacks, as well as the subsequent rise of immigration technology.
Please contribute and change as necessary!
31 December 2006
ByronBot - Byron Bay, Australia
Lately I've been working with chatterbot technology. A chatterbot is a computer program designed to simulate an intelligent conversation with one or more human users via auditory or textual methods; mine use AIML, or Artificial Intelligence Markup Language, an XML dialect for creating natural language software agents.
My latest creation is ByronBot (http://www.mendicott.com/byronbay/), which you can ask questions about Byron Bay and the Rainbow Region of northern New South Wales, Australia - where I now live.
Previously, I created the meta-guide geobot (http://www.meta-guide.com/), which knows continents, regions, all countries, all of their capitals, and can provide more such as maps, travel books and cool travel videos - as well as country-specific information about ecotourism and sustainable tourism.
One associate has compared ByronBot with Microsoft's Ms. Dewey (http://www.msdewey.com/), an Adobe Flash-based experimental interface for Windows Live Search.
Labels:
AIML,
Artificial Intelligence,
Australia,
Byron Bay,
ByronBot,
chatterbot,
meta-guide,
msdewey
19 December 2006
The difference between a web page and a blog
I would really like someone to be able to tell me why my green-travel yahoogroup RSS feed http://rss.groups.yahoo.com/group/green-travel/rss is not being indexed by any blog search engine, despite pinging via http://pingomatic.com/ ....
I wrote to http://www.technorati.com/ to ask why and they replied that they don't index mailing lists or forums .... However, http://blogsearch.google.com/ frequently returns messages from http://www.tribe.net/ groups ....