20 June 2018

2016-2017 Chatbot Trending News Analysis

From January 2016 to December 2017, worldwide web search statistics show a steady increase in interest in chatbots.

The launch of Facebook Messenger chatbots in April 2016 is generally considered to be the beginning of the current hype surrounding chatbots. This is reflected in web search trends; however, the trends show that the story of rogue Microsoft chatbot “Tay” preceded that event by a month, with equal search popularity. Microsoft gained further publicity in December not only with the announcement of Skype chatbots, but also with the release of its “Zo” chatbot. Thus, the trends reveal that Microsoft and Facebook were neck and neck in competition for the chatbot space, although perhaps in different markets.

While 2016 showed dramatic growth in chatbot interest worldwide, the trends show it leveling off throughout 2017, with only slight further growth in interest. The big chatbot stories of 2017 were dominated by negative scenarios. In July, Facebook shut down its language-learning experiment, in which chatbots developed their own language. In August, Chinese chatbots were shut down for going off-script in an unpatriotic way. More positively, in September, banking chatbots launched in Australia garnered worldwide attention, and in November, chatbots that could troll scammers were launched.

Preliminary results for 2018 show interest in chatbots flattening out, though remaining near the top of the overall interest scale.
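(For anyone wanting to reproduce these numbers, the underlying data can be pulled programmatically. Below is a minimal sketch using the unofficial pytrends library, an assumption on my part; any Google Trends export would serve equally well.)

```python
# Minimal sketch: worldwide search interest for "chatbot", 2016-2017,
# via the unofficial pytrends library (pip install pytrends).
from pytrends.request import TrendReq

pytrends = TrendReq(hl='en-US', tz=0)
pytrends.build_payload(['chatbot'], timeframe='2016-01-01 2017-12-31')
interest = pytrends.interest_over_time()  # pandas DataFrame, 0-100 scale

print(interest['chatbot'].describe())  # summary statistics of the series
```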




21 April 2018

Artificial Intelligence in Affective Computing

Affect means "touch the feelings of, or move emotionally", whereas affective means “relating to moods, feelings, and attitudes”. Thus affective computing is “the study and development of systems and devices that can recognize, interpret, process, and simulate human affects”. Affective computing is an interdisciplinary field spanning computer science, psychology, and cognitive science. It is sometimes called artificial emotional intelligence, or emotion AI. Emotional intelligence can be defined as “the capacity to be aware of, control, and express one's emotions, and to handle interpersonal relationships judiciously and empathetically”.

Sentiment analysis might be considered a primitive form of affective computing. Sentiment analysis may be defined as “the process of computationally identifying and categorizing opinions expressed in a piece of text”, especially in order to determine whether the writer's attitude is positive, negative, or neutral. Sentiment analysis may also be referred to as opinion mining, or emotion AI. SentiWordNet is a popular lexical resource for opinion mining. It assigns three sentiment scores (positivity, negativity, and objectivity) to WordNet synsets, or sets of synonyms. Natural language processing toolkits are often used for sentiment analysis, such as GATE, LingPipe, NLTK, R-Project, RapidMiner, StanfordNLP, UIMA, and WEKA.
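For illustration, here is a minimal sketch of SentiWordNet's three-score scheme, accessed through NLTK's corpus interface (assuming the relevant corpora have been downloaded):

```python
# Minimal sketch: SentiWordNet's three sentiment scores via NLTK.
import nltk
nltk.download('wordnet', quiet=True)
nltk.download('sentiwordnet', quiet=True)
from nltk.corpus import sentiwordnet as swn

# Each WordNet synset carries positivity, negativity, and objectivity scores.
synset = swn.senti_synset('happy.a.01')
print(synset.pos_score(), synset.neg_score(), synset.obj_score())
```

Summing or averaging such scores over the words of a text gives the crude positive/negative/neutral judgment described above.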

In terms of natural language, the 2011 book Affective Computing and Sentiment Analysis: Emotion, Metaphor and Terminology, edited by Khurshid Ahmad, addresses the role of metaphor in affective computing. A metaphor is something regarded as representative or symbolic of something else; in other words, it draws a comparison between unlike things. Contributor Andrew Goatly looks at metaphor as a resource for conceptualisation and expression of emotion. For instance, emotions may be present in deep lexical semantics. Metaphoricity is the quality of being metaphorical, which contributor Carl Vogel maintains involves sense modulation. In a conversational agent, affect may be transferred by metaphor, forming a kind of artificial or synthetic emotion.

In ‘affective dialog systems’, an ‘affect listener’ is a device which detects and adapts to the affective states of users, facilitating meaningful responses. The SEMAINE project was a well known European Union initiative to create a non-verbally competent ‘sensitive artificial listener’. SAL, the SEMAINE sensitive artificial listener, was in effect a kind of ‘emotional agent’, or ‘emotion agent’, which could be termed an ‘affective interface’.

Automated, or automatic, emotion recognition leverages techniques from signal processing, machine learning, and computer vision. Computers use different methods to interpret emotion, from Bayesian networks to Paul Ekman's ‘Facial Action Coding System’. A number of companies are now working with automatic emotion recognition, including affectiva.com (Emotion Recognition Software), eyeris.ai (Emotional AI and Face Analytics), imotions.com (Emotion Analysis Engine), nviso.ch (Emotion Recognition Software), and visagetechnologies.com (Face Tracking and Analysis).
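As a toy illustration of the Bayesian approach mentioned above, the sketch below trains a naive Bayes classifier on facial action-unit intensities in the spirit of Ekman's FACS. The feature vectors and labels are invented for the example; a real system would extract them with a computer-vision pipeline.

```python
# Toy sketch: naive Bayes emotion classification from hypothetical
# facial action-unit (AU) intensities, in the spirit of FACS.
from sklearn.naive_bayes import GaussianNB

# Invented training data: [AU6 cheek raiser, AU12 lip corner puller,
# AU4 brow lowerer] intensities on a 0-1 scale, with emotion labels.
X_train = [[0.9, 0.8, 0.1],
           [0.1, 0.1, 0.9],
           [0.8, 0.7, 0.0],
           [0.0, 0.2, 0.8]]
y_train = ['happy', 'angry', 'happy', 'angry']

model = GaussianNB().fit(X_train, y_train)
print(model.predict([[0.7, 0.9, 0.1]]))  # -> ['happy']
```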

From this overview, it becomes clear that there are two main aspects to affective computing: 1) emotion detection, or emotion recognition, and 2) artificial or synthetic emotion, or emotion synthesis. Facial recognition figures prominently in emotion detection; however, language, in particular metaphor, can also be used for emotion recognition. Voice biometrics are also being used for detecting emotion, something like polygraph biofeedback. Emotion may be synthesized facially in the form of an avatar, or talking head, not to mention an animatronic head. Emotional body language could be expressed by a humanoid robot. Natural language can also be used for the expression of computational affect, in the form of metaphor generation as a vehicle for emotion.


16 April 2018

Aesthetics In Artificial Intelligence: An Overview

Here we look at the scholarly literature on aesthetics in artificial intelligence, and by extension robotics, over the past decade. The focus is mainly on natural language processing, and in particular the sub-genre of natural language generation, or generative text.

Aesthetics can be defined as “a set of principles concerned with the nature and appreciation of beauty”. Since artificial intelligence is built on computing, let’s first look at aesthetics in computing. From the perspective of engineering, the traditional definition of aesthetics in computing could be termed structural, such as an elegant proof or a beautiful diagram, whereas a broader definition would include more abstract qualities of form and symmetry that enhance pleasure and creative expression.

2006

2006 appears to have been a watershed year for aesthetics in computing with the publication by MIT Press of Paul Fishwick’s book, Aesthetic Computing. In this book, key figures from art, design, and computer science set the foundation for a new discipline that applies art theory to computing.

Nick Montfort published Natural Language Generation And Narrative Variation In Interactive Fiction in 2006. In it, he demonstrated the application of concepts from narratology, such as separating expression from content, or discourse from story, to the new field of interactive fiction.

Also in 2006, Liu and Maes concretized a computational model of aesthetics in the form of an “Aesthetiscope”, as presented in Rendering Aesthetic Impressions Of Text In Color Space. Their device was a computer program that portrayed aesthetic impressions of text rendered as color grid artwork, partially based on Jungian aesthetic theory, and reminiscent of abstract expressionism.

2007

In Command Lines (2007), Liu and Douglass addressed “aesthetics and technique in interactive fiction and new media”. They analyse aesthetic developments in the field of interactive fiction, text-based narrative experiences, in the context of implied code versus frustration aesthetics, or the interactor’s mental model versus structural constraints.

2008

Game aesthetics are further explored by Andrew Hutchison in Making the Water Move (2008), examining “techno-historic limits in the game aesthetics of Myst and Doom”. He uses the landmark games as examples of the evolution of game aesthetics across time.

2009

Howe and Soderman talk about the RiTa toolkit for generative writing in The Aesthetics Of Generative Literature (2009). They discuss issues such as surprise, materiality, push-back, and layering within the larger contexts of generative art and writing practice.

In Data Portraits (2009), Dragulescu and Donath address “aesthetics and algorithms”. What they call data portraits would today be called an online footprint, primarily social media; their data portraits, however, engaged aesthetically with cinematography, typography and animation.

Michael Nixon’s Enhancing Believability (2009) evaluated the application of Delsarte’s aesthetic system in relation to designing virtual humans. Nixon empirically tested Delsarte’s system, and found it promising as a starting point for creating believable characters.

2010

Datta and Wang report on ACQUINE: Aesthetic Quality Inference Engine (2010), a public system which allowed users to upload photographs for automatic rating of aesthetic quality. The first public tool of its kind, this system was based on an SVM classifier which extracted visual features on the fly for real-time classification and prediction.

In Impossible Metaphors (2010), Hanna Kim addresses the premise of metaphorical un-interpretability of aesthetic terms, arguing instead in favor of metaphorical interpretability, based on multi-dimensionality and default dimension. Rafael De Clercq had previously postulated aesthetic terms are metaphorically uninterpretable, due to having no home domain of application.

2011

In 2011, Scott Dexter et al published On the Embodied Aesthetics of Code, broadly discussing the aesthetics of programming itself. They provide empirical evidence placing the “embodied experience of programming” in the context of “embodiment in the production of meaning”.

2012

Innovation theory, aesthetics, and science of the artificial after Herbert Simon (2012) by Helge Godoe broaches the aesthetic experience of generative literature from the user’s and designer’s roles, with reference to AI pioneer and Nobel laureate Herbert Simon in the context of innovation. Godoe suggests aesthetics, serendipity, and imagination form the “soul” of innovation.

Labutov and Lipson in Humor As Circuits In Semantic Networks (2012) present an implementation of their theory for the automatic generation of humor. They mine simple, humorous scripts from the well-known ConceptNet, with dual iteration maximized for Victor Raskin’s “Script-based Semantic Theory of Humor”.

2013

In The User's And The Designer's Role And The Aesthetic Experience Of Generative Literature (2013), Carvalho-Pereira and Maciel analyse the aesthetic interaction between writers, readers, and what they call “wreaders”. In testing, they found users don’t feel like co-authors.

2014

In the 2014 book, Examining Paratextual Theory and its Applications in Digital Culture, Desrochers and Apollon propose a theoretical and practical interdisciplinary framework. Today, Gérard Genette’s original paratext could be thought of as an online footprint, or “data portrait”.

Derezinski and Rohanimanesh’s An Information Theoretic Approach to Quantifying Text Interestingness (2014) studies the problem of automatically predicting text “interestingness”. They use a word distribution model with Jensen-Shannon divergence to measure text diversity, and demonstrate that it correlates with interestingness (a measure which, they point out, has been used elsewhere for humor identification).
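For the mechanics, here is a minimal sketch of the divergence measure itself, using SciPy (the paper's full model is not reproduced; note that SciPy's jensenshannon returns the square root of the divergence):

```python
# Minimal sketch: Jensen-Shannon divergence between two word distributions.
from collections import Counter
from scipy.spatial.distance import jensenshannon

def word_dist(text, vocab):
    """Normalized word-frequency distribution over a shared vocabulary."""
    counts = Counter(text.lower().split())
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in vocab]

a = "the cat sat on the mat"
b = "quantum cats entangle on mats"
vocab = sorted(set(a.split()) | set(b.split()))

# Square the result to recover the divergence from its square root.
jsd = jensenshannon(word_dist(a, vocab), word_dist(b, vocab)) ** 2
print(jsd)  # higher divergence ~ more "diverse", and perhaps more interesting
```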

Adesegun Oyedele et al in Individual Assessment Of Humanlike Consumer Robots (2014) examine how the aesthetic attributes of products appeal to consumers’ mental and emotional needs. They apply the “technology acceptance model” to consumer robots.

In his doctoral thesis, Digital Puppetry of Wayang Kulit Kelantan (2014), Khor Kheng Kia presents a study of how visual aesthetics impact the digital preservation of an endangered culture, Malaysian shadow puppet theatre. Key-frame animation and motion capture were both used in his experiments.

In Embodied Aesthetics In Auditory Display (2014), Roddy and Furlong address the importance of aesthetics in “auditory display”, or the use of sound for communication between a computer and user. They present arguments for an embodied aesthetic framework, thus rejecting the aesthetic theory of Immanuel Kant.

Debasis Ganguly et al in their paper, Automatic Prediction Of Text Aesthetics And Interestingness (2014), investigate the problem of automated aesthetics prediction from user-generated content and ratings, in this case Kindle “popular highlights” data. They use supervised classification to predict text aesthetics, with feature vectors for each text passage.
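Their exact features are not reproduced here, but a generic sketch of the supervised task, with TF-IDF features, a linear classifier, and invented passages and labels, would run along these lines:

```python
# Generic sketch: supervised prediction of text "aesthetics" from
# bag-of-words features. Passages and labels below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

passages = ["A velvet dusk settled over the quiet harbor.",
            "Press the red button to reboot the router.",
            "Her laughter scattered like light on water.",
            "Insert tab A into slot B and tighten the screw."]
labels = [1, 0, 1, 0]  # 1 = highlighted as aesthetic, 0 = not

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(passages, labels)
print(clf.predict(["The moon spilled silver across the fields."]))
```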

2015

Stefania Sansoni et al look at the role of personal characteristics in attraction, in The Aesthetic Appeal Of Prosthetic Limbs And The Uncanny Valley (2015). This may be the first research into the relationship between aesthetic attraction to devices and their human-likeness.

The 2015 book by Michele Dickey, Aesthetics and Design for Game-based Learning, is about emotionally imbuing participants with motivation and meaning through aesthetic experiences. This is a vital but neglected aspect of game-based learning.

The book Robots that Talk and Listen (2015), edited by Judith Markowitz, delves into the social impact of technology. In particular, David Duffy addresses “android aesthetics”, humanoid robots as works of art.

2016

In her thesis Quality of Aesthetic Experience and Implicit Modulating Factors (2016), Wendy Ann Mansilla refutes the standing assumption that aesthetics in digital media are restricted to the structural or superficial, and instead points to complex implicit variables that contribute to individual user experiences. She investigated various factors, such as use of color in eliciting emotion, presence of familiar characters or alter ego, and food craving versus pleasure technologies.

Takashi Ogata, in Automatic Generation, Creativity, and Production of Narrative Content (2016), considers automatic narrative, or story, generation from the perspective of cognitive science. The artistic and aesthetic problems are considered in terms of their relationships with technology.

In their 2016 book, Computational and Cognitive Approaches to Narratology, Ogata and Akimoto examine among other things the possibility of generating literary works by computer. They focus on the affective or psychological aspects of language engineering, with regard to “intertextuality”.

2017

In his book Aesthetic Origins (2017), Jay Patrick Starliper examines imagination through the work of Pulitzer Prize winning poet Peter Viereck. And through this philosophical deconstruction, Starliper demonstrates why books are bullets, perhaps giving rise to today’s weaponized narrative of bots and fake news.

Fabrizio Guerrini et al discuss an offspring of interactive storytelling applied to movies, called “movietelling”, in Interactive Film Recombination (2017). They propose the integration of narrative generation, AI planning, and video processing to model and construct filmic variants from baseline content.

Burt Kimmelman reconceives the actual in digital poetry and art in his paper, Code and Substrate (2017). Kimmelman argues that since the dynamic of digital poetry and art demands the dissolution of one from the other, this has brought the notion of embodiment to prominence.

Building on their 2014 work (above), Derezinski, Rohanimanesh and Hydrie present an approach based on a probabilistic model of surprise, and the construction of distributional word embeddings in their 2017 paper, Discovering Surprising Documents with Context-Aware Word Representations. They found it to be particularly effective for short documents, even down to the single sentence level.
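Their probabilistic model is not reproduced here, but a rough sketch of the underlying idea, with a simple embedding-centroid distance standing in for their context-aware representations and toy vectors standing in for trained embeddings, could look like this:

```python
# Rough sketch: "surprise" as distance between word embeddings and the
# document centroid. The 3-d vectors below are toy stand-ins for
# trained word embeddings.
import numpy as np

embeddings = {"stock":   np.array([0.9, 0.1, 0.0]),
              "market":  np.array([0.8, 0.2, 0.1]),
              "falls":   np.array([0.7, 0.3, 0.0]),
              "octopus": np.array([0.0, 0.1, 0.9])}

def surprise(words):
    vecs = np.stack([embeddings[w] for w in words])
    centroid = vecs.mean(axis=0)
    # mean cosine distance of each word from the document centroid
    cos = vecs @ centroid / (np.linalg.norm(vecs, axis=1)
                             * np.linalg.norm(centroid))
    return float(1 - cos.mean())

print(surprise(["stock", "market", "falls"]))    # low surprise
print(surprise(["stock", "market", "octopus"]))  # higher surprise
```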

In the Handbook of Writing, Literacies, and Education in Digital Cultures (2017), Theo van Leeuwen addresses aesthetics and text in the digital age. He argues that text, originally an art, has devolved into utility and industry. He maintains this devolution from the illuminated manuscript to black and white was a function of the dour nature of Protestantism and its unadorned delivery of the word.

Conclusion

Surveying the past decade or so, a number of figures, concepts and (new) terminologies emerge. As in the history of aesthetic inquiry in general, Immanuel Kant’s rejection of material dualism in favor of personal subjectivism relative to the material world figures prominently. François Delsarte’s “applied aesthetics” is introduced to robotic embodiment. Robotic design can thus take a cue from the sensibilities of “interior design”, extending beyond the structural and superficial.

The watershed emergence of a field of “aesthetic computing” in 2006 really kicks off the modern journey, with the increasing application of art theory and practice to computing. However, the field of “computational aesthetics” - considered a subfield of artificial intelligence, concerned with the computational assessment of beauty - can be traced back as far as 1928, to mathematician George David Birkhoff’s proposed “aesthetic measure”, M = O/C, the ratio of an object’s order to its complexity.

Across time, we can see that machine learning is gradually taking over, leading to what may be termed a “neural aesthetic”. This can be seen in numerous recent “artistic hacks”, such as Deepdream, NeuralTalk, and Stylenet. Couching advanced, or deep, machine learning technologies in artistic metaphor helps clarify otherwise obscure jargon, and makes the subject more accessible to people in the real world.

27 February 2018

Art, Imagination, and Creativity: Natural versus Artificial

Today, computational creativity explores the intersection between artificial intelligence, psychology, and natural language processing, largely based on neural network algorithms or machine learning. These branches of AI have benefited from a million-fold increase in computing power over the last two decades, a rate of change that shows no sign of stopping.

As early as 2012, IBM started a project with its Watson system to explore computational creativity. Since then, the Watson team has applied computational creativity to various domains. They have looked at how it can be used to develop new scents in the fragrance industry, create personalized itineraries for travel, and improve sports teams based on skills or strengths. In 2014, a collaboration with the Institute of Culinary Education led to the successful debut of Chef Watson at the annual South by Southwest festival in Austin, Texas.1 According to Lav Varshney, one of the system’s designers, the goal was not to replicate earlier styles or to solve a Turing test for cooking, but rather to invent new kinds of recipes.2

Released to the public in 2015, Google’s DeepDream was a computer vision program which used a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia. (Pareidolia may be defined as the human ability to see shapes or make pictures out of randomness.) The following year MIT's Nightmare Machine appeared, consisting of frightening imagery generated by a computer using deep learning algorithms.3

In the 1990s, David Cope, a composer at UC Santa Cruz, created a program called Emily Howell, with which he can chat and share musical ideas. He describes it as “a conversationalist composer friend… a true assistant.”4 She scores music, he tells her what he liked or didn’t like, and together they compose. Fast forward to 2015, when Kelland Thomas, a jazz musician and associate professor at the University of Arizona School of Music, was granted funding under a Defense Advanced Research Projects Agency program called Communicating with Computers to build a similar system, MUSICA (for Music Improvising Collaborative Agent), capable of musically improvising with a human player.5 According to Thomas, “We're trying to understand human creativity and simulate that."

There are also algorithms that can write hip-hop lyrics, for instance DeepBeat, developed by Eric Malmi at Aalto University in Finland in 2015.6 In 2016, Margareta Ackerman’s ALYSIA (Automated LYrical SongwrIting Application) came to the attention of the popular press. Also using a computer as a collaborator, Ackerman came up with a system that could help write melodies.7 Also in 2016, Sony CSL Flow Machines showcased perhaps the first pop song composed by artificial intelligence, titled “Daddy's Car”.8 That same year, a computer-generated musical, “Beyond the Fence”, even appeared on the London stage.6

Robotic painting is an emerging genre of art, to the extent that Wikipedia now includes an entry on robotic art. In 2013, an exhibition in Paris called You Can't Know my Mind featured The Painting Fool, an artificial artist offering free portraits on demand, created by Simon Colton, a researcher at the pre-eminent Computational Creativity Research Group at Goldsmiths, University of London.9 Since 2016, RobotArt has sponsored a $100,000-a-year contest in robotic painting.10

In 2011, the editors of one of the oldest student literary journals in the U.S. selected a poem called "For the Bristlecone Snag" for publication. It had been written by a computer, but no one could tell. Zackary Scholl, then an undergraduate at Duke University, had modified a program using a context-free grammar to auto-generate poems.6 The EU-funded What-if Machine project, 2013-2016, not only generated fictional storylines but also judged their potential usefulness and appeal.11 In early 2018, Microsoft unveiled a new technology called “drawing bot”, capable of creating images from text descriptions.12
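To give a flavor of the grammar-driven technique behind Scholl's poem generator, here is a minimal sketch using NLTK's context-free grammar tools; the toy grammar is invented for illustration and is far cruder than his actual program:

```python
# Minimal sketch: auto-generating verse from a context-free grammar
# with NLTK. The toy grammar below is invented for illustration.
from nltk import CFG
from nltk.parse.generate import generate

grammar = CFG.fromstring("""
  LINE -> NP VP
  NP   -> 'the' ADJ N
  VP   -> V 'the' ADJ N
  ADJ  -> 'silent' | 'broken'
  N    -> 'snag' | 'moon'
  V    -> 'remembers' | 'forgets'
""")

# Print the first four lines the grammar can derive.
for line in generate(grammar, n=4):
    print(' '.join(line))
```

Richer grammars, with larger lexicons and randomized rule choice, yield surprisingly poem-like output from the same mechanism.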

Computational humor is another area of computational creativity. Dragomir Radev, for one, is trying to come up with systems that actually understand and generate funny text.13 Games By Angelina is the home of ANGELINA, the research project of Michael Cook of Falmouth University, whose aim is to develop an AI system that can design video games intelligently.14

Since 2017, Philippe Pasquier has been teaching an online course in Generative Art and Computational Creativity, which introduces various algorithms from artificial intelligence, machine learning, and artificial life that are used for generative processes.15 Also in 2017, the World Science Festival in New York City featured a session on Computational Creativity: AI and the Art of Ingenuity, in which experts in psychology and neuroscience explored the roots of creativity in humans and computers, what artificial creativity reveals about human imagination, and the future of hybrid systems that build on the capabilities of both.16 Organized by the Association for Computational Creativity, the International Conference on Computational Creativity is the premier academic forum for researchers, and has in turn spawned the Musical Metacreation workshop series. Metacreation refers to tools and techniques from artificial intelligence, artificial life, and machine learning, inspired by cognitive and natural science, that are used for creative tasks.17

At the very root of “imagination” is not only the word “image” but also the image itself. I believe the most fundamental question is: how are images processed in the human mind, or brain, in such a way as to lead to creativity? This in turn raises the questions of how words are converted into images, and images into words, in both humans and machines. And more specifically, how can image processing in machines lead to creativity?

References:

  1. Stinson, E. “America's Next Top Chef Is a Supercomputer From IBM.” Wired (June 2015).
  2. Marcus, G. “Cooking with I.B.M.: The Synthetic Gastronomist.” The New Yorker (April 2013).
  3. Dormehl, L. “This AI generates fake Street View images in impressive high definition.” Digital Trends (August 2017).
  4. Hutson, M. "Our Bots, Ourselves." The Atlantic (March 2017).
  5. Misener, D. “New musical Beyond the Fence created almost entirely by computers.” CBC News (December 2015).
  6. Kane, K. “Algorithm and rhyme: Artificial intelligence takes on songwriting.” Palo Alto Weekly (April 2017).
  7. Needham, J. "We Are The Robots: Is the future of music artificial?" FACT Magazine (February 2017).
  8. Johns, S. “Artificial intelligence experts question if machines can ever be truly creative.” Imperial College London (January 2018).
  9. Arbesman, S. "Computational Creativity and the What-If Machine." Wired (January 2015).
  10. Perez, S. “Microsoft’s new drawing bot is an AI artist.” TechCrunch (January 2018).
  11. Weir, W. “Programming for laughs: A.I. tries its hand at humor at YSEAS.” YaleNews (December 2017).
  12. Parkin, S. “AI Is Dreaming Up New Kinds of Video Games.” MIT Technology Review (November 2017).
  13. Luckow, D. "SFU MOOC a new route for students." SFU News (January 2017).
  14. Rockmore & Casey. "Humans and Machines Making Beautiful Music Together." Slate (July 2017).


06 November 2017

What are chatbots? And what is the chatbot community?

In the beginning, all bots on IRC (Internet Relay Chat) were popularly referred to as “chat bots”.  IRC was the predecessor of IM (Instant Messaging) for realtime chat, and Facebook Messenger is in turn basically the successor of IM.

After years of IM services fighting bots and automation, in a surprise move Facebook opened Messenger to bots in April 2016, which I call the “Facebook April surprise”.  Immediately, people began referring to Facebook Messenger bots as “chat bots” (note space).  Until then, the term chatbots (no space) had been gradually taking over the space previously known as chatterbots.

Since the Facebook April surprise, there has been a grand confusion reigning with people talking at cross-purposes about chatbots, challenging expectations all around.  Basically, Facebook Messenger chatbots have become “chat apps”, with lots of graphical UI elements, such as cards, interspersed with natural language.

Prior to the Facebook April surprise, there had long been a robust chatterbot community largely gathered around the controversial Loebner Prize.  To this day, the Loebner Prize has been the only significant implementation of the Turing test in popular use.  I happen to believe that the Turing test itself is problematic, if not a red herring; however, the contest’s founder Hugh Loebner deserves a place in history for stimulating the art, especially through the so-called AI winter.

There are further stakeholders in this melee.  In addition to the academic community of artificial intelligence researchers, there is also the natural language processing community.  Some people count NLP as a subset of AI, though a good argument can also be made against that.  My long investigation into NLP has shown me that natural language processing has been largely predicated on the analysis, or deconstruction, of natural languages, for instance in machine translation, leading to natural language understanding.  It is only relatively recently that an emphasis has been placed on the construction of natural language, generally referred to as natural language generation.

Artificial intelligence itself is not a very useful term, as it implies replicating, or copying, human intelligence, which carries its own set of baggage.  As used today, it is so broad as to be ineffectual.  In short, AI researchers are not necessarily chatterbot, or dialog system, researchers, nor are NLP researchers.  There are various and sundry loci for high level discourse on dialog systems in the academic community, often with large corporations hanging around the periphery.

There used to be a very good, informal mailing list for the Loebner Prize crew, but it suddenly got deleted in a fit of passion.  From there, the chatterbot community more or less came in from the cold, gradually consolidating from perhaps a dozen separate web forums to the chatbots.org “AI Zone” forum, largely dedicated to the art of hand-crafting so-called pattern-matching dialog systems, or chatterbots.
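For the uninitiated, pattern matching here means something like the toy sketch below: hand-written patterns mapped to canned responses, the basic mechanism behind ELIZA-style chatterbots and AIML (the patterns are invented for illustration):

```python
# Toy sketch of a hand-crafted pattern-matching chatterbot,
# the basic mechanism behind ELIZA-style bots and AIML.
import re

RULES = [
    (re.compile(r'\bhello\b|\bhi\b', re.I),
     "Hello! What shall we chat about?"),
    (re.compile(r'\bmy name is (\w+)', re.I),
     r"Nice to meet you, \1."),
    (re.compile(r'\bweather\b', re.I),
     "I hear it's lovely out. Do you get outside much?"),
]

def respond(utterance):
    for pattern, response in RULES:
        match = pattern.search(utterance)
        if match:
            return match.expand(response)  # fills in captured groups, if any
    return "Tell me more."

print(respond("Hi there"))
print(respond("My name is Ada"))
```

Real chatterbots like Mitsuku layer thousands of such hand-crafted patterns, plus wildcards, variables, and topic tracking, on top of this same basic loop.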

Hot on the heels of the Facebook April surprise, an enterprising young man named Matt Schlicht opened the Facebook Bots (chatbot) group, which had close to 18,000 members six months later (and close to 30,000 members today, 18 months later).  I would say throughout that process it has provided an informative and dynamic timeline, around which a new community has rallied.  However, that same community has collectively come to realize that Facebook Messenger “chat apps” are not the chatterbots everyone has been dreaming about.

Matt Schlicht had a proverbial tiger by the tail, in the form of his Facebook Bots group.  Due to the public pressure of a scandal of his own making, he initiated the process of electing a moderation team in November 2016.  I know how difficult managing online communities can be, through my own experience with the once popular travel mailing list, infotec-travel, throughout the dot-com bubble of the 1990s, an online community which ultimately led to much of the online travel infrastructure we enjoy today.

Not only have I been banned from Matt Schlicht’s Facebook Bots group, I have been banned twice, and am still banned today.  The first time, I was banned after posting about my chatbot consulting services.  However, due to the gracious intercession of current Loebner Prize holder, Mitsuku developer Steve Worswick, my group membership was reinstated.  I was then banned a second time after sharing a private offer from Tinman Systems for their high end artificial intelligence middleware.

After being ejected from the Facebook Bots group for the second time, I started my own Facebook group, Chatbot Developers Bangalore, due to my particular interest in AI, NLP, and chatbots in India.  (I also happen to be co-organizer of the Bangalore Robotics Meetup.)  Today, I blog about this a year later, because one of those new Facebook Bots group admins stirred up the controversy again by requesting admission to a closed Facebook group of which I'm a member, the Australian Chatbot Community.


This blog post was originally submitted to VentureBeat in November 2016, prior to the successful election of a new administration team for the Facebook Bots group.

07 March 2017

Marcus Endicott successfully predicts IBM Watson Salesforce partnership

IBM Watson announced a partnership with Salesforce Einstein on March 6, 2017.

Mar 06, 2017: IBM and Salesforce today announced a global strategic partnership to deliver joint solutions designed to leverage artificial intelligence and enable companies to make smarter decisions, faster than ever before. With the partnership, IBM Watson, the leading AI platform for business, and Salesforce Einstein, AI that powers the world’s #1 CRM, will seamlessly connect to enable an entirely new level of intelligent customer engagement across sales, service, marketing, commerce and more. IBM is also strategically investing in its Global Business Services capabilities for Salesforce with a new practice to help clients rapidly deploy the combined IBM Watson and Salesforce Einstein capabilities.

Salesforce Einstein was launched in September 2016.

Sep 18, 2016: Salesforce forms research group, launches Einstein A.I. platform that works with Sales Cloud, Marketing Cloud

For those still paying attention... I have been going on and on about this needing to happen for the past three years, on Quora.

Sep 1, 2014: I don't know of another system that integrates more systems, more easily than Salesforce. My main critique of Salesforce is that it is too rigidly focused on conventional business process, and does not allow enough leeway for the Internet of Things, much less for experimental AI....

Nov 13, 2014: Bluemix appears to be an empowerment play to widen the base of developers to include those less proficient in pure coding, along the lines of Salesforce. That said, when Bluemix becomes as user-friendly as Salesforce, only then will I consider it fully baked.

Feb 24, 2015: There needs to be something along the lines of Salesforce that is not exclusively limited to conventional business processes, but something broad enough to include all the possibilities of experimental AI.

Dec 3, 2015: I'm most interested in "Lego-ization", and the plug-and-play model, which to some degree would require as yet non-existent standards. Think "Integration Platform as a Service", something along the lines of Salesforce meets MATLAB, up to the challenges of experimental AI of all kinds.

Aug 3, 2016: I want a *visual* middleware, along the lines of the highly modular Salesforce, but for experimental artificial intelligence instead of severely restricted to conventional business solutions.

15 September 2015

2015 PATA Technology Forum, Bangalore

On 06 September 2015, I had the pleasure of attending the first Pacific Asia Travel Association Technology Forum, at the Bangalore International Exhibition Center, in partnership with phocuswright.com and connectingtravel.com. PATA is the travel and tourism industry association for the Asia Pacific region, now headed by artificial intelligence investor Mario Hardy. Phocuswright is the global nexus for technology in travel and tourism. Connecting Travel is a new professional social network initiative for the travel and tourism industry by Travel Weekly. (Both Phocuswright and Travel Weekly are now owned by Northstar Travel Media.)


The opening speaker was the prominent investor and philanthropist Mohandas Pai, in his role as chairman of the Karnataka Tourism Vision Group. Pai, who is heavily invested in tripfactory.com, provided a 360 degree overview of the skyrocketing digital economy in India, as well as its impacts on travel and tourism, both domestically and internationally. One of the most interesting things he mentioned was Aadhaar, the Unique Identification Authority of India, basically the world's largest national identification number project, set to biometrically empower millions of people without a conventional paper trail or fixed abode.


Tony D'Astolfo, managing director of Phocuswright, introduced the new "Phocuswright Fast Track" by calling it an event within an event. Phocuswright offers recent research on the Indian travel market, and not only maintains a dedicated team in India, but is also planning a full Phocuswright India travel technology conference 21-22 April 2016 near New Delhi, in Gurgaon.


Chetan Kapoor, Phocuswright research analyst for Asia Pacific, put the spotlight on Indian holidays and package travelers, highlighting the evolution of the Indian traveler, and how their shopping and booking habits are transforming traditional holidays and packages.



In the first executive roundtable, titled "Beyond Air - The Next Phase of India's Online Travel Story", Chetan Kapoor presented three of India's new travel and tourism heavyweights:
In terms of traffic, Tripadvisor is consistently within the top 3 travel sites in India, listing more than 30,000 Indian accommodations, with the largest number of reviews. HolidayIQ is a Bangalore-based travel information and review portal, with over 3 million members, listing 2,000 tourism destinations, and more than 50,000 accommodations, in India alone. Cleartrip is one of the top online travel agents in India, attracting more than $70 million in funding.


In the second executive roundtable, titled "Travel Innovation Summit Alumni Spotlight", Tony D'Astolfo introduced three of India's most innovative entrepreneurs to discuss how they are transforming the travel industry, at home and abroad:

Intuitive travel planner Mygola has recently been acquired by MakeMyTrip, one of India’s leading travel companies. TableGrabber, India's first real-time online restaurant reservation system, has recently launched RezGuru, a middle-layer software for restaurants. TripHobo, a travel itinerary-planning portal, recently announced a partnership with Zomato, a leading restaurant discovery platform made in India.


For the executive interview, Tony D’Astolfo did a one-on-one with Ritesh Agarwal, the 21-year-old founder and CEO of OYO Rooms, India's largest branded network of hotels. Not only is he one of India's youngest CEOs, but also India's most successful college drop-out. Agarwal was the first Indian to receive a $100,000 fellowship grant from Peter Thiel, which he invested in developing OYO Rooms. And most recently, he has raised $100 million from Japan’s SoftBank for OYO Rooms. Legend has it that Agarwal started OYO Rooms, which stands for "On Your Own", because his relatives would not let him control the TV remote when he was a child in India. On a personal note, I can say for sure that I am staying in better places in India, and paying less, now than I was a year ago, due to the phenomenal concept that is OYO Rooms.


Following lunch, Connecting Travel organized the "Technology Trends Defining Business Strategy" session, moderated by Tony Tenicela, IBM executive and global leader managing business development. This session focused on how global market players are redefining business models to adapt to the accelerated pace of communication, marketing, and loyalty initiatives. Social media, and virtual networks, figure prominently in creating vertical platforms that are aggregating professionals, consumers, advisers and investors into communities.
Helena Egan, director of industry relations at TripAdvisor, is primarily concerned with building relationships with destination marketing organisations, as well as educating the industry on the benefits of user-generated content. Kenny Picken, CEO of Traveltek, a leading provider of travel technology solutions, shared valuable insights into how Traveltek empowers industry stakeholders, rather than by-passing them. Philip Napleton, VP at Open Destinations, providing software for tour operators and wholesalers, emerged as the voice of the younger generation, with his insight into social media and mobile applications. Rika Jean-Francois, head of corporate social responsibility for Internationale Tourismus-Boerse Messe Berlin, was the only person to emphasize the potential of travel technology in developing sustainable tourism. Mike Kistner, CEO of RezNext, a real-time hotel distribution technology company, provided the perspective of a seasoned travel technology professional. Daniela Wagner, of Connecting Travel at Travel Weekly, spoke of how their new social network platform can benefit travel professionals.
