20 June 2018

2016-2017 Chatbot Trending News Analysis

From January 2016 to December 2017, a steady increase in interest in chatbots can be seen, reflected in worldwide web search statistics.

The launch of Facebook Messenger chatbots in April 2016 is generally considered the beginning of the current hype surrounding chatbots. This is reflected in web search trends; however, the trends show that the story of Microsoft’s rogue chatbot “Tay” preceded that event by a month, with equal popularity. Microsoft gained further publicity in December, not only with the announcement of Skype chatbots but also with the release of its “Zo” chatbot. Thus, the trends reveal that Microsoft and Facebook were neck and neck in the competition for the chatbot space, although perhaps in different markets.

While 2016 showed dramatic growth in chatbot interest worldwide, the trends show it leveling off throughout 2017, with slower growth in interest. The big chatbot stories of 2017 were dominated by negative scenarios: in July, Facebook shut down its language-learning experiment after chatbots developed their own language, and in August, Chinese chatbots were shut down for going off-script in unpatriotic ways. More positively, banking chatbots launched in Australia in September garnered worldwide attention, and in November, chatbots were launched that could troll scammers.

Preliminary results for 2018 show interest in chatbots holding flat, but remaining at the top of the overall interest scale.




21 April 2018

Artificial Intelligence in Affective Computing

Affect means “to touch the feelings of, or move emotionally”, whereas affective means “relating to moods, feelings, and attitudes”. Thus affective computing is “the study and development of systems and devices that can recognize, interpret, process, and simulate human affects”. Affective computing is an interdisciplinary field spanning computer science, psychology, and cognitive science. It is sometimes called artificial emotional intelligence, or emotion AI. Emotional intelligence can be defined as “the capacity to be aware of, control, and express one's emotions, and to handle interpersonal relationships judiciously and empathetically”.

Sentiment analysis might be considered a primitive form of affective computing. It may be defined as “the process of computationally identifying and categorizing opinions expressed in a piece of text”, especially in order to determine whether the writer's attitude is positive, negative, or neutral. Sentiment analysis may also be referred to as opinion mining, or emotion AI. SentiWordNet is a popular lexical resource for opinion mining; it assigns three sentiment scores (positivity, negativity, and objectivity) to WordNet synsets, or sets of synonyms. Natural language processing toolkits are often used for sentiment analysis, such as GATE, LingPipe, NLTK, R-Project, RapidMiner, StanfordNLP, UIMA, and WEKA.
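To make this concrete, here is a minimal sketch of reading SentiWordNet’s three scores through NLTK (both named above); the particular synset chosen is just an illustration.

```python
# Minimal sketch: reading SentiWordNet's three sentiment scores via NLTK.
# The synset "good.a.01" (first adjective sense of "good") is illustrative.
import nltk
from nltk.corpus import sentiwordnet as swn

nltk.download("wordnet", quiet=True)        # WordNet synsets
nltk.download("sentiwordnet", quiet=True)   # sentiment scores for synsets

synset = swn.senti_synset("good.a.01")
# The three scores sum to 1.0: positivity, negativity, objectivity.
print(synset.pos_score(), synset.neg_score(), synset.obj_score())
```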

In terms of natural language, the 2011 book Affective Computing and Sentiment Analysis: Emotion, Metaphor and Terminology, edited by Khurshid Ahmad, addresses the role of metaphor in affective computing. A metaphor is something considered representative or symbolic of something else; in other words, it compares unlike things. Contributor Andrew Goatly looks at metaphor as a resource for conceptualisation and expression of emotion; for instance, emotions may be present in deep lexical semantics. Metaphoricity is the quality of being metaphorical, which contributor Carl Vogel maintains involves sense modulation. In a conversational agent, affect may be transferred by metaphor, forming a kind of artificial or synthetic emotion.

In ‘affective dialog systems’, an ‘affect listener’ is a device which detects and adapts to the affective states of users, facilitating meaningful responses. The SEMAINE project was a well known European Union initiative to create a non-verbally competent ‘sensitive artificial listener’. SAL, the SEMAINE sensitive artificial listener, was in effect a kind of ‘emotional agent’, or ‘emotion agent’, which could be termed an ‘affective interface’.

Automated, or automatic, emotion recognition leverages techniques from signal processing, machine learning, and computer vision. Computers use different methods to interpret emotion, from Bayesian networks to Paul Ekman's ‘Facial Action Coding System’. A number of companies are now working with automatic emotion recognition, including affectiva.com (Emotion Recognition Software), eyeris.ai (Emotional AI and Face Analytics), imotions.com (Emotion Analysis Engine), nviso.ch (Emotion Recognition Software), and visagetechnologies.com (Face Tracking and Analysis).
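As a toy illustration of the Bayesian side of this, the sketch below fits a naive Bayes classifier to hand-made FACS-style action-unit features; the data, labels, and feature choice are invented, not any vendor’s actual system.

```python
# Toy sketch: naive Bayes over FACS-style action-unit features.
# Feature columns: presence (1) or absence (0) of [AU6, AU12, AU4, AU15].
from sklearn.naive_bayes import BernoulliNB

X = [[1, 1, 0, 0],   # cheek raiser + lip corner puller
     [0, 0, 1, 1],   # brow lowerer + lip corner depressor
     [1, 1, 0, 0],
     [0, 0, 1, 1]]
y = ["happiness", "sadness", "happiness", "sadness"]

model = BernoulliNB().fit(X, y)
print(model.predict([[1, 1, 0, 0]]))  # -> ['happiness']
```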

From this overview, it becomes clear that there are two main aspects to affective computing: 1) emotion detection, or emotion recognition, and 2) artificial or synthetic emotion, or emotion synthesis. Facial recognition figures prominently in emotion detection; however, language can also be used for emotion recognition, in particular metaphor. Voice biometrics are also being used for detecting emotion, somewhat like polygraph biofeedback. Emotion may be synthesized facially in the form of an avatar, talking head, or even animatronic head. Emotional body language could be expressed by a humanoid robot. Natural language can also be used to express computational affect, in the form of metaphor generation as a vehicle for emotion.


16 April 2018

Aesthetics In Artificial Intelligence: An Overview

Here we look at the scholarly literature on aesthetics in artificial intelligence, and by extension robotics, over the past decade. The focus is mainly on natural language processing, and in particular the sub-genre of natural language generation, or generative text.

Aesthetics can be defined as “a set of principles concerned with the nature and appreciation of beauty”. Since artificial intelligence is built on computing, let’s first look at aesthetics in computing. From the perspective of engineering, the traditional definition of aesthetics in computing could be termed structural, as in an elegant proof or a beautiful diagram. A broader definition would include more abstract qualities of form and symmetry that enhance pleasure and creative expression.

2006

2006 appears to have been a watershed year for aesthetics in computing, with the publication by MIT Press of Paul Fishwick’s edited volume, Aesthetic Computing. In this book, key figures from art, design, and computer science set the foundation for a new discipline that applies art theory to computing.

Nick Montfort published Natural Language Generation And Narrative Variation In Interactive Fiction in 2006. In it, he demonstrated applying concepts from narratology, such as separating expression from content, or discourse from story, to the field of interactive fiction.

Also in 2006, Liu and Maes concretized a computational model of aesthetics in the form of an “Aesthetiscope”, as presented in Rendering Aesthetic Impressions Of Text In Color Space. Their device was a computer program that portrayed aesthetic impressions of text rendered as color grid artwork, partially based on Jungian aesthetic theory, and reminiscent of abstract expressionism.

2007

In Command Lines (2007), Liu and Douglass addressed “aesthetics and technique in interactive fiction and new media”. They analyse aesthetic developments in the field of interactive fiction, text-based narrative experiences, in the context of implied code versus frustration aesthetics, or the interactor’s mental model versus structural constraints.

2008

Game aesthetics are further explored by Andrew Hutchison in Making the Water Move (2008), examining “techno-historic limits in the game aesthetics of Myst and Doom”. He uses the landmark games as examples of the evolution of game aesthetics across time.

2009

Howe and Soderman talk about the RiTa toolkit for generative writing in The Aesthetics Of Generative Literature (2009). They discuss issues such as surprise, materiality, push-back, and layering within the larger contexts of generative art and writing practice.

In Data Portraits (2009), Dragulescu and Donath address “aesthetics and algorithms”. What they called data portraits would today be called an online footprint, primarily social media; their data portraits, however, engaged aesthetically with cinematography, typography, and animation.

Michael Nixon’s Enhancing Believability (2009) evaluated the application of Delsarte’s aesthetic system in relation to designing virtual humans. Nixon empirically tested Delsarte’s system, and found it promising as a starting point for creating believable characters.

2010

Datta and Wang report on ACQUINE: Aesthetic Quality Inference Engine (2010), a public system which allowed users to upload photographs for automatic rating of aesthetic quality. The first public tool of its kind, this system was based on an SVM classifier which extracted visual features on the fly for real-time classification and prediction.
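The sketch below is not Datta and Wang’s code, but it illustrates the general recipe: an SVM trained on a handful of numeric visual features to predict an aesthetic rating; the features and ratings here are invented.

```python
# Illustrative sketch of SVM-based aesthetic-quality prediction.
# Feature columns (invented): [mean brightness, colourfulness, rule-of-thirds fit]
from sklearn.svm import SVC

X = [[0.8, 0.7, 0.9],
     [0.3, 0.2, 0.1],
     [0.7, 0.8, 0.8],
     [0.2, 0.3, 0.2]]
y = [1, 0, 1, 0]  # 1 = rated high aesthetic quality, 0 = low

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[0.75, 0.6, 0.85]]))  # predicted rating for a new photo
```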

In Impossible Metaphors (2010), Hanna Kim addresses the premise of the metaphorical un-interpretability of aesthetic terms, arguing instead in favor of metaphorical interpretability, based on multi-dimensionality and a default dimension. Rafael De Clercq had previously postulated that aesthetic terms are metaphorically uninterpretable, having no home domain of application.

2011

In 2011, Scott Dexter et al. published On the Embodied Aesthetics of Code, broadly discussing the aesthetics of programming itself. They provide empirical evidence placing the “embodied experience of programming” in the context of “embodiment in the production of meaning”.

2012

Innovation theory, aesthetics, and science of the artificial after Herbert Simon (2012) by Helge Godoe broaches aesthetics and the science of the artificial, with reference to AI pioneer and Nobel laureate Herbert Simon, in the context of innovation. Godoe suggests aesthetics, serendipity, and imagination form the “soul” of innovation.

Labutov and Lipson, in Humor As Circuits In Semantic Networks (2012), present an implementation of their theory for the automatic generation of humor. They mine simple humorous scripts from the well-known ConceptNet semantic network, using dual iteration maximized for Victor Raskin’s “Script-based Semantic Theory of Humor”.

2013

In The User's And The Designer's Role And The Aesthetic Experience Of Generative Literature (2013), Carvalho-Pereira and Maciel analyse the aesthetic interaction between writers, readers, and what they call “wreaders”. In testing, they found users don’t feel like co-authors.

2014

In the 2014 book, Examining Paratextual Theory and its Applications in Digital Culture, Desrochers and Apollon propose a theoretical and practical interdisciplinary framework. Today, Gérard Genette’s original paratext could be thought of as an online footprint, or “data portrait”.

Derezinski and Rohanimanesh’s An Information Theoretic Approach to Quantifying Text Interestingness (2014) studies the problem of automatically predicting text “interestingness”. They use a word distribution model with Jensen-Shannon divergence to measure text diversity, and demonstrate that it correlates with interestingness (a measure which, they point out, has been used elsewhere for humor identification).
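As a small worked example of the core measure (not the authors’ implementation), the sketch below computes the Jensen-Shannon divergence between the word distributions of two short texts.

```python
# Worked example: Jensen-Shannon divergence between word distributions.
from collections import Counter
from scipy.spatial.distance import jensenshannon

def word_dist(text, vocab):
    counts = Counter(text.split())
    total = sum(counts.values())
    return [counts[w] / total for w in vocab]

doc_a = "the cat sat on the mat"
doc_b = "the dog ran in the park"
vocab = sorted(set(doc_a.split()) | set(doc_b.split()))

p, q = word_dist(doc_a, vocab), word_dist(doc_b, vocab)
# scipy returns the square root of the JS divergence (a proper metric).
print(jensenshannon(p, q, base=2))
```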

Adesegun Oyedele et al., in Individual Assessment Of Humanlike Consumer Robots (2014), examine the aesthetic attributes of products that appeal to consumers’ mental and emotional needs. They apply the “technology acceptance model” to consumer robots.

In his doctoral thesis, Digital Puppetry of Wayang Kulit Kelantan (2014), Khor Kheng Kia presents a study of how visual aesthetics impact the digital preservation of an endangered culture, Malaysian shadow puppet theatre. Key-frame animation and motion capture were both used in his experiments.

In Embodied Aesthetics In Auditory Display (2014), Roddy and Furlong address the importance of aesthetics in “auditory display”, or the use of sound for communication between a computer and user. They present arguments for an embodied aesthetic framework, thus rejecting the aesthetic theory of Immanuel Kant.

Debasis Ganguly et al., in their paper Automatic Prediction Of Text Aesthetics And Interestingness (2014), investigate the problem of automated aesthetics prediction from user-generated content and ratings, in this case Kindle “popular highlights” data. They use supervised classification to predict text aesthetics, with feature vectors for each text passage.
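A hypothetical sketch of that kind of setup follows: passages with invented “highlight” labels, bag-of-words feature vectors, and an off-the-shelf classifier; none of this is the authors’ actual data or pipeline.

```python
# Hypothetical sketch: supervised classification of text aesthetics.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

passages = [
    "All that is gold does not glitter; not all those who wander are lost.",
    "The meeting will take place at 3 pm in conference room B.",
    "Hope is the thing with feathers that perches in the soul.",
    "Please remember to submit your timesheet by Friday.",
]
labels = [1, 0, 1, 0]  # 1 = highlight-worthy (invented), 0 = mundane

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(passages, labels)
print(clf.predict(["The fog comes on little cat feet."]))
```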

2015

Stefania Sansoni et al look at the role of personal characteristics in attraction, in The Aesthetic Appeal Of Prosthetic Limbs And The Uncanny Valley (2015). This may be the first research into the relationship of aesthetic attraction to devices and their human-likeness.

The 2015 book by Michele Dickey, Aesthetics and Design for Game-based Learning, is about emotionally imbuing participants with motivation and meaning through aesthetic experiences. This is a vital but neglected aspect of game-based learning.

The book Robots that Talk and Listen (2015), edited by Judith Markowitz, delves into the social impact of technology. In particular, David Duffy addresses “android aesthetics”, humanoid robots as works of art.

2016

In her thesis Quality of Aesthetic Experience and Implicit Modulating Factors (2016), Wendy Ann Mansilla refutes the standing assumption that aesthetics in digital media are restricted to the structural or superficial, and instead points to complex implicit variables that contribute to individual user experiences. She investigated various factors, such as use of color in eliciting emotion, presence of familiar characters or alter ego, and food craving versus pleasure technologies.

Takashi Ogata, in Automatic Generation, Creativity, and Production of Narrative Content (2016), considers automatic narrative or story generation from the perspective of cognitive science. The artistic and aesthetic problems are considered in terms of their relationships with technology.

In their 2016 book, Computational and Cognitive Approaches to Narratology, Ogata and Akimoto examine among other things the possibility of generating literary works by computer. They focus on the affective or psychological aspects of language engineering, with regard to “intertextuality”.

2017

In his book Aesthetic Origins (2017), Jay Patrick Starliper examines imagination through the work of Pulitzer Prize-winning poet Peter Viereck. Through this philosophical deconstruction, Starliper demonstrates why books are bullets, perhaps anticipating today’s weaponized narratives of bots and fake news.

Fabrizio Guerrini et al discuss an offspring of interactive storytelling applied to movies, called “movietelling”, in Interactive Film Recombination (2017). They propose the integration of narrative generation, AI planning, and video processing to model and construct filmic variants from baseline content.

Burt Kimmelman reconceives the actual in digital poetry and art in his paper Code and Substrate (2017). Kimmelman argues that because the dynamic of digital poetry and art demands the dissolution of the one from the other, the notion of embodiment has been brought to prominence.

Building on their 2014 work (above), Derezinski, Rohanimanesh and Hydrie present an approach based on a probabilistic model of surprise, and the construction of distributional word embeddings in their 2017 paper, Discovering Surprising Documents with Context-Aware Word Representations. They found it to be particularly effective for short documents, even down to the single sentence level.
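A toy sketch of the underlying intuition (not the authors’ model): a word is surprising in a document when its embedding sits far from the centroid of the other words’ embeddings; the 2-D vectors below are invented stand-ins for real distributional embeddings.

```python
# Toy sketch: embedding-based "surprise" as distance from the context centroid.
import numpy as np

embeddings = {            # invented 2-D word vectors
    "stock":  np.array([1.0, 0.1]),
    "market": np.array([0.9, 0.2]),
    "shares": np.array([1.1, 0.0]),
    "banana": np.array([-0.8, 0.9]),
}

def surprise(word, context):
    centroid = np.mean([embeddings[w] for w in context], axis=0)
    v = embeddings[word]
    cosine = v @ centroid / (np.linalg.norm(v) * np.linalg.norm(centroid))
    return 1.0 - cosine   # higher = more out of place

context = ["stock", "market", "shares"]
print(surprise("banana", context))  # high surprise
print(surprise("shares", context))  # low surprise
```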

In the Handbook of Writing, Literacies, and Education in Digital Cultures (2017), Theo van Leeuwen addresses aesthetics and text in the digital age. He argues that text, originally an art, has devolved into utility and industry. He maintains this devolution from the illuminated to the black and white was a function of the dour nature of Protestantism and its unadorned delivery of the word.

Conclusion

Surveying the past decade or so, a number of figures, concepts, and (new) terminologies emerge. As in the history of aesthetic inquiry in general, Immanuel Kant’s rejection of material dualism in favor of personal subjectivism relative to the material world figures prominently. François Delsarte’s “applied aesthetics” is introduced to robotic embodiment. Thus, a note can be taken from the sensibilities of “interior design” to extend robotic design beyond the structural and superficial.

The watershed emergence of a field of “aesthetic computing” in 2006 really kicks off the modern journey, with the increasing application of art theory and practice to computing. However, the field of “computational aesthetics” - considered a subfield of artificial intelligence, concerned with the computational assessment of beauty - can be traced back as far as 1928, to mathematician George David Birkhoff’s proposed “aesthetic measure”, which expressed beauty as the ratio of order to complexity.

Across time, we can see that machine learning is gradually taking over, leading to what may be termed a “neural aesthetic”. This can be seen in numerous recent “artistic hacks”, such as Deepdream, NeuralTalk, and Stylenet. Couching advanced, or deep, machine learning technologies in artistic metaphor helps clarify otherwise obscure jargon, and makes the subject more accessible to people in the real world.

27 February 2018

Art, Imagination, and Creativity: Natural versus Artificial

Today, computational creativity explores the intersection of artificial intelligence, psychology, and natural language processing, largely based on neural network algorithms, or machine learning. These branches of AI have benefited from a million-fold increase in computing power over the last two decades, a rate of change unlikely to slow in the future.

As early as 2012, IBM started a project with its Watson system to explore computational creativity. Since then, the Watson team has applied computational creativity to various domains: developing new scents for the fragrance industry, creating personalized travel itineraries, and improving sports teams based on skills or strengths. In 2014, a collaboration with the Institute of Culinary Education led to the successful debut of Chef Watson at the annual South by Southwest festival in Austin, Texas.1 According to Lav Varshney, one of the system’s designers, the goal was not to replicate earlier styles or solve a Turing test for cooking, but rather to invent new kinds of recipes.2

Released to the public in 2015, Google’s DeepDream was a computer vision program which used a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia. (Pareidolia may be defined as the human ability to see shapes or make pictures out of randomness.) The following year MIT's Nightmare Machine appeared, consisting of frightening imagery generated by a computer using deep learning algorithms.3
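A compressed sketch of the DeepDream idea, not Google’s code: run gradient ascent on an input image so that a chosen convolutional layer’s activations grow, amplifying whatever patterns the network already “sees”; the layer index and iteration count below are arbitrary.

```python
# Sketch of DeepDream-style gradient ascent (layer choice is arbitrary).
import torch
from torchvision import models

cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
target_layer = 20                    # a mid-level convolutional layer

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # noise; a photo also works
optimizer = torch.optim.Adam([img], lr=0.05)

for _ in range(50):
    optimizer.zero_grad()
    x = img
    for i, layer in enumerate(cnn):
        x = layer(x)
        if i == target_layer:
            break
    (-x.norm()).backward()           # ascend: maximise activation magnitude
    optimizer.step()
```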

In the 1990s, David Cope, a composer at UC Santa Cruz, created a program called Emily Howell, with which he can chat and share musical ideas. He describes it as “a conversationalist composer friend… a true assistant.”4 She scores music, he tells her what he liked or didn’t like, and together they compose. Fast forward to 2015, when Kelland Thomas, a jazz musician and associate professor at the University of Arizona School of Music, was granted funding to build a similar system capable of musically improvising with a human player, called MUSICA (for Music Improvising Collaborative Agent), under a Defense Advanced Research Projects Agency program called Communicating with Computers.5 According to Thomas, “We’re trying to understand human creativity and simulate that.”

There are also algorithms that can write hip-hop lyrics, for instance DeepBeat, developed by Eric Malmi at Aalto University in Finland in 2015.6 In 2016, Margareta Ackerman’s ALYSIA (Automated LYrical SongwrIting Application) came to the attention of the popular press; also using a computer as a collaborator, Ackerman came up with a system that could help write melodies.7 Also in 2016, Sony CSL Flow Machines showcased perhaps the first song to be composed by artificial intelligence, a pop song titled Daddy’s Car.8 That year, even a computer-generated musical appeared on the London stage, called “Beyond the Fence”.6

Robotic painting is an emerging genre of art, to the extent that Wikipedia now includes an entry on robotic art. In 2013, an exhibition in Paris called You Can’t Know my Mind featured an artificial artist known as The Painting Fool, created by Simon Colton, a researcher at the pre-eminent Computational Creativity Research Group at Goldsmiths, University of London, offering free portraits on demand.9 Since 2016, RobotArt has sponsored a $100,000-a-year contest in robotic painting.10

In 2011, the editors of one of the oldest student literary journals in the U.S. selected a poem called “For the Bristlecone Snag” for publication. It had been written by a computer, but no one could tell: Zackary Scholl, then an undergraduate at Duke University, had modified a program using a context-free grammar to auto-generate poems.6 The EU-funded What-if Machine project (2013-2016) not only generated fictional storylines but also judged their potential usefulness and appeal.11 In early 2018, Microsoft unveiled a new technology called “drawing bot”, capable of creating images from text descriptions.12
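A minimal sketch of that technique (the grammar below is invented, not Scholl’s): recursively expanding a context-free grammar until only terminal words remain yields endless grammatical lines of verse.

```python
# Minimal sketch: verse from a context-free grammar (grammar is invented).
import random

grammar = {
    "LINE": [["the", "NOUN", "VERB", "ADV"], ["ADJ", "NOUN", "of", "NOUN"]],
    "NOUN": [["moon"], ["snag"], ["river"], ["silence"]],
    "VERB": [["sleeps"], ["burns"], ["wanders"]],
    "ADJ":  [["pale"], ["broken"], ["quiet"]],
    "ADV":  [["slowly"], ["alone"]],
}

def expand(symbol):
    if symbol not in grammar:                 # terminal word
        return [symbol]
    production = random.choice(grammar[symbol])
    return [word for s in production for word in expand(s)]

for _ in range(4):                            # a four-line stanza
    print(" ".join(expand("LINE")))
```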

Computational humor is another area of computational creativity. Dragomir Radev is interested in computational creativity, and in trying to come up with systems that actually understand and generate funny text.13 Games By Angelina is the home of ANGELINA, the research project of Michael Cook of Falmouth University, whose aim is developing an AI system that can design video games intelligently.14

Since 2017, Philippe Pasquier has been teaching an online course in Generative Art and Computational Creativity, which introduces various algorithms from artificial intelligence, machine learning, and artificial life used in generative processes.15 Also in 2017, the World Science Festival in New York City featured a session on Computational Creativity: AI and the Art of Ingenuity, in which experts in psychology and neuroscience explored the roots of creativity in humans and computers, what artificial creativity reveals about human imagination, and the future of hybrid systems that build on the capabilities of both.16 Organized by the Association for Computational Creativity, the International Conference on Computational Creativity is the premier academic forum for researchers; it has in turn spawned the Musical Metacreation workshop series. Metacreation refers to tools and techniques from artificial intelligence, artificial life, and machine learning, inspired by cognitive and natural science, for creative tasks.17

At the very root of “imagination” is not only the word “image” but also the image itself. I believe the most fundamental question is: how are images processed in the human mind, or brain, in such a way as to lead to creativity? This in turn raises the questions of how words are converted into images, and images into words, in both humans and machines. And more specifically, how can image processing in machines lead to creativity?

References:

  1. Stinson, E. “America’s Next Top Chef Is a Supercomputer From IBM.” Wired (June 2015).
  2. Marcus, G. “Cooking with I.B.M.: The Synthetic Gastronomist.” The New Yorker (April 2013).
  3. Dormehl, L. “This AI generates fake Street View images in impressive high definition.” Digital Trends (August 2017).
  4. Hutson, M. “Our Bots, Ourselves.” The Atlantic (March 2017).
  5. Misener, D. “New musical Beyond the Fence created almost entirely by computers.” CBC News (December 2015).
  6. Kane, K. “Algorithm and rhyme: Artificial intelligence takes on songwriting.” Palo Alto Weekly (April 2017).
  7. Needham, J. “We Are The Robots: Is the future of music artificial?” FACT Magazine (February 2017).
  8. Johns, S. “Artificial intelligence experts question if machines can ever be truly creative.” Imperial College London (January 2018).
  9. Arbesman, S. “Computational Creativity and the What-If Machine.” Wired (January 2015).
  10. Perez, S. “Microsoft’s new drawing bot is an AI artist.” TechCrunch (January 2018).
  11. Weir, W. “Programming for laughs: A.I. tries its hand at humor at YSEAS.” YaleNews (December 2017).
  12. Parkin, S. “AI Is Dreaming Up New Kinds of Video Games.” MIT Technology Review (November 2017).
  13. Luckow, D. “SFU MOOC a new route for students.” SFU News (January 2017).
  14. Rockmore, D. and Casey, M. “Humans and Machines Making Beautiful Music Together.” Slate (July 2017).