Sentiment Analysis and Natural Language Processing: The Basics
Sentiment analysis can yield insight beyond the simple volume metrics you may already be aggregating. Quantifying sentiment can improve customer interactions and help your business better understand the needs of your social consumers. You must, however, interpret each conversation in context or risk misreading a social signal. As your social campaign matures, integrating social signals along with their context can greatly enhance your marketing strategy.
Most out-of-the-box tools, such as Radian6, produce many false negatives. In a broad search the majority of sentiment is generally neutral; for the 10% or so that may be negative or positive, what is the best way to identify it and increase your confidence in the score? In future posts I'll discuss workflows and optimizing your confidence rating. Most misclassified sentiment stems from colloquialisms and slang. Natural Language Processing (NLP), a field of computer science and artificial intelligence, was developed so that computers could interpret human language. Today the field has advanced to the point where IBM's Watson defeated all-time Jeopardy! champion Ken Jennings. Wondering how you can put decades of research to work in your own marketing campaign and turn human speech into digital signals?
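To see why slang trips up simple tools, consider a toy lexicon-based scorer. The word lists below are invented for illustration, but the failure mode is real: slang praise like "sick" isn't in the positive lexicon, so the mention scores as neutral.

```python
# Toy sentiment lexicon; the word lists here are made up for illustration.
POSITIVE = {"great", "love", "awesome"}
NEGATIVE = {"bad", "hate", "broken"}

def naive_score(text):
    """Count lexicon hits: >0 is positive, <0 negative, 0 neutral."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

naive_score("I love this phone")    # 1: correctly positive
naive_score("This phone is sick!")  # 0: slang praise misread as neutral
```

A production system would layer NLP on top of this idea: normalizing slang, handling negation ("not great"), and weighting words by learned probabilities rather than flat counts.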
Here are three great open-source resources:
OpenNLP: supports the most common NLP tasks, such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing, and coreference resolution. These tasks are usually required to build more advanced text processing services. OpenNLP also includes maximum entropy and perceptron-based machine learning.
Natural Language Toolkit: NLTK is a leading platform for building Python programs to work with human language data. It provides easy-to-use interfaces to over 50 corpora and lexical resources such as WordNet, along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning.
Stanford NLP: the Stanford NLP Group makes parts of its Natural Language Processing software available to the public. These are statistical NLP toolkits for major computational linguistics problems, and they can be incorporated into any application with human language technology needs.
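Two of the foundational tasks all three toolkits provide are sentence segmentation and tokenization. A minimal sketch of each, using only Python's standard library, gives a feel for what they do; real toolkits like NLTK and OpenNLP use trained models that also handle abbreviations, URLs, and emoticons:

```python
import re

def sentences(text):
    # Naive sentence segmentation: split after terminal punctuation.
    # Real segmenters also handle abbreviations ("Dr."), quotes, ellipses.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def tokens(sentence):
    # Naive tokenization: runs of word characters, or single punctuation marks.
    return re.findall(r"\w+|[^\w\s]", sentence)

text = "I love my new Ford. It drives great!"
sentences(text)          # ['I love my new Ford.', 'It drives great!']
tokens("It drives great!")  # ['It', 'drives', 'great', '!']
```

Downstream tasks like part-of-speech tagging, named entity extraction, and sentiment classification all operate on these token streams, which is why the toolkits bundle the whole pipeline together.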
Why apply NLP for sentiment analysis?
A quick use case shows why such tools are important for contextualizing your conversation streams. A broad keyword such as "Ford" will yield a spectrum of results: Ford Motor Company, Toronto's mayor Rob Ford, the Betty Ford Clinic, the Henry Ford Museum, etc. If you're only interested in automotive-related Twitter conversations, the rest is simply noise. A finely-tuned NLP post-process can filter out that noise and let you target only your high-value data streams.
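The "Ford" example above can be sketched as a simple context filter. The context-word sets here are hypothetical stand-ins; a real post-process would use named entity recognition and a trained disambiguation model rather than hand-picked keywords:

```python
# Hypothetical context terms for disambiguating "Ford" mentions.
AUTO_CONTEXT = {"truck", "car", "engine", "dealership", "mustang", "f-150"}
OFF_TOPIC = {"rob", "mayor", "toronto", "betty", "clinic", "henry", "museum"}

def is_automotive(tweet):
    """Keep a tweet only if 'Ford' appears alongside automotive context."""
    words = set(tweet.lower().replace(",", " ").split())
    if "ford" not in words:
        return False
    if words & OFF_TOPIC:       # clearly about a different "Ford"
        return False
    return bool(words & AUTO_CONTEXT)

is_automotive("Test drove the new Ford Mustang today")    # True
is_automotive("Mayor Rob Ford holds a press conference")  # False
```

Even this crude filter shows the payoff: sentiment scores computed only over the automotive stream reflect your brand, not the mayor of Toronto.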
On February 12th I’ll be discussing using Sentiment Analysis within your digital marketing campaigns. Join the chat every Tuesday at 5PM PT/ 8PM ET using the hashtag #MetricsChat.