For an in-depth explanation, our NLP data scientist wrote this great article:

https://www.sentisum.com/library/zendesk-nlp-support-ticket-tagging

Below is a summary!

What is NLP?

Natural Language Processing (NLP) is simply the automatic assignment of useful meaning to text and speech.

E.g. when someone says "I'm really not happy, my parcel arrived damaged because it was left outside all day in the rain, and the delivery driver didn't leave it on the front porch like I requested", NLP is the process of automatically identifying the important parts of that text, e.g. adding AI tags such as "damaged parcel" and "delivery instructions ignored".

What are the different types of NLP?

There are two types:

  1. Rule-based. These look for patterns of keywords or phrases, like "parcel" and "damaged". The rules have to be created by a person.

  2. Machine-learning based. A model that has seen a lot of example support tickets and has built up an understanding of what different patterns mean, like "my parcel arrived damaged", "my box was ruined from the rain", or "there is a lot of water damage on my packaging".

At SentiSum, we are big fans of the machine learning approach! Yes, it does need example data to work well, but once you give a model the right data, the performance is unparalleled due to smart generalisation.

Why is smart generalisation important?

To use the previous examples (listed below), we can see there are lots of different ways to say the same thing: "damaged parcel".

  1. "my parcel arrived damaged"

  2. "my box was ruined from the rain"

  3. "there is a lot of water damage on my packaging"

Rule-based approaches need to know exactly how an issue will be described in order to assign it the right AI tag, e.g. look for mentions of "box" or "parcel" alongside "damaged", "ruined" or "bad shape". They can accurately assign meaning to a conversation when those exact words are mentioned, but the big problem is that they miss phrasings the rules don't account for, such as example no. 3 above.
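
To make that concrete, here is a minimal sketch of the rule-based idea in Python. The keyword lists and function name are purely illustrative (not SentiSum's actual rules), but they show both the strength and the blind spot described above:

```python
# Minimal sketch of a rule-based tagger: the "damaged parcel" tag fires
# when the text mentions a container word AND a damage word.
# (Illustrative keyword lists, not real production rules.)

CONTAINER_WORDS = {"box", "parcel", "package"}
DAMAGE_WORDS = {"damaged", "ruined", "bad shape"}

def rule_based_tag(text: str) -> list[str]:
    lower = text.lower()
    has_container = any(word in lower for word in CONTAINER_WORDS)
    has_damage = any(word in lower for word in DAMAGE_WORDS)
    return ["damaged parcel"] if has_container and has_damage else []

print(rule_based_tag("my parcel arrived damaged"))        # tag fires
print(rule_based_tag("my box was ruined from the rain"))  # tag fires
# Example no. 3 says "damage" (not "damaged") and "packaging" (not
# "package"), so neither rule matches and the ticket is missed:
print(rule_based_tag("there is a lot of water damage on my packaging"))
```

The only way to catch that third ticket with rules is for a person to keep adding more keywords, which is exactly the maintenance burden the machine-learning approach avoids.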

A machine-learning based approach needs to see some example data first, but the model can then automatically generalise to other ways of saying the same thing, and should correctly identify all three as 'damaged parcel' for more accurate insights.
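
As a rough illustration of that idea, here is a tiny classifier built with scikit-learn. This is our assumption of the simplest possible setup, with a made-up training set of six tickets; SentiSum's production models are far more sophisticated, but the principle of learning tags from labelled examples is the same:

```python
# Sketch of the machine-learning approach: learn AI tags from labelled
# example tickets instead of hand-written keyword rules.
# (Toy training data and a basic scikit-learn pipeline, for illustration.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

examples = [
    ("my parcel arrived damaged", "damaged parcel"),
    ("the box was crushed in transit", "damaged parcel"),
    ("water damage all over the packaging", "damaged parcel"),
    ("driver ignored my delivery note", "delivery instructions ignored"),
    ("the courier did not leave it on the porch as asked", "delivery instructions ignored"),
    ("my parcel was left outside instead of with a neighbour", "delivery instructions ignored"),
]
texts, tags = zip(*examples)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, tags)

# A phrasing the model has never seen verbatim -- it generalises from
# the word patterns it learned during training:
print(model.predict(["my box was ruined from the rain"])[0])
```

With real volumes of labelled tickets, this generalisation is what lets a model tag "my box was ruined from the rain" correctly without anyone ever writing a rule for the word "ruined".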

How does this 'model' actually work though?

I think of it like the sorting hat from Harry Potter. Bear with me.

So, the sorting hat in Harry Potter sits on a student's head when they first arrive and decides which 'house' the student should be assigned to. But it's a bit elusive, and won't always explain why it chose Gryffindor over Hufflepuff. I also think the sorting hat is good at what it does because it has spent years sorting and learning from its successes and failures.

AI models are the same. The model will sit on a conversation or support ticket and decide which AI tag to assign to it. It has multiple tags to choose from, and it will give the best answer it can. Just like the sorting hat, it's not always clear why an AI model chose a certain tag, but overall you can trust its judgement! And the more conversations it sees, the more confident it gets at assigning AI tags.
