After my previous post, where I evaluated the possibilities of integrating a customer service chatbot into a company with an API like GPT-4, today I want to show you how we built Verbose in a matter of weeks, and how you could integrate such a solution into a company or build a similar product for your personal needs.
Welcome to its story; today, we will give you a brief technical introduction to the beautiful world of GPT-4 and the different ways you, as a technical person, can use this simple API to create a quick chatbot that can answer all of your customers' queries.
Short introduction
Let’s start by defining what customer service is.
Customer service refers to customer support and assistance before, during, and after purchasing goods or services. It is an essential aspect of any business, as it helps to build strong relationships with customers, ensure their satisfaction, and create a positive brand image. Effective customer service requires good communication skills, product knowledge, problem-solving ability, and empathy.
There are various types of customer service, such as:
- Face-to-face interactions
- Phone calls
- Emails
- Live chat
- Self-service options
The rise of artificial intelligence (AI) has provided new opportunities for customer service. AI-powered tools, such as chatbots and analytics, can assist human agents in providing faster and more accurate responses to customers, automating repetitive tasks, and providing 24/7 support. However, it is important to remember that AI should not replace human customer service entirely, as it lacks empathy and emotional intelligence. A combination of human and AI customer service can create a more effective and satisfying experience for customers.
Let’s go into more detail on how to automate a huge part of your customer support.
Problem definition
First, let me explain how this problem can be technically solved and how the power of AI can be leveraged to make the process faster and simpler.
The normal process is as follows.
- A user comes to your website and sees either your contact information or a direct chat interface.
- He can leave a message in this chat interface, and if an agent is free, they will answer right away. If not, he will receive an email once someone is able to solve his problem.
This process has a lot of pain points that could easily be solved. First, we may notice that a large part of users' questions is actually documented, either in the software itself or in some internal company documentation, and the agent has to find that piece of information in the documentation in order to answer the user. The answer can also be simple information in a database, in which case you, as a business owner, have to build a system that can easily query it and show it to your support agents.
As you can see, this is a problem many companies have tried to solve, but one that can now be solved really quickly and cheaply thanks to OpenAI and the GPT APIs.
The important conclusion here is that our answers live either in documentation or in some external system, and our AI must be able to retrieve them and answer users' inquiries in a smooth, conversational manner.
Manually drafting a solution
Four years ago, building such an app would have been extremely hard: we would have needed a team of data engineers, data scientists, and so on. But today, we have ChatGPT, or any other LLM (large language model), that does the work pretty well, at almost zero cost.
So for our use case, we will use the ChatGPT API.
Let’s first try a simple version of our chatbot for a company like Shopify.
The answer comes right away; nothing to add. It works for most use cases at a lot of big companies, so you can plug the ChatGPT API in directly without further changes.
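As a minimal sketch, this first version can be a single call to the chat completions endpoint. The model name, system prompt, and `ask_support_bot` helper below are my own illustrative choices, and an `OPENAI_API_KEY` environment variable is assumed:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(question: str) -> dict:
    # The system message scopes the bot to one company's support domain.
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system",
             "content": "You are a customer support agent for Shopify. Answer concisely."},
            {"role": "user", "content": question},
        ],
    }

def ask_support_bot(question: str) -> str:
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(question)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]
```

The system message is the only "configuration" here: it tells the model which company it is answering for and in which tone.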
But we will face a problem called model hallucination: the model answers with something that seems perfectly clear and related to the question, but that is actually false. So we need to give our model more context.
We can do that in 2 different ways:
Model fine-tuning
Fine-tuning helps make the model more contextual, based on all the data we have. It can be used to reduce the amount of data you send with each request or to give more context to the model's answers. The process is extremely simple, as described in the documentation.
Step 1: Collecting Data The first step in fine-tuning ChatGPT for customer support is to collect data. This involves gathering a large dataset of customer queries and the corresponding responses provided by your company documentation. The dataset should cover a range of topics and issues related to the business’s products or services. It is important to ensure that the data is of high quality and that the responses are accurate and helpful.
Step 2: Preprocessing Data Once the dataset has been collected, it needs to be preprocessed to prepare it for training the model. This involves cleaning the data by removing any irrelevant or duplicate entries and formatting the data into a suitable format for training the model.
The format of the final dataset should look like this:
{"prompt":"Company: BHFF insurance\nProduct: allround insurance\nAd:One stop shop for all your insurance needs!\nSupported:", "completion":" yes"}
{"prompt":"Company: Loft conversion specialists\nProduct: -\nAd:Straight teeth in weeks!\nSupported:", "completion":" no"}
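As an illustration, the cleaning and formatting step can be sketched like this for a support use case. The `Question:`/`Answer:` prompt template, the trailing-space and stop-sequence conventions, and the helper names are my own assumptions, following the prompt/completion JSONL shape shown above:

```python
import json

def to_finetune_records(pairs):
    """Turn raw (question, answer) pairs into prompt/completion records,
    dropping empty and duplicate entries along the way."""
    seen, records = set(), []
    for question, answer in pairs:
        question, answer = question.strip(), answer.strip()
        if not question or not answer or question in seen:
            continue  # skip irrelevant or duplicate entries
        seen.add(question)
        records.append({
            # Completions conventionally start with a space and end with a
            # stop sequence so the model knows where an answer finishes.
            "prompt": f"Question: {question}\nAnswer:",
            "completion": f" {answer}\n",
        })
    return records

def write_jsonl(records, path):
    # One JSON object per line, the format the fine-tuning endpoint expects.
    with open(path, "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")
```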
Your customer support agents could review the data to make sure it represents your internal culture and is accurate enough for the model.
Step 3: Fine-Tuning the Model The next step is to fine-tune the ChatGPT model using the preprocessed data. This involves training the model on the dataset of customer queries and responses to create a model that is specifically tuned for customer support. During training, the model learns to generate responses to customer queries that are similar to those provided by human agents.
If you are a developer, you can use the OpenAI API to fine-tune your model, and that can be extremely easy. You can do it by following their API documentation here.
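As a sketch, creating the job comes down to one POST request, assuming the legacy completions fine-tuning endpoint that matches the prompt/completion format above, and assuming your JSONL dataset was already uploaded to the `/v1/files` endpoint, which returns the file id used here:

```python
import json
import os
import urllib.request

def build_finetune_request(training_file_id: str, model: str = "davinci") -> dict:
    # `training_file_id` is the id the /v1/files upload endpoint returned
    # for your JSONL dataset (purpose="fine-tune").
    return {"training_file": training_file_id, "model": model}

def create_finetune(training_file_id: str) -> dict:
    request = urllib.request.Request(
        "https://api.openai.com/v1/fine-tunes",
        data=json.dumps(build_finetune_request(training_file_id)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)  # includes the job id and its status
```

The returned job runs asynchronously; once it finishes, the response of a status check contains the name of your new fine-tuned model.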
But it can also be done with no-code tools that are extremely simple to use, like Riku.Ai.
Using the model
When you have finally built and fine-tuned your model, you can query it and integrate it into any system you have, or just use it for internal queries via a no-code tool.
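Querying the fine-tuned model can be sketched as follows; the model id is a placeholder for the name your fine-tuning job produced, and the prompt deliberately mirrors the hypothetical `Question:`/`Answer:` template assumed in the training data:

```python
import json
import os
import urllib.request

def build_completion_request(model_id: str, question: str) -> dict:
    # Mirror the prompt template the model was fine-tuned on, and stop at
    # the same token that terminated the training completions.
    return {
        "model": model_id,
        "prompt": f"Question: {question}\nAnswer:",
        "stop": ["\n"],
        "max_tokens": 200,
        "temperature": 0,
    }

def ask_finetuned_model(model_id: str, question: str) -> str:
    request = urllib.request.Request(
        "https://api.openai.com/v1/completions",
        data=json.dumps(build_completion_request(model_id, question)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["text"].strip()
```

Setting `temperature` to 0 keeps support answers deterministic, which is usually what you want when agents and customers rely on them.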
The most reliable way is to use it through your own small internal tools, so you can control where your internal company data goes and which other entities have access to it. To integrate ChatGPT into your application, follow part two of this blog post, where I walk you through integrating and optimizing this technique with embeddings. Using embeddings, you can answer more complex questions and also provide just-in-time content to your application.
Let’s meet there.
And if you want to be part of our newsletter, register here, and we will send you tips and tricks to optimize your business with AI.