10 key lessons from deploying an AI chatbot for HR transformation


Discover key lessons from deploying an AI chatbot in HR, focusing on data quality, architecture, security, cost management, and enhancing efficiency.


Möbius has launched a new AI chatbot aimed at transforming the HR department of a public organisation.

It's designed to:

  • Handle routine questions

  • Enhance the onboarding of new staff

  • Make it easier to access internal documents

It digs into the organisation’s vast knowledge base to find and deliver exactly the right information in response to specific queries, while also pointing users to the sources for deeper insight.

Of course, the development came with its fair share of challenges. In this blog post, we’ll touch upon the technical setup and dive deeper into the key takeaways from the project. 


Photo by Arlington Research on Unsplash

 

What is the technical secret behind AI chatbots? 

The technical secret involves integrating a large language model with the organisation's internal database. This integration enables the virtual assistant to efficiently search and retrieve information relevant to user queries, ensuring accurate and context-aware responses. 

Figure: Retrieval-augmented generation (RAG) chatbot architecture (self-made image in draw.io)

A critical component of this process is the "retrieval-augmented generation" (RAG) technique. This approach combines large language models' traditional natural language understanding capabilities with a retrieval system that can pull specific pieces of information from a database or knowledge base.  

When a query is received, the chatbot first searches its database to find relevant documents or data snippets. It then uses this retrieved information as a context or a primer to generate a coherent and informed response.

This technique enhances the chatbot's ability to provide precise answers and reduces the load on the generative model by focusing its capabilities on interpretation and response generation based on the most relevant data. 
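The retrieve-then-generate loop described above can be sketched in a few lines of Python. The keyword-overlap retriever and the `generate()` stub below are toy stand-ins for what a production system would actually use (embedding search over a vector database, and a real LLM call), but the two-step shape is the same:

```python
# Minimal sketch of a RAG loop: retrieve relevant documents, then generate
# a response grounded in them. Toy stand-ins replace the real components.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query.
    A real retriever would use embeddings and a vector database."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for an LLM call: a real system would send a prompt
    containing the retrieved context to the model."""
    prompt = "Answer using only this context:\n" + "\n".join(context)
    return f"{prompt}\n\nQ: {query}"

def answer(query: str, documents: list[str]) -> str:
    context = retrieve(query, documents)   # step 1: search the knowledge base
    return generate(query, context)        # step 2: generate a grounded response

docs = [
    "Annual leave requests are submitted via the HR portal.",
    "The cafeteria opens at 8 am.",
    "New staff receive a laptop on their first day.",
]
print(answer("How do I request annual leave?", docs))
```

Because the model only sees the retrieved snippets, the quality of the retrieval step directly bounds the quality of the answer.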

 

Valuable lessons from deploying an AI chatbot

1. Pay attention to data quality and management 📁
The chatbot relies heavily on the quality of its source data, which must be reliable, well-structured, and current; conflicting or outdated information leads to confusing, ineffective responses. Centralisation is equally important: data scattered across platforms like SharePoint, shared disks, public websites, and intranets had to be consolidated into a single repository to streamline access and processing by the chatbot.

 

2. Consider the data format 📝
Given that the chatbot utilises a large language model like GPT-3.5-turbo, the data must be in text format, since this model primarily processes textual information. The transition to models with vision capabilities, such as GPT-4o, could automate the conversion of non-textual data like images and videos into text, making that information available to the chatbot as well.


3. Objectively evaluate your chatbot’s outputs 📈
Establishing an objective framework for evaluating the chatbot’s outputs is vital. Ideally, this involves automatic assessments using tools like DeepEval, which leverage another LLM to gauge the quality of the chatbot’s responses. It is important to note that outputs from large language models can vary, even with identical parameters, due to their probabilistic nature. 
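The LLM-as-judge idea behind tools like DeepEval can be sketched as follows. The `judge_score()` function below is a deterministic toy stand-in for the second LLM (a real judge prompts a model for a graded score), which also sidesteps the output variability mentioned above:

```python
# Minimal sketch of automated output evaluation in the LLM-as-judge style.

def judge_score(question: str, answer: str, reference: str) -> float:
    """Toy judge: fraction of reference terms present in the answer.
    A real judge would prompt a second LLM for a graded score."""
    ref_terms = set(reference.lower().split())
    ans_terms = set(answer.lower().split())
    return len(ref_terms & ans_terms) / len(ref_terms) if ref_terms else 0.0

def evaluate(samples: list[tuple[str, str, str]], threshold: float = 0.5) -> float:
    """Return the pass rate over (question, answer, reference) samples."""
    passed = sum(judge_score(q, a, r) >= threshold for q, a, r in samples)
    return passed / len(samples)

samples = [
    ("Where to submit leave?", "Submit leave via the HR portal", "via the HR portal"),
    ("Cafeteria hours?", "I do not know", "opens at 8 am"),
]
print(evaluate(samples))  # 1 of 2 samples passes the threshold -> 0.5
```

Because identical inputs can yield different LLM outputs, running the evaluation over many samples and tracking the pass rate over time gives a far more stable signal than eyeballing individual answers.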

 

4. Understand your AI chatbot architecture 🛠️
A deep understanding of the architecture is essential to effectively manage and optimise the system based on its outputs. Key components such as the retriever, the large language model, and the data processing/enrichment pipeline each play distinct roles and require their own adjustments to maximise overall performance. 
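One way to make those roles concrete is to wire the three components together as swappable parts, so each can be tuned independently (chunking and cleaning in the pipeline, top-k in the retriever, prompt and temperature in the model). The names and toy stand-ins below are illustrative, not the project's actual code:

```python
# Sketch of the three architecture components as independent, swappable parts.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ChatbotPipeline:
    enrich: Callable[[list[str]], list[str]]         # data processing/enrichment
    retrieve: Callable[[str, list[str]], list[str]]  # retriever
    generate: Callable[[str, list[str]], str]        # large language model
    documents: list[str] = field(default_factory=list)

    def ingest(self, raw_docs: list[str]) -> None:
        self.documents = self.enrich(raw_docs)

    def ask(self, query: str) -> str:
        context = self.retrieve(query, self.documents)
        return self.generate(query, context)

# Toy stand-ins so the wiring can be exercised end to end:
bot = ChatbotPipeline(
    enrich=lambda docs: [d.strip() for d in docs],
    retrieve=lambda q, docs: [d for d in docs
                              if any(w in d.lower() for w in q.lower().split())][:1],
    generate=lambda q, ctx: ctx[0] if ctx else "No relevant document found.",
)
bot.ingest(["  Leave requests go through the HR portal.  "])
print(bot.ask("leave"))
```

Keeping the components behind narrow interfaces like this means a poorly performing retriever can be replaced without touching the generation side, and vice versa.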

 

5. Make sure your chatbot is secure and robust 🔒

Managing the end-to-end workflow of AI system development requires not only expertise in engineering large language models, but also proficiency in deploying these systems live to cloud environments such as Azure.

Moreover, data privacy must be prioritised to ensure that the architecture complies with GDPR, with cloud providers offering robust security features to support compliance. 

 

6. Organise multiple testing rounds ⚗️

Iterative testing is crucial for refining the performance of a chatbot. Involving both technical and business experts in the testing process helps validate the chatbot’s responses, identify issues, and collaboratively seek solutions.

Implementing a mechanism for users to provide detailed feedback—not just simple approvals or rejections, but also elaborative comments—enriches the development process. Additionally, maintaining a dashboard to track the chatbot's progress over time can illustrate improvements and areas needing attention, ensuring ongoing enhancement.

 

7. Make sure your chatbot is sustainable and adaptable 🌱

As the knowledge database evolves or expands, the chatbot must adapt its responses to reflect new information. Regular updates to the chatbot should be scheduled automatically at intervals that align with business needs, ensuring the chatbot remains accurate and relevant.
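A simple way to express such a refresh policy in code: re-index whenever the knowledge base has changed since the last build, or when a business-defined maximum age has elapsed. The function and its default interval are illustrative assumptions, not the project's actual scheduler:

```python
# Sketch of an automatic refresh decision for the chatbot's index.
from datetime import datetime, timedelta

def needs_reindex(last_index: datetime,
                  last_data_change: datetime,
                  now: datetime,
                  max_age: timedelta = timedelta(days=7)) -> bool:
    """Re-index if the data changed after the last build,
    or if the index is older than the agreed maximum age."""
    return last_data_change > last_index or now - last_index > max_age
```

A scheduled job (e.g. a nightly pipeline run) would call this check and trigger re-ingestion only when needed, keeping the chatbot current without re-processing the whole knowledge base every day.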

 

8. Educate users and manage expectations 🎓

Providing users with a manual that explains what questions can be effectively addressed by a Retrieval-Augmented Generation (RAG) architecture—and which cannot—is essential for setting realistic expectations. Training sessions and workshops can further aid in managing change, helping users understand how the chatbot works and how to interact with it most effectively.

 

9. Dare to go beyond pure RAG architecture with custom dialogue flows 🤖

A pure RAG setup often isn't enough to handle complex user interactions. Custom dialogue flows tailored to specific topics can greatly enhance the user experience. For instance, when a query about travel reimbursement is received, it’s beneficial to discern whether it pertains to commuting or business travel. Implementing intent detection and activating follow-up questions can gather more context about the user's query, enabling the chatbot to respond more accurately.

 

10. Know your cost model 💰

Depending on its architecture, your chatbot's operational cost will involve both fixed and variable expenses. For example, if your chatbot uses a retriever mechanism that relies on a vector database to search through a knowledge base, maintaining that vector database will result in a fixed monthly cost.

Additionally, if you're using a large language model, such as Azure OpenAI, to generate responses, each message sent to the chatbot incurs a variable cost. This cost fluctuates based on the length of the input query, the response, and the specific language model you're using. To ensure cost-effectiveness, it's crucial to estimate the expected chatbot usage and calculate monthly costs in advance. Make sure these expenses are balanced against the time and efficiency gains provided by the chatbot, aiming for a positive net benefit. 
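A back-of-the-envelope estimate might look like the sketch below. All prices are illustrative placeholders, not actual Azure OpenAI or vector-database rates:

```python
# Rough monthly cost model: fixed infrastructure plus per-message token costs.
# The prices below are made-up placeholders for illustration only.

VECTOR_DB_MONTHLY = 250.00    # fixed: hosted vector database (example figure)
PRICE_PER_1K_INPUT = 0.0005   # variable: illustrative price per 1k input tokens
PRICE_PER_1K_OUTPUT = 0.0015  # variable: illustrative price per 1k output tokens

def monthly_cost(messages_per_month: int,
                 avg_input_tokens: int,
                 avg_output_tokens: int) -> float:
    variable = messages_per_month * (
        avg_input_tokens / 1000 * PRICE_PER_1K_INPUT
        + avg_output_tokens / 1000 * PRICE_PER_1K_OUTPUT
    )
    return VECTOR_DB_MONTHLY + variable

# e.g. 10,000 messages with ~2,000 prompt tokens (query + retrieved context)
# and ~300 response tokens each:
print(round(monthly_cost(10_000, 2_000, 300), 2))  # -> 264.5
```

Note that in a RAG setup the retrieved context is part of every prompt, so input tokens typically dwarf the user's query itself; this is the figure to watch when estimating variable costs.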

 

Shaping the future of HR with AI 

The introduction of this AI chatbot by Möbius is a major step in using Generative AI to improve HR functions in public organisations. By automating routine tasks and enhancing key processes like onboarding and employee support, the chatbot boosts efficiency and allows HR teams to focus on more important work.

It shows how AI can be effectively integrated into public sector HR operations, offering clear benefits such as faster response times, better accuracy, and a smoother experience for both employees and HR staff.

If you're curious about how AI chatbots can help make your organisation more efficient, don’t hesitate to contact our GenAI professionals!

Frequently Asked Questions

What is a large language model?
A large language model (LLM) is an advanced type of artificial intelligence trained to understand and generate human language. It uses a neural network with billions to trillions of parameters and is trained on vast amounts of text data. LLMs can perform tasks like summarizing, classifying, and extracting information from texts, as well as generating coherent responses and engaging in conversations. Specific tasks are steered through "prompt engineering", the careful wording of the instructions given to the model. LLMs are widely used for applications such as chatbots, content moderation, and text analysis.
What is Retrieval-Augmented Generation (RAG)?
Retrieval-augmented generation (RAG) combines language models with a retrieval system to access specific information from databases. It improves accuracy by using relevant data to guide the model’s responses, making answers more precise and contextually informed.