
A Chief Executive Officer wakes in the morning to find her company facing an exposed vulnerability that leaves it open to a cyberattack. Instead of urgently calling her subordinates, she asks her tailor-made AI agent to carry out an assessment of the attack surface.
A Chief Revenue Officer (CRO) of an automotive company wakes up determined to give her sales teams deeper insight and to identify the biggest revenue opportunities for the upcoming fiscal year.
Instead of sifting through countless spreadsheets and waiting for manual reports, she turns to her bespoke AI agent. With a few natural language queries, she asks her agent to analyse which car models are selling best in specific regions, why they're outperforming other vehicles, and what factors are driving those trends.
This immediate, data-driven understanding directly informs how her teams plan and project for the upcoming year, allowing them to adapt strategies swiftly and capitalise on emerging market demands.
But what is actually helping the CRO here? It's a purpose-built AI solution that enables her to create an AI agent dedicated to exactly these fiscal queries.
One such company is ThoughtSpot, which offers a complete solution equipping enterprise users to create AI agents that carry out specific tasks.
Francois Lopitaux, SVP of Product Management at ThoughtSpot, speaks to Shubhangi Dua, podcast host, producer, and B2B tech journalist at EM360Tech. He shares exclusive details on ThoughtSpot's agentic AI solution: what it is, how to train AI with intent and ethics, and how to make businesses autonomous.
What is ThoughtSpot's agentic AI solution called? What makes it different from all the other up-and-coming AI agents in the data industry?
ThoughtSpot's agentic AI is called Spotter. When it comes to AI agents, we have to think above and beyond Large Language Models (LLMs), because LLMs have become a commodity. It's almost like a database: everybody needs to use a database, and everybody needs to use an LLM. This is important for us, and this is how we differentiate ourselves.
We plug together all the pieces so we can provide the best experience – for instance, being able to provide coaching. We offer an approach called human-in-the-loop, which can improve the results of your personal AI system. Spotter also brings a layer of usage-based ranking.
When you ask an agentic AI solution a question, it has to understand the person's intent, their knowledge of the company, or a combination of the two. For example, when I say, "List all the red accounts in the system," the AI analyst needs to understand what a red account is.
When you plug in your data, you are potentially connecting 25 or 50 tables, and across those tables you might have 1,000 columns. The system then needs to understand which ones it really should use to retrieve the data and complete your query.
AI comprehending the user's search 'intent' sounds intrusive. How do you stay within ethical limits?
It's basically what you do on ChatGPT every day: you ask questions. When you ask GPT a question, the first thing it tries to understand is your goal. That underlying intent is why we use LLMs – they understand your intent and drive an answer.
ThoughtSpot uses LLMs in the same way. Spotter translates your natural language into something more BI-oriented. To achieve this, LLMs sit at its core, with various mechanisms layered on top to best understand the user's intent.
For instance, when you incorporate usage-based ranking, you might find that 80 per cent of the time users are requesting a specific column of data. This is because when someone says, for example, "I want to know my revenue for the months in...", a typical dataset might contain around 25 different fields related to revenue.
This is crucial because, as you know, CRM systems, for instance, have become very complex with numerous different columns for revenue. So, how do you know which one is the right one? This is where, for example, this usage-based ranking comes into play.
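As a rough sketch of that idea – not ThoughtSpot's implementation – usage-based ranking can be as simple as preferring the columns users have historically chosen most often. The column names and query log below are invented for illustration:

```python
from collections import Counter

# Hypothetical query log: the column past users actually picked when they
# asked revenue-related questions. Names are invented for illustration.
query_log = [
    "net_revenue", "net_revenue", "gross_revenue",
    "net_revenue", "deferred_revenue", "net_revenue",
]

def rank_candidates(candidates, log):
    """Order candidate columns by how often they were used historically."""
    usage = Counter(log)
    return sorted(candidates, key=lambda col: usage[col], reverse=True)

# Of the many revenue-like fields in a CRM, surface the most-used one first.
candidates = ["gross_revenue", "net_revenue", "deferred_revenue", "booked_revenue"]
print(rank_candidates(candidates, query_log))
# -> ['net_revenue', 'gross_revenue', 'deferred_revenue', 'booked_revenue']
```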
Describe some of the capabilities of Spotter, ThoughtSpot's agentic AI.
When a question isn't precise enough, we recognise it and ask clarifying questions. The goal isn't just to give you an answer; at its core, it's really to ensure that if we do provide an answer, there's a high degree of accuracy.
Basically, the semantic layer is a cornerstone of the agent, and it isn't necessarily related to LLMs. When you plug the agent in on top of your data warehouse – which might contain around 500 tables – it would be lost without a semantic layer to explain that data.
The semantic layer is where you define that, for instance, these five tables are related, those 300 tables are related, and the meaning of this table is X, Y, and Z. It describes those business processes, and each column has a specific meaning, potentially with different synonyms. When you ask certain questions, we expect the answer to be presented in a particular way. All of that information resides within the semantic layer – or in a table within the semantic layer.
The AI agent sits on top of these semantic layers; it's a bit like having a dictionary of your data, so to speak. This is what allows your agent to provide the best answer.
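To picture what such a "dictionary of your data" might hold, here is a minimal, hypothetical sketch of a semantic-layer entry – table relationships, column meanings, synonyms, and presentation hints. The structure and field names are invented for illustration and are not ThoughtSpot's actual modelling format:

```python
# A hypothetical semantic-layer entry: the "dictionary of your data" an agent
# could consult before building a query. All names below are invented.
semantic_layer = {
    "tables": {
        "orders": {
            "description": "One row per customer order.",
            "related_to": ["customers", "products"],
            "columns": {
                "net_revenue": {
                    "meaning": "Order value after discounts, in USD",
                    "synonyms": ["revenue", "sales", "turnover"],
                    "default_aggregation": "sum",
                },
                "order_month": {
                    "meaning": "Calendar month the order was placed",
                    "synonyms": ["month"],
                    "preferred_presentation": "line chart over time",
                },
            },
        }
    }
}

def resolve_term(term, layer):
    """Map a word from the user's question to a concrete table and column."""
    for table, spec in layer["tables"].items():
        for column, meta in spec["columns"].items():
            if term == column or term in meta.get("synonyms", []):
                return table, column
    return None

print(resolve_term("revenue", semantic_layer))  # -> ('orders', 'net_revenue')
```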
ThoughtSpot gives this agent multiple capabilities, and our work today is to keep adding more. We call them skills: just as a person starts with some skills and then learns new ones, that's what we're doing with the agent. The first skills we taught it were, obviously, to answer simple business intelligence questions – what are my sales over the months, what's the percentage change year-on-year?
We then trained it to answer more complex questions – for example, can you calculate customer lifetime value, or cut it by region – where sometimes the system even has to generate the formula, because the formula might not be part of your dataset.
These skills also include creating charts and a graphical interface. So it's not purely text generation; it also produces graphics that you can understand, act on, publish, share, and modify.
Could you elaborate on how the "data literacy" skill proactively guides users on what they can ask?
When we tested it with our customers, we found that they don't necessarily know what question to ask, or what questions they can ask the system. So we created a new skill that we call data literacy.
The data literacy skill helps people answer, "Right then, what questions can I ask about my dataset?" Based on the columns and the semantic layer we've created over your data, it can suggest, for instance, an analysis of your customer churn or the demographics of your customers.
Another skill, which we call 'tangent analysis', comes in when you have numbers – let's say you ask, "What is my churn over the months?" and you see your churn going up – and the next question you ask is, "Right then, why did my churn go up?"
This is no longer a simple analytical question; it's something where you need to run a more complex analysis. When you ask, "Why did my number change?", it will tell you, "Your number changed because in this specific region something happened, and in this specific subscription type something else happened," and so on.
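As a toy illustration of what that kind of "why did my number change?" breakdown involves, the sketch below compares a metric across two periods and ranks segments by how much they contributed to the change. It is purely illustrative and does not reflect ThoughtSpot's actual analysis:

```python
# Compare a metric between two periods and attribute the change to each
# segment. The numbers and segments are invented for illustration.
churn_last_quarter = {"EMEA": 40, "AMER": 55, "APAC": 25}
churn_this_quarter = {"EMEA": 70, "AMER": 57, "APAC": 23}

def explain_change(before, after):
    """Return per-segment deltas, largest contributors first."""
    deltas = {segment: after[segment] - before[segment] for segment in before}
    return sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)

for segment, delta in explain_change(churn_last_quarter, churn_this_quarter):
    print(f"{segment}: {delta:+d} churned accounts")
# EMEA: +30 churned accounts   <- the segment driving most of the increase
# AMER: +2 churned accounts
# APAC: -2 churned accounts
```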
What is ThoughtSpot’s latest tech development?
We are currently working on a deep research mode. This mode will take very high-level questions and decompose them with a chain-of-thought process. It could handle a prompt such as, "I'm a new salesperson in this area – what should I focus on to achieve my objective?"
Basically, it aims to advise you on how to achieve your objective. It breaks the question down into sub-questions and provides you with an answer. That's how we see this AI agent – not just as a pure natural language query tool. We solved that a year ago. Now it's really about giving you a personal analyst that can answer complex questions based on your data.
How are you achieving such high accuracy in your benchmarks? Could you elaborate on the methodology behind it and what contributes to this level of precision?
We run very intensive benchmarks, and we're the most accurate. Based on those benchmarks, and owing to the technology we've built over time, our ability to construct the right SQL query from a question is really high. We created this model of searching data early on, and now, with LLMs, we're combining the two.
Now we have the experience to make better systems. We don't use LLMs to generate SQL queries directly. We actually use LLMs to generate our own query language – we call it TML – and then we convert the TML into SQL in-house.
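Since the conversation doesn't go into TML's syntax, the sketch below only illustrates the general pattern Lopitaux describes: the LLM emits a constrained intermediate query structure (an invented shape here, not real TML), and deterministic code compiles it into SQL:

```python
# Illustrative only: an LLM emits a constrained intermediate query structure,
# and plain code compiles it to SQL. This is NOT ThoughtSpot's TML syntax.
intermediate_query = {
    "select": [{"column": "region"}, {"column": "net_revenue", "agg": "SUM"}],
    "from": "orders",
    "group_by": ["region"],
    "order_by": [{"column": "net_revenue", "direction": "DESC"}],
}

def compile_to_sql(q):
    """Translate the intermediate form into SQL deterministically (no LLM involved)."""
    select = ", ".join(
        f"{c['agg']}({c['column']}) AS {c['column']}" if "agg" in c else c["column"]
        for c in q["select"]
    )
    sql = f"SELECT {select} FROM {q['from']}"
    if q.get("group_by"):
        sql += " GROUP BY " + ", ".join(q["group_by"])
    if q.get("order_by"):
        sql += " ORDER BY " + ", ".join(
            f"{o['column']} {o['direction']}" for o in q["order_by"]
        )
    return sql

print(compile_to_sql(intermediate_query))
# -> SELECT region, SUM(net_revenue) AS net_revenue FROM orders GROUP BY region ORDER BY net_revenue DESC
```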
Our approach is different from the rest of the industry. It's really helping us achieve a higher degree of accuracy. The second thing helping us achieve a higher level of accuracy is our coaching capabilities.
Our human-in-the-loop approach can improve the query, and it's also helping us reach the highest level of accuracy. Our benchmarks show 94 per cent accuracy, and with coaching it goes up to 99 per cent.
If you had to list some of the significant milestones you still want to achieve – not the ones you've already reached, but what you truly aspire to accomplish in making businesses autonomous – what would they be?
We are in the age of agentic analytics, and ThoughtSpot is all in on it. Our capabilities include letting people have their own personal analyst, receiving notifications about changes in their business through alerting, and more.
We provide this intelligence for an enterprise's employees, and for their customers too, with our embedded capabilities. Organisations can embed Spotter – embed their agent – for their end customers. This is incredibly important, and we're seeing many customers adopt this technology today with great results.
The next level for us is the concept of autonomy, which we see as resting on three main factors. So what does autonomous mean? Imagine waking up in the morning as a Chief Customer Officer (CCO), and an agent reaches out and says, 'Hey François, just be aware, these five customers may churn next quarter for this reason, this reason, and this reason.'
Essentially, someone is working for you 24/7 – proactively looking at all the data you have, able to answer questions, warn you about things, and then act on them.
That's what we call autonomous. You achieve it by allowing people to create not another dashboard to monitor customer churn, for instance, but a new agent.
You give this new agent a mission – it's like a job description. Think of it like a new employee, your teammate. You tell them, 'Hey, I want you to look at every customer that may churn in the next quarter. I want you to send me a Slack notification, to create a task in my CRM system to warn me about it, and to send them an email inviting them to a dinner.'
This is how you create the journey. After the job description, you give the agent an objective. For example, you can give it a target: 'Hey, your target is to reduce churn by 10 per cent,' or to reduce unexpected churn by 10 per cent. Then you define the data sources it can work with, because trust is important and the agent needs guardrails: 'You can access this dataset and this dataset.'
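To picture that "job description" framing, here is a hypothetical sketch of what an agent definition could contain – a mission, an objective, and guardrails around data sources and actions. The class and field names are invented for illustration, not ThoughtSpot's configuration format:

```python
from dataclasses import dataclass, field

@dataclass
class AgentDefinition:
    """Hypothetical 'job description' for an autonomous analytics agent."""
    mission: str                 # what the agent watches for, day and night
    objective: str               # the measurable target it works towards
    data_sources: list[str] = field(default_factory=list)     # guardrails: what it may read
    allowed_actions: list[str] = field(default_factory=list)  # guardrails: what it may do

churn_agent = AgentDefinition(
    mission="Identify every customer likely to churn in the next quarter",
    objective="Reduce unexpected churn by 10 per cent",
    data_sources=["crm.accounts", "billing.subscriptions"],
    allowed_actions=["send_slack_notification", "create_crm_task", "send_dinner_invite_email"],
)

print(churn_agent.mission)
```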