How to Utilize AI in your Win/Loss Interviews

AI can be a powerful tool to help you analyze your Win/Loss data. It’s especially powerful for summarizing individual chunks of feedback.
by: 
Brennon Garrett
Kaptify Founder
Brennon has conducted thousands (and thousands) of Win/Loss interviews. If he doesn't hold the world record for most Win/Loss interviews ever conducted, he's at least a contender.

A common question we get is how teams can utilize AI to help with their Win/Loss programs. This is a big topic for a couple of reasons. The first reason is that AI seems to be everywhere these days, so people want to know how to use it in Win/Loss. And second, Win/Loss programs are full of natural language, and that’s what AI primarily does: it ingests natural language and outputs natural language. So a long-form Win/Loss interview should be the perfect document to feed into an AI. To utilize AI in your Win/Loss program, it’s helpful to think first about how and where you get insights out of Win/Loss data. Do you want insights at the interview level? At the full dataset level? At the individual question level (“why are we losing?”)? Or do you want insights from some combination of those for a certain customer or product segment? 

There’s a temptation to believe you can throw all of your Win/Loss data into an LLM and just start asking it all of your questions. At the rate AI is advancing, we may be there sometime in the near future, but we’re not there yet. 

How to get Win/Loss insights from AI

Segment your data appropriately

You need to segment your data before inputting it into the AI, because the AI won’t segment your data on its own unless you give it very explicit instructions on how to do that and include very clear segmentation variables in each of your interviews. For example, if you’ve conducted 20 interviews and simply drop them into the AI, you need to organize the interviews in a way that makes it very clear to the AI where each interview begins and ends, and which descriptive characteristics apply to which interview (name, revenue, date, etc.). That said, the bigger your dataset, the more errors the AI will make trying to keep segments straight. A much cleaner way to feed data into your AI is to feed it an already fully segmented dataset. So identify the key segments (there’s a quick sketch of this step after the list below), like:

– only loss interviews 

– only interviews above a certain revenue threshold 

– feedback data: answers to the question “why we lose”

– customer size: mid-market customers
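
To make the segmentation step concrete, here’s a minimal sketch in Python. The record layout (outcome, revenue, segment, per-question answers) and the file name are hypothetical; your own export will look different, but the idea is the same: filter down to one clean segment, then hand the AI that single file.

```python
# Hypothetical interview records; your Win/Loss export will look different.
interviews = [
    {
        "company": "Acme Co",
        "outcome": "loss",            # "win" or "loss"
        "revenue": 250_000,           # prospect's annual revenue
        "segment": "mid-market",
        "answers": {
            "why_we_lost": "They felt the onboarding timeline was too long and pricing was unclear.",
            "sales_experience": "The rep was responsive, but the demo felt generic.",
        },
    },
    # ...more interviews...
]

def build_segment(interviews, *, outcome=None, min_revenue=None, segment=None, question=None):
    """Filter interviews down to one segment and return plain text to feed the AI."""
    chunks = []
    for iv in interviews:
        if outcome and iv["outcome"] != outcome:
            continue
        if min_revenue and iv["revenue"] < min_revenue:
            continue
        if segment and iv["segment"] != segment:
            continue
        # Either pull the answer to one question, or keep all of this interview's answers.
        text = iv["answers"].get(question) if question else "\n".join(iv["answers"].values())
        if text:
            chunks.append(f"--- {iv['company']} ({iv['outcome']}, {iv['segment']}) ---\n{text}")
    return "\n\n".join(chunks)

# e.g. only loss interviews, mid-market only, answers to "why we lost"
loss_feedback = build_segment(interviews, outcome="loss", segment="mid-market", question="why_we_lost")
with open("losses_why_we_lost_midmarket.txt", "w") as out:
    out.write(loss_feedback)
```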

Prompt the AI appropriately 

Once you have your data appropriately segmented, take your Google Doc (or equivalent) and upload it to your AI. I’ll use OpenAI for our example here. You’ll need a paid account, and you’ll create an “Assistant” (this lives in OpenAI’s developer platform rather than the regular ChatGPT chat window). We usually create an Assistant for every single unique dataset that we generate. Into that Assistant we upload the dataset, and then we ask it to generate insights about that dataset. Once you’ve uploaded the dataset, look at the Vector Store for your Assistant to get the file ID of the file you just uploaded; it’ll help the AI understand where to look for answers to your queries (reach out to us if this is confusing and we can show you how to do it!). Once you’ve got the document uploaded and the file ID in hand, ask the AI explicit questions about your Win/Loss data. We’ve wrestled a lot with the prompts we plug into various LLMs. 
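
If you’d rather do this step in code than in the browser, here’s a minimal sketch using the OpenAI Python SDK and the Assistants API. The file name, Assistant name, model, and instructions are placeholders, and the exact SDK paths can shift between versions, so treat it as a starting point rather than a recipe.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload the already-segmented dataset (placeholder file name).
uploaded = client.files.create(
    file=open("losses_why_we_lost_midmarket.txt", "rb"),
    purpose="assistants",
)
print("File ID:", uploaded.id)  # keep this handy for your prompts

# 2. Put the file in a Vector Store so the Assistant can search it.
store = client.beta.vector_stores.create(name="Loss interviews - mid-market")
client.beta.vector_stores.files.create(vector_store_id=store.id, file_id=uploaded.id)

# 3. Create one Assistant per dataset, pointed at that Vector Store.
assistant = client.beta.assistants.create(
    name="Win/Loss - mid-market losses",
    model="gpt-4o",  # placeholder; pick whichever model you prefer
    instructions=(
        "You analyze Win/Loss interview data. Answer only from the attached file "
        "and quote the participant's wording where possible."
    ),
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [store.id]}},
)
print("Assistant ID:", assistant.id)
```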

As you identify shortcomings or errors in the responses from the AI, you can tweak the prompt by adding and removing lines where necessary. When prompting the AI you should try to be as comprehensive as possible within as few sentences as possible - you’re looking for concision. It’s a trial-and-error process, and it definitely takes time. But the more prompting you do, the better you’ll get at it. 
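
For example, a first-draft prompt (purely illustrative, not our exact wording) might look like: “Using only the attached file [your file ID], list the five most common reasons we lost deals in these interviews. For each reason, give a one-sentence summary and one short supporting quote from a participant. Do not use outside knowledge or invent details.” From there, add or remove lines as you see where the answers drift.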

AI is great for summarizing (but not the entire interview)

In addition to asking the AI for insights based on large datasets, AI is also great for quick summaries. I’ve noticed that when people think about Win/Loss summaries from AI, they tend to default to the idea of summarizing the entire interview. The problem with summarizing the entire interview is that if you’ve structured your interview questions correctly, you’ll likely have 10 or more discrete areas of feedback captured from the participant’s experience (“why we lost”, “sales experience”, “pricing”, “competition”, “product”, etc). An interview summary from an AI is usually too general and broad to summarize all of these areas individually. You could try building some sophisticated prompts that will move you further in this direction, but it tends to be pretty challenging to get right. 

Instead of summarizing the entire interview, we utilize AI to summarize small chunks within each category of feedback. For example, feedback for a single question like “why did we lose” will usually get captured in a few paragraphs. Instead of feeding the entire interview into the AI, we’ll take just those paragraphs, feed them into the AI, and ask it to provide a succinct summary of the text. Now, instead of 3 paragraphs explaining a single reason why you lost, the AI summarizes it beautifully in a single sentence. This type of summary makes it much (much) easier to comprehend and navigate the transcript. Without easy-to-read summaries of key parts of the interview, you’re stuck slogging through long chunks of text to decipher what could probably be captured in a well-written sentence. And if you don’t summarize these chunks cleanly, you’ll be attempting to interpret many large chunks of text simultaneously and keep everything in your head as you move through each section of the interview. It’s kind of impossible; it quickly exceeds human cognitive capacity. If you can organize your data with nice summaries, it’ll make the interview far more manageable, and far more insightful. 
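
Here’s a minimal sketch of that chunk-level summarization using OpenAI’s chat completions API. The model name, prompt wording, and file name are placeholders, not our exact production setup.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_chunk(chunk: str, question: str) -> str:
    """Condense a few paragraphs of interview feedback into a single sentence."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you prefer
        messages=[
            {
                "role": "system",
                "content": (
                    "You summarize Win/Loss interview feedback. "
                    "Reply with one specific sentence. Do not add information "
                    "that is not in the text."
                ),
            },
            {
                "role": "user",
                "content": f"Question asked: {question}\n\nParticipant's answer:\n{chunk}",
            },
        ],
    )
    return response.choices[0].message.content

# e.g. the few paragraphs captured under "why did we lose" for one interview
print(summarize_chunk(open("why_we_lost_acme.txt").read(), "Why did we lose?"))
```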

AI won’t be conducting interviews anytime soon

With all of the rapid progress in AI, there’s a general tendency to overestimate what AI is capable of, and what it will be capable of a year or two from now. One of the most interesting potential use-cases of AI for Win/Loss interviews is using AI to conduct the actual interview itself. As the founder of Kaptify, and as the lead software engineer of our platform, I’m definitely interested and curious about where these capabilities are heading. If an AI could conduct human-like Win/Loss interviews it’d reduce our costs substantially, and allow us to scale our process in pretty exciting ways. But I have to admit, I don’t see this on the horizon at the moment.

Not that an AI couldn’t do a “decent” job of conducting a Win/Loss interview. But one thing I’ve learned about the Win/Loss interview itself is that it’s a deeply human experience. People like seeing another human face, and they like engaging in real conversation with a real human about the experience they’ve had. They engage with all of the tiny micro-expressions and intonations we generate when we communicate. For AI to fully replicate this experience, it would have to conquer most, if not all, of the hardest problems in the field: tiny facial movements right on cue, perfect language responses, believable eye contact that imitates human emotion, timing, humor, slang.

It’s not that AI won’t eventually get here; it very well may. But even if AI is 95% as good as a human, those last 5 points will make the interview feel just weird enough that I believe it will degrade the quality of the interview, a lot. And from a technical standpoint, those last 5 points are the very hardest to conquer, and tend to take the most time and developer resources. So I guess we’ll see. Maybe I’m wrong. But that’s how things look to me at the moment.