Arion Research LLC

Disambiguation Podcast Episode 5: Intelligent Chatbots - Transcription

Michael: Welcome to the Disambiguation Podcast, where each week we try to remove some of the confusion around AI and business automation by talking to experts across a broad spectrum of business use cases and the supporting technology. I'm your host, Michael Fauscette. If you're new to the show, we release a new episode every Friday. It's released as a podcast on all the major podcast platforms, as a video on YouTube, and we also post the transcript on the Arion Research blog. If you want to drop by there and read it, I'm not a hundred percent sure why you would, but it's available if you do. So, in the show today, I'm excited to talk a lot about chatbots and intelligent chatbots.

And of course that means we're talking about large language models and generative AI, and I'm joined today, I'm gonna bring him in, I'm joined today by Tim Handorf. Tim, welcome. Tim is currently the co-founder of G2, a software marketplace, head of G2 Labs, and the executive chairman of CarrierSource, which is a startup that was founded by one of the ex-employees of G2, and it's a really interesting marketplace around shipping. He served in several roles at G2, and I should say I worked for Tim for several years at G2. He was the CEO when I first joined, and then he went to run an acquisition that we'd made around G2 Track. Prior to that he was the VP of product management at BigMachines, a CPQ startup that was acquired by Oracle in 2012.

So welcome Tim.

Tim: Thanks Michael. Pleasure to be here and excited to chat a little bit about chatbots.

Michael: Yeah. Maybe we could have just had my bot call your bot. So tell me a little bit about your role currently at G2, and then I'm really interested in the innovations that you've added to the site this year using generative AI.

Tim: Yeah. My co-founder, Mike Wheeler, and I run a group called G2 Labs, and we get the privilege of experimenting with all the new technologies that come up and ultimately trying to design and bring new products to market that will help buyers and sellers of software. And back in November, as many people did... actually, prior to November, we had tried all kinds of interesting things with AI. AI is not new. We had tested machine learning and natural language processing, mainly around trying to summarize the reviews on our marketplace, and frankly, the results were lackluster.

And so I wasn't that excited about it. I was like, I get it, I get where it's going, I get the potential, but I just didn't see that it was ready yet for prime time, until November of last year when ChatGPT came out. And my co-founder said, hey, we should try this. And initially I was like, another large language model test? Okay, let's go for it. But then I actually tried ChatGPT myself and, like many people, my eyes were opened to the possibility. It was the first time that I had tried something and it just really worked, and worked really well, and so we agreed we were gonna test it.

Unfortunately, my co-founder decided he was going to go on sabbatical during the holidays, and so we couldn't get right at it. But in February we decided to build our first bot, which was designed to help connect buyers and sellers, and really to help buyers find the right short list of products that would meet their requirements.

Michael: Yeah. I played around with it a little bit when you guys first brought it out, because, you'll remember, when I first joined I tried the existing helper and it didn't help much. It was actually pretty bad. But the good news is nobody used it. So I guess we didn't have to worry about that from a customer satisfaction perspective anyway, but I'm amazed at how much...

Tim: Do you remember, Michael, when we actually had live people behind the chat? We do, actually. And so some of our initial testers of this were those live people that were getting those questions. They were equally impressed as we were.

Michael: Yeah, that's good, because some of those folks worked for me for a little while when I was there. And it's interesting to see how chatbots have evolved, and you mentioned the people behind it.

I did a survey in a project for a client a few months ago asking about chatbots. It was about communications in general, but chatbots particularly, and around support. When I asked people specifically about talking to a chatbot, they didn't really want to. If you ask, would you rather talk to a human than a chatbot, they're always gonna say, I'd rather talk to a human, and that was a big percentage of them, like 80%. But if you ask the question a little differently, whether they're talking to more of an interactive chatbot, or whether they've had good results, then nearly 50% of them said, oh, I had good results interacting with a chatbot recently.

And I think that's a big shift in the way those things function. So I'm curious how you've implemented that, and how you've managed to make it interactive so that it does provide a good customer experience. What are the advantages of having generative AI in the chatbot?

Tim: I think you were alluding to the experience, and everybody's probably had this experience, where you've called up your bank or whatever, and you have three options for a prompt and none of 'em meet your need. And it's just frustrating, because you want to hit zero or whatever to go talk to a person. So I'm not surprised, based on your research, that people don't wanna use a chatbot if that's what they perceive a chatbot to be, because the old website-based chatbots were usually very similar to those voice systems. But the difference with an AI-enabled chatbot is that I think it really puts the user in control versus the chatbot in control. You can ask the question, and it will answer your question right from the beginning, as opposed to having to wait for a prompt or figure out if it will ever get to you.

And ultimately, at the end of the whole thing, you often end up having to just enter a case or something like that and wait for somebody to respond to you, whereas with the bot you can get the answer when you need it. That is the big difference. And the amazing thing about these AI-enabled chatbots is that if you feed them the right data, they have perfect recall, or near perfect recall.

And it just gives you better information than a human can, because a human would probably have to do a search to find the answer; they don't always know it. They've got a database on the other side that they're looking up, and the chatbot is basically doing all that, usually faster than a human can do it.

Michael: Yeah. It's interesting, and I know you'll be at Dreamforce next week and I will too. One of the things that Salesforce has done, and you see this in other customer service systems too, is that not only can you use the chatbot up front, but when you get to the human, what you probably don't realize is that the chatbot now sits beside the human too, because they're also using it to answer your questions, because it's much faster at finding the answers than they ever were with the old systems. So it's funny to think, I don't wanna talk to a chatbot, but that human is probably having an assisted conversation with you anyway. But it is different than the logic-tree ones, I'll tell you. I was trying to return something to a large online retailer recently and had a lot of frustration. You would've laughed at me as I was yelling at my computer because it wouldn't let me return the item. I finally got to somebody, but, woo, that was not a good experience. So, using gen AI in the chatbot, what are the challenges? What are the limitations? What was hard and what worked really well?

Tim: Yeah. I think you do have to be cautious, and I don't think every application, this is just my opinion, is ready for a chatbot. I think with ones that are more sensitive, where you really need the right answer, let's say medical situations, you want to be a little more cautious. But that's a little different than trying to select the right software, for example.

And chatbots will, as they say, hallucinate, right? You'll get an answer that seems perfectly legitimate, and the chatbot gives you that answer with great confidence, but it is only as good as the data that it's provided. If it doesn't have the data, it might make some assumptions that are incorrect, in the same way that a human might. It doesn't do it intentionally, it doesn't lie intentionally, but sometimes it's wrong. And so right now I think there are good implementations, or good use cases, and then there are others that probably aren't quite there yet.

Michael: Yeah, I was playing around with them when I first started to use them back in December or so, and I had ChatGPT write a biography for me. I'm a fairly online person, and it did a pretty good job, and in fact it even gave me two advanced degrees from UC Davis, which was really exciting.

Of course, I never went to UC Davis, but that's okay. It was a very good biography. I was impressed with myself. I'll take it. Yeah, exactly. So obviously one of the biggest issues with chatbots, I know, is data, right? So how… And you're using ChatGPT on the backend of your chatbots. Monty, I should call it Monty, that's its name, right? I have one on my website; it's called Ario. So we can have them talk at some point. But how did you train it? Obviously the ChatGPT large language model is one piece of it, but then you've got all the site data and data in general. How do you guys think about that, and how did you train it so that it can actually respond off of the G2 site data?

Tim: Sure, sure. The way that I like to think about this is in human form. If we were to train a chatbot whose purpose is to provide information, it's the same way that a sales representative might provide information about a product. But you have to train that sales rep, and the first thing that you do when you're training that sales rep is gather all the product information. You wanna gather its advantages, its disadvantages, its use cases, potentially some customer testimonials. And you also want to train them in the sales methodology that you subscribe to.

And if you can provide all of this information in text form and have the human read it, they will consume it, and then in theory they will be able to do the job. You basically do the exact same thing with the chatbot. You provide it all of that text data, which goes into a vector database, and then you tell the chatbot how to use and recall that data. That is exactly the same way that you might train a salesperson. It sounds involved, but it's actually much simpler than you might think.
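
What Tim describes is essentially a retrieval pattern: product text is stored as vectors, the most relevant passages are recalled for a question, and those passages become the chatbot's context. Here is a minimal sketch of that idea; the embed() function, the sample documents, and the bag-of-words similarity are illustrative stand-ins, not G2's actual implementation, which would use a real embedding model and vector database.

```python
# Minimal retrieval sketch: documents go into a "vector store" and the most
# similar ones are recalled for a question before the bot answers.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words vector. A real system would call an
    # embedding model from an LLM provider instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical product text; each document is stored alongside its vector.
documents = [
    "Product A integrates with Salesforce and HubSpot; strong CPQ features.",
    "Product B is a lightweight review tool aimed at small teams.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(question: str, k: int = 1) -> list[str]:
    # Rank stored documents by similarity to the question and return the top k.
    q = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# The retrieved passages would then be placed into the chatbot's prompt as context.
print(retrieve("Does it integrate with Salesforce?"))
```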

Michael: Yeah, on my show last week I was talking to a couple folks from Pickax, which is an AI platform, and we were talking about prompts, and they used an analogy that I really like. They said that you have to think about the language model as a smart intern, a reasonably smart intern. When you're training them, they come to you with general intelligence, but they don't have specific intelligence; they don't know about your stuff. So you have to train them on that, and that makes them competent. They think of the large language model in sort of the same way, which makes sense to me anyway. Yeah, exactly. So I know you're using the chatbot on both ends of the marketplace, right? One end of it is for buyers as they come in, but I'm curious about the other side of it, the seller side.

What are you doing with that side? What kinds of things? Because both of them are customer experience issues, 'cause obviously your customers are on both ends of that marketplace, but the monetization part of it is on the seller side. So how has it helped the experience to have that available?

Tim: The way that we think about this is that if we can provide a great experience for the buyers, ultimately that's going to help the sellers. Yeah. And so the first bot that we built was really designed to narrow the list of software that would meet your requirements. But at some point the buyer has questions that are very detailed. They have very detailed requirements, like does it integrate with X, Y, Z software, as a classic example. Maybe that's buried somewhere in a review, and you might be able to find whether it does it or not, but you have to really search for it. Yeah. And so if you can then enable the seller to provide more of this information to the chatbot, ultimately the buyer is going to get the answers that they need faster. And if you can, as a seller, speed up the sales cycle, and do that without having to interact with the buyer so there's no labor on your end, you're saving money and ultimately getting much stronger leads through the process. And they'll contact you, right? If it's a good fit and you provided a great service and a great early sales experience through the bot, they want to contact you, because it's gonna solve the problems that they're trying to solve.

Michael: Yeah, that makes sense. It sounds almost like the sellers are helping train the bot and giving it more specifics around the products that they have listed. Is that accurate?

Tim: That's accurate. And there's a lot of information on G2 that the bot can be trained on, but not everything's there. So enabling the sellers to provide more of that information, and with a bot, especially an AI-based bot, you can provide it a lot more information. The problem with writing a ton of information on a website or on a marketplace is that the buyer has to find it. Yeah. And that's hard in many cases. You have to have a really good search engine, like Elasticsearch, to be able to do that, and even then you have to sift through the text. With the bot, it does all that for you.

Michael: And a lot of that data is not very well structured, obviously. It's people writing reviews, so it has to be able to interpret some of that stuff too, I would imagine. It does.

Tim: That's what's amazing about it, and it's not just the technology of the large language models, I think; it's the concept of these vector databases that are enabling it as well. I don't claim to be an expert in vector databases, so please don't ask me any detailed questions.

Michael: I was just gonna jump in on that.

Tim: But from what little I do know about it, it's really a big enabler for customizing the bot experience.

Michael: So I know one of the biggest issues with training the bots, training the large language model, is data quality. How have you approached this? How do you ensure that you have good quality data for the bot to use, and that it continues to consume good quality data as it learns more?

Tim: Yeah, and I think there are a couple of points there. One is the seller, right? We enable the seller to continue to upload the most recent data. But also, you mentioned the prompt strategy before. Being able to take, let's say, a recorded Gong call or a Zoom call and summarize that call into the advantages and disadvantages, or pull some of that data out, is very important. And so often you're using a large language model to actually summarize the data before you put it into the vector database, so you're getting the right things into it versus the wrong things.
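
That summarize-before-you-index step can be sketched roughly like this; the prompt wording, the llm() placeholder, and the in-memory "store" are illustrative assumptions rather than the actual pipeline, which would call a real LLM provider and write to a real vector database.

```python
# Sketch of summarizing a raw call transcript before indexing it, so only the
# distilled, relevant content ends up in the vector store.
SUMMARY_PROMPT = """Summarize the following sales call transcript into:
- Key advantages of the product mentioned
- Key disadvantages or objections raised
- Notable use cases or integrations discussed

Transcript:
{transcript}
"""

def llm(prompt: str) -> str:
    # Placeholder: a real pipeline would send this prompt to an LLM provider's API.
    return "Advantages: ...\nDisadvantages: ...\nUse cases: ..."

def summarize_call(transcript: str) -> str:
    return llm(SUMMARY_PROMPT.format(transcript=transcript))

def index_document(summary: str, vector_store: list) -> None:
    # In a real system the summary would be embedded and written to a vector DB;
    # here we just append it so the sketch runs end to end.
    vector_store.append(summary)

store: list[str] = []
index_document(summarize_call("...full Gong or Zoom transcript..."), store)
print(store)
```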

Michael: Yeah. I just added an extension to my Chrome browser the other day that's a little AI chatbot that summarizes things that you're reading. So you go in and you look at a long McKinsey blog article or something and you're like, I don't really wanna read this whole thing; can you summarize this in a paragraph for me? And it does. I'm like, wow, that saved me a lot of time. I hope it gave me everything I could have gotten out of it, but time's important too. Yeah, that's good. And then from a quality perspective, have you had any issues with data quality? Is there anything extra that you had to do to make sure that the training data is accurate?

Tim: There are several techniques that we use, and this is really post-implementation. One is something ChatGPT uses as well, which is basically user feedback that you can use for machine learning: thumbs up, thumbs down, did it answer the question? But we've also been able to implement a prompt on top of all of the data, so every chat we analyze via this prompt, which determines whether or not the user succeeded in getting their questions answered. It's like a really fancy sentiment analysis. I wouldn't call it sentiment analysis, but we can actually determine from the conversation whether or not we provided the right data. And then we can dig into the ones that maybe didn't succeed and change the prompt strategy so that the next time it gets the answer correct, or add data, as you mentioned. Yeah.
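
A rough sketch of that kind of post-conversation grading prompt is below; the labels, the prompt text, and the judge() placeholder are assumptions for illustration, not the prompt G2 actually runs.

```python
# Sketch of grading each chat with an LLM prompt to decide whether the user's
# question was answered, so unsuccessful chats can be reviewed.
EVAL_PROMPT = """Read this chat between a buyer and a chatbot.
Answer with exactly one word, ANSWERED or UNANSWERED:
did the chatbot successfully answer the buyer's question?

Chat:
{chat}
"""

def judge(prompt: str) -> str:
    # Placeholder: a real implementation would send the prompt to an LLM.
    return "ANSWERED"

def evaluate_chats(chats: list[str]) -> list[str]:
    # Return the chats that need a human to review the prompt strategy or data.
    return [c for c in chats if judge(EVAL_PROMPT.format(chat=c)) == "UNANSWERED"]

needs_review = evaluate_chats(
    ["Buyer: Does it integrate with Salesforce? Bot: Yes, via the native connector."]
)
print(needs_review)
```

The flagged conversations would then feed the loop Tim describes: adjust the prompt strategy or add data, and re-test.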

Michael: And it learns not only from the training data you've already put in there, but it's also learning from the conversations as it has them. That's right. That's really interesting. Yeah. So you get that iterative approach, and it just gets better the more people use it.

Tim: That's correct. Yeah, definitely. That's cool. The more conversations it has, the better it gets.

Michael: So it's exciting, and I'm excited about AI and all the things that I'm investigating and researching about it, but there's also this other side of it, right? There's a lot of talk around ethics, a lot of talk around security, a lot of talk around privacy. So I'm curious, what have you guys thought about there? What are you doing to deal with those concerns, ethical, privacy, whatever they are? How have you approached that?

Tim: Yeah. I think one of G2's mantras from the beginning has been all about transparency. We created G2 to bring more transparency to B2B buying, right? And in everything we're doing with chatbots, we wanna be as transparent as possible. We believe transparency builds trust, and over time, if you can gain that trust, people will ultimately have more trust in using the bots.

And when we think about transparency, there's the issue of the company being transparent that you're using a bot, and so you wanna make sure that they know. As you mentioned earlier, people often don't want to talk to a bot, but the bots are now so good that people may not even realize they're talking to one, and some people perceive that as an advantage. We at G2 would stray away from that and say we wanna be transparent that they're using the bot. But then on the other side, you wanna be transparent about how the data's being used too.

And that kind of gets into the privacy side of what we're doing. So we will be transparent about how the data's being used, who has access to it, et cetera. At the same time, we don't wanna put a bunch of contracts in front of somebody that really scares people away from using the bot, or makes them go through a really bad user experience.

So those are a couple. And then of course there's the IP protection. Yeah. That you want to get into. That's usually the first thing that comes up when you're using a pre-trained model: okay, what about the IP? And you do have to read the terms. But at the same time, I think these companies, ChatGPT, Google, realize that their business is highly dependent on privacy and on IP protection. And so you do have to somewhat trust that they're gonna be a good actor in this situation and not a bad actor.

Michael: And for the most part, you're not really collecting a lot of privacy-focused data in those interactions anyway. But there is some data being shared there if they go all the way through to connecting to a buyer, that sort of thing. Have you had any concerns around privacy, or have you done anything special there?

Tim: With PII in particular, we do not encourage nor ask for any PII in a conversation. Now, could a user put in PII without being prompted? Yes. We would do our best to try to scrub that, but it's not something that we would ask for. And if a user wanted to be contacted, we would do that in a way that's very much in line with all the privacy laws that are out there today, where we make sure they're very aware that they're opting in to be contacted.

Michael: Yeah. I know when we were going through privacy and GDPR prep and that sort of thing, that was something that we looked at a lot back then, so I guess that carries over into what you're doing with the bots as well. There's probably not a big security risk around most of this except for the IP part. But there is also the potential for some bias in the model. So I'm curious, have you guys looked at that at all?

Is there anything you've done, and is there any risk around bias from that perspective?

Tim: Of course there's risk of bias, and your prompt strategy, I think, can help mitigate that a little bit. But there will also be bias based on the data that's there. Like anytime you do a survey or anything, there is a potential for bias based on the sample. Yeah. If we don't have reviews that are representative of different viewpoints, different size companies, whatever it might be, and we're using that data to provide the responses, the responses are gonna be biased as well.

Michael: That makes sense. And there's also, I guess, the risk, you're talking about reviews, that there are some reviews that are more reactionary or maybe more negatively focused that could be in the mix, and people would have to understand that too.

Tim: For sure. And G2 has a side of it that rates software based on the reviews that are out there. We have not tried to do any generative AI that modifies the algorithm for rating or anything like that. The bot uses that algorithm to present options, so we will say this product is rated X, Y, Z on G2.

Michael: So you are pulling in the data from the other algorithms, and then it can advise based on what it gets directly off of those rating algorithms. Yeah, that makes sense. So obviously this is a really interesting application of the chatbot, I think, because it's not directly an "I have a problem" kind of thing; it's more of an "I'm looking for solutions" sort of thing. So what do you think the future possibilities are that we can see from these chatbots, and particularly these intelligent chatbots that use generative AI? Where are we gonna see them? What else do you think is gonna happen around that? It's changing so quickly, it's hard to keep up.

Tim: Yeah. These opinion questions, I love talking about them, but I think everybody has speculation around these, and they're great conversations over a drink, about where it could go, because there are so many options. How it will change society, and how it will even change the way we learn over time and what we learn over time, I think are wonderful questions. If I were to think specifically about B2B interactions, though, which is what G2 is, I think it's really changing the way people find and search over time.

And if we compare, will it be the new Google? Will people enable enough information out there that bots basically become the complete way that you search, and ultimately maybe find and buy products directly within the bot, and never even go anywhere else? So is it the new internet? I don't know. I can see a world where you could do everything and interact through a bot, because when you're searching, you're typically looking for information, and then you want to potentially transact.

And so with a bot, you're looking for information; would you ultimately be able to transact as well? Those are some of the things I think about, and that's interesting to me.

Michael: Yeah, because you are starting to see things merge. Just a simple example: I noticed that G2 has put a plugin on ChatGPT Plus now, so you can actually do what you would have done on the G2 site directly in ChatGPT, I assume. Do you think that's part of what you just said, this kind of merging of all these different things that are happening?

Tim: I think so. And right now, even the bots that we build are designed for a specific purpose; they're trained for a specific purpose. But eventually we're gonna have bots where you just need one bot for everything, and it's smart enough to know the context of the question and where to go and find that information. Versus a plugin, right, where you have to actually say, I'm gonna use this plugin, instead of the bot knowing, based on the question, that it should go use that plugin.

Michael: Yeah, that makes sense. It's almost like, in a way, you end up with this assistant bot that just helps you do all sorts of different things. You've, G2 has, jumped in pretty heavily into this, and I know that there's a lot of support for automation and that sort of thing among the founders and among the executives at G2 anyway. But what would you advise?

I talk to a lot of companies, and there are mixed feelings about it, I think. Is it time to jump in? Should we wait? What do you think? This is definitely another opinion question, but I'm just curious. Should companies be at least experimenting with this, or are they gonna get left behind if they don't do something?

Tim: I obviously have bias on this question. But if you're not learning about this, or if you don't have some way of learning about it and understanding how it might impact your business and how you can use it, you might be at risk, is my opinion. Now, is it the right time for you today? I don't know. As I mentioned, there are certain industries that it probably applies to more than others, but almost every industry, I think, probably has a way to make itself more efficient. Even if they're not developing an AI product, there's probably a way, at least internally, they can improve their operational efficiencies using it.

Michael: So it sounds like you'd say that you're probably better off at least piloting and experimenting than you are holding back at this point, because things are gonna continue to evolve and maybe you get left behind if you don't learn what you could do with it.

Tim: Yeah, at a minimum, even if you're not putting development effort into it, look at the tools that are out there, because they are being developed fast, and there are more coming out every single day. So just keep up to speed on what tools might benefit your business or certain areas of your business, and then test 'em. A lot of these newer tools have free trials, so just try 'em out.

Michael: Yeah, that makes sense. One of my other guests used the old Nike phrase, just do it. Maybe that's the right answer. Well, we're just at the end of our time. Great conversation, and always fun to chat with you, Tim. I really appreciate you joining. Before I let you go, though, could you recommend someone, a thought leader, an author, some mentor, that's influenced your career and helped you develop and evolve?

Tim: What's coming to mind when you say that right now, and I wouldn't call her a mentor or anything like that, but maybe somebody that I've been inspired by as I've been thinking about AI, is a woman named Dorothy Vaughan. Dorothy Vaughan was a programmer and head of human computing for NASA. And when I say human computing, that means she ran a group of people that were doing mathematical calculations on paper. And she sees next door that they're installing the first IBM mainframe, to basically take her and all of her team's jobs away. Yeah. And immediately, many people would react with fear; they'd want to sabotage it, they wouldn't want to embrace the change. But her reaction was different. She goes to the library, she checks out all the books on Fortran, she makes a copy of the manual, and she trains her entire team on it. And so when I'm thinking about AI, I get inspiration from what she did, which was not to react with fear, but to react with: all we can do is learn, so just go after the learning. And so I've been thinking about Dorothy Vaughan just recently.

Michael: Yeah, I love that story. And I think that's probably one of the things that can really help people understand, especially when you talk about the threat to jobs. We know there are gonna be some jobs that go away, or change, or get automated away, whatever, but it doesn't mean that you shouldn't be embracing this and learning, because the more you learn, the more opportunity you're gonna have. Yeah, that's great. Thank you, that's a very good story, and I love it; I've used it a few times myself, actually. So that's all the time we have, but thanks everyone for joining us this week. Don't forget to hit that subscribe button, and for more on AI, you can check out the Arion Research report that we published in August on AI adoption. It's a free download on the site, and you can't beat free downloads, so go check that out. And join us next week. I'm going to do a special edition on Dreamforce, and I have three of the executives in the Salesforce AI Cloud world that are gonna sit down with us for a few minutes and chat about some of the things that they're doing. So I think it's gonna be a really interesting episode. Looking forward to the conversation next week. I know we're gonna learn a lot about what they're doing, since Salesforce has certainly gone all in on the AI storyline. And that's it. I'm Michael Fauscette, and this is the Disambiguation Podcast.