Arion Research LLC

View Original

Disambiguation Podcast - Dreamforce 2023 - Transcript

Michael: Welcome to the disambiguation podcast, where each week we try to remove some of the confusion around AI and business automation by talking to experts across a broad spectrum of business use cases and supporting technology. I'm your host, Michael Fauscette. If you're new to the show, we release a new episode every Friday, each week as a podcast on all the major podcast channels, on YouTube as a video, and then we post a transcript on the Arion Research blog, if you'd like to read it.

This week's show is a Dreamforce special edition. And as you can see, I'm just back from San Francisco. I'm certainly in my office, as you can tell, after a busy week, and I have some special guests provided by Salesforce to talk AI. And actually, I'd say Dreamforce this year that was the place to be if you wanted to talk AI. If you're interested, there's a live blog post of Marc Benioff's opening keynote on the Arion Research blog. Before I go to the guest interviews, though, I'll give you a little overview of the announcements this week. Perhaps the lead announcement was the rebrand of AI cloud as the Einstein 1 AI Platform.

Now, the best way to talk about it is to put up the slide from the keynote deck, which I conveniently snapped a picture of so excuse the slight quality issue on that, but let me bring that up.

As you can see, the Einstein 1 platform is a super set of all the Salesforce products. So the underpinning, if you call it that, is this metadata framework, which goes across all the applications. And platform and the data cloud, the CRM apps, plus the industry clouds and Einstein AI. Sitting on top are the productivity apps, including Slack. Canvas, which is the new name for Quip, Tableau analytics, Heroku, and then it also includes connections to productivity tools, Microsoft office 365 and Google workspace. And then the AppExchange, of course, is there. So you still have all the partner apps and then the integration platform, which is, extremely important the platform, MuleSoft so that you can integrate to other applications, integrate to other data sources, things that aren't natively supported integration to the platform Einstein AI, of course, does have integrations to all of the large language models.

Now, one important piece, and I discussed this with one of the guests later, so you'll get a little bit more from Salesforce themselves, is this trust layer. Let me switch to that.

There we go. The Einstein trust layer. And, at first I admit, I  didn't really understand this when I saw it in the keynote, but after my conversations I can say that I realized how important this actually is. And it's a, very interesting move by Salesforce to put this in and basically, and this is definitely an oversimplification, but it's extremely important layer because it actually sits between the applications and the large language models.

So it lets you automatically, it protects you really automatically filters things like personally identifiable information, PII, and it's active so that it helps keep data safe in both directions, right? So it keeps data that you don't want for your customer data going into the large language model and in the opposite direction any data that needs to be filtered out that way. Important and like I said we'll talk about that a bit more in the interviews themselves. Now, the next slide, and this is something that Marc Benioff put up pretty early in the keynote, which is probably not a surprise but it looks at some data that they've collected.

And, some from analysts firms, some from some surveys that they've done in Salesforce research. And you can see they're pretty large numbers, certainly gets people's attention. For example, 30 percent productivity employee time freed up by 2030 by the use of generative AI, $4.4 trillion expected annual GDP impact from AI customer success, 84% of leaders agree that gen AI better serves customers or provides a better experience. Jobs, 64 percent of execs will hire more skilled workers due to generative AI transformation, three of four companies are likely to adopt AI by 2027 and strategy, 92 percent of businesses are seeing returns from AI investments already.

You can't talk about AI without talking about data. So let me go to the next side.

There we go. So this is the Einstein Data cloud and the Data cloud is, essentially the customer data platform (CDP). It stretches across the entire application portfolio. And it's integrated into the metadata framework as well, so it connects to any data source in your enterprise, can bring all of that data together. Now, there's also something they call zero retention, which means that certain types of information aren't retained. It's set up, actually, to not be retained. It stays in whatever native source it came from. And then, of course, I mentioned openness before but that's really important that Salesforce is supporting all the large language models, the major ones, I should say. And then of course, with MuleSoft, you can integrate across that to anything you want.

Now, lastly, and then we'll get into the actual interviews. Lastly, though they announced Einstein 1 Copilots. Let me bring that slide up.

And the idea here it's not that different than what Microsoft is using or announced earlier this year using the branding copilot and I admit I think this might be a little bit confusing for customers because certainly a lot of companies are using the term copilot, not just Microsoft. Although Microsoft’s probably been the loudest no matter what you think of the name, it's really important because it's basically the assistant in each of the applications that sits next to the user and helps them navigate, do their normal activities. In the case of sales, for example, it can help the salesperson get data in and data out, set up appointments, those kinds of things, email content, stuff you would expect on a daily basis, daily activities. From a customer service standpoint, it can help the agent find information more quickly, find the right information, ensure that the data in any handoffs is smooth. So it's very important and like I said, pretty easy to think of it as an assistant.

I think, let me switch back to the view there. I think that's the major announcements. Now there are others and I will look through that and summarize some more of that on the blog later. But I wanted to get this episode out. That's it for the overview and stand by, of course, for the interviews that are coming up next.

Good afternoon. I'm here from Dreamforce and I'm joined today by Rob Katz, VP of Products, and we're going to have a really good discussion about AI ethics this afternoon.

Rob: Thank you so much for having me.

Michael: Yeah, I really appreciate you doing this. So just to get the audience up to speed, could you do a little introduction, tell us a little bit about what you do here and a little history on how you got here.

Rob: Sure happy to. So Salesforce, as is the company that helps other companies and organizations connect with their customers. And we've been doing this for 24, almost 25 years, started with sales, customer relationship management, and has evolved to be a suite of tools that help organizations connect with their customers through sales, customer service, ecommerce, marketing, data visualization real time communication with slack and so much more. And I joined Salesforce four years ago and I joined to help build out a new function within our office of ethical and humane use of technology. So Salesforce started an office of ethical and humane use of technology because after being a trusted partner to so many organizations for so long, there came a point where there were as many ambiguous questions about the nature of technology, and it was incumbent on us to help our customers navigate these. Ambiguous areas, and to avoid unintended consequences. Because technology is an incredible force for good and for transformation, and I'm very, privileged to have lived through a huge watershed of technological change that continues, now even more.

And it's not the technology that's, good or bad. It's how it is used. And so my background is in product management as you said, and I came to Salesforce to build a new function that we call responsible AI and tech and Responsible AI means how do we design, develop and ultimately deliver our software with our ethical use principles baked in from the ground up.

Michael: Interesting in that there's certainly been a lot of discussion around ethics and privacy, security, all of those things around AI, but particularly ethics, I think. And so it's interesting to see this Dreamforce has been all in around AI for the product. But at the same time, this function and it's been here for a while now, that's right, actually focused on that. So that's really interesting. And I think from a customer perspective, I would be very happy to understand that. So as we jump into this, so what do you see as the most pressing ethical issues when it comes to the current AI and applications, particularly in the context of Salesforce AI cloud?

Rob: Sure. AI is a blanket term for lots of really complex and technologically advanced systems that are using information, using data to try and improve and augment human work and behavior. And in Salesforce, it's all about helping the people doing those jobs in sales, service, marketing, and more do their jobs better. And so for us, it's ensuring first and foremost that we have the right data that are being used to ground the AI system. When I say that, it's simple enough to say, hey, let's go ask an AI tool for a suggestion about something. And anybody, which is a real technological marvel by the way, can go to ChatGPT or BARD or any of the commercially available consumer facing large language models and ask it a question. Hey, tell me a little bit about the Savannah College of Art and Design and boom, or tell me how to get there or, whatever. What's interesting is that as a Salesforce enterprise customer relationship management customer you have your own specific information about, let's say, the Savannah College of Art and Design. And it would tell you, not only are you going there, but here's our history in our CRM. And that's all about grounding the data. And within the risk areas, it's how might that information be processed, and then sent back to the user and include something potentially biased or toxic or unsafe. Or, how might that information in your CRM that is highly secure and private be it needs to be, that private information needs to be masked before it is sent to any external service like a large language model.

Michael: So how do, how should companies think about this? How, what can Salesforce, for example, what can you do with this? And then maybe we should, maybe we could talk a little bit about a kind of a government perspective on that too, because there's been a lot of movement lately around regulation. Slow movement, I should say probably, but what does Salesforce do to help customers navigate these problems?

Rob: It's been top of mind for our customers and it's been top of mind for us because any organization that wanted to give its sellers or its service agents or its marketing team access to generative AI could already have done so and many organizations and customers that we've spoken to have chosen not to because they're concerned about those issues that I just raised, masking private information, potentially toxic responses, harmful responses, unsafe responses. Things that could be not only reputationally damaging, but to your other point, from a regulatory perspective or from a compliance perspective outside of the boundaries. What we've done is we've architected our large language model gateway to allow a customer to use, let's say you're using it for customer service. Customer service agent gets an inbound request for information or for resolution of a particular issue. Let's say you're traveling, and you need to change something about your flight and you're using an autonomous or quasi autonomous AI to help change your flight information and it's complex because you have multi legs or something like that. That request comes in. The agent is then using grounding to ensure that it is grounded not only in that airline's knowledge article database, which is hosted on Salesforce or otherwise. And that it is grounded in the specific traveler's personal information. And they might have a lot of sensitive information, like your payment instrument. Or your birthday. That information is then collated and sent through a set of instructions called a system prompt. And that information is masked, all the PII is masked before the request goes to the large language model. Like GPT 3.5, or Anthropic model, or Cohere's model, or any of the other third party models and Salesforce, of course, has our own first party models. And that's what we call the trust layer. Is the masking of the PII and the dynamic grounding. The large language model gets the instruction. Help recommend the best alternative flight for this traveler. And it comes back to an agent.

At that point, it's screened to ensure that toxic content doesn't accidentally slip through. Because a human being who's very frustrated at being at the airport for the 19th hour might insert an expletive in their request that goes into the chat. And you don't want to come back and have that, I'm so sorry for your expletive experience coming back out. Because that would go viral pretty quickly, and that's what you want to avoid. And then the agent reviews it. And this is where it's the…industry term is human in the loop, but the agent has to actually read what has come back from the large language model and review it and make sure that it's accurate and that it will solve the problem or get a step closer to solving the problem and then it can be sent back to the Flight and be on their way and all of this happens in the matter of seconds, right?

Michael: That's really interesting and I've seen the diagram and it I didn't realize how active it was. That's really different than I think what my perception originally was that it really is filtering in a way that protects that PII and make sure that it doesn't cross that bridge into the language model.

Rob: And that is because, organizations have trusted Salesforce to steward their data. Trust is our number one value. And one of the big objections to cloud software 24, 25 years ago was, I won't trust if it's not in a box in the corner and I can go, plug into it and figure out what's there and what's not. And Salesforce has a lot of experience helping our customers transform digitally into the next sort of vanguard of what trust means. And we do that by proving it.

Michael: Yeah, that's very interesting though. We talk a little bit about the tech company and Salesforce particularly, but what about governments? What do you think the role should be? And, obviously, this is very new ground. And I think, we know that from an expertise perspective that they would need some assistance there because certainly it's not something that Congress would have, be grounded in from a knowledge perspective. But what do you think we need? Is it regulation? Is that really where we need to go? Does it tie into privacy regulations? Like, how should they think about this?

Rob: It's a great question, and that was a funny pun that you just made about grounded in in the regulatory landscape. The regulatory imperative is there at the federal level and also at the state level. And finally, it's not only in the U. S. So the EU AI Act is in a lot of ways laying the groundwork for how The US federal government as well as many state governments are thinking about AI regulation and it's following a similar pattern, which is that the general data protection regulation, our friend GDPR, led the way for states and not yet the federal government because as there's no federal privacy law in the United States where we come out on it is that there are really three pillars to our approach to how we're hoping to see regulation create a vibrant, but safe and trusted environment for generative AI to flourish. And that is transparency, explainability, and a risk-based approach. Transparency is ensuring that it's clear to all of the various stakeholders in the process that you're interacting with a generative AI or that you're, that you have, that this content was partially authored by a generative AI which is straightforward. Yeah, but it's important to me. No, it is important. And it's really about earning trust of end users because there's a lot of fear, uncertainty, and doubt. Our old... FUD out there about this, and it's really not that scary, but when people start to see it regularly, it will demystify. So that's number one.

Number two is explainability and we're working really hard inside of our AI research team And we're partnering very closely with them to develop things like confidence scores and citations So a confidence score is just what it sounds like this response to the traveler's need is to an 89 degree of accuracy we believe the best You know, route for that traveler to take and a citation is here are the data points from your CRM from your knowledge articles and from the language model that we used in order to generate this travel plan alternative travel plan for our hypothetical traveler.

Michael: Yeah, I think the citations are really important. One of the tools that I've been experimenting with adds in live internet connection to the large language model, ChatGPT 4, and it provides the citations every time when you get the response, which does, so that makes a lot of sense to me. And it is interesting to think about this in the context of it's very similar to how privacy has evolved from the GDPR standpoint and then CCPA and all the other sort of regulations.

Rob: But what I think we've learned from it is also the last part of the last leg of the three legged stool, which is the high risk or the risk based regulation. And it's really about what's a consequential decision and what's not. Because if someone receives a marketing email that they don't like because they were included in a segment by a large language model, that is too bad, but people have a right to get out, and that's not something that they should have necessarily a private right of action to sue about. On the other hand, if they were excluded from access to a loan, or from access to some sort of service, critical service, or if they got medical advice that was not accurate. Then those are more consequential decisions and as a result, those should be more carefully regulated and handled when it comes to generative AI.

Michael: Yeah, that makes sense. So it's almost a sliding scale of kind of balancing risk to that regulation and the output and what level of risk is there in that output.

Rob: That's right. Yeah. And, for example, one thing that I think the GDPR tried to do was data portability. And data portability is great when you're thinking about the large platforms, but for the, as a matter of course, very few people need to be able to port their data from all of their providers.

And as a result, a lot of investment has gone in from a technological perspective to comply with data portability regulations that has been not very helpful to the end consumer, generally speaking. And so it's one of those overly broad provisions that's well intended, but it wasn't necessarily a risk based approach, and that's I think where we can learn from GDPR, so that we can involve ourselves and help to create the regulations in a way that are appropriate and allow for the kind of innovation that we want to see as well, right?

Michael: Because I mean you bring up innovation and that's an interesting point because certainly we've been moving very rapidly over the last few years and particularly maybe in the last year. Yeah, because of all the publicity around generative AI so from a tech company perspective, and maybe Salesforce is a model around this because you have approached this in a very in a very aware way versus a reactive way.

Rob: We are always working hard to innovate on behalf of and for our customers. And trust is our number one value. And when push comes to shove, knowing that trust comes first is a really grounding values-based approach. And I'm proud of the fact that we can do both. But we are well past the era of move fast and break things. That is not the way that we run this thing.

Michael: Yeah, that's definitely the, it's been the mantra in the Silicon Valley for a long time. But this is one of those things that breaking has really dire consequences. Potential, consequential actions, yeah. Yeah, so how the other thing to talk about ethics, but the other thing that comes up a lot is bias, algorithmic bias. Bias in the models and the response. How is Salesforce thinking about that and how have you approached dealing with bias in the models?

Rob: In the models it's very difficult to understand a large language model, especially when it's a third party's large language model. But what we're doing is that, every time we're working to build a new feature that might have a higher risk of biased outputs, we are putting it through its paces through adversarial testing. Adversarial testing is a technique that was pioneered by the security industry. It also is sometimes referred to as red teaming. And that was actually pioneered by the army, the U.S. Army. And What we're doing is we are putting in intentionally boundary testing inputs and tuning the settings that are available on our side to understand where the parts of the system are weakest potentially, and by identifying where bias might crop up in unexpected ways, we're able to anticipate those and effectively pretest the model.

And we're doing that at scale and allowing ourselves the ability to adversarially test before we release things to production, and we're going to be building out forward looking statement, we're going to be building out the capability for our customers and for our ecosystem that is configuring these generative AI tools to run their own adversarial tests once they've customized the technology for themselves.

Michael: So once they add in their data sources and that makes a lot of sense. So I know we're running short on time. I really appreciate you joining me today. I just, one sort of quick hit at the end. What would you recommend to companies that are thinking about these technologies and getting into these technologies? Is now the time to get in? Is it something they should wait? What would you recommend to them?

Rob: I think that there, like I said, there's a lot of fear, uncertainty, and doubt around generative AI. And at the end of the day, it is a revolutionary opportunity to help organizations run better and help people do work better. And it is a great Augmenter. It is easy to understand, it's easy to learn and we are proud to be able to help our customers through that digital transformation as a trusted partner. So the answer to your question is I think people should be exploring it. Now they should be trying it. Now they should be getting in the sandbox.

They should be asking questions of their sales and account teams. And we're here to answer those questions. There's not one right way to do it. There's also not one wrong way to do it. And it will all be very context specific. And, that's what I'm going to spend my Dreamforce doing. I have customer meeting after customer meeting, where we're going to be talking about the specifics of those customers instances and use cases.

And I'm confident that we're going to be able to work with those customers. But not just those customers, but any customers. That is interested in applying these new technologies. And I'm confident that, for all of the fact that we worry, and I worry a lot about the risks. I'm also really excited about the potential of generative AI in particular to help our world be the world we want to be in and to make it a slightly more equitable and fair place as well. And maybe that'll be a topic for another podcast conversation.

Michael: It definitely would and I would love to get a little more time later on too because this is a very interesting topic and I know my audience is going to be very into that. Rob, really thank you for coming and joining me today. Have a good rest of Dreamforce.

Rob: Same to you. Thanks again pleasure

Michael: Welcome. I'm joined this afternoon at Dreamforce still by Avantika Ramesh, who is the director of product in the AI cloud, a world. And so we're going to have a nice conversation around some of the things that Salesforce is doing around AI. And certainly, if you saw the keynote or if you read my live blog of the keynote you would see that it is everywhere here, and certainly a lot of excitement from customers around that. So let's just to get it started. Can you give me a brief introduction of what you do, where you come from? Your background and how you got into to Salesforce and what's going on around here?

Avanthika: Sure. Thank you so much.

First of all, pleasure to be here. Yes, as you said, AI is everywhere, right? We're in an AI revolution. A little bit about my background. I joined Salesforce three years ago as a product manager. And my journey actually, I started off working on our Einstein bots product. Started off working on conversational AI before generative AI took off. I think, we were already looking ahead, right? A lot of that conversational AI work that we were doing in predictive AI really prepped us well for this whole new wave. And then after that, I also worked on our emerging technologies division. I was working on Web3, blockchain, non-fungible tokens. And then soon that led us to Salesforce AI. So essentially I've been working on technologies that keep Salesforce relevant. So it's been an exciting journey.

Michael: That's exciting. And there certainly been a lot of advancement. The lot of technologies before the November, December launch ChatGPT, that got everybody's attention and changed the world in a lot of ways.

Yeah. I want to understand, and I saw a lot in the keynote, and there are a lot of products out there and a lot of things that have been done, but can you help us understand what's available today and then what's happening over the next year or so that will, enhance those products?

Avanthika: Sure. Under the whole Einstein 1 umbrella it's, which is our new, Term for the new platform, right powered by AI, predictive AI and generative AI. Today, we have a lot of solutions out there for predictive AI, right? Things that we've been building since 2016 onwards. We have bots and solutions for service, marketing, commerce, everything, right?

So this is not our first step. Now with generative AI, that's where we've started extending every product and capability with new gen AI capabilities. So what's out today? We have sales, commerce. Service all went generally available with their generative AI products, right? So for sales, email generation, service, reply recommendations with the knowledge, data, and then case wrap up. And then commerce allows you to actually generate product descriptions automatically based on the product information. So we have some turnkey solutions out there. I think we just wanna. I think the value prop here is, hey, as a customer, you can easily implement AI in your organization and get value and realize value very quickly.

The time to value is quick. Now, soon enough, customers are going to start realizing that they will want to customize these applications, right? These one turnkey solutions may not be a fit and they may want to do more. So soon in the next few months, we're going to come out with some platform level technologies like the Prompt Builder, which is actually a really popular one here at Dreamforce.

Michael: Yeah, that was interesting. I admit it, I did a little bit of research around prompt building and prompt engineering. And actually a couple of guests a couple episodes ago talked a lot about that because they're doing a no code platform. And so that's a really interesting and I know a lot of people are trying to understand prompt building. So maybe talk a little bit about that because I saw the demo and it was really exciting.

Avanthika: Yeah, so yeah, I'm working on, the Prompt Builder from the product side, and I think the mission here was to make sure that prompt engineering and access to AI is democratized right across your organization. It shouldn't be something that only your data scientists know how to do, but you should be able to supercharge every employee's workflow with AI and allow them to have control around how AI impacts them. So the prompt builder actually allows anyone, an admin, a business user, to come into that interface, no code, just clicks, and be able to create prompts effectively that are grounded on their data.

So, the whole concept of building prompt templates that reference different data, ground those. And be able to embed them into the flow of work, whether you're in your sales, lead objects, trying to generate an email, or let's say you're in service on your case object, trying to generate a summary, right? So, it's allowing you that power of the platform and wrap all the Salesforce processes into one intuitive interface. And then you can connect this to any model, right? Connect the prompt to any model that you bring in through our model builder and deploy them in any workflow.

Michael: Yeah, and I, we talked to my previous guest, Rob, that's talked about that trust layer. Yes. And the way that it sort of filters between what goes in and what comes out. And so that's interesting. So that all ties into, to the prompt builder too.

Avanthika: Exactly. Any call to a large language model through Salesforce all go through the trust layer that's just baked in, whether it's the customer's model, whether it's one of our salesforce models or even one of the vendors that we support.

Michael: Okay, and that's one of the things that salesforce has done. It's a little different than some of the other companies. Rather than going to one partner or are your own just your own model, you've taken a much more open approach.

Avanthika: Yes, it's the open approach, right? We want to give flexibility to our customers around the models they use because you may have different models for different use cases. So now, some customers may use our internal Salesforce models. So our research team at Salesforce has actually built over 10 large language models already. There's one that's optimized for code generation that's already out there that our ApexGPT / developer GPT product is enabled by...So they can use Salesforce models. They can bring their own models, whether it's hosted on Azure or SageMaker, or they can use one of the vendors we support, like Anthropic, OpenAI, and Cohere. Now, when it's one of the vendors that we support, we have this strict zero data retention policy with any vendor, meaning none of the prompts will be seen. Stored, processed, or used for training. So that's a really a critical element.

Michael: Yeah, that's really interesting. That, I haven't heard that, so that, that's very interesting. Yeah, that is unique.

Avanthika: Usually, OpenAI is not willing to do that for anyone, right? Because they want to audit it on their end. Relationship and trust with Salesforce has gone a long way and they trust sales force to do the auditing.

Michael: Yeah, that's amazing. So one of the areas that I'm really interested in around when I think of a I always had this idea of two areas where you really see big advantage. One is this automation level. There are certain things that you don't want people to do, you can automate away and that sort of thing.

But the other side I think is maybe more interesting and that's the idea that it can act as an assistant to help you do better. And I think that ties into the Copilot launch for some of the tools that I've seen, right? So talk a little bit about that. How does that help businesses across the customer journey not just in sales or support or whatever but how can Salesforce Copilot improve the customer experience?

Avanthika: Great question. First of all, you talked about that assistive capability, right? Our goal is to help augment users, not replace users with AI So now users can do more than the silo tasks that they were originally doing, because now the AI is able to augment them and the results that they're producing, right? Helping them be more productive. If you think about the impact this has on customer experiences, it really blurs that line between all the different divisions at your company. Today, you'll have to reach out to a specific person for a service inquiry, another person for a sales inquiry, another one for a commerce inquiry, right? And with a lot of our customers, the results we're seeing is that a single agent is now able to answer questions across different lines of business so that you, as a company, come off as one entity rather than these siloed divisions where you're sending your end user everywhere scrambling to find the answers.

So that's where I think AI is super powerful and really helps deliver like a better experience and your internal employees are also learning and augmenting their own knowledge.

Michael: So essentially it's one of the ways that you can help get across data silos or siloed information across the business. Interesting.

Avanthika: And for that you need to have a good data strategy though, right? There's no way I without data, but you also need good data to produce good results. So as long as you have a great data strategy where you know the information is coming from and where to find that information, then it really helps.

Michael: Interesting. So do you have any examples of what some of your customers are doing with the products today?

Avanthika: Sure. Yeah. So, if we think about generative AI, we have customers who've already deployed our service products. Those are the most popular, right? Service cloud is the largest cloud for Salesforce and, you have the most simple use cases coming out of service that are like low hanging fruit. And if we look at the impact, I think it's important to look at some of the KPIs. So if we look in the service domain, it's almost like average handle time goes down. The number of turns in a conversation decreases. So as we measure these KPIs, we've been seeing a lot of impact. One famous customer that we always talk about is Gucci. That's the one where we talk about, hey, they were one of our first customers that actually deployed this and saw a real difference in how their most experienced agents were able to now understand different lines of business in the business and be able to communicate that out to the customers. And then if we look at the impact.

We had another company that does like job brokering and they implemented it and within two weeks, honestly, it was very quick and they were able to see results that, within their average handle time going down 200 percent increase and like the number of cases they were able to resolve. So you see those KPIs over time and that, you see that change and really moves the needle for them.

Michael: Interesting. I know they used in the keynote, you used William Sonoma as an example of another customer that's, and I would think in retail there's a lot of applications to, to.

Avanthika: Yeah, especially when you think about commerce, right? Even our current commerce product, it's around generating descriptions, which, that takes a long time to generate a description that's branded to your company's voice and tone for every single product. And now that lets you just publish new products immediately. Time to value is quicker. Soon we're going to start seeing smart promotions and even like concierge bots. Lots of opportunities in retail.

Michael: Yeah, that's interesting. The whole idea of a bot assistant I think goes yeah. Yeah. Businesses are, some are jumping in and In some ways, maybe that's the sort of leading edge bleeding edge, whatever you want to call. But, what should businesses be thinking about today?

What, is it time to get in? What kinds of things do you think they should go through to get involved in this today?

Avanthika: Great question. I get this question from customers every day, right? Especially the ones that have not been able to make that leap into predictive AI. I will say it's never been a perfect time to adopt AI, especially now. Because if you're still waiting AI is moving way faster than you think, right? We feel like a generative AI. It's here and this is like general AI, but autonomous AI is already taking over and right and then you know, you're gonna hit AGI soon and by then businesses will be super behind, lagging behind and not know how to catch up and customers will already be 10 steps ahead. So I would say it's the perfect time to start thinking about it and get your hands on it. I think one is education. Start educating yourself about what these different types of AI are, how they're different, and also think about the enterprise perspective, what's happening there, data, it's all starts with data.

So start with a good data strategy, figure out how you're going to feed data into the AI, think about how you're going to also. Think about use cases for your business. What are the unique use cases that you want to solve and start framing those use cases. Think about, talk to your people who are actually on the front lines doing this day-to-day work, figure out what their pain points are and how I might be able to solve them.

Then I would say, learn and iterate. Get your hands on it. You're gonna have to start somewhere. Start with a small pilot, maybe with your most experienced employees. See how it works, validate it and then deployed across your organization. And in that process, education is super important, so make sure people are getting onboarded, educated about this technology.

And then over time, you're still going to have to learn and iterate. It's not like you deploy it and you're done. New technologies are going to come, got to stay on top of it, and it's a process. So I think, it's a great time to get started and businesses will soon be able to catch up with the crowd.

Michael: I mean that, that makes sense. I know, I did an AI adoption survey a month or so ago. And the biggest concern that came up and these, all the respondents were people involved in AI in some way. So either they used it already in their business or they were a decision maker, an influencer, buyer, project manager, that kind of thing. So that so they were tied into AI and their top two concerns are related one of them. And the second was they were worried about finding skilled partners, which to me is the same problem, right? Have you heard that and do you think that's a part of this, why you need to get in now so that you could have for sure.

Avanthika: Yes. So if you think about AI, there is a skill gap, right? People who know how to work with AI models and know how to use it efficiently. We're trying to close that gap as Salesforce That's why we're building a suite of tools that even admins and people who are not experts at AI can actually use it the whole value prop here is democratization of AI and we're building into the platform so people like admins who know how to use Salesforce can easily embrace AI without being worried about data security. We take care of that for you. So literally, when we think about the prompt builder and the prompt engineering, we abstract out that complexity so that people can focus on the job to be done. And then we take care of the rest. The complexities around AI. Same thing with connecting a model and using it would provide simple interfaces to just connect a new model, test it, validate it works, and then focus on what the results are, the business outcomes, rather than how to get the AI working. Because as you said, it is intimidating, and we want to, make sure we reduce that barrier to entry.

Michael: Yeah, it seems that if you don't do something now, you really do take the risk of falling so far behind because it is going so quickly. yeah. And I know the other thing that Salesforce has always been always has for many years been good at is the idea of providing learning paths for people to get those skills.

Is there a Trailblazer?

Avanthika: Yeah, definitely. So you may have heard of a trailhead. That is our learning platform and we have a lot of trails already on predictive AI, which we've been doing for a while now. And we've also released trails on generative AI. We're talking about prompts and how to create good prompts. And then now we're starting as we release these products, we're soon going to have enablement material so customers can feel comfortable about getting started.

So that's really, education and it. Educating our stakeholders as we move along and improve our technologies is like a core value for us, and I would say on top of that, here we are at Dreamforce, Salesforce Plus. All the sessions we have over the next three days are all about educating and getting people to embrace AI. So I would say like all our Salesforce Plus sessions are so super duper relevant.

Michael: Yeah, that's great. Thank you very much. I really appreciate it. Great information. And I'm excited to see what else comes up in the show. So have a very good Dreamforce and thank you for joining us.

Avanthika: Thank you so much. This is amazing. I always love speaking AI and we're gonna, I'm excited to see where it takes us next. Great. All right.

Michael: Thank you. Thank you.

Welcome at Dreamforce still, having a conversation with Bobby Brill, Senior Director of Einstein Discovery. So we're going to try to dig into that, discover a little bit about discovery today and just learn a little bit more about what Salesforce is doing. So to start it though, can you just do a, give us a brief introduction? How'd you get here? What do you do? What do you love? Those kinds of things.

Bobby: Yeah, absolutely. So this will be my 13th Dreamforce. 11th in person. As we had two virtual ones. And I was actually a customer of Salesforce before joining here and I saw the power of the platform and how easy it was and how easy it was to transform our little startup and it was just so easy to collect data and I've always been like excited about analytics. And so the last seven years, I've been working with analytics a little product called wave analytics, and that grew into what's called CRM analytics today. I got to work with Tableau when that acquisition happened. So bring in analytics together and really super powering that. And then with the AI sort of revolution that started a few years ago, 2016 Salesforce launched Einstein.

Yep. And we wanted Einstein across all the clouds. So with analytics cloud, that's where we wanted to invest in a product called Einstein discovery. It was an acquisition of beyond core, and we took it from, it's just data discovery feature to a full blown model building feature. And we were really focused on model building clicks, not codes. We wanted to democratize the model building process because what we saw was bottlenecks within organizations when there's a data science team and, data science, they don't want to do yet another opportunity likelihood to win model. But in Salesforce, when you're running sales, that's all you want. You don't need anything crazy. So we figured out how to let customers sort of the business teams own their own models to build them on top of Salesforce.

Michael: Great. It's been pretty exciting. I think I beat you, by the way, too. I said 15, but I didn't know I could count the virtual, so 17. Nice. Yeah. Yeah.

That's a long time. That is. So let's just jump into this. Tell me a little bit about Einstein Discovery. What is it? What does it do? Why should people be excited?

Bobby: When you're looking at data. And you want that data to do something for you. That's where machine learning can come in.When you understand the patterns of your business data, usually when you're collecting business data, you're collecting it to a point and then it reaches an outcome. So what machine learning does is it helps you predict outcomes, whether they're successful outcome, like winning a deal. Or a bad outcome such as account churns so you can predict these sort of binary outcome, success failure, and that a lot in business data So what Einstein Discovery when it started to do was let's analyze your historical data and we'll tell you what's driving that outcome and that was a great tool for analysts, but the value of it was the model behind the scenes that was used to do that explain ability. And we said, if we can operationalize this and we can let you take these models and put it across your entire Salesforce Platform. Now we have a tool that you can get a I were, machine, traditional machine learning throughout your entire platform, right?

Michael: So you can then apply that in all the different areas that would be interesting. Like sales, for example, we mentioned doing predictive analysis on prospects to see where you should invest your time, for example, right?

Bobby: If you have the data to say, here's the time investment you've done, and here's, did they? Was it a lead? Did they convert? You have all that data, and if you and the other thing you need is a data layer to transform and maybe you're gonna bring in some outside data. And that's what back in the day wave brought to Salesforce now CRM Analytics and with data cloud that is now like exploding. Now we can get even more data. You can get it in faster. So now imagine what that data can do for your machine learning.

Michael: Yeah I think the data cloud focus is interesting and I'm watching the keynote today and really thinking about that. One of the problems that that I've seen and has come up a lot with clients that I've worked with is the problem of data silos. Yeah. And I, and that must have a direct impact on your ability to do predictive analytics as well. So the data cloud is supposed to help deal with that.

Bobby: Oh, absolutely. If you've got data, you've got your data in Snowflake, that's your data lake. We wanna make it easy to get that data or to get Data Cloud to talk to that data. We don't, we're moving past the, you gotta get all your data into Salesforce. Yeah. While we like it. And you can do a lot with that our biggest customers are saying, no, we're, we want you to work with our data strategies and we're trying to move to that and there's a lot of stuff coming out where we're going to be able to make that a little bit more seamless.

Michael: That's interesting because that is, I've heard that from, in a lot of places running into the whole silo data but then we have our strategy, we're not investing and moving it into another, yet another. Yeah. Okay. That makes a lot of sense. So what are the requirements then, data requirements for a business to do, to use Einstein Discovery and start to build that into their operations?

Bobby: So customers for years have been building reports on Salesforce data, right? The report feature I'll be honest, reports on Salesforce is one of the key reasons I wanted to join Salesforce. I loved how easy that was, and you're collecting a lot of data in Salesforce. So when we're talking about data silos, that's CRM data. You know it's there especially service cloud data. You know those cases are being worked on and they have to put in their sales a little bit more challenging because the sales reps, maybe they put it in after the fact.

So there's a little bit of data cleanup, but there is a lot of data that is already in CRM. Yep. So usually what I tell customers is. Start with the data that you have. Everyone thinks that they have a data problem, their data is bad, and they've got to solve that first. Now, that's not necessarily untrue, but they may be surprised on what data they might have. If we're talking about opportunities. The out of the box opportunity object in Salesforce, we automatically track history. You don't have to turn it on. From day one, we are tracking five very important fields, the amount, the stage, the the forecast I think if the close date was pushed, there's a lot of signal in that.

So you don't even have to, you don't, you might not even know that you're collecting this. And with CRM analytics, we make it really easy to pull in that opportunity history. And these are signals that you can actually put into your data that you're training for a predictive model, like how many times was a deal pushed and then how does that correlate to whether the deal won or not, or how does that, correlate to whether you hit your forecast?

So these are the things that you can start to look at without even worrying about do I have the right data? So usually start with the data that you have and then you're always gonna start with a what is your priority? What do you need to do, right? You don't want to just build a predictive model for the sake of building a predictive model You want to look at what are your business strategies?

So if it if you know selling faster is important to you? You should be looking at the sales stage. How long is it taking to, to close deals? And then again, if you look at that history, you can look at deltas in sales stage. How long was I spending in a stage? You can actually drill in and see what's driving time in each stage. And then you can see some of those kind of, you'll see some of those insights. And then that's gonna, especially if you have a businessperson look at this data, they can start thinking, Oh. I know why this is happening, so some of the insights are going to be super obvious. And then we're going to surface something that maybe wasn't so obvious.

And so you form these hypotheses, and then you'll iterate on the process. And then you'll maybe bring in something else. Maybe you'll filter down, you'll say, I, I don't want to look across all my products. I want to look across this single product. And then as you, you drill in, you're going to see some additional kind of things that you didn't know, and then once you have the, once we're able to explain what happened in the past, now you can operationalize that and start understanding what could happen in the future.

Michael: Okay that, that makes a lot of sense, so it's, and analytics always thought this way, like you're slicing and dicing and digging in, and so you're, you constantly dig deeper to find the insights, and it's the same sort of process here.

Bobby: And this is exactly why we built this on top of analytics.It was a very like analytical type tool and we thought hey, this is something that a data analyst could actually do we don't need data scientists are awesome. But in order to make this work for our large enterprises, we needed to make sure that our models were fully explainable. So that a data scientist could come in they could look at some of the metrics in the tool They could play around they can see what are the key drivers? What are the if they wanted to get down to the coefficients? We exposed all that. While it's a data analyst building it, we needed to make sure that everything was fully transparent. So that was one of our biggest strategies, was with this modeling capability, everything's got to be transparent. A data scientist has to be able to look at this. and give their okay, they just don't have to build it and no one has to write any code.

Michael: So one of the big issues that I've heard over and over again, across AI, is data quality. Absolutely. So how does Salesforce help your customers with the data quality problem?

Bobby: This has been a problem for, but when I was a customer of Salesforce back in 2008, it's a problem. And I don't think anyone's really solved the problem. There's no easy button to this. It's all about best practices. In fact, coming to Dreamforce, you'll talk to other customers and both have data problems, but maybe they're different. And you can actually, learn something from someone else. They learn from you. So there's no magic bullet on data. But I'm seeing some possibilities with Gen AI where we can actually start solving some of those data issues like there's missing data. Some of these language models can figure out, okay what things should this be? And it's interesting. There's some cool possibilities in the future, filling in product details, or, description fields based on a summarization of some stuff. So there's that is what a lot of what it does.

Michael: Is that a whole idea of taking what I have and predicting what the next thing is right?

Bobby: Exactly.

Michael: Yeah, I didn't really think of it in that context, but that is a big opportunity then to work on that. So let's talk about use and how customers are using this today. What? Give me some good examples of what customers have done with Einstein Discovery.

Bobby: Yeah. So what's really funny is our two biggest use cases are opportunity, likelihood to win and lead scoring. And the, why is that funny? Because there is a feature with Sales Cloud I think Sales Cloud UE, you flip a switch, you get opportunity scoring. Now, why customers, why some customers aren't comfortable with that is because it's just a black box. So what we offer is that full explainability, and then also a platform for them to build in whatever customizations they want. And we over time, learn that customers want turnkey applications, but they also love Salesforce because there's a platform and you can customize. Yeah, that makes sense. So with the Einstein One platform, what I'm really excited about, what I'm building, helping build, is now we can take those turnkey applications, and give you a layer of full customization. And I think that's really going to be our biggest game changer in the next coming year.

Michael: That, that's that, that makes a lot of sense. And obviously, transparency is a buzzword lately because of other reasons. But also from a data science perspective, I'm sure that really has been an issue of okay, I see I can turn this on, but what's it actually doing? And how do I know to trust it?

Bobby: And we give you methods to simulate this. So if you wanna, if you have an old, a historical data set, and you wanna apply the predictions on there and then use analytics you can see how did this model... That wasn't trained on this data. How did it work on it? We give you all the tools to try this stuff out. Which I think is, which is great.

But also, we know that data scientists want to get their models into Salesforce easily. One of the features with Data Cloud that we just delivered was bring your own model. That's a cool feature. And if you put those two things together some people are going to want to build, some people are going to want to go with their Pro Code tools. We want to be as open as possible to our customers.

Michael: Yeah, I know that that came up in one of my other conversations about the language models themselves and which ones you connect to, the fact that you've made it open so they can use. And it sounds like that's pervasive across the whole...

Bobby: Yeah, we believe in that for predictive and language models.

Michael: So when I talk to customers, there's reluctance to get involved in anything that sort of looks like AI and some of that is, misinformation or misunderstanding, maybe. But what? What would you if you were talking to a customer in there and they show reluctance? What would you tell them? What's the advice you give to customers today around Einstein Discovery? And should you use it? How you should jump in, right?

Bobby: Usually the biggest customer concern is trust on their data. And, thankfully, we build on top of the salesforce platform that trust already exists. By us saying we're not, we don't want to look at your data. Yeah. There's some, some tools where maybe a data scientist wants to get access to tweak it and, oh, we, imagine we can get your data to make a model for everybody. That sounds really cool. Yeah, that's scary. I don't ever have that conversation.

I don't want to have that conversation. No, that's scary. I like to teach them how to use this tool, and then once it's on their data, they gotta go figure it out. It's, I don't want their data. I never want their data. Let's trust is always, really high on our always number one, right? And then it's not just trust to the data. We also believe, shouldn't build bias models. So we have mechanisms to figure out if you're pulling in a sensitive field like a, zip code or, Yeah. Or gender, or age you gotta figure out on the historical data, have you have you been biased against a certain group of, a certain group, a certain age group, certain gender? Our tools can tell you if historically you were, and then our tools can tell us if you operationalize that model on the live data. Are you... Are you starting to, bias? So we have all those tools built in to really get you to trust that model.

Michael: So almost built in risk assessments that you can look at and go, okay, this is I'm worried about this. I'm not worried about that sort of thing.

Bobby: And then on top of that, I think what our business teams are figuring out is don't wait to get that perfect model. A data scientist is going to spend a, a long time trying to make sure this has, pull out every little, parameter and make sure this is as accurate as possible. Now, accuracy is really great, but if you think of it in business terms, and you could start in, a month and get a 10 percent uplift, get that out there, and then it's an iterative process, then iterate. I think that's where our customers are like, oh, we don't need to wait six months. I can get something out there fast. And start understanding how that's working and then iterate and then it just gets better over time. So that's, when you're selling to the business, I think it's, they see the dollar and how that's, really helping or whatever monetary. You're talking about return.

Michael: So you're given a much faster time to value. If you say good enough I'm going to try this and then I know I can make improvements over time.

Bobby: And of course, it depends on the model, right? Because good enough might be we're really good at predicting one class, but not the other one. And not being able to predict that other one really well. You got to understand what is the trade off. And this is why, business teams, and it's not a single person. It's a team sport. You got to really understand what is the impact of this model. What does it mean if it predicts accurately? What does it mean if it predicts inaccurately? And then you gotta, it's up to the business to understand what that impact is.

Michael: That makes sense. And it's almost everything we've seen around these models and the way they grow the way they evolve. The more data you're feeding, the more opportunity they have to learn. The more you're using it, the more you have opportunity for improvement. And so if you jump in now, six months from now, you're going to be way ahead of where you would be if you had waited. Three months or six months or whatever to implement. That makes sense. Yep. Makes sense. So that's all the time we have. I really appreciate it.

Bobby: Absolutely. Thank you.

Michael: It's a really interesting subject. And I know, people are very interested in the idea of predictive and how I can improve my sales, improve customer service, all of those kinds of things. So that's great. I really appreciate you joining.

Bobby: Yeah, absolutely. Thank you for having me.

Michael: Thanks.