Disambiguation Podcast: AI Governance, Compliance and Regulation - Transcript

Michael Fauscette

Welcome to the Disambiguation Podcast, where each week we try to remove some of the confusion around AI and business automation by talking to experts across a broad spectrum of business use cases and the supporting technology. I'm your host, Michael Fauscette. If you're new to the show, we release a new episode every Friday, as a podcast on all the major podcast channels and on YouTube as a video. We also post a transcript on the Arion Research blog in case you want to stop by and read it. In our show today, we look at AI governance, compliance, and regulations. I'm joined by Jacob Beswick, Director of AI Governance Solutions at Dataiku. Jacob, welcome.

 

Jacob

Thank you very much.

 

Michael Fauscette

Could you give us just a brief intro, a little bit about yourself and your role at Dataiku?

 

Jacob

Sure. So, as you introduced me, I'm the Director for AI Governance Solutions. What that means is that I lead a team that works closely with our customers, helping them to articulate what good governance looks like for them, and then ultimately helping them to push that practice into the product, which I'm happy to talk a little bit more about in time. In terms of my background, why I'm in this space: I originally started in the UK Government, where I was in the Office for Artificial Intelligence. I was responsible for a few things there, including AI governance and regulation at the UK level. That exposed me to the EU and the developments there, which is the thing I think a lot about today, and it's kind of pushed me forward in this direction. And it's great.

 

Michael Fauscette

It is. It's a topic that comes up a lot in conversations, and certainly ethical use and responsible use. So beyond just the regulation: how do I ensure, internally, that I am compliant, that I do have governance in place? Why don't we start there? How does AI governance make it so that companies feel comfortable with ethical and responsible use of AI?

 

Jacob

It's a good question. I'm going to be sort of pseudo-academic initially, which is to say AI governance unto itself doesn't ensure those things, right? AI governance can be driven by organizational priorities that speak to things like driving value or operational efficiency, things that are completely detached from ethical or responsibility considerations. But when an organization makes a decision to focus on ethical AI, responsible AI at the development and use stages, or even including procurement, it's really about the organization articulating the values that are associated with ethicality or responsible AI. So things like reliability, robustness, transparency, fairness, etc., the things you see quite commonly in the very public spaces of the OECD and different kinds of governments, and then figuring out the sequence of actions that helps to prove, or supports proof of, those principles being met. So it requires commitment from leadership on the one hand, to say we're committing to this ethical orientation, and then, down the chain, figuring out what course of action we need to execute to ensure that we're systematically doing this. But importantly, when it comes to governance, documenting and providing auditable content that proves you're doing it.

 

Michael Fauscette

So, it seems like a big part of that, then, is that intentionality to actually approach it this way: I want to be ethical with the use, I want to deal with bias if I can, I want to be responsible. Okay, I mean, that makes sense. And I think some of that is kind of like how we used to talk about customer experience: it's not technology first, it's actually a strategy, you have to sit down and figure out a strategy. And it seems like that's what you're saying here, too.

 

Jacob

I would advocate that, because if you kind of go slapdash, there are ramifications down the chain, I would argue. So, if you're going slapdash, one team is doing it and another team is not, and somebody upwind catches on: oh, this is an important thing, and now they want to spread it out. Basically, if you don't have a consistent, coherent approach, you open the opportunity to creating kind of a patchwork, which could be fine in some instances, but might not be in others. So, it does require concerted decision making, concerted actions. And that's really important because there's a change management aspect to this. People need to know what they have to do differently, and that needs to be instilled at a management level, but from my perspective, also at a tooling level.

 

Michael Fauscette

That makes sense. So, you can approach it strategically, but you can also have tools that help enable that in the platform. That makes sense. So, I know regulations are all over the place, and it's a complicated topic to bring up, but I'm just curious, from a high level, what do you think the challenges are in regulating AI? I know regulating technology in general is hard, and I know certainly some approaches are different: the EU approaches it differently than the US does, maybe. It's complicated, I know. But I'm just curious what you think the challenges are.

 

Jacob

It's a good question, and I've thought about it for a long time. If you have anybody with my background sitting in front of you and you ask that question, the first thing that they're obliged to say is: tech moves faster than regulation. And then they walk away, and that's it. So, I kind of thought, is it worth exploring? Should we unpack that a little bit? On the one hand, we understand tech: there are business reasons, there are, for lack of a better term, ego reasons; there are all kinds of reasons motivating the development of technologies. And then on the regulation side, there are so many layers of things that have to happen, right? You have to have the political will to agree: oh, there's a problem here. Political will to continue at a policy level as well, to figure out the nature of that problem or that risk. Then you have to conduct analyses: are these risks and potential harms aligned to existing regulatory coverage? And we're seeing this play out differently in the EU and the UK. And then it goes all the way down to figuring out what course of action we have to take, and how we actually operationalize it in the market through regulators or other kinds of public entities. It's really a long, cumbersome process. So, this idea of tech outpacing regulation really is true. If you ever hear that, it's not just a glib response from a commentator; it's a material thing. So that's the obligatory response. The next one is really the one that I'm intrigued by, and that I'm seeing surface in particular in the latest draft of the EU AI Act: supply chains. So, this idea of: okay, if you're company A and you decide you want to start using AI, your approach might be, okay, I have data, I'll clean it, I'll use it to develop the models and use them in particular use cases. And it's all really internal, right? And so regulation, as you would read it, would apply to you and you alone. Now, imagine you're company B. And you think, well, I really want to use AI, it's part of my leadership strategy, but I need to access it outside of my company. And by that I mean go to a third-party provider. That third-party dependency extends from, say, dataset acquisition: maybe you want to acquire data from the market, and the qualities or qualifications of that data set are iffy at best. So, if you're bringing that in, are there potential regulatory hurdles? Are you responsible for the quality of that data and how you use it? Or is the provider responsible in some way? I'm not even gonna entertain that discussion. Beyond that, there's the model issue. So okay, you've got your data, you know you want to execute a use case, you've done your research, you've figured out some model providers, and you want to use something from a third party. So you go out to the market and find somebody who either has a model that's been developed and needs to be refined on site, or you go to the likes of OpenAI and you say, I want to use an API connection to a third-party LLM. In those scenarios, if you're exposed to risk, if you have some catastrophic outcome in using those third-party models, where does liability, where does the regulatory burden sit: with you, the user, or with the provider? These are things that I think are really challenging to deal with. And I think that policymakers right now are agreeing: really challenging to deal with. I have more, but I think we can stop here.

 

Michael Fauscette

No, I think it's really interesting. You know, one of the things you hit on is probably one of the biggest challenges for businesses in general, and that is, when we say this is moving quickly, that's a huge understatement. Tech has moved exponentially forward for years, and we know that. But think about the length of time from when we started talking about, well, we didn't call it cloud, we called it application service providers when it first started, right? That was what, the 1998-'99 timeframe, and when companies were really adopting it in a full way, that was the 2008-2009 timeframe. So, you're talking 10, 12 years or so to get through a lot of those objections: the security risks, the fear of letting your data outside your firewall, etc. And now we're talking about something that we only started talking about, in a broad sense, 9, 10 months ago. So I think, from an acceleration standpoint, that has to be an amazingly difficult thing, from a regulatory standpoint but also from a company standpoint. So I'm gonna turn this inside for a second, because I think maybe this is a good place for us to spend a good bit of time, and that is compliance itself. Compliance can mean regulation, but it could also mean hitting the strategic goals that you set for yourself for responsible and ethical use. So, I'm curious: how can frameworks be put in place to address those risks and concerns associated with AI, and also keep up with the changes that we know are happening on a weekly basis?

 

Jacob

Yep, tall order. I'll start with how we actually define it. When I go in front of a customer, or anybody from my team goes in front of a customer and organization and we start talking about governance, we make sure the first thing we talk about is: what exactly are we talking about? And I think that'll lend itself to answering your question, not in a roundabout way, but perhaps in a slightly extended way. When we say governance, we're referring to a framework that enforces organizational priorities through standardized rules, requirements and processes that affect or shape how AI is designed, developed and deployed. And that first step is organizational priorities, right? Those will shift organization by organization. When it comes to ethical priorities, these can be informed by things that already exist within a company: we want to support equity, we want to support fair outcomes, whatever; these can pre-exist. But then when you look to the wider market, you get the sense that there's actually a lot of commonality in terms of what the right approach is. There are common principles for what good, ethical, responsible, trustworthy AI should look like. And these principles have, I think, not to historicize, been developed against the backdrop of: what are the risks, right? So, in adhering to these principles, you're kind of indirectly orienting your AI governance framework in a way that speaks to the risks that have been perceived, articulated, etc. I think that partially answers your question. Did I miss anything?
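As a concrete illustration of that translation from principles into standardized rules and requirements, here is a minimal sketch in Python. Everything in it, the principle names, thresholds, and evidence fields, is hypothetical: one possible way an organization might encode its own priorities, not a standard and not a description of any particular product.

```python
# Hypothetical sketch: one way an organization might encode its
# principles as standardized, checkable requirements. The principle
# names, thresholds, and evidence fields are illustrative only.
governance_policy = {
    "fairness": {
        "requirement": "max outcome gap between demographic groups",
        "threshold": 0.05,          # numeric, testable per model
        "evidence": "bias report attached to each model version",
    },
    "transparency": {
        "requirement": "model card and intended-use statement",
        "threshold": None,          # documentary rather than numeric
        "evidence": "model card reviewed before deployment",
    },
    "robustness": {
        "requirement": "minimum accuracy on held-out test data",
        "threshold": 0.85,
        "evidence": "evaluation report per release",
    },
}

# Each principle now maps to an action and an auditable artifact.
for principle, rule in governance_policy.items():
    print(f"{principle}: {rule['requirement']} -> {rule['evidence']}")
```

The point is the shape, not the numbers: each high-level value becomes a requirement someone can execute and an artifact someone can audit.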

 

Michael Fauscette

Well, I mean, I think that does at least start to answer it, because one of the things that I hear in there is that you're really trying to align with how you've approached your business in general, right? This is not just a carve-out. If I have, say, diversity challenges and ethical challenges and I've built addressing them into the culture, then we want that to extend across the use of AI and the way we govern ourselves around AI. Does that make sense?

 

Jacob

I think I got caught up so much in the beginning of that journey, I completely forgot about the end of the journey. So yes. You know, business by business, they're not going to have the same sets of priorities, right? So, you can have a business that thinks, well, I'm using AI for pretty mundane back-office applications; they're not exposing anybody to any real risk or harm. Sure. And then, six months later (you referred to adapting), that same business thinks, you know what, I like how this is working, I want to see where else I can achieve savings, efficiencies, whatever. And let's assume that they then operationalize a use case in HR. We know that HR is pretty high risk, right? If we look at the EU AI Act, it's a high-risk tier, and it aligns to a pretty thorough set of new obligations that are oriented at that high-risk tier. And then we also look at New York, where they say, basically, in New York City you can't use AI applications in HR functions. So, from that mundane use case, where risks are kind of nil and you maybe thought about it and decided this was pretty old school, I don't need to really worry about it, to the HR use case: what are the risks associated there? What are the obligations you have to applicants, to existing employees? How do you ensure that those are accommodated or respected in the use of that application? Looking to the principles I mentioned earlier, you might find some alignment. You want to ensure that your usage of AI is fair, that it's not biased; you want to ensure that you can be transparent about your usage and explain outcomes. Right. So, I think you're right, it's not a carve-out. I think it's context dependent, use case dependent, where you might start thinking differently about when AI governance resonates.

 

Michael Fauscette

Yeah. I mean, I've been to a couple of CRM and CX conferences recently, and of course everybody wants to talk about generative AI. But the risks: employee risk, data privacy, certainly customer data and privacy risks too, right? So that's a different approach. And some of those systems are starting to build things in that can help from a technology standpoint. So maybe that's an interesting sidebar around this: what do you think from a technology perspective? What should companies look for in that underlying platform or approach when it comes to helping them manage that governance and compliance?

 

Jacob

So, I almost feel like the fundamental question here is: what does good governance look like, and then what do you need to build good governance? From my perspective, good AI governance starts outside of a platform. As we've talked about already, you need to be able to articulate, basically: what are your goals? What's your goal state? What are you comfortable with? What do you think is the right approach for your use of AI? Fine, let's say you've articulated three principles or some high-level articulation. The next step is to figure out: how do I translate that into actions? Can I translate it into actions? And there's some decision making that needs to happen here, right? So, if you say we don't want any AI we use to be biased, or we want all AI we use to be fair, what's the threshold that helps you make a determination that you're meeting that principle you set? I'm not the one who can tell you; this is something that has to be decided internally. Or, if you're in a regulated industry, maybe it's been determined for you, and you can just articulate it through a sequence of actions in house. Now, from my perspective, technology is relevant in two ways here. I mentioned actions. So, there's a course of actions on the development side of things. This is basically making sure that you're able, through your in-house tooling, assuming this is a scenario where you're building internally, to do the right checks on the datasets that you're going to be using to train models. It's about having the right kind of metrics and being able to examine and qualify the models that you're training for a particular use case, aligned to the things we were talking about. And then our bread and butter, when we talk about AI governance, is being able to articulate in full the sequence of decision making, and codifying the right information about the process of developing a particular use case and model, such that you build auditable content that proves you've done everything in your power to meet your goals. And importantly, from my perspective, and this is one of the reasons I joined Dataiku, there's this idea that governance should seamlessly integrate into the space of developing and deploying AI. That seamless integration might look different for different organizations. But from my perspective, the starting point is that codification of relevant information that speaks to the things you care about, and also the ability to enforce accountability: having people who are reviewing the process, signing off on the process, and that being functionally a gate to deployment, so that your governance speaks to any particular outcomes you're looking to achieve. Or, to put that differently, your governance is effectively the catalyst for successfully deploying things that will meet your company's tolerances or thresholds or goals.
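To make the gating idea concrete, here is a minimal sketch with hypothetical names and thresholds throughout (GovernanceRecord, fairness_gap, the 0.05 limit): a use case deploys only if its documented metrics meet the thresholds the organization set and a named reviewer has signed off. This illustrates the pattern Jacob describes, not how Dataiku's product implements it.

```python
from dataclasses import dataclass, field

# Hypothetical thresholds an organization might have set for itself.
FAIRNESS_GAP_LIMIT = 0.05   # max allowed outcome gap between groups
MIN_ACCURACY = 0.85         # minimum acceptable model quality

@dataclass
class GovernanceRecord:
    """Auditable record codified for one AI use case."""
    use_case: str
    metrics: dict                                   # documented test results
    sign_offs: list = field(default_factory=list)   # accountable reviewers

    def checks_pass(self) -> bool:
        return (self.metrics.get("accuracy", 0.0) >= MIN_ACCURACY
                and self.metrics.get("fairness_gap", 1.0) <= FAIRNESS_GAP_LIMIT)

def deploy(record: GovernanceRecord) -> None:
    # Governance as the gate: no deployment without passing checks
    # and at least one named sign-off.
    if not record.checks_pass():
        raise RuntimeError(f"{record.use_case}: thresholds not met")
    if not record.sign_offs:
        raise RuntimeError(f"{record.use_case}: no reviewer sign-off")
    print(f"Deploying {record.use_case}, signed off by {record.sign_offs}")

deploy(GovernanceRecord(
    use_case="churn-model",
    metrics={"accuracy": 0.91, "fairness_gap": 0.03},
    sign_offs=["j.smith"],
))
```

The record itself doubles as the auditable content: who reviewed, what was measured, and why deployment was allowed.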

 

Michael Fauscette

So, it's built into the process of going live with whatever that activity or use case might be, and you're doing your testing and ensuring that you've documented it as you go through that process, then.

 

Jacob

That was a much cleaner way of saying it. Yeah.

 

Michael Fauscette

That's what I'm supposed to do, right? So, I mean, I think this is really important for companies to understand. And I guess the other side of that, then, is: what if you don't? What are the potential consequences of an inadequate process around governance and compliance?

 

Jacob

Well, if we accept the common theory that AI governance and regulation are designed to mitigate risk, right, that's the key theme. Now, if we don't enforce the governance at all and we operate on the same theory, we just take the inverse, which is that if you're not doing anything, you're exposing yourself to risk. What that risk looks like will depend on the use case. If you're doing something very public, we've seen this time and again, I can't even remember going back to when, but very public mess-ups, let's just call them mess-ups, oopsies that were beyond oopsies, where companies basically had to put their tail between their legs and remove a lot of investment in particular products and services because of an outcome that could have been mitigated with a governance approach. We see it in the public sector too: we saw it in the UK when they were using some kind of algorithm to make determinations on students' test scores, right? We've seen it elsewhere in Europe. So effectively, governance is like an insurance policy; if you're not deploying it, you're exposing yourself to risks, and then you are, whether consciously or not, accepting the fact that you might have yourself a whoopsie.

 

Michael Fauscette

Yeah, I mean, that makes a lot of sense. Because in the US we've had some of those sort of "wow, are you really doing that with algorithms?" moments, particularly around things like sentencing algorithms, or places where there is really a clear risk of bias. And you know, it's very difficult to get a clean data set that would actually support that sort of an algorithm.

 

Jacob

In the case, the scenario you're referring to, I think they used a model that wasn't even designed for that purpose. They just repurposed it, plus the data. Yeah. No, it was a mess. That was gross. And I won't say that. That was careless. Careless,

 

Michael Fauscette

Careless is a good word. So, I mean, that kind of leads me into another part of this. We talked a little bit about how it's hard to regulate. But I'm curious, you deal with this all the time, and you talk to a lot of companies: what do we want from government? What role should governments play in this? Is this something that we really do need regulation around? Or is there some other approach that would be better?

 

Jacob

I have a friend who I worked with in government and with whom I've maintained contact to this day. And he and I feel radically differently about this. He is, like, free market: everybody should just do what they need to do. My perspective is not that. My perspective is that government's purpose, in theory, is to set out approaches to various things that ensure the public is safe. And part of ensuring that the public is safe means exercising regulation, or at least setting expectations for what good looks like. Now, if I look at the global market, or at different countries, I can think of two extremes. On the one hand, you have Singapore, which came out with a risk management framework in 2019, and more recently AI Verify, which is a non-statutory, not legally required, set of practices designed to support particular principles that they've articulated and that reflect what came out of the OECD. And then you have the EU, which is intense: it's quite interventionist, it's really thorough, it's really detailed, and it will likely put a heavy burden on certain use cases or AI systems. Government's role looks very different in those two scenarios. But the commonality is that both are saying: this is what good should look like, and these are the things you should care about. Who am I to say whether the EU version is the right way or the Singapore version is the right way? But what I do know is that the valuable thing is governments coming forward and saying, ostensibly speaking on behalf of the public: these are the things that are the good things. Right? Aspire to that.

 

Michael Fauscette

I mean, at a very basic level, that really resonates. And I think, you know, if you're an executive in a business, part of the problem is knowing what good looks like. I'm not sure, and some of the things inside of using generative AI are a bit opaque. So that is a risk, and you need to be able to understand that. And if you don't know what good looks like, then what do you do? Yeah, I mean, to me, that makes a lot of sense.

 

Jacob

So, this idea of what good looks like, we can actually unpack that, right? I'm referring to this very high-level notion of a North Star: follow the North Star and then figure out the rest. But the rest is the complicated part. The rest is the actions I mentioned earlier: what do you actually do, right? And this is the space that I find really interesting, and it's the one that we're getting embroiled in when we talk to customers and organizations who are looking to implement AI governance. Right now, there's no right answer. And we're kind of being prescriptive, in that we're proposing a workflow-based approach to governance that's integrated into the development and use of AI. It's not necessarily the case that everybody will follow that approach. They might take a really light-touch approach; they might rely on things like spreadsheets. It'll look different for different places for different reasons. But that issue of how you make it happen? It's an interesting time to be in this space because, and I'm going to make reference to, like, manifest destiny here, it's this period of exploration and figuring out the right path. Yeah.

 

Michael Fauscette

It also, I just wonder, it strikes me as I listen to that: the different approaches, of course, are going to make it difficult for global companies, because you have to comply everywhere. And honestly, privacy's already kind of been in that boat for several years. I've worked around that and had to help figure out privacy policies at companies like G2, where I used to be an executive. So that's a complicated one. And I'm just curious, then, are there drawbacks to the way this is being implemented? Like we said, Singapore's is more North Star, here's what good looks like, and the EU's is more regulatory compliance, forcing behavior essentially. Are there drawbacks to either approach, really?

 

Jacob

Oh, goodness. I mean, yes. Okay, look, this is me exercising the policy side of my brain, and this is experimental conversation rather than dogmatic, or even confident. So, if I'm looking at Singapore, and I'm thinking, here's the guidance that's been provided by the government on what good looks like, but it's not obligatory, and let's just accept that that's the state of play: there are no obligations, there are no punitive measures taken if you're not doing this, right? The drawback is that, by providing that approach, organizations might think it's, you know, indicative rather than actually functionally useful. And then there's what we talked about earlier, this kind of risk exposure, this idea of: are you doing the right things to ensure that your insurance policy via governance is in place? It might not materialize in a way that's beneficial to companies or the public. So, Singapore's an interesting context, because basically, I think even if the government makes a recommendation, companies may say, ah, yes, okay, we hear you, but we'll leave it at that. In the context of the EU, look, I'll talk to the state of play today, which is that we are watching trilogues happen; we are waiting for technical and political decisions to be made around what the content of the Act will look like. But we all know something's coming. And so, I think a lot of organizations are spurred on by the experience with GDPR to think more proactively about onboarding some of the things they're reading in the text today; they want to get ready. But getting ready at this stage is kind of complicated, right? Because nothing's real yet, though we have strong indications of the direction of travel. And so it's requiring those companies that are being very firm-footed to make investments without confidence that they're going to be the right ones, or whether they'll have to fine-tune, et cetera, et cetera. So there's a drawback there. In terms of a general drawback of a highly interventionist approach, some would say that being kind of rigid means it's not allowing innovation. Again, these are the things that somebody in my seat is obliged to say, which may or may not be true. Too soon to say.

 

Michael Fauscette

Yeah, I mean, it's an interesting balance. This is sort of a tech fallback, right? Oh, that stifles innovation, we can't have that. But there's also that dark-side risk. So, I feel like we've got to find that balance, particularly in something that is growing so quickly. I think I saw a McKinsey study that said nearly 100% of companies are going to do something with AI over the next 12 months. I mean, that's crazy, and not in a bad way, it's just like, whoa. So there is real risk, and it's a balance. So how do companies then think about this? How can you ensure, to some confidence level, transparency and accountability in the AI systems that you're testing and deploying now? I mean, what should you do?

 

Jacob

So, look, some of these are going to have no simple answers, right? The first one is: you need to know what your assets are. When we talk about this in the context of governance, it means having a competent and well-developed registry. What are the things that exist across your whole ecosystem, not only in the development stage, but also in deployment? And being able to wrangle that is not straightforward. For some companies or organizations, it takes a lot of work, because things are hidden away: they're on somebody's laptop, they're being productionized somewhere else, blah, blah, blah. So it takes a bit of, and I'm using this word illustratively, internal audit to figure out what the assets are. Then it's a question of aligning those assets to the expectations that you've articulated for what good looks like in your company, what you basically want to make sure is happening, and then proving that it's happening through rigorous documentation: having proof points that everything's aligned to internal policy, which is perhaps informed by external policy, and then having the ability to articulate internally and externally when AI is being used. This is another thing we often see, especially in the context of generative AI right now: how do you make sure that people know what's real, what's produced by AI, and what's not? But I think that extends beyond generative AI. This is a transparency outcome, right, or a transparency goal. If you're accessing, I don't know, finance through, whatever, Amazon, and Amazon uses an algorithm to make a determination about your creditworthiness, you should probably know, because again it gives you the opportunity to mount your challenge if you don't like the outcome of it. In terms of accountability, with that idea of assets, you need to know owners, you need to know what's been deployed and who signed off on it, assuming somebody has. That idea of accountability is really about allocating responsibility, and knowing, basically, where in the chain of the organization you can insert yourself if you have an issue, or something else. These things can be chaotic to organize, and so this is where tooling can be a useful thing. Tooling from the registry perspective is very useful. Tooling for the documentation of the development and use of different AI products is useful: being able to qualify their risk levels, being able to explain where they're being used, et cetera, et cetera. And then, of course, for the purpose of accountability, having documented information about who did what, when. This is a slight detour, but if you look at something that came out of the UK a year or two ago called AI assurance, the AI assurance theory was that you can grow trust in AI use, and AI in general, through AI assurance, and in so doing you will have wider impacts on the economy, because more people will be predisposed to using it, et cetera, et cetera. But AI assurance was really about outsourcing things like audit, elements of governance that could be executed by third parties. Now, imagine a world where an organization cares about transparency and accountability later down the line, but has already deployed many things, and then has to go back and hire a third-party auditor to help basically do what I've described.
That's a big price tag.
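A minimal sketch of the registry idea Jacob describes, again with hypothetical names and fields throughout (AIAsset, risk_level, and so on): each AI asset carries an owner, a risk level, deployment status, and sign-off history, so accountability questions like "who owns this, and who approved it?" have a documented answer.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIAsset:
    """One entry in a hypothetical AI asset registry."""
    name: str
    owner: str                                      # accountable person or team
    risk_level: str                                 # e.g. "minimal" or "high"
    deployed: bool = False
    sign_offs: list = field(default_factory=list)   # (reviewer, date) pairs

registry = [
    AIAsset("invoice-ocr", owner="finance-ml", risk_level="minimal",
            deployed=True, sign_offs=[("a.lee", date(2023, 4, 2))]),
    AIAsset("cv-screening", owner="hr-analytics", risk_level="high"),
]

# Accountability query: which high-risk assets have no documented sign-off?
for asset in registry:
    if asset.risk_level == "high" and not asset.sign_offs:
        print(f"{asset.name}: owner {asset.owner}, no sign-off on record")
```

Building this kind of record up front is the cheap alternative to the retroactive third-party audit Jacob mentions.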

 

Michael Fauscette

Yeah. Well, it seems like that's fraught with risk in a lot of ways, because you haven't established standards up front to check back against, which seems even more difficult, from my perspective anyway.

 

Jacob

I would 100% agree with you.

 

Michael Fauscette

The one thing that I heard in there that I think is interesting, and I haven't really thought about it this way, I mean, I guess it's obvious in some ways, is that transparency really varies greatly by the use case. Who you're transparent to and what you're transparent about is very dependent on what you're trying to do and what the outcomes are.

 

Jacob

Yes, I agree. And look, I'm not going to claim intellectual ownership over this. This is something that's implied in the heading of one of the obligations set out in the EU AI Act: transparency and information provision to users. And the idea of transparency, on the one hand, is just to say: okay, you're using AI here. But then it comes down to the world of transparency and explainability. It's this idea of: yeah, you're using AI here; now, what else do you need to know? What else can we be transparent about? It might be line-level explanations for why decisions have been made in a particular way. And that won't resonate in every circumstance, but it will in some.

 

Michael Fauscette

Yeah. So that makes a lot of sense, and there are some really actionable things that we've talked about today. That's really all the time we have. So, Jacob, I really want to thank you for joining me today. I know the audience will really appreciate the interesting discussion and the advice they can take out of this. It's a really actionable and important topic that companies really need to be thinking about. And like you said, they need to think about it first, not later, because later is much more costly and risky. Before I let you go, though, my hard question of the day, which I like to ask every guest: can you recommend somebody, a thought leader, an author, a mentor maybe, who's influenced your career and who you think the audience would find interesting to investigate and learn from?

 

Jacob

That was a really tough one for me to answer. And I think that my get-out-of-jail-free answer is this: in all the workshops that I've had, what's been fascinating is that the things that I've worked on, especially in this space, are really pretty democratic. It's groups of people coming together to hash things out, rather than one individual here or there. So, if you can withstand the laborious read of things like, I don't know, extensive documents that reflect legislative developments, I would highly recommend looking into them. If you haven't read the EU AI Act draft, you've got to do it. You've got to understand basically what's being laid out there, and know in the background that you literally had groups of, they're called high-level experts, coming together and hammering out the details of what matters. That to me is brilliant, inspiring. Similarly, if you're preoccupied with things like procurement, or with questions like: what are your principles? What could principles look like at a risk management level? Go to the NIST (National Institute of Standards and Technology) AI Risk Management Framework. Again, it's a long read. But it's useful because it helps to open your mind to ways of thinking about the challenging topics that we've covered today.

 

Michael Fauscette

And it could certainly inform your strategy as you're trying to think that through, too. So, it sounds like that would really help with a basic framework and help you understand kind of where things are going. So yeah, that's good. Thanks. And I don't think that's a cop-out at all, by the way. I think that's a very good way to point people. I mean, it's a bit of a long read, and I've certainly never recommended on my podcast that you go read regulations. But these days it might happen more and more, depending on where the world goes.

That's absolutely true. Well, anyway, thanks, everyone, for joining today. Remember to hit that subscribe button for more on AI. If you want to check out some research: we did a survey in August and published a research report on the Arion Research site. You should check that out; it's a free download. And then join us next week, when we'll have another interesting discussion around AI and business use of AI and automation. I'm Michael Fauscette.

Michael Fauscette

Michael is an experienced high-tech leader, board chairman, software industry analyst and podcast host. He is a thought leader and published author on emerging trends in business software, artificial intelligence (AI), generative AI, digital-first and customer experience strategies and technology. As a senior market researcher and leader, Michael has deep experience in business software market research, starting new tech businesses, and go-to-market models in large and small software companies.

Currently Michael is the Founder, CEO and Chief Analyst at Arion Research, a global cloud advisory firm; and an advisor to G2, Board Chairman at LocatorX and board member and fractional chief strategy officer for SpotLogic. Formerly the chief research officer at G2, he was responsible for helping software and services buyers use the crowdsourced insights, data, and community in the G2 marketplace. Prior to joining G2, Mr. Fauscette led IDC’s worldwide enterprise software application research group for almost ten years. He also held executive roles with seven software vendors including Autodesk, Inc. and PeopleSoft, Inc. and five technology startups.

Follow me @ www.twitter.com/mfauscette

www.linkedin.com/mfauscette

https://arionresearch.com