Alright, let's get going. Welcome to the webinar. We're going to be talking about how AI, and specifically generative AI, is accelerating the ability to be successful with data governance. If we could hop over to the next slide, please. While we're doing that, we've just launched a little poll, so take a second to have a look at it. Rhys, my co-presenter, will introduce it a little later on, but if you want to have a look now, by all means do; I can see people already entering answers.

Let's do some introductions. My name is Robert Curtis. I'm the managing director for InterWorks. We are a global data and analytics consultancy, and we work with great partners like Informatica, which brings me to Rhys. Rhys is the data governance leader for Asia Pacific at Informatica, and he's going to be our primary presenter today. Next slide, please.

A little bit more about InterWorks; we'll get the salesy bit out of the way quickly. We basically do three things, narrowly focused on data and everything around data to make you successful with it: strategy (where do you want to go with your data, and what vision will set up a strategy for the next couple of years?), building solutions to make your data effective, and supporting your data platforms, your data, and your community. A more detailed way of looking at it, if you go to the next slide please, Rhys, is this diagram. At the heart of it is unifying your data in a place where you can leverage it for maximum benefit. You surround that with governance, culture, and strategy, as well as the support and training of your community. The green parts at the top are the things your users are doing, and the foundational, back-room engineering work is everything in purple. We can help you with all of that.

We've been in business for twenty-seven years. We're a global company, and we're all over Australia. Some other fun facts: we have an enormous amount of experience. We were a Forbes Small Giant; Forbes picked only twenty-five companies, and we were one of them. Seventy-five of the Fortune 100 are our customers, across all verticals and all customer sizes. Our website has a blog about data and analytics that generates, I think, close to four million page views every year, so we've got a tremendous amount of thought leadership in this space. And many of you on this webinar might be our customers, so you might represent some of those eight thousand clients we have all over the world. Thank you, Giovanna, who has shared the InterWorks.com blog in the chat; it's a great resource to check out.

The last thing I'll say is that while Rhys is presenting, there's a Q&A option at the bottom of Zoom. If you have questions during the presentation, please put them in there. We'll aim to get through this in about fifty minutes or so, and then we'll use the remaining time to answer whatever questions we can. Other than that, Rhys, I'll turn it over to you.

Thanks, Rob. Hi, everyone. My name is Rhys, and, as Rob introduced me, I'm the data governance specialist and leader for Informatica in the Asia Pacific region. Today we'll be talking about the role AI has in data governance and the role data governance has in AI.
A bit of presenter context: I started my journey as a data scientist, and my move into data governance was really born from common data science struggles. As a data scientist, I found it was very hard to find the right data in the organization. Which teams do I talk to? How do I get access to it? Who owns it? Who are the key stakeholders? What does it mean to the business? There was the challenge of acquiring that domain knowledge, and then the accessibility of the data: how can I get access to the right data in a timely manner, while still being driven by the policies and regulations we have in place so that we're not breaching any compliance obligations? Then there were things like trust in the data quality. Is it good-quality information? Where does it come from? Does the business believe in it? Does it have the integrity we require as an organization to use it for actionable insights? And there was getting business buy-in: you'd build these amazing models, showcase them to stakeholders, and then ask how much they trust them. What are the business challenges we need to address? How do we bring technical insights and business value together while also managing risk? How do we use these modern practices of data governance and AI, taking advantage of the capabilities we have while managing risk at the same time? This is where I started to realize the role data governance plays in an effective AI-driven strategy, and how things like federated data governance models can unravel some of these challenges. These are the things I faced as a data scientist, and they led me into the data governance space.

What we've seen in the last four or five years is the rise not just of modern data architecture but of the AI trends that support it: generative AI, data products and data democratization, AI-driven data management, and AI governance. We realized that for this AI era to expand, you need a modern data architecture to operationalize your data strategy, and the success of this AI era depends on holistic, governed, and trusted data. That, in turn, helps you operationalize every part of your models. It helps you transform and prepare data for AI models: feature scaling, feature engineering, standardization, deduplicating information. It provides end-to-end lineage: where is the data coming from, where is it going, and what's happening in between? And it takes not only the technical context but adds the business layer over it and automates that process, so we can support it with business context, policies, regulations, business terms, and glossaries.
By relying on an integrated set of tools to achieve AI-powered data management, you can create the visibility you need across your data estate and make it easier to scale your data pipelines. However, what we all acknowledge on this call is that bad data leads to bad AI outcomes, and this is where risk management is important. To deliver real business value through AI-driven data insights and analytics, we need to show our audience, whether that's business stakeholders or auditors, transparency in the data, a common understanding of the data's integrity, and trust in it, so we can deliver actionable insights. But how do we scale our use of AI so that it generates insight that is trusted, that has transparency in its business and technical lineage, and that adheres to policies and regulations, remaining compliant in the way it's accessed and in the pipelines it's used in?

Time may not be on our side, because in every industry we see the value of protecting our data, the need to be compliant with policies and regulations, and the growing depth and enforcement of the obligations organizations have to abide by: things like PII protections and GDPR, or in financial services CPG 235 and the Privacy Act. So how do we help our organizations work creatively within those governance parameters while also scaling AI-driven data management to enable the business? This is where the challenge kicks in. Here are some examples where AI with poor governance and quality has damaged reputations. We've seen everything from chatbots agreeing to sell vehicles for a dollar, to legal precedent generated by generative AI that was later deemed illegitimate, to AI ball tracking in football matches locking onto spectators instead of following the ball. All of this points to the fact that generative AI is still evolving and many companies are hitting these snags. I'd bet most of us here would rather not learn these lessons the hard way; we'd rather make sure our data and AI initiatives bring value while being predictable and governed. This is just to highlight that reputations can be lost by not having the right governance enforced.

That brings us to the poll, which I'll talk about quickly. I want to see, through the poll, what everyone's thoughts were on the top generative AI challenges. In governance we talk about people, process, and technology. When you consider the impact of people and their skill-set readiness, of processes for wrangling more data while reducing risk, and of technology, you can easily see where data governance comes into the picture as critical to success.
In fact, looking at the top generative AI challenges from a survey we previously ran with CDOs, some of our long-time favourites make an appearance on the technology side. At the heart of trusted AI is trusted data, so it shouldn't be a surprise that data quality and data privacy top the list. We also found that numbers one and two here are where a lot of companies are putting their key investment spending. You can even consider AI ethics a byproduct of getting the first two right, because they deliver trust. Dealing with data quality, and the ability to discover, catalog, and understand the data fueling AI models, directly impacts accuracy, bias, reliability, and more. You might think the fifth item, AI governance, would be higher up, but when we surveyed our CDOs, the consistent feedback was: we need to understand the basics of data management and get those right first. Without trusted data you don't have trusted AI; it's the age-old saying of garbage in, garbage out.

So what does this mean for data governance readiness? Clearly, without a way to streamline how these challenges are addressed, AI adoption will slow down, create integration challenges, and more. The question is: what can we do about it? This is why data management is a mandate for corporate AI solutions. The success of AI depends on the effectiveness of trained large language models, and data management is required to ensure the vast amounts of trusted, timely data they need to learn effectively. As we mentioned before: how do we use AI-powered data management to quickly find features for our models? How do we quickly deduplicate information? How do we provide trusted, mastered data for customers, partners, services, and assets? And then there's the end-to-end lineage of where things start, where they go, and how they evolve, all in near real time, with the depth and breadth to account for change, so the business and its data adjust in the right way together.

At Informatica, we believe much of this can be solved with a modern data governance architecture in the cloud. Not only do we believe our tools should accelerate your journey to leveraging AI and generative AI, we have built modern, cloud-native microservices with AI at their core. That simplifies the way we drive data quality, data governance, and data sharing. Trusted, quality, timely data is at the heart of AI outcomes; the right data will keep your AI projects away from unpredictable outcomes, and it will help your AI avoid incorrect, biased, or irrelevant results. This is our Intelligent Data Management Cloud, and you can see here that our AI engine underpins the entire platform; everything within it is a different facet of data management.
That's the way Informatica structures its solutions. You can see data cataloging, data integration, data quality and observability, master data management, governance and privacy, and the data marketplace. All of these are different facets of data management, and they map to the key use cases organizations typically want to start with. We really want to understand our lineage and our data catalog: where is the data coming from and where is it going? Or we want to understand our common business terms, centralize them, and make sure a business term used by team A means the same thing for team B. Then there's quality and observability, which underpin every service here: how do we make sure we can trust the integrity of our information? And finally there's the marketplace and accessibility: how do we enforce our policies while also making our data available to the organization? So IDMC is a cloud-native, microservices-based, AI-driven platform, and it collects metadata across all of these services to train on data patterns and to accelerate and automate your data management journey.

Our AI journey actually started when we launched our CLAIRE engine in 2017. AI is embedded in all of our solutions, using machine learning algorithms and natural language processing on metadata to drive intelligence and productivity. We now have more than 50,000 metadata-aware connections, leverage around 48-plus petabytes of active metadata in the cloud, and I think we're processing over 92 trillion mission-critical cloud transactions as of 2024. Having that power in a platform lets us quickly train on and leverage data to automate and bring insights to your information.

We leverage AI capabilities across the IDMC platform in two ways. The first is our CLAIRE copilot capabilities. These include features like auto-classification, where Informatica scans your metadata and automatically recommends data classifications, including Australian-specific classifications. If you add custom classifications, those can be recommended to you the next time you scan those systems. It's a great way to go through your ecosystem, scan your systems, and immediately get a baseline of data classifications, and then start to understand, for example, that we have to enforce a policy around PII and need to know not only what we have but how we want to classify it. You can rapidly scale from, say, zero, ten, or twenty percent classified to seventy or seventy-five percent. For some systems, such as Snowflake, these classifications can also be written back as tag values, so your data engineers can go into the database and see those tags based on the classifications in the governance layer. We have seen organizations use this feature and go from classifying data in two months down to eight minutes. It's a very good way to rapidly scale before building on to the next use case, where we relate those classifications to policies, accessibility, and policy enforcement.
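To make that write-back idea a bit more concrete, here is a minimal sketch, not Informatica's engine: it classifies columns with a few toy regex checks and emits standard Snowflake SET TAG statements. The table name, tag name, patterns, and threshold are all assumptions for illustration.

```python
import re

# Toy regex-based classifiers -- a stand-in for the ML-driven classification
# described above; the patterns, tag name, and table are hypothetical.
PATTERNS = {
    "EMAIL": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "AU_PHONE": re.compile(r"^(\+61|0)[2-478]\d{8}$"),
    "DATE_OF_BIRTH": re.compile(r"^\d{4}-\d{2}-\d{2}$"),
}

def classify_column(sample_values, threshold=0.8):
    """Return the first classification that matches most sampled values."""
    values = [v for v in sample_values if v]
    for label, pattern in PATTERNS.items():
        hits = sum(1 for v in values if pattern.match(str(v).strip()))
        if values and hits / len(values) >= threshold:
            return label
    return None

def snowflake_tag_statements(table, column_samples):
    """Generate Snowflake SET TAG statements for the columns we could classify."""
    stmts = ["CREATE TAG IF NOT EXISTS data_classification;"]
    for column, samples in column_samples.items():
        label = classify_column(samples)
        if label:
            stmts.append(
                f"ALTER TABLE {table} MODIFY COLUMN {column} "
                f"SET TAG data_classification = '{label}';"
            )
    return stmts

# Profile a few sampled values per column, then print the write-back SQL.
samples = {
    "contact_email": ["jo@example.com", "sam@example.org", ""],
    "dob": ["1990-04-12", "1987-11-30"],
    "notes": ["free text", "more free text"],
}
for stmt in snowflake_tag_statements("crm.public.customers", samples):
    print(stmt)  # in practice these would be executed over a Snowflake connection
```

The point of the sketch is only the shape of the flow: profile, propose a classification, write the result back where the data engineers already work.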
Another capability we have is intelligent glossary association, where we automatically link your business terms and glossaries to your technical assets. What we're trying to create is active data management; the difference between active and merely proactive data management is the actual enablement of information for your stakeholders, giving them that dynamic experience of "here's the information, now go use it." We also have things like entity matching and join recommendations, so we can show you which tables could be joinable using things like primary and foreign keys, and highlight where duplicates exist, not just in the data itself but across tables, plus schema mapping and many other features. The benefits are mainly twofold: accelerated automation, leveraging metadata intelligence for faster decision-making and quicker, timely, trusted data; and a simplified interface that makes it easy for business users to go in and get the data they trust and need. All of this is powered by our CLAIRE engine, which drives those governance services.

The second area is our new generative AI capability, CLAIRE GPT. CLAIRE GPT is a natural language interface, and it leverages, once again, the forty-plus petabytes of IDMC metadata. Imagine something like ChatGPT, but operating within your organization and drawing on your governance layer for your users. Through that text interface, users can discover and explore data, both metadata and data, generate pipelines, understand lineage, understand quality, visualize it, and ask questions the way they naturally would. CLAIRE GPT brings a step change in simplifying data management, allowing non-technical users to interface with CLAIRE for a simpler data management experience. Once again, capabilities like intelligent glossary association, auto-classification, schema mapping, and column similarity are built in, along with data quality insights and data set recommendations. Data quality insights are suggestions along the lines of: we've profiled this information, and we think these data quality rules could help clean it. You can choose to accept, reject, or augment those rules and then apply them. This is handy because someone who isn't necessarily a technical user but really understands the information might say: I know how I want this data to look, but I don't know how to get there. Using those data quality insights they could decide, for example, that requiring an email address to contain an @ symbol is a good way to get a better representation of how many valid email addresses we actually have. They can apply those rule insights, rerun the profiles, and see in real time how the data quality scores have changed.
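Here is a tiny sketch of that kind of rule insight, assuming a hand-rolled profiler rather than CLAIRE: accepting the recommended "@" rule and re-running the profile gives a truer picture of how many valid email addresses the column really holds. The rule names and scoring are hypothetical.

```python
import re

# Two toy data quality rules in the spirit of the "email must contain @" insight.
RULES = {
    "email_not_blank": lambda v: bool(v and v.strip()),
    "email_has_at_symbol": lambda v: bool(re.match(r"^[^@\s]+@[^@\s]+$", v or "")),
}

def profile(values, accepted_rules):
    """Score a column as the percentage of values passing every accepted rule."""
    if not values:
        return 0.0
    passing = sum(1 for v in values if all(RULES[r](v) for r in accepted_rules))
    return round(100 * passing / len(values), 1)

emails = ["jo@example.com", "sam@example", "not-an-email", "", "ana@example.org"]

# Before accepting the recommended rule, only the blank check applies.
print("score (blank check only):", profile(emails, ["email_not_blank"]))
# The steward accepts the recommended "@" rule and reruns the profile.
print("score (with @ rule):", profile(emails, ["email_not_blank", "email_has_at_symbol"]))
```

Running it shows the score drop from 80.0 to 60.0, which is exactly the "better representation" idea: the stricter rule tells you how much of the column is genuinely usable.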
Then you can say: great, this is where I want to use it. Maybe I'm a marketing analyst, and I will only use this information to run a marketing campaign. That's where this becomes very important. So while data and AI governance help build trust that AI can be used responsibly, Informatica's IDMC is itself already powered by AI. What that means is we can help organizations accelerate trusted data delivery by automating tasks and replacing manual processes and procedures: generating data quality rules, monitoring them, and identifying and fixing quality issues; automatically linking governance policies, regulations, and the like to your enterprise data; keeping a business data flow that interacts not only with your source systems and business terms but with your stakeholders, your policies, your data quality rules and measurements, and the other overlays that enrich the experience of understanding where data comes from and where it goes, in both a technical and a business sense; and automating glossary associations, so instead of saying "we've created some new business terms, now let's go into the system and add each one," we can scan those systems and attribute the glossary terms automatically. It's also about optimizing resources by using APIs. A lot of customers will say, "Rhys, we already have solutions that integrate our workflow management capabilities." Great, we can use that; we have workflow capabilities of our own, but you might decide to keep business stewards in the governance layer and technical stewards in ITSM tools like ServiceNow, Jira, or Halo, and use the APIs to pull information in and out as required (there's a rough sketch of that kind of hand-off below). By taking the risk and uncertainty out of data curation and using automation to recommend the next best action, responsible AI becomes easier to achieve, through tools that are designed for those goals. That is responsible AI in use.
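To make that hand-off a little more concrete, here is a minimal sketch under stated assumptions: it reads open data quality issues from a hypothetical governance REST endpoint and raises matching tickets through Jira's standard create-issue endpoint. The governance URL, response shape, project key, and credentials are placeholders, not Informatica's actual API.

```python
import requests

# Hypothetical governance endpoint listing open data quality issues; the URL and
# response shape are assumptions for illustration only.
GOVERNANCE_URL = "https://governance.example.com/api/v1/dq-issues?status=open"
GOVERNANCE_AUTH = ("svc-governance", "governance-token")   # placeholder credentials

JIRA_URL = "https://jira.example.com/rest/api/2/issue"      # standard Jira create-issue endpoint
JIRA_AUTH = ("svc-jira", "jira-api-token")                  # placeholder credentials

def open_dq_issues():
    """Fetch open data quality issues from the (hypothetical) governance layer."""
    response = requests.get(GOVERNANCE_URL, auth=GOVERNANCE_AUTH, timeout=30)
    response.raise_for_status()
    return response.json()  # assumed: a list of {"asset": ..., "rule": ..., "detail": ...}

def raise_jira_ticket(issue):
    """Create a ticket for the technical steward in Jira."""
    payload = {
        "fields": {
            "project": {"key": "DATA"},        # hypothetical project key
            "issuetype": {"name": "Task"},
            "summary": f"DQ issue on {issue['asset']}: {issue['rule']}",
            "description": issue["detail"],
        }
    }
    response = requests.post(JIRA_URL, json=payload, auth=JIRA_AUTH, timeout=30)
    response.raise_for_status()
    return response.json()["key"]

if __name__ == "__main__":
    for issue in open_dq_issues():
        print("created", raise_jira_ticket(issue))
```

The design point is simply that the business steward's world (the governance layer) and the technical steward's world (the ITSM queue) stay in sync through plain API calls rather than manual copy-and-paste.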
So, given the need for AI-powered tools in a modern data governance and management solution, what are the building blocks? I want to go through three key pillars. The first pillar is helping you meet your risk and compliance requirements. Regulatory and compliance obligations continue to increase; just look at how ESG requirements, privacy acts, and AI acts are emerging, alongside GDPR and the like. We continue to evolve our tools to help you meet existing and new regulatory and corporate compliance needs. This is where Informatica comes into play, because not only can we take those policies and apply them to your information, but, as I'll show later in the presentation, our data access management is driven by policies: for the first time you can take the context of a policy and, through classifications, enforce it on the way users access information.

The second pillar is enabling analytics and AI across your organization. There are more and more areas of the business that want to consume a growing, diverse set of data across the organization, and that truly enables data democratization. It's about using tools like a data marketplace and providing proactive data privacy and security so that democratization happens within the governance guidelines.

The third pillar is establishing data quality and data observability across your data estate. Operating at scale means automation and visibility must be at the heart of any solution. Technical and business users need to be working on the same data, so that however it's enriched from the technical or the business side, there's a single-lens view from the organization's perspective. Observability lets you see how data is being used and how utilization can be increased and scaled to grow your business, so this is also about anomaly detection and finding patterns in your information, as well as observing how security and compliance tooling is working across your organization. Any compromise in these three areas means you'll be scrambling, either through manual processes or separate tools, to implement a holistic governance solution for your growing AI needs.

Okay, so let me jump into some of the solutions Informatica has around those three pillars. The first is one I mentioned: Cloud Data Access Management, which helps you manage risk and compliance with policy-based privacy and security. We released Cloud Data Access Management on IDMC earlier this year. We acquired Privitar, a leading access management vendor, in July of last year and integrated its access management capability into the platform. Through that integration, you can now use the power of AI-generated classifications, which we talked about before, and the power of CLAIRE's metadata to identify sensitive and private data and then apply policies across your teams. This new capability proactively makes sure you're putting the right data in front of the right people while protecting sensitive information under the regulations and policies you need to comply with.

If you look at this diagram, you have a policy definition. Let's say it's a PII policy: we need masking rules around personal information like date of birth, phone number, and address, for example. Then you have your classifications, which can come from those AI-generated classifications or whatever classifications you want to bring into the governance layer. We use those classifications, based on how the policies are defined, and enforce them on users. So say Rob and I are looking at the same Power BI dashboard: maybe I can view five out of ten columns, while Rob can view eight out of ten. That's masking, but you can also do things like tokenization.
You can get quite granular: maybe we want to keep an individual's year of birth but not the day and month, because for our analytics we only need to group users by year; or we add a rule that filters out anybody under eighteen. So you can get very granular with your policy transformations and then enforce them on systems like Snowflake, Power BI, Redshift, or S3, and users access that information under policy-based controls. The right people are seeing the right information. Layered on top of governance, cataloging, and lineage capabilities, this really helps you discover, understand, and trust your data and how it's shared with applications and users. We talked about the second pillar, analytics, AI, and data sharing, and here too CDAM, the access management solution, enables safe data sharing for analytics.

The next piece is the Cloud Data Marketplace, which is about easily democratizing data across your organization. We introduced the data marketplace a few years ago, and it's effectively trying to solve the challenge I described from my data science days: how do I find the right data? How do I collaborate with different teams across the organization? How do I know the data is current and of the right quality? The marketplace is how you find information; it doesn't store the data, it shows you where it lives. Today in my organization I might say: I'm a data scientist, so I email Rob about this data set, Rob emails somebody else, that person is on leave, they get back to me a week or two later, and then somebody says, actually, you don't have the right credentials, you shouldn't be accessing it, and it hasn't been updated anyway. We get into this rigmarole of trying to find information, and that's where a lot of our time goes when building out analytics and data science pipelines. The marketplace is aimed at letting data producers and consumers collaborate and provide data assets for those projects. Imagine I'm shopping for data the way I'd shop for headphones or glasses on an e-commerce store. It's broken up into categories, say finance, operations, and HR, and I go in, search, and discover what I have access to based on my permissions. Behind that marketplace layer sits the access management layer, so when I check something out, imagine putting it in my shopping cart, I request access to it, and the request goes to the relevant data stewards to approve or reject. Through that process the access management rules are also being enforced, so if I do get access to the information, I get it with the right policy requirements applied.
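As a rough illustration of those policy transformations (not how the platform implements them, since in practice they're enforced at query time rather than in application code), here is a minimal sketch; the column names, roles, and thresholds are hypothetical.

```python
from datetime import date

# Illustrative policy transforms: mask direct identifiers, keep only the year of
# birth, and filter out minors. Roles and rules are made up for this sketch.

def mask(value):
    return "****"

def age(dob, today=None):
    today = today or date.today()
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

POLICY = {
    "analyst": {                       # what a marketing analyst is allowed to see
        "masked_columns": ["phone", "address"],
        "dob_granularity": "year",
        "minimum_age": 18,
    },
    "steward": {                       # a privileged role sees everything
        "masked_columns": [],
        "dob_granularity": "full",
        "minimum_age": 0,
    },
}

def apply_policy(rows, role):
    rules = POLICY[role]
    out = []
    for row in rows:
        if age(row["dob"]) < rules["minimum_age"]:
            continue                                   # row-level filter
        row = dict(row)
        for col in rules["masked_columns"]:
            row[col] = mask(row[col])                  # column masking
        if rules["dob_granularity"] == "year":
            row["dob"] = row["dob"].year               # keep year of birth only
        out.append(row)
    return out

people = [
    {"name": "Jo", "dob": date(1990, 4, 12), "phone": "0412000111", "address": "1 High St"},
    {"name": "Sam", "dob": date(2010, 1, 3), "phone": "0412000222", "address": "2 Low St"},
]
print(apply_policy(people, "analyst"))   # Sam is filtered out; Jo's identifiers are masked
```

Same table, two very different views, driven entirely by the policy attached to the viewer's role: that's the idea behind enforcing classifications and policies at access time.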
Coming back to that Power BI example, I then have access to see five out of ten columns, or the right anonymized columns. I'm finding information without the rigmarole of having to ask around for where it lives. I can actually see what it means to the business and who owns it, I can chat to them, I can see what permissions are required, and I can see the usage context: who in the business is using it, what they're using it for, and why. You get all of that context, and then you request access. Traditionally, if I just emailed someone for data, Rob might come back and say, "Rhys, sorry, you can't have access because four of these columns are highly sensitive." Using the marketplace, I can get access with the right masking techniques applied to those four columns, so I can perform my analysis on the other six. It also lets data producers understand which data assets are actually being used. I might be creating these data assets, but how many people in the business are really using them? We see this time and again: when I was a data scientist I was creating Power BI dashboards, and suddenly maybe only three out of five were actually being used, so why was I spending my FTE time on the other two? This way even the data producers can see what's being utilized. There are collaboration mechanisms and rating tools in there too, so consumers can give feedback to producers: "Rhys, could you maybe change this, or add that?" Now we're also refining the way we build. It's not "I've turned out ten dashboards, please go and use them"; it's "which four dashboards do you really want to see, and how will they be most effective?" So that pipeline isn't just about bringing technical and business together, it's about bringing that collaboration together. It's a dedicated tool to drive the right data in your organization to the right places under a governed and protected framework.

The final pillar is about helping data users, at scale, quickly understand how and where their data is being used; understand the health of their data and take action to remediate it; understand how their data relates to the lineage of other data assets; and even make quick decisions on cost and performance optimization. With these enhanced data governance capabilities, how can we scale more easily? Let's quickly touch on this third pillar. Organizations today increasingly need to manage data and analytics at scale. More organizations are data driven, and there are multiple people with multiple skill sets working at all levels on data across the business; this is where citizen users come into play as well. So how do we enable different skills while maintaining control and visibility?
This is where a modern data governance and observability architecture gives you the ability to observe how data is used across your organization and then take action to improve that data flow while maintaining quality, trust, and security, providing different views of your data pipelines for technical users, business users, and governance teams. This is where persona-based views matter. When we talk to organizations, they often ask how to simplify their tech stack, and a lot of the time those large tech stacks exist because different tools are aimed at different personas. So how do we centralize that? Instead of multiple solutions, it's one solution that gives different views for different personas, and that's what's really important in organizations today. We want to see what's important for our role. If I'm a technical data owner, I may want to see the technical assets and how they interact with each other. If I'm a business owner, I might want a little visibility into those technical assets, but I really want to understand how the business layer, the policies, the business terms, the context, is being applied to them. That way, when we're trying to create actionable results, we're relying on insights that have been QA'd by both business and technical owners, and that builds trust.

Our AI-powered governance solution has predictive data intelligence built in to recommend the next best action. We learn your data consumption patterns: what data is used most, by whom, and for what purpose. Once we learn those patterns, we can focus and optimize your governance effort on what matters to deliver business outcomes, and make it fast and reliable. Most importantly, we need to augment and scale human effort to meet the needs of a growing number of data users. That frees up time by better connecting data producers and data consumers, without data stewards staying busy searching for and qualifying data. I remember that as a data scientist I spent a lot of time servicing requests from other kinds of data users: "Rhys, where is this information?" and I would spend a lot of time just finding it for them. Instead, we want a collaborative approach where there's a clean handshake between producers and consumers, they can collaborate well, and the qualification of which data is correct for a given team, whether that's marketing, finance, or operations, is determined by those users rather than by a technically driven team. From an Informatica perspective, we want to focus on the consumption side of data governance, because that's where business value is realized, by connecting you to reliable data sources. The key component is a self-service data marketplace that allows business users to shop for data. The marketplace is connected not only to data governance, the data catalog, data mastering, and data quality, but also to a data delivery engine.
The data delivery engine is the checkout functionality that actually delivers the information to users, which lets them get relevant, trusted data faster. By linking data intelligence to a delivery service, we're enabling data consumers to engage with what they need most, and that helps accelerate business outcomes and drive value for the organization: increased productivity, reliable operations, lower-risk data use, better customer experience.

So let's summarize how we modernize data governance for an AI era on the Intelligent Data Management Cloud. Our modern data governance portfolio is made up of four integrated, user-focused, or as I said, persona-focused, solutions. The catalog and governance services provide simplified, consolidated views of a data catalog, enabling data profiling and lineage, providing both technical and business views, and delivering proactive alerts and notifications integrated into the tools, so you can discover data and drive metadata intelligence. In other words: how do we not only understand this information but flag it, make sure there are proactive alerts users can go and act on, support data remediation and data retention use cases, and stay proactive in the way we address our data management capabilities? Then there's the data marketplace, which, once again, gives the consumer a storefront to access their data products. Next is the data quality service, which lets you quickly identify and resolve issues: anomaly detection, CLAIRE data governance insights, data quality insights, deduplication and consolidation. It also generates data quality scorecards, so you can traffic-light what good, amber, and red look like for your organization (there's a small sketch of that idea just after this summary) and see the scorecards for data consumption across the entire portfolio; across the whole solution, you receive data quality statistics through every service. And lastly there's the data access management piece. Now that we've cataloged and built our ecosystem of classifications, policies, and regulations, let's secure sensitive data, but secure it while building trust in data sharing and improving data literacy. We're actually making data more available to users: like the example I gave before, where previously you couldn't look at a table because two of its columns held sensitive information, now you can look at that table with those two columns masked appropriately according to your user permissions.
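As a small illustration of that traffic-light idea, here is a sketch with made-up thresholds and rule scores; actual scorecards are configured in the product, and this only shows the banding logic.

```python
# Toy traffic-light scorecard: bucket each data set's quality score into
# green/amber/red bands. Thresholds and scores below are illustrative only.
THRESHOLDS = {"green": 90.0, "amber": 70.0}   # red is anything below amber

def traffic_light(score):
    if score >= THRESHOLDS["green"]:
        return "green"
    if score >= THRESHOLDS["amber"]:
        return "amber"
    return "red"

def scorecard(rule_results):
    """rule_results: {data set name: list of per-rule pass rates (0-100)}."""
    card = {}
    for name, rates in rule_results.items():
        score = round(sum(rates) / len(rates), 1)   # simple average across rules
        card[name] = (score, traffic_light(score))
    return card

results = {
    "crm.customers": [98.0, 95.5, 91.0],
    "finance.invoices": [88.0, 72.4, 79.9],
    "ops.sensor_readings": [60.1, 45.0, 70.2],
}
for name, (score, light) in scorecard(results).items():
    print(f"{name}: {score} -> {light}")
```

The value of the scorecard is less the arithmetic than the shared definition: everyone agrees in advance what green, amber, and red mean for the organization.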
So why adopt AI without that support, when it's really quite easy to get started? What we offer are the tools to increase trust and improve credibility. We help you reduce bias in AI and minimize the hallucinations that occur when AI isn't given complete and accurate data. We can lower the risk of data misuse and ensure data is shared with the right teams and is appropriate for use. With a reliable data governance solution in place, you improve outcomes and strengthen your reputation, and responsible AI helps you get ahead of competitors. By enabling and monitoring data quality and improving your data pipelines, we're avoiding that classic garbage-in, garbage-out scenario. And once you've seen and experienced what we just talked about, clear governance, democratization, and observability in use, you can see that a key role of a modern data governance solution is agility: the ability to drive trusted data insights and use them in an accelerated fashion; to deliver them with a complete, end-to-end set of capabilities; to make data available to users and to the requirements of policy mandates; and to bring down risk. You'll have auditors coming into play, and business stakeholders, both internal and external, and because of those regulatory requirements you need a way for your organization to democratize, observe, and govern while also enhancing your actionable insights and bringing tangible value, real ROI, to the organization. By democratizing and simplifying delivery for both technical and business users, we can easily connect your data sources and producers, with trusted data and with data observability to monitor those pipelines.

So we've talked about the changing landscape of data in the face of AI adoption and how your data challenges can make or break your AI initiatives. To deal with the quantity and complexity of data and AI governance, you can start using this leading AI-powered data management cloud to accelerate and simplify your data governance and cataloging requirements, giving you a solid foundation for growth. We can also help you meet regulatory and compliance challenges with leading data privacy and security applied to your data products, to deliver and enforce the ethical use of information, for example through the Cloud Data Marketplace. And we help you solve the data quality and observability challenges so you can understand your data estate and take action to scale those pipelines. Those are the three areas we help you manage and mitigate in one solution: supported by generative AI, by the role of AI, and by how you govern AI, all in one portfolio. That's it from my side, so I'll pass back to Rob to talk about ways to get started.

Hello, I'll just jump in here. Robert had an issue with his laptop, so I'll cover the next steps. If you want to move on to the next slide. Well, thanks, everyone, for joining today's webinar.
If you have any questions about today's session, or if you want to talk to us about data governance, AI, InterWorks, or Informatica, we can help. Simply scan the QR code to contact us, fill out the form, or use the email I'll drop in the chat, and we're happy to schedule some time for a chat. That's it from us. Rhys, any final words for today?

Yes, I saw one of the questions, from Vincent, that I'd like to address if that's okay. Vincent's question was along these lines: we have unmanaged, unknown data that needs to be governed first, and before we turn AI on appropriately we need to rectify those legacy solutions. Vincent, to your question exactly: there are three parts to this. One is unlocking the power of AI, but another is the ability to use AI to actually find that unknown. What I mean is we can go in and scan your systems and understand those legacy solutions; we have hundreds of out-of-the-box connectors available to you, and we scan all of those systems. What our AI, the CLAIRE engine, can then do is stitch that lineage together for you using inferred data lineage. It will say: Vincent, we've scanned your legacy systems and we think systems A, B, and C are linked, and here are the links. These appear to you only as recommendations, and you can choose to use them as a way of governing and understanding that legacy environment. So this isn't about leveraging AI to go and build your actionable insights; it's about leveraging AI to find the unknown, to discover, and to help you get there more quickly. Traditionally you might say, "we're going to scan these systems and put together big maps of how we'll find the relationships," but here the AI goes in, helps you understand those lineages, understands where things are coming from and going to, and says, "here are the recommendations of what we think it looks like." Then it's up to you and the team to say, "yes, we agree with it, now let's add other aspects to it." I hope that answers the question.

Cool, okay, awesome. If you are interested, Rhys has the QR code you can scan to reach out, and of course you can check out the Informatica or InterWorks websites to learn more. We're happy and hopeful to hear from you, and we look forward to helping you with any data or governance needs you might have. Thank you so much for attending. Thank you. Bye. Thanks, everyone. Bye.