Now this requires driving a change in skill set and a change in culture in organizations, and that's also part of the data literacy and AI literacy piece. We touched on this in the previous session, but regulatory compliance is continuously evolving to direct how companies use AI. We see several pieces of legislation: the EU has built a lot of regulation, like GDPR and the AI Act, and we see Australia adopting those kinds of regulations as well. We have internal and external auditing, and governing bodies setting out the ways we need to maintain regulatory compliance. How do we make sure we don't breach those compliance obligations? The ambiguity and flux in these mandates make it challenging for organizations to safeguard data privacy and security, and as you leverage large data sources to train models, you need the ability to govern your whole data estate. And finally, the responsible treatment of data and AI: making sure our outcomes are correct, fair, unbiased, and cause no harm is key to your AI practice. We know brand reputation and the trust customers have in us are so important. So if we're collating personal information and utilizing it to provide insights to our members in insurance, our customers in a bank, or even in asset management, how do we make sure we actually deliver unbiased, fair outcomes? And to further complicate the issue, there continue to be thousands of production data sources across on-prem and cloud ecosystems.
Each of these ecosystems has its own disparate tools and capabilities, and most of us in the room here work across multiple ecosystems and leverage data that is on-prem and maybe unique to our environment as well. So centralizing control and governance while distributing trusted, high-quality data to your analytics and AI teams can be a major hurdle, because those teams want to leverage all the data across the organization, or even from third parties, not data from a single source or limited in scope. So now let's talk about how we solve this problem. It involves adopting a modern approach to data and AI governance. A modern data governance architecture in the cloud helps simplify, centralize, and automate data and AI governance. We believe our tools should accelerate your journey to leveraging AI and generative AI with AI for data management. What that means is that AI for data management equips you with intelligence, efficiency, and automation, which are the things required to overcome those challenges of governing AI. We have built a modern, cloud-native, microservice-based platform with AI at its core to simplify the way we drive data quality, data governance, and data sharing, helping you build trust across the data and AI life cycle. So you see trusted, high-quality, and timely data at the heart of your AI outcomes. The right data will prevent your AI projects and their outcomes from being unpredictable, and it will make sure your AI avoids incorrect, biased, or even irrelevant outcomes. Together, these integrated solutions operationalize four key pillars for building a modern data and AI governance foundation, geared for this AI era. The first pillar helps you meet your risk and compliance requirements. Regulatory and compliance issues continue to increase.
Just as ESG regulation and the AI Act are now emerging globally, we continue to evolve our tools to help you meet existing and new regulatory and corporate compliance needs. The second pillar enables analytics, BI, and AI access across your organization. More and more areas across your business want to consume a growing, diverse set of data across the organization, and there's a growing number of data consumers with unique data demands. So enabling data democratization for the organization is a must. This is key to data discovery, because today there are more citizen users, more people who want access to their internal information. We want to democratize that, and that's a form of data discovery. Using tools like our data marketplace, customers can provide self-service access to verified, governed data, and then scale it out and provide it at speed. The third pillar is establishing data observability across your data estate. Operating at scale means that automation and visibility must be at the heart of any solution. Technical and business users need to be working on the same data, and observability must show how data is being used and how utilization can be increased and scaled to grow your business, as well as how security and compliance tools are working across your business. And the fourth pillar is establishing a balance between innovation and risk: safeguarding data with access control policies to ensure safe, responsible use of data and AI; data protection techniques such as deidentification, masking, and so on to help ensure that sensitive data is not exposed to AI risk; and native controls and policy pushdown, which allow you to operate across a diverse data and AI landscape while prioritizing data privacy and security.
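To make the data protection techniques in that fourth pillar concrete, here is a minimal, illustrative sketch of deidentification and masking in Python. The helper names and rules are hypothetical assumptions, not Informatica's engine; real platforms apply policies like these automatically, often pushed down to the data store at query time.

```python
import hashlib

def mask_email(value: str) -> str:
    """Mask the local part of an email, keeping the domain usable for analytics."""
    local, _, domain = value.partition("@")
    return f"{local[0]}***@{domain}"

def deidentify_dob(value: str) -> str:
    """Generalize a full date of birth (YYYY-MM-DD) to the year only."""
    return value[:4]

def tokenize(value: str) -> str:
    """Replace a direct identifier with a stable, irreversible token."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

record = {"name": "Jane Citizen", "email": "jane@example.com", "dob": "1987-04-12"}
safe = {
    "name": tokenize(record["name"]),     # pseudonymized
    "email": mask_email(record["email"]), # partially masked
    "dob": deidentify_dob(record["dob"]), # generalized to year
}
print(safe)
```

The point of the sketch is the trade-off it encodes: the masked record is still joinable (stable tokens) and still analyzable (year of birth, email domain) without exposing the raw identifiers to an AI pipeline.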
Now, any compromise in these four areas means you'll be scrambling, either through manual processes or separate tools, to implement a holistic governance solution for your growing AI needs. Informatica has been in the data governance business for many years now, and AI is changing so rapidly. This is where we see the need for AI governance: a broader capability that says all of the assets and results of my AI projects need to be safe, secure, and meet our compliance requirements. AI governance is about people, processes, and tools, and often means driving cultural and transformational change for your organization. At Informatica, what we're also doing is working on simplifying and accelerating your journey to governed AI outcomes; that's the core of the governance solution from an AI perspective. You don't need AI governance to slow down your AI projects; you actually need AI governance to accelerate your AI. It can build trust in and adoption of AI across the organization. It can shorten time to insight by speeding up the identification of data assets you can use, AI project approvals, model validation, and observed outcomes, and it can provide safety and compliance with the ability to drive unbiased and ethical outcomes at scale. AI governance, for us, is holistically everything that enables us to do AI work responsibly. There are many aspects to this across people, processes, and technology, but it's clear that governance will be critical to the success of scaling your Gen AI applications in production. In fact, governance is very quickly becoming a critical focus as adoption scales.
And given Gen AI is democratizing the use of AI, literally every business area sees an application for it, so that scale will be unprecedented over time. And because governance is an accelerant for AI, the only way to efficiently leverage these new capabilities for competitive advantage at scale is to use AI governance as a growth lever. I'm just going to add in a few more of the animations there that I forgot to add in. So what can we see here? AI governance is that growth lever. It's embedded in the AI life cycle. It builds trust. It shortens time to insight. It enables AI safety. And on being embedded in the life cycle: AI governance isn't a one-off task; it's woven through every phase of the AI life cycle, from start to finish. Its end-to-end integration means issues are caught early and best practices are enforced consistently, avoiding costly failures late in the game. Now, governance is ultimately about trust. When you have clear policies ensuring data quality, privacy, and fairness, stakeholders trust the AI outputs. Teams and executives gain confidence that the insights are reliable and bias-free, and even customers and regulators feel assured. This trust encourages broader use of AI solutions; it helps people embrace the technology rather than resist it. We also see things like shortened time to insight. Paradoxically, good governance makes AI development faster. With robust governance, data is ready and approved for use more quickly, and models are validated against standards early on. Instead of lengthy back-and-forth to fix data issues or get compliance clearance, teams can accelerate from raw data to actionable insights. This shortens project life cycles and delivers faster time to value for AI initiatives.
And then what this does is it drives growth. Far from being a bureaucratic hurdle, AI governance is a strategic growth enabler. It turns caution into confidence. Companies with strong governance can scale AI across the organization without fear, driving digital transformation faster. And by building a foundation of trust and accountability, governance lets you innovate with AI boldly, unlocking new opportunities and accelerating business growth while others might still be grappling with risk. Okay, now let's examine a few steps to ensure you're setting up a governance solution that can rapidly deliver trusted, high-quality, and reliable AI outcomes. Even with Informatica's Intelligent Data Management Cloud, a leading data management platform for AI and Gen AI projects, we are still not sitting still. We want to make AI governance simpler. We want to make data discovery simpler. We want to be more efficient in the way organizations can deliver that outcome and have that security in their governance platform. So, comprehensive inventory: the first thing we want to do is extend the ability to scan all of our source systems, whether on-prem or in the cloud. We want to catalog them and catalog all of our metadata across all of those platforms, including model registries, model life cycle management platforms, and vector databases. In addition to structured data, we can catalog many unstructured data sources today as well. Detailed metadata extraction from unstructured data helps optimize search, discovery, and retrieval for use in Gen AI and AI agents.
Once we've scanned and understood classifications, tagging, glossary associations, and relationship discovery, we've got an enriched catalog: technical metadata with business insight layered on top. Then there's governed access. For governed access, Informatica offers automated, customizable workflows that enable collaboration between stakeholders and improve assessments of regulatory compliance. We also provide enforcement of access control policies for data assets via pushdown, using enforcement engines in common data stores. And trusted data: with Informatica, you can leverage state-of-the-art quality tools for unstructured data. This is a complete paradigm shift from quality for structured data. You not only look at consistency, completeness, and accuracy in a column, or adherence to a data type; you can also explore subjective measures like relevancy, bias, and toxicity. We deliver fine-grained access controls for data and AI pipelines, including RAG architectures, to ensure data is safeguarded from unauthorized access. And data protection measures, such as removal of PII and sensitive fields, can protect and increase the value of unstructured data and its use in AI models. Finally, for evaluation and monitoring: automatic scanning of AI deployment platforms to continually monitor output from these systems, and an expanded data observability solution to provide dashboards and metrics on AI performance and its underlying data. The future vision for evaluation and monitoring is the ability to pull all of that together in a single solution, and that's what Informatica offers.
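Those subjective quality measures for unstructured data (relevancy, toxicity, completeness) are, in real platforms, scored by ML models. The sketch below uses crude keyword heuristics purely to show the shape of such a quality gate; every name, deny-list, and threshold here is an illustrative assumption, not how any vendor actually scores text.

```python
# Illustrative only: real platforms use model-based scorers; these keyword
# heuristics just show the shape of a rule-based quality gate for text chunks.
TOXIC_TERMS = {"idiot", "stupid"}  # hypothetical deny-list

def quality_profile(chunk: str, query_terms: set[str]) -> dict:
    words = {w.strip(".,!?").lower() for w in chunk.split()}
    return {
        "completeness": min(len(chunk) / 200, 1.0),  # enough context to be useful?
        "relevancy": len(words & query_terms) / max(len(query_terms), 1),
        "toxicity": len(words & TOXIC_TERMS) / max(len(words), 1),
    }

def passes_gate(profile: dict, max_toxicity=0.0, min_relevancy=0.3) -> bool:
    """Only admit chunks into a RAG index if they clear the quality bar."""
    return profile["toxicity"] <= max_toxicity and profile["relevancy"] >= min_relevancy

p = quality_profile(
    "Claims are assessed within five business days of lodgement.",
    {"claims", "assessed"},
)
print(p, passes_gate(p))
```

The design point is that the gate runs before data reaches the model: chunks that fail on toxicity or relevancy never enter the retrieval index, which is cheaper than filtering model outputs afterwards.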
So I know it can seem like a big step, and even a lot of change in governance for your organization. However, you can start small. You don't need to boil the ocean and get value from the whole thing straight away; you can get value from simple steps. An easy first step can be to scan the data across a single source or a single data domain. Start with a data domain and say: look, maybe we're an insurance company and we want to start with just claims systems. Start with a specific data domain and a small number of sources; narrow down on one area you want to focus on. With intelligent glossary association, we can associate your business meaning to semantics: bookings versus revenue, customer versus account. Then you can understand the context of its operations, its quality, and likely identify the downstream impacts of the data. You don't need to get to millions of assets before you get value. Governance is iterative, and you can build on the effort you have started. Now I'm going to jump into a bit of the UI. I just wanted to see if there were any questions, Robert, or if I can just jump in. No questions yet. You are free to go. But remember, folks, let's take advantage of our time here with Reese. Get all those questions in, and we'll get to them at the end of the webinar. All yours, Reese. Cool. I will just share my screen. Rob, just a sense check: can you see the screen here? Yep. Yep. You're good to go. Perfect. Okay, guys. So what I've done now is logged in to Informatica's governance solution. When I log in, I can see this dashboard view of my governance estate. You can create these dashboards for different users or user groups, whatever is important for them.
So a data steward in, say, the claims data domain can log in and see everything related to that. This is where we get that single view of business and technical: you can see here I have business assets and I have technical assets, and these are widgets that I've created. You can create as many widgets and as many pages as you like. I've created other dashboards as well: data privacy dashboards, curation dashboards, AI compliance dashboards. And what I can do here is go in and say: okay, I'm looking at policies and processes with their owners. By the way, our CLAIRE AI engine can curate these dashboards for you, so you can go here and see these CLAIRE curated suggestions. This is where the AI can help scale that data discovery piece and say: look, your organization has X data classifications, this is your recommendation status, these are your catalog sources with pending recommendations. Click those recommendations and understand what they are. This is where domain knowledge comes in, because those users can say: look, we really understand that information, but sometimes we don't necessarily know how to curate our information, our metadata. The CLAIRE curation suggestions can provide that guidance, because you can understand the references and the recommendations, and you can say yes, we want to apply them; or no, we don't; or, actually, that's food for thought, let's think about how we want to action it. So this is, once again, how the CLAIRE AI engine can help streamline data discovery, because what we're doing here is getting an idea of what we have and where it is.
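That accept/reject/defer flow for AI curation suggestions can be sketched as a simple triage over confidence scores. This is an illustrative model of the workflow, not Informatica's API; the thresholds, asset names, and classifications below are made up.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    asset: str
    classification: str
    confidence: float
    status: str = "pending"  # pending -> accepted / rejected (or left for review)

def triage(suggestions, auto_accept=0.95, review_floor=0.60):
    """Auto-accept high-confidence tags, queue the middle band for stewards,
    and reject low-confidence noise."""
    review_queue = []
    for s in suggestions:
        if s.confidence >= auto_accept:
            s.status = "accepted"
        elif s.confidence >= review_floor:
            review_queue.append(s)  # a steward decides these by hand
        else:
            s.status = "rejected"
    return review_queue

batch = [
    Suggestion("CLAIMS.CUSTOMER.DOB", "Date of Birth", 0.98),
    Suggestion("CLAIMS.CUSTOMER.REF", "Phone Number", 0.71),
    Suggestion("CLAIMS.NOTES.BODY", "Tax File Number", 0.22),
]
for s in triage(batch):
    print(f"Review: {s.asset} -> {s.classification} ({s.confidence:.0%})")
```

The value of the middle band is exactly what the speaker describes: the engine scales the easy calls, and human domain knowledge is spent only on the suggestions that genuinely need it.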
Now, that actually takes me to a key question that people always start with when they're looking at a governance project: Reese, we don't know where our information is or what we have. The first part is just getting that understanding of where it is. Back on my slides, one of the things we talked about was going in and scanning those sources. For instance, I could go in and scan with any of our hundreds of out-of-the-box connectors; these are ones I've scanned for demo purposes. I'm going to click into this Snowflake scanner. What I get here are database statistics and usage information, and then I can see everything that resides within that data warehouse or source system. This is a great entry point for organizations, because the first thing we've answered is: what do we have in that source system? Maybe we're doing a modernization project, moving everything from an old data warehouse to our new analytics-ready warehouse, and we want to make sure we're pulling the right curated information into the new warehouse. So let's scan the data warehouses we might retire at some point, see what's in them, and then start to curate what information we want to bring across to the analytics-ready warehouse. And this is where we can start to see: great, we know we have forty-nine tables. We have statements, Snowflake tags, schemas, primary keys. You can even jump into the hierarchy, open up that catalog source, and look at that database to understand the schemas in there. Because, bear in mind, one of your first projects might not start with a source system, but with a specific data domain.
So you may say: look, we could scan everything in Snowflake, or we could just scan a specific database or a specific schema in Snowflake, and that's how we want to start our curation. In this case, I can jump in and see, from a hierarchical perspective, what's there: the catalog source, the schemas, the tables, the columns. But then I can extend that and ask: what are the relationships here? What are the catalog sources, what are the databases? From here, I can even attribute things like policies to it, and I can look at stakeholders, attributes, tickets, and history. This is a good way to get that first overarching view of what we have in our source systems. We have the ability to chat, to recommend, and to rate. Now, stepping back: maybe you're thinking, I'm not really interested in starting with a source system, I really want to understand our domains. What are our domains, our business terms, our subdomains? Understanding what that looks like is also critical, critical to having that link between our business-use purposes and our technical estate. And this is key, because if I click into the HR subdomain, I may want to know a little about it, but mostly I want to know how it relates to the rest of the organization. I can see here that it has four business terms sitting under that subdomain: department ID, location ID, manager ID, job ID. Great. So this is where we get a good understanding of what we have. At Informatica, we separate data discovery into, yes, business terms, and, yes, your catalog sources and systems, but also things like business processes.
Now this is important, because when we look at lineage, we traditionally think of it as going from source system A to source system B and then into a BI tool like Tableau, Power BI, Qlik, whatever it is. But there are also many business processes in there, and we want to understand those processes. We want to be able to click into them and see the definition, the application, and how it's relevant to that process, and ask: what are the steps in that process? This is where we can even understand things like process maps. These are business-focused process maps, and the idea is to say: we see information coming in. Maybe we're in financial services, and this is obviously a KYC form. We're trying to collect information from customers, and we can see it landing in this data warehouse. Maybe the data quality is low or high, but what was the actual process for that information to land there? We can see the business steps here. We can see the onboarding decisions, the individuals, how we have to obtain official identification documents, and the steps entailed in enriching that information. Then we can start to determine whether the degradation in data quality comes from a technical step or from a business step. That's important, because sometimes we think: bad data quality, we have to somehow fix it, let's add data quality rules and clean the information. But maybe we're getting bad data quality because of our collection processes or business steps, and this is where you can identify that, and see how it's tethered to the rest of that journey. Now, alongside processes, we also have things like policies.
Remember, on that wheel we were talking about policy management and regulatory compliance. This is where we can define those policies. The AI can then say: based on your classifications of, let's say, PII, we can see you have things like date of birth, phone number, and other PII fields. And then you can say: we're using those classification rules, and we want to attribute this policy to them. What is that policy, how should it be used, and where else in the business is this policy used? We can see that this policy has a relationship with eighty-four business terms and one geography, because maybe your organization operates in different states, or is multinational across different countries, so different acts apply depending on the state or country. Different business areas could have that as well; maybe some areas, like finance, are more regulated than other departments. So what are the policies we must abide by? Then we can break things into projects. When we talked earlier, on the last slide, about starting small: when I talk to customers about how they want to start a governance project, I tell them to define it as a project. In that project, your goal might be to comply with a certain regulation. Maybe it's CPG 235, maybe it's the Australian Privacy Act, maybe it's around PII and PCI information. That's our project. Then within that, ask: what are the source systems that hold that information, how do we go and classify them, and how do we assign the relevant stakeholders? We also have things like regulations and legal entities, so you can define whether you have a parent organization or subsidiaries.
We want to define those legal entities because maybe we want to operate them more consistently, or we want to operate them in separate silos, but either way we want that hierarchy so we can manage those differences. Same with business areas. And finally, what I want to show you is AI models. This is where we can go in and say: look, our data scientist has built an AI model. We have a customer churn prediction model, and the data scientist goes into the catalog, provides these metrics, and tethers their model to the solution. What this does is give confidence in those business insights. My background, by the way, is in data science; I used to be a data scientist and built a lot of models. One of the challenges I found was convincing the end users, the end stakeholders, to use the results, because they'd say: we like the results, but we don't know how to verify them, or we don't really understand them. This is that bridge between the data scientist and the end stakeholders, because it gives a business description of the model. You can see more technical attributes, like drift scores and bias scores. You can see the model purpose, model formats, repositories, all of those things. Then you can see the training datasets, the output datasets, the input datasets, and you can drill down into what those elements are. So when this AI model was built, these are the columns that were used, and these columns can come from various different tables. This gives great insight to the business steward, who can say: you know what, I agree with this, or maybe I disagree with this.
If I disagree, I can start a comment thread and have a conversation with the data scientist: actually, I think we should change these key attributes, rerun the model, and see what the outcomes are. What this does is enable a collaborative approach between your business stewards, your business, and your technical users. So now you've developed a model and put it into production as a joint initiative across your whole business, and you can track and validate that within the governance solution as well. This is how we streamline: we want Informatica's AI to help scale, surface, and curate this, and then we want these outcomes to also act as guardrails for the AI activity in our organization. So that's a good way to start: understand, discover. You could even use the search functionality to do that. I like the search functionality because sometimes we don't know what we're looking for; we just want to ask, what do we have? And by the way, you can customize all of these filters. Say I'm looking at sales data, for instance. I can just type in "sales", it will show me hundreds of things, and then I can create filters and say I want to filter by datasets, for example. I can see this dataset here, and then start to deep dive and understand: okay, that's my data discovery, I like it, it's got a good data quality score. I can click into that and see that the business has set these thresholds, and they're saying this information is good to use.
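The "good to use" verdict behind a data quality score can be sketched as a simple aggregation of per-rule pass rates checked against a business threshold. The rule names, row counts, and the 0.95 threshold below are all hypothetical; real platforms weight dimensions and track these per rule occurrence over time.

```python
def dq_score(rule_results: dict[str, tuple[int, int]]) -> dict:
    """Aggregate per-rule (passed, total) row counts into per-rule scores
    and a simple unweighted overall score."""
    scores = {rule: passed / total for rule, (passed, total) in rule_results.items()}
    overall = sum(scores.values()) / len(scores)
    return {"scores": scores, "overall": round(overall, 3)}

# Hypothetical rule occurrences for one dataset
results = dq_score({
    "completeness_not_null": (9_800, 10_000),
    "validity_date_format":  (9_950, 10_000),
    "uniqueness_policy_id":  (10_000, 10_000),
})

THRESHOLD = 0.95  # assumed threshold set by the business
print(results["overall"], results["overall"] >= THRESHOLD)
```

This mirrors the two levels of detail in the demo: a consumer looks only at the aggregated verdict, while a steward can drill into the per-rule scores to see which rows and rules are dragging it down.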
Earlier we talked about those dimensions and monitoring their data quality, so this is where you can look at that and say: maybe I want to deep dive into each rule occurrence itself and understand which rows failed, when it was last run, who ran it, and what the actual rule is. But at a high level, maybe I don't need that granularity; I just want to see whether it's compliant and what it looks like in aggregate. Once I've determined that, yes, I do like this information, the next step is what we talked about: data democratization, making it accessible. This is where you jump into the marketplace. The marketplace, essentially, is that shopping experience: I want to go in, find out where we have information, and then get access to it. It's the cherry on the cake; the cake is the governance layer you're building, and the marketplace is the ability to say, guess what, I want to go in and get access to it. But there are some key things I must know before I get access. Who's using it in the business? What are they using it for? Where does it sit in the organization? What is its purpose? This is what the marketplace shows me. It tells me two hundred and twenty-six people are using it for analytics, fifteen for marketing, five for regulatory reporting. And as the governance team builds this out, they can define the intended uses of that information. They can say: look, this is HR or sensitive information, so you can only use it for internal reporting or for analytics; you can't use it for marketing. So you can set the intentions of use. And tethered to this in the background are those policies we just talked about, things like those PII policies. So what that means is, after I say yes,
I like this product, I can see the consumers, the people in the organization who are using it, I can see the data quality, and I can see the policies in place. I can say: yep, I like this, I now want to check it out. Checking it out takes you through a three-step approval process. You say: I want to use this for analytics. I add a justification, "need for report", which is a terrible justification, by the way. And then a cost-center attribution: I want to attribute this to this part of the business, and this is where you can start to see which departments or domains in your organization are using that information, so you're curating the right assets instead of spending time saying, this is what we think we should curate, let's go curate it, only for the business to say: guys, we're only using five percent of this; instead of trying to curate a hundred assets, can you just spend time curating those five? So that's also about time management and effectiveness. And then: okay, these are the policies in place; I can't submit the order until I've agreed to those policies, and then I hit submit. That goes through a workflow process to those business stewards and technical stewards, who decide: can Rob, or can Reese, access that information, yes or no? This is also where we push those policy rules down to the source systems, so that when I go and look at that data, if I don't meet the right regulatory standards based on that policy, I can't view certain things, based maybe on my location, the content, the purpose of use, my job type, or my job level. Maybe if it's HR information, I can't see salaries.
Or maybe, if we're looking at customer information, I can't see the full date of birth, but I can see the year. So I can still perform analytics, but I'm not viewing that personal information, and I'm not breaching any of those privacy regulations. So that's where the marketplace stems from. And now there's the ability to harness CLAIRE GPT. We've gone and cataloged all of this — we've built, let's call it, the cake. Now we want to scale this, give it to everyone in the business to use, and make it simple for them to access that metadata. How do we do that? We live now in the world of ChatGPT, or GPTs in general. We want to provide users with that sandbox CLAIRE GPT experience that sits across your metadata governance layer. So this is where they can go in and write questions however they want, to ask the solution: hey, show me how the data quality looks. Show me the rules. Show me the policy. Show me the lineage. Right? This is how we now drive data discovery — from a simple search, in the way users want to prompt the engine. And from there, they can either deep dive back into the catalog, or they can be satisfied with the responses and say, that has solved my question. But this is the way you scale it to the business. Alright, team, I'm going to pause there. Rob, that's what I had for today, but happy to answer any questions as well. Awesome. Thank you very much. And we do have a question. As we ask it — again, we had a lot of great engagement on webinar one, and we'd certainly love to see more of that on this one, so ask as many questions as you've got. Here's our first question. This dashboard and UI looks awesome. It's clear you have put a lot of time into putting this together.
What would a dashboard or UI look like for someone just starting out who only has one or two dashboards or widgets? And what is a new starter with this product going to see? Yeah, great question. So what we do, right, is in that first step where we go in and scan our source systems — when you first log in, our CLAIRE AI engine will provide you with suggested dashboards and suggested widgets. And you can say: great, this is a great template to start with, yes, I like that; no, I don't like that; or, actually, I want to augment that. So our AI engine will actually provide you with out-of-the-box widgets and dashboards, and ways to curate them as well. And when you then decide, hey, I'm a new starter, I want to go and create my own, you can actually use the AI to create those dashboards. For instance, you saw in one of the widgets I had before, it said "policies and processes without owners". All I did was go into that CLAIRE engine and say, build me a widget of policies and processes without owners, and it populates that widget for you. So it's a nice, easy, streamlined way to build those widgets, but also to get a good baseline. Awesome. So I think part of the question, which is probably worth addressing more broadly, is: it's clear you have put, quote unquote, "a lot of time" into this. And I think what people need to understand is that when Informatica went private and rebuilt the whole platform as a SaaS product integrated with CLAIRE, quote unquote "a lot of time" is now a very different calculus than with other tools. Rhys, do you want to talk broadly about how AI is built into the workflows? Yeah. So our CLAIRE AI engine is built into the way we address our workflows as well. And what that means is, firstly, we have out-of-the-box workflows. Right?
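Under the hood, a widget like "policies and processes without owners" is essentially a filter over catalog metadata. A toy sketch, with invented asset fields (this is not how CLAIRE actually builds widgets, just an illustration of the idea):

```python
# Invented catalog structure: each asset has a type and, optionally, an owner.
# The widget simply lists governed assets of certain types with no owner set.

def assets_without_owners(assets, asset_types=("policy", "process")):
    """Return catalog assets of the given types that have no owner assigned."""
    return [a for a in assets
            if a["type"] in asset_types and not a.get("owner")]

catalog = [
    {"name": "Data Retention Policy", "type": "policy", "owner": None},
    {"name": "Customer Onboarding",   "type": "process", "owner": "J. Lee"},
    {"name": "PII Handling Process",  "type": "process"},
]

widget_rows = assets_without_owners(catalog)
print([a["name"] for a in widget_rows])
# ['Data Retention Policy', 'PII Handling Process']
```

The value of the AI layer is that a user asks for this in plain language rather than writing the query themselves.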
But, effectively, they're meant to be simple and easy, so users have that kind of automation in the way information is either delivered to them or curated into the catalog. Right? Now, what I mean by that is, when you go in and scan a source system, we do things like auto-generated classifications — Australian-specific classifications, even. We do things like relationship discovery. We do things like glossary association. We provide all of that through the AI engine. But then you would have stakeholders who might be assigned to either the source systems or those data domains. And whenever those recommendations come from the AI, workflows kick off for those stakeholders to approve or reject those classifications, those recommendations. Those workflows are really built around the stakeholders saying: do you accept this? Do you want to change it? Do you want to augment it? Or do you want to accept it automatically based on a threshold score? So there are a lot of levers there that we give you to play with. And the idea being — as we talked about with AI at the start of the presentation — everyone's confidence levels are different. Some people want to start off by taking them purely as recommendations; as they get more confident, they might want to use threshold scores and just auto-accept. So that's where the workflows and the automation piece come in. And as Rob said, over the last few years it's been about asking, well, how do we scale? Because Informatica has been in the governance game for a long time. We started years and years ago with our on-prem solution, EDC. And then we took the learnings from that solution — the insights, the customer feedback — and we modernized it. Right?
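That "confidence lever" can be sketched in a few lines: recommendations above a chosen threshold are auto-accepted, everything else lands in a steward's review queue. This is purely illustrative — the function and field names are invented, not Informatica's API:

```python
# A toy triage of AI classification recommendations by confidence score.
# Above the threshold: auto-accept. Below it: route to steward review.

def triage_recommendations(recommendations, auto_accept_threshold=0.9):
    """Split recommendations into auto-accepted ones and ones needing review."""
    accepted, review_queue = [], []
    for rec in recommendations:
        if rec["confidence"] >= auto_accept_threshold:
            accepted.append(rec)
        else:
            review_queue.append(rec)
    return accepted, review_queue

recs = [
    {"column": "cust_email", "classification": "Email Address",   "confidence": 0.97},
    {"column": "ref_code",   "classification": "Medicare Number", "confidence": 0.58},
]

accepted, queue = triage_recommendations(recs)
print(len(accepted), len(queue))  # 1 1
```

Lowering the threshold shifts more of the work to automation; raising it keeps humans in the loop — exactly the dial Rhys describes organizations turning as their confidence grows.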
We modernized it into the cloud governance solution, recognizing that governance, and the way we scale governance, has changed in the last twenty years. But in all honesty, so much change has happened in the last five years alone, with the speed and rapid growth in the use of AI. So that's where we've seen that change, that growth. And Informatica has made sure that every six months we scale the way AI is applied to how users are rapidly changing their patterns of usage. No, that's awesome. Last call for questions. Oh, looks like we've got another one, also from Josh: how would you use AI to create a data dictionary that business users can access and understand, starting only with access to the database? Great question. Yeah. So what we do, right, is when we create some business terms — firstly, when we talk governance, we always talk people, process, and tech. Now, what we do is go in and scan these databases, and we can infer the relevant business terms that should be attached to them. What we can also do, using our AI engine, is generate those business descriptions. So for instance, let's say you were in a government department — say a transport department — and you put in a business term, maybe "New South Wales trains" or "Victorian trains" or "Western Australian trains". Right? We can actually generate what that business description should look like, and we can even say, well, we think it should be attributed to this part of the business. Now, that's really cool, because if we have a blueprint, it's easier for us to action that.
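A much-simplified sketch of that blueprint idea — inferring draft glossary terms and descriptions from scanned column names, each with a confidence score. The patterns, terms, and scores here are invented for illustration; real inference uses far richer metadata than name matching:

```python
import re

# Hypothetical pattern -> (business term, draft description, confidence).
PATTERNS = [
    (r"dob|birth", ("Date of Birth", "The date on which the person was born.", 0.90)),
    (r"email",     ("Email Address", "The person's contact email address.",    0.85)),
    (r"postcode",  ("Postcode",      "The postal code of the person's address.", 0.80)),
]

def suggest_terms(columns):
    """Return draft glossary suggestions for each recognised column name."""
    suggestions = []
    for col in columns:
        for pattern, (term, description, confidence) in PATTERNS:
            if re.search(pattern, col.lower()):
                suggestions.append({"column": col, "term": term,
                                    "description": description,
                                    "confidence": confidence})
                break  # take the first matching pattern per column
    return suggestions

# CUST_DOB and CUST_EMAIL get draft terms; MISC_FLAG gets no suggestion.
print(suggest_terms(["CUST_DOB", "CUST_EMAIL", "MISC_FLAG"]))
```

The point of the sketch is the output shape: a draft term, a draft description, and a confidence score a steward can accept, edit, or reject — a starting point rather than a blank canvas.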
But if we have to start with a completely blank canvas, then we really don't know how to take the next step. So that's where the AI gives you that baseline and says, well, these are what we think some of those data dictionary recommendations could look like, based purely on the database itself. Right? Now, the user doesn't have to have access to the database. Your technical team would go in and scan the metadata from that source system — things like data types, column structures, table names — and we put that together as our inferences. Now, the inferences will give you recommendation scores that say, look, we think we're eighty percent confident here, ninety percent confident here, seventy percent confident here. And if it's something you're working on, you could choose to accept that low confidence and say, well, our goal here is to enrich it. Or you could say, no, we actually want a higher confidence level, so we're going to do this portion manually and this portion using the AI. Now, I'm not saying that AI solves all of that governance scalability. Right? But what it does is help you narrow down what can be done in an automated fashion and say, well, actually, guys, we can do sixty percent, and you only have to do forty percent now. And that's where the AI comes in with those recommendations. I think there's a follow-up question to the answer you just gave — this one's from an anonymous attendee: can you please elaborate more on what "business area" means? Yeah. So "business area" is a very generic term, to be fair, but, effectively, it could be your different data domains. It could be things like finance, operations, legal, HR. That could be a business area. Right?
Or if you work in an organization where maybe you want to slice it by states, you could say, well, this is the New South Wales part of our business. Or if you're a company with a lot of subsidiaries, you could define business areas by your subsidiaries as well. So it really is an open term, and you can define it how you see fit. Some people might like categories; some people like domains. It really depends on the organization and the terminology they want to adopt. And that's where Informatica is very flexible: you can change that terminology — you can change the phrase "business area" that I showed in that tabulated form to whatever matches your organization's language. Because, ultimately, we want this solution to fit into your organization and its culture, and that can be driven by the language you use as well. Awesome. The questions are now flowing. So another one — this is from Karen: can CDGC write classification and/or policy, quote unquote, "tags" to Snowflake sources? I guess, in other words, can it create metadata that can then be used in Snowflake or other data sources? A hundred percent. So we have writeback — that is, specifically to Karen's question, the functionality we have there. So when you go in and define classifications, you can then use writeback to push those back down to Snowflake itself. Same with, I think, Databricks and the like. You can push those classifications back — in Snowflake terminology, as tags — to Snowflake. Yes. Awesome. Thank you. I'll tick that one. It looks like for now that's all of the questions, so we might let you go a little bit early — ten minutes early. We do have a recording. Unfortunately, I stupidly forgot to press record until about one or two minutes into the presentation, but we're happy to share that.
That'll probably be done by Monday. If you do have more questions, please reach out; we're more than happy to schedule some time for you and your team to go into Informatica as deep as you would like. Otherwise, thank you so much, Rhys, for an excellent presentation today, and thank you to all of you for joining us. Remember, there is a third webinar coming, I believe, in early July — it's in the link that we shared, and we'll also be socializing it after the webinar. Great. Thanks, everybody. Have a great weekend. Cheers. Bye.