So let's get started. My name is Robert Curtis. I am the managing director for InterWorks for Asia Pacific. We are a data consultancy that focuses on data strategy, solutions, and support. I'll turn my camera on along with Daniel, and then I'll turn mine off as he's gonna be the primary presenter. We are presenting a three-part series with Informatica about how you can use AI to help drive data governance. This session today will be talking about how we simplify data quality using AI capabilities native to the Informatica platform. There is also an event on June twenty-seventh, as part of this webinar series, which is on data cataloging and metadata management. There's a final and third one on July eighth, which is on the intelligent data enterprise with AI-driven governance. So I'll share with you a link if you wanted to go and register for those other events. I'll put that into the chat window for you right now.

Great. So presenting today is Daniel Hein. Thank you, Daniel, for joining us. He is Informatica's chief architect. Whether you are talking about data management, hybrid cloud architecture, or AI, and, again, we're talking about all of the major platforms including Snowflake, Daniel is recognized as a global expert, so we are very lucky to have him today talking about data quality and leveraging AI to make it easier. Daniel, over to you.

Thank you, Rob, and thanks everyone for your time. Thanks for your kind words there. I hope I can pay back that beer for you later. Anyway, team, thanks for your time today. We're gonna talk about AI governance and data quality. I believe it's a hot topic and hopefully it's gonna be interesting and relevant for you as well. Okay? So let me share my screen. And while I present, if we have any questions, please feel free to put them in the chat or in the Q&A. We're gonna have time to address your questions or your inputs as well. Okay?

So the topic of today's session: we're gonna discuss how we can accelerate GenAI adoption with data quality and AI governance. You might be wondering about my accent before I continue. Yes, I'm based in Australia, but I was born in Brazil. People are always wondering where I'm from. Fifteen years in Australia and it's still a very strong accent. Anyway, let's continue and not digress.

So we know that AI will be part, if it's not already part, of our day-to-day work and even our personal lives. Right? There are lots of benefits in leveraging AI to automate processes and do more with less, and adoption is increasing quite fast. I have implemented AI agents on my laptop just to help me do my job faster or better. Things that used to take me hours to do, I can do now in five seconds. Anyway, it's definitely part of what we do and what we're gonna continue doing in the future.

I have some stats here that I would like to share, and I'm gonna come back to them later on as well. But I'm gonna focus on the bottom right here. Okay? It's where we see that data management is a major obstacle in scaling AI initiatives. This is from a survey that we ran earlier this year with six hundred CDOs of the biggest companies in the world, and data issues are considered the top obstacle preventing GenAI initiatives. Okay? So we're gonna talk more about that today and also discuss some examples.

So what are the key challenges? You might know that way better than me.
Today, we're not gonna focus on everything. We're gonna focus more on the quality, completeness, and readiness of data. Okay? We all know data privacy and protection concerns are, of course, very important. We have to consider regulation. Okay? We've got compliance requirements that have been implemented across the globe. We have the EU AI Act being introduced in Europe, which, in the same way as GDPR, will set a standard for AI regulation. We have laws being introduced in Singapore, in Japan, in Australia. So we're gonna need to understand that the bar is going very high, and ensure our AI models and our agents are leveraging accurate, bias-free, and private data for training and for good outcomes. Right?

So, what are the challenges to effectively govern data and scale our AI initiatives? As you might be aware, Informatica helps with all these challenges. The first one is data discovery and data movement, right, ingestion and integration in batch or real time, and being able to classify the data. So that's this challenge here: defining unknown data sources and types. We all know AI requires large volumes of data. Right? The key one that we are discussing today is improving low trust in data accuracy. Okay? We cannot rely on manual processes or code-based processes. We have to automate how we identify data anomalies, how we monitor and manage data quality, and also how we physically clean the data. We're gonna discuss that. And, of course, we must enable responsible data use. It's about implementing robust governance and data access controls. Okay?

Of course, we must act now or become irrelevant. Right? You know the disadvantages of not doing anything: talent drain, competitive lag. We have lots of surveys and research being done highlighting that the organizations that have already adopted AI are experiencing higher revenue, better customer experience, better employee retention. So, anyway, there are benefits all over the place. Right?

So let's talk about AI governance before I go through some demonstrations and further examples. This is how we make AI governance simple. Okay? It's about the inventory of all your data and your AI assets. It's important to understand that, the same way we have data products that we must govern, this will be required for AI. Your organization might have three AI models today, or five, or ten, or a couple of AI agents. Eventually, we're gonna have dozens, hundreds. How are we gonna govern that? Who are the stakeholders? The technical owners? The business owners? Which data was used? Who's consuming these models? Is the data clean or not? So there are lots of variables there, lots of points that we must consider. We must, of course, control access, not just to data but to these AI assets. Again, automate compliance and risk management. Observability is key, of data and of AI models or GenAI applications. Right? And, of course, delivery, not just of your AI outcomes, but of relevant, high-quality data for AI, and ensuring data is unbiased, consistent, and free from erroneous content.

That brings us to Informatica Intelligent Data Management Cloud. Okay? One key point here: you do not have to use all Informatica services. If you just want to use our data quality and observability solution, that's totally fine. Okay? But if you wanna expand, you wanna reduce fragmentation, you wanna simplify data management, the platform is available for you. As you know, it's a consumption-based model; you only pay for what you consume.
So you can start small and scale as per your business needs. You can prove the value. Okay?

So let's talk about data quality methodologies. I think that's why everyone is here today. Right? I am participating in the Gartner Data and AI Summit. I'm in Sydney now; I just stepped out of the event to present and talk to you. But this is what Gartner says. These are the key actions to improve data quality, and I've put a star next to where we can effectively help you. I'm not gonna read this out, but we're gonna see some examples of how Informatica helps you improve your data quality enterprise-wide. Okay?

Okay. How do we ensure data is accurate? First, we all know data quality is a business issue. Right? If I have poor customer data, I will provide poor customer experiences. I might send a package to the wrong address. I might call the wrong number. When they interact with me, I'm not gonna know the customer well. If I have bad product data, that can impact my communication, my suppliers, or how I list my products on my website. Anyway, data quality is a business issue. From a data analysis perspective, it's the same. If I have reports and dashboards with inaccurate data, it might take us to incorrect decisions. Right? So decision-making will be impacted as well. And for AI, it's exactly the same. If I don't have clean, accurate data for AI, obviously I'm also gonna have poor AI outcomes. Right?

So what are the consequences of not having fit-for-business-use data? I was talking about that just now. We might have financial losses, decreased productivity. I spoke about poor customer experience. I might not be able to be compliant, and I might miss business opportunities, like cross-sell and upsell opportunities, and so on.

What are the common causes of data quality issues? Right? I'm not gonna read this whole slide because you know it very well. But the key ones: redundancy of data, right, I have duplicate data. My data might not be standardized. Take countries: in one system the country is the full name; in another system, it's the ISO three code. Right? Some root causes: the company went through a merger, and now I have different sources of the truth. I have poor processes, like my data entry does not validate data in real time. I have a lack of standards or rules to curate data. So there are lots of challenges there. And, of course, data quality is a growth lever. Right?

So I'm gonna go to some demonstrations, but it's important to mention that a successful data quality program is not dependent only on technology and processes. We must bring people together as well, to understand that good data, clean data, will impact their work and their lives too: how they interact with customers, how they produce results, and so on.

Okay, let's see the solution. I'm gonna start with an analogy here. Think about a river. Okay? How do you clean the water of this river? Think about that. Of course, I have to sample the water and test it to see if I have nasty stuff there. Right? You're gonna see what that means. What else? How am I gonna clean the water, and what are the requirements? Am I gonna drink the water? Am I gonna use it to wash my car? Am I gonna use it for irrigation? Right? I have to compare our water to the requirements. We're gonna have to determine how we're gonna remove the garbage, the trash, the pollutants in that water. Those are the data quality rules that we're gonna have to implement.
And then, of course, I'm gonna have to physically clean the river, and I'm gonna keep monitoring. Hey, did I miss anything? Did I miss some big stuff or small stuff? That's when you are monitoring data quality. You are managing the data life cycle, and you can track if anything is happening along that lineage, along that data flow. And, of course, we're gonna keep sampling the water to find out if there's anything that was missed. Okay? And if you need to, you can modify the rules of how you clean the water depending on your findings.

And there's an example here. Right? Cleaning the water here, down the track in my data warehouse, might be a good idea, but the problem is being created over here. That's a key challenge I see organizations experiencing. And one more acknowledgment before I jump into the demos: profiling the data here, analyzing the data, cataloging it, that's good. But if something bad happens here and my report is over here, I have to continuously monitor the data and, if needed, clean up the data in multiple places. Okay?

So let's skip this. And I spoke about different requirements. Again, I might need to drink the water, or I might need the water to produce some chemicals, so it might not need to be that clean. We have to have the flexibility to address different requirements. Okay? And this is a consistent process and methodology. We're gonna profile the data first to identify issues. We're gonna set metrics and goals. We're gonna review the insights and the data. We're gonna then implement the rules, define dictionaries, build and test the rules, and later on implement more robust processes that are gonna allow me to clean the data effectively. Then, of course, we have to physically clean the data and monitor and manage our progress. Right?

So, first step: discovering data quality issues. I'm gonna skip this because I think I have gone through too many slides already. So let's see it in the solution. Okay? Let me click here; I don't want my session to expire while presenting. One more. Okay.

From a data profiling perspective, I used to do this using SQL or using Python. It used to take me hours to analyze a specific table like person, with multiple columns or attributes, and get all these statistics. Do I have nulls? What patterns do I have here? Minimum length, maximum length, minimum value, maximum value. Do I have blanks or not? This is automated. I can get this result in ten seconds. We can drill down to the attribute level. In this case, here I have a column called country. And what can we see here? Two data quality issues: my data is incomplete and my data is not standardized. The solution recommends data quality rules to you; that's why I have this icon here. And if I scroll down, oh, that looks better. Right? By the way, we are not changing any data. We're just profiling, discovering, testing. Okay?

You don't have to do this analysis manually, because we also provide data quality insights. Let's take a simple one here, gender. Data appears incomplete: the column includes one or more null, blank, or empty values. The length of the data values: the column has a high standard deviation. If you don't trust that insight, you can double-check. Let's go to gender. Oh yes, I have a deviation and my data is incomplete. Right? And it does not end here. We can create a data quality rule in one click, like I did here. This rule, in my case, would take me thirty minutes to an hour to create using SQL or Python code. So that's the rule that was created. Okay?
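For a concrete feel of what that automated profiling replaces, here is a minimal pandas sketch of the same kinds of statistics Daniel lists: nulls, blanks, distinct values, minimum and maximum lengths, and crude value patterns. The `person` DataFrame and its columns are hypothetical sample data, not taken from the demo, and the platform computes far more than this sketch does.

```python
import pandas as pd

def profile_column(series: pd.Series) -> dict:
    """Return basic profiling statistics for a single column."""
    as_text = series.dropna().astype(str)
    # Crude value pattern: letters -> 'X', digits -> '9'.
    patterns = (
        as_text.str.replace(r"[A-Za-z]", "X", regex=True)
               .str.replace(r"\d", "9", regex=True)
    )
    lengths = as_text.str.len()
    return {
        "null_count": int(series.isna().sum()),
        "blank_count": int((as_text.str.strip() == "").sum()),
        "distinct_values": int(series.nunique(dropna=True)),
        "min_length": int(lengths.min()) if not lengths.empty else 0,
        "max_length": int(lengths.max()) if not lengths.empty else 0,
        "top_patterns": patterns.value_counts().head(5).to_dict(),
        "top_values": series.value_counts(dropna=True).head(5).to_dict(),
    }

# Hypothetical sample data standing in for the "person" table mentioned in the demo.
person = pd.DataFrame({
    "country": ["Australia", "AUS", None, "Brazil", ""],
    "gender": ["F", "Female", "M", None, "male"],
})

for column in person.columns:
    print(column, profile_column(person[column]))
```

On this toy data, the output surfaces the same kinds of issues the demo calls out: incomplete gender values and non-standardized country values.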
And, of course, I can now test the data. I could type some information here, like male, female, and test. Okay? Another key aspect of data profiling is being able to compare data quality across time. And by the way, I'm using the UI, but you can do this with APIs. You can automate this. Okay? How's my data today compared to yesterday? Let's go to gender there again. I used to have zero null values; now I have two. I used to have two distinct values; now I have four. You can implement an exception management task, a workflow. What are you gonna do if there are exceptions? Okay?

And again, everything I'm showing here is available in the UI, through APIs, or you can export this data. Think about generating all this data manually. So here I have an Excel export, by the way. I have my columns, the statistics. I have the data quality rules and the exceptions. I have the value frequency. For AI or data science, this is amazing: I can very easily remove outliers, which is gonna help my feature engineering. How about this? Statistics: bottom five, top five, maximum length, minimum length, the patterns. It would take any of us at least several hours, if not a few days, to get this information. I got it in one minute. Okay? And data types and so on. Anyway, so that's data profiling. Okay?

Let me go back to our presentation. So that is the discovery of data issues. You can implement scorecards, and you can test data quality rules. Okay? These are the key dimensions that we see being leveraged by customers to monitor, to discover, and to clean data. But there are more; it depends on your business. Right? Accuracy, validity, completeness, consistency, uniqueness, and timeliness. Right? And that's what the demo just demonstrated.

Step two: define the rules. Okay? First, what is a rule? Very simply, it's a check to test if your data is good or bad. Okay? You can have multiple types of rules with different levels of importance. You can use dimensions to help create a broad picture of your DQ position. You can use dictionaries or reference tables to help you clean the data, and you should be able to test the rule before changing the data, in a very easy way. It should also promote reusability. You build a rule in one place, and it's gonna be used in your data pipelines. It can be used as an API at the point of entry of data; it can be used in an application. It can be used to monitor: I have a dashboard to monitor the quality. It can be used to profile data and test the data. So you build a rule here and it's gonna be used across the enterprise.

So let's see some rules. You might be wondering, Daniel, I wanna see some examples. Okay. Let's go to data quality. So here are examples of dictionary tables. Okay? You can create them, but we provide hundreds out of the box. You can see I can even update them if I want. I have a currency code dictionary, a country dictionary, time zones, Australian BSCP codes, Australian telephone prefixes. We have hundreds. Okay? We also have the option to create rules, such as this cleansing rule. I wanna format and clean up phone numbers. Okay? Look how simple this is. I have added steps to remove some strings, some characters, from my phone numbers. The second step was to remove double zeros, then to remove spaces, and then I can test here at the bottom. I have some sample data on the left and the results on the right. Okay? So very simple. Again, this rule I can now use as part of a data pipeline to physically clean the data. As simple as that.
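As a rough illustration of the phone-number cleansing rule just described, the sketch below applies the same three steps in plain Python: strip punctuation characters, drop a leading double zero, and remove spaces. The character set and sample numbers are assumptions for illustration; the actual platform rule may differ.

```python
import re

def cleanse_phone(raw: str | None) -> str:
    """Apply the three cleansing steps described in the demo."""
    if raw is None:
        return ""
    value = raw.strip()
    value = re.sub(r"[()\-./+]", "", value)  # step 1: remove noise characters
    value = re.sub(r"^00", "", value)        # step 2: drop a leading double zero
    value = value.replace(" ", "")           # step 3: remove spaces
    return value

# Hypothetical sample numbers, mirroring the left/right test panel in the demo.
for sample in ["(02) 9999-1234", "0061 2 9999 1234", "+61 2 9999.1234"]:
    print(sample, "->", cleanse_phone(sample))
```

The point of the platform version is that the same rule, once defined, is reusable in any pipeline rather than living inside one script like this.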
Another example: here's a rule to standardize country names. Sample data on the left, results on the right. And let's say we are using that rule in a Python script or in a data pipeline, and then the business comes to you and says, I don't want the ISO three code anymore, I want the full name of the country. You don't have to change every single pipeline. You come here once and change that rule: I don't want the ISO three code, I want the full name of the country. And as simple as that, every single pipeline will be updated. Okay?

You can also build more complex rules to validate multiple attributes or data elements, like this one, which I can test. And again, if my email is inconsistent, I should have an invalid contact. And that's what's happening here. Right?

You can parse data. I have run the test already, but let's say here at the bottom I wanna separate the user from the corporate domain, the global domain. It's doing a good job. That's my sample data on the left and the result on the right. How am I doing that? I'm using a regular expression. The good news is you don't have to build this from scratch. Look at this: hundreds of regular expressions to parse all kinds of data. Okay?

I'm gonna show one more, if you wanna deduplicate the data. Again, I have some sample data here, this one, and I wanna consolidate it. I did a test already. You can see here the clusters, and you can see the data. Row one and row three look like duplicates. Right? Look at the cluster ID and the cluster size. At the same time, rows two, four, and five also look like duplicates. Gene and L. Gene. It's doing a good job. Cluster size, cluster ID. The next step is the consolidation. By the way, here I was able to select what kind of data I want to deduplicate. Again, I'm a Microsoft Azure solution architect, I've got a certification, and I've worked with ADF a lot, for instance. What I do here in five minutes took me several hours using ADF. So I managed to consolidate here, to have the clusters. Now I wanna consolidate. I can choose my strategy; in my case, I chose field-based. For each column, I can choose my strategy. How cool is that? And then I have my clusters, and I'm gonna create my consolidated record. Okay? Think about implementing that manually. How long would it take any of us? It would take several hours, if not days, to implement a robust process. And you can see now the clusters and my master record here on top, my deduplicated record. And then, one more time, I can now leverage that rule anywhere as part of my data pipeline if I want. Okay?

One more important point: we're gonna provide you thousands of rules out of the box. You don't have to reinvent the wheel. Big companies around the world are using us, Informatica, for data quality, for data management. So the rules that we have built for them, you can leverage. Okay? And we provide these rules through bundles. If I search for quality, sorry, you have these bundles here with hundreds of assets that you can customize if you want. We are talking here about one or two years of work that you're gonna save. If I take one of these rules, this one here, I wanna parse and validate emails. This is the logic behind that. I have a mapplet, and within it I have two sub-mapplets. I wanna open this one here; that's the logic there. I have another one, email validation, and that's the logic to validate the email.
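To make the email parse-and-validate idea tangible, here is a deliberately simplified Python sketch: it splits the local part from the domain and flags anything that fails a basic format check, echoing the "invalid contact" result in the demo. The regular expression is a hypothetical stand-in; the bundled mapplet logic shown in the demo is far more thorough.

```python
import re

# Hypothetical pattern; real validation rules cover many more edge cases.
EMAIL_RE = re.compile(
    r"^(?P<user>[A-Za-z0-9._%+-]+)@(?P<domain>[A-Za-z0-9.-]+\.[A-Za-z]{2,})$"
)

def parse_email(email: str) -> dict:
    """Split an email into user/domain, or flag it as an invalid contact."""
    match = EMAIL_RE.match((email or "").strip())
    if not match:
        return {"user": None, "domain": None, "status": "invalid contact"}
    return {"user": match["user"], "domain": match["domain"], "status": "valid"}

for sample in ["jane.doe@example.com", "not-an-email", "bob@corp"]:
    print(sample, "->", parse_email(sample))
```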
That email parsing rule is just one single rule in one of the bundles we provide, and this one alone would take any of us many hours to build. Okay? So you're gonna be able to save time, do more with less, and really scale how you govern and curate your data. Okay? So that's step two: define the rules, test them, and then finally you're gonna be able to implement them in your pipelines. That's an example of what you can do: validate, standardize, do the deduplication, the parsing, and enrichment, like address verification and phone number verification. All of that we can do as well. Okay?

So the next step is to physically clean the data. Okay? As I was sharing before, it's about adding the rules to your Jupyter notebooks, calling them as an API, or leveraging them in your Informatica data mappings or data pipelines. Okay? And, of course, hand coding is a big challenge. Right? Imagine doing all of that manually. I just explained how we can build those rules very easily, in a few seconds. Okay? So that's a comparison of doing this using code versus using Informatica. Okay? And by the way, you can also define exception tasks. If anything is incorrect or inconsistent, what are you gonna do? We have lots of flexibility there as well. Okay?

So the last step is the measurement, the monitoring. Right? It's your scorecards. Okay? I have some examples here on the screen, but I wanna show the solution. So let's see a few examples. Okay, here you go. It's gonna actually start in another place. We can classify your data automatically. We have a customer that needed to map eighteen thousand columns in a database across six thousand glossary terms. They told us it was gonna take them between two and three months of work with a few people. We did the same in eight minutes. So you can see here a table, a dataset. I have the columns, and we have identified this column here as email. And by the way, the column name could be anything; it doesn't matter. We are able to map it and classify it as email. And now, with one click, I can find out every place across the business where I have email information: in files, in columns, in databases or data warehouses, in reports. I'm gonna have the policies allowing me to dynamically mask, tokenize, and secure data.

But going back to data quality, look at this. I have four data quality rules associated with this business term. What does that mean? Every time we identify, automagically, that a data asset, an attribute, is email, we're gonna be checking the data quality. And this is gonna give you the capability to manage data quality as data flows from transactional systems to analytical systems or through AI. So here I have attribute-level data lineage. Let me make this a bit bigger. Look how cool this is. I can attach overlays. I'm gonna attach a business term, aggregated data quality, whether the data is sensitive or not, and data observability. And remember, because we are able to automatically classify the data, and that business term has the data quality rules, this is possible. Now I can see, for instance, that city here is forty-nine percent clean; however, here it is twelve percent. Something is happening in this data pipeline, and, of course, I could act on that. At the same time, I can see the email is categorized as sensitive information. Okay? I have the business...

Daniel? Yes, please. It appears your audio has stopped, so just letting you know. Okay, that's weird. Let me try again. Not yet. Can you hear me now? Hello? Try again, please. Hello? Can you hear me?
Oh, other people are telling me it's fine, so it must be on my end. Ignore that, Daniel. Please continue. Sorry. No worries. Okay. No problem. Okay. Thanks, Tim, and thanks, Rob.

So, yes, you're gonna be able to monitor the quality across the enterprise. And for that specific asset, I now have consolidated information about how clean my data is. And this information can be exposed to data consumers through our data marketplace solution. Okay? You're gonna have data products there, and as people browse through data products, they can understand everything about the data product and also whether the data is clean or not. Okay? So if I just navigate to a category here, sales, I have data collections, and then you're gonna be able to see the data quality of that specific data collection. Okay? And I can drill down into that as well. But again, as we classify the data and the rules are mapped to the classification, eventually you're gonna have something like this: a data quality and observability dashboard. Okay? And by the way, you can drill down there. Look at this. Think about it: with one click, you can understand what's impacting accuracy across the business. And if I click here, I'm gonna have hundreds of data quality rules mapped to data elements, to columns, to fields. All of this was done automatically. Okay? And, of course, because you have all these scorecards and metrics, you can implement a curation dashboard. Hey, I wanna focus only on the data elements that have poor data quality. And then open ServiceNow cases or send emails. You can do lots of things there, but you're gonna be able to monitor and act as well. Okay? And here's an example of a curation dashboard. On the top right are critical data elements with a low data quality score. Okay? So, some examples there.

Okay, let's go back to our presentation. So that's how we clean data. Okay? You're gonna profile the data, do the discovery and the analysis, establish metrics and define targets. You then design and implement the rules, deploy the rules, review exceptions, and then you monitor. Okay? It looks complex, but it's very simple when you have the right tools. Right?

Here's an example of what's possible. Okay? We have customers really driving business value with accurate data. One example here is an insurance customer. Let's say you have a chatbot for your customers and they ask something like this. Because the organization knows the customer very well, and this data is a hundred percent clean, this is the kind of response that a customer would get. Hey, Alex. It's very tailored, contextual, and accurate. Based on your membership and contribution history for your superannuation, you can get as much as this. Premiums are deducted from your super. Anyway, it's a contextual, tailored response.

And this is a real story from a bank. We have bank customers approving mortgages or loans, with final approvals in less than five minutes. Let's say you go to a bank portal and you ask something like this. Again, the bank knows you very well: credit score, assets, liabilities, income, etcetera, information about asset class, previous loan applications, and so on. All this orchestration Informatica can do for you as well: the ingestion of data into your vector DB, the integration with language models. But today we're focusing on good data, right, on the curation of the data. So that's what a customer gets. Hey, Gio.
Based on your credit score, your account balances, your spend history, you can get up to this. Here's a link for you to submit documents for final approval. Even the document assessment is done with AI and robust data quality processes. And that's how banks can approve mortgages in five minutes. Okay? And this is an example for a bank: loan approvals in three minutes instead of four to six weeks.

Okay, team, we have twelve minutes. I'm gonna go quickly through this last demonstration because I wanna get to your questions. One thing you can do as well is enable people who are non-technical, lots of people, by the way, to not just find and analyze data, but also exploit data. Okay? Everything I showed you today can be accessed through a natural language interface. And by the way, Gartner is saying this. That's quite scary in my opinion. Right? But it's true: you're gonna be able to do way more with less. Okay?

And this is a solution that we have called CLAIRE GPT that supports business users to ask business questions, analysts to understand the lineage and to check if there are anomalies or outliers in a specific report or dataset, and data engineers to find data or even create data pipelines. I'm using these slides, but I can do this live. Okay? Just ask Rob and me; we can do a live demo for you as well. If I wanna explore data, it gives me the information. Let's say I'm interested in this one here: tell me more about retail customers. It gives me the key attributes, the glossary terms, the data classification. Show me which glossary terms are associated; I have two. How about the data quality of that data asset? How about the profiling of the key columns? All of that with natural language. What's the lineage? I have the high-level lineage here, but I can drill down if I want. What's the impact of adding a column upstream, downstream, the overall ecosystem impact? And going even further, which tables can I join? I can further explore the table, and I can ask business questions. I can combine the data. I can start asking business questions. Okay? How many customers do I have? How many customers in Paris? And it does not end here. Okay? It knows where to go, which tables to explore. If I start talking about products, as I'm doing here, it automatically goes to different datasets. But if I do not trust the response, I can analyze the code. That's the SQL. Okay. So: create a mapping for the above. And it creates a data pipeline for me. Okay? As a data engineer, some people get concerned: am I gonna lose my job? No, you're gonna be way more productive, but you still have to validate the logic here, the sources, the logic in the middle. You have to save, you have to run, you're gonna have to deploy in dev, test, production. So you're gonna be more productive. Okay? You're gonna enable everyone, because every organization should be data driven, to create usable data faster and, of course, be way more productive, for anyone in the business. Okay?

Here are the takeaways: how we can accelerate AI adoption. Okay? We can help reduce risk by identifying issues early. We can accelerate development with low-code, no-code data profiling, cleansing, standardization, and enrichment. And we're gonna help you scale and reuse. Okay? And I think this is my last slide. You're gonna be using the best solution that's out there. In data and analytics governance platforms, we are number one. In data quality solutions, we have been number one for more than ten years. So we are best of breed.
If you're technical, you're gonna work with state-of-the-art technology. Okay? And, again, that's the Informatica platform. We focused today here, and a little bit here and here, but we have a broad range of services allowing you to manage data better. So thank you very much, team. I'm gonna see if we have any questions. Let me stop sharing.

Awesome. Thank you, Daniel. So if you have any questions, you can put them directly into the chat, or there's a Q&A down in the Zoom bar. We have five minutes or so. I would say, let's see if we can challenge Daniel. He's been doing this for a very long time, and he's probably heard a lot of different questions, so let's try to give him a real difficult one. While we are waiting for folks, I will just remind you again that this is part of a series. That is the link I just put into the chat if you wanted to join us on the twenty-seventh, which is a week from this Friday. And then, about two weeks or so after that, on July eighth, we'll be covering more topics around how AI is accelerating and augmenting your capabilities with data governance, which has always been the big challenge with these tools. There's functionality, but it requires a lot of human intervention, which means these tools never get used. So the great thing about what Informatica has done is they've rebuilt the entire platform from scratch to be cloud native and to integrate CLAIRE, the AI engine, into all of the different features, so you have a single workflow that flows through the entire system.

Oop, we have a question. Alright, I'll read it to you, Daniel, if you wanna respond. Can CLAIRE AI be used for an Azure SQL Server DB?

Yes, absolutely. What we're gonna do is just scan your Azure SQL database. As I said before, we can classify the data automatically, like, in five or ten minutes, and then you can start interacting with it using natural language. And we do not store your data. So when we ask questions, CLAIRE is analyzing my question, my semantics, the business terms I'm using. It maps them to the classification, to the metadata that we have after having scanned your Azure SQL, and it creates a SQL query, pushes it to your database, gets the result, and brings it to the screen. So we do not store your data. So what's the bottom line there? We can work with any database, any data lake, any data warehouse, and so on. Good question. Thank you.

Couple more questions. There are a lot of questions about this presentation, and we did record this video. It will go on the interworks dot com website. When it is posted, we will email everyone to let you know, in case you wanna rewatch it; there's a lot of great content you might wanna share with some colleagues. But we do have another question for you. How can you surface the data quality, like confidence in the answers and report metrics next to the answers and numbers, not like a DQ dashboard on the side? I'm not sure what that last bit means, but I'll let you answer.

Yes, I'm just wondering. Look, we have many ways to surface the data quality. One is using, for instance, a browser plug-in. Let me share my screen very quickly. We have a couple of minutes here; we should have time. Let me know if you can see my screen. Yes. Yeah. Okay. So I'm just gonna log in here. Let's say I am in a Power BI report or a Tableau report or Qlik, doesn't matter. Okay? Or I'm exploring a dataset. Why am I not able to log in? Let's try an incognito window and see. Okay, here we go. It should be working.
I just don't know why that wasn't working before. Okay. So, this interface here. Let's say I have my report over here, and in the report the dataset's name is customers. I'm gonna highlight that. Look at this browser plug-in. It's gonna give me all the information about customer: whether it's a metric, whether it's an attribute. Let's say customer code. I select that, and it's gonna give me the glossary terms, it's gonna give me the metrics, the related datasets about customer code, the data quality, the tables, everything. So you can surface that information in multiple ways: as I'm showing here, through a plug-in that runs in your browser; you can use our UI to consume the information; you can export the data and consume it anywhere; or you can use APIs as well. So there are many ways for you to interact with the metrics and information that we have in Informatica. I hope that answers your question, but if not, please let me know.

Yeah. I think we can probably squeeze in one more question, Daniel. Great presentation with the data quality checks. How complex can the DQ be? What if I have a check where the sum of sub accounts needs to be equal to a holding account, only for certain assets? Can I incorporate some Python code as a DQ check?

Look, we can call Python scripts in our data pipeline interface, in the data integration solution. That's totally fine. As part of our data quality checks, you shouldn't need Python code. I haven't seen any need for that, but I could be wrong. Okay? But we can implement very complex rules. I don't know if you remember when I showed the mapplets. A mapplet is a reusable data quality transformation. It can have multiple sources, you can have multiple calculations or verifications happening, different rules, and you can aggregate data before you check the quality. So we can absolutely do that using mapplets and the rules that you're gonna create or customize, or you're gonna leverage the ones we provide out of the box. So, yeah, it's totally fine to implement very complex rules like the one you have described. Python code, I don't think you're gonna need it, but if you wanna use Python code, no problem, we can interact with it fine. Okay? Good question. Thanks, Kimesh. Thanks for the feedback as well.

Awesome. We are right on time, so we might stop it there. Again, as I mentioned, we'll send you an email with the recording, and I think Daniel said you'll share the deck. Is that right? No problem. Yes. Awesome. So thanks again, Daniel, and please join us for future webinar events. Thank you, everybody. Bye. Thanks, Tim. Hope you have a good day. Bye bye.
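As an addendum to that last question, here is a rough sketch of how that kind of cross-record check, sub-account balances summing to the holding-account balance for selected asset classes only, can be expressed. The DataFrame, column names, asset classes, and tolerance are all hypothetical, purely to illustrate the logic; inside the platform this would be built as a rule or mapplet rather than Python.

```python
import pandas as pd

# Hypothetical account data: one holding row plus its sub-account rows per group.
accounts = pd.DataFrame({
    "holding_account": ["H1", "H1", "H1", "H2", "H2"],
    "account_type":    ["holding", "sub", "sub", "holding", "sub"],
    "asset_class":     ["equity", "equity", "equity", "bond", "bond"],
    "balance":         [100.0, 60.0, 40.0, 50.0, 45.0],
})

CHECKED_ASSET_CLASSES = {"equity", "bond"}  # only these asset classes are validated

def holding_balance_check(df: pd.DataFrame, tolerance: float = 0.01) -> pd.DataFrame:
    """Flag holding accounts whose sub-account balances do not sum to the holding balance."""
    scoped = df[df["asset_class"].isin(CHECKED_ASSET_CLASSES)]
    holdings = scoped[scoped["account_type"] == "holding"].set_index("holding_account")["balance"]
    sub_totals = scoped[scoped["account_type"] == "sub"].groupby("holding_account")["balance"].sum()
    result = pd.DataFrame({"holding_balance": holdings, "sub_total": sub_totals}).fillna(0.0)
    result["passes"] = (result["holding_balance"] - result["sub_total"]).abs() <= tolerance
    return result

print(holding_balance_check(accounts))  # H1 passes (60 + 40 = 100); H2 fails (45 != 50)
```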