Welcome to our webinar, Go Dataiku. We will be starting shortly, in about five minutes. In the meantime, if you need a coffee or a tea, there's plenty of time. Otherwise, we have some of our products showcased on the right-hand side: Assist, Curator, and ServerCare. Go ahead and check out those links for more information, but we will start shortly.

Thank you all for joining. We will start five minutes past the hour. I've popped some links into the chat. InterWorks brings three great products to you: Assist, an on-demand support platform where you can access our experts; Curator, which brings everything into one place, dashboards, data, all of that, so check out Curator by InterWorks; and lastly, if you need someone to help you manage your server, ServerCare covers server management, server infrastructure, and everything related to your Tableau Server, so check out ServerCare by InterWorks as well. We will start shortly, in about two minutes, so thank you all for joining today. If you'd like to make a tea or coffee, this would be the perfect time; we still have two minutes to go. Thank you.

I can see a raised hand already. If you have any questions, just pop them into the chat, which is at the bottom of your Zoom controls. I also want to take this opportunity to distinguish two important functions in Zoom: you have the chat option, and another option called Q&A. If you have questions related to Dataiku or anything specific and technical, pop those into the Q&A so we can actually answer them, and I can help our presenter catch those questions and give you the answers. If you want to share general commentary, how you are feeling today or anything generic, pop that into the chat. So two options: Q&A for questions, chat for your general comments and exclamations.

I think we are at time now, so let's get started. Perfect. Next slide, please. Before we jump into today's content, I want to take a few minutes to introduce InterWorks. Maybe this is your first webinar with us, or maybe you are returning; either way, welcome, and I am very glad to see you all here. You might be wondering who InterWorks is. We do a lot of things, and sometimes it's hard to explain, but put simply, we specialize in data strategy. If you work in analytics, you know the challenges of an ever-changing tech landscape and the pressure of keeping up with the demand for insights needed to drive change within an organization. This is where we come in: our specialty is building the best data strategies alongside you and being your trusted advisor when you need it. Everything we do is backed by our people; we're constantly learning, and we always want to share our learnings with all of you. Next slide, please. Beyond our mission and our people, we can also help you navigate the right tools to align with your goals.
Some of our partners are on the screen right now, and if you're looking for more resources on data analytics or any of the technologies we discuss today, be sure to visit the InterWorks blog. It's world famous and a great knowledge base for anyone in your organisation who works with data. A few reminders for today: we hold these webinars every month, and we value your feedback because we are constantly trying to put out the best content and delivery based on the needs of our customer community. As mentioned before, today's webinar will be recorded, and in a few days we'll send out an email with access to the replay; this will only be available to people who have registered. If you want to access previous monthly InterWorks webinars, you will find that catalogue on our website. Finally, one request for today's presentation, which I mentioned at the beginning of the session: we will take questions using the Q&A function at the bottom of your Zoom controls. We'll mostly take these questions towards the end of the session because there's a lot to cover, and you can use the chat function for any general commentary. Next slide, please.

It's time to meet our presenter for today. My name is Carol, and I am an analytics consultant based out of Melbourne. I will be your emcee for today. Our presenter is Azucena Coronel. She's a data architect with InterWorks. I will let Azucena introduce herself. Go for it.

Hi everyone, thanks for joining today. I'm Azucena and I'm based in Sydney. I am a data architect at InterWorks. I have been here almost three years; a lot of fun, good stuff, good stories, and it's great that you are joining us for this webinar today.

Awesome. Let's get into what we will be covering today. There will be a short introduction to what advanced analytics is, to set the landscape, and to what exactly Dataiku is. Then we get into the fun stuff, where we look at data connections, explore, do a bit of data analysis and prep, and train an ML model. It will be fairly intense, so it's important that we are all tuned in. Lastly, we'll score our data as well. On the next slide we'll see what we require for today's session. Even if you only watch the replay, we need to make sure everyone has the trial version of Dataiku; I will pop that link into the chat right now so you can go and get a trial. The other thing is that we will be using a dataset called the Scooby-Doo dataset, which is available on Kaggle, and I will pop that link into the chat as well. So two things you need today: an online trial of Dataiku, the free edition, up and running on your machine, and the Scooby-Doo dataset from Kaggle, which we'll use to go through the Dataiku essentials. I've popped in both links; use them if you'd like to follow along with today's session. Over to you, Azu, without any further delay. Thanks.
Thanks very much, Carol. Let's get started. First of all, let's set the landscape a little and talk about what advanced analytics is and why it is important. Advanced analytics is the autonomous or semi-autonomous examination of data or content using sophisticated techniques and tools, typically beyond those of traditional business intelligence, to discover deeper insights, make predictions, or generate recommendations. And why is it important? It helps decision making and boosts competitive advantage; using advanced analytics can be the difference between keeping up with the competition or falling further behind.

On the screen you can see a few logos, and I'll give some examples of how these companies use advanced analytics. Nike uses predictive analysis to forecast consumer demand on a hyper-local level, which helps them optimize inventory and develop more targeted campaigns. Car dealers use regression analysis to forecast the price of a used car given its mileage, brand, and other variables and conditions; that's a very typical regression example of advanced analytics. Airlines, which handle a huge amount of data, use time series forecasting models to identify peak travel times, anticipate flight volumes, and schedule flights accordingly; no one likes to get stuck at the airport, and there are a lot of variables involved, so time series forecasting is a very good method here. Another good example is retail, which uses a lot of clustering for upselling and cross-channel marketing. You might have been browsing THE ICONIC or Amazon and seen those suggestions made to you: cross-selling and upselling. I like that dress; actually, it goes well with these shoes; okay, let's get them, why not? And last but not least, manufacturing uses advanced analytics for predictive maintenance and condition monitoring. We all know it is more expensive to have the line stopped for emergency maintenance than to predict and plan for it and keep manufacturing running correctly.

Let's also make a distinction between artificial intelligence and machine learning. Artificial intelligence refers to the constellation of theories, technologies, and research that surrounds the simulation of human intelligence processes by computer algorithms. For example, computer vision: nowadays we hear a lot about computer vision being used in medical scans to detect tumors or other anomalies. Natural language processing is another example; good illustrations are chatbots, or spam filtering, which detects words that suggest an email is spam or not.
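To make the car-dealer example above concrete, here is a minimal sketch, not part of the original walkthrough, of a regression model predicting a used-car price from mileage and brand; the column names and data are invented purely for illustration.

```python
# Hypothetical sketch: predicting used-car prices with a simple regression model.
# Column names and data are invented for illustration only.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline

cars = pd.DataFrame({
    "mileage_km": [15000, 80000, 120000, 45000, 200000],
    "brand": ["Toyota", "Ford", "Toyota", "BMW", "Ford"],
    "price": [28000, 14000, 11000, 35000, 6000],
})

# One-hot encode the categorical brand, pass mileage through, then fit a linear model.
model = Pipeline([
    ("prep", ColumnTransformer(
        [("brand", OneHotEncoder(handle_unknown="ignore"), ["brand"])],
        remainder="passthrough")),
    ("reg", LinearRegression()),
])
model.fit(cars[["mileage_km", "brand"]], cars["price"])

# Predict the price of an unseen car.
print(model.predict(pd.DataFrame({"mileage_km": [60000], "brand": ["Toyota"]})))
```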
And finally we have machine learning, which is the science of getting computers to learn from experience and perform tasks automatically. Why would we want that? To uncover patterns in market research; remember, we have a lot of data, and what is better than a computer to go through all of it efficiently? We can also use it to flag errors in transactions, to personalize a shopping experience based on browsing history, and to signal anomalies in medical research, as we just mentioned with computer vision. In other words, machine learning typically works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data. That's where machine learning gets its power.

I want to take a moment here to briefly talk about the current tool landscape we use at InterWorks. This is our reference architecture and the tools we generally use in each layer; we have partnerships with best-in-breed technology partners. On the left-hand side you see all the different sources of data you can imagine: APIs, different types of databases, file-based structures, unstructured data, sensor data, Internet of Things data. All the data lives here. In the middle we have our data warehousing and ELT tools. We want to put the data in Snowflake, for example, as the data warehouse, so it's all in one single place and we can use it in a clean and organized manner. For extraction we have partners such as Fivetran and Matillion. The integration layer can sit in any of the clouds, AWS, Azure, or GCP, and loading uses the same tools as extraction, Fivetran and Matillion. And finally we have Snowflake, where we build the data warehouse. Once our data is clean and ready to use, on the right-hand side we have the analytics platform: technology partners to make the best use of the data and communicate it within the organization. We have Tableau and ThoughtSpot here, and today we are going to talk about Dataiku, which, on top of being very good at machine learning, is also very good for local prep and analysis. For your data prep it can help with cleansing, adding formulas and business rules, new measures, derived fields, renaming columns, all those small touches towards the end of data preparation. It can also help you enrich with sources not stored in your data warehouse; for example, if you have all your sales data ready but want to add a forecast that is very specific to you, you can use Dataiku to consume that, join it as appropriate, and then use it in a dashboard or in your prediction models. It also helps with light reshaping of data: joining, pivoting, filtering, removing unrequired columns and rows, and so on. So we have Dataiku here for local prep and analysis. And finally, we have the area of machine learning and artificial intelligence.
And again, best in class, the Taiko, can build there, not just AutoML models that we are going to talk about this later in this presentation, but also it has great capability to if you have a data scientist that is really into coding and they have already developed their models in Python or R, you can use them in Dataiku as well in a single platform to have them all readily available. So let's get rid of these drawings and talk now a little bit more that we set in that reference architecture where the taiko fits. Let's talk a little bit more deeper of all the capabilities that it has. So the Taiko is a one stop solution for design, deployment, and management of all your, artificial intelligence applications. So it has, for example, here we can see the six top capabilities, data preparation, and we talked a little bit about that. It has, over ninety different built in data processors to help you with tasks such as binning, concatenation, date conversions, etcetera. So it's ready there, drag and drop, and then you connect your data and use them. When a processor is not available, you also have a formula that it is very similar to Excel. So in that way, it's very easy to any level of knowledge. It's very easy to just use the TAICO in that sense for data preparation. We also have the capability of visualization. So the Tyco's visualization capabilities help you accelerate your exploratory data analysis, where you can create quick visual analysis of columns, including the distribution of values, top values, outliers, invalid, and overall statistics. So it's also very important to be fast in an exploratory phase, and Dataiku helps us with this. What about showing your work and explaining your data well? Dataiku also have charts and graphs that makes you easy to use visualizations to accomplish this. It's out of the box dashboarding help you with this. On top of this, Dataiku also provides statistical analysis, for example, univariate and bivariate analysis. So it's everything there in the platform, readily available for you to use. And let's talk about the machine learning capabilities. So Dataiku provides an AutoML capability to get you started. It also helps you with feature engineering to automatically fill missing values and encode categorical data. For example, no more coding in Python, being very specific and iterating a lot. You can do it easily in the Tyco. On top of that, the Taiko provides notebooks for code based experimentation. And as I was just commenting before, if you have your data scientists which have developed a lot of models and who likes to work in notebooks, that Tyco provides the environment to develop notebooks using Python, R, and Scala. And this is based on Jupyter. So the data scientists out there will be very familiar with Jupyter notebooks. It really doesn't stay just with creating something for a product to be successful into production and the whole life cycle. For that, you need DataOps and MLOps. So this is the second part of the capabilities that I have here. With DataOps, that I could provide data quality checks to automatically assess that your flows run within expected timeframes and with expected results. So, for example, operating artificial intelligence projects require repetitive tasks like loading and processing your data, making sure it's clean, making sure it's ready. So the Tyco has these scenarios and triggers to allow you automate all this by scheduling, periodic executions, or triggers based on condition. 
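As a rough illustration of that automation idea, here is a minimal sketch using the public dataikuapi Python client to trigger an existing scenario from outside DSS; the host, API key, project key, and scenario id are placeholders, and the scenario itself is assumed to already exist in your instance.

```python
# Hypothetical sketch: triggering an existing Dataiku scenario from a script.
# Host, API key, project key, and scenario id are placeholders for illustration.
import dataikuapi

client = dataikuapi.DSSClient("http://localhost:11200", "YOUR_API_KEY")
project = client.get_project("SCOOBYDOO")

# Run the scenario that rebuilds the prepared dataset and block until it completes.
scenario = project.get_scenario("rebuild_scoobydoo_prepared")
scenario.run_and_wait()
print("Scenario run finished")
```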
So you have everything ready for your data to be processed. On the machine learning operations side, with Dataiku's Unified Deployer, deploying projects to production for batch and real-time scoring is easy. You have, again in one single platform, the ability to deploy, to manage between environments, and to monitor, because monitoring is very important in machine learning: you want to know your model is not drifting, that it is still accurate and providing good predictions. So you have the MLOps abilities there.

And finally, towards the right, we have the analytics apps. Within Dataiku it's possible to create different analytics apps. For example, Dataiku has a what-if analysis that allows data scientists and analysts to check different input scenarios and publish the what-if analysis for business users. I will show you later, but basically, if you want to play with the different variables, it is very easy to see how they affect the output with that what-if scenario. With Dataiku apps, data scientists and business analysts can easily create apps with a few clicks and publish a project, including the app, to production. With this, business users can easily interact, because it's important not to keep all the knowledge of what's happening with the data science and the datasets within the team; it's important to show the business how this can be used. Finally, Dataiku supports various leading web app frameworks, for example Dash, Bokeh, R Shiny, JavaScript, and more, to allow for more ways to share your data and applications. So these are all the Dataiku capabilities, all in one single platform. Let's keep going.

Okay, we are ready to get hands-on with the tool. I hope that by this time, if you are using a local instance of Dataiku, you have it open, and if you have created your Dataiku free trial, you are connected and ready to go, because we are going to get started. Today we are going to explore the data connections: I'm going to show you several options, and we are going to connect to the Scooby-Doo CSV. We are going to do some exploratory data analysis, because before we even start preparing the data, we need to understand what we have available, what we need for our model to work, and which steps we need to get there; Dataiku offers visual tools to make this step very easy, as we will see. We are also going to do the data preparation itself: after we have identified the different steps needed to use our data and build our ML model, we will prepare the data in the form we require. Once the data is prepared, we are going to actually create our model, which is going to be a very interesting part. And it doesn't end with just creating the model: we want to see how it is applied and used and what happens to the data when we push it through, so we are going to score some data from our Scooby-Doo dataset as well. Without further ado, let's start talking about data connections. Dataiku provides connectors to over twenty-five leading data sources, on premise and in the cloud, for example Amazon S3, Azure Blob Storage, Google Cloud Storage, Snowflake, SQL databases, NoSQL databases, HDFS, and more.
So we have different possibilities to connect to different data sources. Let's go to the first exercise. In this first exercise we are going to explore the data connection options inside Dataiku, and then we are going to connect to the Scooby-Doo dataset that you downloaded with the links Carol provided and that we sent by email earlier. First of all, go to your Dataiku instance. If you are using a local instance, you can access it on localhost at port 11200. The initial user and password are admin / admin, so you can access it with that.

This is the very first screen you will see in Dataiku, the DSS homepage, and it has several sections: projects here, and further down workspaces, applications, project folders, dashboards, and wikis. All your objects can be accessed from this initial homepage. We also have this New Project button, and that's what we are going to do first: click New Project and select Blank project. This is interesting because Dataiku offers a very useful Dataiku Academy for people who want to get started, and several Dataiku tutorials are already there in the instance. So if you fall in love with Dataiku in this session and want to know more, you can always start that academy and get your hands dirty in the tool. For now, let's create a blank project and call it ScoobyDoo; in my case I'm calling it ScoobyDoo2, because I already did a little project with the name ScoobyDoo. I put the name here, it generates the project key, and I click Create.

This is our project screen, and here we have different sections as well. First the summary, where you can say a bit more about what your project is about so that people understand what you are trying to achieve. You have a summary of the different objects you have: datasets, recipes, and models. You have a summary of the notebooks and analyses you have in the lab, and you also have your objects: dashboards, wiki, and tasks. I quite like this capability: you can keep your list of to-dos here for the project. Say you are collaborating with another two people in your team; you can list the tasks, assign names, and start working collaboratively, seeing what everybody is up to within the project.

Next, notice the black ribbon at the very top. From it you have access to the different types of functionality: the flow, the datasets, recipes, and so on; the different types of analysis; the notebooks for those who want to experiment with code; web apps, libraries, jobs, and scenarios for all that automation; and also the wiki, dashboards, and insights. All that functionality is accessible from this black top ribbon. For this demo, we are going to connect to one single CSV, the Scooby-Doo dataset you downloaded. So let's get started.
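If you prefer scripting to clicking, the same project creation can also be done with the dataikuapi client; a minimal sketch, assuming a local DSS on port 11200 and an API key you have generated, with the key and project names as placeholders.

```python
# Hypothetical sketch: creating the blank project programmatically instead of via the UI.
# The API key and project key/name are placeholders.
import dataikuapi

client = dataikuapi.DSSClient("http://localhost:11200", "YOUR_API_KEY")

# Create an empty project with a key, a display name, and an owner.
project = client.create_project("SCOOBYDOO", "ScoobyDoo", "admin")

# List projects to confirm it exists.
print([p["projectKey"] for p in client.list_projects()])
```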
First of all, go again to Flow in the top black ribbon. Once you are in the flow, everything is empty because we haven't created anything yet, but you can see the different objects you can use. For now, click on Dataset, and let's take a moment to explore the different connections we were chatting about before. We have the ability to upload our files, which is what we are actually going to use for the Scooby-Doo dataset because we downloaded it as a CSV, but we have other options as well. Because I'm on a local instance here, I don't have the full set of options available, but you will see in your free trial that there are more. For example, under Network you can connect to FTP, SFTP, HTTP, and so on. You can also connect to HDFS and Hive. You can connect to a whole range of SQL databases: Snowflake, PostgreSQL, MySQL, Amazon Redshift, Google BigQuery; there really are a lot of options. You can also connect to cloud storage and social sources, for example Amazon S3, Azure Blob Storage, Google Cloud Storage, and, on the social side, Twitter for now. And there are NoSQL options: MongoDB, Cassandra, Elasticsearch. So you can see all the different connectors available.

Once we have explored the different connectors, let's go ahead and upload our file. Click Upload your files, and you have the option to either select the file or drag and drop it. I love drag and drop, so I'm just going to do that: drag my CSV here, wait a little for it to upload, and rename my dataset to something short and easy; I'm just going to call it ScoobyDoo. This is quite nice because you can see a little preview of what the data looks like, and you will see that it took the top row of your file and used it as the column names, which is what we want. We scroll through, everything looks good and as expected, and then we click Create. Perfect, we have created our dataset, and going back to our flow screen, we can see it in the canvas. Again, remember, the top black ribbon is your access to pretty much all the functionality: go there, click Flow, and you can see your dataset. As you explore more, you will see that the flow is very visual: there are different icons for the different types of datasets. This one is a file upload, very visual; a Snowflake table has a little snowflake, and the Azure databases have another icon, so when you have a more complicated flow you can easily use those icons to understand where your data is coming from. Remember that Dataiku is all about being very visual.

Cool, let's go back to our exercise and make sure we achieved all the objectives: we explored the data connection options available in Dataiku, and we connected to our Scooby-Doo dataset. Check and check; we are ready to go to the next section. And this is a fun one: exploratory data analysis. Once we are connected to our data, we want to understand it a little more, what we can do with it and what we are working with.
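For anyone following along who prefers notebooks, this is roughly how the freshly created dataset could be read into a pandas DataFrame from a Python notebook or recipe inside DSS; a sketch only, assuming the dataset was named ScoobyDoo as above.

```python
# Hypothetical sketch: reading the uploaded dataset from a Python notebook/recipe in DSS.
# Assumes the dataset was named "ScoobyDoo" as in the walkthrough.
import dataiku

ds = dataiku.Dataset("ScoobyDoo")
df = ds.get_dataframe()

# Quick sanity checks on what we just uploaded.
print(df.shape)
print(df.columns.tolist()[:10])
print(df.head())
```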
Exploratory data analysis is done to understand your data. In this step you identify the kind of problem you are trying to solve. Is it a prediction problem? Does it need a supervised or unsupervised approach? Do you actually have a target variable you need to predict? If you do, are you trying to classify it into categories, or are you trying to predict a continuous number? You can start thinking about all these questions. Once you identify your target variable, if you have one, you can also start identifying the relevant fields. Say I am predicting sales, or predicting whether someone is going to convert or not in an e-commerce pipeline. Then I start thinking about what could impact that outcome: maybe the number of pages they browsed, to use THE ICONIC again, the different kinds of product pages they looked at, or whether they reached the checkout or not. So you start identifying all the relevant fields that are going to be useful for your prediction. And once you identify this, you also identify all the data preparation you need to do: maybe the data is not one hundred percent clean, maybe you need to split some fields, maybe you want to get rid of some rows. You can start thinking about all those steps you are going to require.

So let's go to exercise number two, the exploratory data analysis. The main objectives: identify the target variable, identify our relevant fields, and identify our data prep opportunities. Let's go back to the tool and look first at what we have in our dataset. I'm going to double-click our Scooby-Doo dataset, and we can see that each of the rows contains information about a Scooby-Doo episode. We have all the episodes that aired on TV, with episode information such as the title, date aired, runtime, and format. We also have information about the monster that appears in that episode: monster name, gender, monster type, subtype, species, whether it was real or not, and the amount. Moving on, we have information about who caught the monster, if it was caught at all: for each of the characters of the Scooby-Doo gang we have whether they caught it, captured it, or unmasked it, so three alternative interactions with the monster. We have some extra information, for example whether they ate a snack, or whether a person other than the main characters unmasked, was unmasked, or caught the monster, or whether it was not caught at all. There is also information about the setting, the terrain, the country or state, and about the culprit: culprit name, culprit gender, motive, and more, such as whether some other character was included in that episode. While we look at this, we can start thinking about what we want to do with this data, and we can explore it right here. For this, Dataiku offers us two very important things on this screen.
First of all, it shows us the data type detected from the data source. In this case it detects everything as string because we uploaded a CSV, which doesn't really carry data types; if we were connecting to a SQL database, it would give us the data type of the source: integer, string, Boolean, and so on. It also has this blue type, which is the meaning: Dataiku can identify what kind of information we have in the field. For example, this one is natural language, basically just text, and this one too. We have a date here, so even though it is a string, because again it's coming from a CSV, Dataiku has identified that it is a date, just not yet parsed, and we are going to do something about that.

Let's actually show a bit of the natural language functionality. Dataiku can analyze directly from this screen, and because everything is very visual, you can do the exploratory data analysis quite fast here. Let's try to understand, since we have natural language here, which word appears most often in the title. For that I click Analyze and use the natural language processing option. I'm going to keep the defaults, so I'm normalizing the different words and clearing out stop words; I'm not interested in words like "as", "they", or "to", the articles and so on, I just want the real nouns. I compute, and we can see that the word appearing most often in these titles is Scooby-Doo, and really no surprise, because after all he is the star of his own show. So Scooby is the most used word there.

That was the natural language processing. We also want to look at parsing the date. Dataiku is very good at parsing dates; we all know managing dates is difficult because we often don't know exactly which format they are in. We can look at that with Analyze on the column, and we are actually going to parse it in the next step, so that's fine. We have also identified IMDb here, and that is what we are going to try to predict. It has some null values: if I go and Analyze, we see invalid values, and those invalid values are null. So we are going to use IMDb as our target variable, prepare the data, and predict the IMDb score. Let's go back to the flow and start our data preparation.

So, for exploratory data analysis, we have identified our target variable, which is IMDb, and we have identified some of the information we are going to use in the model we are going to build. Let's go and start our data preparation. We were chatting about Dataiku's data preparation capabilities: the Dataiku visual flow allows coders and non-coders alike to easily build data pipelines with datasets, recipes to join and transform datasets, and the ability to build predictive models. It has a very good and easy-to-use visual flow that we will see in a moment, and it provides an easy-to-use visual interface that speeds up data preparation, so everything is going to be very agile.
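Outside Dataiku, the same quick check of the most frequent word in the episode titles could be sketched in a few lines of pandas; a rough equivalent of what the Analyze panel computed, with a hand-rolled stop-word list as an assumption.

```python
# Hypothetical sketch: most frequent word in the episode titles, minus stop words.
# Rough pandas/Counter equivalent of the Analyze panel; the stop-word list is improvised.
import re
from collections import Counter

import pandas as pd

df = pd.read_csv("scoobydoo.csv")  # path is a placeholder

stop_words = {"the", "a", "an", "and", "or", "of", "to", "in", "on", "for"}
words = (
    df["title"]
    .dropna()
    .str.lower()
    .apply(lambda t: re.findall(r"[a-z']+", t))
    .explode()
)
counts = Counter(w for w in words if w not in stop_words)
print(counts.most_common(10))  # "scooby" / "doo" should dominate
```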
Dataiku offers ninety-plus built-in data transformations to easily aggregate, clean, normalize, deduplicate records, and so on; the common tasks required while cleaning data can be found there among the data processors. It also offers some geospatial data preparation functions when working with geospatial data: for example, the ability to extract latitude and longitude from geopoint data and vice versa, and geo-IP location to resolve location data like country, region or state, city, and postal code from an IP address. So it has that capability embedded there.

Cool, this is our exercise number three, and we are going to start preparing the data. Let's quickly review what we need to do. First, we identified that we have a date, but we need to parse it to be able to use it, so we are going to parse date_aired. We are going to keep the rows where the monster amount is zero or one, so that we are able to predict that IMDb variable. We are going to remove the rows where the monster type is null, because we are going to use the monster characteristics to see which episode was the most interesting one, meaning which one has the better IMDb review. We are going to remove the unwanted columns; there were a lot of natural language columns there that might not add anything to the model, so for now we are going to remove them. We are going to standardize the monster attributes, and we are going to set IMDb and engagement to the double data type. So we are going to do those six steps.

Let's go back to the tool itself. Again, I'm going to click on my dataset, and, remember, Dataiku is very visual, so we are going to use a Prepare recipe for this. Once I click the dataset, I go to the visual recipes on the right side and click Prepare. This already gives me the option of an output dataset, which would be called scoobydoo_prepared; I'm going to leave that for now, and I want to store it in CSV format in the managed filesystem. Pretty standard steps.

The first transformation we want to do is to parse date_aired. Here it is, date_aired; this is what I was trying to show before, but now inside the Prepare recipe. We have the option to click on the column and choose Parse date. Again, it's sometimes difficult to work with dates because we are not sure whether the format is month/day/year or day/month/year and so on. I really like this ability of Dataiku, where it can explore the whole dataset and suggest the format I should use. Here it is telling me that day/month/year actually doesn't fit in more than half of the cases, so that's probably not a good format to take, but month/day/year fits well, so let's use that. We leave the defaults and use that date format. As you can see, this action built the very first step of data preparation here on the left-hand side; as we keep building, you will see all the preparation steps appear there, and this is how we keep things organized. So we are happy with this data preparation step.
We go to the next step, and we can see the new column with the date already parsed. Let's look at what is next: keeping and removing rows. We are going to add a new step and remove the rows we are not going to use, getting rid of the rows that don't have exactly one monster. The processor is "Filter rows/cells on value". I'm going to only keep matching rows, the column is monster amount, and we keep only the rows with one monster. We could get creative and do a bit more transformation to use the other ones, but for now we are going to keep it simple for this first workshop and only use the episodes with one monster. So I click and add the new step. Another nice bit of functionality, before we move on, is this little eye, which lets you see the effect of each of these steps. If the eye is off, the step is not applied to the preview; if I click it, I see what that step is doing. So we have got rid of the rows with several monsters.

Let's review: keep rows where the monster amount is zero or one, and remove rows where the monster type is null. Next step: we're going to use the monster information to predict our target variable, so if a row has no monster information, we just want to get rid of it. Again I click Add new step, "Filter rows/cells on value", I use monster type, and I remove the rows where monster type is null. Perfect, done. So I got rid of the rows that don't have a monster type, and I'm just going to check that it's true: I go to Analyze and see that everything has a monster type. But we are also seeing that this column needs some extra cleaning, so we are going to do that too and standardize it.

The next step is removing the unwanted columns. We were chatting about the different natural language columns that we are not going to use for our model. There is a processor to delete or keep columns by name, so we are going to use that one. I add it as a new step, and since we want to remove several columns, I click Multiple. We get rid of the title; we don't want the title there. We don't want the series name either. What else do we have? Monster name we want, so let's keep it. There were some other name columns; culprit name, let's remove that one as well. And there was the "if it wasn't for" column, the phrase the culprit says, along the lines of "if it wasn't for those meddling kids"; we don't want that free text getting into our model either, so we get rid of it. Perfect, that's it.

The next step is to standardize the monster attributes: monster gender, monster type, monster subtype, and monster species. We are going to do some data standardization, because remember that earlier we saw that some of them contain commas and look like an array, so we are going to get rid of that.
For that, we are going to add a new step, starting with the monster type. I'm going to use a formula, and I'm going to open the editor panel. I mentioned at the beginning that if you cannot find a processor and want to do something very specific, you have this formula capability, very similar to what you would do in Excel. In this case we are going to use split, and the editor gives you some advice on how to use the functions. Split tells me that the first argument is the string we actually want to split, so I type monster_type there, and it helps me by showing the value of that column. The second argument is the separator, and we saw earlier that we have commas, so I'm going to split on a comma. And that's it: it returns an array with the different values. Because we just want one of the values, we add a square-bracket index so that it gives me the very first value. I select this and copy it to the output column so the column gets replaced, and apply. If we check our monster type, we will see that we got rid of those weird arrays; now we have just one single value.

Because we want exactly the same transformation for the monster subtype, monster species, and monster gender, and we don't want to type the same thing several times, let's use the duplicate step function. I go to the three dots, duplicate the step, and use the same formula for the other fields: monster subtype, same idea, copy and paste; duplicate once more for the monster species; and you can start seeing why it's easier to come and use this rather than exploring and doing everything in code, it just takes less time. I apply again for monster species, and one last duplicate for the monster gender. Cool. And again, let's use Analyze and make sure everything looks right: male, female, perfect, single values; monster type we already checked; the subtype looks good; and the species, perfect. We have standardized our monster features, which are what we are going to use in our model.

Finally, we want to cast monster amount, IMDb, and engagement. I go to IMDb and change it to double, engagement as well, and the monster amount is already an integer, so we're good with that. Perfect, we are ready to get going. We have finalized our data preparation; we just save it. And remember, the black top ribbon: let's go back to the flow and run it from there. As you can see, in this flow we have our input, our Prepare recipe, and our output dataset. To run this, I'm going to use the flow actions at the bottom right: Flow Actions, Build all, yes, we want to build the required dependencies, Build. And it is building; job started, job finished.
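For reference, here is roughly what those same prepare steps would look like in plain pandas, outside Dataiku; a sketch under the assumption that the CSV columns are named as in the Kaggle dataset (date_aired, monster_amount, monster_type, imdb, engagement, and so on).

```python
# Hypothetical sketch: a plain-pandas equivalent of the Prepare recipe steps.
# Column names follow the Kaggle Scooby-Doo dataset; adjust if yours differ.
import pandas as pd

df = pd.read_csv("scoobydoo.csv")  # path is a placeholder

# 1. Parse date_aired (month/day/year fit best in the walkthrough).
df["date_aired"] = pd.to_datetime(df["date_aired"], errors="coerce")

# 2. Keep only rows with a single monster.
df = df[df["monster_amount"] == 1]

# 3. Remove rows where the monster type is missing.
df = df[df["monster_type"].notna()]

# 4. Drop free-text columns we don't want in the model.
df = df.drop(columns=["title", "series_name", "culprit_name", "if_it_wasnt_for"],
             errors="ignore")

# 5. Standardize monster attributes: keep only the first comma-separated value.
for col in ["monster_type", "monster_subtype", "monster_species", "monster_gender"]:
    df[col] = df[col].astype(str).str.split(",").str[0]

# 6. Cast the target and engagement to floats (invalid values become NaN).
df["imdb"] = pd.to_numeric(df["imdb"], errors="coerce")
df["engagement"] = pd.to_numeric(df["engagement"], errors="coerce")

df.to_csv("scoobydoo_prepared.csv", index=False)
```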
I do a quick refresh here, and I can see that I now have a solid icon, which means my dataset is ready to go. I can double-click and see everything I did: the date is parsed, the natural language columns we didn't want in the model are gone, and everything looks standardized. Let's review that we accomplished all the objectives of this exercise. Number one, we parsed date_aired, so we now have a proper date in the correct data type. We kept the rows where the monster amount was one. We removed the rows where the monster type is null, and we removed the unwanted columns, all those natural language columns we didn't want. We standardized the monster attributes, subtype, type, species, and gender, getting rid of those arrays that appeared in some of them, and we set IMDb and engagement to the double data type. With this, our data preparation part one is done.

Let's go to data preparation part two, because, as is well known, in data science eighty percent of the work is data preparation, but Dataiku makes it easier. For this part, we already have all our data ready; we just need to split it into a labeled and an unlabeled dataset, so that we can train our model on the labeled dataset and then score the unlabeled one. So, black ribbon at the top of the page, Flow, and we are going to use a Split recipe here. I find Split, and I'm going to call the first dataset scoobydoo_labeled, create dataset; I made a typo there, but it doesn't matter; the second one is scoobydoo_unlabeled. So we will have our labeled and our unlabeled datasets; let's create the recipe. What we want from this step is to put into the labeled dataset all the rows that already have an IMDb score, which is what we are going to use to train our model, and to keep separate the ones that don't have an IMDb score, so that we can score those. I'm going to use "Map values of a single column": I want to split on the IMDb field, and since we know an IMDb score goes from zero to ten, I'm going to use a range rather than discrete values. Everything with an IMDb from zero to ten goes to the labeled dataset; everything else goes to the unlabeled dataset. And I am going to show another way of running this: at the bottom left you have the big green Run button, so I just go there and run. Everything is running, the job succeeded, and we go back to our flow; once more, top black ribbon, click on the first icon, Flow. We can now see our two different datasets, with exactly the same structure, because that's exactly what you want with your model: the labeled and the unlabeled datasets need to have exactly the same structure. Cool, we achieved exercise number four and split our dataset into labeled and unlabeled.

We are ready to build the machine learning model, so let's go and do it. For creating a machine learning model, Dataiku has different functionalities such as feature engineering, and it also has AutoML.
It has the capacity to host notebooks in Python and R, so you can do more research-style work in notebooks, and it has time series visualization. Lots of capabilities here. Let's go to our exercise number five. Here we are going to create an analysis on our labeled dataset, and then we are going to explore a little of the model interpretability sections that Dataiku offers.

Let's go back to our flow. We start by selecting our labeled dataset, then go to the Lab and choose New analysis. I'm going to keep the default analysis name, the analysis of scoobydoo_labeled, and create the analysis. In here we have different options for the models; I go to the top right, Models, Create my first model. Again, Dataiku offers different types of models here. We are going to use AutoML prediction for this workshop, but there are also capabilities for deep learning prediction, image classification, object detection, time series forecasting, and AutoML clustering, so it has many more capabilities than the ones we are showing here. I click on the AutoML one, and I want to predict IMDb as the target variable, so I select IMDb. Once more, even within AutoML, we are going to use a quick prototype for now, but there are other options that are either more interpretable, so that business analysts can better understand what's going on and it's not a black box, or higher-performance models. Let's select Quick prototypes and click Create.

Here, very quickly, we can see in this Design tab the different options we have. We are not going to change the defaults for now, but we have the possibility to explore and experiment with the design of the AutoML: different percentages for the train and test split, different metrics to optimize; for now we are going to optimize the R2 score. We have debugging here, which gives us some diagnostics and suggestions on what to do with the model. We can see the features to include or exclude: remember that we didn't want any natural language or text fields, but we left the monster name in; say I really don't want to use it in the model, then it's as easy as going here and switching it off. So I also have the possibility to tell the model exactly what to use for the modeling. Feature generation, feature reduction, there are several options there. The good thing is that if you are starting out in data science, this is very interactive; it has a lot of advice and documentation on how best to use each option, so you can explore a lot.

Cool. For time's sake, I am going to start the training; we have twenty more minutes including Q&A, so let's keep going. I didn't show it, but at the moment it is just going to train a random forest and a ridge (L2) regression; more algorithms are available if you want to play with them, and you can select them to be included in the AutoML. Here, our results indicate that the random forest is the best one, as per the optimization of that R2 score, and we want to go and deploy it. I want to save it first, of course.
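To demystify what that AutoML quick prototype is doing, here is a rough hand-rolled analogue in scikit-learn: a train/test split, a random forest and a ridge regression, compared on R2. It works on the prepared labeled data from the earlier pandas sketch and one-hot encodes the monster attributes; the feature list and encoding are assumptions for illustration, not Dataiku's exact internals.

```python
# Hypothetical sketch: a hand-rolled analogue of the AutoML quick prototype.
# Assumes the prepared CSV from the earlier sketch; not Dataiku's exact pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("scoobydoo_prepared.csv")
labeled = df[df["imdb"].between(0, 10)]            # rows that have an IMDb score

features = ["monster_gender", "monster_type", "monster_subtype",
            "monster_species", "format", "run_time"]
X = pd.get_dummies(labeled[features].astype(str))  # simple one-hot encoding
y = labeled["imdb"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

for name, model in [("random forest", RandomForestRegressor(random_state=42)),
                    ("ridge (L2)", Ridge())]:
    model.fit(X_train, y_train)
    print(name, "R2:", round(r2_score(y_test, model.predict(X_test)), 3))
```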
To deploy it, again, everything is managed from the flow, so it is as easy as selecting that random forest model, which is the one I want to deploy, going to the top right, and clicking Deploy. I'm going to keep the default name, predict IMDb regression, and create, and voila: our flow all of a sudden has another two objects, a training object and the prediction model, ready for use. So we have created our model: we created an analysis on the labeled dataset and reviewed a little of the model's design. I was going to show you the interpretability, but I might leave it to the end if we have a little more time.

So I'm going to go to the final step, which is the exciting one, right? We have created our machine learning model and we actually want to see what it does. In this step we are going to use the unlabeled dataset we prepared just before and apply the model to it. Dataiku has two options to deploy models. You can do batch scoring with automation nodes, which means you score in batches, packages of several rows, and output predictions; that is what we are going to do with this flow. But you also have real-time scoring with API nodes. Let's say your use case is, like the car dealers, giving a prediction of the price of a car: that really needs to happen in real time. The person is entering features and selecting different things; you need an API to go, apply the model, score it, and come back immediately. You cannot wait until a hundred rows are ready for batch scoring. So it has that possibility with the API nodes.

As we said, we have our model there in the pipeline and we are ready to use it. So, final exercise, we are going to score the data. Let's go back to our pipeline. I'm dragging things around just to make a little more space, but the way I use the model is: I click on it, and I'm going to use the Score recipe. Here it is, Score. Once I click it, it appears on this side; I click Score and select the dataset I want to score, which is scoobydoo_unlabeled. Yes, I'm going to keep that output name, unlabeled_scored, and create the recipe. Here I'm just going to turn on "compute individual explanations"; it's a little slower, but our dataset is small, so we can still do that. And I am going to run from here again. I go back to the flow, and we can see that a new icon appeared, the scoring icon, and we finally have the scored dataset here. I'll refresh so I can see the full scored dataset, and done. If we scroll all the way to the right, for those rows we have our prediction here, plus a bit of an explanation of how the different features played into that prediction. And that's our scored dataset.

We have fifteen more minutes, so I'm going to take two more minutes to show you the interpretability section of the model. To get back there, I go to the visual analyses at the top, the analysis of scoobydoo_labeled that I created before, and then the random forest. Once the model is trained, this is the section where we can look a little deeper and understand it a little more.
We have decision trees, so we can see, for example, that the first split the algorithm made was on whether the format was a TV series or not, and then it keeps dividing the tree from there. Another very important feature is subpopulation analysis. We as data scientists, data engineers, people working in data, always need to be very aware of bias, and make sure our algorithms are not biased; this subpopulation analysis helps with that. Let's take monster gender, even if it doesn't make a lot of sense for this dataset, just to show the functionality: I select my variable, monster gender, and compute, and the idea, to make sure the model is not biased, is that the metrics should behave similarly across all of the subpopulations. Here we have very few female monsters, and our metrics differ between the populations, so if this were a real-case scenario, I would be worried about bias against females; I would go and investigate further, run more tests, and see what's happening with the data. So subpopulation analysis is very, very important. We have other things like scatter plots for interpretability, error distribution, metrics and assertions, and a view of which features were actually used, so you can research this further. The point is that data science is not just about creating a model where who-knows-what happens inside; we also need to be conscious and aware of what is actually going on and be able to interpret it. Cool. Any questions so far? I talked a lot; I hope we have some questions there.

We don't really have any questions in the Q&A, and I'm looking at chat as well; it looks like we don't have a lot of questions. So thanks, Azu, for that really insightful workshop. I hope everyone who attended this webinar got a lot of tips and tricks, especially on how to get started with Dataiku DSS. This was a ninety-minute workshop, and you will get the webinar replay since you've registered. We're going to pause a couple of minutes to see if anyone has any final questions; feel free to pop them into the Q&A section of your Zoom controls, otherwise I'm monitoring chat as well, so you can pop them into chat if you'd like.

And again, just a recommendation: if you are interested in starting to explore your machine learning use cases, to go one step beyond and do that predictive analytics part, Dataiku is a very easy way to start, and it really lowers the barrier to entry. Make sure to grab a free trial account or download the limited free edition and explore the possibilities. Yes, I've also popped in a blog article, which gives you a bit of a recap of this event, because this is the second time we're running the Go Dataiku workshop. You'll also see some of the questions we had last time and get a little more information, but feel free to reach out to us via our website if you have more specific questions. Next slide, please, Azu. Again, I will remind everyone that a replay will be sent to you within two to three business days. You will not be able to see this replay on the blog; it will only be available to the people who have registered. Also, at the end of this webinar there will be a short survey, so please do give us your thoughts. We are looking to continuously improve our content as well as our delivery.
So do let us know your thoughts and give us your feedback. Otherwise, reach out to us if you have any questions or queries about anything we have discussed today; all our links are on our website. Thank you all for joining today's webinar, and I hope you all have a lovely rest of your day. Thank you. Thanks, Azu. Thank you. Thanks everyone. Thanks, Carol. Bye bye.