Data Forum: Choose Your Own AI Adventure with Adam Mico

Transcript
Cool. It's ten oh six. I say let's get started. We'll do our introductions. I'll kick it off. My name is Garrett Sauls. I work at InterWorks. I'm a content manager. That's a fancy way of saying corporate English teacher, but I get to do fun stuff like this: talk with data people, get the smart ideas out of their brain, and ask them just riveting questions. So I'll pitch it over to Annabelle, who will introduce herself, and then she can introduce our guest, Adam. Good afternoon. Good morning. My name is Annabelle Rincon. I'm living in Switzerland. I'm a Tableau Visionary and Tableau Ambassador. I'm passionate about Tableau and data visualization. And it's a great honor to receive my friend Adam for the second time in one year and a half. So let me read some bio for him because I don't want to forget anything. So Adam Mico most recently served as a principal of data analytics strategy at a global biotech company, leading data visualization, generative AI enablement, Tableau administration, data governance, and leadership across analytics strategy. He has been awarded four-time Tableau Visionary, six-time Tableau Ambassador, and is a two-time, and I'm very jealous about that, recipient of the Michael W. Cristiani community leadership award at the Tableau conference. He's also a published author with Packt, a Dreamforce Golden Hoodie recipient, and the founder of DataFam Rising Stars. He's steadfast in supporting diversity and inclusion efforts, which include neurodiversity, and he's a big advocate of diversity and inclusion. And I think that's already a lot. You know, that's why I needed, like, to really read my notes. And maybe, Adam, you can, like, deep dive a little bit and tell us what changed for you in the past year and a half. Maybe you had new GPTs or AI initiatives. A lot changed. In fact, the day after we had our interview, I had a septic joint and had emergency surgery. Woke up at two in the morning with a sore wrist, and then it got more sore.
So I went to the emergency room, and it was swelling more and more. And they're like, oh, that's a septic joint. We need to get you emergency surgery. And then I had surgery two months later to remove, basically, a row of bones from my wrist. But now it works really well, and I'm thankful I have had those surgeries because, obviously, you don't wanna mess around with sepsis. It's difficult. But, yes, a lot since then has changed. I built a ton of GPTs. A lot of them I'll kinda be sharing with you today and going over a little bit. The other thing too is that Annabelle and I have been collaborating quite a bit. She's my partner in crime in a sense, where we do a lot of blogs together and so forth. A big one during the year was covering Tableau Next. Annabelle really wanted to start to deep dive into it, and I was kind of there already, but I wanted to give my perspective for use cases and businesses and why it matters to businesses. So we tag teamed on that, and we had, what, four or five of, like, twenty-plus-minute blogs. So it was basically an ebook's worth of content. So that was a lot of work. And Annabelle's working with me right now on DataFam Rising Stars. That was an initiative I started five years ago. This is our fifth year, and it's a lot of work. And partnering with somebody that also likes to do a lot of work and is a great partner is just tremendous, and it makes it a lot more joyful rather than doing it alone. She understands exactly what we need to go through to publish this, and it's almost like a second full-time job. In addition to that, I joined the board of Dreamin' in Data, which is a nonprofit organization. Including that, we're planning for a conference in Chicago in May, right after the Tableau conference. So follow me on LinkedIn and/or Dreamin' in Data. We have a separate page for Dreamin' in Data, so follow that page for updates. And we also have a website too.
Many other things I've been working on: basically, studying AI and generative AI, studying governance, working a lot on governance and understanding what that is, and looking at the holistic nature of what the future of analytics is, just because I don't wanna be left behind. So a lot of the tools I built help me understand what that future looks like, and I wanna help people come along with me, just because, if you're not utilizing tools right now to help scale your capabilities, it's gonna be very difficult to catch up. So I want people to come along, and that's why I share a lot of tools publicly, and all for free. And that's a big thing for me. The only thing I ever charged for was a book with Packt Publishing, and I wouldn't have written it if I didn't have a team behind me, and Packt pushed me along and told me, you need something out there. We need something out there. And that's the only thing I've ever charged for, because I do really love the opportunity of giving back and sharing tools, because a lot of people in the great communities, whether it be GenAI, AI, or the DataFam community, share a ton. And pretty much everything that's shared is free and often better than what the company puts out themselves, and more creative. So it's very interesting to see what people come up with, and those are things that you wouldn't generally think about before. So, yeah, I've been quite busy since the last time we talked, but enjoying every moment of it. Yeah. It's funny, Adam. I was just about to ask so many questions that you literally just answered in succession. I was like, why did you wanna get into AI? What is it about giving back to the community that you enjoy? But, yeah, I mean, you really just answered those things. I mean, giving back. We could deep dive into them. I'm happy to do that. I'll give you that overview. I love that.
Maybe this is an interesting question, because I know both of you have been part of, in particular, the Tableau community for quite a while. What is it about that community that helped you along your journey? What are you appreciative of about that community, to where now, obviously, you're giving back to it in a big way, and it meant something to you? What about that community really stuck with you? Let's start with Annabelle. Yeah. Love putting you on the spot. Yeah. It's okay. Don't worry. When I was learning Tableau, I would say that I was, like, navigating the Tableau forum a lot, asking questions myself, but also, like, trying to see if someone had the same issues as me and how they solved them. So that really helped me on the journey. And I have to say that each time I asked a question or reached out to someone, they were very, very nice and very helpful and generous with their time. So that's why, when I felt that I could give back, that I knew a little more, I started doing it. For me, I was pretty much a hermit. I worked in the public sector. I reclassed to the highest level I possibly could reclass to. I'm, you know, counting down the days to retirement. So I wasn't, like, a social media person at all. I hated everything about it. I despised it. But then I kinda looked on Facebook, and Toan Hoang, who ran Tableau Magic, kind of encouraged me to do more and be part of the DataFam community. And at that point, I did a lot of stuff behind the scenes before that. I used to be a music reviewer and a lot of different things, but pretty anonymously, hiding behind my brand and so forth at the time, not really sharing my name or anything about myself. So this time, I'm like, you know, I don't wanna be a hermit forever, and I'm gonna give it one last try and just be as authentic as I could be.
And there was a person in the community by the name of Hunter Hansen who was out as autistic and sharing his autistic journey. And, really, at the time when I started with the community, I probably shared it with maybe a handful or two handfuls of people, the people that needed to know. But I thought, you know, if I'm gonna put myself out there, I have to be my authentic self, and he kinda helped pave the way for that. And the other thing that really drove me into community, because I've been in a lot of communities, there were message boards for groups or interests and so forth back in the day. But the thing is that you don't really meet up with them and form real bonds with people. And when I met people in person, and this was when I went to Cincinnati, I met Kevin Flerlage, Dee, Jeffrey Shaffer, and a number of other people. And I saw that they were just as great in person, if not better. And Sarah Bartlett, who came all the way from England to join us at the Cincinnati TUG and whipped my butt at viz games, which was kind of funny. But it was just amazing to see and spend time with those people and understand they're just as good, if not better, offline as they are online. And that just kinda kept going. I mean, that's how Annabelle and I became friends. We kinda came up the same way, at the same time. We had the same titles. We were both social ambassadors. We became Visionaries at the same time and all that stuff. And we had our own challenges and our own unique perspectives on the community. And a lot of that helped us form a really good partnership and bond, because we could see it through similar eyes but different eyes as well. And you do form a lot of those real friendships that you would never have with any other community. I mean, I've been in GenAI for three years, and I don't have close friends from the GenAI community. The DataFam community is its own thing. So it's amazing to be part of that. That's great.
In terms of GenAI, I mean, I love talking about that. You were our first guest on Data Forum, and you shared some of your GPTs. I'm curious. It's been a year and a half. In a snapshot, what was the state of the AI landscape maybe a year and a half ago, what you were building, what you could do, compared to where we're at now? What's changed? That's a great question. And it's kinda funny, because in April of two thousand twenty five, so that's less than a year ago, I wrote an article on creating custom GPTs. By far and away my most popular blog article I ever wrote. But then in November and December, when the new models came out, I noticed that it was pretty much completely obsolete at that point. Everything changed. When you were building GPTs before, everything had to be rule based, and it wasn't context aware. You had to make really firm and concrete rules. But now there's so much awareness that you could do things that weren't possible before. So it understands things. And in fact, when you create a bunch of rule-based GPTs, or what some people would refer to as GPT wrappers, w-r, not r-a-p-p, when you build custom GPTs, if you do apply a lot of rules these days, they become very brittle. So that's the one thing: when you're building for older models, and it could be if you're building skill MDs (markdown skill files) and so forth for other tools like Claude or Gemini, you have to consider how contextually aware it is. Because if you do include a lot of rule-based things, it's very difficult for it to really do what it's intending to do. And the main difference now is that you have to think in workflows and systems. And the fortunate thing for me, I tend to already think in systems and workflows.
More abstractly, I don't write a bunch of process flows or anything like that, but it's really helped me wrap my head around the new changes and what the future changes will likely be when it comes to building custom GPTs or skill-based markdown documents and so forth. And you had, like... So I had to rebuild everything. Exactly. You had some challenges recently due to a product update. Exactly. So everything, yeah. Everything, even my public ones and the ones at work. I rebuilt three or four dozen GPTs at work just because of the changes from 4o to 5.2. And that's the one thing that you really have to think about when you're building these GPTs. It's not set it and forget it. It's not a project. It's a product. So when there are changes in models, you really have to understand what those changes are and how that impacts what it does and what it's intending to do. Because you don't want people going to GPTs and coming away thinking that it's an awful experience just because you didn't update to the model. And that's really important for me. You know, those are things that keep me up at night. It's like, how's that model impacting things now? Even though some of the GPTs don't have a lot of activity and so forth, it's very important for me that the one person that goes on it has a really good experience with it, or the best possible experience they can have. For sure. Got two questions for you. So our friend Will is asking, how would you improve the GPT store from OpenAI? He says, I don't think there have been many updates other than the models since it launched. Any hot takes, feelings, ideas? Oh, a lot, because it's now kind of an endless pit. So I don't know when and if they'll be making updates to that particular store. I would imagine that has to be top of mind because people are asking about that all the time, but it's very difficult to find what you're looking for. There are no filters, for example, for the most part.
You're just looking randomly for stuff. You have to kinda know where a GPT is, especially when there's millions. I imagine there's millions and millions of public custom GPTs out there. The other thing too: there's really no sense of sorting. So there could be an old GPT that was created on an old model that hasn't been updated, that may not be very good, but it shows up first in the search results because it's one of the older ones. And then more people use it because it's showing up first, and they're getting a wrong idea of what those GPTs are and the capabilities of what you could do within a GPT. A good example is if you're looking up, like, Tableau product assistance ones. There's many out there. Not very many good ones, and most of them haven't been updated at all since prior models. I mean, they were built, like, right when custom GPTs came out and haven't been updated. There's really no way to filter that out. So having a better curated set and filters and capabilities within the search to find what you need to find is a huge thing, just because I know that's endlessly frustrating. I mean, if you're looking for mine, what I've done is I put them all on my blog. So I have all the featured, quote, unquote, GPTs I built, I put them on the most recent version of my blog, and I update that as I go along. And that's not fun. It would be a lot easier if I could just have my own portfolio, kinda like Tableau Public has for each person. They don't really have that. If you search my name... and I'm also intentional about putting my name in as many GPTs as possible, just so if people are searching for my name, they could find the GPTs I built. Some I don't do that for, maybe because the title is too long, or the one I recently built for the Flerlage Twins blog. I didn't put my name on it, because it's intended for them. It's not intended as finding my GPT per se.
So that's one thing I would use to work around it. But, yeah, there's a lot of opportunities, I would say, with the ChatGPT store. They haven't done anything with it since ever, pretty much, which is kind of frustrating and disappointing. But I have some friends that work there, so I'm not gonna say too much about that. They know me. That's okay. It's early days. Yeah. You know, they'll evolve. They'll grow, especially as things become more ubiquitous and stuff like that. I imagine we'll see those sorts of changes. Denise had a great question, and this is one that... there are obviously a lot of my colleagues who use AI for various things in their work. But in the context of data and analytics, Denise is asking, how do you get started in AI? And with the express caveat that, you know, she works with confidential data and can't... Right. ...necessarily use the Internet. So is there a use case for, you know, getting started with AI, especially when you work with sensitive data? So one of the things I would do is consider, if you're working with data, make it PII proof. So I have a mock data generator as a custom GPT that's actually rated pretty well, and people love it. Basically, it creates mock data that's realistic. And the biggest hurdle I had was geography, because I wanna make realistic geography, but with mock data. So that was the biggest hurdle I had building that GPT. The other part was I recently received feedback on utilizing Faker for Python to make it more realistic, and that was helpful too. But one thing I would do, especially if you're working on POCs and whatnot, is utilize something like my mock data generator to create the dataset that you need to create and work with, and then utilize technology to analyze that data. And that's something you could share and kind of crowdsource as well.
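The idea can be sketched in plain Python. This is a minimal, hypothetical illustration of PII-free mock data with realistic geography; the field names and value pools here are invented for the example, and a library like Faker (which Adam mentions) produces far richer, locale-aware output:

```python
import random

# Invented value pools for illustration only. Pairing city with state
# keeps the geography realistic, which Adam calls the hardest part.
FIRST_NAMES = ["Ana", "Ben", "Chen", "Dara", "Eli"]
CITY_STATE = [("Des Moines", "IA"), ("Little Rock", "AR"), ("Madison", "WI")]

def mock_rows(n, seed=42):
    """Generate n rows of realistic-looking but entirely fake data."""
    rng = random.Random(seed)  # seeded so runs are reproducible
    rows = []
    for i in range(n):
        city, state = rng.choice(CITY_STATE)  # city/state stay consistent
        rows.append({
            "id": i + 1,
            "name": rng.choice(FIRST_NAMES),
            "city": city,
            "state": state,
            "sales": round(rng.uniform(100, 10000), 2),
        })
    return rows

rows = mock_rows(1000)  # 1,000 rows, the same default his GPT uses
```

Because no real record ever enters the prompt, a dataset like this can be shared, critiqued, and analyzed publicly without PII risk.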
So one of the big things about building in public is that people can comment, people can provide feedback, and people help you improve your product. And that's super helpful. So you don't necessarily have to use private information. In fact, I would say never do that. Every one of my GPTs has strong ethical guidelines related to what the GPT is, and that's important, but most people don't really consider that, unfortunately. So the thing you have to think about is that you have to protect yourself: make your own mock data. So, for example, if you're using one of my GPTs, don't bring in the fields and all the data and so forth. Just highlight what fields are being used and ask for mock data related to those fields. And then it would give you a realistic data source that you can work with and do some analytics from, as opposed to bringing in real data that's confidential, or your employer's data that should never be on there. The other piece too is that if you do work with an employer and they have an enterprise solution for GPTs, that's super helpful too. They often have to apply an AI policy with those enterprise solutions. So you're kind of helped along there, but they also may remind you to be very careful about what you share, even if they're not keeping or storing your data in ChatGPT. So that's important to think about too. But to get started, utilize mock data. Long way to answer that question. But... No. That's great. I think long is good. Context is good. You know what I mean? It's always better than a short answer of just, like, "start here," the end. This is kind of a follow-up, and maybe this might be one that I'll let you choose, Adam, if it's something we can answer as you're going through one of these that maybe you've created.
But then I wanna ask Annabelle a question as well after we ask this, and then we can get into the demo portion. William Moorhead is asking, what is the best process or best practice for creating an agent for agentic AI? With confidential data, can we create an agent behind our firewall to run analyses on our confidential information? So any best practices there in terms of creating agentic AI? So if it's confidential information on a public platform, I would not suggest you do that. You would definitely need something in between to block that data coming through that public API. I would say you would have to develop your own product on the back end, especially if it's confidential information, and utilize a tool that aggregates, as opposed to bringing all the data in, or somehow creates metadata as opposed to bringing in all the data. Metadata is often safe, but it's not a hundred percent safe, because a lot of times your employers won't even want metadata to come through, depending on what that data is. So those are things that you really have to think about. But, ultimately, don't utilize it with any sort of public GPT. You would wanna build something on your side of it, and make sure that you have clearance from cybersecurity or your security team at work before you do anything like that, just because it's super important to do that. And that's why I always suggest, if you can, utilize mock data. Mock data gets the job done, and it's great for MVPs or POCs. Yeah. That was one of my questions. I mean, a lot of your GPTs are free. I was wondering if it was possible to take, for instance, your VizCritique GPT and have it inside my company. So, you know, like, the image doesn't float around, and I still receive the critique under, like, confidential data. But I don't think that is something possible. Right?
That's possible. So, for example, if your company has ChatGPT Enterprise, they could elect to turn off or turn on access to custom GPTs. Most companies would probably not turn on access to them, just because they don't want any potential for any private information to get through. So that's a big thought there. But, again, utilize a mock version of that, or, even if you built something internally, you could have a coded version of it where it can review that coded version as well. Cool. Okay. I'm gonna save the Annabelle question, and we have another question from Mitesh here that we can ask towards the end. But, Adam, I wanna get into the demo portion. I want you to do a little show and tell, because I think we're talking a lot about these things, so it's like, alright, let's see them. Yeah. Let me know if you can see it. Yep. You can. Perfect. Okay. We'll dive right in. Can you see it? Yes. And then, what is our favorite bot? Oh, I like that. I'm not going to spend a lot of time on these, just to give you an idea. But one thing that's important that I do for pretty much every single one of my GPTs is I give you a help document upfront. Hardly anybody provides any sort of help documentation or anything like that on their GPTs, and I think that's a major missed opportunity. My job as a builder is to make sure that you understand it and can work with it. So to give you an example of what that does, it's usually the first one I have as a conversation starter. So when you're going to a GPT, it's like, what can this GPT do? So then it'll share a lot of information about this GPT. And, of course, it's going a little bit slow now because it's live. I do the same thing.
So, basically, the one thing I do with my GPTs, just to be upfront about that, is that they're all proprietary, which means that I have security that locks down the explicit instructions and so forth. But I wanna make sure that you have the ability to understand exactly what it does, and I try to be as transparent as possible. And that's part of the reason why I have that question. So it basically tells you what this includes and what's covered here. And also, yeah, I do have some old Tableau Blueprint documents as well, which is great for customer success and so forth. So there's thousands of pages of documentation. So it kinda tells you exactly what it can do. One of my favorite use cases for this, and I'm not gonna share all of this with you, but I just wanted to give you an example: you know, when you're working with a GPT, it's always helpful to understand what the purpose of the GPT is or what it can do. So one thing that's a real fun exercise is if you have an error in a table calculation, and it's like, I'm trying to figure out why I have that error. Let me see if I can pull that up so you have a better idea of what that error is. And you've probably come across a lot of this. When you're working with Tableau, you get these random error messages that don't really tell you what's going on. The cool thing with this GPT is that you can just paste that error message, don't provide it any context, and, usually, it'll troubleshoot appropriately and give you the right solution. So now it's telling me, yes, it's an IF statement, and it's missing certain things here. What it's telling me is that it's missing an ELSE. So it now has the corrected calculation here. It says SQL, but it's technically VizQL because it's Tableau's language. But that's how you can fix it right away. Copy and paste that, and it should update your calculation on the spot.
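As a hypothetical illustration of the kind of fix being described (the actual calculation from the demo isn't shown in the transcript), a Tableau calculated field with a dangling IF, and a corrected version with an ELSE and END, might look like:

```
// Broken: IF with no ELSE branch and no closing END
IF SUM([Sales]) > 10000 THEN "High"

// Fixed: every IF needs an END, and the ELSE avoids returning NULL
IF SUM([Sales]) > 10000 THEN "High"
ELSE "Low"
END
```

Tableau rejects the first form outright because the END is missing; even with END, omitting the ELSE would silently return NULL for rows that fail the condition, which is often the real source of a confusing calculation error.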
People use this a lot of times to reverse engineer some problems and so forth that they're working on in a dashboard. Another thing you can do too, if you really wanna understand what's going on with a dashboard and get very good information about the data and so forth, is upload the, what do you call it, the XML of the Tableau workbook, and get good information on what's going on with that workbook and get a full understanding of what's there. Again, it definitely needs to be public information. Don't share any private work dashboards or anything like that there. If you're creating a visualization, or if you're downloading a visualization from Tableau Public and trying to figure out how something is working, you definitely wanna go ahead and review that, because it'll make it a lot easier for you to understand. And the other thing that comes naturally with GPTs is that you can do it in your language too. So if English is not your language, it can translate for you in pretty much every language. I think Swiss German is off the books because it's kind of a niche language, but there's a lot of other languages that it works with: Thai, Japanese, and pretty much everything. And the language model is much better now because it's more context aware. So that's one thing to keep in mind: everything you see generally in Tableau, or ninety percent of what you see in Tableau, is always English based. So that excludes most of the world. So utilizing these tools helps you understand the world in your language. That's one thing you really need to think about. It's like, should I be spending a lot of time understanding this in English when English is maybe my third or fourth language? You definitely wanna utilize tools that explain it to you in your language, in your cultural context. The next one is super fun. It's called VizCritique Pro.
So, basically, I already had these two built when we last talked about them, but I've iterated quite a bit on these. So, again, as I mentioned: how do I score this? Can you score my viz? Review this dashboard suite, so you can review not just one dashboard but multiple dashboards, and it can score everything together. Also, initially, I just had scoring, and you didn't really have an opportunity to not have scoring, but some people may not wanna see scoring. I think scoring is fun because it gives you an idea without really digging into the information about it too much. The other thing about this is that you can simply just upload a picture of your dashboard. One thing I would point out, though, is that if you have a lot of interactivity that you wanna show and have evaluated, you'll wanna do multiple pictures of the dashboard just to show its interactive parts. Like, if you have tooltips or filters and so forth, you'll wanna go through that, because it'll have more context, because it can only score what it can see. But this example is a dashboard I built. I don't wanna share other people's dashboards, but this was Earth Is on Fire. It was a global warming data visualization, where it was great before when it was cooler, but some places are experiencing warmer. So it does this. This is something I created maybe five years ago. So I just uploaded this. I didn't share any information or ask anything about it. I didn't provide any additional context. And, thankfully, it likes my visualization. Otherwise, that would have been quite embarrassing. It gives you a too-long-didn't-read: it provides a short amount of information on top so it doesn't overwhelm you. But it also gives you more information that you probably didn't even think a lot about when you were building the visualization. Who's my audience? Why is this important?
What questions are you trying to answer? And then it provides you a scorecard up front with a very brief description. This came after hundreds and hundreds of visualization tests and probably a dozen iterations of this over time to get the scoring right, because, mind you, weighting the scoring on this is the hardest part. It really is, because it's very difficult for it to understand what's good and what's bad and the deltas in between. So that was very difficult. But the other thing too is that you get a scorecard on top, and then it goes into much more detail at the bottom. So it tells you, you know, who the audience likely is, who you're catering to, and what can be done. And the best thing about this, if I scroll all the way down, is, this is funny, it gives you a score tier. This score tier is "great, data with its life together." So that means it's pretty good. But it provides you a recommended order of what you could fix to make it even better and how that would impact it. This is the most important piece, because it's fun to have a tool that scores, but if it doesn't provide you actual feedback, and it doesn't provide a time estimate of what that would cost and how that would impact the visualization, it's not super helpful. So the most important piece is you wanna look at the top recommendations and see whether it's worth your time to make those updates and what that means. So, I don't have enough alt text here, and I agree with that. It was more of an artsy visualization, but it would be super helpful to have that so viewers could understand exactly what it's saying. Also, clarifying the legend scale and units to make it a little bit clearer, and explaining the central vertical highlights, you know, which are hard to see and hard to immediately grasp.
So there are things that you can do to make fixes in under an hour that will improve it and make the visualization more accessible. So this is not only catered to public dashboards, but also business dashboards. And there's kind of a smart tracker behind the scenes that determines scoring based on the type of dashboard that you're sharing. But if you wanna focus on certain elements of your dashboard, share that. It's not gonna impact the scoring, but it's gonna impact how it provides you feedback. Or you could say, don't score this particular portion, I'm working on it, and it will omit that from scoring, but it won't raise the score of other stuff, if that makes sense. Any questions on that so far? Oh, and, Annabelle, you're sharing these as we're going? Thank you. You're saving me. MVP. Data Mock Star. This was the one we talked about in a lot of detail here, and this one's a ton of fun. And I'm surprised it has probably close to five thousand convos right now. This was a dream thing for me, and I got it from a work example. Somebody had brought up something about pulling data from a PDF. I'm like, how about if we could do this, and we could do this, and we could do that? And that's how Data Mock Star came to be. And then, again, as I share with everything: how do I get started? So what I wanna do right now is give you some examples of datasets that you could do. I would always suggest this here, because there are a lot of things that you can do with this particular GPT. So always select this, but I wanna do something quickly. So what I wanna do is a mock dataset on LinkedIn. It helps you define the dataset and provides you with a data dictionary upfront, which is pretty awesome. This was something I wish was available to me when I was building out a lot of POCs and MVPs, especially working with Mockaroo.
Awesome tool, but it just made my head bang a lot because of all the work you needed to do with it and understanding some of the business logic that doesn't make a lot of sense to me. This gives you a really good understanding of what's going on here: what the column name is, what type it is, and what the notes on the columns are. And it gives you the distributions when relevant, too. So that's super helpful. And then geography: you could say what geography you want, but it gives you a default. And then it gives you information like, you know, maybe we wanna do this, but you could also ask it to come up with outliers as well. You know, maybe there could be a concentration of people in Iowa or Arkansas, for example, which you could do some analytics on for a POC. And then it gives you a five-row preview. And then you can look at it to see if it's exactly what you want. And if you like everything, it will go ahead and export, defaulting to a CSV. It defaults to one thousand rows because, typically, that's great for MVPs and POCs. But you could also have it download as SQL or JSON or Excel, and you could do full schemas as well. Just remember that there can be some limitations with memory, so you may want to iterate certain things. Like, if you want twenty thousand rows of data, you may wanna say, you know, maybe we should do this in two or three different outputs just to be safe, and then you could union them later. Next up, choose your own adventure, the title of this conversation right here. So, long story short, we don't have too much time, so I don't wanna go into too much detail here. But working with the 5-plus models on ChatGPT, I understood it can go places where it couldn't go before. So, ultimately, when I grew up, there was only one type of fiction book I loved, and it was choose-your-own-adventure books.
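The chunk-and-union tip above, generating a few safe-sized outputs instead of one large one and combining them afterwards, can be sketched like this. This is a hypothetical illustration in plain Python, not Data Mockster's actual mechanism; the column names and values are made up:

```python
# Hypothetical sketch of "generate in chunks, union later" for large mock
# datasets: three smaller outputs are generated separately, then unioned
# into one CSV. Columns and values are invented for illustration.
import csv
import io
import random

COLUMNS = ["member_id", "region", "connections"]

def mock_chunk(n_rows: int, seed: int) -> list[dict]:
    """One manageable chunk of fake LinkedIn-style rows."""
    rng = random.Random(seed)
    return [{"member_id": seed * 100_000 + i,  # offset per chunk keeps IDs unique
             "region": rng.choice(["Iowa", "Arkansas", "Ohio"]),
             "connections": rng.randint(0, 5_000)}
            for i in range(n_rows)]

# Three safe-sized outputs instead of one giant request...
chunks = [mock_chunk(1_000, seed) for seed in (1, 2, 3)]

# ...then union them into a single CSV.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=COLUMNS)
writer.writeheader()
for chunk in chunks:
    writer.writerows(chunk)

combined = [row for chunk in chunks for row in chunk]
print(len(combined))  # 3000 rows, no duplicate member_ids
```

The per-chunk seed offset is the one design choice worth noting: it keeps IDs unique across chunks, which is the usual pitfall when unioning separately generated outputs.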
Everything else was, like, sports books or almanacs and so forth. But this felt realistic, and it felt like I was learning something. And it was a lot of fun because you're the person that makes the decisions. And, effectively, these are just all workflows. And they're cumbersome workflows when you're going through a book and paging through it. But what happens if you're doing that in real life when you have real situations? For example, here in Tableau, you have different types of situations that you encounter as a Tableau person, whether it's building a dashboard or receiving feedback, and you could also provide your own problem for it to help you think through. So it's almost like a gamified thinking tool that helps you work from beginning to end. And I know Annabelle loves Chatty, and she's played around with this a little bit. What's your fun takeaway for this tool, TableauQuest? Did I play with TableauQuest or with DataQuest? I believe you played with both, but DataQuest... That's probably true. No, I really love it because, first, you can also, like you said, choose your own adventure, so it makes it more fun. I chose, like, a polar explorer from, like, the thirties or something like that. Yeah. Here's DataQuest. So I guess the gamified portion at the end is really fun, because you can go on learning paths or you can go on quests, and you can generate your own pixel art from Tableau. Yeah. So based on how well you did with your quest or learning path, it gives you a Tableau sparkle from one to three. So if you have a level three sparkle, it means you did really great with your quest. If you have level one, you did fine, but there are some things that could be improved. So that's a fun way, and it's all pixel art, and it's all based on your specific quest.
So if your quest was kinda going through a dashboard during a presentation, it would be a pixel art representation of that. And if you did really well, there would be a lot of Tableau sparkle. With the DataQuest one, there's also the opportunity to have pixel art at the end of each of these quests, and they're built kinda similarly. The difference is that TableauQuest is focused on pretty much anything related to Tableau, where DataQuest takes on a whole universe of data. So it could be data visualization. It could be data engineering. It could be data product management. It could be working with stakeholders, and different types of stakeholders, when working with data. There are a lot of things that are different about these, and that's why they're their own tools. But I really encourage you to have fun playing with this and provide feedback. Share it with me on LinkedIn, or directly, or post about it. I don't care. As long as the feedback is constructive, I'm happy to work on it and make those improvements. And those are a couple things I wanted to point out really quick. I did recently make a couple updates I'll go over very quickly. So as I mentioned, Data Mockster creates kind of a data dictionary for you. This is a dashboard dictionary that it could help create, because the worst thing when you're creating dashboards is creating documentation for those dashboards. And as covered earlier, if you have proprietary business information or business dashboards, dummy them up enough where you could just kind of replace some of the terminology when you make the updates. But, yes, this tool will really help you get eighty percent of the way there with your dashboard documentation, the least favorite thing for pretty much any Tableau developer. So that's an important note here. Here's another one that's called Dashboard Detective.
So a lot of these tools I built were built for people that are developing, but this tool is specifically for viewers. A lot of times, people are given dashboards that they don't understand. They look at it and it's like, I don't even know where to start. What is this trying to answer? I'm new to data, or, you know, maybe the person didn't follow best practices or whatnot, and it's very difficult to make heads or tails of a dashboard. So it would be helpful to have a tool to understand what that dashboard is trying to relay and share that in a way where a person with limited background in what was being built could really understand quickly. So this is a tool for viewers, and it's called Dashboard Detective, by me. And I have just two more featured ones left that I've built, and this is the meta of all metas. So this is GPT Architect Pro five point two. And the reason it's five point two is that it's built for the five point two model. And this is a system, so it's not only a GPT evaluator. It's also a GPT builder, and it's a GPT that works with itself. It can evaluate and build at the same time. It's the only tool like that I've really seen on ChatGPT, and it's actually kinda gaining some traction. I'll do a real quick, high-level pass just so you have an understanding of what it does, but it has ethical guardrails built right in. It really considers ethics as you're building out these tools, and it forces you to really consider those ethics as well. So it has a dual-mode system, build and evaluate, and it's smart. It does it behind the scenes based on what your prompt is. The build mode makes it production-ready, or helps you create a tool that's production-ready. So, basically, you can come to it with a half-baked custom GPT or just an idea for a GPT. Is this possible?
You know, help me start building this out, and then you can work with it and iterate. And what it delivers is fun name options, benefits based on the variants, full system instructions based on how the models work now, knowledge, and architecture design. So that's another thing: a lot of people think that building custom GPTs is just patching together a bunch of prompts. If you're doing that, you're doing it all wrong, and the GPT is not gonna really be of value. The delta between that GPT and just using a generic model is very low. And then it has a four-stage workflow; it's kind of identified up there. And then, in evaluate mode, as discussed, it looks at technical accuracy, practical usability, knowledge, actions, user experience, and production readiness, but it also goes deep into the ethical part of it, too. So if there's something wrong, something not evident that doesn't work well ethically, it's gonna point that out and make suggestions on how to improve it. And that's so important, because people are not really considering it, or it's very difficult to consider because you're not always in the shoes of the person that would have concerns about those ethical items. So a really important part of this is that ethics is a huge, huge thing with this GPT. So finally, we have one I just shared on Monday, and this is on the Flerlage twins. They have an awesome blog, and they're fun together. A big part of what they do is that they are identical twins, but they have slightly different personalities, and they also love to work together. So this is a tool that uses something called the Flerlage Fetch. That's a custom API call I built just to get the information from their blog, and then it works with you to get the content from their blog in a way that hopefully makes sense to you, and gives you a nice rundown of what that blog is without deep diving into it.
The other thing, too, is that you could have really fun interactions if you're a fan of the twins, to kinda get what their take would be on certain things. Maybe Kevin wrote a blog about something, but you would want Ken's take on that blog. So based on the personality profiles that I built for them, you can kinda get that as well. And you can get fun, silly stuff, too, like which Flerlage is the best. And, of course, people are asking me: there's a known, quote, unquote, third Flerlage that's called Keith. So I think on April Fools' Day, I'll add a Keith profile to this. So this is kind of the fun banter that you would get, but realistically, you're getting ninety percent content, maybe ten percent personality, because the content is the most important part to everybody that's working with this tool. But I would agree the answer is yes: you want them both, and you would wanna work with them both because they have different skills and they complement each other really well. And I think that's why so many people gravitate to what they do together, because their skill sets complement each other really well, kinda like Annabelle's and mine. I thought that you would call this GPT, like, Keith, like the third one. Maybe that will be the April Fools' joke: change it to, like, Keith Flerlage and only have a Keith personality. So this is a quick rundown, a super quick rundown on the GPTs. And I apologize if I was talking too fast, but there were a lot of GPTs to cover in a short period of time. And I wanted to make sure to get you a really good eyeful of the GPTs that we covered today. I think this is great. I mean, you had a lot to show a year and a half ago, but to see how it's grown, to see what's been added, and to see the logic that's been kind of baked in and evolved into it is really, really cool. I think this is a good overview, and it's kind of back to that question.
It might have been Denise who had asked, how do you get started with AI? And in going through this, my takeaway is: go try these. Go plug some stuff in. Go make some mistakes with, you know, dummy data, not sensitive data. Go see what works and what doesn't work. I think that's an underrated thing. Something I hear a lot in our kind of budding AI developer, analytics engineering group that we have at InterWorks is this idea of go use these tools, go see what works, but, just as important, go see what doesn't work. Because sometimes there are just use cases or things where maybe it doesn't work the way you thought it did. Maybe you need a different approach, or maybe, you know what, this is actually something that is more positioned for a human approach, more positioned for, like, a quality control approach. The more you use these things, the more you see the need for that humanity in certain things. And you had mentioned it with the ethics, Adam. That's a huge, huge point where that human intervention and that human oversight is really vital. I'm curious to that end, and we can ask, I think Ritesh had one outstanding question. But I did wanna ask while we're kind of on that topic: what do you think is the importance of that human component in all of this, especially as we start using more GPTs, and as AI becomes more complex and has more use cases? What is the benefit of having that human component? Well, to me, it's a requirement. So all my GPTs, even though I try to make them as easy as possible up front, require human interaction. So when you're utilizing a tool like GPT Architect Pro five point two, sure, I could build a tool that will make up a pretty good GPT on its own without really a lot of input from you, but that takes out what makes the GPT purposeful and yours. You don't want all GPTs to be the same.
And if there are a bunch of the same GPTs out there, you don't know which one to use, which one's using ethics, and so forth. So being able to come up with something, be involved with it, make it yours, and have your personality come through in it makes it better, and it's something that you could use and actually speak to and wanna work with and produce. If you have something that's all hands-off, you're generally gonna forget it, and maybe just share the tool, but you're not gonna make updates and so forth. You won't treat it as a product. And the main thing with the human interaction piece: first of all, without that, you're gonna get a bunch of hallucinations, and you're gonna get a bunch of wrong information, and it's gonna hurt the ability to trust the content that's coming out of it. Two is you're not gonna be empowered to build or create anything on top of it. Oftentimes I think of, like, this tool, for example. I've worked with this tool quite a bit, and I built with it, but I don't do a hundred percent of my work from it. I consider it a really great starting point, at least for some of the stuff I do. And I wanna put a lot of what I would do into it to make it better. So one of the things is that you don't have to create a lot by yourself. What you could do is work with it as a partner. You work with AI as a teammate, as opposed to something that's just gonna make something for you. You're not taskmastering it. You're working with it as a teammate, so your input is baked right into whatever you build. So that's the most important part of that right now. And two is that, for anything, especially as people are concerned about job loss and so forth, you don't ever wanna take humans completely out of something. For one, it's gonna be difficult to understand what your value is. But building tools that incorporate human interaction, that's a lot of value-add right there.
So it's not only to build, but it's also to learn. So incorporating learning with human and AI interaction is super important to what we're gonna be experiencing in the future. Ritesh had asked a question during the demo section. He'd asked, can you share any insights or learnings you have on building a semantic layer and data governance for use with agentic AI? So that component of it. So that's quite interesting. It all starts with the data. It's not even just the AI tool itself. You need to make sure the data is in the right spot, governed, not redundant. And that's a part people miss: you could build a good AI tool that creates a semantic layer, but if your data is in poor shape, it's gonna come up with really bad information for you. So focus on the data and work on the data. Many tools could get you from point B to point C, but only you could really take it from point A and make it good enough to go to point B. So you really need to get the data right. One of the mistakes people make is they throw in all their data. It can come from a data lake or whatnot, and it doesn't really work well with unstructured data. You need to find a way to structure the unstructured data, whether you create the metadata or whatnot, but that has its limitations. You really want good, structured, modeled data that you could feed into it, because that will give you better information and better clues as to what you could do with it on a semantic level. And I agree that in the next couple years, the semantic layer is gonna be more and more relevant as we go on, just because a lot of people can get what they need from working with, communicating with, and understanding the semantic layer. They'll be able to understand their data a little bit more and understand what it could build downstream, whether that's dashboards or metrics. That's great. That's great intel. Okay. This should be an easy one.
But Irene had asked, is there something similar to TableauQuest for Power BI? Have you seen anything out in the wild for Power BI users or non-Tableau users? Is there anything like this? Or is there just, frankly, not an Adam Mico for Power BI? I don't know what to tell you. I think I'm the only one that's really built these types of tools, but I would work with DataQuest, because I think that can get you a good portion of what you would wanna get on the Power BI side, because, no secret, tools besides Tableau are covered in the DataQuest part of it. So I would utilize DataQuest if you wanna work with Power BI, because that's more tool-agnostic. It actually is tool-agnostic, and it covers a lot of popular tools, including Power BI. Awesome. But you can still use VizCritique. You can still use Data Mockster even if you use Power BI. Yeah. Yeah. That's good intel, because, you know, at the end of the day, obviously, they're unique visual analytics platforms. But when you look at the underlying fundamentals of what you're trying to do and what you're trying to achieve, it's largely the same. And I've done that intentionally, just to give people an opportunity to work with it that aren't just Tableau people. So if it reads Tableau on the GPT, that means it's not tool-agnostic. But if it reads anything but Tableau, that generally means it's tool-agnostic. So as you mentioned, Annabelle, VizCritique Pro is a great example. You could even use PowerPoint or those exploding pie charts in Excel and evaluate it. I don't know how well that would evaluate, but it's funny, because I tested it before with non-dashboard stuff, like a person's picture, and it says, please send me a data visualization. So it has to be a data visualization, and, hopefully, that's what you're using it for. But, yeah, any tool would work.
So Power BI would be a great use case for that. Yeah. A PowerPoint slide deck, I'm sure you can also analyze, or D3, anything. Yep. I really love DataQuest, too, not only because it's fun but also because it asks real-world questions. And some were strategic, some were how you would present this and that, and that was very interesting to think through. Yeah. It has a lot of built-in seeds there. There are a hundred seeds for each, but they're intended to be kinda like starting points. So pretty much any type of situation in those realms should be coverable, including any ad hoc stuff that you have from work. If you have a data problem or a Tableau problem, the Quest series is kind of perfect for you to figure out where to go, what the wins could be, and what to consider for potential wins. This is great. I know we're at eleven o two, so we're a little over time. But I did have one question if you both have the time to answer. It's one I wanted to ask Annabelle, and, Adam, feel free to chime in too, because I know you've been working with her a lot. Annabelle, as you've kinda gone on this AI journey with Adam, learning about things, whether it's Tableau Next or all these different GPTs, and you've gotten your hands on them: what has your journey with AI been like over the past year and a half? How do you feel you've grown in your AI skills, but also in partnership with Adam? So, we had fun. I learned a lot about Tableau Next, and we had a lot of fun. I played with these GPTs, especially when we were writing the Tableau Next blogs, because we needed, like, fake data. So I also used Data Mockster, and I really love it. I followed several trainings on AI, especially on ethics and AI. I haven't played around with building a GPT yet. I don't know.
Maybe I'll start doing that in the following months when I get bored. But I will say that I also played around a lot with ChatGPT. And I discovered that you can become, like, very dependent and lose your skills, like, more or less, depending on what discussion is going on in the chat. And I saw that there is also a way, instead of asking ChatGPT, hey, fix the code for me, write this piece for me, write this letter, this email, you can say, hey, this is my draft, tell me how I can improve it, so you learn. It will suggest, hey, you should change this and that. You try again and you try again, and that is also a way to learn better. For me, it's like looking at somebody new and excited about the potential, but she also provides great feedback as to, you know, what the pitfalls could be. So she has a good view of both the great and the bad. And part of the reason why I build the GPTs the way I do is that I wanna encourage interaction as much as possible. I could see where people just get brain rot from being too dependent on a quick response, copy-pasting it and throwing it into a chat or their LinkedIn or whatnot. But that's not what it's useful for. I think it's useful, even if you're not building GPTs, as a learning partner and as a teammate. So you could bounce things off it. Be critical, challenge answers, especially if you're concerned about the repercussions of those answers, or maybe it's not thinking of it in the right way. Because oftentimes it's sharing information that may not be fully accurate, or could be damaging to you. So you have to think of ways, you know, maybe this approach doesn't work, so let's think of other approaches. Give me other alternatives besides the one that you shared. It's really good at working with you in that regard.
That's why you refer to the GPT as Chatty when you're working with it: you're a teammate with Chatty. Chatty is not just telling you what to do. Chatty is your teammate. Sometimes it annoys you, but then it works with you. And you'll be able to learn more if you use it correctly. So use it as a learning tool. Don't use it primarily as a content creator. Yeah, I love it. Well, we're at eleven o six. I think we've answered pretty much all the questions, but anything else before we part ways? Just to reiterate what's in the chat: yeah, we'll be sending an email with a recording of this. It'll be on YouTube as well, and we'll include all the links to the different GPTs. Yeah, great reminder, Ritesh: be careful how much you share with GPTs. Always good to be smart, but we'll have that for everyone, and then you can share it with other people who maybe didn't tune in. But anything else before we part ways? Great. Well, thank you, Annabelle. For the next one? Yes, you can join us for the next one in March. But as always, Annabelle, Adam, it's very informational, incredibly practical and useful. I really wanna go play with these as well now. You know, I wanna go try them out and have some fun. So thanks for building them. Thanks for what you both do for the community. And with that, I'll say have a great day. Thank you. It was a pleasure. Thank you, Adam, for coming. Of course.

In this episode of the Data Forum series, Tableau visionary Adam Mico joins co-hosts Annabelle Rincon and Garrett Sauls of InterWorks to explore the rapidly evolving world of AI and custom GPTs for data professionals. Adam showcases a suite of free, publicly available GPT tools he has built, including VizCritique Pro, Data Mockster, TableauQuest, and GPT Architect Pro, designed to help analysts build smarter, work faster, and learn more effectively. The conversation covers best practices for working with sensitive data, building agentic AI responsibly, the importance of human oversight in AI workflows, and how the Tableau community continues to lead the way in data innovation.
