So I'm Ben Vasileo. I'm the Global Director of Product here. I spend a lot of my time thinking about how we can make better experiences around data and deliver analytics in a more effective way. So you know, how do we create user adoption? How do we make it so people can find and share their insights? And Curator by InterWorks is one of the products we build for that, but we also focus on supporting the ecosystems and making, you know, everything run efficiently for everyone. One other housekeeping note before I forget: you will get a recording of this talk afterwards. That comes a few days later, and any materials that Dan wants to send will be part of that as well. Yeah, we'll have a slide deck and we'll probably add a thing or two based on the questions that come in. And so I'll introduce myself. I'm the guy that started the BI practice at InterWorks, probably back in two thousand eight or nine. I remember Ben when he was like the third employee hired, and we made him do all the horrible jobs and he did them really well. Now he's a boss. So I probably know some of the people on the call. If I do, I'm glad you arrived. And let's get started, because I'm gonna talk about a lot of things I hope you haven't heard discussed directly related to BI and the ecosystem. So we'll have about fifty minutes of discussion; Ben mentioned polls. At the end, we'll have up to ten minutes for Q and A, but that doesn't mean you shouldn't ask questions during the session. Just ask them when they're fresh in your mind. As Ben said, he'll break in if it's something that's really relevant to what I'm saying, and we'll provide the follow-up afterwards. So the agenda for today is really to go through a brief review of how BI has evolved over the last twenty years, because there have been a lot of changes.
We'll discuss opportunities that that's created, some of the challenges that have been caused by the new tool sets, and we'll offer some solutions, and then we'll finish with the Q and A. Alright, so there has been an evolution, or a revolution depending on your point of view, in the whole BI stack. When I was getting out of college in the early 80s, you had like two or three choices for databases. It was IBM or Oracle, maybe one or two other tools, and that was it. So the stack was much simpler. It looked like this. You had a source system, probably all of it was on premise — no, not probably, all of it was on premise. You had a single data warehouse that was in a single database, and you had reports. And this actually was reasonably effective. I can remember the first time I got a data cube on a two thousand four hundred baud modem, and it only took two minutes to download the report. And I thought that was amazing in the late 80s. So the good thing about this was that it was relatively simple to manage. It wasn't really that easy to deploy, though. The tools required technical experts to use, and you didn't have end users building their own reporting. And so what I saw early on was a lot of backlogs in BI groups, because there were just many more questions than there were people to build the reporting. So it was not uncommon to see six month, twelve month, even twenty-four month backlogs. So we had these legacy systems, they were on premise, and in the late 90s and early aughts we started seeing the early cloud companies, and Salesforce was one of them. I can remember getting calls from Salesforce salespeople in nineteen ninety-nine. And I thought, well, this is a good idea, but the web's just way too slow. It'll never work. Of course, I thought in nineteen eighty that the five megabyte hard drive in my computer was going to last me forever. Now, what we started to see is that more and more data was coming in faster.
And this started to create some real challenges, because you had this mix of on premise and cloud sources, and you had this single point of failure, the data warehouse, and you had to process that data. It didn't come in clean; you had to transform it, you had to fix the errors, fill the missing holes, and it took time. And so the throughput constraint was in IT and the BI team that had to manage all this. It was relatively easy to see and manage because there weren't that many moving pieces, but that was also a problem. The typical response I would get when I'd walk into a client fifteen years ago was, the reports I get are pretty good, but I just don't get enough of them, and I'm too reliant on people that are too busy. And they just had to do a lot of gut decision making, because they couldn't get the data that they needed to make better decisions. Right, so let's do a quick poll question, and Ben's gonna bring this up. We're just curious to see what types of tools you're using. And Ben, I think you've got to scroll down there a little bit to see some of the ones on the bottom. I'm not sure what people are seeing. Yeah, they can scroll in the chat themselves. Okay. Alright, so this is pretty much what you'd expect to see: a lot of Excel, the database of choice for people without a database, and then SQL Server, which is everywhere. We've got, you know, PostgreSQL and MySQL, which are, or at least started out as, open source, and Mongo, which is pretty popular when you're dealing with unstructured data. And then Snowflake and Databricks, which are coming on strong. And of course, Oracle's been out there for a long time. So these results aren't really surprising to me at all. How about you, Ben? You see anything there that stands out? No, I mean, I think in most cases people have multiple systems. So I'm sure, you know, people don't have one database, they have several.
And if you've never heard of DB-Engines, I always think it's interesting to look at their ranking list. Three ninety-two commercially available databases as of a couple of days ago, and they probably span about fifteen different core designs for different purposes. That's what we've seen: databases have become special-purpose tools built for different types of data and use cases. All right, should I stop sharing, Ben? Yeah, you can move on. Okay. All right, so how's it different today? Well, we have two things, two things I see. If you've been in the BI game for a while, and you got into it in the old pre-cloud days, you probably have a hybrid setup where you have some databases that are on premise, others that are in the cloud. And then of course, we've had this expansion of data science and predictive analytics, prescriptive analytics, we're pulling stuff off of web pages. So we have, you know, this concept of object storage, unstructured data that could be used for a number of different things. We have on-premises data as well, possibly in a number of places. So we've got to structure the unstructured data, we've got to feed in the more structured stuff, and it all ends up in a warehouse, which in most cases today, when you're building new, will be a cloud-based data warehouse. Because you don't have to worry about the capacity problems, you don't have to manage the hardware, it just makes a lot of sense. And there are financial realities related to this as well that facilitate people wanting to move to the cloud. So it's worked well and it's cost effective, but there are challenges that come with it. And one of them is just the amount of complexity that you have to grapple with if you're the IT manager responsible for your data in the company, because it isn't any longer the province of just the BI team and a few technocrats that know how to build things.
There are many different people that are able to access data and build things. And there are many different tools available that make that much easier to do. So we have all these different loading and transforming points now that didn't exist in the past. And all of the same functions are happening; they're just different people doing those functions. And a lot of them are not technical experts. They're domain subject matter experts that have learned enough about the data they use to build their own reporting and their own analysis. So the positive, or the opportunity that's created, is that people have better information to make decisions with. But as a backend or a data manager in a company, this has increased the amount of complexity that you have to manage, because who decides what is the best source of data when there are hundreds of people making new data sources, possibly every day? So this has become more complicated to manage for that reason. So let's ask another question here, Ben. These are different integration tools. They might be just data loading tools or data transformation tools. We're just curious to see which ones your company, or you personally, are using. Okay, so we see two of the ones that we use quite a bit. And then all of the incumbents are there. I'm surprised — oh, there goes SSIS. Nobody's using SAS. Nobody's using Talend. Nobody's using Matillion. That's actually surprising to me. Okay. I'm gonna go ahead and share those results so they can see them. Perfect. I'm surprised that SSIS isn't higher given the number of SQL Servers. Yes, so SQL Server Integration Services is the data integration and data transformation tool that ships with SQL Server. So you may not have known that — if you're not working in the data, you might not know what that is, but that's what it is. If you're using SQL Server, you're almost certainly using that.
The other tools are, again, a lot of them legacy tools. Fivetran and dbt are typically tools we see in the cloud universe because they're well suited to that. And it's really more of a choice as to how much coding do you like doing versus how much drag and drop do you wanna do. Okay. I'd say for the four people who answered other, I mean, if you wanna post those in the chat, we're always really curious about the other tools. Yeah. Yeah. Please do. Should we move on, Ben? Sounds good. Okay. Okay, so obviously this change over the last twenty years has presented opportunities and it's also created new challenges. So let's talk a little bit about this. All right. So we know the cloud's main advantage is that you just have this infinite scale at a dial for a fee. And that's part of the challenge: knowing how to manage the fee. But it does give you the ability to store infinite amounts of data, which is relatively trivial in cost. And the real issue is the speed and number of queries that you're asking a cloud data source to handle, because that can get expensive if you aren't smart about how you manage it. The big advantage here, the big problem it eliminated, is that, you know, in the pre-cloud days justifying a BI project was a really expensive endeavor, and involved a certain amount of risk. I mean, before I was a consultant, when I decided I was going to build a data warehouse, I went out and got quotes for six months. And half of the quotes were for more than we spent on our enterprise resource planning system. And I thought, I can't afford to spend that much money on what is, in essence, a reporting system. Of course, today in the SaaS world, you don't have this huge upfront cost. You can probably run a proof of concept project and a demo for yourself to prove everything works without even licensing the products, if you can get the vendors to agree, and most of them will.
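That point about query volume driving cloud cost is something you can watch for mechanically. As a toy illustration only (the figures and the spike rule are invented for this sketch; real numbers would come from your warehouse vendor's metering views), a few lines of Python can flag days where compute spend jumps well above its recent baseline:

```python
# Toy compute-spend monitor: flag days where credit burn jumps well
# above the trailing average. All figures here are made up.
from statistics import mean

def flag_spend_spikes(daily_credits, window=7, threshold=2.0):
    """Return (day_index, credits) pairs where a day's spend exceeds
    `threshold` times the average of the previous `window` days."""
    spikes = []
    for i in range(window, len(daily_credits)):
        baseline = mean(daily_credits[i - window:i])
        if baseline > 0 and daily_credits[i] > threshold * baseline:
            spikes.append((i, daily_credits[i]))
    return spikes

# A quiet warehouse with one runaway day of ad hoc queries:
usage = [4, 5, 4, 6, 5, 4, 5, 5, 22, 5]
print(flag_spend_spikes(usage))  # [(8, 22)]
```

The point of the sketch is just that the "sound controls" the speaker mentions can start as a very small script before they become a formal monitoring product.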
And then the other part of this is that we've seen this convergence start between traditional data warehouse applications like Snowflake and a more, let's say, traditional data science environment like Databricks. You know, five years ago, they were very separate and did totally different things. Now you can see both of those companies are adding the features that they didn't start with. So Databricks is becoming a little bit more like Snowflake, and Snowflake has become a little bit more like Databricks. And this is sort of converging the two worlds of the historical with the predictive and the prescriptive. The other advantage is that in the current world, you know, the API world, the cloud world, everything's still SQL based, because that's the language of data. But we have all these drag and drop tools; you can pick different coding tools and scripting engines that work for your particular situation. There isn't one size fits all. You can plug and play whatever you need. And I think the biggest change we've seen is in the extract and load world. You don't have to have just one tool. In fact, you're probably gonna have several, because you'll just find that different tools work better for different situations. And yes, you have a little more complexity, maybe in licensing, but you save a bunch of money in terms of time on build outs. And so this has, again, all created opportunity for creating better data faster. But it's just made managing the back end of this a little more complicated. So this whole concept of best of breed versus going with a single stack provider has completely changed how most of the projects we work on are done. I can remember the first project I did with Ben, probably two thousand nine — everything was SQL Server, and a little bit of Excel. And that was pretty typical twelve or fifteen years ago; you know, there was a single stack vendor.
If it was a smaller company, it was Microsoft; if it was a bigger company, it was Oracle, IBM, or SAP. And that was it. Now you have infinitely more options. And therefore there isn't a single model that works for every company. Every company is a little bit different, but fundamentally similar in a lot of ways. Let's ask another question here. And this really gets at — you may not know the answer to all these questions, but I'm always curious to see what companies decide they wanna bring in house versus outsource. Because we have a lot of clients where maybe their first investment in BI is somebody that learns how to make a pivot table in Excel. And then some people get a database administrator and they start building databases. But there isn't a single way to do this. We have a lot of clients that just focus on the last mile of dashboard building, and they outsource database administration and database design. So this is — I don't know if we've done a poll like this before, Ben. I don't have any real expectations about what we're seeing, but I guess it makes sense to me that database administration and security would be two of the top in-house positions, because let's face it, if you don't have security, you're gonna eventually be embarrassed and possibly lose valuable proprietary information. I mean, I remember several years ago — I live in Atlanta, and you probably all remember Equifax, a data company that does your credit ratings, that had data breaches, and that wasn't good for their business for a while. So dashboard development, that is a typical inroad, of course, especially if you're using Tableau; that is something we see a lot. But it isn't necessary to have all of these skills in house. You can outsource what you need that you don't have, and you can develop that over time in house with your existing employees. I do like the comment in our chat about how sometimes one person satisfies all of these — oh yeah, you do everything.
Yeah, I wanna comment on that, because one of the things we kinda have fun with here is that clients will call us and ask, so I'm looking for someone, can you help me find them? And we say, fine, send us your job description. And the job description is about two miles long and encompasses every one of the skills you see in the poll. And of course, if that person existed, they'd cost a million dollars a year, because they don't exist. It's a pretty diverse skill set. But you know, for small companies it's pretty common to want somebody that can do dashboards and knows how to make a database and might understand a little bit about data transformation tools. And you can find a person like that, and they'll probably be really good at one of the three and acceptable at the other two. Would you agree with that, Ben? Yeah, I think that's true. I mean, generalists are super useful in a lot of situations because they can see the connections between things, but they're not gonna have the depth of knowledge of a specialist. Yep, so security — everybody's used to seeing the data breach information that's available on the web. So that gets a lot of focus, especially from IT people, who are never thanked for the beautiful reporting and the wonderful database, but they do get punished when there's a breach. So there tends to be a lot more focus on security almost always. So are we ready? Have we shared this? Yeah, I shared it, and I just wrapped it up. Right. So let's review some of the challenges that we've spoken about here. Clearly, you know, the cloud universe and the hybrid universe is a little more complicated to manage from a spending standpoint, especially with some products like Snowflake making it very easy to separate your storage costs from the compute costs that are generated from running queries. And storage costs are really trivial. I mean, they don't typically affect the decision to buy or not buy.
It's the compute costs that clients start to understand once they start using the tool in a proof of concept. There's a little learning curve there, and a need for having the right sort of monitoring system so that you're aware of what's going on. Because every month, you know, depending on the volume of queries that you're running, your prices are gonna change — or your costs are gonna change. So it usually takes four to eight weeks for companies to get enough of an understanding that they can wrap some, let's say, sound controls around that. Ben, do you have any comments on that? No, not especially. I mean, I think data governance is one of those things where you either have none of it, or people try to put something in that's so rigid that it's hard to innovate, and finding that balance can be really complex, especially when you have this many systems. Yeah, totally agree. And so there's this managing of complexity from the standpoint of a technical manager. In the old world, it was very simple. There was one pipe going from left to right on that screen. And now in the newer world that you see on the right side, there are just many more points at which you could have a failure, or many more ways that data could be duplicated unintentionally. And that's what we see in environments where they didn't think about governing the data beyond security. I've been in companies where they have twenty-seven servers, a dozen different data tools, two dozen different data transformation tools, and they have duplicated the same data over and over again. And we get called usually a year or two after this happens to say, how do we sort this out? And because a lot of this growth started with tools like Tableau, where you had a lot of end users that were thirsty for getting their own information, a lot of companies didn't realize what that was gonna unleash.
A lot of creative expansion of data, and if you didn't have a good, you know, thought process wrapped around that, you probably didn't establish the governance system that you needed. And I find that many IT managers think of data governance as basically access prevention. Like, we're gonna control access, and we will, you know, set up rules for who can and can't look at something. Unfortunately, if you're too heavy handed with that, and you don't make data accessible, you're just encouraging bad behavior. And before I was a consultant, I was one of the worst offenders in the company I worked for, because if I couldn't get secure access, I was bound to go out to lunch with somebody that had it and get the data I needed, even if it was in a flat file. So you wanna make security protect you, but not make it so arduous that people have to work around the system to get what they need to do their job. And so, you know, there's this concept of data lineage and data provenance and data curation. What I mean by lineage — and by the way, I'll have a cheat sheet in the slide deck that you'll get — is just the chain of the data that makes up the end thing that you're looking at, whatever it is, an analysis, a report, so that you see the chain of data that makes that up. And presumably, these are the authorized sources for what makes it up. Now data provenance is something that I've thought about for a while, because I think in this more complex world, it's more or less impossible for a centralized IT team to manage all this themselves. There's just too much going on. And so I think of provenance this way regarding data. You know, I thought of it in terms of the art world, where you hear stories of hundred million dollar paintings being bought by investors. And they spend a lot of time making sure that that artwork is the actual real thing that, say, Van Gogh painted. They do their diligence to make sure they're buying a real original artwork.
And the same concept can be applied to data. If you have a certified provenance over a particular data source for a domain of information — for example, a sales database — then you've certified that this is the sales database, and the manager over the workflows that are generating it owns it. I think that's an important concept that I'll talk about some more, because it's a way that, as a manager, you can decentralize control and still maintain visibility and accountability. And the last part of governance is curation, and that's just making it easy for people to find what they're looking for. And that's what we'll end the talk with later, talking about the tool we built called Tableau Curator — or Curator, actually; it's more than just Tableau. So let's talk a little bit more about all the challenges related to this variety and size problem and velocity problem. I'm gonna skip through this; this is just for you after we've finished. Well, let's ask one more poll question before we get into my talking points. So I'm curious as to how many of you actually have a well defined governance system that includes all of the aspects of what I was talking about, because I think very few companies have this. And if they do, it's normally just related to security. And right now the winning answer is no. And security is in second place, and that isn't surprising to me. How about you, Ben? No, not surprising at all. I mean, I think the interesting thing is even when it's yes, there's always some Wild West thing happening. I mean, it's the yes, but. It's the yes, but we don't do this, this, and this. Yeah, I mean, a number of times we've come in to do a strategy engagement where we kind of do direct interviews with people who are using the data. A lot of times we'll ask, well, show us your data sources, and IT will be sure that they're all going to this central location.
And, you know, they so often have an Excel sheet, an Access database, their own database that someone spun up for them, feeds that they're getting from someplace else. And so the amount of data that is just distributed in large companies is very widespread. Yeah. And for the most part, whoever is managing all of that, whoever has the title, has no idea most of what's happening. It's just impossible to keep up with from a central point. Alright, so we'll move on. So what are some of the solutions to this problem? What are the things you can do? Alright, so first off, you know, we've shown some generic workflows here. It's a really valuable exercise for you to go and document your data workflows throughout your organization. Now, I may think of this a little differently than a lot of people. To me, data workflows are the business. You know, every commercial entity generates revenue, takes orders, issues some kind of work order; it's either going to a warehouse or a shop floor or a team. All these workflows are generating data. And so documenting those is somewhat similar to documenting your entire business. And that's how I think of this: data isn't the end product per se, it is an output of the end products. And so a lot of the concepts that were developed in the late 80s and early 90s in manufacturing, or process improvement, are directly relatable to data. You have a virtual factory building data, and you have an actual factory building products, or an actual team building services. So security — yes, everyone knows you need it. And everybody's probably got it. And in many cases, it works pretty well, because it's a centrally controlled activity. And most people don't say that's a concern unless they've been breached recently. Now, tracking and measuring all of your licensing and data usage and data volume and different data sources — this is where we start seeing people fall short.
Because they may have all the tools and the pieces, but they haven't thought through: how do we monitor licensing usage, especially in the cloud world, if we're paying every month for people to access these tools? Are we using the tools? And if we aren't, how do we reassign those licenses? So we do spend a fair amount of time in strategic and tactical planning with clients to help them figure this out. Of course, I just got a phone call on my cell phone. So, you know, this tracking and measuring builds a framework of governance for measuring improvement, measuring activity, and measuring volume. And it's also, you know, helping you and your team build data literacy skills around the metadata of your entire system. And I think this is important when you start dealing with governance issues, because if you have some facts to start with, you have a way of measuring improvement, and you can begin to decentralize some of the workload and responsibility for managing this thing. So of course, governance starts with, you know, having the right software, but decentralizing governance functions for data provenance and, you know, lineage, I think, is the only way this can be managed in a modern framework, because one team cannot manage a global enterprise that has hundreds of people creating data. And so one of the ideas that I have about this is that you should decentralize the responsibility for data governance to the operational managers that are creating the data as part of their job. And this is not something we typically see in clients. There are a few that do it, and I think they're the ones that do the best job managing their increasing complexity. But what I'd like to see more of is that, you know, management needs to start looking at data not just as an end; it is part of the whole chain of value creation. And so if you're working on improving your business, you need to be working on improving your data. And let me give you an example.
I'll go back to the first time this dawned on me, which was before I came to InterWorks. I was the CFO/CIO for a multinational, and we were developing a data warehouse, and we were using Tableau for the reporting. And what I found out really quickly is that when we thought we had good data, we really didn't. Every single workflow had data mistakes. And I used to call these the Monday morning oh-shit sessions, because I started measuring it. And then I had a team of about seven people from different work groups who were the managers of those teams, and I'll focus just on one of them: our customer service manager, who was responsible for every order we entered. And these orders could be for ten dollars or a hundred thousand or half a million dollars. And what we discovered over a pretty short amount of time, about a month, is that they generated a lot of errors in order entry, because we were a national company based in the United States that went international, and our systems were not designed for dealing with international. And this is predating Wikipedia and broad use of the web in an office. And we had a lot of just basic order entry mistakes, because our team didn't understand international zip codes and postal codes, and in a given week, we might get fifteen or twenty of those wrong. Well, this manifested itself in customers not paying on time. And of course, a lot of people thought those customers were untrustworthy. Well, when we looked at this, we realized the customers weren't paying because they didn't get an invoice. And so, you know, I started spending one day a week reviewing the quality of the data coming out of each of these different work teams. And within about two and a half months, we eliminated ninety-five to one hundred percent of the problems. And our Monday morning oh-shit meetings weren't necessary anymore, because those managers took over running their own data quality once they realized and saw the benefit of doing it.
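That weekly review amounts to a simple check run over each team's records. A toy version in Python, assuming hypothetical order records with a postal_code field (the field names, the sample data, and the deliberately loose rule are all invented for the sketch; real checks would match your own workflows and formats):

```python
import re

# Very loose rule for the sketch: a postal code must be present and
# contain at least three alphanumeric characters. Real international
# postal formats vary widely; this is deliberately minimal.
POSTAL_OK = re.compile(r"[A-Za-z0-9]{3,}")

def order_entry_errors(orders):
    """Return the orders whose postal code fails the check."""
    return [o for o in orders if not POSTAL_OK.search(o.get("postal_code") or "")]

orders = [
    {"id": 1, "postal_code": "74075"},     # fine (US)
    {"id": 2, "postal_code": ""},          # missing -> invoice never arrives
    {"id": 3, "postal_code": "SW1A 1AA"},  # fine (UK)
    {"id": 4, "postal_code": None},        # missing
]
bad = order_entry_errors(orders)
print(len(bad), [o["id"] for o in bad])  # 2 [2, 4]
```

Run per team per week, a count like this is exactly the kind of fact-based scorecard that let those Monday meetings retire themselves.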
And this is what I mean by data provenance. The customer service manager is entering all of the customer information, all of the order details, and if that team doesn't get that right, it generates downstream effects that affect everyone. In that case, it was a manufacturing company, so it affected orders that were going to the shop floor and production efficiency, and ultimately ended up with non-payment of bills — the client got the product, loved it, would have paid on time, but didn't have an invoice to pay against. And so that to me is an important aspect of any data project: figure out who owns the workflow, incentivize them to care about data quality, and show them why they should care. Most of them will figure out that getting the data in right the first time saves them a lot of time and agony later and improves efficiencies everywhere. Now, number five here, steal from the past. What I mean is that there are many really good books from the 80s, 90s, and probably the 70s that talk a lot about process improvement. Most of it was generated from the automotive industry. I can tell you that Meta, Google, and Microsoft have all stolen liberally from these ideas. They've renamed them, they've repurposed them. But I recognize a lot of stuff from the 80s in what these tech companies have just renamed, and they're following the same tenets. So I'm gonna include some information on some books that you might find helpful. If you want to undertake some of this, you'll get some ideas there. And then of course, one of the easiest things you can do is improve accessibility, because mostly what we've been talking about here is data quality. Accessibility is putting data in a place that is easy for people to find and use.
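The "figure out who owns the workflow" idea can be made concrete with even a tiny registry: each domain dataset records its certified owner and the upstream sources it was built from, so anyone can trace the chain. A hedged sketch (the dataset names, owners, and fields are invented for illustration; a real catalog tool would carry far more metadata):

```python
from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    owner: str                    # the operational manager accountable for it
    certified: bool = False       # has the owner signed off on this source?
    upstream: list = field(default_factory=list)  # names of source datasets

def lineage(registry, name, chain=None):
    """Walk upstream sources depth-first and return the full chain."""
    chain = chain if chain is not None else []
    ds = registry[name]
    chain.append(ds.name)
    for src in ds.upstream:
        lineage(registry, src, chain)
    return chain

# Hypothetical sales domain: raw orders feed a warehouse table that feeds a dashboard.
registry = {
    "orders_raw": Dataset("orders_raw", owner="customer service mgr", certified=True),
    "sales_dw": Dataset("sales_dw", owner="BI team", certified=True, upstream=["orders_raw"]),
    "sales_dash": Dataset("sales_dash", owner="analyst", upstream=["sales_dw"]),
}
print(lineage(registry, "sales_dash"))  # ['sales_dash', 'sales_dw', 'orders_raw']
```

The design point is that ownership lives with the people generating the data, while the registry itself stays central — which is exactly the decentralized-accountability-with-central-visibility balance the talk argues for.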
And so the idea for Curator came from what I said before. Early on, probably more than ten years ago, we started thinking, we have a lot of people using Tableau who don't know SQL, they aren't coders, they don't know database architecture, but they are able to get a lot done. Don't we have a tool that would make it easy for somebody that doesn't code, doesn't build websites, to embed their data in a website without having to write code? And that's what Curator by InterWorks is. It's a no-coding environment for building embedded consumption of whatever dashboarding or endpoint tools you're using. So there's no coding, setup is usually a day or less, and it's just easy. So, you know, what's missing from this picture is that you need something out here to the right that becomes a data hub that makes it very easy for people to find what they're looking for. And what we've found is that while there are a few people that traverse a lot of different data, most people have two or three or maybe four dashboards that they look at every day. They might have a few that they look at once a month, but they don't have that many different sources they're using regularly in their job. And by using a tool like Curator, you can make that consumption environment extremely friendly and easy to use and accessible from any device. So that's what Curator is. It's a no-code data hub. And if you get or want a sample Curator, because we have free trial periods for this, and you're one of the people that are going to set up your own embedded website, this is where you start. You get a nice environment that's got videos that explain things, blog posts, detailed documentation. It's been made to be as easy as it can possibly be to set up by yourself. Of course, if you want help, we're happy to provide it. But it's very easy. And the first time I realized how easy, I think I was giving a talk in Boston about six years ago.
I had one of our sales guys, Dustin Thompson, who had built his own Curator site, and I didn't tell him ahead of time that I was going to have him talk. I had him get up in front of the audience with a live version of Curator and show off the webpage he'd made with embedded content. Now, Dustin is a great salesman and a fantastic customer and admin guy, but he is not a coder and he doesn't really understand data. He couldn't build the dashboards. Well, maybe he could build a dashboard with Tableau, but he's not a data guy; he's not technical. And the tools we had then compared to what we have today are night-and-day better. If you're the one who's going to be setting this up, you could probably do it all on your own in under a day. But if you need help, we're there to do it.

And if you want to see example sites, we have plenty of them out on the internet. There's a link at the bottom of the slide, but these are just some of the examples you can see. They all look different; they're all different industries and different use cases. Ben and his team have done a fantastic job of putting this together, and in fact, I'm sure there have been dozens of people at InterWorks who have worked on this. So there's plenty of example content to look at.

And then one specific example that's a favorite of mine is a restaurant dashboard in Curator. The top half of this particular example shows some fairly simple executive-level dashboards built in Tableau, and down below is a way for people to ask plain-English questions using ThoughtSpot. They're both right next to each other on the same page. We've also built Curator sites where different tools actually interact with each other, and sites that allow write-back to a database, based on different questions you want people to be able to answer, which then feed back into a system.
But these are just three different examples of how we make it easy to set up, and sites that give you ideas about what's possible. And then here's just a more detailed example of what's possible. So I think we're right about at fifty minutes, and we're going to allow ten minutes for Q&A.

The point here is that if you're going to start the right way, you really need to document what you're doing now. I think James Wright came up with this term, and I like it: you need to pay your technical debts. You don't want to end up with twenty-one trillion in technical debt any more than you'd want money debts that high. You need to understand what you're doing now before you can improve. Then you have to have security, but it has to be balanced with accessibility, and that balance is important. If you don't make it easy for people to find what they need, they're going to get it one way or another, and usually the other way is not the way you want.

Also, measure the usage of the tools you've got: the license activity, the amount of data being processed, what endpoint content is being consumed most actively and by the most people. What is the efficacy of what you're building? Who's doing it best in your company, and what are they doing? Try to learn from that and spread it.

And this other concept, decentralizing the responsibility for the provenance and lineage of data, I think is important. I don't see many people doing this, but the companies that do have the best-run systems; they're the most effective and the most cost-effective. So if you're having problems managing the growing pile of data you're accumulating, I'd strongly encourage you to consider decentralizing management and ownership of data quality.
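To make the usage measurement above concrete, here is a minimal sketch; the log format, dashboard names, and users are invented for illustration, and a real BI server's audit data would be far richer:

```python
# Minimal sketch of usage measurement: count views per dashboard and
# distinct viewers from a hypothetical access log. The events below are
# invented; real servers expose richer audit tables you'd query instead.
from collections import Counter

view_log = [  # (user, dashboard) view events
    ("amy", "sales_summary"), ("amy", "sales_summary"),
    ("bob", "sales_summary"), ("bob", "inventory"),
    ("cara", "sales_summary"), ("cara", "hr_attrition"),
]

views = Counter(dash for _, dash in view_log)
viewers = {dash: {u for u, d in view_log if d == dash} for dash in views}

for dash, n in views.most_common():
    print(f"{dash}: {n} views by {len(viewers[dash])} people")
```

Even a report this simple answers the questions the speaker raises: which content is consumed most, and by how many people.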
And then, of course, as I mentioned earlier, there are some good books I'm going to include in the deck that are all about process improvement, which is really what data systems are about: improving the process for generating data and making sure the data is accurate and complete. The same tool sets from thirty years ago that were used primarily in automotive manufacturing apply directly to data factories. And Curator is the one product you can just buy to get a high-quality curated environment that makes it easy for people to find what they need.

These are some of the books I was talking about. We're going to open this up for questions because we're right at about ten minutes till the end, but these are books that I personally have on my shelf because I was using them thirty years ago. I think they're directly applicable to everything we do with clients every day. You can find another two dozen books of a similar vintage that are still useful today, and many of these books have been updated for current realities. So I'd encourage you to take a look at them if you haven't read them or heard of them.

So let's open it up for questions, Ben. Let's see what we have. I'm also going to watch the final poll, so if you could take time to tell us about next steps there, that'd be much appreciated. Okay, well, I actually love The Phoenix Project. One of our consultants, Scott Perry, wrote a blog post about both The Phoenix Project and The Goal, and The Goal is one of the books I included. So I think that's great. There are a lot of good books I didn't mention; I could fill up an entire presentation on that. That'd be a fun one.

Oh, and I didn't mention that I have a blog post series that just started publishing. It's literally a book in blog post form. It started publishing Friday of last week, and it's going to go on for a month.
And the plan with the BI Cantos is: we'll see what feedback we get from the blog posts, and probably a few weeks after the series finishes in mid-May, we're going to turn it into an ebook and put it out on Amazon. I don't know what the price will be; probably $2.99 or $3.99, something like that. But I'm going to rewrite portions of the book, or add to it, based on feedback that readers give me, and I've already gotten some that's been pretty useful.

So if you do have any questions, feel free to use the Q&A or drop them in the chat. We'll monitor both and answer. If not, we can wrap up a little early. Okay, either people are asleep or they don't have questions. I would strongly encourage you, if you're in a position of responsibility managing a data ecosystem, to think about decentralizing control. That's probably the single biggest thing that could help you manage the growing complexity of the back end, because it just makes sense. And I've had discussions with clients about altering incentive pay plans to allocate some portion of people's bonuses to data quality. It's all measurable, so why not?

Yeah, one of the things I always took from this, and it came from The Goal and came up in The Phoenix Project as well, is that if you're looking at a system and you don't solve the bottleneck, it doesn't matter how efficient everything else becomes. A lot of times these central processes become the bottleneck, and all that means is there's a whole bunch of piled-up work in process that is going to kill your productivity. It's going to slow everything down, and it's going to be hard to manage. And so often our first impulse is to solve the problem by putting a committee together and centralizing things, which ends up actually making it worse. Someone asked a good question here: do you think decentralization can be difficult for big companies? Of course. Yes.
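The bottleneck principle just described can be made concrete with a toy calculation; the stage names and rates here are invented:

```python
# Toy illustration of the bottleneck principle from The Goal: a serial
# pipeline's throughput is the minimum of its stage rates, so speeding up
# a non-bottleneck stage changes nothing. Stage names/rates are invented.

def throughput(stage_rates):
    """Units per hour a serial pipeline can sustain end to end."""
    return min(stage_rates)

stages = {"ingest": 120, "transform": 40, "publish": 90}

print(throughput(stages.values()))  # 40: 'transform' is the bottleneck
stages["publish"] = 900             # make a non-bottleneck stage 10x faster
print(throughput(stages.values()))  # still 40; nothing improved
stages["transform"] = 80            # fix the actual constraint
print(throughput(stages.values()))  # 80: now the system genuinely speeds up
```

This is the speaker's point about centralized processes: effort spent anywhere except the constraint just piles up as work in process.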
We've dealt with the biggest companies in the world, and I'm not going to name names, but I know for a fact that one of our clients had twenty-seven Tableau servers and a dozen different data warehouses. A good percentage of those had duplicated data, completely independently built copies of the exact same data, because no one was looking at it. And of course, if you're a truly global company with a hundred thousand employees, you've been doing BI for twenty years, and you're acquiring businesses every year, then yes, of course this is going to be more difficult for you. But the good news is you can start small. If you're in a big company and you're in one division with only five locations, do it there.

Well, and I think there's actually inspiration you can take from places like software development, right? It used to be that we built products as what people call the majestic monolith: everything in one tangled-up piece of code that becomes hard to change, manage, and understand. What people have pivoted to is microservices. We create little bits of code with defined APIs that communicate with each other. Basically, each small piece of code says: this is what I need, and this is what I'll give you as output. And I think we can do the same thing with our companies. What do we owe each other? What do I expect from other departments or groups as input, and what am I going to provide as a result? If we clearly define those things, it lets us interact in much more complex ways without having to over-engineer everything.

Yeah, well, this is why, again, I like the metaphor of a manufacturing plant. I worked in a factory for twenty years before I became a consultant, and that factory had probably a hundred workstations doing discrete tasks: welding, assembly.
And that's really no different from a data warehouse. You've got all these different moving pieces building data products and data schemas, and if any one of the moving parts goes down, it affects the whole thing. Exactly as Ben alluded to earlier: the one throughput constraint can bring everything down. So that's why they're similar, and that's why I think some of these books, if you go back and look at them, may give you some good ideas for places to start. Anyway, thanks for the questions.