Well, I'm really excited for this conversation today. For those of you who are in the industry, if you've heard the keyword "semantic layer," you'll know there's been a lot of buzz in the space recently coming out of Tableau Conference a few weeks ago. If you look at the Google Trends data for "semantic layer" over the last sixty to ninety days, there's a not-insignificant spike. So we're really excited to talk more about that, and we have some exciting examples to demonstrate that I hope you enjoy. Thanks, everybody, for joining. Yes, Emily, we're five after — let's get into it. So hi, everybody. Welcome to InterWorks' "Semantic Layers: The Universal Data Translator." For those of you who don't know, Jack actually gave this talk at Tableau Conference — gosh, was that a couple weeks ago now? I'm losing track of time — so we'll be going a little more in-depth today. For those of you I haven't met yet, my name is Emily Miley. I'm a strategy lead out of Portland, Oregon. Largely what that means is I help clients understand their full data stack, talk through their use cases and considerations, and figure out how to make their data work best for them. For those of you who don't know, InterWorks is a data and analytics consultancy, so a lot of what we do is helping clients solve their end-to-end data needs and talking about what might make their lives a little easier. And I'll hand it over to Jack, who will be the star of our show today. Jack, if you wanna introduce yourself? Yeah, thanks. For those of you who are just getting into the InterWorks universe of webinars, I'm Jack. I'm an analytics architect here.
My role here is helping our clients build systems that are going to be resilient, carry them forward, and set them up for success — making it so they can increase the speed of delivery and efficiency of their analytics to different user bases no matter where they're working, whether that's custom software or business intelligence tools, and helping people stand up data platforms from scratch. I do it all. And, yeah, really excited to talk about semantic layers. As I mentioned, this is a topic that's been on my mind for some time now, and there's been a lot more momentum in this space that's worth talking through with y'all, especially as fellow data practitioners. There's a lot to unpack here. So today we're gonna break this into a few different areas. First, before we get into the practicality and what this looks like in the wild, we're gonna talk about the business case for the semantic layer, how we got to this point, and why it's emerging right now. After that, we'll show you an interactive demonstration of what a semantic layer workflow could look like. And then we'll leave room for questions — I hope you all have questions at the end, because we'll be leaving plenty of time to discuss and answer them on the fly. We're already at five after, so I wanna just dive in. But before we get into the meat and potatoes, we have a couple of questions we wanna pose to the audience. Emily? Yeah, so y'all should see a poll and quiz that popped up on your screen. If you don't see all of the answers for the second question, you can scroll down a little bit.
For those of you who might just be listening in, what we're really interested in is how confident you are that your organization has clear standards on its metrics and KPIs. Maybe you're very confident, maybe somewhat confident but not super confident, or maybe it's a flat-out "not at all" and it's the Wild West — you're figuring it out, which is also totally fine. We're also a little curious how prepared you feel your organization is for AI and agentic applications. For those of you who were at TC a couple weeks ago, agentic was definitely a huge star of that show, and we're hearing more and more about it in the same way we're hearing more and more about the semantic layer. So maybe your organization is very prepared, maybe it's somewhat prepared, and maybe, again, it's not at all — still the Wild West. Jack, I'm curious to see what we get here. Yeah, I'm honestly seeing a lot more "somewhat" on both of these questions than I expected. On the first one, I'd typically expect a lot of people to feel very confident, like they have a strong source of truth, but that can really vary department by department — or depending on whether you're the analyst persona versus an analytics manager or a chief analytics or data officer who has to manage data sprawl across a lot of different departments. So it's very subjective. But yeah, we're seeing a lot of "somewhat" on the first question, which is actually pretty consistent with what we hear and with recent industry surveys.
Atlassian actually ran a survey recently, and this is consistent with one of their findings: around seventy percent of knowledge workers and data workers have eight hours of waste in their work week dedicated to wrangling duplication or sprawl in business logic, code, or whatever application or technical project they're working on. Eight hours — that's an entire workday wasted on redundant tasks. That's pretty remarkable, and again, I'm not surprised we're seeing it reflected here among data analytics practitioners. On the second question, around AI, I'm also seeing a lot of "somewhat" — fifty-nine percent somewhat, thirty-one percent not at all prepared. If you're in that "not at all" cohort, don't worry: a lot of other folks are in the same position. There's a lot going on right now, and it's hard to know where to even start. But dbt recently released their analytics engineering report, which they do every year, and it quotes about seventy percent of organizations as either dedicated to growing their AI footprint and investments or maintaining it. So again, that just reflects what's happening in the industry. And how do these questions relate to the semantic layer? We should probably talk about that. Before we do, Jack, I just wanna call out — I'm surprised to see that sixteen percent feel very confident in their clear standards on metrics and KPIs. That makes me wonder what the next step is for those groups. I actually expected that to be even lower. So, anyway, I was fascinated by that. Yeah, definitely.
It's a herculean task to wrangle all that and come to consensus — that takes a lot of work. So I hope some of the takeaways here can inspire and help teams do that work and bring them from "somewhat" to "very." But yeah, let's talk about how these questions relate to a semantic layer. First, I wanna start with a story. Imagine standing in front of a wall covered in ancient symbols, like this Indiana Jones-looking fella here. This comes from my background, and from lots of other analysts and business users I've talked to who have to navigate lots of data assets throughout their organization. This is what the experience of an analytics end user navigating data without context looks like. They know that all of this data holds the answers to their questions — the patterns, the warnings — but they can't put it all together. They don't have the context of how these things can be bridged together: which data sources are connected, their recency, whether they're up to date on the same cadence. You can't take all of these hieroglyphics and bring them into a consistent narrative without a translator. And the way I like to think about semantic layers here is as the Rosetta Stone. It takes the language of data in your organization — raw, complex, often inaccessible sources of data — and translates it into human understanding, giving descriptions and context to information in a database that might otherwise go undocumented. And really, this is not just a tool or an exercise or a solution for the analytics person — everyone can benefit from having a semantic layer in place.
That includes humans, who have the ability to interpret context and operationalize it in our jobs, but also our agentic friends that are coming into popularity, who need that critical metadata layer sitting next to our data warehouse — the data we do analytics on — so they have the context to deliver better experiences and better insights to support analytics professionals. All in all, the semantic layer is a tool that helps future-proof your team's work for AI and also lets you increase the speed of delivery of your analytics products to virtually anywhere. And what I mean by "anywhere" is that the semantic layer serves as this critical middleware where you define all of your metrics — how things are measured within your business. Like, what does revenue truly mean, from the CEO down to the intern? What is that revenue metric? The entities: clearly defining what a customer is. Is it someone who starts a free trial of your product? Is it someone who converts and has a contract? Defining entities very clearly and codifying that into your analytics dataset. User context: who has access to the data, being able to detect who it is and serve them a dynamic experience based on their department — or, if there's duplication in a metric, which definition they should be served based on their role or function in the organization. And then, for our agentic friends, critically, the definitions and instructions for the data: descriptions of how data ought to be used, structuring our semantic models in a way that makes sense and reports data accurately. So all in all, that's the ability to deliver everywhere: we do all this work in one place — that critical middleware, the semantic layer — and then propagate it out into all of the different places where folks are doing work with data.
Whether that be your BI tools like Tableau, Power BI, or Sigma; web applications — if you wanna enable this work for developers who are building, say, a custom embedded experience, most semantic layers ship with REST API credentials or some integration where they can easily build on top of this modeled work; or our agentic friends, who need to interpret this context, where we can build integrations and agents that help us streamline our tasks and do our analytics. Now, really critically, what the semantic layer is not — and I just wanna emphasize this — it is not a replacement for your data warehouse. In fact, it coexists with, functions on top of, and works in tandem with your data warehouse. It's also not a replacement for any of your existing data infrastructure. If we think about what it is replacing, it's more a replacement for static, hard-coded objects in our database that serve single purposes and are not flexible. It's also not a tool that physically moves or duplicates or stores data. The semantic layer works on top of your data warehouse, and you get all the intelligence built in: if you're running Snowflake or Databricks, you're able to see the queries going back — you have all that intelligence, all that logging. You're not investing in another system to do that; it's integrated and coupled tightly with your data warehouse. And last but not least — and most importantly — it is not a substitute for good data modeling. Data modeling is the lifeblood and the most essential step for any version of analytics, at whatever stage, and to fully benefit from the semantic layer, doing thoughtful, proper data modeling is key. Yeah, and I very much agree with what Jack's saying here.
I think what semantic layers really allow you to do — with really good data modeling, database design, and data engineering — is enable a more seamless end-user analytics experience. But much like what we've all probably heard when it comes to things like generative AI: if you haven't done that foundational work, if you haven't built a really good baseline around data modeling, defining access control, and even data governance, this isn't a substitute for those things. It can help bolster your entire experience, but like Jack said, it cannot and should not be substituted for them. Yep — data modeling is still key, the lifeblood. So let's break this down: what makes a semantic layer novel? What are the key pillars that are essential and best practice when thinking about it — the value it can bring to your organization, and the features and levers it gives you as an analytics professional or manager? The first one, as mentioned, is the unification of business metrics. Someone mentioned in the chat that the semantic layer has been around for a couple of decades — totally true, starting with BusinessObjects. This concept has been around for a while: the idea of bringing consistency to metrics no matter where folks are consuming them. What's key here is that unification of data sources behind the scenes — being able to work on top of your data warehouse, integrating different systems, having it all plugged into one central logical layer that you can then push into any system. So you're not just locked into the BI tool for consuming metrics.
You can push these metrics anywhere — into a REST API, into GraphQL, into really any experience — with minimal effort. We've really evolved in that sense. This notion of metric consistency has not gone away, and it's definitely not novel, but we're now able to benefit from modern data warehousing technology and apply it to increase performance, security, and governance. The second pillar is metadata management: the ability to integrate your semantic layer into your broader metadata ecosystem. If you have a data catalog or a similar tool where you have visibility over all of your data assets and the flow of data through your organization — from sources to modeled tables, maybe using dbt to do transformations, with a DAG — a good semantic layer can be integrated into that DAG, be documented, and give you that metadata just like those other tools. So you get full visibility: all of the metadata, the descriptions of metrics, the descriptions of entities, made exposable and consumable via documentation for your end users. The third pillar — and for the modern semantic layer this is one of the more important ones — is this notion of unlimited, or maximal, integrations: being able to bring the work you're doing in a semantic layer to any business intelligence application, any agentic application, or whatever framework developers may be using. With how fast we can build and deploy these experiences now, it's critical that your semantic layer ships with first-class support for these types of use cases in mind. And more specifically, performance optimization.
This goes back to the modern semantic layer: for performance optimization, semantic layers give you key levers to pull so you can accelerate performance and maintain consistency of your queries and your data as it's exposed to users in different places. A few examples, which we'll cover later: caching — there are different caching strategies in semantic layers so you can ensure consistent materialization schedules, always giving users the same metrics and the same numbers whether they're in Excel, Power BI, or a custom app — and query optimization techniques. That's essential, especially if you're working with larger datasets and complex aggregations: think of something like a level-of-detail calculation. Caching, query optimization, and pre-aggregation strategies can really help you out there. And then, of course, data governance. What's a good tool without proper data governance? If you can't integrate with your data warehouse, set up RBAC, or have user-level or department-level granularity of access control, it's not going to scale — it's gonna be a nonstarter. So that's, of course, critical here. So, just to restate where this falls: we talked a lot about how essential data modeling is, and about all the work we may have to do before we get to adopting the semantic layer. One of my colleagues made a really good analogy between Maslow's hierarchy of needs and data governance, so I'm gonna borrow that and talk a little bit about it here. Everyone knows Maslow's hierarchy — the five levels of human needs, right?
Physiological — air, water, food — that's what gives us the energy to attend this webinar and do our jobs, and then up through esteem and full potential. You can't skip safety and belonging before getting to full potential; these things build upon each other and depend on one another. The data team's hierarchy of needs is the same. First, at a base level, we need data to be available and accessible — I hope, if you're here today, you're a data professional and have this. The second one is safety. For humans that's often subjective, but in a data organization, safety can also be a complex thing to tackle. Different industries have different flavors of compliance — GDPR, HIPAA — things you have to consider before you can even think about adopting a new tool; if a tool doesn't meet the mark here, it's a nonstarter. A good semantic layer has to be compliant here and support that mission of keeping data secure and safe. That goes without saying if you're in one of those industries. Then consistency: building on safety, this is being able to deliver consistency in numbers, experiences, and performance across the board — having one central place to define the relationships between entities and metrics within your data model and ensuring those figures are always consistent and accurate across all the places where you have users.
For example, if you're in finance or you have some pattern of external reporting to investors or an external body, you have to make sure your figures are always right — we've all heard the analyst nightmares where someone fat-fingers a number, or a number's off because a dashboard wasn't refreshed. Consistency is key here; we need mechanisms and fail-safes in place so that doesn't happen, and a semantic layer supports that consistency. Next, as I mentioned earlier, for our agentic friends and for us humans who need to make sense of all the data work we're doing: context. Making sure we have documentation and that critical metadata around what an entity means — what defines a customer, what defines an order, what defines a return — then how we measure it, and giving instructions to agents within that metadata on how it ought to be used. In essence, by organizing our semantic models, we are controlling the context window for our users. We very explicitly define the scope and how things ought to be aggregated in order to be reported correctly, and a good semantic layer will have those fail-safes and that context for users right off the bat. And finally, to reach our full potential, we need to go through all of these steps before we can be fully actualized data professionals. It's not an easy journey, but these are steps we have to go through whether you're implementing a semantic layer or not — and I think the semantic layer supports this work quite well. So, in practice, let's go through a scenario we might all be familiar with.
I'm assuming the majority of the audience has some experience with a tool like Tableau or a similar business intelligence tool and has gone through the exercise of collaborating with a data engineer to take data from a raw source into staging, transforming it into some sort of star schema or third normal form — in this case, we're assuming a star schema — and then a gold layer, where you have dedicated marts or wide tables that support a domain of analytics. This is pretty standard: you have analysts connecting to your gold layer in Tableau, maybe building published data sources on top of it, maybe creating extracts to keep up performance depending on how you've modeled your data in Tableau. And from there, depending on how you're governing your business intelligence platform, you have analysts creating their own workbooks and their own KPIs — say an analyst has created average order value, or the profitability metric here. So what happens if we want to add a new metric? In our current model, we have to go back and assess which tables we need to join or build to derive this new metric — call it profitability for products, and let's say we're adding it from scratch. That code update requires going back into the SQL in our gold layer and assessing how we're gonna bring it in. If it's ambiguous which domain it belongs in — is it an inventory metric? is it a sales metric? — that conversation needs to happen. Call that two hours of work: writing the code, testing, doing the merge if you're using dbt or some version-controlled system. And then from there, you have to go into your published data source and bring that new metric in.
And then, in order for analysts to realize they have it, they have to go in and pull it into their workbook, and someone has to refresh. So there are three steps just to put that metric into the hands of analysts. And once they're in the workbook, they can write LODs on top of it, modify the context of how the original metric was intended to be used, and aggregate it in different ways — it's subject to misinterpretation, and that can cause problems if there's no education and that context isn't there for users. So what if we wanna extend this? Obviously, that's a couple hours of work to add a new metric, bring it into our Tableau workflow, and give it to all of our analysts. But what if we have an external vendor requesting a REST API integration, or a custom web app interface outside of Tableau? This is where things get tricky. To build this experience, you're either having to figure out how to integrate your data warehouse into the application or use its REST API. Maybe you have your data modeled in Tableau in a way that's more meaningful and presentation-ready for the end user, so you have to figure out how to integrate with your published data source. At any rate, you're looking at a lot of work to ship this and to integrate the work you've already done into this novel new experience. So we have a few problems to solve for here. First, decentralized business logic — tech debt and definition drift with self-service are the big ones; someone can go in and overwrite, turn a sum into an average, a min into a max, and so on. Second, this thwarts our ability to keep up with the pace and the demand for new analytics use cases and to bring the data work we're doing into new tools.
Forty hours to build an integration might be on the low end — depending on how much human coordination is required, it could take months if you're competing with other priorities in your organization or for developers. That can be a nonstarter, and that's where projects go to die if they can't happen fast enough. And finally, you have data siloed from AI apps. Without centralization, it's a lot harder to connect an agentic system to the true source of record. In this model, we have work being done in our warehouse and work being done in Tableau, so we have to figure out which one to build on top of. With the semantic layer, here's how we could approach this problem. Looking at where the gold layer used to be — that's where our semantic layer comes in. Rather than having those domain-specific reporting objects, we have a semantic layer where we unify and take all the relationships in the data models that were maybe in place in that gold layer, but we define them logically — we're not materializing them as tables per se. We create a logical middle layer where we define our metrics and dimensions, define the correct grain for them — down to what granularity they ought to be queried, and what makes the most sense. For our more complex LODs and metrics, we use our data warehouse to pre-aggregate and cache them so we get consistent performance. And we also house the context of who can see these metrics — we're doing that upstream. And our semantic layer, if it follows the criteria set forth in this presentation, will ship with connections to Tableau.
It'll be able to integrate well into a REST API or GraphQL or whatever development framework you're using. There's likely a path to adopt it with a lot of agility, going from what might have been forty hours to — if you wanna vibe-code it — an hour, or zero hours. So you can really see how, by doing this, we can save ourselves a lot of time, centralize our work, and support these new integrations in the places where our analytics end users might be doing their work. So, Jack, just to put you on the spot with a quick question from the chat: could you list off some of the BI tools that can connect to these semantic layers? I know there's a good amount, but maybe some of the more popular ones we're familiar with. Yeah, it really depends — there are a few different vendors in the market. In a second, I'm going to demonstrate some semantic layer work in a platform called Cube. There's also dbt — you've heard of that — and they have a semantic layer offering too. It's really going to depend on the vendor, but I'd say there's full parity in support for Tableau, Power BI, ThoughtSpot, and Sigma, and then some BI tools have a more native integration with semantic layers — Hex or Preset are popular ones. And it's not just about BI tools: it's being able to expose this work through an API, through a notebook, or even Excel. I know Cube, for example, has a native DAX integration — don't quote me on the exact name, but it's tightly coupled with some of the Microsoft products. Yeah.
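To give a sense of what that API exposure can look like in practice, here's a minimal sketch of the kind of JSON query Cube's REST API accepts — you name the measures and dimensions you modeled, and the layer generates the SQL. The member names here (`shows.*`) are illustrative, not from the actual demo:

```json
{
  "measures": ["shows.distinct_show_count"],
  "dimensions": ["shows.venue_name"],
  "timeDimensions": [
    {
      "dimension": "shows.show_date",
      "granularity": "year"
    }
  ]
}
```

A developer posts a query like this to the semantic layer's load endpoint with an API credential, and gets back governed, consistently defined numbers — no hand-written SQL against the warehouse.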
Happy to have a follow-up conversation if anyone would like to talk about particular offerings, but here we're gonna focus on one in my demo. Yep — and just so you all know, we'll have a Q&A at the very end for more specific questions. I love some of the comments in here. So let's take a look under the hood. We have just a little bit of time, and I wanna make sure we show you what this actually looks like in a workflow. We're gonna look at Cube, which is one of the semantic layers I've personally used quite a bit, and I'm gonna show how we can leverage a semantic layer to get consistency and push centralized data modeling and data work into multiple disparate systems really quickly and effectively. But first, some context for this demo: I was in San Francisco last week, and I got to see the band Phish. So some of the data I'll be showing you comes from an app I built that uses a lot of data about Phish — their set lists and nerdy music analytics stuff. Just a little context: we're not going to be looking at sales by region or any of that. Fun — is that a picture you took last week, Jack? Yeah, this is from the top row at Bill Graham in San Francisco. It was a good time. Nice. So I'm gonna switch gears here. Okay, really quickly, before I get into the actual work: we're using Cube, one of the semantic layer vendors in the market. I've used it personally for a lot of development work. Just to give you the lay of the land, this may look like a familiar interface if you've done data modeling or worked in an IDE. The way Cube works is you take the work you've done in your data warehouse.
So in my case, I'm using Snowflake, where I've done a dimensional modeling exercise, modeled all of the data from my app into fact and dimension tables, and brought those into Cube. Each of those tables is represented by a metadata file. So for instance, actually, let's use a better example, like shows. So this is my semantic model for the shows that the band Phish has played. In here, really, the syntax is quite simple, and it'll vary from system to system a little bit, but at the end of the day, it's all, like, YAML. You come in and define the data types and the names of your columns. And then, if you've worked with published data sources, this is also the layer where you'll be building out your business logic. So in this case, we have this calculation for show name where we're doing some concatenation, and we're really just writing SQL here. And then down here, we have some measures. So we have our count, that's the count of all of the facts or all of the rows that we're telling the model to roll up to, and then distinct show count. So we'll have a count distinct there of show ID, which will give us the number of distinct shows when we reference this upstream. That's just an example of the syntax. It's simply measures and dimensions. Now this is a more robust cube, so this is a model that we're going to give to our app. Right? So what I have going here is, I have the tables in my star schema. I define all the relationships, so, you know, many to one, one to many, with all their appropriate IDs. Down here, I have my dimensions. So because I'm referencing all of these different tables, it's bringing in all of the work I've done upstream and taking all that into my model. And then I'm also creating these additional calculations. And then, most importantly, I have description fields.
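To make that concrete, here's a minimal sketch of what a Cube YAML model like the shows example might look like. The table name, column names, and SQL are hypothetical stand-ins for the actual demo model, which wasn't shown in full:

```yaml
cubes:
  - name: shows
    sql_table: ANALYTICS.FCT_SHOWS  # hypothetical Snowflake fact table
    description: One row per show the band has played.

    dimensions:
      - name: show_id
        sql: SHOW_ID
        type: number
        primary_key: true

      - name: show_date
        sql: SHOW_DATE
        type: time

      - name: show_name
        # Business logic lives here: it's just SQL
        sql: "CONCAT(SHOW_DATE, ' - ', VENUE_NAME)"
        type: string
        description: Human-readable label combining the date and venue.

    measures:
      - name: count
        type: count

      - name: distinct_show_count
        sql: SHOW_ID
        type: count_distinct
        description: Number of distinct shows, deduplicated by show ID.
```

The description fields are plain metadata, but as discussed, they're what gives downstream consumers, human or agentic, the context behind each column.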
So this is the context that, when you feed this to an agentic system, it'll be able to pick up. And as it's querying the data, it'll also be able to query this metadata alongside it, and it will tell the agent, okay, this isn't just the column name "original cover." You know, oftentimes, when people throw raw database language or DDL at an LLM without any context or metadata, it tends to not perform well or yield accurate results, because you're going off of the column name and however the LLM interprets it. These description fields are totally essential for our agentic systems, because they give instructions and context and allow you to propagate that throughout the entire system. So we have what we could consider calculated fields, and then you have some measures. We have a few ways that we're doing distinct counts, and, you know, nothing too novel there. And then one of the more important features that we've covered throughout this is the notion of pre-aggregations. Right? So this isn't a particularly large dataset, but just imagine you have a lot of event-level data. The pre-aggregation feature here, which other tools also have some flavor of, can allow you to set a materialization schedule for certain metrics in your model. Say you have a large count distinct that's really expensive, or you have a year-over-year computation that's really, really expensive. What this allows you to do is set a materialization schedule so that all the tools that are connected to your model are being updated at the same time and hitting the same cache, and you're going to have systematically accurate and consistent data across all of your tools. So you can get very granular here by, you know, how you wanna partition it. And for any data engineers, right, there are a lot of levers you can pull to optimize performance here.
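As a rough sketch of what that materialization schedule looks like in Cube's YAML, assuming the hypothetical shows model from before (the measure and dimension names are illustrative, and the actual demo's pre-aggregation config wasn't shown):

```yaml
cubes:
  - name: shows
    # ...dimensions and measures as defined above...

    pre_aggregations:
      - name: shows_per_year
        measures:
          - distinct_show_count
        time_dimension: show_date
        granularity: year
        # Refresh on a schedule so every connected tool hits
        # the same cached rollup instead of the raw warehouse.
        refresh_key:
          every: "1 hour"
```

Partitioning, indexes, and refresh cadence are the levers Jack mentions: tune them per metric rather than per downstream tool.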
So without going into a ton of detail on Phish, I'm just gonna show you. Cube, for example, ships with some integrations with tools like Tableau. If we were to make any modifications here, this is all hooked up to a Git repository. So, you know, we can make our change, add it up for review, and then merge it into our main branch. I'm not gonna do that because I don't wanna code on the fly, but this is all subject to version control, which is something that is very good and something that we want with our business logic, because things do change, and we wanna make sure that that is tested and subject to review. But let's just take a look at a couple of experiences that we can build right out of the box with the semantic layer. So for the first one, I have a very, very simple Streamlit app that I have going. I won't show the code because that might make your eyes glaze over a little bit. But, you know, I just created a simple experience that's querying the model that I just built. So let's just say we wanna look at the top venues by state in the two thousand twenties. And we can see MSG, also known colloquially as Madison Square Garden, a large venue in New York. They play there a lot, so that's not surprising. But again, you can see how, like, this is not very complex. This is actually a really good example of how you can limit this experience, too. So say you have an interactive app where you only wanna show a subset of your data to your audience. You can build that into a custom interface like this, where you're only showing maybe the top twenty of something. So you're really putting hard limits or boundaries on how you give users access to the compute in your underlying data warehouse or the cached data.
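The Streamlit code itself wasn't shown, but the core of an app like this is just a JSON query posted to the semantic layer's REST endpoint. Here's a minimal sketch in the style of Cube's REST API; the member names (`shows.*`, `venues.*`), the endpoint URL, and the exact app logic are assumptions, not the actual demo code:

```python
import json
import urllib.request

# A Cube-style REST query: measures, dimensions, filters, ordering,
# and a hard row limit the interface enforces on every request.
query = {
    "measures": ["shows.distinct_show_count"],
    "dimensions": ["venues.venue_name", "venues.state"],
    "filters": [
        {
            "member": "shows.show_date",
            "operator": "inDateRange",
            "values": ["2020-01-01", "2029-12-31"],
        }
    ],
    "order": {"shows.distinct_show_count": "desc"},
    "limit": 20,  # hard boundary: users only ever see the top twenty
}


def top_venues(load_url: str, token: str) -> list:
    """POST the query to a Cube-style /load endpoint and return rows."""
    req = urllib.request.Request(
        load_url,
        data=json.dumps({"query": query}).encode(),
        headers={"Authorization": token, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]


# e.g. top_venues("https://cube.example.com/cubejs-api/v1/load", token)
```

Because the limit and filters live in the app's query rather than in user-editable SQL, the end user never gets unbounded access to the warehouse compute underneath.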
So, you know, you're able to deliver a diversity of experiences here. And this is one example, right? This could be embedded. This could be given to an external party or vendor. And it's just another way of delivering analytics or data that can be used with other applications or in contexts other than a dashboard. And then as another example, I'm really quickly going to share a Tableau screen. So in this case, what we're looking at is all the artists who have covered Phish songs. If we remove a few of these numbers, we can see all the different folks who have covered songs originally by Phish. Jon Fishman and some of these others are members of Phish, but, like, Phil Lesh, member of the Grateful Dead, we can start looking at additional queries, just like count of plays of this song by this artist. We're looking at the same exact cube that we looked at in our Streamlit demo. One important thing to note here, and we were talking about making sure that the logic and the aggregations in your semantic layer are immutable and unchangeable: if I go into Tableau and I try to update this sum to be an average. Oh, that's a bad example because that's just a raw count. Let's use this one. If I try to make this an average. Okay, that's maybe the wrong example, perhaps. Well, I'm not gonna worry about it too much. I might have done something. Typically, what we'd expect to see, though I think I might have made a change here that caused this, typically what you would see is an error message saying that you can't aggregate something a certain way. But because I have a count here, it's letting me update how things are being aggregated in Tableau.
But if you set something as a sum on a metric in your semantic layer, you would not be able to override that, or a user would not be able to make that an average or create an LOD on top of it. There would just be rules in place in the way the SQL is being compiled that would make that not possible, so that you have consistency and accuracy there. So apologies for that. So, yeah, again, just going back to our main screen here. We covered quite a bit today, right? We covered a few different ways that you can consume a semantic layer, and, yeah, ultimately, the value proposition. So I wanna take the last seven or so minutes and open it up for questions, see if we can answer any. Yeah, seems like we have a few Q&As, Emily. Yeah. And also really quickly, Jack, just for those who maybe are interested in sticking around for the Q&A, let's show that QR slide really quickly just so that folks can know how to reach out to us. So for those of you who are looking to gain seven minutes back, I just wanted to call out that if you're interested, or you have some really specific questions about your environment or the tools that you're using, or really just wanna talk to somebody to get started with semantic layers and conversations around them, feel free to scan that QR code and reach out to us. We're always happy to hop on a call and see if we can answer some quick questions for you, or if you're just, again, curious where to get started. You can also scan the QR code to the right if you're interested in learning a little bit more about that white paper that Jack mentioned, Semantic Layers in Action, on real-world use cases and business impact. And I believe Teddy has also put that in the chat for us. So, you know, just wanted to give everyone a heads-up. If you're heading out, thank you so much for joining us.
And then, yeah, happy to get started on some Q&A questions. It looks like, and I'm not totally sure if this is a question or more of a statement, it says, like, Power BI has the ability to build semantic models. To my understanding, that's correct, but I wasn't sure if there was a question around that specifically. Jack, I don't know if you have anything to add to that. Yes. And this is, you know, a little tongue in cheek, but there's a lot of problems with the semantics around semantic layers right now. Like, there are business intelligence tools that have always had this, as someone mentioned earlier, like Business Objects. Most BI tools have some flavor of what I'll call a semantic layer coupled with their business intelligence tool. So, technically, Tableau has one now. Power BI has one that they call semantic models, rebranded late last year. Yes, they all function similarly to what we are speaking about here, only they are primarily going to serve the Power BI user. And the use cases that may exist for delivering analytics outside of Power BI would go underserved, because it's coupled with the BI tool, or it would be a lot more challenging to integrate. Also, some of the more modern semantic layers right now are doing the work to actually write the correct, consistent SQL for each data platform and each data warehouse. Right? So it's optimizing, agnostic of whether you're on Databricks or Snowflake. Whereas semantic layers that are coupled to the BI tool will typically have their own optimization technique, or their own way of compiling the SQL, which can yield challenges. Right? Like, there's a bunch of different dialects. There's different strategies.
If you ever try to debug those queries, it can be frustrating, because it's not how you would write it. But, yeah, just to repackage that in a shorter version: a lot of tools will have a semantic layer, but it's not truly agnostic of system. And that's what the example we presented here is. It's ready to ship for developers, and it's ready to ship for BI use cases because it presents itself as a PostgreSQL connection to Tableau. So I guess that's how I would qualify or explain the difference there. Yeah. And I'll just call out, too, it sounds like someone had a question specifically around some of the tools. And so to Jack's point, right, my understanding is MicroStrategy does have its own integrated semantic layer, called Semantic Graph. But then Looker, right, and Dataiku, I believe, have full integration with dbt semantic layers. Right? So depending on your tool stack, depending on your use cases, and, I'll also call out, to go back to the data governance conversation, it also depends on your people, your process, your systems. Right? So there's a lot of "it depends" in the customization of semantic layers for the stack that you have, for the skill set of your teams, and for the use cases of the end users. That'll determine how you integrate semantic layers into your systems. And so, you know, this is hopefully a good overview of what it looks like, but not necessarily a full, in-depth treatment. Yeah. Jack, I actually wanna piggyback on that really quickly. Take MicroStrategy, right? If you're an organization that's locked into that, and most of your use cases are going to live in that ecosystem, then maybe it makes sense to do your semantic modeling there, because that's where all of your users are going to live.
I guess what we're seeing, though, in the industry is, with more agentic applications, with the speed at which we can deliver new experiences, new custom interfaces that can connect back to the data work we're doing, it behooves a lot of organizations to have this centralized so that they can meet the changing demands of internal or external users who may be working in different places. Yep. Exactly. We had one other question, Jack, that I think might be helpful, and we have one minute left. The question is, what is the demarcation between silver and semantic layers, and, you know, is there any additional transformation that you're doing on top of the fact or dim tables that is essentially or effectively a semantic layer? Yeah. So, actually, that's a great question, and I can scroll back to that. Yeah, so going back here where we have our silver layer, those are our facts and dims. And then once you get into the semantic layer, I could see how that may have been a little bit confusing if you were looking at my example in Cube, where I had a YAML file for each fact and dim table. When you see those represented, those aren't actually being materialized in any different layer; each one is simply a reference. But that gold layer is being replaced by simply defining all the join paths and bringing in all of the work you've done in your silver models into one unified logical dataset. So it's not being materialized or stored as a table that you would hit, like you would connect to in Snowflake. It's going to be queried dynamically. It's going to be logical. So depending on what the user brings into the view, it's not operating on top of a specific table; it'll be executing the joins when that query is run.
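That unified logical dataset maps to what Cube calls a view: a YAML file that stitches the fact and dim cubes together along their declared join paths without materializing anything. A rough sketch, with hypothetical cube and column names standing in for the actual demo model:

```yaml
views:
  - name: phish_analytics
    description: >
      Unified logical model over the star schema. Nothing here is
      stored as a table; joins are compiled into SQL at query time.
    cubes:
      - join_path: shows
        includes: "*"
      - join_path: shows.venues
        includes:
          - venue_name
          - state
```

The joins themselves (many-to-one, one-to-many, and their keys) are declared once on the underlying cubes, so the view only has to name the paths it wants to expose.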
And, obviously, that has different implications depending on how your data is set up, or how large it is, or how expensive your queries are to run, but that's where the pre-aggregation comes into place. You can optimize there. You can pre-aggregate and precompute things so that it's not as expensive, especially for those more deterministic, known patterns. If you have, like, a dashboard that's always gonna retrieve the same fields, right, you can plan for that and optimize for costs there. But all in all, no, they are different. The silver layer, your facts and dims layer, will always be needed for your data engineers and your data analysts who need to look at that fact-level or transaction-level information. The semantic layer just creates an abstraction that carries those descriptions and that context. So I hope that answers your question. And, yeah, I know we're a little over time, but I just wanna thank everybody sincerely for joining us today. And please reach out if you have any other questions. Yeah. Thanks, everybody. We'll see you next time.