Keeping a business intelligence system running well while minimizing costs is both a challenge and an opportunity for nearly every organization.
Cloud-based software makes it easy to spin up a data warehouse or a data lake, use data transformation tools, create data science studies and publish dashboard environments.
Monitoring these systems is crucial to avoid wasting money. Storage is inexpensive, but compute costs can climb quickly if you aren’t aware of how workloads are triggered, who is consuming compute resources, and why.
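One simple way to start answering the "who and why" question is to tally compute consumption by user. The sketch below is a minimal, illustrative example assuming you have exported usage records (user, warehouse, credits consumed) from your platform's metering or query-history views; the field names and data are hypothetical, not a specific vendor's schema:

```python
from collections import defaultdict

# Hypothetical usage records, e.g. exported from your warehouse's
# metering or query-history views (fields here are illustrative).
usage_records = [
    {"user": "etl_service", "warehouse": "TRANSFORM_WH", "credits": 12.5},
    {"user": "analyst_a", "warehouse": "REPORTING_WH", "credits": 3.0},
    {"user": "etl_service", "warehouse": "TRANSFORM_WH", "credits": 9.5},
    {"user": "analyst_b", "warehouse": "REPORTING_WH", "credits": 0.75},
]

def credits_by_user(records):
    """Sum compute credits per user to see who is driving spend."""
    totals = defaultdict(float)
    for record in records:
        totals[record["user"]] += record["credits"]
    # Highest spenders first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for user, credits in credits_by_user(usage_records):
    print(f"{user}: {credits:.2f} credits")
```

Even a simple report like this often reveals that a small number of scheduled jobs or users account for most of the compute bill, which tells you where tuning or schedule changes will pay off.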
One of the first questions I ask people deploying business intelligence is how frequently they want to update reports. Many say they want “real-time” reporting when, in most cases, they mean daily reporting.
A rule of thumb to remember: real-time is always more expensive than daily batch reporting. In many cases, daily is good enough.
What Needs to Be Maintained?
Over time, systems accumulate redundant files, especially if an effective data governance plan was not part of the initial planning. Even when it was, governance tends to take a back seat to more exciting work, leading to data proliferation and redundancy. Data pruning and efficiency reviews must cover the following:
- Existing databases
- Flat files
- Data extracts
- Dashboards
- Storage consumption
- Compute activity
- License usage
Many clients engage us when something breaks and their mission-critical reporting fails. I’ve had Sunday morning phone calls from panicked technical managers who tried something they had never done before and broke a mission-critical data workflow. They wanted to avoid a Monday morning catastrophe in which mission-critical reports wouldn’t be updated.
The problem causing these failures is typically resolved in less than eight hours. But if it isn’t addressed quickly, it can be embarrassing for the technical leadership and potentially harmful to the organization.
Training and disciplined maintenance reduce the risk of these inconveniences.
To help clients avoid these problems, we offer fixed-cost professional services that maintain system health and reduce the likelihood of downtime events. I refer to these services as “data plumbing” because they keep your metaphorical toilet from clogging.
Would you prefer to deploy your scarce technical resources on data “plumbing” or focus them on value-added proprietary projects that enable new insights, process improvement, sales increases and profits? The proprietary work is more interesting, and your team possesses the proprietary business knowledge to do that work.
Maintaining System Health
While most of our clients can maintain their systems, we noticed a decade ago that they didn’t do it consistently. Why? Lots of reasons, but the most common ones are:
- Competing priorities
- Large projects sapping resources
- Not knowing best practices
- Loss of key staff
- Changing priorities
Nobody intends to neglect system maintenance, but the problem tends to worsen as your user base expands. This is why we created fixed-cost services to address system maintenance regularly. Our KeepWatch services provide a cost-effective solution.
We will safeguard your system’s health, help you avoid unnecessary expenses and minimize the possibility of downtime events. Every 3 to 6 months, we will conduct a deep dive into your system’s health, providing a detailed report along with recommendations to address any issues we find. The InterWorks team will keep you running reliably and efficiently.
The Types of Issues Addressed with KeepWatch
We do a deep dive into every part of your business intelligence stack, including dashboard environments (like Tableau or Power BI), data transformation tools (like Matillion, DBT or others) and your database environment (Snowflake). We look at:
- System performance
- File proliferation
- Security
- Storage
- Costs
The system audit work and analysis take a few weeks. Then we deliver the findings and recommended actions in writing and review them with you. You walk away with a clear plan for addressing any issues discovered, along with improvement recommendations.
Monitoring and Managing Metadata
As your business intelligence system usage expands and you add more advanced capabilities like data science, predictive analytics or Artificial Intelligence, the data you depend on must be accurate.
Large Language Models (LLMs) are brilliant technical innovations, but if they are trained on inaccurate or incomplete data, your results could be worse than unsatisfactory.
In tomorrow’s final post of this series, I’ll discuss data governance metadata from a business manager’s perspective in more detail.
I’ll review technical topics in the least technical way possible, focusing on what is needed to develop and distribute the right KPIs to drive improvement. I’ll cover the roles I suggest for your technical team, business workflow teams and executive management, and suggest ideas for expanding adoption, improving the quality of your first-run data and creating a data ecosystem that will enable you to fully realize the potential of data science, predictive analytics and artificial intelligence.