This blog post is Human-Centered Content: Written by humans for humans.
This is part of an ongoing series where I discuss Terraform and its specific benefits (or drawbacks) to data teams.
What This Blog is Not Meant to Be
This blog is not meant to be a How to Terraform Your Snowflake Data Warehouse blog. If you are looking for information on the “How to,” I will point you to the Snowflake Terraform Provider docs. If you are looking for help with migrating to managing your Snowflake instance with Terraform, feel free to contact us.
I will also not cover the general benefits of Terraforming your data infrastructure; those were discussed in the first blog post in this series.
What This Blog is Meant to Be
This blog is, however, meant to be a resource as you weigh whether to bring Terraform into your data stack, specifically with regard to Snowflake, and to give you the information you need to decide whether Terraforming your Snowflake instance is right for your team.
What is the Value in Terraforming Snowflake?
Snowflake is one of the strongest candidates in your data stack for Terraform adoption, and that is largely due to two things:
- How much infrastructure Snowflake actually asks you to manage (the platform’s feature set grows by the day).
- The volume and variety of users and teams that typically share a single Snowflake account.
Unlike some tools where configuration is relatively shallow, a mature Snowflake account accumulates a significant number of objects over time — warehouses, databases, schemas, roles, users, resource monitors, network policies and more. Without a structured approach to managing these, the surface area for configuration drift, cost surprises and permission creep grows quickly. The scenario described in the first post of this series — runaway warehouses with no auto_suspend — is a common one.
Terraform addresses this by giving you a declarative, version-controlled, PR/MR reviewed record of what your Snowflake account is supposed to look like. Below are a few specific areas where this pays off:
Role and permission management
Snowflake’s RBAC model is powerful but can become difficult to reason about at scale (ever tried to recursively traverse your nested role hierarchy successfully?). When roles and grants are defined in Terraform, the full permission structure is visible in code, reviewable in pull requests and auditable over time. Onboarding a new team or revoking access becomes a code change rather than a manual operation.
This is not to say that role management becomes “easy” with Terraform. Role management is as complex as a business needs it to be for proper control. That said, when users and roles are both managed in a codebase stored in git, tracking down grants, preventing stray permissions and keeping an organized mental model of your structure can become considerably easier. And InfoSec will love you.
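As a sketch of what this looks like in practice, here is a minimal role-and-grant definition. Resource and attribute names reflect recent versions of the Snowflake Terraform provider and may differ in yours; the role, database and user names are illustrative:

```hcl
# Illustrative names; adjust to your own role hierarchy.
resource "snowflake_account_role" "analyst" {
  name    = "ANALYST"
  comment = "Read-only access to the ANALYTICS database"
}

# Grant USAGE on a database to the role.
resource "snowflake_grant_privileges_to_account_role" "analyst_db_usage" {
  account_role_name = snowflake_account_role.analyst.name
  privileges        = ["USAGE"]
  on_account_object {
    object_type = "DATABASE"
    object_name = "ANALYTICS"
  }
}

# Grant the role to a user, so membership is reviewable in PRs too.
resource "snowflake_grant_account_role" "analyst_to_jane" {
  role_name = snowflake_account_role.analyst.name
  user_name = "JANE_DOE"
}
```

Because every grant is its own resource, revoking access becomes a deleted block in a pull request rather than an ad hoc REVOKE statement someone has to remember to run.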
Warehouse Governance
Compute warehouses are the primary cost lever in Snowflake. As seen in the example from my first blog, a rogue 4XL warehouse running without auto_suspend can get you that slot in the leadership meeting you’ve been wanting but for the wrong reasons. Defining warehouses in Terraform — with auto_suspend, auto_resume, and size parameters explicitly set — makes it much harder for misconfigured warehouses to go unnoticed.
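A warehouse definition with those parameters pinned might look like the following sketch (the name and size are illustrative; attribute names follow the Snowflake provider's snowflake_warehouse resource):

```hcl
resource "snowflake_warehouse" "etl" {
  name           = "ETL_WH" # illustrative name
  warehouse_size = "XSMALL"

  # The guardrails that stop a rogue warehouse from burning credits:
  auto_suspend        = 60   # suspend after 60 seconds of inactivity
  auto_resume         = true # wake automatically when a query arrives
  initially_suspended = true # do not start billing on creation
}
```

A reviewer scanning this block can see at a glance whether the cost guardrails are in place, which is exactly the visibility a UI-provisioned warehouse lacks.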
It also makes it easier to enforce opinionated standards by creating what software engineering calls Golden Paths: opinionated, preapproved, standardized workflows with set guardrails that define how something should be built.
Environment parity
If your team runs separate Snowflake accounts for development, QA and production, Terraform makes it straightforward to keep those environments structurally consistent. Deploying a new schema or warehouse configuration to all three environments becomes a controlled, repeatable process rather than a manual checklist. Usually, this is accomplished through CI/CD pipelines that run terraform apply when a merge to main occurs. This keeps infrastructure up to date with desired changes and in parity across environments.
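One common pattern (a sketch, not the only approach) is to parameterize only the per-environment differences and let the CI/CD pipeline pass the environment name, so the structure stays identical across dev, QA and production while sizes and names vary:

```hcl
variable "environment" {
  description = "Target environment: dev, qa or prod"
  type        = string
}

locals {
  # Only the size differs; the warehouse itself exists in every environment.
  warehouse_size = {
    dev  = "XSMALL"
    qa   = "SMALL"
    prod = "LARGE"
  }[var.environment]
}

resource "snowflake_warehouse" "transform" {
  name           = "TRANSFORM_WH_${upper(var.environment)}"
  warehouse_size = local.warehouse_size
  auto_suspend   = 60
  auto_resume    = true
}
```

The pipeline then runs terraform apply with -var="environment=prod" (or the equivalent workspace or backend configuration) on merge to main.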
Audit trail for compliance
Have I mentioned that InfoSec will love you? Oh, and those notoriously demanding SOC2 auditors will lavish your codebase in praise (maybe not, but we can dream).
For teams operating in regulated industries or under internal governance requirements, having infrastructure changes tied to Git commits and PR approvals provides a meaningful audit trail. When someone asks why a database exists or who granted a role, the answer is in version control.
Not only that, but there is also clear visibility into the governance and security structure deployed, which reduces the number of questions and headaches when audits come around.
What Are the Drawbacks of Terraforming Snowflake?
The Snowflake Provider Lags Behind Snowflake’s Release Cycle
Snowflake ships new features aggressively — and that pace has accelerated considerably with the wave of AI and ML capabilities the platform has been releasing. The Terraform provider, maintained separately, does not always keep pace. This means there will be moments — sometimes extended ones — where a feature available in the Snowflake UI or via SQL is not yet manageable through Terraform. Teams that are early adopters of new Snowflake capabilities may find themselves in a frustrating hybrid state: most of their infrastructure in code, but certain resources managed manually until the provider catches up. This can stack tech debt over time.
For teams that are leaning into Snowflake’s newer AI capabilities, this gap is worth taking seriously before committing to a fully Terraform-managed account.
Migrating an Existing Snowflake Account is a Significant Undertaking
If your Snowflake account already has years of accumulated infrastructure — databases, schemas, warehouses, roles and grants provisioned manually or through scripts — importing all of it into Terraform state is a substantial project.
The Snowflake provider does support import, but the effort scales with the complexity and age of the account. Teams that underestimate this lift often end up in a halfway state that creates more confusion than it resolves. I usually advocate for a Strangler Pattern: a phased approach where legacy infrastructure is imported incrementally while all net-new objects are Terraformed from day one.
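Since Terraform 1.5, imports can themselves be declared in code, which fits the incremental approach well: each batch of legacy objects gets an import block alongside its new resource definition, reviewed in a PR like any other change. The ID below is illustrative; check the provider docs for each resource's import ID format:

```hcl
# Bring an existing, manually created warehouse under Terraform management.
import {
  to = snowflake_warehouse.legacy_reporting
  id = "LEGACY_REPORTING_WH" # illustrative; ID format varies by resource
}

resource "snowflake_warehouse" "legacy_reporting" {
  name           = "LEGACY_REPORTING_WH"
  warehouse_size = "MEDIUM"
  auto_suspend   = 300
  auto_resume    = true
}
```

Running terraform plan then shows exactly what will be adopted into state before anything changes, which keeps each migration batch reviewable and reversible.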
The PR/MR Review Process can Become a Development Bottleneck
One of Terraform’s core value propositions — gating infrastructure changes behind code review — is also one of its most significant friction points in practice. Data teams move fast. When an engineer needs a new schema, a new stage or a new Snowpipe to unblock a pipeline, routing that request through a pull request, a reviewer, an approval and a merge process introduces latency that can frustrate both the engineer and the stakeholders waiting on the work. The tighter your governance requirements, the more this friction compounds.
This is not a reason to avoid Terraform, but it is a reason to think carefully about your team’s structure and appetite for process before adopting it.
Architectural Considerations
Who Owns Your Snowflake?
The question of who owns the Terraform codebase is really a question of how your organization owns Snowflake itself. If a single data engineering team is the gatekeeper for all Snowflake infrastructure, Terraform fits naturally — one repo, one approval chain, one deployment pipeline. But if your organization operates closer to a Data Mesh model, where domain teams have ownership over their own data products and infrastructure, the picture gets more complicated. Multiple teams contributing to the same Terraform codebase introduces the potential for conflicting changes, unclear review authority and governance gaps.
This does not make Terraform unworkable in a Data Mesh context, but it does mean the ownership model for your codebase needs to be as intentional as the ownership model for your data.
Module Considerations
How you structure your Terraform modules is a meaningful architectural decision. If your team has relatively uniform infrastructure needs, opinionated modules that enforce Golden Paths — pre-approved, standardized configurations with guardrails built in — can dramatically reduce the surface area for misconfiguration and make onboarding new team members easier. On the other hand, if your data teams have dynamic, varied infrastructure needs, overly rigid modules can become a bottleneck of their own. The right balance depends on your team’s maturity and how much configuration variance you actually need. There is no universal answer, but it is worth deciding deliberately rather than letting module structure emerge organically.
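As a sketch of what a Golden Path module can look like, a thin wrapper can expose only the knobs teams are allowed to turn and validate the rest. The module interface and size limits below are hypothetical:

```hcl
# modules/team_warehouse/variables.tf
variable "team" {
  type = string
}

variable "size" {
  type    = string
  default = "XSMALL"

  validation {
    # Guardrail: teams can self-serve small warehouses; anything
    # larger goes through a separate, reviewed path.
    condition     = contains(["XSMALL", "SMALL", "MEDIUM"], var.size)
    error_message = "Warehouse size must be XSMALL, SMALL or MEDIUM."
  }
}

# modules/team_warehouse/main.tf
resource "snowflake_warehouse" "this" {
  name           = "${upper(var.team)}_WH"
  warehouse_size = var.size
  auto_suspend   = 60 # non-negotiable defaults baked into the module
  auto_resume    = true
}
```

The trade-off described above shows up directly in that validation block: every hard-coded default is one less misconfiguration, but also one more reason a team with unusual needs has to work around the module.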
Not Everything in Snowflake Belongs in Terraform
Account-level objects — warehouses, resource monitors, network policies, roles, users — are strong candidates for Terraform because they are shared, high-impact and slow-changing. But not every Snowflake object warrants the same treatment. Drawing the right boundary between what Terraform owns and what it does not is one of the more consequential decisions you will make. We will dig into this more specifically in the gotchas section.
Common Gotchas
Don’t Terraform Your Tables
Tables feel like infrastructure, but they are owned by your transformation logic — dbt, application code or migration scripts. Putting tables in Terraform creates a conflict between two systems trying to own the same objects, and Terraform’s declarative model does not play well with the iterative, column-level changes tables undergo over time. The same logic applies to views and most database-level objects. Terraform’s sweet spot in Snowflake is account-level and structural objects — warehouses, databases, schemas, roles, users, resource monitors, network policies, stages, pipes and tasks.
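In practice, the boundary often looks like this: Terraform owns the container, and the transformation tool owns what goes inside it (database and schema names are illustrative):

```hcl
# Terraform owns the structural objects...
resource "snowflake_database" "analytics" {
  name = "ANALYTICS"
}

resource "snowflake_schema" "marts" {
  database = snowflake_database.analytics.name
  name     = "MARTS"
}

# ...while the tables and views inside MARTS are created and evolved
# by dbt (or your migration tool), not by Terraform.
```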
Manual Changes Will Come Back to Haunt You
Once you commit to managing a Snowflake object in Terraform, any manual change to that object — through the UI, a SQL script or a SnowSQL session — creates drift between your code and your actual state. The gotcha is that drift often happens quietly, introduced by a well-meaning teammate who needed to move fast. Locking down permissions so that manual provisioning is not possible outside of break-glass scenarios is the right long-term answer, but most teams take time to get there.
Other Snowflake-like Tools That Can Be Terraformed
Snowflake is not the only data platform with Terraform support. If your organization runs on Databricks, there is a mature, officially maintained Terraform provider with broad resource coverage — we will cover that in the next post in this series. BigQuery and other GCP data services are manageable through the Google Cloud Terraform provider, Redshift is manageable through the AWS provider, and Azure Synapse through the AzureRM provider, all of which we will touch on in the cloud infrastructure post later in the series.
Conclusion
Of the tools in your data stack, Snowflake tends to present the most favorable conditions for Terraform adoption — a mature provider, a large infrastructure surface area, and concrete governance and cost control benefits. The friction points are real — provider lag, migration complexity, and the slowdown that comes with a code-review-gated workflow — but they are manageable with the right team structure and a deliberate approach to what you put in Terraform and what you leave out.
If you are weighing this decision for your team or looking for help navigating the implementation, we would love to talk. Reach out to us at InterWorks and let’s figure out the right path forward for your organization.
