Generative AI continues to capture our collective imagination, and if my feeds are any indication, we’re finding new words to describe these systems all the time — chatbots, copilots, agents and more. But what’s the reality here? What do practical AI workloads actually look like?
Single-LLM Features
These are straightforward, linear flows — the kind you’ll see in basic product features or data integration use cases:
- Text summarization
- Concept extraction
- Text transformations (adding structure or combining text)
- Q&A
These use cases are the easiest to get started with. If you’re a Snowflake customer, you already have access to Cortex, which gives you LLM capabilities through simple functions. Tools like Sigma can take these functions from Snowflake (or Databricks) and bring them directly into your analytics workflow.
This opens powerful new ways to work with unstructured data. A common example is survey data: you can use these functions to add structure to free-form responses, making them easier to analyze. LLMs are a powerful tool for imposing structure on our world of unstructured and semi-structured datasets.
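As a rough sketch of that structuring step, here’s what it might look like in Python. The `complete` function is a hypothetical stand-in for whatever LLM call you have access to (Cortex exposes a similar SQL function, for example), and the two-field JSON schema is just an illustration:

```python
import json

def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; a real deployment
    would send the prompt to a model here."""
    # Canned reply so the sketch runs without a model.
    return json.dumps({"sentiment": "negative", "topic": "checkout"})

def structure_response(free_text: str) -> dict:
    """Ask the model to turn a free-form survey answer into fields."""
    prompt = (
        "Return JSON with keys 'sentiment' and 'topic' for this "
        f"survey response: {free_text}"
    )
    return json.loads(complete(prompt))

row = structure_response("Checkout kept timing out, very frustrating.")
print(row["sentiment"], row["topic"])
```

Once each response is a row of labeled fields like this, it drops straight into ordinary analytics.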
In their simplest form, chatbots work this way too, with each output becoming part of the next input to continue the conversation. And just like chatbots, the key to success with any AI feature is your initial input (or prompt), which provides the context and instructions for the task.
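Here’s a minimal sketch of that conversational loop, with a hypothetical `reply` stub in place of a real model call, just to show how the history accumulates:

```python
def reply(messages):
    """Hypothetical stand-in for a chat-model call; a real chatbot
    would send the full message list to the model every turn."""
    return f"Echo: {messages[-1]['content']}"

history = []                                  # the running conversation
for user_text in ["Hello!", "What can you do?"]:
    history.append({"role": "user", "content": user_text})
    answer = reply(history)                   # prior turns ride along in the input
    history.append({"role": "assistant", "content": answer})

print(history[-1]["content"])                 # -> Echo: What can you do?
```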
AI Workflows
This is where things get more complex. You still have defined inputs and outputs, but the path between them can branch based on conditions. Think of LLMs being orchestrated through code or visual ETL-like flows—something many data professionals are already familiar with. Traditional ETL tools are adding AI features, and we’re also seeing AI-focused workflow tools emerge, like n8n.
With AI workflows, you can tackle problems like:
- Support response automation
- Lead management and enrichment
- Research tasks
- Data enrichment
- Communications integration and summarization
The power here is in handling more nuanced tasks that require some deterministic decision-making along the way.
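To make that branching concrete, here’s a small Python sketch of a support-routing step. `classify` is a hypothetical stand-in for an LLM labeling call, while the branch itself is plain deterministic code — which is exactly the division of labor in these workflows:

```python
def classify(ticket: str) -> str:
    """Hypothetical stand-in for an LLM classification step; a real
    workflow would prompt a model to label the ticket."""
    return "refund" if "refund" in ticket.lower() else "general"

def handle_ticket(ticket: str) -> str:
    """Route the ticket down a branch based on the model's label."""
    label = classify(ticket)
    if label == "refund":
        return "escalate_to_billing"   # deterministic branch on the label
    return "send_auto_reply"

print(handle_ticket("I want a refund for my order"))
print(handle_ticket("Where is my package?"))
```

A visual workflow tool draws the same shape as boxes and arrows; the model fills in the label, and the workflow decides what happens next.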
AI Agents
Agents are different. They decide their own path and work mostly independently. They have access to multiple tools and choose which ones to use (or not use) through technologies like MCP (Model Context Protocol). This autonomy makes agents powerful but also introduces new considerations around cost, processing time and error handling.
A few practical tips about AI agents:
- Start simple. Don’t jump straight to agents. They’re best for complex, valuable workflows where you can wait for results.
- Focus on the basics. Agents are just models using tools in a loop. Your job is figuring out what tools they need and what prompts will guide them effectively.
- Manage context carefully. The model only knows what’s in its prompt. You need to help it track what it’s done and what comes next — this can be as simple as a text file the agent can read and update.
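Putting those tips together, here’s a rough Python sketch of an agent as “a model using tools in a loop.” `choose_action` is a hypothetical stand-in for the model’s decision, the two entries in `TOOLS` are made-up tools, and the notes file is the simple text-file memory described above:

```python
import os
import tempfile

# Hypothetical tools the agent may choose from.
TOOLS = {
    "search": lambda q: f"results for {q!r}",
    "calculate": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def choose_action(goal, notes):
    """Hypothetical stand-in for the model deciding its next step from
    its prompt (the goal plus its notes). A real agent asks an LLM here."""
    if "ran search" not in notes:
        return ("search", goal)
    if "ran calculate" not in notes:
        return ("calculate", "2 + 2")
    return None                        # the model decides it is done

def run_agent(goal, notes_path):
    """Loop: read notes, pick a tool, act, append the result to notes."""
    while True:
        with open(notes_path) as f:
            notes = f.read()
        action = choose_action(goal, notes)
        if action is None:
            return notes
        tool, arg = action
        result = TOOLS[tool](arg)
        with open(notes_path, "a") as f:    # the agent's simple memory
            f.write(f"ran {tool} on {arg!r} -> {result}\n")

path = os.path.join(tempfile.mkdtemp(), "notes.txt")
open(path, "w").close()
print(run_agent("shipping costs", path))
```

Notice that the loop itself is trivial; the hard parts are exactly the ones the tips call out — which tools to offer, how to prompt the decision, and how to keep the notes accurate.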
Human-in-the-loop review becomes important here, allowing for guidance and quality control when needed.
Looking Ahead
It’s hard to predict exactly what’s next, but current trends suggest we’ll see multiple agents working together, with humans assigning and managing their workloads. Whether the future brings one powerful agent or a fleet of specialized ones working in concert, the key is starting with practical applications today.
The path forward is clear: Begin with simple LLM features, experiment with workflows as your needs grow more complex, and consider agents only when you have high-value problems that justify their complexity. Each step builds on the last, creating a foundation for whatever comes next.