Ask an AI tool for a summary, and close might be good enough.
That changes when accuracy is non-negotiable. As AI enters regulated and operational systems, organizations are being forced to confront which questions can tolerate probability and which ones can’t.
“If you’re a CFO and you go ask your AI agent what was my Q2 revenue, it has to be accurate down to the penny,” says Anahita Tafvizi, chief data and analytics officer at Snowflake. “It cannot be probabilistic.”
That reality is shaping how and when AI is used inside organizations. While generative tools have proven they can move fast and sound convincing (sometimes a little too convincing), organizations are now grappling with a harder problem.
How do you use AI inside systems where the cost of being wrong is real, measurable, and sometimes regulated?
As AI moves closer to core operations, the question becomes whether it can be trusted to return consistent, accurate answers, or to say clearly when it does not know.
Bringing AI to the data, not the other way around
One way enterprise platforms are responding is by embedding AI directly into existing data environments instead of moving data into separate tools.
“The value that Snowflake brings is that it has expertise in both structured and unstructured data,” Tafvizi says. “You bring AI to the data versus the other way around.”
Structured data, such as financial records and operational metrics, has long formed the backbone of enterprise analytics, the systems organizations rely on to analyze performance and make decisions. Unstructured data, including documents, presentations, and internal knowledge, has been harder to integrate.
As interaction with these models moves into natural language, the boundaries between structured and unstructured data start to blur.
A single question can pull from financial systems, internal documents, and institutional knowledge at once, raising new questions about accuracy, context, and control.
As organizations experiment with this approach, early examples are beginning to emerge. This fall, TD Bank launched a Wealth Virtual Assistant that allows advisors to query internal policies and market data simultaneously to deliver faster client insights.
Inside Snowflake, that approach has led to the development of an internal go-to-market AI assistant that allows employees to query both types of data using natural language.
“We get today like 30,000 questions a week on that tool alone,” she says.
That scale can’t come at the cost of data management and security. The system is designed to respect access controls and decline questions outside its defined scope.
“If our sales team goes in there and asks a question about politics, the agent will gracefully decline,” she says.
The goal is to answer the right questions, in the right context, with clear boundaries around what the system is designed to do.
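The behaviour Tafvizi describes, answering in-scope questions and gracefully declining the rest, can be sketched as a simple guardrail check. The topic list, function names, and decline message below are illustrative assumptions, not Snowflake's actual design:

```python
import re

# Hypothetical allow-list of go-to-market topics the assistant may answer.
IN_SCOPE_TOPICS = {"pipeline", "revenue", "accounts", "pricing", "forecast"}

def run_query(question: str) -> str:
    # Placeholder for the retrieval/SQL layer; a real system would also
    # enforce the caller's row- and column-level access controls here.
    return f"[answer derived from governed data for: {question}]"

def answer(question: str) -> str:
    """Answer only when the question touches an allowed topic; otherwise decline."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    if words & IN_SCOPE_TOPICS:
        return run_query(question)
    return "I can only help with go-to-market questions."
```

In practice the in-scope check would be a classifier or policy layer rather than a keyword list, but the contract is the same: clear boundaries, and a refusal path that is designed in rather than bolted on.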
The work happening behind AI adoption
In discussing the challenges of AI deployment inside organizations, Tafvizi pointed to what might be described as documentation debt.
Many organizations, Tafvizi says, are discovering that gaps in how data is defined and documented can limit AI deployment.
“If you don’t have it documented to train a human, then how do you train an AI agent?” she says.
In response, data teams are spending more time building semantic layers, effectively translation dictionaries that explain what data actually means.
A column labelled "rev_q2" needs a definition: that it represents gross revenue, how it is calculated, and when it should be used. Without that context, AI systems move quickly and fill the gaps with assumptions, a recipe for problematic hallucinations.
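A semantic-layer entry for a column like "rev_q2" might look like the following sketch. The field names and definitions here are invented for illustration, not any specific platform's schema format:

```python
# Illustrative semantic-layer dictionary: maps raw column names to the
# documented meaning an AI agent (or a new analyst) would otherwise guess at.
SEMANTIC_LAYER = {
    "rev_q2": {
        "label": "Q2 gross revenue",
        "definition": "Sum of invoiced amounts for fiscal Q2, before discounts and refunds.",
        "use_when": "Board and revenue reporting; not for net-revenue analysis.",
    },
}

def describe(column: str) -> str:
    """Return the documented meaning of a column, or an explicit refusal
    to answer, rather than letting the model improvise a definition."""
    entry = SEMANTIC_LAYER.get(column)
    if entry is None:
        return f"No definition for '{column}'; do not answer from this column."
    return f"{entry['label']}: {entry['definition']} Use: {entry['use_when']}"
```

The point of the lookup's failure path is the same discipline Tafvizi describes for the assistant itself: an undefined column should produce a clear "I don't know," not a plausible guess.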
This work is rarely visible to end users, but it determines whether AI can be trusted inside core systems.
Without it, even powerful models struggle to distinguish signal from noise.
A different view of AI and work
The shift toward AI-assisted work is also changing how teams are hired and evaluated.
“I expect my team members to use AI products,” Tafvizi says. “So, therefore, I have to test for the ability to be able to use AI products.”
That stance cuts against a growing trend of companies banning AI use during interviews out of concern about cheating. Tafvizi takes the opposite view. Using AI effectively and knowing how to verify its output is becoming part of the job.
At the same time, she is clear that automation has limits.
“Any time that you need judgment or evaluation or testing, I think you still need a human,” she says.
The future of work, in her view, is not about replacing people, but about changing where their time and attention are best spent.
Measuring value beyond speed
When it comes to return on investment, Tafvizi cautions against reducing AI’s impact to cost savings alone.
“The value isn’t just in hours saved,” she says. “It’s in the new information you give people so they can have more informed conversations.”
At Snowflake, she says, value is measured through a mix of adoption and behaviour. Usage rates show whether teams are actually incorporating AI into their daily work, while the volume and type of questions reveal how it is changing decision-making.
Simple knowledge queries may save minutes. More complex analytical questions can replace work that once required analysts, queue time, and manual review.
In some cases, those questions were not asked at all before. The effort required made them easy to deprioritize. By making answers available immediately, AI can change how fast teams move and what they choose to explore.
That shift, Tafvizi suggests, is harder to quantify than time saved, but more telling of long-term value.
Drawing the line
As enterprise AI becomes more embedded in day-to-day work, the hardest decisions increasingly sit outside the technology itself.
Leaders are being pushed to decide which questions can be explored, which answers can be automated, and where responsibility should still rest with a human.
That line will vary by function.
AI that helps surface ideas or summarize internal knowledge plays a different role from systems that feed financial reporting or compliance workflows. Each use carries its own expectations around accuracy, accountability, and oversight.
The next phase will be defined by how deliberately organizations decide where these tools belong and how clearly they define the limits around their use.
In that sense, the most capable AI systems may be the ones that know when to say nothing at all. To be fair, this is a standard some humans would do well to keep in mind.
Final shots
- Trust, accuracy, and repeatability are becoming central requirements as AI systems move into operational and regulated environments.
- Natural language interaction is reshaping how organizations connect structured data, documents, and internal knowledge, increasing the importance of governance and clarity.
- Clear data definitions and documentation are emerging as foundational work for AI deployment, even though this effort remains largely invisible.
- Broader access to answers is changing how teams make decisions by enabling questions that were previously difficult or time-consuming to pursue.
- Successful enterprise AI adoption depends on clear decisions about scope, responsibility, and where human judgment remains essential.
