“Whether you’re implementing AI or you’re the one approving the budget, we don’t want you to be the person to be like, ‘Oh crap. This was a huge mistake that I made.’”
That was Kelly Cherniwchan’s goal as he took the stage at YYC DataCon 2025. Speaking to a wide audience of data and business professionals, the CEO and founder of Chata.ai outlined the financial and operational risks that organizations overlook at their own peril when adopting AI.
For business leaders and technology teams, AI can be a game-changer — but only if implemented strategically.
Rushing in without understanding the economics and risks can lead to budget overruns, legal liabilities, and failed projects.
Here’s how Cherniwchan suggests you can ensure AI investments deliver real value.
1. Assess your AI readiness
Before adopting AI, organizations should evaluate whether they are equipped to manage the costs and risks involved. Cherniwchan said that too many companies dive in without clear expectations.
Ask yourself:
- Do you have a clear business case for AI, beyond “it’s a hot trend”?
- Can you quantify the return on investment (ROI)?
- Do you have the infrastructure to support AI deployment long-term?
Many businesses overestimate their ability to train their own AI models.
“Don’t go down the pathway of, ‘Hey, let’s use an open-source LLM, and then we can replicate a ChatGPT.’ It’s not that easy,” Cherniwchan warned.
2. Calculate the true cost of AI
AI costs are not just about initial set-up. Ongoing inference costs — the price of running AI models on cloud servers — can quickly spiral out of control.
Cherniwchan shared a cautionary example of an NFL team that reportedly blew through a 12-month budget in a single month because the inference costs of generative AI used for fan engagement ran far higher than anticipated.
Checklist for evaluating AI costs:
- Estimate training costs: Custom AI models require significant compute power.
- Calculate inference costs: Consider per-use fees for cloud-based AI services (a rough cost sketch follows this checklist).
- Factor in maintenance: AI models degrade over time and need regular updates.
- Account for personnel: AI projects require engineers, data scientists, and compliance teams.
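To put rough numbers on that checklist, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (per-token prices, request volume, maintenance, and salary share) is an illustrative assumption, not a number from the talk or from any provider’s rate card.

```python
# Back-of-the-envelope monthly cost for a cloud-hosted GenAI feature.
# All constants are illustrative assumptions -- substitute your own quotes.

PRICE_PER_1K_INPUT_TOKENS = 0.005   # USD, assumed provider rate
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # USD, assumed provider rate

def monthly_inference_cost(requests_per_day: int,
                           avg_input_tokens: int,
                           avg_output_tokens: int) -> float:
    """Estimate monthly spend on per-token cloud inference."""
    per_request = (avg_input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
                   + avg_output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS)
    return per_request * requests_per_day * 30

# Example: a fan-engagement chatbot handling 50,000 questions a day.
inference = monthly_inference_cost(50_000, avg_input_tokens=800,
                                   avg_output_tokens=400)
maintenance = 4_000   # assumed monitoring, evaluation, and model updates
personnel = 25_000    # assumed share of engineering and data-science time

print(f"Inference: ${inference:,.0f}/month")   # $15,000/month at these rates
print(f"All-in:    ${inference + maintenance + personnel:,.0f}/month")
```

Even with these modest assumed rates, inference alone reaches five figures a month at this volume, and a traffic spike of a few multiples is all it takes to burn a year’s budget in a month.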
AI service providers may also increase prices over time.
“If you think the price of AI services will stay the same, think again,” Cherniwchan said. “Companies have to make money at some point, and costs will rise.”
3. Implement AI governance and risk mitigation
Cherniwchan emphasized that AI governance is no longer optional — especially in regulated industries.
Whatever AI you use — GenAI or otherwise — it’s becoming a requirement that you be able to explain what’s going on.
Steps to ensure AI accountability:
- Define AI risk categories: Distinguish between low-risk (email automation) and high-risk (financial modeling, medical decision-making) applications.
- Require explainability: AI-generated decisions should be auditable, meaning businesses can track what data was used, how decisions were made, and how outputs were generated (a minimal audit-record sketch follows this list).
- Address legal liability: Ensure compliance with data protection laws and copyright considerations.
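To make the explainability requirement concrete, the sketch below shows what a minimal audit record for each AI-generated decision might look like. The field names and risk categories are illustrative assumptions, not a standard and not something Cherniwchan prescribed.

```python
# Minimal sketch of an audit record for AI-generated decisions.
# Field names and risk categories are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    model_id: str            # which model (and version) produced the output
    risk_category: str       # e.g. "low" (email automation) vs "high" (medical)
    input_summary: str       # what the model was asked
    data_sources: list[str]  # what data informed the answer
    output: str              # what the model produced
    human_reviewed: bool     # was a person in the loop?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AIDecisionRecord(
    model_id="example-llm-v2",  # hypothetical model name
    risk_category="high",
    input_summary="Triage: chest pain, shortness of breath",
    data_sources=["patient_intake_form", "vitals_feed"],
    output="Recommend immediate escalation to a clinician",
    human_reviewed=True,
)
print(json.dumps(asdict(record), indent=2))  # append to an audit log in practice
```

In practice, each record would be appended to tamper-evident storage so any output can later be traced back to its inputs.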
Cherniwchan highlighted that some AI startups have already faced lawsuits over incorrect AI-generated recommendations. In one case, an AI misdiagnosed a heart attack as a panic attack, leading to a fatality.
4. Create AI checkpoints to prevent errors
AI systems make mistakes, and in automated workflows those mistakes compound. Cherniwchan warned about hallucinations — incorrect or fabricated AI outputs — which occur far more frequently than some research suggests.
When an AI model gets even a small percentage of its outputs wrong, those errors multiply as they pass through a multi-step process, Cherniwchan explained. In agentic workflows, where AI models interact with and refine their own outputs, hallucinations can snowball quickly.
He illustrated this with an example: if an AI system has a 10% hallucination rate — meaning 10% of its outputs are incorrect — then after three sequential AI-driven steps, the compounded accuracy drops to 0.9 × 0.9 × 0.9, or 72.9%. At a 20% hallucination rate, three steps leave accuracy at roughly 51%, and every additional stage drags it lower.
“You do the math, and it starts getting down to the point of 50-50,” he said.
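The arithmetic behind that remark is easy to verify: if each step is independently correct with probability (1 - hallucination rate), end-to-end accuracy is that probability raised to the number of steps. A short sketch:

```python
# End-to-end accuracy of a pipeline where each automated step is
# independently correct with probability (1 - hallucination_rate).
# Simplifying assumption: errors never cancel each other out.

def compounded_accuracy(hallucination_rate: float, steps: int) -> float:
    return (1 - hallucination_rate) ** steps

for rate in (0.10, 0.20):
    for steps in (3, 5):
        acc = compounded_accuracy(rate, steps)
        print(f"{rate:.0%} hallucination rate, {steps} steps -> {acc:.1%}")
# 10% rate / 3 steps -> 72.9%; 20% rate / 3 steps -> 51.2%, the "50-50"
```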
How to reduce AI errors:
- Introduce verification steps: AI outputs should be reviewed by humans or secondary AI models.
- Use multiple models for comparison: Some organizations run models from OpenAI, Anthropic, and xAI (Grok) in parallel and cross-check the results (see the sketch after this list).
- Stop AI processes at key decision points: Don’t let automated systems run unchecked.
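Here is a minimal sketch of that cross-model checkpoint. The `ask_openai`, `ask_anthropic`, and `ask_grok` callables are hypothetical placeholders for your actual provider clients, not real SDK functions.

```python
# Checkpoint sketch: query several models, auto-accept only on agreement,
# and halt for human review otherwise. The ask_* functions are hypothetical
# placeholders for real provider client calls.
from collections import Counter

def cross_check(prompt: str, ask_fns, min_agreement: int = 2) -> str:
    answers = [ask(prompt) for ask in ask_fns]
    winner, votes = Counter(answers).most_common(1)[0]
    if votes >= min_agreement:
        return winner                   # checkpoint passed
    raise RuntimeError(                 # checkpoint failed: stop the pipeline
        f"Models disagree ({answers}); route to human review")

# answer = cross_check("Summarize Q3 revenue drivers",
#                      [ask_openai, ask_anthropic, ask_grok])
```

Exact string matching is crude; in practice you would normalize or semantically compare the answers. The design point is that disagreement stops the workflow rather than propagating downstream.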
5. Explore emerging AI technologies
While GenAI dominates headlines, newer architectures may offer better performance and cost efficiency. Cherniwchan highlighted neuromorphic hardware as a promising alternative to traditional GPUs.
“Better technology is my favorite topic,” he said. “There’s new architectures coming out that I think are very promising.”
A few he suggested watching:
- Spiking neural networks: Event-driven models whose neurons compute only when they fire, promising more efficient AI processing (a toy example follows this list).
- Liquid neural networks: Continuous-time models well suited to time-series forecasting.
- Neuromorphic chips: Brain-inspired hardware that runs AI workloads at lower power consumption.
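For readers who have never seen what “spiking” means, here is a toy leaky integrate-and-fire neuron, the basic unit of many spiking neural networks. It is a teaching sketch with arbitrary constants, not production neuromorphic code.

```python
# Toy leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# toward rest, integrates input current, and emits a discrete spike when
# it crosses a threshold -- computation happens only at spike events.

V_REST, V_THRESH, V_RESET = 0.0, 1.0, 0.0  # arbitrary units
TAU, DT = 10.0, 1.0                        # membrane time constant, step (ms)

def simulate(currents):
    v, spikes = V_REST, []
    for t, i_in in enumerate(currents):
        v += DT / TAU * (-(v - V_REST) + i_in)  # leak + integrate
        if v >= V_THRESH:                       # fire ...
            spikes.append(t)
            v = V_RESET                         # ... and reset
    return spikes

print(simulate([1.5] * 50))  # constant drive -> regular spike train
```

Because downstream work happens only at the discrete spike events rather than on every input, hardware built around this model can sit idle most of the time, which is where the power savings come from.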
The bottom line: Proceed with strategy, not hype
Cherniwchan’s message was clear: AI is powerful, but only if implemented with caution and clear objectives.
“This is incredibly powerful technology,” he concluded, advising the audience to take their time and assess the available options before diving into an AI implementation.
Ask yourself: What problem am I really trying to solve, and is AI the best way to solve it?
For professionals considering AI adoption, those might be the most important questions of all.
Digital Journal is the official media partner of YYC DataCon 2025.

This article was created with the assistance of AI.
