
The rise of agentic AI

Agentic AI has become the buzzword dominating every enterprise technology conference this year. According to PwC’s 2025 survey of 300 senior executives, 79% say their companies are already adopting AI agents. However, as experts at AtScale caution, most organizations are racing ahead without proper data governance frameworks in place.

Photo courtesy of Pachon in Motion on Pexels.

Opinions expressed by Digital Journal contributors are their own.


Agentic AI refers to artificial intelligence systems that independently make decisions and take actions to achieve specific goals without constant human oversight. Unlike traditional AI that responds to prompts, agentic systems can plan multi-step processes, interact with various tools, and adapt their strategies based on changing conditions.

The challenge, as AtScale sees it, is that when AI agents operate without proper semantic context, they often act on incomplete or misinterpreted data. An autonomous agent might confidently execute a marketing campaign based on customer segments it has fundamentally misunderstood.

Companies like AtScale are stepping into this gap by providing the semantic foundation that enterprises need before unleashing autonomous AI. Their platform creates a universal layer of business context that ensures AI agents understand not only the data they access but also what that data actually means to the business.

Photo courtesy of AtScale.

How agentic AI works

Traditional AI systems operate like sophisticated calculators. They respond to specific prompts and deliver outputs based on their training data. Think of asking a chatbot for a market analysis. It provides information, but you still need to decide what to do with it.

Agentic AI flips this dynamic completely. These systems start with a high-level goal and independently figure out how to achieve it through multi-step planning and execution. Instead of generating a static market report, an agentic system might analyze market data, identify trends, draft strategic recommendations, and even schedule follow-up meetings with stakeholders.

The core architecture revolves around three essential components: planning, execution, and feedback loops. The planning layer breaks complex objectives into manageable tasks. The execution engine carries out those tasks while interacting with various tools and databases. Feedback loops continuously monitor results and adjust strategies in real-time.
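To make the pattern concrete, here is a minimal, self-contained sketch of that plan-execute-feedback loop in Python. Every function and tool name in it is a toy placeholder for illustration, not the API of AtScale or any particular agent framework; a real system would swap these stubs for LLM calls and live tool integrations.

```python
def make_plan(goal: str) -> list[str]:
    # Planning layer: break a high-level goal into ordered tasks.
    return [f"gather data for {goal}",
            f"analyze findings for {goal}",
            f"draft recommendations for {goal}"]

def execute_task(task: str, tools: dict) -> str:
    # Execution engine: choose a tool and carry out the task.
    tool = tools["search"] if task.startswith("gather") else tools["analyze"]
    return tool(task)

def revise_plan(plan: list[str], last_result: str) -> list[str]:
    # Feedback loop: adjust the remaining steps based on the last result.
    return [] if "error" in last_result else plan

def run_agent(goal: str, tools: dict) -> list[tuple[str, str]]:
    history, plan = [], make_plan(goal)
    while plan:
        task = plan.pop(0)
        result = execute_task(task, tools)
        history.append((task, result))
        plan = revise_plan(plan, result)
    return history

tools = {"search": lambda task: f"found 3 sources while trying to {task}",
         "analyze": lambda task: f"completed step: {task}"}

for task, result in run_agent("Q3 market analysis", tools):
    print(f"{task} -> {result}")
```

The key difference from a prompt-and-response chatbot is the loop itself: the agent keeps deciding what to do next until the plan is exhausted, which is exactly where a flawed data foundation gets compounded step after step.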

This autonomy, however, creates a critical vulnerability. Agentic systems are only as reliable as their data foundation and semantic understanding. An agent might flawlessly execute a customer retention strategy based on fundamentally flawed customer segmentation data. 

“It’s tempting to chase the hype, but building AI agents that work for the enterprise is hard,” says Dave Mariani, AtScale co-founder and CTO. “It requires accuracy, governance, lineage, and security. It also requires a way to bridge the gap between natural language and business logic without compromising on standards,” he adds.

The hype and early use cases

Gartner places agentic AI squarely in the experimental phase of its maturity roadmap. Most organizations remain at Level 2 (AI Assistants) while gradually testing Level 3 capabilities, where agents reason through undefined tasks and collaborate across systems. The research firm predicts that 15% of day-to-day work decisions will be made autonomously through agentic AI by 2028, up from virtually none in 2024.

Early adopters are gravitating toward three core applications. 

  • Automated reporting systems generate financial summaries and performance dashboards without human intervention.
  • Supply chain monitoring agents continuously track inventory levels, shipping delays, and supplier performance.
  • Customer support routing has become increasingly sophisticated, with agents analyzing incoming requests, categorizing urgency levels, and directing issues to appropriate specialists.

While PwC’s survey indicates that two-thirds of organizations using AI agents report measurable productivity gains, that success hinges entirely on data quality and contextual accuracy. In fact, Gartner warns that “over 40% of agentic AI projects will be canceled by the end of 2027, due to escalating costs, unclear business value, or inadequate risk controls.”

Most enterprises remain firmly in the pilot phase, testing isolated use cases rather than scaling comprehensive agentic workflows.

Why semantic context matters

The core challenge with agentic AI stems from missing semantic context. AI agents operating on raw data frequently encounter conflicting metric definitions across departments.

Picture this scenario: An AI agent confidently recommends doubling the marketing budget based on “stellar customer acquisition costs.” Meanwhile, another agent suggests cutting that same budget because the “customer acquisition costs are too high.” Both agents analyzed the same company data. Both reached opposite conclusions.

This happens because marketing calculates acquisition costs differently than finance does. Marketing includes only direct ad spend. Finance factors in salaries, overhead, and attribution models. Without semantic context, AI agents become expensive fortune tellers, making decisions based on fundamentally different interpretations of the same metrics.
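A toy calculation makes the divergence obvious. The figures below are invented purely for illustration, but they show how two agents reading the same company data can report a “stellar” and a “too high” acquisition cost at the same time, simply because their definitions differ.

```python
new_customers = 1_000
ad_spend = 50_000       # direct ad spend only (marketing's definition stops here)
salaries = 80_000       # marketing team salaries (finance adds these)
overhead = 20_000       # tools, agencies, allocated overhead (finance adds these too)

marketing_cac = ad_spend / new_customers
finance_cac = (ad_spend + salaries + overhead) / new_customers

print(f"Marketing's CAC: ${marketing_cac:,.0f}")   # $50  -> "stellar, double the budget"
print(f"Finance's CAC:   ${finance_cac:,.0f}")     # $150 -> "too high, cut the budget"
```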

Without semantic guardrails, agents are prone to confident hallucinations and misaligned decisions. They might analyze customer churn data that excludes key demographic segments, or calculate revenue projections using incomplete product categorizations. Research shows that enterprise AI systems asking questions over raw data achieve only 16% accuracy, compared to 54% accuracy when operating through structured knowledge graphs.

Without Semantic Layer | With Semantic Layer
Conflicting metric definitions | Consistent business definitions
Data hallucinations | Governed data access
Siloed interpretations | Enterprise-wide alignment
16% accuracy rate | 54% accuracy rate
Higher | Lower, more context-aware
Limited by manual updates | Cloud-scale, automated response

Semantic layers solve this chaos by creating a universal business language. Every AI agent understands exactly what “revenue,” “customer,” or “conversion rate” means within an organizational context. The AtScale semantic layer platform was built on this principle so that AI agents operate with consistent definitions rather than competing interpretations.
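Conceptually, a semantic layer acts like a single registry of governed metric definitions that every agent resolves against before it reasons about the data. The snippet below is a simplified, hypothetical illustration of that idea in Python; it is not AtScale’s actual modeling language or API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    name: str
    formula: str   # the single, governed business definition
    grain: str     # the level of detail the metric is valid at
    owner: str     # the accountable team, for governance and audit trails

SEMANTIC_LAYER = {
    "customer_acquisition_cost": MetricDefinition(
        name="customer_acquisition_cost",
        formula="(ad_spend + salaries + overhead) / new_customers",
        grain="month",
        owner="finance",
    ),
}

def resolve_metric(term: str) -> MetricDefinition:
    # Every agent, whatever department it serves, resolves the same name
    # to the same governed definition instead of improvising its own.
    return SEMANTIC_LAYER[term]

print(resolve_metric("customer_acquisition_cost").formula)
```

Because the definition lives in one governed place, the marketing agent and the finance agent from the earlier example can no longer disagree about what “customer acquisition cost” means.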

According to AtScale’s Mariani, “What the industry calls ‘agentic AI’ today is ultimately an evolution of something we’ve always believed: analytics systems should be intelligent, explainable, and grounded in business logic. Not just reactive, but proactive. Not just automated, but trustworthy. The key to that? Semantics.”

Benefits of combining agentic AI + semantic layers

When semantic layers and agentic AI work together, organizations unlock capabilities that neither technology can deliver alone. The combination transforms autonomous agents from unpredictable automation tools into reliable business partners.

  • Improved trust: Semantic foundations can help build confidence in AI decisions by providing transparent, traceable logic behind every recommendation
  • Early efficiency gains: Teams may reduce time spent on manual reporting and analysis tasks as agents operate with pre-defined business logic
  • Better alignment: Semantic layers support enterprise-wide consistency so that agents across different departments work with the same metric definitions
  • Reduced hallucinations: Structured context can help minimize AI errors by providing governed data relationships instead of raw database access
  • Standardized metrics: Universal definitions may reduce conflicts between departmental interpretations of key performance indicators
  • Improved explainability: Semantic layers support audit trails that trace agent decisions back to specific business rules and data sources

Challenges and risks

The path to agentic AI success is littered with well-intentioned pilot programs that never made it to production. Organizations discover that autonomous agents bring a unique set of challenges that traditional IT governance frameworks simply cannot handle.

Governance becomes exponentially complex when agents start making decisions across multiple systems simultaneously. Corporate policies that work perfectly for human employees suddenly need translation into machine-readable rules. How do you ensure an AI agent follows your data retention policies when it operates across twelve different platforms?
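One way to picture that translation: a retention policy written for humans becomes a rule the agent must check before it touches a record. The sketch below is hypothetical; the field names, retention periods, and default-deny behavior are assumptions chosen for the example, not a description of any specific product.

```python
from datetime import date, timedelta

# Human policy: "Delete customer interaction data after 24 months."
# Machine-readable version the agent checks before acting on a record.
RETENTION_POLICIES = {
    "customer_interactions": timedelta(days=730),   # roughly 24 months
    "financial_records": timedelta(days=2555),      # roughly 7 years
}

def agent_may_use(record_type: str, created_on: date, today: date) -> bool:
    """Return False when a record falls outside its retention window,
    so the agent must exclude it from any downstream action."""
    limit = RETENTION_POLICIES.get(record_type)
    if limit is None:
        return False          # unknown data types are blocked by default
    return today - created_on <= limit

# A record created in January 2023 is past the 24-month window by late 2025.
print(agent_may_use("customer_interactions", date(2023, 1, 15), date(2025, 11, 1)))  # False
```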

Data quality issues get amplified at machine speed. A human analyst might notice when customer segmentation data looks suspicious. An autonomous agent confidently executes campaigns based on that flawed segmentation across thousands of customers before anyone realizes the problem.

Security boundaries become blurry when agents need enough access to be useful but not enough to become dangerous. The challenge lies in granting appropriate permissions without creating backdoors that could compromise sensitive systems.

Explainability turns critical when agents make recommendations that affect real business outcomes. Stakeholders want to understand why an agent recommended cutting the marketing budget or changing supplier relationships. Without semantic foundations, such decisions become black boxes that erode organizational trust.

These risks explain why semantic layers represent essential infrastructure rather than nice-to-have additions for agentic AI implementations.

Takeaway

The agentic AI revolution looks impressive on paper. Organizations are racing to deploy autonomous agents that can plan, execute, and adapt without human oversight. 

This creates a predictable pattern. Companies launch pilot programs with autonomous agents that seem brilliant during demos. Then reality hits. Agents offer conflicting recommendations based on different interpretations of the same metrics. Marketing and finance agents reach opposite conclusions about customer acquisition costs. Revenue projections become unreliable because agents misunderstand product categorizations.

The organizations seeing real productivity gains share one common trait: they built semantic context before unleashing autonomy. They established universal business definitions, governed data access, and ensured consistent metrics across all systems. Only then did they let AI agents make independent decisions.

AtScale’s platform provides exactly this foundation. It creates the business context infrastructure that transforms experimental pilots into production-ready solutions that work at enterprise scale. Ready to explore agentic AI responsibly? Enterprises should first ensure a governed semantic foundation. Learn more about AtScale to enable trusted, AI-ready analytics across your entire data ecosystem.

Written By

Jon Stojan is a professional writer based in Wisconsin. He guides editorial teams consisting of writers across the US to help them become more skilled and diverse writers. In his free time he enjoys spending time with his wife and children.
