
Q&A: Tackling the major challenges limiting enterprise AI training

Improving AI at work: One of the most practical advances is the shift toward structured and supervised training pathways.

Asian markets stuttered into the weekend, with eyes on US data and next week's Federal Reserve rate decision - Copyright AFP Mohd RASFAN

Data privacy, security, and compliance are major challenges limiting enterprise AI training in firms. However, organisations can address these risks, such as shadow AI, through stronger governance and trust frameworks.

The process for achieving this has been outlined by Shane Tierney, Senior Program Manager of GRC at Drata, an agentic trust management platform, in conversation with Digital Journal.

Digital Journal: In what ways are data privacy, security, and compliance challenges hindering the ability to train better AI within enterprises today?

Shane Tierney: Enterprises are running into a trust ceiling. This is the point where the need for more business-relevant AI models conflicts with privacy, security, and compliance obligations. Training stronger models requires using sensitive internal data, but that data is often subject to retention limits, data residency rules, contractual restrictions, and customer confidentiality requirements.

As shadow AI expands, employees sometimes move data into unsanctioned tools or services, which introduces leakage risk, audit gaps, and unclear accountability. At the same time, regulators and customers increasingly expect organizations to demonstrate governance and transparency through measurable practices. That friction slows model training, limits how widely data can be used, and encourages teams to be conservative about experimentation even when the business is urging them to move quickly.

So the central question is no longer whether enterprises can train better AI. The real question is how they can do it while proving that the process is secure, compliant, and accountable.

DJ: What recent progress in new training processes, privacy-preserving machine learning, and other techniques is helping address these limitations?

Tierney: There has been meaningful progress in approaches that strengthen control and reduce exposure while still improving model performance. Organizations are adopting stronger governance and visibility controls to reduce the risks associated with shadow AI, and they are beginning to treat trust as something that can be measured and demonstrated rather than assumed.

One of the most practical advances is the shift toward structured and supervised training pathways. Every effective training program depends on clear objectives and defined boundaries, and AI is no different. Enterprises are creating sanctioned workflows for AI use, improving visibility into where data flows, clarifying which models are being used, and identifying who owns the associated risks. These measures do not remove every privacy or security concern, but they replace ad hoc behavior with accountable systems that regulators and customers increasingly expect to see.

DJ: How should enterprises weave these kinds of capabilities together to achieve the best results over the next twelve months?

Tierney: Enterprises should run AI as a trust program rather than a side project or a one-time compliance checklist. Over the next twelve months, the organizations that elevate the CISO as a de facto “Chief Trust Officer” will see the strongest results. This role can help quantify how governance, transparency, and accountability support faster adoption and unlock higher quality training data that teams may be reluctant to use without clear safeguards.

Even in a period of governmental deregulation, the most mature organizations are tightening their bolts. They are improving governance, evidence collection, and risk oversight so they are prepared for the next regulatory cycle rather than caught off guard when expectations inevitably rise again.

It is also important to provide sanctioned AI tools and workflows that are safer and easier than shadow alternatives. By increasing visibility, automating compliance evidence, and streamlining risk review, enterprises allow teams to innovate quickly without having to renegotiate risk every time.

In summary, if you can’t show control and accountability, you’ll keep getting stuck in AI purgatory.

DJ: What sort of further progress or capabilities might shift the adoption of new approaches?

Tierney: A major breach traced back to shadow AI would instantly change risk tolerance and funding priorities. If that happens, organizations will accelerate investment in governance, monitoring, and culture change, because policy alone won't contain shadow AI.

Beyond a crisis-driven catalyst, adoption will shift when trust becomes something that organizations can consistently prove and communicate. Enterprises need to demonstrate AI accountability in ways that satisfy customers, auditors, and regulators without slowing execution. When companies can measure trust, compare it over time, and show that stronger controls directly support safer and more effective AI use, adoption will increase. The organizations that treat trust as a competitive differentiator will set the pace for the rest of the market.

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
