
Q&A: AI has the potential to boost mental health support

Trust is a foundational element in mental health care. But for many, AI simply hasn’t earned it yet.

A man expressing sadness with his head in his hands. Image by Tellmeimok. (CC BY-SA 4.0)

Does artificial intelligence have a role to play in assisting medics and patients with mental healthcare? Dr. Tom Milam, Chief Medical Officer at Iris Telehealth, explains to Digital Journal what the findings from a recent consumer survey mean.

The interview breaks down comfort levels, top concerns, and the areas where people are most open to AI being used in the field of mental health.

Digital Journal: What role do people see AI playing in behavioral health today?

Tom Milam: Demand for mental health care has continued to grow in recent years. However, the supply of qualified providers really hasn’t been able to keep pace. That imbalance naturally has many people questioning whether AI can help fill that gap.

We recently ran a consumer survey to get a clearer picture of how people feel about that possibility. The takeaway is that it’s not a simple yes or no. There’s plenty of curiosity and just as much hesitation: 40% of respondents said they’re against the use of AI in mental health care, while 32% said they support it. The rest were undecided, which tells us there’s space for opinions to shift, but only if the tech evolves in the right ways.

DJ: Which groups are most open to using AI-powered mental health tools and why might that be?

Milam: The data shows some pretty clear demographic differences in how people view AI for mental health.

One group that really stood out was parents — 65% stated they’d be comfortable using AI to assess symptoms before connecting with a human provider. Another notable demographic was men, with 47% stating they’d be open to receiving treatment recommendations from an AI tool, compared to just 36% of women.

These numbers suggest that AI could help fill care gaps among groups traditionally less likely to seek mental health support: time-strapped parents navigating postpartum challenges and childcare responsibilities, and men, who are less likely to pursue care through traditional routes.

DJ: What are the most common concerns people have about AI in mental health care?

Milam: Trust is a foundational element in mental health care. But for many, AI simply hasn’t earned it yet. According to our survey, 70% of people said they were significantly concerned about the privacy and security of their data when using AI tools. That skepticism doesn’t stop at privacy either; only 18% of respondents considered AI tools “very reliable” for mental health support.

Overall, the top three concerns among those surveyed were:

● Loss of empathy and human connection (60%)
● Accuracy of AI recommendations (55%)
● Potential algorithmic bias (36%)

The results ultimately show why many patients remain hesitant to rely on AI without stronger validation and oversight.

DJ: How does the fear of losing human connection factor into AI resistance?

Milam: A personal connection is often central to mental health care. We found that 44% of respondents said having an in-person relationship with a provider was important to them. That preference reflects a broader concern about using AI in care and ties into a deeper discomfort around its role in mental health. People want to feel like they’re being heard and understood by someone who gets what they’re going through.

If patients fear being met with canned responses or reduced to data points, they may shut down before opening up. That hesitation could keep them from sharing important details that affect their care. These concerns help explain why many patients continue to value human clinicians, even as interest in AI grows.

DJ: Are there specific AI use cases that patients are more comfortable with?

Milam: Even though trust in AI is still pretty low overall (only 18% of respondents said they find AI tools “very reliable”), support is much stronger for certain behind-the-scenes applications.

Many people said they’d be comfortable with AI helping out in more administrative areas, like scheduling appointments or reducing paperwork. Comfort levels also go up when these tools are built into platforms they already use. About one-third of respondents said they’d be more likely to use AI mental health tools if they were part of existing services, like telehealth platforms or insurance portals.

DJ: What would help increase trust in AI tools used in mental health settings?

Milam: There are several ways we as an industry can increase trust. Survey participants identified the key factors that would make them more comfortable using AI-powered tools in a mental health context:

● Involvement of mental health professionals in AI design and training (39%): People want reassurance that clinical expertise is baked into the technology itself, not added after the fact.

● Strong data privacy and security measures (35%): Robust encryption, clear retention policies, and transparent data governance are essential for handling sensitive mental health data.

● Transparency around how the AI makes decisions (34%): Without clear explanations of how recommendations are generated, many worry that AI will become a “black box” they can’t question or understand.

● Easy access to human clinicians when needed (32%): People want to know they can transition to a real person if needed, reinforcing continuity of care and overall patient confidence.

● Endorsements from trusted healthcare institutions (27%): Support from established healthcare organizations could lend AI tools credibility and help hesitant users feel more secure.

This tells us that people aren’t necessarily anti-AI, but they do want guardrails in place.

DJ: What should healthcare organizations keep in mind as they consider integrating AI?

Milam: When it comes to integration, the most accepted uses of AI tend to be those that support, not replace, the work of human clinicians. Our research points to several areas where this support could be most helpful:

● Administrative support: Reducing paperwork, streamlining scheduling, and flagging trends in appointment patterns to inform staffing decisions.
● Information gathering: Organizing pre-visit information to help clinicians prepare without interfering with actual clinical decision-making.
● Targeted engagement: Offering customized outreach to groups shown to be more receptive to AI-supported care, such as men and parents.

At its best, AI should help improve access and streamline care without getting in the way of the personal connection that defines effective behavioral health support.

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics, and current affairs.
