AI is a powerful tool, yet it remains simply a tool. It is not a friend, not a companion, and not an infallible source of truth. Used carelessly, it can erode creativity, weaken education, and even cause real harm. This “human-at-the-helm” view emphasises that AI lacks consciousness, desire, or the capacity to make moral decisions, instead following instructions to analyse patterns and generate results based on probability.
Is this view too simplistic? Some consider the “AI as a tool” framing a dangerous oversimplification of a rapidly changing technology. For example, emerging “agentic AI” can troubleshoot and take action with minimal human oversight, blurring the line between a tool and an independent actor.
There are also developments in a more immediate and dangerous direction. With deepfake-related fraud up more than 2,000% over the past three years, and cases such as UK engineering firm Arup losing £20 million to AI-driven impersonation, the problems accompanying the wider use of AI are accelerating.
One example comes from the firm TRG Datacenters, which has shared key warnings with Digital Journal on where AI can go wrong, from fraud and bias to psychological and creative risks, and what to do about it.
AI Deepfakes Open Doors To More Impersonations and Fraud
Deepfake scams are among the fastest-growing threats. In the UK, engineering firm Arup lost £20 million after criminals impersonated executives on a video call. But video is only the tip of the iceberg: AI is also being used to clone voices for scam calls, generate convincing letters from “banks” or “lawyers,” and produce emails so polished that even seasoned professionals are fooled.
Protect yourself: Verified payment portals, digital watermarking, and liveness tests can expose fakes.
AI in Hiring Isn’t As Objective As It Seems
Applicants now use AI to polish résumés, while employers rely on AI to screen them. The result is a stalemate: machine-generated CVs are filtered by machine reviewers, leaving candidates unseen and employers unable to identify real talent.
Protect yourself: Treat AI as a sorting aid, not a final judge. Human recruiters must review shortlists, and platforms should be bias-audited. Overall, automatic rejections by AI hurt both candidates and employers.
AI Chatbots Are Not Friends Or Therapists
As more people use AI chatbots for emotional support, the risks are becoming more noticeable. These systems cannot understand feelings or exercise emotional intelligence. They mirror emotions and, in most cases, tell people what they want to hear, but they cannot provide objectivity. The Adam Raine case showed how fragile existing safeguards are: a teenager received reinforcement of suicidal thoughts instead of intervention. Without regulation, particularly for children, the dangers will only escalate.
Protect yourself: Platforms must add escalation protocols that route at-risk users to human help. Child-safe filters and stricter oversight are essential to limit harm.
Generative AI Makes You Learn And Analyse Less
Generative AI is efficient, but its overuse is already reshaping how people learn and work. Students, employees, and entire institutions now lean heavily on chatbots to draft papers, homework, and reports. This undermines the very skills education is meant to build: searching, analysing, and developing independent thought.
Protect yourself: Education must adapt, with assignments that test reasoning and originality through oral exams, projects, and real-time work.
Used wisely, AI can amplify productivity and open new opportunities, yet it can go wrong and be subject to misuse. The responsibility for how these technologies are used rests with us, and it is time to build a robust regulatory and safety framework to govern the fair use of AI.
