Deepfake-related fraud alone has surged by more than 2,000% over the past three years, according to several industry assessments, and victims of impersonation scams have collectively lost millions.
To help identify the most harmful applications of AI, the firm TRG Datacenters has published key warnings on where AI can go wrong, from fraud and bias to psychological and creative risks, along with steps businesses and individuals can take to protect themselves.
“AI is a powerful tool, of course, but only if we remember it is just that: a tool. It is not a friend, not a companion, and not an infallible source of truth. Used carelessly, it can erode creativity, weaken education, and even cause real harm,” the TRG Datacenters report states.
“We can delegate certain tasks to AI and free time and resources for ourselves, but some jobs are just not suitable for artificial intelligence.”
Used wisely, AI can amplify productivity and open new opportunities. But responsibility for how the technology is used rests with us: it is important that we keep questioning and creating, and societal institutions need to adapt education and regulation to preserve critical thinking and prevent significant damage. The key points made in the report are explored below.

AI Deepfakes Open Doors To More Impersonations and Fraud
Deepfake scams are among the fastest-growing threats. In the UK, engineering firm Arup lost £20 million after criminals impersonated executives on a video call. But video is only the tip of the iceberg: AI is also being used to clone voices for scam calls, generate convincing letters from “banks” or “lawyers,” and produce emails so polished that even seasoned professionals are fooled.
Protect yourself: Confirm unusual payment requests through a separate, known channel. Verified payment portals, digital watermarking, and liveness tests can also help expose fakes.
AI in Hiring Isn’t As Objective As It Seems
Applicants now use AI to polish résumés, while employers rely on AI to screen them. The result is a stalemate: machine-generated CVs are filtered by machine reviewers, leaving candidates unseen and employers unable to identify real talent.
Protect yourself: Treat AI as a sorting aid, not a final judge. Human recruiters must review shortlists, and platforms should be bias-audited. Overall, automatic rejections by AI hurt both candidates and employers.
AI Chatbots Are Not Friends Or Therapists
As more people use AI chatbots for emotional support, the risks are becoming more noticeable. These systems cannot understand feelings or exercise emotional intelligence. They mirror emotions and, in most cases, tell people what they want to hear, but cannot provide objectivity. The Adam Raine case showed how fragile existing safeguards are: a teenager received reinforcement of suicidal thoughts instead of intervention. Without regulation, particularly for children, the dangers will only escalate.
Protect yourself: Platforms must add escalation protocols that route at-risk users to human help. Child-safe filters and stricter oversight are essential to limit harm.
Generative AI Makes You Learn And Analyse Less
Generative AI is efficient, but its overuse is already reshaping how people learn and work. Students, employees, and entire institutions now lean heavily on chatbots to draft papers, homework, and reports. This undermines the very skills education is meant to build: searching, analysing, and developing independent thought.
Protect yourself: Education must adapt with assignments that test reasoning and originality: oral exams, projects, and real-time work.
