Looking at the ramifications of the plan for Digital Journal is deepfake and AI fraud expert Joshua McKenty, former Chief Cloud Architect at NASA and Co-Founder and CEO of Polyguard.
With over 20 years of experience, McKenty has deep insight into the evolving threat landscape: the national security implications of AI misuse, the gaps in current policy, and how these executive orders may (or may not) shift the game when it comes to safeguarding public discourse and digital trust. This matters because, at its heart, the plan is about accelerating AI innovation through deregulation.
Trump is directing U.S. government departments to revise their artificial intelligence risk management frameworks. In particular, the plan directs the U.S. Department of Commerce to revise its framework, a change that could undo protections firms were set to meet in order to do business with the federal government.
According to McKenty, this latest proclamation does not fully help the U.S. leap forward in AI development. Commenting on the creation of an “AI Information Sharing and Analysis Center,” led by the Department of Homeland Security to oversee AI-linked cybersecurity threats, he says: “The US is dangerously behind in its response to emerging AI-powered cybersecurity attacks, as evidenced by the recent mishandling of deepfake attacks on Marco Rubio, Rick Crawford and others.”
However, the proclamation will help the sector to develop: “It’s encouraging to see the White House finally take AI threats seriously – but urgency without coordination risks compounding the problem. The challenge ahead isn’t just standing up new programs, it’s making sure they actually work.”
AI-specific cybersecurity guidance for the private sector
In terms of cyber-threats and where AI can help, McKenty observes: “As we work to establish bilateral communication channels between the federal government and the private sector, it’s important to build on the existing cybersecurity guidance already coming from the FBI, CISA, NSA and the DOD Cybercrime Center. What’s needed is clever coordination and actionable intelligence.”
On workforce development
It is also important, according to McKenty, that the U.S. develops the skills necessary to meet the AI development challenge: “The U.S. faces a growing talent gap in AI. While demand for skilled professionals is accelerating, our pipeline of trained engineers, researchers, and cybersecurity experts isn’t keeping pace. Closing the gap will require long-term investment in STEM education, immigration pathways for top talent, and stronger industry-academic collaboration.”
AI Risk Management Framework
Returning to the topic of risk, McKenty sets out the policy framework that should be adopted to mitigate the risks faced by the sector: “NIST’s framework is one of the few widely respected tools for managing AI risk. Revisions should focus on technical clarity, threat modelling, operational usability, and science – not politics. Stripping out key areas that address misinformation or emergent behaviour would make the framework less relevant just as the stakes are getting higher.”
