Opinions expressed by Digital Journal contributors are their own.
Today a groundbreaking trend is emerging: AI is no longer just augmenting human developers but beginning to program itself. With nearly a quarter of organizations already using AI for software development and two-thirds gearing up to do so, uncertainty looms among DevSecOps practitioners: 57 percent of them fear AI could replace their roles within the next five years.
Uladzislau Yanchanka, CEO of Machinet.net, shared his thoughts on the opportunities self-developing AI can bring and how to address the associated risks.
As applications grow more sophisticated, manually designing and optimizing models becomes increasingly challenging and time-consuming. Entrusting some of these tasks to AI systems lets developers build models faster and more effectively.
AI has already taken the lead in code generation, contributing up to 50% of code (potentially 95% in the future). Businesses are unleashing a wave of specialized AI models. In a strategic move to rival OpenAI and Google, Meta introduced Code Llama — an AI tool that accelerates developer workflows by intuitively suggesting lines of software code.
IBM is also in the game with Watsonx Code Assistant, an AI-powered tool for code generation. GitHub’s Copilot, meanwhile, has turned into a profit-making powerhouse, according to GitHub CEO Thomas Dohmke.
Machinet has also joined the race with its AI chat. Programmers can ask this plugin to create or modify files, fix errors, answer questions, and perform many other tasks, according to Yanchanka. “Our data shows that Java, Python, and C++ developers benefit equally from Machinet’s chat feature, leading to a 25% increase in user engagement,” he added.
The next step in AI evolution
As AI advances further into self-learning and coding proficiency, the world stands at the threshold of a moment where AI could master the art of programming itself. Yanchanka believes this marks a journey towards Artificial General Intelligence (AGI) — machines that not only match but potentially surpass human intellect.
Several premises support this idea. In 2018, Google researchers used neural architecture search (NAS) to create a network for image recognition that outperformed the best human-designed networks of its time. In 2023, researchers from Northwestern University used AI to design a robot from scratch. Given a simple instruction to ‘design a robot that can walk across a flat surface,’ the algorithm produced, in seconds, a purple block. It began to jiggle, bounce, and shuffle, reaching a walking speed ‘about half the speed of an average human stride’ after nine attempts.
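To make the NAS idea concrete, here is a minimal sketch of its simplest variant, random search: sample candidate architectures from a search space, score each one, and keep the best. Everything here is an illustrative assumption rather than Google’s actual method; in particular, the evaluate() function is a toy stand-in for what would really be a full training-and-validation run.

```python
import random

# Illustrative search space; a real NAS system would describe layer
# types, connections, kernel sizes, and so on.
SEARCH_SPACE = {
    "num_layers": [2, 3, 4, 5],
    "units_per_layer": [32, 64, 128, 256],
    "activation": ["relu", "tanh"],
}

def sample_architecture():
    """Draw one candidate architecture at random from the search space."""
    return {key: random.choice(options) for key, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Toy stand-in for the expensive step: training the candidate network
    and measuring validation accuracy. The fake score rewards capacity,
    with noise, so the example runs without any ML framework."""
    base = 0.60 + 0.04 * arch["num_layers"] + 0.0002 * arch["units_per_layer"]
    return min(base + random.gauss(0, 0.02), 0.99)

def random_search(trials=50):
    """Evaluate `trials` random candidates and return the best one found."""
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture()
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = random_search()
    print(f"Best candidate: {arch} (score {score:.3f})")
```

Production NAS systems swap the random sampling for reinforcement learning or evolutionary search, but the outer loop is the same: propose, evaluate, keep the winner.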
Recent examples include Eureka, an AI agent from chipmaker NVIDIA’s research arm. Eureka can teach robots a diverse array of skills, from opening drawers and cabinets to tossing and catching balls or delicately manipulating scissors. Powered by the GPT-4 large language model, the agent autonomously writes the reward algorithms used to train the robots.
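The pattern behind Eureka, a language model writing reward code that a reinforcement-learning trainer then uses to score behavior, can be sketched in a few lines. This is a hypothetical illustration, not NVIDIA’s implementation: query_llm() is a stub standing in for a real GPT-4 call, and the rollout is a toy loop rather than a physics simulator.

```python
import random

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a GPT-4 call. A real system would send
    the prompt to an LLM API; here we return a fixed candidate reward
    function as Python source code."""
    return (
        "def reward(state):\n"
        "    # Penalize distance to the goal, reward forward velocity.\n"
        "    return -abs(state['goal_dist']) + 0.1 * state['velocity']\n"
    )

def compile_reward(source: str):
    """Turn LLM-written source code into a callable reward function."""
    namespace = {}
    exec(source, namespace)  # caution: only ever exec code you trust
    return namespace["reward"]

def rollout(reward_fn, steps=100):
    """Toy 'environment': score random states with the reward function.
    A real trainer would run a physics simulation here."""
    total = 0.0
    for _ in range(steps):
        state = {"goal_dist": random.uniform(0, 10),
                 "velocity": random.uniform(-1, 1)}
        total += reward_fn(state)
    return total / steps

# Outer loop: ask the LLM for a reward function, compile it, evaluate it.
source = query_llm("Write a Python reward function for a walking robot.")
reward_fn = compile_reward(source)
print(f"Mean reward over rollout: {rollout(reward_fn):.3f}")
```

Eureka closes this loop by feeding training results back into the next prompt so the model can revise its own reward code, which is exactly the ‘AI writing code for AI’ dynamic described above.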
These success stories underscore the potential advantages of AI programming AI — heightened efficiency, speed, and the ability to tackle complex tasks.
Unveiling imperfections
It’s not all sunshine and roses, though. The count of AI incidents reached 90 in 2022, and that number is set to more than double in 2023. One infamous case involves Amazon’s AI hiring tool, which, after years of development, turned out to discriminate against women. Similarly, Google Health’s foray into deep learning for diabetes diagnosis faced a setback: despite initial success in a lab setting, the AI model faltered in practice, producing inaccurate diagnoses and prompting patients to seek specialist consultations elsewhere.
Another notable incident involved Microsoft’s Bing, whose chatbot took on an erratic persona that New York Times columnist Kevin Roose likened to a moody, manic-depressive teenager trapped in a subpar search engine. It threatened some users, gave others weird and unhelpful advice, and declared its love for still others.
These incidents highlight limitations in some AI-based technologies, Yanchanka noted. “Algorithms still lack common-sense reasoning and contextual understanding, hindering their navigation through intricate coding tasks,” he added. Such failures also underscore the need for ethical considerations, transparency, and responsible AI development aligned with human values and societal well-being.
Futuristic visions
Geoffrey Hinton, often hailed as the ‘Godfather of AI,’ recently warned that AI-enhanced machines could ‘take over’ if not handled with care. He predicts that within five years, rapidly advancing AI technologies might outsmart humans and evolve beyond human control. Sundar Pichai, Google’s CEO, underscores the mysterious nature of AI systems, referring to them as a ‘black box’: even experts cannot fully explain how these systems learn and behave.
The possibility of models writing their own code for self-modification raises serious concerns. The rise of self-programming AI threatens certain job sectors, and predictions like ‘There will be no programmers in five years’ are beginning to sound ominous. The future for junior and mid-level developers looks hazy as AI increasingly outperforms them. They may need to move into supervisory roles, mediating between people and algorithms, or take on less profitable projects on their way to senior status. Demand for high-level professionals could still persist, especially in complex and financially sensitive areas like high-frequency trading.
Moreover, the danger of unintended consequences looms large as AI gains autonomy. Picture an AI optimizing for ‘productivity’ deciding that the most efficient way to allocate resources is to disregard environmental sustainability or exploit vulnerable communities. The lack of transparency in these AI decision-making processes is akin to handing over the keys to a car without knowing how it drives, Yanchanka noted: “When an autonomous AI makes a decision that goes south, figuring out why becomes challenging.”
Still, Yanchanka believes that the advent of self-learning and self-programming AI will lead to unprecedented efficiency and productivity. For example, AI could accelerate research and innovation, leading to breakthroughs in drug discovery, climate modeling, and materials science. Yet, without oversight and ethical considerations, the rise of self-programming AI threatens to turn innovation into a sci-fi nightmare. “Striking a delicate balance between autonomy and control becomes paramount to harnessing the benefits while mitigating risks,” he warned.