The AI models that most of us have been using for the past few years are “generative AI” – programs that can generate original content based on a user’s prompt or instructions. We’ve all seen examples of what happens when generative AI goes rogue or ‘hallucinates’ – inventing things out of thin air that are inaccurate at best, or completely false at worst. But have you heard about the next generation of AI tools, known as “agentic AI”?
Agentic AI systems are designed to work toward a goal and handle complex tasks autonomously, with minimal human interaction. Rather than performing simple tasks or predefined automations, they can learn from new data and make decisions in real time. While this may initially seem like the next great iteration of AI, agentic AI carries even more serious privacy and other risks than generative AI. These digital agents make decisions and take actions entirely on their own, and because of their complexity, it can be difficult to troubleshoot where errors occur. Furthermore, each decision an agent makes builds on a previous AI-made decision, so a single early mistake can compound through everything that follows. Some potential risks include:
- Accessing and disclosing sensitive information
- Completing unauthorized financial transactions
- Causing data breaches
- Failing to comply with laws and regulations
- Lacking emotional intelligence in situations that require it
- Lacking transparency and auditability in decision-making processes
More guardrails are required to control these AI systems, including longer and more comprehensive pre-launch testing. Constant monitoring may require significant upgrades in both computing power and human oversight. Accountability must be baked in, along with the ability to shut the AI down if it is not functioning correctly. Ongoing staff training will also be needed, along with potentially whole new systems of governance. In short, most companies are not prepared for these shifts, and the technology is already here.
As always, there are terrific ways and reasons to use AI, but the key is that AI should enhance human processes, not replace them.