Interacting with AI chatbots like ChatGPT can be fun and sometimes even useful, but the next level of everyday AI goes beyond just answering questions: AI agents perform tasks for you.
Major technology companies including OpenAI, Microsoft, Google and Salesforce have recently released or announced plans to develop and release AI agents. They claim these innovations will bring new efficiencies to the technology and management processes underlying systems used in healthcare, robotics, gaming and other businesses.
Simple artificial intelligence agents can be taught to respond to standard questions sent via email. More advanced agents can book flights and hotels for cross-continental business travel. Google recently showed reporters Project Mariner, a Chrome browser extension that can reason about text and images on the screen.
In demos, agents helped plan meals by adding items to a shopping cart on a grocery chain’s website, and even found substitutes for ingredients that weren’t available. A person still needs to be involved to complete the purchase, but the agent can be instructed to take all of the necessary steps up to that point.
In a sense, you are an agent. You take action every day in response to what you see, hear, and feel in your world. But what exactly are artificial intelligence agents? As a computer scientist, I give this definition: an artificial intelligence agent is a technological tool that can learn a lot about a given environment and then, with some simple prompts from a human, solve a problem or perform a specific task in that environment.
Rules and goals
A smart thermostat is a very simple example of an agent. Its ability to sense its environment is limited to a thermometer that tells it the temperature. When the room temperature drops below a certain level, the smart thermostat responds by turning up the heat.
A familiar predecessor to today’s artificial intelligence agents is the robot vacuum, such as the Roomba. A robot vacuum cleaner can learn the shape of a carpeted living room and how much dirt is on the carpet, and then take action based on that information. After a few minutes, the carpet is clean.
A smart thermostat is an example of what artificial intelligence researchers call a simple reflex agent. It makes decisions, but those decisions are simple and based solely on what the agent senses at that moment. The robot vacuum is a goal-based agent with one goal: to clean all of the floor it can reach. The decisions it makes—when to turn, when to raise or lower the brushes, when to return to the charging base—are all geared toward achieving that goal.
A goal-based agent succeeds simply by achieving its goal, by whatever means are required. Goals can be achieved in a variety of ways, however, some of which may be more or less desirable than others.
Many of today’s AI agents are utility-based, meaning they think more about how to achieve their goals. They weigh the risks and benefits of each possible approach before deciding how to proceed. They are also able to consider conflicting goals and decide which goal is more important to achieve. They go beyond goal-based agents by selecting actions that take into account the user’s unique preferences.
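The distinction between these agent types can be sketched in a few lines of code. This is a minimal illustration, not an implementation from any real agent framework: the function names, the preference weights, and the flight options are all hypothetical.

```python
def reflex_thermostat(temperature, setpoint=20.0):
    """Simple reflex agent: acts only on the current percept."""
    return "heat_on" if temperature < setpoint else "heat_off"


def utility_based_choice(options, preferences):
    """Utility-based agent: scores each option against weighted
    user preferences and picks the one with the highest utility."""
    def utility(option):
        return sum(preferences.get(feature, 0) * value
                   for feature, value in option["features"].items())
    return max(options, key=utility)


# A traveler who values low cost twice as much as speed:
flights = [
    {"name": "red-eye", "features": {"cheap": 1.0, "fast": 0.2}},
    {"name": "nonstop", "features": {"cheap": 0.3, "fast": 1.0}},
]
best = utility_based_choice(flights, {"cheap": 2.0, "fast": 1.0})
# The cheaper red-eye wins because this user weights cost more heavily.
```

The thermostat never weighs alternatives; it maps one sensor reading to one action. The utility-based agent, by contrast, compares every option and can resolve conflicting goals (cheap versus fast) by how much the user cares about each.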
Make decisions, take action
When tech companies refer to artificial intelligence agents, they’re not talking about chatbots or large language models like ChatGPT. Although chatbots that provide basic customer service on websites are technically artificial intelligence agents, their perception and actions are limited. Chatbot agents can sense the words a user types, but the only action they can take is to reply with text, in the hope of giving the user a correct or informative response.
What AI companies refer to as AI agents are a significant step up from large language models like ChatGPT because they are able to take actions on behalf of the people and companies that use them.
OpenAI says agents will soon be tools that people or businesses can run independently for days or weeks without having to check their progress or results. Researchers at OpenAI and Google DeepMind say agents are another step toward general artificial intelligence, or “strong” artificial intelligence—that is, artificial intelligence that surpasses human capabilities in many fields and tasks.
The artificial intelligence systems people use today are considered narrow artificial intelligence or “weak” artificial intelligence. A system might be good at one area—chess, perhaps—but if thrown into a game of checkers, that same AI wouldn’t know how to function because its skills wouldn’t translate. A general artificial intelligence system is better able to transfer its skills from one domain to another, even if it has never seen the new domain before.
Is it worth the risk?
Are artificial intelligence agents ready to revolutionize the way humans work? It will depend on whether tech companies can prove that agents can not only perform the tasks assigned to them, but also deal with new challenges and unexpected obstacles as they arise.
The adoption of AI agents also depends on people’s willingness to give them access to potentially sensitive data: depending on its purpose, an agent may need access to your web browser, email, calendar, and other applications or systems relevant to a given task. As these tools become more common, people will need to consider how much information they want to share with them.
A breach of an AI agent system could result in private information about your life and finances falling into the wrong hands. Are you willing to take those risks if it means the agent can save you some work?
What happens when an AI agent makes a wrong choice, or a choice the user disagrees with? For now, developers of AI agents are keeping humans in the loop, ensuring people have a chance to check an agent’s work before any final decision is made. In the Project Mariner example, Google doesn’t let the agent make the final purchase or accept the site’s terms of service agreement. By keeping you in the loop, the system gives you the opportunity to undo any choices the agent makes that you don’t approve of.
Like other AI systems, AI agents are subject to bias. These biases can come from the data on which the agent was originally trained, from the algorithm itself, or from the way the agent’s output is used. Keeping humans in the loop is one way to reduce bias, by ensuring that decisions are scrutinized by people before they are implemented.
The answers to these questions may determine how popular AI agents become, and will depend on how well AI companies can improve their agents once people start using them.
This article is republished from The Conversation under a Creative Commons license. Read the original article here.
Published – December 19, 2024 5:37 pm (IST)