Training intelligent systems to think on their own
The computing devices and software programs on which the modern world relies, says Hector Muñoz-Avila, can be likened to adolescents.
Thanks to advanced step-by-step procedures known as algorithms, these systems, or agents, are now sufficiently intelligent to reason and to make responsible decisions in their own environments, without adult supervision.
Indeed, says Muñoz-Avila, an associate professor of computer science and engineering, algorithm-powered agents will soon be capable of investigating a complex problem, determining the most effective intermediate goals and taking action to achieve a long-range solution. In the process, agents will adjust to unexpected situations and learn from their environment, their past cases and their mistakes.
They will achieve all of this without human control or guidance.
An agent programmed with advanced algorithms, whether a robot, an automated computer-game player or the system monitoring an electrical grid, can do many things not possible for a human being, says Muñoz-Avila. It can sift through thousands of stimuli and data points, pinpoint unusual patterns or anomalies, correct most of them in real time and single out the complex abnormalities that require human intervention.
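That triage behavior can be illustrated with a short Python sketch: readings that stray slightly from an expected value are corrected automatically, while severe deviations are set aside for a human operator. The setpoint, tolerance and correction rule here are illustrative assumptions, not details of any deployed system.

```python
# Toy illustration of anomaly triage: correct small deviations
# automatically, escalate large ones. The setpoint and tolerances
# are illustrative assumptions.

def triage(readings, expected=50.0, tol=1.0):
    """Split readings into auto-corrected and escalated anomalies."""
    corrected, escalated = [], []
    for r in readings:
        deviation = abs(r - expected)
        if deviation <= tol:
            continue                        # within expectations
        elif deviation <= 10 * tol:
            corrected.append(expected)      # simple anomaly: reset to setpoint
        else:
            escalated.append(r)             # complex anomaly: needs a human
    return corrected, escalated

corrected, escalated = triage([50.1, 49.8, 50.3, 55.0, 120.0])
print(f"auto-corrected: {corrected}, escalated for review: {escalated}")
```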
Muñoz-Avila, a pioneer in the new field of goal-driven autonomy (GDA), recently received a three-year research grant from the National Science Foundation to develop autonomous agents that dynamically identify and self-select their goals, and to test these agents in computer games.
Prepared to deal with the unexpected
“For a long time,” he says, “scientists have told agents which goals to achieve. What we want to do now is to develop agents that autonomously select their own goals and accomplish them.
“A GDA agent follows a basic cycle. It has an expectation of something that will happen in its environment. When it detects an unexpected phenomenon, it attempts to explain the discrepancy between what it expected and what is actually happening. It is constantly checking whether its expectations are satisfied, developing explanations for any discrepancies and forming new goals to resolve them.”
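In code, the cycle Muñoz-Avila describes can be sketched roughly as follows. The grid-monitoring facts, the explanation rule and the goal table below are toy assumptions chosen to illustrate the detect-explain-formulate loop, not details of his actual systems.

```python
# A minimal, runnable sketch of one pass through the GDA cycle.
# The domain facts, explanation rule and goal table are toy
# assumptions, not Munoz-Avila's implementation.

def detect(observed, expected):
    """Return the expected facts that were not observed (the discrepancy)."""
    return expected - observed

def explain(discrepancy):
    """Map a discrepancy to a (toy) explanation."""
    return "line_failure" if "power_on" in discrepancy else "unknown"

def formulate_goal(explanation):
    """Choose a new goal that addresses the explanation."""
    return {"line_failure": "restore_power"}.get(explanation, "investigate")

# The agent expected a grid segment to stay powered and balanced;
# it observes that the power expectation was violated.
expected = {"power_on", "load_balanced"}
observed = {"load_balanced"}

discrepancy = detect(observed, expected)
if discrepancy:
    explanation = explain(discrepancy)
    goal = formulate_goal(explanation)
    print(f"discrepancy: {discrepancy}, explanation: {explanation}, new goal: {goal}")
```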
The potential applications of GDA agents include military planning, robotics, computer games and control systems for electrical grids and security networks. One example: unmanned vehicles that operate autonomously underwater for several days while performing search or repair missions.
In recent years, Muñoz-Avila and collaborators from the Naval Research Laboratory and the Georgia Institute of Technology pioneered the study of GDA agents, which can overcome unexpected phenomena in their environments.
In his current project, Muñoz-Avila and his students have two goals: to improve and expand the knowledge that GDA agents acquire of their domains, and to extend these agents' successes to other domains and applications.
As autonomous computing devices and software gain wider use in society, says Muñoz-Avila, GDA agents must be able to recognize and diagnose discrepancies in their environments and take intelligent action.
As an example, he cites an automated air quality control system that is programmed to monitor and control a variety of devices.
“It is very difficult, if not impossible,” Muñoz-Avila says, “for a programmer to foresee all of the potential situations that such a system will encounter.”
Similarly, the openness of many networks requires a cybersecurity system capable of continuously integrating new technologies and services.
“It is not feasible to implement countermeasures for all potential threats in advance,” he says. “An agent-based system must continuously monitor the overall network, learn and reason about expectations, and act autonomously when discrepancies are encountered.”
Two Ph.D. candidates, Ulit Jaidee and Dustin Dannenhauer, are working with Muñoz-Avila in the area of plan diversity, in which GDA agents formulate multiple, significantly different solutions to a complex problem.
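Under simple assumptions, plan diversity can be sketched as follows: treat each plan as a set of actions, measure how different two plans are with the Jaccard distance, and greedily keep the candidates farthest from those already chosen. The distance measure and the game-style actions are illustrative; the metrics used in the lab's work may differ.

```python
# Toy sketch of plan diversity: pick k plans from a candidate pool so
# that the chosen plans differ from one another as much as possible.
# The Jaccard distance over action sets is one simple difference measure.

def jaccard_distance(plan_a, plan_b):
    """1 minus the overlap between the two plans' action sets."""
    a, b = set(plan_a), set(plan_b)
    return 1.0 - len(a & b) / len(a | b)

def select_diverse(plans, k):
    """Greedy max-min selection: repeatedly add the candidate plan
    whose nearest already-chosen plan is farthest away."""
    chosen = [plans[0]]
    while len(chosen) < k:
        best = max(
            (p for p in plans if p not in chosen),
            key=lambda p: min(jaccard_distance(p, c) for c in chosen),
        )
        chosen.append(best)
    return chosen

candidates = [
    ["scout", "attack_north", "hold_base"],
    ["scout", "attack_north", "retreat"],
    ["fortify", "attack_south", "hold_base"],
    ["fortify", "expand", "attack_south"],
]
print(select_diverse(candidates, 2))
```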
Other recent Ph.D. students in Muñoz-Avila’s lab who contributed to his research into goal-driven autonomy are now pursuing academic careers. Alexandra Coman is an assistant professor at Ohio Northern University, Chad Hogg is an assistant professor at King’s College and Stephen Lee-Urban is a postdoctoral researcher at the Georgia Institute of Technology.
In addition to artificial intelligence and computer games, Muñoz-Avila’s research expertise includes case-based reasoning, planning and machine learning. His research has been funded by the Air Force Research Laboratory (AFRL), the Defense Advanced Research Projects Agency (DARPA), the Office of Naval Research (ONR) and the Naval Research Laboratory (NRL). He is a past recipient of NSF’s CAREER Award.