
Optimization: A Complex Road to the Simplest Path

Lehigh researchers’ work in foundational optimization aims to improve our ability to learn from massive amounts of data more efficiently.

You may not have heard of optimization, but you can see its fruit in pretty much any direction you look. 

“If you use the word ‘best’—like the best way to classify data—you are talking about optimization, and specifically, about employing mathematical tools to do that,” says Frank E. Curtis, associate professor of industrial and systems engineering. 

Optimization is the process of using sophisticated mathematics and algorithms to handle the astronomical amounts of data and information in which our technological systems are now awash. In that sense, optimization is the silent partner to almost every computer and technological function you could name. From self-driving cars and search engines to supply chains, image recognition or countless other applications, optimization undergirds each of them, providing the operational efficiency needed for them to work. 

“Optimization is very fundamental, and it’s the basis for a lot of things happening right now,” says Katya Scheinberg, the Harvey E. Wagner Chair in Manufacturing Systems Engineering. “Take something like machine learning ... Sometimes the data used is very noisy or inconsistent. With optimization, we want to create robust methods that are pretty much guaranteed to work without too much user intervention, and the algorithms have to work for various kinds of data. What we basically do is general, foundational optimization, but there are also particular applications we’re working on.” 

Curtis and Scheinberg, along with Martin Takáč, assistant professor of industrial and systems engineering, and Hector Muñoz-Avila, professor of computer science and engineering, bring their expertise to the exploration of optimization in a variety of applications, all in an effort to improve researchers’ ability to use massive amounts of data in purposeful and productive ways.  

Optimization in Action

Machine learning is critical to robotics, and the self-driving car is a prime example: numerous machine learning systems must work in concert for an autonomous vehicle to operate. 

“There are diverse components to this, and some of these components are the optimizations of the kind that we’re working on,” says Scheinberg, who has received a grant from Google to develop optimization algorithms for robot locomotion. “I’m working on finding sequences of movements for a robot’s arms and legs, so to speak, that ensure the best performance for the robot to perform certain tasks. Some of the parameters we use to measure this are how far the robot can go without falling, how quickly it can move and other similar activities.”  

Takáč has explored methods with Nader Motee, associate professor of mechanical engineering and mechanics, and two doctoral students, Mohammadreza Nazari and Hossein K. Mousavi, for employing a network of robots that can learn how to collaborate and achieve a given task. For instance, in a search to locate survivors in critical situations, such as a building collapse in the event of an earthquake, a team of flying robots can be deployed to find survivors as fast as possible. The robots need to learn how to communicate effectively and collaboratively in order to explore and cover a large area.

Given the interconnected nature of transportation, energy, communication systems and more, smart cities will be unthinkable without advanced optimization. Voluminous data flows require innovative optimization models that keep these systems running at their peak by integrating energy, traffic, transit and infrastructure data to maximum effect. In another project, Takáč collaborates with Shamim Pakzad, associate professor of civil and environmental engineering, and doctoral students Sila Gulgec and Soheil Sadeghi Eshkevari, to remotely monitor structures such as bridges. Sensors installed on a structure allow continuous data collection and cut back on the need for human inspections. The team also works on a crowdsourcing approach in which data from sensors on mobile phones is used to inform important decisions in civil engineering applications. 

The number of sensors, the amount of real-time data to process, and the possibility of false alarms from the sensors present a prototypical optimization challenge. 

“Bridges age and move a bit, and the machine learning will be used to define what a normal state of the bridge is, and when something is outside the norm and needs to be checked,” says Takáč. 

Such a sensor system would be extremely useful in the event of an earthquake or other unforeseen occurrence, alerting infrastructure officials about exactly where to look for structural problems. 
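The "normal versus outside the norm" idea Takáč describes can be sketched as a simple statistical anomaly detector. Everything below (the synthetic vibration readings, the threshold) is an illustrative assumption, not the project's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "training" data: a bridge's normal vibration amplitude
normal_readings = rng.normal(loc=1.0, scale=0.1, size=5000)
mu, sigma = normal_readings.mean(), normal_readings.std()

def is_anomalous(reading, k=4.0):
    """Flag a reading more than k standard deviations from the learned norm."""
    return abs(reading - mu) > k * sigma

print(is_anomalous(1.05))  # within the learned norm -> False
print(is_anomalous(2.0))   # far outside the norm -> True
```

A real system would learn a far richer model of "normal," but the principle is the same: fit the norm from data, then alert only on deviations.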

Another example: supply chain management, a critical facet of running a successful business. In a global economy, managing stock efficiently depends on myriad factors. Transit times and methods, consumer demand, fuel costs, traffic patterns, manufacturing lead times and other variables need to be taken into account, Takáč explains. “The optimization would tell you how much time ahead you have to ship iPhones so you can satisfy your customers with high probability.” 
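Takáč's shipping example can be made concrete: if transit time is uncertain, the lead time that satisfies customers "with high probability" is a quantile of its distribution. A minimal sketch, where the transit-time distribution and the 95% service level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: transit takes ~10 days on average, with variation
transit_days = rng.gamma(shape=20.0, scale=0.5, size=100_000)

# To deliver on time with 95% probability, ship this many days ahead
lead_time = np.quantile(transit_days, 0.95)
print(f"ship {lead_time:.1f} days ahead for a 95% service level")
```

Real supply-chain models layer demand, costs and manufacturing constraints on top of this, but the quantile captures the core trade-off between earliness and risk.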


Curtis, Scheinberg, Takáč and Muñoz-Avila explore optimization in a variety of applications. 

As part of his work studying complex supply chains, Takáč last year co-authored a paper with Larry Snyder, professor of industrial and systems engineering, and doctoral students Mohammadreza Nazari and Afshin Oroojlooy. In that work they applied novel machine learning techniques to find more efficient solutions to the Vehicle Routing Problem: finding the best routes for a fleet of trucks that must deliver products to numerous locations, while allowing the machine learning algorithm to learn travel patterns and customer demands from data.
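For intuition about what the Vehicle Routing Problem asks, here is the classical nearest-neighbor heuristic on toy data. The Lehigh paper instead learns a routing policy from data, so this baseline, with made-up coordinates, is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
depot = np.zeros(2)
customers = rng.uniform(0, 10, size=(8, 2))  # illustrative delivery locations

# Nearest-neighbor heuristic: from the current position, always drive
# to the closest unvisited customer, then return to the depot.
unvisited = list(range(len(customers)))
route, pos, total = [], depot, 0.0
while unvisited:
    dists = [np.linalg.norm(customers[i] - pos) for i in unvisited]
    total += min(dists)
    nxt = unvisited.pop(int(np.argmin(dists)))
    route.append(nxt)
    pos = customers[nxt]
total += np.linalg.norm(pos - depot)  # drive back to the depot

print("visit order:", route, "| total distance:", round(total, 2))
```

The heuristic is fast but greedy; learned policies aim to beat it by anticipating demand patterns rather than always chasing the nearest stop.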

The Black Box Problem

Lehigh researchers also explore an increasingly common machine learning challenge, dubbed the “black box” problem, which arises when models become so complex or sophisticated that even their programmers cannot say why they produce a particular result. Even when the result is desirable, the opacity about how it was reached often is not. 

“Think about machine learning and healthcare,” says Curtis. “If you’re asking a machine learning tool to make a prediction or guess as to the best course of treatment, doctors and patients naturally want to know why.”

Legal and regulatory complexities are also inherent to the black box problem as artificial intelligence (AI) becomes more prevalent. The European Union’s General Data Protection Regulation (GDPR), for example, restricts automated decision-making in contexts that could “significantly affect” its citizens. 

But the problem goes beyond matters of perception, or allaying the fears of humans interacting with these systems, says Muñoz-Avila. “The solution is called explainable artificial intelligence. It’s one of the hot topics in research today, and involves many open questions.” 

Rather than decoding existing black box systems where the data feedback loops are voluminous, optimization techniques could be used to develop alternative models that are more transparent. 

“Trying to train models that are easier to interpret can often be more challenging than training a black-box model,” says Curtis, referring to the process of priming a computer model with the information it needs to solve complex operations and classify data autonomously. “To do that, you need to develop better optimization methods to train these models. Going back to the healthcare example, rather than using a deep neural network—which uses software to emulate the function of the brain—to make predictions for a diagnosis, one could train a decision tree or other type of model that, while arguably being more difficult to train well, leads to better interpretability.”
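As a sketch of the alternative Curtis mentions, a shallow decision tree can be trained and then printed as human-readable rules. The dataset and library (scikit-learn) are illustrative assumptions, not the researchers' setup:

```python
# Train a small, interpretable model instead of an opaque one.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# A depth-3 tree: every prediction can be traced through a few if/else rules
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The entire model prints as readable rules -- the interpretability payoff
print(export_text(tree, feature_names=list(data.feature_names)))
print("training accuracy:", round(tree.score(X, y), 3))
```

The point is not that trees always match a deep network's accuracy, but that a clinician could audit every branch of this model by eye.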

Tackling the ‘Big Question’

Regarding future plans, the researchers say they aim to overthrow the reigning king of neural network training: a venerable algorithm called the stochastic gradient method.  

The stochastic gradient method is used to examine and learn from vast arrays of uncategorized data. In simple terms, the algorithm picks a data point from the array at random, examines it, updates the machine learning model, and moves on to another randomly chosen data point. 

Imagine having a truckload of documents dumped into your garage. The algorithm would grab one piece of paper from the pile, read it, make a tentative assessment about the contents of the pile, grab another from a different part of the pile, update its assessment, and repeat the procedure over and over.
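The loop in that garage analogy can be written in a few lines. Below is a minimal stochastic gradient method for least-squares regression on synthetic data; the data, model and step size are all illustrative assumptions, not the researchers' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "pile of documents": 1000 points generated from known weights
n, d = 1000, 3
w_true = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

# Stochastic gradient method on the least-squares loss
w = np.zeros(d)
step = 0.01
for t in range(20_000):
    i = rng.integers(n)              # grab one data point at random
    grad = (X[i] @ w - y[i]) * X[i]  # gradient of (1/2)(x_i . w - y_i)^2
    w -= step * grad                 # update the model, move on

print(np.round(w, 2))  # close to w_true
```

Each iteration touches a single data point, which is what makes the method cheap per step and, with billions of points, so slow overall to polish to high accuracy.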

“Imagine you have billions and billions of data points, and this is an iterative process, it’s going to have to run for a long time,” says Curtis. “We’re looking for a way to process that data in a more sophisticated way.”

“We’re not the only ones,” Scheinberg adds. “Everyone is trying to beat it.”

An Interdisciplinary Approach

Optimization is so fundamental to so many disciplines and applications, it is naturally interdisciplinary. In 2017, Scheinberg, Curtis and Takáč received a three-year, $1.5 million Transdisciplinary Research in Principles of Data Science (TRIPODS) grant from the National Science Foundation. TRIPODS grants are meant to promote advances in data science and related disciplines across science, mathematics and engineering. Scheinberg and Muñoz-Avila received a follow-on grant last year to organize workshops bringing together scholars from specific disciplines to explore promising areas of collaborative research. 

“We envision that this new series of workshops will happen this summer,” says Scheinberg. “We will have experts in robotics, chemistry and physics, psychology, and supply chain management to talk about optimization and machine learning.” 

The researchers hope the conferences will be a catalyst for the creation of advanced computational tools with broad application. Such gatherings matter, says Curtis, because researchers in different fields often talk past each other inadvertently, even when they share an interest in the same topic. 

“The problems are large and require cooperation. We need to learn from each other, but you might find people from three different communities working on the same problem, using different terminology,” he says. 

Story by Chris Quirk

Main illustration by Sabit Sugirov

This story originally appeared as "A Complex Road to the Simplest Path" in the 2019 Lehigh Research Review
