
There’s something coming through the Wi-Fi that you didn’t ask for. It’s watching what you do. It’s there when you open those “private” e-mails or post on a friend’s wall.  It talks to you, even if you don’t realize it. And what’s worse, you’re talking back.

But no one is hiding behind your computer screen, secretly tracking your movements through the net. That task has been left to the computers and a technique called machine learning.

Many of the ads you see online are displayed by machine-learning systems, whose purpose is to “learn” how people use the Internet. They compare your activity with everyone else’s to determine whether you’re more likely to respond to an ad for body wash or one for sexy body wash.
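
To make that concrete, here is a minimal sketch of the kind of click predictor such a system might use: a tiny logistic-regression model trained by gradient descent. Everything in it, the activity features, the click labels, and the numbers, is invented for illustration; real ad systems work the same way in spirit but at a vastly larger scale.

```python
import math

# Hypothetical training data: each user is a list of activity features
# (say, visits to sports, shopping, and grooming pages), paired with 1
# if the user clicked a body-wash ad and 0 if not.
examples = [([1.0, 0.2, 0.9], 1), ([0.1, 0.8, 0.0], 0),
            ([0.9, 0.1, 0.7], 1), ([0.2, 0.9, 0.1], 0)]

weights = [0.0, 0.0, 0.0]
bias = 0.0
rate = 0.5  # learning rate

def predict(features):
    """Estimated probability of a click, via the logistic function."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Training: nudge the weights to shrink the gap between each
# prediction and the click that actually happened.
for _ in range(1000):
    for features, clicked in examples:
        error = predict(features) - clicked
        for i, x in enumerate(features):
            weights[i] -= rate * error * x
        bias -= rate * error

# A new user whose activity resembles the past clickers:
print(round(predict([0.8, 0.2, 0.8]), 2))  # close to 1.0
```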

And it does not stop in cyberspace.

“We are surrounded by computer-based systems that impact our everyday lives,” said Itamar Arel, director of the Machine Intelligence Lab at the University of Tennessee, Knoxville.

Machine-learning techniques are currently used in automobile navigation systems, noise-cancelling headsets, and red-eye reduction in cameras.

Arel focuses on machine-learning algorithms that aid doctors in interpreting medical images. He hopes to develop machines that learn as well as, if not better than, humans. According to Arel, a good radiologist needs to look at about a thousand cases of benign and malignant growths before becoming proficient.

“We’re trying to mimic that same capability. Given a large number of examples, can we teach the system to learn what to look for by itself?” he asked.

As computer power increases, these same machine-learning agents may be able to diagnose cancers more quickly and with less training than even the best radiologists.
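
The learning-from-labeled-examples idea behind that work can be sketched in a few lines: a nearest-neighbor classifier that labels a new growth by finding the most similar case it has already seen. The two-number features (size, irregularity) and the labels below are invented stand-ins for the far richer measurements a real system would extract from medical images.

```python
# Labeled "cases": hypothetical (size, irregularity) measurements of
# growths, each tagged with the diagnosis a radiologist assigned.
cases = [((0.3, 0.1), "benign"), ((0.4, 0.2), "benign"),
         ((1.8, 0.9), "malignant"), ((2.1, 0.8), "malignant")]

def diagnose(size, irregularity):
    """Label a new growth by its most similar previously seen case."""
    def distance(case):
        (s, r), _label = case
        return (s - size) ** 2 + (r - irregularity) ** 2
    return min(cases, key=distance)[1]

print(diagnose(1.9, 0.7))  # "malignant": nearest to the malignant cases
```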

Machine-learning systems don’t come out of the box knowing about your health or your favorite book. Instead, they begin much like us: cute and stupid.

Like school children learning arithmetic from a teacher, some agents can use simple examples and feedback to learn how to approach more complicated information. Others learn by experience, storing the consequences of their decisions and drawing upon them when faced with similar situations. Either way, these machines can catch on to patterns in the data they’re given. When it comes to placing advertisements, this allows them to quickly home in on your likes and dislikes, so your next click is their doing.
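
The second, experience-driven flavor can be sketched as a simple “bandit” learner: it stores a running estimate of how each past decision paid off and draws on those estimates the next time it chooses. The two ads and their click-through rates below are hypothetical.

```python
import random

random.seed(1)

# Hypothetical click-through rates the learner does NOT get to see.
true_rate = {"body wash": 0.05, "sexy body wash": 0.12}
value = {ad: 0.0 for ad in true_rate}  # stored payoff estimates
plays = {ad: 0 for ad in true_rate}

for _ in range(5000):
    # Mostly show the best-known ad, but occasionally explore.
    if random.random() < 0.1:
        ad = random.choice(list(true_rate))
    else:
        ad = max(value, key=value.get)
    clicked = 1 if random.random() < true_rate[ad] else 0
    plays[ad] += 1
    # Fold the new outcome into the running average for that ad.
    value[ad] += (clicked - value[ad]) / plays[ad]

print(max(value, key=value.get))  # usually "sexy body wash"
```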

Despite its ubiquity, machine learning has limitations. Because much of machine learning attempts to mimic human learning, our incomplete understanding of the human brain means that teaching a machine to “learn to learn” is easier said than done. Unlike us, machine-learning programs can deal with only a limited range of information. The software that decides which advertisements you see cannot drive your noise-cancelling headphones or spot a problem on an X-ray of your head. And, if given too much information too early, a machine may get caught up in the details and never learn to catch patterns at all.
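
That last failure mode is close to what practitioners call overfitting, and a synthetic curve-fitting sketch shows it plainly: a very flexible model threads through the noise in its training points and then predicts poorly just beyond them. The data below are generated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)
y = 2 * x + rng.normal(0, 0.1, size=8)  # underlying pattern: y = 2x

line = np.polyfit(x, y, 1)    # simple model: captures the pattern
wiggly = np.polyfit(x, y, 7)  # flexible model: memorizes the noise

x_new = 1.1  # a point just outside the training data
print(np.polyval(line, x_new))    # near the true value of 2.2
print(np.polyval(wiggly, x_new))  # often wildly off the pattern
```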

Image-interpreting machines like Arel’s cannot process entire pictures at once, but have to wade through them pixel by pixel. Whereas you could take in the richness of a picture at a glance, computers prefer the thousand words. As scientists continue to learn about human behavior, researchers like Arel will continue to translate those results into machines that respond to the world more like we do.
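
Here is what “pixel by pixel” means in practice: to a program, a grayscale image is just a grid of brightness numbers, and even a simple question, such as where the bright spot is, gets answered by marching through every entry. The toy five-by-five image below is invented.

```python
# A toy 5x5 grayscale "image": just a grid of brightness numbers.
image = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]

bright = []
for row in range(len(image)):           # wade through the grid...
    for col in range(len(image[row])):  # ...one pixel at a time
        if image[row][col] > 5:
            bright.append((row, col))

print(bright)  # coordinates of the bright patch
```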

Online advertising, medicine, and other current uses are only the tip of the iceberg.

“I suspect that in ten years, there will be at least one machine learning component in almost any consumer electronics product,” Arel said.

This Behind the Scenes article was provided to LiveScience in partnership with the National Science Foundation (NSF). For more information about the NSF, visit http://www.nsf.gov/.