Artificial Intelligence and Machine Learning: What Are They?
You hear the terms everywhere. Artificial intelligence (AI). Machine learning (ML). But what are they, what can they do, and what are their limits? This two-part series offers a quick introduction. Part I starts with some basics and a bit of history. Once you’ve got that under your belt, Part II will turn to one powerful approach we use in Sophos Home to help keep you safe: deep learning.
Defining AI
There are many definitions of artificial intelligence (just as there are many definitions of intelligence itself). Here’s one we like: AI involves agents capable of perceiving something about their environment and acting in ways that help them achieve specified goals. Sometimes AI agents are designed to behave somewhat like human brains (or at least our best current understanding of how brains work). In other cases, not so much: for example, some forms of AI use mathematical logic in quite non-human ways. Regardless, though, AI systems are trying to understand* something about the world, so they can change it (or help people do so).
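To make that definition a bit more concrete, here’s a minimal sketch of the perceive-and-act loop at the heart of many AI agents. The thermostat agent and its target_temp goal are our own illustration, not any particular system:

```python
# A minimal perceive-decide-act agent. The Thermostat and its
# target_temp goal are hypothetical, purely for illustration.

class Thermostat:
    def __init__(self, target_temp):
        self.target_temp = target_temp  # the agent's specified goal

    def perceive(self, environment):
        # Sense one thing about the environment: the current temperature.
        return environment["temperature"]

    def act(self, current_temp):
        # Act in a way that nudges the environment toward the goal.
        return "heat on" if current_temp < self.target_temp else "heat off"

agent = Thermostat(target_temp=20.0)
reading = agent.perceive({"temperature": 17.5})
print(agent.act(reading))  # -> heat on
```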
How Did It All Begin?
If you don’t count myths or science fiction, modern AI began in the late 1940s/early 1950s, along with the creation of the first digital computers. Mathematicians and early computer scientists realized that many problems formerly solved only by humans could, at least in theory, be solved by computers using step-by-step rules (“algorithms”). So, for example, they wrote checkers programs that could match or defeat most human players by following a handful of pre-defined rules.
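To give a flavor of that rule-following style, here’s a toy sketch in the spirit of those early game programs. The rules and scores below are invented for illustration; they aren’t taken from any historical checkers program:

```python
# A toy rule-based move chooser: score each candidate move with a few
# fixed, hand-written rules, then pick the highest scorer. The rules
# and weights here are invented for illustration.

RULES = [
    lambda m: m["captures"] * 10,               # prefer capturing pieces
    lambda m: 5 if m["makes_king"] else 0,      # prefer promotions
    lambda m: -3 if m["exposes_piece"] else 0,  # avoid losing pieces
]

def score(move):
    return sum(rule(move) for rule in RULES)

candidate_moves = [
    {"captures": 1, "makes_king": False, "exposes_piece": True},
    {"captures": 0, "makes_king": True,  "exposes_piece": False},
]

print(max(candidate_moves, key=score))  # the highest-scoring move
```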
Optimists envisioned scaling these techniques to increasingly complex problems: some thought that within 20 years, they could build AI systems capable of doing virtually any human work. It didn’t turn out that way. For the first 50+ years of artificial intelligence, the same pattern repeated itself: new advances made headway on certain problems, but broader hopes for AI often came up short.
Expert Systems and Their Discontents
For example, “expert systems” sought to capture in software the factual knowledge and rules of thumb developed by human experts in highly technical fields, such as diagnosing disease or analyzing chemicals. Once a system was built, it could theoretically replace expert humans, complement their judgments with a “second opinion,” or make their expertise available more widely. Of course, experts didn’t always like to teach systems that might replace them, and sometimes they couldn’t explain their own intuitions.
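To show the general shape of the idea, here’s a minimal sketch of an if-then rule base with a tiny inference loop. The medical “rules” and facts are made up for illustration; real expert systems held hundreds or thousands of rules painstakingly elicited from human experts:

```python
# A minimal forward-chaining rule engine: apply any rule whose
# conditions are all known facts, add its conclusion as a new fact,
# and repeat until nothing changes. Rules and facts are invented.

rules = [
    ({"fever", "cough"}, "possible flu"),
    ({"possible flu", "muscle aches"}, "flu likely"),
]

def infer(initial_facts):
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "muscle aches"}))
# -> includes "possible flu" and "flu likely"
```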
Typical expert systems were very narrow in scope, needed continual updating, and sometimes didn’t recognize when a problem exceeded their expertise. Many were ultimately viewed as disappointing. But much of what was learned in building them is now used in other ways – for example, in helping businesses automate repetitive workflows to improve efficiency.
Making Machines Learn Like Humans Do
The limits of many early AI systems led some researchers to step back and think more closely about how humans actually learn. We don’t typically learn by being taught a massive set of rules upfront. Rather, we experience the world, see what works and what doesn’t, and gradually learn from our experiences. The AI equivalent of this is called machine learning.
Over the past thirty years, AI researchers have developed many techniques, algorithms, and models for machine learning. The right approach varies depending on the problem that needs solving. For example, techniques from the field of natural language processing are often used to derive meaning from human language – e.g., in chatbots.
Most machine learning is either supervised or unsupervised. In supervised learning, the AI developer trains her algorithm by showing it a large set of examples, each labeled with the correct answer – say, thousands of labeled pictures of vegetables. As she corrects the algorithm’s mistakes, its performance should gradually improve, as it discovers the patterns associated with each vegetable.
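Here’s a minimal sketch of that supervised workflow, using the scikit-learn library. The (length, weight) measurements and vegetable labels are made up for illustration:

```python
# Supervised learning in miniature: learn from labeled examples,
# then predict labels for new, unseen ones. The (length_cm, weight_g)
# features and vegetable labels are invented for illustration.

from sklearn.neighbors import KNeighborsClassifier

X_train = [[20, 120], [18, 100], [5, 80], [6, 90]]      # measurements
y_train = ["cucumber", "cucumber", "tomato", "tomato"]  # correct answers

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)  # discover the patterns behind each label

print(model.predict([[19, 110]]))  # -> ['cucumber']
```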
By contrast, in unsupervised learning, the examples aren’t labeled: the algorithm has to figure out the categories, too. Imagine you’re a business with millions of customers. You know they probably cluster into groups of similar customers, but how? By demographics? Income? Previous purchases? Lifestyles? Hobbies? Something you haven’t thought of? You can’t tell your system what to look for, because you don’t know. But given enough high-quality data, it can identify the clusters on its own.
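And here’s a minimal sketch of the unsupervised case, again with scikit-learn: k-means clustering groups the customers on its own, with no labels provided. The (age, annual spend) data and the choice of three clusters are assumptions for illustration:

```python
# Unsupervised learning in miniature: no labels are given, so k-means
# groups customers by similarity on its own. The (age, annual_spend)
# values and the choice of k=3 clusters are invented for illustration.

from sklearn.cluster import KMeans

customers = [
    [22, 500], [25, 450],    # young, low spend
    [47, 2500], [52, 2700],  # older, high spend
    [33, 1200], [35, 1100],  # in between
]

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(customers)  # cluster id for each customer

print(labels)  # e.g. [2 2 0 0 1 1] -- groups discovered, not taught
```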
Among the many types of algorithms that have been developed for machine learning, one has attracted unusual attention lately: neural networks. These are composed of large numbers of artificial neurons (loosely analogous to your brain cells), organized into layers. Neural networks have long been viewed as attractive for unsupervised learning, and for working with “noisy” data containing irrelevant content you want to ignore. But they had their own significant tradeoffs and limitations.
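To give a feel for that layered structure, here’s a bare-bones sketch of a two-layer network’s forward pass in NumPy. The weights here are random rather than learned, purely to show the shape – learning good weights is what training is all about:

```python
# The bare anatomy of a neural network: layers of artificial neurons,
# each computing a weighted sum of its inputs passed through a simple
# "activation" function. Weights are random here, not trained.

import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Each neuron: weighted sum of inputs plus a bias, squashed
    # into (0, 1) by the sigmoid activation function.
    return 1.0 / (1.0 + np.exp(-(inputs @ weights + biases)))

x = np.array([0.2, 0.9, 0.4])                   # 3 input values
w1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # layer 1: 4 neurons
w2, b2 = rng.normal(size=(4, 2)), np.zeros(2)   # layer 2: 2 neurons

hidden = layer(x, w1, b1)       # first layer of neurons fires
output = layer(hidden, w2, b2)  # second layer consumes the first
print(output)                   # 2 output values between 0 and 1
```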
In recent years, however, the algorithms have advanced. Meanwhile, computer power has soared, and massive training datasets have become available. (For example, think of how much data Google, Facebook, or Netflix possess; or the huge collections of captioned images now available on the Internet.)
Taking advantage of these improvements, researchers are achieving amazing results with a related technique called deep learning. In Part II, we’ll explore deep learning, and discuss what’s been done with it – at places like Google, and right here at Sophos, too.
------------------------
*For our purposes, we’ll put aside the philosophical question of what AI systems really, truly “understand.” If you want to explore that question, start with the 1980s-1990s debate surrounding John Searle’s “Chinese Room” argument. But be warned: once you dive into that, it may take you a while to get back to your work and family.