AI for malware: is it real?

December 15th, 2019
What Does Artificial Intelligence Have to Do with Cybersecurity?

Artificial intelligence involves using computer hardware, software, and math to perform tasks that have traditionally required natural (human) intelligence – or that, in some cases, clearly require intelligence but can’t be done well by most humans. Often, this means creating a computer-based system that can process information to recognize something about the environment, analyze the implications, and act to achieve a goal (or to help humans achieve one).

Working artificial intelligence systems have been around since the 1950s. By 1955, researchers had built systems that could play a reasonable game of checkers. By the mid-1960s, the program ELIZA was using pattern matching to carry on a clunky conversation that simulated a Rogerian psychotherapist – convincing some people it actually “understood” them. Since then, the field has had many ups and downs, and it has invented a wide variety of techniques, each with its own strengths and weaknesses. These areas of artificial intelligence have included:

  • Logical reasoning to solve problems such as simple puzzles
  • Expert systems to organize and represent the knowledge in a specific field, and use it to improve decision-making
  • Perception systems to capture sensor input such as video or audio, and interpret it to act in the world – for example, self-driving cars that can recognize pedestrians, or warehouse robots that can find and retrieve the correct items from a customer order
  • Natural language processing systems to interpret human language and respond appropriately – perhaps in conversation (Amazon Alexa), translation (Google Translate), or by uncovering emerging trends by reading millions of Facebook and Twitter posts
  • Machine learning systems that use large amounts of data to learn from experience and solve problems more successfully, without being explicitly programmed with the relevant problem-solving rules (a toy sketch follows this list)
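
That last idea is easier to see in code. Below is a minimal sketch of learning from examples, written from scratch in Python as a nearest-centroid classifier. The feature values and labels are invented purely for illustration; real systems use vastly more data and far richer models.

```python
# Toy machine learning: classify new samples by comparing them to the
# averages ("centroids") of labeled examples. No decision rules are
# hand-written; the behavior comes entirely from the training data.
# All numbers and labels here are invented for illustration.

def centroid(points):
    """Average each feature across a list of feature vectors."""
    return [sum(p[i] for p in points) / len(points)
            for i in range(len(points[0]))]

def classify(sample, centroids):
    """Return the label whose centroid is closest to the sample."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Labeled training examples: two made-up numeric features per sample.
training = {
    "benign":     [[0.10, 0.20], [0.20, 0.10], [0.15, 0.25]],
    "suspicious": [[0.90, 0.80], [0.80, 0.95], [0.85, 0.90]],
}
centroids = {label: centroid(points) for label, points in training.items()}

print(classify([0.88, 0.85], centroids))  # -> suspicious
```

The point of the toy: nothing in the code says what makes a sample "suspicious" – that judgment is learned entirely from the labeled examples.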

AI’s evolving uses

So far, the goals that an artificial intelligence system aims to achieve have typically been specific and at least somewhat well-defined. For example, at Sophos, we use a form of machine learning in our Sophos Home product to recognize new forms of malware nobody has ever seen in the wild before. Others might use machine learning to recognize potential terrorists in an airport, analyze MRI results more accurately, or decide whether to approve your loan. Our anti-malware tech won’t score your loan application. (Yet.)
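
To make that concrete, here is a hedged sketch of the general approach – not Sophos’s actual technology – in which a file is reduced to a handful of numeric features that a trained model could then score. The specific features and the file name sample.bin are invented for illustration.

```python
import math
from collections import Counter

# Toy feature extraction for ML-based malware detection. The features
# below (size, byte entropy, crude URL count) are invented illustrations;
# real products use far richer features and models.

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, from 0.0 to 8.0 bits.
    Packed or encrypted executables tend to score close to 8."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())

def extract_features(path: str) -> list:
    """Turn a file into a small numeric feature vector."""
    with open(path, "rb") as f:
        data = f.read()
    return [
        float(len(data)),            # file size in bytes
        byte_entropy(data),          # overall randomness of the content
        float(data.count(b"http")),  # rough count of embedded URLs
    ]

# A trained classifier (e.g., the nearest-centroid toy shown earlier)
# would take this vector and output a verdict or a probability score.
print(extract_features("sample.bin"))  # "sample.bin" is a hypothetical file
```

The appeal of this design is that the model can generalize: a brand-new file it has never seen can still land near known-bad examples in feature space.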

Today, substantial research is also being done on “artificial general intelligence” (AGI): the development of machines that can move between tasks at will, even if they relate to different areas of knowledge, and solve problems that haven’t been defined in advance. This is how humans operate in the world, and simulating it is obviously a huge challenge. Nobody knows how long it’ll take to achieve AGI: a decade, a lifetime, a century, never? But companies like Google and Microsoft are already investing enormous amounts of money in trying to do so.

The AI ethical dilemma

All this means that the science of artificial intelligence also has an ethical component. In the future, what would it mean if computers could outthink humans across the board? And right now – to what extent should humans rely on AI that usually does a good job, but occasionally makes big mistakes that hurt people? Who gets to assess these systems and decide if they’re good enough – and what happens to the unlucky people who get hurt?

Issues like these mean AI isn’t just for scientists, researchers, and executives to think about. It matters to everyone.

To learn more about AI, see Sophos Home’s recent articles “Artificial intelligence and machine learning: what are they?” and “What’s so deep (and powerful) about deep learning?”
