Tom Cross

Head of Threat Research

How did you first become interested in cybersecurity?

I was involved in the “old school” hacker scene in the late 1980s and early 1990s. At that time, several different networks and technologies were competing to become the one “true” Internet that we know today. People in the hacker community knew that massive changes were coming, which would have unexpected consequences for society, both positive and negative. Many people from that community ultimately became cybersecurity professionals as our digital landscape evolved, and we found that we could apply what we had learned to help people. 

What are you working on at GetReal, and why is it important?

The emergence of Generative AI has given threat actors new, powerful capabilities for targeting enterprises. To start, AI is exceptional at working with source code, making it simple for those who lack technical expertise to “vibe code” software that automates tasks, enabling them to operate at a scale that wasn’t previously possible. Additionally, AI is a great tool for automated reconnaissance. An LLM can read and synthesize everything about a company and its employees, making it easier than ever for malicious actors to craft highly customized social engineering attacks. 

It’s critical that we don’t think of social engineering attacks in isolation, but rather consider how these attacks fit into the broader campaigns that actors use to target organizations, their networks, and their infrastructure. When you combine these capabilities with the ability to produce nearly indistinguishable audio and video deepfakes, the result is an increasingly automated, full-spectrum enterprise intrusion that can target both people and infrastructure. 

This is a truly unprecedented cybersecurity problem that will require a multidisciplinary effort to combat. And I’m honored to join the team here at GetReal alongside world-class experts in digital media forensics and machine learning. My role here is to ensure that we first understand the threat, track the incidents and the actors who perpetrate them, and make the connections that enable us to anticipate what those actors will do in the future. We will be able to apply this threat intelligence to build the best detection capabilities on the market. 

What’s your unique approach or philosophy toward research?

Part of my role at GetReal is to engage in “adversarial thinking,” or the ability to see vulnerabilities in systems. This is an essential skill set in cybersecurity, and there is considerable debate about whether it can be taught or if it can only be learned through experience. 

I think, at its most fundamental level, it’s important to see computer systems as they are, rather than as they are meant to be. Professional training as an engineer gives you deep mental models for how computers are supposed to work, which can lead you to make favorable assumptions about their implementation that are not always correct. We also use abstractions in computer science to manage the complexity of systems, and those abstractions are designed to hide from us the details of implementation. Vulnerabilities hide within abstractions and assumptions. 

Of course, adversarial thinking is not just applicable to computer systems, but also to human systems. Human systems are what attackers are increasingly exploiting with generative AI.

What’s the best advice you’ve ever received, and why was it impactful?

In the literature on military deception, there is a concept known as Magruder’s Principle. It’s the idea that a target of deception is more likely to believe something that confirms their pre-existing beliefs than something that contradicts or challenges those beliefs. 

I tend to think of Magruder’s Principle as the golden rule of deception, and it’s a useful thing to remember in various contexts, especially cybersecurity. We are most vulnerable to false information that reinforces the things that we want to believe, and it takes self-discipline to scrutinize that information as carefully as we scrutinize facts that challenge our assumptions. 

What excites you most about GetReal’s future?

It’s becoming increasingly difficult for people to trust what they see on their computer screens. If I go back to the ideas that people like Douglas Engelbart and Ted Nelson were thinking about in the 1960s, they envisioned computers as tools to augment human intelligence. If computers are engines of deception, they aren’t helping us think. They may actually be driving us toward poorer decisions. 

That’s why anti-deception technology is needed to help computers fulfill their purpose—to advance human intelligence. This is part of the greater mission GetReal is contributing to.