Steve Schmidt, Amazon Chief Security Officer, discusses the future of cybersecurity in the age of AI.

Steve Schmidt, Amazon’s chief security officer, spoke at AWS re:Inforce last week. (Amazon Photo)

This Week on the GeekWire podcast: Seattle’s cloud giants had a big week in cybersecurity, though each in a very different way.

Microsoft President Brad Smith is in Washington, D.C., to testify before the U.S. House Homeland Security Committee on Microsoft’s security challenges.

Amazon held its annual AWS re:Inforce cloud security conference at the Philadelphia Convention Center, where the use of generative AI in cybersecurity was a major theme. That was also the main topic of my recent conversation with Steve Schmidt, Amazon’s chief security officer.

Continue reading for highlights from the conversation, edited for clarity and context.

Subscribe to GeekWire in Apple Podcasts or Spotify.

How generative AI is changing the security landscape: “Generative AI definitely allows attackers to be more effective in certain areas. It’s a great tool for an attacker to use when creating more convincing phishing emails or enticing people to click links.

“But it also allows the defender to be more effective, because when we use generative AI, our security engineering staff can be more efficient. It allows us to offload a lot of the heavy lifting that engineers used to do, so they can focus on the things they are best at: looking at the murky gray areas, sorting through the tiny pieces that don’t make sense, and putting the puzzle together into a picture that makes them say, ‘Aha, I know what’s going on.’

“In most cases when we apply generative AI to the security work we have to perform, we end up with happier security engineers at the other end, because they don’t really want to do the tedious, laborious stuff. They want to apply their minds.”

A use case for generative AI in security: “A simple example is plain-language summarization of complex events. A large part of my security job involves combining tiny bits of technical data into a narrative about what is happening.

“Every security professional must be able to create a story and then convey that information to business owners. It’s one of the most difficult parts of our jobs — translating something complex, technical, and nuanced into language that is understandable to a chief financial officer or chief executive officer. Generative AI has proven to be very helpful in that area.”


Three questions that companies should ask themselves to adopt generative AI in a secure way:

  1. Where is our data? “Business teams send data to LLMs for processing, whether for training, to help build or customize the model, through queries, or when they use the model. How is the data handled during this workflow? How is it protected? These are important things to know.”
  2. What happens to my query and any data associated with it? “Training data shouldn’t be the only sensitive data you are concerned about as users begin to embrace generative AI. If your user queries an AI app, are the results of that query and how the user reacts to them used to train the model? What about the file the user submitted with the query?”
  3. Are the outputs from these models accurate? “The quality of outputs from these models is improving steadily, and security teams can use generative AI to solve real challenges. From a security perspective, the use case is what defines the relative risk.”

How Schmidt’s experience at the FBI informs his current approach: “The thing I took most from my experience at the FBI was a concentration on the people behind adverse actions. For example, a large part of my career was spent focusing on Russian and Chinese counterintelligence. The motivators in the classic world of espionage are the same ones that drive hackers today: money, ideology, coercion, or ego.”

What he gains from his volunteer work as an EMT and firefighter: “As humans, we crave feedback. We want to know that our efforts are appreciated. In the computer world, much of what we deal with is virtual, and it’s hard to see what your actions lead to. It’s hard to see the individual impact of your actions when you’re looking over hundreds of millions of machines, as I am.

“As a volunteer firefighter and advanced emergency medical technician, I know that if I perform my job well, a human being I can see and touch will have a better day. I also get real human feedback, which is not available from a computer. It’s very satisfying. As a human being, I know that I am personally adding value — I am helping a person in a difficult situation, which may have been their worst day ever. We’re going to make it better.”

You can listen to the entire conversation on Apple Podcasts, Spotify, or wherever you get your podcasts.


Audio editing done by Curt Milton.