What will it take to make artificial intelligence safe and secure?

The use of technology in elections was a hot topic in the 2020 presidential election. With the explosion of artificial intelligence in recent years, the issue will again permeate the political scene in this year’s election.

Milos Manic, Ph.D., is the director of Virginia Commonwealth University’s Cybersecurity Center. He is particularly concerned about the impact of AI on undecided voters.

“Will the undecided part of the electorate be targeted in a specific way?” he asked. “Will they be fed carefully selected information and deliberately misinterpreted data?”

Manic is a professor in the Department of Computer Science at VCU’s College of Engineering and has completed more than 40 research grants. In April, he was awarded the FBI Director’s Community Leadership Award for his work in advancing AI and protecting the nation’s infrastructure against cyberattacks.

VCU News asked Manic to provide insights into the intersection between AI, elections, and society as a whole.


What are the key vulnerabilities that we face when it comes to election integrity?

Social media can quickly and easily create the illusion of group consensus, when 90% of the time that is not the case. Human psychology, human vulnerability and the human psyche are the front lines of today’s battles.

Psychological warfare has been practiced for hundreds of years. In the World Wars, it might have been a booklet given to one person to read to an entire village, or pamphlets dropped from planes. Now it’s a psychological war waged through technology. Humans react to emotional triggers, and cognitive biases are common to all of us.


Is there a way to address these vulnerabilities quickly, even if the solutions are imperfect?

We can look at four areas. The first is policy. The second is AI that is safe and secure. The third is user awareness. The fourth is looking across borders and forming alliances.

Milos Manic, Ph.D., served in May as part of a U.S. delegation to the European Union on cybersecurity and AI in Brussels. (Courtesy photo)

The world is late to the AI game in terms of policy. Now a number of organizations, governmental and nongovernmental as well as nonprofit, are attempting to deal with issues such as governance, regulatory frameworks and policy.

There’s not much we can’t do with our current resources, algorithms or knowledge if we have enough time. The question is: Are we developing AI in the right manner and for the right reasons?

We showed six or seven years ago that we could find three algorithms that would solve almost any problem. The question is, will such a solution work when it’s put into production? Are we pursuing transformations that will be as important in the future as they are today?


What are the other three areas?

For AI to be safe and secure, the key question is: Can we count on ethical developers who are unbiased and trustworthy?

The next area is the users: the public’s awareness and understanding. What are we posting on Facebook? What are we sharing or making available to others? How do we know whom we are sharing it with? Facebook users may claim, “I know with whom I share.” No, you do not. You think you know. How can you be sure your friend’s account has not been compromised by someone posing as them?

The next area is going beyond borders. In the cyberworld there is something called the Five Eyes [Australia, Canada, New Zealand, the United Kingdom and the United States], an intelligence alliance. Adversaries can now operate from computers or networks located continents away. Without such alliances, it is easy for nefarious actors to get in.


Does your work have a strong connection to election integrity or political disinformation?

Real-time AI fraud detection is the key. AI is powerful if you have the computing resources. Given enough data and time, there’s almost nothing it can’t do. But can you do it in real time?

VCU leads the statewide Commonwealth Center for Advanced Computing, or CCAC. The center’s centerpiece is the IBM z16, one of the supercomputers with on-chip AI specifically designed for real-time fraud detection. There are many powerful algorithms and machines in industry and academia today, but doing the work in real time is the challenge. VCU is the steward of this state resource.


As AI’s ability to create realistic images and convincing text grows, how do you assess its potential threat (or protection) to political and electoral processes?

The key question is: Can you use AI to respond in real time? All of these attacks, frauds and disinformation campaigns happen in real time. Detection that arrives too late doesn’t really matter; by then, opinion has already been influenced. Reacting too late is the same as not reacting at all, and it’s too much to ask people to change their minds afterward.

I am concerned that these machines may become intelligent enough to customize their responses to the user. They will present the same information in a completely different way to me than to my daughter, who is much younger. Her perception of reality will be different from mine. The ability of a computer to learn and adapt in real time is scary.

We must focus on the human psyche and human vulnerabilities. What are our vulnerabilities? This is no longer just an engineering and computer science problem; it requires human factors experts, psychology specialists and so on.


Beyond elections, what is keeping you awake at night about AI right now – and what helps you sleep better?

It’s always moving. The problem is, the moment bad actors discover what we can detect, they will work to make deepfakes even better.

Some say we need to develop better vetting tools. But who will vouch that the vetting tool itself has not been compromised? This has been the case in cyberspace for years; it’s a game of cat and mouse. I am certain this will go on, and I have no doubt the good actors will keep losing sleep over it.

Real-time detection will be the key in the years to come. We will need faster and faster solutions to detect deepfake content and a vetting system tied to our tools. We will have tools that flag content in real time in whatever browser we use, whether it is Chrome, Firefox or Safari.

Human vulnerability is what keeps me up at night. How can we keep up with all these changes and make decisions in a fair way? We are bringing a lot of practices from cyberspace into AI.

I’m more concerned about humans making last-minute decisions because something just happened; that is the main issue today. People will change their opinions based on what they have seen, heard or been told. Cyberwars will continue because of human vulnerability and the human psyche. I’m less concerned about the technical aspect. Humans are the starting point and the end of it all.