Cybersecurity QA fails: Social media’s a part of it too

https://regmedia.co.uk/2016/07/18/dumpsterfire.jpg

Opinion In Neal Stephenson’s 1992 novel Snow Crash, he invents malware that can leap species from silicon to the human brain. That’s a great metaphor for so much of our online lives, but it raises one question of particular interest. If humans can be damaged by our own technology, should we protect not just our data but ourselves through cybersecurity?

There are many areas where this makes sense. Our struggles to define what generative AI can safely do and how the results fit into law and ethics are as much about protecting our culture as our commerce.

You can’t have it both ways. You can’t say that your AI expertise and security chops are the finest in the business, and say that there’s no solution to hate speech and disinfo online

Likewise, the power of algorithmic content selection and social media to amplify disinfo and hate has a direct effect on how people behave towards others. These are real world problems with frightening potential for actual harm, as we’ve been seeing with the far right riots across the UK this past week. They exist at the interface between our technology and ourselves, and they must have a technical component in their remedy. Cybersecurity is all about detecting and disabling damage agents at interfaces. It feels like a good conceptual fit. 

If only cybersecurity wasn’t so frequently awful. A lot is fine; the stuff that works never makes the headlines. We’ve built a global network of billions of devices that most of the time doesn’t actually attack us. But when cybersecurity goes wrong, oh boy. Occasionally, this is because people discover things that are genuinely hard to foresee, such as glitching chips with noise spikes, exposing data that’s supposed to be uninspectable. Mostly, though, it’s the trifecta of human vice – stupidity, greed, and laziness.

This is particularly obnoxious in security software. PC antivirus software, so often forced on users as bundled bloatware, often slowed down and crashed computers far more often than viruses ever did. Without consumer choice in the loop, it fed corporate greed without demanding high quality user experiences that proper design and QA could provide.

Software design may have improved over the years, but humans haven’t. CrowdStrike lived up to its name recently when it BSODed up the world by pushing a broken update. How did that happen? Not enough QA. Was this stupidity in thinking this wasn’t necessary? Greed in minimizing costs to plump up the bottom line? Laziness in doing a half-arsed job before knocking off for a beer? Whatever the case may be, the result was a global cyberattack that touched millions.

Meanwhile, Meta made a richly amusing clown of itself when its machine learning powered prompt injection detector could itself be skewered by a prompt injection attack. One enabled, moreover, by simply typing some spaces. In general, failures in brand new software can be seen as part of the QA process. Moving from release candidate to production just means you probably haven’t found all the bugs yet. That this software team didn’t find this bug in this tool, though, will overload even the best protected irony meter. Thanks, Zuck, we needed that.

Experts at replacing human thinking work with AI? Prove it!

It’s quite possible that by the time you read this, that bug will be fixed. Meta, like all its peers in big tech, is proud of its AI and predicts huge things for it. There’s not an area of human activity untouched by the analysis of massive datasets and the synthesis of useful output. The end results may be uncertain, but the process is well under way. Well, almost.

At the same time as Meta and Google are proclaiming their AI expertise in performing human-like tasks at massive scale, they claim an inability to solace one particular and highly dangerous technology/human interface problem. This is the way content-selection algorithms on social media and content delivery services hack human psychology to drive engagement.

You can see this for yourself if you show any sort of interest in invective online. People engage very strongly with content that angers them, even more so if it assures the content consumer that they’re the victim and should be angry too. The result is an explosion in online populist politics, conspiracy-centered culture warcraft, and pathways to radicalization. It is symbiotic with disinformation, hate speech, and unevidenced claims. It is not difficult to spot, rather it is very difficult to avoid. It is eating the world at a terrifying rate.

If we can spot this human malware, it should be bread and butter to large language model-driven machine learning. Especially if deployed within the networks that deliver it. Detecting the attack at the interface, observing the behavioral changes it engenders, at scale without affecting legitimate usage? Exactly the mix of classic cybersecurity and new LLM technology that Meta and friends promote as the future. Except, it seems, for the algorithms that push engagement and profit.

You can’t have it both ways. You can’t say that your AI expertise and security chops are the finest in the business, and say that there’s no solution to hate speech and disinfo online. Perhaps machine learning can’t solve this problem, in which case they’ve got some explaining to do about how it’s so “good” at everything else. Or perhaps they’d rather break the world than clean up their act. It’s a massive failure in QA, not technology. Stupidity, greed, laziness.

Let’s call their bluff. Let’s ask about the projects to use their prize technologies to solve their most heinous antisocial behavior. Let’s ask our politicians why they accept excuses in place of action. And let’s ask ourselves, who know and use good cybersecurity when we see it, how we can start to protect the human endpoints of the network. We are, after all, under attack. ®

<<<- Go Back