Lead story – Cyber resilience – what is it and how can you achieve it? The experts’ view
Cyber resilience? It certainly beats cyber-reactive-mode, which is where most companies are at. It’s not easy to get out in front of a threat landscape powered by AI scale and well-funded bad actors.
How to respond? Via a recent expert panel, Chris boils down the best ideas. Start with this: even if your organization avoids the obvious blunders, there is a big software footprint to manage:
But while there may be examples of basic security protocols not being followed, the reality is that, for many organizations – especially those that are publicly funded – outdated systems can’t simply be stripped out and replaced at scale. Money is tight, budgets have been cut, and even the wealthiest, most cutting-edge enterprise is only as secure as the number of opportunities everyday office procedures afford to make a simple mistake.
Invest in new security tech? Sure. But as this panel argues, this is a human problem/solution. Chris:
In part, the answer is to adopt a more human-centric approach to cybersecurity – policies that consider people as the strongest, rather than the weakest, link, via a no-blame culture of open reporting.
Getting security right means pushing past enterprise borders:
However, another factor is recognizing that the wider supply chain – both upstream and downstream – is also a source of risk; no enterprise is an island in a cloud-enabled world.
In Cyber resilience – how to achieve it when most businesses – and CISOs – don’t care, Chris delves further into some surprising data. As he concludes, it comes down to your security culture:
Educating staff about risk and encouraging no-blame breach reporting are essential, therefore – rather than the culture of victim-shaming that still dominates the media. After all, if organizations such as national data centres and the US Federal Reserve can be breached, then anyone can. The question then becomes what to do about it without locking down the business and repelling allcomers – including customers, perhaps.
This all rings true, but I will say this: if you’re going to have a business with sensitive customer data in the cloud, then invest in whatever it takes. My health care provider, Harvard Pilgrim, was offline for months due to a ransomware attack it was not built to recover from. My own Social Security number has been compromised more than once, including in the infamous 2017 Equifax breach that exposed the half-@ssed nature of their approach.
So, with greater efficiency (cloud) comes greater responsibility. Culture matters, but so does the investment. Equifax spent vast resources on legal compensation for past mistakes. $1.5 billion later, their security is much tighter. Could they ever be breached? Of course. But at least security is now a top-line priority, aligned with the type of data they store online.
Diginomica picks – my top stories on diginomica this week
Vendor analysis, diginomica style. Here are my top three choices from our vendor coverage:
A few more vendor picks, without the quotables:
Jon’s grab bag – Sarah looks at How Rolls Royce is using AI to look under the rocks of complexity, albeit in the pilot/experimentation phase. Martin moves sacred cows aside with A SaaD future knocking over honeypots? Onymous CEO Shiva Nathan on why the cloud has been set up wrong. (SaaD, a rather unfortunate acronym, stands for Software as a Device).
Cath raises (and answers) the right question in As Pride Month draws to a close, what can tech sector employers do to support their LGBTQIA+ colleagues every day of the year? Finally, George asks a question I wasn’t looking forward to in Generative AI accents are coming to call centers – is this a good thing? I’m going to take a ‘wait and see’ attitude on this one, but if you ask me today, I’ll say no. How about making your call center easier to navigate, and empowering your agents to solve problems rather than handing out escalation phone numbers for another trip to another call center?
Best of the enterprise web
My top seven
- MIT robotics pioneer Rodney Brooks thinks people are vastly overestimating generative AI – Ron Miller with strong reporting here; even if a few more sacred cows are now out to pasture. I’m not sure if we are vastly overestimating gen AI, but I do think we overestimate the pace of gen AI improvements from here. We are close to the limits of training data scale. Bring on the enterprise gen AI pursuit, where the focus has shifted from scale to industry-specific output improvement and process embedding. Robotics has similarities to gen AI (and self-driving cars) in terms of the difficulty of the “outlier” problem. But as Rodney Brooks says in his interview with Miller, in more controlled settings, things are promising:
“We need to automate in places where things have already been cleaned up. So the example of my company is we’re doing pretty well in warehouses, and warehouses are actually pretty constrained. The lighting doesn’t change with those big buildings. There’s not stuff lying around on the floor because the people pushing carts would run into that. There’s no floating plastic bags going around.”
- How adversarial AI is creating shallow trust in deepfake world – Louis Columbus raises the potent question du jour: “The growing trust gap permeates everything, from customers’ buying relationships with businesses they’ve trusted for years to elections being held in seven of the ten largest countries in the world. Telesign’s 2024 Trust Index provides new insights into the growing trust gap between customers and the companies they buy from and, on a broader scale, national elections. Deepfakes and misinformation are driving a wedge of distrust between companies, the customers they serve, and citizens participating in elections this year.”
- Customer-Facing Incidents on the Rise, IT Leaders Say – On the brighter side, much of this is preventable. As per The New Stack: “51% of cybersecurity and IT leaders surveyed said more than half of cybersecurity incidents at their organization are due to poor cyber hygiene.”
- OpenAI Faces More Lawsuits Over Copyrighted Data Used to Train ChatGPT – The copyright lawsuits are mounting. I believe OpenAI will end up on the losing side of these proceedings, but the end result will be a line-item expense: licensing fee payments, and perhaps some penalty fines. This will affect OpenAI’s profitability but not its business model. Individual creators who played the most vital/unwitting role in training these systems are (and will be) the big losers. Still, enterprises with OpenAI subscriptions should be tracking this.
- Where Are We With Enterprise Generative AI? – Speaking of gen AI in the enterprise, this is a pretty good summary from Evangelos Simoudis on how enterprises are refining LLMs for better accuracy/relevance/utility. The fascinating industry-specific use cases are mostly still in pilot mode.
- Redefining Your Relationship with Data – Lora Cecere coaches up supply chain leaders on how to deal with the data quality problem in new ways. The extent to which machine learning/AI can help in data cleansing/quality efforts is the burning question.
- ‘No Bot is Themselves Anymore:’ Character.ai Users Report Sudden Personality Changes to Chatbots – This 404 media piece is not about the enterprise, but the lessons around bots and model/output drift with new releases are relevant. (Character.ai is second only to ChatGPT in consumer popularity).
- Podcast note – also check my podcasts with Brian Sommer on Sage analyst day and AI, as well as a shorter look at CFO dilemmas.
Whiffs
A few doozy headlines from 404 Media this week, including Lawsuit Claims Microsoft Tracked Sex Toy Shoppers With ‘Recording in Real Time’ Software. But then again, I already handed out article title of the week:
Yes, I know, celebrity keynotes are a soft target, but as Bonnie Tinder pointed out, the irony is through the roof:
Finally, Frank Scavo has been on a roll lately spotting mega-whiffs:
See you next time… If you find an #ensw piece that qualifies for hits and misses – in a good or bad way – let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.