Advisers should ‘try something, fail fast and learn’, says CISI chief executive


Instead of waiting to see what AI solutions to adopt, advisers should ‘try something, fail fast and learn’, says Tracy Vegro, chief executive of the Chartered Institute for Securities & Investment (CISI).

Vegro was speaking as part of the panel discussion ‘How AI can help adviser compliance’ at the Artificial Intelligence in Financial Advice 2024 conference today (10 July).

Expanding on her point, Vegro added: “I think we’ve lost the learning culture in this country, and we’ve forgotten to have good innovations and ideas.

“So many of our members in Middle Eastern countries are looking to undertake the CISI exams because they see us as the gold standard. But we need to continue to innovate if we want to remain the gold standard.”

The panel discussion was chaired by FTRC director Ian McKenna and also featured Joanne Smith, founder of the TCC Group, and the Rt Hon Stephen McPartland, Conservative MP and founder of Green Cyber Research.

Smith noted that AI can play a big role in monitoring and analysing what advisers record on behalf of clients. The introduction of the Consumer Duty requires firms to evidence that clients are getting the right advice, she said, and “AI is helping us do that faster and better”.

McPartland agreed that AI offered “huge opportunities” if managed correctly, but also warned that “it’s impossible to have an AI strategy without a cybersecurity strategy”.

He pointed out that most businesses are SMEs, and hackers who want to damage big businesses tend to target the SMEs in their supply chains.

“Of all the companies that are hacked, 50% are out of business within 12 months,” McPartland added, highlighting the importance of this for firms of all sizes.

On the subject of AI bias, all the panellists agreed on the need to challenge preconceptions, noting that neurodiverse clients or those with ADHD require personalised engagement of the kind not always identified by AI.

Smith agreed that such biases should be managed by keeping “a human in the loop” and training AI tools with sample data. The nature of AI is changing all the time, she said, and this “AI drift” requires “a clear policy that is well thought through and well documented”.

Any regulator would want to see such documentation, Smith added, noting that a fully fledged AI regulator is likely to arrive soon.

The session ended with an optimistic poll result. When asked 'How would you describe the current state of data security within your organisation?', 67% responded: 'Compliant, but want to do more.' Only 4% chose: 'It keeps me awake at night.'
