Generative AI - risk versus reward

Jack Scott-Bowden
Advised Fintech Senior Account Manager
18 August 2023
4 minute read

It seems that every week, scrolling through the news or LinkedIn, we see more generative artificial intelligence (AI) tools being launched or ‘groundbreaking’ AI partnerships being forged.

Artificial intelligence has seen its fair share of doomsday press of late, but there is no doubt that this technology will be transformative for businesses – and life itself – as it progresses. But we should take into account the risks this developing technology poses, and take appropriate measures to mitigate them.

Businesses are rapidly embracing generative AI's incredible potential. Google Cloud, for example, is making a significant push into generative AI with a slew of new tools and services, and is expanding its partnerships with companies like DataStax, Neo4j, Twilio and Typeface. As recently as August 2023, UK fintech Bud developed a generative AI core and chatbot for personalised customer insights using Google's large language model (LLM).

But there are valid concerns within the cybersecurity community. Lightning-fast evolution and adoption are outpacing the development of crucial regulation and guidelines, leaving companies vulnerable to security threats and data privacy risks.

So, what's at risk?

As technology integrates, new challenges emerge for founders and their companies, from data breaches to technology liability.

Insurance forms the backbone of a company’s security when it comes to these risks. Yes, you should have cyber hygiene, firewalls and company procedures, but a tailored insurance policy can provide an extra level of protection against AI-related exposures, such as:

1. Data breaches: AI's insatiable appetite for diverse data can potentially expose businesses to higher liability costs, especially when sensitive information like financial data and biometrics is compromised – as seen in fintech and healthcare AI systems. In recent years, data breaches have also translated into increased Directors’ and Officers’ (D&O) litigation.

If regulatory bodies uncover wrongful conduct on the part of company management, legal action can be taken against both the company and its executives.

2. Technology liability: Glitches, training data issues, or model drift can lead to unexpected outputs, which could leave innovators facing issues – especially in high-stakes industries like healthcare, transportation, or finance.

3. Content accuracy: Let's not forget what generative AI can produce. Inaccuracies and biases in the large language models used by chatbots can be deceptively credible but ultimately wrong, posing potential risks if relied upon for key decisions.

4. Supply chain management: Third-party integrations matter. Technology businesses often depend on suppliers for critical components of their AI services, making them potentially liable for supply chain issues.

In 2020, tech company SolarWinds suffered a major cyberattack by suspected government-backed Russian hackers – but it was done so stealthily, it took months for the plot to be uncovered. The attack led to numerous data breaches at organisations where SolarWinds was a third-party partner, and reached up to 18,000 organisations globally, including Fortune 500 companies and the US federal government.

To this day, it’s still unknown how destructive the attack on SolarWinds was, but it prompted both the US government and industry leaders to reassess the level of scrutiny they apply to cybersecurity across supply chains. Threat actors will ultimately target a single point of failure in a supply chain.

5. Cyber crime: While the full impact of AI-enabled malware has yet to materialise, cybercriminals are already using AI to develop sophisticated campaigns. A new dark web threat, WormGPT, harnesses LLM capabilities for phishing.

Realistic, compelling emails lure users into clicking malicious links or downloading malware, expanding the attack surface for cybercriminals. Although sophisticated hackers and AI-fuelled cyberattacks tend to hijack the headlines, one thing is clear: the biggest threat is human error, accounting for over 80% of incidents.

So many businesses, from scale-ups to multinational corporations, are facing the same difficulties surrounding how to manage and control the secure use of AI. In May this year, Samsung workers leaked proprietary information to ChatGPT, inadvertently exposing their source code. This led Samsung to ban ChatGPT outright.

Chief Information Security Officers (CISOs) and Risk Managers face a challenge: craft a risk management framework with a security policy that supports generative AI adoption without stifling innovation. Easier said than done, right?

Where does insurance fit in?

Risk management is the process of identifying, assessing and controlling financial, legal, strategic and security risks to an organisation’s capital and earnings. These threats, or risks, could stem from a wide variety of sources, including financial uncertainty, legal liabilities, strategic management errors, accidents and natural disasters.

Murphy’s Law is one to remember here: if something can go wrong, eventually it will. If an unforeseen event catches your organisation unaware, the impact could be minor, such as a small increase in your overhead costs. In a worst-case scenario, though, it could have serious ramifications.

A tailored technology insurance policy is only one piece of the puzzle; pairing coverage with robust risk management strategies helps ensure proprietary information isn’t leaked and minimises potential missteps.

New risks call for a new form of insurance

Traditionally, business insurance has been taken up to protect policyholders after an incident has occurred. To protect businesses utilising generative AI, it’s more important than ever that insurers offer continuous risk monitoring and mitigation.

Working alongside a digital insurer with expert tech brokers who understand the market is paramount. At Superscript, we've teamed up with specialist cyber and executive risk insurer Coalition, which pioneered a new form of insurance called Active Insurance, allowing us to offer this proactive cover to our clients.

Unlike traditional insurance, Active Insurance brings together three essential capabilities:

  • Real-time risk assessment
  • Ongoing monitoring and protection, and
  • Rapid response to provide support before, during, and after an incident occurs.

To benefit from Active Insurance handled by tech-first brokers, reach out to us and let's ensure your business stays well-protected throughout your innovation journey.

This content has been created for general information purposes and should not be taken as formal advice. Read our full disclaimer.
