AI in the Boardroom – 6 ways the rise of AI could impact the future of D&O insurance
- Sam Cornelius
- Aug 4
- 11 min read
There has been a significant amount of commentary in the insurance and wider press around the use of Artificial Intelligence (AI) in company boardrooms and, more generally, how AI is shaping the future of business as we understand it.
Naturally, this has led practitioners in the Directors and Officers (D&O) space to question what this could mean for them. What are the potential implications for policy coverage and portfolio performance? What exposures should underwriters be considering when reviewing applications involving AI use? How should brokers address this new wave of risk?
Below, I have outlined six key considerations covering what I believe could be the potential impacts of the rise of AI on the D&O market, with a particular focus on the UK.
1. Increasing reliance on AI tools for governance and corporate decision making
According to the Institute of Directors, “nearly two thirds of directors now personally use AI tools to aid their work”[i]. The advantage of AI is immediately apparent, especially for smaller companies that may lack access to the full range of in-house or on-demand professional support and advice available to larger entities.
AI has a number of potential uses in the boardroom[ii], including:
- Data analysis and reporting
- Review of legal documents[iii] and business contracts (including business insurance documents)
- Minuting and documentation of board meetings
- Review of existing or proposed processes for potential compliance issues or operational inefficiencies
However, the use of AI in these scenarios raises some significant questions about liability in the event of errors. After all, AI is not infallible – it can, and often does, make mistakes.
The Companies Act 2006 places a fiduciary duty on directors to act in the best interests of the company. If AI tools are used to make or support key decisions, the question arises: who is accountable if the AI makes a mistake? Could directors be held liable for relying on flawed outputs or insufficient due diligence when deploying AI systems?
This is not a straightforward question to answer, given that the UK government has adopted a decentralised approach to AI regulation[iv], leaving the task to the various sector-specific regulators. The Financial Conduct Authority (FCA), for example, has indicated that it expects firms to have robust governance arrangements, including effective board oversight of a firm’s deployment and use of AI tools. Failure to demonstrate adequate governance could lead to breaches of the FCA’s Principles for Businesses and subsequent enforcement action, particularly if this presents a potential or actual risk of consumer harm[v].
This means underwriters and brokers will need to understand which regulators are in play for any given risk – and what their stance, enforcement powers, and policies are towards AI deployment and governance.
This requirement for effective oversight may also lead to the introduction of new corporate leadership positions focused on AI governance[vi]. D&O underwriters and brokers will need to ensure that policies are capable of capturing these individuals within the scope of cover and responding to subsequent actions brought against them.
2. AI Washing
Put simply, AI washing is the practice of representing that a company utilises, develops, or is otherwise significantly “powered” by AI when in reality this is not the case. The concept of corporate “washing” is not a new phenomenon – the D&O market, for example, has been grappling with “greenwashing” for some time now. What we are seeing is simply an evolution: businesses jumping on the next big buzzword in an attempt to raise money and gain customers through corporate puffery.
This phenomenon has been taking place for several years already in the US. In a 2024 review of disclosures to the US Securities and Exchange Commission (SEC), just over 40% of business filings mentioned AI. This was a significant increase from a 2018 review, which found AI mentioned “only sporadically”[vii].
The story of Builder.ai is a fantastic, and perhaps extreme, example of this practice. Builder.ai was a London-based AI company which attracted over $500m in funding and achieved a valuation of $1.5bn at its latest fundraising round[viii] on the promise of an advanced code-writing AI that turned out, in reality, to be a software farm employing 700 human developers in India[ix].
As a practice, AI washing opens a company up to a number of exposures. In the US, there has been a growing number of court actions, such as those brought against Innodata Inc. in New Jersey after investors alleged its stock price dropped more than 30% following publication of a report claiming its artificial intelligence technology was “smoke and mirrors”[x].
Whilst the class action market is more developed in the US, mechanisms do exist in the UK to facilitate similar potential actions under the Financial Services and Markets Act 2000[xi].
Similarly, regulators have powers to bring actions against companies and directors for making false or misleading communications to the market.
Underwriters must therefore treat with a pinch of salt companies presenting extreme valuations or promising AI-powered infrastructure and solutions which seem a little too good to be true. Efforts should be made to verify business output and to understand, in as much detail as possible, the products these companies purport to sell or develop.
3. AI Bubble
Leading nicely on from AI washing is the potential for, or perhaps existence of, an “AI bubble”. For years now, comparisons have been drawn between the current market scramble to invest in AI-powered businesses and the “dotcom bubble” of the late 1990s and early 2000s. We have already seen above, through Builder.ai – which has now filed for insolvency in the UK[xii] – how “AI” companies can attract unicorn valuations on the back of little more than promise and thinly disguised proofs of concept.
AI is also an industry reliant on infrastructure and manufacturing that is potentially struggling to keep pace with demand. Nvidia has skyrocketed to become one of the world’s most valuable companies off the back of demand for the microchips used in, amongst other things, AI technologies. Overall, it is estimated that the AI industry needs around $900 billion in additional investment by 2028 to meet the $2.9 trillion spending need for the data centres that facilitate this technology. To contextualise this, the entire capital expenditure of all S&P 500 companies in 2024 was around $950 billion[xiii].
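For a back-of-envelope sense of scale, the sketch below compares these figures directly. This is a rough illustration only – the variable names and rounding are mine, and the values are the estimates cited in note [xiii]:

```python
# Back-of-envelope comparison of the scale figures cited above.
# All values are the rounded estimates referenced in note xiii (USD).
data_centre_spend_need = 2.9e12  # estimated data-centre spend required through 2028
additional_investment = 0.9e12   # further investment still needed by 2028
sp500_capex_2024 = 0.95e12       # total S&P 500 capital expenditure in 2024

# The outstanding investment alone approaches a full year of S&P 500 capex.
print(f"{additional_investment / sp500_capex_2024:.0%}")  # -> 95%
```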
Without this investment, the promise of AI realising its potential remains distant – and with AI companies notorious for long pre-profit runways (i.e. being cash-burners), it is unlikely that many will survive long enough to see their concepts realised, let alone turn a profit.
The tendency to place significant valuations on AI companies, even non-public entities, is also of concern for underwriters, who must carefully review capital expenditure, cash burn, future investment needs, and investor sentiment when considering risks at the pre-revenue stage.
Underwriters should be prepared to undertake greater financial due diligence of risks, and brokers must be prepared for these questions and support insureds in outlining how their business model is forecast and governed from a financial perspective.
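To illustrate the kind of check involved, below is a minimal, hypothetical cash-runway calculation of the sort an underwriter might run on a pre-revenue AI company. Every name and figure here is an illustrative assumption, not a prescribed underwriting method:

```python
# Hypothetical cash-runway check for a pre-revenue AI company.
# Every figure here is an illustrative assumption (USD).
cash_on_hand = 40_000_000        # cash currently on the balance sheet
committed_funding = 10_000_000   # funding secured but not yet drawn down
monthly_burn = 2_500_000         # average monthly operating spend

# Months of operation before the company must raise again.
runway_months = (cash_on_hand + committed_funding) / monthly_burn
print(f"Runway: {runway_months:.0f} months")  # -> Runway: 20 months
```

A short runway relative to the next funding milestone is precisely the kind of signal that should prompt further questions about forecasting and financial governance.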
4. Impact on court proceedings
Although slightly tangential, there is already evidence of AI impacting the running of court proceedings – which in some cases has served to negatively impact timescales and increase associated costs. This is particularly prevalent thanks to the problem of AI “hallucinations” – essentially, when an AI confidently makes things up.
Litigants in person have increasingly been turning to AI as a substitute for professional advice and representation. This, however, has not always resulted in beneficial outcomes, and there is significant potential impact on D&O claims as a result – especially in more routine matters such as employment tribunal disputes. In the recent case of Ms M Wright v SFE Chetwode Limited & Ms K Winter, Ms Wright, representing herself, admitted to using ChatGPT to help draft her statements and submissions. This led Judge Atkinson to remark: “I am left with strong feeling that Ms Wright is pursuing a claim she does not understand and cannot personally justify…”[xiv]. As a result, all but one claim was struck out, and the remaining claim was only allowed to proceed subject to a deposit order.
Other recent examples include the matter of HMRC v Marc Gunnarsson, where the submissions presented by Mr Gunnarsson were found to have been drafted by AI and referenced three First-tier Tribunal (FTT) cases which simply did not exist[xv].
The use of AI in drafting and presenting court submissions is not restricted to litigants in person either. The High Court has recently delivered judgment on two separate matters in which practising solicitors were found or suspected to have used AI in their submissions. Amazingly, one matter involved an £89m claim against the Qatar National Bank, in which the “claimants made 45 case-law citations, 18 of which turned out to be fictitious, with quotes in many of the others also bogus. The claimant admitted using publicly available AI tools and his solicitor accepted he cited the sham authorities.”[xvi]
Insurers must be mindful not only that they may face an increase in claims supported by AI “lawyers”, but must also ensure their own selected counsel maintains the highest levels of scrutiny and oversight of its solicitors, so they do not find themselves relying on entirely made-up case law.
5. AI bias and discrimination in corporate practices
The most well-known AI is probably ChatGPT, which is what is known as a Large Language Model (LLM). An LLM, according to Cloudflare, is simply a “computer program that has been fed enough examples to be able to recognize and interpret human language or other types of complex data.”[xvii]
This is all very clever, but it ultimately means that these systems are trained on existing data – and that means they suffer from some of the same issues humans struggle with, one of these being discrimination and bias. If a model is trained on biased data, regardless of whether that bias is conscious or not, then the LLM may adopt a similar bias when interpreting new data.
This leads to the potential for an AI powered tool to be discriminatory.
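As a toy illustration of the mechanism, the sketch below trains a simple classifier on synthetic, deliberately skewed “historic hiring” data. The features, figures, and outcomes are entirely hypothetical, but the effect – the model penalising a protected attribute because the training data did – is the general point:

```python
# Toy illustration: a model trained on biased historic data reproduces
# that bias. All data below is synthetic and deliberately skewed.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [years_of_experience, protected_attribute_flag].
# Historic labels reflect past bias: equally experienced candidates
# with the flag set were hired less often.
X = np.array([[4, 0], [5, 0], [6, 0], [7, 0],
              [4, 1], [5, 1], [6, 1], [7, 1]])
y = np.array([1, 1, 1, 1, 0, 0, 1, 0])  # 1 = hired (biased outcomes)

model = LogisticRegression().fit(X, y)

# Two candidates identical in every respect except the protected attribute:
for flag in (0, 1):
    prob = model.predict_proba([[6, flag]])[0, 1]
    print(f"flag={flag}: predicted hire probability {prob:.2f}")
# The flag=1 candidate scores lower purely because of the flag.
```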
We have already seen examples of these practices being challenged. In the matter of Manjang v Uber Eats, Mr Manjang, a Black driver working for Uber Eats, was required to use AI-powered facial recognition software to register for jobs. This software, he claimed, was racially biased, causing him to be asked to verify his identity more frequently than other users and eventually leading to his account being deactivated.
Although the case was settled out of court, and so we might never know if the system was biased, this is a good example of where claims may arise out of AI bias and discrimination.
Another strong avenue for consideration is the use of AI in hiring practices. A study in February 2022 found that 79% of employers use some form of AI or automation in the hiring process[xviii]. This has caused issues even for giants like Amazon, which in 2018 scrapped an AI recruiting tool because it showed bias against women[xix].
Underwriters therefore need to be mindful, when looking at companies – even those not focused on the AI space – of the use of AI in their business practices. Where such use is identified, underwriters should require a firm understanding of the governance around these tools and of how their output is checked and verified by human eyes.
6. AI energy consumption and climate commitments
According to a report published by the University of Cambridge, “The idea that governments such as the UK can become leaders in AI while simultaneously meeting their net zero targets amounts to ‘magical thinking at the highest levels’”[xx].
The same is potentially true for private businesses. There has been a significant focus on business climate commitments over the past decade. According to the Net Zero Tracker, as of August 2025, 60% of the 2,000 largest companies have proposed, progressed, or adopted net zero emission targets as part of their corporate strategy[xxi]. This includes Alphabet (the parent company of Google) and Meta (the parent company of Facebook and Instagram). Yet these are also two of the largest adopters and innovators of AI on the planet.
AI, according to the same Cambridge study, could drive a “25-fold increase in the global tech sector’s energy use”[xxii]. Google itself admitted in a 2024 environmental report that “As we further integrate AI into our products, reducing emissions may be challenging due to increasing energy demands from the greater intensity of AI compute, and the emissions associated with the expected increases in our technical infrastructure investment.”[xxiii]
We have already seen actions brought against companies such as Shell and Santos for supposed failures to meet net zero targets. Could AI be the next frontier in developing climate legislation and case law as well?
Underwriters will need to carefully consider any company with lofty climate and carbon goals which, at the same time, is purporting to invest significantly in AI technology, and seek to understand the insured’s ability to marry these two apparently conflicting goals.
Conclusion
Clearly, AI will change the way we live. The future of work, health, and even art – everything is currently being reviewed through an AI lens, and insurance is no different. This article has barely scratched the surface of the potential impact of AI on D&O insurance, but one thing is clear: markets need to think about adapting to this “new normal”.
Underwriters need to consider in much greater detail the ramifications of AI on their claims and pricing decisions, and wordings need to be reworked to ensure cover is clear and reflective of the policy’s intention. There is a long road ahead, but any market not already working on this will be left behind. Innovation has always outpaced insurance, and AI has the potential to evolve faster than anything we have seen before.
Disclaimer
This text does not contain, does not constitute, and should not be viewed as legal or professional advice. The information provided herein is for informational purposes only, and whilst every effort has been made to ensure it is correct, the author accepts no liability for any errors, misinterpretations, or omissions of fact. You should seek qualified legal or professional advice on all matters pertaining to your insurance.
The views expressed by the author are their own, and do not reflect their current or former employers’ views in any way.
References
[i] Institute of Directors Business Paper “AI Governance in the Boardroom” Page 6, published June 2025. Accessible via https://www.iod.com/resources/business-advice/ai-governance-in-the-boardroom/
[ii] See also this list published by Liberty Mutual on the “promise and potential benefits of AI”. Arlene Levitin, Esq., “Artificial intelligence and potential D&O risk” Liberty Mutual. Published January 2025. Accessible via https://business.libertymutual.com/insights/artificial-intelligence-and-potential-do-risk/
[iii] See, for example, the implementation of AI at JP Morgan. Ahmed Raza, “How JPMorgan Uses AI to Save 360,000 Legal Hours a Year” Medium. Published May 2025. Accessible via https://medium.com/@arahmedraza/how-jpmorgan-uses-ai-to-save-360-000-legal-hours-a-year-6e94d58a557b
[iv] Ibid i, page 8.
[v] Hannah Meakin & Rebecca Dulieu, “AI Regulation in Financial Services: FCA Developments and Emerging Enforcement Risks” Norton Rose Fulbright. Published July 2025. Accessible via https://www.regulationtomorrow.com/eu/ai-regulation-in-financial-services-fca-developments-and-emerging-enforcement-risks/
[vi] Anthony Rapa, “A perfect fit: Generative artificial intelligence & corporate insurance” WTW. Published July 2024. Accessible via https://www.wtwco.com/en-gb/insights/2024/07/a-perfect-fit-generative-artificial-intelligence-and-corporate-insurance
[vii] Matthew Bultman, “AI Disclosures to SEC Jump as Agency Warns of Misleading Claims” Bloomberg Law. Published February 2024. Accessible via https://news.bloomberglaw.com/securities-law/ai-disclosures-to-sec-jump-as-agency-warns-of-misleading-claims
[viii] Matthew Broersma, “Builder.ai Collapsed After Finding Sales ‘Inflated By 300 Percent’” Silicon. Published May 2025. Accessible via https://www.silicon.co.uk/cloud/ai/builder-ai-sales-collapse-615436
[ix] David Braue, “The company whose ‘AI’ was actually 700 humans in India” Information Age. Published June 2025. Accessible via https://ia.acs.org.au/article/2025/the-company-whose--ai--was-actually-700-humans-in-india.html
[x] Bracewell, “Innodata Suit Highlights ‘AI Washing’ Liability Risk for Cos.” Law360. Published March 2024. Accessible via https://www.bracewell.com/resources/innodata-suit-highlights-ai-washing-liability-risk-cos/
[xi] See this article by DACB, which provides a good overview of AI washing and potential action routes under FSMA. Sarah Davies & William Naylor, “Is AI washing the next big risk for D&O Insurers?” DAC Beachcroft. Published June 2025. Accessible via https://www.dacbeachcroft.com/en/What-we-think/Is-AI-washing-the-next-big-risk-for-directors-and-officers-insurers
[xii] Alexandra Heal and Robert Smith, “Microsoft-backed UK tech unicorn Builder.ai collapses into insolvency”, Financial Times. Published May 2025. Accessible via https://www.ft.com/content/9fdb4e2b-93ea-436d-92e5-fa76ee786caa
[xiii] Jamie McGeever, “Is today's AI boom bigger than the dotcom bubble?”, Reuters. Published July 2025. Accessible via https://www.reuters.com/markets/europe/is-todays-ai-boom-bigger-than-dotcom-bubble-2025-07-22/
[xiv] Kaine Davey, “AI in Employment Tribunals”, afterathena. Published July 2025. Accessible via https://afterathena.co.uk/ai-in-the-employment-tribunal/
[xv] “Misuse of AI in SEISS Upper Tribunal appeal”, Rossmartin.co.uk. Published July 2025. Accessible via https://www.rossmartin.co.uk/sme-tax-news/8563-misuse-of-ai-in-seiss-upper-tribunal-appeal
[xvi] Robert Booth, “High court tells UK lawyers to stop misuse of AI after fake case-law citations” The Guardian. Published June 2025. Accessible via https://www.theguardian.com/technology/2025/jun/06/high-court-tells-uk-lawyers-to-urgently-stop-misuse-of-ai-in-legal-work
[xvii] Cloudflare, “What is a Large Language Model (LLM)”, Cloudflare. Published date unknown. Accessible via https://www.cloudflare.com/en-gb/learning/ai/what-is-large-language-model/
[xviii] Gary D. Friedman
[xix] Jeffrey Dastin, “Insight - Amazon scraps secret AI recruiting tool that showed bias against women”, Reuters. Published October 2018. Accessible via https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G/
[xx] Bhargav Srinivasa Desikan and Professor Gina Neff, “Big Tech’s Climate Performance and Policy Implications for the UK” University of Cambridge. Published July 2025. Accessible via https://www.cam.ac.uk/research/news/banking-on-ai-risks-derailing-net-zero-goals-report-on-energy-costs-of-big-tech
[xxi] Net Zero Tracker, data as of August 2025. Accessible via https://zerotracker.net/
[xxii] Ibid xx.
[xxiii] Dashveenjit Kaur, “Google’s dilemma: AI expansion vs achieving climate goals”, AINews. Published July 2024. Accessible via https://www.artificialintelligence-news.com/news/google-dilemma-ai-expansion-vs-achieving-climate-goals/


