Intelligent Diligence: is AI a game-changer for due diligence?

The focus of due diligence (DD) can vary depending on the nature of the transaction:
- an early-stage venture capital investor is likely to focus on the founder team, ownership of key IP and any initial commercial traction;
- a private equity investor will want to see evidence of solid operations, a reliable revenue model and addressable steps towards significant short to mid-term growth, and will expect all legal obligations to be in good order;
- a full acquiror will be looking to fully understand the assets and liabilities they are taking on, whilst apportioning risk for any identifiable issues.
Whatever the background, DD can be a time-intensive and detail-oriented process. Depending on the age and complexity of the business in question, DD can involve obtaining, organising and reviewing hundreds, if not thousands, of documents. Even before lawyers get involved, acquirors and investors will find themselves analysing detailed financial information to understand the value in making an offer.
These are precisely the types of processes where the use of appropriate AI could create significant efficiencies.
The AI landscape
The UK is Europe’s foremost AI hub: the Tech Nation UK AI Sector Spotlight 2025 report attributes a value of around $230 billion to the UK’s AI sector as a whole, and the Government is on a mission to integrate AI at a rate that competes with major global players like the US and China.
In January 2025 the UK Government launched its AI Opportunities Action Plan, which sets out significant ambitions for the growth of AI development, use and understanding. Particularly notable is the Government’s tone-shift away from safety and risk and towards growth potential (see, for example, the BBC News coverage by Liv McMahon, Zoe Kleinman and Charlotte Edwards). The plan reflects a prevalent attitude amongst the business community towards the introduction and assimilation of AI into everyday working practices, with the aim of automating labour-intensive tasks and increasing cost-efficiency wherever possible.
It seems inevitable then that AI will (and arguably should) increasingly be utilised in all manner of time-intensive tasks, including DD processes.
How AI tools can support DD
Here are some of the key advantages AI can bring to the DD process:
- Speed
Undoubtedly, AI has the potential to speed up the DD process: rather than advisor teams spending hours reading through documents individually and extracting relevant information, AI can identify and compile relevant information from voluminous sources far more quickly for subsequent human analysis.
Imagine an acquiror is concerned with understanding the longevity of key customer relationships within a target business. An AI tool could quickly identify the top 10 customers by revenue, then check whether the contracts for those customers contain change of control provisions that could be triggered on a sale of the business. This could reduce a couple of hours of work to minutes or even seconds, reporting back in a more efficient and detailed manner than a typical “Ctrl + F” search.
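To make this concrete, here is a minimal sketch in Python of what such a screen might look like. The contract data is invented and a simple keyword pattern stands in for the semantic clause analysis a real AI tool would perform; treat it as an illustration of the workflow rather than of any particular product.

```python
import re

# Hypothetical extract of a target's customer contracts: name, annual
# revenue and the raw contract text (in practice, pulled from the VDR).
contracts = [
    {"customer": "Alpha Ltd", "revenue": 1_200_000,
     "text": "...this Agreement may be terminated upon any change of control of the Supplier..."},
    {"customer": "Beta plc", "revenue": 950_000,
     "text": "...this Agreement shall continue for an initial term of three years..."},
]

# Simple pattern standing in for the semantic analysis an LLM would do;
# a real tool would also catch clauses that use different wording.
CHANGE_OF_CONTROL = re.compile(r"change\s+of\s+control", re.IGNORECASE)

def screen_top_customers(contracts, top_n=10):
    """Rank customers by revenue and flag change of control provisions."""
    top = sorted(contracts, key=lambda c: c["revenue"], reverse=True)[:top_n]
    return [
        {"customer": c["customer"],
         "revenue": c["revenue"],
         "change_of_control": bool(CHANGE_OF_CONTROL.search(c["text"]))}
        for c in top
    ]

for row in screen_top_customers(contracts):
    print(row)
```

The interesting step for an AI tool is the pattern match: an LLM-based reviewer could flag clauses that achieve a change of control effect without using those exact words, which is precisely where a “Ctrl + F” approach falls short.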
- Data room building
A well-organised and suitably populated virtual data room (VDR) has always been key to an efficient DD process. A suitable AI tool has the potential to remove much of the pain of compiling one, by organising information as it comes in and producing an easily navigable VDR, sorted by theme, value or any other category which helps make sense of the data. This removes the need for a person to assess, file and index each document manually or via more “traditional” sorting tools.
Bespoke software already exists to ease the process of compiling and organising a VDR, but the right AI tool could conceivably manage this just as smoothly (or more so) with a lower barrier to entry.
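By way of illustration only, the sketch below shows the shape of such automated sorting, with simple keyword rules standing in for the classification an AI tool would perform; the themes, keywords and documents are all hypothetical.

```python
# A minimal sketch of theme-based VDR sorting, assuming documents arrive
# as (filename, text) pairs. Keyword rules stand in for AI classification;
# the categories and keywords are illustrative only.
THEMES = {
    "Corporate": ["articles of association", "board minutes", "share certificate"],
    "Commercial": ["master services agreement", "supply agreement", "customer"],
    "Employment": ["employment contract", "bonus scheme", "pension"],
    "IP": ["trade mark", "patent", "licence"],
}

def classify(filename: str, text: str) -> str:
    """Assign a document to the first theme whose keywords appear in it."""
    lowered = text.lower()
    for theme, keywords in THEMES.items():
        if any(keyword in lowered for keyword in keywords):
            return theme
    return "Unsorted"  # flagged for human review

incoming = [
    ("doc_001.pdf", "Board minutes approving the allotment of shares..."),
    ("doc_002.pdf", "Master services agreement between the Company and..."),
]
for name, text in incoming:
    print(f"{name} -> {classify(name, text)}")
```

An AI tool would replace the keyword rules with genuine semantic understanding of each document, but the end product is the same: documents filed by theme without a person touching each one.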
- Simplified financials
Financial advisors, corporate finance professionals and those in the VC and PE industry will already have a variety of tools and templates available to them for efficiently calculating financial metrics and KPIs. However, an AI tool could supercharge this further by aggregating all available financial information to very quickly produce EBITDA reports, burn rates, return on assets, earnings per share and any other desired assessments.
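As a simple illustration, the sketch below computes a handful of these headline metrics from an invented set of aggregated figures; an AI tool would do the aggregation itself and then run the same arithmetic at scale.

```python
# Illustrative only: the kind of headline metrics an AI tool could compute
# once it has aggregated a target's financial data. All figures are invented.
financials = {
    "revenue": 4_000_000,
    "operating_expenses": 3_100_000,        # excluding depreciation & amortisation
    "net_income": 450_000,
    "total_assets": 6_000_000,
    "shares_outstanding": 1_000_000,
    "cash_balance": 900_000,
    "monthly_net_cash_outflow": 75_000,
}

ebitda = financials["revenue"] - financials["operating_expenses"]
return_on_assets = financials["net_income"] / financials["total_assets"]
earnings_per_share = financials["net_income"] / financials["shares_outstanding"]
runway_months = financials["cash_balance"] / financials["monthly_net_cash_outflow"]

print(f"EBITDA: £{ebitda:,.0f}")
print(f"Return on assets: {return_on_assets:.1%}")
print(f"EPS: £{earnings_per_share:.2f}")
print(f"Burn rate runway: {runway_months:.0f} months")
```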
- Easily digestible analysis
AI tools can be very useful for distilling information into accessible and easily digestible bullet points and summaries (some, like Google’s NotebookLM, can even produce engaging podcasts from selected source material). Taking the example of the financial information mentioned above, this could mean not just efficient delivery of metrics but succinct and approachable explanations of why they matter, so that professional advisors spend less time explaining common technical concepts and can focus more on their application to the transaction at hand.
- Cap table automation
Despite what you might think from the inevitable “I love Excel” mug you’ll find in most office cupboards, not everyone is a fan of spreadsheets. AI could create professional-grade cap tables from natural-language requests entered via a chatbot, and could be set to monitor and continually update them based on legal documents or public filings made accessible to it. Again, there are already software tools which can automate much of cap table management, but an acquiror or investor could use this to quickly and easily run its own assessment of the “correct” shareholding structure, and the benefits would be particularly enhanced when combined with some of the other points listed in this article.
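At its core, a cap table is just a set of holdings and some percentages. The sketch below, with invented names and share counts, shows the kind of fully diluted summary such a tool could generate and keep current as new documents come in.

```python
# A minimal cap table sketch, assuming holdings have been extracted from
# legal documents or public filings. Names and numbers are hypothetical.
holdings = {
    "Founder A": 6_000_000,
    "Founder B": 3_000_000,
    "Seed investors": 2_500_000,
    "Option pool": 1_000_000,
}

total_shares = sum(holdings.values())

print(f"{'Holder':<16}{'Shares':>12}{'Fully diluted %':>18}")
for holder, shares in sorted(holdings.items(), key=lambda h: -h[1]):
    print(f"{holder:<16}{shares:>12,}{shares / total_shares:>17.2%}")
```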
- Accuracy?
With the ability to scan through a VDR and locate relevant information without the need for manual searches, AI could be used to remove human error from the DD process. However, as we note below, there appears to be some work still to be done on an AI’s ability to gather and interpret information reliably to answer specific questions.

Challenges associated with implementing AI
While introducing AI to the process can create many efficiencies and remove potential for human error, removing human input altogether is not yet something to be championed. Having considered some of the key benefits of utilising AI for a DD process, here is our view of some of the key risks and challenges of AI (as at the time of writing):
- Context and application
Taking legal advice as our obvious starting point, one of the major benefits of obtaining professional legal advice is that good lawyers know which questions to ask and how to contextualise the answers to produce commercially sensible and relevant advice. Much of the value in good legal advice comes not only from a solid understanding of the law (which should be a given!), but from applying that understanding in a commercial manner, often with elements of give and take, in order to arrive at a solution which is acceptable for all parties.
Even if AI could be trained to understand commerciality and market norms, some parties to transactions can act based on their emotions, rather than pure commercial drivers. There isn’t a definitive rulebook on human emotion and empathy, so AI likely does not have the source material or capacity to account for this.
The key underlying point here is that, despite the impressive and approachable responses it is capable of producing, AI is not sentient and is not capable of truly understanding what it generates.
- Interpretive inaccuracy
AI doesn’t always produce an accurate or comprehensive answer. Many people are now aware of the possibility of “hallucinations” appearing in LLM responses; the phenomenon is so prevalent that “hallucinate”, in this sense, was Cambridge Dictionary’s Word of the Year for 2023 (see the announcement from Cambridge University Press & Assessment). Hallucinations often arise because the model predicts the most suitable next word in its output probabilistically, based on the words that came before. We’ve all tried forming messages on our smartphones using predictive suggestions, which often end up as a nonsensical string of words that almost, but not quite, sounds like a coherent sentence. A key concern with AI is that it tends to present its responses with complete confidence and little to no acknowledgement of its scope to be wrong.
It can be difficult to discern where the AI has gone wrong if the evidence appears sound and you don’t already know what answer to expect.
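To make that prediction mechanism concrete, the toy sketch below builds the crudest possible “language model” (word-pair frequencies from a few sentences) and generates fluent-looking text with no regard for truth. Real LLMs are vastly more sophisticated, but the underlying probabilistic next-word principle is the same.

```python
import random
from collections import defaultdict

# Toy bigram "language model": predicts each next word purely from the
# frequency of word pairs in its training text. Illustrative only.
training_text = (
    "the contract contains a change of control clause "
    "the contract contains a termination clause "
    "the agreement contains a change of control provision"
)

# Count which words follow which.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def generate(start, length=8):
    """Generate text by repeatedly sampling a likely next word."""
    output = [start]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("the"))
```

Each run produces a fluent-sounding sentence, but the model has no concept of whether what it says is true: the confident hallucination problem in miniature.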
- Quality
Looking at the Shakespearean case study from our colleague Imogen, set out below, the answer that ChatGPT gave us is evidently insufficient; it does not go into any detail about the nature of Shakespeare’s authorship and does not appear to have considered conflicting views at all. Indeed, Imogen was very keen to tell us that although the chatbot confidently stated Shakespeare wrote 39 plays, only 37 are generally agreed to have been written by him, and opinion varies on his contribution to the remaining two.
Evidently AI is currently unable to replicate the level of detail and analysis required for certain tasks when it attempts to produce a slick and easily digestible answer to a multifaceted problem. This could be particularly problematic in the context of legal advice, where accuracy and the quality of advice is paramount.
- Poor record keeping
The output of an AI tool is often only as good as the information fed into it. If a business has been poor at keeping written records and ensuring its information is up to date, it would be difficult for AI to produce accurate and actionable insights based on that information. A human advisor should be able to spot and question any material weaknesses in the source material (e.g. inaccurate share capital records, or failure to follow generally accepted accounting principles), allowing this to be addressed before analysis and output is completed.
- Hidden bias
Text-to-image generative AI has been shown to reproduce biases present in its training data (Leonardo Nicoletti and Dina Bass have produced a really interesting article on this for Bloomberg UK). Could this mean that AI tools used for DD produce results based on what they expect the answer to be, propagating the biases within their training materials? If so, how do we police that bias?
- Security
There are clear risks associated with asking a third-party entity (i.e. an AI tool) to process information. A typical VDR will include a wealth of sensitive information pertaining to a business and its clients / customers, and a data breach or leak could have material implications. Insurance products can help to manage the financial impact of these risks, but no business is going to be fully comforted by insurance if their most commercially sensitive information is leaked to competitors.

How accurate is AI? – a case study
Imogen King is a Corporate Paralegal in our Bristol office and holds a degree in English language & literature from Oxford University (the relevance of this will become clear below).
To gain a more general understanding of the quality of an LLM’s responses, we asked Imogen to test ChatGPT on a topic of her choosing. Here is Imogen’s report:
I selected a non-legal topic that I knew would be widely discussed on the internet and asked the chatbot to “Tell me something interesting about Shakespeare.” It replied with the following: “Shakespeare was a prolific playwright: he wrote 39 plays, 154 sonnets, and two long narrative poems.”
This piqued my interest, because while Shakespeare did indeed write two long narrative poems, he also wrote a third and fourth.

Perhaps the AI restricted its response due to the length of the poems, so I then asked the chatbot “What narrative poems did Shakespeare write?” assuming that it would use its powers of data-scraping to produce a more detailed and accurate answer. Instead, on its second try, ChatGPT acknowledged the two major works referenced in the first answer and included a nod to The Phoenix and the Turtle, which is arguably the most obscure work to be loosely attributed to the Bard.
What ChatGPT has somehow missed is the narrative poem A Lover’s Complaint, which was published in the 1609 quarto of Shakespeare’s sonnets and uncontroversially attributed to him. My follow-up question was then: “Is that all of them?” It replied: “Yes, those are the only three narrative poems definitively attributed to Shakespeare […] nothing else is accepted by scholars as a Shakespearean narrative poem.” This would have been the case in the 1970s, but scholarship has progressed since then: A Lover’s Complaint is widely attributed to Shakespeare in academia and online (including in non-scholarly sources such as Wikipedia). So how did ChatGPT miss it? My concluding comment to the chatbot was: “What about A Lover’s Complaint?” to which it replied in an oddly humanoid tone of voice: “Ah, great catch – yes, A Lover’s Complaint is indeed another narrative poem by Shakespeare!”
Concluding Thoughts
If we are to fully trust AI to provide important legal advice and analysis of DD materials, we need to be able to trust that the AI we propose to replace human intelligence with is equipped with the tools needed to judge and respond to nuance.
This is where we encounter the “Black Box Problem,” wherein we are unable to understand how the technology works, either due to production secrecy or because, according to some sources, developers themselves don’t understand how an AI found a solution (see the explanation of “Black box” by Cecily Mauran, via Mashable). The lack of transparency and the concerns around accuracy and quality noted above would surely be a significant roadblock to investing in certain AI resources in an industry as heavily regulated as the legal industry.
Our case study interaction poses obvious issues in the legal landscape where we might try to rely on AI to answer equally simple questions on even smaller data sets, like VDRs. The phenomenon of AI “hallucination” is especially risky when the information provided by the AI could be used to form the basis of, for example, warranties in an M&A context. Can we truly trust that the AI has interpreted the available information in the correct way, or that the underlying information itself was sound?
There are some emerging AI tools, like Microsoft’s Copilot, that openly use law-specific training data with the aim of producing legally sound responses. This is a promising move away from more generic chatbots and a step towards streamlining processes like DD, thanks to the availability of AI that is more adept at parsing legalese and responding to legal problems.
It is worth noting that AI is under constant development: with $1.03 billion of investment in UK AI startups in the first quarter of 2025 (via Tech Nation), not to mention wider international development, we can anticipate significant improvements to its overall accuracy and reliability. That said, many AI tools are still in their early stages and relatively untested over the long term, meaning a dose of scepticism could go far.
Overall, despite risks and concerns, there are clear benefits to integrating appropriate AI into the DD process in a carefully managed manner. With the UK AI industry experiencing a period of rapid growth, we expect to see more demand from clients for their lawyers to make use of the tools available and therefore a greater response to the need to improve AI tools’ ability to deal with complex legal issues.
In any event, it appears unlikely that AI will ever completely remove the need for experienced and emotionally intelligent lawyers to do their job, but AI is absolutely a tool which lawyers should look forward to utilising in order to improve and supplement the service we provide to our clients.