Could AI replace lawyers? – and other implications of AI chatbots | Legal Thinking Podcast
You can also listen to our podcast on your podcast platform of choice.
Welcome to this episode. In today’s episode, we are talking about AI and the law with the head of our Tech sector team, Carl Selby.
I assume you've had a chance to listen to our recording from the previous episode, Carl. What did you think of that?
Yeah, I have had a chance to listen to it. It's both interesting and… well, slightly scary, in the sense that a lot of what the AI is putting out is very accurate, albeit at a very high level. So it couldn't really be used for giving legal advice or the other good stuff we might do, but if you wanted a basic overview of the legal implications of AI-generated content as things stand at the moment, it's actually not a bad starting point. Many colleagues would say I've been telling them for years that I'm just hoping my career outlasts AI before it starts doing all the legal work for us, but having seen this I suspect it's a bit further along than I thought, and I might have to revise my opinion downwards and make all my money in the next decade or so to avoid being overtaken by the robots!
Yeah, well in the meantime you can tell us a bit more maybe about what you feel like the implications… because obviously with the ChatGPT export we focused a little bit on the kind of intellectual property aspects of it and copyright and things like that. Do you think that covered everything?
Do you think there's anything more you'd want to add - maybe not along the specific legal lines, but just thoughts you have on ChatGPT and how it might impact people's intellectual property? Because I saw something the other day saying, well, if Google's just going to scrape all our content and then serve it up without anyone ever leaving Google, then surely people are just going to start blocking its bot. Is that a fear for people who might want to use AI - that people just get fed up of it scraping and stealing their content and serving it out somewhere else? Is that something you've seen?
I haven't seen that as a particular issue yet, but I suspect it's only a matter of time, because if you look at how a lot of content sites on the internet work, the quid pro quo for a search engine is this: yes, you'll appear in the search results as a result of having your SEO well prepared and putting up good-quality content that people want to read, and you gain a reputation from doing that, but it's all to generate traffic to your website. Your payment then comes when people click through and either buy a product or service from you or go onto your site to read, in particular, journalistic or news articles, with the adverts served alongside paying for that content. So if Google were to replace its current search engine with something like what Microsoft launched on Bing last week, that would cut that element of revenue generation out of the loop for the people developing the sites - especially news organisations, as I say.
So, yeah, there's definitely going to be a conflict there in the future because if you're just getting a summary of all the information you need to know about a particular search that you've done without actually having to go onto the website that's created that content, well how's that going to be paid for? And you've obviously already had issues in places like Australia where they are talking about Google having to pay for using, in particular, media organisations' websites so there's a bit more of a quid pro quo in that relationship rather than Google being the kind of overriding master that controls the internet.
So that's bound to happen and, as things stand, there isn't any specific law, certainly in England and Wales, that deals with how AI scraping websites for content is regulated. There is talk from the Government of actually making it easier for people to do that, because they see a longer-term economic benefit in having new AI tools developed that can learn from all the information already on the internet. But we'll have to see the details of that; at the moment it's out for consultation, and they'll obviously be taking views from the likes of Google, Microsoft and OpenAI and the other big players in the market, but also from those at the other end of the chain who would want to see it done in a slightly different way. There'll have to be some sort of economic bargain struck there, so that people aren't creating good content just for an AI bot to steal it and use it while they make no money out of it.
And I think it's a really interesting point that you make, especially around the ruling between News UK and Google in Australia because I think it's important to remember that this isn't necessarily a new point.
I remember reading that the New York Times was looking into taking their content off Google in around 2010, because they were essentially serving up all their content to Google for free. But, as you point out, Carl, the exchange is that you then become searchable - and with news organisations especially, it needs to play to their strategy and it needs to benefit them. So opting out of Facebook's Open Graph, I believe it was called, and the Google index isn't necessarily something new.
But anyway, Carl, if you'd like to outline what you think some of the practical uses for AI and ChatGPT might be. You mentioned it hasn't quite got to the point of replacing what you do as a lawyer, so in your opinion, what is the use right now? Now it's launched on Bing and ChatGPT is open to the public, what are you seeing people use it for?
So, I think the biggest use, certainly from a ChatGPT point of view, is creating social media memes at the moment! But beyond that there is a useful service it can provide in doing low-level drafting work, and there's probably a lot of content creation going on with it at the moment. I talked about the media sites, and there are obviously different levels of journalistic output - some of it is purely clickbait designed to make money by getting people to click links. My understanding is that AI in general is now being used quite extensively to create your run-of-the-mill type of article: nothing that requires a lot of investigation or a lot of input from a human. If you want to do a piece every morning reporting where the FTSE stands, you don't get someone to look that up any longer; you just get the AI to look at the FTSE website, report it… and put some fluff around it...
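That "report the number and put some fluff around it" workflow Carl describes can be sketched, very crudely and without any AI at all, in a few lines of Python. The function name and figures here are invented purely for illustration:

```python
def ftse_blurb(close: float, change_pct: float) -> str:
    """Crude template for a daily market round-up sentence.

    A non-AI stand-in for the run-of-the-mill articles described above -
    a real pipeline would fetch the figure and hand it to a language model."""
    direction = "up" if change_pct >= 0 else "down"
    return (
        f"The FTSE 100 closed at {close:,.2f}, "
        f"{direction} {abs(change_pct):.2f}% on the day."
    )

print(ftse_blurb(7900.50, 0.42))
# → The FTSE 100 closed at 7,900.50, up 0.42% on the day.
```

The point is only the shape of the task: a machine-readable number in, boilerplate prose out, with no human in the loop.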
I know that CNET, for example, has already started producing AI articles but then not declaring which ones are written by a human and which ones are written by the AI bot.
Yeah, that's right. Don't they attribute it to the editorial team when it's coming from the AI, rather than to the individual journalist who might have written it? I think I read that somewhere - there's a hint that if it says it's from the editorial team, it may well be at least partly AI-generated in the background. I'm sure…
Then edited by them, yeah.
Yeah, I was going to say they're still doing some editing in the background, I'm sure, because it hasn't reached the level of sophistication yet, I wouldn't have thought, where you'd want to put that up without someone casting an eye over it, for fear of effectively giving out false information. So that's certainly one use. There's obviously also children cheating at their homework by getting it to write essays, which I suspect will be a fad until the tools get a bit more sophisticated but…
It passed an MBA exam in America.
Yeah, I was going to say it did pass an MBA exam, so it can certainly generate good enough material to get a reasonable grade without too much difficulty. But in a business context there are also AI applications now that are writing code. GitHub, I think, have launched a platform where you can write your code and get AI to fill in the gaps or add things to it, and that's being used quite heavily by developers at the moment for some of the more run-of-the-mill coding tasks they might need to do in a particular project. And the other context I've heard of it being used in is research, where you want to take a large amount of information and either summarise it or draw out key themes. So rather than, say, write a research paper, it might just suggest some topics or headings you might want to use as the structure for that paper, and that potentially has some quite useful implications in terms of reducing the time spent doing the grunt work on those types of projects.
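As a loose illustration of that "draw out key themes" grunt work, here is a deliberately crude, non-AI sketch - a real tool would use a language model, and the function name and stopword list here are invented for illustration:

```python
import re
from collections import Counter

# A tiny stopword list; a real pipeline would use a proper one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is",
             "that", "it", "for", "on", "as", "this", "with"}

def suggest_themes(text: str, k: int = 3) -> list[str]:
    """Suggest candidate headings by simple word frequency.

    A crude stand-in for the AI summarisation described above -
    it only counts words, but the shape of the task is the same."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [word for word, _ in counts.most_common(k)]
```

Feeding in a pile of research notes would return the most frequent substantive words as rough heading candidates, which is exactly the "suggest a structure" use Carl mentions.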
I think the recurrent theme in all the examples you've mentioned there is that it's a start: with homework it might be a good place to start an essay; with research it's a good place to suggest headings; and similarly, pieces of online content like what CNET's doing still need to be reviewed by editors. Even when I've heard interviews with artists and creative writers over the last few weeks since this was launched, the ones who aren't completely against it have said, well, it actually is quite useful for offering starting points to get the creative process going.
Yeah, exactly, and I suspect in due course in our field, or in things like accountancy, or anywhere else you're taking information, testing it and producing it in a certain format, it will be incredibly useful, because you can just put the data in and get the output out without having to do much of the work yourself. So those are the more obvious uses for it, rather than, I don't know, writing glorious symphonies or something like that, which might be a bit more complicated and need more input from a human. Although, having said that, you've got DALL-E, haven't you, and other image-generation tools where you put in that you want, I don't know, a pink elephant drinking tea on a unicycle, and it'll come up with an image of it.
Yeah, that all sounds fantastic for giving people a leg up, but obviously there are also business and wider risks. The launch of Google's AI hit their stock price because it made a seemingly minor mistake - though there's a discussion online as to whether it actually did make a mistake. But there are wider concerns too. I was reading something the other day about some medical research where they were trying to find vaccines or treatments for various things; they told the AI to reverse-engineer what it was doing and it basically came up with a whole new load of nerve agents that would be extremely dangerous. Obviously it would require an understanding of chemistry to make them, but it gives people a pretty horrible leg up.
But focusing on the business issues: when you've got something that's potentially scraping the internet - how it's doing that, who knows, or how far it's going into various websites - what are the data protection implications of using AI, in terms of it producing stuff that it throws out onto the internet, potentially without oversight?
Yes, well, yes. I think there are two things to talk about there when it comes to scraping. First of all, we've already touched on the intellectual property aspect: someone's put some content on the internet, your AI tool goes out, looks for it and finds it - is it then infringing their copyright by taking that information and using it to train itself? If you've got a picture of an elephant, for instance, that gives it more context for when someone asks for the pink elephant sipping tea, riding a unicycle. And the answer to that is probably yes, but no-one actually knows, because it's not yet been tested.
And then there's the one stage further, which is, as you say, the data protection implications. Let's say it's an image of someone's face, so it can identify that individual, particularly where the image is tagged with that individual's name. It might be that you go onto the RWK Goodman website wanting to find who's in charge of PR, and you get a fantastic picture of Liam coming up - but actually it's not you doing that, it's AI doing that, and it goes out to the whole internet and tries to find images of Liam doing whatever he might've been up to throughout his entire life, then collates all that information for the purposes of, I don't know, someone deciding whether to employ Liam or not. And all of that activity is the processing of personal data by the person or legal entity that instructed the AI to do it.
So in that scenario you would have needed at least to tell Liam that you were going to do that. But if you're not looking specifically for Liam - you're just looking for any human who might be doing a particular thing, say humans who have played football - how do you go about telling everyone who might come up in that search that their data might be processed for the purposes of training an AI to spot humans in an image? That's a much more difficult issue to deal with, and as the person or company creating that AI you would have to do a proper risk assessment of how likely it is that anyone could ever identify individuals from those images and how they are being processed, to decide whether, in that particular instance, there was anything you needed to do to comply with your obligations under either UK GDPR, as it is now post-Brexit, or GDPR if you're dealing with it in Europe.
And that's not a simple question, which is one of the reasons why the Information Commissioner's Office has been working with a number of AI companies to come up with a framework for how people ought to approach the AI projects they're running, looking at steps they can take to build their process so it is as compliant as it's possible to be. And actually, for once, there's an awful lot of very good guidance on the Information Commissioner's website about that, if you are planning to develop your own AI tool using personal data. I can't get into the detail now of how that would work, because it's a very complicated area, but it's certainly something you ought to take advice on if you're thinking of doing it.
And then there's a third tranche to this, which is where information is made publicly available for a specific use - say a university or other academic institution makes datasets available to others for non-commercial purposes. If you want to use that data to train your AI bot, is that a non-commercial use because it's training, rather than the actual process of selling the service to someone who might want to use it in the future - and therefore do you need a licence for it? Again, it's one of those areas where the law is not yet developed, so the answer is probably that you need to go and get a commercial licence to be able to do that. But all of these things are subject to ongoing review from the Government, certainly in the UK. Their aim at the moment is to make it as simple as possible for people to develop AI tools and, as I say, there's a consultation ongoing to try to create a framework that makes it easier for people to understand the legal implications of doing certain things and to facilitate the development of AI going forwards.
So, Carl, if you're someone who is producing the AI, is it important to have some kind of smart contract in place?
Well, you don't necessarily need a smart contract - we'll come on to those in a minute - but you definitely need a proper contract for anyone who's using your AI for any purpose. We talked earlier about the legal liability being unclear in terms of who's responsible for things. Well, your contract with your customer is going to be the key tool you've got to set out who is liable for what, at least as between you and your customers. So if anyone's signing up for a service, you certainly need a robust set of terms that they agree to before using that service, to make sure you're properly protected.
But smart contracts are interesting. For those that don't know, a smart contract is basically a blockchain-enabled contract that checks whether certain conditions are satisfied and then makes something else happen as a result, depending on what the contract says. So it's a mixture of legal writing, in terms of what the contract actually says, and code, in terms of making that automation happen - and the reason for using the blockchain is that you've got an audit trail of how it's being fulfilled.
It's far from bulletproof yet; I was listening to a different podcast - I can't remember which one - where the chap was explaining that it's very easy to fool a smart contract by going outside its parameters. But it's certainly something that will develop over the next five to ten years, I would've thought, into a relatively commonplace thing for the run-of-the-mill agreements people enter into, where if certain things happen there are defined outcomes. So, again, it will be a case of working closely not just with lawyers but also with the developers, to make sure the smart contract is properly translated into the code, so that if the thing that triggers the next action happens, the code does what it's supposed to. That will no doubt introduce new legal concepts we haven't yet seen, and there's potentially an argument about whether you could get sufficient certainty to make the contractual provisions in a smart contract actually binding - but, again, there'll be some case law soon, I'm sure, that covers that off, or there'll have to be legislation to set out how those things work.
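The condition-then-action pattern Carl describes can be sketched in plain Python rather than an on-chain language like Solidity. This is a toy illustration only - the class, method names and figures are all invented, and a simple list stands in for the blockchain's audit trail:

```python
from dataclasses import dataclass, field

@dataclass
class SimpleEscrow:
    """Toy stand-in for a smart contract: release payment once a condition is met.

    Illustrative only - a real smart contract would run on a blockchain,
    with the ledger itself providing the tamper-proof audit trail."""
    amount: float
    audit_trail: list[str] = field(default_factory=list)  # stands in for the on-chain record
    released: bool = False

    def confirm_delivery(self, delivered: bool) -> bool:
        # The contract's coded condition: payment happens only if delivery is confirmed.
        self.audit_trail.append(f"delivery_confirmed={delivered}")
        if delivered and not self.released:
            self.released = True
            self.audit_trail.append(f"released={self.amount}")
        return self.released

escrow = SimpleEscrow(amount=100.0)
escrow.confirm_delivery(False)  # condition not met: nothing released, but still logged
escrow.confirm_delivery(True)   # condition met: funds released and logged
```

The "easy to fool by going outside its parameters" problem is visible even here: the code only knows what `confirm_delivery` is told, so a false confirmation from outside the system would trigger the release regardless of what actually happened.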
So, who is liable if something goes wrong? Seconds ago Ed mentioned an AI making phoney medicines that were actually nerve agents, but there was also a relatively funny example this week of an AI that created an endless episode of Seinfeld. It had very basic graphics and was programmed with all the episodes of Seinfeld, and it'd be Jerry doing his stand-up in this very badly-animated comedy club - then they needed to shut it down temporarily because Jerry got very transphobic… Who's liable?
You're asking me some very difficult questions here, Liam, because the answers are not yet known. There's been no direct case law, that I'm aware of, on any of these issues but the logical position would be if an AI system does something and it is owned or operated by someone - a company or other individual - that you'd have to say that that company or individual is liable for what the AI did, in the absence of anything else.
But, again, there is no case law on that yet, and no legislation that I'm aware of. Where it'll come to the fore is in things like autonomous vehicles, where there will have to be some law on who is responsible if an autonomous vehicle has an accident. Will it be the company that built the car, if the AI is proved to have been negligent in driving it, or will it be the person driving the car? That, again, has to be decided. But there's another interesting aspect to this: it's not necessarily just a one-way street in which you work out whether the AI, or the entity that owns or operates the AI, is liable.
The other question is whether someone could have used AI to get a better outcome. So one theoretical scenario that might happen: an AI tool is developed for diagnosing a particular medical condition that is far more accurate than a human doctor could ever be, and a human doctor doesn't use it. Is that doctor then liable for the failure to use the AI, if the AI would have found the diagnosis more quickly and led to a better or different course of treatment for the patient?
And, again, all this stuff has to be worked through as part of a review of how AI works. Actually, if you read the Government consultation on AI, one of the key things they want to make more freely available is medical data, so that people can get access to large datasets on specific diseases and use AI to come up with better ways of diagnosing conditions - whether that's looking at blood samples and doing a better diagnosis on them, or reviewing scans, like X-rays or any other medical scan, to find issues that humans might miss because they're difficult to spot on a consistent basis.
So, sorry, that's not really a very good answer to the question, is it, in the sense that no-one knows the answer really.
Shall we ask an AI? Shall we get rid of you, Carl, and just…
Wait a minute! I can ask ChatGPT: who is liable if AI is negligent? The one limitation on this is that you have to type it in - it would be much easier if they could do it so you could just speak.
Yeah, I think Alexa and ChatGPT together would probably be an absolute disaster for server overload…
This is a very longwinded answer.
I have to say it's got a very lawyerly answer at the end of it. I won't read out the whole thing, but it says at the end: "Ultimately, determining liability in cases of AI negligence will require a careful assessment of the relevant facts and circumstances and the application of legal and ethical principles to those facts", which is a very good summary and a very lawyerly way of saying 'it depends' in a long-winded way.
Because it scrapes the blog of every lawyer who's written about it! You haven't written a blog about that, have you, Carl? They might have just stolen your…
No, I don't think I have. I just like recording podcasts…
Yeah, we won't transcribe it for fear that ChatGPT will have it.
There's a half-decent chance that it already is transcribing this because Google automatically transcribes podcasts so they can appear in search results so…
Yes. Or if you upload to YouTube now they automatically caption it for you so that people who are deaf can read the transcript. It's very helpful.
Read us on Google then is probably my advice to listeners. OK, well yeah, thanks very much for your time, Carl. It sounds like it's a frontier at the minute rather than something that's well established but it's all incredibly interesting. So yeah, thanks for your time.
I was going to say thank you very much too - it's one of the more exciting potential legal developments that's happened, certainly in my career.
It's going to take a lot of planning, and then there'll be a lot of updating as things come out, but obviously my concern is that the law will not move as quickly as the technology, because that seems to be moving apace. You know, Google have launched a product; Microsoft have launched a product; OpenAI, which is funded by Microsoft but also has Elon Musk behind it, hasn't it, are putting an awful lot of money into a lot of tools. So it's only a matter of time before governments around the world are going to need to deal with this. I did say to a colleague earlier, when I was talking about this podcast, that I'd definitely get Skynet in…
The singularity is nigh!
Yeah, exactly, and that's the other great challenge - not from a legal point of view but an ethical one: how far do you want this to go, because AI will do what it's told? The classic example is still the paperclip, isn't it: if you tell it to make as many paperclips as possible, where will it stop in harvesting the materials to do so?
Well, I'm definitely going to link the article I read about the nerve agents in the description of this episode so that everyone can see what I'm talking about. And that leads into the one about Seinfeld as well.
Again, thanks very much for your time, Carl. We'll leave it there.