Detailed information on Mr. Ioannidis can be found here: https://www.linkedin.com/in/dimitrios-ioannidis-4783258/
Detailed information on Boston International Innovation Moot can be found here: https://www.innovationmoot.com/
The Motive Behind the Boston International Innovation Moot:
The way we teach students is based on the idea of precedent. We give them many cases, ask them to study them, and then we ask questions. When they become lawyers, they tend to follow the same mentality. Legal education trains lawyers to study events of the past, whether they are court decisions or enacted laws. We always look to the past but very rarely look into the future. As a result, when a client, a start-up company for example, comes and says that they have an innovative idea, we do not know exactly what existing legislation may be used to support that innovation. This is why I founded the Boston International Innovation Moot (“BIIM”). We also need to consider that scientists, lawyers, and innovators don’t mix well. The entire model of legal education and the practice of law doesn’t work well with the incredibly fast pace of innovation.
The Issues with Traditional IP Rights:
I ask students, “What should come first: innovation or legislation?” Scientists tell me that they need the legislation so that they can innovate. Lawyers typically tell me the opposite. So this is one of the major themes of the Innovation Moot: how do we, as lawyers, deal with emerging technologies that have no regulation?
Last year, we built the moot problem around the ownership rights to the formula of the perfume Chanel No. 5. I worked with a company from Texas that was using an AI platform that could actually give us all the ingredients of all the perfumes. If someone could easily obtain the ingredients and the exact formula of Chanel No. 5 by using an AI, then who owns it? Is it a trade secret? Of course it is; it is protected, like the name, the box, and the colors of the trade name. But now the AI is able to provide the exact formula. So, who owns the formula? I think there are no right or wrong answers, and these are issues that we need to discuss. We have traditional IP rights, but when it comes to AI, we are not quite sure what the capabilities are and what is protected.
Will Law-Related AI Platforms Replace Lawyers?:
In 2022, I wrote a law review article titled “Will Artificial Intelligence Replace Arbitrators Under the Federal Arbitration Act?”. The Federal Arbitration Act was enacted in the USA in 1925. Back in 1925, when we didn’t have the modern airplanes of today and traveling around the USA and the world was extremely difficult, the US Congress enacted that law. When we look back in history, we see that neither lawyers nor judges were supportive of arbitration. Everyone was against arbitration, except business people; our clients. Seventy-five business organizations testified in the US Congress in support of passing the arbitration law. One proponent of the legislation said that “lawyers are a waste of time”. Business people just wanted to resolve their disputes easily, and this is why the law was passed. Clients drive our innovation targets, WE DO NOT.
I also ask questions like this: if you and I make a contract and include a little paragraph that says “We want to decide any disputes between us through ChatGPT,” is that enforceable? Would a judge say that AI is not a human, and this clause is not enforceable? The second question is: if we agree on a provision for arbitration through an AI, and we go to a jurisdiction that has accepted these types of arbitrations, would that be enforceable in another jurisdiction?
We have to analyze the science behind all this innovation, and we lawyers do not tend to do that.
There are two trends, especially in the US, that I want people to think about. (1) I did a lot of research during Covid-19, back when I was writing the article, and I could not find any investment in law-related AI platforms. No one cared about them. Then ChatGPT came, and everything changed. After 2022, the investment put into these AI platforms was incredible. A year ago, a company on the West Coast of the US raised 900 million dollars at a valuation of 3 billion dollars, and they do something very simple: they take documents from lawyers and use an AI platform to pick up patterns and retrieve information from those documents. Now venture capitalists are putting a lot of money into law-related platforms. This is the first trend. (2) As to the second trend, in 2022 only 2 states in the US were considering legislation that would allow non-lawyers to provide some legal services, and debating whether or not non-lawyers could own law firms. Currently, 10 states are going through that kind of debate.
We have business people putting a lot of money into AI platforms, and states in the US debating legislation that would allow non-lawyers to own law firms and provide legal services. Guess what is going to happen to lawyers now? We are going to be pushed out, just like what happened with arbitration in 1925. Clients will say that hiring a lawyer is too expensive and that they want to resolve disputes easily. So the use of AI tools in law will increase exponentially in the coming years, and those lawyers who fight that trend will go out of business.
There are a lot of lawyers who tell me we still need humans to make decisions. But I disagree. Our clients are going to decide what we need, because they pay. They don’t want to pay for a lawyer; they just want to resolve disputes easily. I think the entire profession is going to change.
What do you think about a petition written using ChatGPT? Recently, a petition written by ChatGPT was rejected. Wouldn’t there be an issue with the acceptance of such petitions if AI platforms replaced lawyers?:
Right now, we are still a little backwards. There is a common rule and approach that bans the use of AI in most moot court competitions. But the Philip C. Jessup International Law Moot Court Competition allows the use of AI, because it’s a tool. I also used an AI to write the closing arguments for my Innovation Moot problem for next year, and I got almost perfect closing arguments for both parties. I would say they were good enough to be used in court. If I use an AI, cite fake cases, and file them in court, that is a big problem, because I would be misleading the court. That is not acceptable. But if you use an AI tool to write a great argument, review it, make sure that it is your argument, and then file it with a court, that works very well. AI is a tool, so let’s treat it as one.
For example, what if we gave students a tool that allows them to argue against the AI for practice? I did exactly that with the moot problem from the Foreign Direct Investment Arbitration Moot. I put it into the AI platform that we are developing and asked for the best arguments for both the claimant and the defendant. The answers I was getting were very good. AI is a tool, and we should not be afraid to use all the tools within our capacity that would make us better lawyers. As long as it is our work that goes out, we can use any kind of tool that we want. Everyone is using it. If you file a complaint that is based on facts you have reviewed but was written by AI, what difference does it make? AI is just a tool.
There is a petition that was filed by a law firm in the US challenging the decision of an arbitrator on the basis that the arbitrator wrote the opinion using ChatGPT. They went through the details of how the answers were very similar to the answers that ChatGPT would give. This case is still pending. If you don’t do any work as a lawyer or an arbitrator, and you simply type a couple of letters and command the AI, then it’s a problem, because you haven’t added your own work. But as long as it is your work, I think you should be able to use any available tools. If you abuse the tool and misdirect the court or the other lawyer, that is a problem and an ethical violation.
If we adopt the view that AI is only a “tool” to which we provide our inputs, how would we answer the conflicting issues regarding artistic or intellectual works subject to copyright protection? Whether AI can be the author of any work is a debated subject.:
I’ve used an AI, and I gave a lecture back in April 2025 titled “The AI Meets the Bible”. I wrote the phrase “the last day of earth” as a prompt. I got incredible photographs of fire, destruction, and humans facing fire. Then I used a phrase related to “heaven” and got pictures of blue skies, waters, forests... the pictures were peaceful. Then I wrote the word “hell”, and the AI gave me some images of Earth. What that means to me is that the AI tools we have today are a mirror of our world. They simply give us back what we give them. AI doesn’t generate any original works at this point in time; it simply takes information from what we feed it. Generative AI tools are super calculators. They give us the most likely output according to the inputs they have.
About AI replacing lawyers, I understand your point in terms of jurisdiction. However, how would we determine the applicable law if the decision-maker itself is an AI and the parties are from different countries? How would we provide the relevant inputs on the applicable law to AI platforms?:
If this were the case in an arbitration and we included a provision on AI in the arbitration agreement, we would probably include a provision on the applicable law as well.
We also need to learn about RAG: Retrieval-Augmented Generation. The problem with AI now is that it picks up a lot of information from many sources. For example, if you use Gemini, created by Google, it picks up information from the many sources available to Google. RAG is still an AI model, but it’s sort of a closed system: it takes information only from the resources that you give it. You control the available sources. When you close the system and do not allow it to go beyond the materials you provided, that is called Retrieval-Augmented Generation. AI is a mirror of our world; it picks up all the information we are feeding it. In my lectures on AI, I often tell the audience that the AI doesn’t pick up any information or resources unless you “feed the beast”.
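The closed-system idea behind RAG can be sketched in a few lines of Python. This is only a toy illustration, not a production pipeline: the retriever here is a simple keyword-overlap ranking (real systems use vector embeddings and a similarity search), and instead of calling a language model it just shows how the retrieved sources are inserted into the prompt, so the model can only answer from the material you “feed” it.

```python
# Toy Retrieval-Augmented Generation (RAG) sketch.
# Assumption: a keyword-overlap retriever stands in for a real embedding search.

def retrieve(query, corpus, k=1):
    """Rank documents by word overlap with the query and return the top k."""
    q_words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query, corpus):
    """Augment the question with only the retrieved sources -- the 'closed system'."""
    context = "\n".join(retrieve(query, corpus))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# A tiny "fed" corpus: the model would see nothing beyond these lines.
corpus = [
    "The Federal Arbitration Act was enacted in 1925.",
    "The MIT License requires attribution to the original author.",
]
prompt = build_prompt("When was the Federal Arbitration Act enacted?", corpus)
print(prompt)
```

The point of the sketch is the last step: whatever model receives this prompt is constrained to the retrieved context, which is exactly the “do not go beyond the materials you provided” property described above.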
They say that less than 5% of the input AI models use comes from Africa. Most of the input is not from the places you would normally think of. So, where is this input being created? You’ll realize that the input is not from all over the world. As a result, the output from the AI is very limited. I think, as lawyers, we need to be able to organize the information that we have. In your country, for example, so much material is published, but most of it is not accessible to the AI. The key is to “feed the AI” with the materials that correspond to your knowledge, so that the output you get will reflect that input.
Issues On Copyrights, Software Codes and Emerging Technologies:
I’m considering organizing a Master’s program in the Dominican Republic. These questions came up when I was talking to a friend of mine from MIT, the founder of the Supercomputing Lab at MIT. In the early days of ChatGPT, he said that a lot of people at MIT were concerned about copyright infringement.
For example, when a developer uploads code to a repository, they often put the license at the bottom of the code. As a result, when someone uses that code, they are automatically bound by the license the coder provided. There are other institutions and other licenses out there, but the MIT License is the most prominent, with about 27% usage worldwide.
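To make the mechanism concrete, a source file typically carries the license notice right in its header or footer, so the notice travels with the code wherever it is copied. The snippet below is illustrative only: the author name is hypothetical and the notice is an abbreviated paraphrase of MIT-style terms, not the full license text.

```python
# Copyright (c) 2024 Jane Coder (hypothetical author, for illustration)
#
# MIT-style notice (abbreviated paraphrase): permission is granted, free of
# charge, to any person obtaining a copy of this software, to use, copy,
# modify, and distribute it, provided this copyright and permission notice
# are included in all copies. The software is provided "as is", without
# warranty of any kind.

def greet(name):
    """A trivial function; anyone who reuses this file is bound by the notice above."""
    return f"Hello, {name}!"

print(greet("world"))
```

Whoever copies this file, or the function in it, takes the notice along with it, which is the sense in which a user of the code is “automatically bound” by the coder’s license.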
Basically, people were uploading code to repositories. One of them is called GitHub, which hosts millions of pieces of code that people have uploaded. I think it was started in 2007 as a non-profit organization; the whole idea was to support innovation. Most people uploaded their code with a license, and some uploaded it openly, without a license requirement. As a result of the license requirements on the code, if you use it you are subject to that license: you cannot sue if anything goes wrong, and you have to recognize the coder’s work and show attribution. This is the same logic as preventing plagiarism.
In 2018, Microsoft purchased GitHub for, I believe, 7.5 billion dollars. That means it had access to all this code. Then OpenAI came along, and Microsoft invested a lot of money in OpenAI. Now ChatGPT is a tool, and Microsoft’s Copilot program is trained the same way: they take all the code from GitHub and all other available sources, and they train the AI on it. There are now litigation cases in the US and elsewhere claiming that OpenAI and Microsoft trained their models to take the information, anonymize it, and thereby remove the authors’ names. The model is built upon many pieces of code by different people, and the output is presented as if it were owned by OpenAI/ChatGPT, with no reference.
When I first talked to Jeremy Kepner at MIT, I asked, “Can we prove this infringement? How can we show the code that ChatGPT is using?” The answer from a lot of scientists was: “Yes, we can tell where the code came from. We can tell the source of the ChatGPT outputs.”
We eventually wrote a law review article, published in 2023, in which we basically amended the MIT License to exclude non-human access. The MIT License as we amended it says, in effect: “This is only available to humans. Only a human can use it. Non-human access to the code is not allowed.” As a result, if ChatGPT uses the code, it constitutes an infringement. When the article was pre-published, we got some feedback and began to see some changes. Now we are beginning to see generative AI platforms entering into licensing agreements with some institutions that have developers. We may not be able to prove it precisely, but all of the information these platforms pull comes from these resources. Some of it is not copyrighted, but some of it is.
The first case was filed in November 2022 as a class action. Now we have about 25-30 cases filed throughout the US. It is very interesting to see how the courts are going to handle something that has no legislation behind it. Some of the arguments invoke the fair use doctrine, but fair use in the IP area of the law is very, very specific.
The Idea of Personhood for AI:
There was the famous “Monkey Selfie” case. A photographer had a lot of gear: cameras and other photography equipment. He took a break, and when he came back, he saw an amazing picture of a monkey; it was a selfie the monkey had taken by playing with the camera. He published the photograph, and then somebody used it without authorization. The court had to answer: does the monkey own the rights to the selfie it took? The court held that animals cannot be authors under our current IP laws. Our system does not recognize any of these rights.
In another case in New York, a group of animal activists filed a petition against a New York zoo that was holding an elephant named “Happy”. They said that the elephant wanted to leave and the zoo had no right to keep Happy without its permission. Can an animal have any rights? The court decided that animals cannot bring such claims. But two of the judges considered that perhaps we should recognize some rights of animals.
Stephen Thaler claims that his AI, DABUS, is the author and inventor of certain works. The courts have rejected the claim and held that such rights cannot be recognized for an AI. Some of the courts in South Africa, Australia, and New Zealand initially considered that maybe we should recognize some rights for the AI.
Right now, we are not recognizing the idea of personhood for AI. It’s a tool right now. Of course, it is subject to misuse, just like the Internet. But still, most of us use the Internet in a good way; it’s a tool that helps us. The same thing is going to happen with AI as well.
“Artificial Intelligence”, in my view, is not a good term. “Artificial” suggests bulbs, wires, and electronic materials. The term I prefer is “Autonomous Intelligence”.
When we have autonomous intelligence, where the AI can not only stream a consciousness but can also exist self-sufficiently without the push of a button, then what do we do? We also studied this issue in last year’s Innovation Moot problem.
The Technology of Gene Editing and Viruses:
Another important topic: we now have viruses that can change behavior. The question I ask is whether or not we can use viruses or gene editing to modify behavior. If somebody commits a crime, instead of putting them in jail, can we modify their behavior with a virus? Can we use a virus instead of a jail sentence? And for people sentenced to life imprisonment, can we use gene editing to modify their behavior?
This technology already exists, but we don’t have the regulatory framework for it.
About “Lawlipops":
Lawlipops is a reality live stream for law, founded by Mr. Ioannidis. It’s a start-up that aims to teach law through precedent-based games and methods personalized to each user’s way of learning. Users will play games and learn.
So, let’s think outside the box in terms of legal education and our profession as it will evolve whether we like it or not. Can we afford to stay out of these changes? I say no.

