In time we will also protect AI works, but that will take some evolution

13. 12. 2023

 

New technology has arrived and with it a lot of new questions. How many times have you experienced this? Lawyer Petra Dolejšová has experienced it many times, and right now she is dealing with almost nothing but the legal aspects of using artificial intelligence. "We keep coming back to the questions of who owns the output from AI, whether it is possible to generate images of specific people, and whether you can use styles such as that of the painter Mucha or characters from Marvel movies. All of this is legally quite clear, but it turns out that people are still grasping the essence of copyright law," muses the expert, who believes we have a much bigger question to think through: who do we "pin" responsibility on for any potential transgressions by the technology.

Was the law prepared for the extent to which AI tools would start to be used?

I can say that when I was getting into AI, I was surprised at how well prepared copyright law was for it. For regulations this old, they suit the current situation very well, and the spirit in which they were written works too. People need to understand that creative works, or copyright works, don't always behave like traditional things. It's not a car that has an owner. With copyright works as intangible outputs, we're talking about who has the rights to them. It's about who gets to say whether someone can or cannot touch a given work. And the law only protects what people create, not what software creates. So with AI, the basic idea is that works created by software are not protected from a copyright perspective.

But in the future, we can expect to create more and more as humans using these technologies. Will AI outputs still simply be "the output of technology"?

I don't think so; after all, we have already had this situation in the past. Until 1825, if you wanted a portrait, you hired a painter, who stood somewhere for weeks or months painting it. After that year, a guy came in, set up a tripod, put a box on it, and in a few minutes a picture came out. The law back then also said: we're going to protect the artist-painter, not somebody who just took a picture out of a machine. But then a number of years went by and it became clear that it wasn't just about the machine, it was about working with it. And today photographs even have their own "special" protection compared to other copyrighted works. AI will probably go through the same thing. For now we're getting a feel for it, but in time we'll figure out here too that the value of the work doesn't reside in the camera, but in the fact that someone has to put something in front of it, create a composition and so on.

It'll be quicker than it was with photography, though, won't it?

I think so, everything around us is speeding up. But the current problem society is experiencing with copyright and AI is something else. I've been giving training sessions for ten years, and the more I train people, the more I'm convinced that people overall don't understand the concept of copyright. That it's not like buying a car. When you buy a picture, it behaves completely differently. It's a tangled area, hard to grasp because it's not tangible like a car. That's why people are now surprised that what they prompted is not "theirs" and not protected.

Petra Dolejšová currently deals mainly with the legal aspects of the use of AI.

Do we see this as a bigger issue now because society is playing with AI in a big way and, more than ever, getting to create "works of authorship"?

That's just it. Copyright law essentially says we are protecting human talent. Why is a work of authorship a work of authorship? Because not everyone has talent. At the moment, however, we are all creating with the help of AI, which is hugely beneficial in itself, but it doesn't automatically mean we have to protect that creation. The idea of the law was that we would stand behind those who have talent and creativity. In this context, we have not yet thought enough about what is worth protecting.

So nothing new, no new topic for the law?

What we have yet to resolve is liability in the context of technology. It has been talked about for years in connection with autonomous cars: if a car is driven entirely by software and runs someone over, who does the damage and the liability fall on? The person being driven, the expert, or the company that developed the car? This is not settled; we are waiting for legislation and the courts, and I think that ultimately this will be one of the reasons why we will not use autonomous cars to their full extent. Because there is no fair solution to this equation. In marketing and creative work, we're going to be talking about where the dividing line is between what will and will not be a copyrighted work. In other words, in what case a person has put in enough creative input. This won't be a new debate, though; it has long been addressed, even by the courts, on the topic of ripping off content. If you take inspiration from something, that's fine, but ripping it off is not. Those precedents already exist. In the case of AI-generated works, we'll be dissecting the same thing; we're just waiting for the precedents.

You mentioned liability in the case of autonomous car crashes. Is that the biggest topic related to the legal issues around new technologies?

Definitely. We have to remember that the added value of humans is that they are responsible for everything they do. Among other things, that's what stops us from doing bad things and hurting each other. The problem with AI is that it will gradually replace humans in various positions, and that stopping point doesn't exist for an AI. It is not afraid of going to jail or paying a million-dollar fine. That's why we are so concerned about who will be held accountable for its actions. Where there is no accountability, there is no prevention and no caution. We have to give it to somebody. Personally, I don't think the law has kept up with the development of technology; it simply can't answer everything. But let's not talk about the law "catching up" with technology. Let's talk about the law holding technology back, sometimes to the benefit of humans.

In this context, I can think of the example of GDPR legislation, which many people thought was just a bogeyman and a "brake".

I would like to stand up for it; it has been made a bogeyman unjustly. I rather regret that, at the time it came into force, the European Union did not have spokespeople to explain properly why the GDPR came into existence in the first place. It wasn't because we wanted to restrict entrepreneurs and marketers; on the contrary, it was supposed to be a regulation that helps them. Years ago, the problem was that every European country had different data protection regulations. This created problems, for example, in e-commerce, which is cross-border by its very nature, because separate databases had to be maintained for each country. So it was said: let us abolish the individual national regulations and make a single one, so that data can flow internationally, because that is how it already works anyway. But fear sells, so the word got out that Europe was going to impose huge sanctions and a company could be fined up to 20 million euros. But that was only aimed at giants like Facebook and Google operating in Europe; otherwise, fines are assessed in proportion to the size of the business. Unfortunately, Europe has never emphasised the business positives, namely that it simplifies the flow of data in this way. That is, after all, a terribly good idea. Next time, we should take better care to communicate the reasons for creating such a standard.

"We have the issue of liability to resolve. The added value of a person is that they are responsible for everything they do. But who should bear it for a piece of technology?"

The future of AI? ChatGPT-based language models for businesses

22. 11. 2024

Chances are, you're already familiar with it and are no stranger to the word "prompt". The use of generative AI tools has increased massively in recent months, but they can't be used everywhere. Sensitive corporate data doesn't belong out in public. So people are looking for ways to build protected systems that allow information to be analysed and generated without data leaks. Aricoma's specialised team is working on this and has therefore teamed up with the Institute of Mathematics at Brno University of Technology.

We kept getting rid of chip production in Europe; now we struggle to get it back. But it is worth it

17. 9. 2024

"We've been carrying chip debt since the 1970s, Europe was happy to get rid of chip production because it's water and electricity-intensive. And now we're slowly and painfully catching up. At the same time, we will need more and more chips, and even more sophisticated ones than today," says Tomáš Pitner, professor at the Faculty of Informatics of Masaryk University and head of the research centre, about the situation in which the Czech Republic and Europe find themselves. But in exactly which way?

Paula Januszkiewicz: Understanding infrastructure is not the same as knowing how to attack it

31. 7. 2024

The number of cyberattacks will not decrease. Let's face it and defend ourselves. This is how one could sum up the words of Paula Januszkiewicz, a Polish cybersecurity expert who spoke this spring at Security 2024, Aricoma's annual conference about IT security trends. Januszkiewicz, whose company CQURE has four offices around the world, spoke about why companies and institutions can't resist attacks, how to get more experts, and where the industry is headed.