On 8 December, the Council presidency and the European Parliament’s negotiators reached a provisional agreement on the proposal for harmonised rules on artificial intelligence (AI).
Over the past year, the pace of development in artificial intelligence has been unprecedented. New AI programs are continuously being developed, and the market has seen increasingly advanced applications, such as the widely discussed language model ChatGPT launched by OpenAI, which is capable of producing human-like text. In addition to ChatGPT, other generative AI programs have entered the market, capable of creating new content in the form of images, text, sound, video, and more, simulating creative human thinking.
Entrepreneurs can leverage AI in various contexts of their business to improve efficiency, customer service, and decision-making. Firstly, AI can be used to enhance targeted marketing and advertising. AI can analyse large amounts of data, helping to build more accurate customer profiles that include, e.g., information on customers’ purchasing behaviour, preferences, and demographic data. Based on this, marketing communication can be targeted more precisely, creating dynamic advertising to improve ad appeal and relevance. Additionally, chatbots and automated customer service systems can enhance customer experience by providing immediate and efficient support, answering questions, and offering product information.
Automation and AI can also improve operational efficiency in business by optimising various processes and logistics. Furthermore, AI can be utilised in human resources and recruitment, such as in automated screening processes and suitability assessments. Needless to say, the utilisation of AI in tech-heavy businesses is even further advanced and the opportunities here seem endless. However, AI poses various challenges from a legal perspective, related to the rapid development of technology and its diverse applications across different sectors. These challenges particularly include issues related to data protection and copyright.
Legislative Challenges and the EU’s Artificial Intelligence Act
From a legislative standpoint, the rapid development of AI brings various challenges, the most significant being that legislators have struggled to keep pace with it. The EU is currently preparing the so-called EU AI Act, the world’s first comprehensive AI legislation, which aims to ensure the proper use and development of AI in the European Union. Negotiations on the final form of the regulation faced delays, and major technology firms actively opposed what they perceived as excessive regulation that hampers innovation, but a provisional agreement on the AI Act was finally reached when the negotiations concluded late on 8 December 2023. The text will now have to be formally adopted by both the Council and the Parliament to become EU law. The provisional agreement provides that the AI Act should apply two years after its entry into force, with exceptions for certain provisions.
The AI Act will impose obligations on AI system manufacturers and users, varying based on the risks posed by these systems: the AI Act categorises artificial intelligence into different risk levels, ranging from “unacceptable” (banned technologies) to high, medium, and low-risk classifications. According to EU legislators, artificial intelligence systems that produce new content and analyse extensive data, including generative AI programs such as ChatGPT, are considered to pose a medium level of risk. General-purpose AI (GPAI) systems must adhere to certain transparency requirements concerning the functioning of their systems, particularly in terms of preventing the generation of illegal content. This is to be achieved by, e.g., creating technical documentation, adhering to EU copyright regulations, and distributing comprehensive summaries of the content utilised in the training process, to mention but a few measures. For instance, any content generated using ChatGPT must be appropriately labelled. GPAI models with systemic risk face even stricter obligations, such as conducting model evaluations, assessing and mitigating systemic risks, reporting serious incidents to the Commission, ensuring cyber security and reporting on energy efficiency. Failure to adhere to the regulations may result in fines, which vary based on the infringement and the company’s size, ranging from EUR 35 million or 7% of global turnover to EUR 7.5 million or 1.5% of turnover. With the EU AI Act, legislation expressly related to AI will also be introduced in Finland.
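To illustrate how the two fine ceilings above scale with company size, the sketch below assumes the GDPR-style mechanism of “fixed amount or percentage of global annual turnover, whichever is higher”; this is an assumption for illustration only, and the final AI Act text should be consulted for the exact calculation applicable to each infringement tier and to SMEs.

```python
# Illustrative sketch only: assumes the fine ceiling is the HIGHER of a fixed
# amount and a percentage of global annual turnover (the GDPR model). The
# final AI Act text governs the actual calculation.

def fine_ceiling(global_turnover_eur: float,
                 fixed_cap_eur: float,
                 turnover_pct: float) -> float:
    """Return the maximum possible fine for a given infringement tier."""
    return max(fixed_cap_eur, global_turnover_eur * turnover_pct)

# Most severe tier mentioned in the text: EUR 35 million or 7% of turnover.
# For a company with EUR 1 billion global turnover, 7% (EUR 70 million) applies.
severe = fine_ceiling(1_000_000_000, 35_000_000, 0.07)   # 70,000,000

# Lowest tier mentioned in the text: EUR 7.5 million or 1.5% of turnover.
low = fine_ceiling(1_000_000_000, 7_500_000, 0.015)      # 15,000,000
```

For a smaller company whose percentage-based figure falls below the fixed amount, the fixed amount would form the ceiling under this assumed rule.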
AI and Copyright Issues
The copyright challenges posed by AI relate to questions such as who owns, or is responsible for, content created by AI. The copyright of creative content generated by AI is a complex issue with no clear-cut answer. Copyright traditionally applies to content created by humans, but with technological advancements the role of AI is also being considered. Copyright grants the creator exclusive rights to determine the use of their work economically and morally. Economic rights enable actions such as presenting, reproducing, and distributing the work, while moral rights relate to the personal acknowledgment and protection of the creator. Copyright protects the concrete expression of creative work, not the idea itself. Copyright covers various forms of expression, including music, literary works, films, visual arts, and other creative works, and does not require separate registration.
Currently, it is unclear whether copyright belongs to the human who created the AI, the developer of the software, or the AI program itself. In some countries and legal systems, if a person has programmed the AI and defined its parameters, the copyright may be attributed to the creator of the AI, making the software developer eligible for copyright. In cases where AI has autonomous creative abilities and can independently produce content without direct human guidance, the question may arise as to whether the AI itself holds the copyright to the output.
However, in the current legal framework in Finland and many other countries, copyright generally relies on human-created original content and the individuality and personality of the creator. Consequently, AI does not have independent copyright in Finland. This creates challenges within the current legislation for granting copyright to AI that operates based on pre-defined algorithms without personal contribution by humans. Many countries and legal systems have not yet established clear guidelines regarding the copyright of creative content generated by AI. This ambiguity is one of the challenges at the intersection of AI and copyright, and efforts are underway to update legislation to address this development in the future.
When utilising AI, it is also crucial to ensure that material protected by copyright is not used without permission. Most generative AI applications, like ChatGPT, are tools whose operation is based on the data they have learned during their training. Therefore, AI-generated material may be based on copyrighted original material. If AI is used, for example, in creating text or music, the AI may incorporate copyrighted works without the user being aware of this. It is therefore not a surprise that artists and copyright owners have started to raise claims against misuse of their work in AI systems.
Currently, proper referencing is almost entirely absent from AI-generated content. Thus, users lack practical opportunities to check what material the AI has utilised in the process and, subsequently, to prevent potential copyright infringement on their side. The upcoming EU AI Act aims to address this issue by including certain transparency requirements for generative, content-producing AI. This includes, as stated above, a requirement to publish summaries of copyrighted data used for AI training, and systems must be developed to prevent the production of illegal content.
Consideration of Data Protection
The use of AI raises numerous privacy and data protection questions, as AI applications can process vast amounts of personal data that must be secured. Personal data and sensitive information must be protected appropriately, and the risks of data breaches must be taken into account when utilising AI. Privacy and data protection issues become particularly relevant when AI utilises personal information for business purposes, such as in the fields of security, marketing, or healthcare. To keep this short news update at a general level, a few marketing-related issues are highlighted below.
When AI uses individuals’ personal data in marketing, several data protection issues arise. General data protection standards, such as the requirements of GDPR, must be considered. This includes obtaining individual consent, purpose limitation of collected data, profiling and automated decision-making, security, and the right to be forgotten.
It is paramount that companies obtain consent before collecting and using personal data for marketing. Consent must be clear and voluntary, and individuals must be aware of how their information will be used. Purpose limitation means that the use of collected personal data must align with the original purpose for which consent was given. If data is collected, for example, during a specific campaign, it should not be used for other purposes without proper consent. Companies must remember these fundamental data protection principles when using AI, even though the opportunities created by AI applications may tempt companies to use collected personal data for additional purposes. Additionally, individuals must always have the option to request the deletion of their data, and companies must be able to respond to such requests. It is essential to keep these principles in mind both during the training phase of AI and the subsequent utilisation phase.
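The consent, purpose-limitation, and erasure principles described above can be sketched in code. The names and data structures below are purely hypothetical illustrations, not any specific compliance tool or API: the point is simply that before personal data collected for one purpose is reused, the new purpose is checked against the consent on record, and a deletion request stops all further processing.

```python
# Hypothetical sketch of GDPR-style consent checks; all names are illustrative.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """What one data subject has consented to (hypothetical structure)."""
    subject_id: str
    permitted_purposes: set = field(default_factory=set)
    erased: bool = False  # set once the right to be forgotten is exercised

def may_process(record: ConsentRecord, purpose: str) -> bool:
    """Purpose limitation: allow processing only for consented purposes."""
    return not record.erased and purpose in record.permitted_purposes

def erase(record: ConsentRecord) -> None:
    """Honour a deletion request: block all further processing."""
    record.erased = True
    record.permitted_purposes.clear()

consent = ConsentRecord("cust-001", {"campaign-2024-spring"})
may_process(consent, "campaign-2024-spring")   # True: the original purpose
may_process(consent, "ai-model-training")      # False: no consent for reuse
erase(consent)
may_process(consent, "campaign-2024-spring")   # False after erasure
```

The second check is the one most relevant to AI: reusing campaign data to train a model is a new purpose and, under the principles above, would require fresh consent.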
Although regulations related to AI are still in the development stage and are inadequate in many aspects at present, individuals or companies using AI in violation of the legislation may be held accountable, especially if such use causes harm or damage or otherwise violates existing regulations. According to existing legislation, the AI program itself cannot be held responsible for intellectual property infringement or data protection violations. Therefore, the person using the AI system as a tool or aid in their business activities may, depending on the situation, be held responsible since the unlawful action performed by the AI program is generally considered to be carried out by the user.
The use of artificial intelligence can be enticing for businesses, and its implementation can provide a competitive advantage while enabling the development of new business models across various sectors. However, it is of paramount importance that companies use AI responsibly and adhere to existing rules and practices to ensure the proper preservation of customer privacy and to avoid any infringement of intellectual property rights. This includes continuous monitoring of the effects arising from the use of AI by the business itself or its suppliers and, where needed, seeking proper permissions or licenses for the use of, among other things, copyrighted material.
It is further recommended that companies establish their own guidelines and instructions regarding the utilisation of AI in business operations and how these programs should be employed. Companies should also regularly update these guidelines and instructions to align with current legislative requirements concerning the use of AI and, where possible, offer appropriate training to employees to enhance their understanding and compliance with these guidelines and instructions.