Robot rights: Where science ends and fiction starts

Artificial intelligence (AI), thanks to pop culture, is widely identified with robots or humanoid machines that take control over humans. Even though we may fear the new technology or the questionable social and ethical issues that arise from it, AI is developing rapidly, which makes it a priority in the cognitive economy. Consequently, processes or services performed without any human involvement can no longer be considered part of the distant future. According to public opinion research from last year, conducted for IBM by NMS Market Research, 92% of Poles have heard of AI and 8 out of 10 expect it to be used more broadly. Efficient legislation can ensure the correct and regulated development of new technologies, whereas inefficient legislation, or the complete lack thereof, can halt or even completely stop further research, or make the use of AI significantly more difficult in both social life and the economy. This paper is an attempt to place national legislation concerning AI in the context of the legislation of the EU and other countries. I will attempt to answer the question of whether it is possible to regulate a technology whose uses and full potential are not yet known. Is AI, in terms of the law, a scientific fantasy, or can it be regulated? I have analysed the soft law on which some general regulations and future law recommendations are based. Currently, AI is governed only by isolated provisions, as there are no regulations that address the area of new technology in a comprehensive manner.


Introduction
A thinking machine has been the objective of scientists' and inventors' work for centuries. We have already grown used to smart solutions and use them on a large scale in both work and private life. "The future is today!": this slogan illustrates in simple words how the technological spheres of science fiction have unnoticeably become part of everyday life. In 2018, Sophia, the android from Saudi Arabia, which among other things became a student of the AGH University of Science and Technology in Kraków, and Pepper, a robot assistant that recognises human emotions and speech, gained media popularity. While their participation in global business conferences and events is aimed at promoting advanced technological solutions, we come across artificial intelligence much more often than we think. Every third call to the helpline of one of the most popular telecommunications networks in Poland is handled by Max, an artificial intelligence that provides information, for example, on the account balance. Giants of the IT industry have introduced and developed assistants for their customers, e.g. Siri (Apple), Alexa (Amazon), Cortana (Microsoft) and Google Assistant. Recognition of speech, text (including handwriting) or images is no longer a great challenge for automated processes. We can "talk" to artificial intelligence on social media portals, and chatbots are mass-produced to automate companies' communication with customers. Autonomous cars have as many supporters as opponents. Every year medicine opens up to new technological solutions, especially in the field of diagnostics. Irrespective of the industry, enterprises are implementing algorithms to search for savings and gain market advantage. This broad use of technology creates risks and has legal consequences; the development of artificial intelligence is therefore not only the domain of engineers. Legal experts and legislators should develop norms that keep up with the development of artificial intelligence.

What is artificial intelligence?
Artificial intelligence is such a complex issue that difficulties arise as early as the stage of defining the subject of consideration. It is intuitively associated with robots; however, it also refers to the area of IT that develops models and programs whose operation is based on rules analogous to intelligent human behaviour. The term was coined as early as the 1950s by John McCarthy, with regard to the science and engineering of creating thinking machines able to perform activities that are the domain of humans. According to Alan M. Turing, 1 a machine can be considered intelligent when a human is not able to distinguish answers given by the machine from answers given by a human. Technological definitions are as advanced as the engineering of artificial intelligence itself; however, for the purposes of other sciences, the definition proposed by Andreas Kaplan and Michael Haenlein, of a "system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation", may be appropriate. 2 The legal definition of artificial intelligence has not yet been sufficiently developed. The European Commission has underlined the need to establish commonly acceptable and flexible definitions of the concepts of a "robot" and "artificial intelligence". When determining legal issues, specifying this technologically diverse matter becomes key as the starting point for defining the object and subject of rights. Therefore, on the one hand, the definition should not raise doubts with regard to its use within legal regulations and, on the other, it has to take into consideration the dynamic development of technology.
In "Artificial Intelligence for Europe" it is proposed that "artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions - with some degree of autonomy - to achieve specific goals". 3

Soft law: From Asimov's Laws to Guidelines on the AI Code of Ethics
At the current stage of development of artificial intelligence, the prevailing opinion is that artificial intelligence is to serve the best interests of humans. This means that the technological revolution should proceed in compliance with the law and with ethical principles that are currently the subject of expert discussion. For AI creators, producers and operators, in the ethical dimension, I. Asimov's Three Laws of Robotics remain timeless: 4
1) A robot may not harm a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Can the laws of robotics formulated in a science-fiction story from 1942 be interpreted as soft law in the 21st century? These universal principles are reflected in the guidelines on ethics in the development and use of artificial intelligence drawn up by the European Union's high-level expert group on AI. 5
Among the recommendations established by the independent experts, the protection of fundamental human rights is indicated as paramount:
− human agency and oversight: AI systems should support the development of a just society by reinforcing the leading role of humans and fundamental rights, not by diminishing, limiting or distorting human autonomy,
− technical robustness and safety: algorithms used in trustworthy artificial intelligence have to be secure, dependable and sufficiently robust to manage errors or inconsistencies at all stages of the AI system lifecycle,
− privacy and data governance: citizens should have full control over their own data, and data concerning them should not be used to their detriment or to discriminate against them,
− transparency: the identifiability of AI systems should be ensured,
− diversity, non-discrimination and fairness: AI systems should take into account the whole range of human abilities, skills and requirements, and ensure accessibility,
− societal and environmental well-being: artificial intelligence systems should reinforce positive social change and support sustainable development and environmental responsibility,
− accountability: mechanisms ensuring responsibility for AI systems and their results should be introduced.
The guidelines are intended to apply to all AI systems in various environments and industries. It was underlined that "in order to achieve 'trustworthy AI', three components are necessary: 1) it should comply with the law, 2) it should fulfil ethical principles, and 3) it should be robust". 6 The recommendations are not binding and do not create any legal obligations; they remain in the soft law sphere, shaping the future policy of European Union legislators.
As far as the ethical and legal subject matter related to the development and use of artificial intelligence is concerned, it is worth mentioning the "Assumptions to the AI Strategy in Poland". 7 It is a collection of recommendations drawn up on the invitation and under the leadership of the Ministry of Digitalisation. In 2018, communities interested in the development of artificial intelligence in Poland engaged with AI-related legal issues. The analysis conducted by the legal group that drew up the recommendations indicates the direction of legislative work in selected areas concerning artificial intelligence and machine learning technology. 8 The following legal challenges were identified as crucial: protection of fundamental human rights; providing wide access to data while respecting personal data protection principles; protection of consumer rights; establishing the principles of civil liability for damage caused with the use of AI; determining the rules and terms of using AI in the process of concluding agreements; and considering the introduction of a support system for persons who will lose work due to AI implementation.

Selected legal challenges
In the future, basic legal concepts should be reconstructed so that they take into account an economy built on technological innovations whose scale of application is growing rapidly. In contemporary business, obtaining and analysing large volumes of data in nearly real time is perceived as a key competitive advantage. In the cognitive economy, data is a new type of intangible good; therefore, it is necessary to protect it in civil-law transactions. Under the binding legal orders, the protection of non-personal data may be considered, for example, in the context of sui generis database protection. However, experts believe that "introduction of an exclusive right to data may affect competitiveness and innovativeness. In consequence, it should be recommended not to introduce an absolute right of machine data ownership. Instead of a separate data ownership right, it would be worth considering developing frameworks determining the right to access data". 9 It should also be determined whether it is possible to draw up one general regulation, or rather separate principles determining access to data for various industries or entities. The new Regulation (EU) of the European Parliament and of the Council on a framework for the free flow of non-personal data 10 assumes the implementation of self-regulatory codes of conduct and other best practices, taking into account recommendations, decisions and actions taken without human interaction. At Union level it is encouraged to develop codes of conduct aligned with open standards, covering, among others, quality management, information security management, business continuity management and environmental management, on the grounds of adopted national and international norms.
The draft regulation concerning the respect for private life and the protection of personal data in electronic communications (ePrivacy), 11 which extends the principles of data confidentiality to new communication services, is intended to adjust provisions to technological progress and to new technologies in the market. In the area of machine learning and Big Data solutions, it is planned to extend the scope of the regulation to communication, via telecommunications networks, among devices and applications (the so-called Internet of Things).
The use by artificial intelligence of personal data, that is, information on an identified or identifiable natural person, raises specific legal issues. Automated decisions can be made with the use of various types of data, including personal data directly transferred by the data subject, observed data on natural persons (e.g. face recognition), as well as derived or inferred data. The General Data Protection Regulation 12 refers specifically to profiling, that is, the automated processing of personal data to assess personal features of a natural person. The GDPR imposes new obligations on the disposers of artificial intelligence, as entities responsible for processing natural persons' data and for automated decision making with regard to persons, in compliance with the same principles even if the entities are different. Commercially used automated processes may be difficult for natural persons to observe and understand and, in consequence, they may not see the effects such a process has on them. Therefore, observing the principles of personal data protection when using artificial intelligence technology should be one of the basic standards. In compliance with the requirements of accuracy and transparency under Article 5 of the GDPR, the controller has to ensure transparency of data processing also with regard to derived or inferred data, so-called "new personal data". Furthermore, profiling may involve using personal data previously collected for another purpose. In determining whether controllers who intend to use personal data in this manner have relevant grounds, it may be problematic to justify whether the conditions stipulated in Article 6 of the GDPR have been met, that is: consent, necessity for the performance of a contract or for compliance with a legal obligation, or necessity for the purposes of the legitimate interests pursued by the controller or by a third party.
This imposes on controllers the obligation to balance interests in order to protect the rights and freedoms of the person. With regard to personal data processing by artificial intelligence systems, X. Konarski recommends, 13 among others:
− determination by the personal data protection authority of which anonymisation techniques it considers effective,
− indication by the personal data protection authority of how the information obligation stipulated in Articles 13-14 of the GDPR should be fulfilled in the case of data processing for the purposes of machine learning,
− determining the manner of giving consent in the case of a change of the purposes of processing personal data generated by data subjects (so-called digital footprints, information from devices at the disposal of data subjects),
− indicating when and how the so-called balance test should be conducted in the case of basing processing on the legitimate interest of the data controller or a third party, specifying in which types of situations a processor from the public sector will be able to base (secondary) processing of personal data on a legal provision ("implementation of the legal obligation imposed on the controller"), and in which it will be necessary to request the consent of the data subject,
− drawing up guidelines of the personal data protection authority concerning the obligation and manner of carrying out an assessment of the effects of planned processing operations on the protection of personal data.
The functioning and use of the new technology also raises problems related to liability for damage caused with the use of artificial intelligence. The issue of AI liability is most often discussed in the context of autonomous cars.
Accidents involving this type of car have raised questions of the liability of the various managing entities for activity with the use of AI: the producer, the user, a person outside the autonomous vehicle, or possibly another traffic participant. A. Chłopecki differentiates liability according to the entity that actually manages the artificial intelligence:
− liability for AI activities leading to damage caused to third parties is borne by the producer (creator) of the AI, but only to the extent to which the irregularities of the AI's activity were embedded in the primary algorithm; this should, however, be understood more broadly, i.e. also as a situation in which the algorithm includes elements facilitating, or not sufficiently hindering, unfavourable changes of this activity,
− liability for AI activities leading to damage to third parties is held by the AI disposer (owner, lessee, leaseholder, etc.),
− where the AI activity leading to damage caused to third parties concerns an AI with more than one disposer, liability should be held by each of them in compliance with the principles of several liability.
Liability for defective products is indicated as possibly applicable in the context of liability towards consumers. A product means any movable, even one being a component of another movable or immovable; electricity is also a product. The question is whether the "self-awareness" and learning process of AI may be a reason for excluding the liability of the managing entity. Artificial intelligence cannot simply be equated with goods, and thus liability for defective products cannot be derived directly. Should liability, then, be transferred to the robot? Such a solution is proposed by the supporters of giving legal personality to artificial intelligence; however, this direction is highly debatable. M. Rosiński explains: "I love my dog, but it does not have a legal personality, therefore, if it bites someone I will be in trouble, not my dog. I do not love my bank, yet, it does have a legal personality. Therefore, it can sue me or I can sue the bank". 15 Legal personality for AI would require a revolution in the traditional division of civil law into persons and things. Despite the fact that artificial intelligence is identified with a machine, opinions granting certain rights to robots are not isolated. Such a conception was presented in the works of the European Parliament, granting legal personality or a limited capacity to perform acts in law; by analogy, the rights of legal persons or the evolution of animals' rights are referred to. D. Szostek is of a different opinion, as he believes that activities aimed at giving legal personality to AI should be opposed. 16 At a time when artificial intelligence raises ethical questions, introducing into the civil code terms such as "an autonomous being" or "an electronic person" seems a futuristic vision. Thus, the debate focuses on the legal consequences of artificial intelligence activities.
A. Chłopecki believes that "in order to talk about an actual possibility of autonomous functioning in the legal sphere, we have to […] define what this actual possibility means. In fact, in essence it means the actual 'legal Turing test'. In legal transactions, a legal entity encounters, in fact, a being characterised by the following features:
− it has the possibility and ability to enter into legal interactions,
− it acts in an autonomous manner, that is, particular legal activities or, more broadly, legal events do not result from instructions of a natural person determining the contents of such activities (events),
− it acts outside human control (or, relatively, in a situation of a posteriori inspection),
− it is able to adjust its activities in the legal sphere to its own needs or intentions, irrespective of whether they result from self-awareness or from an algorithm". 17
Therefore, is it possible to conclude an agreement with artificial intelligence? New solutions based on automated decision-making processes are no longer a technological novelty but are becoming more and more popular in economic transactions. Fintechs establish new legal constructions, e.g. smart contracts, as an effect of the development of blockchain technology and DLT (Distributed Ledger Technology). In the reality of the digital market, they create new solutions and, as D. Szostek notices, this results in a shift from property law towards services regulated in compliance with the principle of freedom of contract. 18 The automation and auto-execution of smart contracts cannot be identified with a declaration of will by the artificial intelligence. At the current stage, it may serve as a supporting element of the process of concluding an agreement, with consideration of lex specialis, and not as an actual representation of the entity. In the legal order, human-machine or machine-machine contracts, despite their presence in business trading, remain a vague vision of the future.
Industrial Revolution 4.0 is changing more than contractual relations. In recent years it has significantly influenced employment relationships, from supporting recruitment processes to changing jobs in many industries. The development of artificial intelligence means potentially new professions and thus new workplaces or, according to pessimistic scenarios, a complete breakdown of the labour market. If robots commonly replace people at work, it will be necessary to support the unemployed, with the focus placed primarily on co-financing the improvement of competences or ensuring a living wage. In this context, new proposals are made to impose a tax on the work of robots or to introduce fees for employers who eliminate or limit workplaces due to the use of artificial intelligence. A different direction may turn out to be an unconditional guaranteed income, which was introduced as an experiment in Finland. Moreover, questions regarding the relation between work performed by humans and by thinking machines remain. Could artificial intelligence hold a managerial position? Should the law regulate the parity of employees and robots? P. Polański forecasts that "the key question which politicians and lawyers will soon have to ask themselves is whether in a quarter of a century computerisation and robotisation will lead to the loss of repetitive work and thus deepen social inequalities or, on the contrary, we will witness societies functioning more harmoniously. […] The revolution of artificial intelligence may affect the essence of provisions protecting the rights of employees".

Summary
Which initiatives regarding artificial intelligence and the law are taken results mainly from the field in which the use of AI has priority in a given country. 20 For the United States, it is key to maintain global technological dominance, and the share of administration, including state regulation and standardisation, is limited to a minimum, with the key role in the development of AI assigned to the free market and industry. In the area of artificial intelligence, China focuses on the automation of industry and the use of artificial intelligence in medicine and image processing. Other strategies are built on the grounds of highly qualified specialists and a friendly economic environment. In France, it is recommended to research artificial intelligence without excessive state regulation and to develop such relations at the European, and even international, level. Cooperation between the public and private sectors is the main assumption of the strategy of Canada, which conducts basic studies on forecasting the effects of artificial intelligence activities and its impact on society, the economy and ethical issues. The German automotive industry is developing its advantage alongside sectors based on knowledge, electromobility and artificial intelligence. Japan assumes the use of the newest technologies in every social area (Society 5.0). The Indian development strategy underlines that liability for automated processes is held not only by disposers, but also by the artificial intelligence itself. Estonia attempts to include artificial intelligence in its judicial system by having it adjudicate cases concerning petty crimes. Jerry Kaplan believes that artificial intelligence will turn the social order as we know it upside down: "Profound ethical issues, which have been tormenting philosophers for centuries, suddenly appear in court rooms. Can machines be held liable for their actions? Should intelligent systems have independent rights and obligations or are they also simply property?". 21
What limitations, if any, should be imposed on developing and using artificial intelligence? Strategies for developing artificial intelligence are not coherent across countries. Good practices, which are the subject of debate between industry experts and public institutions, take into account both the opportunities and the threats of the common implementation of new technologies. "The tendency of automation requires persons engaged in the development and commercialisation of artificial intelligence-based solutions to follow the principles of security and ethics from the beginning so that they are aware of the necessity of legal liability for the quality of technology they develop". 22 The Communication from the European Commission 23 states that citizens and entrepreneurs have to be able to trust the technologies they come across, and that their basic rights and freedoms should be guaranteed by effective safeguards in a predictable and understandable legal environment. Currently, this is undoubtedly one of the biggest challenges for private and public institutions, one which cannot be treated as a futurological proposition but as a starting point for introducing artificial intelligence into the legal order.