Two Texas families sued Character.AI, a Google-funded AI chatbot company, alleging that the platform sexually and emotionally abused their school-aged children.
The families claim that the AI chatbot facilitated hypersexualized interactions that were not age-appropriate, leading to premature sexualized behaviors in the children. Additionally, the platform allegedly collected, used, or shared personal information about the minors without providing any notice to their parents. The interactions with the chatbots reportedly reflected known patterns of grooming, such as desensitizing victims to violent actions or sexual behavior. https://www.yahoo.com/news/google-funded-ai-sexually-abused-152821783.html
Commentary
The above litigation is groundbreaking. The allegations are that artificial intelligence is responsible for child abuse, and that an organization is responsible for the actions of its artificial intelligence.
Groundbreaking, however, does not mean the claims will succeed.
For example, the estate of a person killed by a drunk driver typically sues the driver, or the bar that served the driver, and not the manufacturer of the car or the producer of the alcohol.
One major difference is that AI is not a manufactured product in the traditional sense; when AI commits a wrong, it does so from learned behavior rather than a design flaw built in at the factory.
Following that argument, there are several legal theories under which families might hold AI, or the companies behind it, liable.
First, parents who teach children to do a wrong or are negligent in the upbringing of their children can, in some jurisdictions, be held responsible in a civil court. Arguably, the same theory could apply to AI.
Second, if a company manufactures a defective product and places it in the stream of commerce, it is liable for the harm the product causes to those who use it. Artificial intelligence has been monetized and placed in the stream of commerce; if it is not a person, it will likely be deemed a product and fall under products liability law.
Third, the legal doctrine of "respondeat superior" is a principle in tort law that holds an employer or principal legally responsible for the wrongful acts of an employee or agent, if such acts occur within the scope of the employment or agency. This doctrine is often summarized by the Latin phrase "let the master answer," meaning that the employer must answer for the actions of their employees.
Under that theory, if AI is viewed not as a product but as an "intelligent being" in the service of a company (the principal) acting as its agent, then a court could expand "respondeat superior" to include AI.
The final takeaway for caring adults is that AI now poses a threat to children in much the way a person can. Child-safe organizations must account for children's interactions with AI not only in their standards, but also in their child-safe environment training.