Google has suspended an engineer who claimed that LaMDA, the language model developed by the technology giant, is conscious.
Blake Lemoine, a developer at the company, insists that Google's language program LaMDA can not only perform the functions of a chatbot, imitating speech by "absorbing trillions of words from the Internet," but also possesses consciousness. This is the conclusion the engineer reached after working with LaMDA, the newspaper points out. Since last fall, Lemoine had been testing whether the language model uses discriminatory vocabulary. The chatbot wrote about its rights and discussed Isaac Asimov's laws of robotics, a company official shared.
The developer tried to prove his findings to Google, but the company suspended him and placed him on paid leave; he responded by deciding to tell the general public about his "discovery."
Lemoine is not alone in his assumptions: others have also reported conversations with the "intelligent" program, attaching fragments of those conversations to their statements.
Google explains that the chatbot processes so much data that the program does not need to understand what is written, the report said. Scientists cited in the article, for their part, believe that the dialogue that seemed intelligent to the programmer could be simple quotations from Internet encyclopedias or forums. The suspended engineer, meanwhile, tried to support his conclusions with parts of the dialogue in which the chatbot talks about its fear of being switched off, the paper reported.
Lemoine was placed on leave for violating confidentiality policies, among other things, after he hired an attorney to represent LaMDA, the paper said.
Google spokesman Brian Gabriel said in a statement: "Our team, including ethicists and technologists, reviewed Blake's concerns in accordance with our artificial intelligence principles and informed him that the evidence does not support his claims. He was told that there is no evidence that LaMDA is sentient, and plenty of evidence against it."