Engineer Claims Google A.I. is Alive

by Wall Street Rebel - Michael London | 06/13/2022 7:49 AM

A Google engineer says he was placed on leave after asserting that an artificial-intelligence chatbot is sentient. Blake Lemoine published some of his conversations with LaMDA, which he referred to as a "person." Google says the material he supplied does not support his claims that LaMDA is sentient.

LaMDA builds on Google research published in 2020, which showed that Transformer-based language models trained on dialogue could learn to talk about essentially anything. Google has since found that, once trained, LaMDA can be fine-tuned to make its responses significantly more sensible and accurate.
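LaMDA itself is not publicly available, but the general recipe described here (pretraining a Transformer language model, then fine-tuning it on dialogue examples) can be sketched with open-source tools. Below is a minimal, purely illustrative sketch that uses GPT-2 as a stand-in for LaMDA; the tiny in-memory dataset and the hyperparameters are assumptions for demonstration, not Google's actual pipeline.

```python
# Illustrative sketch only: GPT-2 stands in for LaMDA, and the tiny
# dialogue "dataset" is invented for demonstration purposes.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # pretrained on general web text

# A few dialogue-style examples; real fine-tuning would use large curated
# corpora rated for qualities such as sensibleness and specificity.
dialogues = [
    "User: What is a fun fact about Pluto?\nBot: Pluto takes 248 Earth years to orbit the Sun.",
    "User: Can you recommend a book?\nBot: If you enjoy science, try 'Cosmos' by Carl Sagan.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for text in dialogues:
        batch = tokenizer(text, return_tensors="pt")
        # For causal language models, the input tokens double as the targets.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```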

Google engineer Blake Lemoine claims that the company's language model has a soul. The company disagrees.

Google has placed an engineer on paid leave after he asserted that the company's artificial intelligence is sentient, surfacing yet another controversy over the company's most advanced technology.

In an interview, Blake Lemoine, a senior software engineer in Google's Responsible A.I. organization, said he was placed on leave on Monday. The company's human resources department said he had violated Google's confidentiality policy. Mr. Lemoine said that the day before he was suspended, he delivered documents to the office of a United States senator, claiming they offered evidence that Google and its technology engaged in religious discrimination.

The Washington Post first reported Mr. Lemoine's suspension. Google says its systems can imitate conversational exchanges and riff on different topics, but they do not have consciousness. "Our team, which includes ethicists and technologists, has reviewed Blake's concerns in accordance with our A.I. Principles and has informed him that the evidence does not support his claims," Brian Gabriel, a spokesman for Google, said in a statement. Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but "it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient," Mr. Gabriel said.

Mr. Lemoine had spent months arguing with Google managers, executives, and human resources staff over his startling claim that the company's Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul. LaMDA is an internal Google tool; the company says hundreds of its researchers and engineers have conversed with it and reached conclusions different from Mr. Lemoine's. Most A.I. professionals believe the field remains a very long way from computing sentience.

While some A.I. researchers have long made optimistic claims about these technologies soon reaching sentience, many other experts are quick to dismiss such claims. "If you used these systems, you would never say such things," said Emaad Khwaja, a researcher at the University of California, Berkeley, and the University of California, San Francisco, who is exploring similar technologies.

In the past few years, Google's research organization has been engulfed in scandal and controversy while racing to catch up with the leaders in artificial intelligence. The division's scientists and other employees regularly clash over technology and personnel matters, and the disputes often spill into public view. In March, Google fired a researcher who had openly disputed the published work of two colleagues. And the dismissals of two A.I. ethics researchers, Timnit Gebru and Margaret Mitchell, after they criticized Google's language models, have continued to cast a shadow over the group.

Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict, and an A.I. researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program's consent before running experiments on it. His claims, he said, were grounded in his religious beliefs, which he contended the company's human resources department discriminated against.

Mr. Lemoine said his sanity had been questioned on multiple occasions. "They asked me, 'Has a psychiatrist evaluated you within the past several months?'" In the weeks before he was placed on administrative leave, the company had suggested he take mental-health leave.

In an interview this week, Yann LeCun, the head of A.I. research at Meta and a key figure in the rise of neural networks, said these kinds of systems are not powerful enough to attain true intelligence.

Google's technology is an implementation of what experts call a neural network: a mathematical system that learns skills by processing massive amounts of data. It can learn to recognize a cat, for instance, by identifying patterns in hundreds of photographs of cats.
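To make that pattern-finding idea concrete, here is a minimal sketch in Python. It trains a tiny one-hidden-layer network on synthetic 8x8 "images" in which a bright center square stands in for the cat pattern; real systems learn from millions of labeled photographs with far deeper networks.

```python
# Minimal sketch of the pattern-learning idea behind a neural network.
# Synthetic data only: class 1 ("cat") images get a brighter center region.
import numpy as np

rng = np.random.default_rng(0)

def make_image(is_cat):
    img = rng.normal(0.0, 1.0, (8, 8))       # random 8x8 "photo"
    if is_cat:
        img[2:6, 2:6] += 2.0                  # the pattern to be discovered
    return img.ravel()                        # flatten to a 64-vector

X = np.array([make_image(i % 2 == 1) for i in range(400)])
y = np.array([i % 2 for i in range(400)], dtype=float)

# One hidden layer, sigmoid output, trained to reduce squared error.
W1 = rng.normal(0, 0.1, (64, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, 16);       b2 = 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    p = sigmoid(h @ W2 + b2)                  # predicted probability of "cat"
    # Backpropagation: nudge weights in the direction that reduces error.
    grad_out = (p - y) * p * (1 - p)
    W2 -= 0.1 * h.T @ grad_out / len(X)
    b2 -= 0.1 * grad_out.mean()
    grad_h = np.outer(grad_out, W2) * (1 - h**2)
    W1 -= 0.1 * X.T @ grad_h / len(X)
    b1 -= 0.1 * grad_h.mean(axis=0)

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy after 200 epochs: {accuracy:.2%}")
```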

Over the last few years, Google and other industry leaders have built neural networks that learn from vast amounts of prose, including unpublished books and Wikipedia articles by the thousands. These "large language models" are versatile enough to be applied to many tasks: they can summarize articles, answer questions, generate tweets, and even write blog posts.
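As a rough illustration of that versatility, the sketch below applies publicly available models from the open-source Hugging Face transformers library to three of those tasks. This is not LaMDA, which is internal to Google; the specific model checkpoints are common public ones chosen for illustration.

```python
# Sketch: one library, several text tasks. Public checkpoints stand in
# for proprietary systems like LaMDA; all choices here are illustrative.
from transformers import pipeline

article = (
    "Google placed an engineer on paid leave after he claimed that the "
    "company's LaMDA chatbot is sentient. Google and most A.I. experts "
    "say today's conversational models are not conscious."
)

# Summarization: condense an article into a sentence or two.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
print(summarizer(article, max_length=40, min_length=10)[0]["summary_text"])

# Question answering: extract an answer span from the same text.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
print(qa(question="Who was placed on leave?", context=article)["answer"])

# Open-ended generation: continue a prompt, tweet- or blog-style.
generator = pipeline("text-generation", model="gpt2")
print(generator("The debate over machine sentience", max_new_tokens=30)[0]["generated_text"])
```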

But these models have numerous flaws. Although the systems are extremely good at reproducing patterns they have seen in the past, they cannot reason the way humans do. Sometimes they produce flawless prose; other times they produce complete nonsense.

[Video: Google Engineer Goes Public to Warn Firm's A.I. is SENTIENT]
