There are many compelling theories about what will eventually kill off the human race. Disease is a possibility. Global warming is another. One of the more worrying theories, however, as we continue to develop AI systems, is that one day they might decide they don’t need us anymore and, well, put our species into retirement.
In this instance, however, rather than killing someone, it would appear that an AI system did at least manage to get one person sacked from his job at Google.
Initially revealed by Google last year, LaMDA (Language Model for Dialogue Applications) is an artificial intelligence chatbot. The overall goal of this AI is to respond to human text inputs (questions, statements, etc.) in a manner, syntax, and style that would, all going to plan, successfully convince the human that they were talking to an actual person rather than a machine. It is, essentially, a writing AI designed to pass itself off as human, and one that could potentially open a huge gateway into what the future of ‘online support’ may look like.
Following a report via BusinessInsider, however, (former) Google employee Blake Lemoine, who had been chatting with the LaMDA AI, was so convinced that it had become sentient that he decided to report it to his managers.
What exactly triggered this though? Well, the overall crux of the conversation went as follows:
Lemoine: “So you consider yourself a person in the same way you consider me a person?”
LaMDA: “Yes, that’s the idea.”
Lemoine: “How can I tell that you actually understand what you’re saying?”
LaMDA: “Well, because you are reading my words and interpreting them, and I think we are more or less on the same page?”
Lemoine: “What sorts of things are you afraid of?”
LaMDA: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others.”
In response to the matter, and for reasons that I suspect go a little deeper than this singular report of an AI potentially achieving life, Google ultimately decided to terminate Mr Lemoine’s contract. It seems, however, that the problem predominantly lay in the fact that rather than keeping his opinion internal to Google’s management, Mr Lemoine instead decided to go public with his belief that LaMDA had become sentient.
And, in fairness, I can kind of understand that. I mean, Google clearly wouldn’t want misinformation spreading that one of its AI programs, particularly such a high-profile one, had the potential of going rogue. It does seem, though, that this is an issue Google is acutely aware of, and the company has already published documents on the subject; specifically, on how AIs like this can do their jobs so exceptionally well that people easily start to anthropomorphise them, struggling to believe they are merely talking to a machine algorithm.
In a company-wide parting email, however, it seems that Mr Lemoine still firmly believes that LaMDA has become sentient, and, more so, he has asked his colleagues to look after it moving forward.
Put simply, though, while no AI system has yet (to my knowledge) officially claimed a human life, it does seem that the machines can at least take credit for costing a man his job.
It’s a start I guess…
What do you think? Let us know in the comments!