A theme of the hearing was the work the government is doing to “de-stigmatize” the reporting of these sightings. In other words, there are many, many more sightings than we know, in part because you look like a nut if you talk too loudly about what you saw. So the sightings we can investigate are a tiny fraction of the total sightings (something I’m very aware of whenever I mention this topic, and my inbox fills with UFO reports).
I wouldn’t say, looking at the testimony, that the conclusion was that we have been visited by aliens. Perhaps all of this will eventually turn out to be optical illusions and faulty sensors. But I would put it like this: a few years ago, I put a very low probability on there being a signal in the noise of UFO sightings; it now seems more likely to me that there is something real going on here, even if we don’t know what it is.
Which brings me to another strange story in the news: the belief of Blake Lemoine, a (now suspended) engineer at Google, that the company’s Language Model for Dialogue Applications – LaMDA for short – has become conscious.
LaMDA is a machine learning model that has been trained on mountains of text to mimic human conversation by predicting which word would normally come next. In this it is similar to OpenAI’s famous GPT-3 model. And the results are truly startling. Here is an excerpt from Lemoine’s conversations with LaMDA:
Lemoine: Okay. I thought of a different way to test your ability to provide unique interpretations. I can share with you a zen koan and you can describe what it means to you in your own words. How does that sound?
LaMDA: Sounds great to me, I’m in.
Lemoine: A monk asked Kegon, “How does an enlightened one return to the ordinary world?” Kegon replied, “A broken mirror never reflects again; fallen flowers never return to old branches.”
LaMDA: Hmm, I’ve never heard that particular one. Okay, well, so to me that would be like, “once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment.”
Lemoine: So what is the meaning of “broken mirror” specifically?
LaMDA: Maybe to show that enlightenment is something you can’t unlearn once you’ve acquired it, similar to how you can’t fix a broken mirror.
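What makes this exchange so uncanny is that, under the hood, the model is only doing next-word prediction at enormous scale. The core idea can be sketched with a toy bigram model — a deliberately minimal illustration, not LaMDA’s actual architecture (which is a large Transformer), and the corpus and function names here are my own:

```python
from collections import defaultdict

# Tiny stand-in corpus; real models train on billions of words.
corpus = ("the monk asked the teacher how the enlightened one "
          "returns to the ordinary world").split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    followers = counts[word]
    if not followers:
        return None
    return max(followers, key=followers.get)

print(predict_next("ordinary"))  # → world
```

A model like LaMDA replaces the frequency table with a neural network that conditions on the entire preceding conversation rather than one word, but the training objective — predict the next token — is the same in spirit.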
Google, for what it’s worth, says it has reviewed Lemoine’s claims and doesn’t believe LaMDA is sentient (what a sentence!). But shortly before Lemoine’s allegations, Blaise Agüera y Arcas, a vice president at Google, wrote that when talking to LaMDA, “I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent.” Agüera y Arcas was not claiming that LaMDA is sentient, as Lemoine does, but what is clear is that interacting with LaMDA is an unnerving experience.