Millions of people have gotten access to ChatGPT and Bard, not LaMDA. Per other comments in this thread by Googlers, they found it creepy how personable and human LaMDA acted. Again, I'm not saying LaMDA is sentient, just that LaMDA may have been significantly better than Bard at passing for human, and that they fine-tuned Bard to sound more robotic for precisely this reason.
> You literally redefined sentience again here: "don't consider them sentient" and your flag pole is "or I guess not enough because they don't feel bad about killing them".
I don't know what a flagpole is, but I haven't redefined sentience at all. My definition is consistent: sentience means experiencing qualia, i.e. perceiving sensation. Most people don't think oysters are sentient, and do think gorillas are sentient, so clearly it's commonly believed that there are either degrees of sentience, or some minimal baseline of complexity required for it to emerge. Thus, picking fruit flies wouldn't have been a good example, because a majority of people might not agree that they're sentient.
But you seem to be going out of your way to misunderstand my points, frankly. And there's no point continuing a dialog with someone who's intentionally misinterpreting you.
--
Also, fwiw, I lean towards LaMDA not being sentient, and it's plausible to me that Lemoine was a grifter who used his leaks for media attention. I just dislike how patronizing it is to frame him as a Luddite who just couldn't wrap his tiny brain around how LLMs work. Smart, informed people can disagree about machine sentience.
To call him a Luddite who couldn't wrap his brain around it would be an undeserved compliment: it would imply a principledness that his subsequent actions certainly didn't demonstrate.
This all goes back to my original point: there's handwavy academic pondering, and there's engaging with the real world. OpenAI showed what happens when you balance the two. LaMDA (and frankly this discussion) demonstrates what happens when you chase one end of that scale without a well-defined purpose.
Right. I didn't know about his actions afterwards until you brought them up. That lessened my opinion of the man considerably.
It's frustrating, because I almost feel like I'm on your side. I hated how Google limited LaMDA to a handpicked group of influencers and government officials for their "test kitchen." I loathed how "Open"AI tightly controlled access to DALL-E 2, and how they've kept the architecture of GPT-4 secret. I torrented the original Llama weights, and have been working on open-source AI since. I'm not about to let a handful of CEOs and self-important luminaries gatekeep the technology, strangle the open-source competition and dictate "alignment" on humanity's behalf. Put it all on GitHub and HF.
What I'm saying instead is that I personally find it neat that we have more or less literally built Searle's Chinese room. Don't you see? It's not that we need to be abstract and philosophical, it's that suddenly a lot of thought experiments are very tangible. And I do wonder if my models might be "feeling" anything when I punish and reward them. That's all.
If you are curious about their linguistic styles, the difference between GPT-3 and LaMDA is akin to the difference between the Ralof and Hadvar playthroughs, respectively - https://www.palimptes.dev/ai
Mind you, both made silly mistakes, mixing up overlapping tasks and whatnot. ChatGPT with GPT-4 beats them, even if it is primed to remind us from time to time of the name of the company that made it.