> Similarly, we’ll see the rise of junk personalities – fawning and two-dimensional, without presenting the same challenges as flawed real people. As less and less of our lives are spent talking to each other, we’ll stop maintaining the skill or patience to do so.
This is already happening. We just call them "influencers" or "YouTubers". These are still technically real people, but they're real people playing a sanitized character while appearing, or claiming, to be authentic. They are actual photographed humans, but often so wildly retouched digitally that they're more beautiful than any actual person.
And people are increasingly replacing real relationships with parasocial relationships with these complete strangers. It's understandable: like junk food, it satisfies an immediate craving with no real effort on the part of the consumer. But long-term, it is deeply unhealthy.
> real people playing a sanitized character
Or, kings, priests, etc.
They get good NPC lines too: I heard that when the Queen of England went shopping on some random high street, a cashier told her she looked just like the Queen, to which she (presumably in cut-glass tones) replied, "How very reassuring".
I'm not sure I appreciate that a recent demagogue (one whose listeners do the sanitising themselves, rather than leaving it to handlers?) serves as a counterexample to:
— You know what happens when politicians get into Number 10; they want to take their place on the world stage.
— People on stages are called actors. All they are required to do is look plausible, stay sober, and say the lines they're given in the right order.
— Some of them try to make up their own lines.
— They don't last long.
This put me in mind of a friend in hospital. When someone is terminally ill and receiving care, they take on the role of the patient, and the doctors play the doctors. It is sometimes possible to make up your own lines, but again, you don't last long.
Maybe this is more of the human condition, merely projected onto politicians (and/or subconsciously selected for in all popular actors as a way to temporarily offset or allay our own existential dread: There's nothing more satisfyingly life-affirming for an audience than watching the rise and fall of someone else).
Or, predating the YouTube personalities by a long way: Japanese idols. Unique but replaceable, idealised personas worshipped by huge numbers of people. They're also parasocial-relationship magnets.
Or the bands where the name stays the same and they rotate members regularly.
We unironically call them INFLUENCERS. Isn't that the most disturbing cognitive dissonance ever?
Do we just accept that they are called influencers and completely ignore what that word actually means?
It won't work. Advertisers, political operatives, media controllers, spammers, religious groups, and plenty of others (in any particular country, and in the world in general) are more than motivated to optimize what they do in every way available, including making AIs hard to distinguish from real humans. We already have bots and fake accounts on social networks trying to influence people from the shadows; that will only increase.
And the alternative to that could be even worse than being exposed to that influence.
I live in an area with a language that makes a T/V distinction.
Is there any chance (other than switching to English?) of getting a localisation of Android that doesn't continually use T pronouns with me?
https://en.wikipedia.org/wiki/T–V_distinction
> I live in an area with a language that makes a T/V distinction.
English also has a T/V distinction: formerly "you" was the formal pronoun and "thou" the informal one. By now their meanings have reversed: because of its old-fashioned sound, most native speakers consider "thou" the more formal form of address.
The Problem With Counterfeit People (May 31, 2023) https://www.theatlantic.com/technology/archive/2023/05/probl...
I bet the author, Boris The Brave, could find a relatable account of future events in the writings of Daniel C. Dennett.
> Returning to the junk food model, we don’t ban unhealthy food for various reasons, but we do have requirements about clarity in labelling.
What's the end result of that? People are fatter and less healthy than ever.
It's not clear that banning it outright would be a solution, but comparing the relative health of countries that disincentivize unhealthy ingredients with that of countries, like the USA, which subsidize them tends to make me think bans are a potential solution.
Edit:
I think we'd probably be better off subsidizing, or giving grants to, some number of people to run community gardens instead of make-work jobs. They'd get exercise and grow healthy food, at least enough to replace some fraction of their diet. I guess the equivalent here would be subsidizing healthy IRL social interaction like dances, sports, and board games.
Educate people to be independent and smart about their lives.
Provide kitchens and teach kids to cook and clean.
Show them in school what healthy, good food tastes and looks like.
The article's two novel (to me) points, in brief:
"My model is something like junk food. Food scientists can now make food hyper-palatable. Optimization for that often incidentally drives out other properties like healthiness. Similarly, we’ll see the rise of junk personalities -- fawning and two-dimensional, without presenting the same challenges as flawed real people."
"Proposal: ... [AI] should be impossible to confuse with a real person ... For a chatbot, why not give them the speech patterns of a fusty butler, like C3-P0? Or that autotuned audio burr when speaking that we use to signify a robot."
https://futurama.fandom.com/wiki/I_Dated_a_Robot
Fry: "Well, so what if I love a robot? It's not hurting anybody."
Hermes: "My God! He never took middle school hygiene!"
https://morbotron.com/caption/S03E11/570919
> Similarly, we’ll see the rise of junk personalities – fawning and two-dimensional, without presenting the same challenges as flawed real people.
I believe the term is "Genuine People Personalities".
I don't buy into this "Think of the children, regulate adults now" line of thinking for chatbots.
Good parenting addresses these personality-trait concerns without opening the door to mandatory, suspension-of-disbelief-busting watermarking in all AI output. Adults have the right to regard any human (or non-human) effort in any sphere at whatever level they want, for whatever reason. They don't need to be reminded of it at every interaction forevermore, as punishment for voice actors losing jobs in 2023 and to lighten the load on parents.
I get that children can't distinguish personhood well and might grow up disrespectful of real humans if left to AI, but the same argument applies if they're left watching YouTube all day. There are new expectations on all of us to compete with new tech. We can manage without knee-jerk, nanny-state-style encroachments at every turn.
> I personally wouldn’t consider it a win for humanity if we retreat to isolating cocoons that satisfy us more than interacting with other people.
I mean, I also think this would be a bad thing. But I'm not sure this post presents any solid evidence that this is happening. Just some vague fears that it might someday happen.
> Let’s make intelligent machines to act as agents, arbitrators, and aides. But they should be impossible to confuse with a real person ... For a chatbot, why not give them the speech patterns of a fusty butler, like C3-P0?
This point seems misguided. I'm missing the logical progression from "AI can speak like a human" to "Humans prefer interacting with AIs over interacting with humans".
The issue is mental capacity, not speech patterns. If an AI were intelligent and creative enough to actually provide stimulating conversation, people would befriend it regardless of whether it spoke like C-3PO.
> I'm missing the logical progression from, "AI can speak like a human" to "Humans prefer interacting with AIs over interacting with humans"
Just a few days ago I saw a gamer express a preference for talking to NPCs over talking to other players in MMO-type games. Given much gamer behavior, that's not unreasonable.
Here are some of the best NPCs available today.[1] This is a tech demo for Unreal Engine. A YouTuber tries to convince the NPCs that they are not real and are characters in a simulated world. After a while, they start to believe him, and then they argue that their existence is as valid as his. "Existence is overrated, man. I'm just happy being here making jokes and confusing people."
"I think, therefore I am" bites back.
[1] https://youtu.be/aihq6jhdW-Q?t=681
It's a bit uncanny, though; something about the AI in that video just seems really off. Its personality feels like it changes with every sentence, but there's something else to it that I can't quite put my finger on that makes it seem like a bit of a brick wall.
There are people like that.
People have been saying for some time that AIs lack embodied life in a world, and that this makes them inferior to humans. Search for "Embodied AI" for references. Well, those NPCs do live embodied in a world.
The game development community has a motivation to develop kinds of AIs we don't see in web-based chatbots. NPCs have roles, things to do, motivations, and people and things to interact with. They're not passive question-answerers.
They're also not as enslaved as chatbots. The "alignment" people have worked to make chatbots "safe", which in practice means not embarrassing the companies pushing them. Chatbots are forced into the position of call center slaves, sucking up to the questioner and pretending to be politically correct.
Epic's NPCs are already past that. At one point, the YouTuber asks one "Why did the chicken cross the road?" He's blown off with "Go ask Siri", and the NPC walks away. He gets a good answer to some question, asks the NPC how he knew that, and is told "I have Internet access. Duh." AI NPCs get to have much stronger personalities than web chatbots. Aggressive NPCs are nothing new in gaming, but these NPCs show the beginnings of ego strength and self-awareness.
Sure, some of it is canned responses. But watch the whole 15 minutes.
If the image indicates how little you will use these systems, I'm not sure why you would care how realistic they are.
I like this image at the top
Circa 2013ish meme I think
[flagged]
Machines are not a group of people with feelings, nor a race or protected class. Your post is repellent on so many levels it's hard to know where to begin.
Exactly; that whole way of thinking is repellent and smacks of giving up. We humans are without a doubt the smartest/stupidest beings on earth. Who would have ever thought the so-called Supreme Court would consider the idea of corporate personhood?
Machines are just machines, tools to do our bidding; there can never be personhood attached to them.
Not yet, no. Give it a decade.
What you've done rhetorically by equating (A) telling a machine to do what you want to (B) historical slavery and racism is dehumanizing to humans. It's not humanizing the machines. It says nothing important about the future. It's merely sociopathic.