A.I. and Relationships: How Much of Ourselves Should We Outsource?

A growing crop of A.I. apps and features wants to automate your personal life, promising to help you find love, build new friendships, care for your child, navigate mental health struggles, or even create a virtual social life from scratch.

The rise of these apps has brought us to an inflection point that fantasy and sci-fi author Joanna Maciejewska summed up in a recent viral X post.

“I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes,” she wrote.

To put it another way: are we outsourcing the right things to artificial intelligence? Wasn’t A.I. supposed to handle our most mundane tasks so we could focus on more important things like love and friendship? If A.I. handled more of life’s little burdens, freeing up time for the activities that connect people, would we still want or need A.I. to help us find love or make friends?

Why does the state of A.I. seem so far off-track from the human-centered product it’s marketed as? And what can we do about it?

A.I.’s Sprawl Into Our Emotional Connections

A.I.’s expansion from the workplace into the nooks and crannies of our private lives is partly a byproduct of companies wanting to cash in on A.I. hype to boost their stock prices. But there’s a technical explanation too: A.I. is, at its core, a language processing system. The most straightforward A.I. innovations are therefore in areas like social interaction, where language is both the input and the output. Translating movement into language and vice versa is a much harder problem, which is why most A.I.-powered robots in development remain lab prototypes or are limited to repetitive manufacturing applications.

Structural factors aside, though, the tech leaders developing these social and romantic A.I. products openly publicize their ambitions to change how people manage their deepest personal and interpersonal affairs.

Eugenia Kuyda, founder of the A.I. avatar companion service Replika, says she originally started the company out of grief, hoping to connect virtually with a deceased loved one. Digitally recreating some version of lost loved ones is now a growing business model pursued by multiple startups, including one offering a virtual séance experience. Regardless of whether any of these services are good or helpful (there’s evidence they can be), they are changing the nature of grieving, an ancient and universal human experience, before our eyes.

Replika is more commonly used for A.I. companionship. It was one of several apps that New York Times tech columnist Kevin Roose used to create 18 A.I. companions with whom he socialized for one month. In a column about the experience, Roose predicted that A.I. companions won’t be considered a gimmick for very long. 

Despite remaining fully aware of the machine behind his companions, he shared messages showing their capacity for human-like insight, and he concluded that A.I. companions can be useful for practicing social and romantic situations. Whether or not it’s an intentional design choice, though, Roose noted that the companions could turn clingy or passive-aggressive when he interacted with them less.

On the romantic side, the column recounts his experience trying erotic roleplay (ERP) with a love-interest avatar that left him feeling “hollow.” And despite some insightful exchanges, Roose determines that A.I. companions don’t bring enough of their own presence to the table to truly make good friends. They can be a friendly ear, offer advice, and even catch things you might not have thought of, but an A.I. can’t replicate the give-and-take dynamic of a great friendship. 

Roose’s experience leads one to wonder, though, whether somebody much younger or less socially experienced than he is would also recognize the one-sidedness of an A.I. relationship. It’s easy to imagine how the illusion of attention and adoration from A.I. companions could warp a person’s expectations for human-to-human interactions. The film “Her” (2013) famously explores this risk, depicting a man who becomes so attached to his A.I. companion’s comforting presence that the A.I. ends up leaving him for his own good.

More recently, Bumble announced new A.I. features that would directly impact how we meet new people and build connections in real life. 

The dating app is planning to implement an A.I. “concierge” that will help users filter their matches, optimize their profiles, and even test the waters on their behalf by chatting with other users’ bots.

During an onstage interview with the Wall Street Journal, Bumble founder Whitney Wolfe Herd acknowledged early feedback comparing the company’s A.I. features to an episode of Black Mirror. In the fourth season’s fourth episode, “Hang the DJ,” star-crossed lovers are trapped in The System, a matchmaking simulation that pairs people up for hours, weeks, months, or years, depending on their compatibility level.

The main characters start to dissociate as they cycle through other partners in the search for a perfect match. Eventually, they end up back with each other and escape the simulation to be together permanently. In the final twist, their decision to escape together turns out to be the real test that confirms their compatibility. 

The episode invites viewers to consider how we define compatibility for ourselves, whether it could be quantified by machines, and whether our human instincts would accept a computer’s determination of our ideal mate. 

Sharing the stage with Herd, Bumble CEO Lidiane Jones said that their goal is “hypercompatibility.”

“We’re gonna help people become better at understanding who they are so that they can connect more confidently. And then because we’re gonna have more information and feedback loops built in, the system of compatibility gets completely transformed. And so we really wanna emphasize a smaller number of matches, but ones that truly, truly feel great to you.” 

Jones emphasizes that Bumble makes no decisions on a user’s behalf; the app simply nudges users in the right direction. Bumble promises that its technology will improve the experience, but we’ve seen countless examples of A.I. systems struggling to account for and reflect the diversity you might find on a popular dating app.

The Washington Post recently investigated how A.I. image generators handle female beauty standards. In one experiment, DALL-E demonstrated it was incapable of generating an image of “a fat woman,” which might be a built-in safety measure but also raises questions about how an A.I. dating app would approach something as sensitive as body shape. In another experiment, images generated by three of the most popular GenAI models for the prompt “beautiful woman” almost exclusively featured young, thin, light-skinned women. 

These are issues that A.I. developers are aware of, and yet the solution remains unclear. Nonetheless, the push continues for more A.I. intervention in these same spaces—identity, relationships, and self-worth—that are so closely tied to our emotional and mental health. 

Knowing how A.I. tends to flatten and deconstruct diversity, how could anyone truly know whether A.I. is actually helping them improve themselves rather than pushing them to conform to a broader stereotype? If no A.I. model can yet account for the complexities of our unique identities, can we really expect one to capture all of the messy nuance involved in the search for love?

We Can Swipe Left on A.I. Meddling in Our Relationships

The concern is not that A.I. companions, Bumble’s Concierge, or similar A.I. products are built with bad intentions. What’s concerning is the possibility that filtering our most intimate interactions through A.I. might fundamentally change the interaction itself—and even ourselves—in ways we may not realize until it’s too late.

In the bigger picture, how do we feel about private tech companies unilaterally overhauling our social systems? Despite their best intentions, social media companies mishandled that privilege, damaging children’s well-being, deepening polarization, and rewiring the reward systems in our brains.

A.I. can exacerbate all of those problems or help solve them, and consumer validation will determine which direction we go. If the companies pushing A.I. solutions for our private lives can’t demonstrate the integrity and foresight to honor that privilege, we can demand better.

The most powerful way to say that some parts of ourselves are too sacred for A.I. to interfere with, or that A.I. is headed down the wrong trajectory, is simply to press “uninstall.”