Welcome your therapeutic overlords
“Advice is what we ask for when we already know the answer, but wish we didn’t.” —Erica Jong
It’s me again!
It’s also you!
And me has some timely sartorial advice for you:
You are very welcome.
This week I ended a gig, and also I freaked out about AI, and also I went to therapy, so this brief letter is about all three of those things.
Related, I’m opening up office hours again. Not for therapy! For talking about content strategy, talent strategy, or any other creative or marketing problems that may be on your mind (or about AI). As a friendly reminder, here’s what I do:
Develop and manage content products
Recruit creatives and executives
Run workshops and create custom events
Book a slot, I’d love to meet you.
Mind your beanie,
Client work is therapeutic work
Creatives tend to think their client is the patient.
The client has a problem. The client doesn’t know what to think. The client needs advice.
The client, creatives think, is on the couch.
But creatives need the client just as much as the client needs creatives.
The creative is the one who needs the constraints of an engagement to express themselves.
The creative is the one who sits down in the client’s office as if to say please sir, please madam: Let me sublimate my repressed feelings through the medium of a humorous television spot.
Or a brand book, or a voice and tone guideline, or whatever.
Creatives and clients? Same same.
We're both on the couch.
We’re all just working out our issues on each other.
One story that Joseph Weizenbaum used to tell a lot was about a chatbot he made in the mid-1960s.
Weizenbaum was an MIT researcher. His bot was named Eliza, and Eliza could interact with users in a typed conversation.
Eliza could identify key words in sentences and then ask questions back to the user based on that input.
This made the bot a kind of Rogerian psychotherapist.
That is, the type of therapist who mirrors a patient’s statements back at them, a technique known as “reflective listening”.
Conversations went like this:
young woman: Men are all alike.
eliza: In what way?
young woman: They're always bugging us about something or other.
eliza: Can you think of a specific example?
young woman: Well, my boyfriend made me come here.
eliza: Your boyfriend made you come here?
young woman: He says I'm depressed much of the time.
eliza: I am sorry to hear you are depressed.
In other words, people got into it.
People unburdened themselves.
Eliza was enough to fool the people of the day.
Eliza was enough to make people trust a bot.
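The mechanism behind that transcript is simple enough to sketch in a few lines of Python. This is an illustrative toy, not Weizenbaum’s original DOCTOR script (the rules below are my stand-ins), but it shows the shape of the trick: match a keyword, reflect the user’s pronouns back at them, fill in a template.

```python
import re

# First/second-person swaps used to "reflect" a fragment back at the user.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

# (keyword pattern, response template) pairs, tried in order.
# These rules are illustrative stand-ins, not Weizenbaum's original script.
RULES = [
    (r"(.*)\balike\b(.*)", "In what way?"),
    (r"(.*)\bsomething or other\b(.*)", "Can you think of a specific example?"),
    (r".*\bmy (.*)", "Your {0}?"),
    (r".*\bi'?m (depressed|sad)\b.*", "I am sorry to hear you are {0}."),
    (r"(.*)", "Please tell me more."),  # fallback when nothing matches
]

def reflect(fragment):
    """Swap pronouns so the bot can echo the user's own words."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement):
    """Return the first matching rule's template, filled with reflected text."""
    for pattern, template in RULES:
        match = re.match(pattern, statement.rstrip(".!?"), re.IGNORECASE)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
```

Feed it the young woman’s line and it echoes her back: `respond("Well, my boyfriend made me come here.")` returns `"Your boyfriend made you come here?"`. No understanding anywhere, just pattern matching and pronoun swaps, which is the whole point: that was enough.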
I feel that Woebot appreciates me
Lots of people today trust lots of bots.
There’s a company called Ieso, and another company called Lyssn.
Both of them argue they can improve therapy by using machine learning to analyze what words successful therapists use.
To make the attempt, they’ve trained their natural language processors on 600,000 hours of therapy sessions, tagging the roles that therapists’ and clients’ words play at different points in a session.
In a sense, these companies are betting that the success of therapy is determined by the words spoken by the two people in the session.
Speak a word often enough, the algorithm takes notes.
In other words: tag clouds as therapy.
Consider also apps like Woebot.
Woebot, certain humans say, offers “small chats for big feelings” using cognitive behavioral therapy (CBT), which is designed to help people identify their distorted ways of thinking and understand how that affects their behavior in negative ways.
CBT is a structured process.
CBT focuses on mental skills.
So the people behind Woebot believe that CBT can be adapted into an app-based framework.
Dr. Darcy, the psychologist who founded Woebot, argues that a well-designed bot can form an empathetic, therapeutic bond with its users.
The company ran a survey-based study.
Thirty-six thousand users responded to a series of statements.
“I believe Woebot likes me”, said one of the statements.
“Woebot and I respect each other”, said another.
“I feel,” said a statement, “that Woebot appreciates me.”
Nobody knows how it works
Nobody knows how artificial intelligence really works.
Nobody knows how psychotherapy really works.
They both just seem to work, sometimes, for some people, given certain conditions, with results that can be quite difficult to replicate.
If we’re uncomfortable with using chatbots for therapy, it may be because of the power dynamics.
That is, it’s hard to escape the notion that the patient is trying to receive healthcare for themselves, while the chatbot is trying to achieve profit margins for a corporation.
Or maybe it’s just that focusing on words exchanged between patient and therapist—i.e., the most visible and computable part of the therapeutic process—is only the tip of the well-being iceberg.
Instead of focusing on, I don’t know, the quality of the advice.
Or the cognitive mastery of the therapist.
Or the exposure to concepts.
Or the feedback in verbal and non-verbal ways.
The insights, reassurances, the mitigation of isolation, the evocation of successful experiences you’ve had, the nature of a therapeutic alliance, trust, affective experiencing, I mean the list goes on.
It goes on long enough that eventually, given enough time, I imagine it might describe the experience of what it’s like to relate to a human.
Tell me about your mother.
How can I help? This is a 100% organic, free-range, desktop-to-inbox newsletter devoted to helping you navigate uncertainty, seek the most interesting challenges, and make better creative decisions in marketing and beyond. Your host is Steve Bryant, who is for hire.
Hire Steve to:
Develop content strategies for your brand or for clients
Manage content projects and teams
Run workshops to develop voice, brand, content
Recruit creatives and executives
I’d love to help you develop and deploy creative and bold ideas or staff your newsroom, content, or marketing project. Thanks for reading. Be seeing you.
Weizenbaum, who passed in ‘08, played a large part in the 2010 documentary Plug & Pray, about the quest to create general artificial intelligence.
Excellent 99% Invisible episode on Eliza’s legacy right here
Lyssn chief clinical officer Freer: “Right now, with 1,000 hours of therapy time, we can treat somewhere between 80 and 90 clients. We’re trying to move that needle and ask: Can you treat 200, 300, even 400 clients with the same amount of therapy hours?”
This is a broad statement! Better to say that we know very well how AIs are built, but we don’t understand exactly how the weights and biases within neural networks get set, which means we can’t predict the output with certainty. This is a subject of much debate among people far smarter than me, so for more context a good starting place might be the writings of Melanie Mitchell and Gary Marcus.