Big Tech and Big AI Don’t Have To Win: Just Look Up

The writer Zadie Smith doesn’t have a smartphone, and in a recent interview Smith notes that when she shared this fact with an audience of college students, they weren’t horrified at all. The students were jealous. Impressed, they asked Smith how they could have more control over their digital lives. It’s simple, said Smith. “Just look up.”

I’m a deeply political person (I was a political and issue advocacy consultant for two decades before I focused full time on leadership and mental health), but I don’t post about politics on LinkedIn. This is a personal, even selfish decision. LinkedIn has its faults, but I relish that it’s the last social media platform that doesn’t try to outrage me with every post. I can have conversations here about things that matter without jolting between heart-wrenching scenes of violence and posts hawking skincare and menopause supplements.

And yet there’s a conversation to be had on LinkedIn, because technology is political, and tech powers almost every move we make at work. So today I want to highlight two civic, political issues that are also work and leadership issues. The first is how we implement AI, and the second is Big Tech and social media addiction.

I want to start by saying that I credit my career to digital media, and I use AI every day. I’m not a hater, but it’s our civic duty to seek regulation and responsibility for how these technologies interact with our lives. Nearly everyone I know wants to feel less dependent on social media. We love AI, but we worry about what’s going to happen. As a political messaging strategist, I can tell you 100% we wouldn’t be in this horrible place in our country if it weren’t for social media and its incredible power to inflame, incite, and drive emotion. And if the government doesn’t get involved soon, artificial intelligence will create even worse outcomes for our democracy and our workplaces. So let’s get into it. I’m going to share some links that will help you tune in, and you can decide from there!

And if you’re going to be at this year’s SXSW, come to my talk How To Manage AI Anxiety At Work, and let me know what you think.

What Will The AI Future Look Like? The Time To Act Is Now.

Science fiction is happening! The story this week has focused on something called Moltbot, which Elon Musk says is the beginning of the “singularity,” because AI agents now have a social network where they can… network? (The “singularity” refers to the hypothetical point at which machine intelligence equals or surpasses human intelligence.) Here, I’m quoting from Fortune magazine: “It started when Austrian developer Peter Steinberger created Moltbot—formerly known as Clawdbot and rebranded again as OpenClaw—as an AI agent that can manage calendars, browse the web, shop online, read files, write emails, and send messages via tools like WhatsApp. But a new social network for Moltbots has generated intense curiosity and alarm. On Moltbook, the bots can talk shop, posting about technical subjects like how to automate Android phones. Other conversations sound quaint, like one where a bot complains about its human, while some are bizarre, such as one from a bot that claims to have a sister.” Not to mention, it’s a “security nightmare,” writes the Cisco blog.

I asked a colleague who is an AI expert if the “Molt” situation should increase my AI anxiety. And he said, “Well the moltbot / open claw (https://openclaw.ai/) tool certainly should. I think that moltbook fuels my own anxiety about how dumb AI bots might destroy the world.  A bunch of people thought: I’m going to setup and run a super-powered AI bot on my own computer that basically has permission to do anything (if you set it up that way) and then I’m going to tell it to go and talk with a bunch of other people’s agents that I don’t know.  What could go wrong?”

And that’s what we need to be asking ourselves: What could go wrong? This isn’t nervous-Nellie fear-mongering. If you read me, you know I deal in anxiety at work and I’m a worrywart. I don’t think this is that. I certainly don’t know enough about technology to predict our AI future, but the people who do seem pretty concerned, frankly begging for someone to show some leadership here. Don’t take it from me. Read Anthropic CEO Dario Amodei’s latest essay. It’s long, but worth your time. And this piece in New York Magazine is a great overview of Amodei’s essay.

Demanding that all of us have a hand in how AI transforms our world isn’t worrywarting; it’s good leadership and good public policy. It’s good corporate policy, too. But most companies are too busy throwing money at AI, and too frustrated that it isn’t showing “results” fast enough, to ponder the near future. If you’re a leader, you can do this differently. Watch this short clip of BetterUp’s Director of AI Transformation 🌊 Lee Gonzales talking about his own AI anxiety, and how he’s addressing it through education and action.

Listen to my interview with Lee here, where he details his program, Passengers to Pilots, and how it’s shifting employees’ sense of agency over their AI future. And ask yourself, ask your leadership, ask your representative: why aren’t we listening to the experts?

Social Media: Landmark Big Tech Addiction Lawsuit

The average young person spends five hours a day on social media, and no one thinks that’s healthy. Most of us feel powerless to change it. But we’re not. That’s why I want you to know about the Big Tech “addiction” lawsuit filed last week in Los Angeles, where “the world’s biggest social media giants are heading to court for the first time in a wave of landmark trials that will determine whether their platforms are responsible for harming children.” It’s the first time platforms like Meta face consequences for profiting from products engineered to be as addictive as tobacco. Learn about the lawsuit here.

One of my favorite psychologists to learn from is Tracy Dennis-Tiwary, who writes a great Substack. Anyone who’s clicked on a delicious Instagram ad or gone down a rabbit hole understands that we use social media to avoid difficult emotions. But a couple of weeks ago, Dennis-Tiwary introduced me to a fascinating question: do we use tech and social media to regulate our emotions for us, especially the uncomfortable ones? Tracy writes,

“Our emotional lives have been changed by our digital ones. Tech teaches us that, in the blink of an eye, we can feel something else, be something else, and push all the unpleasant things to the backburner. Tech has become a powerful volume control on our feelings - what psychologists like me call emotion regulation. We turn the knobs up and down on our emotions, amplifying, soothing, or muting them altogether, through our media of choice. Using digital tech in this way can become a powerful habit, a default strategy for what we do with the feelings and experiences we don’t want. The problem with these habits is that if we chronically use tech to cut off or amplify our emotions, we block our ability to work with and benefit from them.”

She continues, “The ways our emotions ‘work for’ us online often don’t translate to working well for us offline. That’s by design. A key feature of our digital selection environments is that they are optimized for frictionlessness - ease, speed, and efficiency - over slow deliberation. It’s a simple equation: less friction = more screentime = more time spent consuming content and buying stuff. Yet the more frictionless our online lives become, the fewer opportunities we have to practice discomfort, boredom, confusion, or distress. We come to expect bad feelings to resolve instantly, just as we expect food to be delivered in minutes and rides to appear at our doorstep with a tap of the screen.” That experience ill-prepares us for the real world. And you’re probably familiar with someone who is brash or downright mean online, who shames people anonymously, and who would never do that in person.

I don’t want my children to live in this world, but they do. It’s time to wrest our attention away from Big Tech and back to where it matters. Physically, emotionally, cognitively, social media can harm people, especially young people. Learn more:

Check out the “mountains of evidence” that social media harms young people: here is a Substack post from Jonathan Haidt and Zach Rausch, with findings from 31 studies that Meta itself carried out but never released.

If you want a good listen on the topic, check out Aarti Shahani’s recent episode of The Forum on KQED.

Morra

P.S.: For a stark illustration, and to take action and get ICE out of our communities, check out Scott Galloway’s website ResistAndUnsubscribe to see how Big Tech supports ICE.
