THE EFFECTIVE PROBLEMSOLVER #109
Last week, I asked ChatGPT how to reduce homelessness in my city.
It gave me a beautifully formatted list of best practices — permanent supportive housing, coordinated entry, wraparound services, prevention strategies.
All very sensible. All very familiar.
And it hit me: AI sounds just like every report I’ve read in the last twenty years.
If you’re like me, you’ve been using AI for a while now.
Tools like ChatGPT make work faster and cleaner — helping draft emails, organize thoughts, summarize reports, and surface information you’d otherwise have to dig for.
It’s a great writing accelerator, planning assistant, and organizer of chaos.
But when it comes to tough social problems — homelessness, opioid deaths, crime, education — it’s useless.
Try it yourself.
Ask ChatGPT how to reduce homelessness in your community. You’ll get a neat list of “best practices” that sound impressive.
It reads like a center-left think tank report — the greatest hits from the first hundred Google search results.
And yet, we know how that story ends.
Spending on homelessness has skyrocketed. In many places, the problem has only gotten worse.
I’ve sat in meetings where everyone nods along to the same PowerPoint slides — the same “best practices” — while the shelters down the street are overflowing.
The gap between what we know and what we do is where the real heartbreak lives.
That shouldn’t surprise us.
Homelessness isn’t one thing — it’s a network of interconnected issues, each shaped by local context.
Every person’s story is different.
Every community’s flow dynamics — who enters, who exits, what resources are available — are different too.
There’s no single answer.
There’s only local troubleshooting.
That’s why tactics like by-name tracking (knowing exactly who is experiencing homelessness and why) and regional command centers (coordinating inflow, outflow, and service capacity) make sense.
They’re not “the answer.”
They’re just common sense ways of paying attention and a platform for adapting over time.
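To be fair to the tactics, the "paying attention" part is mostly simple bookkeeping. Here is a minimal sketch of the inflow/outflow arithmetic behind a by-name list; the class, the field names, and the numbers are all hypothetical, invented purely for illustration, not anyone's real system.

    # Hypothetical sketch of a by-name list's monthly bookkeeping.
    # Every name and number below is made up for illustration.
    from dataclasses import dataclass

    @dataclass
    class MonthlyFlow:
        starting_count: int  # people actively homeless at the start of the month
        inflow: int          # newly identified, plus returns from housing
        exits: int           # moved into housing or otherwise resolved

        def ending_count(self) -> int:
            # The community's count at month's end.
            return self.starting_count + self.inflow - self.exits

        def hit_target(self, reduction: float = 0.10) -> bool:
            # Did this month land at or below a 10% reduction
            # against the starting count?
            return self.ending_count() <= self.starting_count * (1 - reduction)

    month = MonthlyFlow(starting_count=400, inflow=35, exits=80)
    print(month.ending_count())  # 355
    print(month.hit_target())    # True, since 355 <= 360

The point isn't the code. The arithmetic is trivial; the hard part is keeping those three numbers honest, month after month, in a real community.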
The real question isn’t “How do we end homelessness?”
It’s: “Given what’s happening here, how do we get a 10% reduction in total homelessness, with fewer people flowing in and more people exiting into housing?”
AI can’t help with that.
Because it’s not trained on what’s happening here.
It’s trained on what’s already been written — by people summarizing someone else’s summaries.
If your local answer hasn’t been written yet (and it hasn’t), you won’t find it in a model trained on the past.
Let’s make this concrete with examples unrelated to homelessness.
Can AI help two siblings, who argue all the time, get along?
No.
Can it help a dysfunctional family find harmony?
Still no.
Okay, so what about a small neighborhood of stubborn, opinionated NIMBYs — can it help them overcome partisan divides to fix potholes, improve garbage collection, or tackle other everyday issues?
No again.
So help me understand how it’s going to help the single mom living out of her car, or the man using drugs in the encampment under the overpass.
People can help them. Communities can help them. Organizations can help them.
But AI?
I don’t see it.
There’s no magic innovation or policy or solution that AI is going to create to fix this.
I’ve seen this up close in workforce development here in Minnesota.
I wouldn’t touch an AI answer to it with a 10-foot pole.
The barriers to progress aren’t about content — they’re about people.
The ideas, soundbites, and “best practices” are already everywhere. You can get them from a think tank report, a conference keynote, or the latest ChatGPT answer.
What’s missing isn’t knowledge — it’s follow-through.
Collaboration. Adaptability in strategy. Consistency in implementation.
The discipline to track spending and results over time.
That’s where change happens, and that’s where AI has nothing useful to say.
I know some of you are going to say, “Look, you admitted that people in the social sector use AI to make work faster and cleaner: writing emails, lobbying testimony, grant reports. That alone will free up so much time to do more, or solve more.”
Maybe.
But saving time doesn’t automatically translate into more progress.
If you’re doing the wrong thing, you’ll just do it faster.
Nothing beats critical thinking — in context, in real relationships, and in the messy, adaptive work of getting things done with others.
I’m super bullish on AI for a lot of things.
But not for this.
Still, if you’ve seen it work — if you’ve seen AI help real people make real progress on a complex problem — I’d honestly love to hear about it.
Just reply to this email.
Maybe you’ll change my mind.
Because if there’s one thing I’ve learned, it’s that being wrong, in good company, is how learning starts.
See you in two weeks.
— Bryan