5 Surprising Truths About How We’re Really Using AI (My Field Notes on Anthropic’s Economic Index)
Here’s my take—written in my voice—on Anthropic’s latest report, shaped for leaders, operators, and neighbors here in Santa Clarita who actually need AI to move the needle in real business.
Introduction: Beyond the Headlines—Into the Work
The source material: Anthropic’s Economic Index report, published as a PDF.
Artificial intelligence is on every screen, in every boardroom, and all over the news. But headlines don’t ship results—process does. I care about what actually gets adopted at the keyboard, in the CRM, at the sales desk, and on the job site. Anthropic’s new Economic Index cuts through the noise with usage data—who’s using AI, where, and for what. Some of the findings confirmed what I’m seeing daily at SantaClaritaArtificialIntelligence.com. Some of it surprised me—in good ways and in ways that should make us act fast.
If you’ve followed my work, you know I’m relentless about two things: alignment and execution. Tools don’t create outcomes—discipline does. Still, the right tool, in the right hands, with the right data, at the right time, can change a business quarter. The Index helps us understand why certain regions and teams are punching above their weight, why “more automation” isn’t always the answer, why price is rarely the true constraint, and why the biggest blocker isn’t model quality—it’s your data house. And yes, there’s a warning I take very seriously: AI is not spreading evenly. If we don’t build locally and intentionally, the gap widens.
Below are the five truths I drew from the report, with my practical spin as “Connor with Honor”—operator, realtor, and AI systems builder for Santa Clarita businesses.
1) The World’s AI Hotspots Aren’t Where You Think
What the data says: Per-capita usage is where the real story lives. In the U.S., Washington D.C. and Utah lead per person, ahead of California. Globally, small, tech-forward nations—Israel, Singapore, Australia, New Zealand, South Korea—top the charts. Bigger economies like the U.S., Canada, and the U.K. still show strong adoption, but the leaders are agile places with decisive policy, dense expertise, and tight feedback loops.
What that means for us:
A lot of folks assume you have to be in San Francisco or Seattle to use AI well. Not true. What matters is a culture of implementation and the ability to route decisions quickly. We have that in Santa Clarita—plenty of founder-operators, service pros, and small teams who can move fast without a 12-person committee. That’s an advantage.
If you’re a local business owner (realtor, lender, contractor, medical practice, restaurant), you don’t need “Silicon Valley” to compete. You need:
A prioritized set of AI-eligible workflows (clear, high-value, measurable),
The discipline to test weekly, and
A way to capture what works into playbooks so your team repeats it.
Small, properly run teams out-execute large, unfocused ones. That’s the lesson from the Index: velocity beats volume.
Action for SCV teams:
Start per-capita: measure AI usage per seat and per workflow. A 10-person shop that’s 70% augmented on intake, quoting, and follow-up can outperform a 70-person shop where three people “dabble.”
Build local clusters: title/escrow + lender + realtor + inspector can share aligned AI prompts for scheduling, disclosures, and milestone updates. A micro-cluster behaves like an “Israel/Singapore” inside our valley.
2) Mature AI Users Automate Less and Collaborate More

What the data says: Lower-adoption regions tend to push AI toward full task delegation (e.g., “just finish the code”). As adoption matures, users shift toward augmentation: brainstorming, learning, iterating, and using AI as a thinking partner. That shift holds even when you control for task type. In other words, it’s not the work—it’s the posture.
What that means for us:
Automation is seductive because it feels like “set and forget.” But the highest-ROI use cases I see locally start as co-pilot patterns:
Drafting and red-teaming emails, proposals, and scope statements.
Brainstorming campaign angles, subject lines, and headline variants.
Rewriting complex process explanations for a 9th-grade reading level.
“Rubber-ducking” a strategy: “Point out hidden assumptions in my plan.”
Iterating live on a phone script until it matches your voice.
As your team matures, you don’t hand the keys to the car—you build the habit of looping with AI. Think of it as a tireless junior partner who is excellent at first drafts, counterpoints, and structured thinking, and who never resents your edits.
Action for SCV teams:
Reframe adoption goals from “automate X% of tasks” to “co-create X% of deliverables.”
Institute a 3-pass rule: Draft with AI → You revise → AI improves per your notes. That third pass is where brand voice and quality lock in.
Track “time to a good first draft” as a KPI. Speed to clarity is often the unlock for revenue.
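The 3-pass rule is easy to sketch in code. Here’s a minimal Python illustration, assuming a hypothetical `call_model` stand-in for whatever AI tool you actually use; the function names and the stubbed output are mine, not from the report.

```python
import time

def call_model(prompt: str) -> str:
    """Stand-in for your AI tool of choice (hypothetical; swap in a real API call)."""
    return f"[model output for: {prompt[:40]}...]"

def three_pass_draft(task: str, human_edit) -> tuple[str, float]:
    """Run the 3-pass rule and time how long a usable draft takes (the KPI above)."""
    start = time.time()
    draft = call_model(f"Draft: {task}")                        # Pass 1: AI drafts
    revised = human_edit(draft)                                 # Pass 2: human revises
    final = call_model(f"Improve per these notes: {revised}")   # Pass 3: AI refines
    return final, time.time() - start

# Usage: plug in your real editing step; a trivial annotation stands in here.
final, seconds = three_pass_draft(
    "follow-up email after a listing appointment",
    human_edit=lambda d: d + " [note: warmer close, mention timeline]",
)
```

The point of structuring it this way is that pass two is never optional: the human edit is the input to the final refinement, which is where brand voice locks in.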
3) For Businesses, Capability Beats Cost—By a Lot
What the data says: Businesses are less price-sensitive than many assume. Higher-capability tasks get used more, even if they cost more per interaction. When you control for task characteristics, price still matters, but weakly. Translation: teams will pay for power if it moves revenue, risk, or cycle time.
What that means for us:
If a model saves you 90 minutes of a $150/hour employee—or helps convert two extra deals a month—then quibbling over pennies per 1,000 tokens is missing the forest for the trees. I’m careful with cost controls, but I’m ruthless about opportunity cost. The expensive model that makes your client say “yes” one week earlier is cheap.
This is doubly true in real estate, lending, legal, healthcare, and home services—domains where small improvements in clarity and speed create outsized trust and conversion.
Action for SCV teams:
Tie model choice to outcome class: compliance language, high-stakes client emails, money-math explanations → use the strongest tool you have. Internal note cleanup? Use the cheaper tier.
Instrument ROI at the workflow level: If a pricier model lifts appointment-set rate from 12% to 17%, the math is done. Buy the lift.
Cap costs with guardrails (max tokens, review queues) without downgrading capability where it matters.
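Those guardrails can be a very thin layer of code. Below is a sketch of the idea, with made-up topic names and a rough token estimate; it is not any vendor’s API, just the routing logic: cap output size, and send sensitive drafts to a human review queue instead of auto-sending.

```python
# Hypothetical guardrail layer: cap output size and route sensitive
# drafts to a human review queue instead of sending automatically.
MAX_OUTPUT_TOKENS = 800
SENSITIVE_TOPICS = {"compliance", "pricing", "legal"}

review_queue: list[dict] = []

def approximate_tokens(text: str) -> int:
    # Rough rule of thumb: about 0.75 words per token; close enough for a cap check.
    return int(len(text.split()) / 0.75)

def dispatch(draft: str, topic: str) -> str:
    if approximate_tokens(draft) > MAX_OUTPUT_TOKENS:
        return "rejected: over token cap"
    if topic in SENSITIVE_TOPICS:
        review_queue.append({"topic": topic, "draft": draft})
        return "queued for human approval"
    return "sent"

print(dispatch("Here is your quote breakdown...", "pricing"))   # queued for human approval
print(dispatch("Thanks, see you Tuesday!", "scheduling"))       # sent
```

Notice the cap and the review queue are separate checks: you can keep full model capability for the sensitive stuff while still controlling what goes out the door.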
4) The Biggest Hurdle Isn’t the AI—It’s Your Data (and Context)
What the data says: The report explains a very real bottleneck: high-value tasks require rich context. You can give an AI the entire codebase and say, “refactor.” That’s straightforward. But “build me a sales strategy” requires tacit knowledge—what’s in your people’s heads, your call recordings, your messy Google Drive, your CRM in shorthand. There are diminishing returns to just stuffing more text into the prompt; you need the right context and a way to organize it.
What that means for us:
If you want AI to think like your best rep, your best appraiser, your savviest TC, you must make what they know available to the system in a structured way. This is the hard, unglamorous work—what I call data modernization:
Clean your custom values (names, phones, SLAs, fees, timelines).
Centralize SOPs and decision trees (quotes, escalations, exceptions).
Summarize call transcripts and tag them by scenario (objection type, persona, stage).
Create short knowledge cards (200–400 words) for recurring situations.
Once that foundation exists, AI can reason with your world, not a generic one.
Action for SCV teams:
Build a Context Ladder in four rungs:
Static facts: hours, service areas, pricing ranges, turnaround times.
SOPs & checklists: exact steps for intakes, quotes, follow-ups, closings.
Tacit notes: “What experienced staff watch for,” “phrases that convert,” “things never to promise.”
Case summaries: 10–20 “golden” past cases with who/what/why/lessons.
Keep each unit short and atomic. Ten crisp cards beat one 40-page policy doc.
Refresh monthly. Assign ownership. If nobody owns context, nobody owns outcomes.
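If you want the Context Ladder to survive contact with a busy team, give each card a simple, uniform shape. Here’s one minimal way to model it in Python; the field names and the 31-day staleness rule are my assumptions, not a standard.

```python
from dataclasses import dataclass, field
from datetime import date

# The four rungs of the Context Ladder described above.
RUNGS = ("static_facts", "sop", "tacit_note", "case_summary")

@dataclass
class KnowledgeCard:
    title: str
    rung: str                 # one of RUNGS
    body: str                 # keep to roughly 200-400 words
    owner: str                # someone must own each card
    last_reviewed: date = field(default_factory=date.today)

    def __post_init__(self):
        if self.rung not in RUNGS:
            raise ValueError(f"rung must be one of {RUNGS}")

def stale_cards(cards, today=None, max_age_days=31):
    """Cards overdue for the monthly refresh."""
    today = today or date.today()
    return [c for c in cards if (today - c.last_reviewed).days > max_age_days]
```

The `owner` field is the important one: a monthly run of `stale_cards` tells you exactly which cards and which people are behind, which is how “refresh monthly, assign ownership” becomes enforceable instead of aspirational.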
5) AI Is Spreading Unevenly—Mind the New Digital Divide
What the data says: Early usage clusters around wealthy regions and certain task categories (e.g., coding). Historically, when technology concentrates, inequality widens before it narrows. The same risk is here: if high-adoption teams capture more productivity sooner, the gains can accumulate to the already-advantaged.
What that means for us:
If you wait, you don’t just get “late to the party.” You get priced out of the party. The gap becomes cultural, not just technical. Teams that learn to co-pilot, document, and measure AI will lap teams that hesitate. That’s the uncomfortable part of this report: the clock is running.
I don’t say this to scare small shops; I say it to empower you. Small shops that adopt today can skate to where the puck is going. You don’t need 500 features. You need three that you use every day: lead handling, quoting, and follow-up. Nail those, and you’re already not average.
Action for SCV teams:
Pick two frontline workflows and go deep for 30 days:
Missed-call triage → call-back scheduling with templated questions and promises the business can keep.
Quote generation → payment link with two clean price tiers and optional add-ons.
Write down what works. Promote the people who follow the playbook. Now you’re a “high-adoption” team, regardless of zip code.
How I’m Implementing This Locally (Playbook You Can Copy)

Here’s how I translate these truths into operating moves for Santa Clarita businesses. Use this as your first 6-week sprint plan.
Week 1: Inventory & Intent
Workflow audit: Intake, quoting, scheduling, follow-up, onboarding. Score each by friction and revenue sensitivity.
Choose two “money moves”: Workflows that can change topline or cycle time.
Define “good” output: 3 examples of the perfect email, the perfect quote, the perfect voicemail.
Week 2: Context Cards
Write 10–15 knowledge cards (200–400 words each) covering your facts, SOPs, objections, and “voice.”
Build a glossary of industry terms and banned phrases.
Store these in a single, searchable spot and reference them in every AI session.
Week 3: Co-Pilot First
Run the 3-pass rule on real work: AI draft → Human edit → AI refine.
Measure “time to usable draft” and “edits required.” Track daily.
Capture winning prompts into a shared prompt library.
Week 4: Partial Automation with Human-in-the-Loop
Add guardrails: max tokens, required approvals, confidence checks.
Automate the safe middle (e.g., appointment setting) while keeping final send on sensitive items human-approved.
Instrument outcomes: appointment rate, show rate, quote-to-close.
Week 5: ROI & Model Strategy
A/B test model tiers on one workflow where quality matters (e.g., pricing explanation).
If higher-tier lifts conversion or reduces back-and-forth, lock it in for that workflow.
Document cost per win, not cost per thousand tokens.
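The cost-per-win math is worth seeing with numbers. Here’s a small worked example in Python; the dollar figures and rates are illustrative only, not from the report.

```python
# Illustrative numbers only: compare two model tiers by cost per win,
# not cost per thousand tokens.
def cost_per_win(monthly_cost: float, leads: int, close_rate: float) -> float:
    wins = leads * close_rate
    return monthly_cost / wins

cheap = cost_per_win(monthly_cost=40.0, leads=200, close_rate=0.12)    # 24 wins
strong = cost_per_win(monthly_cost=220.0, leads=200, close_rate=0.17)  # 34 wins

# cheap is about $1.67/win; strong is about $6.47/win on tool cost alone,
# but 10 extra wins a month usually dwarfs the $180 difference in spend.
```

This is why the tool bill is the wrong line to stare at: the comparison that matters is the revenue on the extra wins against the incremental spend.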
Week 6: Scale Out & Train
Turn your best two workflows into repeatable playbooks.
Train your team on the why and the how (screen recordings + checklists).
Add one new workflow per month. Slow is smooth. Smooth is fast.
The “Connor with Honor” Guardrails I Use on Every Build
Never promise what the business can’t deliver. Align scripts with operational reality.
Human-sounding ≠ human-lying. AI should clarify, not impersonate.
Data minimization. Ask only for what’s needed to serve the next best step.
Audit trail. Every AI-touched interaction is logged with source context and final human approval where relevant.
Weekly reviews. What did we improve? What broke? What’s the single biggest blocker now? Fix that first.
FAQs I’m Getting from Local Owners (Rapid-Fire)
“Should I aim for full automation?”
Aim for co-creation first. Full automation comes safely later. Speed to a solid first draft is the first profit center.
“How do I get AI to use our voice?”
Feed it your top 5 emails/texts that “won.” Mark what made them work. Use those as style anchors. Build a voice card.
“We’re worried about cost.”
Good—track it. But judge costs against wins. If a better model closes one extra job a week, nobody will miss the pennies.
“Our data is a mess.”
Welcome to the club. Start with 10 knowledge cards and a clean list of custom values. Improve weekly. Don’t wait for perfect.
“How do we avoid the digital divide you mentioned?”
Start now. Pick two workflows. Co-pilot them hard for 30 days. Write down what works. You just closed the gap.
Conclusion: The Future Isn’t Inevitable—It’s Built
Anthropic’s data confirms what I see daily: adoption favors the decisive, not the giant. Capability beats price when outcomes are on the line. The limiters aren’t algorithms—they’re context, clarity, and culture. Teams that organize their knowledge and commit to the co-pilot posture compound advantages quickly. Teams that hesitate watch the ground shift under their feet.
The next 12–24 months in Santa Clarita won’t be about who “has AI.” Everyone will. It will be about who learns with AI the fastest—who gets the right context into the system, who measures outcomes honestly, who trains their people to partner with the tool rather than fight it.
If you’re ready to operationalize this—really do it, not just talk about it—start with two workflows, build your context ladder, and commit to the 3-pass rule. That’s the difference between a demo and a durable advantage.
I’m here in Santa Clarita building this every day with real businesses that answer real phones and ship real work. If you want help—from intake scripting to reputation response, from appointment setting to post-sale follow-through—this is what I do.
— Connor with Honor
SantaClaritaArtificialIntelligence.com