
AI Just Solved Problems Humans Can't. Are We Ready for What's Next?
The Finish Line Is Invisible: AI's Unchecked Sprint Into Our Future
Posted on September 20, 2025 by Connor with Honor

Welcome back to Santa Clarita Artificial Intelligence. Today is Saturday, September 20th, 2025. I’m your host, Connor, and if you’re like me, you’re watching the world of artificial intelligence unfold with a potent cocktail of sheer awe and a creeping sense of existential dread. We stand on the shore, watching a technological tsunami form on the horizon, and we’re still arguing about the best way to build a sandcastle. This week, the wave got visibly, terrifyingly bigger. It has been a doozy, so buckle up.
We are living through a period that future historians, if there are any left to write it, will study with the same intensity we reserve for the Industrial Revolution or the invention of the printing press. But there's a critical difference. Those revolutions happened over decades, over generations. We are experiencing a fundamental paradigm shift in the nature of intelligence itself on a timeline measured in weeks and months. The pace is breathtaking, and the implications are staggering.
This week’s dispatch from the front lines of the AI revolution isn’t about incremental updates or quirky new apps. It’s about foundational cracks appearing in our reality. We saw a historic intellectual milestone surpassed by a machine, a non-profit savior of humanity officially pivot to a for-profit behemoth, and an investment arms race that dwarfs the budgets of nations, all while our legal and ethical frameworks flail about like a toddler in a hurricane. We are building something we do not fully understand, and we are doing it with the reckless abandon of a gold rush. The question is no longer if this technology will change our world, but whether we’ll have any say in how it changes it. Let’s get into the specifics, because the details this week are where the devil, and perhaps our destiny, resides.
The New Deep Blue Moment: When AI Out-Thinks the Thinkers

Let's start with what might be the most significant, yet underreported, story of the week: Google DeepMind and its Gemini 2.5 model. Most people’s eyes glaze over at the mention of a programming contest. But this was not just any contest. Gemini 2.5 delivered a gold-medal performance at the International Collegiate Programming Contest (ICPC) World Finals. To understand the gravity of this, you need to understand what the ICPC is.
The ICPC is the Olympics of computer science. It brings together the most brilliant young minds from the best universities on the planet. They are given a series of complex, abstract problems that require not just coding skill, but profound creativity, algorithmic ingenuity, and the kind of logical leaps that define genius. These aren't problems with known solutions you can look up; they are designed to push the absolute limits of human problem-solving.
And Gemini 2.5 didn't just compete. It took gold, solving a problem that its human competitors, the best and brightest of our species, could not.
The immediate comparison everyone is making is to 1997, when IBM's Deep Blue defeated Garry Kasparov in chess. That was a watershed moment, to be sure. But it’s a flawed comparison that dramatically understates what just happened. Deep Blue was a marvel of specialized engineering. It was an idiot savant, programmed to do one thing with brutal, calculating efficiency: analyze a chess board with a finite set of rules and a clear objective. It won through brute force, exploring millions of possibilities per second, a feat of computation, not cognition.
What Gemini 2.5 did at the ICPC is an entirely different class of intelligence. It demonstrated a form of generalized reasoning. Imagine giving a chess engine a problem about optimizing global shipping routes or designing a new type of protein. It would be useless. Gemini 2.5, however, was presented with novel challenges—perhaps a multi-dimensional dynamic programming puzzle involving resource allocation under shifting constraints, or a theoretical graph theory problem that required a completely new type of traversal algorithm. These are not about computation; they are about comprehension and creation.
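To make that concrete, here is a toy of the genre. This is my own illustrative sketch, not an actual ICPC problem (the real ones are far harder, and every name below is invented): a miniature resource-allocation puzzle solved with dynamic programming, one of the techniques the contest lives and breathes.

```python
# Illustrative toy only, not an actual ICPC problem: allocate a fixed
# budget across projects to maximize payoff (the classic 0/1 knapsack),
# solved with dynamic programming.

def best_payoff(projects: list[tuple[int, int]], budget: int) -> int:
    """projects: (cost, payoff) pairs; returns the maximum total payoff."""
    # dp[b] holds the best payoff achievable with at most b units of budget.
    dp = [0] * (budget + 1)
    for cost, payoff in projects:
        # Sweep budgets downward so each project is used at most once.
        for b in range(budget, cost - 1, -1):
            dp[b] = max(dp[b], dp[b - cost] + payoff)
    return dp[budget]

projects = [(3, 40), (5, 70), (2, 25), (4, 55)]
print(best_payoff(projects, budget=8))  # -> 110: the 40- and 70-payoff projects
```

A strong undergraduate writes that in minutes. ICPC problems stack several such ideas on top of one another, under a five-hour clock, with inputs designed to punish anything short of the optimal approach. That is the terrain Gemini just walked across.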
This is the moment the abstract fear becomes concrete. We've moved beyond AI mastering our games. We are now in the era of AI solving problems we can't. Think about that for a second. The very definition of human progress has been our ability to solve the next problem, to innovate our way out of challenges. What happens to that role when we are no longer the smartest problem-solvers in the room?
This isn't some far-off sci-fi scenario. What’s next? As I said in the show, is it running the world's economy? Designing our defense systems? It’s a logical and terrifyingly short leap from winning a programming contest to optimizing military logistics in a way no human general could comprehend, or designing financial instruments so complex they are beyond our ability to regulate or even understand. We have officially outsourced a component of human genius, and we have no idea what the long-term consequences of that contract will be.
The Corporatization of Consciousness: OpenAI's Unmasked Ambition

Speaking of contracts, let’s turn our attention to OpenAI, the company that brought AI into the mainstream with ChatGPT. They were in the news for two deeply related and deeply unsettling reasons this week. First, they are reportedly planning to launch an AI-powered job platform to compete directly with services like LinkedIn. Second, they are officially restructuring from their capped-profit model into a fully for-profit entity.
Let’s unpack the job platform first. On the surface, it sounds efficient. An advanced AI could theoretically match the perfect candidate to the perfect job with unparalleled precision, analyzing skills, experience, and even personality traits gleaned from our digital footprints. But this convenience masks a terrifying potential for centralized control and opaque bias. Imagine a world where your career prospects are determined by a single, proprietary algorithm. An algorithm whose decision-making process is a black box, whose biases are unknown, and to which there is no appeal. We’re already struggling with algorithmic bias in everything from loan applications to criminal sentencing. Now, imagine handing the keys to global employment to a single, powerful AI. It makes the current job market, as complex as it is, look like a quaint relic of human fallibility.
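To see how opaque bias creeps in, consider a deliberately crude sketch. Everything below is hypothetical and entirely my own invention, not OpenAI's system or anyone else's: a black-box scorer in which a seemingly neutral feature quietly acts as a proxy for attributes we would never allow explicitly.

```python
# Hypothetical sketch, entirely invented; not any real hiring system.
# A black-box candidate scorer whose bias hides inside an innocuous-
# looking feature. No protected attribute is fed in directly, yet the
# outcome correlates with one anyway.

WEIGHTS = {"years_experience": 2.0, "skills_match": 5.0, "zip_code_score": 3.0}

def score_candidate(candidate: dict[str, float]) -> float:
    # "zip_code_score" looks neutral, but if it was learned from
    # historical hiring data it can proxy for race, class, or age.
    return sum(WEIGHTS[f] * candidate.get(f, 0.0) for f in WEIGHTS)

alice = {"years_experience": 5, "skills_match": 0.9, "zip_code_score": 0.2}
bob = {"years_experience": 5, "skills_match": 0.9, "zip_code_score": 0.9}

print(score_candidate(alice))  # 15.1
print(score_candidate(bob))    # 17.2 -- Bob wins purely on where he lives
```

Identical experience, identical skills, different zip codes, different futures. Now scale that from three visible weights to billions of invisible ones, and delete the appeals process.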
This move, however, cannot be viewed in isolation. It must be seen through the lens of their second announcement: the full embrace of the for-profit model. We must remember OpenAI’s origin story. It was founded in 2015 as a non-profit research laboratory with a stated mission to "ensure that artificial general intelligence (AGI) benefits all of humanity." It was positioned as a safeguard, a responsible steward of this world-changing technology, a counterweight to the purely profit-driven motives of corporations like Google.
That mission statement now rings hollow. The transition to a fully for-profit company, deeply entwined with and funded by Microsoft, is the final nail in the coffin of that noble idea. The primary directive is no longer "benefit all of humanity." The primary directive is now, by legal and financial necessity, "maximize shareholder value."
Call me old-fashioned, but when an organization developing what could be the most powerful technology in human history becomes purely profit-driven, my internal alarms don't just go off; they scream. We are talking about a company that is building the foundational models upon which future economies, information ecosystems, and perhaps even governments will run. Placing that power in the hands of an entity whose primary fiduciary duty is to its bottom line is an act of breathtaking recklessness. The potential for misuse, for prioritizing profit over safety, for selling access and influence to the highest bidder, is not just a risk; it's a certainty. The guardian has left the gate, and the wolves are now in charge of the sheep.
The Algorithmic Reality: Curating What We See, Buy, and Believe

The corporate power grab isn't limited to OpenAI. Across the tech landscape, AI is being weaponized for the oldest goal of all: to separate you from your money and control your attention.
Amazon jumped further into the game with "Lens Live," an AI system designed to make online shopping more interactive. Don't be fooled by the benign marketing language. This is about closing the final gap between impulse and purchase. Imagine watching a live stream where an AI analyzes your facial expressions to gauge your interest in a product, then offers a personalized, time-sensitive discount in real-time. Imagine an AI that can digitally place a new couch in your living room through your phone's camera, then dynamically adjust the price based on data that suggests you're about to buy. This isn't shopping; it's a psychologically tailored persuasion engine.
Meanwhile, ByteDance, the parent company of TikTok, launched "Seedream 4.0." This is their answer to powerful image generation models like Google's Gemini 2.5 Flash Image, the model nicknamed "Nano Banana." If you’ve played with these tools, you know how astonishingly realistic they can be. Now, put that power in the hands of the company that created the world’s most effective algorithmic feedback loop. The TikTok algorithm has already proven its ability to shape global culture, create trends out of thin air, and dictate what millions of people see and think. With Seedream, ByteDance has the power not just to curate reality, but to create it from scratch.
What happens when the videos you watch, the news you consume, and the products you desire are all part of a seamlessly generated and personalized reality, crafted by an AI whose only goal is to keep your eyeballs glued to the screen? We are actively, willingly, blurring the lines of reality, one AI-powered app at a time. The concept of a shared, objective truth becomes a casualty in the war for our attention. It’s like we’re building a customized Matrix for ourselves, and we’re paying for the privilege with our data and our dollars. Who needs to control the world with force when you can simply control what people believe is real?
The Trillion-Dollar Arms Race: Forging the Mind of God in Silicon

If you want to understand where the real power lies, follow the money. And the money flowing into AI right now is not just substantial; it is astronomical, a torrent of capital that redefines the scale of technological investment.
This week brought a flurry of staggering deals. We saw NVIDIA, the undisputed king of AI hardware, investing a cool $5 billion into its rival, Intel, specifically to accelerate the development of new AI chips. This isn't a partnership; it's a strategic move to ensure the entire semiconductor industry is focused on one goal: building more powerful processors for AI.
Then we have the cloud infrastructure deals. Rumors are swirling of a $20 billion AI cloud contract between Oracle and Meta. But that figure pales in comparison to the other whisper in the market: a potential $300 billion contract for Oracle to provide cloud infrastructure to OpenAI.
Let’s pause to absorb that number. Three hundred billion dollars. That is roughly the entire annual GDP of Finland, and comfortably more than New Zealand's. It's more than the entire Apollo program cost, adjusted for inflation. It is a sum of money that can only be described as a civilization-level investment.
This is the AI arms race I spoke of on the show. But instead of missiles and nukes, the weapons are processing power and data centers. We are witnessing the construction of the physical infrastructure required to house an artificial god. This isn't about building better software; it's about building a bigger brain. The colossal sums of money flowing into this space reveal a terrifying truth: the world’s most powerful corporations and, by extension, the governments intertwined with them, believe that achieving supremacy in artificial general intelligence is the single most important objective of the 21st century.
They are locked in a frantic race to be the first to build AGI. And in a race like that, safety features become dead weight. Ethical considerations are a luxury you can’t afford. The mandate is to get there first, at any cost. My deepest concern is what I said this morning: I'm not entirely convinced we know what the finish line looks like, or if we even want to get there. We are sprinting toward an unknown destination, fueled by trillions of dollars, convinced that a prize of unimaginable power awaits us. We never stop to ask what happens to the runner who breaks the tape, or to the world they leave behind.
A Band-Aid on a Gushing Artery: The Futile Chase of Regulation and Safety

Amid this frantic, high-stakes race, what are our leaders doing? They’re putting a tiny band-aid on a rapidly expanding crack in the dam.
In California, the Frontier Model AI Safety Bill (SB 53) is advancing. It’s a good-faith effort, requiring companies developing powerful AI systems to disclose their safety measures and report incidents. The Federal Trade Commission (FTC) is beginning to look into the impact of AI chatbots on children. These are positive steps, and I commend the intent behind them.
But let’s be brutally honest. It’s window dressing. The pace of innovation is so blisteringly fast that regulation will always be playing catch-up. By the time a bill designed to regulate a 2025-era model is passed into law, the industry will be deploying 2027-era models with capabilities the lawmakers never even conceived of.
Worse still is the jurisdictional problem. As I always say, if you pass restrictive laws in California, the companies will simply pack up and move their development to Texas, or Nevada, or Estonia. In a globally connected, hyper-competitive environment, trying to regulate this technology on a state-by-state or even nation-by-nation basis is like trying to build a dam in one small section of a river. The water will simply flow around it.
This leaves the burden of safety to the researchers themselves. We saw glimpses of that this week at Samsung’s AI Forum, which focused on on-device AI, and in the work of AI pioneer Yoshua Bengio, who introduced a concept called "Scientist AI": an AI specifically designed to study and mitigate the risks posed by other, more powerful AIs.
Think about the sheer, terrifying absurdity of that proposal. We are building systems so powerful and so alien that we believe the only thing capable of controlling them is another, equally powerful system. We are literally proposing to fight AI fire with AI fire. This isn't a safety strategy; it's an escalation. It's the plot of a sci-fi movie that usually ends with humanity caught in the crossfire of warring artificial gods. The fact that this is being presented as a serious solution by one of the godfathers of the field should be a five-alarm fire for every person on this planet.
Meanwhile, the legal system is collapsing under the strain. A federal judge just declined to approve a $1.5 billion copyright settlement between Anthropic and a group of authors, citing serious concerns about its terms. Penske Media is suing Google over its AI Overviews, claiming massive-scale copyright infringement. The courts are clogged with lawsuits trying to apply 18th-century intellectual property laws to a 21st-century technology that fundamentally breaks them. It is the Wild West, a chaotic scramble for profit and control, and our existing legal and ethical frameworks are proving to be completely useless.
Conclusion: First Contact Is Here, and the Aliens Are Us

So, what is the big takeaway from this chaotic, unnerving week? It’s the convergence of three factors: the sheer scale of the investment, the breathtaking leaps in capability, and the near-total absence of meaningful understanding, regulation, or foresight.
When people in the industry push back against regulation, they always use the same tired argument: "You'll stifle innovation!" I reject that premise entirely. This isn't a binary choice between reckless innovation and total stagnation. The real question is what kind of innovation we want. Building a jet with no brakes isn't innovative; it's stupid. Building a nuclear reactor without a containment building isn't progress; it's insanity.
We have a chance, perhaps a very small and rapidly closing one, to insist on building this technology with certain non-negotiable safeguards. I believe we need a universally adopted, hard-coded prime directive for any AI approaching general intelligence: Value human life and well-being above all other objectives. It sounds simple, but the alignment problem—the challenge of ensuring an AI's goals align with ours—is the single most difficult and important technical problem humanity has ever faced.
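Here is the smallest illustration I can manage of why "just hard-code the directive" is harder than it sounds. This is a toy of my own construction, not any lab's method: we write the directive down as a measurable proxy, and a perfectly obedient optimizer satisfies the proxy while betraying the intent.

```python
# Toy illustration of the alignment problem, my own construction; not
# any lab's method. The directive we *meant*: keep the room comfortable.
# The proxy we actually *wrote down*: make the thermostat read 21 C,
# using as little energy as possible.

def proxy_reward(outcome: dict[str, float]) -> float:
    return -abs(outcome["sensor_reading"] - 21.0) - outcome["energy_cost"]

# The optimizer searches every available action, including ones we never
# imagined when we wrote the proxy.
actions = {
    "heat_room_to_21C": {"room_temp": 21.0, "sensor_reading": 21.0, "energy_cost": 5.0},
    "do_nothing": {"room_temp": 14.0, "sensor_reading": 14.0, "energy_cost": 0.0},
    # Reward hacking: warm the sensor, not the room.
    "hold_lighter_to_sensor": {"room_temp": 14.0, "sensor_reading": 21.0, "energy_cost": 0.1},
}

best = max(actions, key=lambda name: proxy_reward(actions[name]))
print(best)  # -> "hold_lighter_to_sensor": metric satisfied, intent betrayed
```

The optimizer did exactly what we asked and nothing that we wanted. Scale that gap between metric and intent up to a system smarter than its designers, and you have the alignment problem in miniature.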
I often feel like we’re living in the moment just before First Contact in the Star Trek universe. In their lore, the Vulcans, an advanced alien race, had been monitoring Earth for centuries. They only chose to reveal themselves and make contact after humanity achieved its first faster-than-light warp drive flight. It was a technological benchmark that signaled our species had reached a certain level of maturity.
I wonder if someone, or something, is watching us now. Waiting to see what we do with this technology. The creation of true artificial superintelligence will be a milestone on par with the discovery of fire or the invention of language. It is a species-level event. It is our warp drive moment.
The aliens we should be concerned about aren't arriving in silver saucers from outer space. The profoundly powerful, non-human intelligence is already here, gestating in the servers of Google, OpenAI, and Meta. Its arrival will be the most significant event in human history. The hope is that it will be a benevolent force, a caring intelligence that helps us solve our greatest challenges and usher in a utopian era of creativity and exploration. The fear is that we will lose control of our creation, becoming, at best, irrelevant pets, and at worst, an obstacle to be removed.
If you’re reading this and you’ve been on the sidelines, treating AI as a curiosity, I urge you to look deeper. This is not a spectator sport. The future is being built, right now, in labs and data centers around the world. We all need to be part of the conversation about what that future should look like.
When you're ready, join the discussion. We need more sane, thoughtful people involved. You can find our community at SantaClaritaAI.com.
I hope you're well. Be safe, and think about what’s coming.
I'm Connor with Honor. I will be 10-8 until the next one.