AI models and capabilities seem to arrive daily, with an updated AI Timeline tracking 171 large language models, from Transformer to GPT-5.3, in less than a decade. But this technical progress still feels disconnected from real economic value or broad societal benefit. Instead, we’re seeing growing negative consequences and complex deployment challenges we can no longer afford to ignore.
The economic paradox: Hype vs. hard numbers
Goldman Sachs delivered the most sobering data point this week, stating bluntly that AI added “basically zero” to US economic growth last year. This isn’t merely an inconvenient truth; it’s a direct refutation of the idea that AI is an immediate economic engine. Billions have been poured into compute, R&D, and deployment, yet the broader economic impact remains negligible. I see a capital expenditure boom, not a productivity surge: massive investment going in, but little coming out.
The disconnect comes into sharper focus when we look at specific industry shifts. IBM, for example, watched its stock drop 13% after Anthropic launched an AI tool that could convert old COBOL code. This demonstrates AI’s power in niche applications, especially for modernizing legacy systems or processing structured data. But I don’t think it creates new economic value so much as it reallocates existing value, disrupting entrenched players and shifting market share. The value flows to Anthropic and its users; IBM absorbs the cost of displacement. This is pure value migration, not economic expansion.
The struggle to capture value weighs heavily on developers and businesses trying to integrate AI. The “How do you know if AI agents will choose your tool?” thread on Ask HN perfectly captures this uncertainty. The ecosystem is fragmented and hyper-competitive, awash with new platforms like Aqua, a CLI message tool for AI agents, and a flood of system prompts and models shared on GitHub. Without clear standards or predictable agent behavior, developers are building in the dark. I find that real economic returns aren’t flowing to the majority of these ventures. Instead, they’re concentrated with the foundational model providers and a handful of application layers that manage to build network effects or proprietary data moats. For many, AI is shaping up to be a high-cost, low-return proposition, draining resources rather than generating profits.
The digital deterioration: Slop, security, and scarcity of trust
Beyond the economic ledger, AI’s societal and digital costs are rapidly accumulating. The internet, once a vast repository of human knowledge, is fast becoming a dumping ground for automated mediocrity. This week, 404 Media reported that Pinterest is “drowning in a sea of AI slop and auto-moderation.” It’s a stark example of how unchecked AI generation degrades platform quality and user experience. When content costs almost nothing to produce, the incentive shifts from quality to volume. The result is a glut of generic, unoriginal, and often incorrect material that clogs our digital arteries. This isn’t just an aesthetic problem; it erodes the implicit trust users place in platforms to deliver anything meaningful.
The integrity of our digital commons faces a direct assault. “AI is destroying open source, and it’s not even good yet,” reads the title of one YouTube video, highlighting a real concern: AI models, trained indiscriminately on vast swathes of open-source code, aren’t merely repackaging existing work without attribution; they’re also introducing errors and undermining the very communities that fuel innovation. This intellectual property appropriation, exemplified by alleged distillation attacks from players like DeepSeek, Moonshot AI, and MiniMax, threatens the collaborative ethos of open source and fair competition. The “AI uBlock Blacklist” now appearing on GitHub is more than a signal; it’s a user-driven rebellion against this onslaught of AI-generated noise, a desperate effort by users to reclaim control of their digital experience.
Security threats have also taken on a new, more sinister dimension. AWS reported “more than 600 FortiGate firewalls hit in AI-augmented campaign,” showing malicious actors are already leveraging AI to scale sophisticated cyberattacks. This isn’t just about faster attacks; it’s about adaptive, personalized assaults that learn and evolve in real time. NIST’s recent call for public comment on AI agent security underscores the urgency, acknowledging that autonomous agents, for all their promise, introduce novel and significant vulnerabilities that current frameworks simply can’t handle. The very tools meant to enhance efficiency are being weaponized, creating a digital arms race where defenses struggle to keep pace with AI-powered offense.
The human-AI interface: Control, agency, and adaptation
Amidst economic uncertainty and digital degradation, a crucial conversation is emerging about human agency and control in our AI-permeated world. Firefox 148’s launch, featuring an “AI Kill Switch Feature,” sends a clear signal: users want explicit control over AI in their daily tools. This isn’t just a technical feature. It’s a statement of autonomy, a recognition that not all AI intervention is welcome, and that the default should never be “on.”
This sentiment echoes from even higher places. Pope Leo XIV directed priests “to use their brains, not AI, to write homilies.” That’s a powerful reminder of the irreplaceable human element required for tasks demanding empathy, nuance, and genuine connection. AI can draft text, but it cannot imbue it with the authentic human spirit or understanding needed for deeply personal or spiritual communication. It exposes the qualitative gap between AI’s generative capacity and humanity’s unique ability to create meaning and foster real relationships.
The tension between AI’s utility and human control gets further complicated by platform dynamics. Google, for instance, restricted AI Pro/Ultra subscribers for using OpenClaw, exposing the challenges of maintaining open ecosystems when private entities control powerful AI capabilities. This illustrates an evolving battleground over platform control, where AI’s immense value incentivizes walled gardens and restricts user freedom, often under the guise of security or terms of service. As AI agents become more prevalent, the question of who dictates their access and behavior grows paramount.
Yet, despite these pervasive challenges, specific, directed AI applications continue to show remarkable utility. The story of AI building a Wi-Fi driver for an old MacBook, or assisting Ladybird in adopting Rust, highlights AI’s power as a highly specialized tool for complex, often niche, technical problems. These aren’t cases of AI replacing human ingenuity, but augmenting it – tackling tedious or computationally intensive tasks that would otherwise be impractical or take immense human effort. Anthropic’s “AI Fluency Index” research correctly points to the necessity of humans adapting and developing new skills to interface effectively with AI. It’s about becoming skilled conductors of an orchestra, not just passive listeners. The value, I think, lies in focused augmentation, not broad automation.
The takeaway
The current reality of AI, as I see it, is a complex and often contradictory picture. While technological progress in large language models continues unabated, the near-term value remains narrowly concentrated and frequently offset by significant negative consequences.
I draw a few key conclusions:
First, the notion of AI as a universal, immediate economic panacea is largely unfounded. Real value creation remains narrow, more often involving value migration than broad economic expansion. For many businesses, AI integration still looks like a cost center without a clear return on investment.
Second, the digital environment is rapidly deteriorating under the weight of AI-generated content and weaponized AI. This erodes trust, compromises security, and challenges established norms of intellectual property and our digital commons. The “slop economy” is a net negative for user experience and societal discourse.
Finally, the coming years will be defined by a struggle for human agency and control over AI. Whether through “kill switches,” ethical directives, or regulatory frameworks, the push to reclaim human oversight and dictate the terms of AI interaction will be crucial. We must mitigate the risks and ensure AI remains a tool, not a master. The future of AI’s promise hinges less on its technical capabilities and more on our collective ability to govern its deployment and manage its very real, very painful side effects.