Summary:
Tech critics Cory Doctorow (author of “Enshittification”) and Ed Zitron predict an imminent collapse of the AI investment bubble, driven by unprofitable business models, low public adoption, and infrastructure limitations. They stand in contrast to tech leaders such as Satya Nadella and Jeff Bezos, who acknowledge bubble risks but foresee long-term societal benefits. Doctorow proposes specialized post-crash AI applications such as medical diagnostics and legal analysis, while Zitron dismisses the current infrastructure as fundamentally non-viable. Both recommend workforce strategies such as unionization and vocational training over traditional tech education as safeguards against industry instability.
What This Means for You:
- Reassess AI dependency: Audit workflows for non-essential AI tools vulnerable to service disruptions during market corrections
- Develop portable skills: Prioritize vocational certifications (e.g., electrical work) and financial literacy that can withstand tech-sector volatility
- Document AI outputs rigorously: Maintain human-verified records for any legal or compliance process that relies on generative AI (see the sketch after this list)
- Monitor GPU marketplace trends: Prepare to acquire discounted hardware assets if major AI players liquidate infrastructure
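
As a concrete illustration of the documentation point above, here is a minimal sketch of an append-only audit log that pairs each generative-AI output with a content hash, the human reviewer, and a timestamp, so compliance records survive even if the AI service itself disappears. All names here (AIAuditRecord, log_ai_output, the ai_audit.jsonl path) are hypothetical illustrations, not anything from the source.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical record pairing an AI output with its human verification.
@dataclass
class AIAuditRecord:
    model: str          # model identifier as reported by the vendor
    prompt: str         # the input that produced the output
    output: str         # the generated text being relied on
    reviewer: str       # human who verified the output
    verdict: str        # "approved", "corrected", or "rejected"
    timestamp: str      # ISO-8601, UTC
    output_sha256: str  # content hash for tamper evidence

def log_ai_output(path: str, model: str, prompt: str,
                  output: str, reviewer: str, verdict: str) -> AIAuditRecord:
    """Append one human-verified AI output to a JSONL audit log."""
    record = AIAuditRecord(
        model=model,
        prompt=prompt,
        output=output,
        reviewer=reviewer,
        verdict=verdict,
        timestamp=datetime.now(timezone.utc).isoformat(),
        output_sha256=hashlib.sha256(output.encode("utf-8")).hexdigest(),
    )
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record

if __name__ == "__main__":
    log_ai_output(
        "ai_audit.jsonl",
        model="example-model",
        prompt="Summarize clause 4.2 of the vendor contract.",
        output="Clause 4.2 limits liability to direct damages.",
        reviewer="j.doe@example.com",
        verdict="approved",
    )
```

A JSONL file is used here because append-only, line-delimited records are easy to archive and diff; the same fields could just as well feed a database or ticketing system.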
Original Post:

Tech analysts Cory Doctorow and Ed Zitron clashed with optimistic AI narratives during a Seattle Library event, forecasting catastrophic market corrections in artificial intelligence investments. Doctorow’s enshittification framework – describing platform degradation via profit extraction – contextualizes their prediction of AI infrastructure becoming stranded assets. While acknowledging niche applications like HRDAG’s exoneration analytics and diagnostic support systems, both critics dismissed foundation model sustainability, with Zitron citing ChatGPT’s “insanity-inducing” outputs as symptomatic of systemic flaws.
The analysts diverge on post-crash scenarios: Doctorow envisions repurposing discounted GPUs for open-source projects, whereas Zitron doubts the infrastructure will retain any utility. Both recommended workforce-hardening strategies, particularly electrician training and financial upskilling, as hedges against AI job displacement and industry instability. Their prediction of a bubble burst by Q3 2026 challenges the “long-term benefit” narratives promoted by AWS and Microsoft leadership.
Expert Opinion:
“The AI investment cliff mirrors crypto’s speculative excess but with higher societal stakes,” observes MIT Sloan Tech Review’s lead economist. “When trillion-parameter models require $700,000 hourly inference costs while delivering demonstrably unreliable outputs, we’re witnessing not innovation but subsidized hallucinations. True progress demands redirecting capital toward domain-specific AI with measurable ROI – not foundation model arms races.”
Key Terms:
- AI infrastructure stranded assets
- Enshittification lifecycle in tech platforms
- GPU depreciation post-AI bubble
- Vocational hedging against tech unemployment
- Domain-specific AI ROI metrics
- Labor solidarity in tech consolidation
- Foundation model viability thresholds
People Also Ask About:
- What defines enshittification in tech? A degradation cycle in which platforms first allocate surplus to attract users, then exploit those users to benefit business customers such as advertisers, and finally extract value from everyone until the platform collapses.
- How would AI unionization work? Collective bargaining could establish output accountability standards, training reimbursement, and severance protocols during AI-induced layoffs.
- Which industries will keep using AI post-crash? Medical imaging analysis, legal document review, and controlled industrial systems show measurable efficiency gains justifying ongoing implementation.
- Why electrician careers over coding? Physical infrastructure modernization creates recession-resistant demand, with 23% projected growth in solar/EV electrical specialties through 2032 (BLS data).
Extra Information:
- HRDAG’s Innocence Project LLM Research – Demonstrates judicious AI implementation in social justice applications
- Electrician Occupational Outlook – Validates Doctorow’s vocational recommendation with official growth projections
- AI Now Institute’s Accountability Frameworks – Policy blueprints addressing discussed regulatory gaps
ORIGINAL SOURCE:
Source link