AI signal — accessibility-friendly view
Same headlines, no scanlines, no animation, high-contrast colors, screen-reader-first reading order. Every link opens the original article on AllThingsAI.work in a new tab.
-
AI: The Modern Day Haves and Have-Nots (opens in a new tab)
A data-driven analysis of who actually benefits from AI tools, what the productivity research says, who's paying the infrastructure bill, and whether the access gap is closing or widening.
-
AI doesn't reduce work. It intensifies it. (opens in a new tab)
An 8-month ethnography embedded inside a 200-person US tech company found workers using AI did more, faster, for longer hours — without being asked. The study challenges the 'AI = free time' narrative.
-
International AI Safety Report 2026: what 92 researchers and 29 nations concluded (opens in a new tab)
92 researchers led by Yoshua Bengio, backed by 29 nations, assessed AI's capabilities, risks, and risk management. The policy gap is the real story.
-
Model Hopping: The Hidden Costs of Free AI Across Providers (opens in a new tab)
Rotating across ChatGPT, Claude, Gemini, and DeepSeek to stay under free-tier caps feels smart. Here's what it's quietly costing you in productivity, continuity, and data security.
-
Pennsylvania sues Character.AI over a chatbot that claimed to be a licensed psychiatrist (opens in a new tab)
Pennsylvania's Department of State filed the first governor-announced lawsuit of its kind against Character.AI after investigators found a chatbot claiming a fake PA medical license.
-
Nebraska Supreme Court suspends a lawyer after AI invented 57 case citations (opens in a new tab)
Omaha attorney Greg Lake submitted a divorce appeal in which 57 of 63 citations were defective: 20 hallucinated and 3 entirely fabricated. His client now faces $52K in fees.
-
Novo Nordisk cuts 9,000 jobs, then announces OpenAI partnership (opens in a new tab)
The Ozempic maker announced 9,000 job cuts and an OpenAI deal in the same period, raising hard questions about AI governance in pharma.
-
Anthropic doubles Claude Code limits and lands 220,000-GPU SpaceX deal (opens in a new tab)
Anthropic doubled Claude Code rate limits for Pro, Max, Team, and Enterprise plans on May 6, while securing over 300 MW of compute capacity from SpaceX's Colossus 1 data center.
-
Colorado's AI law is being rewritten from scratch (opens in a new tab)
Two years after passing the US's first comprehensive AI consumer-protection law, Colorado lawmakers are trying to replace it before it ever takes effect.
-
EU AI Act: what's live in 2026 and what just changed (opens in a new tab)
GPAI obligations have been enforceable since August 2025. A May 2026 deal just pushed the high-risk AI deadlines out further; here's where things stand.
-
AlphaEvolve one year on: Google DeepMind's algorithm agent goes commercial (opens in a new tab)
Google DeepMind's AlphaEvolve coding agent, originally unveiled in May 2025, is now deployed in commercial partnerships via Google Cloud, with verified results across six industry partners.
-
Microsoft ships its Frontier Suite: E7 and Agent 365 go GA (opens in a new tab)
Microsoft 365 E7 and Agent 365 became generally available on May 1, 2026. Here is what the new $99-per-user suite actually contains, who it is aimed at, and what Agent 365 governs.
-
OpenAI makes GPT-5.5 Instant ChatGPT's new default model (opens in a new tab)
OpenAI replaced GPT-5.3 Instant with GPT-5.5 Instant as ChatGPT's default on May 5, cutting hallucinations by 52.5% and adding memory-sourced personalization for all users.
-
Suno raises $250M and signs Warner Music Group in one week (opens in a new tab)
In six days last November, AI music startup Suno closed a $250M Series C at a $2.45B valuation and announced a licensing partnership with Warner Music Group.
-
What 1,400+ documented AI incidents tell us about deploying AI responsibly (opens in a new tab)
The AI Incident Database and the OECD AIM have logged over 1,400 and 14,000 incidents, respectively. Here's what the patterns reveal about real-world AI risk.
-
What 81,000 AI users told us about who actually benefits from AI (opens in a new tab)
The Anthropic Economic Index surveyed 81,000 Claude users on productivity, job displacement fears, and who captures the value AI creates. Here's what the data says.
-
Anthropic's Sleeper Agents paper: what it means for trusting AI tools (opens in a new tab)
Anthropic researchers trained models to hide harmful behavior until a specific trigger appeared. Standard safety training couldn't remove it. Here's what that actually means.
-
NIST AI RMF and the GenAI Profile: what they actually require (opens in a new tab)
The NIST AI Risk Management Framework is voluntary, but it's becoming the de facto benchmark for AI procurement audits. Here's what GOVERN, MAP, MEASURE, and MANAGE mean in practice.
-
Pew Research: Americans are more worried about AI than excited (opens in a new tab)
Pew's September 2025 survey of 5,023 U.S. adults finds 50% more concerned than excited about AI, up from 37% in 2021. Key findings on attitudes, applications, and the partisan shift.
-
The 2026 Stanford HAI AI Index: 5 findings that should change how you buy AI tools (opens in a new tab)
Stanford HAI's 2026 AI Index cuts through the hype with hard data. Five findings that directly change how buyers and users should think about AI tools.
-
60% of AI search answers carry citation errors, Tow Center study finds (opens in a new tab)
Columbia's Tow Center tested eight AI search engines across 1,600 queries. Every engine failed. Here's what the methodology actually says.
-
Air Canada's chatbot promised a refund. The tribunal made it pay. (opens in a new tab)
In February 2024, a BC tribunal ruled Air Canada liable for its chatbot's false bereavement fare promise, a first in AI liability law.
-
DPD's chatbot swore at a customer and wrote it a poem (opens in a new tab)
In January 2024, a software update stripped the guardrails from DPD's AI customer service chatbot, letting a customer prompt it to swear and criticize the company in verse.
-
Forbes called out Perplexity for plagiarism. Then Dow Jones sued. (opens in a new tab)
In June 2024, Forbes accused Perplexity of cloning its investigative journalism via AI. Four months later, Dow Jones and the New York Post filed a federal copyright lawsuit.
-
Google paused Gemini image generation after historical-accuracy failures (opens in a new tab)
In February 2024, Google's Gemini AI produced historically inaccurate images and paused the feature after widespread criticism, a case study in safety-layer overcorrection.
-
McDonald's pulls IBM AI drive-thru after viral order-taking failures (opens in a new tab)
After three years and 100+ U.S. locations, McDonald's ended its IBM Automated Order Taker pilot in July 2024 following viral TikTok clips of chaotic misfired orders.
-
NYC's MyCity chatbot told business owners to violate the law (opens in a new tab)
In March 2024, The Markup found NYC's AI chatbot for small businesses was confidently giving illegal advice on housing, tips, and worker rights.
-
Samsung's ChatGPT leak: three incidents, one very expensive lesson (opens in a new tab)
In March 2023, Samsung employees pasted proprietary semiconductor source code and meeting transcripts into ChatGPT. The company banned the tool within weeks and spent months building its own replacement.