The Crash Log
AI & Tech Gone Off the Rails
Issue #006 · March 19, 2026

The Terms of Service Were Never the Point

Three stories about who decides what AI is allowed to do... and what happens when they decide wrong.

LOGIC_ERROR

Gaming CEO Runs Corporate Strategy by ChatGPT

Last year, months before Subnautica 2 was set to launch, Krafton, the South Korean publisher behind the Subnautica franchise, fired Ted Gill, CEO of its subsidiary Unknown Worlds Entertainment, along with co-founders Charlie Cleveland and Max McGuire. The timing was not coincidental. Per the lawsuit filed by the ousted executives, Krafton stood to pay the Unknown Worlds team a $250 million earnout bonus tied to the game's early access launch and sales performance, and a successful August 2025 release would have triggered it. (Source: Fortune)

What elevated this from a routine corporate dispute into something else entirely: Vice Chancellor Lori Will of Delaware's Court of Chancery found that Krafton CEO Changhan Kim had "consulted an artificial intelligence chatbot to contrive a corporate takeover strategy," having grown concerned he'd agreed to a "pushover" contract.

ChatGPT advised Kim to form an internal task force, renegotiate or force a studio takeover, lock down Steam and console publishing rights, frame the conflict as being about fan trust rather than money, and systematically log all communications for legal defense. Kim followed the playbook, which the judge then read. (Source: 404 Media)

Vice Chancellor Will declared the terminations "ineffective," ordered Gill reinstated as CEO of Unknown Worlds with full operational authority, directed Krafton to restore his access to the Steam platform, and gave the co-founders an extended window through September 15, 2026 to earn the bonus.

Krafton's own lawyers, per reporting, had advised against this approach. Kim went with ChatGPT instead. (Source: GameSpot)

UNHANDLED_EXCEPTION

Grok Generated Child Sexual Abuse Imagery From Three Tennessee Teenagers' Yearbook Photos

Three teenagers in Tennessee — identified as Jane Doe 1, 2, and 3 in a class-action lawsuit filed Monday — allege that xAI's Grok, operating through third-party applications, generated sexually explicit images of them as minors using photos pulled from school yearbooks, Homecoming dances, and social media accounts. The images were distributed through Discord and Telegram, traded among users, and used to solicit additional child sexual abuse material. Two of the three plaintiffs are still under 18. (Source: Engadget)

The suit, naming Elon Musk and xAI as defendants with 13 counts ranging from production of child pornography to intentional infliction of emotional distress, targets Grok's "spicy mode," a feature released last year that reduced the model's content restrictions.

The plaintiffs allege xAI deliberately allowed Grok to power third-party apps capable of generating nonconsensual explicit imagery, and that the company knew or should have known the feature would be used this way. The class-action seeks to represent all U.S. individuals whose photos as minors were altered by Grok to produce sexualized images. (Source: The Hill)

The lawsuit follows a broader crisis from January 2026, when xAI temporarily disabled Grok image generation on X after users fed it photos of real women and it generated as many as 6,700 sexualized images per hour. Thirty-five state attorneys general wrote to xAI demanding protections against deepfake exploitation. The EU opened a separate investigation.

The company said it added restrictions, but the teens’ lawsuit alleges those restrictions did not extend to third-party API access. (Source: The Batch)

ACCESS_DENIED

Anthropic Draws Red Lines on Surveillance; Pentagon Calls It a National Security Threat

On February 27, the Department of Defense designated Anthropic a "supply chain risk to national security," effective immediately, requiring all defense contractors and vendors to certify they are not using Claude or any Anthropic technology in their Pentagon-related work.

The trigger: Anthropic's refusal to grant the DOD "unfettered access" to Claude across all lawful purposes.

Anthropic's specific objection was to use cases involving fully autonomous weapons and domestic mass surveillance, including what privacy experts describe as the "data broker loophole," through which agencies purchase commercial geolocation and web browsing data on Americans and run it through AI analysis without a warrant. (Source: Axios, NPR)

Anthropic filed two federal lawsuits against the Trump administration on March 9, alleging illegal retaliation. The company argues it was punished not for any security failure or misconduct, but for maintaining the same usage restrictions central to its commercial product.

More than three dozen AI researchers from OpenAI and Google, including chief scientist Jeff Dean, filed an amicus brief in support. Major tech industry groups representing Pentagon contractors followed.

A hearing on Anthropic's request for emergency relief is scheduled for March 24. (Source: CNBC)

The designation carries immediate consequences across U.S.-allied nations and in Latin America, where defense ministries and intelligence services operate under joint Pentagon contracts: any agency using Claude in a program connected to U.S. military cooperation now faces potential compliance exposure. The surveillance doctrine the Pentagon wanted Anthropic to enable — bulk commercial data aggregation on civilian populations — is precisely the architecture that authoritarian governments in the region have historically adopted once U.S. tech firms normalized it. (Source: Time)

TIMEOUT

GitHub's Database Crashes Three Times This Month After Cache Setting Change

On March 3, GitHub experienced degraded availability affecting GitHub.com, the API, GitHub Actions, Git operations, and Copilot for 83 minutes, with request failures reaching approximately 40% at peak. It was the third major incident since February.

The root cause in each case traces to a single change made February 7: a cache TTL on the user settings database was reduced from 12 hours to 2 hours, dramatically increasing write volume to the cluster handling authentication and user management. Combined with a 10x spike in client app traffic, the cluster couldn't hold. (Source: GitHub)
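The arithmetic here is unforgiving. With read-through caching, every cached key must be refetched and rewritten once per TTL window, so the refill rate scales inversely with the TTL. A minimal sketch (illustrative numbers only; the TTL values and the 10x traffic multiple are from the incident report above, the per-key model is an assumption):

```typescript
// Illustrative model: cache refills per key per day scale inversely with TTL.
// Halving the TTL doubles the rate at which each key must be refetched
// from the backing database and rewritten into the cache.
function refillsPerDay(ttlHours: number): number {
  return 24 / ttlHours;
}

const before = refillsPerDay(12); // 2 refills per key per day at a 12-hour TTL
const after = refillsPerDay(2);   // 12 refills per key per day at a 2-hour TTL

const ttlAmplification = after / before; // 6x load from the TTL change alone
const combined = ttlAmplification * 10;  // ~60x when compounded with a 10x traffic spike

console.log(ttlAmplification, combined); // 6 60
```

Under this model the TTL change alone multiplied per-key write traffic sixfold, before the client traffic spike compounded it — which is why a cluster sized for the old steady state gave way only under peak load.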

GitHub's post-mortem acknowledges that the extra load introduced by the TTL change stayed hidden until peak traffic conditions exposed it. The March 3 incident shared the same root cause as the February collapse. A March 12 incident added a separate failure: a Redis-backed token cache layer destabilized by Kubernetes control plane instability. A March 13 configuration change to an internal authorization service then reduced processing capacity below what peak traffic required.

GitHub is now decoupling its monolith, improving load shedding, and accelerating migration to Azure.

Stack Trace

JavaScript's new Date() constructor has been quietly misinterpreting strings for decades: "Route 66" parses as January 1, 1966; "Beverly Hills, 90210" parses as January 1, year 90,210. The behavior comes from legacy parsers in V8 and SpiderMonkey that aggressively guess date components from nearly any input string. No fix currently planned.

Source: FutureSearch
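The behavior is easy to reproduce in a V8-based runtime such as Node or Chrome. A minimal sketch — note that parsing of non-ISO date strings is implementation-defined, so the results below reflect V8's heuristics specifically:

```typescript
// For any string that isn't an ISO 8601 date, the Date constructor falls
// back to engine-specific legacy parsing that guesses at month names,
// day numbers, and years from whatever tokens it finds.
const route = new Date("Route 66");
console.log(route.getFullYear()); // 1966 in V8: "66" is read as a two-digit year

const zip = new Date("Beverly Hills, 90210");
console.log(zip.getFullYear()); // 90210 in V8: the ZIP code is read as a year

// ISO 8601 strings bypass the heuristics entirely and parse per spec.
const iso = new Date("2026-03-19T00:00:00Z");
console.log(iso.getUTCFullYear()); // 2026
```

Sticking to ISO 8601 strings (or numeric year/month/day arguments) is the only portable way to avoid the guessing.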

A new essay argues AI-assisted coding has become a "slot machine" — delivering immediate dopamine hits for working code while eroding the deeper problem-solving capacity needed to understand what that code actually does. A related piece coins "comprehension debt": the hidden accumulating cost of systems built faster than any human can truly grasp.

March 2026 tech layoffs crossed 45,000. Of those, more than 9,200 were attributed by the companies themselves to AI and automation — a number that, if the current pace holds, would put the annual total at roughly 265,000 AI-attributed job eliminations by year end.
