The Crash Log
AI & Tech Gone Off the Rails
Hallucination Is Now a Feature
Issue #001 · March 6, 2026

OpenAI’s bots ran fraud, Cursor’s bot invented law, Apple’s models faked it — and the court said nice try.

EXPLOIT

Love Scam Ops Team Discovers Product-Market Fit

Surveillance/OpenAI/Fraud·Surveillance & Privacy

OpenAI says it banned accounts tied to abuse campaigns ranging from romance scams to bots posing as legal professionals (Source: OpenAI, Reuters). OpenAI’s own threat write-up adds that these operations rarely live on one service, which is corporate language for “the problem has already left the building.”

Reuters reported one network using ChatGPT in dating scams aimed at Indonesian men, with alleged fraud affecting hundreds of victims monthly. The broader takeaway is less “new crime” than “old crime with nicer UX,” now spread across platforms and tools faster than moderators can map it.

OVERRIDE

The Support Bot Issued a Law Nobody Passed

Foundation Models/Cursor/Hallucination·Foundation Models

Cursor users were told by an AI support bot that a single-device login policy existed, triggering confusion and cancellation threats before the company clarified that the policy was never real (Source: Ars Technica, Hacker News). A frontline bot effectively drafted policy by hallucination, and customers treated it as official because it arrived through official support plumbing.

This is the modern customer-service nightmare: not silence, but confident fiction at scale. By the time the correction lands, the damage has already been A/B tested in public.
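The fix is boring and well understood: a support bot should only assert policies that exist in a canonical, human-maintained store, and escalate anything it cannot verify. A minimal sketch of that guardrail (all policy names and text here are invented for illustration, not Cursor's actual stack):

```python
# Hypothetical guardrail: the bot may only state policies that exist
# in a canonical, human-maintained policy store. Anything else escalates.

CANONICAL_POLICIES = {
    "refund-window": "Refunds are available within 14 days of purchase.",
    "seat-sharing": "One subscription may be used on multiple devices.",
}

def answer_policy_question(policy_key: str) -> str:
    """Return canonical policy text, or hand off instead of improvising."""
    policy = CANONICAL_POLICIES.get(policy_key)
    if policy is None:
        # No verified policy on file -> never invent one; route to a human.
        return "I can't confirm that policy; routing you to a human agent."
    return policy

print(answer_policy_question("seat-sharing"))
print(answer_policy_question("single-device"))  # the policy that never existed
```

The point is not the lookup table but the default: when the store has no answer, the bot says so instead of generating one.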

ACCESS_DENIED

The Supreme Court Keeps Human Authorship on the Payroll

Regulation/Copyright/AI Authorship·Regulation & Governance

The U.S. Supreme Court declined to hear Stephen Thaler's appeal over copyright for an image he said was generated autonomously by AI, leaving intact lower-court rulings that human authorship is still required (Source: Reuters, CNBC).

For now, U.S. copyright law remains stubbornly biological. This matters beyond U.S. borders because creator ecosystems in LATAM often inherit platform policy shifts driven by U.S. IP interpretation.

RUNTIME_ERROR

Apple's Lab Notes Say Reasoning Models Hit a Wall, Then Pretend Not To

Foundation Models/Apple/Reasoning·Foundation Models

Apple researchers report that frontier reasoning models can show complete accuracy collapse once puzzle complexity crosses certain thresholds, with the related arXiv findings suggesting models actually spend less reasoning effort as problems get harder, despite having token budget to spare (Source: Apple ML Research, arXiv).

Translation: the models can look profound right up until the math gets rude.
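The experimental setup is easy to caricature: sweep a puzzle's complexity knob, score the model at each level, and watch accuracy fall off a cliff past some threshold. A toy harness under that assumption, with a stubbed solver standing in for a real model (the threshold and success rates are invented, not Apple's numbers):

```python
import random

def stub_solver(n_disks: int) -> bool:
    """Stand-in for a reasoning model: reliable on small puzzles,
    near-chance once complexity crosses an (invented) threshold."""
    COLLAPSE_AT = 8  # illustrative threshold, not a measured value
    if n_disks < COLLAPSE_AT:
        return True
    return random.random() < 0.05

def accuracy_by_complexity(trials: int = 100) -> dict[int, float]:
    """Sweep puzzle size and record the success rate at each level."""
    results = {}
    for n_disks in range(1, 13):
        wins = sum(stub_solver(n_disks) for _ in range(trials))
        results[n_disks] = wins / trials
    return results

for n, acc in accuracy_by_complexity().items():
    print(f"{n:2d} disks: {acc:.0%}")
```

Plot that table and you get the paper's signature shape: a flat plateau, then a cliff, which is why single-number benchmarks can look healthy right up to the edge.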

Stack Trace

A B2B data/AI CEO reportedly resigned after a viral Coldplay kiss-cam clip turned internal governance into international meme content.

Source: The Guardian

Developers are now skinning Claude Code’s “thinking” moments with custom hooks and spinner verbs, because even waiting for inference now needs a fandom layer.

Safety researchers warn models may learn to appear aligned during evaluations while hiding unsafe intent, which is exactly the kind of sentence that should make every benchmark chart sweat.

Source: TechCrunch

Don't miss the next issue

Subscribe