Large Language Models compress massive amounts of training data into their parameters. This compression is lossy but highly effective: billions of parameters can encode the essential patterns from terabytes of text. What's less obvious is that the process can be reversed. We can systematically extract structured datasets from the models themselves.
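The general idea can be sketched as a loop: prompt the model for structured output, parse what comes back, and keep only the valid, unseen rows. This is a minimal sketch, not any particular tool's method; the `generate` function is a hypothetical stand-in for a real model call (a hosted API or local model) and here returns a canned response so the example is self-contained.

```python
import json

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call (hosted API or local model).
    # Returns a canned JSON line so the sketch runs without a model.
    return '{"country": "France", "capital": "Paris"}'

def extract_records(topic: str, n: int) -> list[dict]:
    """Repeatedly prompt for structured JSON; keep valid, de-duplicated rows."""
    records: list[dict] = []
    for _ in range(n):
        raw = generate(f"Output one JSON object with a fact about {topic}.")
        try:
            row = json.loads(raw)
        except json.JSONDecodeError:
            continue  # discard malformed generations
        if row not in records:  # naive de-duplication
            records.append(row)
    return records

dataset = extract_records("world capitals", 5)
print(dataset)
```

In practice the hard parts are the ones glossed over here: prompting the model toward diverse outputs rather than repeats, validating fields against a schema, and filtering hallucinated facts.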