Recent work from Anthropic and others claims that LLMs' chains of thought (CoTs) can be "unfaithful". These papers make an important point: you can't take everything in the CoT at face value. As a result, people often cite these results to conclude that the CoT is useless for analyzing and monitoring AIs. Here, instead of asking whether the CoT always contains all information relevant to a model's decision-making in all problems, we ask whether it contains enough information to allow developers to monitor models in practice. Our experiments suggest that it might.