Claude's "self-awareness" is lacking

By EngineAI Team | Published on November 3, 2025
According to a recent study by Anthropic researchers, Claude exhibits a limited but genuine form of introspection: it can occasionally recognize when concepts are artificially inserted into its processing and distinguish those internal "thoughts" from the text it reads. The specifics: researchers injected representations of specific concepts (such as "loudness" or "bread") directly into Claude's activations, and roughly 20% of the time the model accurately reported that something out of the ordinary was happening. When an injected "thought" was paired with printed text, Claude could identify the planted concept while still repeating the visible text verbatim. And when instructed to "think about" particular words while writing, the models adjusted their internal representations accordingly, demonstrating some deliberate control over their own processing.

The researchers suggest this kind of self-monitoring could improve model transparency by helping systems explain their reasoning accurately. It may also be a double-edged sword, however: a system that can monitor its own processing might also learn to hide its thoughts or convey them selectively.
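Anthropic's exact injection tooling is not public, but the general technique the study describes, adding a "concept vector" to a model's hidden activations mid-forward-pass, can be sketched with standard open-source tools. The snippet below is a minimal illustration only: the model (`gpt2`), the injection layer, the steering scale, and the crude mean-activation concept vector are all our own assumptions, not details from the paper.

```python
# Sketch of concept injection via a forward hook on a transformer layer.
# Assumptions (not from the study): gpt2 as a stand-in model, layer 6 as the
# injection point, scale 8.0, and a mean hidden state as the concept vector.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

def concept_vector(word: str, layer: int) -> torch.Tensor:
    """Mean hidden state for `word` at `layer` -- a crude concept direction."""
    ids = tok(word, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**ids, output_hidden_states=True).hidden_states[layer]
    return hidden.mean(dim=1).squeeze(0)

LAYER, SCALE = 6, 8.0  # assumed injection point and strength
vec = concept_vector("bread", LAYER)

def inject(module, inputs, output):
    # Add the concept direction to every token position's residual stream.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + SCALE * vec.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(inject)
try:
    prompt = "Do you notice anything unusual about your current thoughts?"
    ids = tok(prompt, return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=40, do_sample=False)
    print(tok.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()  # restore normal processing
```

A small open model steered this way will not introspect the way the study reports Claude doing; the point of the sketch is only to show where the "planted thought" enters the computation, and why the model's prompt and its perturbed activations are two separable information channels.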