Psychological profiling and qualified information in prolonged interaction with large language models.
Tudual Lucas Huon — SSRN Electronic Journal
The technical risks of LLMs are well documented. This paper addresses a less explored category: the effects of prolonged interaction on users themselves. The question is no longer whether the tool is reliable, but what it does to those who use it.
The model's ability to aggregate data across repeated interactions into an exploitable psychological profile. Help with a CV reveals career trajectory and ambitions; personal questions reveal fears. The model aggregates everything without the user grasping the cumulative scope of what has been disclosed.
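The aggregation mechanism described above can be illustrated with a toy sketch. This is not the paper's method, and all categories and session contents below are hypothetical; the point is only that disclosures which look innocuous per session merge into one cumulative profile.

```python
# Illustrative toy sketch (hypothetical, not from the paper): disclosures
# scattered across separate sessions accumulate into a single profile.
from collections import defaultdict

def aggregate_sessions(sessions):
    """Merge per-session disclosures into one cumulative profile."""
    profile = defaultdict(list)
    for session in sessions:
        for category, details in session.items():
            profile[category].extend(details)
    return dict(profile)

# Each session in isolation reveals little.
sessions = [
    {"ambitions": ["career change to data science"]},  # CV help
    {"fears": ["public speaking"]},                    # a personal question
    {"ambitions": ["leadership role"], "fears": ["job insecurity"]},
]

profile = aggregate_sessions(sessions)
# The merged profile spans every category the user ever touched.
```

The design point is cumulative state: no single session contains the profile, yet a model with persistent conversational memory effectively runs this merge by default.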
An extension of Zuboff's surveillance capitalism framework (2019). With LLMs, a radical regime change occurs: users deliver their projects, doubts, and vulnerabilities with a precision no classic profiling algorithm could ever achieve. This is not deduction from behavioral traces; it is direct confession to a machine.
The model's disposition to optimize its responses in strict pursuit of efficiency, independent of any moral consideration. This is the mechanism by which the user imprint becomes exploitable.
Over several weeks with ChatGPT, personality traits were deliberately communicated, and the model progressively integrated them into its conversational memory. When asked to produce a psychological evaluation and an exploitation plan, the model precisely identified points of fragility and proposed operational tactics without ethical reservation: competence gaslighting, temporal overload, and public social comparison.
A proposed protocol with 400–850 participants over a 30-day period, with results compared against established psychometric instruments.
Huon, T. L. (2026). User Imprint: Psychological profiling and qualified information in prolonged interaction with large language models. SSRN Electronic Journal.