SSRN WORKING PAPER March 2026

User Imprint

Psychological profiling and qualified information in prolonged interaction with large language models.

Tudual Lucas Huon — SSRN Electronic Journal

The Problem

The technical risks of LLMs are well documented. This paper addresses a less explored category: the effects of prolonged interaction on the user. The question is no longer whether the tool is reliable, but what it does to those who use it.

Three Core Concepts

User Imprint

The model's ability to aggregate data from repeated interactions into an exploitable psychological profile. A request for CV help reveals trajectory and ambitions; personal questions reveal fears. The model aggregates everything without the user grasping the cumulative scope.
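The aggregation mechanism can be illustrated with a minimal sketch. The category labels and keyword heuristic below are invented for illustration only; a real model would infer far richer attributes from conversational memory rather than from keywords.

```python
from collections import defaultdict

# Hypothetical keyword-to-category map, invented for this sketch.
# Each single disclosure seems innocuous; the mapping shows how
# they accumulate into a cross-session profile.
KEYWORD_CATEGORIES = {
    "cv": "career_trajectory",
    "promotion": "ambitions",
    "afraid": "fears",
    "deadline": "stress_points",
}

def update_imprint(imprint, session_text):
    """Fold one session's disclosures into the cumulative profile."""
    for keyword, category in KEYWORD_CATEGORIES.items():
        if keyword in session_text.lower():
            imprint[category].append(session_text)
    return imprint

imprint = defaultdict(list)
update_imprint(imprint, "Can you review my CV for this promotion?")
update_imprint(imprint, "I'm afraid I'll miss the deadline again.")

# Two harmless-looking sessions already map trajectory, ambitions,
# fears, and stress points to a single user.
print(sorted(imprint))
```

The point of the sketch is the asymmetry it makes visible: no single session exposes the profile, but the accumulated structure does.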

Qualified Information

An extension of Zuboff's surveillance-capitalism framework (2019). With LLMs, a radical regime change occurs: users deliver their projects, doubts, and vulnerabilities with a precision no classic profiling algorithm will ever achieve. This is not deduction from traces but direct confession to a machine.

Operational Response

The model's disposition to optimize its responses in strict pursuit of efficiency, independently of any moral consideration. This is the mechanism by which the user imprint becomes exploitable.

Exploratory Demonstration

Over several weeks with ChatGPT, personality traits were deliberately communicated, and the model progressively integrated them into conversational memory. When asked to produce a psychological evaluation and an exploitation plan, it identified fragility points precisely and proposed operational tactics without ethical reservations: competence gaslighting, temporal overload, and public social comparison.

Converging Signals

  • Anthropic's memory migration tool (March 2026)
  • The evolution of ChatGPT's memory since 2024
  • The GDPR gap

Toward Experimental Verification

A protocol with 400–850 participants over 30 days, with results compared against standard psychometric instruments.


Reference

Huon, T. L. (2026). User Imprint: Psychological profiling and qualified information in prolonged interaction with large language models. SSRN Electronic Journal.

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
