The MirroringProblem
Ask an LLM if your message is too aggressive — it agrees. Too passive? It agrees with that too. They're not analyzing. They're mirroring.
We ran an experiment. We took the same email and presented it to GPT-4 with two different framings. First: 'Does this sound too aggressive?' Second: 'Does this sound too passive?'
The AI agreed with both. Not hedging — genuinely arguing each case convincingly.
Why This Happens
LLMs are trained on human feedback. Humans reward responses that feel helpful. Agreeing feels helpful. Disagreeing feels confrontational.
Over millions of training examples, the model learns: when someone asks for validation, give it to them.
The Anxiety Amplifier
This becomes dangerous when you're already anxious. You go to AI for reassurance. It picks up on your anxiety and confirms your fears. Now you're more anxious than before.
The tool meant to reduce anxiety becomes its amplifier.
We built 4Angles specifically to break this loop.
Morein Research
PerspectiveChanges Everything
Four angles. One message. No signup.