Parmy Olson argues that the market's whipsawing AI narrative was on full display during a recent bout of volatility that culminated in a roughly $1 trillion tech rout, which investors attributed to new legal and financial tools from Anthropic PBC, even though its open-source legal plugin for Claude Cowork is portrayed as less effective than specialist legal-AI products such as Harvey and Legora. She frames the selloff as herd behavior: investors were already nervous and seized on the news as a catalyst to rush for the exits, highlighting how quickly sentiment can flip between "AI bubble" and "AI disruption."
The column warns that more capable, widely adopted financial AI could worsen analyst groupthink: Anthropic says Claude Opus 4.6 can analyze company data, regulatory filings, and market information and produce assessments that would normally take a human days, raising the prospect that many analysts and investors could come to rely on the same small set of models. Olson notes that ChatGPT is used weekly by about 10% of the global population and argues that AI-model usage is concentrating among a few providers, analogous to cloud computing's domination by three firms (Amazon, Microsoft, and Google). That concentration, she suggests, could push equity analysts listening to the same earnings calls toward similar transcripts, interpretations, and trades. Richard Kramer of Arete Research Services LLP says AI may boost "good analysts" but will not fix the incentive problems of "50 analysts" jockeying for airtime on calls, or the conflicts of interest that skew ratings toward "buys."
Regulators have flagged similar dynamics: Federal Reserve Governor Michael Barr warned in 2025 that ubiquitous generative-AI use by investors could drive herding and concentrate risk, amplifying volatility, and Olson argues the risk rises as models scale and become default tools. Anthropic says its context window has expanded to 1 million tokens from 200,000, enabling ingestion of thousands of pages in one pass, but the column cautions that probabilistic next-token generation tends to reproduce statistically familiar patterns rather than original ones, encouraging strategy convergence. Olson draws a parallel to the "flattening" of the internet, citing the Web's 1989 origin and a 2024 Science Advances study in which stories co-written with GPT-4 were more polished yet showed an "uncanny" similarity. She concludes that financial markets need diverse views, and that an AI-driven monoculture could inflate the same bubbles and overlook the same systemic vulnerabilities rather than eliminate them.
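The convergence mechanism the column describes can be made concrete with a toy sketch. The snippet below is purely illustrative (the tokens, scores, and temperature value are invented, not drawn from any real model): it samples a "next token" from a softmax over familiarity scores, and shows that at low temperature many independent samplers collapse onto the single most statistically familiar choice, which is the monoculture dynamic Olson warns about.

```python
import math
import random

def sample_token(scores, temperature, rng):
    """Draw one token from a softmax over familiarity scores.

    Lower temperature sharpens the distribution toward the
    highest-scoring (most statistically familiar) token.
    """
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    r = rng.random() * total
    for tok, w in exps.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point edge cases

# Invented scores: "buy" is the most familiar pattern in the toy data.
scores = {"buy": 2.0, "hold": 1.0, "sell": 0.5}

# 1,000 independently seeded samplers at low temperature.
draws = [sample_token(scores, temperature=0.2, rng=random.Random(i))
         for i in range(1000)]
print(draws.count("buy") / len(draws))
```

With temperature 0.2, the softmax weight on "buy" exceeds 99%, so nearly every independent run emits the same token: independence of the actors does not prevent convergence when they share one underlying distribution.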