A new generation of generative AI is turning board advisory into software. Diligent launched an AI Board Member last week, adding personas such as AI-Buffett, an activist, a cyber-security specialist, and a geopolitics expert; trained on board materials, news, and research, it can supply director-level perspectives on demand. Other vendors and institutions are moving in parallel: Lloyds Banking Group is rolling out AI board advice with Board Intelligence, Mubadala has deployed its home-grown Maia in its investment committee and at board level across portfolio companies, and Nasdaq's Boardvantage and Govenda's Gabii offer comparable governance tools. For now these systems are used chiefly for pre-meeting preparation, analysing board papers, and summarising minutes and materials: decision support, not a substitute for directors' own judgment.
Compared with a year ago, board attitudes toward an AI director have shifted from "we would never" to "how do we adopt this safely", yet the industry still positions the technology as an adviser, not a voting member. UK regulation and governance frameworks set the key legal threshold: company law's definition of a director does not cover a black box sitting at the end of the table. Letting AI vote would mean granting the software agency and a persona, which remains taboo in the current climate. More fundamental still is the boundary of fiduciary duty: the duties of loyalty and care cannot be outsourced to anyone, human or machine, and legal views from White & Case and UK practitioners agree that directors cannot delegate their statutory duties to AI.
At a practical level AI is already delivering real value: boards use it to revisit past minutes and decision outcomes, test hypotheses, map governance strengths and weaknesses, track who gets heard and who gets ignored in meetings, and spot risks faster. Matching private, ring-fenced deployments also shrink the exposure created when executives would otherwise push confidential documents through public LLMs. Although an experiment cited by HBR found AI outperforming humans on decision quality, use of evidence, inclusivity, and implementation planning, AI still falls short on the informal, interpersonal, and cultural side of governance. Experts therefore stress that AI speeds up access to information, but whether it can replace judgment remains an open question; as Buffett insisted in his 2019 shareholder letter: "Thought and principles, not robot-like 'process'."