Saturday, June 28, 2025

How Pioneering Boards Are Using AI by Stanislav Shekshnia and Valery Yakubovich, Harvard Business Review (July-August 2025)
When the Boardroom Meets the Bot: A New Era in Corporate Governance 

Introduction: The Algorithm at the Head of the Table

In 2014, a Hong Kong-based VC firm stunned the business world by appointing an algorithm to its board. At the time, it seemed like a novelty, perhaps even satire, in the face of Silicon Valley’s obsession with disruption. But a decade later, the question is no longer whether AI belongs in the boardroom, but how far its role should extend. The Harvard Business Review’s July-August 2025 article “How Pioneering Boards Are Using AI,” written by Stanislav Shekshnia and Valery Yakubovich, is a prophetic deep dive into this very future: a future both fascinating and fraught, where human judgment and artificial intelligence wrestle for primacy in the governance of global enterprise.


1. The Human Bottleneck: An Information Age Dilemma

Corporate directors, particularly nonexecutive ones, operate in a paradox: they wield extraordinary influence, yet engage only episodically with the organizations they oversee. As Shekshnia and Yakubovich note, most directors serve on multiple boards and attend only a handful of meetings per year. With this distance comes vulnerability: information asymmetry between directors and executives creates critical blind spots in decision-making.

AI, particularly large language models (LLMs), offers a radical remedy. By parsing vast quantities of data and distilling them into digestible summaries, AI arms directors with the context and insight they often lack. One Danish director’s description of ChatGPT as a “sparring partner” for board prep reflects a subtle but profound shift: algorithms are no longer passive tools, but intellectual collaborators.


2. Strategic Simulation: AI as the Prophet of Possibilities

The article highlights an often underutilized boardroom practice: scenario planning. While boards wax poetic about their fondness for “what ifs,” few actually engage in rigorous forecasting. AI, in contrast, thrives in this domain. By simulating complex environments and modeling variable shifts, it allows boards to visualize alternative futures with clarity and speed.

The case of a steel company using AI to compare facility investments is telling: not only did it provide direction, it reshaped the company’s strategic lens. In the words of one director, after feeding scenarios to ChatGPT and receiving conclusions similar to the board’s own, AI “confirmed we had gotten it right.” The tool became both advisor and validator.


3. From Spectator to Participant: The Rise of Virtual Board Members

If the first phase of AI integration was backstage support, the second is center-stage participation. Enter Aiden Insight, the UAE’s AI board observer with official meeting participation. Though still a “listener,” its insights and suggestions make it a silent but influential actor. As Shekshnia and Yakubovich suggest, the trajectory is clear: AI isn’t just advising the board; it’s becoming a member of it.

Yet the ambition is laced with risk. These models, while impressive, remain devoid of human nuance: incapable of reading a room, sensing hesitation, or challenging a colleague with delicate firmness. Their tendency to suggest premature votes reflects a misunderstanding of boardroom politics, where consensus is often cultivated rather than commanded.


4. The Dark Side of the Code: Risk, Bias, and the Ghosts of Data Past

The authors deftly unpack three central risks of AI integration: information leaks, data bias, and temporal anchoring. Each represents a different kind of threat: technical, ethical, and strategic.

Information leaks are less about AI itself and more about careless data governance. Yet the fear lingers. As one vice-chair warns, “Today I feed my board book to ChatGPT, and tomorrow my competitors will know what we’re discussing.” The concern is visceral, the potential reputational damage real.

Bias, however, is subtler and more insidious. As the Microsoft chatbot incident reminds us, AI is a mirror, sometimes reflecting the ugliest corners of the datasets it’s trained on. When boards rely on AI trained solely on management’s narrative, they risk reinforcing executive groupthink rather than challenging it.

Temporal anchoring is perhaps the most philosophical critique: AI is rooted in past data. Yet strategy, by definition, is future-oriented. One CEO nails the paradox: “AI knows almost everything about the past. But the past does not predict the future.” Yet neither, Shekshnia and Yakubovich remind us, does human instinct, which is likewise rooted in memory. The challenge is not to escape history, but to learn how to use it wisely.


5. AI as Coach and Critic: The Performance Review Revolution

One of the article’s most compelling anecdotes involves AI evaluating not just strategy but boardroom dynamics. In Switzerland, AI tools track speaking time, tone, and participation balance, then offer suggestions: less airtime for dominant voices, more space for quieter ones, and warnings against dismissive phrases like “no-brainer.”
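The kind of participation analytics described here can be sketched in a few lines. Everything below — the transcript format, the thresholds, the phrase list — is a hypothetical illustration, not the actual method of the Swiss tools the article mentions:

```python
from collections import defaultdict

# Hypothetical transcript: (speaker, seconds spoken, what was said).
transcript = [
    ("Chair", 120, "Let's review the proposal."),
    ("Director A", 300, "This is a no-brainer, we should approve it."),
    ("Director B", 40, "I have some reservations."),
    ("Director A", 260, "The numbers speak for themselves."),
]

# Illustrative list of phrases a tool might flag as dismissive.
DISMISSIVE_PHRASES = ["no-brainer", "obviously", "everyone agrees"]

def participation_report(turns, dominance_threshold=0.4, quiet_threshold=0.1):
    """Summarize speaking-time shares and flag imbalances and dismissive language."""
    seconds = defaultdict(int)
    flagged = []
    for speaker, secs, text in turns:
        seconds[speaker] += secs
        for phrase in DISMISSIVE_PHRASES:
            if phrase in text.lower():
                flagged.append((speaker, phrase))
    total = sum(seconds.values())
    shares = {s: secs / total for s, secs in seconds.items()}
    return {
        "shares": shares,
        "dominant": [s for s, sh in shares.items() if sh > dominance_threshold],
        "quiet": [s for s, sh in shares.items() if sh < quiet_threshold],
        "dismissive": flagged,
    }

report = participation_report(transcript)
# Director A holds ~78% of airtime and is flagged as dominant;
# Director B, at ~6%, is flagged as quiet.
```

Real products would add speech-to-text, sentiment, and tone analysis on top; the point is only that the underlying balance metrics are simple arithmetic over who speaks, for how long, and in what terms.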

This democratization of the boardroom, mediated by machine, is both thrilling and chilling. It invites a level of objectivity and introspection rarely found in executive culture. But it also raises concerns: will directors self-censor for fear of algorithmic judgment? Will spontaneity be sacrificed on the altar of optimization?


6. Learning to Think with Machines: From Digital Illiteracy to AI Fluency

A major theme throughout is the generational and experiential chasm between AI-native tools and analog-age directors. Many tried AI and abandoned it, overwhelmed by irrelevant outputs and uncertain truths. The remedy, the authors argue, is targeted training and human mentoring: board-level upskilling in both tool use and critical thinking.

What emerges is an implicit curriculum for the future director: part technologist, part ethicist, part strategist. The path to AI fluency mirrors the corporate transformation journey itself: cautious experimentation, peer coaching, and collective learning loops.


7. Experimentation as Strategy: Piloting AI Before Codifying It

Shekshnia and Yakubovich advocate an incremental approach. Start with public LLMs like ChatGPT, move toward enterprise models fine-tuned on internal governance data, and finally integrate firm-specific information. Crucially, the process must be collective. Chairs must resist the temptation to impose AI from above, and instead foster participatory exploration, ensuring buy-in and shared responsibility.

In this model, AI isn’t imposed; it’s co-evolved. Directors don’t just learn to use the tool—they help shape it. AI becomes not just a resource, but a reflection of the board’s own strategic and cultural DNA.


8. Sustaining the Shift: AI as a Culture, Not Just a Tool

Once AI enters the boardroom, the work is not done; it has only begun. As the authors emphasize, progress must be evaluated regularly, not just on performance but on engagement and experimentation. Chairs play a pivotal role in modeling this behavior: using AI visibly, embracing vulnerability, and sharing learnings openly.

The transition to AI-literate boards requires cultural rewiring. Recognition, coaching, and sustained enthusiasm are as important as software updates. AI integration is not a checkbox; it’s a leadership commitment.


9. The Future Is (Almost) Here: What Happens When AI Gets a Vote?

The authors pose a provocative hypothesis: one day, every board may have an AI member with voting rights. While this may seem dystopian to some, it underscores the trajectory of current trends. From data assistant to performance analyst, from simulation engine to co-decision maker, AI’s role is expanding rapidly.

Yet with great power comes the need for ethical safeguards, legal frameworks, and philosophical clarity. What values are embedded in our algorithms? Who owns AI’s mistakes? Can a machine understand fiduciary duty?

These are not technical questions; they are human ones. And they require boards to evolve not just in competence, but in conscience.


10. Conclusion: Why This Article and This Future Matter

At its core, “How Pioneering Boards Are Using AI” is not about technology. It is about governance, trust, and the limits of human judgment. In an era of unprecedented complexity, where speed and scale often outstrip deliberation, AI offers both a crutch and a catalyst. Used wisely, it can expand human capacity. Used poorly, it can amplify blind spots.

This article is essential reading for anyone who sits on a board, advises one, or is shaping the next generation of governance models. It offers not just a roadmap, but a mirror—revealing how much work we must still do to marry human wisdom with machine intelligence.


About the Authors

Stanislav Shekshnia is a senior affiliate professor at INSEAD and co-director of the Leading from the Chair program. As board chair of Technoenergy AG, he brings firsthand experience to his research on leadership and corporate governance.

Valery Yakubovich is executive director of the Mack Institute at the Wharton School and adjunct professor of management. His research explores innovation ecosystems and organizational design.

Together, they blend academic rigor with real-world insight, offering a rare and valuable lens on the convergence of AI and boardroom dynamics.


Why You Should Read This Article

Because the future is already knocking on the boardroom door.

Because leadership today means understanding not just balance sheets, but algorithms.

Because ignoring AI is no longer a strategy—it’s a liability.

Because those who lead tomorrow’s companies must learn to lead with machines, not just over them.

And because, as this article shows, the most effective directors of the future will not be those who know the most—but those who ask the machines the right questions.
