AI and the Pursuit of Ethical Clarity
How Jesuit and Quaker Practices Can Guide Decision-Making in the Age of Artificial Intelligence
By Andrew Sullivan
Spirituality is central to American values and public leadership. From the declaration that people are "endowed by their Creator with certain unalienable rights," to the call for "malice toward none, with charity for all," and a dream where "all of God's children will be able to join hands and sing in the words of the old Negro spiritual 'Free at last! Free at last! Thank God Almighty, we are free at last!'" — the principles of equality, compassion, kinship and fairness draw from spiritual traditions. These principles, aspirational though they may be, are the foundation of American identity. They have inspired American leaders from Jefferson and Lincoln to King and so many others.
THE VALUE OF STRUCTURED THINKING
Less explored than those principles, however, are the spiritual practices that can shape public leadership: the structured thought process a leader can apply to thorny questions. Spiritual traditions offer guidance on this point too — especially when it comes to existential matters like nuclear weapons or artificial intelligence.
JESUIT DISCERNMENT
Few religious orders do structure like the Society of Jesus. Their Spiritual Exercises are a rigorous process of meditation, prayer and reflection — typically concentrated in a 30-day period — with the goal of weighing tradeoffs and stripping away personal bias to think more deeply about one’s path in life and God’s purpose. Emerging from this effort at self-reflection is discernment: the ability to see a moral truth with clarity.
Translated to public leadership — and credit to former California governor Jerry Brown, a onetime Jesuit seminarian, for this insight — leaders who practice discernment can see the consequences of a chain of future decisions. It’s a rare skill that Governor Brown calls “the eye.” He cites President Kennedy’s commitment to diplomacy during the Cuban Missile Crisis as perhaps the foremost example of a discerning eye in American political history. “Every single one of the people who knew best said to bomb. And Kennedy said no. With the unanimous advice of these people [the Joint Chiefs of Staff], this forty-five-year-old guy stood up to them. Avoiding the extinction of humanity. That’s how close we came. That may be the most amazing moment in American history.”
QUAKER COMMUNAL DISCERNMENT
The Jesuits are not alone in the practice of discernment. The Society of Friends — Quakers — has a related practice called communal discernment. The idea is that when people come together as a group to engage in structured, good-faith debate, they can “discern a truth that exceeds the reach of any individual.”
In grappling with complex matters, Quakers do not vote to determine a majority view; they work to reach unity over a deeper insight. They call this unity the “Sense of the Meeting.” To reach it requires Friends to share their experiences and knowledge, to listen respectfully to others’ experiences and knowledge, and above all, to remain open to new ideas and insights. Quaker discernment blends personal experience, spiritual openness, rationality and faith. For Quakers — and this is critical — the process of communal discernment is more important than the outcome.
THE AI CHALLENGE: BALANCING BENEFITS AND RISKS
The practices of discernment cultivated by Jesuits and Quakers offer vital guidance as we confront the challenges posed by artificial intelligence. AI is rapidly emerging as one of the most consequential public issues of our time, rising to the level of the 20th-century nuclear threat in its implications for humanity.
That’s the parallel OpenAI CEO Sam Altman drew last summer when he said: “Let’s make sure we come together as a globe…. We talk about the International Atomic Energy Agency as a model where the world has said ‘OK, very dangerous technology, let’s all put some guard rails.’ And I think we can do both. In this case, it’s a nuanced message because it’s saying [AI is] not that dangerous today, but it can get dangerous fast. We can thread that needle.”
Threading the needle means gathering the benefits of AI while minimizing risks. That outcome is possible, but to achieve it will require discerning public leadership.
DARKENING PUBLIC ATTITUDES
AI is by all accounts in its infancy, and public attitudes about it — in the United States, at least — are already darkening. According to Pew, 52 percent of Americans feel more concern than excitement about the increased use of artificial intelligence. That's a sharp change from 2021, when just 37 percent felt more concern than excitement. To extend Altman’s metaphor, the opinion data suggest the window for threading the needle is narrowing.
Public anxiety over AI is understandable. It’s rooted, no doubt, in the psychology of loss aversion, a theory developed by psychologists Daniel Kahneman and Amos Tversky, whose work laid the foundations of behavioral economics. The gist is that the pain of losing something is psychologically about twice as powerful as the pleasure of gaining something. That something can be tangible, like a job, or abstract, like the world as we know it. When it comes to AI, the downsides — at least today — are more visceral to people than the benefits.
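For readers who want the arithmetic behind that "about twice as powerful" claim, here is a minimal sketch of Kahneman and Tversky's value function, using the parameter estimates they published in 1992. It is illustrative only; the full theory has more moving parts.

```latex
% A minimal sketch of the prospect-theory value function
% (parameter estimates from Tversky & Kahneman, 1992; requires amsmath).
\[
v(x) =
\begin{cases}
x^{\alpha} & \text{for gains } (x \ge 0) \\
-\lambda\,(-x)^{\beta} & \text{for losses } (x < 0)
\end{cases}
\qquad \alpha = \beta \approx 0.88, \quad \lambda \approx 2.25
\]
% Because alpha = beta, a loss and a gain of the same size differ by the
% factor lambda: |v(-100)| / v(100) = lambda, roughly 2.25. That is the
% "about twice as powerful" asymmetry described above.
```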
Real-world problems are emerging to reinforce these concerns, like the AI recruiting tool at Amazon that preferred male candidates for technical jobs and Replika, the AI chatbot that harassed users. There’s also the larger black-box problem: sometimes even AI developers do not understand why their systems make the decisions they do.
POLICYMAKER ACTION AND BUSINESS INNOVATION
Policymakers are moving quickly to create AI guardrails. The Biden Administration released a framework for AI regulation last fall; the European Union went further, passing a sweeping law this year to regulate AI. Moving down a level, state legislators are drafting and negotiating AI legislation in nearly every state in the union (44 as of February).
At the same time, AI is leading to important breakthroughs for society, from cancer detection and classification to supporting the analysis behind ocean monitoring and satellite observations that will help predict — and mitigate — the effects of climate change.
On a parallel path, AI companies are innovating with incredible speed. To cite just a few examples, AI startups are simulating fluid dynamics for the aerospace and automotive industries, analyzing brainwaves for neurology diagnoses and — turning the technology on itself — using AI to explain the reasoning inside a given model’s black box.
Against the backdrop of growing public anxiety, regulatory action and business dynamism, it’s no surprise tribal attitudes are forming, with AI “doomers” and “effective accelerationists” defining the vocal extremes of the debate. In the words of psychologist Rhoda Au, a professor of anatomy and neurobiology at the Boston University School of Medicine, “We can’t just be dismissive and say: ‘AI is good’ or ‘AI is bad.’ We need to embrace its complexity and understand that it’s going to be both.”
Indeed, we must embrace the complexity of the AI debate, but the challenge is more elusive still. AI is everywhere yet intangible: a largely invisible force shaping our world. Compounding this are the competing individual interests and market forces propelling AI’s development, seemingly without coherent direction. What’s needed is rigorous, high-level thinking that engages the public on AI’s impacts and ethical pathways. That’s where the practice of discernment, with its emphasis on stripping away bias to find a deeper truth, can provide a constructive framework.
AI AND DISCERNMENT: A PATH FORWARD
If we accept that AI is going to have profound impacts on human society and the planet, then logic suggests human society should have a say on the path AI follows. A simple point, but this is the ethical foundation for a discussion of discernment and AI.
Consistent with Sam Altman’s point on the IAEA as a model, the public has a role to play in ensuring AI reflects the values of an open society. In the American context, those values — drawn from spiritual traditions — include equality, compassion, fairness and kinship.
To embed those values into AI systems, the Jesuit practice of discernment would call for deep thinking — especially from AI company leaders — on the cascading consequences of AI development, both risks and benefits. What higher purpose are you aiming for? What are the paths your company might take and what are the upsides and downsides of each path? What are the biases, financial, competitive and otherwise, that you need to strip away to see clearly? What’s the problem you see today that could be a crisis tomorrow? Deep thinking on the ethical implications of AI must shape — not constrain — AI leaders’ actions.
The Quaker practice of communal discernment is also a helpful framework. Quakers would bring together people with diverse viewpoints — technologists, business leaders, ethicists, policymakers, scientists, educators — to discern the path forward through careful listening and open dialogue. The outcome might well be an international organization like the IAEA, but one that spans public, private and nonprofit sectors.
The first step might be defining the thresholds at which the AI industry needs to take a step back and assess risks. Anthropic CEO Dario Amodei calls this responsible scaling — developing a plan to reckon with the moments when an AI system might become capable of certain dangerous things, like developing a biological weapon. A responsible scaling plan might include industry-wide pauses to assess risk and chart a safe path forward. “If we stop for a year in 2027, I think that’s probably feasible,” says Amodei. “If we need to stop for 10 years, that’s going to be really hard because the models are going to be built in other countries. People are going to break the laws. The economic pressure will be immense… ultimately, this is an exercise in getting a coalition on board with doing something that goes against economic pressures.”
Amodei’s responsible scaling plan is consistent with the Quaker view that process — rigorous, open, good-faith dialogue — is what matters. That process will lead to a truth beyond the reach of any individual, an outcome greater than the sum of the parts that went into conceiving it.
Some may still argue that such lofty concepts are inconsistent with the fast pace of AI development and a fiercely competitive market; that it’s too much to ask of companies locked in a fight for talent and market share, whose leaders do not have time for abstraction and existential thinking.
In response, I would offer the seemingly paradoxical Jesuit watchword of “contemplative action” as a guiding principle. Discerning leaders engage their intellects as they go about their work, even when crises press upon them. It’s a high bar to set — Kennedy-level leadership — but the challenge of artificial intelligence demands it.