What are the products of peace?
In the military-industrial complex or war economy, this question answers itself: The products are weapons, drones, uniforms, surveillance systems, logistics platforms, and all manner of other items that directly or indirectly serve the warfighter. The defense sector sells tangible goods and services to customers who understand exactly what they’re buying and for what purpose: war and defense.
What is the equivalent on the peace side?
I wrote about this question after the USAID collapse, framing it as the need for a “peace-industrial complex.” That post caught the attention of the Alliance for Peacebuilding, who invited me to co-lead their Peace Economies at Scale initiative. And that’s how I found myself in October 2025 at their headquarters in Washington D.C. preparing for a closed-door conversation with family offices and potential investors about something called “peace tech.”
I’ll be honest: I was skeptical. Technology doesn’t have a sterling track record when it comes to delivering social goods or respecting human rights (e.g., Facebook in Myanmar). The AI tools I’ve used aren’t foolproof; they contain bias and lack guardrails. And I worry about their application in contexts as sensitive as peacebuilding, where human lives are at stake.
Before the investor event, I was chatting with Brian Abrams, founder and managing partner of B Ventures Group, the first venture capital fund explicitly focused on peace tech. He told me a story that helped me see his vision of how peace tech could be helpful.
And as we’ll see, the philosophy and character of the people running these companies, as well as the products they’re creating, are what cultivate the soil for the peace-industrial complex to grow.
The Seed is Planted in Ramallah
About ten years ago, Abrams was investing in an Israeli startup in the cloud data security space. The company was based in Israel, but as it grew, it built a development center in Ramallah staffed by Palestinian colleagues.
The company received an acquisition offer from a multinational tech company. The Israeli founders’ response was that they would sell, but only if the multinational retained their Palestinian colleagues.
“To me, that was maybe almost a subconscious spark at the time,” Abrams told me in a recent interview. “It showed me what was possible in terms of people seeing others as similar to themselves.”
That seed germinated for a decade. Meanwhile, Abrams continued to be successful as a co-founder of a venture capital fund, investing in dozens of companies and building a career in the traditional mold of investment: take money, make more money. But the Ramallah story never quite left him.
What sprouted this seed into B Ventures were two unlikely nutrient sources: the 18th-century philosopher Immanuel Kant and the Vietnamese Buddhist monk Thich Nhat Hanh.
Use Money for People
“Kant said we should never treat a human being as a means to an end,” Abrams explained. “We should only treat human beings as ends in and of themselves. In other words, don’t use people for stuff. Use stuff for people. Don’t use people for money. Use money for people.”
Abrams saw this as a first principle that could reorient how capital operates. Basically: bring humans into the foreground.
“In the venture capital world, we talk about limited partners, investing in fund managers, investing in portfolio companies that have employees—or the worst abstraction, ‘FTEs,’ full-time equivalents, which is awful, like that’s part of a person,” he said. “But if instead we think about human beings who have wants and needs and hopes and fears, and they’re investing in other human beings who have all that, investing in other human beings who gather other human beings, and they’re all working in service of human beings—it just fundamentally alters how we do what we do.”
Abrams also draws inspiration from Thich Nhat Hanh, and his concept of “interbeing.” This is the idea that we are not separate, discrete individuals but interconnected nodes in a larger whole.
“If we zoom out a little bit and see ourselves not as separate, discrete individuals, but rather as nodes on a network, parts of the whole, then violent conflict becomes incomprehensible,” Abrams said. “It becomes a form of collective self-harm.”
Inspired by Kant’s imperative and Thich Nhat Hanh’s teachings on interconnectedness, Abrams left the fund he had co-founded. He began researching what might be possible if he applied the toolkit of venture capital and startup innovation toward the problem of war itself.
What he discovered was a $19 trillion opportunity.
The $19 Trillion Cost of Conflict
According to the Institute for Economics and Peace, the global economic impact of violence topped $19 trillion in 2024, equivalent to about 13.5 percent of global GDP, or roughly $2,500 per person on the planet. Conflict deaths hit 25-year highs, with more conflicts active today than at any time since World War II. Military spending surged to $9 trillion. And that figure doesn’t capture the full cost: the displaced populations, the destroyed infrastructure, the generations of trauma, and the foregone economic development.
“In addition to hundreds of thousands of human beings who are killed, harmed, or displaced every year by violent conflict, there weren’t any venture capital funds that I was aware of focused on what we’re calling peace tech,” Abrams noted.
So he started one.
B Ventures Group launched as the first venture capital fund explicitly dedicated to peace tech, defined as technology that preempts, mitigates, or resolves violent conflict. The thesis is simple: if violent conflict is one of the largest drags on the global economy, then technologies that reduce it should be enormously valuable.
“When you see a huge problem, a huge opportunity to do something about it, and you know how to do that thing, I think in life you have to do it,” Abrams said. “It’s not just the thing you can do—it’s the thing you can’t not do.”
Returns First, Then Peace
Here’s where Abrams’s approach gets interesting.
When I asked how he explains peace tech to investors, he said: “Most of the investors in my fund, if not all, invested first and foremost to generate a great return. That was their main reason for investing. I think they like the peace tech angle. But their number one reason for investing is to generate great returns.”
This sequencing matters. Abrams believes that leading with returns—rather than leading with peace outcomes—is what has been missing from efforts to build a peace economy.
As he explained: “Had I had those reversed—if I said, ‘Look, here are our peace outcomes that we’re solving for first, then secondarily, we’ll try to generate a return’—I don’t think the vast majority of [the investors in the fund] would have invested. They would have said, ‘That’s not why I do this. When I give, I give. When I invest, I expect a return.’”
Abrams sees it as the only viable path to develop peace tech: “I’d rather start from the return side and then optimize for the peace outcomes, because I think that can succeed. But if you do it the other way around, I don’t think we even get off the ground.”
It’s not that peace doesn’t matter. It’s just a different starting point, where financial returns create the engine that funds peace outcomes at scale.
His big bet is that making money becomes an instrument to making peace.
What Peace Tech Actually Looks Like
So what are the products of peace tech? Abrams offered several examples from B Ventures’ portfolio, but one stood out: Anadyr Horizon.
Anadyr builds AI-powered crisis simulation tools. Essentially, it is a wargaming platform that can run thousands of geopolitical scenarios in the time it would take traditional analysts to run a handful. The platform models how different actors might behave under various conditions, helping governments, corporations, and financial institutions identify escalation pathways before they reach violence.
“Our human brains can only hold so many variables, players, actors, levers, connections at the same time,” Abrams explained. “But AI can hold thousands and thousands of different players, actors, levers, connections, scenarios, and sequencing of those scenarios. Not that AI all by itself is going to help prevent war, but AI can enable human beings as a co-pilot to make much better decisions.”
Abrams shared an example. About six weeks before we spoke, Anadyr’s founder told him their model was showing the probability of a U.S. land strike on Venezuela at roughly double what prediction markets like Polymarket were suggesting. “Sure enough, it happened a few weeks later,” Abrams said.
The question, of course, is what happens with that predictive capacity. Seeing a conflict coming is not the same as preventing it—the U.S. in fact conducted a land strike.
I’ve seen firsthand how having evidence does not equate to influencing decision-making. Early in my career, I worked at the Department of Defense, where I was asked to do a political network analysis to inform scenarios for what might happen if the leader of a foreign country were deposed: who could replace him and maintain stability. Based on my analysis, the best-case scenario was for the leader to stay in power, because there was no clear successor and the situation would likely deteriorate the way Iraq had in its power vacuum.
That analysis never reached the general on the frontline. My commander wouldn’t let it go up the chain because it contradicted U.S. strategy in the region. The evidence existed. The influence didn’t. The leader was deposed, the country descended into a protracted conflict, and remains unstable to this day.
Abrams argues the startup model can solve this problem. “Let’s say you’re not in government and you’re able to say, ‘I’m not going to send this analysis to my boss and hope it goes to their boss and their boss. I’m going to post it publicly on the internet.’ You circumvent the whole chain of command.”
It’s an appealing vision. But the gap that peace tech claims to address—not only generating better analysis, but creating pathways for that analysis to actually reach and influence decision-makers—remains unproven.
The Dual-Use Tension
Another thorny issue involves how these companies market themselves. Anadyr, for example, publicly brands itself as both defense tech and peace tech, describing its work as “crisis simulation in an era of strategic surprise.”
I was eager to hear Abrams’s thoughts on this tension.
“Defense tech is well known,” he explained. “If somebody’s searching for AI wargaming defense tech, and we say peace tech, they might not come up in search results. But then when you look at how they put their product or solution into the marketplace, they’re very careful about who they work with, who they sell to, and how it’s used. That’s where those peace tech guardrails come in.”
Abrams described what he called “Hippocratic guardrails,” a do-no-harm ethos that guides portfolio companies even when they don’t use peace language in their marketing. He referenced his inspiration for this ethos: “I often harken back to Google’s motto in the early days: ‘Don’t be evil.’ It’s a pretty good North Star.”

But guardrails are not governance structures. When I pressed on whether Anadyr has formal mechanisms to ensure peace outcomes, Abrams was candid: “I don’t believe so, other than a psychological and philosophical North Star. We find great people like the founders of Anadyr, and we trust them to make great decisions.”
This is the crux of the peace tech proposition and its central tension. The same tools that can predict conflict can also be used to prosecute it more effectively. The same AI that models de-escalation pathways can model escalation advantages. The difference lies in the choices made by the people building and deploying these systems.
“I’d rather invest in a great, potentially multi-billion-dollar startup like Anadyr and have to trust that they’re going to do the peace part of it right,” Abrams said, “than overthink the peace part of it and fail to create a really successful company.”
The Governance Gap
Abrams’s reference to Google’s early motto—“Don’t be evil”—is instructive, though perhaps not in the way he intended.
Google reportedly removed that motto from the preface of its code of conduct in 2018. By then, the company had become embroiled in controversies over data privacy, algorithmic bias, and contracts with defense and intelligence agencies and Customs and Border Protection that many of its own employees found objectionable. That same year, about a dozen employees resigned and over 4,000 signed a petition demanding Google withdraw from Project Maven, a Pentagon program using AI to analyze drone footage. The employees argued the work directly contradicted the company’s famous motto. Google ultimately chose not to renew the contract.
The Google saga reveals how a philosophical North Star can collide with commercial pressures and, well, incentives to make money.
This is the challenge that peace tech will face. Early-stage startups can operate on trust and shared values. Founders can be selective about customers and use cases. But what happens when those companies scale? When they take on new investors with different priorities? When they go public and face quarterly earnings pressure? When a lucrative defense contract conflicts with peace tech principles?
Abrams is betting on character by finding “great people” and trusting them to make good decisions. It’s not an unreasonable bet at this stage. But character-based governance has a mixed track record in business. And not institutionalizing peace standards poses a significant risk of peacewashing.
The peacebuilding field has spent decades developing frameworks for conflict sensitivity, do-no-harm principles, and accountability mechanisms. These aren’t perfect, but they represent hard-won wisdom about how good intentions can go sideways in complex environments. Peace tech, if it’s serious about the “peace” part, may eventually need to institutionalize similar guardrails to reinforce good character in its leadership.
The Move 37 Possibility for Peace Tech
Despite these challenges, Abrams is optimistic about the possibilities.
He referenced the 2016 match in which Google DeepMind’s AlphaGo defeated world champion Go player Lee Sedol. The decisive moment came in Game 2, when AlphaGo made a move—Move 37—that no human player would have conceived. AlphaGo itself estimated there was only a 1-in-10,000 chance a human would have played it.
Commentators initially thought it was a mistake. It turned out to be the move that won the game.
Abrams uses Move 37 as a touchstone when describing what peace tech might achieve. “The AI was capable of something we couldn’t even imagine it was capable of,” he said. “And that’s the possibility. What does that look like for peace tech? First, let’s just have the humility to acknowledge that it’s possible—that there may be something we can’t see coming that startups, AI, could create.”
He imagines a scenario: A startup company runs its models on an emerging conflict and determines, with probabilistic accuracy, that a cooperative approach would achieve better outcomes than a military one. That analysis goes directly to decision-makers through public channels, bypassing the bureaucratic filters that might otherwise suppress inconvenient findings.
And it prevents violence.
“Is it always going to work? Of course not,” Abrams admitted. “But that’s the Move 37 thing. It doesn’t have to work every time. It just has to be more possible than it was before.”
Open Questions
I came away from my conversations with Abrams with more questions than answers, which I think is an honest and appropriate response to a field this nascent.
Peace tech offers something concrete that peacebuilders have long lacked: tangible products that people can wrap their heads and their investment portfolios around. AI-powered crisis simulation. Platforms that model de-escalation pathways. Tools that help decision-makers see new ways of navigating tough peace negotiations. A Duolingo for communication skills. These are products that can be built, sold, and scaled in ways that traditional peacebuilding programming cannot.
While the tensions are real, Brian Abrams and B Ventures have provided one answer to peacebuilders’ questions about what comes next: venture capital in service of peace.
Whether that answer proves sufficient will depend on choices about governance, accountability, and who gets to define what “peace” means when there’s money to be made from it.
For now, at least, peace has a minimum viable product.
