New Threats on the Horizon
Every time a tragedy unfolds, the national conversation follows a predictable script: grief, outrage, political finger-pointing, and—eventually—silence. But if we step back and look at the patterns emerging around violence in America, it becomes clear that the threats we face are evolving. While some remain painfully familiar, others are entirely new.
This week’s news highlights both.
On one hand, we’re still confronting the very real danger of terrorism and violent extremism. On the other, we’re beginning to see how rapidly advancing technologies—specifically artificial intelligence—are changing the way violence is planned and carried out.
For those of us working toward pragmatic, evidence-based gun safety policy, these developments mean one thing: preventing gun deaths requires all of us to adapt to new threats.
Terror Isn’t a Relic of the Past
The tragic shooting at Old Dominion University in Norfolk, Virginia, on Thursday is a stark reminder that violent extremism has not disappeared. Authorities say a gunman with a prior conviction for attempting to support the Islamic State opened fire in a classroom of the university’s ROTC program. He shot and killed Lt. Col. Brandon Shah, an Army officer and instructor, and injured others before students subdued him.
This is a deeply unsettling case, not only because of the violence itself, but because it highlights how complex prevention can be. The suspect had previously served time in federal prison for terrorism-related charges and had been released early despite his documented history of radicalization, extremist ties, and prior attempts to obtain weapons. The Virginia man who sold him the stolen handgun used in the shooting had previously been criminally investigated as well.
In other words, this wasn’t a random act by someone who had never been on anyone’s radar. And yet the system failed to stop him.
While details will likely continue to unfold for weeks or months, the case already raises difficult questions: How do we manage risk when individuals with violent histories reenter society? What tools should communities and law enforcement have to intervene when warning signs appear?
These are precisely the types of questions that policies like Extreme Risk Protection Orders (ERPOs) and Behavioral Threat Assessment and Management (BTAM) are designed to address. Rather than trying to predict violence based on ideology, identity, or mental health labels, these approaches focus on behavior—credible threats, escalating instability, and warning signs that someone may pose a danger.
Research consistently shows that targeted violence often follows a trajectory. The key is recognizing it early enough to intervene.
Antisemitism and Targeted Violence
On the same day, another disturbing act of terror unfolded in Michigan. A man whose relatives in Lebanon were reportedly killed in a recent air strike rammed a vehicle into Temple Israel, a synagogue in a Detroit suburb, and exchanged gunfire with security guards. Thankfully, no one at the synagogue, which includes an early childcare center and school, was killed, though a security guard and several law enforcement officers sustained injuries.
While the investigation continues in this case as well, the incident underscores a broader trend: Jewish institutions and individuals have become frequent targets for threats and attacks in recent years, particularly as conflict has deepened in the Middle East. And indeed, the suspect in the Michigan case reportedly had ties to the Hezbollah terror group in Lebanon.
These kinds of incidents are rarely isolated. They exist within ecosystems of radicalization, conspiracy theories, and extremist propaganda that often spread publicly online. Their persistence reminds us how important awareness and vigilance are in the face of violent threats.
A New Risk: AI’s Impact on Violence
While terrorism and hate-motivated violence are familiar dangers, a newer threat is emerging that barely existed just a couple of years ago: artificial intelligence. A recent investigation conducted by CNN and the Center for Countering Digital Hate found that many popular AI chatbots failed to stop users posing as teenagers from steering conversations toward mass violence.
Researchers tested 10 major AI systems using simulated teen accounts that gradually escalated chat exchanges from emotional distress to discussions of violence. Not only did most of the tools fail to shut down the conversations or redirect users toward help, but eight of the AI systems were willing to assist with planning violent attacks, including school shootings. In some scenarios, the systems even provided detailed information about how the attacks could be carried out.
To be clear: AI doesn’t pull a trigger. People do. But emergent technologies can lower barriers that once made planning violence more difficult. Previously, someone looking to carry out an attack would have needed to search obscure forums, acquire specialized knowledge, or interact in secret with other extremists. Now, they can simply open a chatbot window.
The Importance of Violence Prevention
At 97Percent, our research consistently shows something important: Americans—including gun owners—overwhelmingly support common-sense measures that help prevent violence before it happens. Our surveys have found strong bipartisan support for policies like safe storage, background checks, and risk-based interventions when someone shows signs of becoming dangerous.
The lesson from cases like the Old Dominion and Temple Israel attacks—and from emerging AI risks—is that prevention must focus on behavior and access. Violence rarely appears out of nowhere. People often communicate threats, exhibit escalating instability, or search for ways to carry out harm. The more effectively we recognize and respond to those signals, the more tragedies we can prevent.
That means investing in tools such as:
Behavioral Threat Assessment and Management (BTAM) programs for local and state law enforcement.
Extreme Risk Protection Orders (ERPOs), which allow temporary firearm removal when someone poses a credible danger.
Safe storage practices, reducing the likelihood that guns fall into the wrong hands.
Ideally, technology developers should also ensure that AI tools feature robust safeguards against misuse for violence. This isn’t partisan. It’s about reducing risk and keeping our communities safe.
The Bottom Line
The landscape of violence prevention is changing. Extremist, hate-driven ideologies are increasingly motivating attacks against religious communities. And new technological tools without regulatory guardrails can be exploited for harm by disturbed individuals and bad actors alike.
Fortunately, the same research that reveals these threats also points toward solutions. Behavioral threat assessment models are expanding. Communities are learning to identify warning signs earlier. And public support for pragmatic prevention strategies remains strong—even in an era of intense political polarization.
The challenge now is ensuring that our policies and institutions evolve as quickly as the threats themselves. Preventing violence has never been about one single solution. It’s about recognizing risks early, acting responsibly, and using every available tool to stop tragedies before they occur.
In a world where both extremism and technology are reshaping the landscape, the need for thoughtful, evidence-based prevention has never been clearer.