When someone dials a non-emergency line at their local Public Safety Answering Point (PSAP), they might soon hear something unexpected: an artificial intelligence (AI) system on the other end of the line. This technological shift is already happening in dispatch centers across the country, and it’s sparking important conversations about the future of public safety communications.

For decades, human dispatchers have been the sole gatekeepers between the public and emergency services. They've handled everything from life-threatening emergencies to questions about parking tickets and robocalls pitching insurance deals. But as call volumes continue to climb and staffing shortages persist, many PSAPs are turning to AI technology to help manage the load. The question isn't whether this change is coming; it's whether we're ready for it.

The Case for AI Dispatchers

The numbers tell a compelling story. Many PSAPs report that a significant portion of their incoming calls, sometimes as high as 60 to 80 percent, are non-emergency in nature. In my own experience as a PSAP director, even in 2021, our call volume reflected similar patterns. These calls might be requests for general information, reports of minor incidents, or questions that could easily be answered through other channels. Add to that the growing number of spam and sales calls. When human dispatchers spend time handling these routine calls, they have less capacity to focus on genuine emergencies where every second counts.

This is where AI technology shows its greatest promise. An AI dispatcher can handle multiple calls simultaneously, never needs a break, and doesn’t experience the fatigue that comes with managing hundreds of calls per shift. For PSAPs struggling with understaffing, this technology offers a way to maintain service levels even when human resources are stretched thin.

The relief felt by dispatchers in centers using AI systems is real and measurable. Human operators report less stress when they're not constantly juggling non-emergency calls alongside critical incidents. They can give their full attention to situations that truly require human judgment, empathy, and expertise. In this way, AI doesn't replace human dispatchers; it frees them to do what they do best: handle emergencies.

AI systems also bring consistency to call handling. They follow protocols exactly, ask the same screening questions every time, and don’t have bad days that affect their performance. This standardization can improve service quality and ensure that every caller receives the same level of attention, regardless of when they call or which system answers.

Another significant advantage is the AI’s ability to filter out nuisance calls. Robocalls, spam, and sales calls waste valuable dispatcher time. An AI system can identify and manage these calls efficiently, acting as a first line of defense that protects human dispatchers from these interruptions. This filtering alone can save hundreds of hours per year in busy dispatch centers.

From a practical standpoint, AI systems can also gather and organize information more efficiently than manual note-taking. They can simultaneously listen, categorize, and route calls while collecting relevant details. This streamlined data collection means that when a call does need to be transferred to a human dispatcher or emergency services, all the essential information is already documented and ready to go.

The Concerns and Challenges

As a former director, I understand the concerns of the workforce as well as the promise of relief from a staffing crisis. Had a tool existed that could reduce the strain and demand on my staff, I would have seen clear advantages in adopting it.

Despite these advantages, the implementation of AI dispatchers raises legitimate concerns that deserve serious consideration. The most immediate concern for many people is the loss of human connection during a moment when they’re reaching out for help.

Even non-emergency calls often come from people who are worried, confused, or upset. A resident calling about a suspicious vehicle in their neighborhood might not be facing an immediate threat, but they’re still anxious. A person calling to report a non-injury traffic accident is dealing with stress and frustration. In these moments, the warmth and reassurance of a human voice can matter tremendously. An AI system, no matter how well-designed, cannot provide the same emotional support that a human dispatcher offers.

There’s also the question of judgment and nuance. Human dispatchers are trained to read between the lines, to catch the subtle cues in someone’s voice that might indicate a situation is more serious than it initially appears. A caller might downplay an emergency out of embarrassment or fear. They might not know how to accurately describe what they’re experiencing. A seasoned dispatcher can pick up on these red flags—can an AI system do the same?

The technology itself isn’t perfect. AI systems can misunderstand accents, struggle with background noise, or misinterpret ambiguous statements. When the stakes involve public safety, even a small error rate is concerning. A call that gets miscategorized as non-emergency could delay response to a developing crisis. A system that fails to recognize urgency in a caller’s voice could have serious consequences.

Privacy and security concerns also loom large. AI systems process and store sensitive information about community members and their situations. How is this data protected? Who has access to it? What happens if the system is hacked or compromised? These questions become even more critical when dealing with vulnerable populations who may already distrust institutions.

The implementation of AI dispatchers also raises workforce concerns. While proponents argue that AI will free up human dispatchers rather than replace them, workers in the field understandably worry about job security. Will successful AI implementation lead to reduced staffing levels in the long run? Will fewer new dispatchers be hired and trained? These concerns are heightened in communities where PSAP jobs represent stable, middle-class employment.

There’s also a learning curve and adaptation period that shouldn’t be underestimated. Dispatchers need training on how to work alongside AI systems. Technical problems need to be resolved. Community members need to be educated about what to expect when they call. During this transition period, there’s potential for confusion, frustration, and even service disruptions.

Finally, there’s the question of equity. Not everyone has the same comfort level with technology. Elderly residents, non-native English speakers, people with certain disabilities, or those who simply aren’t tech-savvy might struggle to communicate effectively with an AI system. If we’re not careful, we could create a two-tiered system where some community members receive better service than others based on their ability to navigate AI technology.

Finding the Right Balance

The debate over AI in PSAPs isn’t really about whether technology is good or bad. It’s about finding the right balance between innovation and human judgment, between efficiency and empathy, between progress and precaution.

The most successful implementations will likely be those that view AI as a tool to support human dispatchers, not replace them. AI can handle the routine, the repetitive, and the clearly non-emergency. But there should always be an easy path to reach a human dispatcher when needed. A simple “press 0 for a human operator” option, clearly communicated at the start of the call, can address many concerns about accessibility and human connection.

Transparency is also crucial. Communities deserve to know when they’re speaking with an AI system and what that system can and cannot do. Clear communication about the capabilities and limitations of the technology builds trust and helps manage expectations.

Ongoing evaluation and refinement must be built into any AI implementation. PSAPs should regularly assess how well the system is performing, where it’s falling short, and how it can be improved. Feedback from both dispatchers and community members should drive continuous improvement.

Training and support for human dispatchers is equally important. They need to understand how the AI works, how to troubleshoot problems, and how to seamlessly take over calls when necessary. The goal should be a smooth partnership between human and artificial intelligence, not an awkward handoff.

Looking Ahead

As AI technology continues to advance, its role in public safety communications will likely expand. The question facing PSAPs today isn’t whether to engage with this technology, but how to do so responsibly and effectively.

The communities that will benefit most are those that approach AI implementation thoughtfully, with clear goals, robust safeguards, and ongoing commitment to evaluation and improvement. They’ll be the ones that use technology to enhance public safety while preserving the human elements that matter most in moments of need.

Change in public safety systems always requires careful consideration. The stakes are too high for hasty decisions. But when implemented well, with the right balance of innovation and humanity, AI dispatchers could help ensure that when someone calls for help—whether it’s an emergency or not—they get the response they need, when they need it.

Sound interesting? Equature has a solution for you, already working at PSAPs across the country. Contact us for more information. We are here to help you.

~ Cherie Bartram, ENP.