People make mistakes, which is why user interface and software design is so critical. Just ask the Hawaii Emergency Management Agency (HEMA), which earlier this month accidentally sent a false inbound ballistic missile warning to residents and tourists, urging them to seek shelter.
“This is not a drill,” read the message, which appeared on thousands of phones as well as TV and radio stations amid growing nuclear tensions between the US and North Korea. Not surprisingly, people panicked, sending terrified messages to friends and loved ones for more than a half hour—at which point HEMA finally announced that the alert was a false alarm.
Later, the agency admitted that an employee pressed the wrong button when testing the missile warning system, in part because the badly designed software had no safeguards against false alarms.
Help a User Out
The incident prompted the Federal Communications Commission (FCC) to launch an investigation.
“Based on the information we have collected so far, it appears that the government of Hawaii did not have reasonable safeguards or process controls in place to prevent the transmission of a false alert,” FCC Chairman Ajit Pai said in a statement. “Federal, state, and local officials throughout the country need to work together to identify any vulnerabilities to false alerts and do what’s necessary to fix them. We also must ensure that corrections are issued immediately in the event that a false alert does go out.”
According to the Washington Post, the only thing standing between a system test and sending a real missile alert was a drop-down menu option.
Good user-interface (UI) design hinges on isolating functions that serve different purposes. When a single screen separates an internal test from a command that sends a critical message to hundreds of thousands of people, it must provide clear visual cues. That can be as simple as using separate buttons, or changing the UI’s color theme when the user enters alert mode. Another best practice is an “Are you sure?” confirmation prompt before a command executes.
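The safeguards described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the function names and messages are invented, not HEMA’s actual system): the test and live actions are distinct entry points rather than adjacent options in one menu, and the live path demands an explicit confirmation step.

```python
def send_test_alert():
    """Internal drill: reaches no one outside the agency."""
    return "TEST ALERT (internal only)"

def send_live_alert(confirm):
    """Live alert: runs only after an explicit confirmation callback
    returns True, mirroring an 'Are you sure?' prompt."""
    if not confirm("Send a REAL missile alert to all phones?"):
        return "Alert cancelled"
    return "LIVE ALERT SENT"

# Because the two actions are separate functions -- not neighboring
# entries in a drop-down -- a single slip cannot swap one for the other,
# and the confirmation gate catches the slip that still gets through.
```

In a real UI, the `confirm` callback would be a modal dialog; here it is just a function, so `send_live_alert(lambda prompt: False)` safely returns "Alert cancelled".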
The Hawaii missile alert system contained none of those features.
No Path to Correct Mistakes
HEMA used Wireless Emergency Alerts (WEA), a public safety system that sends alerts to all mobile devices within a designated area. It’s an effective way to reach many people on short notice, but WEA messages are limited to short text: they can’t contain images, clickable phone numbers, or links to online sources. Recipients are left to investigate the warning on their own.
What made the Hawaii incident worse was that the system could not issue corrections; as the Post reports, the Federal Emergency Management Agency (FEMA) provides HEMA “standing permission…to use civil warning systems to send out the missile alert—but not to send out a subsequent false alarm alert.”
Clearly, it hadn’t occurred to the design team that an operator might press the wrong button. HEMA posted an update tweet about 13 minutes after the initial alert was sent, but the message didn’t reach as many people as the WEA. A full 38 minutes passed before a second WEA was sent, informing everyone that there was “NO missile threat.”
“Part of the problem was it was too easy—for anyone—to make such a big mistake,” a spokesman for HEMA told the Post. He also said the agency has suspended the drills and added safeguards to the system, including a prompt to confirm the operator’s intention before an alarm is sent.
The Hawaii incident is a reminder that design errors as small as choosing the wrong UI elements or skipping simple safeguards can have broad repercussions. It underscores the growing responsibility of software developers and engineers as software becomes ubiquitous.
As for the employee who made the mistake, he will not be fired, according to the HEMA spokesman. That is only fair. When software fails this miserably, developers—not users—should be held to account.