By Stacy Kirk
It is impossible to overlook the major controversy sparked by the failures of the mobile voting app commissioned to support the recent Iowa elections. The app, designed to tabulate and report precinct results, had significant glitches that delayed the results by a full day, deepening skepticism about whether the world is ready for mobile voting.
According to the New York Times, the Iowa Democratic Party commissioned Shadow Inc. to develop the mobile app between November and December for $63,183. The app, which was designed to work on most mobile devices, was never tested in a large-scale simulation; fixes were still being made up until two days before the release.
The main lessons that can be learned from this debacle are:
1. Tight timelines can compromise quality
You cannot build, customize, test, and train for a mission-critical app release in two months, even with an army of developers and testers. An end-user application needs sufficient time to develop training, run user-acceptance and beta testing, collect feedback, and resolve any issues found. Even if Shadow Inc. has a great development and testing team, the best teams make mistakes, especially when pushed to a short timeline.
2. Understand the costs
I don’t want to say “You get what you pay for,” but… I know $63,183 may sound like a lot to the general public for an application; however, top app developers and security analysts are expensive. Developing a secure, well-tested application takes a team of experts that includes not only developers but also testers specializing in security, usability, performance, and functionality.
3. Limit device support
By not limiting the number of devices the Iowa caucus app had to support, Shadow Inc. attempted the feat of delivering a quality solution on an impossibly short timeline. Testing every operating-system-to-device combination is not only time-consuming; a fix for one device can easily break the app on another. When my team and I test an app, we first determine the combinations of devices and operating systems to be supported. Even with our lab of mobile devices, we know that the risk of poor quality increases with the number of devices that must be supported. The better option would have been to release the app to a limited set of devices and offer a simple web-based fallback for caucus managers with bad cell service or older phones.
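Deciding which device/OS combinations to support can be made systematic rather than ad hoc. The sketch below (the device names and usage shares are hypothetical, invented for illustration) greedily selects the most-used combinations until a target share of real users is covered, keeping the test matrix small enough to verify on a tight timeline:

```python
# Minimal sketch with assumed usage data: pick the smallest set of
# device/OS combinations that covers a target share of expected users,
# so the test matrix stays tractable.

DEVICE_USAGE = {
    # (device, os_version): share of expected users -- illustrative figures
    ("iPhone 8", "iOS 13"): 0.30,
    ("iPhone 7", "iOS 12"): 0.20,
    ("Galaxy S9", "Android 9"): 0.18,
    ("Galaxy S8", "Android 8"): 0.12,
    ("Pixel 3", "Android 10"): 0.10,
    ("Moto G6", "Android 8"): 0.06,
    ("iPhone 6", "iOS 12"): 0.04,
}

def supported_matrix(usage, target_coverage=0.85):
    """Greedily select the most-used combinations until the
    cumulative user share reaches target_coverage."""
    selected, covered = [], 0.0
    for combo, share in sorted(usage.items(), key=lambda kv: -kv[1]):
        if covered >= target_coverage:
            break
        selected.append(combo)
        covered += share
    return selected, covered

matrix, coverage = supported_matrix(DEVICE_USAGE)
print(f"Support {len(matrix)} combinations covering {coverage:.0%} of users")
```

With these assumed numbers, five combinations cover 90% of users; everyone else would be pointed at the web-based fallback described above.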
4. Leverage automation
This is a must for simulating responses and performance load. It’s not clear what testing best practices Shadow Inc. followed, but reports indicate that large-scale testing was not done. Keep in mind that this application was not used by all Iowa Democratic voters; the user base was only about 1,700 caucus sites, which could easily have been simulated with test automation.
5. Avoid late releases
Reports say the app build was still being updated up to two days before the release. That margin was far too slim, leaving no time for a final round of testing or for Apple App Store approval. This would explain the desperate decision to distribute the app through TestFlight, a testing platform, as reported by the Wall Street Journal on Tuesday.
In my experience, a last-minute release is always a reflection of an app’s instability. The “go/no-go” deadline should have been two weeks ahead of the caucuses. It is better to release an app with known issues that users can be trained to avoid than to keep fixing it up to the last minute and risk breaking other functionality in the process.
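A go/no-go cutoff is easy to make concrete in a release checklist. This small sketch applies the two-week buffer suggested above to the 2020 Iowa caucus date (February 3) to get the code-freeze deadline:

```python
# Minimal sketch: derive the go/no-go code-freeze date from the event
# date and a two-week buffer, as suggested above.

from datetime import date, timedelta

caucus_day = date(2020, 2, 3)              # 2020 Iowa caucuses
go_no_go = caucus_day - timedelta(weeks=2)  # last day to ship the build
print(go_no_go.isoformat())  # 2020-01-20
```

Anything not frozen by that date ships as a documented known issue, with workarounds covered in user training, rather than as a last-minute fix.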
Many political commentators recommend going back to paper voting. While this event may indicate that “online voting is not ready for prime time,” as one computer science professor told the New York Times, digital voting is inevitable. We just need to understand that when the impact of failure is as visible as it was in the Iowa caucuses, there can be no shortcuts. If an estimate or timeline seems too good to be true, it probably is. The time required for user, security, performance, and accuracy testing is critical. Fortunately, testing best practices leverage automation in all of these areas, so that time can be shortened through focus and expertise.