Edge Cases

In the early 1990s, my brother-in-law, John, and his family drove from Maine to Pittsburgh for a visit.  Because the drive home was a long one, they departed Pittsburgh at 4:00 a.m.  As they climbed through the Allegheny Mountains on the Pennsylvania Turnpike, a deer leapt from a bridge abutment into the road in front of their small GM station wagon.  Aware that his two children were asleep in the rear of the station wagon, John glanced in his rear-view mirror.  Seeing a large truck following at close range, he kept his foot on the accelerator and took the deer at 60 mph.  The car was destroyed, but his family was unharmed.

For robotics and artificial intelligence experts, such as those currently designing autonomous driving systems, this event is a so-called Edge Case: a situation for which software is not necessarily trained, but to which it must respond immediately and correctly.  As robotics and AI move from controlled environments to assisting with the activities of daily living, programming for Edge Cases becomes both more difficult and more important.  The inevitable hiccups are what led California to ban Cruise’s self-driving cars and GM to halt use of the vehicles nationwide.

The growing pains of the fourth industrial revolution now underway will get worked out, just as in the earlier revolutions seeded by steam power, distributed electric power, and early computers and automation.  The open question is how that will happen. For insight, I recently participated in a program put on by Carnegie Mellon University, “Manufacturing & Warehouse Robotics Forum.”  Despite its title, the program ranged far beyond manufacturing and warehouse applications.

Particularly interesting were panelists’ opinions about where the next 5-15 years will lead us.  “Human-centric platforms will be prevalent; historically robots were deployed in larger, fixed platforms [like manufacturing],” opined one expert.  Said another, “Every human will have two extra arms, significantly augmenting the human being.”  Robots will “move from [handling] objects to whole environments,” said a third participant.

Challenges ahead include creating “systems that are effective at collaborating with humans.”  Integration across robotic systems is another hurdle.  A 30-year robotics industry veteran said, “AI will be judged not by the 99% of situations it gets right, but by the Edge Cases.”  Does computer vision deployed in transportation read a painted bike-lane symbol as an actual bicycle?  Does it read a man in a chicken suit, like Richard Pryor and Gene Wilder in “Stir Crazy,” as an actual chicken?  If the algorithm says, “Go ahead, run over the painted bicycle image,” it matters not.  Not so if it runs over Pryor or Wilder.
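What the conservative response looks like in code can be sketched briefly.  The minimal Python illustration below is a toy built on assumptions, with invented class names, labels and thresholds rather than any deployed perception stack: when a detection is ambiguous, the system asks whether the worst plausible reading of the object is a person, and yields if so.

```python
# Hypothetical sketch of conservative handling of ambiguous detections.
# The labels, confidence scores and threshold are illustrative
# assumptions, not any vendor's actual pipeline.

from dataclasses import dataclass

# Labels whose misclassification could injure someone.
VULNERABLE = {"pedestrian", "cyclist"}

@dataclass
class Detection:
    label: str          # best-guess class from the vision model
    confidence: float   # confidence in that label, 0.0-1.0
    alternatives: dict  # other labels the model considered, with scores

def may_drive_over(det: Detection, min_confidence: float = 0.99) -> bool:
    """Proceed only if no plausible reading of the object is a person."""
    if det.label in VULNERABLE:
        return False
    if det.confidence < min_confidence:
        # Uncertain: check whether any alternative reading is a human.
        if any(label in VULNERABLE for label in det.alternatives):
            return False
    return True

# The painted bike-lane symbol: harmless even if misread.
paint = Detection("lane_marking", 0.80, {"bicycle": 0.15, "road_debris": 0.05})

# The man in the chicken suit: the best guess is wrong, but one plausible
# alternative reading is a pedestrian, so the vehicle must yield.
costume = Detection("chicken", 0.60, {"pedestrian": 0.35, "mascot": 0.05})

print(may_drive_over(paint))    # True: worst case is driving over paint
print(may_drive_over(costume))  # False: worst case is a person
```

The asymmetry is the point: a false alarm costs a moment of braking, while a missed person costs a life, so the threshold is set heavily against proceeding.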

In the financial innovation space, the difficulties presented by Edge Cases led the Federal Reserve Bank of Boston and MIT Media Lab to quietly shelve their Project Hamilton, a bid to create a “high-speed transaction processor for a centralized digital currency, to demonstrate the throughput, latency, and resilience of a system that could support a payment economy at the scale of the United States.”[1]  Earlier this year, the Federal Reserve Bank of New York and eight big financial institutions conducted a pilot project to create a digital dollar that could be traded among participants in a closed network of institutions.  In July, the Fed announced the pilot had been completed, noting it had no plans to continue development of a digital dollar and that any decision to do so would be a “political” one for Congress to make.

The decision to step back from creating digital dollars surely reflects industry and government officials’ misgivings following the collapse of cryptocurrency platforms including FTX and the speed with which Silicon Valley Bank, Signature Bank and First Republic Bank failed earlier this year.  As a participant in the CMU-sponsored forum noted, AI operates as an accelerator.  Instead of having to program a computer to take a specific action, AI enables a computer to interact with its environment “on the fly,” with no or minimal lag time.  In a financial crisis, perhaps that is not a benefit.

Designers of manufacturing and distribution environments, too, need to rebalance speed, accuracy and safety as humans and robots collaborate more intensively.  A recent Michigan case, Holbrook v. Prodomax Automation,[2] illustrates the problem.

Wanda Holbrook worked on a robotic assembly line for a Tier 3 supplier to Ford.  The line welded up trailer hitches installed on Ford pickup trucks.  A robotic arm in the first of the line’s six zones fed parts to a jig in the second zone, where another robotic arm did the welding.  A robotic arm in the third zone picked the completed hitches out of the jig and placed them on a rack for cooling.

When the robotic arm in the third zone failed to pick a completed hitch out of its jig, Wanda disregarded the safety protocol that required powering down both the second and third zones.  She stepped directly from the third zone into the second to free the stuck hitch from its jig.  Computers controlling the robots in the first two zones read her actions as making the jig ready to receive the next set of metal pieces for welding.  The robotic arm from the first zone reached into the second zone, pinning Wanda’s head against the jig.  The second zone’s robotic arm then tried to weld what it interpreted as a new hitch assembly, severely burning Wanda’s face, nose and mouth.  She died as a result of her injuries.
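The control failure is easy to sketch.  In the minimal Python illustration below (the class and sensor names are invented for this example, not drawn from the line’s actual software), the weld cycle is triggered by the very signal a worker’s body produces: something occupying the jig in a powered zone.

```python
# Hypothetical sketch of the failure mode, not the actual line's software.
# The controller's only input is a part-presence sensor on the jig, so it
# cannot tell a new hitch assembly from a worker reaching into the zone.

class Zone:
    def __init__(self, name: str):
        self.name = name
        self.powered = True        # the zone has not been powered down
        self.jig_occupied = False  # part-presence sensor on the jig

def start_weld_cycle(zone2: Zone) -> bool:
    # Flawed trigger: "something is in the jig and the zone has power"
    # is read as "a new assembly is ready to weld."
    return zone2.powered and zone2.jig_occupied

zone2 = Zone("welding")
zone2.jig_occupied = True   # a worker freeing a stuck hitch trips the sensor

print(start_weld_cycle(zone2))  # True: the robots begin the next cycle

# A safer trigger would also require positive confirmation that the part
# was delivered by the first zone's robot and that no sensor reports a
# human anywhere in the cell.
```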

Because she failed to follow prescribed safety rules, the court, applying Michigan’s products liability statute, ruled in favor of Wanda’s employer and the manufacturer of the robotic assemblies involved in the case.  Yet it is not difficult to believe that error-prone humans will continue to misjudge their interactions with robots and suffer similarly severe consequences. 

The New York lawyer who summarized the Holbrook case for The Business Lawyer observed: “companies that design or purchase robots to work with humans should foresee the harm that can arise from a combination of robot malfunction, human error, and placement of humans and robots in close proximity.  Neither the robot designer nor the robot purchaser-user should implement security paradigms that rely on error-prone humans’ adherence to safety protocols.  The designer could instead program the robots to practice the first law of Isaac Asimov’s Three Laws of Robotics: ‘a robot may not injure a human being, or, through inaction, allow a human being to come to harm.’”[3]
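What that paradigm might look like in software can be sketched, again with invented names and a simplified sensor model rather than any real safety controller: robot motion is permitted only while presence sensors report the cell empty, so the interlock holds whether or not anyone followed the lockout procedure.

```python
# Hypothetical sketch of a presence-based interlock.  Names are invented;
# a real system would use safety-rated hardware, not application code.

class Sensor:
    """A light curtain or area scanner covering part of the cell."""
    def __init__(self, tripped: bool = False, faulted: bool = False):
        self.tripped = tripped
        self.faulted = faulted

class Cell:
    """A work cell whose robots may move only while no human is sensed."""
    def __init__(self, presence_sensors: list):
        self.presence_sensors = presence_sensors

    def human_detected(self) -> bool:
        # Fail safe: a faulted sensor counts the same as a tripped one.
        return any(s.tripped or s.faulted for s in self.presence_sensors)

    def permit_motion(self) -> bool:
        return not self.human_detected()

cell = Cell([Sensor(), Sensor(tripped=True)])  # a worker steps into a zone
print(cell.permit_motion())  # False: every robot in the cell halts
```

In this toy, the robots stop because a person is sensed, not because a person remembered the protocol.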

In practice, that is an impossible standard to meet.  Robots, like any assistive device, cannot serve the functions for which they are designed and be expected to save humans from their own reckless behavior.  Engineers will do their best to idiot-proof robots and the software that informs their and our work.  But we humans need to have the intelligence, patience and humility to learn to work with robots as our helpers, respecting their power and the risks associated with our use of them.     


[1] https://news.mit.edu/2022/digital-currency-fed-boston-0203

[2] No. 1:17-cv-219, 2021 WL 4260622 (W.D. Mich. Sept. 20, 2021).

[3] R. Trope, “When Security Paradigms Fail,” 78 The Business Lawyer 258, 265 (Winter 2022-2023).