On October 19, 1987, I was sitting at my desk in California when a colleague called from Chicago.
“Have you seen the market news?” he asked.
“No... but I’m looking now.”
“It’s fallen 20% and counting…”
I pulled up my market application on my PC and stared in disbelief.[1]
There were a variety of factors behind the crash of ’87.[2] One was program trading: computer programs used by hedge funds and others to execute large groups of trades directly and rapidly.
In response to the crash, regulators developed policies known as “circuit breakers” so that stock exchanges could halt trading when large swings in prices crossed a threshold. They were essentially alert systems that watched the trading systems and acted rapidly to pause them. That pause temporarily stopped the chaos and allowed people to step in and restore order.
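In code, the heart of such a curb is almost embarrassingly simple: watch a number and halt when it crosses a line. Here is a minimal sketch in Python; the single 7% trigger and the prices are hypothetical (the SEC’s actual rules use tiered thresholds), but the shape of the check is the point.

```python
# A minimal sketch of the threshold check at the heart of a trading curb.
# The 7% trigger and the price values are hypothetical.

HALT_THRESHOLD = 0.07  # hypothetical: halt on a 7% drop from the reference price

def should_halt(reference_price: float, current_price: float) -> bool:
    """Return True when the decline crosses the threshold."""
    decline = (reference_price - current_price) / reference_price
    return decline >= HALT_THRESHOLD

if should_halt(reference_price=2500.0, current_price=2310.0):
    print("Trading halted: the breaker tripped. Time for humans to step in.")
```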
The metaphor of a circuit breaker comes from the common electrical circuit box in many homes. If you’ve ever experienced a power outage, you know that running the toaster, dishwasher, and a space heater in the kitchen at the same time can overload the electrical circuit, cause the breaker switch to “pop,” and cut off the flow of current to that room. That’s the breaker’s purpose: to stop the electricity flow from increasing to the point of burning up the wires (or the house!). And, as a colleague pointed out, the tripped circuit breaker is an important indicator that something went wrong and needs attention.
It’s evident why the circuit breaker name stuck for the “trading curb” systems that the Securities and Exchange Commission (SEC) put in place following the market crash.[3] I believe this is a useful metaphor for the present day and for concerns about runaway artificial intelligence (AI) programs.
The key feature of this metaphor is that a rapid overload in the wrong direction can be automatically halted so human beings can step in to correct things. It also implies a dash of humility and avoids the hubris of insisting we can build a failsafe system.
This is not fundamentally different from the “endless loop” interrupt or the catch-all at the bottom of an else-if chain in programs.[4] A graceful exit when unexpected situations occur is good design, as is the expectation that the system will at some point fail. Most programmers would pick an orderly shutdown over a system crash; how the system fails is important. This same expectation needs to temper our AI designs. Self-healing and machine-learning systems aside, a key interrupt in our systems needs to be the human judgment interrupt.
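In code, that discipline looks something like the sketch below. The states and their handling are invented for illustration; the final “this can’t happen” branch is the part that matters.

```python
# A sketch of the graceful exit: an "exhaustive" if/elif chain with a
# defensive final branch. States and handlers are made up for illustration.

import sys

def handle(state: str) -> None:
    if state == "running":
        print("normal operation")
    elif state == "paused":
        print("waiting to resume")
    elif state == "stopping":
        print("finishing in-flight work")
    else:
        # "This can't happen." One evening, it will. Prefer an orderly
        # shutdown with a clear message over a crash.
        sys.exit(f"Unexpected state {state!r}; shutting down for human review.")

handle("running")
```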
What I’m suggesting is that our AI systems should be designed and programmed with this feature in mind. When a system starts to cross a threshold that we’ve agreed is dangerous, the system halts and asks for some human judgment. Defining those thresholds and boundaries may be hard work and such standards may be elusive, but as has been said in other realms, “I’ll know it when I see it.” The point is that if we build the need for human judgment pauses into our systems (human interrupts, if you will), we just may keep them from barreling forward at great speed and doing great damage.
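To make the idea concrete, here is a deliberately simple sketch of a human interrupt. The risk_score function, the threshold value, and the action are hypothetical stand-ins for whatever boundary we agree on; this is not any real framework’s API.

```python
# A sketch of a "human interrupt": pause and ask for human judgment when
# an action crosses an agreed-upon threshold. All names are hypothetical.

RISK_THRESHOLD = 0.8  # the agreed-upon danger boundary (hypothetical value)

def risk_score(action: str) -> float:
    """Placeholder estimate; a real system would compute this seriously."""
    return 0.9 if "delete" in action else 0.1

def perform(action: str) -> None:
    print(f"Performing: {action}")

def execute_with_human_interrupt(action: str) -> None:
    """Halt and request human judgment before crossing the threshold."""
    if risk_score(action) >= RISK_THRESHOLD:
        answer = input(f"'{action}' crosses the risk threshold. Proceed? [y/N] ")
        if answer.strip().lower() != "y":
            print("Halted by human judgment.")
            return
    perform(action)

execute_with_human_interrupt("delete all records")
```

A real system would be far more elaborate, but the essential move is the same: when the number crosses the line, the machine stops and a person decides.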
_______________
[1] At the time, I was Client Marketing Director for Lotus Signal, a PC-based real-time stock market application. Lotus bought Dataspeed in 1985 primarily to get this technology.
[2] See the discussion in Wikipedia on “Black Monday (1987),” https://en.wikipedia.org/wiki/Black_Monday_(1987). Also see “Algorithmic Trading,” https://en.wikipedia.org/wiki/Algorithmic_trading.
[4] The story I heard when learning to code was about the software engineer who put, at the end of an exhaustive else-if chain, the message “this can’t happen,” which of course was displayed one evening as the program merrily ran.
"The postings on this site are my own and don't necessarily represent positions, strategies or opinions of any of the organizations with which I am associated."