Structure, Content, & Humans: Critical ‘Planks’ when Building Artificial Intelligence into a Business

A maze representing artificial intelligence; it may or may not house a Minotaur

“The ship wherein Theseus and the youth of Athens returned from Crete had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and stronger timber in their places in so much that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same.”

- Plutarch, Theseus

“Theseus’ Paradox” refers to a myth that led to a thought experiment: does an object that has had all of its pieces replaced fundamentally remain the same object?

The Journey Forward

In the myth, the journey was completed and the ship was preserved for many years because it was continuously sustained and renewed. The Athenians had a clear understanding of the ship’s structure, and the crew was able to monitor and replace the planks as they decayed, in tandem with the journey: a successful strategy.

As you read this article, think about “structure” as an enabler of intelligence transfer, a facilitator of learning, and a catalyst of innovative thought. When our environments are organized in a manner that supports our growth and performance and adds value to our existence and future direction, we are far more likely to achieve our purpose and reach our intended destinations. However, as we navigate our environment, do we know the landscape of the business, and do we have enough knowledge to renew and sustain our structures, or do we believe they are eternal? How does intelligence drive the replacement of the pieces of the structure, such that the business enterprise is sustained, renewed, and improved? How does structure drive the introduction of change, such that it is heard and adopted?

Structure, Content, Humans, & Artificial Intelligence

Are we building structures and replacing the old planks as they decay? How are we identifying the decay? Are there structures in place to provide visibility into which plank needs to be replaced, and why? Are we putting intelligence in the proper context, or are we lumping it into one bucket of “Artificial” because we lack a clear understanding or view of its existence? Are we building tools that help us not only to think, but also to recall what we may have forgotten? Why are these tools failing to provide the intelligence, knowledge, and insights they are purposed to provide? Below are three cases that fuel these questions.

Office of Personnel Management (OPM) – improving the current state should not be done in the aftermath of negative outcomes

The OPM intrusions compromised the records of millions of federal employees, clearance applicants, and contractors, despite the existence of the Homeland Security Department’s $3 billion network monitoring program, Einstein. The Senate even moved forward with a bill, S. 1869, the Carper-Johnson Federal Cybersecurity Enhancement Act of 2015, to speed up the adoption of Einstein. Observers noted that Einstein was not necessarily the best system out there, but that if it was the fastest way to get government agencies to catch up to the rest of the world on protecting themselves, the bill could be a good thing; if that came at the expense of deploying best-of-breed detection and prevention systems, it would be a bad thing. The decision was made to take the risk, knowing that Einstein was not the “best out there” but not knowing whether it was the “best fit” for the government agencies.

According to the Department of Homeland Security (DHS), there were nearly 70,000 information-security incidents on federal government networks in fiscal year 2014, up 15 percent from fiscal year 2013. In fiscal year 2016, government agencies reported 30,899 information-security incidents, 16 of which met the threshold of major incidents. A GAO report released in September 2017 highlighted federal agencies’ continued weakness in protecting their information systems. Over the preceding several years, foreign adversaries had stolen tens of millions of Americans’ sensitive records as a result of insufficient cybersecurity, and at least 21 agencies continued to show weakness in the five major categories of information-security controls: access, configuration management, segregation of duties, contingency planning, and agency-wide security management.

Yet the decision was made to deploy Einstein at every government agency to get each agency to square one, and attacks continued while the focus remained on broad deployment of the technology. Einstein entered development in 2003; 13 years later, in 2016, DHS’s $6 billion system failed to scan for 94 percent of common security vulnerabilities and could not monitor web traffic for malicious content. Even in light of this lack of intelligence, Einstein is slated to expand to wireless network protection in 2018. Does OPM have a clear view of the cybersecurity structure for the federal government? Is there adequate data and information to establish cybersecurity intelligence that recalls past intrusions and provides insight into potential future ones? Was the system built devoid of the human intelligence it is required to emulate, augment, and supplement? Would an incremental evolution to the new structure have been more successful?

New Orleans Infrastructure – ensuring the source and validity of intelligence

Our systems, both manual and automated, are not naturally intelligent. They must be structured to think and act through data and information supplied by those who understand their intent. Logic drives the intelligence and knowledge of systems, and humans must be skilled in the creation and execution of a system, as well as in the assimilation, synthesis, and integration of the data and information that feed its logic. In simple terms, humans are the source of intelligence that is contextual to the environment it supports.
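
To make the point concrete, below is a minimal sketch, in Python, of how human domain knowledge might be encoded as explicit rules in a monitoring system. The metric names, thresholds, and readings are hypothetical, invented purely for illustration; the point is that the “intelligence” lives in rules a human expert supplies, not in the software itself.

```python
# Minimal sketch: a monitoring system is only as intelligent as the
# human-supplied rules that feed it. All names and thresholds below
# are hypothetical, for illustration only.

# Domain knowledge captured from human experts: which readings matter,
# and at what level they signal decay in the structure.
EXPERT_RULES = {
    "water_level_ft": {"warn": 10.0, "critical": 14.0},
    "seepage_gpm":    {"warn": 5.0,  "critical": 20.0},
    "wall_tilt_deg":  {"warn": 0.5,  "critical": 2.0},
}

def assess(readings: dict) -> list:
    """Compare sensor readings against expert thresholds and
    return a list of (metric, severity) findings."""
    findings = []
    for metric, value in readings.items():
        rule = EXPERT_RULES.get(metric)
        if rule is None:
            # No human has defined what this reading means yet,
            # so the system has no intelligence about it.
            findings.append((metric, "unknown"))
        elif value >= rule["critical"]:
            findings.append((metric, "critical"))
        elif value >= rule["warn"]:
            findings.append((metric, "warn"))
    return findings

print(assess({"water_level_ft": 12.3, "seepage_gpm": 22.0, "vibration_hz": 3.1}))
# [('water_level_ft', 'warn'), ('seepage_gpm', 'critical'), ('vibration_hz', 'unknown')]
```

Where the system has no human-supplied rule, it has no intelligence: the “unknown” finding in the sketch is precisely the gap this section describes.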

During Hurricane Katrina, there were 50 failures in the levees and flood walls protecting New Orleans. Thousands of lives were lost, and more than 100,000 homes and businesses were destroyed. The U.S. Army Corps of Engineers was responsible for the design and construction of the levees, and maintenance was the responsibility of the local levee boards. When Katrina struck in 2005, the project was between 60 and 90 percent complete. Five investigations were conducted by civil engineers and other experts in an attempt to identify the reasons for the failure of the federal flood-protection system, and all of them pointed to inadequate design and construction by the Corps of Engineers as the primary cause of the flooding.

Did the Corps possess the intelligence to design and develop the levees? How do we ensure the purity and proficiency of the intelligence provided and captured? Should a tool be used only in the aftermath, or can it be coupled with monitoring to provide the visibility and insight to lessen or prevent the impact? Fundamental to our advancement in artificial intelligence is the need to understand the domain of intelligence and knowledge, to design the tools that support its monitoring, and to learn how to institute continuous renewal and sustainment of our structures.

Tragic flight of Air France 447 – trusting and listening to the augmented intelligence that supports decisions

I will summarize from Tom Koulopoulos’ article, “It’s Time to Stop Calling it ‘Artificial’ Intelligence.” On June 1, 2009, Air France Flight 447 took off on a journey from Rio to Paris. While in flight, the plane’s onboard radar displayed what appeared to be typical storm cells. Had the co-pilots known otherwise, they could have chosen to route around the cells, as earlier flights had done. Caught in the storm, the plane encountered ice that clogged its Pitot tube sensors, which relay airspeed to the computer and the pilots. Airspeed is critical information for both the autopilot and the human pilot to fly the plane correctly, so the autopilot disengaged, placing the plane in “alternate law,” a mode that gives the pilot full manual control that cannot be overridden by the computer. With full manual control and no knowledge of the airspeed, one of the co-pilots, in an attempt to gain altitude, pulled back on the joystick. This action pitched the plane’s nose up, placing the plane into a stall, a condition that reduces the lift on the wings until the plane literally falls out of the sky. The co-pilot did not know that this was the last thing a pilot should do in any plane attempting to stay aloft at an uncertain speed.

Six minutes later, the plane plunged into the Atlantic, killing the 228 people aboard. What investigators discovered is that two minutes after the autopilot disengaged, the Pitot tubes began functioning again; although the computer was no longer in control, it knew that the plane was rapidly losing altitude and sounded a voice alarm saying “STALL,” along with a high-pitched tone, 75 times. If the co-pilot had re-engaged the autopilot during the four minutes before the plane plunged, the plane would have continued on course and operated in “normal law,” which prevents the plane from flying outside its flight envelope and does not allow a stall.
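
The mode logic described above can be sketched in a few lines. The following is not Airbus’s actual flight-control law; it is a hedged illustration, with invented names and thresholds, of the interaction the story turns on: the computer keeps evaluating the situation even after it hands control to the human, and re-engaging it restores envelope protection.

```python
# Hedged sketch of the human/computer interaction aboard Flight 447.
# This is NOT Airbus's actual control law; names and thresholds are
# invented for illustration.

STALL_PITCH_DEG = 12.0  # hypothetical pitch beyond which lift collapses

class FlightComputer:
    def __init__(self):
        self.mode = "normal_law"  # computer engaged, flight envelope protected

    def update(self, airspeed_valid, pitch_cmd_deg, sink_rate_fpm):
        if not airspeed_valid:
            # Unreliable sensors: hand full control to the pilot.
            self.mode = "alternate_law"
        if self.mode == "normal_law":
            # The computer refuses inputs that would leave the envelope.
            return min(pitch_cmd_deg, STALL_PITCH_DEG)
        # In alternate law the computer no longer limits inputs, but it
        # still evaluates the situation and warns the human.
        if pitch_cmd_deg > STALL_PITCH_DEG and sink_rate_fpm > 1000:
            print("STALL")  # the alarm that sounded 75 times
        return pitch_cmd_deg

    def reengage(self, airspeed_valid):
        # Once the Pitot tubes recover, control can be handed back.
        if airspeed_valid:
            self.mode = "normal_law"

fc = FlightComputer()
fc.update(airspeed_valid=False, pitch_cmd_deg=15.0, sink_rate_fpm=10000)  # prints STALL
fc.reengage(airspeed_valid=True)  # the step that was never taken
fc.update(airspeed_valid=True, pitch_cmd_deg=15.0, sink_rate_fpm=10000)  # input clamped, no stall
```

In this framing, the warning channel was working the whole time; the missing step was the human’s single call to hand control back.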

In the CNN article “Airspeed Sensor Troubled History Ignored?,” dated September 21, 2009, it was reported that the manufacturer was aware of malfunctions in the Pitot sensors on its planes. A series of industry documents verified by investigators showed warnings about the Pitot sensors as far back as 1994. The French agency investigating the crash said that Air France was aware of at least seven incidents and the manufacturer of around 20; in addition, Europe’s air safety authorities were aware of nine incidents of malfunction. Replacing the sensors across the worldwide fleet was estimated to cost $220,000, yet the recommendations to replace them were ignored.

Should that knowledge have been ignored? The CNN article “Final Air France Crash Report Says Pilots Failed to React Swiftly,” dated July 5, 2012, stated that Flight 447 was passing through an area known for volatile and dangerous weather, that the flight recorders showed the pilots did not discuss the stall warnings and failed to regain control of the plane, and that the pilots had not received high-altitude training, thus blaming the pilots and not the integrated structure of the system. How do we ensure that man and computer work together? How do we perceive and trust computer intelligence?

Conclusion

Decision makers must be diligent in maintaining a robust understanding of their business enterprise environment and the proficiency of the intelligence being captured, with a keen focus on learning initiatives for both the machine and the human. Understand that:

  • Seeking and finding intelligence and new ways of improving the current state should not be done in the aftermath of negative outcomes. Consider whether the effort of being proactive is worth more than the cost of being reactive. When confronted with negative outcomes, look back and identify the indicators that could have been addressed to mitigate or avoid the impact. Take time to calculate the costs and consider how they could have been put to better use.
  • The conduits to actionable data and information are the tools and systems used to capture, store, manipulate, disseminate, and present it. Determine whether the current tools are too generalized to capture domain-aligned data and information. Bring together thought leaders and experts to gain a clear view of how to build structures that enable the dissemination of intelligence and knowledge.
  • According to the Harvard Business Review, lifelong learning programs fail more often than they succeed. Why? Because most companies have failed to grasp a basic truth: continuous improvement requires a commitment to learning. Has your business enterprise already tried similar programs? What lessons were learned that should not be repeated?
  • Communication is duplex. Not only do machines augment humans, but humans also augment machines. Ensure their congruency, resilience, and integration. Artificial Intelligence and Knowledge Management are not mutually exclusive.