This blog is the first in a series exploring the implications of different policies on the development and deployment of artificial intelligence, machine learning, and cognitive systems.
The term “cognition” is defined as “of, relating to, being, or involving conscious intellectual activity.”1 We are surrounded by cognitive organisms that range in complexity from the ant to the most intellectually capable cognitive organism known today – the human. A fundamental characteristic of cognitive organisms is their ability to form higher-level, collective cognition that mimics or surpasses the cognition of any individual member.2 A cognitive system, by contrast, is a human-made system that can interact with its human counterparts and understand human expression as delivered primarily through visual, auditory, and textual communication.3 For clarity, this article defines cognitive organisms as nature-made and cognitive systems as human-made (…for now). By understanding human tendencies, cognitive systems should therefore be able to anticipate human intent and either be prepared to respond instantly, or be proactive and respond preemptively.
Because humans change over time, cognitive systems must be able to evolve their interactions with their human counterparts. To do so, they must learn from new input data and update their decision-making models based on feedback from both humans and their environments. This need to learn about their environment through data raises a fundamental question: how do policy decisions pertaining to data acquisition, transmission, storage, and communication affect the ability of cognitive systems to learn and achieve their operational objectives?
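To make the feedback loop described above concrete, the sketch below shows one simple way a system can update a decision model as new observations of human behavior arrive. It is a minimal illustration only: the features, the logistic model, and the stochastic-gradient update rule are assumptions chosen for brevity, not a description of any particular deployed cognitive system.

```python
import math

# Minimal sketch: a toy model that predicts a human's intent and is
# updated after every interaction. All names and values are illustrative.

weights = [0.0, 0.0]   # one weight per input feature
bias = 0.0
learning_rate = 0.05

def predict(features):
    """Estimated probability that the user intends action A (logistic model)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def update(features, observed):
    """One stochastic-gradient step after observing what the human actually did."""
    global bias, weights
    error = predict(features) - observed  # signed prediction error
    bias -= learning_rate * error
    weights = [w - learning_rate * error * x for w, x in zip(weights, features)]

# Simulated interaction stream: (observed features, what the human chose).
stream = [([1.0, 0.2], 1), ([0.1, 0.9], 0), ([0.8, 0.3], 1), ([0.2, 0.8], 0)] * 50
for features, observed in stream:
    update(features, observed)

print(f"P(action A | features [0.9, 0.1]) = {predict([0.9, 0.1]):.2f}")
```

The essential point is not the particular model but the loop itself: whatever data the system is permitted to collect and retain determines what it can learn, which is precisely where policy decisions intervene.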
Imagine a world in which policy makers decided that all medical records should be publicly available data. The thought alone is concerning, yet the idea is not far-fetched. It is easy to forget that not all countries have data-management policies resembling those of the United States. While regions such as Europe and North America have robust policies protecting patient health data, many countries around the world do not.
For a multinational corporation whose cognitive systems learn about specific populations across the world, does this mean that one of its regional cognitive systems would be inferior to another due to the availability (or lack thereof) of publicly available data? Do the risks outweigh the benefits?
Along the same lines, consider a scenario in which a cognitive system, trained on an abundance of publicly available health data, can predict health-related issues at the level of an individual patient. On one hand, such a cognitive system could help save lives through the rapid detection of health-related anomalies in individuals. On the other hand, it could also be used to discriminate based on certain health-related discoveries. For instance, Individual X wakes up in the morning and their cognitive assistant informs them that there is a 95 percent chance that they have a malignant tumor. If the intelligence engine of this cognitive system is owned by Company Y, that company may be incentivized to sell this health-related information to Organization Z, with which Individual X is scheduled to interview later that same day. Right before Individual X leaves home for the interview, Organization Z calls to say that the interview must be cancelled and rescheduled. That is all the information Organization Z provides, because, after all, no policy in place requires Organization Z to provide more.
In the above scenario, the information not readily available to Individual X represents proprietary knowledge that Organization Z owns. Organization Z’s own cognitive system had calculated the cost of hiring Individual X, including the potential short-term loss of productivity that might result from Individual X’s cancer treatment.
Should governments set policies governing the use of such cognitive systems? Should policy makers restrict the sharing of such knowledge with third parties who may want to use it to learn about an individual before ever meeting them (e.g., for a job interview or on a dating app)?
The immediate concern that comes to mind is latent discrimination. The term “latent” is used here because the results produced by a cognitive system may not be readily available to all parties. In the example above, Organization Z paid for access to the malignant-tumor prediction provided to Individual X, prior to Individual X’s interview at Organization Z.
Latent discrimination does not have to be top-down; individuals themselves may latently discriminate against companies based on the recommendations of their personal cognitive assistants. For example, it is a well-known problem that online review data typically follows a bimodal, J-shaped distribution4 (i.e., users often leave reviews only when they are extremely satisfied or extremely dissatisfied with a given product or service). A cognitive system that learns from these biases in the data may recommend that an individual avoid interviewing at Organization Z, based on publicly available training data generated by disgruntled former employees of Organization Z. This imbalance of information, both for training cognitive systems and for sharing their results, has the potential to erode trust in these systems and in the entities that use them for decision making.
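The effect of this sampling bias is easy to demonstrate. The toy simulation below contrasts the true satisfaction of an entire population with the picture a system sees when it learns only from posted reviews. The satisfaction distribution and the posting threshold are illustrative assumptions, not empirical parameters from the cited study.

```python
import random

# Sketch of review-data bias: most people hold moderate opinions, but
# only the extremely satisfied or extremely dissatisfied post a review.
random.seed(42)

# True satisfaction of every user on a 1-5 scale (assumed distribution).
population = [min(5.0, max(1.0, random.gauss(3.2, 0.8))) for _ in range(100_000)]

# Assumed posting rule: only the extremes leave public reviews.
posted = [s for s in population if s <= 1.5 or s >= 4.5]

true_mean = sum(population) / len(population)
review_mean = sum(posted) / len(posted)

print(f"true mean satisfaction:  {true_mean:.2f}")
print(f"mean of posted reviews:  {review_mean:.2f}")
print(f"share of users who post: {len(posted) / len(population):.1%}")
```

A cognitive system trained only on the `posted` sample would systematically misjudge the population it claims to describe, and any recommendation it makes about Organization Z inherits that distortion.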
So how should society move forward with data acquisition, management, and dissemination as they pertain to cognitive systems? How much should policy makers be involved in these decisions? Will free markets “figure it out”? Will individuals get smarter about the value of their data? Will policies that restrict access to and the availability of data slow the advancement of cognitive systems? If such policies were not applied broadly and evenly on an international scale, they could reduce the technological competitiveness of a nation-state, thereby posing a potential national security threat as one state’s cognitive capabilities quickly surpass another’s due to differences in data availability. These and other questions are among the twenty-first-century policy challenges surrounding big data, cognitive systems, and human decision making.
1. “Definition of Cognition,” Merriam-Webster, https://www.merriam-webster.com/dictionary/cognition.
2. I. D. Couzin, “Collective cognition in animal groups,” Trends in Cognitive Sciences 13 (2009): 36–43.
3. “How Watson works – myth busting at IBM InterConnect 2017,” Internet of Things blog, IBM, https://www.ibm.com/blogs/internet-of-things/myth-busting-watson/.
4. N. Hu, J. Zhang, and P. A. Pavlou, “Overcoming the J-shaped distribution of product reviews,” Communications of the ACM 52 (2009): 144–147.
Dr. Conrad Tucker is currently serving as a science and policy fellow in the Foresight, Strategy, and Risks Initiative at the Atlantic Council’s Brent Scowcroft Center on International Security. Dr. Tucker holds a joint appointment as associate professor in engineering design and industrial and manufacturing engineering at the Pennsylvania State University. He is also affiliate faculty in computer science and engineering. Dr. Tucker is the director of the Design Analysis Technology Advancement (D.A.T.A) Laboratory. His research focuses on the design and optimization of complex systems through the acquisition, integration, and mining of large-scale, disparate data.