Zero Trust

The public release last month of OpenAI’s ChatGPT artificial intelligence software brought both bravos and brickbats. The New York Times headline last Saturday read, “A New Era of A.I. Booms, Even Amid the Tech Gloom.”[1]

Users have marveled at the ease of creating pseudo fine art using only a brief word prompt like “day at the beach.” Meanwhile, Los Angeles computer programmer, designer and lawyer Matthew Butterick sued Microsoft, OpenAI and GitHub, claiming ChatGPT and similar programs pirate existing code. Butterick’s real beef is that he and fellow coders will become obsolete because ChatGPT uses existing code to teach itself how to write new code.[2]

More serious critics say the new software is flawed because it cannot distinguish truth from fiction. Two detractors put it this way: “The philosopher Harry Frankfurt defined bullshit as speech that is intended to persuade without regard for the truth. By this measure, OpenAI’s new chatbot ChatGPT is the greatest bullshitter ever. Large Language Models (LLMs) are trained to produce plausible text, not true statements. ChatGPT is shockingly good at sounding convincing on any conceivable topic. But OpenAI is clear that there is no source of truth during training. That means that using ChatGPT in its current form would be a bad idea for applications like education or answering health questions. Even though the bot often gives excellent answers, sometimes it fails badly. And it’s always convincing, so it’s hard to tell the difference.”[3]

To test this critique, I asked ChatGPT, “Who will be the first naturalized citizen of the U.S. elected President?”

Thirty seconds later came the reply. “It is possible that a naturalized citizen of the United States could be elected President in the future, but it has not happened yet. A naturalized citizen is a person who is not a U.S. citizen by birth, but who has become a citizen through the naturalization process. To be eligible to become President, a person must be a natural-born citizen of the United States, which means they must have been born a U.S. citizen. It is not uncommon for naturalized citizens to hold high public office in the United States, but to date, all Presidents have been natural-born citizens. There is no way to predict when or if a naturalized citizen will be elected President of the United States.”

The reply is not wrong per se, expressed conditionally as it is. Missing, though, is the essential fact that the U.S. Constitution requires the President and Vice President to be natural-born U.S. citizens. A constitutional amendment would be needed to permit a naturalized citizen to hold either office. The first and third sentences contradict one another. The fourth sentence is beside the point and misleading.

In the words of AI pioneer and psychologist Gary Marcus, “the problem is not with GPT-3’s syntax (which is perfectly fluent) but with its semantics: it can produce words in perfect English, but it has only the dimmest sense of what those words mean, and no sense whatsoever about how those words relate to the world.”[4] Marcus illustrates: ask a GPT-3-programmed robot to clean the house while you are away, and you may return to find the sofa cut into pieces and placed in the closet.

At a deeper level, LLMs have neither a sense of community with humans nor any awareness of human values. This makes generative AI a dangerous tool in the hands of bad actors.

Financial institutions are already swamped with scams perpetrated using computer-generated deceptions. Phishing, spoofing, deep fakes and other frauds all depend on victims accepting falsehood as truth. Generative AI makes creating those falsehoods easier, less expensive and more plausible. Or as Marcus says, the cost of creating bullshit is approaching zero.

To counter the threat, industry and government are developing next generation authentication tools and frameworks for using them. Up for public comment now is a draft guide for implementing the Zero Trust Architecture promoted by the National Cybersecurity Center of Excellence of the National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce.[5] The “zero trust” phrase is somewhat misleading, however.
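The core idea of the NIST draft is that every access request is evaluated against the context available at access time before a session is granted. A minimal sketch of that per-request evaluation might look like the following Python; the field names and the specific policy rules here are illustrative assumptions of mine, not taken from the NIST guide.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    """Context gathered at access time, per the zero trust model."""
    user_id: str
    role: str
    device_healthy: bool       # e.g., patched OS, valid device credential
    resource_sensitivity: str  # "low", "medium", or "high"
    location_trusted: bool
    behavior_consistent: bool  # request matches the user's usual pattern

def evaluate_access(req: AccessRequest) -> bool:
    """Grant access only if every contextual check passes.

    Illustrative policy: any request from an unhealthy device or with
    anomalous behavior is denied; high-sensitivity resources additionally
    require a trusted location.
    """
    if not (req.device_healthy and req.behavior_consistent):
        return False
    if req.resource_sensitivity == "high" and not req.location_trusted:
        return False
    return True

# A healthy device in a trusted location reaching a sensitive resource:
ok = evaluate_access(
    AccessRequest("alice", "teller", True, "high", True, True))
# The same user from an untrusted location is refused:
denied = evaluate_access(
    AccessRequest("alice", "teller", True, "high", False, True))
```

The point of the sketch is that trust is never assumed from network position alone; each request is re-evaluated, and in a full implementation the policy engine would run continuously for the life of the session rather than once at login.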

The need is obvious: legitimate electronic financial commerce requires devices and systems to protect its integrity. A true zero trust environment is one in which transactions grind to a halt because no trust exists anywhere. That is the stuff of which financial crises are made, viz. the 2008 global banking crisis, when none of the nation’s biggest banks knew which of their counterparties were truly solvent. Preserving a world in which trust continues to exist despite AI advances will require unprecedented investment of human and financial capital. That is the cost ignored by Silicon Valley denizens in their effort to frame our future in their image and profit mightily in the process. Perhaps it is time to require them to co-invest in creating the protections we and they need to prevent that future from being wholly dystopian.

——————————————————————————————————

[1] https://www.nytimes.com/2023/01/07/technology/generative-ai-chatgpt-investments.html

[2] https://www.nytimes.com/2022/11/23/technology/copilot-microsoft-ai-lawsuit.html

[3] https://aisnakeoil.substack.com/p/chatgpt-is-a-bullshit-generator-but

[4] https://www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/

[5] https://csrc.nist.gov/publications/detail/sp/1800-35/draft. “A zero trust architecture (ZTA) focuses on protecting data and resources. It enables secure authorized access to enterprise resources that are distributed across on-premises and multiple cloud environments, while enabling a hybrid workforce and partners to access resources from anywhere, at any time, from any device in support of the organization’s mission. Each access request is evaluated by verifying the context available at access time, including criteria such as the requester’s identity and role, the requesting device’s health and credentials, the sensitivity of the resource, user location, and user behavior consistency. If the enterprise’s defined access policy is met, a secure session is created to protect all information transferred to and from the resource. A real-time and continuous policy-driven, risk-based assessment is performed to establish and maintain the access. In this project, the NCCoE and its collaborators use commercially available technology to build interoperable, open, standards-based ZTA implementations that align to the concepts and principles in NIST Special Publication (SP) 800-207, Zero Trust Architecture. This NIST Cybersecurity Practice Guide explains how commercially available technology can be integrated and used to build various ZTAs.”