Fred Lederer is a law professor at William & Mary Law School, where he teaches law and directs the Center for Legal & Court Technology.
During the Supreme Court's February 2023 arguments in an important Internet case that could decide whether tech companies are liable for the material published on their platforms, Justice Elena Kagan commented, "We really don't know about these things." Supreme Court justices may be forgiven for lacking a better understanding of how key aspects of the modern world work. After all, they have law clerks and briefs to help them. The same can't be said for most of the rest of us, and technology we don't understand can hurt us.
As a law professor and Director of William & Mary Law School's Center for Legal & Court Technology, I sometimes have a close-up view of how technological ignorance can cause harm. Years ago, in our very high-tech experimental McGlothlin Courtroom, we had a lovely small hexagonal wood box on the judge's bench with a microphone, used to transmit audio to those with hearing difficulties. None of our visitors took the time to ask how it worked.
Counsel in one simulated case was quite surprised to discover that it could pick up a private conversation with his client and transmit it to everyone in the courtroom wearing the appropriate headphones. We summarily retired the microphone.
Today, we are increasingly dependent upon technology, especially cyber technology, and we take it for granted to an extreme degree. This is not a new problem. How do modern automobiles work? Most of us don't know. We get in, turn a key or push a button, and the car works. When there's a problem, we frequently are at the mercy of an auto "mechanic" to diagnose and repair the problem, which often stems from computer hardware or software.
Issues related to cyber technology are increasingly common. But based on informal surveys of my law school classes, almost no one, or at least no law student, knows how email works. "You just 'type' and hit send," they say. Could a message be read by someone in the process of transmission? No one knows. One would imagine that highly intelligent law students who are attuned to the importance of privacy and client confidentiality would be familiar with how they transmit and receive important information.
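One way to make the email point concrete: a message typically passes through several mail servers on its way to the recipient, and each server stamps a "Received:" header onto it that survives in the delivered copy. Below is a minimal sketch using Python's standard-library email parser; the message, addresses, and server names are entirely invented for illustration.

```python
# Each mail server that relays a message adds a "Received:" header, so a
# delivered email carries a record of the machines it passed through.
# This raw message is invented for illustration; real headers are messier.
import email

raw_message = """\
Received: from mx.lawfirm.example by inbox.lawfirm.example; Tue, 2 May 2023 09:15:03 -0400
Received: from smtp.client-isp.example by mx.lawfirm.example; Tue, 2 May 2023 09:15:01 -0400
Received: from laptop.client.example by smtp.client-isp.example; Tue, 2 May 2023 09:14:58 -0400
From: client@client-isp.example
To: counsel@lawfirm.example
Subject: Confidential settlement terms

Please keep this between us.
"""

msg = email.message_from_string(raw_message)
hops = msg.get_all("Received")
print(f"This message passed through {len(hops)} servers before delivery:")
for hop in hops:
    # Each header reads "from <sending server> by <receiving server>; <date>".
    print(" -", hop.split(";")[0])
```

Unless the sender and every relay in that chain use encryption, each of those intermediate servers, and anyone who compromises one, can read the message in transit.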
Ignorance can even result in physical harm. Lithium-ion batteries power many different types of devices, from computers to e-bikes. Some, unpredictably, can catch fire, especially if overcharged.
The technological development of the moment is Generative AI, like ChatGPT and its competing programs such as Google's Bard. These large language models are trained on enormous bodies of text, often limited to given time periods. When asked a question or given instructions in a "prompt," they predict what a human being would say in response. The results can be extraordinarily impressive. The jury is still out on whether Generative AI will ultimately help humans work better and smarter, or simply do more harm than good.
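The "predict what a human would say" idea can be sketched with a toy far simpler than a real model: count which word follows which in some training text, then repeatedly emit the most likely next word. The sample text and function names below are invented, and real LLMs use neural networks trained on vast corpora rather than simple word counts, but the sketch shows the predictive loop in spirit, including how fluent-sounding output can drift away from sense.

```python
# Toy next-word predictor: a bigram counter trained on a tiny invented text.
# Real LLMs are vastly more sophisticated, but the core loop -- repeatedly
# predict a likely next token -- is similar in spirit.
from collections import Counter, defaultdict

training_text = (
    "the court held that the statute applies . "
    "the court held that the claim fails . "
    "the court held that the motion is denied ."
)

# For each word, count which word follows it and how often.
following = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_continuation(prompt, n_words=4):
    """Greedily append the most frequent next word, n_words times."""
    out = prompt.split()
    for _ in range(n_words):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(predict_continuation("the court"))
```

Run on the prompt "the court", this model confidently produces "the court held that the court", which is grammatical-sounding and wrong, a miniature version of the chatbot failure mode discussed below.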
What is clear, however, is how often Generative AI chatbots are wrong, sometimes creating non-existent facts or citing non-existent sources. Reports that ChatGPT was good enough to pass a sample law school examination, for example, don't always mention how marginal the passing grade was or the degree to which the response depends upon the prompt. Accordingly, generative AI systems can be of great help, but they cannot be relied upon. Using them for a first draft of a document seems highly reasonable, but relying on them for more poses grave risks. Next academic year, one of my colleagues at William & Mary Law School will teach a course on how to integrate ChatGPT into legal writing. Understanding how the technology works produces a great tool; not understanding it and using it anyway invites harm.
Not all of us need to become technical experts to avoid technological harms; rather, we simply need to apply what we already know. We speak of cybersecurity failures, for example, but human error is usually the fundamental cause of successful hacking, resulting in the unlawful seizure of personal information. Phishing attacks take advantage of most people's ignorance or willful failure to question suspicious emails designed to tempt them to open seemingly important or interesting attachments. These breaches of personal or business communications are disturbing, but the risk to our democracy is far greater.
Companies seek to increase profits and thus need to communicate with potential buyers. Consumers seek information. Sometimes the result is as harmless as anti-aging products and cute animal pictures targeted to your social media feed. But too often the same algorithms that drive this flow of information drive us, as news consumers, into echo chambers that feed hyper-partisanship, conspiracy theories and even extremism.
So, where does this leave those of us who aren't technologists by trade? What is our responsibility as individuals, family members, workers, and members of our communities and nations? We should borrow from the medical professions and vow to do no harm via technology.
When using technology we don't really understand, perhaps we should ask ourselves questions such as:
- Given that many people and organizations care only about their own goals, to what extent should we trust claims about what the technology will do and how it will do it?
- If the technology actually does what it is supposed to do, what will be the direct and collateral consequences, and what unforeseen effects might follow?
- What data will the technology collect and what will happen to it; will it give a financial or other benefit to others or place us at risk?
Technology can vastly improve our lives. But if we choose not to question it, for whatever reason, technology and those who control it will rule our lives, and not for the better.
The Public Interest Technology University Network (PIT-UN), convened by New America, fosters collaboration between universities and colleges to build the field of public interest technology and nurture a new generation of civic-minded technologists.