There’s a reason you’re hearing so much about artificial intelligence (AI) lately: it has the potential to dramatically alter the world as we know it. But the question is, will it lead to a utopia, or are we doomed to live in a dystopian hellscape with Skynet’s angry robot brethren at the helm?
Not surprisingly, I’m rooting for the utopia, and with all of the recent advancements in generative AI, I couldn’t be more excited about the potential of this technology. However, recent applications of AI have me questioning my faith in humanity and wondering if our leaders are failing to heed the lessons of novels like “1984,” and are instead using these dystopian tales as a “how-to” guide.
If, like me, you follow technology news, it seems like every day there’s a slew of new product announcements or articles highlighting the latest application of AI to our lives. Sometimes the news is heartening, but more often than not it’s downright disturbing, particularly when the government is involved.
For example, there’s an alarming trend that seems to support my theory that our government representatives are operating from the playbook of Orwell’s nightmarish fictional future. Namely, AI is increasingly being used to predict criminal behavior and thoughts.
In one case, it was reported that a Pentagon-funded study used AI to detect “violations of social norms” based on the texts sent by users. If this sounds familiar to you, it’s probably because you heard about China’s “social credit system” or watched the “Black Mirror” episode “Nosedive,” which envisions a world where social media users’ ratings of each other have real-world consequences.
Meanwhile, while the Pentagon is busy requisitioning “thought police” studies, actual cops are busy using AI to watch millions of cars and predict whether you might be a criminal, based on “suspicious” patterns of movement. Don’t you hate it when reality mimics dystopian fiction?
And if that wasn’t bad enough, on the other side of the pond, British businesses are using unreliable and biased facial recognition software to ban people from shopping in grocery stores, and misidentifications are accepted as an unfortunate but expected outcome.
Back in the United States, AI is being used in ways that disproportionately impact people who are economically disadvantaged. For example, in New York City, AI is being used to track subway fare evasion. This crime-stopping effort inequitably affects New Yorkers unable to afford public transport.
Similarly, public housing residents across the U.S. are also being unfairly targeted and punished for trivial offenses based on faulty information obtained using facial recognition software. In one case, a single mother in Massachusetts who was taking night classes was improperly evicted after facial recognition surveillance software flagged the comings and goings of the babysitter who arrived and left each time she traveled to and from class. The housing authority interpreted the data as evidence that she was violating the policy regarding frequent overnight guests.
Notably, it’s not just low-income individuals whose lives and movements are affected by inaccurate artificial intelligence software results. Even high-level government employees are being impacted, such as a U.S. senator who faced delays at the airport due to the TSA’s use of facial recognition software. According to Sen. Jeff Merkley, when he recently refused a face scan at Washington’s Reagan National Airport, he was told it would cause a significant delay, even though submitting to a scan is voluntary per TSA policy.
No one is immune from the effects of AI, and it’s indisputable that this technology harbors the immense potential to reshape our world. Yet, under our guidance, it seems to be charting a course toward a dystopian reality, rather than a utopian vision. Fundamentally, AI is a tool; its applications, whether beneficial or damaging, mirror our decisions and values.
Unfortunately, its recent applications are straight from an Orwellian thought police scenario. As we stand on the brink of an AI-centric era, the sheer power and potential of AI are mind-boggling. However, if we fail to change our trajectory, and quickly, it’s not Skynet’s wrathful robots we should worry about — it’s us.
Nicole Black is a Rochester, New York attorney and Director of Business and Community Relations at MyCase, web-based law practice management software. She’s been blogging since 2005, has written a weekly column for the Daily Record since 2007, is the author of Cloud Computing for Lawyers, co-authors Social Media for Lawyers: the Next Frontier, and co-authors Criminal Law in New York. She’s easily distracted by the potential of bright and shiny tech gadgets, along with good food and wine. You can follow her on Twitter at @nikiblack and she can be reached at email@example.com.