The Fordham Ram

Industries Should Proceed With Caution Before Incorporating AI Tech


AI technology is so new that we are not fully aware of its ethical implications. (Courtesy of Twitter)

The development of AI technology has been occurring rapidly, raising ethical and economic concerns. Many members of the tech community, including Elon Musk and Apple co-founder Steve Wozniak, expressed some of these concerns in an open letter calling for an immediate six-month pause on “training models more powerful than GPT-4, the latest version of the large language model software developed by U.S. startup OpenAI.” The advancement of AI is creating more natural and human-competitive technology, which poses risks to the way we receive information and to jobs. ChatGPT, an AI chatbot that gives “human-like responses to questions asked by a user,” can create poetry and even draft legal opinions on court cases. While AI technology can make some jobs more efficient and productive, as in the case of doctors and lawyers, it can threaten the availability of other jobs, especially those that involve repetition. AI technology can also virtually eliminate some “human-performed” jobs such as “audio-to-text transcription and translation.”

Other risks concern safety and the reliability of information. While AI researchers try their best to find and circumvent misuses of AI technology, it is impossible to cover every single base. This leads to worry “that people would rely on these systems for inaccurate or harmful medical and emotional advice.” Additionally, abuse of AI technology is already evident in the spread of plagiarism, as it is easier for students to write essays and do their homework with AI chatbots like ChatGPT. A more dangerous abuse of AI technology appears in AI scam calls, in which people receive fraudulent calls that seem to come from loved ones requesting money. These calls are hard to trace, and legal precedent regarding AI scam calls and the responsibility of AI companies has not yet been set. AI technology makes it easier to mimic voices by allowing scammers to “re-create the pitch, timbre and individual sounds of a person’s voice to create an overall effect that is similar.” To mimic these voices, AI technology only “requires a short sample of audio, taken from places such as YouTube, podcasts, commercials, TikTok, Instagram or Facebook videos.” This is alarming considering that a majority of people have some presence on these platforms and have probably uploaded audio clips or videos without considering that they could be used to scam their loved ones.

Many have fallen victim to these scam calls, losing thousands of dollars. In 2022, the Federal Trade Commission reported that there were “over 36,000 reports of people being swindled by those pretending to be friends and family,” with more than 5,100 of these incidents happening over the phone and “accounting for over $11 million in losses.” AI development hasn’t slowed down, and companies like Google are incorporating AI into their products. Google’s new AI technology, Bard, has been released for testing and aims to “generate ideas, write blog posts and answer questions with facts or opinions.” This rise in AI is unprecedented, and there are fears of technological advancement outpacing the development of laws and safety regulations. Many fear that without regulation, we will lose control and AI will replace humans in civilization.

While I believe that AI technology does make things more efficient and leads to improvements in many areas of life, such as the medical field, I also think we should consider its potential misuses and establish better safety regulations, especially regarding privacy. A pause on AI would allow many people to step back and consider the future of AI without the current frenzy of companies developing technology as fast as they can. Lawmakers and ethicists should especially consider legal and ethical protections for individuals who use AI technology. These protections should then be built into AI technology so that there is a general baseline of what to expect when using it. Cooperation among AI researchers, lawmakers, ethicists and the public is key to crafting AI regulations so that the technology can be beneficial and the risk of harm can be minimized as much as possible.

In conclusion, there should be a pause on AI technology so that more of its ethical and legal implications can be considered. While AI technology can bring great improvements to many jobs and to daily life, it can also reduce the need for some jobs and carries risks of misuse. Although AI researchers do their best to circumvent these risks, not everything can be fully covered. Misuse of AI technology can take the form of misinformation and plagiarism. More dangerous uses include scam phone calls in which people lose thousands of dollars because they believe their loved ones are in trouble. A pause on AI would allow for safer regulation so that the technology can be used as beneficially as possible.

Saisha Islam, FCRH ’25, is a biology major from New York, N.Y.

