Fordham University hosted a symposium on responsible Artificial Intelligence (AI) on Oct. 16 and 17. The event was co-sponsored by Fordham, IBM, New York University (NYU) and the Global AI Alliance, and brought together representatives from seven institutions that are part of the alliance.
The two main themes of the symposium were “legal regulation” and “trustworthy AI,” highlighting issues related to healthcare, ethics, security, justice, governance and safety, according to Z. George Hong, Ph.D., chief research officer and associate provost for research at Fordham.
“This symposium marks Fordham’s first large-scale collaborative initiative as an AI Alliance member,” Hong said in an email, “offering participants a rich mix of perspectives and engagement opportunities.”
The event featured 48 speakers, five paper sessions, a panel discussion, a fireside chat and workshops, which covered the benefits of AI while also highlighting possible drawbacks.
There were also keynote addresses by Ben Brooks of the Berkman Klein Center at Harvard University; Julia Stoyanovich, director of the Center for Responsible AI at NYU; Anthony Annunziata, director of Open Source AI at IBM; and Doni Bloomfield, associate professor of law at Fordham Law School.
Hong said that as one of 195 members of the AI Alliance, Fordham has a commitment to promoting responsible AI use. He also said the symposium provided the opportunity for discussion on how to use AI as a tool while maintaining ethical values.
“[Fordham hosted the symposium] to address the legal, ethical, and societal dimensions of artificial intelligence—areas that align closely with the University’s mission and strengths as a comprehensive research institution with a distinguished law school, leading ethics scholars, and a strong Jesuit tradition,” Hong said.
Hong said that one goal of the symposium was to find common ground among the more than 260 registered attendees from various backgrounds in science, technology, law and education.
“This diversity of voices underscored Fordham’s belief that advancing responsible AI requires collaboration across disciplines and sectors,” Hong said.
This isn’t Fordham’s first time hosting an event covering AI. In March 2025, Fordham hosted the International Conference on Im/migration, AI and Social Justice. That event was co-sponsored by IBM and Sophia University in Japan.
Fordham has also previously worked with IBM to host student research workshops.
“The Symposium on Responsible AI builds on these efforts, representing our most comprehensive collaboration yet as a member of the AI Alliance,” Hong said.
Hong emphasized that AI touches nearly every corner of modern life, from classrooms to courtrooms.
Hong said that two defining challenges of AI are its “dependence on vast datasets, which raises serious concerns about privacy, data rights, provenance, bias, economic and environmental impacts,” and “non-deterministic outputs, which introduce new risks involving trust, safety, reliability, consistency, and security.”
Because of the increasing role of AI in almost every facet of society, Hong said it is imperative that people understand the uses and scope of the technology. He specifically referenced the importance of understanding “responsible AI” and said that the symposium served as a way to increase understanding of the topic.
“Responsible AI aims to promote the development of trustworthy AI systems that advance discovery and informed decision-making for the common good,” Hong said. “In this spirit, one of the symposium’s central goals is to provide that essential balance, ensuring that the advancement of AI remains stable, sustainable, and aligned with principles of responsibility and public trust.”