
Safe and Responsible AI

A National Science Foundation Research Traineeship Program


News and updates

Oct 31 2025

New Publication: LLM Ethics Benchmark

🔍 How Well Do LLMs Truly Understand Human Ethics? If you’re curious about how large language models (LLMs) comprehend human ethical reasoning, check out the “LLM Ethics Benchmark,” a novel framework developed by the Urban Information Lab for evaluating AI moral alignment. This benchmark uses a three-dimensional approach to assess how closely LLMs match human ethical standards. It is now published in Nature Scientific Reports. Key […]

Written by Saleh Afroogh · Categorized: News and updates

Oct 31 2025

New Publication: Navigating LLM Ethics

Thrilled to announce that our comprehensive study, “Navigating LLM ethics: advancements, challenges, and future directions,” has been officially published in the AI and Ethics journal (Springer Nature). What we explored: 📊 systematic review of 192 academic papers on LLM ethics; 🔍 identified 13 major ethical dimensions across the LLM landscape; ⚠️ highlighted unique challenges like hallucination, verifiable accountability, and […]

Written by Saleh Afroogh · Categorized: News and updates

May 16 2025

New Research from Urban Information Lab: Introducing the “LLM Ethics Benchmark”!

Excited to share our new preprint, “LLM Ethics Benchmark.” Our team at UIL-UT Austin has developed an evaluation system based on a three-dimensional framework for assessing moral reasoning in large language models. LLM Ethics Benchmark uniquely evaluates LLMs across: 📜 foundational moral principles, 🔍 reasoning robustness, and 🧩 value consistency. This enables precise identification of ethical strengths […]

Written by Saleh Afroogh · Categorized: News and updates

May 16 2025

Recap: NRT Ethical AI Talk Series – “Societal Impact of Language Models: Harm Analysis and Reduction” with Dr. Aylin Caliskan

We were honored to host the fourth installment of our NRT Ethical AI Talk Series, featuring Dr. Aylin Caliskan, Assistant Professor at the University of Washington’s Information School. Her thought-provoking session, “Societal Impact of Language Models: Harm Analysis and Reduction,” examined the ethical challenges and unintended consequences of large language models (LLMs). Dr. Caliskan explored how biases embedded in […]

Written by Saleh Afroogh · Categorized: News and updates

May 16 2025

Recap: NRT Ethical AI Talk Series – “Transparency and Explainable AI” with Dr. Robert Clements

We were thrilled to host the third session of our NRT Ethical AI Talk Series, featuring Dr. Robert Clements, who explored the critical role of Explainable AI (XAI) in ensuring transparency and trust in AI systems. Dr. Clements delved into why interpretability matters in AI decision-making, especially in high-stakes fields like healthcare, finance, and criminal justice. He highlighted how black-box models can lead […]

Written by Saleh Afroogh · Categorized: News and updates



Questions?
Contact Director Dr. Junfeng Jiao or Senior Research Program Coordinator Saleh Afroogh

“Machine Learning & Artificial Intelligence” image by Mike MacKenzie is licensed under CC BY 2.0.
"Austin in evening twilight" image by Lara Eakins is licensed under CC BY-NC 2.0.