News and updates

🔍 How Well Do LLMs Truly Understand Human Ethics?
If you're curious about how large language models (LLMs) comprehend human ethical reasoning, check out the "LLM Ethics Benchmark," a novel framework developed by the Urban Information Lab for evaluating AI moral alignment. The benchmark uses a three-dimensional approach to assess how closely LLMs match human ethical standards, and it is now published in Nature Scientific Reports. Key […]
New Publication: Navigating LLM Ethics
Thrilled to announce that our comprehensive study, "Navigating LLM ethics: advancements, challenges, and future directions," has been officially published in the AI and Ethics journal (Springer Nature). What we explored:
📊 Systematic review of 192 academic papers on LLM ethics
🔍 Identified 13 major ethical dimensions across the LLM landscape
⚠️ Highlighted unique challenges like hallucination, verifiable accountability, and […]
New Research from Urban Information Lab: Introducing the “LLM Ethics Benchmark”!
Excited to share our new preprint, "LLM Ethics Benchmark." Our team at UIL-UT Austin has developed an evaluation system built on a three-dimensional framework for assessing moral reasoning in large language models. The LLM Ethics Benchmark uniquely evaluates LLMs across:
📜 Foundational moral principles
🔍 Reasoning robustness
🧩 Value consistency
This enables precise identification of ethical strengths […]
Recap: NRT Ethical AI Talk Series – “Societal Impact of Language Models: Harm Analysis and Reduction” with Dr. Aylin Caliskan
We were honored to host the fourth installment of our NRT Ethical AI Talk Series, featuring Dr. Aylin Caliskan, Assistant Professor at the University of Washington's Information School. Her thought-provoking session, "Societal Impact of Language Models: Harm Analysis and Reduction," examined the ethical challenges and unintended consequences of large language models (LLMs). Dr. Caliskan explored how biases embedded in […]
Recap: NRT Ethical AI Talk Series – “Transparency and Explainable AI” with Dr. Robert Clements
We were thrilled to host the third session of our NRT Ethical AI Talk Series, featuring Dr. Robert Clements, who explored the critical role of Explainable AI (XAI) in ensuring transparency and trust in AI systems. Dr. Clements examined why interpretability matters in AI decision-making, especially in high-stakes fields like healthcare, finance, and criminal justice. He highlighted how black-box models can lead […]