William Agnew

If you or your community are experiencing harms because of algorithms, data, or AI, please reach out to me at wagnew [at] andrew [dot] cmu [dot] edu. I am very interested in helping you, whether through research, advocacy, or leveraging my connections in academia, industry, and media to address the harms you are experiencing.

I use research and organizing to challenge technologies that concentrate power, and to empower marginalized communities to shape the tech, data, and AI impacting them.

I’m a CBI Postdoc Fellow at CMU studying AI ethics, critical AI, community mobilization, and 3D vision. I run marathons, rock climb, backpack, read, and cook in my spare time.

CV    Semantic Scholar    Twitter    GitHub

Contact: wagnew[at]andrew[dot]cmu[dot]edu

For anything queer-related, please contact me at william dot agnew at ostem dot org for privacy reasons.

News

November 2023: Volunteered at the oSTEM 2023 conference.

September 2023: Started a CBI Postdoc Fellowship at Carnegie Mellon University with Sauvik Das.

August 2023: Presented Bound by the Bounty: Collaboratively Shaping Evaluation Processes for Queer AI Harms at AIES 2023.

June 2023: Queer In AI: A Case Study in Community-Led Participatory AI won best paper at FAccT 2023!

June 2023: Presented two papers, Queer In AI: A Case Study in Community-Led Participatory AI and Representation in AI Evaluations at FAccT 2023.

August 2022: Started an AI ethics internship at DeepMind London.

July 2022: Panelist at DeepMind Queer AI Workshop

June 2022: The Values Encoded in Machine Learning Research won best paper at FAccT 2022!

June 2022: Robots Enact Malignant Stereotypes published at FAccT 2022. Covered in Wired and the Washington Post.

June 2022: Co-organized a CRAFT workshop Collaboratively Developing Evaluation Frameworks for Queer AI Harms at FAccT 2022

December 2021: Panelist during NeurIPS'21 tutorial Beyond Fairness in Machine Learning

December 2021: Co-organized Queer in AI Workshop at NeurIPS'21

September 2021: Published Rebuilding Trust: Queer in AI Approach to Artificial Risk Management in response to the NIST AI Risk Management RFI

August 2021: My paper Documenting Large Corpora: A Case Study on the Colossal Clean Crawled Corpus accepted to EMNLP! See coverage in Unite.AI: Minority Voices ‘Filtered’ Out of Google Natural Language Processing Models 

June 2021: Released a new paper The Values Encoded in Machine Learning Research!

June 2021: My work with Queer in AI profiled in MIT Tech Review: Inside the fight to reclaim AI from Big Tech's control 

June 2021: Gave the D&I keynote at NAACL'21: Give Your Time to Radical Communities, Not Your Boss 

May 2021: I'm serving as a social chair for ICLR '22!

February 2021: Talked about "bad" words and impacts on NLP models in Wired: AI and the List of Dirty, Naughty, Obscene, and Otherwise Bad Words 

January 2021: Discussed LGBTQ research in "‘This deserves our attention.’ New data highlight LGBTQ scientists’ workplace challenges" from Science Magazine

December 2020: Organized the Resistance AI Workshop, Object Representations for Learning and Reasoning Workshop, and Queer in AI Workshop at NeurIPS 2020

Moderated the panel "What are We Going to Do About Computer Vision?"

November 2020: I'm organizing the Queer in AI @ CoRL Social

October 2020: My paper Amodal 3D Reconstruction for Robotic Manipulation via Stability and Connectivity was accepted to CoRL 2020 as an oral (20/~485)!

October 2020: New preprint Relevance-Guided Modeling of Object Dynamics for Reinforcement Learning on arXiv

August 2020: I'm organizing the Resistance AI Workshop and the Object Representations for Learning and Reasoning Workshop at NeurIPS 2020

July 2020: My paper Amodal 3D Reconstruction for Robotic Manipulation via Stability and Connectivity was accepted for a spotlight at the ICML Object-Oriented Learning (OOL): Perception, Representation, and Reasoning workshop

July 2020: I co-organized the Queer in AI ICML 2020 workshop and socials

May 2020: I gave a talk on amodal 3D reconstruction in cluttered environments to MIT CoCoSci

Research

I use research and organizing to help people be informed about and involved in the systems impacting their lives, and to hold those systems and the people controlling them accountable. I work to help communities resist, and ultimately dismantle, technology that surveils, misinforms, and concentrates power.

If you or your community are experiencing harms because of algorithms, data, or AI, please reach out to me at wagnew [at] andrew [dot] cmu [dot] edu. I am very interested in helping you, whether through research, advocacy, or leveraging my connections in academia, industry, and media to address the harms you are experiencing.

Service

Robotics competitions gave me the opportunity to learn many of the skills I use as a researcher. Giving students the same opportunities is incredibly meaningful (and fun!) for me.

As an undergraduate at Georgia Tech, I helped start and lead the Undergraduate Research Ambassadors Program and the Big O Theoretical Computer Science Club, both of which have provided incredible opportunities, mentorship, and community for countless undergrads interested in research. I also proposed and organized the first Home Depot Deep Learning Competition, which gives Georgia Tech students a hands-on introduction to deep learning each year.

Students

Mentoring brilliant students is one of the most impactful and meaningful things I do. If you are interested in working with me, please email me your resume and 2-3 paragraphs describing your interests.

Undergraduate

Zhichao Lei

John Barcellos

Jize Cao

Christopher Kang

Ryan Pachauri

Jaclyn Brockschmidt

Caelen Wang