University of Maryland

ML projects

 

Power with(out) responsibility? ‘Effective Accountability’ in US Algorithmic Governance

In this project, Dr. Sivan-Sevilla aims to understand how AI policies are implemented by government agencies and to what extent ML-based government can still be held accountable. ML systems offload complex, time-consuming cognitive tasks from public administrators to machines, allowing cost savings and better resource allocation for agencies that often operate under significant constraints. As government agencies automate their decision-making, however, they undermine a premise of public administration: that its power derives from expertise, flexibility, and the ability to be held accountable. Through organized workshops and in-depth interviews with Maryland agencies that use ML to deliver public outcomes, this project examines how public agencies implement ML policy requirements and which accountability arenas – judicial, professional, or social – are most effective in holding the government accountable for the consequences of ML.

 

Values-Centered AI (VCAI) Initiative

Prof. Katie Shilton is one of the leaders of the VCAI Initiative, a UMD Grand Challenge project to integrate AI research and education across campus, engage in high-impact research with local stakeholders, and transform how artificial intelligence is practiced. The initiative brings together UMD researchers interested in placing social and human values at the center of AI design in order to innovate on AI design methods and education. Activities include seminars, roundtables, tutorials, and collaborative research. More details are available here.

 

Learning Code(s): Community-Centered Design of Automated Content Moderation
Prof. Katie Shilton is co-leading a project on community-based content moderation. Online platforms increasingly enforce complex speech and content policies to encourage participation and prevent hate speech and extremism. Balancing free speech and equality online is not only a thorny social problem debated by platforms and legislators, but also a problem negotiated every day by a workforce of volunteer and paid online moderators. This project uses participatory design with volunteer moderators to build machine learning tools to support healthier online communities, enable better working conditions for online moderators, and create more flexible software responses to community policies and norms. With Sarah Gilbert, Hal Daume, and Michelle Mazurek. More details are here, and a summary slide is available here.