CS 595D AI safety and bias in machine learning


CS 595D is a graduate computer science seminar exploring topics in AI
safety and bias in machine learning. Both are fundamental problems in
AI research that have far more questions than answers. Machine learning
systems are deployed all over the world, classifying data that impacts
real people every single day. This year the EU passed a "right to
explanation" law that will take effect in 2018 and will apply to all
companies operating in Europe (yes, Google, Facebook, etc.).

595D - Public and Private Knowledge in Life Cycle Assessment

Firms, trade associations, and governments all have occasion to make assertions to the public about the environmental performance of their operations or products, such as an environmental product declaration, a labeling claim, or a sustainability report. In most or all of these cases, knowledge of private information is required to support or validate the assertion. Without this knowledge, public confidence in the assertion is limited by the public's trust in the entity making the claim.

595E

Recent readings in computer architecture and embedded systems. Students
will read and present papers from the past two years of work in the
field, including ISCA, MICRO, ASPLOS, and the conferences associated
with Embedded Systems Week.