Publications

Prompt Exploration with Prompt Regression

Published in arXiv, 2024

With the advent of democratized usage of large language models (LLMs), there is a growing desire to systematize LLM prompt creation and selection processes beyond iterative trial-and-error. Prior works largely focus on searching the space of prompts without accounting for relations between prompt variations. Here we propose a framework, Prompt Exploration with Prompt Regression (PEPR), to predict the effect of prompt combinations given results for individual prompt elements, as well as a simple method to select an effective prompt for a given use case. We evaluate our approach with open-source LLMs of different sizes on several different tasks.
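
The core idea can be sketched in a few lines: if each candidate prompt is encoded as an indicator vector over its constituent elements, a simple regression fit on a handful of evaluated prompts can predict the scores of unevaluated combinations. The encoding, the linear form, and all numbers below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Minimal sketch of the prompt-regression idea (illustrative only):
# treat each candidate prompt as a 0/1 indicator vector over prompt
# elements, and model its score as a weighted sum of element effects.

# Rows 0-3: each prompt element evaluated on its own.
# Row 4: one evaluated combination (elements 0 and 1 together).
X = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
], dtype=float)

# Observed metric (e.g., task accuracy) for each evaluated prompt;
# in practice these come from running the LLM.
y = np.array([0.62, 0.71, 0.55, 0.68, 0.70])

# Fit per-element effects by least squares: y ~= X @ w.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict scores for unevaluated combinations and select the best
# candidate without running the LLM on every combination.
candidates = np.array([[1, 1, 1, 0], [0, 1, 0, 1], [1, 0, 1, 1]], dtype=float)
best = candidates[np.argmax(candidates @ w)]
print("selected prompt elements:", np.nonzero(best)[0].tolist())
```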

Recommended citation: Feffer, M., Xu, R., Sun, Y., & Yurochkin, M. (2024). Prompt Exploration with Prompt Regression. arXiv (2024) https://arxiv.org/abs/2405.11083

Red-Teaming for Generative AI: Silver Bullet or Security Theater?

Published in arXiv, 2024

In this work, we identify recent cases of red-teaming activities in the AI industry and conduct an extensive survey of the relevant research literature to characterize the scope, structure, and criteria for AI red-teaming practices. Our analysis reveals that prior methods and practices of AI red-teaming diverge along several axes, including the purpose of the activity (which is often vague), the artifact under evaluation, the setting in which the activity is conducted (e.g., actors, resources, and methods), and the resulting decisions it informs (e.g., reporting, disclosure, and mitigation).

Recommended citation: Feffer, M., Sinha, A., Lipton, Z. C., & Heidari, H. (2024). Red-Teaming for Generative AI: Silver Bullet or Security Theater? arXiv (2024) https://arxiv.org/abs/2401.15897

DeepDrake ft. BTS-GAN and TayloRVC: An Exploratory Analysis of Musical Deepfakes and Hosting Platforms

Published in ISMIR HCMIR Workshop, 2023

In this paper, we investigate two leading sources of musical deepfake models, the AI Hub Discord server and the Uberduck website, which are dedicated to the training, utilization, and distribution of these deepfakes. Interestingly, musical deepfakes target hundreds of artists of different backgrounds, levels of success, and musical styles. In light of the economic, legal, and ethical issues raised by deepfakes of so many artists, we provide warnings about the generation of discriminatory forms of content and potential financial and contractual problems for artists. We recommend that more research be conducted in this area, especially to probe people's perceptions of this technology and to devise approaches that mitigate potential harms.

Recommended citation: Feffer, M., Lipton, Z. C., & Donahue, C. (2023). DeepDrake ft. BTS-GAN and TayloRVC: An Exploratory Analysis of Musical Deepfakes and Hosting Platforms. HCMIR@ISMIR (2023) https://ceur-ws.org/Vol-3528/paper3.pdf

The AI Incident Database as an Educational Tool to Raise Awareness of Harms: A Classroom Exploration of Efficacy, Limitations, & Future Design Improvements

Published in EAAMO, 2023

We provide evidence suggesting that one of the critical objectives of AI ethics education must be to raise awareness of AI harms. While there are various sources to learn about such harms, the AI Incident Database (AIID) is one of the few attempts at offering a relatively comprehensive database indexing prior instances of harms or near-harms stemming from the deployment of AI technologies in the real world. This study assesses the effectiveness of the AIID as an educational tool to raise awareness regarding the prevalence and severity of AI harms in socially high-stakes domains. We present findings obtained through a classroom study conducted at an R1 institution as part of a course focused on the societal and ethical considerations around AI and ML.

Recommended citation: Feffer, M., Martelaro, N., & Heidari, H. (2023). The AI Incident Database as an Educational Tool to Raise Awareness of Harms: A Classroom Exploration of Efficacy, Limitations, & Future Design Improvements. EAAMO (2023) https://dl.acm.org/doi/10.1145/3617694.3623223

Searching for Fairer Machine Learning Ensembles

Published in AutoML, 2023

Bias mitigators can improve algorithmic fairness in machine learning models, but their effect on fairness is often not stable across data splits. A popular approach to train more stable models is ensemble learning, but unfortunately, it is unclear how to combine ensembles with mitigators to best navigate trade-offs between fairness and predictive performance. To that end, we extended the open-source library Lale to enable the modular composition of 8 mitigators, 4 ensembles, and their corresponding hyperparameters, and we empirically explored the space of configurations on 13 datasets.
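
To give a flavor of the kind of composition involved, here is a toy sketch in plain scikit-learn (version 1.2 or later) rather than Lale's actual combinators: a naive "drop the protected attribute" transformer stands in for a real mitigator so that the example stays self-contained.

```python
# Toy illustration of composing a bias mitigator with an ensemble.
# The paper uses Lale and real mitigators; here a naive column-dropping
# transformer is a stand-in so the sketch runs on its own.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import BaggingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier


class DropProtectedAttribute(BaseEstimator, TransformerMixin):
    """Stand-in mitigator: remove the protected-attribute column."""

    def __init__(self, column: int = 0):
        self.column = column

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return np.delete(X, self.column, axis=1)


# Mitigator-inside-ensemble: each bagged base learner first applies
# the mitigation step, then fits a decision tree.
base = make_pipeline(DropProtectedAttribute(column=0), DecisionTreeClassifier())
model = BaggingClassifier(estimator=base, n_estimators=10, random_state=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 1] + X[:, 2] > 0).astype(int)
model.fit(X, y)
print("train accuracy:", model.score(X, y))
```

The same mitigator could instead wrap the ensemble as a whole (mitigator-outside-ensemble); navigating choices like this one across many mitigators and ensemble types is exactly the configuration space the paper explores.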

Recommended citation: Feffer, M., Hirzel, M., Hoffman, S. C., Kate, K., Ram, P., & Shinnar, A. (2023). Searching for Fairer Machine Learning Ensembles. AutoML (2023) https://openreview.net/forum?id=7Nbd1Ru1M_t

From Preference Elicitation to Participatory ML: A Critical Survey & Guidelines for Future Research

Published in AIES, 2023

We critically examine the leap from structured preference elicitation to participatory design for value alignment. Through an extensive literature review and comparative analysis of several existing methods, we outline ten axes along which participation (by non-technical stakeholders) should be evaluated.

Recommended citation: Feffer, M., Skirpan, M., Lipton, Z. C., & Heidari, H. (2023). From Preference Elicitation to Participatory ML: A Critical Survey & Guidelines for Future Research. AIES (2023) https://dl.acm.org/doi/abs/10.1145/3600211.3604661

A Suite of Fairness Datasets for Tabular Classification

Published in arXiv, 2023

We introduce a suite of functions for fetching 20 fairness datasets and providing associated fairness metadata. We hope these will lead to more rigorous experimental evaluations in future fairness-aware machine learning research.
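
The shape of such an interface might look as follows; the fetcher name, columns, and metadata keys here are hypothetical stand-ins, not the library's actual API.

```python
import pandas as pd

# Hypothetical sketch of the interface shape: each fetcher returns
# features, labels, and fairness metadata that downstream metrics and
# mitigators can consume directly. All names and keys are assumed.

def fetch_toy_fairness_df():
    """Toy stand-in for one of the suite's fetch functions."""
    X = pd.DataFrame({
        "age": [25, 47, 35, 52],
        "sex": [0, 1, 0, 1],        # protected attribute, encoded 0/1
        "income": [30_000, 80_000, 45_000, 60_000],
    })
    y = pd.Series([0, 1, 0, 1], name="approved")
    fairness_info = {
        "favorable_labels": [1],
        "protected_attributes": [{"feature": "sex", "reference_group": [1]}],
    }
    return X, y, fairness_info

X, y, fairness_info = fetch_toy_fairness_df()
print(fairness_info["protected_attributes"])
```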

Recommended citation: Hirzel, M., & Feffer, M. (2023). A Suite of Fairness Datasets for Tabular Classification. arXiv (2023) https://arxiv.org/abs/2308.00133

Assistive Alignment of In-The-Wild Sheet Music and Performances

Published in ISMIR LBD, 2022

We developed an interactive system, MeSA, which leverages off-the-shelf measure and beat detection software to aid musicians in quickly producing measure-level alignments (mappings from bounding boxes of measures in the sheet music to timestamps in the performance audio). We verified MeSA's functionality by using it to create a small proof-of-concept dataset, MeSA-13.
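
A measure-level alignment of this kind can be represented with a very simple data structure; the sketch below (field names assumed, not MeSA's actual schema) pairs each measure's bounding box with its onset time in the audio.

```python
from dataclasses import dataclass

# Illustrative data structure for a measure-level alignment entry:
# a bounding box in the sheet-music image paired with the timestamp
# at which that measure begins in the performance audio.

@dataclass
class MeasureAlignment:
    page: int           # sheet-music page index
    x: float            # bounding box origin, in pixels
    y: float
    width: float
    height: float
    start_time: float   # onset of the measure in the audio, in seconds

alignment = [
    MeasureAlignment(page=0, x=120, y=340, width=210, height=95, start_time=0.0),
    MeasureAlignment(page=0, x=330, y=340, width=205, height=95, start_time=2.4),
]

def measure_at(t: float) -> MeasureAlignment:
    """Return the measure playing at time t (assumes entries sorted by onset)."""
    current = alignment[0]
    for m in alignment:
        if m.start_time <= t:
            current = m
    return current

print(measure_at(1.7))
```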

Recommended citation: Feffer, M., Lipton, Z. C., & Donahue, C. (2022). Assistive Alignment of In-The-Wild Sheet Music and Performances. ISMIR LBD (2022) https://archives.ismir.net/ismir2022/latebreaking/000038.pdf

A Mixture of Personalized Experts for Human Affect Estimation

Published in MLDM, 2018

We investigate the personalization of deep convolutional neural networks for facial expression analysis from still images. While prior work has focused on population-based (“one-size-fits-all”) approaches, we formulate and construct personalized models via a mixture-of-experts and supervised domain adaptation approach, showing that it greatly improves upon non-personalized models.
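
The mixture-of-experts formulation can be sketched compactly in PyTorch; this is a generic illustration of the idea, not the paper's exact architecture: a gating network produces per-expert weights, and the model output is the weighted combination of expert predictions.

```python
import torch
import torch.nn as nn

# Generic mixture-of-experts sketch (not the paper's exact network).

class MixtureOfExperts(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, n_experts: int):
        super().__init__()
        # One small MLP per expert (e.g., one expert per person/group).
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, out_dim))
            for _ in range(n_experts)
        )
        # Gating network: soft assignment of each input to experts.
        self.gate = nn.Sequential(nn.Linear(in_dim, n_experts), nn.Softmax(dim=-1))

    def forward(self, x):
        weights = self.gate(x)                                   # (B, E)
        outputs = torch.stack([e(x) for e in self.experts], 1)   # (B, E, D)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)      # (B, D)

# Hypothetical sizes: 128-dim face features, 7 expression classes.
model = MixtureOfExperts(in_dim=128, out_dim=7, n_experts=4)
logits = model(torch.randn(8, 128))
print(logits.shape)  # torch.Size([8, 7])
```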

Recommended citation: Feffer, M., Rudovic, O., & Picard, R. W. (2018). A Mixture of Personalized Experts for Human Affect Estimation. In: Perner, P. (ed.) Machine Learning and Data Mining in Pattern Recognition. MLDM 2018. Lecture Notes in Computer Science, vol 10935. Springer, Cham. https://dspace.mit.edu/bitstream/handle/1721.1/129494/personalized-mixture-supervised_final_tYWcW0Y.pdf?sequence=2&isAllowed=y