Public perception of accuracy-fairness trade-offs in algorithmic decisions in the United States
Abstract
The naive approach to preventing discrimination in algorithmic decision-making is to exclude protected attributes from the model’s inputs. This approach, known as “equal treatment,” aims to treat all individuals identically regardless of their demographic characteristics. However, this practice can still produce unequal impacts across groups. Alternative notions of fairness have recently been proposed to reduce such unequal impact, but these approaches may require sacrificing predictive accuracy. The present research investigates public attitudes toward these trade-offs in the United States. When are individuals more likely to support equal treatment algorithms (ETAs), characterized by higher predictive accuracy, and when do they prefer equal impact algorithms (EIAs), which reduce performance gaps between groups? A randomized conjoint experiment and a follow-up choice experiment revealed that support for EIAs decreased sharply as their accuracy gap grew, although impact parity was prioritized more when ETAs produced large outcome discrepancies. Preferences also polarized along partisan lines: Democrats favored impact parity over accuracy maximization, while Republicans displayed the reverse preference. Gender and social justice orientations significantly predicted EIA support as well. Overall, the findings demonstrate multidimensional drivers of attitudes toward algorithmic fairness, underscoring divisions over equality versus equity principles. Establishing standards for fair AI will require reconciling these conflicting human values through sound governance.