Estimating the Heterogeneous Welfare Effects of Choice Architecture:
An Application to the Medicare Prescription Drug Insurance Market
By JONATHAN D. KETCHAM, NICOLAI V. KUMINOFF AND CHRISTOPHER A. POWERS*
We develop a structural model for estimating the welfare effects of policies that alter the design of differentiated product markets when some consumers may be misinformed about product characteristics. We use the model to analyze three proposals to simplify markets for Medicare prescription drug insurance: (1) reducing the number of plans, (2) providing personalized information, and (3) changing defaults so consumers are reassigned to cheaper plans. First we combine national administrative and survey data to determine which consumers appear to make informed enrollment decisions. Then we analyze the welfare effects of each proposal, using revealed preferences of informed consumers to proxy for concealed preferences of misinformed consumers. Results suggest that the menu reduction would harm most consumers whereas personalized information and reassignment would benefit most consumers. Each policy produces large gains and losses for some consumers, but no policy changes average consumer welfare by more than 19% of average expenditures.
April 2016

* Ketcham: Arizona State University, Department of Marketing, Tempe, AZ 85287 (e-mail: Ketcham@asu.edu). Kuminoff: Arizona State University, Dept. of Economics and NBER, Tempe, AZ 85287 (e-mail: email@example.com). Powers: U.S. Department of Health and Human Services, Centers for Medicare and Medicaid Services, Center for Strategic Planning / DDSG, 7500 Security Boulevard, Mailstop C3-24-07, Baltimore, MD 21244 (e-mail: Christopher.Powers@cms.hhs.gov). Ketcham and Kuminoff’s research was supported by a grant from the National Institute for Health Care Management (NIHCM) Research and Educational Foundation. The findings do not necessarily represent the views of the NIHCM Research and Educational Foundation. We are grateful for insights and suggestions from Gautam Gowrisankaran, Sebastien Houde, Christos Makridis, Alvin Murphy, Sean Nicholson, Jaren Pope, Dan Silverman, Meghan Skira, V. Kerry Smith, and seminar audiences at the AEA/ASSA Annual Meeting, the Congressional Budget Office, Health and Human Services Office of the Assistant Secretary for Planning and Evaluation, the ASU Health Economics Conference, the Annual Health Economics Conference, the Quantitative Marketing and Economics Conference, Brigham Young University, Cornell, Iowa State University, Stanford, University of Arizona, UC Santa Barbara, University of Chicago, University of Maryland, University of Miami, University of Southern California, Vanderbilt, and Yale.
One of the frontiers in empirical microeconomics is to assess the equity and efficiency of policies that manipulate the way markets are designed in order to nudge consumers toward making certain decisions. Thaler and Sunstein (2008) dubbed this approach to policy “choice architecture”. Examples of choice architecture include restricting the number of differentiated products in a market, providing consumers with personalized information about their options, and making default choices for consumers but letting them opt out. Understanding how such policies affect consumer welfare and government spending is increasingly important for program evaluation. The United Kingdom, the United States, the World Bank, and other government organizations have begun using choice architecture to nudge program beneficiaries.1 A stated goal of choice architecture is to benefit consumers who do not make fully informed decisions. Such paternalistic policies may also harm some consumers by eliminating their preferred products, by making it harder to buy those products, and by causing prices to increase (Camerer et al. 2003). Yet little work has attempted to predict the distribution of gains and losses of prospective choice architecture policies. To do so requires addressing three key challenges. First, one must identify which consumers are misinformed. Second, one must infer the preferences of both informed and misinformed consumers. Finally, one must predict how a counterfactual policy would affect sorting behavior and market prices. In this paper we develop a revealed preference framework to address these challenges and use it to predict the welfare effects of three recent proposals to redesign Medicare markets for prescription drug insurance.
Prescription drug insurance is an ideal setting for studying choice architecture. In 2006, Medicare Part D created government-designed, taxpayer-subsidized geographic markets for standalone prescription drug insurance plans. By 2013, these markets annually enrolled 23 million seniors with federal outlays of $65 billion (US Department of Health and Human Services 2014). When obtaining coverage, the typical enrollee chooses among 50 plans that differ in cost, risk protection, and quality. A new enrollee’s choice becomes her future default; she will be passively reassigned to that same plan the following year unless she actively switches to a different one during the open enrollment window. Due to concerns about market complexity and consumer inertia, researchers and federal agencies have proposed several reforms (McFadden 2006, Thaler and Sunstein 2008, Federal Register 2014). These include reducing the number of plans, providing consumers with personalized information about their options, and auto-assigning people to default plans other than the plan they had chosen previously. We assess the welfare effects of these proposals using a novel combination of administrative records and survey data on a national panel of enrollees from 2006-2010. Specifically, we link the Medicare Current Beneficiary Survey (MCBS) to administrative records of the respondents’ annual enrollment decisions, drug claims, and chronic medical conditions.

1 For example, in 2015 President Obama issued an executive order directing the newly created US Social and Behavioral Sciences Team to help federal agencies “identify programs that offer choices and carefully consider how the presentation and structure of those choices, including the order, number, and arrangement of options, can most effectively promote public welfare, as appropriate, giving particular consideration to the selection and setting of default options.” (U.S. Executive Order #13707, Section 1.b.iii).
Although the MCBS and administrative data have been analyzed separately by prior studies, this is the first time they have been linked for research. Linking the two data sets allows us to identify enrollees who are misinformed and analyze their decisions. The longitudinal MCBS tracks enrollees’ effort to learn about the market and tests their knowledge of how the market works. Equally important, the MCBS reports whether relatives or other advisors helped enrollees choose plans. In cases where advisors made the enrollment decisions, the MCBS tests the advisors’ knowledge. Given the money at stake and the advanced age of the Medicare population, it is unsurprising to find that 38% of enrollees had help choosing a plan. Enrollees are more likely to get help if they are older, lower income, less educated, less internet savvy, or diagnosed with depression or dementia.
We model the person’s annual plan choice as a static repeated-choice process in which it may be costly for people to learn about their options or to switch plans.2 We first identify a subset of choices for which we are unwilling to assume they reveal the person’s preferences for plan attributes because the person appears to be misinformed. We identify a choice as misinformed if the MCBS knowledge test reveals that the decision maker misunderstood a critical feature of the market, or if her choice can only be rationalized under full information by preference orderings that violate weak risk aversion or basic axioms of consumer theory. Based on these criteria, we find that 44% of 2006-2010 plan choices are misinformed. The probability of being in this group increases as enrollees age, as they develop cognitive illnesses, and as their drug expenditures increase. The probability decreases with education and with their effort to learn about the market.

2 A static model seems appropriate here because it is difficult for consumers to forecast their own future prescription drug needs, let alone the drug needs and enrollment decisions of other consumers together with the implications for plan prices and offerings. Our static approach is similar to other health insurance applications such as Handel (2013) and Handel and Kolstad (2015).
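The two-part screening logic described in the preceding discussion can be sketched as a simple rule. This is an illustrative sketch only, not the paper's implementation: the field names and the dominance test below (another plan weakly better on cost, risk protection, and quality, strictly better on at least one) are hypothetical stand-ins for the precise criteria detailed in Section III.

```python
def is_misinformed(failed_knowledge_test, chosen, alternatives):
    """Flag a plan choice as misinformed if (a) the decision maker failed
    the knowledge test, or (b) the chosen plan is dominated by an
    alternative that is weakly better on every attribute and strictly
    better on at least one. Plans are dicts with hypothetical keys:
    'cost' (lower is better), 'risk_protection' and 'quality'
    (higher is better)."""
    if failed_knowledge_test:
        return True
    for alt in alternatives:
        weakly_better = (alt['cost'] <= chosen['cost']
                         and alt['risk_protection'] >= chosen['risk_protection']
                         and alt['quality'] >= chosen['quality'])
        strictly_better = (alt['cost'] < chosen['cost']
                           or alt['risk_protection'] > chosen['risk_protection']
                           or alt['quality'] > chosen['quality'])
        if weakly_better and strictly_better:
            return True  # choice cannot be rationalized under full information
    return False
```

A choice that merely trades off a higher premium against better risk protection is not flagged by the dominance test; only choices no informed, weakly risk-averse consumer would make are classified this way.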
We then estimate and validate separate multinomial logit models for informed and misinformed choices. We find that informed enrollees are sensitive to price and risk averse at levels consistent with prior evidence (Cohen and Einav 2007, Handel 2013, Handel and Kolstad 2015).3 In contrast, the decisions made by misinformed enrollees imply that they are risk-loving and less price sensitive. Because their choices appear to be misinformed, however, we do not interpret them as revealing preferences. Instead, we infer the preferences of misinformed enrollees from the behavior of observationally identical enrollees in the informed group. The underlying assumption is that being informed is uncorrelated with preferences after conditioning on prescription drug use and demographics. Using this assumption, we generalize Small and Rosen’s (1981) welfare measure to allow some consumers to be misinformed about product attributes. This introduces flexibility to the welfare analysis to account for the idea that misinformed consumers can be made better off or worse off by policies that provide information or restrict the choice set.
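As a point of reference, the object being generalized is the standard Small and Rosen (1981) logsum measure of expected consumer surplus in a multinomial logit model. In generic notation (ours, not necessarily the paper's):

```latex
% Expected consumer surplus for person $i$ choosing among plans $j \in J$,
% where $V_{ij}$ is the deterministic component of utility, $\alpha_i$ is
% the marginal utility of income, and $C_i$ is a constant:
\mathbb{E}[CS_i] \;=\; \frac{1}{\alpha_i}\,
  \ln\!\Big( \sum_{j \in J} e^{V_{ij}} \Big) \;+\; C_i
```

The generalization described above allows the plan attributes that a misinformed consumer perceives when choosing to differ from the attributes that determine her realized utility, which is why a policy that corrects information or restricts the choice set can either raise or lower this measure for her.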
We use our estimates to simulate three counterfactual policies. First, we simulate the government’s proposal to limit each insurer to sell no more than two plans per market (Federal Register 2014). Second, we calibrate our model to replicate treatment effects from a field experiment by Kling et al. (2012) in which enrollees were told how much money they could expect to save by switching plans. Third, we simulate the government’s proposal to automatically reassign people to low-cost plans (US Department of Health and Human Services 2014). All three policies have winners and losers. Reducing the number of plans makes at least two thirds of consumers worse off and effectively transfers income from consumers and taxpayers to insurers. This policy also embeds a strong incentive for regulatory capture because insurers can increase transfer payments by influencing which plans are retained. In contrast, we find that personalized information benefits more than two thirds of consumers and reduces government spending. The same is true for reassignment if it is costless to opt out. However, average opt-out costs of $73 eliminate these gains.

3 Our baseline estimates imply that informed consumers would be indifferent between a 50-50 bet of winning $100 and losing between $91.6 and $97.3. Within this range, risk aversion is increasing with education.
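The 50-50 bet reported above can be translated into a coefficient of absolute risk aversion with a short calculation. The sketch below assumes CARA utility purely for illustration (the paper's utility specification is presented later and need not be CARA) and solves the indifference condition numerically by bisection:

```python
import math

def cara_coefficient(win, lose, lo=1e-8, hi=0.1, tol=1e-12):
    """Bisect for the CARA coefficient r at which an agent is indifferent
    between the status quo and a 50-50 bet winning `win` or losing `lose`
    dollars. With u(x) = -exp(-r*x), indifference at any wealth level
    reduces to:  0.5*exp(-r*win) + 0.5*exp(r*lose) = 1."""
    f = lambda r: 0.5 * math.exp(-r * win) + 0.5 * math.exp(r * lose) - 1.0
    # For lose < win, f is negative for small r > 0 and positive for
    # large r, so [lo, hi] brackets the unique positive root.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Indifference at a $91.6 loss implies MORE risk aversion (a higher r)
# than indifference at a $97.3 loss.
r_more_averse = cara_coefficient(100.0, 91.6)
r_less_averse = cara_coefficient(100.0, 97.3)
```

Under this illustrative CARA assumption, the reported range of indifference losses maps to small positive risk-aversion coefficients over dollar stakes, consistent with the moderate risk aversion the paper attributes to informed consumers.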
All of these findings account for premium adjustments caused by consumer sorting, and they persist if we use more inclusive or more exclusive rules for identifying misinformed choices. Moreover, our findings are robust to competing explanations for consumer inertia and to divergent assumptions about how policy will affect behavior. That is, the reported share of consumers who benefit from each policy and the break-even opt-out cost are bounds on ranges that we obtain by repeating our analyses under extreme assumptions about the efficacy of choice architecture. In our “most effective” scenario we assume that each policy causes misinformed consumers to start behaving like their analogs in the informed group. This scenario also assumes that inertia is caused entirely by misinformation. At the opposite extreme, our “least effective” scenario assumes the policies would not affect consumer behavior and that inertia by informed consumers reflects their hassle cost of switching insurance plans and/or their utility from latent welfare-relevant features of their preferred plans.
This article advances on prior work that also adapts Bernheim and Rangel’s (2009) welfare framework to evaluate policies that target consumer inertia and misinformation.
Prior studies have sought to recover preferences in such environments by leveraging experiments and surveys to distinguish between active and passive choices made by consumers who are assumed to differ in their knowledge of market institutions. Most identify preferences by assuming that information treatments make consumers fully informed or that consumers making active decisions are fully informed (Handel 2013, Allcott and Kessler 2015, Allcott and Taubinsky 2015, Taubinsky and Rees-Jones 2015, Ho, Hogan and Scott Morton 2015, Polyakova 2015). We relax both assumptions. Like Handel and Kolstad (2015) and Handel, Kolstad and Spinnewijn (2015) we assess whether active decision makers are informed by testing their knowledge directly. We sharpen our test by leveraging novel features of our data to identify who actually makes enrollment decisions (beneficiaries or their advisors) and whether those decisions violate axioms of consumer theory. Like Handel (2013) and Bernheim, Fradkin, and Popov (2015) we estimate bounds on welfare that recognize consumer inertia may arise from a mixture of latent preferences, information costs, switching costs, and psychological biases. We extend this partial identification logic to consider alternative hypotheses for how consumers and firms will respond to choice architecture policies and the implications of those responses for consumer welfare, firm revenue, and taxpayer spending. From a policy perspective, we believe our study is the first to use Bernheim and Rangel’s framework to evaluate federal proposals to simplify a high-stakes differentiated product market that is both subsidized and regulated by the federal government.
The remainder of the paper is organized as follows. Section I provides background on Medicare Part D and relevant literature. Section II describes our data and Section III explains how we use it to identify choices that we suspect are based on misinformation.
Section IV presents our parametric model of drug plan choice and Section V uses it to derive measures of consumer welfare for choice architecture policies. Section VI presents results from logit models of drug plan choice for informed and misinformed consumers.