**Hi, my name is Melody!**

Email: **melodyyhuang** [asperand] **berkeley** [dot] edu

Twitter: @melodyyhuang

I'm a Statistics Ph.D. candidate at **University of California, Berkeley**, working with **Prof. Erin Hartman**. My research broadly focuses on developing robust statistical methods to credibly estimate causal effects under real-world complications.

Outside of statistics, I spend my time thinking about coffee beans, how it's not fair that women can't safely run at night, and optimal board game strategies.

**Recent News**

- [March 2023] I will be presenting on **variance-based sensitivity models** at the **Online Causal Inference Seminar**!
- [Nov. 2022] I will be presenting my work on sensitivity analysis at **University of Pennsylvania's Causal Inference Seminar**.
- [Oct. 2022] Our paper on **leveraging population outcomes to improve the generalization of experimental results** will be appearing in **Annals of Applied Statistics**!
- [Sep. 2022] I will be at **APSA** this week, presenting at the EGAP/CSDC workshop, **“Knowledge Accumulation and External Validity.”**
- [Aug. 2022] My paper with Sam Pimentel on **variance-based sensitivity models** is on arXiv (**link**)!
- [July 2022] I will be presenting some new work in sensitivity analysis at **PolMeth 2022**. Please swing by to say hello!
- [June 2022] My paper with Erin Hartman on **sensitivity analysis for survey weighting** is now available on arXiv (**link**)!
- [May 2022] I will be at **ACIC**, presenting on my work in sensitivity analysis. Come say hi!
- [Apr. 2022] I will be presenting on sensitivity analysis for survey weighting at **MPSA 2022**!
- [Feb. 2022] My paper on **sensitivity analysis for external validity** is now on arXiv (link).
- [Jan. 2022] My work on **improving covariate balancing approaches** (joint work with Brian Vegetabile, Lane Burgette, Claude Setodji, and Beth Ann Griffin) will be appearing in **Epidemiology**!
- [Nov. 2021] Our paper on **improving the generalization of experimental results** (joint work with Naoki Egami, Erin Hartman, and Luke Miratrix) is now on arXiv (link).
- [Oct. 2021] I am presenting my work on sensitivity analysis for generalizing experimental results at **WSDS 2021**!

**Sensitivity Analysis in the Generalization of Experimental Results**

Randomized controlled trials (RCTs) allow researchers to estimate causal effects in an experimental sample with minimal identifying assumptions. However, to generalize a causal effect from an RCT to a target population, researchers must adjust for a set of treatment effect moderators. In practice, it is impossible to know whether the set of moderators has been properly accounted for. In the following paper, I propose a three-parameter sensitivity analysis for the generalization of experimental results using weighted estimators, with several advantages over existing methods. First, the framework does not require any parametric assumptions about either the selection mechanism or the treatment effect heterogeneity. Second, I show that the sensitivity parameters are guaranteed to be bounded, and propose (1) a diagnostic for researchers to determine how robust a point estimate is to killer confounders, and (2) an adjusted calibration approach for researchers to accurately benchmark the parameters using existing data. Finally, I demonstrate that the proposed framework can be easily extended to the class of doubly robust, augmented weighted estimators. The sensitivity analysis framework is applied to a set of Jobs Training Program experiments.
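As a toy illustration of the weighted estimators this framework applies to (hypothetical simulated data and weights, not the paper's actual analysis): the weighted difference in means reweights the experimental sample toward the target population, which shifts the estimate whenever treatment effects vary with a moderator that differs between sample and population.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical experimental sample: treatment T, outcome Y,
# and a moderator X that drives treatment effect heterogeneity
n = 1000
X = rng.normal(size=n)
T = rng.integers(0, 2, size=n)
Y = 1.0 + 2.0 * T + 1.5 * T * X + rng.normal(size=n)

# Hypothetical selection weights: units with large X are
# underrepresented in the sample relative to the target population
w = np.exp(0.8 * X)
w /= w.mean()

# Unweighted (sample) vs. weighted (population) difference in means
sate_hat = Y[T == 1].mean() - Y[T == 0].mean()
pate_hat = (np.average(Y[T == 1], weights=w[T == 1])
            - np.average(Y[T == 0], weights=w[T == 0]))
```

The sensitivity question the paper addresses is what happens when `w` omits a moderator: the gap between the two estimates above is then only partially corrected.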

**Variance-based Sensitivity Analysis for Weighted Estimators Result in More Informative Bounds**

with Sam Pimentel

Weighting methods are popular tools for estimating causal effects, and assessing their robustness under unobserved confounding is important in practice. In the following paper, we introduce a new set of sensitivity models called "variance-based sensitivity models." Variance-based sensitivity models characterize the bias from omitting a confounder by bounding the distributional differences in the weights that arise from its omission, with several notable innovations over existing approaches. First, variance-based sensitivity models can be parameterized with respect to a simple R^2 parameter that is both standardized and bounded. We introduce a formal benchmarking procedure that allows researchers to use observed covariates to reason about plausible parameter values in an interpretable and transparent way. Second, we show that researchers can estimate valid confidence intervals under a set of variance-based sensitivity models, and provide extensions for researchers to incorporate their substantive knowledge about the confounder to help tighten the intervals. Last, we highlight the connection between our proposed approach and existing sensitivity analyses, and demonstrate, both empirically and theoretically, that variance-based sensitivity models can improve both the stability and the tightness of the estimated confidence intervals relative to existing methods. We illustrate our proposed approach on a study examining blood mercury levels using the National Health and Nutrition Examination Survey (NHANES).
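To give intuition for the kind of quantity a variance-based sensitivity model reasons about (an illustrative sketch with hypothetical data, not the paper's exact parameterization): when a confounder is omitted from the weights, some of the ideal weights' variation is left unexplained by the estimated weights, and that unexplained share can be summarized by an R^2-style parameter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setting: the "ideal" weights depend on X and an
# unobserved confounder U; the estimated weights omit U
n = 5000
X = rng.normal(size=n)
U = rng.normal(size=n)

w_full = np.exp(0.5 * X + 0.5 * U)  # ideal weights (use X and U)
w_obs = np.exp(0.5 * X)             # estimated weights (omit U)
w_full /= w_full.mean()
w_obs /= w_obs.mean()

# An R^2-style summary: the share of the ideal weights' variance
# left unexplained by the estimated weights (illustrative definition,
# not the paper's exact formula)
slope, intercept = np.polyfit(w_obs, w_full, 1)
resid = w_full - (slope * w_obs + intercept)
r2_unexplained = resid.var() / w_full.var()
```

Because a share of variance is standardized and bounded between 0 and 1, researchers can benchmark it against the variation explained by observed covariates.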

**Sensitivity Analysis for Survey Weighting**

with Erin Hartman

Survey weighting allows researchers to account for bias in survey samples, due to unit nonresponse or convenience sampling, using measured demographic covariates. Unfortunately, in practice, it is impossible to know whether the estimated survey weights are sufficient to alleviate concerns about bias due to unobserved confounders or incorrect functional forms used in weighting. In the following paper, we propose two sensitivity analyses for the exclusion of important covariates: (1) a sensitivity analysis for partially observed confounders (i.e., variables measured across the survey sample, but not the target population), and (2) a sensitivity analysis for fully unobserved confounders (i.e., variables not measured in either the survey or the target population). We provide graphical and numerical summaries of the potential bias that arises from such confounders, and introduce a benchmarking approach that allows researchers to quantitatively reason about the sensitivity of their results. We demonstrate our proposed sensitivity analyses using a 2016 U.S. Presidential Election poll.
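As a minimal illustration of the survey weighting setting (hypothetical data; the paper's sensitivity analyses are not implemented here): weighting on a measured covariate corrects the bias that covariate induces, but says nothing about confounders omitted from the weights, which is the gap the proposed sensitivity analyses quantify.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical survey: young respondents are overrepresented
# (70% of the sample) and hold systematically different opinions
n = 2000
young = rng.random(n) < 0.7
y = np.where(young, 0.4, 0.6) + 0.05 * rng.normal(size=n)

# Weight each group to a known population benchmark: 50% young
pop_share_young = 0.5
w = np.where(young,
             pop_share_young / young.mean(),
             (1 - pop_share_young) / (1 - young.mean()))

raw_mean = y.mean()                      # biased toward the young
weighted_mean = np.average(y, weights=w)  # corrected for age
```

If response also depended on an unmeasured variable, the weighted estimate would remain biased, motivating a sensitivity analysis for the excluded covariate.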

**Leveraging Population Outcomes to Improve the Generalization of Experimental Results**

with Naoki Egami, Erin Hartman, and Luke Miratrix

Randomized controlled trials are often considered the gold standard in causal inference due to their high internal validity. Despite its importance, generalizing experimental results to a target population is challenging in the social and biomedical sciences. Recent papers clarify the assumptions necessary for generalization and develop various weighting estimators for the population average treatment effect (PATE). However, in practice, many of these methods result in large variance and little statistical power, thereby limiting the value of PATE inference. In this article, we propose post-residualized weighting, in which information about the outcome measured in the observational population data is used to improve the efficiency of many existing popular methods without making additional assumptions. We empirically demonstrate the efficiency gains through simulations and apply our proposed method to a set of jobs training program experiments.
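A minimal sketch of the idea behind post-residualized weighting, on hypothetical simulated data (the true prognostic function stands in for a model fit on population data): applying the same weighted estimator to outcomes residualized against a population-based prediction leaves the estimand unchanged while removing outcome variation unrelated to treatment, which is where the efficiency gain comes from.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical experiment: the outcome depends strongly on X,
# and a prognostic model g(X) is available from population data
# (here we stand in the true function g(X) = 3X for that model)
n = 2000
X = rng.normal(size=n)
T = rng.integers(0, 2, size=n)
Y = 3.0 * X + 1.0 * T + rng.normal(size=n)
g = 3.0 * X

w = np.exp(0.5 * X)  # hypothetical generalization weights
w /= w.mean()

def weighted_dim(y, t, wt):
    """Weighted difference in means between treated and control."""
    return (np.average(y[t == 1], weights=wt[t == 1])
            - np.average(y[t == 0], weights=wt[t == 0]))

est_raw = weighted_dim(Y, T, w)        # usual weighted estimator
est_res = weighted_dim(Y - g, T, w)    # post-residualized version
```

Both estimators target the same effect; across repeated samples the residualized version has a much smaller variance because the `3X` component no longer contributes noise.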

**Higher Moments for Optimal Balance Weighting in Causal Estimation**

with Brian Vegetabile, Lane Burgette, Claude Setodji, and Beth Ann Griffin

We expand upon a simulation study that compared three promising methods for estimating weights for assessing the average treatment effect on the treated for binary treatments: generalized boosted models, covariate-balancing propensity scores, and entropy balancing. The original study showed that generalized boosted models can outperform covariate-balancing propensity scores and entropy balancing when there are likely to be non-linear associations in both the treatment assignment and outcome models and when the other two methods are tuned to obtain balance only on first-order moments. We explore the potential benefit of using higher-order moments in the balancing conditions for covariate-balancing propensity scores and entropy balancing. Our findings show that these two methods should include higher-order moments by default, and that focusing only on first moments can result in substantial bias in treatment effect estimates that could have been avoided by using higher moments.
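As an illustrative sketch of the balancing conditions at issue (hypothetical data; a bare-bones entropy-balancing dual solved by gradient descent, not the tuned implementations compared in the paper): including second moments in the balance conditions forces the reweighted control group to match the treated group's variance, not just its mean, which matters when treatment assignment is non-linear in the covariates.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical observational data with a non-linear treatment model
n = 3000
X = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(0.5 * X + 0.5 * X**2 - 0.5)))
T = (rng.random(n) < p).astype(int)

# Balance conditions on the first AND second moments of X
def moments(x):
    return np.column_stack([x, x**2])

C = moments(X[T == 0])                    # control-unit features
target = moments(X[T == 1]).mean(axis=0)  # treated means (ATT target)

# Entropy-balancing dual solved by plain gradient descent:
# find weights on controls whose feature means match the target
lam = np.zeros(C.shape[1])
for _ in range(5000):
    w = np.exp(C @ lam)
    w /= w.sum()
    lam -= 0.1 * (C.T @ w - target)

w = np.exp(C @ lam)
w /= w.sum()
achieved = C.T @ w                        # weighted control moments
```

Dropping the `x**2` column from `moments` would reproduce the first-moment-only setting the paper warns about: means balance, but the distributions do not.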

**Improving Precision in the Design and Analysis of Experiments with Non-Compliance**

with Erin Hartman

Even in the best-designed experiment, noncompliance with treatment assignment can complicate analysis. Under one-way noncompliance, researchers typically rely on an instrumental variables approach, under an exclusion restriction assumption, to identify the complier average causal effect (CACE). This approach suffers from high variance, particularly when the experiment has a low compliance rate. The following paper suggests blocking designs that can help overcome precision losses in experiments with high rates of noncompliance when a placebo-controlled design is infeasible. We also introduce the principal ignorability assumption and a class of principal score weighted estimators, which are largely absent from the experimental political science literature. We then introduce the "block principal ignorability" assumption which, when combined with a blocking design, suggests a simple difference-in-means estimator for the CACE. We show that blocking can improve the precision of both instrumental variables and principal score weighting approaches, and further show that our simple, design-based solution outperforms both principal score weighting and instrumental variables under blocking. Finally, in a re-evaluation of the Gerber, Green, and Nickerson (2003) study, we find that blocked, principal ignorability approaches to estimating the CACE, including our blocked difference-in-means and principal score weighting estimators, yield confidence intervals roughly half the size of those from traditional instrumental variables approaches.
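For readers unfamiliar with the baseline approach, here is the standard Wald (instrumental variables) estimator of the CACE on hypothetical simulated data; the blocking designs and principal score estimators proposed in the paper are not implemented here. The estimator divides the intent-to-treat effect by the compliance rate, which is exactly why a low compliance rate inflates its variance.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical experiment with one-way noncompliance:
# Z = random assignment, D = treatment actually received
n = 10000
Z = rng.integers(0, 2, size=n)
complier = rng.random(n) < 0.3        # low (30%) compliance rate
D = Z * complier                      # only assigned compliers take up
Y = 2.0 * D + rng.normal(size=n)      # true CACE is 2.0

# Wald / IV estimator: intent-to-treat effect on Y divided by
# the intent-to-treat effect on uptake D
itt_y = Y[Z == 1].mean() - Y[Z == 0].mean()
itt_d = D[Z == 1].mean() - D[Z == 0].mean()
cace_hat = itt_y / itt_d
```

Since the noisy numerator is divided by roughly 0.3 here, its sampling error is magnified more than threefold, which is the precision loss that blocking is designed to recover.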

**University of California, Berkeley**

- STAT 232: Experimental Design (Spring 2023)
- POLI SCI 236B: Quantitative Methodology in the Social Sciences (Spring 2022)

**University of California, Los Angeles**

- STAT 100C: Linear Models (Spring 2019)
- ECON 412: Fundamentals of Big Data (Spring 2019)