<!DOCTYPE HTML>
<html>
<head>
<title>Research</title>
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<meta name="description" content="Yu Group's Research Website" />
<meta name="keywords" content="yugroup" />
<script src="js/jquery.min.js"></script>
<script src="js/skel.min.js"></script>
<script src="js/skel-layers.min.js"></script>
<script src="js/init.js"></script>
<noscript>
<link rel="stylesheet" href="css/skel.css" />
<link rel="stylesheet" href="css/style.css" />
</noscript>
</head>
<body class="no-sidebar">
<!-- Header -->
<div id="header-wrapper">
<div id="header" class="container">
<!-- Logo -->
<h1 id="logo"><a href="index.html">YU Group</a></h1>
<!-- Nav -->
<nav id="nav">
<ul>
<li><a href="people.html">People</a></li>
<li><a href="research.html">Research</a></li>
<li class="break"><a href="publications.html">Publications</a></li>
<li><a href="code.html">Code</a></li>
</ul>
</nav>
</div>
</div>
<!-- Main -->
<div class="wrapper">
<center>
<header>
<h2>Research Projects</h2>
</header>
</center>
<div class="container" id="main">
<br/>
<div class="row features">
<header>
<h3>The Effect of Batch Size in Single Neuron Autoencoders</h3>
Nikhil Ghosh, Spencer Frei, Wooseok Ha
</header>
<p>
Dictionary learning is an important data science technique used in many scientific areas such as genomics. One assumes that the data are sparse combinations of elements of an unknown dictionary, and the goal is to recover this dictionary. Can we solve this problem with algorithms that do not require specifying the data-generating model? For instance, can we simply train a neural network using stochastic gradient descent (SGD), a strategy that has yielded immense practical success across many domains? We make progress on this question by studying it in a simplified setting. We show that when the data are 1-sparse with respect to an orthogonal dictionary, training a single-neuron ReLU autoencoder recovers an element of this dictionary if and only if the batch size is smaller than the size of the dictionary. Even in this simplified setting, the analysis is highly nontrivial due to the stochastic nature of SGD and the non-convex objective function. We introduce tools from non-homogeneous random walk theory to tackle the problem, and we believe these tools may be useful for analyzing other machine learning algorithms.
</p>
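<p>
As a rough illustration, a minimal numpy sketch of this experiment is below. It makes two simplifying assumptions: the orthogonal dictionary is taken to be the standard basis, and the autoencoder uses a tied-weight parameterization that reconstructs x as relu(w·x)·w, which may differ from the paper's exact setup. The result above predicts alignment near 1 for batch size 1 and a diffuse w once the batch size reaches the dictionary size.
</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
d = 32  # dictionary size; take the orthogonal dictionary to be the standard basis

def train(batch_size, steps=20_000, lr=0.05):
    """SGD on a tied-weight single-neuron ReLU autoencoder: x_hat = relu(w.x) * w."""
    w = rng.normal(size=d) / np.sqrt(d)
    for _ in range(steps):
        idx = rng.integers(d, size=batch_size)
        X = np.eye(d)[idx]                 # batch of 1-sparse signals e_i
        pre = X @ w                        # pre-activations w.x_i
        act = np.maximum(pre, 0.0)         # ReLU
        R = act[:, None] * w[None, :] - X  # residuals x_hat_i - x_i
        # gradient of 0.5 * mean_i ||x_hat_i - x_i||^2 with respect to w
        grad = (act[:, None] * R).mean(0) + (((R @ w) * (pre > 0))[:, None] * X).mean(0)
        w -= lr * grad
    return w

for b in (1, d):  # a small batch vs. a batch as large as the dictionary
    w = train(b)
    # alignment of w with its best-matching dictionary element (1.0 = recovery)
    print(f"batch size {b:2d}: alignment = {np.max(np.abs(w)) / np.linalg.norm(w):.3f}")
</code></pre>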
</div>
<br/>
<div class="row features">
<header>
<h3>Heart disease genetics</h3>
Tiffany Tang, Omer Ronen, Abhineet Agarwal, Merle Behr, Karl Kumbier
</header>
<p>
Approximately 1 in 500 people suffer from hypertrophic cardiomyopathy (HCM), a common genetic heart disease in which heart muscle cells are larger than normal, causing the heart to work harder than it should. As a result, those with HCM are at higher risk for heart failure and other heart-related complications later in life. To better understand HCM, our group is building a stability-driven pipeline to identify and understand the genes and gene interactions that affect the size of heart muscle cells. This work builds upon ideas from previous work, including iterative Random Forests (iRF) and epiTree. In collaboration with the Ashley lab at Stanford, we are also validating our scientific recommendations through wet-lab experiments.
</p>
</div>
<br/>
<div class="row features">
<header>
<h3>Interpreting neural networks</h3>
Chandan Singh, Wooseok Ha, Robbie Netzorg, Jamie Murdoch, Laura Rieger
</header>
<p>
This project is part of a broad theme running through our group on <a href="https://arxiv.org/abs/1901.04592">interpretable machine learning</a>. Deep neural networks (DNNs) have achieved impressive predictive performance due to their ability to learn complex, non-linear relationships between variables. However, the inability to effectively visualize these relationships has led to DNNs being characterized as black boxes and has consequently limited their applications. To ameliorate these problems, we have introduced multiple new algorithms for interpreting individual decisions made by neural networks.
<br/> <br/>
This line of work began with scoring interactions in LSTMs via <a href="https://arxiv.org/abs/1801.05453">contextual decomposition (CD, ICLR 2018)</a> and was later extended to generate hierarchical interpretations for a large class of neural networks, including CNNs <a href="https://openreview.net/pdf?id=SkEqro0ctQ">(ACD, ICLR 2019)</a>. We then used these techniques to improve the generalization of neural networks during training <a href="https://arxiv.org/abs/1909.13584">(CDEP, ICML 2020)</a> and to investigate the importance of different transformations in predicting cosmological parameters <a href="https://arxiv.org/abs/2003.01926">(TRIM, ICLR Workshop 2020)</a>.
</p>
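<p>
As a generic illustration of the underlying task (scoring the features of one individual prediction), the sketch below uses simple occlusion: mask a feature and record how the output changes. This is a baseline shown for exposition only, not contextual decomposition or any of the methods linked above.
</p>
<pre><code>import numpy as np

def occlusion_scores(predict, x, baseline=0.0):
    """Score each feature of a single input x by the change in predict(x) when
    that feature is masked. A generic occlusion baseline for illustration,
    not the group's decomposition methods."""
    base = predict(x)
    scores = np.zeros_like(x, dtype=float)
    for j in range(len(x)):
        x_masked = x.copy()
        x_masked[j] = baseline                 # occlude feature j
        scores[j] = base - predict(x_masked)   # positive = pushed the output up
    return scores

# toy usage with a hypothetical linear "network"
weights = np.array([2.0, -1.0, 0.5])
print(occlusion_scores(lambda v: float(weights @ v), np.ones(3)))  # [ 2. -1.  0.5]
</code></pre>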
</div>
<br/>
<div class="row features">
<header>
<h3>Information Extraction for Pathology Reports</h3>
Briton Park, Aliyah Hsu, Nicholas Altieri
</header>
<p>
There exist decades of pathology reports that could be used to analyze the efficacy of treatments or to augment disease diagnoses. However, they are currently locked away in free text, which prevents the application of statistical techniques. Our group is developing methods to convert these pathology reports, across heterogeneous cancers and institutions, into a structured database so that statistical methods can be brought to bear on them.
</p></div>
<br/>
<div class="row features">
<header>
<h3>Adaptive wavelets</h3>
Wooseok Ha, Chandan Singh
</header>
<p>
This project aims to leverage the power of wavelets to achieve high predictive performance while maintaining interpretability and computational efficiency.
Recent deep-learning models have achieved impressive predictive performance but often sacrifice both. This line of work begins with <a href="https://arxiv.org/abs/2107.09145">adaptive wavelet distillation (AWD)</a>, a method that distills information from a trained neural network into a wavelet transform. Specifically, AWD penalizes feature attributions of a neural network in the wavelet domain to learn an effective multi-resolution wavelet transform.
We show how AWD works in two real-world settings: cosmological parameter inference, in close collaboration with cosmologist Francois Lanusse, and molecular-partner prediction, with Gokul Upadhyayula, head of the Advanced Bioimaging Center at UC Berkeley. In both cases, AWD yields a scientifically interpretable and concise model whose predictive performance exceeds that of state-of-the-art neural networks.
</p>
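<p>
A loose sketch of the penalty idea is below. It is an assumption-laden simplification: AWD learns the wavelet filters themselves and builds on trained attribution scores, whereas here the wavelet is a fixed one-level Haar transform and the attributions are plain gradient-times-input.
</p>
<pre><code>import torch

def haar_level(x):
    """One level of an orthonormal Haar transform along the last axis."""
    even, odd = x[..., ::2], x[..., 1::2]
    return (even + odd) / 2**0.5, (even - odd) / 2**0.5  # approximation, detail

def wavelet_attribution_penalty(model, x, lam=1e-2):
    """Sparsity penalty on attributions in the wavelet domain (simplified AWD idea)."""
    x = x.requires_grad_(True)
    out = model(x).sum()
    (grads,) = torch.autograd.grad(out, x, create_graph=True)
    approx, detail = haar_level(grads * x)     # gradient-times-input attributions
    return lam * (approx.abs().sum() + detail.abs().sum())

# inside a training step (model and batch assumed to exist):
# loss = task_loss + wavelet_attribution_penalty(model, batch)
</code></pre>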
</div>
<div class="row features">
<header>
<h3>Causal inference: heterogeneous treatment effect estimation</h3>
Sören Künzel, Simon Walter
</header>
<p>
Most studies investigate phenomena that manifest differently in different circumstances, and these differences cannot be captured by statistics that estimate population-average or sample-average effects. Identifying this heterogeneity has assumed increasing importance over the last quarter century in several domains. For example, technology companies now conduct experiments on tens or hundreds of millions of subjects, which provides the power to detect fine-grained heterogeneity; this is desirable in practice because many interventions have zero effect on the outcome of interest for all but a small fraction of subjects. Similarly, there is increasing focus in medicine on providing treatments that are tailored to the peculiarities of individual patients. There is a rich literature on heterogeneous treatment effect estimation, beginning at least as early as 1865.
<br/>
Our group proposed a new procedure for heterogeneous treatment effect estimation, called the X-learner, which enjoys some optimality properties: the X-learner is a two-stage meta-algorithm that first models the unrealized counterfactual and then applies an ordinary regression model to the difference between the actual and counterfactual outcomes to produce estimators of the conditional average treatment effect; this procedure is shown to achieve a minimax rate under certain conditions.
<br/>
In separate work, we suggest refinements to an existing method, the modified outcome method, that render the procedure doubly robust, meaning that it consistently estimates the treatment effect if only one of the two models (for the counterfactual outcome or for the probability of treatment assignment) is correct.
</p>
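<p>
A minimal sketch of the two-stage X-learner is below, following the description above; the choice of random forests as base learners and the hyperparameters are illustrative assumptions.
</p>
<pre><code>import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def x_learner(X, y, t, X_new):
    """Estimate conditional average treatment effects at X_new.
    X: covariates, y: outcomes, t: binary treatment indicators."""
    X0, y0, X1, y1 = X[t == 0], y[t == 0], X[t == 1], y[t == 1]

    # Stage 1: outcome models for the control and treated groups
    mu0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X0, y0)
    mu1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X1, y1)

    # Stage 2: regress imputed effects (actual minus counterfactual outcome)
    d1 = y1 - mu0.predict(X1)      # treated: observed minus imputed control
    d0 = mu1.predict(X0) - y0      # control: imputed treated minus observed
    tau1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X1, d1)
    tau0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X0, d0)

    # Combine the two estimates, weighting by the propensity score g(x)
    g = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, t)
    p = g.predict_proba(X_new)[:, 1]
    return p * tau0.predict(X_new) + (1 - p) * tau1.predict(X_new)
</code></pre>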
</div>
<div class="row features">
<header>
<h3>Gene expression study</h3>
Karl Kumbier, Yu Wang
</header>
<p>
A fundamental problem in systems biology is to understand how regulatory interactions drive development and function of living organisms. We are working with the Sue Celniker and Ben Brown Labs at Lawrence Berkeley National Laboratory to investigate interactions among transcription factors (TFs) in Drosophila embryos. As part of this ongoing collaboration, we developed stability-driven Nonnegative Matrix Factorization (staNMF) to decompose gene expression images into “principal patterns” (PPs) that can be used to relate TFs to pre-organ regions. In addition, we developed iterative Random Forests (iRF) to identify stable, high-order interactions in high-throughput genomic data. Our ongoing work builds on these methods to identify high-quality experimental targets for wet lab validation.
</p>
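<p>
A simplified sketch of the stability computation behind staNMF is below: for a candidate number of patterns k, fit NMF under several random restarts and score how much the learned dictionaries disagree. The published method uses a matched dissimilarity such as an Amari-type error; the cosine-matching score and the name <code>expr_matrix</code> here are illustrative assumptions.
</p>
<pre><code>import numpy as np
from sklearn.decomposition import NMF

def instability(data, k, n_runs=10):
    """Disagreement of NMF dictionaries with k patterns across random restarts
    (lower = a more stable, hence preferable, choice of k)."""
    dicts = []
    for seed in range(n_runs):
        model = NMF(n_components=k, init="random", random_state=seed, max_iter=500)
        model.fit(data)
        H = model.components_                  # patterns x features
        dicts.append(H / np.linalg.norm(H, axis=1, keepdims=True))  # unit rows
    scores = []
    for i in range(n_runs):
        for j in range(i + 1, n_runs):
            C = np.abs(dicts[i] @ dicts[j].T)  # cosine similarity between patterns
            # match each pattern with its best counterpart in the other run
            scores.append(1 - 0.5 * (C.max(axis=0).mean() + C.max(axis=1).mean()))
    return float(np.mean(scores))

# choose the number of principal patterns as the k with the lowest instability:
# best_k = min(range(2, 30), key=lambda k: instability(expr_matrix, k))
</code></pre>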
</div>
<div class="row features">
<header>
<h3>Sampling Algorithms</h3>
Raaz Dwivedi, Yuansi Chen
</header>
<p>
Drawing samples from a known distribution is a core computational challenge common to many disciplines, with applications in statistics, probability, operations research, and other areas involving stochastic models. Recent decades have witnessed the great success of Markov chain Monte Carlo (MCMC) algorithms. These methods construct a Markov chain whose stationary distribution equals the target distribution and then draw samples by simulating the chain for a certain number of steps. From a theoretical viewpoint, a core challenge is to provide convergence guarantees for such algorithms, namely the number of steps required to produce an approximate sample from the target distribution. In our work, we provide theoretical guarantees for several sampling algorithms as a function of the problem parameters. Algorithms we have analyzed include randomized interior point methods for sampling from polytopes and certain Langevin algorithms for sampling from log-concave distributions. Such guarantees can in turn be used to estimate expectations of functions, probabilities of events, and volumes of certain sets.
</p>
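<p>
For concreteness, a minimal numpy sketch of one standard Langevin scheme, the unadjusted Langevin algorithm for a target density proportional to exp(-U), is below; the Gaussian example is illustrative only.
</p>
<pre><code>import numpy as np

def ula(grad_U, x0, step, n_steps, rng):
    """Unadjusted Langevin algorithm: a discretized Langevin diffusion whose
    stationary distribution approximates the target exp(-U)."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.normal(size=x.shape)
    return x

# example: approximate samples from a standard Gaussian, U(x) = ||x||^2 / 2
rng = np.random.default_rng(0)
samples = np.array([ula(lambda x: x, np.zeros(2), 0.05, 1_000, rng)
                    for _ in range(500)])
print(samples.mean(axis=0), samples.std(axis=0))  # roughly 0 and 1
</code></pre>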
</div>
<div class="row features">
<header>
<h3>Neuroscience: understanding the visual pathway</h3>
Reza Abbasi, Yuansi Chen
</header>
<p>
The volume and quality of data recorded from the brain are constantly increasing, giving us a better view of mental processes. We collaborate with neuroscience labs, primarily the Gallant lab, to develop methodology for analyzing such data. We focus on understanding vision by studying the representation of images and videos in early visual areas. These experiments are great examples of modern statistical work: both the stimuli (a video, or sequence of images) and the responses (continuous brain scans, or electrode recordings) are high-dimensional structured objects. We develop principled methods to relate the stimuli and responses for both prediction and interpretation. These include, among others, methods for feature extraction (both learned and engineered), such as those building upon the scattering transform. We also build methods for interpretation, including DeepTune (a method for generating stable, maximally activating stimuli from deep learning models) and compression. Our ongoing work focuses on developing interpretation methods for population-level analysis.
</p>
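<p>
A generic activation-maximization sketch is below; it conveys the flavor of generating a maximally activating stimulus by gradient ascent, but it is a baseline for illustration, not the DeepTune procedure itself (which is designed to produce stimuli that are stable across models).
</p>
<pre><code>import torch

def maximally_activating_input(model, unit, shape, steps=200, lr=0.1):
    """Gradient ascent on the input to maximize one output unit's response.
    A generic baseline for illustration, not DeepTune itself."""
    x = torch.randn(1, *shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        response = model(x)[0, unit]
        (-response).backward()   # minimizing the negative ascends the response
        opt.step()
    return x.detach()
</code></pre>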
</div>
</div>
</div>
<!-- Footer -->
<div id="footer-wrapper">
<div id="copyright" class="container">
<ul class="menu">
<li>Yu Group</li>
</ul>
</div>
</div>
</body>
</html>