<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1.0"/>
<meta name="author" content="yulunliu">
<title>Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline</title>
<!-- CSS -->
<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
<link href="website/css/materialize.css" type="text/css" rel="stylesheet" media="screen,projection"/>
<link href="website/css/style.css" type="text/css" rel="stylesheet" media="screen,projection"/>
<link href="website/css/font-awesome.min.css" rel="stylesheet">
<!--<meta property="og:image" content="http://gph.is/2oZQz8h" />-->
</head>
<body>
<div class="navbar-fixed">
<nav class="grey darken-4" role="navigation">
<div class="nav-wrapper container"><a id="logo-container" href="#" class="brand-logo"></a>
<a href="#" data-activates="nav-mobile" class="button-collapse"><i class="material-icons">menu</i></a>
<ul class="left hide-on-med-and-down">
<li><a class="nav-item waves-effect waves-light" href="#home">Home</a></li>
<li><a class="nav-item waves-effect waves-light" href="#abstract">Abstract</a></li>
<li><a class="nav-item waves-effect waves-light" href="#paper">Paper</a></li>
<li><a class="nav-item waves-effect waves-light" href="#download">Download</a></li>
<li><a class="nav-item waves-effect waves-light" href="#results">Results</a></li>
<li><a class="nav-item waves-effect waves-light" href="#reference">References</a></li>
</ul>
</div>
</nav>
</div>
<div class="section no-pad-bot" id="index-banner">
<div class="container scrollspy" id="home">
<h4 class="header center black-text">Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline</h4>
<br>
<div class="row center">
<h5 class="header col l3 m4 s12">
<div class="author"><a href="http://www.cmlab.csie.ntu.edu.tw/~yulunliu/" target="blank">Yu-Lun Liu<sup>1,5*</sup></a></div>
</h5>
<h5 class="header col l3 m4 s12">
<div class="author"><a href="https://www.wslai.net/" target="blank">Wei-Sheng Lai<sup>2*</sup></a></div>
</h5>
<h5 class="header col l3 m4 s12">
<div class="author"><a href="https://www.cmlab.csie.ntu.edu.tw/~nothinglo/" target="blank">Yu-Sheng Chen<sup>1</sup></a></div>
</h5>
<h5 class="header col l3 m4 s12">
<div class="author">Yi-Lung Kao<sup>1</sup></div>
</h5>
</div>
<div class="row center">
<h5 class="header col l4 m4 s12">
<div class="author"><a href="https://faculty.ucmerced.edu/mhyang/" target="blank">Ming-Hsuan Yang<sup>2,4</sup></a></div>
</h5>
<h5 class="header col l4 m4 s12">
<div class="author"><a href="https://www.csie.ntu.edu.tw/~cyy/" target="blank">Yung-Yu Chuang<sup>1</sup></a></div>
</h5>
<h5 class="header col l4 m4 s12">
<div class="author"><a href="https://filebox.ece.vt.edu/~jbhuang/" target="blank">Jia-Bin Huang<sup>2</sup></a></div>
</h5>
</div>
<div class="row center affiliation-row">
<h5 class="header col offset-l1 l2 m4 s12">
<div class="affiliation"><a href="https://www.ntu.edu.tw/" target="blank"><sup>1</sup>National Taiwan University</a></div>
</h5>
<h5 class="header col l2 m4 s12">
<div class="affiliation"><a href="https://ai.google/research" target="blank"><sup>2</sup>Google</a></div>
</h5>
<h5 class="header col l2 m4 s12">
<div class="affiliation"><a href="https://vt.edu/" target="blank"><sup>3</sup>Virginia Tech</a></div>
</h5>
<h5 class="header col l2 m4 s12">
<div class="affiliation"><a href="https://www.ucmerced.edu/" target="blank"><sup>4</sup>University of California at Merced</a></div>
</h5>
<h5 class="header col l2 m4 s12">
<div class="affiliation"><a href="https://www.mediatek.tw/" target="blank"><sup>5</sup>MediaTek Inc.</a></div>
</h5>
</div>
</div>
</div>
<div class="container">
<div class="section">
<!-- Icon Section -->
<div class="row center">
<div class="col l12 m12 s12">
<img class="responsive-img" src="website/teaser.png">
</div>
</div>
</div>
<br>
<div class="row section scrollspy" id="abstract">
<div class="title">Abstract</div>
Recovering a high dynamic range (HDR) image from a single low dynamic range (LDR) input image is challenging due to missing details in under-/over-exposed regions caused by quantization and saturation of camera sensors.
In contrast to existing learning-based methods, our core idea is to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization.
We then propose to learn three specialized CNNs to reverse these steps.
By decomposing the problem into specific sub-tasks, we impose effective physical constraints to facilitate the training of individual sub-networks.
Finally, we jointly fine-tune the entire model end-to-end to reduce error accumulation.
With extensive quantitative and qualitative experiments on diverse image datasets, we demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
</div>
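<div class="row">
<div class="subtitle">Forward model at a glance</div>
<p>To make the three modeled steps concrete, the short sketch below walks an HDR irradiance map through clipping, a camera response function, and quantization. It is an illustrative toy model, not the released code: the function name is made up, and a simple gamma curve stands in for a learned camera response function.</p>
<pre>
import numpy as np

def hdr_to_ldr(hdr, exposure=1.0, gamma=1.0 / 2.2, bits=8):
    """Toy HDR-to-LDR formation model: clip, apply a CRF, then quantize."""
    # (1) dynamic range clipping: scale by exposure and clip to [0, 1]
    clipped = np.clip(hdr * exposure, 0.0, 1.0)
    # (2) non-linear mapping by a camera response function (a gamma curve here)
    mapped = clipped ** gamma
    # (3) quantization to discrete 8-bit codes
    levels = 2 ** bits - 1
    return np.round(mapped * levels) / levels
</pre>
<p>The three specialized CNNs are trained to invert these steps in reverse order: dequantize, undo the non-linear response, and hallucinate the content lost to clipping.</p>
</div>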
<div class="row section scrollspy" id="paper">
<div class="title">Papers</div>
<br>
<div class="row">
<div class="col m6 s12 center">
<a href="https://arxiv.org/abs/2004.01179" target="_blank">
<img src="website/images/icon_pdf.png">
</a>
<br>
<a href="https://arxiv.org/abs/2004.01179" target="_blank">CVPR 2020 (arXiv)</a>
</div>
<div class="col m6 s12 center">
<a href="https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Liu_Single-Image_HDR_Reconstruction_CVPR_2020_supplemental.pdf" target="_blank">
<img src="website/images/icon_pdf.png">
</a>
<br>
<a href="https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Liu_Single-Image_HDR_Reconstruction_CVPR_2020_supplemental.pdf" target="_blank">Supplementary Material</a>
</div>
</div>
</div>
<div class="row">
<div class="subtitle">Citation</div>
<p>Yu-Lun Liu, Wei-Sheng Lai, Yu-Sheng Chen, Yi-Lung Kao, Ming-Hsuan Yang, Yung-Yu Chuang, and Jia-Bin Huang, "Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline", in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.</p>
<br>
<div class="subtitle">Bibtex</div>
<pre>
@inproceedings{liu2020single,
author = {Liu, Yu-Lun and Lai, Wei-Sheng and Chen, Yu-Sheng and Kao, Yi-Lung and Yang, Ming-Hsuan and Chuang, Yung-Yu and Huang, Jia-Bin},
title = {Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
year = {2020}
}
</pre>
</div>
<div class="section row scrollspy" id="download">
<div class="title">Download</div>
<div class="row">
<div class="col m4 s12 center">
<a href="https://github.com/alex04072000/SingleHDR" target="_blank">
<img src="website/images/github.png">
</a>
<br>
<a href="https://github.com/alex04072000/SingleHDR" target="_blank">Code</a>
</div>
<div class="col m4 s12 center">
<a href="https://drive.google.com/file/d/1muy49Pd0c7ZkxyxoxV7vIRvPv6kJdPR2/view?usp=sharing" target="_blank">
<img src="website/images/icon_zip.png">
</a>
<br>
<a href="https://drive.google.com/file/d/1muy49Pd0c7ZkxyxoxV7vIRvPv6kJdPR2/view?usp=sharing" target="_blank">Training data (14 GB)</a>
</div>
<div class="col m4 s12 center">
<a href="https://drive.google.com/file/d/1m4CVsxAy4sXD1IlhPi1Tfi9pqiuhly3L/view?usp=sharing" target="_blank">
<img src="website/images/icon_zip.png">
</a>
<br>
<a href="https://drive.google.com/file/d/1m4CVsxAy4sXD1IlhPi1Tfi9pqiuhly3L/view?usp=sharing" target="_blank">Testing data (HDR-Synth) (87 GB)</a>
</div>
</div>
<div class="row">
<div class="col m4 s12 center">
<a href="https://drive.google.com/file/d/1aHm55zFrPoRu2KwhSnzqbsUNsapStrlC/view?usp=sharing" target="_blank">
<img src="website/images/icon_zip.png">
</a>
<br>
<a href="https://drive.google.com/file/d/1aHm55zFrPoRu2KwhSnzqbsUNsapStrlC/view?usp=sharing" target="_blank">Testing data (HDR-Real) (17 GB)</a>
</div>
<div class="col m4 s12 center">
<a href="https://drive.google.com/file/d/1rl3BSQ0Oyx-qCfpVCH4kVxaidrR01EZ7/view?usp=sharing" target="_blank">
<img src="website/images/icon_zip.png">
</a>
<br>
<a href="https://drive.google.com/file/d/1rl3BSQ0Oyx-qCfpVCH4kVxaidrR01EZ7/view?usp=sharing" target="_blank">Testing data (RAISE) (179 GB)</a>
</div>
<div class="col m4 s12 center">
<a href="https://drive.google.com/file/d/1l-Ix-7qQQqRjAOU6ow9UK-glooaWJ5_Y/view?usp=sharing" target="_blank">
<img src="website/images/icon_zip.png">
</a>
<br>
<a href="https://drive.google.com/file/d/1l-Ix-7qQQqRjAOU6ow9UK-glooaWJ5_Y/view?usp=sharing" target="_blank">Testing data (HDR-Eye) (0.52 GB)</a>
</div>
</div>
</div>
<div class="section row scrollspy" id="results">
<div class="title">Results</div>
<div class="row center">
<div class="subtitle"><a href="website/HDR_HTML_CameraReady/result.html">Results on the HDR-Synth, HDR-Real, RAISE, HDR-Eye dataset, and examples from prior work and DxOMark</a></div>
<a class="summary" href="website/HDR_HTML_CameraReady/result.html"><div class="col s12 summary-transfer"></div></a>
</div>
<br>
</div>
<div class="row section scrollspy" id="reference">
<div class="title">References</div>
<ul>
<li>•
<a href="https://github.com/gabrieleilertsen/hdrcnn" target="blank">HDR image reconstruction from a single exposure using deep CNNs</a>, SIGGRAPH Asia, 2017.
</li>
<li>•
<a href="http://www.cgg.cs.tsukuba.ac.jp/~endo/projects/DrTMO/" target="blank">Deep reverse tone mapping</a>, SIGGRAPH Asia, 2017.
</li>
<li>•
<a href="https://github.com/dmarnerides/hdr-expandnet" target="blank">ExpandNet: A deep convolutional neural network for high dynamic range expansion from low dynamic range content</a>, Eurographics, 2018.
</li>
</ul>
</div>
</div>
<footer class="page-footer grey lighten-3">
<!--
<div class="row">
<div class="col l4 offset-l4 s12">
<script type="text/javascript" id="clustrmaps" src="//cdn.clustrmaps.com/map_v2.js?cl=ffffff&w=330&t=tt&d=fhGpuMgoLYXytRWhcIV-396rCSmJYtpAJdk3tTNbAnY"></script>
</div>
</div>
-->
<div class="footer-copyright center black-text">
Copyright © Jason Lai 2017
</div>
</footer>
<!-- Scripts-->
<script src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="js/materialize.js"></script>
<script src="js/init.js"></script>
</body>
</html>