<!DOCTYPE HTML>
<html lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Hyun Seok Seong</title>
<meta name="author" content="Hyun Seok Seong">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" type="text/css" href="stylesheet.css">
<link rel="icon" type="image/png" href="images/skku_icon.jpeg">
<script type="text/javascript">
// Toggle the visibility of the element with the given id.
function display(id) {
var target = document.getElementById(id);
if (target.style.display == "none") {
target.style.display = "";
} else {
target.style.display = "none";
}
}
</script>
</head>
<body>
<table style="width:100%;max-width:800px;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:0px">
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:2.5%;width:63%;vertical-align:middle">
<p style="text-align:center">
<name>Hyun Seok Seong</name>
</p>
<p> I am a Ph.D. candidate in the
<a href="https://sites.google.com/site/vclabskku/">Visual Computing Lab (VCLab)</a> at Sungkyunkwan University, supervised by Prof.
<a href="https://sites.google.com/site/jaepilheo">Jae-Pil Heo</a>.
<!-- <a href="https://www.hku.hk/">The University of Hong Kong</a>, fortunately supervised by Prof. <a href="https://hszhao.github.io/">Hengshuang Zhao</a>. -->
</p>
<p> I received my Bachelor's degree from Sungkyunkwan University.
</p>
<p>
My research interests span machine learning and computer vision,
with a particular focus on image segmentation and grounding with limited labeled data.
Recently, I have been working on unsupervised semantic segmentation and weakly-supervised affordance grounding.
</p>
<p style="text-align:center">
<a href="mailto:[email protected]">Email</a>  / 
<a href="https://scholar.google.com/citations?user=ZGbTICYAAAAJ&hl">Google Scholar</a>  / 
<a href="https://github.com/hynnsk">Github</a>  / 
<a href="https://www.linkedin.com/in/hyun-seok-seong-7b4a70242">LinkedIn</a>  / 
<a href="./images/CV_of_Hyun_Seok_Seong.pdf">CV</a>
</p>
</td>
<td style="padding:2.5%;width:40%;max-width:40%">
<a href="images/hyunseok_3_4.png"><img style="width:80%;max-width:80%" alt="profile photo" src="images/hyunseok_3_4.png" class="hoverZoomLink"></a>
</td>
</tr>
</tbody></table>
<!------------------------------------------------------------------------------------------------------------------>
<!------------------------------------------------------------------------------------------------------------------>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:100%;vertical-align:middle">
<heading>Publications</heading>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:30%;vertical-align:middle">
<img src='images/FCP.png' alt="paper thumbnail" width="180" style="border-style: none">
</td>
<td style="padding:20px;width:70%;vertical-align:middle">
<a >
<papertitle>Foreground-Covering Prototype Generation and Matching for SAM-Aided Few-Shot Segmentation</papertitle>
</a>
<br>
Suho Park*, SuBeen Lee*, <strong>Hyun Seok Seong</strong>, Jaejoon Yoo, and Jae-Pil Heo (*: equal contribution)
<br>
<em>AAAI Conference on Artificial Intelligence (<b>AAAI</b>)</em>, 2025
<br>
<a >arXiv</a> /
<a >Code</a>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;vertical-align:middle">
<img src='images/ppap.png' alt="paper thumbnail" width="180" style="border-style: none">
</td>
<td style="padding:20px;width:70%;vertical-align:middle">
<a href="https://arxiv.org/abs/2407.12463">
<papertitle>Progressive Proxy Anchor Propagation for Unsupervised Semantic Segmentation</papertitle>
</a>
<br>
<strong>Hyun Seok Seong</strong>, WonJun Moon, SuBeen Lee, and Jae-Pil Heo
<br>
<em>European Conference on Computer Vision (<b>ECCV</b>)</em>, 2024
<br>
<a href="https://arxiv.org/abs/2407.12463">Arxiv</a> /
<a href="https://github.com/hynnsk/PPAP">Code</a>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;vertical-align:middle">
<img src='images/tdmiam.png' alt="paper thumbnail" width="180" style="border-style: none">
</td>
<td style="padding:20px;width:70%;vertical-align:middle">
<a href="https://arxiv.org/abs/2308.00093">
<papertitle>Task-Oriented Channel Attention for Fine-Grained Few-Shot Classification</papertitle>
</a>
<br>
SuBeen Lee, WonJun Moon, <strong>Hyun Seok Seong</strong>, and Jae-Pil Heo
<br>
<em>IEEE Transactions on Pattern Analysis and Machine Intelligence (<b>TPAMI</b>)</em>, 2024
<br>
<a href="https://arxiv.org/abs/2308.00093">Arxiv</a> /
<a href="https://github.com/leesb7426/CVPR2022-Task-Discrepancy-Maximization-for-Fine-grained-Few-Shot-Classification">Code</a>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;vertical-align:middle">
<img src='images/tbs.png' alt="paper thumbnail" width="180" style="border-style: none">
</td>
<td style="padding:20px;width:70%;vertical-align:middle">
<a href="https://arxiv.org/abs/2312.15894">
<papertitle>Task-disruptive Background Suppression for Few-Shot Segmentation</papertitle>
</a>
<br>
Suho Park, SuBeen Lee, Sangeek Hyun, <strong>Hyun Seok Seong</strong>, and Jae-Pil Heo
<br>
<em>AAAI Conference on Artificial Intelligence (<b>AAAI</b>)</em>, 2024
<br>
<a href="https://arxiv.org/abs/2312.15894">Arxiv</a> /
<a href="https://ojs.aaai.org/index.php/AAAI/article/view/28242/28479">Paper</a> /
<a href="https://github.com/suhopark0706/tbsnet">Code</a>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;vertical-align:middle">
<img src='images/hp.png' alt="paper thumbnail" width="180" style="border-style: none">
</td>
<td style="padding:20px;width:70%;vertical-align:middle">
<a href="https://arxiv.org/abs/2303.15014">
<papertitle>Leveraging Hidden Positives for Unsupervised Semantic Segmentation</papertitle>
</a>
<br>
<strong>Hyun Seok Seong</strong>, WonJun Moon, SuBeen Lee, and Jae-Pil Heo
<br>
<em>Computer Vision and Pattern Recognition (<b>CVPR</b>)</em>, 2023
<br>
<a href="https://arxiv.org/abs/2303.15014">Arxiv</a> /
<a href="https://openaccess.thecvf.com/content/CVPR2023/papers/Seong_Leveraging_Hidden_Positives_for_Unsupervised_Semantic_Segmentation_CVPR_2023_paper.pdf">Paper</a> /
<a href="https://github.com/hynnsk/HP">Code</a>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;vertical-align:middle">
<img src='images/move.png' alt="paper thumbnail" width="180" style="border-style: none">
</td>
<td style="padding:20px;width:70%;vertical-align:middle">
<a href="https://arxiv.org/abs/2211.13471">
<papertitle>Minority-Oriented Vicinity Expansion with Attentive Aggregation for Video Long-Tailed Recognition</papertitle>
</a>
<br>
WonJun Moon, <strong>Hyun Seok Seong</strong>, and Jae-Pil Heo
<br>
<em>AAAI Conference on Artificial Intelligence (<b>AAAI</b>)</em>, 2023 <b>(Oral presentation)</b>
<br>
<a href="https://arxiv.org/abs/2211.13471">Arxiv</a> /
<a href="https://ojs.aaai.org/index.php/AAAI/article/view/25284/25056">Paper</a> /
<a href="https://github.com/wjun0830/MOVE">Code</a>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;vertical-align:middle">
<img src='images/tcx.png' alt="paper thumbnail" width="180" style="border-style: none">
</td>
<td style="padding:20px;width:70%;vertical-align:middle">
<a href="https://www.sciencedirect.com/science/article/pii/S0167865523002787">
<papertitle>TCX: Texture and channel swappings for domain generalization</papertitle>
</a>
<br>
Jaehyun Choi, <strong>Hyun Seok Seong</strong>, Sanguk Park, and Jae-Pil Heo
<br>
<em>Pattern Recognition Letters</em>, 2023
<br>
<a href="https://www.sciencedirect.com/science/article/pii/S0167865523002787">Paper</a>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;vertical-align:middle">
<img src='images/dias.png' alt="paper thumbnail" width="180" style="border-style: none">
</td>
<td style="padding:20px;width:70%;vertical-align:middle">
<a href="https://arxiv.org/abs/2207.10024">
<papertitle>Difficulty-Aware Simulator for Open Set Recognition</papertitle>
</a>
<br>
WonJun Moon, Junho Park, <strong>Hyun Seok Seong</strong>, Cheol-Ho Cho, and Jae-Pil Heo
<br>
<em>European Conference on Computer Vision (<b>ECCV</b>)</em>, 2022
<br>
<a href="https://arxiv.org/abs/2207.10024">Arxiv</a> /
<a href="https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136850360.pdf">Paper</a> /
<a href="https://github.com/wjun0830/Difficulty-Aware-Simulator">Code</a>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;vertical-align:middle">
<img src='images/pge.png' alt="paper thumbnail" width="180" style="border-style: none">
</td>
<td style="padding:20px;width:70%;vertical-align:middle">
<a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9967985&tag=1">
<papertitle>Pivot-Guided Embedding for Domain Generalization</papertitle>
</a>
<br>
<strong>Hyun Seok Seong</strong>, Jaehyun Choi, Woojin Jeong, and Jae-Pil Heo
<br>
<em>IEEE Access</em>, 2022
<br>
<a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9967985&tag=1">Paper</a> /
<a href="https://github.com/hynnsk/PGE_DG">Code</a>
</td>
</tr>
<!------------------------------------------------------------------------------------------------------------------>
<!------------------------------------------------------------------------------------------------------------------>
</tbody></table>
<table width="100%" align="center" border="0" cellspacing="0" cellpadding="20"><tbody>
<tr>
<td style="padding:20px;width:100%;vertical-align:middle">
<heading>Education</heading>
<p>
<li><b>Sungkyunkwan University (SKKU), South Korea</b></li>
        Integrated M.S. and Ph.D., Artificial Intelligence <br>
        Sep. 2019 - present
</p>
<p>
<li><b>Sungkyunkwan University (SKKU), South Korea</b></li>
        B.S., Electronic and Electrical Engineering <br>
        Mar. 2013 - Feb. 2019
</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:0px">
<br>
<p style="text-align:right;font-size:small;">
<br>
<a href="https://jonbarron.info/">Website Template</a>
</p>
</td>
</tr>
</tbody></table>
</td>
</tr>
</tbody></table>
</body>
</html>