
Commit

update
bestonebyone committed Sep 29, 2024
1 parent 468dc5d commit 07ba478
Showing 2 changed files with 82 additions and 56 deletions.
9 changes: 9 additions & 0 deletions index.html
@@ -74,6 +74,15 @@
</div>
<br>
</p>

<p style="align-items: center;text-align: center; font-size:20px;">
<b>You are welcome to attend our workshop on-site or online on Sep 30 (2 pm-6 pm), 2024.</b><br>
<b>Location:</b> Room Suite 8, MiCo Milano<br>
<b>Online Zoom Link:</b> <a href="https://nus-sg.zoom.us/j/9015323166?omn=82071666030">https://nus-sg.zoom.us/j/9015323166?omn=82071666030</a><br>
Posters: 11 inside (W1 - W7, W20 - W24, others), 11 outside (W8 - W19). <br>
Poster numbers can be found on this website; organizers will be on-site to help coordinate.
</p>


<h1 id="overview">Overview </h1>
<font size="5">
Welcome to our HANDS@ECCV24.
129 changes: 73 additions & 56 deletions workshop2024.html
@@ -74,6 +74,15 @@
</div>
<br>
</p>

<p style="align-items: center;text-align: center; font-size:20px;">
<b>You are welcome to attend our workshop on-site or online on Sep 30 (2 pm-6 pm), 2024.</b><br>
<b>Location:</b> Room Suite 8, MiCo Milano<br>
<b>Online Zoom Link:</b> <a href="https://nus-sg.zoom.us/j/9015323166?omn=82071666030">https://nus-sg.zoom.us/j/9015323166?omn=82071666030</a><br>
Posters: 11 inside (W1 - W7, W20 - W24, others), 11 outside (W8 - W19). <br>
Poster numbers can be found on this website; organizers will be on-site to help coordinate.
</p>


<h1 id="overview">Overview </h1>
<font size="5">
Welcome to our HANDS@ECCV24.
@@ -98,7 +107,8 @@ <h1 id="overview">Overview </h1>
<h1 id="schedule">Schedule (Italy Time)</h1>
<p style="align-items: center;text-align: center;"><b>September 30th (2 pm-6 pm), 2024</b></br>
<b>Room Suite 8, MiCo Milano</b></br>
<b>Poster Boards Position: inside: 11, outside: 11</b></p></br>
<b>Poster Boards Position: inside: 11, outside: 11</b></br>
<b>Online Zoom Link: </b><a href="https://nus-sg.zoom.us/j/9015323166?omn=82071666030">https://nus-sg.zoom.us/j/9015323166?omn=82071666030</a></br></p>

<table class="dataintable">
<tbody>
@@ -185,49 +195,49 @@ <h1 id="papers">Accepted Papers & Extended Abstracts</h1>
<!-- <ul> -->
<h2>Full-length Papers</h2>
<ul>
<li> AirLetters: An Open Video Dataset of Characters Drawn in the Air <br>
<li> <b>W01</b> AirLetters: An Open Video Dataset of Characters Drawn in the Air <br>
<i>Rishit Dagli, Guillaume Berger, Joanna Materzynska, Ingo Bax, Roland Memisevic</i> <br>
<!-- [pdf] -->
</li>
<a href="files/2024/airletters.pdf">[pdf]</a>
</ul>
<ul>
<li> RegionGrasp: A Novel Task for Contact Region Controllable Hand Grasp Generation <br>
<li> <b>W02</b> RegionGrasp: A Novel Task for Contact Region Controllable Hand Grasp Generation <br>
<i>Yilin Wang, Chuan Guo, Li Cheng, Hai Jiang</i> <br>
<!-- [pdf] -->
<a href="files/2024/regiongrasp.pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> Generative Hierarchical Temporal Transformer for Hand Pose and Action Modeling <br>
<li> <b>W03</b> Generative Hierarchical Temporal Transformer for Hand Pose and Action Modeling <br>
<i>Yilin Wen, Hao Pan, Takehiko Ohkawa, Lei Yang, Jia Pan, Yoichi Sato, Taku Komura, Wenping Wang</i> <br>
<!-- [pdf] -->
<a href="files/2024/adaptive.pdf">[pdf]</a>
<a href="files/2024/GHTT_ECCVW_final (1).pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> Adaptive Multi-Modal Control of Digital Human Hand Synthesis using a Region-Aware Cycle Loss <br>
<li> <b>W04</b> Adaptive Multi-Modal Control of Digital Human Hand Synthesis using a Region-Aware Cycle Loss <br>
<i>Qifan Fu, Xiaohang Yang, Muhammad Asad, Changjae Oh, Shanxin Yuan, Gregory Slabaugh</i> <br>
<!-- [pdf] -->
<a href="files/2024/IHPT__ECCVW_2024_.pdf">[pdf]</a>
<a href="files/2024/adaptive.pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> Conditional Hand Image Generation using Latent Space Supervision in Random Variable Variational Autoencoders <br>
<li> <b>W05</b> Conditional Hand Image Generation using Latent Space Supervision in Random Variable Variational Autoencoders <br>
<i>Vassilis Nicodemou, Iason Oikonomidis, Giorgos Karvounas, Antonis Argyros</i> <br>
<!-- [pdf] -->
<a href="files/2024/ECCVW_Hands_2024_CG_SRV_VAE-4.pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> ChildPlay-Hand: A Dataset of Hand Manipulations in the Wild <br>
<i>Arya Farkhondeh, Samy Tafasca, Jean-Marc ODOBEZ</i> <br>
<li> <b>W06</b> ChildPlay-Hand: A Dataset of Hand Manipulations in the Wild <br>
<i>Arya Farkhondeh*, Samy Tafasca*, Jean-Marc Odobez</i> <br>
<!-- [pdf] -->
<a href="files/2024/childplay-hand.pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> EMAG: Ego-motion Aware and Generalizable 2D Hand Forecasting from Egocentric Videos <br>
<li> <b>W07</b> EMAG: Ego-motion Aware and Generalizable 2D Hand Forecasting from Egocentric Videos <br>
<i>Masashi Hatano, Ryo Hachiuma, Hideo Saito</i> <br>
<!-- [pdf] -->
<a href="files/2024/emag.pdf">[pdf]</a>
@@ -237,23 +247,23 @@ <h2>Full-length Papers</h2>

<h2>Extended Abstracts</h2>
<ul>
<li> AFF-ttention! Affordances and Attention models for Short-Term Object Interaction Anticipation <br>
<li> <b>W08</b> AFF-ttention! Affordances and Attention models for Short-Term Object Interaction Anticipation <br>
<i>Lorenzo Mur-Labadia, Ruben Martinez-Cantin, Jose J Guerrero, Giovanni Maria Farinella, Antonino Furnari</i> <br>
<!-- [pdf] -->
<a href="files/2024/affordances.pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> Diffusion-based Interacting Hand Pose Transfer <br>
<i>Junho Park,
Yeieun Hwang,
Suk-Ju Kang</i> <br>
<li> <b>W09</b> Diffusion-based Interacting Hand Pose Transfer <br>
<i>Junho Park*,
Yeieun Hwang*,
Suk-Ju Kang#</i> <br>
<!-- [pdf] -->
<a href="files/2024/IHPT__ECCVW_2024_.pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> Are Synthetic Data Useful for Egocentric Hand-Object Interaction Detection? <br>
<li> <b>W10</b> Are Synthetic Data Useful for Egocentric Hand-Object Interaction Detection? <br>
<i>Rosario Leonardi,
Antonino Furnari,
Francesco Ragusa,
@@ -263,7 +273,7 @@ <h2>Extended Abstracts</h2>
</li>
</ul>
<ul>
<li> Parameterized Quasi-Physical Simulators for Dexterous Manipulations Transfer <br>
<li> <b>W11</b> Parameterized Quasi-Physical Simulators for Dexterous Manipulations Transfer <br>
<i>Xueyi Liu,
Kangbo Lyu,
Jieqiong Zhang,
@@ -274,128 +284,135 @@ <h2>Extended Abstracts</h2>
</li>
</ul>
<ul>
<li> Pre-Training for 3D Hand Pose Estimation with Contrastive Learning on Large-Scale Hand Images in the Wild <br>
<i>Nie Lin,
Takehiko Ohkawa,
<li> <b>W12</b> Pre-Training for 3D Hand Pose Estimation with Contrastive Learning on Large-Scale Hand Images in the Wild <br>
<i>Nie Lin*,
Takehiko Ohkawa*,
Mingfang Zhang,
Yifei Huang,
Ryosuke Furuta,
Yoichi Sato</i> <br>
<!-- add pdf from files/2024 -->
<a href="files/2024/Nie_Lin_Extended_Abstracts_HANDS2024.pdf">[pdf]</a>
<a href="files/2024/Nie_Lin_HANDS@Workshop_ECCV24_HandCLR_Camera-ready_Submission.pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> Task-Oriented Human Grasp Synthesis via Context- and Task-Aware Diffusers <br>
<li> <b>W13</b> Task-Oriented Human Grasp Synthesis via Context- and Task-Aware Diffusers <br>
<i>An-Lun Liu, Yu-Wei Chao, Yi-Ting Chen</i> <br>
<!-- [pdf] -->
<a href="files/2024/ECCV24 Extended Abstracts Submission - Task-Oriented Human Grasp Synthesis via Context- and Task-Aware Diffusers.pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> Action Scene Graphs for Long-Form Understanding of Egocentric Videos <br>
<li> <b>W14</b> Action Scene Graphs for Long-Form Understanding of Egocentric Videos <br>
<i>Ivan Rodin*, Antonino Furnari*, Kyle Min*, Subarna Tripathi, Giovanni Maria Farinella</i> <br>
<!-- [pdf] -->
<a href="files/2024/EASG___HANDS.pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> Get a Grip: Reconstructing Hand-Object Stable Grasps in Egocentric Videos <br>
<li> <b>W15</b> Get a Grip: Reconstructing Hand-Object Stable Grasps in Egocentric Videos <br>
<i>Zhifan Zhu, Dima Damen</i> <br>
<!-- [pdf] -->
<a href="files/2024/HANDS_2024_Workshop__Get_a_Grip.pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> Self-Supervised Learning of Deviation in Latent Representation for Co-speech Gesture Video Generation <br>
<li> <b>W16</b> Self-Supervised Learning of Deviation in Latent Representation for Co-speech Gesture Video Generation <br>
<i>Huan Yang, Jiahui Chen, Chaofan Ding, Runhua Shi, Siyu Xiong, Qingqi Hong, Xiaoqi Mo, Xinhan Di</i> <br>
<!-- [pdf] -->
<a href="files/2024/ssl.pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> OCC-MLLM-Alpha:Empowering Multi-modal Large Language Model for the Understanding of Occluded Objects with Self-Supervised Test-Time Learning <br>
<li> <b>W17</b> OCC-MLLM-Alpha: Empowering Multi-modal Large Language Model for the Understanding of Occluded Objects with Self-Supervised Test-Time Learning <br>
<i>Shuxin Yang, Xinhan Di</i> <br>
<!-- [pdf] -->
<a href="files/2024/OCC_MLLM_Alpha_Empowering_Multimodal_Large_Language_Model_For_the_Understanding_of_Occluded_Objects.pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> Dyn-HaMR: Recovering 4D Interacting Hand Motion from a Dynamic Camera <br>
<li> <b>W18</b> Dyn-HaMR: Recovering 4D Interacting Hand Motion from a Dynamic Camera <br>
<i>Zhengdi Yu, Alara Dirik, Stefanos Zafeiriou, Tolga Birdal</i> <br>
<!-- [pdf] -->
<a href="files/2024/Dyn_HaMR_ECCVW_2024_extended_abstract.pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> Learning Dexterous Object Manipulation with a Robotic Hand via
<li> <b>W19</b> Learning Dexterous Object Manipulation with a Robotic Hand via
Goal-Conditioned Visual Reinforcement Learning Using Limited Demonstrations <br>
<i>Samyeul Noh, Hyun Myung</i> <br>
<!-- [pdf] -->
<a href="files/2024/learning.pdf">[pdf]</a>
</li>
</ul>

<h2>Technical Reports</h2>
<h2>Invited Posters</h2>
<ul>
<li> 3DGS-based Bimanual Category-agnostic Interaction Reconstruction <br>
<i>Jeongwan On, Kyeonghwan Gwak, Gunyoung Kang, Hyein Hwang, Soohyun Hwang, Junuk Cha, Jaewook Han, Seungryul Baek</i> <br>
<li> <b>W20</b> AttentionHand: Text-driven Controllable Hand Image Generation for 3D Hand Reconstruction in the Wild <br>
<i>Junho Park*, Kyeongbo Kong*, Suk-Ju Kang#</i> <br>
<!-- [pdf] -->
<a href="files/2024/UVHANDS.pdf">[pdf]</a>
<a href="files/2024/attentionhand.pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> 2nd Place Solution Technical Report for Hands’24 ARCTIC Challenge from Team ACE <br>
<i>Congsheng Xu*, Yitian Liu*, Yi Cui, Jinfan Liu, Yichao Yan, Weiming Zhao, Yunhui Liu, Xingdong Sheng</i> <br>
<li> <b>W21</b> HandDAGT: A Denoising Adaptive Graph Transformer for 3D Hand Pose Estimation <br>
<i>Wencan Cheng, Eunji Kim, Jong Hwan Ko</i> <br>
<!-- [pdf] -->
<a href="files/2024/ACE.pdf">[pdf]</a>
<a href="files/2024/ECCV24_poster_HandDAGT.pdf">[poster]</a>
</li>
</ul>
<ul>
<li> Solution of Multiview Egocentric Hand Tracking Challenge ECCV2024 <br>
<i>Minqiang Zou, Zhi Lv, Riqiang Jin, Tian Zhan, Mochen Yu, Yao Tang, Jiajun Liang#</i> <br>
<li> <b>W22</b> On the Utility of 3D Hand Poses for Action Recognition <br>
<i>Md Salman Shamil, Dibyadip Chatterjee, Fadime Sener, Shugao Ma, Angela Yao</i> <br>
<!-- [pdf] -->
<a href="files/2024/JVHANDS.pdf">[pdf]</a>
<a href="https://arxiv.org/pdf/2403.09805">[pdf]</a>
</li>
</ul>
<ul>
<li>Technical report of HCB team for Multiview Egocentric Hand Tracking Challenge on HANDS 2024 Challenge <br>
<i>Haohong Kuang, Yang Xiao#, Changlong Jiang, Jinghong Zheng, Hang Xu, Ran Wang, Zhiguo Cao, Min Du, Zhiwen Fang, Joey Tianyi Zhou</i> <br>
<li> <b>W23</b> ActionVOS: Actions as Prompts for Video Object Segmentation <br>
<i>Liangyang Ouyang, Ruicong Liu, Yifei Huang, Ryosuke Furuta, Yoichi Sato</i> <br>
<!-- [pdf] -->
<a href="files/2024/HCB.pdf">[pdf]</a>
<a href="files/2024/ActionVOS.pdf">[poster]</a>
</li>
</ul>
<ul>
<li> <b>W24</b> GraspXL: Generating Grasping Motions for Diverse Objects at Scale <br>
<i>Hui Zhang, Sammy Christen, Zicong Fan, Otmar Hilliges, Jie Song</i> <br>
<!-- [pdf] -->
<a href="files/2024/GraspXL.pdf">[poster]</a>
</li>
</ul>

<h2>Invited Posters</h2>
<h2>Technical Reports</h2>
<ul>
<li> AttentionHand: Text-driven Controllable Hand Image Generation for 3D Hand Reconstruction in the Wild <br>
<i>Junho Park, Kyeongbo Kong, Suk-Ju Kang</i> <br>
<li> 3DGS-based Bimanual Category-agnostic Interaction Reconstruction <br>
<i>Jeongwan On, Kyeonghwan Gwak, Gunyoung Kang, Hyein Hwang, Soohyun Hwang, Junuk Cha, Jaewook Han, Seungryul Baek</i> <br>
<!-- [pdf] -->
<a href="files/2024/attentionhand.pdf">[pdf]</a>
<a href="files/2024/UVHANDS.pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> HandDAGT : A Denoising Adaptive Graph Transformer for 3D Hand Pose Estimation <br>
<i>Wencan Cheng, Eunji Kim, Jong Hwan Ko</i> <br>
<li> 2nd Place Solution Technical Report for Hands’24 ARCTIC Challenge from Team ACE <br>
<i>Congsheng Xu*, Yitian Liu*, Yi Cui, Jinfan Liu, Yichao Yan, Weiming Zhao, Yunhui Liu, Xingdong Sheng</i> <br>
<!-- [pdf] -->
<a href="files/2024/ECCV24_poster_HandDAGT.pdf">[poster]</a>
<a href="files/2024/ACE.pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> On the Utility of 3D Hand Poses for Action Recognition <br>
<i>Md Salman Shamil, Dibyadip Chatterjee, Fadime Sener, Shugao Ma, Angela Yao</i> <br>
<li> Solution of Multiview Egocentric Hand Tracking Challenge ECCV2024 <br>
<i>Minqiang Zou, Zhi Lv, Riqiang Jin, Tian Zhan, Mochen Yu, Yao Tang, Jiajun Liang#</i> <br>
<!-- [pdf] -->
<a href="https://arxiv.org/pdf/2403.09805">[pdf]</a>
<a href="files/2024/JVHANDS.pdf">[pdf]</a>
</li>
</ul>
<ul>
<li> ActionVOS: Actions as Prompts for Video Object Segmentation <br>
<i>Liangyang Ouyang, Ruicong Liu, Yifei Huang, Ryosuke Furuta, Yoichi Sato</i> <br>
<li>Technical report of HCB team for Multiview Egocentric Hand Tracking Challenge on HANDS 2024 Challenge <br>
<i>Haohong Kuang, Yang Xiao#, Changlong Jiang, Jinghong Zheng, Hang Xu, Ran Wang, Zhiguo Cao, Min Du, Zhiwen Fang, Joey Tianyi Zhou</i> <br>
<!-- [pdf] -->
<a href="files/2024/ActionVOS.pdf">[poster]</a>
<a href="files/2024/HCB.pdf">[pdf]</a>
</li>
</ul>

<!-- <h2>Important Dates (Deadline has been extended)</h2>
<table class="dataintable">
