Final Release for AutomotiveUI'19 WIP "First Attempt to Build Realistic Driving Scenes using Video-to-video Synthesis in OpenDS Framework"
We use OpenDS to simulate driving scenarios. However, due to the unrealistic textures and the limitations of the 3D simulation, the rendered scenarios do not look very real, which affects the user experience to some extent. To enhance the user experience and improve the accuracy of experimental data, we want to use Vid2Vid to improve the realism of the scenarios.
This project aims to enhance the realism of the rendered images by combining the OpenDS framework with Vid2Vid. After a series of processing steps, Vid2Vid turns the unrealistic scenarios into more realistic ones, so users see more convincing scenes while using OpenDS to simulate driving. We uploaded the code files for processing images and the version of Vid2Vid used in this project.
In short, the purpose of this project is to use OpenDS to simulate driving scenes and Vid2Vid to provide a more realistic experience, improving the accuracy of the research results.
- Distinguish objects in the scenario by assigning each a texture of a specific color.
- Export the recorded video from OpenDS and process it with the program in the Image_convertr file (see the sketch after this list).
- Import the processed video into Vid2Vid.
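The processing step converts the flat-colored OpenDS recording into per-frame label maps that Vid2Vid can consume. The snippet below is only a minimal sketch of that idea, assuming hypothetical texture colors, label colors, and file names; the actual mapping is in the Image_convertr code.

```python
# Minimal sketch of the frame-processing step (color values and file names
# below are illustrative assumptions, not the project's real mapping).
import os
import cv2
import numpy as np

# Map each flat texture color used in the OpenDS scene (BGR) to a semantic
# label color for Vid2Vid (BGR). These pairs are placeholders.
COLOR_MAP = {
    (60, 60, 60): (128, 64, 128),   # road texture  -> "road" label color
    (0, 200, 0):  (107, 142, 35),   # grass texture -> "vegetation"
    (200, 0, 0):  (0, 0, 142),      # car texture   -> "car"
}

def convert_video(video_path: str, out_dir: str, tolerance: int = 10) -> None:
    """Split the recorded OpenDS video into frames, recolor each frame into
    a label map, and write the results as numbered PNGs."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        label = np.zeros_like(frame)
        for src_bgr, dst_bgr in COLOR_MAP.items():
            # Select pixels whose color is close to the scene texture color.
            mask = np.all(np.abs(frame.astype(int) - src_bgr) <= tolerance, axis=-1)
            label[mask] = dst_bgr
        cv2.imwrite(os.path.join(out_dir, f"frame_{index:05d}.png"), label)
        index += 1
    cap.release()

if __name__ == "__main__":
    # Hypothetical input/output names; replace with the exported OpenDS video.
    convert_video("opends_recording.mp4", "vid2vid_input")
```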
Zilin Song, Shuolei Wang, Weikai Kong, mentored by Xiangjun Peng