Infrared color-night vision database
Welcome to the Infrared color-night vision database, part of the IRay database.
Time: 2021-12-10
Human vision is often adversely affected by complex environmental factors, especially in night-vision scenarios. Infrared cameras are therefore often used to enhance visibility by detecting infrared radiation in the surrounding environment, but the resulting infrared videos are undesirable because they lack detailed semantic information. In such cases, an effective video-to-video translation method from the infrared domain to the visible-light domain is strongly needed, one that overcomes the intrinsic gap between the infrared and visible fields.
To address this challenging problem, we propose an infrared-to-visible (I2V) video translation method, I2V-GAN, which generates fine-grained, spatially and temporally consistent visible-light videos from unpaired infrared videos. Technically, our model capitalizes on three types of constraints: 1) an adversarial constraint to generate synthetic frames that are similar to the real ones; 2) cyclic consistency with an introduced perceptual loss for effective content conversion as well as style preservation; and 3) similarity constraints across and within domains to enhance content and motion consistency in both spatial and temporal spaces at a fine-grained level.

Furthermore, the currently publicly available infrared and visible-light datasets are mainly intended for object detection or tracking, and some consist of discontinuous images that are unsuitable for video tasks. We therefore provide a new dataset for I2V video translation, named IRVI. Specifically, it contains 12 consecutive video clips of vehicle and monitoring scenes, and the infrared and visible-light videos together can be split into 24,352 frames. Comprehensive experiments validate that I2V-GAN is superior to the compared SOTA methods, translating I2V videos with higher fluency and finer semantic details.
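To illustrate how the three constraints above could combine into a single training objective, here is a minimal PyTorch sketch. It is a hypothetical composition for illustration only: the generator, discriminator, and feature-extractor callables, along with the weights lam_cyc, lam_perc, and lam_sim, are placeholders and not the published I2V-GAN implementation.

import torch
import torch.nn.functional as F

def i2v_total_loss(real_ir, G_iv, G_vi, D_vis, feat,
                   lam_cyc=10.0, lam_perc=1.0, lam_sim=1.0):
    # NOTE: all networks and weights here are illustrative placeholders.
    fake_vis = G_iv(real_ir)   # infrared -> visible translation
    rec_ir = G_vi(fake_vis)    # cycle back to the infrared domain

    # 1) adversarial constraint: the translated frame should fool the
    #    visible-domain discriminator (least-squares GAN form)
    pred = D_vis(fake_vis)
    adv = F.mse_loss(pred, torch.ones_like(pred))

    # 2) cyclic consistency plus a perceptual loss on deep features
    cyc = F.l1_loss(rec_ir, real_ir)
    perc = F.l1_loss(feat(rec_ir), feat(real_ir))

    # 3) a cross-domain similarity term: the translation should stay
    #    close to the input content in feature space
    sim = F.l1_loss(feat(fake_vis), feat(real_ir))

    return adv + lam_cyc * cyc + lam_perc * perc + lam_sim * sim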
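Because IRVI consists of consecutive video clips rather than independent images, a training loader would typically sample short windows of adjacent frames per clip so that temporal losses have context. The sketch below assumes a hypothetical layout of one folder of numbered PNG frames per clip; the actual directory structure of the released dataset may differ, so consult the repository before use.

import torch
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF

class IRVIClipWindows(Dataset):
    """Yields short windows of consecutive frames; spatial-temporal
    consistency training needs temporal context, not isolated images.
    The one-folder-per-clip layout is an assumption for illustration."""
    def __init__(self, root, window=3, pattern="*.png"):
        self.samples = []
        for clip_dir in sorted(p for p in Path(root).iterdir() if p.is_dir()):
            frames = sorted(clip_dir.glob(pattern))
            # each run of `window` consecutive frames is one sample
            for i in range(len(frames) - window + 1):
                self.samples.append(frames[i:i + window])

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        imgs = [TF.to_tensor(Image.open(p)) for p in self.samples[idx]]
        return torch.stack(imgs)  # shape: (window, C, H, W)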
This dataset will be opened to the public in the form of "Paper + Git"; please see the link below for details:
The researchers
The study was conducted by:
Kai Chen ( kai.chen@iraytek.com )
Shuigen Wang ( shuigen.wang@iraytek.com )
Copyright Notice
-----------COPYRIGHT NOTICE STARTS WITH THIS LINE------------
Copyright (C) 2021 Yantai IRay Technology Co., Ltd. All Rights Reserved.
Permission is hereby granted, without written agreement and without license or royalty fees, to use, copy, modify, and distribute this database (the images, the results and the source files) and its documentation for any purpose, provided that the copyright notice in its entirety appear in all copies of this database, and the original source of this database, Yantai IRay Technology Co., Ltd (IRay, https://www.infiray.com/ ), is acknowledged in any publication that reports research using this database. In no event shall Yantai IRay Technology Co., Ltd be liable to any party for direct, indirect, special, incidental or consequential damages arising out of the use of this database and its documentation, even if Yantai IRay Technology Co., Ltd has been advised of the possibility of such damage.
Yantai IRay Technology Co., Ltd disclaims any warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. The database provided hereunder is on an "as is" basis, and Yantai IRay Technology Co., Ltd has no obligation to provide maintenance, support, updates, enhancements, or modifications.
-----------COPYRIGHT NOTICE ENDS WITH THIS LINE------------