Toward realizing talking quantum movies: Synchronization of audio and visual content

  • Fei Yan Changchun University of Science and Technology, China http://orcid.org/0000-0001-8532-1978
  • Abdullah M Iliyasu Prince Sattam Bin Abdulaziz University, Kingdom of Saudi Arabia
  • Kehan Chen Changchun University of Science and Technology, China
  • Huamin Yang Changchun University of Science and Technology, China

Abstract

With the ubiquity and primacy of image and video processing, a new subdiscipline has emerged to ensure a smooth transition from digital to quantum image processing. This study proposes a framework that synchronizes audio and visual content to realize talking quantum movies. The framework segments a movie into frame, audio, and time components. The multichannel representation for quantum images (MCQI) encodes the still images that make up the movie frames, while the amplitude content of the flexible representation of quantum audio (FRQA) signal is recorded at each instant of time. Audio and frames are synchronized through a quantum sequence of time. The feasibility of the framework is demonstrated by a simple simulation, and some possible applications are discussed.
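The synchronization structure described in the abstract can be illustrated with a small classical simulation: each frame state and audio state is entangled with a basis state of a time register, giving a superposition of the form (1/√T) Σ_t |frame_t⟩ ⊗ |audio_t⟩ ⊗ |t⟩. The sketch below is a hedged toy model, not the paper's implementation; it replaces the MCQI and FRQA encodings with single-qubit stand-in states and only demonstrates the time-register bookkeeping.

```python
import numpy as np

def basis(index, dim):
    """Computational basis vector |index> in a dim-dimensional space."""
    v = np.zeros(dim, dtype=complex)
    v[index] = 1.0
    return v

def movie_state(frames, audio):
    """Toy 'talking movie' state: (1/sqrt(T)) * sum_t |frame_t> (x) |audio_t> (x) |t>.

    frames: list of T normalized vectors (stand-ins for MCQI image states)
    audio:  list of T normalized vectors (stand-ins for FRQA audio states)
    The shared time register |t> is what keeps each frame and its audio
    sample synchronized, as in the framework's quantum sequence of time.
    """
    T = len(frames)
    total = None
    for t, (f, a) in enumerate(zip(frames, audio)):
        term = np.kron(np.kron(f, a), basis(t, T))  # attach the time label |t>
        total = term if total is None else total + term
    return total / np.sqrt(T)

# Toy example: a two-instant movie with single-qubit frame and audio states.
frames = [basis(0, 2), basis(1, 2)]
audio = [basis(1, 2), basis(0, 2)]
psi = movie_state(frames, audio)
print(np.isclose(np.linalg.norm(psi), 1.0))  # the composite state is normalized
```

Measuring the time register of such a state collapses the superposition to the frame-audio pair for that instant, which is the sense in which the time component synchronizes the visual and audio content.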
Published
Jun 23, 2019
How to Cite
YAN, Fei et al. Toward realizing talking quantum movies: Synchronization of audio and visual content. International Journal of Information Science and Technology, [S.l.], v. 3, n. 4, p. 24-33, June 2019. ISSN 2550-5114. Available at: <https://innove.org/ijist/index.php/ijist/article/view/94>. Date accessed: 20 Apr. 2024. DOI: http://dx.doi.org/10.57675/IMIST.PRSM/ijist-v3i4.94.
Section
Special issue : Information and Multimedia Processing