I'm interested in computer vision, machine learning, optimization, and image processing. Most of my
research focuses on inferring properties of the physical world (shape, motion, color, light, etc.) from images.
Representative papers are highlighted.
This paper proposes a cross-modal retrieval system that leverages joint image and text encoding. Most
multimodal architectures employ a separate network for each modality to capture the semantic
relationship between them. In our work, however, a fused image-text encoding achieves comparable
cross-modal retrieval results without requiring a separate network per modality. We show that text
encodings can capture semantic relationships across multiple modalities. To the best of our
knowledge, this is the first work to employ a single network with a fused image-text embedding for
cross-modal retrieval. We evaluate our approach on two widely used multimodal datasets:
MS-COCO and Flickr30K.
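Below is a minimal PyTorch sketch of the core idea as I read it: a single shared network that produces a fused image-text embedding, with retrieval done by cosine similarity. The layer sizes, fusion-by-concatenation choice, and retrieval helper are my own illustrative assumptions, not the paper's actual architecture or training setup.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedImageTextEmbedder(nn.Module):
    """Hypothetical single-network embedder: pre-extracted image and text
    features are concatenated and projected into one shared retrieval space.
    All dimensions are illustrative assumptions, not the paper's values."""

    def __init__(self, image_dim=2048, text_dim=768, embed_dim=512):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(image_dim + text_dim, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, embed_dim),
        )

    def forward(self, image_feats, text_feats):
        # Fuse the two modalities by concatenation, then embed with one network.
        fused = torch.cat([image_feats, text_feats], dim=-1)
        # L2-normalize so cosine similarity reduces to a dot product.
        return F.normalize(self.proj(fused), dim=-1)

def retrieve(query_emb, gallery_embs, k=5):
    # Rank gallery items by cosine similarity to each query embedding.
    sims = query_emb @ gallery_embs.t()
    return sims.topk(k, dim=-1).indices

# Example usage with random features (4 queries, 100 gallery items).
model = FusedImageTextEmbedder()
q = model(torch.randn(4, 2048), torch.randn(4, 768))
g = model(torch.randn(100, 2048), torch.randn(100, 768))
print(retrieve(q, g, k=5).shape)  # torch.Size([4, 5])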