My research aims to address the challenge of data-hungry deep learning models by generating synthetic images that can effectively train computer vision models for tasks such as object counting, polyp segmentation, and pattern classification. The work explores several techniques for making effective use of synthetic data, including domain randomization and domain adaptation in both self- and semi-supervised setups.
Domain randomization is a strategy for alleviating the challenges posed by domain shift. In my work, it involves introducing variations in object textures, background images, and lighting conditions within a semantic segmentation task. The goal is to generate synthetic data that spans a wide range of variations: a model exposed to these varied environments during training comes to perceive real-world data as merely another variation, even when some of the synthetic variations look unrealistic to human observers.
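As an illustration of the idea, the sketch below randomizes lighting, texture, and background for a single training image. It is a minimal NumPy example, not the actual pipeline used in this research; the `randomize` function, the foreground/background convention, and all parameter ranges are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize(image, background):
    """Apply simple domain randomization to an HxWx3 float image in [0, 1].

    `image` carries the foreground object (zeros elsewhere); `background`
    is a same-sized image composited behind it. Both conventions are
    assumptions made for this illustration.
    """
    # Lighting variation: random brightness and contrast.
    brightness = rng.uniform(0.6, 1.4)
    contrast = rng.uniform(0.7, 1.3)
    img = np.clip((image - 0.5) * contrast + 0.5, 0.0, 1.0) * brightness

    # Texture variation: additive noise stands in for texture swaps here.
    img = img + rng.normal(0.0, 0.05, size=img.shape)

    # Background variation: composite the object over a random background.
    mask = (image.sum(axis=-1, keepdims=True) > 0).astype(img.dtype)
    out = mask * img + (1 - mask) * background
    return np.clip(out, 0.0, 1.0)

# Each call yields a differently randomized training sample.
obj = np.zeros((64, 64, 3))
obj[16:48, 16:48] = 0.8                             # toy foreground object
bg = rng.uniform(0.0, 1.0, size=(64, 64, 3))        # random background
sample = randomize(obj, bg)
```

In a full pipeline these perturbations would be drawn per rendered image, so that no two synthetic samples share the same textures, lighting, or background.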
Unlike domain randomization, domain adaptation aims to make the data more realistic by minimizing the gap between the source and target domains. Techniques such as feature alignment and adversarial learning can be used for adaptation. By combining these approaches with synthetic data, models can transfer their knowledge and perform well in the target domain. Domain adaptation has proven effective at reducing the domain gap, especially when labeled data in the target domain is limited.
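One common feature-alignment objective is a CORAL-style loss, which penalizes the distance between the second-order statistics of source and target features. The NumPy sketch below illustrates the objective only; the feature shapes and the `coral_loss` helper are assumptions made for this example, and in practice the loss would be added to a network's training objective rather than computed on raw arrays.

```python
import numpy as np

def coral_loss(source_feats, target_feats):
    """Distance between source and target feature covariances.

    Both inputs are (n_samples, n_features) matrices; minimizing this
    quantity aligns the second-order statistics of the two domains.
    """
    d = source_feats.shape[1]
    cs = np.cov(source_feats, rowvar=False)
    ct = np.cov(target_feats, rowvar=False)
    # Squared Frobenius norm of the covariance gap, scaled by 1 / (4 d^2).
    return np.sum((cs - ct) ** 2) / (4.0 * d * d)

rng = np.random.default_rng(0)
synthetic = rng.normal(0.0, 1.0, size=(200, 16))  # source: synthetic features
real = rng.normal(0.0, 1.5, size=(200, 16))       # target: real features
loss = coral_loss(synthetic, real)                # larger domain gap, larger loss
```

Adversarial alternatives instead train a domain discriminator and update the feature extractor to fool it, so that features from synthetic and real images become indistinguishable.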