06-06-2022 || 02:53
Contrastive learning is a self-supervised algorithm in which unlabeled data is used to learn useful representations of the data.
There are three steps in contrastive learning:
- Data augmentation
- Latent space representation
- Loss minimization
First, a single data point is augmented into multiple views. Since these views are created from the same data, their representations should naturally be close to one another. The main idea of contrastive learning is therefore to produce the same (or very similar) representations for both views. To do this, we map each augmented view into a latent space. These latent representations are then used to compute a loss between the views, based on a similarity score: the score between views of the same data needs to be maximized so that the model treats them as similar.
By doing this, the model learns to cluster objects of the same type together and push objects of other types apart.
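The steps above can be sketched with a minimal InfoNCE-style loss. This is only an illustration with made-up toy latent vectors (`z1`, `z2`, `z3` are assumptions, not from any real encoder), using cosine similarity as the similarity score:

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity score between two latent vectors.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def contrastive_loss(anchor, positive, negatives, temperature=0.5):
    # InfoNCE-style loss: pull the positive view (same underlying data)
    # toward the anchor, push the negative views (other data) apart.
    pos = np.exp(cosine_similarity(anchor, positive) / temperature)
    negs = sum(np.exp(cosine_similarity(anchor, n) / temperature)
               for n in negatives)
    return -np.log(pos / (pos + negs))

# Toy latent vectors: z1 and z2 stand in for two augmented views of the
# same data point; z3 stands in for a view of a different data point.
z1 = np.array([1.0, 0.9, 0.1])
z2 = np.array([0.9, 1.0, 0.0])
z3 = np.array([-1.0, 0.1, 0.8])

loss_similar = contrastive_loss(z1, z2, [z3])      # positive is a true match
loss_dissimilar = contrastive_loss(z1, z3, [z2])   # positive is a mismatch
print(loss_similar < loss_dissimilar)  # True: matching views give lower loss
```

Minimizing this loss drives representations of the same data together and others apart, which is exactly the clustering behavior described above.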