Translating the math in deep learning papers into efficient PyTorch code: SimCLR Contrastive Loss
Learn to turn the advanced math formulas in deep learning papers into performant PyTorch code in 3 steps.

Introduction
One of the best ways to deepen your understanding of the math behind deep learning models and loss functions, and a great way to improve your PyTorch skills, is to get used to implementing deep learning papers yourself.
Books and blog posts can help you get started with coding and with the basics of ML/DL. But after studying a few of them and getting good at the routine tasks of the field, you will soon realize that you are on your own in the learning journey, and that most online resources feel shallow and repetitive. However, if you can study new deep learning papers as they are published, understand the essential math in them (not necessarily every proof behind the authors' theory), and write efficient code that implements it, nothing can stop you from staying up to date and learning new ideas.
Contrastive Loss implementation
I'll introduce my routine and the steps I follow to implement the math in deep learning papers, using a non-trivial example: the contrastive loss from the SimCLR paper.
Here's the mathematical formulation of the loss:
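From the SimCLR paper, the loss for a positive pair (i, j) — the NT-Xent loss — can be written as:

```latex
\ell_{i,j} = -\log
\frac{\exp\left(\mathrm{sim}(z_i, z_j)/\tau\right)}
     {\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp\left(\mathrm{sim}(z_i, z_k)/\tau\right)}
```

where \(z_i, z_j\) are the projected embeddings of two augmented views of the same image, \(\mathrm{sim}(u, v) = u^\top v / (\lVert u \rVert \lVert v \rVert)\) is cosine similarity, \(\tau\) is the temperature, and the sum in the denominator runs over all \(2N\) augmented examples in the batch except \(i\) itself.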

I agree that the mere look of the formula can be daunting! You might be thinking that there must be lots of ready-made PyTorch implementations on GitHub, so let's just use one of them.
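To make the target concrete before we walk through the derivation, here is a minimal sketch of what such an implementation looks like. This is my own condensed version, not the one we will build step by step: it assumes two batches of projections `z1` and `z2` where row `i` of each is an augmented view of the same image, and it reuses `F.cross_entropy` to compute the log-softmax over similarities.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Minimal NT-Xent (SimCLR contrastive) loss sketch.

    z1, z2: (N, D) projections of two augmented views of the same N images.
    """
    N = z1.size(0)
    # Stack both views into one (2N, D) batch and L2-normalize,
    # so dot products below are cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.T / temperature                      # (2N, 2N) similarity matrix
    # Mask self-similarities so they never enter the softmax denominator.
    self_mask = torch.eye(2 * N, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))
    # The positive for row i is row i+N (and vice versa).
    targets = torch.cat([torch.arange(N, 2 * N), torch.arange(0, N)]).to(z.device)
    # cross_entropy applies log-softmax over each row: exactly the -log fraction above.
    return F.cross_entropy(sim, targets)
```

Under the hood, `F.cross_entropy` computes the negative log-softmax over each row of the similarity matrix, which matches the formula's ratio of the positive-pair term to the sum over all non-self pairs.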