Geometric Deep Learning on Groups
Ideally, geometric deep learning techniques on groups would encode equivariance to group transformations, providing well-behaved representation spaces and excellent performance, while also being computationally efficient. However, no single approach currently provides all of these desirable properties. Continuous approaches offer excellent equivariance but at a very large computational cost. Discrete approaches are typically relatively computationally efficient but sacrifice equivariance. We point towards future techniques that achieve the best of both worlds.

Deep learning on groups is a rapidly growing area of geometric deep learning (see our recent TDS article on A Brief Introduction to Geometric Deep Learning). Groups include homogeneous spaces with global symmetries, with the archetypical example being the sphere.
Practical applications of geometric deep learning on groups are prevalent, particularly for the sphere. For example, spherical data arise in myriad applications, not only when data is acquired directly on the sphere (such as over the Earth or by 360° cameras that capture panoramic photos and videos), but also when considering spherical symmetries (such as in molecular chemistry or magnetic resonance imaging).
We need deep learning techniques on groups that are both highly effective and scalable to huge datasets of high-resolution data. In general this problem remains unsolved.

Goals
One of the reasons deep learning techniques have been so effective is the inductive biases encoded in modern architectures.
One particularly powerful inductive bias is to encode symmetries that the data are known to satisfy (as elaborated in our TDS article What Einstein Can Teach Us About Machine Learning). Convolutional neural networks (CNNs), for example, encode translational symmetry or, more precisely, translational equivariance, as illustrated in the diagram below.
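Translational equivariance can also be verified numerically: shifting a signal and then convolving gives the same result as convolving and then shifting. The following is a minimal sketch of this check for circular (periodic) convolution in 1D; the signal, kernel, and function names are illustrative, not from any particular library's API.

```python
import numpy as np

# Illustrative check of translational equivariance:
# circular convolution commutes with circular shifts.

rng = np.random.default_rng(0)
signal = rng.standard_normal(16)   # toy 1D signal
kernel = rng.standard_normal(5)    # toy convolution filter

def circular_conv(x, k):
    """Circular convolution via the FFT (periodic boundary conditions)."""
    n = len(x)
    k_padded = np.zeros(n)
    k_padded[: len(k)] = k
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k_padded)))

shift = 3
# Shift the input first, then convolve ...
shift_then_conv = circular_conv(np.roll(signal, shift), kernel)
# ... versus convolve first, then shift the output.
conv_then_shift = np.roll(circular_conv(signal, kernel), shift)

# Equivariance: the two orders of operations agree.
assert np.allclose(shift_then_conv, conv_then_shift)
```

The same commutation property is exactly what group convolutions generalise from translations on the plane to actions of other groups, such as rotations on the sphere.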
