### Our Vision

*Our research vision is to create a symbiotic partnership between these two transformative technologies, where machine learning optimizes and accelerates quantum hardware development, while quantum computing unleashes new paradigms for machine learning algorithms. We envision ML algorithms not only designing and tailoring quantum circuits for specific tasks, but also actively managing and correcting noise in real-time, enabling a leap in quantum coherence and fidelity.*

## Training Hybrid Quantum-Classical Networks

Training hybrid quantum-classical networks poses two main challenges: **(1)** calculating gradients and **(2)** defining quantum loss functions based on measurements.

##### (1). Calculating Gradients:

The inability to directly observe intermediate quantum states poses a significant challenge for gradient calculation in quantum networks. Traditional gradient-based optimization methods heavily rely on the ability to measure the system’s response to small perturbations in its parameters. However, in the quantum realm, measuring the system collapses its quantum state, preventing the observation of the subtle changes induced by these perturbations. This limitation necessitates the development of indirect methods to estimate gradients, such as parameter shift rules or quantum Fisher information, which often require multiple quantum circuit executions.

##### Parameter Shift:

Parameter shift is a gradient estimation technique for training quantum machine learning models. Rather than relying on infinitesimal perturbations, it evaluates the circuit at two shifted parameter values (typically ±π/2 for gates with Pauli generators) and combines the measured cost-function values to recover the gradient exactly. The cost is two circuit evaluations per parameter, which becomes expensive for large circuits; even so, parameter shift remains a cornerstone in the development of quantum machine learning algorithms.
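The rule above can be checked on the smallest possible example. This sketch (my own illustration, not code from QPMeL) measures ⟨Z⟩ after an RY(θ) rotation on a single qubit, where the expectation is cos θ analytically, and recovers the derivative −sin θ from two shifted evaluations:

```python
import numpy as np

def expval_z(theta):
    # <Z> after RY(theta)|0>: state = [cos(t/2), sin(t/2)],
    # so <Z> = cos^2(t/2) - sin^2(t/2) = cos(theta)
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return state[0]**2 - state[1]**2

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    # Two circuit evaluations per parameter; exact for Pauli-generated gates
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.7
grad = parameter_shift_grad(expval_z, theta)
# matches the analytic derivative d/dtheta cos(theta) = -sin(theta)
```

On hardware, each evaluation of `f` would be a separate circuit execution, which is why the method's cost scales with the number of trainable parameters.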

##### (2). Quantum Loss functions:

The need for native quantum loss functions arises from the fundamental differences between classical and quantum data. While classical loss functions operate on probability distributions, quantum loss functions must effectively capture the complexities of quantum states, including superposition and entanglement. By directly optimizing quantum objectives, native loss functions have the potential to unlock the full power of quantum machine learning.

## Quantum Polar Metric Learning

Quantum Polar Metric Learning (QPMeL) is a hybrid classical-quantum model for learning embeddings in Hilbert space.

The system trains as a single loop: gradients for the classical network propagate through the quantum circuit, allowing it to accurately learn functions in the quantum Hilbert space.

The classical network generates two sets of vectors that are used as angles for orthogonal rotations in quantum space.

The main components are:

- **Classical Head**: Learns the image-to-vector mapping to be encoded onto the quantum computer.
- **Embedding Circuit**: Quantum circuit that maps data from the classical to the quantum space.
- **Training Circuit**: Circuit wrapped around the embedding circuit to calculate losses and generate gradients.
- **Loss Function**: Hybrid quantum-classical loss based on triplet loss, utilizing state-fidelity similarity.

##### (1). Classical Head:

It consists of the CNN backbone and the “*Angle Prediction Layer*”. The classical head uses convolution blocks consisting of Conv + ReLU + MaxPool layers, followed by a dense block of 3 Dense + GeLU layers with decreasing dimensionality. The polar form of a qubit can be described in terms of two angles, 𝜃 and 𝛾, which can be encoded via the 𝑅𝑦 and 𝑅𝑧 gates respectively. QPMeL aims to learn “*Rotational Representations*” for classical data by having the classical head produce one 𝜃 and one 𝛾 parameter per qubit.
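The final mapping from backbone features to per-qubit angles can be sketched as a single dense layer whose output is split into the 𝜃 and 𝛾 vectors. This is a minimal illustration with hypothetical weights and dimensions (the real backbone is a CNN); the sigmoid squashing to [0, π) is one plausible way to keep outputs in a valid rotation range:

```python
import numpy as np

rng = np.random.default_rng(0)
N_QUBITS = 3      # hypothetical circuit width
FEATURE_DIM = 8   # hypothetical backbone output size

# Hypothetical weights of the "Angle Prediction Layer":
# maps backbone features to 2 angles (theta, gamma) per qubit.
W = rng.normal(size=(FEATURE_DIM, 2 * N_QUBITS))
b = np.zeros(2 * N_QUBITS)

def angle_prediction(features):
    """Map CNN-backbone features to per-qubit rotation angles in (0, pi)."""
    logits = features @ W + b
    angles = np.pi / (1 + np.exp(-logits))  # sigmoid scaled to (0, pi)
    theta, gamma = angles[:N_QUBITS], angles[N_QUBITS:]
    return theta, gamma

features = rng.normal(size=FEATURE_DIM)
theta, gamma = angle_prediction(features)
```

The key design point is simply that the classical head emits 2n real numbers for an n-qubit circuit, one (𝜃, 𝛾) pair per qubit.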

##### (2). Embedding Circuit:

The encoding circuit is used to create the state |𝜓⟩ from the classical embeddings. The structure consists of 𝑅𝑦 and 𝑅𝑧 gates separated by a layer of cyclic 𝑍𝑍(𝜃) gates for entanglement.
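The layer structure (an 𝑅𝑦 layer, cyclic 𝑍𝑍 entanglers, then an 𝑅𝑧 layer) can be simulated directly as a statevector. This is my own NumPy sketch, not QPMeL's implementation; the `phi` entangling angles are a hypothetical parameter vector:

```python
import numpy as np

def ry(t):
    return np.array([[np.cos(t/2), -np.sin(t/2)],
                     [np.sin(t/2),  np.cos(t/2)]])

def rz(t):
    return np.array([[np.exp(-1j*t/2), 0],
                     [0,               np.exp(1j*t/2)]], dtype=complex)

def single_qubit(gate, qubit, n):
    # Lift a 2x2 gate to the full n-qubit register via Kronecker products
    out = np.array([[1.0]])
    for q in range(n):
        out = np.kron(out, gate if q == qubit else np.eye(2))
    return out

def zz(phi, q1, q2, n):
    # ZZ(phi) = exp(-i*phi/2 * Z@Z): diagonal in the computational basis
    z = np.diag([1.0, -1.0])
    signs = np.diag(single_qubit(z, q1, n) @ single_qubit(z, q2, n))
    return np.diag(np.exp(-1j * phi / 2 * signs))

def embed(theta, gamma, phi):
    """Build |psi> from per-qubit angles: Ry layer, cyclic ZZ, Rz layer."""
    n = len(theta)
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0                                     # start in |0...0>
    for q in range(n):
        state = single_qubit(ry(theta[q]), q, n) @ state
    for q in range(n):                                 # cyclic entanglement
        state = zz(phi[q], q, (q + 1) % n, n) @ state
    for q in range(n):
        state = single_qubit(rz(gamma[q]), q, n) @ state
    return state

psi = embed(theta=[0.3, 1.1, 2.0], gamma=[0.5, 0.2, 1.4], phi=[0.7, 0.9, 0.1])
```

With all angles set to zero every layer reduces to the identity, so the circuit leaves |0…0⟩ unchanged, which is a quick sanity check on the construction.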

##### (3). Training Circuit:

QPMeL uses separate circuits for training and inference, with two main differences: (1) the SWAP-test extension requires two copies of the encoding circuit, and (2) residual corrections are applied only during training. To compute the fidelity between the two embedded states, we use the SWAP test.
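The SWAP test estimates state fidelity through an ancilla measurement: the probability of reading the ancilla as 0 is (1 + |⟨𝜓|𝜙⟩|²)/2, so the fidelity is recovered as 2·P(0) − 1. A classical sketch of the quantity the circuit estimates (not the circuit itself):

```python
import numpy as np

def fidelity(psi, phi):
    # State fidelity for pure states: |<psi|phi>|^2
    return abs(np.vdot(psi, phi))**2

def swap_test_p0(psi, phi):
    # Probability the SWAP-test ancilla measures 0: P(0) = (1 + F) / 2
    return (1 + fidelity(psi, phi)) / 2

psi = np.array([1, 0], dtype=complex)               # |0>
phi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |+>
F = 2 * swap_test_p0(psi, phi) - 1                  # recovers the fidelity
```

On hardware, P(0) is estimated from repeated shots, so the training circuit needs both state copies prepared side by side, which is exactly why it is wider than the inference circuit.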

##### (4). Loss Function:

QPMeL uses a quantum extension of triplet loss that adopts state fidelity as the distance metric. We simplify the loss by separating the comparison from the distance formulation, favoring two calls to a much thinner and shallower circuit. This is more practical on NISQ devices with limited coherence times. QPMeL measures distances in Hilbert space using state fidelity and then computes the difference classically.
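The fidelity-based triplet loss can be sketched as follows. Since fidelity is a similarity (1 for identical states), the loss pushes the anchor-positive fidelity above the anchor-negative fidelity by a margin; the margin value here is a hypothetical choice, not taken from the paper:

```python
import numpy as np

def fidelity(psi, phi):
    return abs(np.vdot(psi, phi))**2

def fidelity_triplet_loss(anchor, positive, negative, margin=0.5):
    # Fidelity is a similarity, so the hinge rewards
    # F(anchor, positive) exceeding F(anchor, negative) by `margin`.
    f_pos = fidelity(anchor, positive)
    f_neg = fidelity(anchor, negative)
    return max(0.0, f_neg - f_pos + margin)

anchor   = np.array([1, 0], dtype=complex)               # |0>
positive = np.array([1, 0], dtype=complex)               # same state
negative = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |+>
loss = fidelity_triplet_loss(anchor, positive, negative)
```

Each of the two fidelities comes from one SWAP-test circuit call, and the subtraction and hinge happen classically, matching the split between quantum distance measurement and classical loss computation described above.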

## Future Work

Our future work aims to address the issue of generalization in quantum machine learning and to develop a new probability-distribution-centric approach that emphasizes the evolution of distributions to enable learning and prediction.