Quantum Metric Learning¶
Quantum metric learning trains a parameterised quantum embedding such that distances between samples reflect label similarity.
Instead of directly predicting class labels, the model learns a representation in which:
- samples from the same class are close
- samples from different classes are separated
Classification can then be performed using simple classical methods such as nearest centroid or k-nearest neighbours.
This separates:

- representation learning (quantum)
- classification rule (classical)
Overview¶
Given input features

$$x \in \mathbb{R}^n,$$

a quantum circuit defines an embedding

$$\phi_\theta : \mathbb{R}^n \to \mathbb{R}^k,$$

where:

- \(k\) is the number of qubits
- \(\theta\) are trainable circuit parameters

The embedding is constructed using expectation values of Pauli observables:

$$[\phi_\theta(x)]_i = \langle \psi(x, \theta) \,|\, Z_i \,|\, \psi(x, \theta) \rangle, \qquad i = 1, \dots, k$$
Distances in this embedding space are used to measure similarity between inputs.
Model structure¶
The quantum embedding uses a parameterised circuit

$$|\psi(x, \theta)\rangle = U(x, \theta)\,|0\rangle^{\otimes k},$$

where:

- \(U(x,\theta)\) contains both data encoding and trainable parameters
- entangling gates allow correlations between features

The embedding vector is obtained from expectation values,

$$\phi_\theta(x) = \big(\langle Z_1 \rangle, \dots, \langle Z_k \rangle\big),$$

giving an embedding dimension equal to the number of qubits.
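As a toy illustration of an expectation-value embedding (pure NumPy statevector simulation, assuming a single qubit and a single RY gate; the helper name `embed` is hypothetical, not part of any package):

```python
import numpy as np

def ry(angle):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def embed(x, theta):
    """One-component embedding: <Z> of the state RY(x + theta)|0>."""
    state = ry(x + theta) @ np.array([1.0, 0.0])   # |psi> = RY(x + theta)|0>
    return state[0] ** 2 - state[1] ** 2           # <Z> = |a_0|^2 - |a_1|^2

# For this circuit the embedding reduces to cos(x + theta)
print(embed(0.3, 0.1))
```

With \(k\) qubits, one such expectation value per qubit yields the \(k\)-dimensional embedding vector.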
Data re-uploading embedding¶
To increase expressivity without increasing the qubit count, features may be encoded multiple times:

$$U(x, \theta) = \prod_{\ell=1}^{L} W(\theta_\ell)\, S(x)$$

Each layer applies a data-encoding block \(S(x)\) followed by a trainable block \(W(\theta_\ell)\).

Repeated encoding allows the circuit to learn nonlinear transformations of the input space.
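A minimal single-qubit sketch of re-uploading, alternating an RZ data-encoding gate with a trainable RY gate (a toy NumPy simulation under assumed gate choices; the function name `reupload_embed` is illustrative):

```python
import numpy as np

def ry(t):
    """Trainable single-qubit RY rotation."""
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(x):
    """Data-encoding single-qubit RZ rotation."""
    return np.diag([np.exp(-1j * x / 2), np.exp(1j * x / 2)])

def reupload_embed(x, thetas):
    """Apply L layers of W(theta_l) S(x), then measure <Z>."""
    state = np.array([1.0, 0.0], dtype=complex)
    for theta in thetas:
        state = ry(theta) @ rz(x) @ state          # S(x) then W(theta_l)
    return float(np.abs(state[0]) ** 2 - np.abs(state[1]) ** 2)

print(reupload_embed(0.0, [0.7, 0.4]))
print(reupload_embed(np.pi, [0.7, 0.4]))
```

Because the same \(x\) enters the circuit in every layer, the output is a richer (multi-frequency) function of \(x\) than a single encoding would give.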
Contrastive training objective¶
Training uses pairs of labelled samples.
Given two labelled inputs \((x_i, y_i)\) and \((x_j, y_j)\), define the embedding distance

$$d_{ij} = \|\phi_\theta(x_i) - \phi_\theta(x_j)\|_2$$

and the similarity indicator

$$s_{ij} = \begin{cases} 1 & \text{if } y_i = y_j \\ 0 & \text{otherwise} \end{cases}$$

Contrastive loss:

$$\mathcal{L}_{ij} = s_{ij}\, d_{ij}^{2} + (1 - s_{ij})\, \max(0,\, m - d_{ij})^{2}$$

where:

- \(m\) is a margin hyperparameter
- \(d_{ij}\) is the Euclidean distance between embeddings
This objective:
- pulls same-class samples together
- pushes different-class samples apart
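The pairwise loss can be sketched directly in NumPy (the function name `contrastive_loss` is illustrative, not the package's API):

```python
import numpy as np

def contrastive_loss(e_i, e_j, same_class, margin=1.0):
    """Hadsell-style contrastive loss on one pair of embedding vectors."""
    d = np.linalg.norm(np.asarray(e_i) - np.asarray(e_j))
    if same_class:
        return d ** 2                        # pull same-class pairs together
    return max(0.0, margin - d) ** 2         # push different-class pairs apart

print(contrastive_loss([0.0, 0.0], [0.3, 0.4], same_class=True))
print(contrastive_loss([0.0, 0.0], [0.3, 0.4], same_class=False, margin=2.0))
```

Once a different-class pair is separated by more than the margin \(m\), its loss is zero and it no longer influences training.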
Training workflow¶
Typical training loop:
- sample labelled pairs from the dataset
- compute quantum embeddings
- compute pairwise distances
- evaluate contrastive loss
- update parameters using gradient-based optimisation
Optimisation is performed using classical optimisers such as Adam.
Gradients are computed using automatic differentiation of expectation values.
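The loop above can be sketched end to end in pure NumPy. This is a toy, not the package's implementation: it uses a one-qubit embedding \(\cos(x + \theta)\), a synthetic two-class dataset, and finite-difference gradients as a stand-in for the automatic differentiation a real workflow would use:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, theta):
    # Toy embedding: <Z> of RY(x + theta)|0> equals cos(x + theta)
    return np.cos(x + theta)

def pair_loss(theta, x_i, x_j, same, margin=1.0):
    d = abs(embed(x_i, theta) - embed(x_j, theta))
    return d ** 2 if same else max(0.0, margin - d) ** 2

# Synthetic dataset: class 0 near x = 0.0, class 1 near x = 1.5
xs = np.concatenate([rng.normal(0.0, 0.1, 20), rng.normal(1.5, 0.1, 20)])
ys = np.array([0] * 20 + [1] * 20)

theta, lr, eps = 0.5, 0.2, 1e-4
history = []
for step in range(50):
    i, j = rng.integers(0, len(xs), size=2)     # sample a labelled pair
    same = ys[i] == ys[j]
    # Finite-difference estimate of d(loss)/d(theta)
    g = (pair_loss(theta + eps, xs[i], xs[j], same)
         - pair_loss(theta - eps, xs[i], xs[j], same)) / (2 * eps)
    theta -= lr * g                             # gradient step
    history.append(pair_loss(theta, xs[i], xs[j], same))

print(len(history))
```

A production version would replace the finite differences with automatic differentiation (or the parameter-shift rule for hardware), batch the pairs, and use an optimiser such as Adam.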
Classification in embedding space¶
After training, embeddings can be used for classical classification.
A simple approach uses nearest centroid prediction.
Compute the centroid of each class \(c\) over its training samples \(S_c\):

$$\mu_c = \frac{1}{|S_c|} \sum_{i \in S_c} \phi_\theta(x_i)$$

Prediction rule:

$$\hat{y}(x) = \operatorname*{arg\,min}_c\, \|\phi_\theta(x) - \mu_c\|_2$$
Other possible classifiers include:
- k-nearest neighbours
- logistic regression
- support vector machines
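Nearest-centroid prediction is a few lines of NumPy (the helper names `fit_centroids` and `predict` are illustrative):

```python
import numpy as np

def fit_centroids(embeddings, labels):
    """Mean embedding per class."""
    embeddings, labels = np.asarray(embeddings), np.asarray(labels)
    return {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(embedding, centroids):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda c: np.linalg.norm(embedding - centroids[c]))

emb = [[0.0, 0.1], [0.1, 0.0], [1.0, 0.9], [0.9, 1.0]]
lab = [0, 0, 1, 1]
cents = fit_centroids(emb, lab)
print(predict(np.array([0.2, 0.2]), cents))
```

Because the classifier has no parameters of its own, any remaining test error reflects the quality of the learned embedding rather than the classification rule.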
Relationship to other QML methods¶
Variational classifiers¶
Variational classifiers optimise prediction error directly:

$$\min_\theta \sum_i \ell\big(f_\theta(x_i),\, y_i\big)$$
Metric learning instead optimises representation geometry.
Quantum kernel methods¶
Kernel methods compute similarity as a state overlap:

$$k(x, x') = |\langle \psi(x) \,|\, \psi(x') \rangle|^2$$

Metric learning instead uses Euclidean distance in embedding space:

$$d(x, x') = \|\phi_\theta(x) - \phi_\theta(x')\|_2$$
Both approaches use quantum circuits as feature maps.
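The contrast can be made concrete with a single-qubit RY feature map (a toy example with no trainable parameters, not the package's circuit):

```python
import numpy as np

def state(x):
    """Feature map |psi(x)> = RY(x)|0> on a single qubit."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def fidelity_kernel(x, xp):
    """Kernel view: similarity |<psi(x)|psi(x')>|^2."""
    return abs(state(x) @ state(xp)) ** 2       # equals cos((x - x')/2)**2

def embedding_distance(x, xp):
    """Metric-learning view: Euclidean distance between <Z> embeddings."""
    return abs(np.cos(x) - np.cos(xp))          # <Z> of RY(x)|0> is cos(x)

print(fidelity_kernel(0.4, 1.0))
print(embedding_distance(0.4, 1.0))
```

The kernel compares full quantum states, while metric learning compares only the measured expectation values; the two notions of similarity coincide only for particular circuits and observables.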
Trainable quantum kernels¶
Trainable kernels optimise similarity structure via kernel alignment.
Metric learning directly shapes the embedding geometry.
Model capacity¶
Expressivity depends on:
- number of qubits
- circuit depth
- entanglement structure
- number of re-uploading layers
Increasing depth allows more complex similarity structure but may increase optimisation difficulty.
Example usage¶
```python
from qml.metric_learning import run_quantum_metric_learner

result = run_quantum_metric_learner(
    dataset="moons",
    samples=200,
    layers=2,
    steps=50,
)
```
Outputs include:
- learned embedding parameters
- embedding vectors
- centroid positions
- training and test accuracy
- optimisation loss history
The public API returns a `QuantumMetricLearningResult` dataclass rather than a plain dictionary, so values are accessed via attributes such as `result.train_accuracy` and `result.loss_history`.
When `save=True`, the workflow also writes JSON results and generated figures to `results/metric_learning/images/metric_learning/`.
When to use quantum metric learning¶
Metric learning is useful when:
- classification boundaries are complex
- similarity structure is more important than direct prediction
- small datasets require expressive embeddings
- classical classifiers benefit from learned representations
References¶
- Hadsell et al. (2006). *Dimensionality reduction by learning an invariant mapping.*
- Mitarai et al. (2018). *Quantum circuit learning.*
- Schuld & Killoran (2019). *Quantum machine learning in feature Hilbert spaces.*
- Cristianini et al. (2002). *Kernel-target alignment.*