Variational Quantum Regression¶
This note describes the variational quantum regressor (VQR) implemented in qml.regression.
The model is a hybrid quantum–classical regressor:
- a classical feature vector is encoded into a quantum circuit
- a parameterised ansatz is applied
- an observable is measured
- the measured scalar is used as the prediction
- parameters are trained by minimising a regression loss
Data¶
We consider a regression dataset

\[
\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N},
\]

where:
- \(N\) is the number of samples
- \(x_i \in \mathbb{R}^d\) is the feature vector for sample \(i\)
- \(y_i \in \mathbb{R}\) is the target value for sample \(i\)
- \(d\) is the feature dimension
In the current implementation:
- \(d = 2\)
- the dataset is a synthetic regression dataset generated with sklearn.datasets.make_regression
- input features are standardised
- targets are standardised
Let

\[
x = (x_1, \dots, x_d) \in \mathbb{R}^d
\]

denote one standardised input sample.
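The data-preparation step can be sketched as follows. This is a minimal NumPy illustration: a hand-rolled linear dataset stands in for sklearn.datasets.make_regression so the snippet is self-contained, and the array names are illustrative, not the package API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data standing in for
# sklearn.datasets.make_regression, with d = 2 features.
N, d = 100, 2
X = rng.normal(size=(N, d))
w = rng.normal(size=d)
y = X @ w + 0.1 * rng.normal(size=N)

# Standardise features and targets: zero mean, unit variance per column.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
y_std = (y - y.mean()) / y.std()
```

Standardising the inputs also keeps the rotation angles fed to the encoding in a moderate range.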
Quantum state preparation¶
The input is encoded into a quantum state using an angle embedding.
Let:
- \(n\) be the number of qubits
- \(n = d\) in the current implementation
- \(|0\rangle^{\otimes n}\) be the initial computational basis state
The feature map is

\[
|\psi(x)\rangle = U_{\text{enc}}(x)\,|0\rangle^{\otimes n},
\]

where \(U_{\text{enc}}(x)\) is the encoding unitary.
Angle embedding¶
For an input vector \(x \in \mathbb{R}^n\), the encoding applies one \(R_Y\) rotation per qubit:

\[
U_{\text{enc}}(x) = \bigotimes_{j=1}^{n} R_Y(x_j),
\]

where:
- \(x_j\) is feature \(j\)
- \(R_Y(\alpha)\) is a single-qubit rotation by angle \(\alpha\) about the \(Y\) axis
The matrix form is

\[
R_Y(\alpha) =
\begin{pmatrix}
\cos(\alpha/2) & -\sin(\alpha/2) \\
\sin(\alpha/2) & \cos(\alpha/2)
\end{pmatrix}.
\]
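The angle embedding can be checked numerically with a dense-matrix sketch; the function names below are illustrative, not the package API.

```python
import numpy as np

def ry(alpha):
    # Matrix form of R_Y(alpha).
    c, s = np.cos(alpha / 2), np.sin(alpha / 2)
    return np.array([[c, -s], [s, c]])

def angle_embedding(x):
    # U_enc(x) = R_Y(x_1) (x) R_Y(x_2) (x) ... (x) R_Y(x_n).
    U = np.array([[1.0]])
    for xj in x:
        U = np.kron(U, ry(xj))
    return U

x = np.array([0.3, -1.2])
n = len(x)
state0 = np.zeros(2 ** n)
state0[0] = 1.0                      # |0...0>
psi = angle_embedding(x) @ state0    # |psi(x)>
```

Because the embedding is a tensor product, the resulting state factorises into one single-qubit state per feature.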
Variational ansatz¶
After encoding, a trainable circuit is applied.
Let

\[
\theta = \{\theta_{\ell,j,k}\}
\]

denote the full set of trainable parameters.

The ansatz unitary is \(U_{\text{ans}}(\theta)\), and the full circuit state is

\[
|\psi(x,\theta)\rangle = U_{\text{ans}}(\theta)\,U_{\text{enc}}(x)\,|0\rangle^{\otimes n}.
\]
Layered hardware-efficient ansatz¶
The implemented ansatz uses:
- one layer index \(\ell = 1,\dots,L\)
- one qubit index \(j = 1,\dots,n\)
where:
- \(L\) is the number of variational layers
- \(n\) is the number of qubits
Each layer applies:
- \(R_Y\) on each qubit
- \(R_Z\) on each qubit
- a chain of CNOT gates for entanglement
The parameter tensor is

\[
\theta \in \mathbb{R}^{L \times n \times 2},
\]
where:
- \(\theta_{\ell,j,1}\) is the \(R_Y\) angle for layer \(\ell\), qubit \(j\)
- \(\theta_{\ell,j,2}\) is the \(R_Z\) angle for layer \(\ell\), qubit \(j\)
One layer has the form

\[
U_{\ell}(\theta_{\ell}) = U_{\text{ent}} \prod_{j=1}^{n} R_Z^{(j)}(\theta_{\ell,j,2})\, R_Y^{(j)}(\theta_{\ell,j,1}),
\]

where \(U_{\text{ent}}\) is the entangling unitary and \(R_Y^{(j)}\), \(R_Z^{(j)}\) act on qubit \(j\).

For the chain entangler:

\[
U_{\text{ent}} = \prod_{j=1}^{n-1} \mathrm{CNOT}_{j,\,j+1}.
\]

Thus the full ansatz is

\[
U_{\text{ans}}(\theta) = U_{L}(\theta_{L}) \cdots U_{1}(\theta_{1}).
\]
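A dense-matrix sketch of this layered ansatz is given below. It assumes 0-based qubit indices with qubit 1 of the text mapped to the most significant bit, and applies, per layer, one \(R_Y\) and one \(R_Z\) per qubit followed by a CNOT chain; this is an illustration, not the package implementation.

```python
import numpy as np

I2 = np.eye(2)

def ry(a):
    c, s = np.cos(a / 2), np.sin(a / 2)
    return np.array([[c, -s], [s, c]])

def rz(a):
    return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])

def one_qubit(gate, j, n):
    # Embed a single-qubit gate on qubit j (0-based) into n qubits.
    U = np.array([[1.0 + 0j]])
    for k in range(n):
        U = np.kron(U, gate if k == j else I2)
    return U

def cnot(control, target, n):
    # Permutation matrix for CNOT on the given (0-based) qubits.
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        U[j, i] = 1.0
    return U

def ansatz(theta):
    # theta has shape (L, n, 2): R_Y and R_Z angles per layer and qubit,
    # followed by a CNOT chain in each layer.
    L, n, _ = theta.shape
    U = np.eye(2 ** n, dtype=complex)
    for l in range(L):
        for j in range(n):
            U = one_qubit(ry(theta[l, j, 0]), j, n) @ U
            U = one_qubit(rz(theta[l, j, 1]), j, n) @ U
        for j in range(n - 1):
            U = cnot(j, j + 1, n) @ U
    return U
```

Composing only unitary factors, the resulting matrix is unitary for any parameter values.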
Measurement and model output¶
The circuit measures the expectation value of the Pauli-\(Z\) observable on the first qubit.
Let

\[
M = Z_1,
\]

where \(Z_1\) is Pauli \(Z\) acting on qubit 1.

The model prediction is

\[
\hat{y}(x,\theta) = \langle \psi(x,\theta) |\, Z_1 \,| \psi(x,\theta) \rangle,
\]

where:
- \(\hat{y}(x,\theta)\) is the predicted target value
- \(\hat{y}(x,\theta) \in [-1,1]\)
Because the targets are standardised, this bounded output is sufficient for the current minimal implementation.
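The expectation value of \(Z_1\) can be read directly off the statevector probabilities. A minimal sketch, assuming qubit 1 corresponds to the most significant bit of the basis-state index (function name illustrative):

```python
import numpy as np

def expval_z1(psi, n):
    # <Z_1> = sum_i |psi_i|^2 * s_i, where s_i = +1 if qubit 1 is |0>
    # in basis state i and -1 if it is |1> (most significant bit).
    probs = np.abs(psi) ** 2
    signs = np.array([1.0 if ((i >> (n - 1)) & 1) == 0 else -1.0
                      for i in range(2 ** n)])
    return float(probs @ signs)

# |00> gives <Z_1> = +1; (|00> + |10>)/sqrt(2) gives <Z_1> = 0.
ket00 = np.array([1.0, 0.0, 0.0, 0.0])
plus = np.array([1.0, 0.0, 1.0, 0.0]) / np.sqrt(2)
```

Since the output is a convex combination of \(\pm 1\), it is automatically bounded in \([-1, 1]\), matching the range stated above.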
Loss function¶
Training uses mean squared error.
For a batch of \(N\) training samples, let:
- \(y_i \in \mathbb{R}\) be the true target of sample \(i\)
- \(\hat{y}_i = \hat{y}(x_i,\theta)\) be the predicted target of sample \(i\)
The loss is

\[
\mathcal{L}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \left( \hat{y}_i - y_i \right)^2,
\]
where:
- \(\mathcal{L}(\theta)\) is the training objective
- \(N\) is the number of training samples
Optimisation¶
The parameters \(\theta\) are trained using a classical optimiser.
The current implementation uses Adam with a fixed step size (learning rate) \(\eta\).

Training proceeds for a fixed number of steps \(T\), the total number of optimisation iterations.
At each step:
- evaluate the circuit on the training set
- compute predictions \(\hat{y}_i\)
- compute loss \(\mathcal{L}(\theta)\)
- compute gradients with respect to \(\theta\)
- update \(\theta\) using Adam
The loss history is recorded as

\[
\left( \mathcal{L}^{(1)}, \dots, \mathcal{L}^{(T)} \right),
\]

where \(\mathcal{L}^{(t)}\) is the loss after optimisation step \(t\).
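The Adam update rule itself can be illustrated with a hand-rolled step on a toy quadratic loss standing in for the circuit MSE; the step size, iteration count, and function names below are illustrative, not the repository's settings.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, eta=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update: exponential moving averages of the gradient (m)
    # and squared gradient (v), with bias correction at step t >= 1.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy quadratic loss (theta - 3)^2 standing in for the circuit MSE.
theta = np.array([0.0])
m = np.zeros(1)
v = np.zeros(1)
history = []
for t in range(1, 201):
    grad = 2 * (theta - 3.0)          # d/dtheta (theta - 3)^2
    theta, m, v = adam_step(theta, grad, m, v, t)
    history.append(float((theta - 3.0) ** 2))
```

The recorded `history` list plays the role of the loss history \(\mathcal{L}^{(1)}, \dots, \mathcal{L}^{(T)}\) above.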
Regression metrics¶
After training, predictions are formed on both train and test sets.
For any evaluation set of size \(M\), let:
- \(y_i\) be the true target of sample \(i\)
- \(\hat{y}_i\) be the predicted target of sample \(i\)
Mean squared error¶

\[
\mathrm{MSE} = \frac{1}{M} \sum_{i=1}^{M} \left( y_i - \hat{y}_i \right)^2,
\]

where:
- \(\mathrm{MSE}\) is the mean squared error
- \(M\) is the number of evaluated samples
Mean absolute error¶

\[
\mathrm{MAE} = \frac{1}{M} \sum_{i=1}^{M} \left| y_i - \hat{y}_i \right|,
\]

where:
- \(\mathrm{MAE}\) is the mean absolute error
The implementation reports:
- training MSE
- test MSE
- training MAE
- test MAE
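Both metrics are one-liners in NumPy; a minimal sketch with a hand-worked example (values chosen for illustration):

```python
import numpy as np

def mse(y, y_hat):
    # Mean squared error over an evaluation set of size M.
    return float(np.mean((y - y_hat) ** 2))

def mae(y, y_hat):
    # Mean absolute error over an evaluation set of size M.
    return float(np.mean(np.abs(y - y_hat)))

y = np.array([0.0, 1.0, -1.0])
y_hat = np.array([0.5, 1.0, -2.0])
# mse: (0.25 + 0 + 1) / 3;  mae: (0.5 + 0 + 1) / 3 = 0.5
```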
Parameter count¶
The ansatz parameter tensor has shape

\[
(L,\, n,\, 2),
\]

so the total number of trainable parameters is

\[
P = 2Ln,
\]
where:
- \(P\) is the total number of trainable parameters
- \(L\) is the number of layers
- \(n\) is the number of qubits
For the current minimal model:
- \(n = 2\)
- typical choice: \(L = 2\)
so:

\[
P = 2 \times 2 \times 2 = 8.
\]
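As a quick arithmetic check of the count, assuming the minimal model's \(L = 2\) layers and \(n = 2\) qubits:

```python
L, n = 2, 2       # layers and qubits in the minimal model
P = 2 * L * n     # one R_Y and one R_Z angle per layer per qubit
```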
Current implementation choices¶
The current VQR is intentionally minimal.
Included¶
- regression with scalar targets
- two-dimensional input
- angle embedding
- hardware-efficient ansatz
- Pauli-\(Z\) measurement on the first qubit
- mean squared error training
- Adam optimisation
- MSE and MAE reporting
- prediction visualisation
Relation to the code¶
The implemented workflow is organised as follows:
- qml.data prepares the dataset
- qml.embeddings applies the feature map
- qml.ansatz applies the trainable circuit
- qml.regression.run_vqr performs training and evaluation
- qml.visualize creates plots
So the notebook remains a package client, while the full VQR logic lives in the package.
Summary¶
The implemented VQR is a regressor defined by:
- a feature map \(U_{\text{enc}}(x)\)
- a trainable ansatz \(U_{\text{ans}}(\theta)\)
- an observable \(M = Z_1\)
- a scalar prediction from the expectation value
- a mean squared error training objective
Formally:

\[
\hat{y}(x,\theta) = \langle 0|^{\otimes n}\, U_{\text{enc}}^{\dagger}(x)\, U_{\text{ans}}^{\dagger}(\theta)\, Z_1\, U_{\text{ans}}(\theta)\, U_{\text{enc}}(x)\, |0\rangle^{\otimes n}.
\]
This is the core variational regression workflow used in the repository.