% Commit 3d6c057 (parent e9c6ede): update
\documentclass{beamer}

% The physics package provides \ket, \bra, \braket, and \norm as used below;
% the braket package is omitted since its \braket syntax conflicts with physics.
\usepackage{amsmath,amsfonts,amssymb,bm}
\usepackage{physics}

\usetheme{Madrid}

\title{How the HHL Algorithm is Used in Quantum Machine Learning}
\subtitle{Linear Systems, Kernels, and Spectral Filtering}
\author{Morten Hjorth-Jensen}
\date{Spring 2026}

\begin{document}

\frame{\titlepage}

%================================================
\section{Motivation}
%================================================

\begin{frame}{The Core Idea}
The HHL algorithm solves a linear system
\[
A x = b
\]
by preparing a quantum state proportional to
\[
\ket{x} \propto A^{-1}\ket{b}.
\]

In quantum machine learning, this is important because many learning tasks reduce to
\begin{itemize}
\item linear systems,
\item matrix inversion,
\item least-squares problems,
\item kernel methods.
\end{itemize}
\end{frame}
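As a classical point of reference for the state \(\ket{x} \propto A^{-1}\ket{b}\), the following sketch (plain Python, illustrative \(2\times 2\) data) solves a small system and then normalizes the result, since the quantum state encodes only the direction of the solution vector:

```python
# Classical analogue of the HHL output: solve A x = b for a 2x2 system
# and normalize x, since |x> carries only the direction of the solution.

def solve_2x2(A, b):
    """Solve A x = b by Cramer's rule (A is 2x2, assumed invertible)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (b[0] * A[1][1] - b[1] * A[0][1]) / det
    x1 = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return [x0, x1]

A = [[3.0, 1.0], [1.0, 2.0]]
b = [5.0, 5.0]
x = solve_2x2(A, b)                       # [1.0, 2.0]
norm = sum(v * v for v in x) ** 0.5
amplitudes = [v / norm for v in x]        # what |x> encodes, up to a global phase
```

Anything that depends only on `amplitudes` (overlaps, expectation values) is the kind of quantity HHL can deliver without reading out every entry of `x`.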

%------------------------------------------------

\begin{frame}{Why HHL Appears in Machine Learning}
Many machine learning algorithms can be reformulated as
\[
A x = b
\]
for a suitable matrix \(A\) and vector \(b\).

Examples include:
\begin{itemize}
\item linear regression,
\item ridge regression,
\item kernel methods,
\item Gaussian processes,
\item linearized neural networks.
\end{itemize}

Thus HHL serves as a quantum subroutine for solving the underlying learning problem.
\end{frame}

%================================================
\section{Quantum Linear Regression}
%================================================

\begin{frame}{Ordinary Least Squares}
In linear regression, one minimizes
\[
\min_w \norm{Xw - y}^2.
\]

The formal solution is
\[
w = (X^T X)^{-1} X^T y.
\]

This can be written as a linear system:
\[
A w = b,
\qquad
A = X^T X,
\qquad
b = X^T y.
\]
\end{frame}
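The normal equations above can be sketched classically; the toy data below is illustrative and chosen so that \(y = 1 + 2x\) holds exactly:

```python
# Normal equations w = (X^T X)^{-1} X^T y for a tiny 2-parameter model
# (intercept + slope); the data satisfies y = 1 + 2x exactly.

X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]  # column 0: intercept, column 1: x
y = [1.0, 3.0, 5.0]

# A = X^T X and b = X^T y, accumulated entry by entry
A = [[sum(r[i] * r[j] for r in X) for j in range(2)] for i in range(2)]
b = [sum(X[k][i] * y[k] for k in range(3)) for i in range(2)]

# Solve the 2x2 system A w = b by Cramer's rule
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
w = [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
     (A[0][0] * b[1] - A[1][0] * b[0]) / det]   # [1.0, 2.0]
```

HHL would target exactly this system \(A w = b\), but return \(\ket{w}\) rather than the list `w`.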

%------------------------------------------------

\begin{frame}{How HHL Enters}
Instead of returning the vector \(w\) explicitly, HHL prepares a quantum state proportional to
\[
\ket{w} \propto (X^T X)^{-1} X^T \ket{y}.
\]

This means:
\begin{itemize}
\item HHL does not directly output all regression coefficients,
\item but it gives quantum access to the solution state,
\item which can be used to estimate global properties or predictions.
\end{itemize}
\end{frame}
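The kind of global property one would extract from \(\ket{w}\) is, for example, a prediction: an inner product between the solution and a new input. The vectors below are illustrative; on a quantum device such an overlap would be estimated statistically (e.g. with a swap or Hadamard test) rather than by reading out \(w\):

```python
# A prediction is an inner product with the solution vector; this is the
# sort of "global property" one estimates from |w> instead of reading
# out every coefficient.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

w = [1.0, 2.0]               # illustrative regression solution
x_new = [1.0, 3.0]           # new input, with intercept feature
prediction = dot(x_new, w)   # 1*1 + 3*2 = 7.0
```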

%================================================
\section{Kernel Methods and QSVMs}
%================================================

\begin{frame}{Kernel Ridge Regression and Related Methods}
Kernel methods often require solving
\[
(K + \lambda I)\alpha = y,
\]
where:
\begin{itemize}
\item \(K\) is the kernel matrix,
\item \(\lambda\) is a regularization parameter,
\item \(\alpha\) determines the classifier or regressor.
\end{itemize}

This is again a linear system, so HHL can in principle be applied.
\end{frame}
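A minimal classical sketch of the regularized system, with an illustrative \(2\times 2\) Gram matrix; note that adding \(\lambda I\) shifts every eigenvalue of \(K\) upward, which also improves the condition number that HHL is sensitive to:

```python
# Ridge-regularized kernel system (K + lam*I) alpha = y for two
# training points; lam shifts every eigenvalue of K up by lam.

K = [[1.0, 0.5], [0.5, 1.0]]   # toy kernel (Gram) matrix
lam = 0.5
y = [2.0, 2.0]

# Form M = K + lam*I, then solve M alpha = y by Cramer's rule
M = [[K[i][j] + (lam if i == j else 0.0) for j in range(2)] for i in range(2)]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
alpha = [(y[0] * M[1][1] - y[1] * M[0][1]) / det,
         (M[0][0] * y[1] - M[1][0] * y[0]) / det]   # [1.0, 1.0]
```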

%------------------------------------------------

\begin{frame}{Connection to Quantum Support Vector Machines}
In a quantum kernel method, one defines
\[
K(x,x') = |\braket{\phi(x)}{\phi(x')}|^2,
\]
where \(\ket{\phi(x)}\) is a quantum feature state.

A more fully quantum workflow would:
\begin{enumerate}
\item compute the quantum kernel entries on a quantum computer,
\item use HHL to solve the linear system involving \(K\),
\item use the resulting coefficients for classification or regression.
\end{enumerate}
\end{frame}
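One possible single-qubit feature map (an illustrative choice, not a prescribed QSVM encoding) makes the fidelity kernel concrete: \(\ket{\phi(x)} = \cos x \ket{0} + \sin x \ket{1}\), so that \(K(x,x') = \cos^2(x - x')\).

```python
import math

# Toy single-qubit feature map |phi(x)> = cos(x)|0> + sin(x)|1>,
# giving the fidelity kernel K(x, x') = |<phi(x)|phi(x')>|^2 = cos^2(x - x').
# This encoding is illustrative, not a prescribed QSVM feature map.

def phi(x):
    return [math.cos(x), math.sin(x)]

def kernel(x, xp):
    overlap = sum(a * b for a, b in zip(phi(x), phi(xp)))
    return overlap ** 2

# Gram matrix on three sample inputs; diagonal entries equal 1
K = [[kernel(x, xp) for xp in (0.0, 0.5, 1.0)] for x in (0.0, 0.5, 1.0)]
```

The matrix `K` built this way is exactly the object that step 2 of the workflow would hand to a linear solver such as HHL.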

%================================================
\section{Gaussian Processes}
%================================================

\begin{frame}{Gaussian Process Regression}
Gaussian process regression requires solving systems of the form
\[
(K + \sigma^2 I)\,\alpha = y,
\]
where:
\begin{itemize}
\item \(K\) is the covariance or kernel matrix,
\item \(\sigma^2\) is the noise variance,
\item \(k_*\) below collects the covariances between a test point \(x_*\) and the training points.
\end{itemize}

The predictive mean is then
\[
\mu(x_*) = k_*^T \alpha = k_*^T (K + \sigma^2 I)^{-1} y.
\]

This again fits naturally into the HHL framework.
\end{frame}
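The predictive mean can be sketched with a single solve against \(y\) (illustrative numbers; the inverse is never formed explicitly):

```python
# GP predictive mean mu(x*) = k_*^T (K + sigma2*I)^{-1} y on two
# training points: one linear solve for alpha, then one inner product.

K = [[1.0, 0.5], [0.5, 1.0]]   # toy covariance matrix
sigma2 = 0.5
y = [2.0, 2.0]
k_star = [0.8, 0.6]            # covariances between x* and the training points

M = [[K[i][j] + (sigma2 if i == j else 0.0) for j in range(2)] for i in range(2)]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
alpha = [(y[0] * M[1][1] - y[1] * M[0][1]) / det,
         (M[0][0] * y[1] - M[1][0] * y[0]) / det]
mu_star = sum(ks * a for ks, a in zip(k_star, alpha))   # 1.4
```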

%================================================
\section{Quantum Neural Networks in the Linearized Regime}
%================================================

\begin{frame}{Neural Tangent Kernel Perspective}
In the linearized regime of neural network training, one approximates
\[
f_\theta(x) \approx f_{\theta_0}(x) + \nabla_\theta f_{\theta_0}(x) \cdot (\theta - \theta_0).
\]

Training then reduces to solving a linear system involving the neural tangent kernel (NTK).

Therefore, HHL could in principle be used as a quantum linear solver in this regime.
\end{frame}
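Under this linearization the model is affine in the parameter update, which is what makes training a linear problem. A minimal sketch with illustrative numbers (the gradient and update values are made up for the example):

```python
# Linearized model: f_theta(x) ~ f_0(x) + grad_theta f(x) . (theta - theta_0).
# With g(x) = grad_theta f(x) as fixed features, fitting delta is linear.

def f_lin(f0, grad, delta):
    """Evaluate the first-order (tangent) model at parameter update delta."""
    return f0 + sum(g * d for g, d in zip(grad, delta))

f0 = 0.5                 # network output at theta_0
grad = [1.0, 2.0]        # illustrative gradient w.r.t. the parameters
delta = [0.1, 0.2]       # parameter update theta - theta_0
y_lin = f_lin(f0, grad, delta)   # 0.5 + 0.1 + 0.4 = 1.0
```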

%================================================
\section{Physics Interpretation}
%================================================

\begin{frame}{HHL as a Spectral Inverse}
From a physics perspective, HHL implements
\[
A^{-1}.
\]

If
\[
A = \sum_j \lambda_j \ket{u_j}\bra{u_j},
\]
then
\[
A^{-1} = \sum_j \frac{1}{\lambda_j}\ket{u_j}\bra{u_j}.
\]

So HHL acts as a spectral filter:
\[
\lambda_j \mapsto \frac{1}{\lambda_j}.
\]
\end{frame}
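The spectral filter \(\lambda_j \mapsto 1/\lambda_j\) can be checked on a symmetric \(2\times 2\) example with known eigenpairs:

```python
# Spectral inverse of A = sum_j lambda_j |u_j><u_j|: invert each eigenvalue.
# For A = [[2, 1], [1, 2]] the eigenpairs are lambda = 3, 1 with
# eigenvectors (1, 1)/sqrt(2) and (1, -1)/sqrt(2).

s = 2 ** -0.5
eigs = [(3.0, [s, s]), (1.0, [s, -s])]

# A^{-1} = sum_j (1/lambda_j) |u_j><u_j|, assembled entry by entry;
# the result matches the direct inverse (1/3) * [[2, -1], [-1, 2]].
A_inv = [[sum((1.0 / lam) * u[i] * u[j] for lam, u in eigs)
          for j in range(2)] for i in range(2)]
```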

%------------------------------------------------

\begin{frame}{Connection to Green's Functions}
In many-body physics, inverse operators appear as resolvents or Green's functions:
\[
G(z) = (zI - H)^{-1}.
\]

This means that HHL can be understood as applying a Green's-function-like operator to data.

From this viewpoint:
\begin{itemize}
\item regression corresponds to a response problem,
\item kernels resemble propagators,
\item learning becomes spectral filtering.
\end{itemize}
\end{frame}
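On the eigenbasis of \(H\), the resolvent acts diagonally, eigenvalue by eigenvalue, which is the same filtering pattern as HHL's \(\lambda_j \mapsto 1/\lambda_j\). A toy spectrum (values illustrative) makes this explicit:

```python
# Resolvent on the spectrum: G(z) sends each eigenvalue lambda_j of H
# to 1/(z - lambda_j), provided z lies away from the spectrum.

H_eigs = [1.0, 3.0]   # toy Hamiltonian eigenvalues
z = 5.0               # evaluation point outside the spectrum
resolvent_eigs = [1.0 / (lam_z) for lam_z in (z - lam for lam in H_eigs)]
```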

%================================================
\section{Advantages and Caveats}
%================================================

\begin{frame}{When HHL Can Help}
In principle, HHL can offer strong speedups if:
\begin{itemize}
\item the matrix is sparse or efficiently block-encoded,
\item the condition number is not too large,
\item the input state can be prepared efficiently,
\item one only needs global properties of the solution.
\end{itemize}
\end{frame}
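The condition-number requirement can be quantified on a toy spectrum (\(\kappa = \lambda_{\max}/\lambda_{\min}\); values illustrative): HHL's runtime grows with \(\kappa\), so well-conditioned systems are the favorable case.

```python
# Condition number kappa = lambda_max / lambda_min of a symmetric matrix;
# here the spectrum of [[2, 1], [1, 2]] is used as an illustrative example.

eigenvalues = [3.0, 1.0]
kappa = max(eigenvalues) / min(eigenvalues)   # 3.0
```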

%------------------------------------------------

\begin{frame}{Important Caveats}
There are important limitations:
\begin{itemize}
\item HHL outputs a quantum state, not the full classical vector,
\item reading out all coefficients may destroy the speedup,
\item data loading can be costly,
\item conditioning and noise may limit practical use.
\end{itemize}

Thus HHL is most useful when one wants
\begin{itemize}
\item expectation values,
\item overlaps,
\item predictions,
\item or other global observables.
\end{itemize}
\end{frame}

%================================================
\section{Summary}
%================================================

\begin{frame}{Summary}
HHL enters quantum machine learning as a quantum linear solver for problems such as:
\begin{itemize}
\item linear regression,
\item kernel methods,
\item Gaussian processes,
\item linearized neural networks.
\end{itemize}

Conceptually, one may summarize the structure as
\[
\text{machine learning}
\longleftrightarrow
\text{linear systems}
\longleftrightarrow
\text{resolvents}
\longleftrightarrow
\text{HHL}.
\]

Thus HHL is best viewed as a foundational inverse-operator subroutine in quantum machine learning.
\end{frame}

\end{document}