<div><div class='wrapper'>
<p><a name='1'></a></p>
<h1>CS 191: Qubits, Quantum Mechanics and Computers</h1>
<h2>Introduction -- January 17, 2012</h2>
<h2>Course Information</h2>
<ul>
<li>Announcements on website</li>
<li>Try Piazza for questions.</li>
<li>GSIs:</li>
<li>Dylan Gorman<ul>
<li>[email protected]</li>
</ul>
</li>
<li>Seung Woo Shin<ul>
<li>[email protected]</li>
</ul>
</li>
<li>Lecture notes on website. No textbook.</li>
<li>Prerequisites: Math 54, Physics 7A/B, 7C or CS 70.</li>
<li>Work and Grading:</li>
<li>Weekly homework<ul>
<li>Out Tue, due following Mon @ 5 in dropbox (second-floor Soda).</li>
</ul>
</li>
<li>2 Midterms: 14 Feb, 22 Mar.</li>
<li>Final Project</li>
<li>In-class quizzes</li>
<li>Academic integrity policy</li>
</ul>
<h2>Today</h2>
<ul>
<li>What is quantum computation?</li>
<li>What is this course?</li>
<li>Double-slit experiment</li>
</ul>
<h2>What is Quantum Computation?</h2>
<ul>
<li>
<p>Computers based on quantum mechanics can solve certain problems
exponentially faster than classical computers, e.g. factoring
(Shor's algorithm).</p>
</li>
<li>
<p>How to design quantum algorithms?</p>
</li>
<li>Requires different methodology than for classical algorithms</li>
<li>Are there limits to what quantum computers can do? (Probably. They are
not known to solve NP-complete problems efficiently, and the halting
problem remains undecidable for them.)</li>
<li>How to implement quantum computers in the laboratory (AQC, among
other forms).</li>
<li>Can you design them so they're scalable?</li>
</ul>
<p>Quantum computation starts with the realization that if we base our
computers on quantum mechanics rather than classical physics, they can
be exponentially more powerful.</p>
<p>This was really a big deal because it was believed that it didn't
really matter how you implemented computers; all that you could do was
make each step faster.</p>
<p>The fact that something like a quantum computer can be exponentially
faster came as a real surprise -- and on fundamental problems, like
factoring.</p>
<p>What this course will focus on is several questions on quantum computers.</p>
<p>Where we are for quantum computers is sort of where computers were
60-70 years ago.</p>
<ul>
<li>Size -- room full of equipment</li>
<li>Reliability -- not very much so</li>
<li>Limited applications</li>
</ul>
<h2>Ion traps</h2>
<p>Can trap a small handful of ions, small number of qubits. No
fundamental obstacle scaling to ~40 qubits over next two years.</p>
<h2>Entanglement</h2>
<p>Basic resource in quantum mechanics. A unique aspect of QM, and one
fundamental to quantum computing.</p>
<h2>Quantum Teleportation</h2>
<p>Entanglement.</p>
<h2>Quantum Cryptography</h2>
<p>Ways to use QM to communicate securely (still safe even with Shor's).</p>
<h2>This course</h2>
<ul>
<li>Introduction to QM in the language of qubits and quantum gates.</li>
<li>Emphasis on paradoxes, entanglement.</li>
<li>Quantum algorithms.</li>
<li>Quantum cryptography.</li>
<li>Implementing qubits in the laboratory -- spin...</li>
</ul>
<p>There are certain difficulties you can sweep away by focusing on it in
this language. It also highlights certain aspects of QM. Interesting
to focus on these aspects because they lend an alternative
interpretation of QM.</p>
<h2>Aside:</h2>
<p>There will not be programming projects. There will be a couple of
guest lectures. (It's not clear it will happen, but they would try to set
things up so we could go and play with equipment in a lab. This obviously
depends on whether it scales and is set up well enough; it might be in
place by the end of the semester.)</p>
<p>One thing that has to be done is arrange discussion sections. (under
discussion. Looks like a tentative Wed 11-12 and Fri 1-2.)</p>
<p>INTERIM.</p>
<h2>Young's double-slit experiment.</h2>
<p>(particle-wave duality at quantum level. Physics completely
different. So different that it defies reason.)</p>
<p>There are two aspects of dealing with QM: understanding what those
rules are, and believing that nature works that way.</p>
<p>Hopefully you'll suspend your disbelief and just go with understanding
what the rules are.</p>
<p>(blah, more particle-wave duality)</p>
<p>(this basically boils down to interference.)</p>
<p>(tracking which slit each particle goes through leads to a collapse of
the wavefunction, and we observe particles behaving like particles,
not waves)</p>
<p>(talk about superposition of states; introduction of the wavefunction.
Explained entirely by Schrödinger's cat.)</p>
<p>The thing that's most troubling about this from actual experience as
well as physics is that there has to be a mechanism. How did nature do
this? We are going to have a completely precise description, but it's
going to be a mechanism unlike anything else.</p>
<p>Part of understanding QM is coming to terms psychologically with this
superposition of states, the existence in more than one state
simultaneously.</p>
<p><a name='2'></a></p>
<h1>CS 191: Qubits, Quantum Mechanics and Computers</h1>
<h2>Qubits, Superposition, & Measurement -- January 19, 2012</h2>
<h2>Quantization:</h2>
<ul>
<li>Atomic orbitals: Electrons within an atom exist in quantized energy
levels. Qualitatively -- resonating standing waves. (We have Bohr to
thank for this)</li>
<li>The important thing is that there is this quantization, and you can
choose to think of the ground/excited state of an electron as encoding
in binary. Or you can have a k-level system, where you have energy
levels 0 through k-1. These are the things we'll be thinking about
notationally.</li>
<li>There are other systems we can think of as a two-level system,
e.g. photons (in terms of polarization, e.g.)<ul>
<li>spin (very roughly magnetic moment associated with the charge)</li>
</ul>
</li>
<li>These are very rough descriptions. For our purpose, you can think about
k-level systems, where you have k discrete levels.</li>
</ul>
<h2>Superposition</h2>
<p>The first axiom of quantum mechanics is the superposition principle. It
says that if a system can exist in one of two states, then it can also
exist in a linear superposition of said states. Likewise, if it can exist
in k states, it can also exist in a superposition of all k states (trivial
to prove)</p>
<p>In other words, <mathjax>$\alpha_0\ket{0} + \alpha_1\ket{1} + \cdots +
\alpha_{k-1}\ket{k-1}$</mathjax>.</p>
<p>Our <mathjax>$\alpha_i$</mathjax> are probability amplitudes. These don't
themselves sum to one; rather, their squared magnitudes do:
<mathjax>$\sum_i \abs{\alpha_i}^2 = 1$</mathjax>.</p>
<p>(some talk about normalization in order to satisfy said property)</p>
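<p>As a quick sanity check of that normalization condition, here is a small numerical sketch (the specific amplitudes below are made up for illustration):</p>

```python
import numpy as np

# A 3-level system with arbitrary (unnormalized) complex amplitudes.
raw = np.array([1 + 1j, 2, 0.5j])

# Normalize so that the squared magnitudes sum to one.
psi = raw / np.linalg.norm(raw)

total = np.sum(np.abs(psi) ** 2)  # 1.0 up to floating-point error
```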
<p>What does this superposition principle correspond to? We talked about the
double-slit experiment. Electron/photon going through one of two slits,
probability of which slit corresponds to these.</p>
<h2>Measurement</h2>
<p>The second axiom of quantum mechanics is the measurement axiom. The
superposition is the private world of this system. As long as you're not
looking at it, it's going to be in this linear superposition. But as soon
as you make a measurement, the outcome of the measurement is one of these
levels <mathjax>$\ket{j}$</mathjax> with probability <mathjax>$\abs{\alpha _j}^2$</mathjax> (i.e. you collapse
the wave function).</p>
<p>So if you go back to our last example, there we have a qubit (a quantum
bit) -- a two-level system is called a quantum bit because it's a quantum
analogue of a bit. So we had an example where we had a superposition of two
states. (demonstration of the probabilities)</p>
<p>[ talk about how attempting to detect which of the slits the particle went
through actually constitutes a measurement, which changes the state of
the system ]</p>
<p>standard basis measurement:
checking exactly which state the system is in.</p>
<p>Another way of writing the state of a quantum system (as opposed to bra-ket
notation) is as a vector of k complex numbers. We still have the same
condition, that <mathjax>$\sum_i \abs{\alpha_i}^2 = 1$</mathjax>; our vector,
therefore, must sit on the unit sphere in k-dimensional complex space.</p>
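<p>The measurement axiom is easy to simulate classically: sample outcome j with probability equal to the squared magnitude of amplitude j. A minimal sketch (the state and sample count are arbitrary choices):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Qubit with amplitudes 0.6 and 0.8, so P(0) = 0.36 and P(1) = 0.64.
psi = np.array([0.6, 0.8])
probs = np.abs(psi) ** 2

# Repeated measurements on identically prepared copies of the state.
outcomes = rng.choice(2, size=100_000, p=probs)
freq0 = np.mean(outcomes == 0)  # approaches 0.36 for large samples
```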
<p>Ket notation: invented by Dirac. The reason we are going to be so enamored
by the ket notation is that it simultaneously expresses 1) the quantum state
is a vector in a vector space and 2) this quantum state encodes
information. The fact that we are labeling our states as <mathjax>$\ket{0}$</mathjax> and
<mathjax>$\ket{1}$</mathjax> is indicative in itself that we are encoding information.</p>
<p>Two ways of rephrasing the probability of landing in a particular state,
therefore: 1) the squared length of the projection onto said basis vector,
and 2) <mathjax>$\cos^2\theta_j$</mathjax>, where <mathjax>$\theta_j$</mathjax>
is the angle between the state and that basis vector.</p>
<p>Generalization of the notion of measurement: in general, when you do a
measurement, you don't need to pick the standard basis; you can pick any
orthonormal basis.</p>
<p>There is another useful basis called the sign basis: <mathjax>$\ket{+}$</mathjax> and
<mathjax>$\ket{-}$</mathjax>. If placed on the unit circle, we have <mathjax>$\ket{+}$</mathjax> located at
<mathjax>$\theta=\frac{\pi}{4}$</mathjax> and <mathjax>$\ket{-}$</mathjax> located at <mathjax>$\theta=-\frac{\pi}{4}$</mathjax>.</p>
<p>[ change of basis can be done using matrices or using substitution. ]</p>
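<p>The substitution approach can be checked numerically: the amplitude of a state in a new basis is its inner product with each new basis vector. A sketch for rewriting |0〉 in the sign basis:</p>

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
plus  = np.array([1.0,  1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)

# Amplitudes of |0〉 in the sign basis are inner products.
a_plus  = plus @ ket0    # 1/sqrt(2)
a_minus = minus @ ket0   # 1/sqrt(2)

# Measuring |0〉 in the sign basis gives each outcome with probability 1/2.
p_plus = a_plus ** 2
```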
<h2>Significance of sign basis</h2>
<p>The standard basis is going to correspond to a certain quantity. The sign
basis will correspond to a different quantity. By analogy, let's assume
that we're measuring the position or momentum of the system depending on
which basis we're using (might as well be -- Heisenberg's uncertainty
principle and all).</p>
<p>explanation of Heisenberg's uncertainty principle, except without actually
attributing a name to it. basically, with two related quantities, the
more accurately you know one, the less accurately you can know the other.</p>
<p>Maximal uncertainty occurs with a conjugate basis. Measure of uncertainty:
the spread, <mathjax>$S = \abs{\alpha_0} + \abs{\alpha_1}$</mathjax>, which
ranges from 1 (a definite outcome) to <mathjax>$\sqrt{2}$</mathjax> (maximal
uncertainty).</p>
<p>Specifically, Heisenberg's uncertainty principle: <mathjax>$S(\alpha) +
S(\beta) \ge \sqrt{2}$</mathjax>, where <mathjax>$\alpha$</mathjax> and
<mathjax>$\beta$</mathjax> are the amplitudes of the state in a pair of
conjugate bases.</p>
<p><a name='3'></a></p>
<h1>CS 191: Qubits, Quantum Mechanics and Computers</h1>
<h2>Multiple-qubit Systems -- January 24, 2012</h2>
<p>Snafu with the projections, so lecture will be on the whiteboard! Will
also stop early, unfortunately.</p>
<p>State of a single qubit is a superposition of various states
(<mathjax>$\cos\theta\ket{0} + \sin\theta\ket{1}$</mathjax>). measurement has effect of
collapsing the superposition.</p>
<p>(hydrogen atom: electron can be in ground state or excited state.)</p>
<p>Now we study two qubits!</p>
<h1>TWO QUBITS</h1>
<p>Now you have two such particles, and we want to describe their joint state,
what that state looks like. Classically, this can be one of four states. So
quantumly, it is in a superposition of these four states. Our <mathjax>$\ket{\psi}$</mathjax>,
then, is <mathjax>$\alpha_{00}\ket{00} + \alpha_{01}\ket{01} + \alpha_{10}\ket{10} +
\alpha_{11}\ket{11}$</mathjax>. Collapse of the wavefunction occurs in exactly the
same manner.</p>
<p>The probability that the first qubit is 0 is <mathjax>$\abs{\alpha_{00}}^2 +
\abs{\alpha_{01}}^2$</mathjax>. The new state is a renormalization of the
remaining (consistent) states.</p>
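<p>A small sketch of measuring only the first qubit; the amplitudes below are an arbitrary normalized example:</p>

```python
import numpy as np

# Amplitudes over |00〉, |01〉, |10〉, |11〉; squared magnitudes sum to 1.
psi = np.array([1.0, 2.0, 2.0, 4.0]) / 5.0

# Probability the first qubit reads 0.
p0 = np.abs(psi[0]) ** 2 + np.abs(psi[1]) ** 2   # 0.2

# New state after observing 0: keep the consistent amplitudes, renormalize.
post = np.array([psi[0], psi[1], 0.0, 0.0])
post = post / np.linalg.norm(post)
```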
<h1>ENTANGLEMENT</h1>
<p>First, let me show you what it means for two qubits not to be
entangled: the joint state factors as a tensor product, i.e. the two
qubits are independent.</p>
<p>Quantum mechanics tells us that this is a very rare event (i.e. it
almost never happens).</p>
<h2>Bell State</h2>
<p>You have two qubits in the state <mathjax>$\frac{1}{\sqrt{2}}\parens{\ket{00} +
\ket{11}}$</mathjax>. Impossible to factor (nontrivial tensor product), so we must
have some sort of dependence occurring. Neither of the two qubits has a
definite state. All you can say is that the two qubits together are in a
certain state.</p>
<p>Rotational invariants of Bell states -- maximally entangled in all
orthogonal bases.</p>
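<p>That rotational invariance is quick to verify numerically: rewriting the Bell state in any rotated real basis v, v⊥ leaves its amplitudes unchanged. A sketch with an arbitrarily chosen angle:</p>

```python
import numpy as np

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00〉 + |11〉)/sqrt(2)

theta = 0.7  # any real rotation angle
v     = np.array([np.cos(theta),  np.sin(theta)])
vperp = np.array([-np.sin(theta), np.cos(theta)])

# (|vv〉 + |v⊥v⊥〉)/sqrt(2) reproduces the same Bell state.
rotated = (np.kron(v, v) + np.kron(vperp, vperp)) / np.sqrt(2)
same = np.allclose(rotated, bell)  # True for every real theta
```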
<p><a name='4'></a></p>
<h1>CS 191: Qubits, Quantum Mechanics and Computers</h1>
<h2>Entanglement, EPR, Bell's Experiment -- January 26, 2012</h2>
<p>Ignore Q4 on HW2. Probably.</p>
<h1>Entanglement</h1>
<p>Last time, we saw a Bell state (a Bell basis state). This is a state
of two particles (i.e. two qubits), and the state of each of the
qubits is "entangled", so to speak, with the other of the qubits.</p>
<p>This state has a very curious property, as we saw last time: maximally
entangled qubits will remain maximally entangled regardless of the
choice of bases, as long as they are orthogonal. This is known as the
ROTATIONAL INVARIANCE OF BELL STATES.</p>
<p>For the Bell basis state α₀₀|00〉 + α₁₁|11〉, we have rotational
invariance for <em>real</em> rotations only. Certain Bell bases additionally
have rotational invariance over complex rotations.</p>
<p>FACT 1:</p>
<pre><code>You get the same outcome if you measure both qubits in the v, v⊥ basis.
</code></pre>
<p>FACT 2:</p>
<pre><code>Independent of separation between particles. It's not because
the particles are close to each other and talking to each
other; it's because of their state.
</code></pre>
<p>Einstein, Podolsky, & Rosen '35:</p>
<pre><code>Imagine that you have a pair of particles that are emitted
(e.g. electron, positron) that are highly entangled. They are
emitted in opposite directions and travel far from each other. And
then you measure Particle 1 in bit (0/1) basis ⇒ knowledge of the
bit on the other particle. Also, measure Particle 2 in the sign
basis ⇒ knowledge of the sign on the first particle. Contradicts
uncertainty principle?
Not at all. { sign information destroyed, etc. } Sign information
measured in the second particle has nothing to do with that of the
first particle, since measuring the bit information destroyed the
sign information (i.e. is now entirely unknown). { more questions
regarding the Einstein/deterministic interpretation of quantum
mechanics. }
</code></pre>
<p>Bell '64:</p>
<pre><code>Take two entangled particles. Despite large separation distance,
they are quantumly connected. What you can do is start playing
with the notion of measuring the particles in arbitrary
bases.
Make one measurement with outcome u. You'd have |v〉 and |v⊥〉
with probabilities cos²θ and sin²θ.
Bell's idea was this: Surely, if you play with your choice of u
and v, you're going to get something good.
We have some input, 0 or 1, that tells us which basis to
pick. Suppose there are two experimentalists who have these
entangled pairs of qubits. At the last minute, Alice gets as input
some random bit; likewise, Bob gets some other random bit.
We want to know the two output bits.
Goal: maximize probability that a ⊕ b = r{a}·r{b}, i.e. that the
outputs XOR to the AND of the inputs.
If either of the inputs is a zero, they want to output the same
bit. But if both of the inputs are one, they want to output
opposite bits.
Fact: If you choose the correct angles, in the quantum world, you
get a success probability of cos²(π/8) ≈ 0.85.
Claim: no way to do better than 3/4, if you agree to say the same
thing in advance. (Local) hidden variable theory ≤ 0.75. Impossible
to do better.
However: Quantum mechanics gives us a success rate of ≈ 0.853, or
cos²(π/8).
</code></pre>
<p>Alice's protocol is as follows: if r{a} = 0, measure in basis rotated
↻ π/16. if r{a} = 1, measure in basis rotated ↺ 3π/16.</p>
<p>Bob's protocol is as follows: if r{b} = 0, measure in basis rotated
↺ π/16. if r{b} = 1, measure in basis rotated ↻ 3π/16.</p>
<p>{ where did these angles come from? If you plot them on the number
line, you get four points a₁, b₀, a₀, b₁. When either is zero, we
have a distance of π/8, else we have a distance of 3π/8. }</p>
<p>For the cases where (a ⊕ b), we have probability cos²(π/8). for the
case where !(a ⊕ b), we have probability sin²(3π/8) = cos²(π/8).</p>
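<p>Plugging the protocol's angles into P(same outcome) = cos²(θ_a − θ_b), which holds for a Bell pair measured in real bases, reproduces the claimed success rate. The sign conventions below are one consistent reading of the protocol:</p>

```python
import numpy as np

# Measurement angles for each input bit (sign convention is a choice).
alice = {0: -np.pi / 16, 1: 3 * np.pi / 16}
bob   = {0:  np.pi / 16, 1: -3 * np.pi / 16}

def success(ra, rb):
    # For a Bell pair, P(same outcome) = cos^2(angle difference).
    p_same = np.cos(alice[ra] - bob[rb]) ** 2
    # Opposite outputs are wanted only when both inputs are 1.
    return (1 - p_same) if (ra and rb) else p_same

avg = np.mean([success(ra, rb) for ra in (0, 1) for rb in (0, 1)])
# avg = cos^2(pi/8) ≈ 0.853, beating the classical bound of 0.75
```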
<p>Conclusively disproves Einstein's hidden-variable theory.</p>
<p>There's this remarkable aspect where over time you can refine these
concepts to the point that we can sit down in an hour and a half to
understand these concepts that Einstein would have given anything to
understand. Isn't this remarkable?</p>
<p>When you actually do these experiments, they turn out to refute the
entire plan Einstein had.</p>
<p><a name='5'></a></p>
<h1>CS 191: Qubits, Quantum Mechanics and Computers</h1>
<h2>Quantum Gates -- January 31, 2012</h2>
<p>GATES, MORE GATES.</p>
<p><a name='6'></a></p>
<h1>CS 191: Qubits, Quantum Mechanics and Computers</h1>
<h2>Revisiting Axioms of Quantum Mechanics -- Feb 2, 2012</h2>
<p>We're going to be revisiting, over the next few lectures, the axioms
of quantum mechanics and how to refine them further.</p>
<p>Today: first axiom: superposition principle. In general, if we're in a
system that has k distinguishable states, then in general it is in a
linear superposition of these states. Each state is a unit vector, and
the states of the system reside on the surface of the sphere.</p>
<h1>Addendum:</h1>
<p>What happens if we have two different subsystems? Take the first to be
k-dimensional, and the second to be l-dimensional. So now, in the
addendum, the question we are asking is "what happens if we take these
two subsystems and put them together and call this our new system?"
Take a tensor product of these two states. k × l distinguishable states.</p>
<p>So now, if you apply our superposition principle, what does it tell
us? We can be in any superposition of states. We are in a
superposition of basis vectors of (k ⊗ l).</p>
<p>Separately, the two subsystems have k + l dimensions in total, but when we
put them together, we have k × l. This is a fundamental underpinning of
quantum computing: it is where entanglement comes from, and where the
exponential speedup comes from.</p>
<p>It's so very different from classical physics that if you chase it
out, you have consequences. One can just keep it at the level of
formalism, and then it's just notation; it's slightly weird. But then
you look at it and try to understand it, and it really has profound
consequences. So let's try to understand these consequences further.</p>
<p>[ calculating angles between states: the inner product of two product
states equals the product of the inner products of their components. ]</p>
<p>So now, let's back up for a moment and ask: we've said there's this
anomaly where we get a multiplicative effect instead of additive. Why?
They could be entangled. These states we are considering are product
states and are not entangled. In general, when you have a composite
system, you won't be able to decompose it into the tensor product of
two states, i.e. general state cannot be factored. For instance, Bell
states cannot be factored. You cannot say what the first particle must
be, and what the second state must be. All you can say is what the two
particles are simultaneously.</p>
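<p>Whether a two-particle state factors can be checked mechanically: reshape its k·l amplitudes into a k-by-l matrix; the state is a product state exactly when that matrix has rank 1. A sketch (the product-state amplitudes are arbitrary):</p>

```python
import numpy as np

def factor_rank(psi, k, l):
    # Rank of the k-by-l amplitude matrix; rank 1 means a product state.
    return np.linalg.matrix_rank(psi.reshape(k, l))

product = np.kron(np.array([0.6, 0.8]),
                  np.array([1.0, 1.0]) / np.sqrt(2))
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

r_product = factor_rank(product, 2, 2)  # 1: factors as a tensor product
r_bell    = factor_rank(bell, 2, 2)     # 2: entangled, cannot be factored
```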
<p>==== Intermission ====</p>
<p>Two different applications of concepts we've talked about before.</p>
<h1>No-cloning theorem</h1>
<pre><code>Suppose you've got a qubit in the state |ψ〉, in some unknown
state. Now that you have it, you'd like to make copies of it. What
you have in your lab is lots and lots of qubits which you can
initialize to the state |0〉. We also have a lot of fancy
equipment. You think to yourself, surely, given the fact that I
have all this fancy equipment and all these post-docs running
around, we should be able to make at least one copy of this
quantum state.
So we want at least to have the state |Ψ〉 ⊗ |Ψ〉. We want to
start with |Ψ〉 ⊗ |0〉 and go to |Ψ〉 ⊗ |Ψ〉 using fancy
equipment. We can do plenty of unitary transformations (third
axiom of quantum mechanics: no matter how big your lab is, it's
only going to perform a unitary transformation). Is this possible?
No-cloning theorem says this is impossible.
There's a principle called the Church of the Larger Hilbert
space. If you really want to, you could expand your Hilbert space,
and consider measurements to be something that happens in this
larger Hilbert space, and you're only looking at part of your
data; in this larger Hilbert space, everything is unitary.
Right now we're considering a closed system. Later we can make
this theorem more general and include everything, but the
statement will remain the same.
All you can do is perform some rotation on your Hilbert
space. However, we must preserve angles. Such a unitary
transformation only exists if we know that |Ψ〉 is one of two
known orthogonal states.
Basically, this tells us that we cannot clone an unknown quantum
state. There is only one exception: when you know that it is one
of two known orthogonal states.
</code></pre>
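<p>The unitarity argument behind no-cloning can be made concrete: a unitary preserves inner products, but cloning would square them. A numerical sketch using |0〉 and |+〉:</p>

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# Inputs |ψ〉⊗|0〉 have inner product 〈0|+〉 = 1/sqrt(2) ...
before = np.vdot(np.kron(ket0, ket0), np.kron(plus, ket0))

# ... but the desired outputs |ψ〉⊗|ψ〉 would have 〈0|+〉^2 = 1/2.
after = np.vdot(np.kron(ket0, ket0), np.kron(plus, plus))

# A unitary preserves inner products, so no single unitary clones both
# states -- cloning only works for known orthogonal (or identical) states.
no_cloner = not np.isclose(before, after)
```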
<h1>Quantum Teleportation</h1>
<pre><code>Related is the concept of quantum teleportation. Quantum
teleportation provides a way to transfer a particle from one party
to another, if the two parties share an EPR state (Bell state).
Quantum teleportation is this protocol by which the first party
performs a joint measurement on two qubits. The result of this
measurement is one of four results, which is shared with the
second party. The second party then performs one of four
operations (a series of quantum gates) on the other qubit and
receives as a result of these operations the original quantum
state.
There's this property of entanglement called monogamy. A qubit
cannot be maximally entangled with multiple qubits.
</code></pre>
<p>These things took a while to figure out. At first, it was completely
unclear. When this was happening in the early 90s, we'd spend a lot of
time figuring these things out. It was not easy. We'll need some more
concepts, though.</p>
<p><a name='7'></a></p>
<h1>CS 191: Qubits, Quantum Mechanics and Computers</h1>
<h2>Observables, Schrodinger's equation -- Feb 7, 2012</h2>
<h1>Observable</h1>
<p>Operator (i.e. can be described by a matrix) that describes any quantity
that can be measured, like energy, position, or spin. You feed in a quantum
state and receive as output a real number.</p>
<p>Why an operator? If you have a k-level system, then an observable for this
would be a k-by-k Hermitian matrix (i.e. <mathjax>$A = A^\dagger$</mathjax>). Important thing
about hermitian matrices: spectral theorem: orthonormal eigenbasis of
eigenvectors <mathjax>$\phi$</mathjax> that correspond to real eigenvalues <mathjax>$\lambda$</mathjax>.</p>
<p>The real number you get as a result of the measurement -- what you read out
in the measurement outcome -- is <mathjax>$\lambda$</mathjax>. Consider discrete energy
levels; after a measurement, we collapse the wave function into a single
eigenstate.</p>
<p>We already knew what a measurement was. So what happened here, how can we
have a new definition of a measurement? This isn't fair. How can you trust
a course that changes its mind every other week? No complaints? I mean,
isn't it terrible? This is completely different. So what's going on?</p>
<p>Our previous notion of measurement required us to choose some orthonormal
basis. We write out our state <mathjax>$\Psi$</mathjax> in this basis. The result of the
measurement was equal to i with probability <mathjax>$\abs{\beta_i}^2$</mathjax>, and the new
state was <mathjax>$\ket{\Psi_i}$</mathjax>. So how does this correspond to what we have now?</p>
<p>We can reconcile them by showing that our old notion was just less formal:
any orthonormal basis is the eigenbasis of some Hermitian matrix.</p>
<p>Pick any arbitrary orthonormal set of vectors and an arbitrary set of real
numbers. Ask: is there any matrix that has these eigenvectors and these
eigenvalues? Argue: always possible. In that sense, the new definition of a
measurement is really the same as the old one.</p>
<p>Consider case where eigenvalues not unique: reconsider notion of orthonormal
eigenvectors as notion of orthonormal eigenspaces. We've seen an example of
this, by the way: when we had a two-qubit system and we only measured the
first qubit. Each of the two outcomes corresponded to a two-dimensional
subspace. There were two eigenvectors with the same eigenvalue. Project the
state onto the subspace spanned by the eigenstates corresponding to the
result of the measurement.</p>
<p>Reasoning: in the general case, you don't project onto a basis vector; you
project onto the subspace that is consistent with the outcome of the
measurement.</p>
<p>What the measurement does is provide some information about the state and
change the state to reflect the outcome. It doesn't restrict itself any
more than it has to.</p>
<p>diagonalization: converting to a different basis, scaling appropriately,
converting back to the original basis.</p>
<p>A way to construct the operator (which must be a Hermitian matrix) is with
outer products of the basis vectors; the same construction also generates the
change-of-basis matrix.</p>
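<p>A sketch of that construction: pick an orthonormal basis and real eigenvalues (both arbitrary here) and sum the eigenvalue-weighted outer products:</p>

```python
import numpy as np

# Arbitrary orthonormal basis (here, the sign basis) and real eigenvalues.
plus  = np.array([1.0,  1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
lams = [3.0, -1.0]

# A = sum_i lambda_i |phi_i〉〈phi_i|  (sum of outer products)
A = lams[0] * np.outer(plus, plus) + lams[1] * np.outer(minus, minus)

is_hermitian = np.allclose(A, A.conj().T)  # True by construction
eigenvalues = np.linalg.eigvalsh(A)        # recovers [-1., 3.]
```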
<p>==== Intermission ====</p>
<p>Piazza: posted question about other people wanting midterm moved. Enough
objections such that we will stick with original date: next Tuesday. Posted
yesterday a homework which is effectively a review for the midterm, which
will cover everything up until this lecture. Three problems on homework: 2
are review, 1 is on today's lecture. <strong>Due this Friday at 5.</strong></p>
<h1>Schrodinger's Equation</h1>
<p>Most basic equation in quantum mechanics; describes how a system evolves
over time. Depends on one particular operator, the Hamiltonian: the energy
operator (more specifically, kinetic energy T + potential V). When you
write out the Hamiltonian of this system, the eigenvectors correspond to
(eigen)states with definite energy. The corresponding eigenvalue <mathjax>$E_i$</mathjax> is
the corresponding energy.</p>
<p>So now what Schrödinger's equation says is that the state
<mathjax>$\psi$</mathjax> of the system is a function of t, and it evolves
according to a differential equation involving the energy of the system.</p>
<p><mathjax>$i\hbar \pderiv{\psi}{t} = \hat{H} \psi$</mathjax>
(<mathjax>$\hbar \equiv $</mathjax>Planck's constant, <mathjax>$i \equiv \sqrt{-1}$</mathjax>)</p>
<p>The rate of change depends on what the Hamiltonian tells us to do. You can
consider the Hamiltonian talking about interaction between parts of the
system or between subsystems. Forces. Everything.</p>
<p>Now, Schrodinger actually discovered this equation in 1926. This was after
many of the initial discoveries in quantum mechanics. It was after
deBroglie discovered the wave-particle duality. One of the biggest
intellectual events of the twentieth century.</p>
<p>So let's see what this equation tells us about the equations of motion. PDE
solving, yay.</p>
<p>So what we know is that if the state at time 0 was an eigenvector <mathjax>$\phi$</mathjax>,
then the state at time t must be some constant <mathjax>$A(t)\phi$</mathjax>.</p>
<p>Each individual eigenstate just precesses (picks up a phase); a general
state is a superposition of the various eigenstates, each with its own phase.
If you want to write out the linear operator that takes
<mathjax>$\Psi(x,0)$</mathjax> to <mathjax>$\Psi(x,t)$</mathjax>, in the
eigenbasis it's just the diagonal matrix with entries
<mathjax>$e^{-i\lambda_j t/\hbar}$</mathjax>. You can check that this is a
unitary matrix.</p>
<p>The way you write this unitary matrix in notation is
<mathjax>$\exp(-i\hat{H}t/\hbar)$</mathjax>. Nothing to be scared by: look,
we're exponentiating a matrix, but that's nothing to be worried about.</p>
<p>Suppose <mathjax>$\psi(0)$</mathjax> = <mathjax>$\ket{0}$</mathjax>, and you wanted to know <mathjax>$\psi(t)$</mathjax>.</p>
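<p>That kind of question can be answered numerically by diagonalizing the Hamiltonian and exponentiating the phases. A sketch with a toy two-level Hamiltonian, in units where ħ = 1 (both assumptions for illustration):</p>

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # toy Hamiltonian coupling |0〉 and |1〉

def evolve(psi0, t):
    # U(t) = exp(-i H t / hbar): diagonalize H, exponentiate the
    # eigenvalue phases, then change basis back.
    lam, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * lam * t / hbar)) @ V.conj().T
    return U @ psi0

psi_t = evolve(np.array([1.0, 0.0]), t=0.5)
norm = np.linalg.norm(psi_t)  # stays 1: the evolution is unitary
```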
<p><a name='8'></a></p>
<h1>CS 191: Qubits, Quantum Mechanics and Computers</h1>
<h2>Schrödinger's Equation -- Feb 9, 2012</h2>
<p>Had the sense last time that some of you might not remember all of your
linear algebra. So this time is a hybrid of linear algebra and getting you
up to speed with this bra-ket notation.</p>
<p>What we are going to be doing is looking at it from three or four different
viewpoints to try to get an intuition for why it is the way it is. So when
we get around to trying to solve the SE for specific equations, it's not
just an equation; you have a feel for it.</p>
<p>Goal for today: figure out why the Hamiltonian plays the role it does in
Schrödinger's equation.</p>
<p>So basically, the way we are going about this is that last time, we had a
rather abstract formulation of Schrödinger's equation. Why? It's because
the formulation is so clean. General form: write out hamiltonian,
diagonalize it, and once you understand the eigenvalues and eigenvectors,
you understand why it has the form it does.</p>
<p>So why does it look the way it does? Conservation laws.</p>
<p>Next week, we'll look at it for concrete systems; for continuous systems;
the behavior of an unconstrained particle. In each of these cases, we're
trying to build an intuition as to <em>why</em> the Schrödinger equation is the
way it is.</p>
<p>Not going to get into time-dependent hamiltonians until maybe the end of
the semester.</p>
<p>Do Hamiltonians correspond to quantum circuits? The way you implement gates
is by implementing a suitable Hamiltonian. But a quantum circuit
corresponds to a time-varying Hamiltonian. Topics we'll get to closer to
the end of the semester.</p>
<p>What we are starting with are the basics of quantum mechanics. Viewing it
in the case of discrete systems, i.e. qubits. We've already started with
quantum gates, quantum circuits.</p>
<p>We're going back and forth between this abstract version which is very
close to axiomatic quantum theory (but also helps with the understanding of
theory), and physical systems (hamiltonians). After a few weeks of physical
systems, we'll start talking about quantum algorithms.</p>
<p>And then, after we have developed that for a little while, how do you
implement all this in a lab? We'll go back and look at the physics of it.</p>
<p>We're sort of walking this fine line between thinking about quantum devices
as abstract entities; where all you need to know is the axioms of quantum
mechanics; thinking about what you can and cannot do, and what you have to
do to make it all happen.</p>
<p>So let's start with the basics. What I'll do today is I'll describe in a
little more detail Dirac's bra-ket notation. We've already seen this
notation to some extent, but let's do this more systematically.</p>
<p>Remember if you have a k-state quantum system, then its state is described
by a unit vector in a k-dimensional Hilbert space. This is also
equivalently described in ket notation as <mathjax>$\sum_j \alpha_j |j\rangle$</mathjax>. We love this notation
because it simultaneously highlights two aspects: this is a vector, and it
is information. For example, if k=2, this is a qubit storing a bit of
information.</p>
<p>The dual (row) vector corresponding to the state |Ψ〉 is the
bra 〈Ψ| (its Hermitian conjugate). The inner product of |Φ〉 and |Ψ〉 is
simply 〈Φ|Ψ〉; in particular, 〈Ψ|Ψ〉 is the square of the length of the
vector.</p>
<p>People who love the bra-ket notation love it because you don't have to
think. You just do what seems right and everything magically works out.</p>
<p>So if you have a vector |Ψ〉, you can talk about the projection operator
projecting onto |Ψ〉. It's a linear operator. What you want to do is design
the projection operator onto Ψ (often denoted by P) ≡ |Ψ〉〈Ψ|.</p>
<p>P² should equal P, for obvious reasons: |Ψ〉〈Ψ|Ψ〉〈Ψ| = |Ψ〉〈Ψ|, since
〈Ψ|Ψ〉 = 1, so multiple applications of the projection are equivalent to a
single application.</p>
<p>Suppose |Ψ〉=|0〉. What does P look like as a matrix? It has a 1 in the
top-left entry and 0 everywhere else:
[1 0 … 0]
[0 0 … 0]
[⋮ ⋱ ⋮]
[0 0 … 0]</p>
<p>Therefore I = ∑_j |j〉〈j|. It doesn't have to be in terms of the standard
basis: you could write down the identity in terms of any orthonormal basis in
this way. Physics refers to this as the "resolution of the identity".</p>
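<p>A minimal numerical sketch of both facts (not from lecture; assumes numpy, and uses |+〉, |−〉 as the example basis):</p>

```python
import numpy as np

# projector onto |psi> is the outer product |psi><psi|
psi = np.array([1., 1.]) / np.sqrt(2)      # |+> as an example state
P = np.outer(psi, psi.conj())

# P is idempotent: applying it twice is the same as applying it once
assert np.allclose(P @ P, P)

# resolution of the identity: summing |j><j| over any orthonormal basis gives I
plus  = np.array([1.,  1.]) / np.sqrt(2)
minus = np.array([1., -1.]) / np.sqrt(2)
I = np.outer(plus, plus.conj()) + np.outer(minus, minus.conj())
assert np.allclose(I, np.eye(2))
```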
<h2>Example</h2>
<p>Suppose you have a vector and you want to measure it in a general basis.</p>
<p>What happens if we measure |Ψ〉 in the |v〉, |v^{⊥}〉 basis? Do a change of
basis on |Ψ〉. Project |Ψ〉 onto each of the basis vectors. This is one way
of doing it.</p>
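<p>Concretely, this change-of-basis measurement can be sketched as follows (not from lecture; assumes numpy, and a hypothetical basis |v〉 = cos θ|0〉 + sin θ|1〉 with θ = 0.3):</p>

```python
import numpy as np

theta = 0.3
v      = np.array([np.cos(theta),  np.sin(theta)])   # |v>
v_perp = np.array([-np.sin(theta), np.cos(theta)])   # |v_perp>

psi = np.array([1., 0.])       # measure |0> in the v / v_perp basis

# probability of each outcome is the squared projection onto the basis vector
p_v     = abs(np.vdot(v, psi))**2
p_vperp = abs(np.vdot(v_perp, psi))**2

assert np.isclose(p_v, np.cos(theta)**2)
assert np.isclose(p_v + p_vperp, 1.0)    # probabilities sum to 1
```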
<p>==== Intermission ====</p>
<p>Goals for the midterm: fluency with the basics. Purpose of the course: not
a sequence of tests as much as getting something out of it. But to get
something out of it, you should be fluent in the maneuvers presented so
far. Not enough that you can sit down and figure it out in ten minutes.</p>
<p>Midterm will not be open-book or open-notes, but anything that you'd need
to remember will be on the midterm itself. e.g. teleportation protocol
would be given.</p>
<h1>Observables</h1>
<p>An observable is a Hermitian matrix M, i.e. M = M†. So now we have
something called the spectral theorem, which says that Hermitian matrices
can be diagonalized: you have an orthonormal basis of eigenvectors (the
eigenbasis) with real eigenvalues, and in that basis M is diagonal.</p>
<p>Suppose M = X (bit-flip). Xv = λv. (X-λI)v = 0. det(X-λI) = 0. Solve for λ,
which are your eigenvalues, and then we go back and solve for our
eigenvectors.</p>
<p>Here, we're going to do this by inspection. Eigenvectors of X would be
<mathjax>$\ket{+}, \ket{-}$</mathjax>; the corresponding eigenvalues are 1, -1.</p>
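<p>The same inspection can be checked numerically (a sketch, not from lecture; assumes numpy, whose <code>eigh</code> returns eigenvalues in ascending order):</p>

```python
import numpy as np

X = np.array([[0., 1.],
              [1., 0.]])       # bit-flip observable

# numerically solve X v = lambda v
vals, vecs = np.linalg.eigh(X)

# eigenvalues are -1 and +1
assert np.allclose(vals, [-1., 1.])

# the eigenvector for +1 is |+> up to an overall sign/phase
plus = np.array([1., 1.]) / np.sqrt(2)
assert np.isclose(abs(np.vdot(vecs[:, 1], plus)), 1.0)
```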
<p>Why is this an observable? If you were to create the right detector, we'd
observe something. We'd measure something. What we read out on the meter is
<mathjax>$\lambda_j$</mathjax> with probability equal to <mathjax>$|\alpha_j|^2$</mathjax>, and the new state
is <mathjax>$\ket{\Psi_j}$</mathjax>. What Schrödinger's equation tells us is that we should look
at the energy operator H, and then, in order to solve this differential
equation, look at it in its eigenbasis. It was not supposed to
be so frightening. You can write U(t) notationally as <mathjax>$e^{-iHt/\hbar}$</mathjax>.</p>
<h2>Why H?</h2>
<p>Why should Schrödinger's equation involve the Hamiltonian? Why the energy
operator? What's so special about energy? Here's the reasoning: from axiom
3 of quantum mechanics, which says unitary evolution, what we showed was
the unitary transformation is <mathjax>$e^{-iHt/\hbar}$</mathjax>. Any unitary transformation
can be written in this form. You can always write it in the form <mathjax>$e^{iM}$</mathjax>
for some Hermitian matrix M. The only question is, what should M be? Why
should M be the energy function? The second thing that turns out (either
something that we'll go through in class or have as an assignment) –
suppose that M is a driving force of Schrödinger's equation. So
<mathjax>$\pderiv{\Psi}{t} = M\ket{\Psi}$</mathjax>.</p>
<p>Suppose there were some observable quantity A that is conserved: i.e. if
you start out with a measurement of |Ψ(0)〉, and you do the same
measurement at time t, if A is a conserved value, then this expected value
should be the same both times. If A is conserved, then AM = MA. A has to
commute with M. This tells us that their eigenvectors are the same. The
fact that energy is conserved, therefore, says that HM = MH. Then we have
one last bit: energy is very special. It is very fundamental; these are the
building blocks; it goes right to the core. H commutes with M not for
special conditions of the system, but rather generally.</p>
<p>So you can reason that M is a function of H. And then you show that it must
be a linear function of H (aH + b), that b must be 0, and that a must be a
constant (involving Planck's constant).</p>
<p>Symmetry plays a very strong role in the way this comes about.</p>
<p><a name='9'></a></p>
<h1>CS 191: Qubits, Quantum Mechanics and Computers</h1>
<h2>Continuous Quantum States, Free particle in 1D, Heisenberg Relation</h2>
<h2>Feb 14, 2012</h2>
<p>So far we've talked about discrete quantum systems. Today, we're going to
make the transition to talking about continuous quantum systems. In 1
dimension, the particle can be anywhere on the line.</p>
<p>Schrödinger's equation in 1 dimension, Heisenberg relation.</p>
<p>iℏ ∂Ψ(x,t)/∂t = [−(ℏ²/2m) ∂²/∂x² + V(x)] Ψ(x,t) = HΨ(x,t)</p>
<p>There are some things we won't be terribly careful about making precise for
now. We will fix that later, to the extent that you won't be too upset about
it. The attempt today is to try to understand these objects on an intuitive
level. Before we do that, let's do a five-minute review of discrete
systems, just to orient ourselves when we go to continuous distributions.</p>
<p>Back to the familiar k-state systems. State is a unit vector in k-dim
Hilbert space, etc. And then we have this notion of an observable (given by
a Hermitian matrix M ≡ M†). What we like about this is that by the spectral
theorem, we have an orthonormal basis of eigenvectors corresponding to real
eigenvalues. Result is that discrete energy levels correspond to orthogonal
vector spaces.</p>
<p>Since this is an observable, the deflection of our meter is λj with
probability |〈Ψ|Φj〉|², and the new state is |Φj〉. We could ask a
couple of questions.</p>
<p>Before we continue: let's look at another way of considering M being
Hermitian: Mij = 〈Φi|M|Φj〉 = conj(〈Φj|M|Φi〉) for any Φi, Φj; not
necessarily just for basis vectors.</p>
<pre><code>* We can try to picture the measurement outcome. What our measurement
looks like is this: some probability distribution. We can
characterize this distribution by its moments, much like how we can
characterize a function by its derivatives. The more moments we have,
the more we know about our distribution.
+ mean: location.
+ standard deviation: width.
+ skewness: symmetry.
+ kurtosis: peakedness.
* We can consider the mean to be 〈Ψ|M|Ψ〉.
+ First write M in its eigenbasis, where it's a diagonal matrix.
* We can also do the same for variance. Var(X) = E(X²) - E(X)² = E(X²)
- μ², for obvious reasons. E(X²) = 〈Ψ|M²|Ψ〉: intuitive after
considering that M² preserves eigenvectors while squaring eigenvalues
(result of diagonalizability of M).
</code></pre>
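<p>The mean-and-variance recipe above can be sketched numerically (not from lecture; assumes numpy, the observable Z, and a hypothetical state with probabilities 0.2 and 0.8 on |0〉 and |1〉):</p>

```python
import numpy as np

Z = np.array([[1., 0.],
              [0., -1.]])                      # observable: phase-flip Z
psi = np.array([np.sqrt(0.2), np.sqrt(0.8)])   # amplitudes for |0>, |1>

# mean of the measurement outcome: <psi|M|psi>
mean = np.vdot(psi, Z @ psi).real

# E(X^2) = <psi|M^2|psi>, so Var = <psi|M^2|psi> - <psi|M|psi>^2
second = np.vdot(psi, Z @ Z @ psi).real
var = second - mean**2

# check against the classical distribution: +1 w.p. 0.2, -1 w.p. 0.8
assert np.isclose(mean, 0.2 - 0.8)
assert np.isclose(var, 1 - mean**2)
```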
<p>Here's what we're planning to do (hopefully) for the rest of the lecture:
it'll be in the form of a sketch, which hopefully will give you a picture
of what happens when you look at a particle in 1 dimension.</p>
<p>Before that: σ (standard deviation) is a measure of spread. If you were
really certain about a physical quantity, you'd have σ ≡ 0.</p>
<p>So, let's have a particle that's free to move about in one dimension. We'll
describe it approximately for now; we just want to understand its form. Later
we will be much more precise, but now we just want to put an image in our
minds as to just where this equation comes from. We'll want to understand the position
of the particle x (how it behaves) as well as momentum p. We'll also look
at the uncertainty relation that says ΔxΔp ≥ ℏ/2 (some constant). Show
intuitively that there must be some minimum.</p>
<p>In this continuous picture, x and p behave like something you've already
seen: bit and sign. There's something about the spread of the two. You
cannot know both of those quantities precisely, either in the one, or the
other, or both. The more certainly in one, the less certainly in the
other. Rather than do it by formula and precisely, we'll do it more
intuitively.</p>
<p>Often, when you think you want to explain something so that people really
understand, you want to go slowly. Paradoxically, it's sometimes better to
go fast. Explanation: it's easier to put all the pieces together when you
see the big picture all at once. See big picture first, then observe
individual bits later.</p>
<p>We want to talk about a lot of stuff.</p>
<p>Once again, what we are trying to do is describe the state of the particle
on an infinite line. So now, before you describe it, let's do an
approximation. Let's consider this as not infinite, but very long (take a
limit). Likewise, not continuous but very fine (also a limit). Could be at
one of various positions. Describe your state as this superposition of
states. What we're saying is that Ψ(j) is αj. When we generalize this to a
particle being anywhere on the line, the way to describe it is Ψ being a
continuous function, so Ψ(x) is a complex-valued function on the real
line. As in the discrete case, we want our distribution to be normalized.</p>
<p>Now, suppose we wanted to measure the position of this particle. Out here,
we'd have an observable, M. The corresponding observable in the continuous
case, let's call it x. We'd just do 〈Ψ|x|Ψ〉. Our inner product now is
defined with integrals.</p>
<p>Our observable now takes as input a wave function Ψ and spits out another
function as output.</p>
<p>More fuel for intuition: you should know one of the big discoveries: nature
is described by local differential equations. Every point in space only
considers its own neighborhood. It's concerned only with itself, and
nothing else. So now let's apply that principle here. How that wave
function evolves over time. So the point x is minding its own business and
its own infinitesimal neighborhood. So what does it do? The simplest thing
it does is compare itself to its neighbors, say the average of its
neighbors. (consider perceptron, maybe?) But this yields the second
derivative with respect to x, and the function smooths itself out. So we
must move in an orthogonal direction to avoid collapsing the wave
function, i.e. multiply by i.</p>
<p>Let's now try to understand where the uncertainty principle comes in.</p>
<p>Momentum we can measure in the quantum case; it's sort of a proxy for
velocity, since velocity doesn't really make sense.</p>
<p>Let's consider a fairly standard wave exp(i(kx−ωt)). Now you ask what the
equation of motion says about how it's going to evolve. Let's say Ψ(x,0) ≡
exp(ikx). ω is the rate at which it twists over time. A twisting seems to
correspond to some sort of translation. The rate of translation is directly
proportional to k.</p>
<p>So now we have a function of definite momentum. We can then decompose this
wave function in terms of these functions. These are our Fourier basis
functions. So if you want to go from the position basis to the momentum
basis, you take a Fourier transform. In other words, this is roughly
equivalent to what we do when we go from the sign basis to the bit basis,
or vice versa (what a Hadamard gate does).</p>
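<p>A numerical sketch of this oil-and-water behavior (not from lecture; assumes numpy, and uses a "participation ratio" as a rough count of how many basis states carry weight):</p>

```python
import numpy as np

# participation ratio: roughly how many basis states carry weight
def spread(a):
    prob = np.abs(a) ** 2
    return 1.0 / np.sum(prob ** 2)

N = 1024
x = np.arange(N)
narrow = np.exp(-((x - N // 2) ** 2) / (2 * 2.0 ** 2))   # width ~2 samples in x
narrow = narrow / np.linalg.norm(narrow)

# a state sharply localized in position is spread out in the Fourier
# (momentum) basis
phi = np.fft.fft(narrow) / np.sqrt(N)        # unitary-normalized FFT
assert spread(phi) > 10 * spread(narrow)

# the 2-dimensional analogue: Hadamard takes the localized |0> to the
# maximally spread state |+>
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
assert np.allclose(H @ np.array([1., 0.]), np.array([1., 1.]) / np.sqrt(2))
```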
<p><a name='10'></a></p>
<h1>CS 191: Qubits, Quantum Mechanics and Computers</h1>
<h2>Free particles, Uncertainty, Quantization</h2>
<h2>Feb 21, 2012</h2>
<p>We talked about things informally last time, and what we were going to do
was see Schrödinger's equation from various viewpoints.</p>
<p>Today, again we'll look at a free particle in one dimension, a somewhat
more rigorous treatment of the uncertainty relation, and quantization, and
we'll use that as our first implementation of a qubit.</p>
<p>What we want to do is describe how this state evolves in time. This is what
was described by Schrödinger's equation. Last time, we derived what this
equation should look like (approximately) as a result of certain
considerations.</p>
<p>Classically, the energy of this particle is a function of two parameters p
and x: E(p,x) = p²/2m + V(x).</p>
<p>So, first of all, we want to figure out what corresponds to the position in
quantum mechanics? How do you measure the position of this particle? We
said we had a position operator, and when you applied it to ψ(x), what you
got back was xψ(x). Same idea of measurables as before: we're measuring
position, so our eigenvalues correspond to the position of the
particle. What we are requiring from our position operator is exactly the
same thing.</p>
<p>So the momentum operator \hat{p} is (ℏ/i)(∂/∂x). ℏ is the reduced
Planck's constant. Again, the discrete analogue of this is a
measurable. Once again, we're ignoring constants, since we don't really care
about them here.</p>
<p>What the correspondence principle says is that what you should do (when you
know what the classical situation looks like) is to just substitute \hat{x}
and \hat{p} instead of x and p, and put that down for your energy
operator. So why do you do this? Better people than us have done this in
the past, and it seemed to work out for them.</p>
<p>So clearly, what Schrödinger's equation must say is:
iℏ ∂ψ/∂t = -ℏ²/2m ∂²ψ/∂x².</p>
<p>(ignoring potential energy as a result of it being a free particle – not
subject to any potential / potential is equivalent to 0.)</p>
<h1>Uncertainty</h1>
<p>Let's just try to see how our function (given as a function of x) looks as
a function of p. What we said last time was that this was the Fourier
transform. The Fourier basis and the standard basis are "like oil and water":
if one is maximally certain, then the other is maximally uncertain. So you
cannot come up with any wave function that is localized in both spaces. So
we can try to quantify this. We had nice pictures, but now we're going to
work directly with operators. It's going to be not vague but very precise, so
for some, it'll be great. But for others, it'll be a disaster, since there
are no pictures.</p>
<p>Remember: the thing about an observable that we care about most is the
eigenvector decomposition. So the question is: what do the eigenvectors
look like? That determines how nicely they play with each other.</p>
<p>Discrete case first. Remember you had your phase-flip operator Z, bit-flip
X. Considering the eigenvectors and eigenvalues, the claim is that this is
as good as you are going to get.</p>
<p>Another way to measure how different these eigenvectors are is to see
whether these matrices commute or not. If they commute, they have a common
set of eigenvectors. Commuting means XZ - ZX = 0. So we want to look at XZ
- ZX (a commutator, denoted by [X,Z]).</p>
<p>So what does this look like between \hat{x} and \hat{p}? We have product
rule coming into play... yielding [x,p] ≡ iℏ.</p>
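<p>The product-rule calculation can be verified symbolically (a sketch, not from lecture; assumes sympy, with ψ an arbitrary test function):</p>

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
psi = sp.Function('psi')(x)                 # an arbitrary test wave function

# momentum operator p = (hbar/i) d/dx acting on a function of x
p = lambda g: (hbar / sp.I) * sp.diff(g, x)

# the commutator [x, p] applied to psi: x p(psi) - p(x psi)
comm = x * p(psi) - p(x * psi)

# the product rule leaves behind exactly i*hbar*psi
assert sp.simplify(comm - sp.I * hbar * psi) == 0
```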
<p>We'll use this to derive ΔxΔp ≥ ℏ/2. We'll do this quickly so we'll have
time to talk about the particle in a box.</p>
<p>Recall: given an observable A and a state |Ψ〉, the expected value is
〈Ψ|A|Ψ〉. We also saw that the variance was E(x²) - E(x)², so in this case
σ² = 〈Ψ|A²|Ψ〉 - 〈Ψ|A|Ψ〉². </p>
<p>For now, assume 〈Ψ|A|Ψ〉 = 0. Makes derivation simpler, and we're just
asserting that we don't really lose much (anything, really).</p>
<p>Take |Ψ〉 ≡ A|ψ〉 and |Φ〉 ≡ B|ψ〉. Then (ΔA)²(ΔB)² = 〈Ψ|Ψ〉〈Φ|Φ〉, which by
Cauchy-Schwarz is greater than or equal to |〈Ψ|Φ〉|² = |〈ψ|AB|ψ〉|², and by
symmetry also greater than or equal to |〈Φ|Ψ〉|² = |〈ψ|BA|ψ〉|². As a result,
(ΔA)²(ΔB)² ≥ |〈ψ|AB−BA|ψ〉/2|² = |〈ψ|[A,B]|ψ〉/2|² = |〈ψ|iℏ|ψ〉/2|² = ℏ²/4. So
the product of the squared spread of position and the squared spread of
momentum is at least ℏ²/4. If you take square roots, you get the proper
result. (You can get a better bound by being more careful and also using the
anticommutator; we were being sloppy for the purpose of making things
simpler.)</p>
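<p>A numerical sanity check of the bound (not from lecture; assumes numpy, units with ℏ = 1, and a Gaussian wave packet, which happens to saturate the bound):</p>

```python
import numpy as np

# discretize a Gaussian wave packet and check dx * dp >= 1/2 (hbar = 1)
N, L = 2048, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
sigma = 1.3
psi = np.exp(-x**2 / (4 * sigma**2))
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)     # normalize

prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * dx
dx_spread = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * dx)

# momentum-space wave function via FFT; p = hbar * k with hbar = 1
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dk = 2 * np.pi / L
phi = np.fft.fft(psi)
phi = phi / np.sqrt(np.sum(np.abs(phi)**2) * dk)     # normalize in k

prob_k = np.abs(phi)**2
mean_k = np.sum(k * prob_k) * dk
dp_spread = np.sqrt(np.sum((k - mean_k)**2 * prob_k) * dk)

assert dx_spread * dp_spread >= 0.5 - 1e-3           # Heisenberg bound
assert np.isclose(dx_spread * dp_spread, 0.5, atol=1e-2)  # Gaussian saturates it
```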
<h1>Particle in a box</h1>
<p>This is basically just the infinite square well. We want to figure out our
eigenfunctions just by looking at the operator. Again the way we do this is
guess and come up with the right answer by chance. You expect the
eigenfunction to be an exponential, so we guess and check. Life is nice
that way.</p>
<p>We are trying to solve H|φ〉 = E|φ〉. Guess that |φ〉 = e^{ikx}. Let's
figure this out. Our maneuver is to say that we're dealing with a separable
equation ("decompose this problem" by looking at the eigenvectors of H).</p>
<p>Suppose our state was one of these eigenvectors. We then know what the
Hamiltonian does to it: it simply applies a scalar (the corresponding
energy level). So of course we want to write our function in this basis,
since we know how to solve the simpler differential equation.</p>
<p>The operator affects each eigenvector separately. So we can tack on the
time dependence as an afterthought (to each eigenvector).</p>
<h1>COMING UP</h1>
<p>We'll solve this problem using this very strategy. We know what the
eventual answer looks like: <mathjax>$\sum_j \alpha_j e^{-iE_j t/\hbar}\phi_j(x)$</mathjax></p>
<p><a name='11'></a></p>
<h1>CS 191: Qubits, Quantum Mechanics and Computers</h1>
<h2>Quantization, Particle in a box, Implementing Qubits</h2>
<h2>Feb 23, 2012</h2>
<h1>A more precise review</h1>
<p>By this point, we've talked about a number of things. Discrete/continuous
quantum systems, measurements, and so forth. So what I'd like to do for the
first half of the lecture is give you a slightly more formal overview of
everything we've talked about before: about the model. In some sense what
we've been talking about has been challenging, but in some sense it's also
rather simple (i.e. we can do a more precise, more formal review of what
we've covered quickly).</p>
<h2>Multiple-qubit systems</h2>
<p>So let's start from the beginning. If our state is a discrete system, <mathjax>$\psi$</mathjax>
is an element of a <mathjax>$k$</mathjax>-dimensional vector space.</p>
<p>The second thing we want to say about quantum states is the following: what
happens if you have two quantum systems, <mathjax>$a$</mathjax>, a <mathjax>$k$</mathjax>-level system, and <mathjax>$b$</mathjax>,
an <mathjax>$l$</mathjax>-level system? Now we want to understand what happens when we put <mathjax>$a$</mathjax>
and <mathjax>$b$</mathjax> together and look at them as a composite system? When you put states
together, you need to take tensor products. If we happen to be in the state
<mathjax>$|i\rangle$</mathjax> in one system and <mathjax>$|j\rangle$</mathjax> in the other system, then
corresponding to that we have a state in the composite system as <mathjax>$|i\rangle
\otimes |j\rangle$</mathjax>, which can be written as <mathjax>$|ij\rangle$</mathjax>, if you're being
especially lazy.</p>
<p>The number of dimensions we get is <mathjax>$kl$</mathjax>, which is the dimensionality of the
space in which the composite system lives.</p>
<p>This is called taking tensor products.</p>
<p>The new system inherits the properties of the old one; for instance, how do
you compute inner products? Suppose one system happened to be in the state
<mathjax>$|00\rangle$</mathjax>, and another was in <mathjax>$|++\rangle$</mathjax>. What is the angle between
these states? If you were computing the inner product, <mathjax>$\langle 00|++
\rangle$</mathjax>, then you would just take the inner products of the pieces
separately: <mathjax>$\langle 0|+\rangle\langle 0|+\rangle$</mathjax>.</p>
<p>What is more interesting to us is the probability of measuring a state, and
that corresponds to the squared magnitude of its amplitude.</p>
<h2>Evolution of states</h2>
<p>We had our Hilbert space, and our evolution was just the rotation of our
Hilbert space. This was a rigid body rotation in that it preserved
angles / distances. So the inner product is not going to change as you do
this rotation.</p>
<p>And then we had our favorite gates, which consisted of things like the
bit-flip (X), phase-flip (Z), Hadamard (H), controlled not (CNOT, a
two-qubit gate).</p>
<p>So now, here's what I wanted to get to. Suppose you have two qubits, and
you apply a gate on each of them. Now you want to understand what operation
was applied. So first we must understand the form of the answer: it will be
a 4x4 matrix. And then to understand how to write out this 4x4 matrix:
effectively, we take the tensor product of the individual gate matrices.</p>
<p><mathjax>$$A \otimes B = \begin{pmatrix}
a_{00}B & a_{01}B \\ a_{10}B & a_{11}B
\end{pmatrix}
$$</mathjax></p>
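<p>This block-matrix construction is exactly the Kronecker product; a minimal sketch (not from lecture; assumes numpy, with H on the first qubit and X on the second):</p>

```python
import numpy as np

H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)   # Hadamard
X = np.array([[0., 1.], [1., 0.]])                 # bit-flip

# applying H to qubit 1 and X to qubit 2 is the 4x4 matrix H (x) X
HX = np.kron(H, X)
assert HX.shape == (4, 4)

# it acts on product states factor-by-factor: (H(x)X)(|0>(x)|0>) = H|0> (x) X|0>
zero = np.array([1., 0.])
lhs = HX @ np.kron(zero, zero)
rhs = np.kron(H @ zero, X @ zero)
assert np.allclose(lhs, rhs)
```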
<h2>Observables</h2>
<p>So then you have measurements, and we said that measurements correspond to
an observable M, which is a Hermitian operator, i.e. <mathjax>$M_{ij} = M_{ji}^{*}$</mathjax>.</p>
<p>So now, why is an observable M? Because we said when you have a Hermitian
matrix, by the spectral theorem, you have an orthonormal eigenbasis that
correspond to real eigenvalues. What you get as the outcome of a
measurement is some <mathjax>$\lambda_j$</mathjax>. This occurs with probability equal to
the square of the length of the projection onto the eigenspace
corresponding to <mathjax>$\lambda_j$</mathjax>. The new state is <mathjax>$|j\rangle$</mathjax>.</p>
<p>There is a special observable known as the Hamiltonian <mathjax>$\hat{H}$</mathjax>, the
energy observable. In order to solve the Schrodinger equation, which looks
very complex, if you write it in terms of the eigenvectors, we can neatly
partition it into a number of simpler differential equations, one for each
eigenvector. Since these are eigenvectors, the evolution of the system
leaves the direction alone, and all it does is change the amplitude and the
phase. There is a very nice short-hand for writing this: <mathjax>$e^{-i\hat{H}t}$</mathjax>:
this is our evolution operator, and you can check that it is unitary.</p>
<h2>Continuous quantum states</h2>
<p>So now let's fill in the corresponding picture for continuous quantum
states.</p>
<p>No longer finite, and it's even continuous. So, like a particle on a line,
and now you have a probability distribution representing your amplitude
(sort of; more like intensity). Usually you talk about what the probability
is of being in some range (of being in the neighborhood of x). If you were
looking at the probability of being exactly at x itself, it would be zero
(excepting Dirac deltas). So now <mathjax>$\Psi$</mathjax> is a function mapping <mathjax>$\mathbb{R}$</mathjax> to
<mathjax>$\mathbb{C}$</mathjax>. It's normalized such that <mathjax>$\int |\Psi|^2dx = 1$</mathjax>. Another way
of saying this is that the inner product of <mathjax>$\Psi$</mathjax> with itself,
<mathjax>$\langle\Psi|\Psi\rangle$</mathjax>, is 1.</p>
<p>So what's the corresponding vector space we have for a continuous
system? We need some set of basis functions that span this space of
complex-valued functions.</p>
<p>Then we have this notion of observables in these continuous systems, which
is going to be just like it was in the discrete case. So what does it do?
It takes a state <mathjax>$|\Psi\rangle$</mathjax> and maps it to <mathjax>$M|\Psi\rangle$</mathjax>. So we'll
just have some operator that maps a wave function to a not necessarily
normalized wave function.</p>
<p>And so we have two examples that we saw: <mathjax>$\hat{x}$</mathjax>, the position
observable, which maps <mathjax>$\Psi(x)$</mathjax> to <mathjax>$x\Psi(x)$</mathjax>, and then we had <mathjax>$\hat{p}$</mathjax>,
the momentum observable, which maps <mathjax>$\Psi(x)$</mathjax> to <mathjax>$\frac{\hbar}{i}\pderiv{\Psi(x)}{x}$</mathjax>.</p>
<p>So we need these to have some notion of Hermitian. We must have
<mathjax>$\langle\phi|M|\psi\rangle = \bar{\langle\psi|M|\phi\rangle}$</mathjax>. We call this
sort of operator "self-adjoint".</p>
<p>Remember: integration by parts.</p>
<p>The bare derivative operator would be skew-Hermitian (<mathjax>$M^\dagger = -M$</mathjax>), so we had to
multiply by a factor of i to make the momentum operator Hermitian.</p>
<p>On this particular homework, all you have to do is work your way through
things like this (whether certain operators are Hermitian or not) and
compute the commutators of certain matrices. Should be an easy or useful
exercise, depending on how used to this sort of thing you are.</p>
<p>So now, let's talk about a particle in a box. We assume there is a box of
length L with infinitely high walls (i.e. infinite square well). Basically,
consider behind the boundaries of these walls there is a potential so large
that the particle cannot afford to leave.</p>
<p>So we want to solve Schrödinger's equation. What are the states? <mathjax>$H\Psi =
\frac{\hat{p}^2}{2m} \Psi = \frac{-\hbar^2}{2m}\pderiv{^2\Psi}{x^2}$</mathjax>. Rather than carry this potential around with us, we'll just impose
boundary conditions: <mathjax>$\Psi(0) = \Psi(l) = 0$</mathjax>. So this is H. Remember how we
solved Schrödinger's equation: we solved the eigenvalue problem. We tried
to figure out the eigenvectors <mathjax>$\phi_j$</mathjax> and the corresponding energies
<mathjax>$E_j$</mathjax>.</p>
<p>So now we want to understand what these eigenfunctions look like and their
corresponding energies. So what's an eigenfunction of this? The guess (what
we want) is for the eigenfunctions to look like <mathjax>$e^{ikx}$</mathjax>. Just evaluating
the right-hand side, we get <mathjax>$E_k = \frac{\hbar^2 k^2}{2m}$</mathjax>. This is both
the energy of <mathjax>$e^{ikx}$</mathjax> as well as <mathjax>$e^{-ikx}$</mathjax>. So we guessed what the
eigenfunction looked like, and then we checked.</p>
<p>So we checked that <mathjax>$\psi_E(x)$</mathjax> is going to be of the form <mathjax>$Ae^{ikx} +
Be^{-ikx}$</mathjax>. When you take linear combinations of the two exponentials listed
above, you might as well take linear combinations <mathjax>$C\cos(kx) +
D\sin(kx)$</mathjax>. This form is nicer because it is easier to impose the boundary
conditions. Enforcing the boundary conditions, we get that <mathjax>$C = 0$</mathjax> (cosine is
nonzero at x=0, and the function must vanish there) and that <mathjax>$k =
\frac{n\pi}{l}$</mathjax> for integer n.</p>
<p>We now want to find D, which we can get by enforcing that our wave function
is normalized: use the <mathjax>$\int (\sin^2 + \cos^2) = 2\int \sin^2$</mathjax> trick, which
gives <mathjax>$D = \sqrt{2/l}$</mathjax>.</p>
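<p>The resulting eigenfunctions √(2/l)·sin(nπx/l) can be checked numerically for normalization and orthogonality (a sketch, not from lecture; assumes numpy and l = 1):</p>

```python
import numpy as np

# eigenfunctions of the infinite square well on [0, l]:
# phi_n(x) = sqrt(2/l) * sin(n * pi * x / l)
l = 1.0
N = 100000
x = np.linspace(0, l, N, endpoint=False)
dx = x[1] - x[0]

def phi(n):
    return np.sqrt(2.0 / l) * np.sin(n * np.pi * x / l)

# Riemann-sum approximation to the inner product integral
overlap = lambda f, g: np.sum(f * g) * dx

assert np.isclose(overlap(phi(1), phi(1)), 1.0, atol=1e-3)   # normalized
assert np.isclose(overlap(phi(1), phi(2)), 0.0, atol=1e-3)   # orthogonal
```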
<p>Finally, let's just go back and make two nice observations. Now you finally
see how to implement a qubit. To implement a qubit, you would restrict the
energy to be small enough that the particle stays in the first two modes. And
then you would let the ground state be <mathjax>$\ket{0}$</mathjax> and the first excited state
be <mathjax>$\ket{1}$</mathjax>.</p>
<p><a name='12'></a></p>
<h1>CS 191: Qubits, Quantum Mechanics and Computers</h1>
<h1>Quantum Algorithms</h1>
<h2>Feb 28, 2012</h2>
<h1>Introduction</h1>
<p>Today we're going to make a transition to quantum algorithms. But first, a
brief review of particle-in-a-box.</p>
<h1>Particle in a Box</h1>
<p>The particle in a box is sort of a toy model for a hydrogen atom. In what
sense? In a hydrogen atom, you have a proton and an electron, and the main
force it is subject to is Coulomb attraction. So for our purposes, as a
very first, very simple model, we'll think of writing out Schr\"odinger's
equation in the radial direction.</p>
<p>We'll think of this electron as not three-dimensional, but
one-dimensional (radial). And instead of dealing with the potential as it
is, we'll approximate it by saying what the potential really does is
confine the electron to within some range <mathjax>$\ell$</mathjax>. What we're going to do is
model the situation by saying that the electron is a free particle in a
box.</p>
<ul>
<li>
<p>Aside</p>
<p>It's worth thinking about this: once we solve this, we get a first
picture what a hydrogen atom looks like. When we plot out the solution,
what does this tell us about an electron and where it sits? This gives us
a first approximation; really inexact. But for our purposes, since we are
not so much wanting an understanding of the hydrogen atom so much as
wanting a general understanding, this is a great model.</p>
</li>
</ul>
<p>Normally we consider this particle as free to move anywhere, but now we
impose an infinite potential outside of a particular region. Writing out
Schr\"odinger's equation where <mathjax>$H \equiv \frac{-\hbar^2}{2m}
\pderiv{^2}{x^2}$</mathjax>. We initially did this by guessing that the eigenvectors
were of the form <mathjax>$e^{ikx}$</mathjax>. We guessed this, figured out that the