How I made a software package to write Quantum Computer Algorithms
This is the transcript of a talk by Sumner Alperin-Lea, presented at Git Commit Show 2019.
About the speaker
Sumner Alperin-Lea is a graduate researcher at the University of Toronto. In this talk he demonstrates a software package that he and his team at the Matter Lab built to make it easy to write optimized algorithms for quantum computers, bringing us closer to a quantum computer science future. Today he talks about "Tequila", a software package his group has been working on for about eight months and which is currently in live beta. You can find it at https://github.com/aspuru-guzik-group/tequila
Transcript
What do we mean when we talk about a quantum computer? What is a quantum computer?
It's quantum, in the sense that it's an array of quantum systems. In general, those are two-level systems, because we have a sensible understanding of what those do and they are easy to manufacture. We call such a two-level system a qubit, the quantum analog of a bit. Using a quantum computer means controlling the unitary evolution of a quantum state. For those not fresh on their math, a unitary is a matrix whose conjugate transpose is its inverse: when you multiply it by its conjugate transpose, you get the identity. So applying it is a specific kind of matrix multiplication. Physical quantum computers are things like tiny ions held in a magnetic field or a laser trap, the polarization of individual photons in a photonic device, or even superconducting circuits chilled down to single-digit temperatures, where these macroscopic devices begin to exhibit quantum behavior despite their micrometer-to-centimeter size.
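For reference, the defining property of a unitary matrix described above, written out explicitly:

$$U^{\dagger} U = U U^{\dagger} = I, \qquad U^{\dagger} = (U^{*})^{T}$$

That is, the conjugate transpose of a unitary is its inverse.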
It's a computer, in the sense that it implements exactly two operations. The first is that a unitary from the 2^n-dimensional unitary group acts on vectors in the 2^n-dimensional Hilbert space; no matter the unitary and no matter the state, that should take time at most polynomial in n, specifically no greater than n squared. There also needs to be some kind of quantum measurement, which we refer to as a POVM (Positive Operator Valued Measure), and that should happen in constant time. It takes a quantum state on n qubits to a bit string of length n; this is "collapsing the wave function", if we're all familiar with that term. How do these devices compare to classical computers?
Well, a qubit stores information densely; that's one of its strengths. If you have n qubits, you would need on the order of 2^n classical bits to store the same state (an n-qubit state is a superposition over all 2^n bit strings, each with its own amplitude), and when n is anything modest, say 16 or 32 or more, that's more classical bits than you're going to be able to find. They apply these matrices faster as well; that n-squared scaling is a lot better than what you'd expect classically. And entanglement on a quantum device, though this requires a little more rigorous formalization, is a computational resource in itself. These devices aren't magic, though: it's been proven that you can't copy an arbitrary quantum state just given the state, so there is no copy and paste on a quantum computer. When you measure a state, you change it permanently, so there's a loss of information every time you try to get something out of the computer; readout is a one-time event. And in a more meta-textual sense, these devices are mind-boggling: it's hard to wrap an intuition around them, and they don't work the way we expect them to based on our experience with the rest of the macroscopic world, which can influence our thinking about them in ways that make them harder to use. The graphic I've included here is credited to the Wikipedia article on quantum complexity theory. How does quantum complexity relate to classical complexity?
It includes all of P, some of NP, none of NP-complete (as far as we know), and some of PSPACE. That's a weird overlap, but there's a lot of useful stuff that goes on outside of P and that isn't NP-complete, so there's still a lot of power there. Just to put the field in context a bit, I would identify three major events, if I had to pick.
Richard Feynman, in a paper in the early 1980s, first theorized quantum computers: what would you need to simulate quantum systems, which people were, and still are, having trouble with? It makes sense that you would need a device that could behave like a quantum system, or indeed was one, to replicate important quantum behavior. In 1994, Peter Shor published his factoring algorithm. This does in polynomial time what the best known classical algorithms require super-polynomial time to do. Famously, this means that the RSA encryption your bank account uses is no longer safe the second someone gets their hands on a fully operational quantum computer. So a motivating factor in the further development of quantum computers and quantum algorithms was making sure some evil mastermind with a quantum computer couldn't destroy the global economy. In 2019, the most recent major milestone: scientists working at Google announced quantum supremacy for the first time. That's the first time a real quantum computer, on a specific and well-defined task, did better than any supercomputer could, by orders of magnitude.
That's the first time a quantum computer, in a physical test case, really proved its worth, so to speak. But I need to tell you a little bit about the state of the field. We're entering what's been called the noisy intermediate-scale quantum era; NISQ is what it's usually called in the quantum community. Here the devices we have are between tens and hundreds of qubits, and they're noisy. If you're going to do anything useful, you need algorithms that are in some way resistant to noise, to random errors we can't control, and you need them to run fast. You can't run a deep circuit with hundreds of thousands of operations; that's not going to fly. Hundreds to a thousand operations is where we're at. Now, the kind of algorithms that work well in the NISQ era are called VQAs, Variational Quantum Algorithms. They're a hybrid class of algorithms in which some parametrized quantum operations are optimized by classical optimization, in a sort of three-way loop between:
1. The quantum device running a quantum circuit (that's how quantum scientists refer to these unitary matrix transforms that operate on quantum states)
2. We have some measurement apparatus to send data back to a classical computer running a classical optimizer.
3. We iteratively improve the performance of whatever task we've defined and have encoded into the quantum program
This leverages the best of both worlds as well as we can. There's a lot of theory, and even some evidence, suggesting that these are much more noise-resistant than strictly deterministic, unparameterized quantum algorithms.
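To make the loop concrete, here is a small, self-contained Python sketch of that hybrid iteration. It is not Tequila code; the single-qubit cosine cost and the parameter-shift gradient are just illustrative stand-ins for the quantum evaluation and the classical update.

```python
import numpy as np

def quantum_expectation(theta):
    # Stand-in for the quantum step: a single qubit prepared with Ry(theta)
    # and measured in the Z basis has expectation value cos(theta). A real VQA
    # would estimate this from repeated measurements on hardware or a simulator.
    return np.cos(theta)

def vqa_loop(theta=2.0, learning_rate=0.2, steps=50):
    shift = np.pi / 2
    for _ in range(steps):
        # gradient of the cost via the parameter-shift rule (two circuit evaluations)
        grad = 0.5 * (quantum_expectation(theta + shift) - quantum_expectation(theta - shift))
        # classical optimizer step: simple gradient descent on the measured cost
        theta -= learning_rate * grad
    return theta, quantum_expectation(theta)

print(vqa_loop())  # converges toward theta = pi, where the expectation value is -1
```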
The most important VQA is the VQE, the Variational Quantum Eigensolver, originally pioneered by my research group back in 2010 or so. Given some Hamiltonian H and some parametrized unitary U(θ), your goal is to minimize the expectation value of that Hamiltonian with respect to the state created by the unitary.
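In symbols, with the circuit acting on a fixed reference state, the VQE objective is

$$E(\theta) = \langle \psi(\theta)|\, H \,|\psi(\theta)\rangle = \langle 0|\, U^{\dagger}(\theta)\, H\, U(\theta)\,|0\rangle, \qquad \theta^{*} = \arg\min_{\theta} E(\theta).$$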
Ansatz choices are critical here: they need to relate to the Hamiltonian in some way, or you're not going to have any luck. The real varying thing in this family of algorithms is what kind of classical parameter optimizer you use, what kind of unitary you're using, how you transform the Hamiltonian, and so on. There's also the Quantum Approximate Optimization Algorithm (QAOA), due to Edward Farhi, and this is a little bit different: it has a well-defined ansatz, and here you've got a Hamiltonian into which you've somehow encoded the right answer, and you maximize, so there's a little bit of a difference in the way people think about these algorithms. So, the software I've developed is Tequila. It stands for Tequila: Extensible QUantum Informatics and Learning Architecture, a recursive acronym like GNU, and it's focused mainly on these VQAs, especially on the VQE, and especially targeting chemical applications. Quantum chemistry is one of the leading applications of quantum computing in the current age.
The point of Tequila, and the real design philosophy here, is that we wanted to create a package that works the way quantum scientists think. A lot of the time we talk about these ridiculous operators that are easy to write down, exponentials of some arbitrarily large Hermitian matrix; we talk about gates in terms of generators, about optimizing strangely transformed expectation values and unitaries, and we wanted to make that easy. The slide here showed example code from Tequila, demonstrating how quickly you could write up and optimize a bizarre exponential-of-a-square objective. I'm going to dive right into some useful tutorials for Tequila.
So we're going to restart and run a bit. Basic usage of Tequila: it's a Python package, and it interfaces with a large number of other packages. Your favorite quantum simulators, like Cirq and Qiskit, the packages affiliated with Google and IBM respectively, are all available here. So just import tequila as tq and see what simulators we have. On my machine we've got Qulacs installed, a very powerful, very fast simulator, as well as Cirq and a built-in symbolic simulator, with several capabilities: wave function simulation, sampling, and, in some cases, noisy simulation; this little subheading tells you what's installed. It's easy to create circuits, again the quantum analog of a program: you just use the Tequila gates module, and it has all the commonly defined gates you might look for.
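The exact code from the talk's slides isn't reproduced in the transcript, but a minimal sketch of this first step in Tequila's Python API might look like this (the specific two-qubit circuit is just an illustration):

```python
import tequila as tq

# list which simulator backends Tequila can see in this environment
tq.show_available_simulators()

# build a small circuit by adding gates from the tequila gates module:
# a Hadamard on qubit 0 followed by a CNOT with control 0 and target 1
circuit = tq.gates.H(target=0) + tq.gates.CNOT(0, 1)
```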
Let's see what that circuit looks like when you print it out: we just get a list of the gates, the qubits they act on, and what controls them. So we've got a two-qubit circuit here. We can draw the circuit out using Cirq's draw feature, and you get a picture of what the circuit looks like: a Hadamard gate followed by a controlled-NOT. That's about as basic as a quantum circuit gets, but you can get a lot stranger than that. You can do a double Hadamard, and you can also build some long circuits; this is all just playing around with various ways of initializing gates. If you have a generalized rotation gate, which is just the exponential of a projector onto a state, Tequila can do that too, and it can compile it down to simpler gates. We can do an easy simulation: this somewhat complicated circuit acting on the all-zero state leads to one of the four magic basis states, the uniform superposition over all states, who'd have thought. We can simulate it with other backends too; you can specify which backend you want to use. You could measure instead of reading out the wave function: when we run this circuit with 10 samples, you always get the zero state, and with a different circuit and considerably more samples you get a mixture, say six counts of one bit string and four of another.
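A sketch of inspecting, simulating, and sampling a circuit like the one above (the sample count is arbitrary; drawing requires an installed backend such as Cirq):

```python
import tequila as tq

circuit = tq.gates.H(target=0) + tq.gates.CNOT(0, 1)

# print the gate list, then simulate the full wave function on the default backend
print(circuit)
wavefunction = tq.simulate(circuit)
print(wavefunction)

# sample measurement outcomes instead of reading out the exact wave function
counts = tq.simulate(circuit, samples=10)
print(counts)

# tq.draw(circuit)  # renders the circuit if a drawing backend like Cirq is installed
```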
But we're focused here on parameterized quantum circuits. So here we simulate a specific quantum gate whose angle is initialized with just a string variable. All you need to do in Tequila to work with parameterized circuits is pass down a dictionary of strings and numbers, and wherever you've thrown a variable of any hashable type into the circuit, we will handle that for you. So you don't have to do any kind of deep-in-the-weeds wrangling with the simulation backend. But more importantly than that, we can transform
this circuit. So we can exponentiate an angle inside: you can say, I'm going to feed you this number, but I want you to exponentiate it every time, and you just do that with the .apply method. You can get bizarre with your transformations: you could take a Gaussian of it, apply your transformation as seen here, and run your circuit. Here we run it with the parameter set to 1 and we get out a specific one-qubit state. We could also have an angle that's a more bizarre hashable type; any hashable type will work, so if you want to get bizarre and you pass down a tuple, that's a silly example, but it works too. We try to be as flexible in the code base as people might want to be. But the real focus here is on optimization, and for that we have these really easy classes at the core of Tequila: the Objective and ExpectationValue classes. An Objective is just any transformation of any quantum or classical data, and an ExpectationValue is what you measure on a quantum circuit, made up of some Hamiltonian and some unitary. We implement that directly as a class so that you don't have to figure out how to do it yourself. We can use a simple unitary, Ry of some angle, and our Hamiltonian will be sigma x; simulating that with the parameter a equal to 1.0 gives us back an expectation value of 0.84147. This is a bounded expectation value: since the Hamiltonian is a single Pauli word, it has to be between -1 and 1. But let's say we cared about this and wanted to optimize it, or wanted to do it with some other circuit. We could build any Hermitian matrix we want and use it as a projector or as a Hamiltonian, building it out of these Pauli words, the Z, Y, and X, in whatever fashion we want.
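A hedged sketch of that parameterized-circuit and expectation-value workflow; the variable name "a" and the value 0.84147 (which is sin(1.0)) follow the talk's example:

```python
import tequila as tq

# a single-qubit rotation whose angle is just a named variable
U = tq.gates.Ry(angle="a", target=0)

# variable values are passed as a plain dictionary at simulation time
wfn = tq.simulate(U, variables={"a": 1.0})

# an expectation value <psi(a)| X |psi(a)> built from a Hamiltonian and a unitary
H = tq.paulis.X(0)
E = tq.ExpectationValue(H=H, U=U)
print(tq.simulate(E, variables={"a": 1.0}))  # ~0.84147, bounded between -1 and 1
```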
And we can optimize over more complex things. You can compile objectives and run them, so when we compile an objective with this sort of bizarre Hamiltonian, X on all four qubits, Y on two, Z on zero and one, tensored together, we can plot it over several values and see that it's just this little oscillating thing. What if we want to run E squared plus one, where E is our former objective? Well, we get a Gaussian, or rather two Gaussians: the whole thing sits above zero and oscillates with a different period. What if we want to shift it by e to the minus a squared, where a is the same variable we've been using in the circuit? You can do these sorts of weird interacting transformations of quantum and classical data. Want to get crazy? What if you want to exponentiate the negative square of the expectation value itself, and then add the exponential of the negative square of a variable multiplied by an expectation value?
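A hedged sketch of how that last construction might be written, assuming Tequila's Objective and Variable arithmetic and the .apply method behave as in its tutorials; numpy.exp is used here for plain simulation, and an autodiff-compatible function may be needed instead if you want gradients of the transformation:

```python
import numpy
import tequila as tq

a = tq.Variable("a")
U = tq.gates.Ry(angle=a, target=0)
E = tq.ExpectationValue(H=tq.paulis.X(0), U=U)

# exp(-E^2): transform the measured expectation value with a classical function
part1 = (-(E ** 2)).apply(numpy.exp)

# exp(-a^2) * E: a purely classical function of the same variable, times an expectation value
part2 = (-(a ** 2)).apply(numpy.exp) * E

camel = part1 + part2
print(tq.simulate(camel, variables={"a": 1.0}))
```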
It looks kind of like the back of a camel, one hump a little bigger than the other, but everything works perfectly fine however you want to build your objective. You can take gradients of objectives, and we interface the package JAX, the successor of autograd, which lets us automatically differentiate functions over quantum objectives, so that you can optimize whatever you want, agnostic of how the gradient is built, if you're doing gradient-based optimization. You see here, if we take the gradient of some objective, here we know it's this sinusoidal function, we get back that its gradient over the variable is just what we'd expect for that sinusoidal function, and the second derivative is, as we expect for sinusoidal functions, the negative of the original.
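A short sketch of taking those derivatives with tq.grad, using the sin(a)-shaped expectation value from before:

```python
import tequila as tq

U = tq.gates.Ry(angle="a", target=0)
E = tq.ExpectationValue(H=tq.paulis.X(0), U=U)   # simulates to sin(a)

dE = tq.grad(E, "a")      # first derivative: behaves like cos(a)
d2E = tq.grad(dE, "a")    # second derivative: behaves like -sin(a)

print(tq.simulate(dE, variables={"a": 1.0}))   # ~0.5403
print(tq.simulate(d2E, variables={"a": 1.0}))  # ~-0.8415
```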
We have more examples going even further: for this weird camel-hump thing, here is the first derivative and this somewhat bizarre second derivative. I'm a little low on time, so I'll dig real quick into interfacing more popular optimizer packages. Tequila, in addition to having plugins for all your favorite quantum simulation packages, also, because it's targeted at VQAs, has interfaces to popular classical optimization routines, and we've built a setup that makes them easier for you to use. So let's say we have this Hamiltonian: sigma x plus some constant times sigma z, minus sigma z on a different qubit. This Hamiltonian matrix is given here, and you can optimize over it, so we'll build an ansatz; that's what these unitary circuits are called. We have nice drawing features, so you can see that it's this series of gates controlling several parameterized gates, and it's got two classical readouts.
It's easy to simulate: this circuit has three variables, a, b, and c; setting them all to the same value you get negative six times ten to the who-knows-what, and squaring it and simulating you get some long decimal, as you'd expect for arbitrary variable values. You can optimize easily by running the BFGS optimizer, a gradient-based optimizer that's quite popular in the quantum community.
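The talk's three-parameter circuit and Hamiltonian aren't reproduced in the transcript, so this sketch runs the BFGS optimization on the simpler single-qubit expectation value from earlier; the history-plotting call is an assumption based on the plotting features mentioned in the talk:

```python
import tequila as tq

U = tq.gates.Ry(angle="a", target=0)
E = tq.ExpectationValue(H=tq.paulis.X(0), U=U)

# gradient-based minimization with BFGS through Tequila's optimizer interface
result = tq.minimize(objective=E, method="bfgs", initial_values={"a": 0.5})
print(result.energy)  # should approach -1.0, the minimum of sin(a)

# result.history.plot("energies")  # plotting the convergence, if matplotlib is available
```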
You can minimize this, and we get that the best we can do is in the ballpark of negative two. We can get the final parameters and energy, and we've built in plotting features, so you can see there's this rapid convergence to the minimum. We can plot the angles over time, and we can plot the gradients over time as well; this final plot shows the angle c over time, but also its gradient from optimization step to optimization step. I think I'm a little bit over time, but I'd like to point everyone interested back to the GitHub.
Now it's time for questions.
1. How do I get started with quantum computing as a programmer who doesn't know the physics?
If one has a strong enough programming background but needs to get deeper into the physics, there's one specific textbook that, in the quantum science community, is the first thing your supervisor puts on your desk on your first day of graduate school, and that is Quantum Computation and Quantum Information by Michael Nielsen and Isaac Chuang. It's affectionately referred to as Mike and Ike, and it sits above every quantum scientist's desk. It's not the cheapest thing in the world, but I'd be surprised if you couldn't get hold of it with a little bit of finesse.
One problem that affects the quantum community is that nobody shares their code, and even if they did, you wouldn't be able to tell what anyone else's code did unless you were familiar with the idiosyncrasies of umpteen different packages. So there's not a straight answer on how to start developing seriously once you're familiar with the coding apparatus, because the truth is it's sort of a Tower of Babel situation at the moment.
2. Can you share more about your setup? When you code this, do you test it on a simulator, or is there anything else involved, like testing it on a live quantum computer?
We do have plugins so that you can, straight from Tequila, interface with the quantum computers available today. To my knowledge, there are only two companies that make actual quantum devices available to the public: the Rigetti corporation and IBM. We have plugins for those, so for anything that runs in a reasonable amount of time, we do try to make sure our code works well with them. Of course, machine time is expensive, so we try not to waste it, but mostly testing is done by making sure everything works with the local simulators. We're at a point in the quantum community where you still want to know what a real quantum computer would do before you send your algorithm to it.
3. What kind of calculations can we actually do that add value? What's the difference between a normal computer and what we're doing here with currently existing quantum computers?
I'll be honest: not much yet. The noise remains a problem, though progress, in recent years exponential progress, is being made. The size remains a problem too; again, exponential progress is being made, but toward a goal that is still some years away. It isn't like cold fusion, which has been twenty years away for a hundred years, but it is still a more distant goal.
One application where it's quite clear that a quantum computer will be necessary is simulating the energies of Hamiltonians for chemical systems. You need extremely good accuracy there, because the rates of chemical reactions depend exponentially on the differences between the energies of various molecules, so a little error in the energy becomes massive. A fully working, large enough quantum computer can do this in time polynomial in the number of electrons, where a supercomputer takes exponential time, and it can do it better in that time. So in less time it can do better, and the number of electrons that matters is pretty small overall in terms of the active space, which is how you generalize this: the electrons that matter are the ones participating in chemical bonds.
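For a concrete sense of that exponential sensitivity, reaction rates follow an Arrhenius-type law,

$$k \propto e^{-\Delta E / k_{B} T},$$

so a small error in the computed energy difference ΔE turns into a large error in the predicted rate k.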
Thank you so much!!