An Introduction to Quantum Computers and Quantum Coding

Author: Murphy  |  View: 24650  |  Time: 2025-03-23 11:44:18

Preface:

This is an adaptation of something I wrote for one of my physics courses, so it assumes a level of knowledge in mathematics and science. The topics referenced in the article include some linear algebra, superposition, basic algorithm concepts, and a bit of modular arithmetic when discussing algorithms. However, since you're reading an article on quantum computing, you're likely savvy enough to look up and understand all the ideas referenced. Furthermore, all sources are cited, so you can explore any of them for deeper learning. Also, all images and figures used were generated by me, using tools like Microsoft Word, PyCharm, and diagrams.net, unless otherwise noted.

Why Does This Matter?

Before committing to a somewhat lengthy and dense read, you might be wondering why this matters to you, even if you've never touched a quantum computer. The reality is that breakthroughs are happening all the time, and quantum computing holds real relevance in different computational fields, especially machine learning. For starters, the quantum analogs of classical algorithms have potential to be much more efficient.

One example of this is the Quantum Support Vector Machine. Notably, classical SVMs often use the kernel trick to transform data into a higher dimensional space so that they can locate a separating hyperplane. However, quantum SVMs would have a significant advantage as they naturally represent data in exponentially higher dimensional spaces without the computational strain that classical computers face. This allows quantum SVMs to handle more complex datasets more efficiently.

Another example is in the realm of neural network training. The basic unit of quantum computation, the qubit, can be entangled with other qubits, creating correlations that classical systems can't replicate. While entanglement offers possibilities for correlated updates across a quantum neural network, it's important to note that the concept is still under research.

Part 1: Introduction to Quantum Computing

Quantum computers function very differently from classical computers, leveraging quantum properties and phenomena to greatly increase computational power. At a high level, there are a few tenets of quantum computing that differentiate it from classical computation: qubits versus bits, quantum versus classical logic gates, the presence of quantum phenomena, and the opportunities offered by quantum computing's enhanced computational capabilities.

At the core of quantum computing is the qubit, which serves as the fundamental unit of computation in a quantum computer– taking the place of a classical computer's bit. While the bit can occupy either the 0 or 1 state exclusively, the qubit can be in a superposition of the 0 and 1 states (Microsoft, n.d.). It can be very hard to conceptualize the qubit; where the classical bit is simply an electric current or absence of electric current, the qubit can take many different physical forms.

These include "spin" qubits, the most straightforward example. This type of qubit uses the spin property of a particle (generally an electron) to complete computations. To initialize a spin qubit, an electron is trapped using, for example, a quantum dot, then manipulated using magnetic fields that interact with its spin state (Harvey, 2024). The computational difference between a bit and a qubit is significant, and stems from the qubit's ability to be affected by quantum phenomena like superposition between the 0 and 1 states and entanglement with other qubits (Microsoft, n.d.).

One tool that is very helpful in visualizing the state of a qubit is the Bloch sphere; it is effectively just a sphere with north and south poles representing |0⟩ and |1⟩ respectively, and all other points along the sphere representing linear combinations of the poles' values (Microsoft, 2024). Since this representation of the qubit uses a complex vector space, the state of the qubit will be described in Dirac notation hereafter. This visualization of the superposition state of a qubit aids in the understanding of quantum logic gates especially because it allows for a geometric understanding of the operation being performed. Generally, when a qubit is initialized it is in the z-basis |0⟩ state, which is analogous to the classical 0 state (Quantum-Inspire by QuTech, 2024).

Bloch Sphere illustration, Wikipedia. (n.d.) https://en.wikipedia.org/wiki/Bloch_sphere. (GFDL License)
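To make the Bloch sphere picture concrete, here is a minimal sketch in plain Python. The helper `bloch_angles` is my own illustrative function (not from any quantum library): it takes the two complex amplitudes of a state |ψ⟩ = α|0⟩ + β|1⟩ and recovers the polar and azimuthal angles (θ, φ) of the corresponding point on the sphere, using the standard parameterization α = cos(θ/2), β = e^{iφ} sin(θ/2).

```python
import math

def bloch_angles(alpha: complex, beta: complex):
    """Return (theta, phi) on the Bloch sphere for a normalized qubit state."""
    norm = abs(alpha) ** 2 + abs(beta) ** 2
    assert abs(norm - 1.0) < 1e-9, "state must be normalized"
    theta = 2 * math.acos(abs(alpha))
    # phi is the relative phase between the two amplitudes
    phi = math.atan2(beta.imag, beta.real) - math.atan2(alpha.imag, alpha.real)
    return theta, phi

# |0> sits at the north pole (theta = 0), |1> at the south pole (theta = pi)
print(bloch_angles(1 + 0j, 0j))    # (0.0, 0.0)
print(bloch_angles(0j, 1 + 0j))    # theta = pi

# An equal superposition lies on the equator (theta = pi/2)
s = 1 / math.sqrt(2)
print(bloch_angles(s + 0j, s + 0j))
```

Note that the normalization check matters: only unit vectors in the two-dimensional complex state space correspond to points on the sphere.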

Another key difference between the classical and quantum computer is the logic gates: while classical computers use AND, OR, NOT, etc. to perform basic logic operations, quantum computers use quantum logic gates such as X, Hadamard, Toffoli, and CNOT (Wikipedia, 2024). These quantum gates are used to perform logical operations on a single qubit or a very small number of qubits, and can be combined with others to perform more complex operations and manipulations.

  1. First, the X gate is very similar to the classical NOT gate: it flips the state of a qubit– if a qubit is in the |0⟩ state, it inverts to the |1⟩ state, and vice versa.
  2. Next, the Hadamard gate is used to put a qubit in the |0⟩ state into an equal superposition of |0⟩ and |1⟩.
  3. Third, the Toffoli gate is an example of a multi-qubit gate. It operates on three qubits, two of them "control" and one the "target," and it inverts the target qubit only if both control qubits are in the |1⟩ state (Roy & Chakrabarti, 2024).
  4. Finally, the CNOT gate is a very common gate in quantum computing, and we will examine a use case later on. The CNOT is also a multi-qubit gate, with one control qubit and one target qubit; when the control qubit is in the |1⟩ state, it flips the state of the target qubit.

These are just a few examples of many interesting quantum logic gates, and it is important to note that unlike classical logic gates, there is not necessarily a physical "gate" that the qubits pass through, but rather these are just operations that are performed on the qubit which take different forms depending on many factors.
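Because these gates are linear operations, they can be written as small unitary matrices acting on vectors of amplitudes. The sketch below (plain Python, matrices as nested lists, not tied to any particular framework) shows the X, Hadamard, and CNOT gates acting on basis states:

```python
import math

s = 1 / math.sqrt(2)
X = [[0, 1], [1, 0]]           # bit flip: |0> <-> |1>
H = [[s, s], [s, -s]]          # Hadamard: |0> -> (|0> + |1>) / sqrt(2)
# CNOT on two qubits, basis ordered |00>, |01>, |10>, |11> (first qubit = control)
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

def apply(gate, state):
    """Multiply a gate matrix by a state vector of amplitudes."""
    return [sum(g * a for g, a in zip(row, state)) for row in gate]

ket0 = [1, 0]
print(apply(X, ket0))               # [0, 1]: the |1> state
print(apply(H, ket0))               # equal superposition: [0.707..., 0.707...]
print(apply(CNOT, [0, 0, 1, 0]))    # |10> -> |11>: control is 1, so target flips
```

Squaring the magnitudes of the resulting amplitudes gives the measurement probabilities, which is why the Hadamard output yields 0 or 1 with probability 1/2 each.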

A third major difference between classical and quantum computing is the presence of quantum phenomena such as superposition, superconductivity, entanglement, and interference. These properties are used in different ways depending on the methods used to perform quantum computations (Microsoft Azure, 2024). Another property that is present is quantum decoherence, which poses a serious problem to the development of useful or widespread quantum computing. Quantum decoherence occurs when a particle in superposition interacts with its environment and becomes entangled with it, ultimately interfering with the computational outcome (Brandt, 1999).

The computational advances of a quantum computer are great: take, for example, the algorithms used for finding the prime factors of an integer. In classical computing, one of the leading prime factorization algorithms is the General Number Field Sieve (Wikipedia, 2024). It runs in sub-exponential time, faster than truly exponential but still far slower than any polynomial in the number of digits, which shows how hard it can be to factor a very large number. Compare that to the leading quantum algorithm, Shor's Algorithm, which runs in logarithmic space and polylogarithmic time with respect to N (that is, polynomial in the number of digits)– a complicated expression that boils down to the fact that it is vastly more efficient (Li et al., 2022). Obviously this is just one example, but it serves as a testament to the power of quantum computing– turning a problem with no known efficient classical algorithm into one that runs in polynomial time in the input size is truly remarkable.
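It helps to see why order finding is the heart of Shor's Algorithm. Factoring N reduces to finding the order r of a base a modulo N (the smallest r > 0 with a^r ≡ 1 mod N); the quantum circuit's only job is to find r quickly. The sketch below is plain Python that finds r by brute force, purely to illustrate the classical number-theoretic post-processing that surrounds the quantum step (the function names are mine, for illustration):

```python
import math

def order(a: int, N: int) -> int:
    """Smallest r > 0 with a**r % N == 1 (brute force; the quantum part of
    Shor's Algorithm exists to compute this exponentially faster)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N: int, a: int):
    """Derive nontrivial factors of N from the order of a, when usable."""
    assert math.gcd(a, N) == 1
    r = order(a, N)
    if r % 2 != 0:
        return None                 # odd order: retry with another base
    half = pow(a, r // 2, N)        # a^(r/2) mod N
    if half == N - 1:
        return None                 # trivial square root: retry
    return math.gcd(half - 1, N), math.gcd(half + 1, N)

# Factoring 15 with base 7: the order of 7 mod 15 is 4, giving factors 3 and 5
print(shor_classical(15, 7))        # (3, 5)
```

The classical brute-force loop above takes up to r steps, and r can be on the order of N; the quantum Fourier transform finds r in time polynomial in log N, which is where the entire speedup lives.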

Part 2: Quantum Programming: Languages, Compilers and Algorithms

Though their hardware is fundamentally different from classical computers', quantum computers are programmed using languages often similar in syntax to classical languages. These include QCL, Qiskit, and Q#, which are based on the syntax of C, Python, and C#/F# respectively. Furthermore, their compilers are built with C++, Python and C++, and C# respectively (IonQ, 2024). Thus, classical and quantum languages can be very similar syntactically– the main difference comes from the content of the programs and how quantum algorithms are structured.

Before examining different languages, their syntaxes, and how they compare to the classical languages that they're based around, it's important to understand the content of a quantum program and why no matter how similar the syntax is, there is an unbridgeable gap between a classical and quantum program.

This stems from the mechanics of a quantum computer: as discussed before, quantum computation is based around holding qubits in superposition and applying different "gates" to them– effectively transformations of their state along the Bloch sphere. What this boils down to is that unlike classical programming, where you write a program that utilizes a pre-made circuit to perform computations, quantum programming is the act of actually encoding the circuit. Let's examine some pseudocode and its associated quantum circuit to better understand this. Perhaps the simplest possible quantum program/circuit, the following program simply initializes a qubit, applies a Hadamard gate to put it into a superposition, then measures the state of the qubit.

Image by author.

The associated quantum circuit for this program is:

The "H" represents a Hadamrd gate, and the meter represents a measurement being taken. Image by author.

The double line following the measurement symbol indicates that the qubit is no longer in a superposition, but rather one of two discrete states (0 or 1) since its wave function was collapsed during the measurement.
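That Hadamard-then-measure circuit can be simulated classically in a few lines. The sketch below (plain Python, my own illustrative `measure` helper) samples the measurement many times: each shot collapses the state to 0 with probability |α|² and to 1 with probability |β|², so the equal superposition lands near 50/50:

```python
import math
import random

def measure(state):
    """Sample a measurement outcome from a single-qubit state [alpha, beta]."""
    p0 = abs(state[0]) ** 2
    return 0 if random.random() < p0 else 1

s = 1 / math.sqrt(2)
superposition = [s, s]      # the result of applying H to |0>

random.seed(0)              # fixed seed so the run is repeatable
counts = [0, 0]
for _ in range(10_000):
    counts[measure(superposition)] += 1
print(counts)               # roughly [5000, 5000]
```

A real quantum computer would have to re-run the circuit for every shot, since each measurement destroys the superposition; the simulator can cheat by reusing the state vector.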

To get a better feel for the syntax of different quantum languages, let's look at programs in the three aforementioned languages that all serve identical purposes. All three programs are made to create a Bell state, which is an entangled state of two qubits. The gates (operations) applied are: a Hadamard on the first qubit (qubit 0), then a CNOT on the second qubit (qubit 1) with the first qubit as the control. The function of the CNOT gate here is effectively to entangle the two qubits (Rioux, 2023).

#Qiskit (Python based)
from qiskit import QuantumCircuit

#Create a quantum circuit with two qubits
qc = QuantumCircuit(2)

#Apply a Hadamard gate on the first qubit
qc.h(0)

#Apply a CNOT gate with the first qubit as control and the second qubit as target
qc.cx(0, 1)
//Q# (C#/F# based)
namespace BellState{
  operation PrepareBellState() : Unit{
    using (qubits = Qubit[2]) {
      //Apply a Hadamard gate on the first Qubit
      H(qubits[0]);
      //Apply a CNOT gate with the first qubit as control and the second qubit as target
      CNOT(qubits[0], qubits[1]);
    }
  }
}
//QCL (C based)
qureg q[2];
//Apply a Hadamard gate on the first qubit
H(q[0]);
//Apply a CNOT gate with the first qubit as control and the second as target
CNot(q[1], q[0]);

Unsurprisingly, the quantum programs look very similar in syntax to the languages each is based on: for example, the Pythonic program uses a couple of built-in methods and doesn't have much else going on, while the C#-based program is full of curly brackets. Reviewing the syntax of a few quantum languages is helpful for understanding what a quantum program looks like, but the reality is that the hardware being used is so different that the actual code in a quantum program would be useless to a classical computer, and vice versa. Because of that, it is much more interesting to analyze two algorithms made for the same purpose, one classical and one quantum, and dissect the different steps taken in each case to achieve the result.
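Before moving on, it is worth verifying what those three programs actually compute. The sketch below simulates the Bell-state circuit with plain Python (amplitudes ordered |00⟩, |01⟩, |10⟩, |11⟩, matrices written out by hand rather than taken from any framework): a Hadamard on qubit 0, then a CNOT with qubit 0 as control.

```python
import math

s = 1 / math.sqrt(2)
# Hadamard on qubit 0, tensored with the identity on qubit 1
H_on_q0 = [[s, 0, s, 0],
           [0, s, 0, s],
           [s, 0, -s, 0],
           [0, s, 0, -s]]
# CNOT: control = qubit 0, target = qubit 1 (swaps |10> and |11>)
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

def apply(gate, state):
    """Multiply a gate matrix by a state vector of amplitudes."""
    return [sum(g * a for g, a in zip(row, state)) for row in gate]

state = [1, 0, 0, 0]            # both qubits start in |0>
state = apply(H_on_q0, state)   # qubit 0 into superposition
state = apply(CNOT, state)      # entangle the pair
print(state)                    # [0.707..., 0, 0, 0.707...]
```

The result has amplitude 1/√2 on |00⟩ and |11⟩ and zero on the cross terms: measuring either qubit immediately determines the other, which is exactly the entanglement the Bell state is meant to demonstrate.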

Recall the example presented in Part 1 (GNFS and Shor's algorithm), where we looked at the time complexities of two prime factorization algorithms. As both algorithms are rather abstract and complex, it may be easier to understand their respective theories in paragraph format instead of examining their pseudo code.

The classical algorithm, General Number Field Sieve, can be summarized into five main algorithmic steps (Case, n.d.). Throughout the explanation, "N" refers to the number being factored.

  1. The first step is polynomial selection: this step involves selecting two polynomials such that they multiply to smooth numbers when evaluated at certain points modulo N.
  2. The next step is the "sieve" step: the goal is to find pairs of integers (a, b) such that both of the selected polynomials, evaluated at (a, b), yield smooth numbers.
