#### Quantum Computing: Learning and Unlearning at the Same Time

Can machine learning algorithms get faster once we reach the end of Moore’s Law? According to researchers, the answer is yes through quantum computing.


The rise of silicon-based computers has, until now, been a predominant driver of the world's increase in productivity and creativity. These classical computers incorporate billions of transistors (tiny switches) that form circuits. Each circuit has a binary output, either on or off (binary logic, a 1 or a 0), and these switches can be configured to produce different computational outputs. As of 2021, Apple's M1 Max has the largest transistor count (57 billion) of any commercially available microprocessor [1]. More transistors allow a chip to perform more computations per second (measured in floating-point operations per second, or FLOPS), which means algorithms run faster.

This increase in chip efficiency (through increased transistor count) has reduced the time it takes to render videos, optimise machine learning (ML) models and run physics simulations. But the rate of improvement is slowing, contrary to Moore's law: the observation that the number of transistors in an integrated circuit (IC) doubles roughly every two years. This law will, unfortunately, reach a limit due to physical constraints. So, can computers ever get faster once we reach the end of Moore's Law? According to researchers at IBM, Google, Intel and others, the answer is yes, through quantum computing.

Quantum computing is a type of computation that exploits the principles of quantum mechanics. These computers differ from classical computers in that they harness quantum properties such as superposition, entanglement and interference to perform their calculations. We've seen that classical computers use binary logic (bits); quantum computers instead use qubits (quantum bits). Each qubit can take on any combination of 0 and 1 simultaneously, a phenomenon known as superposition.
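To make "any combination of 0 and 1" concrete, here is a minimal sketch (not from the original article) of how a qubit state is usually represented mathematically: a two-component complex vector whose squared magnitudes give the measurement probabilities.

```python
import numpy as np

# A qubit state is a 2-component complex vector (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. |alpha|^2 is the probability of measuring 0,
# |beta|^2 the probability of measuring 1.
zero = np.array([1, 0], dtype=complex)  # behaves like a classical 0
one = np.array([0, 1], dtype=complex)   # behaves like a classical 1

# An equal superposition of 0 and 1: both outcomes are possible until measured.
plus = (zero + one) / np.sqrt(2)

probabilities = np.abs(plus) ** 2
print(probabilities)  # [0.5 0.5] -- a 50/50 chance of measuring 0 or 1
```

A classical bit would always be exactly `zero` or `one`; the superposed state `plus` only collapses to one of them when measured.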

As previously mentioned, quantum computers use the basic quantum mechanical properties of superposition, entanglement and interference.

We've already mentioned that a qubit can be both 0 and 1 simultaneously. This is one of the most fundamental principles of quantum mechanics. Erwin Schrödinger famously described it in his thought experiment, appropriately called "Schrödinger's cat" [2]. I'll describe the principle with a slightly less disturbing example:

Suppose I have a coin labelled heads and tails. If I flip the coin and it lands on the desk, we can both clearly see whether it shows heads or tails (similar to a classical bit). However, if I place this coin in an opaque box and shake it around a bit, is the coin now heads or tails? The answer, slightly unintuitively, is both. The coin is both heads **and** tails until the box is opened and we observe its state.

Using a different analogy, suppose we have two balls that are exactly the same in every way except that one is red and the other is blue. If I put each ball into its own opaque box and then put those boxes into an opaque container, the balls become entangled. That is to say, neither you nor I know which box each coloured ball is in, so boxes A and B each contain both a red ball **and** a blue ball (both boxes are in superposition). The moment I reveal that box A holds the red ball, we both instantly know that the blue ball is in box B. In the same way, observing one entangled qubit reveals the value of the other.
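The two-box analogy can be sketched in code (my addition, not part of the original article) using the standard two-qubit Bell state: a superposition of "both qubits 0" and "both qubits 1", so measuring one qubit fixes the other.

```python
import numpy as np

# Two-qubit states are 4-component vectors over the basis |00>, |01>, |10>, |11>.
# A Bell state is an equal superposition of |00> and |11>.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

probs = np.abs(bell) ** 2  # probabilities of outcomes 00, 01, 10, 11
print(probs)               # only "00" and "11" ever occur, each with p = 0.5

# Simulate 1000 measurements: the two qubits always agree, so learning
# the first qubit's value tells you the second's, just like the two boxes.
rng = np.random.default_rng(0)
outcomes = rng.choice(["00", "01", "10", "11"], size=1000, p=probs.real)
assert all(o in ("00", "11") for o in outcomes)
```

Note that real entanglement is stronger than the coloured-balls picture (the correlation exists even though neither qubit has a definite value beforehand), but the measurement statistics above match the analogy.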

Light has the strange property of being both a particle and a wave, as shown in the famous "double-slit" experiment [3], and so do other atomic and sub-atomic particles (such as electrons). A physical wave can be described as a sine wave with an amplitude, a wavelength and a phase shift. When two waves interact, they interfere either constructively or destructively. The state of a quantum object is also best described as a wave, which can be converted into a probability wave. Probability waves of different quantum objects can interfere, and in a quantum computer a correctly implemented algorithm uses this interference to amplify correct answers and cancel out wrong ones.
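Interference of amplitudes can be demonstrated with a tiny sketch (my own illustration, not from the article) using the Hadamard gate: applying it once puts a qubit into superposition, and applying it a second time makes the two paths leading to the outcome 1 cancel destructively while the paths to 0 add constructively.

```python
import numpy as np

# The Hadamard gate: a 2x2 unitary that creates and undoes superpositions.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

zero = np.array([1, 0], dtype=complex)
superposed = H @ zero  # amplitudes (1/sqrt(2), 1/sqrt(2)): 50/50 outcomes
back = H @ superposed  # amplitudes (1, 0): the paths to |1> cancelled out

print(np.abs(back) ** 2)  # [1. 0.] -- measurement now gives 0 with certainty
```

The second application is the interference step: the amplitude for 1 arrives via two routes with opposite signs and sums to zero, which is exactly the mechanism quantum algorithms use to suppress wrong answers.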

There are said to be three types of quantum computer: the quantum annealer, the analogue quantum computer and the universal quantum computer.

The quantum annealer is said to be the least powerful and has the most limited applications, but it works particularly well for optimisation problems. A quantum annealer can find the most efficient wing design for an aircraft in hours, whereas a classical computer might take thousands of years, mainly thanks to the interference mechanism described previously [4]. D-Wave is the leading company using this computing method, applying it to manufacturing/logistics, financial services (e.g. trading bots) and life-science problems.
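Optimisation problems for annealers are commonly posed as a QUBO (quadratic unconstrained binary optimisation): minimise x^T Q x over binary vectors x. The toy sketch below (my illustration with made-up coefficients, not a real design problem or D-Wave's API) brute-forces a 3-variable QUBO classically; an annealer explores the same energy landscape physically, settling into low-energy states.

```python
import itertools

import numpy as np

# Illustrative QUBO matrix: diagonal entries reward setting a variable to 1,
# off-diagonal entries penalise turning on adjacent pairs together.
Q = np.array([[-1.0, 2.0, 0.0],
              [0.0, -1.0, 2.0],
              [0.0, 0.0, -1.0]])

def energy(x):
    """Energy x^T Q x of a binary assignment x."""
    x = np.array(x)
    return float(x @ Q @ x)

# Brute force all 2^3 assignments; an annealer would sample this minimum.
best = min(itertools.product([0, 1], repeat=3), key=energy)
print(best, energy(best))  # (1, 0, 1) -2.0
```

Brute force is fine for 3 variables, but the search space doubles with each added variable, which is why hardware annealers are attractive for large instances.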

The analogue quantum computer is the most popular form of quantum computing, as these machines are far faster than today's classical computers. Despite being far more difficult to manufacture than the quantum annealer, this technology is the focus of many large companies, such as Google, Microsoft, IBM Q, Rigetti, Honeywell and IonQ. It provides the tools to perform quantum simulations, allowing notoriously hard protein-folding problems to be solved. With this, treatments could be developed to combat diseases like Alzheimer's and Parkinson's, which are caused by misfolded proteins.

Lastly, universal quantum computing is the most powerful and most general form, but also the hardest to build, as it is said to require between 100K and 1M qubits. Currently, the largest number of qubits in a quantum computer is 256 [5].

Quantum computers won't completely replace classical computers, as some problems are solved much more efficiently on a classical machine. Currently, ML algorithms on classical computers can take hours or days to produce an optimal model. With quantum computing, it's predicted that these same models could be trained within fractions of a second. Volkswagen is currently applying this technology to produce near-instantaneous, near-optimal solutions to problems involving autonomous cars and road traffic [6]. Quantum ML is a promising and relatively new field, which means we're still discovering its potential capabilities. We can speculate about which ML problems will benefit from this quantum speed-up, but we won't know for sure until we try solving them.

This technology is currently in the hands of researchers within big tech companies; however, you can get ahead of the curve and start learning how to program quantum computers. If you want to learn more, you can access the online tutorials provided by IBM and Microsoft respectively [7, 8].

1. https://venturebeat.com/2021/10/18/apple-unveils-m1-pro-and-m1-max-chips-for-latest-macbook-pro-laptops/
2. https://www.thoughtco.com/what-is-schrodingers-cat-2699362
3. https://medium.com/the-physics-arxiv-blog/physicists-smash-record-for-wave-particle-duality-462c39db8e7b
4. https://docs.dwavesys.com/docs/latest/c_gs_2.html
5. https://phys.org/news/2021-07-team-quantum-simulator-qubits-largest.html
6. https://www.businessmodelsinc.com/solving-real-world-problems-with-quantum-computing/
7. https://qiskit.org/
8. https://docs.microsoft.com/en-us/learn/paths/quantum-computing-fundamentals/
