
Episode 58: History of Quantum computing - Part 1

  • Writer: Embedded IT
  • Sep 22



Quantum computing has been discussed for decades, but only recently has it begun to move from theory towards something practical. Understanding where it started, why progress was slow, and what has changed helps explain why quantum is now accelerating so quickly.


This first part of the history of quantum computing looks at how early ideas turned into real machines, and why today’s systems still behave very differently from classical computers.


Where the idea of quantum computing came from


The concept of a quantum computer dates back to the early 1980s. Physicist Richard Feynman argued that simulating nature accurately would require a computer built on the same quantum principles that govern the physical world.


From that point on, researchers began exploring whether quantum mechanics could be used for computation. For many years this work was entirely theoretical. Mathematicians and physicists developed algorithms for machines that did not yet exist, defining how quantum operations would behave if a computer could ever be built.


From theory to early quantum machines


By the mid-2010s, theory started to become reality. In 2016, the first small-scale quantum computers became accessible via the cloud. These early systems were extremely limited, containing only a handful of qubits, but they marked a major shift.


Making these machines publicly accessible was a deliberate decision. Quantum computers cannot be programmed in the same way as classical systems, so building a skills base early was essential. Without that, even powerful quantum hardware would be unusable.


Why qubits behave differently from bits


Classical computers use bits, which are either zero or one. Qubits behave very differently. They exist in a state called superposition: a weighted combination of zero and one that persists until they are measured.


Once measured, a qubit collapses into a classical value. Because of this, quantum programs must be run many times, with results analysed statistically to infer the correct outcome.
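A toy simulation makes this concrete. The sketch below is plain Python, not a real quantum SDK: it models measuring a qubit as a probabilistic collapse governed by the Born rule, and shows why a program must be run many "shots" before the statistics reveal the underlying state. The probability value and shot count are illustrative assumptions.

```python
import random

def measure(p_one: float) -> int:
    """Collapse a qubit that reads 1 with probability p_one, else 0."""
    return 1 if random.random() < p_one else 0

# A qubit in an equal superposition reads 0 or 1 with 50/50 odds.
# Any single shot tells us almost nothing; many shots reveal the state.
random.seed(0)  # fixed seed so the run is repeatable
shots = 10_000
ones = sum(measure(0.5) for _ in range(shots))
print(f"1s observed: {ones}/{shots}")  # lands near half the shots
```

Real hardware works the same way at the interface level: a circuit is submitted with a shot count, and the result is a histogram of outcomes rather than a single answer.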


Qubits are also fragile. Their quantum state only exists for a short time before noise causes it to break down, turning the system into little more than a random number generator.
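That fragility is usually characterised by a coherence time. As a rough illustration (assuming a simple exponential decay model and a made-up coherence time, since real figures vary widely by hardware), the fraction of usable quantum state falls off quickly:

```python
import math

T2_US = 100.0  # assumed coherence time in microseconds (illustrative only)

def coherence(t_us: float) -> float:
    """Fraction of phase information surviving after t_us microseconds,
    under a simple exponential decoherence model."""
    return math.exp(-t_us / T2_US)

for t in (0, 50, 100, 200):
    print(f"t={t:>3} us  coherence={coherence(t):.2f}")
```

Once coherence is gone, further operations act on noise, which is why circuits must finish well within this window.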


The limits of today’s quantum computers


Modern quantum systems are still in what is known as the noisy intermediate-scale quantum (NISQ) era. Progress is measured along several dimensions at once:


  • The number of qubits available

  • How stable those qubits are

  • How long their quantum state lasts

  • How quickly operations can be performed


Another major challenge is error correction. Many physical qubits are needed to create a single reliable logical qubit. This is one reason quantum computing remains complex and difficult to scale.
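The idea behind spending many physical qubits on one logical qubit can be illustrated with a classical repetition code. This is only an analogy: real quantum error correction (for example, surface codes) is far more involved, because quantum states cannot simply be copied. Still, the sketch below shows the core trade: redundancy plus majority voting turns an unreliable 10% physical error rate into a much lower logical one.

```python
import random

def encode(bit: int, n: int = 3) -> list[int]:
    """Repetition code: spread one logical bit across n physical bits."""
    return [bit] * n

def apply_noise(bits: list[int], p_flip: float) -> list[int]:
    """Flip each physical bit independently with probability p_flip."""
    return [b ^ 1 if random.random() < p_flip else b for b in bits]

def decode(bits: list[int]) -> int:
    """Majority vote: correct as long as fewer than half the bits flipped."""
    return int(sum(bits) > len(bits) // 2)

# With p = 0.1 per physical bit, the 3-bit logical error rate is
# P(2+ flips) = 3*p^2*(1-p) + p^3, roughly 2.8% instead of 10%.
random.seed(1)
trials = 10_000
errors = sum(decode(apply_noise(encode(0), 0.1)) != 0 for _ in range(trials))
print(f"logical errors: {errors}/{trials}")
```

More redundancy suppresses errors further, which is why estimates for fault-tolerant machines call for large numbers of physical qubits per logical one.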


Where quantum computing is heading


Despite these challenges, progress is rapid. Systems now contain hundreds of qubits, with multiple chips linked together to run larger programs. Roadmaps focus on improving stability, reducing errors, and increasing the number of qubits that can work together reliably.


Quantum computing is no longer just a theoretical exercise. It is still early, complex, and imperfect, but the foundations are now firmly in place.


For organisations trying to understand how emerging technologies like quantum computing could shape future technology strategy and procurement decisions, get in touch.

