Quantum Computing: From Basics to Qiskit Workflows
Your laptop executes billions of instructions per second, yet some problems would still take longer than the age of the universe to solve. This article explains why those limits exist in classical computing and how quantum computing approaches them differently.
Using precise but accessible explanations, it introduces qubits, superposition, and entanglement, then walks through a complete Qiskit workflow—from defining circuits to executing them on simulators and real hardware. The goal is operational clarity for developers, not abstract theory or hype.
Introduction
Your laptop can execute billions of instructions per second, yet there are well‑defined problems where even that raw speed barely matters. Add just a few more variables to certain calculations—like route optimization, cryptographic key factoring, or molecular simulations—and the time required balloons past the age of the universe. This gap between processing speed and problem complexity explains why quantum computing keeps showing up in developer conversations, conference keynotes, and cloud dashboards.
What is often missing from those conversations is a concrete explanation of where the wall actually appears in classical systems. For many optimization and simulation tasks, the issue is not CPU throughput but state explosion. A classical machine evaluating all possible configurations of a system with n binary variables must handle 2ⁿ states explicitly or implicitly. At n=60, that is already over 1.15 quintillion possibilities. Even with aggressive pruning, heuristics, and GPU acceleration, entire classes of problems remain intractable.
Over the past few years, quantum computing escaped the physics lab and landed squarely in developer tooling. Google, IBM, Microsoft, and Amazon now expose real quantum processors through cloud APIs. You can open a browser, write Python code, and submit a job to hardware cooled to ~15 millikelvin using dilution refrigerators. IBM’s Quantum Falcon processors, for example, operate with qubit coherence times on the order of 100 microseconds and two‑qubit gate error rates in the 10⁻³ range. These are not theoretical devices; they are noisy but measurable systems.
The common reaction among software developers is a mix of curiosity and resistance. The terminology sounds abstract, the diagrams look intimidating, and there’s a lingering suspicion that this is all hype meant for researchers, not people who ship production systems. That emotional barrier is real—and unnecessary. Modern quantum platforms intentionally abstract hardware details behind SDKs, circuit compilers, transpilers, and job schedulers that mirror familiar cloud workflows.
This article acts as a translation layer between quantum theory and everyday software development. It introduces the formal model behind qubits, superposition, and entanglement using just enough linear algebra to be precise. It then walks through an end‑to‑end workflow using Qiskit, from circuit definition to execution and analysis. The goal is not to turn you into a physicist, but to give you operational clarity about what quantum computers actually do and when they are worth using.
From Bits to Qubits: Why Classical Computing Has Limits
Why doesn’t adding more CPU cores solve every hard problem? Classical computing is built on bits—values that are either 0 or 1—and deterministic logic gates that transform those bits in predictable steps. Given the same input, a classical program always produces the same output. This reliability is why debugging works and why distributed systems can be reasoned about.
However, classical computation fundamentally operates on explicit representations. A register of n bits can represent exactly one of 2ⁿ possible states at a time. Parallelism allows multiple registers to be evaluated simultaneously, but each core still handles one concrete state. Even GPUs, which run tens of thousands of threads concurrently, only scale linearly with the amount of hardware you add.
To make this limitation precise, consider the memory requirements of brute‑force simulation. Representing a probability distribution over 40 binary variables requires 2⁴⁰ ≈ 1.1 trillion floating‑point values. At 8 bytes per double, that is nearly 9 terabytes of memory—before performing any computation. This is why exact Bayesian inference, Ising model simulation, and exhaustive constraint satisfaction collapse beyond modest sizes.
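The arithmetic above is easy to check directly. Here is a back-of-the-envelope sketch (variable names are illustrative):

```python
# Memory needed to store one double-precision value per state
# of a system with n binary variables.
n = 40
num_states = 2 ** n            # 2^40 = 1,099,511,627,776 states
bytes_needed = num_states * 8  # 8 bytes per IEEE 754 double
terabytes = bytes_needed / 1e12

print(f"{num_states:,} states -> {terabytes:.1f} TB")
```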
Modern CPUs push performance through parallelism, caches, branch prediction, SIMD vectorization, and multi‑core designs. Even so, they remain bound by fundamental limits. Moore’s Law slowed as transistor feature sizes dropped below 10 nm, and Dennard scaling broke down due to power density. Performance gains increasingly come from specialization rather than general‑purpose speedups.
The limits become obvious with problems that scale combinatorially. For example, exact solutions to the traveling salesperson problem require evaluating (n−1)!/2 routes. At n=20, that is already ~6×10¹⁶ possibilities. Even with branch‑and‑bound and cutting‑plane methods, worst‑case complexity remains exponential.
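The route count is exact and can be verified in two lines:

```python
import math

# Distinct tours of a symmetric TSP with n cities: (n-1)!/2.
n = 20
routes = math.factorial(n - 1) // 2  # ~6.08e16 possibilities
```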
Quantum computing exists because these limits are structural, not due to a lack of clever engineering. It introduces a computational model where state is represented as a vector in a complex Hilbert space. An n‑qubit register is described by a vector in ℂ^(2ⁿ), and quantum gates act as unitary operators on that space. Importantly, a single gate application updates all 2ⁿ amplitudes at once, something no known classical method can emulate efficiently in general.
Qubits Made Precise: Superposition with Formal Grounding
If classical bits are either 0 or 1, what exactly is a qubit? Formally, a qubit is a unit vector in a two‑dimensional complex vector space. It is written as:
|ψ⟩ = α|0⟩ + β|1⟩
where α and β are complex numbers satisfying |α|² + |β|² = 1. The squared magnitudes correspond to measurement probabilities. If you measure the qubit in the computational basis, you obtain 0 with probability |α|² and 1 with probability |β|².
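As a quick numerical check of the Born rule, here is a plain NumPy sketch (not Qiskit API) for an example state with α = 0.6 and β = 0.8i:

```python
import numpy as np

# |psi> = alpha|0> + beta|1> with complex amplitudes.
alpha = 0.6 + 0.0j
beta = 0.0 + 0.8j
psi = np.array([alpha, beta])

# Normalization: |alpha|^2 + |beta|^2 must equal 1.
assert np.isclose(np.vdot(psi, psi).real, 1.0)

# Measurement probabilities in the computational basis.
p0 = abs(alpha) ** 2  # probability of reading 0
p1 = abs(beta) ** 2   # probability of reading 1
```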
This formalism has direct operational consequences. Any physically realizable quantum gate must be unitary, meaning U†U = I. This constraint preserves normalization and implies reversibility: there is no quantum equivalent of a lossy NAND gate. Erasure and irreversibility only occur at measurement boundaries.
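The unitarity constraint is simple to verify numerically. The sketch below checks U†U = I and reversibility for the Hadamard gate:

```python
import numpy as np

# Hadamard gate: maps |0> to (|0>+|1>)/sqrt(2) and |1> to (|0>-|1>)/sqrt(2).
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

# U†U = I: conj().T is the conjugate transpose (the dagger).
assert np.allclose(H.conj().T @ H, np.eye(2))

# Reversibility: applying H twice returns the original state.
psi = np.array([0.6, 0.8])
assert np.allclose(H @ (H @ psi), psi)
```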
A useful visualization tool is the Bloch sphere. Any single‑qubit pure state can be mapped to a point on the surface of a unit sphere parameterized by angles θ and φ:
|ψ⟩ = cos(θ/2)|0⟩ + e^{iφ} sin(θ/2)|1⟩
In this representation, common gates correspond to rotations:
- Pauli‑X: rotation of π radians around the X‑axis.
- Pauli‑Z: rotation of π radians around the Z‑axis (phase flip).
- RZ(λ): rotation by λ around the Z‑axis, often used in variational circuits.
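These gates have small, concrete matrix forms. The sketch below (plain NumPy, using the RZ convention Qiskit documents) checks that RZ(π) equals Pauli‑Z up to a global phase, which has no observable effect:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])   # Pauli-X (bit flip)
Z = np.array([[1, 0], [0, -1]])  # Pauli-Z (phase flip)

def rz(lam):
    """Rotation by angle lam around the Z-axis: diag(e^{-i*lam/2}, e^{i*lam/2})."""
    return np.array([[np.exp(-1j * lam / 2), 0],
                     [0, np.exp(1j * lam / 2)]])

# RZ(pi) = diag(-i, i) = -i * Z, i.e. Pauli-Z up to a global phase.
assert np.allclose(1j * rz(np.pi), Z)
```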
On real hardware, these rotations are implemented using calibrated microwave pulses. For example, IBM superconducting qubits use DRAG pulses with durations on the order of 35–70 nanoseconds for single‑qubit gates. This mapping from abstract matrices to physical control signals is handled by the transpiler and pulse scheduler.
Entanglement Quantified: Multi‑Qubit States and Correlations
When multiple qubits are combined, their joint state lives in a tensor product space. Two qubits require four amplitudes:
|ψ⟩ = α₀₀|00⟩ + α₀₁|01⟩ + α₁₀|10⟩ + α₁₁|11⟩
The size of this representation doubles with every added qubit. A 50‑qubit state vector requires 2⁵⁰ complex amplitudes, which is why classical statevector simulation typically caps out around 30–32 qubits on commodity hardware.
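The doubling is visible directly with a Kronecker product (plain NumPy, illustrative names):

```python
import numpy as np

# Single-qubit basis states.
zero = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)  # (|0> + |1>)/sqrt(2)

# The joint state of two qubits lives in the tensor product space.
two_qubits = np.kron(plus, zero)
assert two_qubits.shape == (4,)

# Each added qubit doubles the amplitude count: 2^n for n qubits.
n = 10
state = zero
for _ in range(n - 1):
    state = np.kron(state, zero)
assert state.shape == (2 ** n,)  # 1024 amplitudes for 10 qubits
```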
A state is entangled if it cannot be factored into |ψ₁⟩⊗|ψ₂⟩. The Bell state created by applying a Hadamard followed by a CNOT gate is a canonical example:
|Φ⁺⟩ = (|00⟩ + |11⟩)/√2
Entanglement can be quantified using measures such as von Neumann entropy or concurrence. For |Φ⁺⟩, the reduced density matrix of either qubit is maximally mixed, with entropy equal to 1 bit. This is a precise, measurable statement about information content, not a metaphor.
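The entropy claim can be verified by partial tracing the Bell state. A NumPy sketch:

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2); amplitudes ordered 00,01,10,11.
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Density matrix of the joint state, reshaped to (a, b, a', b') for tracing.
rho = np.outer(phi_plus, phi_plus.conj()).reshape(2, 2, 2, 2)

# Reduced density matrix of the first qubit: trace out the second.
rho_a = np.einsum("ijkj->ik", rho)
assert np.allclose(rho_a, np.eye(2) / 2)  # maximally mixed

# Von Neumann entropy in bits: -sum(p * log2 p) over eigenvalues.
eigvals = np.linalg.eigvalsh(rho_a)
entropy = -sum(p * np.log2(p) for p in eigvals if p > 1e-12)
```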
From a programming standpoint, entangling gates impose connectivity constraints. On hardware where qubits are arranged in a heavy‑hex lattice (as on IBM Falcon and Eagle devices), a CNOT may only be available between adjacent qubits. The transpiler inserts SWAP gates to route interactions, increasing circuit depth and error probability.
How Quantum Algorithms Gain an Edge
Quantum advantage arises when algorithms exploit interference. Each gate redistributes amplitude across states. Constructive interference increases the probability of desired answers, while destructive interference suppresses others.
Grover’s algorithm provides a concrete example. Given an oracle that marks a target state by flipping its phase, Grover iterations rotate the state vector toward the target. After approximately π/4·√N iterations, the probability of measuring the target approaches 1. Classically, an unstructured search requires O(N) queries; Grover reduces this to O(√N), which is provably optimal.
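A direct statevector simulation (plain NumPy, no Qiskit) makes the amplitude amplification visible for N = 16:

```python
import numpy as np

N = 16       # search space size (4 qubits)
target = 11  # index of the marked item (arbitrary choice)

# Uniform superposition over all N states.
s = np.full(N, 1 / np.sqrt(N))

# Oracle: flip the phase of the target. Diffuser: reflect about the mean.
oracle = np.eye(N)
oracle[target, target] = -1
diffuser = 2 * np.outer(s, s) - np.eye(N)

# Optimal iteration count ~ pi/4 * sqrt(N) -> 3 iterations for N = 16.
psi = s.copy()
for _ in range(3):
    psi = diffuser @ (oracle @ psi)

p_target = abs(psi[target]) ** 2  # ~0.96 after 3 iterations
```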
Shor’s algorithm uses the quantum Fourier transform (QFT) to find the period of a modular exponentiation function. The QFT decomposes into O(n²) controlled phase rotations. On a fault‑tolerant machine, factoring a 2048‑bit RSA modulus would require on the order of 20 million physical qubits when error correction overhead is included—well beyond current capabilities, but within long‑term roadmaps.
Near‑term relevance comes from hybrid algorithms. VQE estimates molecular ground‑state energies by minimizing ⟨ψ(θ)|H|ψ(θ)⟩. QAOA maps combinatorial optimization problems to Ising Hamiltonians of the form:
H = Σᵢ hᵢ Zᵢ + Σᵢⱼ Jᵢⱼ ZᵢZⱼ
This formulation directly corresponds to problems like Max‑Cut, portfolio optimization, and scheduling.
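For a concrete instance, the sketch below takes a triangle graph with hᵢ = 0 and Jᵢⱼ = 1 and brute‑forces the correspondence: the spin assignments minimizing the Ising energy are exactly the maximum cuts (plain Python; function names are illustrative):

```python
import itertools

# Triangle graph: edges with coupling J_ij = 1, local fields h_i = 0.
edges = [(0, 1), (1, 2), (0, 2)]

def ising_energy(z):
    """H = sum over edges of J_ij * z_i * z_j for this instance."""
    return sum(z[i] * z[j] for i, j in edges)

def cut_value(z):
    """Edges crossing the partition: (1 - z_i z_j)/2 per edge."""
    return sum((1 - z[i] * z[j]) // 2 for i, j in edges)

# Brute force over all spin assignments z_i in {+1, -1}.
best = max(itertools.product([1, -1], repeat=3), key=cut_value)
assert cut_value(best) == 2      # the max cut of a triangle is 2
assert ising_energy(best) == -1  # and it has the minimal Ising energy
```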
End‑to‑end Qiskit Workflow: Concrete VQE Implementation
To make this operational, consider computing the ground‑state energy of molecular hydrogen using Qiskit 1.0.2.
Step 1: Environment setup
```shell
pip install qiskit==1.0.2 qiskit-nature==0.7.2 pyscf==2.4.0
```
Step 2: Define the problem Hamiltonian
```python
from qiskit_nature.second_q.drivers import PySCFDriver

driver = PySCFDriver(atom="H 0 0 0; H 0 0 0.735", basis="sto3g")
problem = driver.run()
```
This produces a second‑quantized Hamiltonian with 4 spin‑orbitals, which maps to 4 qubits using the Jordan–Wigner transformation.
Step 3: Choose ansatz and optimizer
```python
from qiskit.circuit.library import EfficientSU2
from qiskit_algorithms.optimizers import SPSA

ansatz = EfficientSU2(num_qubits=4, reps=2)
optimizer = SPSA(maxiter=200)
```
Step 4: Execute on simulator
```python
from qiskit_aer import AerSimulator

backend = AerSimulator(method="statevector")
```
Wiring the Hamiltonian, ansatz, and optimizer into a VQE run against this backend converges on a laptop to −1.137 Hartree, within chemical accuracy (≈1.6 milli‑Hartree) of the exact STO‑3G solution. Running the same circuit on a 7‑qubit hardware backend yields higher variance but similar trends after error mitigation.
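The optimization loop at the heart of VQE is ordinary classical code. The sketch below runs the same idea on a toy single‑qubit Hamiltonian H = Z with ansatz |ψ(θ)⟩ = RY(θ)|0⟩, whose exact ground energy is −1 (plain NumPy standing in for the Qiskit primitives, with gradient descent standing in for SPSA):

```python
import numpy as np

Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # toy Hamiltonian H = Z

def ansatz(theta):
    """|psi(theta)> = RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """<psi(theta)| H |psi(theta)>; analytically equal to cos(theta)."""
    psi = ansatz(theta)
    return psi @ Z @ psi

# Classical outer loop: minimize the energy over the parameter theta.
theta, lr = 0.5, 0.4
for _ in range(100):
    grad = (energy(theta + 1e-4) - energy(theta - 1e-4)) / 2e-4
    theta -= lr * grad

# Converges to theta ~ pi, where the energy hits the exact minimum -1.
assert abs(energy(theta) - (-1.0)) < 1e-3
```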
Optimization and Simulation: Quantitative Use‑Case Comparison
Consider Max‑Cut on a 10‑node graph. A classical exact solver scales as O(2ⁿ). Using QAOA with depth p=1 requires 10 qubits and ~20 two‑qubit gates. On ibm_nairobi, the circuit depth is ~120 after transpilation, with an average cut value within 90–95% of the optimum after 1,000 shots.
For chemistry, classical coupled‑cluster methods (CCSD(T)) scale as O(N⁷) in basis size. VQE scales with circuit depth and measurement count. For small molecules like LiH, VQE on simulators matches CCSD energies with fewer than 1,000 circuit evaluations.
These are not blanket replacements for classical methods, but they demonstrate regimes where quantum approaches are competitive or complementary.
Framework Comparison (Operational Differences)
- Qiskit: Python‑first, supports pulse‑level control via OpenPulse, integrates with IBM Runtime primitives.
- Cirq: Emphasizes explicit circuit topology, strong for NISQ algorithm research.
- Amazon Braket SDK: Unified access to IonQ (trapped‑ion, ~99.3% two‑qubit fidelity) and Rigetti superconducting devices.
Conclusion
Quantum computing replaces explicit state enumeration with linear algebra over exponentially large spaces. Qubits, superposition, and entanglement are not metaphors but formal tools grounded in complex vector spaces and unitary transformations.
- Quantum advantage emerges from interference patterns, not parallel classical evaluation.
- Qiskit and related SDKs provide production‑grade workflows with reproducible results.
- Optimization and simulation show measurable, if narrow, advantages today.
The most practical next step is hands‑on experimentation. Build a circuit, run it on a simulator, then on hardware, and quantify the gap. That feedback loop—not hype—determines where quantum computing earns its place in real systems.