
Zuchongzhi 3.1 105-qubit superconducting processor


Pushing the Boundaries of Quantum: From Massive Entanglement to Material Breakthroughs

The quantum realm is a frontier of scientific exploration, promising revolutionary advancements in computing and our understanding of matter. Recent breakthroughs are dramatically pushing the limits of what’s possible, tackling some of the most complex challenges in quantum physics and information science. This post delves into two exciting areas: the generation and verification of large-scale entangled states for quantum computation, and novel computational methods for investigating challenging quantum lattice models.

The Quest for Large-Scale Entanglement: Overcoming Quantum Noise

Generating and verifying large-scale entangled states, especially cluster states crucial for measurement-based quantum computation (MBQC), remains a significant hurdle. The challenges are primarily experimental, stemming from the inherent fragility and complexity of quantum systems.
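To make the term "cluster state" concrete, here is a minimal, hypothetical numpy sketch (a brute-force statevector toy, not the experimental procedure): it prepares a small 1D cluster state by applying CZ gates to qubits initialized in |+⟩, then checks the defining stabilizers K_i = Z_{i-1} X_i Z_{i+1}, which are exactly the operators measured when verifying such states.

```python
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron_all(ops):
    return reduce(np.kron, ops)

def op_on(n, site_ops):
    """Tensor the given single-qubit operators onto an n-qubit identity."""
    ops = [I] * n
    for site, op in site_ops.items():
        ops[site] = op
    return kron_all(ops)

def cz(n, a, b):
    """Controlled-Z between qubits a and b."""
    P0 = np.array([[1, 0], [0, 0]], dtype=complex)
    P1 = np.array([[0, 0], [0, 1]], dtype=complex)
    return op_on(n, {a: P0}) + op_on(n, {a: P1, b: Z})

n = 5
# Start from |+>^n, then apply CZ on every nearest-neighbour pair:
# this is the 1D cluster state.
state = kron_all([H @ np.array([1, 0], dtype=complex)] * n)
for a in range(n - 1):
    state = cz(n, a, a + 1) @ state

# Every stabilizer K_i = Z_{i-1} X_i Z_{i+1} should have eigenvalue +1.
for i in range(n):
    site_ops = {i: X}
    if i > 0:
        site_ops[i - 1] = Z
    if i < n - 1:
        site_ops[i + 1] = Z
    assert np.allclose(op_on(n, site_ops) @ state, state)
print("all", n, "stabilizers satisfied")
```

At 95 qubits the statevector has 2^95 amplitudes, which is why the real experiment verifies the state through sampled stabilizer measurements rather than tomography.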

Key Experimental Challenges:

  • Scalability and Design Constraints: Building larger quantum processors capable of maintaining high-fidelity entanglement is inherently difficult, often limited by the physical design and the increasing number of interacting qubits.
  • Pervasive Noise: Quantum processors are susceptible to various errors, including signal crosstalk during parallel operations, frequency collisions between qubits, and decoherence errors from defects (two-level systems, or TLS) on the processor chip. Readout operations are particularly error-prone, with qubit frequency shifts (from the AC Stark effect) potentially causing correlated readout errors that worsen as the processor scale increases.
  • High Overhead of Error Mitigation: While essential, advanced error-mitigation techniques such as the continuous-time Markov process (CTMP) model can require an impractically large number of measurements, especially as the qubit count grows.

Groundbreaking Solutions and Achievements: Remarkable progress has been made, largely thanks to significant hardware and software innovations:

  • Zuchongzhi 3.1 Processor: A 105-qubit superconducting quantum processor has been developed, boasting improved performance metrics. Its median energy relaxation time (T1) increased from 30.8 µs to 49.7 µs, readout fidelity improved from 95.09% to 99.35%, and two-qubit CZ gate fidelity improved from 99.05% to 99.50%. These enhancements nearly doubled the previous record for large-scale entanglement generation.
  • Hardware Optimizations:
    • Electric Field Configuration: Systematic optimization of the electric field within the tunable-coupler architecture significantly enhanced qubit coherence times.
    • Improved Readout Design: Incorporating traveling-wave parametric amplifiers (TWPA) boosted the signal-to-noise ratio and readout fidelity.
    • Dynamically Optimized Parallel CZ Gates: Refined gate parameters led to higher two-qubit gate fidelity.
  • Successful Large-Scale Entanglement Generation: With these improvements, researchers successfully generated and verified 95-qubit one-dimensional (1D) cluster states with a fidelity of 0.5603 ± 0.0084 and 72-qubit two-dimensional (2D) cluster states with a fidelity of 0.5519 ± 0.0054; both exceed the 0.5 threshold that witnesses genuine multipartite entanglement. This directly supports the feasibility of MBQC.
  • Enhanced Error Mitigation Techniques:
    • Multi-Model Error Mitigation: Both Tensor Product (TP) methods and CTMP schemes were used to characterize and mitigate noise. The TP method was effective due to the observed weak correlation in noise between qubit pairs.
    • Active Microwave Crosstalk Correction: This technology was implemented to improve the fidelity of parallel single-qubit gates.
    • Dynamic Coupling Off (DCO) Technology: DCO pulses were introduced during CZ gate operations and readout to mitigate unintended re-coupling, reducing swap, leakage, and correlated readout errors.
  • Efficient Calibration and Verification: An efficient processor calibration process, covering frequency arrangement strategies and quantum gate optimization, was developed. In addition, a postselection-free fidelity estimation protocol eliminates the need for complex feedforward quantum operations when the input state is a stabilizer state, drastically reducing the measurement time for fidelity estimation: from 4.3992 × 10^8 single measurements for 51 qubits to only 1.0470 × 10^7 samples for 95 qubits.
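The tensor-product (TP) mitigation idea above can be sketched briefly: if readout errors are uncorrelated across qubits, the full confusion matrix factorizes as a Kronecker product of single-qubit confusion matrices, so correction only requires calibrating and inverting each 2×2 factor. Here is a toy numpy illustration with made-up error rates (not the Zuchongzhi calibration data):

```python
import numpy as np
from functools import reduce

def confusion_matrix(p01, p10):
    """Single-qubit readout confusion matrix.
    Column j = true state, row i = observed outcome;
    p01 = P(read 1 | prepared 0), p10 = P(read 0 | prepared 1)."""
    return np.array([[1 - p01, p10],
                     [p01, 1 - p10]])

# Hypothetical per-qubit assignment errors for a 3-qubit register.
errors = [(0.02, 0.05), (0.03, 0.04), (0.01, 0.06)]
factors = [confusion_matrix(*e) for e in errors]

# Full 8x8 confusion matrix is the Kronecker product of the factors.
A = reduce(np.kron, factors)

# Ideal outcome distribution: equal weight on |000> and |111>.
p_true = np.zeros(8)
p_true[0] = p_true[7] = 0.5

p_noisy = A @ p_true  # what the noisy device would report

# TP mitigation: invert each single-qubit factor independently.
A_inv = reduce(np.kron, [np.linalg.inv(f) for f in factors])
p_mitigated = A_inv @ p_noisy

print(np.round(p_mitigated, 6))
```

The payoff is cost: the TP model needs only per-qubit calibration (linear in qubit count), whereas a general correlated model like CTMP requires far more measurements, which is why the observed weak inter-qubit noise correlation mattered.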

Decoding Quantum Materials: Advancements in Solving the Fermi-Hubbard Model

Beyond quantum computing, understanding complex quantum materials is another grand challenge. The Fermi-Hubbard model, a fundamental quantum lattice model for correlated electrons, is notoriously difficult to solve due to the subtle energy differences between various ordered states near the ground state.
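To see why a model so compact to write down is so hard to solve at scale, it helps to look at its smallest instance: the two-site Hubbard model at half filling, which can still be diagonalized exactly. The sketch below (fermionic sign conventions assumed; not code from the work discussed here) reproduces the textbook ground-state energy (U - sqrt(U^2 + 16 t^2))/2 in the Sz = 0 two-electron sector.

```python
import numpy as np

def two_site_hubbard(t, U):
    """Two-site Fermi-Hubbard Hamiltonian in the Sz = 0 two-electron
    sector. Basis: |up-down, 0>, |0, up-down>, |up, down>, |down, up>.
    -t terms hop one electron between the sites; U penalizes double
    occupancy of a site."""
    return np.array([
        [U,  0, -t,  t],
        [0,  U, -t,  t],
        [-t, -t, 0,  0],
        [t,  t,  0,  0],
    ], dtype=float)

t, U = 1.0, 4.0
evals = np.linalg.eigvalsh(two_site_hubbard(t, U))
e0 = evals[0]
exact = (U - np.sqrt(U**2 + 16 * t**2)) / 2
print(f"ground-state energy: {e0:.6f} (exact {exact:.6f})")
```

For a 16 × 16 lattice the Hilbert space dimension is astronomically larger, and exact diagonalization is hopeless; that gap is what variational methods like the one below must bridge.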

Computational Challenges:

  • Intricate Many-Body Problem: The extremely similar energies of different ordered states near the ground state make optimization very challenging.
  • Boundary Effects in Finite Systems: To accurately capture properties, simulations require sufficiently large lattice sizes to minimize artificial boundary effects.
  • Limitations of Traditional Methods:
    • Density Matrix Renormalization Group (DMRG): Struggles with accuracy in two-dimensional systems.
    • Quantum Monte Carlo (QMC): Faces high computational complexity due to the “sign problem”.
    • Projected-Entangled-Pair-State (PEPS): High computational costs, especially under periodic boundary conditions.
    • Infinite-PEPS (iPEPS) and Density-Matrix-Embedding Theory (DMET): Impose prior conditions on the ground state.
    • Neural-Network-Based Wave-functions: While capable of representing long-range correlations, they introduce optimization difficulties and lack satisfactory energy precision on large lattices.
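The “sign problem” in the QMC bullet can be illustrated with a toy reweighting experiment: when sample weights fluctuate in sign, observables become ratios whose denominator is the average sign, and a near-zero average sign makes the statistical error of that ratio blow up. A hypothetical numpy sketch (synthetic weights, not an actual QMC simulation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sample weights with a tiny positive mean but large sign
# fluctuations: positive and negative contributions almost cancel.
n = 100_000
w = rng.normal(loc=0.01, scale=1.0, size=n)

# The estimator <O> = sum(w * O) / sum(w) divides by a nearly-cancelling
# denominator; its reliability is governed by the average sign.
avg_sign = np.mean(np.sign(w))
print(f"average sign over {n} samples: {avg_sign:.4f}")
```

In real fermionic QMC the average sign typically decays exponentially with system size and inverse temperature, so the number of samples needed for fixed accuracy grows exponentially.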

Breakthroughs with the Tensor-Backflow Method: A new approach, the Tensor-Backflow method, has emerged as a promising solution.

  • State-of-the-Art Precision at Reasonable Cost: This method has achieved exceptional energy precision for Fermi-Hubbard-type models without incurring prohibitive computational costs.
  • Enhanced Representation Ability: Its key innovation lies in embedding independent variables into independent degrees of freedom within a tensor, significantly increasing the number of variational parameters and thus the wave-function’s representational power.
  • Scalability and Precision: The Tensor-Backflow method has been successfully applied to two-dimensional lattices as large as 256 sites, reaching energy precision competitive with gradient-optimized fPEPS at large bond dimensions; precision improves further as more backflow terms (next-nearest-neighbor or all-site) are included.
  • Robust Optimization: The wave-function is optimized using a combination of Variational Monte Carlo (VMC) and a Lanczos step, which substantially improves energy precision.
  • Unbiased Simulations: Unlike some methods, Tensor-Backflow does not impose prior symmetry conditions, allowing it to successfully obtain physical properties like linear stripe order under periodic boundary conditions.
  • Efficiency: It offers an efficient state representation, achieving similar energy precision with significantly fewer parameters compared to fPEPS in certain scenarios.
  • Consistency with Phase Diagrams: Results from direct optimizations across various electron fillings and interaction strengths are consistent with known phase diagrams derived from other advanced methods like AFQMC.
  • Versatility: The method has no restrictions on lattice shapes and boundary conditions, suggesting its potential as an efficient and universal method for solving latticed fermion systems.
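The backflow idea itself is worth making concrete: instead of a fixed Slater determinant det[φ_j(x_i)], the orbital entries are allowed to depend on the rest of the configuration, which enlarges the variational class. The toy numpy sketch below (a hypothetical spinless 1D illustration, not the paper's tensor parameterization) shifts each orbital by a learnable correction whenever a neighboring site is occupied:

```python
import numpy as np

rng = np.random.default_rng(42)
L, n_el = 6, 3  # 1D lattice of 6 sites, 3 spinless fermions

phi = rng.normal(size=(L, n_el))        # bare orbitals phi_j(x)
eta = 0.1 * rng.normal(size=(L, n_el))  # toy backflow correction weights

def amplitude(occupied):
    """Determinant amplitude with a toy backflow term: the orbital row
    for an electron at site x is shifted by eta[x] once per occupied
    neighbouring site, making orbitals configuration-dependent."""
    occ = np.zeros(L)
    occ[list(occupied)] = 1.0
    rows = []
    for x in occupied:
        n_nbr = occ[(x - 1) % L] + occ[(x + 1) % L]
        rows.append(phi[x] + n_nbr * eta[x])
    return np.linalg.det(np.array(rows))

# With eta = 0 this reduces to a plain Slater determinant.
a_bf = amplitude((0, 2, 4))
print(f"backflow amplitude for occupation (0, 2, 4): {a_bf:.6f}")
```

Because the amplitude is still a determinant, fermionic antisymmetry is preserved automatically (reordering the electrons flips the sign), while the configuration dependence adds the extra variational parameters the post describes.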

The Quantum Future is Now

These dual advancements—the experimental mastery of large-scale entanglement and the computational breakthrough in simulating complex quantum materials—represent a significant leap forward. They not only validate the principles of quantum mechanics on unprecedented scales but also provide the foundational tools necessary to build more powerful quantum computers and unlock the mysteries of high-temperature superconductivity and other exotic material properties. The quantum future is unfolding, and these breakthroughs are paving the way.


