Research

With the advancement of computers, life in the information society has developed beyond what was once imaginable. However, no matter how far current computer technology progresses, there will always be problems that are fundamentally hard to solve. But what if we go back to the fundamental principles, the physical laws that describe our world, and rethink the question: “What is information processing?” This is the challenge that quantum information science takes on.

What is quantum information science?

Quantum information science is a field that explores new frameworks for information processing by leveraging the laws of quantum mechanics. Quantum mechanics is a fundamental theory of physics that describes microscopic phenomena, such as the behavior of atoms and faint light, establishing universal principles that govern our world.

Quantum computers, which fully harness the principles of quantum mechanics for information processing, exhibit fundamentally different characteristics from conventional computers, opening up new possibilities in information processing. Conventional computers, in contrast to quantum computers, are referred to as classical computers. While classical computers process information using bits, quantum information processing employs quantum bits (qubits) as its fundamental unit. As a result, quantum computing is expected to achieve substantial speedups over classical computing for certain computational tasks, such as integer factorization and numerical simulations of quantum mechanical phenomena. Additionally, quantum communication enables cryptographic techniques that are inherently resistant to eavesdropping and information-theoretically secure.

Given these capabilities, quantum computers are set to drive groundbreaking innovations in information processing, establishing quantum information science as a rapidly expanding field of interest. However, unlike classical bits, qubits are highly susceptible to even small amounts of noise, which poses a major experimental challenge for realizing scalable quantum devices with many qubits. To achieve large-scale quantum information processing, fault-tolerant quantum computation is essential, employing quantum error correction techniques to protect qubits from noise. Consequently, research and development efforts worldwide are actively progressing toward the realization of quantum computers capable of fault-tolerant quantum computation. Additionally, ongoing research continues to explore practical applications of quantum information processing, aiming to unlock its full potential in shaping the future of our information-driven society.

Our research: Quantum information theory

Our research explores the foundational theory of quantum information science—quantum information theory—and its applications. Quantum computers hold promise for a wide range of fields, including machine learning, condensed matter physics, quantum chemistry, and cryptography. We develop methods to identify the types of computational problems where quantum computing offers a distinct advantage and to leverage that advantage effectively, particularly in applied domains such as machine learning. Realizing practical quantum information processing also requires careful system design for quantum computers. Since qubits are highly susceptible to noise, implementing sophisticated fault-tolerant protocols while maintaining computational efficiency is crucial. However, a significant challenge lies in balancing the efficient use of quantum memory space with high-speed information processing, even while adhering to these protocols—an issue we are actively addressing. At the same time, theoretical frameworks used to analyze information processing also offer valuable insights into the physical properties of quantum mechanics. From this perspective, our research investigates the optimal performance and fundamental limits of quantum information processing, deepening our understanding of the quantum mechanical principles that underpin quantum information science.

Our goal is to identify the meaningful applications and ultimate realizations of quantum computers while uncovering their optimal performance and fundamental limits under the laws of quantum mechanics, thereby establishing a comprehensive theoretical foundation that bridges advancements in quantum technology with the future evolution of our information-driven society.

Topic: Quantum computation and quantum machine learning

It is a fundamental fact in mathematics that any integer can be factorized into prime numbers, such as 6 = 2 × 3 and 15 = 3 × 5. However, consider a much larger integer—one with, say, 2048 digits. Identifying its prime factors is extremely difficult with current computational technology. Even the best-known classical algorithms factor such large integers in subexponential but still superpolynomial time, resulting in an impractically long runtime on conventional computers. In contrast, quantum computing can solve this problem efficiently, in a polynomial number of computational steps, thanks to a celebrated quantum algorithm known as Shor’s algorithm, thereby enabling a substantial speedup in factoring large integers.
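
For a rough sense of scale (the exact expressions depend on the algorithm and cost model, so the following is indicative rather than definitive), the best-known classical factoring algorithm, the general number field sieve, has heuristic running time

    T_{\mathrm{classical}}(N) = \exp\!\left( \left( \tfrac{64}{9} \right)^{1/3} (\ln N)^{1/3} (\ln \ln N)^{2/3} \, (1 + o(1)) \right),

which grows faster than any polynomial in the bit length of N, whereas Shor’s algorithm requires only a number of elementary quantum operations polynomial in the bit length, roughly

    T_{\mathrm{Shor}}(N) = O\!\left( (\log N)^{3} \right)

in straightforward implementations.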

This highlights how quantum computing has the potential to achieve what would be practically impossible with classical computational technologies. But ask yourself—have you ever truly needed to factor a large integer in your life? Beyond problems with limited real-world impact, we are working to identify applications of quantum technologies that drive meaningful advancements in society and everyday life.

Quantum computing is being actively explored to accelerate a wide range of computational tasks across various fields, including machine learning, condensed matter physics, quantum chemistry, and cryptography. Among these, quantum machine learning—leveraging quantum computation to enhance machine learning methods—has attracted significant attention as a promising application of quantum computing. However, much of the recent research in this area has focused on heuristic approaches that simply apply quantum computing to mimic conventional machine learning techniques, such as neural networks. In reality, quantum computation provides advantages only for specific classes of problems; it does not accelerate arbitrary computations unless the algorithms are carefully designed and their advantages theoretically justified.

To achieve end-to-end quantum speedups in machine learning, it is crucial to understand when quantum computation can provide advantages and how to meaningfully integrate such advantages into machine learning. Notably, quantum computation is not parallel computation that instantaneously tries all possible solutions at once—this is a common misconception. Instead, quantum algorithms function as a variant of randomized algorithms. In classical randomized algorithms, we may draw random numbers from a probability distribution over bits and then use them for computation. In contrast, quantum algorithms can generate random numbers from measurements of states of qubits. These quantum states can exhibit unique properties such as superposition and entanglement, enabling sampling from probability distributions that are otherwise intractable for classical algorithms. However, leveraging these quantum properties effectively requires the presence of specific mathematical structures within the computational problem. For example, in integer factoring, Shor’s algorithm exploits the periodic structure of modular exponentiation—an essential concept in number theory. The key to accelerating machine learning with quantum computing lies in identifying such useful mathematical structures within machine learning tasks, allowing quantum algorithms to unlock computational advantages beyond what classical methods can achieve.
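
To make this periodic structure concrete, here is a minimal sketch (for illustration only) of the classical reduction from factoring to order finding on which Shor’s algorithm builds. The order-finding step below is done by brute force; it is exactly this step that a quantum computer performs efficiently using the quantum Fourier transform.

    from math import gcd
    from random import randrange

    def find_order(a, N):
        # Smallest r > 0 with a**r % N == 1 (assumes gcd(a, N) == 1).
        # Brute force here purely for illustration; this is the step that
        # Shor's algorithm performs efficiently via the quantum Fourier transform.
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    def factor(N):
        # Classical reduction from factoring to order finding, for an odd
        # composite N that is not a prime power; returns a nontrivial factor.
        while True:
            a = randrange(2, N)
            d = gcd(a, N)
            if d > 1:
                return d  # a lucky guess already shares a factor with N
            r = find_order(a, N)
            if r % 2 == 0:
                y = pow(a, r // 2, N)
                if y != N - 1:  # i.e., a**(r/2) is not congruent to -1 mod N
                    for f in (gcd(y - 1, N), gcd(y + 1, N)):
                        if 1 < f < N:
                            return f

    print(factor(15))  # prints 3 or 5

On a quantum computer only the order-finding subroutine changes; the surrounding number-theoretic post-processing stays classical, which is why the periodic structure of modular exponentiation is the essential ingredient.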

Quantum algorithms that accelerate machine learning tasks

In our research, we develop and theoretically analyze quantum machine learning algorithms to establish a solid foundation for harnessing the advantages of quantum computation in machine learning. For instance, we design quantum machine learning algorithms that utilize quantum computation to efficiently search for and extract essential features from given data, employing powerful quantum techniques such as the quantum Fourier transform and quantum singular value transformation. The extracted features can then be leveraged to enhance learning efficiency through classical computation in subsequent stages. Through this approach, we ultimately aim to construct a novel framework for quantum machine learning that integrates the strengths of both quantum computation and classical learning methods.
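
As a small, self-contained reference point (an illustration of the underlying transform, not of our specific algorithms), the quantum Fourier transform acts on the 2^n amplitudes of an n-qubit state exactly as a discrete Fourier transform acts on a vector of length 2^n, while requiring only a number of quantum gates polynomial in n. The NumPy snippet below simply verifies this correspondence on a small example; whether such a transform yields an end-to-end speedup depends on how data is accessed and post-processed, which is part of what our analyses address.

    import numpy as np

    n = 3              # number of qubits
    N = 2 ** n         # dimension of the n-qubit state space

    # Matrix of the quantum Fourier transform:
    # QFT |j> = (1/sqrt(N)) * sum_k exp(2*pi*i*j*k / N) |k>
    omega = np.exp(2j * np.pi / N)
    qft = np.array([[omega ** (j * k) for k in range(N)] for j in range(N)]) / np.sqrt(N)

    # A random normalized amplitude vector standing in for an n-qubit state.
    psi = np.random.randn(N) + 1j * np.random.randn(N)
    psi /= np.linalg.norm(psi)

    # The QFT agrees with NumPy's inverse FFT up to the 1/sqrt(N) normalization
    # convention (ifft uses the same sign of the exponent but a 1/N factor).
    print(np.allclose(qft @ psi, np.sqrt(N) * np.fft.ifft(psi)))  # True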

Provable quantum advantages in computational complexity theory

In exploring computational tasks in machine learning that quantum algorithms can efficiently solve, it is equally important to rigorously identify problems that are inherently intractable for classical algorithms, thereby clarifying the true advantages of quantum computation. Simply applying quantum algorithms to problems that classical algorithms can already solve efficiently is not meaningful. Instead, problems that provably cannot be solved efficiently by any classical algorithm are precisely the ones where quantum computation offers significant value.

Although unconditional proofs of computational hardness are generally challenging, we leverage tools from computational complexity theory to explore both the potential of quantum computation and the fundamental limitations of classical computation under standard complexity-theoretic assumptions. By tackling these challenges, our research aims to establish a comprehensive theoretical foundation that rigorously substantiates the true advantages of quantum computation in machine learning.

Numerical techniques for optimizing quantum algorithms

Numerical techniques on conventional classical computers play a crucial role in optimizing quantum algorithms. Large-scale matrix computations and mathematical optimization are fundamental for simulating quantum information processing and investigating its optimal performance. For instance, a class of convex optimization problems known as semidefinite programming (SDP) naturally aligns with the mathematical structure of quantum mechanics, making it a powerful tool for analyzing quantum information processing. We actively develop numerical techniques tailored to these applications, driving further advancements in the efficiency and applicability of quantum algorithms.
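
As one concrete, textbook-style illustration of how SDP appears in quantum information (a sketch assuming the CVXPY package and an SDP-capable solver are installed, not a description of our specific tools): the minimum-error discrimination of two equally likely quantum states is an SDP over the measurement operators, and its optimal value reproduces the well-known Helstrom bound.

    import numpy as np
    import cvxpy as cp

    # Two example single-qubit states: |0><0| and |+><+|.
    rho = np.array([[1, 0], [0, 0]], dtype=complex)
    sigma = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
    d = rho.shape[0]

    # POVM elements: M0 for guessing "rho", M1 for guessing "sigma".
    M0 = cp.Variable((d, d), hermitian=True)
    M1 = cp.Variable((d, d), hermitian=True)

    # Average success probability: (1/2) Tr(M0 rho) + (1/2) Tr(M1 sigma).
    p_succ = 0.5 * cp.real(cp.trace(M0 @ rho)) + 0.5 * cp.real(cp.trace(M1 @ sigma))

    # Positivity and completeness of the POVM make this a semidefinite program.
    constraints = [M0 >> 0, M1 >> 0, M0 + M1 == np.eye(d)]
    problem = cp.Problem(cp.Maximize(p_succ), constraints)
    problem.solve()

    # Compare with the Helstrom bound 1/2 + (1/4) * || rho - sigma ||_1.
    helstrom = 0.5 + 0.25 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()
    print(problem.value, helstrom)  # both approximately 0.854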

Topic: Fault-tolerant quantum computation

Quantum computation is typically represented using a quantum circuit. However, executing this circuit directly on quantum devices does not reliably produce correct computational results due to noise, which introduces random errors at a certain physical error rate and alters the output. To obtain accurate results without quantum error correction, the error rate per physical gate would need to be extremely low. For instance, if a quantum device has 1000 qubits, the physical error rate must be significantly lower than 1/1000 = 0.1%; otherwise, errors from simply initializing these qubits would already accumulate, with a total error probability on the order of 0.1% × 1000 = 100%, making reliable computation impossible. For this reason, attempts to use noisy quantum devices without error correction—such as noisy intermediate-scale quantum (NISQ) algorithms and quantum annealing—have so far failed to outperform state-of-the-art classical algorithms. Due to the overwhelming impact of noise, NISQ algorithms and quantum annealing are not expected to achieve a substantial quantum advantage, making them useless for large-scale, real-world computations.
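
As a back-of-the-envelope check of this estimate (assuming, for simplicity, independent errors), the probability that a computation involving n operations, each failing with probability p, runs without any error is

    (1 - p)^{n} \approx e^{-np}, \qquad p = 10^{-3}, \; n = 10^{3} \;\Rightarrow\; e^{-1} \approx 0.37,

so even at a 0.1% physical error rate, only about a third of runs of a 1000-operation computation would be error-free, and for the vastly deeper circuits required by useful applications the success probability rapidly becomes negligible.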

The only known solution to the problem of noise in quantum computation is fault-tolerant quantum computation (FTQC). FTQC employs quantum error-correcting codes, such as surface codes and Steane’s 7-qubit code, to encode each qubit in the original circuit as a logical qubit, which is distributed across multiple physical qubits. This redundancy ensures that even if some physical qubits experience errors, the encoded information remains recoverable from the remaining ones. To accurately reproduce the computational output of the original circuit, it is crucial to follow fault-tolerant protocols, which enable computations to be performed on logical qubits while continuously protecting the information from noise. These protocols allow for the construction of a fault-tolerant circuit, where an encoded version of the original computation is executed while quantum error correction is applied to noisy physical qubits in real time. By encoding and compiling the circuit in this way, if the physical error rate is kept below a certain threshold—typically around 0.1–1%—the error rate per logical operation can be arbitrarily suppressed. By sufficiently reducing the logical error rate, it becomes possible to obtain correct computational results. FTQC techniques are ultimately essential for large-scale quantum information processing, serving as a fundamental pillar of quantum information science.
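
For intuition about how this suppression works (a commonly quoted heuristic scaling for surface codes under simple noise models, not an exact statement about any particular protocol), the logical error rate of a distance-d surface code is often approximated as

    p_{\mathrm{L}} \approx A \left( \frac{p}{p_{\mathrm{th}}} \right)^{(d+1)/2},

with A a constant of order one. Once the physical error rate p is below the threshold p_th, increasing the code distance d suppresses logical errors exponentially, at the price of using more physical qubits per logical qubit (on the order of d^2 for a surface code).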

To realize useful quantum computation, various physical platforms, such as neutral atoms, trapped ions, superconducting qubits, and photonics, are actively being developed. A fundamental requirement for FTQC is that the error rates of all operations on physical qubits must remain below the threshold determined by the fault-tolerant protocol. In the past, much of the research in quantum computing was constrained by the NISQ paradigm, which focused on finding applications for noisy quantum devices that do not meet this requirement. However, in this regime, adding more physical qubits simply introduces additional sources of error, making it impossible to achieve any meaningful quantum advantage. In contrast, thanks to substantial advancements in quantum experiments, we are now entering an era where devices capable of FTQC are gradually emerging. In this regime, adding more physical qubits indeed reduces errors, enabling exponential error suppression as the system scales. We are witnessing a sharp, quantitative phase transition into the FTQC era. To fully leverage this transition and accelerate the development of useful quantum computation, it is crucial to establish a comprehensive theoretical foundation that deepens our understanding of FTQC and guides its realization in our quantum world.

Fault-tolerant protocols and quantum error correction  

In our research, we explore the fundamental theories underlying quantum error correction and fault-tolerant protocols. One of the major challenges in realizing FTQC is the significant overhead in both space and time. In a quantum circuit, the number of qubits is referred to as its width, while the number of time steps is called its depth. Since FTQC implements computations using quantum error-correcting codes, both width and depth inevitably increase. The space overhead is defined as the ratio of the width of the fault-tolerant circuit to that of the original circuit, while the time overhead is defined analogously as the ratio of the depths. To reduce logical error rates, logical qubits must be encoded using multiple physical qubits, and computations must be executed while maintaining this encoding. This process often involves a computationally expensive subroutine known as magic state distillation, which further contributes to space and time overhead as the original circuit scales. Since qubits in quantum devices are valuable resources, addressing the growth of space overhead is a crucial challenge for the realization of FTQC. At the same time, minimizing time overhead is essential to preserving the computational speedup that quantum computing offers.
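
In symbols, restating the definitions above: if the original circuit has width W and depth D, and its fault-tolerant compilation has width W_FT and depth D_FT, then

    \text{space overhead} = \frac{W_{\mathrm{FT}}}{W}, \qquad \text{time overhead} = \frac{D_{\mathrm{FT}}}{D},

and for conventional protocols both ratios typically grow with the size of the original circuit, for example polylogarithmically, rather than remaining constant.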

In our work, we develop techniques that drastically reduce the overhead of various FTQC protocols to a constant order, O(1). Our approach paves the way for scalable quantum computation, surpassing conventional fault-tolerant protocols such as those based on surface codes and opening new frontiers in low-overhead FTQC.

Architectures and system design for building quantum computers

From the theory of FTQC, we can derive a fault-tolerant circuit, which translates into a sequence of instructions to be executed on quantum devices—analogous to an assembly language in classical computers. However, a significant gap remains between this theoretical foundation and quantum experiments. In quantum experiments, technologies for precisely manipulating qubits are advancing, much like transistors or complementary metal–oxide–semiconductor (CMOS) technology in classical computing. Crucially, modern computers—ranging from laptops to state-of-the-art supercomputers—are not built merely by executing assembly language directly on CMOS; they require a well-structured computational architecture for scalability. The same principle applies to quantum computers.

The realization of quantum computers depends on various physical platforms, such as neutral atoms, trapped ions, superconducting qubits, and optics. At the same time, the choice of architecture must align with specific application domains, including machine learning tasks and other scientific challenges. Therefore, the development of quantum computers requires a comprehensive design approach, integrating both theoretical foundations and precise numerical simulations to ensure feasibility across all scales, effectively interpolating between finite and asymptotic regimes. To bridge this gap, we actively collaborate with researchers in quantum experiments and computational architectures to explore the fundamental principles and methodologies for realizing scalable quantum computers.

Continuous-variable quantum computation and continuous-variable quantum codes

The study of information processing ultimately involves modeling and analyzing the physical systems that carry it out. In quantum mechanics, many physical systems inherently possess continuous degrees of freedom and are mathematically represented by infinite-dimensional vector spaces, more precisely, infinite-dimensional Hilbert spaces. For example, optical systems can utilize continuous-variable light, and superconducting cavities can process information through an infinite number of energy levels. Although qubits are described by finite-dimensional Hilbert spaces, they can be encoded into infinite-dimensional quantum systems using continuous-variable quantum codes such as the Gottesman-Kitaev-Preskill (GKP) code. Consequently, mathematical theories and analytical techniques for continuous-variable quantum computation are crucial across various quantum technologies. However, compared to the well-developed theories for qubits, those for continuous-variable quantum systems remain in a relatively early stage of development. We develop new methodologies and theoretical frameworks to deepen the understanding of continuous-variable quantum information processing, strengthening both its theoretical foundations and practical applications.
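
As a standard example (written here in one common convention for the square-lattice code), the ideal GKP code encodes a qubit into the position quadrature q of a single bosonic mode via the codewords

    |\bar{0}\rangle \propto \sum_{n \in \mathbb{Z}} |q = 2n\sqrt{\pi}\rangle, \qquad
    |\bar{1}\rangle \propto \sum_{n \in \mathbb{Z}} |q = (2n + 1)\sqrt{\pi}\rangle,

so that sufficiently small shifts in position or momentum (up to √π/2) can be detected and corrected; physically realizable GKP states approximate these non-normalizable combs with finitely squeezed peaks under a Gaussian envelope.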

Topic: Quantum information

Human society advances as technology progresses, and at the core of these advancements lies a powerful theoretical foundation. In the nineteenth century, steam engine technology drove the Industrial Revolution, while thermodynamics emerged as the guiding theory that revealed the principles of efficient energy use and its fundamental limits, shaping how we manage energy resources ever since. Now, in an era where quantum devices allow precise control over physical systems under the laws of quantum mechanics, quantum technologies are opening new frontiers in quantum computation and information processing. To fully understand the potential of these technologies, it is essential to establish a rigorous theoretical foundation that explores both the capabilities and limitations of quantum mechanics. The central aim of physics is to uncover the ultimate use and fundamental constraints of the physical laws that govern our universe. The theory of quantum information seeks to answer this fundamental question: what new possibilities can we unlock in the quantum era, and how can we harness them?

Quantum information processing leverages the distinctive properties of quantum mechanics to achieve tasks that are fundamentally beyond the capabilities of classical information processing. At the same time, theoretical frameworks developed to analyze the efficiency of information processing provide valuable insights into the intrinsic nature of quantum mechanics. Key quantum mechanical features, such as superposition, entanglement, magic (non-stabilizerness), and non-Gaussianity, serve as essential computational resources in quantum information processing. By investigating the optimal performance and fundamental limits of quantum information processing that harnesses these properties, we can quantitatively analyze these fundamental properties and deepen our understanding of the foundational principles of quantum mechanics.

Entanglement theory and quantum resource theory

Quantum information processing represents a fundamental shift in information technology, unlocking capabilities beyond those achievable with classical methods. These breakthroughs stem from the efficient utilization of intrinsic quantum properties, such as coherence and entanglement, which act as key resources amplifying the power of quantum information processing. To systematically explore and harness these quantum properties, quantum resource theories (QRTs) have been developed, offering a rigorous operational framework for studying the manipulation and quantification of quantum resources. QRTs are defined by a restricted class of operations, known as free operations, which establish constraints on state manipulation. For instance, in the study of entanglement, local operations and classical communication (LOCC) serve as free operations, representing what is achievable between distant laboratories. Quantum states that can be freely generated under these operations are called free states, while those that surpass these restrictions are regarded as resources. By analyzing information processing tasks under these constraints, QRTs reveal both the potential advantages and fundamental limitations of quantum information processing, all dictated by the principles of quantum mechanics.
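
As a concrete instance of this structure in entanglement theory: the free states under LOCC are the separable states, and a maximally entangled state is the canonical resource,

    \rho_{\mathrm{sep}} = \sum_{i} p_i \, \rho_i^{A} \otimes \rho_i^{B}, \qquad
    |\Phi^{+}\rangle = \frac{1}{\sqrt{2}} \left( |00\rangle + |11\rangle \right).

No LOCC protocol can create |Φ+⟩ from a separable state, whereas sharing it enables tasks such as quantum teleportation; quantifying how many such resource states a given state is worth is a central question of the theory.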

In our research, we investigate the general frameworks of QRTs and their mathematical properties. Much like energy conversion in thermodynamics, quantum resources can also be transformed into one another. The convertibility between quantum resources can be linked to the optimal performance of a quantum version of hypothesis testing, a fundamental task in quantum information theory. At the core of this formulation is the generalized quantum Stein’s lemma, which characterizes the optimal performance of quantum hypothesis testing between quantum resources and non-resources using a fundamental measure of quantum resources. We rigorously prove this lemma to establish a deeper understanding of quantum resource convertibility through hypothesis testing. These theoretical insights not only enhance our understanding of quantum resources but also provide a quantitative foundation for developing new quantum technologies and optimizing their performance within the constraints imposed by physical laws.
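
In a standard formulation (stated schematically here, with the technical axioms on the families of free-state sets omitted), one tests n copies of a resource state ρ against the set F_n of free states, keeping the probability of failing to recognize ρ^⊗n (the type-I error) below a fixed constant; the generalized quantum Stein’s lemma then identifies the optimal exponential decay rate of the probability β_n of mistaking a free state for ρ^⊗n (the type-II error) with the regularized relative entropy of resource:

    \lim_{n \to \infty} \left( -\frac{1}{n} \log \beta_{n} \right)
    = \lim_{n \to \infty} \frac{1}{n} \min_{\sigma_n \in \mathcal{F}_n} D\!\left( \rho^{\otimes n} \,\middle\|\, \sigma_n \right),
    \qquad
    D(\rho \,\|\, \sigma) = \operatorname{Tr}\!\left[ \rho \left( \log \rho - \log \sigma \right) \right],

where D is the quantum relative entropy and the right-hand side is the regularized relative entropy of resource.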

Quantum internet, quantum communication, and distributed quantum information processing

Entanglement is a fundamental quantum resource that enables various communication tasks, including quantum teleportation, superdense coding, and quantum key distribution (QKD). Just as state-of-the-art supercomputers rely on networks connecting multiple processor units, quantum communication networks are crucial for linking multiple quantum devices to construct a scalable multiprocessor quantum computer—a paradigm known as distributed quantum information processing. Our research explores the fundamental theory of multipartite entanglement and distributed quantum information processing, aiming to establish theoretical foundations that guide the development of scalable quantum networks and architectures.

Benchmarking of quantum devices

Techniques from quantum information theory are also valuable for benchmarking quantum devices and assessing their performance. While classical computers have well-established benchmarking toolkits, quantum computing lacks similarly mature frameworks, as quantum devices are still under development. To address this gap, we develop theoretically well-founded benchmarking protocols that rigorously evaluate the performance of quantum devices, offering essential insights for their scalable realization in an efficient and systematic manner.

Foundations of quantum mechanics

Fundamental concepts in quantum mechanics, such as Bell’s inequality, play a crucial role in establishing the advantages of quantum information processing over its classical counterparts, particularly in the fields of quantum computation and quantum cryptography. At the same time, the techniques and frameworks developed to analyze these quantum advantages provide powerful tools for investigating the deeper foundations of quantum mechanics. Our research covers these topics, exploring the capabilities of quantum information processing while also contributing to the fundamental understanding of quantum mechanics itself.
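
The standard example is the CHSH form of Bell’s inequality: when two parties each measure one of two observables A_0, A_1 and B_0, B_1 with outcomes ±1, any local hidden-variable model satisfies

    S = \langle A_0 B_0 \rangle + \langle A_0 B_1 \rangle + \langle A_1 B_0 \rangle - \langle A_1 B_1 \rangle \le 2,

whereas suitable measurements on an entangled state achieve S = 2√2 (Tsirelson’s bound); this gap is precisely what device-independent protocols in quantum cryptography exploit.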

Join us!

If you are interested in this research, please feel free to contact me via Join Us.

Further reading

If you are interested in learning more about this field, we recommend the following textbooks.