The Standard Model (SM) of particle physics is currently the most precise theory describing fundamental physics and drives research at a global level. While the discovery of the Higgs particle further confirmed the SM's range of applicability, it also yielded no evidence of new physics – physics Beyond the Standard Model (BSM) – up to the TeV scale. This is challenging because current theories cannot explain macroscopic observations like the existence of Dark Matter or the asymmetry between matter and antimatter. The essential question is therefore: where does one expect signals beyond the Standard Model on a microscopic level, and how can one describe them?
Complementary to the collider investigations of BSM signatures that take place at the high-energy frontier, it is also possible to probe for these signals with low-energy precision measurements. Indeed, it can be argued that such indirect searches may have greater reach, with sensitivities in some cases approaching the Grand Unified scale. Examples of such sensitive low-energy tests of BSM physics include searches for nucleon, electron, and atomic Electric Dipole Moments; efforts to directly detect Dark Matter through its scattering off atomic nuclei; measurements of lepton-number-violating processes like neutrinoless double beta decay; and sensitive tests for new sources of flavor violation through the conversion of a muon to an electron in the nuclear field. Though there is a variety of ideas on how to detect such signals, the common ground of these low-energy experiments is the measurement of BSM signals on a nuclear level – ranging from heavy nuclei such as xenon to planned experiments on light nuclei like helium.
The variety of possible BSM theories answers these questions differently and leads to distinctive beyond-the-SM phenomena. Thus, linking these experiments to the fundamental level is relevant for two reasons:

Identification of BSM sources
While a non-vanishing signal would confirm the existence of BSM structures, identifying the sources of these signals requires propagating the signal from the target (the nuclear many-body level) to the level of the fundamental theory.

Experimental guidance
Once one is able to propagate possible BSM interactions to the level of atomic nuclei, it might be possible to identify special nuclei which feature a coherent enhancement or suppression of such structures and thus guide experimental efforts.
Theoretical descriptions of such phenomena still suffer from large uncontrolled uncertainties – mostly associated with the nuclear many-body methods or an insufficient treatment of relevant interactions. In some cases, the method-dependent extrinsic uncertainties have been reduced by several factors but can still be as large as 100%. Since the expected signals are small, an accurate and precise description is needed to discriminate between different BSM structures. It is therefore essential to understand the effects of all relevant uncertainties associated with the propagation of scales.
As the main objective of my current research, I intend to set up a consistent and accurate framework for analyzing BSM effects on the nuclear level, one that can be systematically improved to reach the desired precision of the description.
While quantum computing promises solutions to computational problems not accessible with classical approaches, current hardware constraints mean that most quantum algorithms are not yet capable of computing systems of practical relevance, and classical counterparts outperform them. To benefit from quantum architectures in practice, one has to identify problems and algorithms with favorable scaling and improve on the corresponding limitations of available hardware. For this reason, we developed an algorithm that solves integer linear programming (ILP) problems, a classically NP-hard problem class, on a quantum annealer, and investigated problem- and hardware-specific limitations. This work presents the formalism for mapping ILP problems to annealing architectures, shows how to systematically improve computations utilizing optimized anneal schedules, and models the anneal process through a simulation. It illustrates the effects of decoherence and many-body localization for the minimum dominating set problem and compares annealing results against numerical simulations of the quantum architecture. We find that the algorithm outperforms random guessing but is limited to small problems, and that annealing schedules can be adjusted to reduce the effects of decoherence. Simulations qualitatively reproduce the algorithmic improvements of the modified annealing schedule, suggesting the improvements originate from quantum effects.
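The core idea of such a mapping can be illustrated with a self-contained sketch (not the actual code of this work): an ILP coverage constraint is converted into a quadratic penalty with binary slack variables, yielding a QUBO whose ground state encodes a minimum dominating set. The graph, penalty strength, and slack encoding below are illustrative choices.

```python
from itertools import product

# Path graph 0-1-2-3 and its closed neighborhoods N[v] = {v} + neighbors
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
neigh = {v: {v} for v in range(n)}
for u, v in edges:
    neigh[u].add(v)
    neigh[v].add(u)

P = 10  # penalty strength; must exceed the scale of the objective

def energy(x, s):
    """QUBO energy: objective plus quadratic penalties for coverage."""
    e = sum(x)  # objective: number of selected vertices
    for v in range(n):
        # ILP constraint sum_{u in N[v]} x_u >= 1, rewritten as an equality
        # with a 2-bit slack variable and penalized quadratically
        cover = sum(x[u] for u in neigh[v])
        slack = s[2 * v] + 2 * s[2 * v + 1]
        e += P * (cover - 1 - slack) ** 2
    return e

# brute-force minimization stands in for the annealer on this tiny instance
e_min, x_min = min(
    (energy(x, s), x)
    for x in product((0, 1), repeat=n)
    for s in product((0, 1), repeat=2 * n)
)
dominated = all(any(x_min[u] for u in neigh[v]) for v in range(n))
print(e_min, x_min, dominated)  # minimum dominating set of the path graph has size 2
```

On an annealer, the brute-force search is replaced by sampling low-energy states of the same QUBO; the penalty construction is what makes the constrained ILP expressible on that hardware.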
We report on the first application of the stochastic Laplacian Heaviside method for computing multi-particle interactions with lattice QCD to the two-nucleon system. Like the Laplacian Heaviside method, this method allows for the construction of interpolating operators which can be used to build a positive-definite set of two-nucleon correlation functions, unlike nearly all other applications of lattice QCD to two nucleons in the literature. It also allows for a variational analysis in which optimal linear combinations of the interpolating operators are formed that couple predominantly to the eigenstates of the system. Utilizing such methods has become of paramount importance in order to help resolve the discrepancy in the literature on whether two nucleons in either isospin channel form a bound state at pion masses heavier than physical, a discrepancy that persists even at the $SU(3)$ flavor-symmetric point with all quark masses near the physical strange quark mass. This is the first in a series of papers aimed at resolving this discrepancy. In the present work, we employ the stochastic Laplacian Heaviside method without a hexaquark operator in the basis at a lattice spacing of $a\sim0.086$~fm, lattice volume of $L=48 a \simeq 4.1$~fm and pion mass $m_\pi \simeq 714$~MeV. With this setup, the observed spectrum of two-nucleon energy levels strongly disfavors the presence of a bound state in either the deuteron or dineutron channel.
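In generic form (the paper's precise conventions may differ), the variational analysis amounts to solving a generalized eigenvalue problem for the correlation matrix $C_{ij}(t) = \langle O_i(t)\, O_j^\dagger(0)\rangle$:

$$C(t)\, v^{(n)}(t,t_0) = \lambda^{(n)}(t,t_0)\, C(t_0)\, v^{(n)}(t,t_0), \qquad \lambda^{(n)}(t,t_0) \propto e^{-E_n (t-t_0)}\left[1 + \mathcal{O}\!\left(e^{-\Delta E\,(t-t_0)}\right)\right],$$

so the eigenvectors $v^{(n)}$ define the optimized operator combinations that couple predominantly to the $n$-th eigenstate, and the eigenvalues decay with the corresponding energies.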
We report the results of a lattice QCD calculation of $F_K/F_\pi$ using Möbius Domain-Wall fermions computed on gradient-flowed $N_f=2+1+1$ HISQ ensembles. The calculation is performed with five values of the pion mass in the range $130 \lesssim m_\pi \lesssim 400$~MeV, four lattice spacings of $a\sim 0.15, 0.12, 0.09$ and $0.06$~fm, and multiple values of the lattice volume. The interpolation/extrapolation to the physical pion and kaon mass point, the continuum limit, and the infinite-volume limit is performed with a variety of different extrapolation functions utilizing both the relevant mixed-action effective field theory expressions as well as discretization-enhanced continuum chiral perturbation theory formulas. We find that the $a\sim0.06$~fm ensemble is helpful, but not necessary, to achieve a sub-percent determination of $F_K/F_\pi$. We also include an estimate of the strong isospin breaking corrections and arrive at a final result of $F_{\hat{K}^+}/F_{\hat{\pi}^+} = 1.1942(45)$ with all sources of statistical and systematic uncertainty included. This is consistent with the FLAG average value, providing an important benchmark for our lattice action. Combining our result with experimental measurements of the pion and kaon leptonic decays leads to a determination of $V_{us}/V_{ud} = 0.2311(10)$.
EspressoDB is a programmatic object-relational mapping (ORM) data management framework implemented in Python and based on the Django web framework. EspressoDB was developed to streamline data management and to centralize and promote data integrity, while providing domain flexibility and ease of use. It is designed to integrate directly into user software to allow dynamic access to vast amounts of relational data at runtime. Compared to existing ORM frameworks like SQLAlchemy or Django itself, EspressoDB lowers the barrier of access by simplifying the project setup and provides further features to ensure uniqueness and consistency across multiple data dependencies. In contrast to software like DVC, VisTrails, or Taverna, which describes the workflow of computations, EspressoDB interacts with the data itself and can thus be used in a complementary fashion.
Contact interactions can be used to describe a system of particles at unitarity, contribute to the leading part of nuclear interactions, and are numerically non-trivial because they require a proper regularization and renormalization scheme. We explain how to tune the coefficient of a contact interaction between non-relativistic particles on a discretized space in 1, 2, and 3 spatial dimensions such that all discretization artifacts are removed. By taking advantage of a latticized Lüscher zeta function, we achieve a momentum-independent scattering amplitude at any finite lattice spacing.
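For orientation (quoted here in the continuum, three-dimensional case; the paper's conventions may differ), Lüscher's finite-volume quantization condition relates the s-wave phase shift to a zeta-like sum over momentum modes:

$$p \cot \delta_0(p) = \frac{1}{\pi L}\, \mathcal{S}(\eta), \qquad \eta = \left(\frac{pL}{2\pi}\right)^2, \qquad \mathcal{S}(\eta) = \lim_{\Lambda \to \infty}\left[\, \sum_{|\vec n| < \Lambda} \frac{1}{\vec n^2 - \eta} - 4\pi \Lambda \right].$$

The latticized version referred to above replaces the integer-mode sum by a sum over the momenta of the finite lattice, so that tuning the contact coefficient against it removes discretization artifacts at any finite lattice spacing.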
The Hubbard model arises naturally when electron-electron interactions are added to the tight-binding descriptions of many condensed matter systems. For instance, the two-dimensional Hubbard model on the honeycomb lattice is central to the ab initio description of the electronic structure of carbon nanomaterials, such as graphene. Such low-dimensional Hubbard models are advantageously studied with Markov chain Monte Carlo methods, such as Hybrid Monte Carlo (HMC). HMC is the standard algorithm of the lattice gauge theory community, as it is well suited to theories of dynamical fermions. As HMC performs continuous, global updates of the lattice degrees of freedom, it provides superior scaling with system size relative to local updating methods. A potential drawback of HMC is its susceptibility to ergodicity problems due to so-called exceptional configurations, for which the fermion operator cannot be inverted. Recently, ergodicity problems were found in some formulations of HMC simulations of the Hubbard model. Here, we address this issue directly and clarify under what conditions ergodicity is maintained or violated in HMC simulations of the Hubbard model. We study different lattice formulations of the fermion operator and provide explicit, representative calculations for small systems, often comparing to exact results. We show that a fermion operator can be found which is both computationally convenient and free of ergodicity problems.
In our previous publication, "A primer to numerical simulations: The perihelion motion of Mercury", we describe a scenario for teaching numerical simulations to high school students. In this online article, we describe our motivation and the experience we gained during the two summer schools where we presented this course.
Fundamental symmetries (and their violations) play a significant role in active experimental searches for Beyond the Standard Model (BSM) signatures. An example of such phenomena is the neutron Electric Dipole Moment (EDM), a measurement of which would be evidence for Charge-Parity (CP) violation not attributable to the basic description of the Standard Model (SM). Another example is the strange scalar quark content of the nucleon and its coupling to Weakly Interacting Massive Particles (WIMPs), a candidate model for Dark Matter (DM). The theoretical understanding of such processes is fraught with uncertainties and uncontrolled approximations. On the other hand, methods within nuclear physics, such as Lattice Quantum Chromodynamics (LQCD) and Effective Field Theories (EFTs), are emerging as powerful tools for calculating various types of nuclear and hadronic observables non-perturbatively. This research effort will use such tools to investigate phenomena related to BSM physics induced within light nuclear systems. As opposed to LQCD, which deals with quarks and gluons, in Nuclear Lattice Effective Field Theory (NLEFT) individual nucleons—protons and neutrons—form the degrees of freedom. From the symmetries of Quantum Chromodynamics (QCD), one can derive the most general interaction structures allowed on the level of these individual nucleons. In general, this includes an infinite number of possible interactions. Utilizing the framework of EFTs, more specifically Chiral Perturbation Theory (χPT) in this work, one can systematically expand the nuclear interactions in a finite set of relevant terms with quantifiable accuracy. The fundamental parameters of this theory are related to experiments or LQCD computations. Using this set of effective nuclear interaction structures, one can describe many-nucleon systems by simulating the quantum behavior of each individual nucleon.
The 'ab initio' method NLEFT introduces a spatial lattice of finite volume (FV), which allows one to exploit powerful numerical tools in the form of statistical Hybrid Monte Carlo (HMC) algorithms. The uncertainty of all three approximations—the statistical sampling, the finite volume, and the discretization of space—can be understood analytically and used to make a realistic and accurate estimate of the associated uncertainty. In the first part of the thesis, χPT is used to derive nuclear interactions with a possible BSM candidate up to Next-to-Leading Order (NLO) in the specific case of scalar interactions between DM and quarks or gluons. Following this analysis, Nuclear Matrix Elements (NMEs) are presented for light nuclei (2H, 3He and 3H), including a complete uncertainty estimation. These results will eventually serve as the benchmark for the many-body computations. In the second part of this thesis, the framework of NLEFT is briefly presented. It is shown how one can increase the accuracy of NLEFT by incorporating few-body forces in a non-perturbative manner. Finite-Volume (FV) and discretization effects are investigated and estimated for BSM NMEs on the lattice. Furthermore, it is shown how different boundary conditions can be used to decrease the size of FV effects and extend the scope of available lattice momenta to the range of physical interest.
Numerical simulations are playing an increasingly important role in modern science. In this work, it is suggested to use a numerical study of the famous perihelion motion of the planet Mercury (one of the prime observables supporting Einstein's General Relativity) as a test case to teach numerical simulations to high school students. The paper includes details about the development of the code as well as a discussion of the visualization of the results. In addition, a method is discussed that allows one to estimate the size of the effect as well as the uncertainty of the approach a priori. At the same time, this enables the students to double-check the results found numerically. The course is structured into a basic block and two further refinements which aim at more advanced students.
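As an illustration of the type of simulation developed in the course (a sketch, not the actual course code), the following pure-Python velocity-Verlet integrator evolves a two-body orbit under a Newtonian force plus a small attractive $1/r^4$ correction mimicking the general-relativistic term; the initial conditions and the exaggerated coupling `alpha` are arbitrary illustrative choices in scaled units. The perihelion direction then drifts from orbit to orbit, which the script measures directly.

```python
import math

GM = 1.0
alpha = 0.02   # exaggerated "GR-like" 1/r^4 correction (illustrative units)
dt = 0.001
steps = 50000

def accel(x, y):
    """Newtonian 1/r^2 acceleration plus a small attractive 1/r^4 correction."""
    r2 = x * x + y * y
    r = math.sqrt(r2)
    f = -(GM / r2 + alpha / (r2 * r2)) / r  # magnitude over r, for direction
    return f * x, f * y

# start at the perihelion of the unperturbed orbit, velocity perpendicular
x, y = 1.0, 0.0
vx, vy = 0.0, 1.2
ax, ay = accel(x, y)
rs, angles = [], []
for _ in range(steps):
    # velocity-Verlet integration step (symplectic, good energy behavior)
    x += vx * dt + 0.5 * ax * dt * dt
    y += vy * dt + 0.5 * ay * dt * dt
    ax2, ay2 = accel(x, y)
    vx += 0.5 * (ax + ax2) * dt
    vy += 0.5 * (ay + ay2) * dt
    ax, ay = ax2, ay2
    rs.append(math.hypot(x, y))
    angles.append(math.atan2(y, x))

# perihelia are local minima of r(t); their angles drift by the precession
peri = [i for i in range(1, steps - 1) if rs[i] < rs[i - 1] and rs[i] <= rs[i + 1]]
shift = (angles[peri[1]] - angles[peri[0]]) % (2 * math.pi)
print(f"perihelion shift per orbit: {shift:.4f} rad")
```

A first-order estimate, of the kind the course uses for a priori cross-checks, gives a shift of roughly $2\pi\,\alpha\, GM/L^4$ per orbit (with $L$ the angular momentum), which the measured drift should reproduce to within a few percent.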
We present a general auxiliary field transformation which generates effective interactions containing all possible N-body contact terms. The strength of the induced terms can be described analytically in terms of general coefficients associated with the transformation and is thus controllable. This transformation provides a novel way of sampling three- and four-body (and higher) contact interactions non-perturbatively in lattice quantum Monte Carlo simulations. We show that our method reproduces the exact solution for a two-site quantum mechanical problem.
We show how lattice Quantum Monte Carlo simulations can be used to calculate electronic properties of carbon nanotubes in the presence of strong electron-electron correlations. We employ the path integral formalism, use methods developed within the lattice QCD community for our numerical work, and compare our results to empirical data for the antiferromagnetic Mott insulating gap in large-diameter tubes.
Through the development of many-body methods and algorithms, it has become possible to describe quantum systems composed of a large number of particles with great accuracy. Essential to all these methods is the application of auxiliary fields via the Hubbard-Stratonovich transformation. This transformation effectively reduces two-body interactions to interactions of one particle with the auxiliary field, thereby improving the computational scaling of the respective algorithms. The relevance of collective phenomena and interactions grows with the number of particles. For many theories, e.g. Chiral Perturbation Theory, the inclusion of three-body forces has become essential in order to further increase the accuracy on the many-body level. In these proceedings, the analytical framework for establishing a Hubbard-Stratonovich-like transformation, which allows for the systematic and controlled inclusion of contact three- and higher-body interactions, is presented.
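For reference, the standard two-body Hubbard-Stratonovich transformation linearizes a squared density operator $\hat\rho$ by a Gaussian integral over an auxiliary field $\phi$ (for a coupling $A > 0$):

$$e^{\frac{A}{2}\hat\rho^2} \;=\; \frac{1}{\sqrt{2\pi A}} \int_{-\infty}^{\infty} d\phi\; e^{-\frac{\phi^2}{2A} + \phi\,\hat\rho}.$$

The generalization sketched here modifies the weight and the coupling of $\phi$ to $\hat\rho$ such that the $\phi$ integral also generates cubic and higher powers of $\hat\rho$ — i.e. contact three- and higher-body terms — with analytically controlled coefficients.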
We study the scattering of Dark Matter particles off various light nuclei within the framework of chiral effective field theory. We focus on scalar interactions and include one- and two-nucleon scattering processes whose form and strength are dictated by chiral symmetry. The nuclear wave functions are calculated from chiral effective field theory interactions as well, and we investigate the convergence pattern of the chiral expansion in the nuclear potential and the Dark Matter-nucleus currents. This allows us to provide a systematic uncertainty estimate of our calculations. We provide results for $^2$H, $^3$H, and $^3$He nuclei, which are theoretically interesting; the latter is also a potential target for experiments. We show that two-nucleon currents can be systematically included but are generally smaller than predicted by power counting and suffer from significant theoretical uncertainties even in light nuclei. We demonstrate that accurate high-order wave functions are necessary in order to incorporate two-nucleon currents. We discuss scenarios in which one-nucleon contributions are suppressed such that higher-order currents become dominant.
We describe and implement twisted boundary conditions for the deuteron and triton systems within finite volumes using the nuclear lattice EFT formalism. We investigate the finite-volume dependence of these systems with different twist angles. We demonstrate how various finite-volume information can be used to improve calculations of binding energies in such a framework. Our results suggest that with the appropriate twisting of boundaries, infinite-volume binding energies can be reliably extracted from calculations using modest volume sizes with cubic length $L \approx 8$–$14$~fm. Of particular importance is our derivation and numerical verification of three-body analogs of "i-periodic" twist angles that eliminate the leading-order finite-volume effects to the three-body binding energy.
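In generic single-particle terms (the multi-particle conventions of the work may differ), a twist angle $\theta_i$ modifies the boundary condition and thereby shifts the allowed lattice momenta:

$$\psi(\vec x + L\,\hat e_i) = e^{i\theta_i}\, \psi(\vec x) \quad \Longrightarrow \quad p_i = \frac{2\pi n_i + \theta_i}{L}, \qquad n_i \in \mathbb{Z}.$$

The "i-periodic" case corresponds to $\theta_i = \pi/2$, i.e. $\psi(\vec x + L\,\hat e_i) = i\,\psi(\vec x)$, for which the leading finite-volume corrections from neighboring periodic images cancel; the derivation above extends this choice to the three-body problem.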
This work tests the consistency of nucleon-nucleon forces derived in two different approximation schemes of Quantum Chromodynamics (QCD)—chiral perturbation theory ($\chi$PT) and large-$N_c$ QCD. The approximation schemes and the derivation of the potential are demonstrated in this work. The consistency of the chiral potential, derived using the method of unitary transformation, is verified for chiral orders $Q^\nu$ with $\nu = 0,2,4$. The methods used, as well as possible extensions to higher orders, are presented.
Computations of physical systems must be independent of the basis they are performed in. In the case of Nuclear Lattice computations, this basis generally corresponds to a finite volume on a discrete lattice. However, because of computational costs and the complexity of nuclear forces, results of non-relativistic Nuclear Lattice computations with nucleons as degrees of freedom are frequently evaluated at finite lattice spacings. The topic of this talk is the analysis of finite-lattice-spacing effects on scattering data in a two-nucleon system described by a contact interaction mimicking nuclear forces. Furthermore, for such contact interactions, a modified infinite-volume formalism accounting for finite discretizations is presented. This formalism enables the control of discretization effects in two-fermion systems up to numerical precision, which can be used to prepare computations of unitary fermions.
Many problems across several disciplines, ranging from production planning to DNA analysis, can be represented as constrained integer optimization problems. Using classical algorithms, exact solutions to integer linear programming (ILP) problems are generally exponentially hard to obtain, so heuristic algorithms are employed to find approximate solutions. This talk demonstrates a new algorithm that solves such constrained ILPs on a quantum annealer. Details of the formalism are presented for the case of the minimum dominating set problem, and annealing results for the D-Wave 2000Q architecture are benchmarked against brute-force classical solutions. For a variety of different annealing schedules, results are compared against numerical simulations of the quantum architecture to interpret hardware-specific limitations, like the effects of decoherence and many-body localization.
What is the nature of so-called Dark Matter, and does it interact with regular matter other than through gravity? Direct detection experiments aim to answer this question. Propagating measurements (or constraints) to the fundamental theory requires bridging several scales—from the target nucleus to individual nucleons to the level of quarks & gluons and beyond. However, the sheer number of parameters in model-independent descriptions of DM and the uncertainties associated with bridging the scales make it difficult to fully quantify uncertainties from theory to experiment. This talk exemplifies the challenges associated with propagating uncertainties and presents an in-progress open-source and open-data meta-analysis tool for connecting strong operators at the QCD scale to nuclear observables, aiming at pinning down the associated uncertainties. Since this project intends to unify efforts from different communities (from EFTs & LQCD over many-body computations to experiments), this talk addresses the challenges faced and intends to spark conversations regarding community-specific interests.
What is the nature of so-called Dark Matter, and does it interact with regular matter other than through gravity? Direct detection experiments aim to answer this question. Yet propagating measurements (or constraints) to the fundamental theory requires bridging several scales—from the target nucleus to individual nucleons to the level of quarks & gluons and beyond. In this talk, I describe the methods used, the assumptions made, and the challenges faced in bridging these scales to address the implications of measurements.
What is LatteDB? LatteDB aims to integrate all aspects of a lattice QCD calculation, ensuring data integrity and provenance. By offering this framework, the hope is that lattice practitioners can spend more time thinking about science and less about these well-understood, yet complex and time-consuming, workflow challenges. LatteDB is built on top of EspressoDB and is publicly available under a BSD license. For more information, see also the LatteDB manuscript and the Django documentation. This talk presents work in progress.
Numerical simulations play an increasingly important role in modern science. In this work, we suggest using a numerical study of the famous perihelion motion of the planet Mercury (one of the prime observables supporting Einstein's General Relativity) as a test case to teach numerical simulations to high school students. The project was presented as a one-day course at a student summer school. This work includes details about the development of the code (Python), for which no prior programming experience is needed, a discussion of the visualization, as well as the course teaching experience. This course encourages students to develop an intuition for numerical simulations and motivates them to explore problems themselves and to critically analyze results.
In this talk, a simple git tutorial for users not familiar with git is presented. The tutorial teaches the essential basics for collaborating with git. In addition, a cheat sheet, good-practice advice, and further references and recommendations are provided. Press the "?" key within the "Web Presentation" to get navigation clues. "Download" the presentation for an offline version, or print and save the "Web Presentation" to PDF (landscape format) for a traditional format.
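A minimal workflow of the kind such a tutorial covers might look as follows (a sketch with placeholder names, not the tutorial's own material; `git checkout -b` is used instead of the newer `git switch -c` for compatibility with older git versions):

```shell
# initialize a repository and make a first commit
git init demo
cd demo
git config user.name "Student"
git config user.email "student@example.com"
echo "notes" > README.md
git add README.md
git commit -m "Add README"

# work on a branch, then merge it back into the main line
git checkout -b feature-notes
echo "more notes" >> README.md
git commit -am "Extend README"
git checkout -            # return to the previous branch
git merge feature-notes   # fast-forward merge
git log --oneline
```

In a collaborative setting, `git clone`, `git pull`, and `git push` connect this local workflow to a shared remote repository.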
What do we know about Dark Matter, and how can we possibly describe it on a fundamental scale? Dark Matter direct detection experiments provide the intriguing possibility of discovering the existence of Dark Matter. The interpretation of these experiments, however, depends on a description of Dark Matter. In this talk, I present the calculation of Dark Matter particles scattering off various light nuclei from first principles. The calculations are based on the framework of Chiral Effective Field Theory, which relates nuclear interactions to the underlying fundamental theory: Quantum Chromodynamics. I introduce this framework and show how it can be extended to include Dark Matter interactions.
What do we know about Dark Matter, and how can we possibly describe it on a fundamental scale? In this talk, I present the scattering of Dark Matter particles off various light nuclei within the framework of chiral effective field theory. I focus on scalar interactions and include one- and two-nucleon scattering processes whose form and strength are dictated by chiral symmetry. The nuclear wave functions are calculated from chiral effective field theory interactions as well. The convergence pattern of the chiral expansion in the nuclear potential and the Dark Matter-nucleus currents is investigated. This allows for a systematic uncertainty estimate of the calculations. Results are provided for ${}^2$H, ${}^3$H, and ${}^3$He nuclei, which are theoretically interesting; the latter is also a potential target for experiments.
What do we know about Dark Matter, and how can we possibly describe it on a fundamental scale? In this session, I present the scattering of Dark Matter particles off various light nuclei within the framework of chiral effective field theory. I focus on scalar interactions and include one- and two-nucleon scattering processes whose form and strength are dictated by chiral symmetry. The nuclear wave functions are calculated from chiral effective field theory interactions as well. The convergence pattern of the chiral expansion in the nuclear potential and the Dark Matter-nucleus currents is investigated. This allows for a systematic uncertainty estimate of the calculations. Results are provided for ${}^2$H, ${}^3$H, and ${}^3$He nuclei, which are theoretically interesting; the latter is also a potential target for experiments.
Through the development of many-body methods and algorithms, it has become possible to describe quantum systems composed of a large number of particles with great accuracy. Essential to all these methods is the application of auxiliary fields via the Hubbard-Stratonovich transformation. This transformation effectively reduces two-body interactions to interactions of one particle with the auxiliary field, thereby improving the computational scaling of the respective algorithms. With increasing particle number, the relevance of collective phenomena and interactions grows as well. For many theories, e.g. Chiral Perturbation Theory, the inclusion of three-body forces has become essential in order to further increase the accuracy on the many-body level. In this seminar, the analytical framework for establishing a Hubbard-Stratonovich-like transformation, which allows for the systematic and controlled inclusion of contact three- and higher-body interactions, is presented.
Nuclear Lattice Effective Field Theory (NLEFT) has proven to be a valuable candidate for pushing the borders of nuclear physics in the regime of nuclei such as Carbon-12 and has enabled the computation of nuclear matrix elements for larger systems. The applicability of NLEFT to such systems is enabled through the utilization of stochastic lattice approaches such as Hybrid Monte Carlo (HMC) algorithms. In this seminar, I briefly present the underlying principles and strategies behind modern algorithms.
While there are several indications for the existence of so-called "Dark Matter" (DM) from an astrophysical point of view, direct detection of a candidate particle in a lab has not been successful thus far. A set of direct detection experiments using different targets could potentially test the various types of DM interactions against different properties of nuclei. To connect possible future measurements of DM signals to a candidate DM particle, these experimental signals must be propagated through nuclei — composed of many protons and neutrons — to the fundamental level of the DM theory. Due to the complexity of describing the many-body nucleus, this propagation has historically been done in a model-dependent fashion. As a candidate for propagating possible experimental data to the level of protons and neutrons, Nuclear Lattice Effective Field Theory (NLEFT) provides an approach which enables the systematic reduction of uncertainties.
Recently, Nuclear Lattice Effective Field Theory (NLEFT) has proven to be a valuable candidate for pushing the borders of nuclear physics in the regime of nuclei such as Carbon-12 and has enabled the computation of nuclear matrix elements for larger systems. The applicability of NLEFT to such systems is enabled through the utilization of stochastic lattice approaches such as Hybrid Monte Carlo (HMC) algorithms. As is common for effective field theories, low-energy coefficients (LECs), which describe the strength of effective interactions, need to be determined before one is able to describe and predict physical systems. In this process, it is essential to understand the effect of lattice artifacts such as the discrete lattice spacing and the finite volume – independent of stochastic artifacts. Therefore, non-stochastic approaches for smaller systems are employed to estimate LECs. In this seminar, such a non-stochastic approach for extracting nuclear energy levels on the lattice will be presented.
In this talk, I describe twisted boundary conditions for the deuteron and triton systems within finite volumes using the nuclear lattice EFT formalism. The finite-volume dependence of these systems with different twist angles is presented. Various finite-volume information can be used to improve calculations of binding energies in such a framework. The results suggest that with the appropriate twisting of boundaries, infinite-volume binding energies can be reliably extracted from calculations using modest volume sizes with cubic length $L \approx 8$–$14$~fm. Of particular importance is the derivation and numerical verification of three-body analogs of "i-periodic" twist angles that eliminate the leading-order finite-volume effects to the three-body binding energy.
One of the greatest unsolved problems in physics is the asymmetry of observed matter – the amount of baryonic matter greatly exceeds the amount of antibaryonic matter. Based on the assumption that at some point in time the universe contained the same amount of particles and antiparticles, a possible explanation for this observation requires the existence of physical processes which violate charge-parity (CP) symmetry. Though it is known that CP-violating processes exist in nature, the CP-violating contributions of the Standard Model (SM) of particle physics are not sufficient to explain the current matter asymmetry. Therefore, there must be additional sources of CP violation beyond the Standard Model (BSM). A quantity for probing the magnitude of such CP-violating processes is the electric dipole moment (EDM) of nuclei. In this talk, a strategy for linking a possible future measurement to its fundamental interpretation is presented.
Conference talk on the consistency of nucleon-nucleon forces derived in two different approximation schemes of Quantum Chromodynamics (QCD)—chiral perturbation theory ($\chi$PT) and large-$N_c$ QCD. The approximation schemes and the derivation of the potential are demonstrated. The consistency of the chiral potential, derived using the method of unitary transformation, is verified for chiral orders $Q^\nu$ with $\nu = 0,2,4$. The methods used, as well as possible extensions to higher orders, are presented.