
A classical dynamical system is called isochronous if it features in its phase space an open, full-dimensional region where all its solutions are periodic in all their degrees of freedom with the same, fixed period. Recently, a simple transformation has been introduced, featuring a real parameter ω and reducing to the identity for ω = 0. This transformation is applicable to a quite large class of dynamical systems, and it yields ω-modified autonomous systems which are isochronous, with period T = 2π/ω. This justifies the notion that isochronous systems are not rare. This monograph, which covers work done over the last decade by its author and several collaborators, reviews this technology for manufacturing isochronous systems. Many examples of such systems are provided, including many-body problems characterized by Newtonian equations of motion in spaces of one or more dimensions, Hamiltonian systems, and also nonlinear evolution equations (PDEs: partial differential equations). This monograph will be of interest to researchers working on dynamical systems, including integrable and nonintegrable models, with a finite or infinite number of degrees of freedom. It will also appeal to experimenters and practitioners interested in isochronous phenomena, and it may be used as a basic or complementary textbook for an undergraduate or graduate course.
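The blurb does not display the transformation itself; a common form of the trick (a sketch from the literature, in notation that may differ from the book's) pairs a complex rescaling of the dependent variable with a periodic change of the independent variable:

```latex
\tilde z(t) = e^{i\lambda\omega t}\, z(\tau), \qquad
\tau = \frac{e^{i\omega t} - 1}{i\omega}
```

For ω → 0 one has τ → t and the map reduces to the identity, while for ω ≠ 0 the new time τ is itself periodic in t with period T = 2π/ω, which is the mechanism behind the isochrony of the ω-modified systems (the rescaling exponent λ is chosen to match a scaling invariance of the original equations).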

This book deals with an important class of many-body systems: those where the interaction potential decays slowly for large interparticle distance; in particular, systems where the decay is slower than the inverse interparticle distance raised to the power of the dimension of the embedding space. Gravitational and Coulomb interactions are the most prominent examples. However, it has become clear that long-range interactions are more common than previously thought. This has stimulated a growing interest in the study of long-range interacting systems, which has led to a much better understanding of the many peculiarities in their behaviour. The seed of all the particular features of these systems, both at equilibrium and out of equilibrium, is the lack of additivity. It is now well understood that this does not prevent a statistical mechanics treatment; however, it does require a more in-depth study of the thermodynamic limit and of all related theoretical concepts. A satisfactory understanding has now been reached of properties generally considered as oddities only a couple of decades ago: ensemble inequivalence, negative specific heat, negative susceptibility, ergodicity breaking, out-of-equilibrium quasi-stationary states, anomalous diffusion, etc. The first two parts describe the theoretical and computational instruments needed to address the study of both equilibrium and dynamical properties of systems subject to long-range forces. The third part of the book is devoted to discussing the applications of such techniques to the most relevant examples of long-range systems. The only prerequisite is a basic course in statistical mechanics.
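In symbols (a standard convention in this field, given here as an illustration rather than the book's exact notation): for a pair potential decaying as a power law in d spatial dimensions,

```latex
V(r) \sim r^{-\alpha}, \qquad \alpha \le d
```

the interaction energy of a homogeneous system diverges with its size, and the total energy grows faster than linearly with the particle number (E ∝ N^{2−α/d} for α < d); this superlinear scaling is precisely the lack of additivity referred to above.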

This book, organized in three parts, addresses the analysis of musical sounds from the viewpoint of someone standing at the intersection of physics, engineering, piano technology, and music. The first part introduces the reader to a variety of waves and to a variety of ways of presenting, visualizing, and analyzing them, accompanied by a tutorial on the tools used throughout the book; the mathematics behind the tools is left to the appendices. Part 2 is a graphical survey of the classical areas of acoustics that pertain to musical instruments: vibrating strings, bars, membranes, and plates. Part 3 is devoted almost exclusively to the piano. Several two- and three-dimensional graphical tools are introduced to study the following characteristics of pianos: individual notes and interactions among them, the missing fundamental, inharmonicity, tuning visualization, the different distribution of harmonic power across the various zones of the piano keyboard, and potential uses for quality control. These techniques are also briefly applied to other musical instruments studied in earlier parts of the book. The book includes appendices that cover the mathematics lurking beneath the numerous graphs, and a brief introduction to Matlab®, which was used to generate those graphs.
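For example, the inharmonicity of piano strings studied in Part 3 is commonly summarized by the stiff-string formula (a standard expression, not necessarily the book's notation):

```latex
f_n = n f_0 \sqrt{1 + B n^2}
```

where f_0 is the fundamental of the corresponding ideal flexible string and B is the inharmonicity coefficient set by the string's stiffness; the partials are progressively sharpened relative to the harmonic series n f_0.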

This volume contains lectures delivered at the Les Houches Summer School ‘Integrability: from statistical systems to gauge theory’ held in June 2016. The School was focussed on applications of integrability to supersymmetric gauge and string theory, a subject of high and increasing interest in the mathematical and theoretical physics communities over the past decade. Relevant background material was also covered, with lecture series introducing the main concepts and techniques relevant to modern approaches to integrability, conformal field theory, scattering amplitudes, and gauge/string duality. The book will be useful not only to those working directly on integrability in string and gauge theories, but also to researchers in related areas of condensed matter physics and statistical mechanics.

This book treats the central physical concepts and mathematical techniques used to investigate the dynamics of open quantum systems. To provide a self-contained presentation, the text begins with a survey of classical probability theory and with an introduction to the foundations of quantum mechanics, with particular emphasis on its statistical interpretation and on the formulation of generalized measurement theory through quantum operations and effects. The fundamentals of density matrix theory, quantum Markov processes, and completely positive dynamical semigroups are developed. The most important master equations used in quantum optics and condensed matter theory are derived and applied to the study of many examples. Special attention is paid to the Markovian and non-Markovian theory of environment-induced decoherence, to its role in the dynamical description of the measurement process, and to the experimental observation of decohering electromagnetic field states. The book includes the modern formulation of open quantum systems in terms of stochastic processes in Hilbert space. Stochastic wave function methods and Monte Carlo algorithms are designed and applied to important examples from quantum optics and atomic physics. The fundamentals of the treatment of non-Markovian quantum processes in open systems are developed on the basis of various mathematical techniques, such as projection superoperator methods and influence functional techniques. In addition, the book expounds the relativistic theory of quantum measurements and the density matrix theory of relativistic quantum electrodynamics.
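The most important of the Markovian master equations referred to here is the one in Lindblad form, the general generator of a completely positive dynamical semigroup (in units with ħ = 1):

```latex
\frac{d\rho}{dt} = -i\,[H,\rho]
  + \sum_k \gamma_k \Bigl( A_k \rho A_k^{\dagger}
  - \tfrac{1}{2}\bigl\{ A_k^{\dagger} A_k , \rho \bigr\} \Bigr)
```

where ρ is the density matrix, H the Hamiltonian, and the operators A_k with rates γ_k ≥ 0 encode the influence of the environment on the open system.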

In addition to treating quantum communication, entanglement, error correction, and algorithms in great depth, this book also addresses a number of interesting miscellaneous topics, such as Maxwell's demon, Landauer's erasure, the Bekenstein bound, and Carathéodory's treatment of the second law of thermodynamics. All mathematical derivations are based on clear physical pictures which make even the most involved results, such as the Holevo bound, look comprehensible and transparent. Quantum information is a fascinating topic precisely because it shows that the laws of information processing are actually dependent on the laws of physics. However, it is also very interesting to see that information theory has something to teach us about physics. Both of these directions are discussed throughout the book. Other topics covered are quantum mechanics, measures of quantum entanglement, general conditions of quantum error correction, pure state entanglement and Pauli matrices, pure states and Bell's inequalities, and the computational complexity of quantum algorithms.
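For the record, the Holevo bound mentioned above caps the information extractable from an ensemble {p_i, ρ_i} of quantum states:

```latex
I(X\!:\!Y) \;\le\; \chi
  = S\!\Bigl(\sum_i p_i \rho_i\Bigr) - \sum_i p_i\, S(\rho_i)
```

where S is the von Neumann entropy: no measurement can extract more information about the preparation X than the Holevo quantity χ of the ensemble.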

Theoretical physics and foundations of physics have not made much progress in the last few decades. There is no consensus among researchers on how to approach unifying general relativity and quantum field theory (quantum gravity), explaining so-called dark energy and dark matter (cosmology), or the interpretation and implications of quantum mechanics and relativity. In addition, both fields are deeply puzzled about various facets of time including, above all, time as experienced. This book argues that this impasse is the result of the “dynamical universe paradigm,” the idea that reality fundamentally comprises physical entities that evolve in time from some initial state according to dynamical laws. Thus, in the dynamical universe, the initial conditions plus the dynamical laws explain everything else going exclusively forward in time. In cosmology, for example, the initial conditions reside in the Big Bang and the dynamical law is supplied by general relativity. Accordingly, the present state of the universe is explained exclusively by its past. A completely new paradigm (called Relational Blockworld) is offered here whereby the past, present, and future co-determine each other via “adynamical global constraints,” such as the least action principle. Accordingly, the future is just as important for explaining the present as the past is. Most of the book is devoted to showing how Relational Blockworld resolves many of the current conundrums of both theoretical physics and foundations of physics, including the mystery of time as experienced and how that experience relates to the block universe.

This book is the first to examine the history of imaginative thinking about intelligent machines. As real artificial intelligence (AI) begins to touch on all aspects of our lives, this long narrative history shapes how the technology is developed, deployed, and regulated. It is therefore a crucial social and ethical issue. Part I of this book provides a historical overview from ancient Greece to the start of modernity. These chapters explore the revealing prehistory of key concerns of contemporary AI discourse, from the nature of mind and creativity to issues of power and rights, from the tension between fascination and ambivalence to investigations into artificial voices and technophobia. Part II focuses on the twentieth and twenty-first centuries, in which a greater density of narratives emerged alongside rapid developments in AI technology. These chapters reveal not only how AI narratives have consistently been entangled with the emergence of real robotics and AI, but also how they offer a rich source of insight into how we might live with these revolutionary machines. Through their close textual engagements, these chapters explore the relationship between imaginative narratives and contemporary debates about AI’s social, ethical, and philosophical consequences, including questions of dehumanization, automation, anthropomorphization, cybernetics, cyberpunk, immortality, slavery, and governance. The contributions, from leading humanities and social science scholars, show that narratives about AI offer a crucial epistemic site for exploring contemporary debates about these powerful new technologies.

Can Science answer all our questions? If not, what knowledge can it provide, and how? Written in a colloquial style, and arguing from well-known and easy-to-follow facts, this book addresses the concepts pertaining to the scientific method and reinforces the inalienable role of experimental evidence in scientific truths. It also clarifies the limits of Science and the errors we make when abusing its method in contexts that are not scientific. Rather than a treatise on epistemology, this book is a collection of personal reflections on the scientific methodology as experienced and used daily by a practitioner. It is ideal for undergraduate students of all the natural sciences, interested laymen, and quite possibly high school students who are approaching this topic for the first time.

Most of the solid materials we use in everyday life, from plastics to cosmetic gels, exist in a non-crystalline, amorphous form: they are glasses. Yet we are still seeking a fundamental explanation of what glasses really are, why they form, and what their properties are. This book surveys the most recent theoretical and experimental research dealing with the physics of glassy and disordered materials, from molecular fluids to colloidal glasses, granular media, and foams. Chapters present broad and original perspectives on one of the deepest mysteries of condensed matter physics, with a particular emphasis on the key role played by dynamic heterogeneities in understanding, from a unified viewpoint, phenomena occurring across scores of disordered materials. The book covers fundamental aspects and extensively reviews several recent theoretical developments that have changed our view of the glass transition. It also provides an up-to-date perspective on new experimental tools that have been developed to study, with unprecedented resolution, the structural relaxation of systems such as molecular and polymer liquids, colloidal glasses, foams, and granular materials. Finally, it confronts, discusses, compares, and challenges the different perspectives actively promoted by different research groups and communities in this very active area of condensed-matter and statistical physics.

In attempting to understand the bewildering complexity of consumer markets, financial markets, and beyond, traditional textbooks and theories will not help much. This book presents a new market theory in which information plays the most important role. Markets are portrayed with three categories of actor: consumers, businesses, and information intermediaries. Readers can determine their own role, and, with analysis and examples from the real-world economy, new questions can be raised and individual conclusions drawn. The aim is to stimulate the reader’s own thinking, whether as a consumer on the high street, an investor on Wall Street, a policy maker in a government armchair, or an entrepreneur dreaming of the next big opportunity. This book should also generate and inspire academic debates, as the claims and conclusions are often at odds with mainstream theory.

The Analytic Element Method (AEM) provides a foundation for solving boundary value problems commonly encountered in engineering and science. The goals are: to introduce readers to the basic principles of the AEM, to provide a template for those interested in pursuing these methods, and to empower readers to extend the AEM paradigm to an even broader range of problems. The paradigm is comprehensive: place an element within its landscape, formulate its interactions with other elements using linear series of influence functions, and then solve for its coefficients to match its boundary and interface conditions with nearly exact precision. Collectively, sets of elements interact to transform their environment, and these synergistic interactions are expanded upon for three common types of problems. The first type of problem studies a vector field directed from high to low values of a function, with applications including groundwater flow, vadose zone seepage, incompressible fluid flow, thermal conduction, and electrostatics. A second type of problem studies the interactions of elements with waves, with applications including water waves and acoustics. A third type of problem studies the interactions of elements with stresses and displacements, with applications in elasticity for structures and geomechanics. The AEM paradigm comprehensively employs a background of existing methodology using complex functions, separation of variables, and singular integral equations. This text puts forth new methods for solving important problems across engineering and science, and it has tremendous potential to broaden perspectives and change the way problems are formulated.
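As a toy instance of the paradigm described above (expand an element in a linear series of influence functions, then solve for the coefficients that match its boundary conditions), the sketch below fits a harmonic series on the unit disk to prescribed boundary values by least squares at control points; the function names and the specific problem are illustrative assumptions, not the book's code.

```python
import numpy as np

def solve_disk_dirichlet(g, n_terms=8, n_ctrl=64):
    """Least-squares fit of a harmonic series on the unit disk to
    boundary values g(theta), sampled at n_ctrl control points."""
    theta = np.linspace(0.0, 2 * np.pi, n_ctrl, endpoint=False)
    # influence functions evaluated on the boundary r = 1:
    # the constant, then cos(n*theta) and sin(n*theta) for each order n
    cols = [np.ones_like(theta)]
    for n in range(1, n_terms + 1):
        cols.append(np.cos(n * theta))
        cols.append(np.sin(n * theta))
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, g(theta), rcond=None)

    def phi(r, th):
        """Evaluate the fitted series at interior point (r, th)."""
        val = coeffs[0] * np.ones_like(np.asarray(th, float))
        for n in range(1, n_terms + 1):
            val = val + r**n * (coeffs[2 * n - 1] * np.cos(n * th)
                                + coeffs[2 * n] * np.sin(n * th))
        return val

    return phi

# boundary condition cos(2*theta): the exact interior solution is r^2 cos(2*theta)
phi = solve_disk_dirichlet(lambda t: np.cos(2 * t))
```

Because the boundary data lie in the span of the influence functions, the least-squares fit is exact here; for general data the match is "nearly exact" in the sense the blurb describes, improving with the number of series terms.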

In the process of data analysis, the investigator often faces highly volatile and random-appearing observed data. A vast body of literature shows that the assumption of underlying stochastic processes did not necessarily represent the nature of the processes under investigation and that, when other tools were used, deterministic features emerged. Nonlinear Time Series Analysis (NLTS) allows researchers to test whether observed volatility conceals systematic nonlinear behavior, and to rigorously characterize the governing dynamics. Behavioral patterns detected by NLTS, along with scientific principles and other expert information, guide the specification of mechanistic models that serve to explain real-world behavior rather than merely reproducing it. There is often a misconception regarding the level of mathematics needed to understand and utilize the tools of NLTS (for instance, chaos theory); partly for this reason, these tools have remained confined mostly to the fields of mathematics and physics. In fact, the mathematics used in NLTS is much simpler than that of many other subjects of science, such as mathematical topology, relativity, or particle physics, and natural phenomena investigated in many fields have been revealing deterministic nonlinear structures. In this book we aim to present the theory and the empirical tools of NLTS to a broader audience, to make this very powerful area of science available to many scientific disciplines. This book targets students and professionals in physics, engineering, biology, agriculture, economics, and the social sciences as a textbook in Nonlinear Time Series Analysis (NLTS) using the R computer language.
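A minimal illustration of the NLTS viewpoint (the book itself uses R; this Python sketch with hypothetical helper names carries the same idea): a logistic-map series looks random, yet in delay coordinates it collapses onto a one-dimensional curve, revealing the deterministic rule behind the apparent volatility.

```python
def logistic_series(n, x0=0.2, r=4.0):
    """Iterate the logistic map x -> r*x*(1 - x): the series looks
    noisy, but it is fully deterministic."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def delay_embed(xs, dim=2, tau=1):
    """Build delay-coordinate vectors (x_t, x_{t+tau}, ...), the basic
    attractor-reconstruction step of nonlinear time series analysis."""
    m = len(xs) - (dim - 1) * tau
    return [tuple(xs[t + k * tau] for k in range(dim)) for t in range(m)]

xs = logistic_series(500)
pts = delay_embed(xs)
# in delay coordinates every point lies on the parabola y = 4*x*(1 - x):
# the hidden deterministic structure of the 'random-looking' series
residual = max(abs(y - 4.0 * x * (1.0 - x)) for x, y in pts)
```

A genuinely stochastic series subjected to the same embedding would scatter over the plane instead of collapsing onto a curve, which is the kind of diagnostic the blurb alludes to.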

This book presents a unified approach to a rich and rapidly evolving research domain at the interface between statistical physics, theoretical computer science/discrete mathematics, and coding/information theory. The topics selected, including spin glasses, error-correcting codes, and satisfiability, are central to each field. The approach focuses on the limit of large random instances, adopting a common formulation in terms of graphical models. The book presents message-passing algorithms such as belief propagation and survey propagation, and their use in decoding and constraint satisfaction solving. It also explains analysis techniques such as density evolution and the cavity method, and uses them to derive phase diagrams and study phase transitions.
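To make the message-passing idea concrete, here is a hedged sketch (illustrative names, not the book's code) of sum-product belief propagation on a chain-shaped graphical model, where it computes exact marginals; the same update rules, iterated on loopy graphs, underlie the belief-propagation decoders and constraint-satisfaction solvers discussed above.

```python
import numpy as np

def chain_marginals(psi_list, phi_list):
    """Sum-product belief propagation on a chain graphical model.
    phi_list[i]: unary potential of node i (length-q array);
    psi_list[i]: pairwise potential coupling nodes i and i+1 (q x q)."""
    n = len(phi_list)
    fwd = [np.ones_like(phi_list[0])]              # messages left-to-right
    for i in range(1, n):
        m = (phi_list[i - 1] * fwd[i - 1]) @ psi_list[i - 1]
        fwd.append(m / m.sum())                    # normalize for stability
    bwd = [None] * n                               # messages right-to-left
    bwd[n - 1] = np.ones_like(phi_list[-1])
    for i in range(n - 2, -1, -1):
        m = psi_list[i] @ (phi_list[i + 1] * bwd[i + 1])
        bwd[i] = m / m.sum()
    # belief at node i combines its potential with both incoming messages
    beliefs = [phi_list[i] * fwd[i] * bwd[i] for i in range(n)]
    return [b / b.sum() for b in beliefs]

# sanity check against exhaustive enumeration on a 3-node binary chain
phi = [np.array([0.3, 0.7]), np.array([0.6, 0.4]), np.array([0.2, 0.8])]
psi = [np.array([[1.0, 0.5], [0.5, 2.0]]), np.array([[2.0, 1.0], [1.0, 0.5]])]
beliefs = chain_marginals(psi, phi)
brute = np.zeros(2)
for a in range(2):
    for b in range(2):
        for c in range(2):
            brute[b] += (phi[0][a] * phi[1][b] * phi[2][c]
                         * psi[0][a, b] * psi[1][b, c])
err = abs(beliefs[1] - brute / brute.sum()).max()
```

On trees (of which the chain is the simplest case) these messages converge in one sweep and the beliefs are exact; on graphs with loops they become the approximate, iterative algorithms whose behaviour the cavity method and density evolution analyze.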

This textbook presents kinetic theory, a systematic approach to describing nonequilibrium systems. The text is balanced between the fundamental concepts of kinetic theory (irreversibility, transport processes, separation of time scales, conservation laws, coarse graining, distribution functions, etc.) and the results and predictions of the theory, where the relevant properties of different systems are computed. The book is organised in thematic chapters in which different paradigmatic systems are studied, their specific features described, and the appropriate kinetic equations built and analysed. Specifically, the book considers the classical transport of charges, the dynamics of classical gases, Brownian motion, plasmas and self-gravitating systems, quantum gases, electronic transport in solids and, finally, semiconductors. Beyond these systems, which are studied in detail, the concepts are applied to some modern examples, including the quark–gluon plasma, the motion of bacterial suspensions, the electronic properties of graphene, and the dynamics of a vortex gas, among others. In this way the reader will appreciate how the concepts and tools of kinetic theory can be applied to various situations. Preceding these thematic chapters, an opening chapter presents the main concepts in an introductory manner, and the formal kinetic theory for classical systems is then developed in a general form that serves as the basis for the subsequent thematic chapters. Finally, specific numerical methods for kinetic theory are given in a dedicated chapter.
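The kinetic equations referred to above all share the generic form of an evolution equation for the one-particle distribution function f(r, v, t):

```latex
\frac{\partial f}{\partial t}
  + \mathbf{v}\cdot\nabla_{\mathbf{r}} f
  + \frac{\mathbf{F}}{m}\cdot\nabla_{\mathbf{v}} f
  = \left.\frac{\partial f}{\partial t}\right|_{\mathrm{coll}}
```

with free streaming under the force F on the left-hand side and a system-specific collision term on the right (Boltzmann, Landau, Balescu-Lenard, and so on), which is what distinguishes the paradigmatic systems treated in the thematic chapters.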

The investigation of discrete symmetries is a fascinating subject which has been central to the agenda of physics research for fifty years, and has been the target of many experiments, ongoing and in preparation, all over the world. This book approaches the subject from a somewhat less traditional angle: it puts more emphasis on the experimental aspects of the field, trying to provide a wider picture than usual and to convey the intellectual challenge of experimental physics. The book includes the related connection to phenomenology, a purpose for which the precision experiments in this field, often rather elegant and requiring a good amount of ingenuity, are very well suited. It discusses discrete symmetries (parity, charge conjugation, time reversal, and of course CP symmetry) in microscopic (atomic, nuclear, and particle) physics, and includes a detailed description of some key or representative experiments, discussing their principles and challenges more than their historical development. The main past achievements and the most recent developments are both included.

Over the past three decades, the Lattice Boltzmann method has gained a prominent role as an efficient computational method for the numerical simulation of a wide variety of complex states of flowing matter across a broad range of scales: from fully developed turbulence, to multiphase microflows, all the way down to nano-biofluidics and, lately, even quantum-relativistic subnuclear fluids. After providing a self-contained introduction to the kinetic theory of fluids and a thorough account of its transcription to the lattice framework, this book presents a survey of the major developments which have led to the impressive growth of the Lattice Boltzmann method across most walks of fluid dynamics and its interfaces with allied disciplines, such as statistical physics, material science, soft matter, and biology. This includes recent developments of Lattice Boltzmann methods for non-ideal fluids, micro- and nanofluidic flows with suspended bodies of assorted nature, and extensions to strong non-equilibrium flows beyond the realm of continuum fluid mechanics. In the final part, the book also presents the extension of the Lattice Boltzmann method to quantum and relativistic fluids, in an attempt to match the major surge of interest spurred by recent developments in the area of strongly interacting holographic fluids, such as quark-gluon plasmas and electron flows in graphene. It is hoped that this book may provide a source of information, and possibly inspiration, to a broad audience of scientists dealing with the physics of classical and quantum flowing matter across many scales of motion.
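To give a flavour of the method, here is a hedged sketch (illustrative code, not the book's) of the core ingredients at a single lattice site: the D2Q9 discrete velocities, the second-order equilibrium distribution, and a single-relaxation-time (BGK) collision step, which conserves mass and momentum by construction.

```python
import numpy as np

# D2Q9 lattice: nine discrete velocities and their weights (c_s^2 = 1/3)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Second-order expansion of the Maxwell-Boltzmann distribution
    on the lattice, for density rho and velocity u."""
    cu = c @ u                        # projections c_i . u
    usq = u @ u
    return rho * w * (1.0 + 3.0 * cu + 4.5 * cu**2 - 1.5 * usq)

def bgk_collide(f, tau=0.8):
    """BGK collision: relax the populations f toward the local
    equilibrium sharing the same density and momentum."""
    rho = f.sum()
    u = (f @ c) / rho
    return f + (equilibrium(rho, u) - f) / tau

# relax an out-of-equilibrium population (extra momentum along +x)
f0 = np.full(9, 1/9)
f0[1] += 0.05
f1 = bgk_collide(f0)
```

A full simulation alternates this collision step with streaming of each population to the neighbouring site along its velocity; the conservation of the hydrodynamic moments under collision is what guarantees that the scheme reproduces fluid dynamics at large scales.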

This book supports researchers who need to generate random networks, or who are interested in the theoretical study of random graphs. The coverage includes exponential random graphs (where the targeted probability of each network appearing in the ensemble is specified), growth algorithms (e.g. preferential attachment and the stub-joining configuration model), special constructions (e.g. geometric graphs and Watts–Strogatz models), and graphs on structured spaces (e.g. multiplex networks). The presentation aims to be a complete starting point, including details of both theory and implementation, as well as discussions of the main strengths and weaknesses of each approach, with extensive references for readers wishing to go further. The material is carefully structured to be accessible to researchers from all disciplines while also containing rigorous mathematical analysis (largely based on the techniques of statistical mechanics) to support those wishing to further develop or implement the theory of random graph generation. The book is aimed at graduate students and advanced undergraduates. It includes many worked examples, numerical simulations, and exercises, making it suitable for use in teaching, and explicit pseudocode algorithms are included to make the ideas easy to apply. Datasets are becoming increasingly large and network applications wider and more sophisticated. Testing hypotheses against properly specified control cases (null models) is at the heart of the ‘scientific method’, and knowledge of how to generate controlled and unbiased random graph ensembles is vital for anybody wishing to apply network science in their research.
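As an illustration of the growth algorithms mentioned above (a sketch with illustrative names, not the book's pseudocode), the following grows a graph by preferential attachment: each new node links to m existing nodes chosen with probability proportional to degree, implemented by sampling uniformly from a running list of edge endpoints.

```python
import random

def preferential_attachment(n, m, seed=None):
    """Grow an n-node graph where each new node attaches to m distinct
    existing nodes with probability proportional to their degree."""
    rng = random.Random(seed)
    # start from a small complete core of m + 1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # each endpoint appears once per incident edge, so uniform sampling
    # from this list is degree-proportional sampling of nodes
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges

edges = preferential_attachment(200, 2, seed=1)
```

The endpoint-list trick keeps each attachment an O(1) sample; ensembles generated this way develop the heavy-tailed degree distributions that make such growth models useful null models for real networks.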

Financial markets provide a fascinating example of ‘complexity in action’: a real-world complex system whose evolution is dictated by the decisions of crowds of traders who are continually trying to win in a vast global ‘game’. This book draws on recent ideas from the highly topical science of complexity and complex systems to address the following questions: how do financial markets behave? Why do they behave in the way that they do? What can we do to minimize risk, given this behavior? Standard finance theory is built around several seemingly innocuous assumptions about market dynamics. This book shows how these assumptions can give misleading answers to crucially important practical problems such as minimizing financial risk, coping with extreme events such as crashes or drawdowns, and pricing derivatives. After discussing the background to the concept of complexity and the structure of financial markets in Chapter 1, Chapter 2 examines the assumptions upon which standard finance theory is built. Reality sets in with Chapter 3, where data from two seemingly different markets are analyzed and certain universal features uncovered which cannot be explained within standard finance theory. Chapters 4 and 5 mark a significant departure from the philosophy of standard finance theory, being concerned with exploring microscopic models of markets which are faithful to real market microstructure, yet which also reproduce real-world features. Chapter 6 moves to the practical problem of how to quantify and hedge risk in real-world markets. Chapter 7 discusses deterministic descriptions of market dynamics, incorporating the topics of chaos and the all-important phenomenon of market crashes.

The book attempts to provide an introduction to quantum field theory emphasizing conceptual issues frequently neglected in more ‘utilitarian’ treatments of the subject. The book is divided into four parts, which look in turn at origins, dynamics, symmetries, and scales. The emphasis is conceptual: the aim is to build the theory up systematically from some clearly stated foundational concepts. The approach is therefore to a large extent anti-historical, but two historical chapters are included to situate quantum field theory in the larger context of modern physical theories. The three remaining sections of the book follow a step-by-step reconstruction of this framework, beginning with just a few basic assumptions: relativistic invariance, the basic principles of quantum mechanics, and the prohibition of physical action at a distance embodied in the clustering principle. The second section of the book lays out the basic structure of quantum field theory arising from the sequential insertion of quantum-mechanical, relativistic, and locality constraints. The central role of symmetries in relativistic quantum field theories is explored in the third section of the book, while in the final section the book explores in detail the feature of quantum field theories most critical for their enormous phenomenological success: the scale-separation property embodied in the renormalization group properties of a theory defined by an effective local Lagrangian.