
Today, Games User Research forms an integral component of the development of any kind of interactive entertainment. User research stands as the primary source of business intelligence in the incredibly competitive game industry. This book aims to provide the foundational, accessible, go-to resource for people interested in GUR. It is a community-driven effort, written by passionate professionals and researchers in the GUR community as a handbook and guide for everyone interested in user research and games. The book bridges the current gaps of knowledge in Games User Research, providing the go-to volume for everyone working with games, with an emphasis on those new to the field.

Inductive logic (also known as confirmation theory) seeks to determine the extent to which the premisses of an argument entail its conclusion. This book offers an introduction to the field of inductive logic and develops a new Bayesian inductive logic. Chapter 1 introduces perhaps the simplest and most natural account of inductive logic, classical inductive logic, which is attributable to Ludwig Wittgenstein. Classical inductive logic is seen to fail in a crucial way, so there is a need to develop more sophisticated inductive logics. Chapter 2 presents enough logic and probability theory for the reader to begin to study inductive logic, while Chapter 3 introduces the ways in which logic and probability can be combined in an inductive logic. Chapter 4 analyses the most influential approach to inductive logic, due to W.E. Johnson and Rudolf Carnap. Again, this logic is seen to be inadequate. Chapter 5 shows how an alternative approach to inductive logic follows naturally from the philosophical theory of objective Bayesian epistemology. This approach preserves the inferences that classical inductive logic gets right (Chapter 6). On the other hand, it also offers a way out of the problems that beset classical inductive logic (Chapter 7). Chapter 8 defends the approach by tackling several key criticisms that are often levelled at inductive logic. Chapter 9 presents a formal justification of the version of objective Bayesianism which underpins the approach. Chapter 10 explains what has been achieved and poses some open questions.
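The crucial failure mentioned above can be made concrete with a small calculation (a minimal sketch, not the book's own presentation; the four-trial setup is an invented illustration). On the Wittgensteinian account, every state description, i.e. every complete assignment of outcomes to trials, receives equal probability, and a consequence is that past observations never change the probability of the next outcome:

```python
from fractions import Fraction
from itertools import product

# Four binary trials; each complete sequence ("state description")
# gets the same probability, per the classical account.
n = 4
states = list(product([0, 1], repeat=n))
prob = Fraction(1, len(states))

# Condition on the first n-1 trials all coming up 1
evidence = [s for s in states if all(x == 1 for x in s[: n - 1])]
hyp = [s for s in evidence if s[n - 1] == 1]

posterior = (len(hyp) * prob) / (len(evidence) * prob)
# posterior is 1/2: a run of successes does nothing to raise the
# probability of the next success, so no learning from experience
```

The same computation for any run length gives 1/2, which is exactly the sense in which classical inductive logic fails to support enumerative induction.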

The Error of Truth recounts the astonishing and unexpected tale of how quantitative thinking was invented and rose to primacy in our lives in the nineteenth and early twentieth centuries, bringing us to an entirely new perspective on what we know about the world and how we know it—even on what we each think about ourselves. Quantitative thinking is our inclination to view natural and everyday phenomena through a lens of measurable events, with forecasts, odds, predictions, and likelihood playing a dominant part. This worldview, or Weltanschauung, is unlike anything humankind had before, and it came about because of a momentous human achievement: namely, we had learned how to measure uncertainty. Probability as a science had been invented. Through probability theory, we now had correlations, reliable predictions, regressions, the bell-shaped curve for studying social phenomena, and the psychometrics of educational testing. Significantly, these developments in mathematics happened during a relatively short period in world history: roughly, the 130-year period from 1790 to 1920, from about the close of the Napoleonic era, through the Enlightenment and the Industrial Revolutions, to the end of World War I. Quantification is now everywhere in our daily lives, such as in the ubiquitous microchip in smartphones, cars, and appliances, in the Bayesian logic of artificial intelligence, and in applications in business, engineering, medicine, economics, and elsewhere. Probability is the foundation of our quantitative thinking. Here we see its story: when, why, and how it came to be and changed us forever.

Most of our everyday life experiences are multisensory in nature, i.e. they consist of what we see, hear, feel, taste, smell, and much more. Almost any experience, such as eating a meal or going to the cinema, involves a magnificent sensory world. In recent years, many of these experiences have been increasingly transformed through technological advancements such as multisensory devices and intelligent systems. This book takes the reader on a journey that begins with the fundamentals of multisensory experiences, moves through the relationship between the senses and technology, and finishes by considering what the future of those experiences may look like, and our responsibility in it. The book seeks to empower the reader to shape his or her own and other people’s experiences by considering the multisensory worlds in which we live. This book is a powerful and personal story about the authors’ passion for, and viewpoint on, multisensory experiences.

Digital signal processing (DSP) is one of the ‘foundational’ engineering topics of the modern world, without which technologies such as the mobile phone, television, CD and MP3 players, WiFi and radar, would not be possible. A relative newcomer by comparison, statistical machine learning is the theoretical backbone of exciting technologies such as automatic techniques for car registration plate recognition, speech recognition, stock market prediction, defect detection on assembly lines, robot guidance and autonomous car navigation. Statistical machine learning exploits the analogy between intelligent information processing in biological brains and sophisticated statistical modelling and inference. DSP and statistical machine learning are of such wide importance to the knowledge economy that both have undergone rapid changes and seen radical improvements in scope and applicability. Both make use of key topics in applied mathematics such as probability and statistics, algebra, calculus, graphs and networks. Intimate formal links exist between the two subjects, and the many resulting overlaps can be exploited to produce new DSP tools of surprising utility, highly suited to the contemporary world of pervasive digital sensors and high-powered yet cheap computing hardware. This book gives a solid mathematical foundation to, and details the key concepts and algorithms in, this important topic.

Fortran marches on, remaining one of the principal programming languages used in high-performance scientific, numerical, and engineering computing. A series of significant revisions to the standard versions of the language have progressively enhanced its capabilities, and the latest standard—Fortran 2018—includes many additions and improvements. This second edition of Modern Fortran Explained expands on the first. Given the release of updated versions of Fortran compilers, the separate descriptions of Fortran 2003 and Fortran 2008 have been incorporated into the main text, which thereby becomes a unified description of the full Fortran 2008 version of the language. This is much cleaner, many deficiencies and irregularities in the earlier language versions having been resolved. It includes object orientation and parallel processing with coarrays. Four completely new chapters describe the additional features of Fortran 2018, with its enhancements to coarrays for parallel programming, interoperability with C, IEEE arithmetic, and various other improvements. Written by leading experts in the field, two of whom have actively contributed to Fortran 2018, this is a complete and authoritative description of Fortran in its latest form. It is intended for new and existing users of the language, and for all those involved in scientific and numerical computing. It is suitable as a textbook for teaching and, with its index, as a handy reference for practitioners.

Many online applications, especially in the financial industries, are running on blockchain technologies in a decentralized manner, without the use of an authoritative entity or a trusted third party. Such systems are secured only by cryptographic protocols and a consensus mechanism. As blockchain-based solutions will continue to revolutionize online applications in a growing digital market in the future, one needs to identify the principal opportunities and potential risks. Hence, it is essential to learn the mathematical and cryptographic procedures behind blockchain technology in order to understand how such systems work and where the weak points are. The book provides an introduction to the mathematical and cryptographic concepts behind blockchain technologies and shows how they are applied in blockchain-based systems. This includes an introduction to the general blockchain technology approaches that are used to build the so-called immutable ledgers, which are based on cryptographic signature schemes. As future quantum computers will break some of the current cryptographic primitives, the book considers their security and presents the current research results that estimate the impact on blockchain-based systems if some of the cryptographic primitives break. Based on the example of Bitcoin, it shows that weak cryptographic primitives pose a possible danger for the ledger, which can be overcome through the use of so-called post-quantum cryptographic approaches, which are introduced as well.
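The core idea behind an "immutable ledger" can be illustrated with a minimal hash chain (a simplified sketch only: real blockchains add digital signatures, proof-of-work or another consensus mechanism, and Merkle trees; the transaction strings here are invented for illustration):

```python
import hashlib
import json

def block_hash(block):
    # Hash of the block's canonical JSON encoding
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    # Each new block commits to the hash of its predecessor
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def verify(chain):
    # The chain is valid only if every stored link still matches
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, "alice pays bob 5")
append_block(chain, "bob pays carol 2")
ok_before = verify(chain)               # untampered chain verifies
chain[0]["data"] = "alice pays bob 500" # retroactive edit...
ok_after = verify(chain)                # ...breaks every later link
```

Because each block commits to its predecessor's hash, rewriting any historical entry invalidates the rest of the chain, which is the sense in which the ledger is "immutable" once honest parties hold copies of it.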

This book is a technical, historical, and conceptual investigation into the three main methodological approaches to the computational sciences: mathematical, engineering, and experimental. Part I explores the background behind the formal understanding of computing, originating at the end of the nineteenth century, and it investigates the formal origins and conceptual development of the notions of computation, algorithm, and program. Part II overviews the construction of physical devices to perform automated tasks and considers associated technical and conceptual issues. It starts with the design and construction of the first generation of computing machines, explores their evolution and progress in engineering (for both hardware and software), and investigates their theoretical and conceptual problems. Part III analyses the methods and principles of experimental sciences founded on computational methods. It studies the use of machines to perform scientific tasks, with particular reference to computer models and simulations. Each part aims at defining a notion of computational validity according to the corresponding methodological approach.

This book is aimed at students interested in using game theory as a design methodology for solving problems in engineering and computer science. The book shows that such design challenges can be analyzed through game theoretical perspectives that help to pinpoint each problem's essence: Who are the players? What are their goals? Will the solution to “the game” solve the original design problem? Using the fundamentals of game theory, the book explores these issues and more. The use of game theory in technology design is a recent development arising from the intrinsic limitations of classical optimization-based designs. In optimization, one attempts to find values for parameters that minimize suitably defined criteria—such as monetary cost, energy consumption, or heat generated. However, in most engineering applications, there is always some uncertainty as to how the selected parameters will affect the final objective. Through a sequential and easy-to-understand discussion, the book examines how to make sure that the selection leads to acceptable performance, even in the presence of uncertainty—the unforgiving variable that can wreck engineering designs. The book looks at such standard topics as zero-sum, nonzero-sum, and dynamic games and includes a MATLAB guide to coding. This book offers students a fresh way of approaching engineering and computer science applications.
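The contrast with plain optimization can be made concrete with the classic zero-sum game of matching pennies (a minimal sketch; the payoff matrix and the grid-search resolution are illustrative choices, not the book's own MATLAB code). Instead of minimizing a fixed criterion, the designer maximizes the payoff guaranteed against any choice the opponent, or the uncertainty, might make:

```python
# Row player's payoffs in matching pennies (zero-sum:
# the column player receives the negation of each entry)
A = [[1, -1],
     [-1, 1]]

def worst_case(p):
    # Guaranteed expected payoff when playing row 0 with probability p,
    # against the column player's best (for them) pure response
    return min(p * A[0][j] + (1 - p) * A[1][j] for j in range(2))

# Security (maximin) strategy: the mix that maximizes the worst case
grid = [i / 1000 for i in range(1001)]
best_p = max(grid, key=worst_case)
game_value = worst_case(best_p)
# Randomizing evenly (p = 0.5) guarantees an expected payoff of 0
# no matter what the opponent does; any deterministic choice can be exploited
```

This is the sense in which a game-theoretic design guards against uncertainty: the solution is evaluated by its worst-case guarantee rather than by its performance under one assumed scenario.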

This book focuses on interpolation and definability. These notions are not only central in pure logic, but have significant meaning and applicability in all areas where logic itself is applied, especially in computer science, artificial intelligence, logic programming, philosophy of science, and natural language. The book provides basic knowledge on interpolation and definability in logic, and contains a systematic account of material which has been presented in many papers. A variety of methods and results are presented, beginning with the famous Beth's and Craig's theorems in classical predicate logic (1953–57), and proceeding to the most valuable achievements in non-classical logic, mainly intuitionistic and modal logic. Together with semantical and proof-theoretic methods, close interrelations between logic and universal algebra are established and exploited.

A starting point of Bolzano’s logical reflection was the conviction that among truths there is a connection, according to which some truths are grounds of others, and these in turn are consequences of the former, and that such a connection is objective, i.e. subsisting independently of every cognitive activity of the subject. In the attempt to account for the distinction between subjective and objective levels of knowledge, Bolzano gradually gained the conviction that the reference of the subject to the object is mediated by a realm of entities without existence that, recalling the Stoic lectà, are here called ‘lectological’. Moreover, of the two main ways through which that reference takes place—psychic activity and linguistic activity—Bolzano favoured the first and traced back to it the problems of the second; i.e. he considered those intermediate entities first as possible content of psychic phenomena and only subordinately, on the basis of a complex theory of signs, as meanings of linguistic phenomena. This book follows this schema and treats, in great detail, first, lectological entities (ideas and propositions in themselves), second, cognitive psychic phenomena (subjective ideas and judgements), and, finally, linguistic phenomena. Moreover, it tries to bring to light the extraordinary systematic character of Bolzano’s logical thought, and it does this by showing that the main logical ideas developed principally in the first three parts of the Theory of Science, published in 1837, can be effortlessly formally presented within the well-known Hilbertian epsilon-calculus.

This book presents computational interaction as an approach to explaining and enhancing the interaction between humans and information technology. Computational interaction applies abstraction, automation, and analysis to inform our understanding of the structure of interaction and also to inform the design of the software that drives new and exciting human-computer interfaces. The methods of computational interaction allow, for example, designers to identify user interfaces that are optimal against some objective criteria. They also allow software engineers to build interactive systems that adapt their behaviour to better suit individual capacities and preferences. Embedded in an iterative design process, computational interaction has the potential to complement human strengths and provide methods for generating inspiring and elegant designs. Computational interaction does not exclude the messy and complicated behaviour of humans; rather, it embraces it by, for example, using models that are sensitive to uncertainty and that capture subtle variations between individual users. It also promotes the idea that there are many aspects of interaction that can be augmented by algorithms. This book introduces computational interaction design to the reader by exploring a wide range of computational interaction techniques, strategies and methods. It explains how techniques such as optimisation, economic modelling, machine learning, control theory, formal methods, cognitive models and statistical language processing can be used to model interaction and design more expressive, efficient and versatile interaction.

The logician Kurt Gödel in 1951 established a disjunctive thesis about the scope and limits of mathematical knowledge: either the mathematical mind is equivalent to a Turing machine (i.e., a computer) or there are absolutely undecidable mathematical problems. In the second half of the twentieth century, attempts have been made to arrive at a stronger conclusion. In particular, arguments have been produced by the philosopher J.R. Lucas and by the physicist and mathematician Roger Penrose that intend to show that the mathematical mind is more powerful than any computer. These arguments, and counterarguments to them, have not convinced the logical and philosophical community. The reason for this is an insufficiency of rigour in the debate. The contributions in this volume move the debate forward by formulating rigorous frameworks and formally spelling out and evaluating arguments that bear on Gödel’s disjunction in these frameworks. The contributions in this volume have been written by world leading experts in the field.

This is the third edition of a well-known graduate textbook on Boolean-valued models of set theory. The aim of the first and second editions was to provide a systematic and adequately motivated exposition of the theory of Boolean-valued models as developed by Scott and Solovay in the 1960s, deriving along the way the central set theoretic independence proofs of Cohen and others in the particularly elegant form that the Boolean-valued approach enables them to assume. In this edition, the background material has been augmented to include an introduction to Heyting algebras. It includes chapters on Boolean-valued analysis and Heyting-algebra-valued models of intuitionistic set theory.

In 1914, in an essay entitled ‘Logic as the Essence of Philosophy’, Bertrand Russell promised to revolutionize philosophy by introducing there the ‘new logic’ of Frege and Peano: “The old logic put thought in fetters, while the new logic gives it wings.” A century later, this book proposes a comparable revolution with a newly emerging logic, modal homotopy type theory. Russell’s prediction turned out to be accurate. Frege’s first-order logic, along with its extension to modal logic, is to be found throughout anglophone analytic philosophy. This book provides a considerable array of evidence for the claim that philosophers working in metaphysics, as well as those treating language, logic or mathematics, would be much better served with the new ‘new logic’. It offers an introduction to this new logic, thoroughly motivated by intuitive explanations of the need for all of its component parts—the discipline of a type theory, the flexibility of type dependency, the more refined homotopic notion of identity and a powerful range of modalities. Innovative applications of the calculus are given, including analysis of the distinction between objects and events, an intrinsic treatment of structure and a conception of modality both as a form of general variation and as allowing constructions in modern geometry. In this way, we see how varied are the applications of this powerful new language—modal homotopy type theory.

This book gives an account of the present state of research on lattices of elementary substructures and automorphisms of nonstandard models of arithmetic. Major representation theorems are proved, and the important particular case of countable recursively saturated models is discussed in detail. All necessary technical tools are developed. The list includes: constructions of elementary simple extensions; a partial classification of arithmetic types, in particular Gaifman's theory of definable types; forcing in arithmetic; elements of the Kirby–Paris combinatorial theory of cuts; Lascar's generic automorphisms; and applications of Abramson and Harrington's generalization of Ramsey's theorem. There are also chapters discussing ω1-like models with interesting second order properties, and a chapter on order types of nonstandard models.

Bayesian epistemology aims to answer the following question: How strongly should an agent believe the various propositions expressible in her language? Subjective Bayesians hold that it is largely (though not entirely) up to the agent as to which degrees of belief to adopt. Objective Bayesians, on the other hand, maintain that appropriate degrees of belief are largely (though not entirely) determined by the agent's evidence. This book states and defends a version of objective Bayesian epistemology. According to this version, objective Bayesianism is characterized by three norms: (i) Probability: degrees of belief should be probabilities; (ii) Calibration: they should be calibrated with evidence; and (iii) Equivocation: they should otherwise equivocate between basic outcomes. Objective Bayesianism has been challenged on a number of different fronts: for example, it has been accused of being poorly motivated, of failing to handle qualitative evidence, of yielding counter-intuitive degrees of belief after updating, of suffering from a failure to learn from experience, of being computationally intractable, of being susceptible to paradox, of being language dependent, and of not being objective enough. The book argues that these criticisms can be met and that objective Bayesianism is a promising theory with an exciting agenda for further research.
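The three norms can be illustrated with a toy calculation (a minimal sketch under invented numbers: evidence is taken to constrain P(rain) to at least 0.7, and Equivocation is implemented, as is common in objective Bayesian accounts, as entropy maximization over the admissible distributions):

```python
import math

def entropy(dist):
    # Shannon entropy; maximal for the uniform distribution
    return -sum(p * math.log(p) for p in dist if p > 0)

# Probability norm: candidates are genuine probability distributions
# over the two basic outcomes (rain, no rain).
# Calibration norm: evidence constrains P(rain) to the interval [0.7, 1].
candidates = [(r / 1000, 1 - r / 1000) for r in range(700, 1001)]

# Equivocation norm: among the calibrated distributions, adopt the
# most equivocal one, i.e. the entropy maximizer
belief = max(candidates, key=entropy)
# belief is (0.7, 0.3): as close to the uniform (0.5, 0.5)
# as the evidence allows
```

The agent thus believes no more strongly than her evidence forces her to: the constraint pins P(rain) to the end of the admissible interval nearest the uniform distribution.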

The main purpose of this book is to demonstrate, by way of example, the several advantages of using a formal game-theoretic framework to explain complex events, diplomatic history, and contentious interstate relationships, via causal mechanisms and rationality. Chapter 1 lays out the broad parameters and major concepts of the mathematical theory of games and its applications in the security studies literature. Chapter 2 explores a number of issues connected with the use of game-theoretic models to organize analytic narratives, both generally and specifically. Chapter 3 interprets the Moroccan crisis of 1905–6 in the context of an incomplete information game model. Chapter 4 surveys and evaluates several prominent attempts to use game theory to explain the strategic dynamic of the Cuban missile crisis of 1962. Chapter 5 offers a general explanation that answers all of the foundational questions associated with the Cuban crisis within the confines of a single, integrated, game-theoretic model with incomplete information. Chapter 6 uses the same game form to develop a logically consistent and empirically plausible explanation of the outbreak of war in Europe in early August 1914. Chapter 7 introduces perfect deterrence theory and contrasts it with the prevailing realist theory of interstate war prevention, and classical deterrence theory. Chapter 8 addresses the charge made by some behavioral economists (and many strategic analysts) that game theory is of limited utility for understanding interstate conflict behavior.

Algorithms are the hidden methods that computers apply to process information and make decisions. The book tells the story of algorithms from their ancient origins to the present day and beyond. The book introduces readers to the inventors and events behind the genesis of the world’s most important algorithms. Along the way, it explains, with the aid of examples and illustrations, how the most influential algorithms work. The first algorithms were invented in Mesopotamia 4,000 years ago. The ancient Greeks refined the concept, creating algorithms for finding prime numbers and calculating the digits of pi. Al-Khwarizmi’s 9th-century books on algorithms ultimately became their conduit to the West. The invention of the electronic computer during World War II transformed the importance of the algorithm. The first computer algorithms were for military applications. In peacetime, researchers turned to grander challenges: forecasting the weather, route navigation, choosing marriage partners, and creating artificial intelligences. The success of the Internet in the 1970s depended on algorithms for transporting data and correcting errors. A clever algorithm for ranking websites was the spark that ignited Google. Recommender algorithms boosted sales at Amazon and Netflix, while the EdgeRank algorithm drove Facebook’s News Feed. In the 21st century, an algorithm that mimics the operation of the human brain was revisited with the latest computer technology. Suddenly, algorithms attained human-level accuracy in object and speech recognition. An algorithm defeated the world champion at Go, the most complex of board games. Today, algorithms for cryptocurrencies and quantum computing look set to change the world.

This book is about simple first-order theories. The class of simple theories was introduced by S. Shelah in the early 1980s. Subsequently, several specific algebraic structures having simple theories were studied by leading researchers, notably E. Hrushovski. In the mid-1990s the author established in his thesis the symmetry and transitivity of nonforking for simple theories and, with A. Pillay, type amalgamation for Lascar strong types. Since then a great deal of research work on simplicity theory, the study of simple theories and structures, has been produced. This book starts with the introduction of the fundamental notions of dividing and forking, and covers up to the hyperdefinable group configuration theorem for simple theories.