Welcome to the 4th Workshop on Statistical Physics! This workshop is part of a series organized by Universidad de los Andes and Universidad Nacional de Colombia. Our primary objective is to foster academic growth and collaboration within the field of statistical physics.
The workshop offers a unique opportunity for the academic community to delve into the latest developments and research methods that have significantly impacted statistical physics over the past decades. Through a series of engaging mini-courses, we aim to introduce participants to these topics, allowing them to be incorporated into advanced graduate courses.
Furthermore, the workshop serves as a platform for the local statistical mechanics community to share their cutting-edge research findings. Through invited conferences, oral presentations, and poster sessions, participants have the chance to present their most recent work and establish scientific exchanges and collaborations. By facilitating these interactions, we strive to foster the growth and advancement of Statistical Physics in Colombia.
The event is designed for professors and researchers specializing in statistical physics, as well as graduate and undergraduate students with a foundational understanding of statistical mechanics. Whether you are an expert in the field or a budding physicist, this workshop offers valuable insights and networking opportunities.
The workshop spans five days, combining three days of engaging short courses from Monday to Wednesday with two days dedicated to invited conferences, oral presentations, and poster sessions on Thursday and Friday. This format ensures a comprehensive learning experience while allowing ample time for attendees to engage with their peers and explore potential collaborations.
Join us at the 4th Workshop on Statistical Physics as we strive to advance knowledge, foster collaboration, and contribute to the development of Statistical Physics in Colombia and beyond.
In this course we will first introduce the basic concepts of percolation, its scaling laws and its fractal subsets like the backbone, the elastic backbone, the shortest path with its perturbations, and the distribution of currents. We will also present rigidity percolation, bootstrap percolation and drilling percolation. Particular emphasis will be given to discontinuous percolation, like explosive and bridge percolation, as well as abrupt epidemic spreading. Next, we will consider correlated surfaces with Hurst exponents to study the fractality of their coastlines, their watersheds and their retention capacity. We will investigate the Schramm-Loewner evolution of various loopless curves on top of these surfaces. We will dedicate the end of the course to failure models, like the fuse model, metallic breakdown and the optimal path crack.
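As a taste of the kind of numerical experiment the course builds on, a minimal sketch of site percolation on a square lattice, with illustrative parameters, estimates the probability that an occupied cluster spans the system:

```python
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(0)

def spans(L, p):
    """Occupy sites of an L x L lattice with probability p and test for a
    top-to-bottom spanning cluster (nearest-neighbour connectivity)."""
    occupied = rng.random((L, L)) < p
    labels, _ = label(occupied)
    top = labels[0, labels[0] > 0]
    bottom = labels[-1, labels[-1] > 0]
    return bool(np.intersect1d(top, bottom).size)

# The spanning probability rises sharply near the site-percolation threshold p_c ~ 0.5927
for p in (0.50, 0.55, 0.59, 0.63, 0.70):
    print(p, np.mean([spans(128, p) for _ in range(100)]))
```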
Control theory is an important topic for physicists that is rarely covered by standard curricula. In these three sessions, I will introduce the main ideas, discuss basic concepts such as feedback, feedforward, and robustness, and show how they apply to stochastic thermodynamics.
In the last decades thermodynamics has been extended to small (microscopic or nanoscopic) scales, where fluctuations play a major role pushing systems out of equilibrium, and where genuine quantum effects can no longer be neglected. Quantum thermodynamics is an interdisciplinary and growing field that lies at the intersection of quantum information and non-equilibrium statistical physics. It aims to study quantum systems from a new perspective, emphasizing the energetic and entropic costs of quantum operations, and investigating possible enhancements of classical thermodynamic tasks by means of genuine quantum effects.
In this mini-course I will introduce some of the main concepts in quantum thermodynamics, including the definitions of work, heat and entropy production at the quantum level, as well as some of their applications, such as designing quantum thermal machines that perform useful thermodynamic tasks. For that purpose I will introduce some basics of open and monitored quantum systems, which will allow us to describe fluctuations of relevant thermodynamic quantities in generic quantum processes. Using these tools we will also see how to derive universal results such as the so-called fluctuation theorems, as well as other related inequalities, altogether refining our understanding of the second law and irreversibility.
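As a small illustration of the flavour of such results (a sketch with assumed parameters, not material from the course), the two-point-measurement work statistics of a driven qubit satisfy the Jarzynski equality $\langle e^{-\beta W}\rangle = e^{-\beta\Delta F}$ for any unitary drive:

```python
import numpy as np

beta = 1.0
# Initial and final qubit Hamiltonians (illustrative energy levels)
E0 = np.array([-0.5, 0.5])
Et = np.array([-1.0, 1.0])
# Any unitary generated by the drive works; here a simple rotation
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

p0 = np.exp(-beta * E0); Z0 = p0.sum(); p0 /= Z0   # initial thermal populations
Zt = np.exp(-beta * Et).sum()

# Two-point-measurement work distribution: P(n, m) = p0[n] |<m|U|n>|^2, W = Et[m] - E0[n]
avg_exp = 0.0
for n in range(2):
    for m in range(2):
        avg_exp += p0[n] * abs(U[m, n])**2 * np.exp(-beta * (Et[m] - E0[n]))

print(avg_exp, Zt / Z0)   # both equal exp(-beta * DeltaF)
```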
A bearing is a system of spheres (or disks) in contact. If every loop in a bearing is even, one can obtain “bearing states”, in which touching spheres roll on each other without slip. We frustrate a system of touching spheres by imposing two different bearing states on opposite sides and search for the configurations of lowest energy dissipation. For Coulomb friction (with random friction coefficients) in two dimensions, a sharp line separates the two bearing states, and we prove that this line corresponds to the minimum cut. Astonishingly, however, in three dimensions, intermediate bearing domains that are not synchronized with either side are energetically more favourable than the minimum-cut surface. This novel state of minimum dissipation is characterized by a spanning network of slip-less contacts that reaches every sphere. Such a situation becomes possible because in three dimensions bearings of loops of size four have four degrees of freedom. By considering spheres of different sizes, packings with bearing states can even be made space-filling. The construction and mechanical properties of such space-filling bearings will be discussed. Space-filling bearing states can be viewed as a realization of solid turbulence exhibiting Kolmogorov scaling and anomalous heat conduction. Bearing states can be perceived as physical realizations of networks of oscillators with asymmetrically weighted couplings. These networks can exhibit optimal synchronization properties through tuning of the local interaction strength as a function of node degree, or of the inertia of their constituent rotor disks through a power-law mass-radius relation. As a consequence, one finds that space-filling bearings synchronize fastest when they are hollow.
Invited talks
We study the supercooled dynamics of the Gaussian Core Model in the low- and intermediate-density regimes by means of molecular dynamics simulations. In particular, we discuss the transition from the low-density hard-sphere-like glassy dynamics to the high-density one. The caging mechanism describes the dynamics at low densities well, giving rise to intermittent dynamics. At high densities, the particles undergo a more continuous motion in which the cage concept loses meaning. We elaborate on the idea that these different supercooled dynamics are in fact the precursors of two different glass states.
The use of cognitive assessments aids in identifying notable impairments, especially in cases involving the frontal lobe, as well as in neurodegenerative disorders such as Alzheimer's, Parkinson's, and semantic dementia, among other pathologies.
This study concentrates on semantic memory, a crucial facet of cognitive storage responsible for integrating language and concepts acquired through everyday experiences. The conceptualization begins by presenting memory as a complex semantic network, which supports various exploration processes. The extent of these processes varies depending on whether the exploration is global or localized and on its strategic direction. Employing a stochastic model known as the 'switching random walker model', it becomes feasible to emulate lexical searches within semantic networks. This model induces a diffusive process, characterized by Markov chains, on the network. The efficiency and performance of these networks are evaluated through the entropy rate and the mean first passage time.
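A minimal sketch of these two quantities for an unbiased random walker, with a toy adjacency matrix standing in for a real semantic network:

```python
import numpy as np

# Toy undirected "semantic network" (illustrative adjacency matrix)
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)        # transition matrix of the unbiased walk
pi = A.sum(axis=1) / A.sum()                # stationary distribution (degree / total degree)

# Entropy rate of the Markov chain
logP = np.log(np.where(P > 0, P, 1.0))
h = -np.sum(pi[:, None] * P * logP)

# Mean first passage time to a target node: solve (I - Q) m = 1 on the remaining nodes
target = 4
keep = [i for i in range(len(A)) if i != target]
Q = P[np.ix_(keep, keep)]
m = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
print("entropy rate:", h, " MFPT to node", target, ":", dict(zip(keep, m)))
```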
Additionally, a unique approach introduces a fluency-test model aimed at probing the correlation between short-term memory and the predisposition for lexical repetition. Instances of lexical repetition arise during tasks requiring semantic verbal fluency, wherein individuals may inadvertently produce repeated words. This occurs when a word is reached by traversing the semantic network but the individual cannot ascertain whether it was previously uttered.
This research strives to bridge the gap between analyzing the exploration mechanisms inherent to semantic networks and the evaluation of lexical repetition events. To this end, the model combines: i) the topological features characterizing the underlying semantic network; ii) exploration mechanisms rooted in switching dynamics and clustering modalities; and iii) a short-term memory, embodied as a first-in, first-out buffer of fixed size that retains the most recently articulated lexical elements.
The biodiverse triangle of Santurbán and Berlin, in northern Colombia, has been at the center of an environmental conflict since the 1990s. Gold-mining corporations, national and local governments, and the region's inhabitants are clashing over economic development and the protection of the main source of water for more than 3 million people. The Colombian legal framework provides individuals and communities with a set of tools to safeguard individual and communal rights. However, small communities face two large difficulties: on the one hand, they usually lack the resources to set up legal teams; on the other hand, the final decision by higher courts can be influenced by public opinion. In this context it is paramount for both sides of the argument to control the narrative of public opinion. To understand this phenomenon, we scraped data from Twitter and built a network model for the environmentalist struggle of Santurbán over the last 10 years. The period of analysis coincides with, first, the existence of a civic organization called Comité para la defensa del agua y del Páramo de Santurbán, formed by social leaders, peasants and inhabitants of the region; and, second, the spread of social networks such as Twitter. The dynamics of our network model provide information on how the movement has shifted the narrative, strengthened its ties with other external actors to form an environmental coalition, and forced gold-mining companies to withdraw their requests for gold-exploitation licenses.
Invited talks
The analysis of contagion-diffusion processes in metapopulations is a powerful theoretical tool to study how mobility influences the spread of communicable diseases. Here we address the impact that recurrent patterns of mobility, and the spatial distribution of distinguishable agents, have on the development of epidemics in large urban areas. We incorporate the distinguishable nature of agents with respect to both their residence and their usual destination. The proposed model allows both a rapid calculation of the spatiotemporal pattern of the epidemic trajectory and the analytical calculation of the epidemic threshold. This threshold is calculated as the spectral radius of a mixing matrix that encompasses the residential distribution and the specific travel patterns of the agents. We demonstrate that the simplification of indistinguishable individuals overestimates the value of the epidemic threshold, and we will also show the usefulness of the addition of distinguishability in designing epidemiological control and surveillance strategies.
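A short sketch of the threshold computation, with an invented mixing matrix standing in for the residence and travel data of the model:

```python
import numpy as np

# Illustrative 3-patch mixing matrix standing in for the residential distribution
# and travel patterns encoded in the model (assumed values)
M = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])

rho = max(abs(np.linalg.eigvals(M)))      # spectral radius of the mixing matrix
# In this family of models the epidemic threshold is inversely proportional to rho
# (up to epidemiological rates such as the recovery rate).
print("spectral radius:", rho, " threshold ~ 1/rho:", 1.0 / rho)
```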
In this work we present a statistical mechanics perspective on two simple kinetic models in which a pair of agents is selected stochastically and exchanges according to different rules: in the first model the quantity conserved in the exchange is the sum of the quantities held by the two agents before and after the exchange, while in the second model the conserved quantity is their product. For the first kinetic model, the distributions obtained by numerical simulation are normal, while for the second kinetic model the distributions are lognormal. If a lower boundary condition is imposed on the underlying additive stochastic process of the first model, a Boltzmann exponential distribution is obtained. Analogously, if a lower boundary condition is imposed on the underlying multiplicative stochastic process of the second model, a power-law distribution is obtained. For both models without boundary conditions, we show how the fit parameters depend on the number of time steps of the simulations and on the exogenous parameters involved in the exchange rules.
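A minimal sketch of both types of exchange, with illustrative update rules standing in for the models' exact rules (which are not reproduced here), shows the expected normal and lognormal shapes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative pairwise exchanges that conserve the sum (additive model)
# or the product (multiplicative model) of the pair's quantities.
N, exchanges = 1000, 100_000
x_add = np.ones(N)
x_mul = np.ones(N)

for _ in range(exchanges):
    i, j = rng.choice(N, size=2, replace=False)
    d = 0.1 * rng.standard_normal()
    x_add[i] += d; x_add[j] -= d          # x_i + x_j conserved
    r = np.exp(0.1 * rng.standard_normal())
    x_mul[i] *= r; x_mul[j] /= r          # x_i * x_j conserved

# Without boundary conditions, x_add is approximately normal and x_mul lognormal
print("additive:       mean", x_add.mean(), " std", x_add.std())
print("multiplicative: mean of log", np.log(x_mul).mean(), " std of log", np.log(x_mul).std())
```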
The kinetic model of opinion formation by Deffuant-Neau-Amblard-Weisbuch (DNAW), one of the most well known in sociophysics, describes the process of opinion formation towards consensus or homogeneous opinions. This is done by considering exchanges of opinion between pairs of agents, in such a way that these exchanges have limited influence, as they are confined within a range of opinion and depend on a parameter called "convergence", which takes values between 0 and 0.5. In this work, we propose a kinetic model of opinion formation that generalizes the DNAW model by including a "conviction" parameter, which allows describing the opinion formation process in terms of the competition between convergence and conviction. We show that the number of time steps required to reach opinion consensus depends not only on the range and conviction parameters but also on the convergence parameter, which in this model takes values between 0 and 1. Additionally, we demonstrate the existence of opinion phase transitions, observed through the variation of the average opinion. Finally, we show that this model can generate patterns of heterogeneous opinion distribution, in addition to the homogeneous patterns that are characteristic of the DNAW model.
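For reference, a minimal simulation of the original DNAW rule (without the conviction parameter proposed here), with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)

N, steps = 1000, 200_000
eps, mu = 0.3, 0.5            # opinion range ("bound of confidence") and convergence
x = rng.uniform(0, 1, N)      # initial opinions

for _ in range(steps):
    i, j = rng.choice(N, size=2, replace=False)
    if abs(x[i] - x[j]) < eps:            # exchange only within the opinion range
        shift = mu * (x[j] - x[i])
        x[i] += shift
        x[j] -= shift

# For large eps the population reaches consensus; for small eps several clusters survive
print(np.sort(x)[::100])
```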
The distribution of electoral votes across regions is frequently studied in the sociopolitical field. Making use of the databases on presidential and congressional voting reported by the Registraduría Nacional del Estado Civil (RNEC) in Colombia, by the Servicio Electoral de Chile (SERVEL) and by the Instituto Nacional Electoral (INE) in Mexico, it is found that the votes obtained in properly established regions are distributed among municipalities, for each of the electoral options, following a Burr distribution pattern. Complementarily, in this work the generation of Burr distribution patterns is shown through a kinetic exchange model associated with an asymmetric additive stochastic process for two agents, which establishes the basis for modeling the previously mentioned voting results.
Most stochastic processes are solved by knowing the probability distribution of the process increments or the transition probability distribution that satisfies a Fokker-Planck equation. A rather interesting and well-known application of stochastic processes is the Feynman-Kac formula, which establishes the equivalence between parabolic partial differential equation problems and stochastic processes through the Feynman path integral. Specifically, it has been shown that the Feynman-Kac formula is nothing more than the Fokker-Planck equation associated with the process but with "final" conditions instead of initial conditions. We then generalize the Feynman-Kac formula to an arbitrary additive stochastic process with a right-handed fractional Riemann-Liouville integral of order $H-1/\alpha+1$, with $H$ the Hurst exponent. To this end, we start from the evolution equation of a fractional stable process of order $H-1/\alpha+1$ and modify its stochastic component with an arbitrary noise $\eta(t)$ to obtain a fractional Langevin equation. Thence, making use of the path integral formalism, an expression is deduced for the transition probability between two states of the stochastic process in terms of the noise cumulant generating function, denoted by $H(p)$. Furthermore, the entire formalism is formulated in terms of the It\^{o} and Stratonovich calculus through a parameter $\gamma\in[0,1]$. Next, the extension of the Feynman-Kac formula is made for the fractional Langevin equation by deriving the Fokker-Planck equation of the underlying stochastic process in terms of the cumulant generating function. Additionally, the Feynman-Kac formula is deduced in the particular case of a truncated Lévy distribution.
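For reference, the classical (non-fractional) Feynman-Kac correspondence that is generalized here can be stated as follows: for $dX_s=\mu(X_s,s)\,ds+\sigma(X_s,s)\,dW_s$, the function
$$ u(x,t)=\mathbb{E}\!\left[e^{-\int_t^{T}V(X_s)\,ds}\,\psi(X_T)\,\middle|\,X_t=x\right] $$
solves the backward equation $\partial_t u+\mu\,\partial_x u+\tfrac{1}{2}\sigma^{2}\,\partial_{x}^{2}u-Vu=0$ with the final condition $u(x,T)=\psi(x)$.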
Invited talks
Urban mobility is a critical variable in urban planning. Understanding mobility patterns supports decision-making processes at the city level. This talk shows how representing the movement flows of people in the city through networks, and using concepts and tools derived from statistical physics and complex networks, allows us to identify interesting patterns. Some results regarding differences and diversity in the mobility patterns according to the socioeconomic status or gender of the traveler are presented. Finally, some applications of the different mobility patterns in disease-spread dynamics are discussed.
Three short stories will be used to illustrate how percolation theory can be used to diagnose and understand some of the urban mobility problems that are faced today by some of the world’s cities. All the stories have in common that they address how the local flows in the roads are organized collectively into a global city flow. The third story characterizes this organization process of traffic as traffic percolation where the giant cluster of local flows disintegrates when the second largest cluster reaches its maximum. Traffic percolation thus opens a new approach in network and transportation science. The talk will discuss how combining traffic percolation with the conventional approaches can open up insightful avenues for research aimed at alleviating traffic congestion in our cities.
When a discrete-time process on a network is stochastically brought back from time to time to its starting node, the mean search time needed to reach another node of the network may be significantly decreased. In other cases, however, resetting is detrimental to search. Using the eigenvalues and eigenvectors of the transition matrix defining the process without resetting, we derive a general criterion for finite networks that establishes when there exists a non-zero resetting probability that minimizes the mean first passage time (MFPT) at a target node. We apply these results to the study of optimal transport on different structures including deterministic and random networks.
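A small sketch of the underlying computation on a toy network (not one of the structures studied in the talk): the mean first-passage time under resetting, obtained from the transition matrix by absorbing at the target and scanning the resetting probability.

```python
import numpy as np

rng = np.random.default_rng(7)

def mfpt_reset(A, start, target, r):
    """MFPT from `start` to `target` for an unbiased walk on adjacency matrix A,
    reset to `start` with probability r at each step."""
    P = A / A.sum(axis=1, keepdims=True)
    W = (1 - r) * P
    W[:, start] += r                                    # resetting move
    keep = [i for i in range(len(A)) if i != target]    # absorb at the target
    Q = W[np.ix_(keep, keep)]
    m = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    return m[keep.index(start)]

# Toy network: a small ring with chords (illustrative)
A = np.array([[0, 1, 0, 0, 0, 1],
              [1, 0, 1, 0, 0, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 0],
              [0, 0, 0, 1, 0, 1],
              [1, 0, 1, 0, 1, 0]], dtype=float)
for r in (0.0, 0.05, 0.1, 0.2, 0.4):
    print(r, mfpt_reset(A, start=0, target=3, r=r))
```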
Chess is a board game that demands deep positional understanding from the first move of the opening to the end of the game. Here, we present a method for assessing the true intentions of chess players in their opening move, which is often considered to be the most crucial decision in a match. We use a hidden variables formalism developed for complex networks, and our findings include a study of relevant grandmasters as well as an identification of the role of asymmetries between white and black pieces throughout history.
In this paper, we explore the reduction of functionality in a complex system as a consequence of cumulative random damage and imperfect reparation, a phenomenon modeled as a dynamical process on networks. We analyze the global characteristics of the diffusive movement of random walkers on networks where the walkers hop considering the capacity of transport of each link. The links are susceptible to damage that generates bias and aging. We describe the algorithm for the generation of damage and the bias in the transport producing complex eigenvalues of the transition matrix that defines the random walker for different types of graphs, including regular, deterministic, and random networks. The evolution of the asymmetry of the transport is quantified with local information in the links and further with non-local information associated with the transport on a global scale such as the matrix of the mean first passage times and the fractional Laplacian matrix. Our findings suggest that systems with greater complexity live longer.
Poster session
A common application of the Ising model is the study of ferromagnetic materials and their properties. One of these properties, the magnetization M, is so easily defined that one may wonder whether it is possible to come up with an analytical way of estimating its average value. Using the microcanonical framework of counting the different states the system can be in, this work shows that a closely related question (dealing with the mean value of $|$M$|$, the magnitude of the magnetization, instead) can be reduced to a combinatorics problem, and presents solutions for different versions of the one-dimensional Ising model.
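As an illustration of the counting approach, a brute-force enumeration (feasible only for small chains) that averages $|$M$|$ over all states at fixed energy:

```python
import numpy as np
from itertools import product
from collections import defaultdict

def microcanonical_abs_M(N, J=1.0, periodic=True):
    """Enumerate all 2^N states of a 1D Ising chain and average |M| at fixed energy."""
    acc = defaultdict(lambda: [0.0, 0])         # energy -> [sum of |M|, number of states]
    for spins in product((-1, 1), repeat=N):
        s = np.array(spins)
        bonds = s * np.roll(s, -1) if periodic else s[:-1] * s[1:]
        E = -J * bonds.sum()
        acc[E][0] += abs(s.sum())
        acc[E][1] += 1
    return {E: total / count for E, (total, count) in sorted(acc.items())}

print(microcanonical_abs_M(N=10))               # mean |M| for each allowed energy
```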
Granular media consist of a large number of discrete particles interacting mostly through contact forces that, being dissipative, jeopardize a classical statistical equilibrium approach based on energy. Instead, two independent equilibrium statistical descriptions have been proposed: the Volume Ensemble and the Force Network Ensemble. Here, we propose a procedure to join them into a single description, using Discrete Element simulations of a granular medium of monodisperse spheres in the limit state of isotropic compression as a testing ground. By classifying grains according to the number of faces of the Voronoï cells around them, our analysis establishes an empirical relationship between that number of faces and the number of contacts on the grain. In addition, a linear relationship between the number of faces of each Voronoï cell and the number of elementary cells proposed by T. Aste and T. Di Matteo in 2007 is found. From those two relations, an expression for the total entropy (volumes plus forces) is written in terms of the contact number, an entropy that, when maximized, gives an equation of state connecting angoricity (the temperature-like variable for the force network ensemble) and compactivity (the temperature-like variable for the volume ensemble). The procedure thus establishes a microscopic connection between geometry and mechanics, and constitutes a further step towards building a complete statistical theory for granular media in equilibrium.
MnAlCu systems have shown enormous potential as permanent magnets due to their magnetic properties, which is why a characterization study has been carried out. During this research, uniaxial anisotropy was discovered using FORC diagrams, which showed different uncentered boomerang shapes that varied depending on the percentage of doping. Furthermore, in order to understand the behavior of their domains, additional data were taken using different techniques. MFM images were taken while changing the total magnetization of the sample, and SEM images were taken with the samples demagnetized. Finally, to prove that the magnetic properties change depending on the exposure angle, additional hysteresis and FORC data were taken by varying the angle.
The results showed interacting single-domain behavior with a negative mean interaction field, a mostly flat surface with some roughness at the preferred anisotropy angle, and a preferred direction for most domains.
The proof of Liouville's theorem is important in statistical physics because it establishes a fundamental principle in the theory of dynamical systems and statistical thermodynamics. Liouville's theorem states that in a conservative system (where the total energy is conserved), the volume in phase space occupied by a set of initial conditions is conserved over time. Starting from three points of view, relevant considerations and postulates are collected, such as the phase space as a dynamic entity that flows following the laws of mechanics; the generalization of Liouville's theorem to non-Hamiltonian systems, such as dissipative systems, through the invariance of the Jacobian; and the Poincaré-Cartan integral invariant, which incorporates symplectic geometry into phase space together with the principle of least action.
The present work is framed within the pertinence of revisiting concepts, even to arrive at the same proof of a theorem: by revisiting its proof and underlying concepts, physicists ensure the validity of this fundamental principle in the ever-evolving landscape of physical theories, reaffirming its robustness and its applicability to a wide range of systems.
The idea of a heat engine proposed by thermodynamics was a great achievement in the development of classical physics. Based on this concept, mesoscopic heat engines were derived, which operate at micro-scales where thermal fluctuations play an important role in modeling the system. In consequence, the system must be modeled from a probabilistic point of view, as suggested by stochastic thermodynamics. A mesoscopic heat engine of great interest is a colloidal particle immersed in a thermal bath and trapped with optical tweezers. These systems operate in the non-equilibrium regime and in finite times, so numerous protocols have been developed to maximize the power and efficiency of these thermal engines. This thesis analyzes three different protocols for mesoscopic thermal engines based on Carnot's cycle. First, it gives an introduction to thermal fluctuations. Then, it provides a detailed description of the derivation of each protocol. Based on the latter, each protocol was simulated in order to obtain work and heat distributions, as well as the efficiency and power at a fixed cycle duration. It was found that the protocol that best reproduces the adiabatic processes is the third protocol, which also optimizes the delivered power. Last but not least, the second protocol has the greatest efficiency among the three. Finally, it was concluded that the second or the third protocol should be implemented, taking into account the specific requirements on delivered power and efficiency.
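A minimal sketch of the kind of simulation involved, for a single isothermal leg of such a cycle with assumed parameters (not one of the three protocols analyzed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(2)

# Overdamped bead in a harmonic optical trap with time-dependent stiffness k(t):
#   gamma dx = -k(t) x dt + sqrt(2 gamma kB T) dW
# Stochastic work dW = 0.5 x^2 dk; heat from the first law, Q = dU - W.
gamma, kBT, dt = 1.0, 1.0, 1e-4
k0, k1, tf = 1.0, 2.0, 1.0
n_steps, n_traj = int(tf / dt), 2000

x = rng.normal(0.0, np.sqrt(kBT / k0), n_traj)     # equilibrated initial state at stiffness k0
U0 = 0.5 * k0 * x**2
work = np.zeros(n_traj)
k = k0
for step in range(n_steps):
    k_new = k0 + (k1 - k0) * (step + 1) / n_steps
    work += 0.5 * x**2 * (k_new - k)               # work done on the bead by changing k
    k = k_new
    x += (-k * x / gamma) * dt + np.sqrt(2 * kBT * dt / gamma) * rng.standard_normal(n_traj)

heat = 0.5 * k1 * x**2 - U0 - work                 # heat absorbed from the bath
print("mean work:", work.mean(), " mean heat:", heat.mean(), " mean power:", work.mean() / tf)
```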
Cellular processes are inherently stochastic, leading to protein-level variations and gene-expression fluctuations known as noise. Accurately understanding how noise spreads within gene networks is vital for creating gene circuits capable of withstanding noise and for comprehending signal reliability in biological networks. However, current models of noise propagation are often limited to specific systems or short gene cascades. In response, we developed a novel model for noise propagation in gene expression using a Langevin approach, applicable to networks comprising numerous genes, while considering the impact of the intracellular environment on gene expression. Our model offers a more comprehensive depiction of gene networks and proves valuable in the design of synthetic biological circuits in bacteria; moreover, it may shed light on how evolutionary processes shape circuit sizes to achieve both signal fidelity and cellular functionality.
In molecular dynamics simulations, thermostats are algorithms known for reproducing the canonical or NVT ensemble on a system of particles, i.e., they reproduce a given temperature in the system. In this work, we review a stochastic thermostat algorithm to reproduce Langevin dynamics according to the equation $\dot{v} = \frac{F}{m} - \gamma v + b\xi$ where $F$ is the total external force, $\gamma$ is the friction rate, and $\xi$ is a random variable with mean zero and no time correlation.
This algorithm is a modification of the classic leapfrog scheme: it adds the impulsive friction and noise term $\Delta v = -fv + \sqrt{f(2-f)(k_B T/m)} \xi$ to the velocity, with $0\leq f\leq 1$, resulting in the convergence of $\langle v^2 \rangle$ to $k_B T/m$ and therefore maintaining the temperature. We show that this algorithm not only reproduces the correct average temperature but also preserves the canonical velocity distribution throughout the time integration. Furthermore, we perform simulations of Ornstein-Uhlenbeck processes for different impulsive friction constants $f$ to verify these claims and corroborate the relationship with the friction rate, $f=1-e^{-\gamma \Delta t}$.
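A minimal numerical sketch of the velocity update described above, applied to free particles so that the velocities follow an Ornstein-Uhlenbeck process (how the impulse is interleaved with the force half-kicks and position drift of the full leapfrog step is left out here):

```python
import numpy as np

rng = np.random.default_rng(3)

kBT, m, gamma, dt = 1.0, 1.0, 2.0, 0.01
f = 1.0 - np.exp(-gamma * dt)            # relation between f and the friction rate
n_particles, n_steps = 10_000, 5_000

v = rng.standard_normal(n_particles)     # arbitrary initial velocities
for _ in range(n_steps):
    # Impulsive friction and noise: Delta v = -f v + sqrt(f (2 - f) kB T / m) xi
    v += -f * v + np.sqrt(f * (2 - f) * kBT / m) * rng.standard_normal(n_particles)

print(np.mean(v**2), kBT / m)            # <v^2> converges to kB T / m
```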
Thanks to Einstein’s relation, it is known that two-dimensional diffusion coefficients give the amount of area that a particle undergoing Brownian motion can cover in a given time. In biological sciences, the diffusion coefficient is a relevant parameter for understanding the motion of proteins and molecules, providing quantitative insight into the mechanical properties of their microenvironment and their interactions with other molecules.
Among the available methods to measure these coefficients, fluorescence correlation spectroscopy (FCS) uses the fluorescence signal over time from the illumination volume of a confocal microscope, which results from the random motion of fluorescent molecules. The analysis of time correlations in the fluorescence signal from FCS allows quantitative evaluation of the concentration, the interactions between molecules, and the diffusion coefficient.
In this work, we characterize the FCS method in a model membrane system, allowing us to measure diffusion coefficients precisely. We use giant unilamellar vesicles (GUVs), a popular model for the study of the bilayer lipid membrane, composed of the phospholipid DOPC and the lipophilic fluorophore DiI. Using FCS we characterize the diffusion coefficient of DiI at 37 °C and 45 °C and corroborate the effect of temperature on molecular dynamics. To the best of our knowledge, this is the first implementation of FCS in a Colombian laboratory without the use of any specialized software or external tools.
In the realm of statistical physics, this study explores the critical properties of the Ising model on two fractal lattices with different Hausdorff dimensions ($d_H \approx 1.892$ and $d_H \approx 1.595$). By employing the Monte Carlo technique and the Metropolis algorithm, a numerical analysis is presented to determine critical temperature values and correlation length functions. Additionally, analytical methods are implemented, and their results are compared with numerically obtained results. Our findings confirm that fractals with finite ramification do not exhibit phase transitions, while those with infinite connectivity do.
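For reference, the elementary Metropolis step used in such simulations, written here for a periodic square lattice (the fractal lattices of the study would enter only through a different neighbour list):

```python
import numpy as np

rng = np.random.default_rng(4)

def metropolis_sweep(spins, J, T):
    """One Metropolis sweep of the Ising model on a periodic square lattice."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * J * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

L, J, T = 16, 1.0, 2.3
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(1000):
    metropolis_sweep(spins, J, T)
print(abs(spins.mean()))   # |m| per spin; for the square lattice T_c ~ 2.269
```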
Jigsaw puzzles are a fascinating pastime that has given people many hours of fun since their first appearance as an educational tool for geography in 1766. Multiple studies on human behavior, the way in which new knowledge is generated, and cognitive styles, among others, have been carried out based on the process by which a puzzle is assembled. However, this work proposes a different approach, in which the sequence used to assemble the puzzle is studied to determine whether it can be classified as a percolation process. Through the development of a web application, the sequences used by people to assemble the puzzles were collected, and with the tools of finite-size scaling the critical density $p_c = 0.73 \pm 0.04$ and the exponent $1/\nu = 0.28 \pm 0.03$ were obtained. These results are not consistent with the hypothesis that this process can be classified as invasion percolation; however, they are inconclusive due to the lack of data for large puzzles. As far as we know, this is the first work of this nature to study the process of jigsaw-puzzle assembly as a percolation process.
Fluctuation scaling is an emergent property of complex systems that relates the variance ($\Xi$) and the mean ($\Upsilon$) of an empirical data set in the form $\Xi\sim\Upsilon^{\alpha_{TFS}}$, where the dispersion (fluctuation) of the data is described in terms of $\Xi$. Building on the path integral formalism developed by H. Kleinert, we extend the path integral formalism in the context of the supersymmetric theory of stochastic dynamics to understand the origin of temporal fluctuation scaling and the evolution of its exponent over time, $\alpha_{TFS}(t)$. To this end, we first show how the probability of transition between two states of a stochastic variable $x(t)$ can be expressed once its cumulant generating function $H(p)$ is known. Introducing a non-linear term in the cumulant generating function $\mathcal{H}^{(n)}(p,t;\gamma)$ leads to a model in which the n-th moment of the probability distribution evolves arbitrarily. Subsequently, in order to reproduce temporal fluctuation scaling, a linear combination of $\mathcal{H}^{(n)}(p,t;\gamma)$ with $n\in\{1,2\}$ is used, which allows describing how the mean $M_{1}(t)$ and the variance $\Xi_{2}(t)$ of empirical time series evolve. Thence, an analytical expression is deduced for the temporal evolution of the temporal fluctuation scaling exponent $\alpha_{TFS}(t)$. Additionally, this approach is verified on different financial time series with daily frequency.
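A minimal sketch of how the scaling exponent is extracted in practice, using synthetic (mean, variance) pairs in place of the empirical series:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data standing in for grouped empirical series obeying Xi ~ Upsilon^alpha
means = np.logspace(0, 3, 30)
alpha_true = 1.6
variances = means**alpha_true * np.exp(0.1 * rng.standard_normal(30))   # noisy power law

# The exponent is the slope of the log-log relation between variance and mean
slope, intercept = np.polyfit(np.log(means), np.log(variances), 1)
print("estimated alpha_TFS:", slope)
```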
Here we investigate the thermalization of a classical harmonic oscillator starting from a micro-canonical ensemble at energy $E_0$ and finishing in a canonical one at temperature T. We derive analytically that the probabilities $P(Q)$ and $P(-Q)$ of gaining or losing a certain amount of heat $Q$ are related as $P(Q)=\exp(-2Q/kT) P(-Q)$, a result we also verify through molecular dynamics simulations with an overdamped Langevin equation algorithm. Our results give insight into the thermalization process and contribute to extending fluctuation relations to micro-canonical initial states.
Manganites consist of an alloy of manganese oxide ($MnO_3$) in conjunction with rare-earth or other elements (lanthanum, strontium, or germanium). A particular case is lanthanum manganite doped with praseodymium (LPCMO). This material holds significant interest due to its magnetic phase transitions occurring below temperatures of 130 K. One of the phenomena observed is the coexistence of ferromagnetic conducting and antiferromagnetic insulating phases, which varies depending on the doping level and temperature. An intriguing consequence of this phenomenon is colossal magnetoresistance, where an abrupt change in electrical resistance with respect to the magnetic field is evident. The way these magnetic phases grow as a function of temperature follows nucleation and percolation processes via avalanches, as in martensitic transformations. To assess these material properties, isothermal hysteresis curves are commonly employed to macroscopically identify the phases present.
Nevertheless, to capture more detailed information about the material's properties, First Order Reversal Curves (FORC) are utilized. Here we use FORC analysis to deconvolute all magnetic interactions within the material. We also employ transport measurements (R vs T curves), combined with magnetization versus temperature measurements, to pinpoint the precise temperature at which the phase transition occurs and the onset of the percolation processes. Furthermore, our experimental results are reproduced using random Ising models, highlighting the significance of disorder and short-range interactions in the magnetic percolation processes. This underscores the role of disorder in shaping the material's magnetic behavior and its impact on phase transitions.
Verbal fluency tests provide some insight into memory information and retrieval processes. These tests can be represented as a complex network, where the nodes are the words of the fluency test and the links between nodes are the semantic relationships between the words. The complex network formed in this way has been called a “semantic network”. To decipher the search mechanisms used by the brain in retrieving information from memory, various search models have been proposed on the semantic network, in order to reproduce said network and identify the information retrieval mechanism. Among the various models, we highlight the censored random walker with priming vector (CRW+PV), which better reproduces the sub-category changes within the category of the verbal fluency test. We test a new model based on the reaction-diffusion (R-D) framework, initially studied by Alan Turing, who applied it to chemical processes in which characteristic patterns, now called Turing patterns, were observed. This reaction-diffusion model has been implemented in complex networks of neurons that define the connectome of the animal brain, making it possible to show neural patterns and the movement of water in brain tissue. The application of the R-D model to complex networks made it possible to implement it in the semantic network of verbal fluency tests as a new search mechanism and to compare it qualitatively with the results of the CRW+PV mechanism. A correlation was observed between the patterns of the R-D model and the results of the CRW+PV mechanism.
The formation of traffic bottlenecks on main roads is one of the most common causes of vehicular congestion, having a bigger impact in cities whose main roads consist of few lanes, i.e., 2 or 3 lanes. A traffic bottleneck creates a challenge for drivers in the congested lane, since they must find a way to change lanes in order to get past the bottleneck. In this scenario the best global outcome is found when an equilibrium is achieved between each driver's desire to move as fast as possible and their willingness to give way so that other drivers can move faster. We analyze a traffic bottleneck on a two-lane road in the stationary regime using a cellular automaton model; to do so, we introduce a variable that accounts for the probability that a driver yields the lane when asked, and we use it to characterize the efficiency of the bottleneck in transporting cars.
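For context, a minimal single-lane Nagel-Schreckenberg automaton on a ring, a standard baseline for this type of traffic cellular automaton (the two-lane bottleneck and the lane-yielding variable of this work are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(6)

L, density, vmax, p_slow, steps = 200, 0.2, 5, 0.3, 500
n_cars = int(density * L)
pos = np.sort(rng.choice(L, n_cars, replace=False))
vel = np.zeros(n_cars, dtype=int)

flow = 0
for _ in range(steps):
    gaps = (np.roll(pos, -1) - pos - 1) % L          # empty cells to the car ahead
    vel = np.minimum(vel + 1, vmax)                  # acceleration
    vel = np.minimum(vel, gaps)                      # braking (no collisions)
    slow = rng.random(n_cars) < p_slow               # random slowdown
    vel[slow] = np.maximum(vel[slow] - 1, 0)
    pos = (pos + vel) % L                            # movement
    flow += vel.sum()

print("flow per site per step:", flow / (steps * L))
```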
The universe is immersed in a Gravitational Wave Background (GWB). This GWB is extra energy that is not contained in Einstein's equations, and new models were developed to explain the accelerating expansion of the universe in which a GWB is incorporated into Einstein's equations.
In this talk, we study this new paradigm: due to the GWB, quantum particles cannot follow geodesics, but rather stochastic trajectories. We explore the different stochastic theories, namely Stochastic Quantum Mechanics (SQM), Stochastic Electrodynamics (SED) and Scale Relativity (ScR), that lead to the generalized Klein-Gordon equation in curved space-time and the generalized Schrödinger equation.
Information engines are a modern realization of the Maxwell-demon thought experiment. They exploit “favorable fluctuations” of a heat bath to generate work, at the cost of dissipation in a measuring device. Experimental tests of these engines require accurate measurements and fast feedback control. We designed a simple information engine using optical tweezers and feedback to raise a micron-sized trapped bead diffusing in water — a heavy mass — against gravity, without doing any external work. We first explore the conditions that maximize engine performance and achieve powers (~1000 kT/s) and speeds (~190 μm/s) that compare to bacteria, which have a similar size. We then show that naively implemented information engines fail to function when measurements are too noisy but that more sophisticated measurement “filters” can provide good performance, even when measurement noise is comparable to the size of displacements produced by thermal motion. Finally, we show that placing the bead in an environment with “extra” nonequilibrium fluctuations can dramatically increase power output. These experiments suggest that what was once a mere thought experiment may have practical applications.
Invited talks
The simulation of stochastic systems is a valuable tool to investigate a broad range of topics, from atomistic simulations of colloidal systems and magnetic materials to the simulation of interest rates and derivatives in mathematical finance. In most cases, these systems are simulated by integrating the corresponding stochastic differential equations (SDEs) via the Euler-Maruyama, Milstein or Heun methods, which are known to be stable and convergent whenever the coefficient functions in the SDE satisfy certain smoothness and regularity conditions. However, several systems of interest are described by SDEs with singularities, and many of the usual integrating methods become unstable and their convergence is not guaranteed. In this talk, we explore the particular case of the Bessel process as well as new numerical schemes tailored to handle singular coefficients in SDEs.
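A minimal sketch of the naive Euler-Maruyama scheme applied to the Bessel process, whose singular drift motivates the tailored schemes discussed in the talk (the reflection step below is only a crude regularization):

```python
import numpy as np

rng = np.random.default_rng(8)

# Bessel process of dimension d:  dX = (d - 1) / (2 X) dt + dW.
# The 1/X drift is singular at the origin, so a naive scheme can step to
# non-positive values; here we simply reflect, which is only a crude fix.
d, x0, dt, n_steps, n_paths = 3, 1.0, 1e-4, 10_000, 5_000
x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += (d - 1) / (2 * x) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    x = np.abs(x)                         # crude regularization near the singularity

# For d = 3 the Bessel process is |3D Brownian motion|, so E[X_t^2] = x0^2 + d t
t = n_steps * dt
print(np.mean(x**2), x0**2 + d * t)
```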
A modification of the classical Szilard engine is presented, in which pores have been drilled in the piston. This change allows the particle to traverse from one side of the piston to the other, making it unnecessary to remove the piston from the engine or to measure the position of the particle for the engine to do work. The dissipation of energy occurs when the mass on which the engine does work is reset.
When a system deviates from equilibrium, it is possible to manipulate and control it to drive it towards equilibrium within a finite time $t_f$, even shorter than its natural relaxation time scale $\tau_{relax}$. Although numerous theoretical and experimental studies have explored these shortcut protocols, few have yielded analytical results for the probability distributions of work, heat and entropy production. In this talk, we discuss the two-step protocol, which captures the essential characteristics of more general protocols and has an analytical solution for the relevant thermodynamic probability distributions.
Invited talks
Strongly electron-correlated materials provide a rich platform for exploring the underpinnings of fundamental physics. These systems are characterized by a complex energy landscape, originating from the interplay of competing phases, which manifests in diverse phenomena including metal-insulator transitions, multiple magnetic transitions, and structural phase transitions. In some instances, these transitions can coincide, leading to intriguing and complex behavior. Viewed from a fundamental perspective, such transitions can be conceptualized as being driven by an external force to a critical point where the transition ceases to be continuous, thereby causing the system to progress via a series of discrete avalanches and therefore showing key fingerprints of percolation effects.
Here I will show recent research that exemplifies the collective behavior observed in particular physical systems [1,2]. These systems predominantly consist of oxide materials, like vanadium oxides and praseodymium-doped manganites, which share a common characteristic: strong electron-electron interactions across multiple length scales. Vanadium oxides demonstrate a first-order structural phase transition and a metal-insulator transition, induced by voltage, light, and temperature. In contrast, praseodymium-doped manganites reveal multiple magnetic phase transitions, magnetic percolation, and spin frustration, offering further layers of complexity.
Acknowledgements: funding by Facultad de Ciencias, Universidad de los Andes grant INV-2023-162-2717
References:
[1] Wolowiec, C.T. et al., Physical Review Materials 6, 064408 (2022).
[2] Carranza-Celis, D. et al., Physical Review Materials 5, 124413 (2021).
In this work, the ferromagnetic phase transition in a monolayer of chromium triiodide (CrI3) was examined. Employing a microcanonical ensemble approach, entropy was evaluated as a function of internal energy and magnetization was calculated with respect to energy across various spin configurations. In this way, a methodology was found to observe phase transitions using thermodynamic quantities other than specific heat. The Hubbard model was used to characterize the exchange interactions, defined by a first-neighbor exchange energy of J=2.37 meV.
Using the mean-field renormalization group method (MFRG) and starting from the Ising Hamiltonian, magnetic phase diagrams were successfully reproduced for various systems composed of different types of magnetic atoms, such as FeMnAl, FeNiMn, and FeAl alloys. Quadratic errors below 0.016 were obtained, and a preliminary approximation of the binding energy between atoms of this type was achieved. These alloys are of special interest since, due to their relative simplicity, they enable the implementation of machine learning when adjusting the phase diagram, leading to a binding energy obtained through this method. Subsequently, this energy is introduced into a Monte Carlo simulation using the Metropolis algorithm.
In summary, the significance of this work lies not only in a theoretical method for finding magnetic phase diagrams but also in the possibility of approximating the functional form of the binding energy in the Ising model. Additionally, it achieves the implementation of machine learning in this field of physics.
Invited talks
In this work, we study planar metallic layers at a constant voltage from the point of view of statistical mechanics and electrostatics. We use molecular dynamics simulations to find the system's positional correlation functions and velocity distributions by modeling it as a two-dimensional Coulomb plasma in the liquid phase. Alternatively, the surface charge density is calculated by implementing the Method of Moments (MoM) under the electrostatic approximation. In both cases, point-like and differential charge elements interact via a $1/r^\eta$ electric potential with $\eta\in\mathbb{R}^{+}$. We establish the range of the coupling parameter of the system where the surface charge densities found in the two approaches agree.
More than 150 years ago, James Clerk Maxwell introduced a famous thought experiment, in which a little intelligent being (the “demon”) defies the second law of thermodynamics by controlling a tiny door between two chambers containing gases at different temperatures. Maxwell’s demon represented a cornerstone in the development of the thermodynamics of feedback control, and has recently attracted renewed attention motivated by experiments implementing it at the micro- and nano-scales. In this talk, I will introduce a new concept of demon, lacking proper feedback control but allowed to stop the process using a gambling strategy [1]. We demonstrate that such gambling demons can still bypass conventional thermodynamic bounds in unexpected ways. Indeed, the key quantity that limits their operation is no longer the amount of information retrieved about the system, but a new quantity measuring the asymmetry of the dynamics under time reversal. We test experimentally the most important features of the gambling demon in a microelectronic system implementing a single-electron box, and realize strategies leading to average work extraction above the free energy change.
[1] G. Manzano, D. Subero, O. Maillet, R. Fazio, JP. Pekola and É. Roldán, Thermodynamics of Gambling Demons, Phys. Rev. Lett. 126, 080603 (2021).
Invited talks
In quantum thermodynamics, fluctuation theorems provide a way to quantify the irreversibility of single trajectories. In this work we propose a description of the dynamics of single trajectories based on an M-parametrization of the unravellings of the master equation for a system coupled to its environment. We identify the measurable components of the entropy, and show ways to measure and control the system such that the quantum components of the entropy can be corrected or minimized.
An important task in quantum thermodynamics is the characterization of work and heat in the quantum domain. A common approach to this problem, known as the two-point measurement (TPM) scheme, consists of performing two projective energy measurements, one at the beginning and one at the end of a given evolution protocol. Despite its importance for the development of our understanding of work statistics in the quantum regime, the TPM scheme has a fundamental limitation: since the initial projective measurement diagonalizes the initial state in the energy basis, the effect that initial coherences may have on the energetics of the system is lost.
The Margenau-Hill (MH) scheme is an alternative that allows initial states with coherence in the energy basis, relying on the replacement of the first projective measurement by an estimation of the initial Hamiltonian from the result of a single projective measurement at the end of the evolution protocol. The joint probability distribution describing the scheme, known as the MH distribution, is given by the following quasi-probability distribution
$$P^{MH}(n,m) = \frac{1}{2} \text{Tr}\left[\rho_S(0) (\bar{\Pi}_{E_m^\tau}\Pi_{E_n^0} + \Pi_{E_n^0}\bar{\Pi}_{E_m^\tau})\right]$$
In this talk I present a path integral formulation for work in the MH scheme developed in close analogy to that of the TPM scheme, providing further insight on the role of initial coherence in quantum thermodynamic setups.
We study the non-equilibrium dynamics of no-fusion and fusion events in a Dyson gas of $N$ charged particles interacting through a logarithmic Coulomb potential and surrounded by a thermal bath at a reduced temperature $\beta=q_0^2/(k_BT)$, where $q_0$ is the charge per particle and $T$ is the temperature of the bath. First, we characterize the relaxation time, $\tau$, in the no-fusion regime, for which the system reaches a “thermal equilibrium”, and show how a scaling law in time governs the evolution of this regime. We prove the validity of Wigner's surmise for $\beta\geq1.0$, compared with the values used in Gaussian ensembles, for times greater than the relaxation time, $t\gg\tau$, i.e., once the system has reached thermal equilibrium. Finally, we study the time evolution of the nearest-neighbour distance distributions for different $\beta$ in the fusion regime and compare its dynamics with the no-fusion regime.
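For reference, the Wigner surmise against which the spacing distributions are compared takes the form
$$ P_\beta(s) = a_\beta\, s^{\beta}\, e^{-b_\beta s^{2}}, \qquad P_{1}(s) = \frac{\pi s}{2}\, e^{-\pi s^{2}/4}, $$
with the constants $a_\beta$ and $b_\beta$ fixed by normalization and unit mean spacing.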
Invited talks
Basic methods of statistical physics have proven extremely useful in the modelling of enzyme dynamics, but now the concepts of statistical physics have become important tools in many areas of systems biology, such as protein folding, site search on DNA, gene regulation, evolutionary dynamics or information processing. I will present an overview of the use of the concepts (more than the mathematical tools) in different areas of biological modeling, with emphasis on how collective action arises and how cells can deal with imperfect information about their environment.
Cryo-electron microscopy (cryo-EM) has recently become a leading method for obtaining high-resolution structures of biological macromolecules. However, cryo-EM is limited to biomolecular samples with low conformational heterogeneity, where most conformations can be well-sampled at various projection angles.
While cryo-EM provides single-molecule data for heterogeneous molecules, most existing reconstruction tools cannot retrieve the ensemble distribution of possible molecular conformations from these data. To overcome these limitations, we build on a previous Bayesian approach and develop an ensemble refinement framework that estimates the ensemble density from a set of cryo-EM particle images by reweighting a prior conformational ensemble, e.g., from molecular dynamics simulations or structure prediction tools. Our work provides a general approach to recovering the equilibrium probability density of the biomolecule directly in conformational space from single-molecule data. To validate the framework, we study the extraction of state populations and free energies for a simple toy model and from synthetic cryo-EM particle images of a simulated protein that explores multiple folded and unfolded conformations.
Tsallis’ non-extensive statistical mechanics is a generalized framework for describing complex systems in which ergodicity (and statistical equilibrium as its macroscopic manifestation) is just one of the dynamic possibilities of microscopic mixing. In practical terms, the generalization from Tsallis’ theory introduces a non-extensive entropic functional $S_q$ through the q-index, which accounts for how far $S_q$ is from $S_{BG}$, identifies non-additive universality classes, and provides physically based information about the underlying dynamics. Tsallis’ theory is being progressively applied to complex systems, in particular to geophysical processes such as climate extremes, which result from weather conditions far from equilibrium emerging from spatiotemporal multi-scale interactions, long-term memory, a high degree of information content, and persistent positive feedback.
In this work, we evaluate the complexity of the Colombian climate via the extreme behavior of temperature and precipitation over a non-stationary detrended threshold. Based on a maximum likelihood estimation of the maximum entropy, we find the regional non-extensive parameters for the normalized q-exponential distribution with constant mean as a constraint.
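For reference, one common parametrization of the normalized q-exponential distribution (the exact convention used here may differ) is
$$ p_q(x) = (2-q)\,\lambda\,\left[1+(q-1)\,\lambda x\right]^{\frac{1}{1-q}}, \qquad x\ge 0,\; 1\le q<2, $$
which recovers the ordinary exponential (Boltzmann-Gibbs) limit as $q\to 1$.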
The spatial structure of the regional q-index shows 59% (39%, 2%) of bounded (unbounded, Boltzmann-Gibbs limit) behavior for gauge temperature in the Caribbean Colombian Catchment (CCC) and 50% (50%, 0%) in the Pacific Colombian Catchment (PCC). For gauge precipitation, we find 25% (67%, 8%) of bounded (unbounded, Boltzmann-Gibbs limit) behavior in the CCC and 15% (77%, 8%) in the PCC. These results evidence an essential non-extensivity of the Colombian climate; furthermore, temperature and precipitation extremes do not share the same universality features.
Ab initio metadynamics enables the extraction of free energy landscapes with the precision of first-principles electronic structure methods. We developed and used an interface between the PLUMED and ASE codes to estimate the free energy of Ag5 and Ag6 clusters at 10, 100, and 300 K with the radius of gyration and coordination number as collective variables [1]. We find that Ag6 is the smallest silver cluster where entropic effects at room temperature increase the probability of the nonplanar isomer to a competitive state. To accelerate the determination of the free energy we currently use machine learning algorithms.
[1] J. Chem. Phys. 156, 154301 (2022)
We describe the steady state of the annihilation process of a one-dimensional system of two initially separated reactants A and B. The parameters that define the dynamical behavior of the system are the diffusion constant, the reaction rate, and the deposition rate. Depending on the ratio between those parameters, the system exhibits a crossover between a diffusion-limited (DL) regime and a reaction-limited (RL) regime. We found that a key quantity to describe the reaction process in the system is the probability $p(x_A,x_B)$ to find the rightmost A (RMA) particle and the leftmost B (LMB) particle at the positions $x_A$ and $x_B$, respectively. The statistical behavior of the system in both regimes is described using the density of particles, the gap length distribution $x_B-x_A$, the marginal probabilities $p_A(x_A)$ and $p_B(x_B)$, and the reaction kernel. For both regimes, this kernel can be approximated by using $p(x_A,x_B)$. We found an excellent agreement between the numerical and analytical results for all calculated quantities despite the reaction process being quite different in both regimes. In the DL regime, the reaction kernel can be approximated by the probability to find the RMA and LMB particles in adjacent sites. In the RL regime, the kernel depends on the marginal probabilities $p_A(x_A)$ and $p_B(x_B)$.