Anomalous diffusion is ubiquitous in systems ranging from intracellular transport and porous-medium flow to animal foraging, and its quantification requires robust methods that cope with short and noisy trajectories. We present AnomalousNet, a unified three-stage pipeline for analyzing anomalous particle dynamics. First, up to 64 particles per 128×128 field of view are tracked using the Crocker–Grier algorithm via Trackpy. Next, an Attention U-Net trained on over 8.6 × 10⁴ simulated experiments infers frame-wise anomalous exponents α, generalized diffusion coefficients K, and discrete motion states. Finally, regime changes are identified using an L2-regularized, windowed pruned exact linear time (PELT) change-point detection algorithm. On the 2nd Anomalous Diffusion Challenge benchmark the method achieved MAE(α) = 0.32, MSLE(K) = 0.1, state-classification F1 = 0.93, and change-point RMSE = 0.09, ranking second in both the video single-trajectory and ensemble tasks. These results demonstrate precise discrimination of subdiffusive, normal, and superdiffusive regimes, and frame-level identification of state transitions, establishing AnomalousNet as a powerful tool for the quantitative analysis of heterogeneous diffusion in video data.
Y. Ahsini, M. Escoto, and J.A. Conejero. AnomalousNet: a hybrid approach with attention U-Nets and change point detection for accurate characterization of anomalous diffusion in video data. \textit{J. Phys. Photonics} 7 (2025) 045015. https://doi.org/10.1088/2515-7647/ae0120
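A minimal sketch of the two bookend stages of the pipeline described above: particle tracking with Trackpy's Crocker–Grier implementation and change-point detection with an L2-cost PELT search from the `ruptures` package. The Attention U-Net stage is replaced by a placeholder α(t) signal, the standard (non-windowed) PELT variant is used, and all parameter values are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
import trackpy as tp
import ruptures as rpt

def track_particles(frames, diameter=7, search_range=5, memory=3):
    """Locate and link particles across a stack of 128x128 frames."""
    features = tp.batch(frames, diameter=diameter)       # Crocker-Grier localization per frame
    trajectories = tp.link(features, search_range=search_range, memory=memory)
    return trajectories                                   # DataFrame with a 'particle' column

def detect_regime_changes(alpha_per_frame, penalty=3.0, min_size=5):
    """Find change points in a frame-wise anomalous-exponent signal."""
    signal = np.asarray(alpha_per_frame).reshape(-1, 1)
    algo = rpt.Pelt(model="l2", min_size=min_size).fit(signal)
    return algo.predict(pen=penalty)                      # end indices of detected segments

if __name__ == "__main__":
    # Placeholder for the U-Net output: a trajectory switching from
    # subdiffusion (alpha ~ 0.5) to superdiffusion (alpha ~ 1.5) at frame 100.
    alpha = np.concatenate([np.full(100, 0.5), np.full(100, 1.5)])
    alpha += 0.05 * np.random.randn(alpha.size)           # measurement noise
    print(detect_regime_changes(alpha))                   # e.g. [100, 200]
```

In this sketch the penalty and minimum segment length control the trade-off between missed transitions and spurious change points; the paper's windowed variant additionally restricts the search to local windows of the trajectory.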
Accurate, reliable, and up-to-date information supports effective decision-making by reducing uncertainty and enabling informed choices. Multiple crises threaten the sustainability of our societies and put the planetary boundaries at risk, requiring usable and operational knowledge. Natural-language-processing tools facilitate data collection, extraction, and analysis. They expand knowledge-utilization capabilities by improving access to reliable sources in less time, and they identify patterns of similarity and contrast across diverse contexts. We apply general and domain-specific large language models (LLMs) to two case studies and document appropriate uses and shortcomings of these tools for two tasks: classification and sentiment analysis of climate and sustainability documents. We study both statistical and prompt-based methods. In the first case study, we use LLMs to assess whether climate pledges trigger cascade effects in other sustainability dimensions. In the second, we use LLMs to identify interactions between the sustainable development goals and to detect the direction of their links in order to frame meaningful policy implications. We find that LLMs are successful at processing, classifying, and summarizing heterogeneous text-based data, helping practitioners and researchers access it. The LLMs detect strong concerns among emerging economies, which identify food security, water security, and urban challenges as primary issues; developed economies instead focus their pledges on the energy transition and climate finance. We also detect and document four main limits along the knowledge-production chain: interpretability, external validity, replicability, and usability. These risks threaten the usability of findings and can lead to failures in the decision-making process. We recommend risk-mitigation strategies to improve transparency and literacy regarding artificial intelligence (AI) methods applied to complex policy problems. Our work presents a critical but empirically grounded application of LLMs to climate and sustainability questions and suggests avenues to further expand controlled and risk-aware AI-powered computational social science.
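A minimal sketch of the prompt-based classification task described above, assuming an OpenAI-compatible chat API. The model name, label set, and prompt wording are illustrative assumptions, not the authors' configuration; the labels are drawn from the themes mentioned in the abstract.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative label set based on themes named in the study, not the authors' taxonomy.
LABELS = ["energy transition", "food security", "water security",
          "urban challenges", "climate finance"]

def classify_pledge(text: str, model: str = "gpt-4o-mini") -> str:
    """Assign a single sustainability label to a climate-pledge excerpt."""
    prompt = (
        "Classify the following climate pledge into exactly one of these "
        f"categories: {', '.join(LABELS)}.\n\n"
        f"Pledge: {text}\n\n"
        "Answer with the category name only."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output aids replicability, one of the risks noted above
    )
    return response.choices[0].message.content.strip()

# Example usage with a hypothetical pledge excerpt:
# print(classify_pledge("We commit to tripling renewable capacity by 2030."))
```

A zero-temperature, single-label prompt of this kind addresses the replicability concern raised in the abstract, but interpretability and external validity still depend on auditing the model's outputs against expert-coded documents.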