Our user-centered chatbot Wakamola published in JMIR Medical Informatics

Obesity and overweight are serious health problems worldwide, with multiple and interconnected causes. At the same time, chatbots are becoming increasingly popular as a way to interact with users in mobile health apps. Our study reports the user-centered design and feasibility study of a chatbot that collects linked data to support the study of individual and social causes of overweight and obesity in populations. We first studied users’ needs and gathered their graphical preferences through an open survey on 52 wireframes designed by 150 design students; the survey also included questions about sociodemographics, diet and activity habits, the need for overweight and obesity apps, and desired functionality. We also interviewed an expert panel. We then designed and developed the chatbot. Finally, we conducted a pilot study to test its feasibility. With this work, we have shown that the Telegram chatbot Wakamola is a feasible tool to collect population data on sociodemographics, diet patterns, physical activity, BMI, and specific diseases. In addition, the chatbot connects users in a social network, making it possible to study the causes of overweight and obesity from both individual and social perspectives. You can access this work here:
S. Asensio-Cuesta et al. A User-Centered Chatbot (Wakamola) to Collect Linked Data in Population Networks to Support Studies of Overweight and Obesity Causes: Design and Pilot Study. JMIR Med Inform 2021;9(4):e17503
doi: 10.2196/17503

Winners of the $500K Pandemic Response Challenge

Our team VALENCIA IA4COVID, co-led by Nuria Oliver and me, has won the $500K Pandemic Response Challenge, organized by the XPRIZE Foundation and supported by Cognizant. The challenge required teams to build effective data-driven AI systems capable of accurately predicting COVID-19 transmission rates and prescribing intervention and mitigation measures that, when tested in “what-if” scenarios, were shown to minimize infection rates as well as negative economic impacts.

Our group is made up of fourteen experts from universities and research centers of the Valencian Community. Using AI and data science, our model successfully forecasted the epidemiological evolution and provided decision makers with the best prescriptor models, which produce non-pharmaceutical intervention plans that minimize the number of infections while also minimizing the stringency of the interventions.
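The core predictor–prescriptor tradeoff can be illustrated with a toy sketch. Everything below (the plan names, case counts, stringency values, the `best_plan` helper, and the single-weight scoring scheme) is invented for illustration only and is not our competition model:

```python
# Toy illustration of a prescriptor tradeoff: each candidate intervention
# plan has a predicted case count (from a hypothetical predictor) and a
# stringency cost; we pick the plan minimizing a weighted combination.
# All names and numbers below are invented for the example.

plans = {
    "no_measures":   {"predicted_cases": 1000, "stringency": 0.0},
    "masks_only":    {"predicted_cases":  600, "stringency": 0.2},
    "partial_close": {"predicted_cases":  300, "stringency": 0.6},
    "full_lockdown": {"predicted_cases":  100, "stringency": 1.0},
}

def best_plan(plans, weight):
    # 'weight' sets how costly stringency is relative to 1000 predicted cases:
    # small weight -> prioritize health; large weight -> prioritize reopening.
    def score(p):
        return p["predicted_cases"] / 1000 + weight * p["stringency"]
    return min(plans, key=lambda name: score(plans[name]))

print(best_plan(plans, weight=0.3))  # prints "full_lockdown"
print(best_plan(plans, weight=1.5))  # prints "masks_only"
```

Sweeping the weight traces out a frontier of plans, which is the kind of infections-versus-stringency tradeoff the actual challenge asked prescriptors to navigate.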

You can find the details of the prize here and of our model here.

Finalists of the $500K XPRIZE Pandemic Response Challenge!

Our team VALENCIA IA4COVID has progressed to the second phase of the Pandemic Response Challenge, organized by the XPRIZE Foundation and supported by Cognizant. This is a $500K, four-month challenge focused on the development of data-driven AI systems to predict COVID-19 infection rates and prescribe Intervention Plans (IPs) that regional governments, communities, and organizations can implement to minimize harm when reopening their economies. Our group is made up of fourteen experts from universities and research centers of the Valencian Community and is co-led by Nuria Oliver and me. We have all been working intensively since the beginning of the pandemic, altruistically, using the resources available in our respective institutions and with the occasional philanthropic collaboration of some companies.

Our model is among the three best models in the competition by mean rank of MAE (mean absolute error), leading in Asia and in the top 5 in Europe by MAE per 100k inhabitants.

You can see our predictions here. The model has not been updated since its release on December 22nd.

Potential limitations in COVID-19 machine learning due to data source variability

Our recent paper Potential limitations in COVID-19 machine learning due to data source variability: A case study in the nCov2019 dataset has been accepted for publication in the Journal of the American Medical Informatics Association (JAMIA, IF 4.112). We study whether the lack of representative coronavirus disease 2019 (COVID-19) data is a bottleneck for reliable and generalizable machine learning. Data sharing is insufficient without data quality, in which source variability plays an important role. We showcase and discuss potential biases arising from data source variability in COVID-19 machine learning. Our results are based on the publicly available nCov2019 dataset, which includes patient-level data from several countries. We aimed at the discovery and classification of severity subgroups using symptoms and comorbidities. We show that cases from the two countries with the highest prevalence were divided into separate subgroups with distinct severity manifestations. This variability can reduce the representativeness of training data with respect to the model’s target populations and increase model complexity, at the risk of overfitting.
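As a loose illustration of how subgroup discovery can surface source variability (this is not the paper’s actual pipeline), one can cluster binary symptom/comorbidity vectors and observe that patients from two hypothetical data sources fall into distinct groups. All patient data, feature names, and the minimal k-means below are invented for the example:

```python
# Minimal k-means (stdlib only) on synthetic binary patient vectors.
# Feature order is illustrative: [fever, cough, dyspnea, comorbidity].
# The point: two data sources with different severity profiles end up
# in separate clusters, mirroring the source-variability effect.

def kmeans(points, k, iters=20):
    # Deterministic init: first and last points as starting centroids.
    centroids = [list(points[0]), list(points[-1])]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # mean of each feature over the cluster's members
                centroids[i] = [sum(vals) / len(cl) for vals in zip(*cl)]
    return centroids, clusters

# Synthetic sources: "country A" mild presentations, "country B" severe ones.
country_a = [(1, 1, 0, 0), (1, 0, 0, 0), (0, 1, 0, 0), (1, 1, 0, 0), (1, 0, 0, 0)]
country_b = [(1, 1, 1, 1), (0, 1, 1, 1), (1, 0, 1, 1), (1, 1, 1, 1), (0, 1, 1, 1)]

centroids, clusters = kmeans(country_a + country_b, k=2)
print(len(clusters[0]), len(clusters[1]))  # prints "5 5": the sources split cleanly
```

A model trained on one such cluster would generalize poorly to the other, which is the representativeness risk the paper discusses.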

Success at the ANDI Challenge

Over the last few months, I have been participating in the ANDI Challenge together with my Ph.D. student Óscar Garibo. Since Albert Einstein provided a theoretical foundation for Robert Brown’s observation of the movement of particles within pollen grains suspended in water, significant deviations from the laws of Brownian motion have been uncovered in a variety of animate and inanimate systems, from biology to the stock market. Anomalous diffusion, as it has come to be called, is connected to non-equilibrium phenomena, flows of energy and information, and transport in living systems.

The challenge consists of three main tasks, each of them in three dimensions (1D, 2D, and 3D):

  • Task 1 – Inference of the anomalous diffusion exponent α.
  • Task 2 – Classification of the diffusion model.
  • Task 3 – Segmentation of trajectories.

We obtained first position in Task 1 (1D) and second position in Task 2 (1D). We also obtained third position in Task 2 (3D) and fourth position in Task 2 (2D).
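For context on Task 1: the anomalous diffusion exponent α is defined through the scaling of the mean squared displacement, MSD(Δ) ∝ Δ^α (α = 1 is ordinary Brownian motion, α < 1 subdiffusion, α > 1 superdiffusion). A standard baseline estimator therefore fits the slope of log MSD versus log lag. The sketch below (synthetic MSD values and a hypothetical `fit_alpha` helper) illustrates that textbook baseline, not our competition method:

```python
import math

def fit_alpha(lags, msd):
    # Least-squares slope of log(MSD) vs log(lag).
    # Since MSD(lag) ~ lag**alpha, that slope estimates alpha.
    xs = [math.log(lag) for lag in lags]
    ys = [math.log(m) for m in msd]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Synthetic subdiffusive MSD curve with a known exponent.
true_alpha = 0.7
lags = list(range(1, 50))
msd = [lag ** true_alpha for lag in lags]

print(round(fit_alpha(lags, msd), 3))  # prints "0.7"
```

On real single-particle trajectories the MSD is noisy and this estimator degrades quickly, which is precisely why the challenge called for machine-learning approaches.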