SAI will develop the scientific foundations for novel ML-based AI systems ensuring:
(i) individuation: in SAI, each individual is associated with their own "Personal AI Valet" (PAIV), which acts as the individual's proxy in a complex ecosystem of interacting PAIVs;
(ii) personalisation: PAIVs process individuals' data via explainable AI models tailored to the specific characteristics of their human twins;
(iii) purposeful interaction: PAIVs interact with each other to build global AI models and/or reach collective decisions starting from the local (i.e., individual) models;
(iv) human-centricity: novel AI algorithms and the interactions between PAIVs are driven by (quantifiable) models of the individual and social behaviour of their human users;
(v) explainability: explainable ML techniques are extended through quantifiable human behavioural models and network-science analysis to make both local and global AI models explainable-by-design.
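To make point (iii) concrete, the toy sketch below illustrates one well-known way global models can emerge from purely local ones without a central server: gossip averaging, where random pairs of agents repeatedly average their model weights. All names and data here are hypothetical illustrations, not part of the SAI design, and the actual SAI mechanisms may differ.

```python
import random

def gossip_round(weights):
    """One decentralised round: randomly pair agents and average their weights."""
    ids = list(weights)
    random.shuffle(ids)
    for a, b in zip(ids[::2], ids[1::2]):
        avg = [(x + y) / 2 for x, y in zip(weights[a], weights[b])]
        weights[a] = avg
        weights[b] = list(avg)
    return weights

# Hypothetical local models of four PAIVs (e.g. per-user regression weights).
paivs = {"u1": [1.0, 0.0], "u2": [0.0, 1.0], "u3": [2.0, 2.0], "u4": [1.0, 3.0]}
for _ in range(50):
    gossip_round(paivs)
# Each pairwise average preserves the global mean, so all PAIVs drift
# towards the collective model [1.0, 1.5] with no central coordinator.
```

Because every exchange is pairwise and mean-preserving, the collective model is a property of the network rather than of any single node, which matches the decentralised setting the paragraph above describes.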

The ultimate goal of SAI is to provide the foundational elements enabling a decentralised collective of explainable PAIVs to evolve local and global AI models whose processes and decisions are transparent, explainable, and tailored to the needs and constraints of individual users.

To this end, the project will deliver:
(i) the PAIV, a personal digital platform where every person can privately and safely integrate, store, and extract meaning from their own digital traces, as well as interact with the PAIVs of other users;
(ii) human-centric local AI models;
(iii) global, decentralised AI models emerging from human-centric interactions between PAIVs;
(iv) personalised explainability at the level of both local and global AI models; and
(v) concrete use cases to validate the SAI design principles, based on real datasets complemented, where needed, by synthetic datasets obtained from well-established models of human behaviour, in the areas of private traffic management, opinion diffusion/fake-news detection in social media, and pandemic tracking and control.