In this internet era, where 2.5 quintillion bytes of data are created per day, the development of
techniques for efficiently aggregating this dizzying amount of data has become a matter of utmost
importance. The relevance of formalizing this simple notion of aggregation is attested by countless
papers, special issues, edited volumes, monographs, conferences, summer schools, and the like.
Indeed, there exists a whole community centred on the study of aggregation processes. However,
although this work constitutes an impressive body of mathematical knowledge, most
theoretically oriented studies are confined to the aggregation of real numbers or, in more exotic
cases, to other ordered structures, such as intervals or ordinal linguistic scales. Shamefully, the
aggregation of non-ordered structures, such as graphs, strings and ranking data, is mostly
addressed by practitioners. One could say that, while the practical framework is full of "ad hoc"
methods, the theoretical framework is embarrassingly narrow. Hitherto, theory and practice have
not converged. This divergence stems mainly from two factors: 1) the difficulty of fully
understanding some black-box-like practical aggregation techniques, and 2) the computational
challenge of applying some complex theoretical aggregation techniques. In this postdoctoral
proposal, we aim to develop a new theory of aggregation that finally brings theory and practice
together.