CRIL in Brief
The Centre de Recherche en Informatique de Lens (CRIL UMR 8188) is a joint laboratory of Université d'Artois and the CNRS whose unifying research theme is artificial intelligence and its applications. It brings together about 70 members: researchers, faculty members, PhD students, and administrative and technical staff.
CRIL is a member of the European confederation of artificial intelligence laboratories CAIRNE and of the regional humAIn alliance. It is supported by the French Ministry of Higher Education and Research, the CNRS, Université d'Artois, and the Hauts-de-France region.
CRIL is located on two sites in Lens: the Jean Perrin Faculty of Science and the IUT.
News (RSS)
Seminar: Nicolas Schwind
Iterated Belief Change as Learning
Dec. 4, 2025 - 14:00
In this work, we show how the class of improvement operators – a general class of iterated belief change operators – can be used to define a learning model. Focusing on binary classification, we present learning and inference algorithms suited to this learning model and we evaluate them empirically. Our findings highlight two key insights: first, that iterated belief change can be viewed as an effective form of online learning, and second, that the well-established axiomatic foundations of belief change operators offer a promising avenue for the axiomatic study of classification tasks.
Seminar: Lars Kotthoff
The Shapley Value & the Temporal Shapley Value for Algorithm Performance Analysis
Nov. 27, 2025 - 14:00
It is surprisingly difficult to quantify an algorithm's contribution to the state of the art. Reporting an algorithm's standalone performance wrongly rewards near-clones while penalizing algorithms that have small but distinct areas of strength. Measuring an algorithm's marginal contribution is better, but penalizes sets of strongly correlated algorithms, thereby obscuring situations in which it is essential to have at least one algorithm from such a set. Neither of these measures takes time into account, penalizing algorithms that are no longer state-of-the-art, but were when they were introduced.
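The classical (non-temporal) Shapley value underlying this analysis can be illustrated on a toy algorithm portfolio. The sketch below is not the speaker's method: it computes exact Shapley values for a made-up set of solvers, taking a coalition's value to be the negated total runtime of its virtual best solver (all names and runtimes are invented).

```python
from itertools import combinations
from math import factorial

# Hypothetical runtimes (seconds) of three algorithms on four instances.
runtimes = {
    "A": [10, 50, 5, 60],
    "B": [12, 48, 6, 55],   # behaves much like A (strongly correlated)
    "C": [90, 90, 1, 90],   # weak overall, but uniquely strong on instance 3
}

def value(coalition):
    """Coalition value: negated total runtime of its virtual best solver."""
    if not coalition:
        return 0.0
    n_instances = len(next(iter(runtimes.values())))
    return -sum(min(runtimes[a][i] for a in coalition) for i in range(n_instances))

def shapley(player):
    """Exact Shapley value: weighted marginal contributions over all coalitions."""
    others = [a for a in runtimes if a != player]
    n = len(runtimes)
    total = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (value(subset + (player,)) - value(subset))
    return total

for a in runtimes:
    print(a, round(shapley(a), 2))
```

Unlike standalone performance, the Shapley value credits C for its unique strength on instance 3 while splitting the shared credit between the correlated pair A and B; by the efficiency property, the three values sum to the portfolio's total value.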
Seminar: Emmanuel Lonca
How the Cluster Works
Nov. 13, 2025 - 14:00
A presentation of how the cluster works.
Seminar: Florent Capelli
DPLL is worst-case optimal
Nov. 6, 2025 - 14:00
In database theory, a join algorithm is said to be worst-case optimal for a class of databases $C$ if the time needed to compute the join of a set of tables is linear in the time needed to output the largest possible join one can have by considering databases from $C$. Many worst-case optimal joins have been proposed in the literature, but their analysis is often hard to understand.
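The flavour of these joins can be seen on the triangle query, the standard example where classical pairwise join plans are provably suboptimal. The sketch below is illustrative only (the relations are made up, and it makes no claim about the talk's DPLL-based analysis): it fixes one variable at a time, intersecting candidate values across relations, in the style of attribute-at-a-time "generic" worst-case optimal joins.

```python
# Triangle query Q(a, b, c) :- R(a, b), S(b, c), T(a, c),
# evaluated one variable at a time rather than one pairwise join at a time.

def triangle_join(R, S, T):
    """Enumerate all (a, b, c) with (a,b) in R, (b,c) in S, (a,c) in T."""
    out = []
    for a in {x for x, _ in R} & {x for x, _ in T}:        # candidate values for a
        bs = {y for x, y in R if x == a}                   # b's compatible with a
        cs = {y for x, y in T if x == a}                   # c's compatible with a
        for b in bs:
            for c in {y for x, y in S if x == b} & cs:     # c's compatible with a and b
                out.append((a, b, c))
    return out

R = {(1, 2), (1, 3), (2, 3)}
S = {(2, 3), (3, 1)}
T = {(1, 3), (2, 1)}
print(sorted(triangle_join(R, S, T)))
```

On skewed instances, any plan that first materializes a full pairwise join (say R ⋈ S) can produce an intermediate result far larger than the final output; branching on variables avoids that blow-up, which is the intuition behind worst-case optimality.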
Seminar: Markus Hecher
#P is Sandwiched by One and Two #2DNF Calls: Is Subtraction Stronger Than We Thought?
Oct. 23, 2025 - 14:00
The canonical class in the realm of counting complexity is #P. It is well known that the problem of counting the models of a propositional formula in disjunctive normal form (#DNF) is complete for #P under Turing reductions. On the other hand, #DNF is in SpanL, which is strictly contained in #P under parsimonious reductions and reasonable assumptions. Hence, the class of functions logspace reducible to #DNF is a strict subset of #P under plausible complexity-theoretic assumptions.
Seminar: David Ing
On Integrating Logical Analysis of Data into Random Forests
Oct. 16, 2025 - 14:00
Random Forests (RFs) are one of the most popular classifiers in machine learning. RF is an ensemble learning method that combines multiple Decision Trees (DTs), providing a more robust and accurate model than a single DT. However, one of the main steps of RFs is the random selection of many different features during the construction phase of DTs, resulting in a forest with various features, which makes it difficult to extract short and concise explanations.
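The "random selection of many different features" can be made concrete with a back-of-the-envelope simulation (all parameters below are invented, not taken from the talk): with the common √d-sized candidate set per split, even a modest forest ends up touching essentially every feature, which is the source of the explanation-size problem the abstract describes.

```python
import random
from math import isqrt

random.seed(0)
n_features, n_trees, splits_per_tree = 16, 50, 7
k = isqrt(n_features)  # common default: ~sqrt(d) candidate features per split

# Collect which features appear as a split candidate in at least one tree.
used = set()
for _ in range(n_trees):
    for _ in range(splits_per_tree):
        used.update(random.sample(range(n_features), k))

print(f"{len(used)} of {n_features} features used somewhere in the forest")
```

A single decision tree can often be explained with a handful of features on a root-to-leaf path; a forest whose trees jointly involve all d features offers no such short certificate, hence the interest in constraining feature selection, e.g. via Logical Analysis of Data.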