Public thesis defense - Manel Barkallah
Synopsis
The spread of Internet-based technologies since the mid-1990s has led to a paradigm shift from monolithic, centralized information systems to distributed information systems built from heterogeneous software components interacting with each other. These systems are now so widespread that they touch our everyday lives. Classically, concurrent and distributed systems are coded using the message-passing paradigm, according to which components exchange information by sending and receiving messages. With the aim of clearly separating the computational and interactional aspects of computations, Gelernter and Carriero proposed an alternative framework in which components interact through the availability of information placed on a shared space. Their framework was concretized in a language called Linda, and a series of languages, nowadays referred to as coordination languages, have been developed since.

In addition to providing a more declarative framework, such languages nicely fit applications like Facebook, LinkedIn, and Twitter, in which users share information by adding it to, or consulting it in, a common place. Such systems are in fact particular cases of so-called socio-technical systems, in which humans interact with machines and their environments through complex dependencies. Since coordination languages suit social networks so well, the question naturally arises whether they can also conveniently code socio-technical systems. Answering this question, however, first requires seeing how well programs written in coordination languages can reflect what they are assumed to model.

This thesis aims at addressing these two questions. To that end, we use the Bach coordination language developed at the University of Namur as a representative of Linda-like languages. We extend it into a language named Multi-Bach to be able to code and reason about socio-technical systems. We also introduce a workbench, Anemone, to support the modelling of such systems. Finally, we demonstrate the interest of our approach through the coding of several socio-technical systems.
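To give a concrete flavour of the coordination style described above, here is a minimal Python sketch of a Linda-like shared space with tell/ask/get operations. The class, its token encoding, and its non-blocking behaviour are illustrative assumptions for this announcement, not the actual Bach or Multi-Bach implementation.

from collections import Counter

class Store:
    """A simplified shared dataspace in the spirit of Linda-like languages (illustrative only)."""

    def __init__(self):
        self._data = Counter()  # multiset of tokens

    def tell(self, token):
        """Add one occurrence of a token to the shared space (Linda's out)."""
        self._data[token] += 1

    def ask(self, token):
        """Check that a token is present without removing it (Linda's rd, non-blocking here)."""
        return self._data[token] > 0

    def get(self, token):
        """Remove one occurrence of a token if present (Linda's in, non-blocking here)."""
        if self._data[token] > 0:
            self._data[token] -= 1
            return True
        return False

# Two "components" coordinate through the store instead of exchanging messages.
space = Store()
space.tell("post:hello")         # one user publishes a piece of information
if space.ask("post:hello"):      # another user consults it
    print("post is visible")
space.get("post:hello")          # and may consume it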
The Jury
Prof. Wim Vanhoof - University of Namur, Belgium
Prof. Jean-Marie Jacquet - University of Namur, Belgium
Prof. Katrien Beuls - University of Namur, Belgium
Prof. Pierre-Yves Schobbens - University of Namur, Belgium
Prof. Laura Bocchi - University of Kent, United Kingdom
Prof. Stefano Mariani - UNIMORE University, Italy
Participation upon registration.
Register here
Doctoral thesis defense - Sereysethy Touch
Synopsis

A honeypot is a security tool deliberately designed to be vulnerable, thereby enticing attackers to probe, exploit, and compromise it. Since their introduction in the early 1990s, honeypots have remained among the most widely used tools for capturing cyberattacks, complementing traditional defenses such as firewalls and intrusion detection systems. They serve both as early warning systems and as sources of valuable attack data, enabling security professionals to study the techniques and behaviors of threat actors.

While conventional honeypots have achieved significant success, they remain deterministic in their responses to attacks. This is where adaptive or intelligent honeypots come into play. An adaptive honeypot leverages Machine Learning techniques, such as Reinforcement Learning, to interact with attackers (a simplified sketch of such a learning loop is given below). These systems learn to take actions that can disrupt the normal execution flow of an attack, potentially forcing attackers to alter their techniques. As a result, attackers must find alternative routes or tools to achieve their objectives, ultimately leading to the collection of more attack data.

Despite their advantages, traditional honeypots face two main challenges. First, emulation-based honeypots (also known as low- and medium-interaction honeypots) are increasingly susceptible to detection, which undermines their effectiveness in collecting meaningful attack data. Second, real-system-based honeypots (also known as high-interaction honeypots) pose security risks to the hosting organization if not properly isolated and protected. Since adaptive honeypots rely on the same underlying systems, they inherit these challenges as well.

This thesis investigates whether it is possible to design a honeypot system that mitigates these challenges while still fulfilling its primary objective of collecting attack data. To this end, it proposes a new abstract model for adaptive self-guarded honeypots, designed to balance attack data collection, detection evasion, and security preservation, ensuring that the honeypot does not pose a risk to the rest of the network.

Jury members

Prof. Wim VANHOOF, President, University of Namur
Prof. Jean-Noël COLIN, Promoter, University of Namur
Prof. Florentin ROCHET, Internal Member, University of Namur
Prof. Benoît FRENAY, Internal Member, University of Namur
Prof. Ramin SADRE, External Member, Catholic University of Leuven
Dr. Jérôme FRANCOIS, External Member, University of Luxembourg

You are cordially invited to a drink following the public defense. For organizational purposes, please confirm your attendance by Tuesday, May 20, 2025.
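As referenced in the synopsis, the following minimal Python sketch illustrates the kind of reinforcement-learning loop an adaptive honeypot can rely on: a tabular Q-learning agent that picks a response to each attacker command and learns from a reward signal. The states, actions, and reward used here are hypothetical placeholders for illustration, not the model proposed in the thesis.

import random
from collections import defaultdict

ACTIONS = ["allow", "substitute", "block"]   # hypothetical honeypot responses
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

q_table = defaultdict(float)                 # (state, action) -> estimated value

def choose_action(state):
    """Epsilon-greedy choice among the honeypot's possible responses."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    """Standard tabular Q-learning update."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])

# Toy interaction: each attacker command is treated as a state; the reward
# crudely models "more attack data collected" (an assumption for illustration).
session = ["wget", "chmod", "./payload"]
for cmd, nxt in zip(session, session[1:] + ["end"]):
    action = choose_action(cmd)
    reward = 1.0 if action != "block" else -1.0
    update(cmd, action, reward, nxt)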
I want to register
Defense of doctoral thesis - Jérôme Fink
Synopsis

Deep learning methods have become increasingly popular for building intelligent systems. Many deep learning architectures currently constitute the state of the art in their respective domains, such as image recognition, text generation, and speech recognition. The availability of mature libraries and frameworks to develop such systems is also a key factor in this success.

This work explores the use of these architectures to build intelligent systems for sign languages. The creation of large sign language corpora has made it possible to train deep learning architectures from scratch. The contributions presented in this work cover all aspects of the development of an intelligent system based on deep learning. The first contribution is the creation of a database for the Langue des Signes de Belgique Francophone (LSFB). It is derived from an existing corpus and has been adapted to the needs of deep learning methods. The possibility of using crowdsourcing methods to collect more data is also explored.

The second contribution is the development or adaptation of architectures for automatic sign language recognition. The use of contrastive methods to learn better representations is explored, and the transferability of these representations to other sign languages is assessed.

Finally, the last contribution is the integration of the models into software for the general public. This led to a reflection on the challenges of integrating an intelligent module into the software development life cycle.

Jury members

Prof. Wim VANHOOF, President, University of Namur
Prof. Benoît FRENAY, Promoter, University of Namur
Prof. Anthony CLEVE, Co-promoter, University of Namur
Prof. Laurence MEURANT, Internal Member, University of Namur
Prof. Lorenzo BARALDI, External Member, University of Modena
Prof. Annelies BRAFFORT, External Member, University of Paris-Saclay
Prof. Joni DAMBRE, External Member, University of Ghent

You are cordially invited to a drink following the public defense. For organizational purposes, please confirm your attendance by Friday, June 6.
I want to register
Annual Research Day
The program
2:00 pm | Keynote lecture on the use of AI in research - Hugues BERSINI, Professor at the Université libre de Bruxelles: "Can science be just data driven?"
3:00 pm | Presentations by UNamur researchers
3:00 pm | Catherine Guirkinger: Use of AI in an economic history project
3:15 pm | Nicolas Roy (PI: Alexandre Mayer): AI at the service of innovation in photonics and optics: revealing the secrets of scrolls through the classification of animal species
3:25 pm | Nemanja Antonic (PI: Elio Tuci): An in silico representation of C. elegans collective behaviour
3:35 pm | Nicolas Franco: The benefits and dangers of "predicting the future" with covid-like machine learning models
3:45 pm | Michel Ajzen: Managerial and human implications of AI in organizations
3:55 pm | Robin Ghyselinck (PI: Bruno Dumas): Deep learning for endoscopy: towards next-generation computer-aided diagnosis
4:05 pm | Auguste Debroise (PI: Guilhem Cassan): LLMs to measure the importance of stereotypes within gender representations in Hollywood films
4:15 pm | Gabriel Dias De Carvalho: Learning practices in physics using generative AI
4:25 pm | Sébastien Dujardin (PI: Catherine Linard): Where geography meets AI: a case study on mapping online flood conversations
4:35 pm | Jeremy Dodeigne: LLMs in SHS: revolutionary tools in a Wild West territory? Reflections on costs, transparency and open science
4:45 pm | Antoinette Rouvroy: Governing AI in democracy
5:00 pm | Keynote lecture on ethics and guidelines to consider when using AI in research projects and writing research articles - Bettina BERENDT, Professor at KU Leuven
6:00 pm | Benoît Frenay and Michaël Lobet: Creation of an AI scientific committee at UNamur
6:10 pm | Drink

A certificate of attendance, worth 0.5 cross-disciplinary doctoral training credits, will be issued on request. Contact: secretariat.adre@unamur.be

This event is free of charge, but registration is required.
I want to register
AI to the Future: User-Centric Innovation and Media Regulation
The workshop will feature:
A keynote presentation on public value and AI implementation at VRT.
Sessions on discoverability, user agency, and explainability.
Discussions on regulation, including perspectives on the AI Act and transparency in media.
An interactive session showcasing AI-driven prototypes.

The event will also highlight our project's latest findings. Join us for a day of thought-provoking discussions, knowledge exchange, and networking opportunities!

Would you like to attend? Places are limited and will be allocated on a first-come, first-served basis, so register as soon as possible. Registration will close on April 11, 2025.
More information here
Vivre la Ville | What technologies for the city of 2030?
The program
Interventions by experts and researchers in the fields of data science, AI, digital twins, digital law, and participatory processes.

Registration on the Vivre la Ville... website.
Defense of doctoral thesis in computer science - Gonzague Yernaux
Abstract
Detecting semantic code clones in logic programs is a longstanding challenge, due to the lack of a unified definition of semantic similarity and the diversity of syntactic expressions that can represent similar behaviours. This thesis introduces a formal and flexible framework for semantic clone detection based on Constraint Horn Clauses (CHC). The approach considers two predicates to be semantic clones if they can be independently transformed, via semantics-preserving program transformations, into a common third predicate. At the core of the method lies anti-unification, a process that computes the most specific generalization of two predicates by identifying their shared structural patterns. The framework is parametric with respect to the allowed program transformations, the notion of generality, and the so-called quality estimators that steer the anti-unification process.
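As a concrete illustration of the anti-unification step mentioned above, the following Python sketch computes a most specific generalization of two first-order terms represented as nested tuples, replacing disagreeing subterms with shared variables. It is a textbook-style simplification, not the parametric framework developed in the thesis.

def anti_unify(t1, t2, variables=None):
    """Return the most specific generalization of two terms.

    Terms are atoms (strings/numbers) or compound terms written as tuples
    (functor, arg1, ..., argn). Disagreeing subterms are replaced by
    variables, with identical disagreement pairs sharing the same variable.
    """
    if variables is None:
        variables = {}
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(anti_unify(a, b, variables) for a, b in zip(t1[1:], t2[1:]))
    # Disagreement: introduce (or reuse) a variable for this pair of subterms.
    return variables.setdefault((t1, t2), f"X{len(variables)}")

# p(f(a), b) and p(f(c), b) generalize to p(f(X0), b)
print(anti_unify(("p", ("f", "a"), "b"), ("p", ("f", "c"), "b")))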
Jury
Prof. Wim Vanhoof - University of Namur, Belgium
Prof. Katrien Beuls - University of Namur, Belgium
Prof. Jean-Marie Jacquet - University of Namur, Belgium
Prof. Temur Kutsia - Johannes Kepler University, Austria
Prof. Frédéric Mesnard - University of the Reunion, Reunion Island
Prof. Paul Van Eecke - Free University of Brussels, Belgium
The public defense (in English) will be followed by a reception. Registration required.
I want to register
Defense of doctoral thesis in computer science - Sacha Corbugy
Abstract
In recent decades, the volume of data generated worldwide has grown exponentially, significantly accelerating advances in machine learning. This explosion of data has led to an increased need for effective data exploration techniques, giving rise to a specialized field known as dimensionality reduction. Dimensionality reduction methods transform high-dimensional data into a low-dimensional space (typically 2D or 3D) so that it can be easily visualized and understood by humans. Algorithms such as Principal Component Analysis (PCA), Multidimensional Scaling (MDS), and t-distributed Stochastic Neighbor Embedding (t-SNE) have become essential tools for visualizing complex datasets. These techniques play a critical role in exploratory data analysis and in interpreting complex models such as Convolutional Neural Networks (CNNs).

Despite their widespread adoption, dimensionality reduction techniques, particularly non-linear ones, often lack interpretability. This opacity makes it difficult for users to understand the meaning of the visualizations or the rationale behind specific low-dimensional representations. In contrast, the field of supervised machine learning has seen significant progress in explainable AI (XAI), which aims to clarify model decisions, especially in high-stakes scenarios. While many post-hoc explanation tools have been developed to interpret the outputs of supervised models, there is still a notable gap in methods for explaining the results of dimensionality reduction techniques.

This research investigates how post-hoc explanation techniques can be integrated into dimensionality reduction algorithms to improve user understanding of the resulting visualizations. Specifically, it explores how interpretability methods originally developed for supervised learning can be adapted to explain the behavior of non-linear dimensionality reduction algorithms. Additionally, this work examines whether the integration of post-hoc explanations can enhance the overall effectiveness of data exploration. As these tools are intended for end users, we also design and evaluate an interactive system that incorporates explanatory mechanisms. We argue that combining interpretability with interactivity significantly improves users' understanding of embeddings produced by non-linear dimensionality reduction techniques.

In this research, we propose enhancements to an existing post-hoc explanation method that adapts LIME for t-SNE. We introduce a globally-local framework for fast and scalable explanations of t-SNE embeddings. Furthermore, we present a completely new approach that adapts saliency-map-based explanations to locally interpret non-linear dimensionality reduction results. Lastly, we introduce our interactive tool, Insight-SNE, which integrates our gradient-based explanation method and enables users to explore low-dimensional embeddings through direct interaction with the explanations.
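For readers unfamiliar with the embeddings discussed above, the snippet below shows a standard use of scikit-learn's t-SNE on a small dataset; it produces the kind of 2D visualization whose interpretation the thesis addresses. It is a generic illustration, not the Insight-SNE tool or the explanation methods themselves.

from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Load a small high-dimensional dataset (8x8 digit images, 64 features).
digits = load_digits()

# Project it to 2D with t-SNE; perplexity controls the neighborhood size.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(digits.data)

# The resulting scatter plot is the kind of visualization whose structure
# (clusters, neighborhoods) explanation methods aim to make interpretable.
plt.scatter(embedding[:, 0], embedding[:, 1], c=digits.target, s=5, cmap="tab10")
plt.colorbar(label="digit class")
plt.show()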
Jury
Prof. Wim Vanhoof - University of Namur, Belgium
Prof. Benoit Frénay - University of Namur, Belgium
Prof. Bruno Dumas - University of Namur, Belgium
Prof. John Lee - University of Louvain, Belgium
Prof. Luis Galarraga - University of Rennes, France
The public defense will be followed by a reception. Registration required.
I want to register
Defense of doctoral thesis in computer science - Antoine Gratia
Abstract
Deep learning has become an extremely important technology in numerous domains such as computer vision, natural language processing, and autonomous systems. As neural networks grow in size and complexity to meet the demands of these applications, the cost of designing and training efficient models keeps rising, both in computation and in energy consumption. Neural Architecture Search (NAS) has emerged as a promising solution to automate the design of performant neural networks. However, conventional NAS methods often require evaluating thousands of architectures, making them extremely resource-intensive and environmentally costly.

This thesis introduces a novel, energy-aware NAS pipeline that operates at the intersection of software engineering and machine learning. We present CNNGen, a domain-specific generator for convolutional architectures, combined with performance and energy predictors that drastically reduce the number of architectures requiring full training. These predictors are integrated into a multi-objective genetic algorithm (NSGA-II), enabling an efficient search for architectures that balance accuracy and energy consumption.

Our approach explores a variety of prediction strategies, including sequence-based models, image-based representations, and deep metric learning, to estimate model quality from partial or symbolic representations. We validate our framework across three benchmark datasets, CIFAR-10, CIFAR-100, and Fashion-MNIST, demonstrating that it can produce results comparable to state-of-the-art architectures at significantly lower computational cost. By reducing the environmental footprint of NAS while maintaining high performance, this work contributes to the growing field of Green AI and highlights the value of predictive modelling in scalable and sustainable deep learning workflows.
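To illustrate the predictor-guided search idea in miniature, the following Python sketch scores randomly generated architecture encodings with cheap surrogate predictors and keeps only the Pareto-optimal (non-dominated) candidates for full training. The encoding, the predictors, and the scoring are hypothetical stand-ins, not CNNGen or the NSGA-II configuration used in the thesis.

import random

def predict_accuracy(arch):
    """Hypothetical surrogate: estimate accuracy without training (placeholder)."""
    return 0.5 + 0.4 * (sum(arch) / (32 * len(arch))) - 0.05 * random.random()

def predict_energy(arch):
    """Hypothetical surrogate: estimate energy cost from the architecture size."""
    return sum(arch) * 0.01

def dominates(a, b):
    """a dominates b if it is at least as good on both objectives and strictly better on one."""
    return (a["acc"] >= b["acc"] and a["energy"] <= b["energy"]
            and (a["acc"] > b["acc"] or a["energy"] < b["energy"]))

# Each candidate is encoded as a list of channel widths per layer (toy encoding).
population = [{"arch": [random.choice([8, 16, 32]) for _ in range(4)]} for _ in range(20)]
for cand in population:
    cand["acc"] = predict_accuracy(cand["arch"])
    cand["energy"] = predict_energy(cand["arch"])

# Keep the non-dominated candidates: only these would be trained for real,
# which is where the savings over exhaustive NAS come from.
pareto_front = [c for c in population if not any(dominates(o, c) for o in population if o is not c)]
for c in pareto_front:
    print(c["arch"], round(c["acc"], 3), round(c["energy"], 2))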
Jury
Prof. Wim Vanhoof - University of Namur, Belgium
Prof. Gilles Perrouin - University of Namur, Belgium
Prof. Benoit Frénay - University of Namur, Belgium
Prof. Pierre-Yves Schobbens - University of Namur, Belgium
Prof. Clément Quinton - University of Lille, France
Prof. Paul Temple - University of Rennes, France
Prof. Shin'ichi Satoh - National Institute of Informatics, Japan
The public defense will be followed by a reception. Registration required.
I want to register