
Keynote Lectures

AI-Generated Content: Opportunities and Challenges
Abdenour Hadid, Sorbonne University of Abu Dhabi, United Arab Emirates

AI and High Field MRI for Neurodegenerative Diseases
Christine Fernandez-Maloigne, University of Poitiers, France

RGB-Depth Imaging for High-Throughput Monitoring of Plant Growth
David Rousseau, Laboratoire d'Ingénierie des Systèmes Automatisés, Université d'Angers, France

 

AI-Generated Content: Opportunities and Challenges

Abdenour Hadid
Sorbonne University of Abu Dhabi
United Arab Emirates
 

Brief Bio
Abdenour Hadid received his Doctor of Science in Technology degree in electrical and information engineering from the University of Oulu, Finland, in 2005. He is currently a Professor and Principal Investigator of a Chair on Artificial Intelligence at Sorbonne University Abu Dhabi. His research interests include physics-informed machine learning, forecasting, computer vision, deep learning, artificial intelligence, the Internet of Things, and personalized healthcare. He has authored more than 400 papers in international conferences and journals, and has served as a reviewer for many international conferences and journals. His work is well cited by the research community, with more than 24,000 citations and an h-index of 57. Prof. Hadid is a Senior Member of the IEEE. He was the recipient of the prestigious "Jan Koenderink Prize" for fundamental contributions in computer vision. He has played a key role in several European projects, one of which was selected as a Success Story by the European Commission. His achievements have also been recognized by many awards, including the highly competitive Academy Research Fellow position from the Academy of Finland (2013-2018) and a prestigious international award within the 100-Talent Program (Outstanding Visiting Professor) of Shaanxi Province, China.


Abstract
It is becoming quite easy to generate realistic images and videos, especially using diffusion-like models (e.g. DALL-E, GLIDE, Midjourney, Imagen, VideoPoet, Sora, Genie) with impressive generative capabilities. This creates huge potential for a wide range of applications such as image editing, video production, content creation, and digital marketing. Moreover, synthetically generated images and videos can be very useful for enhancing the training of AI models, which usually require large amounts of data. However, these advances have also raised concerns about potential misuse, including the creation of misleading content such as fake news and propaganda. One of the critical challenges associated with these advancements is therefore the development of effective methods for detecting synthetic images and videos. In this talk, we present advances in automatically generating and editing images and videos, and discuss the limitations and challenges of such AI-generated content.



 

 

AI and High Field MRI for Neurodegenerative Diseases

Christine Fernandez-Maloigne
University of Poitiers
France
 

Brief Bio
Christine Fernandez-Maloigne holds a degree in Computer Engineering from the Université de Technologie de Compiègne, where she obtained a PhD in Signal and Image Processing. She then obtained a research qualification (HDR) in mathematics from the University of Lille. She is currently Professor of Image Processing at the University of Poitiers and has served as Vice-Rector in charge of international relations since June 2016. Since the beginning of 2019, she has been co-founder and co-director of I3M, a joint laboratory in AI and medical imaging between the CNRS, SIEMENS, the University of Poitiers, and the Poitiers University Hospital. She is deputy director of the CNRS MIRES research federation, which she helped create in 2012 and which brings together laboratories in mathematics and engineering and information sciences in northern Nouvelle-Aquitaine. At the national level, she was a member of the National University Committee from 2008 to 2022 and was part of the French delegation to AFNOR for the ISO JPEG compression standards. She is an expert for several national agencies (ANR, HCERES, DGA, Campus France, etc.). At the international level, she is an expert for the European Commission and several European research bodies. She has been the French representative in CIE Division 8 (Image Technology) since 2006 and was its secretary from 2015 to 2024. She is a member of the editorial boards of several international journals, was deputy editor-in-chief of JOSA A until 2021, and is a Senior Fellow of the Optical Society of America. She was awarded the Augustin Fresnel National Prize for her work in colour and multivariate imaging.


Abstract
The exponential growth of computing resources has opened the way to improving diagnostic and therapeutic methods in medicine through artificial intelligence (AI). The field of imaging, particularly neuro MRI data, has been and will likely continue to be at the forefront of this revolution. These approaches, combined with new ultra-high-field MRI techniques, will advance the understanding of brain biology by enabling virtual biopsy: non-invasive sampling of the molecular environment at high spatial resolution, providing a better understanding of the underlying heterogeneous cellular and molecular processes. By providing in vivo markers of spatial and molecular heterogeneity, these AI-based tools have the potential to guide diagnoses and therapeutic pathways for neurodegenerative diseases more accurately for individual patients and to enable better dynamic treatment monitoring in the era of personalized medicine.



 

 

RGB-Depth Imaging for High-Throughput Monitoring of Plant Growth

David Rousseau
Laboratoire d'Ingénierie des Systèmes Automatisés, Université d'Angers
France
 

Brief Bio
Prof. David Rousseau heads the ImHorPhen bioimaging research group at Université d'Angers, France. Since 2008, he has developed plant imaging methods for high-throughput phenotyping based on computer vision and machine learning. He cares about developing teachable research and promotes the results of his group via the following YouTube channel: https://www.youtube.com/channel/UCsd9Dt6N7O-fydynsWEfkww


Abstract
Plants are complex 3D structures that present real challenges for computer vision, due to their self-similarity, the presence of self-occlusion, and the low colour contrast between certain organs (particularly leaves). The need for plant imaging is growing for applications in biology and agriculture, where digital technology provides objective repeatability, high-throughput parallelization capability, and observations at spatio-temporal scales and spectral ranges that are inaccessible to the human eye. In this context, the use of low-cost RGB-Depth cameras, introduced in 2012 [1] by repurposing the "Kinect" video-game sensor of the time, is proving to be a very powerful tool for segmenting leaves, characterizing the 3D shape of plant cover, and detecting the effects of biotic and abiotic stresses. This presentation summarizes the work [1-6] carried out by our group on this subject over more than a decade.
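To give an intuition for why depth data helps here: a minimal sketch (not from the works cited, and with hypothetical threshold values) of how an RGB-Depth frame allows foreground plant pixels to be separated from the background by depth range alone, something low colour contrast makes hard with RGB only.

```python
import numpy as np

def segment_foreground(depth, near=0.3, far=0.9):
    """Return a boolean mask of pixels whose depth (in metres) lies in
    [near, far]. Assumes the plant sits closer to the sensor than the
    background; a depth of 0 marks missing data, as consumer RGB-D
    sensors typically report."""
    valid = depth > 0                       # drop missing-depth pixels
    return valid & (depth >= near) & (depth <= far)

# Toy 4x4 depth map: plant at ~0.5 m, background at ~1.5 m, one hole (0)
depth = np.array([[1.5, 1.5, 1.5, 1.5],
                  [1.5, 0.5, 0.6, 1.5],
                  [1.5, 0.5, 0.0, 1.5],
                  [1.5, 1.5, 1.5, 1.5]])
mask = segment_foreground(depth)
print(int(mask.sum()))  # 3 plant pixels detected
```

In practice the cited works go well beyond such thresholding (3D shape descriptors, deep learning on fused RGB-Depth), but the depth channel is what makes the initial plant/background separation robust to colour ambiguity.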

References:
[1] Chéné, Y., Rousseau, D., Lucidarme, P., Bertheloot, J., Caffier, V., Morel, P., ... & Chapeau-Blondeau, F. (2012). On the use of depth camera for 3D phenotyping of entire plants. Computers and Electronics in Agriculture, 82, 122-127.
[2] Chéné, Y., Belin, É., Rousseau, D., & Chapeau-Blondeau, F. (2013). Multiscale analysis of depth images from natural scenes: Scaling in the depth of the woods. Chaos, Solitons & Fractals, 54, 135-149.
[3] Chéné, Y., Rousseau, D., Belin, É., Garbez, M., Galopin, G., & Chapeau-Blondeau, F. (2016). Shape descriptors to characterize the shoot of entire plant from multiple side views of a motorized depth sensor. Machine Vision and Applications, 27, 447-461.
[4] Garbouge, H., Rasti, P., & Rousseau, D. (2021). Enhancing the Tracking of Seedling Growth Using RGB-Depth Fusion and Deep Learning. Sensors, 21(24), 8425.
[5] Couasnet, G., Cordier, M., Garbouge, H., Mercier, F., Pierre, D., El Ghaziri, A., & Rousseau, D. (2023). Growth Data—An automatic solution for seedling growth analysis via RGB-Depth imaging sensors. SoftwareX, 24, 101572.
[6] Cordier, M., Metuarea, H., Bencheikh, M., Rasti, R., Torrez, C., & Rousseau, D. (2023). Leaf segmentation of seedlings using foundation model on RGB-Depth images. ICCV Computer Vision for Plant Phenotyping and Agriculture Workshop.


