FlySight announces its participation in the prestigious SPIE “Artificial Intelligence for Security and Defence Applications” conference, taking place in Madrid from September 15-18, 2025. Giuseppe Martino, a key member of our research team, will present a paper on September 18 that highlights our innovative work within the EDF STORE project.
The presentation, titled “From bounding boxes to semantic segmentation: leveraging SAM for weak supervision in remote sensing,” will unveil a novel approach to analyzing satellite imagery. This research focuses on using Meta’s Segment Anything Model (SAM) to dramatically reduce the time and effort traditionally required to train AI models for object detection in remote sensing data.
Pioneering Weak Supervision for Remote Sensing
For years, developing accurate AI models for tasks like identifying infrastructure or tracking environmental changes has relied on labor-intensive, manual data labeling. Our research shifts this paradigm by pioneering a "weak supervision" method. Instead of requiring precise, pixel-by-pixel annotations, our technique leverages far coarser "bounding box" labels to achieve high-quality semantic segmentation. This makes the annotation process faster, more scalable, and far more efficient.
The SPIE conference is a leading international forum for researchers and professionals in optics and photonics. FlySight's inclusion in this event reinforces our position as a leader in applying cutting-edge AI to real-world security and defense challenges. This presentation showcases not only our technical expertise but also our commitment to pushing the boundaries of what's possible in the remote sensing domain. We're thrilled to join again this year, and we can't wait to share our latest insights and help shape the global conversation on the future of AI in this critical field.
At a Glance
SPIE Sensors+Imaging Meeting: https://spie.org/esi
Meeting Dates: 15 – 18 September 2025
Conference: Artificial Intelligence for Security and Defence Applications III
Conference Program: https://spie.org/sd109
Paper Number: 13679-42
Presentation: 18 September 2025 • 12:00 – 12:20 CEST | Room N111/112
Abstract
Semantic segmentation typically requires extensive pixel-level annotations, which are costly and time-consuming to obtain. This paper investigates the effectiveness of using the Segment Anything Model (SAM) for weakly supervised semantic segmentation of aerial and satellite imagery, utilizing only bounding box annotations. We present an approach that leverages SAM to generate pseudo ground truth annotations from bounding box prompts, which are then used to train the SegNeXt semantic segmentation model on the iSAID dataset. Our method achieves results comparable to fully supervised training, with only a 4.2% decrease in mean Intersection over Union (mIoU). These findings demonstrate the potential of foundation models to reduce annotation costs while maintaining high performance in aerial image segmentation tasks.
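The core idea of the pipeline above can be sketched in a few lines. The sketch below is illustrative only, not code from the paper: a simple box-fill stands in for SAM's mask prediction, the function names are our own, and the toy arrays are made up for the example.

```python
import numpy as np

def boxes_to_pseudo_mask(boxes, labels, shape):
    """Turn bounding-box annotations into a pseudo ground truth label map.

    Here each box is simply filled with its class id as a crude stand-in
    for SAM; in the real pipeline each box would instead prompt SAM and
    the returned binary mask would be pasted into the label map.
    """
    mask = np.zeros(shape, dtype=np.int32)  # 0 = background
    for (x0, y0, x1, y1), cls in zip(boxes, labels):
        mask[y0:y1, x0:x1] = cls
    return mask

def mean_iou(pred, gt, num_classes):
    """Mean Intersection over Union across foreground classes."""
    ious = []
    for c in range(1, num_classes + 1):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

# Toy example: one object of class 1, annotated only by its bounding box.
gt = np.zeros((10, 10), dtype=np.int32)
gt[2:6, 2:6] = 1
pseudo = boxes_to_pseudo_mask([(2, 2, 6, 6)], [1], (10, 10))
print(mean_iou(pseudo, gt, num_classes=1))  # → 1.0 when the object fills its box
```

With the actual foundation model, each box would prompt SAM (e.g. via the `box` argument of `SamPredictor.predict` in Meta's `segment-anything` package) to produce a much tighter mask than the box itself; those pseudo-masks then serve as training targets for the segmentation network in place of manual pixel-level labels.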
FlySight is one of the 21 partners involved in the EDF STORE project. 👉🏻 This presentation marks one of the project milestones that has been most significant for us.
Click here to learn more and stay updated on the main milestones of the EDF STORE project.