- Object detection and recognition in different sensor domains
- Image preprocessing for distortion removal (white-balance, dehazing, color correction)
- Sensor fusion (stereo vision, acoustic sensors such as sonars, multibeam echosounders, etc.)
- Environment modeling (3D mapping, OctoMap, partially structured environments)
- Simulated environments and continuous system integration (hardware-in-the-loop, simulated and real data synchronization)
- Deep Learning practices in all of the above topics
Reliable sensory perception of dynamic environments is a requirement for long-term autonomy in robotics. In underwater scenarios, however, perception tasks have proved particularly challenging. The most common sensors on Remotely Operated Vehicles (ROVs) and Autonomous Underwater Vehicles (AUVs) are optical cameras and sonars; yet camera images suffer from limited range and degrade rapidly with light behavior and water turbidity, while sonars often lack the required resolution. As onshore and offshore industrial activities continue to grow rapidly, efforts are being made to identify which software and hardware technologies are best suited to different environmental conditions.
These technologies span a wide range of topics, including stereo vision, multibeam mapping, sensor fusion, image correction, and simulation software, among many others. For this reason, this workshop on Underwater Perception aims to create a space where researchers can share their experiences, lessons learned, and best practices in transferring their work to field experiments and trials.
Discussions, an interactive session with the audience, and participation by an industry partner are also planned.
For any inquiries, please contact email@example.com or firstname.lastname@example.org.
The detailed schedule will be published in the coming days.
Workshop cost: 150 €