This page serves as a "book history" of SHERPA, outlining the technical progress achieved during the project.
SHERPA System Benchmark Scenarios
Winter and summer benchmark validation scenarios were defined by ETHZ for the final demonstrations of the capabilities of the SHERPA system. The summer scenario (Figure 1) considers a typical rescue mission aiming to find missing people in an alpine search area including both open fields and wilderness. When the alarm is raised, the rescue coordination center deploys a rescue team to the site. Once on the mission site, the delegation framework initially supports the team in defining the search area, setting the priorities, and assigning tasks to the SHERPA agents. The initial mission plan can then be updated during the mission according to the mission evolution, the data collected about the mission area, and the agents' conditions. A preliminary scan of the search area is performed by the SHERPA hawks, including both RMAX helicopters and FW-UAVs. These agents patrol the area and update the dynamic cognitive map of the search area with information useful to support the rescuers and the delegation framework in planning the next phases of the mission. In the subsequent detailed search, the different platforms of the SHERPA system perform the assigned tasks according to their operating conditions. More precisely, the hawks patrol the search area, map the environment, act as communication relays and deploy wasps to remote or inaccessible areas. The SHERPA wasps execute a low-altitude initial search strategy based on visual clues in order to extend the human rescuers' perception by streaming video or images taken with their on-board cameras. The SHERPA donkey follows the team leader, transporting the SHERPA box, which is the main processing center and communication hub. Additional wasps stored on the donkey can autonomously take off from and land on the rover, with the aid of the robotic arm, based on delegation framework commands. Furthermore, during this phase the wasps can autonomously execute verbal and/or gestural commands issued by the rescuer. The mission ends when the victim is found. If he/she is out of reach of a human rescuer, a wasp reaches him/her and interacts verbally in order to guide him/her to a safe location. If neither humans nor wasps are nearby, an RMAX helicopter can deploy a wasp or deliver a first-aid kit before the human rescuers arrive.
Figure 1- Sketch of the search area in the SHERPA summer mission
A typical winter scenario is an avalanche search-and-rescue mission aiming to find one or more victims buried under an avalanche. In the SHERPA scenario, such a mission can be triggered by an avalanche alert sent by one of the victims, a member of the general public, the SHERPA Ground Control Station (GCS) via a dedicated application, and/or a patrolling FW-UAV hawk. The alert information is then uploaded to the cognitive map and the delegation framework starts the mission. As soon as the alert is received, the SHERPA agents are deployed to the avalanche area. In particular, the fixed-wing hawk flies above the avalanche zone and uploads the collected information about the search area to the cognitive map. A helicopter brings the ARTVA receiver for detecting the victims, the SHERPA box and the human rescuers to the avalanche zone. The RMAX reaches the avalanche after the helicopter and starts scanning the terrain through LIDAR, collecting data to be stored on the dynamic cognitive map. The information provided by the cognitive map is used to trigger and support the delegation framework mission replanning when necessary. In the next search phase, the rescue leader sends both human rescuers and SHERPA platforms to different search areas. The commands to the wasps are issued by means of verbal and gestural commands. In this phase, wasps are deployed manually by the rescuer from his SHERPA box or by the RMAX. A first search, according to the strategy proposed by CAI, is executed by the wasps based on visual clues. Then both human rescuers and hawks, including both the FW-UAV and the RMAX, join the operation by searching for visual clues according to the preliminary results provided by the wasps. The second search phase consists in identifying the ARTVA transmitter location. This search is executed mainly by the wasps, which are employed in a coordinated search according to the aerial leashing strategy or act in a completely autonomous way. In this phase the RMAX and the FW-UAV also act as communication relays between SHERPA agents. Once the ARTVA is localised, the human rescuers start digging and providing first aid to the people involved in the avalanche.
Figure 2- Overview of the agents potentially involved in the SHERPA winter mission
SHERPA Robots Reasoning Engine
SHERPA robots implement their decision making and reasoning capabilities through the Cognitive Robot Abstract Machine (CRAM) framework developed by UNIHB. This framework provides programming constructs along with reasoning mechanisms that can infer control decisions.
More precisely, CRAM is a software toolbox that provides a set of libraries for implementing complex actions (e.g. picking up objects in the environment or searching for injured persons hidden by snow or branches) that require a tight integration of action execution, reasoning, decision making, execution monitoring and failure handling. Its core libraries include a domain-specific programming language, the CRAM plan language, a full-featured Prolog-like reasoning engine, and support for so-called designators, symbolic descriptions of plan parametrisations such as objects, locations and actions. The CRAM plan language supports robot control in unstructured outdoor terrains, working with heterogeneous teams of robots, resource-constrained task achievement, and coordination with the commands of the SHERPA human rescue team leader.
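The following minimal sketch illustrates the designator idea, transliterated to Python for readability; the actual CRAM plan language is Lisp-based, and all names and the world-model query below are hypothetical. The key point is that a plan refers to a location symbolically, and the concrete parametrisation is inferred only at execution time.

```python
# Minimal sketch of the "designator" idea from CRAM, in Python for
# illustration only; the real CRAM plan language is Lisp-based and the
# names and world-model interface below are hypothetical.

class LocationDesignator:
    """Symbolic description of a location, resolved lazily at run time."""
    def __init__(self, **properties):
        self.properties = properties  # e.g. {'near': 'victim', 'terrain': 'flat'}

    def resolve(self, world_model):
        # Ask the (hypothetical) world model for candidate poses that
        # satisfy every symbolic property of the designator.
        candidates = world_model.poses_matching(self.properties)
        if not candidates:
            raise RuntimeError(f"No solution for {self.properties}")
        return candidates[0]

# The plan mentions "a flat landing spot near the victim" symbolically;
# the concrete coordinates are inferred only when the action executes.
land_at = LocationDesignator(near='victim', terrain='flat', reachable_by='wasp')
# pose = land_at.resolve(world_model)   # resolved during plan execution
```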
The architecture of the CRAM system used in SHERPA is sketched in Figure 3. It contains mechanisms for lightweight reasoning and action execution. It allows the robot to infer control decisions rather than requiring the decisions to be preprogrammed.
Figure 3- The CRAM system architecture
The CRAM reasoning engine uses OpenGL and the Bullet physics engine for performing visibility and reachability reasoning in outdoor settings. Furthermore, it is able to interpret vague instructions based on gesture and speech commands.
The CRAM Semantic Robot Description Language (SRDL) includes in its semantic descriptions information about the current status of a component, which can be asserted by the robot's own perception or by external sources. The SRDL is therefore able to manage state changes of robot components, which allows the CRAM reasoning mechanisms to reason about dynamically changing components. This capability is useful for managing components that are exchangeable, like the SHERPA box, or fixed components whose status changes over time, like the battery capacity of a SHERPA robot.
The knowledge about the status of the components has a direct effect on the estimation of the capabilities and actions that can be executed by the robot. This not only allows the robot to reason about its own status; it also allows other robots to reason about involving further SHERPA agents as collaborators in actions that require more than one robot (e.g. recharging a wasp) or in tasks that benefit from parallel execution (e.g. searching a large area).
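A minimal sketch of this status-to-capability inference, with entirely hypothetical component and action names (the real reasoning happens in SRDL/Prolog, not in Python):

```python
# Illustrative sketch (not SRDL/CRAM code): deriving the currently
# available actions of a robot from the asserted status of its components.
# All component names, action names and thresholds are hypothetical.

REQUIREMENTS = {
    'take_off':      {'battery': lambda v: v > 0.2, 'rotors': lambda v: v == 'ok'},
    'recharge_wasp': {'sherpa_box': lambda v: v == 'mounted'},
}

def available_actions(component_status):
    """Return the actions whose component requirements are all satisfied."""
    return [action for action, reqs in REQUIREMENTS.items()
            if all(name in component_status and check(component_status[name])
                   for name, check in reqs.items())]

# A wasp with a drained battery can no longer take off; a donkey that has
# its SHERPA box mounted can offer the recharge capability to others.
print(available_actions({'battery': 0.1, 'rotors': 'ok'}))   # []
print(available_actions({'sherpa_box': 'mounted'}))          # ['recharge_wasp']
```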
Furthermore, in unknown terrains robots can use information from external sources like Geographic Information Systems (GIS) to assist initial motion planning and command interpretation. In particular, the donkey can use the paths marked in these sources for an initial motion plan, and the wasp can use the height data from digital elevation models of the search area for the same purpose. This is enabled in CRAM by an infrastructure that migrates data from such sources into the reasoning system.
Dynamic Cognitive Map
Significant work was also done by KUL on the development of the Dynamic Cognitive Map (DCM), whose structure is sketched in Figure 4. The DCM is a virtual repository where all the information about the mission area available at mission start is stored and where the information collected by the SHERPA agents during the mission is aggregated. The DCM architecture includes the categories of information shown in Figure 4, although this set can be extended in a quite flexible way.
Figure 4- A sketch of the DCM structure
The SHERPA agents can add data to their local DCM, which then synchronizes these changes with the DCMs running on the other agents. They can also query the DCM for information about their operating zone, other destinations, and other agents, in order to support reasoning and autonomous mission planning. Furthermore, the Human Machine Interface queries its DCM for the information requested by the human operator.
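A minimal sketch of this add/query/synchronize pattern is given below. It models the DCM as a timestamped key-value store with last-writer-wins merging; this is a hypothetical illustration, not KUL's implementation.

```python
# Minimal sketch of the DCM add/query/synchronize pattern: a hypothetical
# timestamped key-value store with last-writer-wins merging.
import time

class LocalDCM:
    def __init__(self):
        self.entries = {}                      # key -> (timestamp, value)

    def add(self, key, value):
        self.entries[key] = (time.time(), value)

    def query(self, prefix):
        """Return all entries whose key starts with the given prefix."""
        return {k: v for k, (_, v) in self.entries.items() if k.startswith(prefix)}

    def sync(self, other):
        """Merge a peer DCM into this one, keeping the newest entry per key."""
        for key, (ts, value) in other.entries.items():
            if key not in self.entries or self.entries[key][0] < ts:
                self.entries[key] = (ts, value)

# A wasp adds a detection to its local DCM; the donkey's DCM picks it up
# at the next synchronization, and the HMI can query it for the operator.
wasp_dcm, donkey_dcm = LocalDCM(), LocalDCM()
wasp_dcm.add('victim/0/position', (46.02, 7.75))
donkey_dcm.sync(wasp_dcm)
print(donkey_dcm.query('victim/'))   # {'victim/0/position': (46.02, 7.75)}
```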
SHERPA Donkey
The hardware integration of the rover of the SHERPA donkey (Figure 5) has been accomplished by BLUE. UNIBO is currently finalising the software, developed in the ROS environment, that moves the rover based on delegation framework and human rescuer commands. In particular, the following software functionalities were developed.
Figure 5- A picture of the SHERPA donkey rover with the robotic arm installed on it
The SLAM of the rover was enhanced with respect to the initial solution developed by BLUE. In particular, the laser scanner installed on the rover is now able to roll in order to create a 3D point cloud of the environment. This point cloud contributes to the rover's perception of its environment. This functionality is of paramount importance for significantly improving tasks such as 3D navigation, mapping, and hazard avoidance.
A ROS navigation stack, visualised through the graphical user interface Rviz, was set up on the rover in order to integrate the SLAM functionality into the rover navigation. In particular, a localisation algorithm calculates the position of the rover with respect to the map and sends the corresponding data to Rviz, where the position and 3D pose of the rover are visualised. Based on the map and the position of the rover in it, the user can send waypoint commands to the rover from the Rviz application or from a ROS node.
From the navigation point of view, a ROS node was produced to read the current GPS position and receive new waypoints expressed as target latitude and longitude. These are read from a ROS topic on which the delegation framework publishes the waypoints. The waypoints are used to derive a target trajectory in the local Cartesian reference frame, starting from the current position expressed in latitude and longitude. This trajectory is then executed by the routine described in the previous paragraph, taking advantage of its autonomous obstacle-avoidance capability.
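A minimal sketch of such a node is given below. It assumes rospy, sensor_msgs/NavSatFix for both the GPS fix and the incoming waypoints, and a simple flat-earth (equirectangular) conversion; the topic names are illustrative, not the actual UNIBO interface.

```python
#!/usr/bin/env python
# Illustrative sketch of the waypoint-conversion node; topic names and the
# equirectangular projection are assumptions, not the UNIBO flight code.
import math
import rospy
from sensor_msgs.msg import NavSatFix
from geometry_msgs.msg import PoseStamped

EARTH_RADIUS = 6371000.0   # metres

class WaypointConverter:
    def __init__(self):
        self.origin = None   # first GPS fix anchors the local frame
        self.pub = rospy.Publisher('local_waypoint', PoseStamped, queue_size=1)
        rospy.Subscriber('gps/fix', NavSatFix, self.on_fix)
        rospy.Subscriber('delegation/waypoint', NavSatFix, self.on_waypoint)

    def on_fix(self, fix):
        if self.origin is None:
            self.origin = (math.radians(fix.latitude), math.radians(fix.longitude))

    def on_waypoint(self, wp):
        if self.origin is None:
            return   # no GPS fix received yet
        lat, lon = math.radians(wp.latitude), math.radians(wp.longitude)
        lat0, lon0 = self.origin
        msg = PoseStamped()
        msg.header.frame_id = 'local'
        # Flat-earth approximation, adequate over a search area of a few km.
        msg.pose.position.x = EARTH_RADIUS * (lon - lon0) * math.cos(lat0)
        msg.pose.position.y = EARTH_RADIUS * (lat - lat0)
        self.pub.publish(msg)

if __name__ == '__main__':
    rospy.init_node('waypoint_converter')
    WaypointConverter()
    rospy.spin()
```

A full implementation would also handle altitude and re-anchor the local frame over large displacements.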
The donkey robotic arm was completely assembled by UT, except for the gripping system, which is still under design. The overall weight of the arm is 15 kg, it is able to carry a payload of 3 kg, and its maximum extension is 1.5 m. It is characterised by 7 DoFs: 3 DoFs for the shoulder, 1 DoF for the elbow and 3 DoFs for the wrist. The actuators are standard brushless and brushed DC motors, and the motor drives and microcontrollers are the Elmo ExtrIQ and Hornet drives. The CPU is based on an ARM processor running the Linux operating system. The sensor set includes Hall-effect sensors for the brushless motors, absolute encoders for the motor axes, force sensors and a vision system for high-level control. The communication is based on a CAN bus internally and on Ethernet and WiFi externally. The implementation of the software for the robotic arm control is in progress. It communicates with the rest of the SHERPA robots through the ROS-MicroBLX bridge, developed by KUL, and with the TST framework and the world model, developed by LKU.
SHERPA Wasp
The high-level controller, implemented in the ROS environment for the SHERPA wasps, was further refined by UNIBO and successfully tested in different flight tests in the alpine environment. These tests (Figure 6) demonstrated excellent performance in terms of autonomous navigation and correct operation of the wasps' ARTVA sensor acquisition.
Figure 6- A picture of the SHERPA wasp during a flight test in the Alps
It was also possible to validate the integration of the multimodal HMI (voice, gesture and tablet) developed by CREATE into the control of the SHERPA wasps. The flying platforms in fact demonstrated the ability to autonomously execute high-level flight commands including take-off, landing and waypoint navigation. Furthermore, the wasps were able to autonomously define and execute a monitoring path over a search area and stream the captured videos based on voice and gesture commands provided by the busy genius. The coordinated flight capabilities of a swarm of two wasps were also successfully tested with multimodal commands.
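As a purely illustrative sketch of this command flow, the snippet below maps recognised multimodal commands onto high-level flight primitives; the vocabulary, function names and dispatch interface are hypothetical and do not reflect CREATE's actual HMI implementation.

```python
# Hypothetical sketch of dispatching recognised voice/gesture commands to
# a wasp's high-level flight primitives; not CREATE's HMI implementation.

FLIGHT_PRIMITIVES = {}

def primitive(name):
    """Register a function as an executable high-level flight primitive."""
    def register(fn):
        FLIGHT_PRIMITIVES[name] = fn
        return fn
    return register

@primitive('take_off')
def take_off(**kwargs):
    print('wasp: taking off')

@primitive('monitor_area')
def monitor_area(area=None, **kwargs):
    print(f'wasp: defining and flying a monitoring path over {area}')

def dispatch(recognised):
    """Map one recognised multimodal command to a flight primitive."""
    action = FLIGHT_PRIMITIVES.get(recognised['action'])
    if action is None:
        raise ValueError(f"unknown command: {recognised['action']}")
    action(**recognised.get('args', {}))

# E.g. the fused output of the utterance "search there" plus a pointing gesture:
dispatch({'action': 'monitor_area', 'args': {'area': 'zone pointed by rescuer'}})
```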
In the last months, the SHERPA high-level architecture was integrated with the Delegation Framework in order to achieve full compliance with the SHERPA system communication network. In particular, a list of executors was implemented to interact with the wasp autopilot. The executors were defined and implemented in a joint effort between UNIBO and LKU. The delegation and the execution of TSTs were successfully tested in simulation and by means of real flight experiments during the second integration week.
SHERPA Fixed Wing Hawk
ETHZ developed an algorithm capable of detecting and localizing potential human victims using a combination of visual and infrared cameras on board the fixed-wing hawks. The algorithm was demonstrated during the final field demonstrations of the ICARUS FP7 project in Marche-en-Famenne, Belgium, in September, providing rescuers with potential GPS locations of missing people in real time. A significant stride towards the flight-endurance goals was made in July, when the AtlantikSolar UAV set a new world record for aircraft below 50 kg by flying for 81.5 hours straight on solar power alone. Auto-landing control has been successfully tested on the fixed-wing platforms using a small, lightweight LiDAR sensor. Efforts continue on the integration of the senseSoar solar UAV platform (Figure 7). Further work has been directed towards visual-inertial state estimation on the fixed-wing platforms for mapping the environment, with the end goal of providing sparse point clouds in real time.
Figure 7- The senseSoar fixed-wing drone developed by ETHZ
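As a minimal illustration of the thermal side of the victim-detection approach described above, the sketch below thresholds warm pixels and keeps person-sized blobs. This is a toy (OpenCV ≥ 4 API assumed); the actual ETHZ algorithm fuses visual and infrared imagery and geolocates detections using the aircraft's pose.

```python
# Toy hot-spot detector on an 8-bit thermal image: threshold warm pixels
# and keep person-sized blobs. Illustrative only; the ETHZ detector fuses
# visual and infrared data and is considerably more sophisticated.
import cv2
import numpy as np

def detect_hot_spots(thermal, temp_threshold=200, min_area=30, max_area=2000):
    """Return bounding boxes of warm, person-sized blobs in an 8-bit image."""
    _, mask = cv2.threshold(thermal, temp_threshold, 255, cv2.THRESH_BINARY)
    # OpenCV >= 4 return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        if min_area <= cv2.contourArea(c) <= max_area:
            boxes.append(cv2.boundingRect(c))   # (x, y, w, h)
    return boxes

# Synthetic frame: a cold background with one warm 5x10-pixel "person".
frame = np.full((240, 320), 80, dtype=np.uint8)
frame[100:110, 150:155] = 230
print(detect_hot_spots(frame))   # [(150, 100, 5, 10)]
```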
Requirements, regulations and potential fields
A preliminary version of the final summer and winter validation benchmark scenarios was elaborated in accordance with deliverables D2.1 and D2.2. ETHZ, UNIBO and LKU produced the necessary documentation for obtaining their permits-to-fly.
Mechanical design and construction, HRI technologies and technology integration
Figure 1- The SHERPA rover
Figure 2- Rendered SHERPA arm with wrist and gripper placeholders.
Figure 3- Shoulder of the SHERPA arm.
Figure 4- The gripper of the SHERPA arm.
Figure 5- The RW-UAV
Figure 6- The winter RW-UAV ARTVA machine
Figure 7- Sherpa box - rover assembly
Figure 8- Human-machine technology
Environment and Situation Awareness
Figure 9- Thermal image (right) collected during a flight test in Rothenthurm, Switzerland: each image patch contains a human.
Figure 10- LIDAR processing pipeline and flight-test area in Sweden
Figure 11- Automatic classification of the three LIDAR strips (brown = ground).
Figure 12- Multimodal Interaction in the Twente Flight Arena
References
[1] G. Bevacqua, J. Cacace, A. Finzi, V. Lippiello, "Mixed-Initiative Planning and Execution for Multiple Drones in Search and Rescue Missions", to appear in Proc. of ICAPS 2015.
[2] J. Cacace, A. Finzi, V. Lippiello, G. Loianno, D. Sanzone: "Aerial service vehicles for industrial inspection: task decomposition and plan execution". Appl. Intell. 42(1): 49-62 (2015)
[3] G. Bevacqua, J. Cacace, A. Finzi, V. Lippiello. "A Mixed-Initiative System for Human-Robot Interaction with Multiple UAVs in Search and Rescue Missions". AIRO-14 workshop at AI*IA Symposium on Artificial Intelligence.
[4] J. Cacace, A. Finzi, V. Lippiello, F. Cutugno, A. Origlia. "Multimodal Interaction with Multiple UAVs in Search and Rescue Missions". AIRO-14 workshop at AI*IA Symposium on Artificial Intelligence.
[5] G. Conte, P. Rudol, and P. Doherty. "Evaluation of a light-weight LiDAR and a photogrammetric system for unmanned airborne mapping applications". Photogrammetrie – Fernerkundung – Geoinformation (PFG), Volume 2014, Number 4, August 2014, pp. 287-298.
[6] T. Cieslewski, S. Lynen, M. Dymczyk, S. Magnenat, and R. Siegwart. "Map API – Scalable Decentralized Map Building for Robots". In Robotics and Automation (ICRA), 2015 IEEE International Conference on, to appear, 2015.
[7] M. Dymczyk, S. Lynen, T. Cieslewski, M. Bosse, R. Siegwart and P. Furgale. "The Gist of Maps – Summarizing Experience for Lifelong Localization". In Robotics and Automation (ICRA), 2015 IEEE International Conference on, to appear, 2015.
[8] S. Leutenegger, "Unmanned Solar Airplanes", PhD thesis, Diss., Eidgenössische Technische Hochschule ETH Zürich, Nr. 22113, 2014.
[9] A. Origlia, V. Galatà, F. Cutugno, "Introducing context in syllable based emotion tracking", IEEE International Conference on Cognitive Infocommunications, pp. 337-342, 2014.
[10] A. Origlia, F. Cutugno. "A simplified version of the OpS algorithm for pitch stylization". In Proc. of Speech Prosody, 2014
Cognitive-enabled reasoning and decision making
Figure 13- Simulation-based Reasoning
References
[1] M. Beetz, F. Balint-Benczedi, N. Blodow, D. Nyga, T. Wiedemeyer and Z.-C. Marton, "RoboSherlock: Unstructured Information Processing for Robot Perception", in IEEE International Conference on Robotics and Automation (ICRA), Seattle, Washington, USA, 2015, Accepted for publication.
[2] M. Beetz, M. Tenorth and J. Winkler, “Open-EASE — A Knowledge Processing Service for Robots and Robotics/AI Researchers”, In IEEE International Conference on Robotics and Automation (ICRA), Seattle, Washington, USA, 2015. Accepted for publication.
[3] F. Yazdani, B. Brieber, M. Beetz, "Cognition-enabled Robot Control for Mixed Human-Robot Rescue Teams", In Proceedings of the 13th International Conference on Intelligent Autonomous Systems (IAS-13), 2014.
[4] Patrick Doherty and Andrzej Szalas. 2015. "Stability, Supportedness, Minimality and Kleene Answer Set Programs". In T. Eiter, H. Strass, M. Truszczynski, S. Woltran, editors, Advances in Knowledge Representation, Logic Programming, and Abstract Argumentation: Essays Dedicated to Gerhard Brewka on the Occasion of His 60th Birthday, pages 125–140. In series: Lecture Notes in Computer Science #9060. Springer. DOI: 10.1007/978-3-319-14726-0_9.
Mixed-Initiative Cooperative Systems
Figure 14- Schematic overview of the role TSTs play in the RMAX robotic architecture.
References
[1] G. Bevacqua, J. Cacace, A. Finzi, V. Lippiello, "Mixed-Initiative Planning and Execution for Multiple Drones in Search and Rescue Missions", to appear in Proc. of ICAPS 2015.
[2] Patrick Doherty, Jonas Kvarnström, Mariusz Wzorek, Piotr Rudol, Fredrik Heintz and Gianpaolo Conte. Aug 2014. HDRC3 - A Distributed Hybrid Deliberative/Reactive Architecture for Unmanned Aircraft Systems. In Kimon P. Valavanis, George J. Vachtsevanos, editors, Handbook of Unmanned Aerial Vehicles, pages 849–952. Springer Science+Business Media B.V.
[3] Oleg Burdakov, Patrick Doherty and Jonas Kvarnström. 2014. Local Search for Hop-constrained Directed Steiner Tree Problem with Application to UAV-based Multi-target Surveillance. In Butenko, S., Pasiliao, E.L., Shylo, V., editors, Examining Robustness and Vulnerability of Networked Systems, pages 26–50. In series: NATO Science for Peace and Security Series - D: Information and Communication Security #Volume 37. IOS Press.
[4] Mikael Nilsson, Jonas Kvarnström and Patrick Doherty. 2014. Incremental Dynamic Controllability in Cubic Worst-Case Time. In Proceedings of the 21st International Symposium on Temporal Representation and Reasoning (TIME), pages 17–26.
Low-level and reactive control
References
[2] M. Furci, R. Naldi, A. Paoli, L. Marconi. 2014. "A robust control strategy for mobile robots navigation in dynamic environments". In: Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on.
[3] R. Naldi, M. Furci, R.G. Sanfelice and L. Marconi. 2014. “Robust Global Trajectory Tracking for Underactuated VTOL Aerial Vehicles using Inner-Outer Loop Control Paradigms” Submitted to journal.
[4] Olov Andersson, Fredrik Heintz and Patrick Doherty. 2015. “Model-Based Reinforcement Learning in Continuous Environments Using Real-Time Constrained Optimization” In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence.
[5] Rudin, K., Mosimann, L., Ducard, G., Siegwart, R., “Robust Actuator Fault-tolerant Control using DK-iteration: Theory and Application to UAS” In: 9th IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes (Safeprocess’15), Sep. 2-4, 2015. Paris, France (under review).
[6] J. Cacace, A. Finzi and V. Lippiello. 2014. “A Mixed-Initiative Control System for an Aerial Service Vehicle supported by force feedback” In: Intelligent Robots and Systems (IROS), 2014 IEEE/RSJ International Conference on.
[7] L. Aldrovandi, M. Hayajneh, M. Melega, M. Furci, R. Naldi, and L. Marconi. “A smartphone based quadrotor: Attitude and position estimation” In: ICUAS’15 – The 2015 International Conference on Unmanned Aircraft Systems, June 2015. Paper under submission.
[8] M. Melega, M. Hayajneh, R. Naldi, M. Furci, and L. Marconi. "An aerial robot for alpine search and rescue: the SHERPA platform". In DroNet 2015 – Workshop on Micro Aerial Vehicle Networks, Systems, and Applications for Civilian Use, May 2015. Paper under submission.
[9] A. Bircher, K. Alexis, M. Burri, P. Oettershagen, S. Omari, T. Mantel, and R. Siegwart, “Structural Inspection Path Planning via Iterative Viewpoint Resampling with Application to Aerial Robotics”. In: IEEE International Conference on Robotics and Automation (ICRA), May, 2015 (accepted).
Architecture and Simulator
Figure 15- A graphical sketch of the world model architecture.
Experimental Validation
o Finding single ARTVA (on the snow): the ARTVA beacon is placed (at a known position) about 30 m from the takeoff position. By means of a slow-speed approach, the ARTVA signal is correctly identified when the RW-UAV is within about 5 m of the beacon and, once the initial detection is accomplished, the signal is correctly tracked up to 12 m from the ARTVA.
Figure 16- Finding the single ARTVA
o Finding multiple ARTVA (on the snow): two beacons are placed 3 m from each other and both about 30 m from the takeoff position. As in the previous case, by means of a slow-speed approach, the two signals are correctly identified as separate items, with tracking performance comparable to that of the single-ARTVA scenario.
o Finding single ARTVA (1 m under the snow): same scenario as the "finding single ARTVA" test, but with the beacon 1 m under the snow. The detection performance slightly decreases but still guarantees the correct beacon identification. In future tests the ARTVA beacon will be placed deeper under the snow.
Figure 17- ARTVA concealing
o Finding ARTVA by predefined path search (ARTVA 20 cm under the snow): the ARTVA beacon has been placed under 20 cm of snow (at an unknown position) within a predefined search zone of 20 x 40 m. By following a predefined search path (parallel and alternating straight lines; a sketch of such a path generator is given after this list), the first valid signal is received 90 s after takeoff (flown ground track of about 120 m). After a further 2'30'' the beacon position is identified with 1 m accuracy.
Figure 18- ARTVA found 1 m from its estimated position
o RW-UAV – rescue dog interaction: this experiment tests the impact of the presence of an RW-UAV on the search performance of the rescue dog. After the natural transient during which the dog "understands and accepts" the presence of the RW-UAV, the rescue tasks are accomplished by following the same procedures as in the "RW-UAV free" scenario, thus leading to the same final search performance.
Figure 19- RW-UAV - rescue dog interaction
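As referenced in the predefined-path test above, the following sketch generates a parallel, alternating ("lawnmower") coverage path over a rectangular zone. The zone size and track spacing are parameters; the code is illustrative rather than the actual flight software.

```python
# Illustrative generator for a parallel, alternating ("lawnmower") search
# path over a rectangular zone, as used in the predefined-path ARTVA test.
# Zone size and track spacing are parameters; this is not the flight code.

def lawnmower_path(width, length, spacing):
    """Return waypoints (x, y) covering a width x length zone with parallel
    tracks 'spacing' metres apart, alternating direction on each track."""
    waypoints = []
    x, forward = 0.0, True
    while x <= width:
        ys = (0.0, length) if forward else (length, 0.0)
        waypoints += [(x, ys[0]), (x, ys[1])]
        x += spacing
        forward = not forward
    return waypoints

# 20 x 40 m zone as in the experiment, with 5 m between tracks.
for wp in lawnmower_path(20.0, 40.0, 5.0):
    print(wp)
```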
Trained Wasps (quadrotors, UNIBO & ATECH): UNIBO and ATECH were mainly involved in the development of the small-scale quadrotors, identified in this project as trained wasps. Their main task is to support the rescue and surveillance activity by enlarging the patrolled area with respect to the area potentially "covered" by a single rescuer, both in terms of visual information and monitoring of emergency signals. In order to develop this type of platform, the following three types of prototypes were implemented and tested:
The mini wasp was derived from the CrazyFlie nano quadcopter by Bitcraze. This is a small and lightweight unit (around 19 g and about 90 mm motor to motor) that can be remotely controlled by a computer through its on-board low-energy (1 mW) radio with up to 80 m range. It moreover includes a 6-DoF sensor set (3-axis high-performance MEMS gyro with 3-axis accelerometer). The control is based on a powerful 32-bit MCU, on which the algorithms developed for the other platforms were compiled for the flight tests. More precisely, the wind rejection was based on an algorithm developed by UNIBO and described in [1]. The acrobatic flight capabilities were based on the Globally Asymptotically Stable (GAS) algorithm described in [2]. This algorithm was implemented on the mini wasps in order to develop the hand-deployment capabilities of the bigger-scale wasps, and the mini wasps were actually tested with hand deployment. Formation flight was finally enabled by the consensus algorithm proposed in [3]; flight tests were also performed in this case, considering a formation of three mini wasps.
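As an illustration of the formation-flight idea, the sketch below implements a textbook first-order consensus rule with formation offsets; it is not necessarily the exact control law proposed in [3].

```python
# Textbook first-order consensus with formation offsets, illustrating how
# three mini wasps can converge to a triangular formation; not necessarily
# the exact control law of [3].
import numpy as np

def consensus_step(positions, offsets, gain=0.2):
    """One discrete consensus update over a fully connected formation:
    each agent reduces its deviation from its formation slot towards the
    average deviation of the group."""
    errors = positions - offsets                      # deviations from slots
    mean_error = errors.mean(axis=0)
    return positions + gain * (mean_error - errors)   # drive deviations together

# Desired triangle (formation offsets) and random initial positions.
offsets = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.7]])
positions = np.random.uniform(-5, 5, size=(3, 2))
for _ in range(100):
    positions = consensus_step(positions, offsets)
print(positions - offsets)   # all rows converge to a common translation
```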
The winter wasp is a bigger but very light quadrotor able to transport a 300 g payload with an endurance of 20-25 minutes. The propulsion power generated by the rotors is controlled by an Electronic Speed Control (ESC). The low-level control of the quadrotor is based on the Pixhawk autopilot with an internal IMU (3-axis digital-output gyroscope, 3-axis accelerometer and 3-axis magnetometer). A GPS receiver, an external magnetometer, a laser telemetry sensor and an RC receiver also feed the controller. The ARTVA transceiver is an Ortovox M2 connected to a Hardkernel Odroid-U3 board providing higher-level position commands to the low-level autopilot. The Odroid board also allows the communication with a Personal Computer (PC) acting as ground control station through a WiFi connection. Some preliminary flight tests of the winter wasp were successfully accomplished in indoor and outdoor environments.
Summer Wasp 3D CAD Drawing
The summer wasp was finally designed as a lightweight, high-stiffness quadrotor capable of hovering with a payload of more than 2 kg for about 25-30 minutes, which is a strong requirement for its involvement in SAR missions. The propulsion power generated by the rotors is controlled by an Electronic Speed Control (ESC) and produces a maximum thrust of 2 kg per rotor. The low-level control of the quadrotor is based on the same Pixhawk autopilot as the winter wasp. This board also takes inputs from the GPS receiver, the external magnetometer and the optical-flow sensor, as well as the RC receiver. At a higher level, a Skybotix VI-Sensor stereo camera and a gimballed laser scanner are used to refine the position measurement, estimate the pose of the vehicle and produce a 3D map of the surrounding environment. The laser scanner is a Hokuyo UTM-30LX-EW mounted on a 2-axis gimbal system designed for quick 3D scanning of outdoor environments. The system is designed to compensate for rotations around the pitch axis in the range ±35° and for complete rotations around the roll axis. These higher-level sensors communicate their acquisitions to an Intel NUC D54250WYK mini-PC, which also handles the communication with the PC acting as ground control station through a WiFi connection. This computer moreover performs the reactive supervisory control.
References
Fixed-Wing Patrolling Hawk (fixed-wing UAV, ETHZ): ETHZ worked on different aspects related to the implementation of the fixed-wing patrolling hawk. In particular, the following topics were developed:
The low-level control development dealt with the implementation of low-level guidance and control algorithms robust to relatively high winds. After some modifications to the controller, the aircraft's resistance to strong thermal updrafts and downdrafts was demonstrated during a 12-hour flight test. Ways of integrating online wind estimates as inputs to a new spatially optimal guidance logic were also investigated, for robustly tracking trajectories in the presence of wind-to-airspeed ratios approaching unity.
A fast and efficient inspection path-planning algorithm for mapping and surveillance missions was developed. Given a set of waypoints, the method finds a low-cost mission trajectory that considers the dynamics of the fixed-wing aircraft and its sensor limitations while providing full coverage of the search area. The method has potential for efficient mapping and surveying during search-and-rescue missions. The approach has been successfully tested in field demonstrations organised within the framework of the ICARUS FP7 project in Barcelona, Spain, and Marche-en-Famenne, Belgium.
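For intuition only, the sketch below orders a set of inspection waypoints with a plain greedy nearest-neighbour tour. The actual ETHZ planner (Bircher et al., "Structural Inspection Path Planning via Iterative Viewpoint Resampling", ICRA 2015) additionally resamples viewpoints and accounts for the fixed-wing dynamics and sensor limitations, which this toy omits.

```python
# Greedy nearest-neighbour ordering of inspection waypoints, for intuition
# only: the published planner resamples viewpoints iteratively and models
# the fixed-wing dynamics and sensor footprint, which this toy ignores.
import math

def greedy_tour(waypoints, start=(0.0, 0.0)):
    """Visit all waypoints, always flying to the nearest unvisited one."""
    remaining, tour, current = list(waypoints), [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        tour.append(nxt)
        current = nxt
    return tour

print(greedy_tour([(100, 0), (0, 50), (80, 60), (10, 10)]))
# [(10, 10), (0, 50), (80, 60), (100, 0)]
```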
In the weeks leading up to the first SHERPA integration week, the initial integration of the Pixhawk PX4 autopilot system with the SHERPA delegation framework was performed. Platform-specific TST nodes were created and tested on a Multiplex EasyGlider test platform, completing waypoint-following tasks similar to those potentially encountered during a search-and-rescue mission. There are plans to provide further delegation functionalities. A bilateral test between ETH Zürich and Linköping University is also scheduled, in which a sample search-and-mapping mission will be flown simultaneously with the AtlantikSolar UAV in Zürich and the RMAX helicopter in Sweden.
The AtlantikSolar platform
Leichtwirk AG delivered the new senseSoar platform, with solar cells installed. The new airframe is roughly 200 g lighter than the prototype, which increases the payload capacity. Efforts are underway to integrate the flight systems and avionics into the new vehicle. Part of the mechanical design has also been reworked in order to obtain a more modular system capable of carrying the same sensor payload as the AtlantikSolar. In particular, these modifications will allow the senseSoar to carry the sensor pod, allowing for efficient sensor-payload development irrespective of the platform, and quick and easy flight testing.
Extensive field tests were conducted with the AtlantikSolar platforms to prove their flight worthiness (Figure 3). The series aircraft, AtlantikSolar AS-S1, performed a continuous 12-hour flight without solar power. Despite rough winds of up to 44 km/h and strong thermal updrafts and downdrafts, the battery remained above 20% charge. The maximum endurance is estimated to be approximately 15 hours. A further test in turbulent conditions was performed during the first SHERPA integration week at the Nijverdal RC club just outside Enschede, NL. The AtlantikSolar proved to be remarkably robust to high winds, which at some points exceeded its airspeed.
Intelligent Donkey (ground rover): since March 2014, BlueBotics has worked mainly on the design of the ground rover. The goal was to design a research platform suitable for demonstrating the various benchmarking scenarios. The chassis carries the SHERPA box and the robotic arm, and has good off-road capabilities thanks to its passively articulated kinematics. The four tracked bogies can rotate freely around their pivot axes, and the transverse bogie ensures good lateral compliance for optimal terrain adaptation. At the time of writing, the design (Figure 3) is finished and the production of 3 units is about to start.
3D CAD drawing of the current layout of the ground rover
Flight tests: some experimental flight tests were performed in Valle d'Aosta in collaboration with CAI during a training session for rescuers. The experiments were carried out at an altitude of 2000 m.
One RW-UAV prototype, equipped with the ARTVA receiver, was piloted in semi-automatic mode to track the signal coming from a buried ARTVA transmitter simulating a person buried under the snow.
The drone, guided by the commands of one CAI rescuer and by the ARTVA signal, successfully found the buried ARTVA in a short time.
Some problems of electromagnetic interference between the RW-UAV avionics and the ARTVA receiver were addressed, as a first step, by distancing the receiver from the drone by means of a hanging structure.
A video of the operation can be found here: http://www.youtube.com/watch?v=0ISRp0t7ofk
Mechanical design: in the first year of the project, the mechanical design of the agents was carried out. In particular, the focus was on designing the ground rover, the RW-UAV, the arm and the SHERPA box, and on the adaptations of the FW-UAV.
Ground Robot: the ground rover will be used only in the summer scenario; its role consists in carrying the SHERPA box and some RW-UAVs, and acting as communication relay and main computation station.
RW-UAV: the design of these agents was guided by the need for a simple but effective drone with some key features. In particular, the RW-UAV should be graspable by the robotic arm, should have protected propellers for the safety of the rescuers, and should have enough space and payload to carry the avionics and sensors.
Arm: the design of the robotic arm was carried out following the specifications for the interaction with the RW-UAVs. In particular, the arm will act as a docking station for the quadrotors.
FW-UAV: some adaptations were planned for the FW-UAV, in particular to carry all the necessary sensors (thermal camera, stereo vision camera) and improved avionics.
Sensor fusion and data collection: some algorithms for sensor fusion and data collection were developed and tested.
Visual odometry: a novel algorithm was developed. Using only information from a stereo camera and a high-precision IMU (no GPS), it was possible to reconstruct a trajectory performed outside the ETH main building with very good performance. The comparison with a state-of-the-art algorithm and with the ground truth (GPS) can be seen in the figure below.
Object detection: a novel algorithm was developed to detect people in infrared images. Some tests were performed with the use of a thermal camera.
Airborne terrain mapping: during the first year of SHERPA, a set-up for terrain mapping was developed. In particular, an airborne LiDAR system and a photogrammetric system were used to perform tests with the RMAX helicopter.
Human interpretation and interaction: the main achievements in this field:
Cognitive-enabled reasoning and decision making: the main achievements in this field:
Low-level control: several algorithms were developed for the low-level control, in particular for the RW-UAVs. One of the focuses was the development of a control law suitable for robust hand and helicopter deployment.
Some robust control strategies were developed to perform emergency maneuvers in case of propeller/motor faults. The aim was to ensure the safety of human rescuers in case of faults.
Some parameter-estimation algorithms have been proposed to estimate unknown parameters and effects such as unknown payloads, disturbances, wind and aerodynamic effects.
Finally, a new image-based visual servoing controller for the coordinated landing of a VTOL UAV on a landing platform actuated by a mobile manipulator has been proposed.
Robotic arm modeling: a complete kinematic and dynamic model of the SHERPA robotic arm has been realised in order to specify the actuator requirements and to serve as a basis for developing the low- and high-level control strategies.
Kick-off meeting.