On November 4, the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), one of the world's largest academic conferences on robotics, officially opened. The conference attracted more than 4,000 professionals, representatives of top research teams, and business people from around the world in the fields of robotics, automation systems, and artificial intelligence. In addition to presenting the latest robotics research results, IROS also holds robot competitions to showcase the appeal of robots in a more direct and accessible way. This year's robot challenge focused on the cutting-edge field of machine vision, aiming to endow AI with lifelong learning ability through competition. On the same day, the IROS 2019 conference officially announced the results of the Lifelong SLAM Challenge, and the Segway Robotics team from Ninebot won the championship by a clear margin.
Replacing human beings in heavy and dangerous work has always been an ultimate goal of robotics research. For robots to be practical in real environments, they must respond and adapt to changes in people and surroundings. Machine vision, the robot's "eyes", lets a robot perceive its environment through optical devices and sensors and make corresponding judgments and decisions.
There are two events in this year's IROS Robot Challenge. One of them is the Lifelong SLAM Challenge, a localization-algorithm competition for adapting to scene changes, which tests a robot's ability to continuously localize itself through vision.
The full name of SLAM is Simultaneous Localization and Mapping. It aims to enable robots to autonomously estimate their position and pose while moving, and it is one of the core problems in robotics. Roughly speaking, SLAM means that when a robot enters a completely unfamiliar environment, it must accurately establish the correspondence between time and space and answer questions such as: Where was I just now, and where am I now? What do I see? How does what I see now differ from what I saw before? Was the trajectory I just traveled jittery? Is my current position drifting? What should I do if I get lost, and can I still track myself?

Generally, researchers studying SLAM focus on localization accuracy in static environments, or in scenes containing some obvious dynamic elements (such as moving people and objects), while neglecting the localization failures and mismatches caused by scene changes. For example, in a home, most items are movable or deformable. After the same living room is tidied up the next day, the world seen by the machine has changed dramatically compared with the first day. Can SLAM still relocalize successfully? Such dynamic changes challenge the robustness of (re)localization and the reusability of maps.
Therefore, this competition introduced a positioning success rate metric, focusing on whether a SLAM algorithm can stably recognize its own position when the viewing angle, illumination, and scene layout change, so as to support long-term deployment of robots.
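The article does not spell out how this success rate is scored. A minimal sketch of one plausible form, the fraction of timestamp-aligned pose estimates that fall within fixed translation and heading error tolerances of the ground truth, is shown below; the thresholds and function name are illustrative assumptions, not the official challenge scoring code.

```python
import numpy as np

def positioning_success_rate(est_xy_yaw, gt_xy_yaw,
                             trans_tol=0.3, rot_tol_deg=15.0):
    """Fraction of timestamp-aligned poses whose translation and heading
    errors are both within tolerance.  Poses are arrays of shape (N, 3)
    holding (x, y, yaw) in metres and radians.  Tolerances are assumed
    values for illustration only."""
    est = np.asarray(est_xy_yaw, dtype=float)
    gt = np.asarray(gt_xy_yaw, dtype=float)
    trans_err = np.linalg.norm(est[:, :2] - gt[:, :2], axis=1)
    # wrap heading difference into [-pi, pi] before comparing
    yaw_err = np.abs((est[:, 2] - gt[:, 2] + np.pi) % (2 * np.pi) - np.pi)
    ok = (trans_err <= trans_tol) & (yaw_err <= np.radians(rot_tol_deg))
    return ok.mean()

# Example: a 4-pose trajectory where one estimate has drifted too far.
gt = [(0, 0, 0), (1, 0, 0), (2, 0, 0.1), (3, 0, 0.1)]
est = [(0.05, 0, 0), (1.1, 0.1, 0), (2.0, 0.9, 0.1), (3.02, 0.05, 0.12)]
print(positioning_success_rate(est, gt))  # -> 0.75
```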
To support this event, researchers produced and released a new SLAM dataset, OpenLORIS-Scene. Compared with previous SLAM datasets, OpenLORIS-Scene contains scenes closer to everyday life, richer sensor configurations, and multiple recordings of each scene, including scene changes arising from real life. The OpenLORIS-Scene dataset is expected to become a touchstone for whether a SLAM algorithm can support real-world deployment of robots.
OpenLORIS-Scene was collected by Ninebot's Segway Robotics team using the Segway delivery robot S1. It is understood that, at present, only the Segway delivery robot S1 can carry this many sensors to collect data while driving through crowds. The S1 is equipped with a RealSense D435i camera and a RealSense T265 camera to collect image data, both installed at a fixed height of about 1 m. The S1 also provides wheel-encoder odometry data.
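As a rough illustration of what one synchronized sample from such a sensor suite might look like in code, a minimal Python sketch follows; the field names and shapes are assumptions for illustration, not the actual OpenLORIS-Scene data layout.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SensorSample:
    """One synchronized reading from an S1-like sensor suite.
    Field names and shapes are illustrative assumptions only."""
    stamp: float               # timestamp in seconds
    color: np.ndarray          # D435i RGB image, HxWx3 uint8
    depth: np.ndarray          # D435i aligned depth, HxW uint16 (millimetres)
    fisheye_left: np.ndarray   # T265 left fisheye image, HxW uint8
    fisheye_right: np.ndarray  # T265 right fisheye image, HxW uint8
    accel: np.ndarray          # IMU linear acceleration, shape (3,), m/s^2
    gyro: np.ndarray           # IMU angular velocity, shape (3,), rad/s
    wheel_odom: np.ndarray     # wheel-encoder odometry pose (x, y, yaw)
```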
Since its launch in July, the competition has attracted dozens of teams from around the world, with strong research teams such as Peking University and Shanghai Jiao Tong University successively joining. After months of competition, the Segway Robotics team and Shanghai Jiao Tong University both scored more than 90 points in the finals, while mainstream algorithms used as reference baselines, such as ORB_SLAM2 and VINS-Mono, scored less than 40 points.
According to the participating engineers, on top of the original positioning algorithm framework of the Segway delivery robot S1, the Segway Robotics team integrated deep-learning feature matching and scene relocalization, which greatly improved the robot's ability to continuously localize itself through vision, and its comprehensive evaluation score ranked first.
This championship fully affirms Segway Robotics' years of technical accumulation in SLAM, especially in making SLAM algorithms practical. In the real environments faced by commercial robots, visual perception encounters many challenges: changes in environment and lighting, textureless environments, dynamic environments, and so on. The algorithm framework of Segway Robotics fuses multi-sensor information, including a fisheye camera, an inertial measurement unit, and the chassis encoder, which makes the positioning algorithm more stable. At the same time, by continuously optimizing and merging maps, the range of visual perception is enlarged and the probability of successful relocalization is improved. This competition verified the real-time positioning performance of the Segway Robotics algorithm and its ability to build a consistent map in a large-scale indoor environment.
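The article does not disclose the team's actual fusion algorithm, but the general idea of combining continuous wheel-odometry dead reckoning with occasional absolute corrections from visual relocalization against a map can be sketched as follows; the unicycle motion model, the fixed blend factor, and all names here are illustrative assumptions, not Segway Robotics' framework.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def predict_with_wheel_odometry(pose, v, w, dt):
    """Propagate a planar pose (x, y, yaw) using linear/angular velocity
    from the wheel encoders over a small time step dt (unicycle model)."""
    x, y, yaw = pose
    return np.array([x + v * np.cos(yaw) * dt,
                     y + v * np.sin(yaw) * dt,
                     wrap_angle(yaw + w * dt)])

def correct_with_relocalization(pose, reloc_pose, alpha=0.5):
    """Blend the dead-reckoned pose toward an absolute pose returned by a
    visual relocalization against the map.  A fixed blend factor is a crude
    stand-in for a proper filter or graph optimization."""
    x, y, yaw = pose
    rx, ry, ryaw = reloc_pose
    return np.array([x + alpha * (rx - x),
                     y + alpha * (ry - y),
                     wrap_angle(yaw + alpha * wrap_angle(ryaw - yaw))])

# Dead-reckon for one second at 0.5 m/s, then correct toward a visual fix.
pose = np.array([0.0, 0.0, 0.0])
for _ in range(10):
    pose = predict_with_wheel_odometry(pose, v=0.5, w=0.05, dt=0.1)
pose = correct_with_relocalization(pose, reloc_pose=np.array([0.52, 0.02, 0.04]))
print(pose)
```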
It is understood that Segway Robotics belongs to Ninebot and is committed to robot innovation. Since 2015, the team has been deeply engaged in robotics, focusing on the research, development, production, and promotion of intelligent robots for individual consumers and enterprise markets. It has world-leading artificial intelligence (AI) algorithm R&D capabilities, as well as the ability to develop and mass-produce hardware.
In August 2018, the team released the Segway delivery robot S1, established strategic cooperation with Meituan and Ele.me, and completed a total of more than 5,000 kilometers of field operation. On the basis of the first-generation robot, the team also developed the second-generation indoor delivery robot S2, which has been upgraded in many aspects of product performance. Using an LDS LiDAR, which is similar in cost to the S1's depth camera but has a wider field of view, together with a navigation system fused with visual sensors, the S2 achieves higher positioning accuracy and smaller visual blind spots; it can easily detect human feet or steps and respond appropriately. At the same time, the team is also leading the research and development of the Segway outdoor delivery robot X1.
Segway Robotics has always attached great importance to combining the academic frontier with engineering practice, investing manpower every year in intern projects and in participating in various academic conferences and competitions. At the recently concluded human-object interaction detection competition of the biennial ICCV 2019 (International Conference on Computer Vision), organized by Beihang University, SenseTime, the Chinese Academy of Sciences, and CUHK, Segway Robotics also sent a team of interns trained in-house and won second place in the task of detecting human-object interaction relationships in real-life scenes. It seems that the robot technology of Ninebot's Segway Robotics team is quite advanced; it is unclear why the company remains so low-key in its publicity.