Vision-Based Autonomous Navigation System Development for Agri-Robots
Vision-based navigation systems for arable fields are an underexplored area of agricultural robot navigation. Vision systems deployed in arable fields face challenges such as fluctuating weed density, varying illumination, crop growth stages, and crop row irregularities. Existing solutions are often crop-specific and address individual conditions, such as illumination or weed density, in isolation. Moreover, the scarcity of comprehensive datasets hinders the development of generalised machine learning systems for navigating these fields. This thesis proposes deep learning-based perception algorithms using affordable vision sensors for effective vision-based navigation in arable fields.

This thesis addresses the challenge of developing a vision-based navigation system for agricultural mobile robots in arable crop row fields. Initially, a comprehensive dataset capturing the intricacies of multiple crop seasons, various crop types, and a range of field variations was compiled. Next, robust in-field perception systems were developed that accurately detect crop rows under diverse conditions such as different growth stages, weed densities, and varying illumination. Further, the integration of crop row following with vision-based crop row switching was investigated for efficient field-scale navigation.

The experiments showed that the proposed crop row detection pipeline successfully detects crop rows under varying field conditions, with average angular and displacement errors of 1.3° and 9.35 pixels, respectively. The deep learning model was able to make zero-shot predictions on multiple crops. A 4.5 km long-distance navigation experiment revealed that the proposed navigation scheme can accurately follow crop rows, with average heading and cross-track errors of 1.24° and 3.32 cm, respectively. The field-scale navigation experiments achieved 92.5% field coverage.
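For readers unfamiliar with the detection metrics quoted above, the sketch below illustrates one plausible way to compute angular and displacement errors between a predicted and a ground-truth crop row line in image space. The two-endpoint line parameterisation, the reference image row `ref_y`, and the function names are illustrative assumptions, not the thesis's actual evaluation code.

```python
import numpy as np

def row_errors(pred, gt, ref_y=480):
    """Angular and displacement error between a predicted and a
    ground-truth crop row, each given as two (x, y) pixel endpoints.

    Angular error: absolute difference of line orientations, in degrees,
    measured from the vertical image axis (crop rows are near-vertical).
    Displacement error: horizontal offset, in pixels, between the two
    lines where they cross the reference image row ref_y (assumed).
    """
    def angle_and_x_at(line, y):
        (x1, y1), (x2, y2) = line
        # Orientation relative to the image y-axis, in degrees
        theta = np.degrees(np.arctan2(x2 - x1, y2 - y1))
        # x-coordinate where the line crosses image row y (assumes y1 != y2)
        t = (y - y1) / (y2 - y1)
        return theta, x1 + t * (x2 - x1)

    a_p, x_p = angle_and_x_at(pred, ref_y)
    a_g, x_g = angle_and_x_at(gt, ref_y)
    return abs(a_p - a_g), abs(x_p - x_g)
```

Averaging these two quantities over all annotated rows in a test set would yield summary figures comparable in form to the 1.3° and 9.35-pixel results reported above.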
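Similarly, the heading and cross-track errors reported for the long-distance navigation experiment can be understood as the angular and lateral deviation of the robot pose from the reference crop row. The sketch below is a minimal illustration under the assumption of a straight reference row given by a point and a unit direction; it is not the thesis's evaluation procedure.

```python
import numpy as np

def tracking_errors(robot_xy, robot_heading_deg, row_pt, row_dir):
    """Heading error (degrees) and signed cross-track error (in the
    units of the positions, e.g. metres) of a robot pose relative to
    a straight reference row defined by a point and a direction.
    """
    d = np.asarray(row_dir, float)
    d = d / np.linalg.norm(d)                      # unit tangent of the row
    e = np.asarray(robot_xy, float) - np.asarray(row_pt, float)
    cross_track = e[0] * d[1] - e[1] * d[0]        # signed perpendicular offset
    row_heading = np.degrees(np.arctan2(d[1], d[0]))
    # Wrap the heading difference into [-180, 180) degrees
    heading_err = (robot_heading_deg - row_heading + 180.0) % 360.0 - 180.0
    return heading_err, cross_track
```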