Abstract:
This thesis presents a pipeline for mapless navigation of mobile robots, where decision-
making and control are handled in separate stages. A Deep Reinforcement Learning (DRL)
agent, whose policy is represented by artificial neural networks, generates velocity commands that allow the robot
to reach a goal while avoiding obstacles, using only onboard sensor data. These commands are
then passed to a Takagi-Sugeno (T-S) fuzzy controller, which ensures accurate and robust
trajectory tracking. In the single-agent case, the DRL-based navigation is compared against
a classical navigation approach. The framework is further extended to a multi-robot setting,
demonstrating decentralized coordination in shared environments. Simulation results validate
the effectiveness and adaptability of the proposed pipeline.