I successfully created a humanoid robot named SAIN ROBOTICS (Sri Lankan Artificial Intelligence New Robotics). Completing this innovative project took me eight months of dedicated effort, with many failures along the way. The robot's body is built entirely from 2 mm white plywood board, and its chassis is framed with aluminum box bar.
This robot is equipped with two hands, two eyes, two ears, and a mouth. The hands are driven by high-torque 360-degree servo motors, enabling precise movement. Its eyes are two cameras: one for capturing photos and videos, the other for AI processing and detection tasks. Microphones serve as its ears, letting the robot detect and record sound, and its mouth houses a high-volume speaker for sound output and speech. Built around development boards including an Arduino Mega, a Raspberry Pi 3B+, and an ESP32 camera module, the robot also uses six high-torque 50 rpm gear motors for locomotion: moving forward, moving backward, and rotating.
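To illustrate how such servo hands are typically driven, here is a minimal sketch of the standard hobby-servo PWM timing (a 50 Hz frame with a pulse of roughly 1 to 2 ms); the function names are hypothetical and not taken from the robot's firmware.

```python
def angle_to_pulse_ms(angle: float) -> float:
    """Map a servo angle (0-180 degrees) to a pulse width in milliseconds.

    Assumes the common hobby-servo convention: 1.0 ms at 0 degrees,
    2.0 ms at 180 degrees, within a 20 ms (50 Hz) frame.
    """
    if not 0.0 <= angle <= 180.0:
        raise ValueError("angle must be between 0 and 180 degrees")
    return 1.0 + angle / 180.0


def pulse_to_duty_percent(pulse_ms: float, frame_ms: float = 20.0) -> float:
    """Convert a pulse width to the duty-cycle percentage a PWM driver expects."""
    return pulse_ms / frame_ms * 100.0
```

On a Raspberry Pi these values would be fed to a PWM library such as RPi.GPIO or pigpio; on the Arduino Mega, the Servo library performs this mapping internally.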
This robot features two primary systems: face recognition and voice recognition. The ESP32 camera handles face recognition, enabling the robot to identify individuals; upon recognizing a face, the robot greets the person with a friendly "Hi, how are you? I know you!" The Raspberry Pi camera is used for capturing photos and videos. The robot's microphone is also integrated with Google Assistant, activated via a push button on the front. Users can ask the robot questions, and it answers using Google's capabilities. Commands such as "take photo" and "take video" prompt the Raspberry Pi camera to capture media and store it on the SD card.
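The voice-command dispatch described above can be sketched as a simple keyword matcher. The phrases come from the description, but the function and action names below are hypothetical and not the robot's actual code.

```python
from typing import Optional


def dispatch_command(transcript: str) -> Optional[str]:
    """Map a recognized utterance to a camera action.

    Returns "capture_photo", "capture_video", or None when the
    utterance should fall through to the Google Assistant instead.
    """
    text = transcript.lower()
    if "take photo" in text:
        return "capture_photo"
    if "take video" in text:
        return "capture_video"
    return None  # not a camera command; let the assistant answer it
```

In a real setup the returned action would trigger the Raspberry Pi camera (for example through the picamera library) and write the file to the SD card.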
The robot's operations are managed through a versatile control system combining radio control, voice commands, and detection-based control. Some functions run immediately from in-code commands, but most are activated through time-controlled mechanisms that enable autonomous operation. This multi-mode control system keeps the robot's tasks and interactions flexible and efficient.
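One way to implement the time-controlled behaviors mentioned above is a small cooperative scheduler that fires each task whenever its interval has elapsed. This is an illustrative sketch under that assumption, not the robot's actual control loop.

```python
class IntervalScheduler:
    """Run named tasks whenever their configured interval has elapsed."""

    def __init__(self):
        # Each entry is a mutable [interval_s, callback, last_run_time].
        self._tasks = []

    def add(self, interval_s: float, callback) -> None:
        """Register a task to run once every `interval_s` seconds."""
        self._tasks.append([interval_s, callback, 0.0])

    def tick(self, now: float) -> list:
        """Run every task whose interval has elapsed by time `now`."""
        results = []
        for task in self._tasks:
            interval, callback, last_run = task
            if now - last_run >= interval:
                results.append(callback())
                task[2] = now  # remember when this task last ran
        return results
```

In the main loop, `tick()` would be called with the current time on each pass, while radio and voice inputs interrupt or override the scheduled behaviors.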
This robot can autonomously teach various subjects through a programmed teaching block. It instructs students in basic skills such as art and drawing techniques, answers simple questions, demonstrates step-by-step processes such as building a small LED-blinking circuit, offers guidance on assembling computer parts and on 3D printing, and conducts other programmed educational activities.
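A programmed teaching block like this could be represented as a table of lessons, each an ordered list of steps the robot speaks in sequence. The lesson contents below are illustrative examples, not the robot's stored curriculum.

```python
# Hypothetical lesson table: each lesson is an ordered list of spoken steps.
LESSONS = {
    "led_blink": [
        "Connect the LED's long leg to pin 13 through a 220 ohm resistor.",
        "Connect the LED's short leg to ground.",
        "Upload a sketch that toggles pin 13 once per second.",
    ],
    "art_drawing": [
        "Start with light pencil outlines.",
        "Add shading from the darkest areas outward.",
    ],
}


def teach(lesson_name: str):
    """Yield each step of a lesson in order, ready to send to the speaker."""
    for number, step in enumerate(LESSONS[lesson_name], start=1):
        yield f"Step {number}: {step}"
```

Each yielded string would be passed to a text-to-speech engine and played through the robot's speaker.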
For the next version of the robot, consider integrating a natural language processing (NLP) system so the robot can hold more sophisticated conversations with users. This would improve its ability to answer complex questions, give detailed explanations, and adapt its explanations to individual learning styles. Incorporating machine learning could further let the robot personalize its teaching approach based on each student's progress and preferences, making the learning experience more tailored and effective.