International Journal of Electrical and Computer Engineering (IJECE)
Vol. , No. , August 2017, pp. 2169-2175
ISSN: 2088-8708, DOI: 10.11591/ijece.

Distance Estimation based on Color-Block: A Simple Big-O Analysis

Budi Rahmani1, Hugo Aprilianto2, Heru Ismanto3, Hamdani Hamdani4
1,2 Program Studi Teknik Informatika, STMIK Banjarbaru, Kalimantan Selatan, Indonesia
3 Department of Informatics Engineering, Musamus University, Merauke, Indonesia
4 Department of Computer Science, Faculty of Computer Science and Information Technology, Mulawarman University, Samarinda, Indonesia

Article Info

Article history:
Received Oct 15, 2016
Revised Jan 22, 2017
Accepted Feb 6, 2017

Keyword:
Arduino Nano
Big O
Complexity
Object detection
Pixy CMUcam5

ABSTRACT

This paper explains the process of reading object-detection data for an object of a certain color; in this case, the object is an orange tennis ball. We use a Pixy CMUcam5 connected to an ATmega328-based Arduino Nano. The data from the Arduino Nano are then re-read through the USB port and displayed, to confirm whether an orange object is detected or not. From this process it is known exactly how many object blocks are detected, including the X and Y coordinates of the object. Finally, the complexity of the algorithms used in reading the orange-object detection results is explained.

Copyright © 2017 Institute of Advanced Engineering and Science. All rights reserved.

Corresponding Author:
Budi Rahmani
Program Studi Teknik Informatika, STMIK Banjarbaru, Kalimantan Selatan, Indonesia
Email: hugo.aprilianto@gmail.com

Journal homepage: http://iaesjournal.com/online/index.php/IJECE

INTRODUCTION
Robot navigation is the way a robot changes its position, or travels to a goal position from its stationary position, without hitting any obstacle. Navigation may be defined as the ability to move in a particular environment. Other researchers define it as the science of guiding a robot to move through its environment. The problems that accompany this process can be framed as three questions: "Where am I?", "Where am I going?", and "How do I get there?". The first and second questions can be answered by equipping the robot with appropriate sensors, while the third can be addressed with an effective navigation planning system. The navigation system itself is directly related to the sensors used on the robot and to the structure of the environment, which means there must always be a match between the purpose-built robot and the environment in which it will be operated.
Vision-based navigation has made tremendous progress with the implementation of various autonomous vehicles, such as autonomous ground vehicles (AGV), autonomous underwater vehicles (AUV), and unmanned aerial vehicles (UAV). Regardless of the type of vehicle or robot built, systems that utilize a vision sensor for navigation purposes can roughly be divided into two general categories: systems that require prior knowledge of the environment in which they will operate, and systems that perceive the environmental conditions through which they will navigate. Systems that require a map can be subdivided into map-using systems, map-building systems, and topological map-based systems. As the name suggests, map-using navigation systems need a complete map of the environment before starting navigation, while metric map-building systems build the entire map themselves and use it in the next phase of navigation.
Furthermore, other systems in this category can perform self-localization in the environment simultaneously while the map is being constructed. Other types of map-building navigation systems are also encountered, e.g., visual-sonar-based systems and local map-based systems. Both of these systems collect environmental data while navigating and build a local map that is used to support the intended navigational purpose. The local map includes a mapping of obstacles and free space, and it is usually a function of the viewing angle of the camera. The last system is the topological map-based system, which builds a topology map consisting of nodes connected by links, where the vertices represent specific places/positions in the environment and the links represent the distance or travel time between two nodes.
The next kind of navigation system is the mapless navigation system, which mostly includes reactive techniques that use visual cues built from image segmentation, optical flow, or a search for features between the image frames obtained. There is no representation of the environment in these systems; the environment is seen/perceived as the system navigates, recognizes objects, or tracks landmarks.
One of the most important parts of vision-based robot navigation is processing the data obtained from the camera sensor into information that a particular robot can use to navigate toward a specific point in its environment or a specified arena. In this work, the navigation task performed by a humanoid soccer robot is to detect an orange ball object, using readings obtained from the Pixy CMUcam5 camera sensor. The complexity of the algorithms used to read the object-detection results will then be analyzed in more detail.
Vision-based navigation methods are divided into three categories, i.e., map-based, map-building, and mapless navigation. One map-based navigation method uses a stereo camera on a wheeled robot that works as a golf-ball collector. It is associated with a wide-angle camera mounted above the golf course, covering a viewing area of 20 m x 20 m, with a vertical viewing angle of 80°, a horizontal viewing angle of 55°, and an installation height of 7 m above the ground. The resolution captured by this camera is 1280x780 pixels. The images captured by the camera are processed by a server computer, which then builds a grid-shaped navigation map, also called an occupancy map, indicating the position of the ball-collecting robot on the golf course and the positions of the balls to be collected; both are determined by the server, and the robot moves to a given position based on information sent wirelessly.
The next category of navigation method is map-building navigation. One new algorithm in this category was developed using a stereo camera, with the aim of estimating the free space in front of the vehicle on which the system runs, particularly for navigation on highways. By utilizing a disparity map, information is obtained on whether the scene in front of the vehicle is 'dense' or 'sparse'. Furthermore, to develop the score map, and because the sensor used is a stereo camera, the next step is a stereo matching process based on image data along the longitudinal road surface. Detection of free space is then done by extracting the road surface without any obstacles.
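To make the role of the disparity map concrete, the sketch below illustrates the standard pinhole-stereo relation that such free-space methods build on: the depth Z of a matched point follows from its disparity d as Z = f*B/d, where f is the focal length in pixels and B is the stereo baseline. This is only a minimal illustration of the general principle, not the implementation of the system described above; the focal length, baseline, and disparity values are assumed purely for the example.

#include <cstdio>

// Depth from stereo disparity via the pinhole model: Z = f * B / d.
// f: focal length [pixels], B: baseline [m], d: disparity [pixels].
double depthFromDisparity(double focalPx, double baselineM, double disparityPx) {
    if (disparityPx <= 0.0) return -1.0;  // invalid match / infinitely far
    return focalPx * baselineM / disparityPx;
}

int main() {
    const double f = 700.0;  // assumed focal length in pixels
    const double B = 0.12;   // assumed stereo baseline in meters

    // Larger disparities mean nearer surfaces: near rows of large
    // disparities suggest an obstacle ('dense'), while small disparities
    // along the road surface suggest free space ('sparse').
    for (double d : {56.0, 28.0, 14.0, 7.0}) {
        printf("disparity %5.1f px -> depth %.2f m\n", d, depthFromDisparity(f, B, d));
    }
    return 0;
}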
The last category is the mapless navigation method. Optical flow is one of the best methods in this category; it mimics the visual behavior of animals such as bees, in which the basic robot movement is determined by the difference in speed between the image seen by the left eye and the image seen by the right eye, and the robot moves toward the side where the change in image speed is smallest. This method is widely used to detect obstacles, and it has since been developed into many algorithms, including the Horn-Schunck algorithm (HSA) and the Lucas-Kanade algorithm (LKA). The HSA proposes an optical flow equation that satisfies the brightness constraint and a global smoothness constraint, while the LKA proposes a weighted least-squares (quadratic) formulation of optical flow.
Furthermore, optical flow is the instantaneous velocity of each pixel in the image at a certain time. The optical flow field (OFF) is described as the field formed by the instantaneous velocities of all the gray-level pixels in the observed image, and it reflects the relationship, across the time domain, between the positions of the same pixel in adjacent frames. Inconsistencies between the direction of the optical flow field and the main movement can be used to detect an obstacle. The optical flow field is computed from multiple images obtained from the same camera at different times, and the main movement of the camera is estimated based on the OFF. Optical flow can be generated from two camera movements, translation and rotation. If the distance between an object in the field of view and the camera is d, and the angle between the object and the direction of translational movement is θ, then when the camera moves with translational velocity (TV) v and angular velocity (AV) ω, the optic flow generated by the object can be calculated with Equation (1):

F = (v / d) sin θ + ω    (1)

where:
F = the amount of optical flow
v = the translational velocity (displacement per unit time)
d = the distance of the object in the camera's field of view
ω = the angular velocity
θ = the angle between the object and the direction of translational movement
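As a quick numerical illustration of Equation (1), the snippet below computes the optic flow F for an object at several distances; the values of v, θ, and ω are assumed purely for the example. It shows the inverse dependence of the translational flow component on d, which is what makes optic flow usable as a proximity cue for obstacle avoidance.

#include <cmath>
#include <cstdio>

// Optic flow from Equation (1): F = (v / d) * sin(theta) + omega.
// v: translational velocity [m/s], d: object distance [m],
// theta: angle between the object and the direction of translation [rad],
// omega: angular velocity of the camera [rad/s].
double opticFlow(double v, double d, double theta, double omega) {
    return (v / d) * std::sin(theta) + omega;
}

int main() {
    const double v     = 0.5;             // assumed forward speed [m/s]
    const double theta = std::acos(0.0);  // object at 90 degrees from heading
    const double omega = 0.0;             // pure translation, no rotation

    // Nearer objects generate larger optic flow, so a robot can steer
    // toward the side whose flow magnitude is smaller.
    for (double d : {0.5, 1.0, 2.0, 4.0}) {
        printf("d = %.1f m -> F = %.3f rad/s\n", d, opticFlow(v, d, theta, omega));
    }
    return 0;
}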
RESEARCH METHOD
The following algorithm detects the presence of an object of a specific color using the Pixy CMUcam5 camera, whose output is fed to the controller (an Arduino Nano with an ATmega328) via a serial communication line. The program code executed by the controller continuously reads, over the serial link, the existence of an object with a specific color (a color 'signature', here orange); when the object in question is detected, it is reported as 'blocks' of that color. Every block detected by the camera is sent to the controller at a rate of 50 frames per second. Hence the controller also delays its reading process by the same amount, so that the data sent by the camera and the data received by the controller stay synchronized. The controller then forwards the received data to the user through the universal serial bus (USB) port, namely the number of color 'blocks' of the object detected by the camera, their positions along x and y, and the width and height of each block. A block diagram of the system built, matching the above description, is presented in Figure 1.

Figure 1. Block diagram of the system (a single camera feeds the controller, which drives the tilt servo; a motion controller drives the 18-DOF body servos)

Below is the algorithm, as an Arduino sketch, that reads the detection data for the orange ball objects:

#include <SPI.h>
#include <Pixy.h>

Pixy pixy;                 // the Pixy CMUcam5 object, connected over SPI

void setup()
{
  Serial.begin(9600);      // open the USB serial link to the user
  pixy.init();             // start communication with the camera
}

void loop()
{
  static int i = 0;        // frame counter; static so it persists across calls
  int j;                   // index over the detected blocks
  uint16_t blocks;         // the number of color blocks of the detected object
  char buf[32];            // buffer of 32 characters for formatting the output

  // read the detection results; pixy.getBlocks() returns the block count
  blocks = pixy.getBlocks();

  // if blocks (objects) are found, print the results every 50 frames,
  // matching the 50 frame-per-second rate at which the camera sends data
  if (blocks)
  {
    i++;
    if (i % 50 == 0)
    {
      sprintf(buf, "Detected %d:\n", blocks);
      Serial.print(buf);
      for (j = 0; j < blocks; j++)
      {
        sprintf(buf, "  block %d: ", j);  // report each block's signature,
        Serial.print(buf);                // x, y, width, and height
        pixy.blocks[j].print();
      }
    }
  }
}
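One immediate observation, ahead of the more detailed analysis: fetching the block count takes constant work per frame, while the reporting branch iterates once per detected block, so one pass of loop() runs in O(n) time for n detected color blocks, and processing f frames costs O(f·n) overall.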