I recently decided to program a robot that uses a camera (not a Raspberry Pi camera) to track objects. The camera is mounted on a servo together with an ultrasonic distance sensor, so I can turn both with the servo and use some clever trigonometry to estimate the distance between moving objects and the robot, for active obstacle avoidance while it drives around.
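For the servo-aiming part, a small helper can convert a detection's centroid into a pan angle. This is only a sketch under assumed values: `FRAME_WIDTH_PX` and `HORIZONTAL_FOV_DEG` are placeholders you would replace with your camera's actual resolution and measured horizontal field of view.

```python
# Assumed camera parameters -- replace with your camera's real specs.
FRAME_WIDTH_PX = 640          # horizontal resolution of the stream
HORIZONTAL_FOV_DEG = 62.2     # measure your camera's horizontal field of view

def pixel_to_servo_angle(cx, frame_width=FRAME_WIDTH_PX, hfov=HORIZONTAL_FOV_DEG):
    """Convert an object's centroid x-coordinate into a servo pan angle.

    Returns degrees relative to the camera's optical axis:
    negative = object left of centre, positive = right.
    """
    offset = cx - frame_width / 2          # pixels from image centre
    return (offset / frame_width) * hfov   # linear approximation, fine near centre
```

The linear pixel-to-angle mapping is an approximation that is good near the image centre; for wide fields of view you would use the proper perspective formula with the focal length.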
However, I don't know of anyone who has done something like this and would appreciate any help tweaking the code.
I started with the example provided by Ultralytics and made a few changes here and there, but I don't know if it works since I haven't tested it yet.
Kindly advise. Thank you.
Basically, I just want to accurately calculate the distance between my robot and any obstacles while the robot is moving, using just a camera and an ultrasonic distance sensor.
Please find attached a copy of my program.
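Since the ultrasonic sensor rides on the same servo, one way to get real centimetres is to point it at each obstacle in turn and convert the two (pan angle, range) readings into robot-frame coordinates; the gap between obstacles then falls out of plain coordinate geometry instead of a fixed pixel scale. A minimal sketch (function names are my own, not from the program):

```python
import math

def polar_to_robot_frame(range_cm, pan_deg):
    """Convert an ultrasonic range and servo pan angle into (x, y) in the
    robot's frame: x forward along the 0-degree axis, y to the side."""
    theta = math.radians(pan_deg)
    return range_cm * math.cos(theta), range_cm * math.sin(theta)

def gap_width_cm(p1, p2):
    """Straight-line distance in cm between two obstacle points."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])
```

For example, two obstacles both 100 cm away but 90 degrees apart in pan are about 141 cm apart, which you can compare against the minimum gap your robot needs.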
Code:
import math

import cv2
from ultralytics import YOLO

import motioncontrol  # my own drive-control module

# Load a pretrained YOLO11n model
model = YOLO("yolo11n.pt")

# Incoming video stream from the Raspberry Pi to my Mac
source = 'https://ip_adress of pi goes here/robovision/'
cap = cv2.VideoCapture(source)

# Assumed ground-plane scale; only valid at one distance from the
# camera, so it needs careful calibration and must be precise.
pixel_per_centimeter = 10

# Loop through the video frames
while cap.isOpened():
    success, frame = cap.read()
    if not success:
        # End of stream (or a dropped frame) -- stop reading
        break

    # persist=True keeps track IDs consistent across successive frames
    results = model.track(frame, persist=True, tracker="bytetrack.yaml")

    centroids = []
    distances = []

    # Collect the centroid of each tracked box, tagged with its track ID
    for r in results:
        boxes = r.boxes.xyxy.cpu()
        if r.boxes.id is not None:
            track_ids = r.boxes.id.int().cpu().tolist()
            for box, track_id in zip(boxes, track_ids):
                cx = (box[0] + box[2]) / 2
                cy = (box[1] + box[3]) / 2
                centroids.append((cx, cy, track_id))

    # Distance between consecutive centroids, converted to centimetres.
    # Stop one short of the end so centroids[i + 1] stays in range.
    for i in range(len(centroids) - 1):
        gap_cm = math.hypot(centroids[i + 1][0] - centroids[i][0],
                            centroids[i + 1][1] - centroids[i][1]) / pixel_per_centimeter
        # Track IDs are ints, so format them rather than concatenating strings
        label = f"{centroids[i][2]} to {centroids[i + 1][2]}"
        distances.append([gap_cm, label])
        if gap_cm >= 12.50:  # minimum gap width the robot can pass through
            motioncontrol.front()

# Release the video capture object and close any display windows
cap.release()
cv2.destroyAllWindows()

Statistics: Posted by Joraspberry — Fri Jun 20, 2025 10:35 pm
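One caveat on `pixel_per_centimeter`: a single fixed scale only holds at one depth, so gap widths computed from it drift as objects move closer or farther from the camera. A common camera-only alternative is the pinhole model, distance = real width × focal length / pixel width, which needs a known object width and a focal length calibrated in pixels; the ultrasonic reading can then cross-check the estimate. A hedged sketch (all numbers are example values, not measured):

```python
def distance_from_bbox(bbox_width_px, real_width_cm, focal_px):
    """Pinhole-camera range estimate: an object of known real width
    appears fewer pixels wide the farther away it is."""
    return real_width_cm * focal_px / bbox_width_px

# Example: a 10 cm-wide object spanning 100 px, assuming a 500 px focal length
est_cm = distance_from_bbox(100, 10, 500)  # 50.0 cm
```

You can calibrate `focal_px` once by placing an object of known width at a known distance and solving the same formula for the focal length.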