Protii #3: Integration of YOLOv2 Object Detection
One of the features that I am implementing is object detection and tracking. Ultimately I would like to convert the algorithm from object detection to people detection, classification, and tracking.
As I had already integrated YOLOv2 in a previous exercise, I refactored that source code for this particular project.
Move to ROS Service
At the moment the detector subscribes to the image stream topic, continuously runs the detection algorithm, and publishes the results. Since inference is relatively expensive, it likely makes more sense to change the system to use a ROS service or action: run the detection algorithm only when a trigger fires. The trigger could be a cheaper algorithm or a time-based trigger.
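As a rough sketch of what that could look like, the node below caches the latest frame and only runs inference when a service call comes in. This is not the actual package code: it assumes a rospy-based node, and the topic name /camera/image_raw and the detect_objects() wrapper around YOLOv2 inference are placeholders.

```python
#!/usr/bin/env python
# Minimal sketch of on-demand detection via a ROS service (assumptions:
# rospy node, placeholder topic name, hypothetical detect_objects() wrapper).
import rospy
from sensor_msgs.msg import Image
from std_srvs.srv import Trigger, TriggerResponse

latest_frame = None  # most recent frame from the camera stream


def image_cb(msg):
    # Cache the newest image instead of running inference on every frame.
    global latest_frame
    latest_frame = msg


def handle_detect(req):
    # Run the (expensive) detector only when a client fires the trigger.
    if latest_frame is None:
        return TriggerResponse(success=False, message="no frame received yet")
    detections = detect_objects(latest_frame)  # hypothetical YOLOv2 wrapper
    return TriggerResponse(success=True, message=str(detections))


if __name__ == "__main__":
    rospy.init_node("yolo_on_demand")
    rospy.Subscriber("/camera/image_raw", Image, image_cb, queue_size=1)
    rospy.Service("run_detection", Trigger, handle_detect)
    rospy.spin()
```

A caller, whether a cheaper motion detector or a timer, would then fire the trigger with rosservice call /run_detection or a rospy.ServiceProxy.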
Inference is done on the TX2 and I get about 4.6 to 4.8 FPS, with raw images coming in at about 30 FPS. These rates degrade when measured over the network, and the degradation can be seen in the video below.
Next steps are to retrain (possibly via transfer learning) and tune the model to detect humans only, and to convert the ROS package to implement detection as a service. It also probably makes sense to update the package to use YOLOv3.
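Until the model is retrained, one interim option is to simply filter the existing detections for the person class on the ROS side. The sketch below assumes a darknet_ros-style BoundingBoxes message; the package, topic names, and field names are assumptions for illustration, not the project's actual interfaces.

```python
#!/usr/bin/env python
# Interim person-only filter: republish only detections labelled "person".
# Assumes darknet_ros-style messages; topic and field names are placeholders.
import rospy
from darknet_ros_msgs.msg import BoundingBoxes  # assumed message type

pub = None


def boxes_cb(msg):
    # Keep only boxes whose class label is "person" and republish them.
    persons = [b for b in msg.bounding_boxes if b.Class == "person"]
    pub.publish(BoundingBoxes(header=msg.header, bounding_boxes=persons))


if __name__ == "__main__":
    rospy.init_node("person_filter")
    pub = rospy.Publisher("/person_detections", BoundingBoxes, queue_size=1)
    rospy.Subscriber("/darknet_ros/bounding_boxes", BoundingBoxes,
                     boxes_cb, queue_size=1)
    rospy.spin()
```

This only hides non-person classes; retraining on a person-only dataset would still be needed to get the accuracy and speed benefits of a smaller, specialised model.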
Results
The video shows the display connected to the Jetson TX2 via an HDMI cable. I accessed the Protii webapp in Chrome, viewed it through the browser's inspector, and used SimpleScreenRecorder [1] for the desktop recording.
References
[1] https://www.ubuntupit.com/15-best-linux-screen-recorder-and-how-to-install-those-on-ubuntu/