Overview

  • Students: Matthew Rueben
  • GitHub: Coming Soon.

Description

When we do not know the exact position of a private object with respect to the camera, we are forced to estimate it from odometry, localization, or object detection. All of these can introduce error, which may cause us to redact the wrong part of the image.
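The sketch below illustrates (it is not the project's actual code) how localization error turns into pixel-space error: poses are sampled from the robot's Gaussian localization estimate and a known private object is projected into the image for each sample. The camera model, object location, and all parameter values are illustrative assumptions.

```python
import numpy as np

FOCAL_PX = 525.0       # assumed focal length in pixels
CX, CY = 320.0, 240.0  # assumed principal point (640x480 image)
CAMERA_HEIGHT = 1.2    # assumed camera height above the floor (m)

def project_object(pose, object_xy, object_z=0.8):
    """Project a map-frame object point into pixel coordinates for a camera
    looking along the robot's +x axis. Returns None if the object is behind
    the camera."""
    x, y, theta = pose
    dx, dy = object_xy[0] - x, object_xy[1] - y
    # Rotate the offset into the robot/camera frame.
    fwd = np.cos(theta) * dx + np.sin(theta) * dy   # depth along the optical axis
    lat = -np.sin(theta) * dx + np.cos(theta) * dy  # left/right offset
    if fwd <= 0.1:
        return None
    u = CX - FOCAL_PX * lat / fwd
    v = CY - FOCAL_PX * (object_z - CAMERA_HEIGHT) / fwd
    return np.array([u, v])

def pixel_uncertainty(pose_mean, pose_cov, object_xy, n_samples=500):
    """Monte-Carlo propagation: sample poses from the localization estimate
    and collect the resulting pixel locations of the private object."""
    rng = np.random.default_rng(0)
    samples = rng.multivariate_normal(pose_mean, pose_cov, size=n_samples)
    pixels = [p for p in (project_object(s, object_xy) for s in samples) if p is not None]
    pixels = np.array(pixels)
    return pixels.mean(axis=0), np.cov(pixels.T)

# Example: a 10 cm / 5 degree localization error already smears the object's
# image location over tens of pixels.
mean_px, cov_px = pixel_uncertainty(
    pose_mean=np.array([0.0, 0.0, 0.0]),
    pose_cov=np.diag([0.1**2, 0.1**2, np.deg2rad(5)**2]),
    object_xy=np.array([2.0, 0.3]))
print(mean_px, np.sqrt(np.diag(cov_px)))
```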

We propose a probabilistic definition of privacy, in which local users quantify how much they value their privacy and the system adjusts the location and extent of blurring or redaction in the video stream to match their expectations. As the robot's localization degrades, we redact more and more of the image to ensure that the private objects are not seen.
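As a minimal sketch of that policy, and under the assumptions of the previous example, the redacted region below grows with both the pixel-space uncertainty and a hypothetical per-user privacy weight; the names `redaction_radius`, `redact`, and `privacy_weight` are illustrative, not the project's actual API.

```python
import numpy as np
import cv2  # OpenCV, assumed available for the blurring step

OBJECT_RADIUS_PX = 40  # assumed nominal on-screen size of the private object

def redaction_radius(cov_px, privacy_weight=2.0):
    """Radius of the redacted disc: the nominal object size plus a multiple of
    the pixel-space standard deviation. A higher privacy_weight hides more of
    the image when the robot is less certain where the object is."""
    sigma = np.sqrt(np.max(np.linalg.eigvalsh(cov_px)))  # worst-case std dev
    return int(OBJECT_RADIUS_PX + privacy_weight * sigma)

def redact(image, mean_px, cov_px, privacy_weight=2.0):
    """Blur a disc around the object's estimated image location before the
    frame is streamed. If localization is lost and the uncertainty grows, the
    disc eventually covers the whole frame."""
    radius = redaction_radius(cov_px, privacy_weight)
    center = (int(round(mean_px[0])), int(round(mean_px[1])))
    blurred = cv2.GaussianBlur(image, (51, 51), 0)
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.circle(mask, center, radius, 255, -1)
    out = image.copy()
    out[mask > 0] = blurred[mask > 0]
    return out
```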

Future Plans

The initial work focused on accounting for the robot's localization error. We want to expand our probabilistic model to account for other kinds of error, such as object detection error. We also want to handle some tricky situations, such as when private objects are moved around or carried by a person.