Heterogeneous unified-memory architecture platforms, which integrate several co-processors on a single chip sharing one physical memory, are becoming increasingly common. Their use cases vary dramatically. On the one hand, they are deployed in Edge computing, which cannot tolerate high latency and imposes strict energy and power constraints. On the other hand, motivated by their growing computing capabilities and energy efficiency, many have considered replacing traditional bulky servers with these platforms to deliver the same computing power at a lower energy budget. This study is an exploratory step toward understanding the trade-off between power consumption, processing time, and throughput on a low-power heterogeneous platform. We focus on data stream processing workloads, characterizing several common computing kernels found in computer vision algorithms. Our preliminary experiments on the NVIDIA Jetson TX1 show that it is possible to reduce power consumption by up to 12%.