

We leverage 3D graphics and computer vision techniques to tackle a real-world problem: object-to-spot rotation estimation, which is of particular significance for intelligent surveillance systems, bike-sharing systems, and smart cities. We introduce an object-to-spot rotation estimator (OSRE) that estimates a parked bike's rotation with respect to its parking area. Using 3D graphics, we generate the Synthetic Bike Rotation Dataset (SynthBRSet) with accurate bike rotation annotations. We then present a first-of-its-kind object rotation model that assesses bike parking. Given the promising results, we believe this work is a significant starting point for further studies on estimating object rotation with respect to other visual semantic elements in the input image.

Bicycles are currently organized manually by bike-sharing companies. Existing bike-sharing systems are either docked or dockless. Dockless systems use smart algorithms to locate bikes, so the question of where to park a bike is effectively solved; how to park the bike, however, remains an open and challenging problem. The time and labor such companies spend keeping bikes well organized and distributed motivated us to build a deep computer-vision model that can effectively assess bike parking and thus serve as a primitive step toward alerting users to park their bikes appropriately.

Rotation Estimation

Object-to-Spot Rotation Estimation. The proposed object-to-spot rotation estimation method predicts two rotation angles, about the \(y\) and \(z\) axes. Rotation about the \(y\) axis represents the bicycle leaning or falling to the left or right: a bike fallen to the left corresponds to \(90^\circ\) (\(\pi/2\) radians), whereas a bike fallen to the right corresponds to \(270^\circ\) (\(3\pi/2\) radians). Rotation about the \(z\) axis represents the bike's direction in its standing pose. In (c), bikes rotated about the \(z\) axis (rotated class) are shown in blue, bikes leaning left or right (fallen class) in orange, and well-parked bikes (parked class) in green. The \(z\) rotation is visualized in the inner circle and the \(y\) rotation in the outer circle.
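The two predicted angles can be mapped to the three parking classes described above. The sketch below illustrates one way to do this; the tolerance thresholds (`lean_tol`, `dir_tol`) are illustrative assumptions, not values from the paper:

```python
def classify_bike_state(y_deg: float, z_deg: float,
                        lean_tol: float = 30.0,
                        dir_tol: float = 15.0) -> str:
    """Map the two predicted rotation angles to one of the three
    parking classes (parked / fallen / rotated). Thresholds are
    illustrative assumptions, not the paper's values."""
    y = y_deg % 360.0
    # A y rotation near 90 deg (fallen left) or 270 deg (fallen right)
    # means the bike is lying on the ground.
    if min(abs(y - 90.0), abs(y - 270.0)) <= lean_tol:
        return "fallen"
    # Standing, but its z direction deviates from the parking direction.
    z = ((z_deg + 180.0) % 360.0) - 180.0  # wrap to (-180, 180]
    if abs(z) > dir_tol:
        return "rotated"
    return "parked"
```

A bike with small rotations about both axes is classified as parked; large \(y\) rotation dominates, since a fallen bike is mis-parked regardless of its \(z\) direction.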

Rotation Annotation

Rotation in \(y\) axis.
Rotation in \(z\) axis.

In the first row of the figure below (back view), the bike's rotation about the \(y\) axis is predicted with respect to the parking area. This back view is obtained when the camera is at the lowest point of its vertical path \(z\). Note that we normalize this rotation to only three states: \(0^\circ\) represents the standing, well-parked state about the \(y\) axis, whereas \(90^\circ\) and \(-90^\circ\) represent the bike fallen to the right and to the left, respectively. The second row (top view) shows the bike's predicted rotation about the \(z\) axis with respect to the parking area. This top view is obtained when the camera is at the middle of its horizontal path \(x\).
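The three-state normalization of the \(y\) rotation can be sketched as follows; the \(45^\circ\) decision boundary and the sign convention (positive for fallen right) are illustrative assumptions:

```python
def normalize_y_rotation(y_deg: float) -> int:
    """Snap a continuous y-axis rotation to the three annotated states:
    0 (standing), 90 (fallen right), -90 (fallen left).
    The 45-degree decision boundary is an assumption for illustration."""
    y = ((y_deg + 180.0) % 360.0) - 180.0  # wrap to (-180, 180]
    if y > 45.0:
        return 90
    if y < -45.0:
        return -90
    return 0
```

Wrapping first means an input of \(270^\circ\) maps to \(-90^\circ\), so both angle conventions land on the same three states.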


Dataset

The generated dataset is diverse, with a wide range of variations in parking spaces, lighting conditions, backgrounds, materials, textures, colors, 3D objects, and camera positions. The dataset is provided in YOLO annotation format and includes the rotation annotations of each bike; the generation code can also produce COCO annotations. More details on the synthetic dataset generation can be found here.
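A YOLO-style label line with per-bike rotation could be parsed as sketched below. The field layout (`cls cx cy w h rot_y rot_z`, angles in degrees appended after the standard YOLO box fields) is an assumption for illustration, not the dataset's documented format:

```python
from dataclasses import dataclass

@dataclass
class BikeAnnotation:
    cls: int        # object class id
    cx: float       # normalized box center x
    cy: float       # normalized box center y
    w: float        # normalized box width
    h: float        # normalized box height
    rot_y: float    # rotation about the y axis, degrees (assumed field)
    rot_z: float    # rotation about the z axis, degrees (assumed field)

def parse_line(line: str) -> BikeAnnotation:
    """Parse one annotation line of the assumed form
    'cls cx cy w h rot_y rot_z'."""
    cls, cx, cy, w, h, ry, rz = line.split()
    return BikeAnnotation(int(cls), float(cx), float(cy),
                          float(w), float(h), float(ry), float(rz))
```

Consult the released dataset's own documentation for the actual field order before relying on such a parser.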


BibTex

@article{alfasly_osre,
  author={Alfasly, Saghir and Al-Huda, Zaid and Bello, Saifullahi Aminu and Elazab, Ahmed and Lu, Jian and Xu, Chen},
  journal={IEEE Transactions on Intelligent Transportation Systems},
  title={OSRE: Object-to-Spot Rotation Estimation for Bike Parking Assessment},
}
This web page template is borrowed from here.