In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.
Citation: Chen Q, Xu H, Tan L (2015) Application of Composite Small Calibration Objects in Traffic Accident Scene Photogrammetry. PLoS ONE 10(5): e0127018. https://doi.org/10.1371/journal.pone.0127018
Academic Editor: Catalin Buiu, Politehnica University of Bucharest, ROMANIA
Received: October 1, 2014; Accepted: April 10, 2015; Published: May 26, 2015
Copyright: © 2015 Chen et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper.
Funding: The authors extend their gratitude to the National Natural Science Foundation of China (No. 51078167) and the National High Technology Research and Development Program of China (863 Program No. 2009AA11Z201) for their assistance with this project.
Competing interests: The authors have declared that no competing interests exist.
Traffic accident scene photogrammetry typically involves using a non-metric camera to capture accident scene photographs, and computer processing to measure important accident scene information [1–5]. Non-metric camera calibration is an important part of photogrammetry that directly affects the accuracy of the measurements. Depending on the availability of a high-precision calibration object of known structure, there are three possible methods of calibrating a non-metric camera: traditional calibration [6, 7], self-calibration [8, 9], and active vision calibration [10–12]. Traditional calibration involves placing a calibration object in front of the camera, with a number of calibration points whose accurate 3D coordinates are known. The camera’s parameters are calculated by establishing the relationship of the coordinates of each point in space with their corresponding image coordinates. Self-calibration does not require the 3D coordinates of the calibration points, but it does need multiple images with corresponding coordinate points to determine the camera’s parameters. Active vision calibration requires the camera to move along a certain trajectory, and uses the geometric characteristics of the trajectory and relations between image coordinates to solve the camera’s parameters. The traditional method using calibration objects is generally used for traffic accident scene photogrammetry [13–15]. The two-dimensional direct linear transformation (2D-DLT) method based on a plane calibration object is simple, practical, and highly accurate. Thus, it is widely used at present [16, 17]. However, for the 2D-DLT algorithm to correctly reflect the perspective projection relationship between the spatial plane and the image plane across the entire scene, the calibration objects must cover as much of the site area of interest as possible [18, 19].
A traffic accident scene can range up to dozens of meters, and difficulties exist in manufacturing and maintaining large high-precision calibration objects to match the site. However, a small calibration object can only offer a small range of feature point data, which leads to a reduction in the photographic measurement accuracy of a non-metric camera; this directly affects the accuracy of a traffic accident scene investigation.
In this paper, an improved 2D-DLT measurement method is proposed based on a composite of small calibration objects, thus addressing the difficulty in arranging large calibration objects and the low measuring accuracy of small calibration objects. In this method, several small calibration objects are placed around the traffic accident scene. Because of the constancy of the relative positions and the coplanar relationship between the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. Then, a minimum reprojection error objective function is established, and nonlinear optimization methods are used to rectify the image.
The paper is organized as follows. First, the camera model is introduced, and the synthesized calibration principle using composite small calibration objects is proposed. Using this principle, the improved 2D-DLT method is established. Then, using a non-metric consumer-grade camera and a self-developed calibration object, several field experiments and specific case analyses are performed to demonstrate the feasibility of this method. Finally, discussions are presented and conclusions are drawn.
The Camera Model
The perspective projection model is one of the camera models used in traffic accident photogrammetry; the coordinate transformation between the object space and the image space is shown in Fig 1.
In Fig 1, π is the camera image plane, Ofuv is the pixel coordinate system, Oxy is the image plane coordinate system, OcXcYcZc is the camera coordinate system, and OwXwYwZw is the world coordinate system. The camera's optical center Oc is the origin of the camera coordinate system, with axes Xc and Yc respectively parallel to axes x and y in Oxy, and Zc overlapping the optical axis of the camera. The origin O of the image plane coordinate system lies on the optical axis and is called the principal point. Its pixel coordinates are (uo, vo) in Ofuv, which act as the offset in the imaging process. The distance OcO is the camera's focal length f. The origin Of of the pixel coordinate system is located in the upper left corner of the image, with axes u and v respectively parallel to axes x and y in Oxy.
Given a space point P, Pw = [Xw Yw Zw 1]T are its homogeneous coordinates in OwXwYwZw, and Pc = [Xc Yc Zc 1]T are its homogeneous coordinates in OcXcYcZc, expressed in metric units. Let Pu be the undistorted image point, in homogeneous coordinates in Oxy. Pd is the actual distorted image point produced in the imaging process, and uf = [u v 1]T are the homogeneous coordinates in Ofuv, expressed in pixels.
In the undistorted image, the camera imaging model can be represented by the following formula:
\[
\lambda\,\mathbf{u}_f = \mathbf{A}\,[\mathbf{R}\ \ \mathbf{T}]\,\mathbf{P}_w,
\qquad
\mathbf{A} = \begin{bmatrix} a_x & c & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\tag{1}
\]
where λ is an arbitrary proportion coefficient. A denotes the matrix of the camera's inner orientation parameters; it is determined by the parameters ax, ay, c, uo, and vo, describes the relative position between the projection center and the image plane, and represents the camera's own characteristics. R and T respectively denote the rotation and translation matrices from the world coordinate system to the camera coordinate system; these are called the exterior orientation parameters of the camera, and describe the spatial position and attitude of the photographic beam at the moment of exposure.
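The projection of Eq (1) can be sketched numerically. This is a minimal illustration, not the paper's calibrated camera: the intrinsic values (focal lengths, principal point) and the pose below are hypothetical.

```python
import numpy as np

# Hypothetical inner orientation matrix A; ax, ay, c, u0, v0 are illustrative.
A = np.array([[1200.0,    0.0, 1504.0],
              [   0.0, 1200.0, 1000.0],
              [   0.0,    0.0,    1.0]])

# Illustrative exterior orientation: identity rotation, camera 3 m from the scene.
R = np.eye(3)
T = np.array([[0.0], [0.0], [3.0]])

def project(A, R, T, Pw):
    """Map a world point (metres) to pixel coordinates via lambda*uf = A [R T] Pw."""
    Pc = R @ Pw.reshape(3, 1) + T    # world -> camera coordinates
    uf = A @ Pc                      # camera -> homogeneous pixel coordinates
    return uf[:2, 0] / uf[2, 0]      # divide out lambda = Zc

print(project(A, R, T, np.array([0.1, 0.1, 0.0])))  # -> [1544. 1040.]
```

The division by the third homogeneous component is what eliminates the arbitrary proportion coefficient λ.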
In actual image processing, because of camera lens distortion, the image point deviates from its ideal location, as shown in Fig 1. The distorted point Pd deviates from the undistorted point Pu, which affects the accuracy of the photogrammetry. Here, the radial and tangential distortions of the imaging process are considered simultaneously. The relationship between the pixel coordinates of the distorted point Pd and the ideal pixel coordinates (u, v) of Pu can be expressed as
\[
\begin{aligned}
u_d &= u + (u-u_0)\,(k_1 r^2 + k_2 r^4) + 2p_1 (u-u_0)(v-v_0) + p_2\!\left(r^2 + 2(u-u_0)^2\right) \\
v_d &= v + (v-v_0)\,(k_1 r^2 + k_2 r^4) + p_1\!\left(r^2 + 2(v-v_0)^2\right) + 2p_2 (u-u_0)(v-v_0)
\end{aligned}
\tag{2}
\]
where r2 = (u−uo)2 + (v−vo)2 is the squared radial distance from an imaging point to the optical axis; k1 and k2 respectively denote the first- and second-order radial distortion coefficients; p1 and p2 respectively denote the first- and second-order tangential distortion coefficients.
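As a sketch, the radial-plus-tangential model of Eq (2) can be applied directly in pixel coordinates. The function below assumes the standard Brown-style form consistent with the coefficients k1, k2, p1, p2 above; any coefficient values passed to it are illustrative, not calibrated ones.

```python
def distort(u, v, u0, v0, k1, k2, p1, p2):
    """Apply the radial + tangential distortion of Eq (2) in pixel coordinates.

    (u, v) is the ideal (undistorted) pixel position, (u0, v0) the principal
    point; returns the distorted pixel position. Coefficients are illustrative.
    """
    du, dv = u - u0, v - v0
    r2 = du**2 + dv**2                       # squared radial distance
    radial = k1 * r2 + k2 * r2**2            # radial terms k1*r^2 + k2*r^4
    ud = u + du * radial + 2*p1*du*dv + p2*(r2 + 2*du**2)
    vd = v + dv * radial + p1*(r2 + 2*dv**2) + 2*p2*du*dv
    return ud, vd
```

With all four coefficients at zero the mapping is the identity; a positive k1 pushes points radially outward from the principal point (barrel-free, pincushion-like displacement), which is the dominant effect for most consumer lenses.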
Principle of Composite Small Calibration Objects
When using a non-metric camera and a single small calibration object for traffic accident scene photogrammetry, the transformation relationship between the small calibration object’s coordinate system and the camera coordinate system can be obtained as
\[
\mathbf{P}_c = \begin{bmatrix} \mathbf{R} & \mathbf{T} \\ \mathbf{0}^{T} & 1 \end{bmatrix} \mathbf{P}_w = \mathbf{RT}\,\mathbf{P}_w
\tag{3}
\]
Feature point Pi has homogeneous coordinates Pwi in the coordinate system of small calibration object i, and homogeneous coordinates Pwj in that of small calibration object j. Its homogeneous coordinates in the camera coordinate system can, respectively, be represented as
\[
\mathbf{P}_c = \mathbf{RT}_i\,\mathbf{P}_{wi}
\tag{4}
\]
\[
\mathbf{P}_c = \mathbf{RT}_j\,\mathbf{P}_{wj}
\tag{5}
\]
Using the above formula, a large calibration object can be composed of a number of small calibration objects, as shown in Fig 2. The key is to calculate the exterior orientation parameters RTi of each small calibration object relative to the camera. Then, the coordinates of each small calibration object on the composite calibration object can be calculated by Eq (8), effectively replacing the small calibration objects and expanding the measurement range.
In order to calculate RTi for each small calibration object, the homographic matrix H that relates the image plane and the calibration object plane should be calculated first. Then, the camera parameters are calculated, i.e., RTi is determined. Finally, the small calibration objects are composited completely.
Calculate homographic matrix H.
As shown in Fig 2, the calibration object in 2D photogrammetry is planar; it lies in the Yw = 0 plane of the world coordinate system. Let ri denote the column vectors of matrix R; then, Eq (1) can be transformed into
\[
\lambda\,\mathbf{u}_f = \mathbf{A}\,[\mathbf{r}_1\ \ \mathbf{r}_3\ \ \mathbf{T}]\begin{bmatrix} X_w \\ Z_w \\ 1 \end{bmatrix} = \mathbf{H}\begin{bmatrix} X_w \\ Z_w \\ 1 \end{bmatrix}
\tag{9}
\]
where H = [h1 h2 h3] is the homographic matrix, which can be calculated according to the 2D-DLT method. This method uses a plane calibration object in which the points’ coordinates are known, and requires at least four points [15, 22].
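The 2D-DLT estimation of H can be sketched as follows: each plane-to-image correspondence contributes two rows of a homogeneous linear system in the nine entries of H, solved by SVD. The point layout in the usage below is hypothetical.

```python
import numpy as np

def dlt_homography(obj_pts, img_pts):
    """Estimate H (up to scale) from >= 4 plane <-> image correspondences.

    obj_pts: (X, Z) coordinates on the Yw = 0 calibration plane;
    img_pts: corresponding (u, v) pixel coordinates.
    Each pair gives two rows of A h = 0; h is the right singular vector of
    the smallest singular value.
    """
    rows = []
    for (X, Z), (u, v) in zip(obj_pts, img_pts):
        rows.append([X, Z, 1, 0, 0, 0, -u*X, -u*Z, -u])
        rows.append([0, 0, 0, X, Z, 1, -v*X, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the arbitrary scale

# Sanity check with a hypothetical layout: identical point sets give H = I.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(dlt_homography(pts, pts))
```

With exactly four points in general position the system has an eight-dimensional row space and a unique null direction; additional points over-determine the system and the SVD returns the least-squares solution.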
Solve the camera parameters.
According to Zhang [23], the homographic matrices provide constraints on the inner orientation parameters, from which A and A-1 are obtained using matrix decomposition; the exterior orientation parameters RTi of each small calibration object then follow from H and A-1.
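Assuming the Zhang-style decomposition H = A [r1 r3 T] for the Yw = 0 plane of Eq (9), RTi can be recovered from H and A-1 as sketched below. This is a bare sketch: a full implementation would also re-orthonormalize R (e.g., via SVD) and resolve the sign of the scale so the object lies in front of the camera.

```python
import numpy as np

def rt_from_homography(A, H):
    """Recover R and T from H = A [r1 r3 T] (calibration plane Yw = 0)."""
    Ainv = np.linalg.inv(A)
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    lam = 1.0 / np.linalg.norm(Ainv @ h1)   # scale fixed by ||r1|| = 1
    r1 = lam * (Ainv @ h1)
    r3 = lam * (Ainv @ h2)
    r2 = np.cross(r3, r1)                   # complete the right-handed basis
    T = lam * (Ainv @ h3)
    R = np.column_stack([r1, r2, r3])
    return R, T
```

As a round-trip check, composing a hypothetical A with [r1 r3 T] and decomposing again recovers the original pose.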
Establish the composite calibration object’s point coordinates.
The key to establishing a composite calibration object is to determine the positional relationship between each small calibration object’s coordinate system and the composite calibration object’s coordinate system. A point’s coordinates in the camera coordinate system are the same regardless of which calibration object’s coordinate system the point is expressed in, so the camera coordinate system serves as the intermediary.
Assume that we choose the jth small calibration object’s coordinate system as the coordinate system of the composite calibration object. Feature point Pi has homogeneous coordinates Pwi in the ith calibration object’s coordinate system, and Pwj in the composite calibration object’s coordinate system. The homogeneous coordinates of this point in the camera coordinate system are the same in both cases. From Eqs (7) and (8), the transformation relationship between the coordinate system of the ith calibration object and that of the composite calibration object is
\[
\mathbf{P}_{wj} = \mathbf{RT}_j^{-1}\,\mathbf{RT}_i\,\mathbf{P}_{wi}
\tag{12}
\]
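Eq (12) amounts to one 4 × 4 homogeneous transform per small calibration object. A minimal sketch, with hypothetical RT matrices:

```python
import numpy as np

def to_composite(RT_i, RT_j, Pwi):
    """Eq (12): Pwj = RT_j^-1 RT_i Pwi, mapping homogeneous coordinates from
    calibration object i's frame into the composite (object j's) frame.

    RT_i, RT_j are 4x4 homogeneous matrices [R T; 0 1]; solve() avoids
    forming the explicit inverse of RT_j.
    """
    return np.linalg.solve(RT_j, RT_i @ Pwi)

# Hypothetical poses: object i offset 1 m along Xw, object j at the origin.
RT_i = np.eye(4); RT_i[:3, 3] = [1.0, 0.0, 0.0]
RT_j = np.eye(4)
print(to_composite(RT_i, RT_j, np.array([0.0, 0.0, 0.0, 1.0])))
```

Applying this transform to every feature point of every small calibration object yields one consistent coordinate set for the whole composite calibration object.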
Depending on the characteristics of the traffic accident scene, multiple small calibration objects can be placed near the scene of the accident. This method can composite the small calibration objects, and form a calibration area as large as necessary.
Improved 2D-DLT Method
After determining the coordinates of a calibration point in the composite calibration object, the entire region of interest in the accident scene can be covered. Based on a plane calibration object, the 2D-DLT photogrammetry method can be used for accident scene investigation. This can correctly reflect the perspective projection relationship between the space and image planes of the entire scene.
In order to improve the accuracy of the measurement, the effect of lens distortion is considered, and a minimum reprojection error objective function is established as
\[
\sigma = \sum_{i=1}^{n} \left\| \mathbf{q}_i - \mathbf{q}(\mathbf{A}, k_1, k_2, p_1, p_2, \mathbf{P}_{wi}) \right\|^2
\tag{13}
\]
where n is the number of calibration points involved in the calculation, qi denotes the pixel coordinates of the ith feature point on the image plane, and Pwi denotes the composite calibration object coordinates of the ith feature point. The pixel coordinates q(A, k1, k2, p1, p2, Pwi) of feature point Pwi are obtained from the camera model relations. The values of the parameters A, k1, k2, p1, p2, and Pwi that minimize the objective function σ, found using a nonlinear optimization method, are the optimal solution. The process of the improved 2D-DLT method is shown in Fig 3.
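The minimization of Eq (13) can be sketched with a generic nonlinear least-squares solver on synthetic data. For brevity only the focal length and principal point are refined here and distortion is omitted, but the same residual structure extends to k1, k2, p1, p2; all numeric values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic ground truth: f, u0, v0 (illustrative values).
true_params = np.array([1200.0, 1504.0, 1000.0])

# Feature points on the Yw = 0 calibration plane, spaced like a small object.
Pw = np.array([[x, 0.0, z] for x in (0.0, 0.1, 0.2, 0.3)
                            for z in (0.0, 0.1, 0.2, 0.3)])

def reproject(params, Pw):
    """Pinhole projection with R = I, T = (0, 0, 3); no distortion terms."""
    f, u0, v0 = params
    Pc = Pw + np.array([0.0, 0.0, 3.0])
    return np.column_stack([f * Pc[:, 0] / Pc[:, 2] + u0,
                            f * Pc[:, 1] / Pc[:, 2] + v0])

q_obs = reproject(true_params, Pw)      # "observed" pixel coordinates q_i

def residuals(params):
    # Stacked per-coordinate reprojection errors of Eq (13).
    return (reproject(params, Pw) - q_obs).ravel()

# Refine from a deliberately wrong initial guess.
sol = least_squares(residuals, x0=np.array([1000.0, 1400.0, 900.0]))
print(sol.x)
```

In practice the 2D-DLT solution supplies the initial guess, which keeps the iterative refinement of Eq (13) well away from poor local minima.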
According to the above model, a set of experiments was designed to examine the effects of the area covered by the composite calibration object and of the small calibration object’s size on the measurement results. A non-metric consumer-grade camera, i.e., a Nikon D3200, was used in the experiments. The image resolution was 3008 × 2000 pixels; the photography distance was approximately 3.0 m. The measurement experiment site was 4.0 × 4.0 m. The composite calibration object was a square area that consisted of four small calibration objects. The number of feature points on each small calibration object was 4 × 4. The horizontal and vertical spacing of the feature points was 0.1 m, as shown in Fig 4.
In Fig 4, the composite calibration object is made up of four small calibration objects labeled 1, 2, 3, and 4; L is the length of the composite calibration object; the side length of quadrilateral ABCD, 3.0 m, is the measurement distance.
By positioning different small calibration objects, the 2D image rectification was conducted on site, followed by surveying and mapping. Finally, the data was analyzed to determine the relative error of the measuring distance.
Impact of composite calibration object region on measurement results.
The most important feature of a composite calibration object is the flexibility in placing the small calibration objects, which can form composite calibration objects of different sizes depending on the scene of the accident. The following analysis describes the effect of the size of the composite calibration object on the photogrammetry results; the size and quantity of small calibration objects remained constant.
The length L of the composite calibration object varied from 1.0 m to 4.0 m, increasing each time by 1.0 m; four field images were taken, as shown in Fig 5. The image rectified by the proposed correction algorithm is shown in Fig 6. The relative measurement error of each side of quadrilateral ABCD is shown in Table 1.
Four field images were taken as the length L of the composite calibration object varied from 1.0 m to 4.0 m. (a) 1.0 m, (b) 2.0 m, (c) 3.0 m, (d) 4.0 m.
(a), (b), (c), (d) are the rectified images corresponding to Fig 5.
From Fig 6 and Table 1, we observe that the greater the area of the composite calibration object, the better the photogrammetry results. This further confirms the effectiveness of a large composite calibration object created by closely linking the small calibration objects through RTi. Therefore, when conducting a traffic accident scene investigation, small calibration objects should be placed around the scene to form a large composite calibration object and reduce the relative photogrammetric error.
Effect of small calibration object size on calibration results.
The size of the composite calibration object was 4.0 × 4.0 m. The horizontal and vertical spacing Δl of feature points on the small calibration object varied from 0.1 m to 0.25 m, increasing each time by 0.05 m; four field images were taken, as shown in Fig 7. The image rectified by the proposed correction algorithm is shown in Fig 8. The relative measuring error of each side of quadrilateral ABCD is shown in Table 2.
The size of the composite calibration object was 4.0 m × 4.0 m; the horizontal and vertical spacing Δl of feature points on the small calibration object varied from 0.1 m to 0.25 m. (a) 0.1 m, (b) 0.15 m, (c) 0.2 m, (d) 0.25 m.
(a), (b), (c), (d) are the rectified images corresponding to Fig 7.
From Fig 8 and Table 2, with a constant composite calibration object area, the photogrammetry results remain essentially unchanged as the small calibration object’s feature-point spacing increases, as long as the small calibration object remains legible in the image.
In order to verify the superiority of a composite calibration object for photogrammetry, three comparative experiments were conducted. The first group used a composite calibration object according to the proposed method: four small calibration objects were distributed at the four corners of the experiment site, with feature points spaced 0.1 m horizontally and vertically. The second group used a single large calibration object, with feature points spaced 0.25 m horizontally and vertically. The third group used a single small calibration object of the same size as each small object in the composite. Three field images were taken, as shown in Fig 9. The image rectified using the method described in this article is shown in Fig 10. The relative measuring error of each side of quadrilateral ABCD is shown in Table 3.
(a) Four small calibration objects were distributed at the four corners of the experiment site; the horizontal and vertical spacing of the feature points was 0.1 m. (b) Using a single large calibration object; the horizontal and vertical spacing of the feature points was 0.25 m. (c) Using a single small calibration object of the same size as each small object in the composite calibration object.
(a), (b), (c) are the rectified images corresponding to Fig 9.
From Fig 10 and Table 3, the relative error using the composite calibration object for photogrammetry was minimal, far better than the results obtained using a single calibration object. When using a single calibration object, its size had no significant influence on the measurement results; the measuring accuracy was instead strongly influenced by the location of the calibration object.
A comprehensive analysis of the three groups of experimental results showed that the composite calibration object provided a wide field range, with mutual constraint relationships between the feature points. Through these feature points, the solved camera parameters accurately represented the perspective transformation between the camera and the scene plane. A single small calibration object provided a smaller range of feature point distribution, which resulted in lower accuracy when solving the camera model, and a larger photogrammetric error.
Results and Discussion
Fig 11 shows a traffic accident that occurred in 2014; the accident vehicles left braking traces on the road. The composite calibration object consisted of four small calibration objects that were placed around the area of the accident scene. The number of feature points on each small calibration object was 4 × 4, and the horizontal and vertical spacing of the feature points was 0.1 m. The image resolution was 3008 × 2000 pixels. The main goal of this case was to measure the vehicle stop position and the braking trace length, in order to analyze the braking vehicle’s original speed, and other information.
In order to facilitate the comparative analysis, the traditional manual measurement method and the photogrammetry method using the proposed composite calibration object were conducted simultaneously. The image rectified by the proposed correction algorithm is shown in Fig 12.
The main features of the accident scene are clearly reflected in the rectified image. The relative measuring errors of the traditional hands-on method and the proposed method are shown in Table 4.
Fig 13 shows another forensic photograph. We use the same method to analyze this traffic accident scene. The rectified image is shown in Fig 14. The accuracy of the braking trace reconstructed using the proposed method can be evaluated by comparing with the traditional hands-on method. Their relative measuring errors are shown in Table 5.
Using the method of composite small calibration objects for photogrammetry, small calibration objects can be flexibly arranged at the scene of the accident. To meet the measurement accuracy requirements, the small calibration objects must be clearly visible in the image; otherwise, their size must be increased to fit the scene. After calibrating the entire site area, the scene of the accident can be measured using the improved 2D-DLT method, which avoids the influence of human factors. To a certain extent, this method improves the accuracy of traffic accident scene investigation. The feasibility of using a composite calibration object for traffic accident scene investigation is thus demonstrated.
When using a non-metric consumer-grade camera for photogrammetry, the calibration object should cover the entire space. The proportion of the accident site covered by the calibration object, and its location, greatly affect the measurement accuracy. Because traffic accident scenes can extend for dozens of meters, it is generally not practical to produce a calibration object sufficiently large for the scene. An improved 2D-DLT measurement method was proposed, based on a composite of small calibration objects. At a traffic accident scene, multiple small calibration objects were flexibly placed to cover a large region of interest, thereby overcoming the limitations of a single small calibration object.
By maintaining the relative positions and coplanar relationship between small calibration objects, the local coordinate system of each small calibration object was transformed into the coordinate system of the composite calibration object. Thus, the feature point coordinates of the accident scene were obtained. This improved the measurement accuracy when using a non-metric consumer-grade camera for traffic accident investigation. Field experiments and a specific case analysis demonstrated the feasibility of this method. The relative error using the composite calibration object for photogrammetry was far smaller than that obtained using a single calibration object, and a higher field measurement accuracy could be achieved.
Conceived and designed the experiments: QC HGX LDT. Performed the experiments: QC. Analyzed the data: QC LDT. Contributed reagents/materials/analysis tools: QC HGX. Wrote the paper: QC HGX.
- 1. Fenton S, Kerr R. Accident scene diagramming using new photogrammetric technique. SAE Technical Paper 970944, 1997.
- 2. Rucoba R, Duran A, Carr L, Erdeljac D. A three-dimensional crush measurement methodology using two-dimensional photographs. SAE Technical Paper 2008-01-0163, 2008.
- 3. Fraser C, Cronk S, Hanley H. Close-range photogrammetry in traffic incident management. Proceedings of XXI ISPRS congress commission V, WG V/1; 2008. p. 125–128. Available: http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.150.9801&rank=1.
- 4. Randles B, Jones B, Welcher J, Szabo T, Elliott D, MacAdams C. The accuracy of photogrammetry vs. hands-on measurement techniques used in accident reconstruction. SAE Technical Paper 2010-01-0065, 2010.
- 5. DeChant L, Kinney J. A close-range photogrammetric solution working with zoomed images from digital cameras. SAE Technical Paper 2012-01-0612, 2012.
- 6. Tsai RY. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Trans. Rob. Autom. 1987; 3(4): 323–344.
- 7. Abdel-Aziz YI, Karara HM. Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry. Proceedings of the ASP Symposium on Close-Range Photogrammetry; Illinois, Urbana, USA: American Society of Photogrammetry; 1971. p. 1–18. pmid:4077854
- 8. Maybank SJ, Faugeras OD. A theory of self-calibration of a moving camera. Int. J. Comput. Vis. 1992; 8(2): 123–151.
- 9. Jin J, Li X. Efficient camera self-calibration method based on the absolute dual quadric. J. Opt. Soc. Am. A 2013; 30(3): 287–292. pmid:23456104
- 10. Ma SD. A self-calibration technique for active vision systems. IEEE Trans. Rob. Autom. 1996; 12(1): 114–120.
- 11. Hartley RI. Self-calibration of stationary cameras. Int. J. Comput. Vis. 1997; 22(1): 5–23.
- 12. Rahman T, Krouglicof N. An efficient camera calibration technique offering robustness and accuracy over a wide range of lens distortion. IEEE Trans. Image Process. 2012; 21(2): 626–637. pmid:21843988
- 13. Topolšek D, Herbaj EA, Sternad M. The Accuracy Analysis of Measurement Tools for Traffic Accident Investigation. Journal of Transportation Technologies, 2014; 4(01): 84–92.
- 14. Massa D. Using computer reverse projection photogrammetry to analyze an animation, SAE Technical Paper 1999-01-0093, 1999.
- 15. Du X, Jin X, Zhang X, Shen J, Hou X. Geometry features measurement of traffic accident for reconstruction based on close-range photogrammetry. Adv. Eng. Softw. 2009; 40(7): 497–505.
- 16. Toglia A, Stephens GD, Michalski DJ, Rodriguez JL. Applications of PhotoModeler in accident reconstruction. Proceedings of the ASME 2005 International Mechanical Engineering Congress and Exposition; 2005 Nov 5–11; Orlando, Florida, USA: ASME; 2005. p. 21–30.
- 17. Hovey C, Toglia A. Four-point planar homography algorithm for rectification photogrammetry: development and applications. SAE Technical Paper 2013-01-0780, 2013.
- 18. Huo J, Yang N. Calibration of camera with wide field-of-view based on spliced small targets. Infrared and Laser Engineering 2013; 00(06): 1474–1479.
- 19. Shang Y, Sun X, Yang X, Wang X, Yu Q. A camera calibration method for large field optical measurement. Optik—International Journal for Light and Electron Optics, 2013; 124(24): 6553–6558.
- 20. Lin PD, Sung CK. Comparing two new camera calibration methods with traditional pinhole calibrations. Opt. Express 2007; 15(6): 3012–3022. pmid:19532540
- 21. Neale W, Hessel D, Terpstra T. Photogrammetric measurement error associated with lens distortion. SAE Technical Paper 2011-01-0286, 2011.
- 22. Ma Y, Soatto S, Kosecka J, Sastry SS. An invitation to 3-d vision: from images to geometric models. New York: Springer-Verlag; 2004.
- 23. Zhang Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000; 22(11): 1330–1334.