Accuracy of Rectifying Oblique Images to Planar and Non-Planar Surfaces


The material in this paper was researched, compiled, and written by J.S. Held. It was originally published by SAE International.

Abstract

Emergency personnel and first responders have the opportunity to document crash scenes while evidence is still fresh. The growth of the drone market and the efficiency of documentation with drones have led to an increasing prevalence of aerial photography at incident sites. These photographs are generally of high resolution and contain valuable information, including roadway evidence such as tire marks, gouge marks, debris fields, and vehicle rest positions. Being able to accurately map the evidence visible in the photographs is a key step in creating a scaled crash-scene diagram, and image rectification serves as a quick and straightforward method for producing a scaled diagram. This study evaluates the precision of the photo rectification process under diverse roadway geometry conditions and varying camera incidence angles.

Introduction/Background

Accident reconstructionists use scaled diagrams to analyze the events in a crash. A scaled diagram gives an overview of an accident scene from a top-down vantage, which can be used to analyze what was happening before and after impact. There are many methods for creating scaled diagrams for accident reconstruction purposes. Often, the only documentation of evidence is photographs, creating the need to use photogrammetry to create a scaled diagram. The American Society for Photogrammetry and Remote Sensing (ASPRS) defines “photogrammetry” as “the science and technology of obtaining reliable information about physical objects and the environment through the process of recording, measuring, and interpreting photographic images” [1]. There are many types of photogrammetry with forensic applications, including grid photogrammetry [2], two-dimensional analytical photogrammetry [3], two-dimensional rectification photogrammetry [4], single image three-dimensional photogrammetry [5], onsite photogrammetry [6], camera reverse projection photogrammetry [7], camera matching photogrammetry [8], multi-view or photoscanning photogrammetry [9, 10, 11], video tracking photogrammetry [12], videogrammetry [13], and more. Of these, perhaps the simplest to both understand and apply is two-dimensional photogrammetry, and specifically two-dimensional image rectification.

In 1996, Pepe et al. [3] demonstrated the use of two-dimensional analytical photogrammetry to obtain discrete evidence locations from photographs. The authors reported that an accuracy of within 4 inches was achieved on a tire mark longer than 40 feet. In reporting the accuracy of transferring the photograph to a planar coordinate system, the authors noted that the process “becomes less accurate as the flat surface assumption becomes less appropriate.” One of the tests in this work was performed on a non-planar surface where the roadway was crowned. The authors suggest that there were higher levels of deviation in this non-planar example than in the planar example, but the amount of deviation was not reported.

In 1997, Cliff et al. [14] used two-dimensional photogrammetry software titled PC-Rect to rectify photographs and compared the results by overlaying the rectified images onto a scale drawing of the testing area. In tests where photographs were taken at a typical height of approximately 1.6 m, the authors reported achieving less than 1% error over distances of 20 m, increasing to the “2% to 4% range” when measuring at distances between 30 and 35 m. In a second set of tests, the authors increased the camera height to 2.42 m and reported a decrease in error to approximately 1% over a distance of 30 m. In addition to considering the angle of incidence (approximately 85° to 87°) by analyzing two camera heights, the authors also investigated how the variables of camera lenses and focal lengths would affect the accuracy. They noted distortion when wide lenses were used and that, in such cases, if the evidence to be rectified was near the edges of the photograph, the resulting position would be “suspect.” While not directly referring to this distortion as lens distortion, this is the first known SAE publication to consider the effect of lens distortion on photogrammetric accuracy. Subsequent publications have not only quantified amounts of distortion and the effect these amounts can have on analyses such as speed calculations [15] but have also presented multiple methods of solving for and correcting lens distortion [16]. It is also worth noting that when the paper was written (1997), digital photographs were not commonplace, and additional errors from creating the bitmap or digitizing the photographs were suggested but could not be isolated and quantified by the authors.

In 2013, Hovey et al. [4] performed two four-point image rectification studies using a survey and photographs with known evidence locations, as well as a third, “controlled setting” study where known dimensions were also available. Image rectification in PhotoModeler was then used to determine the length of roadway evidence and was found to be accurate within approximately one percent (or one foot) over one hundred feet. In a controlled interior space of approximately twenty-four square feet consisting of two-foot gridded floor tile, they calculated “error for the point sampled is 0.0263 feet/feet = 2.63%.” The authors also noted that “The shallower the camera angle is to the subject, the more sensitive the results will be to pixel selection. Conversely, the more perpendicular the camera angle is to the subject, the less sensitive the results will be to pixel selection.”

These studies provide useful information regarding two-dimensional analytical and two-dimensional rectification photogrammetry and demonstrate their application to accident reconstruction. They also validate the use of PC-Rect and PhotoModeler for these purposes. Both of these software titles have many features beyond what is required for two-dimensional rectification, and both are relatively expensive. In contrast, this paper presents three rectification methods, including two software titles that are both stand-alone and open-source. Previous studies do not fully separate variables and evaluate error sources, specifically the camera's angle of incidence and the amount of curvature in non-planar surfaces. Likewise, these prior works do not correct for lens distortion before performing two-dimensional rectification. Lens distortion can have an adverse effect and should be considered in any photogrammetric process; common lens distortion amounts, the effect of lens distortion on photogrammetric processes, and methods to solve for and correct lens distortion have been well documented [15, 16]. For this reason, the variable of lens distortion was removed from consideration in this study by using a computer-modeled environment. The computer-modeled environment also allowed for precise incremental changes in the non-planar surface and the placement of cameras within the computer model at specific distances and angles to the surface. A real-world scene was also documented with drone imagery to demonstrate this methodology and to further understand the effect of non-planar surfaces.

Perspective Projection/Image Rectification

Perspective projection, or image rectification, is a type of geometric transformation in which points of the image (pixels) are transferred to another image plane by rotating, scaling, and translating each pixel (Figure 1). This transformation can be modeled by Equation 1:

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \sim \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \qquad (1)$$

where x′ and y′ are the new coordinates of the point (x, y) after the projective transformation. The 3 × 3 matrix (also known as the homography matrix) has eight unknown variables. To solve for this matrix, the coordinates of four or more corresponding points are required. Once the homography matrix is calculated, it can be used to transform any point (pixel) in the original image to the output image. Equation 1 holds for all sets of corresponding points as long as they lie on the same plane.
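Expanding Equation 1 shows why four points suffice: after dividing through by the third homogeneous coordinate, each correspondence (x, y) ↔ (x′, y′) contributes two equations that are linear in the eight unknowns,

$$x'(h_{31}x + h_{32}y + 1) = h_{11}x + h_{12}y + h_{13}$$
$$y'(h_{31}x + h_{32}y + 1) = h_{21}x + h_{22}y + h_{23}$$

so four correspondences, no three of which are collinear, yield the eight equations needed to determine the matrix.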

The image rectification technique can be used to create orthographic images of an accident scene, which can then be used to create a scaled diagram. Figure 2 shows an example of a rectified image using four control points.

Methodology

For this paper, three image rectification tools, ImageJ [17], OpenCV [18], and Matlab [19], were chosen to study the accuracy of image rectification with respect to camera elevation, incident angle, and various roadway geometries. After defining four control points in the input image and the corresponding points in the output image, the homography matrix was automatically calculated by these software titles and used to determine the new coordinates of all pixels within the image. Rectified images were created with each of the three software titles, and all of them generated the same rectified image when using the same control points. To verify this, the rectified images were layered in Adobe Photoshop to study any potential differences between the outputs of each software title. While minor differences may exist due to the different resampling algorithms within the software titles, any differences in the rectified images that were evaluated were negligible. For this reason, the OpenCV software library with the Python programming language was used to generate all subsequent rectified images. OpenCV was also chosen for its ability to record the coordinates of reference points for each rectification scenario.
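The core of this workflow reduces to a few OpenCV calls. The following is a minimal sketch of the approach described above; the file names and the control-point coordinates are hypothetical placeholders, not values from the study.

```python
import cv2
import numpy as np

# Load the rendered (or photographed) oblique image; path is hypothetical.
image = cv2.imread("rendered_oblique.jpg")

# Four control points in the input image (pixel coordinates of the four
# marked roadway corners) and their corresponding locations in the output
# image. At 22 px per foot, a 24 ft x 140 ft roadway maps to 3080 x 528 px.
# These example coordinates are illustrative only.
src = np.float32([[812, 1450], [4630, 1390], [5210, 3105], [355, 3260]])
dst = np.float32([[0, 0], [3080, 0], [3080, 528], [0, 528]])

# Solve the 3x3 homography from the four correspondences...
H = cv2.getPerspectiveTransform(src, dst)

# ...and resample every pixel of the input into the rectified plane.
rectified = cv2.warpPerspective(image, H, (3080, 528))
cv2.imwrite("rectified.jpg", rectified)

# The same homography also maps discrete reference points, which is how
# per-point coordinates can be recorded for each rectification scenario.
point = np.float32([[[1204, 1988]]])        # hypothetical evidence pixel
print(cv2.perspectiveTransform(point, H))   # its rectified location
```

If more than four control points are available, cv2.findHomography solves the same matrix in a least-squares sense, which can reduce the sensitivity to pixel selection noted by Hovey et al. [4].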

Figure 1 – Original image (left) and after transformation (right).


Figure 2 – Original image (top) and after rectification (bottom).


For the analysis, an idealized 3D computer model was created in Autodesk 3D Studio Max with real-world dimensions. This computer model contained roadway geometry measuring 24 ft by 140 ft. The roadway was subdivided into 1 ft × 1 ft squares with a checkerboard pattern. This checkerboard pattern was then used to determine and quantify the potential pixel displacement after rectification. A top-down view of this roadway geometry is shown in Figure 3.

The four corners of the roadway geometry were marked with different colors to be used as references when defining the control points. These pixel coordinates were recorded in a spreadsheet for use in the code (Figure 4).

Figure 3 – Top-down view of the modeled roadway geometry.



Figure 4 - One corner of the roadway geometry.


To evaluate the effect that non-planar surfaces have on two-dimensional image rectification, the roadway geometry was initially modeled to represent a flat surface and then modified to represent roadways with different cross slopes. According to the U.S. Department of Transportation Federal Highway Administration, the normal cross slope is between 1.5% and 2%, not exceeding 4% [20]. For this study, roadway geometries with parabolic cambers (cross slopes) of 1%, 3%, and 4% were modeled (Figure 5).
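For readers reproducing the modeled geometry, a parabolic camber is straightforward to generate. The sketch below assumes the stated cross slope is the average fall from crown to edge; the paper does not state the exact crown equation used in the 3D model.

```python
import numpy as np

def crown_elevation(x, half_width=12.0, cross_slope=0.03):
    """Elevation (ft) of a parabolic crown at transverse offset x (ft).

    The crown height c is chosen so the average slope from centerline
    to edge equals the stated cross slope: c = cross_slope * half_width.
    """
    c = cross_slope * half_width
    return c * (1.0 - (x / half_width) ** 2)

# Elevation profile across the 24 ft roadway at a 3% cross slope:
# the crown sits 0.36 ft (about 4.3 in) above the edges.
for x in np.linspace(-12.0, 12.0, 7):
    print(f"offset {x:6.1f} ft -> elevation {crown_elevation(x):.3f} ft")
```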

To evaluate the effect a camera's angle of incidence may have on the accuracy of two-dimensional image rectification, an array of twenty-five virtual cameras facing the roadway at different elevations and incident angles was placed in the computer model. Cameras were placed facing the roadway at five different incident angles (15°, 30°, 45°, 60°, and 70°) measured from vertical (the roadway surface normal), and at five different horizontal angles (0°, 30°, 45°, 75°, and 90°) (Figure 6). The camera properties for all twenty-five cameras were set within the software to represent a 24 mm lens with a field of view of 53°, and the distance of each camera to the center of the roadway geometry was approximately 200 feet. Figure 6 is a top-down view showing the five different horizontal camera angles. Figure 7 is an elevation, or side, view showing the five different camera incident angles. Figure 8 shows the same twenty-five cameras and their orientation to the roadway surface from a three-quarter perspective view.
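The camera array can be reproduced with simple spherical-coordinate placement. This is a sketch under the assumptions above (incident angle measured from vertical, all cameras aimed at the center of the roadway); the exact aim point and coordinate convention are not specified in the paper.

```python
import numpy as np

def camera_position(incident_deg, horizontal_deg, distance=200.0):
    """Camera location (x, y, z) in feet, aimed at the origin.

    incident_deg: angle of the view axis from vertical (surface normal).
    horizontal_deg: plan-view angle of the camera around the roadway.
    """
    i = np.radians(incident_deg)
    a = np.radians(horizontal_deg)
    r = distance * np.sin(i)   # horizontal standoff from the aim point
    z = distance * np.cos(i)   # elevation above the roadway surface
    return (r * np.cos(a), r * np.sin(a), z)

# The twenty-five combinations used in the study:
for incident in (15, 30, 45, 60, 70):
    for horizontal in (0, 30, 45, 75, 90):
        x, y, z = camera_position(incident, horizontal)
        print(f"I={incident:2d}  A={horizontal:2d}  "
              f"pos=({x:7.1f}, {y:7.1f}, {z:6.1f}) ft")
```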

Figure 5 – Comparison of different roadway profiles.


Figure 6 – Top-down view of the cameras at five different horizontal angles.


Figure 7 – Elevation or side view of the five different camera incidence angles.


Images of the roadway surface were rendered from each of the cameras at a resolution of 5472 × 3648 pixels, similar to that of a DJI Mavic 2 Pro camera. These images were saved in .jpg format at a high-quality setting. Figure 9 is a grid of all twenty-five renderings with a flat roadway surface.

The geometry of the roadway surface was then modified from a flat surface to a crowned surface with a cross slope of 1%, and renderings were created again from each of the twenty-five cameras. The roadway geometry was then modified to have a 3% and then a 4% cross slope, after which the same twenty-five cameras were used to create separate renderings for each.

Figure 8 – Perspective view of all twenty-five cameras and their orientation to the roadway surface.


Figure 9 – Rendered roadway geometry from different cameras.


After creating all 75 renderings of the varied roadway surfaces from varied camera angles, each rendered image was loaded into Adobe Photoshop to evaluate the coordinates of the marked corners to be used in the rectification process. OpenCV libraries were used with the Python programming language to generate rectified images from each of the rendered images. The resulting rectified images were then compared with a rendering of the baseline roadway geometry to quantify the maximum displacement. The grid pattern on the roadway surface served as a visual representation of displacement and provided a way to quantify the amount of shift throughout the image. Each square in the baseline roadway grid pattern was a 22 × 22 block of pixels at a scale of 1 ft × 1 ft. As an example, Figure 10 shows a portion of a rectified image, overlaid with 50% transparency on the orthographic rendering of the roadway represented by a green grid (baseline). It is apparent that the shift is greatest at the center of the roadway, away from the reference points. The maximum displacement in terms of pixels was recorded for each of the twenty-five camera scenarios and converted to units of displacement in inches.
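Converting a measured pixel shift to real-world units is a single scale factor, since each 1 ft grid square spans 22 pixels in the baseline. A minimal helper:

```python
PX_PER_FOOT = 22.0  # each 1 ft x 1 ft grid square is 22 x 22 px in the baseline

def shift_in_inches(shift_px):
    """Convert a measured pixel displacement to inches."""
    return shift_px * 12.0 / PX_PER_FOOT

# e.g., a 35-pixel shift corresponds to roughly 19.1 inches, on the order
# of the worst case reported below for the 4% cross slope / 70° scenario.
print(round(shift_in_inches(35), 1))
```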

Figure 10 – Portion of rectified image overlaid on baseline with 50% transparency (non-flat roadway).

Analysis/Results

Flat Roadway

In comparing the rectified images from the flat terrain surface to the baseline image of the roadway geometry, it was noted that for the idealized flat roadway there was no shift in the grid. This demonstrates that with a flat surface the rectification error is insignificant. Figure 11 shows the overlay of the rectified image from the camera at a 45° incident angle and 45° horizontal angle; there is no shift in the grid.

Non-Flat Roadway

To quantify the amount of error that would be introduced based on the varied roadway geometry, the same comparison method was used to evaluate rectified images of the roadway geometry with 1%, 3% and 4% cross slope.

Figure 11 – Portion of Rectified image overlaid on top-down render (Flat Terrain).


Figure 12 – Maximum displacement shown in inches for the varied camera incident angles and roadway cross slope models.


Figure 13 – The pixel shift in rectified images increases as incident angles increase. Roadway surface with 3% cross slope and incident angles of 15°, 30°, 45°, 60°, and 70° from top to bottom.


Figure 12 shows the maximum displacement in inches for the three different roadway geometries across all twenty-five camera scenarios. It is evident that as the cross slope of the road and the incident angle of the camera increase, the error associated with the pixel shift of the rectified images increases.

Consider the following two examples:

  1. At a 70° incident angle from a roadway surface with a 4% cross slope, the rectification error leads to a maximum shift of approximately 19.4 inches.
  2. At a 15° incident angle from a roadway surface with a 3% cross slope, the rectification error leads to a maximum shift of approximately 2.7 inches.

Figure 13 shows the pixel shift after projection onto the terrain with a 3% cross slope at different incident angles. The green grid represents the orthogonal view of the checkerboard pattern, and the overlaid black-and-white grid represents the rectified checkerboard pattern.

In general, the image rectification process can be summarized in the following steps (a brief sketch of step 2 follows the list):

  1. Identify the photograph(s) to be used for image projection/rectification.
  2. Consider, and correct for lens distortion.
  3. Acquire high-resolution aerial imagery with a known scale.
  4. Note/ label common points in the aerial image and the photograph chosen for rectification.
  5. Identify common points on both images within the image rectification software.
  6. Rectify the image(s) and visually compare results to the aerial imagery.
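Step 2 is the easiest to overlook. Where calibration parameters for the camera are available, lens distortion can be removed before rectification. The following is a minimal sketch with placeholder intrinsics; actual values must come from a calibration of the specific camera, for example via the methods discussed in [15, 16].

```python
import cv2
import numpy as np

# Placeholder intrinsics for a 5472 x 3648 image: focal lengths and
# principal point in pixels. These are hypothetical, not calibrated values.
K = np.array([[3666.0,    0.0, 2736.0],
              [   0.0, 3666.0, 1824.0],
              [   0.0,    0.0,    1.0]])
dist = np.array([-0.12, 0.04, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

image = cv2.imread("oblique_photo.jpg")        # hypothetical path
undistorted = cv2.undistort(image, K, dist)    # remove radial/tangential distortion
cv2.imwrite("oblique_photo_undistorted.jpg", undistorted)
```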

Real-World Study

To further understand the effect of cross slope on 2D image rectification and to validate the overall accuracy of the rectification process, a real-world study was also analyzed. For this study, a multilane roadway with two lanes in each direction was chosen. This roadway was selected due to its visible crowning, where the cross slope was found to be approximately 3%, indicating a consistent transverse incline. Additionally, the longitudinal slope of the road was determined to be close to zero, suggesting a relatively flat profile along the direction of travel.

Figure 14 – One of the twelve sprayed markers (P7).


Figure 15 - Subject roadway and the marked control points.

Data Collection

A DJI Mavic 2 Pro was deployed to capture a series of high-resolution (5472 × 3648 pixels), 8-bit (automatic lens correction) aerial photographs. The drone was positioned at horizontal angles of approximately 0°, 45°, and 90° relative to the roadway, at altitudes of approximately 55, 113, and 139 feet above ground level, corresponding approximately to incident angles of 70°, 45°, and 30°. This approach ensured coverage of a stretch of roadway that included twelve randomly distributed markings representing roadway evidence such as tire marks, gouge marks, or debris fields. These markings were spray-painted onto the roadway using a 2-inch by 2-inch square template, with the spray color being white for better contrast with the roadway. Figure 14 shows a zoomed-in view of one of these markings visible in a photograph taken by the drone.

Additionally, four control points (marked A, B, C, and D) were deliberately positioned on the roadway's edge, precisely at the termination points of selected tar markings, for use in the rectification process. These tar markings, often associated with road maintenance, served as distinctive landmarks for the study. The distance from point A to point B was approximately 176 ft, and from point B to point C approximately 48 ft.

Figure 16 – Cross section of the roadway.


Figure 17 – Orthogonal top-down rendering of the roadway with highlighted markings (Points 1-12).


Figure 15 shows the subject roadway and the marked control points used in the rectification process. In addition to capturing aerial photographs, a Faro Focus laser scanner was utilized to precisely capture the roadway geometry along with the precise locations of each marking. The point cloud created from the scanner was analyzed to determine the cross slope of the roadway at three different sections. The findings revealed a relatively consistent cross slope measuring approximately 3% (Figure 16).
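Determining a cross slope from scan data amounts to examining a transverse slice of the point cloud. The sketch below assumes such a slice has been exported as transverse offset and elevation pairs in feet; the values shown are a synthetic section consistent with a 3% parabolic crown, not data from the study (which used sections taken in the Faro software).

```python
import numpy as np

# Hypothetical transverse section: offset from centerline (ft), elevation (ft).
offset = np.array([-12.0, -8.0, -4.0, 0.0, 4.0, 8.0, 12.0])
elev = np.array([0.00, 0.19, 0.31, 0.36, 0.32, 0.20, 0.01])

# Average cross slope of each half: crown-to-edge drop over the half-width.
half_width = 12.0
crown = elev[offset == 0.0][0]
slope_left = (crown - elev[0]) / half_width
slope_right = (crown - elev[-1]) / half_width
print(f"cross slope: {slope_left:.1%} (left), {slope_right:.1%} (right)")
# -> approximately 3% on each side for this synthetic section
```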

Subsequently, the data obtained from the Faro scanner facilitated the creation of an orthogonal top-down rendering of the roadway (Figure 17). This rendering not only offered a highly accurate depiction of the roadway, complete with markings, but also functioned as a reference for evaluating the accuracy of the rectified images. This comparison allowed for a direct assessment of the rectified images against the ground truth established by the Faro scanner.


Figure 18 – Drone aerial photographs. Top to bottom: horizontal angles 0°, 45°, 90°; left to right: incident angles 30°, 45°, 70°.

Figure 19 – Rectified aerial photographs.

Rectification Process

All nine aerial photographs captured by the drone (Figure 18) were rectified using ImageJ. This process involved defining four control points (A, B, C, and D) in the input image and their corresponding counterparts in the output. Since the output was the orthogonal top-down rendering of the Faro scan, the coordinates of each control point in the output remained consistent across all nine rectifications.

Figure 19 illustrates the same nine aerial photographs after the rectification process.

Figure 20 – Top: original location of the 12 marker points; middle: location of the markers after rectification; bottom: overlaid images and the corresponding locations of the markers.


Figure 21 – Point displacement (70° incident angle).

Comparison and Analysis

Following rectification, all rectified images were overlaid on and compared to the orthogonal top-down view of the scene scan.

The marked points in each of the nine rectified images were connected to form a closed polygon for visual comparison and for measuring the displacement of each point after rectification. Figure 20 exemplifies this process, showing the original location of the markers on the Faro scene scan, the new location of the markers after rectification, and the rectified image with the corresponding marker polygon overlaid on the top-down view of the scene scan at fifty percent transparency. The rectified image in this example is from the photo taken at a 70° incident angle.
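The fifty percent transparency overlays used throughout these comparisons are straightforward to generate programmatically. A minimal sketch with hypothetical file names, assuming both images are already at the same scale and pixel dimensions:

```python
import cv2

# Ground-truth top-down rendering (from the Faro scan) and a rectified photo;
# both must share the same size and alignment for the blend to be meaningful.
baseline = cv2.imread("faro_topdown.png")
rectified = cv2.imread("rectified_70deg.png")

# Blend the two images at 50% opacity each, as in Figures 10, 11, and 20.
overlay = cv2.addWeighted(baseline, 0.5, rectified, 0.5, 0.0)
cv2.imwrite("overlay_check.png", overlay)
```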

Figure 22 – Point displacement (45° incident angle).


Figure 23 – Point displacement (30° incident angle).

Results

Analysis of the nine rectified images revealed distinct displacement patterns for each of the twelve marked points in every image. The observed range of displacement spanned from 0.3 inches to 36.7 inches across the studied points on the roadway with a 3% cross slope.

Photos taken at a 70° incident angle exhibited the maximum displacement, ranging from a minimum of 1.3 inches to a maximum of 36.7 inches, with an average displacement of 17.4 inches (Figure 21).

Photos taken at a 45° incident angle exhibited significantly less displacement than those at a 70° incident angle, ranging from a minimum of 0.4 inches to a maximum of 19.2 inches, with an average displacement of 8.4 inches (Figure 22).

Photos taken at a 30° incident angle exhibited the least displacement, ranging from a minimum of 0.3 inches to a maximum of 15.9 inches, with an average displacement of 7.0 inches (Figure 23).

The analysis also showed that the horizontal angle of the camera with respect to the roadway (A) (see Figure 6) had little influence on overall accuracy, although certain points exhibited better accuracy (less displacement) while others showed comparatively poorer accuracy (more displacement). For example, Points 3 and 4 (P3, P4) experienced less displacement as the horizontal angle of the camera with respect to the roadway increased, whereas Points 8, 9, 10, 11, and 12 experienced more displacement as that angle increased.

In all scenarios, points that were closer to the edges of the rectification plane (ABCD edge) experienced less displacement (P5, P6 and P9) while points at the center of the plane displaced more (P1, P7 and P12). Additionally, points situated farther from both the camera and control points encountered the most substantial displacement (P1 and P4).

Overall, the displacement of the points varies and is attributed to several parameters, including the incident angle of the camera, the distance of the point from the camera, the proximity to control points, and the specific location of the point with respect to the slope of the road. These factors collectively contribute to the observed variations in displacement, highlighting the complexity of the spatial dynamics involved. Notably, minimizing the incident angle reduces the displacement of points after rectification and thus yields more accurate results.

Discussion

With respect to non-planar surfaces, this research shows an increase in error with an increased camera incidence angle, making the methodology more suited for aerial imagery than ground-based photographs.

Given the opportunity to take photographs of an incident site for later use in image rectification, vertical images with a camera incidence angle of close to 0° are preferable to minimize potential rectification errors. Furthermore, in the course of this study, control points were chosen along the edges of the roadway for ease of identification. Nevertheless, it is imperative to recognize that selecting control points at the center of the roadway, assuming identifiable points exist in both images, may yield different results.

Additionally, the presence of a crest in the roadway can impact the outcomes of the study. The non-planar nature introduced by the elevation change associated with a crest introduces another potential source of variation, necessitating careful consideration when interpreting the results.

Moreover, while there may be potential to develop a more intricate rectification technique based on the terrain's geometry, achieving accurate rectification results depends significantly on having precise knowledge of both the roadway's geometry and the camera's position and orientation relative to the roadway. Without accurate information about the camera's parameters, the terrain's geometry alone is not sufficient for developing an accurate rectification technique.

Summary/Conclusions

The image rectification process involves a mathematical geometric transformation applied to digital images. ImageJ, OpenCV, and Matlab all utilize these mathematical transformations and yielded comparable results.

In instances where the photographed surface is flat, there is minimal, if any, measurable error associated with image rectification. This is true regardless of the horizontal and vertical angles from which the photograph was taken.

In instances where the surface has been photographed vertically from above, such that the camera incidence angle is 0°, there is minimal, if any, measurable error associated with image rectification. This is true if the photographed surface is flat, as well as if the photographed surface contains cross slopes within the typical range defined by the FHWA.

In instances where photographs of a surface with cross slope are not taken vertically, rectification errors become more pronounced as the cross slope of the road and the incident angle of the camera increase. Conversely, reducing the incident angle of the camera decreases rectification errors, highlighting the sensitivity of rectification accuracy to variations in the cross slope and the incident angle of the camera.

References

[1] American Society for Photogrammetry and Remote Sensing (ASPRS), “What is ASPRS? Definitions.”

[2] Kerkhoff, J., “Photographic Techniques for Accident Reconstruction,” SAE Technical Paper 850248 (1985). https://doi.org/10.4271/850248.

[3] Pepe, M., Grayson, E., and McClary, A., “Digital Rectification of Reconstruction Photographs,” SAE Technical Paper 961049 (1996). https://doi.org/10.4271/961049.

[4] Hovey, C. and Toglia, A., “Four-Point Planar Homography Algorithm for Rectification Photogrammetry: Development and Applications,” SAE Technical Paper 2013-01-0780 (2013). https://doi.org/10.4271/2013-01-0780.

[5] Terpstra, T., Hashemian, A., Gillihan, R., King, E. et al., “Accuracies in Single Image Camera Matching Photogrammetry,” SAE Technical Paper 2021-01-0888 (2021). https://doi.org/10.4271/2021-01-0888.

[6] Terpstra, T., Beier, S., and Neale, W., “The Application of Augmented Reality to Reverse Camera Projection,” SAE Technical Paper 2019-01-0424 (2019). https://doi.org/10.4271/2019-01-0424.

[7] Jorgensen, M., Swinford, S., and Jones, B., “Validation of Vehicle Speed Analysis Utilizing the iNPUT-ACE Camera Match Overlay Tool,” SAE Int. J. Adv. & Curr. Prac. In Mobility 4, no. 1 (2022): 78-85. https://doi.org/10.4271/2021-01-0877.

[8] Terpstra, T., Dickinson, J., Hashemian, A., and Fenton, S., “Reconstruction of 3D Accident Sites Using USGS LiDAR, Aerial Images, and Photogrammetry,” SAE Technical Paper 2019-01-0423 (2019). https://doi.org/10.4271/2019-01-0423.

[9] Terpstra, T., Voitel, T., and Hashemian, A., “A Survey of Multi-View Photogrammetry Software for Documenting Vehicle Crush,” SAE Technical Paper 2016-01-1475 (2016). https://doi.org/10.4271/2016-01-1475.

[10] Carter, N., Hashemian, A., Rose, N., and Neale, W., “Evaluation of the Accuracy of Image Based Scanning as a Basis for Photogrammetric Reconstruction of Physical Evidence,” SAE Technical Paper 2016-01-1467 (2016). https://doi.org/10.4271/2016-01-1467.

[11] Carter, N., Hashemian, A., Rose, N., and Neale, W., “Evaluation of the Accuracy of Image Based Scanning as a Basis for Photogrammetric Reconstruction of Physical Evidence,” SAE Technical Paper 2016-01-1467 (2016). https://doi.org/10.4271/2016-01-1467.

[12] Neale, W., Fenton, S., McFadden, S., and Rose, N., “A Video Tracking Photogrammetry Technique to Survey Roadways for Accident Reconstruction,” SAE Technical Paper 2004-01-1221 (2004). https://doi.org/10.4271/2004-01-1221.

[13] Bailey, A.M., Sherwood, C.P., Funk, J.R., Neale, W. et al., “Characterization of Concussive Events in Professional American Football Using Videogrammetry,” Ann Biomed Eng 48 (2020): 2678-2690, doi:10.1007/s10439-020-02637-3.

[14] Cliff, W., Maclnnis, D., and Switzer, D., “An Evaluation of Rectified Bitmap 2D Photogrammetry with PC-Rect,” SAE Technical Paper 970952 (1997). https://doi.org/10.4271/970952.

[15] Neale, W., Hessel, D., and Terpstra, T., “Photogrammetric Measurement Error Associated with Lens Distortion,” SAE Technical Paper 2011-01-0286 (2011). https://doi.org/10.4271/2011-01-0286.

[16] Terpstra, T., Miller, S., and Hashemian, A., “An Evaluation of Two Methodologies for Lens Distortion Removal when EXIF Data is Unavailable,” SAE Technical Paper 2017-01-1422 (2017). https://doi.org/10.4271/2017-01-1422.

[17] ImageJ, National Institutes of Health, U.S. Department of Health and Human Services. Available at: https://imagej.nih.gov/ij/index.html (Accessed: October 31, 2022).

[18] OpenCV. Available at: https://opencv.org/ (Accessed: October 31, 2022).

[19] Matlab-MathWorks. Available at: https://www.mathworks.com/ (Accessed: October 31, 2022).

[20] U.S. Department of Transportation Federal Highway Administration, “Mitigation Strategies for Design Exceptions: Cross-Slope.” Available at: https://safety.fhwa.dot.gov/geometric/pubs/mitigationstrategies/chapter3/3_crossslope.cfm.

Appendix

Maximum displacement of rectified images taken at different incident angles (I), different horizontal camera angles with respect to the road (A), and different road cambers (1%, 3%, and 4%).

Drone Photographs

Rectified Drone Photographs
