The material in this paper was researched, compiled, and written by J.S. Held. It was originally published by SAE International.
Aerial photoscanning is a software-based photogrammetry method for obtaining three-dimensional site data. Ground Control Points (GCPs) are commonly used as part of this process. These control points are traditionally placed within the site and then captured in aerial photographs from a drone. They are used to establish scale and orientation throughout the resulting point cloud. There are different types of GCPs, and their positions are established or documented using different technologies. Some systems include satellite-based Global Positioning System (GPS) sensors that record the positions of the control points at the scene. Other methods map in the control point locations using LiDAR-based technology such as a total station or a laser scanner.
This paper presents a methodology for utilizing publicly available LiDAR data from the United States Geological Survey (USGS) in combination with high-resolution aerial imagery to establish GCPs based on preexisting site landmarks. This method is tested and compared to accuracies achieved with traditional control point systems.
Photoscanning Photogrammetry
Photoscanning is sometimes referred to as Structure from Motion (SfM) [1], multi-view photogrammetry [2], or photo-based 3D scanning. Photoscanning is a photogrammetric technology that utilizes software to analyze similarities and differences within multiple 2D images from multiple perspectives to calculate depth. The software is then capable of generating millions of 3D points, referred to as a point cloud. This technology is relatively new, with the first well-known photoscanning software titles, such as Agisoft Metashape, Pix4D, and VisualSFM, appearing in 2010 and 2011 [3-5]. Aerial photoscanning is typically accomplished with a drone. Drone-based photoscanning photogrammetry is currently used in many industries including archeology, architecture, film production, mining, and forensics.
Ground Control Points (GCPs)
Photoscanning projects are based on photographs and do not have an inherent scale or orientation in the resulting three-dimensional point cloud. For this reason, a reference is needed for precision aerial mapping. Ground control points (GCPs) are reference points that can be used as tie points within photoscanning software to establish scale and orientation and to improve data quality throughout the resulting point cloud. These GCPs are typically fiducial markers that are placed at the site, captured within the aerial photographs, and mapped to document their positions. Figure 1 is a photograph of a lightweight fiducial GCP measuring 2’ by 2’.
After placing the GCPs at the site, mapping them can be accomplished using a total station or 3D scanner. Another method for establishing GCP locations at the scene is post-processed kinematics (PPK). This is accomplished by recording geo-coordinates at the GCP locations through GPS sensors onboard the control points. Yet another method is referred to as real-time kinematics (RTK). This method is similar to gathering data with a total station but uses GPS technology to determine positions between a base receiver and a rover receiver. The base station remains unmoved, while the rover is moved to each GCP to record its location. It is worth noting that PPK- and RTK-equipped drones are gaining popularity for aerial photoscanning. These methods do not necessarily require GCPs. They do still require a base station to be placed on site, but the drone equipped with the PPK or RTK sensor replaces the rover receiver.
Aerial LiDAR
In 2012, the United States Geological Survey (USGS) started the 3D Elevation Program (3DEP) with the purpose of mapping the United States with aerial LiDAR. The term LiDAR originated as a combination of two words, light and radar, but it is more commonly accepted as an acronym for Light Detection and Ranging [6, 7]. LiDAR data in the form of point clouds is one of many publicly available resources offered through the USGS website. There are also other online resources for aerial LiDAR, including NOAA [8] and OpenTopography [9]. At the end of 2020, nearly 78% of the United States had been mapped by USGS LiDAR. Figure 2 shows USGS LiDAR coverage as of 2021, with areas scheduled for upcoming collection shown in bright green.
Aerial LiDAR is not yet available in all areas, but many areas have multiple dates of historical LiDAR. This has become an excellent resource for the accident reconstruction community. Aerial LiDAR in combination with high resolution aerial imagery and camera matching photogrammetry can be used to create accurate 3D site diagrams. This includes temporal evidence located through camera matching photogrammetry, all without physically visiting the incident site [10]. The elevational accuracy of USGS LiDAR has also been compared to terrestrial scan data sets. On average, approximately 80% of USGS LiDAR points were found to be within 1” of terrestrial scan data, and approximately 95% of USGS points were found to be within 1.6” of terrestrial scan data [10]. Aerial LiDAR has also been used as a supplemental data set to increase the accuracy of camera matching photogrammetry, especially in instances where there is a lack of recognizable features that can readily be documented onsite with a total station or a terrestrial 3D laser scanner [11].
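To illustrate the kind of elevational comparison cited above, the sketch below computes the percentage of points in one cloud whose elevations fall within given thresholds of the horizontally nearest points in a reference cloud. This is a minimal Python sketch, not the exact procedure used in [10]; the function name and the assumption of N x 3 NumPy arrays in feet are ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def elevation_agreement(reference_xyz, test_xyz, thresholds_in=(1.0, 1.6)):
    """Percentage of test points whose elevation is within each threshold
    (inches) of the horizontally nearest reference point. Both inputs are
    N x 3 arrays of X, Y, Z coordinates in feet."""
    tree = cKDTree(reference_xyz[:, :2])      # index reference points by X, Y
    _, idx = tree.query(test_xyz[:, :2])      # nearest reference in plan view
    dz_in = np.abs(test_xyz[:, 2] - reference_xyz[idx, 2]) * 12.0
    return {t: 100.0 * float(np.mean(dz_in <= t)) for t in thresholds_in}

# For data like that cited above, elevation_agreement(terrestrial, usgs)
# would return roughly {1.0: 80.0, 1.6: 95.0}.
```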
GCPs from USGS LiDAR and Aerial Imagery
The methodology described in this paper is focused on the utility of creating GCPs from aerial LiDAR and high-resolution aerial images. This has the potential to change workflows in aerial photoscanning projects and to limit the amount of equipment needed at a site inspection. Like previous research where USGS LiDAR and historical aerial images were used to create 3D diagrams of incident sites [11], this research also incorporates aerial LiDAR and aerial imagery but is focused on obtaining a small number of GCPs throughout the scene.
High-resolution aerial imagery is an excellent 2D resource for an incident site. This imagery provides a unique perspective that is useful for laying out and dimensioning a sequence of events as they occurred in a specific incident. Aerial LiDAR provides 3D data of the incident site, but this resource is not provided at the resolution or point density that is common in terrestrial laser scanning or aerial photoscanning photogrammetry. In combination, high-resolution aerial imagery can provide precise 2D (X, Y) coordinates and aerial LiDAR can provide the necessary elevational (Z) coordinates for obtaining GCPs to be used in aerial photoscanning. Figure 3 demonstrates this with an image showing the process of creating a 3D environment from 2D aerial images and USGS LiDAR [11].
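This combination can be expressed compactly: the X, Y coordinate of a landmark is read from the georeferenced aerial image, and its Z coordinate is estimated from nearby LiDAR returns. The sketch below is one illustrative way to do this, assuming inverse-distance weighting of the k nearest returns; the function and parameter names are ours, not part of any workflow software.

```python
import numpy as np

def gcp_from_image_and_lidar(xy, lidar_xyz, k=8, power=2.0):
    """Build a 3D GCP: X, Y from a georeferenced aerial image, Z estimated
    by inverse-distance weighting of the k horizontally nearest LiDAR
    returns. lidar_xyz is an N x 3 array; units are feet throughout."""
    xy = np.asarray(xy, dtype=float)
    d = np.linalg.norm(lidar_xyz[:, :2] - xy, axis=1)   # plan-view distances
    nearest = np.argsort(d)[:k]                         # k closest returns
    w = 1.0 / np.maximum(d[nearest], 1e-6) ** power     # avoid divide-by-zero
    z = np.sum(w * lidar_xyz[nearest, 2]) / np.sum(w)   # weighted elevation
    return np.array([xy[0], xy[1], z])
```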
Testing Sites
For the purposes of this study, three different sites were selected. These sites were selected based on accessibility and on their varied aerial imagery and USGS LiDAR capture data sets (Table 1).
The first site is a multi-lane asphalt roadway in a business area with a turning lane/median. The second site is a 2-lane concrete road separated by a double yellow lane line. This road curves and borders a reservoir. The third site is also a 2-lane roadway, in a more rural setting. This area of roadway is straight, and the lanes transition from double yellow to a passing lane in one direction (Figure 4).
Baseline Data Collection
Six ground control points with eyelets for staking them to the ground, as seen in Figure 1, were placed at each scene. These were placed off road on each side of the roadway, covering the area selected for aerial photoscanning. The locations for each of these were documented using a Sokkia Set5 30R total station with a pole-mounted prism. This onsite method established known GCPs at each site to be used in creating a baseline photoscanning point cloud (Figure 5).
Aerial Photography for Photoscanning
After placing and surveying the onsite GCPs, a DJI Mavic 2 Pro drone was used to take aerial photographs at each of the sites. This was accomplished using PIX4Dcapture with one double-grid mission at 100 feet AGL with the default setting of a 70° camera angle, where 90° would be straight down or nadir (Figure 6).
Online Data Collection and GCPs from USGS LiDAR
USGS LiDAR was downloaded for all three of the sites as LiDAR point clouds of elevation source data in .LAZ file format. Aerial images from Nearmap were also downloaded for all three sites. The Nearmap aerial image from site 3 had visible perspective distortion due to elevational features. This distortion was evident as unusual curvature in the roadway edges on an otherwise straight roadway. For this reason, an aerial image from Google Earth was used for site 3. The aerial images were then aligned to the USGS LiDAR in Autodesk’s AutoCAD 2022 by tracing visible features over large distances in the USGS LiDAR and then aligning the aerial images to these traced features. This alignment was visually assessed, and the ruler measuring tool in Google Earth was used to verify the scale of the aerial images. Note that geo-coordinates can also be used for alignment by associating the geo-coordinates from each source. Once aligned, visual landmarks in the aerial image were chosen along with corresponding 3D point data from the aerial LiDAR. The USGS LiDAR data set did not have sufficient resolution to choose a 3D point directly corresponding to the visual landmarks selected to be GCPs. For this reason, it was necessary to create 3D linework connecting the nearest aerial LiDAR data points and then to repeatedly bisect that linework at its midpoints until a point could be established on the visual landmark. These 3D GCPs based on USGS LiDAR and aerial imagery were then recorded for use in the photoscanning software. Figures 7-9 demonstrate this process of bisecting linework from 3D USGS LiDAR points on top of an aerial image to determine a GCP location based on a visual landmark.
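The midpoint-bisection step can be mimicked numerically. The sketch below, under our own naming and the assumption that the landmark lies on (or very near) the plan-view projection of the traced segment, halves a 3D segment between two LiDAR points until the midpoint lands on the landmark's X, Y position, yielding a LiDAR-derived Z.

```python
import numpy as np

def bisect_to_landmark(p1, p2, landmark_xy, tol_ft=0.01, max_iter=64):
    """Repeatedly bisect the 3D segment p1-p2, keeping the half whose
    endpoint is closer in plan view to the landmark, until the midpoint
    lies on the landmark's X, Y position. Returns the full 3D point,
    whose Z is interpolated from the LiDAR endpoints. Units in feet."""
    a, b = np.asarray(p1, float), np.asarray(p2, float)
    t = np.asarray(landmark_xy, float)
    for _ in range(max_iter):
        m = (a + b) / 2.0
        if np.linalg.norm(m[:2] - t) < tol_ft:
            return m
        if np.linalg.norm(a[:2] - t) < np.linalg.norm(b[:2] - t):
            b = m          # landmark is in a's half; discard b's half
        else:
            a = m
    return (a + b) / 2.0   # best midpoint if tolerance was not reached
```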
Photoscanning Photogrammetry
There are many professional photoscanning photogrammetry software titles. For the purposes of this paper, Agisoft Metashape Professional was used. Aerial photographs taken with the drone were aligned within the software. After aligning, the 6 total station based GCPs, in X, Y, Z format, were imported into the software. These points were then assigned to their corresponding positions within the point cloud by choosing the fiducials for each as visible within the photographs. After referencing the position in a few photographs, the software can estimate where the same reference point appears within the remaining photographs. These estimated positions can then be refined and positioned directly on the fiducial. After refining the calculated positions within each of the corresponding photographs, a dense cloud was generated. Metashape calculates confidence in the data and provides this as a filtering method within the software. While it is unclear to the authors how the software calculates the confidence of data points within the point cloud solution, this filtering method is useful for minimizing noise and can also be useful for separating foliage from ground. After confidence-based filtering, the dense point cloud was exported in “.pts” format for comparison to the USGS and aerial image based method for obtaining GCPs. This same process was followed for all three of the sites. Table 2 compares the point counts before and after filtering and the software-estimated GCP error per site. Figures 10-12 show the full point clouds and results after confidence filtering.
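Metashape Professional also exposes this workflow through a Python scripting API. The sketch below assumes the 1.x-era API (call names and enum values vary across versions, so treat it as an approximation of the interactive steps described above, with hypothetical file paths); marker projections would still be refined onto the fiducials interactively.

```python
import Metashape  # Agisoft Metashape Professional scripting module

# Hypothetical image paths; in practice these are the drone photographs
photo_paths = ["DJI_0001.JPG", "DJI_0002.JPG", "DJI_0003.JPG"]

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(photo_paths)

# Align photographs (solves camera positions from image overlap)
chunk.matchPhotos(downscale=1)
chunk.alignCameras()

# Import GCPs as markers from CSV rows of name, X, Y, Z; projections are
# then refined onto the fiducials photograph-by-photograph in the GUI
chunk.importReference("gcps.csv", format=Metashape.ReferenceFormatCSV,
                      columns="nxyz", delimiter=",")
chunk.updateTransform()

# Dense cloud with per-point confidence recorded for later filtering
chunk.buildDepthMaps(downscale=2)
chunk.buildDenseCloud(point_confidence=True)
chunk.exportPoints("site.pts", format=Metashape.PointsFormatPTS)
```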
After processing all three scenes with the total-station based GCPs, the same method was followed within Agisoft Metashape processing, but the previously recorded X, Y, Z locations from USGS LiDAR and aligned aerial imagery were imported into the software as GCPs. Similar to the total station based GCPs, 6 GCPs were located for each of the three sites. After confidence filtering, USGS LiDAR based point clouds from each of the sites were again exported for comparison to the total station based point clouds. The same photographs were used in the USGS based GCP data set for each scene, and the resulting point clouds were similar. Table 3 compares the point counts before and after filtering and the software estimated GCP error per site.
Overview of Methodology
The processes described in this methodology can be summarized in the following steps:
1. Download USGS LiDAR point cloud data for the site.
2. Download high-resolution aerial imagery for the site.
3. Align the aerial imagery to the USGS LiDAR and verify its scale.
4. Select visual landmarks in the aerial imagery to serve as GCPs.
5. Establish a 3D coordinate for each landmark by bisecting linework created from the nearest USGS LiDAR points.
6. Photograph the site with a drone using an overlapping flight pattern.
7. Import the GCP coordinates into the photoscanning software and assign them to the corresponding landmarks within the photographs.
8. Generate the dense point cloud and filter it by confidence.
USGS LiDAR GCPs vs. Total Station Mapped GCPs
The point clouds from the total-station based GCPs and the USGS LiDAR based GCPs were imported into CloudCompare and aligned using the equivalent point pair tool. Prior to alignment, any remaining areas of foliage and grass were removed, leaving the roadway and sidewalk/shoulder areas for comparison. To assess any elevational differences between the two data sets, only the translation and yaw (Z-rotation) parameters were adjusted during registration. Roll (X-rotation), pitch (Y-rotation), and scale remained unaltered, as determined by the respective GCP methods in the photoscanning point cloud solutions. Because the same photographs were used in both the total-station and the USGS LiDAR data sets, the fiducials placed at the scene served as a visual check for assessing any issues with the registration. After alignment, the points nearest the GCP fiducials in the total station point clouds for sites 1, 2, and 3 were selected, and X, Y, Z coordinates were recorded. The same was done for the point clouds for sites 1, 2, and 3 with GCPs based on USGS LiDAR (Figure 13).
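The constrained registration amounts to solving for a translation plus a rotation about Z only. For reference, a minimal least-squares version of that solve (our own formulation; the study itself used CloudCompare's point-pair tool) is sketched below for paired N x 3 coordinate arrays.

```python
import numpy as np

def align_yaw_translation(src_xyz, dst_xyz):
    """Least-squares alignment of paired 3D points restricted to a
    translation plus a rotation about Z (yaw); roll, pitch, and scale
    are left untouched, mirroring the constrained registration."""
    sc, dc = src_xyz.mean(axis=0), dst_xyz.mean(axis=0)
    s, d = src_xyz[:, :2] - sc[:2], dst_xyz[:, :2] - dc[:2]
    # Closed-form yaw from the 2D cross and dot covariance sums
    theta = np.arctan2(np.sum(s[:, 0] * d[:, 1] - s[:, 1] * d[:, 0]),
                       np.sum(s[:, 0] * d[:, 0] + s[:, 1] * d[:, 1]))
    c, n = np.cos(theta), np.sin(theta)
    R = np.array([[c, -n, 0.0], [n, c, 0.0], [0.0, 0.0, 1.0]])
    t = dc - R @ sc
    return R, t  # apply as: aligned = src_xyz @ R.T + t
```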
Site 1: GCP Comparison
The differences in X, Y, and Z axes between the total station based GCPs and the USGS LiDAR based GCPs were calculated for each of the three sites. For site 1, the average 2D difference (X, Y) was 2.6 inches, and the average 3D difference (X, Y, Z) was 6.6 inches. Figure 14 shows the points selected from each data set with the total station based GCPs on the left and the USGS based GCPs on the right. Table 4 shows the distances calculated for each of the GCPs on each axis. Figure 15 is a graph showing these differences.
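The 2D and 3D differences reported here and in Tables 4-6 are the horizontal and full Euclidean distances between corresponding GCP points. A minimal sketch of that calculation, assuming paired N x 3 arrays of coordinates in feet, follows.

```python
import numpy as np

def gcp_differences_in(total_station_xyz, usgs_xyz):
    """Average 2D (X, Y) and 3D (X, Y, Z) distances, in inches, between
    corresponding GCP points from the two methods (inputs in feet)."""
    delta = usgs_xyz - total_station_xyz
    d2 = np.linalg.norm(delta[:, :2], axis=1) * 12.0  # horizontal, inches
    d3 = np.linalg.norm(delta, axis=1) * 12.0         # full 3D, inches
    return d2.mean(), d3.mean()
```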
Site 2: GCP Comparison
For site 2, the average 2D difference (X, Y) was 0.8 inches, and the average 3D difference (X, Y, Z) was 4.8 inches. Figure 16 shows the points selected from each data set with the total station based GCPs on the left and the USGS based GCPs on the right. Table 5 shows the distances calculated for each of the GCPs on each axis. Figure 17 is a graph showing these differences.
Site 3: GCP Comparison
For site 3, the average 2D difference (X, Y) was 3.2 inches, and the average 3D difference (X, Y, Z) was 6.5 inches. Figure 18 shows the points selected from each data set with the total station based GCPs on the left and the USGS based GCPs on the right. Table 6 shows the distances calculated for each of the GCPs on each axis. Figure 19 is a graph showing these differences.
For all three sites, the average 2D difference (X, Y) was 2.2 inches, and the average 3D difference (X, Y, Z) was 5.9 inches (Figure 17).
Based on the results achieved through this study, the authors believe that the methodology presented for obtaining GCPs through USGS LiDAR and aerial imagery will prove useful to the accident reconstruction community. Using USGS LiDAR in combination with aerial imagery is a good method for obtaining GCPs for photoscanning photogrammetry projects. This method is an alternative to purchasing, placing, documenting, shipping, and/or traveling with traditional onsite GCPs. Additionally, this method allows for more GCPs, and more flexible spacing, than fiducials traditionally placed at a site. Without the need for traditional field-placed hardware, any number of GCPs can be placed using landmarks visible within aerial imagery.
As noted in previous research related to USGS LiDAR and aerial imagery [10], there are potential limitations that also apply to this methodology. Terpstra (2019) noted:
“Aerial imagery must be available with a resolution high enough to uniquely distinguish features to be used in camera matching. Aerial images can contain perspective distortion based on the incidence angle of the camera when the photograph was taken. This distortion is prevalent in scenes with significant elevation differences and particularly over larger distances. The USGS LiDAR data sets are not imagery based and are therefore not subject to perspective distortion. Inability to align an aerial with USGS LiDAR can be an indicator of perspective distortion. The alignment can be used as a method for evaluating if perspective distortion is present within an aerial image.
“Similarly, USGS LiDAR must be available. While untested, it may be possible to achieve similar results with lower resolution USGS LiDAR point clouds. However, as the 3DEP program progresses, higher resolution will become available throughout the United States.”
We would like to thank Toby Terpstra, Nathan McKelvey, Eric King, Charles King, and Alireza Hashemian for providing insight and expertise that greatly assisted in this research.
Toby Terpstra is a Senior Visualization Analyst in J.S. Held’s Accident Reconstruction Practice. He specializes in 3D analysis, site reconstruction, photogrammetry, video analysis, visibility, interactive media, and 3D visualization. He is currently the lead instructor of an ACTAR accredited course offered by the Society of Automotive Engineers titled “Photogrammetry and Analysis of Digital Media”. Mr. Terpstra has also taught and conducted research on topics such as onsite photogrammetry, photoscanning, video analysis, videogrammetry, lens distortion, LiDAR, body-worn cameras, trajectory rod documentation, acoustics, and 3D visualization.
Toby can be reached at [email protected] or +1 303 733 1888.
Nathan McKelvey is a Visualization Analyst in J.S. Held’s Accident Reconstruction Practice.
Nathan can be reached at [email protected] or +1 303 733 1888.
Eric King is a Visualization Analyst in J.S. Held’s Accident Reconstruction Practice.
Eric can be reached at [email protected] or +1 303 733 1888.
Charles King is a Senior Technician in J.S. Held’s Accident Reconstruction Practice. Mr. King applies his educational experience in mechanical engineering and more than 10 years of experience within the automotive field to the reconstruction of traffic accidents. Mr. King’s areas of expertise include vehicle dynamics, failure analysis, automotive diagnostics and repair, shop management, and 12V systems. Mr. King is skilled in scene investigation, evidence collection and analysis from vehicles and sites, laser metrology, video analysis, photogrammetry, and engineering dynamics analysis. He has investigated and assisted in the reconstruction of complex crashes involving multiple vehicles, heavy trucks, pedestrians, bicycles, and motorcycles.
Charles can be reached at [email protected] or +1 407 707 4986.
3DEP: Three-Dimensional Elevation Program
AGL: Above Ground Level (Elevation)
ASPRS: American Society of Photogrammetry and Remote Sensing
GCPs: Ground Control Points
GPS: Global Positioning System
LiDAR: Portmanteau for light and radar, or an acronym for Light Detection and Ranging
Photoscanning: A photogrammetric application where multiple (typically many) photographs with significant overlap in subject matter are imported into software that solves for each camera location and creates a 3D point cloud (also referred to as multi-view photogrammetry)
Photogrammetry: Defined by ASPRS as: The art, science, and technology of obtaining reliable information about physical objects and the environment through processes of recording, measuring, and interpreting photographic images and patterns of recorded radiant electromagnetic energy and other phenomena.
Point cloud: Large numbers (typically millions) of 3D data points commonly obtained through 3D scanning or photoscanning
USGS: United States Geological Survey
[1] Luhmann, Thomas. Close-Range Photogrammetry and 3D Imaging. 2nd ed. De Gruyter, 2014.
[2] Terpstra, T., Voitel, T., and Hashemian, A., "A Survey of Multi-View Photogrammetry Software for Documenting Vehicle Crush," SAE Technical Paper 2016-01-1475, 2016, https://doi.org/10.4271/2016-01-1475.
[3] “Metashape.” Wikipedia. Wikimedia Foundation, April 29, 2021. https://en.wikipedia.org/wiki/Metashape.
[4] Mitchell, Michael. “EPFL Spinoff Turns Thousands of 2D Photos into 3D Images,” May 9, 2011. https://actu.epfl.ch/news/epfl-spinoff-turns-thousands-of-2d-photos-into-3d-/.
[5] Wu, Changchang. “VisualSFM: A Visual Structure from Motion System,” 2011. http://ccwu.me/vsfm/.
[6] Jensen, John R. Remote Sensing of the Environment: An Earth Resource Perspective. Harlow: Pearson, 2013.
[7] Lillesand, Tom M., Ralph W. Kiefer, and Jonathan W. Chipman. Remote Sensing and Image Interpretation. New York: Wiley, 1999.
[8] “Data Access Viewer.” NOAA. Accessed October 17, 2021. https://coast.noaa.gov/dataviewer/#/lidar/search/.
[9] “Data Catalog.” OpenTopography. Accessed October 17, 2021. https://portal.opentopography.org/dataCatalog.
[10] Terpstra, T., Dickinson, J., Hashemian, A., and Fenton, S., "Reconstruction of 3D Accident Sites Using USGS LiDAR, Aerial Images, and Photogrammetry," SAE Technical Paper 2019-01-0423, 2019, https://doi.org/10.4271/2019-01-0423.
[11] Terpstra, T., Dickinson, J., and Hashemian, A., "Using Multiple Photographs and USGS LiDAR to Improve Photogrammetric Accuracy," SAE Int. J. Trans. Safety 6(3):193-216, 2018, https://doi.org/10.4271/2018-01-0516.