Aerial Photoscanning with Ground Control Points from USGS LiDAR

The material in this paper was researched, compiled, and written by J.S. Held. It was originally published by SAE International.

Abstract

Aerial photoscanning is a software-based photogrammetry method for obtaining three-dimensional site data. Ground Control Points (GCPs) are commonly used as part of this process. These control points are traditionally placed within the site and then captured in aerial photographs from a drone. They are used to establish scale and orientation throughout the resulting point cloud. There are different types of GCPs, and their positions are established or documented using different technologies. Some systems include satellite-based Global Positioning System (GPS) sensors that record the position of the control points at the scene. Other methods include mapping in the control point locations using LiDAR-based technology such as a total station or a laser scanner.

This paper presents a methodology for utilizing publicly available LiDAR data from the United States Geological Survey (USGS) in combination with high-resolution aerial imagery to establish GCPs based on preexisting site landmarks. This method is tested and compared to accuracies achieved with traditional control point systems.

Introduction/Background

Photoscanning Photogrammetry

Photoscanning is sometimes referred to as Structure from Motion (SfM) [1], Multiview photogrammetry [2], or photo-based 3D scanning. Photoscanning is a photogrammetric technology that utilizes software to analyze similarities and differences within multiple 2D images from multiple perspectives to calculate depth. The software is then capable of generating millions of 3D points, referred to as a point cloud. This technology is relatively new, with the first well-known photoscanning software titles such as Agisoft Metashape, Pix4D, and VisualSFM appearing in 2010 and 2011 [3-5]. Aerial photoscanning is typically accomplished with a drone. Drone-based photoscanning photogrammetry is currently used in many industries including archeology, architecture, film production, mining, and forensics.

Ground Control Points (GCPs)

Photoscanning projects are based on photographs and do not have an inherent scale or orientation in the resulting three-dimensional point cloud. For this reason, a reference is needed for precision aerial mapping. Ground control points (GCPs) are reference points that can be used as tie points within photoscanning software to inform the resulting point cloud of scale, orientation, and to improve quality of data throughout the point cloud. These GCPs are typically fiducial markers that are placed at the site, captured within the aerial photographs, and mapped to document their positions. Figure 1 is a photograph of a lightweight fiducial GCP measuring 2’ by 2’.

Figure 1 - Ground Control Point (GCP) staked to ground.

After placing the GCPs at the site, mapping them can be accomplished using a total station or a 3D scanner. Another method for establishing GCP locations at the scene is post-processed kinematics (PPK), accomplished by recording geo-coordinates at the GCP locations through onboard GPS. Yet another method is referred to as real-time kinematics (RTK). This method is similar to gathering data with a total station but uses GPS technology to determine positions between a base receiver and a rover receiver. The base station remains unmoved, while the rover is moved to each GCP to record its location. It is worth noting that PPK- and RTK-equipped drones are gaining popularity for aerial photoscanning. These methods do not necessarily require GCPs. They do still require a base station to be placed on site, but the drone equipped with the PPK or RTK sensor replaces the rover receiver.

Aerial LiDAR

In 2012 the United States Geological Survey (USGS) started the 3D Elevation Program (3DEP) with the purpose of mapping the United States with aerial LiDAR. The term LiDAR originated as a combination of two words, light and radar, but it is more commonly accepted as an acronym for Light Detection and Ranging [6, 7]. LiDAR data in the form of point clouds is one of many publicly available resources offered through the USGS website. There are also other online resources for aerial LiDAR, including NOAA [8] and Open Topography [9]. At the end of 2020, nearly 78% of the United States had been mapped by USGS LiDAR. Figure 2 shows USGS LiDAR coverage as of 2021, with areas scheduled for upcoming collection shown in bright green.

Figure 2 - USGS LiDAR coverage through 3DEP as of September 2021.

Aerial LiDAR is not yet available in all areas, but many areas have multiple dates of historical LiDAR. This has become an excellent resource for the accident reconstruction community. Aerial LiDAR in combination with high resolution aerial imagery and camera matching photogrammetry can be used to create accurate 3D site diagrams. This includes temporal evidence located through camera matching photogrammetry, all without physically visiting the incident site [10]. The elevational accuracy of USGS LiDAR has also been compared to terrestrial scan data sets. On average, approximately 80% of USGS LiDAR points were found to be within 1” of terrestrial scan data, and approximately 95% of USGS points were found to be within 1.6” of terrestrial scan data [10]. Aerial LiDAR has also been used as a supplemental data set to increase the accuracy of camera matching photogrammetry, especially in instances where there is a lack of recognizable features that can readily be documented onsite with a total station or a terrestrial 3D laser scanner [11].

GCPs from USGS LiDAR and Aerial Imagery

The methodology described in this paper is focused on the utility of creating GCPs from aerial LiDAR and high-resolution aerial images. This has the potential for changing workflows in aerial photoscanning projects and limiting the amount of equipment needed at a site inspection. Like previous research where USGS LiDAR and historical aerial images were used to create 3D diagrams of incident sites [11], this research also incorporates aerial LiDAR and aerial imagery but is focused on obtaining a small number of GCPs throughout the scene.

High-resolution aerial imagery is an excellent 2D resource for an incident site. This imagery provides a unique perspective that is useful for laying out and dimensioning a sequence of events as they occurred in a specific incident. Aerial LiDAR provides 3D data of the incident site, but this resource is not provided at the resolution or point density that is common in terrestrial laser scanning or aerial photoscanning photogrammetry. In combination, high-resolution aerial imagery can provide precise 2D (X, Y) coordinates and aerial LiDAR can provide the necessary elevational (Z) coordinates for obtaining GCPs to be used in aerial photoscanning. Figure 3 demonstrates this with an image showing the process of creating a 3D environment from 2D aerial images and USGS LiDAR [11].

Figure 3 - From top to bottom, 1) NearMap aerial image, 2) 2D vector lines traced on aerial image, 3) Aerial traced 2D vector lines and USGS LiDAR, 4) Resulting 3D environment with vector lines projected onto surfaced ground mesh. Reused with author permission [11].
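To make this pairing concrete, the sketch below derives a 3D GCP by converting a landmark's pixel position in a georeferenced aerial image to world X, Y using world-file style affine parameters, then sampling a Z elevation from nearby LiDAR points. This is an illustrative sketch only, not the authors' tooling: the affine parameters and the `idw_elevation` helper are hypothetical, and inverse-distance weighting is just one simple way to sample a sparse LiDAR ground surface.

```python
# Sketch: derive a 3D GCP by pairing 2D world coordinates from a
# georeferenced aerial image with an elevation sampled from LiDAR.
# The affine parameters and helper names here are hypothetical.

def pixel_to_world(col, row, a, d, b, e, c, f):
    """Apply an ESRI world-file style affine transform:
    x = a*col + b*row + c ; y = d*col + e*row + f."""
    return a * col + b * row + c, d * col + e * row + f

def idw_elevation(x, y, lidar_points, k=4, eps=1e-9):
    """Inverse-distance-weighted Z from the k nearest LiDAR points.
    lidar_points: iterable of (x, y, z) ground returns."""
    nearest = sorted(lidar_points,
                     key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)[:k]
    weights = [1.0 / ((((p[0] - x) ** 2 + (p[1] - y) ** 2) ** 0.5) + eps)
               for p in nearest]
    return sum(w * p[2] for w, p in zip(weights, nearest)) / sum(weights)
```

A landmark picked in the image would then become the GCP `(x, y, idw_elevation(x, y, lidar))`.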
This research compares GCPs for photoscanning photogrammetry obtained from aerial imagery and aerial LiDAR to a more traditional collection method in which GCPs are physically placed at the scene, photographed with a drone, and then documented using a total station. The accuracy with which GCPs can be placed using aerial imagery and LiDAR is described within the results and conclusions section. The potential benefits of the proposed methodology for aerial photoscanning can be summarized as follows:

  1. Time savings: No physical placement of GCPs at the extents of a site during a site inspection.
  2. Safety: Less time spent on a site inspection placing GCPs next to roadways.
  3. Access: GCPs can be placed in areas of sites where physical GCPs could not typically be placed (e.g., roadways).
  4. Shipping and travel: No GCPs, or equipment needed to document GCP locations, to transport to the site inspection.
  5. Expense: Efficiency of time at the site inspection, shipping savings, and less equipment to purchase.
  6. Unlimited GCPs: Any number of GCPs can be added to the solution based on visible landmarks.

Methodology

Testing Sites

For the purposes of this study, three different sites were selected. These sites were selected based on accessibility and varied aerial and USGS LiDAR capture data sets (Table 1).

Table 1 - Site differences include selected area, USGS LiDAR collection dates, aerial imagery dates, amount of elevation change in the selected area, and number of points in the point clouds.

The first site is a multi-lane asphalt roadway in a business area with a turning lane/median. The second site is a 2-lane concrete road separated by a double yellow lane line. This road curves and borders a reservoir. The third is also a 2-lane roadway in a more rural setting. This area of roadway is straight, and the lanes transition from double yellow to a passing lane in one direction (Figure 4).

Figure 4 - Site locations 1, 2, and 3 in order from top to bottom.

Baseline Data Collection

Six ground control points with eyelets for staking them to the ground, as seen in Figure 1, were placed at each scene. These were placed off road on each side of the roadway, covering the area selected for aerial photoscanning. The locations for each of these were documented using a Sokkia Set5 30R total station with a pole-mounted prism. This onsite method established known GCPs at each site to be used in creating a baseline photoscanning point cloud (Figure 5).

Figure 5 - GCP locations placed on site and documented using a total station.

Aerial Photography for Photoscanning

After placing and surveying the onsite GCPs, a DJI Mavic 2 Pro drone was used to take aerial photographs at each of the sites. This was accomplished using PIX4Dcapture with one double-grid mission at 100 feet AGL with the default setting of a 70° camera angle, where 90° would be straight down or nadir (Figure 6).

Figure 6 - Top: DJI Mavic 2 Pro drone, Bottom: Pix4D Capture double grid mission at site 2.

Online Data Collection and GCPs from USGS LiDAR

USGS LiDAR was downloaded for all three sites as LiDAR point clouds of elevation source data in .LAZ file format. Aerial images from Nearmap were also downloaded for all three sites. The Nearmap aerial from site 3 had visible perspective distortion due to elevational features, noted by unusual curvature in the roadway edges on an otherwise straight roadway. For this reason, an aerial image from Google Earth was used for site 3. The aerial images were then aligned to the USGS LiDAR in Autodesk AutoCAD 2022 by tracing visible features over large distances in the USGS LiDAR and then aligning the aerial images to these traced features. This alignment was visually assessed, and the ruler measuring tool in Google Earth was used to verify the scale of the aerial images. Note that geo-coordinates can also be used for alignment by associating the geo-coordinates from each source. Once aligned, visual landmarks in the aerial image were chosen along with corresponding 3D point data from the aerial LiDAR. The USGS LiDAR data set did not have sufficient resolution to choose a 3D point directly corresponding to the visual landmarks selected as GCPs. For this reason, it was necessary to create 3D linework connecting the nearest aerial LiDAR data points and then to repeatedly bisect this linework at its midpoints until a point could be established on the visual landmark. These 3D GCPs based on USGS LiDAR and aerial imagery were then recorded for use in the photoscanning software. Figures 7-9 demonstrate this process of bisecting linework from 3D USGS LiDAR points on top of an aerial image to determine a GCP location based on a visual landmark.
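The midpoint-bisection idea can be sketched numerically: draw a 3D segment between two nearby LiDAR points and repeatedly narrow it toward the position whose plan view (X, Y) lands on the landmark chosen from the aerial image, with the Z value interpolated from the LiDAR endpoints. A minimal sketch under those assumptions; the function names are illustrative, not from the authors' CAD workflow.

```python
import math

def lerp(p, q, t):
    """Linear interpolation between two 3D points."""
    return tuple(pi + t * (qi - pi) for pi, qi in zip(p, q))

def xy_dist(p, target_xy):
    """Plan-view (X, Y) distance from a 3D point to the landmark."""
    return math.hypot(p[0] - target_xy[0], p[1] - target_xy[1])

def bisect_to_landmark(p0, p1, target_xy, iters=60):
    """Repeatedly narrow the segment [p0, p1] toward the point whose
    plan-view position is closest to the landmark; the returned 3D
    point inherits its Z from the interpolated LiDAR endpoints."""
    a, b = 0.0, 1.0
    for _ in range(iters):
        m1 = a + (b - a) / 3.0
        m2 = b - (b - a) / 3.0
        # Keep the half-interval whose probe midpoint is nearer the landmark.
        if xy_dist(lerp(p0, p1, m1), target_xy) < xy_dist(lerp(p0, p1, m2), target_xy):
            b = m2
        else:
            a = m1
    return lerp(p0, p1, (a + b) / 2.0)
```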

Figure 7 - Site 1, GCP selection using landmarks visible in aerial imagery and USGS LiDAR.
Figure 8 - Site 2, GCP selection using landmarks visible in aerial imagery and USGS LiDAR.
Figure 9 - Site 3, GCP selection using landmarks visible in aerial imagery and USGS LiDAR.

Photoscanning Photogrammetry

There are many professional photoscanning photogrammetry software titles. For the purposes of this paper, Agisoft Metashape Professional was used. Aerial photographs taken with the drone were aligned within the software. After aligning, the six total station based GCPs, in X, Y, Z format, were imported into the software. These points were then assigned to the corresponding position within the point cloud by choosing the fiducials for each as visible within the photographs. After referencing the position on a few photographs, the software can estimate the location of the same reference point within the remaining photographs. These estimated positions can then be refined and positioned directly on the fiducial. After refining the calculated positions within each of the corresponding photographs, a dense cloud was generated. Metashape calculates confidence in the data and provides this as a filtering method within the software. While it is unclear to the authors how the software calculates the confidence of data points within the point cloud solution, this filtering method is useful for minimizing noise and can also be useful for separating out foliage from ground. After confidence-based filtering, the dense point cloud was exported in “.pts” format for comparison to the USGS and aerial image based method for obtaining GCPs. This same process was followed for all three of the sites. Table 2 compares the point counts before and after filtering and the software estimated GCP error per site. Figures 10-12 show the full point clouds and results after confidence filtering.

Table 2 - Photoscanning point cloud points from total station based GCPs, including total points, points after confidence filtering, and software estimated GCP error per site.
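The confidence filtering and export step can be illustrated generically. Since the paper notes that Metashape's confidence metric is not documented, the sketch simply assumes each point carries a numeric confidence value; the threshold and the minimal "count header, then x y z per line" .pts layout are illustrative assumptions, not the software's exact behavior.

```python
# Sketch: threshold a point cloud on a per-point confidence value and
# write a minimal .pts file. Points are (x, y, z, confidence) tuples;
# the confidence values and threshold are illustrative only.

def filter_by_confidence(points, min_conf):
    """Keep only points whose confidence meets the threshold."""
    return [p for p in points if p[3] >= min_conf]

def write_pts(path, points):
    """Write a minimal .pts file: point count, then one 'x y z' per line."""
    with open(path, "w") as fh:
        fh.write(f"{len(points)}\n")
        for x, y, z, _conf in points:
            fh.write(f"{x} {y} {z}\n")
```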

After processing all three scenes with the total-station based GCPs, the same method was followed within Agisoft Metashape processing, but the previously recorded X, Y, Z locations from USGS LiDAR and aligned aerial imagery were imported into the software as GCPs. Similar to the total station based GCPs, 6 GCPs were located for each of the three sites. After confidence filtering, USGS LiDAR based point clouds from each of the sites were again exported for comparison to the total station based point clouds. The same photographs were used in the USGS based GCP data set for each scene, and the resulting point clouds were similar. Table 3 compares the point counts before and after filtering and the software estimated GCP error per site.

Table 3 - Photoscanning point cloud points from USGS based GCPs, including total points, points after confidence filtering, and software estimated GCP error per site.
Figure 10 - Site 1, photoscanning with GCPs from total station. Top to bottom: Full point cloud, Full point cloud by confidence, Confidence filtered, resulting point cloud.
Figure 11 - Site 2, photoscanning with GCPs from total station. Top to bottom: Full point cloud, Full point cloud by confidence, Confidence filtered, resulting point cloud.
Figure 12 - Site 3, photoscanning with GCPs from total station. Top to bottom: Full point cloud, Full point cloud by confidence, Confidence filtered, resulting point cloud.


Overview of Methodology

The processes described in this methodology can be summarized in the following steps:

  1. Determine incident location, the area to be documented using aerial photoscanning, and record GPS coordinates.
  2. Use GPS coordinates to find the highest resolution aerial imagery available for that area.
  3. Use GPS coordinates to locate aerial LiDAR and download point cloud(s) that cover the area.
  4. Align the aerial image to the aerial LiDAR in X, Y (plan view).
  5. Determine landmarks within the aerial imagery to be used as GCPs, encompassing the photoscanning area.
  6. Create an order to the chosen GCPs and record X, Y, and Z coordinates for each.
  7. Conduct the site inspection and fly drone photoscanning mission(s) over the pre-determined area, ensuring that chosen GCP landmarks are captured in the photographs.
  8. Import pre-determined GCP location data into photoscanning photogrammetry software.
  9. Process the drone based aerial photographs using imported GCPs in the photoscanning photogrammetry software.
  10. Create, filter (optional), and export photoscanning photogrammetry point cloud.
  11. Verify overall scale and orientation of photoscanning point cloud to the aerial LiDAR data from step 3.
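Step 6 above, recording X, Y, and Z for each chosen GCP, ultimately produces a small delimited file for import in step 8. A minimal sketch, assuming the photogrammetry package accepts a simple label, X, Y, Z text file; the exact column order and units expected vary by software, and the coordinates below are invented for illustration.

```python
import csv

# Hypothetical GCP coordinates recorded from aligned aerial imagery
# (X, Y) and USGS LiDAR (Z); the values are illustrative only.
gcps = [
    ("GCP1", 1001.25, 2250.70, 104.31),
    ("GCP2", 1088.10, 2249.95, 103.88),
]

def write_gcp_file(path, points):
    """Write label, X, Y, Z rows for import as ground control references."""
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["label", "x", "y", "z"])
        writer.writerows(points)
```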

Results

USGS LiDAR GCPs v. Total Station Mapped GCPs

The point clouds from the total-station based GCPs and the USGS LiDAR based GCPs were imported into CloudCompare and aligned using the equivalent point pair tool. Prior to alignment, any remaining areas of foliage and grass were removed, leaving the roadway and sidewalk/shoulder areas for comparison. In order to assess any elevational differences between the two data sets, only the translation and yaw (Z-rotation) parameters were adjusted during registration. Roll (X), pitch (Y), and scale remained unaltered, as determined by the respective GCP methods in the photoscanning point cloud solutions. Because the same photographs were used in both the total-station and the USGS LiDAR data sets, the fiducials placed at the scene served as a visual check for any issues with the registration. After alignment, the points nearest the GCP fiducials in the total station point clouds for sites 1, 2, and 3 were selected, and X, Y, Z coordinates were recorded. The same was done for the point clouds for sites 1, 2, and 3 with GCPs based on USGS LiDAR (Figure 13).

Figure 13 - Site 1, GCP fiducial markers points chosen from photoscanning point cloud for comparison between GCPs documented using a total station and GCPs created from USGS LiDAR.
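The constrained registration described above, translation plus yaw only with roll, pitch, and scale locked, has a closed-form least-squares solution from the matched point pairs. The sketch below is not the CloudCompare implementation, just a minimal statement of that constrained fit.

```python
import math

def fit_yaw_translation(src, dst):
    """Least-squares rigid fit constrained to rotation about Z (yaw)
    plus translation; scale, roll, and pitch are left untouched.
    src, dst: equal-length lists of matched (x, y, z) points.
    Returns (yaw_radians, (tx, ty, tz))."""
    n = len(src)
    sc = [sum(p[i] for p in src) / n for i in range(3)]  # source centroid
    dc = [sum(p[i] for p in dst) / n for i in range(3)]  # target centroid
    num = den = 0.0
    for s, d in zip(src, dst):
        sx, sy = s[0] - sc[0], s[1] - sc[1]
        dx, dy = d[0] - dc[0], d[1] - dc[1]
        num += sx * dy - sy * dx   # cross terms drive the rotation angle
        den += sx * dx + sy * dy
    yaw = math.atan2(num, den)
    c, s_ = math.cos(yaw), math.sin(yaw)
    tx = dc[0] - (c * sc[0] - s_ * sc[1])
    ty = dc[1] - (s_ * sc[0] + c * sc[1])
    tz = dc[2] - sc[2]   # pure vertical shift; no roll/pitch adjustment
    return yaw, (tx, ty, tz)
```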

Site 1: GCP Comparison

The differences in X, Y, and Z axes between the total station based GCPs and the USGS LiDAR based GCPs were calculated for each of the three sites. For site 1, the average 2D difference (X, Y) was 2.6 inches, and the average 3D difference (X, Y, Z) was 6.6 inches. Figure 14 shows the points selected from each data set with the total station based GCPs on the left and the USGS based GCPs on the right. Table 4 shows the distances calculated for each of the GCPs on each axis. Figure 15 is a graph showing these differences.

Figure 14 - Site 1 comparison points chosen on fiducials for total station based point cloud on left, and USGS based point cloud on the right.
Table 4 - Site 1 difference in inches between total station based GCPs and USGS LiDAR based GCPs.
Figure 15 - Site 1 difference in inches between total station based GCPs and USGS LiDAR based GCPs per axis.
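The per-axis, 2D, and 3D differences reported for each site reduce to straightforward arithmetic on matched coordinate pairs. A sketch, assuming both GCP sets are expressed in a common site reference frame (in inches); the helper name is illustrative.

```python
import math

def gcp_deltas(a, b):
    """Differences between a total station GCP and its USGS LiDAR
    counterpart: per-axis magnitudes, plan-view (2D) distance, and
    full (3D) distance. a, b: (x, y, z) in a shared frame."""
    dx, dy, dz = (abs(ai - bi) for ai, bi in zip(a, b))
    d2 = math.hypot(dx, dy)                    # 2D (X, Y) difference
    d3 = math.sqrt(dx**2 + dy**2 + dz**2)      # 3D (X, Y, Z) difference
    return dx, dy, dz, d2, d3
```

Averaging `d2` and `d3` over the six GCPs at a site yields the per-site figures reported in this section.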

Site 2: GCP Comparison

For site 2, the average 2D difference (X, Y) was 0.8 inches, and the average 3D difference (X, Y, Z) was 4.8 inches. Figure 16 shows the points selected from each data set with the total station based GCPs on the left and the USGS based GCPs on the right. Table 5 shows the distances calculated for each of the GCPs on each axis. Figure 17 is a graph showing these differences.

Figure 16 - Site 2 comparison points chosen on fiducials for total station based point cloud on left, and USGS based point cloud on the right.
Table 5 - Site 2 difference in inches between total station based GCPs and USGS LiDAR based GCPs.
Figure 17 - Site 2 difference in inches between total station based GCPs and USGS LiDAR based GCPs per axis.

Site 3: GCP Comparison

For site 3, the average 2D difference (X, Y) was 3.2 inches, and the average 3D difference (X, Y, Z) was 6.5 inches. Figure 18 shows the points selected from each data set with the total station based GCPs on the left and the USGS based GCPs on the right. Table 6 shows the distances calculated for each of the GCPs on each axis. Figure 19 is a graph showing these differences.
For all three sites combined, the average 2D difference (X, Y) was 2.2 inches, and the average 3D difference (X, Y, Z) was 5.9 inches (Figure 20).

Figure 18 - Site 3 comparison points chosen on fiducials for total station based point cloud on left, and USGS based point cloud on the right.
Table 6 - Site 3 difference in inches between total station based GCPs and USGS LiDAR based GCPs.
Figure 19 - Site 3 difference in inches between total station based GCPs and USGS LiDAR based GCPs per axis.
Figure 20 - Average difference in inches between total station based GCPs and USGS LiDAR based GCPs per axis across all three sites.

Summary/ Conclusions

Based on the results achieved through this study, the authors believe that the methodology presented for obtaining GCPs through USGS LiDAR and aerial imagery will prove useful to the accident reconstruction community. Using USGS LiDAR in combination with aerial imagery is a good method for obtaining GCPs for photoscanning photogrammetry projects. This method is an alternative to purchasing, placing, documenting, shipping, and/or traveling with traditional onsite GCPs. Additionally, this method allows for the incorporation of more than the traditional number or spacing of GCPs as fiducials at a site. Without the need for traditional field-placed hardware, any number of GCPs can be placed using landmarks visible within aerial imagery.

Limitations

As noted in previous research related to USGS LiDAR and aerial imagery [10], there are potential limitations that also apply to this methodology. Terpstra (2019) noted:

“Aerial imagery must be available with a resolution high enough to uniquely distinguish features to be used in camera matching. Aerial images can contain perspective distortion based on the incidence angle of the camera when the photograph was taken. This distortion is prevalent in scenes with significant elevation differences and particularly over larger distances. The USGS LiDAR data sets are not imagery based and are therefore not subject to perspective distortion. Inability to align an aerial with USGS LiDAR can be an indicator of perspective distortion. The alignment can be used as a method for evaluating if perspective distortion is present within an aerial image.

Similarly, USGS LiDAR must be available. While untested, it may be possible to achieve similar results with lower resolution USGS LiDAR point clouds. However, as the 3DEP program progresses, higher resolution will become available throughout the United States.”

Acknowledgments

We would like to thank Toby Terpstra, Nathan McKelvey, Eric King, Charles King, and Alireza Hashemian for providing insight and expertise that greatly assisted in this research.

More About J.S. Held's Contributors

Toby Terpstra is a Senior Visualization Analyst in J.S. Held’s Accident Reconstruction Practice. He specializes in 3D analysis, site reconstruction, photogrammetry, video analysis, visibility, interactive media, and 3D visualization. He is currently the lead instructor of an ACTAR accredited course offered by the Society of Automotive Engineers titled “Photogrammetry and Analysis of Digital Media”. Mr. Terpstra has also taught and conducted research on topics such as onsite photogrammetry, photoscanning, video analysis, videogrammetry, lens distortion, LiDAR, body-worn cameras, trajectory rod documentation, acoustics, and 3D visualization.

Toby can be reached at [email protected] or +1 303 733 1888.

Nathan McKelvey is a Visualization Analyst in J.S. Held’s Accident Reconstruction Practice.

Nathan can be reached at [email protected] or +1 303 733 1888.

Eric King is a Visualization Analyst in J.S. Held’s Accident Reconstruction Practice.

Eric can be reached at [email protected] or +1 303 733 1888.

Charles King is a Senior Technician in J.S. Held’s Accident Reconstruction Practice. Mr. King applies his educational experience in mechanical engineering and more than 10 years of experience within the automotive field to the reconstruction of traffic accidents. Mr. King’s areas of expertise include vehicle dynamics, failure analysis, automotive diagnostics and repair, shop management, and 12V systems. Mr. King is skilled in scene investigation, evidence collection and analysis from vehicles and sites, laser metrology, video analysis, photogrammetry, and engineering dynamics analysis. He has investigated and assisted in the reconstruction of complex crashes involving multiple vehicles, heavy trucks, pedestrians, bicycles, and motorcycles.

Charles can be reached at [email protected] or +1 407 707 4986.

Definitions/Abbreviations

3DEP: Three-Dimensional Elevation Program

AGL: Above Ground Level (Elevation)

ASPRS: American Society of Photogrammetry and Remote Sensing

GCPs: Ground Control Points

GPS: Global Positioning System

LiDAR: Portmanteau for light and radar, or an acronym for Light Detection and Ranging

Photoscanning: A photogrammetric application where multiple (typically many) photographs with significant overlap in subject matter are imported into software that solves for each camera location and creates a 3D point cloud (also referred to as Multiview photogrammetry)

Photogrammetry: Defined by ASPRS as: The art, science, and technology of obtaining reliable information about physical objects and the environment through processes of recording, measuring, and interpreting photographic images and patterns of recorded radiant electromagnetic energy and other phenomena.

Point cloud: Large numbers (typically millions) of 3D data points commonly obtained through 3D scanning or photoscanning

USGS: United States Geological Survey

References

[1] Luhmann, Thomas. Close-Range Photogrammetry and 3D Imaging. 2nd ed. De Gruyter, 2014.

[2] Terpstra, T., Voitel, T., and Hashemian, A., "A Survey of Multi-View Photogrammetry Software for Documenting Vehicle Crush," SAE Technical Paper 2016-01-1475, 2016, https://doi.org/10.4271/2016-01-1475.

[3] “Metashape.” Wikipedia. Wikimedia Foundation, April 29, 2021. https://en.wikipedia.org/wiki/Metashape.

[4] Mitchell, Michael. “EPFL Spinoff Turns Thousands of 2D Photos into 3D Images,” May 9, 2011. https://actu.epfl.ch/news/epfl-spinoff-turns-thousands-of-2d-photos-into-3d-/.

[5] Changchang Wu, "VisualSFM: A Visual Structure from Motion System" 2011, http://ccwu.me/vsfm/

[6] Jensen, John R. Remote Sensing of the Environment: An Earth Resource Perspective. Harlow: Pearson, 2013.

[7] Lillesand, Tom M., Ralph W. Kiefer, and Jonathan W. Chipman. Remote Sensing and Image Interpretation. New York: Wiley, 1999.

[8] “Data Access Viewer.” NOAA. Accessed October 17, 2021. https://coast.noaa.gov/dataviewer/#/lidar/search/.

[9] “Data Catalog.” OpenTopography. Accessed October 17, 2021. https://portal.opentopography.org/dataCatalog.

[10] Terpstra, T., Dickinson, J., Hashemian, A., and Fenton, S., "Reconstruction of 3D Accident Sites Using USGS LiDAR, Aerial Images, and Photogrammetry," SAE Technical Paper 2019-01-0423, 2019, https://doi.org/10.4271/2019-01-0423.

[11] Terpstra, T., Dickinson, J., and Hashemian, A., "Using Multiple Photographs and USGS LiDAR to Improve Photogrammetric Accuracy," SAE Int. J. Trans. Safety 6(3):193-216, 2018, https://doi.org/10.4271/2018-01-0516.


This publication is for educational and general information purposes only. It may contain errors and is provided as is. It is not intended as specific advice, legal, or otherwise. Opinions and views are not necessarily those of J.S. Held or its affiliates and it should not be presumed that J.S. Held subscribes to any particular method, interpretation, or analysis merely because it appears in this publication. We disclaim any representation and/or warranty regarding the accuracy, timeliness, quality, or applicability of any of the contents. You should not act, or fail to act, in reliance on this publication and we disclaim all liability in respect to such actions or failure to act. We assume no responsibility for information contained in this publication and disclaim all liability and damages in respect to such information. This publication is not a substitute for competent legal advice. The content herein may be updated or otherwise modified without notice.
