Video and Object Tracking for Speed Determination Using Aerial LiDAR

The material in this paper was researched, compiled, and written by J.S. Held. It was originally published by SAE International.


Abstract

Video of an event recorded from a moving camera contains information useful not only for reconstructing the locations and timing of the event, but also for determining the velocity of the camera attached to the moving object or vehicle. Determining the velocity of a video camera recording from a moving vehicle is useful for determining the vehicle’s velocity, which can be compared with speeds calculated through other reconstruction methods or with data from vehicle speed monitoring devices. After tracking the video, the positions and speeds of other objects within the video can also be determined. Video tracking analysis has traditionally required a site inspection to map the three-dimensional environment. In instances where there have been significant site changes, where there is limited or no site access, and where budgeting and timing constraints exist, a three-dimensional environment can instead be created using publicly available aerial imagery and aerial LiDAR. This paper presents a methodology for creating a three-dimensional environment and performing video tracking analysis without a site visit. To validate the methodology, a blind study was conducted in which three different videos were tracked. Each video presented a different traffic scenario: oncoming traffic, cross traffic, and passing traffic. The speed of the vehicle from which the video was recorded was determined through the video tracking process, and the speed of a second vehicle visible within the videos was determined through object tracking. The speed of both vehicles in each video was then compared to vehicle speeds measured with vehicle instrumentation using Harry’s LapTimer to evaluate the accuracy of vehicle speeds determined through video and object tracking.

Introduction/Background

Previous research has shown that aerial imagery can be used in combination with aerial LiDAR to create a three-dimensional environment model that is representative of the time of an incident without a physical site inspection [1, 2]. It is worth noting that there are many potential benefits to a physical site inspection, but there are times when a physical site inspection is impractical or unsafe. This method of creating a three-dimensional environment model is particularly beneficial in instances where there is limited or no site access, where there have been significant site changes or updates, and where budget or timing does not permit traditional site mapping.

Aerial Imagery

Historical aerial photographs have been and continue to be invaluable to the accident reconstruction community. There are many online aerial resources available, and often there are multiple historical dates available when high resolution imagery was captured. This imagery can be used for determining changes to the incident site, incorporating historical site features such as roadway striping from the time of incident, and locating evidence such as furrows, tire marks, gouge marks, fluid spill areas, and burn areas visible within the imagery. Aerial imagery can also be used as a background image or projected as a texture map to create a photorealistic representation of the incident site on which evidence can be placed to create a forensic scene recreation easily understood by any audience [3].

Aerial LiDAR

In 2015, the United States Geological Survey (USGS) began publicly providing aerial LiDAR data in the form of 3D point clouds [4]. The USGS began the 3D Elevation Program (3DEP) in 2012 as an eight-year program for mapping the United States and the U.S. territories. By the end of the 2022 fiscal year, there was approximately 90 percent aerial LiDAR coverage of the United States. The USGS data collection process is ongoing and, as with aerial imagery availability, there are regions where LiDAR data has been collected on multiple dates. This allows for a three-dimensional comparison of changes to an incident site. To assess data availability and resolution, the USGS 3DEP website provides a map that includes a legend for lidar point cloud (LPC) coverage areas, with a unique color for each resolution range in points per meter (Figure 1).

In addition to USGS, there are other publicly available LiDAR data sources in the United States and in other countries throughout the world. Additional resources in the United States include the National Oceanic and Atmospheric Administration (NOAA), the National Center for Airborne Laser Mapping (NCALM), OpenTopography, the Interagency Elevation Inventory (IEI), the Puget Sound LiDAR Consortium (PSLC), and the National Ecological Observatory Network (NEON). Agencies in other countries include the Environment Agency in the United Kingdom, the National Land Survey of Finland, the Current Elevation File of the Netherlands, and the National Geographic Institute of Spain. Many additional European Union countries are making LiDAR data available online, and OpenTopography includes other countries throughout the world as well [5,6,7].

Video Tracking and Object Tracking

Video tracking is a photogrammetry-based method for obtaining three-dimensional information about an environment and has proven useful in forensics for mapping roadways [8], creating realistic three-dimensional environments through image projection [9,10], determining vehicle speeds [11,12], measuring dynamic roof crush [13,14], and determining the speeds of other objects visible within the video [15]. The term object tracking is used to differentiate between solving for the camera’s location over time and solving for the location of an object visible within the video over time. A good example of object tracking is when video is recorded from a static camera whose position is known. In this scenario, object tracking can be used to determine the positions and speeds of objects within the video. The methodology presented in this paper uses both video tracking and object tracking. Starting with video tracking, the camera’s position was determined over time and used to determine the speed at which the camera (and the vehicle it was mounted to) moved through the environment. Then, after the video tracking was complete, the positions of a secondary vehicle were determined through object tracking.

Video tracking is a close-range photogrammetry method, similar to camera matching photogrammetry. It utilizes the principles of reverse camera projection within 3D software to solve for a camera’s position, orientation, and field of view. The video tracking software is used to solve for these parameters for each frame of video to be tracked. The resulting video tracking solution achieves a relationship between the video and the three-dimensional environment such that an overlay will show an alignment between features visible within the video and the corresponding features within the three-dimensional environment.
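
To make the reverse-projection principle concrete, the following minimal sketch (not the solver used by PFTrack or SynthEyes) uses OpenCV to project three-dimensional scene points through an assumed camera pose and measure how far they land from the corresponding features observed in a video frame; all coordinates, intrinsics, and pose values shown are hypothetical.

```python
# Illustrative sketch only: measures how well an assumed camera pose re-projects
# known 3D scene points onto the 2D features observed in one video frame.
# All coordinates, intrinsics, and pose values below are hypothetical.
import numpy as np
import cv2

# 3D points taken from the aerial LiDAR environment (local site meters)
object_points = np.array([[10.2, 4.1, 0.0],
                          [15.7, 4.2, 0.1],
                          [22.3, 8.9, 2.4],
                          [30.1, 1.5, 0.2]], dtype=np.float64)

# Matching feature locations observed in a 1280x720 video frame (pixels)
image_points = np.array([[412.0, 388.0],
                         [533.0, 371.0],
                         [690.0, 302.0],
                         [902.0, 344.0]], dtype=np.float64)

# Simple pinhole intrinsics; focal length and distortion are assumptions
camera_matrix = np.array([[1100.0, 0.0, 640.0],
                          [0.0, 1100.0, 360.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# Candidate camera pose (rotation vector and translation) to be evaluated
rvec = np.array([0.05, -0.10, 0.02])
tvec = np.array([-12.0, 1.5, 25.0])

projected, _ = cv2.projectPoints(object_points, rvec, tvec,
                                 camera_matrix, dist_coeffs)
errors = np.linalg.norm(projected.reshape(-1, 2) - image_points, axis=1)
print("Per-point reprojection error (px):", np.round(errors, 1))
```

A tracking solution is considered acceptable when this kind of reprojection error remains small across every frame, which is what the overlay review described above checks visually.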

Figure 1 - USGS LiDAR coverage through 3DEP as of October 2023.

Methodology

Testing Sites

For the purposes of this study, three different sites were selected for analysis. These sites were chosen for their proximity, their accessibility, and because aerial LiDAR and high-resolution aerial imagery were available. Each site also offered a different opportunity for object tracking. The first site contained an intersection where object tracking was used to determine the speed of a cross-traffic vehicle. The second site was a roadway where object tracking was used to determine the speed of an oncoming vehicle. The third site was a highway environment where object tracking was used to determine the speed of a passing vehicle (Figure 2), (Table 1).

Baseline Data Collection – Vehicle Instrumentation

Previous research has shown the efficiency and reliability of the Harry's Lap Timer application for Apple and Android cell phones for tracking vehicle positions and speeds [16,17]. A 2018 Nissan Leaf and a 2015 GMC Canyon were used in this testing (Figure 3).

The GMC Canyon was instrumented with a separate GPS receiver, the Dual Electronics SkyPro. This GPS sensor works with Harry’s Lap Timer and is capable of a 20Hz recording rate, or 20 samples per second (Figure 4).

The Nissan Leaf did not have this additional receiver and had a 1Hz recording rate. For this reason, the GMC was chosen as the vehicle to record video for tracking purposes, and the Nissan was chosen as the secondary vehicle for object tracking. The video was recorded with an iPhone 13 Pro Max cell phone rigidly mounted to the vehicle’s rearview mirror. After recording video and vehicle instrumentation data, the video was exported from Harry's Lap Timer as a .MOV video file in “raw” format, without instrumentation data overlaid.

Figure 2 - Site locations 1, 2, and 3 in order from top to bottom with vehicle for object tracking circled in yellow.


Table 1 - Site locations and traffic types for object tracking.


The resulting videos had a resolution of 1280 x 720 and a frame rate of 30 frames per second.


Creating 3D Environments and Objects

Aerial LiDAR data recorded in 2020 and available through USGS was selected and downloaded for each of the three site locations. Multiple LiDAR data sets were available in this area, and the most recent date was used to minimize differences in the scenes between the time of aerial LiDAR acquisition and the video recordings. After downloading the aerial LiDAR data in “.LAS” file format, the point clouds were imported into CloudCompare v. 2.12.4 [18], where they were colorized based on the intensity values stored within the scalar properties. The intensity values are stored as a grayscale value from white to black depending on the amount of return energy measured during capture and can be colorized based on any gradient chosen within the software. Viewing the point cloud with intensity values makes it easier to distinguish lane lines, owing to their retro-reflective material and higher energy return relative to surrounding surfaces [19]. The intensity-colorized aerial LiDAR point cloud data was then converted into “.rcs” format using Autodesk ReCap for use in Autodesk 3ds Max 2023 and Autodesk AutoCAD 2023. It is worth noting that other file formats may be required if working with alternate 3D modeling software.
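
The intensity colorization step can also be illustrated outside of CloudCompare. The following sketch assumes the laspy Python library and a placeholder .LAS file name; it reads a USGS tile and maps the stored intensity values to a grayscale color per point. The percentile clipping is an assumption used here to keep bright retro-reflective returns from compressing the rest of the gradient.

```python
# Minimal sketch (not the authors' CloudCompare workflow): read a USGS .LAS tile
# with laspy and map its stored intensity values to a grayscale color per point.
# The file name is a placeholder.
import laspy
import numpy as np

las = laspy.read("site1_usgs_tile.las")

xyz = np.vstack((las.x, las.y, las.z)).T          # point coordinates
intensity = np.asarray(las.intensity, dtype=np.float64)

# Normalize intensity to 0-1; percentile clipping (an assumption here) keeps a
# few very bright retro-reflective returns from compressing the rest of the range.
lo, hi = np.percentile(intensity, [2, 98])
normalized = np.clip((intensity - lo) / (hi - lo), 0.0, 1.0)

# Grayscale value per point (any other gradient could be substituted)
gray = (normalized * 255).astype(np.uint8)
colors = np.column_stack((gray, gray, gray))

print(f"{len(xyz)} points, intensity range {intensity.min():.0f}-{intensity.max():.0f}")
```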

Figure 3 - Vehicles used for video recording and instrumentation.


Figure 4 - SkyPro GPS sensor from Dual Electronics.


A terrain mesh was then created using an isolated portion of the same aerial LiDAR data. Using CloudCompare v.2.12.4, areas farther from the center of the site were cropped out or removed, as were points that did not define the roadway area. Outlier points that can be described as individual points or “islands” were visually detected and removed from the point cloud. The point cloud was then subsampled to create a less dense point cloud. The resulting point cloud was then surfaced in CloudCompare, creating a 3D mesh (Figure 5).
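
For readers working outside CloudCompare, the same cleanup and surfacing steps can be sketched with the open-source Open3D library. The example below is an assumed, illustrative equivalent only; the file name, crop bounds, outlier thresholds, subsample size, and Poisson reconstruction settings are placeholders rather than the parameters used in this study.

```python
# Analogous open-source sketch (the authors used CloudCompare): crop a terrain
# region, drop outlier "island" points, subsample, and surface the result.
import open3d as o3d

pcd = o3d.io.read_point_cloud("site1_terrain_points.ply")  # placeholder file

# Keep only points near the roadway area (axis-aligned crop, bounds hypothetical)
bounds = o3d.geometry.AxisAlignedBoundingBox(min_bound=(-150, -150, -10),
                                             max_bound=(150, 150, 30))
pcd = pcd.crop(bounds)

# Remove sparse outlier points ("islands") far from their neighbors
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Subsample to a more manageable density before surfacing
pcd = pcd.voxel_down_sample(voxel_size=0.25)

# Estimate normals and build a triangle mesh of the terrain
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=1.0, max_nn=30))
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("site1_terrain_mesh.ply", mesh)
```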

Aerial imagery from April 3, 2023, was downloaded through NearMap, a browser-based software. The NearMap aerial image resolution for all three sites was 0.037 meters per pixel, or 1.457 inches per pixel. This high-resolution aerial image was scaled and aligned in AutoCAD 2023 to the aerial LiDAR data, which has a known scale. Once aligned, roadway markings were traced on top of the aerial imagery, creating 2D vector-based lines. To complete the 3D environment, the 2D aerial-traced lines were projected down to the 3D mesh created from aerial LiDAR within Autodesk 3ds Max. This projection can be accomplished using various tools, including shape merging and the free Glue utility from iToo Software [20] (Figure 6).
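
The projection of traced 2D lines onto the terrain was performed in 3ds Max with shape merging or the Glue utility; the sketch below shows the same "draping" concept numerically by interpolating an elevation for each traced vertex from surrounding LiDAR ground points using SciPy. The terrain and lane-line data are synthesized and hypothetical so the example is self-contained.

```python
# Conceptual sketch of draping traced 2D lane lines onto the LiDAR terrain
# (the authors used 3ds Max shape merge / the iToo Glue utility for this step).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)

# Hypothetical terrain ground points isolated from the aerial LiDAR (x, y, z in m):
# a gently sloped, crowned road surface sampled at random locations.
xy = rng.uniform(-50.0, 50.0, size=(5000, 2))
z = 0.02 * xy[:, 0] - 0.001 * xy[:, 1] ** 2
ground = np.column_stack((xy, z))

# Vertices of one 2D lane line traced over the aligned aerial image (x, y)
lane_line_2d = np.column_stack((np.linspace(-40.0, 40.0, 25),
                                np.full(25, 1.8)))

# Interpolate an elevation for each traced vertex from surrounding LiDAR points
lane_z = griddata(points=ground[:, :2], values=ground[:, 2],
                  xi=lane_line_2d, method="linear")
lane_line_3d = np.column_stack((lane_line_2d, lane_z))
print(lane_line_3d[:3])   # first few vertices of the projected 3D polyline
```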

As an alternative to tracing aerial features and projecting them down to the three-dimensional terrain, the aerial imagery can be applied as a texture on the USGS-based roadway. As a matter of preference, this method was used for site 2.

The 2018 Nissan Leaf used in this research was scanned using a Faro Focus S350. The resulting LiDAR point cloud was used as a reference for creating an accurate, scaled, three-dimensional computer model. This model was then used in object tracking to solve for the position of the Nissan Leaf as visible within the video recorded from the GMC Canyon (Figure 7).

Figure 5 - Top) Aerial LiDAR point cloud with red to yellow gradient for intensity values, Middle) Terrain points separated from LiDAR point cloud, Bottom) Resulting mesh terrain built from aerial LiDAR data [1].


Figure 6 - From top to bottom, 1) NearMap aerial image, 2) 2D vector lines traced on aerial image, 3) Aerial traced 2D vector lines and aerial LiDAR, 4) Resulting 3D environment with vector lines projected onto surfaced ground mesh.


Figure 7 - LiDAR point cloud of the 2018 Nissan Leaf overlaid with the resulting three-dimensional computer model of the Nissan Leaf used in object tracking.


Video and Object Tracking

Each of the three videos was evaluated, and individual video frame images were exported as a frame sequence for the portion of each video to be tracked. The tracked length was chosen to cover the portions of video where the Nissan Leaf was visible and could be placed through object tracking. The frame sequence was then imported into PFTrack 23 for two-dimensional video tracking (Figure 8).
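
Exporting a frame sequence for the portion of video to be tracked can be done in most editing or tracking packages; a minimal scripted alternative using OpenCV is sketched below, with the video file name and frame range as placeholders.

```python
# Simple sketch for exporting the tracked portion of a video as an image
# sequence with OpenCV; the file name and frame range are placeholders.
import os
import cv2

cap = cv2.VideoCapture("site1_drive.mov")
fps = cap.get(cv2.CAP_PROP_FPS)          # expected ~30 fps for these videos
start_frame, end_frame = 300, 596        # portion where the Leaf is visible

os.makedirs("frames", exist_ok=True)
cap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)
for i in range(start_frame, end_frame + 1):
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(f"frames/site1_{i:05d}.png", frame)
cap.release()
print(f"Exported frames {start_frame}-{end_frame} at {fps:.2f} fps")
```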

The two-dimensional tracks were exported, and both the image sequence and the two-dimensional tracks were imported into SynthEyes version 22.6.1054. Two-dimensional tracking can also be accomplished in SynthEyes, but the authors have experienced greater two-dimensional tracking success using PFTrack. The two-dimensional tracks were then associated with corresponding three-dimensional (aerial LiDAR) data points. As a matter of personal preference, the participant for site 2 did not use SynthEyes, but rather performed this same process in PFTrack, which is also capable of assigning tracking points to corresponding three-dimensional point cloud data. The software was then used to create a video tracking solution, which was reviewed to ensure that the three-dimensional computer environment was aligned to the video at every frame. If the video track alignment was unsuccessful, such that the software could not achieve a video tracking solution, additional tracking points were added to help inform the software. After achieving a solution, filtering of various tracking parameters was used to achieve a more consistent video tracking solution (Figure 9).
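
Conceptually, once two-dimensional tracks are associated with three-dimensional LiDAR points, the tracker solves a camera pose for each frame that best re-projects those points onto their tracked image locations. The sketch below shows a single-frame analogue of that solve using OpenCV's solvePnP; it is not the SynthEyes or PFTrack implementation, the coordinates and intrinsics are hypothetical, and the 2D locations are synthesized so the example is self-contained (real solvers also refine focal length, distortion, and consistency across the whole sequence).

```python
# Single-frame analogue of the camera solve: recover the camera pose from
# 2D track locations associated with 3D LiDAR points. This is not the
# SynthEyes or PFTrack implementation; all values are hypothetical.
import numpy as np
import cv2

# 3D points from the aerial LiDAR environment, local site coordinates (m)
lidar_points = np.array([[12.3, 4.5, 0.2], [25.7, 8.1, 3.4], [31.2, -2.6, 0.1],
                         [40.8, 5.9, 4.7], [47.5, -4.3, 0.3], [55.1, 1.2, 2.8]])

camera_matrix = np.array([[1100.0, 0.0, 640.0],   # assumed pinhole intrinsics
                          [0.0, 1100.0, 360.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# Synthesize 2D track locations from a known "true" pose so the sketch runs
# on its own; in practice these come from the two-dimensional tracking step.
true_rvec = np.array([-0.05, 0.10, 0.02])
true_tvec = np.array([-20.0, 2.0, 15.0])
tracks_2d, _ = cv2.projectPoints(lidar_points, true_rvec, true_tvec,
                                 camera_matrix, dist_coeffs)
tracks_2d = tracks_2d.reshape(-1, 2)

# Recover the per-frame camera pose from the 2D-3D correspondences
ok, rvec, tvec = cv2.solvePnP(lidar_points, tracks_2d, camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)
camera_position = (-R.T @ tvec).ravel()  # camera location in site coordinates
print("Recovered camera position (m):", np.round(camera_position, 2))
```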

The video tracking solution was then imported as camera motion into Autodesk 3DS Max, and the same video frame sequence was designated as a viewport background (Figure 10).

The scaled computer model of the 2018 Nissan Leaf was then positioned within Autodesk 3ds Max on specific frames of the video, such that it was aligned to the video and to the ground surface within the three-dimensional environment. The position of the Nissan Leaf was then interpolated between these frames. Where necessary, more positions were added until there was agreement with the video at every frame, thereby defining the overall motion of the vehicle (Figure 11).
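
A simple numeric analogue of this keyframe-and-interpolate workflow is sketched below, assuming hypothetical keyed frames and positions; 3ds Max applies its own animation controllers, so this only illustrates the interpolation concept.

```python
# Simple numeric analogue of keyframing the Leaf at selected frames and
# interpolating between them (keyframe values are hypothetical; 3ds Max
# applies its own interpolation controllers).
import numpy as np

key_frames = np.array([0, 45, 90, 135, 180])   # frames where the model was placed
key_xy = np.array([[0.0, 0.0],                 # matched positions (meters)
                   [10.9, 0.2],
                   [21.7, 0.6],
                   [32.4, 1.1],
                   [43.0, 1.8]])

all_frames = np.arange(0, 181)
x = np.interp(all_frames, key_frames, key_xy[:, 0])
y = np.interp(all_frames, key_frames, key_xy[:, 1])
positions = np.column_stack((x, y))
print(positions[43:48])   # interpolated positions around the frame-45 key
```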

Figure 8 - Two-dimensional trackers shown in the PFTrack software.


Figure 9 - Video tracking solution shown within the SynthEyes software with three-dimensional track points and LiDAR point cloud aligned to the video.


Figure 10 - Video tracking solution shown within 3DS Max with LiDAR point cloud environment overlaid on the video.


Vehicle speeds determined through video and object tracking were then exported for comparison to the vehicle speeds recorded through vehicle instrumentation at the time the videos were recorded.
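
The speed calculation itself is straightforward once per-frame positions are known: the distance traveled between consecutive frames multiplied by the frame rate gives speed, which can then be converted to miles per hour and compared against the instrumented data. A sketch of this step is shown below; the per-frame positions are synthesized and hypothetical, whereas in practice they are exported from the tracked 3D scene.

```python
# Sketch of the speed calculation: distance traveled between consecutive frames
# multiplied by the frame rate gives speed, converted here to miles per hour.
# The positions below are synthesized placeholders; in practice they come from
# the tracked 3D scene and are compared to the Harry's Lap Timer export.
import numpy as np

FPS = 30.0                     # video frame rate
MPS_TO_MPH = 2.2369362920544   # meters per second to miles per hour

t = np.arange(0, 181) / FPS
positions = np.column_stack((7.4 * t, 0.02 * t, np.zeros_like(t)))

step_dist = np.linalg.norm(np.diff(positions, axis=0), axis=1)
speed_mph = step_dist * FPS * MPS_TO_MPH    # one value per frame interval

print(f"Average tracked speed: {speed_mph.mean():.1f} mph")
```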

Figure 11 - Object tracking solution shown within 3DS Max with the Nissan Leaf computer model overlaid on the video.


The three-dimensional environment model creation, video tracking, and object tracking for each of the three scenarios was accomplished as a blind study by three different participants. Each participant was assigned a separate scenario and was neither present at the time of the vehicle instrumentation and video collection, nor provided with any information related to the collected vehicle speeds through vehicle instrumentation.

Overview of Video Tracking and Object Tracking Methodology

The processes described in this methodology can be summarized in the following steps:

  1. Download and convert aerial LiDAR point cloud.
  2. Create surfaced mesh of the terrain using a subsampled portion of aerial LiDAR data.
  3. Download high-resolution aerial imagery and align to LiDAR data.
  4. Trace roadway markings and features from aerial, creating 2D vector-based lines.
  5. Project the 2D vector-based lines, or the aerial image as a texture onto the 3D terrain.
  6. Create a 3D environment to include aerial LiDAR point cloud, surfaced mesh of roadway, and 3D lines of roadway markings, or textured 3D terrain.
  7. Analyze video to be used and save out an image frame sequence.
  8. Use video tracking software to create 2D tracks on video.
  9. Associate 2D tracks with corresponding 3D aerial LiDAR points.
  10. Review video tracking solution for consistency with video and add additional tracking points as needed.
  11. Import video tracking solution into 3D modeling software for object tracking.
  12. Based on the video tracking solution, place a 3D model of the vehicle or other object to match locations as seen in the video footage.
  13. Calculate object speed based on video timing and object locations.
  14. Peer review of results.

Results

Site-01

The first vehicle and object tracking scenario involved an intersection and a cross-traffic vehicle. In this scenario the Nissan Leaf was traveling perpendicular to the GMC Canyon and crossed the intersection as the GMC Canyon was approaching it. In this scenario a 9.85s section of video was tracked and compared to the instrumented vehicle data exported from Harry’s Lap Timer. The instrumented vehicle speed for this timeframe was an average of 27.7 mph. The vehicle speed determined through video tracking was an average of 27.7 mph. The average difference between the instrumented vehicle speed and the vehicle speed determined through video tracking was 0.3 mph (Figure 12).

A 6.0s section of the tracked video was used for determining the position of the Nissan Leaf and was compared to the instrumented vehicle data exported from Harry’s Lap Timer. It is worth noting that tire contact with the pavement for the Nissan Leaf could not be clearly seen in the video; when solving for vehicle location, the Nissan Leaf was placed in the center of the travel lane. The instrumented vehicle speed for this timeframe was an average of 16.6 mph. The vehicle speed determined through object tracking was an average of 16.1 mph. The average difference between the instrumented vehicle speed and the vehicle speed determined through object tracking was 0.7 mph (Figure 13, Table 2).

Site-02

The second vehicle and object tracking scenario involved a roadway with an oncoming vehicle. In this scenario the Nissan Leaf was traveling in the opposite direction of the GMC Canyon and the vehicles passed by each other. In this scenario a 4.9s section of video was tracked and compared to the instrumented vehicle data exported from Harry’s Lap Timer. The instrumented vehicle speed for this timeframe was an average of 41.0 mph. The vehicle speed determined through video tracking was an average of 40.7 mph. The average difference between the instrumented vehicle speed and the vehicle speed determined through video tracking was 0.6 mph (Figure 14).

Figure 12 - Site 1 comparison: Instrumented vehicle speed and vehicle speed determined through video tracking.

Figure 13 - Site 1 comparison: Instrumented vehicle speed and vehicle speed determined through object tracking.

Table 2 - Site 1 summary: Instrumented vehicle speed and vehicle speeds determined through video and object tracking.


A 4.0s section of the tracked video was used for determining the position of the Nissan Leaf and was compared to the instrumented vehicle data exported from Harry’s Lap Timer. The instrumented vehicle speed for this timeframe was an average of 36.2 mph. The vehicle speed determined through object tracking was an average of 36.5 mph. The average difference between the instrumented vehicle speed and the vehicle speed determined through object tracking was 0.4 mph (Figure 15, Table 3).

Site-03

The third vehicle and object tracking scenario involved a highway and a passing vehicle. In this scenario the Nissan Leaf was traveling in the same direction as the GMC Canyon and passed the GMC Canyon. In this scenario a 4.95s section of video was tracked and compared to the instrumented vehicle data exported from Harry’s Lap Timer. The instrumented vehicle speed for this timeframe was an average of 55.3 mph. The vehicle speed determined through video tracking was an average of 55.5 mph. The average difference between the instrumented vehicle speed and the vehicle speed determined through video tracking was 0.4 mph (Figure 16).

Figure 14 - Site 2 comparison: Instrumented vehicle speed and vehicle speed determined through video tracking.


Figure 15 - Site 2 comparison: Instrumented vehicle speed and vehicle speed determined through object tracking.


Table 3 - Site 2 summary: Instrumented vehicle speed and vehicle speeds determined through video and object tracking.

A 4.0s section of the tracked video was used for determining the position of the Nissan Leaf and was compared to the instrumented vehicle data exported from Harry’s Lap Timer. The instrumented vehicle speed for this timeframe was an average of 73.9 mph. The vehicle speed determined through object tracking was an average of 75.3 mph. The average difference between the instrumented vehicle speed and the vehicle speed determined through object tracking was 1.4 mph (Figure 17, Table 4).

Figure 16 - Site 3 comparison: Instrumented vehicle speed and vehicle speed determined through video tracking.

Summary/Conclusions

On average, the difference between instrumented vehicle speeds and vehicle speeds determined through video tracking for all three scenarios was 0.4 mph. On average, the difference between instrumented vehicle speeds and vehicle speeds determined through object tracking for all three scenarios was 0.8 mph (Table 5).

Based on the results achieved through this study, the authors believe that this methodology will prove useful to the accident reconstruction community and that video tracking is an invaluable tool for determining vehicle speeds, object speeds, and their motion based on the principles of physics. While more accurate results may be achieved with a site visit, there are instances where a site visit is impractical or of little value due to significant site changes. In these instances, using aerial LiDAR and aerial photographs in combination with video and object tracking may be the best, if not the only, solution available to determine vehicle and object speeds. This study represents a less-than-ideal situation for obtaining evidence from video through video and object tracking, without a site visit. With a site visit and the opportunity to document a site using traditional means, smaller differences in position and speed data can be achieved.

Figure 17 - Site 3 comparison: Instrumented vehicle speed and vehicle speed determined through object tracking.


Table 4 - Site 3 summary: Instrumented vehicle speed and vehicle speeds determined through video and object tracking.


Table 5 - Sites 1-3 average: Instrumented vehicle speed and vehicle speeds determined through video and object tracking.

Limitations

There are potential limitations when using this methodology. LiDAR and aerial imagery must be available with a resolution high enough to uniquely distinguish features to be used in video tracking. Aerial images can contain perspective distortion based on the incidence angle of the camera when the photograph was taken. This distortion is prevalent in scenes with significant elevation differences, particularly over larger distances. The aerial LiDAR data sets are not imagery based and are therefore not subject to perspective distortion. An inability to align an aerial image with aerial LiDAR can be an indicator of perspective distortion and can prove useful for determining when it is necessary to find an alternate aerial imagery source.

Discussion

The authors suspect that the greater average speed difference between instrumented vehicle speeds and vehicle speeds determined through video and object tracking for the highway scenario with a passing vehicle may be related to the higher speeds themselves: a similar positional difference produces a larger speed difference at higher speeds than at lower speeds. Another possible factor related to the accuracy of object tracking in these scenarios is the visibility of the longer and shorter vehicle axes. In the first scenario the Nissan Leaf’s longest axis is visible to the GMC Canyon as it passes through the intersection on a path that is generally perpendicular to that of the camera. In the second and third scenarios the shorter vehicle axis (front or back) is generally more visible to the camera and may provide less information for positional determination.

Small variations in video tracking parameters at a high frequency such as 30 fps can create significant noise in the resulting speed data. Filtering of tracking parameters was accomplished in SynthEyes version 22.6.1054. The parameters and filtering amounts varied for each scenario to achieve optimal results without having a visibly negative effect on the video tracking solution.
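
The specific filters available in SynthEyes are not reproduced here, but the general idea of suppressing frame-to-frame noise in a 30 fps speed trace can be illustrated with a Savitzky-Golay filter from SciPy, as sketched below; the synthesized data, window length, and polynomial order are placeholders chosen for illustration only.

```python
# General illustration of smoothing frame-to-frame noise in a tracked speed
# trace (SynthEyes applies its own parameter filtering; the data, window
# length, and polynomial order here are placeholders).
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
frames = np.arange(150)                       # ~5 s of video at 30 fps
true_speed_mph = 55.0 + 0.01 * frames         # slowly increasing speed (hypothetical)
raw_speed_mph = true_speed_mph + rng.normal(0.0, 1.5, frames.size)  # tracking noise

# Roughly half-second window (15 frames at 30 fps), quadratic local fit
smoothed_mph = savgol_filter(raw_speed_mph, window_length=15, polyorder=2)

print(f"Raw std about trend: {np.std(raw_speed_mph - true_speed_mph):.2f} mph")
print(f"Smoothed std about trend: {np.std(smoothed_mph - true_speed_mph):.2f} mph")
```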

When object tracking is performed within a video tracking solution, inaccuracies from the video tracking may be reflected in the object tracking. This may account for the larger (on average) object tracking differences in this research.

Two methods of projecting the roadway lines onto USGS LiDAR based geometry were used in this study. For sites 1 and 3 aerial tracing was used and for site 2 the aerial image itself was projected onto the geometry. While all three sites had similar results in terms of accuracy, site 2 had the lowest overall error when compared to instrumented vehicle data. Additional studies are needed to understand if there is a correlation.

The aerial image resolution for all three sites in this research was 1.457 inches per pixel. With the presented methodology, roadway lines from aerial images are used in video tracking. Using the highest available aerial imagery resolution is recommended to reduce potential for error.

As the 3DEP and other programs progress, it is likely that aerial LiDAR and aerial imagery will continue to become available in higher resolution or point density. With increased resolution and point density, it is reasonable to believe that the resulting accuracy of position and speed analysis determined using this methodology will also increase.

References

[1] Terpstra, T., Dickinson, J., Hashemian, A., and Fenton, S., “Reconstruction of 3D Accident Sites Using USGS LiDAR, Aerial Images, and Photogrammetry,” SAE Technical Paper 2019-01-0423 (2019), https://doi.org/10.4271/2019-01-0423.

[2] Terpstra, T., Dickinson, J., and Hashemian, A., “Using Multiple Photographs and USGS LiDAR to Improve Photogrammetric Accuracy,” SAE Int. J. Trans. Safety 6, no. 3 (2018): 193-216, https://doi.org/10.4271/2018-01-0516.

[3] Dilich, M. and Goebelbecker, J., “Accident Investigation and Reconstruction Mapping with Aerial Photography,” SAE Technical Paper 960894 (1996), https://doi.org/10.4271/960894.

[4] What is 3DEP? Accessed October 23, 2023. https://www.usgs.gov/3d-elevation-program/what-3dep.

[5] Datta, A., “Did You Know Which Are the Sources for Free LiDAR Data?,” Geospatial World, November 15, 2019, accessed January 14, 2024, https://www.geospatialworld.net/blogs/did-you-know-the-sources-for-free-lidar-data/.

[6] Murithi, G., “Free Sources of LiDAR Data,” Open Source GIS Data, August 26, 2023, https://opensourcegisdata.com/free-sources-of-lidar-data.html.

[7] “Data.Europa.Eu.” Accessed January 14, 2024. https://data.europa.eu/data/datasets?query=lidar&locale=en.

[8] Neale, W., Fenton, S., McFadden, S., and Rose, N., “A Video Tracking Photogrammetry Technique to Survey Roadways for Accident Reconstruction,” SAE Technical Paper 2004-01-1221 (2004), https://doi.org/10.4271/2004-01-1221.

[9] Neale, W., Marr, J., and Hessel, D., “Video Projection Mapping Photogrammetry through Video Tracking,” SAE Technical Paper 2013-01-0788 (2013), https://doi.org/10.4271/2013-01-0788.

[10] Neale, W., Marr, J., and Hessel, D., “Nighttime Videographic Projection Mapping to Generate Photo-Realistic Simulation Environments,” SAE Technical Paper 2016-01-1415 (2016), https://doi.org/10.4271/2016-01-1415.

[11] Manuel, E., Mink, R., and Kruger, D., “Videogrammetry in Vehicle Crash Reconstruction with a Moving Video Camera,” SAE Technical Paper 2018-01-0532 (2018), https://doi.org/10.4271/2018-01-0532.

[12] Molnar, B. and Peck, L., “Evaluation of Tesla Dashcam Video System for Speed Determination Via Reverse Projection Photogrammetry,” SAE Technical Paper 2023-01-0629 (2023), https://doi.org/10.4271/2023-01-0629.

[13] Chou, C., McCoy, R., Le, J., Fenton, S. et al., “Image Analysis of Rollover Crash Tests Using Photogrammetry,” SAE Technical Paper 2006-01-0723 (2006), https://doi.org/10.4271/2006-01-0723.

[14] Rose, N., Neale, W., Fenton, S., Hessel, D. et al., “A Method to Quantify Vehicle Dynamics and Deformation for Vehicle Rollover Tests Using Camera-Matching Video Analysis,” SAE Int. J. Passeng. Cars - Mech. Syst. 1, no. 1 (2009): 301-317, https://doi.org/10.4271/2008-01-0350.

[15] Bailey, A., Funk, J., Lessley, D., Sherwood, C. et al., “Validation of a Videogrammetry Technique for Analyzing American Football Helmet Kinematics,” Sports Biomechanics (2018): 1-23, doi:10.1080/14763141.2018.1513059.

[16] McDonough, S., Danaher, D., and Neale, W., “Mid-Range Data Acquisition Units Using GPS and Accelerometers,” SAE Technical Paper 2018-01-0513 (2018), https://doi.org/10.4271/2018-01-0513.

[17] Danaher, D., McDonough, S., Donaldson, D., and Cochran, R., “Validation of MoTeC Data Acquisition System,” SAE Technical Paper 2023-01-0630 (2023), https://doi.org/10.4271/2023-01-0630.

[18] CloudCompare (version 2.12.4). http://www.danielgm.net/cc/

[19] U.S. Department of the Interior, U.S. Geological Survey, Collection and Delineation of Spatial Data, “Chapter 4: Lidar Base Specification, Version 1.2” National Geospatial Program, 2014.

[20] Itoosoft. “Free Plugins; ColorEdge, Glue, Clone.” IToo Software. Accessed October 11, 2018. https://www.itoosoft.com/freeplugins/glue.

Contact Information

Toby Terpstra
J.S. Held LLC
(303) 733-1888
[email protected]

Acknowledgments

The authors would like to thank David Danaher for his initial help with vehicle instrumentation and supporting publications involving Harry’s Lap Timer, and Jordan Dickinson for his assistance with three-dimensional site model development.

Definitions/Abbreviations

3DEP - Three-Dimensional Elevation Program

ASPRS - American Society of Photogrammetry and Remote Sensing

LAS - LASer public file format developed by the ASPRS for 3D point cloud data exchange

LiDAR - Portmanteau for light and radar, or an acronym for Light Detection and Ranging

Point cloud - Large numbers (typically millions) of 3D data points commonly obtained through 3D scanning or photo scanning

USGS - United States Geological Survey
