The material in this paper was researched, compiled, and written by J.S. Held. It was originally published by SAE International.
Ground-based Light Detection and Ranging (LiDAR) using FARO Focus 3D scanners (and other brands of scanners) has repeatedly been shown to accurately capture the geometry of accident scenes, accident vehicles, and exemplar vehicles, as well as corresponding evidence from these sources such as roadway gouge marks, vehicle crush depth, debris fields, and burn areas. However, ground-based scanning requires expensive, bulky equipment on-site, along with additional materials that may be needed depending on the scenario, such as tripods and alignment spheres. Newer technologies, such as the mobile phone LiDAR capture Apple recently introduced on its Pro model phones, offer a way to obtain LiDAR data with less cumbersome and less expensive equipment. This mobile LiDAR can be captured using many different applications from the App Store and then exported as point cloud data. This paper investigates the accuracy of Apple's mobile phone LiDAR when obtaining geometry from multiple exemplar vehicles. The mobile phone LiDAR data is compared against LiDAR captured from the same exemplar vehicles using the current standard technology, the FARO Focus 3D scanner. Multiple applications were chosen for the mobile phone LiDAR export to investigate accuracy differences between the applications, and results are compared using the software CloudCompare.
LiDAR (Light Detection and Ranging) is a method that uses a laser to measure distance [1]. There are two different types of LiDAR, pulse-based and phase-based. For the purposes of this paper, pulse-based LiDAR is used and discussed. Pulse-based LiDAR technology began as a type of traditional survey equipment with which a user would aim a laser at a desired object or location. That laser would then bounce back to the equipment, allowing the equipment to determine the distance and angular properties of the object or location. This early method, which is still used today, only allowed one laser to be aimed at one object at a time, which resulted in an accurate yet time-consuming process.
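The range computation behind this time-of-flight principle can be sketched as follows: the pulse travels to the target and back, so the range is half the round-trip time multiplied by the speed of light. The round-trip time used below is illustrative only, not a value from this paper.

```python
# Pulse-based (time-of-flight) LiDAR range sketch: the scanner emits a laser
# pulse and times its round trip; distance = c * delta_t / 2.
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(delta_t_seconds: float) -> float:
    """Distance to the target in meters for a measured round-trip time."""
    return C * delta_t_seconds / 2.0

# A round trip of roughly 46.7 nanoseconds corresponds to a target about
# 7 meters away -- on the order of the scanner-to-vehicle distances used
# for ground-based scanning.
d = range_from_round_trip(46.7e-9)
```

At these time scales, millimeter-level accuracy requires picosecond-level timing resolution, which is part of why ground-based scanners are costly instruments.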
Current ground-based LiDAR devices use this same technique, but instead of manually aiming and shooting an individual laser, a laser is rapidly fired at a rotating mirror, resulting in millions of laser points collected in all directions. This allows large areas or objects to be completely covered and accurately documented with relative ease and within a shorter timeframe. The FARO Focus 350 is documented to be accurate down to the millimeter level [2], and this scanner, along with other models, has been used in multiple journal publications [3, 5-13]. However, the cost of these ground-based LiDAR devices makes them a nonviable option for many users. For this paper, a FARO Focus 350 3D scanner was used.
Apple’s recent release of the iPhone 12 Pro and the iPhone 12 Pro Max (2020) introduced LiDAR to mobile devices. These devices offer the possibility of capturing LiDAR data at a much lower cost than traditional 3D scanners currently available and, additionally, may capture data in an even shorter time frame. Apple continued the implementation of LiDAR technology in their next generation of iPhones, namely the iPhone 13 Pro and the iPhone 13 Pro Max. An iPhone 13 Pro Max was used for this paper.
This new mobile phone LiDAR technology has been previously researched to assess its capabilities [14, 15, 16]. However, due to the recent release of this technology, previous research is limited and has not yet evaluated its utility in the accident reconstruction field. This paper specifically focuses on LiDAR capture of exemplar vehicles, which is a common task in accident reconstruction. It compares ground-based LiDAR scanning, which is known for reliable accuracy, to Apple's iPhone 13 Pro Max LiDAR capabilities.
Vehicle Selection
When 3D scanning a vehicle, it is the authors' experience that the color of a vehicle may impact the quantity of LiDAR points collected. Lighter-colored vehicles return more 3D data points than darker-colored vehicles. Due to this effect, vehicles of a variety of colors were chosen. The selected vehicles are listed below:
Baseline FARO 3D Scanning Documentation
To establish an accurate baseline for the geometry of the selected vehicles, all five vehicles were scanned using a FARO Focus 350. This 3D scanner has an accuracy of ±1 millimeter [2]. To ensure complete coverage of the vehicles, a total of six scans were collected for each vehicle. The scanner was placed at each corner of each vehicle and once along each side panel. For each scan, the scanner was placed approximately seven feet from the subject vehicle and was positioned approximately six feet off the ground. These approximate locations can be seen in Figure 1.
These scans were full 360° scans with settings of “1/5” for resolution and “level 4” for quality. Each scan collected approximately 12 million points and took approximately seven minutes and nine seconds to complete. The total time for a complete vehicle scan was approximately 45 minutes. A resulting scan of the subject white 2020 MINI Cooper can be seen in Figure 2.
Apple iPhone LiDAR Documentation
There are many LiDAR applications available in Apple's App Store. For this paper, three different apps were used, selected based on the quantity and favorability of their reviews. The three apps are listed below:
All three applications functioned in the same way. The user looks through the camera, and as the user walks around a vehicle, a mesh begins to appear on the areas that have been captured by the LiDAR sensor. This process is shown for the Polycam app in Figure 3. The user needs to move slowly enough for the app to properly track the vehicle. If the user moves too quickly, each app will notify the user that they need to slow down. Once the user covers the desired area, a “stop” button appears.
Polycam did not have any settings for the LiDAR capture portion of the process. However, 3D Scanner App and Scaniverse did have settings that could be changed. 3D Scanner App provided "low res" and "high res" options. The "high res" option had additional settings as listed below:
For this paper, the “high res” setting was selected for 3D Scanner App. The additional “high res” settings were left on their default settings, except range, which was increased to its maximum of 5 meters. Scaniverse also provided a range setting which was moved to its maximum of 5 meters.
Due to the lack of research and the uncertainty of the reliability of the Apple LiDAR capture, two different scans were completed for each vehicle and for each LiDAR app, and a 13-foot scale was placed next to the vehicle for potential scaling needed later in the research. This scale is shown in Figure 4. The first scan included one pass around the subject vehicles. The user started at the front left corner of each vehicle and moved around the vehicle in a clockwise fashion, capturing LiDAR as they moved. Once the user returned to the front left corner of the vehicle, the scanning was stopped. These first scans were labeled "one-pass" and took approximately 30-90 seconds to complete. The second scan included two passes around the subject vehicles. The user started at the front left corner of each vehicle and moved around the vehicle in a clockwise fashion, capturing LiDAR as they moved. Once the user returned to the front left corner of the vehicle, the user made an additional pass around the vehicle, capturing additional details such as wheel wells, trim areas, and the roof area. These second scans were labeled "two-pass" and took approximately three to five minutes.
A total of six Apple iPhone LiDAR scans were collected for each vehicle—a “one-pass” with each of the three apps and a “two-pass” with each of the three apps.
Once the Apple iPhone scans were completed, each scan needed to be processed before exporting. All three apps had different options for processing. Scaniverse provided the three different options listed below. For this paper, the “Ultra” setting was selected when processing in Scaniverse.
3D Scanner App also provided three different options for processing which are listed below. The “Custom” setting provided additional options for “Smoothing” and “Simplify.” For this paper, the “HD” setting was selected when processing in 3D Scanner App.
Polycam provided four different options listed below. The “Custom” setting provided additional options for “Depth Range,” “Voxel Size,” and “Simplification.” For this paper, the “Space” setting was selected when processing in Polycam.
Each of these apps can export a 3D model of the LiDAR capture as well as the 3D point cloud data. These processing settings are intended to change the resulting 3D model that is exported; however, during preliminary testing it was determined that these settings also affected the resulting 3D point cloud data that was exported. The resulting iPhone scans of the blue 2002 Subaru WRX from the three different apps can be seen in Figures 5-7.
FARO 3D Scan Processing
The FARO Focus 350 3D scans that were collected from each vehicle were aligned using the software FARO SCENE 2019.2. The alignment method was a combination of the "top-view" method and the "cloud to cloud" method. The scans captured not only the vehicle but the surrounding scene as well, which is beneficial and helps ensure an accurate alignment. Once a proper alignment was achieved, the surrounding area and ground plane were removed to create a stand-alone vehicle point cloud. The vehicle point clouds were then filtered for errant points using filtering methods within the software. It is the authors' experience that glass returns fewer and less accurate points, so the windows were removed from the vehicle scans as well. The resulting filtered vehicle scans were then exported in the "PTS" file format to be used in analysis. In addition to the PTS files, an "RCS" file format version of the scans was needed for analysis. To achieve this, the exported PTS files were processed through Autodesk's ReCap software to create the RCS file format, which can be used in Autodesk's 3ds Max software for analysis.
Apple iPhone LiDAR Scan Processing
There were multiple point cloud file formats available when exporting the Apple iPhone LiDAR scans. For ease of use and continuity with the FARO 3D scans, PTS files were chosen as the file format type for 3D Scanner App and Polycam. However, Scaniverse only offered the "PLY" file format and the "LAS" file format. Georeferencing (a function of the LAS format) was not needed for the purposes of this paper, so the PLY file format was selected for Scaniverse. The point clouds were then exported to Dropbox—an option provided by all three apps. The iPhone scan files were then downloaded to a computer for processing.
The Scaniverse PLY files were imported into CloudCompare v2.125 alpha [17], a point cloud processing and editing program, then exported in the PTS file format for analysis. In addition to the PTS files, an "RCS" file format version of the iPhone scans was needed for analysis. To achieve this, the exported PTS files were processed through Autodesk's ReCap software to create the RCS file format, which can be used in Autodesk's 3ds Max [18] software for analysis.
Aligning and Comparing the Scans
Positioning and aligning scans in CloudCompare can be done manually, but it is a cumbersome and time-consuming process. As a result, the RCS files for both FARO and iPhone scans were imported into 3ds Max, and a separate file was created for each vehicle. The FARO scans were the baseline for comparison and were positioned at "0,0,0." The iPhone scans were aligned manually to the FARO scans. They were positioned, rotated, and scaled as necessary.
After aligning the iPhone scans, the PTS files for both FARO scans and iPhone scans were imported into CloudCompare. The translation, rotation, and scale values of each iPhone scan from 3ds Max were manually entered into CloudCompare for each iPhone scan.
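The transfer of transform values from 3ds Max into CloudCompare amounts to applying a similarity transform (uniform scale, rotation, and translation) to every point in the iPhone cloud. A minimal sketch of that operation is below; the scale, angle, and offset values are hypothetical, for illustration only, not values from this study.

```python
import numpy as np

def rotation_z(degrees: float) -> np.ndarray:
    """Rotation matrix about the vertical (Z) axis."""
    t = np.radians(degrees)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def apply_similarity(points: np.ndarray, scale: float,
                     R: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Apply p' = scale * R @ p + t row-wise to an (N, 3) point array."""
    return scale * points @ R.T + translation

# Toy two-point "cloud"; a real iPhone scan holds millions of points.
cloud = np.array([[1.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0]])
aligned = apply_similarity(cloud,
                           scale=1.0215,                       # 102.15% scale-up (hypothetical)
                           R=rotation_z(90.0),                 # yaw to match baseline
                           translation=np.array([0.5, -0.25, 0.0]))
```

Because the scale factor is uniform, applying it before or after the rotation gives the same result; only the translation must be applied last.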
Once each data set was aligned in CloudCompare, iPhone scans were manually cleaned as needed. The surrounding area and ground plane were removed to create a stand-alone vehicle scan. The windows and interiors were also removed from the vehicle scans as they would not be a part of the analysis.
Figure 8 shows the one-pass cleaned iPhone scans of the grey 2003 Honda S2000 from all three apps that were used.
Results
Out of the 30 iPhone scans that were collected, 50% required scaling during the alignment process: 40% needed to be scaled up and 10% needed to be scaled down, with scale values ranging from a minimum of 97.72% to a maximum of 104.60%. The scale values are shown in Table 1. A comparison of the scale values between each vehicle is shown in Figure 9. The lengths of the FARO vehicle scans and the lengths of the mobile LiDAR scans before scaling are shown in Table 2, and the differences between the FARO vehicle scan lengths and the mobile LiDAR scan lengths are shown in Table 3.
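The scale factor for each scan follows directly from the two measured lengths: the FARO baseline length divided by the unscaled iPhone length. A minimal sketch, with hypothetical lengths rather than values from Table 2:

```python
def scale_factor(faro_length: float, iphone_length: float) -> float:
    """Percent scale to apply to the iPhone scan to match the FARO baseline."""
    return 100.0 * faro_length / iphone_length

# A hypothetical vehicle measuring 150.0 in. in the FARO baseline but
# 153.5 in. in the raw iPhone scan would need to be scaled DOWN to ~97.7%;
# an undersized iPhone scan would yield a factor above 100% (scale up).
f = scale_factor(150.0, 153.5)
```

A factor below 100% thus corresponds to the "scaled down" cases and a factor above 100% to the "scaled up" cases reported above.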
The total numbers of FARO LiDAR points used for comparison were 4,364,396 for the grey 2003 Honda S2000; 4,257,866 for the grey 2018 Nissan Leaf; 2,968,555 for the blue 2002 Subaru WRX; 4,072,754 for the black 2013 Toyota Highlander; and 3,079,963 for the white 2020 MINI Cooper. These LiDAR points were used as a baseline for all the scan analyses. These values, along with the iPhone LiDAR point counts, can be seen in Table 4. A comparison of these values can be seen in Figure 10. Distances were then compared and analyzed between the FARO scans and the iPhone scans.
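The cloud-to-cloud comparison that produces the accuracy bands in the following sections can be sketched as: for each iPhone point, find the distance to its nearest FARO point, then tally the distances into the four bands used throughout the Results (< 0.25 in., 0.25-0.5 in., 0.5-0.75 in., and ≥ 0.75 in.). The toy brute-force version below illustrates the idea only; it is not the authors' CloudCompare pipeline, which would use an accelerated (e.g., tree-based) neighbor search on millions of points.

```python
import numpy as np

def nearest_distances(iphone_pts: np.ndarray, faro_pts: np.ndarray) -> np.ndarray:
    """Distance from each iPhone point to its nearest FARO point (brute force)."""
    diffs = iphone_pts[:, None, :] - faro_pts[None, :, :]   # (N, M, 3)
    return np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)    # (N,)

def band_percentages(distances: np.ndarray) -> np.ndarray:
    """Percent of points in each accuracy band (inches)."""
    edges = [0.0, 0.25, 0.5, 0.75, np.inf]
    counts, _ = np.histogram(distances, bins=edges)
    return 100.0 * counts / len(distances)

# Toy clouds in inches; real clouds hold millions of points each.
faro = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
iphone = np.array([[0.1, 0.0, 0.0],    # 0.1 in. off -> best band
                   [10.0, 0.6, 0.0],   # 0.6 in. off -> third band
                   [5.0, 0.0, 0.0]])   # 5.0 in. off -> worst band
pct = band_percentages(nearest_distances(iphone, faro))
```

Note that this measure is one-directional: it reports how far iPhone points sit from the FARO surface, not whether the iPhone scan covered the whole vehicle.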
Grey 2003 Honda S2000
The iPhone scans were found to have an average of 45% of their points within .25 inches of the FARO scan data, with a standard deviation of 6%. An average of 23% were located between .25 inches and .5 inches, with a standard deviation of 4%. An average of 12% were located between .5 and .75 inches, with a standard deviation of 3%. An average of 19% were .75 inches or greater in distance, with a standard deviation of 3%. These values are shown in Table 5 and Figure 11.
Grey 2018 Nissan Leaf
The iPhone scans were found to have an average of 37% of their points within .25 inches of the FARO scan data, with a standard deviation of 14%. An average of 21% were located between .25 inches and .5 inches, with a standard deviation of 2%. An average of 13% were located between .5 and .75 inches, with a standard deviation of 4%. An average of 29% were .75 inches or greater in distance, with a standard deviation of 13%. These values are shown in Table 6 and Figure 12.
Blue 2002 Subaru WRX
The iPhone scans were found to have an average of 28% of their points within .25 inches of the FARO scan data, with a standard deviation of 5%. An average of 21% were located between .25 inches and .5 inches, with a standard deviation of 2%. An average of 19% were located between .5 and .75 inches, with a standard deviation of 5%. An average of 33% were .75 inches or greater in distance, with a standard deviation of 7%. These values are shown in Table 7 and Figure 13.
Black 2013 Toyota Highlander
The iPhone scans were found to have an average of 27% of their points within .25 inches of the FARO scan data, with a standard deviation of 9%. An average of 16% were located between .25 inches and .5 inches, with a standard deviation of 2%. An average of 12% were located between .5 and .75 inches, with a standard deviation of 1%. An average of 45% were .75 inches or greater in distance, with a standard deviation of 10%. These values are shown in Table 8 and Figure 14.
White 2020 MINI Cooper
The iPhone scans were found to have an average of 30% of their points within .25 inches of the FARO scan data, with a standard deviation of 7%. An average of 23% were located between .25 inches and .5 inches, with a standard deviation of 5%. An average of 15% were located between .5 and .75 inches, with a standard deviation of 4%. An average of 33% were .75 inches or greater in distance, with a standard deviation of 3%. These values are shown in Table 9 and Figure 15.
One-Pass vs. Two-Pass
The one-pass iPhone scans, across all vehicles and apps, were found to have an average of 33% of their points within .25 inches of the FARO scan data, with a standard deviation of 8%. An average of 21% were located between .25 inches and .5 inches, with a standard deviation of 4%. An average of 14% were located between .5 and .75 inches, with a standard deviation of 3%. An average of 31% were .75 inches or greater in distance, with a standard deviation of 9%.
The two-pass iPhone scans, across all vehicles and apps, were found to have an average of 33% of their points within .25 inches of the FARO scan data, with a standard deviation of 13%. An average of 20% were located between .25 inches and .5 inches, with a standard deviation of 4%. An average of 14% were located between .5 and .75 inches, with a standard deviation of 5%. An average of 32% were .75 inches or greater in distance, with a standard deviation of 13%. These values are shown in Table 10 and Figure 16.
Software Comparison
The apps used to create the iPhone scans were also analyzed across all vehicles and passes. The 3D Scanner App scans were found to have an average of 31% of their points within .25 inches of the FARO scan data, with a standard deviation of 9%. An average of 21% were located between .25 inches and .5 inches, with a standard deviation of 4%. An average of 15% were located between .5 and .75 inches, with a standard deviation of 3%. An average of 33% were .75 inches or greater in distance, with a standard deviation of 11%.
The Scaniverse scans were found to have an average of 31% of their points within .25 inches of the FARO scan data, with a standard deviation of 8%. An average of 21% were located between .25 inches and .5 inches, with a standard deviation of 4%. An average of 16% were located between .5 and .75 inches, with a standard deviation of 4%. An average of 32% were .75 inches or greater in distance, with a standard deviation of 11%.
The Polycam scans were found to have an average of 38% of their points within .25 inches of the FARO scan data, with a standard deviation of 12%. An average of 20% were located between .25 inches and .5 inches, with a standard deviation of 4%. An average of 12% were located between .5 and .75 inches, with a standard deviation of 5%. An average of 30% were .75 inches or greater in distance, with a standard deviation of 11%. These values are shown in Table 11 and Figure 17.
Visual Comparison
Visual comparisons were created for the grey 2003 Honda S2000 to illustrate the differences between the FARO scans and the iPhone scans. These visual comparisons included the three different apps used and the one-pass/two-pass data sets. The blue color indicates areas of greater accuracy that were within .25 inches of the FARO scan. The red color indicates areas that were less accurate and more than .75 inches from the FARO scan. These comparisons are shown in Figures 18-20.
Using the Apple iPhone 12 Pro or 13 Pro is a quick way to collect LiDAR data. It can be done by anyone who is familiar with Apple iPhones, and the entire collection process can take as little as two minutes. The process is relatively inexpensive, with the iPhone 13 Pro Max costing $1,399.00, compared to traditional ground-based scanners, like the FARO Focus 350, which can cost tens of thousands of dollars. The apps used are either free or available via a low-cost monthly subscription ($7 per month for Polycam Pro, which is required for point cloud export).
Given its low cost and speed of data capture, mobile phone LiDAR is a promising technology; however, there was a wide range of accuracies in the data sets. Depending on the intended use of the acquired data, this range may not be acceptable. More research is needed to understand this range and whether more accurate results could be obtained from differing methodologies, subject matter, and app selection.
The need for scaling presents the largest concern when scanning without a baseline comparison such as the FARO scans. A 13' scale was placed next to the vehicles during the iPhone scans for scaling verification that might be needed later in the research. However, this scale was illegible in the iPhone scans, and there were times when the scale appeared misshapen. For this paper, the iPhone scans were scaled to the size of the FARO scans, but in a real-world situation the lack of baseline scan data could present an issue. Additional research would be needed to determine whether fiducials or something comparable would be visible in the iPhone data and could be used for scaling.
It is worth noting that even though the iPhone scan data appeared to align to the baseline FARO scan data, there were many instances of features of the vehicles not aligning. For example, there were times when the center of a wheel was displaced by multiple inches, or the width of the bumper was narrower than the baseline scan. Visual verification will be needed to truly determine alignment and overall accuracy.
The method of LiDAR capture is something that was not considered in this paper. These LiDAR sets were captured by one individual, and there was no way to ensure the exact same phone path taken for each capture. Further research is needed to understand the effect of this on resulting accuracy of the point clouds.
There were settings within the iPhone LiDAR apps that could not be quantified. The authors selected the settings thought appropriate for the situation; however, these settings should be researched further to understand how they affect the resulting scan data.
In our analysis comparing the different vehicles, the black 2013 Toyota Highlander returned the fewest reliable points, with 45% of the Apple LiDAR points falling .75 inches or more from the FARO scan points. This was an expected result, based on the authors' previous experience scanning black vehicles. Methods to improve these results will need to be investigated.
There appeared to be no significant difference between the one-pass and two-pass data sets. It was thought that the addition of a second pass would add stability and resolution to the iPhone scan capture. However, our findings do not support that assumption. A single, slow, thorough pass is expected to yield the same data set without the additional time. Additional research will be needed to determine best practices.
Using the Apple iPhone's LiDAR capabilities presents an easy and fast way to acquire LiDAR data. However, additional research is needed to determine accuracy thresholds, best practices, limitations, and processing techniques for use on exemplar vehicles, and whether these improve accuracy. Research on exterior scenes, interior scenes, and smaller objects is also needed, as this paper only investigated capturing vehicle exteriors, whose reflectivity can make acquiring data difficult.
We would like to thank Seth Higgins Miller, Robert Gillihan, Ethan Helms, and Alireza Hashemian for providing insight and expertise that greatly assisted this research.
Seth Miller is a Visualization Analyst in J.S. Held’s Accident Reconstruction Practice.
Seth can be reached at [email protected] or +1 303 733 1888.
Robert Gillihan is a Visualization Analyst in J.S. Held’s Accident Reconstruction Practice.
Robert can be reached at [email protected] or +1 303 733 1888.
Ethan Helms is a Visualization Analyst in J.S. Held’s Accident Reconstruction Practice.
Ethan can be reached at [email protected] or +1 303 733 1888.
[1] National Ocean Service. What is lidar?. Accessed October 7, 2021. https://oceanservice.noaa.gov/facts/lidar.html.
[2] FARO Laser Scanners Tech Sheet. Accessed October 7, 2021. https://www.faro.com/en/Resource-Library/Tech-Sheet/techsheet-faro-focus-laser-scanners.
[3] Carter, N., Hashemian, A., Rose, N., and Neale, W., "Evaluation of the Accuracy of Image Based Scanning as a Basis for Photogrammetric Reconstruction of Physical Evidence," SAE Technical Paper 2016-01-1467, 2016, doi:10.4271/2016-01-1467.
[4] Neale, W., Marr, J., and Hessel, D., "Nighttime Videographic Projection Mapping to Generate Photo-Realistic Simulation Environments," SAE Technical Paper 2016-01-1415, 2016, doi:10.4271/2016-01-1415.
[5] Terpstra, T., Voitel, T., and Hashemian, A., "A Survey of Multi-View Photogrammetry Software for Documenting Vehicle Crush," SAE Technical Paper 2016-01-1475, 2016, doi:10.4271/2016-01-1475.
[6] Neale, W., Hessel, D., and Koch, D., "Determining Position and Speed through Pixel Tracking and 2D Coordinate Transformation in a 3D Environment," SAE Technical Paper 2016-01-1478, 2016, doi:10.4271/2016-01-1478.
[7] Terpstra, T., Miller, S., and Hashemian, A., "An Evaluation of Two Methodologies for Lens Distortion Removal when EXIF Data is Unavailable," SAE Technical Paper 2017-01-1422, 2017, doi:10.4271/2017-01-1422.
[8] Terpstra, T., Dickinson, J., and Hashemian, A., “Using Multiple Photographs and USGS LiDAR to Improve Photogrammetric Accuracy,” SAE Technical Paper 2018-01-0516, 2018, doi:10.4271/2018-01-0516.
[9] Bailey, A., Funk, J., Lessley, D., Sherwood, C., Crandall, J., Neale, W., and Rose, N., “Validation of a Videogrammetry Technique for Analyzing American Football Helmet Kinematics,” Sports Biomechanics, 2018, doi:10.1080/14763141.2018.1513059.
[10] Terpstra, T., Dickinson, J., Hashemian, A., and Fenton, S., “Reconstruction of 3D Accident Sites Using USGS LiDAR, Aerial Images, and Photogrammetry,” SAE Technical Paper 2019-01-0423, 2019, doi:10.4271/2019-01-0423.
[11] Danaher, D., Neale, W., McDonough, S., and Donaldson, D., “Low Speed Override of Passenger Vehicles with Heavy Trucks,” SAE Technical Paper 2019-01-0430, 2019, doi:10.4271/2019-01-0430.
[12] Neale, W.T., Terpstra, T., Mckelvey, N., and Owens, T., “Visualization of Driver and Pedestrian Visibility in Virtual Reality Environments,” SAE Technical Paper 2021-01-0856, 2021, doi:10.4271/2021-01-0856.
[13] Terpstra, T., Hashemian, A., Gillihan, R., King, E. et al., “Accuracies in Single Image Camera Matching Photogrammetry,” SAE Technical Paper 2021-01-0888, 2021, doi:10.4271/2021-01-0888.
[14] Heinrichs, B. and Yang, M., “Bias and Repeatability of Measurements from 3D Scans Made Using iOS-Based Lidar,” SAE Technical Paper 2021-01-0891, 2021, doi:10.4271/2021-01-0891.
[15] Vogt, M., Rips, A., and Emmelmann, C., “Comparison of iPad Pro®’s LiDAR and TrueDepth Capabilities with an Industrial 3D Scanning Solution,” Technologies 9(2):25, 2021, doi:10.3390/technologies9020025.
[16] Gillihan, R., “Accuracy Comparisons of iPhone 12 Pro LiDAR Outputs,” (01/2021), ProQuest Dissertations Publishing, ISBN: 9798759988380.
[17] CloudCompare (Version 2.6.1), Computer Software, http://www.cloudcompare.net/
[18] 3ds Max 2022, Computer Software, Autodesk, Mill Valley, CA, http://www.autodesk.com/products/3ds-max/