Question No. 1 (Raised by Todd and answered by Faisal & Martin)

When we parse a scan, the parser breaks it up into multiple files. How do we merge them back into one scan in JRC so we can align the scan with another scan?

Answer:

Unfortunately, to our knowledge, there is no easy way to merge subscans into one new scan without losing information or having to deal with unstructured point clouds. Here are two possibilities to deal with this problem.
 
1st possibility - Pose matrices

1) Align one subscan.
2) Right-click on the registered subscan, then click on Registration and then Pose. This gives you a quick look at the pose matrix; copy it.
3) Right-click on the second subscan, then click on Registration and then Pose, and paste the copied matrix. Repeat this for all subscans. This way you adjust the matrices of the unregistered subscans to the aligned one (a small sketch of applying such a pose matrix is given below). However, this way you are restricted to registering with only one subscan, and it is not guaranteed that the alignment which fits the registered subscan will also fit the other subscans.
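
Below is a minimal sketch of what pasting the same pose matrix onto every subscan amounts to, assuming the pose is a 4x4 homogeneous matrix and each subscan is available as an (N, 3) point array; the file name and variable names are hypothetical and not JRC's API.

import numpy as np

def apply_pose(points_xyz, pose_4x4):
    # Transform an (N, 3) array of subscan points by a 4x4 homogeneous pose matrix.
    pts = np.asarray(points_xyz, dtype=float)
    homogeneous = np.hstack([pts, np.ones((len(pts), 1))])   # append w = 1
    return (homogeneous @ np.asarray(pose_4x4, dtype=float).T)[:, :3]

# Hypothetical usage: the pose copied from the registered subscan is applied
# to every other subscan, so all of them move together and keep the relative
# geometry they had within the original scan.
# pose = np.loadtxt("registered_subscan_pose.txt").reshape(4, 4)
# aligned = [apply_pose(sub, pose) for sub in subscans]
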
2nd possibility - Merge subscans with a virtual scan

1) Load all subscans of one laser scan. Display the scans from the LiDAR unit's point of view and field of view (right-click on the scans: "Center to local origin" or "Insert origin into point list").
2) Place an ortho camera there.
3) Now conduct a virtual scan (right-click on the camera, Virtual scan). Be sure to adjust the resolution adequately (e.g., increase it to your scanning resolution), otherwise your pre-registration will be difficult.
4) Click "Add point cloud to project". This creates one complete scan, which is placed in the current project. By choosing the FOV of the scanner, you won't lose information when you conduct the virtual scan (a virtual scan only takes into account the points which are displayed!). Now you can align this merged scan with the other scan. A rough sketch of the idea is given below.
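
The following is a rough stand-in for what a virtual scan at a given resolution does: per grid cell of the chosen resolution, it keeps only the point closest to the camera. It is not JRC's implementation; the axis-aligned ortho viewing direction and the cell size are assumptions made for this sketch.

import numpy as np

def ortho_virtual_scan(points, cell_size):
    # Crude ortho "virtual scan" looking down the +Y axis: grid the X/Z plane
    # at cell_size and keep, per cell, the point closest to the camera
    # (smallest Y). Coarse cells discard detail, which is why the resolution
    # should roughly match the original scanning resolution.
    pts = np.asarray(points, dtype=float)
    ix = np.floor(pts[:, 0] / cell_size).astype(int)
    iz = np.floor(pts[:, 2] / cell_size).astype(int)
    best = {}
    for i, key in enumerate(zip(ix, iz)):
        if key not in best or pts[i, 1] < pts[best[key], 1]:
            best[key] = i
    return pts[list(best.values())]

# merged = ortho_virtual_scan(np.vstack(subscans), cell_size=0.02)  # e.g. 2 cm cells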


Question No. 2 (Raised by Todd and answered by Janos)

What does the beam width field in the controller software represent? Is it the mean beam width at the mean distance of the scan?
Should we aim for a resolution / spacing that matches the beam width, or should we go significantly smaller (which is what we are currently doing)?

Answer:

Concerning your question regarding beam width, your assumption is right! This is the footprint of the laser beam on a wall perpendicular to the laser ray at the given distance (the distance you have in the mean distance field). This value represents the total footprint. So assuming you have a clear energetic centre of the beam (which you have when using fibre-optic instruments such as the ILRIS), the achievable resolution is still somewhat better (by a factor of at least 2). Then you have to consider the footprint shape: when two circles overlap over most of their area, the information you get from a range measurement is pretty much the same. When they only overlap by 1/3 to 1/4, the information is really new. When shifting the laser by one radius, you will be roughly in this ratio.
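
As a quick plausibility check of that overlap argument, here is a small sketch using the standard circle-circle intersection formula, assuming idealised circular footprints of equal radius (the radius value is arbitrary). For an offset of one radius the shared area comes out at roughly 0.39 of a footprint, i.e. in the same ballpark as the roughly-one-third overlap described above.

import math

def overlap_fraction(d, r=1.0):
    # Fraction of a circular footprint's area that is shared with an identical
    # circle whose centre is offset by d (lens / circle-circle intersection area).
    if d >= 2 * r:
        return 0.0
    lens = 2 * r**2 * math.acos(d / (2 * r)) - (d / 2) * math.sqrt(4 * r**2 - d**2)
    return lens / (math.pi * r**2)

print(overlap_fraction(d=1.0))   # offset by one radius -> ~0.39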

A last factor you will need to consider is that the range you work with here is an average range for the scan. That means: very near points will show a better resolution, which is fine as the beam is significantly smaller there, while farther points might need a finer step width.
As a result, your grid spacing is not the resolution you have. Resolution is not only how many points you have in one row or one column. Resolution is a figure that represents the information content of your point cloud and is a function of step width and beam diameter.
Scientifically, resolution is often defined as the information content of one pixel. That means: acquiring more pixels with the same information will not increase your resolution. But I'm sure you are aware of that.
That leads to an example:
Assumption: Distance around 2000m
=> 50 cm footprint
=> 4 cm min step width possible
Discussing the footprint: 50 cm total, ~25 cm energy centre, proper new information every ~13 cm (half the diameter of the energy centre, i.e. one radius).
Discussing the step width: setting the step width to 1 count represents 4 cm, which will just cost time. Setting it to 2 (8 cm) will still be too dense, but might be the densest setting worth considering (to incorporate the farther points, which need a finer step). Best is to decide between 3 and 4 (depending on the projected scan time). If you really need to save scanning time, another way to approach this is to define the level of detail you need from the object for the analysis you are planning to do. A small sketch of this arithmetic is given below.
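
A minimal sketch of the arithmetic above, with all constants read off the example rather than from the instrument specification:

# Example numbers from the answer above (assumed, not instrument specs).
footprint_m     = 0.50                 # total footprint at ~2000 m
energy_centre_m = footprint_m / 2      # ~25 cm energetic centre
new_info_m      = 0.13                 # ~13 cm: shift by one energy-centre radius
min_step_m      = 0.04                 # 1 encoder count at ~2000 m

for counts in (1, 2, 3, 4):
    spacing = counts * min_step_m
    print(f"{counts} count(s): {spacing * 100:.0f} cm spacing, "
          f"{spacing / new_info_m:.2f} x the ~13 cm information step")

# 1 and 2 counts stay well below the information step (oversampling),
# while 3-4 counts land close to it, matching the recommendation above.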


Question No. 3 (Raised by Martin and answered by Janos)

We are trying to eliminate the noise in the gridded point clouds, e.g. vegetation. JRC is not offering an appropriate filter to do this, or is it? (Filtering by inclination, confidence values etc. in Edit 2D doesn't really work.) Best would be to filter out the vegetation but keep potential ground points beneath it (bare-ground model). I see there is some software out there on the web, but it is mostly commercial. What do you recommend? At the moment we are cutting out the vegetation by hand, which is probably not the most elegant approach.

Answer:

It is suggested to approach the vegetation with the "mixed points filter" in the preprocessing dialogue.

As the point roughness is a lot higher for vegetation, a lot of vegetated areas can be filtered out while the rock information should remain. Try setting the incidence angle to something like 5 deg. (A rough sketch of the roughness idea is given below.)
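
JRC's mixed points filter is a built-in, so the following is only an illustration of the underlying roughness idea (plane-fit residual in a local neighbourhood), not the filter's actual implementation; the neighbourhood size and threshold are assumptions.

import numpy as np
from scipy.spatial import cKDTree

def roughness(points, k=20):
    # Per-point roughness: RMS distance of each point's k nearest neighbours
    # to the best-fit plane through them. Vegetation scores high, bare rock low.
    pts = np.asarray(points, dtype=float)
    tree = cKDTree(pts)
    _, idx = tree.query(pts, k=k)
    rough = np.empty(len(pts))
    for i, nb in enumerate(idx):
        nbh = pts[nb] - pts[nb].mean(axis=0)
        _, _, vt = np.linalg.svd(nbh, full_matrices=False)   # vt[-1] = plane normal
        rough[i] = np.sqrt(np.mean((nbh @ vt[-1]) ** 2))
    return rough

# keep = points[roughness(points) < 0.05]   # e.g. drop points rougher than 5 cm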

Start with this and continue to generate a DGM (DEM) style point cloud from the data. Turn your model to the back side and do a virtual scan (or use the selection mode which only selects the visible points) at a very low resolution (keep the area small, so that only proper back-side points will remain). In case unwanted points remain in some densely vegetated areas, delete them by hand. Do a 2.5D Delaunay mesh (which then is like a usual DGM (DEM) triangulation). Calculate an inspection (original point cloud vs. DGM model). Using pseudo colours, you can colour points that exceed a specific distance black and later remove them using the hide black points dialogue (it might be necessary to first export the colour overlay and re-import it as a new proper overlay instead of pseudo colours). A small sketch of this distance-to-DEM filtering step is given below.
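
The following is a sketch of that inspection step, assuming the thinned back-side points are already a reasonable bare-ground sample; a SciPy Delaunay-based 2.5D interpolation stands in for JRC's DGM mesh and inspection dialogue, and the 0.3 m threshold is an arbitrary example.

import numpy as np
from scipy.interpolate import LinearNDInterpolator

def filter_by_dem_distance(points, ground_points, max_dist=0.3):
    # Keep only points whose height above a 2.5D ground model (triangulated
    # from the thinned ground sample) stays below max_dist; everything higher
    # is treated as vegetation, mirroring the "hide black points" step.
    pts = np.asarray(points, dtype=float)
    gnd = np.asarray(ground_points, dtype=float)
    dem = LinearNDInterpolator(gnd[:, :2], gnd[:, 2])   # Delaunay-based 2.5D surface
    dem_z = dem(pts[:, :2])
    height = pts[:, 2] - dem_z
    keep = np.isnan(dem_z) | (np.abs(height) <= max_dist)   # also keep points outside the DEM hull
    return pts[keep]

# bare_ground = filter_by_dem_distance(original_points, thinned_backside_points)
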
Comment by Martin:
As the vegetation might be quite dense, this approach probably won't work too well, since behind the vegetation there are basically data holes. So when you make a virtual scan from the back, you will still capture the vegetation. The straightforward solution for now is: delete by hand!


Question No. 4 (Raised by Jakob and answered by Todd via Janos)
Memory problem with JRC: unable to load more data. How can we avoid this problem?
Answer:

To load larger data sets into JRC we want to:

1) Enable 'Scalable Quick Rendering' (most likely under the Options menu).

2) Another option: before loading all the scans, go to the property browser and set subsampling to 1/4 (every second point) or 1/8. This will reduce the data quite a bit. Note that if you then do things like virtual scans, they will NOT be using all the points (see the small sketch after this list).

3) The new version of the software has some memory improvements; this is now ready for us to install.
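
To illustrate what the subsampling factor does, here is a tiny sketch, assuming the scan is stored as a regular row/column grid; the array below is random placeholder data, not a real scan.

import numpy as np

grid = np.random.rand(2000, 2000).astype(np.float32)   # hypothetical gridded scan

quarter = grid[::2, ::2].copy()   # keep every 2nd row and every 2nd column -> 1/4 of the points
print(grid.nbytes // 2**20, "MB ->", quarter.nbytes // 2**20, "MB")

# Any operation run afterwards (e.g. a virtual scan) only sees the kept points,
# which is the caveat noted in option 2 above.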