This also applies to aerial lidar on airplanes and helicopters, and really any aerial lidar unit you're going to come into contact with. So first, the equipment. There's something that's common to every lidar setup: you have the aerial lidar, the rover, and then you have a base station. Both components have a GPS. On the aerial unit there is the laser scanner, and inside this box is the navigation: the IMU and the GNSS receiver for the antenna, as well as, sometimes but not always, a co-aligned camera.

Now let's talk about how these systems all talk together to produce a georectified and colorized point cloud. Say the end goal is that we want points on the ground. Here's a house: we get points on the roof and points on the ground. And there was a tree over here, so we have a bunch of points on the tree and points underneath the tree as well (we'll talk about vegetation penetration later). The end goal is that we want these dots to be measurements of the real world. We want to know exactly where this spot on the corner of the house is, in latitude, longitude, and elevation, or in northing, easting, and elevation. The point is, we want to know that point's coordinates, x, y, z, on the earth, very precisely. So let's work backwards through our measurement system to see how we ended up with that point located right there on the earth.

Moving backwards: we have the laser that came down, hit this spot, and traveled back up into our measurement system, which is the drone. Here's our little drone up here with a little box payload; that's the lidar right there. Now, to know where this spot is located on the earth, first we need to know how far away it is. Let's call it delta d, for distance. So we have a measurement of distance from the drone, and that's captured by the laser scanner itself. That's all the scanner is doing: measuring distances and an angle with respect to itself. If we draw the scanner's own coordinate axes, x, y, and z, then we've measured, say, an angle theta from one of those axes. So now we know a distance and an angle, and that's it. But the drone could be oriented in any direction, so what we want to know next is: how is it oriented in three-dimensional space? This brings us to the second component of an aerial lidar scanner: the IMU. The IMU tells us our orientation, exactly how we're positioned in three-dimensional space. So if I was holding the lidar right here, and the laser scanner is measuring a distance down to the ground and back, I need to know how it is oriented.
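As a minimal sketch of that distance-and-angle idea, here is the measurement expressed in the scanner's own frame, simplified to two dimensions. The function name and the nadir-referenced angle convention are my own assumptions for illustration, not anything from a real lidar API:

```python
import math

def sensor_frame_point(distance_m, theta_rad):
    """Convert one range/angle measurement into a point in the
    scanner's own x-z plane (a 2-D simplification of the 3-D case).

    distance_m: range measured by the laser scanner
    theta_rad:  beam angle away from the scanner's straight-down axis
                (an assumed convention for this sketch)
    """
    x = distance_m * math.sin(theta_rad)  # across-track offset
    z = distance_m * math.cos(theta_rad)  # distance straight down
    return (x, z)

# A 100 m range measured 30 degrees off nadir:
x, z = sensor_frame_point(100.0, math.radians(30.0))
# x == 50.0 (sin 30° = 0.5); z ≈ 86.6
```

Note this point only exists relative to the scanner; it still has to be rotated and translated into world coordinates, which is what the IMU and GNSS are for.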

If the scanner was tilted like this while measuring that distance, the point is going to land somewhere else on the ground; but if I bring it level, the point lands directly underneath it. So step one is that we need to know the orientation very accurately. And just like any lever arm, with a stick coming out from the sensor, the more precisely we know the orientation, the more precise that dot is on the ground. If we didn't know the orientation precisely, that dot could fall anywhere within a large area. Now, that's only one piece of the equation, because orientation alone doesn't tell us everything. We want to project this onto the real world, so in addition to the orientation, we need to know exactly where the sensor is located in the world: its latitude, its longitude, and how high it is. That's the second device: the GPS antenna, along with the GNSS receiver inside the lidar unit itself. So every lidar has these three components: a GNSS receiver, an IMU, and the laser scanner. The GNSS tells us our position, x, y, z, latitude, longitude, elevation; you get the point, it's our location in three-dimensional space. The IMU tells us our orientation. And then you can project that point, measured as a distance and an angle, down onto the ground.
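The three-component idea above (GNSS position + IMU attitude + scanner measurement) can be sketched as one line of vector math: ground point = GNSS position + attitude rotation applied to the scanner-frame vector. This toy uses a heading-only rotation and made-up UTM-style coordinates; real systems apply full roll/pitch/yaw plus lever-arm and boresight offsets:

```python
import math

def rotation_z(yaw_rad):
    """Rotation about the vertical axis (heading only, for brevity;
    a real IMU supplies full roll, pitch, and yaw)."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def georeference(p_gnss, R_imu, v_sensor):
    """Ground point = GNSS position + (IMU attitude applied to the
    scanner-frame measurement vector)."""
    rotated = [sum(R_imu[i][j] * v_sensor[j] for j in range(3))
               for i in range(3)]
    return [p_gnss[i] + rotated[i] for i in range(3)]

# Drone at (hypothetical) easting 500000, northing 4000000, 120 m up;
# beam is 80 m straight down in the sensor frame, heading yawed 90°:
p = georeference([500000.0, 4000000.0, 120.0],
                 rotation_z(math.radians(90.0)),
                 [0.0, 0.0, -80.0])
# p == [500000.0, 4000000.0, 40.0] — the point lands 80 m below the drone
```

Because the beam points straight down here, the yaw has no effect; tilt the sensor vector sideways and the same math shows why orientation error moves the dot across the ground.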

Now you might be asking: what's this other piece over here, this base station? Well, this is a very important piece of the equation, because a single GPS will never get more accurate than about a meter; that's the best you can really hope for, about three feet, and more likely it's about ten feet of accuracy on a single GPS. Now, if you put a base station on the ground, it can receive signals from the satellites (here's my satellite, very crappy drawing here, guys). A satellite sends out a signal, and both the base station and the GNSS on the airborne lidar system make that same measurement. Basically, there are a bunch of aberrations in the signals that come from satellites. Think of it like this: you've got a cup of water, you put a straw in it, and you see the straw bend. That's refraction; the water is bending the light. The same thing happens with our atmosphere and these GPS signals. The satellites in the sky send out their signal, but then it interacts with a storm cloud or some other weather pattern, and those weather patterns bend the signal as it passes through our atmosphere. What we want to do is correct that common-mode distortion, and this differential GPS setup corrects that problem.
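Here is the common-mode idea reduced to its simplest form. Real RTK/PPK works on raw carrier-phase observables, not on computed positions, so treat this position-domain version purely as an illustration of why two receivers seeing the same distorted signal can cancel the distortion; all coordinates are invented:

```python
def differential_correction(base_known, base_measured, rover_measured):
    """The base station sits on a surveyed point, so the gap between
    where it *knows* it is and where GNSS *says* it is approximates
    the common-mode atmospheric error. Subtract that same error from
    the rover's measured position."""
    error = [m - k for m, k in zip(base_measured, base_known)]
    return [r - e for r, e in zip(rover_measured, error)]

# The atmosphere shifts both receivers ~1.2 m east and ~0.4 m up:
corrected = differential_correction(
    base_known=[100.0, 200.0, 50.0],
    base_measured=[101.2, 200.0, 50.4],
    rover_measured=[151.2, 250.0, 120.4],
)
# corrected ≈ [150.0, 250.0, 120.0]
```

The key assumption is that base and rover are close enough that they look through the same slice of atmosphere, which is why base-station baseline distance matters in practice.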

So that's how the basics of lidar work. You have a base station, and you have an aerial lidar unit with the IMU, the laser scanner, and the GNSS on board, and that's how you project a point onto the ground: starting from "where am I located in the air?" (the high-accuracy location from differential GPS), then "how am I oriented?" (the IMU), and then "I measured a distance this far from the laser." All these things work together to get you the end result of having that dot on the ground.

Now, there are two ways of doing this differential GPS setup. One is called RTK, real-time kinematic, and it's done in real time: the base station sends corrections up to the rover (which is just another GPS receiver), and the rover gets that accurate precision live. Here's the drawback: you always have to maintain the communication link from the base to the rover, so if you ever lose signal, you lose precision in your measurement, which kind of sucks. The other thing is that real-time data is only so good. The best way to get the highest-accuracy data is through something called PPK, post-processed kinematic. If you do it in post, you can do a couple of really cool things. You don't really do this yourself; there's software

that does this. What the software does is calculate the trajectory through space: it runs from time equals zero forward to the end, then runs a second pass from the end back to the beginning, calculating the same thing in reverse, and it minimizes the difference between the two calculations. By doing this, it can iterate back and forth many times and get more precise data. That's why we want to do PPK every time, and why RTK, in my humble opinion, is not so important.

So I hope this video helped you guys understand a little bit about the measurement system that is unmanned lidar. This applies to the R2A, and it applies to any aerial lidar unit: it's a measurement system, and that measurement system consists of an IMU, a GNSS, a laser scanner, and a base station. Now, I didn't mention this earlier, but there is oftentimes a third component: a camera. The camera is just taking photos, and it uses that same orientation information to project the photo pixels onto your point cloud. The camera and lidar are co-aligned and tightly coupled, one rigid unit, and therefore you're able to project the photo pixels onto the laser points on the ground.
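Going back to the PPK idea for a moment, the forward/backward trick can be sketched with a toy smoother. Real PPK software uses a proper Kalman filter and smoother over GNSS/IMU states; this minimal version only shows why processing the trajectory in both time directions and blending the passes beats a single forward pass:

```python
def forward_backward_smooth(samples, alpha=0.5):
    """Toy two-direction smoother: filter the trajectory samples from
    past to future, then from future to past, and average the two
    passes so every estimate benefits from data on both sides of it."""
    def one_pass(xs):
        out, est = [], xs[0]
        for x in xs:
            est = alpha * x + (1 - alpha) * est  # simple exponential filter
            out.append(est)
        return out

    fwd = one_pass(samples)
    bwd = list(reversed(one_pass(list(reversed(samples)))))
    return [(f + b) / 2 for f, b in zip(fwd, bwd)]

# A noisy 1-D position track (made-up numbers):
smoothed = forward_backward_smooth([0.0, 1.1, 1.9, 3.2, 4.0])
```

A forward-only filter always lags behind the truth; the backward pass lags the other way, so their average largely cancels the lag, which is the intuition behind doing this "in post" rather than in real time.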

So now you get a colorized point cloud, so there you guys go.
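The colorization step can also be sketched in a few lines: because camera and lidar share one rigid, co-aligned mount, each ground point can be projected into the photo and the pixel color copied onto it. This assumes an idealized nadir-pointing pinhole camera with no lens distortion, and the tiny 2x2 "photo" is invented for the example:

```python
def colorize_point(point, camera_pos, focal_px, image_w, image_h, image):
    """Project a ground point into a nadir-pointing pinhole camera
    and return the pixel color at that location (or None if the
    point falls outside the photo)."""
    dx = point[0] - camera_pos[0]
    dy = point[1] - camera_pos[1]
    dz = camera_pos[2] - point[2]        # camera height above the point
    u = int(image_w / 2 + focal_px * dx / dz)
    v = int(image_h / 2 + focal_px * dy / dz)
    if 0 <= u < image_w and 0 <= v < image_h:
        return image[v][u]               # (r, g, b) assigned to the point
    return None

# 2x2 "photo", camera 100 m above the ground, point offset 10 m east:
photo = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 0)]]
color = colorize_point([10.0, 0.0, 0.0], [0.0, 0.0, 100.0],
                       focal_px=5.0, image_w=2, image_h=2, image=photo)
# color == (255, 255, 0)
```

In a real system the camera's full pose comes from the same GNSS/IMU solution as the lidar, which is exactly the "tightly coupled, one unit" point made above.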

https://www.youtube.com/watch?v=f7EUnwJcRKs