Introduction. Occasionally we get questions about whether it is possible to use video frames as input for Structure from Motion (SfM) models. This has certainly been done before with good success. For example, Yuichi Hayakawa did it starting from news video of a landslide triggered by the Kumamoto, Japan earthquake in April 2016. Roman DiBiase showed me how he had done it with video from a helicopter, and even performed topographic differencing against lidar for the Big Sur Landslide (e.g., NPR site and USGS site).
So, in a fit of procrastination, I decided to play around with the process myself. I was motivated originally by an idea from Andrea Donnellan and others at JPL to do topography from satellite video. They wrote a report entitled Gazing at the Solar System: Capturing the Evolution of Dunes, Faults, Volcanoes, and Ice from Space and I worked with Andrea and her team some on the problem.
Video to images. The main generic challenge for SfM from video is extracting the video frames and preparing them for SfM processing. The SfM part is no different from what my group has been doing for a while with Agisoft Photoscan. I used MATLAB for the video processing. The script is here: readplanetvid.m. The main code bits include:
% Assumes videoname, number_of_frames, foldername, and projectname are already set.
PlanetObj = VideoReader(videoname); % make a video object from an MP4 file
vidWidth = PlanetObj.Width;   % get the width
vidHeight = PlanetObj.Height; % and height
mov = struct('cdata', zeros(vidHeight, vidWidth, 3, 'uint8'), ...
    'colormap', []); % set up a MATLAB structure array to hold the video frames
k = 1;
while hasFrame(PlanetObj)
    mov(k).cdata = readFrame(PlanetObj); % pull the frames out of the MP4 object one at a time
    k = k + 1;
end
nframes = k - 1; % k ends one past the last frame
step = max(1, floor(nframes/number_of_frames)); % how many frames to skip each time to get the desired number
for i = 1:step:nframes
    framepart = sprintf('_frame_%06d.png', i);
    filename = strcat(foldername, '/', projectname, framepart);
    imwrite(mov(i).cdata, filename) % easy to write the frame out as a PNG file
end
Satellite video from Terra Bella. I have been watching the high-resolution satellite activity with great interest. Skybox had a few relatively high-resolution (approximately 1 m ground resolution) visible and near-IR satellites with video capability. Skybox was acquired by Google and renamed Terra Bella, and the satellites are now owned by Planet (who were just visiting us on the ASU campus last week and with whom we are building some collaborations). Some of the Terra Bella imagery is available on YouTube. I grabbed one video of the Usak Mine in Turkey (I used RealPlayer to convert the YouTube video to MP4):
You can really see the parallax as the satellite moves over (not to mention the activity of the vehicles).
I ran my script on the mp4 and extracted 100 png frames. Here is an example:
You can see the model and the camera positions. They lie along roughly the correct arc, and relatively far away, but they should be much farther away (the orbital altitude is approximately 450 km).
Nice-looking textured mesh. It is distorted, but not too bad, all things considered! And here is the point cloud in CloudCompare.
What did we learn? We learned that SfM from video is doable (see a future post using phone video from my backyard). Here is the Photoscan report on the Usak project. The geometry computed from the satellite video is not bad. Agisoft Photoscan does a pretty good job. We cannot get under the hood very easily to see more about the processing, and someone who knows more about computer vision than I do could comment better on the performance. I think the main issue is probably the relatively low angular variation between the frames used in the model.
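The "low angular variation" point can be made concrete with the standard stereo depth-error relation, sigma_Z ≈ (Z/B) · sigma_x, where Z is the range, B the baseline, and sigma_x the image-matching error in ground units. The numbers below are illustrative assumptions (a 2 km baseline between nearby frames and a half-meter matching error at roughly 1 m ground resolution), not measured values from the Usak pass, and bundle adjustment over all 100 frames will do considerably better than any single pair:

```python
# Back-of-the-envelope depth precision for a weak stereo baseline.
# All numbers are assumptions for illustration, not measured values:
range_km = 450.0      # approximate distance to the ground (orbital altitude)
baseline_km = 2.0     # assumed baseline between two nearby frames
match_error_m = 0.5   # assumed matching error in ground units (~0.5 px at ~1 m GSD)

# Standard stereo relation: sigma_Z ~ (Z / B) * sigma_x
depth_error_m = (range_km / baseline_km) * match_error_m
print(f"base-to-height ratio: {baseline_km / range_km:.4f}")
print(f"depth uncertainty per frame pair: ~{depth_error_m:.0f} m")
```

With a base-to-height ratio this small, a half-pixel matching error maps to roughly 100 m of depth uncertainty per frame pair, which is consistent with the distortion visible in the mesh.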
Planet Team (2017). Planet Application Program Interface: In Space for Life on Earth. San Francisco, CA. https://api.planet.com.