Agisoft PhotoScan User Manual: Professional Edition, Version 1.2


PhotoScan allows you to export the obtained results and to save intermediate data in the form of project files at any stage of the process. If you are not familiar with the concept of projects, a brief description is given at the end of Chapter 3, General workflow. The manual also contains instructions on the PhotoScan installation procedure and basic rules for taking "good" photographs, i.e. images that provide the most appropriate source data for 3D reconstruction; for this information refer to Chapter 2, Capturing photos, and Chapter 1, Installation. Supported graphics devices include the NVIDIA GeForce 8xxx series and later.

PhotoScan is likely to be able to utilize the processing power of any OpenCL-enabled device during the Dense Point Cloud generation stage, provided that the OpenCL drivers for the device are properly installed.

However, because of the large number of various combinations of video chips, driver versions and operating systems, Agisoft is unable to test and guarantee PhotoScan’s compatibility with every device and on every platform.

The table below lists the currently supported devices (on the Windows platform only). We will pay particular attention to possible problems with PhotoScan running on these devices.

Table 1: Supported OpenCL devices. Using OpenCL acceleration with mobile or integrated graphics video chips is not recommended because of the low performance of such GPUs.

Start PhotoScan by running photoscan.exe.

Restrictions of the Demo mode

Once PhotoScan is downloaded and installed on your computer you can run it either in the Demo mode or in the full function mode.

On every start until you enter a serial number it will show a registration box offering two options: (1) use PhotoScan in the Demo mode or (2) enter a serial number to confirm the purchase. The first choice is selected by default, so if you are still exploring PhotoScan click the Continue button and PhotoScan will start in the Demo mode.

The employment of PhotoScan in the Demo mode is not time limited. Several functions, however, are not available in the Demo mode, notably saving the project and exporting the reconstruction results. To use PhotoScan in the full function mode you have to purchase it. On purchasing you will get the serial number to enter into the registration box on starting PhotoScan.

Once the serial number is entered the registration box will not appear again and you will get full access to all functions of the program.

Capturing photos

Before loading your photographs into PhotoScan you need to take them and select those suitable for 3D model reconstruction. Photographs can be taken by any digital camera (both metric and non-metric), as long as you follow some specific capturing guidelines.

This section explains general principles of taking and selecting pictures that provide the most appropriate data for 3D model generation.

Make sure you have studied the following rules and read the list of restrictions before you go out to shoot photographs.

Equipment

Use a digital camera with reasonably high resolution (5 MPix or more). Avoid ultra-wide angle and fisheye lenses; the best choice is a 50 mm focal length (35 mm film equivalent) lens, and it is recommended to stay within the 20 to 80 mm interval (in 35 mm equivalent). If a data set was captured with a fisheye lens, the appropriate camera sensor type should be selected in the PhotoScan Camera Calibration settings prior to processing.

Fixed lenses are preferred. If zoom lenses are used, the focal length should be set to either the maximal or the minimal value for the entire shooting session, for more stable results. Take images at the maximal possible resolution. ISO should be set to the lowest value, as higher ISO values induce additional noise in the images. The aperture value should be high enough to give sufficient depth of field: it is important to capture sharp, not blurred, photos.

Shutter speed should not be too slow, otherwise blur can occur due to slight movements. Avoid untextured, shiny, mirror-like or transparent objects; if you still have to capture shiny objects, shoot them under a cloudy sky. Avoid unwanted foregrounds. Avoid moving objects within the scene to be reconstructed.

Avoid absolutely flat objects or scenes.

Image preprocessing

PhotoScan operates with the original images, so do not crop or geometrically transform, i.e. resize or rotate, the images.

Number of photos: more than required is better than not enough. The number of "blind zones" should be minimized, since PhotoScan is able to reconstruct only geometry visible from at least two cameras. Each photo should effectively use the frame size: the object of interest should take up the maximum area.

In some cases portrait camera orientation should be used. Do not try to fit the whole object into the image frame; if some parts are missing it is not a problem, provided that those parts appear in other images. Good lighting is required to achieve better quality in the results, yet glare and specular highlights should be avoided.

It is recommended to remove sources of light from camera fields of view. Avoid using flash. If you are planning to carry out any measurements based on the reconstructed model, do not forget to locate at least two markers with a known distance between them on the object.

Alternatively, you could place a ruler within the shooting area. In the case of aerial photography, where a georeferencing task must be fulfilled, an even spread of ground control points (GCPs) — at least 10 across the area to be reconstructed — is required to achieve results of the highest quality, both in terms of geometrical precision and georeferencing accuracy.

The figures in the manual illustrate correct and incorrect capturing scenarios (e.g. for a facade).

A short list of typical reasons for photographs' unsuitability is given below.

Modifications of photographs

PhotoScan can process only unmodified photos, as they were taken by a digital photo camera.

Processing the photos which were manually cropped or geometrically warped is likely to fail or to produce highly inaccurate results.

Photometric modifications do not affect reconstruction results. Missing EXIF data: if the EXIF data is absent, PhotoScan assumes that the focal length in 35 mm equivalent equals 50 mm and tries to align the photos in accordance with this assumption. If the correct focal length value differs significantly from 50 mm, the alignment can give incorrect results or even fail.

In such cases it is required to specify initial camera calibration manually. The details of necessary EXIF tags and instructions for manual setting of the calibration parameters are given in the Camera calibration section.

Fisheye and ultra-wide angle lenses are poorly modeled by the common distortion model implemented in the PhotoScan software, so it is required to choose the proper camera type in the Camera Calibration dialog prior to processing; otherwise it is most unlikely that the processing results will be accurate.

General workflow

Processing of images with PhotoScan includes the following main steps: loading photos into PhotoScan; inspecting loaded images and removing unnecessary ones; aligning photos; building a dense point cloud; building a mesh (3D polygonal model); generating a texture; exporting the results.

If you are using PhotoScan in the full function (not the Demo) mode, intermediate results of the image processing can be saved at any stage in the form of project files and can be used later. The concept of projects and project files is briefly explained in the Saving intermediate results section.

The list above represents all the necessary steps involved in the construction of a textured 3D model from your photos. Some additional tools, which you may find to be useful, are described in the successive chapters.
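For users who prefer to script this workflow, PhotoScan Professional also exposes each of these steps through its built-in Python console. The sketch below is a minimal pipeline based on the 1.2-era Python API; the file paths are placeholders, and since enum and method names changed between releases (and again in Metashape), treat it as an illustration rather than a definitive script.

```python
# Minimal PhotoScan Professional (1.2-era) Python pipeline.
# Run inside PhotoScan's Python console; the PhotoScan module
# is only available there, not via pip.
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()

# Step 1: load photos (placeholder paths).
chunk.addPhotos(["IMG_0001.JPG", "IMG_0002.JPG", "IMG_0003.JPG"])

# Step 2: align photos (feature matching + camera estimation).
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.GenericPreselection)
chunk.alignCameras()

# Step 3: build the dense point cloud.
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality,
                      filter=PhotoScan.AggressiveFiltering)

# Step 4: build the mesh from the dense cloud.
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 source=PhotoScan.DenseCloudData,
                 face_count=PhotoScan.MediumFaceCount)

# Step 5: build the texture.
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

# Step 6: save the project (full function mode only; disabled in Demo).
doc.save("project.psz")
```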

Preferences settings

Before starting a project with PhotoScan it is recommended to adjust the program settings to your needs. In the Preferences dialog (General tab), available through the Tools menu, you can indicate the path to the PhotoScan log file, to be shared with the Agisoft support team should you face any problems during processing. Here you can also change the GUI language to the one that is most convenient for you.

On the OpenCL tab you can enable GPU processing; PhotoScan exploits GPU processing power to speed up the process significantly. If you have decided to switch on GPUs for photogrammetric data processing with PhotoScan, it is recommended to free one physical CPU core per active GPU for overall control and resource management tasks.

Loading photos

Before starting any operation it is necessary to indicate which photos will be used as the source for 3D reconstruction. In fact, photographs themselves are not loaded into PhotoScan until they are needed; so, when you "load photos" you only indicate the photographs that will be used for further processing.

To load a set of photos:

1. Select the Add Photos command from the Workflow menu or click the Add Photos toolbar button on the Workspace pane.

2. In the Add Photos dialog box browse to the folder containing the images and select the files to be processed, then click the Open button.

Selected photos will appear on the Workspace pane. Photos in any unsupported format will not be shown in the Add Photos dialog box; to work with such photos you will need to convert them into one of the supported formats.

If you have loaded some unwanted photos, you can easily remove them at any moment.

To remove unwanted photos:

1. On the Workspace pane select the photos to be removed.

2. Right-click on the selected photos and choose the Remove Items command from the context menu, or click the Remove Items toolbar button on the Workspace pane. The selected photos will be removed from the working set.

Camera groups

If all the photos, or a subset of photos, were captured from one camera position (a camera station), then for PhotoScan to process them correctly it is obligatory to move those photos to a camera group and mark the group as Camera Station.

It is important that for all the photos in a Camera Station group the distances between camera centers are negligibly small compared with the minimal camera-to-object distance. However, it is possible to export a panoramic picture for the data captured from a single camera station. Refer to the Exporting results section for guidance on panorama export.

To move photos to a camera group:

1. On the Workspace pane or Photos pane select the photos to be moved.

2. Right-click on the selected photos and choose the Move Cameras – New Camera Group command from the context menu.

A new group will be added to the active chunk structure and the selected photos will be moved to that group.

To mark a group as a camera station, right-click on the camera group name and select the Set Group Type command from the context menu.

Inspecting loaded photos

Loaded photos are displayed on the Workspace pane along with flags reflecting their status. The following flags can appear next to the photo name:

NC (Not calibrated): notifies that the available EXIF data is not sufficient to estimate the camera focal length. In this case PhotoScan assumes that the corresponding photo was taken using a 50 mm lens (35 mm film equivalent). More details on manual camera calibration can be found in the Camera calibration section.

NA (Not aligned): notifies that external camera orientation parameters have not yet been estimated for the current photo. Images loaded into PhotoScan will not be aligned until you perform the next step – photo alignment. A group flag notifies that the Camera Station type was assigned to the group.

Multispectral imagery

PhotoScan supports processing of multispectral images saved as multichannel (single page) TIFF files. The main processing stages for multispectral images are performed based on the master channel, which can be selected by the user.

During orthophoto export, all spectral bands are processed together to form a multispectral orthophoto with the same bands as in source images. The overall procedure for multispectral imagery processing does not differ from the usual procedure for normal photos, except the additional master channel selection step performed after adding images to the project.

For the best results it is recommended to select the spectral band which is sharp and as detailed as possible.

To select the master channel:

1. Add multispectral images to the project using the Add Photos command.

2. Select the Set Master Channel command from the chunk context menu on the Workspace pane.

3. In the Set Master Channel dialog select the desired channel. The display of images in the PhotoScan window will be updated according to the master channel selection.

Note: in the Set Master Channel dialog you can either indicate only one channel to be used as the basis for photogrammetric processing, or leave the parameter value as Default, in which case all three channels will be used in processing.

When exporting in other formats, only the master channel will be saved.

Aligning photos

Once photos are loaded into PhotoScan, they need to be aligned. At this stage PhotoScan finds the camera position and orientation for each photo and builds a sparse point cloud model.

To align a set of photos:

1. Select the Align Photos command from the Workflow menu.

2. In the Align Photos dialog box select the desired alignment options and click the OK button when done.

3. The progress dialog box will appear displaying the current processing status.

To cancel processing click the Cancel button. You can then inspect the alignment results and remove incorrectly positioned photos, if any. To see the matches between any two photos use the View Matches command from a photo context menu. Incorrectly positioned photos can be realigned.

To realign a subset of photos:

1. Reset the alignment for incorrectly positioned cameras using the Reset Camera Alignment command from the photo context menu.

2. Set markers (at least 4 per photo) on these photos and indicate their projections on at least two photos from the already aligned subset.

PhotoScan will consider these points to be true matches. (For information on marker placement refer to the Setting coordinate system section.)

3. Select the photos to be realigned and use the Align Selected Cameras command from the photo context menu.

When the alignment step is completed, the point cloud and estimated camera positions can be exported for processing in other software if needed.

Image quality

Poor input, e.g. blurred photos, can influence the alignment results badly. To help you exclude poorly focused images from processing, PhotoScan offers an automatic image quality estimation feature. Images with a quality value of less than 0.5 units are recommended to be disabled; to disable a photo use the Disable button on the Photos pane toolbar. PhotoScan estimates image quality for each input image; the value of the parameter is calculated based on the sharpness level of the most focused part of the picture.

To estimate image quality:

1. Switch to the detailed view in the Photos pane using the Details command from the Change menu on the Photos pane toolbar.

2. Select all photos to be analyzed on the Photos pane.

3. Right-click on the selected photo(s) and choose the Estimate Image Quality command from the context menu.

Once the analysis procedure is over, a figure indicating the estimated image quality value will be displayed in the Quality column on the Photos pane.
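The same check can be scripted. The following sketch mirrors Agisoft's published 1.x scripting examples; the estimateImageQuality method and the "Image/Quality" metadata key are taken from those examples and may differ in other versions, so treat them as assumptions.

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk  # active chunk

# Estimate quality for every camera, then disable those below the
# 0.5 threshold recommended above ("Image/Quality" key as used in
# Agisoft's sample scripts; an assumption for this exact version).
chunk.estimateImageQuality(chunk.cameras)
for camera in chunk.cameras:
    quality = float(camera.photo.meta["Image/Quality"])
    if quality < 0.5:
        camera.enabled = False
        print("Disabled {} (quality {:.2f})".format(camera.label, quality))
```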

Alignment parameters

The following parameters control the photo alignment procedure and can be modified in the Align Photos dialog box:

Accuracy: a higher accuracy setting helps to obtain more accurate camera position estimates, while a lower accuracy setting can be used to get rough camera positions in a shorter period of time.

In the Generic preselection mode the overlapping pairs of photos are selected by first matching the photos using a lower accuracy setting. In the Reference preselection mode the overlapping pairs of photos are selected based on the measured camera locations (if present). For oblique imagery it is recommended to set the Ground altitude value in the Settings dialog of the Reference pane to make the preselection procedure more efficient.

The ground altitude information must be accompanied by yaw, pitch and roll data for the cameras, also input in the Reference pane. Additionally, the following advanced parameters can be adjusted.

Key point limit: the number indicates the upper limit of feature points on every image to be taken into account during the current processing stage.

Using a zero value allows PhotoScan to find as many key points as possible, but it may result in a large number of less reliable points.

Tie point limit: the number indicates the upper limit of matching points for every image.

Using a zero value does not apply any tie point filtering.

Constrain features by mask: when this option is enabled, features detected in the masked image regions are discarded. For additional information on the usage of masks please refer to the Using masks section.

Note: the Tie point limit parameter allows you to optimize performance for the task and does not generally affect the quality of the final model; however, too high or too low a tie point limit value may cause some parts of the dense point cloud model to be missed.

The reason is that PhotoScan generates depth maps only for pairs of photos for which the number of matching points is above a certain limit. As a result the sparse point cloud will be thinned, yet the alignment will be kept unchanged.
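In the Python API the two limits correspond to the keypoint_limit and tiepoint_limit arguments of matchPhotos. The sketch below uses 1.2-era names; the numeric values are common dialog defaults given for illustration, not prescriptions:

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# A value of 0 disables the corresponding limit entirely.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.GenericPreselection,
                  keypoint_limit=40000,   # upper bound on features per image
                  tiepoint_limit=4000)    # upper bound on matches per image
chunk.alignCameras()
```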

Point cloud generation based on imported camera data

PhotoScan supports the import of external and internal camera orientation parameters. Thus, if precise camera data is available for the project, it is possible to load it into PhotoScan along with the photos, to be used as initial information for the 3D reconstruction job.

To import external and internal camera parameters:

1. Select the Import Cameras command from the Tools menu.

2. Select the format of the file to be imported.

3. Browse to the file and click the Open button.

The data will be loaded into the software. Camera calibration data can be inspected in the Camera Calibration dialog (Adjusted tab), available from the Tools menu.

If the input file contains some reference data (camera position data in some coordinate system), the data will be shown on the Reference pane (View Estimated tab). Once the data is loaded, PhotoScan will offer to build the point cloud. This step involves feature point detection and matching procedures. As a result, a sparse point cloud – a 3D representation of the tie-point data – will be generated.

Parameters controlling the Build Point Cloud procedure are the same as the ones used at the Align Photos step (see above).

Building dense point cloud

PhotoScan allows you to generate and visualize a dense point cloud model. Based on the estimated camera positions the program calculates depth information for each camera, to be combined into a single dense point cloud.

PhotoScan tends to produce dense point clouds which are almost as dense as, if not denser than, LIDAR point clouds. A dense point cloud can be edited and classified within the PhotoScan environment or exported to an external tool for further analysis.

To build a dense point cloud:

1. Check the reconstruction volume bounding box. To adjust the bounding box use the Resize Region and Rotate Region toolbar buttons: rotate the bounding box and then drag the corners of the box to the desired positions.

2. Select the Build Dense Cloud command from the Workflow menu.

3. In the Build Dense Cloud dialog box select the desired reconstruction parameters and click OK.

Reconstruction parameters

Quality: specifies the desired reconstruction quality. Higher quality settings can be used to obtain more detailed and accurate geometry, but they require longer processing times. The meaning of the quality levels is similar to that of the accuracy settings at the alignment step; the only difference is that in this case the Ultra High quality setting means processing of the original photos, while each following step implies preliminary image size downscaling by a factor of 4 (2 times by each side).

Depth filtering modes

At the dense point cloud generation stage PhotoScan calculates depth maps for every image. Due to some factors, like poor texture of some elements of the scene or noisy and badly focused images, there can be outliers among the points. To sort out the outliers PhotoScan has several built-in filtering algorithms that answer the challenges of different projects. If the area to be reconstructed does not contain meaningful small details, it is reasonable to choose the Aggressive depth filtering mode to sort out most of the outliers; conversely, the Mild depth filtering mode is recommended when the scene does contain small details that need to be preserved.

Moderate depth filtering mode brings results that are in between the Mild and Aggressive approaches; you can experiment with the setting if you have doubts about which mode to choose. Additionally, depth filtering can be Disabled, but this option is not recommended, as the resulting dense cloud could be extremely noisy.
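For reference, the Quality and Depth filtering choices map onto the buildDenseCloud call in the Python API roughly as follows (1.2-era enum names, given as an illustration):

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# quality: UltraQuality processes the original images; each lower
# level (High, Medium, Low, Lowest) downscales by a factor of 4.
# filter: MildFiltering / ModerateFiltering / AggressiveFiltering,
# or NoFiltering to disable depth filtering (noisy, as noted above).
chunk.buildDenseCloud(quality=PhotoScan.HighQuality,
                      filter=PhotoScan.ModerateFiltering)
```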

Building mesh

To build a mesh:

1. If the Height field reconstruction method is to be applied, it is important to control the position of the red side of the bounding box: it defines the reconstruction plane. In this case make sure that the bounding box is correctly oriented.

2. Select the Build Mesh command from the Workflow menu.

3. In the Build Mesh dialog box select the desired reconstruction parameters and click OK.

Reconstruction parameters

PhotoScan supports several reconstruction methods and settings, which help to produce optimal reconstructions for a given data set.

Surface type: the Arbitrary surface type can be used for modeling any kind of object. It should be selected for closed objects such as statues, buildings, etc. It makes no assumptions about the type of the object modeled, which comes at the cost of higher memory consumption.

The Height field surface type is optimized for modeling planar surfaces, such as terrains or bas-reliefs. It should be selected for aerial photography processing, as it requires a lower amount of memory and allows processing of larger data sets.

Source data: specifies the source for the mesh generation procedure. Sparse cloud can be used for fast 3D model generation based solely on the sparse point cloud; the Dense cloud setting will result in longer processing time but will generate high-quality output based on the previously reconstructed dense point cloud.

Polygon count: specifies the maximum number of polygons in the final mesh. The suggested values (High, Medium, Low) are calculated based on the number of points in the previously generated dense point cloud; they present the optimal number of polygons for a mesh of the corresponding level of detail.

It is still possible for the user to indicate a custom target number of polygons for the final mesh, through the Custom value of the Polygon count parameter.

Please note that while too small a number of polygons is likely to result in an overly rough mesh, a very large custom number (over 10 million polygons) is likely to cause model visualization problems in external software.

Interpolation: if the interpolation mode is Disabled, it leads to accurate reconstruction results, since only areas corresponding to dense point cloud points are reconstructed.

Manual hole filling is usually required at the post-processing step. With the Enabled (default) interpolation mode PhotoScan will interpolate some surface areas within a circle of a certain radius around every dense cloud point. As a result some holes can be automatically covered.

Yet some holes can still be present on the model and have to be filled at the post-processing step. The Enabled (default) setting is recommended for orthophoto generation. In the Extrapolated mode the program generates a hole-free model with extrapolated geometry.

Large areas of extra geometry might be generated with this method, but they can easily be removed later using the selection and cropping tools.

Point classes: specifies the classes of the dense point cloud to be used for mesh generation.

Preliminary dense cloud classification should be performed for this option of mesh generation to be active.
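Scripted mesh generation exposes the same parameters. The sketch below uses 1.2-era names; the preset face-count enums stand in for the High/Medium/Low dialog values, and custom counts or point-class selection may need version-specific calls:

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# surface: Arbitrary for closed objects, HeightField for terrains.
# source: DenseCloudData for high quality, PointCloudData (sparse)
#         for a fast preview.
# interpolation: EnabledInterpolation is the default hole-covering mode.
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 source=PhotoScan.DenseCloudData,
                 interpolation=PhotoScan.EnabledInterpolation,
                 face_count=PhotoScan.HighFaceCount)
```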

Note: PhotoScan tends to produce 3D models with excessive geometry resolution, so it is recommended to perform mesh decimation after geometry computation. More information on mesh decimation and other 3D model geometry editing tools is given in the Editing model geometry section.

Building model texture

To generate 3D model texture:

1. Select the Build Texture command from the Workflow menu.

2. Select the desired texture generation parameters in the Build Texture dialog box and click OK.

Texture mapping modes

The texture mapping mode determines how the object texture will be packed in the texture atlas. Proper texture mapping mode selection helps to obtain optimal texture packing and, consequently, better visual quality of the final model.

Generic: the default mode; no assumptions regarding the type of the scene to be processed are made, and the program tries to create as uniform a texture as possible.

Adaptive orthophoto: in the Adaptive orthophoto mapping mode the object surface is split into the flat part and vertical regions.

The flat part of the surface is textured using the orthographic projection, while vertical regions are textured separately to maintain an accurate texture representation in such regions. In the Adaptive orthophoto mapping mode the program tends to produce a more compact texture representation for nearly planar scenes, while maintaining good texture quality for vertical surfaces, such as the walls of buildings.

Orthophoto: in the Orthophoto mapping mode the whole object surface is textured in the orthographic projection. The Orthophoto mapping mode produces an even more compact texture representation than the Adaptive orthophoto mode, at the expense of texture quality in vertical regions.

Spherical: the Spherical mapping mode is appropriate only for a certain class of objects that have a ball-like form. It allows a continuous texture atlas to be exported for this type of object, so that it is much easier to edit later.

When generating texture in the Spherical mapping mode it is crucial to set the bounding box properly. The whole model should be within the bounding box. The red side of the bounding box should be under the model; it defines the axis of the spherical projection. The marks on the front side determine the 0 meridian.

Single photo: the Single photo mapping mode allows you to generate texture from a single photo. The photo to be used for texturing can be selected from the 'Texture from' list.

Keep uv: the Keep uv mapping mode generates the texture atlas using the current texture parametrization. It can be used to rebuild the texture atlas at a different resolution or to generate the atlas for a model parametrized in external software.

Texture generation parameters

The following parameters control various aspects of texture atlas generation:

Texture from (Single photo mapping mode only): specifies the photo to be used for texturing; available only in the Single photo mapping mode.

Blending mode (not used in Single photo mode): selects how pixel values from different photos will be combined in the final texture.

Mosaic – gives better quality for the orthophoto and texture atlas than Average mode, since it does not mix image details of overlapping photos but uses the most appropriate photo (i.e. the one where the pixel in question lies closest to the image center). Mosaic texture blending mode is especially useful for orthophoto generation based on an approximate geometric model.

Average – uses the average value of all pixels from the individual photos.

Max Intensity – the photo which has the maximum intensity of the corresponding pixel is selected.

Min Intensity – the photo which has the minimum intensity of the corresponding pixel is selected.

Texture size/count: specifies the size of the texture atlas and the number of files to export it to. Exporting texture to several files allows you to achieve a greater resolution for the final model texture, while export of a high resolution texture to a single file can fail due to RAM limitations.

Enable color correction: the feature is useful for processing data sets with extreme brightness variation. However, please note that the color correction process takes quite a long time, so it is recommended to enable the setting only for data sets that have proved to produce poor-quality results.
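The texture parameters map onto two scripted calls: buildUV chooses the mapping mode, and buildTexture the blending, atlas size and colour correction (again 1.2-era names, given as an illustration):

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Mapping mode is fixed when the UV atlas is generated; blending,
# atlas resolution and colour correction apply when rendering it.
chunk.buildUV(mapping=PhotoScan.AdaptiveOrthophotoMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending,
                   color_correction=False,  # enable only for data sets with
                                            # extreme brightness variation
                   size=4096)               # atlas width/height in pixels
```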

To improve the resulting texture quality it may be reasonable to exclude poorly focused images from processing at this step. PhotoScan offers an automatic image quality estimation feature: image quality is estimated as the relative sharpness of a photo with respect to the other images in the data set (see the Image quality subsection above).

Saving intermediate results

Certain stages of 3D model reconstruction can take a long time; the full chain of operations could easily last for hours when building a model from hundreds of photos.

It is not always possible to finish all the operations in one run. PhotoScan allows you to save intermediate results in a project file. PhotoScan project files may contain the following information: a list of loaded photographs with reference paths to the image files.

Photo alignment data, such as information on camera positions, the sparse point cloud model and the set of refined camera calibration parameters for each calibration group.

The distance from the survey station to each object could then either be measured directly (radiation plane tabling), or a new survey station could be set up and the distances to objects calculated using intersecting rays from both survey stations (intersection plane tabling) (Burtch).

Figure 1: Basic setup for plane table surveying.

Figure 2: Plane Table Photogrammetry survey of a historic building, conducted by Meydenbauer.

Furthermore, using kites and balloons, Laussedat was also one of the first people to experiment with aerial photography. A young architect, Meydenbauer began documenting important public buildings using photogrammetry, making him the first person to adopt the technique for cultural heritage purposes (Figure 2) (Albertz; Burtch).

Over the course of this early period the method was further refined, improved instruments were developed and new materials such as roll film were introduced. However, because of the need to level the camera above a known point, photogrammetry remained earth-bound and could not be used from the air.

Furthermore, while highly accurate results could be achieved, the method was quite difficult to implement and very time-consuming. As a result photogrammetry never really broke through as a mainstream surveying technique, and it was instead used primarily on inaccessible terrain as a supplement to regular ground surveys Burtch, , pp. The technique came to fruition, in part, again thanks to significant developments made in other research fields. These included the invention of the floating mark by Franz Stolze in , the widespread development of stereoscopy in the late 19th century and the invention of the airplane by the brothers Orville and Wilbur Wright in Burtch, , pp.

As opposed to the theodolite-like approach used in Plane Table Photogrammetry, Stereo Photogrammetry was based on a new principle: the concept of parallax. In order to understand how parallax is useful to photogrammetry, imagine you are driving a car on a straight road. On your left, next to the road, there is a tree, and in the distance, behind the tree, you see a mountain (Figure 3).

You take a picture of this landscape at two points on the road, several metres apart, while facing in the exact same direction. Comparing the two pictures, the nearby tree will appear to have shifted considerably against the background, while the distant mountain will hardly have moved at all. In other words, the parallax between the two representations of the tree is greater than the parallax between the two representations of the mountain.

This example illustrates that the amount of parallax is inversely related to the distance of the objects from the viewpoints: the closer an object, the larger its parallax. Measuring the parallax between partially overlapping pictures therefore allows photogrammetrists to calculate the distance of the objects in the pictures relative to the position from which the pictures were taken.
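This relation can be made explicit for the idealised "normal case" of two parallel exposures. The following is the standard stereo-photogrammetry formula, added here for illustration rather than quoted from the sources above: with focal length $f$, baseline $B$ (the distance between the two camera positions) and measured parallax $p$, the distance $Z$ to an object is

$$Z = \frac{f \cdot B}{p}.$$

The nearby tree, with its large parallax $p$, yields a small $Z$; the mountain's tiny parallax corresponds to a large one.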

Human sight is based on this same principle: our brain receives two slightly different images of the same scene from our left and right eyes respectively, allowing us to perceive depth (Figure 4) (Konecny).

Figure 3: Diagram illustrating the principle of parallax.

Figure 4: Human stereo vision.

There is some debate as to who was the first person to apply the concept of parallax to photogrammetry: the German physicist Carl Pulfrich and the South African researcher Henry George Fourcade both independently developed a Stereo Photogrammetry device. While these instruments were capable of producing highly accurate results, they still required a lot of computation on the part of the photogrammetry operator, and could only be used for terrestrial images.

Despite its atrocities, World War I was also accompanied by great advances in both photography and aviation (Burtch). In these years the foundations of aerial Stereo Photogrammetry, as they would stand throughout most of the 20th century, were elaborated. The technique very rapidly grew in importance, and within a couple of decades photogrammetry had become the mapping method of choice in most western nations.

In the years after World War II various new instruments were again introduced, each one functioning slightly differently, being slightly easier to use or slightly more economical than the last (Burtch).

However, the overall photogrammetry methodology remained largely unchanged. Essentially, aerial Stereo Photogrammetry involved first taking at least two photos from an airplane, as perpendicular as possible to the ground (i.e. vertical photographs). Two consecutive photos, functioning as a stereo-pair, then had to be oriented correctly on the stereoplotter: this involved (a) interior orientation, meaning the pictures had to be oriented to compensate for their respective camera directions and focal lengths, (b) relative orientation, meaning both pictures were aligned to overlap, and (c) absolute orientation, meaning the pictures were positioned on a map in reference to known ground control points.

In the plotting device floating marks were positioned above both photographs. Using stereoscopic viewing the photogrammetrist could then see the scene in 3D, with both floating marks perceived as a single dot above the 3D terrain. By moving that dot along contours and features, the operator could trace elements of the landscape automatically onto a sheet of paper (Figure 6).

The distance between both floating marks indicated the amount of parallax between both images, which in turn allowed the operator to determine the height of the features he was tracing (Burtch).

Figure 6: Photogrammetry operator plotting aerial images on a mechanical stereoplotter.

However, while it was a vast improvement over previous methods, Analogue Stereo Photogrammetry remained very expensive, requiring specialised technicians to operate dedicated equipment such as calibrated cameras, purpose-built airplanes and cumbersome stereoplotters (Linder).

Analytical Photogrammetry

Whereas Analogue Stereo Photogrammetry focused on producing mechanical devices capable of creating photogrammetric plans from stereo-photographs, Analytical Photogrammetry concerned itself first and foremost with describing the mathematical principles behind photogrammetry.

In particular, Analytical Photogrammetry made extensive use of the tools of matrix algebra and least squares adjustment (Konecny). Sebastian Finsterwalder published early papers discussing the mathematics of intersecting rays and image orientation. Several decades later Otto von Gruber described orientation changes of cameras numerically and developed equations and differentials for projective geometry.

Similarly, Earl Church derived analytical formulas to describe photogrammetric problems such as space resection, image orientation, intersection, rectification and control (Burtch). At the time, their research was of little consequence. The mathematical algorithms they developed were highly complex and therefore too time-consuming to actually calculate; they could not be efficiently implemented into a photogrammetry workflow.

Accordingly, for Analytical Photogrammetry to actually become useful, a necessary prerequisite was the development of a device that could rapidly carry out a whole sequence of complex calculations.

As computers became increasingly accessible, Uuno Vilho Helava patented the first analytical plotter. At this point analytical plotters still used analogue pictures, and were operated in much the same way as analogue stereoplotters. Major advantages of this approach were (a) increased workflow automation and (b) improved accuracy thanks to statistical testing and least squares adjustments (Burtch).

The formulas developed in Analytical Photogrammetry also resulted in the first practical procedure for producing orthophotos (Konecny). Orthophotos are photographs without geometric distortion, such that their scale is uniform throughout the picture. This development is significant because it meant that for the first time photogrammetry was capable of producing a result other than line drawings.

Compared to simple line drawings, orthophotos have the advantage that they contain not only the geometric size, shape and position of objects, but also their colour, tone and texture. Over the years many additional mathematical models were developed, generally improving upon earlier formulas for space resection, orientation and control extension. Another significant development was the introduction of computer screens, which allowed photogrammetric image coordinates to be plotted digitally, opening up new possibilities such as GIS integration (Konecny).

Figure 7: Example of an analytical plotter.

Analytical Photogrammetry ushered in a new era in photogrammetry, one where map coordinates were defined not mechanically but mathematically. This new, more rigorous photogrammetry was capable of processing increasingly complex image sets and allowed for increased automation.

On the other hand, just like Analogue Stereo Photogrammetry, Analytical Photogrammetry did remain a very technical discipline, requiring specialised training and dedicated equipment (Figure 7) (Linder).

Digital Photogrammetry

First experimentally realised by John Sharp of IBM, Digital Photogrammetry required the development of better computers in order to really become practical.

People then started scanning analogue film in order to apply digital processes, and eventually the first commercially affordable true digital cameras became available (Burtch). While the transition from analogue to digital may sound like a benign technical formality, digital imaging transformed photogrammetry in more ways than one.

After digital photographing, the pictures were instantly available, therefore significantly reducing image acquisition time and allowing operators to identify and resolve possible issues with the picture sequence on the spot.

Even more importantly, while Digital Photogrammetry was still based on the same mathematical formulas as Analytical Photogrammetry, it could be performed on a computer, thereby eliminating the need for specialised plotting devices. The result was that traditional producers of photogrammetric instruments lost their monopoly on the market, and photogrammetry therefore became a lot more affordable (Linder).

At this point photogrammetry had been continuously evolving for well over a century. From its roots as a small niche within terrestrial surveying, it grew to be the primary mapping technique in the age of aviation. At the heart of the discipline was the principle of parallax, a principle which was first implemented mechanically and later elaborated and improved upon mathematically.

Successive phases of photogrammetry saw the introduction of new theoretical frameworks but also of novel technologies such as airplanes, computers and digital cameras. All these developments contributed to a science which was increasingly accurate, increasingly automated and increasingly accessible. Nevertheless the photogrammetric workflow remained technical and tedious: pictures had to be taken in a specific fashion and with pre-calibrated cameras, consecutive images then had to be positioned and aligned in the correct order and finally each and every point of interest had to be manually identified in at least two overlapping photographs.

This lengthy technical procedure limited the range of useful applications, and in practice photogrammetry was used almost exclusively for mapping purposes. All of this would soon change with the introduction of a new type of photogrammetry. Interestingly, unlike its predecessors, it did not develop from research within the discipline of photogrammetry itself; rather it was the product of a much younger field of study: Computer Vision.

The Field of Computer Vision

In the 1960s researchers were optimistic about the prospect of creating intelligent machines: robots that could mimic human behaviour, that were capable of understanding and interacting with the world around them.

The problem has turned out to be far more complicated than expected, and now — nearly 50 years later — real artificial perception remains elusive.

Nonetheless, computer vision does have numerous real-life applications in various fields, including surveillance, industrial inspection, medical imaging and motion capture. It is the science behind technologies such as face detection (which allows your camera, smartphone or, for instance, Facebook to automatically detect and even recognise faces), self-driving cars (computer vision helps these vehicles navigate obstacles), photo stitching (where multiple pictures of a scene are stitched together in order to create a panoramic view of that scene) and automated fingerprint matching (Huang).

Computer Vision Photogrammetry

So, how does any of this relate to photogrammetry? Well, from the very beginning computer vision researchers realised that if robots were really going to interpret a scene, they would first need to have an understanding of the three-dimensional structure of that scene.

Readers may have noted that this is a near-perfect description of Analytical and later Digital Photogrammetry. Interestingly however, until quite recently there appears to have been little or no interaction between the field of computer vision on the one hand and photogrammetry on the other. However, since some older photogrammetry approaches could also process multiple images at a time, and work with pictures taken at close-range, I believe Computer Vision Photogrammetry is a less ambiguous name.

Over the past few decades computer vision researchers have developed a wide range of possible methods for extracting 3D information from imagery. Since Structure from Motion is also an essential part of the photogrammetry methods explored in this thesis, I will discuss it in a bit more detail. The process starts from a set of partially overlapping photographs of a scene. Using feature detection algorithms, the computer then detects thousands of points that stand out in each picture, and compares these with the points that stand out in every other picture in order to find matching feature points (Verhoeven).
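To make the idea concrete, the snippet below runs a basic feature detection and matching pass with the open-source OpenCV library. OpenCV is not the software discussed in this thesis, and ORB is just one of many possible feature detectors; the file names are placeholders.

```python
import cv2

# Two overlapping views of the same scene (hypothetical file names).
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect up to 5000 distinctive keypoints per image and compute
# a binary descriptor for each one.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance; cross-checking keeps
# only mutual best matches, a simple way to reject spurious pairs.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate feature correspondences found")
```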

If enough matching features are found, then using camera auto-calibration algorithms (Figure 9) the computer can calculate the intrinsic calibration parameters, such as the principal point, focal distance and skew, of the camera used to take the pictures.

This information is essential in order to compensate for any possible lens distortion and in order to determine the position of the focal point of each picture.

Figure 8: Matching features detected in two pictures of the same monument, taken from different perspectives. Feature detection algorithms allow software to automatically detect and match features in overlapping pictures.

Figure 9: Normal picture with perspective distortion (left) compared to rectified picture without distortion (right).

Camera auto-calibration algorithms allow software to calculate and compensate for the perspective distortion that is inherently present in most pictures.

Figure 10: Diagram illustrating the concept of feature-based alignment using intersecting rays.

The 3D position of each matched feature point can then be determined. This is based on the principle of intersecting rays: essentially, using at least three pictures with a matching feature point, a ray is projected from the focal point of each picture through the detected feature point.

The place where these rays intersect then determines the 3D coordinate of the detected feature point. When this process is repeated for all feature points in the dataset, the result is a sparse point cloud (Figure 11), which is a 3D approximation of the scene in the pictures (Verhoeven). The procedure discussed so far, from feature detection to sparse point cloud generation, is what is generally referred to as Structure from Motion.

However, these Structure from Motion results can be further processed in order to create even more detailed 3D models. Typically the next step is to find additional matching features across pictures. If additional matching points are detected in at least three images, they can again be inserted into the 3D model using the principle of intersecting rays. However by now, even if a matching feature point is detected in just two images, its 3D coordinates can still be calculated; essentially, since the position of the two images is already known, just like in traditional photogrammetry the distance of the feature point relative to the two images can be determined using the principle of parallax.

If all of these additional feature points are added to the existing 3D model, the result is a much more detailed dense point cloud (Figure 11) (Verhoeven).

Figure 11: A rock modelled in 3D using the photogrammetry workflow discussed in this chapter. The different steps in the reconstruction pipeline, from sparse point cloud to textured mesh, are shown.

The dense point cloud can then be converted into a continuous surface mesh. Lastly, the original pictures can be projected onto that surface in a process called texture mapping, in order to create the final textured mesh of the model (Figure 11). In much the same way that orthophotos contain more information than line drawings, textured meshes have certain advantages over point clouds or simple surface meshes: they contain not only the geometric shape of the scene but also every colour, tone or texture detail contained in the original pictures.

In photogrammetry, a good textured mesh of a scene is therefore the closest possible 3D approximation we can make of the actual scene (Verhoeven).

Implications

It is quite clear that Computer Vision Photogrammetry offers distinct advantages over previous photogrammetry approaches, the main one being increased automation.

The overall result is a photogrammetry workflow which is capable of processing more complex datasets in less time in order to produce more detailed 3D models than ever before. Nevertheless, ever since the introduction of the first commercially available underwater cameras in the mid-20th century, researchers have also been experimenting with underwater applications.

In fact, within maritime archaeology the first uses of photogrammetry date back to as early as the 1960s, and the method has been successfully used for underwater archaeological site recording on numerous occasions since. In this chapter I therefore want to have a closer look at these early uses of Analytical and later Digital Photogrammetry within the field of underwater archaeology: what are the specific challenges faced when using photogrammetry in the underwater environment, and how successfully have these been overcome in the past?

Why did certain archaeologists choose photogrammetry over traditional recording methods? Were they satisfied with the results obtained using this technique? Where did photogrammetry excel, and where might there still be room for improvement — potentially through the use of Computer Vision Photogrammetry?

Photogrammetry Challenges Underwater

In comparison to traditional land-based or aerial photogrammetry, underwater photogrammetry faces several unique challenges.

The main factors complicating the use of photogrammetry in the underwater environment are light refraction, light absorption and limited visibility.

Light Refraction

When light rays pass from one medium to another, for instance from air to water, they bend; this is known as refraction.

Figure: Diagram illustrating the phenomenon of light refraction.

Figure: Experiment illustrating perspective distortion.

Light refraction is problematic for photogrammetry because it causes perspective distortion in underwater pictures, which is hard to compensate for during photogrammetric processing. Under water, light typically passes through several mediums before reaching the camera sensor: the water itself, the glass port of the camera housing and the air inside the housing. These different mediums all have different refractive indexes and — to further complicate matters — the refractive index of water varies depending on pressure, temperature and salinity.
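The bending at each interface follows Snell's law, a standard optics relation added here for reference: with refractive indices $n_1$ and $n_2$ on either side of the interface, and $\theta_1$, $\theta_2$ the angles of the ray to the surface normal,

$$n_1 \sin\theta_1 = n_2 \sin\theta_2 .$$

Because the refractive index of water (roughly 1.33, against about 1.0 for air) itself shifts with pressure, temperature and salinity, no single fixed correction can fully remove the resulting distortion.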

Light Absorption

As light travels through water, it is gradually absorbed. This absorption is caused by interactions between light radiation and H2O molecules, and will therefore occur even in crystal-clear water. The absorption of light in water is also selective, meaning various light frequencies are affected differently: the longer wavelengths in our visible spectrum are absorbed much faster than the shorter ones. Consequently, as you dive deeper, reds, oranges, yellows, greens and eventually even blues are gradually absorbed. Note that the actual depth at which various wavelengths are absorbed may vary depending on the underwater conditions.

In underwater photography, light absorption results in pictures which are not only dull and monotone, but which also lack contrast and sharpness. This is particularly problematic for computer vision-based photogrammetry techniques, since computer algorithms will have difficulty automatically detecting feature points in hazy, low-contrast images (Ludvigsen et al.).

Figure: Colour absorption at different depths.

The traditional way of resolving the issue of light absorption is of course to use underwater strobes or lights. However, light from artificial light sources is itself absorbed over a short distance as it travels through the water. Consequently, in order to produce crisp, colourful pictures using artificial lighting, pictures have to be taken very close to the subject of study. This is very different from, for instance, aerial photogrammetry, where a single picture from an airplane might capture an area of several square kilometres.

Limited Visibility

Perhaps the most obvious and arguably also the most important concern for underwater photogrammetry is limited visibility. Underwater visibility may vary depending on water temperature, salinity, light penetration and simply the amount of particles suspended in the water column, and can range from a theoretical maximum of 80 m to practically zero depending on the underwater conditions.

In much the same way that mist might obscure a landscape and interfere with aerial photogrammetry, limited underwater visibility can severely hamper underwater photogrammetry. Typically the only way to capture good quality images in a low visibility setting is to take pictures very close to the subject of study, which again means lots of images have to be processed in order to map a small area.

In the worst-case scenario, in situations where visibility is extremely low, photogrammetry may simply not be a feasible option (Rule).

Past Accomplishments and Limitations

The Early Pioneers

Despite the various complications faced when using photogrammetry in the underwater environment, over the past decades numerous archaeological projects have — often quite successfully — made use of photogrammetry for underwater site recording.

In this regard, one of the most important early pioneers to mention is no doubt Dimitri Rebikoff. A French inventor with a passion for diving, Rebikoff discussed the potential of using photography to map ancient wrecks in the Mediterranean very early on (Rebikoff).

Using a specially developed corrective lens, the same photogrammetry procedures and theoretical models originally developed for aerial photogrammetry could be applied to underwater images. In comparison to normal flat camera lenses, the Ivanoff-Rebikoff lens compensated for water refraction, eliminating pincushion distortion and chromatic aberration, while still allowing for a wide angle of view and a large depth of field (Rebikoff).

After moving to the United States, Rebikoff participated in an archaeological campaign by the University of Chicago to map the sunken remains of the harbour city of Kenchreai, near Corinth in Greece.

Since the remains of Kenchreai lie in very shallow water, a photographer equipped with a single camera simply swam on the surface, following white-labelled parallel guidelines in a lawnmower pattern across the site. Just like in aerial photogrammetry, the idea was then to process consecutive pictures as stereo-pairs in order to finally produce a photogrammetric plan of the ancient harbour.

However, because the site was so shallow, each picture could only cover a very small area, and an enormous number of photographs was therefore required. Unfortunately the estimated cost of processing and aligning such a large quantity of images was so high that the team eventually abandoned photogrammetry in favour of more traditional recording methods (Rebikoff).

Around the same time, the underwater archaeology team at the University of Pennsylvania, headed by George Bass, started experimenting with underwater photogrammetry at Yassi Ada in Turkey. The waters off the coast of Yassi Ada — a small island near Bodrum — contain various shipwrecks, including a 7th century Byzantine and a 4th century Roman vessel. In order to test different camera setups and different photogrammetry plotters, initial photogrammetry trials took place on the Byzantine wreck over successive campaigns: a single-camera and a stereo-camera diver-based approach were both tried, as well as a stereo-camera submarine-mounted system (Rosencrantz). Following these early trials, the submarine-mounted stereo system was successfully used to capture nine stereo-photograph pairs of the 4th century Roman wreck site.

In this setup a camera was suspended from a horizontal bar using a gimbal in order to capture perfectly vertical pictures. By taking pictures at fixed intervals along the bar, consecutive pictures could then be handled as stereo-pairs, producing the site plan shown in Figure 15.

Figure 15: Site plan of the 4th century Yassi Ada Roman wreck, recorded using photogrammetry.

The Roman wreck was located at depths between 36 and 42 m.

Photogrammetry proved to be an effective means of reducing the recording time required to produce accurate site plans at such depths. Given the high demands in terms of detail and accuracy required for good archaeological recording, maritime archaeology has often been a testing ground for new techniques prior to their wider adoption for other — often military or industrial — applications (Bingham et al.). As such, our discipline has always been at the forefront of new developments in underwater recording using Analytical, Digital and finally Computer Vision Photogrammetry.

Within maritime archaeology, photogrammetry has been used most commonly to record shipwrecks, but also to document submerged structures such as towns and harbours and even prehistoric sites. Archaeologists have used both stereo-camera setups and single camera approaches. Generally projects relied on a diver-based approach, but photogrammetry has also been successfully deployed to document archaeological sites from submarines, ROVs and more recently, AUVs.

Throughout this diverse range of projects, researchers generally published very positive results; some of the most interesting findings are briefly quoted below. It is unlikely that any underwater drawing technique could ever match the quality and quantity of information provided by good photogrammetric recording.

This was an extremely useful way of analysing the composition of the site and how it had been formed. It provided far more detailed information than a conventional archaeological site-plan.

Naturally, the quoted articles represent just a small fraction of the total corpus of literature covering the use of photogrammetry in maritime archaeology. Nonetheless, they serve to highlight what many researchers have known and agreed upon for decades: compared to manual recording methods, photogrammetry can significantly reduce underwater recording times while producing more detailed, more objective and more accurate, three-dimensional results.

The Other Side of the Coin

Given the very compelling arguments for the use of photogrammetry described above, one might expect photogrammetry to be the uncontested recording method of choice for any underwater archaeological campaign. Nevertheless we know that this is not the case; the fact is that, over the past five decades, the vast majority of maritime archaeology projects have continued to rely, almost exclusively, on manual recording methods.

Despite the fantastic results obtained at Yassi Ada, there are various important considerations to make about these early experiments: (a) the campaign was extremely well-financed, and (b) like Dimitri Rebikoff before him, Donald Rosencrantz — the person in charge of photogrammetry — was an engineer, not an archaeologist.

Because most expeditions do not have similar financial backing, a major goal of the work was to produce a system that would be useful to other groups as well. Such a system must be operable in the field, have low equipment costs, and be simple enough for use by typical archaeological expedition personnel with minimal training.

Additionally, the setup required specially engineered pre-calibrated underwater cameras, and a plotter and photography darkroom had to be transported to the site (Rosencrantz). Furthermore, the Yassi Ada Roman wreck had the advantage of being very flat (meaning it was relatively straightforward to model) and of having great on-site visibility (meaning a small number of pictures taken from several metres' distance sufficed to model the entire site).

Naturally, a lot of progress has been made since these early experiments; over time, step by step, each new development in photogrammetry gradually contributed to lower equipment costs and easier-to-use workflows.

Following every such development, researchers have proclaimed photogrammetry as the technique that would once and for all revolutionise underwater archaeological recording. In practice, however, several limitations persisted. Firstly, the necessary equipment remained costly. Secondly, the method was very technical and therefore most archaeological projects had to rely on photogrammetry specialists to process the data. Thirdly, any time-advantage gained during underwater recording was typically cancelled out by the extremely time-consuming process of manually aligning and processing the images.

This was especially problematic in scenarios where a lot of pictures had to be aligned, such as on shallow sites like Kenchreai or sites with limited visibility. Each of these factors — costly equipment, the need to hire photogrammetry specialists and long processing times — contributed to making photogrammetry a very expensive and therefore inaccessible recording method.

Finally, even for projects with the financial and technical means to use the method, photogrammetry had one major disadvantage: it was simply not very reliable. Since data processing was so time-consuming, and since most projects relied on external photogrammetry specialists to process the pictures, the photogrammetry results could typically only be assessed after the project had ended.

However, pictures had to be taken in an extremely controlled manner to produce good results. As such, recording a site using only photogrammetry was a very risky decision: if there was any problem with the original picture sequence (images not captured perfectly perpendicular to the seabed, not enough image overlap between consecutive pictures, etc.), this would typically only be discovered once the opportunity to re-record the site had passed.

This happened on several occasions, and it proved particularly problematic on sites with complex geometries or limited visibility. In light of the various issues discussed above, it is not surprising that over the past 50 years most underwater archaeologists have preferred to rely on affordable, easy-to-use and reliable manual recording methods. In practice, past uses of photogrammetry in underwater archaeology have therefore — for the most part — been limited to well-funded research projects where three-dimensional information was deemed very important, or to deep sites where manual recording was simply not a realistic option.

A first clear conclusion is that underwater archaeologists have long been aware of the various benefits of using photogrammetry. Compared to manual recording methods, photogrammetry could greatly reduce underwater recording times, while also producing more accurate, detailed and objective three-dimensional results.

However, while publications have typically focused on these positive aspects, it is also clear that in the past photogrammetry suffered several important limitations: the method was very expensive, highly technical, extremely time-consuming during data processing and not very reliable. As a result — despite 50 years of promising results — the actual number of projects using photogrammetry for underwater archaeological site recording has so far been very limited.

If photogrammetry is finally to achieve widespread adoption, these limitations need to be overcome: the method should be capable of producing the same excellent 3D results obtained in the past, while simultaneously being more affordable, easier to use, more time-efficient and more reliable.

In the second half of this thesis I want to find out whether a new generation of Computer Vision Photogrammetry techniques can meet these requirements. To do so, we must first consider which specific software package to use to process the data. Initially, Computer Vision Photogrammetry required a lot of IT expertise to use, since computer vision algorithms had to be coded and implemented by the researchers themselves.

As a result, the procedures remained very technical and expensive, and therefore inaccessible. More recently, however, in tandem with the proliferation of increasingly powerful personal computers, public interest in 3D technologies such as Computer Vision Photogrammetry has surged massively. As a consequence, over the past few years numerous photogrammetry software packages have been released which for the first time target not only the surveying industry but also the ever-growing crowd of hobbyist 3D modellers.

These software applications implement the various steps of the Photogrammetry workflow described in Chapter II, but without the user necessarily having to understand all the complex principles behind each step. While most of these new software solutions are based on the Structure from Motion approach, the specific algorithms they use for subsequent steps such as feature detection, camera calibration and feature-based alignment may differ considerably.

Consequently the quality of the 3D results obtained will also vary from software to software. Before exploring the potential of modern Computer Vision Photogrammetry for underwater archaeological site recording, we should therefore make an informed decision on what software to use to process our underwater images.

Within this wide range of options, our software of choice should preferably produce excellent 3D results while also being affordable, easy to use and reliable. This chapter consists of three parts: the first two are devoted to evaluating various photogrammetry applications in order to find the software package most suited to our recording requirements.

The evaluations are based on personal experiments with test data, as well as on similar experiments published in the scientific literature. After having singled out one application in particular, in the third part of this chapter our software of choice is then discussed in a bit more detail.

Photogrammetry Software Evaluated using Test Data

Testing all existing photogrammetry applications would have been unrealistic and certainly beyond the scope of this dissertation. As such, I initially experimented with free software packages and then gradually worked my way up towards more expensive options until I found a program which produced satisfactory results.

Another delimiting factor was the technical expertise needed to use each software package: applications that required special IT skills such as coding were not considered, since these would be beyond the skillset of most archaeologists, myself included. Among the different applications tested, Agisoft PhotoScan clearly stood out, for various reasons.

Firstly, whereas all applications produced decent 3D results for the relatively simple picture sequence in dataset A, PhotoScan was the only software capable of correctly aligning the images in dataset B.

Figure. The laser scanner results contain slightly more geometric detail (left), while the photogrammetry results contain more texture detail (right).

Secondly, whereas Photosynth and VisualSFM only perform initial camera alignment and sparse point cloud generation, PhotoScan conveniently combines all photogrammetric processing steps, from picture alignment and sparse point cloud generation to dense point cloud, mesh and eventually textured mesh generation. Photosynth, which processes the images automatically online, has the considerable disadvantage that the user loses all control over the consecutive processing phases, and therefore has no way of improving bad 3D results.

Additionally, an online platform is not ideal if the method is to be used on remote archaeological excavations, where a good internet connection is often difficult to come by. Thirdly, for someone like me, without a specialised technical or IT background, PhotoScan turned out to be surprisingly easy to use. The software follows a very straightforward workflow, and every possible aspect of this workflow is covered by a wide range of online tutorials which target both beginners and advanced users.

Photogrammetry Software Evaluated in the Scientific Literature

Admittedly, from a scientific point of view, the findings in the previous paragraph have some shortcomings: the software packages were tested on just two — rather arbitrary — datasets, each dataset was recorded using the same camera, and I did not have the necessary funding to experiment with more expensive software solutions.

By conducting a review of the relevant literature, I therefore wanted to verify whether my conclusions could be backed by strong scientific research. In the papers discussed below, different photogrammetry applications or algorithms were each tested and compared in order to assess their ability to correctly reconstruct various image sequences. In some studies the geometry of the resulting sparse point cloud was compared to the geometry of the reference data.

In others, the geometry of the dense point cloud was compared to the reference data. One such study concluded that while each application produced accurate results, VisualSFM and Bundler gave the most precise ones, with their point clouds showing the least amount of deviation from the reference data. Remondino et al. carried out similar comparisons. In order to create digital surface models from aerial images, Sona et al. likewise compared a range of photogrammetry packages, with PhotoScan performing among the best. This is significant because it demonstrates that, at least in this particular case study, PhotoScan outperformed more expensive software solutions such as Pix4UAV and PhotoModeler.
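To make the notion of deviation from reference data concrete, the following sketch shows one common way such cloud-to-cloud comparisons can be computed: for every point in the test cloud, measure the distance to its nearest neighbour in the reference cloud. This is an illustrative sketch only; the file names are hypothetical, and it assumes both clouds have already been scaled and co-registered in the same coordinate system.

```python
# Illustrative cloud-to-cloud comparison; file names are hypothetical and
# both clouds are assumed to be co-registered in the same coordinate system.
import numpy as np
from scipy.spatial import cKDTree

def cloud_deviation(test_xyz, ref_xyz):
    """Distance from each test point to its nearest neighbour in the reference cloud."""
    tree = cKDTree(ref_xyz)
    dists, _ = tree.query(test_xyz, k=1)
    return dists

# Point clouds exported as plain-text XYZ files (one "x y z" triplet per line)
test = np.loadtxt("photogrammetry_dense.xyz")[:, :3]
ref = np.loadtxt("reference_scan.xyz")[:, :3]

d = cloud_deviation(test, ref)
print("mean deviation: %.4f" % d.mean())
print("RMS deviation:  %.4f" % np.sqrt((d ** 2).mean()))
```

Mean and root-mean-square deviation of this kind are the figures typically reported in the comparative studies cited above.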

The main conclusion of a further comparative study was that modern photogrammetry applications are a good alternative to more expensive recording methods such as laser scanning or LiDAR. Although the researchers again did not express a concrete preference for one application over another, the data suggests that the models produced by PhotoScan generally deviated least from the reference data. Finally, whereas the other software applications required points to be detected in at least three overlapping images, after image calibration PhotoScan was capable of calculating 3D coordinates from just two overlapping images.

Discussion

These papers have all systematically tested the accuracy and precision of different photogrammetry software solutions. For heritage researchers, however, other considerations such as affordability, flexibility and usability will often outweigh accuracy or precision. Nevertheless, my experiments, as well as the technical analyses discussed above, demonstrate that PhotoScan manages to combine all these factors into a single affordable, easy-to-use, reliable and highly accurate software package.

In conclusion, we can now state with some confidence that — at least for the time being — PhotoScan is the most appropriate software package with which to test the potential of modern photogrammetry for underwater archaeological site recording.

Before moving on to our underwater case studies, let us therefore have a brief look at this software package in particular. PhotoScan has gone through various updates since its initial release, the most recent edition being Version 1.2.

Additionally, in order to speed up the processing procedure, a high-speed multi-core CPU and a good OpenCL-compatible graphics card are recommended.

Workflow

Naturally, the first step in any Computer Vision Photogrammetry application is loading pictures into the software interface.

PhotoScan can process pictures in a wide range of digital image formats, but to facilitate alignment, ideally these images should conform to certain specifications. Since points have to be detected in each picture, the images should preferably also be as sharp as possible (high aperture), be well-lit and contain as little noise as possible (low ISO).
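Such requirements can also be screened for automatically before processing. The sketch below, a minimal example assuming OpenCV and Pillow are installed, flags images that are likely too blurry or too noisy; the variance of the Laplacian is a common sharpness proxy, and the folder name and thresholds are purely illustrative choices.

```python
# Illustrative pre-processing check; folder name and thresholds are arbitrary.
import glob
import cv2
from PIL import Image
from PIL.ExifTags import TAGS

for path in sorted(glob.glob("photos/*.jpg")):
    # Variance of the Laplacian: a simple sharpness proxy (higher = sharper)
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

    # Read the ISO value from the EXIF metadata, if present
    exif = Image.open(path)._getexif() or {}
    tags = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    iso = tags.get("ISOSpeedRatings")
    if isinstance(iso, tuple):  # some cameras store ISO as a tuple
        iso = iso[0]

    if sharpness < 100 or (iso is not None and iso > 800):
        print("consider excluding: %s (sharpness=%.1f, ISO=%s)" % (path, sharpness, iso))
```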

Similarly, the use of flash photography is not recommended, since this will cause strong shadow differences across images, which might confuse feature detection.

Figure. Recommended camera positions for various photogrammetry recording scenarios.

Once loaded, the pictures are processed in four main steps: camera alignment with sparse point cloud generation, dense point cloud generation, mesh generation and textured mesh generation. Since Agisoft is a commercial enterprise, the company is somewhat secretive about the specific algorithms used for each of these processing steps.

Nevertheless, some information can be surmised from the online discussion forums and from scientific articles. Today most digital pictures contain EXIF metadata, which PhotoScan uses to estimate initial camera calibration parameters. The specific algorithm used for the subsequent camera alignment step is unknown, but it involves the calculation of approximate camera locations which are then refined using a bundle-adjustment technique (Semyonov). Remondino et al. offer further technical analysis of such Structure from Motion pipelines.
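In general terms, and without claiming anything about PhotoScan's specific implementation, bundle adjustment can be summarised as the joint refinement of camera parameters and 3D point positions by minimising the total reprojection error. In the notation below (my own, for illustration), $X_i$ are the 3D points, $C_j$ the parameters of camera $j$, $x_{ij}$ the observed image coordinates of point $i$ in photo $j$, and $\pi$ the projection function:

\[
\min_{\{C_j\},\,\{X_i\}} \; \sum_{i,j} \left\| x_{ij} - \pi(C_j, X_i) \right\|^2
\]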

These patches are then projected onto their corresponding points on the surface mesh, and the source photos are subsequently blended to create the texture atlas (Semyonov). Once again, little is known about the origin of the actual algorithm used.

For each of these four steps, different processing settings can be chosen in order to fine-tune the processing procedure to the needs of the specific image sequence. Additionally, between each major processing step the user has the opportunity to perform additional smaller actions in order to improve the final results. These actions include picture masking, deleting erroneous points, importing camera positions from external files, setting the reconstruction bounding box, and so forth.
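In the Professional Edition, the same four-step workflow can also be driven from the built-in Python console. The sketch below follows the 1.2-era scripting API; the file paths and quality settings are illustrative choices rather than recommendations, and exact parameter and enumeration names may differ slightly between versions.

```python
# Minimal scripted version of the four-step workflow (PhotoScan 1.2-era API).
# Paths and quality settings are illustrative only.
import glob
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()

# Load the photographs
chunk.addPhotos(sorted(glob.glob("/data/site/photos/*.jpg")))

# Step 1: camera alignment and sparse point cloud generation
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy, generic_preselection=True)
chunk.alignCameras()

# Step 2: dense point cloud generation
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)

# Step 3: mesh generation
chunk.buildModel(surface=PhotoScan.Arbitrary)

# Step 4: texture generation
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

doc.save("/data/site/project.psz")
```

Scripting of this kind is mainly useful for batch-processing several image sequences overnight; the interactive clean-up actions mentioned above still have to be performed by hand.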

Alternatively, the 3D models can be uploaded to the online platforms Sketchfab and Verold.

Two Editions, Four Price Classes

PhotoScan is available in two editions: a Standard Edition targeted at hobby users, and a Professional Edition targeted at survey professionals and the digital animation industry. While both editions contain all the essential features discussed above, the Professional Edition has additional features such as model scaling, marker-based chunk or picture alignment, geo-referencing based on known ground coordinates and the possibility of 4D recording (i.e. recording change over time).

There is no time restriction on any of these licenses and the software can be freely updated to more recent versions. Additionally, anyone wishing to try the software for the first time has access to a 30-day free trial, which offers exactly the same functionality as the PhotoScan Professional Edition.

Applications

Today an immense selection of different camera models, lenses and accessories is available to meet a wide range of imaging requirements.

Multi-spectral imagery opens up new perspectives such as feature-detection through foliage. In tight spaces a wide-angle lens might be more appropriate, in order to record a lot of details from a close range in a limited number of pictures.

If a subject has to be recorded through a glass casing, a polarising filter can be used to avoid reflections. In short, the possibilities are endless.

PhotoScan's applications range from artistic modelling projects to face, full-body and prop scanning for game design and the film industry, to aerial surveys in the context of mining activities, agricultural and environmental management or city planning. Nevertheless, the question remains how well PhotoScan can cope with the specific challenges faced in the underwater environment.

This issue will be explored in the following chapter, based on the lessons learnt from working with PhotoScan to record three archaeological shipwreck sites.

The data in question covers three shipwreck sites, one in Denmark and two in the Netherlands, all dated between the late 16th and early 18th century. All data was processed on a relatively high-end laptop equipped with a quad-core Intel Core i7 processor.

Photogrammetry using Casual Video Footage

The first case study concerns photogrammetry modelling using data captured during a summer excavation campaign on the Lundeborg Tile Wreck by the Maritime Archaeology Programme of the University of Southern Denmark.

Today the site consists of the remains of the wreck.

Figure. Location of the Lundeborg wreck.

Within this zone, three areas of special interest were excavated during the campaign. Since photogrammetry did not form part of the original Lundeborg project design, these areas were originally recorded using manual offset drawings and tape measure trilateration (Figure). Nonetheless, the site archive also contained several hundred pictures and a couple of videos, captured for redundancy purposes.

Figure. The areas excavated and recorded are outlined in blue.

Main Challenges

In theory the Lundeborg wreck made an ideal scenario for underwater photogrammetry: the site is relatively flat, so its geometry is not too complex; underwater visibility was generally good; and since the site is located at just 5 m depth, there was an abundance of natural light.

As such, typical issues of underwater light absorption or limited visibility were of little concern during the Lundeborg recording. In this case the main challenge was that the footage had been captured without photogrammetry in mind. Even if the hundreds of pictures combined might have covered every part of the wreck, the pictures were taken over a series of days.

Over the course of these days the excavation progressed and as such vegetation and sediment were gradually removed, baselines and timber tags were added, etc. In other words, the site looked very different from day to day and as a result PhotoScan was unable to detect enough matching features across pictures taken on different days.

The video footage, on the other hand, did not suffer from this problem: during video recording the diver was continuously moving around the wreck, and as a result — even though most of the videos were just a couple of minutes long — each video contained a lot of image data about the site. Nevertheless, the video footage also had certain drawbacks. First of all, as in aerial photogrammetry, during underwater photogrammetric recording it is generally recommended to record sites in a lawnmower pattern in order to ensure sufficient image overlap.

However, since these videos had not been captured for photogrammetry, the prescribed lawnmower survey pattern had not been followed. Secondly, the videos were recorded using a GoPro, a camera model equipped with a fisheye lens. This lens type causes extreme perspective distortion and is therefore generally best avoided for photogrammetry purposes. Finally, frames extracted from video footage do not contain the camera EXIF metadata which can help PhotoScan correctly calibrate cameras, and as such photogrammetry modelling from videos is not usually recommended.

Processing Procedure

Three videos — one for each of the areas excavated during the campaign — were identified as the best candidates for photogrammetry. Total recording time for all three videos was about 6 minutes. In order to provide sufficient overlap between consecutive frames, a frame was extracted every couple of seconds.
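For readers wishing to reproduce this step, frame extraction of this kind can be scripted in a few lines. The sketch below, assuming OpenCV and a hypothetical video file name, saves one frame for every two seconds of footage.

```python
# Illustrative frame extraction: one frame every two seconds of video.
import os
import cv2

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("area1_dive.mp4")  # hypothetical file name
step = max(int(cap.get(cv2.CAP_PROP_FPS) * 2), 1)  # frames per 2 s interval

count = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if count % step == 0:
        cv2.imwrite("frames/frame_%04d.jpg" % saved, frame)
        saved += 1
    count += 1
cap.release()
print("extracted %d frames" % saved)
```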

The extracted frames were then loaded into PhotoScan and aligned, producing an initial sparse point cloud for each area. From these initial sparse point clouds, a dense point cloud, mesh and textured mesh were subsequently generated. The processing of all three datasets — from extracting the frames to producing the textured models — was done in a single afternoon.

The results are shown in the figures below.

Figure. Textured 3D model produced in PhotoScan.

Figure. Top view of the timbers to the southeast of the Lundeborg wreck mound (excavation area 3).

Discussion

Since we can only compare the photogrammetry models to the manually drawn site plan (which has its own inherent inaccuracies), it is hard to draw any conclusions regarding their absolute accuracy.

Nonetheless, simple visual inspection suggests that the accuracy is certainly high enough for archaeological purposes. This has profound methodological implications: whereas we spent several days recording each excavated area by hand using offset measurements, video recording of each area took only a couple of minutes.

Furthermore, the 3D textured models contain a lot more information than the 2D site plan, and since the photogrammetry process is largely automated, the 3D models are more objective and less prone to human errors than traditional offset drawings. For instance, we now know that during the original campaign we forgot to record at least one timber in excavation area 3; while the timber is visible in the photogrammetry model, it is missing from the site plan.

From a photogrammetric point of view, these experiments have demonstrated that 3D modelling using frames extracted from videos is a viable alternative to 3D modelling from picture sequences. Furthermore, PhotoScan succeeded in processing such casual footage not just once, but for all three videos, allowing us to produce a photogrammetric model of each of the areas excavated during the campaign.

That being said, in the current case study the photogrammetry models cannot serve as a replacement for the Lundeborg site plan, simply because the videos were captured from too great a distance, and when parts of the site were still covered by sand. The result is that while the overall geometry of the site is clearly modelled, small details such as nail holes or tool marks are not always visible.

Photogrammetry using Legacy Data

The second case study concerns photogrammetry modelling using data captured during the historic Aanloop Molengat excavation.

The Aanloop Molengat site is located in the North Sea, to the west of the island of Texel, at a depth of 16 m. Extending over an area of 33 by 13 m, the site consists of a 17th century Dutch vessel built in the Dutch-flush tradition, carrying a significant cargo of raw and half-finished products. Following underwater visibility of up to 10 m during the original assessment of the site, Digital Photogrammetry was chosen as the primary recording method for the project.

In order to help ensure image overlap, the initial photogrammetry experiments made use of a rigid frame structure allowing archaeologists to take pictures at fixed intervals across the site.

Figure. Site plan of the Aanloop Molengat wreck.

Unfortunately, despite using a systematic data capture strategy, and despite partnering with photogrammetry experts at the Delft University of Technology as well as with an engineering firm specialised in geodetic surveys, both the initial photogrammetry trials and the renewed attempts failed (Vos). In particular, we will make use of the stereo pictures captured during these campaigns.

Main Challenges

Unlike the Lundeborg case study, these stereo pictures were taken for the explicit purpose of photogrammetry modelling.

Nevertheless, the data proved extremely challenging to work with, due to both the quality of the original picture sequence and the way these pictures were later digitised.

Original Picture Sequence

The most positive aspect of the picture sequence was the fact that the images were captured in a very systematic manner.


 
 
