diff --git a/IntroPhotogrammetry/Metashape/01-AgisoftMetashapeOverview.md b/IntroPhotogrammetry/Metashape/01-AgisoftMetashapeOverview.md
deleted file mode 100644
index b96281e..0000000
--- a/IntroPhotogrammetry/Metashape/01-AgisoftMetashapeOverview.md
+++ /dev/null
@@ -1,19 +0,0 @@
---
title: "Agisoft Metashape Overview"
layout: single
header:
  overlay_color: "444444"
  overlay_image: /assets/images/nasa.png
---

## Overview

### Get Inspired

Discover inspiring stories of very different [projects](https://www.agisoft.com/community/showcase/) that all used Agisoft Metashape software.

### Graphical Interface

[Docs](https://www.agisoft.com/pdf/metashape-pro_1_5_en.pdf)

diff --git a/IntroPhotogrammetry/Metashape/02-MetashapeOnAtlasHPC.md b/IntroPhotogrammetry/Metashape/02-MetashapeOnAtlasHPC.md
deleted file mode 100644
index 60c35a1..0000000
--- a/IntroPhotogrammetry/Metashape/02-MetashapeOnAtlasHPC.md
+++ /dev/null
@@ -1,318 +0,0 @@
---
title: "Metashape on Atlas"
layout: single
header:
  overlay_color: "444444"
  overlay_image: /assets/images/nasa.png
---

{% include toc %}

## WARNING! Agisoft Metashape was uninstalled from all SCINet computing resources in early 2022 for security reasons.

## Metashape on Atlas cluster (SCINet HPC)

**Agisoft Metashape** is preinstalled on the [Atlas](https://www.hpc.msstate.edu/computing/atlas/) SCINet HPC infrastructure as a *Metashape* module in version **1.7.3** and can be loaded into your computational environment with the `module load` command.

### Requirements

To activate the module, you must first load the `gcc/10.2.0` package, which is a required dependency for *Metashape*.

```
module load gcc/10.2.0       # required (always)
module load mesa/20.1.6      # required (only when using graphical interface)
module load metashape/1.7.3  # required (always)
```

### Platform plugin

[Qt](https://doc.qt.io/qt-5/qtgui-index.html) is a GUI toolkit that provides modules for cross-platform development of graphical user interfaces. Depending on the platform you are working on, you should choose the right plugin. There are two platform plugins available, of which *xcb* is the default.

* `xcb` provides the basic functionality needed by the Qt GUI to run against X11 (*X terminal graphical interface*)
  - use the *xcb* platform (default) when using Agisoft via the [Open OnDemand service](#access-atlas-via-open-ondemand-service)

* `offscreen` prevents the startup of the graphical interface when using a Linux terminal session
  - use the *offscreen* platform when using Agisoft via the [SSH terminal connection](#access-atlas-via-ssh-terminal-connection)

    ```
    metashape -r script.py -platform offscreen
    ```

### Access Atlas via *Open OnDemand* service

[*Open OnDemand*](https://www.hpc.msstate.edu/computing/atlas/#ondemand) (OOD) is a web-based platform that provides a user-friendly graphical interface to the Atlas HPC infrastructure directly in a web browser tab, including the file system, GUI applications, and standalone software.

#### 1. Log in to the OOD service

To connect to Atlas via the OnDemand interface, visit [https://atlas-ood.hpc.msstate.edu/](https://atlas-ood.hpc.msstate.edu/) in your web browser (*Google Chrome*, *Mozilla Firefox* or *Microsoft Edge* are preferred).
![Atlas OOD service](assets/images/AtlasOnDemand.png)

Follow the instructions provided by [hpc.msstate.edu](https://www.hpc.msstate.edu/computing/atlas/#ondemand):
> In order to log in, users must use their SciNet credentials. The 6-digit Google Authenticator code must be entered in the same field as the password, with no spaces or other characters in-between the password and Google Authenticator code.

Your SciNet credentials include:
* username, usually in the form `name.surname`
* password, the same password as used for the SSH connection to Atlas
* Google Authenticator code; if you don't use it yet, find out more at [SCINet/GA](https://scinet.usda.gov/guide/multifactor/#google-authenticator-ga-on-android)

#### 2. Set up remote Atlas Desktop

To get a graphical interface to the file system and applications on the Atlas cluster, you first need to activate the remote Atlas Desktop. For this, select `Interactive Apps` from the top menu and then `Atlas Desktop` from the dropdown options.

![Atlas Desktop](assets/images/AtlasDesktop.png)

**Specify the interactive job request**

Now, a form should appear in your browser tab where you configure the settings for the interactive job you want to run on the Atlas cluster. When you have set the options, click the long blue *'Launch'* button.

| Param | Value |
| ----------- | ----------- |
| **Account** | your project_name, visible on Atlas in /project/ |
| **Partition_Name** | 'atlas' or 'gpu' (if available) |
| **QOS** | 'ood' |
| **Number of hours** | 2 (up to 8 hours) |
| **Number of nodes** | 1 |
| **Number of tasks** | 2 (tasks = cores here, with a max of 2) |
| **Resource Reservation** | *leave empty* |
| **Memory Required (In GB)** | *our example uses ~10GB* |
*Check the boxes if you want to receive email notifications.*

![Atlas OOD form](assets/images/ODDform.png)

**Launch Atlas Desktop**

In the next step you will see your request queued, and once the resources have been allocated to your job (you should get an email if you selected that option), you will finally be able to `Launch Atlas Desktop` using the blue button in the bottom left corner of the form. With the options *'Compression'* and *'Image Quality'* you can decide on the graphics quality of the remote desktop view.

At any time you can remove your interactive session from the queue by pressing the red `Delete` button, available in the top right corner of the form.

![Atlas Desktop](assets/images/ActivateDesktop.png)

#### 3. Open the terminal emulator

The Atlas Desktop should automatically open in a new tab in the same browser window in which you requested *Open OnDemand* access. You can browse the file system in graphical mode. Your home directory is visible on the desktop. At the bottom of the screen you will find quick shortcuts to applications, including the terminal. In the picture, its icon is indicated by a red arrow. Clicking on the icon will open the coding console window.

![Atlas Desktop](assets/images/DesktopScreen.png)

#### 4. Load dependencies

When a terminal window appears on your screen, type the following commands to load all the required dependencies. Note that in order to keep the graphical interface functional, you need to load the additional `mesa` package.

```
module load gcc/10.2.0
module load mesa/20.1.6
module load metashape
```

To check the list of requirements for the Metashape module, you can use the command below.

```
module spider metashape
```

#### 5. Run Metashape Python scripts

In the Metashape analysis, you can run Python scripts in two ways:

* directly in a terminal window, using the `metashape` command with the *-r* flag followed by the name of the Python script

  ```
  metashape -r metashape_script.py
  ```

* by launching the interactive GUI for Agisoft Metashape with the `metashape` command in the terminal, and then running a Python script in the Console tab of the GUI window (fourth tab from the left on the bottom menu bar in the GUI of *Agisoft Metashape Professional*)

  ```
  metashape
  ```

![Agisoft Metashape GUI](assets/images/LaunchAgisoft.png)
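If you are new to Metashape scripting, here is a minimal sketch of the kind of script you could pass to `metashape -r`; the file name and all paths are placeholders, and the calls mirror those used in the repository's `01_metashape_SPC.py` script.

```
# minimal_project.py -- a minimal sketch of a script for `metashape -r`;
# the project and photo paths below are placeholders to replace with your own
import os
import Metashape

doc = Metashape.app.document    # the active application document
chunk = doc.addChunk()          # new chunk to hold the photo set

datadir = "/project/scinet/your_workdir/input_data"       # hypothetical photo folder
photos = [os.path.join(datadir, p) for p in os.listdir(datadir)]
chunk.addPhotos(photos)         # load all photos into the chunk

doc.save("/project/scinet/your_workdir/my_project.psx")   # save as a Metashape project
```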
If you want to learn more about Agisoft Metashape analysis using Python scripting, go to the [Metashape Python Scripting](03-MetashapePythonScripts.md) article.
You can also go to the [Photogrammetry Tutorial](03-Tutorial-Photogrammetry.md), which introduces you to photogrammetric analysis with a practical example of processing drone imagery in Agisoft Metashape. The automated workflow was developed and made available through the kindness of Hugh Graham, Andrew Cunliffe, Pia Benaud, and Glenn Slade.
### Access Atlas via SSH terminal connection

[*SSH*](https://en.wikipedia.org/wiki/Secure_Shell) (Secure Shell) is a protocol that enables two computational machines to communicate over a network. This is especially useful when you want to remotely request computations or access the data on a cluster from your local computer. The SSH connection by default does not provide a graphical interface, so file system browsing and calculations are done using commands written in the terminal window.

#### 1. Open terminal window

Depending on the operating system on your local computing machine (Windows, macOS, Linux), the way to run a terminal is slightly different, but each of these systems should have at least a basic terminal pre-installed. To learn the difference between Terminal, Console, Shell, and Kernel, check [here](https://www.geeksforgeeks.org/what-is-terminal-console-shell-and-kernel/).

**Windows**

In Microsoft Windows, the command-line shell is usually called the *Command Prompt* or, more recently, *PowerShell*.
Follow these steps to display the terminal window:
1. Go to the Start Menu (the Windows icon in the left corner of your desktop) and type *'cmd'* or *'powershell'*. The search results should display a dark square icon.
2. Click on the icon to open the terminal window.

**macOS**

In macOS you can start a terminal session by clicking on the black square icon present in the Dock bar. If the shortcut is not there by default, you can search for *'Terminal'* or *'iTerm'* in the Finder.

**Linux**

In Linux you can start a terminal session by clicking on the black square icon *'Terminal'* present in the Menu bar.

| Windows | macOS | Linux |
| ----------- | ----------- | ----------- |
| ![Windows terminal](assets/images/terminalWin.png) | ![macOS terminal](assets/images/terminalMac.png) | ![Linux terminal](assets/images/terminalLin.png) |

#### 2. Connect to Atlas using SSH

Once you have a terminal window open, log in to the Atlas cluster using the `ssh` command and your SCINet credentials.

```
ssh name.surname@atlas-login.hpc.msstate.edu
```

Your SciNet credentials include:
* username, usually in the form `name.surname`; make sure the username is lowercase
* Google Authenticator (GA) code; if you don't use it yet, find out more at [SCINet/GA](https://scinet.usda.gov/guide/multifactor/#google-authenticator-ga-on-android)
* password; the password for Atlas is set separately from the password for the Ceres cluster

**First-time user's login**

* Enter `ssh name.surname@atlas-login.hpc.msstate.edu` in the terminal window and press `enter` or `return` on your keyboard.
* When prompted for the verification code, enter the 6 digits generated by the GA application and press `enter` or `return` on your keyboard.
  *Note: the code will not be shown on the screen as you enter it.*
* When prompted for your password, enter your password and press `enter` or `return` on your keyboard.
  *Note: your password will not be shown on the screen as you enter it.*

  * For a first-time login to Atlas, use the temporary password received in the SCINet welcome email. A message will appear telling you that your password has expired.
  * When prompted for your *'current password'*, enter your temporary password once again.
  * Then you will be prompted twice to enter your *'new password'*.
    *When creating a new password, make sure it contains at least 12 characters drawn from at least 3 different character classes (e.g., upper/lower case letters, numbers, and special characters: # @ $ %, etc.).*

**Exit the SSH connection**

Once you have completed your activities on the Atlas cluster, you should use a secure way to disconnect your local computer from the HPC infrastructure. To terminate the SSH connection with Atlas, type `exit` in the terminal and press `enter` on your keyboard. Then, you can close the terminal window on your computer.

#### 3. Navigate to your workdir

Once you have successfully logged in, by default you are in your **/home** directory on Atlas. You can confirm this by typing the `pwd` command.
*Note: any shell command requires pressing `enter` or `return` to be executed.*

```
pwd
```

The home directory **is not** intended to be used as a workspace. Instead, the correct location is **/project**, where you should use `ls` to find the directory for your group's project. Usually the project name matches the *'account'* field used to access the [Atlas OOD service](#set-up-remote-atlas-desktop), and formally refers to a **slurm account**, as defined at [scinet.usda.gov](https://scinet.usda.gov/guide/ceres2atlas/#slurm-account):

> To run jobs on compute nodes of either cluster, the jobs need to be associated with a slurm account. For users that have access to one or more project directories, their slurm accounts have same names as the project directories. The slurm account for users without project directories is called scinet.

To display a list of available slurm accounts (projects), execute in the terminal:

```
ls /project/
```

If you don't find an account for your group on the list, you can [request one](https://scinet.usda.gov/guide/storage/#project-directories) or you can use one of the general access projects, such as ***scinet***, ***shared*** or ***90daydata***. Navigate to `/project/` (here we use *scinet* as an example) and create a new working directory for your computing task with `mkdir`.

```
cd /project/scinet
mkdir your_workdir_name
cd your_workdir_name
```

You can check your current location in the file system with `pwd` and display its contents with `ls`.

#### 4. Set up SLURM job

On the Atlas cluster, **computation on a *login node* is prohibited**. Therefore, all calculations must be submitted to the *compute nodes* using the SLURM task management system (learn more from the [tutorial](https://bioinformaticsworkbook.org/Appendix/HPC/SLURM/slurm-cheatsheat.html#gsc.tab=0)). In general, you can access compute nodes by:
* submitting a job to a queue using the `sbatch` command
* starting an interactive session with the `salloc` command

For projects using Metashape analytics, running **an interactive session is not recommended** due to the high-impact processes that result in a violation of the resource usage policy. As a consequence, the user will receive a penalty status that reduces their original computing power limits (e.g., status *penalty1* limits the user to 80% of CPU and memory usage). So, if you need to test your protocol with an interactive preview of the computation progress, use the [Atlas OOD service](#set-up-remote-atlas-desktop).

![Atlas Desktop](assets/images/metaUsage.png)

Once your computation protocol is stable, or if you want to use the scripts available in the [Photogrammetry Tutorial](03-Tutorial-Photogrammetry.md), follow the steps below to prepare the job for submission into the queue with the `sbatch` command.

**1. Create SLURM submission script**

At the command line, you can easily create a new empty file with the `touch` command.

```
touch submit_metashape.sh
```

**2. Copy-paste the basic content of the SLURM script**

First, open the submission script file `submit_metashape.sh` in a text editor of your choice (usually *nano* or *vim*) by typing in the command line the name of the editor followed by the name of the file.

```
nano submit_metashape.sh
```

In the text editor, paste the text copied from the code block provided below. Use the arrow keys on your keyboard to navigate, and change the values of the variables according to your preference.
* SLURM VARIABLES:
  * *job-name*, name of your job, visible in the queue with the `squeue -u user.name` command
  * *account*, your slurm account; for details see section [3. Navigate to your workdir](#3-navigate-to-your-workdir)
  * *mail-user*, provide an email of your choice
* CODE VARIABLES:
  * *workdir*, provide the full path to the directory with the Python script
  * *script_name*, provide the filename of the Python script used in this job

Then, press `control` and `x` to exit, then press `y` for *yes* to save changes.

```
#!/bin/bash

# job standard output will go to the file slurm-%j.out (where %j is the job ID)
# DEFINE SLURM VARIABLES
#SBATCH --job-name="metashape"
#SBATCH --partition=gpu                  # GPU node(s)
#SBATCH --nodes=1                        # number of nodes
#SBATCH --ntasks=48                      # 24 processor core(s) per node X 2 threads per core
#SBATCH --time=01:00:00                  # walltime limit (HH:MM:SS)
#SBATCH --account=scinet
#SBATCH --mail-user=your.email@usda.gov  # email address
#SBATCH --mail-type=BEGIN                # email notice of job started
#SBATCH --mail-type=END                  # email notice of job finished
#SBATCH --mail-type=FAIL                 # email notice of job failure


# LOAD MODULES, INSERT CODE, AND RUN YOUR PROGRAMS HERE
module load gcc                          # load gcc dependency
module load metashape                    # load metashape, then run script with x11 turned off

# DEFINE CODE VARIABLES
workdir=/project/scinet/your_workdir     # path to your workdir (can check with 'pwd' in terminal)
script_name=metashape_part1_SPC_linux.py # the filename of the python script you want to run

# DEFINE METASHAPE COMMAND
metashape -r $workdir/$script_name -platform offscreen
```

**3. Submit your job to the queue**

Make sure all required files are in the current directory. You can display its contents with `ls`.
The required files include:
* SLURM script: e.g., *submit_metashape.sh*
* Python script: e.g., *metashape_part1_SPC_linux.py*, available for download from [here]()
* Input files (an example dataset is available for download from [here]()):
  * config file: e.g., *input_file.csv*, a column-type text file with key-value pairs of initial parameters used in the Python scripts
  * input folder: e.g., *input_data*, the directory that contains photos in DNG format

Finally, to submit your job, use the `sbatch` command followed by your SLURM script name, and press `enter`. If successfully submitted to the batch queue, the job number will be displayed on your screen.
```
sbatch submit_metashape.sh
```
> Atlas-login-1[31] user.name$ sbatch submit_metashape.sh
> Submitted batch job 726220

If you have used the email notification option, you will receive emails 1) when the job has started running, 2) when the job has completed running, and eventually 3) when the job has failed due to an error (e.g., exceeding reserved resources such as walltime or memory).

**4. Trace the output file**

It is useful to trace the contents of the output file (slurm-*job_id*.out, e.g., slurm-726219.out for this example) that collects the results from the standard output & error. This can help you detect the cause of a job failure, but also analyze the performance and efficiency of the computation in the case of a successful job.
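While the job is waiting or running, you can also check its state in the queue with a standard SLURM command; a quick example, assuming your username is `name.surname`:

```
squeue -u name.surname    # list your queued and running jobs with their job IDs
```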
*Note: If you don't remember the job_id of your job, display the contents of your project directory and filter out the results that have the 'out' extension.*

```
ls | grep "out"
```
> slurm-726219.out

To preview the content of this file, use the `less` command followed by the *filename* and press `enter`. *less* is a read-only viewer, so you don't have to worry about accidentally making any changes.

```
less slurm-726219.out
```

*Note: In the viewer, use the arrow keys and/or the 'page up'/'page down' keys on your keyboard to smoothly navigate through the file.*

In the file view, check the percentage of cameras aligned and not aligned, and check that the mean reprojection values are within the threshold. Use the `q` key on the keyboard to exit the viewer.

Once you have completed your activities on the Atlas cluster, you can disconnect your local computer from the HPC infrastructure using the `exit` command followed by pressing `enter` on your keyboard.

diff --git a/IntroPhotogrammetry/Metashape/03-MetashapePythonScripts.md b/IntroPhotogrammetry/Metashape/03-MetashapePythonScripts.md
deleted file mode 100644
index de5d575..0000000
--- a/IntroPhotogrammetry/Metashape/03-MetashapePythonScripts.md
+++ /dev/null
@@ -1,43 +0,0 @@
---
title: "Metashape Python Scripting"
layout: single
header:
  overlay_color: "444444"
  overlay_image: /assets/images/nasa.png
---

## Introduction

Python scripting is supported only in the Metashape Professional edition.

Metashape Professional uses Python 3.8.

### Metashape functionality available via Python

* Open/save/create Metashape projects.
* Add/remove chunks, cameras, markers.
* Add/modify camera calibrations, ground control data; assign geographic projections and coordinates.
* Perform processing steps (align photos, build dense cloud, build mesh, texture, decimate model, etc.).
* Export processing results (models, textures, orthophotos, DEMs).
* Access data of generated models, point clouds, images.
* Start and control network processing tasks.

### Overview of Metashape module in Python

Documentation for [Metashape_python_api_1.7.3](https://www.agisoft.com/pdf/metashape_python_api_1_7_3.pdf).

`import Metashape as ms` # select a custom shortcut for the imported module

#### Global application attributes

`Metashape.app.`

The Metashape.Application class provides access to global attributes:

| attribute | description | value type |
|---------|---------|---------|
| *document* | main application document object | [document] |
| *enumGPUDevices* | enumerate installed GPU devices | [array] |
| *gpu_mask* | GPU device bit mask | [int] *1* - use device, *0* - do not use |
| *cpu_enable* | use CPU when GPU is active | [bool] *False* - disable CPU for GPU accelerated tasks, *True* - enable CPU for GPU accelerated processing |
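As a quick illustration of these attributes, the snippet below prints the detected GPU devices and builds a mask that enables all of them; this is a minimal sketch assuming it is run inside Metashape (via the GUI Console or `metashape -r`), and it mirrors the GPU setup used in `01_metashape_SPC.py`.

```
import Metashape as ms

for gpu in ms.app.enumGPUDevices():   # inspect the GPUs Metashape can see
    print(gpu)

ms.app.gpu_mask = 2 ** len(ms.app.enumGPUDevices()) - 1  # one bit per device, so all GPUs on

# keep the CPU helping only when at most one GPU is active
ms.app.cpu_enable = ms.app.gpu_mask <= 1

doc = ms.app.document                 # handle to the main application document
```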
diff --git a/IntroPhotogrammetry/Metashape/03-Tutorial-Photogrammetry.md b/IntroPhotogrammetry/Metashape/03-Tutorial-Photogrammetry.md
deleted file mode 100644
index 5f916a6..0000000
--- a/IntroPhotogrammetry/Metashape/03-Tutorial-Photogrammetry.md
+++ /dev/null
@@ -1,73 +0,0 @@
---
title: "Metashape Photogrammetry Tutorial"
layout: single
header:
  overlay_color: "444444"
  overlay_image: /assets/images/nasa.png
---

## Introduction

### Requirements

* Python 3.5, 3.6, 3.7, or 3.8
* gcc/10.2.0
* metashape/1.7.3
* mesa/20.1.6

`module load gcc/10.2.0 mesa/20.1.6 metashape/1.7.3`

## Agisoft Metashape Tutorial

### Step 0: Data Collecting

#### Camera settings

Any high-resolution digital camera (> 5 MPix) can be used for capturing images suitable for 3D model reconstruction in Metashape software. It is suggested to use a focal length in the 20 to 80 mm interval (35 mm equivalent) while avoiding ultra-wide-angle and fisheye lenses. Fixed lenses are preferred for more stable results.

#### Images settings

Take sharp photos at the maximal possible resolution with sufficient focal depth and the lowest value of ISO. For the Metashape analysis, use RAW data. Lossless conversion to the TIFF format is preferred over JPG, which induces more noise. Also, do not apply any pre-processing (resize, rotate, crop, etc.) to your photos.

#### Optimal image sets

In general, a good set of images is not random. More photos than required is better than not enough, but redundant or highly overlapping pictures are not useful. However, each detail of the geometry should be visible from at least two different camera snapshots. To learn more tips & tricks, see the *Capturing scenarios* and *Plan Mission* sections in the Metashape [user manual](https://www.agisoft.com/pdf/metashape-pro_1_5_en.pdf), pp. 9-14.

### Step 1: Loading & Inspecting Photos

#### Creating application object

```
import Metashape as MS

doc = MS.app.document
```

#### CPU/GPU settings

> Metashape exploits GPU processing power that speeds up the process significantly.

If you have decided to switch on GPUs to boost the data processing with Metashape, it is recommended to uncheck the "Use CPU when performing GPU accelerated processing" option, provided that at least one discrete GPU is utilized for processing. *(Preference settings in the [Metashape User Manual](https://www.agisoft.com/pdf/metashape-pro_1_5_en.pdf), p. 15)*
To enable all detected GPUs, set one bit per device in the GPU mask:

```
MS.app.gpu_mask = 2 ** len(MS.app.enumGPUDevices()) - 1  # activate all available GPUs
if MS.app.gpu_mask <= 1:       # (faster with 1, no difference with 0 GPUs)
    MS.app.cpu_enable = True   # enable CPU for GPU accelerated processing
elif MS.app.gpu_mask > 1:      # (faster when multiple GPUs are present)
    MS.app.cpu_enable = False  # disable CPU for GPU accelerated tasks
```

#### Loading images

```
import os

datadir = "/absolute/path/to/your/input/directory/with/photos"  # directory with image inputs
photo_files = os.listdir(datadir)                               # list of filenames for photo set
photos = [os.path.join(datadir, p) for p in photo_files]        # convert to full paths
```

### Step 2: Generating Sparse Point Cloud (SPC)


### Step 3: Generating Dense Point Cloud (DPC)


### Step 4: Generating a Surface: Mesh or DEM

diff --git a/IntroPhotogrammetry/Metashape/assets/images/ActivateDesktop.png b/IntroPhotogrammetry/Metashape/assets/images/ActivateDesktop.png
deleted file mode 100644
index cb98604..0000000
Binary files a/IntroPhotogrammetry/Metashape/assets/images/ActivateDesktop.png and /dev/null differ
diff --git a/IntroPhotogrammetry/Metashape/assets/images/AtlasDesktop.png b/IntroPhotogrammetry/Metashape/assets/images/AtlasDesktop.png
deleted file mode 100644
index c543378..0000000
Binary files a/IntroPhotogrammetry/Metashape/assets/images/AtlasDesktop.png and /dev/null differ
diff --git a/IntroPhotogrammetry/Metashape/assets/images/AtlasOnDemand.png b/IntroPhotogrammetry/Metashape/assets/images/AtlasOnDemand.png
deleted file mode 100644
index 76d900f..0000000
Binary files a/IntroPhotogrammetry/Metashape/assets/images/AtlasOnDemand.png and /dev/null differ
diff --git a/IntroPhotogrammetry/Metashape/assets/images/DesktopScreen.png b/IntroPhotogrammetry/Metashape/assets/images/DesktopScreen.png
deleted file mode 100644
index 1039487..0000000
Binary files a/IntroPhotogrammetry/Metashape/assets/images/DesktopScreen.png and /dev/null differ
diff --git a/IntroPhotogrammetry/Metashape/assets/images/LaunchAgisoft.png b/IntroPhotogrammetry/Metashape/assets/images/LaunchAgisoft.png
deleted file mode 100644
index ff4b07e..0000000
Binary files a/IntroPhotogrammetry/Metashape/assets/images/LaunchAgisoft.png and /dev/null differ
diff --git a/IntroPhotogrammetry/Metashape/assets/images/ODDform.png b/IntroPhotogrammetry/Metashape/assets/images/ODDform.png
deleted file mode 100644
index 407c524..0000000
Binary files a/IntroPhotogrammetry/Metashape/assets/images/ODDform.png and /dev/null differ
diff --git a/IntroPhotogrammetry/Metashape/assets/images/metaUsage.png b/IntroPhotogrammetry/Metashape/assets/images/metaUsage.png
deleted file mode 100644
index 2f64841..0000000
Binary files a/IntroPhotogrammetry/Metashape/assets/images/metaUsage.png and /dev/null differ
diff --git a/IntroPhotogrammetry/Metashape/assets/images/terminalLin.png b/IntroPhotogrammetry/Metashape/assets/images/terminalLin.png
deleted file mode 100644
index dca042b..0000000
Binary files a/IntroPhotogrammetry/Metashape/assets/images/terminalLin.png and /dev/null differ
diff --git a/IntroPhotogrammetry/Metashape/assets/images/terminalMac.png b/IntroPhotogrammetry/Metashape/assets/images/terminalMac.png
deleted file mode 100644
index eaa5852..0000000
Binary files a/IntroPhotogrammetry/Metashape/assets/images/terminalMac.png and /dev/null differ
diff --git a/IntroPhotogrammetry/Metashape/assets/images/terminalWin.png b/IntroPhotogrammetry/Metashape/assets/images/terminalWin.png
deleted file mode 100644
index 1749914..0000000
Binary files
a/IntroPhotogrammetry/Metashape/assets/images/terminalWin.png and /dev/null differ diff --git a/IntroPhotogrammetry/Metashape/scripts/01_metashape_SPC.py b/IntroPhotogrammetry/Metashape/scripts/01_metashape_SPC.py deleted file mode 100755 index ff6834b..0000000 --- a/IntroPhotogrammetry/Metashape/scripts/01_metashape_SPC.py +++ /dev/null @@ -1,592 +0,0 @@ -################################################################################ -# ------ PhotoScan workflow Part 1: -------------------------------------------- -# ------ * Image Quality analysis, --------------------------------------------- -# ------ * Camera Alignment analysis, ------------------------------------------ -# ------ * Reprojection Error Analysis, ---------------------------------------- -# ------ * Sparse Point Cloud Creation, ---------------------------------------- -# ------ * Reference settings -------------------------------------------------- -################################################################################ - -# IMPORTS # -import os -import sys -import math -import Metashape as MS -from datetime import datetime - -#MS.app.console_pane.clear() # comment out when using ISCA - -# USER-CUSTOMIZABLE GLOBAL VARIABLES -config_file = "config_file.csv" # config file with variable-value pairs; must be in the same folder as this script -logfile = open("log.txt", "a") # file to which verbose mode notes are saved [INFO, WARNING, ERROR] - -# AUTOMATIC GLOABL VARIABLES -path = os.path.dirname(os.path.realpath(__file__)) # derive the path of the directory, where the script and config are stored -params = ["workdir", "doc_title", "data_dir", "coord_system", "spc_quality", - "marker_coord_file", "marker_coord_system", "export_dir", - "est_img_quality", "img_quality_cutoff", "reprojection_error", - "rolling_shutter", "revise_altitude", "altitude_adjust", "depth_filter", - "marker_type", "marker_tolerance", "marker_min_size", "marker_max_res", - "marker_min_projections", "marker_projection_error", "marker_gcp_distance"] - - -# MAIN FUNCTION manages calls to subsequent subprocesses: -# STEP 0: loading config file -# STEP 1: setting up Metashape application -# STEP 2: loading & inspecting photos -# STEP 3: preprocessing of the images -# STEP 4: building sparse points cloud (SPC) -# STEP 5: filtering reprojection errors -# STEP 6: getting number of NOT aligned cameras -# STEP 7: setting up reference settings -# STEP 8: detecting markers -def script_setup(): - startTime = datetime.now() # calculation start time - logfile.write("Script start time: " + str(startTime) + "\n") - -# - STEP 0: Loading config_file and preparing list of photo inputs - print("\nSTEP 0: Loading config_file and preparing list of photo inputs...") - logfile.write("\nSTEP 0: Load config_file and prepare list of photo inputs\n") - try: - config, photos = load_config_file(path, config_file) - name = "/" + config['doc_title'] + ".psx" - except: - print("ERROR: STEP 0 FAILED!\n-- Please check the " + logfile + " for the details of the error.") - logfile.write("EXIT: STEP 0 HAS FAILED!\n") - sys.exit(1) - - -# - STEP 1: Seting up Metashape application: doc, chunk - print("\nSTEP 1: Seting up Metashape application: doc, chunk...") - logfile.write("\nSTEP 1: Set up Metashape application: doc, chunk\n") - try: - doc = MS.app.document # create Metashape (MS) application object - ## Provide CPU/GPU settings - MS.app.gpu_mask = 2 ** len(MS.app.enumGPUDevices()) - 1 # activate all available GPUs - if MS.app.gpu_mask <= 1: # (faster with 1 no difference with 0 
GPUs) - MS.app.cpu_enable = True # enable CPU for GPU accelerated processing - elif MS.app.gpu_mask > 1: # (faster when multiple GPUs are present) - MS.app.cpu_enable = False # disable CPU for GPU accelerated tasks -# ----- WARNING: you need to delete the original automatically created chunk when Metashape opens if running the script from tools - chunk = doc.addChunk() # create Metashape (MS) chunk - except: - print("ERROR: STEP 1 HAS FAILED!\n-- Please check the " + logfile + " for the details of the error.") - logfile.write("ERROR: Metashape could NOT create a doc or chunk.\nEXIT: STEP 1 HAS FAILED!\n") - sys.exit(1) - - -# - STEP 2: Loading & Inspecting photos - print("\nSTEP 2: Loading & Inspecting photos...") - logfile.write("\nSTEP 2: Load & Inspect photos\n") - try: - load_photos(chunk, config, photos) - except: - print("ERROR: STEP 2 HAS FAILED! Loading photos has failed.") - logfile.write("EXIT: STEP 2 HAS FAILED!\n") - sys.exit(1) - - - -# - STEP 3: Preprocess images - print("\nSTEP 3: Preprocessing images results...") - logfile.write("\nSTEP 3: Preprocess images results\n") - try: - orig_n_cams, n_filter_removed, perc_filter_removed, real_qual_thresh = preprocess(config['est_img_quality'], float(config['img_quality_cutoff']), chunk) - except: - print("ERROR: STEP 3 HAS FAILED!\n-- Please check the " + logfile + " for the details of the error.") - logfile.write("EXIT: STEP 3 HAS FAILED!\n") - sys.exit(1) - - -# - STEP 4: Build Sparse Point Cloud (SPC) - print("\nSTEP 4: Building Sparse Point Cloud...") - logfile.write("\nSTEP 4: Build Sparse Point Cloud\n") - try: - points, projections = build_SPC(chunk, config['spc_quality']) - except: - print("ERROR: STEP 4 HAS FAILED!\n-- Please check the " + logfile + " for the details of the error.") - logfile.write("EXIT: STEP 4 HAS FAILED!\n") - sys.exit(1) - - -# - STEP 5: Filter reprojection errors - print("\nSTEP 5: Filtering reprojection errors...") - logfile.write("\nSTEP 5: Filter reprojection errors\n") - try: - total_points, perc_ab_thresh, nselected = filter_reproj_err(chunk, config['reprojection_error']) - except: - print("ERROR: STEP 5 HAS FAILED!\n-- Please check the " + logfile + " for the details of the error.") - logfile.write("EXIT: STEP 5 HAS FAILED!\n") - sys.exit(1) - - -# - STEP 6: Get number of NOT aligned cameras - print("\nSTEP 6: Getting number of NOT aligned cameras...") - logfile.write("\nSTEP 6: Get number of NOT aligned cameras\n") - try: - n_aligned, n_not_aligned = count_aligned(chunk) - except: - print("ERROR: STEP 6 HAS FAILED!\n-- Please check the " + logfile + " for the details of the error.") - logfile.write("EXIT: STEP 6 HAS FAILED!\n") - sys.exit(1) - - -# - STEP 7: Set up reference settings & Export settings - print("\nSTEP 7: Setting up reference settings & Exporting settings...") - logfile.write("\nSTEP 7: Set up reference settings & Export settings\n") - try: - ref_setting_setup(chunk, points, projections) - export_settings(orig_n_cams, n_filter_removed, perc_filter_removed, - real_qual_thresh, n_not_aligned, total_points, - nselected, config['workdir'], config['doc_title']) - except: - print("ERROR: STEP 7 HAS FAILED!\n-- Please check the " + logfile + " for the details of the error.") - logfile.write("EXIT: STEP 7 HAS FAILED!\n") - sys.exit(1) - - -# - STEP 8: Detect Markers - print("\nSTEP 8: Detect Markers...") - logfile.write("\nSTEP 8: Detect Markers\n") - try: - detect_markers(chunk, config) - except: - print("ERROR: STEP 8 HAS FAILED!\n-- Please check the " + logfile + " for the details of 
the error.") - logfile.write("ERROR: There was a problem with marker detection.\nEXIT: STEP 8 HAS FAILED!\n") - sys.exit(1) - - -# - STEP 9: Save Document - if os.path.isdir(config['workdir']): - try: - doc.save(config['workdir'] + name) - except: - print("WARNING: The Metashape license is NOT available. The script can NOT save the results.") - logfile.write("WARNING: The Metashape license is NOT available. Please check if there is a problem with the license server.\n" + - "The results of your Metashape analysis are NOT saved to a file: " + config['workdir'] + name + "\n") - sys.exit(1) - else: - print("WARNING: The path to the working directory: " + config['workdir'] + " does NOT exist. " + - "The doc can NOT be saved. Please provide the correct path.") - logfile.write("ERROR: STEP 9 HAS FAILED!\n--The doc could NOT be saved." + - "-- Please check that the path you want to save the file to is correct: " + config['workdir'] + name + "\n") - - -# - STEP 10: Get Execution Time - logfile.write("\nTotal Execution Time: " + str(datetime.now() - startTime) + "\n") - logfile.write("\nPART 1 of the PhotoScan workflow completed successfully!\n") - logfile.close() - print("\nPART 1 of the PhotoScan workflow completed successfully!") - print("\nNow it's time to do some manual cleaning as per the protocol.") - - - -#----------------- Section of functions defining subprocesses -----------------# - -# FUNCTION for STEP 0 - LOAD CONFIG FILE & PREPARE LIST OF PHOTOS -def load_config_file(path, config_file): - config = {} - # Load 'variable':'value' pairs from the input config_file into the config dictionary - try: - config_path = str(path + "/" + config_file).replace('//', '/') - with open(config_path, 'r') as f: - for row in f: - tokens = row.split(',')[:2] - if tokens[0].strip() in params: - config[tokens[0].strip()] = tokens[1].strip() - if len(config) == len(params): - logfile.write("INFO: The loaded config variables include: \n") - for i in config: - logfile.write(" - " + i + " : " + config[i] + " \n") - else: - for i in config.keys(): - if not i in params: - logfile.write("ERROR: Your config file does NOT include the required variable: " + i + - " . Please check the spelling carefully.\n") - sys.exit(1) - except Exception: - logfile.write("ERROR: The " + config_file + " does NOT exist on the " + path + " path.\n") - logfile.write(" Please copy the required configuration file to the " + path + " directory.\n") - print("\nERROR in the STEP 0, when loading the variables from the: " + config_file) - - # Locate photos and prepare list of photo inputs - datadir = config['data_dir'] - if os.path.isdir(datadir) == False: - datadir = str(config['workdir'] + "/" + datadir).replace('//', '/') - config['data_dir'] = datadir - try: - os.path.isdir(datadir) - photos = os.listdir(datadir) # get the list of photos filenames - photos = [os.path.join(datadir, p) for p in photos] # convert names in the list to full paths - logfile.write("INFO: " + str(len(photos)) + " photos were found on the path: " + datadir + "\n") - except Exception: - logfile.write("ERROR: The following path to folder with photos does NOT exist: " + datadir + "\n") - print("ERROR in the STEP 0, when preparing the list of photos. 
The following path to folder with photos does NOT exist: " + datadir) - - return config, photos - - -# FUNCTION for STEP 2 - LOAD & INSPECT PHOTOS -def load_photos(chunk, config, photos): - # 1: Add photos to chunk - try: - chunk.addPhotos(photos) - print("-- Photos added successfully.") - except Exception: - logfile.write("ERROR: The Metashape chunk.addPhotos() function has failed.\n") - - # 2: Enable rolling shutter compensation - if config['rolling_shutter'] == 'TRUE': - chunk.sensors[0].rolling_shutter = True - print("-- Enabled rolling shutter compenstation.") - - # 3: Define desired Coordinate System and try to set the coordinates for cameras - new_crs = MS.CoordinateSystem(config['coord_system']) - try: - for camera in chunk.cameras: - camera.reference.location = MS.CoordinateSystem.transform(camera.reference.location,chunk.crs, new_crs) - print("-- Defined coordinate system.") - except Exception: - logfile.write("WARNING: Images do not have projection data... No Worries! It will continue without!") - - # 4: Correct the DJI absolute altitude problems - # -- WARNING: This portion of the script needs to be checked to see how it interacts with non DJI drone data - print("-- Trying to correct the DJI absolute altitude problems...") - for camera in chunk.cameras: - if not camera.reference.location: - continue - elif "DJI/RelativeAltitude" in camera.photo.meta.keys() and config['revise_altitude'] == "TRUE": - z = float(camera.photo.meta["DJI/RelativeAltitude"]) + float(config['altitude_adjust']) - camera.reference.location = (camera.reference.location.x, camera.reference.location.y, z) - print(" DJI corrected successfully.") - - # 5: [OPTIONAL] Import of markers; if a path is given markers are added; if no markers are given then pass - print("-- Trying to import markers...") - marker_coord = config['marker_coord_file'] - if marker_coord != "NONE" and os.path.isfile(marker_coord) == False: - marker_coord = str(config['data_dir'] + "/" + config['marker_coord_file']).replace('//', '/') - try: - os.path.isfile(marker_coord) - config['marker_coord_file'] = marker_coord - chunk.importReference(marker_coord, columns="nxyzXYZ", delimiter=",", - group_delimiters=False, skip_rows=1, - ignore_labels=False, create_markers=True, threshold=0.1) - logfile.write("INFO: Import of the reference for markers was successful.\n") - print(" Markers added successfully.") - - if config['marker_coord_system'].strip() != config['coord_system'].strip(): # if marker and project crs match then pass otherwise convert marker crs - for marker in chunk.markers: # this needs testing but should theoretically work... - marker.reference.location = new_crs.project(chunk.crs.unproject(marker.reference.location)) - logfile.write("INFO: Import of the marker.reference.location was successful.\n") - except: - logfile.write("WARNING: Import of the reference for markers has failed... No Worries! 
It will continue without!\n") - - # 6: Set project coordinate system - chunk.crs = new_crs - chunk.updateTransform - print("-- Project coordinate system set successfully.") - - -# FUNCTION for STEP 3 - PREPROCESS IMAGES -def preprocess(est_img_qual, img_qual_thresh, chunk): - - # Estimating Image Quality and excluding poor images - if est_img_qual == "TRUE": - print("-- Running image quality filter...") - chunk.analyzePhotos() # MSCHANGE - - kept_img_quals = [] - dumped_img_quals = [] - - for camera in chunk.cameras: - IQ = float(camera.meta["Image/Quality"]) - if IQ < float(img_qual_thresh): - camera.enabled = False - dumped_img_quals.append(IQ) - else: - kept_img_quals.append(IQ) - - n_filter_removed = len(dumped_img_quals) - orig_n_cams = len(kept_img_quals) + n_filter_removed - perc_filter_removed = round(((n_filter_removed/orig_n_cams)*100), 1) - real_qual_thresh = min(kept_img_quals) - - logfile.write("- number of cameras disabled = " + str(n_filter_removed) + "\n") - logfile.write("- percent of cameras disabled = " + str(perc_filter_removed) + "%\n") - logfile.write("- number of cameras enabled = " + str(len(kept_img_quals)) + "\n\n") - - for cutoff in [0.9, 0.8, 0.7, 0.6, 0.5]: - length = len([i for i in kept_img_quals if i >= cutoff]) - logfile.write("-- number of photos with image quality >= " + - str(cutoff)+ " : " + str(length) + "\n") - if length > 0: - logfile.write("-- percent of cameras with image quality >= " + - str(cutoff)+ " : " + str(round((length/orig_n_cams)*100, 1)) + "%\n") - else: - logfile.write("Preprocess images skipped.\n") - print("-- Image quality filtering skipped.") - chunk.estimateImageQuality() - all_imgs_qual = [] - for camera in chunk.cameras: - IQ = float(camera.meta["Image/Quality"]) - all_imgs_qual.append(IQ) - - orig_n_cams = len(chunk.cameras) - n_filter_removed = "no_filter_applied" - perc_filter_removed = "no_filter_applied" - real_qual_thresh = min(all_imgs_qual) - - return orig_n_cams, n_filter_removed, perc_filter_removed, real_qual_thresh - - -# FUNCTION for STEP 4 - BUILD SPARSE POINTS CLOUD -def build_SPC(chunk, spc_quality): - print("STEP 4: Building Sparse Point Cloud...") - scale = {"LowestAccuracy" : 5, "LowAccuracy" : 4, "MediumAccuracy" : 3, - "HighAccuracy" : 2, "HighestAccuracy" : 1} - - # Match Photos - # WARNING: Accuracy changed to downscale in Metashape 1.6.4 removed preselection MSCHANGE - if spc_quality in scale.keys(): - chunk.matchPhotos(downscale=scale[spc_quality], generic_preselection=True, - reference_preselection=True, filter_mask=False, - keypoint_limit=40000, tiepoint_limit=8000) - logfile.write("Photos matched with customized SPC quality: " + spc_quality + ".\n") - else: - print("WARNING: The entered value for variable: spc_quality is invalid. 
The default setting will be used: HighestAccuracy.") - chunk.matchPhotos(downscale=1, generic_preselection=True, - reference_preselection=True, filter_mask=False, - keypoint_limit=40000, tiepoint_limit=8000) - logfile.write("Photos matched with default SPC quality: HighestAccuracy.\n") - - # Align Cameras - try: - chunk.alignCameras(adaptive_fitting=False) - point_cloud = chunk.point_cloud - points = point_cloud.points - projections = chunk.point_cloud.projections - logfile.write("Cameras aligned successfully.\n") - except: - logfile.write("ERROR: There was a problem while aligning the cameras and/or creating projections.\n") - print("ERROR in the STEP 4, when aligning cameras.") - - return points, projections - - -# FUNCTION for STEP 5 - FILTER REPROJECTION ERRORS -# Filter points by their reprojection error and remove those with values > 0.45 (or the limit set in the input file) -def filter_reproj_err (chunk, reproj_err_limit): - - logfile.write("Filtering tiepoints by reprojection error (threshold = " + - str(reproj_err_limit) + ").\n") - reproj_err_limit = float(reproj_err_limit) - f = MS.PointCloud.Filter() - f.init(chunk, MS.PointCloud.Filter.ReprojectionError) - f.selectPoints(reproj_err_limit) - nselected = len([p for p in chunk.point_cloud.points if p.selected]) - total_points = len(chunk.point_cloud.points) - perc_ab_thresh = round((nselected/total_points*100), 1) - - if perc_ab_thresh > 20: - print ("---------------------------------------------------------") - print ("WARNING >20% OF POINTS ABOVE REPROJECTION ERROR THRESHOLD") - print ("---------------------------------------------------------") - logfile.write("WARNING: more than 20% of points above reprojection error threshold!\n") - - logfile.write("Number of points below threshold: " + str(reproj_err_limit) + - "\nReprojection Error Limit: " + str(nselected) + "/" + - str(total_points) + " (" + str(perc_ab_thresh) + "%)\n") - - print("Removing points above error threshold...") - f.removePoints(reproj_err_limit) - logfile.write("Removed points above error threshold.\n") - - return total_points, perc_ab_thresh, nselected - - -# FUNCTION for STEP 6 - COUNT NOT ALIGNED CAMERAS -def count_aligned(chunk): - aligned_list = [] - for camera in chunk.cameras: - if camera.transform: - aligned_list.append(camera) - - not_aligned_list = [] - for camera in chunk.cameras: - if not camera.transform: - not_aligned_list.append(camera) - - n_aligned = len(aligned_list) - n_not_aligned = len(not_aligned_list) - sum = n_aligned + n_not_aligned - try: - val = n_aligned / sum * 100 - logfile.write("Number (%) of aligned cameras is: " + str(n_aligned) + " (" + str(val)+ "%)\n") - except ZeroDivisionError: - logfile.write("WARNING: No cameras are aligned!") - try: - val = n_not_aligned/sum*100 - logfile.write("Number of cameras NOT aligned is: " + str(n_not_aligned) + " (" + str(val)+ "%)\n") - except ZeroDivisionError: - logfile.write("WARNING: No cameras loaded - something isn't aligned...\n") - - return (n_aligned, n_not_aligned) - - -# FUNCTION for STEP 7A - SET UP REFERENCE SETTINGS -def ref_setting_setup(chunk, points, projections): - cam_loc_acc = [15, 15, 20] # xyz metres - mark_loc_acc = [0.02, 0.02, 0.05] # xyz metres - mark_proj_acc = 2 # pixels - - chunk.camera_location_accuracy = cam_loc_acc # units in m - chunk.marker_location_accuracy = mark_loc_acc # SINGLE VALUE USED WHEN MARKER-SPECIFIC ERRORS ARE UNAVAILABLE - chunk.marker_projection_accuracy = mark_proj_acc # FOR MANUALLY PLACED MARKERS - - total_error = 
calc_reprojection_error(chunk, points, projections) # calculate reprojection error - reproj_error = sum(total_error)/len(total_error) # get average rmse for all cameras - - logfile.write("Mean reprojection error for point cloud: " + str(round(reproj_error, 3)) + "\n") - logfile.write("Max reprojection error is: " + str(round(max(total_error), 3)) + "\n") - - if reproj_error < 1: - reproj_error = 1.0 - chunk.tiepoint_accuracy = round(reproj_error, 2) - - -# INTERNAL FUNCTION for STEP 7A - CALC REPROJECTION ERROR -def calc_reprojection_error(chunk, points, projections): - npoints = len(points) - photo_avg = [] - - for camera in chunk.cameras: - if not camera.transform: - continue - point_index = 0 - photo_num = 0 - photo_err = 0 - for proj in projections[camera]: - track_id = proj.track_id - while point_index < npoints and points[point_index].track_id < track_id: - point_index += 1 - if point_index < npoints and points[point_index].track_id == track_id: - if not points[point_index].valid: - continue - dist = camera.error(points[point_index].coord, proj.coord).norm() ** 2 # get the square error for each point in camera - photo_num += 1 # counts number of points per camera - photo_err += dist # creates list of square point errors - photo_avg.append(math.sqrt(photo_err / photo_num)) # get root mean square error for each camera - - return photo_avg # returns list of rmse values for each camera - - -# FUNCTION for STEP 7B - EXPORT SETTINGS -def export_settings(orig_n_cams, n_filter_removed, perc_filter_removed, - real_qual_thresh, n_not_aligned, total_points, - nselected, workdir, doc_title): - print("Exporting settings to temp_folder...") - - opt_list = {"n_cameras_loaded" : orig_n_cams, - "n_cams_removed_qual_filter" : n_filter_removed, - "%_cams_removed_qual_filter" : perc_filter_removed, - "img_qual_min_val" : real_qual_thresh, - "n_cameras_not_aligned" : n_not_aligned, - "n_points_orig_SPC" : total_points, - "n_points_removed_reproj_filter" : nselected - } - - try: - tmp_path = str(workdir + '/' + doc_title + '.files').replace ('//', '/') - if not os.path.isdir(tmp_path): - os.mkdir(tmp_path) - export_file = open(tmp_path + "/PhSc1_exported_settings.csv", "a") - export_file.write("variable,value\n") - for i in opt_list.keys(): - export_file.write(i + "," + str(opt_list[i]) + "\n") - export_file.close() - - except: - print("\nWARNING: The settings can NOT be exported to the temporary directory: " + tmp_path) - logfile.write("WARNING: The settings can NOT be exported to the temporary directory: " + tmp_path + " .\n") - - -# FUNCTION for STEP 8 - DETECT MARKERS -def detect_markers(chunk, config): - # Detect Markers - try: - chunk.detectMarkers(target_type = getattr(Metashape.TargetType, config['marker_type']), - tolerance = config['marker_tolerance'], - minimum_size = config['marker_min_size'], - maximum_residual = config['marker_max_res']) - logfile.write(str(len(chunk.markers)) + " markers were detected.\n") - except: - print("ERROR in STEP 8, when detecting markers.") - sys.exit(1) - - # Read in the measured GCPs. 
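    # Summary of the block below: markers with fewer than 'marker_min_projections'
    # projections are removed; individual camera projections whose reprojection error
    # exceeds 'marker_projection_error' are set to None; then each surviving marker
    # position is transformed to the project CRS and matched to the nearest measured
    # GCP within 'marker_gcp_distance' meters, whose x/y/z is assigned as the
    # marker's reference location.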
- if len(chunk.markers) > 0: - try: - marker_coords = config['marker_coord_file'] - os.path.isfile(marker_coords) - except: - print("ERROR in STEP 8, when reading the measured GCPs.\n") - logfile.write("ERROR in STEP 8, when reading the measured GCPs.\n" + " The provided file is: " + marker_coords + - "\n Please make sure you have entered the correct path & file name, and its content is not empty.") - sys.exit(1) - - gcplist = [] - with open(marker_coords, 'r') as f: - for row in f: - gcpline = row.split(',') - if len(gcpline) == 4 and gcpline[0].isnumeric(): - try: - gcplist.append([float(gcpline[1]), float(gcpline[2]), float(gcpline[3])]) - except: - print("WARNING: Skipped a line that might have been a header line or an invalid GCP entry: " + str(row)) - logfile.write(str(len(gcplist)) + " GCP entries were collected.\n") - - # See: https://github.com/campbell-ja/MetashapePythonScripts/blob/main/_Workflow-IntheRound/Metashape_PythonScripts_04-Part_02_DetectMarkers-OptimizeMarkerError.py - # Remove Markers with less than n number of projections - for marker in chunk.markers: - if len(marker.projections) < int(config['marker_min_projections']): - chunk.remove(marker) - logfile.write("Removed marker: " + marker.label + " for having only " + str(len(marker.projections)) + " projections.\n") - if len(gcplist): - for marker in chunk.markers: - if not marker.position: - logfile.write("The marker: " + marker.label + " position is not defined, so skipping.\n") - continue - - position = marker.position - for camera in marker.projections.keys(): - if not camera.transform: - continue - proj = marker.projections[camera].coord - reproj = camera.project(position) - error = (proj - reproj).norm() - # Set the projection to "None" if error greater than > x (default x = 0.5) - if error > float(config['marker_projection_error']): - marker.projections[camera] = None - # Get rid of cameras with too few projections again. - for marker in chunk.markers: - if len(marker.projections) < int(config['marker_min_projections']): - chunk.remove(marker) - logfile.write("Removed marker: " + marker.label + " for having only " + str(len(marker.projections)) + " projections.\n") - # Now let's compare our detected markers to our measured points. - # Need to convert from pixel coordinates to projection. - T = chunk.transform.matrix - pos = chunk.crs.project(T.mulp(marker.position)) - # Look through GCPs... - for gcp in gcplist: - dist = math.sqrt((gcp[0] - pos.x)**2 + (gcp[1] - pos.y)**2) - # If our distance is within... three meters? ...assign this GCPs x/y/z values. 
- if dist < float(config['marker_gcp_distance']): - marker.reference.location = MS.Vector([gcp[0], gcp[1], gcp[2]]) - logfile.write("Changed reference location for marker: " + marker.label + ", " + str(marker.reference.location) + "\n") - break - logfile.write("After this step " + str(len(chunk.markers)) + " markers were kept.\n") - - -# START EXECUTING FROM THE MAIN FUNCTION -if __name__ == '__main__': - script_setup() diff --git a/IntroPhotogrammetry/Metashape/scripts/config_file.csv b/IntroPhotogrammetry/Metashape/scripts/config_file.csv deleted file mode 100644 index 4b6655c..0000000 --- a/IntroPhotogrammetry/Metashape/scripts/config_file.csv +++ /dev/null @@ -1,48 +0,0 @@ -variable,value, # description -workdir,/project/90daydata/isu_gif_vrsc/agisoft/develop, # [required#1] [string] full path to the project folder (make sure to use forward slash) -doc_title,cunliffe_mbs, # [required#1] [string] name of the project (do not add extension) -data_dir,input_data, # [required#1] [string] name of the folder where all images are being stored; must be placed in the project folder (workdir variable) otherwise enter the full path -coord_system,EPSG::32611, # [required#1] [string] the coordinate system EPSG code (OSGB- EPSG:: 27700) EPSG:: 2027 -marker_coord_file,cunliffe_mbs_2019_gcps.csv, # [optional#1] [string] filename of marker coordinates input file; must be placed in the folder with images (data_dir variable) otherwise enter the full path or NONE if not desired -marker_coord_system,EPSG::32611, # [optional#1] [string] enter the EPSG code for the coordinate system that the marker coordinates are measures in -export_dir,exports, # [required#1] [string] name of an export folder (no need to create this - if it doesn't exist it will be created) -est_img_quality,TRUE, # [required#1] [bool] estimate image quality; either TRUE or FALSE for the following options (must be in caps) -img_quality_cutoff,0.5, # [required#1] [float] the PhotoScan image quality threshold (0.5 as default) -reprojection_error,0.45, # [required#1] [float] tie point error reprojection limit used to filter out 'bad' tie points (this may need to be increased if too many points are removed); 0.45 by default -rolling_shutter,FALSE, # [required#1] [bool] enables rolling shutter correction; either TRUE or FALSE for the following options (must be in caps) -revise_altitude,TRUE, # [optional#1] [bool] enables altitude correction in DJI images; either TRUE or FALSE for the following options (must be in caps) -altitude_adjust,2104, # [optional#1] [int] GNSS derived (if possible) absolute height of 'take off point' for DJI Drone only; used to correct DJI camera absolute altitude values -spc_quality,HighestAccuracy, # [required#1] [option] Sparse Point Cloud (SPC) accuracy; select option from: [HighestAccuracy - HighAccuracy - MediumAccuracy - LowAccuracy - LowestAccuracy] -marker_type,CrossTarget, # [required#1] [option] type of targets for detecting markers; select option from: [CircularTarget12bit - CircularTarget14bit - CircularTarget16bit - CircularTarget20bit - CircularTarget - CrossTarget] -marker_tolerance,10, # [required#1] [int] detector tolerance in range 0-100 -marker_min_size,30, # [required#1] [int] minimum target radius in pixels to be detected (CrossTarget type only) -marker_max_res,5, # [required#1] [float] maximum residual for non-coded targets in pixels; required for non-coded type of targets (e.g. 
CircularTarget or CrossTarget) -marker_min_projections,6, # [required#1] [int] minimum number of projections for a marker; markers with fewer projections will be deleted -marker_projection_error,0.5, # [required#1] [float] the difference between marker projection coordinates and camera position -marker_gcp_distance,3.0, # [required#1] [float] distance in meters; the difference between measured gcp points and transformed marker positions -depth_filter,MildFiltering, # [optional#1] [option] filter mode for buildDepthMaps; select option from: [NoFiltering - MildFiltering - ModerateFiltering - AggressiveFiltering] -dpc_quality,UltraQuality, # [optional##] [option] Dense Point Cloud (DPC) quality; select option from: [UltraQuality - HighQuality - MediumQuality - LowQuality - LowestQuality] -export_dpc,TRUE, # [optional##] [bool] export Dense Point Cloud; either TRUE or FALSE for the following options (must be in caps) -build_mesh,TRUE, # [optional##] [bool] build mesh; either TRUE or FALSE for the following options (must be in caps) -mesh_quality,HighFaceCount, # [optional##] [option] mesh face count option; select option from: [LowFaceCount, MediumFaceCount, HighFaceCount] -build_texture,FALSE, # [optional##] [bool] build texture; either TRUE or FALSE for the following options (must be in caps) -build_mosaic,TRUE, # [optional##] [bool] build orthomosaic; either TRUE or FALSE for the following options (must be in caps) -build_dsm,TRUE, # [optional##] [bool] build digital elevation model (DEM); either TRUE or FALSE for the following options (must be in caps) -export_model,FALSE, # [optional##] [bool] export model; either TRUE or FALSE for the following options (must be in caps) -export_mosaic_lr,TRUE, # [optional##] [bool] export orthomosaic LowRes; either TRUE or FALSE for the following options (must be in caps) -mosaic_lr_res,0.2, # [optional##] [float] options for the resolution of orthomosaic and DSM exports in metres -mosaic_lr_write,FALSE, # [optional##] [bool] write big tiff; keep FALSE unless dataset is expected to be very large; either TRUE or FALSE for the following options (must be in caps) -export_mosaic_hr,TRUE, # [optional##] [bool] export orthomosaic in high resolution; either TRUE or FALSE for the following options (must be in caps) -mosaic_hr_res,0.02, # [optional##] [float] orthomosaic HighRes resolution -mosaic_hr_write,FALSE, # [optional##] [bool] write big tiff; keep FALSE unless dataset is expected to be very large; either TRUE or FALSE for the following options (must be in caps) -export_dsm_lr,TRUE, # [optional##] [bool] export DSM in low resolution; either TRUE or FALSE for the following options (must be in caps) -dsm_lr_res,1, # [optional##] [float] DSM LowRes resolution -dsm_lr_write,FALSE, # [optional##] [bool] write big tiff; keep FALSE unless dataset is expected to be very large; either TRUE or FALSE for the following options (must be in caps) -export_dsm_hr,TRUE, # [optional##] [bool] export DSM in high resolution; either TRUE or FALSE for the following options (must be in caps) -dsm_hr_res,0.05, # [optional##] [float] DSM HighRes resolution -dsm_hr_write,FALSE, # [optional##] [bool] write big tiff; keep FALSE unless dataset is expected to be very large; either TRUE or FALSE for the following options (must be in caps) -export_report,TRUE, # [optional##] [bool] export report; either TRUE or FALSE for the following options (must be in caps) -depth_map_enable,FALSE, # [optional##] [bool] sets a threshold for the number of pairs allowed for depth map creation; if FALSE 
then this is unlimited by default -depth_map_limit,80, # [optional##] [int] if not enabled this value does nothing number of pairs allowed for depth filtering; 80 is conservative -dpc_limit_enable,FALSE, # [optional##] [bool] sets a threshold for the number of pairs allowed for dense cloud creation; if FALSE then this is unlimited by default -dpc_limit_value,80, # [optional##] [int] if not enabled this value does nothing number of pairs allowed for depth filtering; 80 is conservative diff --git a/IntroPhotogrammetry/Metashape/scripts/submit_metashape.sh b/IntroPhotogrammetry/Metashape/scripts/submit_metashape.sh deleted file mode 100644 index fcb7d2f..0000000 --- a/IntroPhotogrammetry/Metashape/scripts/submit_metashape.sh +++ /dev/null @@ -1,26 +0,0 @@ -#!/bin/bash - -# job standard output will go to the file slurm-%j.out (where %j is the job ID) -# DEFINE SLURM VARIABLES -#SBATCH --job-name="metashape" -#SBATCH --partition=atlas # GPU node(s): 'atlas' or 'gpu' partition -#SBATCH --nodes=1 # number of nodes -#SBATCH --ntasks=48 # 24 processor core(s) per node X 2 threads per core -#SBATCH --time=01:30:00 # walltime limit (HH:MM:SS) -#SBATCH --account=isu_gif_vrsc -#SBATCH --mail-user=your.email@usda.gov # email address -#SBATCH --mail-type=BEGIN # email notice of job started -#SBATCH --mail-type=END # email notice of job finished -#SBATCH --mail-type=FAIL # email notice of job failure - - -# LOAD MODULES, INSERT CODE, AND RUN YOUR PROGRAMS HERE -module load gcc/10.2.0 # load gcc dependency -module load metashape # load metashape, then run script with x11 turned off - -# DEFINE CODE VARIABLES -script_dir=/project/90daydata/isu_gif_vrsc/agisoft # path to scripts in your workdir (can check with 'pwd' in terminal) -script_name=01_metashape_SPC.py # the filename of the python script you want to run - -# DEFINE METASHAPE COMMAND -metashape -r $script_dir/$script_name -platform offscreen
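For completeness, here is a rough standalone sketch of how `01_metashape_SPC.py` consumes a config file like `config_file.csv` above; the simplified `load_config` helper below is hypothetical (the real `load_config_file()` additionally validates every key against the expected parameter list and logs the result).

```
# simplified sketch of the CSV config parsing in 01_metashape_SPC.py;
# each row has the form "variable,value, # description" and only the
# first two comma-separated fields are used
def load_config(config_path):
    config = {}
    with open(config_path, 'r') as f:
        for row in f:
            tokens = row.split(',')[:2]
            if len(tokens) == 2 and tokens[0].strip() != "variable":  # skip header row
                config[tokens[0].strip()] = tokens[1].strip()
    return config

config = load_config("config_file.csv")
print(config["workdir"], config["spc_quality"])  # e.g. project path and SPC accuracy
```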