# Mesh2HRTF Export Parameters

This section describes in more detail all parameters available in the Mesh2HRTF export menu in Blender after preparing the project. It is advised that you go through the Tutorials first.
### Title

The title of your project. It will appear in some of the text files that are generated upon export.
### BEM method

This selects the kind of Boundary Element Method (BEM) that is used for numerically simulating the HRTF.

- BEM: The pure BEM requires the longest computation time and the most memory for the calculations.
- SL-FMM-BEM: The Single-Level Fast Multipole Method (SL-FMM) BEM is faster and requires less memory.
- ML-FMM-BEM: The Multi-Level (ML) FMM-BEM is the fastest computation method and uses the least memory.
### Source type

This defines the acoustic source that is used to generate the sound field.

Please note that numerical HRTF simulations exploit the concept of acoustic reciprocity to speed up the simulation. This means that the positions of sources and receivers are interchanged: In most acoustic HRTF measurements, the receiver (microphone) is located in the ear of the test subject and the sources surround it. In most numerical simulations, the source is located in the ear of the test subject (3D mesh) and the receivers (termed EvaluationGrid in Mesh2HRTF) surround it. Reciprocity is used because only one simulation is needed to calculate the sound field of a source at an arbitrary number of receiver positions, whereas N simulations are needed to calculate the sound field of N sources at a single receiver position.
- Both ears: A velocity boundary condition is assigned to the faces of the mesh that carry the materials 'Left ear' and 'Right ear'. This can be thought of as if these elements vibrate like a loudspeaker to radiate sound; note that multiple mesh elements can be assigned to each of these materials. The elements act as the sources, and corresponding subfolders 'source_1' and 'source_2' are created in the 'NumCalc' folder upon project export. The left and right ear responses need to be calculated in separate NumCalc instances and can later be combined into one .sofa file using merge_sofa_files from the Python API, as sketched after this list.
- Left ear: Like 'Both ears', but only mesh elements with the user-assigned material 'Left ear' act as the source, resulting in only one subfolder 'source_1'.
- Right ear: Like 'Both ears', but only mesh elements with the user-assigned material 'Right ear' act as the source, resulting in only one subfolder 'source_1'.
- Point source: This uses the analytical expression of an ideal point source to generate the sound field. The position of the point source is defined by a user-placed point light named 'Point source' in the Blender scene, as shown in this tutorial. Because the point source can be placed anywhere, it can be used for reciprocal simulations (by placing it in the ear) and for non-reciprocal simulations (by placing it somewhere else).
- Plane wave: This uses the analytical expression of an ideal plane wave to generate the sound field. The direction of travel of the wave is designated by the location (not the rotation) of a user-placed area light named 'Plane wave' in the Blender scene, which defines the coordinates of the wave's normal vector.
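A minimal sketch of the merging step mentioned above for the 'Both ears' case. The folder layout and the exact call signature of merge_sofa_files are assumptions here and may differ between Mesh2HRTF versions; check help(mesh2hrtf.merge_sofa_files) for the version you have installed.

```python
import mesh2hrtf as m2h

# Assumption: the two folders contain the left and right ear results after
# the NumCalc runs were processed into SOFA files. The signature below is a
# sketch, not the definitive API -- verify it for your installed version.
m2h.merge_sofa_files(
    ("path/to/LeftEarProject", "path/to/RightEarProject"),  # hypothetical paths
    pattern="HRTF")  # assumed: only merge SOFA files whose names contain 'HRTF'
```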
### Mesh2HRTF path

This tells Blender where your mesh2hrtf folder is located. This is the folder containing the sub-folders Mesh2Input, NumCalc, and Output2HRTF (e.g., 'SomeDrive/Users/PeterPan/Mesh2HRTF/mesh2hrtf').
### Pictures

If this box is checked, pictures of your scene will be rendered and saved upon project export. Note that this increases the export time.
### Reference

If this box is checked, the simulated sound pressure (i.e., the HRTF) is referenced according to the classic HRTF definition. This means that the pressure at the ear channels is divided by the pressure at the center of the head with the head being absent (see, for example: H. Møller, "Fundamentals of binaural technology," Appl. Acoust., vol. 36, pp. 171–218, 1992, doi: 10.1016/0003-682x(92)90046-u).

This has two effects: First, it normalizes the HRTF level, so that the HRTF magnitude approaches 1 (0 dB) at low frequencies. Without referencing, the HRTF level depends on the source type and the area of the vibrating element(s). Second, it removes the acoustic travel time from the source to the ear. Without referencing, the acoustic travel time depends on the distances from the points in the evaluation grid to the ear channel.
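Written as a formula (following Møller, 1992), the referenced quantity is

$$\mathrm{HRTF}(f, \vartheta, \varphi) = \frac{p_\mathrm{ear}(f, \vartheta, \varphi)}{p_\mathrm{center}(f)},$$

where $p_\mathrm{ear}$ is the sound pressure at the ear channel for a source at direction $(\vartheta, \varphi)$ and $p_\mathrm{center}$ is the sound pressure at the position of the head center with the head absent.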
In any case, the HRTFs will be saved as a SOFA file inside 'YourProjectFolder/Output2HRTF/HRTF*.sofa'.
### Compute HRIRs

If this box is checked, HRIRs will be calculated from the complex sound pressure, i.e., the HRTF. This requires that the HRTFs were referenced (see above) and that they were calculated for equidistant frequencies, i.e., f = {1k, 2k, ..., Nk} with k > 0.

The HRIR is computed in three steps: First, the 0 Hz frequency bin is set to 1 (0 dB). Second, the inverse Fourier transform is applied to obtain impulse responses. Third, a small delay is applied to make sure that the peaks occur at the beginning of the impulse responses (due to the referencing, they were shifted to the end of the impulse responses if the ear channel was closer to a point in the evaluation grid than the center of the head).
If this is checked, the HRIRs will be saved as a SOFA file inside 'YourProjectFolder/Output2HRTF/'.
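The three steps can be sketched in a few lines of Python. This is a minimal sketch with hypothetical names, not Mesh2HRTF's actual implementation; in particular, the size of the final shift is an assumption.

```python
import numpy as np

def hrtf_to_hrir(hrtf, shift=30):
    """hrtf: complex pressures at equidistant frequencies k, 2k, ..., N*k."""
    # 1. prepend the missing 0 Hz bin and set it to 1 (0 dB)
    spectrum = np.concatenate(([1.0], hrtf))
    # 2. inverse Fourier transform of the single-sided spectrum
    hrir = np.fft.irfft(spectrum)
    # 3. small cyclic delay so that wrapped peaks sit at the beginning again
    return np.roll(hrir, shift)
```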
### Unit

This tells Mesh2HRTF whether the unit of your scene/project is millimeters (mm) or meters (m). If the unit is millimeters, the ear channels should be located approximately at y-coordinates of +/- 66 mm. If the unit is meters, they should be located at approximately +/- 0.066 m.
### Speed of sound

The speed of sound in meters per second that is used for the numerical simulation.
### Density of air

The density of air in kg per cubic meter that is used for the numerical simulation.
### Evaluation grids

This tells Mesh2HRTF for which evaluation grids the HRTF should be simulated. These correspond to the source positions in an acoustic measurement. There are two ways of defining evaluation grids:
- The grids that are included in 'Mesh2HRTF/mesh2hrtf/Mesh2Input/EvaluationGrids/Data' can be used by giving the name of the containing folder, e.g., 'Default' uses the grid 'Mesh2HRTF/mesh2hrtf/Mesh2Input/EvaluationGrids/Data/Default'.
- Full paths to any folder containing an evaluation grid can be used, e.g., '/SomeDrive/SuperCustomEvalGrid'.
Multiple evaluation grids can be separated by semicolons (';'). Custom evaluation grids can be generated with the function write_evaluationgrid from the Mesh2HRTF_API (see the sketch below).
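A minimal sketch of writing a custom grid, assuming the function is available as mesh2hrtf.write_evaluation_grid and accepts Cartesian points in meters plus an output folder name. The function name and signature may differ in your version, and the ring grid itself is a made-up example.

```python
import numpy as np
import mesh2hrtf as m2h

# hypothetical grid: a horizontal ring of 72 points at 1.2 m distance
azimuth = np.deg2rad(np.arange(0, 360, 5))
points = 1.2 * np.column_stack(
    (np.cos(azimuth), np.sin(azimuth), np.zeros_like(azimuth)))

# assumed signature -- check help(m2h.write_evaluation_grid) for your version
m2h.write_evaluation_grid(points, "EvaluationGrids/HorizontalRing")
```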
### Material search paths

This tells Mesh2HRTF in which folders it will search for materials. Materials can be used to simulate HRTFs with non-rigid boundary conditions, e.g., to account for the high-frequency absorption of hair. Multiple folders can be separated by semicolons (';'). Custom material data can be written with the function write_material from the Mesh2HRTF_API (see the sketch below).
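A minimal sketch of writing custom material data, assuming mesh2hrtf.write_material takes an output file, the kind of boundary condition, and frequency-dependent data. Both the signature and the admittance values here are assumptions.

```python
import numpy as np
import mesh2hrtf as m2h

# hypothetical admittance data mimicking high-frequency absorption of hair
frequencies = np.array([125, 1000, 4000, 8000, 16000])   # in Hz
admittance = np.array([0, .005, .02, .05, .1], complex)  # made-up values

# assumed signature -- check help(m2h.write_material) for your version
m2h.write_material("materials/hair.csv", "admittance", frequencies, admittance,
                   comment="hypothetical hair absorption")
```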
### Frequencies

This section defines the frequencies at which the HRTFs are simulated.
Min frequency: The minimum frequency in Hz.

Max frequency: The maximum frequency in Hz.

Frequency distribution: This drop-down selects how the frequencies are distributed between the minimum and maximum frequency:

- Step size: Calculate HRTFs with a fixed frequency step size, e.g., in steps of 100 Hz between the minimum and maximum frequency.
- Num steps: Calculate HRTFs at N frequencies, equally distributed between the minimum and maximum frequency.

Value: The value for the step size or the number of steps as explained above.
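The two options map onto the following frequency vectors (a small sketch with illustrative values only):

```python
import numpy as np

f_min, f_max = 100, 20000  # example values in Hz

# 'Step size': fixed spacing, here 100 Hz -> 100, 200, ..., 20000 Hz
frequencies_step = np.arange(f_min, f_max + 100, 100)

# 'Num steps': N frequencies equally distributed between f_min and f_max
frequencies_num = np.linspace(f_min, f_max, 200)
```

Note that the 'Compute HRIRs' option above requires frequencies of the form f = {1k, 2k, ..., Nk}, which the 'Step size' option produces if the minimum frequency equals the step size.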