Interpreting End-to-End Deep Learning Models for Acoustic Source Localization using Layer-wise Relevance Propagation
Code repository for the paper Interpreting End-to-End Deep Learning Models for Acoustic Source Localization using Layer-wise Relevance Propagation, EUSIPCO 2024 [1].
For any further info, feel free to contact me at [email protected].
The code requires the following dependencies:
- Python (tested with version 3.9.18)
- NumPy, tqdm, matplotlib
- PyTorch 2.1.2+cu118
- zennit
- gpuRIR
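A minimal environment setup could look like the sketch below. The exact commands are assumptions (PyTorch wheels for CUDA 11.8 come from the official index URL, and gpuRIR is typically installed from its GitHub repository rather than PyPI), so check each project's own install instructions:

```
# Assumed setup commands; verify against each project's install docs
pip install numpy tqdm matplotlib zennit

# PyTorch 2.1.2 built against CUDA 11.8
pip install torch==2.1.2 --index-url https://download.pytorch.org/whl/cu118

# gpuRIR is not on PyPI and is usually installed from its GitHub repository
pip install https://github.com/DavidDiazGuerra/gpuRIR/zipball/master
```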
The generate_data.py script generates the dataset using the room parameters contained in params.py.
The command-line arguments are the following (an example invocation is shown after the list):
- T60: float, reverberation time (T60)
- SNR: int, signal-to-noise ratio (SNR)
- gpu: int, index of the chosen GPU (if multiple are available)
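For example, assuming argparse-style flags matching the argument names above (the flag syntax is not spelled out in this README, and the values are illustrative):

```
# Hypothetical invocation: generate data for T60 = 0.6 and SNR = 10 on GPU 0
python generate_data.py --T60 0.6 --SNR 10 --gpu 0
```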
The train.py script trains the network.
The command-line arguments are the following (see the example invocation after the list):
- T60: float, reverberation time (T60)
- SNR: int, signal-to-noise ratio (SNR)
- gpu: int, index of the chosen GPU (if multiple are available)
- data_path: string, path to where the dataset is saved
- log_dir: string, path where TensorBoard logs are stored
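A sketch of a training run, under the same flag-syntax assumption and with illustrative paths:

```
# Hypothetical invocation: train on the generated dataset, logging to TensorBoard
python train.py --T60 0.6 --SNR 10 --gpu 0 --data_path ./data --log_dir ./logs
```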
To perform the XAI experiments:
The perturbation_experiment.py script performs the input-feature manipulation (perturbation) experiment.
The command-line arguments are the following (example invocation after the list):
- T60: float, reverberation time (T60)
- SNR: int, signal-to-noise ratio (SNR)
- gpu: int, index of the chosen GPU (if multiple are available)
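For example (same flag-syntax assumption as above, with illustrative values):

```
# Hypothetical invocation of the perturbation experiment
python perturbation_experiment.py --T60 0.6 --SNR 10 --gpu 0
```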
The tdoa_experiment.py script performs the time-delay estimation experiment.
The command-line arguments are the following (example invocation after the list):
- T60: float, reverberation time (T60)
- SNR: int, signal-to-noise ratio (SNR)
- gpu: int, index of the chosen GPU (if multiple are available)
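For example (again under the assumed flag syntax):

```
# Hypothetical invocation of the time-delay estimation experiment
python tdoa_experiment.py --T60 0.6 --SNR 10 --gpu 0
```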
The Jupyter notebooks Input_visualization_paper.ipynb and Plot_Perturbation.ipynb can be used to reproduce the figures presented in the paper.
N.B.: the pre-trained models used to compute the results shown in [1] can be found in the models folder.
[1] L. Comanducci, F. Antonacci, A. Sarti, "Interpreting End-to-End Deep Learning Models for Acoustic Source Localization using Layer-wise Relevance Propagation," accepted at EUSIPCO 2024.