moving sound source #335
Conversation
@fakufaku I have created a pull request for the simulation of a moving sound source in a ShoeBox. The last part still missing is the time-varying convolution. Quick question: should this be a separate function, or be added to the sound-moving function?
Thank you so much for starting this! The code is nice and well-commented, bravo!
I think this is a good start, but there are a few things to change to make it the best possible.
I like that you keep things simple, but I can see two problems with this implementation.
- It is merged as a method of the Room class, but this does not seem necessary.
- The only movement allowed is a straight line.
I would suggest changing the code to modularize the different tasks involved.
- Define trajectories (of sources, but also possibly microphones). We can have a collection of functions that all tackle different trajectories. The input would depend on the type of trajectory and the outputs would be
  a. a list of time stamps
  b. a list of locations
- The simulation would be done by a function taking as input a Room object and the outputs of the above function for sources and microphones (both could be optional). Since the function would be outside of the Room class, it can be defined in a separate file, e.g. pyroomacoustics/dynamic.py.
Please do not hesitate to comment if some things are unclear.
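As a minimal sketch of what one trajectory-generating function could look like (the name `linear_trajectory` and its signature are my own invention, not an existing pyroomacoustics API):

```python
import numpy as np

def linear_trajectory(start, end, n_points, duration):
    """Sample a straight-line trajectory between two positions.

    Returns (timestamps, locations): timestamps has shape (n_points,)
    and locations has shape (n_points, ndim).
    """
    timestamps = np.linspace(0.0, duration, n_points)
    # fraction of the path covered at each time stamp, as a column vector
    fractions = (timestamps / duration)[:, None]
    locations = (1.0 - fractions) * np.asarray(start, dtype=float) + fractions * np.asarray(end, dtype=float)
    return timestamps, locations
```

Other trajectory types (circular, spline-based, ...) would return the same (timestamps, locations) pair, so a single simulation function in pyroomacoustics/dynamic.py could consume any of them.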
# Simulate the room and obtain the room impulse response
self.simulate()

# only use the duration of original, to avoid echo
For the time-varying convolution, the tail (i.e., the samples beyond the original length) needs to be kept and added to the head of the following segment.
Kind of like overlap-add for long filtering (except the filter changes every time).
You can think about it this way: after the source has moved, you still hear echoes generated by the source at the previous position.
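A sketch of this block-wise, overlap-add-style convolution (the helper name `time_varying_convolve` is hypothetical, and I assume one impulse response per fixed-length block):

```python
import numpy as np

def time_varying_convolve(signal, filters, block_len):
    """Convolve `signal` block by block, using filters[i] on block i.

    Assumes len(signal) == len(filters) * block_len. The tail of each
    block's convolution spills over into the following samples, so the
    echoes from the previous source position are not truncated.
    """
    n_blocks = len(filters)
    max_taps = max(len(h) for h in filters)
    out = np.zeros(n_blocks * block_len + max_taps - 1)
    for i, h in enumerate(filters):
        block = signal[i * block_len : (i + 1) * block_len]
        seg = np.convolve(block, h)  # length block_len + len(h) - 1
        # overlap-add: samples beyond block_len land in the next block's range
        out[i * block_len : i * block_len + len(seg)] += seg
    return out
```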
self.simulate()

# only use the duration of original, to avoid echo
# set data type as 16bit, ref: https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.write.html
Please do not do this here.
The scaling, normalization, and data type casting should happen only when saving to file, which is independent of the simulation.
    movemix = record_
    filter_kernels = self.rir.copy()
else:
    movemix = np.concatenate((movemix, record_), axis=1)
This is very inefficient as a copy of the array occurs at every loop, and at each loop the array becomes longer. In the end this incurs a cost quadratic in the length of the array.
The correct way to do it is to append to a list in the loop and concatenate all the bits at once at the end.
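A sketch of the suggested pattern (the shapes here are placeholders):

```python
import numpy as np

chunks = []
for i in range(5):
    record_ = np.random.randn(2, 100)  # placeholder per-step output
    chunks.append(record_)             # O(1) append, no array copy

# single concatenation at the end: linear instead of quadratic cost
movemix = np.concatenate(chunks, axis=1)  # shape (2, 500)
```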
    filter_kernels = self.rir.copy()
else:
    movemix = np.concatenate((movemix, record_), axis=1)
    filter_kernels = np.concatenate(
Same issue as above: concatenating inside the loop is quadratic; collect in a list and concatenate once at the end.
import pyroomacoustics as pra


def test_simulation_output_shape():
This is good as a basic test, but it would be nice to have some tests for the correctness of the time-varying convolution too.
One test for that is that the result of a time-varying convolution where all the filters are the same should be the same as a single long convolution.
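A sketch of that correctness test, written against plain NumPy (the block-wise loop mirrors the overlap-add approach suggested above, without assuming any particular helper from the PR):

```python
import numpy as np

def test_constant_filter_matches_long_convolution():
    rng = np.random.default_rng(0)
    x = rng.standard_normal(256)
    h = rng.standard_normal(32)
    block_len = 64

    # reference: one long convolution over the whole signal
    expected = np.convolve(x, h)

    # block-wise convolution with the same filter in every block,
    # carrying each block's tail into the following samples
    out = np.zeros(len(x) + len(h) - 1)
    for i in range(len(x) // block_len):
        seg = np.convolve(x[i * block_len : (i + 1) * block_len], h)
        out[i * block_len : i * block_len + len(seg)] += seg

    assert np.allclose(out, expected)
```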
    y_direction=y_direction,
)

print(movemix, filter_kernels)
There should be some assert at the end of the test that fails if something is wrong with the code.
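For example, shape and sanity assertions instead of the print (the expected shapes below are illustrative placeholders, not the actual values produced by the PR):

```python
import numpy as np

# placeholders standing in for the PR's movemix / filter_kernels
n_mics, n_steps, rir_len = 2, 5, 128
movemix = np.zeros((n_mics, 1000))
filter_kernels = np.zeros((n_steps, rir_len))

# assertions that fail loudly if the simulation output is malformed
assert movemix.ndim == 2 and movemix.shape[0] == n_mics
assert filter_kernels.shape == (n_steps, rir_len)
assert np.isfinite(movemix).all()
```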
Thanks for sending a pull request (PR), we really appreciate that! Before hitting the submit button, we'd be really glad if you could make sure all the following points have been cleared.
Please also refer to the doc on contributing for more details. Even if you can't check all the boxes below, do not hesitate to go ahead with the PR and ask for help there.
- Have you run the tests by executing nosetests or py.test at the root of the repo?

Happy PR 😃