#!/usr/bin/env python
# coding: utf-8

# # Binaural Room Impulse Responses (BRIRs)
#
# [return to main page](index.ipynb)
#
# In this unit we will measure - with the help of our dummy head Ulf - binaural room impulse
# responses (BRIRs) of our seminar room.
#
# ![Our Dummy Head](images/ulf.jpg)
#
# We will be using two different methods:
#
# * First, we excite the room - like in the [previous unit](rir.ipynb) - by clapping two wooden boards together.
#   But this time, instead of using a single microphone, we will record the room response with the dummy head.
#   We'll use the free audio recording/editing software [Audacity](http://web.audacityteam.org/) again.
#
# * Afterwards, we use the slightly more modern *sweep method*.
#   We excite the room with a sine sweep, which we reproduce by means of a loudspeaker.
#   The actual impulse response is calculated from the excitation signal and the signal recorded by the dummy head.
#
# Further information will be provided during the exercises.
#
# If you cannot be with us for the measurements, you can still try the following exercises with these files (from older measurements):
#
# * using the wooden boards: [data/brir_clap.wav](data/brir_clap.wav)
#
# * using the sweep method: [data/brir_sweep.mat](data/brir_sweep.mat)

# ## Loading the BRIRs
#
# We already know from the previous units how to load WAV files, so the first one should be easy.
# Note, however, that we are now dealing with a two-channel file (one channel for each ear).
# The resulting NumPy array will be two-dimensional and will contain the channels along the columns.
#
# *Exercise:* Load the WAV file with the BRIRs.
# Use the `shape` property of the resulting array to check whether the dimensions/sizes are as you expect them.
# How long (in seconds) are the impulse responses?

# In[ ]:


# The impulse responses obtained with the sweep method were created with Matlab®.
# Along with some additional information, they are stored in MAT files.
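# As a sketch of the WAV-loading step (here with a small *synthetic* two-channel
# file, because your own recording will differ; in the exercise you would pass
# `data/brir_clap.wav` to `wavfile.read()` instead):

```python
import numpy as np
from scipy.io import wavfile

# For illustration, write a tiny two-channel WAV file first
# (2 seconds of silence with a single "impulse" in the left channel).
fs = 44100
fake_brir = np.zeros((2 * fs, 2), dtype=np.float32)  # (samples, channels)
fake_brir[100, 0] = 1.0
wavfile.write('fake_brir.wav', fs, fake_brir)

# Loading works the same way for the real measurement file:
fs, brir = wavfile.read('fake_brir.wav')
print(brir.shape)  # channels along the columns -> (88200, 2)
duration = brir.shape[0] / fs
print(duration, 'seconds')  # -> 2.0 seconds
```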
# Luckily, SciPy [can load these kinds of files](https://docs.scipy.org/doc/scipy/reference/io.html) with the [scipy.io.loadmat()](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.loadmat.html#scipy.io.loadmat) function (as long as a certain MAT-file version is used).

# In[ ]:


# Make sure to use the options `struct_as_record=False` and `squeeze_me=True` when loading the MAT file.
#
# *Exercise:* Load the MAT file with the other BRIRs.
# How long (in seconds) are the impulse responses?

# In[ ]:


# The object returned by `scipy.io.loadmat()` is a bit strange ...
#
# It is like a `dict` object which has variable names as keys.
# In our case, there is only one variable named `data`, which you can access with
#
#     data = mat_contents['data']
#
# The `data` variable is a Matlab "structure" whose attributes you can access with the well-known dot notation (but only if you used the argument `struct_as_record=False` as suggested above!).
# Use tab completion (or `dir(data)`) to find out which attributes are available.
#
# For us, the most interesting attribute is `data.ir`, which holds the actual BRIR data as a two-dimensional NumPy array.

# ## Listening to the BRIRs
#
# As we saw (or rather *heard*) in the [previous unit](rir.ipynb), listening to the impulse responses directly doesn't tell us very much, but let's do it anyway!
#
# *Exercise:* Listen to the impulse responses.
# Do you hear a difference?
#
# You should use `tools.normalize()` (from [tools.py](tools.py)) on both IRs before playback to adjust their volume.

# In[ ]:


# To get a clearer picture of the data, let's convolve the IRs with some signals!
#
# Note that in contrast to the previous unit, we now have to deal with two-channel impulse responses.
#
# We might want to use [scipy.signal.fftconvolve()](http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.fftconvolve.html),
# which can handle convolution along a desired axis.
#
# *Exercise:* Load a mono signal (e.g.
# from [data/xmas.wav](data/xmas.wav)) and convolve it with both BRIRs.
# Do you hear a difference?
#
# Use `tools.normalize()` on the results to be able to compare them at appropriate levels.

# In[ ]:


# ## Headphone Compensation
#
# There should be a clearly audible difference between the two measured BRIRs, right?
#
# But we can still make it better.
# One thing that's still missing is *headphone compensation*.
# Load the impulse response stored in the file
# [data/THOMSON_HED415N_KEMAR_hcomp.wav](data/THOMSON_HED415N_KEMAR_hcomp.wav) and convolve it with the measured impulse responses to apply the headphone compensation filter.
#
# *Exercise:* Listen to the BRIRs with and without headphone compensation (after convolving some input signal with them).

# In[ ]:


# ## Plotting the BRIRs
#
# *Exercise:* Plot all impulse responses which were used up to now.

# In[ ]:


# *Exercise:* Estimate the time-of-flight from $t = 0$ until the direct sound hits the ears.
# Which distance in meters does this correspond to?
# Is this consistent with the actual measurement setup?

# In[ ]:


# *Exercise:* Roughly estimate the signal-to-noise ratio of the measured impulse responses.
# Does it differ between the measurement methods?

# In[ ]:


# *Exercise:* Plot the frequency response of the different measurements.
# Plot the frequency logarithmically on the x-axis and the magnitude in dB on the y-axis.

# In[ ]:


# ## Multiple Head Orientations
#
# At the beginning of this unit, we measured impulse responses for different head orientations.
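# Returning to the convolution exercise above: a possible sketch of convolving
# a mono signal with a two-channel impulse response. Both signals here are
# synthetic stand-ins for the real files (`data/xmas.wav` and a measured BRIR):

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
rng = np.random.default_rng(0)
mono_signal = rng.standard_normal(fs)  # stand-in for data/xmas.wav (1 s)

brir = np.zeros((2048, 2))             # stand-in for a measured BRIR
brir[100, 0] = 1.0                     # left ear: delayed impulse
brir[130, 1] = 0.5                     # right ear: later and weaker

# Convolve the mono signal with each column (= ear) separately;
# the added axis lets NumPy broadcasting pair it with both channels.
result = fftconvolve(mono_signal[:, np.newaxis], brir, axes=0)
print(result.shape)  # -> (44100 + 2048 - 1, 2)
```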
#
# If you couldn't be with us for the measurements, you can use these example files:
#
# * [data/brir_sweep-80.mat](data/brir_sweep-80.mat)
# * [data/brir_sweep-40.mat](data/brir_sweep-40.mat)
# * [data/brir_sweep.mat](data/brir_sweep.mat) (0 degrees, same file as we used above)
# * [data/brir_sweep+40.mat](data/brir_sweep+40.mat)
# * [data/brir_sweep+80.mat](data/brir_sweep+80.mat)
#
# *Exercise:* Load all files, extract the BRIRs, convolve them with a mono signal and listen to the results.

# In[ ]:


# *Exercise:* Select either the left or the right ear and plot its impulse responses for each
# measured orientation.
# To do that, create a 2-dimensional array containing one impulse response per column.

# In[ ]:


# *Exercise:* Do the same thing with the magnitude spectra.
#
# Note: Try to add a legend to see which line corresponds to which measurement.

# In[ ]:


# ## Let's Watch a Video!
#
# This video is from this page: [how to create animations with matplotlib](http://nbviewer.jupyter.org/github/mgeier/python-audio/blob/master/plotting/matplotlib-animation.ipynb).
#
# *Exercise:* Try to understand what's shown there.
# What's the meaning of the angle on the right side?
#
# Note that the impulse responses in the video were measured in an anechoic
# chamber, therefore they look quite different from our measured BRIRs.

# ## Plotting Many Head Orientations
#
# First, have a quick look at this [public dataset](https://zenodo.org/record/55418#.YmK9actBxhE); of most interest might be the [PDF documentation](https://zenodo.org/record/55418/files/wierstorf2011_QU_KEMAR_anechoic.pdf).
#
# Then, download [QU_KEMAR_anechoic_2m.mat](https://zenodo.org/record/4459911/files/QU_KEMAR_anechoic_2m.mat).
#
# *Exercise:* Load the file and extract the variable called `irs`.
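# A possible sketch of the MAT-file handling: for illustration we create a
# small MAT file ourselves (so the field names and sizes below are made up);
# in the exercise you would pass `QU_KEMAR_anechoic_2m.mat` to
# `scipy.io.loadmat()` instead:

```python
import numpy as np
from scipy.io import loadmat, savemat

# Write a small stand-in file with a Matlab-style structure named `irs`.
savemat('fake_irs.mat', {'irs': {'left': np.zeros((512, 3)),
                                 'right': np.ones((512, 3)),
                                 'fs': 44100.0}})

# These two options make the structure's fields available as attributes:
contents = loadmat('fake_irs.mat', struct_as_record=False, squeeze_me=True)
irs = contents['irs']
print(dir(irs))          # shows the available fields/attributes
print(irs.left.shape)    # -> (512, 3)
print(irs.fs)            # -> 44100.0
```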
# In[ ]:


# *Exercise:* Plot all data from `irs.left` (or, if you prefer, `irs.right`) using [matplotlib.pyplot.pcolormesh()](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.pcolormesh.html) or [matplotlib.pyplot.imshow()](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html).
#
# You might see more if you first convert the data to dB.
#
# What can you recognize in the plot?
# Use the zoom feature to enlarge certain areas.
#
# Note that in this case, too, the data was measured in an anechoic chamber.

# In[ ]:


# *Exercise:* Try to find out what the axes show and label them accordingly.

# In[ ]:


# ## Solutions
#
# If you had problems solving some of the exercises, don't despair!
# Have a look at the [example solutions](brir-solutions.ipynb).
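# As a final sketch relating to the plotting exercises above: one common way
# (a convention we assume here, not the only one) to convert IR data to dB
# before handing it to `pcolormesh()`. The array below is a synthetic
# stand-in for `irs.left`:

```python
import numpy as np

# One impulse response per column, windowed so the level varies over time.
rng = np.random.default_rng(0)
irs_left = rng.standard_normal((2048, 360)) * np.hanning(2048)[:, np.newaxis]

# dB relative to the global maximum; the tiny offset avoids log10(0).
magnitude_db = 20 * np.log10(np.abs(irs_left) / np.abs(irs_left).max() + 1e-12)
print(magnitude_db.max())  # the strongest sample sits at roughly 0 dB

# For the actual exercise you would now plot this, e.g.:
# import matplotlib.pyplot as plt
# plt.pcolormesh(magnitude_db)
# plt.colorbar()
```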

# CC0
#
# To the extent possible under law,
# the person who associated CC0
# with this work has waived all copyright and related or neighboring
# rights to this work.