Sound Spectrum Pro for Windows
Calibration and files
Frequency response calibration
- Ideally, a microphone would output the same voltage amplitude for the same sound pressure, for any frequency. This is called a flat response.
- The purpose of this calibration is to compensate for the non-ideal (non-flat) response of the microphone.
- This is done by applying a frequency-dependent correction to the signal generated by the microphone.
- The correction is described in a text file with a list of (frequency, dB difference) pairs, describing the response of the microphone.
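As an illustration of the idea (not the app's actual implementation), the correction can be interpolated from the (frequency, dB) pairs and subtracted from each measured level. This sketch uses made-up calibration data and holds the end values outside the range, while the app may extrapolate differently:

```python
import math

# Hypothetical (frequency Hz, dB difference) pairs from a calibration file
cal = [(20, -9.3), (100, 0.0), (1000, 0.0), (20000, -5.5)]

def correction_db(freq):
    """Interpolate the calibration curve linearly in log-frequency."""
    if freq <= cal[0][0]:
        return cal[0][1]
    if freq >= cal[-1][0]:
        return cal[-1][1]
    for (f1, d1), (f2, d2) in zip(cal, cal[1:]):
        if f1 <= freq <= f2:
            t = (math.log10(freq) - math.log10(f1)) / (math.log10(f2) - math.log10(f1))
            return d1 + t * (d2 - d1)

def calibrated(freq, measured_db):
    # The app subtracts the file dB value from the measured level,
    # turning the signal into that of a "flat response" microphone
    return measured_db - correction_db(freq)
```

At 1kHz the correction is 0dB, so the level passes through unchanged; at 20Hz a -9.3dB dip in the microphone is compensated by adding 9.3dB.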
Procedure to calibrate the frequency response, starting from a calibrated reference microphone
- Plug the reference microphone in the device. Choose it in Windows settings as the default audio input.
Start the app and load the reference microphone's calibration file from the File menu.
- Connect the device line out to your amp. Play a reference signal - best choices are pink sweep or pink noise.
Set a level that rises enough (>30dB) above the silence levels shown by the app.
- Choose an RTA mode. The saved files will have one point per bar, so it's up to you which one.
- Because the microphone is calibrated, the response you see is the response of the amp + speakers + room.
Save this as a calibration file (File > Save as... calibration file) with a relevant name like "Roomref.txt".
- Place the uncalibrated microphone in exactly the same place as the reference one. Plug it into your device. If you have to change the Windows audio input, restart the app.
- Apply the saved room response as calibration file. Play the exact same reference signal as before (including level - don't touch any volume settings).
- The response you see now corresponds to your microphone. Save it as a calibration file. Done.
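The arithmetic behind this step, as a sketch with made-up dB values: once the room+amp+speaker response is applied as calibration, what remains on screen is the unknown microphone's own response.

```python
# Hypothetical measured levels in dB at a few frequencies
raw = {100: -12.0, 1000: -10.0, 8000: -14.5}   # unknown mic through the chain
room = {100: -2.0, 1000: 0.0, 8000: -3.0}      # response saved with the reference mic

# Applying the room file as calibration subtracts it,
# leaving only the microphone's contribution
mic_response = {f: raw[f] - room[f] for f in raw}
```

This is why the reference signal and all levels must stay identical between the two measurements: any change would end up attributed to the microphone.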
Input level calibration
- The goal is to have the correct SPL (Sound Pressure Level) and weighted totals (dBA and dBC) indicated on the graphs.
- It's better to have the frequency response already calibrated, but rough adjustments are also possible without.
- This is basically one numerical parameter per microphone, called sensitivity. It can be expressed as:
- dB SPL: the value is referenced to 20 µPa and 1V RMS. Typical value: +140dB SPL.
It shows how much higher than 20 µPa the sound pressure needs to be in order to produce 1V RMS at the microphone connector.
- dB Pa: the value is referenced to 1Pa. Typical value: +40dB. To convert it to dB SPL you have to add 94dB.
- dB V RMS (ref Pa): like the previous, but reversed (Volts per Pascal). Typical value: -40dB. To convert to dB SPL, subtract from 94dB.
- mV RMS / Pa: like the previous, but as gain (not dB). Typical value: 2.5mV. To convert to dB SPL: 154 - 20 * log10(mV value).
- On top of this, there is a gain stage inside the device, from the microphone connector to the ADC.
This gain is expressed as dB V RMS (ref 0dBFS), meaning how many Volts RMS are needed for full scale ADC input.
Typical value: -40dB (10mV RMS max input at the connector).
- The app has these two parameters in the File > Sensitivities menu.
These have to be calibrated.
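The unit conversions above can be sketched in Python (the helper names are ours, not the app's; the typical values from the text are used as examples):

```python
import math

def spl_from_db_pa(db_pa):
    """dB re 1 Pa -> dB SPL (re 20 µPa, 1V RMS): add 94 dB."""
    return db_pa + 94.0

def spl_from_dbv_per_pa(dbv_pa):
    """dB V RMS per Pa -> dB SPL: subtract from 94 dB."""
    return 94.0 - dbv_pa

def spl_from_mv_per_pa(mv):
    """mV RMS per Pa (gain, not dB) -> dB SPL: 154 - 20*log10(mV)."""
    return 154.0 - 20.0 * math.log10(mv)
```

Note that the forms agree with each other: -40dB V RMS per Pa is 10mV RMS per Pa, and both convert to 134dB SPL.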
Procedure to calibrate the full scale input level, using voltage measurement
- You will need a cable with 3.5mm plugs at both ends.
- You will also need an oscilloscope or a DMM with decent AC voltage accuracy in the range of 1-2V RMS.
- Plug the cable in the device line out. Connect your scope or DMM between the tip and the sleeve of the other end of the cable.
- Play a sine wave tone from the app, with max amplitude (0dBFS). Also turn the device volume to max. Choose a frequency that matches your DMM capabilities.
- Measure the RMS voltage. The DMM will give you RMS directly; with the scope you can measure peak-to-peak and calculate V RMS = Vpp/(2*sqrt(2)). Typical value: 400-500mV.
- Turn the amplitude down to -6dBFS and check that the voltage is half. If it's not, the device might clip at max volume; in that case use the -6dBFS voltage times 2 instead.
- Calculate the dBV level of this voltage: dBV = 20 * log10(V RMS). Example: 400mV gives 20 * log10(0.4) = -8dBV.
- From the app, turn the amplitude to min (-60dBFS). Don't touch the device volume buttons. Don't change the frequency.
- Connect the free end of the cable to the device microphone input (loopback).
- Set the app in FFT mode and set the window type to flat-top. You should see the peak of your tone. Note down the amplitude.
- From the app, increase the amplitude step by step and note down the value of the peak for each output setting. Stop when (before) the graph reaches 0dBFS.
- The output steps are 6dB - check that the steps measured on the FFT are also 6dB.
For the next calculation, use the highest reading that is still below 0dBFS and still a full 6dB above the previous step (some devices clamp before reaching 0dBFS).
- Your input level for 0dBFS is: dBV (calculated above) plus output level minus graph peak. Example: with 400mV RMS (-8dBV), a -42dBFS output and -4dBFS measured on the graph, the result is -8 + (-42) - (-4) = -46dB.
- Set this value in the corresponding parameter in Settings. This is a characteristic of your device and the reference for applying all sensitivities (below).
Microphone sensitivity calibration
- If you have a calibration file from the microphone manufacturer, it normally contains two special lines before the frequency/dB lines:
Unit:SPL or Unit:Pa
Sens:value
The app will read and use these lines when you load the calibration file, so the microphone will be fully calibrated.
- If you saved a calibration file to adjust the frequency response, the two lines (Unit:SPL and Sens:value) are there.
You can edit the file and adjust the value, trying to match a calibrated reference: a calibrated microphone or a sound level meter.
Your target is to have the same dBA and/or dBC reading for the same sound applied (see note below on sound level meters).
- If you are not loading any calibration file, the app will use the "Microphone sensitivity (SPL for 0dBV)" parameter from the File menu.
Calibration files
- The app can load and save calibration files. These are plain text files, containing the following types of lines:
- Comments: The app ignores any line which does not match the patterns below. Traditionally the comment lines start with ";" or "*".
- "Unit:SPL" or "Unit:Pa" - Sets the unit for the sensitivity value.
- "Sens:value" - Sets the sensitivity value.
- "value value" - First is frequency in Hz, second is dB.
- Simple example:
; Calibration for my mic
Unit:SPL
Sens:136
; Freq dB
20 -9.3
50 -3
100 0
1000 0
8000 0
10000 0.3
12000 0
20000 -5.5
- This file describes the frequency response of a microphone (or other system) and the sensitivity calibration.
It can come from the microphone manufacturer (.CAL or .FRD), can be edited by you with any text editor, or can be generated by this app following a calibration procedure.
Usually the 0dB reference is taken as the response at 1kHz or at the flattest part of the microphone response.
- A calibration file is normally applied to calibrate the current microphone. See above.
- When you load (apply) this file, the app will subtract the file dB values from the received microphone signal, turning it into a "flat response" microphone.
The values between the given points are interpolated and the values outside are extrapolated.
- You can also load the calibration file as a graph - tap on FILE and select it.
If the sensitivity lines are present and valid, the file name shown after loading will have "SPL" appended to it.
- You can save a file in this format from the app (menu File > Save as).
If you check "As calibration file", all values will be referenced to the value at 1kHz.
Otherwise, the SPL levels are written as such in the file, so you can load it and directly compare the data with the current LIVE and PEAK data.
In FFT mode you cannot save "As calibration file".
Written points: all points for FFT (a lot), band centers for RTA (depending on which RTA is shown).
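A minimal sketch of a reader for this file format, assuming the matching rules described above (any line that does not match one of the patterns is treated as a comment and ignored):

```python
def parse_cal_file(text):
    """Parse Unit:, Sens: and 'frequency dB' lines; ignore everything else."""
    unit, sens, points = None, None, []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Unit:"):
            unit = line[5:].strip()
        elif line.startswith("Sens:"):
            try:
                sens = float(line[5:])
            except ValueError:
                pass
        else:
            parts = line.split()
            if len(parts) == 2:
                try:
                    points.append((float(parts[0]), float(parts[1])))
                except ValueError:
                    pass  # not two numbers: a comment, ignored
    return unit, sens, points
```

Running it on the simple example above would yield unit "SPL", sensitivity 136 and eight (frequency, dB) points.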
Note on sound level meters
- Using a sound level meter is easy and simplifies some of the calibrations. However, you should be aware of their limitations.
- Even the best meters on the market (non-professional, let's say under $100) have:
- Bandwidth 31.5Hz - 8kHz. This means that they don't hear a part of the spectrum (dBA and especially dBC are defined from 20Hz to 20kHz).
As usual, the bandwidth is (best case) defined at -3dB, so their flat response region is even narrower.
- A stated precision of 1-2dB, referenced (usually) to 94dB @ 1kHz.
Read between the lines: the precision degrades the further you are from these conditions.
- So, when using such a meter for calibration, your best choices for signals are: tone, pink octave, pink 1/3 octave.
The wide-spectrum sounds will have a good part outside the (calibrated) capabilities of the meter.
Stay with the signals around 1kHz for the actual calibration. You can then move up and down to see where the readings start to diverge...
Copyright © 2012-2022 Lindentree. All rights reserved.