CS4237B Datasheet, PDF (69/114 Pages) Cirrus Logic – CrystalClear Advanced Audio System with 3D Sound
CS4237B
Hearing Basics
It has long been known that the hearing system uses several methods to determine from which direction a particular sound is coming. Since human hearing is binaural (two ears), these methods include relative phase shift for low-frequency sounds, relative intensity for sounds in the voice range, and relative time of arrival for sounds having fast rise times and high-frequency components.
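The relative-time-of-arrival cue can be put in rough numbers. As an illustration only (not taken from this datasheet), Woodworth's spherical-head approximation estimates the interaural time difference (ITD) from the source azimuth; the head radius and speed of sound below are conventional assumed values.

```python
import math

def interaural_time_difference(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of ITD, in seconds.

    head_radius_m and c are assumed typical values (8.75 cm head radius,
    343 m/s speed of sound), not parameters from the CS4237B datasheet.
    """
    theta = math.radians(azimuth_deg)
    # Path difference around a spherical head: r * (theta + sin(theta))
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source directly to one side (90 degrees) gives an ITD of roughly 0.66 ms,
# which is about the largest delay the two ears ever experience.
print(interaural_time_difference(90.0))
```

For frontal sounds the ITD approaches zero, which is consistent with the text's point that arrival-time cues matter most for laterally displaced, fast-rising sounds.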
The outer ear plays a significant role in the determination of direction. Due to the complex nature of the ear's shape, sound is subject to reflection, reinforcement, and cancellation at various frequencies. Effectively, the human hearing system functions as a multiple filter, emphasizing some frequencies, attenuating others, and letting some through with no change. The response changes with both azimuth and elevation and, together with the binaural capabilities, helps determine whether a sound is coming from up, down, left, right, ahead, or behind.
The frequency response of microphones is not dependent on azimuth in the same way as the ear's. Omnidirectional microphones exhibit flat response in all directions. Cardioid microphones exhibit flat response to sounds coming from the front and sides and reject sounds from the rear. Because no microphone behaves like the human ear, the sounds picked up by a microphone are accurate as far as the microphone is concerned, but they are not the same as the sounds impinging on the human eardrum under similar circumstances.
When the sound is reproduced by speakers, the situation is further altered by speaker location. If sounds which originally came from one side or the other are reproduced by frontally located speakers, these side sounds are heard with the incorrect spectral response. The same is true for frontal sounds reproduced by side-mounted loudspeakers. The result is spatial distortion of the sound field, which prevents the listener from hearing what was originally performed with the proper spatial cues.
The SRS 3D Stereo Process
The Crystal SRS DSP, illustrated in Figure 7, processes the signal in such a manner that the spatial cues lost in the record/playback process are restored. Since the human hearing system is involved and is actually part of the loop, its transfer function is made part of the system transfer function. At the same time, SRS 3D Stereo processing avoids an objectionable buildup of frequencies of increased phase sensitivity and is effective over a wide area, so that the listener is not restricted to a favorable listening position (sweet spot) between two speakers.
In the stereophonic signal, frontal sounds produce equal amplitudes in the left and right channels and are therefore present in the "sum" or L+R signal. Ambient sounds, which include reflected and side sounds, produce a complex sound field and do not appear equally in the left and right channels; they are therefore present in the "difference" or L-R signal. Although these two signals are normally heard as a composite signal, it is possible to separate and process them independently and then remix them into a new composite signal containing the spatial cues that the stereo recording and playback processes do not provide. The directional cues are contained mostly in the difference signal, so it can be processed, yielding (L-R)p, to restore the missing directional cues to their normal levels. The processed difference signal can then be increased in amplitude, using SPC3-0, in order to increase the apparent image width.
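The sum/difference split and remix described above can be sketched in a few lines. This is a minimal mid/side illustration, not the CS4237B's actual algorithm: the real (L-R)p processing applies frequency-dependent filtering, whereas the `space` parameter here (a hypothetical stand-in for the SPC3-0 control) applies only a flat gain to the difference signal.

```python
import numpy as np

def widen_stereo(left, right, space=1.0):
    """Mid/side sketch of sum/difference stereo processing.

    space: hypothetical stand-in for the SPC3-0 space control; it scales
    the difference (L-R) signal before remixing. space=1.0 is identity.
    """
    mid = left + right       # "sum" (L+R): frontal content
    side = left - right      # "difference" (L-R): ambient/directional content
    side_p = space * side    # flat-gain stand-in for the real (L-R)p filtering
    # Remix the processed difference back into a new composite stereo pair
    return 0.5 * (mid + side_p), 0.5 * (mid - side_p)
```

With space = 1.0 the remix reconstructs the input exactly; values above 1.0 raise the level of the difference signal relative to the sum and so widen the apparent image, as the text describes.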
SRS Space Control
The SRS Space adjustment, SPC3-0 in C2, controls the amount of processed difference signal, (L-R)p, that is added to the final left and right digital signals going to the DACs. The difference