I'm working on audio conferencing in WCF and WPF with Visual Studio 2008, using C#. I capture sound from the microphone using the Windows API, continuously send the audio stream (byte[]) to the server, and broadcast it from there. That part works, and I receive the response stream from the server as well, which I then play back. I used this CodeProject article to record and play the audio stream: http://www.codeproject.com/KB/audio-video/cswavrec.aspx But while the incoming audio stream is playing, I hear an unrecognizable, garbled sound. I suspect this is due to the difference between the rates at which data is consumed and received. I also tried DirectSound for recording and playback, but the same problem arises. Please suggest how to solve this (perhaps a better DirectSound approach with a streaming buffer). The following code fires when an audio stream is received from the server.
private void playAudio()
{
    objAudioConferencingPlayer.StartAudioPlayer(IncommingAudioStream);
}

public void StartAudioPlayer(byte[] AudioStream)
{
    if (!IsPlayerRunning)
    {
        StopPlaying();
        m_Player = new WaveOutPlayer(-1, fmt, 16384, 3,
            new BufferFillEventHandler(Filler));
        IsPlayerRunning = true;
    }
    m_Fifo.Write(AudioStream, 0, AudioStream.Length);
}

private void Filler(IntPtr data, int size)
{
    if (m_PlayBuffer == null || m_PlayBuffer.Length < size)
        m_PlayBuffer = new byte[size];

    if (m_Fifo.Length >= size)
        m_Fifo.Read(m_PlayBuffer, 0, size);
    else
        // Not enough data in the FIFO; fill the buffer with silence.
        for (int i = 0; i < m_PlayBuffer.Length; i++)
            m_PlayBuffer[i] = 0;

    System.Runtime.InteropServices.Marshal.Copy(m_PlayBuffer, 0, data, size);
    // m_Fifo ==> m_PlayBuffer ==> data ==> Speakers
}
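One approach I'm considering for the suspected rate mismatch is a simple jitter buffer: delay starting playback until a minimum amount of audio has accumulated in the FIFO, and drop back into buffering whenever it underruns, so short network stalls produce silence instead of garbled output. Below is a minimal sketch built on the same `m_Fifo`/`WaveOutPlayer` structure as above; the `PreBufferBytes` constant and the `m_Buffering` flag are hypothetical names I introduced, and the threshold value would need tuning.

```csharp
// Hypothetical pre-buffering threshold: roughly three device buffers of
// audio must accumulate before playback starts.
private const int PreBufferBytes = 3 * 16384;
private bool m_Buffering = true;

public void StartAudioPlayer(byte[] AudioStream)
{
    // Always queue the incoming data first.
    m_Fifo.Write(AudioStream, 0, AudioStream.Length);

    // Only create the player once enough audio is queued, so playback
    // starts with a cushion against network jitter.
    if (m_Buffering && m_Fifo.Length >= PreBufferBytes)
    {
        m_Buffering = false;
        StopPlaying();
        m_Player = new WaveOutPlayer(-1, fmt, 16384, 3,
            new BufferFillEventHandler(Filler));
        IsPlayerRunning = true;
    }
}

private void Filler(IntPtr data, int size)
{
    if (m_PlayBuffer == null || m_PlayBuffer.Length < size)
        m_PlayBuffer = new byte[size];

    if (m_Fifo.Length >= size)
    {
        m_Fifo.Read(m_PlayBuffer, 0, size);
    }
    else
    {
        // FIFO underrun: output silence and re-enter buffering until the
        // queue refills, instead of playing whatever arrives piecemeal.
        Array.Clear(m_PlayBuffer, 0, size);
        m_Buffering = true;
    }

    System.Runtime.InteropServices.Marshal.Copy(m_PlayBuffer, 0, data, size);
}
```

The trade-off is added latency (the pre-buffer delay) in exchange for smoother playback; a real implementation would also need the `m_Buffering`/`m_Fifo` accesses synchronized, since `Filler` runs on the waveOut callback thread while `StartAudioPlayer` runs on the network receive thread.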
Thanks in Advance.