Android MediaRecorder: setAudioSource

  The previous article covered how to record audio with MediaRecorder; see https://blog.csdn.net/cheriyou_/article/details/105544086.

The first step after creating a MediaRecorder is to set the AudioSource. So what exactly is the audio source used for? Let's dig into it today.
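
For context, the value passed to setAudioSource() corresponds to an audio_source_t in the native layer. The list below is abbreviated from the AOSP header (system/audio-base.h); the exact set of constants depends on the Android version.

// Common audio_source_t values (abbreviated, version-dependent)
typedef enum {
    AUDIO_SOURCE_DEFAULT             = 0,
    AUDIO_SOURCE_MIC                 = 1,  // plain microphone capture
    AUDIO_SOURCE_VOICE_UPLINK        = 2,  // telephony TX
    AUDIO_SOURCE_VOICE_DOWNLINK      = 3,  // telephony RX
    AUDIO_SOURCE_VOICE_CALL          = 4,  // TX + RX
    AUDIO_SOURCE_CAMCORDER           = 5,  // mic tuned for video recording
    AUDIO_SOURCE_VOICE_RECOGNITION   = 6,
    AUDIO_SOURCE_VOICE_COMMUNICATION = 7,  // VoIP
    AUDIO_SOURCE_REMOTE_SUBMIX       = 8,  // capture of the output mix
    AUDIO_SOURCE_UNPROCESSED         = 9,
    ......
} audio_source_t;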

1. The setAudioSource call path

status_t MediaRecorder::setAudioSource(int as)

MediaRecorderClient::setAudioSource(int as)

status_t StagefrightRecorder::setAudioSource(audio_source_t as) {
    mAudioSource = as; // store the source type set by the app into mAudioSource
}

  As the code above shows, the audio source ultimately just gets stored in StagefrightRecorder's mAudioSource (along the way the call crosses Binder from the app process into mediaserver: MediaRecorder → MediaRecorderClient → StagefrightRecorder). Later, StagefrightRecorder uses mAudioSource to create the actual audio source. Let's keep following the code.
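
As a side note, the client-side MediaRecorder::setAudioSource mostly just validates the recorder state before forwarding the value across Binder; the real work happens later in mediaserver. The snippet below is a simplified sketch of that idea, not a verbatim copy of frameworks/av/media/libmedia/mediarecorder.cpp (details differ across versions).

status_t MediaRecorder::setAudioSource(int as) {
    if (mMediaRecorder == NULL) {                          // not connected to mediaserver yet
        return INVALID_OPERATION;
    }
    if (!(mCurrentState & MEDIA_RECORDER_INITIALIZED)) {   // must be called before setOutputFormat()
        return INVALID_OPERATION;
    }
    status_t ret = mMediaRecorder->setAudioSource(as);     // Binder call into MediaRecorderClient
    if (ret == OK) {
        mIsAudioSourceSet = true;                          // remembered so prepare() adds an audio track
    }
    return ret;
}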

2. StagefrightRecorder::createAudioSource()

sp<MediaCodecSource> StagefrightRecorder::createAudioSource() {
    // create the AudioSource that matches the requested source type
    sp<AudioSource> audioSource = AVFactory::get()->createAudioSource(mAudioSource, ......);

    // create the audio encoder (a MediaCodecSource), handing it the audioSource as its input
    sp<MediaCodecSource> audioEncoder = MediaCodecSource::Create(mLooper, format, audioSource);

    mAudioSourceNode = audioSource; // keep a reference in mAudioSourceNode
}

AudioSource* AVFactory::createAudioSource(audio_source_t inputSource, ......) {
    return new AudioSource(inputSource, ......);
}

// frameworks/av/media/libstagefright/AudioSource.cpp
AudioSource::AudioSource(audio_source_t inputSource, ......) {
    mRecord = new AudioRecord(inputSource, ......);
}

// frameworks/av/media/libaudioclient/AudioRecord.cpp
AudioRecord::AudioRecord(audio_source_t inputSource, ......) {
    (void)set(inputSource, ......);
}

status_t AudioRecord::set(audio_source_t inputSource, ......) {
    mAttributes.source = inputSource; // here inputSource is stored into mAttributes.source
}

     So the whole creation chain ultimately just stores the audio source into mAttributes.source.

  At this point a question comes up: what is mAttributes, and what is it used for? Keep reading.
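
Short answer: mAttributes is an audio_attributes_t, the struct AudioRecord hands to the audio policy layer to describe the capture request. The definition below is abbreviated from system/audio.h (some versions carry extra fields); the source field is what carries the value set via setAudioSource().

// audio_attributes_t, abbreviated from system/audio.h (version-dependent)
typedef struct {
    audio_content_type_t content_type; // mainly meaningful for playback
    audio_usage_t        usage;        // mainly meaningful for playback
    audio_source_t       source;       // <-- the capture use case set via setAudioSource()
    audio_flags_mask_t   flags;
    char                 tags[AUDIO_ATTRIBUTES_TAGS_MAX_SIZE]; // free-form "key=value" tags
} audio_attributes_t;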

3. How mAttributes is used

status_t AudioRecord::createRecord_l(const Modulo<uint32_t> &epoch, const String16& opPackageName)
{
    input.attr = mAttributes; // copy mAttributes into input.attr
    record = audioFlinger->createRecord(input, output, &status); // then ask AudioFlinger to create the record from input
    mAudioRecord = record;
}

sp<media::IAudioRecord> AudioFlinger::createRecord(const CreateRecordInput& input, CreateRecordOutput& output, status_t *status)
{
    lStatus = AudioSystem::getInputForAttr(&input.attr, &output.inputId, input.riid, sessionId, ......);
    // once the input and related info are obtained, the record itself is created (omitted here)
}

  So mAttributes is copied into input.attr, which is then passed to getInputForAttr. Next, let's see how getInputForAttr figures out the input.

4. How getInputForAttr obtains the input

// frameworks/av/media/libaudioclient/AudioSystem.cpp
AudioSystem::getInputForAttr ---->  aps->getInputForAttr(attr, input, riid, session, pid, uid, ......);
// frameworks/av/services/audiopolicy/managerdefault/AudioPolicyManager.cpp
status_t AudioPolicyManager::getInputForAttr(const audio_attributes_t *attr,audio_io_handle_t *input,......)
{
    audio_attributes_t attributes = *attr;
    device = mEngine->getInputDeviceForAttributes(attributes, &policyMix); // step 1: pick a device based on the input source
    *input = getInputForDevice(device, session, attributes, config, flags, policyMix); // step 2: pick an input based on that device
}

    The whole process has two steps: first, pick a device from the input source; second, pick an input from that device. Let's look at each.

5. Picking the device from the input source

// frameworks/av/services/audiopolicy/enginedefault/src/Engine.cpp
sp<DeviceDescriptor> Engine::getInputDeviceForAttributes(const audio_attributes_t &attr,sp<AudioPolicyMix> *mix) const{
    sp<DeviceDescriptor> device = findPreferredDevice(inputs, attr.source, availableInputDevices);
    device = policyMixes.getDeviceAndMixForInputSource(attr.source, availableInputDevices, mix);
    audio_devices_t deviceType =  getDeviceForInputSource(attr.source);
    return availableInputDevices.getDevice(deviceType,String8(address.c_str()),AUDIO_FORMAT_DEFAULT);
}
audio_devices_t Engine::getDeviceForInputSource(audio_source_t inputSource) const{
    // this function maps the inputSource directly to a device type
    switch (inputSource) {
    case AUDIO_SOURCE_DEFAULT:
    case AUDIO_SOURCE_MIC:
    if (property_get_bool("vendor.audio.enable.mirrorlink", false) &&
        (availableDeviceTypes & AUDIO_DEVICE_IN_REMOTE_SUBMIX)) {
        device = AUDIO_DEVICE_IN_REMOTE_SUBMIX;
    } else if (......
    }
    .......  
}

     Nothing special here; it is just a fixed mapping from input source to device type.
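
To make that mapping concrete, here is a condensed, illustrative version of the switch. It is not the real Engine::getDeviceForInputSource, which also checks which input devices are currently available (wired headset, BT SCO, USB, ...) and handles more sources; it only shows the flavour of the source-to-device mapping.

// Illustrative sketch only; the real engine also honours device availability.
audio_devices_t deviceTypeForInputSource(audio_source_t inputSource) {
    switch (inputSource) {
    case AUDIO_SOURCE_DEFAULT:
    case AUDIO_SOURCE_MIC:
        return AUDIO_DEVICE_IN_BUILTIN_MIC;    // main mic (unless a headset/USB mic is preferred)
    case AUDIO_SOURCE_CAMCORDER:
        return AUDIO_DEVICE_IN_BACK_MIC;       // back mic, if the product has one
    case AUDIO_SOURCE_VOICE_UPLINK:
    case AUDIO_SOURCE_VOICE_DOWNLINK:
    case AUDIO_SOURCE_VOICE_CALL:
        return AUDIO_DEVICE_IN_VOICE_CALL;     // telephony capture
    case AUDIO_SOURCE_REMOTE_SUBMIX:
        return AUDIO_DEVICE_IN_REMOTE_SUBMIX;  // capture of the mixed output
    default:
        return AUDIO_DEVICE_NONE;
    }
}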

6. Picking the input from the device

audio_io_handle_t AudioPolicyManager::getInputForDevice(const sp<DeviceDescriptor> &device, ......) {
    profileFormat = config->format;
    // 1) find a matching profile
    profile = getInputProfile(device, profileSamplingRate, profileFormat, profileChannelMask, profileFlags);
    // 2) create the input descriptor
    sp<AudioInputDescriptor> inputDesc = new AudioInputDescriptor(profile, mpClientInterface);
    // 3) open the input
    status_t status = inputDesc->open(&lConfig, device, halInputSource, profileFlags, &input);
    return input;
}

     This again breaks down into three steps. Let's focus on the first one: how a profile is chosen for the device.

sp<IOProfile> AudioPolicyManager::getInputProfile(const sp<DeviceDescriptor> &device, uint32_t& samplingRate, ......) {
    // everything this function relies on was parsed from audio_policy_configuration.xml
    for (const auto& hwModule : mHwModules) {
        // iterate over all hw modules parsed from audio_policy_configuration.xml: primary, a2dp, usb, etc.
        for (const auto& profile : hwModule->getInputProfiles()) {
            // iterate over the module's input profiles, also parsed from the xml,
            // and return the first one compatible with the request
            if (profile->isCompatibleProfile(DeviceVector(device), samplingRate,
                    &samplingRate  /*updatedSamplingRate*/,
                    format,
                    &format        /*updatedFormat*/,
                    channelMask,
                    &channelMask   /*updatedChannelMask*/,
                    (audio_output_flags_t) flags,
                    true /*exactMatchRequiredForInputFlags*/,
                    true,
                    true)) {
                return profile;
            }
        }
    }
}

  That essentially completes the flow. What follows, creating the input, the record track and so on, is nothing special, so I won't go through it here.

 

   Finally, let's look at what audio_policy_configuration.xml contains:

1. The routes a device belongs to

// e.g. device = AUDIO_DEVICE_IN_BUILTIN_MIC ("Built-In Mic")
<routes>
    <route type="mix" sink="fast input"
           sources="Built-In Mic,Built-In Back Mic,BT SCO Headset Mic,USB Device In,USB Headset In,Wired Headset Mic"/>
    <route type="mix" sink="voip_tx"
           sources="Built-In Mic,Built-In Back Mic,BT SCO Headset Mic,USB Device In,USB Headset In,Wired Headset Mic"/>
    <route type="mix" sink="record_24"
           sources="Built-In Mic,Built-In Back Mic,Wired Headset Mic"/>
    <route type="mix" sink="mmap_no_irq_in"
           sources="Built-In Mic,Built-In Back Mic,Wired Headset Mic,USB Device In,USB Headset In"/>
</routes>
// Built-In Mic shows up as a source in four routes here

2. The formats a device supports

<devicePort tagName="Built-In Mic" type="AUDIO_DEVICE_IN_BUILTIN_MIC" role="source">
      <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
            samplingRates="8000,11025,12000,16000,22050,24000,32000,44100,48000"
            channelMasks="AUDIO_CHANNEL_IN_MONO,AUDIO_CHANNEL_IN_STEREO,AUDIO_CHANNEL_IN_FRONT_BACK"/>
</devicePort>
// here you can see the sampling rates, channel masks, etc. the device supports

3. The profile information of each route's mixPort

<mixPort name="fast input" role="sink" maxOpenCount="2" maxActiveCount="2" flags="AUDIO_INPUT_FLAG_FAST">
        <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
              samplingRates="8000,11025,12000,16000,22050,24000,32000,44100,48000"
              channelMasks="AUDIO_CHANNEL_IN_MONO,AUDIO_CHANNEL_IN_STEREO,AUDIO_CHANNEL_IN_FRONT_BACK"/>
</mixPort>
<mixPort name="record_24" role="sink" maxOpenCount="2" maxActiveCount="2">
        <profile name="" format="AUDIO_FORMAT_PCM_24_BIT_PACKED"
               samplingRates="8000,11025,12000,16000,22050,24000,32000,44100,48000,96000,192000"
               channelMasks="AUDIO_CHANNEL_IN_MONO,AUDIO_CHANNEL_IN_STEREO,AUDIO_CHANNEL_IN_FRONT_BACK,AUDIO_CHANNEL_INDEX_MASK_3,AUDIO_CHANNEL_INDEX_MASK_4"/>
       <profile name="" format="AUDIO_FORMAT_PCM_8_24_BIT"
              samplingRates="8000,11025,12000,16000,22050,24000,32000,44100,48000,96000,192000"
              channelMasks="AUDIO_CHANNEL_IN_MONO,AUDIO_CHANNEL_IN_STEREO,AUDIO_CHANNEL_IN_FRONT_BACK,AUDIO_CHANNEL_INDEX_MASK_3,AUDIO_CHANNEL_INDEX_MASK_4"/>
       <profile name="" format="AUDIO_FORMAT_PCM_FLOAT"
              samplingRates="8000,11025,12000,16000,22050,24000,32000,44100,48000,96000,192000"
              channelMasks="AUDIO_CHANNEL_IN_MONO,AUDIO_CHANNEL_IN_STEREO,AUDIO_CHANNEL_IN_FRONT_BACK,AUDIO_CHANNEL_INDEX_MASK_3,AUDIO_CHANNEL_INDEX_MASK_4"/>
</mixPort>
<mixPort name="voice_rx" role="sink">
      <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
              samplingRates="8000,16000,48000" channelMasks="AUDIO_CHANNEL_IN_MONO,AUDIO_CHANNEL_IN_STEREO"/>
 </mixPort>
<mixPort name="mmap_no_irq_in" role="sink" flags="AUDIO_INPUT_FLAG_MMAP_NOIRQ">
       <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
              samplingRates="8000,11025,12000,16000,22050,24000,32000,44100,48000"
              channelMasks="AUDIO_CHANNEL_IN_MONO,AUDIO_CHANNEL_IN_STEREO,AUDIO_CHANNEL_IN_FRONT_BACK,AUDIO_CHANNEL_INDEX_MASK_3"/>
</mixPort>

demo:

For example, suppose the app requests: inputsource: AUDIO_SOURCE_MIC, samplingRate: 16000, channelMask: 2, bitdepth: 16.

Then the profile that gets selected is:

<mixPort name="voice_rx" role="sink">
      <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
              samplingRates="8000,16000,48000" channelMasks="AUDIO_CHANNEL_IN_MONO,AUDIO_CHANNEL_IN_STEREO"/>
</mixPort>

The selection works by running the three steps above and keeping the profile with the best match.
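
As a toy illustration of that matching step (this is not the real IOProfile::isCompatibleProfile, just the gist of the check, assuming a profile must list the requested format, sampling rate and channel mask):

#include <algorithm>
#include <cstdint>
#include <vector>
#include <system/audio.h>

// Toy sketch: does a profile support the requested format/rate/channel mask?
static bool profileMatches(audio_format_t profileFormat,
                           const std::vector<uint32_t>& profileRates,
                           const std::vector<audio_channel_mask_t>& profileMasks,
                           audio_format_t reqFormat, uint32_t reqRate,
                           audio_channel_mask_t reqMask) {
    return profileFormat == reqFormat &&
           std::find(profileRates.begin(), profileRates.end(), reqRate) != profileRates.end() &&
           std::find(profileMasks.begin(), profileMasks.end(), reqMask) != profileMasks.end();
}

// For the request above (PCM 16-bit, 16000 Hz, two channels), the voice_rx profile matches:
// profileMatches(AUDIO_FORMAT_PCM_16_BIT, {8000, 16000, 48000},
//                {AUDIO_CHANNEL_IN_MONO, AUDIO_CHANNEL_IN_STEREO},
//                AUDIO_FORMAT_PCM_16_BIT, 16000, AUDIO_CHANNEL_IN_STEREO) == true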

 

That's it for today. Hmm... only after finishing did I realize a mistake: I started with the details before covering the overall audio architecture. Going bottom-up like this is hard to piece together; top-down is much easier. Next time I'll share the audio framework itself. Stay tuned.
