
AudioTrack and AudioFlinger

2022-06-27 10:13:00 Step base

1. How the write operation is initiated in the native-layer AudioTrack
During code debugging we can see that the thread keeps reading data from the AudioFlinger shared buffer while AudioTrack's write keeps writing into it. So who initiates this write operation? We first have to go back to how the application-layer AudioTrack is created: an AudioTrack is created in either static mode or stream mode. In the former, a shared buffer is requested on the Java side and the data is written into it in one go; in stream mode the buffer is requested from AudioFlinger, and the Java layer copies data down to the native layer.
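
As a minimal illustration of that distinction (a sketch with a made-up helper name, not AOSP source), the two modes can be told apart by whether an app-supplied shared buffer exists; this is the same test writeToTrack() performs below.

#include <media/AudioTrack.h>

/* Hypothetical helper, not part of AOSP: a static-mode track carries the IMemory the
 * app supplied at creation time; a stream-mode track has none and relies on the buffer
 * that AudioFlinger allocates. */
static bool isStreamMode(const android::sp<android::AudioTrack>& track) {
    return track->sharedBuffer() == 0;
}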

public int write(byte[] audioData, int offsetInBytes, int sizeInBytes) {
    ...
    int ret = native_write_byte(audioData, offsetInBytes, sizeInBytes, mAudioFormat,true /*isBlocking*/);
    ...
}

audioData is the buffer allocated by the Java application layer; it holds the data read from a file or from a decoder. Whatever the data type, native_write_byte is called to enter the JNI layer, and the JNI layer eventually calls the writeToTrack function:

jint writeToTrack(const sp<AudioTrack>& track, jint audioFormat, const jbyte* data,
                  jint offsetInBytes, jint sizeInBytes, bool blocking = true) {
                  
    ssize_t written = 0;
     /* stream mode takes this if branch */
    if (track->sharedBuffer() == 0) {
        written = track->write(data + offsetInBytes, sizeInBytes, blocking);
        // for compatibility with earlier behavior of write(), return 0 in this case
        if (written == (ssize_t) WOULD_BLOCK) {
            written = 0;
        }
    } else {
        /* static mode */
        const audio_format_t format = audioFormatToNative(audioFormat);
        switch (format) {

        default:
        case AUDIO_FORMAT_PCM_FLOAT:
        case AUDIO_FORMAT_PCM_16_BIT: {
            // writing to shared memory, check for capacity
            if ((size_t)sizeInBytes > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size();
            }
            memcpy(track->sharedBuffer()->pointer(), data + offsetInBytes, sizeInBytes);
            written = sizeInBytes;
            } break;

        case AUDIO_FORMAT_PCM_8_BIT: {
            // data contains 8bit data we need to expand to 16bit before copying
            // to the shared memory
            // writing to shared memory, check for capacity,
            // note that input data will occupy 2X the input space due to 8 to 16bit conversion
            if (((size_t)sizeInBytes)*2 > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size() / 2;
            }
            int count = sizeInBytes;
            int16_t *dst = (int16_t *)track->sharedBuffer()->pointer();
            const uint8_t *src = (const uint8_t *)(data + offsetInBytes);
            memcpy_to_i16_from_u8(dst, src, count);
            // even though we wrote 2*sizeInBytes, we only report sizeInBytes as written to hide
            // the 8bit mixer restriction from the user of this function
            written = sizeInBytes;
            } break;

        }
    }
    return written;

}

 

This function is actually quite simple: it does a memcpy according to the track type. In stream mode the data is copied into the buffer that AudioFlinger allocated on the native side; in static mode it is copied directly into the shared buffer requested on the Java side.
From the analysis above you can see that in stream mode the native-layer write operation is initiated by the upper layer, and the upper layer can initiate it in two ways: the first is an active push, the second is a callback driven from the native layer.
a. Active push, for example:

 /* the while loop keeps calling write to push data */
 if (readCount != 0 && readCount != -1) {
     if (mTrack.getPlayState() == AudioTrack.PLAYSTATE_PLAYING){
         mTrack.write(mTempBuffer, 0, readCount);
     }
 }

b. Callback: when the track is created, the Java layer can register a callback function:

AudioTrack::set() @ AudioTrack.cpp
{
    if (cbf != NULL) {
        mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
        mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
    }    
}

If cbf is not null, an AudioTrackThread is created and keeps writing data; the actual work happens in AudioTrack::AudioTrackThread::threadLoop() in AudioTrack.cpp, which we will not analyze in detail here.
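
For reference, here is a minimal sketch of what such a callback looks like on the native side (the PCM source type and its names are assumptions, not AOSP code): once cbf is registered, AudioTrackThread keeps invoking it with EVENT_MORE_DATA, and the callback fills the supplied buffer instead of the app calling write explicitly.

#include <media/AudioTrack.h>
using namespace android;

struct MyPcmSource {                           // hypothetical PCM source
    size_t read(void* dst, size_t maxBytes);   // fills dst, returns bytes produced
};

/* Invoked from AudioTrackThread: fill buffer->raw with up to buffer->size bytes and
 * report back how many bytes were actually produced. */
static void audioCallback(int event, void* user, void* info) {
    if (event != AudioTrack::EVENT_MORE_DATA) return;
    AudioTrack::Buffer* buffer = static_cast<AudioTrack::Buffer*>(info);
    MyPcmSource* source = static_cast<MyPcmSource*>(user);
    buffer->size = source->read(buffer->raw, buffer->size);
}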

2. The producer-consumer model in AudioTrack
The consumption of audio data through AudioTrack is a dynamic producer-consumer model: AudioTrack can be regarded as the producer, and the thread run by AudioFlinger is the final consumer. The relationship between the two is shown below:

AudioTrack cooperates with AudioFlinger through the IAudioTrack interface; the Track object inside AudioFlinger does not itself support binder communication, so AudioFlinger operates on the track through the proxy pattern. Take another look at the picture below:

This picture covers the inheritance relationships among AudioTrack (AT), AudioFlinger (AF) and the threads. AudioFlinger manages two kinds of threads: PlaybackThread for playback and RecordThread for recording. Depending on whether mixing is needed, PlaybackThread is further divided into MixerThread and DirectOutputThread. Both ultimately control the Track through the proxy TrackHandle; note that TrackHandle does support binder communication. So how do the producer and the consumer operate on the shared buffer? The picture below makes that clear, and the next two sections introduce the producer and the consumer respectively:
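
As a minimal illustration of the TrackHandle proxying mentioned above (simplified, not the complete Tracks.cpp source), every IAudioTrack call that arrives over binder is simply forwarded to the wrapped Track:

status_t AudioFlinger::TrackHandle::start() {
    return mTrack->start();   // mTrack is the PlaybackThread::Track being proxied
}

void AudioFlinger::TrackHandle::stop() {
    mTrack->stop();
}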

 

3. How the producer writes data into the shared buffer
Before analyzing the producer and the consumer, let's first look at the very important structure audio_track_cblk_t:

struct audio_track_cblk_t
{
                // Since the control block is always located in shared memory, this constructor
                // is only used for placement new().  It is never used for regular new() or stack.
                            audio_track_cblk_t();
                /*virtual*/ ~audio_track_cblk_t() { }

                friend class Proxy;
                friend class ClientProxy;
                friend class AudioTrackClientProxy;
                friend class AudioRecordClientProxy;
                friend class ServerProxy;
                friend class AudioTrackServerProxy;
                friend class AudioRecordServerProxy;

    // The data members are grouped so that members accessed frequently and in the same context
    // are in the same line of data cache.

                uint32_t    mServer;    // Number of filled frames consumed by server (mIsOut),
                                        // or filled frames provided by server (!mIsOut).
                                        // It is updated asynchronously by server without a barrier.
                                        // The value should be used "for entertainment purposes only",
                                        // which means don't make important decisions based on it.

                uint32_t    mPad1;      // unused

    volatile    int32_t     mFutex;     // event flag: down (P) by client,
                                        // up (V) by server or binderDied() or interrupt()
#define CBLK_FUTEX_WAKE 1               // if event flag bit is set, then a deferred wake is pending

private:

                // This field should be a size_t, but since it is located in shared memory we
                // force to 32-bit.  The client and server may have different typedefs for size_t.
                uint32_t    mMinimum;       // server wakes up client if available >= mMinimum

                // Stereo gains for AudioTrack only, not used by AudioRecord.
                gain_minifloat_packed_t mVolumeLR;

                uint32_t    mSampleRate;    // AudioTrack only: client's requested sample rate in Hz
                                            // or 0 == default. Write-only client, read-only server.

                // client write-only, server read-only
                uint16_t    mSendLevel;      // Fixed point U4.12 so 0x1000 means 1.0

                uint16_t    mPad2;           // unused

public:

    volatile    int32_t     mFlags;         // combinations of CBLK_*

                // Cache line boundary (32 bytes)

public:
                union {
                    AudioTrackSharedStreaming   mStreaming;
                    AudioTrackSharedStatic      mStatic;
                    int                         mAlign[8];
                } u;

                // Cache line boundary (32 bytes)
};

   

The things to pay special attention to are the friend classes declared above (you need to be clear about which of them the client side and the server side use when data is actually read and written) and the mStreaming member of the final union; take a look at its declaration:

struct AudioTrackSharedStreaming {
    // similar to NBAIO MonoPipe
    // in continuously incrementing frame units, take modulo buffer size, which must be a power of 2
    volatile int32_t mFront;    // read by server
    volatile int32_t mRear;     // write by client
    volatile int32_t mFlush;    // incremented by client to indicate a request to flush;
                                // server notices and discards all data between mFront and mRear
    volatile uint32_t mUnderrunFrames;  // server increments for each unavailable but desired frame
};

 

AudioTrack's buffer is still a ring buffer, but mFront and mRear are not simply a read pointer and a write pointer. These two values actually record, over the lifetime of the track, the total number of frames the producer has written into the shared buffer and the total number of frames the consumer has read out of it; mUnderrunFrames records the number of underrun frames.
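
To make the arithmetic concrete, here is a small standalone sketch (a simplification written for this article, not AudioTrackShared.cpp itself) of how the free-running mRear/mFront counters yield the filled frame count, the free space and the physical write offset. frameCount must be a power of two, as the comment in AudioTrackSharedStreaming requires.

#include <stdint.h>
#include <stddef.h>

struct StreamingIndices {
    volatile int32_t mFront;   // total frames the server (consumer) has read so far
    volatile int32_t mRear;    // total frames the client (producer) has written so far
};

/* Frames written but not yet consumed; this is what the server-side obtainBuffer may
 * hand out. The subtraction is wrap-safe because both counters overflow together. */
static inline size_t filledFrames(const StreamingIndices& s) {
    return (size_t)((uint32_t)s.mRear - (uint32_t)s.mFront);
}

/* Frames the producer may still write without overwriting unread data; this is what
 * the client-side obtainBuffer works with. */
static inline size_t availableToWrite(const StreamingIndices& s, size_t frameCount) {
    return frameCount - filledFrames(s);
}

/* Physical offset of the next frame to write: modulo via bit mask, which is why the
 * buffer size has to be a power of two. */
static inline size_t writeOffset(const StreamingIndices& s, size_t frameCount) {
    return (size_t)((uint32_t)s.mRear & (uint32_t)(frameCount - 1));
}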

From the previous analysis we already know that AudioTrack is the producer, continuously writing data into the shared buffer. Here is an excerpt from AudioTrack::write() @ AudioTrack.cpp:

ssize_t AudioTrack::write(const void* buffer, size_t userSize, bool blocking)
{
    ...
    size_t written = 0;
    Buffer audioBuffer;    
    while (userSize >= mFrameSize) {
        /* 1. Convert the data to be written into frames */
        audioBuffer.frameCount = userSize / mFrameSize;
        /* 2. Find a writable region in the shared buffer */
        status_t err = obtainBuffer(&audioBuffer,
                blocking ? &ClientProxy::kForever : &ClientProxy::kNonBlocking);
        ...
        toWrite = audioBuffer.size;
        /* 3. Assuming a 16-bit sample width, copy the data into shared memory */
        memcpy(audioBuffer.i8, buffer, toWrite);
        ...
        /* 4. Bookkeeping: writing happens in chunks around the ring, so advance until everything is written */
        buffer = ((const char *) buffer) + toWrite;
        userSize -= toWrite;
        written += toWrite;
        /* 5. Release this buffer region */
        releaseBuffer(&audioBuffer);
    }
    return written;
}

   

The logic is simple: obtainBuffer finds a region of memory that data can be written into, the data is copied in, and finally releaseBuffer releases that region (release here means "filled and ready to be read", not deleted). One thing to emphasize about obtainBuffer: the AudioTrack class has an overloaded obtainBuffer, and the older version is obsolete, so watch the parameters and don't pick the wrong one. We won't analyze the internals here; essentially it goes through obtainBuffer (the client side in AudioTrackShared.cpp) to get mCblk and then does the position arithmetic, and releaseBuffer can be analyzed the same way. In addition, when debugging real problems, if you want to dump the data before mixing, the write function is the place to do it.
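
As a concrete example of that last point, here is a minimal debug sketch (an assumption written for this article, not shipped AOSP code): call it right after the memcpy in the write loop above, e.g. dumpPreMixPcm(buffer, toWrite). The dump path is only an example; pick a location your device and SELinux policy actually allow.

#include <stdio.h>
#include <stddef.h>

/* Append the bytes that write() is about to copy into the shared buffer to a file,
 * so the pre-mix PCM can be pulled off the device and inspected offline. */
static void dumpPreMixPcm(const void* data, size_t bytes) {
    static FILE* f = fopen("/data/local/tmp/at_premix.pcm", "ab");  // example path only
    if (f != NULL) {
        fwrite(data, 1, bytes, f);
        fflush(f);
    }
}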

4. How the consumer reads data from the shared buffer
The consumer side is more complicated. Let's start with how the consumer thread gets running. In the earlier analysis of AudioFlinger's createTrack we saw that checkPlaybackThread_l uses the incoming output handle to find an already existing PlaybackThread, which means that by the time the application creates an AudioTrack the thread has already been created and is running. So where is it created? When the system boots and AudioPolicyService is loaded, the threads are created while audio_policy.conf is parsed; AudioPolicyService eventually creates the thread through AudioFlinger's openOutput_l:

        if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
            thread = new OffloadThread(this, outputStream, *output, devices);
            ALOGV("openOutput_l() created offload output: ID %d thread %p", *output, thread);
        } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
                || !isValidPcmSinkFormat(config->format)
                || !isValidPcmSinkChannelMask(config->channel_mask)) {
            thread = new DirectOutputThread(this, outputStream, *output, devices);
            ALOGV("openOutput_l() created direct output: ID %d thread %p", *output, thread);
        } else {
            thread = new MixerThread(this, outputStream, *output, devices);
            ALOGV("openOutput_l() created mixer output: ID %d thread %p", *output, thread);
        }

As the snippet above shows, the thread type is chosen according to the flags. From the earlier inheritance diagram you can see that these threads all inherit from PlaybackThread, unlike recording, which has a single RecordThread with no subclasses. Now that we have found where the thread is created, how does it get to run? This relies on the onFirstRef feature of strong pointers, which fires on the first strong reference. First look at where the first strong reference is taken:

AudioFlinger::openOutput() @ AudioFlinger.cpp

sp<PlaybackThread> thread = openOutput_l(module, output, config, *devices, address, flags);

 

Now look at PlaybackThread's onFirstRef function:

void AudioFlinger::PlaybackThread::onFirstRef()
{
    run(mName, ANDROID_PRIORITY_URGENT_AUDIO);
}
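
A minimal sketch of the RefBase behaviour being relied on here (illustrative, not the real thread code): onFirstRef() runs the first time a strong pointer to the object is taken, which is why assigning the newly created thread to an sp<PlaybackThread> is enough to get it running.

#include <utils/RefBase.h>

struct Worker : public android::RefBase {
    virtual void onFirstRef() {
        // start the thread here, just as PlaybackThread::onFirstRef() calls run()
    }
};

// android::sp<Worker> w = new Worker();   // onFirstRef() fires at this assignment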

  

Once the consumer thread is running, how does it know which track has data that needs to be fetched? That is completed by AudioTrack's start. Here is a fragment of AudioTrack::start:

status_t AudioTrack::start()
{
    ...
    status_t status = NO_ERROR;
    if (!(flags & CBLK_INVALID)) {
        /* call the track's start */
        status = mAudioTrack->start();
        if (status == DEAD_OBJECT) {
            flags |= CBLK_INVALID;
        }
    }
    ...
}

  

Here mAudioTrack is an sp<IAudioTrack>; seeing IAudioTrack you should realize this is the AudioFlinger end, namely the Track proxied by TrackHandle. Let's go to Tracks.cpp and take a look:

status_t AudioFlinger::PlaybackThread::Track::start(AudioSystem::sync_event_t event __unused,
                                                    int triggerSession __unused)
{
    ...
    PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
    status = playbackThread->addTrack_l(this);
    ...
}

addTrack_l adds the current track; where does it add it?

AudioFlinger::PlaybackThread::addTrack_l() @ Threads.cpp

mActiveTracks.add(track);

 

mActiveTracks is a member variable of the playback thread PlaybackThread; it records all the active tracks in the whole playback thread, and you can also see this data in dumpsys media.audio_flinger. To summarize: when the application calls AudioTrack's start interface, as far as the native-layer AudioFlinger is concerned, the track is added to the active track list of the playback/recording thread so its data can be processed.
"Everything is ready, all we lack is the east wind." All the preparation is done. Suppose the application now keeps calling write to push data into the shared buffer; how does the consumer fetch it? The answer lies in PlaybackThread's threadLoop function. Simplified, threadLoop boils down to "three axe strokes"; here is the snippet:

bool AudioFlinger::PlaybackThread::threadLoop()
{
    ...
    /* 1. Find the active track */
    mMixerStatus = prepareTracks_l(&tracksToRemove);
    ...
    /* 2. Run the mixing algorithm on the buffers */
    threadLoop_mix();
    ...
    /* 3. Write the mixed data to the output in the HAL layer */
    ssize_t ret = threadLoop_write();
    ...
}

What matters in threadLoop is what these three functions accomplish. The first axe, prepareTracks_l, is fairly complicated overall; it obtains the active tracks from mActiveTracks, but what we need to pay attention to are the operations on the member variable mAudioMixer:

    mAudioMixer->setBufferProvider(name, track);
    mAudioMixer->enable(name);

  

Inside it, mAudioMixer's setBufferProvider is set up and the mixer is enabled. The key point is this enable; let's follow it into AudioMixer.cpp:

void AudioMixer::enable(int name)
{
    name -= TRACK0;
    ALOG_ASSERT(uint32_t(name) < MAX_NUM_TRACKS, "bad track name %d", name);
    track_t& track = mState.tracks[name];

    if (!track.enabled) {
        track.enabled = true;
        ALOGV("enable(%d)", name);
        invalidateState(1 << name);
    }
}

Here name is actually the index of the track. Continue with invalidateState:

void AudioMixer::invalidateState(uint32_t mask)
{
    if (mask != 0) {
        mState.needsChanged |= mask;
        mState.hook = process__validate;
    }
 }

 

You will see a function pointer hook here, and you can guess that process__validate points it at a suitable routine to execute. We won't analyze process__validate in detail; all you need to know is that it selects the mixing function that matches the current track scenario. So where is this function pointer used?
The second axe, threadLoop_mix, gives the answer right away:

AudioFlinger::MixerThread::threadLoop_mix() @ Threads.cpp
mAudioMixer->process(pts);

mAudioMixer is the mixer; mixing is done through the mixer's process. The code is:

AudioMixer::process() @ AudioMixer.cpp

void AudioMixer::process(int64_t pts)
{
    mState.hook(&mState, pts);
}
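
That single line is the whole dispatch. Here is a minimal standalone sketch of the pattern (the names and the selection logic are assumptions, far simpler than the real process__validate): a function pointer kept in the shared mixer state is re-pointed at whichever routine suits the currently enabled tracks, so each buffer is processed with one indirect call.

#include <stdint.h>

struct MixState;
typedef void (*process_hook_t)(MixState* state, int64_t pts);

static void processOneTrackFastPath(MixState*, int64_t) { /* one enabled track, no resampling */ }
static void processGeneric(MixState*, int64_t)          { /* everything else */ }

struct MixState {
    uint32_t        enabledTracks;   // bit mask of enabled tracks
    process_hook_t  hook;            // currently installed process routine
};

/* process__validate-style re-selection: install the cheapest routine that can handle
 * the current combination of enabled tracks. */
static void revalidate(MixState* s) {
    s->hook = (__builtin_popcount(s->enabledTracks) == 1) ? processOneTrackFastPath
                                                          : processGeneric;
}

/* Per-buffer entry point, mirroring AudioMixer::process(): a single indirect call. */
static void process(MixState* s, int64_t pts) {
    s->hook(s, pts);
}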

  

Now we understand the intention of enabling mAudioMixer in the first axe, prepareTracks_l. In my current scenario only one track is playing, so the function it calls is process__OneTrack16BitsStereoNoResampling. So far we still haven't seen anything that touches the shared memory; hold on a little longer and look at a snippet of that function:

void AudioMixer::process__OneTrack16BitsStereoNoResampling(state_t* state,
                                                           int64_t pts)
{
    ...
    /* 1. Get readable buffer */
    t.bufferProvider->getNextBuffer(&b, outputPTS);
    ....
    /* 2. After the algorithm has processed the samples, write them into out */
    *out++ = (r<<16) | (l & 0xFFFF);
    ...
    /* 3. When processing is done, release this buffer region */
    t.bufferProvider->releaseBuffer(&b);
    ...
}

   

This is very similar to the way the write function in AudioTrack operates on the shared buffer. t.bufferProvider is the track object; let's take a look at getNextBuffer:

status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(
        AudioBufferProvider::Buffer* buffer, int64_t pts __unused)
{
    ServerProxy::Buffer buf;
    size_t desiredFrames = buffer->frameCount;
    buf.mFrameCount = desiredFrames;
    status_t status = mServerProxy->obtainBuffer(&buf);
    buffer->frameCount = buf.mFrameCount;
    buffer->raw = buf.mRaw;
    if (buf.mFrameCount == 0) {
        mAudioTrackServerProxy->tallyUnderrunFrames(desiredFrames);
    }
    return status;
}

  

When analyzing the producer we met ClientProxy; now we see mServerProxy, so the target is not far away. Take a glance at ServerProxy::obtainBuffer() @ AudioTrackShared.cpp:

status_t ServerProxy::obtainBuffer(Buffer* buffer, bool ackFlush)
{
    ...
    audio_track_cblk_t* cblk = mCblk;
    ...
}

   

As soon as you step in you finally see the familiar mCblk. It turns out that both the producer and the consumer operate on the shared buffer through obtainBuffer and releaseBuffer; the consumer side is just wrapped in more layers. To sum up the second axe: it finds the readable shared buffer through the audio_track_cblk_t structure and then mixes the data. The third axe should now be clear: write the processed data to the final output:

AudioFlinger::PlaybackThread::threadLoop_write() @ Threads.cpp

ssize_t AudioFlinger::PlaybackThread::threadLoop_write()
{
    ...
    bytesWritten = mOutput->stream->write(mOutput->stream,
            (char *)mSinkBuffer + offset, mBytesRemaining);
    ...
}

mOutput is the output defined by the audio policy (AudioPolicyService); here the data is written directly into the corresponding HAL layer.
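
For reference, the HAL hook being invoked here is the write function pointer of the legacy audio HAL's output stream. The sketch below reproduces just that member from hardware/audio.h approximately (treat the exact layout as an assumption); PlaybackThread hands the mixed buffer straight to the vendor implementation of this callback.

#include <sys/types.h>

/* Sketch of the relevant part of the legacy audio HAL interface: the vendor library
 * fills in this function pointer, and threadLoop_write() calls it with the mixed PCM. */
struct audio_stream_out {
    /* ... other stream fields and callbacks elided ... */
    ssize_t (*write)(struct audio_stream_out *stream, const void *buffer, size_t bytes);
};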


