AudioTrack and AudioFlinger
2022-06-27 10:13:00 【Step base】
1. How a write operation is initiated in the native-layer AudioTrack
While debugging the code, we saw that the playback thread keeps reading data from AudioFlinger's shared buffer, and that AudioTrack's write keeps writing into it. So who initiates this write? We have to go back to how the application-layer AudioTrack is created. An AudioTrack is created in either static mode or stream mode. In static mode, a shared buffer is requested in the Java layer and the data is written into it in one shot; in stream mode, the buffer is requested from AudioFlinger, and the Java layer copies the data down to the native layer.
public int write(byte[] audioData, int offsetInBytes, int sizeInBytes) {
    ...
    int ret = native_write_byte(audioData, offsetInBytes, sizeInBytes, mAudioFormat,
            true /*isBlocking*/);
    ...
}
audioData is the Java-layer application buffer, used to hold data read from a file or a decoder. Whatever the data type, native_write_byte is eventually called to enter the JNI layer, and the JNI layer in turn calls the writeToTrack function:
jint writeToTrack(const sp<AudioTrack>& track, jint audioFormat, const jbyte* data,
                  jint offsetInBytes, jint sizeInBytes, bool blocking = true) {
    ssize_t written = 0;
    /* stream mode takes this if branch */
    if (track->sharedBuffer() == 0) {
        written = track->write(data + offsetInBytes, sizeInBytes, blocking);
        // for compatibility with earlier behavior of write(), return 0 in this case
        if (written == (ssize_t) WOULD_BLOCK) {
            written = 0;
        }
    } else {
        /* static mode */
        const audio_format_t format = audioFormatToNative(audioFormat);
        switch (format) {
        default:
        case AUDIO_FORMAT_PCM_FLOAT:
        case AUDIO_FORMAT_PCM_16_BIT: {
            // writing to shared memory, check for capacity
            if ((size_t)sizeInBytes > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size();
            }
            memcpy(track->sharedBuffer()->pointer(), data + offsetInBytes, sizeInBytes);
            written = sizeInBytes;
            } break;
        case AUDIO_FORMAT_PCM_8_BIT: {
            // data contains 8bit data we need to expand to 16bit before copying
            // to the shared memory
            // writing to shared memory, check for capacity,
            // note that input data will occupy 2X the input space due to 8 to 16bit conversion
            if (((size_t)sizeInBytes)*2 > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size() / 2;
            }
            int count = sizeInBytes;
            int16_t *dst = (int16_t *)track->sharedBuffer()->pointer();
            const uint8_t *src = (const uint8_t *)(data + offsetInBytes);
            memcpy_to_i16_from_u8(dst, src, count);
            // even though we wrote 2*sizeInBytes, we only report sizeInBytes as written to hide
            // the 8bit mixer restriction from the user of this function
            written = sizeInBytes;
            } break;
        }
    }
    return written;
}
This function is actually quite simple: it does a memcpy according to the track type. In stream mode the data is copied, via track->write, into the buffer the native-layer AudioFlinger allocated; in static mode it is copied directly into the shared buffer the Java layer requested;
From the analysis above you can see that in stream mode the native-layer write operation is initiated by the upper layer, and the upper layer can initiate it in two ways: the first is an active push, the second is a callback from the native layer;
a. Active push, for example:
/* a while loop keeps calling write to feed data */
if (readCount != 0 && readCount != -1) {
    if (mTrack.getPlayState() == AudioTrack.PLAYSTATE_PLAYING) {
        mTrack.write(mTempBuffer, 0, readCount);
    }
}
b. Register a callback. When the track is created, the Java layer can set a callback function:
In AudioTrack::set() (AudioTrack.cpp):
{
    if (cbf != NULL) {
        mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
        mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
    }
}
If cbf is not NULL, an AudioTrackThread is created that keeps writing data; the actual work happens in AudioTrack::AudioTrackThread::threadLoop() (AudioTrack.cpp), which we won't analyze in detail here;
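The callback path can be pictured as a loop that repeatedly asks the client for more data, the way AudioTrackThread drives the EVENT_MORE_DATA callback. Here is a schematic sketch of that pull model, not the AOSP code: the names runPullLoop and MoreDataFn are invented, and the loop runs synchronously instead of on its own thread.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical stand-in for the callback signature: given a destination and a
// capacity in frames, fill some frames and report how many were produced.
using MoreDataFn = std::function<size_t(int16_t* dst, size_t maxFrames)>;

// Schematic pull loop: keep invoking the callback ("EVENT_MORE_DATA") until it
// produces nothing, appending each chunk to the sink. Returns total frames.
size_t runPullLoop(MoreDataFn cbf, std::vector<int16_t>& sink) {
    int16_t scratch[64];
    size_t total = 0;
    for (;;) {
        size_t got = cbf(scratch, 64);   // ask the client for more data
        if (got == 0) break;             // callback has nothing left
        sink.insert(sink.end(), scratch, scratch + got);
        total += got;
    }
    return total;
}
```

In the real AudioTrackThread the loop blocks and waits rather than exiting when no data is ready, but the data flow direction is the same: the native side pulls, the application callback fills.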
2. The producer-consumer model in AudioTrack
The consumption of audio data by the whole AudioTrack machinery is a dynamic producer-consumer model: AudioTrack can be thought of as the producer, and the thread driven by AudioFlinger as the final consumer. The relationship between the two is shown below:
AudioTrack cooperates with AudioFlinger through IAudioTrack; the Track object itself does not support Binder communication, so AudioFlinger operates on the Track through the proxy pattern. Take another look at the diagram below:
This diagram covers the inheritance relationships of AT (AudioTrack), AF (AudioFlinger) and the threads. AudioFlinger manages two kinds of threads: PlaybackThread for playback and RecordThread for recording. PlaybackThread is further divided, according to whether mixing is needed, into MixerThread and DirectOutputThread. Both ultimately control the Track through the proxy TrackHandle; note that TrackHandle does support Binder communication. So how do the producer and consumer operate on the shared buffer? The diagram below states it clearly; the next two sections introduce the producer and the consumer respectively:
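TrackHandle's role, a Binder-capable object wrapping a Track that is not, is an instance of the classic proxy pattern. A minimal sketch of that shape, with the Binder machinery omitted and the names Track/TrackHandleSketch simplified from the real classes:

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Stand-in for the real Track, which lives inside PlaybackThread and is not a
// Binder object itself.
struct Track {
    bool started = false;
    void start() { started = true; }
    void stop()  { started = false; }
};

// TrackHandle-like proxy: in AOSP it derives from BnAudioTrack, so the client's
// IAudioTrack calls cross Binder, land here, and are forwarded to the Track.
class TrackHandleSketch {
public:
    explicit TrackHandleSketch(std::shared_ptr<Track> t) : mTrack(std::move(t)) {}
    void start() { mTrack->start(); }   // forward to the real object
    void stop()  { mTrack->stop(); }
private:
    std::shared_ptr<Track> mTrack;
};
```

The point of the indirection is that the thread-owned Track never has to know about IPC; everything remote goes through the handle.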
3. How the producer writes data into the shared buffer
Before analyzing the producer and consumer, let's first look at the all-important structure audio_track_cblk_t:
struct audio_track_cblk_t
{
    // Since the control block is always located in shared memory, this constructor
    // is only used for placement new().  It is never used for regular new() or stack.
                audio_track_cblk_t();
    /*virtual*/ ~audio_track_cblk_t() { }

    friend class Proxy;
    friend class ClientProxy;
    friend class AudioTrackClientProxy;
    friend class AudioRecordClientProxy;
    friend class ServerProxy;
    friend class AudioTrackServerProxy;
    friend class AudioRecordServerProxy;

    // The data members are grouped so that members accessed frequently and in the same context
    // are in the same line of data cache.

    uint32_t    mServer;    // Number of filled frames consumed by server (mIsOut),
                            // or filled frames provided by server (!mIsOut).
                            // It is updated asynchronously by server without a barrier.
                            // The value should be used "for entertainment purposes only",
                            // which means don't make important decisions based on it.
    uint32_t    mPad1;      // unused

    volatile    int32_t mFutex;     // event flag: down (P) by client,
                                    // up (V) by server or binderDied() or interrupt()
#define CBLK_FUTEX_WAKE 1   // if event flag bit is set, then a deferred wake is pending

private:
    // This field should be a size_t, but since it is located in shared memory we
    // force to 32-bit.  The client and server may have different typedefs for size_t.
    uint32_t    mMinimum;   // server wakes up client if available >= mMinimum

    // Stereo gains for AudioTrack only, not used by AudioRecord.
    gain_minifloat_packed_t mVolumeLR;

    uint32_t    mSampleRate;    // AudioTrack only: client's requested sample rate in Hz
                                // or 0 == default. Write-only client, read-only server.

    uint16_t    mSendLevel;     // Fixed point U4.12 so 0x1000 means 1.0
    uint16_t    mPad2;          // unused

public:
    volatile    int32_t mFlags;     // combinations of CBLK_*

    // Cache line boundary (32 bytes)

public:
    union {
        AudioTrackSharedStreaming mStreaming;
        AudioTrackSharedStatic    mStatic;
        int                       mAlign[8];
    } u;

    // Cache line boundary (32 bytes)
};
The things to pay special attention to are the friend classes declared at the top (in actual data reads and writes you must be clear which side is the client and which the server) and, inside the union at the end, the mStreaming member. Take a look at its declaration:
struct AudioTrackSharedStreaming {
    // similar to NBAIO MonoPipe
    // in continuously incrementing frame units, take modulo buffer size, which must be a power of 2
    volatile int32_t mFront;    // read by server
    volatile int32_t mRear;     // write by client
    volatile int32_t mFlush;    // incremented by client to indicate a request to flush;
                                // server notices and discards all data between mFront and mRear
    volatile uint32_t mUnderrunFrames;  // server increments for each unavailable but desired frame
};
AudioTrack's buffer is still a ring buffer, but mFront and mRear are not simply a read pointer and a write pointer. These two values actually record, over the life of the track, the total number of frames the producer has written into the shared buffer and the total number of frames the consumer has read out of it; mUnderrunFrames records the number of underrun frames.
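The mFront/mRear scheme — monotonically increasing frame counters reduced modulo a power-of-two buffer size — can be sketched as follows. This is a simplified, single-threaded illustration (the real code uses atomics, memory barriers and futexes), and the struct name RingSketch is invented:

```cpp
#include <cassert>
#include <cstdint>

// Simplified version of the AudioTrackSharedStreaming scheme: rear and front
// count TOTAL frames ever written/read; the buffer index is the counter masked
// by (size - 1), which is why the size must be a power of two. Unsigned
// wraparound keeps (rear - front) correct even after the counters overflow.
struct RingSketch {
    static constexpr uint32_t kFrames = 8;   // must be a power of 2
    int16_t  buf[kFrames];
    uint32_t rear  = 0;                      // advanced by the producer
    uint32_t front = 0;                      // advanced by the consumer

    uint32_t filled() const { return rear - front; }        // frames pending
    uint32_t avail()  const { return kFrames - filled(); }  // free space

    bool write(int16_t s) {                  // producer side
        if (avail() == 0) return false;      // buffer full
        buf[rear & (kFrames - 1)] = s;
        ++rear;
        return true;
    }
    bool read(int16_t* out) {                // consumer side
        if (filled() == 0) return false;     // buffer empty (underrun)
        *out = buf[front & (kFrames - 1)];
        ++front;
        return true;
    }
};
```

Because the counters never reset, the server can always compute how many frames are pending as rear minus front, exactly the quantity obtainBuffer needs on each side.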
From the previous analysis we already know that AudioTrack plays the producer, continuously writing data into the shared buffer. Here is an excerpt of AudioTrack::write() (AudioTrack.cpp):
ssize_t AudioTrack::write(const void* buffer, size_t userSize, bool blocking)
{
    ...
    size_t written = 0;
    Buffer audioBuffer;

    while (userSize >= mFrameSize) {
        /* 1. convert the amount of data to be written into frames */
        audioBuffer.frameCount = userSize / mFrameSize;

        /* 2. find a writable region in the shared buffer */
        status_t err = obtainBuffer(&audioBuffer,
                blocking ? &ClientProxy::kForever : &ClientProxy::kNonBlocking);
        ...
        toWrite = audioBuffer.size;
        /* 3. assuming a 16-bit sample width, copy the data into shared memory */
        memcpy(audioBuffer.i8, buffer, toWrite);
        ...
        /* 4. bookkeeping: the write is circular, so loop until everything is written */
        buffer = ((const char *) buffer) + toWrite;
        userSize -= toWrite;
        written += toWrite;

        /* 5. release this chunk of buffer */
        releaseBuffer(&audioBuffer);
    }
    return written;
}
The logic is simple: obtainBuffer finds a region of memory that data can be written into, the data is copied, and finally releaseBuffer releases that region (here "release" means "filled and ready to be read", not deleted). One note on obtainBuffer: the AudioTrack class has an overloaded obtainBuffer and the older variant is obsolete, so pay attention to the parameters and don't pick the wrong one. We won't analyze its internals in detail; essentially obtainBuffer (the client side, in AudioTrackShared.cpp) gets hold of mCblk and computes from it, and releaseBuffer can be analyzed the same way. Incidentally, when debugging real problems, if you want to dump the data before mixing, this write function is the place to do it;
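The "dump before mixing" trick amounts to teeing the bytes inside write() to a file just before they are copied into the shared buffer. A hedged sketch of that idea, not the AOSP code: the helper name copyWithDump and the FILE*-based tee point are illustrative.

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>

// Copies `size` bytes to `dst`, like the memcpy in AudioTrack::write(), but
// first appends the raw PCM to `dumpFile` (if non-null) so the pre-mix stream
// can be inspected offline, e.g. imported into an editor as raw PCM.
size_t copyWithDump(void* dst, const void* src, size_t size, FILE* dumpFile) {
    if (dumpFile != nullptr) {
        fwrite(src, 1, size, dumpFile);   // tee the pre-mix data to the file
    }
    memcpy(dst, src, size);
    return size;
}
```

On a real device the dump target would be a writable path such as a file under the app's data directory; the sample rate, channel count and bit width must be noted separately, since a raw PCM dump carries no header.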
4. How the consumer reads data from the shared buffer
The consumer side is more involved. Let's start with how the consumer thread gets running. In the earlier analysis of AudioFlinger's createTrack we saw that checkPlaybackThread_l uses the incoming output handle to find an already-existing PlaybackThread. So by the time the application creates an AudioTrack, the thread has already been created and is spinning. Where was it created? When the system boots and AudioPolicyService loads, the threads are created while audio_policy.conf is parsed; AudioPolicyService ultimately creates them through AudioFlinger's openOutput_l:
if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
    thread = new OffloadThread(this, outputStream, *output, devices);
    ALOGV("openOutput_l() created offload output: ID %d thread %p", *output, thread);
} else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
        || !isValidPcmSinkFormat(config->format)
        || !isValidPcmSinkChannelMask(config->channel_mask)) {
    thread = new DirectOutputThread(this, outputStream, *output, devices);
    ALOGV("openOutput_l() created direct output: ID %d thread %p", *output, thread);
} else {
    thread = new MixerThread(this, outputStream, *output, devices);
    ALOGV("openOutput_l() created mixer output: ID %d thread %p", *output, thread);
}
As the snippet above shows, the thread created depends on the flags, and from the earlier inheritance diagram these threads all derive from PlaybackThread (unlike recording, which uses a single RecordThread with no subclasses). Now that we've found where the thread is created, how does it get to run? This relies on a feature of strong pointers: onFirstRef is called on the first strong reference. First, the place where the first strong reference is taken:
sp<PlaybackThread> thread = openOutput_l(module, output, config, *devices, address, flags);
Then look at PlaybackThread's onFirstRef function:
void AudioFlinger::PlaybackThread::onFirstRef()
{
    run(mName, ANDROID_PRIORITY_URGENT_AUDIO);
}
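The onFirstRef trick relies on Android's RefBase: the hook fires exactly once, when the strong count first goes from zero to one, which is why assigning openOutput_l's result to an sp<PlaybackThread> is what actually starts the thread. A simplified imitation of the mechanism (not the real RefBase; RefSketch and ThreadSketch are invented names):

```cpp
#include <cassert>

// Minimal imitation of RefBase::onFirstRef(): the virtual hook is invoked
// exactly once, when the strong count goes 0 -> 1.
class RefSketch {
public:
    virtual ~RefSketch() {}
    void incStrong() { if (mStrong++ == 0) onFirstRef(); }
    void decStrong() { if (--mStrong == 0) delete this; }
protected:
    virtual void onFirstRef() {}
private:
    int mStrong = 0;
};

// PlaybackThread-style subclass: "run" the thread on the first reference.
struct ThreadSketch : RefSketch {
    static int runs;   // counts how many times onFirstRef fired
protected:
    void onFirstRef() override { ++runs; }
};
int ThreadSketch::runs = 0;
```

In AOSP the sp<> constructor and assignment call incStrong on the managed object, so the hook runs as a side effect of the very first sp<PlaybackThread> assignment.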
The consumer thread is now successfully running, but how does it know which track has data it should fetch? That is accomplished by AudioTrack's start command. Here is a fragment of AudioTrack::start():
status_t AudioTrack::start()
{
    ...
    status_t status = NO_ERROR;
    if (!(flags & CBLK_INVALID)) {
        /* call the track's start */
        status = mAudioTrack->start();
        if (status == DEAD_OBJECT) {
            flags |= CBLK_INVALID;
        }
    }
    ...
}
Here mAudioTrack is an sp<IAudioTrack>; the moment you see IAudioTrack you should know this is the AudioFlinger side, i.e. the Track proxied by TrackHandle. Let's look in Tracks.cpp:
status_t AudioFlinger::PlaybackThread::Track::start(AudioSystem::sync_event_t event __unused,
                                                    int triggerSession __unused)
{
    ...
    PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
    status = playbackThread->addTrack_l(this);
    ...
}
addTrack_l adds the current track; where does it add it?
mActiveTracks.add(track);
mActiveTracks is a member variable of the playback thread PlaybackThread; it records all active tracks in the entire playback thread (you can also see this data when inspecting the audioflinger service with dumpsys). To summarize: when the application calls AudioTrack's start interface, for the native-layer AudioFlinger this means adding the track to the playback (or recording) thread's list of active tracks so its data gets processed.
"Everything is ready except the east wind." All the preparation is done; suppose the application is now calling write and continuously feeding data into the shared buffer. How does the consumer fetch it? The answer lies in PlaybackThread's threadLoop function. Simplified, threadLoop is a "three-axe" job. An excerpt:
bool AudioFlinger::PlaybackThread::threadLoop()
{
    ...
    /* 1. find the active tracks */
    mMixerStatus = prepareTracks_l(&tracksToRemove);
    ...
    /* 2. run the mixing algorithm over the buffers */
    threadLoop_mix();
    ...
    /* 3. write the mixed data to the output in the HAL layer */
    ssize_t ret = threadLoop_write();
    ...
}
The important thing in threadLoop is what these three functions achieve. The first axe, prepareTracks_l, is complicated overall; it obtains the active tracks through mActiveTracks, but what we need to pay attention to are the operations on the member variable mAudioMixer:
mAudioMixer->setBufferProvider(name, track);
mAudioMixer->enable(name);
This calls mAudioMixer's setBufferProvider and enables the mixer. The key is this enable; let's follow it into AudioMixer.cpp:
void AudioMixer::enable(int name)
{
    name -= TRACK0;
    ALOG_ASSERT(uint32_t(name) < MAX_NUM_TRACKS, "bad track name %d", name);
    track_t& track = mState.tracks[name];

    if (!track.enabled) {
        track.enabled = true;
        ALOGV("enable(%d)", name);
        invalidateState(1 << name);
    }
}
Here name is effectively the track's index. Continue into invalidateState:
void AudioMixer::invalidateState(uint32_t mask)
{
    if (mask != 0) {
        mState.needsChanged |= mask;
        mState.hook = process__validate;
    }
}
Here you see a function pointer, hook, and you can guess that process__validate will repoint it at a suitable executable function. We won't analyze process__validate in detail; all you need to know is that it selects the mixing function appropriate to the track scenario. So where is this function pointer used?
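The mState.hook mechanism is plain function-pointer dispatch: the validator runs once, inspects the configuration, and repoints hook at a specialized routine so later process() calls skip the checks. A schematic sketch with invented names (MixerSketch, oneTrack, genericTracks), not the AudioMixer code:

```cpp
#include <cassert>

// Schematic version of AudioMixer's hook: process() starts out pointing at a
// validator, which picks the right specialized routine and then invokes it.
struct MixerSketch {
    int trackCount = 1;
    int processed  = 0;   // records which routine last ran: 1 or 2
    void (*hook)(MixerSketch*) = &MixerSketch::validate;

    static void validate(MixerSketch* s) {
        // choose a specialized function based on the current configuration
        s->hook = (s->trackCount == 1) ? &MixerSketch::oneTrack
                                       : &MixerSketch::genericTracks;
        s->hook(s);                 // run the chosen routine immediately
    }
    static void oneTrack(MixerSketch* s)      { s->processed = 1; }
    static void genericTracks(MixerSketch* s) { s->processed = 2; }

    void process() { hook(this); }  // hot path: one indirect call, no checks
};
```

This is why invalidateState merely resets hook to process__validate: the next process() call revalidates and re-specializes automatically.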
The second axe, threadLoop_mix, tells you right away. In AudioFlinger::MixerThread::threadLoop_mix():
mAudioMixer->process(pts);
mAudioMixer is the mixer, and mixing is done through the mixer's process; the code is:
void AudioMixer::process(int64_t pts)
{
    mState.hook(&mState, pts);
}
Now the intent of enabling mAudioMixer in the first axe, prepareTracks_l, is clear. In my current scenario only one track is playing, so the function it calls is process__OneTrack16BitsStereoNoResampling. So far we still haven't seen anywhere that touches the shared memory; hold on a little longer and look at a snippet of that function:
void AudioMixer::process__OneTrack16BitsStereoNoResampling(state_t* state,
                                                           int64_t pts)
{
    ...
    /* 1. obtain a readable buffer */
    t.bufferProvider->getNextBuffer(&b, outputPTS);
    ...
    /* 2. after the algorithm has processed the samples, write them into out */
    *out++ = (r<<16) | (l & 0xFFFF);
    ...
    /* 3. once processed, release this chunk of buffer */
    t.bufferProvider->releaseBuffer(&b);
    ...
}
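Step 2 above packs one 16-bit left and one 16-bit right sample into a single 32-bit word, with the right channel in the high half. Extracted as a stand-alone helper (the function name packStereo16 is mine, not AOSP's):

```cpp
#include <cassert>
#include <cstdint>

// Same packing as `*out++ = (r<<16) | (l & 0xFFFF);` in
// process__OneTrack16BitsStereoNoResampling: right sample in the high 16 bits,
// left sample in the low 16 bits. Masking `l` stops sign-extension of negative
// left samples from clobbering the high half.
uint32_t packStereo16(int32_t l, int32_t r) {
    return ((uint32_t)r << 16) | ((uint32_t)l & 0xFFFF);
}
```

The mask matters: without `& 0xFFFF`, a negative left sample's sign bits would overwrite the right channel in the upper half of the word.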
This sequence is very similar to how the write function in AudioTrack operates on the shared buffer. t.bufferProvider is the track object; let's take a look at getNextBuffer:
status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(
        AudioBufferProvider::Buffer* buffer, int64_t pts __unused)
{
    ServerProxy::Buffer buf;
    size_t desiredFrames = buffer->frameCount;
    buf.mFrameCount = desiredFrames;
    status_t status = mServerProxy->obtainBuffer(&buf);
    buffer->frameCount = buf.mFrameCount;
    buffer->raw = buf.mRaw;
    if (buf.mFrameCount == 0) {
        mAudioTrackServerProxy->tallyUnderrunFrames(desiredFrames);
    }
    return status;
}
When we analyzed the producer we met ClientProxy; now we see mServerProxy, so we can feel we're getting close. Take a look at ServerProxy::obtainBuffer() (AudioTrackShared.cpp):
status_t ServerProxy::obtainBuffer(Buffer* buffer, bool ackFlush)
{
    ...
    audio_track_cblk_t* cblk = mCblk;
    ...
}
The moment you step in, you finally find the familiar mCblk. It turns out that both producer and consumer operate on the shared buffer through obtainBuffer and releaseBuffer; it's just that the consumer side is wrapped much more heavily. To sum up the second axe: through the audio_track_cblk_t structure it finds the readable shared buffer, then mixes it. The third axe should now be clear: write the processed data to the final output:
ssize_t AudioFlinger::PlaybackThread::threadLoop_write()
{
    ...
    bytesWritten = mOutput->stream->write(mOutput->stream,
            (char *)mSinkBuffer + offset, mBytesRemaining);
    ...
}
mOutput is the output defined by AudioPolicyService's output strategy; here the data is written directly into the corresponding HAL layer.