[Shanda Conference] Acquiring User Media with WebRTC
2022-06-22 16:03:00 【What does Xiao Li Mao eat today】
Preface
WebRTC is a technology that lets web browsers make real-time voice and video calls. It was open-sourced on June 1, 2011, and, with the support of Google, Mozilla, and Opera, it has been included in the World Wide Web Consortium (W3C) recommended standards.
This is my first exposure to the technology, so before putting it to real use I need to learn the basics.
Get audio and video streams
To implement voice and video calling, we first need to get the audio and video streams. Modern browsers have largely implemented the WebRTC APIs, and we can call them to build the most basic functionality. Let's start by getting the audio stream from the user's default microphone through the WebRTC API. The code is very simple:
navigator.mediaDevices.getUserMedia({
audio: true
}).then((stream) => {
console.log(stream.getTracks());
})
Calling navigator.mediaDevices.getUserMedia() returns a Promise<MediaStream> object; in the then callback we receive a media stream that contains one track whose kind is "audio". If you assign this stream to the srcObject property of an <audio> DOM element, you can hear your own microphone.
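For example, here is a minimal sketch of wiring the captured stream to an <audio> element (the element id localAudio is just an assumed placeholder, not from the original code):
navigator.mediaDevices.getUserMedia({
    audio: true
}).then((stream) => {
    // Assumes the page contains <audio id="localAudio"></audio>
    const audioEl = document.getElementById('localAudio');
    audioEl.srcObject = stream; // the modern replacement for URL.createObjectURL(stream)
    audioEl.play();             // may require a prior user gesture due to autoplay policies
})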
Following the same pattern, we can get the microphone audio and the camera video at the same time. The code is just as simple: add video: true to the constraints object:
navigator.mediaDevices.getUserMedia({
audio: true,
video: true
}).then((stream) => {
console.log(stream.getTracks());
})
This time we can see that the resulting stream contains one more track: a video track carrying the data captured by the camera.
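Note that getUserMedia returns a rejected Promise when the user denies permission or no matching device exists, so in practice the failure case is worth handling. A minimal sketch:
navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true
}).then((stream) => {
    console.log(stream.getTracks());
}).catch((err) => {
    // NotAllowedError: the user (or a browser policy) denied access
    // NotFoundError:   no device satisfies the requested constraints
    console.error(`${err.name}: ${err.message}`);
})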
Now that we know the basic media-stream acquisition, we can also add more constraints to get different streams as needed, for example:
navigator.mediaDevices.getUserMedia({
audio: {
noiseSuppression: true,
echoCancellation: true,
},
video: {
width: 1920,
height: 1080,
frameRate: {
max: 30
}
}
}).then((stream) => {
console.log(stream.getTracks());
})
In this example, we pass some more specific constraints to the getUserMedia method. Let's go through what each of them means:
- noiseSuppression: enables noise suppression; the default value is true.
- echoCancellation: enables echo cancellation; the default value is true.
- width, height: the width and height of the captured video stream.
- frameRate: the frame rate of the captured video stream; max is the maximum value, min the minimum value, and exact an exact value.
There are many more constraints besides these; you can consult the documentation to learn about them, as this article will not go into further detail.
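One detail worth knowing: the browser treats most constraints as targets rather than guarantees, so you can check which values were actually applied with MediaStreamTrack.getSettings(). A minimal sketch:
navigator.mediaDevices.getUserMedia({
    video: {
        width: 1920,
        height: 1080,
        frameRate: { max: 30 }
    }
}).then((stream) => {
    const [videoTrack] = stream.getVideoTracks();
    // getSettings() reports the values the browser actually applied,
    // which may differ from what we requested.
    console.log(videoTrack.getSettings());
})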
Capture desktop streams
Like Tencent Meeting, we also want to add screen sharing to our product. WebRTC provides a similar desktop-capture capability, which we can call through this API:
navigator.mediaDevices.getDisplayMedia({
video: true,
audio: true
}).then((stream) => {
console.log(stream.getTracks());
})
In this way, we can let the user choose which window or screen to capture and share its picture and audio.
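The browser also shows its own "stop sharing" control, so it is useful to know when the user ends the capture outside of our page. As a minimal sketch, we can listen for the ended event on the captured video track:
navigator.mediaDevices.getDisplayMedia({
    video: true,
    audio: true
}).then((stream) => {
    const [videoTrack] = stream.getVideoTracks();
    // Fires when the user stops sharing through the browser's own UI.
    videoTrack.addEventListener('ended', () => {
        console.log('Screen sharing has stopped');
    });
})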
Select different devices
Of course, to meet the needs of different users, we sometimes need to let them choose which audio and video devices to use. To do that, we must first enumerate all the audio and video devices on the user's machine.
navigator.mediaDevices.enumerateDevices().then((devices) => {
for (const device of devices) {
console.log(device);
}
})
Through this call, we can get every media device connected to the user's machine, and we can use the kind property to determine what type each device is:
- audioinput is an audio input device, i.e. a microphone;
- audiooutput is an audio output device, i.e. a speaker;
- videoinput is a video input device, i.e. a camera.
Obviously, what we need are the two input types, audioinput and videoinput. The call above also gives us each device's deviceId, and through the deviceId we can specify which input device to use, as shown in the sketch below.
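As a minimal sketch, here is how we might filter the enumeration result down to the input devices we care about:
navigator.mediaDevices.enumerateDevices().then((devices) => {
    const microphones = devices.filter((d) => d.kind === 'audioinput');
    const cameras = devices.filter((d) => d.kind === 'videoinput');
    // Note: label may be an empty string until the user has granted media permission.
    microphones.forEach((d) => console.log('mic:', d.label, d.deviceId));
    cameras.forEach((d) => console.log('camera:', d.label, d.deviceId));
})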
For example, suppose I don't want to use my computer's default microphone, but instead another device whose deviceId is cc7f8d6ec7b6764d8ad8b8a737dde5b0a54943816b950bef56c00a289d1180d2. I can specify the device I need by adding the following constraint:
navigator.mediaDevices.getUserMedia({
audio: {
deviceId: {
exact: 'cc7f8d6ec7b6764d8ad8b8a737dde5b0a54943816b950bef56c00a289d1180d2',
},
noiseSuppression: true,
echoCancellation: true,
}
}).then((stream) => {
console.log(stream.getTracks());
})
This way, I get the audio stream coming from that particular microphone.
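When letting the user switch devices at runtime, it is good practice to stop the tracks of the old stream before requesting a new one, otherwise the previous device stays occupied. A minimal sketch, where currentStream and newDeviceId are assumed to come from the surrounding application:
function switchMicrophone(currentStream, newDeviceId) {
    // Release the old device first (its indicator light turns off).
    currentStream.getTracks().forEach((track) => track.stop());
    return navigator.mediaDevices.getUserMedia({
        audio: {
            deviceId: { exact: newDeviceId }
        }
    });
}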