
[Shanda Conference] Acquiring User Media with WebRTC

2022-06-22 16:03:00 What does Xiao Li Mao eat today

Preface

WebRTC is a technology that lets web browsers make real-time voice and video calls. It was open-sourced on June 1, 2011, and, with the backing of Google, Mozilla, and Opera, was adopted by the World Wide Web Consortium as a W3C recommended standard.
This is my first time working with this technology, so before putting it to formal use, let's go over the basics.

Get audio and video streams

To enable voice and video calling, we first need to obtain the audio and video streams. Modern browsers have largely implemented the WebRTC APIs, and we can call them to build the most basic functionality. Let's start by learning how to obtain the audio stream of the user's default microphone through the WebRTC API. The code is very simple:

navigator.mediaDevices.getUserMedia({
	audio: true
}).then((stream) => {
	console.log(stream.getTracks());
})

Calling navigator.mediaDevices.getUserMedia() returns a Promise<MediaStream> object; through its then method we get the media stream, and this stream contains one media stream track whose kind is "audio". If you assign this stream to the srcObject of an <audio /> DOM element, you can hear your own microphone.
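As a quick illustration (a minimal sketch, assuming the page has an <audio id="monitor" autoplay> element, which is not part of the original article), attaching the stream looks like this:

navigator.mediaDevices.getUserMedia({
	audio: true
}).then((stream) => {
	// Hypothetical <audio id="monitor" autoplay></audio> element on the page
	const audioEl = document.getElementById('monitor');
	audioEl.srcObject = stream;
})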
We can then imitate the code above to capture the microphone's audio and the camera's video at the same time. The code is equally simple: just add video: true to the constraints:

navigator.mediaDevices.getUserMedia({
	audio: true,
	video: true
}).then((stream) => {
	console.log(stream.getTracks());
})

This time we will see that the stream contains one more video track, whose content is the data captured by our camera.
Having learned the basic media stream acquisition, we can add more constraints to obtain different streams as needed, like the following:

navigator.mediaDevices.getUserMedia({
	audio: {
		noiseSuppression: true,
		echoCancellation: true,
	},
	video: {
		width: 1920,
		height: 1080,
		frameRate: {
			max: 30
		}
	}
}).then((stream) => {
	console.log(stream.getTracks());
})

In this example, we pass some special constraints to the getUserMedia method. Let's go through what each of them means:

  • noiseSuppression: enables noise suppression; the default value is true.
  • echoCancellation: enables echo cancellation; the default value is true.
  • width, height: the width and height of the captured video stream.
  • frameRate: the frame rate of the captured video stream; max sets the maximum, min the minimum, and exact an exact value.

There are many more constraints besides the ones above; you can consult the documentation to learn about them, and this article will not go into further detail.
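Keep in mind that non-exact constraints are requests rather than guarantees, and the browser may fall back to the closest settings the hardware supports. One way to check what was actually applied (a small sketch of my own, not from the original article) is to read each track's settings:

navigator.mediaDevices.getUserMedia({
	video: {
		width: 1920,
		height: 1080,
		frameRate: {
			max: 30
		}
	}
}).then((stream) => {
	for (const track of stream.getTracks()) {
		// getSettings() reports the values the browser actually applied
		console.log(track.kind, track.getSettings());
	}
})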

Capture desktop streams

Like Tencent Meeting, we also want to add screen sharing to our product. WebRTC provides a similar desktop capture feature, which we can call through this API:

navigator.mediaDevices.getDisplayMedia({
	video: true,
	audio: true
}).then((stream) => {
	console.log(stream.getTracks());
})

In this way, we can choose different windows to capture and share their images and audio.
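As a usage sketch (assuming a <video id="preview" autoplay muted> element that is not part of the original article), the captured stream can be previewed in the page, and the video track's ended event tells us when the user stops sharing from the browser UI:

navigator.mediaDevices.getDisplayMedia({
	video: true,
	audio: true
}).then((stream) => {
	// Hypothetical preview element on the page
	const videoEl = document.getElementById('preview');
	videoEl.srcObject = stream;
	// Fires when the user clicks the browser's "Stop sharing" control
	stream.getVideoTracks()[0].addEventListener('ended', () => {
		console.log('Screen sharing stopped');
	});
})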

Select different devices

Of course, to meet the needs of different users, we sometimes also need to let them choose which audio and video devices to use. To do that, we must first list all of the audio and video devices available on the user's machine.

navigator.mediaDevices.enumerateDevices().then((devices) => {
	for (const device of devices) {
		console.log(device);
	}
})

Through this call, we get every audio and video device connected to the current user's machine, and we can determine each device's type through its kind property:

  • audioinput is an audio input device, i.e. a microphone;
  • audiooutput is an audio output device, i.e. a speaker;
  • videoinput is a video input device, i.e. a camera.

Obviously, what we need are the audioinput and videoinput device types. The call above also gives us each device's deviceId, and through the deviceId we can specify which input device to use.
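A minimal sketch of splitting the device list into microphone and camera candidates (the variable names here are my own, not from the original article):

navigator.mediaDevices.enumerateDevices().then((devices) => {
	// Labels may be empty until the user has granted media permissions
	const microphones = devices.filter((d) => d.kind === 'audioinput');
	const cameras = devices.filter((d) => d.kind === 'videoinput');
	for (const mic of microphones) {
		console.log('microphone:', mic.label, mic.deviceId);
	}
	for (const cam of cameras) {
		console.log('camera:', cam.label, cam.deviceId);
	}
})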
For example, suppose I don't want to use the computer's default microphone, but instead want to use another device whose deviceId is cc7f8d6ec7b6764d8ad8b8a737dde5b0a54943816b950bef56c00a289d1180d2. I can specify the device I need manually by adding the following constraint:

navigator.mediaDevices.getUserMedia({
	audio: {
		deviceId: {
			exact: 'cc7f8d6ec7b6764d8ad8b8a737dde5b0a54943816b950bef56c00a289d1180d2',
		},
		noiseSuppression: true,
		echoCancellation: true,
	}
}).then((stream) => {
	console.log(stream.getTracks());
})

This way I can obtain the audio stream from that particular microphone.
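If the user changes devices mid-call, one common approach (a sketch under my own assumptions; the helper name is hypothetical and not from the original article) is to stop the old tracks and request a new stream with the new deviceId:

// Hypothetical helper: switch to a microphone the user just selected
function switchMicrophone(currentStream, newDeviceId) {
	// Release the old device before opening the new one
	currentStream.getTracks().forEach((track) => track.stop());
	return navigator.mediaDevices.getUserMedia({
		audio: {
			deviceId: {
				exact: newDeviceId
			}
		}
	});
}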
