version : 1102
Thanks a lot for the useful library. Grateful!
I need some information.
I'm using janus.plugin.videoroom
and adding a virtual background with @mediapipe/selfie_segmentation:
function initRTC() {
  Janus.init({
    dependencies: Janus.useDefaultDependencies({ adapter }),
    callback: () => {
      janusInstance = new Janus({
        server: url,
        success: () => {
          janusInstance.attach?.({
            plugin: 'janus.plugin.videoroom',
            opaqueId: `user_${user.id}`,
            success: (plugin) => createRoom(),
            onmessage: (message, jsep) => handleMessage(message, jsep),
            onlocaltrack: (track) => handleLocalTrack(track),
            error: (error) => handleError(error),
          });
        },
        error: (error: string) => handleError(error),
      });
    },
  });
}
After joining the room, handleMessage calls createOffer in the publishOwnFeed function.
I modified these callbacks by adding a custom addBackground function.
function handleMessage(message) {
  if (message.videoroom) handleEvent(message);
}

function handleEvent(message) {
  if (message.videoroom === 'joined') {
    publishOwnFeed();
  }
}
function publishOwnFeed() {
  pluginHandle?.createOffer({
    tracks: [
      { type: 'audio', capture: true, recv: false },
      { type: 'video', capture: true, recv: false },
    ],
    success: (jsep) => {
      pluginHandle?.send({
        message: {
          request: 'configure',
          audio: true,
          video: true,
        },
        jsep,
      });
    },
    // custom callback: get the client stream, modify it and return the modified stream
    addBackground: (stream) => {
      const videoElement = document.createElement('video');
      const newStream = new MediaStream(stream).clone();
      const audioTracks = newStream.getAudioTracks();
      if (audioTracks[0]) {
        audioTracks[0].enabled = false;
      }
      videoElement.autoplay = true;
      videoElement.playsInline = true;
      videoElement.srcObject = newStream;
      // only send a frame to the model when the video time has advanced
      let lastTime = -1;
      async function getFrames() {
        const now = videoElement?.currentTime;
        if (now > lastTime) await selfieSegmentation.send({ image: videoElement });
        lastTime = now;
        requestAnimationFrame(getFrames);
      }
      getFrames();
      // the segmentation results are drawn onto this canvas, which becomes the published stream
      const canvasStream = canvas.captureStream();
      return canvasStream;
    },
  });
}
const selfieSegmentation = new SelfieSegmentation({
  locateFile: (file) => {
    if (file.endsWith('.tflite')) {
      return tflite;
    } else if (file.endsWith('wasm_bin.js')) {
      return binJs;
    } else if (file.endsWith('.binarypb')) {
      return binarypb;
    } else if (file.endsWith('.wasm')) {
      return selfieWasm;
    }
    return '';
  },
});

selfieSegmentation.setOptions({
  modelSelection: 1,
});

selfieSegmentation.onResults(handleSegmentationResults);
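For context, handleSegmentationResults and the canvas that captureStream() reads from are not shown above; they follow the usual MediaPipe compositing pattern, roughly like this sketch (backgroundImage / backgroundUrl are placeholders for the background asset I draw):

const canvas = document.createElement('canvas');
const canvasCtx = canvas.getContext('2d');
const backgroundImage = new Image();
backgroundImage.src = backgroundUrl; // placeholder for the background asset

function handleSegmentationResults(results) {
  canvas.width = results.image.width;
  canvas.height = results.image.height;
  canvasCtx.save();
  canvasCtx.clearRect(0, 0, canvas.width, canvas.height);
  // keep only the person: draw the mask, then composite the camera frame into it
  canvasCtx.drawImage(results.segmentationMask, 0, 0, canvas.width, canvas.height);
  canvasCtx.globalCompositeOperation = 'source-in';
  canvasCtx.drawImage(results.image, 0, 0, canvas.width, canvas.height);
  // fill everything outside the person with the background
  canvasCtx.globalCompositeOperation = 'destination-over';
  canvasCtx.drawImage(backgroundImage, 0, 0, canvas.width, canvas.height);
  canvasCtx.restore();
}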
And this addBackground function is the one I use inside janus.js, here, where I overwrite the stream with the result of addBackground:
else if (track.capture) {
  if (track.gumGroup && groups[track.gumGroup] && groups[track.gumGroup].stream) {
    // We did a getUserMedia before already
    let stream = groups[track.gumGroup].stream;
    /**
     * UPDATES: get the user stream and replace it with the background-processed stream
     */
    if (track.type === 'video') {
      let canvasStream = callbacks.addBackground?.(stream);
      canvasStream?.getTracks().forEach((canvasTrack) => {
        stream = new MediaStream([canvasTrack]);
      });
    }
    /**
     * UPDATES end
     */
    nt = track.type === 'audio' ? stream.getAudioTracks()[0] : stream.getVideoTracks()[0];
    delete groups[track.gumGroup].stream;
    delete groups[track.gumGroup];
    delete track.gumGroup;
  } else if (track.capture instanceof MediaStreamTrack) {
And this solution works: I redefine the stream and add the background here.
But the key question is: is this solution correct? I mean, is this the right place to modify the stream? Maybe I missed something and Janus already offers a cleaner way to do it.
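For example, since the same code path also accepts a MediaStreamTrack as capture (the instanceof check in the excerpt above), I wonder whether the intended approach is to run the segmentation myself before createOffer and pass the processed track directly, without patching janus.js. A rough sketch of what I mean (assuming capture really does accept a pre-processed MediaStreamTrack, and reusing the same logic as the addBackground callback above):

async function publishOwnFeedWithBackground() {
  // do getUserMedia and segmentation myself, outside of janus.js
  const camStream = await navigator.mediaDevices.getUserMedia({ video: true });
  const canvasStream = addBackground(camStream); // same logic as the addBackground callback above
  const processedTrack = canvasStream.getVideoTracks()[0];
  pluginHandle?.createOffer({
    tracks: [
      { type: 'audio', capture: true, recv: false },
      // hand janus.js the already-processed track instead of letting it capture the camera
      { type: 'video', capture: processedTrack, recv: false },
    ],
    success: (jsep) => {
      pluginHandle?.send({
        message: { request: 'configure', audio: true, video: true },
        jsep,
      });
    },
    error: (error) => handleError(error),
  });
}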
P.S. One downside remains, which I'm still trying to solve. After using SelfieSegmentation and importing the .wasm, .tflite, .binarypb and wasm_bin.js files, browser memory usage increases. Part of that is expected, since the computation happens in the browser, but the difference is large: without segmentation, memory usage is around 130 MB; with it, the tab jumps between roughly 200 MB and 1-2 GB and back down.
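In case it's relevant, this is the kind of cleanup I have in mind for when the feed is unpublished (stopSegmentation and rafId are hypothetical names, and I'm only assuming that selfieSegmentation.close() actually releases the graph/WASM resources):

// hypothetical cleanup when unpublishing / leaving the room
let rafId = null; // would need to be set from requestAnimationFrame() in getFrames()

async function stopSegmentation(canvasStream) {
  if (rafId !== null) cancelAnimationFrame(rafId); // stop feeding frames to the model
  canvasStream.getTracks().forEach((t) => t.stop());
  await selfieSegmentation.close(); // assumption: this frees the model and WASM memory
}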