Hey,
I am trying to add some data to the end of the stream in janus_streaming.c and extract it on the FE with Insertable Streams (using FFmpeg).
While debugging the FE I do see the data being passed, but the video is not showing: only the spinner, and the bitrate keeps changing.
The problem is that I don’t get any "playing" event when I change the stream.
Any ideas?
You need to remove the extra data before passing frames to the decoder, whether that happens in the browser or programmatically (e.g., in a custom app that uses FFmpeg); otherwise that data will remain part of what the decoder receives, which will probably cause decode errors. Notice that Insertable Streams work on frames, not RTP packets, so if you’re adding data in the middle of a frame you won’t be able to extract it that way.
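To make the "read it, then remove it" idea concrete, here is a minimal sketch (not the poster's code) that strips a fixed-length trailer from an encoded frame's data before it reaches the decoder. The trailer length and the fake frame bytes are assumptions for illustration; the constant must match whatever the server side appends per frame.

```javascript
// Minimal sketch: split an encoded frame's ArrayBuffer into the real media
// bytes and a fixed-length text trailer appended at the end.
// TRAILER_BYTES is an assumption; it must match what the C side appends.
const TRAILER_BYTES = 9;

function splitFrame(frameData /* ArrayBuffer */) {
  const media = frameData.slice(0, frameData.byteLength - TRAILER_BYTES);
  const trailer = frameData.slice(frameData.byteLength - TRAILER_BYTES);
  return { media, trailer: new TextDecoder().decode(trailer) };
}

// Simulate a frame: 4 "media" bytes followed by a 9-byte text trailer.
const enc = new TextEncoder();
const fake = new Uint8Array([1, 2, 3, 4, ...enc.encode("1,2;3,4;5")]);
const { media, trailer } = splitFrame(fake.buffer);
console.log(media.byteLength, trailer); // prints: 4 1,2;3,4;5
```

Inside a real TransformStream you would then assign the `media` buffer back to `encodedFrame.data` and enqueue the frame, so the decoder only ever sees valid media bytes.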
I am confused.
If I remove the extra data with FFmpeg, how will the FE know what it is?
About Insertable Streams, I am doing something similar to the e2etest demo, but inside streamingtest.js, in streaming.createAnswer;
then in the receiverTransforms I decode the bytes of the extra data to text.
Thanks again,
I mean removing it in the frontend (I guess that’s what you mean by FE?) after you’ve consumed it. Once you’ve read the data (which happens before it’s decoded), you need to remove it and leave just the actual audio/video data. This is exactly what all Insertable Streams applications do: the e2etest demo you mentioned changes the whole data (it doesn’t just append), and then reverts it back to the original data when dealing with incoming packets.
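For reference, the symmetric "append" step can be sketched too. In this thread the bytes are appended in janus_streaming.c, but the e2etest demo does the equivalent in a sender-side transform; a hedged JS sketch of that append (illustrative names, not the demo's actual code):

```javascript
// Sketch: append a fixed-length text trailer to a frame's data, the
// mirror image of what the receiver transform strips off again.
function appendTrailer(frameData /* ArrayBuffer */, text) {
  const trailer = new TextEncoder().encode(text);
  const out = new Uint8Array(frameData.byteLength + trailer.byteLength);
  out.set(new Uint8Array(frameData), 0);          // original media bytes
  out.set(trailer, frameData.byteLength);         // trailer at the end
  return out.buffer;
}

const media = new Uint8Array([9, 8, 7]).buffer;   // 3 fake media bytes
const withTrailer = appendTrailer(media, "1,2;3,4;5"); // + 9 trailer bytes
console.log(withTrailer.byteLength); // prints: 12
```

Whichever side appends the data, the key invariant is that the receiver must undo it exactly, so the decoder sees only the original frame.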
Sorry, I still don’t understand, because I think I am doing what you described.
As I understand it, FFmpeg is used to transform from .js to RTP, and Janus gets RTP.
The stream is changed like this (just adding 9 bytes):
var receiverTransforms = {};
for(var m of ["video"]) {
	receiverTransforms[m] = new TransformStream({
		start() {
			// Called on startup.
			console.log("[Receiver transform] Startup");
		},
		transform(encodedFrame, controller) {
			let pointsLengthInBytes = 9;
			const view = new DataView(encodedFrame.data);
			const origData = new ArrayBuffer(encodedFrame.data.byteLength - pointsLengthInBytes);
			const origDataView = new DataView(origData);
			const pointsData = new ArrayBuffer(pointsLengthInBytes);
			const pointsView = new DataView(pointsData);
			// Put the original video (without the points) inside the encodedFrame
			for(let i = 0; i < encodedFrame.data.byteLength - pointsLengthInBytes; i++) {
				origDataView.setUint8(i, view.getUint8(i));
			}
			// Extract the trailing bytes with the points
			for(let i = 0; i < pointsLengthInBytes; i++) {
				pointsView.setUint8(i, view.getUint8(encodedFrame.data.byteLength - pointsLengthInBytes + i));
			}
			// Decode the string of points, then draw them on the canvas
			const decoder = new TextDecoder();
			receivedPointsString = decoder.decode(pointsView);
			encodedFrame.data = origData;
			controller.enqueue(encodedFrame);
		},
		flush() {
			// Called when the stream is about to be closed
			console.log("[Receiver transform] Closing");
		}
	});
}
I don’t know how that code works, but from the C side it looks like you’re adding those 9 bytes to every RTP packet. As I already said in a previous response, Insertable Streams work on frames, not packets. A single video frame will very often be spread over multiple RTP packets, which means the 9 bytes should only be added on the last packet that belongs to a frame. As you’re doing it now, you’re modifying a frame by adding those 9 bytes multiple times within the frame itself, obviously breaking the video.
Thanks so much, I did what you explained to me and it works!
I still have a small question: do you know why rtp->timestamp (janus_streaming.c) and encodedFrame.timestamp (streamingtest.js) are never the same?