Is it possible to set up a video conference between WebRTC participants (via Janus) and SIP clients (via FreeSWITCH) without running a conference on FreeSWITCH?
Specifically:
Can SIP clients be represented as if they were regular WebRTC participants inside Janus (each with their own feed)?
Can those SIP clients receive a single mixed video stream from Janus that contains all other participants (including video coming from other SIP clients)?
If you mean audio only, we played a bit with SIP and the AudioBridge some time ago:
If you mean using the Janus VideoRoom, so video and SFU mode, then no, not out of the box. You’ll need the help of a mixer for that. I talked a bit about that in a presentation some time ago:
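For the audio-only path, here is a minimal sketch of what driving the AudioBridge over the Janus HTTP API might look like (assuming the REST transport is enabled at http://localhost:8088/janus; the room number and settings are placeholders, and the SIP leg would still need to be handled separately, e.g. via the SIP plugin):

```python
# Minimal sketch: create an AudioBridge room via the Janus HTTP API.
# Assumes the HTTP/REST transport is enabled at http://localhost:8088/janus.
import requests, uuid

JANUS = "http://localhost:8088/janus"

def tx():
    # Every Janus request needs a unique transaction identifier
    return uuid.uuid4().hex

# 1. Create a Janus session
session = requests.post(JANUS, json={"janus": "create", "transaction": tx()}).json()
session_id = session["data"]["id"]

# 2. Attach a handle to the AudioBridge plugin
handle = requests.post(f"{JANUS}/{session_id}", json={
    "janus": "attach",
    "plugin": "janus.plugin.audiobridge",
    "transaction": tx(),
}).json()
handle_id = handle["data"]["id"]

# 3. Create a mixed audio room that both WebRTC and SIP-originated
#    participants can end up in (room number and sampling rate are placeholders)
create = requests.post(f"{JANUS}/{session_id}/{handle_id}", json={
    "janus": "message",
    "transaction": tx(),
    "body": {
        "request": "create",
        "room": 1234,
        "description": "Mixed audio room",
        "sampling_rate": 16000,
    },
}).json()
print(create)
```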
I tested a video conference in FreeSWITCH where both WebRTC participants and SIP participants join the same call (I used a B2BUA for the WebRTC participants).
If we only have one SIP participant, what we do is create two canvases in FreeSWITCH (one for the WebRTC participants and one for the SIP participant) and make them watch each other. The WebRTC participants receive the mixed stream from FreeSWITCH, but since they are watching the canvas where the SIP participant is placed, it works fine.
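For reference, a rough sketch of how that two-canvas setup can be driven over the FreeSWITCH event socket, assuming a conference profile with video-canvas-count set to 2 and the Python ESL bindings shipped with FreeSWITCH (the conference name, member IDs and canvas numbers are placeholders; check the exact canvas numbering on your FreeSWITCH version):

```python
# Sketch: assign conference members to canvases and make them watch the
# other canvas, via the FreeSWITCH event socket (Python ESL bindings).
# Assumes a conference profile with <param name="video-canvas-count" value="2"/>.
import ESL

con = ESL.ESLconnection("127.0.0.1", "8021", "ClueCon")

CONF = "myconf"            # placeholder conference name
WEBRTC_MEMBER_ID = "12"    # placeholder member IDs, see "conference myconf list"
SIP_MEMBER_ID = "13"

# Put the WebRTC member on canvas 1 and have it watch canvas 2 (the SIP side)
con.api("conference", f"{CONF} vid-canvas {WEBRTC_MEMBER_ID} 1")
con.api("conference", f"{CONF} vid-watching-canvas {WEBRTC_MEMBER_ID} 2")

# Put the SIP member on canvas 2 and have it watch canvas 1 (the WebRTC side)
con.api("conference", f"{CONF} vid-canvas {SIP_MEMBER_ID} 2")
con.api("conference", f"{CONF} vid-watching-canvas {SIP_MEMBER_ID} 1")
```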
When we invite a second SIP participant, a new feed gets created in the web interface and the video of both SIP participants is shown twice (since we are putting the SIP participants on the same canvas). As a workaround, in the web interface I avoided creating a new feed and just used the single mixed video feed sent by FreeSWITCH, where both SIP participants are visible. That works fine, but now the SIP participants cannot see each other: since they are on the same canvas and watching the WebRTC participants’ canvas, they are unable to see each other. (If I use a single canvas, the WebRTC participants see themselves twice, and it just looks bad.)
This is why I was wondering if Janus could help:
Ideally, all SIP participants would receive video from all other conference participants (both SIP and WebRTC).
On the WebRTC side, Janus would expose each SIP participant as its own unique feed, not as a single mixed video.
Do I understand correctly that this would require an external mixer, or is there a way to configure Janus + FreeSWITCH so SIP participants are treated more like individual Janus feeds, instead of just receiving the mixed video?
If you check slides 33-36 of the presentation I linked, that’s exactly what we did for a customer a few years ago. We wrote a new component for them that would act as a mixer for SIP participants, and as an SFU “avatar” of the SIP participants for WebRTC endpoints. This means that SIP participants would always get a mix of everybody else (audio and video), while WebRTC participants would see everyone (Janus and SIP users) as individual streams. That’s what I meant when I said you need the help of an external mixer, ideally one capable of both selectively mixing and simply routing.
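That component is not public, but to give an idea of the kind of plumbing involved, here is a rough sketch of how each VideoRoom publisher’s media could be pushed towards an external mixer using the stock rtp_forward request (this uses the legacy pre-1.x request format; newer multistream versions expect a streams array instead, an admin_key may be required depending on the room configuration, and the host, ports and IDs are placeholders):

```python
# Sketch: forward a VideoRoom publisher's RTP to an external mixer via the
# VideoRoom "rtp_forward" request (legacy 0.x-style body shown here).
import requests, uuid

JANUS = "http://localhost:8088/janus"

def tx():
    return uuid.uuid4().hex

# Create a session and attach a handle to the VideoRoom plugin
session_id = requests.post(JANUS, json={"janus": "create", "transaction": tx()}).json()["data"]["id"]
handle_id = requests.post(f"{JANUS}/{session_id}", json={
    "janus": "attach",
    "plugin": "janus.plugin.videoroom",
    "transaction": tx(),
}).json()["data"]["id"]

# Forward one publisher's audio and video to the external mixer
# (room, publisher_id and the mixer's RTP endpoints are placeholders)
forward = requests.post(f"{JANUS}/{session_id}/{handle_id}", json={
    "janus": "message",
    "transaction": tx(),
    "body": {
        "request": "rtp_forward",
        "room": 1234,
        "publisher_id": 5678,
        "host": "mixer.example.com",
        "audio_port": 5002,
        "video_port": 5004,
        # "admin_key": "supersecret",  # only if the VideoRoom requires one
    },
}).json()
print(forward)
```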
Thank you for replying. Would you care to elaborate on this external mixer? I’m having trouble understanding it. Can I integrate this mixer with FreeSWITCH, since I need it for PSTN?
If you check the slides and the part of the video where I talk about that, it’s mostly explained there. It’s basically what I summarized in my previous comment. Notice that the specific mixer I talked about is not an open source component, though: it’s an ad-hoc component we wrote for a specific customer in the context of a consulting engagement.