I’m working on an application trying to implement Streaming Demo #3 (from: Janus WebRTC Server (multistream): Streaming Demo).
My process is:
- Creating a mountpoint through HTTP
- Using ffmpeg to stream to the mountpoint
- Connecting to the mountpoint from a front-end using websockets
Issue:
I can view the stream on my computer, but when someone else tries to connect while I’m connected, they receive no data.
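For context, each viewer's front-end sends roughly this sequence of messages over the websocket. This is a sketch in Python just to show the payloads I'm using; the transaction strings are placeholders, and the real session/handle IDs come back from Janus:

```python
# Sketch of the per-viewer streaming-plugin messages I send over the websocket.
# Transaction strings here are placeholders; Janus echoes them back in replies.

def create_session_msg(transaction: str) -> dict:
    """First message: create a new Janus session for this viewer."""
    return {"janus": "create", "transaction": transaction}

def attach_streaming_msg(transaction: str) -> dict:
    """Attach a streaming-plugin handle inside the session."""
    return {"janus": "attach",
            "plugin": "janus.plugin.streaming",
            "transaction": transaction}

def watch_msg(transaction: str, mountpoint_id: int) -> dict:
    """Ask the plugin to watch a mountpoint (this triggers the JSEP offer)."""
    return {"janus": "message",
            "transaction": transaction,
            "body": {"request": "watch", "id": mountpoint_id}}
```

My understanding is that every viewer needs its own session and its own plugin handle, so I send this whole sequence per browser tab / per computer.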
I’m able to create a session with Janus through HTTP and create an RTP mountpoint with this as my POST body:
{
  "janus": "message",
  "transaction": "transaction-id",
  "body": {
    "admin_key": "supersecret",
    "request": "create",
    "type": "rtp",
    "id": 104,
    "name": "My VideoSource",
    "description": "TEST",
    "is_private": false,
    "media": [
      {
        "type": "video",
        "mid": "video1",
        "port": 50161,
        "codec": "vp8",
        "payload_type": 100
      },
      {
        "type": "audio",
        "mid": "audio1",
        "port": 50162,
        "codec": "opus",
        "payload_type": 111
      }
    ],
    "permanent": false
  }
}
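If it helps, this is roughly how I send that request over HTTP, sketched with `requests` (the endpoint URL and admin key are placeholders for my setup; only the payload-building helper is exercised without a live server):

```python
import uuid

JANUS_URL = "http://localhost:8088/janus"  # placeholder for my Janus HTTP endpoint


def make_create_body(mountpoint_id: int, video_port: int, audio_port: int) -> dict:
    """Build the same streaming-plugin 'create' payload as above."""
    return {
        "janus": "message",
        "transaction": uuid.uuid4().hex,
        "body": {
            "admin_key": "supersecret",  # placeholder
            "request": "create",
            "type": "rtp",
            "id": mountpoint_id,
            "name": "My VideoSource",
            "description": "TEST",
            "is_private": False,
            "media": [
                {"type": "video", "mid": "video1", "port": video_port,
                 "codec": "vp8", "payload_type": 100},
                {"type": "audio", "mid": "audio1", "port": audio_port,
                 "codec": "opus", "payload_type": 111},
            ],
            "permanent": False,
        },
    }


def create_mountpoint(mountpoint_id: int = 104) -> dict:
    """Create a session, attach the streaming plugin, then send 'create'."""
    import requests  # imported lazily; third-party dependency

    session = requests.post(JANUS_URL, json={
        "janus": "create", "transaction": uuid.uuid4().hex}).json()
    session_id = session["data"]["id"]

    handle = requests.post(f"{JANUS_URL}/{session_id}", json={
        "janus": "attach", "plugin": "janus.plugin.streaming",
        "transaction": uuid.uuid4().hex}).json()
    handle_id = handle["data"]["id"]

    body = make_create_body(mountpoint_id, 50161, 50162)
    return requests.post(f"{JANUS_URL}/{session_id}/{handle_id}", json=body).json()
```

I call `create_mountpoint()` once at startup; the create step itself succeeds and Janus reports the mountpoint as created.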
FFmpeg is also able to stream from the source RTSP stream to the mountpoint. I'm just not able to view it concurrently from two computers. Any help would be appreciated. Here is the Python script I use to stream:
import time

import cv2
import ffmpeg


def stream_video(rtsp_url, video_dst):
    cap = cv2.VideoCapture(rtsp_url)
    if not cap.isOpened():
        print("Error: Could not open the video stream.")
        return

    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    input_fps = int(cap.get(cv2.CAP_PROP_FPS))
    print(f"Input FPS: {input_fps}")

    # ffmpeg process: raw BGR frames in via stdin, VP8-in-RTP out
    video_process = (
        ffmpeg
        .input('pipe:', format='rawvideo', pix_fmt='bgr24',
               s=f'{width}x{height}', framerate=input_fps)
        .output(video_dst, vcodec='libvpx', deadline='realtime', format='rtp')
        .run_async(pipe_stdin=True)
    )

    frame_count = 0
    start_time = time.time()
    while True:
        ret, frame = cap.read()
        if not ret:
            print("Error: Could not read the frame.")
            break

        # Write the frame to the ffmpeg process
        video_process.stdin.write(frame.tobytes())
        frame_count += 1

        # Calculate and print output FPS every second
        elapsed_time = time.time() - start_time
        if elapsed_time >= 1.0:
            output_fps = frame_count / elapsed_time
            print(f"Output FPS: {output_fps:.2f}")
            frame_count = 0
            start_time = time.time()

    # Cleanup
    cap.release()
    video_process.stdin.close()
    video_process.wait()
I am capturing with OpenCV first because the stream has a much lower FPS if I use FFmpeg to capture it directly.
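For completeness, the `video_dst` I pass in is built from the mountpoint's video port, something like the helper below (the `pkt_size` query option is something I added to keep RTP packets under a typical MTU; the host and port are placeholders matching my create request):

```python
def rtp_destination(host: str, port: int, pkt_size: int = 1300) -> str:
    """Build the rtp:// URL ffmpeg streams to. The port must match the
    mountpoint's video media port (50161 in my create request)."""
    return f"rtp://{host}:{port}?pkt_size={pkt_size}"


video_dst = rtp_destination("127.0.0.1", 50161)
# then: stream_video("rtsp://camera/stream", video_dst)
```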
Question
Is this due to an issue with how I am creating a mountpoint?
Is this an issue of how I am streaming my content to the mountpoint?
Or does everything here look fine and the issue may be somewhere else entirely?
Other posts I’ve checked out (sorry, I’ve maxed out my links as a new user):
problem-viewing-existing-stream-on-multiple-clients-sessions
- Unlike that user, I am trying to connect from multiple computers, not multiple tabs on one machine.
unable-to-join-stream-missing-mandatory-element-feed
- Looks like they were using the VideoRoom plugin instead. I’m also able to connect to the stream; it’s just that multiple people can’t watch at the same time.