ICE failures - Janus in Docker in a VM

Hey everyone,

I’m running Janus on an OVHCloud virtual machine and encountering consistent ICE failures right after the remote SDP description is applied.

Setup Details:

  • Janus is running inside a Docker container using --network=host on an OVHCloud VM.
  • I have a separate signaling server built on the Janode implementation (GitHub - meetecho/janode: A Node.js adapter for the Janus WebRTC server), which handles room creation, attaching plugins, and general signaling. It also runs in a Docker container, but in bridge network mode.
  • All communication between the client and Janus goes through this Janode server.
  • The Janus config includes proper NAT settings like nat_1_1_mapping with the machine’s public IP.
  • ice_lite = false, and Janus is bound to 0.0.0.0 to allow full ICE.

The Issue:

  • The client sends and receives the SDP successfully.
  • Remote description is applied without any problems.
  • ICE candidates are gathered, but shortly after that, the ICE connection fails (ICE connection state = “failed”).
  • TURN credentials are confirmed to be valid.
  • UDP ports (10000–10200) are open both on the host and in OVHCloud’s control panel.

Tried So Far:

  • Enabled full_trickle, disabled ice_tcp.
  • Confirmed that iptables is not blocking any traffic.
  • Verified that Janus is using the correct public-facing IP.

Questions:

  • Could this setup (host-networked Janus container + bridge-mode Janode container) cause issues with ICE?
  • Has anyone faced similar ICE failures on OVHCloud due to their networking layer?
  • Are there any tips to debug candidate pairing at the Janus level?

Any help or ideas would be super appreciated!

My janus config:

certificates: {
    cert_pem = "/etc/janus/certs/fullchain.pem"
    cert_key = "/etc/janus/certs/privkey.pem"
    #cert_pwd = "secretpassphrase"
    dtls_accept_selfsigned = false
    #dtls_ciphers = "your-desired-openssl-ciphers"
    #rsa_private_key = false
}

media: {
    ipv6 = true
    ipv6_linklocal = false
    min_nack_queue = 500
    rtp_port_range = "10000-10200"
    dtls_mtu = 1200
    no_media_timer = 2
    slowlink_threshold = 5
    twcc_period = 200
    dtls_timeout = 500
    nack_optimizations = false
    #dscp = 46
}

nat: {
    stun_server = "stun1.l.google.com"
    stun_port = 19302
    nice_debug = true
    full_trickle = true
    ice_nomination = "regular"
    ice_consent_freshness = true
    ice_keepalive_conncheck = true
    ice_lite = false
    ice_tcp = false
    hangup_on_failed = true
    ignore_mdns = true
    nat_1_1_mapping = "my.public.ip"
    keep_private_host = false
    turn_server = "relay1.expressturn.com"
    turn_port = 3480
    turn_type = "udp"
    turn_user = "turn-user"
    turn_pwd = "turn-pass"
    #turn_rest_api = "http://yourbackend.com/path/to/api"
    #turn_rest_api_key = "anyapikeyyoumayhaveset"
    #turn_rest_api_method = "GET"
    #turn_rest_api_timeout = 10
    allow_force_relay = false
    #ice_enforce_list = "eth0"
    ice_ignore_list = "docker0,vmnet,172.16.0.0/12,127.0.0.1"
    ignore_unreachable_ice_server = false
}

My client-side config:
const config = {
    iceServers: [
        { urls: 'stun:<stun-server>:<port>' },
        {
            urls: 'turn:<turn-server>:<port>',
            username: '<turn-username>',
            credential: '<turn-password>',
        },
    ],
    // iceTransportPolicy: "relay"
};

  1. A Docker container in “host” network mode is probably the easiest setup and has proved to work flawlessly in many cases. Janode should not play any role in ICE, since it sits on the signalling plane (assuming there are no bugs in your logic, like munging JSEPs, omitting candidates, etc.).
  2. Can’t help with that specific cloud provider, unfortunately.
  3. Use the Admin API to fetch the handle status, enable libnice debugging, and set verbose logging. That should give you an idea of the ongoing ICE pairs; a minimal sketch of that flow follows this list.
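
For reference, here is a minimal Node.js sketch of that flow. It is just a sketch: the 7088 port, the /admin path and the admin_secret value are the stock Janus defaults, so adjust them to your janus.cfg; Node 18+ is assumed for the built-in fetch, and it should run as an ES module for the top-level await.

// Walk the Admin API down to a handle and dump its state.
const ADMIN_URL = 'http://127.0.0.1:7088/admin';
const ADMIN_SECRET = 'janusoverlord'; // stock default, use your own

async function adminRequest(path, body) {
    const res = await fetch(ADMIN_URL + path, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            transaction: Math.random().toString(36).slice(2),
            admin_secret: ADMIN_SECRET,
            ...body,
        }),
    });
    return res.json();
}

// Optionally crank up the log level while testing:
// await adminRequest('', { janus: 'set_log_level', level: 7 });

// List sessions, then handles, then dump one handle's ICE/DTLS state
const { sessions } = await adminRequest('', { janus: 'list_sessions' });
const { handles } = await adminRequest('/' + sessions[0], { janus: 'list_handles' });
const info = await adminRequest('/' + sessions[0] + '/' + handles[0], { janus: 'handle_info' });
console.log(JSON.stringify(info, null, 2)); // ICE and DTLS details show up in here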

That said, there are some things that look suspicious in your configuration:

  1. Probably unrelated to the ICE failures, but why use a custom DTLS certificate and disable self-signed certs?
  2. dtls_timeout is only supported when Janus has been linked against BoringSSL.
  3. Setting nat_1_1_mapping together with stun_server and turn_server makes little sense. If you are suggesting the public IP to the local ICE agent, there is no reason to make the server discover a new one, or to relay traffic through a TURN server. More importantly, the turn_server setting in Janus is almost always unneeded: it is NOT the TURN server used by clients (that can be configured in the client code), but a relay server that Janus itself will offer as a candidate. A trimmed-down example follows right after this list.
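
As an example, a trimmed-down nat section along those lines (just a sketch, reusing the placeholder public IP and the keys already present in your config) could be as simple as:

nat: {
    nice_debug = true
    full_trickle = true
    ice_lite = false
    ice_tcp = false
    ignore_mdns = true
    nat_1_1_mapping = "my.public.ip"
    keep_private_host = false
    ice_ignore_list = "docker0,vmnet,172.16.0.0/12,127.0.0.1"
}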

Double check your configuration and test again, gathering data about ICE pairs.

Thanks for your reply,

I made the changes, but I’m still seeing the same failures. I was using ExpressTurn earlier and have now switched to Cloudflare’s TURN server; the issue persists.

I debugged further using the Admin API and found that no DTLS handshake happened. Could there be a firewall issue? The client is behind a symmetric NAT, so it can’t do without a TURN server. My VM is listening on UDP ports 10000-10200 (these correspond to the rtp_port_range in my Janus config file).

Below are the ICE and DTLS sections from the handle_info endpoint of the Admin API:

       "ice": {
                "stream_id": 1,
                "component_id": 1,
                "state": "failed",
                "failed-detected": 38235387861,
                "icetimer-started": true,
                "gathered": 38228088229,
                "local-candidates": [
                    "1 1 udp 2015363327 51.38.y.x 10135 typ host"
                ],
                "remote-candidates": [
                    "3939175717 1 udp 2122194687 192.168.1.11 60814 typ host generation 0 ufrag 6qKG network-id 1 network-cost 10",
                    "2738998221 1 udp 2122262783 2401:4900:1f38:7913:8daa:3e45:6820:c9fc 61792 typ host generation 0 ufrag 6qKG network-id 2 network-cost 10",
                    "1166142491 1 udp 1685987071 122.161.x.y 18455 typ srflx raddr 192.168.1.11 rport 60814 generation 0 ufrag 6qKG network-id 1 network-cost 10",
                    "1623477837 1 udp 41886975 104.30.146.230 19357 typ relay raddr 2401:4900:1f38:7913:8daa:3e45:6820:c9fc rport 61792 generation 0 ufrag 6qKG network-id 2 network-cost 10",
                    "3022373691 1 udp 41886207 104.30.150.210 45017 typ relay raddr 2401:4900:1f38:7913:8daa:3e45:6820:c9fc rport 61792 generation 0 ufrag 6qKG network-id 2 network-cost 10",
                    "70855494 1 udp 41821439 104.30.146.254 35330 typ relay raddr 122.161.x.y rport 18455 generation 0 ufrag 6qKG network-id 1 network-cost 10",
                    "1250891695 1 udp 25109503 104.30.150.210 38912 typ relay raddr 2401:4900:1f38:7913:8daa:3e45:6820:c9fc rport 58669 generation 0 ufrag 6qKG network-id 2 network-cost 10",
                    "1466309435 1 udp 8332031 104.30.146.231 45602 typ relay raddr 2401:4900:1f38:7913:8daa:3e45:6820:c9fc rport 58673 generation 0 ufrag 6qKG network-id 2 network-cost 10",
                    "1320227166 1 udp 25108735 104.30.148.94 57403 typ relay raddr 2401:4900:1f38:7913:8daa:3e45:6820:c9fc rport 58677 generation 0 ufrag 6qKG network-id 2 network-cost 10",
                    "2206701133 1 udp 8331263 104.30.150.211 15091 typ relay raddr 2401:4900:1f38:7913:8daa:3e45:6820:c9fc rport 58681 generation 0 ufrag 6qKG network-id 2 network-cost 10"
                ],
                "ready": 1
            },
            "dtls": {
                "fingerprint": "4B:06:2B:03:B9:CC:EB:73:90:E7:3D:AD:23:85:A5:59:B9:4D:00:03:FA:7F:E8:10:87:A5:FD:54:2B:8A:84:F3",
                "remote-fingerprint": "15:3D:04:B2:52:11:F4:2D:3B:B0:79:FA:EF:CA:9F:79:54:D3:8E:5B:AB:0B:2A:C3:46:FA:C4:2A:42:18:D0:5B",
                "remote-fingerprint-hash": "sha-256",
                "dtls-role": "active",
                "dtls-state": "created",
                "retransmissions": 0,
                "valid": false,
                "srtp-profile": "none",
                "ready": false,
                "stats": {
                    "in": {
                        "packets": 0,
                        "bytes": 0
                    },
                    "out": {
                        "packets": 0,
                        "bytes": 0
                    }
                }
            },

I’ve been stuck on this for three weeks now. Any suggestions are much appreciated. Thanks.

There will be no DTLS handshake if ICE does not succeed first, and in your case ICE is still failing. The problem might be on the client network or a misconfiguration of the server VM.
Try adding the TLS transport to the client’s TURN configuration; maybe the client firewall is blocking outbound UDP.
Also inspect the ICE candidate pairs in chrome://webrtc-internals: that will show you which addresses are being tested. A sketch of both follows below.
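
For instance, on the client side (a sketch reusing your placeholders; the 5349 port and the ?transport=tcp parameter are the conventional values for turns: URLs, so check what your TURN provider actually exposes):

const config = {
    iceServers: [
        {
            // TURN over TLS instead of plain UDP
            urls: 'turns:<turn-server>:5349?transport=tcp',
            username: '<turn-username>',
            credential: '<turn-password>',
        },
    ],
    // Relay-only while debugging, so only TURN pairs are tried
    iceTransportPolicy: 'relay',
};

// Log the candidate pairs being tested (the same data that
// chrome://webrtc-internals shows), assuming `pc` is your
// RTCPeerConnection:
async function dumpCandidatePairs(pc) {
    const stats = await pc.getStats();
    stats.forEach((report) => {
        if (report.type === 'candidate-pair') {
            console.log(report.state, report.localCandidateId, report.remoteCandidateId);
        }
    });
}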