Multiple Janus Instances Behind a NAT

Hello everyone,

I’m seeking advice on a technical challenge I’m facing with my on-premise setup involving multiple Janus WebRTC servers. Previously, I hosted these servers on AWS EC2 instances, each with its own Elastic IP, making them individually accessible over the internet. That approach eliminated the need for TURN or STUN, since a straightforward 1:1 NAT IP mapping was sufficient.

However, I’m transitioning to an on-premise deployment with several bare-metal servers, but with only one public IP at my disposal. Previously, for development purposes, I employed a macvlan network, providing unique private IPs to each Janus instance. The current requirement is to make these servers publicly accessible, but assigning separate public IPs isn’t feasible.

I’m exploring alternatives to port forwarding, as I’m looking for a more elegant and scalable solution. Could anyone share insights or experiences on how to efficiently expose multiple Janus servers to the internet using a single public IP address? I’m interested in learning about any practical methods or best practices that could be applicable in this scenario.

Thank you in advance for your time and assistance!

I suspect that making Janus use STUN (no need for TURN in Janus itself) to traverse the NAT may be the only way to get this done. Otherwise, I’m not sure if your NAT can be configured to behave like AWS does, perhaps by partitioning the port space to associate different ranges with different private IPs.
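For anyone landing here later: enabling STUN in Janus comes down to the `nat` block of `janus.jcfg`. A minimal sketch, where the STUN server address and port are placeholders rather than recommendations:

```
# Sketch of the relevant nat block in janus.jcfg;
# the STUN server address and port are placeholders.
nat: {
	stun_server = "stun.example.com"
	stun_port = 3478
	# Janus queries this server to learn its public address,
	# which it then uses in the ICE candidates it gathers.
}
```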


Thank you very much, Lorenzo. Your advice is extremely valuable, and I’m eager to try implementing the STUN configuration on my Janus servers to see if they can successfully create sessions behind the NAT. Also, I’d like to take this opportunity to express my gratitude to you and your team for developing such an excellent product. Working with it has been a tremendously enjoyable experience.

Since I have your attention, I hope it’s okay to ask one more question. In my on-premise setup, where opening a wide range of ports isn’t feasible, I’ve set up a coturn server for my clients, also located behind the NAT and on the same network as my Janus server (a single Janus instance, with no STUN or TURN configuration of its own). I’ve successfully created sessions through coturn, but the challenge arises with scaling: I need to deploy more than one Janus server, and I’m unsure how to scale coturn alongside my Janus services. I’ve come across containerized Janus servers with integrated coturn, as well as Kubernetes pods hosting both Janus and coturn in a one-to-one pairing. The relationship and optimal configuration strategy between coturn and Janus are somewhat confusing to me. Your insights on this would be immensely appreciated.

Janus and TURN servers can be scaled independently: there’s no need to couple them together. You’ll need fewer TURN servers than Janus instances, since they have less work to do.
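For reference, a decoupled coturn that all clients share can be a very small standalone deployment. A minimal sketch of `turnserver.conf`, where every IP, port, realm, and credential is a placeholder to adapt:

```
# Minimal standalone coturn sketch (turnserver.conf);
# all values below are placeholders.
listening-port=3478
listening-ip=10.0.0.5                 # coturn's private IP behind the NAT
external-ip=203.0.113.10/10.0.0.5     # public-IP/private-IP mapping
realm=example.com
lt-cred-mech                          # long-term credential mechanism
user=turnuser:turnsecret              # static user, fine for testing only
# Keep the relay range small so only a narrow UDP window must be forwarded
min-port=49160
max-port=49200
```

Since only a single `listening-port` plus the relay range need forwarding, this fits the "few open ports" constraint better than exposing each Janus instance's full media port range.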


Thank you for your prompt reply! That aligns with what I was thinking. Considering that Slack managed to serve thousands of users with only 23 TURN servers, I’m confident that one or two TURN servers will suffice for my needs. 🙂

Status update: for anyone interested in how I resolved this issue, I’m pleased to share that my setup is now working. Here’s what I did: since my TURN server is on the same network as my Janus servers, I decided to proceed without a STUN configuration. I configured my clients to use TURN and made the following adjustments to my Janus configuration:

nat_1_1_mapping = "${public_ip}"
keep_private_host = true

With these changes, I successfully managed to make video calls using multiple Janus instances.
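On the client side, "configured my clients to use TURN" boils down to handing the browser a TURN entry in `iceServers`. A sketch, where the TURN URL and credentials are placeholders for your own coturn deployment:

```javascript
// Sketch of the client-side ICE configuration; the TURN URL and the
// credentials are placeholders for your own coturn deployment.
const iceServers = [
  {
    urls: "turn:turn.example.com:3478?transport=udp",
    username: "turnuser",
    credential: "turnsecret",
  },
];

// With plain WebRTC this is passed when creating the peer connection:
//   const pc = new RTCPeerConnection({ iceServers });
// With janus.js, the same list can be given to the Janus constructor:
//   const janus = new Janus({ server, iceServers, success, error });
```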

Hi,
Are you using Janus in Kubernetes? I’m facing issues where I have to scale Janus to, say, 10 video rooms per pod. I’m using an Nginx proxy to reach Janus on its HTTP interface, but the ICE connection fails, since Janus runs inside pods with no public IP set in nat_1_1_mapping.

Hello.

Running Janus on Kubernetes is tricky. It’s problematic to open all the ports that Janus needs on a pod, and with multiple pods on a single node it gets even more complicated. I am currently running a single pod per node and use a TURN server (only the clients use it, not Janus) to relay incoming packets to the respective Janus instances. Please read my latest post above for a more detailed description.
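To make the "single pod per node" layout concrete, one common pattern (a hypothetical sketch, not the poster’s actual manifest; image, labels, and ports are placeholders) is a hostNetwork pod pinned to a dedicated node, so Janus binds the node’s interfaces directly instead of needing a Service per media port:

```yaml
# Hypothetical sketch of a one-Janus-per-node pod; image, label,
# and port values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: janus
  labels:
    app: janus
spec:
  hostNetwork: true        # Janus binds the node's interfaces directly,
                           # avoiding per-port Service/NodePort plumbing
  nodeSelector:
    role: janus            # hypothetical label: one dedicated node per pod
  containers:
    - name: janus
      image: example/janus:latest   # placeholder image
      ports:
        - containerPort: 8088       # Janus HTTP API, fronted by Nginx
```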

Good luck.