Difficulty Exposing Janus UDP Ports on AWS CloudFormation

Hello everyone,

As a first-time poster, I’d like to express my gratitude to the Janus team for creating such an incredible product. Working with the Janus platform and exploring its fantastic features has been a lot of fun.

I’ve been building a web application on my local machine, and now it’s time to deploy it to the cloud and test it. All my services are dockerized, and I deployed the app locally with docker-compose. I used macvlan networking, so all my ports were directly accessible and everything ran smoothly.

For the final product, I decided to use AWS. I created my stack definition in Compose format and used the AWS CLI to transpile it into a CloudFormation template (YAML). All my web services seem to be working fine, except that I’m having difficulty exposing the RTP ports that Janus uses. I restricted my Janus ports to 20000-25000 and added that range to my docker-compose file. However, when the AWS CLI transpiles the Compose file into a CloudFormation template, it creates a set of resources (a load balancer entry, a security group rule, and so on) for every single port in the range. This is a problem because a stack can only contain 500 resources.

I’m seeking advice on how to properly expose my Janus UDP ports in my AWS CloudFormation template. Any guidance would be greatly appreciated!

version: "3.9"

services:
.
.
.
  janus:
    image: myECR/Janus
    networks:
      - frontend
      - backend
    ports:
      - 80:80
      - 7088:7088
      - 8088:8088
      - 8188:8188
      - 20000-20500:20000-20500/udp
    deploy:
      replicas: 2

networks:
  frontend:
  backend:

volumes:
  recording-data:

Kaan.

I assume that when you say “the AWS CLI transpiles my compose file”, you mean it creates a config for an AWS managed container service?

It has been a while since I used AWS managed containers (ECS, Fargate), but I have recent experience with Cloud Run, the equivalent on Google Cloud.

My experience with those managed container services so far has been that they are not useful when you need a high degree of control over your networking. They are fine for the common use case of a stateless HTTP(S) server, but that’s pretty much it. Their abstractions just don’t fit:

  • There is no easy way to open an entire port range
  • The load balancing is hard to get right (you need some sort of instance-sticky UDP load balancing, good luck)
  • Scaling is often based on a HTTP request model, which doesn’t make sense with WebRTC/RTP
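For what it’s worth, CloudFormation itself has no trouble with port ranges when you write the template by hand — a single security group ingress rule covers the whole range in one resource. The resource explosion comes from the transpiler emitting a listener/target/rule set per port. A minimal sketch (the resource names here are hypothetical, and `JanusSecurityGroup` is assumed to be defined elsewhere in the stack):

```yaml
# Hypothetical CloudFormation fragment: one ingress rule spans the
# entire UDP range, so the range costs exactly one stack resource.
Resources:
  JanusRtpIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !Ref JanusSecurityGroup  # assumed to exist elsewhere
      IpProtocol: udp
      FromPort: 20000
      ToPort: 20500
      CidrIp: 0.0.0.0/0
```

So if you keep the EC2/security-group layer hand-written and only let the tooling generate the HTTP parts, you stay well under the resource limit.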

In the end, we resorted to running Janus on VM instances (EC2) via docker-compose. We have a machine template that, on instantiation, automatically installs docker-compose, provisions some secrets, then pulls the right containers (Janus, Nginx/Caddy) and runs them. The instances are ephemeral. Thanks to the template it is very easy to scale up; auto scaling is also possible with some additional work.
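As a rough illustration of what such a template does on boot, the user data can be a cloud-init file along these lines (paths and the image name are placeholders; our real setup differs in the details, e.g. how secrets are fetched):

```yaml
#cloud-config
# Hypothetical cloud-init user data for an ephemeral Janus EC2 instance.
packages:
  - docker.io
  - docker-compose
write_files:
  - path: /opt/janus/docker-compose.yml
    content: |
      services:
        janus:
          image: myECR/Janus   # pulled from the registry on boot
          network_mode: host   # host networking avoids per-port mapping for RTP
          restart: unless-stopped
runcmd:
  - systemctl enable --now docker
  - cd /opt/janus && docker-compose up -d
```

With host networking, the instance’s security group (with its UDP range rule) is all the port plumbing you need.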

Hello Jonathan, thanks for replying, and sorry for the late reply. There was no response for a few days, so I stopped checking; I still use the old forum for my investigations and didn’t see your message.

Yes, the AWS CLI takes the Docker Compose file and creates its own config file for CloudFormation.

Yes, that seems to be the case. It took me a while to understand it.

That sounds like a great setup, thanks for sharing. I’m curious: when you say docker-compose, do you mean Docker Swarm?

We also use EC2 instances; I will look into creating templates, that sounds great.

For now we fall back on Docker Swarm, which is working surprisingly well. We have a few Janus instances and load balance among them manually. We still have some issues: session creation takes 8-10 seconds, which is a bit too long, and some performance problems that we think are caused by the way we designed the architecture.