Phantom IAD Region Showing in flyctl status

May 25, 2020 at 7:40pm

Hi there!
I have my regions set to only allow ewr. However, when I do flyctl status, I see:
efb7860e 104 ewr run running 1 total, 1 passing 0 2m19s ago
168ccae3 104 iad run running 1 total, 1 passing 0 1m38s ago
Where is iad coming from?

May 25, 2020 at 7:42pm
we're still working on explaining this better, but our defaults are a little strange when you have a single region set. each region has "backup" regions that will sometimes get processes; iad is a backup of ewr. we can set your app to not use backup regions, but that way you run the risk of losing VMs if we have an outage
if you want me to set your backup count to 0 just let me know what app you're working with
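(A hedged sketch of inspecting and pinning backup regions yourself; the regions backup subcommand may not have existed in this exact form at the time, so treat it as an assumption:)
# show the region pool and the backup regions for the app
flyctl regions list -a spirit-fish-ingress
# pin the backup list to ewr only, so nothing lands in iad
flyctl regions backup ewr -a spirit-fish-ingress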
understood - i figured it was something like that. The iad region seems to fail health checks quite often, whereas ewr is usually fine. Is there a reason that might happen? Is it possible iad is using a less powerful configuration and the container is crashing for lack of resources?
that's interesting, how do you have your healthchecks set up? we have the same hardware in both regions
Like so:
app = "spirit-fish-ingress"

[[services]]
  internal_port = 8080
  protocol = "tcp"

  [services.concurrency]
    hard_limit = 2000
    soft_limit = 200

  [[services.ports]]
    port = "80"

  [[services.ports]]
    port = "443"

  [[services.http_checks]]
    interval = 10000
    method = "get"
    path = "/__healthz__"
    protocol = "http"
    timeout = 2000

    [services.http_checks.headers]
      Host = "health.check"
Not exactly sure what's happening. I just scaled my box up a tier and removed dallas, and now I can deploy again (although that iad health check seems to take a long time)
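(That http_check amounts to roughly this request, repeated every 10 seconds with a 2-second timeout; the port comes from internal_port above, and this curl is only an approximation of what the platform's checker does:)
curl -sf --max-time 2 -H "Host: health.check" http://localhost:8080/__healthz__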
I'm looking at that app to see what's up
it should be able to put your stuff all in ewr
but I'm also curious what would be failing healthchecks
The app runs an nginx reverse proxy through to a node app (started with pm2).
My healthcheck goes through nginx -> upstream to node and back.
The Docker CMD runs the node app in the background, nginx in the background, and crond in the foreground:
CMD [ "sh", "-c", "yarn start-daemon && openresty && crond -f" ]
The healthcheck only started failing when I set it up to go through to node, which leads me to believe that the node app isn't booting in time for the healthcheck.
When does the healthcheck start sending? Is there a way for me to let Consul know we're ready for checks?
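(One way to close that gap, as a sketch: wrap the CMD in an entrypoint that waits for the node upstream before starting openresty, so the check can't race the boot. The upstream address 127.0.0.1:3000 and the presence of curl in the image are assumptions here:)
#!/bin/sh
# hypothetical start.sh, run via CMD [ "sh", "./start.sh" ]
yarn start-daemon
# poll the node upstream (port assumed) until it answers
until curl -sf http://127.0.0.1:3000/__healthz__ > /dev/null; do
  sleep 1
done
openresty
exec crond -f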
does node connect to anything for the check?
there's no great way to let consul know you're ready for checks yet. it gives you a few seconds, and it'll happily keep checking for a while though
ok! I'm getting deploys up for the time being, so I'll keep an eye on it. Thanks Kurt!
oh it was failing healthchecks and never booting?
not failing after boot
got it
if you see that again just send over the allocation id and I'll take a look

May 27, 2020 at 5:01pm
Can I just ask, what does "flyctl scale show" report?

May 28, 2020 at 12:26am
➜ spirit-fish-function git:(feature/dockerize) ✗ flyctl scale show
Scale Mode: Standard
Min Count: 2
Max Count: 10
VM Size: cpu4mem4

May 28, 2020 at 6:28am
There's your issue: the min count asks for at least two instances, so another instance is created "close" to the pool, in a backup region.
do flyctl scale set min=1
(And to trigger a redeploy... flyctl secrets set nosecret=no)
(it's a dummy secret; setting it counts as a change, so it will update the app and trigger a redeploy)
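(Putting the two commands together; the -a flag to name the app explicitly is an assumption and can be omitted when running from the app's directory:)
# drop the minimum to one instance so only the region pool is needed
flyctl scale set min=1 -a spirit-fish-ingress
# dummy secret change to force an immediate redeploy
flyctl secrets set nosecret=no -a spirit-fish-ingress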

May 29, 2020 at 5:25pm
Thank you!