Phantom IAD Region Showing in flyctl status
May 25, 2020 at 7:40pm
I have my regions set to only allow ewr. However, when I do flyctl status, I see:
efb7860e 104 ewr run running 1 total, 1 passing 0 2m19s ago
168ccae3 104 iad run running 1 total, 1 passing 0 1m38s ago
May 25, 2020 at 7:42pm
we're still working on explaining this better, but our defaults are a little strange when you have a single region set. Each region has "backup" regions that will sometimes get processes; iad is a backup of ewr. We can set your app to not use backup regions, but that way you run the risk of losing VMs if we have an outage
if you want me to set your backup count to 0 just let me know what app you're working with
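For reference, the region pool and its backups can also be inspected and changed from the CLI. A sketch using the flyctl region commands of this era (exact flags hedged; `-a` selects the app by name):

```shell
# Show the app's current region pool, including backup regions.
flyctl regions list -a spirit-fish-ingress

# Pin the pool to ewr only.
flyctl regions set ewr -a spirit-fish-ingress

# Set the backup region list explicitly; passing only ewr keeps
# everything in one region, which is the CLI-side equivalent of the
# "backup count 0" offered above (with the same outage trade-off).
flyctl regions backup ewr -a spirit-fish-ingress
```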
app = "spirit-fish-ingress"

[[services]]
internal_port = 8080
protocol = "tcp"

[services.concurrency]
hard_limit = 2000
soft_limit = 200

[[services.ports]]
port = "80"

[[services.ports]]
port = "443"

[[services.http_checks]]
interval = 10000
method = "get"
path = "/__healthz__"
protocol = "http"
timeout = 2000

[services.http_checks.headers]
Host = "health.check"
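That `[[services.http_checks]]` block describes an HTTP GET against the app's internal port with the configured Host header; `interval` and `timeout` are in milliseconds. Roughly what the checker sends, reproduced from inside the instance (port 8080 comes from `internal_port` above):

```shell
# Approximate the platform health check: GET /__healthz__ every 10s
# (interval = 10000 ms), failing if no answer within 2s (timeout = 2000 ms).
curl --max-time 2 -H "Host: health.check" http://localhost:8080/__healthz__
```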
Not exactly sure what's happening. I just scaled my box up a tier and removed Dallas, and now I can deploy again (although that iad health check seems to take a long time)
it should be able to put your stuff all in ewr
but I'm also curious what would be failing healthchecks
The app runs an nginx reverse proxy through to a node app (started with
My healthcheck goes through nginx -> upstream to node and back.
Docker CMD runs node app in the background, nginx in the background, and crond in the foreground:
CMD [ "sh", "-c", "yarn start-daemon && openresty && crond -f" ]
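Since the suspicion below is that node isn't up before the first check, one way to close that gap is to delay starting openresty (and therefore the proxied health endpoint) until node actually answers. A minimal sketch for the image's start script, assuming the node app listens on port 3000 and `wget` is available in the image (both assumptions, not from the thread):

```shell
#!/bin/sh
# wait_for: poll a readiness command once a second until it succeeds,
# giving up after 30 tries. Generic helper, not part of the original setup.
wait_for() {
  tries=0
  until "$@" >/dev/null 2>&1; do
    tries=$((tries + 1))
    [ "$tries" -ge 30 ] && return 1
    sleep 1
  done
  return 0
}

# The CMD above could then become, e.g.:
#   yarn start-daemon
#   wait_for wget -qO- http://127.0.0.1:3000/__healthz__   # port is an assumption
#   openresty
#   exec crond -f
```

This way openresty only starts answering (and failing) health checks once the upstream exists.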
The healthcheck only started failing when I set it up to go through to node, which leads me to believe that the node app isn't booting in time for the healthcheck.
When does the healthcheck start sending? Is there a way for me to let Consul know we're ready for checks?
there's no great way to let Consul know you're ready for checks yet; it gives you a few seconds, but it'll happily keep checking for a while
not failing after boot
if you see that again just send over the allocation id and I'll take a look
May 27, 2020 at 5:01pm
May 28, 2020 at 12:26am
May 28, 2020 at 6:28am
There’s your issue - the min count is asking for at least two instances, so another instance is created “close” to the pool.
do flyctl scale set min=1
(And to trigger a redeploy... flyctl secrets set nosecret=no)
(it’s a dummy secret setting which by its nature will update and redeploy)
May 29, 2020 at 5:25pm