
Issues Deploying a Fresh App Service

I’m diving into the deep end a little here to avoid droning on with context, but the gist is that I’m trying to redeploy a WordPress site I previously had running as a Windows App Service, this time as a Linux App Service. Seeing as you can’t migrate between the two, I’m starting anew and then using UpdraftPlus to restore my files and database.

TL;DR

Supposedly, my App Service ran out of memory during initial configuration. Stopped the other App Services under the same plan. Ran setup again. Worked without issues.

The Problem

While I had success in deploying an app service with a similar enough configuration (which happens to be this very site), the other site simply won’t start. Rather than showing the usual WordPress setup screen, it’s giving me a 500:

~ » curl -i https://(redacted)
HTTP/2 500
content-type: text/html; charset=UTF-8
server: nginx
x-powered-by: PHP/7.2.13
x-xss-protection: 1; mode=block
x-frame-options: SAMEORIGIN
x-content-type-options: nosniff
date: Wed, 14 Apr 2021 03:28:58 GMT
content-length: 0
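
As a sanity check, it might also be worth poking the Kudu (SCM) site that sits alongside every App Service; if that responds, the platform is reachable and the problem is confined to the app container itself. Hostname redacted to match the above, so treat this as a sketch:

# Hit the SCM/Kudu endpoint for the same (redacted) app.
curl -i https://(redacted).scm.azurewebsites.net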

Meanwhile, this site seems to work just fine:

~ » curl -i https://thedylanboyd.com
HTTP/2 301
content-type: text/html; charset=UTF-8
location: https://thedylanboyd.com/
server: nginx
x-powered-by: PHP/7.2.13
x-redirect-by: WordPress
fastcgi-cache: HIT
x-xss-protection: 1; mode=block
x-frame-options: SAMEORIGIN
x-content-type-options: nosniff
date: Wed, 14 Apr 2021 03:29:10 GMT
content-length: 0

Seeing as I haven’t even had the opportunity to configure WordPress on that site in the first place, I can’t imagine I’ve done much wrong. It doesn’t work on either the azurewebsites.net hostname or the custom domain I’ve set up with an App Service Certificate, registered against the App Service Domain hooked up to it.

One thing that may have caused the issue is enforcing SSL and setting up the custom domain before going through with the WordPress installation. I’ll disable HTTPS Only and see if that does anything:

App Service protocol settings
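
For the record, the same toggle should also be flippable from the CLI rather than the portal, something along these lines (names redacted as before):

# Turn off the HTTPS Only enforcement for the (redacted) app.
az webapp update --name (redacted) --resource-group (redacted) --https-only false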

Still nothing there, I’m afraid…

HTTP/2 500
content-type: text/html; charset=UTF-8
server: nginx

Perhaps I should try restarting the app service at this point:

~ » az webapp restart --name (redacted) --resource-group (redacted)

And let’s try again…

HTTP/2 500
content-type: text/html; charset=UTF-8
server: nginx

Hmm… this is one unhappy app service. Time to try re-deploying it. I thought I’d try Azure’s built-in troubleshooting option before launching the nuclear missiles, and found it thoroughly amusing that the troubleshooting page itself didn’t work and returned a blank screen. Wow.

Diagnose and solve problems

Just before I do that, it is interesting to see that Nginx is complaining about not being able to communicate with the upstream server:

113843d0ecb2:~# tail /home/LogFiles/nginx/error.log
2021/04/14 03:19:07 [error] 141#141: *50 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 172.16.2.1, server: _, request: "POST /wp-login.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock:", host: "(redacted)"
2021/04/14 03:19:07 [error] 141#141: *52 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 172.16.2.1, server: _, request: "POST /xmlrpc.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock:", host: "(redacted)"
2021/04/14 03:20:01 [error] 141#141: *1 connect() to unix:/var/run/php/php7.0-fpm.sock failed (111: Connection refused) while connecting to upstream, client: 172.16.2.1, server: _, request: "GET /robots933456.txt HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock:", host: "127.0.0.1:3875"
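
As an aside, rather than SSHing into the container and running tail by hand, the same log output can be streamed from a local terminal, which is convenient while repeatedly poking the site:

# Stream the container, nginx and php logs live from the App Service.
az webapp log tail --name (redacted) --resource-group (redacted)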

Maybe it’s worth checking what Nginx thinks the upstream server should be.

113843d0ecb2:~# cat /home/etc/nginx/conf.d/default.conf

upstream php {
        server unix:/var/run/php/php7.0-fpm.sock;
        #server 127.0.0.1:9000;
}

server {
        listen 80;
        ## Your website name goes here.
        server_name _;
        ## Your only path reference.
        root /home/site/wwwroot;
        ## This should be in your http block and if it is, it's not needed here.
        index index.php;

        location = /favicon.ico {
                log_not_found off;
                access_log off;
        }

        location = /robots.txt {
                allow all;
                log_not_found off;
                access_log off;
        }

There’s more to that file, and it looks like it’s using the FPM socket. Nginx must be struggling to communicate with it, even though I definitely didn’t touch anything to do with that. Bizarre. I wonder if the socket even exists.

113843d0ecb2:~# ls -l /var/run/php/php7.0-fpm.sock
srwxrwxrwx    1 www-data www-data         0 Apr 14 03:42 /var/run/php/php7.0-fpm.sock
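
Of course, the socket file existing doesn’t guarantee anything is actually listening on it. A quick way to check from inside the container is to look for the php-fpm master and worker processes:

# Is php-fpm actually running? The bracketed pattern stops grep matching itself.
ps aux | grep '[p]hp-fpm'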

It sure does. There’s a PHP error file – I wonder what’s in it.

113843d0ecb2:~# wc -l /home/LogFiles/nginx/php-error.log
0 /home/LogFiles/nginx/php-error.log

It’s empty. Okay, odd. I’m not a PHP master, so I’m not sure why it doesn’t work at this stage. I think it’s high time we blew this popsicle stand.
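
Before anything drastic, though, the rest of /home/LogFiles is probably worth a quick glance, since the platform tends to drop its Docker and supervisor output there as well (file names will vary by image and instance):

# List everything the platform has logged for this container.
ls -lR /home/LogFiles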

A Madagascar penguin, courtesy of GIPHY

This is truly odd. After deleting and re-creating the app service, it exhibits the exact same behaviour. Further testing also shows that it alternates between 500 Internal Server Error responses and another 5xx code I can’t remember now. It might have been a 502 Bad Gateway from Nginx.

The only likely cause I can think of right now is limited resources in the App Service Plan causing the deployment to fail at some point. Memory usage is below 90%, though, so intuition says this shouldn’t be the cause. It seems some more troubleshooting is necessary.

Memory usage in App Service Plan
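
The chart above is from the portal; the same metric should be retrievable from the CLI too, if you’d rather watch it from a terminal (names redacted as usual, and the metric name is what I believe the platform exposes for plans):

# Grab the plan's resource ID, then query its memory percentage metric.
PLAN_ID=$(az appservice plan show --name (redacted) --resource-group (redacted) --query id -o tsv)
az monitor metrics list --resource "$PLAN_ID" --metric MemoryPercentage --interval PT5M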

I would have thought one would see a spike above 90% if a process were to run out of memory. One less destructive way to test this could be to stop the site that’s currently running and start the one that’s failing. Admittedly, it’s this very site that would be stopped, but…
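
In CLI terms, that would be something like this (site names redacted):

# Stop the site that's currently healthy to free memory in the plan...
az webapp stop --name (redacted) --resource-group (redacted)
# ...then try bringing up the one that keeps returning 500s.
az webapp start --name (redacted) --resource-group (redacted)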

For science, courtesy of GIPHY

Suddenly, a Wild Solution Appears.

So it turns out that simply stopping this site and trying to restart the offending App Service didn’t change anything. What did, though, was leaving this site stopped and recreating the offending App Service.

Because of that, I’m thinking the underlying cause is insufficient memory: I can only imagine that unpacking resources with tools such as git and docker put enough of a strain on memory, or that some preliminary check failed while it was in short supply. I can’t confirm this now that it works, but methinks it’s time to boost that App Service Plan.
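
If I do go down that route, bumping the plan up a tier should just be a one-liner (the target SKU here is purely illustrative):

# Scale the App Service Plan up to a bigger SKU, e.g. B2.
az appservice plan update --name (redacted) --resource-group (redacted) --sku B2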

Fluctuating memory as App Services were stopped, deleted, and created

It just goes to show that the conveniences of SaaS, PaaS, and IaaS demand just as much of an engineering mindset as running applications did before they existed.

At this post’s end, I’ve pasted the logs from the successful setup process. Of note are these lines:

2021-04-14T05:30:16.262040051Z: [INFO]  128:M 14 Apr 05:30:16.261 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
2021-04-14T05:30:16.262764956Z: [INFO]  128:M 14 Apr 05:30:16.262 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.

Happy reading! Until next time.

Goodbye, courtesy of GIPHY

2021-04-14T05:29:30.711950789Z: [INFO]  PHP 7.2.13 (cli) (built: Dec 10 2018 14:53:23) ( NTS )
2021-04-14T05:29:30.712010789Z: [INFO]  Copyright (c) 1997-2018 The PHP Group
2021-04-14T05:29:30.712022589Z: [INFO]  Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
2021-04-14T05:29:30.712031789Z: [INFO]      with Zend OPcache v7.2.13, Copyright (c) 1999-2018, by Zend Technologies
2021-04-14T05:29:30.724357659Z: [INFO]  Setup openrc ...
2021-04-14T05:29:31.243366896Z: [ERROR]  Service `hwdrivers' needs non existent service `dev'
2021-04-14T05:29:31.248735183Z: [INFO]   * Caching service dependencies ... [ ok ]
2021-04-14T05:29:31.273568822Z: [INFO]  INFO: /home/site/wwwroot/wp-config.php not found.
2021-04-14T05:29:31.274449520Z: [INFO]  Installing WordPress for the first time ...
2021-04-14T05:29:31.280020607Z: [INFO]  INFO: There in no wordpress, going to GIT pull...:
2021-04-14T05:29:31.318370713Z: [INFO]  INFO: ++++++++++++++++++++++++++++++++++++++++++++++++++:
2021-04-14T05:29:31.319054812Z: [INFO]  REPO: https://github.com/azureappserviceoss/wordpress-azure
2021-04-14T05:29:31.319615710Z: [INFO]  BRANCH: linux-appservice
2021-04-14T05:29:31.321066307Z: [INFO]  INFO: ++++++++++++++++++++++++++++++++++++++++++++++++++:
2021-04-14T05:29:31.321088307Z: [INFO]  INFO: Clone from https://github.com/azureappserviceoss/wordpress-azure
2021-04-14T05:29:31.368671091Z: [ERROR]  Cloning into '/home/site/wwwroot'...
2021-04-14T05:29:49.421436676Z: [ERROR]  Checking out files:   5% (105/1839)
Checking out files:   6% (111/1839)
Checking out files:   7% (129/1839)
Checking out files:   8% (148/1839)
Checking out files:   9% (166/1839)
Checking out files:  10% (184/1839)
Checking out files:  11% (203/1839)
Checking out files:  11% (213/1839)
Checking out files:  12% (221/1839)
Checking out files:  13% (240/1839)
Checking out files:  14% (258/1839)
Checking out files:  15% (276/1839)
Checking out files:  16% (295/1839)
Checking out files:  17% (313/1839)
Checking out files:  17% (331/1839)
Checking out files:  18% (332/1839)
Checking out files:  19% (350/1839)
Checking out files:  20% (368/1839)
Checking out files:  21% (387/1839)
Checking out files:  22% (405/1839)
Checking out files:  23% (423/1839)
Checking out files:  24% (442/1839)
Checking out files:  24% (443/1839)
Checking out files:  25% (460/1839)
Checking out files:  26% (479/1839)
Checking out files:  27% (497/1839)
Checking out files:  28% (515/1839)
Checking out files:  29% (534/1839)
Checking out files:  29% (543/1839)
Checking out files:  30% (552/1839)
Checking out files:  31% (571/1839)
Checking out files:  32% (589/1839)
Checking out files:  33% (607/1839)
Checking out files:  34% (626/1839)
warning: unable to access 'wp-content/themes/twentynineteen/sass/site/primary/.gitattributes': Interrupted system call
2021-04-14T05:30:02.573748011Z: [ERROR]  Checking out files:  34% (627/1839)
Checking out files:  35% (644/1839)
Checking out files:  36% (663/1839)
Checking out files:  37% (681/1839)
Checking out files:  38% (699/1839)
Checking out files:  38% (710/1839)
Checking out files:  39% (718/1839)
Checking out files:  40% (736/1839)
Checking out files:  41% (754/1839)
Checking out files:  42% (773/1839)
Checking out files:  43% (791/1839)
Checking out files:  44% (810/1839)
Checking out files:  45% (828/1839)
Checking out files:  46% (846/1839)
Checking out files:  47% (865/1839)
Checking out files:  48% (883/1839)
Checking out files:  48% (897/1839)
Checking out files:  49% (902/1839)
Checking out files:  50% (920/1839)
Checking out files:  51% (938/1839)
Checking out files:  52% (957/1839)
Checking out files:  53% (975/1839)
Checking out files:  54% (994/1839)
Checking out files:  54% (1008/1839)
Checking out files:  55% (1012/1839)
Checking out files:  56% (1030/1839)
Checking out files:  57% (1049/1839)
Checking out files:  58% (1067/1839)
Checking out files:  59% (1086/1839)
Checking out files:  60% (1104/1839)
Checking out files:  60% (1107/1839)
Checking out files:  61% (1122/1839)
Checking out files:  62% (1141/1839)
Checking out files:  63% (1159/1839)
Checking out files:  64% (1177/1839)
Checking out files:  65% (1196/1839)
Checking out files:  65% (1209/1839)
Checking out files:  66% (1214/1839)
Checking out files:  67% (1233/1839)
Checking out files:  68% (1251/1839)
Checking out files:  69% (1269/1839)
Checking out files:  70% (1288/1839)
Checking out files:  70% (1295/1839)
Checking out files:  71% (1306/1839)
Checking out files:  72% (1325/1839)
Checking out files:  73% (1343/1839)
Checking out files:  74% (1361/1839)
Checking out files:  75% (1380/1839)
Checking out files:  75% (1393/1839)
Checking out files:  76% (1398/1839)
Checking out files:  77% (1417/1839)
Checking out files:  78% (1435/1839)
Checking out files:  79% (1453/1839)
Checking out files:  80% (1472/1839)
Checking out files:  81% (1490/1839)
Checking out files:  81% (1496/1839)
Checking out files:  82% (1508/1839)
Checking out files:  83% (1527/1839)
Checking out files:  84% (1545/1839)
Checking out files:  85% (1564/1839)
Checking out files:  85% (1570/1839)
Checking out files:  86% (1582/1839)
Checking out files:  87% (1600/1839)
Checking out files:  88% (1619/1839)
Checking out files:  89% (1637/1839)
Checking out files:  90% (1656/1839)
Checking out files:  91% (1674/1839)
Checking out files:  92% (1692/1839)
Checking out files:  93% (1711/1839)
Checking out files:  94% (1729/1839)
Checking out files:  95% (1748/1839)
Checking out files:  95% (1750/1839)
Checking out files:  96% (1766/1839)
Checking out files:  97% (1784/1839)
Checking out files:  98% (1803/1839)
Checking out files:  99% (1821/1839)
Checking out files:  99% (1822/1839)
Checking out files: 100% (1839/1839)
Checking out files: 100% (1839/1839), done.
2021-04-14T05:30:02.608208161Z: [INFO]  INFO: Checkout to linux-appservice
2021-04-14T05:30:03.906359167Z: [INFO]  Branch 'linux-appservice' set up to track remote branch 'linux-appservice' from 'origin'.
2021-04-14T05:30:14.142251520Z: [ERROR]  Switched to branch 'linux-appservice'
2021-04-14T05:30:14.201693838Z: [INFO]  Your branch is up to date with 'origin/linux-appservice'.
2021-04-14T05:30:14.227848323Z: [INFO]  INFO: External Mysql is used.
2021-04-14T05:30:16.214951720Z: [INFO]  INFO: Check SSL Setting...
2021-04-14T05:30:16.228830417Z: [INFO]  INFO: SSL Setting is exist!
2021-04-14T05:30:16.229720124Z: [INFO]  Starting Redis ...
2021-04-14T05:30:16.231900739Z: [INFO]  INFO: /home/LogFiles/supervisor not found. creating ...
2021-04-14T05:30:16.240189397Z: [INFO]  128:C 14 Apr 05:30:16.240 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2021-04-14T05:30:16.243905423Z: [INFO]  128:C 14 Apr 05:30:16.240 # Redis version=4.0.11, bits=64, commit=bca38d14, modified=0, pid=128, just started
2021-04-14T05:30:16.243940724Z: [INFO]  128:C 14 Apr 05:30:16.240 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
2021-04-14T05:30:16.260505140Z: [INFO]  128:M 14 Apr 05:30:16.260 * Running mode=standalone, port=6379.
2021-04-14T05:30:16.261353046Z: [INFO]  128:M 14 Apr 05:30:16.260 # Server initialized
2021-04-14T05:30:16.262040051Z: [INFO]  128:M 14 Apr 05:30:16.261 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
2021-04-14T05:30:16.262764956Z: [INFO]  128:M 14 Apr 05:30:16.262 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
2021-04-14T05:30:16.264859571Z: [INFO]  INFO: Log folder for nginx/php not found. creating...
2021-04-14T05:30:16.271446717Z: [INFO]  128:M 14 Apr 05:30:16.265 * Ready to accept connections
2021-04-14T05:30:16.279444773Z: [INFO]  INFO: 50x file not found. createing...
2021-04-14T05:30:16.543778430Z: [ERROR]  mv: can't preserve ownership of '/home/etc/nginx/modules': Operation not permitted
2021-04-14T05:30:16.559521440Z: [INFO]  INFO: creating /run/php/php7.0-fpm.sock ...
2021-04-14T05:30:16.594933289Z: [INFO]  Starting SSH ...
2021-04-14T05:30:16.597216305Z: [INFO]  Starting php-fpm ...
2021-04-14T05:30:16.598027611Z: [INFO]  Starting Nginx ...
2021-04-14T05:30:17.361585175Z: [INFO]  2021-04-14 05:30:17,361 INFO Set uid to user 0 succeeded
2021-04-14T05:30:17.373393858Z: [INFO]  2021-04-14 05:30:17,373 INFO supervisord started with pid 143
2021-04-14T05:30:18.380687934Z: [INFO]  2021-04-14 05:30:18,380 INFO spawned: 'php-fpm' with pid 145
2021-04-14T05:30:18.383759556Z: [INFO]  2021-04-14 05:30:18,383 INFO spawned: 'nginx' with pid 146
2021-04-14T05:30:18.387669483Z: [INFO]  2021-04-14 05:30:18,386 INFO spawned: 'sshd' with pid 147
2021-04-14T05:30:18.423949338Z: [INFO]  2021-04-14 05:30:18,423 INFO spawned: 'watch-fpm-socket' with pid 148
2021-04-14T05:30:19.493448552Z: [INFO]  2021-04-14 05:30:19,493 INFO success: php-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-04-14T05:30:19.501426408Z: [INFO]  2021-04-14 05:30:19,493 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-04-14T05:30:19.502047513Z: [INFO]  2021-04-14 05:30:19,501 INFO success: sshd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-04-14T05:30:19.503631724Z: [INFO]  2021-04-14 05:30:19,502 INFO success: watch-fpm-socket entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-04-14T05:30:21.523443216Z: [INFO]  2021-04-14 05:30:21,523 INFO exited: sshd (exit status 0; expected)