High Availability Apps via Fleet & CoreOS – Start to Finish: Deploying an App

Deploying a High-Availability Application

This post is part of the 4-part series High Availability Apps via Fleet & CoreOS from Start to Finish. The series takes you through creating all the infrastructure required to run your own scalable apps using Fleet and CoreOS.

Be sure to clone https://github.com/sedouard/fleet-bootstrapper.git to your local machine before getting started.

Objective

By the end of this walkthrough you will know:

  • How to deploy an application Docker image across multiple nodes
  • How to specify to fleet that units should run on different machines
  • How to deploy your own application based on the pattern of this one

Deploying the App

This repo comes with a simple Node.js Express application to demonstrate the deployment pattern. To deploy the application you’ll need to build and push the image for the app as we did with the router:

cd ./example-app
docker build -t localhost:5000/example-app:latest .
# log into boot2docker (Windows & OS X only)
boot2docker ssh
docker push localhost:5000/example-app:latest

Now we’ll use the fleet unit file example-app/example-app@.service:

[Unit]
Description=Example High-Availability Web App
After=router.service

[Service]
EnvironmentFile=/etc/environment
ExecStartPre=/usr/bin/docker pull localhost:5000/example-app:latest
ExecStart=/usr/bin/docker run --name example -p 3000:3000 localhost:5000/example-app:latest
ExecStop=/usr/bin/docker stop example
ExecStopPost=/usr/bin/docker kill example
ExecStopPost=/usr/bin/docker rm example
TimeoutStartSec=30m

[X-Fleet]
Conflicts=example-app@*.service

As mentioned in the How it Works section, this unit file defines a template (indicated by the ‘@’ in the file name), and the Conflicts directive ensures that only one instance runs per node. Each instance pulls the app image from the central image store via the registry running at localhost:5000.
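A rough sketch of the template rule in Python (instantiate is an illustrative helper, not part of fleet): starting `<name>@<instance>` from a `<name>@.service` template names the new unit after the instance and substitutes the instance string for every %i specifier in the unit body.

```python
def instantiate(template_name, unit_body, instance):
    """Sketch of systemd/fleet '@' template expansion (illustration only)."""
    # example-app@.service + instance "1" -> example-app@1.service
    name = template_name.replace("@.service", "@" + instance + ".service")
    # every %i specifier in the body becomes the instance string
    return name, unit_body.replace("%i", instance)

name, _ = instantiate("example-app@.service",
                      "Conflicts=example-app@*.service", "1")
print(name)  # example-app@1.service

# The discovery sidekick shown later uses %i to bind to its instance:
_, body = instantiate("example-app-discovery@.service",
                      "BindsTo=example-app@%i.service", "2")
print(body)  # BindsTo=example-app@2.service
```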

Deploy two instances of the app with the following commands:

cd ./example-app
fleetctl submit example-app@.service
fleetctl start example-app@1
fleetctl start example-app@2

Now fleetctl list-units should show the app instances running on two different machines:

fleetctl list-units
example-app@1.service       110dea21.../100.73.38.124   active  running
example-app@2.service       8abff2e7.../100.73.4.95     active  running
nginx_lb_router.service     110dea21.../100.73.38.124   active  running
nginx_lb_router.service     8abff2e7.../100.73.4.95     active  running
nginx_lb_router.service     b5815f25.../100.73.54.68    active  running
registry.service        110dea21.../100.73.38.124   active  running
registry.service        8abff2e7.../100.73.4.95     active  running
registry.service        b5815f25.../100.73.54.68    active  running

However, browsing to example-app.your_domain_name.com won’t work yet, because the router has no idea this app or its instances exist.

You need to deploy the ‘sidekick’ service for each instance, defined by the unit file ./example-app/example-app-discovery@.service:

[Unit]
Description=Announce Example App
BindsTo=example-app@%i.service
After=nginx_lb_router.service

[Service]
EnvironmentFile=/etc/environment
ExecStart=/bin/sh -c "while true; do etcdctl set /services/web/example-app/example-app@%i '${COREOS_PRIVATE_IPV4}:3000' --ttl 60;sleep 45;done"
ExecStop=/usr/bin/etcdctl rm /services/web/example-app/example-app@%i

[X-Fleet]
MachineOf=example-app@%i.service

This service template broadcasts the instance under the default /services/web directory, using the application name example-app and the key example-app@&lt;instance number&gt;, with the value set to the host IP and port of the example application. This allows nginx to build a routing configuration that looks something like:

    upstream example-app {
        server 100.73.38.124:3000;
        server 100.73.4.95:3000;
    }

    server {
      listen 80;
      server_name example-app.your_domain_name.com;

      #ssl on;
      #ssl_certificate /etc/ssl/certs/mycert.crt;
      #ssl_certificate_key /etc/ssl/private/mykey.key;
      #ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
      #ssl_ciphers         HIGH:!aNULL:!MD5;

      access_log /var/log/nginx-servicename-access.log;
      error_log /var/log/nginx-servicename-error.log;

      location / {
        proxy_pass http://example-app/;
        proxy_http_version 1.1;
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_404;
      }
    }

Nginx builds an upstream block like this for each application deployed, allowing for any number of apps and instances.
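As a hedged sketch of how that grouping could work (render_upstreams is a hypothetical illustration, not the router’s actual template mechanism), each key under /services/web/&lt;app&gt;/ contributes one server line to that app’s upstream block:

```python
from collections import defaultdict

def render_upstreams(keys):
    """keys: dict mapping an etcd key like
    /services/web/<app>/<instance> to an 'ip:port' value."""
    apps = defaultdict(list)
    for key, addr in sorted(keys.items()):
        app = key.split("/")[3]  # third path component is the app name
        apps[app].append(addr)
    blocks = []
    for app, addrs in apps.items():
        servers = "\n".join("    server %s;" % a for a in addrs)
        blocks.append("upstream %s {\n%s\n}" % (app, servers))
    return "\n\n".join(blocks)

conf = render_upstreams({
    "/services/web/example-app/example-app@1": "100.73.38.124:3000",
    "/services/web/example-app/example-app@2": "100.73.4.95:3000",
})
print(conf)
```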

The MachineOf attribute in the service file tells fleet to place the sidekick on the same machine as the specified service instance, and the BindsTo attribute ensures that the sidekick stops broadcasting if the application goes down, preventing nginx from sending requests to a dead container.
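The expiry mechanics can be sketched with an in-memory dictionary as a stand-in for etcd’s key TTLs (illustration only): the sidekick’s announce loop refreshes a 60-second TTL every 45 seconds, so the key survives only while the loop keeps running.

```python
class TTLStore:
    """In-memory stand-in for etcd key TTLs (illustration only)."""
    def __init__(self):
        self._data = {}  # key -> (value, expiry time)

    def set(self, key, value, ttl, now):
        self._data[key] = (value, now + ttl)

    def get(self, key, now):
        entry = self._data.get(key)
        if entry is None or now > entry[1]:
            return None  # missing or expired, as etcd would report
        return entry[0]

store = TTLStore()
key = "/services/web/example-app/example-app@1"

# Mimic the sidekick loop: set with --ttl 60, sleep 45, repeat.
for t in range(0, 180, 45):  # heartbeats at t = 0, 45, 90, 135
    store.set(key, "100.73.38.124:3000", ttl=60, now=t)

print(store.get(key, now=170))  # still announced; last refresh was t=135
print(store.get(key, now=200))  # loop stopped, so the key expired at t=195
```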

To deploy the sidekick services do:

cd ./example-app
fleetctl submit example-app-discovery@.service
fleetctl start example-app-discovery@1
fleetctl start example-app-discovery@2

Notice that fleetctl list-units now shows each discovery sidekick running on the same machine as its corresponding app instance:

fleetctl list-units
example-app-discovery@1.service 110dea21.../100.73.38.124   active  running
example-app-discovery@2.service 8abff2e7.../100.73.4.95     active  running
example-app@1.service       110dea21.../100.73.38.124   active  running
example-app@2.service       8abff2e7.../100.73.4.95     active  running
nginx_lb_router.service     110dea21.../100.73.38.124   active  running
nginx_lb_router.service     8abff2e7.../100.73.4.95     active  running
nginx_lb_router.service     b5815f25.../100.73.54.68    active  running
registry.service        110dea21.../100.73.38.124   active  running
registry.service        8abff2e7.../100.73.4.95     active  running
registry.service        b5815f25.../100.73.54.68    active  running

Run docker exec router cat /etc/nginx/conf.d/apps.conf to check what your rendered routing template looks like.

You can check that the app is running by curling the app address:

curl http://example-app.captainkaption.com     
<!DOCTYPE html><html><head><title>Fleet Starter!</title><link rel="stylesheet" href="/stylesheets/style.css"></head><body><h1>Fleet Starter!</h1><p>Welcome to Fleet Starter!</p><p>If you're seeing this than you have sucessfully deployed the coreos cluster with fleet</p><p>managing a scalable web application (this one!). To scale me, issue the command:</p><p>fleetctl start example-app@2 or fleetctl start example-app@2 or however many you want</p></body></html>

Stop an instance of the application (fleetctl stop example-app@1) and notice how the announcer sidekick service also stops. Again, use docker exec router cat /etc/nginx/conf.d/apps.conf to confirm that the server pool for example-app has shrunk to one instance.

$>docker exec router cat /etc/nginx/conf.d/apps.conf

    upstream example-app {
        server 100.73.4.95:3000;
    }

    server {
      listen 80;
      server_name example-app.captainkaption.com;

      #ssl on;
      #ssl_certificate /etc/ssl/certs/mycert.crt;
      #ssl_certificate_key /etc/ssl/private/mykey.key;
      #ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
      #ssl_ciphers         HIGH:!aNULL:!MD5;

      access_log /var/log/nginx-servicename-access.log;
      error_log /var/log/nginx-servicename-error.log;

      location / {
        proxy_pass http://example-app/;
        proxy_http_version 1.1;
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_404;
      }
    }

You can confirm the app is still running by curling the URL again:

# we're still running!
curl http://example-app.your_domain.com
<!DOCTYPE html><html><head><title>Fleet Starter!</title><link rel="stylesheet" href="/stylesheets/style.css"></head><body><h1>Fleet Starter!</h1><p>Welcome to Fleet Starter!</p><p>If you're seeing this than you have sucessfully deployed the coreos cluster with fleet</p><p>managing a scalable web application (this one!). To scale me, issue the command:</p><p>fleetctl start example-app@2 or fleetctl start example-app@2 or however many you want</p></body></html>

Run Your App

To run your own app, follow the same pattern as example-app: a template unit, a matching discovery sidekick, and a unique port. Just be sure to pick a port other than 3000.
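As a hedged sketch (my-app, port 4000, and the image name are all hypothetical placeholders), your app’s template unit would differ from example-app@.service only in the names and the port:

[Unit]
Description=My High-Availability Web App
After=router.service

[Service]
EnvironmentFile=/etc/environment
ExecStartPre=/usr/bin/docker pull localhost:5000/my-app:latest
ExecStart=/usr/bin/docker run --name my-app -p 4000:4000 localhost:5000/my-app:latest
ExecStop=/usr/bin/docker stop my-app
ExecStopPost=/usr/bin/docker kill my-app
ExecStopPost=/usr/bin/docker rm my-app
TimeoutStartSec=30m

[X-Fleet]
Conflicts=my-app@*.service

Remember to change the port in the matching my-app-discovery@.service announce value as well, so the router’s upstream points at 4000 instead of 3000.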
