Hosters Handbook
Understanding ~/.abra
Co-op Cloud stores per-app configuration in the `~/.abra/servers` directory, on whichever machine you're running `abra` on (by default, your own workstation). In other words, app configurations are grouped under their relevant server directory, which matches the ordering in the output of `abra app ls`.
What format do the .env files use?
`.env` files use the same format as used by Docker (with the `env_file:` statement in a `docker-compose.yml` file, or the `--env-file` option to `docker run`) and `direnv`. There is no `export ...=...` required, since `abra` takes care of threading the values into the recipe configuration at deploy time.
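For illustration, a typical app `.env` file looks something like the following (the variable names are examples only; the exact set depends on the recipe):
TYPE=wordpress
DOMAIN=blog.example.com
LETS_ENCRYPT_ENV=production
SECRET_DB_PASSWORD_VERSION=v1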
`abra` doesn't mind if `~/.abra/servers`, or any of its subdirectories, is a symlink, so you can keep your app definitions wherever you like!
mv ~/.abra/servers/ ~/coop-cloud
ln -s ~/coop-cloud ~/.abra/servers
You don't need to worry about `~/.abra/{vendor,catalogue,recipes,autocompletion}`; `abra` manages those automagically.
Backing up ~/.abra
Just make sure the `~/.abra/servers` directory is included in the configuration of your favourite backup tool. Because `~/.abra/servers` is a collection of plain-text files, it's easy to keep your backup configuration in a version control system (we use `git`; others would almost certainly work).
This is particularly recommended if you're collaborating with others, so that you can all run `abra app ...` commands without having to maintain your own separate, probably-conflicting, configuration files.
In the simple case where you only have one server configured with `abra`, or everyone in your team is using the same set of servers, you can version-control the whole `~/.abra/servers` directory:
cd ~/.abra/servers
git init
git add .
git commit -m "Initial import"
Test your revision-control self-discipline
`abra` does not yet help keep your `~/.abra/servers` configuration up-to-date! Make sure to run `git add` / `git commit` after making configuration changes, and `cd ~/.abra/servers && git pull` before running `abra app ...` commands. Patches to add some safety checks and auto-updates would be very welcome! 🙏
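Until then, a small wrapper script can help enforce the pull-before-use habit. This is purely illustrative; the script name `abra-sync` and the single-repository layout are assumptions:
#!/bin/sh
# abra-sync: pull the latest app configuration, then run abra as usual.
set -e
git -C ~/.abra/servers pull --ff-only
exec abra "$@"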
Sharing ~/.abra
In a more complex situation, where you're using Co-op Cloud to manage several servers, and you're collaborating with different people on different servers, you can set up a separate repository for each subdirectory in `~/.abra/servers`, or even a mixture of single-server and multi-server repositories:
ls -l ~/.abra/servers
# Example.com's own app configuration:
swarm.example.com -> /home/user/Example/coop-cloud-apps/swarm.example.com
# Configuration for one of Example.com's clients – part of the same repository:
swarm.client.com -> /home/user/Example/coop-cloud-apps/swarm.client.com
# A completely separate project, part of a different repository:
swarm.demonstration.com -> /home/user/Demonstration/coop-cloud-apps
To make setting up these symlinks easier, you might want to include a simple installer script in your configuration repositories.
Save this as `Makefile` in your repository:
# -s symlink, -f force creation, -F don't create symlink in the target dir
default:
	@mkdir -p ~/.abra/servers/
	@for SERVER in $$(find -maxdepth 1 -type d -name "[!.]*"); do \
		echo ln -sfF "$$(pwd)/$${SERVER#./}" ~/.abra/servers/ ; \
		ln -sfF "$$(pwd)/$${SERVER#./}" ~/.abra/servers/ ; \
	done
This will set up symlinks from each directory in your repository to a correspondingly-named directory in `~/.abra/servers` – if your repository has a `swarm.example.com` directory, it'll be linked as `~/.abra/servers/swarm.example.com`.
Then, tell your collaborators (e.g. in the repository's README.md) to run `make` in their repository check-out.
You're on your own!
As with the simple repository set-up above, `abra` doesn't yet help you update your version control system when you make changes, nor check version control to make sure you have the latest configuration. Make sure to `commit` and `push` after you make any configuration changes, and `pull` before running any `abra app ...` commands.
Even more granularity?
The plain-text, file-based configuration format means that you could even keep the configuration for different apps on the same server in different repositories, e.g. having the `git.example.com` configuration in a separate repository to `wordpress.example.com`, using per-file symlinks.
We don't currently recommend this, because it might set inaccurate expectations about the security model – remember that, by default, any user who can deploy apps to a Docker Swarm can manage any app in that swarm.
Migrating a server into a repository
Even if you've got your existing server configs in version control, by default `abra server add` will define the server locally. To move it into your repository -- taking the example of `newserver.example.com`:
mv ~/.abra/servers/newserver.example.com ~/coop-cloud-apps/
cd ~/coop-cloud-apps
git add newserver.example.com
git commit
make
Running abra server side
If you're in an environment where it's hard to run Docker, or command-line programs in general, you might want to install `abra` on a server instead of your local workstation.
To install `abra` on the same server where you'll be hosting your apps, just follow the getting started guide as normal, except for one difference: instead of providing your SSH connection details when you run `abra server add ...`, just pass `--local`.
abra server add --local
Technical details
This will tell `abra` to look at the Docker system running on the server, instead of a remote one (using Docker's internal `default` context). Once this is wired up, `abra` knows that the deployment target is the local server and not a remote one. This is handled seamlessly for all other deployments on this server.
Make sure to back up your `~/.abra` directory on the server, or put it in version control, as well as any other files you'd like to keep safe.
Managing secret data
Co-op Cloud uses Docker Secrets to handle sensitive data, like database passwords and API keys, securely.
`abra` includes several commands to make it easier to manage secrets:
- `abra app secret generate <domain>`: auto-generate app secrets
- `abra app secret insert <domain>`: insert a single secret
- `abra app secret rm <domain>`: remove secrets
Secret versions
Docker secrets are immutable, which means that their values can't be changed after they're set. To accommodate this, Co-op Cloud uses the established convention of "secret versions": every time you change (rotate) a secret, you insert it as a new version. Because secret versions are managed per-instance by the people deploying their apps, they are stored in the `.env` file for each app:
find -L ~/.abra/servers/ -name '*.env' -print0 | xargs -0 grep -h SECRET
OIDC_CLIENT_SECRET_VERSION=v1
RPC_SECRET_VERSION=v1
CLIENT_SECRET_VERSION=v1
...
If you try and add a secret version which already exists, Docker will helpfully complain:
abra app secret insert mywordpress.com db_password v1 foobar
Error response from daemon: rpc error: code = AlreadyExists desc = secret mywordpress_com_db_password_v1 already exists
By default, new app instances will look for `v1` secrets.
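To see how these versions tie the `.env` values to concrete Docker secret names, recipes typically declare their secrets along the following lines. This is a sketch of the convention; the exact secret and variable names depend on the recipe:
# compose.yml (excerpt): the secret name is built from the stack name
# plus the version set in the app's .env file
secrets:
  db_password:
    external: true
    name: ${STACK_NAME}_db_password_${SECRET_DB_PASSWORD_VERSION}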
Generating secrets automatically
You can generate secrets in one of two ways (see the example after this list):
- While running `abra app new <recipe>`, by passing `-S/--secrets`
- At any point once an app instance is defined, by running `abra app secret generate <domain> ...` (see `abra app secret generate -h` for more)
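For example, to generate all missing secrets for an existing app (the domain here is illustrative):
abra app secret generate mywordpress.com --all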
Inserting secrets manually
For third-party API tokens, like OAuth client secrets, or keys for services like Mailgun, you will be storing values you already have as appropriately-named Docker secrets. `abra` provides a convenient interface to the underlying `docker secret create` command:
abra app secret insert <domain> db_password v2 "your-secret-value-here"
Rotating a secret
So, given how secret versions work, here's how you change a secret (a full walkthrough follows the list):
- Find out the current version number of the secret, e.g. by running `abra app config <domain>`, and choose a new one. Let's assume it's currently `v1`, so by convention the new secret will be `v2`
- Generate or insert the new secret: `abra app secret generate <domain> db_password v2` or `abra app secret insert <domain> db_password v2 "foobar"`
- Edit the app configuration to change which secret version the app will use: `abra app config <domain>`
- Re-deploy the app with the new secret version: `abra app deploy <domain>`
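Put together, rotating a database password might look like this. The domain, secret name and values are examples only:
# 1. check the current version, e.g. SECRET_DB_PASSWORD_VERSION=v1
abra app config mywordpress.com
# 2. insert (or generate) the new version
abra app secret insert mywordpress.com db_password v2 "new-password-here"
# 3. bump the corresponding *_VERSION variable to v2, then re-deploy
abra app config mywordpress.com
abra app deploy mywordpress.com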
Storing secrets in pass
The Co-op Cloud authors use the UNIX `pass` tool to share sensitive data, including Co-op Cloud secrets, and `abra app secret ...` commands include a `--pass` option to automatically manage generated / inserted secrets:
# Store generated secrets in `pass`:
abra app new wordpress --secrets --pass
abra app secret generate mywordpress.com --all --pass
# Store inserted secret in `pass`:
abra app secret insert mywordpress.com db_password v2 --pass
# Remove secrets from Docker, and `pass`:
abra app secret rm mywordpress.com --all --pass
This functionality currently relies on our specific `pass` storage conventions; patches to make that configurable are very welcome!
Networking
So dark the con of Docker Networking
Our understanding of Docker networking is probably wrong. We're working on it. Plz send halp
Traefik networking
Traefik is our core web proxy: all traffic on a Co-op Cloud deployment goes through a running Traefik container. When setting up a new Co-op Cloud deployment, `abra` creates a "global" overlay network which Traefik is hooked up to. This is the network that other apps use to speak to Traefik and get traffic routed to them. Not every service in every app is included in this network, and those that aren't are therefore not internet-facing; instead they sit on a per-app network which, by convention, we name `internal` (see more below).
App networking
By convention, the main `app` service is wired up to the "global" Traefik overlay network. This container is the one that should be publicly reachable on the internet. The other services in the app, such as the database and caches, should not be publicly reachable or visible to other apps on the same instance.
To deal with this, we make an additional "internal" network for each app, namespaced to that app. So, if you deploy a WordPress instance called `my_wordpress_blog`, there will be a network called `my_wordpress_blog_internal` created. This allows all the services in an app to speak to each other without being reachable on the public internet.
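You can see both kinds of network on the server with standard Docker tooling:
# list overlay networks: you should see the shared "proxy" network
# plus one "<app>_internal" network per deployed app
docker network ls --filter driver=overlay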
Multiple apps on the same domain?
At the time of writing (Jan 2022), we think there is a limitation in our design which doesn't support multiple apps sharing the same domain (e.g. `example.com/app1/` & `example.com/app2/`). `abra` treats each domain as unique and as the single reference for a single app.
This may be possible to overcome if someone really needs it; we encourage people to investigate. We've often found that there are limitations in the actual software which prevent this anyway, and several of the current operators simply use a new domain per app.
How do I bootstrap a server for running Co-op Cloud apps?
The requirements are:
- Docker installed
- User in Docker user group
- Swarm mode initialised
- Proxy network created
You may need to log in/out
When running `usermod ...`, you may need to (depending on your system) log in and out of your shell session again to get the required permissions for Docker.
# docker install convenience script
wget -O- https://get.docker.com | bash
# add user to docker group
usermod -aG docker $USER
# setup swarm
docker swarm init
docker network create -d overlay proxy
# on debian machines as of 2023-02-17
apt install apparmor
systemctl restart docker containerd
How do I persist container logs after they go away?
This is a big topic, but in general, if you're looking for something quick & easy, you can use the journald logging driver. This will hook the container logs into systemd, which can handle persistent log collection & manage log file size.
You need to add the following to your `/etc/docker/daemon.json` file on the server:
{
"log-driver": "journald",
"log-opts": {
"labels":"com.docker.swarm.service.name"
}
}
And for log size management, edit `/etc/systemd/journald.conf`:
[Journal]
Storage=persistent
SystemMaxUse=5G
MaxFileSec=1month
Then restart `docker` & `journald`:
systemctl restart docker
systemctl restart systemd-journald
Now when you use `docker service logs` or `abra app logs`, it will read from the systemd journald logger seamlessly! If you're doing more fine-grained log investigation, some useful `journalctl` commands are as follows:
journalctl -f
journalctl CONTAINER_NAME=my_git_com_app.1.jxn9r85el63pdz42ykjnmh792 -f
journalctl COM_DOCKER_SWARM_SERVICE_NAME=my_git_com_app --since="2020-09-18 13:00:00" --until="2020-09-18 13:01:00"
journalctl CONTAINER_ID=$(docker ps -qf name=my_git_com_app) -f
Also, for more system wide analysis stuff:
journalctl --disk-usage
du -sh /var/log/journal/*
man journalctl / man systemd-journald / man journald.conf
How do I specify a custom user/port for SSH connections with abra?
`abra` uses plain ol' SSH under the hood, aims to make use of your existing SSH configuration in `~/.ssh/config`, and interfaces with your running `ssh-agent` for password-protected secret key files.
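For example, an entry like the following in `~/.ssh/config` will be picked up automatically (the hostname, user, port and key path are illustrative):
Host swarm.example.com
    User myuser
    Port 2222
    IdentityFile ~/.ssh/id_ed25519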
The `server add` command listed above assumes that you make SSH connections on port 22 using your current username. If that is not the case, pass the new values as positional arguments. See `abra server add -h` for more on this.
abra server add <domain> <user> <port> -p
Running `server add` with `-d/--debug` should help you debug what is going on under the hood. It's best to take a moment to read this troubleshooting entry if you're running into SSH connection issues with `abra`.
How do I attach to a running container?
If you need to run a command within a running container, you can use `abra app run <domain> <service> <command>`. For example, you could run `abra app run cloud.lumbung.space app bash` to open a new bash terminal session inside your remote container.
How do I attach to a non-running container?
If you need to run a command in a container that won't start (e.g. the container is stuck in a restart loop), you can temporarily disable its default entrypoint by setting it in the recipe's `compose.yml` to something like `['tail', '-f', '/dev/null']` (see the sketch below), then redeploy the stack (with `--force --chaos` so you don't need to commit), get into the now-running container, do your business, and when done revert the `compose.yml` change and redeploy again.
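As a sketch, the temporary override in the recipe's `compose.yml` might look like this (the service name `app` is the usual convention, but check the recipe):
services:
  app:
    # temporarily neutralise the entrypoint so the container stays up
    entrypoint: ['tail', '-f', '/dev/null']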
Can I run Co-op Cloud on ARM?
@Mayel:
FYI I've been running on ARM for a while with no troubles (as long as images used support it of course, `abra` doesn't work yet!) 😀 ... in cases where I couldn't find a multiarch image I simply have e.g. `image: ${DB_DOCKER_IMAGE}` in the docker-compose and set that to a compatible image in the env config ... there was really nothing to it, apart from making sure to use multiarch or arm images
See #312 for more.
How do I backup/restore my app?
If your app supports backup/restore, then you have two options: backup-bot-two & `abra`.
With `abra`, you can simply run the commands:
$ abra app backup <domain>
$ abra app restore <domain>
Pass `-h` for more information on the specific flags & arguments.
If your app's recipe does not support backups, you can do it manually with the `abra app cp` command (a sketch follows). See the exact commands in the `abra` cheatsheet.
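For instance, copying an app's data directory out of its `app` service might look something like this. The domain, paths and service name are illustrative; check the recipe for where the data actually lives:
abra app cp cloud.example.com app:/var/www/html ./html-backup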
How do I take a manual database backup?
MySQL / MariaDB:
abra app run foo.bar.com db mysqldump -u root <database> | gzip > ~/.abra/backups/foo.bar.com_db_`date +%F`.sql.gz
Postgres:
abra app run foo.bar.com db pg_dump -U root <database> | gzip > ~/.abra/backups/foo.bar.com_db_`date +%F`.sql.gz
If you get errors about database access:
- Make sure you've specified the right database user (`root` above) and db name
- If you have a database password set, you might need to load it from a secret, something like this:
abra app run foo.bar.com db bash -c 'mysqldump -u root -p"$(cat /run/secrets/db_root_password)" <database>' | gzip > ~/.abra/backups/foo.bar.com_db_`date +%F`.sql.gz
Can I deploy a recipe without abra?
Yes! It's a design goal to keep the recipes independent of `abra` or any single tool that we develop. This means that the configuration commons can still be useful beyond this project. You can deploy a recipe with standard commands like so:
set -a
source example.com.env
cd ~/.abra/recipes/myrecipe
docker stack deploy -c compose.yml example_com
`abra` makes all of this more convenient.
Proxying apps outside of Co-op Cloud with Traefik?
It's possible! It's actually always been possible, but we just didn't have spoons to investigate. Co-op Cloud can co-exist on the same server as bare-metal apps, non-swarm containers (plain `docker-compose up` deployments!), Nginx installs, etc. It's a bit gnarly with the networking, but doable.
Enable the following in your Traefik `$domain.env` configuration:
FILE_PROVIDER_DIRECTORY_ENABLED=1
You must also have host mode networking enabled for Traefik:
COMPOSE_FILE="$COMPOSE_FILE:compose.host.yml"
And re-deploy your `traefik` app. You now have full control over the file provider configuration of Traefik. This also means you lose the defaults of the `file-provider.yml.tmpl`, so this is a more involved approach.
The main change is that there is now a `/etc/traefik/file-providers` volume being watched by Traefik for provider configurations. You can re-enable the recipe defaults by copying the original over to the volume (this assumes you've already deployed `traefik` without `FILE_PROVIDER_DIRECTORY_ENABLED`, which is required for the following command):
abra app run $your-traefik app \
cp /etc/traefik/file-provider.yml /etc/traefik/file-providers/
You don't need to re-deploy Traefik; it should automatically pick this up.
You can route requests to a bare-metal / non-Docker service by creating a `/etc/traefik/file-providers/$YOUR-SERVICE.yml` and putting something like this in it:
http:
  routers:
    myservice:
      rule: "Host(`my-service.example.com`)"
      service: "myservice"
      entryPoints:
        - web-secure
      tls:
        certResolver: production
  services:
    myservice:
      loadBalancer:
        servers:
          - url: "http://$YOUR-HOST-IP:8080/"
Where you should replace all instances of `myservice`.
You must use your host-level IP address (replace `$YOUR-HOST-IP` in the example). With host mode networking, your deployment can route out of the swarm to the host.
If you're running a firewall (e.g. UFW), it will likely block traffic from the swarm to the host. You can typically add a specific UFW rule to allow traffic from the swarm network (typically, your `docker_gwbridge`) to the specific port of your bare-metal / non-Docker app:
docker network inspect docker_gwbridge --format='{{( index .IPAM.Config 0).Gateway}}'
172.18.0.1
ufw allow from 172.18.0.0/16 proto tcp to any port $YOUR-APP-PORT
Notice that we turn `172.18.0.1` into `172.18.0.0/16`. It's advised to open the firewall on a port-by-port basis to avoid expanding your attack surface.
Traefik should handle the usual automagic HTTPS certificate generation and route requests afterwards. You're free to make as many `$whatever.yml` files as you like in your `/etc/traefik/file-providers` directory. It should Just Work ™
Please note that we have to hardcode `production` and `web-secure`, which are typically configurable when not using `FILE_PROVIDER_DIRECTORY_ENABLED`.
Can I use Caddy instead of Traefik?
Yes, it's possible, although currently Quite Experimental! See #388 for more.
Running an offline coop-cloud server
You may want to run a coop-cloud server directly on your device (or in a VM or machine on your LAN), whether that's for testing a recipe or to run coop-cloud apps outside of the cloud ;-)
In that case you might simply add some names to `/etc/hosts` (e.g. `127.0.0.1 myapp.localhost`), or configure them on a local DNS server, which means `traefik` won't be able to use `letsencrypt` to generate and verify SSL certificates. Here's what you can do instead:
1. In your traefik .env file, edit/uncomment the following lines:
LETS_ENCRYPT_ENV=staging
WILDCARDS_ENABLED=1
SECRET_WILDCARD_CERT_VERSION=v1
SECRET_WILDCARD_KEY_VERSION=v1
COMPOSE_FILE="$COMPOSE_FILE:compose.wildcard.yml"
2. Generate a self-signed certificate and key for your local domain, e.g. `localhost.crt` and `localhost.key` (see the example command after this list). If your domain isn't `localhost`, you may want to edit that where it appears in the command, and/or add multiple (sub)domains to the certificate, e.g. `subjectAltName=DNS:localhost,DNS:myapp.localhost`
3. Run these commands:
abra app secret insert localhost ssl_cert v1 "$(cat localhost.crt)"
abra app secret insert localhost ssl_key v1 "$(cat localhost.key)"
4. Re-deploy `traefik` with `--force` and voila!
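For step 2, one way to generate such a certificate is with OpenSSL (version 1.1.1 or newer for `-addext`); the domains and filenames below are just examples:
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout localhost.key -out localhost.crt \
  -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost,DNS:myapp.localhost"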
Remote recipes
This is only available in the currently unreleased version of abra
Please see this issue to track current progress towards a release. All feedback and testing are welcome on this new feature. The design is not finalised yet.
It is possible to specify a remote recipe in your `.env` file:
RECIPE=mygit.org/myorg/cool-recipe.git:1.3.12
Where `1.3.12` is an optional pinned version. When `abra` runs a deployment, it will fetch the remote recipe and create a directory for it under `$ABRA_DIR` (typically `~/.abra`):
$ABRA_DIR/recipes/mygit_org_myorg_cool-recipe