Frequently Asked Questions

What is the Co-op Cloud?

Co-op Cloud aims to make hosting libre software apps simple for small service providers, such as tech co-operatives, who are looking to standardise around open, transparent and scalable infrastructure. It uses the latest container technologies, and configurations are shared into the commons for the benefit of all.

Who is behind the project?

The project was started by workers at Autonomic, a worker-owned co-operative that provides technologies and infrastructure to empower users to make a positive impact on the world. Numerous other like-minded co-ops have since joined our Federation and rely on Co-op Cloud in production.

Why Co-op Cloud?


  • ๐Ÿ‘ Thin "ease of use" layer on top of already standardised tooling.
  • ๐Ÿ‘ Extremely modular.
  • ๐Ÿ‘ Collective commons based configuration via public git repos.
  • ๐Ÿ‘ Focussed on hosting providers.
  • ๐Ÿ‘ Uses upstream published containers (no duplication on packaging).
  • ๐Ÿ‘ Now and always libre software.
  • ๐Ÿ‘ Command line focussed.
  • ๐Ÿ‘ Horizontal and vertical scaling.


  • 👎 Still a very young project.
  • 👎 Limited availability of well tested apps.
  • 👎 Requires command line knowledge to use.
  • 👎 Currently x86 only (see this FAQ question for more).

Why start another project?

We think our carefully chosen blend of technologies and our social approach is unique in today's technology landscape. Please read our initial project announcement post for more on this. Also see our strategy page.

How do I package (make a recipe for) an app?

Head on over to the Maintainers section and see "Package your first recipe" for more.

Which technologies are used?

The core technologies of Co-op Cloud are libre software and enjoy wide adoption across software developer communities.

Why containers?

We use containers because so many libre software communities choose to use them. They are already writing and using Dockerfiles and Docker Compose definitions for their development and production environments.

We can directly re-use this good packaging work and contribute back by helping maintain their in-repository files. We meet them where they are at and do not create a new packaging format or duplicate effort.
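For example, a typical upstream project might publish a development compose file along these lines (the file is entirely illustrative, not taken from any real project), which packagers can build on rather than re-describing the app in a new format:

```yaml
# compose.yml as an upstream project might ship it (illustrative)
services:
  app:
    build: .                 # built from the project's own Dockerfile
    ports:
      - "8080:8080"
    environment:
      - DB_HOST=db
  db:
    image: mariadb:10.11
```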

Co-op Cloud proposes more direct coordination between distribution methods (app packagers) and production methods (app developers).

Aren't containers horrible from a security perspective?

It depends, as with any other technology, on how it is deployed and on your understanding of security. Yes, we've watched that CCC talk.

It's on all of us in the libre software community to deliver secure software, and we think one of the promises of Co-op Cloud is better cooperation between developers of the software (who favour containers as a publishing format) and the packagers and hosters (who deliver the software to the end-user).

This means that we can patch our app containers directly in conversation with upstream app developers and work towards a culture of security around containers.

We definitely recommend using best-in-class security auditing tools like docker-bench-security, intrusion detection systems like OSSEC, security profiles like AppArmor, and hooking these into your existing monitoring, alerting and update maintenance flows.

It's up to you how you want to arrange your system. For example, Co-op Cloud also allows you to compartmentalise different apps onto different servers. You could stack a bunch of apps on one big server or you could deploy one app per server.
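On Docker Swarm, that kind of compartmentalisation can be expressed with placement constraints. A minimal sketch, where the node label, server name and image are all illustrative rather than part of any Co-op Cloud recipe:

```yaml
# Pin this app's tasks to servers you have labelled for it, e.g. via:
#   docker node update --label-add tier=sensitive my-dedicated-server
services:
  app:
    image: example/app:1.0   # hypothetical image
    deploy:
      placement:
        constraints:
          - node.labels.tier == sensitive
```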

These are organisational concerns that any software system raises and that Co-op Cloud can't solve for you. See this additional question for further information.

What is important to consider when running containers in production?

The Co-op Cloud uses containers as a fundamental building block. It is therefore important to be aware of some general principles for container management in production environments. These are typically things you will want to discuss within your co-op or democratic collective, to decide how to prioritise them and what processes to build up around them.

However, as the Co-op Cloud project is still very young, we're also still thinking about how we can make the platform itself mitigate problematic issues and make the maintenance of containers a more stable experience.

With that all in mind, here are some leading thoughts.

  • How do you install the Docker daemon itself on your systems, and how do you manage upgrades (system package, upstream Docker Inc. repository)?
  • How do you secure the Docker daemon from remote access (firewalls and system access controls)?
  • How do you secure the Docker daemon socket within the swarm (locking the socket down, using tools like a socket proxy)?
  • How do you trust the upstream container registry? There are content trust mechanisms, but it also seems useful to think about whether we need community registry infrastructure using tools like Harbor or Distribution; this involves a broader discussion with upstream communities.
  • How do you audit your container security as an on-going process (IDS, OSSEC, AppArmor, etc.)?
  • Can you run your containers with a non-root user setup?
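On the last point, a recipe can often request a non-root user directly in its compose file. A minimal sketch, where the image name and IDs are illustrative, and which only works if the image is built to run as an arbitrary user:

```yaml
services:
  app:
    image: example/app:1.0   # hypothetical image
    user: "1000:1000"        # run the process as a non-root UID:GID
    read_only: true          # mount the container filesystem read-only
    cap_drop:
      - ALL                  # drop all Linux capabilities
```

Note that swarm mode only honours `cap_drop`/`cap_add` on recent Docker versions (20.10 and later), so check what your deployment supports.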

And further reading on this topic:

Why use the Compose specification?

Every app packaged for the Co-op Cloud is described using a file format that follows the Compose specification. It is important to note that we do not use the Docker Compose tool itself to deploy apps in this format; instead, we rely on Docker Swarm.

The Compose specification is a useful open standard for specifying libre software app deployments in one file. It appears to be the most accessible format for describing apps, as can be seen in tools like Kompose, where the Compose format is used as the day-to-day developer workflow format and then translated into more complicated formats for deployment.

We are happy to see the Compose specification emerging as a new open standard, because it means we don't have to rely on Docker Inc. in the future; there will be more community tools available.
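To make the distinction concrete, here is a minimal sketch of a Compose-specification file and how it would be deployed with Swarm rather than with the Compose tool (the image and stack names are illustrative):

```yaml
# compose.yml -- deployed with:
#   docker stack deploy --compose-file compose.yml myapp
# (not `docker compose up`; the `deploy:` section below is only
#  honoured by Swarm and ignored by the Compose tool)
services:
  app:
    image: example/app:1.2.3   # hypothetical upstream-published image
    deploy:
      replicas: 2
```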

Why Docker Swarm?

While many have noted that "swarm is dead", it is in fact not dead (2020). As detailed in the architecture overview, Swarm offers an appropriate feature set that allows us to support zero-downtime upgrades, seamless app rollbacks, automatic deploy failure handling, scaling and hybrid cloud setups, while maintaining a decentralised design.
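Several of those features map directly onto the `deploy:` section of a Swarm service. A sketch of what zero-downtime upgrades and automatic rollback can look like (the image name and healthcheck endpoint are illustrative):

```yaml
services:
  app:
    image: example/app:1.2.3       # hypothetical image
    healthcheck:                   # Swarm waits for health before completing the update
      test: ["CMD", "curl", "-f", "http://localhost:8080/"]
      interval: 10s
    deploy:
      update_config:
        order: start-first         # start the new task before stopping the old one
        failure_action: rollback   # roll back automatically if the update fails
      rollback_config:
        parallelism: 1
```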

While the industry borders on a k8s obsession, scrambling to scale down a tool that was fundamentally built for massive scale, we are going with Swarm because it is the tool most suitable for small technology.

The Co-op Cloud community's forecast at the start of 2024 for the future of Docker Swarm is positive, five years after Mirantis's acquisition of Docker Enterprise in 2019. Since then, their strategy has developed towards using Docker Swarm as an intermediary step between Docker/Docker Compose and Kubernetes, where previously it seemed like their aim was to migrate all their customers' deployments to Kubernetes (Oct, 2022). Today Mirantis delivers enterprise-grade Swarm, either as a managed service or with enterprise support through Mirantis Kubernetes Engine.

There is reasonably healthy activity in their issue tracker under the label area/swarm. Additionally, we see it as reassuring that Mirantis has a growing number of pages relating to Docker Swarm:

Lastly, it's worth mentioning that much of the configuration involved in setting up Docker Swarm, particularly in preparing images and in managing the conceptual side, is transferable to other orchestration engines. We hope to see a container orchestrator that is not directly linked to a for-profit company emerge soon, but for now, this is what we have.

If you want to learn more, see this nice guide. See also this list of awesome-swarm resources by Bret Fisher.

What licensing model do you use?

The Co-op Cloud is and will always be available under copyleft licenses.

Isn't running everything in containers inefficient?

It is true that if you install 3 apps and each one requires a MySQL database, then you will have 3 installations of MySQL on your system, running in containers.

Systems like YunoHost mutualise every part of the system for maximum resource efficiency: if there is a MySQL instance available on the system, they just create a new database there and share the MySQL instance instead of running another one.

However, as we see it, this creates tight coupling between apps at the database level: running a migration on one app, during which you need to turn the database off, takes down the other apps.

It's a balance, of course. In this project, we think that running multiple databases and maintaining more strict app isolation is worth the hit in resource efficiency.
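In Swarm terms, each app is its own stack with its own database service, network and volume, so taking one stack down leaves the others untouched. A sketch, with all names illustrative:

```yaml
# Deployed once per app, e.g. `docker stack deploy -c compose.yml app1`.
# Each stack gets its own overlay network, so the `db` service here is
# only reachable from this app; stopping it cannot affect other apps.
services:
  app:
    image: example/app:1.2.3   # hypothetical image
    environment:
      - DB_HOST=db             # resolves inside this stack only
  db:
    image: mariadb:10.11
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data:
```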

Each app is easier to maintain and migrate going forward, and problems with an app typically have a smaller problem space: you know another app is not interfering with it, because there is no interdependency.

It can also pay off when dealing with GDPR-related issues and the need for stricter data-layer separation.

How do I keep up-to-date with Docker Swarm developments?

Can I run Co-op Cloud on multiple servers?

Yes! Horizontal scaling is one of the ways Co-op Cloud can really shine. abra is designed to handle multiple servers from day one. As long as you have a DNS entry pointing to your server, Co-op Cloud can serve apps from it (e.g. you can serve one app from one server and another app from a second server), and abra handles this seamlessly.

Why only x86 support?

We would love to do ARM support and hope to get there! We've been testing this and ran into some issues.

The TL;DR is that a lot of upstream libre app developer communities are not publishing container builds that support ARM. Where ARM builds do exist, there are typically subtle differences in the conventions used to build the image, as they are mostly made by community members and not directly taken on by the upstream project itself.

Since one of our core goals is to coordinate with and reuse upstream packaging work, we see that ARM support requires a lot of organising and community engagement. Perhaps projects themselves will not want to take on this burden? It is not the role of Co-op Cloud to set up an entire ARM publishing workflow at this moment in time.

We see the benefits of supporting ARM, and if you've got ideas / thoughts / approaches for how to make progress here, please get in touch.

Update: Can I run Co-op Cloud on ARM?

Why would an activist group use Co-op Cloud infrastructure over private cloud infrastructure (e.g. AWS, Azure, GCP)?

If your group is powerful enough to have generated opposition, it's not implausible that some law enforcement body may be trying to stymie your group's advances. To do this, law enforcement bodies may, and probably will, collaborate with Big Tech. Indeed, Big Tech has consistently shown a quick willingness to cooperate with law enforcement agencies (e.g. the Snowden-revealed NSA subpoenas, disallowing Signal from domain fronting, and other such incidents where Big Tech aided governments in hunting activists).

If your group has ambitions that generate enough fury in your opposition, you should think twice about where you store your data and whose services you rely on to store your data.

By using Co-op Cloud infrastructure over private cloud infrastructure, you create a few possibilities:

  • You may interact with a server provider that is more ethical than Big Tech. Although the server provider may still succumb to law enforcement, you might place more trust in some providers than in private cloud providers (e.g. AWS).

  • You may be able to situate your servers in locations that are relatively more impervious to law enforcement attempts to dismantle your infrastructure. Indeed, if you deployed your infrastructure in a relatively secure setting such as Switzerland, you would stand a better chance of keeping your infrastructure alive than if you deployed it in, say, the United States. Protonmail and Extinction Rebellion (XR) chose Switzerland for their servers for reasons along these lines.