Green Software Patterns

last updated: 2024-05-23

The Green Software Foundation started working on Green Software Patterns.

Since we are very much aligned with the goals of the Green Software Foundation, we want to bring some of those patterns, applied in the context of DownToZero.Cloud, to everybody's attention.

Artificial Intelligence (AI)

https://patterns.greensoftware.foundation/catalog/ai/

Since we are not offering any AI-related services, we skip this section of the catalog.

Cloud

https://patterns.greensoftware.foundation/catalog/cloud/

Cache static data

From an energy-efficiency perspective, it’s better to reduce network traffic by reading the data locally through a cache rather than accessing it remotely over the network.

We cache at many points while processing requests, e.g. our asynchronous job runtime locally caches Docker images so that a full pull is not needed on every invocation.

We also use image hashes for container images as much as possible to reduce real-time load while instantiating new containers.
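To illustrate why hash-keyed caching helps, here is a minimal sketch (illustrative only, not our actual runtime code) of a local image cache keyed by content digest; because the digest pins the exact content, a cached entry never needs re-validation against the registry:

```python
import hashlib

class ImageCache:
    """Hypothetical local cache: image content keyed by its digest."""

    def __init__(self):
        self._store = {}   # digest -> image bytes
        self.pulls = 0     # count of remote pulls, for illustration

    def _remote_pull(self, digest):
        # Placeholder for a real registry pull; here we fabricate bytes.
        self.pulls += 1
        return b"layers-for-" + digest.encode()

    def get(self, digest):
        # Serve from the local cache; fall back to a full remote pull.
        if digest not in self._store:
            self._store[digest] = self._remote_pull(digest)
        return self._store[digest]

digest = hashlib.sha256(b"my-image:1.0").hexdigest()
cache = ImageCache()
cache.get(digest)   # first access: remote pull over the network
cache.get(digest)   # second access: served locally, no traffic
```

The second `get` causes no network traffic at all, which is exactly the energy saving the pattern describes.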

Choose the region that is closest to users

From an energy-efficiency perspective, it’s better to shorten the distance a network packet travels so that less energy is required to transmit it. Similarly, from an embodied-carbon perspective, when a network packet traverses through less computing equipment, we are more efficient with hardware.

Since we are not multi-region and do not directly expose the regional layout of our infrastructure, this cannot be influenced by a user. Internally we try to optimise for regionality as much as possible without sacrificing durability and availability.

Compress stored data

Storing too much uncompressed data can result in bandwidth waste and increase the storage capacity requirements.

We are committed to storing only compressed data.

Both Objectstore and Container Registry support compression at the storage level. This is built into both services. The Observability service has an internal tiering mechanism that moves data from a hot storage tier to a colder, better compressed format after a fixed retention period.
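As a sketch of what storage-level compression amounts to (the function names here are illustrative, not the actual Objectstore API), data is compressed on write and decompressed transparently on read:

```python
import gzip

# Hypothetical storage wrapper: compress on put, decompress on get.

def put(store, key, data: bytes):
    store[key] = gzip.compress(data)

def get(store, key) -> bytes:
    return gzip.decompress(store[key])

store = {}
payload = b"repetitive log line\n" * 1000
put(store, "logs/today", payload)
assert get(store, "logs/today") == payload
# Highly repetitive data compresses very well:
assert len(store["logs/today"]) < len(payload)
```

Reads and writes behave exactly as before; only the bytes at rest shrink.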

Compress transmitted data

From an energy-efficiency perspective, it’s better to minimise the size of the data transmitted so that less energy is required because the network traffic is reduced.
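A common way to apply this is response compression with content negotiation: compress only when the client advertises gzip support and the payload is large enough for compression to pay off. The threshold below is an illustrative value, not a DTZ setting:

```python
import gzip

MIN_SIZE = 1024  # below this, compression overhead outweighs the savings

def encode_response(body: bytes, accept_encoding: str):
    """Return (body, extra_headers), compressing when it is worthwhile."""
    if "gzip" in accept_encoding and len(body) >= MIN_SIZE:
        return gzip.compress(body), {"Content-Encoding": "gzip"}
    return body, {}

big = b"<html>" + b"x" * 4096 + b"</html>"
compressed, headers = encode_response(big, "gzip, deflate")
plain, no_headers = encode_response(b"ok", "gzip")  # too small: sent as-is
```

Tiny responses are deliberately left uncompressed, since compressing them would burn CPU without reducing traffic meaningfully.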

Containerize your workloads

Containers allow resources to be used more flexibly, as workloads can be easily moved between machines. Containers allow for bin packing and require less compute resources than virtual machines, meaning a reduction in unnecessary resource allocation and an increase in utilization of the compute resources.

We only support containerized workloads. Being able to move workloads between locations is core to DTZ, enabling workloads to shift to more efficient execution environments on demand.

Delete unused storage resources

From an embodied carbon perspective, it’s better to delete unused storage resources so we are efficient with hardware and so that the storage layer is optimised for the task.

Our object store supports expiration on objects, so that every entity can be cleaned up once it is no longer used. Our container registry also runs configurable cleanup jobs to reduce the number of stored objects.
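Conceptually, object expiration looks like the sketch below: each object carries an expiry timestamp and a periodic sweep deletes whatever has lapsed. The store layout is hypothetical, not the actual object-store schema:

```python
import time

def put(store, key, data, ttl_seconds, now=None):
    """Store data together with its expiry timestamp."""
    now = time.time() if now is None else now
    store[key] = (data, now + ttl_seconds)

def sweep(store, now=None):
    """Remove every object whose expiry has passed; return deleted keys."""
    now = time.time() if now is None else now
    expired = [k for k, (_, exp) in store.items() if exp <= now]
    for k in expired:
        del store[k]
    return expired

store = {}
put(store, "tmp/report", b"...", ttl_seconds=60, now=0)
put(store, "keep/asset", b"...", ttl_seconds=3600, now=0)
deleted = sweep(store, now=120)   # two minutes later: tmp/report is gone
```

The sweep runs without any user action, which is what makes the cleanup reliable rather than best-effort.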

Encrypt what is necessary

Data protection through encryption is a crucial aspect of our security measures. However, the encryption process can be resource-intensive at multiple levels. Firstly, the amount of CPU required for encryption varies depending on the chosen algorithm, and more complex algorithms tend to demand higher computational power. Additionally, encryption can lead to increased storage requirements as it inflates the size of the data being stored because it typically contains additional metadata and padding, which is especially noticeable for smaller files. Furthermore, encryption is a repetitive task that needs to be performed each time data is fetched or updated. This repetitive nature can contribute to increased energy consumption, especially in high-throughput systems.

Evaluate other CPU architectures

Applications are built with a software architecture that best fits the business need they are serving. Cloud providers make it easy to evaluate other CPU types, such as x86-64, which can be included in the evaluation along with many cost-effective alternatives that feature good performance per watt.

For now we only support a single CPU architecture, x86. We currently do not have the resources to evaluate different architectures.

Use a service mesh only if needed

A service mesh deploys additional containers for communication, typically in a sidecar pattern, to provide more operational capabilities. This can result in an increase in CPU usage and network traffic but also allows you to decouple your application from these capabilities, moving them out from the application layer and down to the infrastructure layer.

Since our network does not rely on any kubernetes abstractions, we also have no use for a service mesh.

Terminate TLS at border gateway

Transport Layer Security (TLS) ensures that all data passed between the web server and web browsers remain private and encrypted. However, terminating and re-establishing TLS increases CPU usage and might be unnecessary in certain architectures.

Implement stateless design

Service state refers to the in-memory or on-disk data required by a service to function. State includes the data structures and member variables that the service reads and writes. Depending on how the service is architected, the state might also include files or other resources stored on the disk.

Our existing services, like Container Services and Container Jobs, build on stateless design and also bring those capabilities to our users.

Match your service level objectives to business needs

If service downtimes are acceptable it’s better to not strive for highest availability but to design the solution according to real business needs. Lower availability guarantees can help reduce energy consumption by using less infrastructure components.

Our SLO is to provide the best service possible without compromising on efficiency and sustainability. Our services therefore depend heavily on automatic scaling to always provide an appropriate service level at minimal overhead.

Match utilization requirements of virtual machines (VMs)

It’s better to have one VM running at a higher utilization than two running at low utilization rates, not only in terms of energy proportionality but also in terms of embodied carbon.

So far, we do not offer any VM-based services. All our infrastructure offerings are based on containers, to offer a more streamlined experience.

Match utilization requirements with pre-configured servers

It’s better to have one VM running at a higher utilization than two running at low utilization rates, not only in terms of energy proportionality but also in terms of embodied carbon.

We aim for a high utilization in our infrastructure, but those components are not exposed to the end user. The user only sees containers as abstraction. Since we are providing energy efficiency metrics, those can vary based on the utilisation of the underlying infrastructure.

Minimize the total number of deployed environments

In a given application, there may be a need to utilize multiple environments in the application workflow. Typically, a development environment is used for regular updates, while staging or testing environments are used to make sure there are no issues before code reaches a production environment where users may have access.

Optimise storage utilization

It’s better to maximise storage utilisation so the storage layer is optimised for the task, not only in terms of energy proportionality but also in terms of embodied carbon.

Optimize average CPU utilization

CPU usage and utilization varies throughout the day, sometimes wildly for different computational requirements. The larger the variance between the average and peak CPU utilization values, the more resources need to be provisioned in stand-by mode to absorb those spikes in traffic.

Optimize impact on customer devices and equipment

Applications run on customer hardware or are displayed on it. The hardware used by the customer has a carbon footprint embodied through the production and electricity required while running. Optimising your software design and architecture to extend the life of the customer devices reduces the carbon intensity of the application. Ideally the customer can use the hardware until it’s failure or until it becomes obsolete.

Optimize peak CPU utilization

CPU usage and utilization varies throughout the day, sometimes wildly for different computational requirements. The larger the variance between the average and peak CPU utilization values, the more resources need to be provisioned in stand-by mode to absorb those spikes in traffic.

Queue non-urgent processing requests

All systems have periods of peak and low load. From a hardware-efficiency perspective, we are more efficient with hardware if we minimise the impact of request spikes with an implementation that allows an even utilization of components. From an energy-efficiency perspective, we are more efficient with energy if we ensure that idle resources are kept to a minimum.

Our service Container Jobs is the manifestation of that goal as a service within DTZ.
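The underlying idea can be sketched in a few lines (illustrative only, not Container Jobs internals): requests are cheap to enqueue at spike time, and a worker drains them later at a steady rate when capacity is available:

```python
from queue import Queue

jobs = Queue()
results = []

def submit(job):
    # Cheap at request time: no burst of processing during the spike.
    jobs.put(job)

def drain():
    # Run by a background worker when capacity is available.
    while not jobs.empty():
        results.append(jobs.get() * 2)  # stand-in for real processing

for n in (1, 2, 3):
    submit(n)
drain()
```

Because processing is decoupled from arrival, the hardware can be sized for the average load rather than the peak.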

Reduce transmitted data

From an energy-efficiency perspective, it’s better to minimize the size of the data transmitted so that less energy is required because the network traffic is reduced.

Remove unused assets

Monitor and analyze the application and the cloud bill to identify resources that are no longer used or can be reduced.

Scale down kubernetes applications when not in use

In order to reduce carbon emissions and costs, Dev & Test Kubernetes clusters can turn off nodes (VMs) outside of office hours (e.g. at night and during weekends); the optimization is thereby implemented at the cluster level.

We are not offering any Kubernetes based services.

Scale down applications when not in use

Applications consume CPU even when they are not actively in use. For example, background timers, garbage collection, health checks, etc. Even when the application is shut down, the underlying hardware is consuming idle power. This can also happen with development and test applications or hardware in out-of-office hours.

As the name DownToZero states, scaling down is at the core of what we do, so we aim for all services to support scaling down.
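In its simplest form, scale-to-zero can be sketched as an idle reaper: a service records when it last handled a request and is stopped once it has been idle longer than a grace period. The timing values below are illustrative, not our actual configuration:

```python
IDLE_GRACE_SECONDS = 300  # illustrative grace period

class Service:
    """Hypothetical service that cold-starts on demand and reaps on idle."""

    def __init__(self):
        self.running = False
        self.last_request = None

    def handle(self, now: float):
        self.running = True        # cold start on demand
        self.last_request = now

    def reap_if_idle(self, now: float):
        if self.running and now - self.last_request > IDLE_GRACE_SECONDS:
            self.running = False   # scale down to zero

svc = Service()
svc.handle(now=0)
svc.reap_if_idle(now=100)    # still within the grace period: keeps running
svc.reap_if_idle(now=1000)   # idle too long: stopped
```

While stopped, the service consumes no compute at all; the cost of the next cold start is traded against the idle power saved.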

Scale infrastructure with user load

Demand for resources depends on user load at any given time. However, most applications run without taking this into consideration. As a result, resources are underused and inefficient.

Scale Kubernetes workloads based on relevant demand metrics

By default, Kubernetes scales workloads based on CPU and RAM utilization. In practice, however, it’s difficult to correlate your application’s demand drivers with CPU and RAM utilization.

We do not offer any Kubernetes services.

Scale logical components independently

A microservice architecture may reduce the amount of compute resources required as it allows each independent component to be scaled according to its own demand.

Scan for vulnerabilities

Many attacks on cloud infrastructure seek to misuse deployed resources, which leads to an unnecessary spike in usage and cost.

Set storage retention policies

From an embodied carbon perspective, it’s better to have an automated mechanism to delete unused storage resources so we are efficient with hardware and so that the storage layer is optimised for the task.

Shed lower priority traffic

When resources are constrained during high-traffic events or when carbon intensity is high, more carbon emissions will be generated from your system. Adding more resources to support increased traffic requirements introduces more embodied carbon and more demand for electricity. Continuing to handle all requests during high carbon intensity will increase overall emissions for your system. Shedding traffic that is lower priority during these scenarios will save on resources and carbon emissions. This approach requires an understanding of your traffic, including which call requests are critical and which can best withstand retry attempts and failures.
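A minimal sketch of priority-based shedding, with made-up request classes, priorities, and thresholds: as load rises, requests below a dynamic priority threshold are rejected early, before they consume compute:

```python
# Illustrative priority classes; higher number = more critical.
PRIORITIES = {"checkout": 3, "search": 2, "recommendations": 1}

def admit(request_type: str, load: float) -> bool:
    """Decide whether to serve a request given current load (0.0 - 1.0)."""
    if load < 0.7:
        threshold = 0   # plenty of headroom: serve everything
    elif load < 0.9:
        threshold = 2   # shed low-priority traffic first
    else:
        threshold = 3   # keep only the critical path
    return PRIORITIES.get(request_type, 0) >= threshold

admit("recommendations", load=0.5)  # admitted: plenty of headroom
admit("recommendations", load=0.8)  # shed: low priority under pressure
admit("checkout", load=0.95)        # admitted: critical path stays up
```

The key prerequisite, as the pattern notes, is knowing which request classes tolerate retries and failures.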

Time-shift Kubernetes cron jobs

The carbon emissions of a software system depends on the power consumed by that software, but also on the Carbon intensity of the electricity it is powered on. For this reason, running energy-efficient software on carbon intensive electricity grid, might be inefficient to reduce its global carbon emissions.

While we do not offer any Kubernetes services, our Container Jobs service offers the possibility of a flexible schedule model. See our docs for more info.
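The core of time-shifting can be sketched as follows: given a deadline and a forecast of grid carbon intensity (gCO2eq/kWh per hour), a flexible job runs in the cleanest hour that still meets the deadline. The forecast values below are made up for illustration:

```python
def cleanest_hour(forecast: dict, deadline_hour: int) -> int:
    """Pick the lowest-carbon hour that still meets the deadline."""
    candidates = {h: ci for h, ci in forecast.items() if h <= deadline_hour}
    return min(candidates, key=candidates.get)

# Illustrative forecast: midday solar pushes carbon intensity down.
forecast = {9: 450, 12: 210, 15: 180, 18: 390}
run_at = cleanest_hour(forecast, deadline_hour=16)  # picks hour 15
```

The same job run at 09:00 instead of 15:00 would emit well over twice the carbon for identical work, which is why schedule flexibility matters.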

Use Asynchronous network calls instead of synchronous

When making calls across process boundaries to either databases or file systems or REST APIs, relying on synchronous calls can cause the calling thread to become blocked, putting additional load on the CPU.
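To make the difference concrete, the sketch below runs three simulated I/O waits concurrently with `asyncio.gather`; sequentially they would take roughly three times as long, with a thread blocked the whole time:

```python
import asyncio

async def fetch(i):
    # Stand-in for a database query, file read, or REST call.
    await asyncio.sleep(0.05)
    return i * 10

async def main():
    # All three waits overlap; no thread sits blocked in the meantime.
    return await asyncio.gather(fetch(1), fetch(2), fetch(3))

results = asyncio.run(main())
```

Less time spent blocked means fewer threads and less CPU provisioned for the same throughput.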

Use circuit breaker patterns

Modern applications need to communicate with other applications on a regular basis. Since these other applications have their own deployment schedule, downtimes and availability, the network connection to these application might have problems. If the other application is not reachable, all network requests against this other application will fail and future network requests are futile.
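A minimal circuit breaker can be sketched like this (a generic illustration, not a specific library's API): after a number of consecutive failures the circuit "opens" and calls fail fast without touching the network, until a cool-down elapses and a trial call is allowed again:

```python
class CircuitOpenError(Exception):
    pass

class CircuitBreaker:
    """Fail fast after repeated failures instead of hammering a dead peer."""

    def __init__(self, max_failures=3, cooldown=30.0):
        self.max_failures = max_failures
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None  # timestamp when the circuit opened

    def call(self, func, now: float):
        if self.opened_at is not None:
            if now - self.opened_at < self.cooldown:
                raise CircuitOpenError("failing fast, circuit is open")
            self.opened_at = None  # cool-down over: allow a trial call
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = now
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

While the circuit is open, no network traffic and almost no CPU is spent on requests that would fail anyway.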

Use cloud native network security tools and controls

Network & web application firewalls provide protection against most common attacks and load shedding bad bots. These tools help to remove unnecessary data transmission and reduce the burden on the cloud infrastructure, while also using lower bandwidth and less infrastructure.

Use DDoS protection

Distributed denial of service (DDoS) attacks are used to increase the server load so that it is unable to respond to any legitimate requests. This is usually done to harm the owner of the service or hardware. Due to the nature of attack, a lot of environmental resources are used up by nonsensical requests.

Use cloud native processor VMs

Cloud virtual machines come with different capabilities based on different hardware processors. As such, using virtual machines based on the efficiency of their processors would impact hardware efficiency and reduce carbon emissions.

Use serverless cloud services

Serverless cloud services are services that the cloud provider manages for the application. These scale dynamically with the workload needed to fulfill the service task and apply best practices to keep resource usage minimal.

All offerings of DownToZero are serverless cloud services.

Web

https://patterns.greensoftware.foundation/catalog/web/

Since we are not offering any Web-related services, we skip this section of the catalog.