
For many companies, the shortage of DevOps engineers is a recurring problem that doesn't seem to go away. This is hardly surprising considering the importance of DevOps in bridging the gap between development and operations, especially in light of the current shift towards hybrid and remote work models. 

 

According to a 2021 DevOps Institute report, 60% of organizations are either currently hiring DevOps professionals or intend to do so soon. Having professionals with DevOps skills on your team leads to faster release velocity, better software quality, and improved collaboration. 

 

With DevOps ranking as the second most in-demand tech skill in 2022, organizations have to explore other approaches to overcome these workforce shortages. Automating your DevOps processes is one way to do this.

 

DevOps Automation: A Better Way to Handle Workforce Shortages

Various solutions have been proposed to help organizations handle their DevOps skill shortages. Some development teams prefer to upskill their internal developers to handle this role; however, they have to contend with training time and the unpredictable nature of employee loyalty. 

 

After all, employees are leaving their roles now more than ever, often faster than companies can replace them. Other companies prefer to hire remote workers from other locations to fill these roles. They must then deal with the communication barriers, compliance regulations, and legal ramifications that come with hiring from other regions. 

 

 

While these measures can ease some of the concerns around workforce shortages, DevOps automation combats these challenges directly. With it, companies can automate repetitive infrastructure tasks, which subsequently reduces the demand for DevOps engineers.

 

Automation in DevOps increases operational efficiency, reduces the possibility of human error, and enables faster software development cycles. In DevOps, using an Infrastructure as a Service (IaaS) or Platform as a Service (PaaS) solution to automate DevOps tasks leads to faster feedback loops and iterative updates between development and operations teams.

 

The Key DevOps Tasks to Automate 

 

Automate Security Operations

Security is a vital part of any operation. In many traditional development practices, security has been an afterthought, considered only at the tail end of development. Often, this is a source of frustration. 

 

Imagine discovering errors too late in the cycle, which subsequently affects the software launch date. And that's just one part of it: 43% of respondents in the 2021 State of DevSecOps report struggle to understand vulnerability findings. 

 

The problem is made even worse when you consider that large organizations have 100 developers for every security professional. What can companies do to resolve these kinds of frustrations? DevSecOps. 

 

Rather than making security the responsibility of a dedicated team, DevSecOps is a practice that incorporates security into software development from the start. 

 

To enable this, companies can automate security operations to deal with both foreseeable and unforeseen errors. They can use tools to automate security operations, detect threats, and take immediate action. Having automated security tooling in place reduces vulnerability risks and improves application quality and reliability.

 

Automate Continuous Delivery

One of the core principles of DevOps involves the continuous integration and continuous delivery (CI/CD) of software. Put simply, it’s the seamless and automated transition from coding to PR submission to testing, and then production. 

 

By automating deployment procedures, companies can accelerate the speed of software delivery, enabling frequent release cycles, while detecting bugs sooner and gathering faster feedback. Companies can use continuous delivery tools to ensure the secure and fast delivery of software to production environments.
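As an illustration, here is a minimal sketch of such a pipeline expressed as a GitHub Actions workflow. The app name, test command, and secret names are placeholders, and the CLI install and authentication details are assumptions that may differ for your setup:

```yaml
# Minimal CI/CD sketch: run tests, then deploy on every push to main.
# App name, test command, and secret names are placeholders.
name: ci-cd
on:
  push:
    branches: [main]
jobs:
  test-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run unit tests
        run: make test   # replace with your project's test command
      - name: Install the Convox CLI
        run: |
          sudo curl -L https://convox.com/cli/linux/convox -o /usr/local/bin/convox
          sudo chmod +x /usr/local/bin/convox
      - name: Deploy to production
        run: convox deploy -a my-app
        env:
          CONVOX_HOST: ${{ secrets.CONVOX_HOST }}         # assumed credential names
          CONVOX_PASSWORD: ${{ secrets.CONVOX_PASSWORD }} # assumed credential names
```

If the tests fail, the deploy step never runs, which is exactly the kind of gate that keeps bad commits out of production.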

 

Automate Continuous Integration and Testing

Before deploying software to production, companies need to ensure the continuous integration of the source code into the project repo. This process involves regularly building and testing code to ensure there are no problems.

 

With QA or unit tests, companies can ensure the software adheres to the predefined configurations before new commits are merged into the main branch. Test automation is crucial in cutting down repetitive tasks and the demand for manual testers. 

 

Various DevOps tools offer continuous integration features that can help reduce the burden and overreliance on a DevOps team.

 

Automate Monitoring and Logging

Once you deploy software to production, it’s necessary to monitor and log the performance of the application and its underlying infrastructure. Monitoring software performance helps companies get ahead of any failures or bugs. 

 

Companies can monitor a range of metrics, including workload, resource consumption, system error rates, and network performance. 

 

By doing this, they can avoid issues such as inefficient resource usage and uncover the reasons behind unplanned expenses. It can also be useful for debugging or auditing the software. Automated monitoring tools offer comprehensive reports on performance, graphical dashboards, and notifications.

 

How to Improve Operational Efficiency With DevOps Automation Tools

By using DevOps tools, companies can save time, cost, and resources by automating mundane tasks. Here’s a useful framework for improving efficiency with an automation tool:

 

Identify Team Dynamics and Define Roles and Responsibilities

Understanding how your team contributes to the software deployment process is critical for maximizing the effectiveness of your DevOps. Each team member must have clear and defined responsibilities and access.

 

Do you have enough skilled people for DevOps? Can your team deploy applications using automation scripts? A company’s automation journey should begin by outlining the team dynamics, identifying areas for training, and clearly defining responsibilities for everyone.

 

Assess the DevOps Tools and Document the Entire Process

When you're automating many DevOps processes, it can be daunting to keep track of all the configuration management processes and operations. Moreover, there are several automation tools available on the market. 

 

By clearly documenting all processes, you can identify points at which a particular tool isn't performing as expected. When that happens, a well-documented log makes it easier for you to make a decision.

 

Understand the Software Development Process

For every aspect of the DevOps lifecycle to be covered, the tools you use must account for the full development process. In other words, any software change requiring a process change needs to be integrated into the DevOps pipeline.

 

To ensure this is done efficiently, engineers should implement robust change management tools for faster adoption and greater control of the implementation process. Some of the common tasks during this process include creating, reviewing, planning, and testing change requests. 

 

Gather Feedback and Log the Automation Tool Performance

Optimization is an essential component in any infrastructure. By logging and monitoring the performance of the automation tools, you can track and identify areas for improvement. You can monitor the CI/CD pipeline for any vulnerability or shortcomings, gathering data on your production environment and infrastructure operations.

 

You can also gather relevant end-user data by building a robust feedback system across your operations and production environments so you are informed of any potential issues.

 

Best Practices for Automating DevOps Processes

DevOps is a culture that encompasses the entire software development team. It’s not a specific framework with a list of standard rules to follow, so there will be variations depending on the organization, their DevOps practices, and the automation tools they use.

 

The following are key DevOps practices to consider when getting started:

 

  • Focus on customer needs
  • Develop a culture of interdependence and collaboration among team members
  • Embrace agile development methodology for faster and more effective results
  • Track and log available KPIs
  • Use a DevOps tool like Convox
 

Ensure Efficiency With DevOps Automation

With the current workforce shortages affecting companies worldwide, automation can help bridge the gap and reduce the need for a large number of new hires. 

 

Let’s take Codelitt as an example. With Convox’s infrastructure, Codelitt was able to effectively reduce its DevOps engineer headcount to just one. And this DevOps engineer only needs to spend a few hours a month on the system’s maintenance or other manual tasks.

 

 

By automating DevOps operations for CI/CD, testing, monitoring, and security, companies can ensure faster delivery, receive more feedback, detect more bugs, and deliver reliable high-quality software.

Hello Convox Community 

As we near the end of September, we continue to put in a lot of effort to keep improving. This month, we have concentrated our efforts on expanding our test coverage to ensure that our platform is reliable and that new releases won’t negatively impact your business. 

In recent news, we have had customers contacting us regarding notifications from AWS about various Node versions. We are aware of the issue and are investigating where ancillary old Node versions are running. We will have an official fix/update out before the deprecation. 

Convox v2 Racks: AWS RDS Issue 

Convox utilizes AWS RDS for resource database operations. AWS RDS uses the `encrypted: [true/false]` parameter (defined in the convox.yml) to control encryption of a given database. The Convox default value for this parameter has been an empty value, which was interpreted as "false," resulting in the resource being created unencrypted. 

Recently, AWS changed the functionality of this parameter to no longer accept empty values. 

If you attempt to create a new resource or update an existing resource without having this option set, you will encounter the following error: 

CREATE_FAILED Instance Properties validation failed for resource Instance with message: 
The following resource(s) failed to create: [Instance]. 

If you are attempting to create a new resource, please update to the latest rack release (20220915193714) or explicitly set this option to "true" or "false" in the convox.yml file. 
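For example, a minimal resource definition with the option set explicitly might look like the following sketch (the resource name and type are placeholders):

```yaml
# Hypothetical convox.yml snippet; the resource name and type are placeholders.
resources:
  database:
    type: postgres
    options:
      encrypted: true   # set explicitly to true or false
```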

If you have an existing resource, setting this parameter will cause CloudFormation to replace the resource, which is not desired in most cases. 

If you’re having issues with this, don’t worry. We have a working method to resolve the problem and are available to assist you. Just open a support ticket via the console and we’ll assist you right away. We are also preparing a detailed, illustrated, step-by-step guide to help you fix the issue.  

We have also observed that this problem doesn't occur 100% of the time when attempting to update an affected resource. We have not yet been able to identify what causes CloudFormation to try to update the instance, which is what triggers the error. 

To help identify affected resources, we have created a script you can run that will flag any of your current resources that are affected. If any are, please open a support ticket in the console or contact us at support@convox.com and we'll be happy to help. 

Finally, the issue with creating new resources has been fixed in our most recent release: https://github.com/convox/rack/releases/tag/20220915193714 

This update will explicitly set `encrypted=false` as default for new app resources. 

Latest Releases 

To keep you updated on the latest releases, here is a summary of our recent work: 

Version 2 (RSS or GitHub) 

Article: Availability vs Cost  

With Convox, we think we’ve covered the ‘Ease of Use’ corner pretty well (one command automatic rolling deploys, fast rollbacks, 12-factor apps by default, simple but powerful abstractions of the underlying platforms), so your sweet spot now boils down to a simple evaluation of Availability vs. Cost. Find out more by clicking here. 

Blog: What is a DevOps strategy roadmap and how do I create one? 

In this month’s blog, we discuss how creating a strategic DevOps roadmap is a major key to success in your organizational process and overall DevOps journey. If you want to find out more click here. 

Like many guiding principles, operations can be represented by a triangle with three fundamental components: Ease of Use, Availability, and Cost. You can't advocate for one without sacrificing an aspect of the others…

… and it’s up to you to find the sweet spot that works best for you and your particular use case.

With Convox, we think we’ve covered the ‘Ease of Use’ corner pretty well (one command automatic rolling deploys, fast rollbacks, 12-factor apps by default, simple but powerful abstractions of the underlying platforms), so your sweet spot now boils down to a simple evaluation of Availability vs. Cost.

Up until now, we have erred on the side of providing highly available environments for your apps and services. Our environments are designed for production use out of the gate and ensure continued uptime in the face of the worst cloud outages.

Feedback from our customers, users, and the community suggests that while our environment provisioning is appreciated, there are situations where a lower-cost environment would be preferred to a high level of availability. Development Racks, staging environments, and temporary and throw-away set-ups do not necessarily need the same uptime guarantee level.

We've previously discussed how choosing a different cloud provider can generate cost savings in this situation. However, if this isn't an option for you, or even if it is and you're looking to save even more, we have now released a cheaper Rack installation option!

This option allows you to install a Convox Rack with the minimal set of Cloud resources required, saving you a lot in ongoing operational costs. Internally, we refer to this as a non-HA (Highly Available) Rack due to its exposure to a broader range of cloud outages, but in reality, we would still expect this Rack to cope in most situations.

The existing ‘Highly Available’ Rack setup remains the default option and our recommended choice for production and other critical environments. However, the new alternative setup is ideal for new development and similar setups where cost reduction may be more important.

What difference does it make?

Let’s take a look at how they stack up:

For a standard, highly available v3 (k8s-based) Rack in AWS, Convox will automatically provision for you:

  • 1 VPC

  • 1 EKS cluster

  • 6 Subnets

  • 5 Route tables

  • 1 Internet gateway

  • 3 NAT gateways

  • 1 Network ACL

  • 3 Elastic IPs

  • 1 Network Load Balancer

  • 3 Security groups

  • 3 EC2 instances

  • 3 EBS volumes

  • 1 S3 bucket

By default, we use t3.small instances with two vCPUs and 2GB of memory. As of this writing, the cost of running this cluster in us-east-1 is roughly $236.54 per month or $7.78 daily.

By using a non-HA Rack, we can reduce the resources required to:

  • 1 VPC

  • 1 EKS cluster

  • 2 Subnets

  • 3 Route tables

  • 1 Internet gateway

  • 1 NAT gateway

  • 1 Network ACL

  • 1 Elastic IP

  • 1 Network Load Balancer

  • 1 Security Group

  • 1 EC2 instance

  • 1 EBS volume

  • 1 S3 bucket

This reduces the cost to approximately $138.47 per month, or $4.55 per day – almost $100 a month saved on a single Rack!

More importantly, this Rack is still scalable and has all the features of the standard Rack platform. Further cost savings can be made by running a non-HA Rack in Digital Ocean or any of our other supported cloud providers if you wish, as we have also enabled non-HA options across the board for v2 Racks running on ECS.

How to use non-HA mode

You can get started today with a new non-HA Rack by specifying the appropriate Rack parameter on installation.

For v2 (ECS-based) Racks, you can use the Convox CLI:

convox rack install aws HighAvailability=false --name development

For v3 (k8s-based) Racks, you can run:

convox rack install {cloud_provider} development high_availability=false …

Or, you can quickly and easily install your Racks from the Convox Console into your Cloud account, setting the appropriate parameters from the UI.

Due to limitations in the cloud providers, we can’t currently support switching an existing Rack between HA and non-HA modes.

We always welcome feedback from our users, so please let us know how you get on with the new non-HA Racks and if there are other things you would like to see us provide or areas we can improve upon!

Keep an eye out for further cost-reducing measures coming to Convox very soon!

Try It Out!

Table of Contents

  • Introduction
  • What is DevOps?
  • Why is the DevOps Roadmap important? 
  • 8 Steps to Create a DevOps Roadmap
  • Key Takeaways

Introduction

A DevOps strategy roadmap is a document that visually summarizes the resources and techniques needed to accomplish the DevOps goals of your company. Organizations can treat this document as a blueprint for implementing DevOps best practices. 

With a robust DevOps roadmap in place, software development and operations teams collaborate seamlessly and stay on the same page. This shared roadmap also facilitates the rapid development of secure, portable, and scalable products. Done right, a well-defined roadmap can help you successfully implement a DevOps strategy that drives faster product releases and better customer experiences.

In this post, we will dive deeper into what precisely a DevOps strategy roadmap is – and how you can successfully create one.

What is DevOps?

Before we begin, let us first understand what DevOps is. DevOps embodies a culture of seamless collaboration between software development teams and operations teams. Software developers handle software coding, and the operations team handles software deployments and releases.

Traditionally, these teams were disassociated from each other, working in isolation within siloed environments. Software code would be forwarded to the operations team for release management. Bugs in the code were only discovered quite late in the development process. This negatively impacted the user experience and necessitated the DevOps culture's new approach.

DevOps culture is based on agile methodology. It facilitates seamless collaboration between development and operations right from the beginning – making it easier to iterate the code on the go based on the input provided by the Ops team.

How does the agile approach benefit teams? For starters, ops teams test small sections of new code to detect potential issues. This facilitates continuous delivery with the faster release of new features or upgrades and fewer bugs further down the line. It also makes way for automation and enhances the quality of the products, which then delivers a much better customer experience. 

Introducing a DevOps strategy helps you significantly streamline team workflows, manage resources better and reduce overhead costs, especially with the help of a roadmap. Furthermore, it also enables you to eliminate unnecessary tasks from the software development lifecycle (SDLC).

 

Why is a DevOps strategy roadmap important? 

Now that we’ve covered what DevOps is, let’s discuss why a DevOps strategy roadmap is essential. Here are a few ways that a well-planned DevOps strategy roadmap can benefit your organization:

  • Increases visibility into your processes on a granular level

Cross-functional DevOps teams get an integrated 360-degree view of the company’s objectives, workflows, and product development plans. Such a strategic blueprint acts as a unified reference for all stakeholders throughout the SDLC.

Especially for DevOps teams, a roadmap helps with high-level monitoring of the IT operations team and engineers and keeps their work in sync. You can think of it as a detailed guide relevant to both development and Ops teams.

  • Aligns interdependencies

A DevOps implementation roadmap helps you convey and align the interdependencies and priorities within workflows that both teams are currently working on.

Engineers and the Ops team come together to prioritize and organize their work based on dependencies and deadlines. Such work coordination is what streamlines the entire development process for an enterprise, thereby speeding up the new releases with fewer errors.

  • Drives seamless iterations and continuous improvements

With an agile approach driving a DevOps culture, engineers and the operations team can work closely to implement continuous improvement. This sets the stage for incremental improvements that drive regular code upgrades, new features, faster deployment, and a better customer experience. A DevOps strategy roadmap is an ideal space to establish priorities for both teams so that each party understands what to expect from the other.

By the same token, each team can rely on the roadmap for performance insights of past initiatives to see what was successful and what was not.

Creating a DevOps strategy roadmap: 8 steps to consider

  1. Define your objectives

The first step to creating a DevOps roadmap is defining your objectives or overall purpose. What do you want to accomplish with this strategy roadmap? Some great examples of these objectives can be:

  • To create a unified space for all DevOps-related tasks
  • To promote seamless collaboration between the development team and operation team members
  • To create an efficient software development process with well-planned development and release workflows 
  • To improve the customer experience through faster product releases
  • To build a knowledge repository including success and failure stories to aid future DevOps planning
  • To implement best practices for continuous delivery
  2. Set short-term goals

Plan your DevOps strategy roadmap (as well as your product roadmap) 3 to 4 months ahead to help you stay focused. Using a short time horizon enables you to add a range of initiatives to the roadmap and prioritize them for the development and operations teams. It also helps you eliminate clutter that leaves teams confused or unfocused. 

  3. Prioritize automation

Automation should be your first port of call when creating DevOps strategy roadmaps. As any DevOps transformation process most likely features a significant number of processes and handovers, there’s no such thing as too much automation. Automation tools such as Convox help to reduce friction and inefficiency in the development processes of successful teams.

  4. Use visual cues

A DevOps strategy roadmap comprising only comprehensive text, documents, and spreadsheets might limit your team’s ability to visualize and understand the initiative. To provide your teams with a crystal-clear view of their initiatives, use visual cues that convey important details to your team just at a glance. This presentation style will also help with better planning, prioritizing, and organization.

Leverage color-coded labels, containers, bars, tags, epics, legends, and other aesthetic elements to make your roadmap more compelling. Doing this will help all stakeholders quickly understand individual roles, processes, and what’s happening within the SDLC.

  5. Share the DevOps implementation roadmap with all stakeholders

The core objective of a DevOps roadmap is to help both teams stay in sync, focused, and updated about each other’s initiatives. For this to work, however, both teams need to have open access to a single source of truth. 

Allow all stakeholders easy access to your DevOps roadmap at all times. This will help both the teams improve collaboration and specialized efficiencies while also meeting deadlines. Development teams and operations teams will understand what initiatives they need to prioritize based on the dependencies of each other.

  6. Keep the DevOps roadmap up-to-date

Your DevOps strategy roadmap must align with the current objectives of your organization. This is possible when the roadmap is updated regularly to reflect the current dynamics of your organization. 

The DevOps strategic roadmap must be in sync with the latest planning, strategies, resources, budget, and objectives of your organization. As your goals change, you need to upgrade your DevOps roadmap as well to prevent your team from wasting their efforts and time on an outdated plan.

  7. Check the efficiency of your roadmap

Use automation tools to implement workflows and improve efficiency within your process. Then, regularly review the workflows within your DevOps roadmap to ensure that all parties stay on the right path. One effective way to check whether your roadmap is working as it should is to evaluate whether or not it helps both teams stay in sync. Also, evaluate whether or not it helps to prioritize and strategically organize high-level initiatives.

  8. Integrate your DevOps roadmap with other apps

Keeping your DevOps roadmap up-to-date is essential, but so is informing all the critical stakeholders about the updates. If any team member misses a new update within the DevOps workflow, it may compromise their ability to complete strategic tasks. 

To avoid this, integrate your DevOps roadmap with productivity tools such as Slack to automatically notify your team of important updates or information. 
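As a simple illustration, a roadmap update can be pushed to a Slack channel through an incoming webhook; the webhook URL and message text below are placeholders:

```bash
# Post a roadmap update to a Slack channel via an incoming webhook.
# The webhook URL and message text are placeholders.
curl -X POST \
  -H 'Content-type: application/json' \
  --data '{"text":"DevOps roadmap updated: Q3 initiatives have been reprioritized."}' \
  https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
```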

Final word

Creating a strategic DevOps roadmap is a major key to success in your organizational process and overall DevOps journey.

Here are some of the key takeaways to note about the DevOps roadmap:

  • A DevOps roadmap will help you stay on the right track by promoting seamless collaboration between development and operations teams.
  • Creating a well-planned, visual DevOps roadmap needs tactical implementation that demands your time and effort. It must include your high-level objectives, vision, initiatives, strategies, and planning for the DevOps team. These need to be in line with your organization’s current picture.
  • Use automation tools such as Convox PaaS to improve operational efficiency.
  • To make your DevOps roadmap more intuitive, integrate it with other communication tools such as Slack to get instant notifications and updates.
  • Convox offers a powerful automation platform to smoothen your DevOps journey. It offers easy integrations with Slack and other monitoring tools to help your teams stay up-to-date at all times.

Automation tools improve the flow and productivity of business processes; however, they can be pricey and inflexible. The cost of using Zapier for business process automation can run into thousands of dollars, yet it does not support self-hosting or workflow exports/imports, which ultimately limits control.

To circumvent these limitations, businesses of all sizes opt for cost-effective, self-hosted Zapier alternatives like n8n. n8n is an open-source, simple-to-use, and feature-rich workflow automation tool that supports over 220 integrations. Its cloud version costs between $20 and $120 per month.

Even better, the self-hosted and desktop version of n8n is completely free. If you're wondering how to self-host n8n on Convox, read on to learn more about n8n, why Convox is the best platform for self-hosting n8n, and how to host it step by step.

What is n8n?

n8n is a free self-hosted automation tool that can be added to your cloud infrastructure to power your business process management. You can run it as a workflow engine on Convox or any similar platform to create a powerful workflow management system.

Thanks to the efficiency of the "fair-code work automation platform," the project received seed funding of $1.5 million within its first year of business. The self-hosted workflow management software is great for marketing and sales automation. n8n is the best choice if you want:

  • Cost savings: The self-hosted version is free, while the cloud-based alternative is priced competitively.
  • Simplicity of use: The cloud tool is really simple to use, and the self-hosted version is also easy to use once installed.
  • Versatility: It allows 220+ integrations focusing on process diversity, so you can get more things done with fewer integrations enabled.
  • Flexibility: You can design/program nodes from scratch using JavaScript.

Automation Workflows in n8n

A workflow in n8n is like a directed flow chart connecting various nodes to perform a task. Its starting point is always a trigger node that initiates the process, automating the workflow. However, there can be more than one trigger node for added flexibility.

You can think of the nodes in an automation workflow as apps/integrations with previous and next actions specified. See a quick example of how n8n automation works below.

Lead Email Validation in Mautic CRM

    1. A Contact is identified in Mautic. Is it a new contact?
    2. If it’s not a new contact, do nothing. Otherwise, use the ‘Item Lists’ integration to extract the information of the new contact.
    3. Use the ‘One Simple API’ integration to check whether the email is suspicious or not.

Source: n8n.io

Why Automate Workflows?

The simple answer is: because you want higher productivity with fewer resources.

Automating your workflows will speed up your processes, reduce operations costs, and save you a lot of money. Automation also removes human intervention from business processes and improves the quality of work.

Automation can reduce your teams’ workload, transforming them into more efficient, organized, and productive professionals.

What can n8n help you automate?

Right, you’re probably wondering how automation can help you.

Let’s explain.

While n8n requires more technical knowledge and relies on custom rather than in-built scripts for some workflows, it still matches the capabilities of the top tools in its domain, like Zapier.

First, have a look at various integrations it works with:

  • Core Nodes: Compression, image editing, error triggering, item list fetching, date, time, encryption, HTML extraction, merging, workflow execution, and so on.
  • Marketing and Content: Mailchimp, SendInBlue, Strapi, SendGrid, SurveyMonkey, MS Dynamics CRM, Google Slides, Iterable, Facebook, Bannerbear, and so on.
  • Communication and Sales: AWS SNS, ClickUp, ConvertKit, Discord, Freshdesk, Discourse, Freshworks, Hubspot, Intercom, LinkedIn, Mailchimp, Mandrill, MS Teams, MS Outlook, PayPal, and so on.
  • Data and Storage: Airtable, AWS S3, Dropbox, Google Drive, Google Sheets, MS Excel, OneDrive, Supabase, Odoo, Postgres, and so on.
  • Finance and Accounting: InvoiceNinja, PayPal, Stripe, CoinGecko, Xero, Wise, and so on.
  • Utility and Productivity: Asana, Bitbucket, Calendly, Flow, Freshservice, Google Calendar, Jira, Monday.com, One Simple API, Notion, and so on.
  • Development and Analytics: GitHub, Google Analytics, AWS SQS, CircleCI, Bubble, DHL, MongoDB, MQTT, Orbit, Twilio, and so on.
  • Misc.: BambooHR, Google Docs, Onfleet, Spotify, Workable, and so on.

Now that you know about integrations that work with n8n, you can assess your existing workflows to identify areas where you can combine and automate business operations. For example:

  • Enable PayPal invoice creation when an invoice is created in Invoice Ninja and the client has a valid PayPal ID.
  • Add new orders in Shopify to your CRM.
  • Send email addresses of your Mailchimp subscribers to an Excel sheet.
  • Send order confirmation emails to customers for your FB Campaigns.
  • Send tasks to your employees in Slack every day.

You can also create more complex workflows with any number of nodes as your business demands.

n8n vs. Zapier 

What better way to appreciate the benefits of n8n than to view it side by side with one of the most popular automation tools? Let's compare and contrast n8n and Zapier in the table below:

| Zapier | n8n |
| --- | --- |
| Supports 170+ integrations. | Has more diverse integrations available (220+ at present). |
| Cannot be self-hosted; costs up to $599. | Free when self-hosted; the cloud version costs up to $120. |
| Doesn't support custom integrations (e.g., of your own scripts). | Allows custom integrations. |
| Easy to set up. | Complex installation process when self-hosted. |
| Low-code platform that automates workflows, so execution takes the least time; however, the controls are not very customizable. | Custom (not in-built) scripts and integrations automate various workflows; tougher than Zapier, but more customizable. |

How to self-host n8n on Convox

To get started, sign up with Convox, then follow these steps next:

Step 1:

You should have a v2 (ECS) Rack up and running in your Convox account. Follow these steps until the "Install a Rack" step is completed.

Note: before the "Install a Rack" step, make sure the Rack is being created on the ECS (Version 2) engine.

Step 2:

In the CLI, switch to the new Rack with the command: 

convox switch [Rack Name]

In this example:

convox switch staging

Step 3:

(This step can also be completed earlier.)

Get the n8n example repository from GitHub.

Step 4:

Edit the convox.yml file to define environment variables for n8n:

The convox.yml in the example repository lists the environment variables required for functional user management and invite features via SMTP.

If you would prefer to manage users manually, or simply not to set up SMTP, you should use:

N8N_USER_MANAGEMENT_DISABLED=true

With user management disabled, a minimal configuration could look like the following sketch (the service name, build context, and port are assumptions based on n8n defaults):
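```yaml
# Hypothetical convox.yml sketch; the service name, build context,
# and port 5678 (n8n's default) are assumptions.
services:
  web:
    build: .
    port: 5678
    environment:
      - N8N_USER_MANAGEMENT_DISABLED=true
```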

For a full list of the available n8n environment variables, including their type, default value, and description, visit: https://docs.n8n.io/hosting/environment-variables/

Step 5:

Once the desired variables are set, return to the CLI and execute the command:

convox apps create n8n

Once created, deploy the app with the command:

convox deploy

Deployment may take a few minutes and will complete with a final message: OK 

Step 6:

Now, to show the DNS entry for your running app, execute the command:

convox services

Copy and paste the DOMAIN into your web browser to view your new self-hosted n8n instance.

The account created during the initial setup will be the OWNER. 

Additional users can be invited via SMTP or manually configured in the settings menu.

Now, you can start using your self-hosted n8n as you prefer.

Why Convox?

n8n is a more cost-effective and customizable automation solution when self-hosted. For self-hosting, we suggest using Convox for many reasons:

  • Developer-friendly: Its K8s-based deployment allows you to customize your n8n installation and workflows easily.
  • Cloud-ready: It is production-ready and supports DigitalOcean, AWS, Google Cloud, and Microsoft Azure, so it can fulfill your cloud requirements for n8n very easily.
  • Scalability: Convox supports auto-scaling and lets your n8n deployment grow without hassle.
  • Cost-convenience: Convox's cost will amaze you when compared with other options available.

Final Word

n8n is a powerful workflow automation tool that every business can use to work more efficiently. Self-hosting it on an efficient infrastructure platform like Convox is one of the best ways to explore the full potential of n8n's capabilities. With this tutorial, you have a step-by-step practical guide showing you how to do so.

You've pored over the job descriptions. You've checked out the salaries. Your mind is made up: you've decided to train as a DevOps engineer. 

And you have good reason to choose this path. DevOps engineers have been in high demand since enterprises like Adobe and Amazon began shifting to the cloud and adopting DevOps strategies. Research conducted by Puppet showed that most DevOps engineers earned more in 2021 than they did in 2020 and 2019. According to the 2019 Tech Salary Report, "DevOps Engineer" ranks in the top five of all tech salaries, with an average pay of $111,683.

But this is probably not news to you if you're ready to get on track to becoming a DevOps engineer. The good news is that you can spend less time on this journey if you have a map to guide you. Maps make things easier. They show you where you started from and where you're going. The journey to becoming a successful DevOps engineer should begin with a starting point, a destination, and a clear guide showing how to get there. For example, a roadmap could show that you start as a release manager, work towards becoming a DevOps test engineer, then a DevOps cloud engineer, and finally, a DevOps architect.

So what realistic roadmap can prospective DevOps engineers follow? Before we look at that, we have to understand some basics.

First, what does a DevOps engineer do?

IT operations and development teams come with diverse skills and objectives. Developers create and add new features to applications, while operations teams deploy and release the applications. DevOps is a blend of the two: 'development' and 'operations.' DevOps engineers are integral to the development process, from planning to execution and maintenance, as they help bridge the gap between these activities.

On an operational level, a DevOps engineer is an IT professional who oversees and/or facilitates code releases or deployments alongside software developers, system operators, admins, IT operations employees, and others. 

Because DevOps is all about integrating and automating processes, DevOps engineers play a crucial role in bringing code, application maintenance, and application administration together. These activities demand a solid understanding of development life cycles as well as DevOps culture, philosophy, techniques, and tools.

How to Become a DevOps Engineer?

To become a DevOps engineer, you must have the requisite knowledge and experience to work with various teams and technologies. The objective is to learn the skills, put them into practice, and build a portfolio that impresses employers and wins team members' trust. All of it starts with a DevOps career roadmap. Before we delve in, let's address some popular self-limiting myths:

Popular Self-limiting Myth #1: I need a Bachelor’s Degree to become a DevOps engineer

It's true that many job descriptions for DevOps engineers require a bachelor's degree in computer science or a related discipline. However, most employers will take work experience over a bachelor's degree.

Popular Self-limiting Myth #2: I need to Have a Certificate to become a DevOps engineer

Do you need to take a course or gain some qualifications to become a DevOps engineer? 

While every individual’s journey is different, the short answer is no. You do not strictly need certification from a particular governing body if you can demonstrate that you have the relevant skills to do the job. 

To assess your proficiency, certain employers might request certificates in disciplines such as Linux administration and SQL server programming. In this case, your DevOps career roadmap will include obtaining certification in these areas to fulfill those requirements. 

Laying a solid foundation for your DevOps career

One foundational requirement for DevOps engineers is proficiency in various software technologies and programming languages. This is important because of the nature of DevOps itself. Earlier in this piece, we discussed that DevOps involves integrating and automating processes. This means that rather than being about full proficiency in one tool or language, there is a need for the DevOps engineer to have a working knowledge of the different tools and processes they are expected to integrate and automate. Prospective DevOps engineers can gain this working knowledge by working in junior roles in IT, system administration, or software development.

A good place to begin is with a role as a system administrator, support, or help desk representative. The experience of maintaining software is a good starting point for the DevOps career roadmap. Now, let’s break down the different steps you need to go from zero to a successful DevOps engineer.

Your DevOps Career Roadmap in 8 Steps


  1. Learn a Programming Language: A DevOps engineer should understand the programming languages their team uses in order to comprehend existing code, review new code, and debug. You should have a working understanding of programming languages to implement a successful Continuous Integration/Continuous Delivery (CI/CD) strategy. Popular programming languages that can help you get your foot in the door include Python, Perl, and Ruby. When selecting a programming language, consider important factors such as scalability, efficiency, and modularity.

  2. Study Linux & Operating Systems: Operating systems (OSs) run the local machines teams use to communicate and execute jobs. They also manage the servers that host the team's deployed apps. Linux comes highly recommended because many businesses use it for their applications. While you don't need to master Linux (or any other OS), a general understanding of major OS principles such as process management, I/O management, threads and concurrency, and memory management can help you advance your DevOps career.

  3. Examine Networking & Security: DevOps engineers must understand networking basics in order to manage IT operations. To prevent malicious actors from accessing sensitive information or entering your application, you must also understand the risks associated with these transfer methods and how to safeguard them. A DevOps expert must also be mindful of the security of the organization's broader environment at all stages, including development, testing, and deployment. To be a competent DevOps engineer, you must have a working knowledge of basic networking protocols and concepts such as DNS, the OSI model, HTTP, HTTPS, FTP, SSL, and TLS.

  4. Be Familiar with Infrastructure as Code: Infrastructure as Code (IaC) is a method for automating the provisioning of the infrastructure needed to deploy your application. Template files are used to configure and manage networks, servers, and other infrastructure to create an environment that meets your application's particular requirements. As a DevOps engineer, you should be familiar with container technologies such as Docker and Kubernetes, as well as configuration management tools such as Ansible, Chef, Salt, and Puppet. Terraform and CloudFormation are examples of infrastructure provisioning tools.

  5. Know About CI/CD Tools: Continuous Integration is the practice of developers regularly merging code into the main branch of a common repository. It helps to cut costs, increase productivity, and so on. Continuous Delivery, on the other hand, automates the delivery of that validated code to a repository following the Continuous Integration process. Software releases become more efficient and easier to implement with Continuous Delivery. CI/CD tools automate processes and handoffs to free up team members and support different stages of the pipeline. CI/CD tools to learn include GitHub, GitLab, Bamboo, Jenkins, CircleCI, and others.

  6. Carry Out Application and Infrastructure Monitoring: Once in production, it is important to monitor software to check performance and identify any issues with your infrastructure and application.

  7. Find Out About Cloud Providers: The majority of modern programs are hosted in the cloud. As a DevOps engineer, you must be familiar with cloud technology, its advantages, requirements, services, providers, and the packages that apply to your company. Some of the most well-known and widely used cloud service providers are AWS, Azure, Google Cloud, and DigitalOcean.

  8. Learn Cloud Design Patterns: Cloud design patterns are used to create scalable, dependable, and secure cloud applications. Each pattern describes the issue being addressed as well as the reasoning required to apply the pattern. Common cloud design patterns include CQRS, Event Sourcing, the Anti-corruption Layer, and others.

Tools that Make the DevOps Engineers’ Job Easier

The DevOps career roadmap is incomplete without a list of tools that can improve collaboration, efficiency, and speed.

DevOps tools are crucial for all phases of the DevOps workflow, and many tools integrate with others for a seamless working experience.  

Below, we list some industry-standard tools from well-known companies. Alternatives exist; however, if you're seeking DevOps experts with specific software skills, this list is a great place to start:

Automation Tools

DevOps engineers can use these to modify and automate the delivery pipeline. These include Convox, Jenkins, etc.

Source Code Management

You can use source code management tools to track the progress of development projects, view version control, and create version 'branches,' which can subsequently be merged as needed once you've decided on the final product's appearance. Git is the prime example.

Repository Hosting

A code repository is required for source code management software to incorporate into your DevOps operations. GitHub and Bitbucket are very popular Git repository hosts. 

Containerization Software

Containerization is the practice of packaging an application, together with all of its relevant configuration files and libraries, into a self-contained operating environment so it can run on several physical machines without dependency conflicts. Tools that help with this include Docker and Kubernetes.

Monitoring Software

Monitoring software helps your DevOps team keep an eye on your infrastructure for any problems so that you can resolve them quickly and efficiently, e.g., Nagios, Raygun.

Useful Resources for DevOps Engineers

If you want to learn how to become a DevOps engineer or keep up with current DevOps trends, it is vital to follow a set DevOps career roadmap and seek out resources that will help your learning. The knowledge and expertise shared by current practitioners in blogs, podcasts, and whitepapers will help you understand how DevOps engineers fit into organizational roles. 

Here’s a list of some of the finest DevOps courses and technologies for implementing automation in your application development and delivery process. These courses are a wonderful place to start if you want to become a DevOps engineer this year.

Key Takeaways

  • The demand for DevOps engineers is growing. There’s no better time than now to train to become a DevOps engineer as more firms adopt DevOps practices.
  • DevOps is all about bringing processes together and automating them, and DevOps engineers play a crucial role in bringing code, application maintenance, and application administration together.
  • To become a DevOps Engineer, you must have the requisite knowledge and experience working with various teams and technologies. Using the correct DevOps tools for the job is also an essential aspect of any DevOps approach, and these tools can be used in all phases of the DevOps workflow.
  • If you want to learn how to become a DevOps engineer or keep up with current DevOps trends, it is vital to have plenty of resources that will aid you in learning – DevOps courses, industry whitepapers, DevOps groups on social media, DevOps podcasts, etc.
  • DevOps has become an increasingly popular way to develop and deploy software. Many organizations are now looking for ways to apply DevOps principles to their own development processes. One tool that can help with this is Convox.
  • Convox is an open-source platform that enables you to deploy and manage applications using DevOps principles. It offers a simple, scalable way to automate application deployments. Convox also provides several features that make it easy to work with, such as a web console, CLI tool, and REST API.

Cloud providers such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and DigitalOcean (DO) are an integral part of every developer's practice. They offer virtual servers that reside and run in a cloud computing environment. 

By hosting or deploying your apps in the cloud, you can ensure better scalability, faster performance, auto-updates and integrations, and unlimited storage capacity, all at a competitive price. Many top enterprises have migrated to the cloud to reduce costs and complexity, and you should too.

PHP is one of the most popular programming languages among software engineers and developers, with over 22% of respondents using it in the past year, according to a 2021 survey on Statista.

As a developer, knowing how to deploy your PHP applications across multiple cloud services such as AWS, GCP, and DigitalOcean is important. 

This article will guide you through hosting PHP applications in your own DigitalOcean account.

How To Host PHP in Your DigitalOcean (DO) Account

DigitalOcean provides Virtual Private Servers (VPS), which are called Droplets. At their core, Droplets run on DigitalOcean hardware as Linux virtual machines (VMs).

 

In the following section, you’ll find out how you can host a PHP website on DigitalOcean Server.

Step 1: DigitalOcean Registration

Registration at DigitalOcean is straightforward. It only asks for your name, email, and password. 

Alternatively, you can register with your Google account.

Step 2: Configuring the Droplet

As soon as you log in to your DigitalOcean account, you'll land on the homepage, the "Control Panel." Our next step is to create our first DigitalOcean Droplet.

To do that, click the "Create" button in the top left corner, then click "Droplets" (create cloud servers). 

Screenshot showing configure droplet page on Digital Ocean

From the Marketplace, select the LAMP image on Ubuntu 18.04 or the latest available version. This way, DigitalOcean will deploy a virtual machine with the latest versions of Linux, Apache, MySQL, and PHP installed.

Screenshot showing create droplet page on Digital Ocean

Next, you need to choose a plan. Pick the plan size you need or, for demo purposes, select the "Standard" plan, then select the data center closest to your users.

After this, you will also need to set up authentication that will be used to administer and communicate with DigitalOcean Droplet. 

In the Authentication section, select the "SSH keys" option. SSH keys provide a secure way to log in to your server.

Next, click "New SSH Key". The next window requires the public key from your local system. If you don't have a public SSH key yet, use the following command to create one:

ssh-keygen

After entering the command, confirm the path and passphrase. Once it has executed successfully, it will create a private/public key pair on your local system. Next, we need to place the public key on the DigitalOcean server so users can log in from their local system using SSH-key-based authentication.

This can be achieved by running the following command, which copies the public key to the clipboard:

 

pbcopy < ~/.ssh/id_rsa.pub
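If pbcopy isn't available (it ships with macOS), you can simply print the key and copy it manually:

cat ~/.ssh/id_rsa.pub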

Now, paste it into DigitalOcean as a new public SSH key:

You can then create the Droplet by clicking the "Create a Droplet" button.

Step 3: Entering the Droplet

Once the Droplet is created, it becomes visible on the resources screen on the homepage.

Using the IP address shown in the Resources tab, you can access the Droplet through the web console provided by DigitalOcean. However, for security reasons, I recommend doing it through a terminal: Linux and macOS come with SSH installed, and on Windows you can use Windows PowerShell or PuTTY.

On the terminal you can run the following command:

ssh root@your_public_server_ip_from_droplet

When you connect with the host for the first time, you might see something like this:

Type Yes and continue.

Once we enter our droplet, we are going to update the operating system with the following commands:


sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade

Step 4: Deploy Website to VM instance

This step is important for our website routes to work properly. We execute the following command:

sudo ufw allow in "Apache Full"

This will open port 80 and port 443 for our incoming website traffic.

Next, we can verify that everything went as planned by visiting the droplet server’s public IP address in the web browser:

 

```
http://your_public_server_ip_from_droplet
```

It should display the default Apache page.

This confirms that the web server is installed and firewall settings allow incoming connections.

Apache on Ubuntu by default is configured to serve webpages from the /var/www/html folder.

Next, we change the permissions on the html folder that Apache serves from:

cd /var/www
sudo chmod 777 html

(Note: chmod 777 is convenient for a quick demo but overly permissive for production; assigning ownership to your deploy user is safer.)

 

Finally, reload Apache so these changes take effect:

sudo systemctl reload apache2

Your new website configuration is active, but the webroot /var/www/html contains only the default page. We can use a basic PHP-based "index.php" file to test that the virtual host works as expected.

 

Create the file "index.php" in /var/www/html and add the following content:

 

```
<!DOCTYPE html>
<html>
   <head>
      <title>My Website</title>
   </head>
   <body>
      <!-- a simple PHP statement confirms that PHP is being executed -->
      <?php echo 'PHP is working!'; ?>
      <p>You are at myownwebsite.com</p>
   </body>
</html>
```

Save and close the file, then go to your browser and access your server’s IP address:

http://public_IP_address

It should display the following PHP page:

While it's impossible to cover all the different cloud servers available, this article covers the deployment procedure for a PHP application on DigitalOcean. Given how popular PHP is, most real PHP sites won't be as simple as this one, and their deployment processes can be even more strenuous and complex than the one described in this article. 

That's why there's a need for more efficient methods of hosting PHP apps, from the local machine directly to the production environment. Using a modern and user-friendly Platform as a Service (PaaS) solution like Convox helps you easily deploy and manage PHP applications on all infrastructures, including AWS, Google Cloud, Azure, and DigitalOcean. 

With just a few clicks, Convox Multi-Cloud enables you to manage these apps in multiple cloud environments at the same time from a single console, and move between different clouds, taking the term “build once, deploy anywhere” to a whole other level.

Interested in hosting your PHP app on Convox? Get started now in just a few clicks.

 

Successful DevOps teams have a lot to account for. First, they need organized processes that streamline the delivery of core services: builds, internal tooling, quality checks, testing, deployments, and automation. This need also creates an opening for powerful DevOps tools that support engineers and empower them to manage dynamic workflows.

When it comes to DevOps engineer roles, coding isn't all there is to the job. The role also involves workflows and processes such as releasing to web servers, load balancing, testing, and version control. These processes are time-consuming and require utmost precision. One mistake and the TTM (time-to-market) increases significantly. 

So, how can Convox help to reduce the time spent on each task while increasing operational efficiency? Let’s find out:

What is Convox?

Convox is a cloud-agnostic PaaS (Platform-as-a-Service) that DevOps engineers can use to deploy applications to the cloud more efficiently. Convox streamlines the entire deployment lifecycle by integrating cloud providers’ services with other open-source tools to make up a highly reliable platform. This gives you your own PaaS with total privacy and control without the unpredictable pricing structure typical of other platforms. 

Key Features of Convox:

  • Open-source platform
  • Cloud-agnostic, supporting AWS, Digital Ocean, Google Cloud, etc.
  • Accelerates deployment
  • When an engineer pushes to GitHub, Convox automatically triggers the creation of new cloud services and containers for branch testing
  • When an engineer merges to GitHub, Convox automatically triggers the rollout of new images and containers toward production
  • Apps automatically scale up or down depending on the load
  • Cloud services automatically replace or discard components of the cloud stack when failures are detected

Why Choose Convox?

DevOps is an automation culture that bridges the gap between developers and operations teams. A good DevOps culture promotes the execution of parallel development and testing workflows within the development lifecycle.

Today, many businesses embrace the DevOps culture to streamline workflows, keep up with market demands, speed up upgrades, and accelerate software delivery for cloud applications. But running these procedures manually is complicated and time-consuming. 

Enter automation. 

Automation helps to take the efficacy of DevOps processes to a whole new level by facilitating faster software releases. With the help of tool integrations, DevOps automation reduces the time spent in the software development lifecycle (SDLC) and allows for more precision, consistency, reliability, and acceleration in deliveries.

Now, why should you choose Convox to streamline your DevOps tasks?

Convox solves two of the main bottlenecks that arise within the internal tooling of businesses:

  1. Building a tailored system is costly and distracts you from core business goals, as you have to manage hiring, R&D, maintenance, and so on.
  2. Proprietary solutions on the market are often based on experimental software and lack maturity.

These problems often demand substantial upfront investments and may uncover new issues in the long run for the DevOps team.

Here’s where Convox excels.

Convox replaces lengthy and complex code development processes and simplifies various DevOps engineer jobs by automating repetitive tasks related to code deployment, testing, upgrades, maintenance, and more. 

 

What DevOps Engineer jobs does Convox make easier?

Convox is a PaaS solution that addresses DevOps pain points and significantly reduces the time spent managing cloud resources. It retains your standard processes and tasks while causing minimal disruption. Not only does it substantially cut the hours spent on monitoring, it also reduces costs, improves resource management, and more.

Here’s how Convox simplifies DevOps Engineer jobs:

1. Faster deploys to aid the build and release engineers.

Convox can install “Convox Rack,” a power-packed PaaS, into your cloud account in just a few minutes. It efficiently manages your servers, data, and networks, which means you can deploy and scale applications using just one command.

The convox install command will configure a production-ready infrastructure with the best and latest AWS services for your applications. EC2, ASG, CloudFormation, S3, Lambda, ELB, VPC, etc., play a vital role within the system. Convox manages the tasks of testing, researching, and integrating all of these services, thereby reducing complexity.

Use the convox deploy command to deploy any twelve-factor application on AWS. Convox leverages its build and release API to create images, load balancers, and containers.
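As an illustration, a typical session might look something like the following sketch; the region and the app directory are placeholders:

```
# Sketch: stand up a Rack, then deploy the app in the current directory
$ convox install --region=us-east-1
$ cd my-app
$ convox deploy
```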

2. Automated Workflows 

Automated workflows are easy to design and execute with Convox, and they help engineers manage the regular deployment of applications to staging and production. 

Here’s how Convox simplifies the delivery engineer and automation engineer’s tasks:

  • Every update made to the application triggers the creation of a deployable release
  • Rolling deploys take care of timely upgrades with zero downtime
  • Easy rollbacks to a previous version using just a single command (see the sketch after this list)
  • Creates CI/CD pipelines for all applications via a unified console
  • Easily automates deployments to one or more clouds
  • Offers source control integrations for GitLab and GitHub
  • Integrates with Jenkins, CircleCI, GitHub Actions, etc.
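Here is a rough sketch of that rollback flow from the CLI; the release ID shown is a placeholder:

```
# Sketch: inspect releases, then roll back to a previous one (ID is illustrative)
$ convox releases
$ convox releases rollback RABCDEFGHI
```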

3. Helps SysAdmin with Role-based Access Control

With Convox, SysAdmins can control team members’ access levels through multiple access roles that granulize control and secure your environments. Convox manages this seamlessly by supporting hardware two-factor authentication (2FA).

4. Aids DevOps Engineers with 360-degree visibility.

Test engineers and agile coaches can use Convox to reduce the time taken for bug fixes or to get upgrades live in several ways:

  1. Build engineers can start and change any application with convox start.
  2. Use ‘Convox Integrations’ to ship patches to GitHub > CircleCI > Production > Slack Notification.
  3. Use convox scale to scale a service.
  4. DevSecOps engineers can apply rack updates (convox rack update) to roll out infrastructure security fixes across the board.
  5. Test engineers can use convox exec to debug the live app.
  6. Quality engineers can use convox proxy to analyze an application’s private database (a sketch of these commands follows this list).
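As a sketch of how these commands fit together; the service names, process IDs, endpoints, and counts below are placeholders:

```
# Sketch: common day-to-day Convox commands (values are illustrative)
$ convox start                       # run the app locally
$ convox scale web --count=3         # scale the web service
$ convox rack update                 # apply infrastructure/security updates
$ convox exec web-1234 bash          # open a shell in a live process
$ convox proxy db.internal:5432      # tunnel to a private database
```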

These agile processes call for detailed audits to track team members’ activities. This is where the Audit Log comes in.

5. Detailed Audit Logs to maintain compliance

Convox creates detailed logs of all the changes made to your applications, which helps you meet compliance requirements.

Its ‘self-hosted’ option helps test engineers seamlessly comply with the GDPR, PCI, HIPAA, and SOC2 standards.


  • Monitor and manage your infrastructure across cloud environments
  • Check the health and load of your applications via one console
  • Features Syslog Forwarding, Log Aggregation, and a Metrics Dashboard
  • Provides a complete audit log of every change made and by whom
  • Check the entire history of application changes and roll back with a click

6. Single API access

 

Convox also provides engineers with a single API to access or upgrade resources and keep track of all API calls, even the sensitive ones.
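For example, resources can be inspected through that same API from the CLI; the endpoint path here is illustrative:

```
# Sketch: read from the Convox API via the CLI (path is illustrative)
$ convox api get /apps
```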

7. Use Convox Console to streamline modern DevSecOps workflows

Console goes one step beyond Convox Rack. It provides intuitive, web-based tools that let you switch between Racks, integrate with third-party tools, control developer access, and more.

Convox Console comprises three core features, namely:

  • Rack Sharing: Rack Sharing allows teams to share Racks using CLI commands. This is vital for allowing or restricting individual engineers’ access to resources. For sensitive systems, DevSecOps engineers must ensure that team members have exclusive credentials, authenticated via an ‘API key’ through the Console’s interface. 
  • GitHub Integration: Console seamlessly integrates with GitHub to simplify the setup of CI/CD workflows. As you push code to master on GitHub, it triggers an automatic build and deploy of your Convox app. 
  • Slack Integration: Businesses often leverage robust chat services like Slack for seamless collaboration among DevOps teams. The Console offers an ‘Add to Slack’ button, where engineers can opt in to receive critical notifications about specific Racks.

Final Word

Convox is one of the best PaaS platforms for growing businesses that do not yet have the resources and capacity to handle a large team of DevOps engineers. The platform offers a perfect solution for companies that manage multiple client projects with a limited number of DevOps engineers onboard.  

By simplifying the creation of standard tasks and workflows, Convox speeds up the delivery lifecycle and drives timely product releases. Single commands trigger rolling deployments, fast rollbacks, and twelve-factor app builds, saving both time and expense.

Convox helps businesses gain operational efficiency with a limited pool of DevOps engineers. This primarily benefits teams chained to mundane, repetitive manual tasks who want to reach their core business goals and deliver true value through automation.

It’s June, and we’re already halfway through the year; 2022 is going fast! This month, we’re bringing you up to speed with the latest updates, releases, and information about Convox. 

Product updates: Please note that from now on, all actions performed by a deploy key will be audited in the console and available in the Audit Logs. 

We’ve also added a new parameter (proxy_protocol) for AWS racks. This parameter enables proxy-protocol on the AWS NLB. A typical use case is preserving the client’s source IP on incoming requests, which is then available via the x-forwarded-for header.
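For rack operators working from the CLI, turning the parameter on might look like the following sketch (assuming a v3 AWS rack; check your rack and console versions first):

```
# Sketch: enable proxy-protocol on an AWS rack
# Note: changing this parameter briefly interrupts ingress traffic (see below)
$ convox rack params set proxy_protocol=true
```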

Side note for enterprise customers:

  • We detected an edge case where racks managed by consoles older than version 2.2.3 with auto-update enabled fail to update and are left in a drifted state, potentially preventing new deployments. Please update your console to 2.2.3 or later before applying any rack updates beyond rack version 3.4.5.

V2 Racks – Classic to Application Load Balancer Migration

On August 1st, we’re going to release a new update for v2 racks to migrate from Classic to Application Load Balancer.

Due to the Load Balancer change, the rack API hostname will also change, but this should be transparent for all console-managed racks. Note that during the update, the rack will be unreachable for 5 to 10 minutes, which is the time it takes for the update to complete and for the new Rack API URL to synchronize with our system. To synchronize the rack API URL manually, we’re also adding a new CLI command: `convox rack sync`.

For users running v2 racks not managed by the console, you can still synchronize the rack API URL by running `convox rack sync --name rack-name`. It will return the new URL, which you can then use to update your rack configuration file.

GitLab Integration

GitLab changed its access token behavior in version 15.0: previously, tokens had no lifetime expiration (similar to GitHub); now they expire two hours after creation. When you authorize Convox to access your GitLab account, GitLab sends us the AccessToken, the RefreshToken, and the AccessToken’s expiration date. We use the RefreshToken to request a new AccessToken when it expires. You must resync your GitLab integration so we can save the new information and refresh your token when necessary. Users with a self-hosted console (enterprise) must also update their console to version 2.2.5.

Latest Releases

To keep you all updated on the latest releases, here is a summary of recent work:

Version 3 (RSS or GitHub)

  • 3.5.2
    – Prevent skip minor on update (#446)
    – Adding more details about AutoScaling (#444)
  • 3.5.1
    – Adding Postgis as a resource type (#443)
  • 3.5.0
    – AWS – Add proxy protocol parameter (client’s real source IP)
    This release introduces the proxy_protocol=true/false rack parameter to enable AWS NLB proxy-protocol. A use case for proxy-protocol is preserving the client’s source IP on incoming requests, available via the x-forwarded-for header.
    If you’re running a non-console-managed rack, make sure you have the AWS CLI and jq installed.
    Enterprise customers should first update the console app to use the enterprise.convox.com/console:2.2.3 image version.
    Important: changing this parameter on an existing rack makes ingress traffic unavailable for 5 to 10 minutes, so please plan for this beforehand.
  • 3.4.5
    – A1 instance type should be included in arm_type (#439)
    – Add e2e tests for installing in existing VPC (#436)
    – Add c7g family to arm instance type (#435)

Version 2 (RSS or GitHub)

  • 20220525184928
    – closes #3538 Rack with HighAvailability false should AutoScale
    – closes #3539 Add c7g family to arm instance type 

Featured article: DevOps Automation: What DevOps Tasks Should You Automate?

Blog: How to use DevOps automation to combat DevOps workforce shortages

In this month’s blog, we explain how automation can help you work more efficiently and address workforce shortages in the DevOps industry. Automating DevOps is the way forward for teams struggling to maintain a critical mass of engineers to attend to necessary DevOps tasks. Check out the full blog post to learn more. 

In April 2015 AWS announced a new cloud service: Elastic File System — a Shared File Storage for Amazon EC2. Fast forward to June 28th 2016, more than a year later, and AWS announced that EFS is finally available and production-ready in 3 regions (us-east-1, us-west-2 and eu-west-1).

A day later, Convox’s David Dollar opened a pull request integrating EFS into Convox Rack, the open-source cloud management platform built on top of AWS.

Let’s give it a try and run a Postgres container on EFS…

We’ll see that EFS is quite easy to get up and running, and that it works as advertised. A Postgres data directory is synchronized to every instance in the cluster. This addresses a long-standing challenge with containers: we can now run, kill, and reschedule a Postgres container anywhere in the cluster and resume serving the persistent data.

We’ll also see that while possible, Postgres on EFS probably isn’t something we need or want to use outside of development or testing, but this experiment gives us confidence to add EFS to our infrastructure toolbox.

EFS and NFS Primer

EFS is an implementation of the Network File System (NFS) protocol.

NFS dates back to 1984, but EFS implements version 4.1 which was proposed as a standard in 2010.

With NFS, every computer uses an NFS client to connect to an NFS server and synchronize file metadata and data over the network. The goal is to synchronize low-level file modifications (locks and writes) across servers so that all clients have an eventually consistent filesystem to read from.

The NFS client/server protocol is designed to handle tough failures around network communication and mandatory and advisory file locking. Version 4.1 made big improvements around using multiple servers to separate the metadata paths from the data paths.

See these primers on NFS 4.0 and NFS 4.1 for more details.

Because NFS looks like a standard filesystem it is trivial to use an EFS volume with Docker containers.
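For example, once the host has the volume mounted, handing a directory on it to a container is an ordinary bind mount; the host path here is illustrative:

```
# Sketch: bind-mount an EFS-backed host directory into a container
$ docker run -v /volumes/pgdata:/var/lib/postgresql/data convox/postgres
```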

EFS is free to provision and offers 5 GB of storage for free in the AWS free tier. Usage beyond that costs $0.30 per GB per month. See the launch announcement for more details.
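For example, storing 100 GB for a month works out to roughly $28.50: the first 5 GB are free, and the remaining 95 GB cost 95 × $0.30 = $28.50.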

1990 called and wants its technology back

Update Convox To Use EFS

An EFS volume is relatively easy to provision with CloudFormation and to mount into an EC2 instance with UserData.
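The UserData half boils down to a standard NFS 4.1 mount at boot. Here is a sketch, reusing the filesystem address that appears in the demo below:

```
# Sketch: mount the EFS volume on an instance (the kind of command UserData runs)
$ sudo mkdir -p /volumes
$ sudo mount -t nfs4 -o nfsvers=4.1 \
    us-east-1c.fs-f223e6bb.efs.us-east-1.amazonaws.com:/ /volumes
```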

Here I’ll use the simple, open-source Convox project and tools to set up a new environment (or update an existing one) to use EFS, and then inspect the resulting instances and containers. See this pull request for the specifics of the CloudFormation and UserData changes.

As long as we install Convox into one of the 3 regions where EFS is supported, Convox instances now automatically have EFS mounted in /volumes.

```
# install convox in us-east-1, us-west-2 or eu-west-1
$ convox install --region=us-west-2

# update to the EFS release
$ convox rack update 20160629185452-efs

# check out the new /volumes path
$ convox instances ssh i-492ab0cf
$ mount | grep /volumes
us-east-1c.fs-f223e6bb.efs.us-east-1.amazonaws.com:/ on /volumes type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.2.139,local_lock=none,addr=10.0.2.95)

# modify something in the /volumes path
$ sudo su
$ echo hi > /volumes/hi

# see the changes on other instances
$ convox instances ssh i-cb979a57 cat /volumes/hi
hi
$ convox instances ssh i-f08a3660 cat /volumes/hi
hi
```

Sure enough we have a filesystem shared between three instances!

Update an App Database To Use an EFS Volume

Let’s run a database on this filesystem.

Convox takes an application with a Dockerfile and docker-compose.yml file and automates building images and pushing them to the EC2 Container Registry (ECR) and deploying the images to AWS via the EC2 Container Service (ECS).

We can take the convox-examples/rails app and add a persistent volume to the Postgres container by adding 2 new lines to the docker-compose.yml file that say we want a data directory in production:

```
volumes:
  - /var/lib/postgresql/data
```

Now I can deploy the app:

```
$ cd rails
$ cat docker-compose.yml
web:
  build: .
  labels:
    - convox.port.443.protocol=tls
    - convox.port.443.proxy=true
  links:
    - database
  ports:
    - 80:4000
    - 443:4001
database:
  image: convox/postgres
  ports:
    - 5432
  volumes:
    - /var/lib/postgresql/data

$ convox deploy
Deploying rails
...
RUNNING: docker pull convox/postgres
RUNNING: docker tag rails/database 132866487567.dkr.ecr.us-east-1.amazonaws.com/convox-rails-eefmdtclkf:database.BHEUKPGVHOB
RUNNING: docker push 132866487567.dkr.ecr.us-east-1.amazonaws.com/convox-rails-eefmdtclkf:database.BHEUKPGVHOB
database.BHEUKPGVHOB: digest: sha256:be8596b239b2cf9c139b93013898507ba23173a5238df1926e0ab93e465b342c size: 11181
Promoting RJVOZPCGYHE... UPDATING
```

And watch the logs:

```
$ convox logs
agent:0.69/i-e0117466 Starting database process 4c6cde98595c
database/4c6cde98595c The files belonging to this database system will be owned by user "postgres".
database/4c6cde98595c This user must also own the server process.
database/4c6cde98595c
database/4c6cde98595c The database cluster will be initialized with locale "en_US.utf8".
database/4c6cde98595c The default database encoding has accordingly been set to "UTF8".
database/4c6cde98595c The default text search configuration will be set to "english".
database/4c6cde98595c
database/4c6cde98595c Data page checksums are disabled.
database/4c6cde98595c fixing permissions on existing directory /var/lib/postgresql/data ... ok
database/4c6cde98595c creating subdirectories ... ok
database/4c6cde98595c selecting default max_connections ... 100
database/4c6cde98595c selecting default shared_buffers ... 128MB
database/4c6cde98595c selecting dynamic shared memory implementation ... posix
database/4c6cde98595c creating configuration files ... ok
database/4c6cde98595c creating template1 database in /var/lib/postgresql/data/base/1 ... ok
database/4c6cde98595c initializing pg_authid ... ok
database/4c6cde98595c initializing dependencies ... ok
database/4c6cde98595c creating system views ... ok
database/4c6cde98595c loading system objects' descriptions ... ok
database/4c6cde98595c sh: locale: not found
database/4c6cde98595c creating collations ... ok
database/4c6cde98595c No usable system locales were found.
database/4c6cde98595c Use the option "--debug" to see details.
database/4c6cde98595c creating conversions ... ok
database/4c6cde98595c creating dictionaries ... ok
database/4c6cde98595c setting privileges on built-in objects ... ok
database/4c6cde98595c creating information schema ... ok
database/4c6cde98595c loading PL/pgSQL server-side language ... ok
database/4c6cde98595c vacuuming database template1 ... ok
database/4c6cde98595c copying template1 to template0 ... ok
database/4c6cde98595c copying template1 to postgres ... ok
database/4c6cde98595c syncing data to disk ... ok
database/4c6cde98595c
database/4c6cde98595c WARNING: enabling "trust" authentication for local connections
database/4c6cde98595c You can change this by editing pg_hba.conf or using the option -A, or
database/4c6cde98595c --auth-local and --auth-host, the next time you run initdb.
database/4c6cde98595c
database/4c6cde98595c Success.
database/4c6cde98595c
database/4c6cde98595c PostgreSQL stand-alone backend 9.4.6
database/4c6cde98595c backend> statement: CREATE DATABASE app;
database/4c6cde98595c backend>
database/4c6cde98595c
database/4c6cde98595c PostgreSQL stand-alone backend 9.4.6
database/4c6cde98595c backend> statement: ALTER USER postgres WITH SUPERUSER PASSWORD 'password';
database/4c6cde98595c backend>
database/4c6cde98595c LOG: database system was shut down at 2016-07-05 21:53:36 UTC
database/4c6cde98595c LOG: MultiXact member wraparound protections are now enabled
database/4c6cde98595c LOG: database system is ready to accept connections
database/4c6cde98595c LOG: autovacuum launcher started
```

Test Persistence

Postgres boots up, though so far this looks just like it did before with ephemeral Docker volumes.

But… Pick a different instance and look again at the shared filesystem. It also sees the Postgres data directory!

```
$ convox instances ssh i-f08a3660 sudo ls /volumes/var/lib/postgresql/data
base  global  pg_clog  pg_dynshmem  pg_hba.conf  pg_ident.conf  pg_logical
pg_multixact  pg_notify  pg_replslot  pg_serial  pg_snapshots  pg_stat
pg_stat_tmp  pg_subtrans  pg_tblspc  pg_twophase  PG_VERSION  pg_xlog
postgresql.auto.conf  postgresql.conf  postmaster.opts  postmaster.pid
```

The Postgres container is only accessible from inside the VPC, so let’s proxy in, create a table, and insert a record:

```
$ convox apps info
Name       rails
Status     running
Release    RJVOZPCGYHE
Processes  database web
Endpoints  internal-rails-database-T5LLUKS-i-1848066794.us-east-1.elb.amazonaws.com:5432 (database)
           rails-web-WQDNFES-853899250.us-east-1.elb.amazonaws.com:80 (web)
           rails-web-WQDNFES-853899250.us-east-1.elb.amazonaws.com:443 (web)

$ convox proxy internal-rails-database-T5LLUKS-i-1848066794.us-east-1.elb.amazonaws.com:5432
proxying 127.0.0.1:5432 to internal-rails-database-T5LLUKS-i-1848066794.us-east-1.elb.amazonaws.com:5432

$ psql -h 127.0.0.1 -U postgres -d app
Password for user postgres:
psql (9.4.5, server 9.4.6)

app=# CREATE TABLE users (
app(#   name varchar(40)
app(# );
CREATE TABLE
app=# INSERT INTO users VALUES('foo');
INSERT 0 1
```

Now for the moment of truth, let’s kill the Postgres container while watching the logs:

```
$ convox ps stop 4c6cde98595c
Stopping 4c6cde98595c... OK

$ convox logs
agent:0.69/i-e0117466 Stopped database process 4c6cde98595c via SIGKILL
database/4c6cde98595c LOG: received smart shutdown request
database/4c6cde98595c LOG: autovacuum launcher shutting down
database/4c6cde98595c LOG: incomplete startup packet
database/4c6cde98595c LOG: incomplete startup packet
database/4c6cde98595c LOG: incomplete startup packet
database/4c6cde98595c LOG: incomplete startup packet
database/4c6cde98595c LOG: incomplete startup packet
database/4c6cde98595c LOG: incomplete startup packet
database/4c6cde98595c LOG: incomplete startup packet
database/4c6cde98595c LOG: incomplete startup packet
database/4c6cde98595c LOG: incomplete startup packet
database/4c6cde98595c LOG: incomplete startup packet
database/4c6cde98595c LOG: incomplete startup packet
database/4c6cde98595c LOG: incomplete startup packet
agent:0.69/i-e0117466 Stopped database process 4c6cde98595c via SIGKILL
agent:0.69/i-e0117466 Dead database process 4c6cde98595c
agent:0.69/i-e0117466 Stopped database process 4c6cde98595c via SIGTERM
agent:0.69/i-492ab0cf Starting database process 533a014918b5
database/533a014918b5 LOG: incomplete startup packet
database/533a014918b5 LOG: incomplete startup packet
database/533a014918b5 LOG: database system was not properly shut down; automatic recovery in progress
database/533a014918b5 LOG: record with zero length at 0/16CB998
database/533a014918b5 LOG: redo is not required
database/533a014918b5 LOG: MultiXact member wraparound protections are now enabled
database/533a014918b5 LOG: database system is ready to accept connections
database/533a014918b5 LOG: autovacuum launcher started
```

The old container stopped, the new container started on a different instance, and it sees and recovers the data!

```
$ psql -h 127.0.0.1 -U postgres -d app
Password for user postgres:
psql (9.4.5, server 9.4.6)

app=# SELECT * FROM users;
 name
------
 foo
(1 row)
```

IT WORKS

In Summary

  • Provision an EFS volume with a bit of CloudFormation

  • Mount EFS to every instance via UserData and standard Linux NFS utilities

  • Mount sub-directories of the EFS volume into Docker containers (ECS services) via volume mounts

  • Persist data across container stops and re-starts, independent of instances

All of this works after just a couple of hours of integrating EFS into an existing AWS / CloudFormation / VPC / ECS setup. Thanks, Convox!

I did notice some side-effects…

  • Rolling deploys can result in two processes trying to lock the database

  • Deleting an app leaves data around

And I didn’t push towards extreme usage scenarios…

  • High throughput

  • High volume

  • Network failures

Still…

Key Takeaways

Always Bet On AWS — A new standard has been set in cloud storage. Not only does EFS work, it promises all the things we want from the cloud. It’s cheap at $0.30/GB-month, you don’t provision storage up front, you pay for what you use, and it scales to petabytes.

No doubt there will be questions about the NFS protocol and horror stories of the dark days of EBS that cast a shadow over EFS. But you’d be silly to think AWS won’t operate this with the extremely high quality-of-service that is the reason they are taking over all of the world’s computing.

They will keep the system running, recover as fast as possible when it does have problems, and will answer support tickets when it’s not working as advertised.

Amazon continues the trend of turning cloud computing into utility services that get better and cheaper over time all while taking on massive infrastructure complexity so we don’t have to.

Filesystems Are Back — The experiment with Postgres gives me confidence that EFS will fit into my architecture in some places. Now one of the tenets of 12 Factor is up for review:

VI. Processes: “Execute the app as one or more stateless processes. Twelve-factor processes are stateless and share-nothing. Any data that needs to persist must be stored in a stateful backing service, typically a database.”

Shared state and file persistence is back!

All of a sudden WordPress got a lot easier to run on AWS. What other software and use cases are possible again? How about:

  • Docker image caching

  • Home directories

  • User file uploads and processing

Integration Over Invention — The 12 Factor tenet came from avoiding the hard problems of stateful containers.

I have been working on container runtimes for more than 6 years between Heroku and Convox and have explicitly avoided putting engineering resources towards this problem until now.

We have been tempted by solutions like S3FS, GlusterFS, or Flocker. We have built contraptions around EBS volumes, snapshots, and rsync. But these systems always bring a lot of additional complexity, which means more operational risk and more engineering time.

Tremendous engineering has gone into these systems and people have been extremely successful with the various solutions. But most of us have correctly put our own energy into architecting our applications to not need to solve tough problems with state, delegating everything to a Postgres service and S3.

Finally, the tides have turned. We can integrate a service with a few lines of CloudFormation and build around shared state, rather than inventing, installing, debugging, and maintaining complex distributed systems software.

Specialized Utility Services Still Win — Even if EFS works for a Postgres container, an RDS Postgres database starts at $12/mo. That includes protections against catastrophic failure, like Postgres data replication and backups, that would be risky to ignore when running on any storage service.

So I still see no reason to take on the operational properties of a containerized data volume for a database outside of development or test purposes.

Likewise S3 isn’t going anywhere. It’s hard to beat the simplicity and maturity of a simple blob storage service in our application architecture.

What do you think?

Is EFS a new tool in your infrastructure toolbox?

What new or old use cases does this open up for our apps in the cloud?

What use cases will you still avoid using EFS for at all costs?

What becomes easier, cheaper, and more reliable now that AWS has taken on this tough challenge for us?
