DevOps Tools Review – AWS CodeArtifact

Service Description

AWS CodeArtifact (not to be confused with the compliance tool AWS Artifact) allows organisations to securely store, publish and share the software packages used in their software development process. Code created or curated by a user or organisation can be organised into a custom repository and logically linked with public repositories before it is made available for distribution.

The organisational unit for custom and public repositories is known as a domain.

Packages pulled in from public repositories are also cached within the CodeArtifact repository.

Example Use Case Scenario 1

An application has been developed which makes use of Python scripting to perform image processing and video manipulation. CodeArtifact is leveraged to create a domain. Within the domain, a repository is created for the user-created or “fresh” packages, and a public upstream repository is chosen during setup; PyPI is specified. Both repositories will be listed in the same single pane within CodeArtifact, grouped together as a domain.

Once the domain has been created, the locally developed artefact can be uploaded to the repository through a CLI operation.

Once the application has been installed by the end user, pip packages such as Pillow and MoviePy are retrieved through the public upstream repository, delivered as part of the package and cached within the CodeArtifact repository as a resiliency failsafe.
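As a sketch of how this resolution is wired up, the pip index URL for a CodeArtifact repository follows a predictable pattern. The domain, account ID, region and repository names below are hypothetical placeholders:

```python
# Illustrative only: builds the pip index URL that `aws codeartifact login --tool pip`
# would configure. Domain, owner account ID, region and repository are placeholders.

def codeartifact_pip_index(domain: str, owner: str, region: str, repository: str) -> str:
    """Return the PyPI-style index URL for a CodeArtifact repository."""
    return (
        f"https://{domain}-{owner}.d.codeartifact.{region}.amazonaws.com"
        f"/pypi/{repository}/simple/"
    )

if __name__ == "__main__":
    # e.g. a hypothetical "media-apps" domain whose "fresh" repo has PyPI upstream
    print(codeartifact_pip_index("media-apps", "111122223333", "eu-west-1", "fresh"))
```

Running `aws codeartifact login --tool pip --domain media-apps --repository fresh` writes a broadly equivalent URL, together with a short-lived authorisation token, into pip's configuration.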

Example Use Case Scenario 2

Fuelled by a need to reduce operating expense, an organisation is looking for an alternative to their existing GitHub solution and licensing costs.

CodeArtifact (in conjunction with CodeCommit for source and version control) can be leveraged in order to provide a pay-as-you-go model for services which require no maintenance from the customer. This can allow the customer more time to concentrate on producing code.

Additionally, this brings further benefits. Users have one less set of credentials to work with – they can use existing IAM accounts. Administrators can use IAM roles to manage access to packages. Known, working upstream packages can be cached, which protects against upstream code drift causing local builds to fail.
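Repository access can be granted with a standard IAM policy. The sketch below is illustrative only – the account ID, domain and repository names are hypothetical, and the comments note where real policies differ:

```python
import json

# Illustrative sketch only: account ID, domain and repository names are hypothetical.
# In a real policy, codeartifact:GetAuthorizationToken is granted against the domain
# ARN, and pip/twine logins additionally need sts:GetServiceBearerToken.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadFromRepo",
            "Effect": "Allow",
            "Action": [
                "codeartifact:GetRepositoryEndpoint",
                "codeartifact:ReadFromRepository",
            ],
            "Resource": "arn:aws:codeartifact:eu-west-1:111122223333:repository/media-apps/fresh",
        },
        {
            "Sid": "TokenForDomain",
            "Effect": "Allow",
            "Action": ["codeartifact:GetAuthorizationToken"],
            "Resource": "arn:aws:codeartifact:eu-west-1:111122223333:domain/media-apps",
        },
    ],
}

print(json.dumps(read_only_policy, indent=2))
```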

Features of CodeArtifact

  • A secure location for storage of software packages. Integration with AWS Key Management Service to provide encrypted storage.
  • Reduction in overheads. No need to host an always-on repository server.
  • No fees or licensing above the pay-as-you-go rates
  • Access controlled through existing IAM accounts – no new credentials required 
  • Integration with existing package managers – ability to use an in-house package manager to publish or retrieve your developed software packages.


As of June 2020, there is still a limited number of external repositories which CodeArtifact supports as upstreams: PyPI, npm and Maven (the latter used by both mvn and Gradle builds). Software items from other public repositories may still need to be included in the user-created repository within a domain, or the end user may need to download and install the necessary dependencies separately.

DevOps tools in the cloud


The concept of DevOps is now commonplace within modern IT infrastructures and there is a wealth of DevOps tools available. Some are multi-faceted tools which cover more than one stage of the DevOps cycle, while others excel at individual stages.

As DevOps is a relatively modern concept, those who employ DevOps methodology and develop DevOps tools are likely to be champions of cloud technology. Today, cloud technologies and DevOps are seen as entirely complementary. The aim of this article is to provide an outline of DevOps tools, to introduce AWS DevOps tools and to show where these can be of great use.


Planning Tools

Planning a project entails a process that involves all contributors, sets a timescale, presents documentation and, perhaps, aligns costings. Jira and Asana are market-leading tools for this functionality, and both also offer extensive issue-tracking features. AWS has yet to produce a comprehensive planning and tracking tool; however, AWS CodeStar offers excellent integration with Jira.

Development – Coding / Repository Tools

Within a well thought out DevOps cycle, there is also a need to have a place to store source code, facilitate access control and apply version control. GitHub is a very popular cloud-based Git repository hosting service. Other third-party tools include Bitbucket and SourceForge. 

AWS also offers tools within this sector of DevOps tooling, which may be of most benefit to those already familiar with AWS tooling. AWS CodeCommit provides a fully managed Git-based repository service which offers high availability without product tiering, in a pay-as-you-use model rather than higher-tier usage and features being sold on a per-person/month basis.

AWS CodeBuild is a Platform-as-a-Service continuous integration service which can compile and test software packages, leaving them ready to deploy. Here, users are charged by the minute for the compute resources used.

Testing Tools

With respect to third-party tooling, Selenium remains the industry-leading test tool. Selenium does not currently exist as a Platform as a Service, but its popularity makes it worthy of a mention. Other widely used third-party testing tools include JUnit (whose core strength is testing Java), Cucumber and Appium (well suited to mobile devices).

AWS CodePipeline also provides integration with several testing tools which AWS has found to work well within its Source-Build-Test-Deploy framework, and which can be used on a pay-as-you-use basis. BlazeMeter, Ghost Inspector, Micro Focus StormRunner Load and Runscope cover a wide range of testing criteria. AWS CodePipeline is covered in more detail later in this article.

Release / Deployment Tools

There are several well-known tools which are used for releasing code. Docker (for containers), Puppet, Chef, Ansible and TravisCI are all third-party tools which have good market shares.

AWS CodeDeploy focuses on automated deployment to AWS compute services, such as EC2, and on-premises servers. AWS OpsWorks is a very useful configuration management tool which provides managed instances of Chef and Puppet to automate server configuration and deployment.

AWS Elastic Beanstalk is another tool which could be classified as spanning more than one stage of the DevOps cycle. Elastic Beanstalk is a web application deployment and orchestration service which follows the usual pay-as-you-use principle. Packages can be deployed in minutes on a number of platforms and from numerous source languages.

Maintenance / Monitoring Tools

Nagios, Opsgenie and Splunk offer a wealth of monitoring capabilities across numerous types of systems and metrics. However, AWS has some fantastic offerings which cover a vast range of monitoring and maintenance needs.

AWS CloudWatch offers the ability to visualise logs and metrics for AWS and on-premises assets and to set alarms based on thresholds. AWS Config can help prevent configuration drift by assessing, auditing and evaluating AWS resources and matching them against desired configurations. With AWS Systems Manager, it’s possible to pull together monitoring metrics from numerous resources, such as EC2 instances, RDS instances and S3 buckets, along with on-premises data, into an aggregated view. The user journey through applications can be monitored with AWS X-Ray; this is especially suited to microservices, where many elements may be drawn together. The abovementioned AWS OpsWorks, which provides Chef and Puppet instances, is also a useful maintenance tool.

Which tools are right for you?

There is no easy answer as to which toolset is best suited for your needs without first knowing which qualities you favour. If you prefer to use open source tools which require hosting, there are still savings to be had by deploying those tools to cloud instances.

However, there are advantages to be gained by using AWS-based tooling, such as metered usage and no need to maintain either the tools or the underlying hardware on which they reside, be it physical, virtual or cloud OS instances.


A DevOps framework works best with a continuous integration / continuous deployment (CI/CD) platform which brings elements together.

Jenkins is the most widely used CI/CD tool, and the Jenkins community has created over 1,000 plugins. One disadvantage of Jenkins is the need to deploy and maintain it yourself, which can involve considerable effort when used with large numbers of developers.

The AWS CI/CD tool, AWS CodePipeline, has some advantages over Jenkins and other CI/CD tools. As it is Software as a Service, there is no need to maintain the tool or the platform. It integrates easily with other AWS services and uses the familiar pay-as-you-use model. It can also deploy to non-cloud resources via an agent, making it a good option where there are still on-premises assets to take into account. One disadvantage of AWS CodePipeline is that fewer third-party applications can be integrated, but this is improving.

In summary, AWS continues to develop their comprehensive range of DevOps tools which can be used in more of an out-of-the-box manner than other well-established third-party tools. AWS DevOps tools are designed to be interoperable with other AWS tools and the pricing model is pay-as-you-use. However, for those who already have a preference for existing third-party tools, the scalability and disposability of AWS EC2 instances makes AWS well-suited as a hosting solution as well as an ideal playground for trying out as many tools as you wish. It is a product range which is well worth keeping an eye on.

Why not get in touch with CirrusHQ to get the ball rolling on your DevOps journey?


DevOps Tools Review – AWS Elastic Beanstalk

Service Description

Amazon Web Services (AWS) comprises over one hundred services, each of which exposes an area of functionality. While the variety of services offers flexibility for how you want to manage your AWS infrastructure, it can be challenging to figure out which services to use and how to provision them.

With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having to learn about the infrastructure that runs those applications. Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.

Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. When you deploy your application, Elastic Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, to run your application.

You can interact with Elastic Beanstalk by using the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or eb, a high-level CLI designed specifically for Elastic Beanstalk. (More information is available in the open source GitHub repository.)

To learn more about how to deploy a sample web application using Elastic Beanstalk, see Getting Started with AWS: Deploying a Web App.

You can also perform most deployment tasks, such as changing the size of your fleet of Amazon EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console).

To use Elastic Beanstalk, you create an application, upload an application version in the form of an application source bundle (for example, a Java .war file) to Elastic Beanstalk, and then provide some information about the application. Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your code. After your environment is launched, you can then manage your environment and deploy new application versions. The following diagram illustrates the workflow of Elastic Beanstalk.

After you create and deploy your application, information about the application—including metrics, events, and environment status—is available through the Elastic Beanstalk console, APIs, or Command Line Interfaces, including the unified AWS CLI.
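The upload-then-deploy part of this workflow can be sketched with boto3's Elastic Beanstalk client. This is a rough illustration, not a complete implementation: the application, environment, bucket and key names are hypothetical, and it assumes the source bundle has already been uploaded to S3.

```python
# Sketch: register an uploaded source bundle as a new application version, then
# point a running environment at it. All names below are hypothetical.

def deploy_new_version(app_name: str, env_name: str, version_label: str,
                       bundle_bucket: str, bundle_key: str,
                       region: str = "eu-west-1") -> None:
    import boto3  # imported here so the sketch can be read without AWS access

    eb = boto3.client("elasticbeanstalk", region_name=region)

    # Register the S3 source bundle as a new application version...
    eb.create_application_version(
        ApplicationName=app_name,
        VersionLabel=version_label,
        SourceBundle={"S3Bucket": bundle_bucket, "S3Key": bundle_key},
        Process=True,
    )
    # ...then deploy it; Beanstalk handles provisioning and the rollout.
    eb.update_environment(EnvironmentName=env_name, VersionLabel=version_label)
```

In practice the same two steps are what `eb deploy` performs for you behind the scenes.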

Example Use Case Scenarios

Scenario 1 : Customer wants a web application without having to worry about the underlying infrastructure

Elastic Beanstalk can be deployed quickly for a customer and provides a lot of functionality that would otherwise take much longer to set up and maintain. An Elastic Beanstalk stack provides the customer with a deployment process and stack that bundles many AWS features into one package:

  • Blue / green deployment
  • Load balancer
  • Health checks / Monitoring
  • Auto scaling / auto healing instances
  • Integration with deployment tools such as Git and Visual Studio
  • Managed updates
  • Security Groups
  • Networking / VPC Configuration
  • Log Rotation
  • X-Ray

If the customer just needs a stable application deployed and doesn’t need any more complexity, then Elastic Beanstalk can be a good candidate to get a lot of the best practices covered in an efficient manner. 

Scenario 2 : The customer would like to manage some of the infrastructure themselves but doesn’t want to have to learn the full AWS suite.

If the customer would like to handle some of the updates themselves such as the ASG size, or handle when a deployment is done and when the switchover to the new version is done, then Elastic Beanstalk can enable them to do this without having to have access and training on all of the other AWS systems.

The Elastic Beanstalk interface can be used to deploy and to make changes in one place that would traditionally be in multiple places. For example, the customer could deploy a new version of their code and update the instance size and ASG min/max from the one screen. 


There are, however, some caveats to be aware of:

  • As the resources are still created as normal resources, there can be two places to update, and if any changes are made to the resources outside of Elastic Beanstalk, any subsequent deployment or change in the Elastic Beanstalk stack will override those changes.
  • Elastic Beanstalk stacks can’t easily be copied to create another. There is a workaround: use the Elastic Beanstalk CLI to download both stacks’ details, save the configuration of one, then manually copy it over.
  • The Elastic Beanstalk CLI is a separate tool from the AWS CLI.


As Elastic Beanstalk is trying to be a simpler way of handling applications, there are some drawbacks if you need more complexity.

  • Changes are designed to be straightforward, but this can sometimes mean that not all of the information about issues is presented in Elastic Beanstalk
  • Limited set of deployment software targets
  • More complex applications that require more than just a load balancer with EC2 instances behind it aren’t really suitable for Elastic Beanstalk. Workloads with background worker processes are a good example of something better separated out.
  • Issues with deployments / scaling issues can be more complicated to resolve due to more limited information provided

To find out more about the wide range of uses of Elastic Beanstalk and further customer case studies, AWS has recorded a presentation showcasing Elastic Beanstalk during its re:Invent conference. The video of this talk is below:

DevOps Tools Review – AWS OpsWorks

Service Description

AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. 

OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. 

OpsWorks has three offerings:

  • AWS OpsWorks for Chef Automate
  • AWS OpsWorks for Puppet Enterprise
  • AWS OpsWorks Stacks

AWS OpsWorks for Chef Automate

AWS OpsWorks for Chef Automate is a fully managed configuration management service that hosts Chef Automate, a suite of automation tools from Chef for configuration management, compliance and security, and continuous deployment. OpsWorks also maintains your Chef server and gives you access to all of the Chef Automate features, such as configuration and compliance management, which you manage through the Chef console or command line tools like Knife. It also works seamlessly with your existing Chef cookbooks.

AWS OpsWorks for Puppet Enterprise

AWS OpsWorks for Puppet Enterprise is a fully managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet for infrastructure and application management. OpsWorks also maintains your Puppet master server and eliminates the need to operate your own configuration management systems or worry about maintaining its infrastructure. OpsWorks gives you access to all of the Puppet Enterprise features, which you manage through the Puppet console. It also works seamlessly with your existing Puppet code.

AWS OpsWorks Stacks

AWS OpsWorks Stacks is an application and server management service. With OpsWorks Stacks, you can model your application as a stack containing different layers, such as load balancing, database, and application server. Within each layer, you can provision Amazon EC2 instances, enable automatic scaling, and configure your instances with Chef recipes using Chef Solo. This allows you to automate tasks such as installing packages, programming languages and frameworks, and configuring software.

Example Use Case Scenarios

Scenario 1 : Customer with Existing Chef or Puppet Adoption

A customer has adopted Chef or Puppet within their workload and is looking to continue with this pattern, but would like to move to an as-a-Service (aaS) model to reduce platform management overheads and to leverage the high availability (HA) and Premium/Enterprise feature benefits.

  • Create and deploy a managed Puppet Enterprise or Chef Automate server
  • Complete the configuration or upload cookbooks
  • Add nodes to manage
  • Use the service to configure and maintain them

Scenario 2 : Customers looking for IaC (Infrastructure as Code) with lifecycle management.

A customer using Chef wants to leverage this further and have a native interface to control infrastructure architecture in addition to instance configuration, not only during deployment but also through the life cycle. OpsWorks can provide this with OpsWorks Stacks, as an alternative to, or a partner with, CloudFormation.

  • Create a Stack
  • Create Layers in the Stack – EC2, RDS, ELB, ECS. 
  • Add Instances to the Layers – Based on Chef Builds
  • Deploy the Stack
  • Maintain the instances via Chef over the lifetime of the instances
  • Includes Monitoring and Security of the Stack
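The steps above can be sketched with boto3's OpsWorks client. This is illustrative only: the names, ARNs, layer set and instance type are hypothetical, and a real stack also needs a service role and default instance profile created beforehand.

```python
# Sketch: create a stack, add a layer, and add an instance to that layer.
# All names, ARNs and the instance type are hypothetical placeholders.

def build_stack(name: str, service_role_arn: str, instance_profile_arn: str,
                region: str = "eu-west-1") -> str:
    import boto3  # imported here so the sketch can be read without AWS access

    ow = boto3.client("opsworks", region_name=region)

    stack = ow.create_stack(
        Name=name,
        Region=region,
        ServiceRoleArn=service_role_arn,
        DefaultInstanceProfileArn=instance_profile_arn,
    )
    # One layer per tier; instances added to a layer get its Chef recipes applied.
    app_layer = ow.create_layer(
        StackId=stack["StackId"],
        Type="custom",
        Name="app-server",
        Shortname="app",
    )
    ow.create_instance(
        StackId=stack["StackId"],
        LayerIds=[app_layer["LayerId"]],
        InstanceType="t2.medium",
    )
    return stack["StackId"]
```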


You should use OpsWorks in place of CloudFormation if you need to deploy an application that requires updates to its EC2 instances. However, if your application uses a lot of AWS resources and services, including EC2, or you have no existing skills in Puppet or Chef, consider using a combination of CloudFormation and OpsWorks. CloudFormation with Systems Manager is also a viable alternative; however, this relies on further agents running on the instances.

DevOps Tools Review – AWS CodeStar

Service Description

AWS CodeStar enables management and rapid delivery of software projects leveraging DevOps principles. With AWS CodeStar, you can use a variety of project templates to start developing applications on Amazon EC2, AWS Lambda, and AWS Elastic Beanstalk.

 AWS CodeStar projects support many popular programming languages including Java, JavaScript, PHP, Ruby, and Python. AWS CodeStar allows you to accelerate application delivery by providing a pre-configured continuous delivery toolchain for developing, building, testing, and deploying your projects on AWS. 

You can easily secure access for your projects through built-in security policies for various roles including owners, contributors, and viewers. 

The project dashboard in AWS CodeStar makes it easy to centrally monitor application activity and manage day-to-day development tasks such as recent code commits, builds, and deployments. Because AWS CodeStar integrates with Atlassian JIRA, a third-party issue tracking and project management tool, you can create and manage JIRA issues in the AWS CodeStar dashboard. 

Some of the advantages include:

  • Use a code editor of choice, such as Visual Studio or Eclipse
  • Team access management is integrated with IAM
  • Leverages CodeBuild, CodeDeploy and CodePipeline to deliver CI/CD
  • Provides an intuitive user interface for developers or managers to set up the pipeline

Example Use Case Scenarios

Scenario 1 : Customer wants a toolset to rapidly develop a PHP web application and deploy it on AWS EC2

A customer has a requirement to quickly transform internal practices to align with DevOps principles and has a software project that needs to be deployed on AWS. The software is designed to be deployed to EC2. CodeStar can be leveraged in this scenario to achieve a quick win. To set this up, the following are required:

  • A project template is chosen based on the programming language and infrastructure of choice
  • A GitHub or CodeCommit repository is created or reused as a version-controlled source code store
  • CodeDeploy is used for the deployment of the code to the EC2 instance

Scenario 2 : Customer wants a service integrating with JIRA for project management, access management and bug tracking with a centralised dashboard

A customer has a requirement to increase velocity and to manage a new software project and its issues from a central location. Dashboards and visual tools for management are also non-functional requirements. CodeStar provides a project dashboard and integrates with Jira to enable ticket management for features, issues and bugs. CodeStar also simplifies access management for developer teams, because it provides built-in role-based policies that follow AWS security best practices; that is, you can easily manage access for project owners, contributors and viewers without needing to manually configure your own policy for each service. This enables access and project management without a lot of overhead and integrates with the existing ticketing system of the organisation, giving a quick win.

Scenario 3 : Customer wants to learn about CI/CD and requires a Proof of Concept

A customer is exploring DevOps transformation and wants to know more about continuous delivery and integration. As a DevOps consultancy, CirrusHQ can set up a Proof of Concept with a real-world application without a lot of overhead, to advocate the advantages of a CI/CD pipeline and achieve a quick turnaround.


CodeStar is limited by the languages and deployment methods it supports. The supported deployment targets are only Amazon EC2, AWS Elastic Beanstalk and AWS Lambda.

CodeStar cannot be used for managing existing applications

CodeStar can only integrate with GitHub or CodeCommit. GitLab and Bitbucket are not supported, and neither are S3 drop deployments, even though these are supported by CodePipeline.

DevOps Tools Review – AWS CodeDeploy

Service Description

AWS CodeDeploy was one of the first DevOps-focused tools released by AWS to help developers deploy code smoothly. It enables automated deployment to other AWS services such as EC2, Lambda functions and ECS, and even to on-premises servers.

It can be used to deploy numerous types of application content, including but not limited to:

  • code
  • serverless AWS Lambda functions
  • web and configuration files
  • executables
  • packages
  • scripts
  • multimedia files

Some of the major advantages of CodeDeploy are that it makes it easier for a developer to:

  • Rapidly release new features.
  • Update AWS Lambda function versions.
  • Avoid downtime during application deployment.
  • Handle the complexity of updating your applications, without many of the risks associated with error-prone manual deployments.

Example Use Case Scenarios

Scenario 1 : Customer starting DevOps transformation – A Quick Win Scenario – Transforming deployment process

A customer has not yet fully implemented infrastructure as code. The customer has a requirement to quickly transform internal practices to align with DevOps principles. CodeDeploy can be leveraged in this scenario to achieve a quick win. Some of the steps involved in the consulting will be:

  • Making sure the code is stored in a GitHub or CodeCommit repository, or that revisions of the code package are uploaded to an S3 bucket
  • Choosing a relevant deployment strategy: in-place or blue/green deployment
  • Setting up deployment groups
  • Configuring CodeDeploy to deploy based on the chosen strategy
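For an EC2 in-place deployment, CodeDeploy reads an appspec.yml file at the root of the revision to learn where to copy files and which lifecycle scripts to run. A minimal sketch follows; the destination path and script names are hypothetical placeholders, while the hook names are standard CodeDeploy lifecycle events.

```python
# A minimal appspec.yml sketch for an EC2 in-place deployment, held as a string
# for illustration. Paths and script names are hypothetical.
APPSPEC_YML = """\
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/my-app
hooks:
  BeforeInstall:
    - location: scripts/stop_server.sh
      timeout: 60
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
  ValidateService:
    - location: scripts/health_check.sh
      timeout: 120
"""

print(APPSPEC_YML)
```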

Scenario 2 : Customer leveraging a hybrid cloud environment – on-premises deployments

A customer has an architecture that spans on-premises infrastructure and the AWS cloud, with instance groups located in both. The customer needs to make the deployment process to both instance groups automated and smoother. This can be achieved by actioning the following:

  • Making sure the code is stored in a GitHub or CodeCommit repository, or that revisions of the code package are uploaded to an S3 bucket
  • The CodeDeploy agent is installed on the instances
  • On-premises instances are registered with CodeDeploy
  • Deployment groups are created for the instance groups

Scenario 3 : Customer leveraging a microservices environment with Lambda functions and ECS – minimising downtime

A customer has an architecture that leverages ECS behind an API Gateway, along with Lambda functions. CodeDeploy with a blue/green deployment strategy can be used to perform deployments in this scenario. The steps to achieve this can be found in the AWS documentation.

There are some scenarios where CodeDeploy alone is not enough:

  • If database changes are involved as part of a deployment process, CodeDeploy doesn’t support them directly; this is then better handled as part of a pipeline
  • If manual approval steps are required, a pipeline also needs to be leveraged along with CodeDeploy


DevOps Tools Review – AWS CodeCommit

Service Description

CodeCommit is a secure, highly scalable, managed source control service that hosts private Git repositories. CodeCommit eliminates the need for you to manage your own source control system or worry about scaling its infrastructure. You can use CodeCommit to store anything from code to binaries. It supports the standard functionality of Git, so it works seamlessly with your existing Git-based tools.

With CodeCommit, you can:

  • Benefit from a fully managed service hosted by AWS. CodeCommit provides high service availability and durability and eliminates the administrative overhead of managing your own hardware and software. There is no hardware to provision and scale and no server software to install, configure, and update.
  • Store your code securely. CodeCommit repositories are encrypted at rest as well as in transit.
  • Work collaboratively on code. CodeCommit repositories support pull requests, where users can review and comment on each other’s code changes before merging them to branches; notifications that automatically send emails to users about pull requests and comments; and more.
  • Easily scale your version control projects. CodeCommit repositories can scale up to meet your development needs. The service can handle repositories with large numbers of files or branches, large file sizes, and lengthy revision histories.
  • Store anything, anytime. CodeCommit has no limit on the size of your repositories or on the file types you can store.
  • Integrate with other AWS and third-party services. CodeCommit keeps your repositories close to your other production resources in the AWS Cloud, which helps increase the speed and frequency of your development lifecycle. It is integrated with IAM and can be used with other AWS services and in parallel with other repositories. For more information, see Product and Service Integrations with AWS CodeCommit.
  • Easily migrate files from other remote repositories. You can migrate to CodeCommit from any Git-based repository.
  • Use the Git tools you already know. CodeCommit supports Git commands as well as its own AWS CLI commands and APIs.

Example Use Case Scenarios

Scenario 1 : Customer wants to keep all users in a single place to make additions, updates and deletions simpler

As CodeCommit is an AWS service, it has tight integration with other AWS services, such as IAM. Using CodeCommit as the customer’s version control system allows the customer to manage access to each repository using IAM users and roles, in the same way they would for other AWS services.

This allows the customer to follow best practices easily and to monitor access to their version control in a single place, without having to manage a separate external set of users for another service.

Scenario 2 : Customer would like to trigger actions based on repository events

As CodeCommit uses CloudWatch Events to trigger different steps of its processes, actions can be integrated with other functions within AWS.

A few examples of this are invoking a Lambda function when code is pushed to a branch, sending an SNS notification when a pull request is opened, and starting a CodePipeline run when a repository changes.


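As a sketch of how such a trigger is expressed, a CloudWatch Events (now EventBridge) pattern matching pushes to a repository's main branch might look like the following. The repository ARN and branch name are hypothetical; the source and detail-type values are the ones CodeCommit emits.

```python
import json

# Illustrative event pattern: match branch pushes (created or updated refs) on a
# hypothetical repository. Attach a Lambda, SNS topic or pipeline as the target.
push_to_main_pattern = {
    "source": ["aws.codecommit"],
    "detail-type": ["CodeCommit Repository State Change"],
    "resources": ["arn:aws:codecommit:eu-west-1:111122223333:my-repo"],
    "detail": {
        "event": ["referenceCreated", "referenceUpdated"],
        "referenceType": ["branch"],
        "referenceName": ["main"],
    },
}

print(json.dumps(push_to_main_pattern, indent=2))
```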
CodeCommit prices based on the number of Git requests beyond 2,000 per active user per month, which is something to be aware of. It also charges per GB-month of storage beyond 10 GB per active user (the first 5 active users are free). Depending on the use case, this might work out more expensive than alternatives. The pricing beyond these levels is:

  • $0.06 per GB-month
  • $0.001 per Git request
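Using the rates quoted above, plus the per-active-user fee that applies after the free tier ($1 per user per month at the time of writing, each user adding 10 GB-month of storage and 2,000 Git requests to the shared pool), a rough monthly estimate can be sketched as:

```python
# Rough monthly cost sketch for CodeCommit, using the rates quoted in the text.
FREE_USERS = 5
USER_FEE = 1.00               # $ per additional active user per month
GB_ALLOWANCE_PER_USER = 10    # GB-month included per active user
REQ_ALLOWANCE_PER_USER = 2_000
GB_OVERAGE = 0.06             # $ per GB-month beyond the allowance
REQ_OVERAGE = 0.001           # $ per Git request beyond the allowance

def estimate_codecommit_cost(active_users: int, gb_months: float, git_requests: int) -> float:
    """Estimate the monthly CodeCommit bill for a team."""
    extra_users = max(0, active_users - FREE_USERS)
    extra_gb = max(0.0, gb_months - active_users * GB_ALLOWANCE_PER_USER)
    extra_reqs = max(0, git_requests - active_users * REQ_ALLOWANCE_PER_USER)
    return extra_users * USER_FEE + extra_gb * GB_OVERAGE + extra_reqs * REQ_OVERAGE

# e.g. 10 active users storing 120 GB-months and making 25,000 requests
print(f"${estimate_codecommit_cost(10, 120, 25_000):.2f}")  # → $11.20
```

A five-person team within the allowances would pay nothing at all, which is why the service often compares well for small teams.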


Some other integrations with popular software that would normally be used as part of the software development lifecycle may either not be as readily available or may need to be built out manually as part of the setup for the customer’s workflows.

Direct collaboration within CodeCommit is limited. This can be overcome by using other services such as CodeStar or Cloud9, but this does constrain users who may prefer their own environment.