Education Ransomware – What Education Should be Learning!

Educational institutions have increasingly been targeted by criminals during the pandemic, resulting in time-consuming and financially damaging recovery efforts. In fact, education suffers more cyber-attacks than any other sector.

With sustained demand for hybrid learning, growing compliance requirements, and the modernisation of legacy IT, threats will continue to increase. However, new cloud technologies also offer robust security for the demands of modern education.

Recently, CirrusHQ hosted an expert panel webinar, “Why Education Must Take IT Security More Seriously”, which looked at the key cyber-attack issues education institutions are experiencing and need to address. The panel also discussed the technical solutions to these challenges and provided a first-hand account from an institution that seized the opportunity after experiencing disruption, revolutionising its infrastructure by shifting from on-premises to the Cloud.

The panel included:

The panel was told that over 30,000 websites are hacked every day, with 40% of companies globally having already been hacked. These are alarming statistics, and because of the scale of the problem many organisations simply bury their heads in the sand, as it appears too big to tackle.

 

The majority of cloud security failures are caused by end users

Since the start of the COVID-19 pandemic, education has been increasingly targeted by cyber criminals, perhaps because the sector is seen as an easy target: historically it has underspent on IT and security. The resulting technical debt has led to complexity – the enemy of security.

This means employees may spend a lot of their time managing that debt to “keep their heads above water” instead of investing time in proactive technical and security optimisation. Also unique is the challenge of a fluid user base that changes frequently, providing only a short window in which to educate on safe practices. Students, until entering higher education, have mainly been responsible only for their own personal IT and device usage, and are therefore vulnerable to phishing, which is responsible for over 70% of attacks.

 

But it is not all bad news: attacks tend not to gain access to the Cloud, which is why many institutions are moving to platforms such as AWS

With ransomware the biggest threat, another reason education is a target is its scale as an industry. Cyber-crime has now overtaken drug crime as the leading criminal activity. It is highly commercial, with a vibrant dark-web market in which software is shared and sold, and once data is stolen it is sold on to maximise revenue. The tooling used to force access to systems, once available only to governments, can now be acquired easily and at little cost, meaning anyone with minimal IT knowledge can set themselves up as a hacker. The payback is high and the chances of being caught are very low. The primary defence against these actors is lowering their success rate: take basic safety steps, because it is a volume game, and aim to be less vulnerable than others to attack – while being mindful that you cannot defend against every attack.

 

To give a sense of what was being discussed, one panellist described first-hand the experience of a cyber attack – the response, and what happened next – from a college’s perspective.

Fortunately, while the college IT team were reviewing their system infrastructure they noticed unusual activity, which prompted an immediate assessment. Recognising that an attack was underway, they instigated containment, and a swift evaluation highlighted that data backups were of the utmost importance.

The college was already reviewing cloud solutions with a number of suppliers, and reached out to Amazon Web Services (AWS) and CirrusHQ, an AWS Advanced Consulting Partner with an Education sector competency. Within 24 hours the college was operating in AWS, having created immutable data storage in the cloud. AWS and CirrusHQ provided the external expertise to plug the college’s skills gap, reverse the damage from the cyber attack and re-initiate services, with the result also tested by the JISC cyber security team. Dormant code placed by the hackers later caused further disruption, but thanks to a stronger security posture and recovery plan it was more readily addressed. Such double extortion occurs in more than 50% of cases and is likened to a virus.

 

The panellists were asked, “Why migrate to the Cloud now?” and  “Is now the time for Educational Institutions to pause and take stock of where they are?”

It was stressed that the ease with which an attack can be orchestrated has made them immensely disruptive. The threat is very real and is happening all the time. Making an immutable backup is key, as it provides confidence – especially as criminals also target data backups to prevent restoration and frustrate an organisation’s ability to recover. An on-premises campus can easily and quickly add cloud storage as another layer of security. And it is not just server backup: the Cloud can provide complete environment restoration, saving potentially months of clean-up after an attack.

As the nature of attacks is always evolving, it was stated that the best approach to dealing with the as-yet unknown is a mindset that an attack could happen at any time. Security should be a topic included in all plans and activities for all stakeholders across an organisation, rather than relying mainly on the IT department. Education has a transient population, so educating on ‘cyber hygiene’ is challenging – the key is not chasing bad behaviour but ensuring good practice.

 

Migration to Cloud platforms such as AWS is a powerful IT solution, but a shift in organisational culture is needed to achieve maximum resilience

Culture is the most important factor for an organisation’s protection over time, and with a poor security culture, organisations can find it difficult to migrate, especially when managing new tools and devolving control to more employees. An organisation must realise it is everyone’s job to uphold good practice, whilst the IT department is there to provide expert help and deliver modern tools and methods such as landing zones and continuous compliance monitoring.

Asked to summarise, the panellists agreed that first-hand experience of an attack highlights that maintaining business continuity requires a cloud-first strategy. Legacy services and infrastructure will not be replaced overnight, but organisations need to take a planned approach and seize “low hanging fruit” opportunities along the way. These short-term activities should then form part of a strategic plan to address the remaining IT estate.

 

Security, resilience, performance, and cost savings are key focuses in migration to the Cloud, along with a shift in system support from looking after hardware to supporting the Cloud

What lessons did the panel think needed to be learned? 

  • Accept that an attack is going to happen – it may already have.
  • As a minimum, have your data in an immutable safe place as part of your recovery plan. 
  • Following an attack, nothing compares to the raw fear of seeing everything stop working – it can be paralysing. The disruption is massive, especially for dispersed organisations whose communications have also been brought down.
  • Create a cyber-attack rehearsal and think of it as a fire drill – it’s not a plan until it’s rehearsed.
  • Test your back-up recovery, as well as your plans to contain and recover. 
  • Finally, look at your culture and make sure it’s not ‘who to blame’ but ‘who will respond to the incident at all levels’ – as it is not just the IT department’s responsibility to ensure business continuity.

To watch the webinar click here.

 


Safeguard your data – Education and Research Organisations

Protect your organisation from attacks and maintain business continuity with a tailored end-to-end data backup and restoration solution from CirrusHQ – read more.

 

 

Safeguarding your organisation means governing your cloud

You have probably read numerous articles about the challenging environment faced by education establishments of all sizes and complexity, from local secondary schools to internationally renowned universities, with cybercrime presenting each with a variety of security threats.

In this situation, security has become the number one priority and for good reason. Whether you are protecting students’ sensitive personal data from hackers or research department intellectual property from malicious state actors, the challenge is similar, despite a potentially huge variation in mitigation budgets.

There’s more to total safety than just security

The focus on security is easy to understand, given the scale of the threat. According to Microsoft research, which highlights the situation over a rolling 30 day period, cyber-attacks on the education sector dwarf those targeting other sectors, accounting for 63.42% of all incidents. By comparison, attacks on the next highest sector, business and professional services, accounted for just 9.2%.

However, security cannot be taken in isolation. Security measures cannot become sticking plasters that merely cover each successive problem as it is discovered, or you begin down the path to a game of whack-a-mole, where you hammer each successive threat, more and more frantically until the mole eventually beats you.

The first step towards a safer security posture is moving to the cloud, and Amazon Web Services (AWS) is the largest cloud provider and among the most specialised for the education sector. While IT security has traditionally rested squarely on the shoulders of the IT department, it is now recognised that a wider shared responsibility across all roles and departments is required. Cloud governance is based on a shared responsibility model, in which both the cloud provider and all stakeholders in your organisation share responsibility for data security and compliance.

Transforming education institutions with cloud and the right partner

CirrusHQ is an AWS Advanced Consulting and Solution Provider Partner, and one of fewer than 50 partners globally that hold the AWS Competency in Education. This means we have the necessary experience and expertise in delivering cloud solutions and services to help facilitate teaching and learning, whilst helping organisations modernise their IT operations.

Over the past 18 months, we have seen amazing innovation among our higher education customers in the face of complex challenges caused by the Covid-19 pandemic. It accelerated the need to transform not only their infrastructure and systems, but the models and methods for how they meet the needs of learners and their institution’s mission.

Education institutions that hadn’t considered the cloud prior to the pandemic are often now utilising a hybrid cloud computing strategy that takes advantage of cloud benefits but still leaves out a critical component: cloud governance.

Cloud governance is your future and it makes sense now

Having made the move to the cloud, a robust cloud governance strategy will help you run your organisation optimally and keep it safe, in an increasingly threat-laden environment.

Effective cloud governance delivers the appropriate level of protection against a number of avoidable risks: data breaches, intellectual property theft, damage to your brand and your reputation, to say nothing of the potential financial loss.

A well-crafted governance strategy will also mitigate the risk of non-compliance with mandates such as PCI, HIPAA and GDPR. In fact, Gartner, the leading IT research and advisory firm, predicts that “through 2025, 99% of cloud security failures will be the customer’s fault.”

Effective cloud governance involves a two stage approach:

1. Understanding of how the four pillars – visibility, configuration, operations and risk – impact the security of your cloud assets. Each pillar has been developed through years of experience and thousands of successful AWS deployments.

2. Developing best practices for continuously monitoring, assessing, and optimising each of these.

To help cloud governance become an integral part of your computing strategy, together with AWS, we have created a succinct guide. It explains the business case for cloud governance, and a strategy to follow for helping you safeguard your organisation – to drive not only your success, but that of your students and colleagues.

Is your organisation ready to take the next step on its cloud governance journey? Ask yourself:

• Do we have full visibility into our cloud infrastructure?

• Can we easily and effectively find and remediate misconfigurations?

• Do we have automated processes for managing cloud operations?

• Can we find, address, and reduce risks at scale?

Download the guide to help you make better informed decisions around the cloud, AWS and CirrusHQ. 

AWS Container series – Blog 3 – Troubleshooting ECS Containers using ECS Exec

Introduction

Troubleshooting Docker containers locally is as simple as executing commands against, or opening a shell directly into, the container. Within AWS this process changes, with the level of complexity depending on whether you use traditional Amazon EC2 instances or serverless AWS Fargate to host your Docker containers.
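For reference, that local workflow is typically a single docker exec call; the container name below is illustrative:

# List running containers, then open an interactive shell in one of them.
docker ps
docker exec -it my-app /bin/bash

# Or run a one-off command without opening a shell.
docker exec my-app ls /var/www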

Debugging Fargate containers is notoriously tricky, considering you cannot access the underlying instance. A newly released AWS feature makes accessing Fargate containers possible by allowing command execution in much the same way as docker exec.

Amazon ECS Exec

Amazon ECS Exec utilises AWS Systems Manager Session Manager to run single commands or open a shell to troubleshoot running containers. Session Manager has long been established as a way to open shell sessions on EC2 instances directly from the AWS console, without SSH. For ECS Exec, however, console access is still in development, with access currently only via the SDK and CLI.

This new feature addresses a key reason why some developers have resisted AWS Fargate: although it is serverless and reduces the management overhead, including patching and scaling, the inability to access the container itself has been off-putting for many.

AWS best practice strongly recommends against accessing containers directly via SSH to debug and troubleshoot; the recommended approach is for developers to implement monitoring and log analysis, keeping engineers at a distance from the data. ECS Exec does not replace this best practice; rather, it complements it by ensuring that when direct access is needed, it is secure and encrypted, using temporary credentials.

The ability to access a running container within AWS is especially useful when debugging containers early in their lifecycle. It allows container analysis and the retrieval of data that feeds into future development, and allows production-critical errors to be fully investigated and resolved more efficiently than by redeployment.

As a new feature, Amazon ECS Exec has a few prerequisites, and it must also be enabled on the service (sketched after the list below):

  • Linux containers only; Windows is not currently supported.
  • The latest ECS-optimised AMI (for the EC2 launch type).
  • ECS Container Agent version 1.50.2 or later, or Fargate platform version 1.4.0 or later.
  • The SSM Session Manager plugin installed alongside the AWS CLI used to issue the commands.
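A minimal sketch of enabling the feature on an existing service; the cluster name matches the examples later in this post, while the service name is illustrative:

# Turn on ECS Exec for the service and roll its tasks so the setting takes effect.
aws ecs update-service \
    --cluster ecs-exec-demo-cluster \
    --service demo-service \
    --enable-execute-command \
    --force-new-deployment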

Security Considerations

Accessing production Docker containers can raise a few questions, especially around the security implications.  AWS has closely integrated IAM roles and policies to tightly lock down access to executing ECS Exec commands.  Access can be restricted via three key attributes – ECS tag values, container name and ECS cluster ARN. For example, a role could deny access to a MySQL container while allowing access to NGINX and API containers; in this instance, the MySQL container is locked down to keep users away from the data.

 Additionally, a new ECS action has been created: ecs:ExecuteCommand.  An example IAM policy can be viewed below:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecs:ExecuteCommand"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/tag-key": "<tag-value>",
                    "ecs:container-name": "<container_name>"
                }
            },
            "Resource": "arn:aws:ecs:<region>:<aws_account_id>:cluster/<cluster_name>"
        }
    ]
}

Developers can lock down access to a specific Amazon ECS Cluster and each container running within the cluster, allowing for fine-grained control on a container by container level. All Amazon ECS Exec commands are logged via CloudTrail for auditing. 

Logging

Amazon ECS Exec allows commands to be logged to an S3 bucket and/or a CloudWatch Logs log group for analysis, archiving, and auditing purposes. Logging options are configured in the execute command configuration, which defines the logging destination (S3, CloudWatch Logs or both), encryption settings and the S3 key prefix.

Commands that pass /bin/bash and create a shell in the container will log both the commands and their output to the chosen logging location, whereas single commands log only the command output.

Logging can be disabled entirely, set to use the existing awslogs configuration within the container’s task definition, or overridden with the configuration specified in the execute command configuration, as detailed below, when creating the ECS cluster via the CLI.

executeCommandConfiguration={kmsKeyId=string,\
                            logging=string,\
                            logConfiguration={cloudWatchLogGroupName=string,\
                                            cloudWatchEncryptionEnabled=boolean,\
                                            s3BucketName=string,\
                                            s3EncryptionEnabled=boolean,\
                                            s3KeyPrefix=string}}
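For example, the shorthand above can be supplied when creating the cluster via the CLI; the cluster, log group, bucket and prefix names below are placeholders:

# Create a cluster whose ECS Exec sessions are logged to CloudWatch Logs and S3.
aws ecs create-cluster \
    --cluster-name ecs-exec-demo-cluster \
    --configuration 'executeCommandConfiguration={logging=OVERRIDE,logConfiguration={cloudWatchLogGroupName=/ecs/exec-demo,cloudWatchEncryptionEnabled=true,s3BucketName=ecs-exec-demo-logs,s3EncryptionEnabled=true,s3KeyPrefix=exec-output}}'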

 

Executing Commands

As previously mentioned, commands can currently only be executed via the AWS CLI or SDK, with console access planned for a future release. Once the Amazon ECS cluster, task definition, service, and task are running with the prerequisites met, the task’s IAM role also needs to allow AWS Systems Manager Session Manager access and to create and put logs to S3 and CloudWatch Logs.
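As a sketch, the Session Manager part of those permissions can be attached to the task role as an inline policy (logging permissions for S3 and CloudWatch Logs would be added similarly); the role and policy names below are illustrative:

# Allow the task to open the SSM message channels that ECS Exec relies on.
aws iam put-role-policy \
    --role-name ecs-exec-demo-task-role \
    --policy-name ecs-exec-ssm-channels \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": "*"
        }]
    }'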

Running Single Commands

Executing single commands only logs the command output to S3 or CloudWatch Logs.  For example, if a simple pwd is run, the default directory of the container will be printed.  Similarly, to list the directory contents, the following command can be executed:

aws ecs execute-command  \
    --region $AWS_REGION \
    --cluster ecs-exec-demo-cluster \
    --task 1234567890123456789 \
    --container nginx \
    --command "ls" \
    --interactive

 

Creating a Shell

Creating a shell gives full CLI access to the container, allowing you to navigate, troubleshoot, and resolve any container errors.  Accessing a shell is similar to running a single command such as ls, except that /bin/bash is passed instead. Importantly, in this mode, Amazon ECS will log all commands and output to S3 or CloudWatch Logs.

aws ecs execute-command  \
    --region $AWS_REGION \
    --cluster ecs-exec-demo-cluster \
    --task 1234567890123456789 \
    --container nginx \
    --command "/bin/bash" \
    --interactive

CirrusHQ

CirrusHQ has a wide range of experience in provisioning, deploying, managing, and troubleshooting container instances within AWS. If you would like to find out more from CirrusHQ regarding your container deployments, feel free to contact us via our contact page.

A Well-Architected Use Case: Securing WordPress by aligning best practices to the Security Pillar

Security is often described as job zero at AWS. It is paramount to the success of any workload running on AWS, and this ethos is replicated at CirrusHQ. For example, as part of our Well-Architected reviews, the security pillar is a key focus of our remediation efforts. As we continue to highlight the importance of security, we have focused here on a typical concern: securing WordPress on AWS.

WordPress is one of the most popular blogging, ecommerce, and marketing content management platforms. With over 60% of the CMS market, it has become a common target for hackers and spammers trying to maliciously affect sites hosted on it. Given the known threats against WordPress, some basic security precautions are required to reduce the chance of an installation being compromised. We pride ourselves on the rigour we have introduced for our customers when setting up and securing their WordPress-based websites; as a result of these preventative activities, we significantly reduce the risk of malicious compromise.

This blog focuses on how to secure WordPress by aligning it to the security pillar of the well architected framework. Broadly the security pillar can be split into the following:

  1. Identity and access management
  2. Detection and Incident Response
  3. Infrastructure and Data Protection

Identity and access management

Implement 2FA for user logins

2FA, or two-factor authentication, requires a user to authenticate in two different and independent ways, and is one of the industry-standard best practices for preventing login exploits. Typically, a generated one-time token is used in addition to the password associated with the user account. This can be achieved by installing a suitable plugin from the WordPress plugin directory, or a custom plugin.

Remove/rename admin user

Every WordPress install asks you to create an admin user during initial setup. It is advisable to give this user a name other than the default ‘admin’. This user also usually has a user ID of 1. Neither is exactly a secret, as they are the defaults for most WordPress sites, which plays right into the hands of an attacker’s information-gathering process. It is therefore advisable to use a name other than admin and change the user ID to something other than 1.

Grant least privilege by leveraging different user permission levels

WordPress has inbuilt functionality to specify access levels. These include roles like Administrator, Editor, Author, Subscriber. For a default use case as a blogging platform, these can be used natively. Alternatively this functionality can be extended and custom roles can be created either via a custom plugin or one of the numerous plugins available in the WordPress plugin repository

Infrastructure and Data Protection

Make use of database prefixes and change default salt values

An important consideration when setting up a WordPress database is to give the tables an uncommon prefix. Since it is common knowledge that WordPress installs default to the wp_ prefix, SQL injection and other exploits can in theory be run blind, brute-force style, without the attacker knowing much about the site’s setup. In fact, many published WordPress-specific SQL injection attacks assume that the table_prefix is the default wp_; changing it can block these opportunistic attacks.

In addition, another consideration for the database is to change the authentication keys and salt values found in wp-config.php to custom values.
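Fresh values can be generated from the official WordPress secret-key service and pasted over the existing definitions in wp-config.php:

# Print a newly generated set of authentication keys and salts.
curl https://api.wordpress.org/secret-key/1.1/salt/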

Disable PHP and file execution inside uploads directory and give proper permissions

WordPress has a folder called uploads into which users can upload content. It is important to disable PHP execution inside this directory, as well as the execution of any other file types. The rationale is to mitigate the risk of an executable file being uploaded that would give an intruder the ability to compromise the site. Execution of specific file types can be blocked with the help of .htaccess rules. Lastly, never give the uploads directory 777 permissions; properly configured, it requires at most 755.
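On an Apache-based server, one common way to do this is a small .htaccess file in the uploads folder; the sketch below assumes Apache 2.4 syntax and a default install path:

# Block execution of any PHP files placed in wp-content/uploads.
cat > /var/www/html/wp-content/uploads/.htaccess <<'EOF'
<FilesMatch "\.php$">
  Require all denied
</FilesMatch>
EOF

# Keep the directory at 755 rather than 777.
chmod 755 /var/www/html/wp-content/uploads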

Permissions for wp-config

There is no reason for wp-config.php to remain writable after the WordPress site has been configured and is running without issue in production. To prevent a vulnerability or hack from manipulating this file, it is advisable to remove write and execute permissions and leave it read-only, i.e. 444. This can be done after all the required plugins have been installed.
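A minimal example, assuming a default install path:

# Make wp-config.php read-only once the site is stable in production.
chmod 444 /var/www/html/wp-config.php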

Remove files like Readme.html / Readme.txt

These files contain information about the WordPress install and are easily accessible by URL. Attackers look for them first to learn which version of WordPress is running, then identify and try to exploit any vulnerabilities in that specific version. It is advisable to remove these files or deny access to them.
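The simplest approach is to delete the files after each install or update; the paths below assume a default install:

# Remove files that disclose the WordPress version.
rm -f /var/www/html/readme.html /var/www/html/license.txt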

Keep WordPress and Plugins up to date

The WordPress team is currently excellent at identifying and patching security issues with the WordPress core. It’s important to keep track of updates released by them and apply as required. 

The same applies to the plugins you choose: keep them patched and monitor them for security vulnerabilities. Given the volume of plugins in use, it is important to ensure they are sourced from reliable developers and third parties – unfortunately, plugins can be built with malicious intent.

Detection and Response

Leverage WAF to block common attack scenarios

AWS WAF (Web Application Firewall) can be implemented at the CDN level, or at the load balancer when the site is built for scale. In cases where WordPress is used as a microsite living on a single instance, a software WAF can instead be installed alongside the web server to inspect traffic and protect the site against attacks. A common strategy is to protect against the OWASP Top 10 exploits (https://owasp.org/www-project-top-ten/). In addition, a variety of managed rule sets can be applied to AWS WAF, including a WordPress-specific AWS managed rule set, to provide strong protection for the website. It is highly recommended to also enable the PHP application rule set and the IP reputation rule groups. Together these help automatically detect and respond to vulnerability scanning, SQL injection, and cross-site scripting attacks.
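The managed rule groups mentioned above can be listed via the WAFv2 CLI before being added to a web ACL; the scope shown assumes a CloudFront (CDN-level) deployment, which must be queried from us-east-1:

# List the managed rule groups available for a CloudFront-scoped web ACL.
# The WordPress, PHP and IP reputation sets appear as AWSManagedRulesWordPressRuleSet,
# AWSManagedRulesPHPRuleSet and AWSManagedRulesAmazonIpReputationList.
aws wafv2 list-available-managed-rule-groups --scope CLOUDFRONT --region us-east-1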

Make use of Security Plugins

Quite a few plugins are available for securing WordPress sites; we recommend the iThemes Security plugin. It provides a simple, centralised way to configure many of the above-mentioned security reinforcements, scans the WordPress site and suggests further hardening that can be applied, and also detects and blocks brute-force and scanning attacks.

Leverage AWS Services like GuardDuty and Shield

AWS provides managed security services such as GuardDuty, which automatically and continuously monitors VPC Flow Logs and DNS logs to provide actionable alerts about malicious activity. Shield and Shield Advanced provide protection against DDoS attacks, which are a significant threat to popular WordPress sites.

 

At CirrusHQ we can analyse and help secure your WordPress installations. Our focus is on migrating them to a secure, highly scalable and highly available AWS architecture to improve performance and reduce your recovery time if anything impacts your website. We hope these steps help you secure your WordPress websites.

 If you would like to find out more from CirrusHQ regarding your WordPress sites on AWS, feel free to contact us via our contact page.

Introduction

Containers allow an application, its code, and its dependencies to be packaged into a lightweight image. This image can be deployed to different servers and environments while running the application with the same results every time. In addition, containers offer a great deal of flexibility when selecting the infrastructure, environment or service you choose to host your containerised application.

Amazon Elastic Container Service (ECS)

The AWS container orchestration service, Amazon Elastic Container Service (ECS), simplifies running Docker containers on AWS by managing the provisioning and deployments of containers.  Amazon ECS provides a highly available, highly scalable container solution that manages them across a single or multiple hosts. The ability to apply load balancing across multiple containers is a key feature of the service which dynamically registers containers to a specific load balancer, listener, and target group to accommodate cycling, deployments or scaling events.

Container settings and configuration are predominantly managed by Task Definitions and Services.  Services orchestrate the containers by cycling unhealthy containers, enabling the management of how the containers scale, and the network configuration such as VPC, Subnets and Load Balancer settings. 

Task definitions handle the configuration of the container image and are associated with the container’s service. They define the container port, mount points, logging configuration (such as exporting all logs to CloudWatch Logs) and where images are pulled from, whether Docker Hub or AWS’ Elastic Container Registry (ECR). Environment variables are set within the task definition to bake values into the container on deployment; these can be stored in plaintext, or as sensitive data securely stored in and retrieved from AWS Secrets Manager.

Traditionally, Amazon ECS deploys containers directly onto Amazon EC2 instances, which allows full access to the underlying infrastructure. That deeper level of access also comes with the typical operational overhead of maintaining EC2 instances, such as provisioning, monitoring, scaling and patching. Some of these tasks can be simplified by utilising AWS services such as AWS Systems Manager’s Patch Manager to automate patching, while utilising serverless architecture with the AWS Fargate option for ECS can mitigate much of this operational management overhead.

Serverless Containers

Serverless is a cloud-native architecture that makes operating, maintaining and patching servers a thing of the past. AWS fully manages key infrastructure tasks such as provisioning, patching, upgrading, and capacity management, allowing developers to focus entirely on developing and getting the most out of their applications.

AWS Fargate removes the need to provision, manage, and scale Amazon EC2 instances. A server-based architecture requires containers – grouped into tasks – to be launched onto EC2 instances with the ECS container agent installed, with additional configuration applied to an auto scaling group to scale the number of instances based on capacity. With Fargate, tasks are launched, scaled and run on fully managed infrastructure, removing the need for developers to manage the underlying servers: only the Docker image, CPU, and memory need to be defined.
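As a sketch of how little needs to be defined, a Fargate task can be launched with just a task definition (which carries the image, CPU and memory) and basic networking; every identifier below is a placeholder:

# Launch one Fargate task into an existing cluster, subnet and security group.
aws ecs run-task \
    --cluster demo-cluster \
    --launch-type FARGATE \
    --task-definition demo-task:1 \
    --count 1 \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}'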

CirrusHQ has a breadth of knowledge on deploying, running, and supporting workloads using Containers on AWS. If you would like to find out more from CirrusHQ regarding your docker containers on AWS, feel free to contact us via our contact page.

 

Lambda Container Support – Re:Invent 2020

Introduction

AWS has kicked off its annual re:Invent keynotes and sessions this year and has already launched some interesting new features and products. One of these that caught my eye was the Lambda Container Support.

Lambda Container Support is an extension of the work AWS has been doing regarding custom runtimes and similar features. The idea is that you can use your existing docker containers with not too many changes and use the same processes and tools you are used to in Docker to test and deploy lambda functions.

This example will go over creating a simple Docker container from the python:buster base image (which is based on Debian), installing the AWS Lambda Python Runtime Interface Client (https://pypi.org/project/awslambdaric/), a Python module that allows the container to interact with Lambda, and running a simple Python script to demonstrate the process.

The Python Script

For this example, the actual Python script is very simple; it just returns the content of /etc/os-release to show we are running on Debian:

def handler(event, context):
   f = open("/etc/os-release", "r")
   version = f.read()
   return version

The dockerfile

To demonstrate a container being compiled and uploaded to be deployed to the lambda service, we create the below docker file.

In the same directory we have the above python script in app/app.py

ARG FUNCTION_DIR="/home/app/"

FROM python:buster
ARG FUNCTION_DIR
RUN mkdir -p ${FUNCTION_DIR}
COPY app/*.py ${FUNCTION_DIR}

RUN apt-get update \
   && apt-get install -y \
       g++ \
       make \
       cmake \
       unzip \
       libcurl4-openssl-dev
RUN python3 -m pip install awslambdaric

WORKDIR ${FUNCTION_DIR}
ENTRYPOINT [ "/usr/local/bin/python3", "-m", "awslambdaric" ]
CMD [ "app.handler" ]

This docker file does the following:

  1. Sets our function directory to /home/app/
  2. Gets the latest python:buster base image
  3. Creates our function’s directory
  4. Copies our python script into the function directory
  5. Updates the base image and installs some build dependencies for aws lambda runtime interface client
  6. Installs the runtime interface client module
  7. Sets our working directory to the functions directory for the path
  8. Set our entrypoint (the interpreter – in this case python3 with the awslambadic module loaded)
  9. Set our command, which is our handler.

In effect, the lambda process will run /usr/local/bin/python3 -m awslambdaric app.handler and pass in the event and context

Pushing to AWS Elastic Container Registry

To build and push to ECR, you need to first login:

aws ecr get-login-password --region REGION --profile PROFILENAME | docker login --username AWS --password-stdin ACCOUNTID.dkr.ecr.REGION.amazonaws.com

Where REGION is your region, such as eu-west-1, PROFILENAME is your aws profile (optional depending on your aws credential setup) and ACCOUNTID is your AWS account id.

Then create a new repository:

aws ecr create-repository --repository-name REPONAME --region REGION --profile PROFILENAME

Where REPONAME is your repository name, REGION is your region, such as eu-west-1, PROFILENAME is your aws profile (optional depending on your aws credential setup).

Then in your directory where your dockerfile is:

docker build -t REPONAME .

Tag your container; in this case, I’m using latest:

docker tag REPONAME:latest ACCOUNTID.dkr.ecr.REGION.amazonaws.com/REPONAME:latest

Push your container:

docker push ACCOUNTID.dkr.ecr.REGION.amazonaws.com/REPONAME:latest

Creating the Lambda

To create the lambda from a container, create a new function and choose the Container Image option:

 

[Screenshot: Create Lambda using Container Image]

 

Choose a name, and then choose Browse Images. Choose your repository and the latest container.

Once the Lambda has been created, you can create an empty test event and run the function. It will output the Debian version details:

 

[Screenshot: Run the Lambda to see the Debian OS details]

 

If you make an update, then just build, tag and push your container, and to update in lambda, choose Deploy New Image:

 

[Screenshot: Click Deploy New Image to deploy a new version of your container]

 

Then as before, Browse Images and choose your repository and the latest container. It will deploy a new container based on the latest version in the repository.
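The same update can also be scripted rather than clicked through in the console; the function name is illustrative and the image URI follows the placeholders used in the push commands above:

# Point the function at the newly pushed image tag.
aws lambda update-function-code \
    --function-name my-container-lambda \
    --image-uri ACCOUNTID.dkr.ecr.REGION.amazonaws.com/REPONAME:latest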

Conclusion

As you can see, a lot of the same processes used when creating and deploying docker containers can be used for Lambda Containers, including features such as repository scanning. Using Lambdas as your way of executing containers brings all the other features Lambda offers, such as triggers and tight integrations with other AWS Services. 

Introduction

A key component of deploying applications to the cloud with a Devops mindset is ensuring that you are keeping up to date with all of the latest patches and changes that are applicable to your environment. This can take the form of using Managed Services from AWS and others that take care of some of the underlying updates for you, or provide maintenance windows for applying patches and security fixes.

For other services where you need the flexibility of running your own EC2 instances, keeping them up to date with the latest patches and fixes can be a challenge. AWS offers a number of options for this, from providing the latest AMIs with patches applied while staying within a product family (for example, staying within Ubuntu 16.04 LTS as a short-term action before moving to Ubuntu 18.04 LTS), to using AWS Systems Manager to perform automated patching on a regular cadence to ensure your instances are up to date.
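As an illustration of the first option, the current patched AMI ID can be looked up from AWS’ published Systems Manager parameters rather than hard-coded; the parameter below is the public path for Amazon Linux 2:

# Fetch the latest Amazon Linux 2 AMI ID for the current region.
aws ssm get-parameters \
    --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
    --query 'Parameters[0].Value' \
    --output text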

One of the main recommendations we make to our customers is that patches are applied to a test or staging environment before they are rolled out to production, so that the compatibility can be confirmed between the application updates, operating system updates and your own code that may be running on the instance.

Once these changes have been validated, the same set of patches can be applied to your production environment. This is critical to ensure that patches released between your initial testing and your roll-out to production aren’t inadvertently included, as they may not have been tested with your application.

This approach works well and provides security of knowing that your application and instances are up to date, but it does also require manual intervention from Devops engineers to ensure compliance and compatibility. 

This is where automated testing comes into play, reducing the need for manual intervention while still providing the required testing.

At CirrusHQ we pride ourselves on delivering your workloads with these capabilities introduced from the outset, please do contact us to find out more about our services.

Automated Compliance

When working on compliance with industry standards or internal compliance requirements, a number of tools and policies are normally set up to clean up standard instances and make sure that each image that is deployed conforms to a set of standards.

A good example of this is the AWS Best Practices for hardening AMIs before deployment.

Other AWS tools, such as Amazon Inspector, can run a number of tests against your instances and provide findings for actions to be taken.

This document describes a way of including these best practices and the benefits of Amazon Inspector in an automated way using EC2 Image Builder.

EC2 Image Builder

AWS provides the EC2 Image Builder service for creating your own customised AMIs with all of your own software installed, best practice security and compliance applied and tests run against your instances, all through an automated pipeline that is repeatable and can be run on a schedule to ensure your deployments have the latest patches and fixes that are also tested.

Overview

EC2 Image Builder is designed to create golden AMIs for your application software. These are Amazon Machine Images that have been customised for your specific use case, with your set of applications and code installed. As an example, you could take one of Amazon’s patched Amazon Linux 2 images with EBS volume support, install PHP and your code along with other packages, run security scripts against it to harden the image, run some tests, and at the end of the process have a golden AMI that can be used anywhere you need to run your application.

Having an AMI that is pre-updated, hardened and ready to go, makes launching new instances via Auto-Scaling groups much easier and quicker.

EC2 Image builder is split into a number of stages. These are Build, Validate and Test. These are all performed in an Image Pipeline, which is run through to create the AMI at the end.

Build Stage

The build stage is where you customise the AWS-provided image to meet your needs. There are a number of different actions you can take here.

The full build stage is described as a Recipe, that can be reused in other places. The individual actions within that Recipe are called Components, and are also reusable. 

Each of these components can be set up separately and can be reused as needed. An example below is a command to update a linux instance, while excluding a specific package.

name: UpdateMyLinux
action: UpdateOS
onFailure: Abort
maxAttempts: 3
inputs:
    exclude:
        - ec2-hibinit-agent

Components can also be created and controlled through CloudFormation or the CLI.
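As a sketch of the CLI route, a step like the one above – wrapped in a full component document and saved to a file – can be registered as follows; the name, version and file path are illustrative:

# Register the update component from a local YAML document.
aws imagebuilder create-component \
    --name UpdateMyLinux \
    --semantic-version 1.0.0 \
    --platform Linux \
    --data file://update-linux.yml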

The build step is the main section of the Image builder. This is where your application will be set up to work as you need it.

Validation Stage

Within the validation stage, you can run tasks that will ensure that your AMI is up to the standards you require. One example of this is by using AWS Inspector to run against your AMI and ensure that all of your Inspector requirements are met. 

If one of the requirements fails, then you can stop the Image Builder process at that point and manually notify your team to investigate the issue.

Test Stage

The test stage is used to further ensure that your image is built the way you require. It can encompass unit testing and as many or as few checks as you need. For example, you may want to make sure that when a new instance is spun up you can connect to the application on a specific endpoint, and you can write a test to check that this works correctly.

Then, as with the Validation stage, you can stop or proceed with the build as needed.

Automation using CloudFormation and Code Pipeline

The Image Builder process provides a lot of automation for creating your images; however, you can go a few steps further and automate the full end-to-end deployment process, so that when you deploy new code a fresh golden AMI is created with your code installed. That way you can be sure that the latest configuration updates and operating system patches have been applied, and that your code has been validated and tested, before you deploy it.

As the EC2 image builder creates a pipeline that takes in an AMI and applies recipes and components, then outputs another AMI, this process can fit very well within a Code Pipeline by using the Cloudformation support released for EC2 Image Builder. 

As an example, we can create a CodePipeline setup that connects to a GitHub repository, downloads the infrastructure CloudFormation templates as well as the application artifacts, creates the EC2 Image Builder pipeline, runs the image pipeline as described to this point, and then publishes the resulting AMI ID to the AWS Parameter Store so that other AWS products (such as another CloudFormation deployment) can access it.

As you can see, there are a wide range of customisations that could be done at this point (in this example, we have also created a Lambda Stack deployment for further control of the Image Pipeline stages and for cleanup). You have the full power of Cloudformation and Code Pipeline to take the AMI that is created and use it for any number of processes. This allows you to have confidence that your deployments are using a pre-prepared, hardened and up to date Image containing your application.

To further enhance the automation, the EC2 Image Builder pipeline can be configured to be run on a regular schedule, bringing in the latest configurations and updates for each section as required, so that even if you have not rolled out any new code recently, your instances are still kept up to date with the latest patches and your tests have been run to ensure your instances are working as required.

Cost

AWS EC2 Image Builder is itself free, but each of the resources you use will be chargeable. So for example, you will be responsible for paying the costs associated with any logs, the use of AWS Inspector, the time the test image is running in your account, and any other resources used during the creation of the AMI. However, as this process is automated, the costs are kept to a reasonable level and any unused resources are removed when required.

If you would like to know more, we’d be delighted to have a conversation about your DevOps practices and how CirrusHQ can help automate compliance – please visit our contact page.

Devops Tools Review – AWS CodeArtifact

Service Description

AWS CodeArtifact (not to be confused with the compliance tool AWS Artifact) allows organisations to securely store, publish and share software packages used in their software development process.  User or organisation created or curated code can be organised into a custom repository and be logically linked with public repositories before it is made available for distribution.

The organisational unit for custom and public repositories is known as a domain.

Leveraged parts of public repositories are also cached within the CodeArtifact repository.

Example Use Case Scenario 1

An application has been developed which uses Python scripting to perform image processing and video manipulation. CodeArtifact is used to create a domain. Within the domain, a repository is created for the user-created or “fresh” packages, and a public upstream repository is chosen during setup – in this case PyPI. Both repositories are listed in the same single pane within CodeArtifact, grouped together as a domain.

Once the domain has been created, the locally built artefacts can be published to the repository via the CLI.

Once the application has been installed by the end user, pip libraries such as Pillow and moviepy are pulled through the public upstream repository, delivered as part of the package, and can be cached within the CodeArtifact repository as a resiliency failsafe.
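A sketch of that setup using the CodeArtifact CLI; the domain and repository names are illustrative, and the final command points pip at the new repository:

# Create the domain and repository, then link the public PyPI upstream.
aws codeartifact create-domain --domain demo-domain
aws codeartifact create-repository --domain demo-domain --repository app-packages
aws codeartifact associate-external-connection \
    --domain demo-domain --repository app-packages \
    --external-connection "public:pypi"

# Configure pip to install packages through the new repository.
aws codeartifact login --tool pip --domain demo-domain --repository app-packages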

Example Use Case Scenario 2

Fuelled by a need to reduce operating expense, an organisation is looking for an alternative to their existing GitHub solution and licensing costs.

CodeArtifact (in conjunction with CodeCommit for source and version control) can be leveraged in order to provide a pay-as-you-go model for services which require no maintenance from the customer. This can allow the customer more time to concentrate on producing code.

Additionally, this brings further benefits. Users have one less set of credentials to work with, as they can use existing IAM accounts, and administrators can use IAM roles to manage access to packages. Known, working upstream packages can be cached, which protects against upstream code drift causing local builds to fail.

Features of CodeArtifact

  • A secure location for storage of software packages. Integration with AWS Key Management Service to provide encrypted storage.
  • Reduction in overheads. No need to host an always-on repository server.
  • No fees or licensing above the pay-as-you-go rates
  • Access controlled through existing IAM accounts – no new credentials required 
  • Integration with existing package managers – ability to use an in-house package manager to publish or retrieve your developed software packages.

Limitations

As of June 2020, there is still a limited number of external repositories that CodeArtifact can connect to: PyPI, npm, and Maven (which also serves Gradle builds). Software from other public repositories may still need to be included in the user-created repository, or the end user may need to download and install the necessary dependencies separately.

Using Containers to Automate and Accelerate Delivery

What is a container?

A container is a unit of software delivery that packages the libraries and configuration files an application needs to run out of the box on a host operating system. Docker is the pre-eminent set of products with containerisation at its core; in the case of Docker, the host operating system requires the Docker Engine to be installed.

By its name, containerisation implies portability. Indeed, as a visual aid, containers are imagined as shipping containers. This is a very deliberate form of imagery as containerisation is a very portable method of packaging and distributing software.

Containerisation can be seen as a more lightweight alternative to virtualisation. In a “traditional” virtualised environment, a hypervisor and guest OS VM would be required to be placed on the underlying infrastructure before any application-specific libraries and the applications themselves are built or imported.

A containerised environment differs in that multiple applications can sit upon a single container engine upon a single host operating system, without the need for a hypervisor. The container engine can efficiently isolate the dependencies required for the application when creating a container. The container itself would be smaller to transport than an entire VM or image, with all its extraneous parts of the OS that would invariably have to be shipped with it.

What are containers used for?

Containers are used in scenarios where there is a need for portability, faster deployment, high scalability and cross-platform compatibility. For example, a container created by someone on a Windows machine can run on any flavour of Linux that has the Docker Engine installed, just as easily as if it were deployed on another Windows machine. Containers are also a favourable option for customers who wish to improve on traditional virtualisation while remaining vendor agnostic and cutting down on VM licensing costs.
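As a minimal sketch of that portability, the same two commands work unchanged on any host with Docker installed; the image name and ports are illustrative:

# Build the image once, on any machine running Docker...
docker build -t demo-app:1.0 .

# ...then run it unchanged on any other Docker host.
docker run -d -p 8080:80 demo-app:1.0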

They are also becoming increasingly used within cloud computing on platforms such as AWS and they provide a reliable encapsulation method when used in continuous integration and continuous deployment (CI/CD) methodologies.

How do containers differ from Lambdas?

Considering Docker, the first word that springs to mind is “containerisation”; for Lambda, the key word would be “serverless”. However, both Docker and Lambda can be regarded as serverless in a sense, given that multiple deployments of containers across a number of distributed machines, running different operating systems, can be argued to be a form of decentralisation.

Containerisation still requires initial work to be carried out to create the container that can be deployed or distributed. Lambda code which is developed can be made live instantaneously. 

With Lambda, scaling up or down is not a consideration for the administrator as Lambda’s own infrastructure is geared towards varying levels of usage, with customers charged per execution of code – true benefits of serverless computing. When compared to containers, you would find lambda is better suited when the scope of its code is limited to the AWS-based service it interacts with, e.g. S3, DynamoDB, SQS.

Containerisation is better suited to situations where the code relies on an existing service or tool, such as Apache, which has to be configured before the working build is containerised. Scaling Docker containers up and down – orchestration – can be managed through another open-source tool called Kubernetes, which allows a high level of automation to be applied and the conditions for automatic scaling to be defined.

Many vendors have ready-made Docker containers of open source and commercial software available in much the same way as AMI customer images are available for use in Amazon Elastic Compute (EC2). 

How can containers accelerate delivery?

Containers are especially suited to microservices. Their smaller footprint than conventional images makes it more justifiable to dedicate a container to more defined, limited functions, rather than dedicate an entire OS VM or image. This makes it easier for more containers to be deployed faster than conventional OS images when scaling up resources to meet demand.

AWS ECS (Elastic Container Service) is the AWS solution for combining serverless principles with containerisation flexibility. It is to containers what EC2 is to OS images. 

For management of existing clusters through Kubernetes, AWS offers EKS, which removes the need to maintain your own Kubernetes control plane.

Embracing microservices through containerisation in a development environment can bring numerous benefits. Because microservices promote greater use of APIs between the integral parts of a deployment, single containers among many can be modified rather than a significant chunk of the entire codebase. With all components talking to each other through APIs, a newcomer can work on one element without needing to know the history of the code or the entire codebase – just what goes in and what comes out. Changing or modifying containers can also be platform and language agnostic, allowing developers to choose their preference.

Microservices can also be managed much more easily within a CI/CD pipeline with smaller items able to move from left-to-right far quicker than, say, a more complicated piece of sprawling legacy code pulling in numerous third-party libraries

In summary, containers offer:

  • Great flexibility
  • Faster scaling
  • Smaller data footprints 
  • A vendor-agnostic platform 
  • Potentially leaner development cycles

If you would like CirrusHQ to advise you on how containerisation can help you improve efficiency and productivity, feel free to drop us a line at info@cirrushq.com or via our contact page.

 

DevOps tools in the cloud

Introduction

The concept of DevOps is now commonplace within modern IT infrastructures, and there is a wealth of DevOps tools available. Some are multi-faceted tools that cover more than one stage of the DevOps cycle, while others excel at individual stages.

As DevOps is a relatively modern concept, those who employ DevOps methodology and develop DevOps tools are likely to be champions of cloud technology. Today, cloud technologies and DevOps are seen as entirely complementary. The aim of this article is to outline DevOps tools in general and to introduce the AWS DevOps tools and where they can be of great use.

 

Planning Tools

Planning a project entails a process that involves all contributors, sets a timescale, presents documentation and, perhaps, aligns costings. Jira and Asana are market-leading tools in this space, and both also offer extensive issue-tracking functionality. AWS has yet to produce a comprehensive planning and tracking tool; however, AWS CodeStar offers excellent integration with Jira.

Development – Coding / Repository Tools

Within a well thought out DevOps cycle, there is also a need to have a place to store source code, facilitate access control and apply version control. GitHub is a very popular cloud-based Git repository hosting service. Other third-party tools include Bitbucket and SourceForge. 

AWS also offers tools within this sector of DevOps tooling, which may be of particular benefit to those already familiar with AWS. AWS CodeCommit provides a fully managed Git-based repository service that offers high availability without product tiering, charged on a simple pay-as-you-use basis rather than through tiered per-seat plans where higher-tier features cost more.
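Getting started takes only a couple of commands; the repository name and region below are illustrative, and the credential helper lines let Git authenticate with your existing IAM credentials:

# Create a repository and configure Git to authenticate via the AWS CLI.
aws codecommit create-repository --repository-name demo-repo
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true

# Clone over HTTPS using the region-specific endpoint.
git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/demo-repo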

AWS CodeBuild is a Platform as a Service Continuous Integration service which can compile and also test software packages leaving them ready to deploy. Again, users are charged by the minute for compute resources used.

Testing Tools

With respect to third-party tooling, Selenium remains the industry-leading test tool. Selenium does not currently exist as a Platform as a Service, but its popularity makes it worthy of a mention. Other widely used third-party testing tools include JUnit (whose core strength is testing Java), Cucumber, and Appium (well suited to mobile devices).

AWS CodePipeline also provides integration with several testing tools which they have found to work well within its Source-Build-Test-Deploy framework which can be used on a pay-as-you-use basis. BlazeMeter, Ghost Inspector, Micro Focus StormRunner Load and Runscope cover a wide range of testing criteria.  AWS CodePipeline is covered in more detail later in this article.

Release / Deployment Tools

There are several well-known tools which are used for releasing code. Docker (for containers), Puppet, Chef, Ansible and TravisCI are all third-party tools which have good market shares.

AWS CodeDeploy focuses on automated deployment to AWS compute services, such as EC2, and on-premises servers. AWS OpsWorks is a very useful configuration management tool which provides managed instances of Chef and Puppet to automate server configuration and deployment.

AWS Elastic Beanstalk is another of the tools which could be classified as spanning one or more stages of the DevOps cycle. Elastic Beanstalk is a web application deployment and orchestration service which runs in the normal pay-as-you-use principle. Packages can be deployed in minutes upon a number of platforms and from numerous source languages.

Maintenance / Monitoring Tools

Nagios, Opsgenie and Splunk offer a wealth of monitoring capabilities over numerous types of systems and metrics. However AWS has some fantastic offerings which cover a vast range of monitoring and maintenance needs.

AWS CloudWatch offers rich visualisation of logs and metrics for AWS and on-premises assets, and the ability to set alarms based on thresholds. AWS Config can help prevent configuration drift by assessing, auditing and evaluating AWS resources and matching them against desired configurations. With EC2 Systems Manager, it is possible to pull monitoring metrics from numerous resources, such as EC2 instances, RDS instances and S3 buckets, along with on-premises data, into an aggregated view. The user journey through applications can be monitored with AWS X-Ray, which is especially suited to microservices where many elements are drawn together. The above-mentioned AWS OpsWorks, which provides Chef and Puppet instances, is also a useful maintenance tool.

Which tools are right for you?

There is no easy answer as to which toolset is best suited for your needs without first knowing which qualities you favour. If you prefer to use open source tools which require hosting, there are still savings to be had by deploying those tools to cloud instances.

However, there are advantages to be gained by using AWS-based tooling, such as metered usage, no need to maintain the tools nor the underlying hardware on which they reside, be it physical, virtual or cloud OS instances.

Orchestration

A DevOps framework works best with a continuous integration / continuous deployment (CI/CD) platform which brings elements together.

Jenkins is the most widely used CI/CD tool, and the Jenkins community has created over 1,000 plugins. One disadvantage is the need to deploy and maintain Jenkins itself, which can involve considerable effort when used with large numbers of developers.

The AWS CI/CD tool, AWS CodePipeline has some advantages over Jenkins and other CI/CD tools. As it is Software as a Service, there is no need for maintenance of the tool or the platform. It easily integrates with other AWS services and uses the familiar pay-as-you-use model. It can also deploy to non-cloud resources via an agent, making it a good option for where there are still on-premises assets to take into account. One disadvantage of AWS CodePipeline is that fewer third-party applications can be integrated, but this is improving.

In summary, AWS continues to develop their comprehensive range of DevOps tools which can be used in more of an out-of-the-box manner than other well-established third-party tools. AWS DevOps tools are designed to be interoperable with other AWS tools and the pricing model is pay-as-you-use. However, for those who already have a preference for existing third-party tools, the scalability and disposability of AWS EC2 instances makes AWS well-suited as a hosting solution as well as an ideal playground for trying out as many tools as you wish. It is a product range which is well worth keeping an eye on.

Why not get in touch with CirrusHQ to get the ball rolling on your DevOps journey?