Migrating Services from Physical Servers to AWS
ThinkAlpha needed to migrate services from on-premises physical servers to the AWS Cloud to streamline their operations and increase resiliency. They required an unusually high number of environments for various purposes, each replicating exactly the same set of services. This was a medium-scale project: about 25 services, six environments, VPNs, and Direct Connect.
I proposed Terraform as the infrastructure-as-code (IaC) tool and built the environments from the ground up, adding each service in turn. Services were either Node.js applications or static websites. The Node.js apps ran as ECS services on Fargate, and I also helped with the Dockerization process. Static websites were served from S3 buckets fronted by CloudFront distributions.
Continuous deployment was handled by CircleCI. I updated the CircleCI scripts to deploy the services to AWS instead of to the physical servers.
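Purely as an illustration of that kind of deploy step, here is a minimal Python/boto3 sketch that points an ECS (Fargate) service at a newly pushed image, which is roughly what such a CircleCI job has to do; the cluster, service, and image names are hypothetical placeholders, not ThinkAlpha's actual resources.

```python
"""Illustrative deploy step: point an ECS (Fargate) service at a new image tag.

All names below (cluster, service, image) are hypothetical placeholders.
"""
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

CLUSTER = "example-cluster"        # hypothetical ECS cluster name
SERVICE = "example-node-service"   # hypothetical ECS service name
NEW_IMAGE = "123456789012.dkr.ecr.us-east-1.amazonaws.com/example-app:abc1234"


def deploy(new_image: str) -> str:
    """Register a new task definition revision with the new image and update the service."""
    service = ecs.describe_services(cluster=CLUSTER, services=[SERVICE])["services"][0]
    task_def = ecs.describe_task_definition(
        taskDefinition=service["taskDefinition"]
    )["taskDefinition"]

    # Copy the current container definitions, swapping in the new image.
    containers = task_def["containerDefinitions"]
    containers[0]["image"] = new_image

    new_revision = ecs.register_task_definition(
        family=task_def["family"],
        containerDefinitions=containers,
        requiresCompatibilities=["FARGATE"],
        networkMode=task_def["networkMode"],
        cpu=task_def["cpu"],
        memory=task_def["memory"],
        executionRoleArn=task_def["executionRoleArn"],
    )["taskDefinition"]["taskDefinitionArn"]

    # Point the service at the new revision; ECS performs a rolling deployment.
    ecs.update_service(cluster=CLUSTER, service=SERVICE, taskDefinition=new_revision)
    return new_revision


if __name__ == "__main__":
    print("Deployed task definition:", deploy(NEW_IMAGE))
```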
ThinkAlpha also had additional networking requirements and needed VPNs and Direct Connect to link services. To ease the handover of the infrastructure to ThinkAlpha, I generated templates for the two types of services and documented how to create new services and how to update the infrastructure for the various environments.
Modernizing a Web Infrastructure
SIBOTest needed to move away from a fragile, monolithic setup with everything running on a single server and wanted to up their game as a startup.
I proposed overhauling the internal structure of the web app to use Docker-based microservices so that the architecture would be robust and scalable.
Work Done:
· Introduced load balancers with SSL termination.
· Separated the production and staging environments.
· Moved the MySQL databases to a separate subnet, using MySQL NDB Cluster.
· Dockerized their Ruby on Rails application.
· Set up a continuous-integration system based on Jenkins to automatically build the Docker images (see the sketch after this list).
· Wrote Ansible scripts to automate the provisioning of new servers and the deployment of the Docker images.
· Hardened the security of the overall infrastructure.
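As a rough illustration of what the Jenkins build step does, here is a minimal Python sketch using the Docker SDK to build and push an image; the registry, repository name, and tag are hypothetical placeholders, and the actual job may well have shelled out to the Docker CLI instead.

```python
"""Illustrative CI build step: build a Docker image and push it to a registry.

Requires the `docker` Python SDK (pip install docker); the registry and
image names below are hypothetical placeholders.
"""
import docker

REGISTRY = "registry.example.com"          # hypothetical private registry
IMAGE = f"{REGISTRY}/sibotest/rails-app"   # hypothetical repository name
TAG = "build-42"                           # e.g. the Jenkins build number

client = docker.from_env()

# Build the image from the Dockerfile in the current directory.
image, build_logs = client.images.build(path=".", tag=f"{IMAGE}:{TAG}", rm=True)
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

# Push the image; credentials are expected to come from `docker login` on the agent.
for line in client.images.push(IMAGE, tag=TAG, stream=True, decode=True):
    print(line)
```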
In the end, this multi-tier architecture ran smoothly, with no reported downtime.
Design a Highly Available and Scalable Architecture on AWS
MyDocSafe was experiencing a lot of instability, server crashes, and performance problems and needed expert help to design a monitoring system suited to their application. Once this immediate solution was in place, they required an expert to design and implement a highly available and scalable architecture to run their workload reliably on AWS, complete with automated deployment.
Work Done:
· Wrote Ansible roles and playbooks to install the CloudWatch Agent on EC2 instances and configured Amazon SNS and CloudWatch to notify key people of alarms on EC2 instances (see the sketch below).
· Tuned the Apache configuration to stop it from crashing under heavy load.
· Wrote Ansible roles and playbooks to create Let's Encrypt SSL certificates using the DNS challenge, including creating the subdomains on AWS Route 53.
· Installed and configured an ELK stack to monitor the Apache logs.
· Designed a highly available and scalable architecture to reliably run a complex workload.
· Progressively migrated the existing system to the new architecture.
· Deployed the new architecture to production.
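To give a flavour of the alarm setup, here is a minimal Python/boto3 sketch that creates a CPU alarm on an EC2 instance and sends its notifications to an SNS topic. In the project this kind of configuration was driven from Ansible; the instance ID, topic name, e-mail address, and thresholds below are hypothetical.

```python
"""Illustrative monitoring setup: a CloudWatch CPU alarm that notifies an SNS topic.

The instance ID, topic name, endpoint, and thresholds are hypothetical placeholders.
"""
import boto3

REGION = "eu-west-1"
INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical EC2 instance

sns = boto3.client("sns", region_name=REGION)
cloudwatch = boto3.client("cloudwatch", region_name=REGION)

# SNS topic that key people subscribe to (e-mail subscriptions are confirmed out of band).
topic_arn = sns.create_topic(Name="ops-alarms")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

# Alarm when average CPU stays above 80% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName=f"high-cpu-{INSTANCE_ID}",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],
)
```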
Setup of CI/CD Pipelines for a Startup
PSD2Enabler required the setup of CI/CD pipelines on GitLab for various projects.
Work Done:
· Set up a GitLab pipeline to build and deploy an app to AWS using Terraform and Ansible.
· Set up a GitLab pipeline to build and deploy an app to a Kubernetes cluster hosted on the Google Cloud Platform.
· Set up a GitLab pipeline to build and deploy an app to a Kubernetes cluster hosted on AWS EKS (a deploy-step sketch follows below).
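As a sketch of what the deploy stage for the Kubernetes-hosted apps boils down to, the snippet below rolls a Deployment to a freshly built image using the official Kubernetes Python client. The namespace, deployment, container, and image names are hypothetical, and the real pipelines may equally have used kubectl or Helm for this step.

```python
"""Illustrative deploy step for a GitLab CI job: update a Kubernetes Deployment's image.

Uses the official `kubernetes` Python client; the namespace, deployment,
container, and image names are hypothetical placeholders.
"""
from kubernetes import client, config

NAMESPACE = "production"
DEPLOYMENT = "example-app"
CONTAINER = "example-app"
NEW_IMAGE = "registry.gitlab.com/example/app:1.2.3"

# Use in-cluster config when run from a pod; otherwise fall back to the local
# kubeconfig (e.g. one written by the CI job from a pipeline variable).
try:
    config.load_incluster_config()
except config.ConfigException:
    config.load_kube_config()

apps = client.AppsV1Api()

# Strategic-merge patch: only the container image changes, triggering a rolling update.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{"name": CONTAINER, "image": NEW_IMAGE}]
            }
        }
    }
}
apps.patch_namespaced_deployment(name=DEPLOYMENT, namespace=NAMESPACE, body=patch)
print(f"Rolled {DEPLOYMENT} in {NAMESPACE} to {NEW_IMAGE}")
```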
Cisco | DevOps Engineering and Python Development
I worked as part of the system team that released set-top-box software to a major EU broadcaster.
Work Done:
· Set up and maintained a variety of software tools to enable the smooth running of the continuous integration process.
· Set up Coverity static analysis and enabled nightly automated scans using Jenkins.
· Set up Black Duck open-source code matching.
· Configured Nagios and set up NRPE with custom Git checks (see the Python sketch below).
· Set up various Cron/Jenkins jobs with Bash/Python to automate tasks.
· Worked in a Scrum process with three-week iterations, in a team spread across three countries.
· Worked with a codebase of 20 million+ lines of code.
· Detected and responded to system problems.
Technologies: Linux, CentOS, Coverity, Jenkins, Black Duck, Continuous Integration, Scrum, Bash, Python, iptables/Netfilter, Nmap, Git, Rally
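As an example of the custom NRPE checks and Python automation mentioned above, here is a minimal Nagios-style plugin that warns when a local Git clone used by CI lags behind its upstream branch. The repository path, branch, and thresholds are hypothetical; the point is the standard Nagios exit-code convention (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN).

```python
"""Illustrative NRPE/Nagios-style check: warn if a local Git clone lags upstream.

The repository path, branch, and thresholds are hypothetical placeholders.
"""
import subprocess
import sys

REPO = "/srv/ci/firmware"    # hypothetical working clone used by CI
BRANCH = "origin/master"     # hypothetical upstream branch
WARN, CRIT = 5, 50           # commits behind upstream


def commits_behind(repo: str, branch: str) -> int:
    """Fetch the remote and count commits on the upstream branch missing from HEAD."""
    subprocess.run(["git", "-C", repo, "fetch", "--quiet", "origin"], check=True)
    out = subprocess.run(
        ["git", "-C", repo, "rev-list", "--count", f"HEAD..{branch}"],
        check=True, capture_output=True, text=True,
    )
    return int(out.stdout.strip())


def main() -> int:
    try:
        behind = commits_behind(REPO, BRANCH)
    except Exception as exc:  # report any failure as UNKNOWN
        print(f"GIT CLONE UNKNOWN - {exc}")
        return 3
    if behind >= CRIT:
        print(f"GIT CLONE CRITICAL - {behind} commits behind {BRANCH}")
        return 2
    if behind >= WARN:
        print(f"GIT CLONE WARNING - {behind} commits behind {BRANCH}")
        return 1
    print(f"GIT CLONE OK - {behind} commits behind {BRANCH}")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```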
DevOps Architect
Armedia needed an AWS specialist and DevOps expert to help them modernize the architecture of ArkCase, a case management system. They needed to move from running everything on one server to a modern architecture based on Kubernetes, microservices, and infrastructure as code (IaC). Additionally, they required an AWS expert to help them build an AWS Marketplace offering for ArkCase.
Tasks:
· Move from a monolithic architecture to a microservice-based architecture.
· Dockerize various services.
· Write Helm charts.
· Secure the Kubernetes cluster using network policies and RBAC.
· Install and configure Istio to encrypt internal traffic and facilitate distributed tracing.
· Install and configure observability tools: Loki for log aggregation, plus Prometheus and Grafana.
· Write CloudFormation templates to set up the infrastructure on AWS.
· Write Lambda functions in Python as CloudFormation custom resources or for admin tasks such as rotating secrets with Secrets Manager (see the sketch below).
· Modify existing Ansible roles and playbooks.
· Build an internal PKI on AWS with IaC, using only serverless services; certificates are renewed automatically when they expire and when the CA certificates themselves are renewed.
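As a hedged illustration of the secret-rotation Lambdas, here is a simplified Python skeleton of the Secrets Manager rotation contract (createSecret, setSecret, testSecret, finishSecret). It only stages and promotes a new random value; a real handler would also implement the service-specific setSecret/testSecret steps, idempotency checks, and error handling.

```python
"""Simplified skeleton of a Secrets Manager rotation Lambda (Python).

Illustrates the four-step rotation contract; service-specific steps are stubbed out.
"""
import boto3

sm = boto3.client("secretsmanager")


def handler(event, context):
    secret_id = event["SecretId"]
    token = event["ClientRequestToken"]
    step = event["Step"]

    if step == "createSecret":
        # Stage a new secret value under the AWSPENDING label.
        new_value = sm.get_random_password(
            PasswordLength=32, ExcludePunctuation=True
        )["RandomPassword"]
        sm.put_secret_value(
            SecretId=secret_id,
            ClientRequestToken=token,
            SecretString=new_value,
            VersionStages=["AWSPENDING"],
        )
    elif step == "setSecret":
        pass  # Apply the AWSPENDING value to the target service (service-specific).
    elif step == "testSecret":
        pass  # Verify the AWSPENDING value actually works (service-specific).
    elif step == "finishSecret":
        # Promote AWSPENDING to AWSCURRENT.
        metadata = sm.describe_secret(SecretId=secret_id)
        current = next(
            version
            for version, stages in metadata["VersionIdsToStages"].items()
            if "AWSCURRENT" in stages
        )
        sm.update_secret_version_stage(
            SecretId=secret_id,
            VersionStage="AWSCURRENT",
            MoveToVersionId=token,
            RemoveFromVersionId=current,
        )
```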