Aybars is a skilled Technical Architect with close to 20 years of hands-on experience designing, building, and delivering scalable, cost-effective systems for clients. He handles monitoring and continuous integration processes and is adept at building on various cloud services and hybrid infrastructure using Python, PostgreSQL, MySQL, Redis, and a host of other technologies.
Oversaw DevOps on 10+ projects; engineered shared services, notably a push messaging service and an image resizing service, eliminating the need to re-implement them on each project.
Adapted continuous integration/deployment, reliability, and partial deployment strategies for fast delivery, and implemented an alerting system to surface failure conditions before they cause outages.
Wrote a recommendation and route-optimization engine on top of PostGIS and pgRouting, using ssdeep hashing, for a car/ride-sharing project.
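A minimal sketch of how such an engine might combine the two pieces, assuming a pgRouting-enabled PostgreSQL database and the Python psycopg2 and ssdeep packages; the edges table, column names, and connection string are illustrative, not the project's actual schema.

```python
import psycopg2
import ssdeep

# Hypothetical connection; the real schema and credentials differ.
conn = psycopg2.connect("dbname=rides user=rides")

def route_edge_ids(source_node, target_node):
    """Compute a route with pgRouting's pgr_dijkstra over a hypothetical edges table."""
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT edge FROM pgr_dijkstra(
                'SELECT id, source, target, cost FROM edges',
                %s, %s, directed := true
            ) WHERE edge <> -1 ORDER BY seq
            """,
            (source_node, target_node),
        )
        return [row[0] for row in cur.fetchall()]

def route_signature(edge_ids):
    """Fuzzy-hash the ordered edge list so similar routes get similar signatures."""
    return ssdeep.hash(",".join(map(str, edge_ids)))

def similarity(route_a, route_b):
    """0-100 score; higher values suggest two riders could share a car."""
    return ssdeep.compare(route_signature(route_a), route_signature(route_b))
```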
Managed service monitoring and service quality-assurance processes and tasks, and participated in the development of backend services.
Developed a distributed microservices architecture for Koding's IDE that lets one service invoke functions defined on an entirely different service.
Optimized Koding's databases to manage the increased load and scale seamlessly with the extensive growth in users (from a few thousand to 100k+).
Managed an online storage service (https://put.io), architected as a loosely coupled, service-oriented solution that fetches torrents; it served 300k users and around 10k paying customers.
Actively participated in the development of a distributed CDN on top of Nginx for distributed encoding services, engineered to remain resilient to failures of Put.io's main datastore.
Developed and continuously deployed components connected through a message broker, allowing PHP functions to call services and functions implemented in Python, as sketched below.
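The broker is not named above; purely as an illustration, a RabbitMQ-based sketch of the Python side using the pika package could look like this. The queue name and payload fields are hypothetical.

```python
import json
import pika

# Hypothetical queue that PHP code publishes to (e.g., via php-amqplib).
QUEUE = "resize_image"

def handle(channel, method, properties, body):
    task = json.loads(body)
    # A Python-only capability exposed to PHP callers, e.g. image resizing.
    print(f"resizing {task['image_id']} to {task['width']}x{task['height']}")

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue=QUEUE, durable=True)
channel.basic_consume(queue=QUEUE, on_message_callback=handle, auto_ack=True)
channel.start_consuming()
```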
Served as a technical expert on the implementation of Febau's distance-learning platform, built on top of Moodle and PHP.
Maintained and enhanced deployment processes by implementing continuous integration and continuous deployment strategies on the project.
Provided technical guidance and leadership to a small 10-person team made up of 5 Flash developers, 2 backend developers, 1 content/documentation writer, 1 QA, and 1 education consultant.
Koding is an online development environment, owned by Koding, Inc., that allows software developers to program and collaborate online in the browser without downloading software development kits for multiple programming languages. Koding lets organizations create and share fully automated dev environments on any infrastructure for modern distributed applications, microservices, and containers.
Mergen embeds into Hazelcast and serves Redis queries by translating them into Hazelcast operations. It is fast, fault-tolerant, and scalable, acting as a low-latency, distributed in-memory cache/data grid accessible from Python. Running a few Mergen instances forms a cluster that can be used from the user's favorite language through any Redis client.
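Because Mergen speaks the Redis protocol, a standard Redis client should work against a Mergen node. A minimal sketch from Python using the redis package; the host and port are placeholders.

```python
import redis

# Connect to a Mergen node as if it were a Redis server (host/port are placeholders).
cache = redis.Redis(host="mergen-node-1", port=6379)

cache.set("session:42", "alice")   # stored in the Hazelcast-backed data grid
print(cache.get("session:42"))     # b'alice'
```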
Working on a video player platform for the Olympic Games. The video player streams live games, events, and contests, combining live data (e.g., athletes' positions) with statistical data (e.g., an athlete's performance over the previous week) and commentary. The platform has multiple data sources, including the Olympic data source and a live data feed; data tags are encoded into the video streams by an on-premise solution and combined with data delivered from the cloud.

The platform is designed as multiple services (e.g., search, content, identity, CMS) and uses event-sourced streams on Kafka as the main data source. CQRS is the main architectural pattern: each service builds up its own datastore with whatever fits best (e.g., search uses Flask as its framework and Elasticsearch as its main database, while the CMS uses Django and MySQL). In case of a service failure, including total data loss, the service and its datastore can be rebuilt in a very short time. Every service is designed for failover and scalability, but even in the worst-case scenario (multiple regions failing at the same time) the infrastructure can be rebuilt quickly.

Event sourcing also provides offline replay capabilities for debugging or re-analyzing/re-processing data. Because multiple independent sources push data, they may change data types or simply send wrong data; in those cases we can roll back to the earliest snapshot and replay the data through a fixed service handler. Having independent services with different databases also enables independent teams and development.
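A minimal sketch of the rebuild/replay idea under stated assumptions: events live on a Kafka topic, and a service can reconstruct its read model by consuming from the earliest offset. The topic name, event shape, and use of the kafka-python package are illustrative, not the platform's actual code.

```python
import json
from kafka import KafkaConsumer

# Replay the full event stream from the beginning (hypothetical topic name).
consumer = KafkaConsumer(
    "athlete-events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    enable_auto_commit=False,
    value_deserializer=lambda raw: json.loads(raw),
    consumer_timeout_ms=5000,  # stop once the backlog is drained (for this sketch)
)

# Rebuild a read model from scratch; a real service would write to its own datastore
# (e.g., Elasticsearch for search, MySQL for the CMS) instead of a dict.
read_model = {}
for message in consumer:
    event = message.value
    if event["type"] == "PositionUpdated":
        read_model[event["athlete_id"]] = event["position"]
```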
This is an HTML5 game platform. It provides infrastructure and an API for third-party developers, storing game achievements, game data, leaderboards, etc. The platform was built with event sourcing using a command/events pattern: clients send commands and, in response, receive one or more events. This architecture saved a lot of development time.
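A minimal sketch of the command/events pattern in plain Python, with hypothetical command and event names: the client sends one command and the handler returns the events it produced.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical command and events for a leaderboard feature.
@dataclass
class SubmitScore:            # command sent by the game client
    player_id: str
    game_id: str
    score: int

@dataclass
class ScoreAccepted:          # events returned (and appended to the event store)
    player_id: str
    game_id: str
    score: int

@dataclass
class LeaderboardUpdated:
    game_id: str
    player_id: str
    rank: int

def handle_submit_score(cmd: SubmitScore, current_best: int, rank: int) -> List[object]:
    """One command in, zero or more events out."""
    if cmd.score <= current_best:
        return []                                   # nothing changed, no events
    return [
        ScoreAccepted(cmd.player_id, cmd.game_id, cmd.score),
        LeaderboardUpdated(cmd.game_id, cmd.player_id, rank),
    ]

# Example: a new high score produces two events.
events = handle_submit_score(SubmitScore("p1", "g7", 1200), current_best=900, rank=3)
```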
Put.io is an online storage service that can fetch torrents. It also makes videos easier to consume by converting them to MP4s watchable on smartphones, tablets, TVs, etc. It had around 300k users when it was a free service, with nearly 1 petabyte of storage used, around 800 million files, and nearly 100 servers. Worked on an in-house cloud solution designed to be resilient and scalable.