Yuriy M.

Lead Data Architect / Engineer

Los Angeles, United States of America

About Me

Yuriy is a data specialist with over 15 years of experience in data warehousing, data engineering, feature engineering, big data, ETL/ELT, and business intelligence. As a big data architect and engineer, he specializes in AWS and Azure frameworks, Spark/PySpark, Databricks, Hive, Redshift, Snowflake, and relational databases; tools such as Fivetran, Airflow, dbt, and Presto/Athena; and data DevOps frameworks and toolsets.

Work history

Databricks
Specialist Solutions Architect
2023 - Present (1 year)
Remote
  • Joined the Field Engineering organization, supporting the Communications, Media, and Entertainment verticals.

  • Provided technical support for field engineers, architects, and customers.

  • Performed data warehousing, data engineering, migrations, and integrations.

Paramount
Senior Manager Data Engineering
2020 - 2023 (3 years)
Remote
  • Built a revenue data mart and added a server-side subject area to the data lake.

  • Managed a team and oversaw ETL monitoring, optimization, and performance tuning.

  • Represented the data engineering team in the company's architecture guild activities.

Spark, PySpark, Scala, Databricks, Snowflake, Apache Airflow, SQL, Amazon Web Services (AWS), Amazon Athena, Data Build Tool (dbt), Google BigQuery
Teespring
Data Engineer
2019 - 2020 (1 year)
Remote

  • Migrated a data warehouse ETL pipeline from Airflow/Redshift to Fivetran, Databricks, and Snowflake.

Amazon Web Services (AWS), APIs, Redshift, Apache Airflow, Python, Spark, Snowflake, Databricks, Fivetran
BCG GAMMA (via Toptal, Three Contracts)
Data Engineer
2018 - 2019 (1 year)
Remote
  • Provided engineering support for data scientists.

  • Designed and built a feature engineering data mart and customer 360° data lake in AWS S3.

  • Designed and developed a dynamic S3-to-S3 ETL system in Spark and Hive.

  • Completed various DevOps tasks, including an Airflow installation, development of Ansible playbooks, and history backloads.

  • Worked on a feature engineering project which involved Hortonworks, Spark, Python, Hive, and Airflow.

  • Built a one-on-one marketing feature engineering pipeline in PySpark on Microsoft Azure and Databricks (using ADF, ADL, Databricks Delta Lake, and ADW as a source).

Ansible, Boto 3, Apache Airflow, PostgreSQL, Relational Database Services (RDS), AWS Glue, Amazon Athena, Presto DB, Apache Hive, Spark, Python
Enervee
Vice President, Data
2017 - 2018 (1 year)
Remote
  • Managed the data engineering, BI reporting, and data science teams.

  • Worked as a hands-on data engineer.

  • Built a data lake on AWS.

  • Developed a reporting system with Redash/Presto.

Amazon Web Services (AWS), Redash, Apache Airflow, Python, Amazon S3 (AWS S3), Amazon Aurora, MySQL, PostgreSQL, Redshift, Apache Hive, Presto DB, Spark, Amazon Elastic MapReduce (EMR), Hadoop
Crowd Consulting
Co-founder | CEO
2016 - 2023 (7 years)
Remote
  • Worked on full data warehouse implementations for multiple clients.

  • Provided big data training and support for consulting partners.

  • Engineered and built an ETL pipeline for an AWS S3 data warehouse using AWS Kinesis, Lambda, Hive, Presto, and Spark. The pipeline was written in Python.

  • Delivered data warehouses, data lakes, data lakehouses, feature marts, BI systems, migrations, and integrations.

Amazon Web Services (AWS), Data Warehouse Design, Data Warehousing, Amazon Athena, Tableau, Luigi, Scala, Python, Amazon S3 (AWS S3), Amazon DynamoDB, MySQL, PostgreSQL, Redshift, AWS Lambda, Apache Hive, Databricks, Spark, Hadoop, Amazon Elastic MapReduce (EMR)
ITG
Big Data Architect
2016 - 2017 (1 year)
Remote
  • Worked full time as a data architect on a transaction cost analysis system.

  • Installed a four-node Apache Hadoop/Spark cluster on ITG's private cloud.

  • Conducted a platform POC embedding Apache Spark technology into ITG's data platform.

  • Supported the development of a platform POC for Kx Kdb+; also converted Sybase IQ queries to the Kdb+ Q language.

American Taekwondo Association
Data Engineer
2016 - 2017 (1 year)
Remote
  • Converted data from a legacy Oracle database to a newly designed SQL Server database.

  • Wrote SQL scripts, stored procedures, and Kettle transformations.

  • Administered two databases.

  • Performed extensive data cleansing and validation.

Connexity
Director, Data Warehouse
2015 - 2016 (1 year)
Remote
  • Managed two data warehouses and BI teams for both PriceGrabber and Shopzilla. Connexity is also known as PriceGrabber, Shopzilla, and BizRate.

  • Handled operational support for the PriceGrabber data warehouse. Recovered data warehouse after the data center migration.

  • Merged one data warehouse into another and retired one of them. Designed the business and data integration architecture hands-on; developed data validation scripts and ETL integration code. Managed the transfer of a BI reporting system from Cognos to OBIEE and Tableau.

  • Defined the technology platform change strategy for the combined data warehouse.

  • Created PL/SQL stored procedures, packages, and anonymous scripts for ETL and data validation.

  • Completed an Amazon Redshift project.

  • Worked on and completed a Cloudera Impala project.

Amazon Web Services (AWS), Linux, Python, Perl, Tableau, Oracle Business Intelligence Enterprise Edition 11g (OBIEE), Cognos 10, Impala, Hadoop, Redshift, PL/SQL, Oracle
PriceGrabber
Director, Data Warehouse
2008 - 2015 (7 years)
Remote
  • Oversaw the company's data services and defined the overall and technical strategy for data warehousing, business intelligence, and big data environments.

  • Hired and managed a mixed on-shore (US)/off-shore (India) engineering team.

  • Replatformed a data warehouse to an Oracle Exadata X3/Oracle ZFS combination and added big data and machine learning components to the data warehousing environment.

  • Supported 24x7x365 operations in compliance with the company's top-level production SLA.

  • Wrote thousands of lines of PL/SQL, PL/pgSQL, MySQL, and HiveQL code.

  • Wrote ETL scripts in Perl, Python, and JavaScript within Kettle.

  • Worked with big data on multiple types of projects (Hadoop, Pig, Hive, and Mahout).

  • Developed a tool-based ETL for a Pentaho (Kettle) CE ETL redesign project.

  • Worked on machine learning for various types of projects (Python, SciPy, NumPy, and Pandas).

Edmunds
Director, Data Warehouse
2007 - 2008 (1 year)
Remote
  • Managed a data warehouse team and project pipeline; supported operations.

  • Created PL/SQL stored procedures, packages, and anonymous scripts for ETL and data validation.

  • Worked on a tool-based ETL for multiple Informatica projects.

Universal Music Group
Manager, Data Warehouse
2003 - 2007 (4 years)
Remote
  • Managed, developed, and operated a CRM data warehouse.

  • Wrote PL/SQL, MySQL, and Perl code.

  • Administered a Cognos reporting system.

  • Worked in C# on multiple supporting projects for the OLAP reporting system.

  • Designed and developed an MSAS OLAP cube system.

Linux, Perl, C#, Cognos 10, MySQL, Microsoft SQL Server, Oracle
MediaLive International
Director, Decision Support and Financial Systems
2001 - 2003 (2 years)
Remote
  • Managed a data warehouse, BI, and CRM systems.

  • Assumed responsibility for an Oracle EBS application team.

  • Developed PL/SQL code for data warehouse ETL and Oracle Applications integration.

  • Worked with SQL Server on multiple Transact-SQL and Analysis Services projects.

  • Worked on a tool-based ETL for multiple Epiphany EPI*Channel projects.

Unix, VB, Microsoft SQL Server, Oracle EBS, Oracle
Hyperion (Currently: Oracle)
Senior Principal Consultant (Professional Services, Essbase Practice)
1999 - 2001 (2 years)
Remote
  • Led a consulting practice serving multiple clients.

  • Developed Essbase satellite systems: relational data warehouses and data marts, reporting systems, ETL systems, CRMs, EPPs, and ETL into and out of Essbase and within Essbase itself.

  • Worked on multiple PL/SQL projects, providing full support for the team's Oracle project pipeline.

  • Helped develop SQL Server solutions for multiple Transact-SQL and Analysis Services projects.

  • Developed a tool-based ETL for an Informatica project.

  • Worked on Hyperion Essbase, Enterprise, Pillar, Planning, Financial Analyzer, and VBA projects.

Essbase, Hyperion, Informatica, Visual Basic for Applications (VBA), Microsoft SQL Server, Oracle
CVS Health
Data Engineering Architect
2024 - Present
Remote

  • Performed ETL and feature engineering for a personalization engine.

RAPIDS, Scala, Python, Spark, Databricks, Azure
Maisonette
Data Engineer
2024 - Present
Remote
  • Built a data platform and data lake using Fivetran, dbt, and Databricks.

  • Participated in the development of a BI platform in Looker.

  • Performed CI/CD deployment and operational support.

Amazon Web Services (AWS), Fivetran, Looker, Python, Apache Airflow, Snowflake, PostgreSQL

Portfolio

Principal Data Consultant - Carbon 38

Consulted on implementing the data management lifecycle components for Carbon 38's data lake and data warehouse project, including the implementation of real-time streaming replication for the solution.

Big Data Engineer - BCG GAMMA

Designed, built, and deployed a feature engineering platform that supports data scientists, migrating the platform from MSSS to Spark. Authored and maintained professional documentation covering data architecture, design specifications, source-to-target mappings, and other client deliverables as required.

Big Data Engineer - Content.ad

Delivered a data warehouse solution for an ad-tech company, building a new Snowflake data warehouse with a real-time data pipeline, Kafka streaming, heterogeneous database replication, and a migration from MSSS to Snowflake.

Education

Diploma (Master of Science Equivalent) Degree in Applied Mathematics
Odessa I.I. Mechnikov University
1975 - 1980 (5 years)
Certificate of Completion in Oracle Database Administration
UCI Extension
Certificate of Completion in Cloudera Developer Training for Apache Hadoop
Cloudera University
Certificate of Completion in Data Science and Engineering with Apache Spark
UC BerkeleyX (Online Courses from Berkeley)