Mayur C.

About Me

Mayur is a Data Architect and Engineer with 17 years of IT experience managing and supporting business process-led technology and strategic management initiatives. He builds products from scratch, developing PoCs for CXOs and converting them into production-grade solutions. With a strong analytical background, Mayur is a high-caliber Big Data/Data Warehouse/ETL architect with expertise in data management, focused on Big Data, EDW, cloud data warehouses, real-time analytics, and data lakes. He also brings a good balance of technical and management skills and a proven ability to lead large, complex projects and globally distributed teams.

Work history

UpStack
Data Engineer
2023 - Present (2 years)
Remote
  • Building and improving databases, acquiring data, and developing ETL/ELT and Big Data pipelines, while deploying cloud services across projects.

  • Administering infrastructure solutions to improve data models, increase data accessibility, and foster data-driven solutions for clients.

  • Implementing monitoring solutions to ensure data integrity; working closely with engineers, product managers, and other stakeholders.

UKG
Technical Architect
2021 - Present (4 years)
Noida, India
  • Planning, conceiving, and developing a project to switch from UKG-managed Cassandra clusters to a fully managed database as a service.

  • Migrating 45+ Cassandra clusters to a fully managed service with zero customer downtime; saved ~$22 million by reducing the organization's Cassandra TCO by 1/8th.

  • Planning the architectural runway to support new business features and capabilities, establishing serviceability and observability of the application.

Impetus
Technical Architect | Team Lead
2006 - 2021 (15 years)
Indore, India
  • Worked on various client projects implementing metadata management, data warehouse modernization, ETL, and syntax and data validation services.

  • Designed and developed various features: schema translation, target-optimized DML/code transformation, automated ETL translation to PySnowSQL/PySpark, and translation support for major modern DWs.

  • Reduced the operating cost of a modernization project by 80% with AI-powered frameworks.

Showcase

Ankush
  • Architected and designed a Big Data cluster provisioning tool with a core deployment framework, including provisioning for Hadoop and its ecosystem components such as Ganglia, Kafka, Storm, Oozie, and ZooKeeper.

  • Implemented monitoring metrics using Ganglia and Prometheus; developed log management, service management, and property auditing.

  • Owned end-to-end deployment of proprietary software on HDP and CDH clusters across clouds (AWS, Azure).

MDM – Metadata Management and Governance Tool
  • Developed a Metadata Management and Governance Tool featuring a metadata catalog, an automated metadata crawler, a data observability/quality profiler, and data/cross-system lineage for impact analysis.

  • Collaborated with various teams, liaised with stakeholders, consulted with customers, and applied knowledge of industry trends to ensure data security.

  • Employed technologies including Spark, Spark GraphX/GraphFrames, Java, Spring Boot, Apache Ranger, ANTLR, and Solr.

Assessment Tool
  • Implemented an EDW code and query log analysis feature, technical debt and dead code detection, and ML-based query time prediction for Spark, Snowflake, and Redshift.

  • Developed features for DW migration project estimation, a target compatibility matrix, SaaS-based cloud deployments for assessments, and customized offerings for AWS, Azure, and GCP partners.

  • Reduced analysis time by 70% by re-platforming product on Databricks and Snowflake.

Education

B.Eng. in Computer Science
DAVV University - Indore, India
2002 - 2006 (4 years)