Stefan M.

Data Engineer

Serbia

About Me

Stefan is an experienced machine learning and machine learning operations (MLOps) engineer with hands-on experience in big data systems. His five years of industry expertise are supplemented by a master's degree in artificial intelligence. Stefan has worked on problems such as object detection, classification, sentiment analysis, named-entity recognition (NER), and recommendation systems. He looks forward to being involved in end-to-end machine learning projects.

Software Engineering, Deep Learning, Machine Learning, Artificial Intelligence (AI), Computer Vision, Natural Language Processing (NLP), AI Design, Deep Neural Networks, Code Review, GitHub, Python 3, Python, PyTest, PyCharm, Amazon S3 (AWS S3)

Work History

PepsiCo Global - DPS
MLOps Engineer
2022 - 2023 (1 year)
Remote
  • Implemented an end-to-end machine learning pipeline using PySpark.

  • Implemented CI/CD with unit and integration tests using GitHub Actions.

  • Implemented Spark and scikit-learn/Pandas ETL jobs for handling large volumes of data (150 TB).

Machine Learning Operations (MLOps), APIs, Machine Learning, Python, Databricks, Big Data, Spark, Pandas
Motius
Tech Lead Data Engineer
2022 - 2023 (1 year)
Remote
  • Led a small team in implementing an ELT pipeline to extract data from a GraphQL endpoint and load it into Azure SQL. Everything was Dockerized and pushed to Azure Container Registry.

  • Implemented KPI calculations using PySpark communicating with Snowflake; defined the table schema for Snowflake and created migration scripts.

  • Followed the Scrum methodology, including daily scrums, retrospectives, and sprint planning, and used Jira.

  • Led a small team in implementing ETL Spark jobs, with Apache Airflow as the orchestrator, AWS as the infrastructure, and Snowflake as the data warehouse.

Spark, Apache Spark, PySpark, Snowflake, Python, Python 3, Amazon Web Services (AWS), Databases, Distributed Systems, Azure SQL, Azure, AWS Glue, Apache Airflow, Software Architecture
Lifebit
MLOps Engineer
2021 - 2022 (1 year)
Remote
  • Carried out deep learning model optimizations using quantization, ONNX Runtime, and pruning, among others.

  • Monitored model performance, including memory, latency, and CPU usage.

  • Used Valohai to automate the CI/CD process and GitHub Actions to automate some parts of the MLOps lifecycle.

  • Created automated experiment tracking using Amazon CloudWatch, Valohai, Python, GitHub Actions, and Kubernetes.

Amazon EC2, Valohai, Keras, TensorFlow, Python 3, Kubernetes, Codeship, GitHub, Open Neural Network Exchange (ONNX), Visual Studio Code (VS Code), Optimization, Neural Networks, NumPy, Monitoring, Amazon S3 (AWS S3), Cloud, Amazon Web Services (AWS), AI Design, Deep Neural Networks, Software Engineering, PyTest, JSON, Source Code Review, Code Review, Task Analysis, Databases, Data Science
HTEC Group
Machine Learning Engineer
2020 - 2021 (1 year)
Remote
  • Optimized already-trained networks via a machine learning compiler, without re-training, using Open Neural Network Exchange (ONNX), and implemented custom operators using PyTorch and C++.

  • Worked on an Android machine learning solution and mentored a less experienced developer to train and prepare an object detector and classifier to run smoothly on an Android device.

  • Enhanced an image-upscaling project aimed at producing output as close as possible to true 4K resolution.

  • Involved in the SDP of a ship-routing problem. Implemented from scratch an algorithm to guide the ships, using fuel consumption and ETA in its calculations.

  • Contributed to the open-source ONNX Runtime to add support for the MIGraphX library.

Python 3, Python, Docker, Computer Vision, PyTorch, Artificial Intelligence (AI), Machine Learning, Team Leadership, Machine Learning Operations (MLOps), GitHub, Convolutional Neural Networks, Open Neural Network Exchange (ONNX), Visual Studio Code (VS Code), Neural Networks, NumPy, Cloud, Pandas, Computer Vision Algorithms, AI Design, Deep Neural Networks, Software Engineering, PyTest, JSON, Technical Hiring, Source Code Review, Code Review, Task Analysis, Interviewing, Databases, Data Science
SmartCat
Machine Learning Engineer
2019 - 2020 (1 year)
Remote
  • Contributed to complete MLOps lifecycles using MLflow for model versioning, lakeFS for data versioning, AWS S3 for data storage, and TensorFlow Serving in Docker.

  • Functioned as a data engineer, using Apache Spark for ETL jobs and Prefect and Apache Airflow for scheduling.

  • Trained several different architectures for object detection and classification.

Python 3, Scala, Python, Docker, SQL, Computer Vision, MongoDB, Artificial Intelligence (AI), Machine Learning, Data Engineering, Machine Learning Operations (MLOps), GitHub, Convolutional Neural Networks, ETL, Visual Studio Code (VS Code), Neural Networks, NumPy, Amazon S3 (AWS S3), Big Data, Image Processing, Cloud, Pandas, Object Detection, Computer Vision Algorithms, Object Tracking, Apache Spark, Amazon Web Services (AWS), AI Design, Deep Neural Networks, Software Engineering, PyTest, ETL Tools, JSON, Jupyter Notebook, Source Code Review, Code Review, Task Analysis, PySpark, Databases, Data Science, Distributed Systems
Freelance
Machine Learning Engineer
2016 - 2019 (3 years)
Remote
  • Scraped product information from various websites, then analyzed and prepared the scraped data for web shops using natural language processing—long short-term memory (LSTM), Word2Vec, and transformers—and added NER since the data was in Serbian.

  • Used Amazon SageMaker to automate the machine learning pipeline: data preprocessing, model training, and deployment. Executed automated retraining and deployment of the model, completing the machine learning cycle before the client uploaded new data.

  • Worked on big data projects using Apache Spark, Kafka, Hadoop, and MongoDB.

  • Worked as a data engineer using Spark to create optimized ETL pipelines. Translated the client's needs into SQL.

Python 3, Spark, Amazon SageMaker, Python, Docker, Computer Vision, MongoDB, Artificial Intelligence (AI), Machine Learning, Data Engineering, Kubernetes, Machine Learning Operations (MLOps), GitHub, Amazon EC2, Convolutional Neural Networks, Open Neural Network Exchange (ONNX), Recommendation Systems, Natural Language Understanding (NLU), GPT, Generative Pre-trained Transformers (GPT), Natural Language Processing (NLP), Visual Studio Code (VS Code), Time Series, Data Modeling, Data Mining, Neural Networks, NumPy, Amazon S3 (AWS S3), Big Data, Apache Kafka, Hugging Face Transformers, Cloud, Pandas, Object Detection, Computer Vision Algorithms, Apache Spark, Amazon Web Services (AWS), AI Design, Web Development, Deep Neural Networks, Software Engineering, PyTest, JSON, Jupyter Notebook, Source Code Review, Code Review, Task Analysis, PySpark, Databases, Data Science, Distributed Systems, Project Management

Portfolio

Automated End-to-end (E2E) Computer Vision Solution

Created a system that performed several tasks in real time, including:
  • Detecting objects in the room
  • Classifying person poses
  • Automated re-training (active learning)
  • Model and data versioning
  • Dockerized pipeline
Using those models and predictions, we created a post-processing pipeline for generating reports and key performance indicators (KPIs) for clients.

Android COVID-19 Test Classification

The goal was to create a COVID-19 test classification model. We had a small dataset and had to build the best possible model in the shortest possible time (two weeks). I led a team of two people on this project. We used MobileNet due to its small size, and all business-relevant metrics were strong. To deploy the model to Android, we used many optimization techniques, such as quantization, pruning, and knowledge distillation.

MLOps Engineer

Participated in a project where my job was to optimize the whole machine learning system using quantization, pruning, ONNX, and more. I achieved the same accuracy with five times lower latency, half the model size, and four times lower cost. I also changed the type of the underlying EC2 instances to get more out of our system.

Image Super Resolution

The goal was to improve the model for upscaling and super-resolution by researching and developing approaches from state-of-the-art (SOTA) research papers. This involved many custom loss functions, layers, metrics, and even custom backpropagation implementations.

ETL Jobs

• Created batch ETL jobs for calculating KPIs.
• Optimized the solution to reduce cost and calculation time.
• Scheduled jobs via Airflow and Prefect.
The tech stack was Spark, Scala, AWS S3, Kafka, Apache Airflow, and Prefect.

NLP Articles Processing

The goal of this project was to develop two stages of article processing:
1. Find all relevant tags (events, locations, names, etc.) in the article.
2. Find pairs of tags that are somehow related.
Hugging Face Transformers (BERT-based models) were mainly used to tackle this problem. Overall metrics were above 95%.

Data Ingestion

Led a team whose goal was to extract data from a GraphQL endpoint and insert it into Azure SQL. Everything was Dockerized and deployed to EKS on every push to the main branch on GitLab. Concurrent threads were used to optimize the solution.

Tech Leadership for the DE project

My responsibility was to make all decisions, from the architecture down to the nitty-gritty implementation details. We used AWS for infrastructure (CloudWatch, Glue, S3) and Airflow to orchestrate Spark jobs. Every result of a Spark job was saved to Snowflake.

Education

Master's Degree in Artificial Intelligence
University of Novi Sad
2020 - 2021 (1 year)