Aditya A.

Data Scientist

Visakhapatnam, India

About Me

Aditya is a developer with experience building machine learning and statistical models on large-scale datasets using cloud platforms and the latest big data technologies. Thanks to a master's degree from IE Business School and an engineering degree from IIT (ISM) Dhanbad, Aditya has a solid understanding of data science across various business scenarios. He is also a former quantitative researcher specializing in time-series and machine learning-based strategies and risk models in financial markets.

Machine Learning, Data Analysis, Statistical Analysis, Data Analytics, Data Visualization, Natural Language Processing (NLP), Deep Learning, Python, Pandas, NumPy, Algorithms, Computer Vision, Recommendation Systems, Data Warehousing, Web Scraping

Work History

Novo Nordisk
Senior Data Scientist
2022 - 2023 (1 year)
Remote
  • Built time series forecasting models using state-of-the-art deep learning algorithms such as N-HiTS and N-BEATS, which outperformed traditional ARIMA and Holt-Winters exponential smoothing models.

  • Built a proprietary trial optimization algorithm to predict the end date of trials, which outperformed all the time series models.

  • Built ensemble models for demand and sales forecasting.

Python, Deep Learning, Time Series, Machine Learning, Azure Machine Learning, Databricks, Supply Chain Optimization
COGNIZER AI
Senior Data Scientist
2020 - 2021 (1 year)
Remote
  • Developed a BERT-based conversational AI solution based on business requirements.

  • Converted natural language queries into SQL queries using BERT-based deep-learning architecture.

  • Contributed to significant parts of the back-end flow and took ownership of those flows.

  • Extracted various fields from contract PDFs using regex and deep learning models and optimized the models to increase processing speed using TensorRT.

  • Put the DL models into production using APIs and Docker. Used AWS and GCP to enable autoscaling features.

Natural Language Processing (NLP), Generative Pre-trained Transformers (GPT), Custom BERT, APIs, Python 3, Google Cloud Platform (GCP), Deep Learning, Amazon Web Services (AWS), Machine Learning Operations (MLOps), Flask, REST APIs, Docker, Autoscaling
Futures First
Quantitative Analyst
2013 - 2019 (6 years)
Remote
  • Performed an exploratory data analysis on large-scale financial datasets and derived insights that led to tradable strategies, using Python and visualizing data through dashboards in Tableau.

  • Implemented a time series analysis (SARIMA and GARCH) of prices in commodity markets, considering CFTC reports and external factors like currency.

  • Developed regression-based mean-reverting strategies in fixed-income markets of the US and Brazil.

  • Deployed ETL pipelines and ML pipelines working on GCP.

  • Performed backtesting and forward testing of strategies by tracking their Sharpe ratios.

  • Performed hypothesis testing and evaluated the risk for strategies based on Monte Carlo simulations and historical value at risk.

  • Built natural language pipelines to track news sentiment.
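
The backtesting and risk bullets above can be sketched in a few lines: an annualized Sharpe ratio and a historical value at risk computed from daily returns. This is a minimal illustration with made-up return numbers, not the production pipeline.

```python
import statistics

def sharpe_ratio(daily_returns, risk_free_daily=0.0, periods=252):
    """Annualized Sharpe ratio from a list of daily returns."""
    excess = [r - risk_free_daily for r in daily_returns]
    return (statistics.mean(excess) / statistics.stdev(excess)) * periods ** 0.5

def historical_var(daily_returns, confidence=0.95):
    """Historical value at risk: the loss not exceeded with the given confidence."""
    ordered = sorted(daily_returns)
    index = int((1 - confidence) * len(ordered))
    return -ordered[index]

# Toy daily returns for illustration only.
returns = [0.01, -0.005, 0.003, 0.007, -0.012, 0.004, 0.009, -0.002, 0.006, -0.008]
sharpe = sharpe_ratio(returns)
var_95 = historical_var(returns)
```

Tracking these two numbers over a rolling window is the usual way to compare strategies in both backtests and forward tests.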

Google Cloud Platform (GCP), NumPy, Pandas, Python, Data Science, Data Analytics, Statistical Analysis, Machine Learning, Fixed-income Derivatives, Derivatives, Bloomberg API, Git, Jupyter, Excel VBA
Zvoid
Machine Learning Developer
Present
Remote
  • Created a tweet listener that captures tweets from a given list of authors and prepares the data for the decision engine.

  • Built automated trading capability using the Alpaca API.

  • Developed an end-to-end analysis of a particular Twitter IPO hypothesis.

  • Worked on the decision engine, a random forest regressor that takes a tweet and the stock price as inputs and outputs a buy or sell recommendation.

Machine Learning, Python, Quantitative Modeling, Quantitative Finance, Data Science
Freelance
Data Scientist | Researcher
Present
Remote
  • Built data pipelines for data coming from multiple sources like the Quandl API and a SQL database.

  • Performed an exploratory data analysis on the assembled dataset, derived insights, and presented them to stakeholders using Jupyter Notebook and Tableau.

  • Modeled the data using decision tree-based regression models.

Amazon Web Services (AWS), Tableau, Jupyter Notebook, Redshift, NumPy, Pandas, Python, Data Science, Data Analytics, Statistical Analysis, Machine Learning, Git, Docker, Amazon EC2, APIs, Natural Language Processing (NLP), Generative Pre-trained Transformers (GPT), PostgreSQL, Jupyter, Python 3
WiseLike
CTO
Present
Remote
  • Competed at the IE Business School's startup lab and won the investors' choice award and the most innovative project award.

  • Developed the whole machine learning pipeline from scratch, starting with a web scraper for pictures, extracting properties of a picture, and training the model using the data.

  • Served the model using a REST API (Flask) on the website wiselike.pythonanywhere.com.

  • Performed A/B and hypothesis testing to test the validity of the model.
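
The A/B and hypothesis testing mentioned above can be illustrated with a standard two-proportion z-test comparing conversion rates of two variants; the counts below are illustrative, not real experiment data.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Z statistic for comparing the conversion rates of variants A and B."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: variant A converts 120/1000, variant B converts 90/1000.
z = two_proportion_z(120, 1000, 90, 1000)
```

A |z| above roughly 1.96 rejects the null of equal rates at the 5% level, which is the usual decision rule for such a test.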

Next Sapiens
Research Intern
Present
Remote
  • Developed a novel four-degrees-of-freedom (4-DoF) solution for the simultaneous localization and mapping of an unmanned aerial vehicle to reduce computation cost, and published research on it (ieeexplore.ieee.org/document/6461785).

  • Combined location data from sources such as LIDAR, proximity sensors, inertial measurement units, and a camera using extended Kalman filters to update the robot's state information.

  • Developed a fuzzy logic-based PID controller for the unmanned aerial vehicle to maintain stability during flight.
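
The sensor-fusion idea behind the Kalman filtering above reduces, in the scalar case, to blending a prior estimate with a noisy measurement weighted by their variances. This is a one-dimensional simplification of the extended Kalman filter used on the UAV, with toy numbers.

```python
def kalman_update(estimate, variance, measurement, meas_variance):
    """Scalar Kalman update: blend a prior estimate with a noisy measurement."""
    gain = variance / (variance + meas_variance)          # trust the measurement more
    new_estimate = estimate + gain * (measurement - estimate)
    new_variance = (1 - gain) * variance                  # uncertainty always shrinks
    return new_estimate, new_variance

# Fuse an IMU-based position estimate with a LIDAR range reading (toy numbers).
est, var = kalman_update(estimate=10.0, variance=4.0, measurement=12.0, meas_variance=1.0)
```

With these numbers the gain is 0.8, so the fused estimate moves most of the way toward the (more precise) LIDAR reading while the variance drops from 4.0 to 0.8.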

Portfolio

Churn Prediction for a Book Publisher

The problem involved predicting which classes were likely to switch from the publisher's books to online material. After feature engineering using a genetic algorithm and clustering, a random forest model achieved the best prediction results.
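
The genetic-algorithm step can be sketched as a search over feature subsets: bitmask individuals, crossover, mutation, and elitist selection. The fitness function below is a toy stand-in (the real one would be a cross-validated model score), and the "informative" feature indices are invented for illustration.

```python
import random

random.seed(42)

# Toy setup: 8 candidate features; pretend features 0, 3, and 5 are informative.
INFORMATIVE = {0, 3, 5}
N_FEATURES = 8

def fitness(mask):
    """Reward selecting informative features, penalize extras (stand-in for a CV score)."""
    chosen = {i for i in range(N_FEATURES) if mask[i]}
    return len(chosen & INFORMATIVE) - 0.2 * len(chosen - INFORMATIVE)

def evolve(pop_size=20, generations=30, mutation_rate=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # elitism: keep the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < mutation_rate:
                i = random.randrange(N_FEATURES)
                child[i] ^= 1                    # flip one bit
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because selection is elitist, the best fitness never decreases across generations, and the search converges toward the informative subset.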

Stock Suggestions | Distributed System with PySpark

This project explores the relationship between a company's financials and its stock market performance, and attempts to identify cheap buying opportunities for various risk profiles. Because the dataset was large, it was stored in a distributed file system, and PySpark was used for the transformations.
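
The screening logic is a map/filter pattern; the sketch below shows it in plain Python with invented fundamentals, and the same shape translates directly to PySpark's `rdd.map` / `rdd.filter` on the distributed dataset.

```python
# Toy fundamentals: (ticker, earnings_per_share, price). Values are illustrative.
rows = [
    ("AAA", 5.0, 40.0),
    ("BBB", 2.0, 90.0),
    ("CCC", 8.0, 56.0),
    ("DDD", -1.0, 10.0),
]

def pe_ratio(row):
    """Map step: compute price-to-earnings, or None when earnings are non-positive."""
    ticker, eps, price = row
    return (ticker, price / eps if eps > 0 else None)

ratios = [pe_ratio(r) for r in rows]                       # rdd.map(pe_ratio)
cheap = [t for t, pe in ratios if pe is not None and pe < 10]  # rdd.filter(...)
```

The cutoff of 10 is an arbitrary value-screen threshold for the example; a risk-profile version would parameterize it.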

Word Recommendation System for Movie and Series Reviews

This is a natural language processing project in which we used methods such as part-of-speech tagging, named-entity recognition, readability scoring, sentiment scoring, topic modeling, and more to train a regression model on good and bad reviews scraped from websites covering different topics. Recommendations were made based on how the various features impacted the score and what measures could be taken to improve it.
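
The feature-extraction stage can be sketched with a toy extractor: word counts, average word length as a crude readability proxy, and a lexicon-based sentiment score. The word lists here are invented placeholders for a real sentiment lexicon.

```python
def review_features(text,
                    positive={"great", "good", "excellent"},   # toy lexicon
                    negative={"bad", "boring", "poor"}):
    """Toy feature extractor: length, average word length, lexicon sentiment."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    sentiment = sum(w in positive for w in words) - sum(w in negative for w in words)
    avg_len = sum(len(w) for w in words) / len(words)
    return {"n_words": len(words), "avg_word_len": round(avg_len, 2), "sentiment": sentiment}

feats = review_features("A great cast but a boring, poor plot.")
```

In the real project these features (plus POS, NER, and topic features) form the design matrix for the regression model over review scores.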

SQL Database for North American Oil and Gas and Visualization through Tableau

I developed the database using ETL processes on data from online resources, normalized the data into a star schema using MySQL Workbench, and visualized the output in Tableau.

Machine Learning Model to Suggest Better Pictures for Social Media

I created a database by scraping the web for pictures, then trained a machine learning model on image characteristics and like counts from social media to suggest which picture performs better. I also deployed the model using a Flask API.

Generating Insights in Stock Market Data

I created data pipelines to merge data from multiple sources, including several data APIs and a PostgreSQL database, then performed exploratory data analysis and modeling on the merged data to derive new insights, running JupyterLab on an AWS EC2 instance.

Predicting the Probability of a Default of a Company to Make Loan Decisions

The project involved retrieving a company's financial data from a database and building a random forest model. The project also scoped a variable interest rate based on the probability of default for different sectors. Finally, the model was deployed using a Flask API.
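
The variable-rate idea can be sketched as risk-based pricing: charge the base rate plus the expected loss implied by the predicted probability of default. The recovery rate and margin below are assumed illustrative values, not figures from the project.

```python
def loan_rate(base_rate, prob_default, recovery=0.4, margin=0.02):
    """Price a loan to cover expected loss (PD x loss-given-default) plus a fixed margin."""
    expected_loss = prob_default * (1 - recovery)   # loss-given-default = 1 - recovery
    return base_rate + expected_loss + margin

# A borrower with a 10% predicted default probability, 5% base rate.
rate = loan_rate(base_rate=0.05, prob_default=0.10)
```

Sector-level variation then comes from feeding each sector's model-predicted default probability into the same formula.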

Live Tweet Sentiment Tracking

The project involved ingesting live tweet data via the Twitter API into Kafka topics. Spark Streaming then consumed the topics as a subscriber and performed sentiment analysis and feature engineering. The data was aggregated, passed to a Shiny dashboard, and stored in a MongoDB database.
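
The aggregation step is essentially a tumbling time window over sentiment scores. The sketch below shows that logic in plain Python, without Kafka or Spark, on invented (timestamp, score) events; Spark Streaming's `window()` performs the same grouping on the live stream.

```python
from collections import defaultdict

# (unix_timestamp, sentiment_score) pairs as they might arrive from the stream.
events = [(0, 0.5), (10, -0.2), (65, 0.9), (70, 0.1), (130, -0.4)]

def window_average(events, window_seconds=60):
    """Average sentiment per tumbling time window."""
    buckets = defaultdict(list)
    for ts, score in events:
        buckets[ts // window_seconds].append(score)   # bucket index = window number
    return {w: sum(v) / len(v) for w, v in sorted(buckets.items())}

agg = window_average(events)
```

Each window's average is what gets pushed to the dashboard and persisted to MongoDB.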

Cancer Prediction Using VOC Data

This project builds on research showing that volatile organic compounds (VOCs) released by humans have predictive power for cancer. It uses a VOC database with labeled cancer data, and the resulting model is deployed via a Flask API that predicts the cancer type from the VOC content.

Sales Forecast Model for FMCG, Taking the COVID Scenario Into Account

I developed a sales forecasting model for an FMCG client. We trained an ARIMA model and decomposed the data into its component sine waves using an FFT. These components, along with external factors, were then fed to a machine learning model to predict sales.
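
The FFT decomposition identifies the dominant cycles in the sales series. As a self-contained illustration, the sketch below implements a naive discrete Fourier transform (a stand-in for `numpy.fft.fft`) and recovers the frequency of a synthetic weekly cycle.

```python
import cmath
import math

def dft(signal):
    """Naive discrete Fourier transform (stand-in for numpy.fft.fft)."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A synthetic series with one cycle every 7 samples over 28 samples: k = 28/7 = 4.
n = 28
signal = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
spectrum = dft(signal)
dominant = max(range(1, n // 2 + 1), key=lambda k: abs(spectrum[k]))
```

The magnitudes of the strongest bins (and the sinusoids they correspond to) are the periodic features that were fed to the downstream model.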

Time Series Forecasting

Built an ensemble model that combined outputs from deep learning time series models (N-HiTS and N-BEATS) with a traditional linear regression; the ensemble outperformed all the existing forecasts. Also built a Twitter scraper to collect tweet data for the products and their associated sentiment.
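
One common way to combine such forecasts is to weight each model inversely to its validation error. The sketch below assumes this inverse-error scheme (the project's exact combination method isn't specified here), with made-up error and forecast numbers.

```python
def blend_weights(errors):
    """Weights inversely proportional to each model's validation error; sum to 1."""
    inv = [1 / e for e in errors]
    total = sum(inv)
    return [w / total for w in inv]

def ensemble(forecasts, weights):
    """Weighted average of per-horizon forecasts from several models."""
    return [sum(w * f[i] for w, f in zip(weights, forecasts))
            for i in range(len(forecasts[0]))]

# Hypothetical validation MAEs for, say, N-HiTS, N-BEATS, and a linear model.
weights = blend_weights([2.0, 2.0, 4.0])
combined = ensemble([[100, 110], [104, 114], [90, 96]], weights)
```

The better models (lower error) dominate the blend, while the weaker one still contributes, which is what typically lets an ensemble beat each member.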

End-to-end NLP Model Deployment

Trained BERT-based solutions for the given use case, built APIs to allow interaction with external modules, and Dockerized the whole application. Connected it with AWS and GCP services such as Lambda and container registries to achieve autoscaling of the API.

Education

Accelerated General Management Program in General Management
IIM Ahmedabad
2022 - 2023 (1 year)
Master's Degree in Business Analytics and Big Data
IE Business School
2019 - 2020 (1 year)
Bachelor of Technology Degree in Electrical Engineering
Indian Institute of Technology (ISM), Dhanbad
2009 - 2013 (4 years)