Mesh - Verifying Professionals
I was part of a team that developed a service for a software company to improve the process of verifying professionals. Our goal was to create a system that would help customers quickly verify their professional licenses without contacting support directly. To achieve this, we developed an on-demand live crawler that covered more than 100 websites, interpreted customer queries, and returned the matching license information.

To keep the service reliable and scalable, we deployed it on a cloud platform and set up a monitoring system to track performance and quickly identify any issues. We also regularly reviewed customer feedback and improved the service's responses based on user input.

We decided to expose the service through a REST API so users could access it easily, which required me to build a robust back-end in Python using the FastAPI web framework. We collaborated closely as a team throughout the development process, and to ensure the system was reliable and efficient, I performed extensive testing and evaluation of the data processing pipeline, including edge cases and performance issues.
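To give a sense of the shape of that FastAPI back-end, here is a minimal sketch of an on-demand verification endpoint. The route, models, and crawler helper are illustrative assumptions, not the production code:

```python
# Minimal sketch of an on-demand license-verification endpoint.
# Route names, models, and the crawler helper are illustrative.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class LicenseQuery(BaseModel):
    profession: str       # e.g. "nurse"
    state: str            # e.g. "CA"
    license_number: str

class LicenseResult(BaseModel):
    holder_name: str
    status: str           # e.g. "active" or "expired"
    source_url: str

async def crawl_license_board(query: LicenseQuery) -> LicenseResult | None:
    """Hypothetical helper: pick the licensing-board site for the
    profession/state and scrape it live, returning None when no record
    is found. Stubbed here; the real crawler covered 100+ sites."""
    return None

@app.post("/verify", response_model=LicenseResult)
async def verify_license(query: LicenseQuery) -> LicenseResult:
    result = await crawl_license_board(query)
    if result is None:
        raise HTTPException(status_code=404, detail="License not found")
    return result
```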
REST API for Climate-related Project
As a Python developer, I helped create a REST application for a climate-related company that allowed its users to detect losses in building costs related to climate change, such as from flooding and heating. I also played a crucial role in restructuring and refactoring the existing codebase to improve its efficiency and scalability.

To start, I worked closely with the company's product team to understand the requirements and goals of the new application. We decided to expose the data through a REST API, which required me to build a robust back-end in Python using the FastAPI web framework.

After completing the initial development, I began restructuring the existing codebase. This involved identifying the areas of code that were slowing down the application and rewriting them to be more performant. I also simplified the codebase and removed unnecessary dependencies, which made the code more maintainable and easier to work with. Through this process, I significantly improved the application's performance and made it easier to add new features and functionality.
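A representative example of the kind of performance rewrite this involved is replacing row-by-row Python loops over a DataFrame with vectorized operations. The code below is a hypothetical before/after illustration, not the company's actual codebase:

```python
# Hypothetical before/after illustrating the performance refactor
# described above; column names and the loss formula are invented.
import pandas as pd

def flood_loss_slow(buildings: pd.DataFrame, rate: float) -> pd.Series:
    # Before: row-by-row Python loop, interpreter overhead on every row.
    losses = []
    for _, row in buildings.iterrows():
        losses.append(row["value"] * rate * row["flood_exposure"])
    return pd.Series(losses, index=buildings.index)

def flood_loss_fast(buildings: pd.DataFrame, rate: float) -> pd.Series:
    # After: one vectorized expression; pandas pushes the work into
    # NumPy, typically orders of magnitude faster on large frames.
    return buildings["value"] * rate * buildings["flood_exposure"]

if __name__ == "__main__":
    df = pd.DataFrame({"value": [250_000, 480_000],
                       "flood_exposure": [0.10, 0.35]})
    print(flood_loss_fast(df, rate=0.02))
```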
Juststream | Video Streaming Platform
I am the founder of the Juststream.live video streaming platform. Built on AWS with a serverless architecture, the platform is highly scalable and performant and served 500,000 monthly users. The serverless design allowed me to focus on building features and functionality rather than managing servers. The platform leveraged AWS services such as Lambda, API Gateway, AWS MediaConvert, Elastic Load Balancing, and CloudFront to provide users with a highly secure, reliable, and low-latency video streaming experience. I also implemented monitoring and logging solutions to keep the platform's performance and health under constant watch. The project demonstrated the power of serverless architecture and the capabilities of AWS services.
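As a rough illustration of how the ingest side of such a pipeline can be wired together, a Lambda function triggered by an S3 upload can submit a MediaConvert transcode job. The template name, role ARN, and bucket layout below are placeholders, not Juststream's actual configuration:

```python
# Sketch of an S3-triggered Lambda that submits a MediaConvert job.
# Role ARN and job template name are placeholders.
from urllib.parse import unquote_plus

import boto3

def lambda_handler(event, context):
    record = event["Records"][0]["s3"]
    key = unquote_plus(record["object"]["key"])  # S3 event keys are URL-encoded
    input_uri = f"s3://{record['bucket']['name']}/{key}"

    # MediaConvert requires an account-specific endpoint.
    mc = boto3.client("mediaconvert")
    endpoint = mc.describe_endpoints()["Endpoints"][0]["Url"]
    mc = boto3.client("mediaconvert", endpoint_url=endpoint)

    job = mc.create_job(
        Role="arn:aws:iam::123456789012:role/MediaConvertRole",  # placeholder
        JobTemplate="hls-streaming-template",                    # placeholder
        Settings={"Inputs": [{"FileInput": input_uri}]},
    )
    return {"jobId": job["Job"]["Id"]}
```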
Big Data Collection and Management for a Social Media Platform
As a Python developer, I was tasked with writing a REST API and a Big Data crawler to collect information on more than 100 million users from different social media platforms for a marketing research company.

To start, I researched the social media platforms and identified the data points relevant to the marketing research project. I then wrote a Big Data crawler in Python that could collect this data from multiple sources, process it, and store it in a scalable database. To collect the data efficiently and reliably, I designed the crawler to run on multiple servers in parallel so it could handle a large volume of data in a timely manner.

Once the data was collected and processed, I wrote a fast and reliable REST API in Python to allow the marketing research company to access and analyze the data easily. I deployed the project on AWS using Elastic Beanstalk, which helped ensure it could handle a large volume of traffic and data.

I worked closely with the marketing research company to ensure the project met their needs and goals, and I performed testing to verify that the system was reliable and secure.
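The core of a crawler like this is bounded concurrency per worker, with the URL list sharded across servers. Here is a minimal sketch of that pattern using asyncio and aiohttp; the endpoint and concurrency figure are hypothetical, and a real deployment would also respect each platform's rate limits and terms of service:

```python
# Parallel-fetch sketch: bounded concurrency within one process;
# sharding across servers happens outside this snippet.
import asyncio

import aiohttp

CONCURRENCY = 50  # per-process cap; tune per target site

async def fetch_profile(session, sem, url):
    async with sem:
        async with session.get(url) as resp:
            resp.raise_for_status()
            return await resp.json()  # assumes a JSON profile endpoint

async def crawl(urls):
    sem = asyncio.Semaphore(CONCURRENCY)
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_profile(session, sem, u) for u in urls]
        # return_exceptions=True so one bad URL doesn't kill the batch
        return await asyncio.gather(*tasks, return_exceptions=True)

if __name__ == "__main__":
    results = asyncio.run(crawl(["https://example.com/api/users/1"]))
    print(results)
```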
US Statutes and Laws
As a Python developer, I took part in building a scalable web crawler to collect US statutes and laws from various legal websites. This involved designing a crawler architecture that could handle large amounts of data and execute efficiently.

To begin, I worked with the product team to identify the websites containing the desired legal data, then created a crawler using Python libraries such as Scrapy and Selenium to scrape the web pages and store the data in a format that could easily be used in further processing. To keep the crawler efficient and scalable, I deployed it on AWS, which allowed us to process large amounts of data quickly and reliably.

Once the crawler was complete, I created a REST API in Flask that provided authorized users access to the collected data. I designed the API to be fault-tolerant, with data replication and load balancing, so it could handle high traffic levels without downtime.

Throughout the development process, I worked closely with the product team to ensure the final product met their requirements and goals, and I performed testing and monitoring to verify that the system was reliable and secure.
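For the Scrapy side, a statute spider boils down to extracting section titles and body text and following pagination. The sketch below is illustrative; the URL and CSS selectors are placeholders, not the sites actually scraped:

```python
# Illustrative Scrapy spider for a statute crawl; the start URL and
# selectors are placeholders for whatever each legal site uses.
import scrapy

class StatuteSpider(scrapy.Spider):
    name = "statutes"
    start_urls = ["https://example-legislature.gov/statutes/"]

    def parse(self, response):
        for section in response.css("div.statute-section"):
            yield {
                "title": section.css("h2::text").get(),
                "text": " ".join(section.css("p::text").getall()),
                "url": response.url,
            }
        # Follow pagination links, if any.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```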
Allergen Checking on Products Using ML
As a Python developer, I built a pipeline for a software engineering project that checked products' Amazon descriptions for allergens and other key information. The pipeline consisted of data extraction, cleaning, preprocessing, and deployment.

I researched the data points to extract from Amazon product pages, then used Python libraries such as Requests, Selenium, and Pandas to extract the data and clean it for further processing. After cleaning the data, I developed a robust system that could efficiently process large datasets of product descriptions to identify the presence of allergens and other key information.

Once the pipeline was developed and tested, I deployed it behind a REST API using Flask, which allowed users to enter an Amazon product URL and retrieve information on allergens and other details related to the product's ingredients and nutritional facts.

Throughout the development process, I worked closely with the product team to ensure the system met their requirements and goals, and I performed testing and monitoring to verify that the system was reliable and secure.
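To show the shape of that Flask endpoint, here is a minimal sketch. The allergen list, the simplified fetch helper, and the response format are illustrative assumptions; the real pipeline used Selenium and an ML-backed matcher rather than plain keyword search:

```python
# Minimal Flask sketch of the allergen-check API; allergen list,
# fetch helper, and response shape are illustrative assumptions.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

ALLERGENS = {"peanut", "milk", "soy", "wheat", "egg", "tree nut"}

def fetch_description(url: str) -> str:
    # Simplified: in practice Selenium and per-page parsing were needed;
    # here we just pull the raw page text.
    resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
    resp.raise_for_status()
    return resp.text

@app.route("/check", methods=["POST"])
def check_allergens():
    url = request.get_json().get("url", "")
    description = fetch_description(url).lower()
    found = sorted(a for a in ALLERGENS if a in description)
    return jsonify({"url": url, "allergens_found": found})

if __name__ == "__main__":
    app.run()
```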