Tiago is a Senior AI/ML Engineer and Data Scientist with 20+ years of experience in Machine Learning, Data Science, and Data Engineering, specializing in building and deploying AI models in production. He has worked with enterprise clients such as Gartner and Nylas, developing AI/ML solutions, NLP applications, and scalable data pipelines using PyTorch, Keras, Scikit-Learn, and Apache Spark. With expertise in AI architecture, Big Data processing, and cloud-based deployments, he has applied BERT to sentiment analysis and optimized AI-driven solutions. Tiago collaborates with cross-functional teams, ensuring seamless AI integration into business operations while following ethical AI practices. Proficient in Python, SQL, and Java, he is passionate about advancing AI innovation and deploying scalable AI models.
Developing Machine Learning models and NLP solutions that drive revenue growth, reduce non-conversion calls, and improve client service.
Extracting insights from large datasets, visualizing data trends, and improving overall model performance using Pandas, Matplotlib, NumPy, SciPy, and Polars.
Applying BERT to sentiment analysis of transcribed calls to improve the quality of client service and satisfaction (a minimal sketch follows this list).
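As referenced above, a minimal sketch of what BERT-based sentiment scoring of call transcripts can look like, assuming transcripts arrive as plain-text strings; the model name and the sample transcripts are illustrative placeholders, not the production setup:

```python
# Hedged sketch: score call transcripts with a BERT-family sentiment model.
# The model choice and the `transcripts` list are placeholders for illustration.
from transformers import pipeline

# A BERT-family model fine-tuned for binary sentiment classification.
sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

transcripts = [
    "Thanks, that solved my problem quickly.",
    "I have been waiting for a callback for two weeks.",
]

# Each result carries a label (POSITIVE/NEGATIVE) and a confidence score.
for text, result in zip(transcripts, sentiment(transcripts)):
    print(f"{result['label']:>8} ({result['score']:.2f}) <- {text}")
```

Per-call labels and scores like these can then be aggregated per agent or per account to track service quality over time.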
Built an ETL process on AWS, using Glue with PySpark for data wrangling, S3 for storing intermediary data, and Redshift for the final data warehouse, facilitating integration between disparate data sources (a minimal Glue sketch follows this list).
Leveraged GCP tools like BigQuery and Looker to create dashboards.
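As referenced above, a minimal AWS Glue (PySpark) sketch of that ETL pattern: read raw events from S3, clean them, and load the result into Redshift. The bucket paths, the "redshift-conn" Glue connection, and the table names are hypothetical placeholders:

```python
# Hedged Glue ETL sketch: S3 -> transform -> Redshift. Names are illustrative.
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext
from pyspark.sql import functions as F

glue = GlueContext(SparkContext.getOrCreate())

# Extract: raw JSON events staged in S3.
raw = glue.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-bucket/raw/events/"]},
    format="json",
)

# Transform: drop malformed rows and normalize the timestamp column.
df = (raw.toDF()
         .where(F.col("event_id").isNotNull())
         .withColumn("event_ts", F.to_timestamp("event_ts")))

# Load: write the final table to Redshift, staging through S3.
glue.write_dynamic_frame.from_jdbc_conf(
    frame=DynamicFrame.fromDF(df, glue, "events_clean"),
    catalog_connection="redshift-conn",          # hypothetical Glue connection
    connection_options={"dbtable": "analytics.events", "database": "dw"},
    redshift_tmp_dir="s3://example-bucket/tmp/",
)
```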
Worked with Pandas, Matplotlib, NumPy, SciPy, and Polars to extract insights from large datasets, visualize data trends, and improve overall model performance.
Developed Machine Learning models using XGBoost, CatBoost, and LSTM networks to reduce the percentage of non-conversion calls between Gartner and its clients (see the classifier sketch after this list).
Applied BERT to sentiment analysis of transcribed calls, improving the quality of client service and satisfaction.
Used Pandas, Matplotlib, NumPy, SciPy, and Polars to extract insights from large datasets, visualize data trends, and improve overall model performance.
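As referenced above, a hedged sketch of a call-conversion classifier in the spirit of the XGBoost work; the CSV file and the feature names are invented for illustration and are not Gartner data:

```python
# Hedged sketch: predict which calls will convert, so low-probability
# (likely non-converting) calls can be triaged. Data and features are invented.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

calls = pd.read_csv("calls.csv")  # hypothetical extract of call features
features = ["duration_sec", "num_prior_calls", "hour_of_day"]
X_train, X_test, y_train, y_test = train_test_split(
    calls[features], calls["converted"], test_size=0.2, random_state=42
)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

# Evaluate ranking quality on the holdout set.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout AUC: {auc:.3f}")
```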
Worked on a data pipeline for CERN, wrangling data and evaluating a series of hyperparameters to select a new algorithm for the ATLAS experiment after the LHC upgrade.
Used Scikit-Learn to run simulations and applied ML techniques such as Linear Regression and XGBoost to specific parts of the CERN project.
Developed one API to send GPS data to the Waze platform and another to consume data from the Waze platform for storage in MongoDB.
Working as the technical coordinator of the UEM (Municipal Execution Unit), validating technical specifications for the national program that supports the administrative and fiscal management of Brazilian municipalities.
Worked on the Waze project for the City Hall of Juiz de Fora, through which the municipality sends information about city events to the Waze database and receives feedback from users.
Assisting with updates to the LOA (annual budget law) and PPA (multi-year government plan) and maintaining the PDTI (IT master plan).
Designed SAP PI/XI interfaces to legacy accounting, human resources, invoicing, and production systems, implementing several SAP PI/XI features in Java.
Coordinated a team of 13 third-party developers building interfaces to the legacy systems, based on specifications I produced after performing user analysis.
Assisted key users in the Financial and Administrative department in mapping the area's processes during the Business Blueprint phase.
Developed multiple software projects in Java, including an HR system.
Worked on scientific software that applied genetic algorithms, simulated annealing, and other heuristic techniques to generate new architectures that improve how processing elements are placed and routed on FPGAs (a toy placer sketch follows this list).
Translated software requirements into working and maintainable solutions.
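As referenced above, a toy simulated-annealing placer illustrating the FPGA placement idea: swap two processing elements on a grid and accept worse placements with a temperature-dependent probability. The grid size and netlist are invented for illustration:

```python
# Toy simulated-annealing placer: minimize total Manhattan wirelength of a
# tiny invented netlist on a 4x4 grid of sites.
import math
import random

GRID = 4                                  # 4x4 grid of sites
nets = [(0, 1), (1, 2), (2, 3), (0, 3)]   # hypothetical two-pin nets
cells = list(range(4))

# Start from a random placement: cell -> (row, col).
sites = [(r, c) for r in range(GRID) for c in range(GRID)]
random.shuffle(sites)
place = {cell: sites[i] for i, cell in enumerate(cells)}

def wirelength(p):
    """Total Manhattan length of all nets under placement p."""
    return sum(abs(p[a][0] - p[b][0]) + abs(p[a][1] - p[b][1]) for a, b in nets)

temp, cost = 10.0, wirelength(place)
while temp > 0.01:
    a, b = random.sample(cells, 2)
    place[a], place[b] = place[b], place[a]        # propose a swap
    new_cost = wirelength(place)
    if new_cost > cost and random.random() >= math.exp((cost - new_cost) / temp):
        place[a], place[b] = place[b], place[a]    # reject: undo the swap
    else:
        cost = new_cost                            # accept
    temp *= 0.995                                  # geometric cooling

print(f"final wirelength: {cost}")
```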
The goal of this project was to build a data pipeline that wrangled data and evaluated a series of hyperparameters in order to choose a new algorithm for the ATLAS experiment after the LHC upgrade (the LHC is the biggest machine ever built, a ring 27 km in circumference; ATLAS is one of the particle detectors that proved the existence of the Higgs boson, popularly called the "God particle"). Achieving this required horizontally scaling the solutions. Used libraries such as Pandas, SciPy, NumPy, and Matplotlib, along with the multiprocessing and concurrent.futures modules. Used Scikit-Learn to run simulations in a few specific parts of the project, applied Machine Learning techniques such as Linear Regression and XGBoost, and contributed to some parts of the code with Apache Spark.
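A minimal sketch of that kind of horizontally scaled hyperparameter sweep, using concurrent.futures to evaluate candidates in parallel; the synthetic data and the small XGBoost grid are illustrative, not the CERN configuration:

```python
# Hedged sketch: parallel hyperparameter sweep with cross-validated scoring.
from concurrent.futures import ProcessPoolExecutor
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

X, y = make_regression(n_samples=2000, n_features=10, random_state=0)

def evaluate(params):
    """Score one hyperparameter candidate with 3-fold cross-validation."""
    model = XGBRegressor(**params, n_estimators=100)
    return params, cross_val_score(model, X, y, cv=3).mean()

grid = [{"max_depth": d, "learning_rate": lr}
        for d in (3, 5, 7) for lr in (0.05, 0.1, 0.3)]

if __name__ == "__main__":
    # Each candidate runs in its own process; the same pattern scales out
    # to many cores or, with Spark, to many machines.
    with ProcessPoolExecutor() as pool:
        best = max(pool.map(evaluate, grid), key=lambda r: r[1])
    print("best params:", best[0], "CV R^2:", round(best[1], 3))
```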
Waze is a GPS navigation app and a subsidiary of Google. The goal was to develop one API to send data from our GPS to the Waze platform and another to consume data from Waze and store it in our MongoDB. Used Django to prototype a data model backed by PostgreSQL with PostGIS for geo-referenced data, and also worked with libraries such as GeoPy, Pandas, Matplotlib, NumPy, and SciPy.
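A hedged sketch of the ingestion side: pull alerts from a Waze-style feed and upsert them as GeoJSON points into MongoDB. The endpoint URL and field names are hypothetical; the real Waze feed schema depends on the partner agreement:

```python
# Hedged sketch: feed -> MongoDB as GeoJSON points. URL and fields are invented.
import requests
from pymongo import MongoClient, GEOSPHERE

FEED_URL = "https://example.org/waze/feed.json"  # placeholder endpoint

client = MongoClient("mongodb://localhost:27017")
events = client.city.waze_events
events.create_index([("location", GEOSPHERE)])   # enable geo queries

for item in requests.get(FEED_URL, timeout=10).json().get("alerts", []):
    events.update_one(
        {"_id": item["uuid"]},                   # idempotent upsert per alert
        {"$set": {
            "type": item["type"],
            "location": {                        # GeoJSON uses [lon, lat] order
                "type": "Point",
                "coordinates": [item["longitude"], item["latitude"]],
            },
        }},
        upsert=True,
    )
```

The 2dsphere index makes radius and polygon queries cheap, which is what a city operations dashboard typically needs on top of this collection.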
Performed different roles, including technical coordinator on an IDB (Inter-American Development Bank) project, supervisor of IT planning, and systems analyst on multiple projects outsourced to software houses. The Waze project was linked to the City Hall of Juiz de Fora.
Education
PhD in Electrical Engineering - Used TensorFlow to implement Machine Learning algorithms and generate the results needed for my research, with access to a compute cluster well-suited to the volume of data being processed.
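A minimal sketch of the kind of TensorFlow setup this implies, assuming data-parallel training across the GPUs of a single cluster node via MirroredStrategy; the toy model and synthetic data stand in for the actual research code:

```python
# Hedged sketch: data-parallel Keras training under MirroredStrategy.
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()      # replicate across local GPUs
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(20,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Synthetic stand-in data; each batch is split across the replicas.
X = np.random.rand(1024, 20).astype("float32")
y = np.random.rand(1024, 1).astype("float32")
model.fit(X, y, batch_size=64, epochs=2, verbose=1)
```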