I’m Yeshwanth Gowd Sreerama, Senior Data Engineer based in the USA

I have 6+ years of experience as a Data Engineer working with Databricks and Apache Spark.

My Certifications

Solving real problems with
a constantly expanding stack


My Experience

Senior Data Engineer Messer Americas Dec 2022 / Present
Apache Spark
Azure Data Factory
Databricks
Azure Machine Learning
PySpark
🌱 Responsibilities
  • Developed scalable data pipelines with the Delta Live Tables declarative framework, using PySpark and Spark SQL to automate historical data tracking and real-time Spark streaming, and built AWS Lambda functions in Rust.
  • Orchestrated data ingestion pipelines in Azure Data Factory across 20+ heterogeneous sources into Azure Blob Storage, implemented a Hive metastore to Unity Catalog migration for data governance and metadata management, and applied file partitioning strategies including liquid clustering to improve query performance.
  • Processed streaming data from Kafka with the Spark Streaming API to flatten complex datasets, and implemented ML pipelines for risk modeling through Azure Machine Learning workflows.
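Flattening nested event payloads like the Kafka records above is essentially a recursive walk over the structure. A minimal sketch in plain Python, assuming a JSON-style dict payload (the Spark version would instead use `explode` and nested column selection; the event fields shown are hypothetical):

```python
def flatten(record: dict, parent: str = "", sep: str = "_") -> dict:
    """Recursively flatten a nested dict into a single-level dict,
    joining nested keys with `sep` (e.g. {"a": {"b": 1}} -> {"a_b": 1})."""
    flat = {}
    for key, value in record.items():
        name = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            flat.update(flatten(value, name, sep))
        else:
            flat[name] = value
    return flat

# Hypothetical nested event, similar in shape to a Kafka JSON payload
event = {"id": 42, "meta": {"source": "sensor", "geo": {"lat": 40.7, "lon": -74.0}}}
print(flatten(event))
# → {'id': 42, 'meta_source': 'sensor', 'meta_geo_lat': 40.7, 'meta_geo_lon': -74.0}
```

The same key-joining convention (`parent_child`) is what Spark produces when you select nested columns with an alias, which keeps the two code paths consistent.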
✨ Key Results
  • Refactored and streamlined the codebase for 900+ data pipelines programmatically using Python and shell scripting, saving an estimated 500+ engineering hours as part of a company-wide initiative to standardize pipeline management.
  • Imported and exported data across SQL Server instances using SSIS and T-SQL, ensuring data integrity and reducing processing errors; scheduled maintenance jobs and re-indexed databases to optimize query performance.
  • Integrated Apache Kafka with Spark Streaming and batch jobs to process real-time and historical data, leveraging PySpark, Pandas UDFs and NumPy for data transformations and reducing processing latency.
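The Pandas/NumPy transformations mentioned above are column-at-a-time functions of the kind a Spark Pandas UDF body applies. A hedged sketch using only pandas and NumPy (the metric name and thresholds are hypothetical; in Spark this function would be wrapped with `pandas_udf`):

```python
import numpy as np
import pandas as pd

def normalize_latency(col: pd.Series) -> pd.Series:
    """Log-scale raw latency values (ms) and clip outliers, using NumPy
    ufuncs so the whole column is processed without a Python-level loop —
    the shape of transform typically used inside a Spark Pandas UDF."""
    return pd.Series(np.clip(np.log1p(col.to_numpy()), 0.0, 10.0), index=col.index)

latencies = pd.Series([0.0, 9.0, 99.0])
print(normalize_latency(latencies).round(3).tolist())  # → [0.0, 2.303, 4.605]
```

Keeping the transform as a pure `Series -> Series` function means the same code can be unit-tested locally and then registered as a UDF unchanged.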
Data Engineer Pfizer Feb 2019 / Nov 2021
Python
Java
Databricks
Azure Data Factory
🌱 Responsibilities
  • Automated data quality checks by developing Python scripts in Azure Functions and leveraging Delta Lake features (time travel queries, vacuum, schema evolution) to ensure accurate data cleaning and reliable downstream analytics.
  • Developed Kafka-to-Delta Lake pipelines using the Apache Flink DataStream API and Kafka-Delta-Ingest to process real-time data, orchestrated workflows with Apache Airflow, and worked with CI/CD automation frameworks and version control systems like Git.
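An automated data-quality check of the kind described above reduces to a rule table of predicates applied row by row. A minimal plain-Python sketch (the rules and field names are hypothetical; the production version ran as an Azure Function against Delta tables):

```python
from typing import Callable

# Hypothetical rule set: each rule maps a field name to a validity predicate
RULES: dict[str, Callable[[object], bool]] = {
    "id": lambda v: isinstance(v, int) and v > 0,
    "temperature": lambda v: isinstance(v, (int, float)) and -50 <= v <= 150,
}

def quality_check(rows: list[dict]) -> list[str]:
    """Return a list of human-readable violations; an empty list means clean."""
    violations = []
    for i, row in enumerate(rows):
        for field, is_valid in RULES.items():
            if field not in row:
                violations.append(f"row {i}: missing field '{field}'")
            elif not is_valid(row[field]):
                violations.append(f"row {i}: bad value for '{field}': {row[field]!r}")
    return violations

rows = [{"id": 1, "temperature": 21.5}, {"id": -3, "temperature": 999}]
print(quality_check(rows))
# → ["row 1: bad value for 'id': -3", "row 1: bad value for 'temperature': 999"]
```

Emitting violations as data rather than raising immediately lets the check log every problem in a batch and route the report downstream, which suits a scheduled serverless run.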

Let's talk

I'm excited to apply my skills to your projects. Contact me to learn more about how I can contribute.
You can also hit me up in
any of these places šŸ¤™šŸ¾
Find me at:
Yeshwanth Gowd Sreerama
