Data Engineer – OpenShift / Snowflake

We are looking for a Data Engineer with strong experience in Python, PySpark, SQL, Snowflake, and OpenShift to build scalable data pipelines and enterprise data platforms.

Key Responsibilities

  • Develop and maintain ETL/ELT pipelines using Python, PySpark, and SQL.
  • Build data pipelines integrating Snowflake and enterprise Data Lakes.
  • Deploy and manage containerized applications on OpenShift.
  • Optimize data processing workflows and ensure data reliability.

Key Requirements

  • Strong experience in Python, PySpark, SQL, and Snowflake.
  • Hands-on experience with Azure Data Factory and Data Lake architectures.
  • Experience working with OpenShift or Kubernetes environments.
  • Knowledge of data governance and security best practices.

Why CoreKonnect?

You're more than your skills, and your recruiter should act like a partner, not a salesperson.

We're a human-first recruitment agency — empowering people and elevating business through meaningful opportunities.
