Our client, NISUM, is seeking an experienced and driven Snowflake Data Platform Engineer to join their Data Analytics Platform Engineering team.
Key Responsibilities
Maintain and enhance Snowflake infrastructure: account configuration, role-based access controls, usage monitoring, and platform optimization.
Develop, scale, and maintain ETL/ELT frameworks to support data ingestion and transformation processes from diverse internal and external sources.
Manage and evolve the Data Lake architecture on AWS S3, ensuring security, organization, and access standards are enforced.
Act as the gatekeeper for platform-level permissions and entitlements, ensuring consistent implementation of access policies across Snowflake, S3, and other integrated services.
Design, implement, and maintain ingestion processes from:
File shares, SFTP
Cloud storage (S3)
Relational databases (SQL Server, PostgreSQL)
NoSQL/document stores (MongoDB, DocumentDB)
Support and integrate with custom ingestion and transformation applications developed in Python, hosted on EKS and EC2.
Design, manage, and troubleshoot CI/CD pipelines using CircleCI, Octopus Deploy, and other tools for infrastructure-as-code and application delivery.
Use Git and GitHub to manage codebases, implement branching strategies, and enforce collaboration through peer reviews and version control best practices.
Collaborate with Data Engineering, DevOps, Security, and Application teams to design creative, scalable, and AWS-native solutions.
Gather requirements, articulate solution approaches, and participate in technical discussions to align solutions with business needs and platform standards.
Proactively identify opportunities for optimization, automation, and documentation to improve platform reliability and usability.
Provide 24-hour on-call support for one week (7 days) each month.
Required Skills & Qualifications
5+ years of experience in Data Engineering or Platform Engineering roles.
Proven expertise with Snowflake: infrastructure design, security model, resource management, and performance optimization.
Strong proficiency in AWS services, particularly S3, IAM, EC2, EKS, and general networking/security concepts.
Hands-on experience with CI/CD tools such as CircleCI and Octopus Deploy; familiarity with GitHub Actions is a bonus.
Proficient in Python development within data-driven environments.
Solid understanding of data ingestion patterns across structured, semi-structured, and unstructured data.
Familiarity with orchestrating workloads in Kubernetes/EKS.
Excellent troubleshooting and debugging skills across infrastructure and data pipelines.
Strong communication skills: capable of gathering requirements, proposing solutions, and collaborating effectively with cross-functional teams.
Self-motivated, proactive, and willing to go beyond assigned tasks to improve systems and processes.
Exposure to Terraform or other infrastructure-as-code tools.
Experience implementing platform observability and monitoring.
Familiarity with data governance, metadata management, and platform security best practices.
Available to work in EST, CST, or PST time zones.
Education
Bachelor’s or Master’s degree in Computer Science or a related field, or an equivalent combination of education and work experience.
What can we offer you?
Be part of international projects with a presence in North America, Pakistan, India, and Latin America.
A work environment with extensive experience in remote and distributed work, using agile methodologies.
A culture of continuous learning and development in current technologies.
A pleasant and collaborative environment with a focus on teamwork.
Access to learning platforms, Google Cloud certifications, Databricks, Tech Talks, etc.