Job Description:
Design and architect data systems from scratch through production to support the platform (scalable, reliable, high-performance data processing)
Design, develop, and maintain ETL workflows for different data sources
Troubleshoot and resolve issues related to data integration
Optimize performance and query execution in large-scale environments
Analyze raw data and maintain datasets tailored to business requirements
Collaborate with internal and external stakeholders to resolve data-related issues (data quality, data accessibility, etc.)
Job Qualifications:
5+ years of experience as a data engineer, or as a software engineer in the data domain
5+ years of experience with Python and its data libraries
3+ years of experience with cloud providers (GCP and/or AWS)
5+ years of experience with data modeling, ETL processes, data warehousing, and BI
Practical knowledge of SQL scripting
Experience with big data technologies and data streaming platforms (e.g., Kafka)
Proven experience working with microservices and a GitOps approach
Experience with monitoring procedures and data quality tools
Experience with BigQuery
Experience developing large-scale infrastructure or distributed systems
Practical experience setting up CI/CD for data pipelines using Terraform or similar tools
Strong understanding of functional programming
Industry:
High Tech