Job Description:
What will you do?
As Software Engineering Manager of the Data Platform team, you will take a central role in orchestrating, overseeing, and leading the many aspects of our challenging journey toward a new, modernized big data Lakehouse platform (petabytes of data), built using Databricks on Google Cloud (GCP). The challenges include streaming ingestion at massive scale, providing a platform for processing structured and unstructured data, security and compliance at enterprise scale, data governance, optimizing performance, storage, and cost, and many more.
You will lead, mentor, guide, recruit, and manage a team of experienced Data Engineers and be responsible for enabling our Big Data platform, which serves developers, data engineers, data analysts, product managers, data scientists, and ML engineers.
You will work closely with our Data PM to lead our Data Strategy. In that role you will learn how data serves our goals; find ways to improve the terabytes of data we process daily while maintaining high data quality; guide other R&D teams and share best practices; conduct POCs with the latest data tools; and, in doing so, help our clients make smarter decisions that continuously improve their ad-impression quality.
You will find ways to influence and impact a team that uses a wide array of languages and technologies, among them Python, Scala, SQL, GCP, Databricks, Spark, Kafka, Docker, Kubernetes, Terraform, Prometheus, GitLab, and more.
Job Qualifications:
4+ years of both people and technical management experience, leading a platform/infra/backend/data engineering team at high-scale companies
A versatile “go-to” tech geek, passionate about learning and sharing the latest and greatest Big Data technologies (managed and open source) and using them to deliver state-of-the-art, cost-effective solutions
A team player with great interpersonal and communication skills
A leader by example
Actively seeking ways to improve development velocity and processes, remove bottlenecks, and help those around you grow
4+ years of experience with one of the following languages: Python, Scala, or another JVM language
Able to make hard decisions with a can-do attitude
Hands-on, in-depth experience with at least two of: Kafka, Kafka Streams, Spark, Spark Structured Streaming
Familiarity with SQL/NoSQL databases and the main data architectures
Experience working with a public cloud provider such as GCP/AWS/Azure
Company Occupation:
Internet-related, Software
Company Size:
500+