Duties:
- We use technologies such as Python, Terraform, Kubernetes, Docker, and Pub/Sub to run event-based analytics pipelines on the Google Cloud Platform, processing new data points every day.
- We have developed an MLOps stack that lets us deploy our models into production seamlessly.
- We roll out new features through fully automated CI/CD pipelines, including code reviews and automated tests.
- We take responsibility for the full DevOps cycle, but thanks to managed cloud services and automation, we spend most of our time on new features and architecture optimisations rather than responding to ops issues.
Requirements:
- Completed degree in computer science or a related subject
- At least four years of experience in data engineering (setting up and operating data pipelines in big data/analytics environments)
- At least two years of experience with one or more of the major cloud providers (GCP, Azure, or AWS)
- Experience in software development with Python; knowledge of tools such as VS Code, PyCharm, Git, Jupyter, pip, conda, and the CLI