Key responsibilities include:
• Develop distributed real-time services with challenging scalability and reliability requirements.
• Work closely with data scientists to bring their research to production.
• Continuously search for efficient technologies and solutions, experimenting with different patterns, databases, etc.
• Build ETL and streaming pipelines, integrate with new data sources across systems, and deliver new features.
• Facilitate collaboration with other engineers, product owners, and designers to solve interesting and challenging problems across our platform.
What we expect from you:
• You care about quality and know what it means to ship high-quality code.
• Proficiency in server-side technologies; Java stack knowledge is required.
• Knowledge of database system concepts (data modeling, partitioning, indexing, joins, etc.) and experience with relational and NoSQL databases.
• Familiarity with theoretical distributed systems concepts and parallel data processing.
• Linux system and CLI skills: ssh, process monitoring, storage management, tailing logs, etc.
• Basic DevOps skills, including familiarity with cloud platforms, containerization, and CI/CD.
It's a big plus if you also bring one or more of the following:
• Hands-on experience deploying and using the Apache Big Data Stack (Hive, Spark, Kafka, etc.)
• Front-end and UI experience (React, Angular)
• Kotlin experience and a basic understanding of functional programming
• Python and Node.js experience
• Data science and data analysis knowledge
What do we offer you?
Working in an international environment with colleagues of 70 nationalities, a flat hierarchy, flexible working hours, unlimited (paid!) holidays, the latest technologies, and full ownership!