Software Engineer - Big Data
at Agolo
Agolo's mission is to develop a content summarization platform that fights information overload. Our systems currently crawl more than 500,000 documents per day, and we expect that number to grow as we add more sources. We are looking for top-notch software engineers who love working with data-intensive systems. As a data engineer at Agolo, you would design and build our data pipeline and optimize data flows across the different components of our system.
As a Data Engineer, you will:
• Develop processes supporting data transformation, data structures, metadata, dependency and workload management.
• Develop new data integrations and maintain existing ones.
• Create and maintain the optimal data model across the pipeline components, and track the model consistency across the pipeline.
• Develop REST APIs over data models to facilitate and unify the data access layer.
• Participate in selecting the most suitable backing technologies (relational, NoSQL, in-memory, queues) for Agolo's pipeline components and in identifying their scaling challenges and factors.
• Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
• Assemble large, complex data sets that meet functional / non-functional business requirements.
What we are looking for:
• Passionate about building and optimizing data pipelines, architectures, and data sets.
• At least 2 years of relevant experience.
• Proven track record in the following:
• SQL and relational databases.
• Algorithmic and problem-solving skills.
• Strong understanding of software engineering concepts and design patterns.
• Clean code/design advocate.
• Professional-level English written and oral communication skills.
Experience with any of the following will be a great plus:
• Distributed systems.
• NoSQL databases: Elasticsearch, Solr, MongoDB, Redis, or CouchDB.
• Big data tools: Hadoop, Spark, Kafka, etc.
• Any cloud infrastructure: Azure, GCP, or AWS.
• Stream-processing systems: Kafka Streams, Storm, or Spark Streaming.
• Analytical skills for working with unstructured datasets.
• DevOps tools and technologies: CI/CD (e.g. CircleCI), the ELK stack, Docker containers, Kubernetes, and monitoring and alerting tools (e.g. Prometheus and Grafana).
Why you should join the team:
• We offer very competitive salaries in USD.
• You will work with a highly talented team of engineers in Egypt and the USA.
• Work on a unique product with unlimited challenges for Fortune 500 clients.
• The chance to introduce and work with the newest open-source technologies.
• Phenomenal company culture! Ask our team members or check our company values.