§ Provide L2 and L3 support for Analytics platform issues.
§ Debug day-to-day job issues in the Analytics platform and provide solutions.
§ Maintain service uptime and SLAs as committed.
§ Perform tuning and improve operational efficiency on a continuous basis.
§ Work with cross-functional teams to set up the production environment and enable solutions for the business.
§ Interact with business users to answer their queries and solve their problems.
§ Help and guide L1 support engineers in fixing day-to-day operational issues.
§ Perform data migrations, upgrades, and database/tool maintenance.
§ Develop scripts to automate reports and maintenance activities.
§ Send daily or weekly status reports to management.
§ Monitor the health of the Analytics platform, generate performance reports, and drive continuous improvements.
§ Minimum 4 years of work experience in data warehousing technologies and a minimum of 2 years of work experience in a Big Data/Hadoop platform environment (Spark, HBase, HDFS, Hive, Parquet, Sentry, Impala, and Sqoop).
§ Good experience in Level 2/3 support, performing hotfixes and handling platform issues in a production environment.
§ Good knowledge of ETL architecture and Hadoop platform architecture.
§ Good Unix skills and good knowledge of Linux networking.
§ Experience with tool integration, automation, and configuration management on Git and Jira platforms.
§ Proficient in writing shell scripts, automating batch jobs, and designing scheduler processes.
§ Able to read Python/Scala/Java programs and debug issues.
§ Good understanding of Informatica BDM/BDS/CDC, the Hadoop ecosystem, HDFS, and Big Data concepts.
§ Good understanding of the Software Development Life Cycle (SDLC) and agile methodology.
§ Excellent oral and written communication, presentation, analytical, and problem-solving skills.
§ Self-driven; able to work independently and as part of a team.
Knowledge, Skills, and Attributes:
Knowledge and Skills
§ Good knowledge of Big Data platforms, frameworks, policies, and procedures, as well as Informatica products.
§ Proficient understanding of distributed computing principles.
§ Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala.
§ Good knowledge of Informatica tools, such as PowerExchange, BDM/BDS, EDC/EDQ/Axon, and CDC.
§ Experience with Cloudera and NoSQL Big Data databases, such as HBase.
§ Experience building stream-processing systems using solutions such as Spark Streaming and, ideally, Kafka.
§ Excellent SQL knowledge.
§ Experience with cloud Big Data technologies, such as AWS and Azure, is beneficial.
§ Kafka experience is highly beneficial.
§ A reliable and trustworthy person, able to anticipate and deal with the varying needs and concerns of numerous stakeholders, adapting personal style accordingly.
§ Adaptable and knowledgeable, able to learn and improve skills with existing and new technologies.