As an integral part of the Data Platform team, you will primarily work on Big Data and related technology stacks.
Set up and maintain Big Data clusters such as Hadoop, Hive, and HBase.
Keep the infrastructure up to date by performing rolling upgrades.
Maintain and manage services on cloud infrastructure such as AWS.
Work on org-level initiatives such as CI, metrics, and alerting.
Collaborate with different teams to understand and resolve infrastructure availability, scalability, and consistency issues.
Be involved in knowledge sharing (knowledge base articles, documentation, forums, blogs, etc.).
Continuously improve your technical knowledge and problem-resolution skills, and strive for excellence.
3-6 years of experience in a DevOps role, including experience managing Big Data infrastructure.
Experience with Docker preferred (Apache Mesos/Marathon and Kubernetes experience is a plus).
Working knowledge of at least one relational database, such as MySQL, Oracle, Postgres, or SQL Server.
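As a rough illustration of the level of relational-database fluency expected, a minimal sketch using Python's built-in sqlite3 module as a stand-in for MySQL/Postgres (the table name and rows are invented for the example):

```python
"""Minimal relational-database sketch: create a table, insert rows,
and query for failed jobs. sqlite3 ships with Python's standard library."""
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE jobs (name TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO jobs VALUES (?, ?)",
    [("etl_daily", "ok"), ("compaction", "failed"), ("backup", "ok")],
)
# Parameterless query: which jobs failed?
failed = conn.execute("SELECT name FROM jobs WHERE status = 'failed'").fetchall()
print(failed)  # → [('compaction',)]
conn.close()
```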
Excellent command of Linux, with the ability to write small scripts in Bash/Python and to work comfortably with log files and Unix processes.
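A toy sketch of the kind of small script this involves, assuming a hypothetical `<timestamp> <LEVEL> <component>: <message>` log format (the sample lines are invented):

```python
"""Count ERROR lines per component in a (hypothetical) service log."""
from collections import Counter

def count_errors(lines):
    """Return a Counter mapping component name -> number of ERROR lines."""
    errors = Counter()
    for line in lines:
        parts = line.split(maxsplit=3)  # [timestamp, level, component, message]
        if len(parts) >= 3 and parts[1] == "ERROR":
            errors[parts[2].rstrip(":")] += 1
    return errors

sample = [
    "2024-01-01T00:00:01 INFO  datanode: heartbeat ok",
    "2024-01-01T00:00:02 ERROR namenode: connection refused",
    "2024-01-01T00:00:03 ERROR namenode: connection refused",
    "2024-01-01T00:00:04 ERROR regionserver: lease expired",
]
print(count_errors(sample))  # → Counter({'namenode': 2, 'regionserver': 1})
```

In practice the input would come from a real log file (or a pipe from `journalctl`/`tail`) rather than an in-memory list.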
Prior experience with Hadoop, MapReduce, Hive, or a NoSQL database such as HBase would be an added advantage, as would experience with messaging queues such as Kafka.
Prior experience working with cloud services, preferably AWS.
Experience developing telemetry, metrics, and usage analysis using monitoring and logging tools (e.g., New Relic, CloudWatch, Datadog, the ELK stack).
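As an illustration of the metrics work described above, a minimal sketch that computes latency percentiles from invented samples; shipping the results to a backend such as CloudWatch or Datadog is out of scope here:

```python
"""Compute request-latency percentiles (nearest-rank method) from samples,
as a step before publishing them to a metrics backend."""

def percentile(samples, pct):
    """Nearest-rank percentile of a non-empty list of numbers."""
    ordered = sorted(samples)
    # nearest rank = ceil(pct/100 * n), 1-indexed; ceil via negated floor division
    rank = max(1, -(-pct * len(ordered) // 100))
    return ordered[int(rank) - 1]

latencies_ms = [12, 15, 11, 240, 13, 14, 16, 12, 18, 900]
print("p50 =", percentile(latencies_ms, 50), "ms")  # → p50 = 14 ms
print("p95 =", percentile(latencies_ms, 95), "ms")  # → p95 = 900 ms
```

Percentiles (rather than averages) are the usual choice here because a few slow outliers, like the 900 ms sample above, dominate tail latency without moving the median.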
Familiarity with the challenges surrounding efficient operations and failure-mode analysis in large, complex distributed systems.
Experience with configuration management tools (Ansible, Chef, etc.) is a plus.
Ability to learn complex new things quickly.
Willingness to occasionally provide DevOps support during weekends, early mornings, or late nights.
Be a team player with the ability to work under pressure and good time-management skills.