Pune
Full Time

Brief Job Description:

  • Responsible for implementation and ongoing administration of Hadoop infrastructure.
  • Aligning with the engineering teams to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.
  • Working with data delivery teams to set up new Hadoop users. This includes setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for the new users (a sample onboarding sketch follows this list).
  • Cluster maintenance as well as creation and removal of nodes.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
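
For illustration, the user-onboarding flow above typically reduces to a handful of commands on a Kerberos-secured cluster. This is a minimal sketch only: the username newuser, the realm EXAMPLE.COM, and the keytab path are placeholders, and exact steps vary by distribution.

    # Create the Linux account (username is a placeholder).
    sudo useradd -m newuser

    # Create a matching Kerberos principal and export a keytab
    # (assumes kadmin access; realm EXAMPLE.COM is a placeholder).
    sudo kadmin.local -q "addprinc -randkey newuser@EXAMPLE.COM"
    sudo kadmin.local -q "ktadd -k /etc/security/keytabs/newuser.keytab newuser@EXAMPLE.COM"
    sudo chown newuser /etc/security/keytabs/newuser.keytab

    # Provision an HDFS home directory as the HDFS superuser.
    sudo -u hdfs hdfs dfs -mkdir -p /user/newuser
    sudo -u hdfs hdfs dfs -chown newuser:newuser /user/newuser

    # Smoke-test access as the new user: authenticate, then exercise HDFS and Hive.
    sudo -u newuser kinit -kt /etc/security/keytabs/newuser.keytab newuser@EXAMPLE.COM
    sudo -u newuser hdfs dfs -ls /user/newuser
    sudo -u newuser hive -e "SHOW DATABASES;"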

Educational Qualification: Any Technical (IT) Graduate / Post-Graduate

Experience: 2-5 years

Location: Pune

Company Description

Inuxu offers Digital & Social Media Advertising solutions to advertisers/brands.

Our Digital Advertising solutions include advertising on our 2000+ partner publisher websites in formats spanning standard & rich media banners, videos, native banners, and other ad innovations in 10+ regional languages. This is achieved through our ad-tech platform, ‘Adgebra’, which helps brands/advertisers connect with their target audience on the internet in their preferred language. Adgebra serves as a digital advertising marketplace for advertisers and publishers.

Reaching new-age Indians on the internet to promote a brand or a social message requires massive scale and multilingual messaging capability. Adgebra by Inuxu is India’s largest multilingual native advertising platform, reaching 500mn+ monthly unique users and serving 15bn+ native, rich media, and video ads each month via its 2000+ publisher network.

Inuxu started operations in May 2013 and is headquartered in Pune, with sales staff across Mumbai, Gurgaon, Bengaluru & Chennai.

Job Description:

  • The candidate should be able to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, monitor critical parts of the cluster, configure NameNode high availability, and schedule, configure, and back up the cluster (a node-decommissioning sketch follows this list).
  • General operational expertise: good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking.
  • Hadoop ecosystem skills such as HBase, Hive, Pig, and Mahout.
  • Good knowledge of Linux, as Hadoop runs on Linux.
  • Must be familiar with open-source configuration-management and deployment tools such as Ansible, along with Linux shell scripting and Jenkins (an Ansible sketch follows this list).
  • Knowledge of troubleshooting core Java applications is a plus.
  • Screen Hadoop cluster job performance and carry out capacity planning (a monitoring sketch follows this list).
  • Monitor Hadoop cluster connectivity and security.
  • Manage and review Hadoop log files.
  • File system management and monitoring.
  • HDFS support and maintenance.
  • Diligently teaming with the engineering teams to guarantee high data quality and availability.
  • Collaborating with engineering teams to apply operating-system and Hadoop updates, patches, and version upgrades when required.
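
By way of example, the add/remove-nodes duty above usually follows the standard HDFS decommissioning flow. A rough sketch on vanilla Apache Hadoop; the hostname and the exclude-file path are placeholders, and details differ per distribution.

    # Add the host to the exclude file referenced by dfs.hosts.exclude
    # in hdfs-site.xml (path is a placeholder).
    echo "datanode03.example.com" | sudo tee -a /etc/hadoop/conf/dfs.exclude

    # Ask the NameNode to re-read its include/exclude lists; the node enters
    # "Decommission in progress" while its blocks are re-replicated elsewhere.
    sudo -u hdfs hdfs dfsadmin -refreshNodes

    # Watch until the node reports "Decommissioned", then stop and remove it.
    sudo -u hdfs hdfs dfsadmin -report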
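
On the deployment side, configuration pushes of the sort mentioned above are commonly done with Ansible. A minimal ad-hoc sketch; the inventory group hadoop_workers, the config path, and the service name hadoop-hdfs-datanode are all assumptions that vary by environment.

    # Push an updated hdfs-site.xml to all worker nodes.
    ansible hadoop_workers -b -m copy \
      -a "src=hdfs-site.xml dest=/etc/hadoop/conf/hdfs-site.xml"

    # Restart the DataNode service to pick up the change (rolling restarts
    # are preferable in production).
    ansible hadoop_workers -b -m service \
      -a "name=hadoop-hdfs-datanode state=restarted"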
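
Finally, the routine monitoring, file-system, and log-review duties often come down to a few standard commands. A rough sketch assuming a stock Apache Hadoop layout; the log path is a placeholder.

    # Overall HDFS health: capacity, live/dead DataNodes, under-replicated blocks.
    sudo -u hdfs hdfs dfsadmin -report

    # Read-only integrity check of the file-system namespace.
    sudo -u hdfs hdfs fsck /

    # Running YARN applications, for screening job performance.
    yarn application -list -appStates RUNNING

    # Scan recent NameNode logs for trouble (path is a placeholder).
    grep -iE "error|fatal" /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log | tail -n 50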
