Pune
Full Time

Brief Job Description:

  • Responsible for implementation and ongoing administration of Hadoop infrastructure.
  • Aligning with the engineering teams to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.
  • Working with data delivery teams to set up new Hadoop users, including setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for the new users.

Educational Qualifications: Any Technical (IT) Graduate / Post-Graduate

Experience: 2-5 years

Location: Pune

Company Description:

Inuxu offers Digital & Social Media Advertising solutions to advertisers and brands.

Inuxu’s ad-tech platform ‘adgebra’ empowers businesses to connect with, engage, and win the trust of billions of digital consumers. Adgebra enables brands to precisely target desired audiences and is available in managed-service or self-serve models.

Adgebra caters to Native, Video, and Rich-Media ad formats and presently connects to over 500 million monthly active users via its network of 2000+ partner publishers, managing 15 billion+ monthly ad-serving opportunities.

Adgebra is the only digital ad-tech platform that supports and serves ads in 10 Indian regional languages. Adgebra monetizes millions of daily active users for top publications and news aggregators such as DailyHunt, Sharechat, Tamil Samayam, TV9 Network, Sakshi, LiveHindustan, NavbharatTimes, Maharashtra Times, and more.

Inuxu started operations in May 2013 and is headquartered in Pune, with sales staff across Mumbai, Gurgaon, Bengaluru & Chennai. Learn more about us @ www.adgebra.in

Job Responsibility:

  • Responsible for implementation and ongoing administration of Hadoop infrastructure.
  • Aligning with the engineering teams to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.
  • Working with data delivery teams to set up new Hadoop users, including setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for the new users (a shell sketch of this workflow follows this list).
  • Cluster maintenance as well as creation and removal of nodes.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
  • Screening Hadoop cluster job performance and capacity planning.
  • Monitor Hadoop cluster connectivity and security.
  • Manage and review Hadoop log files.
  • File system management and monitoring.
  • HDFS support and maintenance.
  • Teaming diligently with the engineering teams to guarantee high data quality and availability.
  • Collaborating with engineering teams to install operating system and Hadoop updates, patches, and version upgrades when required.
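
To make the user-onboarding bullet concrete, here is a minimal shell sketch of the kind of script this role would maintain. It is an illustration built on stated assumptions, not the company's actual tooling: the realm EXAMPLE.COM, the use of MIT Kerberos's kadmin.local, passwordless sudo, and the path conventions are all placeholders to adapt to the real cluster.

    #!/usr/bin/env bash
    # Hypothetical sketch: onboard a new Hadoop user on a Kerberized cluster.
    # Assumptions: MIT Kerberos (kadmin.local) runs on this host, sudo is
    # available, and the 'hdfs' superuser can run HDFS commands directly.
    set -euo pipefail

    NEW_USER="$1"              # e.g. ./onboard_user.sh analyst01
    REALM="EXAMPLE.COM"        # placeholder realm

    # 1. Create the Linux account on the gateway/edge node.
    sudo useradd -m "${NEW_USER}"

    # 2. Create the matching Kerberos principal (prompts for a password).
    sudo kadmin.local -q "addprinc ${NEW_USER}@${REALM}"

    # 3. Create the user's HDFS home directory and hand over ownership.
    sudo -u hdfs hdfs dfs -mkdir -p "/user/${NEW_USER}"
    sudo -u hdfs hdfs dfs -chown "${NEW_USER}:${NEW_USER}" "/user/${NEW_USER}"

    # 4. Smoke-test HDFS access as the new user: authenticate, write, list.
    sudo -u "${NEW_USER}" kinit "${NEW_USER}@${REALM}"
    sudo -u "${NEW_USER}" hdfs dfs -touchz "/user/${NEW_USER}/_access_test"
    sudo -u "${NEW_USER}" hdfs dfs -ls "/user/${NEW_USER}"

    # Hive, Pig, and MapReduce access checks would follow the same
    # kinit-then-run pattern (e.g. a beeline connection and a sample job).

In practice, steps like these would be driven by the configuration management tooling (e.g. Ansible) named in the requirements below rather than run by hand.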

Experience Requirement:

  • The candidate should be able to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, monitor the critical parts of the cluster, configure NameNode high availability, and schedule and take backups (see the sketch after this list).
  • General operational expertise, such as good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, operating systems, storage, and networking.
  • Hadoop-ecosystem skills such as HBase, Hive, Pig, Mahout, Sqoop, and Spark.
  • Good knowledge of Linux.
  • Must be familiar with open-source configuration management and deployment tools such as Ansible, along with Linux shell scripting and Jenkins.
  • Knowledge of troubleshooting core Java applications is a plus.
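
To ground the deployment and monitoring expectations, below is a hedged sketch of a routine health-check and NameNode metadata backup script. The HA service IDs nn1/nn2 and the backup directory are placeholders (real IDs come from dfs.ha.namenodes.<nameservice> in hdfs-site.xml); the commands themselves are standard hdfs/yarn CLI calls.

    #!/usr/bin/env bash
    # Hypothetical sketch: routine Hadoop cluster health check plus a NameNode
    # fsimage backup. Service IDs and paths below are placeholders.
    set -euo pipefail

    BACKUP_DIR="/var/backups/hdfs-fsimage"   # assumed local backup location
    mkdir -p "${BACKUP_DIR}"

    # Capacity, live/dead DataNodes, and under-replicated block summary.
    hdfs dfsadmin -report | head -n 20

    # Which NameNode is active vs. standby under HA.
    for nn in nn1 nn2; do
        echo -n "${nn}: "
        hdfs haadmin -getServiceState "${nn}"
    done

    # Filesystem consistency summary (corrupt or missing blocks).
    hdfs fsck / | tail -n 20

    # NodeManager inventory and currently running applications in YARN.
    yarn node -list -all
    yarn application -list -appStates RUNNING

    # Pull the latest NameNode fsimage to local disk as a metadata backup.
    hdfs dfsadmin -fetchImage "${BACKUP_DIR}"

Scheduled via cron or Jenkins (named above), a script along these lines covers the job-tracking, monitoring, and backup duties in one place.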

We Offer:

  • A unique and diverse company culture, shaped by people with commitment, a sense of responsibility and care, risk-taking, and discipline.
  • An excellent start-up work environment, flat hierarchies, and short decision paths.
  • Freedom to enhance, share, and demonstrate your skills and capabilities in alignment with organizational goals and objectives.
  • Challenging and Learning Oriented work environment that nurtures personal and professional growth.
