About Us:

Abjayon is an enterprise solutions and services provider with a strong focus on engineering and implementing solutions for the Utilities, Healthcare, and Hospitality industries.

We are a team of highly experienced and driven technology professionals who take pride in tackling any problem, task, or challenge head-on.

Our leadership and core engineering team bring many years of engineering experience. We create value for organizations by engineering innovative customer experiences, customizing products and technologies for new markets, integrating new-age technologies, enabling faster time to market, and ensuring a competitive edge.

Job Summary:

We are looking for a Spark developer who knows how to fully exploit the potential of our Spark cluster. You will be involved in cleaning, transforming, and analyzing vast amounts of raw data from various systems using Spark to provide ready-to-use data to our feature developers and business analysts. This involves both ad-hoc requests as well as data pipelines that are embedded in our production environment.

Responsibilities:

  • Strong grasp of Hadoop / Spark and their ecosystems
  • Understanding of the basic suite of tools that provides solutions to Big Data problems is mandatory
  • Write Scaladoc-style documentation with all code
  • Design data processing pipelines
  • Create Scala/Spark jobs for data transformation and aggregation
  • Experience with data access tools like Hive, Pig, and Sqoop
  • Mastery of data management and monitoring tools like Flume, ZooKeeper, and Oozie is mandatory
  • Ability to install and configure the Hadoop ecosystem
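As a rough illustration of the "Scala/Spark jobs for data transformation and aggregation" and "Scaladoc-style documentation" responsibilities above, a minimal job sketch might look like the following. The input/output paths, column names, and table layout are hypothetical, and the job would be packaged and run via spark-submit against a Spark cluster:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

/** Minimal sketch of a Spark transformation-and-aggregation job.
  *
  * Reads raw meter readings, filters out malformed rows, and
  * aggregates daily consumption per customer. All paths and column
  * names are hypothetical examples.
  */
object DailyUsageJob {

  /** Drops rows with missing or negative usage, then sums usage
    * per customer per calendar day.
    */
  def dailyUsage(raw: DataFrame): DataFrame =
    raw
      .filter(col("usage_kwh").isNotNull && col("usage_kwh") >= 0)
      .groupBy(col("customer_id"), to_date(col("read_at")).as("day"))
      .agg(sum("usage_kwh").as("total_kwh"))

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-usage")
      .getOrCreate()

    val raw = spark.read.parquet("hdfs:///data/raw/meter_readings")
    dailyUsage(raw)
      .write
      .mode("overwrite")
      .parquet("hdfs:///data/curated/daily_usage")

    spark.stop()
  }
}
```

Keeping the transformation in a separate, documented function (here `dailyUsage`) makes the logic unit-testable with a local SparkSession, independent of the production read/write paths.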

Required Skills:

  • Data querying with SQL (e.g., MySQL) and NoSQL databases
  • Data analysis languages like Python and Scala
  • Spark query tuning and performance optimization
  • Familiarity with Big Data architectures like Hadoop and Spark
  • Knowledge of Apache Spark, Java, Python, Linux, MapReduce, Pig, Hive, and HBase
  • SQL database integration
  • Experience working with HDFS, S3, Cassandra, and/or DynamoDB
  • Deep understanding of distributed systems (e.g. CAP theorem, partitioning, replication, consistency, and consensus)
  • Familiarity with ETL tools and data loading tools like Flume and Sqoop
  • Scripting languages like Pig Latin and query languages like HiveQL
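The "Spark query tuning and performance optimization" skill listed above typically covers techniques like broadcast joins, caching, and partition management. A hedged sketch, with hypothetical table names and paths:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

/** Sketch of common Spark tuning techniques; datasets and paths
  * are hypothetical examples, not a prescribed implementation.
  */
object TuningSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("tuning-sketch")
      .getOrCreate()

    val events = spark.read.parquet("hdfs:///data/events")      // large fact table
    val dims   = spark.read.parquet("hdfs:///data/dimensions")  // small lookup table

    // Broadcast the small dimension table so the join avoids a full shuffle.
    val joined = events.join(broadcast(dims), Seq("dim_id"))

    // Cache a DataFrame that several downstream queries will reuse.
    joined.cache()

    // Coalesce partitions before writing to avoid a flood of small files.
    joined.coalesce(32)
      .write
      .mode("overwrite")
      .parquet("hdfs:///data/out/enriched_events")

    spark.stop()
  }
}
```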

Job Overview

  • Date Posted: 21-Nov-2022
  • Location: Philippines
  • Job Title: Senior / Principal Software Engineer - Hadoop (Full-Time)

Apply for Job