Abjayon is an enterprise solutions and services provider with a strong focus on engineering and implementing solutions for the Utilities, Healthcare, and Hospitality industries.
We are a team of highly experienced, driven technology professionals who take pride in remaining undaunted by any problem, task, or challenge.
Our top management and core engineering team bring many years of rich engineering experience. We create value for organizations by engineering innovative customer experiences, customizing products and technologies for new markets, integrating new-age technologies, facilitating faster time to market, and ensuring a competitive edge.
We are looking for a Hadoop/Spark developer who knows how to fully exploit the potential of our Spark cluster.
You will be involved in cleaning, transforming, and analyzing vast amounts of raw data from various systems using Spark to provide ready-to-use data to our feature developers and business analysts.
This involves both ad-hoc requests as well as data pipelines that are embedded in our production environment.
- Strong grasp of Hadoop and its ecosystem
- Understanding of the basic suite of tools that Hadoop provides for solving Big Data problems is mandatory
- Knowledge of the core components of Hadoop: HDFS and MapReduce
- Data access tools like Hive, Pig, and Sqoop
- Data management and monitoring tools like Flume, ZooKeeper, and Oozie are mandatory
- Ability to install and configure the Hadoop ecosystem
- Produce unit tests for Spark transformations and helper methods
- Data querying with SQL databases (e.g., MySQL) and NoSQL databases
- Data analysis languages like Python and Scala
- Familiarity with Big Data architectures like Hadoop and Spark
- Knowledge of Apache Spark, Java, Python, Linux, MapReduce, Pig, Hive, and HBase
- Familiarity with ETL tools and data loading tools like Flume and Sqoop
- Scripting languages like Pig Latin and query languages like HiveQL
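To illustrate the "produce unit tests for Spark transformations and helper methods" requirement above, here is a minimal sketch in Python. The `clean_reading` helper and its record shape are hypothetical, not part of this posting; the point is that row-level cleaning logic applied inside a Spark transformation (for example via a `map` or UDF) can be written as a plain function and unit-tested without a running cluster:

```python
import unittest


def clean_reading(raw):
    """Normalize one raw record (hypothetical helper that a Spark
    transformation might apply per row, e.g. via a map or UDF)."""
    # Strip whitespace from the id and coerce the value to float;
    # drop records that cannot be parsed by returning None.
    try:
        meter_id = raw["meter_id"].strip()
        value = float(raw["value"])
    except (KeyError, AttributeError, TypeError, ValueError):
        return None
    if not meter_id or value < 0:
        return None
    return {"meter_id": meter_id, "value": value}


class CleanReadingTest(unittest.TestCase):
    def test_valid_record_is_normalized(self):
        self.assertEqual(
            clean_reading({"meter_id": " M-42 ", "value": "17.5"}),
            {"meter_id": "M-42", "value": 17.5},
        )

    def test_malformed_record_is_dropped(self):
        self.assertIsNone(clean_reading({"meter_id": "M-42", "value": "n/a"}))
        self.assertIsNone(clean_reading({"value": "1.0"}))
```

Run with `python -m unittest` in the file's directory. Keeping the per-row logic in a pure function like this is a common way to test Spark pipelines quickly, with a smaller set of integration tests exercising the full DataFrame/RDD transformation.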