
Consultant - Data Engineer

SKU 6555
Location

Bangalore

Description

About Flipkart

Flipkart is India’s largest e-commerce marketplace with a registered customer base of over 150 million. In the 10 years since we started, Flipkart has come to offer over 100 million products across 120+ categories including Smartphones, Books, Media, Consumer Electronics, Furniture, Fashion and Lifestyle.

Launched in October 2007, Flipkart is known for its path-breaking services like Cash-on-Delivery, No-Cost EMI and a 10-day replacement policy. Flipkart was the pioneer in offering services like In-a-Day Guarantee (65 cities) and Same-Day Guarantee (13 cities) at scale. With over 1,20,000 registered sellers, Flipkart has redefined the way brands and MSMEs do business online.

Job Description:

Specific responsibilities include:

● Expertise in designing, implementing, and operating stable, scalable solutions to flow data from production systems into the analytical data platform (big data tech stack + MPP) and into end-user-facing applications, for both real-time and batch use cases.

● Work with business customers in a fast-paced environment, understanding business requirements and implementing analytical solutions.

● Experience in the design, creation, management, and business use of large datasets.

● Contribute to designing platforms as consumable data services across the organization using the Big Data tech stack.

● Build and execute data modeling projects across multiple tech stacks (big data, MPP, OLAP) using agile development techniques.

● Challenge the status quo and propose innovative ways to process, model, and consume data when it comes to tech stack choices or design principles.

● Build and integrate robust data processing pipelines for enterprise-level business analytics.

● Bring a strong engineering mindset: build automated monitoring, alerting, and self-healing (restartability/graceful-failure) features while building the consumption pipelines.

● Translate business requirements into technical specifications (facts/dimensions/filters/derivations/aggregations).

● Assist business analysts in understanding data models, reports, and dashboards.

● As needed, assist other staff with reporting, debugging data accuracy issues, and other related functions.

● Build front-end visualizations using available BI tools, in line with Business Analysts' requirements.

● An ideal candidate will have excellent communication skills, working with engineering, product, and business owners to define key business questions and to build datasets that answer those questions.

● Above all, bring your passion for working with huge datasets and for bringing datasets together to answer business questions and drive change.
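To make the self-healing expectation above concrete, here is a minimal, hypothetical sketch of a restartable pipeline step with retries and exponential backoff. The `load_partition` step, the retry parameters, and the failure mode are all invented for illustration; this is not Flipkart's actual tooling.

```python
import time
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_with_retries(step, *, attempts=3, backoff_s=1.0):
    """Run one pipeline step, retrying transient failures with
    exponential backoff. After the final attempt the exception is
    re-raised, so the scheduler can alert and restart the job from
    its last checkpoint (graceful failure, not silent loss)."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
            if attempt == attempts:
                raise  # surface to alerting; state stays restartable
            time.sleep(backoff_s * 2 ** (attempt - 1))

# Hypothetical step: fails once with a transient error, then succeeds.
calls = {"n": 0}
def load_partition():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("transient source outage")
    return "partition loaded"

result = run_with_retries(load_partition, backoff_s=0.01)
```

In a real deployment the retry wrapper would typically live in the orchestrator (e.g., an Airflow-style scheduler) rather than in the job itself, with the alerting hook wired to the final re-raise.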

Desired competencies and skill sets include:

● 2–3 years' experience; a Bachelor's degree in Computer Science, Engineering, Technology, or a related field is required.

● 1–2 years of relevant software development experience, with sound skills in database modeling (relational, multi-dimensional) and optimization, and in data architecture for databases such as Vertica.

● Experience with Enterprise Business Intelligence platform/data platform sizing, tuning, optimization, and system landscape integration in large-scale enterprise deployments.

● Comfortable in at least one programming language, preferably Java, Scala, or Python.

● Good knowledge of Agile and SDLC/CI-CD practices and tools.

● Must have experience with Hadoop, MapReduce, Hive, Spark, and Scala programming, plus good knowledge of performance tuning/optimizing data processing jobs and debugging time-consuming jobs.

● Proven experience developing conceptual, logical, and physical data models for Hadoop, relational, EDW (enterprise data warehouse), and OLAP database solutions.

● Good understanding of distributed systems.

● Experience with a BI tool (e.g., Power BI) is desirable.

● Experience working extensively in a multi-petabyte DW environment.
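The fact/dimension modeling called out above can be illustrated with a toy star schema. Table and column names here are invented for demonstration, and SQLite stands in for an MPP/OLAP engine; the point is only the shape of the model (a fact table keyed to a dimension, aggregated through it).

```python
import sqlite3

# Toy star schema: one dimension (dim_product) and one fact (fact_sales).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_product (
        product_id INTEGER PRIMARY KEY,
        category   TEXT NOT NULL
    );
    CREATE TABLE fact_sales (
        sale_id    INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES dim_product(product_id),
        amount     REAL NOT NULL
    );
""")
con.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "Smartphones"), (2, "Books")])
con.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 1, 19999.0), (2, 1, 15000.0), (3, 2, 499.0)])

# A business question ("revenue by category") becomes an aggregation
# over the fact, grouped through the dimension:
rows = con.execute("""
    SELECT d.category, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_product d USING (product_id)
    GROUP BY d.category
    ORDER BY revenue DESC
""").fetchall()
# rows: [('Smartphones', 34999.0), ('Books', 499.0)]
```

On a real platform the same logical model would be physicalized differently per engine (partitioned Hive/Spark tables, projections in Vertica, cubes for OLAP), which is what the conceptual/logical/physical distinction in the bullet refers to.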

Open Positions:

1

Skills Required:

Tech Excellence and Growth Mindset


Location:

Bangalore, Karnataka
