The Finance Automation team at Amazon is looking for a Data Engineer to play a key role in building next-generation data warehouse services in the finance space. The ideal candidate will be passionate about building large, scalable, and fast distributed systems on the AWS stack. The candidate will want to be part of a team that has embraced the goal of democratizing access to data and enabling data-driven innovation for Amazon's entire finance business. We are one of the fastest growing Data Engineering teams across Amazon, with a strong technology orientation.
We are looking for a candidate with a strong background in the AWS technology stack (S3, EMR, Redshift) and traditional data warehousing experience, with an interest in data mining and the ability to sift emerging patterns and trends from large amounts of data. The successful candidate should have strong experience with all standard data warehousing technical components (e.g., ETL, Reporting, and Data Modeling), infrastructure (hardware and software), and their integration. The ideal candidate will have experience in dimensional modeling, excellent problem-solving ability, the capability to deal with huge volumes of data, and the ability to ramp up quickly. Excellent written and verbal communication skills are required, as the candidate will work very closely with diverse teams and senior leadership.
Along with complex problems to solve, we offer a world-class work environment, the chance to work with talented team members in the data engineering space, and the opportunity to contribute and create history while having fun.
· Design, build, and own components of a high-volume data warehouse.
· Build efficient data models using industry best practices and metadata for ad hoc and pre-built reporting.
· Interface with business customers to gather requirements and deliver complete data and reporting solutions, owning the design, development, and maintenance of ongoing metrics, reports, analyses, and dashboards to drive key business decisions.
· Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
· Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources.
· Own the functional and nonfunctional scaling of software systems in your ownership area.
· Provide input and recommendations on technical issues to BI Engineers, Business and Data Analysts, and Data Scientists.
· 3+ years of experience as a Data Engineer or in a similar role
· Experience with data modeling, data warehousing, and building ETL pipelines
· Experience in SQL
· Understanding of ETL concepts and experience using them to build large-scale, complex datasets with traditional data warehousing or big data technologies
· Data modeling skills with solid knowledge of industry standards such as dimensional modeling, star schemas, etc.
· Proficiency in writing performant SQL against large data volumes
· Experience designing and operating very large data warehouses
· Experience with scripting for automation (e.g., UNIX shell scripting, Python, Perl, Ruby)
· Experience working on the AWS stack
· 3+ years of experience in advanced SQL working with large data sets
· 3+ years of experience in dimensional modeling
· Experience with AWS technologies including Redshift, RDS, S3, EMR, or similar solutions built around Hive/Spark, etc.
· Experience with reporting tools such as Tableau, OBIEE, or other BI tools
· Understanding of machine learning algorithms and experience supporting ML models