9 February 2021
Our client is searching for an experienced Data Engineer to help them develop a comprehensive data pipeline architecture that supports data-driven decision-making by management.
Requirements:
- Background in Computer Science, Computer Engineering or Data Science;
- At least 2 years of experience in data engineering;
- Working knowledge of programming languages such as SQL and Python;
- Experience with data integration tools such as Talend, Alteryx, Hevo Data, Dell Boomi, Informatica PowerCenter or Pentaho;
- Experience with data warehouse platforms such as AWS Redshift, Teradata, Oracle Exadata Database Machine or Cloudera Enterprise Data Hub;
- Experience in data model design, such as star schema or snowflake schema;
- Familiarity with big data streaming tools and data frameworks such as Apache NiFi, Storm, Spark, Hadoop and Tableau;
- Good writing, communication and presentation skills;
- Fast learner with the ability to work independently;
- Exposure to Hadoop or big data technologies.
Responsibilities:
- Design, develop, document and implement end-to-end data pipelines and data integration processes, both batch and real-time;
- Perform data analysis, data profiling, data cleansing, data lineage, data mapping and data transformation;
- Develop ETL/ELT jobs and workflows, and deploy data solutions;
- Monitor, recommend, develop and implement improvements to data quality, including reliability, efficiency and cleanliness, to optimize and fine-tune processes;
- Recommend, execute and deliver best practices in data management and data lifecycle processes, including modular development of data processes, coding and configuration standards, error handling and notification standards, auditing standards, and data archival standards;
- Collaborate with different stakeholders to understand data needs, gather requirements and implement data solutions to deliver business goals;
- Provide technical support for data issues and change requests, and document all investigations, findings, recommendations and resolutions.