Our client, one of the global banks, is seeking a Big Data Engineer to join their team and develop new data pipelines for financial data in the Markets & Securities business.

Mandatory Skill(s)

  • Degree in Computer Science, Mathematics, Computer Engineering or a related discipline;
  • Minimum of 4 years' experience in data engineering and modelling;
  • Strong understanding of and experience with Python and its ecosystem, including NumPy, Pandas and Jupyter;
  • Experience in managing data processing frameworks such as Apache Spark, Beam or Presto;
  • Able to develop code that reads from and writes to storage platforms such as Apache Kafka, HDFS or AWS S3, and to build pipelines using Apache Airflow and Dask;
  • Exposure to machine learning algorithms, e.g. scikit-learn, and/or Natural Language Processing (NLP);
  • Experience in code reviews, unit testing, continuous integration & deployment with strong quality standards;
  • Dynamic team player with resourcefulness and strong written & verbal communication skills.

Desirable Skill(s)

  • Banking & Financial Services experience is preferred

Responsibilities

  • Build data ingestion pipelines for multiple financial data sources;
  • Process complex datasets using Spark, Beam or Presto;
  • Develop code to access data via Kafka, HDFS and Amazon S3;
  • Develop robust, fault-tolerant and intelligent data processing pipelines for internal teams;
  • Collaborate with cross-functional teams to source data and ensure high availability of downstream applications;
  • Transform and deliver data downstream in the required format through data modeling and engineering;
  • Ensure functional and technical documentation is delivered in compliance with regulations;
  • Manage data processing and technical requirements;
  • Support testing, user training and deployment;
  • Review and monitor ETL tasks and performance;
  • Gain and establish support of business and project stakeholders to achieve project objectives.