Data Engineer (Contract)

Kuala Lumpur
RM13,000.00 - RM18,000.00 monthly (market aligned)

Sector: Technology
Function: Data Analytics & Data Science
Contact Name: Aviral Bhargava
Expiry Date: 13-Feb-2026
Job Ref: JN -012026-492038
Date Published: 14-Jan-2026

Role Overview:

This position sits within the technology transformation function of a multinational organisation, supporting enterprise-wide reporting, analytics, and modern data architecture initiatives. The role focuses on designing and operationalising automated data ingestion, transformation pipelines, and governed cloud-based data layers to enable business intelligence, process transparency, and AI-driven insights. It requires close collaboration with product, analytics, and portfolio teams to deliver scalable data solutions within a structured cloud environment.

Key Responsibilities:

  • Build and maintain automated ingestion pipelines from workflow tools and other enterprise data sources into cloud-based storage and analytics platforms.
  • Develop and tune pipelines supporting batch and near-real-time loads, including incremental ingestion from relational databases.
  • Design layered data architecture (raw to curated to consumption) and create models optimised for reporting, analytics, and downstream semantic layers.
  • Implement data quality checks, monitoring mechanisms, and remediation workflows covering completeness, consistency, timeliness, and lineage tracking.
  • Apply metadata governance, cataloguing, lifecycle rules, and policy enforcement using cloud-native governance tooling.
  • Develop and validate transformation logic using SQL/Python, incorporating unit/integration testing and CI/CD deployment patterns.
  • Collaborate with product owners and BI teams to translate reporting needs into data contracts, datasets, and model structures suitable for analytical tools.
  • Produce technical documentation including schema definitions, runbooks, SLO/SLA expectations, and reusable standards for future data products.

Key Requirements:

Must-Have:

  • Minimum 5 years’ experience building production-grade cloud data pipelines.
  • Proven hands-on expertise with Google Cloud data services (e.g., BigQuery, Dataplex, Dataflow).
  • Strong SQL capabilities, including performance tuning, stored procedures, and change data capture (CDC) patterns across relational database systems.
  • Demonstrated experience integrating workflow platform APIs and processing structured files from enterprise repositories.
  • Solid understanding of dimensional modelling, lakehouse patterns, and governed data architecture frameworks.
  • Experience in data governance, quality validation rules, and monitoring frameworks.
  • Familiarity with BI consumption patterns and semantic layer development.

Nice-to-Have:

  • Exposure to insurance or financial services data domains.
  • Working knowledge of Power BI modelling and DAX optimisation considerations.

If this role aligns with your experience and career goals, please send your application to [email protected].

Argyll Scott Asia is acting as an Employment Business in relation to this vacancy.
