Data Engineer
Newbridge Alliance
- Ho Chi Minh City
- Permanent
- Full-time
- Architectural Sovereignty: Design and implement robust, distributed data pipelines (ETL/ELT) that handle terabytes of structured and unstructured data.
- The AI Enabler: Build specialized feature stores and vector database integrations to provide the high-quality, low-latency data required for Large Language Models (LLMs).
- Real-Time Mastery: Move beyond batch processing by implementing streaming architectures (Kafka/Flink) that enable real-time insights and reactive AI systems.
- Data Reliability & Governance: Ensure "Single Source of Truth" integrity by implementing automated data quality testing, lineage tracking, and high-availability storage solutions.
- Greenfield Innovation: Our client is moving away from legacy constraints. You will work with a "Modern Data Stack" (Snowflake, Databricks, dbt, Airflow, and Cloud-native tools).
- Massive Scale: Work with one of the largest data footprints in the region, solving challenges related to concurrency, latency, and global distribution.
- The Newbridge Technical Hub: You aren't just a resource; you are part of a community. You'll collaborate with Newbridge's elite AI and Software labs to define the future of data-driven engineering.
- The Pipeline Pro: You have 3+ years of experience in Data Engineering, with a deep command of SQL and Python (or Scala/Java).
- The Distributed Systems Expert: You understand how to scale data across the cloud (AWS/GCP/Azure) and have experience with Spark, Hadoop, or Snowflake.
- The MLOps-Adjacent Engineer: You understand that data engineering for AI is different. You know how to manage data versioning and feature engineering workflows.
- The Optimization Specialist: You take pride in reducing query costs, improving job performance, and ensuring 99.9% pipeline uptime.