Data Platform Engineer (Big Data / Cloud / ETL)

Taguig, National Capital Region
Posted 2 days ago
Company:
Hunter's Hub Incorporated
Company Description:
Hunter’s Hub Incorporated is a sourcing and headhunting company founded in 2018. The company prides itself on its ability to source and recruit only the best and brightest in each industry. Hunter’s Hub caters to numerous clients across a multitude of industries and maintains a wide range of candidates to suit any client’s needs. The company specialises in sourcing highly skilled, multi-talented IT professionals, as most of its clients are in the IT industry. Hunter’s Hub sets itself apart from other companies in the industry through its selection of services custom-fit to each client and the many kinds of professionals it is able to provide. Our services are based on our clients’ needs and requirements, and we are able to supply any kind of personnel they require, whether professional or non-professional. We look for only the best, and provide only the best.
Contract Type:
Full Time
Experience Required:
3 to 4 years
Education Level:
Bachelor’s Degree
Number of vacancies:
2

Job Description
We’re seeking a Data Platform Engineer to help architect, develop, and optimize large-scale data ingestion, transformation, and analytics solutions across cloud and on-prem environments. This role is ideal for professionals experienced in handling diverse data formats and streaming workloads, who thrive in fast-moving environments and enjoy building data-driven systems from the ground up.

Key Responsibilities
Implement solutions for batch and streaming data ingestion from APIs, flat files, and cloud sources

Develop data ingestion and transformation pipelines using tools such as Spark, Scala, and Kafka (a brief illustrative sketch follows this list)

Set up data staging, ETL processes, and quality checks to prepare datasets for analytics

Build scalable data frameworks and contribute to architecture decisions for cloud and on-prem systems

Optimize SQL and PL/SQL queries to ensure efficient data retrieval from structured and unstructured sources

Support data profiling, source-to-target mappings, and validation of business requirements

Work closely with Data Architects and Delivery Leads to shape data solutions that meet stakeholder goals

Deliver solutions that integrate well with BI/reporting tools (Looker, Tableau, or similar)
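To give a flavour of the pipeline-development responsibility above, here is a minimal sketch of a batch ingestion and transformation job in Spark/Scala, assuming Spark 3.x. The paths, column names, and output location are hypothetical placeholders, not part of any actual client codebase.

// Minimal sketch of a batch ingestion/transformation job (Spark 3.x assumed).
// All paths, columns, and datasets below are illustrative placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersIngestJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-ingest")
      .getOrCreate()

    // Ingest a flat-file (CSV) source and a JSON extract (hypothetical paths)
    val orders    = spark.read.option("header", "true").csv("s3a://raw-zone/orders/*.csv")
    val customers = spark.read.json("s3a://raw-zone/customers/*.json")

    // Basic quality check: drop rows missing keys, then join and aggregate
    val cleaned  = orders.na.drop(Seq("order_id", "customer_id"))
    val enriched = cleaned.join(customers, Seq("customer_id"), "left")

    val dailyTotals = enriched
      .withColumn("order_date", to_date(col("order_ts")))
      .groupBy(col("order_date"), col("customer_id"))
      .agg(sum(col("amount")).as("daily_total"))

    // Stage the curated dataset for analytics (hypothetical warehouse path)
    dailyTotals.write.mode("overwrite").partitionBy("order_date")
      .parquet("s3a://curated-zone/daily_order_totals/")

    spark.stop()
  }
}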

Job Qualifications
Preferred Skills & Experience
Bachelor’s degree in Computer Science, Statistics, Information Management, Finance, Economics, or related field

Minimum of 3 years' experience integrating data for analytics from sources and formats such as JSON, XML, flat files, Hadoop, or cloud-native formats

Proficient in Spark/Scala and other JVM-based programming languages

Hands-on experience with ingestion tools such as NiFi, Sqoop, or Flume

Strong background in working with HDFS, Hive, HBase, and large-scale data processing systems

Familiar with data pipeline development using cloud-native tools (e.g., AWS Glue, Azure Data Factory, GCP Dataflow)

Knowledge of CI/CD tooling including Jenkins, Bitbucket, SonarQube, and Nexus

Proficient in SQL and PL/SQL for data access, transformation, and optimization

Familiar with BI tools like Looker, Tableau, or similar for data visualization

Knowledgeable in core concepts such as data lakes, warehousing, and real-time data architecture

Strong understanding of streaming platforms like Apache Kafka, Spark Streaming, or Storm (see the streaming sketch after this list)

Capable of handling structured and unstructured data, profiling source systems, and delivering fit-for-purpose datasets

Working knowledge of Java is a plus
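As an illustration of the streaming qualification above, the sketch below shows one way a Spark Structured Streaming job might consume a Kafka topic, assuming Spark 3.x with the spark-sql-kafka connector on the classpath. The broker address, topic name, schema, and output paths are hypothetical placeholders.

// Minimal sketch of reading a Kafka topic with Spark Structured Streaming.
// Broker, topic, schema, and output locations are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object ClickstreamStreamJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("clickstream-stream").getOrCreate()

    val eventSchema = new StructType()
      .add("user_id", StringType)
      .add("event_type", StringType)
      .add("event_ts", TimestampType)

    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092")
      .option("subscribe", "clickstream-events")
      .load()

    // Kafka delivers bytes; parse the JSON value into typed columns
    val events = raw
      .select(from_json(col("value").cast("string"), eventSchema).as("e"))
      .select("e.*")

    val query = events.writeStream
      .format("parquet")
      .option("path", "s3a://curated-zone/clickstream/")
      .option("checkpointLocation", "s3a://checkpoints/clickstream/")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}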

Why Join Us?
Work with modern big data stacks and real-time analytics platforms

Collaborative and technically driven team

Exposure to both on-prem and cloud-native technologies

Opportunity to shape the future of enterprise data architecture
Salary:
₱80,000.00 Monthly