Location:
Makati, National Capital Region
Contract Type:
Remote
Experience Required:
5 to 10 years
Education Level:
Bachelor’s Degree
Salary:
₱80,000.00 / Monthly
Job Description
Additional Information:
The role requires a willingness to work in a hybrid setup, with onsite reporting at UP Ayala Technohub in Quezon City. This is a project-based engagement for six months, with the possibility of extension. The work schedule follows a rotation between mid-shift and graveyard hours. U.S. holidays are observed instead of Philippine holidays.
Key Responsibilities:
Design, build, and maintain cloud-based data pipelines and workflows that support analytics and operational systems
Integrate data from various sources using APIs and cloud services
Develop clean, efficient, test-driven Python code for data ingestion and processing (a brief sketch follows this list)
Optimize data storage and retrieval using big data formats such as Apache Parquet and ORC
Implement robust data models, including relational, dimensional, and NoSQL models
Collaborate with cross-functional teams to gather and refine requirements and deliver high-quality solutions
Deploy infrastructure using Infrastructure as Code (IaC) tools such as AWS CloudFormation or CDK
Monitor and orchestrate workflows using Apache Airflow or Dagster
Follow best practices in data governance, quality, and security
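For illustration only, here is a minimal sketch of what the test-driven ingestion-to-Parquet work described above can look like. The endpoint URL, field names, and function names are hypothetical, not details of this engagement; it assumes pandas, pyarrow, requests, and pytest are available.

```python
# Illustrative only: a tiny test-driven ingestion step pulling JSON from a
# hypothetical API and persisting it as Parquet.
import pandas as pd
import requests

EXAMPLE_URL = "https://api.example.com/orders"  # hypothetical endpoint


def fetch_orders(url: str = EXAMPLE_URL) -> pd.DataFrame:
    """Pull raw records from the API and normalize them into a DataFrame."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return pd.json_normalize(response.json())


def write_parquet(df: pd.DataFrame, path: str) -> None:
    """Persist the frame as snappy-compressed Parquet via the pyarrow engine."""
    df.to_parquet(path, engine="pyarrow", compression="snappy", index=False)


def test_fetch_orders_returns_expected_columns(monkeypatch):
    """pytest-style unit test: stub the HTTP call so the test runs offline."""

    class FakeResponse:
        def raise_for_status(self):
            pass

        def json(self):
            return [{"id": 1, "amount": 9.99}]

    monkeypatch.setattr(requests, "get", lambda *args, **kwargs: FakeResponse())
    assert list(fetch_orders().columns) == ["id", "amount"]
```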
Core Expertise:
At least 3 years of experience in a data engineering role focused on data integration, processing, and transformation using open-source languages such as Python and cloud technologies
Strong Python programming skills, particularly for API integration and data processing, with a focus on quality and test-driven development
Proficiency with big data storage formats such as Apache Parquet and ORC, including their common pitfalls and optimization techniques
Solid SQL skills and experience with relational, dimensional, and NoSQL data modeling
Working knowledge of Infrastructure as Code on AWS using CloudFormation or CDK
Experience with the required AWS services: Glue, IAM, Lambda, DynamoDB, Step Functions, and S3; experience with Athena, Kinesis, MSK, MWAA, or SQS is a plus
Experience with data orchestration tools such as Apache Airflow or Dagster (a minimal DAG sketch follows this list)
Bonus points: data streaming with Kinesis or Kafka, familiarity with Apache Spark, experience in client-facing or multicultural environments, and technical or team leadership
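As a purely illustrative sketch of the orchestration skills named above, the following is a minimal Apache Airflow 2.x DAG. The DAG id, schedule, and task bodies are assumptions for demonstration, not project specifics; the same extract-transform-load chain could equally be expressed in Dagster.

```python
# Illustrative only: a minimal Airflow 2.x DAG chaining extract -> transform
# -> load. Task bodies are placeholders standing in for real pipeline steps.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull records from source APIs")


def transform():
    print("clean and reshape the records")


def load():
    print("write Parquet files to S3")


with DAG(
    dag_id="example_ingestion",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```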
Key Competencies and Abilities:
The ideal candidate can work independently and as part of a team, with a strong attention to detail and a commitment to delivering high-quality work. Analytical thinking, critical reasoning, effective time management, and organizational skills are essential. A strong customer focus is required, along with the ability to clearly communicate technical concepts to stakeholders. Excellent written and verbal communication skills are also important.
Number of vacancies:
1
Company Description
DEMPSEY, INC. – Your Recruitment Partner for Client Companies
We are a human resources firm specializing in the sourcing and referral of college graduates and professionals.
Our role is to assist client companies in finding competent and qualified candidates to fill job positions within their organizations. We provide candidates at all levels, from entry-level through supervisory, managerial, and executive roles, all intended for direct hiring by our clients.
OUR MISSION
We aim to be a key platform for both private companies and professionals by:
Aligning their needs with the right talent
Creating opportunities for meaningful, productive, and long-term employment
HOW WE SUPPORT CLIENT COMPANIES
We offer alternative, back-office manpower sourcing and recruitment services to help fill various positions within their organizations.
HOW WE SUPPORT APPLICANTS
DEMPSEY is dedicated to matching candidates with the right job and the right company.
We never charge a fee to candidates, regardless of whether they are successfully hired.
Your resume is automatically included in our active database. If your qualifications and experience meet a client's needs, we will endorse you to that client.
OUR SERVICES
DEMPSEY is committed to helping clients find suitable candidates for their staffing needs through:
Fast deployment of pre-qualified applicants
A wider pool of candidates to choose from
High-quality applicants
Cost-effective recruitment processes
OUR AREAS OF EXPERTISE
Finance, Accounting, HR, and Administration personnel
Sales, Marketing, and Promotion personnel
Engineering, Technical, and Highly Skilled personnel
IT, Web, and Programming personnel
Behavioral Science personnel
Creative & Liberal Arts personnel