Be wary of WhatsApp messages impersonating Jobline Resources' staff offering job opportunities. Those who encounter suspicious messages can contact Jobline at +65 6339 7198.

Responsibilities

  • Create new and maintain existing ETL jobs using the Talend ETL toolset.
  • Design and implement ETL processes for extracting and transforming data from diverse sources such as Hive, PostgreSQL, and SQL Server databases.
  • Design and develop the necessary database tables and constraints per requirements.
  • Collaborate with team members to understand source system structures, data retrieval methods and techniques, and tools within the organization.
  • Support the development of data transformation logic using ETL tools or scripting languages like SQL, Python, etc.
  • Clean, validate, and transform data to conform to the target schema and quality standards.
  • Work with the team to execute data quality improvement plans.
  • Participate in troubleshooting activities to maintain data integrity and process efficiency.

Requirements

  • Bachelor's degree or higher in Computer Science, Engineering, or a related field.
  • At least 2-3 years of relevant working experience with Talend, Python, and Spark.
  • Good knowledge of and working experience with data lakes and Hadoop (Hive, Impala, HDFS).
  • Hands-on experience in designing, developing, and optimizing Talend Big Data jobs leveraging the Spark engine
  • Good understanding of the Spark Catalyst Optimizer and Spark executor parameters for optimizing Spark queries
  • Strong experience with data warehousing and data modeling techniques
  • Knowledge of industry-wide visualization and analytics tools
  • Good interpersonal skills and positive attitude

Shortlisted candidates will be offered a 1-year agency contract.