Requirements
- Bachelor's degree or higher in Computer Science, Engineering, or a related field.
- At least 2-3 years of relevant working experience with Talend, Python, and Spark.
- Good knowledge of and working experience with data lakes and Hadoop (Hive, Impala, HDFS).
- Hands-on experience designing, developing, and optimizing Talend Big Data jobs on the Spark engine.
- Good understanding of the Spark Catalyst Optimizer and Spark executor parameters for tuning Spark queries.
- Strong experience with data-warehousing and data-modeling techniques.
- Knowledge of industry-standard visualization and analytics tools.
- Good interpersonal skills and a positive attitude.
Shortlisted candidates will be offered a 1-year agency contract.