Data Engineer

Ecotek is leading Ecopark’s development into an exemplary Smart City model for Vietnam, pursuant to the Ministry of Construction’s approval. To achieve this, the corporation has identified digital transformation as one of its key goals, pursued through the digitization and digitalization of operational management, the establishment of a digital service partnership network, and the development of core services.
In 2018, Ecopark founded a technology subsidiary, Ecotek, to lead the Group’s vision. Ecotek is currently looking for a data engineer who can drive our initiatives and vision forward, making Ecopark a true smart city of Vietnam. Your main responsibility is to continuously conceive and develop new data initiatives that enhance the effectiveness of Ecopark’s operations through data-driven decision making, improve services for residents and visitors, and increase the effectiveness of our digital ecosystem when integrating or working with partners. Through these initiatives, you will also identify new data models that we can commercialize in the external market.

What you will do

  • Take part in building the Ecopark smart city data warehouse as a foundation for our City OS.

  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Hadoop/Spark “big data” technologies.

  • Build processes supporting data transformation, data structures, metadata, dependency, and workload management.

  • Identify valuable data sources and automate collection processes.

  • Present information using data visualization techniques.

  • Analyze large amounts of information to discover trends and patterns.

  • Implement statistical methods in code (Python, R, Scala, etc.) to solve specific business problems.

  • Collaborate with researchers, software developers, and business leaders to define product requirements, provide analytical support, and communicate feedback.
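As a rough illustration of the extract-transform-load work described above, here is a minimal sketch in plain Python. It uses the standard-library sqlite3 module as a stand-in for a real warehouse; every table name, column, and data value below is hypothetical, and a production pipeline at this scale would target Spark/Hadoop rather than SQLite.

```python
import sqlite3

def run_etl(conn: sqlite3.Connection) -> list[tuple]:
    """Tiny ETL sketch: extract raw readings, aggregate, load a summary table."""
    # Extract: pull raw utility readings from a (hypothetical) source table.
    rows = conn.execute(
        "SELECT district, kwh FROM raw_energy_readings"
    ).fetchall()

    # Transform: aggregate total consumption per district.
    totals: dict[str, float] = {}
    for district, kwh in rows:
        totals[district] = totals.get(district, 0.0) + kwh

    # Load: write the aggregated result into a warehouse-style table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS energy_by_district "
        "(district TEXT PRIMARY KEY, total_kwh REAL)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO energy_by_district VALUES (?, ?)",
        sorted(totals.items()),
    )
    conn.commit()
    return conn.execute(
        "SELECT district, total_kwh FROM energy_by_district ORDER BY district"
    ).fetchall()

# Demo with an in-memory database and fabricated sample readings.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_energy_readings (district TEXT, kwh REAL)")
conn.executemany(
    "INSERT INTO raw_energy_readings VALUES (?, ?)",
    [("Aqua Bay", 10.5), ("Aqua Bay", 4.5), ("Park River", 7.0)],
)
print(run_etl(conn))  # [('Aqua Bay', 15.0), ('Park River', 7.0)]
```

In a real workflow, the same extract/transform/load steps would typically be expressed as tasks in an orchestrator such as Airflow, with Spark handling the heavy transformations.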

Preferred Qualifications:

  • At least two years of experience with big data tools: Hadoop, Spark, Kafka.

  • Experience with data pipeline and workflow management tools: Airflow or similar.

  • Experience with stream-processing systems: Spark Streaming.

  • Experience with one of the cloud big data services (Dataproc, HDInsight, EMR, or Redshift) is an advantage.

  • Knowledge of SQL and Python.

  • Experience using business intelligence tools and data frameworks.

  • Problem-solving aptitude and strong math skills.

Salary: 25-35 million VND gross

Contact: