Junglee Games

Technical Lead - Data Engineering

As our Technical Lead - Data Engineering, you will be responsible for developing high-performance, distributed computing applications using Big Data technologies such as Spark, Kafka, NoSQL stores and other distributed-environment technologies, based on the needs of the organization. The Tech Lead is also responsible for analyzing, designing, programming, debugging and modifying software enhancements and/or new products used in large-scale distributed analytics solutions. You will take part in setting up, improving, expanding and maintaining the data storage, data infrastructure and data pipelines needed by the other engineering teams. This is primarily a “hands-on” technical role, though it will also have some strategic and architectural aspects.


Responsibilities:

  • A visionary in technical architecture, with experience building and maintaining end-to-end, large-scale Data Engineering products.
  • Develop, construct, test and maintain data architectures (such as NoSQL databases, ETL pipelines and large-scale processing systems).
  • Mentor the team, recognize their strengths, and encourage them to take ownership of their deliverables.
  • Enable the capability of adding new data sources to the Platform.
  • Employ a variety of languages and tools (e.g. scripting languages) to marry data systems together.
  • Recommend ways to improve data reliability, efficiency and quality.
  • Ensure platform data security and compliance.
  • Ensure a culture of accountability, high performance and ethical behaviour.


Requirements:

  • 7+ years of experience developing software applications, including analysis, design, coding, testing, deployment and support.
  • Bachelor's or Master's degree in Computer Science or a related field.
  • Proficient in application/software architecture (definition, data flow, business process modeling, etc.).
  • Experience building real-time streaming applications using Kafka/Kinesis/Storm, etc.
  • In-depth domain experience building high-performing data platform architectures, systems and pipelines in the following areas:
    1. Cloud (ideally AWS).
    2. Data Platform (Airflow, Kafka Streaming, ScyllaDB, Glue, Apache Hudi, Athena, EMR, Spark, Lake Formation or equivalent).
    3. Data Warehouse/SQL (Snowflake/Redshift).
    4. Data Transformation/ETL (Spark, Spark Streaming, Storm or equivalent).
    5. Languages: Python or Scala.
    6. Databases: Amazon RDS, PostgreSQL, Cassandra, MongoDB.
  • Provide innovative operational solutions and best practices.
  • Troubleshoot live site issues, engage the appropriate parties, and drive them through to resolution.
  • Handle escalations and communicate with users and partners.
  • Collaborate with teams/partners to improve the overall operational maturity of the Big Data ecosystem.
  • Experience with operational aspects of platforms, such as monitoring and alert management, availability, capacity management and service management.
