Senior Data Engineer - CONTRACT

Location: Sandyford, Dublin
Employment Type: Contractor
Minimum Experience: Experienced

Founded by Mastercard and IBM, Trūata offers a new approach to data anonymization and analytics, helping organizations meet the standards of personal data protection envisioned by the GDPR and similar emerging privacy laws around the globe. Trūata provides its customers with fully anonymized algorithms and reports that they can use in their own products and solutions. We are based in Sandyford, Dublin 18.


The Role

Trūata is recruiting a Senior Data Engineer (6-month contract) to join its Customer Operations team in Dublin, Ireland.

As a Senior Data Engineer, you will build analytics tools and reports that give customers insight into their data. You will analyze customer use cases, create design specifications that take data privacy regulatory requirements into account, develop analytical products, validate the insights gained, and ensure that results meet Trūata’s strict data privacy requirements.


You will be a member of a highly skilled, cross-disciplinary technical team, solving challenging data problems and working alongside your peers and external stakeholders to deliver customer data analytics projects in a way that is compliant with the GDPR and similar privacy laws.

This position reports to Trūata’s Customer Lead.


What you will do


  • Work with client stakeholders to gather requirements around their data analytics needs and use of data
  • Understand business objectives and customer requirements
  • Apply software engineering best practices to build Big Data analytics applications that deliver value to our clients
  • Create and maintain design specifications and test documentation
  • Drive and increase adoption of automation, particularly automated quality checks in existing data products
  • Collaborate with members of the Data Science and Privacy Risk teams to strengthen and document robust data privacy checks
  • Work with Trūata Engineering and Product teams to identify productization potential for analytics use cases applicable across a range of customers
  • Implement best practices for data management, maintenance, reporting and security
  • Identify risk assessment tests and configurations based on internal analysis and state-of-the-art reviews
  • Develop algorithms to implement the required privacy enhancement controls for customers’ use cases and applications
  • Evaluate data privacy controls for each customer on an ongoing basis and identify areas of concern


What you need


  • Third-level degree in a related discipline, e.g. Computer Science, Statistics, Data Analytics
  • 8+ years’ experience in software engineering in a hands-on developer role
  • 4+ years’ hands-on coding experience with Apache Spark and related Big Data stacks / technologies
  • Ability to debug complex data issues while working on very large data sets with billions of records
  • A strong desire to write clean, efficient code and never settle for “good enough”
  • Working knowledge of SQL
  • Understanding of DevOps tools and Git workflows
  • Direct experience with the entire data project lifecycle, including requirements gathering, design, implementation, evaluation and presentation of results
  • Ability to present technical concepts and results clearly to different audiences and stakeholders
  • Excellent communication skills and the ability to work both on-site and remotely with client stakeholders


What is also good to have

  • Expert-level knowledge of Scala or Python, with the ability to build applications from the ground up and a desire not just to code to spec, but to write functional, beautiful, well-documented code
  • Hands-on experience with cloud services, e.g. Azure Databricks, IBM Public Cloud, Google Cloud, AWS
  • Working knowledge of Data Science toolchains and stacks, such as Jupyter and R
  • In-depth knowledge of Big Data technologies including Hadoop, Cassandra, Kafka, Redis, Hive, Impala
  • Understanding of Machine Learning algorithms and how to run ML at scale
  • Experience with data partitioning, transformation, and in-memory computation (large-scale joins / group-bys / aggregations)
  • Experience designing and building Spark applications and data pipelines, and monitoring and optimizing Spark job performance
  • Industry experience in relevant verticals, such as financial services, travel / hospitality, and retail
  • Experience gathering customer requirements and reporting KPIs / presenting data science project outputs
  • Experience in data privacy, GDPR compliance and risk management


Benefits

We take pride in offering an energetic and contemporary employee experience, supported by an array of benefits that provide our employees and their families with flexibility, quality and value. These include excellent health insurance, a contributory pension scheme and free lunches!
