Senior Software Engineer - Big Data & Analytics

· Location: Sandyford, Dublin
· Employment Type: Full-Time
· Minimum Experience: Experienced

Founded by Mastercard and IBM, Trūata offers a new approach to data anonymization and analytics, helping organizations meet the standards of personal data protection envisioned by the GDPR. We engineer privacy-enhancing technologies that empower organizations to do more with their data – at scale. Trūata offers its customers a service delivering fully anonymized algorithms and reports that they can use in their own products and solutions.

We are based in Sandyford, Dublin 18.

We are currently recruiting a Senior Software Engineer to join the Platform Engineering team in Dublin. This is an excellent opportunity for somebody who wants to work on building something new and exciting.

As a Senior Software Engineer, you will implement advanced, GDPR-compliant big data and data analytics applications. You will be a member of a highly capable software team designing and developing applications for big data handling, data wrangling, anonymization, and data analytics. Our PaaS solutions will be deployed in cloud-native environments providing highly scalable and secure data management. This role is computer-programming intensive. A great candidate will have a real passion for developing team-oriented solutions to complex engineering problems.

This position reports to Trūata’s Director of Software Engineering.

Key Responsibilities:

  • Design, develop, and maintain Trūata’s data management platform components, including but not limited to a Data Lake, Data Privacy Engine, Data Ingestion Algorithms, Enterprise Application Integration (EAI), Analytics Engines, and Data Engineering
  • Full software lifecycle participation, including design, development, testing, bug fixing, and cloud deployment
  • Participation in an Agile Scrum-based software development process

What you need

  • University degree in Computer Science or equivalent. Advanced degree preferred.
  • 5+ years of strong coding experience in Scala or Java
  • Expertise with Big Data Hadoop platforms such as Hortonworks, Cloudera, MapR, or Teradata; experience building large-scale Spark applications and data pipelines
  • Solid foundation in data structures, algorithms, and software design
  • Experience in architecture and development of data models and data dictionaries in big data systems
  • Experience working in an Agile environment following Scrum
  • Creativity, a passion for tackling challenging data problems, and a willingness to work in a start-up environment

What you should also have

  • Expertise in platform engineering
  • Extensive knowledge of the Hadoop stack and storage technologies, including HDFS, MapReduce, YARN, HBase, Hive, Sqoop, Impala, Spark, and Oozie
  • Experience manipulating large datasets using data partitioning, transformations, and in-memory computations (large-scale joins, group-bys, and aggregations)
  • Strong knowledge of NoSQL technologies (MongoDB, Cassandra, HBase, etc.), and/or object storage services from AWS or Azure, or open-source object storage solutions (Ceph, OpenStack Swift)
  • Relevant experience with ANSI SQL on relational databases (Postgres, MySQL, Oracle, or others)


We take pride in offering an energetic and contemporary employee experience, supported by an array of benefits that provide our employees and their families with flexibility, quality, and value. These include excellent health insurance, a contributory pension scheme, and free lunches!
