Specialist, Data Engineering

Job ID : 6334
Category : Technology Solutions
Brand : Definity
Regular/Temporary : Regular
Full-time/Part-time : Full-time
Location : Toronto, Canada

Definity includes some of Canada’s longest-standing and most innovative insurance brands, including Economical Insurance, Sonnet Insurance, Family Insurance Solutions, and Petline Insurance. With strong roots that date back to 1871, we’ve grown to become a digital leader in the insurance industry. We’re proud to help our clients and communities adapt and thrive in a world of constant change.

Our promise to you: It’s better here. Why? Because we CARE, and we provide an employee experience that’s collaborative, ambitious, rewarding, and empowering.

Our ambition is to be one of Canada’s leading and most innovative P&C insurers. Come be a part of our journey, and love what you do.

Definity offers a flexible, hybrid work experience where employees work from the office and virtually depending on the type of work they are doing and who they are working with. Leaders partner with their teams to find the right balance of on-site and remote work that best meets the needs of their teams, colleagues, brokers, and customers, while ensuring collaboration, teamwork, and accountability for goals.

We are looking for a hands-on Big Data Engineer to join our Data Engineering team. In this role, you will work with our Advanced Analytics team to operationalize complex analytical solutions in production and to monitor and improve the underlying machine learning models, so that the organization and its customers continue to realize the benefits of those solutions.

What can you expect in this role?

  • You will design, develop, test, and implement data ingestion and transformation pipelines to and from big data/cloud platforms using tools and technologies including, but not limited to, Spark, HBase, Impala, Python, R, Scala, BigQuery, Databricks, Data Fusion, Kafka, shell scripting, Cloud Composer, and Kubernetes (a minimal sketch follows this list)
  • You will design, create, and deploy functions, applications, and databases that run in the cloud in support of data pipelines of medium to high complexity
  • Build data integrations and debug applications in detail to understand and resolve issues
  • Apply a solid understanding of data management practices such as metadata management and data quality (DQ)
  • Build both streaming and batch data pipelines
  • Collaborate closely across the engineering team on product strategies that address business pain points
  • Understand requirements, create BSTMs, build business logic, and adapt quickly to changing needs
  • Actively participate in addressing non-functional requirements such as performance, security, scalability, continuous integration, migration, and compatibility
  • Take ownership from the design of a feature through the first lines of code to how it performs in production ("you build it, you run it")
  • Ensure fully automated testing by designing and writing automated unit, integration, and acceptance tests
  • Contribute to best practices for data ingestion, transformation, and extraction solutions on the big data platform (Datahub) and cloud data platforms
  • Act as an SME and trusted day-to-day advisor to development teams on the application portfolio, the SDLC, development design frameworks, and the reuse of components and services
  • Provide key input into future technology direction, including stability, performance, roadmap strategy, and development initiatives for applications in the portfolio
  • Build data architectures, with exposure to dimensional data modelling
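
As an illustration of the pipeline work in the first bullet, here is a minimal, hedged PySpark sketch of a batch ingestion-and-transformation job. This is not Definity's implementation: the bucket paths, column names, and quality rule are hypothetical placeholders.

```python
# Minimal batch ingestion/transformation sketch (illustrative only).
# The bucket paths, column names, and quality rule are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims_batch_ingest").getOrCreate()

# Ingest raw Parquet files landed by an upstream process.
raw = spark.read.parquet("gs://example-bucket/raw/claims/")

# Light transformation: standardize types, derive a partition column,
# and filter out records that fail a basic data-quality check.
clean = (
    raw.withColumn("claim_amount", F.col("claim_amount").cast("double"))
       .withColumn("ingest_date", F.to_date(F.col("event_ts")))
       .filter(F.col("policy_id").isNotNull())
)

# Write to a curated zone, partitioned for downstream analytics.
(clean.write
      .mode("overwrite")
      .partitionBy("ingest_date")
      .parquet("gs://example-bucket/curated/claims/"))
```

A streaming counterpart appears after the qualifications list below.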

 

What do you bring to the role?

  • University degree in Computer Science or equivalent technical experience
  • Strong proficiency in Spark and Python
  • At least 2 years in the cloud and data engineering space
  • 4 to 6 years of development experience handling a variety of structured and unstructured data in various formats, e.g., flat files, XML, JSON, relational, legacy, Parquet, etc.
  • Experience developing batch and real-time data streams to create meaningful insights and analytics (see the streaming sketch after this list)
  • Experience with Google Cloud for data engineering is an asset
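
As a companion to the batch sketch above, here is a hedged Spark Structured Streaming example of the real-time side referenced in the bullets. The Kafka broker, topic name, fields, and output paths are assumptions rather than details from this posting, and running it requires the spark-sql-kafka connector package.

```python
# Minimal streaming sketch (illustrative only): consume JSON events from
# Kafka and append them to a curated sink. Brokers, topic, fields, and
# paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("quotes_stream").getOrCreate()

# Read a stream of events from a Kafka topic.
events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker-1:9092")
         .option("subscribe", "quote-events")
         .load()
)

# Kafka delivers the payload as bytes; decode it and pull out the fields
# we need with JSON-path expressions.
parsed = (
    events.select(F.col("value").cast("string").alias("json"))
          .select(
              F.get_json_object("json", "$.quote_id").alias("quote_id"),
              F.get_json_object("json", "$.premium").cast("double").alias("premium"),
          )
)

# Append micro-batches to Parquet, with checkpointing so the query can
# recover its position after a restart.
query = (
    parsed.writeStream
          .format("parquet")
          .option("path", "gs://example-bucket/curated/quotes/")
          .option("checkpointLocation", "gs://example-bucket/checkpoints/quotes/")
          .outputMode("append")
          .start()
)
query.awaitTermination()
```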

 

 


We also take potential into consideration. If you don’t have this exact experience, but you know you have what it takes, be sure to give us more insight through your application and cover letter.

Go ahead and expect a lot — you deserve it, and we’ve got it: 

  • Hybrid work schedule for most roles
  • Company share ownership program
  • Pension and savings programs, with company-matched RRSP contributions
  • Paid volunteer days and company matching on charitable donations
  • Educational resources, tuition assistance, and paid time off to study for exams
  • Focus on inclusion with employee groups, support for gender affirmation surgery, access to BIPOC counsellors, access to programs for working parents
  • Wellness and recognition programs
  • Discounts on products and services

Our inclusive work environment welcomes diversity and supports accessibility. If you require accommodation at any time during the recruitment process, please let us know by contacting: [email protected]

Background checks
This role requires successful clearance of a background check (including criminal checks and leadership references).

