Definity includes some of Canada’s most long-standing and innovative insurance brands, including Economical Insurance, Sonnet Insurance, Family Insurance Solutions, and Petline Insurance. With strong roots that date back to 1871, we’ve grown to become a digital leader in the insurance industry. We’re proud to help our clients and communities adapt and thrive in a world of constant change.
Our promise to you: It’s better here. Why? Because we CARE, and we provide an employee experience that’s collaborative, ambitious, rewarding, and empowering.
Our ambition is to be one of Canada’s leading and most innovative P&C insurers. Come be a part of our journey, and love what you do.
Starting in September, Definity employees move to a hybrid work model so we can collaborate, build mentoring relationships, and tackle complex or cross-functional business opportunities together. Our teams work in whichever environment best supports what they're working on and who they're working with. We're actively reinventing our offices as welcoming workspaces that optimize collaboration and empower leaders to use our space to strengthen team dynamics. Our tools and processes seamlessly connect employees across locations, and our culture encourages respectful engagement and flexibility. Leaders work with their teams to determine the balance of on-site and remote work that best meets the needs of their team, their cross-functional engagements, responsibilities, and timelines, as well as those of our customers, our broker partners, and our company culture.
We are looking for a hands-on Big Data Engineer to join our Data Engineering team. As part of our Datahub team, the Big Data Engineer is accountable for delivering assigned big data applications throughout the complete use case lifecycle.
What can you expect in this role?
- Design, develop, test, and implement data ingestion, transformation, and extraction to and from big data/cloud platforms using tools and technologies including, but not limited to, Spark, Sqoop, Hive, HBase, Impala, Python, Scala, Control-M, BigQuery, Databricks, Data Fusion, Kafka, and shell scripting
- Hands-on application design, coding, and deployment of medium- and high-complexity data pipelines
- Close collaboration across the engineering team on product strategies that address business pain points
- Understand requirements, create BSTMs, build business logic, and learn and adapt quickly to changing needs
- Actively participate in addressing non-functional requirements such as performance, security, scalability, continuous integration, migration, and compatibility
- Contribute to best practices for data ingestion, transformation, and extraction solutions on our big data platform (Datahub) and cloud data platforms
- Build data architectures, applying exposure to dimensional data modeling
- Take ownership of each feature from design through the first lines of code to how it performs in production (you build it, you run it)
- Ensure fully automated testing by designing and writing automated unit, integration and acceptance tests
- Act as an SME and trusted day-to-day advisor to development teams on the application portfolio, the SDLC, and the re-use of development design frameworks, components, and services
- Provide key input into future technology direction, including the stability, performance, and roadmap strategy of applications in the portfolio, and into development initiatives
- Apply strong experience with DevOps tools
- Champion a high-performance culture and contribute to an inclusive work environment
- Create functions, applications, or databases that run in the cloud
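To give candidates a feel for the ingest-transform-extract lifecycle described above, here is a deliberately minimal, standard-library Python sketch. It is illustrative only: the field names and sample records are invented, and CSV stands in for the Parquet/Hive/BigQuery targets a real Datahub pipeline would use via Spark or a cloud service.

```python
import csv
import io
import json

# Invented sample input: raw JSON lines, including one malformed row.
RAW_RECORDS = [
    '{"policy_id": "P-100", "premium": "1250.50", "province": "ON"}',
    '{"policy_id": "P-101", "premium": "980.00", "province": "BC"}',
    'not valid json',  # malformed rows are routed aside, not dropped silently
]

def ingest(lines):
    """Parse raw JSON lines, separating good records from rejects."""
    good, rejects = [], []
    for line in lines:
        try:
            good.append(json.loads(line))
        except json.JSONDecodeError:
            rejects.append(line)
    return good, rejects

def transform(records):
    """Cast premium to float so downstream consumers get typed data."""
    return [{**r, "premium": float(r["premium"])} for r in records]

def extract(records):
    """Write curated records as CSV (a stand-in for a Parquet/Hive sink)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["policy_id", "premium", "province"])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

good, rejects = ingest(RAW_RECORDS)
curated = transform(good)
print(extract(curated))
print(f"rejected: {len(rejects)}")
```

The same three stages (parse with a reject path, apply typed business logic, write to a curated sink) scale up directly to the Spark, Kafka, and BigQuery tooling listed above.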
What do you bring to the role?
- BS/MS in computer science or equivalent technical experience
- At least 2 years of experience in the Hadoop, cloud, and data engineering space
- 4 to 6 years of development experience handling a variety of data (structured/unstructured) and data formats (flat files, XML, JSON, relational, legacy, Parquet)
- Experience developing batch and real-time data streams to create meaningful insights and analytics
- Strong understanding of different file formats (e.g., Avro, ORC, Parquet) and of data sources when moving data into and out of HDFS
- Coding background in Java, Spark, or Python, with some experience writing shell scripts and Ansible playbooks
- Good knowledge of issue tracking (Jira), source code management (Bitbucket), continuous integration tools (Jenkins), Linux/macOS administration, build tools, package management, and testing frameworks
- Good knowledge of networking, CPU, memory, and storage
- Cloud experience and familiarity with cloud data technologies are assets
- Working experience with ETL/ELT tools
- Experience in cloud engineering (Google Cloud experience is an asset)
- Experience working in an agile methodology
- Strong communication skills and the ability to work with cross-functional teams
We also take potential into consideration. If you don’t have this exact experience, but you know you have what it takes, be sure to give us more insight through your application and cover letter.
Go ahead and expect a lot — you deserve it, and we’ve got it:
- Hybrid work schedule for most roles
- Company share ownership program
- Pension and savings programs, with company-matched RRSP contributions
- Paid volunteer days and company matching on charitable donations
- Educational resources, tuition assistance, and paid time off to study for exams
- Focus on inclusion with employee groups, support for gender affirmation surgery, access to BIPOC counsellors, access to programs for working parents
- Wellness and recognition programs
- Discounts on products and services
Our inclusive work environment welcomes diversity and supports accessibility. If you require accommodation at any time during the recruitment process, please let us know by contacting: [email protected]
This role requires successful clearance of a background check (including criminal checks and leadership references).