Fidelity Investments Big Data Engineer, Technical Lead (Hands on, individual contributor) in Boston, Massachusetts
Fidelity Personal Investing has an opportunity for a hands-on Big Data Engineer, reporting to the VP of Big Data Architecture, to help anchor an exciting and fast-paced engineering team focused on designing and implementing large-scale distributed data processing systems using cutting-edge, cloud-based open source and proprietary big data technologies. In this role, you will implement a variety of solutions to ingest data into, process data within, and expose data from a Data Lake that enables our data analysts and scientists to explore data in an ad-hoc manner as well as quickly implement data-driven models that generate accurate insights in an automated fashion. This position is a critical element to delivering Fidelity’s promise of creating the best customer experiences in financial services.
The Expertise We’re Looking For
Bachelor’s degree or higher in a technology related field (e.g. Engineering, Computer Science, etc.) required, Master’s degree a plus
5 years of hands-on experience applying the principles, best practices and trade-offs of schema design to various types of database systems: relational (Oracle, MSSQL, Postgres, MySQL), NoSQL (HBase, Cassandra, MongoDB) and in-memory (e.g. VoltDB), along with an understanding of data manipulation principles.
9 years of hands-on experience in one or more modern Object Oriented Programming languages (Java, Scala, Python) including the ability to code in more than one programming language. Our engineers work across several of them, sometimes simultaneously.
9 years of hands-on experience in building distributed back-end enterprise software platforms.
5 years of working in Linux environment, ability to interface with the OS using system tools, scripting languages, integration frameworks, etc.
2 years of hands-on experience implementing batch and real-time Big Data integration frameworks and/or applications, in private or public cloud, preferably AWS, using various technologies (Hadoop, Spark, Impala, etc.); debugging, identifying performance bottlenecks, and fine-tuning those frameworks.
3 years of experience with DevOps, Continuous Integration and Continuous Delivery (Maven, Jenkins, Stash, Ansible, Docker).
Experience and comfort executing projects in Agile environments (Kanban and Scrum).
Your ability to build support among key stakeholders across business units for proposed strategies and solutions, and to provide technical leadership to the database development, QA testing, and support teams in preparing design artifacts and implementing database-as-a-service automation solutions
Your experience identifying integration patterns and points between various design areas and ability to track the implementation of integrations for the automation of deployments
Your Big Data Skills with popular stacks like Hadoop and Spark
Your knowledge of AWS CloudFormation, OpenStack HEAT templates and Terraform
Your hands on experience in all phases of data modeling, from conceptualization to database optimization.
Your ability to map the systems and interfaces used to manage data, set standards for data management, analyze the current state and conceive the desired future state, and define the projects needed to close the gap between the two
Your desire and aptitude for learning new technologies
Your ability to work on initiatives and projects that cut across business unit boundaries, working independently with peers (technical and non-technical), team members and project teams to drive results and business value
Your experience owning systems end-to-end, staying hands-on in every aspect of software development and support: requirements discussion, architecture, prototyping, development, debugging, unit testing, deployment, and support.
Your passion and intellectual curiosity to learn new technologies and business areas
Your willingness to mentor and guide junior developers
Your excellent presentation, documentation, communication and influencing skills, including the ability to present and influence technology direction with stakeholders in a business context
Build a strategy to reinvent systems and tools to create a continuous cycle of innovation
Create data monitoring models for each product and work with our marketing team to create models ahead of new releases
Ability to build data models supporting complex transformation
Identifying and ingesting new data sources and performing feature engineering for integration into models
How Your Work Impacts the Organization
Our teams are flat, non-hierarchical structures that run on agile principles. You’ll drive multiple initiatives by designing architecture, defining best practices and evangelizing initiatives across this and other teams. Many of these initiatives are just starting, allowing you the opportunity to drive and shape the foundational development work for years ahead. We invest in a broad range of technologies and experiment before delivering a production system. As a lead engineer, you’ll help lead our technology research, evaluation and PoC efforts, and mentor and guide less senior members of our team.
At Fidelity, we are focused on making our financial expertise broadly accessible and effective in helping people live the lives they want. We are a privately held company that places a high degree of value in creating and nurturing a work environment that attracts the best talent and reflects our commitment to our associates. For information about working at Fidelity, visit FidelityCareers.com
Fidelity Investments is an equal opportunity employer.
Title: Big Data Engineer, Technical Lead (Hands on, individual contributor)
Requisition ID: 1709223