JPMorgan Chase Asset & Wealth Management, Intelligent Digital Solutions - Big Data & Analytics Database Engineer - Associate - New York, NY
J.P. Morgan Intelligent Digital Solutions (IDS) is a newly developed division of J.P. Morgan Asset & Wealth Management – one of the largest asset and wealth managers in the world, with client assets of $2.8 trillion and assets under management of over $2 trillion (assets as of June 2018).
The IDS Big Data and Analytics team primarily focuses on using statistical analysis, modeling techniques, and deep learning methods to drive innovative solutions for data-driven investment insights, improved client engagement, and operational effectiveness.
We are seeking a talented engineer for our Database Engineering team to build out and operate our most critical database systems, which are tightly integrated with the team's highly parallel research and analytics environment. We are working to develop new database services and solutions to further scale our research and analytics infrastructure and remain on the cutting edge. As a Database Engineer, you will be charged with all aspects of the development, innovation, and operations of database systems to enable high availability, continuous scaling, and performance improvement. This role is a great opportunity to design the next-generation data infrastructure that will help support one of the biggest businesses in the bank.
Design, engineer, and implement highly complex concurrent and distributed data software systems, and develop new database services and solutions to further scale our research platform, using sophisticated automation techniques and scripting languages on the latest technology
Provide technical expertise for database design, development, implementation, information storage and retrieval, data flow and analysis to enable high availability, continuous scaling and performance improvement.
Work closely with software engineering groups on schema, index, and query design to support application development
Build data pipelines that pull together information from different source systems; integrate, consolidate, and cleanse the data; and structure it for use in individual analytics applications
Collaborate with financial experts and software engineers to design data models
Implement complex big data projects with a focus on collecting, parsing, managing, analyzing and visualizing large sets of data to turn information into insights using multiple platforms
Design and implement distributed data analytics systems using Hadoop/Spark, Python, and Java/Scala
Manage cloud resources in order to maintain resiliency and performance
Own a wide variety of technical projects using a combination of Windows and UNIX based applications, including big data technologies and relational databases as well as several proprietary in-house file and database systems distributed over multiple sites.
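To give candidates a concrete sense of the pipeline responsibilities above, the following is a minimal, illustrative sketch of a pull–cleanse–consolidate flow; the schema, table, and column names are hypothetical and stand in for real source systems.

```python
import sqlite3
import pandas as pd

# Hypothetical source system: an in-memory SQLite database standing in
# for a relational source (schema and values are illustrative only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE trades (account_id TEXT, instrument TEXT, notional REAL);
    INSERT INTO trades VALUES
        ('A1', 'AAPL', 1000.0),
        ('A1', 'aapl', 250.0),   -- inconsistent casing to cleanse
        ('A2', 'MSFT', NULL);    -- incomplete record to drop
""")

# Pull: extract raw records from the source system.
raw = pd.read_sql_query("SELECT * FROM trades", conn)

# Cleanse: normalize identifiers and drop incomplete rows.
clean = raw.assign(instrument=raw["instrument"].str.upper()).dropna()

# Consolidate: aggregate to one row per (account, instrument) for
# downstream analytics applications.
summary = (clean.groupby(["account_id", "instrument"], as_index=False)
                ["notional"].sum())
print(summary)
```

In practice the same extract–cleanse–aggregate steps would run against production sources and big-data platforms rather than an in-memory database, but the shape of the work is the same.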
Advanced experience with architecture, data structures and algorithms, highly reliable real-time data processing systems, and design patterns
Advanced knowledge and/or understanding of object-oriented languages (Java, Scala, C++) and scripting languages (Python, Groovy)
Strong system automation experience and a record of operational excellence and best practices in database systems
Experience with the development and execution of database security policies, procedures, and auditing; experience with database authentication methods, authorization methods, and data encryption techniques, as well as planning and implementing backup and recovery procedures
Ability to input and understand database identifiers in complex distributed environments, determine the optimum values of the physical database parameters, create database management system documentation, and automate processes already in place
Demonstrated success producing database wrappers and stored procedures
Experience with data and schema design and engineering
Knowledge and understanding of all aspects of database tuning: software configuration, memory usage, data access, data manipulation, SQL, and physical storage.
Proficient in designing efficient and robust ETL workflows
Understanding of financial market data such as financial instruments, pricing, index/benchmark, account, client, reference, and quant/research data
Experience working with large structured and/or unstructured data sets
Hands-on experience with big data technologies or relational databases such as Apache Hadoop, Cassandra, Greenplum, Oracle, Sybase, MySQL, or VMware
Hands-on experience and ability to work with data manipulation and database query languages (e.g., SQL, T-SQL, PL/SQL)
Knowledge and/or understanding of shell scripting, the data analysis library pandas, and relational databases
Knowledge and/or understanding of machine learning including Bayesian inference
JPMorgan Chase is an equal opportunity and affirmative action employer, Disability/Veteran.