Lead Data Engineer - Enterprise Observability (ASE6)
at Wells Fargo
At Wells Fargo, we want to satisfy our customers’ financial needs and help them succeed financially. We’re looking for talented people who will put our customers at the center of everything we do. Join our diverse and inclusive team where you’ll feel valued and inspired to contribute your unique skills and experience.
Help us build a better Wells Fargo. It all begins with outstanding talent. It all begins with you.
Wells Fargo Technology sets IT strategy; enhances the design, development, and operations of our systems; optimizes the Wells Fargo infrastructure footprint; provides information security; and enables continuous banking access through in-store, online, ATM, and other channels to Wells Fargo’s more than 70 million global customers.
At Wells Fargo, the Chief Technology Office (CTO) organization is leading technology transformation in a multi-cloud technology landscape by developing and delivering innovative products that delight our customers and bring banking products to market faster. As part of the CTO Operations and Site Reliability Engineering (SRE) group, the Enterprise Observability Team is responsible for delivering innovative, scalable, stable, and secure products that provide deep business insight, operational insight, and predictive analysis. As a member of the team, you will be a key contributor to the development of products that provide full stack observability including application monitoring, logging, alerting, and visualization.
As part of the Enterprise Observability team, the Lead Data Engineer will design, develop, and implement near real-time data streams, ingesting and enriching business and operational data from various sources across the organization as part of a new enterprise observability platform.
Duties/responsibilities include the following:
- Partner with internal customers, architects, engineers and other technical partners to gather requirements, understand existing systems and develop products to maximize secure and compliant application observability
- Work with complex databases, conduct in-depth research to identify data issues, and propose solutions to improve data integrity; perform other database-related analyses and projects as requested
- Determine the most appropriate data collection methods and methodologies
- Identify, retrieve, manipulate, relate and exploit multiple structured and unstructured data sets from various sources, including building or generating new data sets as appropriate
- Integrate operational data platforms and tools including Kafka, MongoDB, the Elastic Stack (Elasticsearch, Logstash, Kibana, Beats), Splunk, etc.
- Develop enterprise level data APIs and data streams using object-oriented languages (Java, C#.NET) and scripting (Python, PowerShell, etc.) and data formats (XML, JSON)
- Leverage subject matter expertise in Enterprise Messaging/Services, APIs, Microservices architecture, and enterprise application event-streaming and logging technologies
- Utilize the Spring Ecosystem including Spring Boot, Spring Framework, Spring Web Flow, Spring Cloud Connector
- Collaborate through the Agile Scrum methodology to deliver products and key features, including the use of Agile tools like Jira and Confluence
- Work with Hybrid Cloud Platform Providers and Containers including AWS, Azure, Pivotal Cloud Foundry (PCF), Pivotal Container Service (PKS / Enterprise Kubernetes Platform), Kubernetes, and Docker
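To give a flavor of the stream ingest-and-enrich work described above, here is a minimal Python sketch. It is illustrative only, not Wells Fargo code: the event fields (`app_id`, `level`, `msg`), the severity map, and the enrichment rules are assumptions chosen for demonstration.

```python
import json
from datetime import datetime, timezone

# Hypothetical mapping from raw log levels to normalized severities
# (assumed for illustration; a real platform would define its own taxonomy)
SEVERITY_MAP = {"ERR": "error", "WARN": "warning", "INFO": "info"}

def enrich_event(raw: str) -> dict:
    """Parse one raw JSON log event and add derived observability fields."""
    event = json.loads(raw)
    # Normalize the severity so downstream alerting sees consistent values
    event["severity"] = SEVERITY_MAP.get(event.get("level", ""), "unknown")
    # Stamp ingestion time for latency and freshness measurements
    event["ingested_at"] = datetime.now(timezone.utc).isoformat()
    # Flag events missing an application id so data-integrity issues surface early
    event["valid"] = "app_id" in event
    return event

def enrich_stream(lines):
    """Lazily enrich events as they arrive, the way a consumer loop would."""
    for line in lines:
        yield enrich_event(line)

# Example: a micro-batch of raw operational events
raw_events = [
    '{"app_id": "payments", "level": "ERR", "msg": "timeout"}',
    '{"level": "INFO", "msg": "heartbeat"}',
]
enriched = list(enrich_stream(raw_events))
```

In production this loop would consume from a Kafka topic and write to a sink such as Elasticsearch; the stdlib-only version above keeps the shape of the enrichment step visible without requiring a broker.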
The preferred location for this role is Charlotte, NC. Other locations that can be considered are Chandler, AZ; Minneapolis, MN; Des Moines, IA; San Francisco, CA; and New York, NY.
Other Desired Qualifications
- 5+ years of experience with Data Modeling and tuning of relational and NoSQL data stores (Athena, MongoDB, MySQL, Redshift, etc.)
- Experience building data pipelines and automating Big Data platform applications/services
- Experience building and operating highly scalable, fault-tolerant, distributed systems for extraction, ingestion and processing of large data sets
- Experience configuring and/or integrating with monitoring and logging solutions such as syslog, ELK (Elasticsearch, Logstash, Kibana) and Kafka
- 3+ years of experience in building large-scale data processing projects using AWS technologies (Lambda, S3, EC2, EMR, Kinesis, DynamoDB, API Gateway)
- Experience with Data Visualization Tools like Tableau, Power BI, QlikView, Kibana and/or Grafana
- Build and deploy automation and configuration experience within Linux and Unix environments
- Experience with Software engineering best practices including, but not limited to, CI/CD (Git, Jenkins, TFS, Maven, Nexus), Version Control (Git, Subversion, etc.), and automated unit testing
All offers for employment with Wells Fargo are contingent upon the candidate having successfully completed a criminal background check. Wells Fargo will consider qualified candidates with criminal histories in a manner consistent with the requirements of applicable local, state and Federal law, including Section 19 of the Federal Deposit Insurance Act.
Relevant military experience is considered for veterans and transitioning service men and women.
Wells Fargo is an Affirmative Action and Equal Opportunity Employer, Minority/Female/Disabled/Veteran/Gender Identity/Sexual Orientation.