Job Details
Architect - Data Engineering
As an Architect in Credera’s Data Engineering capability group, you will lead teams in implementing modern data architecture, data engineering pipelines, and advanced analytical solutions. Our projects range from implementing enterprise data lakes and data warehouses using best practices for on-premises and cloud environments, to building visualizations and dashboards, to unifying data for a single view of the customer, to predicting next-best outcomes with advanced analytics.
You will act as the primary architect and technical lead on projects: scoping and estimating work streams, architecting and modeling technical solutions to meet business requirements, serving as a technical expert in client communications, and mentoring junior project team members. On a typical day, you might participate in design sessions, build data structures for an enterprise data lake or statistical models for a machine learning algorithm, coach junior team members, and manage technical backlogs and release management tools. Additionally, you will seek out new business development opportunities at existing and new clients.
WHO YOU ARE:
- You have a minimum of 5 years of technical, hands-on experience building, optimizing, and implementing data pipelines and architecture
- You have experience leading teams to wrangle, explore, and analyze data to answer specific business questions and identify opportunities for improvement
- You are a highly driven professional who thrives in a fast-paced, dynamic, client-facing role where delivering solutions that exceed high expectations is the measure of success
- You have a passion for leading teams and providing both formal and informal mentorship
- You have strong communication and interpersonal skills, and the ability to engage customers at a business level in addition to a technical level
- You have a deep understanding of data governance and data privacy best practices
- You have a degree in Computer Science, Computer Engineering, Engineering, Mathematics, Management Information Systems or a related field of study
- The ideal candidate will have technical knowledge of the following:
  - Big data tools (e.g. Hadoop, Spark, Kafka)
  - Relational SQL and NoSQL databases (e.g. Postgres, MySQL, SQL Server, Cassandra, MongoDB)
  - Data pipeline and workflow management tools (e.g. Azkaban, Oozie, Luigi, Airflow)
  - Stream-processing systems (e.g. Storm, Spark Streaming)
  - Programming and scripting languages (e.g. Python, Java, C++, Scala)
  - Containerization and orchestration tools (e.g. Docker, Kubernetes)
- Experience with one or more of the following cloud service providers:
  - Amazon Web Services (AWS)
  - Google Cloud Platform (GCP)
  - Microsoft Azure