We are looking for a Data Engineer to join our growing Data team. This role is responsible for expanding and optimizing our data and data pipeline architecture. The ideal candidate is an experienced data pipeline builder who enjoys optimizing data systems and building them from the ground up, and who is self-directed and comfortable supporting the data needs of multiple teams, domains, systems, and products. You will work in a small team environment and support the development of new data products in the evolving world of eVTOL aircraft design.
Essential Duties and Responsibilities:
Create and maintain optimal data pipeline and storage architectures
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Design, write, test and deploy production-ready code
Work with other members of the team to create data products that meet the needs of a growing and diverse company
Build the infrastructure required for optimal collection, extraction, transformation, and loading of data from a variety of data sources using cloud ‘big data’ technologies as appropriate
Work with data and subject matter experts to strive for greater functionality in our data systems
Mentor interns and junior engineers
Minimum Qualifications (Knowledge, Skills, and Abilities):
Bachelor’s or Master’s degree in Computer Science, Statistics, Software Engineering, or a relevant field.
3+ years of experience in a Cloud/Big Data Engineering role.
Extensive experience architecting and programming large scale software applications in Python
Extensive experience with cloud platforms such as AWS or GCP.
Advanced working knowledge of SQL, including query authoring and experience with relational databases, as well as working familiarity with a variety of databases, including columnar (Redshift, BigQuery, etc.) and NoSQL.
Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
Experience working with message queuing, stream processing, and highly scalable ‘big data’ data stores.
Experience working with Git version control and CI/CD systems.
Strong project management and organizational skills.
Stellar troubleshooting skills with the ability to spot issues before they become problems.
Excellent communication skills, both written and verbal.
Experience supporting and working with cross-functional teams in a dynamic environment.
Preferred Qualifications (Knowledge, Skills, and Abilities):
Experience developing Infrastructure as Code (IaC) using AWS CDK, CloudFormation, or Terraform.
Familiarity with LabVIEW, MATLAB, and/or Simulink.
Proficiency in building RESTful APIs and web services.
Experience with Apache big data tools such as Avro, Beam, and Parquet.