JOB TITLE: Platform Data Engineer
(Intermediate: 3-5 years of experience)
EMPLOYMENT: Full Time
LOCATION: Remote, U.S. Only
If you were recently impacted by tech layoffs, we encourage you to apply! We are actively interviewing for multiple Data Engineering positions.
Please note: we conduct a thorough background check, which includes verifiable employment history and references.
ABOUT US
We're a data consulting & engineering company that offers services across the entire data ecosystem. Our team of engineers does excellent work and takes a partnership approach with the clients we serve.
Combining industry best practices with deep technical capabilities, we deliver meaningful solutions that accelerate company growth through advanced data, cloud, and AI.
RESPONSIBILITIES
You'll work closely with data and analytics engineers, software engineers, and ML/AI engineers to solve complex data challenges and make informed architectural decisions. Depending on the client and project, you might:
- Help define and release scalable architectures that balance agility with governance.
- Build and orchestrate purpose-driven data platforms, including services for data ingest, ELT pipelines, monitoring and alerting, and other components across end-to-end data ecosystems.
- Support the productionization of downstream data delivery, including but not limited to: embedded data applications, reverse ETL, ML features and AI products.
- Provide strategic guidance and thought leadership on scaling data and AI solutions.
QUALIFICATIONS
- 3-5 years of proven work experience as a Platform Data Engineer, with expertise in at least one programming language (Python, Scala, or Go)
- 3-5 years of experience with data platform and ETL/ELT design and implementation, including development best practices in testing, logging, and monitoring
- Extensive experience with infrastructure as code (Terraform) and implementing data platforms from the ground up on cloud ecosystems
- Background working with distributed big data technologies such as Spark, Presto, and Hive, as well as data warehouse technologies including Snowflake, BigQuery, or Redshift
- Extensive data modeling knowledge spanning NoSQL, transactional databases, and columnar and row-based distributed analytical data environments, as well as prior experience designing and implementing best-practice data lakes
- Knowledge of agile software development and continuous integration/deployment (CI/CD) principles
- BS degree in Computer Science or related technical field, or equivalent practical experience
BENEFITS
- Remote-First Culture: Work remotely if you thrive in an independent setting while collaborating with technical teams across multiple departments
- Flexible Time Off: Benefit from the flexibility of an open PTO policy
- Participation in the company health insurance plan, plus dental and vision coverage
- Competitive annual bonus structure