Senior Data Engineer
Location: Remote (preference for Mountain Time Zone)
Type: Full-time
About the Role
We’re looking for an experienced Senior Data Engineer to design, build, and maintain the data backbone of our fast-growing GovTech & healthcare platform. You’ll own end-to-end data pipelines, third-party integrations, and database modeling to power analytics, reporting, and operational workflows. As a core contributor, you’ll collaborate with Product, Engineering, QA, and DevOps to ensure data is accurate, accessible, and compliant with HIPAA, FedRAMP, and SOC 2 requirements.
Key Responsibilities
- ETL/ELT Pipeline Development:
  - Architect, develop, and operate scalable data pipelines in Python using frameworks such as Apache Airflow, AWS Glue, or similar.
  - Ingest and transform data from internal sources, microservices, and third-party APIs (REST, streaming, webhooks).
- Data Modeling & Warehousing:
  - Design and maintain dimensional and normalized schemas in cloud data warehouses (AWS Redshift, Snowflake, or equivalent).
  - Optimize table structures, partitioning, and indexing for performance and cost efficiency.
- Third-Party Integrations:
  - Build and manage robust, fault-tolerant integrations with external systems (payment gateways, identity providers, data vendors).
  - Develop monitoring, retries, and alerting to ensure integration reliability.
- Data Quality & Governance:
  - Implement data validation, anomaly detection, and reconciliation processes to ensure accuracy.
  - Collaborate with Security and Compliance teams to enforce data governance, encryption, and access controls.
- Collaboration & Enablement:
  - Partner with Analytics, ML, and Product teams to translate requirements into data solutions.
  - Provide self-service data access (views, dashboards) and documentation for stakeholders.
- Performance & Cost Optimization:
  - Monitor and tune pipeline and warehouse performance; identify opportunities to reduce AWS spend.
  - Introduce caching, batching, and parallelism as appropriate for large-scale workloads.
- Innovation & Tooling:
  - Evaluate and prototype emerging data technologies (Spark, Kafka, dbt, data mesh patterns).
  - Leverage AI/ML tools to automate repetitive data tasks and anomaly detection.
Required Qualifications
- Experience: 7+ years in data engineering or analytics engineering roles.
- Core Language: Expert-level Python for ETL scripting, API clients, and automation.
- Cloud Proficiency: Hands-on with AWS data services (S3, Glue, Redshift, EMR, Lambda) and infrastructure-as-code (Terraform or CloudFormation).
- SQL & Data Modeling: Deep expertise designing relational schemas, writing complex SQL, and building data marts.
- Pipeline Frameworks: Proven experience with Apache Airflow, AWS Glue, or equivalent orchestration tools.
- Integration Skills: Solid background integrating and transforming data from third-party APIs, streaming platforms, and message queues.
- Regulatory Compliance: Familiarity with data handling requirements in HIPAA, FedRAMP, and SOC 2 environments.
- Collaboration: Strong communication skills; able to partner effectively with cross-functional teams.
Preferred Qualifications
- Big Data Technologies: Experience with Apache Spark, Kafka, Kinesis, or similar.
- Modern Transform Tools: Proficiency with dbt, Delta Lake, or Iceberg for versioned tables and transformations.
- Containerization & Orchestration: Knowledge of Docker and Kubernetes for data workloads.
- Machine Learning Pipelines: Exposure to MLOps frameworks and feature stores.
- Healthcare & Government Domain: Prior work on healthcare analytics or government data projects.
- Open-Source Contributions: Engagement with data-engineering or analytics OSS communities.
Core Competencies & Expectations
- Hands-On Contributor: You dive into code and configurations, continuously shipping reliable data solutions.
- Data Quality Champion: You obsess over accuracy, completeness, and timeliness of data.
- Problem Solver: You break down complex data challenges into clear designs and implementations.
- Collaborative Mindset: You communicate clearly, document thoroughly, and empower others with data.
- Continuous Learner: You stay current with evolving data architectures and share insights with the team.
What We Offer
- Competitive salary and equity packages
- Comprehensive health, dental, and vision benefits
- Monthly wellness stipend
- Generous PTO
If you’re passionate about building robust data foundations that drive mission-critical insights and workflows, we’d love to talk. Please apply with your résumé and examples of your most impactful data-engineering projects.