Healthy Together

Senior Data Engineer

Location: Remote (preference for Mountain Time Zone)
Type: Full-time

About the Role

We’re looking for an experienced Senior Data Engineer to design, build, and maintain the data backbone of our fast-growing GovTech and healthcare platform. You’ll own end-to-end data pipelines, third-party integrations, and database modeling to power analytics, reporting, and operational workflows. As a core contributor, you’ll collaborate with Product, Engineering, QA, and DevOps to ensure data is accurate, accessible, and compliant with HIPAA, FedRAMP, and SOC 2 requirements.

Key Responsibilities

  • ETL/ELT Pipeline Development:
    • Architect, develop, and operate scalable data pipelines in Python using frameworks such as Apache Airflow, AWS Glue, or similar.
    • Ingest and transform data from internal sources, microservices, and third-party APIs (REST, streaming, webhooks).
  • Data Modeling & Warehousing:
    • Design and maintain dimensional and normalized schemas in cloud data warehouses (AWS Redshift, Snowflake, or equivalent).
    • Optimize table structures, partitioning, and indexing for performance and cost efficiency.
  • Third-Party Integrations:
    • Build and manage robust, fault-tolerant integrations with external systems (payment gateways, identity providers, data vendors).
    • Develop monitoring, retries, and alerting to ensure integration reliability.
  • Data Quality & Governance:
    • Implement data validation, anomaly detection, and reconciliation processes to ensure accuracy.
    • Collaborate with Security and Compliance teams to enforce data governance, encryption, and access controls.
  • Collaboration & Enablement:
    • Partner with Analytics, ML, and Product teams to translate requirements into data solutions.
    • Provide self-service data access (views, dashboards) and documentation for stakeholders.
  • Performance & Cost Optimization:
    • Monitor and tune pipeline and warehouse performance; identify opportunities to reduce AWS spend.
    • Introduce caching, batching, and parallelism as appropriate for large-scale workloads.
  • Innovation & Tooling:
    • Evaluate and prototype emerging data technologies (Spark, Kafka, dbt, data mesh patterns).
    • Leverage AI/ML tools to automate repetitive data tasks or anomaly detection.

Required Qualifications

  • Experience: 7+ years in data engineering or analytics engineering roles.
  • Core Language: Expert-level Python for ETL scripting, API clients, and automation.
  • Cloud Proficiency: Hands-on with AWS data services (S3, Glue, Redshift, EMR, Lambda) and infrastructure-as-code (Terraform or CloudFormation).
  • SQL & Data Modeling: Deep expertise designing relational schemas, writing complex SQL, and building data marts.
  • Pipeline Frameworks: Proven experience with Apache Airflow, AWS Glue, or equivalent orchestration tools.
  • Integration Skills: Solid background integrating and transforming data from third-party APIs, streaming platforms, and message queues.
  • Regulatory Compliance: Familiarity with data handling requirements in HIPAA, FedRAMP, and SOC 2 environments.
  • Collaboration: Strong communication skills; able to partner effectively with cross-functional teams.

Preferred Qualifications

  • Big Data Technologies: Experience with Apache Spark, Kafka, Kinesis, or similar.
  • Modern Transform & Table Tools: Proficiency with dbt for transformations, and with Delta Lake or Apache Iceberg for versioned table formats.
  • Containerization & Orchestration: Knowledge of Docker and Kubernetes for data workloads.
  • Machine Learning Pipelines: Exposure to MLOps frameworks and feature stores.
  • Healthcare & Government Domain: Prior work on healthcare analytics or government data projects.
  • Open-Source Contributions: Engagement with data-engineering or analytics OSS communities.

Core Competencies & Expectations

  • Hands-On Contributor: You dive into code and configurations, continuously shipping reliable data solutions.
  • Data Quality Champion: You obsess over accuracy, completeness, and timeliness of data.
  • Problem Solver: You break down complex data challenges into clear designs and implementations.
  • Collaborative Mindset: You communicate clearly, document thoroughly, and empower others with data.
  • Continuous Learner: You stay current with evolving data architectures and share insights with the team.

What We Offer

  • Competitive salary and equity packages
  • Comprehensive health, dental, and vision benefits
  • Monthly wellness stipend
  • Generous PTO

If you’re passionate about building robust data foundations that drive mission-critical insights and workflows, we’d love to talk. Please apply with your résumé and examples of your most impactful data-engineering projects.

Apply For This Position
© 2025 Twenty Labs, LLC.