Senior Data Engineer (Microsoft Fabric)
Calling All Upstarters!
SENIOR DATA ENGINEER WANTED!
We are Upstart 13. We are humble, hungry, and competent people who are radically changing the expectations and experience of outsourcing by challenging barriers that create inequality and by bringing down borders in technology for people everywhere. We’re all about delivering value and doing big things, and we have become a game-changer for teams around the world who look to Upstart’s services as a differentiator.
Job Description:
We are seeking a Senior Data Engineer to own the design, build-out, and evolution of our data platform on Microsoft Fabric. You will work hands-on across Lakehouses, Warehouses, Data Factory pipelines, Spark notebooks, and Dataflows Gen2, ingesting, transforming, and serving high-quality data to downstream consumers including semantic models and Power BI reports. The ideal candidate has deep experience with lakehouse architecture, medallion patterns, and modern ELT practices within the Microsoft data ecosystem.
Responsibilities:
Design and implement end-to-end ELT pipelines using Data Factory, Spark notebooks (PySpark/Spark SQL), and Dataflows Gen2, ingesting from APIs, databases, flat files, and on-prem sources.
Build and maintain a medallion architecture (Bronze → Silver → Gold) in Fabric Lakehouses, with Delta Lake optimization (partitioning, Z-ordering, incremental loads) across OneLake.
Implement data validation, schema enforcement, and automated quality checks; partner with governance on lineage, sensitivity labels, and RLS/OLS.
Manage Fabric platform operations, capacity monitoring, Git-based CI/CD for notebooks and pipelines, and multi-environment workspace configuration.
Deliver well-structured Gold-layer tables optimized for Direct Lake; collaborate with BI stakeholders on star-schema design and semantic model performance.
Design and implement RTM isolation strategies using Change Data Capture (CDC) and optimized incremental loading patterns to minimize performance impact on mission-critical production databases.
Collaborate with BI stakeholders to deliver AI-ready Gold-layer tables, hiding technical columns and embedding business-friendly metadata (descriptions and synonyms) in the semantic foundation.
Manage semantic model lifecycles across Dev, Test, and Prod environments using Git-based CI/CD with Power BI Projects (PBIP) and TMDL.
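The Delta Lake merge/upsert pattern referenced above can be sketched in simplified, pure-Python form. This is only an illustration of the semantics (the column and key names here are hypothetical); in a Fabric Spark notebook the same operation would be a Delta table MERGE:

```python
def merge_upsert(target, incoming, key="id"):
    """Simplified in-memory analogue of a Delta Lake MERGE:
    rows whose key already exists are updated, the rest are inserted."""
    # Index the target table by its business key.
    by_key = {row[key]: dict(row) for row in target}
    for row in incoming:
        # Matched keys take the incoming values; unmatched keys are inserted.
        by_key[row[key]] = {**by_key.get(row[key], {}), **row}
    return list(by_key.values())

# Hypothetical Silver-layer table and an incremental batch.
silver = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
batch = [{"id": 2, "amount": 25}, {"id": 3, "amount": 30}]
merged = merge_upsert(silver, batch)
```

Running the merge repeatedly with the same batch yields the same result, which is the idempotency property that makes incremental loads safe to retry.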
Qualifications:
Experience:
6+ years building production ETL/ELT pipelines in cloud data platforms.
Strong hands-on experience with Microsoft Fabric (Lakehouses, Warehouses, Data Factory, Spark notebooks, OneLake).
Deep understanding of lakehouse and medallion architecture patterns.
Proficient in PySpark, Spark SQL, and/or T-SQL for data transformation and optimization.
Experience with Delta Lake table management (partitioning, optimization, incremental loads, merge/upsert patterns).
Familiarity with dimensional modeling and star-schema design principles.
Working knowledge of Git-based CI/CD for data artifacts (notebooks, pipelines, TMDL/PBIP a plus).
Comfortable working with the SQL analytics endpoint and understanding Direct Lake requirements.
Soft skills:
Execution-First Mindset — delivers working functionality quickly, then iterates.
Curious Integrator — enjoys untangling messy, niche data feeds and brings order.
Quality Advocate — insists on tests, logging, and clear hand-offs.
Collaborative Communicator — explains decisions to peers and stakeholders.
Continuous Learner — keeps abreast of Azure & Fabric feature releases.
Bonus skills:
Power BI semantic model development (DAX, Power Query/M, DirectQuery, Import mode).
Experience with Tabular Editor, DAX Studio, or XMLA endpoint.
SemPy / Semantic Link for programmatic model validation or notebook-based analytics.
Fabric Capacity Metrics app and workspace monitoring experience.
Experience deploying in hybrid (cloud/on-prem) environments.
Familiarity with Azure Data Lake Storage Gen2, Azure Key Vault, or broader Azure ecosystem services.
Why Upstart13?
We put people first at Upstart 13! We believe the world is filled with amazing people and we are willing to go to great lengths to seek out others who share our values to join our cause of bringing down borders in technology for people everywhere.
We develop leaders at Upstart 13, we focus on what matters to do meaningful work, we own our shit, we stay curious, and we understand responsibility leads to giving. We do big things together!
Perks:
Job type: long-term, full-time.
Fully remote.
Competitive salary in USD.
20+ paid time-off days.

Are you ready to join our cause? Be sure to ask, “Why 13?”
- Department: Data
- Role: Data Engineer
- Remote status: Fully Remote
- Employment type: Full-time
About Upstart 13
We strategize, solve, and build solutions to business problems with AI, data, and software—grounded in strategic clarity.
From boardroom to build, we connect strategy to execution using all available intelligence—human and otherwise—to help companies achieve efficiency, growth, and competitive advantage.