About
Desert Data Labs is a Tucson‑based data and technology studio specializing in Shiny applications, SQL databases, and data engineering. We help small teams turn messy data into clean, decision‑ready systems.
Founder

Hello! I’m an ecologist turned data scientist and the founder of Desert Data Labs — a small consulting studio in Tucson that builds data‑collection apps, dashboards, and automation tools for environmental consultants, nonprofits, and research teams. I also work at the University of Arizona developing data‑collection applications and spend most of my time building R and Python tools for interactive visualizations and reporting. I’m currently finishing my master’s in Data Science.
I started my career in the field collecting ecological data — small mammal trapping, electrofishing, plant diversity surveys, all the messy real‑world stuff — while working with the National Ecological Observatory Network. That’s where I taught myself to code, mostly out of curiosity and a desire to make our workflows less painful. One project led to another, and eventually I moved fully into building data applications, pipelines, and tools that help teams work faster and with fewer errors.
I love the desert, hiking and running, and spending time with the people I care about. And I still get excited about turning a messy dataset into something clean, useful, and actually enjoyable to work with.
Background
Three specialties at the core, with a deep bench of supporting skills.
Specialty
R Shiny app development
Interactive web applications in R Shiny — dashboards, data collection tools, scientific visualization apps. The framework I use day in, day out.
Specialty
SQL database management
Schema design, query tuning, stored procedures, migrations, backups. SQL Server, PostgreSQL, SQLite, and Azure SQL — built to last and easy to live with.
Specialty
Data engineering
ETL pipelines, API integrations, and the plumbing that keeps data flowing cleanly between collection, storage, and reporting layers.
Data science & analytics
Statistics, modeling, forecasting, and turning numbers into something a non-technical stakeholder will actually read.
Environmental & research data
Field-collection workflows, QA protocols, and the realities of messy real-world data — from years in the field with NEON.
Azure cloud
Managed databases, ETL pipelines, and app hosting on Microsoft Azure when the project calls for it.
Data security & storage
Most clients ask some version of “who can see my data and where does it go?” — those are the right questions. Here’s how I answer them.
Where does your data live?
It depends on the project — usually a cloud database (Azure SQL, PostgreSQL) in a region you specify, or your own servers if you need it on-premises. Either way, you own it. Not me, not a platform vendor.
Encryption — at rest and in transit
Every database I build is encrypted at rest and all connections use TLS. That means your data is protected whether it’s sitting in storage or moving between your app and the database. This isn’t optional — it’s the default.
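As one concrete example of what "TLS by default" looks like in practice, a PostgreSQL client can refuse any unencrypted or unverified connection through the standard libpq `sslmode` parameter (the host, user, and database names below are placeholders, not a real deployment):

```
postgresql://app_user@db.example.com:5432/fielddata?sslmode=verify-full
```

`verify-full` both encrypts the connection and checks that the server's certificate matches the host name, so a misconfigured or spoofed endpoint fails loudly instead of silently downgrading to plaintext.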
Access control & audit logs
Role-based access means people see only what they need — a field technician doesn’t get access to the full landowner database, a manager gets summary views without raw records. Every change is logged: who did what, and when.
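To illustrate the logging half of that, here is a minimal sketch of change auditing using SQLite triggers — table and column names are invented for the example, and a production database would pair this with server-enforced roles and permissions rather than relying on triggers alone:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory database just for the demo
conn.executescript("""
CREATE TABLE records (id INTEGER PRIMARY KEY, owner TEXT, value REAL);

-- Every change gets a row here: what happened, to which record, and when
CREATE TABLE audit_log (
    id INTEGER PRIMARY KEY,
    action TEXT,
    record_id INTEGER,
    logged_at TEXT DEFAULT (datetime('now'))
);

-- The trigger fires automatically on every UPDATE, so the application
-- code cannot forget to log a change
CREATE TRIGGER log_update AFTER UPDATE ON records
BEGIN
    INSERT INTO audit_log (action, record_id) VALUES ('update', NEW.id);
END;
""")

conn.execute("INSERT INTO records (owner, value) VALUES ('landowner', 1.0)")
conn.execute("UPDATE records SET value = 2.0 WHERE id = 1")

# The audit table now records the update, independent of the app code
rows = conn.execute("SELECT action, record_id FROM audit_log").fetchall()
```

The point of doing this at the database layer is that the log is written no matter which app, script, or user made the change.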
Government & federal contracts
I build on FedRAMP-authorized cloud infrastructure (Azure Government, AWS GovCloud) and structure deployments to align with federal compliance requirements. Full FedRAMP authorization is an organizational process, not something a single contractor delivers — but you’ll be starting on the right foundation.
Sensitive & private data
Private landowner records, PII, health data — if your project involves information people expect to stay private, that shapes the architecture from day one: what gets encrypted, who has access, what gets logged, and how data is handled when a user leaves the project.
Backups & recovery
Every database gets a backup plan — scheduled snapshots and a documented restore procedure. You’ll know what the recovery process looks like before launch, not after something goes wrong.
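For SQLite-backed projects, for example, a consistent snapshot can be taken with Python's standard-library online backup API — the table and site names below are placeholders, and server databases like PostgreSQL would use their own tooling (e.g. `pg_dump`) instead:

```python
import sqlite3

# The live database, with some data (in-memory for the example)
live = sqlite3.connect(":memory:")
live.execute("CREATE TABLE surveys (id INTEGER PRIMARY KEY, site TEXT)")
live.execute("INSERT INTO surveys (site) VALUES ('tumamoc_hill')")
live.commit()

# Copy the entire database into a snapshot while it is still open;
# in practice the target would be a dated file, not another in-memory DB
snapshot = sqlite3.connect(":memory:")
live.backup(snapshot)

# The restore check: the snapshot should contain the same rows
restored = snapshot.execute("SELECT site FROM surveys").fetchall()
```

A documented restore procedure is just this read-back step done on a schedule — a backup you have never restored from is only a hope, not a plan.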
Mission
Make data useful, accessible, and actionable for everyone, not just teams with a budget for enterprise software.
Like the way we work?
Let’s see if there’s a project to build.