Aviation & Operations Technology Research

Building to Understand:
How Does This Actually Work?

Aviation is changing faster than the infrastructure built to support it. As a self-taught Python developer with nearly 25 years in airport and air traffic operations, I build experiments to understand what those changes actually demand, technically and operationally. From FAA SWIM feeds to advanced air mobility ground ops, this is how I learn to ask better questions.

View Experiments

What is OC's Tech Lab?

OC's Tech Lab is my personal learning lab. OC comes from my air traffic control operating initials, a callsign earned in the tower and carried into everything since. The mindset that came with it hasn't changed: build with intention, check the details, and don't assume something works until you've tested it yourself.

Aviation is in the middle of a genuine transformation. Advanced air mobility is moving from concept to certification. Smart airports are deploying sensor networks, automation layers, and AI-assisted operations. And most of the vendor claims about what these systems can do haven't been tested against real operational constraints.

With all the open source tools and public data pipelines available now, I started asking questions that don't show up in vendor slide decks: why does it have to work this way? When someone says "proprietary development," do they really just mean they have their own way of parsing data from an established information pipeline? What does AAM ground operations actually require at the infrastructure level?

I'm exploring what's possible when you build these systems yourself, with FAA SWIM feeds, optimization algorithms, LLMs, and simulation frameworks, before anyone tries to sell you the finished product.

Trial and Error Learning

The best way to understand any technology is to build it, watch it break, figure out why, and iterate. That's how I learn.

Better Questions Over Time

Early experiments asked "can I do this?" Now they ask "what breaks first?" and "where does signal turn into noise?"

Understanding the Pipeline

It's not about the end result. It's about understanding what it takes to get there. What data? What validation? What guardrails?

The Real Goal: Not to prove technology works, but to understand the demands behind making it work. What do you need? Where does it fail? What questions should you even be asking? You only learn that by building.

Airports are complex, massive organizations, and aviation is moving at a pace that makes it easy to mistake motion for progress. I don't have all the answers. Nobody does. But I know the questions worth asking, and I know the three things every technology decision in this industry ultimately gets measured against: does it improve return on investment, does it drive efficiency, and does it increase throughput? Those are the lanes. Everything runs inside them.

And above all of it, wrapping every lane, every experiment, every question on this site, is safety for the flying public. Not as a compliance checkbox. As the reason the work matters at all. That is the standard this industry has always held itself to, and it is the standard every new technology claiming to improve aviation operations has to be measured against before anything else.

Portfolio Experiments

Each experiment tests a specific hypothesis about where technology can reduce friction in airport and advanced air mobility operations.

Active

AI Agent Orchestration for Gate and Schedule Management

Investigating how AI agents can be structured to assist with gate scheduling, conflict detection, and schedule recovery without replacing the human judgment required to handle edge cases. The core question isn't whether an agent can generate a schedule. It's whether the agent knows what it doesn't know, and whether it hands off to a human at the right moment.
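A minimal sketch of the handoff logic this experiment probes. Every name and the confidence threshold are hypothetical, and the "confidence" value stands in for whatever uncertainty signal the agent actually exposes: a proposal is only applied automatically when it clears the deterministic rules and the agent's own confidence bar; everything else escalates to a human.

```python
from dataclasses import dataclass, field

@dataclass
class GateProposal:
    """A gate assignment suggested by an agent, with self-reported uncertainty."""
    flight: str
    gate: str
    confidence: float                                    # agent's own estimate, 0.0-1.0
    violated_rules: list = field(default_factory=list)   # deterministic checks that failed

def disposition(p: GateProposal, threshold: float = 0.85) -> str:
    """Decide whether a proposal can be applied or must go to a human."""
    if p.violated_rules:
        return "escalate"    # hard rules always win over the agent
    if p.confidence < threshold:
        return "escalate"    # the agent knows what it doesn't know
    return "apply"

print(disposition(GateProposal("UA123", "C7", 0.92)))                              # apply
print(disposition(GateProposal("UA456", "C7", 0.95, ["wingspan_exceeds_gate"])))   # escalate
```

The interesting design question is the second branch: where that threshold sits, and whether the agent's self-reported confidence is trustworthy enough to gate on at all.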

Active

Surface Conflict Research

Studying how conflicts emerge in airport surface operations: the conditions that create them, the signals that precede them, and the tradeoffs involved in detecting them earlier. Focused on understanding the problem space before evaluating solution approaches.

Active

Aviation Data Feed Quality

Examining the reliability, latency, and data quality characteristics of public aviation information pipelines. Understanding what operational tools can actually depend on versus where human validation remains essential.

Active

Unstructured Input in Operational Contexts

Investigating how language models handle the kind of free-text, abbreviated, and context-dependent communication that characterizes real airport operations. Where do they add value, where do they break, and what guardrails are non-negotiable?

Active

Operational Visualization and Decision Support

Studying the relationship between information density, update frequency, and decision quality in high-tempo operational environments. The question isn't what data to show; it's what data to withhold and when.

Active

Delay Propagation and Resource Contention

Modeling how disruptions move through shared airport resources: gates, ground handlers, crew schedules, and connecting passengers. Understanding the non-linear dynamics before trying to optimize them.
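A toy illustration of the dynamic, with made-up flights and turnaround times: on a single shared gate, an upstream delay propagates until schedule slack absorbs it, which is why the response is non-linear in the size of the initial disruption.

```python
# Each tuple: (flight, gate, scheduled_arrival, minimum_turnaround) in minutes.
schedule = [
    ("AA1", "G1",   0, 45),
    ("AA2", "G1",  50, 45),   # only 5 minutes of slack behind AA1
    ("AA3", "G1", 140, 45),   # 45 minutes of slack, absorbs the residual delay
]

def propagate(schedule, initial_delays):
    """Push delays downstream: a gate frees up only after turnaround completes."""
    gate_free = {}   # gate -> time it next becomes available
    delays = {}
    for flight, gate, sched, turn in schedule:
        arrive = sched + initial_delays.get(flight, 0)
        start = max(arrive, gate_free.get(gate, 0))   # wait if gate still occupied
        delays[flight] = start - sched
        gate_free[gate] = start + turn
    return delays

# A 30-minute delay to AA1 cascades to AA2 but dies out before AA3.
print(propagate(schedule, {"AA1": 30}))   # {'AA1': 30, 'AA2': 25, 'AA3': 0}
```

The real experiments add the other shared resources named above, which is where the interactions stop being this tidy.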

How the Experiments Are Built

These experiments span everything from low-level scripting to orchestration with LLMs in closed-loop setups. The stack evolved organically as I explored different problem spaces: Python for data ingestion and simulation, SQL for state management, REST APIs and message queues for integration, web frameworks for visualization.

Early on, I was just trying to figure out if I could parse FAA SWIM feeds or model gate conflicts. But as the experiments matured, the questions got better. Now I'm asking: what's the acceptable latency? Where do validation rules belong? When does automation need human oversight? You only learn to ask those questions by building systems and watching where they break.
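The latency and validation questions can be made concrete with a small sketch. The field names and the 15-second budget here are assumptions for illustration, not SWIM specifics: the point is that staleness and completeness checks are deterministic and sit in front of anything downstream.

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(seconds=15)          # assumed latency budget, not an FAA figure
REQUIRED = {"flight", "timestamp", "position"}

def validate(msg: dict, now=None):
    """Return (ok, reason). Deterministic checks before anything downstream runs."""
    now = now or datetime.now(timezone.utc)
    missing = REQUIRED - msg.keys()
    if missing:
        return False, f"missing fields: {sorted(missing)}"
    age = now - msg["timestamp"]
    if age > MAX_AGE:
        return False, f"stale by {age.total_seconds():.0f}s"
    return True, "ok"

now = datetime.now(timezone.utc)
print(validate({"flight": "DL88", "timestamp": now - timedelta(seconds=40),
                "position": (28.43, -81.31)}, now=now))   # (False, 'stale by 40s')
```

Choosing where MAX_AGE belongs, and what happens to a rejected message, is exactly the kind of question that only surfaced after building the pipeline.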

Computational Model Selection

The point is not loyalty to a particular toolchain. It is mapping problem classes to the right computation model: deterministic code for hard rules, statistical models for patterns and optimization, and probabilistic systems like LLMs where language, ambiguity, or semi-structured inputs dominate.

Most airport operations problems reduce to state management, constraint satisfaction, and event-driven workflows. Core business rules and validation sit in traditional code and finite state machines. Machine learning handles pattern recognition, anomaly detection, and parameter tuning under constraints. LLMs are wired in as controlled components: parsing semi-structured inputs, assisting with schema inference, or driving MCP-style tool chains where the model can call tools, query state, and propose actions but cannot change anything without guardrails.
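One way to sketch the "propose but never change state without guardrails" pattern. The action names and compatibility rules are invented for illustration: the model may only propose whitelisted actions, and even a valid state-changing proposal stops at human review.

```python
ALLOWED_ACTIONS = {"query_gate_status", "propose_gate_swap"}

def gate_is_compatible(flight, gate):
    """Stand-in for real business rules (wingspan, jet bridge, airline lease)."""
    compatible = {("UA123", "C7"), ("UA123", "C9")}
    return (flight, gate) in compatible

def execute(proposal: dict):
    """Accept a model-proposed action only if it clears the guardrails."""
    if proposal["action"] not in ALLOWED_ACTIONS:
        raise PermissionError(f"action not whitelisted: {proposal['action']}")
    if proposal["action"] == "propose_gate_swap":
        if not gate_is_compatible(proposal["flight"], proposal["gate"]):
            return {"status": "rejected", "reason": "incompatible gate"}
        return {"status": "pending_human_review"}   # still no direct state change
    return {"status": "executed"}                   # read-only queries pass through

print(execute({"action": "propose_gate_swap", "flight": "UA123", "gate": "C7"}))
```

The guard sits outside the model on purpose: nothing the model emits can widen the whitelist or skip the compatibility check.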

Drawing the Boundaries

The interesting work is drawing the boundaries. Which paths must be fully deterministic? Where is ML actually adding signal instead of noise? Where does an LLM require an external validator or human in the loop before its output is allowed to matter?

Key Questions I'm Asking in 2026

As the industry pushes harder into automation, AI, and advanced air mobility, these are the questions I'm wrestling with.

Is AAM ground operations a staffing problem or a systems problem?

The early conversations treat vertiport ground ops as a staffing question: how many people do you need on a pad? But the real question is about systems design. The turnaround windows are shorter, the energy state is a constraint, and the failure modes are different. If we design the human role first and bolt on technology later, we'll repeat every mistake made in conventional ramp automation.

Are airports building the right interfaces for the operators who will actually use them?

Smart airport technology generates more data than most operations teams can act on. The gap isn't sensor coverage or AI capability. It's the interface layer between the data and the decision. Operators in high-tempo environments need tools that narrow choices under pressure, not dashboards that add cognitive load.

Do we need digital twins of everything?

The pitch is compelling: a complete virtual replica of your airport. But what problem does it actually solve? Is it operational improvement or just an expensive visualization tool?

Are LiDAR deployments delivering the return we expected?

Comprehensive sensor coverage sounds great until you see the price tag and maintenance burden. Are we getting the return we expected, or would better ML models trained on existing camera feeds deliver 80% of the value at 20% of the cost?

Are we replacing our most valuable asset with technology to cut costs?

Experienced operators who understand context and edge cases are being replaced with technology to reduce headcount. But are we actually improving the passenger experience, or just making operations cheaper and more brittle?

How can we better communicate airport needs to our technology partners?

Vendor demos show polished interfaces and promise transformative results. But do they understand the operational reality? The radio coordination delays? The gate conflicts that cascade? The data quality issues in real-time feeds? Building these experiments helps clarify what we're actually asking for.

BNATCS and the Airport Operations Question Nobody Is Asking Yet

The FAA is replacing the entire U.S. air traffic control infrastructure. Not patching it. Not upgrading components. Replacing it. The Brand New Air Traffic Control System (BNATCS) is a $32.5 billion program with a target delivery at the end of 2028. Peraton was awarded the prime integrator contract, and the work is already underway.

The public conversation has focused on the controller side: paper flight strips being replaced by the Terminal Flight Data Manager; 5,170 new high-speed network connections across fiber, satellite, and wireless; Enterprise Information Display Systems replacing the floppy-drive-era hardware still running in FAA facilities today. Equipment-related delay minutes in 2025 ran roughly 300 percent above the 2010 to 2024 average. That is the infrastructure problem BNATCS is built to fix.

That conversation is necessary. It is also incomplete.

The part that is not being discussed loudly enough is what happens on the airport side of the fence when the ATC side changes this dramatically. BNATCS does not exist in isolation. The data it generates, the reliability it promises, and the new coordination pathways it creates will flow downstream into airport operations systems, ground handling workflows, gate management tools, and AAM integration frameworks. If airport operators are not building toward those interfaces now, they will be scrambling to retrofit when the new system goes live.

Remote Towers Change the Ground Operations Equation

One of BNATCS's six workstreams is the deployment of remote control towers: facilities where controllers manage airport traffic off-site using cameras and sensors rather than a physical cab. This is not a distant hypothetical. It is in the program plan. For smaller airports the operational implications are significant, but even at larger facilities the concept raises questions about how ground coordination, ramp management, and surface awareness tools need to evolve when the tower is no longer physically co-located with the operation it is managing.

Better Data Is Only Useful If the Airport Side Can Receive It

BNATCS promises more reliable, lower-latency data across the NAS. That is genuinely valuable for airport operations tools that depend on ATC feeds. But reliability upstream does not automatically translate into usability downstream. Airport operators and technology vendors need to be asking now: what does our ingestion pipeline look like when the source data is cleaner and faster? Where are our current tools buffering for bad data in ways that will create new problems when the data quality improves? What integration points need to be redesigned, not just updated?

AAM Cannot Wait for BNATCS to Finish

Advanced air mobility integration into the NAS depends heavily on the kind of data infrastructure BNATCS is building. Reliable feeds, modern telecommunications, and better coordination tools between facilities are all prerequisites for managing eVTOL traffic alongside conventional aircraft. The 2028 BNATCS target and the near-term AAM certification timelines are on a collision course. Airport operators cannot wait for one program to finish before starting to design for the other. The ground operations framework for AAM has to be built in parallel, against the architecture that BNATCS is delivering.

The Integration Risk Is Real

Peraton has no prior FAA or ATC experience. That is not an opinion; Transportation Secretary Duffy acknowledged it directly. The program is ambitious, the timeline is aggressive, and the $12.5 billion initial appropriation is roughly 38 percent of the total estimated cost. Additional funding authorization is still required. None of that means the program fails. It does mean that airport operators should be planning for implementation variability rather than assuming a clean 2028 cutover. The airport side of the house needs its own transition framework, not just a dependency on the FAA delivering on schedule.

The question for airport operators is not whether BNATCS matters. It is whether your operation is building toward it or waiting to react to it.

Speaking & Appearances

These conversations connect the technical work to the operational reality. The experiments inform the questions; the panels are where the questions go public.

Airports@Work Conference Chicago, IL
March 2026
Panelist

I Don't Want to Live Without Your Love: Spotlighting Opportunities for Airport and Airline Collaboration

A working session on where airport and airline operational incentives align, where they diverge, and what it actually takes to close the gap in practice. Alongside counterparts from United Airlines, moderated by Phil Morris.

Smart Airports Symposium February 2026
Panelist

Human in the Loop for Smart Tech

Examined where human oversight remains operationally essential as airports deploy increasingly automated systems, and how to design the handoff between automated decision support and human authority without creating new failure modes.

About

I'm accountable for the Virtual Ramp Control program at MCO: the standards, the structure, and the operational decisions that govern roughly 18% of daily aircraft movements across 41 airlines. Outside of that, I build experiments. That's how I've always learned.

My path runs through five airports (including a NATO facility), U.S. Navy air traffic control, and nearly 25 years in aviation operations. Along the way: a BA-IT, an MBA in Strategic Leadership, FAA CTO certification, the ACI-NA USAP designation, AAAE certifications (IACE, ASC), ISO 9001 internal and external auditor credentials, plus PM training and a lot of self-taught Python, machine learning, and systems design.

Quality assurance is the discipline underneath all of it. Whether I was turning around underperforming ATC facilities, conducting formal audits, or building experiments to understand what airport technology actually demands operationally, the question is always the same: how do you know it works? Not in the demo. Not on paper. Under real conditions, with real data, when things don't go as planned. That orientation drives the experiments on this site as much as it drives any formal audit process.

Right now, that question points at advanced air mobility. eVTOL operations at vertiports aren't just a new vehicle class; they're a ground operations problem, a systems integration problem, and a quality assurance problem the industry hasn't fully defined yet. I'm building experiments to understand the gap before vendors show up with finished products.

5 Airports (Including NATO)
25 Years Aviation Experience
8 Active Research Experiments

How This Industry Actually Works

Airports are complex systems operating at a scale few organizations fully appreciate. Operations, technology, regulation, commercial pressure, and human performance are tightly coupled, and small changes in one domain cascade quickly into others. That makes outcomes difficult to model and even harder to predict with confidence.

The pace of change in aviation now exceeds the decision frameworks many organizations still rely on. That gap introduces operational risk, but it also creates opportunity for organizations willing to rethink how they evaluate change and make decisions.

No organization has perfect visibility, and certainty is rare in this environment. What matters is building structures that surface the right signals early, support informed judgment at the leadership level, and keep the organization aligned as conditions shift.