Building to Understand: How Does This Actually Work?

A tech enthusiast's hands-on exploration of airport operations technology

In case you haven't noticed, AI is the buzzword of the moment. As a self-taught Python developer with a stack of certifications, all driven by "how does that work?", I build experiments to understand the demands and pipelines behind the hype. This isn't about evaluating SaaS products. It's about learning to ask better questions through trial and error.

What is OC's Tech Lab?

OC's Tech Lab is my personal learning lab. OC comes from my air traffic control operating initials, but it's also become shorthand for how I work. It's the mindset I carried from the tower into tech: build with intention, check the details, and don't assume something works until you've tested it yourself.

With all the open-source tools and public data pipelines available now, I started asking questions that don't show up in vendor slide decks: Why does it have to work this way? When someone says "proprietary development," do they really just mean they have their own way of parsing data from an established information pipeline?

I'm exploring what's possible when you actually build these systems yourself with FAA SWIM feeds, optimization algorithms, LLMs, and simulation frameworks.

Trial and Error Learning

The best way to understand any technology is to build it, watch it break, figure out why, and iterate. That's how I learn.

Better Questions Over Time

Early experiments asked "can I do this?" Now they ask "what breaks first?" and "where does signal turn into noise?"

Understanding the Pipeline

It's not about the end result. It's about understanding what it takes to get there. What data? What validation? What guardrails?

The Real Goal: Not to prove technology works, but to understand the demands behind making it work. What do you need? Where does it fail? What questions should you even be asking? You only learn that by building.

Portfolio Experiments

Each experiment tests a specific hypothesis about where technology can reduce friction in airport operations.

Gate & Stand Conflict Modeling

Active

Models pushback sequences, arrival flows, and blocking conflicts at common-use gates.

Can constraint-based search predict blocking conflicts faster than manual inspection when turnaround windows shift?

Tests rule-based engines against simulated gate activity. Measures how often the system catches conflicts that human planners would miss under time pressure.

Python · Constraint Solver · Event Simulation · FAA SWIM
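
To make the hypothesis concrete, here's a minimal sketch of the core check: interval overlap detection at a shared gate with a pushback buffer. The gates, flights, times, and buffer value are made up for illustration; the actual experiment layers constraint-based search and simulated gate activity on top of this kind of primitive.

```python
# Minimal sketch: each turnaround is a (gate, on-block, off-block) window, and
# a conflict is any pair of same-gate windows that overlap once a buffer is
# applied. All identifiers and times below are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta
from itertools import combinations

@dataclass
class Turnaround:
    flight: str
    gate: str
    on_block: datetime
    off_block: datetime
    buffer: timedelta = timedelta(minutes=10)  # pushback/taxi-in separation

def conflicts(turnarounds):
    """Return (flight, flight, gate) triples whose occupancy windows, plus
    buffer, overlap at the same gate: the blocking a planner catches by eye."""
    found = []
    for a, b in combinations(turnarounds, 2):
        if a.gate != b.gate:
            continue
        # Overlap check with the buffer applied to both windows.
        if a.on_block < b.off_block + b.buffer and b.on_block < a.off_block + a.buffer:
            found.append((a.flight, b.flight, a.gate))
    return found

if __name__ == "__main__":
    t0 = datetime(2025, 6, 1, 14, 0)
    schedule = [
        Turnaround("AAL123", "G7", t0, t0 + timedelta(minutes=45)),
        Turnaround("DAL456", "G7", t0 + timedelta(minutes=50), t0 + timedelta(minutes=95)),
        Turnaround("UAL789", "G8", t0, t0 + timedelta(minutes=60)),
    ]
    # Shift one turnaround window and see what breaks first.
    schedule[0].off_block += timedelta(minutes=15)
    print(conflicts(schedule))
```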

SWIM Feed Parsing & Validation

Active

Ingests FAA System Wide Information Management data and extracts operational signals.

How much latency and error tolerance is acceptable when real-time feeds drive operational tools?

Tracks message arrival patterns, schema drift, and data quality issues. Documents where automation needs human validation versus where it can run closed-loop.

REST APIs · Message Queues · Schema Validation · Error Handling
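
A rough sketch of the validation side, assuming upstream code has already converted each message into a Python dict. The field names and staleness math are placeholders, not the actual SWIM schema; the point is tallying drift as a trend instead of throwing one-off exceptions.

```python
# Minimal sketch: check that a message carries the fields downstream tools
# rely on, tally anything missing or unexpected, and measure arrival lag.
from collections import Counter
from datetime import datetime, timezone

EXPECTED_FIELDS = {"flight_id", "event_type", "timestamp", "position"}

def validate(message: dict, drift_counter: Counter) -> bool:
    """Return True if required fields are present; record drift either way."""
    fields = set(message)
    missing = EXPECTED_FIELDS - fields
    extra = fields - EXPECTED_FIELDS
    for field in missing:
        drift_counter[f"missing:{field}"] += 1
    for field in extra:
        drift_counter[f"unexpected:{field}"] += 1
    return not missing

def arrival_lag_seconds(message: dict) -> float:
    """How stale is this message by the time we see it?"""
    sent = datetime.fromisoformat(message["timestamp"])
    return (datetime.now(timezone.utc) - sent).total_seconds()

if __name__ == "__main__":
    drift = Counter()
    sample = {
        "flight_id": "SWA100",
        "event_type": "departure",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "runway": "17R",  # a field the parser wasn't written for
    }
    print("valid:", validate(sample, drift))
    print("lag (s):", round(arrival_lag_seconds(sample), 3))
    print("drift tally:", dict(drift))
```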

Semi-Structured Input Parsing

Active

Uses LLMs to parse free-text coordination messages and radio calls into structured events.

Can controlled LLM outputs supplement rule-based parsers without introducing hallucination risk?

Tests Model Context Protocol workflows where LLMs parse text but can't change state without validation. Measures precision/recall against human-labeled ground truth.

LLM Integration · MCP Framework · Validation Pipelines · NLP
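
Here's a stripped-down sketch of the guardrail, with the model call stubbed out and an invented event taxonomy. The LLM gets to propose a structured event; deterministic checks decide whether it ever becomes one.

```python
# Minimal sketch of "parse but don't act": the llm_parse stub stands in for a
# real model call, and nothing becomes a structured event unless it survives
# JSON, schema, taxonomy, and source-text checks.
import json

ALLOWED_EVENTS = {"pushback_request", "gate_change", "hold_instruction"}
REQUIRED_FIELDS = {"event", "flight_id", "raw_text"}

def llm_parse(raw_text: str) -> str:
    # Placeholder for a model call; hard-coded plausible output for the sketch.
    return json.dumps({"event": "pushback_request", "flight_id": "JBU250",
                       "raw_text": raw_text})

def to_structured_event(raw_text: str):
    """Accept the LLM's parse only if it is valid JSON, carries the required
    fields, maps to a known event type, and didn't alter the source text."""
    try:
        parsed = json.loads(llm_parse(raw_text))
    except json.JSONDecodeError:
        return None  # hand back to the rule-based parser / human review
    if not REQUIRED_FIELDS.issubset(parsed):
        return None
    if parsed["event"] not in ALLOWED_EVENTS:
        return None
    if parsed["raw_text"] != raw_text:
        return None  # hallucinated or rewritten transcript
    return parsed

if __name__ == "__main__":
    print(to_structured_event("JetBlue 250 requesting push off gate 129"))
```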

Real-Time Dashboard Rendering

Active

Displays live aircraft positions, gate status, and constraint violations in a web interface.

At what update frequency does real-time visualization add decision value versus overwhelming users?

Explores tradeoffs between data freshness, computational cost, and user cognitive load. Tests whether pushing every state change improves outcomes or just creates noise.

WebSockets · React · State Management · Data Streaming
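
A small sketch of the coalescing idea, assuming the dashboard only ever needs the latest state per aircraft. The producer, tick interval, and print-based "push" are stand-ins for the real WebSocket plumbing.

```python
# Minimal sketch: drain everything that arrived during a tick, keep only the
# latest state per aircraft, and push one consolidated frame per interval.
import asyncio
import random

async def producer(queue: asyncio.Queue):
    """Simulate a bursty stream of position/state updates."""
    for i in range(200):
        await queue.put({"aircraft": f"N{i % 5}AB", "seq": i})
        await asyncio.sleep(random.uniform(0.001, 0.02))

async def coalescing_pusher(queue: asyncio.Queue, interval: float = 0.5):
    """Count how many updates were superseded before they were ever rendered."""
    sent = superseded = 0
    for _ in range(5):  # a few ticks is enough for the sketch
        await asyncio.sleep(interval)
        latest = {}
        while not queue.empty():
            update = queue.get_nowait()
            if update["aircraft"] in latest:
                superseded += 1  # replaced before anyone saw it
            latest[update["aircraft"]] = update
        sent += len(latest)
        print(f"pushed {len(latest)} frames")  # stand-in for the real push
    print(f"sent={sent} superseded={superseded}")

async def main():
    queue = asyncio.Queue()
    await asyncio.gather(producer(queue), coalescing_pusher(queue))

if __name__ == "__main__":
    asyncio.run(main())
```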

Operational Metrics & Analytics

Active

Extracts signal from high-cardinality event streams: delay patterns, resource bottlenecks, and emerging anomalies.

Can statistical outlier filtering catch emerging problems before they cascade into operational failures?

Tests anomaly detection against historical data. Measures false positive rates and whether alerts arrive with enough lead time to matter.

Statistical Analysis · Anomaly Detection · SQL · Time Series
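
A minimal sketch of the outlier filter, using a rolling 3-sigma rule on a synthetic taxi-out delay series. The window size and threshold are illustrative, not tuned values from the experiment.

```python
# Minimal sketch: flag points that sit far outside the trailing window's
# distribution. The synthetic series below mimics steady ops plus one spike.
from collections import deque
from statistics import mean, stdev
import random

def rolling_outliers(series, window=30, threshold=3.0):
    """Return (index, value) pairs more than `threshold` standard deviations
    away from the mean of the trailing `window` observations."""
    history = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(series):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                flagged.append((i, value))
        history.append(value)
    return flagged

if __name__ == "__main__":
    random.seed(7)
    delays = [random.gauss(12, 2) for _ in range(180)]  # normal operations
    delays[120] = 45.0                                   # a cascading event
    print(rolling_outliers(delays))
```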

Schedule Impact Modeling

Active

Simulates how delays propagate through shared resources: gates, ground handlers, crew schedules.

Can stochastic simulation predict downstream impacts faster than human judgment under uncertainty?

Models non-linear interaction effects using Monte Carlo methods. Evaluates computational cost versus decision quality in time-critical scenarios.

Monte Carlo · Event Simulation · Stochastic Modeling · Optimization
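
A bare-bones sketch of the approach: sample turnaround durations for flights sharing one gate, propagate overruns downstream, and count how often later flights blow past a lateness threshold. The schedule, distribution, and threshold are hypothetical, not calibrated values.

```python
# Minimal sketch of Monte Carlo delay propagation through one shared gate,
# assuming lognormal-ish turnaround variability (median around 45 minutes).
import random

SCHEDULED = [0, 60, 120, 180]  # planned on-block times (minutes)

def one_run():
    """Roll turnaround durations and propagate any overrun to later flights."""
    gate_free_at = 0.0
    delays = []
    for planned_on_block in SCHEDULED:
        actual_on_block = max(planned_on_block, gate_free_at)
        turnaround = random.lognormvariate(3.8, 0.35)  # median ~45 min
        gate_free_at = actual_on_block + turnaround
        delays.append(actual_on_block - planned_on_block)
    return delays

def simulate(runs=10_000, late_threshold=15.0):
    """Estimate, per flight, the probability of an on-block delay past the threshold."""
    late_counts = [0] * len(SCHEDULED)
    for _ in range(runs):
        for i, delay in enumerate(one_run()):
            if delay > late_threshold:
                late_counts[i] += 1
    return [count / runs for count in late_counts]

if __name__ == "__main__":
    random.seed(42)
    for flight, p in enumerate(simulate()):
        print(f"flight {flight}: P(on-block delay > 15 min) = {p:.2%}")
```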

How the Experiments Are Built

These experiments span everything from low-level scripting to orchestration with LLMs in closed-loop setups. The stack evolved organically as I explored different problem spaces: Python for data ingestion and simulation, SQL for state management, REST APIs and message queues for integration, web frameworks for visualization.

Early on, I was just trying to figure out if I could parse FAA SWIM feeds or model gate conflicts. But as the experiments matured, the questions got better. Now I'm asking: what's the acceptable latency? Where do validation rules belong? When does automation need human oversight? You only learn to ask those questions by building systems and watching where they break.

Computational Model Selection

The point is not loyalty to a particular toolchain. It is mapping problem classes to the right computation model: deterministic code for hard rules, statistical models for patterns and optimization, and probabilistic systems like LLMs where language, ambiguity, or semi-structured inputs dominate.

Most airport operations problems reduce to state management, constraint satisfaction, and event-driven workflows. Core business rules and validation sit in traditional code and finite state machines. Machine learning is used for pattern recognition, anomaly detection, and parameter tuning under constraints.

LLMs are wired in as controlled components: parsing semi-structured inputs, assisting with schema inference, or driving Model Context Protocol style tool chains where the model can call tools, query state, and propose actions but cannot change anything without guardrails.
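
As a concrete illustration of that boundary, here's a sketch of the "propose but don't commit" pattern, with the model call stubbed out and a made-up rule set: the LLM proposes a gate reassignment, and deterministic checks decide whether it is ever applied.

```python
# Minimal sketch: the LLM (stubbed) proposes an action; hard rules gate any
# state change. Gate limits, aircraft spans, and rules here are invented.
from dataclasses import dataclass

GATE_MAX_SPAN_M = {"G7": 36.0, "G8": 52.0, "H1": 65.0}  # hypothetical limits

@dataclass(frozen=True)
class ProposedAction:
    flight: str
    aircraft_span_m: float
    new_gate: str

def llm_propose(prompt: str) -> ProposedAction:
    # Placeholder for a model / tool-chain call.
    return ProposedAction(flight="FDX88", aircraft_span_m=64.4, new_gate="G8")

def guardrail(action: ProposedAction, occupied: set) -> tuple:
    """Deterministic checks the proposal must pass before any state changes."""
    limit = GATE_MAX_SPAN_M.get(action.new_gate)
    if limit is None:
        return False, f"unknown gate {action.new_gate}"
    if action.aircraft_span_m > limit:
        return False, f"wingspan {action.aircraft_span_m} m exceeds {limit} m limit"
    if action.new_gate in occupied:
        return False, f"{action.new_gate} is occupied"
    return True, "ok"

if __name__ == "__main__":
    proposal = llm_propose("FDX88 needs a new stand")
    approved, reason = guardrail(proposal, occupied={"H1"})
    print(approved, reason)  # rejected: wingspan exceeds the G8 limit
```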

The Test Rig

Yes, I can't wait to get my hands on an NVIDIA DGX Spark. But for now, the test rig is a custom build: dual RTX 5090s, 128 GB of RAM, and a Threadripper PRO. This isn't about building enterprise infrastructure. It's about testing assumptions and proving out MVPs at home.

A 13B-parameter model tends to be more than enough once you factor in Model Context Protocol integration and proper fine-tuning. Working at this scale helps you understand what can actually be done and where hallucinations happen.

The real questions are about airport data and ingestion: Can you parse years of coordination logs into training data? Can you ingest ADS-B and MLAT feeds alongside SWIM data without introducing delays or parsing issues? What happens when your fine-tuned LLM references the "live feed" but one data source is delayed while the others aren't? Can you handle gate and runway assignments as they change in real time, tracking the drift between what was planned and what actually happened? If your ingestion pipeline works on a home workstation, you understand the scope of what's possible, and the boundaries where things break, before anyone talks about scaling up.
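
One piece of that puzzle, sketched in a few lines: per-source freshness tracking when merging feeds, so a downstream consumer (including a fine-tuned LLM) knows when the "live" picture is actually partial. The source names and staleness budgets are placeholders, not real SWIM or ADS-B parameters.

```python
# Minimal sketch: each source stamps its last message time; any source older
# than its budget marks the merged view as degraded rather than "live".
from datetime import datetime, timedelta, timezone

STALENESS_BUDGET = {
    "swim": timedelta(seconds=30),
    "adsb": timedelta(seconds=5),
    "mlat": timedelta(seconds=10),
}

EPOCH = datetime.min.replace(tzinfo=timezone.utc)  # "never seen" default

def stale_sources(last_seen: dict, now: datetime = None) -> list:
    """Return the feeds whose most recent message is older than their budget."""
    now = now or datetime.now(timezone.utc)
    return [name for name, budget in STALENESS_BUDGET.items()
            if now - last_seen.get(name, EPOCH) > budget]

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    last_seen = {
        "swim": now - timedelta(seconds=12),
        "adsb": now - timedelta(seconds=2),
        "mlat": now - timedelta(seconds=40),  # lagging source
    }
    lagging = stale_sources(last_seen, now)
    print("stale feeds:", lagging)
    if lagging:
        print("merged view flagged as degraded; don't present it as 'live'")
```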

Drawing the Boundaries

The interesting work is drawing the boundaries. Which paths must be fully deterministic? Where is ML actually adding signal instead of noise? Where does an LLM require an external validator or human in the loop before its output is allowed to matter?

Key Questions I'm Asking in 2026

As the industry pushes harder into automation and AI, these are the questions I'm wrestling with.

Do we need digital twins of everything?

The pitch is compelling: a complete virtual replica of your airport. But what problem does it actually solve? Is it operational improvement or just an expensive visualization tool?

Are LiDAR deployments actually delivering the value we expected?

Comprehensive sensor coverage sounds great until you see the price tag and maintenance burden. Are we getting the return we expected, or are we finding that better ML models trained on existing camera feeds could deliver 80% of the value at 20% of the cost? Sometimes the constraint isn't sensor coverage - it's knowing what questions to ask about ROI.

Are we replacing our most valuable asset with technology to cut costs?

Humans - experienced operators who understand context and edge cases - are being replaced to reduce headcount. But are we actually improving the passenger experience, or just making operations cheaper and more brittle?

How can we better communicate airport needs to our technology partners?

Vendor demos show polished interfaces and promise transformative results. But do they understand the operational reality? The radio coordination delays? The gate conflicts that cascade? The data quality issues in real-time feeds? When we say "we need real-time gate management," are we speaking the same language as the people building the systems?

Building these experiments helps clarify what we're actually asking for. Not "AI-powered optimization" as a buzzword, but specific technical requirements: latency tolerances, validation rules, failure modes, integration points. Better questions lead to better solutions. Maybe the gap isn't just in technology - it's in how we articulate what we need.

Sometimes the constraint isn't technology. It's that we haven't done the procedural work to make technology useful. And sometimes we're solving for cost reduction when we should be solving for operational quality and passenger experience. That's not a sexy vendor pitch, but it might be the real answer.

About

I'm fortunate enough to be a Virtual Ramp Control Manager at Orlando International Airport, overseeing about 18% of daily aircraft movements across roughly 30 airlines. But outside of work hours, I'm a tech enthusiast who learns by building things and breaking them.

My path spans 5 airports, including a NATO facility (that was fun), U.S. Navy Air Traffic Control, and 20+ years in aviation operations. Along the way: BA-IT, MBA in Strategic Leadership, FAA CTO, ACI and AAAE certifications (IACE, ASC), plus PM training and a lot of self-taught Python, machine learning, and systems design. Every cert, every course, every tutorial has been driven by the same impulse: "How does that work?"

We know the saying: if you've been to one airport, you've been to one airport. Every operation is different, which makes technology claims about "universal solutions" suspect. That's why I build experiments to understand what demands and constraints are actually universal versus what's just context-specific.

It's 2025. Everyone should have a way to promote themselves beyond the typical resume and LinkedIn profile. OC's Tech Lab is my way of breaking from the standard corporate narrative and showing what I actually do when I'm exploring technology. Not building products. Not consulting. Just understanding the pipelines behind the buzzwords. This portfolio is independent of my employer - it's my own sandbox for technical exploration and self-promotion on my own terms.

5 Airports (Including NATO)
20+ Years Aviation Experience
6 Active Research Experiments