A Diversion Decision, Two Ways
Inside one operational decision: AI as signal, authority preserved, decision captured.
Most aviation AI conversations stay above the operation. This one sits inside it.
It’s 1630Z on a Tuesday. An aircraft is at FL410 with a routine charter en route from BFI to SNA. The crew sees an unusual indication on the engine instruments. Not a hard failure. Not nothing. The PIC references the QRH, reduces power, and calls Mission Control.
Now the operation has a decision to make.
How that decision plays out is the difference between two operating models. The first is what most operators do today. The second is what bounded AI participation inside an articulated decision architecture looks like.
I want to walk through both.
The standard scenario
The pilot calls Mission Control with the indication and a QRH checklist that says land as soon as practical. The dispatcher flight following the trip sees the position north of Nevada and asks the pilot’s preference. The pilot suggests Oakland. Familiar airport, established AOG resources at a major business aviation field, negligible added flight time. The dispatcher checks OAK, confirms it is a legally and operationally suitable diversion airport, and tells the pilot they concur.
The pilot begins the diversion to OAK. Maintenance Control gets notified after the change is already in motion. Customer Service gets notified after that.
The aircraft lands safely. The first vendor that responds isn’t on the operator’s approved list. Maintenance Control spends the next two hours finding an approved vendor that can absorb the work today, and the soonest availability turns out to be the next morning, with techs traveling from SFO. The customer service team begins working customer recovery. They need an ARGUS Platinum or Wyvern Wingman-rated charter operator who can fly the customer the remaining leg to SNA. Most local aircraft are on other trips. They eventually find a recovery operator with availability the next day. The client elects to fly commercial instead and arrives at SNA seven hours after the original ETA.
The diversion was legal. OAK was suitable. The decision was made under time pressure with partial information by the people authorized to make it.
Nothing was wrong with this scenario. Operators run it every day. But almost everything good in it happened after the decision, as the team scrambled to recover from a choice made with only the information that was easy to access in the moment.
The architected scenario
Same flight, same indication, same call from the cockpit at 1630Z.
The pilot calls Mission Control with the indication and the QRH checklist that says land as soon as practical. The dispatcher flight following the trip takes the call. As the conversation begins, the system surfaces signals into a shared workspace without making any decision.
A list of suitable airports within range, sorted not just by distance but by total recovery picture. Each one annotated with which approved maintenance vendors operate there for this aircraft type, prior-use history with this operator, contact paths, service capability, and status fields. Each airport is also annotated with ARGUS Platinum or Wyvern Wingman-rated charter operators based in the area.
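To make the shape of that workspace concrete, here is a minimal sketch of what one surfaced option might look like as a data structure. Everything in it is hypothetical, field names included; nothing here is SayFlight’s actual schema. The point it illustrates is that the signal carries the whole recovery picture and stops short of a recommendation.

```typescript
// Hypothetical sketch of one surfaced diversion option.
// The system ranks and annotates; nothing in the structure
// marks an option as "the answer."
interface VendorSignal {
  vendor: string;
  approvedForType: boolean;    // on the operator's approved list for this aircraft type
  contact: string;
  earliestStart: string;       // e.g. "within 1 hour of arrival", pending human confirmation
}

interface RecoverySignal {
  operator: string;
  rating: "ARGUS Platinum" | "Wyvern Wingman";
  availability: string;        // "potentially available" until a human confirms it
}

interface DiversionOption {
  airport: string;             // e.g. "KRNO"
  distanceNm: number;
  addedFlightTimeMin: number;  // versus the filed route
  suitable: boolean;           // legally and operationally suitable for this aircraft
  maintenance: VendorSignal[]; // approved vendors on the field, with status
  recovery: RecoverySignal[];  // rated charter operators based in the area
  priorUse: string[];          // this operator's history at the field
}
```

Note what is absent: no recommendation field, no single score presented as a verdict. The ranking brings the constraint set into view; the humans weigh it.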
Maintenance Control sees the same workspace. Within ninety seconds, they confirm RNO has an approved vendor who can start the inspection within an hour of arrival.
Customer Service sees a Wyvern Wingman-rated operator based at RNO with a light jet and crew listed as potentially available to fly RNO to SNA at a reasonable one-way price, departing in roughly ninety minutes. They confirm with the recovery operator on a parallel thread.
The dispatcher clicks through to RNO. The system opens the operator’s approved weather source: current METAR and TAF for RNO and its alternates, current NOTAMs, TFRs, and the airport diagram, all loaded in the workspace. The dispatcher is reading the approved source documents, not an AI interpretation of them.
Within four minutes of the initial call, the team has reviewed three options together. OAK is one of them, with the pilot’s instinct attached. So is RNO, directly along the route, with approved maintenance confirmed and a recovery flight ready in two hours. The maintenance picture and recovery picture both favor RNO. Both airports are legally and operationally suitable.
The dispatcher recommends RNO. The PIC accepts the divert. The captured record reflects why: maintenance vendor approval status with confirmed availability, recovery operator rating and confirmation, customer onward routing, runway length, approach availability, weather, and the team coordination that produced the choice.
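A minimal sketch of what that captured record might contain, again with hypothetical names rather than a real schema:

```typescript
// Hypothetical sketch of a captured decision record.
interface DecisionRecord {
  decisionId: string;
  capturedAtZ: string;            // written as the decision happens, not reconstructed later
  decisionType: string;           // "diversion" in this scenario
  optionsReviewed: string[];      // ["KOAK", "KRNO", ...], with their annotations attached
  selected: string;               // "KRNO"
  rationale: string[];            // vendor approval and confirmed availability, recovery
                                  // operator rating and confirmation, customer onward
                                  // routing, runway length, approach availability, weather
  authority: {
    recommendedBy: string;        // dispatcher
    acceptedBy: string;           // PIC, whose final authority never moved
    vendorConfirmedBy: string;    // Maintenance Control
    recoveryConfirmedBy: string;  // Customer Service
  };
  aiParticipation: "signal-only"; // the only participation level exercised in this slice
}
```

Because the record is written at decision time, it can do two jobs at once: the audit trail someone asks for later, and the raw material the architecture surfaces the next time a similar decision comes up.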
The aircraft lands safely at RNO. The inspection starts within an hour of arrival. The customer transfers to the recovery aircraft and arrives at SNA three hours after the original ETA, instead of seven hours late on a commercial airline.
What got preserved
Authority did not move. Operational control responsibility remained with the operator. The PIC retained final authority for the safe conduct of the flight. Maintenance Control retained authority over vendor authorization. Customer Service retained authority over customer logistics.
The AI participated. It did not decide.
This matters because the moment AI starts deciding inside operational control, the operator has a regulatory exposure that no amount of accuracy mitigates. Licensed authority cannot be transferred to a system that cannot hold a certificate, cannot accept liability, and cannot hold operational control under the current regulatory structure.
Authorization architecture is not optional in regulated operations. It is the precondition for AI to participate at all.
What changed
Three things, in order of significance.
Decision quality improved because more dimensions of the decision became visible to the decision-makers in the moment. The dispatcher did not choose RNO because the system told them to. They chose it because they could see the full constraint set the standard scenario forces them to learn after the fact: maintenance vendor approval, recovery flight availability, comparable safety profile.
Time-to-good-decision compressed because the coordination collapsed. Three roles that normally inform each other sequentially worked in parallel inside the same workspace.
Compounding began. The captured rationale, with its full constraint set, becomes part of what the architecture surfaces for the next mechanical diversion in the same region. The reasoning the senior dispatcher just applied becomes available to whoever takes the call next week. The institutional knowledge does not walk out the door with the retiring dispatcher.
The architectural point
What you just read is one small slice of Operational Decision Architecture. There is more underneath. Authority distribution. Escalation paths. Decision logic. Governance layers. Capture mechanics. The slice above does not show any of that.
What it shows is the shape.
Bounded AI participation is an architecture that defines exactly where AI can participate, exactly where authority remains human, and exactly how the decision and its reasoning get captured so the operation can learn from itself.
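One way to read that definition: the boundary is a declared policy the system enforces, not a norm the team remembers. A minimal, hypothetical sketch of what declaring it might look like:

```typescript
// Hypothetical participation policy for one decision type.
// The structure is illustrative; the point is that the boundary
// is written down and enforced, not implied.
const diversionPolicy = {
  decisionType: "diversion",
  ai: {
    may: ["surface_options", "annotate_constraints", "rank_by_recovery_picture"],
    mayNot: ["recommend", "select", "notify_crew", "summarize_for_approval"],
  },
  humanAuthority: {
    recommend: "dispatcher",
    accept: "PIC",                // final authority, non-delegable
    vendorAuthorization: "maintenance_control",
    customerLogistics: "customer_service",
  },
  capture: {
    required: ["options_reviewed", "rationale", "authority_chain", "sources_reviewed"],
    writtenAt: "decision-time",
  },
} as const;
```

The mayNot list is doing the real work here. It is what keeps a system like this from drifting into producing recommendations the humans merely approve.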
The phrase that gets used in the industry is “human in the loop.” That phrase is doing too much work. There are operators today where humans are nominally in the loop, but the AI is producing summaries the human approves, decisions the human rubber-stamps, and audit trails that get reconstructed after the fact. The human is in the loop on paper. The decision happened somewhere else.
The architecture you just read is different. The human is the decision. The AI is signal. The capture is the loop.
Why this matters at scale
Multiply this across every diversion, every release decision, every weather hold, every MEL deferral, every maintenance call, every IROP recovery decision your operation makes in a year.
The operation becomes faster. Operators feel it within weeks. Coordination tightens. Decision quality improves because more dimensions of every decision become visible in the moment.
The operation becomes more defensible. When the FAA, the insurer, the customer, or the board asks how AI is participating in operational decisions, “we have a policy” is no longer the answer. The answer is the captured record of every decision, with its reasoning, its authority chain, and its AI participation level intact.
The operation becomes a learning system. The institutional knowledge of senior operators starts to compound across the architecture instead of evaporating when they retire. A three-year dispatcher operating inside a five-year-old architecture is supported by structure that reflects what the senior operators before them did, routed back into the decision flow.
One more thing
Most of the AI conversation in aviation right now is about the tools. Which vendor, which platform, which model.
That is the wrong layer to be debating. The tools come and go. The decisions do not.
The operators who get this right will define where AI is permitted to participate in their decision flows, what authority lives where, what gets captured, and what compounds, before the tools they pick start shaping decisions in ways nobody bothered to articulate.
Build the architecture. The tools fit into it.
Toby Benenson is the founder of SayFlight, which builds the architecture that governs AI participation in regulated operations without eroding human authority or judgment.


