
Compliance Checking

Compliance rules are forbidden sequences: access after revocation, approval without review, data export after access removal. Sifting patterns express these rules directly. When a pattern matches, that's a violation. When it doesn't, why_not explains how close the system came.

Time: ~15 minutes
Prerequisites: What is Sifting?

1. Access after revocation

A user accesses a resource after their access was revoked, with no re-authorization between.

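The full pattern for this rule (unauthorized_access) appears in the gap-analysis section below. As a sketch, the violating event data it runs against could be built with the same MemGraph calls; node IDs and timestamps here are illustrative, chosen to reproduce the result described next.

```rust
let mut graph = MemGraph::new();
// Alice: access revoked at t=1, accesses at t=3 with no re-authorization
// in between, so this is the violation.
graph.add_str("r1", "type", "revoke", 1);
graph.add_ref("r1", "user", "alice", 1);
graph.add_ref("r1", "resource", "db_prod", 1);
graph.add_str("a1", "type", "access", 3);
graph.add_ref("a1", "user", "alice", 3);
graph.add_ref("a1", "resource", "db_prod", 3);
// Bob: also revoked, but re-authorized at t=4 before accessing. Compliant.
graph.add_str("r2", "type", "revoke", 2);
graph.add_ref("r2", "user", "bob", 2);
graph.add_ref("r2", "resource", "db_prod", 2);
graph.add_str("m1", "type", "reauthorize", 4);
graph.add_ref("m1", "user", "bob", 4);
graph.add_ref("m1", "resource", "db_prod", 4);
graph.add_str("a2", "type", "access", 5);
graph.add_ref("a2", "user", "bob", 5);
graph.add_ref("a2", "resource", "db_prod", 5);
graph.set_time(10);
```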

Result: 1 match — Alice accessed db_prod after revocation with no re-authorization. Bob's access was revoked too, but he was re-authorized at time 4, so the negation prevents a match.

What to notice: The variables ?user and ?resource both join across stages AND the negation window. A re-authorization for a different user or different resource doesn't count — the negation must match the same entity pair.


2. Four-eyes principle violation

The same person both initiates and approves a transaction. The pattern completing IS the violation — no negation needed.

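A sketch of this pattern, using the PatternBuilder API shown in the gap-analysis section below. The attribute names person and txn are assumptions; adapt them to your schema.

```rust
let four_eyes = PatternBuilder::<String, MemValue>::new("four_eyes_violation")
    .stage("e1", |s| {
        s.edge("e1", "type".into(), MemValue::Str("initiate".into()))
            .edge_bind("e1", "person".into(), "person")
            .edge_bind("e1", "txn".into(), "txn")
    })
    .stage("e2", |s| {
        s.edge("e2", "type".into(), MemValue::Str("approve".into()))
            .edge_bind("e2", "person".into(), "person")
            .edge_bind("e2", "txn".into(), "txn")
    })
    // No negation: completing both stages with the same ?person and ?txn
    // bindings is itself the violation.
    .build();
```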

Result: 1 match — Alice initiated and approved txn_500. Bob initiated txn_501 but Carol approved it (different people), so dual control held.

What to notice: This is a conceptual inversion from narrative sifting. In narrative detection, a match means "something interesting happened." In compliance checking, a match means "a rule was broken." The mechanism is identical; the interpretation differs. The join on ?person enforces that the same actor performed both actions.


3. Data export without approval

Data is exported from a sensitive system without an approval for the same dataset. Model this as a single export stage guarded by a negation that looks for a matching approval: if no approval event exists for that dataset, the pattern matches and the export is flagged.

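A sketch of this pattern. The unless_after method is an assumption, inferred from the "unless after" negation described below and by analogy with the unless_between builder in the gap-analysis section; check the API reference for the exact name and signature.

```rust
let unapproved_export = PatternBuilder::<String, MemValue>::new("unapproved_export")
    .stage("e1", |s| {
        s.edge("e1", "type".into(), MemValue::Str("export".into()))
            .edge_bind("e1", "data".into(), "data")
    })
    // Assumed negation form: the pattern matches only if no approval event
    // for the same ?data binding is found.
    .unless_after("e1", |neg| {
        neg.edge("appr", "type".into(), MemValue::Str("approve".into()))
            .edge_bind("appr", "data".into(), "data")
    })
    .build();
```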

Result: 1 match — Alice exported customer_pii and no approval for that dataset exists anywhere. Bob exported financial_reports, which was approved at time 2.

What to notice: The unless after e1 checks for an approval event after the export. Combined with the graph having approvals before exports, this catches cases where no approval exists at all. The join on ?data ensures the approval covers the specific dataset — approving one dataset doesn't authorize exporting a different one.


Auditing with gap analysis

When a pattern does NOT match, that's a good thing — the system is compliant. But near-misses matter. Use why_not (gap analysis) to find events that almost violated a rule:

let violation_pattern = PatternBuilder::<String, MemValue>::new("unauthorized_access")
    .stage("e1", |s| {
        s.edge("e1", "type".into(), MemValue::Str("revoke".into()))
            .edge_bind("e1", "user".into(), "user")
            .edge_bind("e1", "resource".into(), "resource")
    })
    .stage("e2", |s| {
        s.edge("e2", "type".into(), MemValue::Str("access".into()))
            .edge_bind("e2", "user".into(), "user")
            .edge_bind("e2", "resource".into(), "resource")
    })
    .unless_between("e1", "e2", |neg| {
        neg.edge("mid", "type".into(), MemValue::Str("reauthorize".into()))
            .edge_bind("mid", "user".into(), "user")
            .edge_bind("mid", "resource".into(), "resource")
    })
    .build();

// Build a compliant graph — revoke then reauthorize then access.
let mut graph = MemGraph::new();
graph.add_str("e1", "type", "revoke", 1);
graph.add_ref("e1", "user", "alice", 1);
graph.add_ref("e1", "resource", "db_prod", 1);
graph.add_str("mid", "type", "reauthorize", 2);
graph.add_ref("mid", "user", "alice", 2);
graph.add_ref("mid", "resource", "db_prod", 2);
graph.add_str("e2", "type", "access", 3);
graph.add_ref("e2", "user", "alice", 3);
graph.add_ref("e2", "resource", "db_prod", 3);
graph.set_time(10);

let mut engine: SiftEngineFor<MemGraph> = SiftEngine::new();
engine.register(violation_pattern);

let matches = engine.evaluate(&graph);
if matches.is_empty() {
    // System is compliant. Check near-misses for each pattern:
    for pattern in engine.patterns() {
        let gap = gap_analysis(&graph, pattern);
        for stage in &gap.stages {
            match stage.status {
                StageStatus::Matched => {}
                StageStatus::Unmatched | StageStatus::PartiallyMatched { .. } => {
                    println!(
                        "Near-miss for '{}': stage '{}' — {:?}",
                        pattern.name, stage.anchor, stage.status
                    );
                    for clause in &stage.clauses {
                        println!(
                            "  clause: matched={}, reason={:?}",
                            clause.matched, clause.reason
                        );
                    }
                }
            }
        }
    }
}

A rule that reaches stage 2 of 3 before failing is a near-miss worth investigating — the system was one event away from a violation.

The pattern across all three examples

Pattern | What makes it a violation | Stages | Key mechanism
Unauthorized access | Access after revocation without re-auth | 2 + negation | Variable join on user AND resource
Four-eyes | Same person in both roles | 2, no negation | Match = violation (conceptual inversion)
Unapproved export | Export without matching approval | 1 + negation | unless after checks for missing approval

Mapping your data

Transaction and audit log entries map to fabula edges as follows:

Real-world field | Fabula edge
TransactionID or AuditEventID | source node
Action (initiate, approve, export) | label value
Actor, resource | target nodes (enables joins)
Timestamp | interval start

Each audit event becomes a set of edges sharing the source node. The actor and resource fields become target nodes, so patterns can join across events by the same person or touching the same resource.
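As a sketch, ingesting one audit record under this mapping could look like the following. The AuditRecord struct and its field names are hypothetical; the add_str/add_ref calls follow the MemGraph usage shown earlier.

```rust
struct AuditRecord {
    event_id: String,  // becomes the source node, e.g. "evt_1042"
    action: String,    // becomes the "type" label value
    actor: String,     // becomes a target node, enabling joins on ?user
    resource: String,  // becomes a target node, enabling joins on ?resource
    timestamp: u64,    // becomes the interval start
}

fn ingest(graph: &mut MemGraph, rec: &AuditRecord) {
    // One audit event becomes a set of edges sharing the source node.
    graph.add_str(&rec.event_id, "type", &rec.action, rec.timestamp);
    graph.add_ref(&rec.event_id, "user", &rec.actor, rec.timestamp);
    graph.add_ref(&rec.event_id, "resource", &rec.resource, rec.timestamp);
}
```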


Timestamp resolution

Fabula requires strict temporal ordering between stages. Audit log entries with identical timestamps cannot be placed in consecutive stages.

If your audit system batches events at the same second or millisecond, add sequence numbers or use batch evaluation (evaluate_pattern()), which sees all events simultaneously. See Thinking in Time for details.
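One way to add sequence numbers, as a sketch: derive a strictly increasing logical time by combining the shared batch timestamp with a per-batch counter. The record fields here are hypothetical, matching the AuditRecord shape above only by assumption.

```rust
// Events in `batch` share the same wall-clock second; spread them across
// distinct logical times so consecutive stages can still be strictly ordered.
for (seq, rec) in batch.iter().enumerate() {
    let logical_time = rec.timestamp * 1_000 + seq as u64;
    graph.add_str(&rec.event_id, "type", &rec.action, logical_time);
}
```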


How fabula compares

  • vs SIEM correlation rules: SIEM rules are time-windowed threshold alerts ("N events of type X within Y minutes"), with no structural graph joins, no variable-scoped negation, and no gap analysis for near-misses. Fabula patterns express entity-correlated forbidden sequences.
  • vs manual audit scripts: Hand-written queries against event logs are brittle and hard-coded, with no gap analysis to surface near-misses, no incremental mode for real-time monitoring, and no composition for building complex rules from reusable fragments.

Where to go next