Cookbook

This page contains five practical workflows that combine multiple MCP tools to solve real governance challenges. Each recipe shows the tool call sequence, expected inputs, and how to interpret the results.

These examples use SQL syntax for clarity. In this release, the supported agent workflow is Snowflake Intelligence through the core.agent_* wrappers; the canonical SQL tool sequences shown here remain the underlying source of truth.


Recipe 1: Governance-Aware AI Coding Assistant

Scenario: An AI agent is helping a developer write a data pipeline. Before generating SQL, the agent needs to understand what tables are available, what rules apply, and whether the generated query is compliant.

Tool Sequence

discover-context → get-decision-context → validate-query-context

Step 1: Discover available tables

SELECT METATATE_APP.CORE.DISCOVER_CONTEXT(
  OBJECT_CONSTRUCT(
    'database', 'ANALYTICS_DB',
    'schema', 'CORE'
  )
);

The agent now knows which tables exist, their sensitivity levels, and whether they contain PII. It uses this to decide which tables to reference.
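A minimal sketch of how an agent might act on the discovery result. The response shape here (a "tables" list with "name", "sensitivity", and "contains_pii" fields) is an assumption for illustration:

```python
import json

# Hypothetical DISCOVER_CONTEXT response, reduced to the fields used below.
response = json.loads("""{
  "tables": [
    {"name": "ANALYTICS_DB.CORE.CUSTOMERS", "sensitivity": "confidential", "contains_pii": true},
    {"name": "ANALYTICS_DB.CORE.ORDERS", "sensitivity": "internal", "contains_pii": false}
  ]
}""")

# Prefer non-PII tables when the task does not require personal data.
safe_tables = [t["name"] for t in response["tables"] if not t["contains_pii"]]
print(safe_tables)  # ['ANALYTICS_DB.CORE.ORDERS']
```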

Step 2: Get governance context for the target table

SELECT METATATE_APP.CORE.GET_DECISION_CONTEXT(
  OBJECT_CONSTRUCT('table_name', 'ANALYTICS_DB.CORE.CUSTOMERS')
);

The agent learns about policies, compliance requirements, retention rules, and business context. It can now generate SQL that respects these constraints (e.g., avoiding restricted columns, applying filters for data minimization).

Step 3: Validate the generated query

CALL METATATE_APP.CORE.VALIDATE_QUERY_CONTEXT(
  OBJECT_CONSTRUCT(
    'sql', 'SELECT customer_id, account_status, region FROM analytics_db.core.customers WHERE region = ''US''',
    'operation', 'read',
    'intended_use', 'Generate customer engagement metrics for Q1 report'
  )
);

If the validation passes, the query is compliant. If issues are found, the agent revises the query and validates again.
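The validate-and-revise loop can be sketched as follows. Here validate_query_context is a toy stand-in for calling CORE.VALIDATE_QUERY_CONTEXT, and the response fields ("status", "issues") are assumptions for illustration:

```python
def validate_query_context(sql: str) -> dict:
    # Toy validator: flag SELECT * as a data-minimization issue.
    if "SELECT *" in sql.upper():
        return {"status": "FAIL", "issues": ["avoid SELECT *; list only needed columns"]}
    return {"status": "PASS", "issues": []}

sql = "SELECT * FROM analytics_db.core.customers"
result = validate_query_context(sql)
if result["status"] != "PASS":
    # Revise: replace the wildcard with the columns the task actually needs.
    sql = "SELECT customer_id, account_status, region FROM analytics_db.core.customers"
    result = validate_query_context(sql)

print(result["status"])  # PASS
```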

Result: The developer gets a governance-compliant query without needing to manually review policy documentation.


Recipe 2: Pre-Query PII Audit

Scenario: Before an analyst runs a query against a customer table, an automated check verifies PII exposure and masking status.

Tool Sequence

inspect-data-meaning → (review PII columns) → authorize-use

Step 1: Inspect columns for PII

SELECT METATATE_APP.CORE.INSPECT_DATA_MEANING(
  OBJECT_CONSTRUCT('table_name', 'ANALYTICS_DB.CORE.CUSTOMERS')
);

Review the response for columns where is_pii is true. Check the masking object to confirm masking is enabled and the analyst's role is not in exempt_roles.
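This review can be automated. The column shape below (an "is_pii" flag plus a "masking" object with "enabled" and "exempt_roles") follows the fields named above but is otherwise an assumption for illustration:

```python
# Hypothetical INSPECT_DATA_MEANING columns, reduced to the fields checked here.
columns = [
    {"name": "CUSTOMER_ID", "is_pii": False},
    {"name": "EMAIL", "is_pii": True,
     "masking": {"enabled": True, "exempt_roles": ["PII_ADMIN"]}},
]

analyst_role = "DATA_ANALYST"

# A PII column is unprotected if masking is off, or if the analyst's role
# is exempt from masking and would see raw values.
unprotected = [
    c["name"] for c in columns
    if c["is_pii"] and (
        not c.get("masking", {}).get("enabled", False)
        or analyst_role in c.get("masking", {}).get("exempt_roles", [])
    )
]
print(unprotected)  # []: EMAIL is masked and DATA_ANALYST is not exempt
```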

Step 2: Authorize the specific use case

CALL METATATE_APP.CORE.AUTHORIZE_USE(
  OBJECT_CONSTRUCT(
    'table_name', 'ANALYTICS_DB.CORE.CUSTOMERS',
    'operation', 'read',
    'intended_use', 'Ad-hoc analysis of customer churn patterns',
    'actor_role', 'DATA_ANALYST',
    'columns', ARRAY_CONSTRUCT('CUSTOMER_ID', 'EMAIL', 'ACCOUNT_STATUS', 'LAST_LOGIN')
  )
);

If the decision is ALLOW or CONDITIONAL (with acceptable conditions like masking), the analyst proceeds. If DENY, the analyst knows exactly why and what to do next.
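A sketch of acting on the decision. The response fields ("decision", "conditions", "reason") and the "auto_satisfiable" flag are assumptions for illustration:

```python
def next_action(response: dict) -> str:
    decision = response["decision"]
    if decision == "ALLOW":
        return "proceed"
    if decision == "CONDITIONAL":
        # Proceed only if every condition can be satisfied automatically
        # (e.g. masking already in place); otherwise escalate to a human.
        if all(c.get("auto_satisfiable") for c in response["conditions"]):
            return "proceed"
        return "escalate"
    return "stop: " + response.get("reason", "denied")

print(next_action({"decision": "CONDITIONAL",
                   "conditions": [{"text": "apply masking", "auto_satisfiable": True}]}))
# proceed
```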

Result: PII exposure is checked before any data is touched. The audit trail captures the authorization decision for compliance.


Recipe 3: Authorization Check Before Data Export

Scenario: A data engineer wants to export customer data to an external CRM system. Before initiating the export, the pipeline checks whether the operation is authorized under current governance policies.

Tool Sequence

authorize-use → (conditional?) → explain-why

Step 1: Request authorization for export

CALL METATATE_APP.CORE.AUTHORIZE_USE(
  OBJECT_CONSTRUCT(
    'table_name', 'ANALYTICS_DB.CORE.CUSTOMERS',
    'operation', 'export',
    'intended_use', 'Sync active customer records to Salesforce for account management',
    'actor_role', 'DATA_ENGINEER',
    'columns', ARRAY_CONSTRUCT('CUSTOMER_ID', 'NAME', 'EMAIL', 'COMPANY', 'ACCOUNT_STATUS'),
    'destination', OBJECT_CONSTRUCT(
      'system', 'Salesforce',
      'jurisdiction', 'US'
    ),
    'jurisdiction', 'EU'
  )
);

Step 2: If CONDITIONAL, review conditions

The response might include conditions like:

  • "Exclude or mask EMAIL column before export"
  • "Log export event for GDPR compliance"
  • "Obtain data steward approval for cross-border transfer"
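The first condition above (exclude or mask EMAIL) could be satisfied in the pipeline before the export runs. A minimal sketch; the record layout and masking rule are hypothetical:

```python
def mask_email(email: str) -> str:
    # Keep the first character of the local part; replace the rest.
    local, _, domain = email.partition("@")
    if local and domain:
        return local[0] + "***@" + domain
    return "***"

record = {"CUSTOMER_ID": 42, "NAME": "Ada", "EMAIL": "ada@example.com"}
record["EMAIL"] = mask_email(record["EMAIL"])
print(record["EMAIL"])  # a***@example.com
```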

Step 3: Explain the decision for the audit log

CALL METATATE_APP.CORE.EXPLAIN_WHY(
  OBJECT_CONSTRUCT('decision_id', 'dec_xyz789')
);

The full trace shows which policies triggered each condition, giving the data engineer clear guidance on what to change. The snapshot_status field confirms whether policies have changed since the decision was made.

Result: The export pipeline has a documented authorization decision with full traceability. If conditions are met, the export proceeds. If denied, the reason is clear.


Recipe 4: Column Sensitivity Discovery Across a Database

Scenario: A security team wants a comprehensive map of sensitive columns across all governed tables in a database. This is useful for security audits, data classification reviews, or planning masking strategies.

Tool Sequence

discover-context → inspect-data-meaning (for each table)

Step 1: Discover all governed tables in the database

SELECT METATATE_APP.CORE.DISCOVER_CONTEXT(
  OBJECT_CONSTRUCT(
    'database', 'ANALYTICS_DB'
  )
);

Step 2: Inspect columns for each table

For each table returned in step 1, call inspect-data-meaning:

-- Table 1
SELECT METATATE_APP.CORE.INSPECT_DATA_MEANING(
  OBJECT_CONSTRUCT('table_name', 'ANALYTICS_DB.CORE.CUSTOMERS')
);

-- Table 2
SELECT METATATE_APP.CORE.INSPECT_DATA_MEANING(
  OBJECT_CONSTRUCT('table_name', 'ANALYTICS_DB.CORE.ORDERS')
);

-- Table 3
SELECT METATATE_APP.CORE.INSPECT_DATA_MEANING(
  OBJECT_CONSTRUCT('table_name', 'ANALYTICS_DB.CORE.TRANSACTIONS')
);
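For larger schemas, the per-table statements can be generated from the step 1 result rather than written by hand. A sketch using the table names from this example; the script only builds the statement text, it does not execute anything:

```python
# Table names as returned by DISCOVER_CONTEXT in step 1 (hardcoded here).
tables = [
    "ANALYTICS_DB.CORE.CUSTOMERS",
    "ANALYTICS_DB.CORE.ORDERS",
    "ANALYTICS_DB.CORE.TRANSACTIONS",
]

# Build one INSPECT_DATA_MEANING call per table, mirroring the SQL above.
stmts = [
    "SELECT METATATE_APP.CORE.INSPECT_DATA_MEANING("
    f"OBJECT_CONSTRUCT('table_name', '{t}'));"
    for t in tables
]
print(stmts[0])
```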

Building a Sensitivity Report

An AI agent can aggregate the results into a structured report:

Table        | Column     | Sensitivity  | PII | Masking
-------------|------------|--------------|-----|--------
CUSTOMERS    | EMAIL      | confidential | Yes | partial
CUSTOMERS    | SSN        | restricted   | Yes | hash
CUSTOMERS    | NAME       | confidential | Yes | redact
ORDERS       | ORDER_ID   | internal     | No  | none
TRANSACTIONS | AMOUNT     | internal     | No  | none
TRANSACTIONS | CARD_LAST4 | confidential | Yes | partial

Result: The security team has a complete column-level sensitivity map built from live governance data, not stale spreadsheets.


Recipe 5: Compliance Reporting

Scenario: A compliance officer needs to generate a report of all tables subject to GDPR, including their governance posture, PII exposure, and retention policies.

Tool Sequence

discover-context (with compliance filter) → get-decision-context (for each table)

Step 1: Find all GDPR-governed tables

SELECT METATATE_APP.CORE.DISCOVER_CONTEXT(
  OBJECT_CONSTRUCT(
    'compliance_any', ARRAY_CONSTRUCT('GDPR')
  )
);

Step 2: Get governance context for each GDPR table

SELECT METATATE_APP.CORE.GET_DECISION_CONTEXT(
  OBJECT_CONSTRUCT('table_name', 'ANALYTICS_DB.CORE.CUSTOMERS')
);

From each response, extract:

  • Compliance: compliance.regulations and compliance.certifications
  • Retention: retention.period, retention.trigger, retention.action
  • Policy coverage: policy_summary.total and policy_summary.enforceable
  • Business ownership: business_context.owner and business_context.steward
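The extraction can be sketched as below. The nested shape follows the field paths listed above, but the exact response structure is an assumption for illustration:

```python
# Hypothetical GET_DECISION_CONTEXT response for one table.
response = {
    "compliance": {"regulations": ["GDPR", "CCPA"], "certifications": []},
    "retention": {"period": "7 years", "trigger": "account_closed", "action": "archive"},
    "policy_summary": {"total": 3, "enforceable": 2},
    "business_context": {"owner": "data_engineering", "steward": "jane.doe@co.com"},
}

# One report row per table, in the shape of the compliance report below.
row = {
    "regulations": ", ".join(response["compliance"]["regulations"]),
    "policies": f'{response["policy_summary"]["total"]} ({response["policy_summary"]["enforceable"]} enforced)',
    "retention": f'{response["retention"]["period"]} / {response["retention"]["action"]}',
    "owner": response["business_context"]["owner"],
    "steward": response["business_context"]["steward"],
}
print(row["policies"])  # 3 (2 enforced)
```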

Building the Compliance Report

An AI agent can compile this into a structured compliance report:

Table       | Regulations | Policies (Enforced) | Retention          | PII Columns | Owner            | Steward
------------|-------------|---------------------|--------------------|-------------|------------------|-------------------
CUSTOMERS   | GDPR, CCPA  | 3 (2 enforced)      | 7 years / archive  | 5           | data_engineering | jane.doe@co.com
USER_EVENTS | GDPR        | 2 (2 enforced)      | 2 years / delete   | 2           | analytics        | bob.smith@co.com
CONSENT_LOG | GDPR        | 1 (1 enforced)      | 10 years / archive | 1           | legal            | alice.jones@co.com

Optional: Deep-Dive into Column-Level PII

For tables with high PII column counts, follow up with inspect-data-meaning to get the full column breakdown:

SELECT METATATE_APP.CORE.INSPECT_DATA_MEANING(
  OBJECT_CONSTRUCT('table_name', 'ANALYTICS_DB.CORE.CUSTOMERS')
);

Result: The compliance officer has an up-to-date, policy-backed GDPR report built from live governance metadata rather than manual documentation.


Tips for Building Workflows

  1. Start broad, then narrow. Use discover-context to find relevant tables, then drill into specifics with get-decision-context and inspect-data-meaning.

  2. Always validate before executing. Call validate-query-context before running any AI-generated SQL to catch governance issues early.

  3. Capture decision IDs. When using authorize-use, store the decision_id for audit purposes. Use explain-why to retrieve the full trace if questions arise later.

  4. Use intended_use descriptively. The intended_use field in authorize-use and validate-query-context is evaluated against usage rules. Be specific (e.g., "Quarterly customer churn analysis for the retention team" rather than "analytics").

  5. Check snapshot_status. When explaining a historical decision, check the snapshot_status field. If it shows stale, the governance policies have changed since the decision was made -- consider re-evaluating.
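Tip 5 can be wired into an automated check. The snapshot_status field name comes from the text above; the rest of the response shape is illustrative:

```python
def needs_reevaluation(explain_response: dict) -> bool:
    # "stale" means governance policies changed after the decision was recorded.
    return explain_response.get("snapshot_status") == "stale"

print(needs_reevaluation({"decision_id": "dec_xyz789", "snapshot_status": "stale"}))  # True
```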