Use Case

Secure OpenAI Function Calling with Identity & Permission Control

OpenAI function calling lets your LLM trigger real-world actions. KYA adds the missing security layer: cryptographic agent identity, per-function policies, spend limits, and a tamper-proof audit log.

Why OpenAI function calling needs a permission layer

When your LLM calls a function, it's making a real-world decision on your behalf. The function might charge a customer, send an email, or modify a database. OpenAI's API has no way to enforce a rule like "this function can only be called with amounts under €100" — that enforcement has to live outside the LLM.
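To see why, consider the shape of a tool call as the Chat Completions API returns it. The function name and the JSON argument string are both written by the model; the sketch below models one as a plain dict (the `create_charge` name and amounts are illustrative), and nothing in the API has validated it before your code executes it.

```python
import json

# Shape of a single tool call (message.tool_calls[i].function),
# modeled here as a plain dict for illustration.
tool_call = {
    "function": {
        "name": "create_charge",                              # chosen by the model
        "arguments": '{"amount": 5000, "currency": "EUR"}',   # model-written JSON
    }
}

name = tool_call["function"]["name"]
arguments = json.loads(tool_call["function"]["arguments"])

# Nothing so far has checked whether a €5000 charge is allowed —
# that check is your responsibility before execution.
print(name, arguments["amount"])  # -> create_charge 5000
```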

The KYA verify pattern for function calls

import openai
from kya import KYAClient

kya = KYAClient(api_key="kya_...")

def execute_function_call(agent_id, function_name, arguments):
    # Verify before execution
    result = kya.verify(
        agent_id=agent_id,
        action=function_name,
        payload=arguments,
    )

    if result.decision != "ALLOW":
        return {"error": f"Denied: {result.reason_code}"}

    # Execute the actual function
    # (FUNCTIONS maps each tool name to its implementation)
    return FUNCTIONS[function_name](**arguments)
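The `FUNCTIONS` registry referenced above can be a plain dict mapping tool names to implementations. A minimal sketch (the `create_charge` function and its return value are illustrative, not part of KYA); note that the model delivers arguments as a JSON string, which must be parsed before being splatted into the call:

```python
import json

def create_charge(amount, currency):
    # Placeholder implementation for illustration only.
    return {"status": "charged", "amount": amount, "currency": currency}

# Registry of tool implementations, keyed by function name.
FUNCTIONS = {"create_charge": create_charge}

# Arguments arrive from the model as a JSON string.
raw_arguments = '{"amount": 50, "currency": "EUR"}'
result = FUNCTIONS["create_charge"](**json.loads(raw_arguments))
print(result["status"])  # -> charged
```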

Policy example for payment functions

{
  "agent": "agt_checkout_assistant",
  "rules": {
    "allowed_tools": ["create_charge", "issue_refund"],
    "spend_limits": {
      "max_per_tx": 200,
      "max_per_day": 2000
    },
    "deny_if": {
      "currency": ["not in", ["EUR", "USD"]]
    }
  }
}
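To make the rule semantics concrete, here is a purely local sketch of how rules like these could be evaluated. This is illustrative only — KYA enforces policies server-side, and the reason codes below (`TOOL_NOT_ALLOWED`, `PER_TX_LIMIT`, etc.) are invented for the example, not KYA's actual codes:

```python
def evaluate(policy, action, payload, spent_today=0):
    """Illustrative local evaluation of a policy like the JSON above."""
    rules = policy["rules"]
    if action not in rules["allowed_tools"]:
        return ("DENY", "TOOL_NOT_ALLOWED")
    limits = rules.get("spend_limits", {})
    amount = payload.get("amount", 0)
    if amount > limits.get("max_per_tx", float("inf")):
        return ("DENY", "PER_TX_LIMIT")
    if spent_today + amount > limits.get("max_per_day", float("inf")):
        return ("DENY", "DAILY_LIMIT")
    # deny_if rules: deny when the field's value is NOT in the allowed set
    for field, (op, values) in rules.get("deny_if", {}).items():
        if op == "not in" and payload.get(field) not in values:
            return ("DENY", f"DENY_IF_{field.upper()}")
    return ("ALLOW", None)

policy = {
    "agent": "agt_checkout_assistant",
    "rules": {
        "allowed_tools": ["create_charge", "issue_refund"],
        "spend_limits": {"max_per_tx": 200, "max_per_day": 2000},
        "deny_if": {"currency": ["not in", ["EUR", "USD"]]},
    },
}

print(evaluate(policy, "create_charge", {"amount": 50, "currency": "EUR"}))   # allowed
print(evaluate(policy, "create_charge", {"amount": 500, "currency": "EUR"}))  # denied: per-tx limit
print(evaluate(policy, "delete_all_records", {}))                             # denied: tool not allowed
```

The last call shows the deny-by-default behavior the next section relies on: a tool that isn't in `allowed_tools` is refused regardless of what the model was convinced to do.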

Prompt injection protection

If an attacker injects a prompt that convinces your LLM to call `delete_all_records`, KYA will deny it — because that function isn't in the agent's allowed_tools list. Policy enforcement happens server-side, outside the LLM's control.

Add KYA to your agent

Get identity, permissions, and audit logs in under 5 minutes.