# Common Patterns

Real-world examples of how to use Aptly in production applications.

## Customer Support Chatbot

A chatbot that handles customer inquiries may receive PII like names, emails, and phone numbers. Aptly ensures this data never reaches your LLM provider.

Example

```python
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful support agent."},
        {"role": "user", "content": user_message}
    ],
    user=user_id,  # Track which end-user made the request
    extra_body={
        "api_keys": {"openai": openai_key},
        "redact_response": True  # Also redact PII from the LLM's response
    }
)

# Aptly automatically:
# - Redacts PII from user_message before sending to OpenAI
# - Logs the request with user_id for tracking
# - Optionally redacts PII from the response
```

Best practice: pass the `user` parameter to track which end-user made each request. This value appears in audit logs and analytics.

## Document Summarization

Summarizing legal documents, contracts, or medical records that contain names, addresses, and other PII.

Example

```python
# Read a document containing PII
with open("contract.txt") as f:
    document = f.read()

response = client.chat.completions.create(
    model="claude-3-5-sonnet-20241022",
    messages=[{
        "role": "user",
        "content": "Summarize this contract: " + document
    }],
    extra_body={
        "api_keys": {"anthropic": anthropic_key}
    }
)

summary = response.choices[0].message.content

# The summary will reference "PERSON_A" instead of actual names;
# the original PII never left your infrastructure.
```

Because Aptly uses mask mode by default, the LLM can still understand relationships while your compliance team knows no real names were sent.
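The key property of masking is that each entity maps to a consistent placeholder, so repeated mentions of the same person stay linked. A minimal sketch of the idea (an illustration only, not Aptly's actual implementation):

```python
def mask_names(text, names):
    """Replace each known name with a stable placeholder (PERSON_A, PERSON_B, ...)."""
    mapping = {}
    for i, name in enumerate(names):
        placeholder = f"PERSON_{chr(ord('A') + i)}"
        mapping[name] = placeholder
        text = text.replace(name, placeholder)
    return text, mapping

masked, mapping = mask_names(
    "Alice hired Bob. Alice signs first.", ["Alice", "Bob"]
)
# masked == "PERSON_A hired PERSON_B. PERSON_A signs first."
```

Because "Alice" becomes `PERSON_A` everywhere, the model can still tell that the same party hired Bob and signs first.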

## Streaming Responses

Stream LLM responses while still protecting PII.

Example

```python
stream = client.chat.completions.create(
    model="gpt-4",
    messages=messages,
    stream=True,
    extra_body={"api_keys": {"openai": openai_key}}
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

    # The final chunk includes Aptly metadata
    if hasattr(chunk, 'aptly'):
        print("PII detected:", chunk.aptly['pii_detected'])
        print("Audit log:", chunk.aptly['audit_log_id'])
```

PII is still redacted before streaming begins. The audit log is written when the stream completes.
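If you need the full response text as well as the metadata, you can accumulate chunks as you stream. A sketch assuming the chunk shape shown above, demonstrated with simulated chunks (the metadata values are made up for illustration):

```python
from types import SimpleNamespace

def collect_stream(stream):
    """Accumulate streamed content and capture Aptly metadata from the final chunk."""
    parts, metadata = [], None
    for chunk in stream:
        if chunk.choices[0].delta.content:
            parts.append(chunk.choices[0].delta.content)
        if hasattr(chunk, "aptly"):
            metadata = chunk.aptly
    return "".join(parts), metadata

# Simulated chunks with the shape shown above
chunks = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content="Hel"))]),
    SimpleNamespace(
        choices=[SimpleNamespace(delta=SimpleNamespace(content="lo"))],
        aptly={"pii_detected": True, "audit_log_id": "log_123"},
    ),
]
text, meta = collect_stream(chunks)
# text == "Hello"; meta["audit_log_id"] == "log_123"
```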

## Analytics & Cost Tracking

Track LLM usage, costs, and PII detection rates across your application.

Example

```python
import requests

# Get usage summary for the last 30 days
usage = requests.get(
    "https://api-aptly.nsquaredlabs.com/v1/analytics/usage",
    headers={"Authorization": "Bearer " + aptly_key},
    params={"granularity": "day"}
).json()

print("Total requests:", usage['summary']['total_requests'])
print("Total cost:", usage['summary']['total_cost_usd'])

# Get PII detection stats
pii_stats = requests.get(
    "https://api-aptly.nsquaredlabs.com/v1/analytics/pii",
    headers={"Authorization": "Bearer " + aptly_key}
).json()

print("Requests with PII:", pii_stats['summary']['requests_with_input_pii'])
print("PII detection rate:", pii_stats['summary']['input_pii_rate'])

Use the analytics endpoints to understand your LLM usage patterns and demonstrate compliance to auditors.
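For a quick compliance snapshot you can combine the two responses into one report. A sketch using only the summary fields shown above (the exact response schema beyond those fields is an assumption):

```python
def compliance_snapshot(usage_summary, pii_summary):
    """Combine usage and PII summaries into a single report dict."""
    total = usage_summary["total_requests"]
    with_pii = pii_summary["requests_with_input_pii"]
    return {
        "total_requests": total,
        "requests_with_pii": with_pii,
        "pii_rate": round(with_pii / total, 4) if total else 0.0,
        "total_cost_usd": usage_summary["total_cost_usd"],
    }

# Example values for illustration
report = compliance_snapshot(
    {"total_requests": 1000, "total_cost_usd": 42.50},
    {"requests_with_input_pii": 125, "input_pii_rate": 0.125},
)
# report["pii_rate"] == 0.125
```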