Using Collect
ThirdLaw Collect enables high-throughput, low-latency collection from distributed environments, ensuring real-time data capture without compromising application performance. It minimizes overhead at collection points and ensures data integrity as it is routed to ThirdLaw storage or further downstream to Destination Integrations.
Flexible Integration Points
ThirdLaw Collect seamlessly integrates into:
- Gateways that receive calls from applications before they reach LLMs. Once installed on a supported gateway, the ThirdLaw agent can collect prompts and responses from the LLM interactions. ThirdLaw supports the following gateways:
- Kong Gateway via a Kong Plugin that connects to a reverse proxy
- NGINX via an NGINX Plugin
- Envoy via a Plugin
- Agent OS Environments: ThirdLaw integrates into orchestration frameworks such as LangChain as an LLM agent to collect cross-LLM communication and track details within agent execution. Other libraries, such as AutoGen, will be supported in the future.
- SDK- or OpenTelemetry-instrumented applications: capture the full body and headers of requests to LLMs. ThirdLaw provides wrappers as an available integration option. A one-time setup with just a few lines of code changes per application is required to integrate with ThirdLaw using the language SDKs. All data classes are captured, along with user-defined metadata, interventions, transformations, and user identity. Python and TypeScript are the initial supported languages, with Java planned next.
- REST API: Customers can also send data to ThirdLaw through a REST API. As with the gateway-centric approach, prompts, responses, sessions, and agent traces are captured for further processing. Active interventions, prompt transformations, user-defined metadata, and user identity are also supported. Some sources that feed ThirdLaw via the API may be less real-time, such as audit logs from Anthropic Claude Enterprise and OpenAI Compliance Logs.
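To make the REST API option concrete, here is a minimal sketch of sending one prompt/response event. The endpoint URL, field names, and auth header are illustrative assumptions, not the actual ThirdLaw API schema; consult the ThirdLaw API reference for the real contract.

```python
import json
import urllib.request

# NOTE: endpoint, payload fields, and auth scheme below are hypothetical
# placeholders -- replace them with values from the ThirdLaw API reference.
THIRDLAW_API_URL = "https://collect.example-thirdlaw.com/v1/events"


def build_event(prompt: str, response: str, user_id: str, metadata: dict) -> dict:
    """Assemble a single prompt/response event with user identity and metadata."""
    return {
        "prompt": prompt,
        "response": response,
        "user_id": user_id,
        "metadata": metadata,
    }


def send_event(event: dict, api_key: str) -> None:
    """POST one event to the (hypothetical) ThirdLaw Collect REST endpoint."""
    req = urllib.request.Request(
        THIRDLAW_API_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()


if __name__ == "__main__":
    event = build_event(
        prompt="What is our refund policy?",
        response="Refunds are available within 30 days.",
        user_id="user-123",
        metadata={"app": "support-bot"},
    )
    send_event(event, api_key="YOUR_API_KEY")
```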
Configurations
- Envoy Gateway
- SDK instrumentation
How to Install
1. Download the Plugin
The thirdlaw_filter.wasm file can be downloaded directly from the ThirdLaw releases page. To do so:
Navigate to the ThirdLaw release page.
Find the latest release and download the thirdlaw-envoy-filter.zip file from the assets section.
thirdlaw-envoy-filter.zip contains the thirdlaw_filter.wasm and thirdlaw_filter.meta.json files. Only
thirdlaw_filter.wasm is required to install the ThirdLaw Envoy Gateway plugin.
2. Load the Plugin into your Envoy Proxy
Transfer the downloaded thirdlaw_filter.wasm file to the Envoy proxy server.
Place the wasm file in an appropriate directory (for example, /etc/envoy/proxy-wasm-plugins/).
Ensure the Envoy proxy has read access to the wasm file.
3. Configure Envoy
Update your Envoy configuration (envoy.yaml) to use the ThirdLaw Envoy plugin. Add the http_filters and clusters
sections shown below, and update the filename path in the vm_config section to match the location where you placed
the thirdlaw_filter.wasm file.
http_filters:
# ... other filters ...
- name: envoy.filters.http.wasm
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
    config:
      name: "thirdlaw_api"
      root_id: "thirdlaw_root_id"
      configuration:
        "@type": "type.googleapis.com/google.protobuf.StringValue"
        value: |
          {
            "debug": true,
            "upstream": "thirdlaw_service",
            "base_uri": "thirdlaw_service:80",
            "batch_max_wait": 1000
          }
      vm_config:
        runtime: "envoy.wasm.runtime.v8"
        vm_id: "thirdlaw_vm"
        code:
          local:
            filename: "/etc/envoy/proxy-wasm-plugins/thirdlaw_filter.wasm"
# ... other filters, ending with the router
- name: envoy.filters.http.router
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
clusters:
# ... other clusters ...
- name: thirdlaw_service
  type: LOGICAL_DNS
  load_assignment:
    cluster_name: thirdlaw_service_cluster
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: <dns-name for thirdlaw-router>
              port_value: 443
  transport_socket:
    name: envoy.transport_sockets.tls
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
4. Restart Envoy
After saving the updated configuration file, restart Envoy to apply the changes. Check Envoy's log to ensure that there are no errors during startup.
5. Test
Make a few LLM requests that pass through the Envoy proxy. These calls should now be logged in ThirdLaw Collect.
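One way to exercise the proxy for this test is a small script that sends an OpenAI-style chat completion request through Envoy. The listener address, path, model name, and API key below are assumptions; adjust them to match your Envoy listener and upstream LLM.

```python
import json
import urllib.request

# Assumption: your Envoy proxy listens on localhost:10000 and routes
# OpenAI-style chat completion requests to the upstream LLM. Adjust the
# host, path, model, and API key for your environment.
PROXY_URL = "http://localhost:10000/v1/chat/completions"


def build_chat_request(model: str, user_message: str) -> dict:
    """Build a minimal OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }


def send_through_proxy(payload: dict, api_key: str) -> dict:
    """POST the payload through the Envoy proxy and return the parsed response."""
    req = urllib.request.Request(
        PROXY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    payload = build_chat_request("gpt-4o-mini", "Hello from the Envoy test!")
    print(send_through_proxy(payload, api_key="YOUR_API_KEY"))
```

After a few such calls, the prompts and responses should appear in ThirdLaw Collect.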
How to Use
Capturing traffic
The ThirdLaw Envoy plugin captures LLM traffic from Envoy and logs it to ThirdLaw Collect automatically whenever Envoy routes traffic through the plugin. Envoy's routing configuration is detailed and robust, so you can apply the ThirdLaw plugin to all traffic or only a subset. For more information, please refer to the Envoy Request Lifecycle Guide.
Configuration Options
These configuration options are specified as JSON in the configuration section of the http_filters in your envoy.yaml file.
| Option | Type | Default | Description |
|---|---|---|---|
| batch_max_size | Integer | 100 | Optional. The maximum batch size of events to be sent to ThirdLaw. |
| batch_max_wait | Integer | 2000 | Optional. The maximum wait time in milliseconds before a batch is sent to ThirdLaw, regardless of the batch size. |
| upstream | String | "thirdlaw_api" | Optional. The upstream cluster that points to ThirdLaw Collect. |
| connection_timeout | Integer | 5000 | Optional. Default connection timeout in milliseconds for ThirdLaw Collect. |
Example
configuration:
  "@type": "type.googleapis.com/google.protobuf.StringValue"
  value: |
    {
      "batch_max_size": 100,
      "batch_max_wait": 5,
      "upstream": "custom_envoy_cluster_naming_scheme_thirdlaw"
    }
Updating the Configuration
Updating the envoy.yaml configuration file as shown above and restarting Envoy is sufficient to update your ThirdLaw WASM plugin configuration. Envoy also supports a variety of configuration mechanisms, including hot reloading of configuration. For more information, please refer to the Envoy Configuration Documentation.
Setting up OpenTelemetry for ThirdLaw is a low-code process that takes only a few steps.
Instrument the application via one of the following:
- OpenTelemetry Auto Instrumentation
- OpenTelemetry SDK
OpenTelemetry Auto Instrumentation
Auto instrumentation attaches a Python agent to a Python application; the agent dynamically instruments the application to capture telemetry, so little code needs to change. See the OpenTelemetry documentation for details:
https://opentelemetry.io/docs/languages/python/automatic/
Install ThirdLaw-Compatible OpenTelemetry Instrumentation
Install the following instrumentation libraries in your OpenTelemetry auto instrumentation setup, depending on your downstream LLM provider.
| LLM Provider | Pip Command | Poetry Command |
|---|---|---|
| OpenAI | pip install opentelemetry-instrumentation-openai | poetry add opentelemetry-instrumentation-openai |
| Anthropic | pip install opentelemetry-instrumentation-anthropic | poetry add opentelemetry-instrumentation-anthropic |
| Mistral AI | pip install opentelemetry-instrumentation-mistralai | poetry add opentelemetry-instrumentation-mistralai |
| VertexAI | pip install opentelemetry-instrumentation-vertexai | poetry add opentelemetry-instrumentation-vertexai |
| Google Generative AI (Gemini) | pip install opentelemetry-instrumentation-google-generativeai | poetry add opentelemetry-instrumentation-google-generativeai |
| AWS Bedrock | pip install opentelemetry-instrumentation-bedrock | poetry add opentelemetry-instrumentation-bedrock |
The following agent libraries are also supported:
| LLM Agent Framework | Pip Command | Poetry Command |
|---|---|---|
| Langchain | pip install opentelemetry-instrumentation-langchain | poetry add opentelemetry-instrumentation-langchain |
| LlamaIndex | pip install opentelemetry-instrumentation-llamaindex | poetry add opentelemetry-instrumentation-llamaindex |
You may install more than one instrumentation library at the same time.
Afterwards, make sure the auto instrumentation packages are installed:
pip install opentelemetry-distro opentelemetry-exporter-otlp
opentelemetry-bootstrap -a install
Finally, modify your run command to use the Auto Instrumentation agent.
opentelemetry-instrument --exporter_otlp_traces_endpoint http://<OTEL_COLLECTOR>:4317 \
python3 existingApplication.py
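For the "OpenTelemetry SDK" option listed earlier, instrumentation can also be wired up manually in code. This is a minimal sketch, assuming the opentelemetry-sdk and opentelemetry-exporter-otlp packages are installed and a collector is reachable at the placeholder address otel-collector:4317; the service name and span attributes are illustrative.

```python
# Manual OpenTelemetry SDK setup that exports spans to an OTLP collector.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Identify this application in the telemetry it emits.
provider = TracerProvider(
    resource=Resource.create({"service.name": "existing-application"})
)
# Batch spans and ship them to the collector over OTLP/gRPC (placeholder address).
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector:4317"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# Wrap an LLM call in a span; attributes travel with the trace downstream.
with tracer.start_as_current_span("llm.chat") as span:
    span.set_attribute("llm.vendor", "openai")
    # ... make the LLM call here ...
```

The installed instrumentation libraries from the tables above attach their spans to whichever tracer provider is active, so this setup and auto instrumentation feed the same pipeline.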