The deepslate-pipecat package provides an LLMService implementation for the Pipecat framework, enabling seamless integration with Deepslate’s unified voice AI infrastructure.
This plugin is in early development. We welcome contributions! See the GitHub repository to get involved.

Prerequisites

  • A Deepslate account with API credentials
  • Python 3.11+
  • A Pipecat-compatible transport (e.g. Daily.co, Twilio, generic WebSocket)
  • (Optional) ElevenLabs API key for server-side TTS

Installation

pip install git+https://github.com/deepslate-labs/deepslate-pipecat.git

Environment Variables

Set up your credentials as environment variables:
| Variable | Required | Description |
| --- | --- | --- |
| DEEPSLATE_VENDOR_ID | Yes | Your Deepslate vendor ID |
| DEEPSLATE_ORGANIZATION_ID | Yes | Your Deepslate organization ID |
| DEEPSLATE_API_KEY | Yes | Your Deepslate API key |
| ELEVENLABS_API_KEY | No | ElevenLabs API key for server-side TTS |
| ELEVENLABS_VOICE_ID | No | ElevenLabs voice ID |
| ELEVENLABS_MODEL_ID | No | ElevenLabs model (e.g., eleven_turbo_v2) |
Never expose your Deepslate or ElevenLabs API keys to clients. This plugin is for server-side use only.
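
Since all three Deepslate variables are required, it can help to fail fast at startup rather than at connection time. A small helper sketch (plain Python, not part of the plugin):

```python
import os

REQUIRED_VARS = ("DEEPSLATE_VENDOR_ID", "DEEPSLATE_ORGANIZATION_ID", "DEEPSLATE_API_KEY")

def missing_credentials(env=None):
    """Return the names of required Deepslate variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

Call this before constructing the service and exit with a clear message if the returned list is non-empty.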

Quick Start

import asyncio
import os
import sys

import aiohttp
from dotenv import load_dotenv
from loguru import logger

from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineParams, PipelineTask
from pipecat.transports.daily.transport import DailyParams, DailyTransport

from deepslate.pipecat import DeepslateOptions, DeepslateRealtimeLLMService, ElevenLabsTtsConfig

load_dotenv()

async def main():
    # Fetch a Daily meeting token
    async with aiohttp.ClientSession() as session:
        room_name = os.environ["DAILY_ROOM_URL"].split("/")[-1]
        async with session.post(
            "https://api.daily.co/v1/meeting-tokens",
            headers={"Authorization": f"Bearer {os.environ['DAILY_API_KEY']}"},
            json={"properties": {"room_name": room_name}},
        ) as r:
            token = (await r.json())["token"]

    transport = DailyTransport(
        room_url=os.environ["DAILY_ROOM_URL"],
        token=token,
        bot_name="Deepslate Bot",
        params=DailyParams(
            audio_in_enabled=True,
            audio_out_enabled=True,
            vad_enabled=False,  # VAD is handled server-side by Deepslate
        ),
    )

    llm = DeepslateRealtimeLLMService(
        options=DeepslateOptions.from_env(
            system_prompt="You are a friendly and helpful AI assistant."
        ),
        tts_config=ElevenLabsTtsConfig.from_env(),
    )

    pipeline = Pipeline([transport.input(), llm, transport.output()])
    task = PipelineTask(pipeline, params=PipelineParams(allow_interruptions=True))

    @transport.event_handler("on_participant_left")
    async def on_participant_left(transport, participant, reason):
        await task.cancel()

    await PipelineRunner().run(task)

if __name__ == "__main__":
    asyncio.run(main())

Configuration Reference

DeepslateOptions is the main configuration class for connecting to the Deepslate API. Use DeepslateOptions.from_env() to load credentials from environment variables, with optional overrides.
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| vendor_id | str | env: DEEPSLATE_VENDOR_ID | Your Deepslate vendor ID |
| organization_id | str | env: DEEPSLATE_ORGANIZATION_ID | Your Deepslate organization ID |
| api_key | str | env: DEEPSLATE_API_KEY | Your Deepslate API key |
| base_url | str | https://app.deepslate.eu | Base URL for the Deepslate API |
| system_prompt | str | "You are a helpful assistant." | System prompt for the model |
| ws_url | str or None | None | Direct WebSocket URL (bypasses standard URL construction, useful for local testing) |
| max_retries | int | 3 | Maximum reconnection attempts before emitting an ErrorFrame |
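
max_retries caps the reconnection loop before an ErrorFrame is emitted. The plugin's actual backoff constants are internal; as an illustration only, a doubling schedule with a ceiling (assumed 1 s base and 30 s cap, not the plugin's real values) looks like this:

```python
def backoff_delays(max_retries, base=1.0, cap=30.0):
    """Illustrative exponential-backoff schedule: base * 2**attempt, capped at `cap` seconds."""
    return [min(base * (2 ** attempt), cap) for attempt in range(max_retries)]

# backoff_delays(3) -> [1.0, 2.0, 4.0]
```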
Pass a DeepslateVadConfig to DeepslateRealtimeLLMService to tune server-side Voice Activity Detection.
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| confidence_threshold | float | 0.5 | Minimum confidence to consider audio as speech (0.0–1.0) |
| min_volume | float | 0.01 | Minimum volume threshold (0.0–1.0) |
| start_duration_ms | int | 200 | Duration of speech required to detect speech start (ms) |
| stop_duration_ms | int | 500 | Duration of silence required to detect speech end (ms) |
| backbuffer_duration_ms | int | 1000 | Audio buffered before speech detection triggers (ms) |
from deepslate.pipecat import DeepslateVadConfig, DeepslateRealtimeLLMService

llm = DeepslateRealtimeLLMService(
    options=opts,
    vad_config=DeepslateVadConfig(
        confidence_threshold=0.3,
        stop_duration_ms=300,
    ),
)
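
The start/stop durations debounce the raw per-frame decision: audio only counts as speech once it has stayed above the confidence and volume thresholds for start_duration_ms, and the turn only ends after stop_duration_ms of continuous silence. A minimal sketch of that gating logic (illustrative only, not Deepslate's server-side implementation; assumes fixed-size frames):

```python
def detect_speech_events(frames, frame_ms=20, confidence_threshold=0.5,
                         min_volume=0.01, start_duration_ms=200, stop_duration_ms=500):
    """frames: list of (confidence, volume) tuples, one per audio frame.
    Returns debounced ("start", i) / ("stop", i) events by frame index."""
    start_frames = start_duration_ms // frame_ms
    stop_frames = stop_duration_ms // frame_ms
    events, speaking, run = [], False, 0
    for i, (conf, vol) in enumerate(frames):
        is_speech = conf >= confidence_threshold and vol >= min_volume
        if speaking:
            # Count consecutive silence frames; end the turn once enough pass.
            run = run + 1 if not is_speech else 0
            if run >= stop_frames:
                events.append(("stop", i))
                speaking, run = False, 0
        else:
            # Count consecutive speech frames; start the turn once enough pass.
            run = run + 1 if is_speech else 0
            if run >= start_frames:
                events.append(("start", i))
                speaking, run = True, 0
    return events
```

Lowering confidence_threshold or stop_duration_ms, as in the example above, makes the detector trigger earlier and end turns faster, at the cost of more false positives.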
Configure server-side text-to-speech with ElevenLabs via Deepslate.
| Parameter | Type | Description |
| --- | --- | --- |
| api_key | str | ElevenLabs API key (env: ELEVENLABS_API_KEY) |
| voice_id | str | Voice ID (env: ELEVENLABS_VOICE_ID) |
| model_id | str or None | Model ID, e.g., eleven_turbo_v2 (env: ELEVENLABS_MODEL_ID) |
Use ElevenLabsTtsConfig.from_env() to create a config from environment variables.
Server-side TTS enables automatic interruption handling — when the user interrupts, Deepslate truncates the response context so the model knows what was actually spoken. Without server-side TTS, the service emits TTSTextFrame for a downstream Pipecat TTS service, but interruption context truncation will not work.
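
Conceptually, truncation keeps only the prefix of the assistant's turn that was actually heard before the interruption. An illustrative sketch of that idea (not the plugin's internals; assumes the number of spoken characters is known):

```python
def truncate_to_spoken(response_text, chars_spoken):
    """Keep only the text heard before the interruption, cut back to a word boundary."""
    if chars_spoken >= len(response_text):
        return response_text
    prefix = response_text[:chars_spoken]
    # Avoid ending mid-word so the remaining context stays coherent.
    return prefix.rsplit(" ", 1)[0] if " " in prefix else prefix
```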

Features

Real-time Voice Streaming

Low-latency bidirectional PCM audio streaming over WebSockets for natural conversations

Server-side VAD

Voice activity detection handled server-side for reliable, configurable speech detection

Function Calling

Full tool/function calling support using OpenAI JSON schema format with async handlers

ElevenLabs TTS

Server-side text-to-speech with automatic interruption handling

Automatic Reconnection

Exponential-backoff reconnection with a configurable retry limit

Transport Agnostic

Works with any Pipecat transport: Daily.co, Twilio, generic WebSocket, and more

Function Calling

Define tools in OpenAI JSON schema format, register async handlers on the service, and push the definitions into the pipeline before it starts:
import random
from pipecat.frames.frames import LLMSetToolsFrame
from pipecat.services.llm_service import FunctionCallParams

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "lookup_weather",
            "description": "Get the current weather for a given location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "The city to look up."}
                },
                "required": ["location"],
            },
        },
    },
]

async def lookup_weather(params: FunctionCallParams):
    result = {
        "location": params.arguments.get("location", "unknown"),
        "temperature_celsius": random.randint(10, 35),
    }
    await params.result_callback(result)

# Register the handler on the service
llm.register_function("lookup_weather", lookup_weather)

# Queue tool definitions — synced to Deepslate after the pipeline starts
await task.queue_frame(LLMSetToolsFrame(tools=TOOLS))
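
Because the tool definitions are plain OpenAI JSON schema, you can sanity-check incoming arguments before dispatching to a handler. A minimal required-field check (illustrative; a real deployment might use a full JSON-schema validator instead):

```python
def validate_arguments(tool_def, arguments):
    """Return a list of problems with the call arguments (empty if OK)."""
    params = tool_def["function"]["parameters"]
    problems = [f"missing required field: {name}"
                for name in params.get("required", [])
                if name not in arguments]
    problems += [f"unexpected field: {name}"
                 for name in arguments
                 if name not in params.get("properties", {})]
    return problems
```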

Transport Examples

The Deepslate service is transport-agnostic. Swap the transport to suit your deployment.
Daily.co

from pipecat.transports.daily.transport import DailyTransport, DailyParams

transport = DailyTransport(
    room_url=daily_room_url,
    token=token,
    bot_name="My Voice Bot",
    params=DailyParams(
        audio_in_enabled=True,
        audio_out_enabled=True,
        vad_enabled=False,  # Deepslate handles VAD
    ),
)

pipeline = Pipeline([transport.input(), llm, transport.output()])
Twilio

from pipecat.transports.services.twilio import TwilioTransport

transport = TwilioTransport(
    account_sid=twilio_account_sid,
    auth_token=twilio_auth_token,
    from_number=twilio_from_number,
)

pipeline = Pipeline([transport.input(), llm, transport.output()])
Generic WebSocket

from pipecat.transports.network.websocket import WebsocketTransport, WebsocketParams

transport = WebsocketTransport(
    host="0.0.0.0",
    port=8765,
    params=WebsocketParams(
        audio_in_enabled=True,
        audio_out_enabled=True,
    ),
)

pipeline = Pipeline([transport.input(), llm, transport.output()])

Contributing

This plugin is open source and we welcome contributions. Visit the GitHub repository to:
  • Report issues
  • Submit pull requests
  • Request features

Next Steps