# Atomic APIs (signature authentication required)

Authentication: signature-based. Scope: the low-level endpoints that process AI tasks directly; every request must pass strict signature verification.

Required request headers:

X-APPID: your-AppID-here
X-Request-ID: uuid-request-id
X-Timestamp: 1677652288
X-Nonce: random-nonce-string
X-Signature: calculated-signature

Signature calculation rule:

signature = HMAC-SHA256(Secret-Key, timestamp + "\n" + nonce + "\n" + request_id + "\n" + request_body)

Parameter notes:

  • Secret-Key and AppID can be obtained from https://weknora.weixin.qq.com/platform/openapi
  • X-Request-ID and X-Nonce are strings generated by the client
  • X-Timestamp is the current Unix timestamp, in seconds
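
Under the rule above, the signed headers can be assembled as in this Python sketch. `build_signed_headers` is an illustrative helper, not part of the API, and the hex encoding of the HMAC digest is an assumption — confirm the expected encoding (hex vs. Base64) against the platform documentation.

```python
import hashlib
import hmac
import time
import uuid

def build_signed_headers(app_id, secret_key, request_body,
                         timestamp=None, nonce=None, request_id=None):
    r"""Build the signed headers for the atomic APIs.

    Signature rule from the docs:
        HMAC-SHA256(Secret-Key, timestamp + "\n" + nonce + "\n" + request_id + "\n" + request_body)
    """
    timestamp = timestamp or str(int(time.time()))
    nonce = nonce or uuid.uuid4().hex
    request_id = request_id or str(uuid.uuid4())
    # Join the four components with newlines, exactly as the rule specifies.
    message = "\n".join([timestamp, nonce, request_id, request_body])
    signature = hmac.new(secret_key.encode("utf-8"),
                         message.encode("utf-8"),
                         hashlib.sha256).hexdigest()
    return {
        "X-APPID": app_id,
        "X-Request-ID": request_id,
        "X-Timestamp": timestamp,
        "X-Nonce": nonce,
        "X-Signature": signature,
    }
```

Passing `timestamp`, `nonce`, and `request_id` explicitly is useful for testing; in production, let the helper generate fresh values per request.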

Endpoint groups covered:

  • /api/v1/doc - document processing
  • /api/v1/embeddings - embedding vectors
  • /api/v1/rerank - reranking
  • /api/v1/chat - chat

# Chat API

# POST /api/v1/chat/completions

Purpose: chat completions endpoint; supports both streaming and non-streaming responses.

Request body:

{
  "model": "chat",
  "messages": [
    {
      "role": "user",
      "content": "你好,请介绍一下自己"
    }
  ],
  "stream": false,
  "max_tokens": 1000,
  "temperature": 0.7
}

Non-streaming response:

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "chat",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "这是AI的响应内容"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
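A minimal non-streaming call can be sketched with the standard library alone. `BASE_URL` is a placeholder for your deployment's host, the `headers` argument is expected to carry the signature headers described earlier, and error handling is omitted for brevity.

```python
import json
import urllib.request

# Placeholder; substitute your deployment's endpoint.
BASE_URL = "https://your-weknora-host"

def chat_completion(headers, messages, max_tokens=1000, temperature=0.7):
    """POST a non-streaming chat completion request and return the parsed JSON."""
    body = json.dumps({
        "model": "chat",
        "messages": messages,
        "stream": False,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }).encode("utf-8")
    req = urllib.request.Request(
        BASE_URL + "/api/v1/chat/completions",
        data=body,
        method="POST",
        headers={**headers, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def extract_reply(response):
    """Pull the assistant message text out of a chat.completion response."""
    return response["choices"][0]["message"]["content"]
```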

Streaming response (when stream=true):

Purpose: enables streaming mode; the server returns content incrementally in Server-Sent Events (SSE) format.

Request body:

{
  "model": "chat",
  "messages": [
    {
      "role": "user",
      "content": "请用流式方式回答:介绍一下人工智能的发展历史"
    }
  ],
  "stream": true,
  "max_tokens": 1000,
  "temperature": 0.7
}

Parameter reference:

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model identifier |
| messages | array | Yes | Message array containing the conversation history |
| stream | boolean | Yes | Enables streaming; must be set to true for this mode |
| max_tokens | integer | No | Maximum number of tokens to generate |
| temperature | number | No | Sampling temperature; controls randomness |

Streaming response example (SSE data returned by the server):

data: {"id":"chatcmpl-e3af18d9441f49c6b5713dcb5b76ab77","object":"chat.completion.chunk","created":1772181842,"model":"chat","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-e3af18d9441f49c6b5713dcb5b76ab77","object":"chat.completion.chunk","created":1772181842,"model":"chat","choices":[{"index":0,"delta":{"content":"人"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-e3af18d9441f49c6b5713dcb5b76ab77","object":"chat.completion.chunk","created":1772181842,"model":"chat","choices":[{"index":0,"delta":{"content":"工"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-e3af18d9441f49c6b5713dcb5b76ab77","object":"chat.completion.chunk","created":1772181842,"model":"chat","choices":[{"index":0,"delta":{"content":"智"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-e3af18d9441f49c6b5713dcb5b76ab77","object":"chat.completion.chunk","created":1772181842,"model":"chat","choices":[{"index":0,"delta":{"content":"能"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-e3af18d9441f49c6b5713dcb5b76ab77","object":"chat.completion.chunk","created":1772181842,"model":"chat","choices":[{"index":0,"delta":{"content":"的"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-e3af18d9441f49c6b5713dcb5b76ab77","object":"chat.completion.chunk","created":1772181842,"model":"chat","choices":[{"index":0,"delta":{"content":"发"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-e3af18d9441f49c6b5713dcb5b76ab77","object":"chat.completion.chunk","created":1772181842,"model":"chat","choices":[{"index":0,"delta":{"content":"展"},"logprobs":null,"finish_reason":null}]}


data: {"id":"chatcmpl-e3af18d9441f49c6b5713dcb5b76ab77","object":"chat.completion.chunk","created":1772181842,"model":"chat","choices":[{"index":0,"delta":{"content":"..."},"logprobs":null,"finish_reason":null}]}

data: [DONE]

Response field reference:

| Field | Type | Description |
|---|---|---|
| id | string | Response ID; stays the same across the whole stream |
| object | string | Object type; always "chat.completion.chunk" |
| created | integer | Creation timestamp |
| model | string | Model identifier used |
| choices | array | Array of choices containing the response content |
| choices[].index | integer | Choice index, usually 0 |
| choices[].delta | object | Incremental content: a role or content change |
| choices[].delta.role | string | Role information (only present in the first chunk) |
| choices[].delta.content | string | Content increment; one or more characters per chunk |
| choices[].logprobs | object/null | Log-probability information |
| choices[].finish_reason | string/null | Finish reason; set in the final chunk |

Streaming response characteristics:

  • Each response chunk starts with data: followed by JSON-formatted data
  • The first chunk usually carries the assistant's role information, with empty content
  • Each subsequent chunk carries a content increment of one or more characters
  • When the response completes, data: [DONE] is sent to mark the end of the stream
  • Every chunk carries the same id and model values, keeping the response consistent
  • finish_reason is set to stop (or another finish reason) in the last valid chunk

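
A stream with these characteristics can be consumed by reading data: lines until [DONE]. The following is a sketch; `iter_sse_deltas` is an illustrative helper that takes any iterable of decoded text lines, so it works with `httplib`/`requests` line iterators alike.

```python
import json

def iter_sse_deltas(lines):
    """Yield content deltas from a chat.completion.chunk SSE stream.

    `lines` is an iterable of decoded text lines; iteration stops at `data: [DONE]`.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank separator / keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return  # end-of-stream marker
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        # The first chunk carries only the role with empty content; skip it.
        if delta.get("content"):
            yield delta["content"]
```

Joining the yielded deltas reconstructs the full assistant reply.
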
# VLM (vision-language model) request example

Purpose: multimodal chat with a vision-language model; supports joint analysis of images and text.

Request body:

{
  "model": "vlm",
  "messages": [
    {
      "role": "user",
      "multiContent": [
        {
          "type": "text",
          "text": "请详细描述这张图片中的内容,并用中文回答。"
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAYEBQYFBAYGBQYHBwYIChAKCgkJChQODwwQFxQYGBcUFhYaHSUfGhsjHBYWICwgIyYnKSopGR8tMC0oMCUoKSj/2wBDAQcHBwoIChMKChMoGhYaKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCj/wAARCAABAAEDASIAAhEBAxEB/8QAFQABAQAAAAAAAAAAAAAAAAAAAAv/xAAUEAEAAAAAAAAAAAAAAAAAAAAA/8QAFQEBAQAAAAAAAAAAAAAAAAAAAAX/xAAUEQEAAAAAAAAAAAAAAAAAAAAA/9oADAMBAAIRAxEAPwCdABmX/9k="
          }
        }
      ]
    }
  ],
  "stream": true
}

Parameter reference:

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model identifier; must be set to "vlm" |
| messages | array | Yes | Message array; supports multimodal input |
| messages[].role | string | Yes | Message role: "user", "assistant", or "system" |
| messages[].multiContent | array | Yes | Multimodal content array; mixes text and image items |
| messages[].multiContent[].type | string | Yes | Content type: "text" or "image_url" |
| messages[].multiContent[].text | string | Conditional | Text content; required when type is "text" |
| messages[].multiContent[].image_url | object | Conditional | Image URL object; required when type is "image_url" |
| messages[].multiContent[].image_url.url | string | Yes | Image URL in data URL format (data:image/<format>;base64,<base64 data>) |
| stream | boolean | No | Enables streaming; setting it to true is recommended |

Image data requirements:

  • Supported formats: common image formats such as JPEG, PNG, GIF, and WebP
  • Data format: must be a data URL containing the MIME type and base64-encoded payload
  • Example: data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAYEBQYFBAYGBQYHBwYIChAKCgkJChQODwwQFxQYGBcUFhYaHSUfGhsjHBYWICwgIyYnKSopGR8tMC0oMCUoKSj/2wBDAQcHBwoIChMKChMoGhYaKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCj/wAARCAABAAEDASIAAhEBAxEB/8QAFQABAQAAAAAAAAAAAAAAAAAAAAv/xAAUEAEAAAAAAAAAAAAAAAAAAAAA/8QAFQEBAQAAAAAAAAAAAAAAAAAAAAX/xAAUEQEAAAAAAAAAAAAAAAAAAAAA/9oADAMBAAIRAxEAPwCdABmX/9k=
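
A data URL in this format can be produced from raw image bytes as follows; `to_data_url` and `image_content` are illustrative helper names, not part of the API.

```python
import base64

def to_data_url(image_bytes, mime="image/jpeg"):
    """Encode raw image bytes as a data URL: data:<mime>;base64,<payload>."""
    payload = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{payload}"

def image_content(image_bytes, mime="image/jpeg"):
    """Build a multiContent image entry for the VLM request body."""
    return {"type": "image_url", "image_url": {"url": to_data_url(image_bytes, mime)}}
```

For example, `image_content(open("photo.jpg", "rb").read())` yields an entry that can be appended to the multiContent array alongside a text item.
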

Usage notes:

  • Multiple images can be sent at once; they are processed in array order
  • Images are combined with the text query for multimodal analysis
  • Keep each image under roughly 10 MB
  • The streaming response format matches the standard chat endpoint: Server-Sent Events
  • The response contains the model's visual understanding of the image as a text description

Response format: Server-Sent Events stream (Content-Type: text/event-stream)

Response example:

data: {"id":"chatcmpl-1234567890","object":"chat.completion.chunk","created":1677652288,"model":"vlm","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-1234567890","object":"chat.completion.chunk","created":1677652288,"model":"vlm","choices":[{"index":0,"delta":{"content":"这"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-1234567890","object":"chat.completion.chunk","created":1677652288,"model":"vlm","choices":[{"index":0,"delta":{"content":"张"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-1234567890","object":"chat.completion.chunk","created":1677652288,"model":"vlm","choices":[{"index":0,"delta":{"content":"图"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-1234567890","object":"chat.completion.chunk","created":1677652288,"model":"vlm","choices":[{"index":0,"delta":{"content":"片"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-1234567890","object":"chat.completion.chunk","created":1677652288,"model":"vlm","choices":[{"index":0,"delta":{"content":"展"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-1234567890","object":"chat.completion.chunk","created":1677652288,"model":"vlm","choices":[{"index":0,"delta":{"content":"示"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-1234567890","object":"chat.completion.chunk","created":1677652288,"model":"vlm","choices":[{"index":0,"delta":{"content":"了"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-1234567890","object":"chat.completion.chunk","created":1677652288,"model":"vlm","choices":[{"index":0,"delta":{"content":"一"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-1234567890","object":"chat.completion.chunk","created":1677652288,"model":"vlm","choices":[{"index":0,"delta":{"content":"个"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-1234567890","object":"chat.completion.chunk","created":1677652288,"model":"vlm","choices":[{"index":0,"delta":{"content":"美"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-1234567890","object":"chat.completion.chunk","created":1677652288,"model":"vlm","choices":[{"index":0,"delta":{"content":"丽"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-1234567890","object":"chat.completion.chunk","created":1677652288,"model":"vlm","choices":[{"index":0,"delta":{"content":"的"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-1234567890","object":"chat.completion.chunk","created":1677652288,"model":"vlm","choices":[{"index":0,"delta":{"content":"风"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-1234567890","object":"chat.completion.chunk","created":1677652288,"model":"vlm","choices":[{"index":0,"delta":{"content":"景"},"logprobs":null,"finish_reason":null}]}

data: {"id":"chatcmpl-1234567890","object":"chat.completion.chunk","created":1677652288,"model":"vlm","choices":[{"index":0,"delta":{"content":"..."},"logprobs":null,"finish_reason":null}]}

data: [DONE]

# Document processing API (signature authentication)

# POST /api/v1/doc/reader

Purpose: creates an asynchronous document-reading task.

Request body:

{
  "file_name": "document.pdf",
  "file_type": "pdf",
  "file_content": "<base64-encoded file content>",
  "read_config": {
    "chunk_size": 1000,
    "chunk_overlap": 200,
    "separators": ["\n\n", "\n", " ", ""],
    "enable_multimodal": true,
    "vlm_config": {
      "model_name": "vlm-model-name"
    }
  },
  "request_id": "optional-request-id"
}

Response:

{
  "task_id": "task-1234567890",
  "status": "pending",
  "message": "任务已创建",
  "created_at": 1677652288
}

# GET /api/v1/doc/:task_id

Purpose: retrieves the status of a task.

Response:

{
  "task_id": "task-1234567890",
  "status": "completed",
  "message": "任务已完成",
  "progress": 1.0,
  "result": {
    "chunks": [
      {
        "content": "文档内容片段1",
        "seq": 1,
        "start": 0,
        "end": 100,
        "images": []
      }
    ]
  },
  "created_at": 1677652288,
  "updated_at": 1677652388
}
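
Because document reading is asynchronous, clients typically poll this endpoint until the task reaches a terminal status. A sketch follows; the set of terminal statuses is an assumption ("failed" does not appear in the examples above), and the `fetch_status` callable keeps the loop independent of any particular HTTP client.

```python
import time

# Assumed terminal statuses; "completed" and "cancelled" appear in the docs,
# "failed" is an assumption for error handling.
TERMINAL_STATUSES = {"completed", "failed", "cancelled"}

def poll_task(fetch_status, interval=1.0, max_attempts=60):
    """Poll GET /api/v1/doc/:task_id until the task reaches a terminal status.

    `fetch_status` is any callable returning the parsed status JSON.
    """
    for _ in range(max_attempts):
        status = fetch_status()
        if status.get("status") in TERMINAL_STATUSES:
            return status
        time.sleep(interval)  # back off between polls
    raise TimeoutError("document task did not finish in time")
```
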

# DELETE /api/v1/doc/:task_id

Purpose: cancels a task.

Response:

{
  "task_id": "task-1234567890",
  "status": "cancelled",
  "message": "任务已取消",
  "created_at": 1677652288,
  "updated_at": 1677652388
}

# Embeddings API

# POST /api/v1/embeddings

Purpose: generates embedding vectors for the input texts.

Request body:

{
  "model": "embedding",
  "input": [
    "人工智能是计算机科学的一个分支",
    "机器学习是人工智能的重要技术",
    "深度学习推动了人工智能的发展"
  ]
}

Response:

{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "embedding": [0.1, 0.2, 0.3, ...],
      "index": 0
    }
  ],
  "model": "embedding",
  "usage": {
    "prompt_tokens": 8,
    "total_tokens": 8
  }
}
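The returned vectors (one entry in `data` per input string, ordered by `index`) can be compared with cosine similarity, e.g. to measure how related two input texts are. A plain-Python sketch; `pairwise_similarity` is an illustrative helper:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def pairwise_similarity(response, i, j):
    """Cosine similarity between the embeddings of input i and input j."""
    # Sort by index in case entries arrive out of order.
    vecs = sorted(response["data"], key=lambda d: d["index"])
    return cosine_similarity(vecs[i]["embedding"], vecs[j]["embedding"])
```
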

# Rerank API

# POST /api/v1/rerank

Purpose: reranks documents by relevance to a query.

Request body:

{
  "model": "rerank",
  "query": "hello",
  "documents": ["hello world", "goodbye"]
}

Response:

{
  "id": "rerank-af52d3f836574b73b7d0e9b76a5c6a4c",
  "model": "rerank",
  "usage": {"total_tokens": 18},
  "results": [
    {
      "index": 0,
      "document": {"text": "hello world"},
      "relevance_score": 0.98974609375
    },
    {
      "index": 1,
      "document": {"text": "goodbye"},
      "relevance_score": 0.3701171875
    }
  ]
}
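
To consume the response, pick the highest-scoring entries. The example results already appear ordered by `relevance_score`, but sorting defensively costs little; `top_documents` is an illustrative helper, not part of the API.

```python
def top_documents(response, k=1):
    """Return the texts of the k highest-scoring documents from a rerank response."""
    ranked = sorted(response["results"],
                    key=lambda r: r["relevance_score"],
                    reverse=True)
    return [r["document"]["text"] for r in ranked[:k]]
```
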