Merge remote-tracking branch 'origin/dev' into feature/organic-extraction

# Conflicts:
#	.cursor/skills/add-workstation/SKILL.md
#	.cursor/skills/add-workstation/reference.md
This commit is contained in:
ZiWei
2026-03-27 11:49:30 +08:00
17 changed files with 2048 additions and 812 deletions


@@ -1,260 +1,626 @@
---
name: add-workstation
description: Guide for adding new workstations to Uni-Lab-OS (接入新工作站). Uses @device decorator + AST auto-scanning. Walks through workstation type, sub-device composition, driver creation, deck setup, and graph file. Use when the user wants to add a workstation, create a workstation driver, configure a station with sub-devices, or mentions 工作站/工站/station/workstation.
---

# Uni-Lab-OS Workstation Integration Guide

A workstation is a large device composed of multiple sub-devices, with its own material-management system and workflow engine. It is registered with the `@device` decorator; AST auto-scanning generates the registry entry.
## Workstation Types

| Type | Base class | Use case | Example |
| --- | --- | --- | --- |
| **Protocol station** | `ProtocolNode` | Standardized chemistry protocols (pump transfer, filtration, etc.) | FilterProtocolStation |
| **External-system station** | `WorkstationBase` | Integration with an external LIMS/MES | BioyondStation |
| **Hardware-control station** | `WorkstationBase` | Direct PLC/hardware control | CoinCellAssembly |

Also confirm with the user:

- English name and communication method (HTTP / Modbus / OPC UA / none)
- Sub-device composition (which devices already exist, which are new, hardware-proxy relationships)
- Material requirements (whether a Deck is needed, material types, whether external sync is required)
---

## The `@device` Decorator (Workstations)

Workstations are registered with the same `@device` decorator as ordinary devices, with identical parameters:

```python
@device(
    id="my_workstation",        # unique registry identifier (required)
    category=["workstation"],   # category tags
    description="My workstation",
)
```

If one workstation class backs several concrete variants, use `ids` / `id_meta` exactly as for devices (see the add-device SKILL).

---
## Workstation Driver Templates

### Template A: external-system workstation

```python
import logging
from typing import Dict, Any, Optional

from pylabrobot.resources import Deck

from unilabos.registry.decorators import device, topic_config, not_action
from unilabos.devices.workstation.workstation_base import WorkstationBase

try:
    from unilabos.ros.nodes.presets.workstation import ROS2WorkstationNode
except ImportError:
    ROS2WorkstationNode = None


@device(id="my_workstation", category=["workstation"], description="My workstation")
class MyWorkstation(WorkstationBase):
    _ros_node: "ROS2WorkstationNode"

    def __init__(self, config=None, deck=None, protocol_type=None, **kwargs):
        super().__init__(deck=deck, **kwargs)
        self.config = config or {}
        self.logger = logging.getLogger("MyWorkstation")
        self.api_host = self.config.get("api_host", "")
        self._status = "Idle"

    @not_action
    def post_init(self, ros_node: "ROS2WorkstationNode"):
        super().post_init(ros_node)
        self._ros_node = ros_node

    async def scheduler_start(self, **kwargs) -> Dict[str, Any]:
        """Registered as a workstation action."""
        return {"success": True}

    async def create_order(self, json_str: str, **kwargs) -> Dict[str, Any]:
        """Registered as a workstation action."""
        return {"success": True}

    @property
    @topic_config()
    def workflow_sequence(self) -> str:
        return "[]"

    @property
    @topic_config()
    def material_info(self) -> str:
        return "{}"
```
### Template B: Protocol workstation

Use `ProtocolNode` directly; a custom driver class is usually unnecessary:

```python
from unilabos.devices.workstation.workstation_base import ProtocolNode
```

Just configure `protocol_type` in the graph file.

---
## Accessing Sub-devices (sub_devices)

After the station initializes its sub-devices, every sub-device instance is stored in the `self._ros_node.sub_devices` dict (key: device id, value: a `ROS2DeviceNode` instance). The station's driver class can fetch a sub-device instance directly and call its methods:

```python
# Access a sub-device from a method of the station driver class
sub = self._ros_node.sub_devices["pump_1"]

# .driver_instance — the sub-device's driver instance (the device's Python class instance)
sub.driver_instance.some_method(arg1, arg2)

# .ros_node_instance — the sub-device's ROS2 node instance
sub.ros_node_instance._action_value_mappings  # inspect the actions the sub-device supports
```
**Typical usage**:

```python
class MyWorkstation(WorkstationBase):
    def my_protocol(self, **kwargs):
        # Get sub-device driver instances
        pump = self._ros_node.sub_devices["pump_1"].driver_instance
        heater = self._ros_node.sub_devices["heater_1"].driver_instance
        # Call sub-device methods directly
        pump.aspirate(volume=100)
        heater.set_temperature(80)
```

> Reference implementation: `unilabos/devices/workstation/bioyond_studio/reaction_station/reaction_station.py` fetches a sub-reactor instance via `self._ros_node.sub_devices.get(reactor_id)` and updates its data.

---
## Hardware Communication Interface (hardware_interface)

Hardware-control workstations typically drive several sub-devices over serial, Modbus, or similar protocols. Uni-Lab-OS shares ports through a **communication-device proxy** mechanism: one serial port gets exactly one `serial` node, and multiple sub-devices share that communication instance.

### How it works

`ROS2WorkstationNode` walks the sub-devices in two passes during initialization (`workstation.py`):

**Pass 1 — initialize all sub-devices**: `initialize_device()` is called in `children` order; communication devices (ids starting with `serial_` / `io_`) finish initialization first and create their `serial.Serial()` instance. At this point the other sub-devices still hold a string, e.g. `self.hardware_interface = "serial_pump"`.

**Pass 2 — proxy substitution**: every initialized sub-device is visited and its `_hardware_interface` configuration is read:

```
hardware_interface = d.ros_node_instance._hardware_interface
# → {"name": "hardware_interface", "read": "send_command", "write": "send_command"}
```

1. Read the attribute named by the `name` field: `name_value = getattr(driver, hardware_interface["name"])`
   - If `name_value` is a string and that string is the id of some sub-device → trigger proxy substitution
2. Fetch the real `read`/`write` methods from the communication device
3. Bind them onto the sub-device with `setattr(driver, read_method, _read)`

Consequently:

- **The communication device id must exactly match the string in the sub-device's config** (e.g. `"serial_pump"`)
- **The communication device id must start with `serial_` or `io_`** (otherwise pass 1 does not recognize it as a communication device)
- **The communication device must come first in the `children` list**, so it initializes before its consumers
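The two-pass substitution above can be sketched in plain Python. This is an illustrative stand-in, not the actual ROS node code: `SerialComm`, `Pump`, and `setup_hardware_proxy` are hypothetical names, and the `_hardware_interface` dict mimics what the registry decorator records.

```python
class SerialComm:
    """Stand-in for a communication device exposing send_command."""
    def __init__(self, port):
        self.port = port
        self.log = []

    def send_command(self, command):
        self.log.append(command)
        return f"OK:{command}"


class Pump:
    """Stand-in sub-device: hardware_interface starts out as a string id."""
    # mimics HardwareInterface(name="hardware_interface", read="send_command", write="send_command")
    _hardware_interface = {"name": "hardware_interface",
                           "read": "send_command", "write": "send_command"}

    def __init__(self, port):
        self.hardware_interface = port  # e.g. "serial_pump" before substitution


def setup_hardware_proxy(devices):
    """Pass 2: replace string references with the comm device's bound methods."""
    for driver in devices.values():
        cfg = getattr(driver, "_hardware_interface", None)
        if not cfg:
            continue
        name_value = getattr(driver, cfg["name"])
        if isinstance(name_value, str) and name_value in devices:
            comm = devices[name_value]
            setattr(driver, cfg["name"], comm)               # string → instance
            setattr(driver, cfg["read"], getattr(comm, cfg["read"]))
            setattr(driver, cfg["write"], getattr(comm, cfg["write"]))


devices = {"serial_pump": SerialComm("COM7"), "pump_1": Pump("serial_pump")}
setup_hardware_proxy(devices)
```

After substitution, the pump's calls route through the shared communication instance, which is exactly why the id string and the config string must match.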
### HardwareInterface Parameters

```python
from unilabos.registry.decorators import HardwareInterface

HardwareInterface(
    name="hardware_interface",   # attribute name in __init__ that receives the communication instance
    read="send_command",         # name of the read method exposed by the communication device
    write="send_command",        # name of the write method exposed by the communication device
    extra_info=["list_ports"],   # optional: additional methods to expose
)
```

**Meaning of the `name` field**: the **attribute name** in the device class's `__init__` that stores the communication instance; the system uses it to know which attribute to replace. Most devices simply use `"hardware_interface"`, but a custom name (e.g. `"io_device_port"`) also works.
### Example 1: name="hardware_interface"

```python
from unilabos.registry.decorators import device, HardwareInterface

@device(
    id="my_pump",
    category=["pump_and_valve"],
    hardware_interface=HardwareInterface(
        name="hardware_interface",
        read="send_command",
        write="send_command",
    ),
)
class MyPump:
    def __init__(self, port=None, address="1", **kwargs):
        # name="hardware_interface" → the system replaces self.hardware_interface
        self.hardware_interface = port  # starts as the string "serial_pump"; replaced with a Serial instance at startup
        self.address = address

    def send_command(self, command: str):
        full_command = f"/{self.address}{command}\r\n"
        self.hardware_interface.write(bytearray(full_command, "ascii"))
        return self.hardware_interface.read_until(b"\n")
```
### Example 2: a solenoid valve (name="io_device_port", custom attribute name)

```python
@device(
    id="solenoid_valve",
    category=["pump_and_valve"],
    hardware_interface=HardwareInterface(
        name="io_device_port",   # custom attribute name → the system replaces self.io_device_port
        read="read_io_coil",
        write="write_io_coil",
    ),
)
class SolenoidValve:
    def __init__(self, io_device_port: str = None, **kwargs):
        # name="io_device_port" → in the graph-file config, use "io_device_port": "io_board_1"
        self.io_device_port = io_device_port  # starts as a string; the system replaces it with a Modbus instance
```
### The Serial Communication Device (class="serial")

`serial` is Uni-Lab-OS's built-in communication-proxy device; the code lives in `unilabos/ros/nodes/presets/serial_node.py`:

```python
from serial import Serial, SerialException
from threading import Lock

class ROS2SerialNode(BaseROS2DeviceNode):
    def __init__(self, device_id, registry_name, port: str, baudrate: int = 9600, **kwargs):
        self.port = port
        self.baudrate = baudrate
        self._hardware_interface = {
            "name": "hardware_interface",
            "write": "send_command",
            "read": "read_data",
        }
        self._query_lock = Lock()
        self.hardware_interface = Serial(baudrate=baudrate, port=port)
        BaseROS2DeviceNode.__init__(
            self, driver_instance=self, registry_name=registry_name,
            device_id=device_id, status_types={}, action_value_mappings={},
            hardware_interface=self._hardware_interface, print_publish=False,
        )
        self.create_service(SerialCommand, "serialwrite", self.handle_serial_request)

    def send_command(self, command: str):
        with self._query_lock:
            self.hardware_interface.write(bytearray(f"{command}\n", "ascii"))
            return self.hardware_interface.read_until(b"\n").decode()

    def read_data(self):
        with self._query_lock:
            return self.hardware_interface.read_until(b"\n").decode()
```
Use `"class": "serial"` in the graph file to create a serial proxy:

```json
{
  "id": "serial_pump",
  "class": "serial",
  "parent": "my_station",
  "config": { "port": "COM7", "baudrate": 9600 }
}
```
### Graph File Configuration

**The communication device must come first in the `children` list** so it initializes before the other sub-devices:

```json
{
  "nodes": [
    {
      "id": "my_station",
      "class": "workstation",
      "children": ["serial_pump", "pump_1", "pump_2"],
      "config": { "protocol_type": ["PumpTransferProtocol"] }
    },
    {
      "id": "serial_pump",
      "class": "serial",
      "parent": "my_station",
      "config": { "port": "COM7", "baudrate": 9600 }
    },
    {
      "id": "pump_1",
      "class": "syringe_pump_with_valve.runze.SY03B-T08",
      "parent": "my_station",
      "config": { "port": "serial_pump", "address": "1", "max_volume": 25.0 }
    },
    {
      "id": "pump_2",
      "class": "syringe_pump_with_valve.runze.SY03B-T08",
      "parent": "my_station",
      "config": { "port": "serial_pump", "address": "2", "max_volume": 25.0 }
    }
  ],
  "links": [
    {
      "source": "pump_1",
      "target": "serial_pump",
      "type": "communication",
      "port": { "pump_1": "port", "serial_pump": "port" }
    },
    {
      "source": "pump_2",
      "target": "serial_pump",
      "type": "communication",
      "port": { "pump_2": "port", "serial_pump": "port" }
    }
  ]
}
```
### Communication Protocol Quick Reference

| Protocol | config parameters | Package | Comm device class |
| --- | --- | --- | --- |
| Serial (RS232/RS485) | `port`, `baudrate` | `pyserial` | `serial` |
| Modbus RTU | `port`, `baudrate`, `slave_id` | `pymodbus` | `device_comms/modbus_plc/` |
| Modbus TCP | `host`, `port`, `slave_id` | `pymodbus` | `device_comms/modbus_plc/` |
| TCP Socket | `host`, `port` | stdlib | custom |
| HTTP API | `url`, `token` | `requests` | `device_comms/rpc.py` |

Reference implementation: `unilabos/test/experiments/Grignard_flow_batchreact_single_pumpvalve.json`
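The wiring conventions above (comm-device id prefix, comm devices leading the `children` list, `port` strings naming real nodes) lend themselves to a small pre-flight check. The following validator is a hypothetical helper, not part of Uni-Lab-OS:

```python
def check_comm_wiring(graph):
    """Validate the communication-device conventions for a graph-file dict."""
    node_ids = {n["id"] for n in graph["nodes"]}
    errors = []
    for node in graph["nodes"]:
        children = node.get("children", [])
        comm = [c for c in children if c.startswith(("serial_", "io_"))]
        # communication devices must come first in children
        if comm and children[: len(comm)] != comm:
            errors.append(f"{node['id']}: communication devices must lead the children list")
        # a sub-device's port string must name an existing comm node
        port = node.get("config", {}).get("port")
        if isinstance(port, str) and port.startswith(("serial_", "io_")) and port not in node_ids:
            errors.append(f"{node['id']}: port '{port}' does not match any node id")
    return errors


graph = {
    "nodes": [
        {"id": "my_station", "children": ["serial_pump", "pump_1"], "config": {}},
        {"id": "serial_pump", "config": {"port": "COM7"}},
        {"id": "pump_1", "config": {"port": "serial_pump"}},
    ]
}
```

Running it on the well-formed example yields no errors; swapping `pump_1` ahead of `serial_pump` in `children` flags the ordering violation.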
---

## Deck and Material Lifecycle

### 1. The deck parameter and two initialization modes

Based on how `config.deck` is written on the device node, the system deserializes a Deck instance and passes it into `__init__` as the `deck` parameter. `deck` is currently a fixed field name and only one main Deck is supported; the recommendation is one deck per device, with second- and third-level child materials modeled on that deck.

There are two initialization modes:

#### init initialization (recommended)

`config.deck` contains `_resource_type` + `_resource_child_name` directly. The system first calls the Deck class's `__init__` with the Deck node's `config` to deserialize it, then passes the instance into the device's `deck` parameter. Child materials are deserialized together with the Deck's `children`.

```json
"config": {
    "deck": {
        "_resource_type": "unilabos.devices.liquid_handling.prcxi.prcxi:PRCXI9300Deck",
        "_resource_child_name": "PRCXI_Deck"
    }
}
```
#### deserialize initialization

`config.deck` wraps the payload in an extra `data` layer; the system then takes the `deserialize` path, which accepts extra parameters (such as `allow_marshal`):

```json
"config": {
    "deck": {
        "data": {
            "_resource_child_name": "YB_Bioyond_Deck",
            "_resource_type": "unilabos.resources.bioyond.decks:BIOYOND_YB_Deck"
        }
    }
}
```

Prefer init initialization unless you have special needs.
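The distinction between the two shapes comes down to whether the payload is wrapped in `data`. A loader might classify them as sketched below; this is illustrative stand-in logic (`deck_init_mode` is a hypothetical name), not the actual Uni-Lab-OS loader:

```python
def deck_init_mode(deck_cfg):
    """Classify a config.deck payload: 'deserialize' if wrapped in 'data', else 'init'."""
    if "data" in deck_cfg:
        payload, mode = deck_cfg["data"], "deserialize"
    else:
        payload, mode = deck_cfg, "init"
    # both shapes carry the same two required fields
    assert "_resource_type" in payload and "_resource_child_name" in payload
    module_path, class_name = payload["_resource_type"].split(":")
    return mode, module_path, payload["_resource_child_name"]


init_cfg = {
    "_resource_type": "unilabos.devices.liquid_handling.prcxi.prcxi:PRCXI9300Deck",
    "_resource_child_name": "PRCXI_Deck",
}
wrapped_cfg = {"data": {
    "_resource_type": "unilabos.resources.bioyond.decks:BIOYOND_YB_Deck",
    "_resource_child_name": "YB_Bioyond_Deck",
}}
```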
#### config.deck field reference

| Field | Description |
|------|------|
| `_resource_type` | Full module path of the Deck class (`module:ClassName`) |
| `_resource_child_name` | The `id` of the Deck node in the graph file; establishes the parent-child link |

#### What the device `__init__` receives

```python
def __init__(self, config=None, deck=None, protocol_type=None, **kwargs):
    super().__init__(deck=deck, **kwargs)
    # deck is already a deserialized Deck instance
    # → PRCXI9300Deck / BIOYOND_YB_Deck, etc.
```

#### The Deck node (in the graph file)

The Deck node is one of the device's `children`; its `parent` points to the device id:
```json
{
  "id": "PRCXI_Deck",
  "parent": "PRCXI",
  "type": "deck",
  "class": "",
  "children": [],
  "config": {
    "type": "PRCXI9300Deck",
    "size_x": 542, "size_y": 374, "size_z": 0,
    "category": "deck",
    "sites": [...]
  },
  "data": {}
}
```

- The fields in `config` are passed into the Deck class's `__init__` (so `__init__` must accept every field that `serialize()` outputs)
- `children` starts empty and is filled by the synchronizer or by manual initialization
- `config.type` is the Deck class name
### 2. Initializing an Empty Deck Yourself

If the Deck node's `children` is empty, the workstation should populate it in `post_init` or on first sync:

```python
@not_action
def post_init(self, ros_node):
    super().post_init(ros_node)
    if self.deck and not self.deck.children:
        self._initialize_default_deck()

def _initialize_default_deck(self):
    from my_labware import My_TipRack, My_Plate
    self.deck.assign_child_resource(My_TipRack("T1"), spot=0)
    self.deck.assign_child_resource(My_Plate("T2"), spot=1)
```
### 3. Bidirectional Material Synchronization

When a workstation integrates with an external system (LIMS/MES), implement a `ResourceSynchronizer` to handle bidirectional material sync:

```python
from unilabos.devices.workstation.workstation_base import ResourceSynchronizer

class MyResourceSynchronizer(ResourceSynchronizer):
    def sync_from_external(self) -> bool:
        """Sync from the external system into self.workstation.deck."""
        external_data = self._query_external_materials()
        # External station is authoritative: recreate PLR resource instances from external data
        for item in external_data:
            cls = self._resolve_resource_class(item["type"])
            resource = cls(name=item["name"], **item["params"])
            self.workstation.deck.assign_child_resource(resource, spot=item["slot"])
        return True

    def sync_to_external(self, resource) -> bool:
        """Push UniLab-side material changes to the external system."""
        # UniLab is authoritative: convert the PLR resource to the external format and push it
        external_format = self._convert_to_external(resource)
        return self._push_to_external(external_format)

    def handle_external_change(self, change_info) -> bool:
        """Handle changes pushed proactively by the external system."""
        return True
```

The sync strategy depends on the business scenario:

- **External station authoritative**: query material data from the external API and recreate matching PLR resource instances on the Deck
- **UniLab authoritative**: push UniLab-side material changes to the external system via `sync_to_external`

Initialize the synchronizer in the workstation's `post_init`:

```python
@not_action
def post_init(self, ros_node):
    super().post_init(ros_node)
    self.resource_synchronizer = MyResourceSynchronizer(self)
    self.resource_synchronizer.sync_from_external()
```
### 4. Serialization and Persistence (serialize / serialize_state)

Resource classes must implement serialization correctly; the system relies on it for persistence and frontend sync.

**`serialize()`** — outputs the resource's structural information (the `config` layer); on deserialization it is passed back as `__init__` arguments. Therefore **`__init__` must accept every field that `serialize()` outputs**, even fields it does not currently use:

```python
class MyDeck(Deck):
    def __init__(self, name, size_x, size_y, size_z,
                 sites=None,      # field emitted by serialize()
                 rotation=None,   # field emitted by serialize()
                 barcode=None,    # field emitted by serialize()
                 **kwargs):       # catch-all for any unknown serialize fields
        super().__init__(size_x, size_y, size_z, name)
        # ...

    def serialize(self) -> dict:
        data = super().serialize()
        data["sites"] = [...]  # custom field
        return data
```

**`serialize_state()`** — outputs the resource's runtime state (the `data` layer), used to persist mutable information. Content in `data` is saved and restored correctly:

```python
class MyPlate(Plate):
    def __init__(self, name, size_x, size_y, size_z,
                 material_info=None, **kwargs):
        super().__init__(name, size_x, size_y, size_z, **kwargs)
        self._unilabos_state = {}
        if material_info:
            self._unilabos_state["Material"] = material_info

    def serialize_state(self) -> Dict[str, Any]:
        data = super().serialize_state()
        data.update(self._unilabos_state)
        return data
```

Key points:

- Every field `serialize()` outputs is passed back to `__init__` as `config`, so `__init__` must accept them all (explicitly or via `**kwargs`)
- The `data` that `serialize_state()` outputs persists runtime state (material info, liquid volumes, etc.)
- Store only JSON-serializable primitives in `_unilabos_state` (str, int, float, bool, list, dict, None)
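The round-trip requirement can be demonstrated with plain-Python stand-ins (not actual pylabrobot classes; `MiniDeck` is a hypothetical name): reconstructing an instance via `cls(**instance.serialize())` works only because `__init__` tolerates every emitted field.

```python
import json
from typing import Any, Dict


class MiniDeck:
    """Stand-in resource whose __init__ accepts every field serialize() emits."""
    def __init__(self, name, size_x, size_y, size_z, sites=None, **kwargs):
        self.name, self.size_x, self.size_y, self.size_z = name, size_x, size_y, size_z
        self.sites = sites or []
        self._unilabos_state: Dict[str, Any] = {}

    def serialize(self) -> dict:
        # "rotation" stands in for an extra field the reader does not use;
        # it is absorbed by **kwargs on reload
        return {"name": self.name, "size_x": self.size_x, "size_y": self.size_y,
                "size_z": self.size_z, "sites": self.sites,
                "rotation": {"x": 0, "y": 0, "z": 0}}

    def serialize_state(self) -> dict:
        # runtime state: JSON-safe primitives only
        return dict(self._unilabos_state)


deck = MiniDeck("my_deck", 542, 374, 0, sites=[{"label": "T1"}])
deck._unilabos_state["Material"] = {"liquid": "water", "volume_ml": 5.0}

# round-trip: serialize() output feeds __init__ directly
clone = MiniDeck(**deck.serialize())
```

Dropping `**kwargs` from `MiniDeck.__init__` would make this round-trip raise a `TypeError` on the unexpected `rotation` field, which is the failure mode the rule above guards against.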
### 5. Automatic Child-Material Sync

Once child materials (Bottle, Plate, TipRack, etc.) are placed on the Deck, the system syncs them to the frontend Deck view automatically. You only need to make sure the resource classes implement `serialize()` / `serialize_state()` and deserialization correctly.
### 6. Graph File Configuration (see prcxi_9320_slim.json)
```json
{
"nodes": [
{
"id": "my_station",
"type": "device",
"class": "my_workstation",
"config": {
"deck": {
"_resource_type": "unilabos.resources.my_module:MyDeck",
"_resource_child_name": "my_deck"
},
"host": "10.20.30.1",
"port": 9999
}
},
{
"id": "my_deck",
"parent": "my_station",
"type": "deck",
"class": "",
"children": [],
"config": {
"type": "MyLabDeck",
"size_x": 542,
"size_y": 374,
"size_z": 0,
"category": "deck",
"sites": [
{
"label": "T1",
"visible": true,
"occupied_by": null,
"position": { "x": 0, "y": 0, "z": 0 },
"size": { "width": 128.0, "height": 86, "depth": 0 },
"content_type": ["plate", "tip_rack", "tube_rack", "adaptor"]
}
]
},
"data": {}
}
],
"edges": []
}
```
Deck node checklist:

- `config.type` is the Deck class name (e.g. `"PRCXI9300Deck"`)
- `config.sites` lists every site in full (taken from the Deck class's `serialize()` output)
- `children` starts empty (filled by the synchronizer or by manual initialization)
- The device node's `config.deck._resource_type` points to the Deck class's full module path
---

## Sub-devices

Sub-devices follow the standard device integration flow (see the add-device SKILL) and use the `@device` decorator.

Sub-device constraints:

- In the graph file, `parent` points to the workstation ID
- They are listed in the workstation's `children` array
---

## Key Rules

1. **`__init__` must accept `deck` and `**kwargs`** — `WorkstationBase.__init__` requires the `deck` parameter
2. **The Deck arrives deserialized via `config.deck._resource_type`** — do not create a Deck manually in `__init__`
3. **Populate an empty Deck yourself** — check and fill default materials in `post_init`
4. **Implement `ResourceSynchronizer` for external sync** — `sync_from_external` / `sync_to_external`
5. **Access sub-devices through `self._children`** — do not keep your own sub-device references
6. **Start background services in `post_init`** — never open network connections in `__init__`
7. **Use `await self._ros_node.sleep()` in async methods** — `time.sleep()` and `asyncio.sleep()` are forbidden
8. **Mark non-action methods with `@not_action`** — `post_init`, `initialize`, `cleanup`
9. **Make sure child materials serialize/deserialize correctly** — the system syncs them to the frontend Deck view automatically

---
## Verification

```bash
# The module imports
python -c "from unilabos.devices.workstation.<name>.<name> import <ClassName>"

# Launch test (AST auto-scanning)
unilab -g <graph>.json
```

---
## Existing Workstation References

| Workstation | Driver class | Type |
| --- | --- | --- |
| Protocol (generic) | `ProtocolNode` | Protocol |
| Bioyond reaction station | `BioyondReactionStation` | External system |
| Coin-cell assembly | `CoinCellAssemblyWorkstation` | Hardware control |

Reference paths: the workstation implementations under `unilabos/devices/workstation/`.


@@ -1,6 +1,6 @@
# Workstation Advanced Patterns Reference

This file supplements SKILL.md with advanced patterns: external-system integration, material synchronization, configuration structures, and more.

Agents should read it on demand when implementing these features.

---
@@ -116,6 +116,7 @@ class ConnectionMonitor:
    def _monitor_loop(self):
        while self._running:
            try:
                # Probe the external system's interface to check connectivity
                self.workstation.hardware_interface.ping()
                status = "online"
            except Exception:
@@ -209,35 +210,6 @@ class ConnectionMonitor:
}
```
### 2.7 Workflow-to-Process-Section Name Mapping
```json
{
"workflow_to_section_map": {
"reactor_taken_in": "反应器放入",
"reactor_taken_out": "反应器取出",
"Solid_feeding_vials": "固体投料-小瓶"
}
}
```
### 2.8 Action Name Mapping
```json
{
"action_names": {
"reactor_taken_in": {
"config": "通量-配置",
"stirring": "反应模块-开始搅拌"
},
"solid_feeding_vials": {
"feeding": "粉末加样模块-投料",
"observe": "反应模块-观察搅拌结果"
}
}
}
```
---

## 3. Resource Synchronization
@@ -274,25 +246,14 @@ class MyResourceSynchronizer(ResourceSynchronizer):
        return True
```
### 3.2 update_resource — Uploading the Resource Tree to the Cloud

Serializes the PLR Deck and uploads it through a ROS service. Typical usage:
```python
from unilabos.ros.nodes.base_device_node import ROS2DeviceNode

# Upload the initial deck in post_init
ROS2DeviceNode.run_async_func(
    self._ros_node.update_resource, True,
    **{"resources": [self.deck]}
)
```
@@ -354,11 +315,15 @@ async def transfer_materials_to_another_station(
```python
    """Transfer materials to another workstation."""
    target_node = self._children.get(target_device_id)
    if not target_node:
        # Look up a non-child target station via the ROS node
        pass
    for group in transfer_groups:
        resource = self.find_resource_by_name(group["resource_name"])
        # Remove from this station's deck
        resource.unassign()
        # Call the target station's receive method
        # ...
    return {"success": True, "transferred": len(transfer_groups)}
```
@@ -404,437 +369,3 @@ def post_init(self, ros_node):
```python
    # 5. Initialize the resource synchronizer (optional)
    self.resource_synchronizer = MyResourceSynchronizer(self, self.rpc_client)
```
---

## 7. Complete PLC/Modbus Framework

### 7.1 Register-Map CSV Format

PLC workstations define their register map in a CSV file, typically `<name>.csv` in the workstation directory.

**CSV columns:**

| Column | Description | Example values |
| --- | --- | --- |
| `Name` | Register node name (the unique identifier referenced in code) | `COIL_SYS_START_CMD` |
| `DataType` | Data type | `BOOL`, `INT16`, `FLOAT32` |
| `InitValue` | Initial value (optional) | — |
| `Comment` | Comment (optional) | — |
| `Attribute` | Custom attribute (optional) | — |
| `DeviceType` | Modbus device type | `coil`, `hold_register`, `input_register`, `discrete_inputs` |
| `Address` | Modbus address | `8010`, `11000` |

**CSV example:**

```csv
Name,DataType,InitValue,Comment,Attribute,DeviceType,Address,
COIL_SYS_START_CMD,BOOL,,system start command,,coil,8010,
COIL_SYS_STOP_CMD,BOOL,,system stop command,,coil,8020,
COIL_SYS_RESET_CMD,BOOL,,system reset command,,coil,8030,
REG_MSG_ELECTROLYTE_VOLUME,INT16,,electrolyte volume,,hold_register,11004,
REG_DATA_OPEN_CIRCUIT_VOLTAGE,FLOAT32,,open-circuit voltage,,hold_register,10002,
REG_DATA_AXIS_X_POS,FLOAT32,,X-axis position,,hold_register,10004,
```

**Naming conventions:**

- Coils: `COIL_` prefix (read/write booleans)
- Holding registers: `REG_MSG_` (message/command registers), `REG_DATA_` (data/status registers)
- `_CMD` suffix: write commands
- `_STATUS` suffix: read status
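Loading such a CSV into a name-keyed lookup is a few lines with the stdlib `csv` module. The real loader is `BaseClient.load_csv`; this sketch (with a hypothetical `load_register_map` helper) only illustrates the table structure:

```python
import csv
import io

CSV_TEXT = """Name,DataType,InitValue,Comment,Attribute,DeviceType,Address,
COIL_SYS_START_CMD,BOOL,,system start command,,coil,8010,
REG_DATA_OPEN_CIRCUIT_VOLTAGE,FLOAT32,,open-circuit voltage,,hold_register,10002,
"""


def load_register_map(text):
    """Parse the register-map CSV into {Name: {"type", "device", "address"}}."""
    nodes = {}
    for row in csv.DictReader(io.StringIO(text)):
        nodes[row["Name"]] = {
            "type": row["DataType"],
            "device": row["DeviceType"],
            "address": int(row["Address"]),
        }
    return nodes


nodes = load_register_map(CSV_TEXT)
```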
### 7.2 TCPClient Initialization

```python
import os

from unilabos.device_comms.modbus_plc.client import TCPClient, BaseClient
from unilabos.device_comms.modbus_plc.modbus import DataType, WorderOrder

# Create a Modbus TCP client
modbus_client = TCPClient(addr="192.168.1.100", port=502)
modbus_client.client.connect()

# Load the register map from CSV
csv_path = os.path.join(os.path.dirname(__file__), 'register_map.csv')
nodes = BaseClient.load_csv(csv_path)
client = modbus_client.register_node_list(nodes)
```
### 7.3 Register Read/Write Operations

```python
# Read a coil (boolean)
result, err = client.use_node('COIL_SYS_START_STATUS').read(1)
is_started = result[0] if not err else False

# Write a coil
client.use_node('COIL_SYS_START_CMD').write(True)

# Read a holding register (INT16)
result, err = client.use_node('REG_DATA_ASSEMBLY_COIN_CELL_NUM').read(1)

# Read a holding register (FLOAT32 — spans 2 registers)
result, err = client.use_node('REG_DATA_OPEN_CIRCUIT_VOLTAGE').read(2)

# Write a holding register (FLOAT32)
client.use_node('REG_MSG_ELECTROLYTE_VOLUME').write(
    100.0,
    data_type=DataType.FLOAT32,
    word_order=WorderOrder.LITTLE,
)
```

**FLOAT32 byte-order caveat:** many PLCs use big byte order with little word order, which requires swapping the two 16-bit registers. See the `_decode_float32_correct` function in `coin_cell_assembly.py`.
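The word swap can be sketched with the stdlib `struct` module. This is an illustrative decoder for the big-byte-order / little-word-order convention, under assumed function names; the exact behavior of `_decode_float32_correct` lives in `coin_cell_assembly.py`:

```python
import struct


def decode_float32_little_word(registers):
    """Two 16-bit registers: big byte order inside each word, low word transmitted first."""
    low, high = registers                 # little word order → low word arrives first
    raw = struct.pack(">HH", high, low)   # reassemble as high word then low word, big-endian bytes
    return struct.unpack(">f", raw)[0]


def encode_float32_little_word(value):
    """Inverse: split a float into [low_word, high_word] for writing."""
    raw = struct.pack(">f", value)
    high, low = struct.unpack(">HH", raw)
    return [low, high]
```

Decoding the same two registers without the swap (`struct.pack(">HH", low, high)`) yields garbage values, which is the usual symptom of a word-order mismatch.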
### 7.4 The ModbusWorkflow Lifecycle

A PLC workstation's actions are organized with `ModbusWorkflow` + `WorkflowAction`; each action has four lifecycle phases:

```python
import time

from unilabos.device_comms.modbus_plc.client import ModbusWorkflow, WorkflowAction

# Lifecycle functions for one action
def my_init(use_node):
    """Init: set parameters."""
    use_node('REG_MSG_ELECTROLYTE_VOLUME').write(
        100.0, data_type=DataType.FLOAT32, word_order=WorderOrder.LITTLE
    )
    return True

def my_start(use_node):
    """Start: trigger the action and poll until it completes."""
    use_node('COIL_SYS_START_CMD').write(True)
    while True:
        result, err = use_node('COIL_SYS_START_STATUS').read(1)
        if not err and result[0]:
            break
        time.sleep(0.5)
    return True

def my_stop(use_node):
    """Stop: reset the trigger signal."""
    use_node('COIL_SYS_START_CMD').write(False)
    return True

def my_cleanup(use_node):
    """Cleanup: runs whether the action succeeded or failed."""
    use_node('COIL_SYS_RESET_CMD').write(True)

# Compose the workflow
workflow = ModbusWorkflow(
    name="我的加工流程",
    actions=[
        WorkflowAction(init=my_init, start=my_start, stop=my_stop, cleanup=my_cleanup)
    ],
)

# Execute
client.run_modbus_workflow(workflow)
```

**Lifecycle order:** `init` → `start` → `stop` → `cleanup` (cleanup always runs, even if an earlier step fails)
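The "cleanup always runs" guarantee is the try/finally pattern. The sketch below is an illustrative stand-in for the runner (`run_action` is a hypothetical name, actions modeled as plain dicts), not the actual `ModbusWorkflow` implementation:

```python
def run_action(action, use_node=None):
    """Execute one WorkflowAction-style dict: init → start → stop; cleanup always."""
    trace = []
    try:
        for phase in ("init", "start", "stop"):
            fn = action.get(phase)
            if fn is not None:
                trace.append(phase)
                if not fn(use_node):
                    break  # a failed phase skips the remaining phases, but not cleanup
    finally:
        if action.get("cleanup"):
            trace.append("cleanup")
            action["cleanup"](use_node)
    return trace


ok = {p: (lambda use_node: True) for p in ("init", "start", "stop", "cleanup")}
failing = dict(ok, start=lambda use_node: False)  # start reports failure
```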
### 7.5 Handshake Loops in PLC Workstations

The coin-cell assembly station's typical PLC interaction pattern (message-exchange handshake):

```python
async def _send_msg_to_plc(self, data: dict):
    """Send a message to the PLC and wait for acknowledgement."""
    # 1. Write the data registers
    for key, value in data.items():
        self._write_register(key, value)
    # 2. Raise the "message ready" signal
    self._write_coil('COIL_UNILAB_SEND_MSG_SUCC_CMD', True)
    # 3. Wait for the PLC to confirm it has read the message
    while not self._read_coil('COIL_REQUEST_REC_MSG_STATUS'):
        await self._ros_node.sleep(0.3)
    # 4. Lower the send signal
    self._write_coil('COIL_UNILAB_SEND_MSG_SUCC_CMD', False)

async def _recv_msg_from_plc(self) -> dict:
    """Wait for a message from the PLC."""
    # 1. Wait for the PLC's send signal
    while not self._read_coil('COIL_REQUEST_SEND_MSG_STATUS'):
        await self._ros_node.sleep(0.3)
    # 2. Read the data registers
    data = {}
    for key in self._recv_registers:
        data[key] = self._read_register(key)
    # 3. Raise the "received" acknowledgement
    self._write_coil('COIL_UNILAB_REC_MSG_SUCC_CMD', True)
    # 4. Wait for the PLC to lower its send signal
    while self._read_coil('COIL_REQUEST_SEND_MSG_STATUS'):
        await self._ros_node.sleep(0.3)
    # 5. Lower the acknowledgement signal
    self._write_coil('COIL_UNILAB_REC_MSG_SUCC_CMD', False)
    return data
```
### 7.6 JSON-driven PLC Workflows

PLC workstations can also describe workflows in JSON, with no Python code, via `BaseClient.execute_procedure_from_json`:

```json
{
  "register_node_list_from_csv_path": {"path": "register_map.csv"},
  "create_flow": [
    {
      "name": "初始化系统",
      "action": [
        {
          "address_function_to_create": [
            {"func_name": "write_start", "node_name": "COIL_SYS_START_CMD", "mode": "write", "value": true},
            {"func_name": "read_status", "node_name": "COIL_SYS_START_STATUS", "mode": "read", "value": 1}
          ],
          "create_init_function": null,
          "create_start_function": {
            "func_name": "start_sys",
            "write_functions": ["write_start"],
            "condition_functions": ["read_status"],
            "stop_condition_expression": "read_status[0]"
          },
          "create_stop_function": {"func_name": "stop_start", "node_name": "COIL_SYS_START_CMD", "mode": "write", "value": false},
          "create_cleanup_function": null
        }
      ]
    }
  ],
  "execute_flow": ["初始化系统"]
}
```

Reference: `unilabos/device_comms/modbus_plc/client.py` (the `ExecuteProcedureJson` type definition)
---

## 8. End-to-end Walkthrough: the Bioyond Reaction Station

Using the Bioyond reaction station as the example, this section walks through integrating an external-system workstation with material input, from scratch.

### 8.1 Requirements

- **Type**: external-system workstation (integrates with the Bioyond LIMS)
- **Communication**: HTTP API (RPC client + HTTP callback service)
- **Sub-devices**: 5 reactors (reactor_1 ~ reactor_5)
- **Materials**: reactors, reagent bottles, beakers, sample plates, vials, tip racks → 6 WareHouse types → 1 Deck

### 8.2 File Layout
```
unilabos/
├── devices/workstation/bioyond_studio/
│   ├── station.py                  # BioyondWorkstation base class
│   ├── bioyond_rpc.py              # RPC client
│   └── reaction_station/
│       └── reaction_station.py     # BioyondReactionStation + BioyondReactor
├── resources/bioyond/
│   ├── bottles.py                  # Bottle factory functions (8 kinds)
│   ├── bottle_carriers.py          # Carrier factory functions (8 kinds)
│   ├── warehouses.py               # WareHouse factory functions (6 kinds)
│   └── decks.py                    # BIOYOND_PolymerReactionStation_Deck
├── registry/
│   ├── devices/reaction_station_bioyond.yaml
│   └── resources/bioyond/
│       ├── bottles.yaml
│       ├── bottle_carriers.yaml
│       └── decks.yaml
└── test/experiments/reaction_station_bioyond.json
```
### 8.3 Inheritance Chain

```
WorkstationBase
└── BioyondWorkstation                  # shared Bioyond logic
    ├── __init__(config, deck, protocol_type)
    ├── post_init() → start connection monitor + HTTP service + upload deck
    ├── BioyondResourceSynchronizer     # bidirectional material sync
    └── BioyondReactionStation          # reaction-station specialization
        ├── reactor_taken_in()          # reactor load-in workflow
        ├── solid_feeding_vials()       # solid feeding
        ├── liquid_feeding_solvents()   # liquid feeding
        └── workflow_sequence @property # workflow-sequence state
```
### 8.4 Material Resource Hierarchy (reaction-station instance)

```
BIOYOND_PolymerReactionStation_Deck (2700×1080×1500 mm)
├── 堆栈1左 (WareHouse 4x4) ← Coordinate(-200, 400, 0)
│   ├── A01 → BottleCarrier → Reactor
│   ├── A02 → BottleCarrier → Reactor
│   └── ... (16 slots total)
├── 堆栈1右 (WareHouse 4x4, col_offset=4) ← Coordinate(350, 400, 0)
│   ├── A05 → BottleCarrier → Reactor
│   └── ...
├── 站内试剂存放堆栈 (WareHouse 1x2) ← Coordinate(1050, 400, 0)
│   ├── A01 → 1BottleCarrier → Bottle
│   └── A02 → 1BottleCarrier → Bottle
├── 测量小瓶仓库 (WareHouse 3x2) ← Coordinate(...)
├── 站内Tip盒堆栈(左) (WareHouse, removed_positions)
└── 站内Tip盒堆栈(右) (WareHouse)
```
### 8.5 图文件关键结构
```json
{
"nodes": [
{
"id": "reaction_station_bioyond",
"children": ["Bioyond_Deck", "reactor_1", "reactor_2", "reactor_3", "reactor_4", "reactor_5"],
"parent": null,
"type": "device",
"class": "reaction_station.bioyond",
"config": {
"api_key": "DE9BDDA0",
"api_host": "http://172.21.103.36:45388",
"workflow_mappings": {
"reactor_taken_out": "3a16081e-...",
"reactor_taken_in": "3a160df6-..."
},
"material_type_mappings": {
"BIOYOND_PolymerStation_Reactor": ["反应器", "3a14233b-..."],
"BIOYOND_PolymerStation_1BottleCarrier": ["试剂瓶", "3a14233b-..."]
},
"warehouse_mapping": {
"堆栈1左": {
"uuid": "3a14aa17-...",
"site_uuids": {"A01": "3a14aa17-...", "A02": "3a14aa17-..."}
}
},
"http_service_config": {
"http_service_host": "127.0.0.1",
"http_service_port": 8080
}
},
"deck": {
"data": {
"_resource_child_name": "Bioyond_Deck",
"_resource_type": "unilabos.resources.bioyond.decks:BIOYOND_PolymerReactionStation_Deck"
}
},
"size_x": 2700.0,
"size_y": 1080.0,
"size_z": 2500.0,
"protocol_type": [],
"data": {}
},
{
"id": "Bioyond_Deck",
"parent": "reaction_station_bioyond",
"type": "deck",
"class": "BIOYOND_PolymerReactionStation_Deck",
"config": {"type": "BIOYOND_PolymerReactionStation_Deck", "setup": true}
},
{
"id": "reactor_1",
"parent": "reaction_station_bioyond",
"type": "device",
"class": "reaction_station.reactor",
"position": {"x": 1150, "y": 300, "z": 0},
"config": {}
}
]
}
```
### 8.6 初始化时序
```
1. ROS2WorkstationNode.__init__
├── 创建 BioyondReactionStation 实例__init__
├── 加载 DeckBIOYOND_PolymerReactionStation_Deck, setup=true → 创建 6 个 WareHouse
├── 初始化 reactor_1~5BioyondReactor 实例)→ sub_devices
└── 为每个 reactor 创建 ActionClient
2. BioyondReactionStation.post_init(ros_node)
├── 初始化 BioyondV1RPCHTTP 客户端)
├── 初始化 BioyondResourceSynchronizer
├── 启动 ConnectionMonitor30s 轮询)
├── 启动 WorkstationHTTPService接收回调
├── sync_from_external()(从 Bioyond 拉取物料到 Deck
└── update_resource([self.deck])(上传 Deck 到云端)
```
### 8.7 物料同步流程
```
外部入库:
Bioyond API → stock_material() → 获取物料列表
→ resource_bioyond_to_plr() → 转为 PLR Bottle/Carrier
→ deck.warehouses["堆栈1左"]["A01"] = carrier
→ update_resource([deck])
外部变更回调:
Bioyond POST /report/material_change
→ WorkstationHTTPService 接收
→ process_material_change_report()
→ 更新 Deck 中的资源
→ update_resource([affected_resource])
```
### 8.8 工作站动作执行流程(以 reactor_taken_in 为例)
```python
async def reactor_taken_in(self, assign_material_name, cutoff, temperature, **kwargs):
# 1. 从 config 获取工作流 UUID
workflow_id = self.config["workflow_mappings"]["reactor_taken_in"]
# 2. 构建工序参数
sections = self._build_sections(temperature, cutoff, ...)
# 3. 合并到工作流序列
self._workflow_sequence.append({"name": "reactor_taken_in", ...})
# 4. 调用外部系统创建工单
result = self.hardware_interface.create_order(order_data)
# 5. 等待外部系统完成(通过 HTTP 回调通知)
# process_order_finish_report 被回调时更新状态
return {"success": True}
```
---
## 9. 现有工作站 Config 结构完整对比
| 特性 | BioyondReactionStation | BioyondDispensingStation | CoinCellAssemblyWorkstation |
|------|----------------------|------------------------|-----------------------------|
| **继承** | BioyondWorkstation | BioyondWorkstation | WorkstationBase (直接) |
| **通信方式** | HTTP RPC | HTTP RPC | Modbus TCP |
| **`__init__` 签名** | `(config, deck, protocol_type, **kwargs)` | `(config, deck, protocol_type, **kwargs)` | `(config, deck, address, port, debug_mode, **kwargs)` |
| **子设备** | 5 个 BioyondReactor | 无 | 无 |
| **Deck** | BioyondReactionDeck (6 个 WareHouse) | BioyondDispensingDeck | CoincellDeck |
| **物料同步** | BioyondResourceSynchronizer (双向) | BioyondResourceSynchronizer (双向) | 无(本地 PLR |
| **status_types** | `workflow_sequence: str` | 空 | 18 个属性 (sys_status, 传感器数据等) |
| **动作风格** | 语义化 (reactor_taken_in, ...) | 语义化 (compute_experiment_design, ...) | PLC 操作 (func_pack_device_init, ...) |
| **post_init** | 连接监控 + HTTP 服务 + 资源同步 + 上传 deck | 继承父类 | 上传 deck |
| **工作流管理** | workflow_mappings → 合并序列 → create_order | batch_create → wait_for_reports | PLC 握手循环 |
### Config 字段对比
| 字段 | 反应站 | 配液站 | 纽扣电池 |
|------|--------|--------|---------|
| `api_host` | ✅ | ✅ | — |
| `api_key` | ✅ | ✅ | — |
| `workflow_mappings` | ✅ (8 个工作流) | — | — |
| `material_type_mappings` | ✅ (8 种物料) | ✅ | — |
| `warehouse_mapping` | ✅ (6 个仓库) | ✅ (3 个仓库) | — |
| `workflow_to_section_map` | ✅ | — | — |
| `action_names` | ✅ | — | — |
| `http_service_config` | ✅ | — | — |
| `material_default_parameters` | ✅ | — | — |
| `address` (init 参数) | — | — | ✅ |
| `port` (init 参数) | — | — | ✅ |
| `debug_mode` (init 参数) | — | — | ✅ |


@@ -0,0 +1,233 @@
---
name: batch-insert-reagent
description: Batch insert reagents into Uni-Lab platform — add chemicals with CAS, SMILES, supplier info. Use when the user wants to add reagents, insert chemicals, batch register reagents, or mentions 录入试剂/添加试剂/试剂入库/reagent.
---
# 批量录入试剂 Skill
通过云端 API 批量录入试剂信息,支持逐条或批量操作。
## 前置条件(缺一不可)
使用本 skill 前,**必须**先确认以下信息。如果缺少任何一项,**立即向用户询问并终止**,等补齐后再继续。
### 1. ak / sk → AUTH
询问用户的启动参数:从 `--ak` / `--sk` 命令行参数或 config.py 中获取。
生成 AUTH token任选一种方式
```bash
# 方式一Python 一行生成
python -c "import base64,sys; print('Authorization: Lab ' + base64.b64encode(f'{sys.argv[1]}:{sys.argv[2]}'.encode()).decode())" <ak> <sk>
# 方式二:手动计算
# base64(ak:sk) → Authorization: Lab <token>
```
### 2. --addr → BASE URL
| `--addr` 值 | BASE |
|-------------|------|
| `test` | `https://uni-lab.test.bohrium.com` |
| `uat` | `https://uni-lab.uat.bohrium.com` |
| `local` | `http://127.0.0.1:48197` |
| 不传(默认) | `https://uni-lab.bohrium.com` |
确认后设置:
```bash
BASE="<根据 addr 确定的 URL>"
AUTH="Authorization: Lab <上面命令输出的 token>"
```
**两项全部就绪后才可发起 API 请求。**
## Session State
- `lab_uuid` — 实验室 UUID首次通过 API #1 自动获取,**不需要问用户**
## 请求约定
所有请求使用 `curl -s`POST 需加 `Content-Type: application/json`
> **Windows 平台**必须使用 `curl.exe`(而非 PowerShell 的 `curl` 别名),示例中的 `curl` 均指 `curl.exe`。
---
## API Endpoints
### 1. 获取实验室信息(自动获取 lab_uuid
```bash
curl -s -X GET "$BASE/api/v1/edge/lab/info" -H "$AUTH"
```
返回:
```json
{"code": 0, "data": {"uuid": "xxx", "name": "实验室名称"}}
```
记住 `data.uuid``lab_uuid`
### 2. 录入试剂
```bash
curl -s -X POST "$BASE/api/v1/lab/reagent" \
-H "$AUTH" -H "Content-Type: application/json" \
-d '{
"lab_uuid": "<lab_uuid>",
"cas": "<CAS号>",
"name": "<试剂名称>",
"molecular_formula": "<分子式>",
"smiles": "<SMILES>",
"stock_in_quantity": <入库数量>,
"unit": "<单位字符串>",
"supplier": "<供应商>",
"production_date": "<生产日期 ISO 8601>",
"expiry_date": "<过期日期 ISO 8601>"
}'
```
返回成功时包含试剂 UUID
```json
{"code": 0, "data": {"uuid": "xxx", ...}}
```
---
## 试剂字段说明
| 字段 | 类型 | 必填 | 说明 | 示例 |
|------|------|------|------|------|
| `lab_uuid` | string | 是 | 实验室 UUID从 API #1 获取) | `"8511c672-..."` |
| `cas` | string | 是 | CAS 注册号 | `"7732-18-3"` |
| `name` | string | 是 | 试剂中文/英文名称 | `"水"` |
| `molecular_formula` | string | 是 | 分子式 | `"H2O"` |
| `smiles` | string | 是 | SMILES 表示 | `"O"` |
| `stock_in_quantity` | number | 是 | 入库数量 | `10` |
| `unit` | string | 是 | 单位(字符串,见下表) | `"mL"` |
| `supplier` | string | 否 | 供应商名称 | `"国药集团"` |
| `production_date` | string | 否 | 生产日期ISO 8601 | `"2025-11-18T00:00:00Z"` |
| `expiry_date` | string | 否 | 过期日期ISO 8601 | `"2026-11-18T00:00:00Z"` |
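提交前可按上表先做一次必填字段校验,避免请求被拒。以下为示意,`build_reagent_payload` 为假设的辅助函数名,并非平台 SDK

```python
# 上表中标记「必填」的字段
REQUIRED_FIELDS = ["lab_uuid", "cas", "name", "molecular_formula", "smiles", "stock_in_quantity", "unit"]

def build_reagent_payload(lab_uuid: str, **fields) -> dict:
    """组装 POST /lab/reagent 请求体并校验必填字段(示意)。"""
    payload = {"lab_uuid": lab_uuid, **fields}
    missing = [k for k in REQUIRED_FIELDS if k not in payload]
    if missing:
        raise ValueError(f"缺少必填字段: {missing}")
    return payload
```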
### unit 单位值
| 值 | 单位 |
|------|------|
| `"mL"` | 毫升 |
| `"L"` | 升 |
| `"g"` | 克 |
| `"kg"` | 千克 |
| `"瓶"` | 瓶 |
> 根据试剂状态选择:液体用 `"mL"` / `"L"`,固体用 `"g"` / `"kg"`。
---
## 批量录入策略
### 方式一:用户提供 JSON 数组
用户一次性给出多条试剂数据:
```json
[
{"cas": "7732-18-3", "name": "水", "molecular_formula": "H2O", "smiles": "O", "stock_in_quantity": 10, "unit": "mL"},
{"cas": "64-17-5", "name": "乙醇", "molecular_formula": "C2H6O", "smiles": "CCO", "stock_in_quantity": 5, "unit": "L"}
]
```
Agent 自动为每条补充 `lab_uuid``production_date``expiry_date` 等缺失字段,然后循环调用 API #2 逐条录入,每条记录一次 API 调用。
### 方式二:用户逐个描述
用户口头描述试剂(如「帮我录入 500mL 的无水乙醇Sigma 的」agent 自行补全字段:
1. 根据名称查找 CAS 号、分子式、SMILES参考下方速查表或自行推断
2. 构建完整的请求体
3. 向用户确认后提交
### 方式三:从 CSV/表格批量导入
用户提供 CSV 或表格文件路径agent 读取并解析:
```bash
# 期望的 CSV 格式(首行为表头)
cas,name,molecular_formula,smiles,stock_in_quantity,unit,supplier,production_date,expiry_date
7732-18-3,水,H2O,O,10,mL,农夫山泉,2025-11-18T00:00:00Z,2026-11-18T00:00:00Z
```
### 执行与汇报
每次 API 调用后:
1. 检查返回 `code`0 = 成功)
2. 记录成功/失败数量
3. 全部完成后汇总:「共录入 N 条试剂,成功 X 条,失败 Y 条」
4. 如有失败,列出失败的试剂名称和错误信息
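上述汇报步骤可示意为一个简单的汇总函数(假设每次调用的结果已收集为 `{name, code, msg}` 列表):

```python
def summarize_results(results: list) -> str:
    """按返回 code0 = 成功)统计并生成汇总报告(示意)。"""
    ok = [r for r in results if r.get("code") == 0]
    failed = [r for r in results if r.get("code") != 0]
    lines = [f"共录入 {len(results)} 条试剂,成功 {len(ok)} 条,失败 {len(failed)} 条"]
    for r in failed:  # 列出失败的试剂名称和错误信息
        lines.append(f"  失败: {r.get('name')}{r.get('msg', '未知错误')}")
    return "\n".join(lines)
```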
---
## 常见试剂速查表
| 名称 | CAS | 分子式 | SMILES |
|------|-----|--------|--------|
| 水 | 7732-18-3 | H2O | O |
| 乙醇 | 64-17-5 | C2H6O | CCO |
| 甲醇 | 67-56-1 | CH4O | CO |
| 丙酮 | 67-64-1 | C3H6O | CC(C)=O |
| 二甲基亚砜(DMSO) | 67-68-5 | C2H6OS | CS(C)=O |
| 乙酸乙酯 | 141-78-6 | C4H8O2 | CCOC(C)=O |
| 二氯甲烷 | 75-09-2 | CH2Cl2 | ClCCl |
| 四氢呋喃(THF) | 109-99-9 | C4H8O | C1CCOC1 |
| N,N-二甲基甲酰胺(DMF) | 68-12-2 | C3H7NO | CN(C)C=O |
| 氯仿 | 67-66-3 | CHCl3 | ClC(Cl)Cl |
| 乙腈 | 75-05-8 | C2H3N | CC#N |
| 甲苯 | 108-88-3 | C7H8 | Cc1ccccc1 |
| 正己烷 | 110-54-3 | C6H14 | CCCCCC |
| 异丙醇 | 67-63-0 | C3H8O | CC(C)O |
| 盐酸 | 7647-01-0 | HCl | Cl |
| 硫酸 | 7664-93-9 | H2SO4 | OS(O)(=O)=O |
| 氢氧化钠 | 1310-73-2 | NaOH | [Na+].[OH-] |
| 碳酸钠 | 497-19-8 | Na2CO3 | [Na+].[Na+].[O-]C([O-])=O |
| 氯化钠 | 7647-14-5 | NaCl | [Na+].[Cl-] |
| 乙二胺四乙酸(EDTA) | 60-00-4 | C10H16N2O8 | OC(=O)CN(CCN(CC(O)=O)CC(O)=O)CC(O)=O |
> 此表仅供快速参考。对于不在表中的试剂agent 应根据化学知识推断或提示用户补充。
---
## 完整工作流 Checklist
```
Task Progress:
- [ ] Step 1: 确认 ak/sk → 生成 AUTH token
- [ ] Step 2: 确认 --addr → 设置 BASE URL
- [ ] Step 3: GET /edge/lab/info → 获取 lab_uuid
- [ ] Step 4: 收集试剂信息(用户提供列表/逐个描述/CSV文件
- [ ] Step 5: 补全缺失字段CAS、分子式、SMILES 等)
- [ ] Step 6: 向用户确认待录入的试剂列表
- [ ] Step 7: 循环调用 POST /lab/reagent 逐条录入(每条需含 lab_uuid
- [ ] Step 8: 汇总结果(成功/失败数量及详情)
```
---
## 完整示例
用户说:「帮我录入 3 种试剂500mL 无水乙醇、1kg 氯化钠、2L 去离子水」
Agent 构建的请求序列:
```json
// 第 1 条
{"lab_uuid": "8511c672-...", "cas": "64-17-5", "name": "无水乙醇", "molecular_formula": "C2H6O", "smiles": "CCO", "stock_in_quantity": 500, "unit": "mL", "supplier": "国药集团", "production_date": "2025-01-01T00:00:00Z", "expiry_date": "2026-01-01T00:00:00Z"}
// 第 2 条
{"lab_uuid": "8511c672-...", "cas": "7647-14-5", "name": "氯化钠", "molecular_formula": "NaCl", "smiles": "[Na]Cl", "stock_in_quantity": 1, "unit": "kg", "supplier": "", "production_date": "2025-01-01T00:00:00Z", "expiry_date": "2026-01-01T00:00:00Z"}
// 第 3 条
{"lab_uuid": "8511c672-...", "cas": "7732-18-3", "name": "去离子水", "molecular_formula": "H2O", "smiles": "O", "stock_in_quantity": 2, "unit": "L", "supplier": "", "production_date": "2025-01-01T00:00:00Z", "expiry_date": "2026-01-01T00:00:00Z"}
```


@@ -0,0 +1,301 @@
---
name: batch-submit-experiment
description: Batch submit experiments (notebooks) to Uni-Lab platform — list workflows, generate node_params from registry schemas, submit multiple rounds. Use when the user wants to submit experiments, create notebooks, batch run workflows, or mentions 提交实验/批量实验/notebook/实验轮次.
---
# 批量提交实验指南
通过云端 API 批量提交实验notebook支持多轮实验参数配置。根据 workflow 模板详情和本地设备注册表自动生成 `node_params` 模板。
## 前置条件(缺一不可)
使用本指南前,**必须**先确认以下信息。如果缺少任何一项,**立即向用户询问并终止**,等补齐后再继续。
### 1. ak / sk → AUTH
询问用户的启动参数:从 `--ak` / `--sk` 命令行参数或 config.py 中获取。
生成 AUTH token任选一种方式
```bash
# 方式一Python 一行生成
python -c "import base64,sys; print('Authorization: Lab ' + base64.b64encode(f'{sys.argv[1]}:{sys.argv[2]}'.encode()).decode())" <ak> <sk>
# 方式二:手动计算
# base64(ak:sk) → Authorization: Lab <token>
```
### 2. --addr → BASE URL
| `--addr` 值 | BASE |
|-------------|------|
| `test` | `https://uni-lab.test.bohrium.com` |
| `uat` | `https://uni-lab.uat.bohrium.com` |
| `local` | `http://127.0.0.1:48197` |
| 不传(默认) | `https://uni-lab.bohrium.com` |
确认后设置:
```bash
BASE="<根据 addr 确定的 URL>"
AUTH="Authorization: Lab <上面命令输出的 token>"
```
### 3. req_device_registry_upload.json设备注册表
**批量提交实验时需要本地注册表来解析 workflow 节点的参数 schema。**
按优先级搜索:
```
<workspace 根目录>/unilabos_data/req_device_registry_upload.json
<workspace 根目录>/req_device_registry_upload.json
```
也可直接 Glob 搜索:`**/req_device_registry_upload.json`
找到后**检查文件修改时间**并告知用户。超过 1 天提醒用户是否需要重新启动 `unilab`
**如果文件不存在** → 告知用户先运行 `unilab` 启动命令,等注册表生成后再执行。可跳过此步,但将无法自动生成参数模板,需要用户手动填写 `param`
### 4. workflow_uuid目标工作流
用户需要提供要提交的 workflow UUID。如果用户不确定通过 API #2 列出可用 workflow 供选择。
**四项全部就绪后才可开始。**
## Session State
在整个对话过程中agent 需要记住以下状态,避免重复询问用户:
- `lab_uuid` — 实验室 UUID首次通过 API #1 自动获取,**不需要问用户**
- `workflow_uuid` — 工作流 UUID用户提供或从列表选择
- `workflow_nodes` — workflow 中各 action 节点的 uuid、设备 ID、动作名从 API #3 获取)
## 请求约定
所有请求使用 `curl -s`POST 需加 `Content-Type: application/json`
> **Windows 平台**必须使用 `curl.exe`(而非 PowerShell 的 `curl` 别名),示例中的 `curl` 均指 `curl.exe`。
>
> **PowerShell JSON 传参**PowerShell 中 `-d '{"key":"value"}'` 会因引号转义失败。请将 JSON 写入临时文件,用 `-d '@tmp_body.json'`(单引号包裹 `@`,否则会被解析为 splatting 运算符)。
---
## API Endpoints
### 1. 获取实验室信息(自动获取 lab_uuid
```bash
curl -s -X GET "$BASE/api/v1/edge/lab/info" -H "$AUTH"
```
返回:
```json
{"code": 0, "data": {"uuid": "xxx", "name": "实验室名称"}}
```
记住 `data.uuid``lab_uuid`
### 2. 列出可用 workflow
```bash
curl -s -X GET "$BASE/api/v1/lab/workflow/workflows?page=1&page_size=20&lab_uuid=$lab_uuid" -H "$AUTH"
```
返回 workflow 列表,展示给用户选择。列出每个 workflow 的 `uuid``name`
### 3. 获取 workflow 模板详情
```bash
curl -s -X GET "$BASE/api/v1/lab/workflow/template/detail/$workflow_uuid" -H "$AUTH"
```
返回 workflow 的完整结构,包含所有 action 节点信息。需要从响应中提取:
- 每个 action 节点的 `node_uuid`
- 每个节点对应的设备 ID`resource_template_name`
- 每个节点的动作名(`node_template_name`
- 每个节点的现有参数(`param`
> **注意**:此 API 返回格式可能因版本不同而有差异。首次调用时,先打印完整响应分析结构,再提取节点信息。常见的节点字段路径为 `data.nodes[]` 或 `data.workflow_nodes[]`。
### 4. 提交实验(创建 notebook
```bash
curl -s -X POST "$BASE/api/v1/lab/notebook" \
-H "$AUTH" -H "Content-Type: application/json" \
-d '<request_body>'
```
请求体结构:
```json
{
"lab_uuid": "<lab_uuid>",
"workflow_uuid": "<workflow_uuid>",
"name": "<实验名称>",
"node_params": [
{
"sample_uuids": ["<样品UUID1>", "<样品UUID2>"],
"datas": [
{
"node_uuid": "<workflow中的节点UUID>",
"param": {},
"sample_params": [
{
"container_uuid": "<容器UUID>",
"sample_value": {
"liquid_names": "<液体名称>",
"volumes": 1000
}
}
]
}
]
}
]
}
```
> **注意**`sample_uuids` 必须是 **UUID 数组**`[]uuid.UUID`),不是字符串。无样品时传空数组 `[]`。
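提交前可按该约定做一次结构自检(示意,非平台提供的工具):

```python
def validate_node_params(node_params: list) -> list:
    """检查每轮 sample_uuids 是否为数组、datas 节点是否带 node_uuid返回问题列表示意。"""
    problems = []
    for i, round_cfg in enumerate(node_params):
        if not isinstance(round_cfg.get("sample_uuids"), list):
            problems.append(f"第 {i + 1} 轮: sample_uuids 必须是数组(无样品时传 []")
        for d in round_cfg.get("datas", []):
            if not d.get("node_uuid"):
                problems.append(f"第 {i + 1} 轮: datas 中存在缺少 node_uuid 的节点")
    return problems
```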
---
## Notebook 请求体详解
### node_params 结构
`node_params` 是一个数组,**每个元素代表一轮实验**
- 要跑 2 轮 → `node_params` 有 2 个元素
- 要跑 N 轮 → `node_params` 有 N 个元素
### 每轮的字段
| 字段 | 类型 | 说明 |
|------|------|------|
| `sample_uuids` | array\<uuid\> | 该轮实验的样品 UUID 数组,无样品时传 `[]` |
| `datas` | array | 该轮中每个 workflow 节点的参数配置 |
### datas 中每个节点
| 字段 | 类型 | 说明 |
|------|------|------|
| `node_uuid` | string | workflow 模板中的节点 UUID从 API #3 获取) |
| `param` | object | 动作参数(根据本地注册表 schema 填写) |
| `sample_params` | array | 样品相关参数(液体名、体积等) |
### sample_params 中每条
| 字段 | 类型 | 说明 |
|------|------|------|
| `container_uuid` | string | 容器 UUID |
| `sample_value` | object | 样品值,如 `{"liquid_names": "水", "volumes": 1000}` |
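按以上结构,「N 轮 = node_params N 个元素、每轮复制一份 datas 骨架」可示意为(节点 UUID 等均为占位,`build_notebook_body` 为假设的函数名):

```python
import copy

def build_notebook_body(lab_uuid: str, workflow_uuid: str, name: str, node_uuids: list, rounds: int = 2) -> dict:
    """按轮次复制 datas 骨架,生成 POST /lab/notebook 请求体骨架(示意)。"""
    datas = [{"node_uuid": u, "param": {}, "sample_params": []} for u in node_uuids]
    return {
        "lab_uuid": lab_uuid,
        "workflow_uuid": workflow_uuid,
        "name": name,
        # 每个元素代表一轮deepcopy 保证各轮参数可独立修改
        "node_params": [{"sample_uuids": [], "datas": copy.deepcopy(datas)} for _ in range(rounds)],
    }
```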
---
## 从本地注册表生成 param 模板
### 自动方式 — 运行脚本
```bash
python scripts/gen_notebook_params.py \
--auth <token> \
--base <BASE_URL> \
--workflow-uuid <workflow_uuid> \
[--registry <path/to/req_device_registry_upload.json>] \
[--rounds <轮次数>] \
[--output <输出文件路径>]
```
> 脚本位于本文档同级目录下的 `scripts/gen_notebook_params.py`。
脚本会:
1. 调用 workflow detail API 获取所有 action 节点
2. 读取本地注册表,为每个节点查找对应的 action schema
3. 生成 `notebook_template.json`,包含:
- 完整 `node_params` 骨架
- 每个节点的 param 字段及类型说明
- `_schema_info` 辅助信息(不提交,仅供参考)
### 手动方式
如果脚本不可用或注册表不存在:
1. 调用 API #3 获取 workflow 详情
2. 找到每个 action 节点的 `node_uuid`
3. 在本地注册表中查找对应设备的 `action_value_mappings`
```
resources[].id == <device_id>
→ resources[].class.action_value_mappings.<action_name>.schema.properties.goal.properties
```
4. 将 schema 中的 properties 作为 `param` 的字段模板
5. 按轮次复制 `node_params` 元素,让用户填写每轮的具体值
### 注册表结构参考
```json
{
"resources": [
{
"id": "liquid_handler.prcxi",
"class": {
"module": "unilabos.devices.xxx:ClassName",
"action_value_mappings": {
"transfer_liquid": {
"type": "LiquidHandlerTransfer",
"schema": {
"properties": {
"goal": {
"properties": {
"asp_vols": {"type": "array", "items": {"type": "number"}},
"sources": {"type": "array"}
},
"required": ["asp_vols", "sources"]
}
}
},
"goal_default": {}
}
}
}
}
]
}
```
`param` 填写时,使用 `goal.properties` 中的字段名和类型。
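上述查找路径用代码表达大致如下(示意,与 `gen_notebook_params.py` 中的索引逻辑同源但做了简化):

```python
def lookup_param_fields(registry: dict, device_id: str, action_name: str) -> dict:
    """按 resources[].class.action_value_mappings.<action>.schema.properties.goal 提取 param 字段定义(示意)。"""
    for res in registry.get("resources", []):
        if res.get("id") == device_id:
            avm = res.get("class", {}).get("action_value_mappings", {})
            goal = avm.get(action_name, {}).get("schema", {}).get("properties", {}).get("goal", {})
            return goal.get("properties", {})
    return {}
```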
---
## 完整工作流 Checklist
```
Task Progress:
- [ ] Step 1: 确认 ak/sk → 生成 AUTH token
- [ ] Step 2: 确认 --addr → 设置 BASE URL
- [ ] Step 3: GET /edge/lab/info → 获取 lab_uuid
- [ ] Step 4: 确认 workflow_uuid用户提供或从 GET #2 列表选择)
- [ ] Step 5: GET workflow detail (#3) → 提取各节点 uuid、设备ID、动作名
- [ ] Step 6: 定位本地注册表 req_device_registry_upload.json
- [ ] Step 7: 运行 gen_notebook_params.py 或手动匹配 → 生成 node_params 模板
- [ ] Step 8: 引导用户填写每轮的参数sample_uuids、param、sample_params
- [ ] Step 9: 构建完整请求体 → POST /lab/notebook 提交
- [ ] Step 10: 检查返回结果,确认提交成功
```
---
## 常见问题
### Q: workflow 中有多个节点,每轮都要填所有节点的参数吗?
是的。`datas` 数组中需要包含该轮实验涉及的每个 workflow 节点的参数。通常每个 action 节点都需要一条 `datas` 记录。
### Q: 多轮实验的参数完全不同吗?
通常每轮的 `param`(设备动作参数)可能相同或相似,但 `sample_uuids` 和 `sample_params`(样品信息)每轮不同。脚本生成模板时会按轮次复制骨架,用户只需修改差异部分。
### Q: 如何获取 sample_uuids 和 container_uuid
这些 UUID 通常来自实验室的样品管理系统。向用户询问或从资源树API `GET /lab/material/download/$lab_uuid`)中查找。


@@ -0,0 +1,394 @@
#!/usr/bin/env python3
"""
从 workflow 模板详情 + 本地设备注册表生成 notebook 提交用的 node_params 模板。
用法:
python gen_notebook_params.py --auth <token> --base <url> --workflow-uuid <uuid> [选项]
选项:
--auth <token> Lab tokenbase64(ak:sk) 的结果,不含 "Lab " 前缀)
--base <url> API 基础 URL如 https://uni-lab.test.bohrium.com
--workflow-uuid <uuid> 目标 workflow 的 UUID
--registry <path> 本地注册表文件路径(默认自动搜索)
--rounds <n> 实验轮次数(默认 1
--output <path> 输出模板文件路径(默认 notebook_template.json
--dump-response 打印 workflow detail API 的原始响应(调试用)
示例:
python gen_notebook_params.py \\
--auth YTFmZDlkNGUtxxxx \\
--base https://uni-lab.test.bohrium.com \\
--workflow-uuid abc-123-def \\
--rounds 2
"""
import copy
import json
import os
import sys
from datetime import datetime
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError
REGISTRY_FILENAME = "req_device_registry_upload.json"
def find_registry(explicit_path=None):
"""查找本地注册表文件,逻辑同 extract_device_actions.py"""
if explicit_path:
if os.path.isfile(explicit_path):
return explicit_path
if os.path.isdir(explicit_path):
fp = os.path.join(explicit_path, REGISTRY_FILENAME)
if os.path.isfile(fp):
return fp
print(f"警告: 指定的注册表路径不存在: {explicit_path}")
return None
candidates = [
os.path.join("unilabos_data", REGISTRY_FILENAME),
REGISTRY_FILENAME,
]
for c in candidates:
if os.path.isfile(c):
return c
script_dir = os.path.dirname(os.path.abspath(__file__))
workspace_root = os.path.normpath(os.path.join(script_dir, "..", "..", ".."))
for c in candidates:
path = os.path.join(workspace_root, c)
if os.path.isfile(path):
return path
cwd = os.getcwd()
for _ in range(5):
parent = os.path.dirname(cwd)
if parent == cwd:
break
cwd = parent
for c in candidates:
path = os.path.join(cwd, c)
if os.path.isfile(path):
return path
return None
def load_registry(path):
with open(path, "r", encoding="utf-8") as f:
return json.load(f)
def build_registry_index(registry_data):
"""构建 device_id → action_value_mappings 的索引"""
index = {}
for res in registry_data.get("resources", []):
rid = res.get("id", "")
avm = res.get("class", {}).get("action_value_mappings", {})
if rid and avm:
index[rid] = avm
return index
def flatten_goal_schema(action_data):
"""从 action_value_mappings 条目中提取 goal 层的 schema"""
schema = action_data.get("schema", {})
goal_schema = schema.get("properties", {}).get("goal", {})
return goal_schema if goal_schema else schema
def build_param_template(goal_schema):
"""根据 goal schema 生成 param 模板,含类型标注"""
properties = goal_schema.get("properties", {})
required = set(goal_schema.get("required", []))
template = {}
for field_name, field_def in properties.items():
if field_name == "unilabos_device_id":
continue
ftype = field_def.get("type", "any")
default = field_def.get("default")
if default is not None:
template[field_name] = default
elif ftype == "string":
template[field_name] = f"$TODO ({ftype}, {'required' if field_name in required else 'optional'})"
elif ftype == "number" or ftype == "integer":
template[field_name] = 0
elif ftype == "boolean":
template[field_name] = False
elif ftype == "array":
template[field_name] = []
elif ftype == "object":
template[field_name] = {}
else:
template[field_name] = f"$TODO ({ftype})"
return template
def fetch_workflow_detail(base_url, auth_token, workflow_uuid):
"""调用 workflow detail API"""
url = f"{base_url}/api/v1/lab/workflow/template/detail/{workflow_uuid}"
req = Request(url, method="GET")
req.add_header("Authorization", f"Lab {auth_token}")
try:
with urlopen(req, timeout=30) as resp:
return json.loads(resp.read().decode("utf-8"))
except HTTPError as e:
body = e.read().decode("utf-8", errors="replace")
print(f"API 错误 {e.code}: {body}")
return None
except URLError as e:
print(f"网络错误: {e.reason}")
return None
def extract_nodes_from_response(response):
"""
从 workflow detail 响应中提取 action 节点列表。
适配多种可能的响应格式。
返回: [(node_uuid, resource_template_name, node_template_name, existing_param), ...]
"""
data = response.get("data", response)
search_keys = ["nodes", "workflow_nodes", "node_list", "steps"]
nodes_raw = None
for key in search_keys:
if key in data and isinstance(data[key], list):
nodes_raw = data[key]
break
if nodes_raw is None:
if isinstance(data, list):
nodes_raw = data
else:
for v in data.values():
if isinstance(v, list) and len(v) > 0 and isinstance(v[0], dict):
nodes_raw = v
break
if not nodes_raw:
print("警告: 未能从响应中提取节点列表")
print("响应顶层 keys:", list(data.keys()) if isinstance(data, dict) else type(data).__name__)
return []
result = []
for node in nodes_raw:
if not isinstance(node, dict):
continue
node_uuid = (
node.get("uuid")
or node.get("node_uuid")
or node.get("id")
or ""
)
resource_name = (
node.get("resource_template_name")
or node.get("device_id")
or node.get("resource_name")
or node.get("device_name")
or ""
)
template_name = (
node.get("node_template_name")
or node.get("action_name")
or node.get("template_name")
or node.get("action")
or node.get("name")
or ""
)
existing_param = node.get("param", {}) or {}
if node_uuid:
result.append((node_uuid, resource_name, template_name, existing_param))
return result
def generate_template(nodes, registry_index, rounds):
"""生成 notebook 提交模板"""
node_params = []
schema_info = {}
datas_template = []
for node_uuid, resource_name, template_name, existing_param in nodes:
param_template = {}
matched = False
if resource_name and template_name and resource_name in registry_index:
avm = registry_index[resource_name]
if template_name in avm:
goal_schema = flatten_goal_schema(avm[template_name])
param_template = build_param_template(goal_schema)
goal_default = avm[template_name].get("goal_default", {})
if goal_default:
for k, v in goal_default.items():
if k in param_template and v is not None:
param_template[k] = v
matched = True
schema_info[node_uuid] = {
"device_id": resource_name,
"action_name": template_name,
"action_type": avm[template_name].get("type", ""),
"schema_properties": list(goal_schema.get("properties", {}).keys()),
"required": goal_schema.get("required", []),
}
if not matched and existing_param:
param_template = existing_param
if not matched and not existing_param:
schema_info[node_uuid] = {
"device_id": resource_name,
"action_name": template_name,
"warning": "未在本地注册表中找到匹配的 action schema",
}
datas_template.append({
"node_uuid": node_uuid,
"param": param_template,
"sample_params": [
{
"container_uuid": "$TODO_CONTAINER_UUID",
"sample_value": {
"liquid_names": "$TODO_LIQUID_NAME",
"volumes": 0,
},
}
],
})
for i in range(rounds):
node_params.append({
"sample_uuids": [f"$TODO_SAMPLE_UUID_ROUND_{i + 1}"],  # 必须是数组,无样品时改为 []
"datas": copy.deepcopy(datas_template),
})
return {
"lab_uuid": "$TODO_LAB_UUID",
"workflow_uuid": "$TODO_WORKFLOW_UUID",
"name": "$TODO_EXPERIMENT_NAME",
"node_params": node_params,
"_schema_info": schema_info,  # 仅供参考,提交前删除此字段
}
def parse_args(argv):
"""简单的参数解析"""
opts = {
"auth": None,
"base": None,
"workflow_uuid": None,
"registry": None,
"rounds": 1,
"output": "notebook_template.json",
"dump_response": False,
}
i = 0
while i < len(argv):
arg = argv[i]
if arg == "--auth" and i + 1 < len(argv):
opts["auth"] = argv[i + 1]
i += 2
elif arg == "--base" and i + 1 < len(argv):
opts["base"] = argv[i + 1].rstrip("/")
i += 2
elif arg == "--workflow-uuid" and i + 1 < len(argv):
opts["workflow_uuid"] = argv[i + 1]
i += 2
elif arg == "--registry" and i + 1 < len(argv):
opts["registry"] = argv[i + 1]
i += 2
elif arg == "--rounds" and i + 1 < len(argv):
opts["rounds"] = int(argv[i + 1])
i += 2
elif arg == "--output" and i + 1 < len(argv):
opts["output"] = argv[i + 1]
i += 2
elif arg == "--dump-response":
opts["dump_response"] = True
i += 1
else:
print(f"未知参数: {arg}")
i += 1
return opts
def main():
opts = parse_args(sys.argv[1:])
if not opts["auth"] or not opts["base"] or not opts["workflow_uuid"]:
print("用法:")
print(" python gen_notebook_params.py --auth <token> --base <url> --workflow-uuid <uuid> [选项]")
print()
print("必需参数:")
print(" --auth <token> Lab tokenbase64(ak:sk)")
print(" --base <url> API 基础 URL")
print(" --workflow-uuid <uuid> 目标 workflow UUID")
print()
print("可选参数:")
print(" --registry <path> 注册表文件路径(默认自动搜索)")
print(" --rounds <n> 实验轮次数(默认 1")
print(" --output <path> 输出文件路径(默认 notebook_template.json")
print(" --dump-response 打印 API 原始响应")
sys.exit(1)
# 1. 查找并加载本地注册表
registry_path = find_registry(opts["registry"])
registry_index = {}
if registry_path:
mtime = os.path.getmtime(registry_path)
gen_time = datetime.fromtimestamp(mtime).strftime("%Y-%m-%d %H:%M:%S")
print(f"注册表: {registry_path} (生成时间: {gen_time})")
registry_data = load_registry(registry_path)
registry_index = build_registry_index(registry_data)
print(f"已索引 {len(registry_index)} 个设备的 action schemas")
else:
print("警告: 未找到本地注册表,将跳过 param 模板生成")
print(" 提交时需要手动填写各节点的 param 字段")
# 2. 获取 workflow 详情
print(f"\n正在获取 workflow 详情: {opts['workflow_uuid']}")
response = fetch_workflow_detail(opts["base"], opts["auth"], opts["workflow_uuid"])
if not response:
print("错误: 无法获取 workflow 详情")
sys.exit(1)
if opts["dump_response"]:
print("\n=== API 原始响应 ===")
print(json.dumps(response, indent=2, ensure_ascii=False)[:5000])
print("=== 响应结束(截断至 5000 字符) ===\n")
# 3. 提取节点
nodes = extract_nodes_from_response(response)
if not nodes:
print("错误: 未能从 workflow 中提取任何 action 节点")
print("请使用 --dump-response 查看原始响应结构")
sys.exit(1)
print(f"\n找到 {len(nodes)} 个 action 节点:")
print(f" {'节点 UUID':<40} {'设备 ID':<30} {'动作名':<25} {'Schema'}")
print(" " + "-" * 110)
for node_uuid, resource_name, template_name, _ in nodes:
matched = "✓" if (resource_name in registry_index and
template_name in registry_index.get(resource_name, {})) else "✗"
print(f" {node_uuid:<40} {resource_name:<30} {template_name:<25} {matched}")
# 4. 生成模板
template = generate_template(nodes, registry_index, opts["rounds"])
template["workflow_uuid"] = opts["workflow_uuid"]
output_path = opts["output"]
with open(output_path, "w", encoding="utf-8") as f:
json.dump(template, f, indent=2, ensure_ascii=False)
print(f"\n模板已写入: {output_path}")
print(f" 轮次数: {opts['rounds']}")
print(f" 节点数/轮: {len(nodes)}")
print()
print("下一步:")
print(" 1. 打开模板文件,将 $TODO 占位符替换为实际值")
print(" 2. 删除 _schema_info 字段(仅供参考)")
print(" 3. 使用 POST /api/v1/lab/notebook 提交")
if __name__ == "__main__":
main()


@@ -163,7 +163,7 @@ python ./scripts/extract_device_actions.py [--registry <path>] <device_id> ./ski
 ### Step 4 — 写 SKILL.md
-直接复用 `unilab-device-api` 的 API 模板10 个 endpoint,修改:
+直接复用 `unilab-device-api` 的 API 模板,修改:
 - 设备名称
 - Action 数量
 - 目录列表
@@ -181,15 +181,18 @@ API 模板结构:
 ## 前置条件(缺一不可)
 - ak/sk → AUTH, --addr → BASE URL
-## Session State
-- lab_uuid通过 API #1 自动匹配,不要问用户), device_name
-## API Endpoints (10 个)
-# 注意:
-# - #1 获取 lab 列表 + 自动匹配 lab_uuid遍历 is_admin 的 lab
-#   调用 /lab/info/{uuid} 比对 access_key == ak
-# - #2 创建工作流用 POST /lab/workflow
-# - #10 获取资源树路径含 lab_uuid: /lab/material/download/{lab_uuid}
+## 请求约定
+- Windows 平台必须用 curl.exe非 PowerShell 的 curl 别名)
+## Session State
+- lab_uuid通过 GET /edge/lab/info 直接获取,不要问用户), device_name
+## API Endpoints
+# - #1 GET /edge/lab/info → 直接拿到 lab_uuid
+# - #2 创建工作流 POST /lab/workflow/owner → 拼 URL 告知用户
+# - #3 创建节点 POST /edge/workflow/node
+#   body: {workflow_uuid, resource_template_name: "<device_id>", node_template_name: "<action_name>"}
+# - #10 获取资源树 GET /lab/material/download/{lab_uuid}
 ## Placeholder Slot 填写规则
 - unilabos_resources → ResourceSlot → {"id":"/path/name","name":"name","uuid":"xxx"}
@@ -206,7 +206,7 @@ API 模板结构:
 ### Step 5 — 验证
 检查文件完整性:
-- [ ] `SKILL.md` 包含 10 个 API endpoint
+- [ ] `SKILL.md` 包含 API endpoint#1 获取 lab_uuid、#2-#9 工作流/动作、#10 资源树)
 - [ ] `SKILL.md` 包含 Placeholder Slot 填写规则ResourceSlot / DeviceSlot / NodeSlot / ClassSlot + create_resource 特例)和本设备的 Slot 字段表
 - [ ] `action-index.md` 列出所有 action 并有描述
 - [ ] `actions/` 目录中每个 action 有对应 JSON 文件
@@ -249,7 +252,7 @@ API 模板结构:
 ```
 > **注意**`schema` 已由脚本从原始 `schema.properties.goal` 提升为顶层,直接包含参数定义。
-> `schema.properties` 中的字段即为 API 请求 `param.goal` 中的字段
+> `schema.properties` 中的字段即为 API 创建节点返回的 `data.param` 中的字段PATCH 更新时直接修改 `param` 即可
 ## Placeholder Slot 类型体系


@@ -0,0 +1,275 @@
---
name: submit-agent-result
description: Submit historical experiment results (agent_result) to Uni-Lab notebook — read data files, assemble JSON payload, PUT to cloud API. Use when the user wants to submit experiment results, upload agent results, report experiment data, or mentions agent_result/实验结果/历史记录/notebook结果.
---
# 提交历史实验记录指南
通过云端 API 向已创建的 notebook 提交实验结果数据agent_result。支持从 JSON / CSV 文件读取数据,整合后提交。
## 前置条件(缺一不可)
使用本指南前,**必须**先确认以下信息。如果缺少任何一项,**立即向用户询问并终止**,等补齐后再继续。
### 1. ak / sk → AUTH
询问用户的启动参数:从 `--ak` / `--sk` 命令行参数或 config.py 中获取。
生成 AUTH token
```bash
python -c "import base64,sys; print(base64.b64encode(f'{sys.argv[1]}:{sys.argv[2]}'.encode()).decode())" <ak> <sk>
```
输出即为 token 值,拼接为 `Authorization: Lab <token>`
### 2. --addr → BASE URL
| `--addr` 值 | BASE |
|-------------|------|
| `test` | `https://uni-lab.test.bohrium.com` |
| `uat` | `https://uni-lab.uat.bohrium.com` |
| `local` | `http://127.0.0.1:48197` |
| 不传(默认) | `https://uni-lab.bohrium.com` |
确认后设置:
```bash
BASE="<根据 addr 确定的 URL>"
AUTH="Authorization: Lab <上面命令输出的 token>"
```
### 3. notebook_uuid**必须询问用户**
**必须主动询问用户**:「请提供要提交结果的 notebook UUID。」
notebook_uuid 来自之前通过「批量提交实验」创建的实验批次,即 `POST /api/v1/lab/notebook` 返回的 `data.uuid`
如果用户不记得,可提示:
- 查看之前的对话记录中创建 notebook 时返回的 UUID
- 或通过平台页面查找对应的 notebook
**绝不能跳过此步骤,没有 notebook_uuid 无法提交。**
### 4. 实验结果数据
用户需要提供实验结果数据,支持以下方式:
| 方式 | 说明 |
|------|------|
| JSON 文件 | 直接作为 `agent_result` 的内容合并 |
| CSV 文件 | 转为 `{"文件名": [行数据...]}` 格式 |
| 手动指定 | 用户直接告知 key-value 数据,由 agent 构建 JSON |
**四项全部就绪后才可开始。**
## Session State
在整个对话过程中agent 需要记住以下状态:
- `lab_uuid` — 实验室 UUID通过 API #1 自动获取,**不需要问用户**
- `notebook_uuid` — 目标 notebook UUID**必须询问用户**
## 请求约定
所有请求使用 `curl -s`PUT 需加 `Content-Type: application/json`
> **Windows 平台**必须使用 `curl.exe`(而非 PowerShell 的 `curl` 别名),示例中的 `curl` 均指 `curl.exe`。
>
> **PowerShell JSON 传参**PowerShell 中 `-d '{"key":"value"}'` 会因引号转义失败。请将 JSON 写入临时文件,用 `-d '@tmp_body.json'`(单引号包裹 `@`,否则 `@` 会被 PowerShell 解析为 splatting 运算符导致报错)。
---
## API Endpoints
### 1. 获取实验室信息(自动获取 lab_uuid
```bash
curl -s -X GET "$BASE/api/v1/edge/lab/info" -H "$AUTH"
```
返回:
```json
{"code": 0, "data": {"uuid": "xxx", "name": "实验室名称"}}
```
记住 `data.uuid``lab_uuid`
### 2. Submit experiment results (agent_result)
```bash
curl -s -X PUT "$BASE/api/v1/lab/notebook/agent-result" \
  -H "$AUTH" -H "Content-Type: application/json" \
  -d '<request_body>'
```
Request body structure:
```json
{
  "notebook_uuid": "<notebook_uuid>",
  "agent_result": {
    "<key1>": "<value1>",
    "<key2>": 123,
    "<nested_key>": {"a": 1, "b": 2},
    "<array_key>": [{"col1": "v1", "col2": "v2"}, ...]
  }
}
```
> **Note**: the HTTP method is **PUT** (not POST).
#### Required fields
| Field | Type | Description |
|-------|------|-------------|
| `notebook_uuid` | string (UUID) | UUID of the target notebook, obtained at batch experiment submission |
| `agent_result` | object | Experiment result data; any JSON object |
#### agent_result content formats
`agent_result` accepts **any JSON object**. Common formats:
**Simple key-value pairs**:
```json
{
  "avg_rtt_ms": 12.5,
  "status": "success",
  "test_count": 5
}
```
**With nested structures**:
```json
{
  "summary": {"total": 100, "passed": 98, "failed": 2},
  "measurements": [
    {"sample_id": "S001", "value": 3.14, "unit": "mg/mL"},
    {"sample_id": "S002", "value": 2.71, "unit": "mg/mL"}
  ]
}
```
**Imported from a CSV file** (converted automatically by the script):
```json
{
  "experiment_data": [
    {"温度": 25, "压力": 101.3, "产率": 0.85},
    {"温度": 30, "压力": 101.3, "产率": 0.91}
  ]
}
```
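The CSV-to-rows conversion behind that format can be sketched in a few lines (a standalone sketch mirroring the numeric coercion the helper script performs; column names here are hypothetical):

```python
import csv
import io

def csv_to_rows(text: str):
    """Parse CSV text into a list of row dicts, coercing numeric strings to int/float."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        out = {}
        for k, v in row.items():
            for cast in (int, float):
                try:
                    out[k] = cast(v)
                    break
                except (ValueError, TypeError):
                    continue
            else:
                out[k] = v  # non-numeric values stay as strings
        rows.append(out)
    return rows

print(csv_to_rows("temp,yield\n25,0.85\n30,0.91"))
```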
---
## Helper Script
`scripts/prepare_agent_result.py`, located next to this document, reads the input files and builds the request body automatically.
### Usage
```bash
python scripts/prepare_agent_result.py \
  --notebook-uuid <uuid> \
  --files data1.json data2.csv \
  [--auth <token>] \
  [--base <BASE_URL>] \
  [--submit] \
  [--output <output.json>]
```
| Parameter | Required | Description |
|-----------|----------|-------------|
| `--notebook-uuid` | yes | Target notebook UUID |
| `--files` | yes | Input file paths (multiple allowed; JSON / CSV) |
| `--auth` | when submitting | Lab token, i.e. base64(ak:sk) |
| `--base` | when submitting | API base URL |
| `--submit` | no | If set, submit directly to the cloud |
| `--output` | no | Output JSON path (default `agent_result_body.json`) |
### File merge rules
| File type | Merge behavior |
|-----------|----------------|
| `.json` (dict) | Fields merged directly into the top level of `agent_result` |
| `.json` (list/other) | Stored in `agent_result` under the filename as key |
| `.csv` | Stored under the filename (without extension) as key; value is an array of row objects |
Fields from multiple files are merged. For duplicate keys across JSON dicts, later files overwrite earlier ones.
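The merge semantics in that table can be sketched as follows (standalone; `merge_payloads` is a hypothetical helper name operating on already-parsed file contents):

```python
def merge_payloads(named_payloads):
    """Merge (file_stem, parsed_content) pairs into one agent_result dict.

    dicts merge at the top level (later keys win); lists and other values
    are stored under the file stem as key.
    """
    merged = {}
    for stem, data in named_payloads:
        if isinstance(data, dict):
            merged.update(data)   # JSON dict: merge fields, later files overwrite
        else:
            merged[stem] = data   # list/other: keyed by filename stem
    return merged

print(merge_payloads([("a", {"x": 1}), ("b", {"x": 2, "y": 3}), ("c", [1, 2])]))
```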
### Examples
```bash
# Build the request body file only (no submission)
python scripts/prepare_agent_result.py \
  --notebook-uuid 73c67dca-c8cc-4936-85a0-329106aa7cca \
  --files results.json measurements.csv
# Build and submit in one step
python scripts/prepare_agent_result.py \
  --notebook-uuid 73c67dca-c8cc-4936-85a0-329106aa7cca \
  --files results.json \
  --auth YTFmZDlkNGUt... \
  --base https://uni-lab.test.bohrium.com \
  --submit
```
---
## Manual Construction
If you are not using the script, build the request body by hand:
1. Assemble the experiment result data into a JSON object.
2. Write it to a temporary file:
```json
{
  "notebook_uuid": "<uuid>",
  "agent_result": { ... }
}
```
3. Submit with curl:
```bash
curl -s -X PUT "$BASE/api/v1/lab/notebook/agent-result" \
  -H "$AUTH" -H "Content-Type: application/json" \
  -d '@tmp_body.json'
```
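The same PUT can be prepared from Python with only the standard library (a sketch; `build_submit_request` is a hypothetical helper name, and the UUID/token values are placeholders — send the request with `urllib.request.urlopen(req)`):

```python
import json
import urllib.request

ENDPOINT = "/api/v1/lab/notebook/agent-result"

def build_submit_request(base, auth_header, notebook_uuid, agent_result):
    """Build the PUT request for agent-result submission."""
    body = json.dumps(
        {"notebook_uuid": notebook_uuid, "agent_result": agent_result},
        ensure_ascii=False,
    ).encode("utf-8")
    return urllib.request.Request(
        f"{base}{ENDPOINT}",
        data=body,
        method="PUT",
        headers={"Authorization": auth_header, "Content-Type": "application/json"},
    )

req = build_submit_request(
    "https://uni-lab.test.bohrium.com", "Lab abc123",
    "73c67dca-c8cc-4936-85a0-329106aa7cca", {"status": "success"},
)
print(req.get_method(), req.full_url)
```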
---
## Full Workflow Checklist
```
Task Progress:
- [ ] Step 1: Confirm ak/sk → generate AUTH token
- [ ] Step 2: Confirm --addr → set BASE URL
- [ ] Step 3: GET /edge/lab/info → fetch lab_uuid
- [ ] Step 4: **Ask the user** for notebook_uuid (mandatory, never skip)
- [ ] Step 5: Confirm the source of the result data (file paths or manual input)
- [ ] Step 6: Run prepare_agent_result.py or build the request body manually
- [ ] Step 7: PUT /lab/notebook/agent-result to submit
- [ ] Step 8: Check the response and confirm the submission succeeded
```
---
## FAQ
### Q: Where does notebook_uuid come from?
From the `data.uuid` field returned by `POST /api/v1/lab/notebook` during the earlier "batch experiment submission". It can also be looked up in the platform UI.
### Q: Does agent_result have a fixed schema?
No strict schema; any JSON object is accepted. Meaningful field names and structured data are recommended to ease later analysis.
### Q: Can results be submitted to the same notebook more than once?
Yes. Later submissions overwrite the previous agent_result.
### Q: Is the auth scheme Lab or Api?
This guide consistently uses `Authorization: Lab <base64(ak:sk)>`. If the user has a standalone API key, `Authorization: Api <key>` works as an alternative.
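The two schemes differ only in the prefix; a tiny helper makes that explicit (a sketch — `auth_header` is a hypothetical name, and the token value is a placeholder):

```python
def auth_header(scheme: str, credential: str) -> str:
    """scheme: 'Lab' for base64(ak:sk) tokens, 'Api' for standalone API keys."""
    if scheme not in ("Lab", "Api"):
        raise ValueError(f"unknown auth scheme: {scheme}")
    return f"{scheme} {credential}"

print(auth_header("Lab", "bXktYWs6bXktc2s="))
```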

View File

@@ -0,0 +1,133 @@
"""
读取实验结果文件JSON / CSV整合为 agent_result 请求体并可选提交。
用法:
python prepare_agent_result.py \
--notebook-uuid <uuid> \
--files data1.json data2.csv \
[--auth <Lab token>] \
[--base <BASE_URL>] \
[--submit] \
[--output <output.json>]
支持的输入文件格式:
- .json → 直接作为 dict 合并
- .csv → 转为 {"filename": [row_dict, ...]} 格式
"""
import argparse
import base64
import csv
import json
import os
import sys
from pathlib import Path
from typing import Any, Dict, List
def read_json_file(filepath: str) -> Dict[str, Any]:
with open(filepath, "r", encoding="utf-8") as f:
return json.load(f)
def read_csv_file(filepath: str) -> List[Dict[str, Any]]:
rows = []
with open(filepath, "r", encoding="utf-8-sig") as f:
reader = csv.DictReader(f)
for row in reader:
converted = {}
for k, v in row.items():
try:
converted[k] = int(v)
except (ValueError, TypeError):
try:
converted[k] = float(v)
except (ValueError, TypeError):
converted[k] = v
rows.append(converted)
return rows
def merge_files(filepaths: List[str]) -> Dict[str, Any]:
"""将多个文件合并为一个 agent_result dict"""
merged: Dict[str, Any] = {}
for fp in filepaths:
path = Path(fp)
ext = path.suffix.lower()
key = path.stem
if ext == ".json":
data = read_json_file(fp)
if isinstance(data, dict):
merged.update(data)
else:
merged[key] = data
elif ext == ".csv":
merged[key] = read_csv_file(fp)
else:
print(f"[警告] 不支持的文件格式: {fp},跳过", file=sys.stderr)
return merged
def build_request_body(notebook_uuid: str, agent_result: Dict[str, Any]) -> Dict[str, Any]:
return {
"notebook_uuid": notebook_uuid,
"agent_result": agent_result,
}
def submit(base: str, auth: str, body: Dict[str, Any]) -> Dict[str, Any]:
try:
import requests
except ImportError:
print("[错误] 提交需要 requests 库: pip install requests", file=sys.stderr)
sys.exit(1)
url = f"{base}/api/v1/lab/notebook/agent-result"
headers = {
"Content-Type": "application/json",
"Authorization": f"Lab {auth}",
}
resp = requests.put(url, json=body, headers=headers, timeout=30)
return {"status_code": resp.status_code, "body": resp.json() if resp.headers.get("content-type", "").startswith("application/json") else resp.text}
def main():
parser = argparse.ArgumentParser(description="整合实验结果文件并构建 agent_result 请求体")
parser.add_argument("--notebook-uuid", required=True, help="目标 notebook UUID")
parser.add_argument("--files", nargs="+", required=True, help="输入文件路径JSON / CSV")
parser.add_argument("--auth", help="Lab tokenbase64(ak:sk)")
parser.add_argument("--base", help="API base URL")
parser.add_argument("--submit", action="store_true", help="直接提交到云端")
parser.add_argument("--output", default="agent_result_body.json", help="输出 JSON 文件路径")
args = parser.parse_args()
for fp in args.files:
if not os.path.exists(fp):
print(f"[错误] 文件不存在: {fp}", file=sys.stderr)
sys.exit(1)
agent_result = merge_files(args.files)
body = build_request_body(args.notebook_uuid, agent_result)
with open(args.output, "w", encoding="utf-8") as f:
json.dump(body, f, ensure_ascii=False, indent=2)
print(f"[完成] 请求体已保存: {args.output}")
print(f" notebook_uuid: {args.notebook_uuid}")
print(f" agent_result 字段数: {len(agent_result)}")
print(f" 合并文件数: {len(args.files)}")
if args.submit:
if not args.auth or not args.base:
print("[错误] 提交需要 --auth 和 --base 参数", file=sys.stderr)
sys.exit(1)
print(f"\n[提交] PUT {args.base}/api/v1/lab/notebook/agent-result ...")
result = submit(args.base, args.auth, body)
print(f" HTTP {result['status_code']}")
print(f" 响应: {json.dumps(result['body'], ensure_ascii=False)}")
if __name__ == "__main__":
main()

View File

@@ -754,6 +754,32 @@ class MessageProcessor:
         req = JobAddReq(**data)
         job_log = format_job_log(req.job_id, req.task_id, req.device_id, req.action)
+        # 服务端对always_free动作可能跳过query_action_state直接发job_start
+        # 此时job尚未注册需要自动补注册
+        existing_job = self.device_manager.get_job_info(req.job_id)
+        if not existing_job:
+            action_name = req.action
+            device_action_key = f"/devices/{req.device_id}/{action_name}"
+            action_always_free = self._check_action_always_free(req.device_id, action_name)
+            if action_always_free:
+                job_info = JobInfo(
+                    job_id=req.job_id,
+                    task_id=req.task_id,
+                    device_id=req.device_id,
+                    action_name=action_name,
+                    device_action_key=device_action_key,
+                    status=JobStatus.QUEUE,
+                    start_time=time.time(),
+                    always_free=True,
+                )
+                self.device_manager.add_queue_request(job_info)
+                logger.info(f"[MessageProcessor] Job {job_log} always_free, auto-registered from direct job_start")
+            else:
+                logger.error(f"[MessageProcessor] Job {job_log} not registered (missing query_action_state)")
+                return
         success = self.device_manager.start_job(req.job_id)
         if not success:
             logger.error(f"[MessageProcessor] Failed to start job {job_log}")

View File

@@ -57,7 +57,7 @@ class VirtualSampleDemo:
         readings.append(round(random.uniform(0.1, 1.0), 4))
         samples.append(idx)
-        return {"volumes": out_volumes, "readings": readings, "samples": samples}
+        return {"volumes": out_volumes, "readings": readings, "unilabos_samples": samples}
     # ------------------------------------------------------------------
     # Action 3: 入参和出参都带 samples 列(不等长)
@@ -78,7 +78,7 @@ class VirtualSampleDemo:
         scores.append(score)
         passed.append(r >= threshold)
-        return {"scores": scores, "passed": passed, "samples": samples}
+        return {"scores": scores, "passed": passed, "unilabos_samples": samples}
     # ------------------------------------------------------------------
     # 状态属性

View File

@@ -679,14 +679,17 @@ def _resolve_name(name: str, import_map: Dict[str, str]) -> str:
     return name
+_DECORATOR_ENUM_CLASSES = frozenset({"Side", "DataSource", "NodeType"})
 def _resolve_attribute(node: ast.Attribute, import_map: Dict[str, str]) -> str:
     """
     Resolve an attribute access like Side.NORTH or DataSource.HANDLE.
-    Returns a string like "NORTH" for enum values, or
-    "module.path:Class.attr" for imported references.
+    对于来自 ``unilabos.registry.decorators`` 的枚举类 (Side / DataSource / NodeType)
+    直接返回枚举成员名 (如 ``"NORTH"`` / ``"HANDLE"`` / ``"MANUAL_CONFIRM"``)
+    省去消费端二次 rsplit 解析。其它 import 仍返回完整模块路径。
     """
-    # Get the full dotted path
     parts = []
     current = node
     while isinstance(current, ast.Attribute):
@@ -696,21 +699,20 @@ def _resolve_attribute(node: ast.Attribute, import_map: Dict[str, str]) -> str:
         parts.append(current.id)
     parts.reverse()
-    # parts = ["Side", "NORTH"] or ["DataSource", "HANDLE"]
+    # parts = ["Side", "NORTH"] or ["DataSource", "HANDLE"] or ["NodeType", "MANUAL_CONFIRM"]
     if len(parts) >= 2:
         base = parts[0]
         attr = ".".join(parts[1:])
-        # If the base is an imported name, resolve it
+        if base in _DECORATOR_ENUM_CLASSES:
+            source = import_map.get(base, "")
+            if not source or _REGISTRY_DECORATOR_MODULE in source:
+                return parts[-1]
         if base in import_map:
             return f"{import_map[base]}.{attr}"
-        # For known enum-like patterns, return just the value
-        # e.g. Side.NORTH -> "NORTH"
-        if base in ("Side", "DataSource"):
-            return parts[-1]
     return ".".join(parts)

View File

@@ -8,7 +8,7 @@ Usage:
     device, action, resource,
     InputHandle, OutputHandle,
     ActionInputHandle, ActionOutputHandle,
-    HardwareInterface, Side, DataSource,
+    HardwareInterface, Side, DataSource, NodeType,
 )
 @device(
@@ -73,6 +73,13 @@ class DataSource(str, Enum):
     EXECUTOR = "executor"  # 从执行器输出数据 (用于 OutputHandle)
+class NodeType(str, Enum):
+    """动作的节点类型(用于区分 ILab 节点和人工确认节点等)"""
+    ILAB = "ILab"
+    MANUAL_CONFIRM = "manual_confirm"
 # ---------------------------------------------------------------------------
 # Device / Resource Handle (设备/资源级别端口, 序列化时包含 io_type)
 # ---------------------------------------------------------------------------
@@ -335,6 +342,7 @@ def action(
     description: str = "",
     auto_prefix: bool = False,
     parent: bool = False,
+    node_type: Optional["NodeType"] = None,
 ):
     """
     动作方法装饰器
@@ -365,6 +373,8 @@ def action(
         description: 动作描述
         auto_prefix: 若为 True动作名使用 auto-{method_name} 形式(与无 @action 时一致)
         parent: 若为 True当方法参数为空 (*args, **kwargs) 时,通过 MRO 从父类获取真实方法参数
+        node_type: 动作的节点类型 (NodeType.ILAB / NodeType.MANUAL_CONFIRM)。
+            不填写时不写入注册表。
     """
     def decorator(func: F) -> F:
@@ -389,6 +399,8 @@ def action(
             "auto_prefix": auto_prefix,
             "parent": parent,
         }
+        if node_type is not None:
+            meta["node_type"] = node_type.value if isinstance(node_type, NodeType) else str(node_type)
         wrapper._action_registry_meta = meta  # type: ignore[attr-defined]
         # 设置 _is_always_free 保持与旧 @always_free 装饰器兼容
@@ -515,6 +527,38 @@ def clear_registry():
     _registered_resources.clear()
+# ---------------------------------------------------------------------------
+# 枚举值归一化
+# ---------------------------------------------------------------------------
+def normalize_enum_value(raw: Any, enum_cls) -> Optional[str]:
+    """将 AST 提取的枚举成员名 / YAML 值字符串 / 旧格式长路径统一归一化为枚举值。
+    适用于 Side、DataSource、NodeType 等继承自 ``str, Enum`` 的装饰器枚举。
+    处理以下格式:
+    - "MANUAL_CONFIRM" → NodeType["MANUAL_CONFIRM"].value = "manual_confirm"
+    - "manual_confirm" → NodeType("manual_confirm").value = "manual_confirm"
+    - "HANDLE" → DataSource["HANDLE"].value = "handle"
+    - "NORTH" → Side["NORTH"].value = "NORTH"
+    - 旧缓存长路径 "unilabos...NodeType.MANUAL_CONFIRM" → 先 rsplit 再查找
+    """
+    if not raw:
+        return None
+    raw_str = str(raw)
+    if "." in raw_str:
+        raw_str = raw_str.rsplit(".", 1)[-1]
+    try:
+        return enum_cls[raw_str].value
+    except KeyError:
+        pass
+    try:
+        return enum_cls(raw_str).value
+    except ValueError:
+        return raw_str
 # ---------------------------------------------------------------------------
 # topic_config / not_action / always_free 装饰器
 # ---------------------------------------------------------------------------

View File

@@ -2815,8 +2815,8 @@ virtual_sample_demo:
         readings: readings
         samples: samples
       goal_default:
-        readings: []
-        samples: []
+        readings: null
+        samples: null
       handles:
         input:
         - data_key: readings
@@ -2846,18 +2846,12 @@ virtual_sample_demo:
           handler_key: samples_result_out
           label: 样品索引
       placeholder_keys: {}
-      result:
-        passed: passed
-        samples: samples
-        scores: scores
+      result: {}
       schema:
         description: 对 split_and_measure 输出做二次分析,入参和出参都带 samples 列
         properties:
           feedback:
-            properties: {}
-            required: []
             title: AnalyzeReadings_Feedback
-            type: object
           goal:
             properties:
               readings:
@@ -2876,52 +2870,11 @@ virtual_sample_demo:
             title: AnalyzeReadings_Goal
             type: object
           result:
-            properties:
-              passed:
-                description: 是否通过阈值
-                items:
-                  type: boolean
-                type: array
-              samples:
-                description: 每行归属的输入样品 index (0-based)
-                items:
-                  type: integer
-                type: array
-              scores:
-                description: 分析得分
-                items:
-                  type: number
-                type: array
-            required:
-            - scores
-            - passed
-            - samples
             title: AnalyzeReadings_Result
             type: object
         required:
         - goal
-        title: AnalyzeReadings
-        type: object
-      type: UniLabJsonCommandAsync
-    auto-cleanup:
-      feedback: {}
-      goal: {}
-      goal_default: {}
-      handles: {}
-      placeholder_keys: {}
-      result: {}
-      schema:
-        description: cleanup的参数schema
-        properties:
-          feedback: {}
-          goal:
-            properties: {}
-            required: []
-            type: object
-          result: {}
-        required:
-        - goal
-        title: cleanup参数
+        title: analyze_readings参数
         type: object
       type: UniLabJsonCommandAsync
     measure_samples:
@@ -2929,7 +2882,7 @@ virtual_sample_demo:
       goal:
         concentrations: concentrations
       goal_default:
-        concentrations: []
+        concentrations: null
       handles:
         output:
         - data_key: concentrations
@@ -2943,17 +2896,12 @@ virtual_sample_demo:
           handler_key: absorbance_out
           label: 吸光度列表
       placeholder_keys: {}
-      result:
-        absorbance: absorbance
-        concentrations: concentrations
+      result: {}
       schema:
         description: 模拟光度测量,入参出参等长
         properties:
           feedback:
-            properties: {}
-            required: []
             title: MeasureSamples_Feedback
-            type: object
           goal:
             properties:
               concentrations:
@@ -2966,25 +2914,11 @@ virtual_sample_demo:
             title: MeasureSamples_Goal
             type: object
           result:
-            properties:
-              absorbance:
-                description: 吸光度列表(与浓度等长)
-                items:
-                  type: number
-                type: array
-              concentrations:
-                description: 原始浓度列表
-                items:
-                  type: number
-                type: array
-            required:
-            - concentrations
-            - absorbance
             title: MeasureSamples_Result
             type: object
         required:
         - goal
-        title: MeasureSamples
+        title: measure_samples参数
         type: object
       type: UniLabJsonCommandAsync
     split_and_measure:
@@ -2994,7 +2928,7 @@ virtual_sample_demo:
         volumes: volumes
       goal_default:
         split_count: 3
-        volumes: []
+        volumes: null
       handles:
         output:
         - data_key: readings
@@ -3013,21 +2947,16 @@ virtual_sample_demo:
           handler_key: volumes_out
           label: 均分体积
       placeholder_keys: {}
-      result:
-        readings: readings
-        samples: samples
-        volumes: volumes
+      result: {}
       schema:
         description: 均分样品后逐份测量,输出带 samples 列标注归属
         properties:
           feedback:
-            properties: {}
-            required: []
             title: SplitAndMeasure_Feedback
-            type: object
           goal:
             properties:
               split_count:
-                default: 3
                 description: 每个样品均分的份数
                 type: integer
               volumes:
@@ -3040,31 +2969,11 @@ virtual_sample_demo:
             title: SplitAndMeasure_Goal
             type: object
           result:
-            properties:
-              readings:
-                description: 测量读数
-                items:
-                  type: number
-                type: array
-              samples:
-                description: 每行归属的输入样品 index (0-based)
-                items:
-                  type: integer
-                type: array
-              volumes:
-                description: 均分后的体积列表
-                items:
-                  type: number
-                type: array
-            required:
-            - volumes
-            - readings
-            - samples
             title: SplitAndMeasure_Result
             type: object
         required:
         - goal
-        title: SplitAndMeasure
+        title: split_and_measure参数
         type: object
       type: UniLabJsonCommandAsync
   module: unilabos.devices.virtual.virtual_sample_demo:VirtualSampleDemo
@@ -3079,7 +2988,7 @@ virtual_sample_demo:
     config:
       properties:
         config:
-          type: string
+          type: object
         device_id:
           type: string
       required: []

View File

@@ -33,6 +33,8 @@ from unilabos.registry.decorators import (
     is_not_action,
     is_always_free,
     get_topic_config,
+    NodeType,
+    normalize_enum_value,
 )
 from unilabos.registry.utils import (
     ROSMsgNotFound,
@@ -159,9 +161,10 @@ class Registry:
         ast_entry = self.device_type_registry.get("host_node", {})
         ast_actions = ast_entry.get("class", {}).get("action_value_mappings", {})
-        # 取出 AST 生成的 auto-method entries, 补充特定覆写
+        # 取出 AST 生成的 action entries, 补充特定覆写
         test_latency_action = ast_actions.get("auto-test_latency", {})
         test_resource_action = ast_actions.get("auto-test_resource", {})
+        manual_confirm_action = ast_actions.get("manual_confirm", {})
         test_resource_action["handles"] = {
             "input": [
                 {
@@ -234,9 +237,11 @@ class Registry:
                         "parent": "unilabos_nodes",
                         "class_name": "unilabos_class",
                     },
+                    "always_free": True,
                 },
                 "test_latency": test_latency_action,
                 "auto-test_resource": test_resource_action,
+                "manual_confirm": manual_confirm_action,
             },
             "init_params": {},
         },
@@ -847,6 +852,9 @@ class Registry:
         }
         if (action_args or {}).get("always_free") or method_info.get("always_free"):
             entry["always_free"] = True
+        nt = normalize_enum_value((action_args or {}).get("node_type"), NodeType)
+        if nt:
+            entry["node_type"] = nt
         return action_name, entry
     # 1) auto- actions
@@ -971,6 +979,9 @@ class Registry:
         }
         if action_args.get("always_free") or method_info.get("always_free"):
             action_entry["always_free"] = True
+        nt = normalize_enum_value(action_args.get("node_type"), NodeType)
+        if nt:
+            action_entry["node_type"] = nt
         action_value_mappings[action_name] = action_entry
         action_value_mappings = dict(sorted(action_value_mappings.items()))
@@ -1153,7 +1164,7 @@ class Registry:
             return Path(BasicConfig.working_dir) / "registry_cache.pkl"
         return None
-    _CACHE_VERSION = 3
+    _CACHE_VERSION = 4
     def _load_config_cache(self) -> dict:
         import pickle
@@ -1878,6 +1889,9 @@ class Registry:
             }
             if v.get("always_free"):
                 entry["always_free"] = True
+            old_node_type = old_cfg.get("node_type")
+            if old_node_type in [NodeType.ILAB.value, NodeType.MANUAL_CONFIRM.value]:
+                entry["node_type"] = old_node_type
             device_config["class"]["action_value_mappings"][action_key] = entry
         device_config["init_param_schema"] = {}

View File

@@ -17,6 +17,7 @@ from typing import Any, Dict, List, Optional, Tuple, Union
 from msgcenterpy.instances.typed_dict_instance import TypedDictMessageInstance
 from unilabos.utils.cls_creator import import_class
+from unilabos.registry.decorators import Side, DataSource, normalize_enum_value
 _logger = logging.getLogger(__name__)
@@ -487,10 +488,7 @@ def normalize_ast_handles(handles_raw: Any) -> List[Dict[str, Any]]:
         }
         side = h.get("side")
         if side:
-            if isinstance(side, str) and "." in side:
-                val = side.rsplit(".", 1)[-1]
-                side = val.lower() if val in ("LEFT", "RIGHT", "TOP", "BOTTOM") else val
-            entry["side"] = side
+            entry["side"] = normalize_enum_value(side, Side) or side
         label = h.get("label")
         if label:
             entry["label"] = label
@@ -499,10 +497,7 @@ def normalize_ast_handles(handles_raw: Any) -> List[Dict[str, Any]]:
             entry["data_key"] = data_key
         data_source = h.get("data_source")
         if data_source:
-            if isinstance(data_source, str) and "." in data_source:
-                val = data_source.rsplit(".", 1)[-1]
-                data_source = val.lower() if val in ("HANDLE", "EXECUTOR") else val
-            entry["data_source"] = data_source
+            entry["data_source"] = normalize_enum_value(data_source, DataSource) or data_source
         description = h.get("description")
         if description:
             entry["description"] = description
@@ -537,17 +532,12 @@ def normalize_ast_action_handles(handles_raw: Any) -> Dict[str, Any]:
             "data_type": h.get("data_type", ""),
             "label": h.get("label", ""),
         }
+        _FIELD_ENUM_MAP = {"side": Side, "data_source": DataSource}
         for opt_key in ("side", "data_key", "data_source", "description", "io_type"):
             val = h.get(opt_key)
             if val is not None:
-                # Only resolve enum-style refs (e.g. DataSource.HANDLE -> handle) for data_source/side
-                # data_key values like "wells.@flatten", "@this.0@@@plate" must be preserved as-is
-                if (
-                    isinstance(val, str)
-                    and "." in val
-                    and opt_key not in ("io_type", "data_key")
-                ):
-                    val = val.rsplit(".", 1)[-1].lower()
+                if opt_key in _FIELD_ENUM_MAP:
+                    val = normalize_enum_value(val, _FIELD_ENUM_MAP[opt_key]) or val
                 entry[opt_key] = val
         # io_type: only add when explicitly set; do not default output to "sink" (YAML convention omits it)

View File

@@ -24,7 +24,7 @@ from unilabos_msgs.srv import (
 from unilabos_msgs.srv._serial_command import SerialCommand_Request, SerialCommand_Response
 from unique_identifier_msgs.msg import UUID
-from unilabos.registry.decorators import device
+from unilabos.registry.decorators import device, action, NodeType
 from unilabos.registry.placeholder_type import ResourceSlot, DeviceSlot
 from unilabos.registry.registry import lab_registry
 from unilabos.resources.container import RegularContainer
@@ -313,7 +313,9 @@ class HostNode(BaseROS2DeviceNode):
                 callback_group=self.callback_group,
             ),
         }  # 用来存储多个ActionClient实例
-        self._action_value_mappings: Dict[str, Dict] = {}  # device_id -> action_value_mappings(本地+远程设备统一存储)
+        self._action_value_mappings: Dict[str, Dict] = {
+            device_id: self._action_value_mappings
+        }  # device_id -> action_value_mappings(本地+远程设备统一存储)
         self._slave_registry_configs: Dict[str, Dict] = {}  # registry_name -> registry_config(含action_value_mappings)
         self._goals: Dict[str, Any] = {}  # 用来存储多个目标的状态
         self._online_devices: Set[str] = {f"{self.namespace}/{device_id}"}  # 用于跟踪在线设备
@@ -1621,6 +1623,18 @@ class HostNode(BaseROS2DeviceNode):
         }
         return res
+    @action(always_free=True, node_type=NodeType.MANUAL_CONFIRM, placeholder_keys={
+        "assignee_user_ids": "unilabos_manual_confirm"
+    }, goal_default={
+        "timeout_seconds": 3600,
+        "assignee_user_ids": []
+    })
+    def manual_confirm(self, timeout_seconds: int, assignee_user_ids: list[str], **kwargs) -> dict:
+        """
+        timeout_seconds: 超时时间默认3600秒
+        """
+        return kwargs
     def test_resource(
         self,
         sample_uuids: SampleUUIDsType,

View File

@@ -80,11 +80,12 @@ def get_result_info_str(error: str, suc: bool, return_value=None) -> str:
     Returns:
         JSON字符串格式的结果信息
     """
-    samples = None
-    if isinstance(return_value, dict):
-        if "samples" in return_value and type(return_value["samples"]) in [list, tuple] and type(return_value["samples"][0]) == dict:
-            samples = return_value.pop("samples")
-    result_info = {"error": error, "suc": suc, "return_value": return_value, "samples": samples}
+    # 请在返回的字典中使用 unilabos_samples进行返回
+    # samples = None
+    # if isinstance(return_value, dict):
+    #     if "samples" in return_value and type(return_value["samples"]) in [list, tuple] and type(return_value["samples"][0]) == dict:
+    #         samples = return_value.pop("samples")
+    result_info = {"error": error, "suc": suc, "return_value": return_value}
     return json.dumps(result_info, ensure_ascii=False, cls=ResultInfoEncoder)