The PipeCondition controller allows you to implement conditional logic in your pipeline, choosing which pipe to execute based on an evaluated expression. It supports both direct expressions and expression templates.
## Usage in TOML Configuration
### Basic Usage with Direct Expression
```toml
[pipe.conditional_operation]
PipeCondition = "A conditional pipe to decide whether..."
inputs = { input_data = "CategoryInput" }
output = "native.Text"
expression = "input_data.category"

[pipe.conditional_operation.pipe_map]
small = "process_small"
medium = "process_medium"
large = "process_large"
```
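Conceptually, the controller evaluates `expression` against the inputs and uses the result as a key into `pipe_map`. A plain-Python sketch of that dispatch (illustrative only, not the Pipelex implementation):

```python
# Sketch of PipeCondition-style routing -- illustrative, not Pipelex code.
pipe_map = {
    "small": "process_small",
    "medium": "process_medium",
    "large": "process_large",
}

def choose_pipe(input_data: dict, pipe_map: dict) -> str:
    # Evaluate the expression "input_data.category"...
    category = input_data["category"]
    # ...and use the result as the key into pipe_map.
    return pipe_map[category]

choose_pipe({"category": "medium"}, pipe_map)  # "process_medium"
```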
or
```toml
[pipe.conditional_operation]
PipeCondition = "A conditional pipe to decide whether..."
```
Place your Pydantic models in `pipelex_libraries/pipelines/your_models.py`:
```python
from pipelex.core.stuff_content import StructuredContent


class PersonInfo(StructuredContent):  # output models must always subclass StructuredContent
    name: str
    age: int
    email: str
```
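A PipeLLM can then produce this structure by naming it as its output. The fragment below is hypothetical (the pipe name and prompt are illustrative, and it assumes a matching `PersonInfo` concept is declared in your pipeline library):

```toml
[pipe.extract_person]
PipeLLM = "Extract a person's details"
output = "PersonInfo"
prompt_template = "Extract the name, age and email from this text"
```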
## Advanced Features
### LLM Settings
You can specify LLM settings in two ways:
1. **Direct in the pipe**:
```toml
[pipe.analyze]
PipeLLM = "Analyze text"
output = "Analysis"
llm = { llm_handle = "gpt-4", temperature = 0.7 }
prompt_template = "Analyze this text"
```
2. **Using predefined settings** from `pipelex_libraries/llm_deck/base_llm_deck.toml`:
```toml
[pipe.analyze]
PipeLLM = "Analyze text"
output = "Analysis"
llm = "llm_for_analysis" # References a preset from llm_deck
prompt_template = "Analyze this text"
```
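In other words, the `llm` field is either an inline table or a string naming a preset from the deck. A minimal sketch of that resolution (assumed behavior, not the Pipelex implementation; the deck contents below are made up):

```python
# Sketch of resolving an `llm` setting -- assumed behavior, not Pipelex code.
# This dict is a made-up stand-in for presets defined in the llm_deck TOML files.
LLM_DECK = {
    "llm_for_analysis": {"llm_handle": "gpt-4", "temperature": 0.2},
}

def resolve_llm(setting):
    if isinstance(setting, str):
        # A string references a named preset from the deck.
        return LLM_DECK[setting]
    # An inline table is already a complete LLM configuration.
    return setting

resolve_llm("llm_for_analysis")  # {"llm_handle": "gpt-4", "temperature": 0.2}
```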
### System Prompts
Add system-level instructions:
```toml
[pipe.expert_analysis]
PipeLLM = "Expert analysis"
output = "Analysis"
system_prompt = "You are a data analysis expert"
prompt_template = "Analyze this data"
```
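In the usual chat-API shape, the system prompt and the rendered prompt template end up as separate messages in the request. A generic sketch (standard chat-message structure, not Pipelex internals):

```python
def build_messages(system_prompt, user_prompt):
    # The system prompt, when present, becomes a leading "system" message;
    # the rendered prompt template becomes the "user" message.
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages

build_messages("You are a data analysis expert", "Analyze this data")
```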
### Multiple Outputs
Generate multiple results:
```toml
[pipe.generate_ideas]
PipeLLM = "Generate ideas"
output = "Idea"
nb_output = 3 # Generate exactly 3 ideas
# OR
multiple_output = true # Let the LLM decide how many to generate
```
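The two settings express competing intents, a fixed count versus letting the LLM decide, so use one or the other. A small sketch of that constraint (an assumption based on the OR in the example above, not the actual Pipelex validation code):

```python
def check_output_settings(nb_output=None, multiple_output=False):
    # nb_output pins an exact count; multiple_output delegates the count
    # to the LLM. Assumed constraint: they are mutually exclusive.
    if nb_output is not None and multiple_output:
        raise ValueError("Set either nb_output or multiple_output, not both")
    if nb_output is not None:
        return nb_output
    return "variable" if multiple_output else 1

check_output_settings(nb_output=3)  # 3
```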
### Vision Tasks
Process images with VLMs:
```toml
[pipe.analyze_image]
PipeLLM = "Analyze image"
inputs = { image = "Image" } # `image` is the name of the stuff that contains the Image. If it is nested inside another stuff, use a dotted path such as `{ "page.image" = "Image" }`
output = "ImageAnalysis"
prompt_template = "Describe what you see in this image"
```
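The dotted form like `page.image` addresses an image nested inside another stuff. A plain-Python sketch of resolving such a path (illustrative, not Pipelex's actual lookup code):

```python
def resolve_path(memory: dict, path: str):
    # Walk a dotted path: "page.image" -> memory["page"]["image"].
    obj = memory
    for part in path.split("."):
        obj = obj[part]
    return obj

memory = {"page": {"image": "<Image object>"}}
resolve_path(memory, "page.image")  # "<Image object>"
```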
## Important tip

Always run `pipelex validate` from the CLI when you have finished writing pipelines: it checks your pipelines for errors. If errors are reported, fix them and validate again.