template management for workflow automation
This commit is contained in:
parent 78157324ac
commit 94128bfc46
6 changed files with 207 additions and 393 deletions
@@ -1,285 +0,0 @@
# Pydantic Class Enhancement Proposal

## Format Tracking & Validation Alignment

**Date:** 2025-11-02

**Purpose:** Align validation logic with prompt requirements, enable workflow-level validation, and track expected file formats

**Simplified Approach:** Use existing document metadata (name, size, format, mimeType) - no summary fields needed

---
## Executive Summary

This proposal addresses:

1. **Validation alignment**: What prompts ask for matches what validators check
2. **Workflow-level validation**: Check ALL deliverables from ALL tasks against the original user request
3. **Format tracking**: Track expected formats (as a list) at workflow and task levels
4. **Adaptive task planning**: The next task uses ALL workflow data (messages, document metadata) to refine its objective

**Key Simplification:** Actions deliver documents with metadata (as today). No summary fields needed - use existing document metadata.

---
## 1. ActionResult Class Changes

**File:** `gateway/modules/datamodels/datamodelChat.py` (lines 483-521)

### NO CHANGES NEEDED

**Current Structure (KEEP ALL - ALL USED):**
- ✅ `success: bool` - Used by validation
- ✅ `error: Optional[str]` - Used for error handling
- ✅ `documents: List[ActionDocument]` - Contains document metadata (name, data, mimeType)
- ✅ `resultLabel: Optional[str]` - Used for document routing

**Documents already provide all needed metadata:**
- `documentName` - File name
- `documentData` - Content
- `mimeType` - MIME type (the format can be derived from this)

**No summary field needed** - document metadata is sufficient.

---
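Since the proposal derives formats from document metadata instead of storing them, a small helper makes the rule concrete. This is a minimal sketch (the function name is hypothetical, not part of the codebase), assuming the preference order is file-name extension first, then the MIME type registry:

```python
import mimetypes

def formatFromMimeType(mimeType: str, documentName: str = "") -> str:
    """Derive a file format extension ('pdf', 'txt', ...) from document metadata.

    Prefers the file name's extension and falls back to the MIME type.
    """
    # 1. The file name extension is the most direct signal
    if "." in documentName:
        return documentName.rsplit(".", 1)[-1].lower()
    # 2. Fall back to the stdlib MIME type registry
    ext = mimetypes.guess_extension(mimeType or "")
    return ext.lstrip(".") if ext else ""
```

An unknown MIME type with no file name yields an empty string, which validation code would treat as "format unknown" rather than a mismatch.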
## 2. TaskResult Class Changes

**File:** `gateway/modules/datamodels/datamodelChat.py` (lines 718-736)

### NO CHANGES NEEDED

**Current Structure (KEEP ALL - ALL USED):**
- ✅ `taskId: str` - Task identification
- ✅ `status: TaskStatus` - Task status tracking
- ✅ `success: bool` - Success flag
- ✅ `feedback: Optional[str]` - Task feedback
- ✅ `error: Optional[str]` - Error message

**Document metadata available from workflow:**
- Delivered formats can be extracted from the documents in workflow messages
- No need to store them separately - use existing document metadata

---
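Extracting delivered formats from workflow messages, as described above, can be sketched as follows. The attribute names (`documents`, `fileName`) follow those used elsewhere in this commit but are assumptions about the real message/document models:

```python
from typing import Any, List

def deliveredFormats(messages: List[Any]) -> List[str]:
    """Collect the distinct file format extensions delivered so far.

    Each workflow message is assumed to carry a `documents` list whose
    items expose a `fileName` attribute; formats are taken from the
    file-name extensions, in first-seen order.
    """
    formats: List[str] = []
    for message in messages:
        for doc in getattr(message, "documents", []) or []:
            name = getattr(doc, "fileName", "") or ""
            if "." in name:
                ext = name.rsplit(".", 1)[-1].lower()
                if ext not in formats:
                    formats.append(ext)
    return formats
```

Because only names and extensions are read, no summary fields are needed; the existing metadata is enough.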
## 3. TaskStep Class Changes

**File:** `gateway/modules/datamodels/datamodelChat.py` (lines 790-825)

### Modify
- Change `expectedFormat: Optional[str]` → `expectedFormats: Optional[List[str]]`
- Keep `dataType` and `qualityRequirements` as-is

### Modified Class:

```python
class TaskStep(BaseModel):
    id: str
    objective: str
    dependencies: Optional[list[str]] = Field(default_factory=list)
    successCriteria: Optional[list[str]] = Field(default_factory=list)
    estimatedComplexity: Optional[str] = None
    userMessage: Optional[str] = Field(
        None, description="User-friendly message in user's language"
    )
    # Format details extracted from intent analysis
    dataType: Optional[str] = Field(
        None, description="Expected data type (text, numbers, documents, etc.)"
    )
    expectedFormats: Optional[List[str]] = Field(
        None, description="Expected output file format extensions (e.g., ['docx', 'pdf', 'xlsx']). Use actual file extensions, not conceptual terms."
    )
    qualityRequirements: Optional[Dict[str, Any]] = Field(
        None, description="Quality requirements and constraints"
    )
```

### Register Labels
Update:
```python
"expectedFormats": {"en": "Expected Formats", "fr": "Formats attendus"}
```

---
## 4. ChatWorkflow Class Changes (for Workflow-Level Tracking)

**File:** `gateway/modules/datamodels/datamodelChat.py` (find ChatWorkflow class)

### Add (if not exists)
```python
expectedFormats: Optional[List[str]] = Field(
    None,
    description="List of expected file format extensions from user request (e.g., ['xlsx', 'pdf']). Extracted during intent analysis."
)
```

Note: `_workflowIntent` is already stored as a dict (not a model field), so `expectedFormats` can be extracted from there, but having it as an explicit field makes it easier to query.

---
## 5. ActionItem Class Review

**File:** `gateway/modules/datamodels/datamodelChat.py` (lines 652-715)

### Current Structure (ALL USED - KEEP):
- ✅ `id: str` - Used for action identification
- ✅ `execMethod: str` - Used for action execution
- ✅ `execAction: str` - Used for action execution
- ✅ `execParameters: Dict[str, Any]` - Used for action execution
- ✅ `execResultLabel: Optional[str]` - Used for document routing
- ✅ `expectedDocumentFormats: Optional[List[Dict[str, str]]]` - Used by action planning
- ✅ `userMessage: Optional[str]` - Used for user communication
- ✅ `status: TaskStatus` - Used for tracking
- ✅ `error: Optional[str]` - Used for error handling
- ✅ `retryCount: int` - Used for retry logic
- ✅ `retryMax: int` - Used for retry logic
- ✅ `processingTime: Optional[float]` - Used for performance tracking
- ✅ `timestamp: float` - Used for ordering/auditing
- ✅ `result: Optional[str]` - Used to store the action result text

**NO CHANGES NEEDED** - all attributes are used.

---
## 6. Summary of Changes

### Classes to Modify:
1. ✅ **TaskStep** - Change `expectedFormat` (str) → `expectedFormats` (List[str])
2. ✅ **ChatWorkflow** - Add `expectedFormats` (optional, for explicit tracking)

### Classes to Review (NO CHANGES):
- ✅ **ActionResult** - Keep as-is; documents already have metadata
- ✅ **TaskResult** - Keep as-is; no summary needed
- ✅ **ActionDocument** - Already correct (documentName, documentData, mimeType)
- ✅ **ActionItem** - All attributes used
- ✅ **Observation** - Already has a contentValidation field
- ✅ **TaskItem** - Used for database storage, separate from TaskStep

---
## 7. Implementation Impact

### Files That Will Need Updates:

1. **datamodelChat.py** - Class definitions (this proposal)
   - Change `expectedFormat` → `expectedFormats` in TaskStep
   - Add `expectedFormats` to ChatWorkflow (optional)

2. **taskPlanner.py** - Populate the `expectedFormats` list instead of a single `expectedFormat`
   - **Adaptive planning:** Use ALL workflow data (messages, document metadata) to refine the next task objective
   - Extract delivered formats from workflow documents
   - Compare what was delivered vs. what was planned

3. **contentValidator.py** - Use the `expectedFormats` list for validation
   - **Action-level validation:** Check action results against the task objective (already exists)
   - **Task-level validation:** Validate THIS task's deliverables against THIS task's expectations
   - Uses document metadata (name, size, format, mimeType) - no summaries needed

4. **intentAnalyzer.py** - Fix the prompt to ask for actual file format extensions
   - Change from conceptual terms ("raw_data", "formatted") to actual extensions ("pdf", "docx", "xlsx")

5. **promptGenerationTaskplan.py** - Ask for `expectedFormats` in task planning
   - **Adaptive planning:** Include ALL workflow data (messages, document names/sizes/formats/metadata) when planning the next task
   - Show what was actually delivered to help refine the objective

6. **workflowManager.py** - Pass ALL workflow data to next-task planning
   - Messages (text content)
   - Document metadata (names, sizes, formats, mimeTypes)
   - Validation results

### Key Implementation Points:

- **No summary fields:** Use existing document metadata (name, size, format, mimeType)
- **Adaptive task planning:** The next task receives ALL workflow data (messages + document metadata) to refine its objective
- **Validation scope:** Task validation checks ONLY that task's actions, not all workflow actions
- **After each action:** Validate against the task objective → decide if the task is complete or a next action is needed

---
## 8. Validation Logic Alignment

### Action-Level Validation (Within Task):
- **When:** After each action execution within a task
- **Checks:** Action results against the task objective
- **Against:** Action documents (name, size, format, mimeType metadata)
- **Purpose:** Decide if the task is complete or a next action is needed
- **Triggers:** Continue to the next action if incomplete; complete the task if done

### Task Planning (Adaptive - Uses ALL Workflow Data):
- **Input:** ALL workflow data available:
  - All messages (text content)
  - All document metadata (names, sizes, formats/extensions, mimeTypes)
  - Previous task validation results
- **Process:**
  - Extract delivered formats from all workflow documents
  - Compare what was ACTUALLY delivered vs. what was PLANNED
  - Refine the next task objective:
    - Deliver MORE if previous tasks delivered less than expected
    - Deliver LESS if previous tasks already delivered more
    - Adapt to actual workflow progress

### Task-Level Validation (Task Completion):
- **When:** After ALL actions in a task complete
- **Checks:** Task objective, task `expectedFormats`, task `successCriteria`
- **Against:** Documents from THIS task only (extract formats from document metadata)
- **Purpose:** Verify THIS task delivered what was expected for THIS task's scope
- **Output:** Validation result (used in workflow data for next-task planning)

### Workflow-Level Validation (Final):
- **When:** After ALL tasks complete
- **Checks:** Original user request, workflow `expectedFormats`, workflow success criteria
- **Against:** ALL documents from ALL tasks (extract formats from document metadata)
- **Purpose:** Final verification that the complete workflow delivered what the user requested
- **Triggers:** A new compensatory task if validation fails (missing deliverables)

---
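The workflow-level check described above reduces to a set comparison between the workflow's `expectedFormats` and the formats actually delivered. A minimal sketch, assuming format strings are plain extensions and the helper name is hypothetical:

```python
from typing import List, Optional

def missingWorkflowFormats(
    expectedFormats: Optional[List[str]],
    deliveredFormats: List[str],
) -> List[str]:
    """Return the expected formats that no delivered document satisfies.

    An empty result means the workflow delivered every expected format;
    a non-empty result would trigger a compensatory task.
    """
    if not expectedFormats:
        return []  # nothing explicit to check against
    delivered = {f.lower() for f in deliveredFormats}
    return [f for f in expectedFormats if f.lower() not in delivered]
```

The comparison is case-insensitive since extensions from file names may vary in case.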
## 9. Next Steps

1. **Review and approve this proposal**
2. **Implement class changes** in datamodelChat.py
3. **Update intent analyzer prompt** to request actual file format extensions
4. **Update task planning prompt** to request the `expectedFormats` list
5. **Implement workflow-level validation** method
6. **Update all references** from `expectedFormat` to `expectedFormats`

---
## Questions Answered

✅ **Document metadata:** Use existing document fields (name, size, format from mimeType/extensions) - no summaries needed

✅ **Format extraction:** Extract formats from document metadata (mimeType or file extensions)

✅ **Task validation scope:** Task validation checks ONLY actions in that task, not all workflow actions

✅ **Adaptive planning:** The next task uses ALL workflow data (messages + document metadata) to refine its objective

✅ **After each action:** Validate against the task objective → decide complete or next action needed

---
## 10. Validation Flow Clarification

### Simplified Flow:

1. **Within Task (Action-by-Action):**
   - Action executes → delivers documents with metadata
   - Validate action results against the task objective
   - If incomplete → next action needed
   - If complete → task done

2. **Task Planning (Adaptive):**
   - Receives: ALL workflow data (messages, document metadata from all previous tasks)
   - Extracts: Delivered formats from document metadata (file extensions/mimeTypes)
   - Compares: What was actually delivered vs. what was planned
   - Refines: The next task objective (may need more/less based on actual progress)

3. **Task Completion:**
   - Validate: THIS task's documents (extract formats from metadata) against THIS task's expectations
   - Result: Used in workflow data for next-task planning

4. **Workflow Completion:**
   - Final validation: All documents (extract formats from metadata) meet the original user request
   - If missing: Create a compensatory task

---
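The delivered-vs-planned comparison in the adaptive planning step can be sketched as a two-way set difference. This is a hypothetical helper, not existing code; "missing" formats mean the next task should deliver more, while "extra" formats mean earlier tasks already delivered beyond plan:

```python
from typing import Dict, List

def planningAdjustment(planned: List[str], delivered: List[str]) -> Dict[str, List[str]]:
    """Compare planned vs. actually delivered formats for adaptive planning.

    Returns 'missing' (planned but not delivered) and 'extra'
    (delivered but not planned), both lowercase and sorted.
    """
    plannedSet = {f.lower() for f in planned}
    deliveredSet = {f.lower() for f in delivered}
    return {
        "missing": sorted(plannedSet - deliveredSet),
        "extra": sorted(deliveredSet - plannedSet),
    }
```

The planner would fold a non-empty "missing" list into the next task's objective and drop already-covered deliverables when "extra" is non-empty.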
**Status:** Ready for implementation after approval
@@ -1219,7 +1219,7 @@ class ChatObjects:
     def _computeAutomationStatus(self, automation: Dict[str, Any]) -> str:
         """Compute status field based on eventId presence"""
         eventId = automation.get("eventId")
-        return "active" if eventId else "inactive"
+        return "Running" if eventId else "Idle"
 
     def getAllAutomationDefinitions(self, pagination: Optional[PaginationParams] = None) -> Union[List[Dict[str, Any]], PaginatedResult]:
         """
@@ -147,39 +147,39 @@ class ChatService:
             else:
                 # Direct label reference - can be round1_task2_action3_contextinfo format or simple label
                 # Search for messages with matching documentsLabel to find the actual documents
                 matchingMessages = []
                 for message in workflow.messages:
                     # Validate message belongs to this workflow
                     msgWorkflowId = getattr(message, 'workflowId', None)
                     if not msgWorkflowId or msgWorkflowId != workflowId:
                         if msgWorkflowId:
                             logger.debug(f"Skipping message {message.id} with workflowId {msgWorkflowId} (expected {workflowId})")
+                        else:
+                            logger.debug(f"Skipping message {message.id} with no workflowId (expected {workflowId})")
+                        continue
+
+                    msgDocumentsLabel = getattr(message, 'documentsLabel', '')
+
+                    # Check if this message's documentsLabel matches our reference
+                    if msgDocumentsLabel == docRef:
+                        # Found a matching message, collect it for comparison
+                        matchingMessages.append(message)
+
+                # If we found matching messages, take the newest one (highest publishedAt)
+                if matchingMessages:
+                    # Sort by publishedAt descending (newest first)
+                    matchingMessages.sort(key=lambda msg: getattr(msg, 'publishedAt', 0), reverse=True)
+                    newestMessage = matchingMessages[0]
+
+                    if newestMessage.documents:
+                        docNames = [doc.fileName for doc in newestMessage.documents if hasattr(doc, 'fileName')]
+                        logger.debug(f"Added {len(newestMessage.documents)} documents from newest message {newestMessage.id}: {docNames}")
+                        allDocuments.extend(newestMessage.documents)
+                    else:
+                        logger.debug(f"No documents found in newest message {newestMessage.id}")
                 else:
-                    logger.debug(f"Skipping message {message.id} with no workflowId (expected {workflowId})")
-                    continue
-
-                msgDocumentsLabel = getattr(message, 'documentsLabel', '')
-
-                # Check if this message's documentsLabel matches our reference
-                if msgDocumentsLabel == docRef:
-                    # Found a matching message, collect it for comparison
-                    matchingMessages.append(message)
-
-                # If we found matching messages, take the newest one (highest publishedAt)
-                if matchingMessages:
-                    # Sort by publishedAt descending (newest first)
-                    matchingMessages.sort(key=lambda msg: getattr(msg, 'publishedAt', 0), reverse=True)
-                    newestMessage = matchingMessages[0]
-
-                    if newestMessage.documents:
-                        docNames = [doc.fileName for doc in newestMessage.documents if hasattr(doc, 'fileName')]
-                        logger.debug(f"Added {len(newestMessage.documents)} documents from newest message {newestMessage.id}: {docNames}")
-                        allDocuments.extend(newestMessage.documents)
-                    else:
-                        logger.debug(f"No documents found in newest message {newestMessage.id}")
+                    logger.error(f"No messages found with documentsLabel: {docRef}")
+                    raise ValueError(f"Document reference not found: {docRef}")
 
         logger.debug(f"Resolved {len(allDocuments)} documents from document list: {documentList}")
         return allDocuments
@@ -931,9 +931,26 @@ class MethodSharepoint(MethodBase):
                 return ActionResult.isFailure(error=f"Invalid pathQuery '{pathQuery}'. This appears to be search terms, not a valid SharePoint path. Use findDocumentPath action first to search for folders, then use the returned folder path as pathQuery.")
 
             # For pathQuery, we need to discover sites to find the specific one
-            sites = await self._discoverSharePointSites()
-            if not sites:
+            all_sites = await self._discoverSharePointSites()
+            if not all_sites:
                 return ActionResult.isFailure(error="No SharePoint sites found or accessible")
 
+            # If pathQuery starts with /site:, extract site name and filter
+            if pathQuery.startswith('/site:'):
+                # Extract site name from /site:Company Share/... format
+                site_path_part = pathQuery[6:]  # Remove '/site:'
+                if '/' in site_path_part:
+                    site_name = site_path_part.split('/', 1)[0]
+                else:
+                    site_name = site_path_part
+
+                # Filter sites by name (case-insensitive substring match)
+                sites = self._filter_sites_by_hint(all_sites, site_name)
+                if not sites:
+                    return ActionResult.isFailure(error=f"No SharePoint site found matching '{site_name}'")
+                logger.info(f"Filtered to site(s) matching '{site_name}': {[s['displayName'] for s in sites]}")
+            else:
+                sites = all_sites
         else:
             # Step 3: Both pathObject and pathQuery failed - ERROR, NO FALLBACK
             return ActionResult.isFailure(error="No valid upload path provided. Either provide pathObject (from findDocumentPath) or a valid pathQuery with specific site information.")
@@ -1094,15 +1111,14 @@ class MethodSharepoint(MethodBase):
         """
         GENERAL:
         - Purpose: Upload documents to SharePoint. Choose this action only when a connectionReference is available.
-        - Input requirements: connectionReference (required); documentList (required); fileNames (required); optional pathObject or pathQuery.
+        - Input requirements: connectionReference (required); documentList (required); optional pathObject or pathQuery.
         - Output format: JSON with upload status and file info.
 
         Parameters:
         - connectionReference (str, required): Microsoft connection label.
         - pathObject (str, optional): Reference to a previous path result.
         - pathQuery (str, optional): Upload target path if no pathObject.
-        - documentList (list, required): Document reference(s) to upload.
-        - fileNames (list, required): Output file names.
+        - documentList (list, required): Document reference(s) to upload. File names are taken from the documents.
         """
         try:
             connectionReference = parameters.get("connectionReference")
@@ -1110,14 +1126,13 @@ class MethodSharepoint(MethodBase):
             documentList = parameters.get("documentList")
             if isinstance(documentList, str):
                 documentList = [documentList]
-            fileNames = parameters.get("fileNames")
             pathObject = parameters.get("pathObject")
 
             upload_path = pathQuery
             logger.debug(f"Using pathQuery: {pathQuery}")
 
-            if not connectionReference or not documentList or not fileNames:
-                return ActionResult.isFailure(error="Connection reference, document list, and file names are required")
+            if not connectionReference or not documentList:
+                return ActionResult.isFailure(error="Connection reference and document list are required")
 
             # If pathObject is provided, extract folder IDs from it
             if pathObject:
@@ -1296,7 +1311,23 @@ class MethodSharepoint(MethodBase):
 
                 upload_site_scope = selected_site
                 # Use the inner path portion as the actual upload target path
-                upload_paths = [f"/{parsed['innerPath'].lstrip('/')}"]
+                # Remove document library name from path (same logic as listDocuments)
+                inner_path = parsed['innerPath'].lstrip('/')
+                path_segments = [s for s in inner_path.split('/') if s.strip()]
+                if len(path_segments) > 1:
+                    # Path has multiple segments - first might be a library name
+                    # Try without first segment (assuming it's a library name)
+                    inner_path = '/'.join(path_segments[1:])
+                    logger.info(f"Removed first path segment (potential library name), path changed from '{parsed['innerPath']}' to '{inner_path}'")
+                elif len(path_segments) == 1:
+                    # Only one segment - if it's a common library-like name, use empty path (root)
+                    first_segment_lower = path_segments[0].lower()
+                    library_indicators = ['document', 'dokument', 'shared', 'freigegeben', 'library', 'bibliothek']
+                    if any(indicator in first_segment_lower for indicator in library_indicators):
+                        inner_path = ''
+                        logger.info(f"First segment '{path_segments[0]}' appears to be a library name, using root")
+
+                upload_paths = [f"/{inner_path}" if inner_path else "/"]
                 sites = [selected_site]
         else:
             # When using pathObject, check if upload_path is a folder ID or a path
@@ -1311,6 +1342,10 @@ class MethodSharepoint(MethodBase):
             # Process each document upload
             upload_results = []
 
+            # Extract file names from documents
+            fileNames = [doc.fileName for doc in chatDocuments]
+            logger.info(f"Using file names from documentList: {fileNames}")
+
             for i, (chatDocument, fileName) in enumerate(zip(chatDocuments, fileNames)):
                 try:
                     fileId = chatDocument.fileId
@@ -1,66 +0,0 @@
{
    "template": {
        "overview": "Automated workflow task",
        "tasks": [
            {
                "id": "Task01",
                "title": "Main Task",
                "description": "Execute automated workflow",
                "objective": "Execute automated workflow",
                "actionList": [
                    {
                        "execMethod": "ai",
                        "execAction": "webResearch",
                        "execParameters": {
                            "prompt": "{{KEY:webResearchPrompt}}",
                            "list(url)": ["{{KEY:webResearchUrl}}"]
                        },
                        "execResultLabel": "web_research_results"
                    },
                    {
                        "execMethod": "sharepoint",
                        "execAction": "listDocuments",
                        "execParameters": {
                            "connectionReference": "{{KEY:connectionName}}",
                            "pathQuery": "{{KEY:sharepointFolderNameSource}}"
                        },
                        "execResultLabel": "sharepoint_source_path"
                    },
                    {
                        "execMethod": "sharepoint",
                        "execAction": "readDocuments",
                        "execParameters": {
                            "connectionReference": "{{KEY:connectionName}}",
                            "documentList": ["sharepoint_source_path"],
                            "pathQuery": "{{KEY:sharepointFolderNameSource}}"
                        },
                        "execResultLabel": "sharepoint_source_documents"
                    },
                    {
                        "execMethod": "sharepoint",
                        "execAction": "uploadDocument",
                        "execParameters": {
                            "connectionReference": "{{KEY:connectionName}}",
                            "documentList": ["sharepoint_source_documents", "web_research_results"],
                            "pathQuery": "{{KEY:sharepointFolderNameDestination}}",
                            "fileNames": ["report.docx"]
                        },
                        "execResultLabel": "sharepoint_upload_documents"
                    }
                ]
            }
        ]
    },
    "parameters": {
        "connectionName": "connection:msft:p.motsch@valueon.ch",
        "webResearchUrl": "https://www.valueon.ch",
        "webResearchPrompt": "Who works at ValueOn AG in Switzerland and what do they do?",
        "PromptSharepointSource": "Summarize the documents in a list",
        "sharepointFolderNameSource": "/site:Company Share/Freigegebene Dokumente/15. Persoenliche Ordner/Patrick Motsch/input",
        "sharepointFolderNameDestination": "/site:Company Share/Freigegebene Dokumente/15. Persoenliche Ordner/Patrick Motsch/output",
        "PromptDeliverable": "Create a Word report of the web analysis and the documents in SharePoint"
    }
}
modules/workflows/processing/shared/automationTemplates.json (new file, 130 lines)

@@ -0,0 +1,130 @@
{
    "sets": [
        {
            "template": {
                "overview": "Example: web and SharePoint",
                "tasks": [
                    {
                        "id": "Task01",
                        "title": "Main Task",
                        "description": "Execute automated workflow",
                        "objective": "Execute automated workflow",
                        "actionList": [
                            {
                                "execMethod": "ai",
                                "execAction": "webResearch",
                                "execParameters": {
                                    "prompt": "{{KEY:webResearchPrompt}}",
                                    "list(url)": ["{{KEY:webResearchUrl}}"]
                                },
                                "execResultLabel": "web_research_results"
                            },
                            {
                                "execMethod": "sharepoint",
                                "execAction": "listDocuments",
                                "execParameters": {
                                    "connectionReference": "{{KEY:connectionName}}",
                                    "pathQuery": "{{KEY:sharepointFolderNameSource}}"
                                },
                                "execResultLabel": "sharepoint_source_path"
                            },
                            {
                                "execMethod": "sharepoint",
                                "execAction": "readDocuments",
                                "execParameters": {
                                    "connectionReference": "{{KEY:connectionName}}",
                                    "documentList": ["sharepoint_source_path"],
                                    "pathQuery": "{{KEY:sharepointFolderNameSource}}"
                                },
                                "execResultLabel": "sharepoint_source_documents"
                            },
                            {
                                "execMethod": "ai",
                                "execAction": "process",
                                "execParameters": {
                                    "aiPrompt": "{{KEY:PromptDeliverable}}",
                                    "documentList": ["sharepoint_source_documents", "web_research_results"]
                                },
                                "execResultLabel": "sharepoint_report"
                            },
                            {
                                "execMethod": "sharepoint",
                                "execAction": "uploadDocument",
                                "execParameters": {
                                    "connectionReference": "{{KEY:connectionName}}",
                                    "documentList": ["sharepoint_report"],
                                    "pathQuery": "{{KEY:sharepointFolderNameDestination}}"
                                },
                                "execResultLabel": "sharepoint_upload_documents"
                            }
                        ]
                    }
                ]
            },
            "parameters": {
                "connectionName": "connection:msft:p.motsch@valueon.ch",
                "webResearchUrl": "https://www.valueon.ch",
                "webResearchPrompt": "Who works at ValueOn AG in Switzerland and what do they do?",
                "PromptSharepointSource": "Summarize the documents in a list",
                "sharepointFolderNameSource": "/site:Company Share/Freigegebene Dokumente/15. Persoenliche Ordner/Patrick Motsch/input",
                "sharepointFolderNameDestination": "/site:Company Share/Freigegebene Dokumente/15. Persoenliche Ordner/Patrick Motsch/output",
                "PromptDeliverable": "Create a report of the web analysis and the documents in SharePoint as a Word document"
            }
        },
        {
            "template": {
                "overview": "Real estate research Zurich",
                "tasks": [
                    {
                        "id": "Task02",
                        "title": "Real estate research Zurich",
                        "description": "Web research for real estate in the canton of Zurich and storage in Excel",
                        "objective": "Research real estate for sale in the canton of Zurich (5-20 million CHF) and save the results in an Excel list on SharePoint",
                        "actionList": [
                            {
                                "execMethod": "ai",
                                "execAction": "webResearch",
                                "execParameters": {
                                    "prompt": "{{KEY:immobilienResearchPrompt}}",
                                    "list(url)": ["{{KEY:immobilienResearchUrl}}"]
                                },
                                "execResultLabel": "immobilien_research_results"
                            },
                            {
                                "execMethod": "ai",
                                "execAction": "process",
                                "execParameters": {
                                    "aiPrompt": "{{KEY:excelFormatPrompt}}",
                                    "documentList": ["immobilien_research_results"],
                                    "resultType": "xlsx"
                                },
                                "execResultLabel": "immobilien_excel_list"
                            },
                            {
                                "execMethod": "sharepoint",
                                "execAction": "uploadDocument",
                                "execParameters": {
                                    "connectionReference": "{{KEY:connectionName}}",
                                    "documentList": ["immobilien_excel_list"],
                                    "pathQuery": "{{KEY:sharepointFolderNameDestination}}"
                                },
                                "execResultLabel": "immobilien_upload_result"
                            }
                        ]
                    }
                ]
            },
            "parameters": {
                "connectionName": "connection:msft:p.motsch@valueon.ch",
                "sharepointFolderNameDestination": "/site:Company Share/Freigegebene Dokumente/15. Persoenliche Ordner/Patrick Motsch/output",
                "immobilienResearchUrl": ["https://www.homegate.ch", "https://www.immoscout24.ch", "https://www.immowelt.ch"],
                "immobilienResearchPrompt": "Search for real estate for sale in the canton of Zurich, Switzerland, in the price range of 5-20 million CHF. Collect information on: location, price, description, URL to images, seller/contact information.",
                "excelFormatPrompt": "Create an Excel file with the researched properties. Each property should be one row with the following columns: location, price (in CHF), description, image URL, seller. Use the data from the web research."
            }
        }
    ]
}