# Test Suite Documentation

## Overview

This test suite includes:

- **Unit Tests**: Fast, isolated tests for individual components
- **Integration Tests**: Tests for component interactions
- **Validation Tests**: End-to-end architecture validation
- **Functional Tests**: Standalone async test scripts for real-world scenarios
## Running Tests

### Prerequisites

```bash
# Install dependencies (pytest is already in requirements.txt)
cd gateway
pip install -r requirements.txt

# Or install pytest separately if needed
pip install pytest pytest-asyncio pytest-cov
```
### Running Pytest Tests

All tests:

```bash
cd gateway
pytest
```

By category:

```bash
# Unit tests only
pytest tests/unit/

# Integration tests only
pytest tests/integration/

# Validation tests only
pytest tests/validation/
```

Specific tests:

```bash
# Specific file
pytest tests/unit/datamodels/test_workflow_models.py

# Specific test class
pytest tests/unit/datamodels/test_workflow_models.py::TestActionDefinition

# Specific test function
pytest tests/unit/datamodels/test_workflow_models.py::TestActionDefinition::test_actionDefinition_needsStage2_without_parameters
```

With options:

```bash
# Verbose output
pytest -v

# Show print statements
pytest -s

# Stop on first failure
pytest -x

# Run tests matching a pattern
pytest -k "test_actionDefinition"

# Run with coverage
pytest --cov=modules --cov-report=html
```
### Running Functional Tests

These are standalone async scripts that test real AI operations. They are NOT pytest-compatible and must be run directly:

```bash
cd gateway

# AI Models Test (IMAGE_GENERATE)
python tests/functional/test_ai_models.py

# AI Model Selection Test
python tests/functional/test_ai_model_selection.py

# AI Behavior Test
python tests/functional/test_ai_behavior.py

# AI Operations Test
python tests/functional/test_ai_operations.py
```

Note: These functional tests:

- Require valid API keys configured in the environment/config
- Require database access
- May make actual AI API calls (costs may apply)
- Must be run directly (not via pytest)
## Test Structure

```
tests/
├── unit/            # Unit tests (fast, isolated, pytest-compatible)
│   ├── datamodels/  # Data model tests
│   ├── services/    # Service layer tests
│   ├── workflows/   # Workflow tests
│   └── utils/       # Utility function tests
├── integration/     # Integration tests (pytest-compatible)
│   └── workflows/   # Workflow integration tests
├── validation/      # Architecture validation tests (pytest-compatible)
└── functional/      # Functional tests (standalone scripts, NOT pytest-compatible)
    ├── test_ai_models.py
    ├── test_ai_behavior.py
    ├── test_ai_model_selection.py
    └── test_ai_operations.py
```
## Test Categories

### Unit Tests (`tests/unit/`)

Data models:

- `test_workflow_models.py` - ActionDefinition, AiResponse, etc.
- `test_docref.py` - DocumentReference models

Services:

- `test_ai_service.py` - AI service methods (mocked)

Workflows:

- `test_state_management.py` - ChatWorkflow state management

Utils:

- `test_json_utils.py` - JSON parsing utilities

### Integration Tests (`tests/integration/`)

- `test_workflow_execution.py` - Full workflow execution flows

### Validation Tests (`tests/validation/`)

- `test_architecture_validation.py` - End-to-end architecture validation

### Functional Tests (`tests/functional/`)

Note: These are standalone scripts that must be run directly (not via pytest):

- `test_ai_models.py` - Real AI model testing (IMAGE_GENERATE)
- `test_ai_model_selection.py` - Model selection logic
- `test_ai_behavior.py` - AI behavior with different prompts
- `test_ai_operations.py` - AI operations testing
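Since the functional scripts are run directly rather than through pytest, they need their own entry point and path setup. A minimal sketch of that structure is below; the scripts' actual contents are not shown in this README, so everything here is illustrative:

```python
# Hypothetical skeleton of a standalone functional test script.
# The real tests/functional/*.py files may differ; this only shows the pattern
# of bootstrapping sys.path and running an async main() directly.
import asyncio
import os
import sys

# Standalone scripts add the gateway directory to sys.path themselves, so
# imports like `modules.*` resolve when the file is run directly from anywhere.
GATEWAY_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
if GATEWAY_DIR not in sys.path:
    sys.path.insert(0, GATEWAY_DIR)


async def main() -> int:
    # A real script would resolve services here and make actual AI calls
    # (which is why these tests need API keys and database access).
    print("functional test scaffold ready")
    return 0


if __name__ == "__main__":
    asyncio.run(main())
```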
## Pytest Configuration

Configuration is in `pytest.ini`:

- Default: runs non-expensive tests only
- Use `pytest -m ""` to run ALL tests (including expensive ones)
- Test paths: `tests/`
- Python paths: `.` (the gateway directory)
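The README does not reproduce the file itself; a `pytest.ini` consistent with the behavior described above might look like this (the marker description text is illustrative):

```ini
# Hypothetical sketch -- the project's actual pytest.ini is not shown here.
[pytest]
testpaths = tests
pythonpath = .
# Default run excludes expensive tests; `pytest -m ""` clears this filter.
addopts = -m "not expensive"
markers =
    expensive: tests that make real, potentially costly AI API calls
```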
## Markers

Tests can be marked with pytest markers:

```python
@pytest.mark.asyncio
async def test_something():
    ...

@pytest.mark.expensive
def test_expensive_operation():
    ...
```

Run only expensive tests:

```bash
pytest -m expensive
```
## Debugging Tests

Run with the debugger:

```bash
pytest --pdb  # Drop into debugger on failure
```

Show local variables:

```bash
pytest -l  # Show local variables in tracebacks
```

Run last failed tests:

```bash
pytest --lf
```
## Continuous Integration

For CI/CD, use:

```bash
# Run all tests with coverage
pytest --cov=modules --cov-report=xml --cov-report=html

# Run only fast tests (exclude expensive)
pytest -m "not expensive"
```
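As one illustration, a CI job could simply wrap the commands above. This GitHub Actions sketch is an assumption, not part of the project; the job name, Python version, and paths are placeholders:

```yaml
# Hypothetical GitHub Actions workflow -- names and versions are illustrative.
name: tests
on: [push, pull_request]
jobs:
  fast-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
        working-directory: gateway
      # Fast suite only; expensive AI-calling tests stay excluded in CI.
      - run: pytest -m "not expensive" --cov=modules --cov-report=xml
        working-directory: gateway
```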
## Troubleshooting

Import errors (`ModuleNotFoundError: No module named 'modules'`):

- Ensure you're running pytest from the `gateway/` directory
- The `conftest.py` file automatically adds the gateway directory to `sys.path`
- If issues persist, verify `pytest.ini` has `pythonpath = .` (not `python_paths`)
- You can also set PYTHONPATH manually (PowerShell): `$env:PYTHONPATH = "."`, then run `pytest`

Async test issues:

- Ensure `pytest-asyncio` is installed
- Tests marked with `@pytest.mark.asyncio` will run correctly

Path issues:

- Standalone scripts automatically add gateway to `sys.path`
- Pytest tests use `conftest.py` to set up the path automatically
- If running from a different directory, use `python -m pytest` from the gateway directory
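The `conftest.py` path setup mentioned above typically amounts to a few lines. A sketch of what it might contain follows; the project's actual `conftest.py` is not shown in this README, so treat this as an assumption:

```python
# Hypothetical sketch of the path setup in tests/conftest.py.
import os
import sys

# Add the gateway directory (the parent of tests/) to sys.path so that
# `import modules...` works regardless of where pytest was invoked from.
GATEWAY_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if GATEWAY_DIR not in sys.path:
    sys.path.insert(0, GATEWAY_DIR)
```

Because pytest imports `conftest.py` before collecting tests, this runs early enough for every test module to see the adjusted path.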