This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Zeuz Node is a cross-platform test automation client that connects to Zeuz Server to receive and execute test cases. It supports Web (Selenium/Playwright), Mobile (Appium), Desktop (PyAutoGUI), Database, REST/SOAP APIs, and Performance testing.
```
uv sync                                        # Install dependencies
uv run pytest tests/                           # Run tests
uv run pytest tests/test_cases.py::test_name   # Run single test
uv run mypy Framework/                         # Type checking
uv run ruff check Framework/                   # Linting
uv run ruff format Framework/                  # Format code
```

```
cd Apps/Web/AI_Recorder_2/
npm install
npm run dev      # Development
npm run build    # Production (TSC + Vite)
npm run lint     # ESLint
```

```
cd Apps/node_runner/
make all       # Build all platforms
make windows   # Windows only
make mac       # macOS only
make linux     # Linux only
```

```
python node_cli.py --help    # Show available flags
python node_cli.py --login   # Authenticate with server
```
```
python node_cli.py --logout  # Clear credentials
```

```
Zeuz Server → long_poll_handler.py → adapter.py → MainDriverApi.py
                            ↓
                 sequential_actions.py
                            ↓
     ┌──────────────┬──────────────┬──────────────┐
     ↓              ↓              ↓              ↓
 Selenium      Playwright      Appium        Desktop
BuiltInFunctions  BuiltInFunctions  BuiltInFunctions  ...
```
- `node_cli.py` - Entry point
- `Framework/MainDriverApi.py` - Test orchestration
- `Framework/Built_In_Automation/Sequential_Actions/sequential_actions.py` - Action dispatcher
- `Framework/Built_In_Automation/Shared_Resources/LocateElement.py` - Universal element location
- `Framework/Built_In_Automation/Shared_Resources/BuiltInFunctionSharedResources.py` - Shared variables
- `Framework/Utilities/CommonUtil.py` - Logging and exceptions
- `Framework/Utilities/decorators.py` - `@logger`, `@deprecated` decorators
Each automation module follows this pattern:
```
Built_In_Automation/<Platform>/
├── BuiltInFunctions.py   # Action implementations
├── utils.py              # Platform-specific utilities
└── ...
```
Action declarations are in `Sequential_Actions/action_declarations/`:
- `info.py` - Master registry that loads all declarations
- `selenium.py`, `playwright.py`, `appium.py`, etc. - Module-specific action definitions
All actions must follow this pattern:
```python
@logger
def Action_Name(step_data):
    sModuleInfo = inspect.currentframe().f_code.co_name + " : " + MODULE_NAME
    try:
        # 1. Parse parameters from step_data (list of 3-tuples)
        for left, mid, right in step_data:
            left = left.strip().lower()
            # Extract values...

        # 2. Get element if needed
        Element = LocateElement.Get_Element(step_data, driver)
        if Element in failed_tag_list:
            CommonUtil.ExecLog(sModuleInfo, "Element not found", 3)
            return "zeuz_failed"

        # 3. Perform action
        # ...
        CommonUtil.ExecLog(sModuleInfo, "Success", 1)
        return "passed"
    except Exception:
        return CommonUtil.Exception_Handler(sys.exc_info())
```

All actions must return one of:
- `"passed"` - Success
- `"zeuz_failed"` - Failure
- `"skipped"` - Skipped
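As a self-contained illustration of that contract, here is a toy action; `logger` and `Compare_Values` are hypothetical stand-ins for demonstration, not Zeuz Node code:

```python
import functools
import inspect

MODULE_NAME = "demo_module"

def logger(fn):
    # Minimal stand-in for the real @logger from Framework/Utilities/decorators.py
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@logger
def Compare_Values(step_data):
    # Toy action: passes when the first row's left field equals its right field
    sModuleInfo = inspect.currentframe().f_code.co_name + " : " + MODULE_NAME
    try:
        left, mid, right = step_data[0]
        if left.strip().lower() == right.strip().lower():
            return "passed"
        return "zeuz_failed"
    except Exception:
        return "zeuz_failed"
```

Note that even the exception path returns one of the three status strings, so the dispatcher never sees a raw exception.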
Actions receive data as a list of 3-tuples:
```python
step_data = [
    ("id", "element parameter", "submit-btn"),
    ("use js", "optional parameter", "true"),
    ("click", "selenium action", "click"),
]
```

```python
CommonUtil.ExecLog(sModuleInfo, "message", log_level)
# Levels: 0=Trace, 1=Info/Pass, 2=Warning, 3=Error, 4=Debug
```

```python
from Framework.Built_In_Automation.Shared_Resources import BuiltInFunctionSharedResources as sr

sr.Set_Shared_Variables("key", value)
value = sr.Get_Shared_Variables("key")
sr.Test_Shared_Variables("key")  # Check if exists
```

Variables use `%|variable_name|%` syntax:

```
"%|my_var|%"        # Basic reference
"%|my_list[0]|%"    # Index access
"%|user["name"]|%"  # Dictionary access
```

The unified `LocateElement.Get_Element()` supports:
- Selenium WebDriver
- Playwright Page objects
- Appium drivers
- Auto-detects driver type from string representation
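A minimal sketch of what string-based auto-detection can look like; `detect_driver_type` is illustrative only, not the actual LocateElement implementation:

```python
def detect_driver_type(driver):
    # Sketch: infer the automation backend from the driver's string representation.
    # The real LocateElement logic is more involved; this only shows the idea.
    rep = (str(type(driver)) + " " + str(driver)).lower()
    if "playwright" in rep:
        return "playwright"
    if "appium" in rep:
        return "appium"
    return "selenium"  # fall back to Selenium WebDriver
```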
Parameter types: `element parameter`, `parent parameter`, `child parameter`, `sibling parameter`, `unique parameter`

Selector modifiers:
- `*` prefix = partial match (contains)
- `**` prefix = case-insensitive partial match
- `|*|` separator = platform-specific values (Android|iOS)
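The prefix modifiers can be read as the following matching rules; `match_attribute` is a hypothetical helper sketching the stated semantics, not Zeuz Node's code:

```python
def match_attribute(pattern, value):
    # Sketch of the selector-modifier semantics described above
    if pattern.startswith("**"):
        return pattern[2:].lower() in value.lower()  # case-insensitive partial match
    if pattern.startswith("*"):
        return pattern[1:] in value                  # partial match (contains)
    return pattern == value                          # exact match by default
```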
Loops are handled in `sequential_actions.py` with two main mechanisms.

The first iterates through data sets or shared-variable lists:
```python
# Loop settings parameter format:
("for each_item in %|MyList|%", "loop action", "action to loop")
```

Control options (via `optional loop control`):
- Exit Loop and Fail - Terminate loop and fail test case
- Exit Loop and Continue - Terminate loop, continue test case
- Continue to Next Iteration - Skip current iteration
Data structures track pass/fail conditions per step:
```python
exit_loop_and_fail = {"pass": [[]], "fail": [[]], "cond": []}
exit_loop_and_cont = {"pass": [[]], "fail": [[]], "cond": []}
continue_next_iter = {"pass": [[]], "fail": [[]], "cond": []}
```

The second mechanism is conditional looping based on dataset results:

```python
# Parameters:
max_no_of_loop       # Maximum iterations (via "repeat" setting)
loop_this_data_sets  # Dataset indices to loop
# Operators: ==, !=, <=, >=, >, <, in
```

Exit conditions:
- Pass/fail condition match on specified dataset
- Operator comparison between variables
- Max iteration count reached
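Putting the parameters and exit conditions together, the control flow is roughly as follows; `run_while_loop`, `get_left`, and `get_right` are illustrative names, not the actual implementation:

```python
import operator

# Comparison operators supported by loop conditions (see the list above)
OPS = {"==": operator.eq, "!=": operator.ne, "<=": operator.le,
       ">=": operator.ge, ">": operator.gt, "<": operator.lt,
       "in": lambda a, b: a in b}

def run_while_loop(run_data_sets, get_left, op, get_right, max_no_of_loop):
    # Re-run the selected data sets until the condition holds or the
    # iteration cap (the "repeat" setting) is reached.
    for iteration in range(1, max_no_of_loop + 1):
        run_data_sets()
        if OPS[op](get_left(), get_right()):
            return "condition_met", iteration
    return "max_iterations_reached", max_no_of_loop
```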
- `loop action` - Main loop designation
- `optional loop settings` - Loop configuration
- `optional loop condition` - Conditional expressions
- `optional loop control` - Pass/fail control directives
Reports are stored in `CommonUtil.all_logs_json` as a hierarchical JSON structure:

```
all_logs_json = [{
    "run_id": "<run_id>",
    "objective": "<test_objective>",
    "execution_detail": {"duration", "teststarttime", "status"},
    "test_cases": [{
        "testcase_no": "<tc_id>",
        "title": "<tc_name>",
        "execution_detail": {"status", "duration", "failreason"},
        "steps": [{
            "step_sequence": <seq>,
            "step_id": "<id>",
            "execution_detail": {"status", "duration", "stepstarttime", "stependtime"},
            "log": [{"status", "modulename", "details", "tstamp", "loglevel"}]
        }]
    }]
}]
```
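Given that shape, a report consumer can walk the hierarchy directly; for example, tallying step statuses (a sketch, not part of CommonUtil):

```python
def count_step_statuses(all_logs_json):
    # Walk run → test case → step and tally each step's status
    counts = {}
    for run in all_logs_json:
        for test_case in run["test_cases"]:
            for step in test_case["steps"]:
                status = step["execution_detail"]["status"]
                counts[status] = counts.get(status, 0) + 1
    return counts
```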
`CommonUtil.CreateJsonReport()` - Builds the JSON report structure dynamically:
- Extracts `log_id` components: `run_id|testcase_no|step_id|step_no`
- Updates test case and step entries in `all_logs_json`
- Appends log entries to step logs

`MainDriverApi.upload_step_report()` - Uploads individual step results:
- POST to `/create_step_report/`
- 5 retry attempts with 4-second delays

`MainDriverApi.upload_reports_and_zips()` - Uploads the complete report + artifacts:
- POST to `/create_report_log_api/` (execution report)
- POST to `/save_log_and_attachment_api/` (ZIP files)
- Failed uploads saved to `failed_uploads/<run_id>/` for retry
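The retry behavior (5 attempts, 4-second delays) amounts to a loop like the following; `post_fn` is a stand-in for the actual HTTP POST call:

```python
import time

def post_with_retry(post_fn, payload, attempts=5, delay=4.0):
    # Retry a failed upload up to `attempts` times, sleeping between tries
    for attempt in range(attempts):
        try:
            return post_fn(payload)
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: caller saves the payload for later retry
            time.sleep(delay)
```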
`calculate_test_case_result()` aggregates step results (priority order):
1. `testcase_exitflag` if set
2. "BLOCKED" → Blocked
3. "CANCELLED" → Cancelled
4. "zeuz_failed" → Failed or Blocked (based on verify_point)
5. "WARNING" or "NOT RUN" → Failed
6. All "SKIPPED" → Skipped
7. "PASSED" → Passed
8. Default → Unknown
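The priority order reads as a cascade of checks; a sketch of the aggregation, illustrative rather than the actual `calculate_test_case_result` body:

```python
def aggregate_result(step_results, testcase_exitflag=None, verify_point=False):
    # Checks mirror the priority order listed above, highest priority first
    if testcase_exitflag:
        return testcase_exitflag
    if "BLOCKED" in step_results:
        return "Blocked"
    if "CANCELLED" in step_results:
        return "Cancelled"
    if "zeuz_failed" in step_results:
        return "Blocked" if verify_point else "Failed"
    if "WARNING" in step_results or "NOT RUN" in step_results:
        return "Failed"
    if step_results and all(r == "SKIPPED" for r in step_results):
        return "Skipped"
    if "PASSED" in step_results:
        return "Passed"
    return "Unknown"
```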
```
<run_id>/<session_name>/<test_case>/
├── Log/                    # Browser console error logs
├── screenshots/            # Test screenshots
├── performance_report/     # Performance metrics
├── json_report/            # Step-level JSON reports
├── zeuz_download_folder/   # Downloaded files
└── attachments/            # Test case attachments
```
Generated via `reporting/junit_report.py` after test completion:
- Output: `<zip_dir>/junitreport.xml`
- Standard JUnit XML format with test suite and test case elements
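A minimal version of that output can be produced with the standard library; `build_junit_xml` is a sketch, not the actual `junit_report.py` code:

```python
import xml.etree.ElementTree as ET

def build_junit_xml(suite_name, cases):
    # cases: list of (test_name, status) pairs; non-passed statuses get a <failure>
    suite = ET.Element("testsuite", name=suite_name, tests=str(len(cases)))
    for name, status in cases:
        case = ET.SubElement(suite, "testcase", name=name)
        if status != "passed":
            ET.SubElement(case, "failure", message=status)
    return ET.tostring(suite, encoding="unicode")
```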
To add a new automation module:
1. Create `Framework/Built_In_Automation/<Category>/<Module>/BuiltInFunctions.py`
2. Add action declarations in `Sequential_Actions/action_declarations/<module>.py`
3. Register in `action_declarations/info.py`
4. Add module loading in `sequential_actions.py:load_sa_modules()`
See `docs/ARCHITECTURE.md` for the complete guide.
The FastAPI server runs on port 18100+ (auto-increments if the port is in use):
- `/api/v1/status` - Health check
- `/api/v1/connect` - Connection management
- `/api/v1/mobile` - Mobile UI dump
- `~/.zeuz/` - User artifacts
- `~/.zeuz/zeuz_node_downloads/` - Driver/app downloads
- `AutomationLog/` - Test execution logs
- `AutomationLog/attachments/` - Captured files