Conversation
Check out this pull request on ReviewNB: see visual diffs & provide feedback on Jupyter Notebooks. Powered by ReviewNB

Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI. Use the checkbox below for a quick retry.
📝 Walkthrough

Adds utilities for deduplicating materials and workflows and a flexible job-creation helper; updates the band_gap notebook to use mat3ra.ide.compute-based compute construction and cluster listing; adds a new total_energy notebook; and updates dependency pins to use git-sourced mat3ra-ide and mat3ra-api-client.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as Notebook
    participant MatEP as Materials Endpoint
    participant WfEP as Workflows Endpoint
    participant JobEP as Jobs Endpoint
    participant API as mat3ra API
    User->>MatEP: get_or_create_material(material)
    activate MatEP
    MatEP->>API: query by material.hash
    alt exists
        MatEP-->>User: return existing material
    else
        MatEP->>API: create material
        MatEP-->>User: return new material
    end
    deactivate MatEP
    User->>WfEP: get_or_create_workflow(workflow)
    activate WfEP
    WfEP->>API: create workflow
    WfEP->>API: check duplicates by hash
    alt duplicate
        WfEP->>API: delete newly created
        WfEP-->>User: return original workflow
    else
        WfEP-->>User: return new workflow
    end
    deactivate WfEP
    User->>JobEP: create_job(materials, workflow_or_id, save_to_collection, compute?)
    activate JobEP
    alt save_to_collection == true
        JobEP->>API: create_by_ids(material_ids, workflow_id, project_id, compute)
        JobEP-->>User: created job refs
    else
        loop per material
            JobEP->>API: create job with embedded workflow and compute
        end
        JobEP-->>User: created job refs
    end
    deactivate JobEP
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🧹 Nitpick comments (2)
other/materials_designer/workflows/band_gap.ipynb (1)
459-470: Consider moving the conditional import to the top of the cell.

The `get_or_create_workflow` import is conditionally done inside the `if` block. While functional, this can be confusing. Consider importing at the top of the cell unconditionally for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@other/materials_designer/workflows/band_gap.ipynb` around lines 459 - 470, Move the conditional import of get_or_create_workflow out of the if block and place it at the top of the cell alongside the existing import of dict_to_namespace to make imports consistent; update the cell so get_or_create_workflow is imported unconditionally (while leaving the runtime conditional on SAVE_WF_TO_COLLECTION that sets workflow_id_or_dict and uses get_or_create_workflow, dict_to_namespace, and workflow.to_dict() unchanged).

utils/api.py (1)
179-188: Consider adding a `Union` type hint for `workflow_id_or_dict`.

The parameter accepts either a `str` (workflow ID) or a `dict` (workflow config), but the type hint is implicitly `Any`. Adding an explicit type improves readability and IDE support.

💡 Suggested type hint improvement
```diff
+from typing import List, Optional, Union
+
 def create_job(
     jobs_endpoint: JobEndpoints,
     materials: List[dict],
-    workflow_id_or_dict,
+    workflow_id_or_dict: Union[str, dict],
     project_id: str,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@utils/api.py` around lines 179 - 188, The parameter workflow_id_or_dict in create_job should be explicitly typed as accepting either a string or a dict to improve readability and IDE support; update the signature to use a Union (e.g., Union[str, dict] or Union[str, Mapping[str, Any]]) and add any required typing imports (Union, Mapping/Dict, Any) at the top of the module so the function annotation reflects both allowed types.
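For illustration, a minimal sketch of how a union-typed signature could dispatch on the argument type. The endpoint argument, return shape, and body here are hypothetical stand-ins, not the actual `create_job` from `utils/api.py`:

```python
from typing import Any, Dict, List, Union


def create_job(
    jobs_endpoint: Any,  # stand-in for JobEndpoints from mat3ra-api-client
    materials: List[dict],
    workflow_id_or_dict: Union[str, Dict[str, Any]],
    project_id: str,
) -> List[dict]:
    """Illustrative only: a str is treated as an existing workflow ID,
    a dict as an embedded workflow config (return shape is made up)."""
    mode = "by_id" if isinstance(workflow_id_or_dict, str) else "embedded"
    return [
        {"material": m["_id"], "mode": mode, "project": project_id}
        for m in materials
    ]
```

With the `Union` in place, type checkers such as mypy can flag callers that pass any other type for the workflow argument.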
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@other/materials_designer/workflows/band_gap.ipynb`:
- Around line 508-516: The code assumes clusters is non-empty when doing cluster
= clusters[0]; add an explicit check (e.g., if not clusters:) before that line
and handle the empty case by raising a clear error (ValueError or RuntimeError)
or logging and aborting, supplying a helpful message like "no clusters
available" so downstream code that constructs Compute(cluster=cluster,
queue=QueueName.D, ppn=2) doesn't get an IndexError; update any surrounding
setup/exception handling to propagate this meaningful error.
In `@utils/api.py`:
- Around line 153-176: The function get_or_create_workflow can race when two
processes create identical workflows; wrap the endpoint.delete(created["_id"])
call in a try/except that catches deletion errors (e.g., not-found or conflict)
so a second process that already removed the duplicate won't raise, and after
catching an exception re-query duplicates (using endpoint.list with {"hash":
created["hash"], "owner._id": owner_id}) to return the surviving original if
present; ensure the exception path logs or silently continues and ultimately
returns the existing workflow rather than propagating the delete error.
---
Nitpick comments:
In `@other/materials_designer/workflows/band_gap.ipynb`:
- Around line 459-470: Move the conditional import of get_or_create_workflow out
of the if block and place it at the top of the cell alongside the existing
import of dict_to_namespace to make imports consistent; update the cell so
get_or_create_workflow is imported unconditionally (while leaving the runtime
conditional on SAVE_WF_TO_COLLECTION that sets workflow_id_or_dict and uses
get_or_create_workflow, dict_to_namespace, and workflow.to_dict() unchanged).
In `@utils/api.py`:
- Around line 179-188: The parameter workflow_id_or_dict in create_job should be
explicitly typed as accepting either a string or a dict to improve readability
and IDE support; update the signature to use a Union (e.g., Union[str, dict] or
Union[str, Mapping[str, Any]]) and add any required typing imports (Union,
Mapping/Dict, Any) at the top of the module so the function annotation reflects
both allowed types.
```diff
+"from mat3ra.ide.compute import Compute, QueueName\n",
 "\n",
-"CLUSTER_NAME = get_cluster_name()\n",
+"cluster = clusters[0]\n",
 "\n",
-"compute = client.jobs.build_compute_config(\n",
-"    cluster=CLUSTER_NAME\n",
+"compute = Compute(\n",
+"    cluster=cluster,\n",
+"    queue=QueueName.D,\n",
+"    ppn=2\n",
 ")"
```
🧩 Analysis chain
🌐 Web query:
mat3ra.ide.compute QueueName enum D value
💡 Result:
In mat3ra.ide.compute.QueueName, the enum member D corresponds to the Debug queue, i.e. its value is the string "D". [1], [2]
🏁 Script executed:
```shell
# Search for any references to mat3ra or QueueName in the repository
rg -i "queuename" --type py --type ipynb -A 2 -B 2
```

Repository: Exabyte-io/api-examples
Length of output: 96
🏁 Script executed:
```shell
# Check if there are any imports or documentation about mat3ra in the repo
rg "mat3ra" --type py --type ipynb -A 1 -B 1 | head -50
```

Repository: Exabyte-io/api-examples
Length of output: 96
🏁 Script executed:
```shell
# Read the notebook file to understand the actual context
cat "other/materials_designer/workflows/band_gap.ipynb" | head -600
```

Repository: Exabyte-io/api-examples
Length of output: 16598
Add validation for empty clusters list.
Line 510 accesses clusters[0] without checking if the list is non-empty. If no clusters are available, this will raise an IndexError with an unhelpful message.
🛡️ Suggested validation
```diff
+if not clusters:
+    raise ValueError("No clusters available. Please check your account configuration.")
 cluster = clusters[0]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@other/materials_designer/workflows/band_gap.ipynb` around lines 508 - 516,
The code assumes clusters is non-empty when doing cluster = clusters[0]; add an
explicit check (e.g., if not clusters:) before that line and handle the empty
case by raising a clear error (ValueError or RuntimeError) or logging and
aborting, supplying a helpful message like "no clusters available" so downstream
code that constructs Compute(cluster=cluster, queue=QueueName.D, ppn=2) doesn't
get an IndexError; update any surrounding setup/exception handling to propagate
this meaningful error.
```python
def get_or_create_workflow(endpoint: WorkflowEndpoints, workflow, owner_id: str) -> dict:
    """
    Creates the workflow on the server, then uses the server-assigned hash to check for
    pre-existing duplicates. If a duplicate exists, deletes the new entry and returns the
    original. The server is the authoritative source for structural deduplication.

    Args:
        endpoint (WorkflowEndpoints): Workflow endpoint from the API client.
        workflow: mat3ra-wode Workflow object with a .to_dict() method.
        owner_id (str): Account ID under which to search and create.

    Returns:
        dict: The workflow dict (existing or newly created).
    """
    # TODO: calculate the hash in wode, client side
    created = endpoint.create(workflow.to_dict(), owner_id=owner_id)
    duplicates = endpoint.list({"hash": created["hash"], "owner._id": owner_id})
    if len(duplicates) > 1:
        original = next(w for w in duplicates if w["_id"] != created["_id"])
        endpoint.delete(created["_id"])
        print(f"♻️ Workflow already exists: {original['_id']}")
        return original
    print(f"✅ Workflow created: {created['_id']}")
    return created
```
Potential race condition with concurrent workflow creation.
If two processes call get_or_create_workflow simultaneously with identical workflows, both will create entries, both will detect duplicates, and both may attempt to delete the other's entry. This could result in neither workflow surviving or unexpected errors.
For typical notebook usage this is low-risk, but consider adding a try/except around the delete call to handle cases where the "duplicate" was already deleted by another process.
🛡️ Suggested defensive handling
```diff
     if len(duplicates) > 1:
         original = next(w for w in duplicates if w["_id"] != created["_id"])
-        endpoint.delete(created["_id"])
+        try:
+            endpoint.delete(created["_id"])
+        except Exception:
+            pass  # Already deleted by concurrent process
         print(f"♻️ Workflow already exists: {original['_id']}")
         return original
```
return original🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@utils/api.py` around lines 153 - 176, The function get_or_create_workflow can
race when two processes create identical workflows; wrap the
endpoint.delete(created["_id"]) call in a try/except that catches deletion
errors (e.g., not-found or conflict) so a second process that already removed
the duplicate won't raise, and after catching an exception re-query duplicates
(using endpoint.list with {"hash": created["hash"], "owner._id": owner_id}) to
return the surviving original if present; ensure the exception path logs or
silently continues and ultimately returns the existing workflow rather than
propagating the delete error.
Actionable comments posted: 4
🧹 Nitpick comments (1)
pyproject.toml (1)
17-19: Migrate from git-pinned dependencies to published releases.

Both `mat3ra-ide` and `mat3ra-api-client` have stable releases available (latest: `2025.11.19-0` and `2026.2.23-0` respectively). Replace the git-pinned commits with versioned PyPI releases to improve dependency management, enable better tooling support, and simplify maintenance.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyproject.toml` around lines 17 - 19, Replace the git-URL pinned dependencies in pyproject.toml for mat3ra-ide and mat3ra-api-client with their published PyPI release versions: change the entries that currently read "mat3ra-ide @ git+https://github.com/Exabyte-io/ide.git@15dbf8c8..." and "mat3ra-api-client @ git+https://github.com/Exabyte-io/api-client.git@b309650..." to standard versioned requirements using the package names mat3ra-ide==2025.11.19-0 and mat3ra-api-client==2026.2.23-0 (or the canonical version specifier supported by your build backend), then run your dependency resolver / poetry lock to verify and update lockfile.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@other/materials_designer/workflows/total_energy.ipynb`:
- Around line 52-58: Update the duplicate section heading: locate the markdown
cell whose source contains "### 1.1. Set parameters and configurations for the
workflow and job" (cell id "3") and change the heading token from "1.1" to "1.2"
so it reads "### 1.2. Set parameters and configurations for the workflow and
job".
- Around line 449-461: The code assigns cluster = next((c for c in clusters if
c.fqdn == CLUSTER_NAME), clusters[0] if clusters else None) and then
instantiates Compute(...) and prints cluster.fqdn which will raise if clusters
is empty; modify the logic to detect an empty clusters list before creating the
Compute object (e.g., if not clusters: raise a meaningful exception or exit with
a clear error), or provide a fallback/validation that ensures cluster is not
None, then pass that validated cluster into Compute and only call cluster.fqdn
in the print after the check; update references in this block (clusters,
CLUSTER_NAME, cluster, Compute) accordingly.
- Around line 225-234: The code assumes a default project exists and directly
indexes projects[0], which can raise an IndexError; update the block that calls
client.projects.list(...) and assigns project_id to first element (projects[0])
to first check if the returned projects list is non-empty (e.g., if not
projects: log/raise a clear error mentioning ACCOUNT_ID and that no default
project was found, or handle creation/selection flow), and only then set
project_id = projects[0]["_id"] and print the message; reference the variables
client.projects.list, projects, project_id, and ACCOUNT_ID when making the
change.
- Around line 119-133: The notebook hardcodes OIDC_ACCESS_TOKEN (and sets it
into os.environ), which must be removed; replace the hardcoded value with a
non-secret placeholder and load the token from the environment
(os.environ.get("OIDC_ACCESS_TOKEN")) or prompt the user to input it at runtime,
and stop writing a real token into source—update the lines that define
OIDC_ACCESS_TOKEN and the subsequent os.environ assignment so they read from the
environment or an interactive prompt and include a clear placeholder/comment
instructing users to set OIDC_ACCESS_TOKEN before running.
---
Nitpick comments:
In `@pyproject.toml`:
- Around line 17-19: Replace the git-URL pinned dependencies in pyproject.toml
for mat3ra-ide and mat3ra-api-client with their published PyPI release versions:
change the entries that currently read "mat3ra-ide @
git+https://github.com/Exabyte-io/ide.git@15dbf8c8..." and "mat3ra-api-client @
git+https://github.com/Exabyte-io/api-client.git@b309650..." to standard
versioned requirements using the package names mat3ra-ide==2025.11.19-0 and
mat3ra-api-client==2026.2.23-0 (or the canonical version specifier supported by
your build backend), then run your dependency resolver / poetry lock to verify
and update lockfile.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- config.yml
- other/materials_designer/workflows/band_gap.ipynb
- other/materials_designer/workflows/total_energy.ipynb
- pyproject.toml
```
{
  "cell_type": "markdown",
  "id": "3",
  "metadata": {},
  "source": [
    "### 1.1. Set parameters and configurations for the workflow and job"
  ]
```
Duplicate section number.
Section "1.1" is used twice - first for "Install packages (JupyterLite)" (line 30) and again for "Set parameters and configurations" (line 57). The second occurrence should be "1.2".
📝 Suggested fix
```diff
-    "### 1.1. Set parameters and configurations for the workflow and job"
+    "### 1.2. Set parameters and configurations for the workflow and job"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@other/materials_designer/workflows/total_energy.ipynb` around lines 52 - 58,
Update the duplicate section heading: locate the markdown cell whose source
contains "### 1.1. Set parameters and configurations for the workflow and job"
(cell id "3") and change the heading token from "1.1" to "1.2" so it reads "###
1.2. Set parameters and configurations for the workflow and job".
```
"source": [
  "import os\n",
  "\n",
  "os.environ[\"API_HOST\"] = \"localhost\"\n",
  "os.environ[\"API_PORT\"] = \"3000\"\n",
  "os.environ[\"API_SECURE\"] = \"false\"\n",
  "\n",
  "API_HOST = \"localhost\"\n",
  "API_PORT = \"3000\"\n",
  "API_SECURE = \"false\"\n",
  "API_VERSION = \"2018-10-01\"\n",
  "\n",
  "OIDC_ACCESS_TOKEN = '_qHzr2miPGLP3xlnV-n2E_k9Wdr_3xVcbcgztVwk-Sx'\n",
  "os.environ[\"OIDC_ACCESS_TOKEN\"] = OIDC_ACCESS_TOKEN"
]
```
Critical: Remove hardcoded access token from notebook.
Line 131 contains a hardcoded OIDC_ACCESS_TOKEN that will be committed to version control. Even if intended for local development, this poses a security risk:
- The token may grant access to real resources
- It creates a bad example for users who might commit their own tokens
- It violates secrets management best practices
Replace with a placeholder and prompt users to set their own token.
🔒 Suggested fix
```diff
-OIDC_ACCESS_TOKEN = '_qHzr2miPGLP3xlnV-n2E_k9Wdr_3xVcbcgztVwk-Sx'
-os.environ["OIDC_ACCESS_TOKEN"] = OIDC_ACCESS_TOKEN
+# Set your token here for local development (do not commit real tokens!)
+OIDC_ACCESS_TOKEN = os.environ.get("OIDC_ACCESS_TOKEN", "")
+if not OIDC_ACCESS_TOKEN:
+    raise ValueError("OIDC_ACCESS_TOKEN environment variable must be set for local development")
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```
"source": [
  "import os\n",
  "\n",
  "os.environ[\"API_HOST\"] = \"localhost\"\n",
  "os.environ[\"API_PORT\"] = \"3000\"\n",
  "os.environ[\"API_SECURE\"] = \"false\"\n",
  "\n",
  "API_HOST = \"localhost\"\n",
  "API_PORT = \"3000\"\n",
  "API_SECURE = \"false\"\n",
  "API_VERSION = \"2018-10-01\"\n",
  "\n",
  "# Set your token here for local development (do not commit real tokens!)\n",
  "OIDC_ACCESS_TOKEN = os.environ.get(\"OIDC_ACCESS_TOKEN\", \"\")\n",
  "if not OIDC_ACCESS_TOKEN:\n",
  "    raise ValueError(\"OIDC_ACCESS_TOKEN environment variable must be set for local development\")"
]
```
🧰 Tools
🪛 Gitleaks (8.30.0)
[high] 131-131: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@other/materials_designer/workflows/total_energy.ipynb` around lines 119 -
133, The notebook hardcodes OIDC_ACCESS_TOKEN (and sets it into os.environ),
which must be removed; replace the hardcoded value with a non-secret placeholder
and load the token from the environment (os.environ.get("OIDC_ACCESS_TOKEN")) or
prompt the user to input it at runtime, and stop writing a real token into
source—update the lines that define OIDC_ACCESS_TOKEN and the subsequent
os.environ assignment so they read from the environment or an interactive prompt
and include a clear placeholder/comment instructing users to set
OIDC_ACCESS_TOKEN before running.
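One possible shape for the env-or-prompt flow the prompt describes. The helper name is made up for illustration; `getpass` keeps the typed token out of the notebook source and output:

```python
import os
from getpass import getpass


def resolve_token(env_var: str = "OIDC_ACCESS_TOKEN") -> str:
    """Read the access token from the environment, falling back to an
    interactive prompt so no secret ever lands in the notebook source."""
    token = os.environ.get(env_var, "").strip()
    if not token:
        # Only prompts when the environment variable is unset or empty.
        token = getpass(f"Enter {env_var}: ").strip()
    if not token:
        raise ValueError(f"{env_var} must be set before running this notebook")
    os.environ[env_var] = token
    return token
```

When the variable is already exported in the shell, the helper returns it without prompting, so automated runs stay non-interactive.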
```
"cell_type": "code",
"execution_count": null,
"id": "15",
"metadata": {},
"outputs": [],
"source": [
  "projects = client.projects.list({\"isDefault\": True, \"owner._id\": ACCOUNT_ID})\n",
  "project_id = projects[0][\"_id\"]\n",
  "print(f\"✅ Using project: {projects[0]['name']} ({project_id})\")"
]
```
Add validation for empty projects list.
Line 232 accesses projects[0] without checking if a default project exists. If no default project is configured, this will raise an IndexError.
🛡️ Suggested fix
```diff
 projects = client.projects.list({"isDefault": True, "owner._id": ACCOUNT_ID})
+if not projects:
+    raise ValueError("No default project found. Please create a default project in your account.")
 project_id = projects[0]["_id"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@other/materials_designer/workflows/total_energy.ipynb` around lines 225 -
234, The code assumes a default project exists and directly indexes projects[0],
which can raise an IndexError; update the block that calls
client.projects.list(...) and assigns project_id to first element (projects[0])
to first check if the returned projects list is non-empty (e.g., if not
projects: log/raise a clear error mentioning ACCOUNT_ID and that no default
project was found, or handle creation/selection flow), and only then set
project_id = projects[0]["_id"] and print the message; reference the variables
client.projects.list, projects, project_id, and ACCOUNT_ID when making the
change.
```
"source": [
  "from mat3ra.ide.compute import Compute\n",
  "\n",
  "# Select first available cluster or use specified name\n",
  "cluster = next((c for c in clusters if c.fqdn == CLUSTER_NAME), clusters[0] if clusters else None)\n",
  "\n",
  "compute = Compute(\n",
  "    cluster=cluster,\n",
  "    queue=QUEUE_NAME,\n",
  "    ppn=PPN\n",
  ")\n",
  "\n",
  "print(f\"Using cluster: {cluster.fqdn}, queue: {QUEUE_NAME}, ppn: {PPN}\")"
]
```
Handle empty clusters list gracefully.
Line 453 sets cluster to None when no clusters are available, but the code continues to create a Compute object and access cluster.fqdn on line 461, which would raise an AttributeError.
🛡️ Suggested fix
```diff
 # Select first available cluster or use specified name
 cluster = next((c for c in clusters if c.fqdn == CLUSTER_NAME), clusters[0] if clusters else None)
+if not cluster:
+    raise ValueError("No clusters available. Please check your account configuration.")
+
 compute = Compute(
     cluster=cluster,
     queue=QUEUE_NAME,
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```
"source": [
  "from mat3ra.ide.compute import Compute\n",
  "\n",
  "# Select first available cluster or use specified name\n",
  "cluster = next((c for c in clusters if c.fqdn == CLUSTER_NAME), clusters[0] if clusters else None)\n",
  "\n",
  "if not cluster:\n",
  "    raise ValueError(\"No clusters available. Please check your account configuration.\")\n",
  "\n",
  "compute = Compute(\n",
  "    cluster=cluster,\n",
  "    queue=QUEUE_NAME,\n",
  "    ppn=PPN\n",
  ")\n",
  "\n",
  "print(f\"Using cluster: {cluster.fqdn}, queue: {QUEUE_NAME}, ppn: {PPN}\")"
]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@other/materials_designer/workflows/total_energy.ipynb` around lines 449 -
461, The code assigns cluster = next((c for c in clusters if c.fqdn ==
CLUSTER_NAME), clusters[0] if clusters else None) and then instantiates
Compute(...) and prints cluster.fqdn which will raise if clusters is empty;
modify the logic to detect an empty clusters list before creating the Compute
object (e.g., if not clusters: raise a meaningful exception or exit with a clear
error), or provide a fallback/validation that ensures cluster is not None, then
pass that validated cluster into Compute and only call cluster.fqdn in the print
after the check; update references in this block (clusters, CLUSTER_NAME,
cluster, Compute) accordingly.
Summary by CodeRabbit
New Features
Dependencies