Merged
50 commits
49f267d
Update __init__.py
lmoresi Mar 26, 2025
0a35525
Update NextSteps.qmd
lmoresi Mar 27, 2025
0b3a4b3
Update UW-Background.bib
lmoresi Mar 27, 2025
dc8887d
Update build_deploy_pdoc.yml
lmoresi Mar 28, 2025
49c09b8
Alternative to .vtk for mesh to pyvista
lmoresi Jun 24, 2025
91bec90
"Fix" for pointwise function test
lmoresi Jun 24, 2025
ae32ac5
Geometry tools - add point in simplex etc
lmoresi Jun 24, 2025
bd8bbb3
Formatting changes
lmoresi Jun 24, 2025
4670fe8
Formatting changes
lmoresi Jun 24, 2025
086c0d7
Hiding some functions
lmoresi Jun 24, 2025
4841242
BASIC swarm
lmoresi Jun 24, 2025
a53470b
Hiding some functions
lmoresi Jun 24, 2025
8d6c94b
Advection for BASIC swarm
lmoresi Jun 28, 2025
20c4731
Merge branch 'swarm-rework-lm-2025' into swarm-rework-dev-merged
lmoresi Jun 28, 2025
2722460
kdtree updates propagating into swarm rework code
lmoresi Jun 28, 2025
9103066
Updating conda env file
julesghub Jun 30, 2025
b1b8dca
Mesh class: clean up code and update KDTree
lmoresi Jul 1, 2025
4cc1209
Add docstrings to swarm classes. NodalPointBasicSwarm
lmoresi Jul 1, 2025
ee394d2
NodalSwarm rework
lmoresi Jul 3, 2025
0778b7d
Fix uw.function.evaluate / eliminate evalf
lmoresi Jul 7, 2025
0c6ed17
Adding back nanoflann code temporarily
lmoresi Jul 7, 2025
f1daed6
Swarm -> evalf rework
lmoresi Jul 7, 2025
5f33fb3
Adding rbf as alternative to evalf in function.evaluate
lmoresi Jul 8, 2025
f141acb
Update github action: build+test+deploy dev docker
julesghub Jul 24, 2025
bd4116e
Merge branch 'development' of github.com:underworldcode/underworld3 i…
julesghub Jul 24, 2025
cf66bdb
Merge pull request #5 from underworldcode/joss-submission
lmoresi Jul 24, 2025
b8becc6
Minor fixes to quickstart
lmoresi Jul 24, 2025
b291ae9
update for dev docker gh action file
julesghub Jul 24, 2025
c757223
Typo fix
julesghub Jul 24, 2025
861bc10
Add conditional update to GH action
julesghub Jul 24, 2025
5428076
fixes
julesghub Jul 24, 2025
603bcf5
Update docker-image.yml
julesghub Jul 24, 2025
a43af52
Just make a dev docker with no dependency of other workflows
julesghub Jul 24, 2025
9a71dc6
Update syntax in Dockerfile
julesghub Jul 24, 2025
0de2376
Merge pull request #2 from underworldcode/swarm-rework-dev-merged
lmoresi Jul 24, 2025
7482989
some small corrections to PR 2
Jul 25, 2025
0f5c911
Merge pull request #7 from jcgraciosa/swarm-edit
jcgraciosa Jul 25, 2025
5ddea7e
Update README.md
julesghub Jul 25, 2025
9bedea1
Update README.md
julesghub Jul 25, 2025
6dd542c
Merge branch 'main' into development
lmoresi Jul 28, 2025
8e290d5
Update tests/test_1130_IndexSwarmVariable.py
lmoresi Jul 28, 2025
9c9700a
Update tests/test_1100_AdvDiffCartesian.py
lmoresi Jul 28, 2025
2db9069
Update src/underworld3/systems/ddt.py
lmoresi Jul 28, 2025
bd209c4
Changes to notebooks and documentation
lmoresi Jul 28, 2025
ea5b9e6
Adding some more explanations / references
lmoresi Jul 28, 2025
6209bf5
Merge pull request #8 from underworldcode/development
lmoresi Jul 28, 2025
3ef4559
Update paper.md
lmoresi Jul 29, 2025
73979d8
Merge pull request #9 from underworldcode/joss-revision-fixes
lmoresi Jul 29, 2025
74fb33b
Fixes noticed by copilot
lmoresi Jul 29, 2025
3338840
Merge pull request #11 from underworldcode/development
lmoresi Jul 29, 2025
38 changes: 0 additions & 38 deletions .github/workflows/CI.yml

This file was deleted.

16 changes: 3 additions & 13 deletions .github/workflows/build_uw3_and_test.yml
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
name: Build and test UW3
name: test_uw3

# We should trigger this from an upload event. Note that pdoc requires us to import the
# built code, so this is a building test as well as documentation deployment
@@ -15,7 +15,7 @@ on:
workflow_dispatch:

jobs:
deploy:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
@@ -27,12 +27,6 @@ jobs:
cache-downloads: true
cache-environment: true

# # gmsh is such a pig to install properly
# - name: Add gmsh package
# shell: bash -l {0}
# run: |
# pip install gmsh

- name: Build UW3
shell: bash -l {0}
run: |
@@ -42,12 +36,8 @@

## TODO. Use compile.sh once it is in development
pip install -e . --no-build-isolation
## ./compile.sh

# Test - split into short, low memory tests 0???_*
# and longer, solver-based tests 1???_*

- name: Run pytest
- name: Run tests
shell: bash -l {0}
run: |
./test.sh
23 changes: 12 additions & 11 deletions .github/workflows/docker-image.yml
@@ -1,32 +1,33 @@
name: Docker Image CI
name: Image Build and Push

on:
push:
branches: ["development"]
push:
branches:
- development

jobs:
build:
push-to-dockerhub:
runs-on: ubuntu-latest

steps:
- name: Checkout repository
uses: actions/checkout@v3
uses: actions/checkout@v4

- name: Exact branch name
run: echo "BRANCH=${GITHUB_REF##*/}" >> $GITHUB_ENV

- name: Login to DockerHub
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PWORD }}
username: ${{ secrets.XXX_USERNAME }}
password: ${{ secrets.XXX_PWORD }}

- name: Build and push Docker image
uses: docker/build-push-action@v4.1.1
uses: docker/build-push-action@v6
with:
context: .
push: true
file: ./Dockerfile
platforms: linux/amd64
# see https://github.com/docker/build-push-action/issues/276 for syntax help
tags: julesg/underworld3:${{ env.BRANCH }}
#-$(date +%s)
tags: underworldcode/underworld3:${{ env.BRANCH }} #-$(date +%s)
48 changes: 48 additions & 0 deletions Dockerfile
@@ -0,0 +1,48 @@
# syntax=docker/dockerfile:1.7-labs

### how to build docker image

# (1) run from the underworld3 top directory
# podman build . \
# --rm \
# -f ./.github/Dockerfile \
# --format docker \
# -t underworldcode/underworld3:0.99

### needs the --format tag to run on podman

FROM docker.io/mambaorg/micromamba:2.3.0

USER $MAMBA_USER
ENV NB_HOME=/home/$MAMBA_USER

# create the env
COPY --chown=$MAMBA_USER:$MAMBA_USER environment.yml /tmp/env.yaml
RUN micromamba install -y -n base -f /tmp/env.yaml && \
micromamba clean --all --yes

# activate mamba env during `docker build`
ARG MAMBA_DOCKERFILE_ACTIVATE=1

# install UW3
WORKDIR /tmp
COPY --exclude=**/.git \
--chown=$MAMBA_USER:$MAMBA_USER \
. /tmp/underworld3
WORKDIR /tmp/underworld3

RUN pip install --no-build-isolation --no-cache-dir .

# copy files across
RUN mkdir -p $NB_HOME/workspace

COPY --chown=$MAMBA_USER:$MAMBA_USER ./tests $NB_HOME/Underworld/tests

EXPOSE 8888
WORKDIR $NB_HOME
USER $MAMBA_USER

# Declare a volume space
VOLUME $NB_HOME/workspace

CMD ["jupyter-lab", "--no-browser", "--ip=0.0.0.0"]
11 changes: 10 additions & 1 deletion README.md
Expand Up @@ -8,7 +8,16 @@ Welcome to `Underworld3`, a mathematically self-describing, finite-element code

All `Underworld3` source code is released under the LGPL-3 open source licence. This covers all files in `underworld3` constituting the Underworld3 Python module. Notebooks, stand-alone documentation and Python scripts which show how the code is used and run are licensed under the Creative Commons Attribution 4.0 International License.

[![Build and test UW3](https://github.com/underworldcode/underworld3/actions/workflows/build_uw3_and_test.yml/badge.svg?branch=main)](https://github.com/underworldcode/underworld3/actions/workflows/build_uw3_and_test.yml)
## Status

main branch

[![test_uw3](https://github.com/underworldcode/underworld3/actions/workflows/build_uw3_and_test.yml/badge.svg?branch=main)](https://github.com/underworldcode/underworld3/actions/workflows/build_uw3_and_test.yml)


development branch

[![test_uw3](https://github.com/underworldcode/underworld3/actions/workflows/build_uw3_and_test.yml/badge.svg?branch=development)](https://github.com/underworldcode/underworld3/actions/workflows/build_uw3_and_test.yml)

## Documentation

2 changes: 2 additions & 0 deletions docs/joss-paper/paper.md
Expand Up @@ -67,6 +67,8 @@ Users of `underworld3` typically develop python scripts within `jupyter` noteboo

# Statement of need

Problems in geodynamics typically require computing material deformation, damage evolution, and interface tracking in the large-deformation limit. These are not well supported by standard engineering finite-element simulation codes. Underworld is a Python software framework intended to solve geodynamics problems that sit at the interface between computational fluid mechanics and solid mechanics (often known as *complex fluids*). It does so by putting Lagrangian and Eulerian variables on an equal footing at both the user and computational levels.

Underworld is built around a general, symbolic partial differential equation solver but provides template forms to solve common geophysical fluid dynamics problems such as the Stokes equation for mantle convection, subduction-zone evolution, lithospheric deformation, glacial isostatic adjustment, ice flow; Navier-Stokes equations for finite Prandtl number fluid flow and short-timescale, viscoelastic deformation; and Darcy Flow for porous media problems including groundwater flow and contaminant transport.

These problems have a number of defining characteristics: geomaterials are non-linear, viscoelastic/plastic and have a propensity for strain-dependent softening during deformation; strain localisation is very common as a consequence. Geological structures that we seek to understand are often emergent over the course of loading and are observed in the very-large deformation limit. Material properties have strong spatial gradients arising from pressure and temperature dependence and jumps of several orders of magnitude resulting from material interfaces.
35 changes: 32 additions & 3 deletions docs/user/NextSteps.qmd
Expand Up @@ -16,9 +16,11 @@ In addition to the notebooks in this brief set of examples, there are a number o

- [The Underworld Website / Blog](https://www.underworldcode.org)

- [The API documentation](https://underworldcode.github.io/underworld3/main_api/underworld3/index.html)
- [The API documentation](https://underworldcode.github.io/underworld3/main_api/underworld3/index.html)
(all the modules and functions and their full sets of arguments) is automatically generated from the source code and uses the same rich markdown content as the notebook help text.

- [The API documentation (development branch)](https://underworldcode.github.io/underworld3/development_api/underworld3/index.html)

- The [`underworld3` GitHub repository](https://github.com/underworldcode/underworld3) is the most active development community for the code.


@@ -44,6 +46,33 @@ Almost all of our notebook examples are annotated python for this reason. An exce

The main difference between the notebook development environment and HPC is the lack of interactivity, particularly in sending parameters to the script at launch time. Typically, we expect the HPC version to be running at much higher resolution, or for many more timesteps than the development notebook. We use the `PETSc` command line parsing machinery to generate notebooks that also can ingest run-time parameters from a script (as above).

#### Parallel scaling / performance

Running geodynamic models on a single CPU/processor (i.e. in serial) is time-consuming and limits us to low resolution. Underworld is built from the ground up as a parallel computing solution, which means we can easily run large models on high-performance computing (HPC) clusters; that is, we sub-divide the problem into many smaller chunks and use multiple processors to solve each one, taking care to combine and synchronise the answers from each processor to obtain the correct solution to the original problem.

Parallel computation can reduce the time we need to wait for our results, but it comes at the expense of some overhead. The overhead depends on the nature of the computer we are using, but typically we need to think about:

- **Code complexity**: any time we manage computations across different processors, we need additional code to reassemble the calculations correctly, and we need to think about many special cases. For example, integrating a quantity over the surface of a mesh: many processes contribute, some do not, and the results have to be computed independently and then combined.

- **Additional memory is often required**: to manage copies of information that lives on or near boundaries, to store the topology of the decomposed domain, and to help navigate the passing of information between processes.

- **The time taken to synchronise results**, and the work required to keep track of who is doing what, when they are done, and to make sure everyone waits for everyone else. There is a time cost in actually sending information as part of a synchronisation, and a computational cost in ensuring that work is distributed efficiently.

To determine the efficiency of parallel computation, we use the *strong scaling test*, which measures the time taken to solve a problem in parallel compared with the same problem solved in serial. In a strong scaling test, the size of the problem is kept constant while the number of processors is increased. The reduction in run-time due to the addition of more processors is commonly expressed in terms of the speed-up:

$$
\textrm{speed up} = \frac{t(N_{ref})}{t(N)}
$$

where $t(N_{ref})$ is the run-time for a reference number of processors, $N_{ref}$, and $t(N)$ is the run-time when $N$ processors are used. In the ideal case, every processor contributes all of its resources to the problem, and the compute time falls by a factor of $N / N_{ref}$ relative to the reference run-time. For example, using $2 N_{ref}$ processors will ideally halve the run-time, giving a speed-up of 2.
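As a concrete check of the formula, a short script can tabulate speed-up and parallel efficiency (achieved speed-up divided by the ideal $N / N_{ref}$) from a set of timings. The wall-clock times below are hypothetical, not measured GADI results:

```python
def speedup(t_ref: float, t_n: float) -> float:
    """Speed-up of a run relative to the reference run-time."""
    return t_ref / t_n

def efficiency(t_ref: float, n_ref: int, t_n: float, n: int) -> float:
    """Achieved speed-up divided by the ideal speed-up N / N_ref."""
    return speedup(t_ref, t_n) / (n / n_ref)

# Hypothetical wall-clock times (seconds) from a strong-scaling test
timings = {1: 1000.0, 2: 520.0, 4: 280.0, 8: 160.0}
n_ref = 1

for n, t in timings.items():
    s = speedup(timings[n_ref], t)
    e = efficiency(timings[n_ref], n_ref, t, n)
    print(f"N={n:2d}  speed-up={s:5.2f}  efficiency={e:5.1%}")
```

Efficiency below 100% reflects the overheads discussed above; in practice it falls further as communication begins to dominate the shrinking per-processor workload.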

::: {#fig-strong-scaling}

![](media/UW3-StrongScalingSolvers.png)

Strong parallel-scaling tests run on Australia's peak computing system, [GADI, at the National Computational Infrastructure](https://nci.org.au/our-systems/hpc-systems?ref=underworldcode.org). This is a typical High Performance Computing facility with large numbers of dedicated, identical CPUs and fast communication links.
:::


### Advanced capabilities

@@ -77,11 +106,11 @@ It is also possible to use the PETSc mesh adaption capabilities to refine the r

```{=html}
<center>
<iframe src="media/pyvista/AdaptedSphere.html" width="600" height="300">
<iframe src="media/pyvista/AdaptedSphere.html" width="600" height="500">
</iframe>
</center>
```
*Live Image: Static mesh adaptation to the slope of a field. The driving buoyancy term is three plume-like upwellings and the slope of this field is shown in colour (red high, blue low). The adapted mesh is shown in green.*
*Live Image: Static mesh adaptation to the slope of a field. The driving buoyancy term is a plume-like upwelling, and the slope of this field is shown in colour (red high, blue low). Don't forget to zoom in!*

```python
