diff --git a/README.md b/README.md
index 48ebb87e9..368f6528c 100644
--- a/README.md
+++ b/README.md
@@ -386,4 +386,12 @@ Example shows the use cases of using MONAI to evaluate the performance of a gene
#### [VISTA2D](./vista_2d)
This tutorial demonstrates how to train a cell segmentation model using the [MONAI](https://monai.io/) framework and the [Segment Anything Model (SAM)](https://github.com/facebookresearch/segment-anything) on the [Cellpose dataset](https://www.cellpose.org/).
-ECHO°¡ ¼³Á¤µÇ¾î ÀÖ½À´Ï´Ù.
+
+#### **Reconstruction**
+##### [K-Space Basics with fastMRI Knee Data](./reconstruction/MRI_reconstruction/tutorials/01_kspace_basics_fastmri_knee.ipynb)
+This tutorial introduces MRI reconstruction fundamentals: what k-space is, how the Fourier transform connects k-space to images, why undersampling causes aliasing, and how MONAI's reconstruction transforms process k-space data. Uses the fastMRI knee single-coil dataset.
+##### [U-Net MRI Reconstruction](./reconstruction/MRI_reconstruction/unet_demo)
+Training and inference for accelerated MRI reconstruction using BasicUNet on the fastMRI brain multi-coil dataset.
+##### [VarNet MRI Reconstruction](./reconstruction/MRI_reconstruction/varnet_demo)
+Training and inference for accelerated MRI reconstruction using e2e-VarNet on the fastMRI brain multi-coil dataset.
diff --git a/reconstruction/MRI_reconstruction/tutorials/01_kspace_basics_fastmri_knee.ipynb b/reconstruction/MRI_reconstruction/tutorials/01_kspace_basics_fastmri_knee.ipynb
new file mode 100644
index 000000000..2143393de
--- /dev/null
+++ b/reconstruction/MRI_reconstruction/tutorials/01_kspace_basics_fastmri_knee.ipynb
@@ -0,0 +1,779 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Copyright (c) MONAI Consortium \n",
+ "Licensed under the Apache License, Version 2.0 (the \"License\"); \n",
+ "you may not use this file except in compliance with the License. \n",
+ "You may obtain a copy of the License at \n",
+ " http://www.apache.org/licenses/LICENSE-2.0 \n",
+ "Unless required by applicable law or agreed to in writing, software \n",
+ "distributed under the License is distributed on an \"AS IS\" BASIS, \n",
+ "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. \n",
+ "See the License for the specific language governing permissions and \n",
+ "limitations under the License.\n",
+ "\n",
+ "# Understanding K-Space and MRI Reconstruction: A Beginner's Guide\n",
+ "\n",
+ "This tutorial introduces the fundamentals of MRI reconstruction using the fastMRI knee single-coil dataset and MONAI. You will learn:\n",
+ "\n",
+ "* What k-space is and how it relates to MRI images via the Fourier transform\n",
+ "* Why undersampling k-space causes aliasing artifacts\n",
+ "* How MONAI's reconstruction transforms process k-space data\n",
+ "* How undersampling masks work and their effect on image quality\n",
+ "* How these concepts connect to deep learning-based MRI reconstruction\n",
+ "\n",
+ "**Prerequisites:** Basic Python and NumPy. No prior MRI knowledge required.\n",
+ "\n",
+ "**Dataset:** [fastMRI](https://fastmri.org/dataset) knee single-coil (requires registration, non-commercial license)\n",
+ "\n",
+    "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/main/reconstruction/MRI_reconstruction/tutorials/01_kspace_basics_fastmri_knee.ipynb)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Setup environment"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "!python -c \"import monai\" || pip install -q \"monai-weekly[pillow, tqdm]\"\n",
+ "!python -c \"import matplotlib\" || pip install -q matplotlib\n",
+ "!python -c \"import h5py\" || pip install -q h5py\n",
+ "%matplotlib inline"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Setup imports"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import shutil\n",
+ "import tempfile\n",
+ "\n",
+ "import h5py\n",
+ "import matplotlib.pyplot as plt\n",
+ "import numpy as np\n",
+ "import torch\n",
+ "\n",
+ "from monai.apps.reconstruction.fastmri_reader import FastMRIReader\n",
+ "from monai.apps.reconstruction.transforms.dictionary import (\n",
+ " EquispacedKspaceMaskd,\n",
+ " RandomKspaceMaskd,\n",
+ " ReferenceBasedNormalizeIntensityd,\n",
+ ")\n",
+ "from monai.config import print_config\n",
+ "from monai.transforms import (\n",
+ " CenterSpatialCropd,\n",
+ " Compose,\n",
+ " EnsureTyped,\n",
+ " Lambdad,\n",
+ " LoadImaged,\n",
+ " ThresholdIntensityd,\n",
+ ")\n",
+ "\n",
+ "print_config()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Setup data directory\n",
+ "\n",
+ "You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable.\n",
+ "This allows you to save results and reuse downloads.\n",
+    "If not specified, a temporary directory will be used."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "directory = os.environ.get(\"MONAI_DATA_DIRECTORY\")\n",
+ "if directory is not None:\n",
+ " os.makedirs(directory, exist_ok=True)\n",
+ "root_dir = tempfile.mkdtemp() if directory is None else directory\n",
+ "print(root_dir)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## About the Dataset\n",
+ "\n",
+ "This tutorial uses the [fastMRI](https://fastmri.org/dataset) **knee single-coil** dataset from the NYU fastMRI Initiative.\n",
+ "\n",
+ "**Why knee single-coil?**\n",
+ "- Much smaller than brain multi-coil (~19 GB validation set vs. ~1.5 TB)\n",
+ "- No coil combination complexity — lets us focus purely on reconstruction\n",
+ "- Standard entry point in the MRI reconstruction literature\n",
+ "\n",
+ "**How to obtain the data:**\n",
+ "1. Register at https://fastmri.org/dataset\n",
+ "2. Download the knee single-coil validation set (`knee_singlecoil_val.tar.gz`)\n",
+ "3. Extract to a folder (e.g., `/knee_singlecoil_val/`)\n",
+ "\n",
+ "**Note:** This dataset is under a non-commercial license. You may not use it for commercial purposes.\n",
+ "\n",
+ "**For this tutorial, you only need a single `.h5` file** (~200-400 MB). Each HDF5 file contains:\n",
+ "- `kspace`: Complex-valued k-space data (shape: `[num_slices, height, width]`)\n",
+ "- `reconstruction_esc`: Ground truth images reconstructed from fully-sampled k-space"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Update this path to where your fastMRI knee single-coil data is stored.\n",
+ "# You only need ONE .h5 file from the knee_singlecoil_val set.\n",
+ "data_path = os.path.join(root_dir, \"knee_singlecoil_val\")\n",
+    "\n",
+    "if not os.path.isdir(data_path):\n",
+    "    raise FileNotFoundError(\n",
+    "        f\"{data_path} not found. Download knee_singlecoil_val from https://fastmri.org/dataset\"\n",
+    "    )\n",
+    "\n",
+    "sample_files = sorted(f for f in os.listdir(data_path) if f.endswith(\".h5\"))\n",
+    "if not sample_files:\n",
+    "    raise FileNotFoundError(f\"No .h5 files found in {data_path}\")\n",
+    "sample_file = os.path.join(data_path, sample_files[0])\n",
+ "print(f\"Using sample file: {sample_file}\")\n",
+ "print(f\"Total .h5 files found: {len(sample_files)}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Part 1: What is K-Space?\n",
+ "\n",
+ "MRI scanners do **not** directly capture images. Instead, they measure radio-frequency (RF) signals emitted by hydrogen atoms in the body when placed in a strong magnetic field. These raw measurements live in a domain called **k-space** (spatial frequency domain).\n",
+ "\n",
+ "Key intuitions:\n",
+ "- Each point in k-space encodes information about **all** pixels in the image (global frequency information)\n",
+ "- The **inverse Fourier transform** converts k-space data into a viewable image\n",
+ "- Think of it this way: k-space is to an MRI image what a musical recording is to sheet music — same information, different representation\n",
+ "\n",
+ "Let's load a real k-space measurement and see what it looks like."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "with h5py.File(sample_file, \"r\") as hf:\n",
+ " kspace = hf[\"kspace\"][()] # shape: (num_slices, height, width)\n",
+ " ground_truth = hf[\"reconstruction_esc\"][()]\n",
+ " print(f\"K-space shape: {kspace.shape}\")\n",
+ " print(f\"K-space dtype: {kspace.dtype}\")\n",
+ " print(f\"Ground truth shape: {ground_truth.shape}\")\n",
+ "\n",
+ "# Select a middle slice for clear anatomy\n",
+ "slice_idx = kspace.shape[0] // 2\n",
+ "kspace_slice = kspace[slice_idx]\n",
+ "gt_slice = ground_truth[slice_idx]\n",
+ "print(f\"\\nUsing slice {slice_idx} of {kspace.shape[0]}\")\n",
+ "print(f\"Slice k-space shape: {kspace_slice.shape}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "fig, axes = plt.subplots(1, 3, figsize=(18, 6))\n",
+ "\n",
+ "# K-space magnitude (log scale for visibility)\n",
+ "kspace_mag = np.log1p(np.abs(kspace_slice))\n",
+ "axes[0].imshow(kspace_mag, cmap=\"gray\")\n",
+ "axes[0].set_title(\"K-Space (log magnitude)\")\n",
+ "axes[0].axis(\"off\")\n",
+ "\n",
+ "# Reconstruct image via inverse FFT\n",
+    "image_from_kspace = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace_slice)))\n",
+ "image_magnitude = np.abs(image_from_kspace)\n",
+ "axes[1].imshow(image_magnitude, cmap=\"gray\")\n",
+ "axes[1].set_title(\"Image from IFFT of full k-space\")\n",
+ "axes[1].axis(\"off\")\n",
+ "\n",
+ "# Ground truth\n",
+ "axes[2].imshow(gt_slice, cmap=\"gray\")\n",
+ "axes[2].set_title(\"Ground truth (reconstruction_esc)\")\n",
+ "axes[2].axis(\"off\")\n",
+ "\n",
+ "plt.tight_layout()\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "**What you're seeing:**\n",
+ "- **Left:** Raw k-space data displayed on a log scale. The bright center represents low spatial frequencies; the dimmer edges represent high spatial frequencies.\n",
+ "- **Center:** The image reconstructed by applying the inverse FFT to the full k-space. This is what the MRI scanner would produce with a complete measurement.\n",
+ "- **Right:** The ground truth image provided by fastMRI.\n",
+ "\n",
+ "The key takeaway: **k-space and the image are two representations of the same information**, connected by the Fourier transform."
+ ]
+ },
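+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Because the Fourier transform is invertible, no information is lost going between the two domains. As a quick numerical check, applying the forward FFT (with the same shift convention) to the reconstructed image should recover the measured k-space up to floating-point error:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Round trip: image -> k-space should match the measured k-space\n",
+    "kspace_roundtrip = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image_from_kspace)))\n",
+    "max_err = np.max(np.abs(kspace_roundtrip - kspace_slice))\n",
+    "print(f\"Max absolute round-trip error: {max_err:.3e}\")"
+   ]
+  },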
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Part 2: The Fourier Transform Connection\n",
+ "\n",
+ "Different regions of k-space encode different types of image information:\n",
+ "\n",
+ "- **Center of k-space** = low spatial frequencies = overall contrast, large anatomical structures\n",
+ "- **Edges of k-space** = high spatial frequencies = fine details, sharp edges, tissue boundaries\n",
+ "\n",
+ "This is why MRI scans take time — every k-space measurement matters. Let's see what happens when we keep only the center or only the edges."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+    "height, width = kspace_slice.shape\n",
+    "center_fraction = 0.1 # Keep only 10% of k-space in each dimension\n",
+    "center_h = int(height * center_fraction)\n",
+    "center_w = int(width * center_fraction)\n",
+    "\n",
+    "# fastMRI k-space is stored with its low frequencies (DC) already at the\n",
+    "# center of the array, so we can crop frequency regions directly\n",
+    "kspace_centered = kspace_slice\n",
+    "\n",
+    "# Center-only: keep low frequencies, zero out high frequencies\n",
+    "kspace_center = np.zeros_like(kspace_centered)\n",
+    "h_start = height // 2 - center_h // 2\n",
+    "w_start = width // 2 - center_w // 2\n",
+    "kspace_center[h_start : h_start + center_h, w_start : w_start + center_w] = kspace_centered[\n",
+    "    h_start : h_start + center_h, w_start : w_start + center_w\n",
+    "]\n",
+    "\n",
+    "# Edges-only: keep high frequencies, zero out center\n",
+    "kspace_edges = kspace_centered.copy()\n",
+    "kspace_edges[h_start : h_start + center_h, w_start : w_start + center_w] = 0\n",
+    "\n",
+    "fig, axes = plt.subplots(1, 3, figsize=(18, 6))\n",
+    "\n",
+    "# Centered inverse FFT: ifftshift -> ifft2 -> fftshift recenters the image\n",
+    "axes[0].imshow(np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace_centered)))), cmap=\"gray\")\n",
+    "axes[0].set_title(\"Full k-space (all frequencies)\")\n",
+    "axes[0].axis(\"off\")\n",
+    "\n",
+    "axes[1].imshow(np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace_center)))), cmap=\"gray\")\n",
+    "axes[1].set_title(\"Center only (low frequencies)\\nContrast but blurry\")\n",
+    "axes[1].axis(\"off\")\n",
+    "\n",
+    "axes[2].imshow(np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace_edges)))), cmap=\"gray\")\n",
+    "axes[2].set_title(\"Edges only (high frequencies)\\nDetails but no contrast\")\n",
+    "axes[2].axis(\"off\")\n",
+    "\n",
+    "plt.tight_layout()\n",
+    "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "**Observations:**\n",
+ "- **Center only:** The image has overall shape and contrast but is very blurry — all fine detail is lost.\n",
+ "- **Edges only:** You can see tissue boundaries and fine structures, but there is no overall contrast or brightness.\n",
+ "\n",
+ "Both the center and edges are needed for a complete, diagnostic-quality image. This is the fundamental challenge of accelerated MRI: **how can we acquire fewer k-space measurements while still recovering a high-quality image?**"
+ ]
+ },
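+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can also put a number on how concentrated k-space is by computing the fraction of the total signal energy that falls inside the cropped central region:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "center_energy = np.sum(np.abs(kspace_center) ** 2)\n",
+    "total_energy = np.sum(np.abs(kspace_slice) ** 2)\n",
+    "print(f\"Fraction of k-space energy in the central region: {center_energy / total_energy:.4f}\")"
+   ]
+  },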
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Part 3: Why Does Undersampling Cause Aliasing?\n",
+ "\n",
+ "A full MRI scan acquires every line of k-space, which is slow. **Accelerated MRI** skips lines to reduce scan time. If we acquire only every $R$-th line, we achieve $R\\times$ acceleration (e.g., 4$\\times$ means the scan is 4 times faster).\n",
+ "\n",
+ "However, the **Nyquist-Shannon sampling theorem** tells us that uniformly skipping lines creates periodic copies (aliases) of the image that overlap. These appear as ghosting artifacts.\n",
+ "\n",
+ "Let's see this in action across different acceleration factors."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "fig, axes = plt.subplots(2, 4, figsize=(20, 10))\n",
+ "accelerations = [1, 2, 4, 8]\n",
+ "\n",
+    "# fastMRI k-space already has its low frequencies (DC) at the center of\n",
+    "# the array, so it can be masked directly without any fftshift\n",
+    "kspace_centered = kspace_slice\n",
+    "\n",
+    "for idx, accel in enumerate(accelerations):\n",
+    "    # Create uniform undersampling mask (every accel-th line)\n",
+    "    mask = np.zeros(width, dtype=bool)\n",
+    "    mask[::accel] = True\n",
+    "\n",
+    "    # Apply mask to k-space columns\n",
+    "    kspace_masked = kspace_centered.copy()\n",
+    "    kspace_masked[:, ~mask] = 0\n",
+    "\n",
+    "    # Show masked k-space (log magnitude)\n",
+    "    axes[0, idx].imshow(np.log1p(np.abs(kspace_masked)), cmap=\"gray\")\n",
+    "    axes[0, idx].set_title(f\"K-space ({accel}x undersampling)\")\n",
+    "    axes[0, idx].axis(\"off\")\n",
+    "\n",
+    "    # Reconstruct with a centered inverse FFT (ifftshift -> ifft2 -> fftshift)\n",
+    "    recon = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace_masked))))\n",
+ " axes[1, idx].imshow(recon, cmap=\"gray\")\n",
+ " label = \"Fully sampled\" if accel == 1 else \"Aliasing artifacts\"\n",
+ " axes[1, idx].set_title(f\"Reconstruction ({accel}x)\\n{label}\")\n",
+ " axes[1, idx].axis(\"off\")\n",
+ "\n",
+ "plt.suptitle(\"Effect of Uniform Undersampling on Image Quality\", fontsize=14)\n",
+ "plt.tight_layout()\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "**Key observations:**\n",
+ "- At **1x** (no undersampling), the image is clean.\n",
+ "- At **2x**, faint ghosting appears.\n",
+ "- At **4x**, significant aliasing artifacts are visible — ghosted copies of the anatomy overlap with the true image.\n",
+ "- At **8x**, the image is severely corrupted.\n",
+ "\n",
+ "This is the central problem that deep learning-based MRI reconstruction aims to solve: **recovering a clean image from undersampled k-space data.**"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Part 4: Undersampling Masks — Random vs. Equispaced\n",
+ "\n",
+ "In practice, MRI reconstruction methods use more sophisticated undersampling patterns:\n",
+ "\n",
+ "- **Equispaced:** Skip lines at regular intervals (causes coherent aliasing — repeating ghost patterns)\n",
+ "- **Random:** Skip lines at random positions (causes incoherent aliasing — noise-like artifacts that are easier for algorithms to remove)\n",
+ "\n",
+ "Both strategies **always keep the center of k-space fully sampled**, because the low-frequency content is critical for basic image contrast.\n",
+ "\n",
+ "MONAI provides both patterns as transforms:\n",
+ "- `RandomKspaceMaskd`: Random undersampling with a fully-sampled center\n",
+ "- `EquispacedKspaceMaskd`: Equispaced undersampling with a fully-sampled center\n",
+ "\n",
+ "Let's use these MONAI transforms to compare the two approaches."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+    "# Both keys point at the same .h5 file. FastMRIReader loads the raw\n",
+    "# k-space dataset; the ground truth stays in the `ground_truth` array\n",
+    "# already read with h5py.\n",
+    "input_dict = {\"kspace\": sample_file, \"reconstruction_esc\": sample_file}\n",
+    "\n",
+    "base_loader = Compose(\n",
+    "    [\n",
+    "        LoadImaged(\n",
+    "            keys=[\"kspace\"],\n",
+    "            reader=FastMRIReader,\n",
+    "            image_only=False,\n",
+    "            dtype=np.complex64,\n",
+    "        ),\n",
+    "    ]\n",
+    ")\n",
+ "\n",
+ "# Random undersampling pipeline\n",
+ "random_masker = Compose(\n",
+ " [\n",
+ " base_loader,\n",
+ " RandomKspaceMaskd(\n",
+ " keys=[\"kspace\"],\n",
+ " center_fractions=[0.08],\n",
+ " accelerations=[4],\n",
+ " spatial_dims=2,\n",
+ " is_complex=True,\n",
+ " ),\n",
+ " ]\n",
+ ")\n",
+ "\n",
+ "# Equispaced undersampling pipeline\n",
+ "equispaced_masker = Compose(\n",
+ " [\n",
+ " base_loader,\n",
+ " EquispacedKspaceMaskd(\n",
+ " keys=[\"kspace\"],\n",
+ " center_fractions=[0.08],\n",
+ " accelerations=[4],\n",
+ " spatial_dims=2,\n",
+ " is_complex=True,\n",
+ " ),\n",
+ " ]\n",
+ ")\n",
+ "\n",
+ "random_result = random_masker(input_dict.copy())\n",
+ "equispaced_result = equispaced_masker(input_dict.copy())\n",
+ "\n",
+ "print(\"Random mask output keys:\", list(random_result.keys()))\n",
+ "print(\n",
+ " \"kspace_masked_ifft shape:\",\n",
+ " random_result[\"kspace_masked_ifft\"].shape,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def as_numpy(x):\n",
+ " \"\"\"Convert torch tensor or array to numpy.\"\"\"\n",
+ " if isinstance(x, torch.Tensor):\n",
+ " return x.detach().cpu().numpy()\n",
+ " return np.asarray(x)\n",
+ "\n",
+ "\n",
+ "fig, axes = plt.subplots(2, 3, figsize=(18, 12))\n",
+ "\n",
+ "# Row 1: Random mask\n",
+ "rand_mask = as_numpy(random_result[\"mask\"]).squeeze()\n",
+ "axes[0, 0].imshow(np.tile(rand_mask, (64, 1)), cmap=\"gray\", aspect=\"auto\")\n",
+ "axes[0, 0].set_title(\"Random mask pattern\")\n",
+ "axes[0, 0].axis(\"off\")\n",
+ "\n",
+ "rand_recon = np.abs(as_numpy(random_result[\"kspace_masked_ifft\"]).squeeze())\n",
+ "axes[0, 1].imshow(rand_recon, cmap=\"gray\")\n",
+ "axes[0, 1].set_title(\"Zero-filled recon (random 4x)\")\n",
+ "axes[0, 1].axis(\"off\")\n",
+ "\n",
+ "gt = np.abs(as_numpy(ground_truth[slice_idx]))\n",
+ "axes[0, 2].imshow(gt, cmap=\"gray\")\n",
+ "axes[0, 2].set_title(\"Ground truth\")\n",
+ "axes[0, 2].axis(\"off\")\n",
+ "\n",
+ "# Row 2: Equispaced mask\n",
+ "equi_mask = as_numpy(equispaced_result[\"mask\"]).squeeze()\n",
+ "axes[1, 0].imshow(np.tile(equi_mask, (64, 1)), cmap=\"gray\", aspect=\"auto\")\n",
+ "axes[1, 0].set_title(\"Equispaced mask pattern\")\n",
+ "axes[1, 0].axis(\"off\")\n",
+ "\n",
+ "equi_recon = np.abs(as_numpy(equispaced_result[\"kspace_masked_ifft\"]).squeeze())\n",
+ "axes[1, 1].imshow(equi_recon, cmap=\"gray\")\n",
+ "axes[1, 1].set_title(\"Zero-filled recon (equispaced 4x)\")\n",
+ "axes[1, 1].axis(\"off\")\n",
+ "\n",
+ "axes[1, 2].imshow(gt, cmap=\"gray\")\n",
+ "axes[1, 2].set_title(\"Ground truth\")\n",
+ "axes[1, 2].axis(\"off\")\n",
+ "\n",
+ "plt.suptitle(\n",
+ " \"Random vs. Equispaced Undersampling (4x acceleration)\",\n",
+ " fontsize=14,\n",
+ ")\n",
+ "plt.tight_layout()\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "**Observations:**\n",
+ "- **Random mask:** The sampled lines are irregularly spaced (with a dense center). Aliasing artifacts appear as noise-like patterns.\n",
+ "- **Equispaced mask:** The sampled lines are evenly spaced (with a dense center). Aliasing artifacts appear as coherent ghost copies.\n",
+ "\n",
+ "Random undersampling is generally preferred for compressed sensing and deep learning reconstruction because incoherent artifacts are easier for algorithms to suppress than coherent ghosts."
+ ]
+ },
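+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a sanity check, the effective sampling fraction of each mask should be close to 1/acceleration = 0.25; the fully sampled center is accounted for when the mask is generated:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "print(f\"Random mask samples {rand_mask.mean():.3f} of k-space lines\")\n",
+    "print(f\"Equispaced mask samples {equi_mask.mean():.3f} of k-space lines\")"
+   ]
+  },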
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Part 5: The MONAI Reconstruction Pipeline\n",
+ "\n",
+ "MONAI provides a complete set of transforms for MRI reconstruction workflows. The existing [U-Net demo](../unet_demo/) and [VarNet demo](../varnet_demo/) use these transforms in production training pipelines. Here we walk through each step to understand what they do.\n",
+ "\n",
+ "The full pipeline is:\n",
+ "\n",
+ "1. **`LoadImaged`** with `FastMRIReader` — reads HDF5 files, extracts k-space as complex-valued tensor\n",
+ "2. **Select a single slice** — each `.h5` file contains a full volume; we pick the middle slice\n",
+ "3. **`RandomKspaceMaskd`** — applies an undersampling mask and computes zero-filled reconstruction (`kspace_masked_ifft`)\n",
+ "4. **`CenterSpatialCropd`** — crops to 320x320 to match the ground truth spatial dimensions\n",
+ "5. **`ReferenceBasedNormalizeIntensityd`** — normalizes intensity values using a reference image\n",
+ "6. **`ThresholdIntensityd`** — clamps extreme values to a fixed range\n",
+ "\n",
+ "The output `kspace_masked_ifft` (zero-filled reconstruction) becomes the **input** to a neural network, and `reconstruction_esc` (ground truth) becomes the **target**."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def take_middle_slice(x):\n",
+ " \"\"\"Select the middle slice from a volume: (S, H, W) -> (1, H, W).\"\"\"\n",
+ " if hasattr(x, \"shape\") and len(x.shape) >= 3:\n",
+ " mid = x.shape[0] // 2\n",
+ " return x[mid : mid + 1]\n",
+ " return x\n",
+ "\n",
+ "\n",
+ "def to_numpy(x):\n",
+ " \"\"\"Convert torch tensor to numpy safely.\"\"\"\n",
+ " if isinstance(x, torch.Tensor):\n",
+ " return x.detach().cpu().numpy()\n",
+ " return np.asarray(x)\n",
+ "\n",
+ "\n",
+ "def ensure_channel_first_2d(x):\n",
+ " \"\"\"Promote (H, W) to (1, H, W) for CenterSpatialCropd.\"\"\"\n",
+ " x = np.asarray(x)\n",
+ " if x.ndim == 2:\n",
+ " return x[None, ...]\n",
+ " return x\n",
+ "\n",
+ "\n",
+ "def complex_to_magnitude(x):\n",
+ " \"\"\"Take magnitude if complex; otherwise return as-is.\"\"\"\n",
+ " x = np.asarray(x)\n",
+ " return np.abs(x) if np.iscomplexobj(x) else x\n",
+ "\n",
+ "\n",
+ "# Stage 1: Load, select middle slice, apply undersampling mask\n",
+ "pre_transform = Compose(\n",
+ " [\n",
+ " LoadImaged(\n",
+ " keys=[\"kspace\", \"reconstruction_esc\"],\n",
+ " reader=FastMRIReader,\n",
+ " image_only=False,\n",
+ " dtype=np.complex64,\n",
+ " ),\n",
+ " Lambdad(\n",
+ " keys=[\"kspace\", \"reconstruction_esc\"],\n",
+ " func=take_middle_slice,\n",
+ " ),\n",
+ " RandomKspaceMaskd(\n",
+ " keys=[\"kspace\"],\n",
+ " center_fractions=[0.08],\n",
+ " accelerations=[4],\n",
+ " spatial_dims=2,\n",
+ " is_complex=True,\n",
+ " ),\n",
+ " Lambdad(\n",
+ " keys=[\"kspace_masked_ifft\"],\n",
+ " func=lambda x: (torch.abs(x) if isinstance(x, torch.Tensor) and torch.is_complex(x) else x),\n",
+ " ),\n",
+ " ]\n",
+ ")\n",
+ "\n",
+ "# Stage 2: Shape fixes, crop to 320x320, normalize, clamp\n",
+ "post_transform = Compose(\n",
+ " [\n",
+ " Lambdad(\n",
+ " keys=[\"kspace_masked_ifft\", \"reconstruction_esc\"],\n",
+ " func=to_numpy,\n",
+ " ),\n",
+ " Lambdad(keys=[\"kspace_masked_ifft\"], func=ensure_channel_first_2d),\n",
+ " CenterSpatialCropd(\n",
+ " keys=[\"kspace_masked_ifft\", \"reconstruction_esc\"],\n",
+ " roi_size=(320, 320),\n",
+ " ),\n",
+ " Lambdad(keys=[\"kspace_masked_ifft\"], func=complex_to_magnitude),\n",
+ " ReferenceBasedNormalizeIntensityd(\n",
+ " keys=[\"kspace_masked_ifft\", \"reconstruction_esc\"],\n",
+ " ref_key=\"kspace_masked_ifft\",\n",
+ " channel_wise=True,\n",
+ " ),\n",
+ " ThresholdIntensityd(\n",
+ " keys=[\"kspace_masked_ifft\", \"reconstruction_esc\"],\n",
+ " threshold=6.0,\n",
+ " above=False,\n",
+ " cval=6.0,\n",
+ " ),\n",
+ " ThresholdIntensityd(\n",
+ " keys=[\"kspace_masked_ifft\", \"reconstruction_esc\"],\n",
+ " threshold=-6.0,\n",
+ " above=True,\n",
+ " cval=-6.0,\n",
+ " ),\n",
+ " EnsureTyped(\n",
+ " keys=[\"kspace_masked_ifft\", \"reconstruction_esc\"],\n",
+ " ),\n",
+ " ]\n",
+ ")\n",
+ "\n",
+    "data = pre_transform(input_dict.copy())\n",
+    "# FastMRIReader returns the k-space array for every key it loads, so swap\n",
+    "# in the real ground truth slice that was read with h5py earlier\n",
+    "data[\"reconstruction_esc\"] = ground_truth[slice_idx : slice_idx + 1]\n",
+    "result = post_transform(data)\n",
+ "\n",
+ "print(\"Pipeline output keys:\", list(result.keys()))\n",
+ "print(f\"Input shape (kspace_masked_ifft): \" f\"{result['kspace_masked_ifft'].shape}\")\n",
+ "print(f\"Target shape (reconstruction_esc): \" f\"{result['reconstruction_esc'].shape}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Part 6: From Zero-Filled to Deep Learning\n",
+ "\n",
+ "The zero-filled reconstruction is the starting point for deep learning methods. A neural network learns to map the degraded zero-filled image to the clean ground truth.\n",
+ "\n",
+ "- **U-Net approach** (see [U-Net demo](../unet_demo/)): Treats reconstruction as an image-to-image translation task. The zero-filled image goes in; a clean image comes out.\n",
+ "- **VarNet approach** (see [VarNet demo](../varnet_demo/)): Iterates between data consistency in k-space and image refinement, using cascaded networks.\n",
+ "\n",
+ "Let's visualize the problem these networks are trained to solve."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def as_numpy_2d(x):\n",
+ " \"\"\"Return a 2D numpy image from (1, H, W) torch/numpy.\"\"\"\n",
+ " if isinstance(x, torch.Tensor):\n",
+ " x = x.detach().cpu().numpy()\n",
+ " x = np.asarray(x)\n",
+ " return x[0] if x.ndim == 3 else x\n",
+ "\n",
+ "\n",
+ "input_img = as_numpy_2d(result[\"kspace_masked_ifft\"])\n",
+ "target_img = as_numpy_2d(result[\"reconstruction_esc\"])\n",
+ "\n",
+ "diff = np.abs(input_img - target_img)\n",
+ "\n",
+ "fig, axes = plt.subplots(1, 3, figsize=(18, 6))\n",
+ "\n",
+ "axes[0].imshow(input_img, cmap=\"gray\")\n",
+ "axes[0].set_title(\"Zero-filled reconstruction (4x)\\nInput to neural network\")\n",
+ "axes[0].axis(\"off\")\n",
+ "\n",
+ "axes[1].imshow(target_img, cmap=\"gray\")\n",
+ "axes[1].set_title(\"Ground truth\\nTarget for training\")\n",
+ "axes[1].axis(\"off\")\n",
+ "\n",
+ "axes[2].imshow(diff, cmap=\"hot\", vmax=np.max(diff) * 0.5)\n",
+ "axes[2].set_title(\"Difference map\\nWhat the network must learn\")\n",
+ "axes[2].axis(\"off\")\n",
+ "\n",
+ "plt.suptitle(\"The MRI Reconstruction Problem\", fontsize=14)\n",
+ "plt.tight_layout()\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "**What you see:**\n",
+ "- **Left:** The zero-filled reconstruction after the full MONAI preprocessing pipeline. This is what gets fed into a neural network.\n",
+ "- **Center:** The ground truth image. This is what the network is trained to produce.\n",
+ "- **Right:** The difference map (hot colormap). Bright regions show where the zero-filled reconstruction differs most from the ground truth — these are the aliasing artifacts the network must learn to remove.\n",
+ "\n",
+ "For production training pipelines that train U-Net and VarNet models on this type of data, see the companion demos:\n",
+ "- [U-Net demo](../unet_demo/) — `BasicUNet` achieving 0.9436 SSIM on brain multi-coil\n",
+ "- [VarNet demo](../varnet_demo/) — end-to-end Variational Network achieving 0.9650 SSIM on brain multi-coil"
+ ]
+ },
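+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can quantify this gap with the normalized mean squared error (NMSE), one of the standard fastMRI evaluation metrics, between the zero-filled input and the ground truth:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "nmse = np.linalg.norm(input_img - target_img) ** 2 / np.linalg.norm(target_img) ** 2\n",
+    "print(f\"NMSE of the zero-filled reconstruction: {nmse:.4f}\")"
+   ]
+  },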
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Summary\n",
+ "\n",
+ "In this tutorial, you learned:\n",
+ "\n",
+ "1. **K-space** is the spatial frequency domain where MRI scanners acquire data. The inverse Fourier transform converts it to an image.\n",
+ "2. **Low frequencies** (center of k-space) encode contrast and large structures; **high frequencies** (edges) encode fine details.\n",
+ "3. **Undersampling** k-space speeds up MRI scans but causes aliasing artifacts due to the Nyquist-Shannon theorem.\n",
+ "4. **Random undersampling** produces incoherent artifacts (easier to remove) vs. **equispaced** which produces coherent ghosts.\n",
+ "5. MONAI provides a complete **reconstruction transform pipeline** (`FastMRIReader` -> `RandomKspaceMaskd` -> `CenterSpatialCropd` -> normalization) to prepare data for deep learning.\n",
+ "6. The **zero-filled reconstruction** is the input to neural networks that learn to remove aliasing artifacts.\n",
+ "\n",
+ "### Next Steps\n",
+ "\n",
+ "- Try changing the `accelerations` parameter (e.g., `[8]`) and `center_fractions` (e.g., `[0.04]`) to see how they affect image quality\n",
+ "- Explore the [U-Net demo](../unet_demo/) to see how `BasicUNet` is trained for MRI reconstruction\n",
+ "- Explore the [VarNet demo](../varnet_demo/) to see a more advanced approach using data consistency cascades"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Acknowledgment\n",
+ "\n",
+ "Data used in the preparation of this tutorial were obtained from the NYU fastMRI Initiative database (fastmri.med.nyu.edu). [Citation: Knoll et al., Radiol Artif Intell. 2020 Jan 29;2(1):e190007. doi: 10.1148/ryai.2020190007. (https://pubs.rsna.org/doi/10.1148/ryai.2020190007), and the arXiv paper: https://arxiv.org/abs/1811.08839] As such, NYU fastMRI investigators provided data but did not participate in analysis or writing of this tutorial. A listing of NYU fastMRI investigators, subject to updates, can be found at: fastmri.med.nyu.edu. The primary goal of fastMRI is to test whether machine learning can aid in the reconstruction of medical images."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Cleanup data directory\n",
+ "\n",
+ "Remove directory if a temporary was used."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if directory is None:\n",
+ " shutil.rmtree(root_dir)\n",
+ "print(\"Cleanup complete!\")"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+   "pygments_lexer": "ipython3",
+ "version": "3.10.0"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/reconstruction/MRI_reconstruction/tutorials/README.md b/reconstruction/MRI_reconstruction/tutorials/README.md
new file mode 100644
index 000000000..cd16f8374
--- /dev/null
+++ b/reconstruction/MRI_reconstruction/tutorials/README.md
@@ -0,0 +1,25 @@
+# MRI Reconstruction Tutorials
+
+This folder contains educational Jupyter notebooks that introduce the fundamentals of MRI reconstruction.
+
+## Tutorials
+
+### 01 - K-Space Basics with fastMRI Knee Data
+**Notebook:** [01_kspace_basics_fastmri_knee.ipynb](./01_kspace_basics_fastmri_knee.ipynb)
+
+An introductory tutorial covering:
+- What k-space is and its relationship to MRI images via the Fourier transform
+- How low and high spatial frequencies contribute to image content
+- Why undersampling k-space causes aliasing artifacts
+- How MONAI's reconstruction transforms (`RandomKspaceMaskd`, `EquispacedKspaceMaskd`, etc.) process k-space data
+- The zero-filled reconstruction problem that deep learning methods aim to solve
+
+**Dataset:** [fastMRI](https://fastmri.org/dataset) knee single-coil validation set (requires registration, non-commercial license). Only one `.h5` file is needed.
+
+**Prerequisites:** Basic Python and NumPy. No MRI experience required.
+
+## Related Production Tutorials
+
+For training-focused tutorials using the brain multi-coil dataset, see:
+- [U-Net Demo](../unet_demo/) - BasicUNet for MRI reconstruction
+- [VarNet Demo](../varnet_demo/) - End-to-end Variational Network for MRI reconstruction
diff --git a/runner.sh b/runner.sh
index 3f5473f03..0c4ea4e14 100755
--- a/runner.sh
+++ b/runner.sh
@@ -83,6 +83,7 @@ doesnt_contain_max_epochs=("${doesnt_contain_max_epochs[@]}" maisi_inference_tut
doesnt_contain_max_epochs=("${doesnt_contain_max_epochs[@]}" realism_diversity_metrics.ipynb)
doesnt_contain_max_epochs=("${doesnt_contain_max_epochs[@]}" omniverse_integration.ipynb)
doesnt_contain_max_epochs=("${doesnt_contain_max_epochs[@]}" hugging_face_pipeline_for_monai.ipynb)
+doesnt_contain_max_epochs=("${doesnt_contain_max_epochs[@]}" 01_kspace_basics_fastmri_knee.ipynb)
# Execution of the notebook in these folders / with the filename cannot be automated
skip_run_papermill=()