4 changes: 2 additions & 2 deletions .github/workflows/_build.yml
@@ -46,7 +46,7 @@ jobs:

       # Save ("upload") the distribution artifacts for use by downstream Actions jobs
       - name: Upload distribution artifacts
-        uses: actions/upload-artifact@v6 # This allows us to persist the dist directory after the job has completed
+        uses: actions/upload-artifact@v7 # This allows us to persist the dist directory after the job has completed
         with:
           name: python-package-distributions
           path: dist/
@@ -98,7 +98,7 @@ jobs:

       # This makes the artifacts available for downstream jobs
       - name: Upload Conda build artifact
-        uses: actions/upload-artifact@v6
+        uses: actions/upload-artifact@v7
         with:
           name: conda-package
           path: ${{ env.CONDA_BLD_PATH }}/**/space_packet_parser-*
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -62,7 +62,7 @@ jobs:
         run: |
           pytest --color=yes --cov --cov-report=xml

-      - uses: codecov/codecov-action@v5
+      - uses: codecov/codecov-action@v6
         with:
           use_oidc: true

31 changes: 20 additions & 11 deletions .github/workflows/release.yml
@@ -34,7 +34,7 @@ jobs:
     steps:
       # This downloads the build artifacts from the build job
       - name: Download distribution artifacts
-        uses: actions/download-artifact@v7
+        uses: actions/download-artifact@v8
         with:
           name: python-package-distributions
           path: dist/
@@ -59,7 +59,7 @@ jobs:
     steps:
       # This downloads the build artifacts from the build job
       - name: Download distribution artifacts
-        uses: actions/download-artifact@v7
+        uses: actions/download-artifact@v8
         with:
           name: python-package-distributions
           path: dist/
@@ -80,7 +80,7 @@ jobs:

     steps:
       - name: Download Conda artifact
-        uses: actions/download-artifact@v7
+        uses: actions/download-artifact@v8
         with:
           name: conda-package
           path: conda-package/
@@ -146,12 +146,21 @@ jobs:
         GITHUB_TOKEN: ${{ github.token }}
       # Uses the GitHub CLI to generate the Release and auto-generate the release notes. Also generates
       # the Release title based on the annotation on the git tag.
-      run: >-
+      run: |
         RELEASE_NAME=$(basename "${{ github.ref_name }}")
-        gh release create
-        '${{ github.ref_name }}'
-        --repo '${{ github.repository }}'
-        --title "$RELEASE_NAME"
-        ${{ env.PRE_RELEASE_OPTION }}
-        --generate-notes
-        --notes-start-tag '${{ env.LATEST_RELEASE_TAG }}'
+        ARGS=(
+          "${{ github.ref_name }}"
+          --repo "${{ github.repository }}"
+          --title "$RELEASE_NAME"
+        )
+
+        if [ "${{ env.PRE_RELEASE_OPTION }}" = "--prerelease" ]; then
+          ARGS+=(--prerelease)
+        fi
+
+        ARGS+=(
+          --generate-notes
+          --notes-start-tag "${{ env.LATEST_RELEASE_TAG }}"
+        )
+
+        gh release create "${ARGS[@]}"
1 change: 1 addition & 0 deletions .gitignore
@@ -26,6 +26,7 @@ build
 dist
 space_packet_parser/_version.py
 uv.lock
+node_modules

 # Packages #
 ############
5 changes: 5 additions & 0 deletions space_packet_parser/xarr.py
@@ -16,6 +16,7 @@
 from pathlib import Path
 from typing import BinaryIO

+from space_packet_parser import common
 from space_packet_parser.exceptions import UnrecognizedPacketTypeError
 from space_packet_parser.generators import ccsds_generator
 from space_packet_parser.generators.utils import _read_packet_file
@@ -236,6 +237,10 @@ def _process_generator(generator):
             else:
                 val = value

+            # Convert BinaryParameter to plain bytes to prevent numpy truncation
+            if isinstance(val, common.BinaryParameter):
+                val = bytes(val)
+
             data_dict[apid][key].append(val)

medley56 marked this conversation as resolved.
Comment on lines +240 to 244 (Copilot AI, Mar 30, 2026):

bytes(val) will allocate a new bytes object for every BinaryParameter (it's a bytes subclass), which can add significant extra copying for large/streamed datasets. Consider avoiding per-element conversion by (a) choosing an explicit fixed-width NumPy dtype for fixed-size binary encodings (e.g., S{nbytes}) or (b) deferring conversion to a single pass right before np.asarray so the list only gets copied once.
             if key not in datatype_mapping[apid]:
                 # Add this datatype to the mapping
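The reviewer's concern above can be sketched concretely. This is a minimal, hypothetical example: `FakeBinaryParameter` stands in for `space_packet_parser.common.BinaryParameter` (which, per this diff, is a bytes subclass), and the explicit-dtype variant illustrates the reviewer's alternative (a) for fixed-size encodings.

```python
import numpy as np


# Hypothetical stand-in for BinaryParameter; the real class carries parsing
# machinery, but for dtype purposes what matters is that it subclasses bytes.
class FakeBinaryParameter(bytes):
    pass


raw = [FakeBinaryParameter(b"ABCDEFGH"), FakeBinaryParameter(b"12345678")]

# Per-element conversion, as the new loop in xarr.py does: each value is
# copied to plain bytes, and NumPy infers a fixed-width S8 dtype.
arr = np.asarray([bytes(v) for v in raw])
assert arr.dtype == np.dtype("S8")

# Reviewer's alternative (a): request the fixed width explicitly, letting
# NumPy copy the byte strings in a single pass without a Python-level loop.
arr_explicit = np.asarray(raw, dtype="S8")
assert arr_explicit.tolist() == [b"ABCDEFGH", b"12345678"]
```

Either way the resulting array has a true fixed-width bytes dtype rather than an object dtype, which is what the truncation fix is after.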
35 changes: 35 additions & 0 deletions tests/unit/test_xarr.py
@@ -178,6 +178,41 @@ def test_create_dataset_with_custom_generator(tmp_path, fixed_length_packet_defi
     assert list(dataset["INT32_FIELD"].values) == [12345, 67890, -99999]


+def test_create_dataset_preserves_binary_parameter_width(tmp_path):
+    """Test that binary parameters keep their full byte width in the resulting dataset."""
+    packet_definition = definitions.XtcePacketDefinition(
+        container_set=[
+            containers.SequenceContainer(
+                "BINARY_CONTAINER",
+                entry_list=[
+                    parameters.Parameter(
+                        "BIN_FIELD",
+                        parameter_type=parameter_types.BinaryParameterType(
+                            "BIN_TYPE", encoding=encodings.BinaryDataEncoding(fixed_size_in_bits=64)
+                        ),
+                    )
+                ],
+            )
+        ]
+    )
+    packet_data = b"ABCDEFGH"
+    test_file = tmp_path / "binary_packets.bin"
+    test_file.write_bytes(packet_data)
+
+    datasets = xarr.create_dataset(
+        test_file,
+        packet_definition,
+        packet_bytes_generator=fixed_length_generator,
+        generator_kwargs={"packet_length_bytes": 8},
+        parse_bytes_kwargs={"root_container_name": "BINARY_CONTAINER"},
+    )
+
+    dataset = list(datasets.values())[0]
+
+    assert dataset["BIN_FIELD"].values.dtype.itemsize == 8
+    assert dataset["BIN_FIELD"].values.tolist() == [packet_data]
Comment on lines +212 to +213 (Copilot AI, Mar 30, 2026):

The assertion dataset["BIN_FIELD"].values.dtype.itemsize == 8 can produce false positives if the array ends up as object dtype (its itemsize is pointer-size, often 8 on 64-bit). To make this regression test robust across platforms and dtype outcomes, also assert the dtype kind/type is a fixed-width bytes dtype (e.g., dtype.kind == "S" or dtype == np.dtype("S8")).

Suggested change:
-    assert dataset["BIN_FIELD"].values.dtype.itemsize == 8
-    assert dataset["BIN_FIELD"].values.tolist() == [packet_data]
+    values = dataset["BIN_FIELD"].values
+    # Ensure we truly have a fixed-width bytes dtype (not an object array whose itemsize matches pointer size).
+    assert values.dtype.kind == "S"
+    assert values.dtype.itemsize == 8
+    assert values.tolist() == [packet_data]
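The object-dtype pitfall described in this suggestion is easy to demonstrate. A minimal sketch (the "itemsize 8" ambiguity assumes a 64-bit build, where object pointers are 8 bytes):

```python
import numpy as np

# An object array holding one 8-byte value versus a true fixed-width S8 array.
obj_arr = np.array([b"ABCDEFGH"], dtype=object)
fixed_arr = np.array([b"ABCDEFGH"])  # NumPy infers a fixed-width bytes dtype

# On a 64-bit build both arrays report itemsize 8, so itemsize alone cannot
# distinguish them; dtype.kind tells them apart ("O" vs "S").
assert obj_arr.dtype.kind == "O"
assert fixed_arr.dtype.kind == "S"
assert fixed_arr.dtype == np.dtype("S8")
```

Checking `dtype.kind == "S"` first, as the suggested change does, makes the itemsize assertion meaningful.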


 def test_create_dataset_with_packet_filter(tmp_path, fixed_length_packet_definition, fixed_length_test_packets):
     """Test filtering packets with packet_filter parameter using raw byte inspection"""
     _, _, _, binary_data = fixed_length_test_packets