Divergent data reduction for Amor - advanced#
In this notebook, we will look at how to use the essreflectometry package with Sciline, for reflectometry data collected from the PSI instrument Amor in divergent beam mode.
We will begin by importing the modules that are necessary for this notebook.
Setup#
[1]:
# %matplotlib widget
import warnings
import scipp as sc
from ess import amor
from ess.amor import data # noqa: F401
from ess.reflectometry.types import *
from ess.amor.types import *
from ess.reflectometry import batch_processor
# The files used in this tutorial have some issues that make scippnexus
# raise warnings when loading them. To avoid noise in the notebook, the warnings are silenced.
warnings.filterwarnings('ignore', 'Failed to convert .* into a transformation')
warnings.filterwarnings('ignore', 'Invalid transformation')
warnings.filterwarnings('ignore', 'invalid value encountered')
Create and configure the workflow#
We begin by creating the Amor workflow object, a skeleton for reducing Amor data with pre-configured steps, and then set the missing parameters that are specific to each experiment:
[2]:
workflow = amor.AmorWorkflow()
workflow[SampleSize[SampleRun]] = sc.scalar(10.0, unit='mm')
workflow[SampleSize[ReferenceRun]] = sc.scalar(10.0, unit='mm')
workflow[ChopperPhase[ReferenceRun]] = sc.scalar(-7.5, unit='deg')
workflow[ChopperPhase[SampleRun]] = sc.scalar(-7.5, unit='deg')
workflow[WavelengthBins] = sc.geomspace('wavelength', 2.8, 12.5, 2001, unit='angstrom')
# The YIndexLimits and ZIndexLimits define ranges on the detector where
# data is considered to be valid signal.
# They represent the lower and upper boundaries of a range of pixel indices.
workflow[YIndexLimits] = sc.scalar(11), sc.scalar(41)
workflow[ZIndexLimits] = sc.scalar(80), sc.scalar(370)
workflow[BeamDivergenceLimits] = (
sc.scalar(-0.75, unit='deg'),
sc.scalar(0.75, unit='deg'),
)
[3]:
workflow.visualize(ReflectivityOverQ, graph_attr={'rankdir': 'LR'})
[3]:
Caching the reference result#
The reference result (used for normalizing the sample data) only needs to be computed once. It represents the intensity reflected by the super-mirror.
We compute it using the pipeline and then set the result back on the pipeline so that it is cached.
[4]:
workflow[Filename[ReferenceRun]] = amor.data.amor_run(614)
# The sample rotation value in the file is slightly off, so we set it manually
workflow[SampleRotationOffset[ReferenceRun]] = sc.scalar(0.05, unit='deg')
reference_result = workflow.compute(ReducedReference)
# Set the result back onto the pipeline to cache it
workflow[ReducedReference] = reference_result
If we now visualize the pipeline again, we can see that the reference is not re-computed:
[5]:
workflow.visualize(ReflectivityOverQ, graph_attr={'rankdir': 'LR'})
[5]:
Computing sample reflectivity from batch reduction#
We now compute the sample reflectivity from 4 runs that used different sample rotation angles. The measurements at different rotation angles cover different ranges of \(Q\).
We set up a batch reduction helper (using the batch_processor function), which makes it easy to process multiple runs at once.
In this tutorial we use some Amor data files that were provided to us. The file paths to the tutorial files are obtained by calling:
[6]:
amor.data.amor_run(608)
[6]:
'/home/runner/.cache/ess/amor/2/amor2023n000608.hdf'
When you encounter amor.data.amor_run, you should imagine replacing it with a path to your own dataset.
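For example, a run entry for your own measurement could look like the following sketch; the path and the offset value are placeholders, not part of the tutorial data:

my_runs = {
    'my-sample': {
        # Placeholder offset; use the correction appropriate for your measurement
        SampleRotationOffset[SampleRun]: sc.scalar(0.05, unit='deg'),
        # Placeholder path to your own data file
        Filename[SampleRun]: '/path/to/my_measurement.hdf',
    },
}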
[7]:
runs = {
'608': {
# The sample rotation values in the files are slightly off, so we replace
# them with corrected values.
SampleRotationOffset[SampleRun]: sc.scalar(0.05, unit='deg'),
Filename[SampleRun]: amor.data.amor_run(608),
},
'609': {
SampleRotationOffset[SampleRun]: sc.scalar(0.05, unit='deg'),
Filename[SampleRun]: amor.data.amor_run(609),
},
'610': {
SampleRotationOffset[SampleRun]: sc.scalar(0.05, unit='deg'),
Filename[SampleRun]: amor.data.amor_run(610),
},
'611': {
SampleRotationOffset[SampleRun]: sc.scalar(0.05, unit='deg'),
Filename[SampleRun]: amor.data.amor_run(611),
},
}
batch = batch_processor(workflow, runs)
# Compute R(Q) for all runs
reflectivity = batch.compute(ReflectivityOverQ)
sc.plot(reflectivity.hist(), norm='log', vmin=1e-5)
[7]:
[8]:
batch.visualize(ReflectivityOverQ, graph_attr={'rankdir': 'LR'})
[8]:
Scaling the reflectivity curves to overlap#
If we know the curves have been scaled by different factors (that are constant in Q), it can be useful to rescale them so that they overlap:
[9]:
# Cache intermediate result to avoid re-loading the data
batch[ReducibleData[SampleRun]] = batch.compute(ReducibleData[SampleRun])
[10]:
from ess.reflectometry.tools import scale_for_reflectivity_overlap
scaling = scale_for_reflectivity_overlap(
reflectivity,
critical_edge_interval=(
sc.scalar(0.01, unit='1/angstrom'),
sc.scalar(0.014, unit='1/angstrom'),
),
)
scaled_r = reflectivity * scaling
sc.plot(scaled_r.hist(), norm='log', vmin=1e-5)
[10]:
Curves obtained from measurements at different angles can be combined into one common reflectivity curve:
[11]:
from ess.reflectometry.tools import combine_curves
qbins = sc.geomspace('Q', 0.005, 0.4, 501, unit='1/angstrom')
combined = combine_curves(scaled_r.hist(), qbins)
combined.plot(norm='log')
[11]:
Diagnostic figures#
There are some useful visualizations that can help troubleshoot the instrument. They typically operate on ReflectivityOverZW.
The difference between ReflectivityOverZW and ReflectivityOverQ is that ReflectivityOverZW is not binned in \(Q\), but instead has the same shape as the reference.
Essentially, it represents a “reflectivity” computed in every wavelength-detector-coordinate (z_index) bin. This makes it easier to spot inhomogeneities and diagnose problems.
[12]:
diagnostics = batch.compute(ReflectivityOverZW)
diagnostics['608'].hist().flatten(('blade', 'wire'), to='z').plot(norm='log')
[12]:
Make a \((\lambda, \theta)\) map#
A good sanity check is to create a two-dimensional map of the counts in \(\lambda\) and \(\theta\) bins and make sure the triangles converge at the origin.
[13]:
from ess.reflectometry.figures import wavelength_theta_figure
wavelength_theta_figure(
diagnostics.values(),
theta_bins=batch.compute(ThetaBins[SampleRun]).values(),
q_edges_to_display=(
sc.scalar(0.018, unit='1/angstrom'),
sc.scalar(0.113, unit='1/angstrom'),
),
)
[13]:
This plot can be used to check whether the value of the sample rotation angle \(\omega\) is correct. The bright triangles should point back to the origin \(\lambda = \theta = 0\). In the figure above, the black lines all pass through the origin.
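If the triangles did not converge at the origin, one could adjust the sample rotation offset and recompute the diagnostics. A minimal sketch; the trial_ names and the 0.06 deg value are purely illustrative:

# Override the offset for every run; 0.06 deg is purely illustrative
trial_runs = {
    key: {**params, SampleRotationOffset[SampleRun]: sc.scalar(0.06, unit='deg')}
    for key, params in runs.items()
}
trial_batch = batch_processor(workflow, trial_runs)
trial_diagnostics = trial_batch.compute(ReflectivityOverZW)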
Make a \((Q, \theta)\) map#
Another good sanity check is to create a two-dimensional map of the counts in \(Q\) and \(\theta\) and make sure the stripes are vertical. If they are not, this could indicate that the ChopperPhase setting is incorrect.
[14]:
from ess.reflectometry.figures import q_theta_figure
q_theta_figure(
diagnostics.values(),
theta_bins=batch.compute(ThetaBins[SampleRun]).values(),
q_bins=qbins,
)
[14]:
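If the stripes were tilted instead of vertical, one could try a different chopper phase and recompute the diagnostics. A minimal sketch; the -7.0 deg value and the trial_ names are purely illustrative:

# Tune the phase until the stripes are vertical; -7.0 deg is purely illustrative
workflow[ChopperPhase[SampleRun]] = sc.scalar(-7.0, unit='deg')
# Rebuild the batch so the new setting propagates past any cached results
trial_batch = batch_processor(workflow, runs)
trial_diagnostics = trial_batch.compute(ReflectivityOverZW)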
Compare the sample measurement to the reference on the detector#
Here we compare the raw number of counts on the detector for the sample measurement and the reference, respectively.
z_index is the detector coordinate along \(z\), that is, in the direction of the scattering angle \(\theta\).
[15]:
from ess.reflectometry.figures import wavelength_z_figure
wf_608 = batch.workflows['608']
wavelength_z_figure(
wf_608.compute(Sample), wavelength_bins=wf_608.compute(WavelengthBins), grid=False
) + wavelength_z_figure(wf_608.compute(Reference), grid=False)
[15]:
Save data#
We can save the computed \(I(Q)\) to an ORSO .ort file using the orsopy package.
First, we need to collect the metadata for that file. To this end, we insert a parameter to indicate the creator of the processed data.
[16]:
from ess.reflectometry import orso
from orsopy import fileio
[17]:
batch[orso.OrsoCreator] = {
k: orso.OrsoCreator(
fileio.base.Person(
name='Max Mustermann',
affiliation='European Spallation Source ERIC',
contact='max.mustermann@ess.eu',
)
)
for k in runs
}
We can visualize the workflow for a single run ('608'):
[18]:
wf_608.visualize(orso.OrsoIofQDataset, graph_attr={'rankdir': 'LR'})
[18]:
[19]:
# We want to save the scaled results to the file
batch[ReflectivityOverQ] = scaled_r
We build our ORSO dataset from the computed \(I(Q)\) and the ORSO metadata:
[20]:
iofq_datasets = batch.compute(orso.OrsoIofQDataset)
We also add the URL of this notebook to make it easier to reproduce the data:
[21]:
for ds in iofq_datasets.values():
ds.info.reduction.script = (
'https://scipp.github.io/essreflectometry/user-guide/amor/amor-reduction.html'
)
Finally, we can save the data to a file. Note that iofq_datasets contains orsopy.fileio.orso.OrsoDataset objects.
[22]:
fileio.orso.save_orso(
datasets=list(iofq_datasets.values()), fname='amor_reduced_iofq.ort'
)
Look at the first 50 lines of the file to inspect the metadata:
[23]:
!head amor_reduced_iofq.ort -n50
# # ORSO reflectivity data file | 1.2 standard | YAML encoding | https://www.reflectometry.org/
# data_source:
# owner:
# name: J. Stahn
# affiliation: null
# contact: ''
# experiment:
# title: commissioning
# instrument: Amor
# start_date: 2023-11-13T13:27:08
# probe: neutron
# facility: SINQ
# sample:
# name: SM5
# model:
# stack: air | 20 ( Ni 7 | Ti 7 ) | Al2O3
# measurement:
# instrument_settings:
# incident_angle: {min: 0.004573831271192303, max: 0.028638334239433454, unit: rad}
# wavelength: {min: 2.8000112325761113, max: 12.499999993767076, unit: angstrom}
# polarization: null
# data_files:
# - file: amor2023n000608.hdf
# additional_files:
# - file: amor2023n000614.hdf
# comment: supermirror
# reduction:
# software: {name: ess.reflectometry, version: 25.10.0, platform: Linux}
# timestamp: 2025-10-02T07:08:37.924408+00:00
# creator:
# name: Max Mustermann
# affiliation: European Spallation Source ERIC
# contact: max.mustermann@ess.eu
# corrections:
# - chopper ToF correction
# - footprint correction
# - supermirror calibration
# script: https://scipp.github.io/essreflectometry/user-guide/amor/amor-reduction.html
# data_set: 0
# columns:
# - {name: Qz, unit: 1/angstrom, physical_quantity: wavevector transfer}
# - {name: R, physical_quantity: reflectivity}
# - {name: sR, physical_quantity: standard deviation of reflectivity}
# - {name: sQz, unit: 1/angstrom, physical_quantity: standard deviation of wavevector
# transfer resolution}
# # Qz (1/angstrom) R sR sQz (1/angstrom)
4.9932428126801473e-03 5.4699980526077474e-01 2.2331174752526448e-01 2.8742860476410782e-04
5.0296863300743778e-03 4.5462124096858986e-01 2.0331280664098425e-01 2.8943294734858955e-04
5.0663958329233314e-03 9.8277340533751156e-01 2.7258610953721379e-01 2.8867114543253765e-04
5.1033732625396788e-03 5.4814630683489796e-01 1.8272485456026058e-01 2.9262049558691764e-04
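The saved file can also be read back with orsopy to verify the round trip. A quick sanity check, assuming orsopy's load_orso, the counterpart of the save_orso call used above:

# Load the datasets back from the .ort file written above
loaded = fileio.orso.load_orso('amor_reduced_iofq.ort')
print(len(loaded), 'datasets')
print(loaded[0].info.data_source.experiment.instrument)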
Merging multiple runs for the same sample rotation#
[24]:
runs = {
'608': {
# The sample rotation values in the files are slightly off, so we replace
# them with corrected values.
SampleRotationOffset[SampleRun]: sc.scalar(0.05, unit='deg'),
Filename[SampleRun]: amor.data.amor_run(608),
},
'609': {
SampleRotationOffset[SampleRun]: sc.scalar(0.05, unit='deg'),
Filename[SampleRun]: amor.data.amor_run(609),
},
'610': {
SampleRotationOffset[SampleRun]: sc.scalar(0.05, unit='deg'),
Filename[SampleRun]: amor.data.amor_run(610),
},
'611+612': {
SampleRotationOffset[SampleRun]: sc.scalar(0.05, unit='deg'),
Filename[SampleRun]: (
amor.data.amor_run(611),
amor.data.amor_run(612),
), # Providing multiple files means their event lists are merged
},
}
batch = batch_processor(workflow, runs)
# Compute R(Q) for all runs
reflectivity = batch.compute(ReflectivityOverQ)
sc.plot(reflectivity.hist(), norm='log', vmin=1e-4)
Downloading file 'amor2023n000612.hdf' from 'https://public.esss.dk/groups/scipp/ess/amor/2/amor2023n000612.hdf' to '/home/runner/.cache/ess/amor/2'.
[24]:
[25]:
batch.visualize(ReflectivityOverQ, graph_attr={'rankdir': 'LR'})
[25]:
[26]:
scaled_r = reflectivity * scale_for_reflectivity_overlap(
reflectivity,
critical_edge_interval=(
sc.scalar(0.01, unit='1/angstrom'),
sc.scalar(0.014, unit='1/angstrom'),
),
)
sc.plot(scaled_r.hist(), norm='log', vmin=1e-5)
[26]: