Divergent data reduction for Amor#

In this notebook, we will look at how to use the essreflectometry package with Sciline, for reflectometry data collected at the PSI instrument Amor in divergent beam mode.

We will begin by importing the modules that are necessary for this notebook.

Setup#

[1]:
import warnings
import scipp as sc
from ess import amor
from ess.amor import data  # noqa: F401
from ess.reflectometry.types import *
from ess.amor.types import *

# The files used in this tutorial have some issues that make scippnexus
# raise warnings when loading them. To avoid noise in the notebook, the warnings are silenced.
warnings.filterwarnings('ignore', 'Failed to convert .* into a transformation')
warnings.filterwarnings('ignore', 'Invalid transformation, missing attribute')

Create and configure the workflow#

We begin by creating the Amor workflow object, a skeleton for reducing Amor data with pre-configured steps.

[2]:
workflow = amor.AmorWorkflow()

We then need to set the missing parameters which are specific to each experiment:

[3]:
workflow[SampleSize[SampleRun]] = sc.scalar(10.0, unit='mm')
workflow[SampleSize[ReferenceRun]] = sc.scalar(10.0, unit='mm')

workflow[ChopperPhase[ReferenceRun]] = sc.scalar(-7.5, unit='deg')
workflow[ChopperPhase[SampleRun]] = sc.scalar(-7.5, unit='deg')

workflow[QBins] = sc.geomspace(dim='Q', start=0.005, stop=0.3, num=391, unit='1/angstrom')
workflow[WavelengthBins] = sc.geomspace('wavelength', 2.8, 12, 301, unit='angstrom')

# The YIndexLimits and ZIndexLimits define ranges on the detector where
# data is considered to be valid signal.
# They represent the lower and upper boundaries of a range of pixel indices.
workflow[YIndexLimits] = sc.scalar(11, unit=None), sc.scalar(41, unit=None)
workflow[ZIndexLimits] = sc.scalar(80, unit=None), sc.scalar(370, unit=None)
[4]:
workflow.visualize(ReflectivityOverQ, graph_attr={'rankdir': 'LR'})
[4]:
../../_images/user-guide_amor_amor-reduction_7_0.svg

Caching the reference result#

The reference result (used for normalizing the sample data) only needs to be computed once. It represents the intensity reflected by the super-mirror.

We compute it once using the pipeline and then set the result back on the pipeline, so that later computations reuse the cached value.

[5]:
workflow[Filename[ReferenceRun]] = amor.data.amor_reference_run()
# The sample rotation value in the file is slightly off, so we set it manually
workflow[SampleRotation[ReferenceRun]] = sc.scalar(0.65, unit='deg')

reference_result = workflow.compute(IdealReferenceIntensity)
# Set the result back onto the pipeline to cache it
workflow[IdealReferenceIntensity] = reference_result
Downloading file 'amor2023n000614.hdf' from 'https://public.esss.dk/groups/scipp/ess/amor/2/amor2023n000614.hdf' to '/home/runner/.cache/ess/amor/2'.

If we now visualize the pipeline again, we can see that the reference is not re-computed:

[6]:
workflow.visualize(ReflectivityOverQ, graph_attr={'rankdir': 'LR'})
[6]:
../../_images/user-guide_amor_amor-reduction_11_0.svg

Computing sample reflectivity#

We now compute the sample reflectivity from four runs measured at different sample rotation angles. The measurements at different rotation angles cover different ranges of \(Q\).
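Why different rotation angles cover different \(Q\) ranges follows from the scattering kinematics: the momentum transfer is \(Q = 4\pi\sin(\theta)/\lambda\), so at a fixed angle the instrument's wavelength band maps onto a fixed \(Q\) interval. A back-of-the-envelope sketch in plain Python (using the rotation angles from this tutorial as stand-ins for \(\theta\), which in reality varies across the divergent beam):

```python
import math

def q_range(theta_deg, wav_min=2.8, wav_max=12.0):
    """Momentum-transfer interval Q = 4*pi*sin(theta)/lambda covered at a
    fixed incident angle over a wavelength band (result in 1/angstrom)."""
    s = math.sin(math.radians(theta_deg))
    # Long wavelengths give small Q, short wavelengths give large Q.
    return 4 * math.pi * s / wav_max, 4 * math.pi * s / wav_min

for theta in (0.85, 2.25, 3.65, 5.05):
    q_lo, q_hi = q_range(theta)
    print(f"theta = {theta:4.2f} deg -> Q in [{q_lo:.4f}, {q_hi:.4f}] 1/angstrom")
```

Note how consecutive intervals overlap, which is what allows the curves to be stitched together further below.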

In this tutorial we use some Amor data files we have received. The file paths to the tutorial files are obtained by calling:

[7]:
amor.data.amor_sample_run('608')
Downloading file 'amor2023n000608.hdf' from 'https://public.esss.dk/groups/scipp/ess/amor/2/amor2023n000608.hdf' to '/home/runner/.cache/ess/amor/2'.
[7]:
'/home/runner/.cache/ess/amor/2/amor2023n000608.hdf'

When you encounter amor.data.amor_sample_run, imagine replacing it with a path to your own dataset.

[8]:
runs = {
    '608': {
        # The sample rotation values in the files are slightly off, so we replace
        # them with corrected values.
        SampleRotation[SampleRun]: sc.scalar(0.85, unit='deg'),
        Filename[SampleRun]: amor.data.amor_sample_run('608'),
    },
    '609': {
        SampleRotation[SampleRun]: sc.scalar(2.25, unit='deg'),
        Filename[SampleRun]: amor.data.amor_sample_run('609'),
    },
    '610': {
        SampleRotation[SampleRun]: sc.scalar(3.65, unit='deg'),
        Filename[SampleRun]: amor.data.amor_sample_run('610'),
    },
    '611': {
        SampleRotation[SampleRun]: sc.scalar(5.05, unit='deg'),
        Filename[SampleRun]: amor.data.amor_sample_run('611'),
    },
}

reflectivity = {}
for run_number, params in runs.items():
    workflow[Filename[SampleRun]] = params[Filename[SampleRun]]
    workflow[SampleRotation[SampleRun]] = params[SampleRotation[SampleRun]]
    reflectivity[run_number] = workflow.compute(ReflectivityOverQ).hist()

sc.plot(reflectivity, norm='log', vmin=1e-4)
Downloading file 'amor2023n000609.hdf' from 'https://public.esss.dk/groups/scipp/ess/amor/2/amor2023n000609.hdf' to '/home/runner/.cache/ess/amor/2'.
Downloading file 'amor2023n000610.hdf' from 'https://public.esss.dk/groups/scipp/ess/amor/2/amor2023n000610.hdf' to '/home/runner/.cache/ess/amor/2'.
Downloading file 'amor2023n000611.hdf' from 'https://public.esss.dk/groups/scipp/ess/amor/2/amor2023n000611.hdf' to '/home/runner/.cache/ess/amor/2'.
[8]:
../../_images/user-guide_amor_amor-reduction_16_1.svg

Scaling the reflectivity curves to overlap#

If we know the curves have been scaled by different factors (constant in \(Q\)), it can be useful to rescale them so they overlap:

[9]:
from ess.reflectometry.tools import scale_reflectivity_curves_to_overlap

scaled_reflectivity_curves, scale_factors = scale_reflectivity_curves_to_overlap(
    reflectivity.values(),
    # Optionally specify a Q-interval where the reflectivity is known to be 1.0
    critical_edge_interval=(sc.scalar(0.01, unit='1/angstrom'), sc.scalar(0.014, unit='1/angstrom'))
)

sc.plot(dict(zip(reflectivity.keys(), scaled_reflectivity_curves, strict=True)), norm='log', vmin=1e-5)
[9]:
../../_images/user-guide_amor_amor-reduction_18_0.svg
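The idea behind the scaling can be illustrated with a simplified sketch: for a pair of neighbouring curves, estimate the factor that makes their mean intensities agree over the \(Q\) points they share. This is not necessarily the exact algorithm used by scale_reflectivity_curves_to_overlap; it is a minimal illustration of the principle:

```python
def overlap_scale(reference, curve):
    """Return the factor that scales `curve` onto `reference`, using the
    mean intensity over the Q points both curves share.
    Each curve is a dict mapping a Q value to a reflectivity value."""
    shared = reference.keys() & curve.keys()
    if not shared:
        raise ValueError("curves do not overlap in Q")
    ref_mean = sum(reference[q] for q in shared) / len(shared)
    cur_mean = sum(curve[q] for q in shared) / len(shared)
    return ref_mean / cur_mean

# Two toy curves that differ by a constant factor of 2 in their overlap:
r1 = {0.01: 1.0, 0.02: 0.5, 0.03: 0.25}
r2 = {0.02: 1.0, 0.03: 0.5, 0.04: 0.25}
print(overlap_scale(r1, r2))  # 0.5: r2 must be halved to match r1
```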

Curves obtained from measurements at different angles can be combined into one common reflectivity curve:

[10]:
from ess.reflectometry.tools import combine_curves
combined = combine_curves(scaled_reflectivity_curves, workflow.compute(QBins))
combined.plot(norm='log')
[10]:
../../_images/user-guide_amor_amor-reduction_20_0.svg
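One standard way to merge several measurements of the same \(Q\) bin is an inverse-variance weighted average; whether combine_curves uses exactly this weighting is an implementation detail, but the sketch below shows the principle:

```python
def weighted_combine(values, variances):
    """Inverse-variance weighted mean of several measurements of the
    same quantity; returns (value, variance) of the combination."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    value = sum(w * x for w, x in zip(weights, values)) / total
    return value, 1.0 / total

# Two measurements of the same Q bin with equal uncertainties:
val, var = weighted_combine([1.0, 2.0], [1.0, 1.0])
print(val, var)  # 1.5 0.5  (equal weights -> plain mean, reduced variance)
```

The combined variance is always smaller than the smallest input variance, which is why combining overlapping runs improves the curve.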

Diagnostic figures#

There are some useful visualizations that can be used to troubleshoot the instrument. They typically operate on the ReflectivityData.

The difference between ReflectivityData and ReflectivityOverQ is that ReflectivityData is not binned in \(Q\), but instead has the same shape as the reference.

Essentially it represents a “reflectivity” computed in every wavelength-detector coordinate (z_index) bin. This makes it easier to spot inhomogeneities and diagnose problems.
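Schematically, such a per-bin "reflectivity" is an element-wise ratio of sample counts to reference counts on the (wavelength, z_index) grid. The toy example below ignores the supermirror calibration and all corrections the real workflow applies; it only illustrates the element-wise normalization:

```python
# Toy counts on a tiny 2x2 (wavelength, z_index) grid:
sample = [[40, 10], [8, 2]]
reference = [[80, 40], [16, 8]]

# Element-wise ratio; a deviating entry here would flag a local problem
# (e.g. a bad detector region) that a Q-binned curve would average away.
ratio = [
    [s / r for s, r in zip(srow, rrow)]
    for srow, rrow in zip(sample, reference)
]
print(ratio)  # [[0.5, 0.25], [0.5, 0.25]]
```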

[11]:
# Start by computing the `ReflectivityData` for each of the files
diagnostics = {}
for run_number, params in runs.items():
    workflow[Filename[SampleRun]] = params[Filename[SampleRun]]
    workflow[SampleRotation[SampleRun]] = params[SampleRotation[SampleRun]]
    diagnostics[run_number] = workflow.compute((ReflectivityData, ThetaBins[SampleRun]))

# Scale the results using the scale factors computed earlier
for run_number, scale_factor in zip(reflectivity.keys(), scale_factors, strict=True):
    diagnostics[run_number][ReflectivityData] *= scale_factor

Make a \((\lambda, \theta)\) map#

A good sanity check is to create a two-dimensional map of the counts in \(\lambda\) and \(\theta\) bins and make sure the triangles converge at the origin.

[12]:
from ess.amor.figures import wavelength_theta_figure

wavelength_theta_figure(
    [result[ReflectivityData] for result in diagnostics.values()],
    theta_bins=[result[ThetaBins[SampleRun]] for result in diagnostics.values()],
    q_edges_to_display=(sc.scalar(0.018, unit='1/angstrom'), sc.scalar(0.113, unit='1/angstrom'))
)
[12]:
../../_images/user-guide_amor_amor-reduction_25_0.svg

This plot can be used to check whether the value of the sample rotation angle \(\omega\) is correct. The bright triangles should point back to the origin \(\lambda = \theta = 0\); in the figure above, the black lines all pass through the origin.
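The geometric reason: along a line of constant \(Q\) we have \(\theta = \arcsin(Q\lambda/4\pi)\), which for the small angles involved is almost exactly a straight line through \(\lambda = \theta = 0\). A quick numeric check in plain Python:

```python
import math

def theta_at(q, wavelength):
    """Scattering angle (deg) on a constant-Q line,
    from Q = 4*pi*sin(theta)/lambda."""
    return math.degrees(math.asin(q * wavelength / (4 * math.pi)))

# Along Q = 0.018 1/angstrom, theta scales (almost) linearly with lambda,
# so extrapolating the bright features back hits the origin:
for wav in (3.0, 6.0, 12.0):
    print(f"lambda = {wav:4.1f} angstrom -> theta = {theta_at(0.018, wav):.4f} deg")
```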

Make a \((Q, \theta)\) map#

Another good sanity check is to create a two-dimensional map of the counts in \(Q\) and \(\theta\) and make sure the stripes are vertical. If they are not, that could indicate that the ChopperPhase setting is incorrect.

[13]:
from ess.amor.figures import q_theta_figure

q_theta_figure(
    [res[ReflectivityData] for res in diagnostics.values()],
    theta_bins=[res[ThetaBins[SampleRun]] for res in diagnostics.values()],
    q_bins=workflow.compute(QBins)
)
[13]:
../../_images/user-guide_amor_amor-reduction_28_0.svg

Compare the sample measurement to the reference on the detector#

Here we compare the raw number of counts on the detector for the sample measurement and the reference respectively.

z_index is the detector coordinate along \(z\), that is, the detector coordinate in the direction of the scattering angle \(\theta\).

[14]:
from ess.amor.figures import wavelength_z_figure

workflow[Filename[SampleRun]] = runs['608'][Filename[SampleRun]]
workflow[SampleRotation[SampleRun]] = runs['608'][SampleRotation[SampleRun]]
wavelength_z_figure(
    workflow.compute(FootprintCorrectedData[SampleRun]),
    wavelength_bins=workflow.compute(WavelengthBins),
    grid=False
) + wavelength_z_figure(
    reference_result,
    grid=False
)
[14]:
../../_images/user-guide_amor_amor-reduction_30_0.svg

Save data#

We can save the computed \(I(Q)\) to an ORSO .ort file using the orsopy package.

First, we need to collect the metadata for that file. To this end, we build a pipeline with additional providers. We also insert a parameter to indicate the creator of the processed data.

[15]:
from ess.reflectometry import orso
from orsopy import fileio
[16]:
workflow[orso.OrsoCreator] = orso.OrsoCreator(
    fileio.base.Person(
        name='Max Mustermann',
        affiliation='European Spallation Source ERIC',
        contact='max.mustermann@ess.eu',
    )
)
[17]:
workflow.visualize(orso.OrsoIofQDataset, graph_attr={'rankdir': 'LR'})
[17]:
../../_images/user-guide_amor_amor-reduction_34_0.svg

We build our ORSO dataset from the computed \(I(Q)\) and the ORSO metadata:

[18]:
iofq_dataset = workflow.compute(orso.OrsoIofQDataset)
iofq_dataset
[18]:
OrsoDataset(info=Orso(
     data_source=DataSource(
                            owner=Person(name='J. Stahn', affiliation=None, contact='', comment=None),
                            experiment=Experiment(title='commissioning', instrument='Amor', start_date=datetime.datetime(2023, 11, 13, 13, 27, 8), probe='neutron', facility=None, proposalID=None, doi=None, comment=None),
                            sample=Sample(name='SM5', category=None, composition=None, description=None, size=None, environment=None, sample_parameters=None, model=SampleModel(stack='air | 20 ( Ni 7 | Ti 7 ) | Al2O3', origin=None, sub_stacks=None, layers=None, materials=None, composits=None, globals=None, reference=None, comment=None), comment=None),
                            measurement=Measurement(
                                                    instrument_settings=InstrumentSettings(
                                                                                           incident_angle=ValueRange(min=0.004563767609003476, max=0.028431498466197823, unit='rad', individual_magnitudes=None, offset=None, comment=None),
                                                                                           wavelength=ValueRange(min=2.8000202343174574, max=11.999998758060757, unit='angstrom', individual_magnitudes=None, offset=None, comment=None),
                                                                                           polarization=None,
                                                                                           ),
                                                    data_files=[File(file='amor2023n000608.hdf', timestamp=None, comment=None)],
                                                    additional_files=[File(file='amor2023n000614.hdf', timestamp=None, comment='supermirror')],
                                                    ),
                            ),
     reduction=Reduction(
                         software=Software(name='ess.reflectometry', version='24.10.3', platform='Linux', comment=None),
                         timestamp=datetime.datetime(2024, 10, 25, 8, 7, 52, 831992, tzinfo=datetime.timezone.utc),
                         creator=Person(name='Max Mustermann', affiliation='European Spallation Source ERIC', contact='max.mustermann@ess.eu', comment=None),
                         corrections=[],
                         ),
     columns=[Column(name='Qz', unit='1/angstrom', physical_quantity='wavevector transfer', flag_is=None, comment=None), Column(name='R', unit=None, physical_quantity='reflectivity', flag_is=None, comment=None), Column(name='sR', unit=None, physical_quantity='standard deviation of reflectivity', flag_is=None, comment=None), Column(name='sQz', unit='1/angstrom', physical_quantity='standard deviation of wavevector transfer resolution', flag_is=None, comment=None)],
     ), data=array([[5.02638405e-03, 2.62029935e-01, 4.70619668e-02, 3.00881770e-04],
       [5.07943060e-03, 3.53651911e-01, 6.06535962e-02, 3.00748615e-04],
       [5.13303698e-03, 3.76012557e-01, 5.11731358e-02, 3.02774350e-04],
       ...,
       [1.19724504e-01, 4.48254393e-03, 1.58507549e-03, 2.92669822e-03],
       [1.20988031e-01, 1.98316428e-03, 1.40254620e-03, 2.96737457e-03],
       [1.22264893e-01, 2.26362873e-03, 2.26362873e-03, 2.98680597e-03]]))

We also add the URL of this notebook to make it easier to reproduce the data:

[19]:
iofq_dataset.info.reduction.script = (
    'https://scipp.github.io/essreflectometry/examples/amor.html'
)

To support provenance tracking, we also list the corrections that were applied by the workflow and store them in the dataset:

[20]:
iofq_dataset.info.reduction.corrections = orso.find_corrections(
    workflow.get(orso.OrsoIofQDataset)
)

Now let’s repeat this for all the sample measurements! To do that we can use a utility in ess.reflectometry.tools:

[21]:
from ess.reflectometry.tools import orso_datasets_from_measurements

datasets = orso_datasets_from_measurements(
    workflow,
    runs.values(),
    # Optionally scale the curves to overlap using `scale_reflectivity_curves_to_overlap`
    scale_to_overlap=True
)

Finally, we can save the data to a file. Note that iofq_dataset is an orsopy.fileio.orso.OrsoDataset.

[22]:
fileio.orso.save_orso(datasets=datasets, fname='amor_reduced_iofq.ort')

Look at the first 50 lines of the file to inspect the metadata:

[23]:
!head amor_reduced_iofq.ort -n50
# # ORSO reflectivity data file | 1.1 standard | YAML encoding | https://www.reflectometry.org/
# data_source:
#   owner:
#     name: J. Stahn
#     affiliation: null
#     contact: ''
#   experiment:
#     title: commissioning
#     instrument: Amor
#     start_date: 2023-11-13T13:27:08
#     probe: neutron
#   sample:
#     name: SM5
#     model:
#       stack: air | 20 ( Ni 7 | Ti 7 ) | Al2O3
#   measurement:
#     instrument_settings:
#       incident_angle: {min: 0.004563767609003476, max: 0.028431498466197823, unit: rad}
#       wavelength: {min: 2.8000202343174574, max: 11.999998758060757, unit: angstrom}
#       polarization: null
#     data_files:
#     - file: amor2023n000608.hdf
#     additional_files:
#     - file: amor2023n000614.hdf
#       comment: supermirror
# reduction:
#   software: {name: ess.reflectometry, version: 24.10.3, platform: Linux}
#   timestamp: 2024-10-25T08:07:56.480678+00:00
#   creator:
#     name: Max Mustermann
#     affiliation: European Spallation Source ERIC
#     contact: max.mustermann@ess.eu
#   corrections:
#   - chopper ToF correction
#   - footprint correction
# data_set: 0
# columns:
# - {name: Qz, unit: 1/angstrom, physical_quantity: wavevector transfer}
# - {name: R, physical_quantity: reflectivity}
# - {name: sR, physical_quantity: standard deviation of reflectivity}
# - {name: sQz, unit: 1/angstrom, physical_quantity: standard deviation of wavevector
#     transfer resolution}
# # Qz (1/angstrom)    R                      sR                     sQz (1/angstrom)
5.0263840502434900e-03 2.6202993538553171e-01 4.7061966835727186e-02 3.0088177049042544e-04
5.0794305979733716e-03 3.5365191059187928e-01 6.0653596244305108e-02 3.0074861530848113e-04
5.1330369788154728e-03 3.7601255664396532e-01 5.1173135798138966e-02 3.0277435001431252e-04
5.1872091010357786e-03 5.1627447498916823e-01 5.6674907281939961e-02 3.0471441015956428e-04
5.2419529352538633e-03 3.7502059580367697e-01 3.6261782312799105e-02 3.0685539409401694e-04
5.2972745151009613e-03 5.8035468085748432e-01 5.5599519897208405e-02 3.0924822169676032e-04
5.3531799378849550e-03 4.5244897926975997e-01 3.8252397963333788e-02 3.0986471896110265e-04