Proton Calorimetry/Detector Analysis

This page contains information on the analysis code for the QuARC detector data.

Photodiode Data Processing

Details on how to acquire data from the DDC232 can be found here.

  • Data is saved in hexadecimal format where one line represents data of all photodiodes in the setup. Each DDC232 channel measurement is represented by 5 hexadecimal characters.
  • In any script analysing DDC232 data, each 5-character hex value is converted to a decimal count and then to a charge in pC; the zero-current offset is subtracted in the same step (a conversion sketch follows this list).
  • The photodiode-input-resistor maps ensure that the ordering of measurements matches the physical order of the photodiodes and accounts for input summing.
    • As each photodiode is split across two inputs (as of rev. B), the 32 channel measurements reduce to 16 photodiode measurements, with the two inputs of each photodiode summed.
    • In practice, this makes the effective full-scale range twice the value set on the FPGA.
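
The conversion step above can be illustrated with a short, self-contained C++ sketch. This is not the project's acquisition or analysis code: it assumes the 20-bit code maps linearly onto the full-scale range and that the zero-current offset code is known, and all numerical values are placeholders.

    #include <iostream>
    #include <string>

    // Convert one 5-character hex word (a 20-bit DDC232 code) to a charge in pC.
    // fsrPC is the full-scale range in pC; zeroOffsetCode is the code measured with
    // no input current. Both are illustrative assumptions in this sketch.
    double hexToCharge(const std::string& hexWord, double fsrPC, long zeroOffsetCode) {
        long code = std::stol(hexWord, nullptr, 16);      // 5 hex characters -> 20-bit value
        double counts = static_cast<double>(code - zeroOffsetCode);
        return counts / 1048576.0 * fsrPC;                // 2^20 codes span the FSR
    }

    int main() {
        // Placeholder values: 12.5 pC FSR and an arbitrary offset code.
        std::cout << hexToCharge("80000", 12.5, 4096) << " pC" << std::endl;
        return 0;
    }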

Photodiode Noise Analysis

  • The script analysis.cpp analyses DDC232 data to report the amount of external electronic noise present in the data. This is useful in determining the quality of the data.
  • To compile the script: g++ -o analysis analysis.cpp `root-config --cflags --glibs`
  • To execute the script: ./analysis 4 folder capture.txt
    • The first argument is the number of DDC232 boards.
    • The second argument is the folder containing the captured data.
    • The third argument is the name of the data file.
  • The script produces the following plots, both by individual DDC232 channel and by photodiode (if summing is present); a sketch of how these quantities are computed follows this list:
    • Average value with standard deviation
    • Histogram
    • Time evolution graph (measured value against sample number)
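
As a rough illustration of how the quantities above could be produced for a single channel, the sketch below computes the mean and standard deviation of a set of samples and fills a ROOT histogram and a time-evolution graph. It is a simplified stand-in for analysis.cpp rather than the script itself, and the sample values are placeholders.

    #include <cmath>
    #include <iostream>
    #include <vector>

    #include "TGraph.h"
    #include "TH1D.h"

    // Summarise one channel: average with standard deviation, a histogram and a
    // time-evolution graph (value against sample number). Placeholder data stand
    // in for a parsed capture file.
    int main() {
        std::vector<double> samples = {1.02, 0.98, 1.01, 1.05, 0.97};   // charge in pC, illustrative

        double sum = 0.0, sum2 = 0.0;
        TH1D hist("hist", "Channel charge;Charge [pC];Entries", 100, 0.0, 2.0);
        TGraph evolution;                                 // measured value vs sample number

        for (size_t i = 0; i < samples.size(); ++i) {
            sum  += samples[i];
            sum2 += samples[i] * samples[i];
            hist.Fill(samples[i]);
            evolution.SetPoint(static_cast<int>(i), static_cast<double>(i), samples[i]);
        }
        const double n    = static_cast<double>(samples.size());
        const double mean = sum / n;
        const double sd   = std::sqrt(sum2 / n - mean * mean);

        std::cout << "mean = " << mean << " pC, std dev = " << sd << " pC" << std::endl;
        return 0;
    }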

Photodiode Replay

  • This script "replays" a previously acquired measurement, as if it were being captured at that moment.
  • To compile the script: g++ -o replay replay.cpp `root-config --cflags --glibs`
  • To execute the script: ./replay 2 350 0 170 50 capture.txt step/avg 50 100 (clean folder background.txt backST.txt frontST.txt)
    • The first argument is the number of DDC232 boards.
    • The second argument is the DDC232 FSR.
    • The third argument is a custom y-axis range (set to 0 for default).
    • The fourth argument is the DDC232 integration time.
    • The fifth argument is the target frame rate of the ROOT plot.
    • The sixth argument is the full path and name of the captured data.
    • The seventh argument is the mode: "step" or "avg".
      • Step replays each individual measurement at approximately the target frame rate. In this mode, the eighth and ninth arguments are the measurement numbers to start and finish on, respectively.
      • Avg replays an average of measurements along with a standard deviation. The number of measurements averaged in each frame is given by the eighth argument multiplied by 1000 and divided by the integration time, and represents the averaging rate in Hz; the ninth argument is ignored in this mode (a worked example follows this list).
    • The tenth argument "clean" is optional and can be used to replay photodiode data with previously measured background and calibration corrections.
      • The eleventh argument is the folder containing the background and front and back shoot-through measurements; the remaining arguments are the filenames.
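
As a worked example of the averaging mode, applying the formula above to the values in the example command (integration time 170 and an eighth argument of 50) gives 50 × 1000 / 170 ≈ 294 measurements averaged per frame.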


Fitting QuARC Data

  • To fit previously acquired photodiode data, the scripts calibrate.cpp and fit.cpp are used.
  • To compile the calibration script: g++ -o calibrate calibrate.cpp `root-config --cflags --glibs`
  • To execute the calibration script: ./calibrate 2 12.5 folder background.txt frontST.txt backST.txt data.txt
    • The first argument is the number of DDC232 boards.
    • The second argument is the DDC232 FSR.
    • The third argument is the folder containing the measurements.
    • The fourth, fifth, sixth and seventh arguments are the filenames for the background, front shoot-through, back shoot-through and proton beam measurements.
  • The output of this script is a file containing two lines: calibrated average photodiode values and standard deviations (a sketch of this output step is given after this list).
  • To compile the fitting script: g++ -o fit fit.cpp `root-config --cflags --glibs`
  • To execute the fitting script: ./fit 2 12.5 folder data.txt energy
    • The first argument is the number of DDC232 boards.
    • The second argument is the DDC232 FSR.
    • The third argument is the folder containing the measurements.
    • The fourth argument is the name of the calibrated data (which should reside in a directory called "calibrated" in "folder").
    • The fifth argument is the beam energy in MeV, which is just used as a label for the plot.
  • The calibration and fitting scripts can be executed consecutively using the bash script fit.sh.
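
The two-line output file described above can be sketched as follows. Only the final averaging and writing step is shown, with hypothetical variable names and placeholder values; the actual background and shoot-through corrections applied by calibrate.cpp are not reproduced here.

    #include <cmath>
    #include <fstream>
    #include <vector>

    // Write two lines: per-photodiode calibrated averages, then standard deviations.
    // "calibrated" stands for the corrected samples of each photodiode; the values
    // and the output path are placeholders.
    int main() {
        std::vector<std::vector<double>> calibrated = {
            {0.91, 0.93, 0.92},                           // photodiode 0
            {1.44, 1.41, 1.43}                            // photodiode 1, etc.
        };

        std::vector<double> means, stddevs;
        for (const auto& channel : calibrated) {
            double sum = 0.0, sum2 = 0.0;
            for (double v : channel) { sum += v; sum2 += v * v; }
            const double n    = static_cast<double>(channel.size());
            const double mean = sum / n;
            means.push_back(mean);
            stddevs.push_back(std::sqrt(sum2 / n - mean * mean));
        }

        std::ofstream out("calibrated/data.txt");         // "calibrated" directory inside the measurement folder
        for (double m : means)   out << m << " ";
        out << "\n";
        for (double s : stddevs) out << s << " ";
        out << "\n";
        return 0;
    }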


Useful literature

  • Laurent's paper on the quenched Bragg model: https://doi.org/10.1002/mp.14099

Notes on Fitting

  • The latest version of the code is implemented in Eclipse; see the guide at http://www.hep.ucl.ac.uk/pbt/wiki/Software/ROOT on how to set up Eclipse for C++ programming with ROOT.
  • Loading either CMOS or photodiode data into the fitting routine creates a Measurement object, which contains the following class variables:
    • The measurement number, species, energy and absorber thickness.
    • A reference range, taken from LibdEdx using the energy of the delivered beam and subtracting the thickness of any absorber used in the measurement.
    • A relevant FLUKA curve for the beam energy, if it exists (HIT only).
    • Vectors of the average light output in each sheet, the sheet edges in WET and the uncertainty of the light output.
    • The raw PDL in a TH1D object.
  • Sigma is estimated with Bortfeld's approximation, which is only truly valid for protons.
    • While sigma can be estimated from a Gaussian fit of the Bragg peak, the poor spatial resolution of the detector means that Bortfeld's estimation tends to work better for all particle species.
  • Phi0 is estimated with a rough initial guess based on the largest measured count.
  • For low energies, there are more direct proton hits in the sensor, so the fit start is moved to avoid the first few sheets.
  • The fit end is chosen to avoid the fragmentation tails of ion fits.
  • Birks' law to second order has been implemented, though it is currently untested: C is set to 0 and kB is set to a rough estimate of 0.07, which is typically correct for protons.
  • The particle species, material and buildup approximation codes are used as dummy fixed parameters in the model to select the correct constants.
    • The buildup approximation is rarely ever used and can be ignored.
  • The QB model is implemented in a TF1 object and is fitted using ROOT's binned fitting routine (a minimal fitting sketch follows this list).
  • After the QB model is fitted, the relevant fit parameter results are extracted to recover Bortfeld's Bragg curve, which can be compared against facility reference curves.
    • The quenchedBraggHist function implements ROOT's binned fit by hand; it is often slower, but can be useful for measurements with only a small number of data points.
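
To illustrate the TF1-based binned fit mentioned above, the sketch below fits a user-defined function to a TH1D of light output against depth. The Gaussian here merely stands in for the quenched Bragg model, whose actual functional form lives in the analysis code; the histogram contents, parameter values and names are placeholders.

    #include <cmath>

    #include "TF1.h"
    #include "TH1D.h"
    #include "TRandom3.h"

    // Placeholder model: a Gaussian standing in for the quenched Bragg curve.
    // p[0] plays the role of the amplitude (cf. Phi0), p[1] the peak depth and
    // p[2] the width (cf. sigma).
    double model(double* x, double* p) {
        const double arg = (x[0] - p[1]) / p[2];
        return p[0] * std::exp(-0.5 * arg * arg);
    }

    int main() {
        // Depth-light histogram; in the real code this is built from the sheet data.
        TH1D pdl("pdl", "Light output;Depth [mm WET];Light output [arb.]", 50, 0.0, 100.0);
        TRandom3 rng(0);
        for (int i = 0; i < 10000; ++i) pdl.Fill(rng.Gaus(60.0, 5.0));   // placeholder content

        TF1 qb("qb", model, 0.0, 100.0, 3);               // fit range can be restricted, e.g. for ion fits
        qb.SetParameters(1000.0, 60.0, 5.0);              // rough initial guesses
        pdl.Fit(&qb, "R");                                // ROOT's binned fit over the TF1 range
        return 0;
    }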

SOBP Fit

  • A reference SOBP for HIT can be constructed by interpolating and weighting individual FLUKA curves using the number of proton beams delivered, their energies and the number of particles in each beam.
  • An SOBP fit simply weights and sums individual proton beams, using the additional parameters chi and p, which are described in the literature linked above (a schematic sketch follows this list).
    • Chi and p are estimated by hand.
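
The weighting and summing described above can be sketched schematically: each pristine Bragg curve, sampled on a common depth grid, is scaled by its beam weight and the contributions are summed point by point. The curves and weights below are placeholders rather than real beam data.

    #include <iostream>
    #include <vector>

    // Build an SOBP by weighting and summing pristine Bragg curves sampled on a
    // common depth grid. In practice the weights come from the number of particles
    // delivered in each beam; the numbers here are placeholders.
    int main() {
        std::vector<std::vector<double>> braggCurves = {
            {0.30, 0.50, 1.00, 0.10},                     // beam 1 (light output vs depth)
            {0.30, 0.40, 0.60, 1.00}                      // beam 2
        };
        std::vector<double> weights = {0.6, 0.4};

        std::vector<double> sobp(braggCurves[0].size(), 0.0);
        for (size_t b = 0; b < braggCurves.size(); ++b)
            for (size_t z = 0; z < sobp.size(); ++z)
                sobp[z] += weights[b] * braggCurves[b][z];

        for (double d : sobp) std::cout << d << " ";
        std::cout << std::endl;
        return 0;
    }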

Ion Fits

  • As mentioned, for ion fits, the QB fit range must be restricted to avoid the nuclear fragmentation tail after the Bragg peak.
  • The QB model performs reasonably well with helium, but may not describe carbon well.
  • Reference FLUKA curves for HIT are currently being sought in order to accurately determine the QB model performance in ion fits.
  • Birks' law to second-order may be necessary here.

Future Development

  • Replace the QB model with Kramer's numerical model for depth-dose curves, applicable to all ions, and then apply (second-order) Birks' law.
  • Implement the fitting routine using the GNU Scientific Library rather than ROOT, for potentially faster curve fitting.

CMOS Sensor Image Analysis (Legacy)

Details on how to process images taken by the ISDI CMOS Sensor to recover the average light output in each scintillator sheet.

  • First, the 21 images taken by the CMOS sensor must be corrected for non-linearity effects in the photon-electron conversion. The MATLAB script linear_corr_tiff.m does this.
    • The correct full-well mode must be selected in the script, typically high full-well is chosen for its superior linearity.
    • The script loops over each pixel in the 21 images and applies a correction using cubic interpolation of the linearity data (linearity_high.txt). A corrected version of each image is saved.
  • The 21 corrected images are then averaged using averageTiffs.sh, which outputs a single image.
    • On MacOS, this may require ImageMagick, which can be installed with MacPorts: sudo port install ImageMagick
  • To apply background subtraction and corrections for non-linearity in scintillator light output, the MATLAB script edit_runs.m is run.
    • This requires a corrected/averaged background and shoot-through measurements (front and/or back).
    • It is important for the background and shoot-through measurements to have the same full-well mode as the main measurement; a similar spot size for the shoot-through and the measurement is also desirable.
    • The script outputs a .txt file containing a column of data, representing the light output in each horizontal pixel row in arbitrary units.
  • As part of the fitting procedure, the method readCMOSData calibrates the x-axis to WET and calculates the average light output in each sheet (a sketch of the WET calibration step follows this list):
    • Each scintillator sheet is enumerated, and a vector of the sheet thicknesses (in numerical order) is hardcoded along with the configuration of the sheets for a given experimental run.
    • A small thickness correction is applied based on the physical measured thickness of the stack and the perceived thickness of the stack in the sensor.
    • The sheet edges in WET are found by multiplying the corrected thicknesses by the RSP, which is obtained from the ratio of the measured WET of the stack to its physical thickness.
    • The light output in each sheet (staying 0.5 mm away from the sheet edges) is averaged and then normalised, either to the maximum measured light output or to the reference curve at detector entry.
    • Uncertainty is calculated with contributions from: sheet cross-talk, direct hits of protons in the sensor, averaging errors, spot size and position and background.
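
As a sketch of the WET calibration step described in the list above, the following code converts a vector of corrected sheet thicknesses into cumulative sheet edges in WET by multiplying by the RSP. The thickness values, the RSP and the function name are illustrative assumptions, not extracts from readCMOSData.

    #include <iostream>
    #include <vector>

    // Convert corrected physical sheet thicknesses (mm) into cumulative sheet edges
    // in WET (mm) by scaling with the relative stopping power (RSP), where
    // RSP = measured WET of the stack / physical thickness of the stack.
    std::vector<double> sheetEdgesWET(const std::vector<double>& thicknesses, double rsp) {
        std::vector<double> edges;
        edges.push_back(0.0);                             // detector entry
        double depth = 0.0;
        for (double t : thicknesses) {
            depth += t * rsp;                             // water-equivalent thickness of this sheet
            edges.push_back(depth);
        }
        return edges;
    }

    int main() {
        // Placeholder values: three sheets of 2, 2 and 3 mm with an RSP of 1.03.
        for (double e : sheetEdgesWET({2.0, 2.0, 3.0}, 1.03))
            std::cout << e << " ";
        std::cout << std::endl;
        return 0;
    }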