\documentclass[10pt]{article}
\usepackage{fullpage}
\usepackage{setspace}
\usepackage{parskip}
\usepackage{titlesec}
\usepackage[section]{placeins}
\usepackage{xcolor}
\usepackage{breakcites}
\usepackage{lineno}
\usepackage{hyphenat}
\PassOptionsToPackage{hyphens}{url}
\usepackage[colorlinks = true,
linkcolor = blue,
urlcolor = blue,
citecolor = blue,
anchorcolor = blue]{hyperref}
\usepackage{etoolbox}
\makeatletter
\patchcmd\@combinedblfloats{\box\@outputbox}{\unvbox\@outputbox}{}{%
\errmessage{\noexpand\@combinedblfloats could not be patched}%
}%
\makeatother
\usepackage{natbib}
\renewenvironment{abstract}
{{\bfseries\noindent{\abstractname}\par\nobreak}\footnotesize}
{\bigskip}
\titlespacing{\section}{0pt}{*3}{*1}
\titlespacing{\subsection}{0pt}{*2}{*0.5}
\titlespacing{\subsubsection}{0pt}{*1.5}{0pt}
\usepackage{authblk}
\usepackage{graphicx}
\usepackage[space]{grffile}
\usepackage{latexsym}
\usepackage{textcomp}
\usepackage{longtable}
\usepackage{tabulary}
\usepackage{booktabs,array,multirow}
\usepackage{amsfonts,amsmath,amssymb}
\providecommand\citet{\cite}
\providecommand\citep{\cite}
\providecommand\citealt{\cite}
% You can conditionalize code for latexml or normal latex using this.
\newif\iflatexml\latexmlfalse
\providecommand{\tightlist}{\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}%
\AtBeginDocument{\DeclareGraphicsExtensions{.pdf,.PDF,.eps,.EPS,.png,.PNG,.tif,.TIF,.jpg,.JPG,.jpeg,.JPEG}}
\usepackage[utf8]{inputenc}
\usepackage[ngerman,english]{babel}
\begin{document}
\title{Applied Radiation And Isotopes Template}
\author[1]{Mostafa Borhani}%
\affil[1]{Shahid Beheshti University}%
\vspace{-1em}
\date{\today}
\begingroup
\let\center\flushleft
\let\endcenter\endflushleft
\maketitle
\endgroup
\selectlanguage{english}
\begin{abstract}
Airborne gamma-ray spectrometry (AGRS) offers a fast and safe way to
measure environmental radiation for applications ranging from geological
mapping to nuclear site surveillance and emergency response. This study
optimizes the training of an artificial neural network (ANN) for
evaluating AGRS data at different altitudes by means of a new
quasi-Newton updating formula, Enhanced Rezaei-Ashoor-Sarkhosh (ERAS),
whose update criteria are inherited from the stochastic properties of
gamma radiation. The proposed update is compared with eight established
training algorithms on gamma-ray spectra recorded with a 3" NaI detector
and a 1024-channel analyzer. The results indicate that ERAS improves
both the accuracy and the convergence speed of training while reducing
the computational cost per iteration.%
\end{abstract}%
\sloppy
\section*{Introduction}
{\label{874460}}
Airborne gamma-ray spectrometry (AGRS), with applications such as
identification of lithology, facies and depositional environment, depth
correlation and core-log integration, mineralogy, geochemistry,
cyclo-stratigraphic analysis, and environmental monitoring, especially
nuclear site surveillance and emergency response, has recently gained
prominence for its fast, precise, and accurate
outcomes~\hyperref[csl:3]{(Lutter et al., 2018)}. The consequences of a
nuclear power plant incident require immediate measurement of the
radiation dose or radionuclide pollution within the potential exposure
area. Online interpretation of radiometric spectrometry outcomes
effectively informs the measures taken to control or reduce the exposure
consequences~\hyperref[csl:4]{(Gupta, 2012)}. AGRS is the most common technique for
accelerating radiometric spectrometry~\hyperref[csl:5]{(Grasty, 1974)}. Utilizing
advanced computational procedures, through numerical optimization,
can improve the accuracy of AGRS~\hyperref[csl:5]{(Grasty, 1974)}
\hyperref[csl:6]{({\v{S}}vec, 2016)}.
The main focus of this manuscript is the fast optimization of AGRS
through enhancement of the updating formula used in its training. This
optimization was applied to a multilayer neural network for AGRS to
improve the convergence rate and performance via new updating criteria,
which are inherited from the stochastic properties of gamma
radiation~\hyperref[csl:7]{(Tatsumi et al., 2015)}. The proposed ANN was trained by an
advanced numerical unconstrained nonlinear optimization, the
quasi-Newton method~\hyperref[csl:8]{(Price, 2018)}, with the novel Enhanced
Rezaei-Ashoor-Sarkhosh (ERAS) updating formula. This training technique
extends the optimization process of quasi-Newton procedures for
unconstrained optimization problems such as AGRS. The output
of airborne gamma-ray spectrometry typically depends on many parameters,
such as the intensity of the measured radiation, the detector response, and the
distance between the source and the detector, all of which strongly affect
the measured spectrum~\hyperref[csl:9]{(You and Xu, 2014)}. The proposed updating
formula, ERAS, was chosen following \hyperref[csl:10]{(Stalter and Howarth, 2012)}.
This paper optimizes the training of real-time airborne gamma-ray
spectrometry, which may be utilized in an automatic environmental
radiation surveillance network on board a light Unmanned
Aerial Vehicle (UAV)~\hyperref[csl:6]{({\v{S}}vec, 2016)}. An ANN trained by a quasi-Newton
algorithm with these updates is employed for airborne gamma-ray
spectrometry evaluation at different altitudes. To resolve the problems
of gradient-based training~\hyperref[csl:11]{(Amini and Rizi, 2010)}, we explored an
advanced updating formula, ERAS.
Error optimization poses numerical challenges in neural network
training, owing to the large number of parameters, and extremely
powerful function-optimization approaches for neural networks have been
developed in recent years. This paper applies an advanced neural network
training approach to AGRS, to achieve faster convergence and higher
accuracy, based on the inherent stochastic traits of gamma rays. In this
study we used the second-order stochastic Hessian, or Hessian-free,
optimization technique with negative-curvature direction detection for
ANN training. Hessian-free optimization is an influential method with
two major modules. First, it models the quadratic optimization problem
indirectly: the problem of optimizing a convex quadratic function
subject to some linear constraints. This specific form of nonlinear
programming, Quadratic Programming (QP), was implemented using
Hessian-vector products with the quasi-Newton
matrix~\hyperref[csl:12]{(Pankratov and Kuvshinov, 2015)}. Second, it uses
generalized, truncated, or preconditioned conjugate-gradient iterations
to solve the sub-problems, following the generalized dogleg process,
where the inexact quasi-Newton step is taken asymptotically.
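The Hessian-vector products at the heart of Hessian-free optimization can be formed without ever building the Hessian matrix. The sketch below (illustrative only, not the authors' implementation) approximates \(Hv\) by a central finite difference of gradients, the same device later invoked for Hessian approximations from finite gradient differences; the quadratic test function and all names are assumptions:

```python
import numpy as np

def hessian_vector_product(grad_fn, w, v, eps=1e-5):
    """Approximate H(w) @ v via a central finite difference of gradients,
    so the Hessian matrix itself is never formed."""
    return (grad_fn(w + eps * v) - grad_fn(w - eps * v)) / (2.0 * eps)

# Check on a quadratic f(w) = 0.5 w^T A w, whose Hessian is exactly A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda w: A @ w              # gradient of the quadratic
w0 = np.array([1.0, -1.0])
v = np.array([0.5, 2.0])
hv = hessian_vector_product(grad, w0, v)   # close to A @ v
```

Only two extra gradient evaluations are needed per product, which is what makes the conjugate-gradient inner iterations of Hessian-free methods affordable.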
Lately, numerous stochastic quasi-Newton techniques have been proposed
for large-scale learning machines. Three challenges face quasi-Newton
methods: (1) the computation of Hessian-vector products in the training
update formula; (2) the difference in their update rates; and (3) the
applicability of such algorithms to nonconvex problems. The additivity
of stochastic quasi-Newton procedures provides robustness and
independence from the quality of the curvature information for AGRS. The
proposed stable updating formula may be utilized to improve the
convergence rate and performance of a neural network, as used and tested
on AGRS in this study.
Numerous theoretical~\hyperref[csl:13]{(Moslemi and Ashoor, 2017)} and experimental studies have
been carried out to intelligently guide the direction of photons
originating from the object in order to increase the SNR. Ground
radiation measurement for extracting a standard radiation map is not
pragmatic, owing to limitations such as low speed, long operation time,
high cost for a limited area, and high risk during nuclear accidents.
AGRS is the most noteworthy solution for outdoor measurement and
analysis: ground radiation maps, radionuclide type recognition, and
density approximation and assessment. Typical methods for the
acquisition and processing of AGRS data depend strongly on survey
parameters such as flying height, profile separation, detector volume,
energy window width, standards, calibration, and, lastly, the method of
data analysis~\hyperref[csl:14]{(Kluso{\v{n}}, 2010}; \hyperref[csl:15]{Lavi and Alfassi, 2004)}.
Direct comparison of the outcomes of different AGRS assessments is very
difficult because of these measurement parameter dependencies. To
moderate these dependencies, neural network modeling is applied in this
study. Thanks to their real-time output and ability to model complicated
systems such as AGRS, ANNs are commonly used for tasks such as
prediction, segmentation, classification, visualization, evaluation,
optimization, decision making, and the approximation of unknown
functions.
Adaptive neural networks are an appropriate solution for modeling a
continuously varying environment, which is typical of AGRS, and they
have contributed a meaningful share to nuclear science and engineering:
instrumentation for the detection and measurement of ionizing radiation;
particle accelerators and their controls; nuclear medicine and its
applications; effects of radiation on materials, components, and
systems; reactor instrumentation and controls; and measurement of
radiation in space. To that end, neural networks with various learning
algorithms and architectures, such as feed-forward, regulatory feedback,
radial basis function, recurrent, and physical neural networks, have
been used in nuclear science and technology \hyperref[csl:16]{(Manuel et al., 2013)}.
The problem considered in this study is the optimization of AGRS at
different altitudes with different updating formulas for the ANN. The
neural network was trained by different algorithms, including
quasi-Newton techniques, and then the accuracy, validation, convergence
speed, and computational complexity of each were reported for the AGRS
data. The dataset was taken with a 3" NaI detector and recorded by a
1024-channel Multi-Channel (MC) analyzer \hyperref[csl:17]{(Peter, 2018)}. Various
tests and their outcomes showed that our proposed updating formula,
ERAS, is well suited to airborne gamma-ray spectrometry optimization.
The rest of this paper is organized as follows: the use of neural
network computational machines in AGRS is described in Section II. The
proposed updating formula for training an ANN on stochastic gamma-ray
measurements is presented in Section III. The experimental procedures
and outcomes are given in Section IV, followed by the conclusion in
Section V.
\section*{AGRS and Training}\label{agrs-and-training}
Environmental radioactivity measurement is a technique for mineral
exploration, geological mapping, and monitoring. The management of
rescue operations and cooperation during nuclear accidents is based
entirely on the radiation information collected at and around the site
after the accident. Measuring radiation dose and exposure on the ground
after such accidents is a high-risk task in terms of human safety and
radiation protection. AGRS is the only safe and fast solution for
gathering the necessary data. Common commercial AGRS systems employ a
heavy equipment setup with a large volume of scintillator crystal. An
AGRS system typically consists of three detectors serving as different
windows for the main geochemical elements (thorium, uranium, and
potassium), plus one detector for the whole spectrum of interest and one
for background and cosmic radiation. Such a setup, usually weighing more
than 100 kg, needs a couple of operators, and the airborne measurement
system then requires a twin-engine helicopter for flight safety over
cities and populated areas.
Improving the accuracy of AGRS by employing advanced computational
algorithms has recently been reported in several new
studies~\hyperref[csl:18]{(Pandey and Singh, 2016)}. Radiation monitoring includes the
measurement of radiation dose or radionuclide pollution for reasons
related to the assessment or control of exposure to radiation or
radioactive substances, together with the interpretation of the
outcomes~\hyperref[csl:19]{(Henrichs, 2011)}. Radiation monitoring during nuclear
events and accidents is a challenging duty; therefore, advanced
radiation data collection approaches are essential for carrying out
high-performance computing in these critical tasks.
This manuscript optimized AGRS based on an ANN with a new updating
criterion inherited from the stochastic properties of gamma radiation.
The proposed method is independent of detector selection, so any gamma
spectrometer configuration, such as NaI, LaBr3, or HPGe detectors, can
be used. In our study, among all available scintillators, the most
commonly used material, sodium iodide, was chosen as the detector
scintillator. The HPGe detector cannot deliver high performance in AGRS
because of its low energy deposition, limited by its maximum practical
size, and LaBr3 is still not cost-effective compared with NaI(Tl)
commercially.
This study optimized the gamma photon count evaluation based on AGRS
data at different altitudes. Eight training approaches were performed to
investigate parameters including the accuracy and convergence speed of
the proposed AGRS. These advanced stochastic quasi-Newton techniques are
efficient, robust, and scalable in neural network training, and this
paper customizes them for the AGRS application. Finally, we evaluated a
new updating criterion, ERAS, inherited from the stochastic properties
of gamma radiation, to improve the convergence rate and performance of
AGRS.

1) Levenberg--Marquardt BackPropagation (LMBP)~\hyperref[csl:20]{(Sapna, 2012)}

2) Scaled Conjugate Gradient BackPropagation (S-CGBP)~\hyperref[csl:21]{(Nayak, 2017)}

3) Resilient Backpropagation (RBP)~\hyperref[csl:22]{(Saputra et al., 2017)}

4) BFGS quasi-Newton backpropagation~\hyperref[csl:23]{(Silaban et al., 2017)}

5) Conjugate Gradient BackPropagation with Polak--Ribière updates
(CGBP-PR)~\hyperref[csl:24]{(Ghani et al., 2017)}

6) Conjugate Gradient BackPropagation with Fletcher--Reeves updates
(CGBP-FR)~\hyperref[csl:25]{(Wanto et al., 2017)}

7) Conjugate Gradient BackPropagation with Hestenes--Stiefel updates
(CGBP-HS)~\hyperref[csl:26]{(Sharee, 2014)}

8) Conjugate Gradient BackPropagation with Dai--Yuan updates
(CGBP-DY)~\hyperref[csl:27]{(Dai et al., 2013)}

9) Conjugate Gradient BackPropagation with Enhanced
Rezaei-Ashoor-Sarkhosh updating (CGBP-ERAS)
\section*{The AGRS with ERAS Updating
Criterion}\label{the-agrs-with-eras-updating-criterion}
The new ERAS training of the ANN for AGRS is defined in detail in this
section. The multilayer ANN is trained by several advanced stochastic
quasi-Newton techniques, LMBP \hyperref[csl:20]{(Sapna, 2012)}, S-CGBP
\hyperref[csl:21]{(Nayak, 2017)}, RBP \hyperref[csl:22]{(Saputra et al., 2017)}, BFGS \hyperref[csl:23]{(Silaban et al., 2017)},
CGBP-PR \hyperref[csl:24]{(Ghani et al., 2017)}, CGBP-FR \hyperref[csl:25]{(Wanto et al., 2017)}, CGBP-HS
\hyperref[csl:26]{(Sharee, 2014)}, and CGBP-DY \hyperref[csl:27]{(Dai et al., 2013)}, and finally optimized
for real-time stochastic AGRS data with the proposed ERAS. In the
updating formulas used for AGRS, the convolutional and subsampling
layers were merged into one layer, which simplifies the network
architecture, and an adaptive update criterion incorporated curvature
information into the stochastic approximation approaches for the AGRS
data. Noisy curvature estimates, which have destructive effects on the
robustness of the iterations, are the most likely outcome of using
``classical'' quasi-Newton updating methods such as LMBP
\hyperref[csl:20]{(Sapna, 2012)}, S-CGBP \hyperref[csl:21]{(Nayak, 2017)}, RBP
\hyperref[csl:22]{(Saputra et al., 2017)}, BFGS \hyperref[csl:23]{(Silaban et al., 2017)}, CGBP-PR
\hyperref[csl:24]{(Ghani et al., 2017)}, and CGBP-FR \hyperref[csl:25]{(Wanto et al., 2017)}, so this
paper introduces a new optimized updating formula, ERAS, for AGRS. This
new update criterion addresses the robustness of the training's
convergence and the excessive computational complexity that are common
troubles of traditional quasi-Newton updating methods in AGRS
optimization. These complications were overcome by exploiting the
stochastic properties of gamma rays, as part of the novelty of this
work. The rest of this section dives deep into the mathematical
description of the quasi-Newton updating methods and the proposed
optimized updating formula, ERAS.
Newton's technique is an alternative to the Conjugate Gradient
BackPropagation (CGBP) approaches for fast optimization. The basic step
of Newton's process is\selectlanguage{english}
\begin{longtable}[]{@{}ll@{}}
\toprule
\begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
\(x_{k+1}=x_{k}-A_{k}^{-1}g_{k}\)\strut
\end{minipage} & \begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
(1)\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
{where \(A_{k}\) is the Hessian matrix (second derivatives) of the performance
index at the current values of the weights and biases. Newton's
technique frequently converges faster than CGBP approaches.
Unfortunately, it is complex and expensive to compute the Hessian
matrix for ANNs. The quasi-Newton technique is a class of procedures
based on Newton's process that does not need the calculation of second
derivatives. The approximation of the Hessian matrix is updated by the
quasi-Newton algorithm at each iteration as a function of the
gradient. The most popular update procedures of the quasi-Newton technique
are the LMBP \hyperref[csl:20]{(Sapna, 2012)}, S-CGBP \hyperref[csl:21]{(Nayak, 2017)}, RBP
\hyperref[csl:22]{(Saputra et al., 2017)}, BFGS \hyperref[csl:23]{(Silaban et al., 2017)}, CGBP-PR
\hyperref[csl:24]{(Ghani et al., 2017)}, CGBP-FR \hyperref[csl:25]{(Wanto et al., 2017)}, CGBP-HS
\hyperref[csl:26]{(Sharee, 2014)}, and CGBP-DY \hyperref[csl:27]{(Dai et al., 2013)} updates. In this study,
these update procedures plus the newly proposed update, CGBP-ERAS, are
implemented in the ANN training routine for the AGRS data.}
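As an illustration of the quasi-Newton idea described above, the following sketch (a minimal textbook BFGS with a backtracking line search, not the paper's training routine) maintains an inverse-Hessian approximation from gradient differences and steps as in Eq. (1); the test problem and all names are assumptions:

```python
import numpy as np

def bfgs_minimize(f, grad_fn, w, n_iter=100):
    """Quasi-Newton iteration: keep an inverse-Hessian approximation B,
    step along p = -B g (cf. Eq. (1)), and update B from gradient
    differences instead of computing any second derivatives."""
    B = np.eye(w.size)                   # initial inverse-Hessian guess
    g = grad_fn(w)
    for _ in range(n_iter):
        if np.linalg.norm(g) < 1e-10:
            break
        p = -B @ g                       # quasi-Newton search direction
        alpha = 1.0                      # backtracking (Armijo) line search
        while f(w + alpha * p) > f(w) + 1e-4 * alpha * (g @ p):
            alpha *= 0.5
        s = alpha * p
        w_new = w + s
        g_new = grad_fn(w_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                   # curvature condition keeps B positive definite
            rho = 1.0 / sy
            I = np.eye(w.size)
            B = (I - rho * np.outer(s, y)) @ B @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)   # standard BFGS inverse update
        w, g = w_new, g_new
    return w

# Convex quadratic test problem: f(w) = 0.5 w^T A w - b^T w, minimum at A w = b
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda w: 0.5 * w @ A @ w - b @ w
w_opt = bfgs_minimize(f, lambda w: A @ w - b, np.zeros(2))
```

The update touches only gradients, which is exactly what makes quasi-Newton schemes attractive when, as here, the Hessian of the ANN error surface is too expensive to form.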
Training an ANN with \(M\) free parameters (weights and biases) is
equivalent to optimizing a function of \(M\) independent variables with
the AGRS data, and the performance index can be expressed as the mean
squared error (MSE)\selectlanguage{english}
\begin{longtable}[]{@{}ll@{}}
\toprule
\begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
\(E(\mathbf{w}_{k})=\frac{1}{NP}\sum_{i=1}^{P}\sum_{j=1}^{N}\left(d_{ij}-a_{ij}\right)^{2}\)\strut
\end{minipage} & \begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
(2)\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
where \(N\) is the number of output neurons, \(P\) is the number of
training patterns, \(k\) indexes the training iterations, and
\(a_{ij}\) and \(d_{ij}\) are the actual and desired responses of the
\(j\)-th output neuron due to the \(i\)-th counted gamma-ray photon,
respectively. Let \(\mathbf{w}_{k}\) be the \(M\)-dimensional column
vector containing all free parameters (i.e., adaptable weights) of the
ANN at the \(k\)-th iteration\selectlanguage{english}
\begin{longtable}[]{@{}ll@{}}
\toprule
\begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
\(\mathbf{w}_{k}=\left[w_{1},w_{2},\ldots,w_{M}\right]^{T}\)\strut
\end{minipage} & \begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
(3)\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
{where \(T\) denotes the transpose operator.}
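The MSE performance index of Eq. (2) is straightforward to compute; a minimal sketch, with the array contents hypothetical and the symbols \(P\), \(N\), \(a_{ij}\), \(d_{ij}\) as defined above:

```python
import numpy as np

def mse_error(actual, desired):
    """Mean squared error over P training patterns (rows) and N output
    neurons (columns), matching the form of Eq. (2)."""
    P, N = actual.shape
    return np.sum((desired - actual) ** 2) / (N * P)

actual = np.array([[0.9, 0.1],    # network responses a_ij (hypothetical)
                   [0.2, 0.8]])
desired = np.array([[1.0, 0.0],   # target responses d_ij (hypothetical)
                    [0.0, 1.0]])
err = mse_error(actual, desired)
```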
To optimize the error function in Eq. (2), the following update rule is
applied iteratively, starting from an initial weight vector
\(\mathbf{w}_{0}\).\selectlanguage{english}
\begin{longtable}[]{@{}ll@{}}
\toprule
\begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
\(\mathbf{w}_{k+1}=\mathbf{w}_{k}+\Delta\mathbf{w}_{k}\)\strut
\end{minipage} & \begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
(4)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
with~\(\Delta\mathbf{w}_{k}=\eta_{k}\,\mathbf{p}_{k}\)\strut
\end{minipage} & \begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
(5)\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
{where \(\Delta\mathbf{w}_{k}\) is the weight-update vector,
\(\mathbf{p}_{k}\) is a search direction, and \(\eta_{k}\) is the
step-length at the \(k\)-th iteration.}
There are various ways of computing the search direction and the
step-length, ranging from simple gradient descent to the more efficient
CGBP and quasi-Newton methods. The simplest solution is to take a
constant step-length \(\eta_{k}=\eta\) and set the search direction to
the negative gradient, which is the direction of steepest descent from
any given point on the error surface; that is,\selectlanguage{english}
\begin{longtable}[]{@{}ll@{}}
\toprule
\begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
\(\mathbf{p}_{k}=-\mathbf{g}_{k}\)\strut
\end{minipage} & \begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
(6)\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
{where \(\mathbf{g}_{k}\) is the gradient vector of the error function
at the \(k\)-th epoch. The gradient vector is an \(M\)-dimensional
column vector given by}\selectlanguage{english}
\begin{longtable}[]{@{}ll@{}}
\toprule
\begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
\(\mathbf{g}_{k}=\nabla E(\mathbf{w}_{k})=\left[\frac{\partial E}{\partial w_{1}},\frac{\partial E}{\partial w_{2}},\ldots,\frac{\partial E}{\partial w_{M}}\right]^{T}\)\strut
\end{minipage} & \begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
(7)\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
{where \(\partial E/\partial w_{m}\) is the local gradient with respect
to the \(m\)-th weight.}
This formulation is often called the steepest descent algorithm or the
gradient descent method. In a multilayer ANN, the gradient vector can be
computed very efficiently by backpropagation. Gradient descent methods
typically work fairly well during the early stages of the optimization
process but behave poorly on airborne gamma-ray spectrometry data.
Therefore, in this study we employ not only the gradient but also the
curvature of the error surface to minimize the error function of the
airborne gamma-ray spectrometry data. The rest of this section presents
a number of such techniques, LMBP \hyperref[csl:20]{(Sapna, 2012)}, S-CGBP \hyperref[csl:21]{(Nayak, 2017)},
RBP \hyperref[csl:22]{(Saputra et al., 2017)}, BFGS \hyperref[csl:23]{(Silaban et al., 2017)}, CGBP-PR
\hyperref[csl:24]{(Ghani et al., 2017)}, CGBP-FR \hyperref[csl:25]{(Wanto et al., 2017)}, CGBP-HS
\hyperref[csl:26]{(Sharee, 2014)}, CGBP-DY \hyperref[csl:27]{(Dai et al., 2013)}, and finally our
proposed ERAS update procedure. They use batch training, in which each
weight update is performed after the presentation of all the airborne
gamma-ray spectrometry data.
The first strategy is local adaptation, where the temporal behavior of
the partial derivative of each weight is used in the computation of the
weight update. The weight-update rule is given by\selectlanguage{english}
\begin{longtable}[]{@{}ll@{}}
\toprule
\begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
\(\Delta\mathbf{w}_{k}=-\eta_{k}\,\mathbf{g}_{k}+\boldsymbol{\mu}_{k}\odot\Delta\mathbf{w}_{k-1}\)\strut
\end{minipage} & \begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
(8)\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
{where \(\odot\) denotes the element-by-element product of two column
vectors. The vector of adaptive momentum rates \(\boldsymbol{\mu}_{k}\)
is taken as the vector of magnitudes with respect to the error in the
previous iteration.}
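A minimal sketch of such a local adaptive update, a negative-gradient step plus an element-wise momentum term (the concrete values and names are assumptions, not the paper's exact rule):

```python
import numpy as np

def local_adaptive_update(g, dw_prev, eta, mu):
    """Weight update: negative-gradient step plus an element-wise
    (Hadamard) momentum term with per-weight momentum rates mu."""
    return -eta * g + mu * dw_prev    # '*' on arrays is element-by-element

g = np.array([0.4, -0.2])            # current gradient (hypothetical values)
dw_prev = np.array([0.1, 0.1])       # previous weight update
mu = np.array([0.9, 0.5])            # adaptive per-weight momentum rates
dw = local_adaptive_update(g, dw_prev, eta=0.5, mu=mu)
```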
The Conjugate Gradient BackPropagation (CGBP) method is another
efficient optimization technique; it can minimize a quadratic error
function of \(M\) variables in \(M\) steps. This method generates a
search direction that is mutually conjugate to the previous search
directions with respect to a given positive definite matrix
\(\mathbf{A}\), and finds the optimal point in that direction using a
line-search technique. Two search directions \(\mathbf{p}_{i}\) and
\(\mathbf{p}_{j}\) are said to be mutually conjugate with respect to
\(\mathbf{A}\) if the following condition is satisfied:\selectlanguage{english}
\begin{longtable}[]{@{}ll@{}}
\toprule
\begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
\(\mathbf{p}_{i}^{T}\mathbf{A}\,\mathbf{p}_{j}=0,\quad i\neq j\)\strut
\end{minipage} & \begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
(9)\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
{The next search direction is calculated as a linear combination of the
previous direction and the current gradient, in such a way that the
minimization steps in all previous directions are not interfered with.
The next search direction can be determined as follows:}\selectlanguage{english}
\begin{longtable}[]{@{}ll@{}}
\toprule
\begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
\(\mathbf{p}_{k+1}=-\mathbf{g}_{k+1}+\beta_{k}\,\mathbf{p}_{k}\)\strut
\end{minipage} & \begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
(10)\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
The scalar \(\beta_{k}\) is chosen so that \(\mathbf{p}_{k+1}\) becomes
the \((k+1)\)-th conjugate direction. There are various ways of
computing \(\beta_{k}\); each one generates a distinct nonlinear
conjugate gradient method with its own convergence properties and
numerical performance. Several formulae for computing \(\beta_{k}\) have
been proposed; the most notable ones are the following:\selectlanguage{english}
\begin{longtable}[]{@{}ll@{}}
\toprule
\begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
\(\beta_{k}^{FR}=\frac{\mathbf{g}_{k+1}^{T}\mathbf{g}_{k+1}}{\mathbf{g}_{k}^{T}\mathbf{g}_{k}}\) (Fletcher--Reeves)\strut
\end{minipage} & \begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
(11)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
\(\beta_{k}^{PR}=\frac{\mathbf{g}_{k+1}^{T}\left(\mathbf{g}_{k+1}-\mathbf{g}_{k}\right)}{\mathbf{g}_{k}^{T}\mathbf{g}_{k}}\) (Polak--Ribière)\strut
\end{minipage} & \begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
(12)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
\(\beta_{k}^{HS}=\frac{\mathbf{g}_{k+1}^{T}\left(\mathbf{g}_{k+1}-\mathbf{g}_{k}\right)}{\mathbf{p}_{k}^{T}\left(\mathbf{g}_{k+1}-\mathbf{g}_{k}\right)}\) (Hestenes--Stiefel)\strut
\end{minipage} & \begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
(13)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
\(\beta_{k}^{DY}=\frac{\mathbf{g}_{k+1}^{T}\mathbf{g}_{k+1}}{\mathbf{p}_{k}^{T}\left(\mathbf{g}_{k+1}-\mathbf{g}_{k}\right)}\) (Dai--Yuan)\strut
\end{minipage} & \begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
(14)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
{}\strut
\end{minipage} & \begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
(15)\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
{In this study, we propose a new hybrid CGBP technique combining the
good numerical performance of the Polak--Ribière (PR)~\hyperref[csl:24]{(Ghani et al., 2017)}
technique with the strong global convergence properties of the
Fletcher--Reeves (FR) \hyperref[csl:25]{(Wanto et al., 2017)} technique. The proposed method
is an adapted version of the Hestenes--Stiefel \hyperref[csl:26]{(Sharee, 2014)} and
Dai--Yuan \hyperref[csl:27]{(Dai et al., 2013)} techniques. The empirical outcomes of the
proposed algorithm show that this approach outperforms the
Polak--Ribière~\hyperref[csl:24]{(Ghani et al., 2017)}, Fletcher--Reeves
\hyperref[csl:25]{(Wanto et al., 2017)}, Hestenes--Stiefel \hyperref[csl:26]{(Sharee, 2014)}, and
Dai--Yuan \hyperref[csl:27]{(Dai et al., 2013)} techniques.}
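The classical \(\beta_{k}\) choices named above, and one standard way of hybridizing Polak--Ribière with Fletcher--Reeves, can be sketched as follows. The hybrid shown is a textbook PR/FR combination included only to illustrate the idea; it is NOT the authors' ERAS formula, and the test vectors are hypothetical:

```python
import numpy as np

# Classical conjugate-gradient beta formulas (g_new = g_{k+1}, g_old = g_k,
# p_old = p_k); p_old is unused by FR and PR but kept for a uniform signature.
def beta_fr(g_new, g_old, p_old):
    return (g_new @ g_new) / (g_old @ g_old)             # Fletcher--Reeves

def beta_pr(g_new, g_old, p_old):
    return (g_new @ (g_new - g_old)) / (g_old @ g_old)   # Polak--Ribiere

def beta_hs(g_new, g_old, p_old):
    y = g_new - g_old
    return (g_new @ y) / (p_old @ y)                     # Hestenes--Stiefel

def beta_dy(g_new, g_old, p_old):
    y = g_new - g_old
    return (g_new @ g_new) / (p_old @ y)                 # Dai--Yuan

def beta_hybrid(g_new, g_old, p_old):
    # A classical PR/FR hybrid (illustrative only; NOT the ERAS formula):
    # keep PR's efficiency while capping beta by FR's convergence-safe value.
    return max(0.0, min(beta_pr(g_new, g_old, p_old),
                        beta_fr(g_new, g_old, p_old)))

g_old = np.array([2.0, 0.0])   # hypothetical gradient/direction vectors
g_new = np.array([1.0, 1.0])
p_old = np.array([-2.0, 0.0])
```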
The rest of the proposed approach is similar to the other quasi-Newton
methods, which are based on Newton's optimization technique with the
Hessian matrix replaced by a Hessian approximation to avoid calculating
the exact Hessian matrix.
\section*{Experimental Procedures and
Outcomes}\label{experimental-procedures-and-outcomes}
This section describes the solution of the unconstrained optimization
problem of AGRS, using stochastic quasi-Newton techniques in the neural
network, with several data files taken with a 3" NaI detector and logged
by a 1024-channel Multi-Channel (MC) analyzer \hyperref[csl:17]{(Peter, 2018)}.
The data set lists the counts logged channel by channel in the
1024-channel MC, from which the attenuation coefficients are determined.
A lead absorber of varying thickness was placed between the source and
the detector to attenuate the gamma rays. The source used was Cs137,
which emits a gamma ray at an energy of 662 keV. These data allow us to
determine the values of the attenuation coefficients for this gamma ray.
All data were taken with the same source-detector geometry, and the
counts in each experiment were collected over 120 seconds. The
experimental data for the 662 keV gamma ray with lead absorbers can be
summarized as follows:
1) The ``no absorber'' data set contains spectral data for the Cs137
source with no absorber. The collection time was two minutes.

2) The file ``absorber C'' contains spectral data for the Cs137 source
with a lead absorber of mass thickness 2.651 g/cm\(^2\). The collection
time was two minutes.

3) The file ``absorber D'' contains spectral data for the Cs137 source
with a lead absorber of mass thickness 4.451 g/cm\(^2\). The collection
time was two minutes.

4) The file ``absorber E'' contains spectral data for the Cs137 source
with a lead absorber of mass thickness 7.194 g/cm\(^2\). The collection
time was two minutes.

5) The file ``absorber C and E'' contains spectral data for the Cs137
source with a lead absorber of mass thickness 9.845 g/cm\(^2\). The
collection time was two minutes.
The 662 keV photopeaks in all cases were seen at around channel 390.
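The attenuation-coefficient determination described above follows the exponential law \(I=I_{0}e^{-\mu x}\) with mass thickness \(x\). A sketch of the fit with HYPOTHETICAL counts (the real channel data are not reproduced here); only the absorber thicknesses are taken from the data description:

```python
import numpy as np

# Mass thicknesses of the lead absorbers listed above (g/cm^2)
x = np.array([0.0, 2.651, 4.451, 7.194, 9.845])

# HYPOTHETICAL photopeak counts, generated from I = I0 * exp(-mu * x)
# (not the measured spectra); mu = 0.105 cm^2/g is a typical mass
# attenuation coefficient of lead near 662 keV.
mu_true, I0 = 0.105, 1.0e5
counts = I0 * np.exp(-mu_true * x)

# Linear least-squares fit of ln(I) = ln(I0) - mu * x recovers mu
slope, intercept = np.polyfit(x, np.log(counts), 1)
mu_fit = -slope
```

With real spectra one would first integrate the photopeak around channel 390 for each file and then apply the same log-linear fit.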
The parameters of all examined training methods were set or tuned to
achieve the best validation; these parameters lead the algorithm to
minimize along the search direction. In our experiments, training stops
when any of the following conditions occurs:
1) The maximum number of epochs (repetitions) is reached.

2) The maximum time limit is exceeded.

3) Performance is minimized to the goal.

4) The performance gradient falls below the MSE stopping criterion.

5) Validation performance has increased more than the maximum allowed
number of times since the last time it decreased (when validation is
used).
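The five stopping conditions can be collected in a generic training loop; this is a sketch, with all thresholds and the `step_fn` interface assumed rather than taken from the paper:

```python
import time

def train(step_fn, max_epochs=1000, max_seconds=60.0,
          goal=1e-6, min_grad=1e-8, max_fail=6):
    """Generic loop implementing the five stopping conditions above.
    step_fn() runs one epoch and returns (error, grad_norm, val_error)."""
    start = time.time()
    best_val, fails = float("inf"), 0
    for epoch in range(max_epochs):              # 1) maximum number of epochs
        err, grad_norm, val_err = step_fn()
        if time.time() - start > max_seconds:    # 2) maximum time limit
            return "time", epoch
        if err <= goal:                          # 3) performance goal reached
            return "goal", epoch
        if grad_norm < min_grad:                 # 4) gradient below threshold
            return "min_grad", epoch
        if val_err < best_val:                   # 5) validation-failure count
            best_val, fails = val_err, 0
        else:
            fails += 1
            if fails > max_fail:
                return "val_fail", epoch
    return "max_epochs", max_epochs

# Dummy one-epoch step whose error halves each call (for illustration)
state = {"err": 1.0}
def step():
    state["err"] *= 0.5
    return state["err"], 1.0, state["err"]

reason, epoch = train(step)
```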
The energy levels of the MC were taken as the input units of the neural
network, which computes a series of transformations between the counted
gamma-ray photons and their related AGRS altitudes for various gamma
sources. Figs. {\ref{931769}}--3 present the statistical data for the
normalized MSE, validation, and convergence speed of the quasi-Newton
techniques compared with the proposed ERAS training algorithm, tested on
the ANN for AGRS. All the quasi-Newton training algorithms met the
stopping criteria, at different training times, in all training trials,
and none of them failed to converge. The proposed ERAS algorithm
requires less computation in each iteration, and more storage, than the
LMBP \hyperref[csl:20]{(Sapna, 2012)}, S-CGBP \hyperref[csl:21]{(Nayak, 2017)}, RBP
\hyperref[csl:22]{(Saputra et al., 2017)}, BFGS \hyperref[csl:23]{(Silaban et al., 2017)}, CGBP-PR
\hyperref[csl:24]{(Ghani et al., 2017)}, CGBP-FR \hyperref[csl:25]{(Wanto et al., 2017)}, CGBP-HS
\hyperref[csl:26]{(Sharee, 2014)}, and CGBP-DY \hyperref[csl:27]{(Dai et al., 2013)} methods, although it
generally converges in fewer iterations.\selectlanguage{english}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1.00\columnwidth]{figures/mass/mass}
\caption{{Normalized MSE, validation performance, and convergence speed
of the quasi-Newton training algorithms compared with the proposed ERAS
algorithm on the AGRS data.
{\label{931769}}%
}}
\end{center}
\end{figure}
\par\null
Using stochastic gradient descent procedures in AGRS, as considered in
this study, requires tolerating a large number of iterations in
high-dimensional problems such as spectrometry. In addition, the
impracticality of classical quasi-Newton approaches, inherited from the
excessive computational cost of extracting Hessian inverses, strongly
motivates the selection of an optimization algorithm such as the one
proposed in this paper, ERAS. The novel approach proposed in this study
is based on introducing a Hessian approximation matrix computed from
finite gradient differences.
Using Hessian-vector products, the ERAS technique controls the quality
of the curvature estimates, unlike the classical quasi-Newton-based
approaches such as the Polak--Ribière~\hyperref[csl:24]{(Ghani et al., 2017)}, Fletcher--Reeves
\hyperref[csl:25]{(Wanto et al., 2017)}, Hestenes--Stiefel \hyperref[csl:26]{(Sharee, 2014)}, and Dai--Yuan
\hyperref[csl:27]{(Dai et al., 2013)} techniques. The results showed that this training
approach is much more successful than the former quasi-Newton methods
LMBP \hyperref[csl:20]{(Sapna, 2012)}, S-CGBP \hyperref[csl:21]{(Nayak, 2017)}, RBP
\hyperref[csl:22]{(Saputra et al., 2017)}, BFGS \hyperref[csl:23]{(Silaban et al., 2017)}, CGBP-PR
\hyperref[csl:24]{(Ghani et al., 2017)}, CGBP-FR \hyperref[csl:25]{(Wanto et al., 2017)}, CGBP-HS
\hyperref[csl:26]{(Sharee, 2014)}, and CGBP-DY \hyperref[csl:27]{(Dai et al., 2013)}.
The evaluation results of the ANN for AGRS in all channels of the
$^{137}$Cs spectrum are also depicted in Figs. 4--6. These evaluations
were carried out for lead absorbers of different thicknesses used to
attenuate the 662 keV gamma photons. The evaluated results were compared
with published experimental data and are shown in various colors and
shapes for ``no absorber'' and for lead absorbers of mass thicknesses
2.651 g/cm$^2$, 4.451 g/cm$^2$, 7.194 g/cm$^2$ and 9.845 g/cm$^2$.
Fig. 4 shows that the evaluations of the proposed updating formula are
successful for ANNs at different altitudes and that the evaluated
function was well fitted when ERAS was used. The output of the radiation
instrument, AGRS, was improved through data reconstruction and analysis
using the proposed ERAS updating formula.
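For context on the absorber setup above, the transmitted fraction through a lead sheet of mass thickness $t$ (g/cm$^2$) follows the Beer--Lambert law, $I/I_0 = \exp(-(\mu/\rho)\,t)$. The sketch below evaluates it for the mass thicknesses quoted in the text; the mass attenuation coefficient of lead at 662 keV used here is a nominal literature value (an assumption, not a number from this paper).

```python
import math

# Nominal mass attenuation coefficient of lead at 662 keV (cm^2/g);
# an assumed textbook value, not taken from the paper.
MU_RHO_PB = 0.105

# Mass thicknesses of the lead absorbers quoted in the text (g/cm^2).
mass_thicknesses = [2.651, 4.451, 7.194, 9.845]

for t in mass_thicknesses:
    # Beer-Lambert transmitted fraction: I/I0 = exp(-(mu/rho) * t)
    transmission = math.exp(-MU_RHO_PB * t)
    print(f"{t:6.3f} g/cm^2 -> I/I0 = {transmission:.3f}")
```

The thickest absorber listed transmits only about a third of the 662 keV photons, which is why the evaluated spectra differ visibly between the "no absorber" case and the heaviest lead shielding.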
\section*{Conclusion}\label{conclusion}
In this study, several optimization methods were implemented for training
ANNs with AGRS data. Experimental results on an AGRS task showed that all
implemented algorithms can be used to train the proposed ANN. They also
indicated that the new ERAS training update for the ANN outperforms other
quasi-Newton algorithms such as
LMBP \hyperref[csl:20]{(Sapna, 2012)}, S-CGBP \hyperref[csl:21]{(Nayak, 2017)}, RBP
\hyperref[csl:22]{(Saputra et al., 2017)}, BFGS, CGBP-PR \hyperref[csl:24]{(Ghani et al., 2017)}, CGBP-FR
\hyperref[csl:25]{(Wanto et al., 2017)}, CGBP-HS \hyperref[csl:26]{(Sharee, 2014)} and CGBP-DY
\hyperref[csl:27]{(Dai et al., 2013)}. The new ERAS updating formula is well suited to
training networks with a large number of free parameters, such as those
used for AGRS. In general, when the weights of the ANN for AGRS are
adapted with the ERAS updating formula, both the accuracy and the
convergence speed of training improve. The proposed algorithm, the
CGBP-ERAS method, increases the quality of the curvature information
extracted from AGRS data while reducing the cost of each iteration.
\selectlanguage{english}
\FloatBarrier
\section*{References}\sloppy
\phantomsection
\label{csl:11}Amini, K., Rizi, A.G., 2010. {A new structured quasi-Newton algorithm using partial information on Hessian}. Journal of Computational and Applied Mathematics 234, 805–811. \url{https://doi.org/10.1016/j.cam.2010.01.044}
\phantomsection
\label{csl:28}Bernstein, E.R., 2015. {Neutral cluster mass spectrometry}. International Journal of Mass Spectrometry 377, 248–262. \url{https://doi.org/10.1016/j.ijms.2014.08.034}
\phantomsection
\label{csl:27}Dai, Z., Chen, X., Wen, F., 2013. {Comments on {\textquotedblleft}A hybrid conjugate gradient method based on a quadratic relaxation of the Dai-Yuan hybrid conjugate gradient parameter{\textquotedblright}}. Optimization 64, 1173–1175. \url{https://doi.org/10.1080/02331934.2013.840783}
\phantomsection
\label{csl:30}Deslattes, R.D., 2000. {High resolution gamma-ray spectroscopy: the first 85 years}. Journal of Research of the National Institute of Standards and Technology 105, 1. \url{https://doi.org/10.6028/jres.105.002}
\phantomsection
\label{csl:24}Ghani, N.H.A., Mamat, M., Rivaie, M., 2017. {A new family of Polak-Ribiere-Polyak conjugate gradient method with the strong-Wolfe line search}, in: AIP Conference Proceedings. Author(s). \url{https://doi.org/10.1063/1.4995892}
\phantomsection
\label{csl:5}Grasty, R.L., 1974. {Computer Processing of Airborne Gamma-ray Spectrometry Data}. Natural Resources Canada/{ESS}/Scientific and Technical Publishing Services. \url{https://doi.org/10.4095/102846}
\phantomsection
\label{csl:4}Gupta, T.K., 2012. {Radiation Exposure: Consequences Detection, and Measurements}, in: Radiation, Ionization, and Detection in Nuclear Medicine. Springer Berlin Heidelberg, pp. 59–134. \url{https://doi.org/10.1007/978-3-642-34076-5_2}
\phantomsection
\label{csl:19}Henrichs, K., 2011. {Application of {ISO} Standard 27048: dose assessment for the monitoring of workers for internal radiation exposure}. Radiation Protection Dosimetry 144, 43–46. \url{https://doi.org/10.1093/rpd/ncq568}
\phantomsection
\label{csl:14}Kluso{\v{n}}, J., 2010. {In-situ gamma spectrometry in environmental monitoring}. Applied Radiation and Isotopes 68, 529–535. \url{https://doi.org/10.1016/j.apradiso.2009.11.041}
\phantomsection
\label{csl:29}Kulisek, J.A., Wittman, R.S., Miller, E.A., Kernan, W.J., McCall, J.D., McConn, R.J., Schweppe, J.E., Seifert, C.E., Stave, S.C., Stewart, T.N., 2018. {A 3D simulation look-up library for real-time airborne gamma-ray spectroscopy}. Nuclear Instruments and Methods in Physics Research Section A: Accelerators Spectrometers, Detectors and Associated Equipment 879, 84–91. \url{https://doi.org/10.1016/j.nima.2017.10.030}
\phantomsection
\label{csl:15}Lavi, N., Alfassi, Z.B., 2004. {Development and application of Marinelli beaker standards for monitoring radioactivity in Dairy-Products by gamma-ray spectrometry}. Applied Radiation and Isotopes 61, 1437–1441. \url{https://doi.org/10.1016/j.apradiso.2004.05.001}
\phantomsection
\label{csl:3}Lutter, G., Hult, M., Marissens, G., Stroh, H., Tzika, F., 2018. {A gamma-ray spectrometry analysis software environment}. Applied Radiation and Isotopes 134, 200–204. \url{https://doi.org/10.1016/j.apradiso.2017.06.045}
\phantomsection
\label{csl:16}Manuel, J., del Rosario Martinez-Blanco, M., Viramontes, J.M.C., Rene, H., 2013. {Robust Design of Artificial Neural Networks Methodology in Neutron Spectrometry}, in: Artificial Neural Networks - Architectures and Applications. {InTech}. \url{https://doi.org/10.5772/51274}
\phantomsection
\label{csl:1}Marouli, M., Lutter, G., Pommé, S., Van, A.R., Hult, M., Richter, S., Eykens, R., Peyrés, V., García-Toraño, E., Dryák, P., Mazánová, M., Carconi, P., 2018. {Measurement of absolute γ-ray emission probabilities in the decay of ^{235}U.}. Appl Radiat Isot 132, 72–78.
\phantomsection
\label{csl:13}Moslemi, V., Ashoor, M., 2017. {Introducing a Novel Parallel Hole Collimator: The Theoretical and Monte Carlo Investigations}. {IEEE} Transactions on Nuclear Science 64, 2578–2587. \url{https://doi.org/10.1109/tns.2017.2736881}
\phantomsection
\label{csl:21}Nayak, S., 2017. {Scaled Conjugate Gradient Backpropagation Algorithm for Selection of Industrial Robots}. {INTERNATIONAL} {JOURNAL} {OF} {COMPUTER} {APPLICATION} 6. \url{https://doi.org/10.26808/rs.ca.i7v6.12}
\phantomsection
\label{csl:18}Pandey, P., Singh, I., 2016. {Improving Accuracy using different Data Mining Algorithms}. International Journal of Computer Applications 150, 10–13. \url{https://doi.org/10.5120/ijca2016911573}
\phantomsection
\label{csl:12}Pankratov, O., Kuvshinov, A., 2015. {General formalism for the efficient calculation of the Hessian matrix of {EM} data misfit and Hessian-vector products based upon adjoint sources approach}. Geophysical Journal International 200, 1449–1465. \url{https://doi.org/10.1093/gji/ggu476}
\phantomsection
\label{csl:17}Peter, S., 2018. {Gamma Detector Data Files}. \url{http://www.cpp.edu/~pbsiegel/nuclear.html}
\phantomsection
\label{csl:8}Price, C.J., 2018. {A direct search quasi-Newton method for nonsmooth unconstrained optimization}. {ANZIAM} Journal 59, 215. \url{https://doi.org/10.21914/anziamj.v59i0.10651}
\phantomsection
\label{csl:20}Sapna, S., 2012. {Backpropagation Learning Algorithm Based on Levenberg Marquardt Algorithm}, in: Computer Science {\&} Information Technology ( {CS} {\&} {IT} ). Academy {\&} Industry Research Collaboration Center ({AIRCC}). \url{https://doi.org/10.5121/csit.2012.2438}
\phantomsection
\label{csl:22}Saputra, W., Tulus, Zarlis, M., Sembiring, R.W., Hartama, D., 2017. {Analysis Resilient Algorithm on Artificial Neural Network Backpropagation}. Journal of Physics: Conference Series 930, 012035. \url{https://doi.org/10.1088/1742-6596/930/1/012035}
\phantomsection
\label{csl:26}Sharee, S.G., 2014. {New Homotopy Conjugate Gradient for Unconstrained Optimization using Hestenes- Stiefel and Conjugate Descent}. {IOSR} Journal of Engineering 4, 38–43. \url{https://doi.org/10.9790/3021-04573843}
\phantomsection
\label{csl:23}Silaban, H., Zarlis, M., Sawaluddin, 2017. {Analysis of Accuracy and Epoch on Back-propagation {BFGS} Quasi-Newton}. Journal of Physics: Conference Series 930, 012006. \url{https://doi.org/10.1088/1742-6596/930/1/012006}
\phantomsection
\label{csl:2}Sivalingam, G.N., Cryar, A., Williams, M.A., Gooptu, B., Thalassinos, K., 2018. {Deconvolution of ion mobility mass spectrometry arrival time distributions using a genetic algorithm approach: Application to $\upalpha$ 1 -antitrypsin peptide binding}. International Journal of Mass Spectrometry 426, 29–37. \url{https://doi.org/10.1016/j.ijms.2018.01.008}
\phantomsection
\label{csl:10}Stalter, R., Howarth, D., 2012. {Gamma Radiation}, in: Gamma Radiation. {InTech}. \url{https://doi.org/10.5772/34856}
\phantomsection
\label{csl:7}Tatsumi, K., Ibuki, T., Tanino, T., 2015. {Particle swarm optimization with stochastic selection of perturbation-based chaotic updating system}. Applied Mathematics and Computation 269, 904–929. \url{https://doi.org/10.1016/j.amc.2015.07.098}
\phantomsection
\label{csl:25}Wanto, A., Zarlis, M., Sawaluddin, Hartama, D., 2017. {Analysis of Artificial Neural Network Backpropagation Using Conjugate Gradient Fletcher Reeves In The Predicting Process}. Journal of Physics: Conference Series 930, 012018. \url{https://doi.org/10.1088/1742-6596/930/1/012018}
\phantomsection
\label{csl:9}You, Z., Xu, B., 2014. {Investigation of stochastic Hessian-Free optimization in Deep neural networks for speech recognition}, in: The 9th International Symposium on Chinese Spoken Language Processing. {IEEE}. \url{https://doi.org/10.1109/iscslp.2014.6936597}
\phantomsection
\label{csl:6}{\v{S}}vec, A., 2016. {Photon energy conversion efficiency in gamma-ray spectrometry}. Applied Radiation and Isotopes 107, 103–108. \url{https://doi.org/10.1016/j.apradiso.2015.09.015}
\end{document}