
Commit 47bc2b2

Author: O'Brien (committed)
QPR-13706: fix numerous typos
1 parent b373c57, commit 47bc2b2

64 files changed: 225 additions & 219 deletions


Docs/UserGuide/curve_configurations/commodity_curves.tex

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ \subsubsection{Commodity Curves}
 \item InterpolationMethod [Optional]: The variable on which the interpolation is performed. The allowable values are
 Linear, LogLinear, Cubic, Hermite, LinearFlat, LogLinearFlat, CubicFlat, HermiteFlat, BackwardFlat, ForwardFlat. This is different to yield curves above in that Flat versions
 of the standard methods are defined, with each of these if there is no Spot price than any extrapolation between $T_0$ and the
-first future price will be flat (i.e. the first future price will be copied back "Flat" to $T_0$).
+first future price will be flat (i.e. the first future price will be copied back ``Flat'' to $T_0$).
 If the element is omitted or left blank, then it defaults to \emph{Linear}.
 \item Conventions [Optional]: The conventions to use, if omited it is assumed that these quotes are Outright prices.
 \item Extrapolation [Optional]: Set to \emph{True} or \emph{False} to enable or disable extrapolation respectively. If
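
As an aside, the flat back-extrapolation described in this hunk can be sketched in a few lines. This is a hypothetical illustration of the documented behaviour (the first future price is copied back flat to $T_0$ when no Spot price is present), not ORE's implementation; the function name and the pillar data are invented.

```python
# Sketch of a "LinearFlat"-style lookup: linear between pillars,
# flat before the first pillar (no Spot price available).
import bisect

def linear_flat(pillar_times, prices, t):
    """Linear interpolation between future pillars; flat outside them."""
    if t <= pillar_times[0]:
        return prices[0]  # first future price copied back "Flat" to T0
    if t >= pillar_times[-1]:
        return prices[-1]
    i = bisect.bisect_right(pillar_times, t)
    t0, t1 = pillar_times[i - 1], pillar_times[i]
    p0, p1 = prices[i - 1], prices[i]
    return p0 + (p1 - p0) * (t - t0) / (t1 - t0)

print(linear_flat([0.5, 1.0, 2.0], [70.0, 72.0, 75.0], 0.25))  # 70.0 (flat)
print(linear_flat([0.5, 1.0, 2.0], [70.0, 72.0, 75.0], 1.5))   # 73.5 (linear)
```

The non-Flat methods would instead extrapolate the interpolation scheme itself below the first pillar.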

Docs/UserGuide/curve_configurations/commodity_volatilities.tex

Lines changed: 1 addition & 1 deletion
@@ -392,7 +392,7 @@ \subsubsection{Commodity Volatilities}
 
 Note that, similar to the procedure outlined above for the absolute strike surface, quote strings are created from the configuration to be looked up in the market. For the put deltas, quote strings of the form \lstinline!COMMODITY_OPTION/RATE_LNVOL/{N}/{C}/e_n/DEL/{T}/Put/d_m! are created. Here, \lstinline!d_m! are the \lstinline!PutDeltas! and \lstinline!{T}! is the delta type i.e.\ either \lstinline!Spot! or \lstinline!Fwd!. Similarly for the call deltas, quote strings of the form \lstinline!COMMODITY_OPTION/RATE_LNVOL/{N}/{C}/e_n/DEL/{T}/Call/d_j! are created where \lstinline!d_j! are the \lstinline!CallDeltas!. For ATM, quote strings of the form \lstinline!COMMODITY_OPTION/RATE_LNVOL/{N}/{C}/e_n/DEL/ATM/{A}[/DEL/{T}]! are created where \lstinline!{A}! is the \lstinline!AtmType! i.e.\ \lstinline!AtmSpot!, \lstinline!AtmFwd! or \lstinline!AtmDeltaNeutral! and \lstinline!{T}! is the optional delta type.
 
-Also, it is worth adding a note here on the interpolation for a delta based surface. Assume we want the volatility at time $t$ and absolute strike $s$ i.e. at the $(t, s)$ node. For the maturity time $t$, a delta "slice" i.e. a set of (delta, vol) pairs for that time $t$, is obtained by interpolating, or extrapolating, the variance in the time direction on each delta column. Then for each (delta, vol) pair at time $t$, an absolute strike value is deduced to give a slice at time $t$ in terms of absolute strike i.e. a set of (strike, vol) pairs at time $t$. This strike versus volatility curve is then interpolated, or extrapolated, to give the vol at the $(t, s)$.
+Also, it is worth adding a note here on the interpolation for a delta based surface. Assume we want the volatility at time $t$ and absolute strike $s$ i.e. at the $(t, s)$ node. For the maturity time $t$, a delta ``slice'' i.e. a set of (delta, vol) pairs for that time $t$, is obtained by interpolating, or extrapolating, the variance in the time direction on each delta column. Then for each (delta, vol) pair at time $t$, an absolute strike value is deduced to give a slice at time $t$ in terms of absolute strike i.e. a set of (strike, vol) pairs at time $t$. This strike versus volatility curve is then interpolated, or extrapolated, to give the vol at the $(t, s)$.
 
 Listing \ref{lst:comm_vol_apo_surface_config} outlines the configuration for the APO volatility surface. This currently only supports a \lstinline!QuoteType! of \lstinline!ImpliedVolatility! and \lstinline!VolatilityType! must be \lstinline!Lognormal!. This configuration takes a base commodity volatility surface and builds a surface that can be queried for volatilities to price APOs directly i.e.\ using the volatility directly in a Black 76 formula along with the average future price. It uses the approach described in the Section entitled \textit{Commodity Average Price Option - Future Settlement Prices} in the Product Catalogue to go from future option volatilities to APO volatilities.
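
The quote-string patterns quoted in this hunk can be assembled mechanically. The pattern itself comes from the text above; the commodity name, currency, expiry and delta values below are invented placeholders, and this is a sketch, not ORE's quote construction code.

```python
# Assemble delta-based commodity option vol quote strings following the
# patterns COMMODITY_OPTION/RATE_LNVOL/{N}/{C}/e_n/DEL/{T}/{Put|Call}/d and
# COMMODITY_OPTION/RATE_LNVOL/{N}/{C}/e_n/DEL/ATM/{A}[/DEL/{T}].
def delta_quote(name, ccy, expiry, delta_type, call_put, delta):
    return (f"COMMODITY_OPTION/RATE_LNVOL/{name}/{ccy}/{expiry}"
            f"/DEL/{delta_type}/{call_put}/{delta}")

def atm_quote(name, ccy, expiry, atm_type, delta_type=None):
    s = f"COMMODITY_OPTION/RATE_LNVOL/{name}/{ccy}/{expiry}/DEL/ATM/{atm_type}"
    return s + (f"/DEL/{delta_type}" if delta_type else "")

print(delta_quote("NYMEX:CL", "USD", "2025-06-17", "Spot", "Put", "0.25"))
print(atm_quote("NYMEX:CL", "USD", "2025-06-17", "AtmDeltaNeutral", "Fwd"))
```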

Docs/UserGuide/curve_configurations/default_curves_from_cds.tex

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ \subsubsection{Default Curves from CDS}
 
 \begin{itemize}
 \item
-\lstinline!CurveId!: Unique identifier for the bootstrapped default curve. For index term curves a suffix \lstinline!_5Y! should be appended to the name indicating the index term, since this is the prefered name looked up by index cds and index cds option pricers. If such a curve is not found, the pricers will fall back to the specified credit curve id without suffix, i.e. following this naming convention is not mandatory, but recommended.
+\lstinline!CurveId!: Unique identifier for the bootstrapped default curve. For index term curves a suffix \lstinline!_5Y! should be appended to the name indicating the index term, since this is the preferred name looked up by index cds and index cds option pricers. If such a curve is not found, the pricers will fall back to the specified credit curve id without suffix, i.e. following this naming convention is not mandatory, but recommended.
 
 \item \lstinline!CurveDescription! [Optional]:
 A description of the default curve. It is for information only and may be left blank.

Docs/UserGuide/curve_configurations/yieldcurves.tex

Lines changed: 6 additions & 6 deletions
@@ -28,7 +28,7 @@ \subsubsection{Yield Curves}
 \end{listing}
 
 The meaning of each of the top level elements in Listing \ref{lst:top_level_yc} is given below. If an element is labelled
-as 'Optional', then it may be excluded or included and left blank.
+as `Optional', then it may be excluded or included and left blank.
 \begin{itemize}
 \item CurveId: Unique identifier for the yield curve.
 \item CurveDescription: A description of the yield curve. This field may be left blank.
@@ -262,7 +262,7 @@ \subsubsection*{Average OIS Segment}
 IRS quote and an OIS-LIBOR basis swap spread quote. The IDs of these two quotes are stored in the
 \lstinline!CompositeQuote! node. The \lstinline!RateQuote! node holds the ID of the vanilla IRS quote and the
 \lstinline!SpreadQuote! node holds the ID of the OIS-LIBOR basis swap spread quote. The \lstinline!PillarChoice! node
-determines the bootstrap pillars that are used (MaturityDate, LastRelevantDate, if not given 'LastRelevantDate' is the
+determines the bootstrap pillars that are used (MaturityDate, LastRelevantDate, if not given `LastRelevantDate' is the
 default value).
 
 For the \lstinline!Priority! and \lstinline!MinDistance! nodes see the explanation under ``Simple Segment''.
@@ -317,7 +317,7 @@ \subsubsection*{Tenor Basis Segment}
 values are short for receive, and long for pay. These are optional nodes. If they are left blank or omitted, then the projection
 curve is assumed to equal the curve being bootstrapped i.e.\ the current CurveId. However, at least one of the nodes
 needs to be populated to allow the bootstrap to proceed. The \lstinline!PillarChoice! node determines the bootstrap pillars
-that are used (MaturityDate, LastRelevantDate, if not given 'LastRelevantDate' is the default value).
+that are used (MaturityDate, LastRelevantDate, if not given `LastRelevantDate' is the default value).
 
 For the \lstinline!Priority! and \lstinline!MinDistance! nodes see the explanation under ``Simple Segment''.
 
@@ -351,7 +351,7 @@ \subsubsection*{Cross Currency Segment}
 in the other currency i.e.\ the currency in the currency pair that is not equal to the currency in Listing
 \ref{lst:top_level_yc}. The \lstinline!SpotRate! node holds the ID of a spot FX quote for the currency pair that is
 looked up in the {\tt market.txt} file. The \lstinline!PillarChoice! node determines the bootstrap pillars that are used
-(MaturityDate, LastRelevantDate, if not given 'LastRelevantDate' is the default value).
+(MaturityDate, LastRelevantDate, if not given `LastRelevantDate' is the default value).
 
 \begin{listing}[H]
 %\hrule\medskip
@@ -712,9 +712,9 @@ \subsubsection*{Ibor Fallback Segment}
 \end{listing}
 
 \subsubsection*{Discount Ratio Segment}
-\label{sec:dicount_ratio_segment}
+\label{sec:discount_ratio_segment}
 
-When the node name is \lstinline!DiscountRatio!, the \lstinline!Type! node has the only allowable value \emph{Dicount
+When the node name is \lstinline!DiscountRatio!, the \lstinline!Type! node has the only allowable value \emph{Discount
 Ratio} and the node has the structure shown in Listing \ref{lst:discount_ratio_segment}. This segment is used to build a
 curve with discount factors $P(0,t)$ from three input curves with discount factors $P_b(0,t)$, $P_n(0,t)$ and $P_d(0,t)$
 (``base'', ``numerator'', ``denominator'' curves) following the equation
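
The discount ratio construction in this hunk can be sketched numerically. The equation itself is elided from the diff; the sketch below assumes it is $P(0,t) = P_b(0,t)\,P_n(0,t)/P_d(0,t)$, consistent with the base/numerator/denominator naming, but that assumption and the flat example rates are ours, not quoted from the document.

```python
# Illustrative discount ratio curve, assuming P(0,t) = P_b * P_n / P_d.
import math

def flat_curve(r):
    """Discount curve with a flat continuously compounded zero rate r."""
    return lambda t: math.exp(-r * t)

p_base, p_num, p_den = flat_curve(0.020), flat_curve(0.030), flat_curve(0.025)

def p_ratio(t):
    return p_base(t) * p_num(t) / p_den(t)

# with flat rates the ratio curve has flat rate 0.020 + 0.030 - 0.025 = 0.025
print(p_ratio(10.0))  # equals math.exp(-0.025 * 10.0)
```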

Docs/UserGuide/examples/examples.tex

Lines changed: 8 additions & 8 deletions
@@ -141,7 +141,7 @@ \section{Examples}\label{sec:examples}
 {\tt exposure\_trade\_*.csv} & Trade exposure evolution reports\\
 {\tt exposure\_nettingset\_*.csv} & Netting set exposure evolution reports\\
 {\tt rawcube.csv} & NPV cube in readable text format \\
-{\tt netcube.csv} & NPV cube after netting and colateral, in readable text format \\
+{\tt netcube.csv} & NPV cube after netting and collateral, in readable text format \\
 {\tt *.csv.gz} & Intermediate storage of NPV cube and scenario data \\
 {\tt *.pdf} & Exposure graphics produced by the python script {\tt run.py} after ORE completed\\
 \hline
@@ -473,7 +473,7 @@ \subsubsection{Sensitivity Analysis}
 </Analytic>
 \end{minted}
 
-The usual ``raw'' sensitivity analysis is performed by bumping the "raw" rates (zero rates, hazard rates, inflation zero rates, optionlet vols).
+The usual ``raw'' sensitivity analysis is performed by bumping the ``raw'' rates (zero rates, hazard rates, inflation zero rates, optionlet vols).
 This is followed by the Jacobi transformation that turns ``raw'' sensitivities into sensitivities in the par domain (Deposit/FRA/Swap rates, FX Forwards, CC Basis Swap spreads,
 CDS spreads, ZC and YOY Inflation Swap rates, flat Cap/Floor vols). The conversion is controlled by the additional {\tt ParConversion} data blocks
 in {\tt sensitivity.xml} where the assumed par instruments and corresponding conventions are coded, as shown below for three types of discount curves.
@@ -670,7 +670,7 @@ \subsubsection{Stand-alone Par Conversion Utility}
 \label{example:marketrisk_parconversionl}
 
 This example ({\tt python run\_parconversion.py}) demonstrates ORE's capability to convert external computed zero sensitivities (e.g Zero rates) to par sensitivities (e.g. to Swap rates)
-that is implemented by means of a Jacobi transformation of the "raw" sensitivities (e.g. to zero rates), see a sketch of the
+that is implemented by means of a Jacobi transformation of the ``raw'' sensitivities (e.g. to zero rates), see a sketch of the
 methodology in \cite{methods} and section \ref{sec:sensitivity} for configuration details.
 
 To perform a par sensitivity analysis, the following required change in {\tt ore.xml} is required
@@ -716,7 +716,7 @@ \subsubsection{Stand-alone Par Conversion Utility}
 \item baseNpvColumn: The base npv of the trade / nettingset / portfolio in currency.
 \end{enumerate}
 
-This is followed by the Jacobi transformation that turns "raw" sensitivities into sensitivities in the par domain (Deposit/FRA/Swap rates, FX Forwards, CC Basis Swap spreads,
+This is followed by the Jacobi transformation that turns ``raw'' sensitivities into sensitivities in the par domain (Deposit/FRA/Swap rates, FX Forwards, CC Basis Swap spreads,
 CDS spreads, ZC and YOY Inflation Swap rates, flat Cap/Floor vols). The conversion is controlled by the additional {\tt ParConversion} data blocks
 in {\tt sensitivity.xml} where the assumed par instruments and corresponding conventions are coded, as shown below for three types of discount curves.
 
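
The zero-to-par conversion mentioned repeatedly in these hunks rests on a Jacobian (the ``Jacobi transformation'') of par-instrument fair rates with respect to zero rates. The following is a minimal numerical sketch of that chain-rule step with an invented 2x2 Jacobian; it is not ORE's par conversion code, and the sensitivity figures are made up.

```python
# Zero-to-par sensitivity conversion sketch. With par rates p = f(z) and
# Jacobian C[i, j] = d(par rate i)/d(zero rate j), the chain rule gives
# dNPV/dp = (C^T)^{-1} dNPV/dz, so we solve C^T v_par = v_zero.
import numpy as np

C = np.array([[1.0, 0.0],    # 1Y par rate moves 1:1 with the 1Y zero rate
              [0.5, 0.6]])   # 2Y par rate depends on both zero rates
v_zero = np.array([100.0, 50.0])  # NPV sensitivities to the zero rates

v_par = np.linalg.solve(C.T, v_zero)
print(v_par)  # approximately [58.333, 83.333]
```

Applying the chain rule in reverse, `C.T @ v_par` recovers `v_zero`, which is the consistency check behind the transformation.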
@@ -2208,15 +2208,15 @@ \subsection{American Monte Carlo}\label{example:amc}
 The cases in this section, folder {\tt AmericanMonteCarlo}, demonstrate how to use American Monte Carlo Simulation (AMC)
 to generate exposures in ORE.
 \begin{itemize}
-\item We start with benchmarking against "classic" exposure simulation, i.e.
+\item We start with benchmarking against ``classic'' exposure simulation, i.e.
 we run both AMC and classic simulation on a small IR/FX portfolio, almost vanilla, that
 consists of a Bermudan Swaption, Single and Cross Currency Swaps, FX Swap and FX Option,
 and we compare the resulting AMC vs. classic exposures. \\
 Run with {\tt python run\_benchmark.py}
 \item Scripted Bermudan Swaption and LPI Swap: This case shows that the scripted trade
 framework works with AMC too, demonstrated here with a scripted Bermudan Swaption and an LPI Swap. \\
 Run with: {\tt python run\_scriptedberm.py}
-\item FX TaRF: FxTaRF product is implemented using the scripted trade framework "under the hood"
+\item FX TaRF: FxTaRF product is implemented using the scripted trade framework ``under the hood''
 with the payoff script embedded into C++, so that it is neither explicit in the trade XML nor
 in the script library.\\
 Run with: {\tt python run\_fxtarf.py}
@@ -2229,7 +2229,7 @@ \subsubsection{Benchmarking AMC vs Classic Simulation}
 
 This example demonstrates how to use American Monte Carlo simulation (AMC) to generate exposures in ORE.
 For a sketch of the methodology and comments on its implementation in ORE see \cite{methods}.
-Moreover we discuss the essential configuration chnages for applying AMC.
+Moreover we discuss the essential configuration changes for applying AMC.
 
 Calling
@@ -2860,7 +2860,7 @@ \subsubsection{NPV Sensitivities with AAD and GPUs}
 \item as above but using the Computation Graph, see {\tt UseCG=true} in {\tt pricingengine\_cg.xml}, which
 is the basis for the following two approaches ((14 sec on Apple M2 Max))
 \item using AAD, see {\tt pricingengine\_ad.xml} ((2.2 sec on Apple M2 Max))
-\item using the external device if available, see {\tt pricingengine\_gpu.xml} (2.3 sec on Apple M2 Max with "OpenCL/Apple/Apple M2 Max" device)
+\item using the external device if available, see {\tt pricingengine\_gpu.xml} (2.3 sec on Apple M2 Max with ``OpenCL/Apple/Apple M2 Max'' device)
 \end{itemize}
 to compare sensitivities and performance. In the latter case we have set the external device in
 {\tt pricingengine\_gpu.xml} to ``BasicCpu/Default/Default'' which mimics an external device on the CPU.

Docs/UserGuide/examples/legacyexamples.tex

Lines changed: 8 additions & 8 deletions
@@ -1461,7 +1461,7 @@ \subsection{Multifactor Hull-White Scenario Generation}% Example 37
 \end{table}
 
 again matching the input principal components quite well. The second eigenvector is the negative of the input vector
-here (the principal compoennt analysis can not distinguish these of course).
+here (the principal component analysis can not distinguish these of course).
 
 The example also produces a plot comparing the input eigenvectors and the model implied eigenvectors as shown in figure \ref{fig:ex37}.
@@ -1809,7 +1809,7 @@ \subsubsection*{Multi Leg Options / MC pricing engine}
 \item \verb+IrCalibrationStrategy+ can be \verb+None+, \verb+CoterminalATM+, \verb+UnderlyingATM+
 \item \verb+FXCalibration+ can be \verb+None+ or \verb+Bootstrap+
 \item \verb+ExtrapolateFxVolatility+ can be \verb+true+ or \verb+false+; if false, no calibration instruments are used
-that require extrapolation of the market fx volatilty surface in option expiry direction
+that require extrapolation of the market fx volatility surface in option expiry direction
 \item \verb+Corr_Key1_Key2+: These entries describe the cross asset model correlations to be used; the syntax for
 \verb+Key1+ and \verb+Key2+ is the same as in the simulation configuration for the cross asset model
 \end{enumerate}
@@ -1820,7 +1820,7 @@ \subsection{Par Sensitivity Analysis}% Example 40
 %--------------------------------------------------------------------
 
 The example in folder {\tt Examples/Example\_40} demonstrates ORE's par sensitivity analysis (e.g. to Swap rates)
-that is implemented by means of a Jacobi transformation of the "raw" sensitivities (e.g. to zero rates), see a sketch of the
+that is implemented by means of a Jacobi transformation of the ``raw'' sensitivities (e.g. to zero rates), see a sketch of the
 methodology in \cite{methods} and section \ref{sec:sensitivity} for configuration details.
 
 To perform a par sensitivity analysis, the following required change in {\tt ore.xml} is required
@@ -1851,8 +1851,8 @@ \subsection{Par Sensitivity Analysis}% Example 40
 \item CapFloor volatilities
 \end{itemize}
 
-The usual sensitivity analysis is performed by bumping the "raw" rates (zero rates, hazard rates, inflation zero rates, optionlet vols).
-This is followed by the Jacobi transformation that turns "raw" sensitivities into sensitivities in the par domain (Deposit/FRA/Swap rates, FX Forwards, CC Basis Swap spreads,
+The usual sensitivity analysis is performed by bumping the ``raw'' rates (zero rates, hazard rates, inflation zero rates, optionlet vols).
+This is followed by the Jacobi transformation that turns ``raw'' sensitivities into sensitivities in the par domain (Deposit/FRA/Swap rates, FX Forwards, CC Basis Swap spreads,
 CDS spreads, ZC and YOY Inflation Swap rates, flat Cap/Floor vols). The conversion is controlled by the additional {\tt ParConversion} data blocks
 in {\tt sensitivity.xml} where the assumed par instruments and corresponding conventions are coded, as shown below for three types of discount curves.
@@ -2051,7 +2051,7 @@ \subsection{Initial Margin: ISDA SIMM and IM Schedule}% Example 44
 
 \subsubsection*{IM Schedule}
 
-As an additonal case in this example we demonstrate how to use the IM Schedule method to compute initial margin.
+As an additional case in this example we demonstrate how to use the IM Schedule method to compute initial margin.
 The related input file is {\tt Input/ore\_schedule.xml}. It is also run when calling {\tt python run.py}, and results are written to folder
 {\tt Output/IM\_SCHEDULE}.
 The basic input is provided in CRIF file format where ORE expects two lines per trade, one with RiskClass = PV and one with RiskClass = Notional,
@@ -2169,7 +2169,7 @@ \subsection{Zero to Par sensitivity Conversion Analysis}% Example 50
 %--------------------------------------------------------------------
 
 The example in folder {\tt Examples/Example\_50} demonstrates ORE's capability to convert external computed zero sensitivities (e.g Zero rates) to par sensitivities (e.g. to Swap rates)
-that is implemented by means of a Jacobi transformation of the "raw" sensitivities (e.g. to zero rates), see a sketch of the
+that is implemented by means of a Jacobi transformation of the ``raw'' sensitivities (e.g. to zero rates), see a sketch of the
 methodology in \cite{methods} and section \ref{sec:sensitivity} for configuration details.
 
 To perform a par sensitivity analysis, the following required change in {\tt ore.xml} is required
@@ -2215,7 +2215,7 @@ \subsection{Zero to Par sensitivity Conversion Analysis}% Example 50
 \item baseNpvColumn: The base npv of the trade / nettingset / portfolio in currency.
 \end{enumerate}
 
-This is followed by the Jacobi transformation that turns "raw" sensitivities into sensitivities in the par domain (Deposit/FRA/Swap rates, FX Forwards, CC Basis Swap spreads,
+This is followed by the Jacobi transformation that turns ``raw'' sensitivities into sensitivities in the par domain (Deposit/FRA/Swap rates, FX Forwards, CC Basis Swap spreads,
 CDS spreads, ZC and YOY Inflation Swap rates, flat Cap/Floor vols). The conversion is controlled by the additional {\tt ParConversion} data blocks
 in {\tt sensitivity.xml} where the assumed par instruments and corresponding conventions are coded, as shown below for three types of discount curves.
