on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing


Region mapping is one of the variance reduction methods in the classical MC scheme. For instance,
the well-known antithetic variates method, which is widely used in applications, can be
interpreted as the simplest integration formula obtained with a symmetric mapping of each coordinate.
The theoretical basis can, however, be extended significantly once the machinery of
random quadratures is brought in. Such formulas have the form $S_n\left[f\right]=\sum_{i=1}^n A_i(x_1,\ldots,x_n)f(x_i)$,
where the nodes $x_i$, $i \in 1:n$, are taken randomly, the $A_i$, $i \in 1:n$, are given functions, and the formula satisfies the
unbiasedness property $E(S_n)=J=\int_{\mathfrak{D}} \varphi_1(x) f(x) \mu(dx)$.
Random quadratures with one free node are of particular interest. Their construction is described by the following
theorem (Granovskii, Ermakov).
Theorem 1.
Let $(\mathfrak{D}, \mathfrak{A}, \mu)$ be a measure space and let $S_n\left[f\right]$
be a quadrature formula for the integral $J$, exact for the orthonormal functions $\varphi_1,\ldots,\varphi_N$.
Let all of the nodes depend on the first one, $x_i=T_{i-1}(x_1)$, where $x_1$ is sampled with density $u(x)$. The mappings $T_i$ have
Jacobians $|\Phi_i|$ and form a cyclic group with $n$ elements: $T_i=T^i$, $i \in 0:n-1$, where $T^n=T^0$. Then $S_n$ is an unbiased estimator of
$J$ if and only if the $T_i$ satisfy the conditions
\begin{eqnarray*}
\sum_{i=1}^n \varphi_q(x_i) \varphi_1(x_i) |\Phi_i| = 0 \quad (\mathrm{mod}\ \mu),
\end{eqnarray*}
where $q=2,\ldots,N$. In that case the coefficients $A_i$ and the density $u(x)$ are determined uniquely.
The work continues the theoretical study of this problem and carries the same technique over
to the setting of QMC.
Theorem 2.
Let $|\mathfrak{D}|=1$ and consider the quadrature $\int_{\mathfrak{D}} f(x)\, dx \approx \frac{1}{mn} \sum_{j=0}^{m-1} \sum_{i=0}^{n-1} f(T^i(x_j))$,
where the $T_i$ are defined as above, $|\Phi_i|=1$ for $i \in 0:n-1$, and the points $x_j$, $j \in 0:m-1$, are low-discrepancy points. Then the upper bound for the error converges faster
than in classical QMC, owing to the reduction of the Hardy-Krause variation. The formula is exact for all functions satisfying the condition
$\frac{1}{n} \sum_{i=0}^{n-1} f(T^i(x_j)) = \mathrm{const}$.
The proof is based on the first theorem, the Koksma-Hlawka inequality, and properties of low-discrepancy point sets.
The theorem can be generalized to more complicated regions and quadratures.
The work also presents numerical results for the proposed idea. Here $\mathfrak{D}$ is taken to be a $d$-dimensional cube or sphere, $T$ is an orthogonal mapping,
and the low-discrepancy sets are multidimensional Halton or Sobol points. A comparative analysis of the classical MC and QMC methods and the proposed algorithm is given, and further
modifications (e.g. the use of scrambled Sobol sequences) are discussed. The behavior of the discrepancy under the mapping procedures is studied. All computations are performed in the R language.
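
A minimal sketch of the simplest case, the antithetic mapping $T(x)=1-x$ (a cyclic group of order $n=2$ with unit Jacobian), may help fix ideas. The function names and test integrand are illustrative; the talk's own computations are in R, not Python.

```python
import random

def plain_mc(f, n, rng):
    """Crude Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

def antithetic_mc(f, n_pairs, rng):
    """Two-node symmetric-mapping quadrature with T(x) = 1 - x: a cyclic
    group of order n = 2 with unit Jacobian, i.e. antithetic variates."""
    total = 0.0
    for _ in range(n_pairs):
        x = rng.random()
        total += 0.5 * (f(x) + f(1.0 - x))
    return total / n_pairs

f = lambda x: x * x            # exact integral over [0, 1] is 1/3
est = antithetic_mc(f, 50000, random.Random(0))
```

Both estimators are unbiased; the antithetic one has smaller variance whenever $f(x)$ and $f(1-x)$ are negatively correlated.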

We look for optimal estimates for the error $\mathbb{E}|g(X)-g(\hat X)|^p$ in
terms of moments of $X-\hat X$, where $X$ and $\hat X$ are random variables, and
the function $g$ satisfies minimal regularity assumptions. Under the assumption
that $X$ has a bounded density, we find estimates for indicator
functions $\chi_{[K,\infty)}$ and for functions of bounded variation [1]. Then
we extend the results to functions of generalized bounded variation [2], which
allows the variation to be polynomially bounded.
We apply the results to the approximation of stochastic differential equations,
and show the corresponding strong error estimates if $X$ is the solution of a
one-dimensional SDE at the endpoint $T$, and $\hat X$ is the Euler approximation
of $X$.
These results are needed to analyze the computational complexity of the
multilevel Monte Carlo method [3], which gives an efficient way to compute expected
payoffs of options. A particular example is the payoff function of the digital
option, $\chi_{[K,\infty)}$.
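
For background, the multilevel Monte Carlo estimator of [3] can be sketched for a geometric Brownian motion with a coupled Euler discretization. All parameter values and the sample allocation below are illustrative, not taken from the talk.

```python
import math, random

def euler_pair(T, steps_f, x0, mu, sigma, rng):
    """One fine Euler path (steps_f steps) and a coupled coarse path
    (steps_f // 2 steps) driven by the same Brownian increments."""
    hf = T / steps_f
    xf = xc = x0
    for _ in range(steps_f // 2):
        dw1 = rng.gauss(0.0, math.sqrt(hf))
        dw2 = rng.gauss(0.0, math.sqrt(hf))
        xf += mu * xf * hf + sigma * xf * dw1
        xf += mu * xf * hf + sigma * xf * dw2
        xc += mu * xc * (2.0 * hf) + sigma * xc * (dw1 + dw2)
    return xf, xc

def mlmc(payoff, levels, n_per_level, rng, x0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Telescoping sum E[P_0] + sum_l E[P_l - P_{l-1}] as in [3], with
    2^(l+1) Euler steps on level l (illustrative sample allocation)."""
    est = 0.0
    for l in range(levels):
        s = 0.0
        for _ in range(n_per_level):
            xf, xc = euler_pair(T, 2 ** (l + 1), x0, mu, sigma, rng)
            s += payoff(xf) if l == 0 else payoff(xf) - payoff(xc)
        est += s / n_per_level
    return est

# illustrative: undiscounted European call value E[(S_1 - 1)^+], roughly 0.110
price = mlmc(lambda x: max(x - 1.0, 0.0), 5, 20000, random.Random(0))
```

For a digital payoff $\chi_{[K,\infty)}$ the level differences have much larger variance, which is exactly where the strong error estimates above enter the complexity analysis.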

[1] Rainer Avikainen, On irregular functionals of SDEs and the Euler scheme,
*Finance Stoch.* 13 (2009), no. 3, 381--401.

[2] Rainer Avikainen, On Generalized Bounded Variation and Approximation of SDEs. Ph.D. thesis, University of Jyv\"askyl\"a, 2009.

[3] Michael B. Giles, Multilevel Monte Carlo path simulation,
*Oper. Res.* 56 (2008), no. 3, 607--617.

In this talk, we show how to construct polynomial lattice rules, which have, in some sense, small gain coefficients, using a component-by-component approach. The gain coefficients, as introduced by Owen, indicate to what degree the method under consideration improves upon Monte Carlo. We show that the variance of an estimator based on a scrambled polynomial lattice rule constructed component-by-component decays at a rate of $N^{-(2 \alpha +1) + \delta}$, for all $\delta>0$, assuming that the function under consideration is of bounded variation of order $\alpha$ and where $N$ denotes the number of quadrature points. It is also established that these rules are almost optimal for the function space considered in this talk. Finally, we present numerical results comparing scrambled polynomial lattice rules and scrambled digital nets.

A new algorithm for generating uniform pseudorandom numbers
is proposed, based on a generalization of the method of [1].
The proofs of new propositions have been carried out in [2].
The propositions guarantee the absence of correlations and
the equidistribution of probabilities for all generated subsequences
of length $\leq t$, where the mesh size is $g=p\cdot 2^t$ and $p$ is a prime.
At larger distances the deviations from equidistribution are also estimated.
A particular implementation with period $2.8\cdot 10^{17}$ is proposed,
and the possibility of efficient implementations with larger periods is discussed.
The speed of the generator is comparable to that of other modern generators.
Extensive statistical testing using available test packages demonstrates
excellent results. The theory of the generator, including the proof of periodic
properties and the proof of uniform distribution and statistical independence
of numbers on lengths up to $t$, is presented.
We also discuss [3] the use of the SSE2 instruction set, which is supported by all modern Intel and AMD CPUs. It allows us to substantially improve the performance of our generators, as well as that of other modern and reliable generators [4,5,6].

[1] L. Barash, L.N. Shchur, Phys. Rev. E **73**, 036701 (2006).

[2] L.Yu. Barash, unpublished.

[3] L.Yu. Barash and L.N. Shchur, e-print arXiv:1006.1235.

[4] M. Matsumoto and T. Nishimura, ACM Trans. on Mod. and Comp. Sim., **8** (1998) 3.

[5] P. L'Ecuyer, Oper. Res., **47** (1999) 159.

[6] P. L'Ecuyer, Math. of Comp., **68** (1999) 261.

It is known that many standard sets which have optimal star-discrepancy fail to meet the optimal estimates for the $L^2$ discrepancy and have to be modified to achieve the best $L^2$ bounds. The remedies include, e.g., random shifts, symmetrization, or ``digit-scrambling''. We present several results in this direction; in particular, a ``de-randomization'' of Roth's random shift construction (i.e. a deterministic shift of the Van der Corput set, which yields $L^2$ discrepancy of the order $\sqrt{\log N}$) and an explicit
formula and estimates for the $L^2$ discrepancy of the symmetrized Fibonacci set.
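
For reference, the Van der Corput set in base 2 and its symmetrization (adjoining the reflected points $1-x$) can be generated as follows. This is the standard construction only; the deterministic shift discussed in the talk is not reproduced here.

```python
def van_der_corput(n, base=2):
    """First n points of the Van der Corput sequence: x_k is the radical
    inverse of k, i.e. the base-b digits of k mirrored about the point."""
    pts = []
    for k in range(n):
        x, f, i = 0.0, 1.0 / base, k
        while i > 0:
            x += (i % base) * f
            i //= base
            f /= base
        pts.append(x)
    return pts

def symmetrized(pts):
    """Davenport-type symmetrization: adjoin the reflected point 1 - x
    for every point x, which improves the L^2 discrepancy order."""
    return pts + [1.0 - x for x in pts]

vdc = van_der_corput(8)   # 0, 1/2, 1/4, 3/4, 1/8, ...
sym = symmetrized(vdc)    # 16 points, closed under x -> 1 - x
```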

We shall first review various theoretical results on the discrete-time approximation of decoupled forward-backward SDEs. We will then focus on the Malliavin calculus and regression-based approaches for their numerical resolution and simulation. We shall in particular explain how both techniques can be exploited with improved complexity and efficiency. Numerical tests and comparisons, including the quantization approach, will be presented.

We investigate the extension of the multilevel Monte Carlo path simulation method [1,2] to the calculation of first and second order Greeks.
The pathwise sensitivity analysis differentiates the path evolution and effectively reduces the smoothness of the payoff by one order.
This leads to new challenges: computing a first order Greek for a call option is similar to pricing a digital option;
a naive implementation of Greeks for digital options fails because pathwise sensitivity is inapplicable to discontinuous payoffs.
These challenges can be addressed in three different ways:
payoff smoothing using conditional expectations of the payoff before maturity [3] ;
an approximation of the above technique using path splitting for the final timestep [4];
the use of a hybrid combination of pathwise sensitivity and the likelihood ratio method [5].
We will discuss the strengths and weaknesses of these alternatives in a multilevel Monte Carlo setting.
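
The first challenge can be illustrated with exact lognormal sampling: the pathwise delta of a call contains the discontinuous factor $1\{S_T > K\}$, while for a digital payoff pathwise sensitivity fails outright and a likelihood ratio weight is used instead. The sketch below follows the textbook estimators (cf. [3]), not the multilevel algorithm itself; all parameter values are illustrative.

```python
import math, random

def gbm_terminal(s0, mu, sigma, T, z):
    """Exact lognormal sample of S_T given a standard normal draw z."""
    return s0 * math.exp((mu - 0.5 * sigma * sigma) * T + sigma * math.sqrt(T) * z)

def deltas(s0=1.0, K=1.0, mu=0.05, sigma=0.2, T=1.0, n=200000, seed=0):
    rng = random.Random(seed)
    pw_call = lr_digital = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        sT = gbm_terminal(s0, mu, sigma, T, z)
        # pathwise delta of a call: d/dS0 (S_T - K)^+ = 1{S_T > K} S_T / S0
        if sT > K:
            pw_call += sT / s0
        # likelihood-ratio delta of a digital 1{S_T > K}: the pathwise
        # derivative is 0 a.s., so weight the payoff by the score
        # d/dS0 log p(S_T) = z / (S0 * sigma * sqrt(T))
        if sT > K:
            lr_digital += z / (s0 * sigma * math.sqrt(T))
    return pw_call / n, lr_digital / n

pw, lr = deltas()
```

The indicator inside the pathwise call estimator is precisely the digital payoff, which is why first order Greeks of calls behave like digital option prices.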

[1] Giles, M.B., *Improved multilevel Monte Carlo convergence using the Milstein scheme*, Monte Carlo and Quasi-Monte Carlo Methods 2006, Springer-Verlag, pp.343-358.

[2] Giles, M.B., *Multilevel Monte Carlo path simulation*, Monte Carlo and Quasi-Monte Carlo Methods 2006, Operations Research, Vol 56, 2008, pp.607-617.

[3] Glasserman, P., *Monte Carlo Methods in Financial Engineering*, Springer, New York, 2004.

[4] Asmussen, S. and Glynn, P., *Stochastic Simulation*, Springer, New York, 2007.

[5] Giles, M.B., *Vibrato Monte Carlo sensitivities*, Monte Carlo and Quasi-Monte Carlo Methods 2008, pp.369-382.

In the framework of [1] we consider an acceleration-oriented
vehicular traffic flow model and study the evolution of $N$-particle
ensembles, which is governed by a homogeneous Boltzmann-like equation.
We obtain both $N$-particle Kolmogorov type equation and the
corresponding linear integral equation of the second kind.
To solve the latter we can apply the method of majorant
frequencies and its weighted modifications [2].
This work is supported by Integration Grant No. 22 SB RAS (2008) and RFBR
(grants 08-01-00334, 09-01-00035, 09-01-00639).

[1] K.T. Waldeer, The direct simulation Monte Carlo method applied to a Boltzmann-like vehicular traffic flow model, *Comp. Phys. Commun.* **156** (2003), no. 1, 1--12.

[2] M.S. Ivanov, M.A. Korotchenko, G.A. Mikhailov, and S.V. Rogasinsky, Global weighted Monte Carlo method for the nonlinear Boltzmann equation, *Comp. Math. Math. Phys.* **45** (2005), no. 10, 1792--1801.

We suggest a new statistical algorithm for estimating
both the solution $u(r_0)$ and spatial derivatives
$\displaystyle\frac{\partial u}{\partial x_i}(r_0)$,
$\displaystyle\frac{\partial^2 u}{\partial x_j\partial x_i}(r_0)$
for the following interior Dirichlet problem
\begin{equation}\label{equationD}
\left \{
\begin{array}{l}
\Delta u(r)+c(r)u(r)+ \big(v(r),\nabla u(r)\big)+g(r)=0, \enskip r\in D,\\
u(s)=\varphi(s), \enskip s\in \Gamma=\partial D.
\end{array} \right.
\end{equation}
For this purpose we construct a system of local integral equations which is
equivalent to the initial problem (\ref{equationD}) and solve it by a
random walk algorithm. It is possible to take into account a summand
$u^n$ by introducing a branching process.
The algorithm suggested is a development of the approach presented in [1].
This work is supported by Integration Grant No. 22 SB RAS (2008) and RFBR (grants 08-01-00334, 09-01-00639).
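
As an illustration of random-walk estimators for such problems, here is a classical walk-on-spheres sketch for the simplest special case of (\ref{equationD}), namely $c \equiv 0$, $v \equiv 0$, $g \equiv 0$ in the unit disk. It is not the local-integral-equation algorithm of the talk, and all numerical choices are illustrative.

```python
import math, random

def walk_on_spheres(x, y, phi, eps=1e-3, n=20000, seed=0):
    """Estimate u(x, y) for Laplace's equation in the unit disk with
    Dirichlet boundary data phi, by the walk-on-spheres method."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        px, py = x, y
        d = 1.0 - math.hypot(px, py)     # distance to the unit circle
        while d >= eps:
            # the exit point of Brownian motion from the largest disk
            # centred at (px, py) is uniform on its boundary circle
            theta = rng.uniform(0.0, 2.0 * math.pi)
            px += d * math.cos(theta)
            py += d * math.sin(theta)
            d = 1.0 - math.hypot(px, py)
        r = math.hypot(px, py)
        total += phi(px / r, py / r)     # project onto the boundary
    return total / n

# boundary data phi(s) = s_x has the harmonic extension u(x, y) = x
est = walk_on_spheres(0.3, 0.2, lambda sx, sy: sx)
```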

[1] A.V. Burmistrov, Random walks inside a domain for estimation of the gradient of solutions to elliptic boundary problems, *Russ. J. Numer. Anal. Math. Model.* **22** (2007), no. 6, 515--530.

We have invented a new theory of exact particle flow for nonlinear filters. This generalizes our theory of particle flow that is already many orders of magnitude faster than standard particle filters and which is several orders of magnitude more accurate than the extended Kalman filter for difficult nonlinear problems. The new theory generalizes our recent log-homotopy particle flow filters in three ways: (1) the particle flow corresponds to the exact flow of the conditional probability density corresponding to Bayes’ rule; (2) roughly speaking, the old theory was based on incompressible particle flow (like subsonic flight in air), whereas the new theory allows compressible flow (like supersonic flight in air); (3) the old theory suffers from obstruction of particle flow as well as singularities in the equations for flow, whereas the new theory has no obstructions and no singularities. Moreover, our basic filter theory is a radical departure from all other particle filters in three ways: (a) we do not use any proposal density; (b) we never resample; and (c) we compute Bayes’ rule by particle flow rather than as a pointwise multiplication. We have performed hundreds of numerical experiments to test this new theory, using several classes of examples: quadratic and cubic nonlinearities of the measurements, stable and unstable dynamical systems, linear systems, multimodal probability densities, and radar tracking problems. It turns out that the computational complexity of particle filters (for optimal accuracy) depends on the following parameters, which we vary in our numerical experiments: dimension of the state vector of the plant, stability or instability of the plant (as gauged by the eigenvalues of the plant), initial uncertainty of the state vector of the plant, signal-to-noise ratio of the measurements, and the process noise of the plant.
Particle filters generally suffer from the curse of dimensionality, whereas our filter substantially mitigates this effect for the examples studied so far, for plants with dimension up to 30. Other particle filters generally do not exploit any smoothness or structure of the problem, whereas our new theory assumes that the densities are twice continuously differentiable in x (the state vector of the plant), and we assume that the densities are nowhere vanishing.
We design the particle flow using the solution of a first order linear (highly underdetermined) PDE, like the Gauss divergence law. We analyze 11 methods for solving this PDE, including an exact solution for certain special cases (Gaussian and exponential family), solving Poisson’s equation, separation of variables, generalized method of characteristics, direct integration, Gauss’s variational method, optimal control, generalized inverse for linear differential operators, and a second homotopy inspired by Gromov’s h-principle. The key issues are: (1) how to select a unique solution; (2) stability of the particle flow; and (3) computational complexity to solve the PDE.

We present constructions of point sets which are suitable for numerical
integration of functions defined on $\mathbb{R}^s$. The construction is based
on digital $(t,m,s)$-nets. We show bounds on the worst-case error of
integration using QMC type rules based on these constructions.

The random numbers driving Markov chain Monte Carlo (MCMC) simulation are
usually modeled as independent uniform $U(0,1)$ random variables.
Tribble reports substantial improvements when those random numbers are
replaced by carefully balanced inputs from completely uniformly distributed
sequences.
The previous theoretical justification for using anything other than IID
$U(0,1)$ points showed consistency for estimated means, but only applies for
discrete stationary distributions. We extend those results to some MCMC
algorithms for continuous stationary distributions. The main motivation is
the search for quasi-Monte Carlo versions of MCMC. As a side benefit, the
results also establish consistency for the usual method of using
pseudo-random numbers in place of random ones.

We extend Atanassov's methods for Halton sequences [1] in two
different directions:\\
(1) in the direction of Niederreiter $(t,s)$-sequences, including Sobol' and
Faure sequences.\\
(2) in the direction of generating matrices for Halton sequences.
It is quite remarkable that Atanassov's method for \emph{classical} Halton
sequences (see [1], Theorem 2.1) applies almost ``word for word" to
$(t,s)$-sequences and gives an upper bound quite comparable to those of
Sobol', Faure and Niederreiter. The method also applies to the generalized Faure
sequences introduced by Tezuka and to Owen's scrambling of $(t,s)$-sequences.
Until now, Atanassov's method for modified Halton sequences (see [1], Theorem 2.3) has had no equivalent for $(t,s)$-sequences, even the
simplest ones. In fact, his method applies and gives asymptotic improvements
in the case of Halton sequences thanks to the very structure of these
sequences, which is completely different from that of $(t,s)$-sequences.
But the modification of Halton sequences found by Atanassov with the help of
so-called \emph{admissible integers} can be extended to \emph{admissible
matrices} and leads to the same asymptotic upper bounds as for admissible
integers. We will discuss this extension as well.
This communication is based on a joint work with Christiane Lemieux and Xiaoheng Wang.
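
For reference, a sketch of the classical Halton construction (not Atanassov's modified sequences): coordinate $j$ of the $k$-th point is the radical inverse of $k$ in the $j$-th prime base.

```python
def radical_inverse(k, base):
    """Mirror the base-b digits of k about the radix point."""
    x, f = 0.0, 1.0 / base
    while k > 0:
        x += (k % base) * f
        k //= base
        f /= base
    return x

def halton(n, primes=(2, 3, 5)):
    """First n points of the classical s-dimensional Halton sequence,
    one prime base per coordinate."""
    return [tuple(radical_inverse(k, b) for b in primes)
            for k in range(1, n + 1)]

pts = halton(4)   # starts (1/2, 1/3, 1/5), (1/4, 2/3, 2/5), ...
```

Atanassov's admissible-integer modification, and the admissible-matrix extension discussed above, alter this construction by scrambling the digits; only the plain version is shown here.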

[1] E. I. Atanassov, On the discrepancy of the Halton sequences,
*Math. Balkanica, New Series* **18.1-2** (2004), 15--32.

In this communication we will present a survey of recent results on Hammersley and Zaremba two-dimensional point sets obtained by means of various scramblings, such as shifts and reflecting permutations. These Hammersley and Zaremba point sets are finite two-dimensional
versions of the infinite van der Corput sequences. Our results are based on precise knowledge of these sequences, but much further investigation is needed to obtain exact formulas and to perform computer searches for good scramblings. Most of these results come from joint work with F. Pillichshammer.

We consider quantile estimation based on Markov chain Monte Carlo (MCMC) simulations. Quantile estimates are commonly reported to summarize features of a given target distribution, most often in the form of Bayesian credible region endpoints. Using estimates from the inverse of the empirical distribution function, we establish a central limit theorem (CLT) enabling assessment of the associated uncertainty. Further, we propose the use of the subsampling bootstrap and regeneration to estimate the associated asymptotic variance. Finally, we investigate the finite sample properties of these methods through three examples and provide recommendations to practitioners.
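
A minimal sketch of the basic estimator, the inverse empirical distribution function applied to Metropolis output, may be helpful. The target and tuning below are illustrative, and the bootstrap and regeneration variance estimates of the talk are omitted.

```python
import math, random

def metropolis_normal(n, step=1.0, seed=0):
    """Random-walk Metropolis chain with stationary distribution N(0, 1)."""
    rng = random.Random(seed)
    x, chain = 0.0, []
    for _ in range(n):
        y = x + rng.uniform(-step, step)
        # accept with probability min(1, pi(y) / pi(x)) for pi = N(0, 1)
        if rng.random() < math.exp(min(0.0, 0.5 * (x * x - y * y))):
            x = y
        chain.append(x)
    return chain

def empirical_quantile(sample, q):
    """Inverse of the empirical distribution function at level q."""
    s = sorted(sample)
    return s[max(0, min(math.ceil(q * len(s)) - 1, len(s) - 1))]

chain = metropolis_normal(200000)
q90 = empirical_quantile(chain, 0.9)  # true 0.9-quantile of N(0,1) is ~1.2816
```

The CLT of the talk quantifies how the autocorrelation of such a chain inflates the variance of `q90` relative to the i.i.d. case.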

Adaptive and Interacting Markov chain Monte Carlo algorithms (MCMC) have been
recently introduced in the literature. These novel simulation algorithms are
designed to increase the simulation efficiency to sample complex
distributions. Motivated by some novel algorithms introduced recently (such
as the Adaptive Metropolis (AM) algorithm and the Equi-Energy (EE) sampler),
we develop a general methodological and theoretical framework, covering a
large number of adaptation algorithms. Within this theoretical framework, we
are able to cope with both internal and external adaptation
schemes. In the internal scheme, the current value of the parameter is adapted
using the past samples of the process. In the external scheme, the adaptation
is carried out using a certain number of auxiliary processes which may
interact. We also consider the case where the parameter to adapt is infinite-dimensional. This framework covers, among others, the AM and EE samplers,
but much more general variants can be considered as well.
In this talk, we will develop a general theory to establish the convergence of these algorithms. For that purpose, we study the convergence of randomly perturbed Markov operators. Using these novel results, we establish both the convergence of the marginal distribution for bounded functions and a strong law of large numbers. This theory leads to explicit guidelines for the design of algorithms.

We assume a L\'evy process $(X_t)$ such that $(S_t)$
given by
$S_t =e^{X_t}$ is a square integrable martingale.
The F\"ollmer-Schweizer decomposition provides
for any Borel function $f$ with $f(S_1) \in L_2$ a representation
\[
f(S_1)= \mathbb{E} f(S_1) + \int_0^1 \theta_t dS_t + \mathcal{N},
\]
where the random variable $\mathcal{N}$ is orthogonal to all stochastic integrals with
respect to $(S_t).$
We will relate Malliavin fractional smoothness of $\int_0^1 \theta_t dS_t$
to the rate of
convergence $r$ of
\[
\bigg \|\int_0^1 \theta_t dS_t - \sum_{k=1}^n
\theta_{t_{k-1}} (S_{t_k}-S_{t_{k-1}}) \bigg \|_{L_2} \le cn^{-r},
\,\,\, n \to \infty.
\]
If $\int_0^1 \theta_t dS_t$ has a certain Malliavin fractional smoothness, one can choose *specific* time nets
$0=t_0 < t_1 < \ldots < t_n=1$ to always obtain the best possible rate $r=\frac{1}{2}.$
On the other hand, the convergence rate $r$ obtained with *equidistant* time nets is
directly proportional to the Malliavin fractional smoothness of $\int_0^1 \theta_t dS_t.$
The considered approximation problem arises in Financial Mathematics.
Assuming $(S_t)$ models a share price process and $f$ is a payoff function,
we want to optimize asymptotically that part of the mean-variance hedging error
which is caused by rebalancing the portfolio only finitely many times.

A five-dimensional Bayesian forecasting model for cognitive performance
impairment during sleep deprivation is used to approximately determine
confidence intervals for psychomotor vigilance task (PVT) prediction.
Simulation is required to locate the boundary of a confidence region for
the model pdf surface. Further simulation is then used to determine PVT
lapse confidence intervals as a function of sleep deprivation time. Quasi-Monte
Carlo simulation methods are constructed for the two types of simulations. The
results from these simulations are compared with results from previous methods,
which have used various combinations of grid-search, numerical optimization and
simple Monte Carlo methods.

Motivated by applications in computational finance, we investigate the use of the multilevel Monte Carlo path simulation method for multidimensional SDEs which require knowledge of the L{\'e}vy areas to achieve first order strong convergence using the Milstein discretisation.
Two numerical approaches will be demonstrated and analysed. The first, which is applicable to Lipschitz payoffs, neglects the L{\'e}vy areas, but uses an antithetic Brownian Bridge construction
to generate two fine paths for each coarse path. Although each fine path is $O(h^{1/2})$ from the coarse path (where $h$ is the uniform timestep) the average of the two is within $O(h)$
of the coarse path. Hence, using the average payoff from the two fine paths we obtain a multilevel correction estimator with a variance which is $O(h^2)$ for smooth payoffs, and $O(h^{3/2})$ for standard put and call options.
The second approach, which is required for discontinuous payoffs, simulates the L{\'e}vy areas by sub-sampling the driving Brownian path within each timestep. There is a tradeoff here, since
improving the accuracy of the L{\'e}vy area simulation increases the computational cost but reduces the strong convergence errors and hence the variance of the multilevel estimator. Analysis
reveals the optimum, in combination with the antithetic treatment and the use of conditional expectations, corresponds to a sub-sampling interval which is asymptotically $O(h^{3/2})$.

The well-known Koksma-Hlawka inequality states that the error of approximating
a multivariate integral by a quasi-Monte Carlo rule is bounded from
above
by the product of the variation of the function in the sense of
Hardy and Krause and the star discrepancy of the point set.
In practical applications, functions often do not have bounded
variation. To address this point, function spaces with less smoothness
were studied in [J. Dick, Koksma-Hlawka type inequalities of
fractional order, Ann. Mat. Pura Appl. 187 (2008), 385--403],
and a generalized Koksma-Hlawka inequality
for multivariate quasi-Monte Carlo integration was proved.
These function spaces were defined via fractional derivatives in the
sense of Riemann and Liouville. The corresponding fractional
discrepancy, which appears in the upper bound of the generalized
Koksma-Hlawka inequality, is more complicated than the common
star discrepancy or $L_p$-discrepancies. Nevertheless, one would
like to find point sets which exhibit a small fractional discrepancy,
since such point sets are good samples for quasi-Monte Carlo
integration on these specific function spaces with fractional
smoothness. We prove via embedding theorems for fractional function
spaces that known low-discrepancy point sets indeed also exhibit a
small fractional discrepancy.

A unified approach to estimating the probability of joint default under several classical models in the bottom-up approach is presented. We construct importance sampling schemes for high-dimensional problems, including Gaussian factor copula models in reduced form and first-passage time problems under the Black-Scholes model in structural form. In particular, the large deviation principle is used to prove that all these importance sampling schemes are efficient for rare event simulation.
When a random environment arises, such as a Student $t$-variate (Gaussian with random volatility) in reduced form or a stochastic correlation model in structural form, the aforementioned importance sampling scheme for the joint default probability cannot be applied directly. We overcome these difficulties by investigating the asymptotics of these random coefficients.
Finally, we demonstrate two neat stochastic algorithms for estimating lower tail probabilities under the multivariate normal and Student $t$ distributions, and compare their numerical performance with the Matlab codes mvncdf.m and mvtcdf.m.
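
The role of the large deviation principle can be illustrated in the simplest one-dimensional case: estimating a Gaussian tail probability with an exponentially tilted (mean-shifted) proposal. This is a textbook sketch, not one of the copula or first-passage schemes of the talk.

```python
import math, random

def tail_prob_is(a, n=100000, seed=0):
    """Estimate the rare-event probability P(Z > a), Z ~ N(0, 1), by
    importance sampling with the mean shifted to a, the exponentially
    tilted proposal suggested by the large deviation principle."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = a + rng.gauss(0.0, 1.0)              # draw from N(a, 1)
        if y > a:
            # likelihood ratio phi(y) / phi(y - a) = exp(-a*y + a^2/2)
            total += math.exp(-a * y + 0.5 * a * a)
    return total / n

p = tail_prob_is(4.0)   # exact value is Phi(-4), about 3.167e-5
```

A crude Monte Carlo estimate would need on the order of $10^7$ samples to see this event at all; the tilted estimator attains a few percent relative error with $10^5$.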

We consider an $\mathbf{F}_2$-linear pseudorandom number generator,
i.e., its state space, transition function, and output function are all $\mathbf{F}_2$-linear.
One of the merits of such a generator is that we can compute the dimension
of equidistribution with $v$-bit accuracy (denoted by $k(v)$), which measures high-dimensional uniformity of the sequence for the whole period.
Couture, L'Ecuyer, and Tezuka (1993) introduced a lattice basis reduction
method for computing these dimensions, over the formal power series field
$\mathbf{F}_2((t^{-1}))$.
Couture and L'Ecuyer (2000) improved Tezuka's
resolution-wise lattice method (1994) by using the dual lattice.
In MCM2009, Harase, Matsumoto, and Saito proposed
a lattice method faster than that of Couture and L'Ecuyer, named SIS.
The method works as follows: (1) It uses the original resolution-wise lattice, not the dual lattice.
(2) It uses a reduction algorithm that works for a generating set of
the lattice, not only for a basis (a modification of Schmidt's algorithm (1991)).
(3) It first computes $k(v)$ and then inductively computes $k(v-1), k(v-2), \ldots, k(1)$, in the order converse to the dual lattice case (i.e., inductive projection).
(4) It represents the lattice points in terms of state arrays,
instead of polynomial-coefficient vectors, in order to save space (i.e., state representation).
In this talk, we propose to replace (2) with a more efficient algorithm
based on (Mulders and Storjohann, 2003) and (Wang, Zhu, and Pei, 2004).
A key idea of the algorithm is the ``pivot index'',
hence we call the new method PIS (Pivot index reduction with Inductive projection and State representation).
In terms of computational complexity, PIS is superior to SIS.
For computing all $k(v)$, numerical experiments show that
our approach improves the speed by a factor of about three
for $64$-bit Mersenne Twister, and by a factor of about $1.5$
for the WELL generator, where each state size is $19937$.
Furthermore, we propose to apply the lattice reduction
to the sequences generated from $0$-excess states.
Because of the sparseness of such sequences, the reduction
is accelerated for $\mathbf{F}_2$-linear generators which keep $0$-excess states
for a long term, such as Mersenne Twisters.

The contributon theory for radiative transport problems
originated in studies of nuclear shielding,
in particular, problems involving low-probability detector
responses or deep penetration [1].
The contributon function provides a phase space map of
the lossless flow of ``contributon'' particles that necessarily
are transported from source to detector. Our work uses
this function to quantify the relative importance of
phase space locations and directions in terms of
transport from source to detector.
In the field of biomedical optics, non-invasive use
of light to detect cancerous tissue transformations often
involves a low probability detector response because tissue
is very turbid and scattering is highly forward-peaked.
In these applications, we use the contributon map to extend
the geometric learning of adaptive Monte Carlo algorithms.
The adaptive algorithm makes sequential use of correlated
sampling to lower the variance systematically, providing
improved convergence rates over conventional
Monte Carlo. Prior results illustrated the benefit
in modifying the algorithm based on information from the
**spatial** or scalar contributon map [2]. However,
optical transport through biological tissue systems often
produces contributon maps with large angular variation.
We now have developed an algorithm that employs both the
**spatial** and **angular** contributon maps
which provides further improved geometric convergence rates.
Our solution method is described and applied to
tissue model problems. Numerical results illustrate the
utility of contributon maps in solving
problems that are difficult to solve using conventional
Monte Carlo methods.

[1] M.L. Williams.
Generalized Contributon Response Theory.
*Nucl. Sci. Eng.*, 108:355--383, 1991.

[2] R. Kong, M. Ambrose, and J. Spanier.
Efficient, automated Monte Carlo methods for radiation transport.
*J. Comp. Physics*, 227(22):9463--9476, 2008.

We consider the computational problem of evaluating an expectation $\mathbb{E}[f(Y)]$, where $Y=(Y_t)_{t\in[0,1]}$ is the solution of a stochastic differential equation driven by a square integrable L\'evy process and $f:D[0,1] \to \mathbb{R}$. Here, $D[0,1]$ denotes the Skorohod space of c\`adl\`ag functions. Computational problems of this kind arise, e.g., in computational finance for valuations of path-dependent options. We are interested in Monte Carlo algorithms, and we analyze their worst case error over all functionals $f$ that are Lipschitz continuous with Lipschitz constant one with respect to the supremum norm.
We introduce multilevel Monte Carlo algorithms for the Euler scheme where the small jumps of the L\'evy process are neglected. Furthermore, we provide upper bounds for the worst case error of these multilevel algorithms in terms of their computational cost. Specifically, if the driving L\'evy process has Blumenthal-Getoor index $\beta$, errors of order $n^{-(\frac 1{\beta\vee 1}-\frac12)}$ are achieved with cost $n$. In the case $\beta>1$, this rate can even be improved by using a Gaussian correction term for the neglected small jumps, see Dereich (2009).
In this talk, we focus on the implementation of the coupled Euler scheme. Moreover, we present numerical experiments complementing the results.
Supported by the DFG within the SPP 1324.

We introduce a function system on $L^2([0,1[^s)$ that is closely related to the dual group
of $p$-adic integers $\mathbb{Z}_p$,
$p$ a prime.
In terms of this function system,
we exhibit a new variant of the inequality of
Erd\"os-Tur\'an-Koksma,
which leads to general upper bounds for discrepancy.
Further,
we show a variant of the Weyl criterion
for the $p$-adic function system under consideration.
The uniform distribution of the van der Corput sequence in base $p$
and of Halton sequences follows as a simple corollary.
In addition,
we discuss a new notion of diaphony that is related to $p$-adic addition by means
of a pseudo-inverse of the $p$-adic Monna map.

We consider stochastic heat equations on the spatial domain $(0,1)^d$ with additive nuclear noise as well as additive space-time white noise, and we study approximation of the mild solution at a fixed time point. The error of an algorithm is defined by the average $L_2$-distance between the solution and its approximation. The cost of an algorithm is defined by the total number of evaluations of one-dimensional components of the driving Brownian motion at arbitrary time nodes. We want to construct algorithms with an (asymptotically) optimal relation between cost and error. Furthermore we wish to determine the asymptotic behaviour of the corresponding minimal errors.
We show how the minimal errors depend on the spatial dimension $d$ and, in the case of nuclear noise, on the decay of the eigenvalues of the associated covariance operator.
Asymptotic optimality is achieved by a spectral Galerkin approximation together with a nonuniform time discretization. This optimality cannot be achieved by uniform time discretizations, which are frequently studied in the literature.
This work was supported in part by the DFG.

Radial basis function methods are popular for function approximation because they may be used with arbitrary designs (sampling schemes) and have the potential of high accuracy. These methods are essentially best approximations in a Hilbert space (native space) with reproducing kernel $K$ of the form $K(\mathbf{x}, \mathbf{t}) =\Phi(\lVert \mathbf{x} - \mathbf{t} \rVert^2)$ for some suitable function $\Phi$, i.e., an isotropic reproducing kernel. Unfortunately, most existing error bounds for radial basis function methods are heavily dimension-dependent. This is not surprising, since employing a radially symmetric kernel implies that a data point only provides information about the function in some Euclidean ball of finite radius centered at that point. Moreover, one requires $O(n^{d})$ balls of radius $O(n^{-1})$ to fill a unit volume. However, if the radial symmetry is broken by introducing non-homogeneous shape parameters, e.g., with the Gaussian kernel,
\[
K(\mathbf{x}, \mathbf{t})= \exp(-\gamma_1^2\lvert x_1 - t_1 \rvert^2 - \cdots - \gamma_d^2\lvert x_d - t_d \rvert^2),
\]
then one may obtain excellent dimension-independent convergence rates, provided that $\gamma_d \to 0$ as $d \to \infty$. This is demonstrated via the Gaussian kernel for $L_2$ worst case and average case approximation where the function data available includes arbitrary linear functionals or function values.

Pricing a path-dependent financial derivative, such as an Asian option, requires the computation of $\mathbb{E}(g(B))$, the expectation of a payoff function $g$ that depends on a Brownian motion $B$. Employing a standard series expansion of $B$, the latter problem is equivalent to the computation of the expectation of a function of the corresponding i.i.d.\ sequence of random coefficients. This motivates the construction and the analysis of algorithms for numerical integration with respect to a product probability measure on the sequence space $\mathbb{R}^{\mathbb{N}}$. The class of integrands studied in this paper is the unit ball in a reproducing kernel Hilbert space obtained by superposition of weighted tensor product spaces of functions of finitely many variables. Combining tractability results for high-dimensional integration with the multi-level technique we obtain new algorithms for infinite-dimensional integration. These deterministic multi-level algorithms use variable subspace sampling and they are superior to any deterministic algorithm based on fixed subspace sampling with respect to the respective worst case error. Numerical experiments will be presented.

We investigate the optimal performance of importance sampling for the approximation of integrals
$$I(f) = \int_D f(x) \rho (x) \, dx$$
of functions $f$ in a Hilbert space $H\subset L_1(\rho)$, where $\rho$ is a given probability density on $D\subset \mathbb{R}^d$.
We show that whenever the embedding of $H$ into $L_1(\rho)$ is a bounded operator, there exists another density
$\omega$ such that the worst case error of importance sampling with density function $\omega$ is of order $n^{-1/2}$.
This applies in particular to reproducing kernel Hilbert spaces with a nonnegative kernel.
As a result, for multivariate problems generated from nonnegative kernels we prove strong polynomial tractability of the
integration problem in the randomized setting.
The density function $\omega$ is obtained from the application of change of density results used in the geometry of
Banach spaces in connection with a theorem of Grothendieck about 2-summing operators.
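A one-dimensional toy computation (purely illustrative; it does not reproduce the Grothendieck-based construction of the abstract) shows how a well-chosen sampling density reduces, here even removes, the variance:

```python
import math
import random

random.seed(2)

# Estimate I(f) = \int_0^1 x dx = 1/2, i.e. rho = 1 on D = [0,1].
N = 10000
f = lambda x: x

# plain Monte Carlo
plain = [f(random.random()) for _ in range(N)]

# importance sampling with omega(x) = 2x, sampled via the inverse CDF
# u -> sqrt(u); the weight f(x) * rho(x) / omega(x) = 1/2 is constant,
# so the importance-sampling estimator has zero variance for this f
xs = [math.sqrt(1.0 - random.random()) for _ in range(N)]
weighted = [f(x) / (2.0 * x) for x in xs]

mean_p = sum(plain) / N
mean_w = sum(weighted) / N
var_p = sum((v - mean_p) ** 2 for v in plain) / N
var_w = sum((v - mean_w) ** 2 for v in weighted) / N
print(mean_p, var_p, mean_w, var_w)
```

The general theorem of the abstract guarantees only an $O(n^{-1/2})$ worst case error over the unit ball of $H$; the zero-variance effect above is special to a single integrand.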

Traditional Monte Carlo methods for numerical integration rely on estimates to determine the variance of the result. Some methods, however, can provide guarantees on performance without the need to either calculate or estimate a variance. Foremost among these methods is the product estimator, which operates by creating a series of problems that interpolate between the target integration and an integration that is easy to solve analytically. The series of estimates is then multiplied together to create the overall estimate. The difficulty in using the product estimator is in building the series of problems to estimate, and in analyzing the resulting output. The new method of TPA simultaneously solves both these problems. TPA adaptively creates the series of in-between problems, and the output has exactly a Poisson distribution with mean equal to a known quantity times the target integration value. Hence there is no need to estimate the variance, and even the tails of the output can be analyzed completely. Several applications from statistics and computer science will be used to illustrate the method.
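A standard textbook illustration of TPA (not taken from the talk itself) uses nested disks: repeatedly draw a uniform point in the current disk and shrink the radius to that point's norm. The number of levels visited between the two bodies is exactly Poisson distributed with mean equal to the log of the area ratio, so no variance estimation is needed:

```python
import math
import random

random.seed(5)

# TPA for nested disks of radii R > r.  The count of intermediate levels in
# (r, R] is Poisson with mean ln(area(R)/area(r)) = 2 ln(R/r).
R, r = 1.0, 0.5
reps = 4000
counts = []
for _ in range(reps):
    beta, k = R, 0
    while True:
        # norm of a uniform point in the disk of radius beta is beta*sqrt(U)
        beta *= math.sqrt(1.0 - random.random())
        if beta <= r:
            break
        k += 1
    counts.append(k)

mean = sum(counts) / reps
print(mean, 2.0 * math.log(R / r))   # the two numbers should be close
```

Because $-\log$ of the uniform area fraction is Exponential(1), the levels form a rate-1 Poisson process on the log-area scale, which is exactly the structure the abstract describes.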

Stochastic differential equations are often simulated with the Monte Carlo Euler method. Convergence of this method is well understood in the case of globally Lipschitz continuous coefficients of the stochastic differential equation. The important case of superlinearly growing coefficients, however, has long remained open. The main difficulty is that weak convergence of the Euler method fails to hold in many cases of superlinearly growing coefficients. In this talk we overcome this difficulty and establish convergence of the Monte Carlo Euler method for a large class of one-dimensional stochastic differential equations whose drift functions have at most polynomial growth.

The point set of a $d$-dimensional rank-$1$ lattice rule with $n$ points
is given by $P_n(\mathbf{z}):=\{\{j\mathbf{z}/n\},\ 0\le j\le n-1\}$,
where $\mathbf{z} \in \mathbb{Z}^d$ has no factor in common with $n$ and
the braces around a vector indicate that
we take the fractional part of each component. A criterion used to find
a `good' $\mathbf{z}$ is the star discrepancy, $D^*(P_n(\mathbf{z}))$,
of the point set.
For $h\in\mathbb{Z}$ satisfying $-n/2 < h \le n/2$, let
$r(h)=\max(1,|h|)$ and
\[
t(h,n)=\left\{\begin{array}{ll}
n\sin(\pi|h|/n) &\mbox{for }-n/2 < h\le n/2,\ h\ne 0,\\
1 &\mbox{for }h=0.\end{array}\right.
\]
Further, with $\mathbf{h}\in\mathbb{Z}^d$ and $C_d^*(n):=
\{\mathbf{h}\ne\mathbf{0}: -n/2< h_i\le n/2,\ 1\le i\le d\}$, let
\[
R(\mathbf{z},n)=\!\!\sum_{{\mathbf{h}\in C_d^*(n)}\atop{\mathbf{h}
\cdot\mathbf{z}\equiv 0\mod n}}\!\!\left(\prod_{i=1}^d r(h_i)\right)^{-1}
\mbox{and}
\ \ T(\mathbf{z},n)=\!\!\sum_{{\mathbf{h}\in C_d^*(n)}\atop{\mathbf{h}
\cdot\mathbf{z}\equiv 0\mod n}}\!\!\left(\prod_{i=1}^d t(h_i,n)\right)^{-1}.
\]
Then it is known that the star discrepancy may be bounded in the following way:
\[
D^*(P_n(\mathbf{z}))
\le 1-\left(1-1/n\right)^d+T(\mathbf{z},n)
\le 1-\left(1-1/n\right)^d+R(\mathbf{z},n)/2.
\]
Here we introduce a new quantity $W(\mathbf{z},n)$ which satisfies
\[
T(\mathbf{z},n)\le W(\mathbf{z},n)\le R(\mathbf{z},n)/2.
\]
This leads to an intermediate bound on the star discrepancy. For a given
$\mathbf{z}$, calculation of $T(\mathbf{z},n)$ appears to require
$O(n^2d)$ operations.
However, like $R(\mathbf{z},n)$, the quantity
$W(\mathbf{z},n)$ can be calculated in $O(nd)$ operations. Numerical
calculations
indicate that values of $W(\mathbf{z},n)$ are much closer to
$T(\mathbf{z},n)$ than to $R(\mathbf{z},n)/2$.
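For small parameters one can check numerically that the defining sum for $R(\mathbf{z},n)$ agrees with a character-sum reformulation: averaging $e^{2\pi i j\,\mathbf{h}\cdot\mathbf{z}/n}$ over $j$ indicates $\mathbf{h}\cdot\mathbf{z}\equiv 0 \pmod n$. The same averaging identity is what makes $O(nd)$ evaluation possible. The following is only a brute-force check for $d=2$ and odd $n$ (so the range of $h$ is symmetric); the choice $n=31$, $\mathbf{z}=(1,12)$ is arbitrary:

```python
import math

def r(h):
    return max(1, abs(h))

def R_direct(z, n):
    # enumerate h in C_2^*(n) with h.z = 0 (mod n); feasible only for tiny n
    half = (n - 1) // 2
    total = 0.0
    for h1 in range(-half, half + 1):
        for h2 in range(-half, half + 1):
            if (h1, h2) == (0, 0):
                continue
            if (h1 * z[0] + h2 * z[1]) % n == 0:
                total += 1.0 / (r(h1) * r(h2))
    return total

def R_character(z, n):
    # (1/n) sum_j prod_i g((j z_i) mod n) - 1, where g(m) sums e(h m/n)/r(h);
    # the table g is built once and reused for every coordinate
    half = (n - 1) // 2
    g = [1.0 + 2.0 * sum(math.cos(2.0 * math.pi * h * m / n) / h
                         for h in range(1, half + 1)) for m in range(n)]
    s = 0.0
    for j in range(n):
        p = 1.0
        for zi in z:
            p *= g[(j * zi) % n]
        s += p
    return s / n - 1.0

z, n = (1, 12), 31
print(R_direct(z, n), R_character(z, n))
```

The subtraction of $1$ removes the $\mathbf{h}=\mathbf{0}$ term, which always satisfies the congruence.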

Regenerative simulation is a technique for identifying updates at which a Markov chain ``probabilistically restarts itself,'' so that the paths taken between regeneration times are independent and identically distributed. Regeneration was introduced by Mykland et al (1995, JASA) as a method for Markov chain Monte Carlo (MCMC) output analysis and by Gilks et al (1998, JASA) as a method for constructing adaptive MCMC algorithms. In the years since those papers, regeneration has been employed to great effect in Gibbs samplers and in block-updated Metropolis-Hastings algorithms, but its use in variable-at-a-time MCMC, i.e., component-wise Metropolis-Hastings or ``Metropolised Gibbs,'' remains unexplored. In this paper we develop a framework for conducting regenerative simulation in general variable-at-a-time MCMC. We illustrate our method in a toy problem in Bayesian inference as well as a frequentist example involving maximum likelihood estimation for a generalized linear mixed model. We demonstrate the use of regeneration as a basis for MCMC standard error estimation, and propose an adaptive component-wise Metropolis-Hastings algorithm.

For problems in linear algebra, quasi-Monte Carlo methods give better
accuracy and more stable convergence with the increasing length of the
walks than Monte Carlo methods. The disadvantage of the quasi-Monte
Carlo approach is the difficulty of obtaining practical error estimates.
This disadvantage can be overcome by scrambling the underlying
low-discrepancy sequences. Scrambling also provides a natural way
to parallelize the streams.
The matrix-vector computations are the basic building blocks of various
Monte Carlo and quasi-Monte Carlo algorithms for linear algebra problems:
obtaining the largest (smallest) eigenvalue, solving systems of linear
algebraic equations, etc. In this paper we study matrix-vector
computations using scrambled quasirandom sequences. We use
GPU-based generation of Sobol and Halton sequences with
Owen-type scrambling to obtain fully GPU-based algorithms
for approximate calculation of matrix-vector products. The use
of GPU computing allows for highly efficient implementation of
the algorithms using all the available levels of parallelism -
all the cores of the graphics cards and MPI for synchronizing
the computations across several computing nodes.
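A minimal serial sketch of the column-sampling idea, under loud assumptions: a base-2 radical-inverse sequence with a random digital (XOR) shift stands in for the Owen-type scrambled Sobol and Halton sequences of the abstract, and the GPU/MPI parallelism is omitted entirely:

```python
import random

def bitrev32(i):
    # reverse the 32-bit binary representation of i
    return int(f"{i:032b}"[::-1], 2)

def scrambled_vdc(n_pts, seed=1):
    # van der Corput sequence in base 2 with a random digital (XOR) shift --
    # a cheap stand-in for Owen-type scrambling
    random.seed(seed)
    shift = random.getrandbits(32)
    return [(bitrev32(i) ^ shift) / 2 ** 32 for i in range(n_pts)]

# toy problem: estimate y = A x by sampling columns with quasirandom indices
m, rows = 8, 4
random.seed(0)
A = [[random.gauss(0, 1) for _ in range(m)] for _ in range(rows)]
x = [random.gauss(0, 1) for _ in range(m)]

N = 512
u = scrambled_vdc(N)
y_hat = [0.0] * rows
for uk in u:
    j = min(int(uk * m), m - 1)        # sampled column index
    for i in range(rows):
        y_hat[i] += A[i][j] * x[j]
y_hat = [m / N * s for s in y_hat]     # unbiased matrix-vector estimate

y = [sum(A[i][j] * x[j] for j in range(m)) for i in range(rows)]
err = max(abs(a - b) for a, b in zip(y_hat, y))
print(err)
```

Since the first $2^9$ shifted points are perfectly stratified over the $m=2^3$ dyadic intervals, every column is sampled equally often and the error here is essentially round-off; with pseudorandom indices the same estimator has $O(N^{-1/2})$ error.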

A straightforward discretisation of high dimensional problems often leads to
an exponential growth in the number of degrees of freedom.
Hence, the computational costs of even efficient algorithms increase similarly.
Sparse frequency grids like hyperbolic crosses allow for a good approximate
representation of functions of appropriate smoothness and greatly decrease
the number of Fourier coefficients used.
As a matter of course, an important issue
is the customisation of efficient algorithms to these
thinner discretisations.
The sampling schemes defined in a natural way by the hyperbolic crosses are the sparse grids. The evaluation of trigonometric polynomials with frequencies supported on hyperbolic crosses at the sparse grid nodes suffers from growing condition numbers, and the known associated fast algorithm is hard to implement.
By changing the spatial discretisation to \mbox{rank-1} lattices, one can easily reduce multidimensional trigonometric polynomials to one-dimensional ones. With the well-known FFT, these can be evaluated stably and fast. Thus we obtain stability and a quite simple implementation.
We discuss necessary and sufficient conditions to the \mbox{rank-1} lattice to allow for a unique reconstruction of frequencies from the sampling values.
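The reduction to one dimension can be sketched as follows (illustrative only; a naive DFT loop stands in for the FFT, and the frequency set and lattice are arbitrary small examples). Each frequency $\mathbf{h}$ contributes at the residue $m=\mathbf{h}\cdot\mathbf{z} \bmod n$, so at the lattice nodes the multivariate polynomial collapses to a univariate one:

```python
import cmath

# toy hyperbolic-cross-like frequency set in d = 2, arbitrary coefficients
d, n, z = 2, 16, (1, 5)
freqs = [(h1, h2) for h1 in range(-3, 4) for h2 in range(-3, 4)
         if (abs(h1) if h1 else 1) * (abs(h2) if h2 else 1) <= 3]
coef = {h: complex(1.0 + 0.3 * h[0] - 0.2 * h[1]) for h in freqs}

# direct evaluation at the lattice nodes x_j = {j z / n}
def f(x):
    return sum(c * cmath.exp(2j * cmath.pi * (h[0] * x[0] + h[1] * x[1]))
               for h, c in coef.items())

direct = [f(((j * z[0] / n) % 1.0, (j * z[1] / n) % 1.0)) for j in range(n)]

# aggregate coefficients by the residue m = h.z mod n: at the lattice,
# f becomes a 1-D trigonometric polynomial evaluable by one length-n FFT
chat = [0j] * n
for h, c in coef.items():
    chat[(h[0] * z[0] + h[1] * z[1]) % n] += c
onedim = [sum(chat[m] * cmath.exp(2j * cmath.pi * m * j / n) for m in range(n))
          for j in range(n)]      # naive DFT in place of the FFT

err = max(abs(a - b) for a, b in zip(direct, onedim))
print(err)
```

A unique reconstruction of the coefficients from the sampling values requires exactly that distinct frequencies in the cross map to distinct residues, which is the kind of condition the talk discusses.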

We consider the partitioning of one low-discrepancy
sequence into multiple low-discrepancy sequences.
For all rational number theoretic constructions,
efficient enumeration algorithms are derived
by uniformly partitioning one domain dimension.
The new scheme provides a solution to long-standing issues
in parallel quasi-Monte Carlo integration: by considering
the distribution of computing jobs across a multiprocessor
as an additional problem dimension, the straightforward
application of quasi-Monte Carlo methods implies parallelization.
This allows for adaptive parallel processing
without synchronization, i.e. communication is
required only once for the final reduction of the partial results.
The resulting algorithms are deterministic, independent of the
number of processors, and include previous approaches as special
cases.
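A minimal sketch of the "job as an extra dimension" idea, under illustrative assumptions (a Halton sequence, a toy integrand, and a trivial final reduction; the paper's efficient per-processor enumeration algorithms are not reproduced):

```python
def radical_inverse(i, b):
    # van der Corput radical inverse of i in base b
    f, r = 1.0, 0.0
    while i:
        f /= b
        r += f * (i % b)
        i //= b
    return r

P = 4                # number of processors/jobs
N = 2048
# 2-D Halton integrand coordinates (bases 2, 3) + one extra coordinate
# (base 5) that assigns each point to a job
partial = [0.0] * P
counts = [0] * P
for i in range(N):
    x = radical_inverse(i, 2)
    y = radical_inverse(i, 3)
    job = min(int(radical_inverse(i, 5) * P), P - 1)
    partial[job] += x * y        # integrand f(x,y) = x*y, exact integral 1/4
    counts[job] += 1

# one final reduction of the partial results; no other communication needed
estimate = sum(partial) / N
print(estimate, counts)
```

The combined estimate is by construction identical to the serial one, independently of the number of jobs, which mirrors the determinism claimed in the abstract.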

Many integration problems in image synthesis expose an
intrinsic stratification, where subdomains correspond to voxels across the
domain of integration. We present algorithms for efficiently enumerating certain low-discrepancy
sequences in such voxels. An example application of the technique
is pixel-adaptive sampling using only one Halton or Sobol' sequence
over the whole image plane. The method is readily extended to draw
samples according to a given voxel importance,
which e.g. can be used for importance-sampling
light transport paths. The main benefit of the approach is that
the low-discrepancy property is preserved over the whole
domain of integration, while still allowing samples to be drawn
for each voxel independently. Compared to previous
approaches, the implementation
uses very small lookup tables to efficiently compute the sample
indices of points inside a given voxel.
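A one-dimensional radical-inverse analogue conveys the index arithmetic (the full Halton/Sobol' machinery and the lookup tables of the paper are beyond this sketch): the base-2 van der Corput points falling in a dyadic voxel are exactly those whose lowest index bits are the bit-reversal of the voxel number, so they can be enumerated without any searching:

```python
import itertools

def vdc(i, bits=32):
    # radical inverse (van der Corput sequence) in base 2
    return int(f"{i:0{bits}b}"[::-1], 2) / 2.0 ** bits

def indices_in_voxel(k, m):
    # Indices i with vdc(i) in [k/2^m, (k+1)/2^m): exactly those whose m
    # lowest bits equal the m-bit reversal of k.
    r = int(f"{k:0{m}b}"[::-1], 2)
    j = 0
    while True:
        yield (j << m) | r
        j += 1

# first sample indices landing in the voxel [5/8, 6/8)
idx = list(itertools.islice(indices_in_voxel(5, 3), 4))
print(idx, [vdc(i) for i in idx])
```

Each voxel thus receives an arithmetic progression of indices, so per-voxel samples can be drawn independently while the union of all voxels still enumerates the single global sequence.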

We construct weighted value algorithms and analyze their efficiency
for estimating the total monomer concentration, as well as the
total monomer and dimer concentration, in an ensemble
governed by the pure coagulation Smoluchowski equation.
For this purpose we use the following theorem.
\noindent **Theorem.**
*Let $N'_1$ and $N_1$ be the numbers of monomers in the ensemble before
and after the next collision, respectively.
If the mean value of $N_1$ is proportional to $N'_1$, then
the value function is proportional to $N_1$.*
\noindent
The theorem is also valid for $N_1+N_2$, where
$N_2$ stands for the number of dimers.
A considerable gain in computational costs is achieved
via the approximate value modeling of time between collisions in the
ensemble combined with the value modeling of the interacting pair number.
The technique presented is a development of the methods proposed
in [1].
This work is supported by Integration Grant No.22 SB RAS (2008), RFBR (grants 09-01-00035, 09-01-00639)

[1] M.A. Korotchenko, G.A. Mikhailov, and S.V. Rogasinsky. Modifications of weighted Monte Carlo algorithms for nonlinear kinetic equations. Comp. Math. Math. Phys., 47(12), 2007, pp. 2023--2033.

Let $x=(x_1,\ldots,x_n)$ be a point in the $n$-dimensional unit hypercube $H^n$ with Lebesgue measure $dx=dx_1\cdots dx_n$. Consider a model function $f(x_1,\ldots,x_n)$ defined in the hypercube $H^n$. It can be a black box model and need not have an analytical expression. If $f \in L_2$, global sensitivity indices provide adequate estimates for the impact upon $f$ of individual factors $x_i$ or of groups of factors. Such indices can be efficiently computed by Monte Carlo or quasi-Monte Carlo methods. However, if the required number of model evaluations is too high, the application of global sensitivity indices can become impractical.
We introduce new global sensitivity measures called derivative based global sensitivity measures (DGSM). The method is based on averaging local derivatives using Monte Carlo or Quasi Monte Carlo sampling methods. We also show that there is a link between DGSM and Sobol' total sensitivity indices which makes this approach theoretically sound and general. The computational time required for numerical evaluation of DGSM can be much lower than that for estimation of the Sobol' sensitivity indices, although it is problem dependent.
For evaluation of the Sobol' sensitivity indices and DGSM we apply recently developed
high-dimensional Sobol' sequence generator with advanced uniformity properties: Property A for all dimensions and Property A' for adjacent dimensions. We consider a relationship between Properties A and A' for Sobol' sequences. We show that for dimensions higher than 3 it is possible to construct the Sobol' sequence satisfying Property A' but not Property A.
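The DGSM idea can be sketched on an additive test function (illustrative choices throughout: the function, the sample size, and plain Monte Carlo in place of a Sobol' sequence). For uniform inputs on $[0,1]^d$, one known link of the kind the abstract mentions is the bound $S_i^{tot} \le \nu_i/(\pi^2 D)$, where $\nu_i = E[(\partial f/\partial x_i)^2]$ and $D$ is the total variance:

```python
import math
import random

random.seed(42)

# additive test function f(x) = sum_i c_i x_i on [0,1]^d
c = [1.0, 0.5, 0.1]
d = len(c)

def f(x):
    return sum(ci * xi for ci, xi in zip(c, x))

# DGSM nu_i = E[(df/dx_i)^2], estimated by sampling with finite differences
N, h = 2000, 1e-5
nu = [0.0] * d
for _ in range(N):
    x = [random.random() for _ in range(d)]
    fx = f(x)
    for i in range(d):
        xp = list(x)
        xp[i] += h
        nu[i] += ((f(xp) - fx) / h) ** 2
nu = [v / N for v in nu]

# analytic total variance and total sensitivity indices for this additive f
D = sum(ci * ci for ci in c) / 12.0
S_tot = [ci * ci / 12.0 / D for ci in c]
bounds = [v / (math.pi ** 2 * D) for v in nu]   # DGSM-based upper bounds
print(S_tot, bounds)
```

For this $f$ the partial derivatives are the constants $c_i$, so the DGSM estimates carry no Monte Carlo noise and the bound can be checked exactly.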

The problem of flow through a porous medium, with the permeability treated as a Gaussian random field, can be thought of as a high-dimensional problem. The dimensionality arises from, for example, the number of terms in a truncated Karhunen-Lo\`eve expansion, or the number of mesh points in a spatial discretization. We explore the use of quasi-Monte Carlo methods to study the dispersion of the fluid as it seeps through the medium, and compare the results with Monte Carlo calculations. The results are encouraging.
This is joint work with Ian Sloan (UNSW), Ivan Graham (U. Bath), Rob Scheichl (U. Bath) and Dirk Nuyens (K.U. Leuven).

\noindent
Small deviation probabilities of Gaussian processes have
been intensively studied in recent years, but very little is known about
\emph{smooth} processes, i.e. when the covariance function is smooth.
We give some results in this direction, taken from a recent
joint paper with F. Aurzada, F. Gao, W.V. Li and Q. Shao.
Consider the family of centered Gaussian processes $X_{\alpha,\beta}(t)$ with covariance function
$$
\mathbb{E} X_{\alpha,\beta}(s)X_{\alpha,\beta}(t)=
\frac{s^\alpha \, t^\alpha}{(s+t)^{\beta}}\quad
\mbox{for}\quad s,t\ge 0\,.
$$
Then the small deviation probabilities with respect to the $L_2$-norm satisfy
$$
-\log \mathbb{P}\left(\int_0^1|X_{\alpha,\beta}(t)|^2\,dt\le \varepsilon^2\right)
\asymp \left(\log\frac 1\varepsilon\right)^3\quad\mbox{if}\quad
2\alpha + 1>\beta >0\,,
$$
and with respect to the $\sup$-norm
$$
-\log \mathbb{P}\left(\sup_{0\le t\le 1}|X_{\alpha,\beta}(t)|\le \varepsilon\right)
\asymp \left(\log\frac 1\varepsilon\right)^3\quad\mbox{if}\quad
2\alpha >\beta >0\,.
$$
The proofs are based on the intimate relationship between small
deviation/small ball properties of the process and metric entropy of its
reproducing kernel Hilbert space, which was discovered by Kuelbs and Li (1992).

Array-RQMC is a recently proposed randomized quasi-Monte Carlo (RQMC) method designed to estimate the distribution of the state of a discrete-time Markov chain at each step,
and/or the expectation of a smooth cost function of the sample path of that chain, with
better accuracy than standard Monte Carlo for an equivalent number of sample paths. It
simulates $n$ copies of the chain simultaneously, using highly uniform point sets of cardinality
$n$, randomized independently at each step. The copies are sorted after each step, according
to some multidimensional order, for the purpose of assigning the RQMC points to the chains.
In this talk, we first review the main intuition behind the method, summarize what is known
about its rate of convergence as a function of $n$, and compare different ways of realizing the
assignment between the points and the chains. Previous empirical experiments have shown
that the method can be very effective to estimate the expectation of a cost function that
depends on the state at a fixed step number, and for linear combinations of such functions.
Here we test the effectiveness of the array-RQMC method in situations where the cost is
a function of the state at a random stopping time, which occurs for example in optimal
stopping problems such as the pricing of American-style options by Monte Carlo methods,
and even in the evaluation phase of a fixed stopping policy. The method still provides an
unbiased estimator of the expected cost in that case, but the variance reduction with respect
to standard Monte Carlo is not as impressive when the stopping times differ for the different
realizations of the chain compared with the case where all stopping times are equal (which
can be either a common random stopping time for all copies or just a deterministic step
number). We explain why.
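A minimal one-dimensional instance conveys the mechanics (greatly simplified from the method: the chain, the randomly shifted stratified point set, and the parameters are all illustrative choices). The $n$ copies are sorted by state at each step and matched, in order, with one point each:

```python
import random
from statistics import NormalDist

random.seed(7)
inv_cdf = NormalDist().inv_cdf

# Array-RQMC sketch for the random walk X_{k+1} = X_k + Z_k, Z_k ~ N(0,1):
# advance n copies together, matching sorted states to stratified points.
n, T = 256, 10
states = [0.0] * n
for _ in range(T):
    states.sort()                       # order the copies by state
    shift = random.random()
    u = [(i + shift) / n for i in range(n)]   # randomly shifted stratified set
    states = [x + inv_cdf(min(max(ui, 1e-12), 1.0 - 1e-12))
              for x, ui in zip(states, u)]

mean = sum(states) / n
print(mean)   # estimates E[X_T] = 0
```

The random shift keeps each step unbiased, while the stratification and the sorting together make the empirical distribution of the copies track the true state distribution more closely than $n$ independent paths would.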

The quality of an estimate produced by an MCMC algorithm depends on
probabilistic properties of the underlying Markov chain. Designing an
appropriate transition kernel $P$ that guarantees rapid convergence
to stationarity and efficient simulation is often a challenging task,
especially in high dimensions. For Metropolis algorithms there are
various optimal scaling results which provide ``prescriptions'' of how to do
this, though they typically depend on unknown characteristics of $\pi$.
For random scan Gibbs samplers, a further design decision is choosing the
selection probabilities (i.e., coordinate weightings) which will be used
to select which coordinate to update next. These are usually chosen to be
uniform, but some recent work has suggested that non-uniform
weightings may sometimes be preferable.
One promising avenue to improve sampling efficiency
is employing *adaptive MCMC algorithms*.
As an MCMC simulation progresses, more and more information about the
target distribution $\pi$ is learned. Adaptive MCMC attempts to use
this new information to redesign the transition kernel $P$ on the fly,
based on the current simulation output. Unfortunately, such adaptive algorithms are only valid if their ergodicity
can be established. The stochastic process for an adaptive
algorithm is no longer a Markov chain; the potential benefit of adaptive
MCMC comes at the price of requiring more sophisticated theoretical
analysis. Most of the previous work on
ergodicity of adaptive MCMC has concentrated on adapting Metropolis and
related algorithms, with less attention paid to ergodicity when adapting
the selection probabilities for random scan Gibbs samplers.
In the present talk we study the
ergodicity of various types of adaptive Gibbs samplers. It turns out that seemingly reasonable algorithms may exhibit transient behaviour and steadily drift to infinity. We are not aware of any valid ergodicity results in the literature that
consider adapting selection probabilities of random scan Gibbs samplers,
and we attempt to fill that gap herein, considering typical scenarios.
We also consider particular
methods of simultaneously adapting the selection probabilities and
proposal distributions of Metropolis within Gibbs samplers, and prove that in addition to being ergodic,
such algorithms are approximately optimal under certain
assumptions.
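The non-adaptive building block can be sketched as follows (illustrative: a toy bivariate normal target with fixed, nonuniform selection probabilities; the adaptation of those probabilities, which is the subject of the talk, is omitted):

```python
import random

random.seed(1)

# Random scan Gibbs sampler for a standard bivariate normal with
# correlation rho, using nonuniform coordinate-selection probabilities.
rho = 0.9
sd = (1.0 - rho * rho) ** 0.5
alpha = (0.7, 0.3)        # selection probabilities, kept fixed here

x = [0.0, 0.0]
first = []
for _ in range(20000):
    i = 0 if random.random() < alpha[0] else 1   # choose a coordinate
    # full conditional: X_i | X_j = x_j  ~  N(rho * x_j, 1 - rho^2)
    x[i] = random.gauss(rho * x[1 - i], sd)
    first.append(x[0])

mean = sum(first) / len(first)
var = sum((v - mean) ** 2 for v in first) / len(first)
print(mean, var)   # target marginal: mean 0, variance 1
```

An adaptive version would update `alpha` from the simulation output; the talk's point is that such updates must be done carefully, since seemingly reasonable adaptation rules can destroy ergodicity.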

The disintegration of a bulk liquid material into droplets in a surrounding gas (referred to as atomization) is extensively developed and applied to a variety of industrial processes. Jet printing technology offers a broad range of utilization in areas of microfluidic applications such as biotechnology, electronics, fuel cell manufacturing. Our work focuses on an innovative \emph{Spray on Demand} technology, where spray is generated only if required (in contrast with a continuous jetting device) [TemLecSou08]. A microchannel conveying fluid is excited and the drop hung at the beveled nozzle tip breaks up into droplets.
In some applications, a specific droplet size distribution is required. In traditional approaches only time-independent distributions are modelled. Here we propose a modelling of the evolution of droplet size distribution. We retain the \emph{Maximum Entropy Formalism} to derive the initial distribution. The atomization process tends to favor coalescence between spray droplets. We solve the population balance equation using a Monte Carlo method adapted from our previous schemes [LecWag04,LecTar07]. Comparisons with experiments extend the results given in [TemLecSou09] and show that the model can predict droplet size distribution. In addition the efficiency of the use of quasi-random numbers in the algorithm is tested.
[LecWag04] C. Lecot and W. Wagner, A quasi-Monte Carlo scheme for Smoluchowski's coagulation equation, Math. Comput., 73, 2004, 1953-1966.
[LecTar07] C. Lecot and A. Tarhini, A quasi-stochastic simulation of the general
dynamics equation for aerosols, Monte Carlo Methods Appl., 13, 2007, 369-388.
[TemLecSou08] M. Tembely, C. Lecot and A. Soucemarianadin, Theoretical study of a new spray on demand print-head, Proceedings of the 2008 International Conference on Mechanical Engineering (London, 2-4 July), World Congress on Engineering 2008, pp. 1357--1365.
[TemLecSou09] M. Tembely, C. Lecot and A. Soucemarianadin, Monte Carlo method for physically-based drop size distribution evolution, Proceedings of the 2009 International Conference of Applied and Engineering Mathematics (London, 1-3 July), World Congress on Engineering 2009, pp. 1224--1229.

In [1], Hesse, Kuo and Sloan describe a component-by-component
method for quadrature on weighted tensor product spaces defined on products of
unit spheres in $R^3$. We adapt the WTP (sparse grid) quadrature algorithm of
Wasilkowski and Wozniakowski [2] to this setting, and compare the initial and
asymptotic convergence rates of the two methods, giving both theoretical
bounds and numerical results for each, for different rates of decay of the
dimension weights. This is joint work with Markus Hegland, (ANU).

[1] K. Hesse, F. Y. Kuo and I. H. Sloan, A component-by-component approach to efficient numerical integration over products of spheres. Journal of Complexity, 23(1), 2007, pp. 25--51.

[2] G. W. Wasilkowski and H. Wozniakowski, Weighted Tensor Product Algorithms for Linear Multivariate Problems. Journal of Complexity, 15(3), 1999, pp. 402--447.

Importance sampling can be viewed as a quite general technique for improving the stochastic solution of quadrature problems as well as problems associated with the solution of integral equations. Additionally, lattice methods are particularly effective for integrating sufficiently smooth periodic functions. In this presentation we will discuss the advantage of using these ideas to transform non-periodic integrands over the unit hypercube into periodic ones, in order to improve the convergence of lattice-based quadrature formulas. We will show that with the proper choice of importance transformation, the order of the rate of convergence of a quadrature formula can be increased significantly.
We will also introduce a family of multidimensional dyadic sequences based on an extension of an idea of Sobol that are both infinite and at the same time return to lattice based methods with the appropriate choice of sample size. The effectiveness of these sequences will be shown with both theoretical and numerical results. Also, we will provide a potential method for generating these sequences and an interesting parallel with sequences generated by the fractional parts of integer multiples of irrationals.
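The periodization effect can be illustrated with the simplest such transformation, the baker's (tent) map, on a small Fibonacci lattice rule (all choices here are illustrative, not the importance transformations of the talk):

```python
import math

# Fibonacci lattice rule with n = 89 points, z = (1, 55)
n, z = 89, (1, 55)
f = lambda x, y: math.exp(x + y)      # smooth but non-periodic on [0,1]^2
exact = (math.e - 1.0) ** 2

def lattice(transform):
    s = 0.0
    for j in range(n):
        x = (j * z[0] / n) % 1.0
        y = (j * z[1] / n) % 1.0
        s += f(transform(x), transform(y))
    return s / n

ident = lambda u: u
tent = lambda u: 1.0 - abs(2.0 * u - 1.0)   # baker's transformation

plain = lattice(ident)
periodized = lattice(tent)
print(abs(plain - exact), abs(periodized - exact))
```

The tent map is measure preserving, so the transformed rule targets the same integral, but the composed integrand is continuous and periodic, which raises the convergence rate of the lattice rule.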

We develop exact simulation methods for pricing occupation-time derivatives such as step options, quantile options, Parisian options, etc. As an underlying asset price process we consider a solvable diffusion whose transition distribution function is available in analytical form. Examples considered in the talk include the constant elasticity of variance model and state-dependent-volatility hypergeometric diffusions obtained from a Bessel process by using change of measure and change of variable. For such diffusions originated from a Bessel process, we develop a precise path simulation method by making use of the Bessel bridge sampling and the conditioning on the first hitting time at the origin. Such an approach allows us to construct an unbiased estimator for any discretely-monitored occupation-time derivative. Another contribution of this talk is the probability distribution function of the occupation time of a general Brownian bridge. By combining these two results we obtain new estimators for step options.

The high computational power of low-cost graphics processing units (GPUs)
has now started to be
applied in scientific computations, such as Monte Carlo simulations.
Therefore, efficient pseudorandom number generators
fitting GPUs are in strong demand.
GPUs have an architecture tailored to
graphic processing.
They offer high parallelism in both processing and memory,
at the cost of a stratified organization (thus limiting
the communication between processes/memory).
Processes are grouped into several *blocks*, and
the processes in the same block essentially execute
one and the same procedure on the same memory, at
addresses shifted according to the process ID.
The memory is stratified so that
global memory accessible from any process
is slow in access, and local memory assigned for each block
is fast -- but is not accessible from
the processes in a different block.
Our proposal here is a type of
pseudorandom number generators,
*Mersenne Twisters for Graphic Processors*
(MTGP), which assigns a variant of Mersenne Twister
generator to each block of GPU, so that the processes
in the block share the internal state array
(say, of size 11213 bits) of one Mersenne Twister
in a local memory, and that the processes renew the whole
state array in parallel by avoiding dependencies.
MTGP uses the texture memory (a memory
which is read-only after the first set-up, intended for
graphic texture patterns and very quickly accessible in parallel)
as a look-up table for an ${\mathbf F}_2$-linear
transformation, which is the core of the recursion formula
of MTGP.
We implemented these codes in CUDA, a programming
environment for GeForce GPU supplied by NVIDIA.
We released parameters of MTGPs with periods
$2^{11213}-1$, $2^{23209}-1$ and $2^{44497}-1$.
Output formats of MTGPs are 32-bit integers and single
precision floating point numbers
obeying the IEEE 754 format. 64-bit integer version and
double precision floating point version are also available.
For a speed comparison, we measured the consumed
time for generating $5 \times 10^7$ single precision
real numbers by an MTGP of period $2^{11213}-1$,
and that by a standard
implementation of Mersenne Twisters with period
$2^{607}-1$ by NVIDIA named *CUDA SDK MT*.
For a GPU called GT120, MTGP (32.8ms) is 1.5 times
faster than SDK MT, and for a newer GPU called GTX260,
MTGP (4.8ms) is nearly four times faster than SDK MT.

Vector Monte Carlo algorithms are used for solving systems of integral equations. The distinctive feature of the vector algorithms is their matrix weight: after each elementary transition in the simulated Markov chain, the matrix weight is multiplied by the matrix of the system kernels divided by the transition density. We suggest calling such algorithms ``probabilistic algebraic''. Based on vector stochastic physical models, vector algorithms were constructed for solving both the transfer equation taking into account
polarization [1,2] and the multigroup transfer equation [3,4].
These algorithms were generalised and applied to solving a system of integral equations of the second kind. For this system, the unbiasedness and the finiteness of the variance of the vector estimators were studied.
In this talk a more detailed investigation (as compared with [5]) of the vector Monte Carlo estimator is given. We construct a vector estimator that is close to optimal according to the criterion of uniform boundedness of the variance estimator. We also discuss the construction of estimates for linear functionals of the solution. A dual representation for the second moments of such estimates is introduced here for the first time.
Using the proposed dual representation, we minimize the majorant mean-square error of the global (histogram-type) estimator of the solution. We also give for the first time a detailed description of a scalar Monte Carlo algorithm for solving a system of integral equations and present some comparisons between the scalar and vector algorithms. Finally, it is shown that the theory of vector estimators provides a simple criterion for the finiteness of the variance of estimators of multiple parametric derivatives.
This work is supported by RFBR 09-01-00035 and 10-01-09325.

[1] G.W. Kattawar and G.N. Plass, Radiation and polarization of multiply scattered light from haze and clouds. Applied Optics, 7(8), 1968, pp. 1519--1527.

[2] G.I. Marchuk, G.A. Mikhailov, M.A. Nazaraliev et al. (1980) Monte Carlo Method in Atmospheric Optics. Springer-Verlag.

[3] I.M. Sobol, Monte Carlo method for criticality computation in the multigroup approach. In: Monte Carlo Method in Transfer Problems, Atomizdat, Moscow, 1967, pp. 232--254 (in Russian).

[4] Spanier J., Gelbard E.M. Monte Carlo principles and neutron transport problems. Addison-Wesley, Reading, 1969.

[5] G.A. Mikhailov. Optimization of Monte Carlo Weighted Methods. Springer-Verlag, 1992.
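The scalar building block of such estimators can be sketched on a discrete $2\times 2$ analogue $x = Hx + b$ of a system of integral equations of the second kind (all numbers are illustrative; the matrix-weight vector estimators of the talk generalize this scheme):

```python
import random

random.seed(9)

# Scalar von Neumann-Ulam sketch for the 2x2 system x = H x + b.
H = [[0.2, 0.1],
     [0.0, 0.3]]
b = [1.0, 2.0]

def walk(i, p_stop=0.2):
    # collision estimator: score b at every visited state with the
    # running weight, which absorbs kernel / transition-density ratios
    score, w, k = 0.0, 1.0, i
    while True:
        score += w * b[k]
        if random.random() < p_stop:
            return score
        j = 0 if random.random() < 0.5 else 1        # uniform transition
        w *= H[k][j] / ((1.0 - p_stop) * 0.5)        # weight correction
        k = j

reps = 20000
est = sum(walk(0) for _ in range(reps)) / reps

# exact solution of (I - H) x = b for comparison
x2 = 2.0 / 0.7
x1 = (1.0 + 0.1 * x2) / 0.8
print(est, x1)
```

The estimator is unbiased because each transition's weight factor is exactly the kernel entry divided by the transition probability, so the expected score reproduces the Neumann series of the system.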

We consider a $\psi$-irreducible, geometrically ergodic Markov chain $X_n$. Assume that $f$ is a bounded real-valued function on $\mathcal{X}$ and that there exists $\lambda< 1$ such that for every
function $g$ with $\pi g = 0$ we have $\left\|Pg\right\|\leq\lambda\left\|g\right\|$, where $\left\|g\right\|=\left[\int g(x)^2\pi(dx)\right]^{\frac{1}{2}}$. Then the probabilities of large deviations of the sums $S_n=\sum_{k=1}^nf(X_k)$ satisfy Hoeffding-type inequalities:
$$P_\nu\left(S_n\geq n(\mu+\varepsilon)\right)\leq\left\|\frac{d\nu}{d\pi}\right\|_\infty\exp\left\{-2\frac{1-\lambda}{1+\lambda}\varepsilon^2n\right\},$$
where $\mu := \pi f$ and $\nu$ denotes any initial measure absolutely continuous with
respect to $\pi$. These bounds depend only on the stationary mean $\pi f$, the spectral gap and the end-points of the support of $f$. We generalize the results of Le\'on and Perron in two directions: the state space is general and we do not assume reversibility.
We use a technique similar to that of Le\'on and Perron. The first step is to construct an associated chain $X'_n$ and reduce the problem to properties of the operator corresponding to $E_\pi \exp(tS'_n)$.
In the second step the problem is reduced to the two-state case.
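The stated bound is easy to evaluate numerically. The sketch below is my own illustration (not from the talk); it assumes $f$ takes values in $[0,1]$ and treats $\|d\nu/d\pi\|_\infty$ as a known constant (equal to $1$ for a stationary start):

```python
import math

def hoeffding_markov_bound(eps, n, lam, density_sup=1.0):
    """Right-hand side of the Hoeffding-type inequality
    P_nu(S_n >= n*(mu + eps)) <= ||dnu/dpi||_inf * exp(-2*(1-lam)/(1+lam)*eps^2*n),
    assuming f takes values in [0, 1].  For lam = 0 (i.i.d. sampling)
    it reduces to the classical Hoeffding bound."""
    return density_sup * math.exp(-2.0 * (1.0 - lam) / (1.0 + lam) * eps**2 * n)

# The bound weakens as the spectral gap 1 - lam shrinks:
b_iid = hoeffding_markov_bound(0.1, 1000, 0.0)   # classical Hoeffding regime
b_mc = hoeffding_markov_bound(0.1, 1000, 0.9)    # slowly mixing chain
```

The factor $(1-\lambda)/(1+\lambda)$ is exactly the price paid for Markov dependence relative to the i.i.d. case.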

For every infinite set $X = \{ x_1 < x_2 < \ldots \} \subset \mathbb{N}$
the sequence
\begin{equation}
\label{a1}
\frac{x_1}{x_1}, \frac{x_1}{x_2}, \frac{x_2}{x_2},
\frac{x_1}{x_3}, \frac{x_2}{x_3}, \frac{x_3}{x_3}, \ldots ,
\frac{x_1}{x_n}, \frac{x_2}{x_n}, \ldots , \frac{x_n}{x_n}, \ldots
\end{equation}
is called *the ratio block sequence* of the set $X$. It is
formed by blocks $X_1, X_2, \ldots , X_n , \ldots $ where the $n$-th
block is
$$
X_n = \left( \frac{x_1}{x_n}, \frac{x_2}{x_n}, \ldots ,
\frac{x_n}{x_n}\right).
$$
To each $X_n$ we can attach the following step distribution
function
$$
F(X_n,x)=\frac{\# \{ i: i \leq n,\ \frac{x_i}{x_n} \leq x \}}{n}.
$$
Denote by $G(X_n)$ the set of all distribution functions of the
sequence of single blocks $(X_n)$. It is the set of all functions
$g(x)$ for which there exists an increasing sequence of indices
$(n_k)$ such that
$$\lim_{k\to\infty}F(X_{n_k},x)=g(x)$$
almost everywhere in $[0,1]$.
In this contribution we study relations between asymptotic densities of the original set of positive integers
and some properties of $G(X_n)$. The main theorem follows.
\begin{theorem}[see {[}1{]}]
Let $x_1 < x_2 < \ldots$ be a sequence of positive integers with positive lower asymptotic density $\underline{d}>0$
and upper asymptotic density $\overline{d}$. Then all d.f.s $g(x)\in G(X_n)$ are continuous, non-singular and
bounded by $h_1(x)\le g(x)\le h_2(x)$, where
\begin{align}
h_1(x)&=
\begin{cases}
x\frac{\underline{d}}{\,\overline{d}\,}\ ,
&\mbox{ if }x\in\left[0,\frac{1-\overline{d}}{1-\underline{d}}\right];\\
\frac{\underline{d}}{\frac{1}{x}-(1-\underline{d})}\ , &\mbox{otherwise},
\end{cases}
\label{eq35}\\
h_2(x)&=\min\left(x\frac{\overline{d}}{\,\underline{d}\,},1\right). \label{eq36}
\end{align}
Moreover, $h_1(x)$ and $h_2(x)$ are the best possible in the following sense: for given
$0<\underline{d}\le\overline{d}$, there exists $x_1 < x_2 < \ldots$ with lower and upper asymptotic densities
$\underline{d}$, $\overline{d}$, such that $h_1(x) = \inf G(X_n)$ and $h_2(x) \in G(X_n)$.
\end{theorem}

[1] V. Bal\'a\v z, L. Mi\v s\'ik, O. Strauch, J.T. T\'oth: \emph{Distribution functions of ratio sequences, III}, to appear in Publ. Math. Debrecen.
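A quick numerical illustration of the theorem (a hypothetical example of my own, not from the paper): for the even numbers $x_i = 2i$ the densities coincide, $\underline{d}=\overline{d}=1/2$, and the step d.f.s $F(X_n,\cdot)$ are squeezed between the envelopes $h_1$ and $h_2$:

```python
def F(xs, n, x):
    """Step d.f. of the n-th ratio block of the increasing integer sequence xs."""
    return sum(1 for i in range(n) if xs[i] / xs[n - 1] <= x) / n

def h1(x, dl, du):
    """Lower envelope from the theorem (dl, du: lower/upper asymptotic density)."""
    if x <= (1.0 - du) / (1.0 - dl):
        return x * dl / du
    return dl / (1.0 / x - (1.0 - dl))

def h2(x, dl, du):
    """Upper envelope from the theorem."""
    return min(x * du / dl, 1.0)

xs = [2 * i for i in range(1, 1001)]   # even numbers: dl = du = 1/2
checks = [(h1(x, 0.5, 0.5), F(xs, 1000, x), h2(x, 0.5, 0.5))
          for x in (0.1, 0.25, 0.5, 0.75, 0.9)]
```

Here both envelopes collapse to $g(x)=x$, matching the fact that $x_i/x_n = i/n$ is asymptotically uniformly distributed.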

In this paper, an extensive statistical investigation of the PRNGs of the two most
popular operating systems, Linux and Windows, has been made.
The *Linux pseudo-random number generator* (LPRNG) is the most popular open-source
pseudo-random number generator. The LPRNG is part of the kernel of all Linux
distributions and is based on generating randomness from the entropy of operating system
events. The output of this generator is used for almost every security protocol, including
TLS/SSL key generation, choosing TCP sequence numbers, and file system and email
encryption. The study carried out on the LPRNG shows that 100\% of the sequences generated
by the `/dev/random` generator passed the National Institute of Standards and Technology
(NIST) test suite. However, 30\% of these sequences fail to pass tests based on the
next-bit theory. It is thus concluded that the random bit strings generated by the `/dev/random`
generator are weak: if a hacker is able to predict a few bits, he can potentially
recover the key, which can compromise the entire security.
The pseudo-randomness of the output of the Windows Pseudo Random Generator (WPRNG)
is crucial for the security of all applications running on Windows, such as Internet
Explorer. In this paper, the WPRNG is examined, and the experiments
show that the random sequences generated by the WPRNG are even weaker than those
generated by `/dev/random`. Fifty-five percent of the random sequences generated using
the WPRNG failed to pass the NIST test suite, and 72\% of the sequences fail tests based on
the next-bit theory, such as ``A new universal test for bit strings''. The results indicate that
the WPRNG used in Windows operating systems does not produce quality
random sequences, which are essential for all security applications.
To overcome the inherent problems of PRNGs, the *Universal Fuzzy Statistical Test for
Pseudo Random Number Generators* (UFST-PRNG) has been proposed. The UFST-PRNG test is based on standard statistical tests, the next-bit test theory, and fuzzy logic.
The test has the very useful property that if a random string passes UFST-PRNG, the string
will also pass the other standard statistical tests and tests based on next-bit theory. Thus, the
UFST-PRNG may be used as a single test to check the quality of randomness of bit
strings.
Towards the end of the paper, we propose the UFST-PRNG framework, which uses the
UFST-PRNG test to evaluate the ``goodness'' of random bits generated by built-in OS
PRNGs. The framework ensures delivery of statistically sound random numbers to the
requesting security applications and will thus help improve the overall
security of applications that depend on random sequences from the operating system.

Stochastic differential equations driven by fractional Brownian motions have recently found applications in various fields, including electrical engineering, biophysics and financial modelling. While the existence and uniqueness result is a canonical example for an application of rough paths theory, so far little is known about implementable numerical methods for these equations.
The Wong-Zakai approximation of such an SDE is itself not an implementable approximation scheme. However, we show that discretising the Wong-Zakai approximation with a numerical method of at least second order for deterministic ordinary differential equations leads to an implementable and convergent approximation scheme for an SDE driven by fBms with Hurst parameter $H>1/3$.

Gaussian random fields on uniform rectangular grids, with isotropic covariances, are characterized by block-Toeplitz (with Toeplitz blocks) covariance matrices. When producing instances of the random field, it is efficient to embed the covariance matrix in a block circulant matrix, and apply Fourier methods to find the eigenvalue decomposition. However, positive definiteness of the circulant extension is not always guaranteed unless sufficient padding is employed in the embedding. We present results demonstrating that integrations over the embedded probability space remain consistent with the original probability space,
and we give minimal requirements on the padding scheme for the embedding.
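The following sketch is my own one-dimensional illustration of the FFT-based sampling step and of the positive-definiteness check that the padding is meant to guarantee (the exponential covariance and the padding amount are arbitrary choices for the example):

```python
import numpy as np

def circulant_embedding_sample(cov, n, pad, rng):
    """Sample a stationary Gaussian vector with covariance cov(|i-j|) on a grid
    of n points, via embedding in a circulant matrix of size m = 2*(n-1+pad).
    Returns two independent samples (real and imaginary parts)."""
    m = 2 * (n - 1 + pad)
    dist = np.minimum(np.arange(m), m - np.arange(m))  # circulant distances
    lam = np.fft.fft(cov(dist.astype(float))).real     # circulant eigenvalues
    if lam.min() < -1e-10 * lam.max():
        raise ValueError("embedding not positive semidefinite: increase padding")
    lam = np.clip(lam, 0.0, None)
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    y = np.fft.fft(np.sqrt(lam / m) * z)   # Re(y), Im(y) ~ N(0, Toeplitz cov)
    return y.real[:n], y.imag[:n]

rng = np.random.default_rng(0)
cov = lambda h: np.exp(-h / 5.0)           # hypothetical isotropic covariance
a, b = circulant_embedding_sample(cov, 32, 32, rng)
```

For covariances without a nonnegative embedding, the `ValueError` branch is exactly where more padding would be requested.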

Hybrid sequences are sequences of points in a multidimensional unit cube that are composed from
lower-dimensional sequences of two different types. In practice, one combines a low-discrepancy
sequence and a sequence of pseudorandom numbers or vectors, in which case one obtains the hybrid
sequences first proposed by Spanier in 1995. Some probabilistic results on the discrepancy of hybrid
sequences are already available in the literature.
In this talk we present deterministic discrepancy bounds for various types of hybrid sequences. The
sequences that we combine include Halton sequences, Kronecker sequences, linear congruential
sequences, explicit nonlinear congruential sequences, and digital inversive sequences. The discrepancy
bounds imply sufficient conditions for the asymptotic uniform distribution of the considered hybrid
sequences and for the convergence of the corresponding hybrid Monte Carlo methods.

Integral equations with Lipschitz kernels and right-hand sides
are intractable for deterministic methods, the complexity
increases exponentially in the dimension $d$.
This is true even if we only want to compute a single function
value of the solution.
For this latter problem
we study coin tossing algorithms (or restricted Monte Carlo methods),
where only random bits are allowed.
We construct a
restricted Monte Carlo method with error $\varepsilon$ that uses
roughly $\varepsilon^{-2}$ function values and only
$d \, \log^2 \varepsilon$ random bits. The number of arithmetic operations
is of the order $\varepsilon^{-2} + d \, \log^2 \varepsilon$.
Hence, the cost of our algorithm increases only mildly with the
dimension $d$, we obtain the upper bound
$C \cdot ( \varepsilon^{-2} + d \, \log^2 \varepsilon )$ for the complexity.
In particular, the problem is tractable for coin tossing algorithms.
This is an example from our book ``Tractability of Multivariate
Problems'', Volume II, 2010, with Henryk Wo\'zniakowski
and based on joint work with Harald Pfeiffer.

In classical multivariate quadrature with product rules it is natural
to select an appropriate one-dimensional quadrature rule for each
dimension in accordance with its importance. E.g., one may choose to
use a Gauss rule of degree 13 for dimension 1, then a rule of degree
5 for dimension 2 and so on.
With lattice rules (with a prime number of points) these
one-dimensional degrees are all equal as we have the same projected
point set in each dimension. We can, however, try to maximize the
shape of the set of frequency indices (e.g., the Zaremba or hyperbolic
cross for the Zaremba degree) while weighting the contributions of the
different dimensions. This gives us a weighted degree of exactness.
We present a new algorithm to construct lattice rules based on a
weighted degree of exactness and a worst case error.

I will discuss two types of randomizations for the Halton sequence. Since the 1970s, several researchers have introduced permutations that scramble the digits to improve the uniformity of the original Halton sequence. The first question I want to investigate is how a Halton sequence scrambled by a randomly selected permutation differs from a Halton sequence scrambled by the best deterministic permutations. This investigation will be numerical. I will then show that any scrambled Halton sequence, independently of the way the permutations are constructed, can be randomized by the random-start approach that was first applied to the original Halton sequence by Wang and Hickernell. This randomization approach gives unbiased estimates. Numerical examples will be used to compare various scrambled Halton sequences when they are randomized via the random-start approach.
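As an illustration of the random-start idea (a sketch under my own assumptions, not code from the talk): drawing one random start index per coordinate and continuing the radical-inverse sequence from there leaves the Halton structure intact while randomizing the points.

```python
import random

def radical_inverse(b, i):
    """Van der Corput radical inverse of the integer i in base b."""
    x, f = 0.0, 1.0 / b
    while i > 0:
        x += (i % b) * f
        i //= b
        f /= b
    return x

def random_start_halton(n, bases, rng):
    """First n points of a Halton sequence with an independent random start
    index per coordinate (a finite-index sketch of the random-start scheme)."""
    starts = [rng.randrange(b ** 16) for b in bases]
    return [[radical_inverse(b, n0 + i) for b, n0 in zip(bases, starts)]
            for i in range(n)]

rng = random.Random(0)
pts = random_start_halton(1000, [2, 3], rng)
est = sum(x * y for x, y in pts) / len(pts)  # estimates int of x*y = 1/4
```

Averaging such estimates over independent random starts gives the unbiasedness mentioned above, while each individual point set retains low discrepancy.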

To many scientists, Monte Carlo means Markov
chain Monte Carlo (MCMC), which is more powerful
but also more complicated than plain
Monte Carlo sampling. Quasi-Monte Carlo (QMC) methods
are usually incorporated into plain Monte Carlo
but recently there have been experiments and
theoretical results to support the use of QMC in
MCMC applications.
This talk will review some of the theory of QMC
for MCMC and present new results.
Part of this work is based on contributions
by Josef Dick, Makoto Matsumoto and Takuji Nishimura.

Estimating the ground state energy of a multiparticle system
with relative error $\varepsilon$ using deterministic classical algorithms has cost
that grows exponentially with the number of particles. The problem
depends on a number of state variables $d$ that is proportional
to the number of particles and suffers from the curse of dimensionality.
Quantum computers can vanquish this curse.
In particular, we study a ground state eigenvalue problem and
exhibit a quantum algorithm that achieves
relative error $\varepsilon$ using a number of qubits
$C^\prime d\log \varepsilon^{-1}$ with total cost (number of queries plus
other quantum operations) $Cd\varepsilon^{-(3+\delta)}$,
where $\delta>0$ is arbitrarily small and $C$ and $C^\prime$ are independent
of $d$ and $\varepsilon$.

We approximate the solution of some linear systems of SDEs driven by a fractional Brownian motion $B^{H}$ with Hurst parameter $H \in (\frac{1}{2},1)$ in the Wick-It{\^o} sense, including a geometric fractional Brownian motion. To this end, we apply a Donsker-type approximation of the fractional Brownian motion by disturbed binary random walks due to Sottinen. Moreover, we replace the rather complicated Wick products by their discrete counterpart, acting on the binary variables, in the corresponding systems of Wick difference equations. As the solutions of the SDEs admit series representations in terms of Wick powers, a key to the proof of our Euler scheme is an approximation of the Hermite recursion formula for the Wick powers of $B^{H}$. Furthermore we extend this approach to fractional Wiener integrals of continuous functions.

[1] Bender, C., Parczewski, P., Approximating a geometric fractional Brownian motion and related processes via discrete Wick calculus. *Bernoulli (2010)*

[2] Sottinen, T., Fractional Brownian motion, random walks and binary market models. *Finance and Stochastics.* **5**, 343-355 (2001).

In a series of papers H. Faure established a method to compute the diaphony of generalized van der Corput sequences in arbitrary base $b$. In 2005 he proved that the sequence generated by the identical permutation has, among others, the worst distribution behaviour with respect to the diaphony. In our talk we will present recent results in our attempt to identify the class of permutations that show the best distribution behaviour with respect to the diaphony. We will prove a lower bound for the diaphony of all generalized van der Corput sequences in arbitrary base $b$. Furthermore we will discuss possibilities to obtain good permutations in any base $b$.
This work is supported by the Austrian Science Foundation (FWF), Project S9609, that is part of the Austrian National Research Network ``Analytic Combinatorics and Probabilistic Number Theory''.

We use MCMC on the space of drift functions and missing
observations to sample from the posterior measure arising
from a Gaussian prior on drift function space and discrete
observations of a scalar Ito SDE. We present applications
to the CIR model and to molecular dynamics data as well
as some observations on robustness and convergence.

We propose an approach that approximates the solution of large-scale linear inverse problems within a low-dimensional subspace using Monte Carlo type simulation. A central characteristic of our methodology, which also involves Tikhonov regularization and regression, is the use of low-dimensional calculations and storage in solving high-dimensional problems. To enhance the efficiency of the simulation we incorporate importance sampling variance reduction schemes that rely on designing sampling distributions that exploit the smooth structure of the underlying models. A few approaches for designing near-optimal distributions based on the basis functions and the model matrix are discussed, addressing explicitly overdetermined/square as well as underdetermined systems. The performance of our method has been numerically evaluated on a number of classical linear ill-posed problems arising from Fredholm integral equations of the first kind. The computational experiments demonstrate an adequate reduction in simulation noise after a relatively small number of samples and an attendant improvement in the quality of the resulting approximate solution. Two important approximation issues arise within this context: first, the solution of the problem should admit a reasonably accurate representation in terms of a relatively small number of basis functions, and second, the problem should possess a reasonably continuous/smooth structure so that effective importance sampling distributions can be designed with relatively small effort. In this regard, the classical setting of linear ill-posed inverse problems with a priori information on the solution is well suited to this framework. In our computational experiments, simple piecewise polynomial approximations have proved adequate, although other more efficient alternatives may be possible.
We finally note that the use of regularized regression based on a sample covariance obtained as a byproduct of the simulation was another critical element for the success of our methodology with nearly singular problems.

We present a multivariate extension to Clenshaw-Curtis quadrature based on the hyperinterpolation theory. At the heart of it, a cubature rule for an integral with Chebyshev weight function is needed. Several point sets have been discussed in this context but we introduce Chebyshev lattices as generalizing framework. This has several advantages: (1) it has a very natural extension to higher dimensions, (2) allows for a systematic search for good point sets and (3) because of the construction, there is a direct link with the Fourier transform that can be used to reduce the computational cost.
It will be shown that almost all known two- and three-dimensional point sets for this Chebyshev weighted setting fit into the framework. We give them a uniform description using Chebyshev lattices and reveal some previously unobserved similarities. Blending, a not so commonly known extension to higher dimensions, also fits into this framework.

We investigate pointwise approximation via Euler algorithms of scalar stochastic differential equations (SDEs) of the form
\begin{equation}
\label{PROBLEM1}
\left\{ \begin{array}{ll}
dX(t)=\sigma_1(t)a(X(t))dt+\sigma_2(t)b(X(t))dW(t), &t\in [0,T], \\
X(0)=\eta,
\end{array}\right.
\end{equation}
where we allow existence of singularities for coefficients $\sigma_1,\sigma_2:[0,T]\to\mathbb{R}$.
We first consider the regular case, when $\sigma_1,\sigma_2$ belong to the class of H\"older continuous functions with parameter $\varrho\in (0,1]$. In the additive noise case ($b\equiv const$), we show that the classical Euler algorithm $X^E$ has the optimal worst case error $\Theta(n^{-\varrho})$, $\varrho\in (0,1]$, among all adaptive algorithms. In the multiplicative noise case, we prove that the algorithm $X^E$ has the error $O(n^{-\min\{1/2,\varrho\}})$, which is optimal if $\varrho\in (0,1/2]$.
In the singular case, we consider a class of functions $\sigma_1,\sigma_2$ that are piecewise H\"older continuous, except for a finite number of unknown singular points. We investigate the error of the Euler algorithm $X^E$ and conclude that only singularities of $\sigma_2$ influence its accuracy. This allows us to show that, in both the additive and multiplicative noise case, the algorithm $X^E$ has error $O(n^{-\min\{1/2,\varrho\}})$. Next, we show that any algorithm nonadaptive with respect to $\sigma_2$ has error no smaller than $n^{-\min\{1/2,\varrho\}}$, even if there is at most one unknown singular point. In order to preserve the same order of convergence as for regular functions, we therefore turn to adaptive algorithms. In the additive noise case, when the coefficient $\sigma_2$ has at most one singularity, we construct an adaptive Euler algorithm $\bar X^E$ which locates the singularity of $\sigma_2$ and preserves the optimal error $\Theta(n^{-\varrho})$ known from the regular case.
We also consider the case of multiple singularities.
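For orientation, the classical (nonadaptive) Euler algorithm $X^E$ for the equation above on a uniform grid is straightforward; the sketch below is generic, and the test case is a hypothetical choice of mine (an Ornstein-Uhlenbeck equation with $\sigma_1=\sigma_2\equiv 1$), not an example from the talk:

```python
import math, random

def euler_scheme(sigma1, sigma2, a, b, eta, T, n, rng):
    """Classical Euler approximation X^E of
    dX = sigma1(t) a(X) dt + sigma2(t) b(X) dW on [0, T] with n uniform steps."""
    dt = T / n
    x = eta
    for k in range(n):
        t = k * dt
        dw = rng.gauss(0.0, math.sqrt(dt))
        x += sigma1(t) * a(x) * dt + sigma2(t) * b(x) * dw
    return x

# Hypothetical additive-noise test case: dX = -X dt + dW, E X(1) = eta * e^{-1}.
rng = random.Random(0)
mean = sum(euler_scheme(lambda t: 1.0, lambda t: 1.0,
                        lambda x: -x, lambda x: 1.0,
                        1.0, 1.0, 100, rng) for _ in range(10000)) / 10000
```

The adaptive variant $\bar X^E$ discussed above would additionally refine the grid around the detected singular points of $\sigma_2$.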

We study random fields $X$ on $D \subset {\mathbb R}^d$
that are defined in terms of wavelet expansions,
which permit sparsity. For $D=[0,1]$
the corresponding stochastic processes $X$ have been
introduced by Abramovich, Sapatinas, Silverman (1998)
and Bochkina (2006) in the context of
Bayesian non-parametric regression.
In this talk we present error bounds
for linear and non-linear approximation
and we determine the Besov regularity of the
realizations of $X$ with $L_p$-norms for $0 < p < \infty$.
Joint work with
P. Cioica, S. Dahlke, S. Kinzel (all from Marburg),
F. Lindner, R. Schilling (both from Dresden),
and T. Raasch (Mainz).
Supported by the DFG within the SPP 1324.

Let the function $f$ be given and $\pi$ be a probability measure.
The goal is to approximate the expectation of $f$ with respect to $\pi$, such that
\[
S(f)=\int_D f(y) \pi(dy)
\]
is the quantity of interest.
Direct generation of samples from the desired distribution is in general not possible.
Thus it is reasonable to use Markov chain sampling with a burn-in $n_0$.
Suppose that the Markov chain fulfills some convergence conditions, e.g. has an absolute $L_2$-spectral gap
or is uniformly ergodic.
Then an explicit bound of the mean square error of the Markov chain Monte Carlo approximation
\begin{equation*} \label{sum}
S_{n,n_0}(f):=\frac{1}{n} \sum_{i=1}^n f(X_{i+n_0})
\end{equation*}
can be shown.
If there exists an $L_2$-spectral gap then one can get error bounds for integrands $f\in L_p$ where $p\in(2,\infty]$. If the Markov chain is reversible and uniformly ergodic, then one can get an error bound for $f\in L_2$.
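A generic sketch of the estimator $S_{n,n_0}$ (my own illustration; the random-walk Metropolis kernel for a standard normal target is an example chosen by me, not from the talk):

```python
import math, random

def mcmc_estimate(step, x0, f, n, n0, rng):
    """S_{n,n0}(f): run the chain for n0 burn-in steps, then average f
    over the next n states."""
    x = x0
    for _ in range(n0):
        x = step(x, rng)
    total = 0.0
    for _ in range(n):
        x = step(x, rng)
        total += f(x)
    return total / n

def rw_metropolis_step(x, rng):
    """Random-walk Metropolis step targeting the standard normal density."""
    y = x + rng.uniform(-1.0, 1.0)
    # accept with probability min(1, pi(y)/pi(x)) = min(1, exp((x^2 - y^2)/2))
    if math.log(rng.random()) < (x * x - y * y) / 2.0:
        return y
    return x

rng = random.Random(0)
second_moment = mcmc_estimate(rw_metropolis_step, 10.0, lambda x: x * x,
                              50000, 1000, rng)   # E X^2 = 1 under N(0,1)
```

Starting the chain far out at $x_0=10$ shows the role of the burn-in $n_0$: the early, strongly biased states are discarded before averaging.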

The talk first gives a (partial) overview of known results on the convergence of basic adaptive MCMC algorithms. The second part of the talk discusses a recent result on the convergence of the original Adaptive Metropolis algorithm for targets with unbounded support (joint work with Matti Vihola, Jyv\"askyl\"a).

Suppose that $f$ is a suitable function and $X=(X_t)_{t\in [0,1]}$ is a stochastic process such that $dX_t=\sigma(X_t)dW_t$, where $W$ is the standard Brownian motion and $\sigma$ satisfies certain regularity properties. Moreover, suppose that $(\mathcal{F}_t)_{t\in[0,1]}$ is the augmentation of the filtration generated by $W$. We consider the $L_2$-approximation error, which occurs when the stochastic integral
$$
f(X_1)=\mathbb{E}f(X_1)+\int_{(0,1]}\varphi(t, X_t)dX_t\ \ a.s.
$$
is approximated by
$$
\mathbb{E}f(X_1) + \sum_{i=1}^n v_{i-1}(X_{t_i}-X_{t_{i-1}}),
$$
where $v=(v_i)_{i=0}^{n-1}$ is the sequence of $\mathcal{F}_{t_i}$-measurable step functions $v_i:\Omega\rightarrow \mathbb{R}$. Earlier results
by C. and S. Geiss [1] indicate that in most cases the optimal quadratic approximation rate is $\frac{1}{\sqrt{n}}$ when one optimizes over deterministic time-nets of cardinality $n+1$. However, Hujo [2] has shown that there are functions such that
the approximation rate is worse than $\frac{1}{\sqrt{n}}$. In this talk we characterize those functions $f$ with the approximation rate $\frac{1}{\sqrt{n}}$ by using the functional
$
H_Xf(t):=\left\Vert\sigma\frac{\partial\varphi}{\partial x}(t, X_t)\right\Vert_{L_2}$, $t\in [0,1).
$
We also give conditions for the optimal approximation rate $\frac{1}{\sqrt{n^\beta}}$ with
$\beta \in (0,1)$ in terms of the $H_X$-functional. The talk is based on the paper [3].

[1] Geiss, C., Geiss, S.: *On approximation of a class of stochastic integrals and interpolation*, Stochastics and Stochastics Reports 76 (2004) 339--362.

[2] Hujo, M.: *Is the approximation rate for certain stochastic integrals always $1/\sqrt{n}$?*, Journal of Theoretical Probability 19 (2006) 190--203.

[3] Sepp\"al\"a, H: *On the optimal approximation of certain stochastic integrals*. To appear in Journal of Approximation Theory.

The Monte Carlo technique can be used as a method of statistical trials to calculate surface areas or object volumes by employing random numbers. It is especially helpful for complicated functions or irregular shapes in higher dimensional spaces.
There are two methods of Monte Carlo integration. Crude Monte Carlo inserts the random numbers into the function and calculates the average of the values obtained. This method is preferred in practice since it is fast and efficient. On the other hand, Hit-or-Miss Monte Carlo casts $n$ points into the space containing the function and finds the ratio of points falling within the integration region. In this approach the outcome of each point generation is either a success (within the integration region) or a failure (outside of the integration region). Therefore the outcome of an experiment with $n$ random points will have a binomial distribution.
Relying on a multinomial distribution, we present here a new Hit-or-Miss integration technique, called the Mean Estimator, which expresses the integral area via four different regions described by $f(x)$, $f(1-x)$, $1-f(x)$, and $1-f(1-x)$. By taking the average of the estimates of these four regions it is possible to increase the efficiency while retaining a reasonable calculation speed.
Our new method creates an estimator with smaller variance than the Crude Monte Carlo. The efficiency can be improved further by choosing suitable rectangles other than unit square in order to minimize intersections of sub-regions or empty regions in the plane. There are cases very common in physics, chemistry, medicine, genetics or biology where there is no explicit function defining the region or the volume to be estimated. A distinct advantage of our method is its applicability to these problems. In this application, instead of four different forms of function expression, different rotations of the figure can be used in Monte Carlo simulation. Performance of the proposed technique is demonstrated in two dimensional space by using six different functions. It is worth studying the application for different volumes in higher dimensions. It is also a promising subject to investigate the suitability of the method to multi-core processors.
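A minimal sketch of the four-region averaging for a function $f:[0,1]\to[0,1]$ (my own implementation of the idea described above, not the authors' code):

```python
import random

def mean_estimator(f, n, rng):
    """Hit-or-Miss estimate of I = integral of f over [0,1], averaging the four
    estimates from the regions f(x), f(1-x), 1-f(x) and 1-f(1-x).
    The same point (x, u) is reused for all four regions, which makes the
    individual estimates negatively correlated and reduces the variance."""
    hits = [0, 0, 0, 0]
    for _ in range(n):
        x, u = rng.random(), rng.random()
        hits[0] += u <= f(x)
        hits[1] += u <= f(1.0 - x)
        hits[2] += u <= 1.0 - f(x)        # estimates 1 - I
        hits[3] += u <= 1.0 - f(1.0 - x)  # estimates 1 - I
    ests = [hits[0] / n, hits[1] / n, 1.0 - hits[2] / n, 1.0 - hits[3] / n]
    return sum(ests) / 4.0

rng = random.Random(0)
approx = mean_estimator(lambda x: x * x, 20000, rng)   # exact value: 1/3
```

For shapes given only as a membership test rather than a formula, the four function forms would be replaced by rotations of the figure, as described above.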

The variance of randomly shifted lattice rules for numerical multiple integration can be expressed by the Fourier coefficients over the dual lattice. Bounds on the variance can then be obtained by making certain assumptions on the smoothness of the integrands, which is reflected in the rate of decay of their corresponding Fourier coefficients.
Here we only assume that the integrands are square integrable; in particular, we allow the associated Fourier series to be non-absolutely convergent. Thus, our assumptions are weaker than the usual assumptions made on the space of functions integrated by lattice rules. We obtain a bound for the variance that is the same as the worst-case error in weighted Korobov spaces, but can be used for a larger class of integrands. It is known that the squared worst-case error in Korobov spaces converges as $O(n^{-\alpha}(\log n)^{d\alpha})$ for $\alpha>1$. We show that we can obtain a refined convergence order of $O(n^{-\beta}(\log\log n)^\beta)$, where $1\le\beta<\alpha$, which has the merit of being independent of the dimension.
We then examine the impact of these randomly shifted lattice rules for some problems that arise in practice and lead to integrals that fit our assumptions. Such examples are the likelihood estimation in a mixed logit model and the payoff function of a barrier option in finance.
Under the assumption that weights are general and the number of points is composite, we construct good lattice rules by using the well known `component by component' technique or some simpler randomised algorithms and compare the performances of these algorithms.
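For concreteness, here is a minimal randomly shifted rank-1 lattice rule (a generic sketch of my own; the Fibonacci generating vector below is an arbitrary illustrative choice, not one produced by the component-by-component construction):

```python
import random

def shifted_lattice_rule(f, z, n, shifts, rng):
    """Average of the rank-1 lattice rule { (k*z/n + delta) mod 1 : k < n }
    over several independent uniform random shifts delta; each shifted rule
    is an unbiased estimator of the integral of f over the unit cube."""
    d = len(z)
    estimates = []
    for _ in range(shifts):
        delta = [rng.random() for _ in range(d)]
        total = 0.0
        for k in range(n):
            point = [((k * zj) / n + dj) % 1.0 for zj, dj in zip(z, delta)]
            total += f(point)
        estimates.append(total / n)
    return sum(estimates) / shifts

rng = random.Random(0)
f = lambda p: p[0] * p[1]                    # integral over [0,1]^2 is 1/4
est = shifted_lattice_rule(f, [1, 55], 89, 10, rng)
```

The spread of the per-shift estimates is precisely the variance whose Fourier-coefficient expression over the dual lattice is analysed above.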

Quasi-Monte Carlo methods can be used to approximate integrals in various weighted spaces of functions. The uniformity of the distribution of QMC point sets can be assessed by the weighted star discrepancy. Such a discrepancy is based on an $L_\infty$ maximum error and has the merit that, compared with other types of discrepancy, it requires less smoothness from the integrand.
This talk will summarise my results on the weighted star discrepancy of lattice rules, which belong to the larger family of QMC methods. Some recent results on the existence and construction of lattice rules have been obtained by assuming that the weights are ``general'', the number of points is composite (not necessarily prime) and the lattice rule can be shifted arbitrarily. These assumptions have only been considered before separately, not altogether.
We then obtain bounds on the weighted star discrepancy and establish conditions for tractability and strong tractability. We also compare these bounds with the actual value of the discrepancy in low dimensions and discuss the future of the weighted star discrepancy as a measure of goodness from a more practical viewpoint.

Using the notation of the monographs [1], [2] and [3]
we define:
\par\noindent$\bullet$
$x_n$, $n=1,2,\ldots$ is a sequence
from the unit interval $[0,1)$;
\par\noindent$\bullet$
$F_N(x)=\frac{\#\{n\le N;x_n\in[0,x)\}}{N}$
is the step distribution function of $x_1,\ldots,x_N$,
while $F_N(1)=1$;
\par\noindent$\bullet$
$g(x)$ is a distribution function (d.f.) of the sequence
$x_n$, $n=1,2,\ldots$ if an increasing
sequence of positive integers $N_1 < N_2 < \ldots$
exists such that
$\lim_{k\to\infty}F_{N_k}(x)=g(x)$ a.e. on $[0,1]$;
\par\noindent$\bullet$
$G(x_n)$ is the set of all d.f.s
of the given sequence $x_n$, $n=1,2,\ldots$.
\par
We present here two new applications of the theory of d.f.s.
**Benford's law.**
A sequence $x_n$, $n=1,2,\ldots$, of positive real numbers
expressed in base $b$
satisfies the Benford law (B.L.) of order $r$
in base $b$ if for every $r$-digit number
$K=k_1k_2\cdots k_r$ expressed in base $b$ we have
$$
\lim_{N\to\infty}\frac{\#\{n\le N;
\mbox{ first }r\mbox{ digits of }x_{n}\mbox{ are equal to }K\}}{N}=
\log_b(K+1)-\log_b K,
$$
where for $x_n$ of the type $x_n=0.00\cdots0k_1k_2\cdots k_r\cdots$
we omit the leading zero digits and use $x_n=k_1.k_2\cdots k_r\cdots$.
\par
If a sequence $x_n$, $n=1,2,\ldots$,
satisfies B.L. of order $r$, for every
$r=1,2,\ldots$, then it is said that
$x_n$ satisfies **strong B.L.**
It is well known that
{\it a sequence $x_n$, $x_n>0$, $n=1,2,\ldots$,
satisfies strong B.L. if and only if the sequence
$\log_b x_n\bmod1$ is u.d. in $[0,1)$.}
\par
In [5]
we study strong B.L. of sequences $x_n\in[0,1)$
via the theory of d.f.s and we have proved
\par\noindent
**Theorem 1.**
Let $x_n$, $n=1,2,\ldots$, be a sequence
in $(0,1)$ and $G(x_n)$ be the set of all d.f.s of $x_n$.
Assume that every d.f. $g(x)\in G(x_n)$
is continuous at $x=0$.
Then the sequence $x_n$ satisfies strong B.L.
in the base $b$ if and only if
for every $g(x)\in G(x_n)$ we have
$$
x=\sum_{i=0}^\infty\left(
g\left(\frac{1}{b^i}\right)-g\left(\frac{1}{b^{i+x}}\right)
\right)\mbox{ for }x\in[0,1].
\qquad (1)
$$
For example, the d.f.
$$
g(x)=
\begin{cases}
x&\mbox{ if }x\in\left[0,\frac{1}{b}\right],\\
1+\log_b x+(1-x)\frac{1}{b-1}&\mbox{ if }x\in\left[\frac{1}{b},1\right]
\end{cases}
$$
is a solution of (1).
\par\noindent
**Theorem 2.**
For a sequence $x_n\in(0,1)$, $n=1,2,\ldots$,
assume that every d.f. $g(x)\in G(x_n)$ is continuous at $x=0$.
Then there exist only finitely many different integer bases $b$
for which the sequence $x_n$ satisfies strong B.L. simultaneously.
Moreover, if the sequence $x_n$ satisfies strong B.L. in base $b$,
and for some $k=1,2,\ldots$ the $k$th root $\sqrt[k]{b}$ is an integer,
then $x_n$ satisfies strong B.L. also in base $\sqrt[k]{b}$.
\par
By contrast, the sequence $x_n=n^n$ satisfies strong B.L.
for an arbitrary integer base $b$, because
$\log_b n^n=n\log n\frac{1}{\log b}$ and
by [3],
p. 2--117, 2.12.14,
the sequence
$\alpha n\log^\tau n\bmod1$, $\alpha\neq0$, $0<\tau\le1$,
is u.d. Similarly for $x_n=n^{n^2}$.
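The equivalence with uniform distribution of $\log_b x_n \bmod 1$ is easy to check numerically; the sketch below (my own illustration, not from the paper) recovers the first-digit Benford frequencies for $x_n = n^n$ via $\log_{10} n^n = n\log_{10} n$:

```python
import math

def first_digit_freqs(logs, base=10):
    """Empirical leading-digit frequencies recovered from the values
    log_b x_n: the leading digit of x_n is floor(base ** frac(log_b x_n))."""
    counts = [0] * (base - 1)
    for t in logs:
        counts[int(base ** (t % 1.0)) - 1] += 1
    return [c / len(logs) for c in counts]

logs = [n * math.log10(n) for n in range(2, 5000)]   # log10 of n^n
freqs = first_digit_freqs(logs)
benford = [math.log10(1.0 + 1.0 / k) for k in range(1, 10)]
```

The empirical frequencies approach $\log_{10}(1+1/k)$, i.e. digit 1 occurs for roughly 30\% of the terms.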
**Ratio sequences.**
Let $x_1 < x_2 < \ldots$ be an increasing sequence of positive integers
and consider a sequence of blocks $X_n$,
$n=1,2,\ldots$, with blocks
$$
X_n=\left(\frac{x_1}{x_n},\frac{x_2}{x_n},
\ldots,\frac{x_n}{x_n}\right)
$$
and denote by $F(X_n,x)$ the step d.f.
$
F(X_n,x)=\frac{\#\{i\le n;\frac{x_i}{x_n} < x\}}{n},
$
for $x\in[0,1)$ and $F(X_n,1)=1$.
A d.f. $g$ is a d.f. of the sequence of single blocks $X_n$,
if there exists an increasing sequence of
positive integers $n_1,n_2,\ldots$ such that
$
\lim_{k\to\infty}F(X_{n_k},x)=g(x)
$
a.e. on $[0,1]$.
Denote by $G(X_n)$ the set of all d.f. of the sequence
of single blocks $X_n$. Using the boundaries of the d.f.s in $G(X_n)$,
in [4] we proved
\par\noindent
**Theorem 3.**
For every increasing sequence $x_1 < x_2 < \ldots$
of positive integers with lower and upper asymptotic densities
$0<\underline{d}\le\overline{d}$
we have
$$
\frac{1}{2}\frac{\underline{d}}{\overline{d}}
\le\liminf_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\frac{x_i}{x_n},
\qquad (2)
$$
$$
\limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\frac{x_i}{x_n}
\le\frac{1}{2}+\frac{1}{2}
\left(\frac{1-\min(\sqrt{\underline{d}},\overline{d})}{1-\underline{d}}\right)
\left(1-\frac{\underline{d}}{\min(\sqrt{\underline{d}},\overline{d})}\right).
\qquad (3)
$$
Here equality in (2) and (3)
can be attained.
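A small numerical illustration of Theorem 3 (our example, not from [4]): for $x_i=2i$ the lower and upper asymptotic densities both equal $1/2$, the bounds (2) and (3) both reduce to $1/2$, and the averages $\frac1n\sum_{i=1}^n x_i/x_n$ indeed tend to $1/2$.

```python
# Block averages for the example sequence x_i = 2i (both densities = 1/2).
n = 100_000
x = [2 * i for i in range(1, n + 1)]
avg = sum(xi / x[-1] for xi in x) / n
print(avg)   # equals (n + 1) / (2n), tends to 1/2
```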

[1] KUIPERS, L. -- NIEDERREITER, H., Uniform Distribution of Sequences, John Wiley \& Sons, New York, 1974, Reprint edition Dover Publications, Inc. Mineola, New York, 2006.

[2] DRMOTA, M. -- TICHY, R. F., Sequences, Discrepancies and Applications, Lecture Notes in Mathematics 1651, Springer-Verlag, Berlin, Heidelberg, 1997.

[3] STRAUCH, O. -- PORUBSK\'Y, \v S., Distribution of Sequences: A Sampler, Schriftenreihe der Slowakischen Akademie der Wissenschaften, Band 1, Peter Lang, Frankfurt am Main, 2005, 569 pp.

[4] BAL\'A\v Z, V. -- MI\v S\'IK, L. -- STRAUCH, O. -- T\'OTH, J.T., Distribution functions of ratio sequences, III, Publ. Math. Debrecen, 2009, submitted.

[5] BAL\'A\v Z, V. -- NAGASAKA, K. -- STRAUCH, O., Benford's law and distribution functions of sequences in $(0,1)$, Mathematical Notes, 2009, submitted.

We consider the problem of approximating an embedding operator
$$ A:H\to L_p(X)\,, $$
where $H$ is a Hilbert space of functions on an arbitrary measure space $X$. It is well known that linear algorithms
$$ A_n:H\to L_p(X), \quad f\mapsto\sum_{i=1}^n \alpha_i(f)h_i \qquad (h_i\in L_p(X)) $$
are optimal among all adaptive and non-linear algorithms. This holds both when the $\alpha_i$ may be arbitrary continuous linear functionals (general information) and when they are restricted to function evaluations $f\mapsto f(x)$ for $x\in X$ (assuming all function evaluation functionals are continuous). Obviously, approximation using general information is at least as good as approximation using function evaluations only; however, function evaluations are usually easier to perform in practice. We discuss what can be said about the errors for function evaluations when the errors for general information are roughly known.

We present new algorithms for weak approximation of stochastic differential equations driven by pure jump L\'evy processes. The method uses adaptive non-uniform discretization based on the times of large jumps of the driving process. To approximate the solution between these times we replace the small jumps with a Brownian motion. Our technique avoids the simulation of the increments of the L\'evy process, and in many cases achieves better convergence rates than the traditional Euler scheme with equal time steps. To illustrate the method, we discuss an application to option pricing in the Libor market model with jumps.
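A minimal sketch of the small-jump substitution idea (our illustration with an assumed one-sided L\'evy measure $\nu(dx)=\alpha x^{-1-\alpha}dx$, $\alpha<1$; not the authors' algorithm): jumps larger than a cutoff $\varepsilon$ are simulated exactly as a compound Poisson process, and the small jumps are replaced by a Gaussian with matched mean and variance.

```python
import math
import random

# Illustrative cutoff scheme for nu(dx) = alpha * x^(-1-alpha) dx on (0, inf),
# with alpha < 1 so the small-jump mean is finite.  All names are ours.
alpha, eps, T = 0.5, 0.01, 1.0
random.seed(0)

lam = eps ** -alpha                                  # rate of jumps > eps
mu = alpha * eps ** (1 - alpha) / (1 - alpha)        # mean of small jumps
sigma2 = alpha * eps ** (2 - alpha) / (2 - alpha)    # variance of small jumps

def increment():
    # large jumps on [0, T]: compound Poisson with Pareto-type sizes
    big, t = 0.0, random.expovariate(lam)
    while t < T:
        big += eps * random.random() ** (-1.0 / alpha)   # inverse-cdf sample
        t += random.expovariate(lam)
    # small jumps replaced by a Gaussian with matched moments
    small = random.gauss(mu * T, math.sqrt(sigma2 * T))
    return big + small

print(increment())
```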

Reliable estimation of risk exposure is both a demanding and a challenging task for risk units in financial institutions. Since the complexity of large portfolios does not allow us to use any (simplifying) multidimensional distribution to describe the joint dependency among particular risk factors, the approach of copula functions is commonly used. As opposed to ordinary copula functions for standard processes, L\'evy copulas are required to join the more appropriate L\'evy processes -- processes with (infinite activity of) jumps, which allow one to capture the true distribution pattern of market returns better. However, the factors of dependency are not constant in time but evolve dynamically. This suggests several research directions for the most suitable description -- parametric (stochastic processes), non-parametric (e.g.\ GARCH-type models) or interval specification. In this paper, we define the input parameters of dependency by means of fuzzy numbers, which allows us to obtain a whole set of risk measures. This approach provides risk managers with more extensive information about the risk they are exposed to, including a ranking possibility.

Stochastic mesh methods have been applied successfully to solve American
option pricing problems [1,2,3].
Here we introduce as a new application the computation of an approximate
optimal policy for a portfolio hedging problem with transaction costs.
To improve the efficiency of the mesh approach, we consider using a single
grid of $n$ simulated states for all time steps,
and discarding negligible weights with high probability using a Russian roulette method.
This does not change the computational complexity of using the stochastic
mesh, but can reduce the computation time by a significant factor,
depending on the number of time steps.
Using a single grid also simplifies the use of randomized quasi-Monte Carlo (RQMC)
to generate the mesh.
On the other hand, using a single grid and the roulette increases the variance
and may also increase the bias.
We consider the hedging of a basket option as an example, and examine how these
methods improve (or not) the quality of the hedging policy retained at the end,
for a given (fixed) total computing budget.
We do this for a variety of key parameters, such as
the number of time steps, the time horizon, the number of assets,
and the computing budget.
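The Russian-roulette step can be sketched as follows (our illustration; the threshold and names are assumptions, not taken from the paper). A weight below the threshold survives with probability proportional to its size and is scaled up so that its expectation is unchanged, which keeps the mesh estimator unbiased:

```python
import random

# Russian roulette on mesh weights: below-threshold weights survive with
# probability w / threshold and are scaled up to the threshold, so the
# expected value of each weight is preserved (unbiasedness).
def roulette(weights, threshold=0.1, rng=random.random):
    out = []
    for w in weights:
        if w >= threshold:
            out.append(w)                 # large weights kept as-is
        elif rng() < w / threshold:
            out.append(threshold)         # survivor: E[kept] = w
        else:
            out.append(0.0)               # discarded
    return out

random.seed(1)
print(roulette([0.5, 0.01, 0.02, 0.3, 0.001]))
```

Setting discarded weights to zero lets the corresponding mesh states be skipped entirely, which is where the reduction in computation time comes from.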

[1] P. P. Boyle, A. W. Kolkiewicz, and K. S. Tan. Pricing American style options using low discrepancy mesh methods. IIPR - Research report 00-07, November 2000.

[2] M. Broadie and P. Glasserman.
A stochastic mesh method for pricing high-dimensional American
options.
*The Journal of Computational Finance*, 7(4):35--72, 2004.

[3] P. Glasserman.
*Monte Carlo Methods in Financial Engineering*.
Springer-Verlag, New York, 2004.

Adaptive Markov chain Monte Carlo (AMCMC) methods have received increasing
interest in the last decade [1]. The AMCMC algorithms tune the proposal
distribution of the Metropolis-Hastings Markov kernel continuously using the
simulated history of the chain. There are many applications where AMCMC
algorithms are empirically shown to improve over the traditional MCMC methods.
Due to the non-Markovian nature of the adaptation, however, the analysis of
these algorithms requires more care than the traditional methods.
Most theoretical results on adaptive MCMC in the literature
are based on assumptions that typically require, either explicitly or
implicitly, one to modify a natural adaptation scheme by some seemingly
artificial constraints [1]. The constraint parameters may be difficult to choose
in practice, and the algorithms are generally sensitive to these parameters. In
the worst case, poor choices can render the algorithms useless.
This talk focuses on the recent stability and ergodicity results obtained for
two commonly applied AMCMC algorithms, without such constraints
[2--4]. The algorithms are based on the random walk Metropolis sampler: the
first one adjusts the proposal covariance, and the second one adjusts the scale
of the proposal distribution based on the observed acceptance probability.
The results assume only verifiable conditions on the target
distribution and the functional.
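The second algorithm, adapting the proposal scale from the observed acceptance probability, can be sketched as follows (a minimal illustration with an assumed standard normal target, Robbins-Monro step sizes and a 0.44 target rate; not the authors' exact scheme):

```python
import math
import random

# Adaptive scaling random-walk Metropolis (illustrative): the log of the
# proposal scale is updated toward a target acceptance rate of 0.44 using
# Robbins-Monro step sizes n^(-0.7).  Target density: standard normal.
def log_target(x):
    return -0.5 * x * x

random.seed(0)
x, log_scale, target_rate = 0.0, 0.0, 0.44
samples = []
for n in range(1, 20001):
    prop = x + math.exp(log_scale) * random.gauss(0.0, 1.0)
    alpha = min(1.0, math.exp(log_target(prop) - log_target(x)))
    if random.random() < alpha:
        x = prop
    log_scale += n ** -0.7 * (alpha - target_rate)   # adapt the scale
    samples.append(x)

mean = sum(samples[5000:]) / 15000
print(mean, math.exp(log_scale))
```

The stability question addressed in [2--4] is precisely whether such an unconstrained `log_scale` recursion can drift to infinity.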

[1] C. Andrieu and J. Thoms. A tutorial on adaptive MCMC.
*Statistics and Computing*, 18(4):343--373, 2008.

[2] E. Saksman and M. Vihola. On the ergodicity of the adaptive Metropolis
algorithm on unbounded domains.
*Annals of Applied Probability*, to appear.

[3] M. Vihola. On the stability and ergodicity of an adaptive scaling
Metropolis algorithm.
*Preprint, arXiv:0903.4061v2*, 2009.

[4] M. Vihola. Can the adaptive Metropolis algorithm collapse without the
covariance lower bound?
*Preprint, arXiv:0911.0522v1*, 2009.

A series of recent articles introduced a method to construct stochastic partial differential equations (SPDEs) which are invariant with respect to the distribution of a given conditioned diffusion. Thus, these SPDEs can be used as a building block for MCMC methods to sample conditioned diffusions.
This talk describes how the methodology can be extended to allow the construction of SPDEs which are invariant with respect to the distribution of a class of hypoelliptic diffusion processes, subject to a bridge conditioning. The resulting sampling equations are SPDEs of fourth-order parabolic type. This allows the treatment of more realistic physical models; for example, one can use the resulting SPDE to study transitions between meta-stable states in mechanical systems with friction and noise. In this situation the restriction that the drift be a gradient, required in the elliptic case, can also be lifted.

We consider the computational complexity of computing the star
discrepancy of a point set ($n$ points, in the $d$-dimensional unit
cube).
From the work of Gnewuch et al. [M. Gnewuch, A. Srivastav, C. Winzen:
Finding Optimal Volume Subintervals with $k$ Points and Calculating
the Star Discrepancy are NP-Hard Problems, Journal of Complexity,
Vol. 25, 115-127, 2009], we know that computing the star discrepancy
of a point set is NP-hard when the dimension is unbounded. However,
the constructions used create point sets where the dimension is on the
same order as the number of points.
On the other hand, for bounded dimension, we can trivially compute the
star discrepancy in $O(n^d)$ time, making it polynomial for fixed $d$.
However, a dependency like $n^{\Theta(d)}$ quickly becomes
intractable, even for moderate dimension.
Thus, for the problem of actually computing the star discrepancy for a
point set of moderate dimension, it seems that classical complexity
has very little to say. To adress this situation, we need tools from
\emph{parameterized complexity}.
Parameterized complexity is a form of multi-variate complexity theory,
studying problems equipped with a secondary \emph{parameter}, which in
addition to the size of the instance controls its complexity -- as in
our case, when the dimension $d$ is a parameter controlling the
difficulty of computing the star discrepancy of a point set.
Using tools from parameterized complexity, we provide evidence that
computing the star discrepancy is a very hard problem, and that it is
very unlikely to be possible to compute it in time $n^{o(d)}$,
or in time $f(d) n^{O(1)}$ for any function $f(d)$.
This represents joint work with Panos Giannopoulos, Christian Knauer,
and Daniel Werner.
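The trivial enumeration behind the $O(n^d)$-type upper bound can be sketched as follows (our illustration): it suffices to test boxes whose corners come from the points' coordinate values (plus 1.0), counting points with both strict and weak inequalities to capture boundary effects.

```python
import itertools

# Brute-force star discrepancy: test every corner built from the points'
# coordinate values (plus 1.0), with both open and closed boxes to account
# for points lying on a box boundary.
def star_discrepancy(points):
    n, d = len(points), len(points[0])
    coords = [sorted({p[j] for p in points} | {1.0}) for j in range(d)]
    disc = 0.0
    for corner in itertools.product(*coords):
        vol = 1.0
        for c in corner:
            vol *= c
        n_open = sum(all(p[j] < corner[j] for j in range(d)) for p in points)
        n_closed = sum(all(p[j] <= corner[j] for j in range(d)) for p in points)
        disc = max(disc, abs(n_open / n - vol), abs(n_closed / n - vol))
    return disc

pts = [((i + 0.5) / 4,) for i in range(4)]
print(star_discrepancy(pts))   # centered 1-d points: D* = 1/(2n) = 0.125
```

The corner enumeration visits up to $(n+1)^d$ boxes, which is exactly the $n^{\Theta(d)}$ behaviour the abstract refers to.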

Quasi-Monte Carlo (QMC) methods have become important numerical tools in computational finance due to their greater efficiency relative to other numerical methods. Two important factors can significantly affect the performance of QMC methods, namely, the smoothness property and the effective dimension of the problem. Many financial derivatives have discontinuities in their payoffs and/or Greeks (sensitivities). For pricing or hedging such derivatives, QMC methods may lose their superiority, and special care must be taken to restore it. This paper proposes a method to overcome the obstacle of discontinuity in pricing and hedging financial derivatives. By judiciously designing a method for simulating the underlying processes, we recover the superiority of QMC. The nice feature of the method is that it removes the irregular discontinuities and reduces the effective dimension of the problem, overcoming the difficulties of discontinuity and high dimensionality simultaneously. Numerical experiments illustrate the high efficiency and robustness of the proposed method in the QMC setting for pricing options with discontinuous payoffs and for estimating Greeks of options.

In this talk we consider weighed integration of functions with
infinitely many variables in the worst case and randomized
settings for a large class of weighted quasi-reproducing kernel
Hilbert spaces. We show, in particular, that Changing Dimension
algorithms in conjunction with Smolyak's construction yield
efficient cubatures with optimal rate of convergence, at least
in the worst case setting. We also show that these results imply
polynomial and weak tractabilities for a very wide range of cost functions
including exponential and double-exponential ones.

We study the tractability of computing $\varepsilon$-approximations of
the Fredholm problem of the second kind: given $f\in F_d$ and $q\in
Q_{2d}$, find $u\in L_2(I^d)$ satisfying
\[u(x) - \int_{I^d} q(x,y)u(y)\,dy = f(x)
\qquad\forall\,x\in I^d=[0,1]^d.\]
Here, $F_d$ and $Q_{2d}$ are spaces of $d$-variate right hand
functions and $2d$-variate kernels that are continuously embedded
in $L_2(I^d)$ and $L_2(I^{2d})$, respectively. We consider the worst
case setting, measuring the approximation error for the solution $u$
in the $L_2(I^d)$-sense. ``Tractability'' means that the minimal
number of information operations of $f$ and $q$ needed to obtain an
$\varepsilon$-approximation is sub-exponential in $\varepsilon^{-1}$
and $d$. One information operation corresponds to the evaluation of
one linear functional or one function value. The lack of
sub-exponential behavior may be defined in various ways, and so we
have various kinds of tractability. In particular, the problem is
strongly polynomially tractable if the minimal number of information
operations is bounded by a polynomial in $\varepsilon^{-1}$ for
all $d$.
We show that tractability (of any kind whatsoever) for the Fredholm
problem is equivalent to tractability of the $L_2$-approximation
problems over the spaces of right-hand sides and kernel functions. So
(for example) if both these approximation problems are strongly
polynomially tractable, so is the Fredholm problem. In general, the
upper bound provided by this proof is essentially non-constructive,
since it involves an interpolatory algorithm that exactly solves the
Fredholm problem (albeit for finite-rank approximations of $f$
and $q$). We remedy this situation for tensor product spaces $F_d$
and $Q_{2d}$ by proving that a modified interpolatory algorithm
provides an approximation with nearly-optimal cost, i.e., one whose
cost is within a factor $\ln\,\varepsilon^{-1}$ of being optimal.

We obtain lower bounds on the convergence (``mixing'') time of some adaptive MCMC techniques. Although these methods produce non-Markovian, time-inhomogeneous stochastic processes, we show that bounds can be obtained using ideas of hitting time and conductance from the theory of Markov chains.
Our bounds suggest and in some cases prove that these adaptive MCMC methods are not able to provide a speed-up from slow to rapid mixing relative to their non-adaptive counterparts (although some of them may successfully optimize over a family of non-adaptive methods). The methods we consider include the technique of Haario, Saksman, and Tamminen (2001), inter-chain adaptation by Craiu, Rosenthal, and Yang (2009), the equi-energy sampler (Kou, Zhou, and Wong 2006), and the importance-resampling MCMC algorithm (Atchade 2009). We use our bounds to show slow mixing on multimodal examples: a mixture of normals and the mean-field Potts model.

Aicke Hinrichs recently studied
multivariate integration defined over reproducing kernel Hilbert spaces
in the randomized setting and for the normalized error criterion.
In particular, he showed that such problems are strongly polynomially
tractable if the reproducing kernels are pointwise nonnegative and
integrable. More specifically, let $n(\epsilon,{INT}_d)$ be the minimal number
of randomized function samples that is needed to compute
an $\epsilon$-approximation for the $d$-variate case of multivariate
integration. Hinrichs proved that
$$
n(\epsilon,{INT}_d)\le
\left\lceil\frac{\pi}2\,\left(\frac1{\epsilon}\right)^2\right\rceil
\mbox{ for all }\epsilon\in(0,1)\mbox{ and } d\in\mathbb{N}.
$$
We prove that the exponent $2$ of $\epsilon^{-1}$ is sharp
for tensor product Hilbert spaces whose univariate
reproducing kernel is \emph{decomposable} and
univariate integration is not trivial for the two spaces corresponding
to decomposable parts. More specifically we have
$$
n(\epsilon,{INT}_d)\ge\left\lceil\frac18\,\left(\frac1{\epsilon}\right)^2\right\rceil
\mbox{ for all }\epsilon\in(0,1)
\mbox{ and } d\ge\frac{2\,\ln\,\epsilon^{-1}\,-\,\ln\,2}{\ln\,\alpha^{-1}},
$$
where $\alpha\in[1/2,1)$ depends on the particular space.
We stress that these estimates hold independently of the smoothness
of functions in a Hilbert space. Hence, even for spaces of very smooth
functions the exponent
of strong polynomial tractability must be $2$.
Our lower bounds hold not only for multivariate integration but
for all linear tensor product functionals defined over a Hilbert space
with a decomposable reproducing kernel and
with a non-trivial univariate functional for the two spaces
corresponding to decomposable parts.
We also present lower bounds for
reproducing kernels that are not decomposable but have a decomposable
part. However, in this case it is not clear if the lower bounds are
sharp.
This is a joint work with Erich Novak.

We investigate the extension of the multilevel
Monte Carlo path simulation method [1,2]
to jump-diffusion SDEs. The first part considers models with
finite rate activity (e.g. [4]), using a
jump-adapted discretisation in which the jump times are computed
and added to the standard timestepping discretisation times.
The key component in multilevel analysis is the calculation
of an expected payoff difference between a coarse path simulation
and a fine path simulation with twice as many timesteps.
If the Poisson jump rate is constant, the jump times are the
same on both paths and the multilevel extension is relatively
straightforward, but the implementation is more complex in the
case of path-dependent jump rates for which the jump times
naturally differ [3].
In the second part we consider the variance gamma model
[5] as an example of a jump-diffusion process with infinite activity.
Here the benefits of the multilevel method arise in applications
with path-dependent payoffs such as lookback and barrier options,
which require approximate interpolation of the numerical solution
within each timestep.
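The coarse/fine coupling at the heart of the multilevel estimator can be sketched as follows (our illustration: plain geometric Brownian motion and a European call stand in for the jump-diffusion models, and all parameters are assumed). The coarse path is driven by the sums of the fine Brownian increments:

```python
import math
import random

# Multilevel correction term E[P_fine - P_coarse] for a European call under
# geometric Brownian motion (Euler scheme).  Parameters are illustrative.
random.seed(0)
S0, r, sig, T, K = 100.0, 0.05, 0.2, 1.0, 100.0

def level_difference(nf, samples=20_000):
    hf, hc = T / nf, 2.0 * T / nf          # fine and coarse step sizes
    acc = 0.0
    for _ in range(samples):
        Sf = Sc = S0
        for _ in range(nf // 2):
            dW1 = random.gauss(0.0, math.sqrt(hf))
            dW2 = random.gauss(0.0, math.sqrt(hf))
            Sf += r * Sf * hf + sig * Sf * dW1           # two fine steps
            Sf += r * Sf * hf + sig * Sf * dW2
            Sc += r * Sc * hc + sig * Sc * (dW1 + dW2)   # one coarse step
        acc += max(Sf - K, 0.0) - max(Sc - K, 0.0)
    return acc / samples

print(level_difference(8))   # small, shrinks as the discretisation is refined
```

In the jump-adapted variant described above, the jump times would be inserted into both discretisation grids before the Brownian increments are shared.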

[1] M.B. Giles. Improved multilevel Monte Carlo convergence using the Milstein scheme. In A. Keller, S. Heinrich, and H. Niederreiter, editors, {\em Monte Carlo and Quasi-Monte Carlo Methods 2006}, pages 343--358. Springer-Verlag, 2007.

[2] M.B. Giles.
Multilevel Monte Carlo path simulation.
*Operations Research*, 56(3):607--617, 2008.

[3] P. Glasserman and N. Merener.
Convergence of a discretization scheme for jump-diffusion processes
with state-dependent intensities.
*Proc. Royal Soc. London A*, 460:111--127, 2004.

[4] R.C. Merton.
Option pricing when underlying stock returns are discontinuous.
*Journal of Finance*, 3:125--144, 1976.

[5] D.B. Madan and E. Seneta.
The variance gamma model for share market returns.
*Journal of Business*, 63:511--524, 1990.

Consider the solution $X$ of a scalar stochastic differential
equation (SDE). We study approximation of the expectation
$E(f(X(t)))$ for functions $f:\mathbb R\to\mathbb R$ of the solution at time $t$,
which is motivated by pricing of path-independent options with
discounted payoff $f$. We present a deterministic quadrature rule
for this problem, which is based on iterative quantization of the
Euler scheme, and we discuss its worst case error with respect to a class of
smooth functions $f:\mathbb R\to\mathbb R$ and, in particular, the cost of
constructing this rule. The latter is particularly relevant since
the distribution of $X(t)$ is only given implicitly via the SDE.

We consider randomized algorithms for simulating the evolution of a Hamiltonian $H = \sum_{j=1}^m H_j$ for time $t$. The evolution is simulated by a product of exponentials of the $H_j$ for some random sequence and random evolution times; hence the final state of the system is approximated by a mixed state. First, we provide a scheme to bound the error of the final state in a randomized algorithm. Then, we present randomized algorithms which have the same efficiency as certain deterministic algorithms but are less complicated than their deterministic counterparts. Finally, we provide a lower bound on the number of exponentials for both deterministic and randomized algorithms when the evolution times are required to be positive.
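A toy version of such a randomized product formula (our illustration with an assumed two-term Hamiltonian $H=X+Z$ built from Pauli matrices; not the authors' construction): each step applies the two exponentials in a randomly chosen order, and the result is compared against the exact evolution $\exp(-iHt)$, which is available in closed form since $H^2=2I$.

```python
import math
import random
import numpy as np

# Randomized product formula for H = X + Z: each of r steps applies
# exp(-i (t/r) X) and exp(-i (t/r) Z) in a random order.  Since X and Z are
# Pauli matrices, their exponentials are available in closed form.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_exp(P, a):
    # exp(-i a P) = cos(a) I - i sin(a) P, valid because P^2 = I
    return math.cos(a) * I2 - 1j * math.sin(a) * P

def randomized_product(t, r, rng):
    U = I2.copy()
    for _ in range(r):
        order = [X, Z] if rng.random() < 0.5 else [Z, X]
        for P in order:
            U = pauli_exp(P, t / r) @ U
    return U

t, r = 1.0, 100
U_approx = randomized_product(t, r, random.Random(0))
s = math.sqrt(2.0)                       # H^2 = 2 I, eigenvalues +- sqrt(2)
U_exact = math.cos(s * t) * I2 - 1j * math.sin(s * t) * (X + Z) / s
err = np.linalg.norm(U_approx - U_exact, 2)
print(err)   # decreases as r grows
```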

Under the honorary patronage of the Mayor of Warsaw, Hanna Gronkiewicz-Waltz.