From posting-system@google.com Wed Oct 31 20:48:37 2001
Date: Wed, 31 Oct 2001 17:48:33 -0800
From: oleg@pobox.com (oleg@pobox.com)
Reply-To: oleg@pobox.com (oleg@pobox.com)
Newsgroups: comp.lang.functional
Subject: Re: Non-local arithmetic. Was: Help! Which language is it?
References: <3BDFE855.DF71664@info.unicaen.fr>
Message-ID: <7eb8ac3e.0110311748.1104b2be@posting.google.com>
Jerzy Karczmarczuk wrote in message news:<3BDFE855.DF71664@info.unicaen.fr>...
> > That might be a bit slow, but it would get the principle down. There
> > are probably more efficient ways of representing things, and laziness
> > might come in useful.
> I am not sure about that last point.
Actually, that _can_ be done. It is slow and unwieldy -- but eminently
possible. Monads and lazy evaluation come in very handy. This article
shows an example.
The key insight is that we don't need to "symbolically" manipulate
distributions. We represent a random variable by a function, which,
when evaluated, generates one number. If evaluated many times, the
function generates many numbers -- a sample of a given distribution.
We can combine such functions into expressions. At the very end, we
evaluate that expression many times -- which gives us the final
sample. From that sample, we can compute estimates of the
mathematical expectation and of the standard deviation (and of
higher moments if needed). This is basically a Monte Carlo
approach. True, William Kahan doesn't like it -- when it is applied to
numerical instabilities, that is. In physics, this is a highly
justified approach: most (all?) of our experimental results are
ensemble averages.
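Before the monadic framework below, the idea can be sketched in a few
lines of plain Haskell. This is a sketch of mine, not part of the
framework -- the names lcg, uniforms and meanStd are illustrative: a
random variable is an infinite stream of draws, and statistics are
computed from a finite prefix of the stream.

```haskell
-- Monte Carlo in miniature: a random variable as a stream of draws.
-- lcg, uniforms and meanStd are illustrative names only.

-- the same simple linear congruential generator used later in the article
lcg :: Int -> Int
lcg s = (s * 1103515245 + 12345) `mod` 32768

-- an infinite stream of draws, uniformly distributed on [0,1]
uniforms :: Int -> [Double]
uniforms seed = map ((/ 32767) . fromIntegral) (tail (iterate lcg seed))

-- sample mean and sample standard deviation of a finite sample
meanStd :: [Double] -> (Double, Double)
meanStd xs = (m, sqrt (sum [(x - m) ^ 2 | x <- xs] / fromIntegral (n - 1)))
  where
    n = length xs
    m = sum xs / fromIntegral n

main :: IO ()
main = print (meanStd (take 5000 (uniforms 5)))
```

For U(0,1) the mean should come out near 0.5 and the standard
deviation near 1/sqrt(12), about 0.289.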
As an example, we estimate the mean and the standard deviation of
    sqrt( (L*w - 1/(C*w))^2 + R^2 )
which is the impedance of a series electric circuit, with the
inductance L a normal variable (mean 100, standard deviation 1), the
frequency w uniformly distributed from 10 through 20 kHz, the
resistance R also uniformly distributed from 10 through 50 kOhm, and
the capacitance C having the square root of a normal distribution. I
admit, the latter is just to make the example more interesting.
The framework first:
import Control.Monad

-- An integer uniformly distributed within [0..32767]
type RandomState = Int

random_state_max :: RandomState
random_state_max = 32767

type RandomGen = RandomState -> RandomState

newtype RandomVar a =
  RandomVar (RandomGen -> RandomState -> (a, RandomState))

-- Functor and Applicative instances, required by GHC's Monad hierarchy
instance Functor RandomVar where
  fmap = liftM

instance Applicative RandomVar where
  pure x = RandomVar $ \gen state -> (x, state)  -- deterministic variable
  (<*>)  = ap

instance Monad RandomVar where
  return = pure
  (RandomVar x) >>= f = RandomVar $
    \gen state -> let (v, state')  = x gen state
                      RandomVar x' = f v
                  in x' gen state'

-- Other morphisms and monad constructors

-- A random number uniformly distributed within [0,1]
uniform_stan :: RandomVar Float
uniform_stan = RandomVar $
  \gen state ->
    let state' = gen state
    in (fromIntegral state' / fromIntegral random_state_max, state')
-- A random number uniformly distributed on [a, b]
uniform a b = do { x <- uniform_stan; return $ a+x*(b-a) }
add :: Num a => RandomVar a -> RandomVar a -> RandomVar a
add = liftM2 (+)

sub :: Num a => RandomVar a -> RandomVar a -> RandomVar a
sub = liftM2 (-)

mul :: Num a => RandomVar a -> RandomVar a -> RandomVar a
mul = liftM2 (*)

divM :: Fractional a => RandomVar a -> RandomVar a -> RandomVar a
divM = liftM2 (/)

sqrM :: Num a => RandomVar a -> RandomVar a
sqrM = liftM $ \x -> x*x

sqrtM :: Floating a => RandomVar a -> RandomVar a
sqrtM = liftM sqrt
-- standardized normal distribution N(0,1)
-- The well-known central-limit approximation: the sum of twelve U(0,1)
-- draws has mean 6 and variance 12 * (1/12) = 1
normal_stan = sub (foldM (\acc _ -> add (return acc) uniform_stan) 0 [1..12])
                  (return 6)
-- normal distribution with mean z and standard deviation s
normal z s = do { x <- normal_stan; return $ x*s + z }
-- "square root" distribution: the max of two independent draws from
-- dist; if dist has CDF F, the max has CDF F^2
sqrt_dist dist = do { x <- dist; y <- dist; return $ max x y }
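A quick sanity check on the max-of-two-draws trick behind sqrt_dist:
if X and Y are independent with CDF F, then P(max(X,Y) <= t) = F(t)^2,
so the max of two U(0,1) draws has CDF t^2 and mean 2/3. A
self-contained sketch of mine (the helper names are illustrative; the
generator mirrors the one used by run_random below):

```haskell
-- Verify that max-of-two-draws squares the CDF: for U(0,1) the max of
-- a pair has CDF t^2 and mean 2/3.  (Helper names are illustrative.)
lcg :: Int -> Int
lcg s = (s * 1103515245 + 12345) `mod` 32768

uniforms :: Int -> [Double]
uniforms seed = map ((/ 32767) . fromIntegral) (tail (iterate lcg seed))

-- max over consecutive, non-overlapping pairs of draws
maxPairs :: [Double] -> [Double]
maxPairs (x:y:rest) = max x y : maxPairs rest
maxPairs _          = []

mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)

main :: IO ()
main = print (mean (take 2500 (maxPairs (uniforms 5))))
```

The estimate should land near 2/3, noticeably above the 0.5 of a
single uniform draw.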
-- Run the RandomVar monad n times; estimate the mathematical
-- expectation and the standard deviation of the sample
run_random :: Int -> RandomVar Float -> (Float, Float)
run_random n (RandomVar mx) =
  let state = 5        -- initial state
      gen :: RandomGen -- the pathetic random number generator
      gen state = ((state * 1103515245) + 12345) `mod` (random_state_max + 1)
      samples = take n $ iterate (\(_,state) -> mx gen state) $ mx gen state
      (m0,m1) = foldl (\(acc0,acc1) (v,_) -> (acc0 + v, acc1 + v*v))
                      (0,0) samples
  in (m0 / fromIntegral n,
      sqrt ((m1 - m0*m0 / fromIntegral n) / fromIntegral (n-1)))
Now let's see what we've got:
Main> run_random 5000 uniform_stan
(0.49526,0.286909)
Well, close enough to the expected mean of 0.5 and standard deviation
of 1/sqrt(12), about 0.289.
Main> run_random 5000 normal_stan
(-0.00557715,0.997899)
Again, normal_stan indeed looks like a normally distributed variable
with mean 0 and standard deviation 1.
We can now do many wonderful things, such as
Main> run_random 1000 $ add (normal 10 1) (normal 20 2)
(29.9166,2.17711)
For independent normal variables the means add and the variances add,
so we expect the mean 10 + 20 = 30 and the standard deviation
sqrt(1^2 + 2^2), about 2.24 -- in good agreement with the estimate.

Back to the impedance, however:
-- compute the impedance of an electric circuit
-- z = sqrt( (L*w - 1/(C*w))^2 + R^2 )
impedance inductance capacitance freq resistance =
  sqrtM $
    (sqrM ((freq `mul` inductance) `sub`
           ((return 1) `divM` (capacitance `mul` freq))))
    `add`
    (sqrM resistance)
-- inductance: normal, mean 100, standard deviation 1
ct_l = normal 100 1
-- frequency: uniformly distributed over 10 kHz - 20 kHz
ct_freq = uniform 10000.0 20000.0
-- capacitance: the "square root" of a normal distribution (see sqrt_dist)
ct_cap = sqrt_dist $ normal 1000 10
-- resistance: uniformly distributed over 10 kOhm - 50 kOhm
ct_res = uniform 10000.0 50000.0
z = impedance ct_l ct_cap ct_freq ct_res
Main> run_random 1000 z
(1.51302e+06,286369.0)
Here you have it: the estimated mean and standard deviation. We can
repeat the run with a bigger n and watch the convergence -- just as in
the regular Monte Carlo method.
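The convergence can be watched directly: by the usual Monte Carlo
argument, the error of the sample mean shrinks roughly like
1/sqrt(n). A self-contained sketch of mine (lcg, uniforms and
sampleMean are my names) estimating the mean of U(0,1) at increasing n:

```haskell
-- Monte Carlo convergence: the error of the sample mean shrinks
-- roughly like 1/sqrt(n).  (lcg, uniforms, sampleMean are my names.)
lcg :: Int -> Int
lcg s = (s * 1103515245 + 12345) `mod` 32768

uniforms :: Int -> [Double]
uniforms seed = map ((/ 32767) . fromIntegral) (tail (iterate lcg seed))

-- mean of the first n draws
sampleMean :: Int -> Double
sampleMean n = sum (take n (uniforms 5)) / fromIntegral n

main :: IO ()
main = mapM_ (\n -> print (n, abs (sampleMean n - 0.5))) [100, 1000, 10000]
```

Note that the 15-bit generator has period 32768, so n much beyond that
merely resamples the same states; a serious experiment would use a
better generator.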