Sunday, July 31, 2016

The Gaussian Moment Integrals

In this blog I want to document a useful technique for evaluating integrals of the form
$$I_n(a) = \int_{-\infty}^{\infty}x^n e^{-ax^2}\ dx,$$
for any integer \(n\). Recall the Gaussian integral is given by
$$I_0(1) = \int_{-\infty}^{\infty}e^{-x^2}\ dx = \sqrt{\pi},$$
from which we easily deduce
$$ I_0(a) = \sqrt{\frac{\pi}{a}}.$$
Now note that for odd \(n\) the integrand is odd (anti-symmetric), so the integral over the real line vanishes, i.e.
$$I_n(a) = 0, \ \ n\text{ odd}. $$
To see this analytically, note that for odd \(n\) the integrand is an exact derivative of a function which decays at \(\pm\infty\); for instance
$$ x\,e^{-ax^2} = -\frac{1}{2a}\frac{\partial}{\partial x}e^{-ax^2}, $$
and higher odd moments reduce to this case by integration by parts, so the fundamental theorem of calculus gives zero.

Wonderful!

For even \(n\) we need only apply the trick commonly known as differentiation under the integral sign, or Feynman integration, though Prof. Feynman did not originate it. We recognize the integrand as a derivative of a simpler integrand with respect to a parameter, exchange the order of integration and differentiation, and then apply the parameter derivatives to the known value of the simpler integral. Since \(\partial_a e^{-ax^2} = -x^2 e^{-ax^2}\), we have
$$ \int_{-\infty}^{\infty}x^n e^{-ax^2}\ dx =  (-1)^{n/2}\int_{-\infty}^{\infty}\frac{\partial^{n/2}}{\partial a^{n/2}}e^{-ax^2}\ dx =  (-1)^{n/2}\frac{\partial^{n/2}}{\partial a^{n/2}}\int_{-\infty}^{\infty}e^{-ax^2}\ dx,$$
from which we obtain
$$I_n(a)  = (-1)^{n/2}\sqrt{\pi}\frac{\partial^{n/2}}{\partial a^{n/2}}a^{-1/2} =  \frac{\sqrt{\pi}\,(n-1)!!}{2^{n/2}a^{(n+1)/2}},  \ \ n\text{ even},$$
where, for odd \(k\), we define
$$k!! = 1\cdot 3\cdot 5 \cdots (k-2)\cdot k,$$
with the convention \((-1)!! = 1\) so that the formula reproduces \(I_0(a)\).
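
Since it is easy to get the signs and double factorials wrong here, a quick numerical check is worthwhile. Here's a minimal sketch (my addition, not part of the original post, assuming numpy and scipy are available) comparing the closed form against direct quadrature:

```python
# Numerical sanity check of the Gaussian moment formula (a sketch, not from the post).
import numpy as np
from scipy.integrate import quad

def double_factorial(k):
    """k!! for odd k >= -1, with the convention (-1)!! = 1."""
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

def gaussian_moment(n, a):
    """Closed form for I_n(a): 0 for odd n, sqrt(pi)(n-1)!!/(2^(n/2) a^((n+1)/2)) for even n."""
    if n % 2 == 1:
        return 0.0
    return np.sqrt(np.pi) * double_factorial(n - 1) / (2 ** (n / 2) * a ** ((n + 1) / 2))

a = 2.0
for n in range(7):
    numeric, _ = quad(lambda x: x**n * np.exp(-a * x**2), -np.inf, np.inf)
    print(n, numeric, gaussian_moment(n, a))
```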

Exercise:

Let \(\nu\) be a zero-mean Gaussian white-noise variable, i.e.
$$\nu \sim \frac{1}{\sqrt{2\pi}\,\sigma}e^{-\nu^2/(2\sigma^2)}.$$
Show that the RMS value of \(\nu\) is \(\sigma\). In other words, show
$$\sqrt{\overline{\nu^2}} = \sigma.$$
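
As a hint (my addition, not part of the original exercise), the even-moment formula does all the work once you set \(n = 2\) and \(a = 1/(2\sigma^2)\):
$$\overline{\nu^2} = \int_{-\infty}^{\infty}\nu^2\,\frac{e^{-\nu^2/(2\sigma^2)}}{\sqrt{2\pi}\,\sigma}\ d\nu = \frac{1}{\sqrt{2\pi}\,\sigma}\,I_2\!\left(\frac{1}{2\sigma^2}\right) = \frac{1}{\sqrt{2\pi}\,\sigma}\cdot\frac{\sqrt{\pi}}{2}\left(2\sigma^2\right)^{3/2} = \sigma^2.$$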

Thursday, June 2, 2016

Weekly blog 4: Derivations of the Fourier Transform

I swear I'll get to the discussion of first- and second-order systems in a bit, but before leaving the topic of transforms altogether, I wanted to write another blog on the Fourier transform. You see, the Fourier transform is special to me because, even though it is really easy to understand intuitively, no one ever gave me that intuition and I had to build it on my own. That's a shame, since I believe intuition can be a much more agile tool than rigor, especially for practitioners rather than theorists, so I wanted to take some time to document my own intuition for the Fourier transform in the hope that someone else might find it useful.

1. Sinusoidal Representations of Functions

Most people know that functions can be represented and approximated by other functions in a methodical way. For instance, the Taylor series approximates a function by a series of polynomials with the right coefficients, which in the Taylor case are given by the function's derivatives evaluated at the expansion point. Most people also know that series of sines and cosines can do the same job for periodic functions. In particular, the family
$$S = \left\{\sin\frac{2\pi n t }{T}\right\}_{n\in\mathbb{N}}$$
is orthogonal with respect to the inner product 
$$(f,g) = \frac{1}{T}\int_{-T/2}^{T/2}f(t) g(t)\ dt,$$
and thus, for a \(T\)-periodic function \(f\) (or at least for its odd part; cosines handle the even part), we can write an approximation consisting of sines, i.e. find \(a_1,a_2,\dots,a_N\) such that 
$$f(t) \approx \sum_{n=1}^Na_n\sin\frac{2\pi n t }{T},\ \ t\in [0,T].$$
The same is true of cosines, and a general periodic function needs both (or, equivalently, the complex exponentials we will use below). Notice that these are all sines and cosines whose frequencies \(\omega_n = 2\pi n/T\) are integer harmonics of the fundamental frequency \(\omega_1 = 2\pi/T\).
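
As a quick illustration (my addition, not from the original post; the example function and parameters are arbitrary), here's a short Python sketch that builds such an approximation by projecting an example periodic function onto a finite number of harmonics:

```python
# A minimal sketch (my addition): approximate a T-periodic function by a truncated
# harmonic series, with coefficients obtained by projection over one period.
import numpy as np

T = 2.0
N = 10                                    # number of harmonics kept
t = np.linspace(-T / 2, T / 2, 4001)
f = np.exp(np.sin(2 * np.pi * t / T))     # an example T-periodic function

approx = np.full_like(t, np.trapz(f, t) / T)    # the mean (n = 0) term
for n in range(1, N + 1):
    c = np.cos(2 * np.pi * n * t / T)
    s = np.sin(2 * np.pi * n * t / T)
    a_n = (2.0 / T) * np.trapz(f * c, t)        # even (cosine) coefficient
    b_n = (2.0 / T) * np.trapz(f * s, t)        # odd (sine) coefficient
    approx += a_n * c + b_n * s

print("max error over one period:", np.max(np.abs(f - approx)))
```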

2. The Spring Analogy

The trick I keep in mind when thinking about the Fourier transform is simple: I know that these sines and cosines arise as solutions for simple harmonic oscillators (say ideal, massless, infinitely elastic, undamped springs) which each satisfy the equation
$$\ddot{x} = -\omega^2_nx.$$
Actually, we can even write these solutions together, in an exponential representation given by \(e^{\pm j\omega_nt}\). What the Fourier series of a function essentially says is that the function may be approximated by a weighted sum of the solutions to simple harmonic oscillator equations. This is equivalent to weighting the solutions either by the harmonics of a single spring or by the fundamentals of a (possibly infinite) series of springs whose natural frequencies are all harmonics of each other.

Now we imagine including not just positive harmonics but (mathematically possible) negative harmonics of an infinite series of springs. We write
$$f(t) \approx \sum_{n=-\infty}^{\infty}a_nS_n, $$
where \(S_n\) is any choice for representing the solution to the spring equation. Finally, we imagine an uncountably infinite (stay with me) number of springs which have natural frequencies at every number from \(-\infty\) to \(\infty\) and go ahead and use \(S_n = e^{j\omega_n t}\) where now \(n\in\mathbb{R}\). We can drop the subscripts on the frequencies (since now there's no point) and turn the sum into an integral. Since each frequency needs a unique coefficient (to determine its weighting in the overall sum) the sequence \(a_n\) becomes a function \(a(\omega)\) and we may write
$$f(t) = \int_{-\infty}^{\infty}a(\omega)e^{j\omega t}\ d\omega.$$
\(a(\omega)\) still represents the weighting of the springs, but with a continuous index now instead of a discrete one, and just as the sequence \(a_n\) represented the function by the weighting given to a series of springs whose solutions were understood, so too does \(a(\omega)\). The difference, however, is that while \(a_n\) approximately represented \(f\) using a set number of harmonics, \(a(\omega)\) gives an exact representation in terms of all possible frequencies. Thus we say that \(a(\omega)\) is the frequency representation of \(f(t)\); it can be thought of as the function which assigns a weighting to the set of all possible springs so that their combined behavior imitates \(f(t)\) exactly. This representation is one-to-one on a suitable class of integrable functions \(f(t)\). In fact there is an inverse (up to a conventional factor of \(2\pi\), which I am sweeping under the rug here):
$$a(\omega) = \int_{-\infty}^{\infty}f(t)e^{-j\omega t}\ dt.$$
Now where have I seen that before? 
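
To see that this pair really does reconstruct a signal (including the \(1/2\pi\) normalization mentioned above), here's a small numerical sketch (my addition) that brute-forces both integrals for a Gaussian pulse with the trapezoidal rule:

```python
# Sketch (my addition): check the transform pair numerically for a Gaussian pulse, using
# a(w) = integral of f(t) e^{-jwt} dt and f(t) = (1/(2 pi)) integral of a(w) e^{jwt} dw.
import numpy as np

t = np.linspace(-20, 20, 4001)
w = np.linspace(-10, 10, 2001)
f = np.exp(-t**2 / 2)                                                       # test signal

a = np.array([np.trapz(f * np.exp(-1j * wk * t), t) for wk in w])           # forward
f_rec = np.array([np.trapz(a * np.exp(1j * w * tk), w) for tk in t]) / (2 * np.pi)  # inverse

print("max reconstruction error:", np.max(np.abs(f - f_rec.real)))
```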

Weekly Blog 3: More About Integral Transforms

Welcome to the 3rd installment of the weekly controls blog! I was going to do a blog on first- and second-order systems, but wow, those Laplace transforms last time just got my blood pumping to extend the discussion on integral transforms a bit! I've also recently become interested in applications of integral equations, Fredholm theory, and fractional calculus to control, and I figure recounting a bit of common knowledge on integral transforms is a good way to dive in.

Of course I won't do much more on integral transforms in their full scope; there are 500-page books for that. Instead I'll work a little bit more with the Laplace transform
$$(\mathcal{L}f)(s) = \int_{0}^{\infty}e^{-st}f(t)\ dt,$$
and the Fourier transform
$$(\mathcal{F}f)(\omega) =  \int_{-\infty}^{\infty}e^{-j\omega t}f(t)\ dt,$$
where, since I'm an engineer, I prefer to annoy mathematicians by defining \(j^2 = -1\).

1. Laplace Transform of a Delay

We've seen the widely used result \(\mathcal{L}\dot{y}(t) = sY(s)\) (assuming zero initial conditions) in the previous blog. This can also be seen as the Laplace transform of the operator \(d/dt\), which gives us the operator equation
$$ \mathcal{L}\frac{d}{dt} = s,$$
or, recursively
$$ \mathcal{L}\frac{d^n}{dt^n} = s^n.$$
We proved this by taking an arbitrary sample function, in this case \(y\) with known Laplace transform \(Y(s)\), and computing the Laplace transform of its derivative directly. 

Now let \(D_\tau\) be the delay operator which is defined as the operator for which
$$D_\tau f(t) = f(t-\tau).$$
That is, \(D_\tau\) simply delays an arbitrary function \(f\) by some amount of time \(\tau\). The goal of this section is to compute \(\mathcal{L}D_\tau\). We do this again by using a sample function, say \(f\), so that we have
$$ \mathcal{L}D_\tau f = \int_{0}^{\infty}e^{-st}D_\tau f(t)\ dt = \int_{0}^{\infty}e^{-st}f(t-\tau)\ dt.$$
Letting \(\sigma(t) = t-\tau\) we find \(d\sigma = dt\) and \(\sigma(0) = -\tau,\sigma(\infty)=\infty\). Then
$$\int_{0}^{\infty}e^{-st}f(t-\tau)\ dt = \int_{-\tau}^{\infty}e^{-s(\sigma+\tau)}f(\sigma)\ d\sigma = e^{-s\tau}\left(\int_{-\tau}^{0}e^{-s\sigma}f(\sigma)\ d\sigma + \int_{0}^{\infty}e^{-s\sigma}f(\sigma)\ d\sigma \right).$$
As before we define \(F(s) = \mathcal{L}f\). Now we need only take care of that bizarre term with the delay in the integration bounds. The usual option is to take \(f\) to be causal, i.e. \(f = 0\) for all \(t < 0\), which makes the first integral vanish and leaves only \(F(s)\); but this has always felt a bit ad hoc to me. Why should we? We obviously need to do something, though, so for now let's just say
$$\int_{-\tau}^{0}e^{-s\sigma}f(\sigma)\ d\sigma + F(s) \approx F(s)$$
and keep that issue in the back of our minds. We thus have
$$\mathcal{L}(D_\tau) = e^{-s\tau}.$$
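
As a quick sanity check (my addition), take the causal sample function \(f(t) = e^{-t}\), for which \(F(s) = 1/(s+1)\); since \(f\) vanishes for \(t<0\), the troublesome integral from \(-\tau\) to \(0\) drops out exactly:

```python
# Sketch (my addition): check L{f(t - tau)} = e^{-s tau} F(s) for f(t) = e^{-t}, F(s) = 1/(s+1).
import numpy as np
from scipy.integrate import quad

tau, s = 0.7, 2.0
# f(t - tau) vanishes for t < tau, so the integral effectively starts at tau
lhs, _ = quad(lambda t: np.exp(-s * t) * np.exp(-(t - tau)), tau, np.inf)
rhs = np.exp(-s * tau) / (s + 1)
print(lhs, rhs)   # both should be about 0.0822
```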

2. Fourier Transform of the Derivative and Neper Frequency

I'm finished with the Laplace transform for now, though I make no promises of never returning to it again...Let's go ahead and figure out what the Fourier transform of the derivative is. Say \(y\) is our sample function whose Fourier transform is \(Y(\omega)\). Then
$$(\mathcal{F}\dot{y})(\omega) = \int_{-\infty}^{\infty}e^{-j\omega t}\dot{y}(t)\ dt = e^{-j\omega t}y(t)\Big|_{-\infty}^{\infty} - \int_{-\infty}^{\infty}(-j\omega) e^{-j\omega t}y(t)\ dt$$
so, assuming again the surface term vanishes, 
$$(\mathcal{F}\dot{y})(\omega)  = j\omega Y(\omega),$$
or
$$\mathcal{F}\frac{d}{dt} =  j\omega.$$
This in particular allows us to easily map linear systems from the \(s\)-domain to the frequency domain by sending \(s\mapsto j\omega\). This is the so-called Joukowski substitution, though it is unclear whether or not Joukowski was the first one to notice this fact. 
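
As an illustration of the substitution (my addition, with scipy assumed as a dependency and an arbitrary second-order example), here's the map \(s\mapsto j\omega\) applied by hand and checked against scipy.signal's own frequency response:

```python
# Sketch (my addition): map H(s) = 1/(s^2 + 0.5 s + 1) to the frequency domain by the
# substitution s -> jw, and compare against scipy.signal's frequency response.
import numpy as np
from scipy import signal

num, den = [1.0], [1.0, 0.5, 1.0]
w = np.linspace(0.1, 10.0, 500)

H_manual = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)    # s -> jw by hand
_, H_scipy = signal.freqresp(signal.TransferFunction(num, den), w)

print("max difference:", np.max(np.abs(H_manual - H_scipy)))
```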

Although the Joukowski substitution is the most commonly used way of mapping into the frequency domain, it should be noted for completeness that it assumes the signal is a pure, non-attenuating sinusoid. Consider for instance the signal 
$$y(t) = \sin(\omega t).$$
The Joukowski substitution works (in the complexified case) and we have no problem, but what about 
$$y(t) = A(t)\sin(\omega t),$$
where \(A\rightarrow 0\) as \(t\rightarrow \infty\)? In this case the Joukowski substitution is not the correct map to the frequency domain. Instead we must introduce the Neper frequency, \(\sigma\), which keeps track of the signal's attenuation, and send \(s\mapsto \sigma+j\omega\) to get into the frequency domain. In practice, however, the frequency-domain representation of a function is regarded as a weighting of ideal simple harmonic oscillators used to represent the function, and the Neper frequency rarely comes into play when analyzing systems at a purely mathematical level.

Saturday, May 28, 2016

(Late) Weekly Blog 2: Transfer functions and Their Compositions

First: only one week into my project and I have already failed to reach my intended goal as the post slated for last week never happened. I apologize for this but also am undeterred in my commitment to continue this blog!

In order to make up for the post I missed, I'll deliver two by this Sunday, of which this will be the first. The topic of this blog is a really easy one but also something totally essential to SISO control: transfer functions. I'll also leave you guys with an open question I've been mulling over from a book on "open problems in control theory."

1. The Laplace Transform

To talk about transfer functions we need to understand a few Laplace transforms. The Laplace transform is a specific case of the more general idea of an integral transform, which is essentially any linear transformation of the form
$$F(s) = \int_{x\in X} k(s,x) f(x)dx,$$
where \(f\) is the input, \(F\) is the output, and \(k(s,x)\) is a function called the kernel of the transformation. The kernel, along with the set over which the integral is taken, defines the specific transformation. While the theory of general integral transforms is extensive, control theorists are mostly concerned with the Laplace and Fourier transforms, and of these two mostly the Laplace transform. The Laplace transform is given by
$$F(s) = \mathcal{L}f(t) = \int_{0}^{\infty}e^{-st} f(t)dt,$$
and itself has a long and interesting history in the theory of functions, but for our purposes it is simply a way of solving differential equations by turning them into algebraic equations. It's actually easy to see how this happens. Let's suppose \(y(t)\) is a time-domain function whose Laplace transform is denoted \(Y(s)\). We want to find the Laplace transform of \(\dot{y}(t)\). This is
$$\int_{0}^{\infty}e^{-st} \dot{y}(t)dt = e^{-st}y(t)\Big|_0^\infty - \int_{0}^{\infty}(-se^{-st}) y(t)dt = sY(s),$$
assuming the surface term vanishes. Applying this argument recursively yields the important result
$$\mathcal{L}y^{(n)}(t) = s^nY(s),$$
which we shall use in the next section.
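
If you'd rather not repeat the integration by parts by hand, sympy can check the result symbolically; here's a minimal sketch (my addition) using a sample function with \(y(0) = 0\), so the surface term genuinely vanishes:

```python
# Sketch (my addition): a symbolic check that differentiation becomes multiplication by s.
# sympy's laplace_transform gives sY(s) - y(0); with y(0) = 0 the surface term vanishes.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
y = sp.exp(-2 * t) * sp.sin(3 * t)                       # sample function with y(0) = 0
Y = sp.laplace_transform(y, t, s, noconds=True)
dY = sp.laplace_transform(sp.diff(y, t), t, s, noconds=True)
print(sp.simplify(dY - s * Y))                           # prints 0
```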

2. Transfer Functions

Control theory has been said to have emerged from two strains of engineering heritage: electrical engineering and mechanics. Electrical engineering is formulated in terms of input-output relationships for black-box systems. A signal \(u\) is fed into the box and a response \(y\) is output. To the electrical engineer, the objective of feedback control is to shape the input signal to achieve the desired output. Mechanics, on the other hand, is formulated in terms of differential equations. To a mechanical engineer, the objective of feedback control is to find a forcing term for the equation which produces the desired solution. The Laplace transform gives us a way to represent the differential equation for a system as an input-output relation--so long as the equation is linear (you can look at Blog 1 to find out how to approximate a nonlinear system by a linear one). Let
$$a_ny^{(n)}+a_{n-1}y^{(n-1)}+\cdots+a_1\dot{y}+a_{0}y = b_mu^{(m)}+b_{m-1}u^{(m-1)}+\cdots+b_1\dot{u}+b_{0}u$$ 
be our model of the system. Applying the Laplace transformation we have
$$\begin{aligned}\mathcal{L}a_ny^{(n)}+\mathcal{L}a_{n-1}y^{(n-1)}+\cdots+&\mathcal{L}a_1\dot{y}+\mathcal{L}a_{0}y\\&= \mathcal{L}b_mu^{(m)}+\mathcal{L}b_{m-1}u^{(m-1)}+\cdots+\mathcal{L}b_1\dot{u}+\mathcal{L}b_{0}u.\\\end{aligned}$$ 
The LHS is 
$$\begin{aligned}a_ns^nY(s) + a_{n-1}s^{n-1}Y(s) +\cdots &+ a_{1}sY(s) + a_{0}Y(s)\\ = &(a_ns^n+a_{n-1}s^{n-1}+\cdots +a_1s +a_0)Y(s),\\\end{aligned}$$
and the RHS is
$$\begin{aligned}b_ms^mU(s) + b_{m-1}s^{m-1}U(s) +\cdots &+ b_{1}sU(s) + b_{0}U(s)\\ = &(b_ms^m+b_{m-1}s^{m-1}+\cdots +b_1s +b_0)U(s).\\\end{aligned}$$
Putting these together we have
$$\frac{Y(s)}{U(s)} = \frac{b_ms^m+b_{m-1}s^{m-1}+\cdots +b_1s +b_0}{a_ns^n+a_{n-1}s^{n-1}+\cdots +a_1s +a_0}.$$  
We typically denote the ratio \(Y(s)/U(s)\) by a single function, say \(H(s)\); this is the transfer function. The transfer function can be used to determine virtually every significant thing about the controller and system, from stability to rise/settling times, overshoot, gain and phase margins, etc. In fact, without transfer functions there's no easy way of understanding what is known as "classical" control theory.
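
As a concrete illustration (my addition, not part of the original post), here's a small scipy.signal sketch that builds the transfer function of the example system \(\ddot{y} + 2\dot{y} + y = u\) and reads a couple of step-response quantities off of it:

```python
# Sketch (my addition): build H(s) for y'' + 2 y' + y = u and inspect its step response.
import numpy as np
from scipy import signal

H = signal.TransferFunction([1.0], [1.0, 2.0, 1.0])      # b0 = 1; a2, a1, a0 = 1, 2, 1
t, y = signal.step(H)

print("DC gain (final value):", y[-1])
print("time to reach 98% of the final value:", t[np.argmax(y >= 0.98 * y[-1])])
```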

3. An Open Question

So now that I've described transfer functions, I'll leave you with an open question. Suppose we have a transfer function \(G\); find transfer functions \(G_0\) and \(H\) for which
$$ G = G_0\circ H.$$
It has been shown by Fernandez and Martinez-Garcia (G. Fernandez, "Preservation of SPR functions and stabilization by substitutions in SISO plants," IEEE Transactions on Automatic Control, vol. 44, no. 11, pp. 2171-2174, 1999; G. Fernandez and J. Alvarez, "On the preservation of stability in families of polynomials via substitutions," Int. J. of Robust and Nonlinear Control, vol. 10, no. 8, pp. 671-685, 2000) that controlling \(G\) with a compensator \(K(s)\) is equivalent to controlling \(G_0\) with the substituted compensator \(K(H(s))\). This is one of those interesting problems in classical control that piques my interest. I've been working on it a bit and might be announcing a few results soon ;-)

Saturday, May 14, 2016

Weekly Blog 1: Linearization and the Informal Perturbation Approach

I've decided to devote the first weekly blog to looking at linearization--an incredibly powerful and pervasive technique in control. I will also look at how many informal approaches connect linearization to perturbation theory; on a side note, I've always wanted to see how a formal perturbation approach would stack up, if formal perturbation theory is even relevant here at all, but I won't have time for that this week.

1. Introduction to Linearization

Consider the following plant model:
$$\ddot{x} + \frac{K}{1-x/r} = u(t),\ x(0) = x_0,\ \dot{x}(0) = V_0 $$
Suppose the control objective is to regulate the solution to approach a setpoint \(x_{c}\) within a specified settling time and overshoot range, reject disturbances, etc. This is equivalent to finding a compensator function for \(u\) and the required gains to achieve these objectives, but unlike what you might have seen in a linear controls course, the plant is nonlinear, hence no simple transfer function can be obtained. We could, of course, try to control the nonlinear plant directly, but with this approach we would have to throw out all of the wonderful linear theory that your course developed. You might think that we could just replace it with an equally robust and practical nonlinear theory, but sadly no such theory exists for general nonlinear systems, despite years of effort. Indeed, even proving the stability of most nonlinear systems is quite a Herculean task.

A second approach is to find a way to make the linear theory fit the nonlinear model--viz. to make the nonlinear model look linear. Although this might seem crazy, it will actually work so long as the system state--in this case \(x\)--does not deviate too much from a specified point at which we linearize the model. How exactly do we linearize the model? In this case we observe
$$ \frac{1}{1-x/r} = 1 + \frac{x}{r} + \left(\frac{x}{r}\right)^2 +  \left(\frac{x}{r}\right)^3 + \dots$$
which converges so long as \(-r < x< r\). If \(|x|\ll r\) we can argue that all the terms beyond the first-order linear term are too small to matter and can be discarded. This truncation of the series to first-order allows us to make the approximation
$$\frac{1}{1-x/r} \approx 1 + \frac{x}{r} $$
which we place into the original plant model to obtain
$$\ddot{x} + K\left(1+\frac{x}{r}\right) = u(t).$$
The offset can be compensated for by defining \(\tilde{u} = u - K\) and we may now write the system as
$$\ddot{x} + K\frac{x}{r} = \tilde{u}$$
which is treatable by the linear techniques so long as \(|x|\ll r\). Of course, if this limit is exceeded, higher-order terms in the expansion become relevant and the linear approximation breaks down, leaving us back at square one with the nonlinear problem.
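
To see the \(|x|\ll r\) caveat in action, here's a small simulation sketch (my addition; the values of \(K\), \(r\), and the initial displacement are arbitrary) comparing the nonlinear plant against its linearization:

```python
# Sketch (my addition): compare the nonlinear plant xdd + K/(1 - x/r) = u with the
# linearized plant xdd + (K/r) x = u_tilde, holding u = K (so u_tilde = 0) and starting
# at a small displacement |x0| << r.
import numpy as np
from scipy.integrate import solve_ivp

K, r, x0 = 1.0, 1.0, 0.05

def nonlinear(t, z):
    x, v = z
    return [v, K - K / (1 - x / r)]          # xdd = u - K/(1 - x/r), with u = K

def linear(t, z):
    x, v = z
    return [v, -K * x / r]                   # xdd = u_tilde - (K/r) x, with u_tilde = 0

t_eval = np.linspace(0.0, 20.0, 500)
zn = solve_ivp(nonlinear, (0.0, 20.0), [x0, 0.0], t_eval=t_eval, rtol=1e-9)
zl = solve_ivp(linear, (0.0, 20.0), [x0, 0.0], t_eval=t_eval, rtol=1e-9)

print("max |x_nonlinear - x_linear|:", np.max(np.abs(zn.y[0] - zl.y[0])))
```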

2. A General Approach to Linearization 


So at this point you have seen how a system can be linearized by expanding the nonlinear part in a Taylor series and dropping the higher-order terms. Unfortunately, from the last example you might be under the impression that this can only be done with particularly nice models which have an obvious series expansion. Not so! Indeed, any smooth function \(f(x)\) can be expanded about an arbitrary point \(c\) via
$$f(x) = f(c) + f'(c)(x-c) + \frac{1}{2}f''(c)(x-c)^2 + \dots$$
so that the general first-order ODE \(\dot{x} = f(x,u)\) can be linearized about a trim point \(x=c,u=u_0\) by
$$\dot{x} = f(x,u) \approx f(c,u_0) + \frac{\partial f}{\partial x}|_{c,u_0}(x-c) + \frac{\partial f}{\partial u}|_{c,u_0}(u-u_0) $$
And you may have been told that any system of ODEs can be put into first-order form by defining mulligan (i.e. auxiliary) states for the derivative terms (e.g. \(v = \dot{x}\) so that \(\ddot{x} = \dot{v}\)). So, extending the linearization to the case of an arbitrary number of plant and controller states \(x_1,x_2,\dots,x_n\) and \(u_1,\dots,u_m\), we simply linearize about a set of conditions which uniquely specifies the trim point and sum over all the partials. This is
$$\begin{aligned}
\dot{x}_i = &f_i(x_1,\dots,x_n;u_1,\dots,u_m) \\
&\approx f_i(\text{Trim}) + \sum_{j=1}^n\frac{\partial f_i}{\partial x_j}|_{\text{Trim}}(x_j-x(\text{Trim})_j)  + \sum_{k=1}^m\frac{\partial f_i}{\partial u_k}|_{\text{Trim}}(u_k-u(\text{Trim})_k).\\
\end{aligned}$$
It is of course easier to write this as a matrix equation
$$\dot{x} \approx f(\text{Trim}) + A(x-x(\text{Trim})) + B(u-u(\text{Trim})),$$
where
$$ A_{ij} = \frac{\partial f_i}{\partial x_j}|_{\text{Trim}},\ B_{ik} =  \frac{\partial f_i}{\partial u_k}|_{\text{Trim}}$$
i.e. the state and control Jacobians evaluated at the trim point. Note that we have assumed the plant model is smooth not only in the dynamical variable but in all of the state and control variables--viz. the EOMs are at least \(C^1\) for the linearization to be consistent, and hopefully \(C^{\infty}\) so that the Taylor expansion is justified. The deviations from trim, \(x_j-x(\text{Trim})_j\), discussed further below, are assumed small enough that the linearized system is an accurate approximation of the true plant dynamics. The limits of the linear region may sometimes be obtained by a rigorous proof, but more often than not the applicability of the linearized model is checked by simulating the nonlinear plant with the linear controller.
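
In practice the Jacobians are often built numerically rather than by hand. Here's a minimal finite-difference sketch (my addition; the helper function and the trim values are just illustrative), applied to the Section 1 plant in first-order form:

```python
# Sketch (my addition): build A and B at a trim point by central finite differences.
# An analytic Jacobian is preferable when available; this is just for illustration.
import numpy as np

def jacobians(f, x_trim, u_trim, eps=1e-6):
    n, m, p = len(x_trim), len(u_trim), len(f(x_trim, u_trim))
    A, B = np.zeros((p, n)), np.zeros((p, m))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f(x_trim + dx, u_trim) - f(x_trim - dx, u_trim)) / (2 * eps)
    for k in range(m):
        du = np.zeros(m)
        du[k] = eps
        B[:, k] = (f(x_trim, u_trim + du) - f(x_trim, u_trim - du)) / (2 * eps)
    return A, B

# The Section 1 plant in first-order form with K = r = 1: x = [position, velocity].
f = lambda x, u: np.array([x[1], u[0] - 1.0 / (1.0 - x[0])])
A, B = jacobians(f, np.array([0.0, 0.0]), np.array([1.0]))
print(A)   # expect [[0, 1], [-1, 0]]
print(B)   # expect [[0], [1]]
```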

3. Informal Perturbation Approach

There are two things to notice about the general linearization formula which prevent a typical transfer function from being obtained. The first is the fact that the trim is subtracted off of the state variables, which means the system does not look like the typical linear system
$$\dot{x} = A(t)x(t) + B(t)u(t).$$
The second is the constant left over from evaluating the EOMs at trim. The informal perturbation approach (as opposed to a rigorous, formal perturbation approach) is a method for dealing with these two nonidealities. To apply this approach we first separate the solution into two parts: the homogeneous or known part \(x_h(t)\) and the perturbation \(\delta_x(t)\). The total solution is \(x(t) = x_h(t) + \delta_x(t)\). The homogeneous part is typically called the reference solution, and it need not be time-invariant as many texts would imply; it could, for instance, be computed numerically and perturbations made about it. The important property of the reference solution is that it is a known function, so \(\dot{x}_h\) is known too, and furthermore we let
$$\dot{x}_h = f(\text{Trim}).$$
Note that this implies trim is not necessarily constant and indeed it doesn't need to be in general. The perturbation formalism assumes that \(\text{Trim}(t)\) is known since this is equivalent to \(x_h(t)\), so a time-varying trim simply results in a time-varying linear state model. Next we introduce the perturbation quantities as
$$\delta_{x,i} = x_i - x_i(\text{Trim}),\ \delta_{u,i} = u_i - u_i(\text{Trim})$$
and, substituting \(x = x_h + \delta_x\) into the linearized model and subtracting \(\dot{x}_h = f(\text{Trim})\) from both sides, we finally write
$$\dot{\delta}_x = A(\text{Trim}(t))\delta_{x} + B(\text{Trim}(t))\delta_{u},$$
a linearized form of the dynamics which is valid for all plant and controller states that are "close enough" to the trim, and which can be treated using linear control techniques.

Now, go forth and linearize! The first week's post is finished--and with a whole 23 hours to spare!

Sunday, May 8, 2016

An Early Summer's New Year's Resolution

I've never really been that fascinated with New Year's resolutions. To me, the resolutions you keep tend not to be those you've cooked up because of some arbitrary tradition, but rather the ones you were planning on getting to but never remembered to get started on. Thus, why wait for the new year when, as the aphorism goes, "tomorrow is the first day of the rest of your life"?

This is the spirit in which I've dedicated myself to the following project: to write a new post on some aspect of control theory or practice every week until the new year. Depending on how I am feeling at that point, I may abandon this project or continue on, but regardless of which of these fates I consign my little endeavor to, the larger goal--of more thoroughly exploring my own field of study and expertise and sharing this exploration with the public--will be attained. The rules are as follows:

1. The post will be due Sunday, at 11:59 p.m. sharp, every week.
2. There will not be a word requirement or limit, nor a required topic, other than control of course.
3. Topics cannot be repeated, but may be expanded upon.

With these modest goals, I will set about this new project. Feedback from anyone is welcome but none of the information will be guaranteed to be accurate ;-)