## Wednesday, 20 July 2016

Instructions on how to study for the resit exam in September 2016 are the same as for the May 2016 exam.

## Sunday, 15 May 2016

### Picard's Theorem

One of the students has asked me: "In the past papers, the statement of the Picard Theorem is in
a different form from the version given in the notes. Which one shall we
use?"

There are various versions of Picard's Theorem which, although they may be formulated with different technical details, all essentially state the same thing: that unique solutions of ODEs can be obtained by means of Picard iteration. If you are asked to state "Picard's Theorem", then of course I am happy with any correct and sensible version of the theorem. However, note that if I ask you to prove a certain result, your proof should of course relate to the version of the theorem that you state, or to the specific version (from the lectures) that I ask you to prove.
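To make the statement concrete, here is a small numerical sketch of Picard iteration (my own illustration, not from the notes or any exam): for the IVP \(y'=y\), \(y(0)=1\), whose solution is \(e^t\), the iterates \(y_{n+1}(t)=1+\int_0^t y_n(s)\,ds\) converge to the solution. The integral is approximated by the trapezoidal rule on a grid.

```python
# Picard iteration for the IVP y' = y, y(0) = 1, with solution e^t.
# Iterates are stored as values on a uniform grid over [0, 1]; the
# integral in y_{n+1}(t) = 1 + int_0^t y_n(s) ds is approximated by
# the trapezoidal rule. (Illustrative sketch, not from the notes.)
import math

N = 1000                      # number of grid intervals on [0, 1]
h = 1.0 / N                   # grid spacing

def picard_step(y):
    """One Picard iterate: t -> 1 + int_0^t y(s) ds (trapezoidal rule)."""
    out = [1.0]
    acc = 0.0
    for i in range(1, len(y)):
        acc += 0.5 * h * (y[i - 1] + y[i])
        out.append(1.0 + acc)
    return out

y = [1.0] * (N + 1)           # y_0(t) = 1, the constant initial guess
for _ in range(20):           # 20 Picard iterations
    y = picard_step(y)

print(y[-1], math.e)          # y(1) is close to e = 2.71828...
```

After \(n\) iterations the iterate agrees with the Taylor polynomial of \(e^t\) of degree \(n\), up to quadrature error, which is exactly the mechanism by which the iteration converges in the proof.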

Past exam papers are useful for getting an idea of whether or not you are broadly well prepared for the exam. There is little point in learning answers to past exam papers by heart. Also, in most cases the provided "model answers" are not the unique correct formulation of an answer.

In general, let me assure you that I will give generous credit to answers that demonstrate your understanding of the material you are asked about, rather than split hairs over whether or not you provide exactly the answer that I would have given as a model.


## Sunday, 8 May 2016

### Persistence of transverse intersections

In Example 1.4.9 we discuss the persistence of transverse intersections as an application of the Implicit Function Theorem. Someone asked me about this in the second revision class. I will try to elucidate the final conclusion of this example from the notes.

"Persistence" of the isolated intersection of two curves in \(\mathbb{R}^2\) in this example means that if we "perturb" the curves slightly, then there remains a unique isolated intersection of these two curves near the original isolated intersection.

We represent the two curves by differentiable functions that parametrize these curves: \(f,g:\mathbb{R}\to\mathbb{R}^2\). We assume the intersection to be at \(f(0)=g(0)\). We now consider a parametrized family of functions \(f_\lambda,g_\lambda:\mathbb{R}\to\mathbb{R}^2\), representing "perturbations of the original curves", where \(\lambda\) serves as the "small parameter" so that \(f_0=f\) and \(g_0=g\). We furthermore assume that the perturbations are such that the derivatives \(Df_\lambda(0)\) and \(Dg_\lambda(0)\) are continuous in \(\lambda\) near \(\lambda=0\).

In the example it is proposed to consider the function \(h_\lambda:\mathbb{R}^2\to\mathbb{R}^2\) defined as \(h_\lambda(s,t):=f_\lambda(s)-g_\lambda(t)\). By construction \(h_0(0,0)=(0,0)\), and indeed the intersection points of the curves represented by \(f_\lambda\) and \(g_\lambda\) are given by \(h_\lambda^{-1}(0,0)\).

It follows that \(Dh_\lambda(s,t)=(Df_\lambda(s),-Dg_\lambda(t))\), as in the notes. This two-by-two matrix is non-singular (i.e. has no zero eigenvalue or, equivalently, is invertible) if and only if the two-dimensional vectors \(Df_\lambda(s)\) and \(Dg_\lambda(t)\) are not parallel (i.e. not real multiples of each other).

We now use this to analyze the intersection at \(\lambda=0\): when \(Df_0(0)\) and \(Dg_0(0)\) (the tangent vectors to the respective curves at the intersection point) are not parallel, then \(Dh_0(0,0)\) is invertible and the intersection of the two curves at \(f_0(0)=g_0(0)\) is isolated (there is a neighbourhood of this point containing no other intersection).

Considering a small variation of \(\lambda\), we note that by application of the Implicit Function Theorem (applied to \(h_\lambda(s,t)\) viewed as a function of \((s,t)\) and the parameter \(\lambda\)), for sufficiently small \(\lambda\) there exist continuous functions \(s(\lambda)\) and \(t(\lambda)\) such that \((s(\lambda),t(\lambda))\) is the unique element of \(h_\lambda^{-1}(0,0)\) near \((0,0)=(s(0),t(0))\). This unique "continuation" of the original solution \((0,0)\) is of course also isolated: if \(Dh_0(0,0)\) is invertible then so is \(Dh_\lambda(s(\lambda),t(\lambda))\), by continuity of all the dependences, so the vectors \(Df_\lambda(s(\lambda))\) and \(Dg_\lambda(t(\lambda))\) are not parallel for sufficiently small \(\lambda\).
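The argument can be seen numerically in a concrete (invented) example. Below I take the curves \(f_\lambda(s)=(s,\,s^2+\lambda)\) and \(g_\lambda(t)=(t^3-\lambda,\,t)\), which at \(\lambda=0\) meet transversally at the origin with tangents \((1,0)\) and \((0,1)\), and locate the zero of \(h_\lambda\) near \((0,0)\) by Newton's method; the invertibility of \(Dh_\lambda\) is exactly what makes the Newton step well defined.

```python
# Persistence of a transverse intersection: sketch with invented curves
#   f_lam(s) = (s, s**2 + lam)     tangent (1, 0) at s = 0
#   g_lam(t) = (t**3 - lam, t)     tangent (0, 1) at t = 0
# We solve h_lam(s, t) = f_lam(s) - g_lam(t) = (0, 0) near (0, 0)
# with Newton's method for the 2x2 system.

def newton_intersection(lam, s=0.0, t=0.0, steps=50):
    for _ in range(steps):
        # components of h_lam(s, t)
        h1 = s - (t**3 - lam)
        h2 = (s**2 + lam) - t
        # Jacobian Dh_lam(s, t) = [[1, -3 t^2], [2 s, -1]]
        a, b = 1.0, -3.0 * t**2
        c, d = 2.0 * s, -1.0
        det = a * d - b * c      # nonzero near (0, 0): tangents not parallel
        # Newton update: (s, t) -= J^{-1} (h1, h2)
        s -= ( d * h1 - b * h2) / det
        t -= (-c * h1 + a * h2) / det
    return s, t

s0, t0 = newton_intersection(0.0)    # unperturbed intersection at (0, 0)
s1, t1 = newton_intersection(0.01)   # perturbed: a nearby intersection persists
print(s0, t0)   # ~ (0, 0)
print(s1, t1)   # small values near (0, 0)
```

For small \(\lambda\) the intersection point found moves continuously, matching the conclusion \((s(\lambda),t(\lambda))\to(0,0)\) as \(\lambda\to 0\).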

## Friday, 6 May 2016

### 2009 exam question 2

A student has asked me to go into more detail on the model answers for parts (c)(iii) and (d)(ii) of the 2009 paper.

First let me recall some generalities about the Jordan-Chevalley decomposition. In the (complex) Jordan form, the diagonal part of the matrix is semi-simple (it obviously has a diagonal complex Jordan form) and the remaining off-diagonal part is nilpotent (it is upper or lower triangular with zero diagonal, and one easily verifies that taking powers of such matrices eventually results in the zero matrix). One also easily verifies in Jordan form that the diagonal part and the upper or lower triangular part of the matrix commute with each other. A very convenient fact is that the properties "semi-simple", "nilpotent" and "commutation" are intrinsic and do not depend on the choice of coordinates:

If \(A^k=0\) then \((TAT^{-1})^k=0\) for any invertible matrix \(T\).

If \(A\) is complex diagonalizable, then so is \(TAT^{-1}\) for any invertible matrix \(T\).

If \(A\) and \(B\) commute, i.e. \(AB=BA\), then also \((TAT^{-1})(TBT^{-1})=(TBT^{-1})(TAT^{-1})\) for any invertible matrix \(T\).

So we observe that we can prove the Jordan-Chevalley decomposition (and its uniqueness) directly from the Jordan normal form.

We proved in an exercise that \(\exp(A+B)=\exp(A)\exp(B)\) if \(AB=BA\). So in particular, if \(A=A_s+A_n\) is the Jordan-Chevalley decomposition, then \(\exp(At)=\exp(A_st)\exp(A_nt)\). This is very useful since the first factor \(\exp(A_st)\) depends only on the eigenvalues of \(A\), and thus contains terms depending only on \(e^{\lambda_i t}\), where \(\lambda_i\) denote the eigenvalues of \(A\) (if eigenvalues are complex this leads to terms with dependencies \(e^{\Re(\lambda_i)t}\cos(\Im(\lambda_i) t)\) and \(e^{\Re(\lambda_i)t}\sin(\Im(\lambda_i) t)\), where \(\Re\) and \(\Im\) denote the real and imaginary parts, respectively).

We know that sometimes we also have polynomial terms appearing in the expression \(\exp(At)\). These polynomials come from the second factor \(\exp(A_nt)\), since \(\exp(A_n t)=\sum_{m=0}^{k-1}\frac{A_n^m}{m!} t^m\) (this follows from the fact that \(A_n^k=0\)).
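The terminating series is easy to check directly. The following sketch (my own example, not from the exam) computes the finite sum \(\sum_{m=0}^{k-1} A_n^m t^m/m!\) for the \(3\times 3\) single nilpotent Jordan block, for which \(A_n^3=0\), so the entries of \(\exp(A_n t)\) are the polynomials \(1\), \(t\) and \(t^2/2\).

```python
# exp(A_n t) as a finite sum for a nilpotent matrix: for the 3x3
# single nilpotent Jordan block, A_n^3 = 0, so
#   exp(A_n t) = I + A_n t + A_n^2 t^2 / 2,
# with polynomial entries in t. (Illustrative example.)

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def exp_nilpotent(An, t, k):
    """Finite sum  sum_{m=0}^{k-1} An^m t^m / m!  (valid when An^k = 0)."""
    n = len(An)
    term = [[float(i == j) for j in range(n)] for i in range(n)]  # An^0 = I
    total = [row[:] for row in term]
    fact = 1.0
    for m in range(1, k):
        term = matmul(term, An)            # An^m
        fact *= m                          # m!
        for i in range(n):
            for j in range(n):
                total[i][j] += term[i][j] * t**m / fact
    return total

An = [[0.0, 1.0, 0.0],
      [0.0, 0.0, 1.0],
      [0.0, 0.0, 0.0]]        # single nilpotent Jordan block, An^3 = 0

E = exp_nilpotent(An, 2.0, 3)
print(E)   # entries 1, t, t^2/2 evaluated at t = 2: [[1,2,2],[0,1,2],[0,0,1]]
```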

Question (c)(iii) is about the Jordan-Chevalley decomposition of \(\exp(At)\). The only thing to check is that we can write this as a sum of a semi-simple matrix and a nilpotent matrix which commute with each other. (The Jordan-Chevalley decomposition theorem then asserts that this decomposition is in fact unique.)

The question contains the hint that \(\exp(A_s t)\) is semi-simple. We can see this by verifying that if \(TA_sT^{-1}\) is (complex) diagonal, then so is \(T\exp(A_st)T^{-1}=\exp(TA_sT^{-1}t)\).

Let us check that the semi-simple part of \(\exp(At)\) is indeed equal to \(\exp(A_st)\) (in the sense of the Jordan-Chevalley decomposition). We write \(\exp(At)=\exp(A_st)+N\) where \(N:=\exp(At)-\exp(A_st)\). Recalling that \(\exp(At)=\exp(A_st)\exp(A_nt)\), we have \(N=\exp(A_st)[\exp(A_nt)-1]\), and as these two factors commute, \(N^k=\exp(A_s tk)[\exp(A_nt)-1]^k\). If \(A_n^k=0\) we also have \([\exp(A_n t)-1]^k=0\), since \(\exp(A_n t)-1=p(A_n)\) is a polynomial in \(A_n\) with \(p(0)=0\). Thus \(N\) is nilpotent, and it is readily checked that \(N\) also commutes with \(\exp(A_s t)\). So \(\exp(A t)=\exp(A_s t)+N\) is the Jordan-Chevalley decomposition of \(\exp(A t)\), where \(\exp(A_s t)\) is the semi-simple part and \(N=\exp(A_s t)[\exp(A_n t)-1]\) is the nilpotent part.
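As a quick sanity check (a 2-by-2 example of my own, not the exam matrix): for the Jordan block \(A=\begin{pmatrix}1&1\\0&1\end{pmatrix}\) we have \(A_s=I\), \(A_n=\begin{pmatrix}0&1\\0&0\end{pmatrix}\), and \(\exp(At)=e^t\begin{pmatrix}1&t\\0&1\end{pmatrix}\). The decomposition above gives \(N=\exp(A_st)[\exp(A_nt)-1]=\begin{pmatrix}0&te^t\\0&0\end{pmatrix}\), which is visibly nilpotent.

```python
# Numerical check of exp(At) = exp(A_s t) + N for the 2x2 Jordan
# block A = [[1, 1], [0, 1]]: here A_s = I and A_n = [[0, 1], [0, 0]],
# so exp(A t) = e^t [[1, t], [0, 1]] and exp(A_s t) = e^t I.
import math

t = 0.7
et = math.exp(t)

expAt  = [[et, t * et], [0.0, et]]   # exp(A t)
expAst = [[et, 0.0], [0.0, et]]      # exp(A_s t), the semi-simple part

# nilpotent part N = exp(A t) - exp(A_s t)
N = [[expAt[i][j] - expAst[i][j] for j in range(2)] for i in range(2)]

# N^2 should be the zero matrix, confirming N is nilpotent
N2 = [[sum(N[i][k] * N[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(N)    # [[0.0, t * e^t], [0.0, 0.0]]
print(N2)   # zero matrix
```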

In part (d)(ii), \(B\) is the projection onto the eigenspace for eigenvalue \(-1\) whose kernel is the generalised eigenspace for eigenvalue \(+1\), and \(D=A_n\). As there is a Jordan block for eigenvalue \(+1\) (and not for eigenvalue \(-1\)), the range of \(A_n\) is the eigenspace of \(A\) for eigenvalue \(+1\) (check this by writing out a 2-by-2 matrix with a Jordan block); the kernel of \(A_n\) is spanned by the eigenspaces of \(A\). Since the range of \(D\) lies inside the kernel of \(B\), and the range of \(B\) inside the kernel of \(D\), it follows that \(DB=BD=0\).

### 2014 exam question 4 (v)

A student asked me about the model answer, which is very short and perhaps not so obviously correct.

If an equilibrium \(y\) has a stable manifold, then the \(\omega\)-limit set of every point \(x\) on this manifold equals \(\{y\}\) (as \(x\) converges to \(y\) under the flow). However, if we have a saddle point, there exists also an unstable manifold. If an initial point \(x\) does not lie on the stable manifold of an equilibrium, then by definition it does not converge to the equilibrium. It is a more subtle question whether it could accumulate on the equilibrium. I did not set this exam question, but the model answer is perhaps a bit too brief. It could namely be that for a point \(x\) that does not lie on the stable manifold of an equilibrium \(y\), we still have \(y\in\omega(x)\). For instance, there could be a heteroclinic or homoclinic cycle (consisting of equilibria and connecting orbits) to which \(x\) accumulates and that contains the saddle equilibrium \(y\).

In this exam question there is only one equilibrium, so we could only have a homoclinic cycle (a connecting orbit from one saddle to itself). But a homoclinic cycle to a saddle would imply the existence of an equilibrium inside the area enclosed by the homoclinic loop (by Poincaré-Bendixson (PB) arguments similar to those establishing the existence of an equilibrium inside the area enclosed by a periodic orbit), and as there is only one equilibrium in the system under consideration, this cannot be the case here. So there is no homoclinic cycle, there is only one equilibrium in \(A\), and orbits cannot leave \(A\) but also do not converge to the equilibrium. Then by PB they need to accumulate on a periodic solution (in \(A\)).
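To illustrate the Poincaré-Bendixson conclusion in the simplest possible setting (a toy system of my own, not the exam system): in polar coordinates the flow \(\dot r=r(1-r^2)\), \(\dot\theta=1\) has every orbit with \(r(0)>0\) accumulating on the periodic orbit \(r=1\). A crude Euler integration of the radial equation already shows the accumulation.

```python
# Toy illustration of accumulation on a periodic orbit (NOT the exam
# system): for r' = r(1 - r^2), theta' = 1 in polar coordinates, the
# circle r = 1 is a periodic orbit attracting all orbits with r(0) > 0.
# We integrate the radial equation with explicit Euler steps.

def flow_r(r0, t_end=20.0, dt=1e-3):
    """Euler integration of r' = r(1 - r^2) from r(0) = r0 up to t_end."""
    r = r0
    for _ in range(int(t_end / dt)):
        r += dt * r * (1.0 - r * r)
    return r

print(flow_r(0.1))   # approaches 1 from inside the limit cycle
print(flow_r(2.0))   # approaches 1 from outside the limit cycle
```

Here the \(\omega\)-limit set of any such initial point is the circle \(r=1\), a periodic orbit, exactly the kind of conclusion PB delivers in the exam question.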

The \(\omega\)-limit set of a point \(x\) consists of the points on which \(\phi^t(x)\) accumulates, i.e. points \(y\) such that there exists an increasing sequence \(t_n\to\infty\) with \(\lim_{n\to\infty} \phi^{t_n}(x)=y\). The model answer asserts: 'there exists \(x\) in \(A\) such that the \(\omega\)-limit set of \(x\) is not contained in the stable manifold of the singularity. Hence \(A\) contains a periodic orbit.'

## Tuesday, 3 May 2016

### Questionnaire

In the final revision class I handed out a questionnaire to collect more detailed feedback about the course beyond SOLE. If you have not yet filled out and handed the form to me, you can find an electronic copy of the questionnaire here. Please fill it out and send it to me by e-mail, or print it out and leave it in my pigeonhole. Your feedback is very much appreciated.

### Second revision class

I discussed the application of Poincaré-Bendixson theory to make sketches of phase portraits. The short note/summary I used can be found here.
