Instructions on how to study for the resit exam in September 2016 are the same as for the May 2016 exam.
Wednesday, 20 July 2016
Sunday, 15 May 2016
Picard's Theorem
One of the students has asked me: "In the past papers, the statement of the Picard Theorem is in a different form from the version given in the notes. Which one shall we use?"
There are various versions of Picard's Theorem which, although formulated with different technical details, all essentially state the same thing, namely that unique solutions of ODEs can be found by means of Picard iteration. If you are asked to state "Picard's Theorem", then of course I would be happy with any correct and sensible version of this theorem. However, note that if I ask you to prove a certain result, your proof should of course relate to the version of the theorem that you provide, or to the specific version (from the lectures) that I ask you to prove.
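As an aside, for those who like to experiment: Picard iteration is easy to try numerically. Below is a small Python sketch (my own illustration, not part of the course materials) for the ODE \(x'=x\), \(x(0)=1\), whose Picard iterates converge to the solution \(e^t\).

```python
import numpy as np

# Picard iteration for x'(t) = f(t, x) with x(0) = x0:
#   x_{n+1}(t) = x0 + integral_0^t f(s, x_n(s)) ds
def picard_iterate(f, x0, t, n_iter):
    x = np.full_like(t, x0)          # x_0(t) = x0, the constant initial guess
    for _ in range(n_iter):
        integrand = f(t, x)
        # cumulative trapezoidal approximation of the integral from 0 to t
        integral = np.concatenate(([0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
        x = x0 + integral
    return x

t = np.linspace(0.0, 1.0, 201)
x = picard_iterate(lambda s, u: u, 1.0, t, 10)   # f(t, x) = x, so x(t) = e^t
assert abs(x[-1] - np.e) < 1e-3
```

Each iterate is a polynomial approximation (here the Taylor partial sums of \(e^t\)), which illustrates the convergence asserted by the theorem.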
Past exam papers are useful for getting an idea of whether or not you are broadly well prepared for the exam. There is little point in learning answers to past exam papers by heart. Also, in most cases the provided "model answers" are not the unique correct formulation of an answer...
In general, let me assure you that I will give generous credit to answers to exam questions that demonstrate your understanding of the material that you are asked about, rather than split hairs over whether or not you provide exactly the answer that I would have given as a model...
Sunday, 8 May 2016
Persistence of transverse intersections
In Example 1.4.9 we discuss the persistence of transverse intersections as an application of the Implicit Function Theorem. Someone asked me about this in the second revision class. I will try to elucidate the final conclusion of this example from the notes.
"Persistence" of the isolated intersection of two curves in \(\mathbb{R}^2\) in this example, means that if we "perturb" the curves slightly, then there remains to be a unique isolated intersection of these two curves near the original isolated intersection.
We represent the two curves by differentiable functions that parametrize these curves: \(f,g:\mathbb{R}\to\mathbb{R}^2\). We assume the intersection to be at \(f(0)=g(0)\). We now consider a parametrized family of functions \(f_\lambda,g_\lambda:\mathbb{R}\to\mathbb{R}^2\), representing "perturbations of the original curves", where \(\lambda\) serves as the "small parameter" so that \(f_0=f\) and \(g_0=g\). We furthermore assume that the perturbations are such that the derivatives \(Df_\lambda(0)\) and \(Dg_\lambda(0)\) are continuous in \(\lambda\) near \(\lambda=0\).
In the example it is proposed to consider the function \(h_\lambda:\mathbb{R}^2\to\mathbb{R}^2\) defined as \(h_\lambda(s,t):=f_\lambda(s)-g_\lambda(t)\). By construction \(h_0(0,0)=(0,0)\), and indeed the intersection points of the curves represented by \(f_\lambda\) and \(g_\lambda\) are given by \(h_\lambda^{-1}(0,0)\).
It follows that \(Dh_\lambda(s,t)=(Df_\lambda(s),-Dg_\lambda(t))\), as in the notes. This two-by-two matrix is nonsingular (i.e. has no zero eigenvalue or, equivalently, is invertible) if and only if the two-dimensional vectors \(Df_\lambda(s)\) and \(Dg_\lambda(t)\) are not parallel (i.e. not real multiples of each other).
We now use this to analyze the intersection at \(\lambda=0\): when \(Df_0(0)\) and \(Dg_0(0)\) (which are the tangent vectors to the respective curves at the intersection point) are not parallel, then \(Dh_0(0,0)\) is invertible and the intersection of the two curves at \(f_0(0)=g_0(0)\) is isolated (there is a neighbourhood of this point in which there is no other intersection).
Considering a small variation of \(\lambda\), we note that by application of the Implicit Function Theorem to \(h_0\), for sufficiently small \(\lambda\) there exist continuous functions \(s(\lambda)\) and \(t(\lambda)\) so that \((s(\lambda),t(\lambda))\) is the unique element of \(h_\lambda^{-1}(0,0)\) near \((0,0)=(s(0),t(0))\). This unique "continuation" of the original solution \((0,0)\) is of course also isolated, since if \(Dh_0(0,0)\) is invertible then so is \(Dh_\lambda(s(\lambda),t(\lambda))\) by continuity of all the dependences; so the vectors \(Df_\lambda(s(\lambda))\) and \(Dg_\lambda(t(\lambda))\) are not parallel for sufficiently small \(\lambda\).
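As a concrete numerical companion (my own hypothetical example, not from the notes): take \(f_\lambda(s)=(s,s^2+\lambda)\) and \(g_\lambda(t)=(t,-t)\), which intersect transversally at the origin for \(\lambda=0\). Applying Newton's method to \(h_\lambda(s,t)=f_\lambda(s)-g_\lambda(t)\) tracks the unique nearby intersection for small \(\lambda\):

```python
import numpy as np

# Hypothetical curves f_lam(s) = (s, s**2 + lam) and g_lam(t) = (t, -t)
# intersect transversally at (0, 0) when lam = 0. Newton's method applied to
# h_lam(s, t) = f_lam(s) - g_lam(t) finds the nearby intersection for small lam.
def intersection(lam, iters=20):
    v = np.zeros(2)                                # start at (s, t) = (0, 0)
    for _ in range(iters):
        s, t = v
        h = np.array([s - t, s**2 + lam + t])      # h_lam(s, t)
        Dh = np.array([[1.0, -1.0],                # columns: derivatives in s and t
                       [2 * s, 1.0]])
        v = v - np.linalg.solve(Dh, h)             # Dh invertible: tangents not parallel
    return v

s, t = intersection(0.01)
assert abs(s - t) < 1e-10          # the point lies on both curves (s = t here)
assert abs(s) < 0.05               # and stays near the original intersection
```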
"Persistence" of the isolated intersection of two curves in \(\mathbb{R}^2\) in this example, means that if we "perturb" the curves slightly, then there remains to be a unique isolated intersection of these two curves near the original isolated intersection.
We represent the two curves by differentiable functions that parametrize these curves: \(f,g:\mathbb{R}\to\mathbb{R}^2\). We assume the intersection to be at \(f(0)=g(0)\). We now consider a parametrized family of functions \(f_\lambda,g_\lambda:\mathbb{R}\to\mathbb{R}^2\), representing "perturbations of the original curves", where \(\lambda\) serves as the "small parameter" so that \(f_0=f\) and \(g_0=g\). We furthermore assume that the perturbations are such that the derivatives \(Df_\lambda(0)\) and \(Dg_\lambda(0)\) are continuous in \(\lambda\) near \(\lambda=0\).
In the example it is proposed to consider the function \(h_\lambda(s,t):\mathbb{R}^2\to\mathbb{R}^2\) defined as \(h_\lambda(s,t):=f_\lambda(s)g_\lambda(t).\) By construction \(h_0(0,0)=(0,0)\) and indeed the intersection points of the curves represented by \(f_\lambda\) and\(g_\lambda\) are given by \(h_\lambda^{1}(0,0)\).
It follows that \(Dh_\lambda(s,t)=(Df_\lambda(s),Dg_\lambda(s))\), as in the notes. This twobytwo matrix is nonsingular (ie has no zero eigenvalue, or  equivalently  is invertible) if and only if the twodimensional vectors \(Df_\lambda(s)\) and \(Dg_\lambda(t)\) are not parallel (ie not real multiples of each other).
We now use this to analyze the intersection at \(\lambda=0\): when \(Df_0(0)\) and \(Dg_0(0)\) (which are the tangent vectors to the respective curves at the intersection point) are not parallel, then \(Dh_0(0,0)\) is invertible and the intersection of the two curves at \(f_0(0)=g_0(0)\) is isolated (there is a neighbourhood of this point, where there is no other intersection).
Considering a small variation of \(\lambda\), we note that by application of the Implicit Function Theorem to \(h_0\), for sufficiently small \(\lambda\) there exist continuous functions \(s(\lambda)\) and \(t(\lambda)\) so that \((s(\lambda),t(\lambda))\) is the element of \(h_\lambda ^{1}(0,0)\) near \((0,0)=(s(0),t(0))\). This unique "continuation" of the original solution \(0,0\) is of course also isolated since if \(Dh_0(0,0)\) is invertible then so is \(Dh_\lambda(s(\lambda),t(\lambda))\) by continuity of all the dependences; so the vectors \(Df_\lambda(s(\lambda))\) and \(Dg_\lambda(t(\lambda))\) will not be parallel for sufficiently small \(\lambda\).
Friday, 6 May 2016
2009 exam question 2
A student has asked me to go into more detail on the model answers for parts 2009 (c)(iii) and (d)(ii).
First let me recall some generalities about the Jordan-Chevalley decomposition. In the (complex) Jordan form, the diagonal part of the matrix is semisimple (as it obviously has a diagonal complex Jordan form) and the remaining off-diagonal part is nilpotent (as it is upper or lower triangular with zero diagonal; one easily verifies that taking powers of such matrices eventually always results in the 0 matrix). One also easily verifies in Jordan form that the diagonal part and the upper or lower triangular part of the matrix commute with each other. A very convenient fact is that the properties "semisimple", "nilpotent" and "commuting" are intrinsic and do not depend on the choice of coordinates:
If \(A^k=0\) then \((TAT^{-1})^k=0\) for any invertible matrix \(T\).
If \(A\) is complex diagonalizable, then so is \(TAT^{-1}\) for any invertible matrix \(T\).
If \(A\) and \(B\) commute, i.e. \(AB=BA\), then also \((TAT^{-1})(TBT^{-1})=(TBT^{-1})(TAT^{-1})\) for any invertible matrix \(T\).
So we observe that we can prove the Jordan-Chevalley decomposition (and its uniqueness) directly from the Jordan normal form.
We proved in an exercise that \(\exp(A+B)=\exp(A)\exp(B)\) if \(AB=BA\). So in particular, if \(A=A_s+A_n\) is the Jordan-Chevalley decomposition, then \(\exp(At)=\exp(A_st)\exp(A_nt)\). This is very useful, since the first part \(\exp(A_st)\) depends only on the eigenvalues of \(A\), and thus contains terms depending only on \(e^{\lambda_i t}\), where \(\lambda_i\) denote the eigenvalues of \(A\) (if eigenvalues are complex, this leads to terms with dependencies \(e^{\Re(\lambda_i)t}\cos(\Im(\lambda_i) t)\) and \(e^{\Re(\lambda_i)t}\sin(\Im(\lambda_i) t)\), where \(\Re\) and \(\Im\) denote the real and imaginary parts, respectively).
We know that sometimes we also have polynomial terms appearing in the expression \(\exp(At)\). These polynomials come from the second part \(\exp(A_nt)\), since \(\exp(A_n t)=\sum_{m=0}^{k-1}\frac{A_n^m}{m!} t^m\) (this follows from the fact that \(A_n^k=0\)).
The question (c)(iii) is about the Jordan-Chevalley decomposition of \(\exp(At)\). The only thing to check is that we can write this as the sum of a nilpotent and a semisimple matrix which commute with each other. (The Jordan-Chevalley decomposition theorem then asserts that this decomposition is in fact unique.)
The question contains the hint that \(\exp(A_s t)\) is semisimple. We can see this by verifying that if \(TA_sT^{-1}\) is (complex) diagonal, then so is \(T\exp(A_st)T^{-1}=\exp(TA_sT^{-1}t)\).
Let us check whether indeed the semisimple part of \(\exp(At)\) is equal to \(\exp(A_st)\) (in the sense of the Jordan-Chevalley decomposition). We write \(\exp(At)=\exp(A_st)+N\) where \(N:=\exp(At)-\exp(A_st)\). Now we recall that \(\exp(At)=\exp(A_st)\exp(A_nt)\), so \(N=\exp(A_st)[\exp(A_nt)-1]\), and as these two factors commute we have \(N^k=\exp(A_s t k)[\exp(A_nt)-1]^k\). If \(A_n^k=0\) we also have \([\exp(A_n t)-1]^k=0\), since \(\exp(A_n t)-1=p(A_n)\) is a polynomial in \(A_n\) with \(p(0)=0\). Thus \(N\) is nilpotent, and it is readily checked that \(N\) also commutes with \(\exp(A_s t)\). So \(\exp(A t)=\exp(A_s t)+N\) is the Jordan-Chevalley decomposition of \(\exp(A t)\), where \(\exp(A_s t)\) is the semisimple part and \(N=\exp(A_s t)[\exp(A_n t)-1]\) is the nilpotent part.
In part d(ii), \(B\) is the projection to the eigenspace for eigenvalue \(-1\) whose kernel is the generalised eigenspace for eigenvalue \(+1\), and \(D=A_n\). As there is a Jordan block for eigenvalue \(+1\) (and not for eigenvalue \(-1\)), the range of \(A_n\) is the eigenspace of \(A\) for eigenvalue \(+1\) (check this by writing out a 2-by-2 matrix with a Jordan block); the kernel of \(A_n\) is spanned by the eigenspaces of \(A\). Since the range of \(D\) lies inside the kernel of \(B\), and the range of \(B\) in the kernel of \(D\), it follows that \(DB=BD=0\).
2014 exam question 4 (v)
A student asked me about the model answer, which is very short and perhaps not so obviously correct.
If an equilibrium \(y\) has a stable manifold, then the \(\omega\)-limit set of every point \(x\) on this manifold equals \(\{y\}\) (as \(x\) converges to \(y\) under the flow). However, if we have a saddle point, there also exists an unstable manifold. If an initial point \(x\) does not lie on the stable manifold of an equilibrium, then by definition it does not converge to the equilibrium. It is a more subtle question whether it could accumulate on the equilibrium. I did not set this exam question, but the model answer is perhaps a bit too brief. Namely, it could be that for a point \(x\) that does not lie on the stable manifold of an equilibrium \(y\), we still have \(y\in\omega(x)\). For instance, there could be a heteroclinic or homoclinic cycle (consisting of equilibria and connecting orbits) to which \(x\) accumulates, and that contains the saddle equilibrium \(y\). In this exam question there is only one equilibrium, so we could only have a homoclinic cycle (a connecting orbit to one saddle). But such a homoclinic cycle to a saddle would imply the existence of an equilibrium inside the area enclosed by the homoclinic loop (by Poincaré-Bendixson arguments, similar to the conclusion about the existence of an equilibrium inside the area enclosed by a periodic orbit), and as there is only one equilibrium in the system under consideration, this cannot be the case here. So there is no homoclinic cycle, there is only one equilibrium in \(A\), and orbits cannot leave \(A\) but also do not converge to the equilibrium. Then by Poincaré-Bendixson they need to accumulate on a periodic solution (in \(A\)).
The \(\omega\)-limit set of a point \(x\) consists of the points to which \(\phi^t(x)\) accumulates, i.e. points \(y\) so that there exists an increasing sequence \(t_n\to\infty\) such that \(\lim_{n\to\infty} \phi^{t_n}(x)=y\). The model answer reads: 'there exists \(x\) in \(A\) such that the \(\omega\)-limit set of \(x\) is not contained in the stable manifold of the singularity. Hence \(A\) contains a periodic orbit.'
Tuesday, 3 May 2016
Questionnaire
In the final revision class I handed out a questionnaire to get a more detailed feedback about the course beyond SOLE. If you have not filled out and handed in the form to me yet, you can find an electronic copy of the questionnaire here. Please fill it out and send it to me by email or print it out and leave it in my pigeonhole. Your feedback is very much appreciated.
Second revision class
I discussed the application of Poincaré-Bendixson theory to making sketches of phase portraits. The short note/summary I used can be found here.
Saturday, 30 April 2016
2013 exam question 2.
A student has asked me about this exam question (I paraphrase):
- In part a, how is the stable manifold determined?
The stable manifold of the equilibrium 0 of a linear ODE is precisely the union of all (generalized) eigenspaces for eigenvalues with negative real part. Namely, on these (generalized) eigenspaces all solution curves converge (exponentially fast) to the equilibrium 0, whereas on the other eigenspaces solutions that start outside the equilibrium never converge to 0. These properties all follow from the explicit solutions of the linear ODE restricted to generalized eigenspaces. In the example at hand the matrix \(A\) has eigenvalues \(1\) and \(-1\), and the eigenvector for \(-1\) is \((1,1)\). Hence the stable manifold is the line through \(0\) and \((1,1)\).
- In part d, why does the model answer use the Euler-Lagrange equation rather than the conservation of the Hamiltonian?
Either route is possible. I will show here how we can obtain the answer using the conservation of the Hamiltonian. The Hamiltonian is given by \(H=y'f_{y'}[y]-f[y]=(2y)^2+(y')^2\). The level sets \(H=c\) are ellipses in the \(yy'\)-plane if \(c>0\). We observe that \(c\geq0\), and \(H=0\) corresponds to an equilibrium. So let \(c=d^2\). Then we have \(y'=\pm\sqrt{d^2-(2y)^2}\), which can be solved by separation of variables
\[\int dt =\int \frac{dy}{\sqrt{d^2-(2y)^2}}.\] Then \[ t+T=\frac{1}{2}\tan^{-1}\left(\frac{2y}{\sqrt{d^2-4y^2}}\right),\] where \(T\) is constant. After some algebraic manipulations, one can rewrite this as \(y=\pm d \cos(2(t+T))\). The boundary condition \(y(0)=0\) yields \(T=\pm\frac{\pi}{4}\), so that \(y=\pm d \sin(2t)\), and \(y(1)=1\) implies that \(y(t)=\frac{\sin(2t)}{\sin(2)}\), in accordance with the model answer.
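As a quick sanity check (my own addition), one can verify numerically that \(y(t)=\sin(2t)/\sin(2)\) satisfies the boundary conditions and that \(H=(2y)^2+(y')^2\) is indeed constant along it:

```python
import numpy as np

# y(t) = sin(2t)/sin(2) with y'(t) = 2 cos(2t)/sin(2): check the boundary
# conditions and the conservation of H = (2y)^2 + (y')^2 along the solution.
t = np.linspace(0.0, 1.0, 500)
y = np.sin(2 * t) / np.sin(2)
yp = 2 * np.cos(2 * t) / np.sin(2)
H = (2 * y)**2 + yp**2

assert abs(y[0]) < 1e-12 and abs(y[-1] - 1.0) < 1e-12   # y(0) = 0, y(1) = 1
assert np.allclose(H, H[0])                             # H is conserved
```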
Clearly in this case the calculations using the Hamiltonian appear more involved than those using the Euler-Lagrange equation. Which route to the answer is most efficient will depend on the example. My advice when approaching such a calculation is to try one route and, if it looks tedious, to quickly try the other one as well to see if it makes a difference.
Bifurcation points
A few students have asked me what they need to know about bifurcation points. Of course we only touched upon this topic superficially (and a more detailed analysis is well beyond the scope of M2AA1).
If an equilibrium point is hyperbolic (no eigenvalues of the Jacobian lie on the imaginary axis), then the flow near the equilibrium is determined by its linearized flow (Hartman-Grobman theorem) and the flow does not essentially change under sufficiently small perturbations, so hyperbolicity is a counter-indicator for (local) bifurcation. If an equilibrium is not hyperbolic, then small perturbations of the vector field may lead to substantial changes of the flow near the equilibrium, which would amount to a (local) bifurcation. We have not really discussed the precise analysis of the flow at a non-hyperbolic equilibrium point, so this is not something you need to master. However, if we have a parameter in our problem, and at one value of the parameter we have a non-hyperbolic equilibrium, we can often deduce from the local behaviour near the hyperbolic equilibrium/equilibria before and/or after this parameter value what may have happened at the bifurcation point. Nothing beyond this superficial level of analysis will be expected or required at the exam.
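A minimal illustration (my own hypothetical example, not from the course): in the one-parameter family \(\dot{x}=\lambda-x^2\), two hyperbolic equilibria \(x=\pm\sqrt{\lambda}\) exist for \(\lambda>0\), collide in a non-hyperbolic equilibrium at \(\lambda=0\), and disappear for \(\lambda<0\) (a saddle-node bifurcation).

```python
import numpy as np

# Saddle-node family x' = lam - x**2: for lam > 0 there are two hyperbolic
# equilibria x = +-sqrt(lam); they collide in a non-hyperbolic equilibrium at
# lam = 0 and disappear for lam < 0.
def equilibria(lam):
    if lam < 0:
        return []
    return [np.sqrt(lam), -np.sqrt(lam)]

def linearization(x):
    return -2.0 * x               # d/dx of (lam - x**2) at an equilibrium

eq = equilibria(0.25)
assert eq == [0.5, -0.5]
assert linearization(eq[0]) < 0 < linearization(eq[1])   # attractor and repeller
assert equilibria(-0.1) == []     # no equilibria after the bifurcation
```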
Phase portrait sketch in case of a Lyapunov function that is a conserved quantity
A student asked me:"I was looking at the past exam paper from 2010, and in question 03. (a) (iii) when drawing the phase plane, was wondering how we are supposed to conclude that it is a periodic orbit? I'm not completely sure how to proceed after finding the nullclines and the direction of the solution curve."
In the 2010 exam question 3(a)(iii), we have a Lyapunov function \(V\) with \(\frac{d}{dt}V=0\). This means that the level sets \(V_C:=\{x \mid V(x)=C\}\) are flow-invariant, and thus that every solution curve must lie inside one particular level set. As the level sets \(V_C\) with \(C>0\) in this example are closed curves that do not contain equilibria, these level sets must necessarily be periodic solutions. The level set \(V_0\) is the unique equilibrium \((x,y)=(1,1)\).
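A minimal example of this mechanism (my own, not the exam system): for \(\dot{x}=y\), \(\dot{y}=-x\), the function \(V(x,y)=x^2+y^2\) satisfies \(\frac{d}{dt}V=2xy+2y(-x)=0\), so every level set \(V=C>0\) is a circle containing no equilibria and hence a periodic orbit.

```python
import numpy as np

# The explicit solution (x, y) = (cos t, sin t) of x' = y, y' = -x stays on the
# level set V = 1 and returns to its starting point: a periodic orbit.
t = np.linspace(0.0, 2 * np.pi, 200)
x, y = np.cos(t), np.sin(t)
V = x**2 + y**2

assert np.allclose(V, 1.0)                          # the level set is invariant
assert np.allclose([x[0], y[0]], [x[-1], y[-1]])    # the orbit is periodic
```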
A similar situation was encountered in Test 2, question 2(ii).
Wednesday, 27 April 2016
Lyapunov functions and phase portraits
A student asked me: "I have a question regarding Q3 on the 2008 M2AA1 paper. In the question
you are asked to sketch the phase portrait of the system for various
parameters. The question gives you two pictures of the lyapunov function
for these parameters. I was wondering how
one might use a lyapunov function to deduce the phase portrait as the
question suggests. I have checked the answers for that question which
don't give much detail, however the portraits look remarkable similar to
the lyapunov functions. "
Let \(V(x)\) be a Lyapunov function, i.e. \(\frac{d}{dt}V(x)\leq 0\). Then \(V(x(t))\) cannot increase as \(t\) increases. It is also insightful to note that \(\frac{d}{dt}V(x)=\nabla V(x) \cdot\dot{x}\leq 0\). Recall that \(\nabla V(x)\) is the normal to the level set \(\{\tilde{x} \mid V(\tilde{x})=V(x)\}\) at the point \(x\). Thus \( \nabla V(x)\cdot \dot{x}\leq 0\) indeed means that the vector \(\dot{x}\) does not point in the direction of higher level sets of \(V\), and thus that if \(t\geq\tau\) then \(V(x(t))\leq V(x(\tau))\).
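A tiny numerical illustration of this monotonicity (my own, with the hypothetical choice \(V(x)=|x|^2\) and the gradient system \(\dot{x}=-\nabla V(x)\), for which \(\frac{d}{dt}V=-|\nabla V|^2\leq 0\)):

```python
import numpy as np

# Gradient system x' = -grad V(x) with V(x) = |x|^2: V decreases along orbits.
def grad_V(p):
    return 2.0 * p

p = np.array([1.0, -0.5])
dt = 0.01
values = []
for _ in range(500):
    values.append(float(p @ p))     # V(p) = |p|^2 along the Euler orbit
    p = p - dt * grad_V(p)          # explicit Euler step of x' = -grad V(x)

assert all(b <= a for a, b in zip(values, values[1:]))   # V never increases
```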
Monday, 25 April 2016
Sketching phase portraits
A student asked me:
"With regards to Problem Sheet 7, question 2 (the first question on the sheet), after sketching the nullclines and determining the direction of flow at the nullclines, as well as the nature of the equilibrium point at (0.5,0.5)  stable equilibrium, how should I attempt to sketch the phase portrait, i.e. the diagram on the right in the answers. "
It may be instructive to go through the steps of how to try sketching a phase portrait:
Locally near equilibria:
1. Find the equilibria. (Depending on the type of the equations, this may be easy or impossible. In most exercises this is easy.)
2. Calculate the linearization of the vector field (i.e. the Jacobian) at these equilibria and deduce, where possible (i.e. when hyperbolic), what the phase portrait should look like near these equilibria.
More globally:
1. Where relevant or possible, determine a bounded invariant set to which all solutions are attracted (most of our examples have such a region).
2. Try drawing the nullclines, where the vector field (and thus the tangent to solution curves) is horizontal or vertical. These nullclines may be helpful. In simple examples, the nullclines can often be computed explicitly.
3. Where there are saddles, it may be useful to try sketching where stable and unstable manifolds may end up.
4. Determine the possibilities for the \(\omega\)-limit sets, in view of the Poincaré-Bendixson theory.
All in all, this is what you should be doing when sketching a phase portrait. It is often not possible to get all the properties of the flow this way, so there may still be some unknowns. In particular, it is often hard to rule out the existence of a periodic solution around an attracting or repelling equilibrium point (unless it lies at the border of a forward invariant set, in which case no periodic solution can encircle it).
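To make the local linearization step concrete, here is a sketch with a hypothetical competition system of my own choosing, which happens to have an equilibrium at (0.5,0.5) (it is not necessarily the system from the problem sheet):

```python
import numpy as np

# Hypothetical competition system with an equilibrium at (0.5, 0.5):
#   x' = x (1 - x - y),   y' = y (0.75 - 0.5 x - y)
def jacobian(x, y):
    return np.array([[1.0 - 2.0 * x - y, -x],
                     [-0.5 * y, 0.75 - 0.5 * x - 2.0 * y]])

eigs = np.linalg.eigvals(jacobian(0.5, 0.5))
assert np.all(eigs.real < 0)    # both eigenvalues have negative real part:
                                # (0.5, 0.5) is an attracting equilibrium
```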
If at the exam you think there is some ambiguity or unknown property of the phase portrait, you should just write that. "Sketching" a phase portrait is precisely that: providing those features of which you are certain, and discussing which additional features may or may not be present, on the basis of the theory.
For the specific problem in question: it is completely reasonable to conclude that, around the attracting equilibrium (0.5,0.5), the Poincaré-Bendixson theorem leaves open the possibility of an \(\omega\)-limit set that is a periodic solution encircling this equilibrium. The argument I give in the model answer is correct, but not so easily verifiable (so you are not expected to discover such a subtle property in an exam question, for instance).
I hope this helps.
"With regards to Problem Sheet 7, question 2 (the first question on the sheet), after sketching the nullclines and determining the direction of flow at the nullclines, as well as the nature of the equilibrium point at (0.5,0.5)  stable equilibrium, how should I attempt to sketch the phase portrait, i.e. the diagram on the right in the answers. "
It may be instructive to go through the steps of how to try sketching a phase portrait:
Locally near equilibria:
1. Find the equilibria. (Depending on the type of the equations, this may be easy or impossible. In most exercises this is easy.)
2. Calculate the linearization of the vector field (ie Jacobian) at these equilibria and deduce , where possible (ie when hyperbolic), what the phase portrait should look like near these equilibria.
More globally:
1. Where relevant or possible, determine a bounded invariant set to which all solutions are attracted (most of our examples have such a region).
2. Try drawing nullclines where the vector field (and thus the tangent to solution curves) is horizonal and vertical. These nullclines may be helpful. In simple examples, often the nullclines can be computed explicitly.
3. Where there are saddles, it may be useful to try sketching where stable and unstable manifolds may end up.
4. Determine possibilities for the \(\omega\)limit sets, in view of the PoincareBendixson theory.
Allinall, this is what you should be doing when sketching a phase portrait. It often not possible to get all the properties of the flow this way, so that there still may be some unknowns. In particular, it is often hard to rule out the existence of a periodic solution around an attracting or repelling equilibrium point (unless it lies at the border of a forward invariant set, in which case no periodic solutions can encircle it).
If at the exam, you think there is some ambiguity or unknown property of the phase portrait, you should just write that. "Sketching" a phase portrait is precisely that: provide those features which you are certain of and discuss which additional features may or may not be present, on the basis of the theory.
For the specific problem in question: it is completely reasonable to conclude that around the attracting equilibrium (0.5,0.5), the Poincarebendixson theorem leave open the possibility of an \(\omega\)limit set that is a periodic solution encircling this equilibrium. The argument I give in the model answer is correct, but not so easily verifiable (so you are not supposed to discover such a subtle property in an exam question, for instance).
I hope this helps.
Saturday, 23 April 2016
Revision classes
Just to let you know that, after consultation with the class rep, it has been decided that the scheduled revision classes of Tuesday 26 April 12:00-13:00 and Tuesday 3 May 12:00-13:00 in Clore will take the form of problem classes. Graduate Teaching Assistants and I will be available for questions.
Guidance on past exam papers
Past summer M2AA1 exam papers can be found here (the course first featured in 2007-2008). I provide some guidance here on the relevance of the questions on the past papers for the current exam.
2008: all questions
2009: all questions apart from Q3(c,d,e); for the model answers to Q3 (which do not seem to appear on the web), see this link
2010: all questions
2011: not relevant
2012: not relevant
2013: all questions apart from Q3.
2014: all questions apart from Q1b(ii) and Q3
2015: all questions apart from Q1(d) and Q3
I was the setter (and lecturer of the course) in 2008, 2009 and 2010. It may be no surprise that the exam questions in those years are more representative of my style than those of the other years.
Coordinate systems for calculus of variations
Your class rep wrote me: "A student asked me about calculus of variations, if we have to be able to define coordinate systems ourselves? For example in Problem sheet 8 Question 8, students might struggle to get started on it."
This question addresses a common type of anxiety that some of you may feel. It is unjustified.
Problem sheet exercises are not necessarily model exam questions. When I list question 8 as "important", this means that I think it is important you understand how this problem is solved, and that it is a good exercise. Of course, in this question a "tricky" part is how to define the coordinates. If I were to use this example as the basis for an exam question, I would find it most important that you can show how to apply the Euler-Lagrange equation. As getting the coordinates right is perhaps a little tricky, I would most likely suggest certain coordinates to you, since without a reasonable choice you would not be able to get anywhere.
As I told many of you before: it is my "problem" to set an exam that tests how well you understand the material. So I aim to avoid exam questions that depend on "tricky" bits that are not at the core of the course material.
So in answer to the question whether "[do] we have to be able to define coordinate systems ourselves?", the answer is "I would likely avoid obstacles, so if the choice of coordinates is not obvious, I would likely provide you with a suggestion."
I hope this helps.
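For those who want to experiment, SymPy can form and solve the Euler-Lagrange equation for you. The sketch below uses the harmonic-oscillator Lagrangian purely as an illustration (it is not the problem sheet question), so you can check a by-hand computation.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

# Illustrative Lagrangian (not the problem sheet question):
#   L = (y'^2 - y^2) / 2, whose Euler-Lagrange equation is y'' + y = 0.
t = sp.symbols('t')
y = sp.Function('y')
L = (y(t).diff(t)**2 - y(t)**2) / 2

# SymPy forms d/dt (dL/dy') - dL/dy = 0 for us.
el = euler_equations(L, y(t), t)[0]
sol = sp.dsolve(el, y(t))
print(sol)  # extremals: y(t) = C1*sin(t) + C2*cos(t)
```

The point is simply that once the coordinates and the Lagrangian are fixed, applying the Euler-Lagrange equation is mechanical; the general solution here is the familiar family of harmonic oscillations.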
Friday, 15 April 2016
Lecture notes Chapter 5 and Chapter 6
Chapters 5 and 6 of the lecture notes correspond to Chapters 10 and 11 of Differential Equations, Dynamical Systems, and an Introduction to Chaos by M.W. Hirsch, S. Smale and R.L. Devaney. Please be aware that the entire book can be consulted digitally from the Imperial library website (where these chapters can also be downloaded; just search for the authors on the Library search and you will find a link to the digital copy). Because of copyright issues I unfortunately cannot link directly to these chapters on this webpage. I understand that the class reps also sent around instructions on how this material can be downloaded and consulted from the library website.
Saturday, 9 April 2016
Chapter 7 lecture notes
I have just posted typed up lecture notes of Chapter 7 on the Calculus of Variations. I apologize for the delay in finishing this. I also updated the instructions on "how to study for the exam" concerning this chapter of the notes.
Tuesday, 29 March 2016
Problem sheet 8 and answers
I have now posted model answers for problem sheet 8. Please note that questions 7 and 9 are not relevant for the exam (I updated the exam preparation note). I added a boundary condition to simplify question 6 somewhat.
Tuesday, 22 March 2016
Notes on synchronisation
My last lecture was loosely based on some lecture notes by Dr Tiago Pereira. This lecture connects my first lecture (where I mainly motivated the study of differential equations) to the material covered in this course. None of this material is examinable, but I hope it is interesting in context.
Problem class Tuesday 22/3, 12-1pm, in Blackett 1004
Room 340 is unfortunately not available. Blackett 1004 is on Level 10 of the Blackett building. Come out of the lifts and turn left. Prof Turaev will discuss the starred questions from the final problem sheet 8.
Thursday, 17 March 2016
Final problem sheet nr 8 about calculus of variations.
This problem sheet has now been posted in the left-hand side margin.
Thursday, 10 March 2016
Update on solutions to problem sheet 4
I corrected a typo in the solution to question 1(a) and reformulated the solution to question 5(a).
Wednesday, 9 March 2016
Additional office hour on Wednesday 9 March 12-2pm in room 144
Questions will be answered about the material for the class test of tomorrow.
Tuesday, 8 March 2016
About the duration of the second test
You will have 1.5 hours to complete the test on Thursday (16:00-17:30). I have set the test such that this should normally be plenty of time.
Monday, 7 March 2016
Proof of Lemma 4.1.1 (and video)
Some of you have been querying me about the proof of Lemma 4.1.1. I wrote in the guidance for the second test that it is important to study this proof. Of course, there will never be a question on the test (or exam) like "prove Lemma 4.1.1". First of all it is too long, but secondly I do not believe in remembering such proofs by heart. However, the manipulations and arguments used are very instructive for the similar but even more involved estimates of this kind that one encounters in the analysis of ordinary and partial differential equations in various contexts. So although I do not want you to learn this proof by heart, I would really appreciate it if you understand it when you read it. I posted a video in the left-hand side margin where I go through the proof a bit more slowly than in the lecture notes or in the lecture, and I hope it helps if you have been struggling with some of the steps.
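As a quick illustration of the Gronwall-type estimate that drives such proofs, here is a small numerical sanity check (a toy example of mine, not part of the notes): a function satisfying the integral hypothesis indeed stays below the exponential bound.

```python
import numpy as np

# Gronwall's inequality (integral form): if u(t) <= a + b * int_0^t u(s) ds
# on [0, T] with a, b >= 0, then u(t) <= a * exp(b*t). Sanity check with
# u(t) = exp(0.3*t), which satisfies the hypothesis for a = 1, b = 0.5
# (since u = 1 + 0.3 * int_0^t u(s) ds and 0.3 <= 0.5).
a, b = 1.0, 0.5
t = np.linspace(0.0, 5.0, 501)
u = np.exp(0.3 * t)
bound = a * np.exp(b * t)
print(bool(np.all(u <= bound)))  # True: the Gronwall bound dominates u
```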
Sunday, 6 March 2016
Problem sheet 6, model answer posted
With apologies for the delay, please find the model answers for problem sheet 6 (except for question 7, which I hope to complete asap) in the left-hand side margin.
Saturday, 5 March 2016
Question 4 sheet nr 6
This question is more involved than I originally anticipated, so I am stripping it of the *, as it is not in the category "elementary". This question is also no longer relevant for the test (and in the process I have added a few more questions from this sheet to those not essential to study for the test; see the updated post below). I also added some more detail to this question on problem sheet 6, and switched parts (a) and (b), since it is most natural to answer them in the opposite order.
Poincaré-Bendixson appendix
I attached a short note extending the Poincaré-Bendixson theory to include connecting orbits between equilibria, as presented in the lecture on 23/2.
Tuesday, 1 March 2016
How to prepare for the class test of 10 March?
The material to study for the class test is Chapter 3 (Linear autonomous ODEs), Chapter 4 (The flow near an equilibrium) and Chapter 5 (Poincaré-Bendixson Theorem) from the lecture notes, and the problem sheets nr 3, 4 and 5 relating to these chapters.
I now summarize in some detail the main points in this material:
Chapter 3
definition of linearity
solution of autonomous linear ODEs in terms of exponential of matrix (including relevant proofs)
existence and uniqueness of solutions of autonomous linear ODEs
flow map and its computation in elementary examples in the two- and three-dimensional case
geometric interpretation of explicit formulas for the flow map: invariant subspaces, eigenspaces and generalised eigenspaces, projections and the decoupling principle for linear ODEs
Jordan normal form (the general result, but not the general proof, and the ability to determine the Jordan normal form in some simple examples); generalised eigenspaces
Jordan-Chevalley decomposition: definition, implications for exp(A), and finding the JC decomposition in some elementary examples.
Lyapunov and asymptotic stability. Application to linear systems; role of eigenvalues and determination of (in)stability based on information about eigenvalues.
Lyapunov functions; proofs and elementary applications
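To illustrate two of the Chapter 3 points above, here is a small SymPy sketch (my own elementary example, not from the notes) verifying that for the Jordan-Chevalley decomposition A = S + N of a single 2x2 Jordan block, exp(tA) = exp(tS) exp(tN) with exp(tN) a polynomial in t.

```python
import sympy as sp

# Jordan-Chevalley decomposition A = S + N: S semisimple, N nilpotent,
# SN = NS. Then exp(tA) = exp(tS) * exp(tN), and exp(tN) truncates to a
# finite polynomial in t. Elementary 2x2 example:
t = sp.symbols('t')
A = sp.Matrix([[2, 1],
               [0, 2]])            # single Jordan block, eigenvalue 2
S = sp.Matrix([[2, 0],
               [0, 2]])            # semisimple (diagonal) part
N = A - S                          # nilpotent part: N**2 = 0
assert S*N == N*S and N**2 == sp.zeros(2, 2)

lhs = (t*A).exp()                        # SymPy's matrix exponential
rhs = sp.exp(2*t) * (sp.eye(2) + t*N)    # exp(tS) exp(tN) = e^{2t} (I + tN)
assert sp.simplify(lhs - rhs) == sp.zeros(2, 2)
print(lhs)
```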
Chapter 4
Linear approximation near an equilibrium point
Lemma 4.1.1, including interpretation (what does the lemma establish?) and proof (with the relevant components, like the Gronwall estimate and the variation of constants formula)
Theorem 4.1.2 and proof of part (i)
Hartman-Grobman Theorem may be skipped
Hyperbolic equilibria: proposition 4.2.2 and corollary 4.2.3 (inclusive of proof)
Propositions 4.2.5 and 4.2.6, including proofs
stable and unstable manifolds (only definitions and main results, but no proofs, as none were given)
simple examples of bifurcations (and use of implicit function theorem in this context)
Chapter 5 (Hirsch, Smale, Devaney, chapter 10)
limit sets (definitions and identification of limit sets in simple examples) (10.1)
local sections and flow box (10.2)
monotone sequences (10.4)
Poincaré-Bendixson Theorem (10.5 and 10.6), plus the additional note on the classification of \(\omega\)-limit sets for planar ODEs (as discussed in the lecture)
Exercise sheets to be studied:
Problem sheet 4 (but nr 6 not important for test).
Problem sheet 5 (but nrs 1 and 6 not important for test).
Problem sheet 6 (but nrs 4, 5 and 7 not important for test)
Sunday, 28 February 2016
Typos Chapter 4
There were some typos (too many \(\varepsilon\)'s) in the proof of Lemma 4.1.1 on page 41. I corrected these, see the link in the lefthand side margin.
Tuesday, 16 February 2016
Problem sheet 5
A new problem sheet has now been posted. Given last week's test, it appeared to me that most of you had been studying for the test rather than the recent course material, so I decided to delay this sheet by a few days.
Sunday, 14 February 2016
Lecture notes Chapter 5
The material we need for the Poincaré-Bendixson Theorem is covered in chapter 10 of Differential Equations, Dynamical Systems, and an Introduction to Chaos by M.W. Hirsch, S. Smale and R.L. Devaney. Please be aware that the entire book can be consulted digitally from the Imperial library website (where this chapter can also be downloaded; just search for the authors on the Library search and you will find a link to the digital copy). For your convenience, I also created a temporary link in the left-hand side margin, under chapter 5.
Lectures, office hour and tuesday problem class this week
In my absence, the lectures on Tuesday, Thursday and Friday this week will be given by Dr Trevor Clarke. He will also stand in for me during the office hour (in Huxley 638) on Tuesday afternoon at 3pm, as usual. Tuesday's problem session after the lecture will be conducted by Dr Christian Pangerl (in Prof Turaev's absence) and will concern last week's test.
Lecture notes Chapter 4
Please note that the temporary notes have been replaced with a proper version. Please disregard the preliminary ones.
Thursday, 11 February 2016
Model answers first progress test
Please find model answers for the first progress test in the left-hand side margin. Please note that if your answers are a little different, they are not necessarily wrong.
Wednesday, 10 February 2016
Update Lecture Notes Chapter 3
Chapter 3 has now been updated to include the Jordan-Chevalley decomposition section. The stability section now also includes some text about Lyapunov functions. I further made some small changes to the text, but nothing significant.
Jordan normal form notes
Several of you have asked me for more information about Jordan normal forms, such as proofs or algorithms. There is no time in the lectures to really do the proofs, and it is my experience that constructing nontrivial Jordan forms for examples is not a popular pastime for most of you. But for those interested, I link here two manuscripts that may be of use: a relatively compact proof of the Jordan normal form theorem (Prof Sebastian van Strien's appendix to the M2AA1 lecture notes of last year), and a more constructive and algorithmic approach to Jordan forms (2005 notes by Dr Stefan Friedl).
Saturday, 6 February 2016
Videos online
I posted videos of the proofs of the Inverse Function Theorem and the Implicit Function Theorem, and of higher dimensional derivatives, in the left-hand margin. Please note that there is an annoying typo in the last line of the proof of the Inverse Function Theorem: in the first posted version of chapter 1, at the end of the proof of the derivative, F has the variable y as its argument, but it should of course be x (identically copied from the line above). As you see in the video, one finally inserts \(x=G(y)\) to get the result.
Friday, 5 February 2016
Inner product in \(\mathbb{C}^n\)
I received a question concerning Chapter 3 of the lecture notes, where an inner product is used in \(\mathbb{C}^n\) (instead of \(\mathbb{R}^n\)) when doing computations with complex eigenvectors. I thought that you had seen this before, but in case you haven't, please note that the standard inner product in \(\mathbb{C}^n\) is given by
$$\langle \textbf{x},\textbf{y}\rangle=\sum_i x_i\cdot\overline{y}_i.$$
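In case you want to check such computations numerically, here is a small NumPy illustration of this inner product (my own toy example). Note that NumPy's built-in np.vdot conjugates its first argument, which is the opposite convention to the formula above.

```python
import numpy as np

# The Hermitian inner product on C^n: <x, y> = sum_i x_i * conj(y_i).
def inner(x, y):
    x, y = np.asarray(x, dtype=complex), np.asarray(y, dtype=complex)
    return np.sum(x * np.conj(y))

x = np.array([1 + 1j, 2 + 0j])
y = np.array([1j, 1 - 1j])

# Conjugating one argument is what makes <x, x> real and nonnegative,
# so that ||x|| = sqrt(<x, x>) is a genuine norm.
print(inner(x, x))  # (6+0j): |1+1j|^2 + |2|^2 = 2 + 4

# NumPy's np.vdot conjugates its FIRST argument, so with the convention
# above <x, y> equals np.vdot(y, x).
print(np.isclose(inner(x, y), np.vdot(y, x)))  # True
```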
Thursday, 4 February 2016
Videos proof of Inverse and Implicit Function Theorem, and derivatives of maps
I am in the process of producing some short videos where I go (more slowly) through the proofs of the Inverse and Implicit Function theorems (of Chapter 2), and also discuss in more detail how to deal with higher dimensional derivatives, as I received several questions about this. It is the first time that I am making such videos and I had some issues with the equipment slowing me down. I intend to have these videos up on this webpage still before the weekend.
JordanChevalley Decomposition
The Jordan-Chevalley decomposition subsection (3.3.3) was not yet compiled into Chapter 3 of the lecture notes. I have temporarily added a link to this subsection in the left-hand side margin and will integrate it into the chapter properly asap.
Tuesday, 2 February 2016
Small revision of model answer 5 of problem sheet nr 1
Some of you asked me how I concluded so quickly, in the previously posted model answer for this question, that in the case c=0 the sequence was Cauchy and thus converged, so I decided to expand the explanation.
How to prepare for the class test of 11 Feb?
The material to study for the class test is Chapters 1 (Contractions) and 2 (Existence and uniqueness) from the lecture notes, and the problem sheets nr 1, 2 and 3 relating to these chapters.
I now summarize in some detail the main points in this material:
Chap 1:
 definition of metric space
 do not worry about the “elementary notions" in metric spaces on the bottom of p5 and top of p6.
 definition of contraction
 contraction mapping theorem (including proof)
 derivative test in R (including proof)
 do not learn example 1.3.3 by heart! (it is instructive to understand it, though)
 Theorem 1.3.6 (derivative test in higher dimensions) (no proof, as not given)
 Inverse function theorem in R (including proof)
 Inverse function theorem in R^n (not the proof)
 Implicit function theorem in R^n (including proof)
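As a quick illustration of the contraction mapping theorem and the derivative test from Chapter 1 (my own toy example, not from the notes), consider iterating T(x) = cos(x) on [0, 1]:

```python
import math

# T(x) = cos(x) maps [0, 1] into itself (cos(0) = 1, cos(1) ~ 0.54), and
# the derivative test gives |T'(x)| = |sin(x)| <= sin(1) < 1 there, so T
# is a contraction. By the contraction mapping theorem the iterates
# converge to the unique fixed point x* with cos(x*) = x*, regardless of
# the starting point in [0, 1].
x = 0.5
for _ in range(100):
    x = math.cos(x)
print(x)  # ~0.7390851, the unique fixed point of cos on [0, 1]
```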
Chap 2:
 Picard iteration applied to examples
 Some examples of ODEs without existence and uniqueness of solutions
Picard-Lindelöf Theorem (including proof, but not the part regarding the completeness of the function space C(J,U))
Gronwall's inequality and its application to Theorem 2.2.3, establishing continuity of the finite-time flow
Exercise sheets:
Nr 1: all * problems, and question 6; questions 5 & 7 are not crucial
Nr 2: all * questions, and question 7 (limited to intersections of surfaces and curves in R^3); questions 5 & 6 are not crucial
Nr 3: all * questions, and questions 5 & 7; questions 6 & 8 are not crucial
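As an illustration of Picard iteration (a toy example of mine, not a test question): for the IVP x'(t) = x(t), x(0) = 1, whose exact solution is e^t, the iterates reproduce the Taylor partial sums of the exponential.

```python
import sympy as sp

# Picard iteration: x_{n+1}(t) = x_0 + int_0^t f(s, x_n(s)) ds,
# here with f(s, x) = x and initial condition x(0) = 1.
t, s = sp.symbols('t s')

def picard_step(xn):
    return 1 + sp.integrate(xn.subs(t, s), (s, 0, t))

xn = sp.Integer(1)
for _ in range(4):
    xn = picard_step(xn)
print(sp.expand(xn))  # the degree-4 Taylor partial sum of e^t
```

After four steps the iterate is 1 + t + t^2/2 + t^3/6 + t^4/24, exactly the degree-4 Taylor polynomial of e^t, which is how the iteration "discovers" the exponential.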
Thursday, 21 January 2016
Office hour Tuesday 15:00
My office hour for M2AA1 will be Tuesday 15:00-16:00. My office is 638 Huxley.
Thursday, 14 January 2016
Books
There are many books which can be used in conjunction with this module, but none are required. Recommended books include:
– G.F. Simmons and S.G. Krantz, Differential Equations: Theory, Technique, and Practice. This book covers a significant amount of the material we cover. Some students will love this text, others will find it a bit long-winded.
– R.P. Agarwal and D. O'Regan, An Introduction to Ordinary Differential Equations.
– G. Teschl, Ordinary Differential Equations and Dynamical Systems. This book can be downloaded for free from the author's webpage.
– M. Hirsch, S. Smale and R.L. Devaney, Differential Equations, Dynamical Systems, and an Introduction to Chaos.
– V.I. Arnold, Ordinary Differential Equations. This book is an absolute jewel, written by one of the masters of the subject. It is a bit more advanced than this course, but if you consider doing a PhD, then get this one. You will enjoy it.
Additional exercises and lecture notes can be freely downloaded from the internet.
Practical Arrangements
Welcome to M2AA1, 2016 session!
– The lectures for this module will take place Tuesday 11-12, Thursday 10-11 and Friday 11-12 in Clore.
– Each week I will hand out a sheet with problems. It is very important that you go through these thoroughly, as they provide the required training for the exam and class tests. Problems will be divided into elementary problems and more advanced problems that need a bit more work and thought.
– Problem classes: Tuesday 12-1 (in 340) and Thursday 11-12 (in 340, 341, 342), from January 19. The problem classes will be run a bit differently from previous years. In addition to the usual problem class with many assistants, there will be an additional optional one-hour session dedicated to the elementary problems.
– The Tuesday problem session in room 340 will be used to explain the elementary problems in detail on the board. This session will be moderated by Prof Dmitry Turaev.
– The Thursday problem session in rooms 340, 341 and 342 has teaching assistants present to help you in small groups, mainly with the more advanced problems, but additional questions on the elementary problems and questions concerning the lecture notes can of course also be addressed.
– The objective is to make sure that students can benefit maximally from the support classes.
– The best way to revise for the tests and the exam is by doing the exercises and studying the notes critically and in detail. Exercises are not necessarily like exam or test questions, but they have been chosen to help you understand the course material.
– Detailed instructions on how to study and prepare for the exam will be given towards the end of the term.
– There will be two class tests. These will take place on Thursday 11th February and Thursday 10th March. Each of these counts for 5%.
– Questions are most welcome, during or after lectures and during the office hour.
– My office hour will be in my office, 638 Huxley Building, at a time TBA asap.