H: Finding a PDF given a (strictly) right continuous CDF. I have the CDF: $$ F(x)= \begin{cases} 0 & \text{if } x < 1 \\ \frac{x^2-2x+2}{2} & \text{if } 1 \le x < 2 \\ 1 & \text{if } x \ge 2 \end{cases} $$ I want to find the PDF and I noticed that $F$ is not continuous (at $x=1$), but it is right continuous. Therefore it's still a valid CDF. And I know, after reading different posts on this site, that the PDF is $$ f(x)= \begin{cases} 0 & \text{if } x < 1 \\ \frac{1}{2} & \text{if } x=1 \\ x-1 & \text{if } 1 < x < 2 \\ 0 & \text{if } x \ge 2 \end{cases} $$ My question is this: Why (according to what definition/fact/theorem) is the probability mass $\frac{1}{2}$ at $x=1$? If you blindly differentiate the CDF, piece-by-piece, you lose that information; at least I did. (Edit) I always thought the probability of a single point from a continuous random variable was $0$. (End edit) Thank you in advance for your help and insights. AI: Ah, this looks like an actuarial science problem I've seen. Yes, when you do differentiate, you do lose the point mass. Notice something though - if $F'(x)$ directly equaled the pdf, that would mean that $\int\limits_{1}^{2}(x-1) \text{ d}x = 1$, but clearly, this isn't the case since this integral leads to $2^2/2 - 2 - (1/2 - 1) = 1/2$. Notice, though, that $F(1) = \dfrac{1}{2}$. This implies that $P(X \leq 1) = \dfrac{1}{2}$, so there must be something that is adding a probability at $x = 1$, which implies your $f(x)$. In case you want to read up more on this concept, $X$ is said to follow a mixed distribution.
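As a quick check of the mixed-distribution claim above (an editorial sketch using sympy, not part of the original answer): the continuous piece of $f$ integrates to $1/2$ and the jump of $F$ at $x=1$ supplies the remaining $1/2$.

```python
from sympy import symbols, integrate, Rational

x = symbols('x')
continuous_part = integrate(x - 1, (x, 1, 2))   # integral of the density x - 1 over (1, 2)
point_mass = Rational(1, 2)                     # jump of F at x = 1: F(1) - F(1-) = 1/2 - 0

print(continuous_part)               # 1/2
print(continuous_part + point_mass)  # 1, so the total probability is accounted for
```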
H: Is the Arens-Fort Space Compact? Thanks in advance. My question is this: Is the Arens-Fort space $X \; = \; (\mathbb{N} \times \mathbb{N}) \cup \lbrace \omega \rbrace$ compact? What I have so far is this: since a compact space is locally compact, it suffices to show that $X$ is not locally compact at $\omega$; then the Arens-Fort space is not compact. But I don't know how to show that no neighbourhood of $\omega$ is compact. AI: Hint: To show that $X$ is not compact, recall that the (open) neighbourhoods $U$ of $\omega$ have the property that $$\{ m \in \mathbb{N} : | \mathbb{N} \setminus \{ n \in \mathbb{N} : \langle m , n \rangle \in U \} | = \aleph_0 \} \text{ is finite.}$$ Also, the points of $\mathbb{N} \times \mathbb{N}$ are isolated. So construct an open cover of $X$ which consists of an open neighbourhood $U$ of $\omega$ such that $X \setminus U$ is infinite, and all of the singletons missed by $U$. (No proper subfamily of such a cover could cover $X$, because its elements are pairwise disjoint.) Using a similar argument you can show that no neighbourhood of $\omega$ is compact.
H: Find domain of $\log_4(\log_5(\log_3(18x-x^2-77)))$ Problem:$\log_4(\log_5(\log_3(18x-x^2-77)))$ Solution: $\log_3(18x-x^2-77)$ is defined for $(18x-x^2-77) \ge 3$ $(x^2-18x+77) \le -3$ $(x^2-18x+80) < 0$ {As it can't be 0} $(x-8)(x-10)<0$ $8<x<10$ So domain is $(8,10)$ Am I doing this right? AI: HINT: For real $m>0$, $\log_m a$ is defined if $a>0$, since $\log_ma=b\iff a=m^b>0$. $\log_4(\log_5(\log_3(18x-x^2-77)))$ is defined if $\log_5(\log_3(18x-x^2-77))>0$ $\implies \log_3(18x-x^2-77)>5^0=1$ $\implies (18x-x^2-77)>3^1=3$ $\implies x^2-18x+80<0$ Now, the roots of $x^2-18x+80=0$ are $\frac{18\pm\sqrt{18^2-4\cdot1\cdot 80}}2=8,10$ Now, if $(x-a)(x-b)<0$ where $a,b$ are real with $b<a$, then either $x-a<0$ and $x-b>0$, which gives $b<x<a$; or $x-a>0$ (so $x>a$) and $x-b<0$ (so $x<b$), which would need $a<x<b$, impossible since $a>b$. So the domain here will be $(8,10)$
H: Examples of how to calculate $e^A$ I'm trying to learn the process to discover $e^A$. For example, if $A$ is diagonalizable it is easy: $$A =\begin{pmatrix} 5 & -6 \\ 3 & -4 \\ \end{pmatrix}$$ Then we have the canonical form $$J_A =\begin{pmatrix} 2 & 0 \\ 0 & -1 \\ \end{pmatrix}$$ because the eigenvalues are $2$ and $-1$. Am I right? So I continue $$e^A =P\begin{pmatrix} e^2 & 0 \\ 0 & e^{-1} \\ \end{pmatrix}P^{-1}$$ Where $$P=\begin{pmatrix} 2 & 1 \\ 1 & 1 \\ \end{pmatrix}$$ because the eigenvectors are $(2,1)$ and $(1,1)$. If the eigenvalues are complex, things become more complicated. For example: $$B =\begin{pmatrix} 0 & 1 \\ -2 & -2 \\ \end{pmatrix}$$ The eigenvalues are $-1+i$ and $-1-i$, then the canonical form is: $$J_B =\begin{pmatrix} -1 & -1 \\ 1 & -1 \\ \end{pmatrix}$$ I don't know how to discover $e^B$ in the complex case. How do I have to proceed in this case? I would also like to know if there are some pdfs or books with examples or problems with solutions about this subject. I really need help. Thanks a lot. EDIT I found another example of a matrix some of whose eigenvalues are complex: $$ C=\begin{pmatrix} 1 & 0 & -2 \\ -5 & 6 & 11 \\ 5 & -5 & -10 \\ \end{pmatrix} $$ Its canonical form $$ J_C=\begin{pmatrix} 1 & 0 & 0 \\ 0 & -2 & 1 \\ 0 & -1 & -2 \\ \end{pmatrix} $$ Why is $J_C$ the canonical form of $C$? The author doesn't use complex numbers, why? How do I find $e^C$ in this case? In the same way? EDIT 2 The book I'm using says that the complex Jordan block related to the eigenvalue $a+bi$ is $$ \begin{pmatrix} a & b \\ -b & a \\ \end{pmatrix} $$ The book also says that it's using the real Jordan canonical form in contrast with the complex Jordan canonical form (see answer below). AI: You might want to have a look at Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later. Other references can be found in The matrix exponential: Any good books? We have: $$A = \begin{bmatrix}0 & 1 \\-2 & -2\end{bmatrix}$$ We want to find the characteristic polynomial and eigenvalues by solving $$|A -\lambda I| = 0 \rightarrow \lambda^2+2 \lambda+2 = (\lambda+(1-i)) (\lambda+(1+i))= 0 \rightarrow \lambda_1 = -1+i, ~~\lambda_2 = -1-i $$ If we try and find eigenvectors, we set up and solve: $[A - \lambda I]v_i = 0$. So, we have: $$[A - \lambda_1 I]v_1 = 0 = \begin{bmatrix}1-i & 1\\-2 &-1-i\end{bmatrix}v_1 = 0$$ This leads to a RREF of: $$\begin{bmatrix}1 & \dfrac{1}{2}(1 +i)\\0 & 0\end{bmatrix}v_1 = 0 \rightarrow v_1 = \left(-\dfrac{1}{2}(1 + i),\ 1\right)$$ Doing the same process for the second eigenvalue yields: $v_2 = \left(-\dfrac{1}{2} + \dfrac{i}{2},\ 1\right)$ From all of this information, we can write the matrix exponential using the Jordan Normal Form. 
We have the diagonal form: $$A = P \cdot J\cdot P^{-1} = \begin{bmatrix}-1/2+i/2 & -1/2-i/2 \\ 1 & 1\end{bmatrix} \cdot \begin{bmatrix}-1-i & 0 \\ 0 & -1+i\end{bmatrix} \cdot \begin{bmatrix}-i & 1/2-i/2 \\ i & 1/2+i/2 \end{bmatrix}$$ So, now we can take advantage of the diagonal form as: $\displaystyle e^A = P \cdot e^J \cdot P^{-1} = \begin{bmatrix}-1/2+i/2 & -1/2-i/2 \\ 1 & 1\end{bmatrix} \cdot e^{\begin{bmatrix}-1-i & 0 \\ 0 & -1+i\end{bmatrix}} \cdot \begin{bmatrix}-i & 1/2-i/2 \\ i & 1/2+i/2 \end{bmatrix} = \begin{bmatrix}-1/2+i/2 & -1/2-i/2 \\ 1 & 1\end{bmatrix} \cdot \begin{bmatrix}e^{-1-i} & 0 \\ 0 & e^{-1+i}\end{bmatrix} \cdot \begin{bmatrix}-i & 1/2-i/2 \\ i & 1/2+i/2 \end{bmatrix} = \begin{bmatrix} (1/2+i/2) e^{-1-i}+(1/2-i/2) e^{-1+i} & (1/2) i e^{-1-i}-(1/2) i e^{-1+i} \\-i e^{-1-i}+i e^{-1+i} & (1/2-i/2) e^{-1-i}+(1/2+i/2) e^{-1+i} \end{bmatrix}$ Other methods are shown in that paper I referenced above. If we compare this for the real case, the process is the same and we end up with: $\displaystyle e^A = P \cdot e^J \cdot P^{-1} = \begin{bmatrix}1 & 2\\1 & 1\end{bmatrix} \cdot e^{\begin{bmatrix}-1 & 0\\0 & 2\end{bmatrix}} \cdot \begin{bmatrix}-1 & 2\\1 & -1\end{bmatrix}= \begin{bmatrix}1 & 2\\1 & 1\end{bmatrix} \cdot {\begin{bmatrix}e^{-1} & 0\\0 & e^2\end{bmatrix}} \cdot \begin{bmatrix}-1 & 2\\1 & -1\end{bmatrix} = \begin{bmatrix}\dfrac{-1+2 e^3}{e} & -\dfrac{2 (-1+e^3)}{e}\\ \dfrac{-1+e^3}{e} & \dfrac{2-e^3}{e}\end{bmatrix}$ It is also worth noting that things can also get very ugly, for example, see: generalized eigenvector for 3x3 matrix with 1 eigenvalue, 2 eigenvectors
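A numerical cross-check of the two worked examples above (an editorial sketch, not part of the answer): scipy.linalg.expm should reproduce both closed forms, and the complex entries of $Pe^JP^{-1}$ must combine into a real matrix.

```python
import numpy as np
from scipy.linalg import expm

# Real-eigenvalue example from the question: A = [[5, -6], [3, -4]]
A = np.array([[5., -6.], [3., -4.]])
P = np.array([[1., 2.], [1., 1.]])          # eigenvectors (1,1) and (2,1) as columns
eA = P @ np.diag([np.exp(-1.), np.exp(2.)]) @ np.linalg.inv(P)
print(np.allclose(expm(A), eA))             # True

# Complex-eigenvalue example: B = [[0, 1], [-2, -2]]
B = np.array([[0., 1.], [-2., -2.]])
Pc = np.array([[-0.5 + 0.5j, -0.5 - 0.5j], [1., 1.]])
eB = Pc @ np.diag([np.exp(-1 - 1j), np.exp(-1 + 1j)]) @ np.linalg.inv(Pc)
print(np.allclose(expm(B), eB.real))        # True
print(np.max(np.abs(eB.imag)) < 1e-12)      # True: the imaginary parts cancel
```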
H: Linear transformation?Rotation question in linear algebra Given the vector $x=[1, 3, -7]^T$ in the basis $[1, 0, 0]^T$, $[0, 1, 0]^T$, and $[0, 0, 1]^T$ (Cartesian coordinates) perform the following operations: Rotate 45 degrees about the x-axis then rotate 30 degrees about the y-axis then rotate -10 degrees about the z-axis I have absolutely no idea how to start this problem. I can't find any examples. I will continue to look on this website for examples as well. I do know that the rotation matrix is: $$ Q= \left[ \begin{array}{cc} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{array} \right] $$ The question does go on to ask for reflections and projections about and onto various axis. Any help would be greatly appreciated. All of you math wizards have given me some great help!! Thanks!! AI: The matrix you need will be $3 \times 3$ and it is formed by taking the matrix you wrote and adjusting it a bit. Basically, you want to fix the axis of the rotation and twist the other two directions. To fix the $z$-direction use: $$ R = \left[ \begin{array}{ccc} \cos(\theta) & -\sin(\theta) & 0 \\ \sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \end{array} \right] $$ To fix the $y$-direction (which means $x$ and $z$ directions are rotated) use: $$ R = \left[ \begin{array}{ccc} \cos(\theta) & 0 & -\sin(\theta) \\ 0 & 1 & 0 \\ \sin(\theta) & 0 & \cos(\theta) \end{array} \right] $$ and finally, to rotate around the $x$-axis: $$ R = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos(\theta) & -\sin(\theta) \\ 0 & \sin(\theta) & \cos(\theta) \end{array} \right] $$ maybe you should try some simple examples with my suggestions to see how these work. Pick an easy angle and a simple vector and test the transformations.
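A small numerical sketch (an editorial addition) that applies the three matrices exactly as written in the answer to the vector from the question, assuming each rotation is taken about the fixed coordinate axes in the stated order (45 degrees about x, then 30 about y, then -10 about z):

```python
import numpy as np

def Rx(t):  # rotation fixing the x-axis, as in the answer
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def Ry(t):  # rotation fixing the y-axis, as in the answer
    return np.array([[np.cos(t), 0, -np.sin(t)],
                     [0, 1, 0],
                     [np.sin(t), 0,  np.cos(t)]])

def Rz(t):  # rotation fixing the z-axis, as in the answer
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0, 0, 1]])

x = np.array([1., 3., -7.])
deg = np.pi / 180
result = Rz(-10 * deg) @ Ry(30 * deg) @ Rx(45 * deg) @ x
print(result)                                        # the rotated vector
print(np.linalg.norm(result), np.linalg.norm(x))     # equal lengths: rotations preserve the norm
```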
H: Convergence of increasing measurable functions in measure? Let $\{f_{n}\}$ be an increasing sequence of measurable functions such that $f_{n} \rightarrow f$ in measure. Show that $f_{n}\uparrow f$ almost everywhere. My attempt: The sequence $\{f_{n}\}$ converges to $f$ in measure if for any $\epsilon >0$, $$ m (\{x: |f_n(x) - f(x)| > \epsilon\}) \rightarrow 0\text{ as } n \rightarrow\infty. $$ I think that taking a small enough epsilon concludes the result, since the $f_n$ are increasing. Can you help me solve this exercise? Thanks for your help. AI: If $f_n$ converges to $f$ in measure, we have a subsequence $f_{n_k}$ converging to $f$ almost everywhere. As the sequence $f_n$ is increasing, at every point it either converges or increases to $\infty$. But as the subsequence $f_{n_k}$ converges to $f$ at almost every point, we have that $f_n$ converges to $f$ almost everywhere.
H: Distance between convex set and non-convex set? So in http://en.m.wikipedia.org/wiki/Shapley%E2%80%93Folkman_lemma there is some talk about distance between a mintowksi sum and a convex set. But I couldn't get how distance is being defined. Can anyone help here? AI: In general, the distance between two sets $A$ and $B$ is defined as $$d(A,B)=\sup\limits_{a\in A}\inf\limits_{b\in B}\|a-b\|.$$ So for example if $A=\{0\}$ and $B=\{1,2\}$ then $$d(A,B)=\inf\{|0-1|,|0-2|\}=\inf\{1,2\}=1.$$
H: Problem Solving using Algebra If Peter is $7$ years older than Sharon and John is twice as old as Peter, work out how old Peter is if the average of their ages is $19$. Thanks! :) AI: HINT: If the age of Peter is $x$ years, the age of Sharon will be $x-7$ years and that of John will be $2x$ years So, $$\frac{x-7+x+2x}3=19$$
H: Differentiation term by term of Taylor series Suppose I have a Taylor series of a function around $z_{0}$ in the complex plane which converges in a ball of radius $r>0$. Can I differentiate the Taylor series term by term and get the derivative of $f$? If so, can you please prove it? Otherwise, give an example of why it is wrong? Thanks! AI: You can view the Taylor series as a power series about $z_0$ with radius of convergence $r$. If $f(z)$ denotes the series, then we know that the derivative $f'(z)$ is found by differentiating the series term by term. For the proof, you can look up John B. Conway's 'Functions of One Complex Variable', Proposition 2.5, which is available on the internet. So I think you are right.
H: Limit on a spiral I was thinking about limits of functions along various spirals and this one stumped me a bit. The limit that needs to be found is ultimately: $$\lim_{\varphi\to\infty} \coth\varphi\csc\varphi$$ Here is the spiral that it comes from: The equation for this is $(x,y)=(\tanh \varphi \sin \varphi, \tanh \varphi \cos \varphi)$ and the function for which I desire the limit is $f(x,y)=\frac{1}{x}$ I'm interested in the limit as $\varphi\to\infty$. I've got an answer using computer algebra, which is $$L=\lim_{\varphi\to\infty} \coth\varphi\csc\varphi \notin (-1,1)$$ I have no intuitive understanding of why it isn't just $L\in[-\infty,\infty]$. AI: It is clear that the range of $$\coth(\varphi)\csc(\varphi)=\frac {e^{2\varphi}+1}{(e^{2\varphi}-1)\sin(\varphi)}$$ is $(-\infty,-1) \cup (1,\infty).$ Therefore, no partial limit of the function under consideration belongs to $[-1,1]$. On the other hand, for every $a \in (-\infty,-1) \cup (1,\infty)$ and for every $E>0$ the intermediate value theorem implies that the equation $$ \sin\varphi= \frac {e^{2\varphi}+1}{(e^{2\varphi}-1)a}$$ has a solution $\varphi_0 > E.$ The last statement means that $a$ is a partial limit of the function under consideration.
H: Example of a function that's uniformly continuous on a closed interval but not on an open one I'm looking for an example of a function that's uniformly continuous on a closed interval [a,b] but isn't on an open one (a,b). Can such a function exist? If so, can you help me find an example? AI: I don't think such a function can exist, at least not if you mean the same $a,b$ in the open and closed intervals. Logically, the statement that a function $f$ is uniformly continuous on $(a,b)$ follows directly from the statement that it is uniformly continuous on $[a,b]$. To be specific, given $\epsilon$, we know from uniform continuity on $[a,b]$ that we have some $\delta$, such that for any $x,y \in [a,b]$, $| x-y| < \delta$ implies $|f(x)-f(y)| < \epsilon$. Now, since any $x,y \in (a,b)$ are also in $[a,b]$, the definition for uniform continuity is satisfied on $(a,b)$, and we have immediately that $f$ is uniformly continuous on $(a,b)$ as well.
H: Could we define multiplication of “complex numbers” in this way? If we define multiplication of complex numbers as follows: $$z_1 \cdot z_2=(x_1x_2+y_1y_2, x_1y_2+x_2y_1)$$ then it can be shown that it induces a group structure $(G, \cdot)$, because it has inverse elements: $$z^{-1}=\left(\frac{x}{x^2-y^2},\frac{y}{-x^2+y^2}\right)$$ and the unit element appears also: $(1,0)$. It is also commutative and satisfies all other conditions (I think so). So why not define multiplication of complex numbers like this? Does this definition lead to a contradiction? Of course you have no definition whatsoever for $\sqrt{-1}$, because now $i^2=1$ implies $i=1$. AI: This is just the ring $\mathbb{R}[t]/(t^2-1)$ in disguise; the equivalence class of the polynomial $a+bt$ corresponds to the ordered pair $(a,b)$, as you can see: $$\begin{align*} (x_1+y_1t)(x_2+y_2t)&=x_1x_2+(x_1y_2+x_2y_1)t+(y_1y_2)t^2\\\\ &\equiv (x_1x_2+y_1y_2)+(x_1y_2+x_2y_1)t\mod (t^2-1). \end{align*}$$ There's nothing "wrong" with this ring, but I think it is fair to say that it is less useful than the complex numbers.
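A short symbolic check (editorial addition, using sympy) that the proposed multiplication is exactly polynomial multiplication reduced modulo $t^2-1$:

```python
from sympy import symbols, expand

t, x1, y1, x2, y2 = symbols('t x1 y1 x2 y2')

product = expand((x1 + y1*t) * (x2 + y2*t))    # x1*x2 + (x1*y2 + x2*y1)*t + y1*y2*t**2
reduced = product.subs(t**2, 1)                # impose the relation t**2 = 1

claimed = (x1*x2 + y1*y2) + (x1*y2 + x2*y1)*t  # the multiplication rule from the question
print(expand(reduced - claimed) == 0)          # True
```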
H: Möbius transformation question Möbius transformation copies the annulus $\{z:r<|z|<1\}$ to the domain between $\{z:|z-1/4|=1/4\}$ and $\{z:|z|=1\}$ Please help me to find what is $r$. AI: Hint: The inverse of this transform is also Möbius and might for simplicity map $1\mapsto 1$, $-1\mapsto-1$, $\frac12\mapsto r$, $0\mapsto -r$. Can you find $r$ in this special case? Can you justify the simplification?
H: Probability of the last remaining item after removing items Question: Mr. J has 3 shirts and he wears them each day with the following probabilities: Green (1/3), White(1/6) and Red(1/2). Once he decided to clean out his closet, so each evening he independently decides with a prob. 1/5 to throw the shirt he was wearing that day. Each morning he picks out a shirt of the remaining shirts, according to the the same probabilities. For example if he threw the green shirt, then he will wear the red shirt with prob 3/4 and the white with prob 1/4. What is the probability that the last shirt remaining in his closet is the white one? a. 0.33 b. 0.58 c. 0.69 d. 0.83 Our thoughts Me and my buddy are pretty frustrated from this one... We saw some solution that ignores the throwing out part completely. We were wondering why is that correct? And could anyone post a fully explained solution to this. Thanks in advance. AI: john is correct. As he says, you're only interested in the days when shirts are thrown out, so the probability only affects how long it takes Mr. J to have only one shirt left. So the probabilities for which shirts are left are: $$ \begin{align} p_{white \ left} = \frac{1}{3}\frac{3}{4}+\frac{1}{2}\frac{2}{3}=\frac{7}{12} \\ p_{red \ left} = \frac{1}{3}\frac{1}{4}+\frac{1}{6}\frac{2}{5}=\frac{3}{20} \\ p_{green \ left} = \frac{1}{6}\frac{3}{5}+\frac{1}{2}\frac{1}{3}=\frac{4}{15} \end{align} $$ These probabilities add up to 1. The white is most likely to be left because it's worn with the lowest probability, so it's least likely to be thrown out. $\frac{7}{12}$ is 0.58, so it's answer (b).
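A Monte Carlo sketch (editorial addition; the simulation layout, including the 1/5 evening step, is an assumed reading of the problem) that supports both the computed probabilities and the claim that the throwing probability only affects how long the process takes, not which shirt is last:

```python
import random

def last_shirt(p_throw=0.2):
    shirts = {'green': 1/3, 'white': 1/6, 'red': 1/2}
    while len(shirts) > 1:
        names = list(shirts)
        weights = [shirts[s] for s in names]
        worn = random.choices(names, weights=weights)[0]   # morning: pick among remaining shirts
        if random.random() < p_throw:                      # evening: throw that shirt away
            del shirts[worn]
    return next(iter(shirts))

trials = 100_000
counts = {'green': 0, 'white': 0, 'red': 0}
for _ in range(trials):
    counts[last_shirt()] += 1
print({s: round(c / trials, 3) for s, c in counts.items()})
# roughly {'green': 0.267, 'white': 0.583, 'red': 0.15}, matching 4/15, 7/12, 3/20
```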
H: Differential Equation of 2nd degree with non constant coefficients How to solve this equation : $x''+tx'+x=0$ I tried variable change but no results. Is there a concrete way that works for every equation of this kind. If you can show me step by step that would be great. Thank you for help ! AI: Note $tx'+x=(tx)'$ and so:$$x''+tx'+x=0\\(x'+tx)'=0\\x'+tx=C$$Can you solve this simple linear first-order equation now?
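Completing the hint (an editorial addition: the closed form below is one possible way to finish, using the integrating factor $e^{t^2/2}$ for $x'+tx=C$; it is not part of the original answer), sympy can confirm that the resulting function solves the original equation:

```python
import sympy as sp

t, s, C1, C2 = sp.symbols('t s C1 C2')

# x' + t*x = C1 solved with integrating factor exp(t**2/2), plus the homogeneous part C2*exp(-t**2/2)
x = sp.exp(-t**2/2) * (C2 + C1 * sp.integrate(sp.exp(s**2/2), (s, 0, t)))

print(sp.simplify(sp.diff(x, t) + t*x - C1))                # 0: the first-order reduction holds
print(sp.simplify(sp.diff(x, t, 2) + t*sp.diff(x, t) + x))  # 0: the original ODE is satisfied
```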
H: $F[x]/\langle p(x)\rangle$ is a field $\iff F[x]/\langle p(x)\rangle$ is an integral domain $$\color{red}{Is~my~interpretation~correct?}$$ Let $F$ be a field. I know that $p(x)\in F[x]$ is irreducible $\iff \langle p(x)\rangle$ is maximal i.e. $F[x]/\langle p(x)\rangle$ is a field $\iff p(x)$ is irreducible in $F[x].$ I want to obtain an analogous result for integral domains. I know that for any integral domain $D,$ $p(x)(\ne0)\in D[x]$ is prime $\iff \langle p(x)\rangle$ is a prime ideal. So for $p(x)(\ne 0)\in D[x],~D[x]/\langle p(x)\rangle$ is an integral domain $\iff\langle p(x)\rangle$ is a prime ideal. So for $p(x)(\ne0)\in F[x],F$ being a field, $F[x]/\langle p(x)\rangle$ is an integral domain $\iff \langle p(x)\rangle$ is a prime ideal $\iff p(x)$ is prime $\Rightarrow p(x)$ is irreducible $\iff F[x]/\langle p(x)\rangle$ is a field. So for any field $F$ and for any $p(x)\in F[x],F[x]/\langle p(x)\rangle$ is a field $\iff F[x]/\langle p(x)\rangle$ is an integral domain. AI: It is true only provided you exclude $p(x)=0$ (which you did en route, but not in the final statement). The zero ideal of a principal ideal domain is prime (because of the "domain"), but not maximal (unless the PID is itself a field), and it is the only non-maximal prime ideal. A direct way to see the equivalence is that, assuming $p(x)\neq0$, the ring $R=F[x]/\langle p(x)\rangle$ is a finite dimensional vector space over the field$~F$. The operation of multiplication by any element $a\in R$ is an $F$-linear operator on $R$; it will be injective ($a$ is regular in $R$) if and only if it is surjective ($a$ is invertible in $R$, as $1\in R$ is a multiple of it) if and only if its determinant is nonzero. In particular all nonzero $a$ are regular ($R$ is an integral domain, and $\langle p(x)\rangle$ prime) if and only if all nonzero $a$ are invertible ($R$ is a field, and $\langle p(x)\rangle$ maximal).
H: integration with substitution - why is this so? I have this problem: $$\int_0^2 \mathrm{(x-1-e^{-\frac{1}{2}x})}\,\mathrm{d}x$$ what I tried: $t=-\dfrac{1}{2}x \Rightarrow \dfrac{dt}{dx} = \dfrac{1}{2} \Rightarrow dx = \dfrac{dt}{2}$ $$\int_0^2 \mathrm{(x-1)}\,\mathrm{d}x -\int_0^2 \mathrm{(e^{-\frac{1}{2}x})}\,\mathrm{d}x = \\ \int_0^2 \mathrm{x}\,\mathrm{d}x - \int_0^2 \mathrm{1}\,\mathrm{d}x - \int_0^2 \mathrm{(e^{-\frac{1}{2}x})}\,\mathrm{d}x = \\ [\dfrac{x^2}{2}]_{0}^{2} - [x]_{0}^{2} - \int_0^{-1} \mathrm{(e^t)}\,\mathrm{d}t = \\ [\dfrac{4}{2} - 0] - [2] - [e^{-\frac{1}{2}x}]_{0}^{-1} = 2- 2 - e^{\frac{1}{2}} - 0 = - e^{\frac{1}{2}} $$ which is obviously wrong because I substituted $t$ with $-\frac{1}{2}x$ at the end. Why shouldn't I do this? AI: You must either keep the integral in terms of $t$ AND change the bounds, or convert back to $x$ and use the original bounds. Not both.
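For reference, a symbolic evaluation of the original definite integral (editorial addition), which either correct substitution route should reproduce:

```python
from sympy import symbols, integrate, exp, simplify

x = symbols('x')
value = integrate(x - 1 - exp(-x/2), (x, 0, 2))
print(simplify(value))   # 2*exp(-1) - 2, i.e. 2/e - 2
```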
H: Verifying spectral norm I was wondering how one could verify the relation that $||A||_2 = \sqrt{\rho(A^HA)}$ for matrices. I mean I have seen this so often, but never found a proof of it. Is there a smart way to do this quickly? Because only referring to the most basic properties of matrices seem to be not a good idea here. AI: If you know singular value decomposition, let $A=USV^H$ be a SVD, where the singular values in $S=\operatorname{diag}(\sigma_1,\ldots,\sigma_n)$ are arranged in descending order. Then $\sqrt{\rho(A^HA)}=\sqrt{\rho(S^HS)}=\sigma_1$. If you define $\|A\|_2$ as $\sigma_1$, you are done. If you define $\|A\|_2$ as $\max_{\|x\|_2=1}\|Ax\|$, it follows immediately that $$\max_{\|x\|_2=1}\|Ax\|_2=\max_{\|x\|_2=1}\|US(V^H)x\|_2=\max_{\|x\|_2=1}\|S(V^H)x\|_2=\max_{\|y\|_2=1}\|Sy\|_2=\sigma_1.$$ Edit: Alternatively, note that $A^HA$ is positive semidefinite because $x^H A^HAx=\|Ax\|_2^2\ge0$. Therefore it can be unitarily diagonalised as $U^HDU$, where $U$ is unitary and $D=\operatorname{diag}(d_1,\ldots,d_n)$ is a nonnegative diagonal matrix. As the diagonal entries of $D$ are eigenvalues of $A^HA$, we have $\rho(A^HA)=\max_id_i$. Therefore \begin{align*} \|A\|_2 &= \max_{\|x\|_2=1}\|Ax\|_2 = \sqrt{\max_{\|x\|_2=1}\|Ax\|_2^2} = \sqrt{\max_{\|x\|_2=1} x^H A^HAx}\\ &= \sqrt{\max_{\|x\|_2=1} (Ux)^H D(Ux)} = \sqrt{\max_{\|y\|_2=1} y^H Dy} =\sqrt{\max_id_i}=\sqrt{\rho(A^HA)}. \end{align*}
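A quick numerical illustration of the identity on a random complex matrix (editorial addition; the test matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

spectral_norm = np.linalg.norm(A, 2)                # largest singular value of A
rho = max(abs(np.linalg.eigvals(A.conj().T @ A)))   # spectral radius of A^H A
print(np.isclose(spectral_norm, np.sqrt(rho)))      # True
```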
H: Proving that $Hom_G (V,W)$ is 1-dimensional when $V,W$ are irreducible Question: Let $G$ be a group. For any two representations $V,V'$ of $G$ over $\mathbb C$, let $Hom_G (V,V')$ denote the space of all linear maps $h: V\rightarrow V'$ such that $h\rho'_g = \rho_g h\forall g\in G$. I want to prove that if $V$ and V' are irreducible and $V\cong V'$ then $Hom_G(V,V')$ is 1-dimensional. Please tell me if I have got it right: By Schur's lemma, Any element $f\in Hom_G(V,V)$ is of the form $\lambda.Id_V$, where $\lambda\in \mathbb C$, so dim $Hom_G(V,V) = 1.$ Now, let $T:V\rightarrow V'$ be an isomorphism, so that $T\rho'_g = \rho_g T$, hence $T\neq0$, and $T\in Hom_G(V,V')$. So $Hom_G(V,V')\neq0.$ Now, for any $h\in Hom_G(V,V')$, we have$$V\xrightarrow{h}V'\xrightarrow{T^{-1}}V$$, so that $T^{-1}h\in Hom_G(V,V)$, and hence $T^{-1}h=\alpha.Id_V$ for some $\alpha\in\mathbb C\Rightarrow h=\alpha.T\circ Id_V=\alpha T$. Hence any element in $Hom_G(V,V')$ is a scalar multiple of $T$, and so dim $Hom_G(V,V')=1$. AI: Looks fine. $\phantom{space filler to have more space}$
H: manifold diffeomorphic (?) to SO(3) Consider the set of all pairs $(\boldsymbol{n},\boldsymbol{v})$ of vectors in $\mathbb{R}^3$ such that $\boldsymbol{n}$ is a vector on the unit sphere centered at the origin and $\boldsymbol{v}$ is a unit vector tangent to the sphere at the point $\boldsymbol{n}.$ i. Introduce a structure of smooth manifold on this set. ii. Prove that this manifold is diffeomorphic to the group $SO(3).$ To my understanding, this manifold is $S^2 \times S^1,$ which gives a parametrization of $SO(3),$ but it is far from being a diffeomorphism, i.e. the exercise is false: do you agree? AI: The exercise is fine. The manifold described in the exercise is called the unit tangent bundle of $S^2$ and it is not diffeomorphic to $S^2 \times S^1$ [one way to see this is to observe that you could produce a nowhere vanishing vector field on $S^2$ if they were diffeomorphic. This is impossible by the hairy ball theorem.] Here's a slightly more detailed outline: By definition the set $M$ is given as a subset of $\mathbb{R}^3 \times \mathbb{R}^3 \ni (\boldsymbol{n}, \boldsymbol{v})$ subject to the equations $$ \begin{align*} 1 & = \boldsymbol{n} \cdot \boldsymbol{n} && \boldsymbol{n} \text{ is a unit vector}\\ 1 & = \boldsymbol{v} \cdot \boldsymbol{v} && \boldsymbol{v} \text{ is a unit vector}\\ 0 & = \boldsymbol{n} \cdot \boldsymbol{v} && \boldsymbol{n} \text{ is perpendicular to }\boldsymbol{v}. \end{align*} $$ Can you use this information to turn $M$ into a manifold? (implicit functions, regular values, etc) The map $M \to SO(3)$ given by $(\boldsymbol{n},\boldsymbol{v}) \mapsto [\boldsymbol{n},\boldsymbol{v},\boldsymbol{n} \times \boldsymbol{v}]$ is well-defined smooth and bijective. You can exhibit an explicit smooth inverse, so $M$ and $SO(3)$ are diffeomorphic.
H: Show that a rational number has no good rational approximations This is homework question. The teacher proved that if $a$ is irrational, there are infinitely many rational numbers $\frac{x}{y}$ such that $|\frac{x}{y} - a|<\frac{1}{y^2}$. What we need to prove is: Let $a$ be a rational number. Show that $a$ has no good rational approximations in the following sense: there exists a real number $c > 0$ such that for any rational number $\frac{p}{q}$ other than $a$ itself, we have $|a-\frac{p}{q}|\geq \frac{c}{q}$. Show that for a rational number $a$ the inequality $|a-\frac{p}{q}|<\frac{1}{q^2}$ has only finitely many solutions $(p, q)$. Your help is appreciated because I'm clueless here... Thank you! AI: If $a=\frac rs$ (and both are in shortest terms and $s,q>0$) then $\left|\frac rs-\frac pq\right|=\frac{|rq-ps|}{sq}\ge\frac1{sq}$, so we can use $c=\frac 1s$. From $\frac1{sq}\le \left|a-\frac pq\right|\le \frac 1{q^2}$ we get $q\le s$. There are only finitely many fractions $\frac pq$ with denominator $q\le s$ and(!) $|a-\frac pq|\le 1$.
H: How to prove the Archimedean property using the completeness axiom? The problem follows: "Using the Completeness Axiom for R, prove the Archimedean property of the real numbers: for any x in R, there is an integer n>0 such that n>x." I tried to prove it by reductio ad absurdum, but I can't.... (I tried in this way): for a fixed x in R, assume n<=x for all integers n. Since the left side diverges to infinity, x cannot be fixed. Therefore the assumption is not the case, and there exists n s.t. n>x. I think my proof has some problems.. and I don't know how I can prove it using the Completeness Axiom. Thank you! AI: I'm not sure what form of the completeness axiom you're used to, but I guess we could use here the following: if a subset of the real numbers has an upper [lower] bound then it has a least [greatest] upper [lower] bound (supremum) (infimum). Thus suppose the claim is false, so $$\exists\,x\in\Bbb R\;\;s.t.\;\;\forall\,n\in\Bbb N\;,\;n\le x$$ By the above, let $$x_0:=\sup\Bbb N\;\implies \forall\,\epsilon>0\;\exists\,n_\epsilon\in\Bbb N\,\,\;s.t.\,\,\;x_0-\epsilon<n_\epsilon\le x_0$$ This means $\,x_0\in\Bbb N\,$ (why?), but then the successor of $\;x_0\;$ is no longer bounded by $\,x_0\;$ (I'm assuming here Peano's Axioms for the naturals)
H: Algorithm to find the number of numbers which are both perfect square as well as perfect cube I was teaching the indices chapter to my brother when I got this idea to find the number of numbers which are perfect squares as well as perfect cubes. I was wondering whether there is an algorithm to find these numbers in a fixed range, like between $0$ and $100$. One number that can be thought of as belonging to this category is $64$; it is the cube of $4$ and the square of $8$. AI: You do know that being a square and a cube is equivalent to being a sixth power, don't you? The number of sixth powers up to (and including) $n\ge 0$ is $1+\lfloor\sqrt[6]n\rfloor$, hence the number of sixth powers between $a$ and $b$, inclusive (with $0<a\le b$), is $\lfloor\sqrt[6] b\rfloor - \lfloor\sqrt[6] {a-1}\rfloor$
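The counting formula turns directly into an algorithm; here is a sketch (editorial addition) that uses an exact integer sixth root to avoid floating-point trouble near perfect powers:

```python
def isixthroot(n):
    """Largest integer r with r**6 <= n (exact integer sixth root, n >= 0)."""
    r = round(n ** (1 / 6))
    while r ** 6 > n:
        r -= 1
    while (r + 1) ** 6 <= n:
        r += 1
    return r

def count_sixth_powers(a, b):
    """Number of perfect sixth powers in [a, b], for 0 < a <= b."""
    return isixthroot(b) - isixthroot(a - 1)

print(count_sixth_powers(1, 100))      # 2: namely 1 and 64
print([k ** 6 for k in range(1, 4)])   # 1, 64, 729
```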
H: Relationship between adjoint matrix and inverse function I am struggling with the following exercise: Let $A$ be a matrix; then we have for every subspace $U$ that: $A^*(U ^\perp)=(A^{-1}(U))^\perp$ I do not even know where to start to solve this exercise. Does anybody have a hint for me? AI: Hints: We have, for $$w\in U^\perp\;\wedge v\in A^{-1}(U)\;(\text{so}\;\;v=A^{-1}u\;\;\text{for some}\;\;u\in U\;):$$ $$\;\langle A^*w,v\rangle=\langle w,Av\rangle=\langle w,u\rangle=0$$ and thus we get $\;A^*(U^\perp)\subset \left(A^{-1}(U)\right)^\perp\;$. Now you try to prove the reverse inclusion.
H: Relationship between an inhomogeneous Poisson process and Markov chain What type of Markov process relates to an inhomogeneous Poisson process? A homogeneous Poisson process -- one where the rate, $\lambda$, is constant -- is a pure birth continuous time Markov chain (with a constant birth rate). That is, if $q_{ij}$ is the transition rate then $q_{ij} = 0 $ unless $j=i+1$, in which case $q_{i,i+1} = \lambda$. What happens when $\lambda = \lambda \left(t\right)$? Does $q \rightarrow q\left(t\right)$? And how is this represented in a transition matrix? For a homogeneous Poisson process, the matrix of transition rates for the corresponding Markov process is one that is zero everywhere except the band immediately above the diagonal, all entries of which are $\lambda$. AI: Well, let's try this... Let $\{N_{t}\}_{t\geq0}$ be an inhomogeneous Poisson process with rate function $\{\lambda(t)\}_{t\geq0}$. Let's try to compute the transition probabilities. By definition of an inhomogeneous Poisson process, we have $$P\{N(t+h)-N(t)=1\}=\lambda(t)h+o(h)$$ $$P\{N(t+h)-N(t)>1\}=o(h)$$ Since $\{N_{t}\}_{t\geq0}$ has independent increments by definition, we have $$P\{N(t+h)-N(t)=1|N(t)=k\}=\lambda(t)h+o(h)$$ $$P\{N(t+h)-N(t)>1|N(t)=k\}=o(h)$$ for any $t\geq0$, any $k\in\mathbb{N}_{0}$ and "small" $h$. Dividing the previous equations by $h$, and letting $h\rightarrow 0$, we get $$\lim_{h\rightarrow0}\frac{1}{h}P\{N(t+h)=i+1|N(t)=i\}=\lambda(t)$$ $$\lim_{h\rightarrow0}\frac{1}{h}P\{N(t+h)=j|N(t)=i\}=0$$ for $j>i+1$ or $j<i$.
H: Why is the additive identity of a ring always a multiplicative absorbing element? In problems concerned with finding the units in a ring, my textbook seems to always ignore the additive identity as a possibility. In combination with learning the definition of a field (a ring in which every nonzero element is a unit) and the fact that in every ring I've encountered so far, the additive identity is a multiplicative absorbing element, this led me to the suspicion that maybe this is always the case. The Wikipedia page on additive identities confirms this and proves it by stating: $s\cdot0 = s\cdot \left(0 + 0\right) = s\cdot0 + s\cdot0 \Rightarrow s\cdot 0 = 0$ (by cancellation) However, my textbook also shows that rings do not satisfy the cancellation law of multiplication in general, so I guess this 'proof' is not sufficient then. Is there a way to prove it without assuming the multiplicative cancellation property? AI: In the last step of the proof we use cancellation of addition, not multiplication. We are effectively adding the additive inverse of $s\cdot 0$ to each side.
H: How does one obtain the Jordan normal form of a matrix $A$ by studying $XI-A$? In our lecture notes, there's the following example problem. Find a Jordan normal form matrix that is similar to the following. $$A=\begin{bmatrix}2 & 0 & 0 & 0\\-1 & 1 & 0 & 0\\0 & -1 & 0 & -1\\1 & 1 & 1 & 2\end{bmatrix}$$ The solution begins by demonstrating that the matrix $XI-A$ is equivalent to a matrix of the form $$B=\begin{bmatrix}-1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & -(X-1) & 0\\0 & 0 & 0 & (X-2)(X-1)^2\end{bmatrix}$$ From here, the notes immediately deduce that the Jordan normal form of $A$ is $$J=\begin{bmatrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 1 & 1 & 0\\0 & 0 & 0 & 2\end{bmatrix}.$$ I'm struggling to understand this final step - how do we get from $B$ to $J$? I get that $J$ consists of three elementary Jordan matrices, but how are these matrices actually found? AI: The method used in the lecture notes is based $\def\KX{K[X]}$on the structure theorem for finitely generated $\KX$-modules (more generally f.g. modules over a PID) rather than on purely linear algebraic considerations. What the given computation shows is that the $\KX$-module, defined by having $X$ act on $K^4$ by the matrix$~A$, is isomorphic to $\KX/\langle1\rangle\oplus\KX/\langle1\rangle\oplus\KX/\langle X-1\rangle\oplus\KX/\langle(X-2)(X-1)^2\rangle$ (the first two summands are trivial and can be omitted). The final factor can be further decomposed as $\KX/\langle(X-2)(X-1)^2\rangle\cong\KX/\langle X-2\rangle\oplus\KX/\langle(X-1)^2\rangle$ because of a theorem for which I am still wondering if it has a name in English (in French it is the lemme des noyaux, it is related to the Chinese remainder theorem, but not the same as it is about modules rather than rings). So our $\KX$-module is isomorphic to $\KX/\langle X-1\rangle\oplus\KX/\langle(X-1)^2\rangle\oplus\KX/\langle X-2\rangle$, where I have moved to the front the two factors that contain eigenvectors for the eigenvalue$~1$. The outermost summands are of dimension$~1$, and $X$ acts as a scalar $1$ respectively $2$ on each of them; in the middle factor of dimension$~2$, the action of $X$ has eigenvalue$~1$ but is not diagonalisable: it is a Jordan block of size$~2$. So this is how one gets two Jordan blocks of sizes $1,2$ for $\lambda=1$ and a single Jordan block of size$~1$ for $\lambda=2$. I'll add a word about the initial computation leading to the diagonal form. The algorithm applied is that of computing the Smith normal form of the matrix $XI-A$ over $\KX$ (which is a PID), using row and column operations with scalars in $\KX$ to arrive at a diagonal form in which successive diagonal entries each divide the next one. I am not sure whether your lecture notes explain why one should start with $XI-A$, and why the result describes a cyclic decomposition of the $\KX$-module defined by$~A$, so here it is. Intermediate matrices describe a presentation of a $\KX$-module as quotient of the free module $\KX^n$, where $n$ is the number of rows, by the sub-module $N$ generated by the columns of the matrix. During the algorithm, column operations correspond to changing to different generators of $N$, while row operations correspond to choosing a different basis of $\KX^n$ than the standard one, and changing coordinates (of the generators of$~N$) to the new basis. 
At the end of the transformation one arrives at a situation in which all generators of$~N$ are (polynomial) multiples of the basis of $\KX^n$ used, whence the diagonal form, and this describes the module as a direct sum of cyclic sub-modules. So why does $XI-A$ describe the $\KX$-module$~M$ defined on $K^n$ by the action of$~A$? Because the standard basis $(e_1,\ldots,e_n)$ of $K^n$ is certainly a set of generators for$~M$ (probably quite redundant) so we have a surjective morphism $f:\KX^n\to M$, and column$~j$ of $XI-A$ describes $Xe_j-\sum_i A_{i,j}e_i\in\KX^n$ which element, given that $X$ acts on $M$ as $A$, is in the kernel $N$ of $f$ by definition; it is not hard to see that these elements generate all of$~N$, so that they give a complete presentation of$~M$. Whether this method is a very practical manner to find a Jordan normal form is questionable; in my experience performing the Smith normal form algorithm over $\KX$ is extremely tedious an error prone by hand. However, apart from the theoretical importance, it is certainly an algorithm, while I think methods of finding the Jordan normal form by linear algebra only tend to be hard to describe as a complete algorithm (in practice many shortcuts are possible, but a complete method covering all possibilities is long to describe). I've tried to describe a linear algebraic algorithm (as well as the one above) for the somewhat coarser Rational Canonical (or Frobenius Normal) Form in this answer.
H: Atiyah-Macdonald, Proposition 2.12, uniqueness of the tensor product. The following is a result from Atiyah-Macdonald, defining and showing existence and uniqueness of tensor product of modules over a commutative ring. Proposition 2.12. Let $M, N$ be $A$-modules. There exists a pair $(T,g)$ consisting of an $A$-module $T$ and an $A$-bilinear mapping $g: M \times N \rightarrow T$. with the following property: Given any $A$-module $P$ and any $A$-bilinear mapping $f: M \times N \rightarrow P$, there exists a unique $A$-linear mapping $f':T \rightarrow P$ such that $f = f' \circ g$. Moreover, if $(T,g)$ and $(T',g')$ are two pairs with this property, then there exists a unique isomorphism $j:T \rightarrow T'$ such that $j \circ g = g'$. The proof of this proposition starts with uniqueness statement, which is the thing I have some problems with. Proof. i) Uniqueness. Replacing $(P,f)$ by $(T',g')$ we get a unique $j:T \rightarrow T'$ such that $g' = j \circ g$. Interchanging roles of $T$ and $T'$, we get $j':T' \rightarrow T$ such that $g = j' \circ g'$. Each composition $j \circ j', j' \circ j$ must be the identity, and therefore $j$ is an isomorphism. I do not understand how the authors at this stage can conclude that $j \circ j'$ and $j' \circ j$ are identity homomorphisms. From the equations we get $g = j' \circ j \circ g$, which shows that $j' \circ j$ is the identity on the image of $g$. Since $j,j'$ are homomorphisms, if we know that the image of $g$ generates $T$, then I could conclude that indeed $j' \circ j$ is the identity, but without this assumption I do not see how this can be done. Of course, when constructing $T$ explicitly one sees that indeed image of $g$ generates $T$. But even then, if I get another pair $(T', g')$ satisfying the hypotheses of the proposition, where image of $g'$ does not necessarily generate $T'$, I will obtain $g' = j \circ j' \circ g'$, and again I won't be able to conclude that $j \circ j'$ is the identity on entire $T'$. So, I guess my question is if I must add the hypothesis on $g, g'$ that their images generate $T, T'$ as $A$-modules to make this proof work, or is there something that I am missing? AI: First replace $(T',g')$ by $(T,g)$. Then there is an unique morphism $k:T\to T$ such that $k \circ g = g$, namely, $k={\rm id}$. Since $j' \circ j$ is another morphism with the same property, then $j' \circ j={\rm id}$.
H: Symmetric positive definite with respect to an inner product Let $A$ be a SPD(symmetric positive-definite) real $n\times n$ matrix. let $B=LL^T$ be also SPD. Let $(,)_B$ be an inner product given by $(x,y)_B=x \cdot By=y^T Bx$. Then $(B^{-1}Ax,y)_B=(x,B^{-1}Ay)_B$ for all $x,y$. Show that $B^{-1}A$ is SPD with respect to $(,)_B$ I don't understand what it means SPD with respect to an inner product. What does it mean? AI: It means that in the definition of positive definite matrix you replace a standard euclidean scalar product by an inner product generated by another matrix. A matrix $C$ is positive definite with respect to inner product $(,)_B$ defined by a positive definite matrix $B$ iff $$\forall v\ne 0\, (Cv,v)_B = (BCv,v)>0$$ or, in our case $$\forall v\ne 0\, (B^{-1}Av,v)_B = (Av,v)>0,$$ which is given.
H: $\frac{c}{1+c}\leq\frac{a}{1+a}+\frac{b}{1+b}$ for $0\leq c\leq a+b$ When proving that if $d$ is a metric, then $d'(x,y)=\dfrac{d(x,y)}{1+d(x,y)}$ is also a metric, I have to prove the inequality: $$\dfrac{c}{1+c}\leq\frac{a}{1+a}+\frac{b}{1+b}$$ for $0\leq c\leq a+b$. This is obvious by expanding, but is there a nicer way to see why it is true? AI: $$\frac{a}{1+a}+\frac{b}{1+b}\ge\frac{a}{1+a+b}+\frac{b}{1+b+a}=\frac{a+b}{1+a+b}=\frac{1}{1+\frac{1}{a+b}}\ge\frac{1}{1+\frac1c}=\frac{c}{1+c}$$
H: Can we 'form' infinitely many subspaces out of a finite dimensional vector space? Let $V$ be a vector space over $\mathbb{R}$ of dimension $n$, and let $U$ be a subspace of dimension $m$, where $m < n$. Show that if $m = n − 1$ then there are only two subspaces of $V$ that contain $U$ (namely $U$ and $V$), whereas if $m < n − 1$ then there are infinitely many distinct subspaces of $V$ that contain $U$. I have two questions arising from this problem. 1) Now say $m=n-1$. Then since $U$ is a subspace of $V$, it is contained in $V$. It is also contained in itself, namely $U$. Other options are out of the game in this case, because if we form a subspace of dimension $n-1$ containing $U$, it would have to be precisely $U$. This follows because if, say, $u_1,...,u_{n-1}$ is a basis for $U$, then a basis for another such subspace of $V$ with $n-1$ elements would have to span the same space as $u_1,...,u_{n-1}$, forming the same vector space $U$. Am I right here? 2) Now let $m<n-1$. Applying the same reasoning as above, we can see that there are many more options now. What I don't understand is how to prove that we can form infinitely many subspaces of $V$, even though it is finite dimensional. For example, let $\lambda _j$ be scalars from the field $\mathbb{R}$ and let $u_1,...,u_m$ with $m<n-1$ be a basis for the subspace $U$. While I think it follows without a proof (or do I need one?) that we can form infinitely many linearly independent combinations $\lambda_ju_j$ (for different sets of bases), why would there be infinitely many spaces spanned by those combinations, forming infinitely many subspaces of $V$? Hints would be appreciated! Thanks! AI: The subspaces $W$ with $U\subseteq W\subseteq V$ correspond 1-1 to the subspaces of $V/U$. If $V/U$ is one-dimensional, the only subspaces are $0$ and $V/U$ itself. If $V/U$ is at least two-dimensional there are infinitely many subspaces. For example, already $\mathbb R^2$ has - at least - the infinitely many (because $\mathbb R$ is infinite!) subspaces $\{(t,at)\in\mathbb R^2\mid t\in\mathbb R\}$, one for each value of $a\in \mathbb R$.
H: Simplify a expression with nested radical signs Simplify : $\sqrt{10+6 \sqrt{2}+5 \sqrt{3}+4 \sqrt{6}}$ I have tried completing square but failed, Can anyone help me please? Thanks. AI: $$10+6 \sqrt{2}+5 \sqrt{3}+4 \sqrt{6}=5(2+\sqrt3)+2\sqrt6(2+\sqrt3)=(2+\sqrt3)(5+2\sqrt6)$$ Now, $$5+2\sqrt6=3+2+2\cdot\sqrt2\cdot\sqrt3=(\sqrt3+\sqrt2)^2$$ and $$2+\sqrt3=\frac{4+2\sqrt3}2=\frac{3+1+2\cdot\sqrt3\cdot1}2=\frac{(\sqrt3+1)^2}2$$ Can you take it from here?
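A symbolic check of the two factorisation steps (editorial addition; the last expression is the fully combined form one gets by multiplying the two square roots, stated here as an assumed final answer rather than something from the original hint):

```python
from sympy import sqrt, expand, simplify

radicand = 10 + 6*sqrt(2) + 5*sqrt(3) + 4*sqrt(6)
step = (2 + sqrt(3)) * (5 + 2*sqrt(6))                      # factorisation from the answer
candidate = (sqrt(3) + 1) * (sqrt(3) + sqrt(2)) / sqrt(2)   # assumed denested value of the radical

print(simplify(expand(step) - radicand))          # 0: the factorisation is correct
print(simplify(expand(candidate**2) - radicand))  # 0: candidate^2 equals the radicand, and candidate > 0
```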
H: (locally Holder) + (locally Lipschitz) $\Longrightarrow$ Continuity? Let $f = f(x,y)$ be locally Holder continuous in $x$ and locally Lipschitz continuous in $y$, i.e. given $(z_1,z_2)$ in the domain of definition, there exists a neighborhood $U$ such that for any $(x_1,x_2)$, $(y_1,y_2) \in U$ then $$|f(x) - f(y)| \le H|x_1 - y_1|^{\theta} + L|x_2 - y_2|, \qquad \theta \in (0,1).$$ Is it true that $f$ is continuous? My opinion is that it is trivially true given that the continuity is a local concept. Am I missing something? AI: Yes, the inequality you wrote trivially implies continuity. A tiny effort is required to obtain the inequality you wrote from the assumptions "locally Hölder continuous in $x$ and locally Lipschitz continuous in $y$", since each of these assumptions refers to the behavior of $f$ when one of its variables is being held constant. This tiny effort amounts to using the triangle inequality.
H: Fibonacci sequence: how to prove that $\alpha^n=\alpha\cdot F_n + F_{n-1}$? Let $F_n$ be the $n$th Fibonacci number. Let $\alpha = \frac{1+\sqrt5}2$ and $\beta =\frac{1-\sqrt5}2$. How to prove that $\alpha^n=\alpha\cdot F_n + F_{n-1}$? I'm completely stuck on this question. I've managed to take the equation form of $F$ and come down to: $$\frac1{\sqrt 5}(\alpha^n(\alpha+\alpha^{-1}) - \beta^n(\alpha+\beta^{-1}))$$ But I'm lost from there on. I'm not looking for the answer, but any pointers would be great :)! AI: It's easier to prove this by induction. Note that $F_1\alpha + F_0=\alpha+0=\alpha^1$. Then use that $\alpha$ is a root of $x^2-x-1=0$ to show that if $\alpha^n=F_n\alpha+F_{n-1}$ then it follows that $\alpha^{n+1}=F_{n+1}\alpha + F_{n}$.
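A quick symbolic verification of the identity for small $n$ (editorial addition), using sympy's exact $\sqrt5$ and Fibonacci numbers:

```python
from sympy import sqrt, expand, fibonacci

alpha = (1 + sqrt(5)) / 2
for n in range(1, 15):
    assert expand(alpha**n - alpha*fibonacci(n) - fibonacci(n - 1)) == 0
print("alpha^n = alpha*F_n + F_(n-1) verified for n = 1..14")
```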
H: Boundedness of $\sum_{m=k}^{\infty} \frac{k}{m^2}$ Is the series $\sum_{m=k}^{\infty} \frac{k}{m^2}$ bounded independently of k? AI: For $k\geqslant 2$, write $\sum_{m=k}^{+\infty}\frac 1{m^2}\leqslant \sum_{m=k}^{+\infty}\frac 1{m(m-1)}=\frac1{k-1}$, since the right-hand series telescopes. Hence $\sum_{m=k}^{\infty}\frac k{m^2}\leqslant\frac k{k-1}\leqslant 2$, and for $k=1$ the sum is $\frac{\pi^2}6$, so the sums are indeed bounded independently of $k$.
H: Some matrix calculus (differentiation) Let $x\in \Bbb R^n$, $f(x)=||Ax-b||_2^2$. I want to show that $grad f(x)=2 A^T (Ax-b)$. But why is it true? I almost forgot gradient and matrix calculus; though I looked at the wiki, I can't figure it out. My trial: $$grad f(x)= \frac {\partial f}{\partial x}=2||Ax-b||_2 \frac {\partial ||Ax-b||_2}{\partial x}$$ And also $$\frac {\partial ||Ax-b||_2}{\partial x}=\frac {\partial ||Ax-b||_2}{\partial Ax}\frac{\partial Ax}{\partial x}=\frac{Ax-b}{||Ax-b||_2}A^T$$ So I have $grad f(x)=2(Ax-b)A^T$, what's the problem? AI: Compute $\frac{d}{dt} \mid _{t=0}f(x+th)$. You will get $2(Ax-b)\cdot Ah$, and you get the result from the fact that the adjoint (transpose) of $A$ is $A^T$, that is, $(Ax-b)\cdot Ah = \left(A^T(Ax-b)\right)\cdot h$. If you have problems with the computations I can write them down if you want.
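A numerical gradient check (editorial addition; the sizes and random data are arbitrary) comparing $2A^T(Ax-b)$ with central finite differences, which also shows that the row-vector expression $2(Ax-b)^TA$ contains the same numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
x = rng.standard_normal(3)

f = lambda v: np.sum((A @ v - b) ** 2)        # f(v) = ||Av - b||_2^2

grad_formula = 2 * A.T @ (A @ x - b)

eps = 1e-6
grad_numeric = np.array([(f(x + eps*e) - f(x - eps*e)) / (2*eps) for e in np.eye(3)])

print(np.allclose(grad_formula, grad_numeric, atol=1e-5))   # True
print(np.allclose(grad_formula, 2 * (A @ x - b) @ A))       # True: same entries, viewed as a row vector
```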
H: Modular arithmetic: How to solve $3^{n+1} \equiv 1 \pmod{11}$? Please, I can't solve this equation: $$3^{n+1}\equiv 1 \pmod{11}$$ for $n \in \mathbb{N}$. So what should I do please? Thanks. AI: First of all, using Fermat's Little Theorem, $3^{10}\equiv1\pmod {11}$ as $(3,11)=1$ Now,as we know if $a^n\equiv1\pmod m$ multiplicative order $\text{ord}_ma$ must divide $n$ where $a,m$ are any integers So, we need to check for the following powers of $3 : 1,2,5,10$ Now, $3^2=9\equiv-2\pmod {11}$ $3^5=(3^2)^2\cdot3^1\equiv(-2)^2\cdot3\equiv1\pmod {11} \implies \text{ord}_{11}3=5$ So, $\text{ord}_{11}3=5$ must divide $(n+1)\implies n=5m-1$ where $m$ is any integer As $n\in \mathbb N,$ we need $5m-1>0$ or $\ge0$ as "there seems to be no general agreement about whether to include $0$ in the set of natural numbers." In either case, $m\ge 1$
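A brute-force confirmation (editorial addition) of the multiplicative order of $3$ modulo $11$ and of the solution set $n=5m-1$:

```python
order = next(k for k in range(1, 11) if pow(3, k, 11) == 1)
print(order)   # 5

solutions = [n for n in range(1, 40) if pow(3, n + 1, 11) == 1]
print(solutions)   # [4, 9, 14, 19, 24, 29, 34, 39], i.e. n = 5m - 1
```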
H: Prove $n\mid \phi(2^n-1)$ If $2^p-1$ is a prime, (thus $p$ is a prime, too) then $p\mid 2^p-2=\phi(2^p-1).$ But I find $n\mid \phi(2^n-1)$ is always hold, no matter what $n$ is. Such as $4\mid \phi(2^4-1)=8.$ If we denote $a_n=\dfrac{\phi(2^n-1)}{n}$, then $a_n$ is A011260, but how to prove it is always integer? Thanks in advance! AI: Consider $U(2^n-1)$. Clearly $2\in U(2^n-1)$. It can also be shown easily that the order of $2$ in the group $U(2^n-1)$ is $n$. By Lagrange's theorem $|2|=n$ divides $|U(2^n-1)|=\phi(2^n-1)$. Remark: Thank you for letting me know about this fact. It's interesting!
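An empirical check for small $n$ (editorial addition), using sympy's totient:

```python
from sympy import totient

for n in range(1, 21):
    assert totient(2**n - 1) % n == 0
print([totient(2**n - 1) // n for n in range(1, 21)])   # the integers a_n = phi(2^n - 1)/n
```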
H: Integral equality when $f(x) \ge g(x)$ Given that $f(x) \ge g(x)$ for $x$ in $[0,1]$, I need to find an example of two different functions such that $$ \int_{0}^1 f(x)\,dx = \int_{0}^1 g(x)\,dx. $$ Edit: my answer was to take a function like $f(x) = 1$ and let $g$ be the same function except at one point of discontinuity, say $x=\frac{1}{2}$. But it seems too obvious for an exam answer... AI: Hint: This is not possible for two continuous functions (why?). So at least one of the functions needs to be discontinuous at at least one point. Hint 2: For the functions to be different, they don't need to be different at many points.
H: Ergodic action of a group What does it mean and how is it defined if the action of a group is meant to be ergodic? Thank you for your replies! AI: Here is a context of group action where ergodicity arises naturally. As pointed out by Martin, the definition given in the grey box applies to more general situations where the transformations are not required to be measure-preserving, and the group is not countable discrete. Take a probability measure space $(X,\mathcal{A},\mu)$ and let a countable discrete group $G$ act on it by measure-preserving transformations. First, this means that each $g\in G$ induces a measurable map $g:X\longrightarrow X$ such that $\mu(g^{-1}(A))=\mu (A)$ for every measurable set $A\in\mathcal A$. It is standard to write $g\cdot x$ for $g(x)$. Second, we have a group action: the identity of $G$ induces the identity on $X$ and $g\cdot (g'\cdot x)=(gg')\cdot x$ for every $g,g'\in G$ and $x\in X$. The action $G\curvearrowright(X,\mathcal A,\mu)$ is called ergodic if $$ g\cdot A=A\quad \forall g\in G\quad\Rightarrow\quad \mu(A)=0\mbox{ or } 1. $$ That is, up to measure $0$ sets, the only invariant measurable sets under the action of $G$ are $\emptyset$ and $X$. Remarks: 1) As pointed out by Stéphane Laurent, the particular case $G=\mathbb{Z}$ corresponds to the action of a single measure-preserving transformation $T$. Then the action is ergodic if and only if $T$ is ergodic in the usual way. 2) By Koopman representation $u_g(f):=f\circ g^{-1}$, we get a unitary representation $g\longmapsto u_g$ of $G$ on $L^2(X)$. In particular, $u_g(1_A)=1_A\circ g^{-1}=1_{g(A)}$ for every $A$ measurable. So ergodicity reads: there are no notrivial projections $p\in L^\infty(X)$ such that $u_g( p)=p$ for every $g\in G$. Identifying $f\in L^\infty(X)$ with multiplication $m_f$ by $f$ on $L^2(X)$, we get in $B(L^2(X))$: $u_gm_fu_g^*=m_{f\circ g^{-1}}$. And now ergodicity reads: there are no nontrivial projections in $L^\infty(X)\subseteq B(L^2(X))$ which commute with $G\subseteq B(L^2(X))$. Example: take a standard Borel space $(X,\mathcal{B},\mu)$ equipped with a $\sigma$-finite probability measure, i.e. a standard measure space. Then every essentially free ergodic group action $G\curvearrowright(X,\mathcal B,\mu)$ by an infinite countable discrete group gives rise to a $\rm{II}_1$ factor von Neumann algebra $L^\infty(X)\rtimes G$ by the so-called group-measure space construction of von Neumann. Whenever $G$ is amenable and $X\simeq [0,1]$, we get this way, up to isomorphism, the unique (by Connes' deep work) injective $\rm{II}_1$ factor $\mathcal{R}$.
H: How to integrate a binomial expression without expanding it before? Let $(3-x^2)^3$ be a binomial expression. What is the integral of such expression? First I tried integration by substitution, because there is a composition of two functions. But$\displaystyle\frac{d}{dx}(3-x^2)=2x$ and I learned that this method only works if the integrand has the derivative of the inner function, multiplied by a constant. Then I used what I learned about power series: $$(3-x^2)^3=\binom{3}{0}3^3+\binom{3}{1}3^2(-x^2)+\binom{3}{2}3(-x^2)^2+\binom{3}{3}(-x^2)^3$$ And so, \begin{align} (3-x^2)^3&=27-27x^2+9x^4-x^6\\ \int(3-x^2)^3\,\mathrm dx&=\int(27-27x^2+9x^4-x^6) dx \end{align} Finally: $$\displaystyle\int(3-x^2)^3dx=27x-9x^3+\frac{9}{5}x^5-\frac{1}{7}x^7+C$$ But imagine that the power is $10$, or maybe $20$? There is any way to integrate this kind of expression without expand it? Thanks. AI: Let us generalise a bit. The integral of $(a+bx^n)^m$ is given by (just expanding and integrating term-wise): $$\int(a+bx^n)^m\,\mathrm dx = \sum_{k=0}^m \binom m ka^{m-k}b^k\frac{x^{kn+1}}{kn+1}$$ Now I have not ever seen a nice form for a general expression like the sum on the right-hand side. In fact, when I just look at it (for example when handed a similar sum in a different context), I usually think nothing except "Hey, that's the integral of $(a+bx^n)^m$". For many usages of sums like the one on the right, the integral representation is actually a way of simplifying the problem; if there were nice closed forms currently (widely) known, the mathematical practice would probably en masse use those instead of the integral representation. This all provides some circumstantial evidence to support the gut feeling that there probably is no general way.
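A symbolic spot-check (editorial addition) that the term-wise formula agrees with direct integration for one concrete instance, $(3-x^2)^{10}$:

```python
from sympy import symbols, integrate, binomial, expand, simplify

x = symbols('x')
a, b, n, m = 3, -1, 2, 10   # (a + b*x**n)**m with a = 3, b = -1, n = 2, m = 10

direct = integrate((a + b*x**n)**m, x)
formula = sum(binomial(m, k) * a**(m - k) * b**k * x**(k*n + 1) / (k*n + 1)
              for k in range(m + 1))

print(simplify(expand(direct - formula)))   # 0 (both antiderivatives use integration constant 0)
```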
H: What do you use for your basis step when its domain is all integers? Example: For all integers $n$, $4(n^2 + n + 1) - 3n^2$ is a perfect square. What should I use? Negative infinity? I know you can use a direct proof, but what if there's an induction question with the same domain? AI: You need to use induction twice, once going down and once going up. You can choose any (finite) integer as the base step.
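A direct check over a symmetric range of integers (editorial addition); as a side remark, the expression simplifies to $n^2+4n+4=(n+2)^2$, which is what the assertion below also confirms:

```python
from math import isqrt

def is_perfect_square(v):
    return v >= 0 and isqrt(v) ** 2 == v

for n in range(-1000, 1001):
    value = 4 * (n * n + n + 1) - 3 * n * n
    assert is_perfect_square(value)
    assert value == (n + 2) ** 2        # the underlying algebraic identity
print("4(n^2 + n + 1) - 3n^2 is a perfect square for all n in [-1000, 1000]")
```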
H: Find the fixed points We have $x_{n+1} = ax_n +b$ with $x_0$ given. We have to find the fixed points of this function, and decide for which values of $a$ they are stable. So I looked it up and found that a fixed point is a point for which $f(x) = x$, so basically intersections of a certain function with the line $y=x$. My question is: how can we use this for the above function? Does $x_{n+1}$ have to equal $x_n$? And how would we be able to find these fixed points? AI: Your system needs to remain at the same point after an additional iteration, i.e. $x_{n}=x_{n+1}$. $x_{n+1} = ax_{n}+b$ hence becomes $x_{n}=ax_{n}+b$, which yields $x_{n}-ax_{n}=b$, then $x_{n}(1-a)=b$ and finally $x_{n}=\frac{b}{1-a}$. For $b=0$ we get $x_{n+1} = ax_n$, which is feasible for any $a$, with the fixed point coming to lie at $0$. Otherwise this is feasible for any $a\neq 1$ (taking $a=1$ would make the system $x_{n+1}=x_{n}+b$, in which $b$ shifts the value at every step, so no fixed point exists). This also explains the result, which means in simple terms that at the fixed point the amount by which the factor $a$ changes $x$ is exactly compensated by $b$. Again, this relates to the stability. A BIBO stable system will remain bounded on a bounded input $b$ (or returns to zero for asymptotic stability). This is achieved by limiting the eigenvalues of the discrete system (here $a$) to be strictly smaller than $1$ in magnitude: $|a|<1$.
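A tiny numerical demonstration (editorial addition) of the fixed point $b/(1-a)$ and of why $|a|<1$ matters for stability:

```python
def iterate(a, b, x0, steps=100):
    x = x0
    for _ in range(steps):
        x = a * x + b
    return x

a, b = 0.5, 3.0
print(iterate(a, b, x0=10.0), b / (1 - a))    # both about 6.0: |a| < 1, iterates converge to b/(1-a)

a = 1.5
print(iterate(a, b, x0=b / (1 - a) + 1e-9))   # |a| > 1: a tiny perturbation of the fixed point blows up
```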
H: Show that the subspace A is the whole Hilbert space H "Let $A$ be a subset in a Hilbert space $H$, such that $x\in H$ and $x \perp A$ imply $x = 0$. (1) Show that the closed subspace that is generated by $A$ is $H$. (2) Let $f(x)$ be a square summable function on $\Bbb R$, such that its Fourier transform is almost everywhere (strictly) positive. Let $B$ be the set of all functions of the form $f(x+a)$, with $a\in R$ arbitrary. Show that the closed subspace generated by $B$ is the whole $L^2(\Bbb R)$." (1) feels trivial but I am not sure how to formulate, maybe something like this: since x ⊥ A => x = 0 which says that the only vector ⊥ to A in H is the null vector => ... => every vector i H can be described with vectors in A.. something.. something.. When trying to do (2) I got a hint: apply the above result (1) to the set A given by the Fourier transform of set B. Sincerely Ingvar AI: For 1., one can use the projection on the closure of the subspace generated by $A$. For 2., we can use result of 1.. Let $f$ be a square integrable function whose Fourier transform is almost everywhere positive. Let $g\in L^2$ be orthogonal to $f(\cdot +a)=:\tau_a(f)$ for all real number $a$. Then using Plancherel's formula, we get $$\int_{\Bbb R}\widehat f(x)\widehat{\tau_ag}(x)dx=0.$$ Since $\widehat{\tau_ag}(x)=e^{iax}\widehat g(x)$, we get that for each $a$, $$\int_{\Bbb R}\widehat f(x)\widehat{g}(x)e^{iax}dx=0,$$ hence the Fourier transform of the integrable function $\widehat f\widehat g$ vanishes everywhere. This proves that $\widehat f\widehat g=0$ and the assumption on the Fourier transform of $f$ gives $\widehat g=0$. Conclude.
H: Non-convergence of sequence and existence of subsequence with special property If the sequence $(x_{n})$ doesn't converge to $x_0 \in X$ then there exists a subsequence $(x_{n_k})$ such that no subsequence of that subsequence converges to $x_0$. (Or: if the sequence $(x_{n})$ doesn't converge to $x_0 \in X$ then there exists a subsequence $(x_{n_k})$ such that $x_0$ is not an accumulation point of that subsequence.) (I hope I didn't confuse anyone; I'm translating this from Croatian.) My idea for a solution: Suppose that for every subsequence there is a subsequence that converges to $x_0$. That means that $x_0$ is an accumulation point for every subsequence of $(x_n)$. And now that would mean it is a limit point. And I'm stuck and I think I'm going in circles with this one... AI: One way to approach this is to note that the fact that $(x_n)$ does not converge to $x_0$ means there is some open set $U$ containing $x_0$ and arbitrarily large $i$ so that $x_i$ is not in $U$. You could use this to construct a subsequence $(x_{n_k})$ that does not intersect $U$ at all, and then there is no way any subsequence of $(x_{n_k})$ could converge to $x_0$. You could also use the above paragraph to get a subsequence that gives you the contradiction you are looking for in your approach, but it is probably cleaner to just prove this directly.
H: $x^4+5y^4=z^2$ doesn't have an integer solution. I hope to show that $x^4+5y^4=z^2$ doesn't have an integer solution. You may guess that you can solve it using the infinite descent procedure. I tried it but I had a trouble in solving it. What I did : Observe that we can say $(x,y)$=1. $$-x^4+z^2=5y^4$$ $$-(x^2-z)(x^2+z)=5y^4$$ Set $x^2-z=u$ and $x^2+z=v$. But since we don't know $(x,z)$, we cannot tell that $(u,v)=1,2$, so failed. Another : I observed that $$k(x^2+ay^2)^2+k(x^2-ay^2)^2=rz^2$$ Then we get $2k=r$, $2ka^2=5r$, $a^2=5$, so failed because $a$ is not an integer. AI: If $x=t,y=2t,z=9t^2$ then $x^4+5y^4=81t^4=z^2.$ The parameter $t$ here just serves to remind that if $(x,y,z)$ is a solution so is $(tx,ty,t^2z)$. So one has families of solutions, the generator of the above being $(1,2,9)$. If one were to look for all solutions, finding the generators would be the task. Update: I searched using maple for $x,y \le 1000$ and found the generator $(1,2,9)$ above along with another generator $(79,36,6881).$ Likely someone who knows more about elliptic curves than I do could show there are only finitely many different families (distinct generators with pairwise gcd 1) and maybe even find all such generators.
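A small search (editorial addition) confirming that integer solutions exist, including the second primitive generator $(79,36,6881)$ mentioned in the answer:

```python
from math import isqrt, gcd

solutions = []
for x in range(1, 200):
    for y in range(1, 200):
        z2 = x**4 + 5 * y**4
        z = isqrt(z2)
        if z * z == z2:
            solutions.append((x, y, z))

primitive = [(x, y, z) for (x, y, z) in solutions if gcd(x, y) == 1]
print(primitive)   # [(1, 2, 9), (79, 36, 6881)] in this range
```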
H: Closed subset of $\;C([0,1])$ $$\text{The set}\; A=\left\{x: \forall t\in[0,1]\ |x(t)|\leq \frac{t^2}{2}+1\right\}\;\;\text{is closed in}\;\, C\left([0,1]\right).$$ My proof: Let $\epsilon >0$ and let $(x_n)\subset A$, $x_n\rightarrow x_0$. Then there is some $N$ such that $\sup_{[0,1]}|x_n(t)-x_0(t)|\leq \epsilon$ for $n\geq N$, and we have $|x_0(t)|\leq|x_0(t)-x_N(t)|+|x_N(t)|\leq\epsilon+\frac{t^2}{2}+1.$ Since $\epsilon>0$ was arbitrary, $|x_0(t)|\leq \frac{t^2}{2}+1$ for all $t\in[0,1]$. Is the proof correct? Thank you. AI: Yes, it is correct. An "alternative" proof would consist in writing $A$ as the intersection of the closed sets $F_t:=\{x:|x(t)|\leqslant \frac{t^2}2+1\}$. It also works if we replace $\frac{t^2}2+1$ by any continuous function.
H: A problem on sequences and series Let $(x_n)$ be a sequence of real numbers such that $\lim x_n =0$. Prove that there exists a subsequence $(x_{n_k})$ of $(x_n)$ such that $\sum_{k=0}^\infty 2^{k}x_{n_k}$ converges and $\sum_{k=0}^\infty| 2^{k}x_{n_k}|\leq 1$. I didn't have any idea on how to prove or approach it, so any help would be appreciated. AI: There exists $k_1\in \mathbb N$ such that $\displaystyle |x_{m}|<\frac{1}{2^2}$ for all $m\ge k_1$; choose $n_1=k_1$. There exists $k_2\in \mathbb N$ such that $\displaystyle |x_{m}|<\frac{1}{2^4}$ for all $m\ge k_2$; clearly we may take $k_2\ge k_1$, and we choose $n_2=k_2+1$, so $n_2>n_1$. Continuing in this way, choose (by induction) a subsequence $\{x_{n_k}\}$ with $\displaystyle |x_{n_k}|<\frac{1}{2^{2k}}$. Then $\displaystyle |x_{n_k}|<\frac{1}{2^{2k}}\Rightarrow 2^k|x_{n_k}|<\frac{1}{2^{k}}$, so the series $\displaystyle \sum_{k=1}^{\infty}2^{k}x_{n_k}$ converges absolutely (by the comparison test), and $\displaystyle \sum_{k=1}^{\infty}2^{k}|x_{n_k}|\le \sum_{k=1}^{\infty}\frac{1}{2^{k}}=1$.
H: Terminological question on "action factors through" What does it mean that the action of a group on some space factors through the action of another one? AI: If the other group is a quotient group of the first $G$, it means that the action can be written as composition of the canonical projection (group morphism) from $G$ to the quotient, followed by the action of the quotient (group morphism to a general linear group). It implies that the kernel of the projection acts trivially.
H: Proving a function is continuous on all irrational numbers Let $\langle r_n\rangle$ be an enumeration of the set $\mathbb Q$ of rational numbers such that $r_n \neq r_m\,$ if $\,n\neq m.$ $$\text{Define}\; f: \mathbb R \to \mathbb R\;\text{by}\;\displaystyle f(x) = \sum_{r_n \leq x} 1/2^n,\;x\in \mathbb R.$$ Prove that $f$ is continuous at each point of $\mathbb Q^c$ and discontinuous at each point of $\mathbb Q$. I find this question very challenging and have no idea even how to start off with the proof. Please suggest a proof or any hint. AI: This is a part of my answer here, but it should completely answer your questions too. I use the notation $$\lim_{y\to x^{+}}f(y)=f(x^+)$$ $$\lim_{y\to x^{-}}f(y)=f(x^-)$$ There is a very nice way of constructing, given a sequence $\{x_n\}$ of real numbers, a function which is continuous everywhere except the elements of $\{x_n\}$ [That is, discontinuous on a countable set $A\in\Bbb R$]. Let $\{c_n\}$ by any nonnegative summable sequence [that is $\sum\limits_{n\geq 0} c_n$ exists finitely], and let $$s(x)=\sum_{x_n<x} c_n$$ What we do is sum through the indices that satisfy the said inequality. Because of absolute convergence, order is irrelevant. The function is monotone increasing because the terms are nonnegative, and $s$ is discontinuous at each $x_n$ because $$s(x_n^+)-s(x_n^-)=c_n$$ However, it is continuous at any other $x$: see xzyzyz's proof with the particular case $c_n=n^{-2}$. In fact, this function is lower continous, in the sense $\lim\limits_{y\to x^{-}}f(y)=f(x^-)=f(x)$ for any value of $x$. If we had used $x_n\leq x$, it would be upper continuous, but still discontinuous at the $x_n$. To see the function has the said jumps, note that for $h>0$, we have $$\begin{align} s(x_n^+)-s(x_n^-)&=\\ \lim_{h\to 0^+} s(x_k+h)-s(x_k-h)&=\lim_{h\to 0^+}\sum_{x_n<x_k+h} c_n-\sum_{x_n<x_k-h}c_n\\&=\lim_{h\to 0^+}\sum_{x_k-h\leq x_n<x_k+h} c_n\end{align}$$ and we can take $\delta$ so small that whenever $0<h<\delta$, for any given $x_m\neq x_k$, $x_m\notin [x_k-\delta,x_k+\delta)$, so the only term that will remain will be $c_k$, as desired.
H: Orthonormalize the set of functions {1,x} I'm having trouble while doing this exercise, it says: In the vector space of the continuous functions in [0,1] with the inner product : $$\langle f,g \rangle = \int_{0}^{1}f(x)g(x)dx$$ a) Orthonormalize the set of functions $\left\{1,x\right\}$ So I assumed $B$ as a base of the vector space $ \prod^{1} $ (the canonical base) $B = \left\{(1,0),(0,1)\right\} $ and applied Gram-Schmidt method to orthonormalize the functions. The first one it's already normalized for that inner product $\vec{u_{1}} = \frac{\vec{v_{1}}}{\|\vec{v_{1}}\|} = (1,0)$ So, for the second one $\vec{u_{2}} = \frac{\vec{v_{2}}-\langle \vec{v_{2}},\vec{u_{1}}\rangle \vec{u_{1}}}{\|\vec{v_{2}}-\langle \vec{v_{2}},\vec{u_{1}}\rangle \vec{u_{1}}\|} = \frac{(0,1)-\left(\frac{1}{2},0\right)}{\|\left(-\frac{1}{2},1\right)\|}=(-6,12)$ But then $\|\vec{u_{2}}\|=\sqrt{\langle \vec{u_{2}},\vec{u_{2}}\rangle} = \sqrt{12} \neq 1 $ ; so $\vec{u_{2}}$ is not normal. I don't know whether I'm all wrong assuming the base or doing it wrong with the Gram-Schmidt method, but I've been stuck for more than an hour with this :$. AI: You don't need to use the standard basis, you already have a basis of $\{1,x\}$ for the space you're interested in, namely $Span(\{1,x\})$. Use Gram-Schmidt on those vectors directly. As you point out $\langle 1,1\rangle=1$ already, so you have one orthonormal basis element already. For the other one, take $x-\frac{\langle x,1\rangle}{\langle 1,1\rangle}1=x-\langle x,1\rangle=x-\int_0^1xdx=x-0.5x^2|_0^1=x-\frac{1}{2}$. Now $x-\frac{1}{2}$ is orthogonal to $1$, and you need to normalize it to make it length $1$. We calculate $\langle x-0.5,x-0.5\rangle=\int_0^1 (x-0.5)^2dx=\int_0^1x^2-x+0.25dx=\frac{1}{3}x^3-\frac{1}{2}x^2+0.25x|_0^1=\frac{1}{3}-\frac{1}{2}+\frac{1}{4}=\frac{1}{12}$. Putting it all together, $\{1,x\sqrt{12}-\sqrt{3}\}$ is an orthonormal basis for the desired space.
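As an independent check of the final answer (my own sketch, not from the original post), one can verify with sympy that $\{1,\ \sqrt{12}\,x-\sqrt{3}\}$ really is orthonormal for this inner product:

```python
import sympy as sp

x = sp.symbols('x')

def inner(f, g):
    """Inner product <f, g> = integral of f*g over [0, 1]."""
    return sp.integrate(f * g, (x, 0, 1))

f1 = sp.Integer(1)
f2 = sp.sqrt(12) * x - sp.sqrt(3)      # equals 2*sqrt(3)*(x - 1/2)

print(inner(f1, f1))                   # 1 : f1 is normalized
print(sp.simplify(inner(f1, f2)))      # 0 : f1 and f2 are orthogonal
print(sp.simplify(inner(f2, f2)))      # 1 : f2 is normalized
```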
H: How can I solve this Cauchy-Euler equation? My problem is the following Cauchy-Euler equation: $$x^{3}y^{\prime \prime \prime} +xy^{\prime}-y=0$$ My approach: this is a differential equation, so I was looking for a solution with the method of undetermined coefficients, but honestly, I failed. AI: Besides the other answer, you may substitute $y=x^m$, where $m$ is a constant, and then build the general solution $y_c(x)$ from the resulting roots. Let's do that: $$y=x^m\to y'=mx^{m-1},~~y'''=m(m-1)(m-2)x^{m-3}$$ So $$x^3y'''+xy'-y=0\Longrightarrow x^m\bigl(m(m-1)(m-2)+m-1\bigr)=0$$ If $x\neq 0$ then $$(m-1)^3=0,$$ a triple root $m=1$. This means that $$y_c(x)=x\left(c_1+c_2\ln x+c_3\ln^2 x\right), ~x>0,$$ with arbitrary constants $c_1,c_2,c_3$.
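As a sanity check on the triple root $m=1$ (an illustrative sketch of mine, not part of the answer), one can verify symbolically that $x$, $x\ln x$ and $x\ln^2 x$ each satisfy the equation:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

for y in (x, x * sp.log(x), x * sp.log(x)**2):
    # plug the candidate solution into x^3 y''' + x y' - y
    residual = x**3 * sp.diff(y, x, 3) + x * sp.diff(y, x) - y
    print(y, '->', sp.simplify(residual))   # each residual simplifies to 0
```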
H: How to find the area of a shape whose sides are made up of line or arc of circle? The arc and lines form the sides of the shape. The sides touch each other at end point in such a way that each end point can touch only one shape and the shape is closed eg:- line--arc--line--line--line--arc--arc AI: Hint: Make them all lines and get that area, and then add the areas of the parts of circles cut off by the circle arcs by lines where that circle part was cut off. If a circle arc bends inward, subtract that area.
H: Definition of a monoid: clarification needed I'm only in high school, so excuse my lack of familiarity with most of these terms! A monoid is defined as "an algebraic structure with a single associative binary operation and identity element." A binary operation, to my understanding, is something like addition, subtraction, multiplication, division i.e. it involves 2 members of a set, a single operation, and the resulting third member within that set. And an identity element is a special type of element of a set, with respect to a binary operation on that set, which leaves other elements unchanged when combined with things. Examples are "$0$" as an additive identity and "$1$" as a multiplicative identity. How does this definition correspond to a category with a single object? What are some examples of monoids? AI: First definition (algebraic): A monoid is a pair $(M,b)$, where $M$ is a set (called the underlying set of the monoid) and $b\colon M\times M\to M$ is a mapping (called the binary operation of the monoid; for $m_1,m_2\in M$ denote $b(m_1,m_2)=m_1\bullet m_2$), which satisfy the following two properties: 1). for any $m_1,m_2,m_3\in M$ the following equality holds (associativity): $$(m_1\bullet m_2)\bullet m_3=m_1\bullet(m_2\bullet m_3).$$ 2). there exists such $e\in M$ (called the identity of the monoid), that for any $m\in M$ the following equalities hold: $e\bullet m=m\bullet e=m$. It is a standard definition of a monoid. There are a lot of examples of monoids; for example, any group is a monoid. However, there are monoids, which are not groups. You are right, $(\mathbb{Z},+)$, $(\mathbb{Z},\cdot)$ are monoids, but $(\mathbb{Z},-)$ is not (because, for example, $1-(1-1)\ne(1-1)-1$). If you are familiar with category theory, then you can get a lot of natural examples in different categories. The reason is following: Let $A$ be a category, $a\in A$ be its object. Then any morphism $f\colon a\to a$ is called an endomorphism of $a$. Thus we can consider the set of all endomorphisms of the object $a$, denote it by $end(a)$. Note, that we can composite any two morphisms in $end(a)$, therefore we get a binary operation on $end(a)$! $$ \bullet\colon end(a)\times end(a)\to end(a);\qquad f\bullet g=f\circ g. $$ It is a monoid by the definition of category. Now you can take any category, for example $\mathbf{Set}$, take any its object -- for example, $\mathbb{N}$ (which is considered without its algebraic structure), -- and you get the monoid $\mathbb{N}^{\mathbb{N}}$ of all functions $f\colon\mathbb{N}\to\mathbb{N}$, which, of course, is not a group (check it). Now it is easy to believe in the following definition of a monoid: Second definition (category-theoretic): Monoid is a category with one object. Indeed, denote by $x$ its single object, then $end(x)$ is a corresponding monoid. Conversely, if you have a monoid $(M,b)$, you can define a category $\mathbf{M}$ with single object, such that $Arr(\mathbf{M})=M$, and composition defined by the following rule: $$ \forall f,g\in M:\;f\circ g=f\bullet g. $$ But it was, of course, intuitive reasoning. In order to make an exact statement, I have to give a few more definitions: 1). Let $(M,\bullet_M)$ and $(N,\bullet_N)$ be monoids. Monoid homomorphism is a mapping $f\colon M\to N$, such that for all $m_1,m_2\in M$ the equalities $f(m_1\bullet_M m_2)=f(m_1)\bullet_N f(m_2)$ and $f(e_M)=e_N$ hold. It's easy to check that all small monoids and their homomorphisms form a category, called $\mathbf{Mon}$. 2). 
Let's denote by $\mathbf{S}$ the full subcategory of $\mathbf{Cat}$, which objects are categories with fixed single object. Now I can formulate the exact statement (equivalence of two definitions): Statement: The category $\mathbf{Mon}$ of monoids is isomorphic to the category $\mathbf{S}$.
H: Curiosity about the area between the arclength and the connected line For example, $f(x)=x^2$ I'm curious if it's possible to find the area, for example, between the function $f(x)$ and the connected line between $f(1)$ and $f(2)$? AI: Call the line connecting $f(1)$ and $f(2)$ $g(x)$: $$g(x)-f(1)=\frac{f(2)-f(1)}{2-1}(x-1)$$ $$g(x)=3(x-1)+1$$ $$g(x)=3x-2$$ The area you're looking for can then be found by integration: $$\int_1^2(g(x)-f(x))dx$$ It is $g(x) - f(x)$ rather than $f(x) - g(x)$ because for all $x\in[1,2]$, $g(x)\ge{f(x)}$.
H: Infinite average queueing delay for M/M/1 queues According to queueing theory, the average queueing delay for an M/M/1 queue can be calculated as $\frac{1}{\mu-\lambda}$, where $\mu$ is the average service rate and $\lambda$ is the average arrival rate. Is there an intuitive explanation for what happens at full utilization, i.e. for an arrival rate equal to the service rate? Why does the expected delay become infinite? Or is that formula not applicable for $\mu = \lambda$? AI: The average queueing delay for an M/M/1 queue is difficult to define when $\lambda=\mu$ since the length of the queue has no stationary distribution in this case. Recall that when $\lambda\lt\mu$, there exists a stationary distribution $\pi$ for the length of the queue, that the distribution of the length of the queue converges to $\pi$ for every initial distribution, and that $\pi$ also describes the ergodic averages of the length of the queue. None of this survives at $\lambda=\mu$. What is true however is that $\pi\to\infty$ when $\lambda\to\mu$, in the following sense. For every $n$ finite, $\pi_{\lambda,\mu}([0,n])\to0$ when $\lambda\to\mu$, $\lambda\lt\mu$. In this sense the typical length of the queue becomes large in this limit.
H: Recursively Defined Functions I am taking a summer class in discrete math and have done very well up till now. I am nervous because I have reviewed the lecture slides and practice problems but I still don't really understand what to do. The problem example I was given is: Suppose $f$ is defined by: \begin{align} f(0) &= 0, \\ f(1) &= 1, \\ f(n + 2) &= f(n + 1) + 2\times f(n) \end{align} Question: Find $f(2), f(3), f(4), f(5), f(6)$ Now I understand that what I am doing is finding the function by plugging in the previous function, but I do not understand the $f(n+2)$. How do I interpret this? Does this mean that for $f(2)$ I plug in the result of $f(1)$ and add $2$ to it? Or do I plug in $0$ since $0+2 = 2$ and I am trying to find $f(2)$? AI: No. For $f(n+2)$ you first compute $n+2$, then apply $f$ to it. So, for $n=2$ we want $f(n+2)=f(4)$. In order to compute $f(2)$, we take $n=0$ in the equation to get $f(2)=f(1)+2 f(0)$. Then for $f(3)$, we take $n=1$ in the equation to get $f(3) = f(2)+2f(1)$.
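To make the indexing concrete (this is just an illustrative snippet, not from the original answer): $f(n+2)$ means "apply $f$ to the number $n+2$", so $f(2)$ comes from taking $n=0$. Unwinding the recursion in code gives the requested values:

```python
def f(n):
    """f(0) = 0, f(1) = 1, f(n+2) = f(n+1) + 2*f(n)."""
    if n == 0:
        return 0
    if n == 1:
        return 1
    return f(n - 1) + 2 * f(n - 2)

print([f(n) for n in range(2, 7)])  # [1, 3, 5, 11, 21]  i.e. f(2)..f(6)
```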
H: Simple random sampling In probability, why, when you do random sampling without replacement, is $$\operatorname{Cov}(X_i,X_j)=-\frac{\sigma^2}{N-1}?$$ Here $N$ is the population size and $\sigma^2$ is the variance of the population. AI: Because the random variables $X_i$ are identically distributed, the random variables $(X_i,X_j)$ are identically distributed and the sum of the $N$ random variables is deterministic. Thus, considering $c=\mathrm{cov}(X_i,X_j)$ and $\sigma^2=\mathrm{var}(X_i)$, one gets $$ 0=\mathrm{var}\left(\sum_{i=1}^NX_i\right)=\sum_{i=1}^N\mathrm{var}(X_i)+\sum_{i\ne j}\mathrm{cov}(X_i,X_j)=N\sigma^2+N(N-1)c. $$ Solving for $c$ gives $c=-\dfrac{\sigma^2}{N-1}$.
H: Does p(a,b,c)=p(a,b)p(a,c) hold when b and c are independent? I am reading through a thesis, it states that given $b$ and $c$ being independent: \begin{equation} p(a\mid b,c) := p(a\mid b) p(a\mid c) \end{equation} This would imply just using the definition of a conditional probability: \begin{equation} \frac{p(a,b,c)}{p(b,c)} = \frac{p(a,b)}{p(b)} \frac{p(a,c)}{p(c)} \end{equation} Now, assuming independence between $b$ and $c$ the denominator $p(b,c)=p(b)p(c)$ can be crossed out, and the following has to hold: \begin{equation} p(a,b,c) = p(a,b) p(a,c) \end{equation} Does this actually hold if $b$ and $c$ are independent? If so, how? And if not, how would this constraint be called? It's not pairwise independence. It is between a pair of two variables out of a set of three. AI: Does this actually hold if $b$ and $c$ are independent? No. A standard counterexample is when $b$ and $c$ are i.i.d. Bernoulli $\pm1$ and $a=bc$, then $p(a=-,b=+,c=+)=0$ while $p(a=-,b=+)=p(a=-,c=+)=\frac14$. And if not, how would this constraint be called? Dunno. The constraint $p(a,b,c)p(a)=p(a,b)p(a,c)$ is the conditional independence of $b$ and $c$ conditionally on $a$.
H: Do proper dense subgroups of the real numbers have uncountable index Just what it says on the tin. Let $G$ be a dense subgroup of $\mathbb{R}$; assume that $G \neq \mathbb{R}$. I know that the index of $G$ in $\mathbb{R}$ has to be infinite (since any subgroup of $\mathbb{C}$ of finite index in $\mathbb{C}$ has to have index 1 or 2); does it have to be uncountable, though? All the examples I can readily come up with (e.g. $\mathbb{Q}$, $\mathbb{Z}[\sqrt{2}]$, ...) have uncountable index in $\mathbb{R}$. Thanks! AI: No: $\mathbb{R}$ is a $\mathbb{Q}$-vector space. Choose a basis and drop finitely many basis elements to find a $\mathbb{Q}$-subspace $G$ of finite codimension. Clearly, $G$ has countable index in $\mathbb{R}$ and is dense. Added: It can't be done without some form of the axiom of choice. The statement $\mathbb{R} \cong \mathbb{R} \oplus \mathbb{Q}$ is form number 252 in the book Consequences of the Axiom of Choice, by Howard and Rubin, as you can check on the homepage of the book. C. J. Ash explains in A consequence of the axiom of choice, Journal of the Australian Mathematical Society (Series A), 19 (1975) 306-308, that $\mathbb{R} \cong \mathbb{R} \oplus \mathbb{Q}$ implies the existence of non-measurable sets in $\mathbb{R}$. This in turn is known not to be provable from ZF alone. If you want to know more about this, you can consult Herrlich, The Axiom of Choice, chapter 5, especially the pages following Diagram 5.10.
H: Relations between $\|x+a\|$ and $\|x-a\|$ in a normed linear space. 1) Can it happen that $\|x+a\|=\|x-a\|=\|x\|+\|a\|$ when $a\ne0$? 2) How large can $\min(\|x+a\|,\|x-a\|)/\|x\|$ be when $\|x\|\ge \|a\|$? (For an inner-product space, the answers are no and $\sqrt{2}$ I think.) AI: 1) This is trivially true if $x=0$ or $a=0$. So you want $x\neq 0$ and $a\neq 0$. So what you are asking is equivalent to $$ \exists?\; x\neq 0, y\neq 0\qquad\Big\| \frac{x+y}{2}\Big\|=\Big\| \frac{x-y}{2}\Big\|=\frac{\|x\|+\|y\|}{2}. $$ Example: take $x=(1,1)$ and $y=(1,-1)$ in $\mathbb{R}^2$ with the $\ell^\infty$ norm $\|(x_1,x_2)\|=\max\{|x_1|,|x_2|\}$. Inner-product space: recall the parallelogram law $\|x+y\|^2+\|x-y\|^2=2\|x\|^2+2\|y\|^2$ which characterizes norms coming from an inner product space. So if $x,y$ satisfy your condition, we get $2(\|x\|+\|y\|)^2=2\|x\|^2+2\|y\|^2$ whence $\|x\|\|y\|=0$, i.e. $x=0$ or $y=0$. So the answer is indeed no. Minkowski: more generally, if $1<p<\infty$, this is impossible in an $L^p$ space. Indeed, by the equality case of the Minkowski inequality, if $f\neq 0$, $\|f+g\|=\|f\|+\|g\|$ forces $f=t g$ for $t>0$, and $\|f-g\|=\|f\|+\|g\|$ yields $f=-sg$ for $s> 0$. Thus $(s+t)g=0$ yields $g=0$. 2) For your second question, it suffices to restrict to the span of $x$ and $y$. Draw a picture of the two parallelograms one can build from $x$ and $y$. Inner-product space: without loss of generality, $(x,y)\geq 0$ and $\|x-y\|$ is the minimum. So for $\|x\|$ and $\|y\|$ given, the maximum is reached for $(x,y)=0$ (in dimension $\geq 2$), in which case $$ \frac{\|x-y\|}{\|x\|}=\sqrt{\frac{\|x-y\|^2}{\|x\|^2}}=\sqrt{\frac{\|x\|^2+\|y\|^2}{\|x\|^2}}=\sqrt{1+\frac{\|y\|^2}{\|x\|^2}}. $$ So under the constraint $\|x\|\geq \|y\|$, the maximum of the quotient is indeed $\sqrt{2}$. General normed vector space: one can see by the triangle inequality that $$ \frac{\min\{\|x-y\|,\|x+y\|\}}{\|x\|}\leq 2 \qquad \forall \|x\|\geq \|y\|>0. $$ The first example I gave shows that $2$ can be attained.
H: Identity with an alternating binomial sum: $\sum\limits_{i=1}^n(-1)^i{n-i \choose n-k} {k \choose i} = {n-k\choose k}$ I'm learning for the test and: Prove identity: $$\sum_{i=1}^n(-1)^i{n-i \choose n-k} {k \choose i} = {n-k\choose k}$$ for all $n,k\in \mathbb{N}$. This problem is just awful. I was trying to solve it for the last three hours and finally it has defeated me, so I'm writing for help. I think combinatorial proof is not possible because of $(-1)$ factor, which we can get rid of since: $(-1)^i {x\choose i}={i-1-x\choose i}$ but then we have negative upper number in binomial which is not good for interpretation. So I tried with generating functions but it also lead me to nowhere :( AI: Hints: For the $\dbinom{n-i}{n-k}$ I prefer the equivalent $\dbinom{n-i}{k-i}$. As mentioned in the OP, the minus signs perhaps discourage most combinatorial interpretation. However, they are familiar in one combinatorial setting, Inclusion/Exclusion. We have $n$ apples, of which $k$ are bad. We find the number of ways to choose $k$ good apples. Of course this is $\dbinom{n-k}{k}$. But let us count it another way. We will do it by Inclusion/Exclusion. Our first estimate ($i=0$) is $\binom{n-0}{k-0}\binom{k}{0}$, a fancy way of saying we choose $k$ from the $n$. This is wrong, we need to subtract all choices where there was a bad apple. As a first estimate of that, choose the bad apple (there are $\binom{k}{1}$ ways to do it, and for each choice, $\binom{n-1}{k-1}$ to choose the rest. But we have subtracted too much. Continue.
H: Is This Referring to the Existence of an Antiderivative? My text gave the following statement for the FTOC: Let $f$ be an integrable function on $[a, b]$, $F$ be the antiderivative of $f$. Then $$ \int_a^b f \, du = F(b) - F(a)$$ Why does the book need to specify that $F$ is an antiderivative of $f$? Is it suggesting that while $f$ is integrable, the derivative of the integral may not be $f$? I don't recall encountering such a situation in Calculus .... TY AI: Just because a function is integrable, doesn't mean it has an antiderivative. Consider, for example, the function $f$ such that $f(0) = 1$ and $f(x) = 0$ for all $x \ne 0$. In this case, the derivative of $\int_c^x f(t) dt$ is not equal to $f(x)$ at $x=0$.
H: Geometry: Measurements of right triangle inscribed in a circle So, I've got a triangle, ABC, inscribed in a circle--Thale's theorem states that it is therefore a right triangle. It is also given that $\overline{BA}$ is the diameter of the circle, and hence angle ACB is the right angle. That's all well and good. My question is... What is the probabilty that the measure of angle CAB is less than/equal to 60 degrees? My thinking: Since it's a right triangle, for CAB to be == 60 degrees, CBA would have to be == 30 degrees. This seems...perfectly plausible. But then what if CBA is 45 degrees...that's possible too... This is where I kind of ran into a dead end, because I couldn't figure out the probabilities of each scenario occurring. What am I missing? AI: Let's say we have our triangle $\triangle ABC$ inscribed in a circle, with $\overline{BA}$ its diameter. It is not too hard to show that the angle labeled $\theta$ in the diagram is equal to $2\angle CAB$. This is sometimes known as the central angle theorem. This means that, assuming a "random" triangle is chosen by randomly choosing the point $C$ on the top half of the circle, we are randomly choosing an angle $\theta$ between $0^\circ$ and $180^\circ$, and wanting it to come up less than or equal to $120^\circ$. The probability of this would be $\frac{2}{3}$.
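If a numerical sanity check helps (this is my own illustrative sketch, assuming, as the answer does, that the random triangle is produced by picking $C$ uniformly on the upper half of the circle), a short Monte Carlo run lands near $2/3$:

```python
import math
import random

def angle_CAB(t):
    """Angle at A between AB and AC, with A=(-1,0), B=(1,0), C=(cos t, sin t)."""
    ax, ay = -1.0, 0.0
    cx, cy = math.cos(t), math.sin(t)
    abx, aby = 1.0 - ax, 0.0 - ay        # vector A -> B
    acx, acy = cx - ax, cy - ay          # vector A -> C
    cos_angle = (abx * acx + aby * acy) / (math.hypot(abx, aby) * math.hypot(acx, acy))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

trials = 200_000
hits = sum(angle_CAB(random.uniform(0.0, math.pi)) <= 60.0 for _ in range(trials))
print(hits / trials)   # approximately 2/3
```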
H: Series $\{a_nb_n\}$ is not absolutely convergent Suppose that the sequence $\{a_n\}$ is unbounded. Prove that for some absolutely convergent series $\{b_n\}$, the series $\{a_nb_n\}$ is not absolutely convergent. Absolute convergence means we must choose $\{b_n\}$ such that series $|b_n|$ converges, but series $|a_nb_n|$ does not converge. I thought about choosing $b_n = 1/a_n$ which clearly makes $\{a_nb_n\}$ not absolutely convergent, but then I realized that even though $\{a_n\}$ is unbounded, it doesn't mean the series $|b_n|$ converges (for example, $\{a_n\} = 1,2,1,3,1,4,1,5,\ldots$.) EDIT: Okay I might have gotten it. Since $\{a_n\}$ is unbounded, for any $k$ we can find $|a_{i_k}|>2^k$. So we have the subsequence $a_{i_1},a_{i_2},\ldots$. Choose $b_{i_k} = 1/a_{i_k}$ for all $k$, and $b_i=0$ otherwise. Then the series $|b_n|$ converges by comparison test with the geometric series $\{1/2^k\}$. On the other hand, the series $|a_nb_n|$ has infinitely many terms equal to $1$, implying that it diverges. AI: Because $\{a_n\}$ is unbounded, for any $k$ there is an $n$ such that $|a_n|\geq 2^k$. Define $$b_n=\begin{cases}\frac{1}{|a_n|} & \text{if there exists some }k\text{ such that }n\text{ is the first index such that }|a_n|\geq 2^k,\\[0.2in] \;\;0 & \text{otherwise} \end{cases}$$ Thus $$\begin{align*} \sum_{n=1}^\infty|b_n|&=\frac{1}{|a_1|}+0+\cdots+0+\frac{1}{|a_2|}+\cdots\\\\\\ &\leq\frac{1}{2^{k_0}}+0+\cdots+0+\frac{1}{2^{k_1}}+\cdots\\\\\\ &\leq\sum_{n=1}^\infty\frac{1}{2^n}=1 \end{align*}$$ but $$\sum_{n=1}^\infty |a_nb_n|=1+0+\cdots+0+1+\cdots=\infty.$$
H: Would this be an effective way to study and comprehend text's? This is probably a grey area question but I am going to test the waters anyway. What I am thinking of doing would be to basically record myself doing examples from textbooks and making lessons for myself which I can then view later and possibly post them online. Sort of like the Feynman method I guess which is to pretend you are teaching someone. I have read other places that the best way to learn math is to teach it. Would this be a good way to get maximum comprehension from Math Text's? Since it would activate all the methods of learning and I would eventually develop a pretty extensive reference collection for later. AI: I think it's a fine idea: "Whatever works!" Seriously, the accountability that teaching inevitably incurs (and the accountability in terms of having to watch yourself teaching!) are both excellent ideas. Perhaps you can get both immediate feedback and "view-at-a-later-time" feedback by finding a friend, or a handful of friends, with whom you can experiment "teaching one another" - with or without videotaping. You can then learn from others' success (and failures), as well as your own. Both of the above: "teaching the class" (e.g., scheduled presentations to the class) and viewing yourself "teaching in action" are pedagogical strategies employed successfully by many instructors. You might also want to think about blogging: many successful students opt to "go public" with their work, through on-line blogging, which offers a similar sort of "accountability."
H: $3^{3n+1} < 2^{5n+6} $ for all non-negative integers $n$. Is my induction solution correct? Show using mathematical induction that $3^{3n+1} < 2^{5n+6} $ for all non-negative integers $n$. I'm not sure whether what I did at the end is valid. Basis step: for all non-negative integers, let $$P(n):\ 3^{3n+1} < 2^{5n+6}. $$ $$P(0):\ 3^{3(0) + 1} = 3 < 64 = 2^{5(0) + 6},$$ so $P(0)$ is true. Inductive Step: Assume: $3^{3k+1} < 2^{5k+6}$ Show: $3^{3(k+1)+1} < 2^{5(k+1)+6}$ $$ 3^{3(k+1)+1} = 3^{3k+4} = 3^3 \cdot 3^{3k+1}$$ By the inductive hypothesis, $$3^3 \cdot 3^{3k+1} < 3^3 \cdot 2^{5k+6} $$ This is the part where I'm not sure if you can do this in induction but it seems logically correct. $$3^3 \cdot 2^{5k+6} = 27 \cdot 2^{5k+6}$$ $$2^{5(k+1)+6} = 2^{5k+5+6}= 2^5 \cdot 2^{5k+6} = 32 \cdot 2^{5k+6}$$ I'm not sure whether it should be $\le$ or $<$ but I used '$<$' for $3^3 \cdot 2^{5k+6}<2^{5(k+1)+6} $ Therefore: $$3^{3(k+1)+1} < 3^3 \cdot 2^{5k+6}<2^{5(k+1)+6} $$ AI: Excellent work: You can conclude, since you have shown $$3^{3(k+1)+1}\;\; <\;\; 3^3 \cdot 2^{5k+6} \;\;{\color{blue}{\bf <}}\;\; 2^{5(k+1)+6}$$ or simply, $$3^{3(k+1)+1}\; {\color{blue}{\bf <}}\; 2^{5(k+1)+6}$$ as desired. The "blue" strict inequality is all you need. You have shown, prior to your conclusion, that $$3^{3(k+1)+1}\;\; <\;\; 3^3 \cdot 2^{5k+6}$$ and $$3^3\cdot 2^{5k+6} \;\;<\;\; 2^{5(k+1)+6}$$
H: Default metrics for $c_0$ and $l^{\infty}$ In my book there is a question like Let $\{a^{(k)}\}$ be a convergent sequence of points in $l^1$. Prove that $\{a^{(k)}\}$ converges in $l^{\infty}$. Now I don't see it mentioned anywhere what metric I should use for $l^{\infty}$. The book only mentions the metric $$d(\{a_n\},\{b_n\})=\sum_{n=1}^{\infty}|a_n-b_n|$$ for $l^1$, and the metric $$d(\{a_n\},\{b_n\})=\sqrt{\sum_{n=1}^{\infty}(a_n-b_n)^2}$$ for $l^2$. But it also talks about convergence for $c_0$ (the set of sequences converging to $0$) and $l^{\infty}$ (the set of bounded sequences). Are there default metrics for those two sets? AI: In both $l^\infty$ and its subspace $c_0$, the standard metric is the sup metric $d(a,b)=\sup |a_n-b_n|$.
H: Proving that $\exp(z_1+z_2) = \exp(z_1)\exp(z_2)$ with power series Probably a simple question, but I wonder about the following: To prove that $\exp(z_1+z_2) = \exp(z_1)\exp(z_2)$, I use : $$\exp(z_1+z_2) = \sum_{n=0}^{\infty}\sum_{k=0}^n\frac{1}{k!(n-k)!}z_1^kz_2^{n-k} $$ by using the binomial expansion. Now the property has been proved if this sum equals: $$\sum_{k=0}^{\infty}\sum_{m=0}^{\infty}\frac{z_1^{k}z_2^{m}}{k!m!}$$ Strange enough, I don't see exactly why this is true (although they use this without explanation in many books). I see that both sums contain "all terms" formally, but I would be glad if someone could show rigorously that both sums converge to the same complex number. Probably it is just a property of series that I'm missing here. AI: What you want to use is the Cauchy product or convolution. Note that $$\sum_{k\geqslant 0} a_k\cdot \sum_{k\geqslant 0} b_k=a_0b_0+(a_0b_1+a_1b_0)+(a_0b_2+a_1b_1+a_2b_0)+\cdots$$ and this is true whenever either series converges absolutely and the other converges. In your case, both powerseries converge absolutely, so you're more than safe. This can be written as $$\sum_{k\geqslant 0} a_k\cdot \sum_{k\geqslant 0} b_k=\sum_{k\geqslant 0}\left(\sum_{j+i=k}a_ib_j\right)$$ But we can also write this as $$\sum_{k\geqslant 0} a_k\cdot \sum_{k\geqslant 0} b_k=\sum_{k\geqslant 0}\sum_{n=0}^k a_n b_{k-n}$$ Now see what happens when $$a_k=\frac {z_1^k}{k!}$$ $$b_k=\frac{z_2^k}{k!}$$
H: Why does symmetry allow the assumption that $D(a,c) \ge D(b,c)$? In Kaplansky, Set Theory and Metric Spaces (pg. 69) there is the following theorem: Theorem: For any points $a,b,c$ in a metric space, we have: $$ |D(a,c) -D(b,c)| \le D(a,b)$$ The following proof is provided: Proof: Because of symmetry in [the above] between $a$ and $b$, we can assume $D(a,c) \ge D(b,c)$. Then $D(a,c) - D(b,c) \ge 0$, so that we can remove the absolute values in [the above]. We then find that [this equation] corresponds with the triangle inequality [in the definition of a metric space]. What I do not understand is how symmetry allows the assumption that $D(a,c) \ge D(b,c)$. An explanatin would be appreciated. AI: Since $d(a,c) \le d(a,b)+d(b,c)$, we have $d(a,c)-d(b,c) \le d(a,b)$. If you switch $a,b$ you get $d(b,c)-d(a,c) \le d(b,a) = d(a,b)$. It follows that $|d(a,c)-d(b,c)| \le | d(a,b) |$.
H: Linear recurence relation We have the linear recurrence relation $$x_{n+1} = \dfrac{3}{2}x_n - 20$$ with $n = 0,1,2...$ and $a,b$ being constants. Does this equation have a fixed point? Does the equation have a period 2 (a period 2 solution $x_0,x_1$ is a solution where you get from $x_0$ to $x_1$ after 1 iteration and after the next iteration you get back to $x_0$) fixed point? Is the fixed point attractive? I have absolutely no clue how to do this. My cheap school isn't willing to buy us books so we have to use horrible free pdfs from the internet and this one is particularly bad. The terminology is all over the place and as a result I have absolutely no clue on earth how to even begin this stuff. Can anyone at least point me in the correct direction? AI: A fixed point for this recurrence relation is a value $x_n$ such that $x_{n+1}=x_n$. So you can try to solve $x=\frac{3}{2}x-20$, this will give you all the fixed points. If a solution has period 2 then $x_n=x_{n+2}$ and $x_n\neq x_{n+1}$ so here you want to try to solve: $$ x_{n}=x_{n+2}=\frac{3}{2}x_{n+1}-20=\frac{3}{2}(\frac{3}{2}x_n-20)-20 $$ and check that any solution you get doesn't have period one. A fixed point $x_0$ is attractive if for any $y$ which is close enough to $x_0$, the sequence $y=y_0,y_1,y_2,y_3,\ldots$ converges to $x_0$ (I can explain what this means if you don't know). You can test whether a fixed point is attractive if you define a function $f(x)=$the recurrence relation (in your case $f(x)=\frac{3}{2}x-20$) then if $f$ is (continuously) differentiable in a neighborhood of $x_0$ and $|f'(x_0)|<1$, your fixed point is attractive. Note that this method only allows you to show that a point is attractive, I don't think it works to show a point is NOT attractive. Also ncmathsadist's approach works.
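To see these definitions in action (an illustrative sketch of mine, not part of the answer): the fixed point of $x_{n+1}=\tfrac32x_n-20$ is $x=\frac{-20}{1-3/2}=40$, and since the multiplier $\tfrac32$ exceeds $1$ in absolute value, that fixed point is not attractive — iterates started near $40$ drift away:

```python
def step(x):
    """One iteration of the recurrence x_{n+1} = 1.5*x_n - 20."""
    return 1.5 * x - 20

print(step(40.0))        # 40.0 -> 40 is a fixed point

x = 40.1                 # start slightly off the fixed point
for _ in range(10):
    x = step(x)
print(x)                 # has moved away from 40: |f'(40)| = 1.5 > 1, so the point is repelling
```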
H: How many of these four digit numbers are odd/even? For the following question: How many four-digit numbers can you form with the digits $1,2,3,4,5,6$ and $7$ if no digit is repeated? So, I did $P(7,4) = 840$ which is correct but then the question asks, how many of those numbers are odd and how many of them are even. The answer for odd is $480$ and even is $360$ but I have no clue as to how they arrived to that answer. Can someone please explain the process? Thanks! AI: We first count the number of ways to produce an even number. The last digit can be any of $2$, $4$, or $6$. So the last digit can be chosen in $3$ ways. For each such choice, the first digit can be chosen in $6$ ways. So there are $(3)(6)$ ways to choose the last digit, and then the first. For each of these $(3)(6)$ ways, there are $5$ ways to choose the second digit. So there are $(3)(6)(5)$ ways to choose the last, then the first, then the second. Finally, for each of these $(3)(6)(5)$ ways, there are $4$ ways to choose the third digit, for a total of $(3)(6)(5)(4)$. Similar reasoning shows that there are $(4)(6)(5)(4)$ odd numbers. Or else we can subtract the number of evens from $840$ to get the number of odds. Another way: (that I like less). There are $3$ ways to choose the last digit. Once we have chosen this, there are $6$ digits left. We must choose a $3$-digit number, with all digits distinct and chosen from these $6$, to put in front of the chosen last digit. This can be done in $P(6,3)$ ways, for a total of $(3)P(6,3)$.
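A quick brute-force enumeration (my own check, not part of the answer) reproduces the $480/360$ split:

```python
from itertools import permutations

numbers = [''.join(p) for p in permutations('1234567', 4)]
odd = sum(int(s) % 2 == 1 for s in numbers)
even = len(numbers) - odd
print(len(numbers), odd, even)   # 840 480 360
```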
H: Given a specific set $A$ we need to find $A^\perp$ Let $A$ be the set of functions $f\in L^2[0,1]$ that are equal to $0$ on $0<x<1/2$. Since $A$ is a subset of the Hilbert space $L^2[0,1]$, we need to find $A^\perp$. My attempt: if $g$ is an element of $A^\perp$, then $g$ can be any function on $0<x<1/2$, because every $f\in A$ vanishes there. However, on the interval $1/2<x<1$, $f$ has no restrictions, so we would need $(f,g) = \int_{1/2}^1 f\bar g = 0$ for every such $f$; we then need to find all $g$ for which this holds, and that's where I need help. AI: First, let's study the restriction of $A$ to $x\in [1/2,1]$. Clearly, $A\big|_{x\in [1/2,1]}=L^2([1/2,1])$, hence $\left(A\big|_{x\in [1/2,1]}\right)^\bot=\{0\}$. Second, as you already mentioned, if $g\in A^\bot$, then $g\big|_{x\in[0,1/2]}$ can be any function in $L^2([0,1/2])$, so we can conclude that $A^\bot=\{g\in L^2([0,1]):g|_{x\in[1/2,1]}=0\}$.
H: $G$ a non-nilpotent group: every $2$-maximal subgroup permutes with every $3$-maximal subgroup Let $G$ be a non-nilpotent group. Show that if $|G|=p^{\alpha}q^{\beta}r^{\gamma}$, where $p$, $q$, $r$ are primes (two of them may be equal) and $\alpha + \beta +\gamma \leq 3$, then every $2$-maximal subgroup of $G$ permutes with every $3$-maximal subgroup of $G$. AI: If $\alpha + \beta +\gamma \leq 3$ then every $3$-maximal subgroup is equal to the trivial subgroup $1$, so it permutes with all subgroups.
H: Limit of $\left(2-a^\frac{1}{x}\right)^x$ How do I prove the following limit? $$\lim_{x \to \infty}\left(2-a^\frac{1}{x}\right)^x = \frac{1}{a}$$ AI: $$\lim_{x \to \infty} \left(2-a^\frac{1}{x}\right)^x = \exp \log \lim_{x \to \infty} \left(2-a^\frac{1}{x}\right)^x = \exp \lim_{x \to \infty} \log \left(2-a^\frac{1}{x}\right)^x = \exp \lim_{x \to \infty} x \log \left(2-a^\frac{1}{x}\right) = \exp \lim_{t \searrow 0} \frac{1}{t} \log \left(2-a^t\right) = \exp \lim_{t \searrow 0} \frac{\log \left(2-a^t\right)}{t} $$ Using L'Hospital: $$ = \exp \lim_{t \searrow 0} \frac{\overbrace{a^t}^{\to 1} \log(a)}{\underbrace{a^t}_{\to 1}-2} = \exp \frac{1 \cdot \log(a)}{1-2} = \exp (- \log(a)) = \frac{1}{a} $$
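A rough numerical check of the result (an illustration of mine, not part of the proof): for several values of $a>0$ and increasingly large $x$, the quantity $(2-a^{1/x})^x$ approaches $1/a$:

```python
# compare (2 - a**(1/x))**x against 1/a as x grows
for a in (0.5, 2.0, 3.0):
    for x in (1e2, 1e4, 1e6):
        value = (2 - a ** (1.0 / x)) ** x
        print(a, x, value, 1 / a)
```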
H: Under which circumstances are there fixed points? Consider the following equation: $$x_{n+1} = ax_n + b$$ Under which circumstances is there a fixed point solution? Under which circumstances is there a period 2 solution? So for the first question I just rewrote it to $x=ax+b$ and I got $ x = \dfrac{b}{1-a}$. So my answer was there exists a fixed point for $ a \neq 1$. I did something similar for the second solution, and I ended up with $x = \dfrac{ab+a}{1-a^2}$, so there exists a period 2 solution for $ a \neq 1$ and $a \neq -1$. (and I believe $\dfrac{b}{1-a} \neq \dfrac{ab+a}{1-a^2}$, but I'm not sure and I wouldn't know how to write that concisely). Is this correct, or am I missing something here which would make the problem more extensive? AI: A fixed point solution exists iff $$ x = ax + b, $$ has a solution. If $a \neq 1$, the fixed point solution is given by $$ x = \frac{b}{1 - a}. $$ If $a = 1$, $x = x + b$ has a solution iff $b = 0$, in which case you have infinitely many solution. Therefore, the conclusion is $a \neq 1$ or $a = 1, b = 0$. A period 2 solution exists iff $$ x \neq ax + b, x = a^2x + b(a + 1), $$ has a solution. If $a \neq \pm 1$, then $$ x = \frac{b}{1 - a} $$ which is a fixed point solution. If $a = 1$, then for a solution to exist $b = 0$, but then the solution is a fixed point solution. If $a = -1$, then for a solution to exists $b$ can be anything, in which case you have infinitely many solution. Therefore, the conclusion is $a = -1$.
H: Derivative using the chain rule Differentiate $g(x) = (1-x)\left[\cos\left({\pi\over2}x\right){\pi\over2}\right]$ So (...) $$g'(x)=-\left[\sin\left({\pi\over2}x\right){\pi\over2}\right]+\left[\cos\left({\pi\over2}x\right){\pi\over2}\right](1-x)$$ This is well done? AI: This question needs both the chain rule and the product rule. If you have two function multiplied together then $(fg)' = f'g+fg'$. Let $f(x) = 1-x$ and $g(x) = \frac{\pi}{2}\cos(\frac{\pi}{2}x)$. Obviously $f'(x)=-1$. To find $g'(x)$ we need the chain rule. To differentiate $\frac{\pi}{2}\cos(\frac{\pi}{2}x)$, think of $\frac{\pi}{2}\cos u$. Differentiating gives $-\frac{\pi}{2}\sin u$. Then we multiply by the derivative of $u$, i.e. by the derivative of $\frac{\pi}{2}x$ which is $\frac{\pi}{2}$. Hence $\left(\frac{\pi}{2}\cos(\frac{\pi}{2}x)\right)'=-\frac{\pi^2}{4}\sin(\frac{\pi}{2}x)$ Putting all of this together gives: $$\left((1-x)\frac{\pi}{2}\cos\left(\frac{\pi}{2}x\right)\right)' = -\frac{\pi}{2}\cos\left(\frac{\pi}{2}x\right) -(1-x)\frac{\pi^2}{4}\sin\left(\frac{\pi}{2}x\right)$$
H: How can I evaluate $\lim_{x\to0}\frac{\sin(3x^2)}{\tan(x)\sin(x)}$ I know this: $$\lim_{x\to0}\frac{\sin(3x^2)}{\tan(x)\sin(x)}$$ But I have no idea how to get any further than: $$\lim_{x\to0}\frac{3x}{\tan(x)}$$ Any suggestions? AI: $$\frac{\sin 3x^2}{\tan x\sin x}=3\cos x\cdot\frac x{\sin x}\cdot\frac{\sin 3x^2}{3x^2}\cdot\frac x{\sin x}\xrightarrow [x\to 0]{}3\cdot 1\cdot1\cdot 1\;\ldots$$
H: Euler-Fermat Theorem So I am trying to teach myself number theory, and while trying to work on some exercises I got stuck trying to prove that, for all $n \in \mathbb{Z}$, $$ n^{91} \equiv n^{7} \bmod 91 $$ What I first thought of was applying the theorem directly and then multiply by the number of n's that was necessary, getting something like \begin{array}{ccc} n^{72} &\equiv& 1 \bmod 91 \\ n^{91} &\equiv& n^{19} \bmod 91 \end{array} Which is the same as finding \begin{array}{ccc} n^{19} &\equiv& n^{7} \bmod 91 \\ n^{12} &\equiv& 1\ \bmod 91 \end{array} But then I got stuck. Does anyone have anything that can give me a push? I am not looking for the answer, but more of a hint. Thanks in advance! AI: Hint: $91 = 13 \times 7$, so $a \equiv b \pmod {91}$ iff $a \equiv b \pmod {13}$ and $a \equiv b \pmod 7$. See CRT
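This is not a proof, but an easy exhaustive check (my own snippet): the congruence only depends on $n$ modulo $91$, so verifying it for the $91$ residues verifies it for all integers.

```python
# n^91 ≡ n^7 (mod 91) for every residue class n mod 91
print(all(pow(n, 91, 91) == pow(n, 7, 91) for n in range(91)))   # True
```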
H: A solvable group with order divisible by exactly two primes contains a normal subgroup of prime index. Let $G$ be a (nontrivial, finite) solvable group. Show that $G$ has a normal subgroup $N$ such that $|G: N|$ is a prime. AI: Hints: (1) By the correspondence theorem, if $\,H\lhd G\;$ , then any subgroup $\,\overline B\le G/H\;$ is of the form $\,\overline B=B/H\;$ , with $\;H\le B\;$ , and $\,\overline B\lhd G/H\iff B\lhd G\;$ and also $\,[G/H:B/H]=[G:B]\;$ (2) A finite abelian group $\,A\,$ has a subgroup of order $\;d\; $ for any divisor $\,d\;$ of $\;|A|\;$ . (3) The group $\,G/G'\;$ is finite abelian...
H: Confusion in proof that $C(X,Y)$ is separable From Kechris' Classical Descriptive Set Theory: (4.19) Theorem If $X$ is compact metrizable and $Y$ is Polish, then $C(X,Y)$ is Polish. In the proof of separability, we consider $$ C_{m,n} = \{f \in C(X,Y) : \forall x,y[d_X(x,y) < 1/m \implies d_Y(f(x),f(y))<1/n]\}, $$ and let $X_m$ be a $1/m$-dense set in $X$, i.e. no point in $X$ is further than $1/m$ from an element of $X_m$. Next is the part that confuses me. Kechris writes Then let $D_{m,n} \subseteq C_{m,n}$ be countable such that for every $f \in C_{m,n}$ and every $\epsilon>0$ there is $g \in D_{m,n}$ with $d_Y(f(y),g(y)) < \epsilon$ for $y \in X_m$. How do we know that such a set $D_{m,n}$ exists? Of course we could define a countable set of functions with the above property by just declaring that for all $i$ we have $g(x_i) = y_j$ for some $j$ where $\{y_i\}$ is dense in $Y$. However, as far as I know, there is no theorem guaranteeing that given $g:\{x_1,\ldots,x_m\} \to Y$ there is a continuous extension to all of $X$. Tietze's theorem only applies to $\mathbb{R}^n$ as far as I know, not an arbitrary Polish space. AI: Write $X_m=\{ a_1,\dots ,a_N\}$, and consider the set $$A=\{ (f(a_1),\dots ,f(a_N));\; f\in C_{m,n}\}\subset Y^N\, .$$ Take any countable dense set $D\subset A$. Then one can write $$D=\{ (f(a_1),\dots ,f(a_N));\; f\in D_{m,n}\}$$ for some countable set $D_{m,n}\subset C_{m,n}$. This set $C_{m,n}$ works.
H: How many ways a surface can curve differently in different directions? How many ways a surface can curve differently in different directions for a n-dimensional embedded submanifolds of $\mathbb{R}^m, m>n$? I think they can curve infinitely many ways but I am not quite certain: don't they have n-dimensional basis? I think I confused something here.. AI: Even for a non-umbilic surface in $\mathbb R^3$ you get plane sections with all possible curvatures at $P$ between $k_1$ and $k_2$, where these are the principal curvatures at $P$. So that's infinitely many. For a hypersurface in $\mathbb R^{n+1}$ there are $n$ principal curvatures. We have to decide what you mean by curvature — curvature of curves, curvature of surfaces (that's what sectional curvatures give you), or Gaussian curvature. For higher codimension, we can still talk about sectional curvature or about Gaussian curvature for every normal direction.
H: is $[0,1]^\omega$ with product topology a compact subspace of $\mathbb{R}^{\omega}$ Is $[0,1]^\omega$ with product topology a compact subspace of $\mathbb{R}^{\omega}$, where $\mathbb{R}^{\omega}$ denote the space of countably many products of $\mathbb{R}$. Is the subspace locally compact? AI: By Tychonoff's theorem, the product of any family of compact spaces is compact, so $[0,1]^\omega$ is compact. Compactness is a property that a space possesses or not, independent of the surrounding space, so it is a compact subspace of $\Bbb R^\omega$, given that the product topology on $[0,1]^\omega$ is the same as the topology as a subspace. That seems like a trivial fact, and a proof that a product of subspaces is a subspace of the product is indeed easy, using the universal property of the initial topology. $[0,1]^\omega$ is also locally compact, as is every product of compact locally compact spaces. Here locally compact means that every point has a local base of compacts sets.
H: Hamilton path and minimum degree Let $n$ be the number of vertices in a graph $G$, with $n≥4$. Prove that if $\;\operatorname{min deg}(G) \geq \frac{(n-1)}{2}\,$ then $G$ has a Hamilton path. AI: Let $G$ be a given graph with $n$ vertices such that $\min \deg G \ge \frac{n-1}{2}$. Create a new graph $G'$ by adding another vertex $w$ to $G$ and connecting it to all vertices of $G$. Note that $\min \deg G' = \min \deg G + 1 \ge \frac{n+1}{2}$, and the number of vertices in $G'$ is at least $5$. So we can apply Dirac's theorem, deducing that $G'$ has a Hamiltonian cycle. Back in the graph $G$, it has a Hamiltonian path obtained by removing $w$ and all its edges from $G'$ (thus breaking the cycle, but leaving all the vertices of $G$ in it).
H: Probability (usage of recursion) In an hour, a bacterium dies with probability $p$ or else splits into two. What is the probability that a single bacterium produces a population that will never become extinct? AI: Let $x$ (for "extinction") be the probability that the colony dies out. This can happen in two ways: either the first bacterium dies immediately (with probability $p$) or it splits and then neither of its children has infinite descendants. This means we can write $x$ as $$x = p + (1-p)x^2$$ which is a quadratic with two real roots. One solution is $x=1$ and the other is $x=\frac{p}{1-p}$. The latter is a valid probability (i.e., it's between 0 and 1) only when $p \in [0,1/2]$. So extinction is guaranteed for $p \geq 1/2$, while for $p < 1/2$ the extinction probability is $\frac{p}{1-p}$, which decreases as $p$ decreases; the probability of never becoming extinct is then $1-\frac{p}{1-p}=\frac{1-2p}{1-p}$. Survival is guaranteed, of course, if $p=0.$
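To see numerically which root is the extinction probability (an illustrative sketch of mine, not part of the answer): if $x_k$ denotes the probability of extinction within $k$ generations, then $x_0=0$ and $x_{k+1}=p+(1-p)x_k^2$, and this iteration converges to $\min\!\left(1,\frac{p}{1-p}\right)$:

```python
def extinction_probability(p, generations=10_000):
    """Probability of extinction within `generations` steps, via x_{k+1} = p + (1-p)*x_k**2."""
    x = 0.0
    for _ in range(generations):
        x = p + (1 - p) * x * x
    return x

for p in (0.1, 0.3, 0.5, 0.7):
    # near p = 0.5 the convergence is slow, so the printed value is only approximately the limit
    print(p, extinction_probability(p), min(1.0, p / (1 - p)))
```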
H: $\delta$ Notation in linear algebra In this equation below, what is $\delta_{l,q}$ denoting? Is $\delta$ a standard notation, or anything to do with all one's or the basis matrix etc? $$A_{ij}=\delta_{l,q}\left(\sum_{h=1}^n B_{l,h} + B_{l,q}\right)$$ AI: It's the Kronecker delta function.
H: Show that if $n$ and $k$ are positive integers, then $\lceil \frac{n}{k} \rceil = \lfloor \frac{n - 1}{k} \rfloor + 1$ This is the answer in the back of the book, but it doesn't make sense to me: There is some $b$ with $(b-1)k < n \leqslant bk$. Hence, $(b-1)k \leqslant n-1 < bk$. Divide by $k$ to obtain $b-1 < \frac{n}{k} \leqslant b$ and $b-1 \leqslant \frac{n - 1}{k} < b$. Hence, $\lceil \frac{n}{k} \rceil = b$ and $\lfloor \frac{n - 1}{k} \rfloor = b-1$. Where did step one come from? What happened in step two? How did step three turn into step four? AI: 1) $n$ is an integer, so you divide the positive real line into disjoint intervals $$ (0, k], (k, 2k], (2k, 3k], \cdots, $$ then $n$ must fall into exactly one of them. In fact, this shows the existence of $\lceil \frac{n}{k} \rceil$. 2) It means that $$ (b - 1)k \leq n - 1 < bk, $$ since all of these quantities are integers. 3) Now you divide all of these inequalities by $k$. 4) That is the definition of ceiling and floor: $$ \lfloor a \rfloor = \text{the greatest integer that is less than or equal to }a $$ $$ \lceil a \rceil = \text{the least integer that is greater than or equal to }a $$ Since you seem to be really confused about this answer from the book, I will write a little bit more about it. Here is how you should have worked out this proof: 1) Suppose $\lceil \frac{n}{k} \rceil = b$ for some integer $b$; then it suffices to show that $\lfloor \frac{n - 1}{k} \rfloor = b - 1$. 2) What conditions do I get from my assumption $\lceil \frac{n}{k} \rceil = b$? 3) Will the conditions I get from 2) imply that $\lfloor \frac{n - 1}{k} \rfloor = b - 1$?
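Not a substitute for the proof, but a quick empirical check (my own snippet) over a range of positive integers:

```python
import math

# check ceil(n/k) == floor((n-1)/k) + 1 for 1 <= n, k < 200
ok = all(math.ceil(n / k) == math.floor((n - 1) / k) + 1
         for n in range(1, 200) for k in range(1, 200))
print(ok)   # True
```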
H: analysis problem proof with derivative $f:[a,b] \to \mathbb R$ is a continuous function and $0 < a < b$ and $f$ is differentiable in $(a,b)$ and $\dfrac{f(a)}{a} = \dfrac{f(b)}{b}$. Prove that there exists $x \in (a,b)$ so that $xf'(x) = f(x)$. AI: Hint: Apply the mean value theorem (or Rolle's theorem) to $g(x) = \dfrac{f(x)}{x}$.
H: I would like a hint in order to prove that this matrix is positive definite Let $a_{ij}$ be a real number for all $i,j\in\{1,...,n\}$. Consider the matrix below. $$B=\begin{bmatrix} \sum_{k=1}^n(a_{1k})^2 & \sum_{k=1}^na_{1k}a_{2k} & \cdots & \sum_{k=1}^na_{1k}a_{nk}\\ \sum_{k=1}^na_{2k}a_{1k} & \sum_{k=1}^n(a_{2k})^2 & \cdots & \sum_{k=1}^na_{2k}a_{nk}\\ & & \vdots & \\ \sum_{k=1}^na_{nk}a_{1k} & \sum_{k=1}^na_{nk}a_{2k} & \cdots & \sum_{k=1}^n(a_{nk})^2 \end{bmatrix}$$ I want to prove that $B$ is positive definite. Notice that $B$ is symmetric, because $$b_{ij}=\sum_{k=1}^na_{ik}a_{jk}=\sum_{k=1}^na_{jk}a_{ik}=b_{ji}$$ I think that symmetry is helpful in this case. Maybe I'll need a theorem (that I don't know!) about sufficient conditions for positive definiteness. Can someone help me? Thanks. AI: As noted by the commenters, this is the product of a matrix with its transpose: $B=AA^T$ with $A=(a_{ij})$. Such a product is always positive semidefinite. It is positive definite if and only if $A$ is invertible. (Correction here due to @julien). Suppose $B=AA^T$. We must show $x^TAA^Tx>0$ for all nonzero $x$. This is $(A^Tx)^T(A^Tx)=\|A^Tx\|^2$, the inner product of $A^Tx$ with itself, which is always $\ge 0$; it is $>0$ unless $A^Tx=0$. When $A$ is invertible, $A^Tx=0$ forces $x=0$, so $B$ is positive definite in that case, and only positive semidefinite in general.
H: Evaluating $\lim_{x\to0}\frac{x+\sin(x)}{x^2-\sin(x)}$ I did the math, and my calculations were: $$\lim_{x\to0}\frac{x+\sin(x)}{x^2-\sin(x)}= \lim_{x\to0}\frac{x}{x^2-\sin(x)}+\frac{\sin(x)}{x^2-\sin(x)}$$ But I cannot get any further. I would like to do it without using derivatives or L'Hôpital's rule. AI: $$\lim_{x\to0}\;\frac{\left(x+\sin(x)\right)}{\left(x^2-\sin(x)\right)}\cdot \frac{1/x}{1/x} \quad =\quad \lim_{x\to 0}\;\frac{1+\frac{\sin(x)}x}{x-\frac{\sin(x)}x}$$ Now we evaluate, using the fact that $$\lim_{x \to 0} \frac {\sin(x)}x = 1,$$ and we can see that: $$\lim_{x\to 0}\;\frac{1+\frac{\sin(x)}x}{x-\frac{\sin(x)}x} = \frac{1 + 1}{0 - 1} = -2$$
H: Explain this consequence of continuity A consequence of continuity is the following fact: if $f(x)$ is continuous at $x=b$ and $\lim\limits_{x \to a} g(x)=b$, then $\lim\limits_{x \to a} f(g(x)) = f(\lim\limits_{x \to a}g(x))$. With this fact we can solve the following: $\lim\limits_{x \to 0} e^{\sin x}= e^{\lim\limits_{x \to 0}\sin x} = e^0 = 1$, so the fact is saying that in this case $f(\lim\limits_{x \to a}g(x)) = f(b).$ I don't understand how this fact helps solve the problem. Is it because $\exp$ and $\sin$ are both continuous everywhere, and if they're continuous at the same places, you can use this fact? Please elaborate. Thanks! AI: This solves your problem because both functions are continuous everywhere, so you can pass the limit inside twice: $\lim_{x\to x_0}f(g(x))=f(\lim_{x\to x_0}g(x))$. Here $\sin$ is continuous at $0$, so $\lim_{x\to 0}\sin x=\sin 0=0$, and $\exp$ is continuous at $0$, so $\lim_{x\to 0}e^{\sin x}=e^{0}=1$.
H: Does $(\{0,1\},*)$ form a group? I am reading my first book on abstract algebra. I am not enrolled in a class on the subject. Given $S = \{0,1\}$. Is $(S,\cdot)$ a group? $S$ is closed under multiplication. $$0\cdot1=0,\,1\cdot0=0,\,0\cdot0=0,\,1\cdot1=1.$$ $S$ has an identity, $1$, I think. $$0\cdot1=0,\,1\cdot1=1.$$ I don't believe $S$ satisfies $a\cdot a^{-1}=\operatorname{id}.$ However, zero is excluded when stating that $\mathbb R$ satisfies $a\cdot a^{-1} = \operatorname{id}$ under multiplication. $S$ would be a group if zero is excluded. $1\cdot1=1.$ So is $S$ a group or not? AI: Looking at the structure $S = (\{0, 1\}, \cdot)$, as you found, $\,0\,$ has no (multiplicative) inverse. Therefore, $S$, under multiplication, cannot be a group, as it fails to have closure on taking inverses. It meets the other criteria of a group, but not the group axiom requiring that the inverse of every element of a group is contained in the group. However, $S' = \{1\}$, under multiplication, is a group: it's called the trivial group, as it contains only one element: the (multiplicative) identity of the group itself. If you know about the addition of integers modulo $n$, then you should know that $\,\mathbb Z_2 = \left(\{0, 1\}, +\right)\,$ is a group: it is the additive group of integers under addition modulo $2$.
H: How to check if a set of vectors is a basis OK, I am having a real problem with this and I am desperate. I have a set of vectors $\{(1,0,-1), (2,5,1), (0,-4,3)\}$. How do I check is this is a basis for $\mathbb{R}^3?$ My text says a basis $B$ for a vector space $V$ is a linearly independent subset of $V$ that generates $V$. OK then. I need to see if these vectors are linearly independent, yes? If that is so, then for these to be linearly independent the following must be true: $a_1v_1 + a_2v_2 + ... + a_nv_n \neq 0$ for any scalars $a_i$ Is this the case or not? If it is, then I just have to see if $a_1(1,0.-1)+ a_2(2,5,1)+ a_3(0,-4,3) = 0$ or $a_1 + 2a_2 + 0a_3 = 0$ $0a_1 + 5a_2 - 4a_3 = 0$ $-a_1 + a_2 + 3a_3 = 0$ has a solution. Adding these equations up I get $8a_2 - a_3 = 0$ or $a_3 = 8a_2$ so $5a_2 - 32a_2 = 0$ which gets me $a_2 = 0$ and that implies $a_1 = 0$ and $a_3=0$ as well. So they are all linearly dependent and thus they are not a basis for $\mathbb{R}^3$. Something tells me that this is wrong. But I am having a hell of a time figuring this stuff out. Please someone help, and I ask: pretend I am the dumbest student you ever met. AI: A set of vectors $v_1, v_2, ..., v_n$ is linearly independent if and only if we have that $$a_1v_1 + a_2v_2 + ... + a_nv_n = 0 \;\;$$ only when $ a_1 = a_2 = ... = a_n = 0 $. (After all, any linear combination of three vectors in $\mathbb R^3$, when each is multiplied by the scalar $0$, is going to be yield the zero vector!) So you have, in fact, shown linear independence. And any set of three linearly independent vectors in $\mathbb R^3$ spans $\mathbb R^3$. Hence your set of vectors is indeed a basis for $\mathbb R^3$.
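For this particular set of vectors one can also let the computer do the arithmetic (my own check, consistent with the argument above): the $3\times3$ matrix with these vectors as rows has nonzero determinant, so the vectors are linearly independent and hence a basis of $\mathbb{R}^3$.

```python
import numpy as np

A = np.array([[1, 0, -1],
              [2, 5, 1],
              [0, -4, 3]], dtype=float)

print(np.linalg.det(A))           # ≈ 27 (nonzero), so the rows are linearly independent
print(np.linalg.matrix_rank(A))   # 3, i.e. the three vectors span R^3
```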
H: Why are the integers appearing in lens spaces coprime? I have a past paper question for a first course in algebraic topology, which asks one to calculate the first three homology and homotopy groups for the space $L_n$, defined as follows: Let $G=\{z\in\mathbb C|z^n=1\}\cong\mathbb Z_n$ act on $S^3=\{(z_1,z_2)| |z_1|^2+|z_2|^2=1\}$ by $z(z_1,z_2)=(zz_1,zz_2)$ and define $L_n$ to be the quotient space. (The question claims that the action of $G$ is properly discontinuous.) This space $L_n$ looks suspiciously like a lens space $L(p,q)$ with $p=q=n$, with the exception that $p,q$ are not coprime, as usually required in the definition. I think I can manage to calculate homology and homotopy groups for the usual lens space, but I am a bit lost as to what $L_n$ is. My question is, why are $p,q$ in the definition of a lens space $L(p,q)$ required to be coprime? AI: Recall that $L(p,q)$ is the quotient of $S^3$ by the action of $\mathbb{Z}/p\mathbb{Z}$ (thought of as the $p$th roots of $1$) given by $z(z_1,z_2) = (zz_1, z^q z_2)$. Hence, the space $L_n$ is actually the space $L(n,1)$ in the $(p,q)$ notation (and note that $\gcd(n,1) = 1$, as is required for the $(p,q)$ notation.) As Qiaochu (and Wikipedia) point out, requiring $\gcd(p,q)=1$ is equivalent to asking that the action be free. Free actions are nice for two reasons. First, as you pointed out, freeness guarantees the quotient is a manifold. Second, it also guarantees the quotient map $S^3\rightarrow L(p,q)$ is a covering, which reduces much (but not all) of the algebraic topology considerations of $L(p,q)$ to those of $S^3$.
H: In a sequence of $n$ integers, must there be a contiguous subsequence that sums to a multiple of $n$? Let $x_1, \ldots, x_n$ be integers. Then are there indices $1\le a\le b\le n$ such that $$\sum_{i=a}^b x_i$$ is a multiple of $n$? AI: Let $$s_k = \sum_{i=1}^k x_i\pmod n.$$ If $s_k$ is zero for any $k$, we have found the desired subsequence ($a=1, b=k$), so suppose all the $s_k$ are nonzero. There are $n$ of them, each in the range $1,\ldots,n-1$, so two, say $s_{a}$ and $s_b$, must be equal. Then since $s_b - s_a = 0$, we have $$\begin{align} \sum_{i=1}^b x_i - \sum_{i=1}^{a} x_i & = 0\pmod n \\ \sum_{i=a+1}^b x_i & = 0\pmod n \end{align}$$ as desired. (Note that if $a+1 = b$, that is all right; it just means that $x_b$ itself is a multiple of $n$.)
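The pigeonhole argument above is constructive, and it translates directly into a short algorithm (my own sketch, not part of the answer): scan the prefix sums modulo $n$ and stop at the first zero or repeated residue.

```python
def zero_sum_subsequence(xs):
    """Return (a, b) with 0 <= a < b <= n such that xs[a:b] sums to a multiple of n = len(xs)."""
    n = len(xs)
    seen = {0: 0}                 # prefix-sum residue -> index where it first occurred
    total = 0
    for b, x in enumerate(xs, start=1):
        total = (total + x) % n
        if total in seen:
            return seen[total], b
        seen[total] = b
    # unreachable: n+1 prefix residues (counting the empty prefix) in n classes must collide
    raise AssertionError

xs = [3, 7, 7, 4, 5]
a, b = zero_sum_subsequence(xs)
print(xs[a:b], sum(xs[a:b]) % len(xs))   # a contiguous block whose sum is a multiple of 5
```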
H: Find out the position of an element in a loop according to the number of elements First of all, I am terrible at math, and this small problem is giving me a nice headache. I'm sure most here will see this as an easy solution. I will try to be as clear as possible. I have X number of items I have 3 positions for these items I need to find out the position of the last item Example: X = 5 Item 1 = Position A Item 2 = Position B Item 3 = Position C Item 4 = Position A Item 5 = Position B My answer is Item 5 = Position B I can't figure out the equation! Some help would be much appreciate. PS: Although is the first time I post at math.stackexchange. I have been a member of other stackexchange website for a while. Let me know if I need to provide more information. AI: Start numbering items at 0 instead of 1. Then the position of the $n$th item is the remainder after dividing $n$ by 3, where 0, 1, and 2 mean positions A, B, and C, respectively. Here's why: You can always write $n$ in the form $3q + r$, where $q$ and $r$ are the quotient and remainder after dividing $n$ by 3. Since we've started numbering at 0, items divisible by 3 (I.e. 0,3,6,9,...) will always be at position 0 (=position A). So, to find out the position of our item, we just notice that item $3q$ is at position 0, and so ours is at position $r$.
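In code (my own illustration of the answer's renumbering trick): number the items from $0$, take the remainder mod $3$, and map $0,1,2$ to positions A, B, C; with 1-based item numbers this is `(item - 1) % 3`.

```python
def position(item_number, positions=('A', 'B', 'C')):
    """1-based item number -> its position letter, cycling through `positions`."""
    return positions[(item_number - 1) % len(positions)]

print(position(5))                          # 'B'
print([position(i) for i in range(1, 6)])   # ['A', 'B', 'C', 'A', 'B']
```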
H: Could someone help me to prove that this symmetric matrix is positive definite? Let $a_{ij}\in\mathbb{R}$ for all $i\in\{1,...,m\}$, $j\in\{1,...,n\}$, where $m\in\mathbb{N}$. Consider the matrix below. $$B=\begin{bmatrix} \sum_{k=1}^n(a_{1k})^2 & \sum_{k=1}^na_{1k}a_{2k} & \cdots & \sum_{k=1}^na_{1k}a_{mk}\\ \sum_{k=1}^na_{2k}a_{1k} & \sum_{k=1}^n(a_{2k})^2 & \cdots & \sum_{k=1}^na_{2k}a_{mk}\\ & & \vdots & \\ \sum_{k=1}^na_{mk}a_{1k} & \sum_{k=1}^na_{mk}a_{2k} & \cdots & \sum_{k=1}^n(a_{mk})^2 \end{bmatrix}$$ I want to prove that $B$ is positive definite. Notice that $B$ is symmetric (I think the symmetry is helpful in this case). If $n=m$, there is an answer here. But I need the general case. I hope you can help me. Thanks. AI: Write $A=(a_{ij})$ for the $m\times n$ matrix with these entries, so that $B=AA^\top$. Then $B$ is positive definite if and only if the row vectors of $A$ are linearly independent, i.e., $\text{rank}\,(A)=m$. The same holds when $A$ is square. In general, $B$ is always positive semidefinite, because $Bv\cdot v=\|A^\top v\|^2\ge 0$.
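The claim is easy to check numerically. The following sketch (using NumPy, with made-up random data) illustrates that $B=AA^\top$ is always positive semidefinite and is positive definite exactly when the rows of $A$ are linearly independent; it is only an illustration, not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)

# Full row rank: m = 3 rows, n = 5 columns.
A = rng.normal(size=(3, 5))
B = A @ A.T
print(np.linalg.matrix_rank(A))   # 3 = m, so B should be positive definite
print(np.linalg.eigvalsh(B))      # all eigenvalues strictly positive

# Rank-deficient: make the third row a combination of the first two.
A2 = A.copy()
A2[2] = A2[0] + A2[1]
B2 = A2 @ A2.T
print(np.linalg.matrix_rank(A2))  # 2 < m, so B2 is only positive semidefinite
print(np.linalg.eigvalsh(B2))     # smallest eigenvalue is zero up to round-off
```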
H: Is there a rational number between any two irrationals? Suppose $i_1$ and $i_2$ are distinct irrational numbers with $i_1 < i_2$. Is it necessarily the case that there is a rational number $r$ in the interval $[i_1, i_2]$? How would you construct such a rational number? [I posted this only so that the useful answers at https://math.stackexchange.com/questions/414036/rationals-and-irrationals-on-the-real-number-line/414048#414048 could be merged here before that question was deleted.] AI: Let $x,y\in\mathbb{R}$, $x\neq y$. Without loss of generality, suppose $x<y$. Then there exists a positive $z$ such that $y-x=z$. By Archimedes' axiom, there exists a natural number $n$ such that $$n > \dfrac{1}{z}$$ $$nz > 1$$ $$ny - nx > 1$$ Since the interval $(nx, ny)$ has length greater than $1$, it contains an integer: for instance, $m = \lfloor nx \rfloor + 1$ satisfies $nx < m \le nx + 1 < ny$. So there exists an integer $m$ such that $$nx < m < ny$$ $$x < \frac{m}{n} < y$$ i.e. $m/n$ is a rational number between $x$ and $y$. Since $x$ and $y$ can be any real numbers, in particular they can be irrationals.
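The proof is constructive: take $n = \lfloor 1/(y-x)\rfloor + 1$ and then $m = \lfloor nx\rfloor + 1$. A small Python sketch of this recipe (the function name is mine, and floating point makes it only an approximation of the exact argument):

```python
import math

def rational_between(x, y):
    """Return integers (m, n) with x < m/n < y, assuming x < y."""
    n = math.floor(1 / (y - x)) + 1  # Archimedes: n * (y - x) > 1
    m = math.floor(n * x) + 1        # then n*x < m <= n*x + 1 < n*y
    return m, n

m, n = rational_between(math.sqrt(2), math.sqrt(3))
print(m, n, m / n)  # 6 4 1.5, and indeed sqrt(2) < 3/2 < sqrt(3)
```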
H: polynomial congruence equations Is there a general method to solve the following system, i.e. to find $f(x)$ satisfying $$\left \{ \begin{matrix} f(x) \equiv r_1(x) \pmod{g_1(x)}\\f(x) \equiv r_2(x) \pmod{g_2(x)} \end{matrix} \right. $$ where $f(x),r_1(x),r_2(x),g_1(x),g_2(x) \in \Bbb{F}[x]$ and $\gcd(g_1,g_2)=1$? Thanks very much. AI: Are you familiar with how to solve systems of congruences à la Sun-Ze (a.k.a. the Chinese remainder theorem) in the case of modular arithmetic? Same story. Suppose $R$ is a PID and $u,v$ are coprime ($\forall w\in R:u,v\mid w\iff uv\mid w$). To find a solution to $$\begin{cases}x\equiv a\mod u \\ x\equiv b \mod v,\end{cases}$$ let $\bar{u}$ and $\bar{v}$ be elements such that $u\bar{u}\equiv1\bmod v$ and $v\bar{v}\equiv1\bmod u$. Then $x=av\bar{v}+bu\bar{u}$ is a solution, and every other solution is congruent to this one mod $uv$. This does not work in non-PIDs unless $(u,v)=R$ (which is stronger than coprimality of the elements).
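Over $\mathbb{Q}$ the same recipe can be carried out with the extended Euclidean algorithm for polynomials, e.g. via SymPy. The sketch below uses arbitrarily chosen moduli and residues (my own examples, not from the post) purely to illustrate the combination formula from the answer.

```python
from sympy import symbols, gcdex, rem, expand

x = symbols('x')

# Want f with: f = r1 (mod g1) and f = r2 (mod g2), where gcd(g1, g2) = 1.
g1, r1 = x**2 + 1, x   # f = x  (mod x^2 + 1)
g2, r2 = x - 1, 2      # f = 2  (mod x - 1)

# Extended Euclid: s*g1 + t*g2 = 1, hence t*g2 = 1 (mod g1) and s*g1 = 1 (mod g2).
s, t, h = gcdex(g1, g2, x)
assert h == 1  # the moduli really are coprime

# The combination from the answer: r1 * (t*g2) + r2 * (s*g1).
f = expand(r1 * t * g2 + r2 * s * g1)

print(f)              # one valid solution, e.g. -x**3/2 + x**2 + x/2 + 1
print(rem(f, g1, x))  # reduces to x, i.e. r1
print(rem(f, g2, x))  # reduces to 2, i.e. r2
```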
H: A good Problems and Solutions book to accompany Baby Rudin? I'm reading the book Principles of Mathematical Analysis by Walter Rudin, aka "Baby Rudin". The exercises included are very instructive and helpful, but I'd like to find a book that has more problems (with solutions) that could help me build a better understanding of the topics covered in Baby Rudin. I found the series "Problems in Mathematical Analysis I-III" by W.J. Kaczor and M.T. Nowak, but it seems to me that it does not fit Baby Rudin well. Any recommendation? AI: I used Intro to Real Analysis by Manfred Stoll while reading Rudin and noticed many similarities. The structure of the book is very similar, although the occasional proof is given in a different way. For example, Baby Rudin proves that every $k$-cell is compact by contradiction, whereas Stoll proves a slightly weaker claim (that a closed bounded interval $[a,b]$ is compact) by a direct proof, which constructs a set and shows that it is compact and equal to $[a,b]$. Stoll uses a slightly different layout: he introduces sequences before topology and treats series after integration, but the book has more approachable exercises. The exercises in Rudin, while not impossible, are very difficult (at least they were for me), and that can be frustrating. Stoll's book contains some easier problems near the beginning of each exercise set; however, difficult problems do exist. Solutions are not provided for all the problems, but I still found this book helpful to read alongside Baby Rudin.