H: Statistics problem (normal distribution) A factory produces cylindrical roller stands that are nominally $4$ cm in diameter and $6$ cm in length. In fact, the diameters $X$ are normally distributed with a mean of $4$ cm and a standard deviation of $0.01$ cm, with density $f_{1}(x)$. Likewise, the lengths $Y$ are normally distributed with a mean of $6$ cm and a standard deviation of $0.01$ cm, with density $f_{2}(y)$. If $X$ and $Y$ are independent, i.e., $f(x,y)=f_{1}(x)f_{2}(y)$, find the probability that the length and the diameter of a stand chosen at random from the production line differ from their respective means by more than $0.02$ cm. Well, $f_1$ and $f_2$ are probability density functions, so we need to use $$\int_{-\infty}^{\infty}\frac{1}{\sigma\sqrt{2\pi}}\,{\rm e}^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx=1.$$ But I don't have much knowledge of statistics. (Sorry if I make a mistake writing the exercise; I speak Spanish.) If anyone can explain it to me I would really appreciate it :) AI: The probability that the diameter $D\in (x,x+dx)$ is $P(x<D<x+dx)=f_1(x)\,dx$. Therefore, $P(4-0.02<D<4+0.02)=\int_{4-0.02}^{4+0.02}f_1(x)\,dx$ and similarly, $P(6-0.02<L<6+0.02)=\int_{6-0.02}^{6+0.02}f_2(y)\,dy$. As $D,L$ are independent, $P(4-0.02<D<4+0.02,\;6-0.02<L<6+0.02)=\left(\int_{4-0.02}^{4+0.02}f_1(x)\,dx\right)\left(\int_{6-0.02}^{6+0.02}f_2(y)\,dy\right)$. As $f_1,f_2$ are pdf's of normal distributions, with $\Phi$ denoting the standard normal cdf, we have $\int_{4-0.02}^{4+0.02}f_1(x)\,dx=\int_{-\infty}^{4+0.02}f_1(x)\,dx-\int_{-\infty}^{4-0.02}f_1(x)\,dx=\Phi\left(\frac{4.02-4}{0.01}\right)-\Phi\left(\frac{3.98-4}{0.01}\right)$
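To finish numerically (using a standard normal table): $\Phi\left(\frac{4.02-4}{0.01}\right)-\Phi\left(\frac{3.98-4}{0.01}\right)=\Phi(2)-\Phi(-2)=2\Phi(2)-1\approx 0.9545$, and the length factor works out to the same value. So the probability that both dimensions stay within $0.02$ cm of their means is about $0.9545^2\approx 0.911$; if the problem is read as asking for the probability that the $0.02$ cm tolerance is exceeded, take the complement, about $0.089$.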
H: What is the $8^{th}$ term of $\left(3x-\frac{y}{2}\right)^{10}$? What is the $8^{th}$ term of $\left(3x-\frac{y}{2}\right)^{10}$? My solution: I'm not sure if I'm correct :) $^{10}C_r (3x)^{10-r} \left(-\frac{y}{2}\right)^r$ where $r= 7$ since we start at $r=0$ $^{10}C_7 (3x)^3 \left(-\frac{y}{2}\right)^7$ $\left(120\right)(27x^3)\left(-\frac{y^7}{128}\right)$ $= -\left(\frac{405}{16}\right) x^3 y^7$ AI: $$(3x-(1/2)y)^{10}$$ The general term is $$T_k=\binom{n}{k}a^{n-k}b^k,$$ where $T_k$ is the $(k+1)^{\text{st}}$ term since the count starts at $k=0$. In our case we have $$a=3x,\quad b=-\frac{1}{2}y,\quad n=10,\quad k=7,$$ so $$T_7=\binom{10}{7}(3x)^3\left(-\frac{1}{2}y\right)^7=-\frac{405}{16}x^3y^7$$
H: Dot products in the context of linear algebra and matrix multiplication I've been self-teaching myself linear algebra from Linear Algebra and its Applications 4th from D. Lay. I'm about 8 sections deep and I've had this bothersome feeling regarding the section describing the process of multiplying matrix $A$ and vector $\mathbf x$: The first entry in the product $A \mathbf x$ is a sum of products (sometimes called a dot product), using the first row of $A$ and the entries in $\mathbf x$. That is, $$\begin{bmatrix} 2 & 3 & 4 \\ \phantom{0} \\ \phantom{0} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 2x_1 + 3x_2 + 4x_3 \\ \phantom{0} \\ \phantom{0} \end{bmatrix}$$ This rolled into some examples: $\begin{bmatrix} 1 & 2 & -1 \\ 0 & -5 & 3 \end{bmatrix} \begin{bmatrix} 4 \\ 3 \\ 7 \end{bmatrix} = \begin{bmatrix} 1 \cdot 4 + 2 \cdot 3 + (-1) \cdot 7 \\ 0 \cdot 4 + (-5) \cdot 3 + 3 \cdot 7 \end{bmatrix} = \begin{bmatrix} 3 \\ 6 \end{bmatrix}$ $\begin{bmatrix} 2 & -3 \\ 8 & 0 \\ -5 & 2 \end{bmatrix} \begin{bmatrix} 4 \\ 7 \end{bmatrix} = \begin{bmatrix} 2 \cdot 4 + (-3) \cdot 7 \\ 8 \cdot 4+ 0 \cdot 7 \\ (-5) \cdot 4 + 2 \cdot 7 \end{bmatrix} = \begin{bmatrix} -13 \\ 32 \\ -6 \end{bmatrix}$ $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r \\ s \\ t \end{bmatrix} = \begin{bmatrix} 1 \cdot r + 0 \cdot s + 0 \cdot t \\ 0 \cdot r + 1 \cdot s + 0 \cdot t \\ 0 \cdot r + 0 \cdot s + 1 \cdot t \\ \end{bmatrix} = \begin{bmatrix} r \\ s \\ t \end{bmatrix}$ I don't know if it's the radical change in syntax or if I'm just plain missing something. The dot products (multivar calc/physics) I'm familiar with result in a scalar answer at the end of the day. As far as I understand, a matrix in itself is a “bundle” of vectors each occupying a column. I get the feeling I'm trying to relate a concept when there isn't a relationship at all I guess. Maybe someone can clarify. AI: Basically, multiplying a matrix $A$ by a vector $\vec{x}$ is like taking each row vector stored in $A$ and computing its dot product with the column vector $\vec{x}$, then storing the resulting scalars vertically in a vector $A\vec{x}$. Note that in your first example, $2x_1+3x_2+4x_3$ is a scalar. By slightly abusing the notation, let $\vec{v}_1,\vec{v}_2,...,\vec{v}_m$ be the $m$ row vectors in an $m\times n$ matrix $A$. Then to multiply $A$ by a vector $\vec{x}$ of length $n$, we have: $$\left[ \begin{array}{c} \longleftarrow \vec{v}_1 \longrightarrow \\ \longleftarrow \vec{v}_2 \longrightarrow\\ \vdots \\ \longleftarrow \vec{v}_m \longrightarrow\\ \end{array} \right] \vec{x} = \left[ \begin{array}{c} \vec{v}_1 \cdot \vec{x} \\ \vec{v}_2\cdot \vec{x} \\ \vdots \\ \vec{v}_m\cdot \vec{x}\\ \end{array} \right]$$
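A small numerical illustration of this point (a sketch in Python with NumPy, not from the textbook): the matrix–vector product and the vector of row-by-row dot products are the same thing.

```python
import numpy as np

A = np.array([[1, 2, -1],
              [0, -5, 3]])
x = np.array([4, 3, 7])

# Matrix-vector product, as defined in the text
print(A @ x)                             # [3 6]

# One dot product per row of A, stacked into a vector
print(np.array([row @ x for row in A]))  # [3 6]
```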
H: Is there a noncompact operator from $\ell^\infty$ to a reflexive space? It is well known that bounded operators from $c_{0}$ to a reflexive Banach space $X$ are in fact all compact. Indeed, since it can be shown that an operator is compact iff for any weakly convergent sequence in its domain, its image is convergent, one may consider the adjoint, which is an operator from $X$ to $\ell_{1}$, and use the Schur theorem (it is also known that an operator is compact iff its adjoint is). I wonder if it can be done "the other way round": if I take an operator which is adjoint to an adjoint to one from $c_{0}$ to $X$, it is an operator from $\ell^{\infty}$ to $X$, and is compact. However, these aren't (are they?) all bounded operators. I would be very grateful if someone provides an example of a noncompact operator from $\ell^{\infty}$ to a reflexive space. If such operators exist, what about ones to $\ell_{p}$, for $p\in [1,\infty)$ — maybe at least those have to be compact? Thanks in advance. AI: It is "well-known" that $\ell^\infty$ is isomorphic to $L^\infty[0,1]$. The standard embedding of $L^\infty[0,1]$ into $L^2[0,1]$ is not compact (e.g. the image of the unit ball contains an orthonormal basis).
H: Is $e^z + \overline{z}^2$ holomorphic? Can a sum of a holomorphic and non-holomorphic functions be itself holomorphic? As I understand, $\overline{z} ^2$ is not holomorphic? AI: The difference of two holomorphic functions is holomorphic. Is $(e^z + \bar z^2) - e^z$ holomorphic?
H: If $10^{20} +20^{10}$ is divided by 4 then what would be its remainder? If $$10^{20} +20^{10}$$ is divided by 4 then what would be its remainder? AI: Since $$ \begin{eqnarray*} 10^{20}+20^{10} &=&\left( 10^{10}\right) ^{2}+2^{10}10^{10} \\ &=&10^{10}\left( 10^{10}+2^{10}\right) \\ &=&2^{10}5^{10}\left( 2^{10}5^{10}+2^{10}\right) \\ &=&2^{10}2^{10}5^{10}\left( 5^{10}+1\right) \\ &=&4^{10}5^{10}\left( 5^{10}+1\right) \\ &=&4\left( 4^{9}5^{10}\left( 5^{10}+1\right) \right) , \end{eqnarray*} $$ the remainder is $0$, because $$\frac{10^{20}+20^{10}}{4}=4^{9}5^{10}\left( 5^{10}+1\right).$$
H: Algebraic manipulation with square roots I have always had problems with the algebraic manipulation of square roots. For example, recently I encountered this in a problem I was working on: $$\sqrt{\left(\dfrac{x-1}{2x}\right)^2 - \dfrac{y}{x}} = \dfrac{1}{2x} \sqrt{(x-1)^2 -4xy}$$ I still don't grasp why this is correct and in general, I have trouble knowing when you can factor out something when dealing with square roots. Can someone enlighten me. AI: Observe that: $$ \begin{align*} \sqrt{\left(\dfrac{x-1}{2x}\right)^2 - \dfrac{y}{x}} &= \sqrt{\dfrac{(x-1)^2}{(2x)^2} - \dfrac{y}{x} \cdot \dfrac{4x}{4x}} \\ &= \sqrt{\dfrac{(x-1)^2}{(2x)^2} - \dfrac{4xy}{4x^2}} \\ &= \sqrt{\dfrac{(x-1)^2}{(2x)^2} - \dfrac{4xy}{(2x)^2}} \\ &= \sqrt{\dfrac{(x-1)^2-4xy}{(2x)^2}} \\ &= \sqrt{\dfrac{1}{(2x)^2} \cdot \left( (x-1)^2-4xy \right)} \\ &= \sqrt{\left(\dfrac{1}{2x}\right)^2 \left((x-1)^2 -4xy\right)} \\ &= \sqrt{\left(\dfrac{1}{2x}\right)^2}\cdot \sqrt{(x-1)^2 -4xy} \\ &= \dfrac{1}{2x} \sqrt{(x-1)^2 -4xy} \end{align*} $$ assuming that $x>0$.
H: Simple question about Liouville formula Liouville Formula $\det X(t)=\det X(t_0)\exp\left( \int^t_{t_0}tr A(u) du\right)$ Why when $t_0=0$ we have $\det e^{tA}=e^{t(trA)}$ My book says this, but I couldn't understand why. I need help. Thanks a lot AI: Presumably $X$ is the solution to the ODE $\dot{X} = A(t) X$, subject to a specified initial condition at $t_0$. If $X(0) = I$, and $A$ is a constant (that is, not time dependent), then we have $X(t) = e^{At}$. Liouville's Formula gives $\det X(t) = \det e^{At} = (\det I) e^{\int_0^t \operatorname{tr} A d\tau} = e^{t\operatorname{tr} A }$.
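A quick numerical sanity check of the identity $\det e^{tA}=e^{t\,\operatorname{tr}A}$ (a sketch using SciPy's matrix exponential; the matrix and $t$ below are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1., 2.],
              [0., 3.]])
t = 0.7

print(np.linalg.det(expm(t * A)))  # det(e^{tA})
print(np.exp(t * np.trace(A)))     # e^{t tr A}: the same number, here e^{2.8}
```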
H: Let $f:[a,b]\to\mathbb R$. Evaluate $\lim_{n\to\infty}\int_a^bf(x)\sin(nx)\,dx$ Let $f:[a,b]\to\mathbb R$. Evaluate $\lim_{n\to\infty}\int_a^bf(x)\sin(nx)\,dx$. $f$ is continuously differentiable. I'm told this can be done using basic calculus. It's difficult for me to see where I should begin. I'd like some hints. AI: If the function $f$ is continuous the limit is $0$. Just notice that $f$ is uniformly continuous, and that on every small interval $[2k\pi/n,(2k+2)\pi/n]$ the function $\sin(nx)$ has integral $0$ while $f$ is close to a constant. So the integral will be close to $0$. In general consider the piecewise constant function obtained by replacing $f$ with its mean value on every such small interval... and prove that the integral of the difference (between $f$ and the piecewise constant function) goes to zero. In practice you want to find $f_n$ such that $$ \left\vert \int_a^b f(x) \sin(nx)\,dx\right\vert \le \left\vert \int_a^b (f(x)-f_n(x)) \sin(nx)\,dx\right\vert + \left\vert \int_a^b f_n(x) \sin(nx)\,dx\right\vert \le \int_a^b \lvert f(x)-f_n(x)\rvert\,dx + \left\vert \int_a^b f_n(x) \sin(nx)\,dx\right\vert < \varepsilon, $$ where the first term is small by uniform continuity and the second is small because $f_n$ is constant on (almost) full periods of $\sin(nx)$.
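Since the problem states that $f$ is continuously differentiable, there is also a shorter "basic calculus" route via integration by parts (a sketch, using only that $f'$ is continuous and hence bounded on $[a,b]$): $$\int_a^b f(x)\sin(nx)\,dx=\left[-\frac{f(x)\cos(nx)}{n}\right]_a^b+\frac{1}{n}\int_a^b f'(x)\cos(nx)\,dx,$$ so $$\left|\int_a^b f(x)\sin(nx)\,dx\right|\le\frac{|f(a)|+|f(b)|}{n}+\frac{1}{n}\int_a^b|f'(x)|\,dx\xrightarrow[n\to\infty]{}0.$$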
H: Does this variable have a hypergeometric distribution? David has 100 cards in his right pocket: 70 black, 20 blue and 10 white. Each time, David chooses one card at random and moves it to his left pocket. What is the distribution of the number of blue cards that are left in David's right pocket after $n$ times? 1. Hypergeometric 2. Geometric 3. Binomial 4. None of the above I saw the solutions, and it should be 1. Hypergeometric. But I don't understand how it can be hypergeometric, because let's say $n=2$; then if it has a hypergeometric distribution the value 0 should have probability $>0$. But here the only possible values are 20, 19, 18. Thanks. AI: We might as well assume that the $n$ cards are moved simultaneously. So we have $20$ blue and $80$ non-blue. We are choosing $n$ cards to be moved. Equivalently, we are (implicitly) choosing $100-n$ cards to stay in the right pocket. Let $m=100-n$. The probability that exactly $k$ of the $m$ "chosen" (by neglect) are blue is $$\frac{\binom{20}{k}\binom{80}{m-k}}{\binom{100}{m}}.\tag{1}$$ For there are $\binom{100}{m}$ equally likely ways to "choose" $m$ cards. We can choose $k$ blue cards in $\binom{20}{k}$ ways, and for each such way we can choose $m-k$ non-blue in $\binom{80}{m-k}$ ways. If $X$ is the number of blue cards that remain in the right pocket, then $\Pr(X=k)$ is given by Expression (1). This is a standard "hypergeometric" expression.
H: if $|D_n - \frac{n!}{e}| < \frac{1}{1+n}$, why is $D_n$ nearest integer? Given $D_n$ is the number of derangements (i.e. permutations s.t. $\pi(i) \neq i$ for all $i$ in $\{1,...,n\}$, where $D_n$ is defined to be: $\frac{D_n}{n!} = \sum_{0 \leq q \leq n} \frac{(-1)^q}{q!}$ Then, supposing this inequality is true: $$ | D_n - \frac{n!}{e} | < \frac{1}{n+1} $$ Why does this prove that $D_n$ is the nearest integer to $\frac{n!}{e}$? So, I am not having any trouble establishing the above inequality. Rather, I am trying to understand in simple language, why the expression $\frac{1}{n+1}$ would indicate that $D_n$ was the closes integer approximation to $\frac{n!}{e}$. For example, suppose I were able to find another integer, call it $K_n$, such that, $$ | K_n - \frac{n!}{e} | < \frac{1}{n+2} $$ why wouldn't that establish there is a closer integer? Or how is this a contradiction? Appreciate all the help, as always. AI: $D_n$ satifies $\left|D_n-n!/e\right|<1/(n+1)$. If another integer $K_n$ satisfies $\left|K_n-n!/e\right|<\alpha$, then the triangle inequality shows that $$ \left|D_n-K_n\right|< \frac{1}{n+1}+\alpha. $$ In particular if $K_n\neq D_n$ is an integer closer to $n!/e$ than $D_n$, that is if $\alpha<1/(n+1)$, then, using the fact that the distance between two distinct integers is at least $1$, you obtain the contradiction $$ 1\leq \left|D_n-K_n\right|< \frac{2}{n+1}\leq1,\quad n>1. $$
H: Finding $\sin^6 x+\cos^6 x$, what am I doing wrong here? I have $\sin 2x=\frac 23$ , and I'm supposed to express $\sin^6 x+\cos^6 x$ as $\frac ab$ where $a, b$ are co-prime positive integers. This is what I did: First, notice that $(\sin x +\cos x)^2=\sin^2 x+\cos^2 x+\sin 2x=1+ \frac 23=\frac53$ . Now, from what was given we have $\sin x=\frac{1}{3\cos x}$ and $\cos x=\frac{1}{3\sin x}$ . Next, $(\sin^2 x+\cos^2 x)^3=1=\sin^6 x+\cos^6 x+3\sin^2 x \cos x+3\cos^2 x \sin x$ . Now we substitute what we found above from the given: $\sin^6 x+\cos^6+\sin x +\cos x=1$ $\sin^6 x+\cos^6=1-(\sin x +\cos x)$ $\sin^6 x+\cos^6=1-\sqrt {\frac 53}$ Not only is this not positive, but this is not even a rational number. What did I do wrong? Thanks. AI: $(\sin^2 x + \cos^2 x)^3=\sin^6 x + \cos^6 x + 3\sin^2 x \cos^2 x$
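Carrying the hint through (a short completion, using $\sin^2 x\cos^2 x=\frac14\sin^2 2x$): $$\sin^6 x+\cos^6 x=1-3\sin^2 x\cos^2 x=1-\frac34\sin^2 2x=1-\frac34\cdot\frac49=\frac23,$$ so $\frac ab=\frac23$ with $a=2$, $b=3$ coprime.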
H: Finding $A^k$ for a large $k$ If I have a matrix A that I have found a matrix $P$ for such that $P^{-1}AP$ is now diagonal, is it possible to calculate $A^k$ for a large $k$? I assume it has something to do with the fact that $(P^{-1}AP)^k=P^{-1}A^kP$, but I'm not sure how to use it. AI: If $P^{-1}AP$ is diagonal, finding $(P^{-1}AP)^k$ is trivial since we have to simply raise each element on the diagonal to $k$ Now, $(P^{-1}AP)^k=P^{-1}A^kP$ as you say. Let $(P^{-1}AP)^k=M$ $$\therefore M=P^{-1}A^kP$$ $$\implies PMP^{-1}=A^k$$
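A small numerical sketch of this (Python/NumPy; the matrix and exponent are arbitrary choices): diagonalize once, raise the diagonal entries to the $k$-th power, and change basis back.

```python
import numpy as np

A = np.array([[2., 0.],
              [1., 3.]])
k = 20

eigvals, P = np.linalg.eig(A)        # A = P D P^{-1} with D = diag(eigvals)
D_k = np.diag(eigvals ** k)          # raising a diagonal matrix to k is elementwise
A_k = P @ D_k @ np.linalg.inv(P)

print(np.allclose(A_k, np.linalg.matrix_power(A, k)))  # True
```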
H: How to find the height, given the volume of a cylinder and cone (conjoined together)? Here is a picture for a diagram: If water fills this "vase" up to half its capacity (NOT half its height), what will the height of the water be, starting from the bottom? And if you could explain the steps leading up the answer, that would be great. Thanks! EDIT: Sorry, the diagram didn't have the volume, here it is: Volume of cylinder: 1398cm cubed. Volume of Cone: 162.32cm cubed. Total volume: 1560.32cm cubed. AI: Assuming that "halfway point" means "half of the volume of the object is filled", here are the steps: Find the volume of the object. Find the volume of the cone. Find the volume of the cylinder. Add those numbers together. Divide the volume of the object by 2 to obtain the volume of water that will be added. Add the water to the object and see how high it goes. The volume of the water will be greater than the volume of the conical part of the object, so subtract the volume of the cone from the volume of the water to obtain the volume of water that reaches the cylindrical part. The volume of the water reaching the cylindrical part will form a cylinder of the same radius as the cylindrical part, but with an unknown height. Solve for this height, knowing the volume and radius. Add the height of the cone.
H: Growth of y with respect to time based on some given assumptions I would be thankful if someone can help me with the following problem: Given: $\frac{\dot{N}(t)}{N(t)} = c_b - \frac{c_d}{y(t)}$ $\frac{\dot{A}(t)}{A(t)} = N(t)g - \delta $ $y(t) = A(t)\cdot(\frac{T}{N(t)})^{1-\alpha}$ $T$ is fixed. Question: How is possible to get $\frac{\dot{y}(t)}{y(t)}$ as: $\frac{\dot{y}(t)}{y(t)} = \frac{\dot{A}(t)}{A(t)} - (1-\alpha)\frac{\dot{N}(t)}{N(t)}$ Edit* I am trying the following: $y(t) = A(t)\cdot(\frac{T}{N(t)})^{1-\alpha} \implies$ $\frac{\dot{y}(t)}{y(t)} = \frac{\dot{A}(t)}{A(t)} \cdot (\frac{T}{\frac{\dot{N}(t)}{N(t)}})^{1-\alpha}$, then I take the log: $e^{log\Big(\frac{\dot{A}(t)}{A(t)} \cdot (\frac{T}{\frac{\dot{N}(t)}{N(t)}})^{1-\alpha}\Big)}$ $e^{log\Big(\frac{\dot{A}(t)}{A(t)}\Big) + log\Big((\frac{T}{\frac{\dot{N}(t)}{N(t)}})^{1-\alpha}\Big)}$ $e^{log\Big(\frac{\dot{A}(t)}{A(t)}\Big) + (1-\alpha)log\Big((\frac{T}{\frac{\dot{N}(t)}{N(t)}})\Big)}$ $e^{log\Big(\frac{\dot{A}(t)}{A(t)}\Big) + (1-\alpha)\Big[log(T) - log({\frac{\dot{N}(t)}{N(t)}})\Big]}$ Now, does this come to: $\cancel{e}^{\cancel{log}\Big(\frac{\dot{A}(t)}{A(t)}\Big) + (1-\alpha)\Big[\cancel{log}(T) - \cancel{log}({\frac{\dot{N}(t)}{N(t)}})\Big]}$ so I have: $\frac{\dot{A}(t)}{A(t)} + (1-\alpha)T - (1-\alpha)\frac{\dot{N}(t)}{N(t)}$ ?? I tried substituting, but I got no where - any suggestions are appreciated. Thanks AI: You only need the third equation (definition of $y(t)$). Taking $\log$ $$ \log y(t)=\log A(t)+(1-\alpha)\log(T/N(t))=\\ \log y(t)=\log A(t)-(1-\alpha)\log(N(t)/T) $$ then derive wrt $t$.
H: Determining dimension given a parameter $a$ If I have a homgenous matrix with one of the entries being $a$ and I need to determine which values of $a$ will give the matrix a space of solutions that has dimension $1$ (or dimension 2), how would I go about doing that? For example ( just making this up from the top of my head): $\begin{pmatrix} 1 &2&3&0\\4&5&6&0\\3&2&a&0 \end{pmatrix}$ I know I have to bring the matrix to RREF first, and I know which $a$ to choose to get infinitely many solutions or no solutions, but I'm not sure if there's a relation with dimension. Also my knowledge on linear algebra is just from an intro course on it. AI: Recall that the rank and the nullity of a matrix add up to the number of columns of the matrix (rank-nullity theorem). The nullity the dimension of the solution space. So, you can control the nullity by changing $a$ to adjust the rank of the matrix. The rank of the matrix in your case is at most 3, since it is 3x4. The first two rows are linearly independent, so the rank is at least 2. This means that $a$ controls whether the rank of the matrix is 2 or 3, and therefore if the nullity is 2 or 1. If you choose $a$ such that its row is linearly independent of the above rows, the nullity will be 1. If it is linearly-dependent, the nullity will be 2. Note however, that it is not guaranteed that you can choose $a$ to specify the dimension you want, because this is also controlled by the other numbers in the row. For example, you can clearly see here that the rank will always be 3: $ \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & a \end{pmatrix} $
H: All 1-tensors are alternating This statement from page 155 of Guillemin and Pollack's Differential Topology. I would assume because 1-tensors can not alternate because they have nothing to alternate with, so they are alternating...? AI: The statement is vacuously true: If $\sigma \in S_1$ is a permutation, then we have $$T(v_{\sigma(1)}) = (-1)^{\sigma} T(v_1)$$ for any $1$-tensor $T$ and any $v_1$. This is because $S_1$ only has the trivial permutation, which is even, so that $(-1)^\sigma = 1$. Note that this is just a rigorous way to say your assumed solution. "Nothing to alternate with" is captured by the fact that $S_1 = \{\mathrm{Id}\}$.
H: Linear Independence by row space. Let $(1,1,-1), (2,1,0)$ and $(-1,0,1)$ be vectors, show if they are independent. I wrote each vector on the rows of the matrix $A$. $A=\begin{pmatrix} 1 & 1 & -1 \\ 2 & 1 & 0\\ -1 & 0 & 1 \end{pmatrix}$ Then I put $A$ on an echelon form. $A=\begin{pmatrix} 1 & 1 & -1 \\ 2 & 1 & 0\\ -1 & 0 & 1 \end{pmatrix}\overset{L2-2L1} {\rightarrow}\begin{pmatrix} 1 & 1 & -1 \\ 0 & -1 & 2\\ -1 & 0 & 1 \end{pmatrix}\overset{L3+L1} {\rightarrow}\begin{pmatrix} 1 & 1 & -1 \\ 0 & -1 & 2\\ 0 & 1 & 0 \end{pmatrix}\overset{L3+L2}{\rightarrow}\begin{pmatrix} 1 & 1 & -1 \\ 0 & -1 & 2\\ 0 & 0 & 2 \end{pmatrix}$ So, $r(A)=3$. If $\space r(A)=3$ that is equal to the number of rows$(n)$ of $A$, then I can conclued that the $3$ vectors are independent. If $r(A)<n$ , then the set of the $3$ vectors would be dependent. Is this right? Thanks. AI: Edit: The row operations you performed and the row echelon matrix are indeed correct. You're correct that since $r = 3 = n$ and hence, the vectors are linearly independent. Note, since the echelon form of the matrix is now in upper triangular form, we can "read off" the determinant of the matrix in echelon form, which is the same as the determinant of the matrix we started with: why? (Because the only elementary row operations used here are those that did not change the value of the determinant of the original matrix.) $$\det(A) = \left|\begin{matrix} 1 & 1 & -1 \\ 2 & 1 & 0\\ -1 & 0 & 1 \end{matrix}\right| = \left|\begin{matrix} 1 & 1 & -1 \\ 0 & -1 & 2\\ 0 & 0 & 2 \end{matrix}\right| = (1)(-1)(2) = -2 \neq 0$$ Since the $\det A \neq 0$, we know the rows (and columns) of $A$ are linearly independent. And you are correct that when $r\lt n$, the vectors represented by the rows are then linearly dependent.
H: Definitions of connected space I have seen several definitions of connected space, but I would like to discuss those from Wikipedia. I am concerned about these: $X$ is disconnected, if it is the union of two disjoint nonempty open sets. What about $[0,1] \cup [2,3] $ ? It is not the union of two disjoint nonempty open sets, so it is connected? $X$ is connected, when it cannot be divided into two disjoint nonempty closed sets. What about $(0,1) \cup (2,3) $ ? It cannot be divided into two disjoint nonempty closed sets, so it is connected? Maybe I don't understand what "divided" means. AI: The problem is not with your understanding of divided, but rather with your understanding of closed. In the space $X=(0,1)\cup(2,3)$, the sets $(0,1)$ and $(2,3)$ are closed. This is because the topology $\tau$ on $X$ is the subspace (or relative) topology inherited from $\Bbb R$. A subset $U$ of $X$ is open in $X$ if and only if there is a $V\subseteq\Bbb R$ such that $V$ is open in $\Bbb R$ and $V\cap X=U$. Of course $(0,1)$ is open in $\Bbb R$, and $(0,1)\cap X=(0,1)$, so $(0,1)$ is open in $X$. By the definition of closed set this means that $X\setminus(0,1)$ is closed in $X$. And $X\setminus(0,1)=(2,3)$, so $(2,3)$ is closed in $X$. A similar argument shows that $(0,1)$ is also closed in $X$. Indeed, both of these sets are clopen (closed and open) as subsets of $X$, even though they are only open as subsets of $\Bbb R$. Openness and closedness depend not just on the set, but on the space in which it is considered. You have the same problem with your first example: the sets $[0,1]$ and $[2,3]$ are clopen in the subspace $Y=[0,1]\cup[2,3]$ of $\Bbb R$, not just closed. For example, $[0,1]=\left(-\frac12,\frac32\right)\cap Y$, and $\left(-\frac12,\frac32\right)$ is open in $\Bbb R$, so $[0,1]$ is open in $Y$.
H: Inverse of tangent map Let $Tf$ be the tangent map of a one to one map $f$ and let $g$ be the function: $$g(x,v)=(f(x),Tf(x).v)$$ $g$ is one-to-one, but what is the expression of its inverse ? Thanks ! AI: In general, if $f$ is one-to-one then $Tf$ is not necessarily one-to-one. For example, if $f(x) = x^3$, then $$Tf(x, v) = (x^3, 3x^2v),$$ which is certainly not one-to-one. However, if $Df_p$ is one-to-one for all $p$ and $f$ is one-to-one, we can easily define the inverse: $$(Tf)^{-1}(y, w) = (f^{-1}(y), (Df_{f^{-1}(y)})^{-1}(w)).$$ One-to-one maps $f$ such that $Df_p$ is injective for all $p$ are called injective immersions.
H: Moving only to the right and down, how many different pathways exist to get from A to Point B The answer I think is 10 following the rules in my textbook am I correct? AI:
H: Counting letters Arrange the letters IMAGINATORIUM where a 3 I's are separate and NAT appears as a subsequence. My thinking: Start by arranging the NAT. There are three spots in the string, so we need three letters. There is one N in the word, so to chose which N and which spot 1C1. Then for the A, there are 3 A's and 1 spot, so 3 C 1, and then the same case for the T, so 1C1. All of those multiply together and we get three possible ways to make NAT. Then we are going to build figure out what number we are at now, so remove the 3 I's, remove the 3 characters that go into NAT, and then add one character "NAT" because that string can go anywhere in the end result. So we get something like $ {(13 - 3 + 1 - 3) ! } \over {2! \cdot 2! \cdot 1! ... } $ = $ 8! \over 2! \cdot 2!$ Now, there are 9 possible spots for the three I's to go, so multiply the result by 9 C 3 to get the answer. While this makes sense to me, I usually get these questions wrong, so let me know where my logic breaks please. Thank you! AI: We have a total of $13$ real letters. However, it is useful to tie NAT together and think of it as a new "letter." So there are $11$ "letters." I count $3$ I's and $2$ M's and the rest distinct. First deal with the non-I's. There are $8$ of these, which can be arranged in $8!$ ways, if we paint the two M's different colours. But if we want to think of the two M's as identical, we only have $\frac{8!}{2!}$ words. Now we need to insert the $I$'s. Given an $8$-letter word, that determines $7+2$ "gaps." Of these, $7$ are real gaps between letters, and $2$ are the "end gaps." We need to choose $3$ of these to insert $I$ into. That gives a total count of $$\binom{9}{3}\frac{8!}{2!}.$$ Remark: The above is essentially your analysis, except that you have an additional division by $2$ that I think is not correct. Once we have made a new letter out of NAT, there is only one free A left. To see this, let's solve the same problem with the much simpler base word NAAT. There would be $2$ words that met our condition, ANAT and NATA. Your procedure would yield $\frac{2!}{2!}=1$.
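Numerically, this count is $\binom{9}{3}\cdot\frac{8!}{2!}=84\cdot 20160=1{,}693{,}440$.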
H: Solving $A^TAx=A^Tb$ without using $A^TA$ or its inverse Given $A=\pmatrix{2&1\\ 2&3\\ -2&1}$ and $b=\pmatrix{1\\ 1\\ 1}$, solve $A^TAx = A^Tb$ without calculating $A^TA$ or its inverse. I solved a previous problem in this set where I calculated the QR factorization of $A$ using Gram-Schmidt orthogonalization. I used Kahn Academy's approach and found $u_1$ and $u_2$ for $A$ (orthonormal bases?). In order not to calculate $A^TA$ or its inverse, I have from the properties of orthogonal bases and Gram-Schmidt that if $Q$ has orthonormal columns and $Q$ is $m\times n$ where $m\ne n$ then: \begin{align*} Qx &= b ,\\ Q^TQx &= Q^Tb,\\ x &= Q^Tb. \end{align*} My $Q=(u_1, u_2)$, therefore, can I perform $Q^Tb$ and get my $x$? (Seems almost too simple) As always, any help is greatly appreciated. :) AI: If you substitute in $QR$ for $A$ on the lefthand side, you get \begin{align} (QR)^TQRx&=A^Tb \\ R^TRx=A^Tb \end{align} This is not so bad to solve - just compute $y:=A^Tb$, then backsolve twice to find $x$ in $R^TRx=y$. The steps are: $y=A^Tb$. Matrix-vector multiply to get intermediary result $y$. $R^Tz=y$. Triangular backsolve to get intermediary result $z$. $Rx=z$. Triangular backsolve to get $x$.
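Here is a sketch of those three steps in Python (using SciPy's QR factorization and triangular solver; not the only way to organize the computation):

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

A = np.array([[2., 1.],
              [2., 3.],
              [-2., 1.]])
b = np.array([1., 1., 1.])

Q, R = qr(A, mode='economic')                # A = QR with R a 2x2 upper triangular matrix

y = A.T @ b                                  # step 1: y = A^T b
z = solve_triangular(R.T, y, lower=True)     # step 2: solve R^T z = y
x = solve_triangular(R, z)                   # step 3: solve R x = z

print(x)                                     # approximately [-0.0833, 0.5]
print(np.linalg.lstsq(A, b, rcond=None)[0])  # same least-squares solution
```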
H: Points in $\mathbb{R}^3$ are coplanar and Determining Area of Triangle in $\mathbb{R}^3$ I came across this question in my textbook while working on some problems: Find the criterion for four points in $\mathbb{R}^3$ to be coplanar; then find the formula involving the cross product for the area of a triangle with vertices given at three given points. My work/ thoughts: I'm not sure about the first part of the question, but I know for the second part of the question that the formula they're probably talking about is that $\frac{1}{2}(\vec{a} \times \vec{b})$. I know that formula would give me half of the area of a parallelogram in $\mathbb{R}^2$, but this question asks for the vertices at $\mathbb{R}^3$, so I don't know if that formula still applies or not. Also to get the three edges of the triangle, I'd just use the distance formula to determine their lengths (if need be). I'm not sure what to do,as typically when there are three vectors in $\mathbb{R}^3$, I usually take the triple scalar product, but that would give me the volume, not the area. So any help would be greatly appreciated! Thanks in advance. Edit: Just realized that the formula I talked about would work if I figured out if the three vectors were coplanar. (Right?) I can use the triple scalar product and if it equals 0, then it is indeed coplanar. AI: Let $x_1,...,x_4$ be the four points. Compute $\alpha = \det \begin{bmatrix} x_1-x_4 & x_2-x_4 & x_3-x_4 \end{bmatrix}$. If $\alpha = 0$ they are coplanar. There is nothing special about $x_4$, any other 'origin' point would suffice (from a theoretical perspective). This is not a numerically stable way, but is fine theoretically. Note: Since $\langle a, b \times c \rangle = \det \begin{bmatrix} a & b & c \end{bmatrix}$, it should be clear that $\alpha = \langle x_1-x_4, (x_2-x_4) \times (x_3-x_4) \rangle$.
H: Conditional compactness iff totally bounded We say that a metric space $M$ is totally bounded if for every $\epsilon>0$, there exist $x_1,\ldots,x_n\in M$ such that $M=B_\epsilon(x_1)\cup\ldots\cup B_\epsilon(x_n)$. A metric space in which every sequence has a Cauchy subsequence is said to be conditionally compact. Prove that a metric space $M$ is conditionally compact if and only if $M$ is totally bounded. So suppose $M$ is totally bounded, and I am given a sequence $y_1,y_2,\ldots$ in $M$. I want to find a Cauchy subsequence. For any $\epsilon$, I can find $x_1,\ldots,x_n\in M$ such that $M=B_\epsilon(x_1)\cup\ldots\cup B_\epsilon(x_n)$. So some ball contains infinitely many points $y_i$. Those points are within $2\epsilon$ of each other. But this still doesn't give me a Cauchy subsequence, because as $\epsilon$ decreases, the balls can change... AI: HINT: Let $\langle x_k:k\in\Bbb N\rangle$ be any sequence in $M$, and suppose that $n\in\Bbb Z^+$. There is a finite $F\subseteq M$ such that $M=\bigcup_{x\in F}B\left(x,\frac1n\right)$, so there is a $y_n\in M$ such that $\left\{k\in\Bbb N:x_k\in B\left(y_n,\frac1{2n}\right)\right\}$ is infinite. Use this observation, which you’ve already made, in the following way. Start with any sequence $\sigma_0=\langle x_k:k\in\Bbb N\rangle$. There are a $y_1\in M$ and an infinite $A_1\subseteq\Bbb N$ such that $d(x_k,y_1)<\frac12$ for each $k\in A_1$. Let $\ell_1=\min A_1$, let $A_1'=A_1\setminus\{\ell_1\}$, and let $\sigma_1$ be the subsequence $\langle x_k:k\in A_1'\rangle$. Note that $d(x_i,x_j)<1$ for all $i,j\in A_1$. Now repeat the process with $\sigma_1$: there are a $y_2\in M$ and an infinite $A_2\subseteq A_1'$ such that $d(x_k,y_2)<\frac14$ for each $k\in A_2$. Let $\ell_2=\min A_2$, let $A_2'=A_2\setminus\{\ell_2\}$, and let $\sigma_2$ be the subsequence $\langle x_k:k\in A_2'\rangle$ of $\sigma_1$. Note that $\ell_2>\ell_1$ and $d(x_i,x_j)<\frac12$ for all $i,j\in A_2$. You should now be able to repeat the construction of the previous paragraph in general, going from a sequence $\sigma_i=\langle x_k:k\in A_i'\rangle$ to a subsequence $\sigma_{i+1}=\langle x_k:k\in A_{i+1}'\rangle$. Along the way you’ll construct a strictly increasing sequence $\langle\ell_i:i\in\Bbb Z^+\rangle$ of natural numbers $\ell_i=\min A_i$. Show that $\langle x_{\ell_i}:i\in\Bbb Z^+\rangle$ is a Cauchy subsequence of $\sigma_0$. This is an example of a fairly common kind of recursive construction, starting with a sequence, producing a sequence of subsequences that have nicer and nicer properties, and then ‘diagonalizing’ to get a single subsequence that combines the nice properties of the original family of subsequences. And I see that while I was typing, PJ Miller has given a fine argument for the other direction.
H: A triangular "spot function" z = (cos πx + cos πy) represents the classical "spot function", made by square cells, used in every laser printer's halftone screening. Does anyone knows the corresponding function to produce TRIANGULAR cells instead of squared ones? AI: That is, you want a function such that the contour $z=0$ is a triangle (or triangular lattice)? I suggest $$ z(x,y)=\sin\left(\tfrac{2 y}{\sqrt 3}\right)\cdot \sin\left(x+\tfrac y{\sqrt 3}\right)\cdot \sin\left(x-\tfrac y{\sqrt 3}\right)$$
H: Manifold being locally euclidean vesus Manifold being locally homeomorphic to an open set in $R^n$. I was reading the definition of smooth manifold and i am little bit of confused. Informally it says A smooth manifold is a topological manifold (i.e. a topological space locally homeomorphic to a Euclidean space) equipped with an equivalence class of atlases whose transition maps are all smooth. Here transition maps are from Euclidean space to Euclidean space. Now what if I replace the word Euclidean space by topological vector space $\mathbb{R}^n$. Still we will get some object as we can talk about smoothness in topological vector space $\mathbb{R}^n$ without using any reference of coordinate system. So how much difference is there if I do the replacement? And my other question is What are the properties of Euclidean space, we use to study Smooth manifolds, which are not present in $\mathbb{R}^n$ just as a topological vector space. Till now what I have found that to describe local coordinates in manifold or to describe basis in tangent space one need a coordinate system in $\mathbb{R}^n$ so that you can pull it back to the manifold to define local coordinate there. Apart from these, where do we use the properties of Euclidean space to study smooth manifold? AI: Euclidean spaces are, by definition, $\mathbb{R}^{n}$. See if this clears things up for you. If not, come back and I'll try to help you some more. Also, as the previous comment noted, any open set homeomorphic to an open ball will also be homeomorphic to all of $\mathbb{R}^{n}$ since open balls are homeomorphic to $\mathbb{R}^{n}$. Hence, in the definition of locally Euclidean, it does not matter if we a priori decided that our spaces should be locally homeomorphic to all of $\mathbb{R}^{n}$, an open ball, or an open set.
H: Limit involving power tower: $\lim\limits_{n\to\infty} \frac{n+1}n^{\frac n{n-1}^\cdots}$ What is the value of the following limit? $$\large \lim_{n \to \infty} \left(\frac{n+1}{n}\right)^{\frac{n}{n-1}^{\frac{n-1}{n-2}^{...}}}$$ In general what do limits of infinite decreasing numbers strung together in familiar ways approach? AI: The sequence is given by: $$a_1 = 2, a_n = \left(\frac{n+1}n\right)^{a_{n-1}}$$ Taking logs, we obtain: $$\log a_n = a_{n-1} \log\left(1+\frac1n\right)$$ and it is easy to show that $\dfrac1{2n} \le \log\left(1+\frac1n\right) \le \dfrac1n$. Now if we can show that $a_n$ is bounded, we are done by the Squeeze theorem (since then $\lim\limits_{n\to\infty} \log a_n = 0$, hence $\lim\limits_{n\to\infty} a_n = 1$). Obviously, $a_n \ge 0$ for all $n$. We prove inductively that $a_n \le e$. The basis is trivial: $a_1 = 2 \le e$. Suppose $a_{n-1} \le e$. By the above estimate, $\log a_n \le \dfrac en \le 1$ for $n \ge 3$. It remains to show that $a_2 \le e$: $$a_2 = \left(\dfrac32\right)^2 = \dfrac94 \le e$$ In conclusion: $$\lim_{n\to\infty} a_n = \lim_{n \to \infty} {\large\frac{n+1}{n}^{\frac{n}{n-1}^{\frac{n-1}{n-2}^{...}}}} = 1$$
H: Related rates shadow question A $1.8$ meter man walks away from a $5$ meter lamp post at $1.2$ meters per second; how fast is his shadow increasing? I have no idea how to do this, it feels like there is missing information. I know that this is a problem about triangles but there is some weird trick that has to be used since only two heights are known which are both the same part of a triangle. AI: After $t$ seconds, the man has traveled $1.2t$ meters from the lamp. Let $\mathrm{shadow}(t)$ denote the length of the man's shadow after $t$ seconds. Note that triangles $\triangle ABC$ and $\triangle CDE$ are similar. Therefore the ratios between corresponding sides must be equal: $$\frac{BC}{AB}=\frac{DE}{CD},$$ where $AB=5-1.8=3.2$ m is the height of the lamp above the man's head, $BC=1.2t$ is the man's distance from the lamp post, $CD=1.8$ m is the man's height, and $DE$ is the shadow. This tells us $$\frac{1.2t}{3.2}=\frac{\mathrm{shadow}(t)}{1.8}$$ so that $$\mathrm{shadow}(t)=\frac{1.2t}{3.2}\times 1.8=\frac{2.16 t}{3.2}=\frac{27t}{40}.$$ Therefore $$\frac{d\,\mathrm{shadow}(t)}{dt}=\frac{d}{dt}\left(\frac{27t}{40}\right)=\frac{27}{40}\,\text{m/s}$$
H: Split by percentage I need to give $n$ apples to three persons $A$, $B$ and $C$. $A$ should get $50\%$, $B$ should get $30\%$, and $C$ should get $20\%$ of however much ever I give. For example, if I have given $10$ apples, $A$ should have $5$, $B$ should have $3$, and $C$ should have $2$. Apple A B C 1st 1 2nd 1 3rd 1 4th 1 5th 1 6th 1 7th 1 8th 1 9th 1 10th 1 What would be the formula to split $n^{th}$ apple by percentage? AI: It looks like you want to give the $n^{\text{th}}$ apple to whoever is due most of it. You look at then number you have given out, the number each has received of the first $n-1$, the exact number they are due out of $n$ and give it to whoever it gets closest. In your example, however, it seems the fourth apple should go to $C$. $A$ should have two of the first four, and does. $B$ should have $1.2$ and has $1$. $C$ should have $0.8$ and has none. So I would think you give $A \ \ 1,3,5,7,9;B\ \ 2,6,10;C\ \ 4,8$ The procedure I am following is as follows. Let $a$ be the fraction due $A$, $b$ the fraction due $B$ and $c$ the fraction due $C$. Let $aa(i)$ be the number of apples that $A$ has out of the first $i$, similar for $bb(i), cc(i)$. To give out apple $n$, we would want $A$ to have $an$, $B$ to have $bn$, and $C$ to have $cn$. Calculate $an-aa(n-1), bn-bb(n-1), cn-cc(n-1)$ and award the $n^{\text{th}}$ apple to whoever has the greatest value. In your example the pattern will repeat every $10$ apples. Example: $A$ gets $0.47$, $B$ get $0.31$, $C$ gets $0.22$ The second to fourth columns give the number each person is due out of $n$ apples. The next three give how many they got out of the first $n-1$. The nth apple is awarded to whoever is most short. $$\begin {array} { r|r|r|r|r|r|r|c}n&A&B&C&aa&bb&cc&\text{apple to} \\ \hline 1&.47&.31&.22&0&0&0&A\\ 2&.94&.62&.44&1&0&0&B\\3&1.41&.93&.66&1&1&0&C\\4&1.88&1.24&.88&1&1&1&A\\5&2.35&1.55&1.10&2&1&1&B\\6&2.82&1.88&1.32&2&2&1&A\\ 7&3.29&2.17&1.54&3&2&1&C\\8&3.76&2.48&1.76&3&2&2&A\\9&4.23&2.79&1.98&4&2&2&B\end{array} $$ If you are presented with a pile of apples, you can give each person the integer part of what they are due, then deal with the ones left. So here, if you got eight apples at once, you would give $3$ to $A$, $2$ to $B$, $1$ to $C$ and then run this calculation for the last two.
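The procedure in the answer is easy to automate; here is a sketch in Python (the function name and tie-breaking rule are my own choices — ties go to the earlier person):

```python
def allocate(n_apples, shares):
    """Hand out apples one at a time; apple i goes to whoever is
    furthest below their exact entitlement share * i so far."""
    counts = [0] * len(shares)
    for i in range(1, n_apples + 1):
        shortfalls = [s * i - got for s, got in zip(shares, counts)]
        counts[shortfalls.index(max(shortfalls))] += 1
    return counts

print(allocate(10, [0.5, 0.3, 0.2]))    # [5, 3, 2]
print(allocate(9, [0.47, 0.31, 0.22]))  # [4, 3, 2], matching the table above
```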
H: What's the correct name in english for "Analysis in $\Bbb R^n$"? Well, this question may seem silly and I fear it's even out of topic here. My motivation to ask that is to know the correct terminology when talking about that here in Math.SE. The point is, here in Brazil the topics covered in Spivak's Calculus on Manifolds and Munkres Analysis on Manifolds is called "Analysis in $\Bbb R^n$", but it seems that in english people just call this "analysis on manifolds". I feel that there should be another name, because "manifolds" is much more general than $\Bbb R^n$. So, how do people call in english the set of topics covered in those books? Analysis in $\Bbb R^n$, analysis on manifolds, multivariable analysis or what? Thanks very much in advance! AI: Contexts vary, naturally, but in the U.S. a common convention is that "real analysis" refers, somewhat perversely, to measure theory... sometimes with an emphasis on $\mathbb R^n$ or limitation to that case, but rarely exclusively so. One could suggest "analysis on Euclidean spaces", which makes sense in many ways, but this doesn't exclude products of $\mathbb R^n$'s and ("flat") multi-tori. In fact, it's not at all a bad thing to include tori and multi-tori, offering the option to look at Fourier series in contrast to Fourier transforms, which are less often considered in "real analysis" courses, because the difficulties are best met with ideas that fit even less well into a "measure theory" context than the Hilbert-space and Banach-space ideas relevant to basic Fourier series. In the end, although some people will misunderstand, and it's actually slightly broader, I think "analysis on Euclidean spaces" is the best reference to analysis on $\mathbb R^n$.
H: Use Euler's method to approximate $\int^2_0 e^{-u^2}du$ We learned Euler's method today there is one hw problem totally stunned my hat off. It says: Use Euler's method to approximate $\int^2_0 e^{-u^2}du$. I know Euler's method is $y_{n+1} = y_n + hf(t_n,y_n)$, but this is for $y'$ right? somehow I can compute an integral? This is supposed to be done using computer, I know how to use Euler's method to solve equation like $y' = 2y - 3e^{-t}$. Thanks for viewing! AI: Let $y(t) = \int_0^t e^{-u^2} du$. By the fundamental theorem of calculus, $y$ satisfies the differential equation $y' = e^{-t^2}$. You are trying to approximate $y(2)$. So it is the sort of problem you know how to solve.
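A minimal sketch of that computation in Python (the step size is an arbitrary choice):

```python
import numpy as np

# y' = exp(-t^2), y(0) = 0, so y(2) approximates the integral of exp(-u^2) over [0, 2]
n_steps = 2000
h = 2.0 / n_steps
y = 0.0
for i in range(n_steps):
    t = i * h
    y += h * np.exp(-t**2)   # Euler step: y_{n+1} = y_n + h * f(t_n, y_n)

print(y)   # about 0.8826 with this step size; the exact value is (sqrt(pi)/2)*erf(2) = 0.88208...
```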
H: Probability - how to satisfy an order with given probability? The probability that randomly chosen shirt from a current production can be qualified as a premium sort shirt is $p = 0.8$. Find the probability that out of $n = 100$ shirts we will have at least $85$ premium sort shirts. How many shirts we have to produce to satisfy an order for $400$ premium sort shirts with probability at least $0.99$? [edit] First part of the question : $$p = 0.8 \\ n = 100 \\ E(X) = p\,n = 80. \\ V(X) = p\,n\,(1-p) = 16 \\ P(X\geq 85)= 1 - P(X\leq 84) = 1 - 0.8708 = 0.1292 $$ AI: So the second part is different from the first part in that instead of computing a probability, we begin with a probability and proceed backwards. First of all, we need to find a number $z$ so that $\text{phi}(z)=(1-0.99)=0.01$. You should get $z=-2.33$, or something close. Now, let $n$ be the number of shirts that will be "enough" under the constraints of the second problem. We know that $n$ has to satisfy $$ n p + z\sqrt{np(1-p)}\geq 400 $$ Changing this to an equality, we rearrange this to get $$ z\sqrt{np(1-p)}= 400 - n p\\ z^2np(1-p)= 160000-800np + n^2 p^2\\ (p^2)n^2-((800+z^2(1-p))p)n + 160000=0 $$ This is a quadratic equation you can solve for $n$. Round the answer up to get your solution.
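To finish numerically (taking $z=-2.33$ and $p=0.8$, as in the first part): the quadratic becomes $0.64\,n^2-640.87\,n+160000=0$, whose roots are about $474.6$ and $526.7$; the smaller root is extraneous (it comes from the squaring step and has $np<400$), so under the normal approximation about $n=527$ shirts should be produced.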
H: total differential of $f+g$, $fg$ and $\frac fg$ Let $f,g:\mathbb R^n\rightarrow\mathbb R$ be differentiable. We had in lectures that $f+g,fg,\frac fg$ are differentiable too. As an exercise I want to prove this. $f$ is differentiable in $x\in\mathbb R^n$ $\Leftrightarrow$ there is a lineare function $A$ such that $\lim_{h\rightarrow0}\frac{f(x+h)-f(x)-A(h)}{\|h\|}=0$ So for $fg$ I get\begin{align*} &\lim_{h\rightarrow0}\frac{(fg)(x+h)-(fg)(x)-A(h)}{\|h\|}\\& =\lim_{h\rightarrow0}\frac{f(x+h)g(x+h)-f(x)g(x)-A(h)}{\|h\|} \end{align*}Now I am really stuck. How can you show with above that $fg$ is totally differentiable in $x$? Solution: \begin{align} &\lim_{h\rightarrow0}\frac{f(x+h)g(x+h)-f(x)g(x)-(gDf+fDg)(x)h}{\|h\|}\\ &=\lim_{h\rightarrow0}\frac{f(x+h)g(x+h)-f(x+h)g(x)-f(x+h)Dg(x)h}{\|h\|}\\&+\frac{f(x+h)g(x)-f(x)g(x)-g(x)Df(x)h+f(x+h)Dg(x)h}{\|h\|}\\&-\frac{f(x)Dg(x)h}{\|h\|}\\ &=\lim_{h\rightarrow0}\frac{f(x+h)(g(x+h)-g(x)-Dg(x)h)}{\|h\|}+g(x)\frac{f(x+h)-f(x)-Df(x)h}{\|h\|}+\frac{(f(x+h)-f(x))Dg(x)h}{\|h\|} \end{align} So the first term is $fDg$ since $f$ is continuous and differentiable. The second term is $gDf$ and the last one is 0 since $f$ is continuous and $Dg$ linear and so bounded. So the limit equals 0 and $fg$ is differentiable with derivative $gDf+fDg$. AI: Hint: $$f(a)g(a)-f(b)g(b)=f(a)g(a)-f(a)g(b)+f(a)g(b)-f(b)g(b)$$ Does this help you at all?
H: wedge product and determinant I don't really know what $[\phi_i(v_j)]$ really is. As far as I understand, $\phi_i$ is a linear transformation - a matrix; and $v_j$ is the column vector it eats. So $[\phi_i(v_j)]$ spit out column vectors, rather than $k \times k$ real matrix as the problem stated. Again, thanks for your help~ AI: The dual space $V^\star$ contains linear maps into the reals $\mathbb R$ and so for each $i,j$ we obtain a real number. Them thinking of $i,j$ being indices in a table, this defines a matrix.
H: Need Help With These 2 Problems find k*x)=f(og) Need someone to check If I'am write I think I have the write solution. First question: Given: $g(x)=2x^2+5$ and $f(x) = x^2+4x$, find $k(x) = (f\circ g)(\sqrt3)$. My answer: $k(x)=119+48\sqrt3$ Second question: domain for $k(x)$ if: $d(x)=\sqrt{x-2}$ and $g(x) = 3\sqrt{4-x}$ and $k(x) = (d\circ g)(x)$ My answer: domain = $\{x\mid 2\le x\le 4, x\in\Bbb R\}$ [ OP's original images, which I have translated into MathJax: 1 2 3 4 —Editor ] AI: We have $$k(t)=(f\circ g)(t)=f(g(t)).$$ More informally, first do (apply) $g$ to $t$, then do $f$ to the result. Since $g(x)=2x^2+5$, we have $g(\sqrt{3})=11$. And now $f(11)=121+44$. So $k(\sqrt{3})=165$. Added: Now that the second question has been LaTeX'ed (thank you, MJD), it can be attacked. We want the domain of $d\circ g$. To evaluate $(d\circ g)(x)$, first we find $g(x)$, and then apply $d$ to the result. The function $g$ is only defined if $x\le 4$. And the function $d$ is only defined for inputs that are $\ge 2$. Recall that what gets fed into the function $d$ is $g(x)$, that is, $3\sqrt{4-x}$. So we need $x\le 4$ and $3\sqrt{4-x}\ge 2$. For $x\le 4$, the condition $3\sqrt{4-x}\ge 2$ is equivalent to $9(4-x)\ge 4$. (We squared both sides.) This can be rearranged as $9x\le 32$, and then as $x\le \frac{32}{9}$. So for the function $k(x)$ to be defined, we want $x\le 4$ and $x\le \frac{32}{9}$. The condition $x\le 4$ has become superfluous, since it is included in the condition $x\le \frac{32}{9}$. So $k(x)$ is defined for all real numbers $x$ such that $x\le \frac{32}{9}$. Remark: Another version of the second question has $(dgg)(x)$, where presumably ordinary product is meant, since there is no little circle $\circ$. If that is the intended function, then indeed the domain is the set of all $x$ such that $2\le x\le 4$.
H: Critical number $y = \frac{1}{x^2 + 2}$ Seems pretty straight forward but my book seems to be giving an incorrect answer without any explanation to their magic. $$y = \frac{1}{x^2 + 2}$$ I know that this has no 0 so that rule of finding a critical number can be discarded. $$y = (x^2 + 2)^{-1}$$ $$y' = -1(x^2 + 2)^{-2}\cdot 2x$$ $$y' = \frac{-2x}{(x^2 + 2)^{2}}$$ So now I am not so sure how my book got c = 0. AI: $$y' = \frac{-2x}{(x^2+2)^2}$$ Critical points occur if and when there are values of $x$ which make derivative of the function equal to zero. (I.e., you find the "zeros" of the function, if any.) You also check for points at which the function is not differentiable. The denominator in both the function $y$ and the derivative $y'$ are defined (and positive) for all real numbers: The denominator $(x^2 + 2)^2 \gt 0$ for all $x$. So here, the only points we need to consider are the zeros of the derivative. In this case, $\;y' = 0 \iff $ the numerator is zero. The numerator is zero if and only if $$-2x = 0 \iff x = 0$$ So there is indeed a critical point when $x = 0$, with the point being $(x,y) = (0, 1/2)$.
H: $f$ is constant if derivative equals zero Suppose $f'(x)=0$ for all $x\in (a,b)$. Prove that $f$ is constant on $(a,b)$. This seems painfully obvious, but I can't prove it rigorously. $f'(x)=0$ for all $x\in (a,b)$ means that for any $c\in(a,b)$, we have $$\lim_{x\rightarrow c}\frac{f(x)-f(c)}{x-c}=0$$ So for any $c\in (a,b)$, for any $\epsilon$ there exists $\delta$ such that if $|x-c|<\delta$ then $$\left|\frac{f(x)-f(c)}{x-c}\right|<\epsilon$$ which also means $$|f(x)-f(c)|<\epsilon\delta$$ How to continue from here? AI: Hint: It is better to start over again. Use the Mean Value Theorem (MVT), or the special case of MVT sometimes called Rolle's Theorem. Note that the way the problem is worded, the result is not quite true. The condition on the derivative says nothing about the values of $f$ at the endpoints $a$ and $b$. Added: Let $a\lt c\lt d \lt b$. Then by the Mean Value Theorem, we have $$\frac{f(d)-f(c)}{d-c}=f'(\xi)$$ for some $\xi$ between $c$ and $d$. It follows that $f(c)=f(d)$. From this we can see that $f(x)=f(y)$ for all $x,y$ in the open interval $(a,b)$. Let this common value be $k$. Then $f(x)=k$ on $(a,b)$. If $f$ is (right)-continuous at $a$ and (left)-continuous at $b$, we can further conclude that $f(a)=f(b)=k$. Remark: The result is indeed obvious, at least with the usual informal interpretation of derivative as rate of change: If your velocity is always $0$, you ain't going nowhere. However, differentiable functions can be very weird, and intuition about the smooth curves of our imagination cannnot always be relied on. The MVT is of great importance because it enables us to use local information about a function (its derivative) to obtain global information about the function, such as whether it is increasing on an interval. The reason that I suggested starting over is that continuing on the path you were taking, though possible, is quite difficult.
H: Does $X$ have a strong rank 1-diagonal? Definition 1: A space $X$ has a strong rank 1-diagonal if there exists a sequence $\{\mathcal U_n: n\in \omega\}$ of open covers of $X$ such that for each $x\in X$, $\{x\}=\bigcap \{\overline{\operatorname{St}(x, \mathcal U_n)}: n\in \omega\}$. Example 2: Let $Y=\bigcup\{[0,1]\times\{n\}:n \in \omega\}$ and $X=Y\cup\{a\}$ where $a \notin Y.$ Define a basis for a topology on $X$ as follows. Basic open sets containing $a$ take the form $\{a \}\cup \bigcup \{[0,1)\times \{m\}: m \geq n \}$ where $n \in \omega.$ Basic open sets about the other points of $X$ are the usual induced metric open sets. Question 3: Does $X$ have a strong rank 1-diagonal? Proof: Construct a sequence $\{\mathcal V_n: n\in \omega\}$ of open covers of $Y$ which witnesses that $Y$ has a strong rank 1-diagonal. Let $\mathcal U_n= \mathcal V_n \cup \{\{a \}\cup \bigcup \{[0,1)\times \{m\}: m \geq n \}\}$. Then $\{\mathcal U_n: n\in \omega\}$ shows that $X$ has a strong rank 1-diagonal. I'm not sure the proof is OK. Could somebody help me? AI: Your argument is fine. You could even replace $[0,1)$ in $\{a\}\cup\bigcup\big\{[0,1)\times\{m\}:m\ge n\big\}$ with $[0,1]$, as can be seen directly or by the following argument. $X$ is obtained in an especially simple fashion from a compact metric space, which of course has a strong rank $1$ diagonal. Let me construct it a little differently. For $n\in\omega$ let $$Y_n=\left[0,\frac1{2^n}\right]\times\left\{\frac1{2^n}\right\}\;,$$ let $$Y=\bigcup_{n\in\omega}Y_n\;,$$ let $a=\langle 0,0\rangle$, and let $X=Y\cup\{a\}$. Let $\tau$ be the topology that $X$ inherits from the plane; $\langle X,\tau\rangle$ is evidently a compact metric space. Now let $D=\left\{\left\langle 0,\frac1{2^n}\right\rangle:n\in\omega\right\}$, and let $\tau'$ be the topology generated by the subbase $\tau\cup\{X\setminus D\}$; it’s not hard to check that $\tau'=\tau\cup\{U\setminus D:U\in\tau\}$, and that $\langle X,\tau'\rangle$ is homeomorphic to your space $X$. It is of course not regular, since $a$ cannot be separated from the closed set $D$. However, it has the same closures of open sets as $\langle X,\tau\rangle$: if $U\in\tau$, then $\operatorname{cl}_\tau U=\operatorname{cl}_{\tau'}U$ and $\operatorname{cl}_\tau (U\setminus D)=\operatorname{cl}_{\tau'}(U\setminus D)$. It follows immediately that a family of $\tau$-open covers witnessing the fact that $\langle X,\tau\rangle$ has a strong rank $1$ diagonal does the same for $\langle X,\tau'\rangle$.
H: how can i find best value of t in this equation? I need to evaluate $$\tan^{-1} (x) - x = O(x^t)$$ as $x$ approaches $0$, in order to find the best value of $t$. Big O notation is described here. I tried: $$\lim_{x \to 0} \frac{\tan^{-1}(x) - x}{x^t}.$$ AI: If you are familiar with the power series expansion for $\arctan x$, namely $$\arctan x=x-\frac{x^3}{3}+\frac{x^5}{5}+ \cdots,$$ (valid for $|x|\lt 1$, and even at $x=1$), then $t=3$ is immediate. If the power series expansion is not familiar, one could do a search for a suitable $t$ by looking, as you are, at $$\frac{\arctan x-x}{x^t},\tag{1}$$ for various small integer values of $t$, starting at $t=1$. Use L'Hospital's Rule to find the limit as $x\to 0$ of (1) first when $t=1$, then when $t=2$, then when $t=3$. At $t=3$ we get a non-zero limit.
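For the record, the computation at $t=3$ (one application of L'Hospital's Rule): $$\lim_{x\to0}\frac{\arctan x-x}{x^3}=\lim_{x\to0}\frac{\frac{1}{1+x^2}-1}{3x^2}=\lim_{x\to0}\frac{-1}{3(1+x^2)}=-\frac13,$$ a finite nonzero limit, so $t=3$ is the best value (for $t<3$ the limit is $0$, and for $t>3$ the quotient diverges).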
H: Combinatorics: Options that do not involve a specific object The question was posed as: Marissa is doing a Tarot reading in which she must pick 6 cards from a deck of 72. The order of their selection is not important. Marissa does not want to see the Fool card. How many of the possible readings do not feature the Fool? What I think I should do is find the total amount of possibilities of 6 cards out of 72, C(72,6)= n! r!(n-r)! =__72!__ 6!(72-6)! =__72!__ 6!66! =__6.12e+103_ (720)(5.44e+92) =6.12e+103 3.92e+95 =156,238,908 possible combination of cards Then, find the amount out of a deck of 71 C(71,6) = 71! 6!65! =__8.50e+101__ (720)(8.25e+90) =__8.50e+101__ 5.94e+93 =143218999 Then subtract them 156238908-143218999= 13019909 That number is way too low, so that's where I am stuck. AI: You're close. You need to find the number of 6 card hands that contain the fool card: $$\binom{71}{5}$$ Then subtract that number from the total number of 6 card hands in the deck: $$\binom{72}{6}-\binom{71}{5}$$
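Numerically, $\binom{72}{6}-\binom{71}{5}=156{,}238{,}908-13{,}019{,}909=143{,}218{,}999$, which is exactly the $\binom{71}{6}$ computed in the question (choosing all $6$ cards from the $71$ non-Fool cards); the subtraction at the end of the question instead gives $\binom{71}{5}$, the number of readings that do contain the Fool.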
H: How can I definitively determine whether a universal/existential statement is true or false? How can one look at a universal/existential statement and determine with absolute certainty whether it is true or false? For example, is $$∀x\in\mathbb R\,∀y\in \mathbb R\,∃z\in \mathbb R\,\left(z = \frac{x+y}2 \right)$$ true or false? I'm pretty sure it's true, because any number divided by $2$ is going to yield some number (I tried some insane ranges just to check my sanity). But how can I know for sure? What about a more complex problem where a range of substitutions isn't a feasible test? AI: As Adriano pointed out, this is simply because the real numbers are closed under addition and multipication (in this case, multiplication by $1\over2$). The reason you're struggling so much with a proof of this fact is because to really prove it you would need a definition of addition and multiplication, such as via Dedekind cuts or Cauchy sequences. Intuitively, the proposition is of course obviously true. But yeah, the answer to question "how do I know for sure" is "by diving nearly as deeply into math as the subject allows". You're basically dealing with how we define exactly what a number is, and how to define addition and multiplication on that foundation. Look into construction of the real numbers, or construction of the rationals for a more accessible example.
H: Using partial fractions to find explicit formulae for coefficients? The set $S$ of binary strings whose integer representations are multiples of 3 has the generating function $$\Phi_S(x)={1-x-x^2 \over 1-x-2x^2}$$ Let $a_n=[x^n]\Phi_S(x)$ represent the number of strings in $S$ with length $n$. Use partial fraction expansion to determine an explicit formula for $a_n$ for all integers $n\geq 0$. You may use the initial conditions $a_0=1$, $a_1=0$. Hmm. What are the initial conditions for? How are partial fractions and generating functions related to recurrences? The terms do seem to follow $a_n=a_{n-1}+2a_{n-2}$ but how to prove it? AI: Note that we get very lucky, since $1-x-2x^2=(1-2x)(1+x)$. The first step, since the numerator has degree $\ge $ the degree of the denominator, is to divide. We get $$\frac{1-x-x^2}{1-x-2x^2}=\frac{1}{2}+\frac{1}{2}\cdot\frac{1-x}{1-x-2x^2}.$$ We now work for a while with the simpler $\frac{1-x}{1-x-2x^2}$. We try to find $A$ and $B$ such that $$\frac{1-x}{1-x-2x^2}=\frac{A}{1+x}+\frac{B}{1-2x}=\frac{A(1-2x)+B(1+x)}{1-x-2x^2}.$$ So we want to have $$1-x \quad\text{identically equal to}\quad A(1-2x)+B(1+x).$$ Put $x=\frac{1}{2}$. We get $B=\frac{1}{3}$. Putting $x=-1$, or otherwise, we find that $A=\frac{2}{3}$. Now use the fact that $$\frac{1}{1-t}=1+t+t^2+t^3+\cdots,$$ putting in turn $t=-x$ and $t=2x$, to find the power series expansions of $\frac{1}{1+x}$ and $\frac{1}{1-2x}$. Finally, use our previous calculations to find the power series expansion of our original function. From this we can read off the coefficient of $x^k$ for any $k$. Note that our original function is equal to $$\frac{1}{2}+\frac{1}{3}\cdot\frac{1}{1+x}+\frac{1}{6}\cdot\frac{1}{1-2x}.$$
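Carrying the read-off the rest of the way (a completion of the last step, checked against the stated initial conditions): the constant $\frac12$ only contributes to $a_0$, so $$a_0=\frac12+\frac13+\frac16=1,\qquad a_n=\frac{(-1)^n}{3}+\frac{2^n}{6}=\frac{2^{n-1}+(-1)^n}{3}\quad(n\ge 1).$$ As a check, $a_1=\frac{1-1}{3}=0$, $a_2=\frac{2+1}{3}=1$, $a_3=\frac{4-1}{3}=1$, and the recurrence $a_n=a_{n-1}+2a_{n-2}$ holds from $n=3$ onward.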
H: Is $E(xy) \geq E(x) E(y)$? Is $E(xy) \geq E(x) E(y)$ always? Where $E$ means expectation. Intuitively I feel like covariance should always be nonnegative, but I cannot prove why. Thanks for your help AI: That is not true. For example, let $x$ be $0$ or $1$ with equal probability of $0.5$, and let $y = 1 - x$. Then $E(xy) = 0$, but $E(x)E(y) = 0.25$. About the second sentence, it is also false. For example, let $y = -x$. Then the covariance will be negative if $x$ has non-zero variance.
H: Help with a conditional probability problem There are 6 balls in a bag and they are numbered 1 to 6. We draw two balls without replacement. Is the probability of drawing a "6" followed by drawing an "even" ball the same as the probability of drawing an "even" ball followed by drawing a "6". According to Bayes Theorem these two possibilities should be the same: Pr(A and B) = Pr(A) x Pr(B∣A) Pr(A and B) = Pr(B) x Pr(A∣B) However, when I try to work this out I am getting two different probabilities, 2/30 and 3/30 for the two different scenarios listed above. The first scenario is fairly straight-forward to determine, Pr(6) x Pr(even∣6 has already been drawn) 1/6 x 2/5 = 2/30 however, I think I am doing something wrong with the second scenario, Pr(even) x Pr(6∣even has already been drawn) 3/6 x ????? Any help would be greatly appreciated as this is really bugging me. Thank you in advance.... AI: For the sake of simplicity, replace the "even" condition with the new condition "2 or 4." Since you can't draw 6 twice in a row, this is an equivalent problem. Given this, it should be straightforward to see that (2 or 4)-then-(6) has odds $(\frac{2}{6})(\frac{1}{5}) = \frac{1}{15}$, and that (6)-then-(2 or 4) has odds $(\frac{1}{6})(\frac{2}{5}) = \frac{1}{15}$. Let me know if that needs clarification.
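For readers who like to see it empirically, here is a small Monte Carlo sketch (the trial count is an arbitrary choice, standard library only); both estimates land near $1/15\approx0.0667$:

```python
# Simulate drawing two balls from {1,...,6} without replacement and estimate
# P(6 then even) and P(even then 6); both should be close to 1/15.
import random

trials = 200_000
six_then_even = even_then_six = 0
for _ in range(trials):
    first, second = random.sample(range(1, 7), 2)   # ordered draw, no replacement
    if first == 6 and second % 2 == 0:
        six_then_even += 1
    if first % 2 == 0 and second == 6:
        even_then_six += 1

print(six_then_even / trials, even_then_six / trials, 1 / 15)
```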
H: Help identify this operator Except the whole operator is more compact. It is used between two fractions like.... x / y mystery operator x /y . I don't know how to write it on my computer. ~ = text ~ text ---- = ---- text text AI: $\cong$, perhaps? Your question is very hard to understand. $\propto$ is a different symbol that relates to proportionality….
H: Need help in number theory I wanted to know, how do I go about finding solutions to the equation $(x+1)(y+1) = z^3 + 1$ (integral solutions). Any help appreciated. Thanks. AI: For any integer $z$ there will be a solution. If $z^3 + 1 \neq 0$, then just take $x+1$ to be a divisor of $z^3 + 1$ and let $y+1 = \frac{z^3+1}{x+1}$. This will give you all solutions for that given $z$. If $z=-1$, then either $x+1 = 0$ and $y$ is arbitrary or vice versa. Hope that helps,
H: How to break a quadratic form based on two complementary subspaces I'd like to prove the following for my research, but I don't know how. It's from the paper An Elementary Proof of the Restricted Invertibility Theorem by Daniel A. Spielman and Nikhil Srivastava. Let's say I have a positive semi-definite matrix $\mathbf{A}\in\mathbb{R}^{n\times n}$ and a (maybe non-symmetric) matrix $\mathbf{L}\in\mathbb{R}^{n\times n}$. Basically, I want to compute some values related to $Tr(\mathbf{L}^T(\mathbf{A}-b\mathbf{I})^{-1}\mathbf{L})$. (It's guaranteed that $(\mathbf{A}-b\mathbf{I})^{-1}$ always exists and it is also positive semi-definite.) Now, let's say $\mathbf{P}$ is the projection onto the image of $\mathbf{A}$ and $\mathbf{Z}$ is the projection onto its kernel, so that $\mathbf{P} + \mathbf{Z} = \mathbf{I}$. Then how can you prove the following? $$Tr(\mathbf{L}^T(\mathbf{A}-b\mathbf{I})^{-1}\mathbf{L}) = Tr(\mathbf{L}^T\mathbf{P}(\mathbf{A}-b\mathbf{I})^{-1}\mathbf{P}\mathbf{L}) + Tr(\mathbf{L}^T\mathbf{Z}(\mathbf{A}-b\mathbf{I})^{-1}\mathbf{Z}\mathbf{L})$$ AI: Step 1: note that $P+Z=I$ yields $$ X=PXP+PXZ+ZXP+ZXZ. $$ Apply this to $X:=(A-bI)^{-1}$. Left-multiply by $L^T$, right-multiply by $L$, and take the trace. By linearity of the latter, the equation is equivalent to $$ \mbox{tr} (L^TPXZL)+\mbox{tr} (L^TZXPL)=0. $$ Actually, we will see that both matrices above are zero. Step 2: since $A$ is symmetric, $\ker A=\ker A^T=(\mbox{im } A)^\perp$. This explains why the projections (=self-adjoint idempotents) $P$ and $Z$ are complementary, i.e. $P+Z=I$. Note that in particular $$ PZ=P(I-P)=P-P^2=(I-P)P=ZP \quad\Rightarrow\quad PZ=ZP=0. $$ Step 3: since $A-bI$ commutes with $A$, so does $X$. Hence $\ker A=\mbox{im } Z$ is invariant under $X$. Thus $ZXZ=XZ$, whence $ZX=(XZ)^T=Z^TX^TZ^T=ZXZ=XZ$. So $X$ commutes with $Z$, and with $P=I-Z$ as well. Therefore $$ L^TPXZL=L^T(PZ)XL=0=L^T(ZP)XL=L^TZXPL $$ which finishes the proof.
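A numerical spot-check of the identity (a sketch with NumPy; the dimensions, the rank of $A$, and the choice $b=-1$ are arbitrary illustrative assumptions):

```python
# Build a random PSD matrix A with a nontrivial kernel, the projections P (onto
# im A) and Z = I - P, pick b so that A - bI is invertible, and compare both
# sides of the trace identity.
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 4                           # ambient dimension, rank of A
B = rng.standard_normal((n, r))
A = B @ B.T                           # PSD with a 2-dimensional kernel

w, V = np.linalg.eigh(A)
mask = w > 1e-10
P = V[:, mask] @ V[:, mask].T         # projection onto the image of A
Z = np.eye(n) - P                     # projection onto the kernel

b = -1.0                              # A - bI = A + I is positive definite
X = np.linalg.inv(A - b * np.eye(n))
L = rng.standard_normal((n, n))       # arbitrary, possibly non-symmetric

lhs = np.trace(L.T @ X @ L)
rhs = np.trace(L.T @ P @ X @ P @ L) + np.trace(L.T @ Z @ X @ Z @ L)
print(lhs, rhs)                       # agree up to rounding error
```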
H: estimating the error of $\sin(x) = x$ with Taylor's Theorem I want to calculate the numerical error in approximating $\sin(x)=x$ with Taylor's Theorem. Furthermore, what values of $x$ will this approximation be correct to within $7$ decimal places? Here is what I have done: $\sin(x) = \sum\limits_{k=0}^n (-1)^k\dfrac{x^{2k+1}}{(2k+1)!} + E_n(x)$ Where $E_n(x) =\dfrac{f^{(n+1)}(\xi)}{(n+1)!}x^{n+1}$, $x\in (-\infty, \infty)$ and $\xi$ is between $x$ and $0$. (This is just Taylor's Theorem with Lagrange remainder) Let $\xi = 0$ (why not). I am unsure of how I made $E_n(x)$: \begin{align} \left|\sin(x)-x\right| =& \sum\limits_{k=0}^n (-1)^k\dfrac{x^{2k+1}}{(2k+1)!} + (-1)^{2n+3}\dfrac{x^{2n+3}}{(2n+3)!} -x \\ \left|\sin(x)-x -\sum\limits_{k=0}^n (-1)^k\dfrac{x^{2k+1}}{(2k+1)!}\right| =& \left|(-1)^{2n+3}\dfrac{x^{2n+3}}{(2n+3)!} -x\right| \\ =&\left|\dfrac{x^{2n+3}}{(2n+3)!}-x\right| \end{align} Continuing in this way find the values of $n$ and $x$ that solve: \begin{equation} \left|\dfrac{x^{2n+3}}{(2n+3)!}-x\right| \leq 10^{-7} \end{equation} Intuitively this seems off because $(2n+3)!$ grows faster than $x^{2n+3}$, and so taking the limit as $n\to\infty$ we get $\left|-x\right|=x$ and thus any value of $x \leq 10^{-7}$ would work. This just does not seem right. All help is greatly appreciated! AI: We need to understand what Taylor's Theorem, Lagrange form of the remainder, says in this case. We are using the Taylor polynomial $P_1(x)=x$ to approximate $\sin x$. We could therefore call the error term $E_1$. But in this case the second term in the Taylor expansion is $0$, so $P_1(x)=P_2(x)$, and therefore $E_1$ and $E_2$ are equal. By the Lagrange form of the remainder, we have $$|E_2|=\left|\frac{-\cos \xi}{3!}x^3\right|\tag{1}$$ for some $\xi$ between $0$ and $x$. Since the absolute value of the cosine is $\le 1$, from (1) we obtain the estimate $$|E_2| \le \frac{|x|^3}{3!}.\tag{2}$$ It should now be straightforward to find the range of $x$ for which the right-hand side of (2) is $\lt 10^{-7}$. Remark: If you have met alternating series, we can bypass the Lagrange form of the remainder. In this case we end up with basically the same estimate of the error. Usually, however, the Lagrange form of the remainder results in technically correct but excessively pessimistic estimates.
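From the bound $(2)$, the approximation is accurate to within $10^{-7}$ whenever $|x|^3/6<10^{-7}$, i.e. $|x|<(6\cdot10^{-7})^{1/3}\approx0.00843$. A small Python sketch confirming the error stays below $10^{-7}$ on that range:

```python
# Check |sin(x) - x| < 1e-7 for |x| < (6e-7)**(1/3) on a grid of sample points.
import math

bound = (6e-7) ** (1 / 3)             # about 0.00843
print(bound)

xs = [k * bound / 1000 for k in range(-1000, 1001)]
worst = max(abs(math.sin(x) - x) for x in xs)
print(worst, worst < 1e-7)            # True: the error stays below 1e-7
```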
H: The set of all subsequential limits of a bounded sequence is a non-empty compact set Let $(x_n)$ be a bounded sequence and let $Y$ be the set of all subsequential limits of $(x_n)$. Prove that $Y$ is a non-empty compact set. I think it's possible to solve this problem by proving that $Y$ is bounded (because if $Y$ is unbounded then $(x_n)$ is not bounded) and closed (because $\mathbb{R}^n-Y$ is open). I guess we can also use the definition of compact set by sequence (but in order to this, it's also necessary to prove that $Y$ is bounded). However, I'd like to prove it by using cover. In other words, I'd like to prove that every open cover of $Y$ has a finite subcover (without using of Borel-Lebesgue Theorem, obviously). Is it possible? Thanks. AI: I suspect it is possible, although the resulting proof would probably be very similar to proving it using the definition of sequential compactness. The proof would probably go by contradiction: suppose you have an open cover $\mathcal C$ of $Y$ has no finite subcover. Let $y_1\in Y$, and choose $U_1 \in \mathcal C$ such that $y_1 \in U_1$. $U_1$ can't cover $Y$, so choose $y_2 \in Y\setminus U_1$, and choose $U_2 \in \mathcal C$ such that $y_2 \in U_2$. $U_1 \cup U_2$ can't cover $Y$, so choose $y_3 \in Y\setminus (U_1 \cup U_2)$, and choose $U_3 \in \mathcal C$ such that $y_3 \in U_3$. And so on. You should (I hope) be able to show that the resulting sequence $\{y_1,y_2,\dots,\}$ has no limit point, a contradiction. Alternately, choose $y_j$ and $U_j$ as above, and then choose $x_{k_j}\in U_j$ (possible since $y_1$ is a subsequential limit); then the fact that $\{x_{k_j}\}$ should have a subsequential limit in $Y$ follows more directly from the definition of $Y$. Such a proof might need to ensure that the sequence $\{U_j\}$ does form a cover of $Y$; you can arrange this by finding a countable subcover of $\mathcal C$ to start with, and then using the above construction for $U_{2j-1}$ and just exhausting the countable subcover set by set for $U_{2j}$. Or something like that....
H: Are these two quotient groups of $\mathbb{Z}^2$ isomorphic to each other? I am trying to tell if two quotient groups of $\mathbb{Z}^2$ are isomorphic. Let $H$ be the subgroup generated by $\{(1, 3),(1, 7)\}$ and $G$ the subgroup generated by $\{(2, 4),(2, 6)\}$. Are the quotient groups $\mathbb{Z}^2/H$ and $\mathbb{Z}^2/G$ isomorphic? I feel like this will be no with having something to do with relatively prime but I am not sure how to prove this. Thanks! AI: Hint: invertible $\mathbb{Z}$-linear row operations to the matrix $$ \pmatrix{ a & b \\ c & d } $$ do not change the row space of the matrix. Use the Euclidean algorithm to row reduce your matrices to diagonal matrices, then conclude.
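To make the hint concrete: for a $2\times2$ integer matrix with nonzero determinant, the diagonal (Smith) form is $\operatorname{diag}(d_1,d_2)$ with $d_1=\gcd$ of the entries and $d_1d_2=|\det|$. A short sketch computing these invariants for both subgroups:

```python
# Invariant factors of Z^2 / (row space) of a 2x2 integer matrix, using
# d1 = gcd of all entries and d1*d2 = |det|.
from math import gcd

def invariant_factors(m):
    (a, b), (c, d) = m
    d1 = gcd(gcd(a, b), gcd(c, d))
    det = abs(a * d - b * c)
    return d1, det // d1

print(invariant_factors([[1, 3], [1, 7]]))   # (1, 4) -> quotient is Z/4
print(invariant_factors([[2, 4], [2, 6]]))   # (2, 2) -> quotient is Z/2 x Z/2
```

Since $\mathbb{Z}^2/H\cong\mathbb{Z}/4$ has an element of order $4$ while $\mathbb{Z}^2/G\cong\mathbb{Z}/2\times\mathbb{Z}/2$ does not, the two quotients are not isomorphic.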
H: How prove this inequality $\sum\limits_{i=1}^{n}\frac{1}{x^{\alpha}_{i}+1}\ge\frac{n}{(n-1)^{\alpha}+1}$ let $n\ge 3,n\in N$, and $x_{1},x_{2},x_{3},\cdots,x_{n}$ are positive numbers,and such that $$\sum_{i=1}^{n}\dfrac{1}{x_{i}+1}=1,$$ show that: for any real numbers $\alpha\ge 1$,we have $$\sum_{i=1}^{n}\dfrac{1}{x^{\alpha}_{i}+1}\ge\dfrac{n}{(n-1)^{\alpha}+1}$$ AI: With change of variable $a_i = \frac{1}{x_i +1}$ we have to prove $\sum \frac{a_i^\alpha}{(1-a_i)^\alpha+a_i^\alpha}\geq\frac{n}{(n-1)^\alpha+1}$ under the constraint $\sum a_i =1$. Note that $$\sum_i \frac{a_i^\alpha}{(1-a_i)^\alpha+a_i^\alpha}=\sum_i \frac{a_i^\alpha}{(\sum _{j\neq i} a_j)^\alpha+a_i^\alpha}\\ \geq \sum_i \frac{a_i^\alpha}{(n-1)^{\alpha-1}(\sum _{j\neq i} a_j^\alpha)+a_i^\alpha}=:S,$$ where the power means inequality is applied to the sum in each denominator to obtain the last inequality. Using the fact that $\left\lbrace \frac{a_i^\alpha}{(n-1)^{\alpha-1}(\sum _{j\neq i} a_j^\alpha)+a_i^\alpha}\right\rbrace$ and $\left\lbrace(n-1)^{\alpha-1}(\sum _{j\neq i} a_j^\alpha)+a_i^\alpha\right\rbrace$ have opposite sorting order, we can apply the Chebyshev's sum inequality and obtain $$\frac{S}{n}\left(\frac{1}{n}\sum_i (n-1)^{\alpha-1}(\sum _{j\neq i} a_j^\alpha)+a_i^\alpha\right) \geq \frac{1}{n}\sum_i a_i^\alpha.$$ Therefore, $$S\geq\frac{n\sum_i a_i^\alpha}{\left((n-1)^\alpha+1\right)\sum_i a_i^\alpha}\\=\frac{n}{(n-1)^\alpha +1},$$ which completes the proof.
H: In a torsion module over a PID, is the annihilator of a sum of two elements the product of the annihilators? Let $R$ be a PID, and $M$ a torsion $R$-module. Let $m,m'\in M$ such that $\mathrm{ann}(m)=(c)$ and $\mathrm{ann}(m')=(c')$ with $c$ and $c'$ coprimes. Is true that $\mathrm{ann}(m+m')=(cc')$? If that's the case, how can I show it? AI: One direction is easy : if $d \in (cc')$, write $d = acc'$ so that $d(m+m') = ac'(cm) + ac(c'm') = 0+0=0$. Conversely, assume $d(m+m') = 0$. Then $dm = -dm'$ and $0 = d(cm) = cdm = -dcm'$, hence $-dc \in \mathrm{Ann}_R(m')$, which means $c'$ divides $-dc$. But $c'$ and $c$ are coprime, hence $c'$ divides $d$. Similarly $c$ divides $d$, hence since $c$ and $c'$ are coprime, $cc'$ divides $d$. Hope that helps,
H: How can I investigate the differentiability of this function? I learned that if all partial derivatives exist and are continuous, then the function is differentiable. Am I wrong? I tried the same approach for this problem; I think it is differentiable because all the partial derivatives exist and are continuous. However, it is not differentiable at 0. Why is that the case? // Then I think... the functions of the problems all have discontinuous partial derivatives at (0,0)... right? Then why is it differentiable or not?.. AI: The partial derivatives at the origin both evaluate to $0$. Thus, if the function is differentiable there, the differential at the origin must be the zero map. You're looking at $$\lim_{(x,y)\to (0,0)}\left|\frac{f(x,y)-f(0,0)-\nabla f(0,0)\cdot (x,y) }{\lVert (x,y)\rVert}\right|$$ This is $$\lim_{(x,y)\to (0,0)}\left|\frac{xy}{x^2+y^2}\right|$$ What happens if you take the limit along $y=x$? Is it $0$, as it should be? Added: The partial derivatives are $$\eqalign{ & \frac{{\partial f}}{{\partial x}} = \frac{{{y^2}}}{{{x^2} + {y^2}}}\frac{y}{{\sqrt {{x^2} + {y^2}} }} \cr & \frac{{\partial f}}{{\partial y}} = \frac{{{x^2}}}{{{x^2} + {y^2}}}\frac{x}{{\sqrt {{x^2} + {y^2}} }} \cr} $$ Are you sure they are continuous at the origin?
H: Is there a difference between abstract vector spaces and vector spaces? I am following my Oxford syllabus and my next step is abstract vector spaces, in my linear algebra book I've found vector spaces. I've searched a little and made a superficial comparison between both and found that they are the same thing. Is that correct or there's something I'm missing? Also, if it's correct, why two names to the same thing? AI: I am quite sure *vector spaces" and "abstract vector spaces" mean the same thing, and as Micah suggests, "abstract vector spaces" may simply make it more explicit that the spaces of concern are not necessarily $\mathbb C^n$ or $\mathbb R^n$. However, most courses and/or texts on linear algebra teach vector spaces as spaces which need not be $\mathbb R^n$ or $\mathbb C^n$. For example, from Wikipedia, you can read: Vectors in vector spaces do not necessarily have to be arrow-like objects as they appear in the mentioned examples: vectors are best thought of as abstract mathematical objects with particular properties ... ...Historically, the first ideas leading to vector spaces can be traced back as far as 17th century's analytic geometry, matrices, systems of linear equations, and Euclidean vectors. The modern, more abstract treatment, first formulated by Giuseppe Peano in 1888, encompasses more general objects than Euclidean space...
H: Recursion problem help The following are the teacher's example problems. The issue is that I don't understand the exact steps they took to go from $f(0)$ to $f(1)$ to $f(2)$ to $f(3)$. What I'm asking here is if someone could be so kind as to show me how the answers for $f(0)\ldots f(3)$ are derived for each problem. Let $f$ be defined as follows: $f(0) = 3$ $f(n) = (−1)^nf(n−1) + 4n$ for $n\ge1$. Find $f(7)$ The correct answer is $3$ $f(0) = 3$ $f(1) = 1$ $f(2) = 9$ $f(3) = 3$ $f(4) = 19$ $f(5) = 1$ $f(6) = 25$ $f(7) = 3$ Let $f$ be defined as follows: $f(0) = 1$ $f(n+1) = f(n) + n$ for $n≥0$ Find $f(6)$ The correct answer is $22$ $f(0) = 1$ $f(1) = 2$ $f(2) = 4$ $f(3) = 7$ $f(4) = 11$ $f(5) = 16$ $f(6) = 22$ Let $f$ be defined as follows: $f(0) = 4$ $f(1) = 3$ $f(n) = f(n−1) + 3f(n−2)$ for $n≥2$. Find $f(6)$ The correct answer is $348$ $f(0) = 4$ $f(1) = 3$ $f(2) = 15$ $f(3) = 24$ $f(4) = 69$ $f(5) = 141$ $f(6) = 348$ AI: The first recurrence is $f(n)=(-1)^nf(n-1)+4n$ for $n\ge 1$, with initial value $f(0)=3$. Just plug in successive values of $n$, starting with $n=1$: $$\begin{align*} f(1)&=(-1)^1f(0)+4\cdot1=(-1)(3)+4=-3+4=1\\ f(2)&=(-1)^2f(1)+4\cdot2=1\cdot1+8=9\\ f(3)&=(-1)^3f(2)+4\cdot3=(-1)(9)+12=3\\ f(4)&=(-1)^4f(3)+4\cdot4=1\cdot3+16=19\;, \end{align*}$$ and so on. The second recurrence is $f(n+1)=f(n)+n$ for $n\ge 0$, with initial value $f(0)=1$. To find $f(1)=f(0+1)$, you need to take $n=0$: $$f(1)=f(0+1)=f(0)+0=1+0=1\;.$$ Continue in similar fashion, taking $n$ to be in succession $1,2,3$, and so on: $$\begin{align*} f(2)&=f(1+1)=f(1)+1=1+1=2\\ f(3)&=f(2+1)=f(2)+2=2+2=4\\ f(4)&=f(3+1)=f(3)+3=4+3=7\;, \end{align*}$$ and so on. (Note that, read literally, this recurrence gives $f(6)=16$; the listed answers $1,2,4,7,11,16,22$ are what the variant $f(n+1)=f(n)+(n+1)$ produces, so that is probably what was intended.) The third recurrence is $f(n)=f(n-1)+3f(n-2)$ for $n\ge 2$, with initial values $f(0)=4$ and $f(1)=3$. Like the first one, this one gives you $f(n)$ directly rather than $f(n+1)$, so you can proceed very straightforwardly, just as with the first one: $$\begin{align*} f(2)&=f(1)+3f(0)=3+3\cdot4=15\\ f(3)&=f(2)+3f(1)=15+3\cdot3=24\\ f(4)&=f(3)+3f(2)=24+3\cdot15=24+45=69\;, \end{align*}$$ and so on. In every case it's just a matter of substituting the appropriate value of $n$ into the recurrence, and if you've already computed the values of $f$ that appear on the right-hand side of the recurrence, you can use them to compute the desired value.
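For completeness, here is a short Python sketch that iterates all three recurrences exactly as stated (the second one, read literally, gives $f(6)=16$ rather than the listed $22$, consistent with the remark above):

```python
# Iterate the three teacher examples directly from their definitions.
def f1(n):                      # f(0)=3, f(n) = (-1)^n f(n-1) + 4n
    v = 3
    for k in range(1, n + 1):
        v = (-1) ** k * v + 4 * k
    return v

def f2(n):                      # f(0)=1, f(n+1) = f(n) + n  (taken literally)
    v = 1
    for k in range(n):
        v = v + k
    return v

def f3(n):                      # f(0)=4, f(1)=3, f(n) = f(n-1) + 3 f(n-2)
    if n == 0:
        return 4
    a, b = 4, 3                 # f(0), f(1)
    for _ in range(n - 1):
        a, b = b, b + 3 * a
    return b

print([f1(k) for k in range(8)])   # [3, 1, 9, 3, 19, 1, 25, 3]
print([f2(k) for k in range(7)])   # [1, 1, 2, 4, 7, 11, 16]  (literal reading)
print([f3(k) for k in range(7)])   # [4, 3, 15, 24, 69, 141, 348]
```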
H: Finite Series - reciprocals of sines Find the sum of the finite series $$\sum _{k=1}^{k=89} \frac{1}{\sin(k^{\circ})\sin((k+1)^{\circ})}$$ This problem was asked in a test in my school. The answer seems to be $\dfrac{\cos1^{\circ}}{\sin^21^{\circ}}$ but I do not know how. I have tried reducing it using sum to product formulae and found out the actual value and it agrees well. Haven't been successful in telescoping it. AI: HINT: $$\frac{1}{\sin k^\circ\sin(k+1)^\circ}=\frac1{\sin1^\circ}\frac{\sin (k+1-k)^\circ}{\sin k^\circ\sin(k+1)^\circ}$$ $$=\frac1{\sin1^\circ}\cdot\frac{\cos k^\circ\sin(k+1)^\circ-\sin k^\circ\cos(k+1)^\circ}{\sin k^\circ\sin(k+1)^\circ}=\frac1{\sin1^\circ}\left(\cot k^\circ-\cot(k+1)^\circ\right)$$ Can you recognize Telescoping series / sum?
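A quick numerical confirmation of the telescoped value (a Python sketch working in radians):

```python
# Compare the 89-term sum with the closed form cos(1 deg) / sin^2(1 deg).
import math

deg = math.pi / 180
total = sum(1 / (math.sin(k * deg) * math.sin((k + 1) * deg)) for k in range(1, 90))
closed = math.cos(deg) / math.sin(deg) ** 2
print(total, closed)   # the two values agree
```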
H: How the generating function $P(s)=\mathbb E[s^X]$ uniquely determines probabilities $p_n$, $n=1,2,\ldots$ for determining the probabilities, it has been written on the book that: $$p_n=\frac{\frac{d^n}{ds^n}P(s)|_{s=0}}{n!};\ldots(A)$$ But if i set $s=0$ then $p_n$ becomes $0$. $$p_1=\frac{\frac{d}{ds}P(s)|_{s=0}}{n!}$$ $$\Rightarrow p_1=\frac{\frac{d}{ds}\mathbb E[s^X]|_{s=0}}{n!}$$ $$\Rightarrow p_1=\frac{\mathbb E[Xs^{(X-1)}]|_{s=0}}{n!}$$ $$\Rightarrow p_1=\frac{\mathbb E[X0^{(X-1)}]}{n!}$$ $$\Rightarrow p_1=\frac{\mathbb E[0]}{n!}$$ $$\Rightarrow p_1=\frac{0}{n!}=0$$ Can you please prove the relation in equation (A) and check it for Binomial Distribution where the generating function of the Binomial Distribution is $P(s)=(1-p+ps)^n$, where $p$ denotes probability of success. AI: We try to show that all that is involved in the probability generating function is familiar calculus ideas. We have a random variable $X$ that takes on non-negative integer values only. Then $$P(s)=E(s^X)=p_0+p_1s+p_2s^2+p_3s^3+\cdots.$$ We could write the first term as $p_0s^0$, on the understanding that $s^0$ is identically $1$. But $p_0$ is cleaner. The $p_i$ are non-negative and less than $1$. So the power series above has radius of convergence at least equal to $1$. The $p_k$ obviously determine $P(s)$. We show that the $p_k$ can be uniquely recovered from $P(s)$. This amounts to showing that we can recover the $p_k$ by a derivatives calculation. In the interior of the interval of convergence, we can freely differentiate term by term. Now the first formula of your post is the familiar assertion that the coefficient of $s^n$ in the Maclaurin series of a function $P(s)$ is $\frac{1}{n!} P^{(n)}(0)$. We can get at it mechanically by differentiating the series on the left term by term $n$ times and setting $s=0$, When we differentiate $n$ times, all the terms before the $s^n$ term die. The $n$-th derivative of $s^n$ is $n!$, and the derivatives of all the terms that involve powers of $s$ higher than $n$ still have an $s$ in them, so die when we set $s=0$. To illustrate with the binomial distribution, differentiate $(1-p+ps)^n$ $k$ times, and set $s=0$. Differentiating $k$ times gets us a $p^k$ from the Chain Rule applied $k$ times. Then we get $(n)(n-1)\dots(n-k+1)$, which when divided by $k!$ is just $\binom{n}{k}$. And after we have differentiated $k$ times, there remains $(1-p+ps)^{n-k}$, which becomes $(1-p)^{n-k}$ when we set $s=0$. Thus we have, using the formula at the beginning of your post, recovered the probabilities $p_k=\binom{n}{k}p^k(1-p)^{n-k}$.
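To see the formula $(A)$ in action, here is a small SymPy sketch for the binomial case with an illustrative choice $n=5$ (the symbol names are just for this example):

```python
# Differentiate the binomial pgf P(s) = (1 - p + p s)^n and recover the pmf
# via p_k = P^(k)(0) / k!.
from sympy import symbols, diff, factorial, binomial, expand

s, p = symbols('s p')
n = 5
P = (1 - p + p * s) ** n

for k in range(n + 1):
    coeff = diff(P, s, k).subs(s, 0) / factorial(k)
    assert expand(coeff - binomial(n, k) * p**k * (1 - p)**(n - k)) == 0
print("recovered p_k = C(n, k) p^k (1-p)^(n-k) for k = 0..n")
```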
H: Euler's formula for triangle mesh Can anyone explain to me these two facts which I don't get from Euler's formula for triangle meshes? First, Euler's formula reads $V - E + F = 2(1-g)$ where $V$ is vertices number, $E$ edges number, $F$ faces number and $g$ genus (number of handles in the mesh). Now my book says Since for most practical applications the genus is small compared to the number of elements, the right hand side of the equation can be assume to be negligible. Given this and the fact that each triangle is bounded by three edges and that each interior manifold edge is incident to two triangles, one can derive the following The number of triangles is twice the number of vertices $F \approx 2V$ The number of edges is three times the number of vertices $E \approx 3V$ AI: Since we're talking about a triangle mesh, there is a fixed relationship between the number of edges and the number of faces. To derive this it's helpful to think of the mesh as being made of half-edges. A half-edge is a pair of an edge and a face it borders. The total number of half-edges in the mesh is $2E$, since each edge has two halves; and it's also $3F$, since each face touches three half-edges and this counts all the half-edges exactly once. Therefore $2E = 3F$. By solving for $E$ or $F$ and substituting into the formula $V - E + F \approx 0$, we can easily derive your two facts: $E = \frac{3}{2}F, \qquad V - \frac{3}{2}F + F \approx 0, \qquad V - \frac{1}{2}F \approx 0, \qquad F \approx 2V$. $F = \frac{2}{3}E, \qquad V - E + \frac{2}{3}E \approx 0, \qquad V - \frac{1}{3}E \approx 0, \qquad E \approx 3V$.
H: Subgroup with Finite Index of Multiplicative Group of Field Let $F$ be an infinite field such that $F^*$ is a torsion group. We know that $F^*$ is an Abelian group. So every subgroup of $F^*$ is a normal subgroup. My question: Does $F^*$ have a proper subgroup with finite index? AI: Let $F$ be the algebraic closure of a finite field. Each element of $F^*$ belongs to a finite field, so is a torsion element. On the other hand $F^*$ cannot have a subgroup of index $n>1$. For if $A$ is such a subgroup, then $x^n\in A$ for all $x\in F^*$. But if $z\in F^*\setminus A$, then $z$ has an $n$th root in $F^*$ contradicting the previous sentence. Returning to the general case. If $F^*$ is torsion, then obviously $F$ has finite characteristic, and is algebraic over its prime field. Therefore $F$ is contained in an algebraic closure of a finite field. If $F$ itself is finite, then it obviously has finite index subgroups, but this was excluded by the OP. And as an example of an infinite field such that $F^*$ has a subgroup of a finite index let's try the following. Consider the nested union of extensions of $\mathbb{F}_2$ of degrees $2^n$ $$ F=\bigcup_{n\ge0}\mathbb{F}_{2^{2^n}}. $$ The union can be formed inside an algebraic closure of $\mathbb{F}_2$. For $m\ge n$ let $N^m_n:\mathbb{F}_{2^{2^m}}\to\mathbb{F}_{2^{2^n}}$ be the relative norm map. For $n\ge1$ define the groups $$ A_n=\{z\in\mathbb{F}_{2^{2^n}}\mid N^n_1(z)=1\}\le \mathbb{F}_{2^{2^n}}^*. $$ Transitivity of norm in a tower of field extensions means that $N^n_1\circ N^m_n=N^m_1$ for all $1\le n\le m$. Let us define $$ A=\bigcup_{n\ge1}A_n. $$ I claim that $A$ is a subgroup of $F^*$. It is obviously closed under inverses, as all the $A_n$ are groups as kernels of $N^n_1$. If $z\in A_n$ and $n<m$, then $$ N^m_1(z)=N^n_1(N^m_n(z)). $$ Here $N^m_n(z)=z^{2^{m-n}}$ is just a power of $z$, so we get that also $z\in A_m$. We have seen that $A_n\le A_m$, and the claim follows from this. To close off this example I claim that $A$ is of index three in $F^*$. Let $\mathbb{F}_4^*=\{1,\omega,\omega^2=1+\omega\}$, where $\omega$ is a primitive third root of unity. Because $N^n_1(\omega)=\omega^{2^{n-1}}$ is either $\omega$ or $\omega^2$, it follows that for every element $z\in F^*$ exactly one of $z,\omega z,\omega^2 z$ belongs to the subgroup $A$. The claim follows from this. Note that the argument from the case of an algebraically closed field does not apply for this $F$. For example, the field $F$ does not have ninth roots of unity because those reside in the field $\mathbb{F}_{64}$, and won't be included in this tower.
H: Prove the image of separable space under continuous function is separable. Let $f:X\to Y$ be continuous. Show that if $X$ has a countable dense subset, then $f(X)$ satisfies the same condition. AI: HINT: This is a case in which the most obvious thing to try does work. $X$ is separable, so it has a countable dense subset $D$. Show that $f[D]$ is a countable dense subset of $f[X]$. This is extremely easy if you use the fact that $f$ is continuous if and only if $f^{-1}[U]$ is open in $X$ for each open $U\subseteq Y$.
H: $\hat{f}(\omega)=\frac{1}{\sqrt{2\pi}} \int^{\infty}_{-\infty}{f(x)e^{-i\omega x}dx}$ and $\hat{f}(\omega)=\int_{-\infty}^{\infty}{f(x)e^{-2\pi ix\omega}}dx$ Fourier transform - difference between $$\hat{f}(\omega)=\frac{1}{\sqrt{2\pi}} \int^{\infty}_{-\infty}{f(x)e^{-i\omega x}dx}$$ and $$\hat{f}(\omega)=\int_{-\infty}^{\infty}{f(x)e^{-2\pi ix\omega}}dx$$ Can you explain this difference please? Thanks! AI: Both are used as the definition of the Fourier transform. I will explain why one has two definitions (and actually more than that). The Fourier transform is actually a map to functions on the dual group. But since $\mathbb{R}$ is self dual ($\mathbb{R} \cong \hat{\mathbb{R}}$), it maps into functions on $\mathbb{R}$. So when one writes $$ \hat{f}(\omega)=\frac{1}{\sqrt{2\pi}} \int^{\infty}_{-\infty}{f(x)e^{-i\omega x}dx} $$ with $\omega$ a real number, one has identified $\mathbb{R}$ with its dual through an isomorphism. Depending on the isomorphism, one gets a different "definition" of the Fourier transform. The factor $\frac{1}{\sqrt{2 \pi}}$ is there to make the Haar measure self-dual, that is, the Haar measure on the dual group in the Fourier inversion theorem is equal to the Haar measure on the group itself.
H: limits of Surface area of revolution in polar co-ordinates. My question is: Find the area of the surface generated by revolving the right-hand loop of the lemniscate $\;r^2=\cos2\theta\;$ about the vertical line through the origin (y-axis). I know the formula $$S=2\pi \int_\alpha^\beta r \cos\theta\sqrt{r^2+\left(\frac{dr}{d\theta}\right)^2}\,d\theta$$ But I can't figure out the limits $\alpha$ and $\beta$. In Cartesian co-ordinates I can easily picture a function segment from a to b and imagine it rotating around any axis, but in polar co-ordinates I am really having difficulty capturing the limits and then imagining a line segment and rotating that segment around the y-axis. I just can't picture it in my mind. Any help? Thanks in advance. AI: To generate the right-hand loop of the lemniscate $r^2=\cos(2\theta)$, you need to let $\theta$ vary from $-\frac{\pi}{4}$ to $\frac{\pi}{4}$; i.e. $\theta \in [-\frac{\pi}{4},\frac{\pi}{4}] $. To see this, we have $$ r = \pm \sqrt{\cos(2 \theta) } \implies \cos(2\theta)\geq 0 \implies -\frac{\pi}{2}\leq2\theta \leq \frac{\pi}{2}\implies -\frac{\pi}{4}\leq \theta \leq \frac{\pi}{4}. $$ (Plots of the right-hand loop and of the full lemniscate omitted.)
H: On the direct sum of rings Let $A,B$ be rings. Suppose that $$A\cong A\oplus B$$ Can I conclude that $B=0$, the trivial ring? If so, how can be proved? Thanks AI: No: take $A$ to be the ring of subsets of $\Bbb N$ with $\cap$ as product, $\triangle$ (symmetric difference) as sum, $\Bbb N$ as $1$, and $\varnothing$ as $0$, and take $B$ to be the two-element ring. Equivalently, but perhaps more transparently, take $A=(\Bbb Z/2\Bbb Z)^\omega$ with the operations defined componentwise, and $B=\Bbb Z/2\Bbb Z$. For that matter, you can take $B=A$: $A\oplus A\cong A$.
H: Lax's proof of a property of principal minors There's a section in Peter Lax's Linear Algebra (2nd edition) that I am struggling to understand. I think it involves at least one typo, so let me write it out here exactly as it is in the book. Lax has just established that if $\lambda$ is a simple root of $\chi_A(x)$, then one of the principal minors of $A-\lambda I$ has nonzero determinant. Then he continues (p. 131): "Let A be (a matrix with $a$ as a simple root of its characteristic polynomial) and take the eigenvalue $a$ to be zero. Then one of the principal $(n-1)\times (n-1)$ minors of $A$, say the $i$th, has nonzero determinant. We claim that the $i$th component of an eigenvector $h$ of $A$ pertaining to the eigenvalue $a$ is nonzero. Suppose it were denote (sic) by $h^{(i)}$ the vector obtained from $h$ by omitting the $i$th component, and by $A_ji$ (sic) the $i$th principal minor of $A$. Then $h^{(i)}$ satisfies $$A_{ii}h^{(i)}=0."$$ Huh? Why should $A_{ii}h^{(i)}=0$? If I am to understand by $A_{ii}$ the $i$th principal minor of $A$, which results from omitting the $i$th row and $i$th column, then this statement seems false. For a dumb counterexample, let $A=\left[\begin{matrix}1&-1 \\ 2&-2\end{matrix}\right]$ and $h=\left[\begin{matrix}1\\1\end{matrix}\right]$. Then $A_{ii}h^{(i)}\neq 0$. What am I not understanding? AI: I think he meant a proof by contradiction. Suppose the contrary that the $i$-th component of $h$ is zero. Then $Ah=0$ implies that $A_{ii}h^{(i)}=0$. As $\det(A_{ii})\neq0$, it follows that $A_{ii}$ is invertible and $h^{(i)}=0$. Therefore $h=0$, contradicting that $h$ is an eigenvector.
H: How many parameters are required to specify a linear subspace? A problem in Peter Lax's Linear Algebra involves looking at the family of $n\times n$ self-adjoint complex matrices and asking: on how many real parameters does the choice of such a matrix depend? This made me ask: On how many real parameters does a choice of a $k$-dimensional subspace of $\mathbb{R}^n$ depend? Clearly the $n-1$-dimensional subspaces as well as the $1$-dimensional subspaces are an $n-1$ parameter family, since we can specify a $1$-dimension subspace by choosing a vector up to scaling, and we can choose an $n-1$-dimensional subspace by choosing its one-dimensional orthogonal complement. This suggests that the formula should be something like ${n\choose k}-1$. I'm having trouble proving this...any ideas? AI: You can generate an arbitrary linear subspace of dimension $d$ by applying an orthogonal transformation on the subspace spanned by the first $d$ basis vectors. However, orthogonal transformations which happen completely inside the subspace have no effect, nor do orthogonal transformations which happen completely in its complement. Since you need $k(k-1)/2$ parameters to specify an orthogonal transformation in $k$ dimensions, the number of arguments to specify a $d$-dimensional linear subspace of $\mathbb R^n$ is $$\frac{n(n-1)}{2} - \frac{d(d-1)}{2} - \frac{(n-d)(n-d-1)}{2}$$ To specify an affine subspace of dimension $d$, you need $n-d$ additional parameters to describe the displacement from the origin (displacements in the subspace don't change it). Here are some examples: To specify a $0$-dimensional linear subspace, you need $n(n-1)/2 - 0 - n(n-1)/2 = 0$ parameters. That's obvious, because there's only one 0-dimensional linear subspace. To specify a $1$-dimensional linear subspace (a straight line through the origin), you need $n(n-1)/2 - 0 -(n-1)(n-2)/2 = n-1$ parameters. This is also clear; to specify a one-dimensional subspace, you need to specify one point on the unit sphere, which is $n-1$-dimensional. To specify a $2$-dimensional linear subspace (a plane through the origin) in $\mathbb R^3$, you need $3\cdot2/2 - 2\cdot1/2 - 1\cdot0/2 = 2$ parameters. More generically, from the formula you see that you need the same number of parameters to specify a linear subspace and to specify its complement, which is also obvious because one implies the other. To specify a $1$-dimensional affine subspace (a general straight line) in $\mathbb R^n$, you need $n(n-1)/2 - 0 - (n-1)(n-2)/2 + (n-1) = 2(n-1)$ parameters. This can also be seen directly from the fact that you can write the points of the line as $\mathbf u + \lambda\mathbf v$, (2 vectors $\implies$ 2n parameters) but the length of $v$ is irrelevant (1 parameter less), and $\mathbf u$ can be changed by an arbitrary multiple of $\mathbf v$ (again, 1 parameter less). Edit: As Elmar Zander noted, the formula for linear subspaces can be simplified significantly to $d(n-d)$. Obviously this means that the number of affine subspaces then also simplifies to $(d+1)(n-d)$.
H: Dimension of isometry group of complete connected Riemannian manifold Given an $n$-dimensional geodesically complete connected Riemannian manifold $M$, we want to prove that the dimension of its isometry group is $$\dim {\rm ISO}(M) \leq \frac{n(n+1)}2.$$ Does it suffice to say that, since Euclidean space $\mathbb{R}^n$ is expected to be maximally symmetric, and the number above is the dimension of its isometry group, namely translations plus rotations, then the bound must be true for any other manifold? Do you know a more rigorous proof? AI: The comparison with Euclidean space gives the correct intuition but to give further details one could mention that the orbit of a point $p\in M$ under the isometry group is a homogeneous space of the isometry group, whose dimension is at most that of $M$ itself, namely $n$. Now the action of the stabilizer at $p\in M$ is determined uniquely by the induced action on the tangent space at $p$, which can be identified with a subgroup of $SO(n)$. The latter is known to have dimension $n(n-1)/2$, so for the total dimension of the isometry group you get at most $n + n(n-1)/2 = n(n+1)/2$.
H: Representation of an oriented manifold as a set of common zeroes of smooth functions Let $M$ be an arbitrary oriented smooth manifold of dimension $m$. Is it always diffeomorphic to a sumbanifold in ${\mathbb R}^n$ (with some $n$) defined as a set $X$ of common zeroes of $n-m$ smooth functions $f_1,...,f_{n-m}$ (defined on an open set $U\subseteq {\mathbb R}^n$ and having linearly independent differentials in each point $x\in X$: $df_1(x)\wedge...\wedge df_{n-m}(x ) \ne 0$)? AI: If you had such functions then the tangent bundle of the manifold would be stably trivial, which is not always the case. The lowest dimension where there is a countexample is 4. Thus, $\mathbb{CP}^2$ can indeed be embedded in Euclidean space, as can any compact manifold, but its tangent bundle is not stably trivial, in the sense that one cannot add a trivial bundle to the tangent bundle to get another trivial bundle. This follows from characteristic class theory, see e.g. the classic text Milnor, Stasheff, Characteristic classes. I am not sure what level text you are studying now, so I would add that the gradient vector fields of your functions $f_i$ would give a trivialisation of the normal bundle of the submanifold, and the ambient space $\mathbb{R}^n$ gives a trivialisation of the sum of the tangent bundle and the normal bundle. See for example this post for a discussion of chern classes of complex projective spaces.
H: How to calc $ a^{2^n}$ mod $m$ in less than O(n) time? Here $a,m$ are positive integers; if needed, you can assume $m$ is a prime. Is there any fast algorithm? I'm sorry for my unclear description. AI: If $m$ is a prime (and $a$ is not a multiple of $m$), you can reduce the exponent $$ \large a^{2^n}\equiv a^{2^n \operatorname{mod} (m-1)} \pmod m$$ where $\operatorname{mod}$ means the modulo operator (the % operator in many programming languages). The topmost exponent can even be reduced modulo $\varphi(m-1)$, but here we possibly have to respect initial corrections for small $n$ if $\varphi(m-1)$ and $2^n$ have a common factor (which is always true if $m$ is an odd prime). For instance, if $m=7$ then $ \varphi(7-1) = \varphi(6) = 2$ and $$a^{2^n} \operatorname{mod} 7 \equiv a^{2^n \operatorname{mod} 6} \operatorname{mod} 7$$ so either $n$ is zero, then $$a^{2^n} \equiv a^1 \pmod 7$$ or $n$ is odd, then $$a^{2^n} \equiv a^2 \pmod 7$$ or $n$ is even $\gt 0$, then $$a^{2^n} \equiv a^4 \pmod 7$$ because $2^n \pmod 6$ is $1$ if $n=0$ and otherwise $2\cdot 2^{(n-1) \operatorname{mod} 2}$, i.e. $2$ for odd $n$ and $4$ for even $n>0$. This reduces the computational load considerably...
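A minimal Python sketch of this idea for prime $m$ (two calls to the built-in modular `pow`, so the work no longer grows with $2^n$):

```python
# With m prime and gcd(a, m) = 1, Fermat gives
#   a^(2^n) = a^(2^n mod (m-1))  (mod m),
# so the whole computation is two modular exponentiations.
def a_pow_pow2(a, n, m):
    if a % m == 0:
        return 0                      # a is a multiple of the prime m
    e = pow(2, n, m - 1)              # reduce the exponent mod m-1
    return pow(a, e, m)

print(a_pow_pow2(3, 10**6, 7))                  # fast even for n = 10^6
print(pow(3, 2**20, 7), a_pow_pow2(3, 20, 7))   # cross-check for a small n
```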
H: Integrating velocity field to get position I feel silly for simply being brainstuck, but consider the following integral, physically it would be the solution of $\mathbf{p} = \tfrac{d\mathbf{v}}{dt}$ - the position of a given particle in space with respect to the time and a velocity vector field. $$\mathbf{p}(x,y) = \int_a^b{\mathbf{v}(x,y)}dt$$ However I have no idea how to describe the $x$ and $y$ components of the velocity vector in t. Or how to convert dt to dx & dy? Say for example $\mathbf{v} = \left \langle 3x, xy \right \rangle$ Which would result in: $$\mathbf{p}(x,y) = \int_a^b{3x}dt\cdot\mathbf{i}+\int_a^b{xy}dt\cdot\mathbf{j}$$ Buth how then to continue? I should be possible to calculate this right? I know the speed vector at each point in space, so over a given time period I should be able to get the new position right? AI: I think you might be confused because of the notation. If $\mathbb p$ is supposed to be the position, it should depend only on time, not on another position. That means $\mathbb p$ should be a function of one variable. The differential equation is supposed to be $\frac{d\mathbb p}{dt} = \mathbb v(\mathbb p)$. The integral form is $$ \mathbb p(t) = \mathbb p(0) + \int_{t_0}^{t} \mathbb v(\mathbb p(s)) ds, $$ but this generally is not the method to find the solution. To find $\mathbb p$ given $\mathbb p(0)$ and $\mathbb v$, it may help to break $\mathbb p(t)$ into $\mathbb p(t) = \langle p_x(t), p_y(t) \rangle$ and $\mathbb v(x, y)$ into $\mathbb v(x, y) = \langle v_x(x, y), v_y(x, y) \rangle$. The differential equation for the vector $\mathbb p$ can be written as a system of scalar differential equations: \begin{align*} \frac{dp_x}{dt} & = v_x(p_x, p_y) \\ \frac{dp_y}{dt} & = v_y(p_x, p_y). \end{align*} As an example, suppose $\mathbb v(x, y) = \langle 3x, xy \rangle$, i.e., $v_x(x, y) = 3x$ and $v_y(x, y) = xy$. Then you have the system \begin{align*} \frac{dp_x}{dt} & = 3p_x \\ \frac{dp_y}{dt} & = p_xp_y. \end{align*} The first equation can be solved independently for $p_x$, giving $$p_x(t) = c_1e^{3t}$$ where $c_1$ is a constant. Substitute this into $p_x$ in the second equation to get $$ \frac{dp_y}{dt} = c_1e^{3t}p_y. $$ This equation is separable. The solution is $$ p_y(t) = c_2e^{\frac{c_1}3 e^{3t}} $$ where $c_2$ is another constant. $c_1$ and $c_2$ can be determined once the initial condition is given. If $\mathbb p(0) = \langle x_0, y_0\rangle$ is given, then $$ \mathbb p(0) = \langle x_0, y_0\rangle = \langle c_1, c_2e^{\frac{c_1}3} \rangle. $$ It is easy to verify that $c_1 = x_0$ and $c_2 = y_0e^{-\frac{x_0}3}$. Therefore, $$ \mathbb p(t) = \langle x_0e^{3t}, y_0e^{\frac{x_0}3\left(e^{3t} - 1\right)} \rangle $$
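A numerical cross-check of the worked example (a sketch using SciPy's `solve_ivp`; the initial condition $(1,2)$ and the tolerances are arbitrary illustrative choices):

```python
# Integrate dp/dt = v(p) numerically and compare with the closed-form solution
# derived above, for v(x, y) = (3x, xy) and p(0) = (x0, y0).
import numpy as np
from scipy.integrate import solve_ivp

def v(t, p):
    x, y = p
    return [3 * x, x * y]

x0, y0 = 1.0, 2.0
t = np.linspace(0, 1, 5)
sol = solve_ivp(v, (0, 1), [x0, y0], t_eval=t, rtol=1e-9, atol=1e-12)

exact_x = x0 * np.exp(3 * t)
exact_y = y0 * np.exp(x0 / 3 * (np.exp(3 * t) - 1))
print(np.max(np.abs(sol.y[0] - exact_x)), np.max(np.abs(sol.y[1] - exact_y)))
```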
H: Check whether the given matrix is diagonalizable or not? I am stuck on the following question: This question carries only $1$ mark. So, without going into details, how can I check whether the given matrix $M$ and $M^2=\begin{pmatrix} 1 &15 &45 \\ 0&16 &45 \\ 0&0 &81 \end{pmatrix}$ are diagonalizable ? AI: If the eigenvalues are all distinct, then the matrix is diagonalizable. Since $M^2$ is upper triangular, its diagonal entries $1$, $16$, $81$ are precisely its eigenvalues, and they are distinct.
H: find the least square solution for the best parabola find the least squares solution for the best parabola going through (1,1), (2,1), (3,2), (4,2) so to solve this problem I have 4 equation set up a + b + c = 1 4a + 2b + c = 1 9a + 3b + c = 2 16a + 4b + c = 2 I found my $A$ to be 1 1 1 4 2 1 9 3 1 16 4 1 and my $\mathbf{b}$ to be 1 1 2 2 then I apply the normal equation $x =(A^TA)^{-1} \, A^T b$ to solve for least square solutions I got $\mathbf{x}$ = 0 0.4 0.5 which give me $a = 0, b = 0.4, c = 0.5$ my friend went through similar process and get $a = 0.5, b= 0.4, c = 0$ which should be the right solutions for this problem ? was it me that getting the reverse answer? or was my answer correct? thank in advance AI: Most likely, you set up the quadratic as $ax^2+bx+c$, and your buddy set it up as $a+bx+cx^2$. Either way is OK, so long as when you get to the end you both remember which way you set up your quadratic.
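A quick reproduction of both computations (a NumPy sketch; the column order of $A$ is what decides whether the first coefficient is the quadratic or the constant term):

```python
# Least-squares fit of y = a x^2 + b x + c to the four points.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 1.0, 2.0, 2.0])

A = np.column_stack([x**2, x, np.ones_like(x)])   # columns ordered a, b, c
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)   # approximately [0, 0.4, 0.5] -> a = 0, b = 0.4, c = 0.5
```

Reversing the column order to `[np.ones_like(x), x, x**2]` reproduces the friend's reading $(0.5, 0.4, 0)$; the fitted parabola is the same in both cases.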
H: How's it possible for each element of the empty set to be even? I was reading Pugh's Real Analysis: I've found this in the beginning of the book: A class is a collection of sets. The sets are members of the class. For example we could consider the class $\mathcal{E}$ of sets of even natural numbers. Is the set $\{2,15\}$ a member of $\mathcal{E}$? No. How about the singleton $\{6\}$? Yes. How about the empty set? $\color{red}{\text{Yes, each element of the empty set is even.}}$ How's it possible that each element of the empty set is even when the empty set doesn't have any elements? Edit: I've read the comments and the answer and I was thinking something quite quite different: If I have a set that has no elements, then the absence of elements would make the task of assigning a property to one of it's elements impossible. Is that feasible? AI: Each element of the empty set is even can be paraphrased as if $x$ is an element of the empty set, then $x$ is even: $$\forall x\Big(x\in\varnothing\to x\text{ is even}\Big)\;.\tag{1}$$ How could you show that this was false? You’d have to show that there was some $x\in\varnothing$ that was not even. And you can’t do this: you can’t find any $x$ in the empty set, let alone one that is even. Since you can’t show that $(1)$ is false, it must be true. To restate the argument in slightly different terms, the statement if $x$ is an element of the empty set, then $x$ is even imposes a condition on elements of the empty set, but the empty set has no elements, so it doesn’t actually impose a condition on anything. Thus, nothing can violate it: no object is an element of the empty set, so no object is even a candidate to violate the requirement of being even. The usual terminology is that the statement $(1)$ is vacuously true: it’s true because it doesn’t actually impose a requirement on anything. Note that you could replace $x\text{ is even}$ in $(1)$ with pretty much any statement about $x$, and the resulting sentence would be vacuously true by essentially the same argument.
H: Riemann sum calculation I would like to find out the sum of the following series $$ \lim_{n\to\infty} \left[\frac{n^2}{{(n^2+1^2)}^{3/2}} + \frac{n^2}{{(n^2+2^2)}^{3/2}} + \dots + \frac{n^2}{{(n^2+(n+1)^2)}^{3/2}}\right] $$ If it went only up to $n$ then I could compute it with a Riemann integral: rewrite the general term, recognize the limit as a definite integral, and integrate. As it goes up to $n+1$, I am having trouble simplifying it. Any help? AI: There are $n+1$ terms, and each of them is at most $\frac{n^2}{(n^2)^{3/2}}=\frac1n$; in particular the extra last term tends to $0$, so it cannot change the limit. The first $n$ terms are exactly the Riemann sum $\frac1n\sum_{k=1}^{n}\bigl(1+(k/n)^2\bigr)^{-3/2}$ for $\int_0^1\frac{dx}{(1+x^2)^{3/2}}$. So...
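A quick numeric sanity check (a Python sketch, reading the $k$-th denominator as $n^2+k^2$): the value approaches $\int_0^1(1+x^2)^{-3/2}\,dx=\tfrac1{\sqrt2}\approx0.7071$.

```python
# Evaluate the finite sum for increasing n and compare with 1/sqrt(2).
import math

def partial(n):
    return sum(n**2 / (n**2 + k**2) ** 1.5 for k in range(1, n + 2))

print(partial(100), partial(10000), 1 / math.sqrt(2))
```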
H: Concise notation for the successor of a cyclic index Very frequently we index "cyclic objects" using the integers. For instance, we might say that the vertices of a polygon are $x_1, x_2, \dotsc, x_n$, where the "next vertex" after $x_n$ is $x_1$. This discontinuity gets quite annoying if I make definitions that depend on the successor of each index: An edge exists between $x_i$ and $x_{i + 1}$ (or if $i = n$, then $x_n$ and $x_1$). Another version which I feel is more rigorous An edge exists between $x_i$ and $x_j$, where $j = 1$ if $i = n$, and $j = i + 1$ otherwise. Of course, we could remedy this situation a little by defining $$\operatorname{succ}_n(i) = \begin{cases}i + 1 & i < n\\1 & i = n\end{cases}$$ and saying An edge exists between $x_i$ and $x_{\operatorname{succ}_n(i)}$. I can't even drop the subscript of $\operatorname{succ}_n$ if I consider many "cyclic objects" that have different "periods" - if I'm studying many different polygons, for instance. This clumsy boilerplate obfuscates the intuition of "the next object". Another solution abuses the notation and simply says $i + 1$ to mean $\operatorname{succ}_n(i)$ (with an earlier statement giving prior warning, of course). I would actually prefer this, but is this commonly accepted, especially in submissions to peer-reviewed journals? Or is there an even better solution than those that I have enumerated above? AI: Things get easier if you start from $x_0$. This way, you could state there is an edge between $x_i$ and $x_{(i+1)\bmod\, n}$ where $a \bmod b$ is the remainder of the division of $a$ by $b$.
H: Straight Forecast Probability (Horse Racing) I have probabilities on four horses winning a race. Horse A =0.52, B=0.33, C= 0.11 and D=0.04. What I want to do is find the straight forecast probability of Horse A winning and Horse B finishing second. I know there are 2! combinations for this to happen, but I'm not sure where to progress from here (or if it is possible). If the horses all had equal probabilities of winning I think it would be easier. Can someone help out please? AI: The tricky part is figuring out the probability that Horse $B$ finishes second given that Horse $A$ finished first. It's no longer going to be $0.33$, since the $3$ probabilities of $0.33, 0.11, 0.04$ no longer sum to $1$, but $0.48$. To fix this, we can scale these $3$ probabilities by dividing each by $0.48$. (Note that this is a simplifying assumption; it's not guaranteed to work all the time.) This yields: $$ P(A \text{ is first, } B \text{ is second}) = P(A \text{ is first})P(B \text{ is second} \mid A \text{ is first}) = (0.52) \left(\dfrac{0.33}{0.48}\right) = 0.3575 $$
H: Prime made from the digits of $\sqrt{22}$ Which is the smallest prime derived from the digits of $\sqrt{22}$, where the 4 before the comma is not considered? To be more precise: $x:=\sqrt{22}-4$, so $x = 0,690415...$ For every natural number n: a(n):=truncate(x*10^n), so a(n) gives the first n digits of x. Which is the smallest n such that a(n) is prime? AI: Maple didn't find any up to $n=4000$. Heuristically, a number of $n$ decimal digits has probability $\sim 1/(n \ln 10)$ of being prime, so among the truncations with at most $N$ digits you would expect about $\sum_{n\le N}\frac{1}{n\ln 10}\approx\frac{\ln N}{\ln 10}$ primes. Since this grows only logarithmically, it's not unlikely that you'll need to look at billions of digits of $\sqrt{22}$ before you get a prime.
H: Cantor's Diagonal Argument Why can't Cantor's diagonal argument, which proves that the set of real numbers is not countable, be applied to the natural numbers? Indeed, if we cancel the "0." in the proof, the list contains all natural numbers, and the argument can be applied to this set. AI: The list contains all natural numbers, but also quite a few more. Natural numbers are terminating strings of digits, that is, they are of finite length. Cantor's diagonal argument deals with all strings of digits, including non-terminating ones, and the diagonal string it produces is itself non-terminating, so it need not be a natural number. And yes, the set of all such strings is uncountable, whereas the set of terminating strings is indeed countable.
H: About $\sum_{n=1}^{\infty}\frac{(-1)^n}{n+(-1)^{n+1}}$ Determine if the following series is converges or diverges $$\sum_{n=1}^{\infty}\frac{(-1)^n}{n+(-1)^{n+1}}$$ I suspecting that we need rewrite $ \frac{(-1)^n}{n+(-1)^{n+1}} $ somehow, but how? Thanks! AI: It often helps to write out a few terms to try to see what’s going on: $$-\frac12+\frac11-\frac14+\frac13-\frac16+\frac15-\frac18+\frac17-+\ldots$$ If you calculate even-numbered partial sums, you’re going to get sums like $$s_8=\left(-\frac12+\frac11\right)+\left(-\frac14+\frac13\right)+\left(-\frac16+\frac15\right)+\left(-\frac18+\frac17\right)\;,$$ for instance. With a bit of thought you can work out that $$\begin{align*} s_{2n}&=\left(-\frac12+\frac11\right)+\left(-\frac14+\frac13\right)+\ldots+\left(-\frac1{2n}+\frac1{2n-1}\right)\\\\ &=\sum_{k=1}^n\left(-\frac1{2k}+\frac1{2k-1}\right)\\\\ &=\sum_{k=1}^n\frac1{2k(2k-1)}\;, \end{align*}$$ and $$s_{2n+1}=s_{2n}-\frac1{2(n+1)}=\sum_{k=1}^n\frac1{2k(2k-1)}-\frac1{2(n+1)}\;.$$ The original series converges if the sequence of these partial sums converges. Concentrate on the even-numbered ones, and worry about the odd-numbered ones afterwards. Does $$\sum_{k=1}^\infty\frac1{2k(2k-1)}$$ converge? If it doesn’t, the sequence of even-numbered partial sums of your original series doesn’t converge, so the series cannot converge. If it does, the sequence of even-numbered partial sums of your original series does converge, and you have only to check that the odd-numbered partial sums don’t misbehave.
H: Urn, Expected Value and Covariance How would you solve the next problem: A Urn contains 10 balls; 7 white and 3 black. 2 balls are taken randomly (without replacement), say $X$ white and $Y$ black. $X+Y=2$. Find the Covariance of $X$ and $Y$. AI: Because $Y = 2 - X$, we have \begin{align*} Cov(X, Y) & = Cov(X, 2 - X) = -Var(X). \end{align*} Compute \begin{align*} P(X = 0) & = \frac{\binom{3}{2}}{\binom{10}{2}} = \frac 1{15}\\ P(X = 1) & = \frac{7\cdot 3}{\binom{10}{2}} = \frac{7}{15}\\ P(X = 2) & = \frac{\binom{7}{2}}{\binom{10}{2}} = \frac{7}{15}\\ E[X] & = \frac{7}{5} \\ E[X^2] & = \frac{7}{3} \\ Var(X) & = \frac 73 - \frac{49}{25} = \frac{175 - 147}{75} = \frac{28}{75} \\ \therefore Cov(X, Y) & = -\frac{28}{75}. \end{align*}
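An exact check of $\operatorname{Cov}(X,Y)=-\tfrac{28}{75}$ by enumerating all $\binom{10}{2}$ equally likely draws (a Python sketch with exact rational arithmetic):

```python
# Enumerate every unordered pair of balls (7 white 'w', 3 black 'b') and compute
# E[X], E[Y], E[XY] exactly, where X = #white and Y = #black in the pair.
from fractions import Fraction
from itertools import combinations

balls = ['w'] * 7 + ['b'] * 3
draws = list(combinations(balls, 2))
n = len(draws)                                    # 45 equally likely pairs

def E(f):
    return sum(Fraction(f(d)) for d in draws) / n

EX  = E(lambda d: d.count('w'))
EY  = E(lambda d: d.count('b'))
EXY = E(lambda d: d.count('w') * d.count('b'))
print(EXY - EX * EY)                              # -28/75
```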
H: Limit of $\frac1{n^{50}}\sum\limits_{k=1}^{n}(-1)^{k}k^{50}$ when $n\to\infty$ Evaluate $\lim_{n\to\infty}\dfrac{\sum_{k=1}^{n}(-1)^{k}k^{50}}{n^{50}}$. Or can we get some formula when $50$ is replaced with $m$? AI: Denote $T_m(n) = \sum\limits_{k=1}^{n} (-1)^k k^m$;$\qquad$ $S_m(n) = \sum\limits_{k=1}^{n} k^m$ $\qquad (m\in\mathbb{N})$. Then $T_m(2n) = -S_m(2n)+2\cdot 2^m\cdot S_m(n)$, $\qquad$ $T_m(2n+1) = T_m(2n)-(2n+1)^m$. Since $S_m(n) = \dfrac{n^{m+1}}{m+1}+\dfrac{n^m}{2}+Q_{m-1}(n)$ (see Sum of powers), where $Q_{m-1}(n)$ is some polynomial of degree $m-1$, it follows that $T_m(2n) = \dfrac{(2n)^m}{2}+R_{m-1}(n)$, where $R_{m-1}(n)$ is some polynomial of degree $m-1$, and $T_m(2n+1) = \dfrac{(2n)^m-2(2n+1)^m}{2}+R_{m-1}(n)$. So, $$ \lim_{n\to\infty} \dfrac{T_m(2n)}{(2n)^m} = \dfrac{1}{2}; $$ $$ \lim_{n\to\infty} \dfrac{T_m(2n+1)}{(2n+1)^m} = -\dfrac{1}{2}. $$ In particular the limit along even $n$ is $\frac12$ and along odd $n$ it is $-\frac12$, so $\lim_{n\to\infty} T_m(n)/n^m$ does not exist.
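A quick numeric check with exact integer arithmetic (a Python sketch): the ratio is close to $+\tfrac12$ for large even $n$ and close to $-\tfrac12$ for large odd $n$.

```python
# Evaluate T_m(n) / n^m for m = 50 at a large even and a large odd n.
def ratio(n, m=50):
    t = sum((-1) ** k * k ** m for k in range(1, n + 1))
    return t / n ** m                 # exact integers; final division is a float

print(ratio(10000), ratio(10001))     # close to +0.5 and -0.5 respectively
```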
H: Uniform convergence... What did I do wrong for this function? I want to know if this function is uniformly convergent or not... Actually it is one of my mid-term exam problems, and I got a zero score for it.... What did I do wrong in this problem? And what is the easiest way to see whether it converges uniformly or not? AI: Evaluate $|f_n(x)-f(x)|$ at $x_n:=1-1/n$. You should get $|1+(1-1/n)^n-(1-1/n)^2|$. This has limit $e^{-1}\neq0$, so $\sup_x|f_n(x)-f(x)|$ does not tend to $0$ and the convergence is not uniform.
H: Detail of a proof question about linear dependent vector There is a lemma saying that if $v_1, ..., v_n$ are linearly dependent in a vector space $V$ and $v_1 \neq 0$ then there exists $j \in \{2, ..., n\}$ such that $v_j \in span(v_1,...,v_{j-1})$. The proof uses that in $a_1 v_1 + ....+a_nv_n = 0$ not all $a_i$ with $i>1$ can equal $0$ and then goes on by letting $j$ be the largest index in $2,....,n$ with $a_j \neq 0$. Then $v_j = -{a_1 \over a_j}v_1 - .... -{a_{j-1} \over a_j}v_{j-1}$. Why does $j$ have to be the largest index? Wouldn't any $j$ with $a_j$ non-zero do the trick? AI: If $j$ weren't the largest index then there would be an $a_i \neq 0$ with $i > j$. The result would then not follow as $v_j$ would be a linear combination of a set of vectors that includes $v_i$.
H: Fractional overlap of 1/2 and 1/3 Given a subset of the natural number sequence (positive integers starting from 1) we could say that $\frac12$ of the numbers in the set are divisible by 2. e.g if the set were ${[1,2,3,4,5,6,7]}$ we could say that $3\frac12$ of the numbers in it are divisible by 2. If we now wanted to work out how many numbers are divisible by 3, we could work it out as $\frac73 = 2\frac13$ and we know this is correct because if we look at the set we can see that the numbers 3 and 6 are the 2 numbers that are divisble by 3. If we wanted to work out how many numbers are divisible by 2 OR 3. At first glace I thought I could add up the 2 fractions and then subtract the overlap. This would then equate to $\frac12+\frac13-(\frac12*\frac13) = \frac23$ This makes sense $\frac23$ of all natural numbers are divisible by $2$ or $3$. So if we wanted to see how many numbers were divisible by $2$ OR $3$ in the set ${1,2,3,4,5,6,7,8}$ we could say $8 * \frac23 = 5\frac13$ and this makse sense because the $5$ numbers divisible by $2$ or $3$ are ${2,3,4,6,8}$ This is where I get confused, when I test this against the number 10 for example I get $10 * \frac23 = 6\frac23$ BUT there are $7$ numbers under $10$ that are divisible by both $2$ or $3$, so I was expecting the whole number component to be $7$ Please help me understand. Is it possible to create such a fraction that would tell me the number of elements in the set that are divisible by 2 or 3? Thanks in advance AI: Of the numbers $1$ to $n$, $\lfloor n/m \rfloor$ (i.e. the greatest integer $\le n/m$) are divisible by $m$. Now $x$ is divisible by both $m_1$ and $m_2$ if and only if $x$ is divisible by $\text{lcm}(m_1, m_2)$ (the least common multiple of $m_1$ and $m_2$: this is $m_1 m_2$ if $m_1$ and $m_2$ have no common factor $> 1$). So of the numbers $1$ to $n$, $\lfloor n/m_1 \rfloor + \lfloor n/m_2 \rfloor - \lfloor n/\text{lcm}(m_1, m_2) \rfloor$ are divisible by $m_1$ or $m_2$. For example, of the numbers $1$ to $7$, there are $$\lfloor 7/2 \rfloor + \lfloor 7/3 \rfloor - \lfloor 7/6 \rfloor = 3 + 2 - 1 = 4$$ divisible by $2$ or $3$. Since $\dfrac{n}{m} - 1 < \left\lfloor \dfrac{n}{m} \right\rfloor \le \dfrac{n}{m}$, $$ \eqalign{\dfrac{n}{m_1} + \dfrac{n}{m_2} - \dfrac{n}{\text{lcm}(m_1,m_2)} - 2 &< \left\lfloor \dfrac{n}{m_1} \right\rfloor + \left\lfloor \dfrac{n}{m_2} \right\rfloor - \left\lfloor \dfrac{n}{\text{lcm}(m_1, m_2)}\right\rfloor \cr &< \dfrac{n}{m_1} + \dfrac{n}{m_2} - \dfrac{n}{\text{lcm}(m_1,m_2)} + 1\cr}$$
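A tiny sketch comparing the floor-based inclusion-exclusion count with a direct count:

```python
# Count integers in {1,...,n} divisible by 2 or 3 two ways.
def count_div_2_or_3(n):
    return n // 2 + n // 3 - n // 6

for n in (7, 8, 10, 100):
    direct = sum(1 for k in range(1, n + 1) if k % 2 == 0 or k % 3 == 0)
    print(n, count_div_2_or_3(n), direct)   # the two counts always agree
```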
H: What is a two point support in this lemma? What is the terminology of two point support in this lemma? AI: The meaning is that the random variable $T$ takes on the two values $r$ and $\bar{r}$ with probabilities $p$ and $1-p$ respectively. Consequently, one of $T-r$ and $T-\bar{r}$ always has value $0$ and so $E[(T-r)(T-\bar{r})] = E[0] = 0$. This expression allows for an easy way of determining the stated relationship between $r$, $\bar{r}$ and the given values of $\mu$ and $\sigma^2$. It is also possible to grind out the same result from the definitions $$\begin{align} \mu &= pr + (1-p)\bar{r}\\ \sigma^2 &= p(r-\mu)^2 + (1-p)(\bar{r}-\mu)^2 \end{align}$$ eliminating $p$ in the process, but the calculations take longer and are more tedious.
H: Integration gamma and beta: $\int_0^4y^3\sqrt{64-y^3}\,\mathrm dy$ How can we evaluate the following integral? $$\int_0^4y^3\sqrt{64-y^3}\,\mathrm dy$$ I can't find anything to substitute because all of the trigonometric identities are in square form... AI: $$\begin{align}\int_0^4 dy \, y^3 (64-y^3)^{1/2} &= \underbrace{2048 \int_0^1 dx \, x^3 (1-x^3)^{1/2}}_{y=4 x} \\ &= \frac{2048}{3} \underbrace{\int_0^1 du \, u^{1/3} \, (1-u)^{1/2}}_{x=u^{1/3}}\\ &= \frac{2048}{3} \frac{\Gamma\left(\frac{4}{3}\right)\Gamma\left(\frac{3}{2}\right)}{\Gamma\left(\frac{17}{6}\right)} \end{align}$$
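A numerical cross-check of the Beta-function evaluation (a sketch with SciPy):

```python
# Compare direct quadrature of the integral with the Gamma-function expression.
from scipy.integrate import quad
from scipy.special import gamma

numeric, _ = quad(lambda y: y**3 * (64 - y**3) ** 0.5, 0, 4)
closed = 2048 / 3 * gamma(4 / 3) * gamma(3 / 2) / gamma(17 / 6)
print(numeric, closed)   # the two values agree
```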
H: Continuity of max (moving the domain and the function) Let $f:\mathbb{R}\times[0,1]\to\mathbb{R}$ be continuous and $c:\mathbb{R}\to[0,1]$ continuous. Consider $$F:\mathbb{R}\to\mathbb{R},\ \ F(x)=\max_{t\in[0,c(x)]}f(x,t)$$ Is $F$ continuous? I believe it is true, but I'm having difficulties proving it. I managed to prove that if the parameter is fixed in one of the two places, then the resulting function is continuous, i.e. $$x\mapsto\max_{t\in[0,c(x_0)]}f(x,t) \qquad\text{and}\qquad x\mapsto\max_{t\in[0,c(x)]}f(x_0,t) \qquad\text{are continuous.}$$ But now it seems not trivial to conclude by the triangle inequality... AI: Note that the set $C:=\{(x,t)\mid x\in\Bbb R,\ t\in[0,c(x)]\}$ is closed and contains the graph of $c$, the set $G(c) = \{(x,c(x))\mid x\in\Bbb R\}$, which is closed in $\Bbb R \times [0,1]$. Show that the preimage of an open subbase set $(-\infty,a)$ under $F$ is open: This is the set $\Bbb R-\{x\mid \exists t\le c(x),f(x,t)\ge a\}$. The complement can be expressed as $\pi_{\Bbb R}\bigl(f^{-1}([a,\infty))\cap C\bigr)$. Since $[a,\infty)$ is closed, its preimage under $f$ is closed. But if we now apply the projection $\pi_{\Bbb R}$, we get a closed set again. This is because the projection $X\times Y\to X$ is closed if $Y$ is compact, so it is in the case $\Bbb R\times[0,1]\to\Bbb R$. Still, there is a problem if we want to apply the same argument to an open subbase set $(a,\infty)$ since the restriction of an open map, like the projection, to a closed subset isn't necessarily open. In this case, however, $\pi_{\Bbb R}|_C$ is indeed open. This is obvious on $C\setminus G(c)$. And on $G(c)$, we have $\pi_{\Bbb R}((U\times V)\cap G(c))=c^{-1}(V)\cap U$.
H: $\frac {(x-y)}{x}$ Can it be simplified? I appreciate anyone taking a look at this. It's been ages since I've been in algebra/calculus and need to figure out if $\dfrac {(x-y)}{x}$ can be simplified or would it be $\left(1 - \dfrac yx\right)$? Thank you, Josh Thanks to all the very speedy responses. I guess my algebra isn't as rusty as I thought. This question can be marked as solved/closed. AI: $1-\frac yx$ is the best you will do for most purposes. Sometimes the original will be better, depending on the rest of the expression.
H: A simple proof for the vector space of continuous functions Since my math skill/knowledge is limited (basic real analysis, linear algebra etc.) and I can only follow proofs based on this knowledge, I am looking for a simple proof of the following claim: Let $C([a,b],\mathbb{R})$ denote the set of all continuous real-valued functions defined on the interval $[a,b]$. This vector space has no finite basis. Thanks in advance for any help! AI: If you have a basis of cardinality $n$, then any $m>n$ elements are linearly dependent. So to prove that the space has no finite basis, you can simply prove that it contains an infinite set of linearly independent elements. Can you think of such a set? Hint: think about polynomials. - For all $k\in \Bbb N$, let $e_k : x\mapsto x^k$, so that $e_k \in \mathcal C^0 \left([a,b],\Bbb R\right)$. Can you prove they are linearly independent? - Let $l \in \Bbb N$ and $\alpha_0,\dots,\alpha_l \in \Bbb R$ be such that $\sum\limits_{k=0}^l \alpha_k e_k = 0$. Then $\forall x \in [a,b], \sum\limits_{k=0}^l \alpha_k x^k=0$; since two polynomials of degree at most $l$ that agree at more than $l$ points are identical, and $[a,b]$ is infinite, in fact $\forall x \in \Bbb R, \sum\limits_{k=0}^l \alpha_k x^k=0$. Let $I=\{j \in \{0,\dots,l\} \mid \alpha_j \not = 0\}$ and suppose $I\not= \emptyset$. Let $d=\max I$. Then $\forall x \in \Bbb R^*, \sum\limits_{k=0}^d \alpha_k x^k=0$ with $\alpha_d \not= 0$, so $\sum\limits_{k=0}^d \alpha_k x^{k-d}=0$, so $\lim\limits_{x\to +\infty}\sum\limits_{k=0}^d \alpha_k x^{k-d}=0$, so $0+\dots+0+\alpha_d = 0$, which is absurd; hence $I=\emptyset$, i.e. $\forall i \in \{0,\dots,l\}, \alpha_i=0$.
H: Does this system of congruences have a solution even if they are not relatively prime at first? $$x \equiv 4\ (\textrm{mod}\ 15) \ \ \ \ \land\ \ \ \ \ x\ \equiv 6\ (\textrm{mod}\ 33)$$ Does this system of congruences have a solution even if they are not relatively prime at first? If I try to break the congruences into smaller ones, for instance: $$x \equiv 4\ (\textrm{mod}\ 15) \rightarrow \ \ \ \ x \equiv 4\ (\textrm{mod}\ 5) \ \ \ \ \ \ \ \land \ \ \ \ x \equiv 1\ (\textrm{mod}\ 3)$$ $$x \equiv 6\ (\textrm{mod}\ 33) \rightarrow \ \ \ \ x \equiv 6\ (\textrm{mod}\ 11) \ \ \ \ \land \ \ \ \ x \equiv 0\ (\textrm{mod}\ 3)$$ I can see that the moduli are still not relatively prime, since $\gcd(3,3) \neq 1$. But I'm not sure if my reasoning is correct, or if there is another way for the system of congruences to have a solution. Thanks! AI: Suppose $x$ is a solution of your system; then you'd get $15k+4=33m+6$ for suitable $m,k \in \mathbb{Z}$, but this means that $15k-33m=2$, which is impossible mod $3$. In general, a system of the form $x\equiv a \pmod{w} \ \wedge\ x \equiv b \pmod{k}$ has a solution iff $\gcd(w,k)\mid (b-a)$.
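A short brute-force check of this conclusion (not part of the original answer; a sketch assuming plain Python, 3.9+ for `math.lcm`). It scans one full period $0 \le x < \operatorname{lcm}(15,33)=165$, which suffices because the system is periodic with that period, and it also evaluates the general divisibility criterion:

```python
from math import gcd, lcm

# Scan one full period: if no residue works there, no integer works at all.
solutions = [x for x in range(lcm(15, 33)) if x % 15 == 4 and x % 33 == 6]
print(solutions)                      # [] -- the system has no solution

# The general criterion from the answer: solvable iff gcd(15, 33) divides 6 - 4.
print((6 - 4) % gcd(15, 33) == 0)     # False, since gcd(15, 33) = 3 does not divide 2
```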
H: Continuous with no left derivative I'm sure there are many examples for this, but I just can't find it. What is an example of a function continuous at point $a$, but has no left derivative at point $a$? AI: Of course, simple absolute value functions won't work since you want the failure of a unilateral derivative. Consider $f(x) = x \sin \frac{1}{x}$ if $x \neq 0$ and $f(0) = 0.$ At $x = 0,$ this function is neither left (finitely or infinitely) differentiable nor right (finitely or infinitely) differentiable. A nice way of seeing this geometrically is to consider the graph of $y = f(x)$ and the graphs of $y = \pm x$ on the same coordinate axes.
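To see the failure numerically, one can look at the left difference quotients $\frac{f(h)-f(0)}{h}=\sin\frac1h$ along a sequence $h\to 0^-$; they keep oscillating instead of settling down. A quick sketch (plain Python, not part of the original answer; the sample points $h_n=-2/(n\pi)$ are chosen purely for illustration):

```python
import math

def f(x):
    return x * math.sin(1 / x) if x != 0 else 0.0

# Left difference quotients at 0 along h_n = -2/(n*pi): they cycle through
# roughly -1, 0 and 1, so no left-hand limit (hence no left derivative) exists.
for n in range(1, 10):
    h = -2 / (n * math.pi)
    print(h, (f(h) - f(0)) / h)
```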
H: Is there a series $\sum (a_n)$ that converges conditionally but $\sum (a_n -1/n)$ doesn't? I'm studying for a test in calculus and have encountered a question I can't solve: I need to refute the existence of the following series, but I can't find a proof. 1. A series $\sum(a_n)$ that converges conditionally but such that $\sum(a_n -1/n)$ doesn't converge absolutely. 2. A series $\sum(a_n)$ that converges conditionally but such that $\sum(a_n -1/n^2)$ doesn't converge absolutely. Any help will be greatly appreciated. Thank you very much AI: I hope the below is what you asked for; it is based on the assumption that you asked to refute the existence of conditionally, but not absolutely, convergent series $\sum a_n$ with the stated properties. Unfortunately, both of your statements are incorrect. Ad 1: Let $a_n = \dfrac{(-1)^n}n$. Then $\sum a_n$ converges conditionally; however it is clear that $\sum \left(a_n - \dfrac1n\right)$ does not converge at all. Ad 2: Suppose $\sum a_n$ converges conditionally. Then since $\sum \dfrac1{n^2}$ converges absolutely, we have by the sum rule for limits (and hence, series) that $\sum \left(a_n - \dfrac1{n^2}\right)$ will converge conditionally as well. Moreover, suppose $\sum \left(a_n - \dfrac1{n^2}\right)$ converged absolutely. Then by the triangle inequality: $$|a_n| = \left|a_n - \frac1{n^2} +\frac1{n^2}\right| \le \left|a_n-\frac1{n^2}\right| + \frac1{n^2}$$ and since, by assumption, the right-hand side is summable, so is the left-hand side; but this implies $\sum a_n$ converges absolutely, a contradiction.
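For the counterexample to 1, the partial sums of $\sum\left(\frac{(-1)^n}{n}-\frac1n\right)$ drift off to $-\infty$ roughly like $-\ln N$, which is easy to watch numerically (a small sketch in plain Python, assuming the summation starts at $n=1$; not part of the original answer):

```python
import math

def partial_sum(N):
    return sum((-1)**n / n - 1 / n for n in range(1, N + 1))

# The partial sums head off to -infinity roughly like -ln(N) + const,
# so the modified series diverges even though sum((-1)^n / n) converges.
for N in (10, 100, 1000, 10000):
    print(N, partial_sum(N), -math.log(N))
```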
H: Concavity and Convexity A set $X \subseteq \mathbb{R}^n$ is said to be convex if $tx + (1-t)y \in X$ for all $x,y \in X$ and $t \in (0,1)$. Given a convex set $X \subseteq \mathbb{R}^n$, a function $f: X \to \mathbb R$ is said to be concave if $f(tx + (1-t)y) \ge tf(x) + (1-t)f(y)$ for all $x,y \in X$ and $t \in (0,1)$. 1) Show that $f: \mathbb{R}^n \to \mathbb R$ is concave iff $\sum_{i=1}^r t_if(x_i) \le f\left(\sum_{i=1}^r t_ix_i\right)$ for every positive integer r, for all $x_1, \dots, x_r \in \mathbb R^n$, and all $t_1,\dots,t_r \in (0,1)$ with $\sum_{i=1}^r t_i = 1$ 2) Use (1) to show that $\prod_{i=1}^r x_i^{t_i} \le \sum_{i=1}^r t_ix_i$ for all non negative $x_1, \dots, x_r \in \mathbb R$ and all $t_1,\dots,t_r \in (0,1)$ with $\sum_{i=1}^r t_i = 1$ 3) Show that the function $f: \mathbb{R}^n \to \mathbb{R}$ is convex iff the set $\{(x,r) \in \mathbb{R}^n \times \mathbb{R} \mid f(x) \le r \}$ is convex. AI: For part 1), try induction on $r$. For 2), think about log. For 3) think about the pairs $(x, f(x)), (y, f(y))$. I'm being somewhat mysterious because I think these are healthy exercises to solve for oneself. EDIT: Hey sorry for the delay. For the first part, the second condition implies concavity so we only need to prove the first condition implies the second. For the base case, this is easy. Suppose it holds for k and pick $x_1, ..., x_k, x_{k+1} \in \mathbb{R}^n $ and $t_1, ..., t_{k+1} \in (0, 1)$ such that $\sum\limits_{i=1}^{k+1} t_i=1$. Now, let $x'_k=\frac{t_kx_k+t_{k+1}x_{k+1}}{t_k+t_{k+1}}$ and $t'_k= t_k+t_{k+1}$. Then $\sum\limits_{i=1}^{k-1} t_i + t'_k=1$ and by inductive assumption we therefore have $t'_kf(x'_k)+\sum\limits_{i=1}^{k-1} t_if(x_i)$ $ \leq f( t'_kx'_k+\sum\limits_{i=1}^{k-1} t_ix_i)=f(\sum\limits_{i=1}^{k+1} t_ix_i)$. To finish up, we need to show that $t_kf(x_k)+t_{k+1}f(x_{k+1}) \leq t'_kf(x'_k)$. Now, $0<t_k+t_{k+1}<1$ so there is some $r \in \mathbb{R}$ so that $rt_k+rt_{k+1}=1$. Then by the base case, $rt_kf(x_k)+rt_{k+1}f(x_{k+1}) \leq f(rt_kx_k+rt_{k+1}x_{k+1})=f(rt'_kx'_k)=f(x'_k)$. But by definition, $r= \frac{1}{t_k+t_{k+1}}$ and so dividing by $r$, this gives what we want.
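To make the hint for part (2) concrete (this step is not spelled out in the original answer): $\log$ is concave on $(0,\infty)$, so the analogue of part (1) on that convex set gives, for positive $x_1,\dots,x_r$, $$\sum_{i=1}^r t_i \log x_i \le \log\!\left(\sum_{i=1}^r t_i x_i\right),$$ and exponentiating both sides yields $\prod_{i=1}^r x_i^{t_i} \le \sum_{i=1}^r t_i x_i$. If some $x_i = 0$, the inequality is immediate since the left-hand side is then $0$.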
H: Stronger two-space formulation of Hurewicz theorem about homotopy and homology groups The following theorem of Hurewicz holds (let $\cdot$ be the one-point space and $n\!\geq\!2$): If $\pi_i(X)\cong\pi_i(\cdot)$ for $i\!<\!n$, then $H_i(X)\cong H_i(\cdot)$ for $i\!<\!n$ and $\pi_n(X)\cong H_n(X)$. Does the following hold (I suspect not): (1) If $\pi_i(X)\cong\pi_i(Y)$ for $i\!<\!n$, then $H_i(X)\cong H_i(Y)$ for $i\!<\!n$, and $\pi_n(X)\cong H_n(X)$? Does the following hold: (2) If $\pi_i(X)\cong\pi_i(Y)$ for all $i$, then $H_i(X)\cong H_i(Y)$ for all $i$? Does the following hold: (3) If $H_i(X)\cong H_i(Y)$ for all $i$, then $\pi_i(X)\cong \pi_i(Y)$ for all $i$? If yes, how can I prove them, preferably using the Hurewicz theorem above. If not, are some additional mild hypotheses sufficient to make it work? Do (2') and (3') hold, in which $H_i$ has been replaced by $H^i$? AI: (1) does not hold. For the last part ($\pi_n(X) \simeq H_n(X)$), simply take $X=Y$ and find a counterexample to Hurewicz's theorem if you don't assume $\pi_k(X) = 1 \forall k < n$. For the first part, take the following counterexample (with a big enough $n$ such that $H_n(X) \not\simeq H_n(Y)$): (2) does not hold either. For example, $X = S^2 \times \mathbb R P^3$ and $Y = S^3 \times \mathbb R P^2$ have the same homotopy groups, but not the same homology (see Wikipedia's article and Grumpy Parsnip's answer). The additional condition needed is that the isomorphisms $\pi_*(X) \simeq \pi_*(Y)$ must be induced by a map $f : X \to Y$. (3) does not hold either. $S^2 \times S^4$ and $\mathbb C P^3$ have the same homology groups ($\mathbb Z$ in degrees 0,2,4,6 and zero elsewhere), but different homotopy groups: for instance, the fibration $S^1 \to S^7 \to \mathbb C P^3$ gives $\pi_4(\mathbb C P^3) \cong \pi_4(S^7) = 0$, while $\pi_4(S^2 \times S^4) \cong \pi_4(S^2)\oplus\pi_4(S^4) \cong \mathbb Z/2 \oplus \mathbb Z \neq 0$. For (1) and (3) I don't know of nice additional conditions that would make the theorems true. Note that the counterexamples to (2) and (3) also hold in cohomology (use the universal coefficient theorem).
H: What's special about the first vector My linear algebra notes state the following lemma: If $(v_1, ...,v_m)$ is linearly dependent in $V$ and $v_1 \neq 0$ then there exists $j \in \{2,...,m\}$ such that $v_j \in span(v_1,...,v_{j-1})$ where $(...)$ denotes an ordered list. But if at least one $v_i$ is $\neq 0$ then the list can be reordered and the lemma applied. Is $v_1 \neq 0$ just another way of saying $v_i \neq 0$ for at least one $i$? AI: Your book is correct, but silly. It should not have excluded $v_1=0$, but allowed $j=1$ instead. By convention $\operatorname{span}()=\{0\}$ (it is important that the span of every set of vectors is a subspace, so the empty set should give the null subspace), and $v_1=0$ (which all by itself makes the set linearly dependent) would not be an exception, because one can then take $j=1$ (and if there is no linear dependence among the remaining vectors, one has to take $j=1$). So it is just another case of unfounded fear of the void. The result states (or should) that given an ordered sequence of linearly dependent vectors, there is always one of them that is in the span of the set of vectors preceding it. This is always true. Indeed, you can always take this vector to be the first one that makes the sequence-up-to-there linearly dependent. The empty sequence is always independent, and a sequence with one vector is linearly dependent only if that vector is zero, in which case it is in the (zero-dimensional) span of the (empty) set of preceding vectors. If the first is not zero but after some independent vectors a zero vector comes along, then it is also in the span of the set of preceding vectors, but that span now has positive dimension. Of course a nonzero vector in the same span, in place of that zero vector, would also have made the sequence dependent; this is in fact the more common case.
H: Meaning of $\int\mathop{}\!\mathrm{d}^4x$ What does the following formula mean? $$\int\mathop{}\!\mathrm{d}^4x$$ I know that $\int f(x)\mathop{}\!\mathrm{d}x$ is the integral of the function $f$ over the variable $x$, but $\int\mathop{}\!\mathrm{d}^4x$ leaves the argument empty and also has the $\mathrm{d}$ raised to the fourth power. What does that mean? Update. In the Einstein-Hilbert action we have (note that I only understood that the other parts of the integral are relevant after your answers, thus sorry): $$S=\frac{c^4}{16\pi G}\int\mathop{}\!\mathrm{d}^4x \, \ R \sqrt{-g}$$ AI: It is a notation usually used in physics. In their language $$ \int\mathop{}\!\mathrm{d}^3 p \,f(p) = \int_{\Omega} f(p_x,p_y,p_z)\mathop{}\!\mathrm{d}p_x\mathop{}\!\mathrm{d}p_y\mathop{}\!\mathrm{d}p_z. $$ Several remarks: The domain where the integration is performed is unspecified, because we want the flexibility of doing the integration in momentum space or position space. The exponent on the shoulder of $\mathrm{d}$ specifies the dimensionality of this integral, so there is no need for $\iint$ or $\iiint$. The variable the integration is performed over is written before the integrand. So if it is $\mathrm{d}^4 x$, normally it means to integrate over the whole space-time. Some updates: I am not an expert in GR, but if the integral is $$ S=\frac{c^4}{16\pi G}\int\mathop{}\!\mathrm{d}^4x \, R \sqrt{-g}, $$ then $\mathrm{d}^4x\,\sqrt{-g}$ means the integration is done over the whole space-time $\mathcal{M}$ with a certain metric $g$. The metric also has a negative determinant (relevant to the $(-+++)$ sign convention). The "volume" element in space-time for this integration is actually $$ \underbrace{\mathrm{d}^4 x\,\sqrt{-g}}_{\text{Physics type of notation}} = \underbrace{\sqrt{|g|}\, \mathrm{d} x\wedge \mathrm{d} y\wedge \mathrm{d} z\wedge c\,\mathrm{d} t}_{\text{Volume element of integration performed on a manifold with metric } g}. $$ Here we can't separately write $\mathrm{d}^4 x$; it would cause a certain level of confusion about the metric. Also, the mathematical type of notation is only valid once we equip the manifold locally with a coordinate system $(\mathrm{d}x,\mathrm{d}y,\mathrm{d}z,-c\,\mathrm{d}t)$, while the physics notation leaves the volume element's coordinate system unspecified to avoid some technical issues. Now, because the space-time metric $g$ is not constant but rather changes over the whole space-time, we normally do not write it in the form $\mathrm{d} x\,\mathrm{d} y\,\mathrm{d} z\,c\,\mathrm{d} t$ or $ \mathrm{d} x\wedge\mathrm{d} y\wedge\mathrm{d} z\wedge c\,\mathrm{d} t$ unless we know the space-time is flat (Minkowski space-time). I suggest you try to derive the action for a unit-mass particle moving in a gravitational field in Minkowski space-time yourself, to better understand the notation. A simple analogy: the "area" element in the polar coordinate system is $r\,\mathrm{d}r\,\mathrm{d}\theta$ (written in math notation convention), while in physics the integral is written as: $$ \int\mathop{}\!\mathrm{d} \theta\,\mathrm{d} r\,r \,f(r,\theta). $$
H: Irreducible representations over $\Bbb R$ How to prove that all irreducible representations over $\mathbb{R}$ of a finite abelian group have dimension 1 or 2? AI: For every element $g$ in your finite abelian group $G$, you have an endomorphism of the vector space $V$ underlying your representation. Since your group is abelian, these endomorphisms commute, hence you can diagonalize them simultaneously (over $\mathbb C$). This proves, of course, that over the complex numbers (or any algebraically closed field), your irreducible representations are one-dimensional. Over $\mathbb R$, the best you can hope for is that your endomorphisms can be block-diagonalized simultaneously, with blocks of size at most 2. This is because an irreducible polynomial over $\mathbb R$ has degree at most 2. EDIT: Let's prove this. If your polynomial has odd degree, then it has a root (this is a consequence of the intermediate value theorem). Now, suppose your polynomial has even degree. By looking at its solutions over $\mathbb C$, we know they come in conjugate pairs $z$, $\overline z$. If $z$ is real, then $(X-z)$ is already a proper real factor; otherwise both $(X-z)$ and $(X-\overline z)$ divide your polynomial, and since they are coprime, so does their product, which is a quadratic polynomial with real coefficients. Therefore, any real polynomial of degree at least 3 is reducible. Therefore, your irreducible representations are at most two-dimensional.
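A concrete example (not in the original answer) showing that dimension $2$ really does occur: for the cyclic group $\mathbb Z/n$ with $n \ge 3$, sending a generator to the rotation matrix $$\begin{pmatrix}\cos\frac{2\pi}{n} & -\sin\frac{2\pi}{n}\\ \sin\frac{2\pi}{n} & \cos\frac{2\pi}{n}\end{pmatrix}$$ defines a representation on $\mathbb R^2$ that is irreducible over $\mathbb R$: a one-dimensional invariant subspace would be a real eigenline, but the eigenvalues $e^{\pm 2\pi i/n}$ are not real for $n\ge 3$.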
H: Proof of $\lim_{x \to 0^+} x^x = 1$ without using L'Hopital's rule How to prove that $\lim_{x\to 0^+} x^{x} = 1$, or $\lim_{x\to 0^+} x\ln(x) = 0$, without using L'Hopital's rule. AI: Another answer: on the interval $(0,1/e)$ the function $x^x = e^{x\ln x}$ is strictly monotonic (the derivative of $x\ln x$ is $\ln x + 1 < 0$ there), so the one-sided limit $\lim_{x\to 0^+}x^x$ exists and can be computed along any sequence tending to $0^+$. So it suffices to show that $\lim_{n\to\infty}\left(\frac1n\right)^{1/n}=1$, or equivalently, $\lim_{n\to\infty}\sqrt[n]{n}=1$. You can see this thread for various proofs of this. In particular, Aryabhata's answer is easy enough to swallow.
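For completeness, here is one short elementary argument for $\sqrt[n]{n}\to 1$ (added here; it may or may not be the one given in the linked thread): by the AM–GM inequality applied to the $n$ numbers $\sqrt n,\sqrt n,1,\dots,1$ (for $n\ge 2$), $$1 \le \sqrt[n]{n} = \left(\sqrt n\cdot\sqrt n\cdot 1\cdots 1\right)^{1/n} \le \frac{2\sqrt n + (n-2)}{n} = 1 + \frac{2\sqrt n - 2}{n} \xrightarrow[n\to\infty]{} 1,$$ so $\sqrt[n]{n}\to 1$ by squeezing, and hence $\left(\frac1n\right)^{1/n} = \frac1{\sqrt[n]{n}} \to 1$ as well.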
H: What is the next number in the sequence: $24, 30, 33 , 39 , 51,...$ What is the next number in the sequence: $24, 30, 33 , 39 , 51,...$ Here all the numbers are divisible by $3$, and the differences between consecutive numbers are $6,3,6,12,...$, but I can't find a common relationship in the sequence. AI: Looks to me like each term is the preceding term, plus the preceding term's sum of digits. The sequence would then be: $24, 30, 33, 39, 51, 57, 69, 84, 96,...$ Note that if a number is divisible by 3, then the sum of its digits is also divisible by 3, resulting in all of the numbers in the sequence being divisible by 3. Because of this, there are probably many other ways to define the sequence, but I find the one I gave to be fairly natural. Edit: Just noticed @DavidMitra's earlier comment which links to an OEIS page which includes the answer I gave, but I'll leave my answer here as reference.
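The proposed rule is easy to check against the given terms with a small script (plain Python, purely as an illustration of the rule described in the answer):

```python
def next_term(n):
    # Add the sum of n's decimal digits to n itself.
    return n + sum(int(d) for d in str(n))

seq = [24]
for _ in range(8):
    seq.append(next_term(seq[-1]))

print(seq)  # [24, 30, 33, 39, 51, 57, 69, 84, 96]
```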
H: Existence of product in the category of pre-sheaves of abelian categories Let $X$ be a topological space and let $Top(X)$ be the category of open sets of $X$ with inclusion maps as morphisms. Let $\mathcal{C}$ be an abelian category and let $\mathcal{C}_x$ denote the category of contravariant functors from $Top(X)$ to $\mathcal{C}$. Let $\mathcal{F}, \mathcal{G} \in \operatorname{obj}(\mathcal{C}_x)$. I want to show that the product of $\mathcal{F}$ and $\mathcal{G}$ exists. Let us define $(\mathcal{F} \times \mathcal{G})(U)=\mathcal{F}(U) \times \mathcal{G}(U)$. I want to show this is the direct product. Let $\mathcal H \in \mathcal{C}_x$ with natural transformations $i_1$ and $i_2$ to $\mathcal{F}$ and $\mathcal{G}$. Now, I know there is a unique natural transformation $\eta$ from $\mathcal H$ to $\mathcal{F} \times \mathcal{G}$, as there is a unique map from $\mathcal H(U)$ to $\mathcal{F}(U) \times \mathcal{G}(U)$ for any open set $U$. But how do I show that the naturality square for $\eta$ commutes? AI: By the naturality square you mean the diagram you get with respect to some inclusion $V \to U$? If so, then start with $x \in \mathcal H(U)$ and push $x$, using $i_1$ and $i_2$, to $y \in \mathcal F(U)$ and $z \in \mathcal G(U)$. By naturality of $i_1$ and $i_2$, pushing $x|_V$ through $i_1$ and $i_2$ yields $y|_V$ and $z|_V$. Now by definition of the maps $\mathcal H(U) \to \mathcal F(U) \times \mathcal G(U)$ we have that $x$ goes to $(y, z)$ and $x|_V$ goes to $(y|_V, z|_V)$, and by definition of the restrictions of $\mathcal{F \times G}$ we get that $(y, z)|_V = (y|_V, z|_V)$; thus $\mathcal{H \to F \times G}$ is natural. You should definitely draw out a big diagram on paper and trace through what I've said here to make sure you understand it because it can be confusing at first, but once you get it you'll see that it amounts to nothing more than a diagram chase.