H: trigonometric representation of a complex number. Let $z=e^{it}+1$ where $0\leq t\leq \pi$. Find the trigonometric representation of $z^2+z+1$. (The trigonometric representation should be in the form $r(\cos \theta +i \sin \theta)$, where $r,\theta \in \mathbb{R}$ and $r>0$.) What I have done: $$z^2+z+1=e^{2it}+3z=(\cos (t)+i \sin (t)) (-2 i \sin (t)+4 \cos (t)+3).$$ The problem is that the expression in the right bracket is not always real. AI: The complex number $a+bi$ can be written in trigonometric form as $$a+bi=\sqrt{a^{2}+b^{2}}(\cos \theta +i\sin \theta ),\qquad\text{with } \tan \theta =\frac{b}{a}.$$ For $z=e^{it}+1=(\cos t+1)+i\sin t$ we have \begin{eqnarray*} z^{2}+z+1 &=&\left( e^{it}+1\right) ^{2}+(e^{it}+1)+1= (e^{i2t}+2e^{it}+1)+(e^{it}+1)+1 \\ &=&e^{2it}+3e^{it}+3 \\ &=&((\cos 2t+1)+i\sin 2t)+3\left( (\cos t+1)+i\sin t\right) +3 \\ &=&(\cos 2t+3\cos t+3)+i\left( \sin 2t+3\sin t\right) \\ &=&u+iv,\qquad u=\cos 2t+3\cos t+3,\quad v=\sin 2t+3\sin t \\ &=&r\left( \cos \theta +i\sin \theta \right) , \end{eqnarray*} where \begin{eqnarray*} r &=&\sqrt{u^{2}+v^{2}}=\sqrt{(\cos 2t+3\cos t+3)^{2}+\left( \sin 2t+3\sin t\right) ^{2}} \\ \tan \theta &=&\frac{v}{u}. \end{eqnarray*} For $0\leq t\leq \pi$, $v=\sin 2t+3\sin t\geq 0$ and $u=\cos 2t+3\cos t+3\geq 0$. So $$ \theta =\arctan \frac{v}{u} =\arctan \left( \frac{\sin 2t+3\sin t}{\cos 2t+3\cos t+3}\right) \geq 0, $$ because $w=z^2+z+1=u+iv=\operatorname{Re}(w)+i\operatorname{Im}(w)$ is in the first quadrant. Plots of $u=\cos 2t+3\cos t+3$ and $v=\sin 2t+3\sin t$ over $0\leq t\leq \pi$ confirm that both stay nonnegative.
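As a quick numerical sanity check of the formulas in this answer (a sketch; the test value $t=1$ is arbitrary):

```python
import numpy as np

t = 1.0                                   # any value in [0, pi]
z = np.exp(1j * t) + 1
w = z**2 + z + 1

u = np.cos(2*t) + 3*np.cos(t) + 3
v = np.sin(2*t) + 3*np.sin(t)
r = np.hypot(u, v)
theta = np.arctan2(v, u)

# the trigonometric form r(cos(theta) + i sin(theta)) should reproduce z^2 + z + 1
assert np.isclose(r * (np.cos(theta) + 1j * np.sin(theta)), w)
print(r, theta)
```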
H: Cramer-Rao lower bound for the variance of the unbiased estimator of $\tau(\theta)$ Given the pdf $f(x;\theta)=\frac{1}{\pi[1+(x-\theta)^2]}$ ; $-\infty < x<\infty$, $-\infty < \theta<\infty$ Show that the Cramer-Rao lower bound is 2/n where n is the sample size. AI: 1- you have the assumption that the samples are independent. 2- calculate the log-likelihood for the $n$ samples. All likelihoods are multiplicative due to independence. 3- find the second derivative of the log-likelihood function. 4- take the minus expectation w.r.t. $X$ of what you've found at 3, and take the $-1$st power. See http://en.m.wikipedia.org/wiki/Cramér–Rao_bound for more information
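Carrying out steps 2-4 for this particular density (a sketch, using the equivalent score-squared form of the Fisher information for the Cauchy location family): $$\log f(x;\theta)=-\log\pi-\log\bigl(1+(x-\theta)^2\bigr),\qquad \frac{\partial}{\partial\theta}\log f(x;\theta)=\frac{2(x-\theta)}{1+(x-\theta)^2},$$ so the per-observation Fisher information is $$I_1(\theta)=E\!\left[\left(\frac{2(X-\theta)}{1+(X-\theta)^2}\right)^{2}\right]=\frac{4}{\pi}\int_{-\infty}^{\infty}\frac{u^2}{(1+u^2)^3}\,du=\frac{4}{\pi}\cdot\frac{\pi}{8}=\frac12.$$ With $n$ independent observations the information is $n/2$, so the Cramer-Rao lower bound for an unbiased estimator of $\theta$ is $\frac{1}{n/2}=\frac{2}{n}$.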
H: Question about uniqueness function series Please pardon me if this is elementary, but I've looked hard for an answer to this and am very surprised I have yet to find a good one. My question is simple: under what condition(s) does $$\sum_{n=1}^{\infty}f_n(x) = \sum_{n=1}^{\infty}g_n(x)$$ imply that $$f_n(x) = g_n(x)$$? AI: It's hard to find any reasonable conditions for such an implication. The reason is simply that given any series of the form $\sum_{n=1}^\infty f_n(x)$ that converges, then changing the order of summation, or making small or large changes to $f_1$ only to absorb these in other $f_k(x)$ gives too much freedom for finding infinitely many possibilities for functions $g_k(x)$ that will add up to the same value. If you place some severe restrictions on the functions though then some (not terribly interesting) implications are possible. For instance, if all functions attain non-negative values, and moreover the inequalities $f_k(x)\le g_k(x)$ hold for all $x$ in the relevant domain, then equality of the series will imply the equality of the functions (in the relevant domain). But really, this is hopeless. It's like asking for conditions that will assure that two series of real numbers are equal. There are simply too many possibilities. Even with sums of just two real numbers, what conditions are there on $a,b,c,d$ that ensure that if $a+b=c+d$, then $a=c$ and $b=d$???
H: Integration question verifying piecewise I have the following question: from direct integration show $\displaystyle \int \limits_{-L}^{L} \cos({m \pi x\over L})\cos({n\pi x\over L}) \ dx = \begin{cases}0 & m \neq n \\ L & m = n \\ \end{cases} $ I use a trigonometric identity and evaluate individually to produce: $\displaystyle {L\sin(\pi(m-n))\over \pi m-\pi n} + {L\sin(\pi(m+n))\over \pi m+\pi n}$ I am pretty sure I am correct but I can quite clearly see that whenever m=n the part on the left will not exist let alone equal L. Any hints would be appreciated. Cheers AI: It's safest to consider the case $m=n$ separately when solving the integral. But also note that $\lim_{x\rightarrow 0}\frac{\sin x}{x}=1$.
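For instance, handling the case $m=n\neq 0$ separately (a sketch): $$\int_{-L}^{L}\cos^2\!\left(\frac{n\pi x}{L}\right)dx=\int_{-L}^{L}\frac{1+\cos\!\left(\frac{2n\pi x}{L}\right)}{2}\,dx=L+\left[\frac{L}{4n\pi}\sin\!\left(\frac{2n\pi x}{L}\right)\right]_{-L}^{L}=L,$$ which is also what you get by letting $m\to n$ in your expression and applying $\lim_{x\to 0}\frac{\sin x}{x}=1$ to the first term.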
H: Find the range of: $y=\sqrt{\sin(\log_e\frac{x^2+e}{x^2+1})}+\sqrt{\cos(\log_e\frac{x^2+e}{x^2+1})}$ Find the range of: $$y=\sqrt{\sin(\log_e\frac{x^2+e}{x^2+1})}+\sqrt{\cos(\log_e\frac{x^2+e}{x^2+1})}$$ What I tried: Let $$\log_e\frac{x^2+e}{x^2+1}=X,$$ then $$y=\sqrt {\sin X}+\sqrt{\cos X},$$ with $y_{max}$ at $X=\pi/4$. The rest is too complicated. I am stuck. Can someone please give an analytical solution. Even Wolfram alpha doesn't help. AI: A related problem. First, we study the expression $$ \frac{x^2+e}{x^2+1}=1+\frac{e-1}{x^2+1} \longrightarrow_{|x|\to\infty} 1, $$ which implies $X=\ln\frac{x^2+e}{x^2+1}\longrightarrow_{|x|\to\infty} 0$ and hence $ y(x)\longrightarrow_{|x|\to \infty} \sqrt{\sin 0}+\sqrt{\cos 0}=1 $. To find the maximum of the function $y(x)$, let's study the function $$ h(t)=\sqrt{\sin(t)}+\sqrt{\cos(t)}. $$ The maximum of the above function is attained at $t=\frac{\pi}{4}$, which can be proved using the derivative test. So, this implies our function attains its max when $$ \ln\left( \frac{x^2+e}{x^2+1} \right)=\frac{\pi}{4} \implies x=0.6632987771, -0.6632987771. $$ Plugging back in the function y(x) gives the max which is $y=1.681792830$. So the range is $$ 1 < y \leq 1.681792830.$$ Note: You can solve $ \ln\left( \frac{x^2+e}{x^2+1} \right)=\frac{\pi}{4} $ easily as, $$ \ln\left( \frac{x^2+e}{x^2+1} \right)=\frac{\pi}{4}\implies \frac{x^2+e}{x^2+1} =e^{\frac{\pi}{4}}\implies x^2+e= e^{\frac{\pi}{4}}x^2+ e^{\frac{\pi}{4}}=\dots.$$ I think you can finish it.
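A quick numerical check of the claimed maximum and the limiting behaviour (a sketch):

```python
import numpy as np

x = np.linspace(-50, 50, 200001)
X = np.log((x**2 + np.e) / (x**2 + 1))           # X lies in (0, 1]
y = np.sqrt(np.sin(X)) + np.sqrt(np.cos(X))

print(y.max())      # ~1.68179..., attained near x = +/- 0.6633
print(y[0])         # ~1.026 at x = -50; y keeps decreasing toward 1 as |x| grows
```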
H: How to show that $\frac{1}{(1-\frac{1}{4}z^{-1})(1-\frac{1}{4}z)} = \frac{-4z^{-1}}{(1-\frac{1}{4}z^{-1})(1-4z^{-1})}$ Can anyone help me clarify what rule is used in this rewriting of this fraction? $$\frac{1}{\left(1-\dfrac{1}{4}z^{-1}\right)\left(1-\dfrac{1}{4}z\right)} = \frac{-4z^{-1}}{\left(1-\dfrac{1}{4}z^{-1}\right)\left(1-4z^{-1}\right)}$$ AI: Multiply both numerator and denominator with $-4z^{-1}$. For the term $(1-z/4)$ in the denominator you get $$-4z^{-1}(1-z/4)=-4z^{-1}+1=(1-4z^{-1})$$
H: Finding the value of distribution function of a converging random variable There is this example in a note that I think is supposed to be a simple problem, but I still find it not as straightforward. Consider a sequence of random variables $X_n\equiv1/n,X\equiv0$. Then $F_{X_n}(t)\to F_X(t)$ at all $t\not=0$ (as $n\to\infty$), but how can we show that $F_X(0)=1$ and $F_{X_n}(0)=0$? (in other words, how can we compute $F_X(t)$ and $F_{X_n}(t)$ at $t=0$?) (these values are given, I just don't know how they come up with these). I tried to reason as follows: 1. Since $X$ and $X_n$ are constant random variables, then both $F_X(0)$ and $F_{X_n}(0)$ should equal $0$, am I right? or at least they should be equal in value? 2. It was not said explicitly, but I suppose $t\in \mathbb{R}$. My question is does it affect $F_{X_n}(t)$ and $F_X(t)$ after all? The purpose of this example is, I think, to illustrate that convergence in distribution only applies at $t\in \mathbb{R}$ where $F_X(t)$ is continuous. I have been confused about this problem for so long. I hope my question is clear enough. Thanks very much in advance for any help. AI: I assume that by $X_n \equiv \frac{1}{n}$ you mean that $X_n$ always takes the value $\frac{1}{n}$. $X_n$ then corresponds to a probability measure $\mathbb{P}_n$ on $\mathbb{R}$ with $\mathbb{P}_n(A) = 1$ iff $\frac{1}{n} \in A$. $X$ similarly corresponds to a probability measure with $\mathbb{P}(A) = 1$ iff $0 \in A$. The distribution function $F_n$ of $X_n$ is thus $$ F_n(t) = \begin{cases} 0 &\text{if $t < \frac{1}{n}$} \\ 1 &\text{if $t \geq \frac{1}{n}$.} \end{cases} $$ and the distribution function $F$ of $X$ is $$ F(t) = \begin{cases} 0 &\text{if $t < 0$} \\ 1 &\text{if $t \geq 0$.} \end{cases} $$ Now, if you pick a $t > 0$, then for all $n > \frac{1}{t}$ you get $F_n(t) = 1$. Thus, for all $t > 0$, you have $F_n(t) \to 1 = F(t)$. For $t < 0$, you have for all $n$ that $F_n(t) = 0 = F(t)$. In other words, $F_n \to F$ pointwise on $\mathbb{R}\setminus\{0\}$. For $t=0$, $F_n(t) = F_n(0) = 0 \neq 1 = F(0)$ for all $n \in \mathbb{N}$. Does that mean that the $X_n$ do not converge in distribution? No, because $F_n$ only needs to converge pointwise to $F$ for those points $x$ where $F$ is continuous. And $F$ isn't continuous at $0$. So, why is convergence in distribution defined this way? One way to look at this is to ask whether requiring the $F_n$ to converge only at continuity points of $F$ somehow makes the limit ambiguous. It turns out it does not, because once you know the value of $F$ at those points, you can recover the full $F$ by using that since $F$ is a distribution function, it's always right-continuous, i.e., $\lim_{x\to a^+} F(x) = F(a)$. (I've cheated a bit here, because two different $F$ may have two different sets of continuity points, but this still explains the general idea.) Take your example. You know that $F_n$ goes to $1$ for $t > 0$, and to $0$ for $t < 0$. There then is only one right-continuous $F$ which fits, since the right-continuity at $0$ requires that $F(0)=1$ too. So the $F_n$ uniquely determine $F$, even though there's a point where the pointwise convergence fails.
H: Integrating $ a\,f(x) +b\,y(x)=\frac{dy}{dx}$ Can somebody put me on the right track for integrating the following equation? How do I separate the variables? $$ a\,f(x) +b\,y(x)=\frac{dy}{dx}$$ AI: Hint: Write down a differential equation solved by $z(x)=\mathrm e^{-bx}y(x)$.
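Carrying the hint through (a sketch): with $z(x)=\mathrm e^{-bx}y(x)$ we get $$z'(x)=\mathrm e^{-bx}\bigl(y'(x)-b\,y(x)\bigr)=a\,\mathrm e^{-bx}f(x),$$ so $z(x)=C+a\int^{x} \mathrm e^{-bt}f(t)\,dt$ and therefore $$y(x)=\mathrm e^{bx}\left(C+a\int^{x} \mathrm e^{-bt}f(t)\,dt\right).$$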
H: inclusion of sets -transitive? show that if $A\subseteq B$ and $B\subseteq C$ then $A\subseteq C$ Can I do it using injective functions? $A\subseteq B$ means there exists an injective fcn $f:A\to B$ $B\subseteq C$ means there exists an injective fcn $g:B\to C$ then the composition $g\circ f:A\to C$ is also an injective function then $A\subseteq C$ in each case all the functions are identity functions AI: Caution: The existence of an injection between $A$ and $C$ doesn't necessarily imply $A \subseteq C$. For example, consider set $A$, the set of all even integers, and set $C$, the set of all odd integers. Certainly, there exists an injection: $f: A \to C$, $\;f(x) = x + 1\;$ (which is not only injective, but surjective, as well). But clearly, $\;A \nsubseteq C$. The converse is true: if $A \subseteq C$, then an injection $h: A \to C$ exists. But we can easily prove the inclusion $A \subseteq C$ by "element chasing:" a standard way to prove set inclusions, and/or set equivalencies. We have $A \subseteq B$ and $ B \subseteq C$. And we want to prove that this necessarily implies $A\subseteq C$. $(1)$ Suppose $x \in A\quad $ (Assumption) $(2)$ We know $A \subseteq B$ means $x \in A \implies x \in B.\;$ So given $(1)$, we have $x \in B$. $(3)$ We know $B \subseteq C$ means $x \in B \implies x \in C$. So given $(2)$, we have $x \in C$. $(4)\;\;x \in A \implies x \in C$. $\quad[(1) - (3)]$ Therefore, $A \subseteq C$.
H: Bring close 3D point to another 3D point equivalence Having 2 points in 3D space, $\text{p1(x1,y1,z1), p2(x2,y2,z2)}$. How could I generate the function $f$ which takes a parameter $t\in[0,1]$ and brings $p1$ closer to $p2$ as $t$ increases, such that $f(1)=p2$ and $f(0)=p1$? AI: Use a convex combination: $f=(1-t) p_1 + t p_2$, $t\in [0,1]$.
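A minimal sketch of this in code (the function name lerp is just an illustrative choice):

```python
import numpy as np

def lerp(p1, p2, t):
    """Convex combination f(t) = (1 - t)*p1 + t*p2, so f(0) = p1 and f(1) = p2."""
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    return (1.0 - t) * p1 + t * p2

p1, p2 = (1.0, 2.0, 3.0), (4.0, 6.0, 8.0)
print(lerp(p1, p2, 0.0))   # p1
print(lerp(p1, p2, 0.5))   # midpoint [2.5 4.  5.5]
print(lerp(p1, p2, 1.0))   # p2
```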
H: Does $\sum_{k=1}^{\infty}k(p^{\frac{(k-1)k}{2}}-p^{\frac{(k+1)k}{2}})$ converge? Does the sum $$\sum_{k=1}^{\infty}k(p^{\frac{(k-1)k}{2}}-p^{\frac{(k+1)k}{2}}),\qquad p\in\mathbb{R},\ 0\leq p<1,$$ converge, and if so, to what function? AI: Let's simplify your expression to get Carl Najafi's expression: \begin{align} \sum_{k=1}^{\infty}k\left(p^{\frac{(k-1)k}{2}}-p^{\frac{(k+1)k}{2}}\right)&=\sum_{k=1}^{\infty}k\;p^{\frac{(k-1/2)^2}2-\frac 18}-\sum_{k=1}^{\infty}k\;p^{\frac{(k+1/2)^2}2-\frac 18}\\ &=\sum_{k=0}^{\infty}(k+1)\;p^{\frac{(k+1/2)^2}2-\frac 18}-\sum_{k=1}^{\infty}k\;p^{\frac{(k+1/2)^2}2-\frac 18}\\ &=\sum_{k=0}^{\infty}\;p^{\frac{(k+1/2)^2}2-\frac 18}\\ &=\sum_{k=0}^{\infty}\;p^{\frac{k(k+1)}2}\\ \end{align} Proving Carl's claim. After that you'll simply have to use the definition of the second theta function $$\theta_2(0,\sqrt{p})=2\sum_{k=0}^{\infty}\;\sqrt{p}^{(k+1/2)^2}$$ to get the Alpha result (since $\sqrt{p}^{1/4}=\sqrt[8]{p}$) : $$\frac {\theta_2(0,\sqrt{p})}{2\;\sqrt[8]{p}}\quad\text{for}\ 0<p<1$$
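A quick numerical check of the closed form (a sketch; mpmath's jtheta(2, 0, q) is the theta function $\theta_2(0,q)$ used above):

```python
from mpmath import mp, jtheta, mpf

mp.dps = 30
p = mpf('0.3')                       # arbitrary test value with 0 <= p < 1

direct = sum(k * (p**((k - 1)*k/mpf(2)) - p**((k + 1)*k/mpf(2))) for k in range(1, 200))
closed = jtheta(2, 0, p**mpf('0.5')) / (2 * p**(mpf(1)/8))

print(direct)    # ~1.32773...
print(closed)    # agrees with the direct sum
```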
H: $C([0, 1])$ is not complete with respect to the norm $\lVert f\rVert _1 = \int_0^1 \lvert f (x) \rvert \,dx$ Consider $C([0, 1])$, the linear space of continuous complex-valued functions on the interval $[0, 1]$, with the norm $$\displaystyle\lVert f\rVert_1 = \int_0^1 \lvert f(x)\rvert \,dx.$$ I have to show that $C([0, 1])$ is not complete with respect to this norm. I have found the following example from a book. Let $f_n \in C[0,1]$ be given by $$f_n(x) := \begin{cases} 0 & \text{if $0 \le x \le \frac1{2}$}\\ n(x-\frac{1}{2}) & \text{if $\frac {1}{2} < x \le \frac {1}{2} + \frac {1}{n}$}\\ 1 & \text{if $ \frac {1}{2} + \frac {1}{n} <x \leq 1 $} \end{cases}$$ How to prove that $f_n$ is a Cauchy sequence with respect to $\lVert \cdot\rVert_1$? If I use basic definition then I have to prove that $\lVert f_n - f_m\rVert_1 < \epsilon$ $\forall n, m > N$. But I am finding it difficult to prove this. Please help me to understand how to prove that $f_n$ is Cauchy sequence in $C([0, 1])$. Thanks AI: Let $m\leq n$ both natural numbers, then $$\|f_n-f_m\|_1 = \int_0^1 |f_n(x)-f_m(x)|\,\mathrm{d}x $$ $$ = \int_\frac{1}{2}^{\frac{1}{2}+\frac{1}{n}}(n-m)\left(x-\frac{1}{2}\right)\,\mathrm{d}x + \int_{\frac{1}{2}+\frac{1}{n}}^{\frac{1}{2}+\frac{1}{m}}\left(1-m\left(x-\frac{1}{2}\right)\right)\,\mathrm{d}x.$$ Now try to bound these integrals for $n,m\geq N$.
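Carrying the computation through (a sketch): substituting $u=x-\tfrac12$, the two integrals are $$\int_0^{1/n}(n-m)u\,\mathrm{d}u+\int_{1/n}^{1/m}(1-mu)\,\mathrm{d}u=\frac{n-m}{2n^2}+\frac{1}{2m}-\frac1n+\frac{m}{2n^2}=\frac{1}{2m}-\frac{1}{2n}\le\frac{1}{2m},$$ so $\|f_n-f_m\|_1\le\frac{1}{2N}<\epsilon$ for all $n\ge m\ge N$ once $N>\frac{1}{2\epsilon}$, i.e. $(f_n)$ is a Cauchy sequence.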
H: Show that $\frac{-4z^{-1}}{(1-\frac{1}{4}z^{-1})(1-4z^{-1})} = \frac{16}{15}\frac{1}{(1-\frac{1}{4}z^{-1})}-\frac{16}{15}\frac{1}{(1-4z^{-1})}$ Can anyone help me clarify how this rewriting is done? $$\frac{-4z^{-1}}{(1-\frac{1}{4}z^{-1})(1-4z^{-1})} = \frac{16}{15}\frac{1}{(1-\frac{1}{4}z^{-1})}-\frac{16}{15}\frac{1}{(1-4z^{-1})}$$ AI: Let $x=z^{-1}$ Then \begin{align} \frac{-4z^{-1}}{(1-\frac{1}{4}z^{-1})(1-4z^{-1})} =\frac{-4x}{(1-\frac{1}{4}x)(1-4x)} \end{align} Suppose \begin{align} &\frac{-4x}{(1-\frac{1}{4}x)(1-4x)}=\frac{A}{(1-\frac{1}{4}x)}+\frac{B}{(1-4x)}\tag{1}\\&\frac{-4x}{(1-\frac{1}{4}x)(1-4x)}=\frac{A(1-4x)+B(1-\frac{1}{4}x)}{(1-\frac{1}{4}x)(1-4x)}\\&-4x=A(1-4x)+B(1-\frac{1}{4}x)&\\&-4x=x\left(-4A-\frac{B}{4}\right)+A+B\end{align} Comparing coefficients, you'll have $2$ equations involving $A$ and $B$. $$-4A-\frac{B}{4}=-4 \tag{2}$$ $$A+B=0 \tag{3}$$ Solve $(2)$ and $(3)$ to get $A=\frac{16}{15}$ and $B=-\frac{16}{15}$ then replace them in $(1)$ and also revert $x$ back into $z^{-1}$ to get your final result.
H: How to establish an isomorphism between these two tensor products? Let $V_1, ..., V_k$ be finite-dimensional vector spaces. A tensor product is defined to be the quotient space $U/I$, where $U$ denotes the free vector space generated by elements of $V_1 \times ... \times V_k$, and $I$ the subspace of $U$ generated by elements of the form $a(v_1, ..., v_k) - (v_1, ..., av_i, ..., v_k)$ $(v_1, ..., v_i + w_i, ..., v_k) - (v_1, ..., v_i, ..., v_k)- (v_1, ..., w_i, ..., v_k)$ and the space $U/I$ is denoted $V_1 \otimes...\otimes V_k$. Question: How to establish an isomorphism between $(V_1 \otimes ... \otimes V_l)\otimes(V_{l+1} \otimes ... \otimes V_k)$ and $V_1 \otimes...\otimes V_k$? This question might look silly, but I find it troublesome when dealing with the definitions above... AI: The definition you mention is one possible realization of the tensor product, but the “universal property” is what makes it possible to manage tensor products. The tensor product of the vector spaces $V_1$, $V_2$, $\dots$, $V_k$ can be defined as a pair $(V,\alpha)$, where $V$ is a vector space and $$\alpha\colon V_1\times V_2\times\dots\times V_k\to V$$ is a multilinear map such that, for any multilinear map $$\beta\colon V_1\times V_2\times\dots\times V_k\to W$$ there exists a unique linear map $f\colon V\to W$ such that $f\circ\alpha=\beta$. It's quite easy to show that your $U/I$, together with the map $$\gamma\colon V_1\times V_2\times\dots\times V_k\to U/I$$ defined by $$\gamma(v_1,v_2,\dots,v_k)=v_1+v_2+\dots+v_k+I$$ has the above property (where $v_1+v_2+\dots+v_k$ denotes the element $(v_1, v_2, \ldots, v_k)$ of $U$). If $(V,\alpha)$ is another “realization" of the tensor product, then, by the universal property, there are $f\colon V\to U/I$ and $g\colon U/I\to V$ such that $$f\circ\alpha=\gamma,\qquad g\circ\gamma=\alpha$$ and it's easy to prove that $f\circ g$ and $g\circ f$ are both identity maps. Namely, $$(g\circ f)\circ\alpha=g\circ(f\circ\alpha)=g\circ\gamma= \alpha=1_V\circ\alpha$$ so, by the uniqueness, $g\circ f=1_V$. Similarly for $f\circ g$. In this way you can denote any realization of the tensor product by $V_1\otimes V_2\otimes\dots\otimes V_k$. The multilinear map is not expressed, but it is implicit in the choice of the realization; conventionally the map is denoted by $(v_1,v_2,\dots,v_k)\mapsto v_1\otimes v_2\otimes\dots\otimes v_k$. Any element of $V_1\otimes\dots\otimes V_k$ is a sum of elements of the form $v_1\otimes v_2\otimes\dots\otimes v_k$. For this you can use abstract reasoning or the explicit realization. Now your problem boils down to showing that $$ V_1\otimes V_2\otimes\dots\otimes V_k\otimes V_{k+1}\otimes V_{k+2}\otimes\dots\otimes V_l $$ is a realization of the tensor product $$ (V_1\otimes V_2\otimes\dots\otimes V_k)\otimes (V_{k+1}\otimes V_{k+2}\otimes\dots\otimes V_l) $$ via a suitable bilinear map. The bilinear map $$ (V_1\otimes V_2\otimes\dots\otimes V_k)\times (V_{k+1}\otimes V_{k+2}\otimes\dots\otimes V_l)\to V_1\otimes V_2\otimes\dots\otimes V_k\otimes V_{k+1}\otimes V_{k+2}\otimes\dots\otimes V_l $$ is just $$(v_1\otimes\dots\otimes v_k,v_{k+1}\otimes\dots \otimes v_l)\mapsto v_1\otimes\dots\otimes v_k\otimes v_{k+1}\otimes\dots\otimes v_l $$ extended by bilinearity. You can have some doubts about the last map being well defined. 
But remember you have a given multilinear map $$ \alpha\colon V_1\times\dots\times V_k\times V_{k+1}\times\dots\times V_l \to V_1\otimes\dots\otimes V_k\otimes V_{k+1}\otimes\dots\otimes V_l = T $$ Define, for any $y\in V_{k+1}\times\dots\times V_l$, the map $\beta_y\colon V_1\times\dots\times V_k\to T$ by $\beta_y(x)=\alpha(x,y)$. This map is multilinear, so it induces a unique linear map $$f_y\colon V_1\otimes\dots\otimes V_k\to T$$ such that $f_y\circ\alpha_k=\beta_y$, where $\alpha_k : V_1 \times \cdots \times V_k \to V_1 \otimes \cdots \otimes V_k$ is again the canonical multilinear map. Thus you have a multilinear map $y\mapsto f_y$ $$ V_{k+1}\times\dots\times V_l\to \mathrm{Hom}(V_1\otimes\dots\otimes V_k,T) $$ that induces a unique linear map $$ g\colon V_{k+1}\otimes\dots\otimes V_l\to \mathrm{Hom}(V_1\otimes\dots\otimes V_k,T) $$ The bilinear map you're looking for is just $$h(w_1,w_2)=g(w_2)(w_1)\qquad (w_1\in V_1\otimes\dots\otimes V_k, w_2\in V_{k+1}\otimes\dots\otimes V_l).$$ Indeed, if $w_1=v_1\otimes\dots\otimes v_k$ and $w_2=v_{k+1}\otimes\dots\otimes v_l$, showing that $$ h(w_1,w_2)=\alpha(v_1,\dots,v_k,v_{k+1},\dots,v_l) $$ is just a tedious verification. Final comments Proving the existence of the isomorphism with the “explicit” realization you start with is time consuming and not worth the trouble. The explicit realization $U/I$ is only used to show that a tensor product exists. Then you can forget about it and use only the universal property. This is similar to how one does when constructing the real numbers from the rationals. The existence of a field with the required properties can be proved with Dedekind cuts or Cauchy sequences or whatever; then one proves that a field with those properties is unique up to isomorphisms and the actual construction is not used any more: one just employs the properties of the field.
H: number of positive real root of $f(x)=x^4+2x^3-2 , x\in \Bbb{R}$, $f(x)=x^4+2x^3-2 , x\in \Bbb{R}$, (A) has two roots in $[0,\infty)$ (B) has three roots in $[0,\infty)$ (C) has no roots in $[0,\infty)$ (D) has a unique root in $[0,\infty)$ How can I do this? I am stuck on it. AI: By Descartes' rule of signs it has at most one positive real root. Now $f(0)=-2$ and $f(1)=1$, so it has a real root between $0$ and $1$, so (D) is the correct option.
H: How to prove $G_2$ is a normal subgroup of $G_1\times G_2$? I am studying abstract algebra and in one of my previous midterms there was a question that I cannot solve. Now I have a final exam tomorrow and want to learn the answer to that question. Here it is: (i) show that $\{e\}\times G_2$ is a normal subgroup of $G_1\times G_2$, and (ii) show that $(G_1\times G_2)/(\{e\}\times G_2)\cong G_1$. Can anyone help me with the solution of this question? It would be very nice if you assume there is no answer and explain the answer to the question for parts (i) and (ii), I really want to understand it. Thank you in advance AI: Hints. (i) $(g_1,g_2)(e,g)(g_1,g_2)^{-1}=(g_1,g_2)(e,g)(g_1^{-1},g_2^{-1})=(g_1eg_1^{-1},g_2gg_2^{-1})$ $=$ $(e,g_2gg_2^{-1})$ $\in$ $\{e\}\times G_2$ for any $g_1\in G_1$ and $g,g_2\in G_2$. (ii) Define $\varphi:G_1\times G_2\to G_1$ by $\varphi(g_1,g_2)=g_1$. Note that $\varphi$ is a surjective group homomorphism, $\ker\varphi=\{e\}\times G_2$, and apply the fundamental theorem of isomorphism for groups.
H: Row of $A = BC$ as a linear combination of the rows of $C$ How to prove that any row of a matrix $A = BC$ can be written as a linear combination of the rows of $C$. I am able to visualize it with an example, but finding it difficult to prove with notations of vector space. AI: By definition of matrix multiplication, $A_{i,j}=\sum_k B_{i,k}C_{k,j}$. So a row of $A$ looks like $[\sum_k B_{i,k}C_{k,1},\sum_k B_{i,k}C_{k,2},\dots,\sum_k B_{i,k}C_{k,n}]$. But that is the same as $$\sum_k[ B_{i,k}C_{k,1}, B_{i,k}C_{k,2},\dots, B_{i,k}C_{k,n}]\\=\sum_k B_{i,k}[ C_{k,1}, C_{k,2},\dots, C_{k,n}]$$ Clearly that row vector is the $k$'th row of $C$, so this last expression is a linear combination of the rows of $C$.
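A small numerical illustration of this identity (a sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.integers(-3, 4, size=(3, 4)).astype(float)
C = rng.integers(-3, 4, size=(4, 5)).astype(float)
A = B @ C

i = 1
# row i of A as the linear combination  sum_k B[i, k] * (row k of C)
combo = sum(B[i, k] * C[k, :] for k in range(C.shape[0]))
assert np.allclose(A[i, :], combo)
```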
H: Determine the number of digits in $(2^{120})(5^{125})$. Determine the number of digits in $(2^{120})(5^{125})$. This is a bonus question I found in a Grade 12 math textbook, and I'm curious on how to solve it. What I find strange is that this question is in the "Power Functions" section of "Polynomial Functions"; yet, this seems to have little to do with functions. Any ideas as to how to solve this? I have tried to, but can't find a solution without a computer or calculator. AI: Observe that $$2^{120}5^{125}=2^{120}5^{120}5^5=10^{120}\cdot3125$$ So the number is $3125$ followed by $120$ zeroes, and therefore it has $4+120=124$ digits.
H: Why do we believe the Church-Turing Thesis? The Church-Turing Thesis, which says that the Turing Machine model is at least as powerful as any computer that can be built in practice, seems to be pretty unquestioningly accepted in my exposure to computer science. Why? Do we have any more evidence for its truth than "nobody has been able to think of a more powerful model than Turing's yet?" If so, why do we accept that argument as truth? It seems to me that in any other mathematical discipline, a claim like the Church-Turing thesis whose support amounts to "we haven't yet found a counterexample" would not be taken seriously by the community. AI: A side note: I think it is an error to identify the Church-Turing Thesis with a claim about what machines can do in practice. Some reflections [if I may be forgiven self-quotation]. (a) It is important to note that there are three levels of concept that are in play here when we talk about computation. At the pre-theoretic level – and guided by some paradigms of common-or-garden real-world computation – there is a loose cluster of inchoate ideas about what we can compute with paper and pencil, about what a computer (in the modern sense) can compute, and about what a mechanism more generally might compute. Then at what we might call the proto-theoretic level we have, inter alia, one now familiar way of picking out strands from the pre-theoretic cluster while idealizing away from practical limitations of time or amount of paper, giving us the notion of an effectively computable function. So here some theoretic tidying has already taken place, though the concept still remains somewhat vaguely characterized (what makes a step in a step-by-small-step algorithmic procedure ‘small’ enough to be admissible?). Then at the fully theoretic level we have tightly characterized concepts like the concept of a recursive function and the concept of a Turing-computable total function. It would be quite implausible to suppose that the inchoate pre-theoretic cluster of ideas at the first level pins down anything very definite. No, the Church–Turing Thesis sensibly understood, in keeping with the intentions of the early founding fathers, is a view about the relations between concepts at the second and third level. The Thesis kicks in after some proto-theoretic work has been done. The claim is that the functions that fall under the proto-theoretic idea of an effectively computable function are just those that fall under the concept of a recursive function and under the concept of a Turing-computable function. NB: the Thesis is a claim about the extension of the concept of an effectively computable function. (b) There are other strands in the pre-theoretic hodgepodge of ideas about computation than those picked up in the idea of effective computability: in particular, there’s the idea of what a machine can compute and we can do some proto-theoretic tidying of that strand too. But the Church–Turing Thesis is not about this idea. It must not be muddled with the entirely different claim that a physical machine can only compute recursive functions – i.e. the claim that any possible computing mechanism (broadly construed) can compute no more than a Turing machine. For perhaps there could be a physical set-up which somehow or other is not restricted to delivering a result after a finite number of discrete, deterministic steps, and so is enabled to do more than any Turing machine.
Or at least, if such a ‘hypercomputer’ is impossible, that certainly can’t be established merely by arguing for the Church–Turing Thesis. Let’s pause over this important point, and explore it just a little further. It is familiar that the Entscheidungsproblem can’t be solved by a Turing machine. In other words, there is no Turing machine which can be fed (the code for) an arbitrary first-order wff, and which will then decide, in a finite number of steps, whether it is a valid wff or not. Here, however, is a simple specification for a non-Turing hypercomputer that could be used to decide validity. Imagine a machine that takes as input the (Gödel number for the) wff $\varphi$ which is to be tested for validity. It then starts effectively enumerating (numbers for) the theorems of a suitable axiomatized formal theory of first-order logic. We’ll suppose our computer flashes a light if and when it enumerates a theorem that matches $\varphi$. Now, our imagined computer speeds up as it works. It performs one operation in the first second, a second operation in the next half second, a third in the next quarter second, a fourth in the next eighth of a second, and so on. Hence after two seconds it has done an infinite number of tasks, thereby enumerating and checking every theorem to see if it matches $\varphi$! So if the computer’s light flashes within two seconds, $\varphi$ is valid; if not, not. In sum, we can use our wildly accelerating machine to decide validity, because it can go through an infinite number of steps in a finite time. Now, you might very reasonably think that such accelerating machines are a mere philosophers’ fantasy, physically impossible and not to be taken seriously. But actually it isn’t quite as simple as that. For example, we can describe space-time structures consistent with General Relativity which apparently have the following feature. We could send an ‘ordinary’ computer on a trajectory towards a spacetime singularity. According to its own time, it’s a non-accelerating computer, plodding evenly along, computing forever and never actually reaching the singularity. But according to us – such are the joys of relativity! – it takes a finite time before it vanishes into the singularity, accelerating as it goes. Suppose we set up our computer to flash us a signal if, as it enumerates the first-order logical theorems, it ever reaches $\varphi$. We’ll then get the signal within a bounded time just in case $\varphi$ is a theorem. So our computer falling towards the singularity can be used to decide validity. Now, there are quite fascinating complications about whether this fanciful story actually works within General Relativity. But no matter. The important point is that the issue of whether there could be this sort of Turing-beating physical set-up – where (from our point of view) an infinite number of steps are executed – has nothing to do with the Church–Turing Thesis properly understood. For that is a claim about effective computability, about what can be done in a finite number of steps following an algorithm.
H: Maximizing sum of strictly non-negative functions? In an answer to this question, it was stated that if a function is the sum of two functions $f(x)=g(x)+h(x)$, and ${(\forall{x})(g(x)\ge0)}$ and $(\forall{x})(h(x)\ge0)$, the relative maxima of $f(x)$ can be found by finding the relative maxima of $g(x)$ and $h(x)$. Intuitively I can somewhat grasp why this is the case, but analytically, what is the explanation for why this is true? AI: That's not true, if I understand correctly. For example, $[1+\sin(x)]+[1+\cos(x)]$ is a sum of two positive functions with maxima at $\pi/2, 0$ (plus shifts) respectively, where the whole function takes the value $3$. However, at $x=\pi/4$ we get $1+\sqrt 2/2+1+\sqrt 2/2=2+\sqrt 2> 3$. Draw the graph to check it out. Looking at the other question quickly, it's possible that what you're interested in is the special case where $g,h$ have the same sign for their slopes at all points. In this case, it is true (at least for nice differentiable functions) because of the following reasoning: let $x$ be not a maximum of $g,h$. Then both $g,h$ must increase away from $x$ in at least one direction, and that must be the same direction. Therefore $g+h$ increases in that direction too. Note that the $g,h>0$ thing doesn't really tell you anything since you can always shift the whole graph up by a constant and smoothly kill away behaviour at infinity.
H: Measure Algebra Question Background: Let $G$ be a locally compact group, and define $M(G)$ to be the vector space of all regular complex measures defined on $G$, normed by total variation. For $\mu,\nu\in M(G)$, define the product $\mu * \nu$ as follows: Define $\varphi\in C_{0}(X)^{*}$ by $$\varphi(f) = \int_{G}\int_{G}f(xy)d\mu(x)d\nu(y)\text{, for all }f\in C_{0}(X)$$ By the Riesz-Representation Theorem, there is a unique element (call it $\mu*\nu$) of $M(G)$ such that $\varphi(f) = \int_{G}fd(\mu *\nu)$. This turns $M(G)$ into an algebra. Proposition: I'm reading a book which says that it is an immediate consequence of Fubini's Theorem that $$\int_{G}\int_{G}f(xy)d\mu(x)d\nu(y) = \int_{G}\int_{G}f(xy)d\nu(y)d\mu(x)$$ My Question: In order to apply Fubini's Theorem, we need $$\int_{G\times G}|f(xy)|d(\mu\times\nu) < \infty$$ which doesn't seem to follow from $f\in C_{0}(X)$. Counterexample: Take $G = \mathbb{Z}$, $f\in C_{0}(G)$ defined by $f(n) = \frac{1}{|n|}$. Then $\int_{G}f(x)d\mu(x) = \sum_{n=-\infty}^{\infty}\frac{1}{|n|} = \infty$. From here it follows that for fixed $y\in G$, we have $\int_{G}f(xy)d\mu(x) = \infty$ so obviously the hypothesis of Fubini's Theorem is not satisfied. But in this case, $f$ is non-negative, so the desired integral equation instead follows from Tonelli's Theorem, so all is not lost. So does the Proposition follow from a different argument? or is an additional hypothesis on $G$ required? AI: I don't know if this is the answer, but it just occured to me that for $\mu\in M(G)$ to have finite norm, it must have finite total variation, which I think would be sufficient to guarantee the hypothesis of Fubini's Theorem. Maybe someone with more knowledge than I can confirm this?
H: Image of the idele class group and its subgroup of idelic norm 1 [Sorry if the title isn't specific, it was too long.] My question is: Why does $J_{K}/J_{K}^{1}\cong \mathbb{R}$ imply that $J_{K}/K^{\ast }$ and $J_{K}^{1}/K^{\ast }$ have the same image under any continuous homomorphism from $J_{K}$ to a discrete group $G$? Here $J_{K}$ denotes the group of ideles on a number field $K$ and $J_{K}^{1}$ the subgroup of elements with adelic norm $1$. I know it has something to do with the fact that $\mathbb{R}$ is connected, but don't know how to proceed from there. Thank you. AI: If $\phi: H \to G$ is a continuous map from a topological group to a discrete topological group, then $\phi$ kills the connected component of the identity in $H$ (proof: the inverse image of the identity of $G$ is closed and open and contains the identity, hence contains the connected component of the identity). But anything in $J_K$ can be moved to something in $J_K^1$ by multiplying by something in the connected component of the identity. (e.g. by modifying just one archimedean place.)
H: Natural number solutions to $\frac{xy}{x+y}=n$ (equivalent to $\frac 1x+\frac 1y=\frac 1n$) I have a question about the following problem from a Putnam review: Let $n\in \mathbb{N}$. Find how many pairs of natural numbers $(x, y)\in \mathbb{N}\times \mathbb{N}$ solve $$ \frac{xy}{x+y}=n. $$ I have found some solutions for all $n$, such as $(2n, 2n)$ and $(n+1, n(n+1))$, but I feel as though this is the wrong approach, as the question is only asking for the number of solutions to the equation. I don't really want a complete solution, but any hints would be greatly appreciated. AI: HINT: Utilize $$(x-n)(y-n)=n^2$$
H: Inverse Laplace transformation using residue method of transfer function that contains time delay I'm having a problem trying to inverse Laplace transform the following equation $$ h_0 = K_p * \frac{1 - T s}{1 + T s} e ^ { - \tau s} $$ I've tried to solve this equation using the residue method and got the following. $$ y(t) = 2 K_p e ^ {- \tau s} e ^ {-t/T} $$ $$ y(t) = 2 K_p e ^ {\frac{\tau}{T}} e ^ {-t/T} $$ And that is clearly wrong. Is it so that you can't use the residue method on functions that contain time delay, or is it possible to do a "work around" or something to get to the right answer? AI: First do polynomial division to simplify the fraction: $$\frac{1-Ts}{1+Ts}=-1+\frac{2}{1+Ts}$$ Now expand $h_0$: $$h_0=-K_pe^{-\tau{s}}+2K_p\frac{1}{Ts+1}e^{-\tau{s}}$$ Recall the time-domain shift property: $$\mathcal{L}\{f(t-\tau)u(t-\tau)\}=F(s)e^{-\tau s}$$ $$\mathcal{L}^{-1}h_0=-K_p\delta(t-\tau)+2K_p\, g(t-\tau)u(t-\tau)$$ where $g(t)=\mathcal{L}^{-1}\frac{1}{Ts+1}$. To take the inverse Laplace transform of this term, recall the frequency domain shift property: $$\mathcal{L}^{-1}F(s-a)=f(t)e^{at}$$ $$\frac{1}{Ts+1}=\frac{1}{T(s+\frac{1}{T})}=\frac{1}{T}\frac{1}{s+\frac{1}{T}}$$ Therefore the inverse Laplace transform of this term is: $$g(t)=\frac{1}{T}e^{-\frac{t}{T}}$$ Finally, putting all of it together, the full inverse Laplace transform of the original expression is: $$-K_p\delta(t-\tau)+\frac{2K_p}{T}e^{-\frac{t-\tau}{T}}u(t-\tau),$$ with $u$ the unit step.
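The algebraic steps can also be checked symbolically (a sketch with SymPy; the time shift by $\tau$ is then applied by hand as above):

```python
import sympy as sp

t, s, T = sp.symbols('t s T', positive=True)

# the polynomial-division step
print(sp.apart((1 - T*s) / (1 + T*s), s))                # -1 + 2/(T*s + 1)

# inverse transform of the proper-fraction part
print(sp.inverse_laplace_transform(1/(T*s + 1), s, t))   # exp(-t/T)*Heaviside(t)/T
```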
H: Is the third order term of a Taylor approximation a covariant tensor of rank 3 restricted to (h,h,h)? Recently reading Peter Lax's linear algebra text, he had a very concise way of writing the Taylor approximation up to second order: $f(x+h) = f(x) + l(h) + \frac{1}{2}q(h) + \|h\|^2 \epsilon(\|h\|)$, where $l$ is a linear functional, $q$ is a quadratic form, and $\epsilon \to 0$ as $h\to 0$. How would we describe the third-order term of the approximation as a function of $h$? It couldn't be represented as a matrix, in the way that a bilinear form can; we'd need a 3d array I think. It seems we could express it as a homogeneous polynomial of degree 3 in $h_1, \dotsc, h_n$. Am I right that it is also a covariant tensor of rank 3, restricted so that it takes arguments $(h,h,h)$, so it is of the form $\left(\sum\limits_{I\in \{1,\dotsc,n\}^3}a_I\varepsilon^*_I \right)(h,h,h)$? Is there a name for such a thing? AI: Yes, this is the multivariate Taylor expansion. To derive the terms (of whatever order your heart desires) you simply feed a line $a+th$ into a multivariate function $f$ to obtain a single-variable function $g(t)=f(a+th)$ to which the single-variable Taylor expansion applies. After that, systematic application of the multivariate chain-rule reveals the expansion you seek. Just set $t=1$ and you obtain $f(a+h)=f(a)+f'(a) \cdot h + \cdots $. There are a variety of cute ways to express the formula. The one I have in my notes for functions of two variables $x=x_1$ and $y=x_2$ centered at $(a_1,a_2)$ is $$ f(x, y) = \sum_{n=0}^{\infty} \sum_{i_1=1}^{2}\sum_{i_2=1}^{2} \cdots \sum_{i_n=1}^{2} \frac{1}{n!} \frac{\partial^{(n)}f(a_1,a_2)}{\partial x_{i_1}\partial x_{i_2} \cdots \partial x_{i_n}} (x_{i_1} -a_{i_1})(x_{i_2} -a_{i_2})\cdots (x_{i_n} -a_{i_n}) $$ The formula for functions of three or more variables is similar. I know of no particular name for the third order piece. Intuitively, it becomes important when the expansion is trivial up to second order and yet it is nontrivial. In such a case it is the dominant term (locally). Of course, I have no particular sense of what the graph of such a thing resembles. In contrast, the second order term is nicely captured by the theory of quadratic forms.
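A small symbolic check of this for the third-order term of a two-variable function (a sketch; the test function and expansion point are arbitrary):

```python
import sympy as sp

x1, x2, h1, h2, t = sp.symbols('x1 x2 h1 h2 t')
f = sp.exp(x1) * sp.sin(x2) + x1**2 * x2                  # arbitrary test function
a = {x1: sp.Rational(1, 2), x2: sp.Rational(1, 3)}        # arbitrary expansion point

# third-order term via the single-variable trick: coefficient of t^3 in f(a + t*h)
g = f.subs({x1: a[x1] + t*h1, x2: a[x2] + t*h2})
term_line = sp.series(g, t, 0, 4).removeO().coeff(t, 3)

# the same term as the rank-3 tensor of third partials applied to (h, h, h), divided by 3!
vars_, hs = (x1, x2), (h1, h2)
term_tensor = sum(
    sp.diff(f, vars_[i], vars_[j], vars_[k]).subs(a) * hs[i] * hs[j] * hs[k]
    for i in range(2) for j in range(2) for k in range(2)
) / sp.factorial(3)

print(sp.simplify(term_line - term_tensor))               # 0
```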
H: Example of vector space In almost all the literature that I have seen, one of the examples of vector space is as follows: Set of all real-valued functions $f(x)$ defined on the real line what confuses me here is that the word "linear functions" should be used instead of just "functions" because I think we also have non-linear functions and they (non-linear functions)do not follow the rule of addition and multiplication as: $ (f + g)(x)=f(x) + g(x) $ and $(kf)(x) = k f(x) $ thus, non-linear functions cannot make a vector space. am I right or wrong? AI: You're confusing adding the functions with adding their inputs. A linear function would have the property $f(x+y) = f(x) + f(y)$. However, for a vector space, you add the functions. For example, let $f(x) = x^2$ and $g(x) = x^3$, then $(f + g)(x) = f(x) + g(x) = x^2 + x^3$.
H: A an nxn matrix. P a permutation matrix that permutes columns of A. How many operations does P*A involve? Essentially, I am supposed to count how many operations a particular computational algorithm involves, and I've gotten stuck on this one part. My understanding is that for two nxn matrices, matrix multiplication requires of order $O(n^3)$ operations. If P is a permutation matrix, and we also know the particular permutation that it represents, how many operations does P*A involve? I'm assuming that it's of order $O(n)$? Or maybe even none (say, if the permutation simply involves storing the values differently - switching which values are in which columns)? Can someone confirm this? AI: For an $n \times 1$ vector $v$ and $n \times n$ matrix $A$, to get the first entry of the resulting vector $Av$ you require $n$ multiplications and $n-1$ additions. Doing this for each of the $n$ rows (entries), you get $n(n+n-1) \sim O(n^2)$ operations. Matrix multiplication is just $Av$ done for each of the $n$ columns $v$ of the second matrix, so the resulting operation count is $O(n^3)$, as you said. However, in your case, multiplying on the left by $P$ corresponds to permuting the rows of $A$. In other words, if $P$ represents the permutation $\sigma \in S_n$, $\sigma : \{1,\dots,n\} \rightarrow \{1,\dots,n\}$, then the resulting matrix $PA$ has entries $a(\sigma(i),j)$, where $a(i,j)$ is the $i,j$ entry of $A$. Depending on the language you are using and how you code this, this should take no time at all.
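A quick illustration of the two views (a sketch; the permutation chosen is arbitrary):

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))

sigma = np.array([2, 0, 4, 1, 3])      # example permutation: row i of PA is row sigma[i] of A
P = np.zeros((n, n))
P[np.arange(n), sigma] = 1             # permutation matrix with P[i, sigma[i]] = 1

# O(n^3) dense product vs. simply reindexing the rows (copying costs O(n^2), bookkeeping O(n))
assert np.allclose(P @ A, A[sigma])
```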
H: Proving the product of two real numbers is maximum when the numbers are equal given their sum is constant Let us consider two real numbers $x$ and $y$. How can we prove value of $xy$ is greatest when $x=y$ given the condition $x+y=$ constant? I have already found a proof, but I am not entirely happy with it yet. AI: HINT: Make use of the fact that $xy \leq \left(\dfrac{x+y}2 \right)^2$
H: Cyclic vectors in a real vector space Let $V$ be an n-dimensional vector space over $\mathbb{R}$ and $T:V \rightarrow V$ be linear. Call a vector $v \in V$ cyclic if $V$ is spanned by $\{v, \ Tv, \ T^2v,..\}$. Question: Show that if $v$ is cyclic for $T$ then there is some $k \geq 1$ such that $$B=\{v, \ Tv,..T^{k-1}v \}$$ is linearly independent in $V$ and that $T^kv$ lies in the span of $B$. Thoughts: If we suppose that no such $k$ exists, then every subset of the form $\{v, \ Tv,..T^{l-1}v \}$ is linearly dependent, or it is linearly independent with $T^lv$ not lying in its span. There are in effect then two cases to consider. Suppose every subset of the form $\{v, \ Tv,..T^{l-1}v \}$ is linearly dependent. Then $T^{l-1}v$ can be expressed as a linear combination of $v,...,T^{l-2}v$. Now, $\{v, \ Tv,..T^{l-2}v \}$ is also linearly dependent by assumption and so we can express $T^{l-2}v$ as a linear combination of $v,...,T^{l-3}v$ etc... Continuing in this way, we show that $T^{l-1}v= \lambda v$ for some $\lambda \in \mathbb{R}$. Thus $$B=\{ \mu v \ | \ \mu \ \text{is an eigenvalue of} \ T^mv \ \text{for some} \ m \}$$ This is 'at best' countably infinite, and therefore could not span $V$ even if it were 1-dimensional, since our base field is $\mathbb{R}$, which is uncountable. Suppose then that $B$ is linearly independent for some $k$ but that $T^kv$ does not lie in its span. I can't see how to progress this argument. Note: This question is for second year Undergraduates, and I think there must be some more simple approach; even the first part surprises me if we need to argue using the uncountability of $\mathbb{R}$, so any alternative approaches are also appreciated. AI: I had some trouble following your argument. You write Suppose every subset of the form $\{v,Tv,\ldots,T^{\ell-1} v \}$ is linearly dependent. If we are allowed to take $\ell = 1$ here then in particular you are assuming that $\{v \}$ is linearly dependent, and thus $v = 0$ so $V = 0$. Otherwise $\{v,Tv\}$ is linearly dependent so either $v = 0$ or $Tv = \alpha v$ for some scalar $\alpha$. If this is the case then $T^{\ell} v = \alpha^{\ell} v$ for all $\ell$, and the space $V$ must be one-dimensional, spanned by $v$. Nothing like the uncountability of the scalar field is needed for this. But why have you started the argument this way? As the above analysis shows, you've done only a trivial case and an almost trivial case, so you're certainly less than halfway to a complete proof. I suggest rather the following: let $\ell$ be the least positive integer such that $\{v,Tv,\ldots,T^{\ell} v \}$ is linearly dependent. Such an $\ell$ exists because $V$ is assumed to be finite-dimensional -- it need not exist otherwise. (We've handled the $\ell = 1$ case above.) This implies that $T^{\ell} v$ lies in the span of $v,Tv,\ldots,T^{\ell-1} v$, answering your question. Note also that for all $L \geq \ell$, $T^{L} v$ lies in the span of $v,Tv,\ldots,T^{\ell-1} v$, so that $V$ itself is spanned by $v,Tv,\ldots,T^{\ell-1} v$. (At first glance I thought that this was your question. It turns out that it isn't, but it is the natural next thing to say, so you will surely be encountering it soon.) The conclusion holds over any field of scalars; the real numbers have nothing to do with it.
H: finite length is stronger than finiteness of a module Let $A$ be a commutative ring and $M$ an $A$-module. I realized recently that the property of $M$ having finite length is stronger than $M$ being finitely generated. Here is my reasoning: Suppose $M$ has finite length but it is not finitely-generated. Let $\left\{x_i\right\}_{i \in I}$ be an infinite set of generators. Let $J$ be a countable subset of $I$. Define $M_i=Ax_{a_1}+ \cdots +Ax_{a_i}$, $a_k \in J$. Then we have chains of length $i$ of the form $M \supseteq M_i \supset M_{i-1} \supset \cdots \supset M_1 \supset M_0=0$ and $i$ can grow arbitrarily large. This contradicts the finite length assumption. Hence $M$ must be finitely generated. Can anyone think of a counterexample, where we have a module that is finitely generated but does not have finite length? AI: A module has finite length if and only if it is Noetherian and Artinian. So any commutative ring that isn't Artinian does not have finite length over itself and is obviously finitely generated over itself.
H: If $(1,1)$ is an eigenvector of $A=\begin{pmatrix}2 &5\\3&k\end{pmatrix}$, then one of the eigenvalues of $A$ is :- If $(1,1)$ is an eigenvector of $A=\begin{pmatrix}2 &5\\3&k\end{pmatrix}$, then one of the eigenvalues of $A$ is :- $0,-1,1,2$ Can I get some hints please. AI: $(1,1)$ is an eigenvector. Let $\lambda$ be the corresponding eigenvalue. Then $\begin{pmatrix}2 &5\\3&k\end{pmatrix}\begin{pmatrix}1\\1\end{pmatrix}=\lambda\begin{pmatrix}1\\1\end{pmatrix}$. Solving, you can get the values of $k$ and $\lambda$, and then it is easy.
H: Prove that the tangent space of a hyperplane is itself I know this might sound really stupid: I was trying to show that the tangent space of a hyperplane is itself. I started by parametrising the hyperplane locally at $x$ with a diffeomorphism $\phi : U \rightarrow X$, where $U\subset R^n$ and $X\subset R^m$ is the hyperplane we're considering, such that $\phi(0)=x$. Now the tangent space $T_x(X)$ is given by the image of the derivative map $d\phi_0: R^n \rightarrow R^m$. If we can prove that $\phi$ is a linear map, then we are done, because the derivative map of a linear map is itself. But it seems that I can't prove that $\phi$ is linear. Also I realised there is a loophole in my proof. Since we're proving a global result, not just at $x$, parametrising $X$ locally won't work... Any ideas? AI: You can choose $\phi$ to be a global and linear map. Since the hyperplane is a linear subspace of $\mathbb{R}^m$, you may pick a basis for your hyperplane. That defines a linear isomorphism from $\mathbb{R}^n$ to the hyperplane taking the standard basis of $\mathbb{R}^n$ to your chosen basis, where $n$ is the dimension of the hyperplane. Since a linear isomorphism is automatically a diffeomorphism, $\phi$ indeed serves as a chart.
H: $\text{Inn}(G)$ cannot be nontrivial cyclic group Let $G$ be any group, and let $Z$ be its center. (a) Show that $G/Z\cong \text{Inn}(G)$. (b) Conclude that $\text{Inn}(G)$ cannot be a nontrivial cyclic group. I've already gotten part (a) by considering the mapping $\pi:G\rightarrow\text{Inn}(G)$ such that $\pi(g)$ is the automorphism that takes $x$ to $g^{-1}xg$ for all $x\in G$. The mapping $\pi$ is clearly a surjective homomorphism with kernel $Z$, and part (a) follows from the isomorphism theorem. For part (b), I must prove that $G/Z$ cannot be a nontrivial cyclic group. If it were, the group would equal $\{Z,Zg,Zg^2,\ldots,Zg^{n-1}\}$ for some $g\in G$. Also, $G/Z$ would be an abelian group, and it follows that the commutator subgroup $G'$ belongs to $Z$. I don't see how to derive a contradiction from there. AI: From $G/Z$ cyclic you can get something stronger: Let $a,b \in G$, then $a = z_1g^n$ and $b = z_2g^m$ for some $z_1,z_2 \in Z$. Now $ab = z_1g^n z_2g^m = z_2g^mz_1g^n = ba$. Thus $G$ is abelian, therefore $G = Z$. What now?
H: Closure of a subspace with respect to an inner product I just have a question in general. If we are trying to show that a subspace of a vector space is a closed subspace, I know we need to prove that all convergent sequences in that subspace converge to a limit in that set. But if the subspace is defined in terms of the inner product, for example: A = {f is a continuous function on C[0,1]: < f,sinh > = 0} and we knew that < f,sinh > is a continuous function of f, why would this imply A is a closed subspace? AI: If you know topology, by the definition the map $\phi$ is continuous if the preimage $\phi^{-1}(A)$ is open given that the set $A$ is open. Equivalently, $\phi^{-1}(B)$ is closed whenever $B$ is closed. In your case $$ \phi(f) = \langle f,\sinh \rangle $$ is a continuous function from $C([0,1])$ to $\Bbb R$, and since $B = \{0\}$ is a closed set, your $A = \phi^{-1}(B)$ is closed as well. Clearly, it is also a linear space, so it is a closed subspace.
H: Definition of a primitive polynomial I understand there are already some questions (A, B) on primitive polynomials. But none of these clears my confusion. In page 84 of Handbook of Applied Cryptography, a primitive polynomial has been defined as an irreducible polynomial $f(x) \in \mathbb{Z}_p[x]$ of degree $m$ such that $x$ is a generator of $\mathbb{F}^*_{p^m}$, the multiplicative group of $\mathbb{F}_{p^m} = \mathbb{Z}_p[x]/(f(x))$. Now, if I try to understand the definition by dissecting the parts: 1. This is an irreducible polynomial, that means, it cannot be factored into the product of two or more non-trivial polynomials. 2. The polynomial $f(x) \in \mathbb{Z}_p[x]$. So, the polynomial belongs to the polynomial ring $\mathbb{Z}_p [x]$, where, $\mathbb{Z}_p [x]$ is the ring formed by the set of all polynomials in the indeterminate $x$ having coefficients from $\mathbb{Z}_p$. Here $\mathbb{Z}_p$, will be the integers modulo $p$, set of (equivalence classes of) integers $\{0, 1, 2, . . . , p − 1\}$. 3. $x$ is a generator of $\mathbb{F}^*_{p^m}$: I am coming to this part regarding $x$ later on. $\mathbb{F}^*_{p^m}$, is the multiplicative group of $\mathbb{F}_{p^m}$ such that $ \{a \in \mathbb{F}_{p^m} | \gcd(a, p) = 1\}$. 4. $\mathbb{F}_{p^m} = \mathbb{Z}_p[x]/(f(x))$, denotes the set of (equivalence classes of) polynomials in $\mathbb{Z}_p[x]$ of degree less than $n = \deg f (x)$. Addition and multiplication are performed modulo $f (x)$. Now, coming back to the point of $x$, I began to realize that I must have some serious flaw in my understanding above. So far, I have seen that generators have always been numbers. Here the generator is $x$, which is an indeterminate. Could you please point out where I have gone off the track? Perhaps the best way to salvage me will be to simply rewrite my points 1-4. Adding an example will make things perfect. AI: The notation in the book is not one I would have chosen. All the $m$ roots of the primitive polynomial $f(x) \in \mathbb F_p[x]$ are generators of the multiplicative group of $\mathbb F_{p^m}$; this group is a cyclic group. In particular, if $\alpha$ is a root of $f(x)$, then $$\mathbb F_{p^m}^{\star} = \{1, \alpha, \alpha^2, \ldots, \alpha^{p^m-2}; \times\,\}.$$ The other roots of $f(x)$ are $\alpha^p, \alpha^{p^2},\cdots,\alpha^{p^{m-1}}$ which are also called conjugates of $\alpha$. An alternative representation of $\mathbb F_{p^m}$ is the collection of polynomials of degree less than $m$ in $\mathbb F_p[x]$. The two representations are connected via the following table: $$\begin{matrix}1&=&1\\ \alpha&=& & \alpha\\ \alpha^2&= &&&\alpha^2\\ \vdots& \vdots&\ddots\\ \alpha^{m-1}&= &&&&&\alpha^{m-1}\\ \alpha^{m}&= &-f_0&-f_1\alpha&-f_2\alpha^2&\cdots&-f_{m-1}\alpha^{m-1}\\ \vdots&\vdots&\ddots \end{matrix}$$ where we assume that $f(x)$ is monic, and use the fact that $$f(\alpha) = 0 = f_0 + f_1\alpha + f_2\alpha^2 + \cdots + f_{m-1}\alpha^{m-1} + \alpha^m \tag{1}$$ in writing $\alpha^m$ as a polynomial in $\alpha$ of degree smaller than $m$. To find the next line in the above table, we use $$\begin{align} \alpha^{m+1} &= \alpha^m\cdot\alpha\\ &= (-f_0-f_1\alpha-f_2\alpha^2\cdots -f_{m-1}\alpha^{m-1})\alpha\\ &= 0-f_0\alpha -f_1\alpha^2-f_2\alpha^3\cdots - f_{m-2}\alpha^{m-1}-f_{m-1}\alpha^{m} \end{align}$$ and replace $f_{m-1}\alpha^m$ by $f_{m-1}(-f_0-f_1\alpha-f_2\alpha^2\cdots -f_{m-1}\alpha^{m-1})$ to once again get a polynomial of degree less than $m$ in $\alpha$. Repeating this process creates the entire table which is often referred to as a logarithm table but in fact is an antilogarithm table.
For each $i$, the table gives the representation of $\alpha^i$ as a polynomial in $\alpha$ -- very convenient for addition -- while multiplication of two polynomials in $\alpha$ and subsequent reduction using $(1)$ repeatedly is harder than computing $\alpha^i\times\alpha^j = \alpha^{i+j \bmod p^m-1}$, that is, by adding the logarithms. Finally, all this works exactly the same way and gives the same results if we replace $\alpha$ by $x$ and say that $\mathbb F_{p^m} = \mathbb F_p[x]/f(x)$ etc., but this always creates confusion amongst beginners.
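A small sketch of building such an antilogarithm table in code, for the (assumed) primitive polynomial $f(x)=x^3+x+1$ over $\mathbb F_2$, so $p=2$, $m=3$:

```python
p, m = 2, 3
f = [1, 1, 0]          # f_0, f_1, f_2 for f(x) = x^3 + 0*x^2 + 1*x + 1 (coefficients mod p)

table = []
cur = [1, 0, 0]        # alpha^0 = 1, stored as coefficients of 1, alpha, alpha^2
for i in range(p**m - 1):
    table.append(cur)
    # multiply by alpha: shift coefficients up, then reduce alpha^m using f(alpha) = 0
    carry = cur[m - 1]
    cur = [(-carry * f[0]) % p] + [(cur[j - 1] - carry * f[j]) % p for j in range(1, m)]

for i, poly in enumerate(table):
    print(f"alpha^{i} =", poly)

assert cur == [1, 0, 0]   # alpha^(p^m - 1) = 1: alpha has order 7, so x is indeed primitive
```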
H: Characteristic polynomial, and a coprime polynomial I'm a bit stuck on this following question in algebra, I hope someone can show me to the right direction. Let $A\in M_n(F)$ be a matrix and let $f\in F[X]$ be a polyonimal such that $\gcd(f,P_a)=1$, with $P_a$ being the characteristic polynomial of $A$. Show that $f(A)$ is an invertible matrix. I've tried showing that if $f(A)$ is not invertible, then its determinant is zero and there should be some common root for $f$ and $P_a$, but didn't seem to find a way. Thanks for the help. AI: Note that $\gcd (f,P_a)=1$ implies there exist polynomials $g$ and $h$ such that $gf+hP_a=1$. Hence $I=g(A)f(A)+h(A)P_a(A)$. Now you have $P_a(A)=0$ and hence $f(A)$ is invertible.
H: Given $H\triangleleft G, K\le G\ $, $\;K\nsubseteq H,\;$ and $(G:H) = p$, $\;p$ prime, prove $HK=G$ Suppose that $H\triangleleft G, K\le G\ $ and $K\nsubseteq H$. How we can prove that $HK=G?$ Also $(G:H)=p$ where p is prime. AI: With your modification, I give you a hint. By the index formula $$p=(G:H)=(G:HK)(HK:H)$$ Since $p$ is prime, what can you say about the possibilities? Which one is impossible under the given assumptions?
H: elements of sets, subsets relations give an example (if possible) such that: a)$x\in y$ and $x\not\subseteq y$ here I take $x=\{1,2\}$ and $y=\mathcal{P(x)}=\{\emptyset, \{1\},\{2\},\{1,2\}\}$ then $x\in y$ but $x\not\subseteq y$ as e.g. $1\not\in y$ b)$x\subseteq y$ and $x\not\in y$ here I cannot find any counterexample... By definition of subset any element in x must be in y... is it true? c)$x\in y$ and $x\subseteq y$ Here I take $x=\emptyset$ and $y=\mathcal{P(x)}=\{\emptyset, \{\emptyset\}\}$ then $x\in y$ and $x\subseteq y$ could someone please check? AI: Your answers for (a) and (c) are fine. For (b), remember that $\varnothing\subseteq y$ for any set $y$ whatsoever, so take $x=\varnothing$ and let $y$ be any set that does not have $\varnothing$ as an element, e.g., $\{\{\varnothing\}\}$. Added: For (b) you might also consider the fact that a finite set $y$ has $2^{|y|}$ subsets and $|y|$ elements, and $n<2^n$ for all $n\in\Bbb N$. Thus, every finite $y$ will give you an example: it has more subsets than elements, so at least one of those subsets cannot be an element. In fact you can always take $x=y$: $y\subseteq y$, but $y\notin y$. If you’re working in $\mathsf{ZF}$ this is true for all $y$ as a consequence of the axiom of regularity (also called foundation), but you can check it explicitly with any simple finite example that you might construct, like your answers to (a) and (c).
H: $G/Z$ cannot be isomorphic to quaternion group Let $Q_8$ be the quaternion group. Show that $G/Z$ can never be isomorphic to $Q_8$, where $Z$ is the center of $G$. Hint: if $G/Z\cong Q_8$, show that $G$ has two abelian subgroups of index $2$. I'm trying to prove the hint by looking at the canonical homomorphism $\pi:G\rightarrow G/Z$. Well, $Q_8$ has some abelian subgroups of index $2$, such as $H=\{1,-1,i,-i\}$, but that doesn't seem to map to abelian subgroups of index $2$ of $G$. (We know that $\pi(ab)=\pi(a)\pi(b)=\pi(b)\pi(a)=\pi(ba)$ for $a,b$ in $\pi^{-1}(H)$, but that only yields $aba^{-1}b^{-1}\in Z$, not $aba^{-1}b^{-1}=1$.) AI: $Q_8$ has cyclic subgroups of index $2$, such as $\langle i\rangle$ and $\langle j\rangle$. Their inverse images in $G$ are of index $2$ and their respective centres contain at least the centre $Z$ of $G$. Hence their quotient by their centre is cyclic and finally $\pi^{-1}(\langle i\rangle)$ and $\pi^{-1}(\langle j\rangle)$ are abelian (cf. Jyrki Lahtonen's comment). Select $g\in G$ with $\pi(g)=-1$. Then $g\in \pi^{-1}(\langle i\rangle)\cap \pi^{-1}(\langle j\rangle)$, hence it commutes with all elements of $\pi^{-1}(\langle i\rangle)$ and all elements of $\pi^{-1}(\langle j\rangle)$, therefore also with all elements of $\pi^{-1}(\langle i,j\rangle)=\pi^{-1}(Q_8)=G$, contradicting $g\notin Z$.
H: Prove $\log x!$ is $\Omega(x\log x)$ Find a positive real number $C$ and a nonnegative real number $x_o$ such that $Cx\log x \leq \log x!$ for all real numbers $x > x_o$. I tried to expand $\log x!$ into $\log 1 + \log 2 +\log 3 +\cdots+\log x$. But how do I choose $C$ and $x_o$ so the above inequality holds? Any hints would be appreciated. AI: For $k\ge 2$, we have $\log k\ge \int_{k-1}^{k}\log t\,dt$. Summing from $k=2$ to $n$, we find that $$\log 2 +\log 3 + \log 4+ \cdots +\log n \ge \int_1^n \log t\,dt=n\log n-n+1\gt n\log n-n.$$ If $n\ge 9$, then $n\log n-n \gt \frac{1}{2}n\log n$.
H: Closure of operators Let $X$ and $Y$ Banach. We say that the linear operator $A:\mathcal{D}(A)\subseteq X\rightarrow Y$ admits a closure if there's a linear operator $B:\mathcal{D}(B)\subseteq X\rightarrow Y$ such that $\mathcal{D}(A)\subseteq \mathcal{D}(B)$, $B|_{\mathcal{D}(A)} = A$ and $\mathcal{G}(B)= \overline{\mathcal{G}(A)}$, where $\mathcal{G}(Z)$ is the graph of $Z$. I want to prove that $A$ admits a closure if and only if every sequence $\{x_n\}_{n\in\mathbb{N}}\subseteq \mathcal{D}(A)$ such that $(x_n,A(x_n))\xrightarrow{n\rightarrow\infty}(0,y)$, with $y\in Y$, satisfy that $y=0$. I proved ($\Rightarrow$), but I have had problems in ($\Leftarrow$), because I don't know how to define the operator $B$. Please, somebody can help me? Thanks in advance. AI: The operator $B$ is defined as follows: If $x \in \mathcal{D}(A)$, then $Bx := Ax$. If $x \notin \mathcal{D}(A)$ happens to be the limit of a sequence $\{x_n\}_n \subset \mathcal{D}(A)$ such that $\{Ax_n\}_n$ converges in $Y$---note that such a sequence need not exist in general, in which case we can't define $B$ on $x$---then $Bx := \lim_{n\to\infty}Ax_n$. The point, then, is to check that $B$ is actually well-defined in the second case. So, suppose that $x \notin \mathcal{D}(A)$ is the limit both of $\{x_n\}_n \subset \mathcal{D}(A)$ and of $\{x_n^\prime\}_n \subset \mathcal{D}(A)$, such that $\{Ax_n\}_n$ and $\{Ax_n^\prime\}_n$ converge in $Y$. Then $$\{(x_n-x_n^\prime,A(x_n-x_n^\prime))\}_n = \{(x_n-x_n^\prime,Ax_n - Ax_n^\prime)\}_n \to_{n\to\infty} \left(0,\lim_{n\to\infty}Ax_n - \lim_{n\to\infty}Ax_n^\prime\right);$$ what can you now conclude?
H: integrating a vector over a sphere I have the following triple integral in spherical coordinates $(r,\theta,\phi)$: $$\int_0^{2\pi}\int_0^\pi\int_0^RCr^3\hat\theta\cdot r^2dr\sin{\theta}d\theta d\phi$$ How do I handle the $\hat\theta$? If I ignore it, I get $\frac{2}{3}\pi CR^6$. So is my answer the vector $\frac{2}{3}\pi CR^6\hat\theta$? Do I need to integrate the unit vector $\hat\theta$? If so, how? AI: Yes, you need to integrate this, as $\hat \theta$ is a function of $\theta, \phi$. A straightforward way to attack the problem is to write $\hat \theta$ as a linear combination of $\hat x, \hat y, \hat z$, but with the components expressed in terms of $r, \theta, \phi$.
H: Integral over a ball Let $a=(1,2)\in\mathbb{R}^{2}$ and $B(a,3)$ denote a ball in $\mathbb{R}^{2}$ centered at $a$ and of radius equal to $3$. Evaluate the following integral: $$\int_{B(a,3)}y^{3}-3x^{2}y \ dx dy$$ Should I use polar coordinates? Or is there any tricky solution to this? AI: Note that $\Delta(y^3-3x^2y)=0$, so that $y^3-3x^2y$ is harmonic. By the mean value property, we get that the mean value over the ball is the value at the center. Since the area of the ball is $9\pi$ and the value at the center is $2$, we get $$ \int_{B(a,3)}\left(y^3-3x^2y\right)\mathrm{d}x\,\mathrm{d}y=18\pi $$
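A quick numerical confirmation of the mean value computation (not needed for the argument, and assuming scipy is available): integrating $y^3-3x^2y$ over the disc in polar coordinates centred at $a=(1,2)$ should reproduce $18\pi\approx 56.549$.

```python
import numpy as np
from scipy import integrate

f = lambda x, y: y**3 - 3 * x**2 * y

# Polar coordinates centred at (1, 2): x = 1 + r*cos(t), y = 2 + r*sin(t),
# with Jacobian r.  Outer variable t in [0, 2*pi], inner variable r in [0, 3].
val, err = integrate.dblquad(
    lambda r, t: f(1 + r * np.cos(t), 2 + r * np.sin(t)) * r,
    0, 2 * np.pi, 0, 3,
)
print(val, 18 * np.pi)   # both approximately 56.549
```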
H: Cardinality summation How do I prove that, for set $X$, $$\sum_{S\subseteq X, S\neq \emptyset}\frac{(-1)^{|S|}}{|X|+|S|} = \frac{|X|!(|X|-1)!}{(2|X|)!}$$ I have been around this exercise all day and would much appreciate your help. AI: Let us denote $n = |X|$. Then \begin{align*} \sum_{S\subset X} \frac{(-1)^{|S|}}{|X|+|S|} &= \sum_{k=0}^{n} \binom{n}{k} \frac{(-1)^{k}}{n+k} = \sum_{k=0}^{n} \binom{n}{k} (-1)^{k} \int_{0}^{1} x^{n+k-1} \, dx \\ &= \int_{0}^{1} \sum_{k=0}^{n} \binom{n}{k} (-1)^{k} x^{n+k-1} \, dx = \int_{0}^{1} x^{n-1} (1 - x)^{n} \, dx \\ &= \beta(n, n+1) = \frac{\Gamma(n)\Gamma(n+1)}{\Gamma(2n+1)} = \frac{(n-1)!n!}{(2n)!} \end{align*} as desired.
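For those who like to see it numerically, here is a short exact-arithmetic check of the closed form. Note that the sum below runs over $k=0,\dots,n$, i.e. over all subsets including $S=\varnothing$, which is what the computation in the answer uses.

```python
from fractions import Fraction
from math import comb, factorial

def lhs(n):
    # sum over k = |S| of C(n,k) * (-1)^k / (n+k)
    return sum(Fraction((-1) ** k * comb(n, k), n + k) for k in range(n + 1))

def rhs(n):
    return Fraction(factorial(n - 1) * factorial(n), factorial(2 * n))

for n in range(1, 15):
    assert lhs(n) == rhs(n)
print("identity verified for n = 1..14")
```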
H: Is the following polynomial reducible over Q? I am looking at an exercise saying "Demonstrate that $x^4-22x^2+1$ is irreducible over $\Bbb Q$." I have the solution manual and it solves it like the following: If $x^4-22x^2+1$ is reducible over $\Bbb Z$, then it factors in $\Bbb Z[x]$, and must therefore either have a linear factor in $\Bbb Z[x]$ or factor into two quadratics in $\Bbb Z[x]$. The only possibilities for a linear factor are $x \pm 1$, and clearly neither $1$ nor $-1$ is a zero of the polynomial, so a linear factor is impossible. Suppose $x^4-22x^2+1 = (x^2 + ax + b)(x^2 + cx + d)$. Equating coefficients, we see that: $x^3$ coefficient: $0 = a + c$; $x^2$ coefficient: $-22 = ac + b + d$; $x$ coefficient: $0 = bc + ad$; constant term: $1 = bd$; so either $b = d = 1$ or $b = d = -1$. Suppose $b = d = 1$. Then $-22 = ac + 1 + 1$, so $ac = -24$. Because $a + c = 0$, we have $a = -c$, so $-c^2 = -24$, which is impossible for an integer $c$. Similarly, if $b = d = -1$, we deduce that $-c^2 = -20$, which is also impossible. Thus the polynomial is irreducible. My question is: 1) What does it mean "over $\Bbb Q$"? What is the difference between saying over $\Bbb Z$ and over $\Bbb Q$? 2) Why do we factor it as $(x^2 + \dots)(x^2 + \dots)$? Can't it be like $(x^3 + \dots)(x + \dots)$? Also, why don't we factor it like $(ax^2 + \dots)(bx^2 + \dots)$; I mean, how do we know that the leading coefficients are $1$? Can someone help with this? Thanks AI: $1.$ Being irreducible 'over $\Bbb Q$' means that it can not be factored as the product of polynomials (with degree greater than $0$) with all coefficients in $\Bbb Q$. The difference between irreducibility over $\Bbb Z$ and over $\Bbb Q$ is none due to (the second) Gauss's lemma. $2.$ If it could be factored as the product of a polynomial of degree $1$ times something else, then it would have a root, namely the root of that same polynomial of degree $1$.
H: If $I\pi ^{-k}\subseteq D$ for all $k\leq m$ but $I\pi ^{-m-1}\nsubseteq D$, then $I\pi ^{-m}\nsubseteq (\pi )$ I'm working through a proof and I've got stuck on one detail, it seems like it is supposed to be totally obvious, but need help to figure out why. Let $D$ be a Noetherian integrally closed domain with unique prime ideal $(\pi)$. For an ideal $I$ of $D$ we have that $I\pi ^{-k} \subseteq D$ for all $k\leq m$ but $I\pi ^{-m-1}\nsubseteq D$. I wonder how to make the conclusion that $I\pi ^{-m}\nsubseteq (\pi )$. This is used in proving that every ideal in the ring $D$ above is a power of $(\pi )$. I hope I didn't leave out any crucial information... AI: Suppose, for the sake of contradiction, that $I\pi^{-m}\subseteq(\pi)$. Then every element of $I\pi^{-m}$ is of the form $\pi a$ for some $a\in D$. Thus, every element of $I\pi^{-m-1}$ is an element of $D$, i.e. $I\pi^{-m-1}\subseteq D$; but we chose $m$ so that this is not the case.
H: name of theorem for infinite order polynomial limit for small x is there a name for this property? $\mathop{\operatorname{lim}}_{x\to 0} \sum_{n=1}^\infty x^n = \frac{x}{1-x}$ I have seen it in a derivation but want to know where it comes from. AI: Firstly, the formula as written is false. On the left hand side you take a limit as $x$ approaches $0$, yet on the right hand side you have a function of $x$. The correct formula is that for all $x\in \mathbb R$ with $|x|<1$ it holds that $$\sum_{n=1}^\infty x^n=\frac{x}{1-x}$$ which is known as the sum of a geometric series (it actually holds true for all complex numbers $z$ with $|z|<1$ and the formula fails for any number with absolute value bigger than or equal to $1$). Here is an informal argument (that can be turned completely formal) for proving it: believing (this is the informal part) that the series actually converges, say to $L$, we have that (with a bit more formal stuff lurking around) $$(1-x)L=(1-x)\cdot \sum_{n=1}^\infty x^n=\sum_{n=1}^\infty x^n - \sum_{n=1}^\infty x^{n+1}=x$$ from which the formula follows. There are other ways of showing the formula holds, and, of course, they all require some amount of limit formalism. One says that the function $\frac{x}{1-x}$ admits a power series representation with center $x=0$ and radius of convergence $1$.
H: Delta System Lemma: Kunen’s proof. I'm trying to understand Kunen's Delta System Lemma proof (Set Theory: Introduction to Independence Proofs, Chapter II, Theorem 1.6). I'm having issues understanding the last line: "Since $|\alpha_0^{ < k}|<\theta$ there is an $r \subset \alpha_0$ and $B\subset A_2$ with $|B|=\theta$ and $\forall x \in B(x\cap \alpha_0=r)$, whence $B$ forms a $\Delta$-system with root $r$." I didn't understand the whole sentence: Why is there such $r$ and such $B$? And why is it a Delta system? I understood the previous parts of the proof, but I'm not understanding the ending :( Adding more context: We want to construct a $\Delta$-system $B\subset A$ such that $|B|=\theta$. At this point, we already have constructed a subset $A_2$ of $A$ such that $|A_2|=\theta$ and $x \cap y\subset\alpha_0 < \theta $ whenever $x$ and $y$ are distinct members of $A_2$. PS: A $\Delta$-system is a set $A$ such that there exists $r$ such that $x \cap y=r$ whenever $x$ and $y$ are distinct members of $A$. AI: At that point in the argument you know that $x\cap y\subseteq \alpha_0$ whenever $x$ and $y$ are distinct elements of $\mathscr{A}_2$. For each $x\in\mathscr{A}_2$ we know that $|x|<\kappa$, so $x\cap\alpha_0\in[\alpha_0]^{<\kappa}$. For each $z\in[\alpha_0]^{<\kappa}$ let $\mathscr{A}_2(z)=\{x\in\mathscr{A}_2:x\cap\alpha_0=z\}$. Then $$\mathscr{A}_2=\bigcup_{z\in[\alpha_0]^{<\kappa}}\mathscr{A}_2(z)\;.$$ $|\mathscr{A}_2|=\theta>|\alpha_0^{<\kappa}|$, and $\theta$ is regular, so $\theta$ is not the union of $|\alpha_0^{<\kappa}|$ properly smaller sets; thus, there must be some $r\in[\alpha_0]^{<\kappa}$ such that $|\mathscr{A}_2(r)|=\theta$. Now take $\mathscr{B}=\mathscr{A}_2(r)$. For all $x\in\mathscr{B}$ we have $x\cap\alpha_0=r$, and $\{x\setminus\alpha_0:x\in\mathscr{B}\}$ is a pairwise disjoint family by the construction of $\mathscr{A}_2$, so $\mathscr{B}$ is a $\Delta$-system with root $r$.
H: Question about improper integral Does $\forall \epsilon \in (0,a]: \int _{\epsilon}^{a} f(x)dx =0$ imply that $\lim_{h \rightarrow 0} \int _h^a f(x)dx=0$. I guess it is somehow the definition of it, but I need to know this exactly. AI: It's even stronger. The limit only says that the integral goes to zero when $h$ goes to zero, whereas the first statement says that it is zero for all $h$. In fact, your first statement implies that $f = 0$ a.e. on $(0,a]$, I think.
H: Morse Function Definition: Does it imply a Morse function is $C^2$? In my understanding, a Morse function just means the Hessian matrix is nonsingular at critical points. So my claims are: the function itself should be continuous; the reference to the Hessian matrix in the definition implies Morse functions are twice differentiable - T/F? But Morse functions are not $C^2$ since the definition does not require the second derivatives to be continuous - T/F? Thank you very much! n.b.1. A Morse function is defined to be "A function for which all critical points are nondegenerate and all critical levels are different." n.b.2. $f$ is of class $\mathscr{C}^k$ on $U$ if all iterated partial derivatives of $f$, of order at most $k$, exist and are continuous on $U$. n.b.3. Differentiable functions are continuous. But the derivative of a differentiable function may not be continuous: $$f(x) = \left\{\begin{matrix} x^2 \sin \frac{1}{x} & x \neq 0\\ 0 & x=0 \end{matrix}\right.$$ AI: In most introductory texts, a Morse function is defined to be smooth ($C^\infty$), since most such texts do not wish to bother themselves with the minutiae of degrees of regularity. However, the assertion that a Morse function is $C^2$ suffices, at least for the basics of the subject, i.e. results like the Morse Lemma do not require smoothness. Although you can define the Hessian without $f$ being $C^2$, I can not find any text that develops the theory assuming only twice differentiability. If you read some proofs of the Morse Lemma, they use the fact that the entries of the Hessian are continuous, so I suspect that one must proceed with care when some, but not all, double partials are continuous.
H: $|\phi(G):\phi(H)|$ divides $|G:H|$ Let $\phi$ be a homomorphism defined on a finite group $G$, and let $H\subseteq G$. Show that $|\phi(G):\phi(H)|$ divides $|G:H|$. Not quite sure where to start on this one. We have a theorem saying that $\phi(H)$ is a subgroup of $\phi(G)$, but what is $|\phi(H)|$? If $H$ contains the kernel $N$ of $\phi$, then I think everything behaves nicely and $|\phi(G):\phi(H)|=|G:H|$. But if $N\not\subseteq H$, I don't know what I can conclude. AI: Hint: $\phi(H) = \phi(NH)$ and $|\phi(G):\phi(NH)| = |G: NH|$. Note that finiteness of $G$ is not necessary, we only need $|G: H|$ to be finite.
H: Definition of localization of rings I'm trying to understand this definition of Hungerford's book: The definition is simple, I think I understood what the author means, but... What is $P_P$? Because we will have $P_P=S^{-1}P$, with $S=P-P=\varnothing$. What is $S^{-1}P$, with $S=\varnothing$? I'm sure it should be a silly thing; I need help. Thanks in advance. AI: No: $P_P$ is $S^{-1}P$ where still $S=R\setminus P$. In other words, it is the ideal generated by (the elements in) $P$ in the localized ring $R_P$.
H: What is the range of the following function? I have difficulties in understanding the concept of range. Let $f:\mathbb Z_{12}\to \mathbb Z_3$, $f(x)=x$. What is the range of it? Here is what I think: Range of $f$ is the set $\{a \mid a\equiv x \bmod 3 \text{ and } x=0,1,2\}$. Is that right? Thanks AI: The range is the set of values which you can get as a result of applying the function. That is, for a given function $f:X\to Y$, the range is $\{f(x):x\in X\}$. If I understand your function correctly, its range is $\mathbb Z_3$.
H: Finding all Laurent expansions of $f=\frac{1}{z}$ I have the following homework problem I need help with: Let $G=\{z\in\mathbb{C}:\, z\neq0\}$ and define $f:\, G\to G$ by $f(z)=\frac{1}{z}$. Find all possible Laurent expansions of $f$ which are not Taylor expansions and for each such expansion specify in what set the Laurent series converges. What I tried: For the point $z_{0}=0$ there are Laurent expansions in $$E_{1}:=0<|z|<R$$ and in $$E_{2}:=|z|>R$$ Where $R>0$ is real. I am unsure about in which set the Laurent series converges to $f$, but it seems that in both cases the Laurent series of $f$ is simply $\frac{1}{z}$ and it converges in all points of $E_{1}$ or $E_{2}$ (according to the case). Am I correct? For $z_{0}\ne0$ the Laurent expansion which is not Taylor is in $$E_{1}:=0\leq|z|<R$$ where $R>|z_{0}|$ and $$E_{2}:=|z|>R$$ where $R<|z_{0}|$. I am not sure about where I should have used $<$ or $\leq$ in the description of $E_{1},E_{2}$. I am also having difficulty finding the Laurent series at $E_{i}$ and telling where it is convergent. Can someone please help me out with the case $z_0\neq 0$ and tell me if I did the first case ($z_0=0$) correctly? AI: What you have is a bit confused. When $z_0=0$, then there is only the one expansion, valid in $0<|z|<\infty$, and as you said, the Laurent expansion is just $f(z) = \dfrac 1z$. When $z_0\ne 0$, you have two possible regions: (1) $0<|z-z_0|<|z_0|$ and (2) $|z_0|<|z-z_0|<\infty$. You need an expansion in powers of $z-z_0$. So you start like this: $$\frac1z = \frac1{z_0+(z-z_0)}=\frac1{z_0}\cdot\frac1{1+\frac{z-z_0}{z_0}}\,.$$ When $|z-z_0|<|z_0|$, substitute $u=\frac{z-z_0}{z_0}$ and note that $|u|<1$. Now use geometric series. When $|z-z_0|>|z_0|$, this doesn't work, so you factor out the larger term: $$\frac1z = \frac1{z_0+(z-z_0)}=\frac1{z-z_0}\cdot\frac1{1+\frac{z_0}{z-z_0}}\,.$$ As before, substitute $u=\frac{z_0}{z-z_0}$, note that $|u|<1$, and proceed.
H: Conditional Probability Example using permutations You are dealt three cards. The events of interest concern the number of face cards that you are dealt (0, 1, 2, or 3). What is the conditional probability that you are dealt at least 2 face cards given that the last card dealt to you was a face card? There are 12 face cards (jack, queen, king $\times$ 4 suits) and 40 non face cards. Let F= face card, and R = regular or nonface card. So the question is asking us to find the probability of getting {RFF, FRF, FFF}. I thought I would approach this question using permutations: $\Pr(\text{@ least 2 face cards | last card has face}) = \Pr(\{RFF, FRF, FFF\})=\cfrac{(40)(12)(11) + (12)(40)(11) + (12)(11)(10)}{(52)(51)(50)} = 0.089593$ but the answer is 0.3882. Can anyone please tell me why this is not working? My understanding is that when you're solving probability you can consider ordering or not consider it. In other words, you could use permutations or combinations. Thank you in advance. Here is a tree diagram. Let A, B, and C be events where a face card is obtained. AI: Last or first or second makes no difference. It will feel clearer with first. We want the probability of at least one face card out of two, from a deck in which one face card is gone. It is easier to find the probability of no face card in two picks. This is $\frac{40}{51}\cdot\frac{39}{50}$. So our answer is $1-\frac{40}{51}\cdot\frac{39}{50}$. Another way: There are fancier ways to proceed. Let $F$ be the event "the third card is a face card," and let $A$ be the event "at least two face cards." We want the conditional probability $\Pr(A|F)$. By the definition of conditional probability, we have $$\Pr(A|F)=\frac{\Pr(A\cap F)}{\Pr(F)}.\tag{$1$}$$ Note that $\Pr(F)=\frac{12}{52}$. You calculated the top of $(1)$ correctly, and did not divide by the bottom. Remark: Remember that we are given that the third card is a face card. So the sample space is restricted to those draws in which the third was a face card. What you calculated was the probability that there are at least two face cards and the third is a face card. That is not what the problem asked for, but once we have found it, the answer is one step away.
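To see that the complement argument and the conditional-probability ratio in $(1)$ give the same number, here is a small exact computation:

```python
from fractions import Fraction

# Complement argument: one face card is already gone, so draw two cards
# from 51 and ask for the probability that at least one is a face card.
p_complement = 1 - Fraction(40, 51) * Fraction(39, 50)

# Definition of conditional probability: P(A and F) / P(F).
p_A_and_F = Fraction(40*12*11 + 12*40*11 + 12*11*10, 52*51*50)
p_F = Fraction(12, 52)
p_ratio = p_A_and_F / p_F

assert p_complement == p_ratio
print(p_ratio, float(p_ratio))   # 33/85, approximately 0.3882
```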
H: Proving that sequence $a_n = \sqrt{x \sqrt{x \sqrt{x \sqrt{\cdots}}}} = x^{1-2^{-n}}$ Let $x>0$. For sequence $a_n$, such that $n$ denotes the $n$th term: $$\begin{align} a_1&= \sqrt{x}\\ a_2&= \sqrt{x \sqrt{x}}\\ a_3&= \sqrt{x \sqrt{x \sqrt{x}}}\\ a_4&= \sqrt{x \sqrt{x \sqrt{x \sqrt{x}}}}\\ &\vdots\\ a_{n-1}&= \sqrt{x \sqrt{x\sqrt{... \sqrt{x}}}}\\ a_n&= \sqrt{x \sqrt{x\sqrt{... \sqrt{x \sqrt{x}}}}}\end{align}$$ How could one prove that: $${a_n = x^{1-2^{-n}}}?$$ AI: Hint: \begin{align} a_4&=\sqrt{x \sqrt{x\sqrt{{x \sqrt{x}}}}}\\&=\sqrt{x \sqrt{x\sqrt{{x^{\frac{3}{2}} }}}}\\&=\sqrt{x \sqrt{x^{\frac{7}{4}}}}\\&=\sqrt{x^{\frac{15}{8}}}\\&=x^{\frac{15}{16}}\\&=x^{1-2^{-4}} \end{align} Can you use induction in these footsteps?
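A quick numerical check of the claimed formula (just an illustration; the actual proof is the induction sketched above):

```python
import math

def a(n, x):
    # Build a_n from the inside out: a_1 = sqrt(x), a_k = sqrt(x * a_{k-1}).
    val = math.sqrt(x)
    for _ in range(n - 1):
        val = math.sqrt(x * val)
    return val

for x in (0.3, 2.0, 7.5):
    for n in range(1, 12):
        assert abs(a(n, x) - x ** (1 - 2 ** (-n))) < 1e-9
print("a_n agrees with x^(1 - 2^(-n)) at the tested values")
```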
H: Finding the values of $z$ s.t. $\sum_{n=0}^{\infty} \left( \frac{z}{1+z} \right)^n$ is convergent I manipulated the series to $\sum_{n=0}^{\infty} \left( \frac{1}{1-(-1/z)} \right)^n$, which converges for $|-1/z|<1$ by geometric series. Then solving for $z$, I obtained $z>(1/\bar{z})$. Is this correct? I was expecting some numerical values, so I'm not sure about my answer. AI: Let me suggest another approach. Put $w=\dfrac{z}{z+1}$. Where will $$\sum_{n=0}^\infty w^n$$ converge? What does this tell you about the possible domain of $z$? ADD I am getting $$2\Re z+1>0$$ Note you cannot really use complex numbers in an inequality, since they are not ordered in any way. However, from $|z|<|1+z|$ you get the equivalent $$z\bar z<(1+z)(1+\bar z)$$ since everything is real. This gives $$z\bar z <1+(z+\bar z)+z\bar z$$ which is the same as $$0<1+2\Re (z)$$
H: Differentiating $ f(x)= \frac{x + \sin x}{x - \cos x}$ Can someone help me? I'm having some trouble with this: how can I differentiate $$ f(x)= \frac{x + \sin x}{x - \cos x} \quad ?$$ P.S. Is there any trick or something to derive this kind of limit? Thanks! AI: To find $f'(x)$, use the quotient rule: $$f'(x) = \frac{(x-\cos(x))(1+\cos(x)) - (x + \sin(x))(1+\sin(x))}{(x-\cos(x))^{2}}$$ Can you simplify the algebra from here?
H: Extracting the coefficient from a generating function Consider (for fixed $r$) the following function: $$f(z) = \frac{1}{1-z-z^2-\cdots-z^r} = \frac{1}{1-z\frac{1-z^r}{1-z}}=\sum_{j=0}^\infty\left(z\frac{1-z^r}{1-z}\right)^j$$ (Assume everything is ok with regards to convergence.) The text I am reading claims that if we were to write $f(z)$ in the form $\sum_{n=0}^\infty A_n z^n$, that $$A_n = \sum_{j,k} (-1)^k \binom{j}{k}\binom{n-rk-1}{j-1}$$ I have no idea where this comes from. Could anyone point me in the right direction? Thanks! Edit: Also, the text states without explanation that in the case $r=2$, the coefficients are the Fibonacci numbers: $$\frac{1}{1-z-z^2} = 1 + z + 2z^2 + 3z^3 + 5z^4 + 8 z^5 + 13 z^6 + \cdots$$ Is this a non-trivial result, or am I just not seeing something? AI: $$f(z) = \sum_{j=0}^\infty \left( z \frac{1 - z^r}{1-z} \right)^j = \sum_{j=0}^{\infty}z^j \left( \sum_{k=0}^j {j\choose k} (-z^r)^k \right) \left( \sum_{m=0}^{\infty} {m+j-1 \choose j-1}z^m \right)$$ Change the order of summation. Note that in order to get a term of $z^n$, we need $j + rk + m = n$, or that $m= n - rk - j$. The coefficient of $z^n$ is thus $$ \sum_{j,k} (-1)^k {j\choose k} {n-rk-1 \choose j-1}$$ Let $g(x)$ be the generating function of the Fibonacci numbers. Then, $\begin{array}{l l l l l l} g(x) &= 1 &+ x & + 2 x^2 & + 3x^3 & + \ldots \\ xg(x) & = & + x & + x^2 & + 2x^3 & + \ldots \\ x^2 g(x) & = & & + x^2 & + x^3 & + \ldots \\ \hline (1-x-x^2)g(x) & = 1 \\ \end{array}$ Hence, $g(x) = \frac{1}{1-x-x^2}$. Note: This is with $F_1=1, F_2=2$. Depending on how you want to start your Fibonacci sequence, some people use $g(x) = \frac{x}{1-x-x^2}$ as the generating function instead.
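If it helps to see concrete numbers, here is a short script (my own check, not from the text) that generates the coefficients $A_n$ from the recurrence $A_n=A_{n-1}+\dots+A_{n-r}$ implied by $(1-z-\dots-z^r)f(z)=1$ and compares them with the double sum:

```python
from math import comb

def binom(m, t):
    # Binomial coefficient, taken to be 0 outside the usual range.
    return comb(m, t) if 0 <= t <= m else 0

def A_formula(n, r):
    return sum((-1) ** k * binom(j, k) * binom(n - r * k - 1, j - 1)
               for j in range(1, n + 1) for k in range(j + 1))

def A_recurrence(n_max, r):
    A = [1]
    for n in range(1, n_max + 1):
        A.append(sum(A[n - i] for i in range(1, r + 1) if n - i >= 0))
    return A

for r in (2, 3, 4):
    A = A_recurrence(20, r)
    assert all(A_formula(n, r) == A[n] for n in range(1, 21))

print(A_recurrence(10, 2))   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]: Fibonacci
```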
H: To which group is the $\mathbb Z_{20}^*$ isomorphic? I have a question saying that to which group is $\mathbb{Z}_{20}^{*}$ is isomorphic, where $\mathbb{Z}_{20}^{*}$ is the set of the not zero divisors of $\mathbb{Z}_{20}$. Here is what i think: $\mathbb{Z}_{20}^{*}=\{1,3,7,9,11,13,17,19\}$. It has 8 elements. Then it is isomorphic to $\mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \mathbb{Z}_{2}$ or $\mathbb{Z}_{2} \times \mathbb{Z}_{4}$ or $\mathbb{Z}_{8}$. But how can I decide which of them? And is this group a multiplicative group or an additive group? How can I know? Thank you AI: Hints: Is $1 + 3$ in $\Bbb Z^*_{20}$? On the other hand, one can show that it's closed under multiplication. Is $\Bbb Z^*_{20}$ cyclic? What's the order of $3$? Answer both questions and one possibility will remain. To elaborate on the second hint. We have: \begin{align} 3^1 &= 3 &\equiv 3 \pmod{20} \\ 3^2 &= 9 &\equiv 9 \pmod{20} \\ 3^3 &= 27 &\equiv 7 \pmod{20} \\ 3^4 &= 81 &\equiv 1 \pmod{20} \\ \end{align} Therefore the order of $3$ is $4$. This eliminates the possibility $\Bbb Z_2 \times \Bbb Z_2 \times \Bbb Z_2$. With similar computations, you can see that none of the elements of $\Bbb Z^*_{20}$ generates the group. Hence, it's not cyclic. One possibility remains and it's $\Bbb Z_4 \times \Bbb Z_2$. To show that all elements of $\Bbb Z_2 \times \Bbb Z_2 \times \Bbb Z_2$ have order $1$ or $2$, consider $(a, b, c) \in \Bbb Z_2 \times \Bbb Z_2 \times \Bbb Z_2$. We have: $$ (a, b, c)^n = (a^n, b^n, c^n) $$ Since each of $a$, $b$, $c$ is a member of $\Bbb Z_2$, it has order $1$ or $2$. This forces: $$ (a, b, c)^2 = (a^2, b^2, c^2) = (1, 1, 1) = 1 $$
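If you want to double-check the order computations by brute force, a few lines of Python will do it (this is just verification, not a replacement for the argument above):

```python
from math import gcd

units = [a for a in range(1, 20) if gcd(a, 20) == 1]
print(units)                       # [1, 3, 7, 9, 11, 13, 17, 19]

def order(a, n=20):
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

orders = {a: order(a) for a in units}
print(orders)                      # every order is 1, 2 or 4
assert max(orders.values()) == 4   # no element of order 8, so the group is not cyclic
```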
H: What are some examples of proof by contrapositive? Applying the Modus Tollens argument to Fermat's Little Theorem really helped me to understand logical implication. I never knew that FLT was actually a compositeness test. Theorem (FLT): given integers $a>1$ and $n>1$, if $n$ is prime, then $a^n$ is congruent to $a\ (\bmod\ n)$. By the contrapositive, if $a^n$ is not congruent to $a\ (\bmod\ n)$, then $n$ is not prime. Thus $n$ is composite. What are some other simple and instructive examples of proof by contrapositive? AI: When you want to prove "If $p$ then $q$", and $p$ contains the phrase "$n$ is prime", you should use contrapositive or contradiction to work easily; the canonical example is the following: Prove for $n>2$, If $n$ is prime then $n$ is odd. Here $q$ is the phrase "$n$ is odd". Here $p$ is exactly the phrase "$n$ is prime" and it is very difficult to work with. Because the fact that $n$ is prime means that it is not divisible by any number greater than $1$ and different from $n$, so you must choose from these $n-2$ true sentences the only one that is useful, which is "$n$ is not divisible by 2", but you would not know which is the right choice unless you read $q$. But proving the equivalent contrapositive form is very easy, and you don't have to make any choice. If $n$ is even then $n$ is not prime. Which follows from the fact that every even number greater than $2$ is divisible by $2$, hence not prime. So contrapositive (also contradiction) is used to avoid situations where you have a lot of information and very little of it is actually useful.
H: Random variable with density proportional to a function and finite in some points Let $X$ be a random variable on $[-1,3]$ with density $f(x) = k x^2$ (with $k \in \mathbb{R} $ to be determined) on $[-1,3]$ apart from some points s.t. $p(X=-1) = p(X=3) = \dfrac{1}{4} $ and $p(X=0) = \dfrac{1}{3}$. What is the cumulative distribution function of $X$? What is $p(-1 \leq X < 3) $? To find $k$ I would just calculate $$ k = \dfrac{1}{\int_{-1}^3 x^2 \ dx} $$ and then the cumulative distribution should be $$F(x) = \int_{-1}^x f(t) \ dt $$ while $$p(-1 \leq X < 3) = 1 - p(X = 3) = \dfrac{3}{4}$$ Any suggestion would be appreciated. AI: You can't calculate $k$ the way you suggest because the total probability carried by the continuous part of the distribution isn't 1. Here the atoms and the continuous part together must account for all the probability: $$\mathrm P(X=-1) + \mathrm P(X=0) + \mathrm P(X=3) + \int_{-1}^{3} kx^2\,\mathrm dx = 1$$ Since $\mathrm P(X=-1)=\mathrm P(X=3)=\frac 1 4$ and $\mathrm P(X=0)=\frac 1 3$, the density accounts for the remaining mass $1-\frac 1 4-\frac 1 4-\frac 1 3=\frac 1 6$, so $$k = \frac {1/6}{\int_{-1}^3 x^2\,\mathrm dx} = \frac{1/6}{28/3} = \frac 1 {56}$$ The cumulative function is then zero below $-1$, jumps to $\frac 1 4$ at $-1$, increases continuously according to the density, jumps by $\frac 1 3$ at $0$, continues up to $\frac 3 4$ just below $3$, and finally jumps to one at $3$. Your value $p(-1 \leq X < 3) = 1 - p(X=3) = \frac 3 4$ is correct. Have another go at the rest of the question!
H: Continuity of a function defined differently on $\mathbb Q,\mathbb R\setminus \mathbb Q$ Let's say I define the function $f(x)=2^x$ for rational $x$, and $f(x)=1$ for irrational $x$. My question is: is this function continuous everywhere? I think it's not, because for any $2$ irrational numbers you can find a rational in between, and for any $2$ rationals you can find an irrational in between. My second question is: is the function continuous and differentiable at $x=0$? I think it is, and I think that $f'(0)=0$ because of the picture. Am I correct? (By the way please keep the answers at a pretty low level, I'm a calculus AB student.) Thanks! AI: It's certainly not continuous for any $x$ such that $2^x\neq 1$, you're correct. Also, you're right that it's continuous at $x=0$, since in any neighbourhood of 0, you're no further away from $1$ than $2^x$ is (though you might be closer). Hence the limit as you approach 0 is well-defined. However, the function is not differentiable because you can choose a sequence of rationals $x_n\to 0$ and a sequence of irrationals $y_n\to 0$ and consider $$\lim_{n\to\infty} \frac{f(x_n)-1}{x_n-0} \equiv \frac{2^{x_n}-1}{x_n-0} = \left[\frac {\mathrm d 2^x}{\mathrm d x}\right]_{x=0},\qquad \lim_{n\to\infty} \frac{f(y_n)-1}{y_n-0} \equiv \frac{1-1}{y_n-0} = 0$$ Then the first derivative is that of $e^{x\log 2}$ which is $\log 2\neq 0$. Hence there is no derivative. Notice in your dots that the dots for rationals are sloped at the origin, but the irrational dots are flat. This is what the above expresses.
H: Differentiating $ y= xe^{1\over x} $ Can someone please help me? I'm trying, but I really can't find the second derivative of $y= xe^{1/x}$. Thanks! AI: By the product rule: $$y'=x'e^{1/x}+x\left(e^{1/x}\right)'=e^{1/x}-\frac{1}{x}e^{1/x}$$ I used: $y=e^{f(x)}$, then $y'=f'(x)e^{f(x)}$. Then: $$y''=-\frac{1}{x^2}e^{1/x}+\frac{1}{x^2}e^{1/x}+\frac{1}{x^3}e^{1/x}=\frac{1}{x^3}e^{1/x}$$
H: Monic polynomial with certain properties is the characteristic polynomial? I'm trying to determine whether this statement is true or false: Assume that $dim V = n$ and let $T \in L(V)$. Let $f(z) \in P_n(F)$ be a monic polynomial of degree $n$ such that $f(T) = 0$. Then $f(z)$ is the characteristic polynomial of $T$. I know that because $f(T) = 0$, the minimal polynomial of T divides it and that the minimal polynomial of T divides the characteristic polynomial of T as well, but I can't seem to put these together in a way that conclusively establishes the truth or falsehood of the above statement. If anyone could shed some insight on the matter, it would be greatly appreciated. Thanks! AI: This is not true in general. The minimum polynomial is the smallest degree polynomial that the linear map satisfies, and the characteristic polynomial is defined to be $\det(xI-T)$. If the minimum polynomial is degree $m$, we know that the characteristic polynomial will be degree $n$ so $m\leq n$. If $p(x)$ is the minimum polynomial and $g(x)$ is any other poly, then $T$ will satisfy the polynomial $p(x)g(x)$. So if $m < n$, we can multiply $p(x)$ by any polynomial of degree $(n-m)$ to get a new polynomial of degree $n$ that the linear map satisfies. This will not, in general, be the characteristic polynomial. The above is all done thinking almost solely about polynomials. Now I'll talk about how we can understand the problem thinking more about linear maps. Working over an algebraically closed field, we know that we can pick a basis with respect to which the matrix of $T$ is in Jordan Normal Form (JNF). The multiplicities of the roots of the characteristic polynomial are the number of times that root appears on the diagonal of the matrix in JNF, but the multiplicities of the roots of the minimum polynomial tell us the size of the largest block corresponding to that root. From looking at the matrix, it is intuitively clear that the matrix will satisfy a given polynomial $f(x)$ iff $p(x)$ divides $f(x)$. Thus we can, as before, multiply $p(x)$ by any polynomial of degree $n-m$ to get one of degree $n$ that the linear map satisfies.
H: How to prove statistical hypothesis? I developed a caching method. I took 100 experiments and got that hit ratio is not less than 75%. Now, I want to prove that my method with some probability gives hit ratio not less than 75%. How should I do this? AI: You can calculate the chance of a type I or type II error for your hypothesis, which is essentially what you are looking for: the probability that your conclusion is incorrect. If you are running Bernoulli trials, which I'm assuming you are, your hypothesis is that $p \ge 0.75$, where $P(X_i=0)=1-p$ and $P(X_i=1)=p$. A type I error occurs when your hypothesis was actually correct but you rejected it based on your experiments. Since the criterion you are using to decide whether you reject or accept the hypothesis is whether or not your experiments gave a successful result at least 75% of the time, the chance of a type I error is $$P\left(\sum_{i=1}^{100} X_i < 75 \;\Big|\; p \ge 0.75\right)$$ A type II error occurs when your hypothesis was incorrect, but you accepted it: $$P\left(\sum_{i=1}^{100} X_i \ge 75 \;\Big|\; p < 0.75\right)$$ The sum of the results of independent Bernoulli trials has a binomial distribution, so you can calculate these probabilities using the CDF/PDF of the binomial distribution, which can be found here: http://en.wikipedia.org/wiki/Binomial_distribution
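As a concrete illustration (assuming the 100 experiments are independent Bernoulli trials, which is an assumption about your setup), here is how one could evaluate those two binomial probabilities, taking the true hit ratio to be exactly $0.75$ for the type I error and $0.70$ as one example of $p<0.75$ for the type II error:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(S <= k) for S ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n = 100
# Type I error: true p is 0.75 but the experiment shows fewer than 75 hits.
print("P(S < 75 | p = 0.75) =", binom_cdf(74, n, 0.75))
# Type II error (one example): true p is only 0.70 but we still see >= 75 hits.
print("P(S >= 75 | p = 0.70) =", 1 - binom_cdf(74, n, 0.70))
```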
H: Square root of $8^3$ I'm only in intermediate algebra. I know that $\sqrt{8^3}$ is equal to $16\sqrt{2}$ but could you simply explain the process on how to get to that? AI: Using the standard rules of algebra, we compute: $$\sqrt{8^{3}} = \sqrt{8^2 \cdot8} = \sqrt{8^2}\cdot\sqrt{8} = 8\cdot\sqrt{4\cdot2} = 8\cdot\sqrt{2^2}\cdot\sqrt{2} = 16\sqrt{2}$$
H: Determinant Expression How can I find an expression for the determinant of the matrix $R\in\mathbb{R}^{N\times N}$ with entries $R_{ij}=\dfrac{2}{N}-\delta_{ij}$? AI: I assume that $N$ is the size of the matrix, if it is not, the solution can be adapted easily. Your matrix is $$R= \begin{pmatrix} \frac{2}{N}-1 & \frac{2}{N} & \frac{2}{N} & ..&\frac{2}{N} \\ \frac{2}{N} & \frac{2}{N}-1 & \frac{2}{N} & ..&\frac{2}{N} \\ \frac{2}{N} & \frac{2}{N} & \frac{2}{N}-1 & ..&\frac{2}{N} \\ ...&...&...&...&... \\ \frac{2}{N} & \frac{2}{N} & \frac{2}{N} & ..&\frac{2}{N}-1 \\ \end{pmatrix}$$ Here is a simple solution if you know eigenvalues. $R+I$ has rank $1$, which means that $\lambda=-1$ is an eigenvalue of multiplicity at least $n-1$. Also, the columns of $R-I$ add to $0$, thus $\lambda=1$ is also an eigenvalue of $R$. Thus $$\det(R)=(-1)^{n-1}\cdot 1 \,.$$ If you don't know eigenvalues, add all rows to first one to get $$\det(R)= \det \begin{pmatrix} 1 & 1 & 1 & ..&1 \\ \frac{2}{N} & \frac{2}{N}-1 & \frac{2}{N} & ..&\frac{2}{N} \\ \frac{2}{N} & \frac{2}{N} & \frac{2}{N}-1 & ..&\frac{2}{N} \\ ...&...&...&...&... \\ \frac{2}{N} & \frac{2}{N} & \frac{2}{N} & ..&\frac{2}{N}-1 \\ \end{pmatrix}$$ now subtract $\frac{2}{N}$ times first row from all other rows to get: $$\det(R)= \det \begin{pmatrix} 1 & 1 & 1 & ..&1 \\ 0 & -1 & 0 & ..&0 \\ 0 & 0 & -1 & ..&0 \\ ...&...&...&...&... \\ 0 & 0 & 0 & ..&-1 \\ \end{pmatrix}=(-1)^{n-1}$$
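A quick numerical check of the result $\det(R)=(-1)^{N-1}$, assuming numpy is available (here $n=N$ is the size of the matrix, as assumed in the answer):

```python
import numpy as np

for N in range(2, 12):
    R = np.full((N, N), 2.0 / N) - np.eye(N)
    assert np.isclose(np.linalg.det(R), (-1) ** (N - 1))
print("det(R) = (-1)^(N-1) confirmed for N = 2, ..., 11")
```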
H: $X$ random uniform variable, what is the cumulative distribution of $|X|$? Let $X$ be a random uniform distribution on $[-2,1]$. What is the cumulative distribution of $|X|$, i.e. $G(x) = p(|X| \leq x)$ ? The cumulative distribution of $X$ is $F(x) = \dfrac{x+2}{3} $. $$G(x) = p(|X| \leq x) = p (-x \leq X \leq x) = p(X \leq x) - p(X \leq -x) = F(x) - F(-x) = \dfrac{2}{3} x$$ But what is the domain of $G$? It can't be $[-2,1]$ since $G(-2) < 0$. Is it $[0, \dfrac{3}{2}]$? Is there something wrong? AI: The error that you made is that the cdf of $X$ is NOT $F(x) = \frac{x+2}{3}$ for all $x$. The CDF is $$F(x) = \begin{cases} 0 & x \leq -2 \\ \frac{x+2}{3} & -2 < x < 1 \\ 1 & x \geq 1 \\ \end{cases}$$ The domain of $G$ is all real numbers.
H: Continuity of Infimum function on $\mathbb{R}^n$ Suppose $F$ is a non-empty closed subset of $\mathbb{R}^n$ and let $f(x)= \inf_{y \in F}|x-y|$, where $|x-y|$ is the usual Euclidean distance in $\mathbb{R}^n$. Prove that $f$ is continuous and that $f(x)=0$ iff $x \in F$. Thoughts: We want to show that given some $ \epsilon > 0$, for $a \in F$ we can find some $\delta >0$ such that $|a-b|< \delta \Rightarrow |f(a)-f(b)|< \epsilon$. AI: Hint: One direction is trivial, if $x\in F$ then $|x-x|$ is in the set over which you take the infimum. The other direction is easier to prove using sequences. Recall that $x\in F$ if and only if there exists a sequence $x_n\in F$ such that $\lim x_n=x$, use the assumption that $\inf\{|x-y|:y\in F\}=0$ to show there is such sequence $x_n\in F$.
H: Solving summation $2n+2^2(n-1)+2^3(n-2)+....+2^n$ Can anyone help me with this summation? I tried to use the geometric series on this, but I can't use that. $$2n+2^2(n-1)+2^3(n-2)+....+2^n$$ I am trying to do this for studying algorithms. Can we get a closed form for this ? AI: All you have to find out is how to sum out $$\sum_{k=1}^n 2^k(n-k+1)$$ This unwinds as $$(n+1) \sum_{k=1}^n2^k-\sum_{k=1}^nk2^k$$ The first term is a geometric series. Can you find out what the second term is? One way is to note that $$\sum_{k=1}^n x^k=x\frac{1-x^n}{1-x}$$ Now, differentiate and multiply by $x$ to get that $$\sum_{k=1}^n kx^k=x\cdot\frac{d}{dx}\left(x\frac{1-x^n}{1-x}\right)$$ After finding out what the right hand side is, plug $x=2$.
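Carrying the hint through (geometric series for the first sum, the differentiated geometric series evaluated at $x=2$ for the second) leads to the closed form $2^{n+2}-2n-4$; that final simplification is mine, so here is a quick check rather than a claim:

```python
def direct(n):
    # the sum 2n + 2^2 (n-1) + ... + 2^n, written as sum_{k=1}^{n} 2^k (n-k+1)
    return sum(2 ** k * (n - k + 1) for k in range(1, n + 1))

for n in range(1, 25):
    assert direct(n) == 2 ** (n + 2) - 2 * n - 4
print("closed form 2^(n+2) - 2n - 4 verified for n = 1..24")
```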
H: Doubt in $\varepsilon$-$\delta$ proof of continuity of $x^2$ I have one elementary doubt on the proof that $f(x)=x^2$ is continuous for every $a \in \Bbb R$. The initial steps usually presented are: to deduce which $\delta$ will work we write: $$|x^2-a^2|=|x-a||x+a|$$ Now, we must find some bound for $|x+a|$, so we require that $|x-a|<1$ or some other positive real number. Here comes my first doubt, this trick appears very often, what's the intuition on making $|x-a|<1$? After that we have that this implies $x < 1 + a$ and this implies that $|x+a| < 1 + 2|a|$. Now it's obvious that if we set $|x-a|< \varepsilon/(1+2|a|)$ and this obviously will make $|f(x)-f(a)|<\varepsilon$. However this was deduced under the hypothesis that $|x-a|<1$, so what if it were not true that $|x-a|<1$? Also the end of the proof is normally saying that we must take $$\delta = \min\{1, \varepsilon/(1+2|a|)\}$$ However it happens that if the minimum is $1$, it seems to not work, because when we use this $\delta$ the $\varepsilon$ doesn't even appear. I think my problem is that I didn't really get the trick of making $|x-a|<1$. Thanks very much in advance! AI: The number $1$ is arbitrary; it was chosen for its simplicity. You are right that $|x-a| < 1$ is not satisfied for every $x$, but it does not matter. We are only interested in what happens when $x$ is very close to $a$; more precisely, when $x$ satisfies $|x-a| < \delta$. But by our choice of $\delta$, $|x-a| < 1$ is always satisfied whenever $x$ is $\delta$-close to $a$.
H: Prove that this function define a norm on $\mathbb{R}^2$. Let $\| \;\|:\mathbb{R}^2\rightarrow\mathbb{R}$ be a function such that $ \| (x,y)\|=\sqrt{|x|^2+|y|^2}$ for all $(x,y)\in\mathbb{R}^2$. I need to show that $\|\;\|$ defines a norm. I would like a hint in order to prove that $\parallel \; \parallel$ satisfies the triangle inequality: $ \sqrt{|x_1+x_2|^2+|y_1+y_2|^2}\leq \sqrt{|x_1|^2+|y_1|^2}+\sqrt{|x_2|^2+|y_2|^2}$ Thanks! AI: Let $x, y\in \mathbb{R}^2$. Then \begin{align} \|x+y\|^2&=\langle x+y, x+y\rangle\\&=\langle x, x\rangle + 2\langle x, y\rangle+\langle y, y\rangle \\ &\le \{\text{Cauchy-Schwarz}\} \\&\le \|x\|^2+2\|x\|\|y\|+\|y\|^2 \\ &=(\|x\|+\|y\|)^2, \end{align} which is your result.
H: What's the probability of a set of only three digits appearing in a random 9 digit set? I'd like to know the method for correctly calculating the probability of a random sequence of $9$ numbers only containing $3$ unique, different numbers. For the purpose of this question: there are 10 numbers $0,1,2,3,4,5,6,7,8,9$ i.e. the probability of this: $123123123$ - for all unique combinations of $3$ digits (e.g $071$ in $071717717$) My initial instinct is $(1/3)^9$ - is this correct? AI: From the Twelvefold way, the number of surjective functions from a set of size $n$ to one of size $x$ is $x!\{{n\atop x}\}$, where $\{{n\atop x}\}$ is the Stirling number of the second kind. Here $n=9$ and $x=3$, and this represents the number of $9$-digit numbers containing $3$ (specified) integers. There are ${10\choose 3}$ possibilities of which integers to allow in the number (assuming base 10), so the total probability is $$P=\frac{3!}{10^9}\left\{{9\atop 3}\right\}{10\choose 3}=\frac{6\cdot3025\cdot120}{1000000000}=\frac{2178000}{1000000000}=0.2178\%$$ which is a bit more than the $3^{-9}\approx.005\%$ originally predicted. Edit: due to some ambiguities in the OP, I'm also listing the same results as above where the base alphabet only contains $9$ symbols (say $\{0,1,2,3,4,5,6,7,8\}$), per Ross Millikan's analysis. Then we have $$P=\frac{3!}{9^9}\left\{{9\atop 3}\right\}{9\choose 3}=\frac{6\cdot3025\cdot84}{387420489}=\frac{1524600}{387420489}\approx0.39352\%.$$ In general, if we want to know the proportion of length $n$ strings on an alphabet of $k$ symbols which use exactly $x$ distinct symbols, the formula is $P=\frac{x!}{k^n}\left\{{n\atop x}\right\}{k\choose x}$.
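A small script that recomputes these numbers from the Stirling-number recurrence $\left\{{n\atop x}\right\}=x\left\{{n-1\atop x}\right\}+\left\{{n-1\atop x-1}\right\}$, covering both the base-10 reading and the 9-symbol reading of the question:

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def stirling2(n, x):
    # Stirling numbers of the second kind
    if n == x:
        return 1
    if n == 0 or x == 0:
        return 0
    return x * stirling2(n - 1, x) + stirling2(n - 1, x - 1)

def prob(n, k, x):
    # proportion of length-n strings over a k-symbol alphabet using exactly x symbols
    return factorial(x) * stirling2(n, x) * comb(k, x) / k ** n

print(stirling2(9, 3))     # 3025
print(prob(9, 10, 3))      # 0.002178      (10 possible digits)
print(prob(9, 9, 3))       # 0.0039352...  (9 possible digits)
```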
H: Giving an exact description for $E_{R}:=\{\cos(z):\, R<|z|<\infty\}$ I have the following homework problem: For each $R>0$ prove $$A(0,0,\infty)=\{e^{z}:\, z\in A(0,R,\infty)\}$$ and give a description of the set $$E_{R}:=\{\cos(z):\, z\in A(0,R,\infty)\}$$ Where $$A(z_{0},r_{0},r_{1}):=\{z\in\mathbb{C}:\, r_{0}<|z-z_{0}|<r_{1}\}$$ There is also a hint for the second part: if $\cos(z)=\omega$ for some given $\omega\in\mathbb{C}$ then $e^{iz}$ must satisfy a certain quadratic equation. What I did: I have proved the first part (which doesn't seem relevant for the second part). I have also used the hint: If $$ \cos(z)=\omega $$ then $$ \frac{e^{iz}+e^{-iz}}{2}=\omega $$ hence $$ \frac{e^{iz}\cdot e^{iz}+e^{iz}\cdot e^{-iz}}{2}=\omega e^{iz} $$ Denote $t=e^{iz}$. Then $$ t^{2}-2\omega t+1=0 $$ Hence $$ t_{1,2}=\frac{2\omega\pm\sqrt{4\omega^{2}-4}}{2} $$ $$ t_{1,2}=\omega\pm\sqrt{\omega^{2}-1} $$ I also noticed that $e^{-iz}$ satisfies the same equation hence one of $e^{iz},e^{-iz}$ is $t_{1}$ and the other is $t_{2}$. I don't see how this helps me since $$ t_{1}+t_{2}=2\omega $$ hence $$ \frac{e^{iz}+e^{-iz}}{2}=\frac{t_{1}+t_{2}}{2}=\omega $$ so this doesn't give me any new information. This is where I am stuck, can someone please help me continue this exercise? AI: Solution 1 Let $f(z)=e^{iz},$ so $f(A(0,R,\infty))=A(0,0,\infty)$ Also, $g(z)=\frac12(z+\frac1z)$ Then $\cos z=\frac12(e^{iz}+e^{-iz})=(g\circ f)(z)$ Where does $g$ map $A(0,0,\infty)?$ Well, generally $g$ maps the points $\{|z|>1\}$ conformally to $\mathbb C\setminus [-1,1]$ (see Bak–Newman, Riemann Map chapter). Also, the points of the circumference $|z|=1$ are mapped to the interval $[-1,1]$ Therefore, the image of $|z|\geq 1$ is the whole plane, which implies that the image of $A(0,0,\infty)$ is the whole plane. Solution 2 (using the hint) You have proved that the image of $A(0,R,\infty) $ through $e^z$ is $A(0,0,\infty)$. The same holds if we replace $e^z$ with $e^{iz}$. Now we will prove that the image of $A(0,R,\infty)$ through $\cos z$ is the whole plane. Take a $w\in \mathbb C$. We need to prove that there exists $z\in A(0,R,\infty)$ such that $$\cos z=w\iff$$ $$e^{2iz}-2we^{iz}+1=0\iff $$$$t=e^{iz}, t^2-2wt+1=0$$ Therefore $t=\dfrac{2w\pm\sqrt{4w^2-4}}{2}=w\pm\sqrt{w^2-1}$ This proves already that for a given $w\in\mathbb C$, you can find a $t\in\mathbb C$ which is not zero, therefore it belongs to $A(0,0,\infty)$. Now, obviously (from the first part of the exercise), for a given $t\in A(0,0,\infty)$ we can find a $z\in A(0,R,\infty)$ such that $e^{iz}=t$ and the proof is finished.
H: Is There A Function Of Constant Area? If I take a point $(x,y)$ and multiply the coordinates $x\times y$ to find the area $(A)$ defined by the rectangle formed with the axes, then is there a function $f(x)$ so that $xy = A$, regardless of what value of $x$ is chosen? AI: The curve you are after is the rectangular hyperbola given by the equation $xy=A$. For instance, in the figure below, you see that the red, purple and blue rectangles are all of unit area ($A=1$).
H: If $f$ is a Morse function, then so is $f \circ \phi^{-1}$, where $\phi: U \rightarrow \mathbb{R}^k$ is the coordinate chart. I am trying to show: if when $f^\prime = 0$, then $f^{\prime\prime} \neq 0 \Leftrightarrow (f \circ \phi^{-1})^\prime = 0$, $(f \circ \phi^{-1})^{\prime\prime} \neq 0$. But the problem is, because $(f \circ \phi^{-1})^\prime = f^\prime(\phi(x))\phi^\prime(x)$, when $(f \circ \phi^{-1})^\prime = 0$, it could be $f^\prime = 0$ or $\phi^\prime = 0$. Can I simply regard $\phi$ as a change of coordinate function hence its Jacobian is nonsingular, so $\phi^\prime \neq 0$? $(f \circ \phi^{-1})^{\prime\prime} = (f^\prime(\phi(x))\phi^\prime(x))^\prime = f^{\prime\prime}(\phi(x))\phi^\prime(x)\phi^\prime(x) + f^\prime(\phi(x))\phi^{\prime\prime}(x)$ Assuming $f^\prime(\phi(x))=0, \phi^\prime \neq 0$: $(f \circ \phi^{-1})^{\prime\prime} = f^{\prime\prime}(\phi(x))\phi^\prime(x)\phi^\prime(x) \neq 0$ Assuming $\phi^\prime = 0: (f \circ \phi^{-1})^{\prime\prime} = 0$ Failed. Thank you very much! AI: This is just a calculus question. The given condition "coordinate chart" is strong. I found this in my calculus book: Advanced Calculus of Several Variables, Edwards Definition: Let $\phi: U \rightarrow \mathbb{R}^n$ be a $\mathscr{C}^1$ mapping defined on an open subset $U$ of $\mathbb{R}^k (k \leq n)$. Then we call $\phi$ regular if the devrivatire matrix $\phi^\prime(u)$ has maximal rank $k$, for each $u \in U$. Definition: Given $p \in M$, there exists an open set $U \subset \mathbb{R}^k (k<n)$ and a regular $\mathscr{C}^1$ mapping $\phi: U \rightarrow \mathbb{R}^n$ such that $p \in \phi(U)$, with $\phi(U^\prime)$ being an open subset of $M$ for each open set $U^\prime \subset U$. Then the mapping $\phi$ is called a coordinate patch for $M$ provided it is injective. Certainly, a full rank matrix can't be an empty matrix.
H: The area between two curves Hi there's one problem on my study guide that my teacher didn't go over and I don't know how to approach/solve it. Here's the problem: Find the area between the curves on the given interval. Draw a graph of the functions and the region. $$y=x^4 , y=x-1, -2 \le x \le 0.$$ AI: Hint: determine if there are any intersection points of these two curves on the specified interval. If there are none, great! Determine which one of the two is greater on the interval (WLOG call the greater of the two $y_{1}$, and the smaller $y_{2}$; graphing should help you determine this!), and then find $$\int_{-2}^{0}y_{1}-y_{2} dx$$ If there are intersection points, break up the integral into several subintervals of $[-2, 0]$ and again determine which curve is greater on each portion of the integral. Then integrate each piece exactly as you did for the case that there are no intersection points. Edit: at intersection points, the function which is greater might change, and so you would have to break up the integral accordingly to get the right subtraction of functions. In this case, $x^{4}$ and $x-1$ don't intersect each other on the interval $[-2, 0]$, and so you don't have to worry about it; $x^{4} > x - 1$ for all $-2 \leq x \leq 0$.
H: What is the contragredient representation? Let $V=M_2(\Bbb C)$ be the set of all $2$x$2$-matrices. Let $G=B$x$B$ where $B$ is the group of $2$x$2$ lower triangular matrices with non-zero diagonal entries. Then G acts on $V$ by $\rho (g,h)x=gx^th$ for $x \in V$ and $(g,h) \in G$. What is the representation $\rho^*$ of $G$ contragredient to $\rho$? AI: First, I'll rewrite the formula for $\rho$ so that I find it more understandable (I know it is also somewhat accepted notation to write the "t" to the left, but in this case I first read it as "$x$ transposed"): $$\rho(g,h)x=g x h^t.$$ (That is, $h$ is getting transposed). The space $V$ has a non-degenerate symmetric bilinear form defined by $$(x,y)=\mathrm{trace}(xy).$$ This form identifies $V$ with its dual. In terms of this identification, the dual ("contragredient") representation $\chi(g,h)$ is given by $$\chi(g,h)(y)(x)=y(g^{-1} x (h^{-1})^t)=\mathrm{trace}(y g^{-1} x (h^{-1})^t)=\mathrm{trace}((h^{-1})^t y g^{-1} x)=((h^{-1})^t y g^{-1})(x).$$ In other words, $$\chi(g,h) y =(h^{-1})^t y g^{-1}.$$
H: Differentiate $g(t)= {e^t - e^{-t} \over e^t + e^{-t}}$ I'm having some trouble trying to differentiate the function $g(t)= \dfrac{e^t - e^{-t}}{e^t + e^{-t}}$ Can someone help me? Thanks a lot! AI: $g(t) = \dfrac{a(t)}{b(t)}$ $g'(t)= \dfrac{a'(t)b(t)-a(t)b'(t)}{b^2(t)}$ $g'(t)= \dfrac{(e^t+e^{-t})(e^t+e^{-t})-(e^t-e^{-t})(e^t-e^{-t})}{(e^t+e^{-t})^2}$ $g'(t)= \dfrac{(e^{2t}+2+e^{-2t})-(e^{2t}-2+e^{-2t})}{(e^t+e^{-t})^2} = \dfrac{4}{(e^t+e^{-t})^2}$ By the way, $g(t)=\tanh(t)$, and   $g'(t)=\dfrac{1}{\cosh^2(t)} = \dfrac{1}{\left(\dfrac{e^t+e^{-t}}{2}\right)^2} = \dfrac{4}{(e^t+e^{-t})^2}$.
H: Find the equation of the sine function graphed below. Find the equation of the sine function graphed below. Write a cosine function for the graph below. Assume the least possible phase shift. AI: Hint: Your answer should be of the form $f(x)=A \sin (kx+\phi)$. What is the amplitude of a sine wave before you multiply it by $A$? What is the wavelength before you multiply $x$ by $k$? Where does it pass through $0$?
H: Continuity and limits. Please check epsilon delta Suppose $f$ is continuous at $a$ and $f(a) = 0$. Prove that if $\alpha \neq 0 $, then $f+\alpha$ is nonzero in some open interval containing $a$. Since $f$ is continuous, we take $\epsilon = |\alpha|$; then we have $|x - a| < \delta \implies -|\alpha| < f(x) < |\alpha| \implies \alpha - |\alpha | <f(x) + \alpha < \alpha + |\alpha |$ We consider two cases. If $\alpha >0$, then $\alpha = |\alpha|$; thus $|x - a| < \delta \implies 0<f(x) + \alpha < 2|\alpha |$ On the other hand, if $\alpha < 0$ we have $|x - a| < \delta \implies -2|\alpha |< f(x) + \alpha < 0$ EDIT Spivak's answer book referred me back to a theorem in the book. I am unfortunately too lazy to do that and after a quick glimpse of the theorem, it didn't look anything like what I wrote... AI: Correct. Note that in such situations, people usually give themselves some space and take $\epsilon=|\alpha|/2$. And it is not recommended to get into the habit of relying too much on strict inequalities. It does not matter in this case, of course. But in more involved arguments requiring a limiting process, strict inequalities become non-strict... Also, you can handle both cases at once with the reverse triangle inequality: $$|t-z|=|z-t|\geq ||z|-|t||\geq |z|-|t|.$$ So take $\delta>0$ such that $|x-a|\leq \delta$ implies $|f(x)|\leq \frac{|\alpha|}{2}$. Then $$ |f(x)+\alpha|\geq |\alpha|-|f(x)|\geq |\alpha|-\frac{|\alpha|}{2}=\frac{|\alpha|}{2}>0. $$ Note: the theorem you mention could very well be that if $f$ and $g$ are continuous at $a$, then $f+g$ is continuous at $a$. In this case, take $g$ the constant function equal to $\alpha$, which is obviously continuous. So $h=f+\alpha$ is continuous at $a$ where it takes the value $\alpha\neq 0$. Hence $|h|$ is continuous at $a$ where it takes the value $|\alpha|>0$. Take $\epsilon=\frac{|\alpha|}{2}>0$. Then there is $\delta>0$ such that $|x-a|\leq \delta$ implies $$ 0<\frac{|\alpha|}{2}=|\alpha|-\frac{|\alpha|}{2}\leq |h(x)|=|f(x)+\alpha|\leq |\alpha|+\frac{|\alpha|}{2} $$ where the rhs is useless for your purposes.
H: What are some examples of functions that are continuous on $[a,b]$, differentiable on $(a,b)$ but not at $a$ and $b$. I am studying the MVT and Rolle's Theorem. I would like some examples of functions that are continuous on $[a,b]$, differentiable on $(a,b)$ but not differentiable at $a$ and $b$. I am aware that the derivative at the point $c$ is defined as: $ f'(c) = \displaystyle \lim_{x \to c} \frac{f(x) - f(c)}{x-c}$ Basically, I'm wondering why a function has to be continuous at the endpoints of the interval, but it doesn't have to be differentiable at the endpoints. Why is this important? My other question is, how significant is the study of open/closed sets in calculus? AI: $f(x)=\sqrt{a^2-x^2}$ on $[-a,+a]$ or $f(x)=\sqrt[3]{x}$ on $[0,b],~b>0$
H: A theorem in Linear Algebra; linear dependence - Axler I really am having trouble understanding the statement and the proof. Why does the theorem pick $v_1 \neq 0 $? Why not $v_2$? Also in proving (a), why do we consider the largest $j$? I do not understand the statement "Not all of $a_2, a_3, \dots, a_m$ can be $0$ (because $v_1 \neq 0$)." Can someone teach me by example? AI: First, there's nothing special about $v_1$ that we set it not equal to $0$. We're putting an arbitrary order to our list of basis vectors: the theorem would work the same if we switched the places of $v_1$ and $v_2$ (and changed the statement accordingly so that we could potentially express $v_2$ in terms of $v_1$ and require that $v_2 \ne 0$). We have to require that $v_1 \ne 0$ because otherwise the singleton $\{ v_1 \}$ is linearly dependent, but it is nonsensical to express $v_1$ in terms of previous basis vectors (unless you consider $v_1 = 0$ an example of that). We consider the largest $j$ with nonzero coefficient simply because we're expressing $v_j$ in terms of previous basis vectors. If the coefficient of $v_k$ is $0$, then the statement says nothing about the vector $v_k$. We want to be able to divide out the coefficient, and taking the largest $j$ vector allows us to do that. Now to answer your last question, suppose all of $a_2 \dots a_m$ are $0$. We're assuming that $a_1$ and $v_1$ are not zero, so $a_1 v_1 \ne 0$. But $0 = a_1 v_1 + \dots + a_m v_m = a_1 v_1$ if the other coefficients are all $0$. This is a contradiction. Hope that helped.
H: Other log solutions? I am evaluating the expression: $\ln(1)$ And I know the trivial solution is $0$. Are there other solutions to this equation? I feel there should be, my logic is as follows: if: $\ln(1) = x \implies 1 = e^x \implies 1 = 1 + x + x^2/2! + x^3/3!... $ $\qquad\qquad\qquad\qquad\qquad\implies 0 = x + x^2/2! + x^3/3!...$ $\implies x = 0$ is one solution, the other solution is all $x$ such that: $$1 + x/2! + x^2/3! + x^3/4! ... = 0$$ There have to be other solutions, or limiting solutions... What are they? AI: There is no equation mentioned in the post. I assume you mean the equation $e^x=1$. This has no real solutions other than $x=0$. You can see this from the fact that the function $e^x$ is an increasing function, so can take on the value $1$ at most once. The logarithm function can be extended to the complex numbers, where it is multiple-valued. In the complex numbers, the equation $e^z=1$ has the solutions $z=2n\pi i$, where $n$ ranges over the integers. So in the complex numbers, $\ln 1$ takes on infinitely many values.
H: Integrability of a function (Darboux) Question: Let $f: [a,b] \to \mathbb{R}$ and assume $0 \leq f(x) \leq B$ for $x\in [a,b]$. Show that $$U(f^2,P) -L(f^2,P) \leq 2B\ ( U(f,P) - L(f,P) )$$ for all partitions $P$ of $[a,b]$. Show that if $f$ is integrable on $[a,b]$, then so is $f^2$ [no positivity assumption here]. I found this hint buried in the LaTeX file my prof gave me. (Hint: $f(x)^2 -f(y)^2 = (f(x) + f(y)) \cdot (f(x) -f(y))$.) It really wasn't easy to put it to good use... $$\begin{align} U(f^2,P) -L(f^2,P) &= (U(f,p) + L(f,p)) \cdot (U(f,p) - L(f,p)) \\ &\leq (|U(f,p)| + |L(f,p)|) \cdot (U(f,p) - L(f,p)) \end{align}$$ We know $|U(f,p)|\geq |L(f,p)|$, and since $f(x) \leq B$, $|f(x)| \leq B$ for all $x \in [a,b]$ where $B \in \mathbb {R}$. Knowing that $|L(f,p)| \leq |U(f,p)|\leq |f(x)| \leq B$, we get: $$(B + B) \cdot (U(f,p) - L(f,p)) = 2B\ (U(f,P) - L(f,P))$$ Hence $U(f^2,P)-L(f^2,P) \leq 2B\ (U(f,P) - L(f,P))$. Anyone agree with me? Part 2) Theorem: Let $f:[a,b] \to [c,d]$ be Darboux-integrable and $g:[c,d] \to \mathbb {R}$ be continuous. Then the composition $g \circ f$ is Darboux-integrable. Let $g=x^2$, so that $g \circ f=f^2$. Since $g$ is continuous on $[c,d]$ for all $c,d \in \mathbb {R}$, $f^{2}$ is integrable as long as $f$ is integrable on $[a,b]$ by the theorem above. AI: $\color{red} \cdots$ $U(f,p)$ generally refers to the "upper sum," not a step function. Instead, define two step functions $s$ and $t$ over a partition $P$ such that $s(x) \le f(x) \le t(x)$. Let $s$ be maximal while still requiring $s(x) \le f(x)$ and let $t$ be minimal. Then $s^2$ and $t^2$ are step functions corresponding to $f^2$. Your comment that "hence $U(f,p) \le B$" threw me off a bit. You can simply conclude that $t(x) \le B$ since $f(x) \le B$ and $t$ is a step function taking values on $f$. And I would recommend turning the inequality $[B+B]( U(f,p) - L(f,p) ) \le 2B ( U(f,p) - L(f,p) )$ into an equality. Both parts give correct proofs (if we replace the instances of $U(f,p)$ and $L(f,p)$ appropriately with $s$ and $t$). However, it sounds like your professor may want you to prove the second part directly from the first. If that is the case, simply pay attention to the fact that $U(f,p) - L(f,p) \to 0$ as the partitions become finer, while $2B$ is a fixed constant. $\color{red} *$ But be careful, because you have to drop the positivity assumption for this part. There are a few lines in there which are unnecessary. For instance, you write: $|U(f,p)|\geq|L(f,p)|$ since $f(x)\leq B$, $|f(x)|\leq B$ for all $x\in[a,b]$ where $B\in\mathbb R$ While all these statements are true, and you should definitely mention $U(f,p) \ge L(f,p)$, the second part is both unnecessary and trivial and makes the proof a bit messy (since it makes it sound like the first statement follows from the second). So I would recommend getting rid of the part beyond "since." You also write: $|L(f,p)|\leq|U(f,p)|\leq|f(x)|\leq B$ By using the absolute value signs, do you mean to be taking the maximum value of the functions?
H: Given $x,y\in\mathbb{C}^n$ s.t. $f(x,y)=\sup_{\theta,\phi}\{\|e^{i\theta}x-e^{i\phi}y\|^2,\theta,\phi\in\mathbb{R}\}$ Given$$x,y\in\mathbb{C}^n,\quad f(x,y)=\sup_{\theta,\phi}\{\|e^{i\theta}x-e^{i\phi}y\|^2,\theta,\phi\in\mathbb{R}\}$$ Then which is/are the following are true? $1.\ f(x,y)\le \|x\|^2+\|y\|^2-2Re|\langle x,y\rangle|$ $2.\ f(x,y)\le \|x\|^2+\|y\|^2+2Re|\langle x,y\rangle|$ $3.\ f(x,y)= \|x\|^2+\|y\|^2+2Re\langle x,y\rangle$ $4.\ f(x,y)\ge \|x\|^2+\|y\|^2-2Re\langle x,y\rangle$ I have no idea how to start with or how to solve, could any one help me? AI: First, an important equality! $$\begin{align*}\|a-b\|^2 &= \langle a - b, a - b\rangle \\ &= \langle a,a \rangle - \langle a,b \rangle - \langle b,a \rangle + \langle b ,b \rangle \\ &= \|a\|^2 - (\langle a,b\rangle + \langle b,a\rangle) + \|b\|^2 \\ &= \|a\|^2 - (\langle a,b\rangle + \overline{\langle a,b\rangle}) + \|b\|^2 \\ &= \|a\|^2 - 2\mbox{Re}(\langle a,b\rangle) + \|b\|^2\end{align*}$$ Now, let's attack the norm inside the sup. $$\begin{align*}\|e^{i\theta}x - e^{i\phi}y\|^2 &= \|x\|^2 - 2\mbox{Re}(\langle e^{i\theta}x, e^{i\phi}y\rangle) + \|y\|^2 \\ &= \|x\|^2 - 2\mbox{Re}(e^{i(\theta-\phi)}\langle x, y\rangle) + \|y\|^2 \end{align*}$$ Finally, use this nice fact about complex numbers: given any complex number $\lambda$, there is some $e^{ip}$ so that $\mbox{Re}(e^{ip}\lambda) = |\lambda|$. Geometrically, we are rotating the vector $\lambda$ so that it lies on the real axis. (We could also rotate to be $-|\lambda|$...) Put these facts together, noting that $\mbox{Re}|\langle x,y\rangle| = |\langle x,y\rangle|$, since the norm is already real!
H: Injectivity of $A-\lambda I$ I'm reading a paper on determinants and at one point the author states that: A complex number $\lambda$ is called an eigenvalue of matrix $A$ if $A-\lambda I$ is not injective. Why is this? Could someone clarify :) Thank you! =) AI: Let's view $A$ as a linear operator between vector spaces $V \to W$. An eigenvalue $\lambda$ and eigenvector $x$ satisfy $$Ax = \lambda x$$ So equivalently: $$( A - \lambda I ) x = 0$$ But $( A - \lambda I ) 0 = 0$ and $0 \ne x$, so $A - \lambda I$ sends two distinct vectors to $0$. This shows that $A - \lambda I$ is not injective. Conversely, if $A - \lambda I$ is not injective, then some nonzero $x$ satisfies $(A - \lambda I)x = 0$, i.e. $Ax = \lambda x$, so $\lambda$ is an eigenvalue. The two formulations are therefore equivalent.
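A small numerical illustration of this (a sketch, assuming NumPy; the matrix and eigenpair below are an arbitrary example, not from the paper): for an eigenpair $(\lambda, x)$, the nonzero vector $x$ is mapped to $0$ by $A-\lambda I$, so $A-\lambda I$ cannot be injective, and its determinant vanishes.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])              # arbitrary example matrix
lam = 3.0                               # an eigenvalue of A
x = np.array([1.0, 1.0])                # a corresponding eigenvector

print((A - lam * np.eye(2)) @ x)        # [0. 0.] -- a nonzero vector maps to 0
print(np.linalg.det(A - lam * np.eye(2)))  # 0.0, confirming non-injectivity
```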
H: $f(1/n)=\frac{2n}{3n+1}$, Then Given that $f:D\rightarrow \mathbb{C}$ is analytic, $D=\{z:|z|<1\}$, analytic at $0$ and satisfies $f(1/n)=\dfrac{2n}{3n+1}$. Then $f(0)=2/3$ $f$ has a simple pole at $z=-3$ $f(3)=1/3$ No such $f$ exists. I considered $g(z)=f(z)-\dfrac{2}{3+z}$; the zero set of $g$ contains $\{\dfrac{1}{n}\}$, which has the limit point $0\in D$, so by the identity theorem can I say $f(z)=\dfrac{2}{3+z}$? So $1$ and $3$ are true for sure. For $2$: if our function were defined on $\mathbb{C}$ or on a domain containing the point $-3$, then $-3$ would indeed be a simple pole; but in our case $2$ is false. Am I right? $4$ is false. AI: This is a very poorly formulated question. Your idea is correct, but it is possible to argue for both 2. and 3. to be either true or false. Notice that the formulation of the problem states that $f$ is defined on the unit disc. In other words, questions 2 and 3 are meaningless, since $z=-3$ and $z=3$ are outside the specified domain of definition. On the other hand a (the) function described by the conditions has a unique meromorphic extension to the entire complex plane, and for this extension, questions 2 and 3 make sense. Strictly speaking, the extended function is not the $f$ described in the question, though.
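The conclusion can be checked symbolically (a sketch, assuming SymPy): $f(z)=\frac{2}{3+z}$ reproduces the prescribed values at $z=1/n$ and gives $f(0)=2/3$, while the value $1/3$ at $z=3$ belongs to the unique meromorphic extension, whose only pole is at $z=-3$ outside the unit disc.

```python
import sympy as sp

z, n = sp.symbols('z n')
f = 2 / (3 + z)

# Difference between f(1/n) and the prescribed value 2n/(3n+1) simplifies to 0.
print(sp.simplify(f.subs(z, 1 / n) - 2 * n / (3 * n + 1)))  # 0
print(f.subs(z, 0))   # 2/3
print(f.subs(z, 3))   # 1/3  (value of the meromorphic extension)
```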
H: evaluating $\int_0^{\infty}\frac{e^{-t-\frac{x}{t}}}{t} dt$ I got to this integral while proving some theorem in statistics: $$\int_0^\infty \frac{e^{-t-\frac{x}{t}}}{t} \mathop{dt}$$ I have trouble evaluating it. I tried integration by parts and substitution with polynomial and trigonometric functions. None of them helped, and Wolfram can't compute it either. Do you have a hint on how to solve this? AI: With a little help from Maple, the integral is $$\int_0^\infty \frac{e^{-t-\frac{x}{t}}}{t}\,dt = 2K_0(2\sqrt{x}),$$ where $K_0$ is the modified Bessel function of the second kind of order zero. This can also be seen directly (for $x>0$): substituting $t=\sqrt{x}\,e^u$ gives $dt/t=du$ and $t+\frac{x}{t}=2\sqrt{x}\cosh u$, so the integral becomes $$\int_{-\infty}^{\infty} e^{-2\sqrt{x}\cosh u}\,du = 2\int_{0}^{\infty} e^{-2\sqrt{x}\cosh u}\,du = 2K_0(2\sqrt{x}),$$ using the standard integral representation $K_0(z)=\int_0^\infty e^{-z\cosh u}\,du$.
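A numerical check of the closed form (a sketch, assuming SciPy; `x = 2.5` is an arbitrary test value): compare direct quadrature of the integrand with $2K_0(2\sqrt{x})$.

```python
import numpy as np
from scipy import integrate, special

x = 2.5  # arbitrary test value

integral, _ = integrate.quad(lambda t: np.exp(-t - x / t) / t, 0, np.inf)
closed_form = 2 * special.kn(0, 2 * np.sqrt(x))

print(integral, closed_form)   # the two values agree to quadrature accuracy
```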
H: Proving $\prod _{k=j}^n \frac{p_{k+1}}{p_k} = \frac{p_{n+1}}{p_j}\!\!,\;\;1\le j\!<\!n$ Let $p_n$ denote the $n$th prime number. How could one prove that: $$\prod \limits_ {k=j}^n \frac{p_{k+1}}{p_k} = \frac{p_{n+1}}{p_j}\!\!,\;\;1\le j\!<\!n$$ Examples: $n=3812,\;j=81\qquad\implies\quad\large{\prod \limits _{k=81}^{3812} \,\frac{p_{k+1}}{p_k}= \frac{p_{3813}}{p_{81}} = \frac{35897}{419}}$ $n=20019,\;j=1\qquad\implies\quad\large{\prod \limits _{k=1}^{20019} \frac{p_{k+1}}{p_k} = \frac{p_{20020}}{p_{1}} = \frac{224993}{2}}$ $n=129181,\;j=35\quad\implies\quad\large{\prod \limits _{k=35}^{129181} \!\!\frac{p_{k+1}}{p_k}= \frac{p_{129182}}{p_{35}} = \frac{1715059}{149}}$ AI: There's nothing mysterious going on here. It's just a telescoping product. You can prove such finite products are telescoping by induction. The general case: $$\prod_{i=j}^n \frac{a_{i+1}}{a_i} = \frac{a_{n+1}}{a_j}$$ Proof: The proof will proceed by induction. The base case $n=j$ is obvious. Suppose the identity holds for $n$. Then $$\prod_{i=j}^{n+1} \frac{a_{i+1}}{a_i} = \prod_{i=j}^{n} \frac{a_{i+1}}{a_i} \cdot \frac{a_{n+2}}{a_{n+1}} = \frac{a_{n+1}}{a_j} \frac{a_{n+2}}{a_{n+1}} = \frac{a_{n+2}}{a_{j}}$$
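A quick exact check of the telescoping identity with small values (a sketch, assuming SymPy for the $k$-th prime function; `j = 5, n = 40` are arbitrary test values):

```python
from fractions import Fraction
from sympy import prime

def telescoped(j, n):
    """Compute prod_{k=j}^{n} p_{k+1}/p_k with exact rational arithmetic."""
    result = Fraction(1)
    for k in range(j, n + 1):
        result *= Fraction(int(prime(k + 1)), int(prime(k)))
    return result

j, n = 5, 40                                         # arbitrary test values
print(telescoped(j, n))                              # product of the ratios
print(Fraction(int(prime(n + 1)), int(prime(j))))    # p_{n+1}/p_j -- same value
```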
H: A Calculus Question on onto functions with a specified range. The following question was from a mock test of a competitive exam. Suppose $f:\mathbb{R} \to [-8,8]$ is an onto function and $f(x) = \dfrac{bx}{(a-3)x^3 + x^2 + 4}$ where $a,b \in \mathbb{R}^+$. If the set of all values of $m$ for which the equation $f(x) = mx$ has three distinct real solutions is the open interval $(p,q)$, then find the value of $a+b+p+q$. I tried the problem and obtained an answer. According to me, it is $43$. I will post the answer later. My solution was quite involved and long. I want to know if there is a quick solution to this problem. Thank you. AI: First off, the function $f(x)=\dfrac{bx}{(a-3)x^3 + x^2 + 4}$ has a cubic polynomial in the denominator (assuming $a\ne3$), and since we assume $a\in\Bbb{R}$, this equation must have a real root, which cannot be canceled by the $x$ in the numerator, because $x=0$ is not a root of the cubic (which instead evaluates to $4$ at $x=0$). In the vicinity of this root, $f(x)$ is unbounded above and below, so clearly it doesn't satisfy the criterion that $\operatorname{ran}f=[-8,8]$. Thus $a=3$ (in order to make the denominator have no real roots). The new function $f(x)=\dfrac{bx}{x^2 + 4}$ has no poles, and goes to $0$ at $x\to\pm\infty$, and it's self-evidently continuous, so it takes on a maximum and minimum somewhere, which can be found using the first-derivative test. $f'(x)=-\dfrac{b(x^2-4)}{(x^2+4)^2}=0$ when $x=\pm2$, and at these points $f(x)=\pm\frac b4$. Thus $\operatorname{ran}f=\big[\!-\frac b4,\frac b4\!\big]=[-8,8]$ implies $b=32$. The function is now $f(x)=\dfrac{32x}{x^2 + 4}$. Note that $f$ is odd. Thus $f(0)=0=m\cdot0$ is always a solution to $f(x)=mx$, and nontrivial solutions come in pairs $\pm x$, since $mx$ is also an odd function. Having discarded $x=0$, we can divide by $x$ and rearrange the equation to get $x^2 + 4=32/m$. This equation has a solution when $32/m > 4$ (noting that $32/m=4$ merely yields a triple root at $0$ which is stated to be inadmissible), which is equivalent to $0<m<8$. The case $m=0$ must be analyzed separately, but $\dfrac{32x}{x^2 + 4}=0$ only when $x=0$, so it is not in the case of interest. Thus $m\in(0,8)=(p,q)$ implies $p=0$ and $q=8$. Putting it all together, we have $a+b+p+q=3+32+0+8=43$, so it looks like you got the answer right.
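The two key numerical facts in this argument are easy to verify (a sketch, assuming NumPy; `m = 5.0` is an arbitrary value in the claimed interval): the maximum of $|32x/(x^2+4)|$ is $8$, attained at $x=\pm2$, and for $0<m<8$ the equation $f(x)=mx$ has the three distinct roots $0$ and $\pm\sqrt{32/m-4}$.

```python
import numpy as np

f = lambda x: 32 * x / (x ** 2 + 4)

xs = np.linspace(-100, 100, 2_000_001)
print(np.max(np.abs(f(xs))))               # ~8.0, attained near x = +/- 2

m = 5.0                                     # arbitrary value in (0, 8)
r = np.sqrt(32 / m - 4)
roots = [0.0, r, -r]
print([abs(f(x) - m * x) for x in roots])   # all ~0: three distinct solutions
```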
H: Unitary diagonalization and eigenspace dimensions I was trying to diagonalize the matrix: $\left(\begin{array}{ccc} 0 & 0 & i\\ 0 & i & 0\\ i & 0 & 0 \end{array}\right)$ I got two eigenvalues, $\lambda_{1}=i$ and $\lambda_{2}=-i$, and found the eigenspaces: $V_{\lambda_{1}=i}=\mathrm{span}\left\{ \left(\begin{array}{c} 1\\ 0\\ 1 \end{array}\right)\right\} $ $V_{\lambda_{2}=-i}=\mathrm{span}\left\{ \left(\begin{array}{c} 1\\ 0\\ -1 \end{array}\right),\left(\begin{array}{c} 0\\ 1\\ 0 \end{array}\right)\right\} $ Then I composed a unitary matrix after orthonormalising the eigenvectors: $U=\left(\begin{array}{ccc} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0\\ 0 & 0 & 1\\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \end{array}\right)\Rightarrow U^{*}=\left(\begin{array}{ccc} \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & 0 & -\frac{1}{\sqrt{2}}\\ 0 & 1 & 0 \end{array}\right) $ $U^{*}AU$ does produce a diagonal matrix. The thing is, I thought the main diagonal should have eigenvalues according to the dimensions $\dim V_{\lambda_{1}=i}=1$ and $\dim V_{\lambda_{2}=-i}=2$: $\left(\begin{array}{ccc} i & 0 & 0\\ 0 & -i & 0\\ 0 & 0 & -i \end{array}\right)$ But that's not the right diagonal matrix. Can anyone explain where my error is here? Thanks. AI: $\left(\begin{array}{c} 0\\ 1\\ 0 \end{array}\right)$ is an eigenvector for the eigenvalue $i$, not $-i$.
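This is easy to confirm numerically (a sketch, assuming NumPy): the vector $(0,1,0)^T$ is mapped to $i\,(0,1,0)^T$, so it belongs to $V_{\lambda=i}$, and with the column ordering of the $U$ above the diagonal of $U^{*}AU$ comes out as $\operatorname{diag}(i,-i,i)$.

```python
import numpy as np

A = np.array([[0, 0, 1j],
              [0, 1j, 0],
              [1j, 0, 0]])

v = np.array([0, 1, 0])
print(A @ v)                              # [0, 1j, 0] = i * v, so v is in V_i

s = 1 / np.sqrt(2)
U = np.array([[s,  s, 0],
              [0,  0, 1],
              [s, -s, 0]], dtype=complex)
print(np.round(U.conj().T @ A @ U, 10))   # diag(i, -i, i) in this column order
```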
H: maximal antichain I don't understand the definition of Jech (set theory) for "maximal antichain". Let $B$ be a Boolean algebra and $A$ a subalgebra of $B$. $W\subseteq A^+$ is a maximal antichain if $\sum W=1$ and $W$ is an antichain. As $A^+\subseteq B^+$ and $1\in A^+\cap B^+$, $W$ is also a maximal antichain in $B$ (is there a mistake here?). But it says that $W$ need not be maximal in $B$... why? Thanks AI: It is easy to check that $W$ is an antichain in $B$ as well. So we will have to find a way to violate $\sum W = 1$ in $B$. The preservation of finite suprema means we will need to reach for an infinite $W$. An example is below. Consider $B = \mathcal P(\Bbb N)$, and let $A$ be the subalgebra consisting of the finite subsets of $\Bbb N$ not containing zero together with their (cofinite) complements: $$A = \{S \subseteq \Bbb N: \text{$S$ finite and $0 \notin S$, or $S$ cofinite and $0 \in S$}\}$$ Then the set of singletons of positive integers $W = \{\{n\}: n > 0\}$ is clearly an antichain in $A$, and in $A$, $\sum W = \Bbb N$, since the only member of $A$ containing every positive integer is $\Bbb N$ itself ($\Bbb N_{>0}\notin A$). In $B$, however, $\sum W = \Bbb N_{>0}\neq 1$; indeed $\{0\}$ is disjoint from every member of $W$, so $W\cup\{\{0\}\}$ is a strictly larger antichain in $B$ and $W$ is not maximal there.
H: Eigenvalues of outer product matrix of two N-dimensional vectors I have a vector $\textbf{a}=(a_1, a_2, ....)$, and the outer product $M_{ij}=a_i a_j$. What are the eigenvalues of this matrix, and what can you say about the co-ordinate system in which $M$ is diagonal? I have proved that the only eigenvalue of the matrix is the norm of the vector squared, and that one of the eigenvectors is $\textbf{a}$ itself. $$Mu=a a^{T}u=\lambda u\\\ \implies a^{T}(aa^{T})u=a^{T}\lambda u \implies a^{T}a (a^{T}u)=\lambda a^{T}u \implies ||a||^2=\lambda$$ Also it is obvious that $a$ is the eigenvector of $aa^{T}$, which implies $M= \begin{pmatrix} ||a||^2 &0 &0 &..... \\0 & ||a||^2\\.\\.\\\\. \end{pmatrix}$ Is this correct? What are all the eigenvectors of the outer product? And what intuitively happens in the transformed basis when $M$ is diagonal? AI: Say $b \perp a$; then $a^{\text{T}}b=0$ and therefore $Mb=aa^{\text{T}}b=0$. So, just find $n-1$ independent vectors that are orthogonal to $a$ and you have $n-1$ new eigenvectors of $M$, all with eigenvalue $0$. So the spectrum of your matrix $M$ is $(\|a\|^2,0,\ldots,0)$. The transformed basis is just what happens when you realign your axes to correspond with the eigenvectors. That's when you see that $M$ acts as $\|a\|^2$ times the orthogonal projection onto the direction of $a$.
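A numerical check (a sketch, assuming NumPy; the vector is an arbitrary example): the eigenvalues of $aa^T$ are $\|a\|^2$ together with $n-1$ zeros, and $M/\|a\|^2$ is idempotent, i.e. the orthogonal projection onto $\operatorname{span}\{a\}$.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])        # arbitrary example vector
M = np.outer(a, a)

print(np.round(np.linalg.eigvalsh(M), 10))  # three zeros and ||a||^2 = 30
print(a @ a)                                # 30.0

P = M / (a @ a)                             # candidate projection onto span{a}
print(np.allclose(P @ P, P))                # True: P is idempotent
```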
H: Why is $ \|T(x) \|\le \| T\| \|x \|$? Let $X$ and $Y$ be normed linear spaces and let $T$ be a bounded linear operator from $X$ to $Y$. The norm of $T$ is defined as $$\|T \|=\sup\{\|T(x) \|\;:\;\|x \|\le 1\}. $$ From the definition of the norm, we can say that $\|T(x) \|\le \|T \|$ for all $x$ with $\|x\|\le 1$, and that $\|T \|\|x \|\le \|T \|$ as $\|x\|\le 1$. However I don't understand why we write $\|T(x)\|\le \|T\|\|x\|$ for all $x$. AI: Clearly $\Vert T(x) \Vert \le \Vert T \Vert$ if $\Vert x \Vert = 1$. Now, just remember that we're dealing with linear operators. So if $\Vert x \Vert = c$ with $c\neq 0$, then $\Vert \frac{x}{c} \Vert = 1$. So $$\left\Vert T(x) \right\Vert = c \left\Vert T\left(\frac{x}{c} \right) \right\Vert \le c \Vert T \Vert = \Vert T\Vert\,\Vert x\Vert.$$ (If $x = 0$, both sides are $0$ and the inequality is trivial.)
H: Hodge Star Operator and definition of divergence operator on riemannian manifold From my lecture notes, on a general Riemannian manifold $(M, g)$, the divergence operator $\operatorname{div}: T^\infty(TM)\rightarrow C^\infty(M)$ is defined as $\operatorname{div}X := \star^{-1}d(X\,\lrcorner\,\operatorname{Vol})$, where $\star$ is the Hodge star operator $C^\infty\rightarrow T^\infty(\bigwedge^nT^\ast M)$ defined as $f\rightarrow f\operatorname{Vol}_g$. Now I've read elsewhere that there is another characterization of the divergence operator: for $\vec{F}\in C^\infty(U)$ with an open subset $U\subset X$, the function $\operatorname{div}(\vec{F})\in C^\infty$ satisfies $$d(\star w(\vec{F})) = \operatorname{div}(\vec{F})dV_X|_U$$ The details of this definition can be found here. Now my question: how does the definition given in my lecture imply this characterizing property of the divergence operator? AI: If $ \overline{F} = (f_1,f_2,...,f_n) $, then you can identify it locally with the vector field $ X = \sum_i f_i\partial_i $, and via this identification the two definitions agree. There are several equivalent definitions of the divergence, such as $ \operatorname{div}(X)\,dV = d(i_X dV) $, or the trace of the covariant derivative $\nabla X$ with respect to the Levi-Civita connection, etc.
H: Prove $a^n \rightarrow 0$ as $n \rightarrow \infty$ for $\left|a\right| < 1$ without use of log properties $a^n \rightarrow 0$ as $n \rightarrow \infty$ for $\left|a\right| < 1 $ Hint: $u_{2n} = u_{n}^2$. I have no idea how to prove this; it looks obvious, but the proof turns out to be surprisingly hard... I am doing a real analysis course with a lot of proving, and I am stuck here. Any advice? Practice makes perfect? AI: Replacing $a$ by $|a|$, one can assume without loss of generality that $a$ is a nonnegative real number. If $a=0$, the result is direct. If $0\lt a\lt1$, the sequence defined by $u_n=a^n$ is decreasing and positive, hence it converges to some finite nonnegative limit $\ell$. Since $u_{n+1}=au_n$, $\ell=a\ell$. Since $a\ne1$, the only possible limit is $\ell=0$, QED. The hint that $u_{2n}=u_n^2$ can probably be used as follows, once one knows that the limit $\ell$ exists and is finite: $\ell=\ell^2$ hence $\ell=0$ or $1$ and, since $u_n\leqslant u_1=a\lt1$ for every $n\geqslant1$, $\ell\ne1$ hence $\ell=0$.
H: Minimizing an expression (over the integers) In the context of Hurwitz groups and manifolds, one comes across what wikipedia describes as a "remarkable" fact, that $1-\dfrac1 a -\dfrac 1 b - \dfrac1 c > 0$ has a minimal value of $1/42$ if $a,b,c \in \mathbb{Z}$ and $a<b<c$ (I'm not sure this last part is needed, but let's consider it anyway). I was wondering how one would go about solving a problem like this. I am totally at a loss when I see problems "over the integers": I never actually learned any techniques to approach such problems. So the question is: is there a suitable general method to minimize expressions over the integers? I of course feel that there is no standard way, but I'm interested in some examples as to how to proceed, maybe taking this expression as a base for the examples. AI: For this problem (taking $a,b,c$ to be positive integers) you just need some intelligent case analysis. Increasing the value of $a, b$ or $c$ makes the sum of reciprocals smaller and thus increases the value of the expression. So you need the smallest values you can get. $a=1$ makes the expression negative, and the best we could do with $a\ge 3$ would be to set $a=3, b=4, c=5$, giving us $\frac {13}{60}$. So we need to test $a=2$ to see if we can do better than this. The expression then becomes $\frac12-\frac1b-\frac1c$ with $2\lt b \lt c$. If $b\ge 4$ the best we can do is $b=4, c=5$, giving us a value of $\frac 1{20}$. So we are left testing $b=3$, when our expression becomes $\frac16-\frac1c$. The lowest value of $c$ for which this is positive is $c=7$, and this gives the minimum value $\frac1{42}$.
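The case analysis can also be confirmed by a brute-force search (a sketch in Python; the cutoff of 100 on $a,b,c$ is an arbitrary search bound, which is harmless here because enlarging $c$ only pushes the expression up towards $1-\frac1a-\frac1b$):

```python
from fractions import Fraction
from itertools import combinations

best = None
for a, b, c in combinations(range(1, 101), 3):   # ensures a < b < c, cutoff 100
    value = 1 - Fraction(1, a) - Fraction(1, b) - Fraction(1, c)
    if value > 0 and (best is None or value < best[0]):
        best = (value, (a, b, c))

print(best)   # (Fraction(1, 42), (2, 3, 7))
```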
H: Derivative of order 16 - is there a method to do so? I have the following exercise: Find the $16^{\text{th}}$ derivative of $y$, (i.e. $y^{(16)}$), for $y = \sin x$. Is there any method to do so, or do I simply have to differentiate the function $16$ times? AI: Hint: $y'=\cos x$ $y''= -\sin x=-y$ $y'''=- \cos x=-y'$ $y''''=\sin x=y$ Do you notice the recursion? $y^{(4k)}= \dots $?
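A symbolic check of the period-4 pattern (a sketch, assuming SymPy):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.diff(sp.sin(x), x, 4))    # sin(x): the derivatives repeat with period 4
print(sp.diff(sp.sin(x), x, 16))   # sin(x), since 16 is a multiple of 4
```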