H: Any integer can be written as $x^2+4y^2$ If $n$ is a positive integer with $(n,8)=1$ and $-4$ is a square $\bmod$ $n$, then $n$ can be written in this form: $n=x^2+4y^2$. I was using the fact that there are integers $x, y$ satisfying $x^2+4y^2=kn$ where $0<k<4$, so let's consider $k=1,2,3$. If $k=1$ the result follows. If $k=2$, then $x^2+4y^2=2n$, so $x^2\equiv 2n \pmod 4$. As $(n,8)=1$, $n$ is odd, say $n=2t+1$; thus $x^2\equiv 2n=4t+2\equiv 2\pmod 4$, but this cannot be true because a square is always $0$ or $1$ $\bmod$ $4$. Now when $k=3$, I haven't found anything. Can anybody help me? Every idea is appreciated. Thanks AI: Well, the most elementary way is to use the fact that if all prime divisors of $n$ are of the form $4k+1$ then one can represent $n=a^2+b^2,$ which is a direct consequence of Fermat's theorem on two squares. Since $n$ is odd, one has that either $a$ or $b$ is even. Hence, $n=a^2+4b_1^2.$ So we are left to show that all prime divisors of $n$ are of the form $4k+1.$ Indeed, since there is an $x$ such that $x^2\equiv-4\pmod p,$ we can raise both sides to the power $(p-1)/2$ and apply Fermat's Little Theorem to get $(-1)^{(p-1)/2}\equiv 1\pmod p.$ Thus, $(p-1)/2$ has to be even and the result follows.
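A brute-force sanity check of the statement (a Python sketch; the search bound is arbitrary):

```python
from math import gcd, isqrt

def minus_four_is_square(n):
    # is -4 a square mod n?
    return any((x * x + 4) % n == 0 for x in range(n))

def representable(n):
    # does n = x^2 + 4y^2 have an integer solution?
    for x in range(isqrt(n) + 1):
        r = n - x * x
        if r % 4 == 0 and isqrt(r // 4) ** 2 == r // 4:
            return True
    return False

for n in range(1, 2000):
    if gcd(n, 8) == 1 and minus_four_is_square(n):
        assert representable(n), n
print("checked up to 2000")
```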
H: What form of Leibniz rule is this (principal fiber bundle)? Let $P(M,G)$ be a principal fiber bundle. Let $\sigma : U \subseteq M \rightarrow P$ be a smooth local section and $f : U \rightarrow G$ a smooth function. For $ a \in G$, $R_a : P \rightarrow P$ is the map $u \mapsto u \cdot a$. For $u \in P$, $L_u : G \to P$ is the map $g \mapsto u \cdot g$. Now let $\sigma' = \sigma \cdot f$, $x \in U$ and $X \in T_x M$. Why is $$ \sigma'_{*} (X) = (R_{f(x)})_{*}(\sigma_{*}(X)) + (L_{\sigma(x)})_{*}(f_{*}(X)) $$ ? My book says it's because of the Leibniz rule, but I don't know of any Leibniz rule that looks like that. AI: Let $\gamma:(-\varepsilon,\varepsilon)\to M$ be a smooth curve generating $X$, i.e. $\dot{\gamma}(0)=X$. We then write $\sigma(t)$, $f(t)$ for $\sigma(\gamma(t))$, $f(\gamma(t))$ so that by definition we have $$\sigma'_*(X)=\left.\frac{d}{dt}\right|_{t=0}\mu(\sigma(t),f(t)),$$ where $\mu:P\times G\to P$ denotes the $G$-action on $P$. Using the canonical isomorphism $T_{(\sigma(x),f(x))}(P\times G)\to T_{\sigma(x)}P\oplus T_{f(x)}G$, we compute \begin{align}\left.\frac{d}{dt}\right|_{t=0}\mu(\sigma(t),f(t))&=\mu_*(\dot{\sigma}(0),\dot{f}(0))=\mu_*(\dot{\sigma}(0),0)+\mu_*(0,\dot{f}(0))\\&=\left.\frac{d}{dt}\right|_{t=0}\mu(\sigma(t),f(0))+\left.\frac{d}{dt}\right|_{t=0}\mu(\sigma(0),f(t))\\&=\left.\frac{d}{dt}\right|_{t=0}R_{f(x)}(\sigma(t))+\left.\frac{d}{dt}\right|_{t=0}L_{\sigma(x)}(f(t))\\&= (R_{f(x)})_{*}(\sigma_{*}(X)) + (L_{\sigma(x)})_{*}(f_{*}(X))\end{align}
H: Weak topology generated by identity maps Let $(X, \tau_1)$ and $(X,\tau_2)$ be two topological spaces having the same underlying set $X$. Let $\tau$ be the smallest topology on $X$ such that the identity maps $I_1 : (X,\tau) \rightarrow (X,\tau_1)$ and $I_2 : (X,\tau) \rightarrow (X,\tau_2)$ are continuous. Then if both $(X,\tau_1)$ and $(X,\tau_2)$ are separable topological spaces, does it imply that $(X,\tau)$ will be separable? AI: It does not; here’s a counterexample. Let $Y=\{y_k:k\in\Bbb N\}$ and $Z=\{z_k:k\in\Bbb N\}$ be disjoint countably infinite sets. Two subsets of $\Bbb N$ are said to be almost disjoint if their intersection is finite; let $\mathscr{D}$ be an uncountable family of almost disjoint infinite subsets of $\Bbb N$. (Two constructions of such a family can be found in this answer.) Let $X=Y\cup Z\cup\mathscr{D}$. Let $$\mathscr{I}=\big\{\{y_k\}:k\in\Bbb N\big\}\cup\big\{\{z_k\}:k\in\Bbb N\big\}\;.$$ For each $D\in\mathscr{D}$ let $D_Y=\{y_k:k\in D\}$ and $D_Z=\{z_k:k\in D\}$, and let $$\mathscr{B}_1(D)=\big\{\{D\}\cup(D_Y\setminus F):F\subseteq Y\text{ and }F\text{ is finite}\big\}$$ and $$\mathscr{B}_2(D)=\big\{\{D\}\cup(D_Z\setminus F):F\subseteq Z\text{ and }F\text{ is finite}\big\}\;.$$ Let $$\mathscr{B}_1=\mathscr{I}\cup\bigcup_{D\in\mathscr{D}}\mathscr{B}_1(D)\qquad\text{and}\qquad\mathscr{B}_2=\mathscr{I}\cup\bigcup_{D\in\mathscr{D}}\mathscr{B}_2(D)\;;$$ $\mathscr{B}_1$ and $\mathscr{B}_2$ are bases for topologies $\tau_1$ and $\tau_2$, respectively, on $X$. Both topologies are separable: $Y\cup Z$ is dense in both. The coarsest topology $\tau$ on $X$ making the identity maps from $\langle X,\tau\rangle$ to $\langle X,\tau_1\rangle$ and $\langle X,\tau_2\rangle$ continuous is the join of $\tau_1$ and $\tau_2$, i.e., the topology generated by the subbase $\tau_1\cup\tau_2$, which has as a base the sets of the form $U\cap V$ with $U\in\tau_1$ and $V\in\tau_2$. Clearly $\mathscr{I}\subseteq\tau$. Let $D\in\mathscr{D}$ be arbitrary, and let $U=\{D\}\cup Y$ and $V=\{D\}\cup Z$; then $U\in\tau_1$ and $V\in\tau_2$, so $\{D\}=U\cap V\in\tau$. Thus, $\tau$ is the discrete topology on the uncountable space $X$, and $\langle X,\tau\rangle$ is not separable.
H: How can $\left({1\over1}-{1\over2}\right)+\left({1\over3}-{1\over4}\right)+\cdots+\left({1\over2n-1}-{1\over2n}\right)+\cdots$ equal $0$? How can $\left({1\over1}-{1\over2}\right)+\left({1\over3}-{1\over4}\right)+\cdots+\left({1\over2n-1}-{1\over2n}\right)+\cdots$ equal $0$? Let $$\begin{align*}x &= \frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} + \cdots\\ y &= \frac{1}{1} + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{2n-1} + \cdots\\ z &= \frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \cdots + \frac{1}{2n} + \cdots \end{align*}$$ so we have $$x = y + z.$$ However, $x = 2\cdot z$, so $y$ = $z$ or $$\frac{1}{1} + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{2n-1} + \cdots = \frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \cdots + \frac{1}{2n} + \cdots$$ This looks ok if I interpret it as $$\frac{1}{1} = \left (\frac{1}{2} - \frac{1}{3} \right ) + \left (\frac{1}{4} - \frac{1}{5} \right ) + \left (\frac{1}{6} - \frac{1}{7} \right ) + \cdots + \left (\frac{1}{2n} - \frac{1}{2n+1} \right ) + \cdots$$ However, it's a bit weird if I write it as $$\left (\frac{1}{1} - \frac{1}{2} \right ) + \left (\frac{1}{3} - \frac{1}{4} \right ) + \left (\frac{1}{5} - \frac{1}{6} \right ) + \cdots + \left (\frac{1}{2n-1} - \frac{1}{2n} \right ) + \cdots = 0.$$ How can a sum of positive numbers equal $0$? AI: Almost everything in your proof works fine until you write "this looks ok if I interpret it as..." Until then, you are manipulating infinite sums of series with positive terms; these are extended nonnegative real numbers (numbers $x$ such that $0\leqslant x\leqslant+\infty$, if you like), hence adding them and equating them is perfectly legal. The trouble begins when you subtract them, since there is no subtraction on the set of extended nonnegative real numbers. Unsurprisingly, you soon must deal with $(+\infty)-(+\infty)$ differences, and chaos ensues. A less sophisticated example, flawed quite similarly, is to start with the correct identity $$ 1+1+1+\cdots=\underline{\mathbf 1}+(\color{red}{1}+\color{blue}{1}+\color{green}{1}+\cdots), $$ and to deduce from it that $$ 0=(1-\color{red}{1})+(1-\color{blue}{1})+(1-\color{green}{1})+\cdots=\underline{\mathbf 1}. $$
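A numeric illustration that the grouped series converges to $\ln 2$, not $0$ (a Python sketch; the number of terms is arbitrary):

```python
import math

# partial sums of (1/1 - 1/2) + (1/3 - 1/4) + ... approach ln 2
s = sum(1.0 / (2 * n - 1) - 1.0 / (2 * n) for n in range(1, 10**6 + 1))
print(s, math.log(2))  # 0.6931469..., 0.6931471...
```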
H: proving that this function does not define a norm on $\mathbb R^2$ due to convexity This problem uses the previous parts to conclude something, so I write all the parts. First I have to prove that every norm on $\mathbb R^n$ is a convex function; I did it, it only requires the triangle inequality. Then I have to prove that the set $$ S = \left\{ {\left( {x,y} \right) \in \mathbb R^2 :\sqrt {\left| x \right|} + \sqrt {\left| y \right|} < 1} \right\} $$ is not convex. I did it because $(0,\frac{9}{16}),(\frac{9}{16},0)\in S$ but $\frac{1}{2}(0,\frac{9}{16})+\frac{1}{2}(\frac{9}{16},0)=(\frac{9}{32},\frac{9}{32})$ is not, since $\sqrt{9/32}+\sqrt{9/32}=\frac{3}{2\sqrt 2}>1$. And then the book says that using this I have to conclude that the function $$ \left| {\left| {\left( {x,y} \right)} \right|} \right| = \left( {\sqrt {\left| x \right|} + \sqrt {\left| y \right|} } \right)^2 $$ is not a norm on $\mathbb R^2$. But I don't know how to use that to prove the final part. Please help me! AI: If $f : \mathbb R^n \to \mathbb R$ is a convex function, then $S = \left\{ x \in \mathbb R^n \mid f(x) < 1 \right\}$ is convex, and a norm is convex. So if $\|-\|$ were a norm, $S = \{ (x,y) \in \mathbb{R}^2 \mid \|(x,y)\| < 1 \}$ would be convex, which isn't true, so $\|-\|$ is not a norm.
H: An identity involving the Beta function I'm trying to show that $$ \int _0^1 \frac{x^{a-1}(1-x)^{b-1}}{(x+c)^{a+b}}dx = \frac{B(a,b)}{(1+c)^ac^b}$$ where $$B(a,b) := \int _0^1 x^{a-1}(1-x)^{b-1}dx $$ is the "Beta function". I am supposed to use a substitution but I'm pretty much stuck. I am familiar with the basic properties of the Beta function, its relation to the gamma function etc. Any hints or advice you care to offer would be super cool. AI: Let $\displaystyle \int _0^1 \frac{x^{a-1}(1-x)^{b-1}}{(x+c)^{a+b}}dx = I$ We have, $(1+c)^ac^bI=\displaystyle \int _0^1 \frac{(1+c)^ac^bx^{a-1}(1-x)^{b-1}}{(x+c)^{a+b}}dx $ $=\displaystyle \int _0^1 \frac{(1+c)c((1+c)x)^{a-1}(c(1-x))^{b-1}}{(x+c)^{a+b}}dx $ $=\displaystyle \int _0^1 \left(\frac{(1+c)c}{(x+c)^2}\right)\left(\frac{(1+c)x}{(x+c)}\right)^{a-1}\left(\frac{c(1-x)}{(x+c)}\right)^{b-1}dx$ Let $\displaystyle y=\left(\frac{c(1-x)}{(x+c)}\right)$ and we have $\displaystyle dy=-\left(\frac{(1+c)c}{(x+c)^2}\right)dx$ Then we have the above $=-\displaystyle\int _1^0y^{b-1}(1-y)^{a-1}dy=\int_0^1y^{b-1}(1-y)^{a-1}dy=B(b,a)=B(a,b)$ So ultimately we have, $\displaystyle(1+c)^ac^bI=B(a,b)\Rightarrow I=\frac{B(a,b)}{(1+c)^ac^b}$
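A numeric sanity check of the identity (a sketch assuming SciPy is available; the parameter values are arbitrary):

```python
from scipy.integrate import quad
from scipy.special import beta

a, b, c = 2.5, 1.7, 0.9  # arbitrary positive test values
lhs, _ = quad(lambda x: x**(a - 1) * (1 - x)**(b - 1) / (x + c)**(a + b), 0, 1)
rhs = beta(a, b) / ((1 + c)**a * c**b)
print(lhs, rhs)  # agree to within quadrature error
```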
H: uniform convergence of $\sum\limits_{n=0}^{\infty} \frac {(-1)^nx^{2n+1}}{(2n+1)!}$ Given the series $\displaystyle \sum_{n=0}^{\infty} \frac {(-1)^nx^{2n+1}}{(2n+1)!}$, does the series converge uniformly on $\mathbb R$? I found that the radius is $\infty$ and I know that for all $c\in [0,\infty)$ the sum converges uniformly on $[-c,c]$. Does it mean the sum converges uniformly on all of $\mathbb R$? AI: No. If $f_k \to f$ uniformly, then $(f_k)$ must be uniformly Cauchy, i.e. for every $\varepsilon > 0$ there is a $K$ such that $\sup|f_j-f_k| < \varepsilon$ for every $j,k \ge K$. In your case, $$f_k(x) = \sum_{n=0}^k \dfrac{(-1)^n x^{2n+1}}{(2n+1)!}$$ and $$ \sup_{x\in\mathbb{R}} |f_{k+1}-f_k| = \sup_{x\in \mathbb{R}} \frac{|x|^{2k+3}}{(2k+3)!} = \infty $$ for every $k$. (In fact, a similar reasoning shows that a sequence of polynomials can never converge uniformly on $\mathbb{R}$ to something other than a polynomial.)
H: Uncountably many equivalent Cauchy sequences? RTP: There exist uncountably many equivalent Cauchy sequences of rationals. I am trying to solve the above question, and my understanding is that $\Bbb R$ is a set of equivalence classes of Cauchy sequences of rationals. And two sequences are equivalent if they both have the same limit. So, having prior knowledge (not really knowing the proof) that there are uncountably many real numbers, I want to somehow connect this idea with this problem. Can someone help me out? I am a beginner at analysis, so it would really help if you could dumb it down. AI: One can show, for example, that there are uncountably many sequences of rationals tending to zero. It is more convenient to show that there are uncountably many sequences of rationals $(r_n)$ tending to infinity (just invert all the terms). Now take the sequence $(x^n)$, where $x\in\mathbb R$ and $x>1$. This is not a sequence of rationals, but it tends to infinity. Now convert it to a sequence of rationals by setting $r_n = \lfloor x^n \rfloor$. The new sequence is a sequence of integers (in particular, rationals). It tends to infinity, and it is easy to show that two different $x$'s will give two different sequences tending to infinity. This yields uncountably many such sequences because there are uncountably many real $x$.
H: How does one prove that local diffeomorphism is submersion? How does one prove that local diffeomorphism is submersion? For a manifold, what does it being disconnected mean? I get what "disconnected" means for a graph, but not for a manifold. AI: If $f:M\to N$ is a local diffeomorphism, then for any point $p\in M$, there's a neighborhood $U$ of $p$ in $M$ such that $f|_U: U\to f(U)$ is a diffeomorphism. Show that $$\mathrm{d}(f|_U)_p:T_pM\to T_{f(p)}N,\qquad \mathrm{d}f_p:T_pM\to T_{f(p)}N$$ are the same map, and then use that an isomorphism of vector spaces is necessarily surjective. It means that, as a topological space, it is not connected. See the relevant Wikipedia article.
H: Number of days it took to climb the mountain (BdMO 2012 National Primary/Junior question) From the Bangladesh Mathematical Olympiad 2012 National Secondary (Question 7, or ৭). When Tanvir climbed the Tajingdong mountain, on his way to the top he saw it was raining $11$ times. At Tajingdong, on a rainy day, it rains either in the morning or in the afternoon; but it never rains twice in the same day. On his way, Tanvir spent $16$ mornings and $13$ afternoons without rain. How many days did it take for Tanvir to climb the Tajingdong mountain in total? I tried to solve it using sets but it has not worked out too well. I asked people who did it but most of them gave different answers, often having a difference of $1$ or $2$. Any help will be appreciated. AI: I'll approach it up front: every day is divided into $2$ sections, the morning and the afternoon, and each one is either rainy or rain-free. Since it can rain only once a day, rain never fills a whole day, so counting half-days gives: rain-free mornings $+$ rain-free afternoons $+$ rainy half-days $= 2\times$ the number of days. Thus $16 + 13 + 11 = 2\times$ days, so days $= 20$. And it fits: $9$ days with no rain at all, $7$ days with a rain-free morning (and afternoon rain), $4$ days with a rain-free afternoon (and morning rain).
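The same bookkeeping can be brute-forced (a short Python sketch):

```python
# d = days, rm = rainy mornings, ra = rainy afternoons:
# d - rm = 16 rain-free mornings, d - ra = 13 rain-free afternoons, rm + ra = 11
for d in range(1, 60):
    rm, ra = d - 16, d - 13
    if rm >= 0 and ra >= 0 and rm + ra == 11:
        print(d, rm, ra)  # -> 20 4 7
```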
H: What is the difference between isomorphism and homeomorphism? I have some questions understanding isomorphism. Wikipedia said that isomorphism is bijective homeomorphism I know that $F$ is a homeomorphism if $F$ and $F^{-1}$ are continuous. So my question is: If $F$ and its inverse are continuous, can it not be bijective? Any example? I think if $F$ and its inverse are both continuous, they ought to be bijective, is that right? AI: When talking about functions between sets, there is no such thing as an "inverse" in the first place if the map is not bijective. Take a look at the relevant Wikipedia page. Moreover, when you refer to isomorphism is bijective homeomorphism I think you're thinking of isomorphism is bijective homomorphism which, by the way, happens to be true in some nice cases (groups, rings, etc.), but is absolutely not the definition of "isomorphism" in general, and in particular, is not true for topological spaces - there are functions $f:X\to Y$ that are bijective and continuous, but are not homeomorphisms.
H: Find two closed subsets of real numbers such that $d(A,B)=0$ but $A\cap B=\varnothing$ Here is my problem: Find two closed subsets of real numbers such that $d(A,B)=0$ but $A\cap B=\varnothing$. I tried to use the definition of the distance between subsets on sets like intervals, but I couldn't find suitable closed sets. Any hint? Thank you. AI: Hint: You cannot do this for bounded closed sets. Think about $\Bbb N$ and a set $\{a_n\mid n\in\Bbb N\}$ such that $\lim\frac n{a_n}=1$.
H: Product and Quotient rule for Fréchet derivatives Does anyone know whether the product/quotient rule for Fréchet derivatives still holds? For example, consider the evaluation operator: $$\rho_x : (C[a,b],\|\cdot\|_\infty) \rightarrow (\mathbb{R},|\cdot|)$$ where $\|\cdot\|_\infty$ is the sup-norm and $|\cdot|$ the Euclidean norm. Then I may define an operator: $T$ for $f\in C[a,b]$ acting as $$T(f) = \frac{\rho_x (f)}{\rho_y(f)} = \frac{f(x)}{f(y)}$$ (Assume the denominator is not zero). Knowing that the Fréchet derivative of $\rho_x$ is $\rho_x$ itself at any point $f\in C[a,b]$, what can we say about the Fréchet derivative of $T$? Guess: $DT(f)(\cdot) = \frac{\rho_y(f)\rho_x(\cdot) - \rho_x(f)\rho_y(\cdot) }{\rho_y(f)^2} \in L(C[a,b],\mathbb{R})$ Thanks for your answers! AI: This is as simple as proving the quotient rule for real-valued functions. Assume $f(y) > 0$. \begin{align*}T(f+g) - T(f) - DT(f)\,g &= \frac{f(x) + g(x)}{f(y) + g(y)} - \frac{f(x)}{f(y)} - \frac{f(y) \, g(x) - f(x) \, g(y)}{f(y)^2}\\ &=\frac{f(x) \, f(y)^2 + g(x) \, f(y)^2 - f(x) \, f(y)^2 - f(x) \, f(y) \, g(y) - f(y)^2 \, g(x) - f(y) \, g(x) \, g(y) + f(x) \, g(y) \, f(y) + f(x) \, g(y)^2}{f(y)^2 \, (f(y) + g(y))}\\ &=\frac{- f(y) \, g(x) \, g(y) + f(x) \, g(y)^2}{f(y)^2 \, (f(y) + g(y))} \\ &= \mathcal{o}( \lVert g \rVert_\infty) \end{align*}
H: Quick question about proof of Levy's theorem The following question arises from the proof of Levy's theorem in Richard Bass - Stochastic processes (can be seen via Google books, it's on page 77). So we have $(M_t)_{t\geq 0}$ a continuous local martingale, $M_0=0$, adapted to $\{\mathcal{F}_t\}$ s.t. $<M>_t = t$. We let $t_0>0$ and define $N_t=M_{t_0+t}-M_{t_0}$, and now it's routine (Bass claims) to show $<N>_t=t$. Had the quadratic variation been linear - then no problem, but my objection is that we only know the mixed variation is a bilinear form, so that $$<N>_t=<M_{t_0+t}-M_{t_0},M_{t_0+t}-M_{t_0}>=<M_{t_0+t}>+<M_{t_0}>-2<M_{t_0},M_{t_0+t}>$$ which I can't see gives what is sought. On the other hand, for square integrable martingales we have that $E[(M_S-M_T)^2|\mathcal{F}_S]=E[M_S^2-M_T^2|\mathcal{F}_S]$ (his Proposition 9.6, p. 56), so in this case the quadratic variation would be linear (one can just check it works) - but how is this not in conflict with the above bilinearity? AI: By definition, we just need to check that $(N_t^2-t)_{t\geq 0}$ is a local martingale with respect to the filtration $(\mathcal{F}_t')_{t\geq 0}$, where $\mathcal{F}_t'=\mathcal{F}_{t+t_0}$ using that $(M_t^2-t)_{t\geq 0}$ and $(M_t)_{t\geq 0}$ are local martingales with respect to $(\mathcal{F}_t)_{t\geq 0}$. A localization argument shows that we can treat the problem for true martingales, and thus have to show that $(N_t^2-t)$ is a true $(\mathcal{F}_t')_{t\geq 0}$-martingale. To that end, let $0\leq s\leq t$ be given and then $$ \begin{align} {\rm E}[N_t^2-t\mid\mathcal{F}_s']&={\rm E}[M_{t+t_0}^2+M_{t_0}^2-2M_{t+t_0}M_{t_0}-t\mid\mathcal{F}_{s+t_0}]\\ &={\rm E}[M_{t+t_0}^2-(t+t_0)+M_{t_0}^2-2M_{t+t_0}M_{t_0}+t_0\mid\mathcal{F}_{s+t_0}]\\ &={\rm E}[M_{t+t_0}^2-(t+t_0)\mid\mathcal{F}_{s+t_0}]+M_{t_0}^2-2M_{t_0}{\rm E}[M_{t+t_0}\mid\mathcal{F}_{s+t_0}]+t_0\\ &=M_{s+t_0}^2-(s+t_0)+M_{t_0}^2-2M_{t_0}M_{s+t_0}+t_0\\ &=(M_{s+t_0}-M_{t_0})^2-s\\ &=N_s^2-s. \end{align} $$
H: Question about homomorphisms $f_{!}, f^{!}$. Let $f: A \to B$ be a finite ring homomorphism and $N$ a $B$-module. $N$ can be considered as an $A$-module if we define $A \times N \to N$, $(a, n) \mapsto f(a)n$. Therefore we have a map $f_{!}: K(B) \to K(A)$. Let $M$ be a $B$ module. $B$ can be considered as an $A$ module if we define $A \times B \to B$, $(a, b) \mapsto f(a)b$. Let $M_{B}=B\otimes_{A} M$. Then $M_{B}$ is a $B$-module. The action of $B$ on $M_{B}$ is given by $(b', b\otimes m) \mapsto (b'b)\otimes m$. Therefore we have a map $f^{!}: K_{1}(A) \to K_1(B)$, where $K_1(A)$ is the Grothendieck group obtained from the set of all isomorphism classes of finitely generated flat $A$-modules. On Page 88 of the book Introduction to commutative algebra by Atiyah and Macdonald, Exercise 27(v), it is said that $f_{!}(f^{!}(x)y)=xf_{!}(y)$ for $x \in K_1(A), y \in K(B)$. How to prove this result? I think that $f_{!}(f^{!}(x)y)=f_{!}((B\otimes_{A} x)y)$. But why $f_{!}((B\otimes_{A} x)y) = xf_{!}(y)$? Thank you very much. AI: It's clearly enough to prove that $\left(B\otimes_A M\right) \otimes_B N \cong M \otimes_A N$ as $A$-modules for every $A$-module $M$ and every $B$-module $N$. Well, notice that $N \cong B \otimes_B N$ as $A$-module. Thus, $M \otimes_A N \cong M \otimes_A \left(B \otimes_B N\right) \cong \left(M \otimes_A B\right) \otimes_B N \cong \left(B \otimes_A M\right) \otimes_B N$. If I am not doing something wrong, this should be enough; no naturality needs to be checked (although the isomorphisms I have sketched are natural), and no statements about preservation of exact sequences need to be proven (those were already incorporated in the definition of the $K_1\left(A\right)$-structure on $K\left(B\right)$). However, I am a bit surprised about part ii) of the problem. How do we know that $0\cdot y = 0$ for every $y\in K\left(A\right)$ ? Does tensoring an exact sequence of flat $A$-modules with an arbitrary $A$-module always give an exact sequence?
H: Question about symmetric even polynomials This might be an easy question but here goes. I am looking for a polynomial $P\in \mathbb{Q}[x,y,z]$ such that $P$ is symmetric and homogenous. $P$ is even in all three variables, i.e. $P\in \mathbb{Q}[x^2,y^2,z^2]$. $P$ is divisible by $x+y+z$. In two variables the equivalent of these conditions would be met by $(x^2-y^2)^2$, but I am having trouble constructing one in three variables. Is there some reason why such a polynomial might not exist in three variables? AI: Here's one: $$2x^2y^2 + 2y^2z^2 + 2x^2z^2 - x^4 - y^4 - z^4$$ $$= (x + y + z)(x + y - z)(x + z - y)(y + z - x)$$ Moreover, all such polynomials must be divisible by the polynomial above. To find this, simply note that if $x + y + z$ is a factor, then plugging in $z = -x - y$ results in $0$. But since the polynomial is in $z^2$, plugging in $z = x + y$ gives zero as well. Thus $(x + y - z)$ is a factor too. By symmetry, all the other factors are necessary as well. You may also recognize this as the square of Heron's formula for the area of a triangle with side lengths $x$, $y$, and $z$ (multiplied by a factor of 16).
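The factorization is easy to verify symbolically (a SymPy sketch):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
P = 2*x**2*y**2 + 2*y**2*z**2 + 2*x**2*z**2 - x**4 - y**4 - z**4
F = (x + y + z)*(x + y - z)*(x + z - y)*(y + z - x)
print(sp.expand(P - F))  # 0
```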
H: variance of two independent random variables $X$ is normal with $E[X]=-1$, $Var(X)=4$; $Y$ is exponential with $E[Y]=1$; they are independent. If $T=pXY+q$ with $p, q \in \mathbb R$, what is $Var(T)$? I get $E[T]=q-p$ and $Var(T)=p^2(E[X^2]E[Y^2]-(E[X])^2(E[Y])^2)=$? AI: Hint: For any random variable $Z$: $$ {\rm E}[Z^2]=\mathrm{Var}(Z)+{\rm E}[Z]^2. $$ Use this to find ${\rm E}[X^2]$ and ${\rm E}[Y^2]$.
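Following the hint: ${\rm E}[X^2]=4+(-1)^2=5$ and ${\rm E}[Y^2]=1+1^2=2$, so $Var(T)=p^2(5\cdot 2-1\cdot 1)=9p^2$. A Monte Carlo spot-check (a NumPy sketch; the sample size, seed, and values of $p,q$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(-1, 2, 10**6)     # E[X] = -1, Var(X) = 4, so E[X^2] = 5
Y = rng.exponential(1.0, 10**6)  # E[Y] = 1, Var(Y) = 1, so E[Y^2] = 2
p, q = 3.0, 7.0
T = p * X * Y + q
print(T.var(), p**2 * (5 * 2 - 1))  # both approximately 81
```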
H: Relation between stirling numbers Is there a relation between $$ \genfrac\{\}{0pt}{}{n}{n-2} $$ and $$ \genfrac\{\}{0pt}{}{n-1}{n-3} $$ Like the first one can be obtained from the second one by adding something? AI: The Stirling numbers of the second kind satisfy a somewhat Pascal-like recurrence relation: $${{n+1}\brace k}=k{n\brace k}+{n\brace{k-1}}\;.$$ In particular, then, $${n\brace{n-2}}=(n-2){{n-1}\brace{n-2}}+{{n-1}\brace{n-3}}\;.\tag{1}$$ It’s easy to check that $${{n-1}\brace{n-2}}=\binom{n-1}2\;,$$ so $(1)$ becomes $${n\brace{n-2}}=(n-2)\binom{n-1}2+{{n-1}\brace{n-3}}\;.$$
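A quick computational check of the identity (a Python sketch implementing the recurrence above):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def S(n, k):
    # Stirling numbers of the second kind via the recurrence above
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S(n - 1, k) + S(n - 1, k - 1)

for n in range(4, 15):
    assert S(n, n - 2) == (n - 2) * comb(n - 1, 2) + S(n - 1, n - 3)
print("ok")
```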
H: probability question, is this true? So my teacher was solving this exercise and she wrote $D(2X+6) = 2^2\times D(X)$ where $D$ is the statistical dispersion. But wasn't she supposed to write $D(2X+6)=2^2\times D(X)+6$? AI: If by Dispersion you specifically mean Variance, notice that: $$Var[aX+b] = E[(aX+b)^2]-E^2[aX+b]$$ $$=a^2E[X^2]+2abE[X]+b^2-\left(a^2E^2[X]+2abE[X]+b^2\right)$$ $$=a^2(E[X^2]-E^2[X])=a^2Var[X]$$
H: Spectrum of the sum of matrices I have an $n$ by $n$ matrix $A$ such that: $$A = J_n + (k-1)I_n$$ $I_n$ being the identity matrix and $J_n$ the all-$1$ matrix. The spectra of those matrices are as follows: $$Spec(J_n) = (k^2 -k +1)^1(0)^{n-1}$$ and $$Spec((k-1)I_n) = (k-1)^n$$ I don't understand why I then have $$Spec(A) = (k^2)^1(k-1)^{n-1}$$ Is there a formula for $Spec(A+B)$ that I am missing? AI: Note that $\dim\ker J_n=n-1$, hence $0$ is an eigenvalue with multiplicity $n-1$, and the last eigenvalue is $\mathrm{tr}(J_n)=n$, where $\mathrm{tr}(A)$ denotes the trace of the matrix $A$, so $$\mathrm{spectrum}(J_n)=\{0,\ldots,0,n\}$$ and obviously we have $$\mathrm{spectrum}((k-1)I_n)=\{k-1,\ldots,k-1\}$$ so we can conclude that $$\mathrm{spectrum}(J_n+(k-1)I_n)=\{k-1,\ldots,k-1,k+n-1\}$$ Remark If $A$ and $B$ are two matrices which commute, i.e. $AB=BA$, then they are simultaneously triangularizable over $\mathbb C$ (or, when possible, simultaneously diagonalizable), and in this case we have $$\mathrm{spectrum}(A+B)=\{\lambda_i+\mu_i,\quad i=1,\ldots,n\}$$ where $\lambda_i$ and $\mu_i$ are the eigenvalues of $A$ and $B$ respectively, suitably ordered.
H: Under what circumstances does this procedure terminate? This earlier question (essentially) asked why the following loop will terminate. (This is Java code, so assume you're working with signed, 32-bit integers:) final int initial = 2; final int multiplier = 12381923; for (int i = initial; i != 0; i += i * multiplier) ; Ziyao Wei's excellent answer explained why this process terminates for this particular combination of an initial value and a multiplier. My question is this - what is a necessary and sufficient condition on initial and multiplier such that this process is guaranteed to terminate? Thanks! AI: Since i += i * multiplier is equivalent to i *= (multiplier + 1), the conditions for termination are: initial is zero, and/or multiplier is odd Since odd numbers have inverses modulo $2^n$, multiplying by them can not result in zero (unless it was zero to begin with). Multiplying by an even number irreversibly sets at least one bit to zero. Perhaps a more intuitive way to explain it comes directly from binary multiplication. Just like in decimal, a number of sub-results are added together. In binary, the sub-results are easier to calculate, since when you're calculating them you only have to multiply by 0 or 1 (and shift). For example (adapted from wiki:binary_multiplier to be reduced modulo $2^4$ and with the operands swapped) 1011 x 1110 ====== 1110 (this is 1110 x 1) 110 (this is 1110 x 1, shifted one position to the left) 00 (this is 1110 x 0, shifted two positions to the left) + 0 (this is 1110 x 1, shifted three positions to the left) ========= 1010 So, if a multiplicand is odd, the other multiplicand is added in its unshifted version. Observe that addition carries to the left, and so can't affect any digits to the right of its rightmost 1, and no other addition can ever "touch" the rightmost 1 in that number, because from that point on only shifted numbers are added (with their rightmost 1 either more to the left or thrown out completely). So once that rightmost 1 is there it will stay there, and since having a rightmost 1 at all means the number can't be zero, multiplying any nonzero number by an odd number can not result in zero.
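If you want to experiment, here is a Python sketch that emulates the Java loop's 32-bit wraparound with a mask (the step cutoff is a heuristic for "does not terminate", not a proof):

```python
def terminates(initial, multiplier, max_steps=10**5):
    # i += i*m is i *= (m + 1), computed mod 2^32 as in Java
    i = initial & 0xFFFFFFFF
    for _ in range(max_steps):
        if i == 0:
            return True
        i = (i * (multiplier + 1)) & 0xFFFFFFFF
    return False

print(terminates(2, 12381923))  # True  -- odd multiplier
print(terminates(2, 12381922))  # False -- even multiplier, nonzero start
```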
H: Show $g^{-1} \wedge h^{-1}=(g\vee h)^{-1}$ Glass' Partially Ordered Groups, Lemma 2.3.2, says: Let $G$ be a p.o. group and $g,h\in G$. If $g\vee h$ exists, then $g^{-1} \wedge h^{-1}=(g\vee h)^{-1}$ Proof: If $f\leq g^{-1},h^{-1}$ then $f^{-1}\geq g,h$. Thus $f^{-1}\geq g\vee h$. Hence $f\leq (g\vee h)^{-1}$. I don't understand the second step. $g\vee h\geq g,h$, so the fact that $f^{-1}\geq g,h$ doesn't seem to imply that it's bigger than their join. AI: Recall that $g \vee h$ is the least upper bound (supremum) of $g , h$. This means that anything which is $\geq$ both $g$ and $h$ must also be $ \geq g \vee h$.
H: Proving the relation $\det(I + xy^T ) = 1 + x^Ty$ Let $x$ and $y$ denote two length-$n$ column vectors. Prove that $$\det(I + xy^T ) = 1 + x^Ty$$ Is Sylvester's determinant theorem an extension of the problem? Is the approach the same? AI: Here is an approach by upper triangularization for the sake of variety. Note that $xy^T\in M_n(K)$ has rank $\leq 1$ and $\mbox{tr } xy^T=\sum_{j=1}^nx_jy_j=x^Ty$. So $0$ is an eigenvalue of multiplicity $n-1$ if $xy^T$ has rank $1$, or $n$ if $xy^T=0$. In the latter case $x=0$ or $y=0$ whence $\det(I+xy^T)=\det I=1=1+x^Ty$. Now if $xy^T$ has rank $1$, take any vector not in $\ker xy^T$ and add it to a basis of $\ker xy^T$ to get a basis of $K^n$. Then, taking the trace to determine the lower-right coefficient, we see that $xy^T$ is similar to $$ S(xy^T)S^{-1}=\pmatrix{0&*\\0&x^Ty}\quad \Rightarrow\quad S(I+xy^T)S^{-1}=\pmatrix{I&*\\0&1+x^Ty} $$ The result follows immediately. Note: every matrix of rank $1$ is of the form $xy^T$ with $x\neq 0$ and $y\neq 0$. With the approach above, we see that a square rank $1$ matrix is diagonalizable if and only if its trace is nonzero.
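A quick numeric sanity check of the identity (a NumPy sketch; the dimension and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
x = rng.standard_normal((n, 1))
y = rng.standard_normal((n, 1))
lhs = np.linalg.det(np.eye(n) + x @ y.T)
rhs = 1 + (x.T @ y).item()
print(lhs, rhs)  # agree up to floating-point error
```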
H: Some group theory interpretation problem Very simple question I think, but I can't fully understand the following set: We are given a group $G$ with a subgroup $H$. Then I have to answer some questions about the subgroup $$\bigcap_{g\in G} gHg^{-1}$$ What exactly are the elements of this group? AI: That subgroup is called "the core" of the subgroup $\,H\le G\,$ , and it's characterized as being the largest normal subgroup of $\,G\,$ which is contained in $\,H\,$. You can get it as follows: let $\,g\cdot (xH)\mapsto (gx)H\;$ be the left regular action of $\,G\, $ on the set $\;X\;$ of left cosets of $\,H\, $ in $\,G\,$ . As with any other group action on a set, this one determines a group homomorphism $\,G\to \text{Sym}_X\;$ ( note, for example, that if $\;[G:H]=n\;$ then $\,\text{Sym}_X\cong S_n\;$). Well, the core of $\,H\,$ is precisely the kernel of this homomorphism...
H: removing the remainder of a fraction I would like to remove the remainder from a fraction if possible. I want a function $$f(x,y) = x/y - remainder$$ for example $$f(3,2) = 1$$ $$f(7,2) = 3$$ $$f(12,5) = 2$$ It seems so simple but it's been bugging me for a while. Please help. AI: You are looking for division with remainder. We have $x=\lfloor \frac xy \rfloor y+r$, where $f(x,y)=\lfloor \frac xy \rfloor$ and $r=x-\lfloor \frac xy \rfloor y$. What do you mean by "from first principles?"
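In Python, for instance, this is just floor division (for the positive inputs in the question, floor division and truncation agree):

```python
def f(x, y):
    return x // y  # floor division drops the remainder

print(f(3, 2), f(7, 2), f(12, 5))  # 1 3 2
```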
H: separability of a space When I have to show that some space $A$ IS NOT separable, does it always work if I find an uncountable subset $B\subset A$, $|B|=2^{\aleph_0}$, and a set $C$ of disjoint open balls, $C=\{L(x,r): x\in B\}$? If $A$ IS separable, then $\exists D\subset A$ such that $|D|\leq \aleph_0$ and $(\forall x\in A)(\forall r >0)(\exists y\in D)\, y\in L(x,r)$. So, if I want $A$ to be separable, then $D$ has to have non-empty intersection with all elements of $C$, so it would be $|D|\geq|C|=|B|=2^{\aleph_0}$, which means that $D$ does not satisfy the condition $|D|\leq \aleph_0$, and $A$ is not separable. AI: Yes, you are absolutely right. Any dense set must intersect every nonempty open set. So if you can find uncountably many nonempty open sets that are pairwise disjoint, then any dense set must be uncountable, and the space cannot be separable.
H: Number of solutions of a positive integral quadratic form is finite? Is there an easy way to see the following: Suppose Q is an integral quadratic form in $n$ variables that is positive definite, that is $Q(x) \geq 1$ for all $0 \neq x \in \mathbb{Z}^n$. Then the number of solutions to the equation $Q(x)=m$, for some fixed $m \in \mathbb{N}$, is finite. AI: Suppose that $Q$ is represented by an $n \times n$ matrix that I will by abuse of notation refer to as $Q$, and that $Q$ as a positive definite matrix satisfies a "coercivity" condition: $$ x^T Q x \ge c x^T x $$ for some constant $c \gt 0$. Then the existence of at most finitely many solutions to $Q(x) = x^T Q x = m$ follows from the existence of at most finitely many $x$ such that $c x^T x \le m$, counting on the integer components of $x$. Added: The basic definition of an integer n-ary quadratic form is a homogeneous of degree 2 polynomial $Q(x_1,\ldots,x_n) = \sum_{i \le j} a_{ij}x_i x_j$. As alluded to this can be expressed in matrix form $x^T Q x$ where $x$ is a column vector $(x_1,\ldots,x_n)$, whereby the off-diagonal entries become $\frac{1}{2} a_{ij}$ for $i \neq j$ to obtain real symmetric matrix $Q$ (indeed $Q$ is rational). Now real symmetric matrices have a complete set of eigenvalues and eigenvectors, and the task is to show that quadratic form $Q(x) \geq 1$ for integer vectors implies matrix $Q$ is positive definite in the sense that all eigenvalues are bounded below by positive constant $c \gt 0$, from which the coercivity property is immediate. We really just need, since the number of eigenvalues of $Q$ is finite, to eliminate the possibility of negative and/or zero eigenvalues. If there were negative eigenvalues, then taking $x$ to be an appropriate rational approximation to a corresponding eigenvector would produce a negative value for $Q(x)$, and by multiplying through by any denominator, a corresponding integer $x$ that also gives a negative value for the quadratic form. The case of zero eigenvalues is even simpler to dispose of. If matrix $Q$ were to have a nontrivial nullspace, then $Qx = 0$ would have rational nontrivial solutions (since $Q$ is rational), and again some integer multiple of such a solution $x$ would give $Q(x) = 0$ over the integer vectors, contradicting the asserted property of $Q(x) \ge 1$.
H: Find the volume using triple integrals Using triple integrals and Cartesian coordinates, find the volume of the solid bounded by $$ \frac{x}{a}+\frac{y}{b}+\frac{z}{c}=1 $$ and the coordinate planes $x=0, y=0,z=0$ My take I have set the parameters to $$ 0\le x \le a$$ $$0\le y \le b\left( 1 - \frac{x}{a} \right)$$ $$0\le z \le c \left( 1 - \frac{y}{b} -\frac{x}{a} \right) $$ and evaluated $$ \int_0^{a} \int_0^{b\left( 1 - \frac{x}{a}\right)} \int_0^{c \left( 1 - \frac{y}{b} -\frac{x}{a} \right)} 1 dzdydx$$ and gotten $0$ as my final answer but the actual answer is $\frac{abc}{6}$ Never mind, I found the mistake I was making, just a simple integral mistake but it was the right procedure. Thank you for viewing! :) AI: Here is an alternative computation using a single variable integral that confirms your result. The following figure represents the given pyramid. The equations of the lines situated on the planes $y=0$ and $z=0$ are: $$y=0,\qquad\frac{x}{a}+\frac{z}{c}=1\Leftrightarrow z=\left( 1-\frac{x}{a}\right) c,$$ $$z=0,\qquad\frac{x}{a}+\frac{y}{c}=1\Leftrightarrow y=\left( 1-\frac{x}{a}\right) b.$$ The intersection of the pyramid with the plane perpendicular to the $x$-axis in $x$ is a right triangle with catheti $\left( 1-\frac{x}{a}\right) c$ and $\left( 1-\frac{x}{a}\right) b$, whose area $A(x)$ is given by $$A(x)=\frac{1}{2}\left( 1-\frac{x}{a}\right) b\left( 1-\frac{x}{a}\right) c=\frac{bc}{2}\left( 1-\frac{x}{a}\right) ^{2}.$$ Hence the volume is given by the integration of the area $A(x)$ from $x=0$ to $x=a$ $$\begin{eqnarray*} V &=&\int_{0}^{a}A(x)dx \\ &=&\frac{bc}{2}\int_{0}^{a}\left( 1-\frac{x}{a}\right) ^{2}dx \\ &=&\frac{bc}{2}\left( a-\frac{2}{a}\frac{a^{2}}{2}+\frac{1}{a^{2}}\frac{a^{3} }{3}\right) \\ &=&\frac{abc}{6}. \end{eqnarray*}$$
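The triple integral can also be checked symbolically (a SymPy sketch):

```python
import sympy as sp

x, y, z, a, b, c = sp.symbols('x y z a b c', positive=True)
V = sp.integrate(1,
                 (z, 0, c * (1 - x/a - y/b)),
                 (y, 0, b * (1 - x/a)),
                 (x, 0, a))
print(sp.simplify(V))  # a*b*c/6
```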
H: Need help solving the ODE $ f''(r) + \frac{1}{r}f'(r) = 0 $ I am currently taking complex analysis, and this homework question has a part that requires the solution to a differential equation. I took ODE over 4 years ago, so my skills are very rusty. The equation I derived is this: $$ f''(r) + \frac{1}{r}f'(r) = 0 $$ I made this substitution: $ v(r) = f'(r) $ to get: $$ v'(r) + \frac{1}{r}v(r) = 0 \implies \frac{dv}{dr} = - \frac{v}{r} \implies \frac{dv}{v} = - \frac{dr}{r} $$ Unsure of what to do now. Edit: I forgot to add that the answer is $ a \log r + b $ AI: Another way of doing this (which gets to the required form quicker in my opinion) is noticing that $$ f''(r) + \frac{1}{r}f'(r) = 0 \implies \frac{1}{r}\left(r f'(r)\right)'=0$$ $$ \implies r f'(r) = c$$ $$ \implies f'(r) = \frac{c}{r}$$ $$ \implies f(r) = c \log(r) + d$$ where $c$ and $d$ are arbitrary constants.
H: Convolution and integrating over G(t) I'm struggling with the following expression in a statistics script: $$H(x) = \int_{-\infty}^\infty F(x-t) dG(t)$$ What does the $dG(t)$ mean exactly? I've never seen that notation before. Background: random variables $X, Y$ with CDFs $F$ and $G$; we want to calculate $H$, which is the CDF of $X + Y$. AI: One remark on notation. Let $X$ and $Y$ be independent r.v.'s with probability densities $f_X$ and $f_Y$. The r.v. $Z=X+Y$ has the following cumulative probability distribution: $F_Z(z):=P(Z\leq z)=P(X+Y\leq z)=\int_{x+y\leq z} f_Y(y)f_X(x)\,dx\,dy=$ $\int_{-\infty}^{+\infty} F_Y(z-x)f_X(x)\,dx=\int_{-\infty}^{+\infty} F_Y(z-x)\,dF_X(x)$, i.e. the formula in your question, where $F_Y(z-x)=P(Y\leq z-x)=\int_{-\infty}^{z-x} f_Y(y)\,dy$. To obtain the probability density $f_Z$ of $Z$ one differentiates w.r.t. $z$, obtaining the convolution $$f_Z(z)=\int_{-\infty}^{\infty} f_Y(z-x)f_X(x)\,dx $$
H: Fourier series question How do we know that a Fourier series expansion exists for a given function $f(x)$? I mean, if $f(x)=x$ and we suppose that $x=a_1\sin(x)+a_2\sin(2x)$ with $-\pi\leq x \leq \pi$, the Fourier coefficients $a_1$ and $a_2$ remain the same as if we had supposed that $x=a_1\sin(x)+a_2\sin(2x)+...$ So why are there infinitely many sine or cosine terms? AI: I think your question has two parts. The first is existence for a given $f$. A Fourier series exists for a given $f(x)$, $x \in [-\pi,\pi)$ when $$\int_{-\pi}^{\pi} dx \, |f(x)|^2 \lt \infty$$ The second part I think asks how do we know that the FS coefficients do not change when we add more harmonics to the series. This is because the coefficients do not depend on the number of terms in the sum; they simply depend on the particular harmonic and the function, i.e., $$a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} dx \, f(x) \, \sin{n x}$$
H: is prime spectrum $Spec(R)$ countable? Let $R$ be a commutative ring with identity, and let $Spec(R)=\{I\mid \text{$I$ is a prime ideal of $R$}\}$. Is the set $Spec(R)$ countable? Also, if $\{\langle p^n \rangle\}$ is closed in $P_{-}Spec(R) = \{I\mid \text{$I$ is a primary ideal of $R$}\}$, is $\langle p^n \rangle$ a maximal ideal? I tried to prove it but can't seem to find the answer. Could someone help me? Thank you. AI: For the first question, no: take $R=\mathbb{C}[x]$ to be a polynomial ring in one variable. The prime ideals are $0$ and those generated by $x-a$ for $a \in \mathbb{C}$, of which there are uncountably many. For the second question, the closed points of $\mathrm{Spec}(R)$ are precisely the maximal ideals. I don't know what topology you are using on the set of primary ideals. Can you clarify?
H: Upper bound of binomial distribution I'm studying a proof and I'm wondering which binomial approximation could have been used to establish the following bound: $$\cfrac{1}{2} {n \choose r} {n-r \choose r} \le \cfrac{n^{2r}}{2(r!)^2}$$ I get that: $$\cfrac{1}{2} {n \choose r} {n-r \choose r} = \cfrac{n!}{2(r!)^2 (n-2r)!}$$ So I'm wondering about: $$\cfrac{n!}{(n-2r)!} \le n^{2r}$$ AI: You have $\frac{n!}{(n-2r)!}=(n-2r+1)\cdot(n-2r+2)\cdot\,\cdots\,\cdot n\le n^{2r}$, as it is a product of $2r$ numbers, each of which is at most $n.$
H: How to show that T is a projection operator For $x ∈ [0, 2π]$ let $G(x) = π^{−1}\cos x$, and define an operator $T$ on $L^2([0, 2π])$ as follows: $$(Tf)(x) = ∫_0^{2π}G(x − x')f(x') \,dx'. $$ Show that $T$ is a projection operator. I guess I must show that $T$ is self-adjoint and idempotent, in other words that: $T=T^*$ and $T^2=T$ I am quite new to this and have a little difficulty getting started Sincerely Ingvar AI: To show $T^2 = T$, just compute $T^2$. We have for $f \in L^2([0,2\pi])$: \begin{align*} (T^2 f)(x) &= T(Tf)(x)\\ &= \int_0^{2\pi} G(x-x')(Tf)(x')\, dx'\\ &= \int_0^{2\pi} G(x-x')\int_0^{2\pi} G(x'-x'')f(x'')\, dx''\, dx'\\ &= \int_0^{2\pi} \int_0^{2\pi} G(x-x')G(x'-x'')\,dx'\, f(x'')\,dx'' \end{align*} So we have to compute $\int_0^{2\pi} G(x-x')G(x'-x'')\, dx'$; plugging in the given $G$, we have \begin{align*} \int_0^{2\pi} G(x-x')G(x'-x'')\, dx' &= \frac 1{\pi^2}\int_0^{2\pi} \cos(x-x')\cos(x'-x'')\, dx'\\ &= \frac 1{2\pi^2} \int_0^{2\pi} \bigl(\cos(x-x'') + \cos(x+x''-2x')\bigl)\, dx'\\ &= \frac 1{2\pi^2} \cdot \bigl( 2\pi \cos(x-x'') + 0\bigr)\\ &= \frac 1{\pi}\cos(x-x'')\\ &= G(x-x''). \end{align*} Continuing our calculation above, we have \begin{align*} (T^2 f)(x)&= \int_0^{2\pi} \int_0^{2\pi} G(x-x')G(x'-x'')\,dx'\, f(x'')\,dx''\\ &= \int_0^{2\pi} G(x-x'')f(x'')\, dx''\\ &= (Tf)(x). \end{align*} As $f$ and $x$ were arbitrary, we have $T^2 = T$. For the adjoint, we will do the same (i. e. computing $T^*$ from its definition), we have for $f,g \in L^2([0,2\pi])$: \begin{align*} (T^*f, g) &= (f, Tg)\\ &= \int_0^{2\pi} f(x)(Tg)(x)\, dx\\ &= \int_0^{2\pi} f(x)\int_0^{2\pi} G(x-x')g(x')\, dx'\, dx\\ &= \int_0^{2\pi}\int_0^{2\pi} f(x)G(x-x')g(x')\,dx'\,dx \end{align*} Note now that $G$ is symmetric, that is $G(-y) = G(y)$ for all $y$, hence $G(x-x') = G(x'-x)$; continuing we have \begin{align*} (T^*f, g) &= \int_0^{2\pi} \int_0^{2\pi} f(x)G(x-x')g(x')\,dx'\,dx\\ &= \int_0^{2\pi} \left(\int_0^{2\pi} G(x'-x)f(x)\, dx\right) g(x')\, dx'\\ &= \int_0^{2\pi} (Tf)(x')g(x')\, dx'\\ &= (Tf, g) \end{align*} Hence $T^* = T$.
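A numeric sanity check by discretizing $T$ on a uniform grid (a NumPy sketch; the grid size is arbitrary, and the plain Riemann sum is essentially exact here because the kernel is a trigonometric polynomial):

```python
import numpy as np

# discretize (Tf)(x) = integral of G(x - x') f(x') dx' on [0, 2*pi], G(u) = cos(u)/pi
N = 400
xs = np.linspace(0, 2*np.pi, N, endpoint=False)
T = (np.cos(xs[:, None] - xs[None, :]) / np.pi) * (2*np.pi / N)
print(np.abs(T @ T - T).max())  # ~1e-16: T^2 = T
print(np.abs(T - T.T).max())    # 0: symmetric, hence self-adjoint
```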
H: Why doesn't this series converge uniformly? given series: $$x^2+\frac{x^2}{(1+x^2)}+\frac{x^2}{(1+x^2)^2}+\frac{x^2}{(1+x^2)^3}+\frac{x^2}{(1+x^2)^4}+\ldots$$ in the interval $[-1,1].$ I tried to break the problem into two cases: 1) $x=0$, in which case the series is zero. 2) $x\neq0$: I summed the G.P. and I got $(1+x^2)$, which is bounded, and then I concluded that the series should converge uniformly. But the answer is given as "not". So where am I mistaken? AI: The simple fact that it converges to a bounded function doesn't imply uniform convergence. Moreover, if the series converged uniformly, the limit would be a continuous function. Is your limit continuous at $0$? What does this mean?
H: Question about inverse limits. I am reading the book Introduction to commutative algebra by Atiyah and Macdonald. On Page 104, I have some questions about the proof that $\{A_n\}$ is surjective implies $d^A$ is surjective. We have to prove that given $(a_n) \in A = \varprojlim A_n$, there is $(x_n) \in\varprojlim A_n$ such that $d^A((x_n))=(a_n)$. By definition, $d^A(x_n)=x_n-\theta_{n+1}(x_{n+1})$. So we have to solve the system of equations $x_n-\theta_{n+1}(x_{n+1})=a_n, n=0, 1, \ldots$, for $x_0, x_1, \ldots$. We have $x_0-\theta_1(x_1)=a_0$. But there are two unknowns $x_0, x_1$ in this equation. How could we solve this equation? Thank you very much. AI: As $(A_n)$ is surjective, the $\theta_n$ are onto. So let $x_0\in A_0$ be arbitrary, choose $x_1 \in A_1$ with $\theta_1(x_1) = x_0 - a_0$, which is possible as $\theta_1$ is onto. Now inductively we find $x_n \in A_n$ with $\theta_n(x_n) = x_{n-1} - a_{n-1}$ for each $n$, as wished.
H: When does $\varphi(n) = n/6$? I am having trouble showing this. I was able to show when $\varphi(n) = n/2$ and $\varphi(n) = n/3$, but I don't know when $\varphi(n) = n/6$ AI: As $\frac{\phi(n)}{n}=\prod_{p|n} \frac{p-1}{p}$ you need to find all $n$ so that $$\prod_{p|n} \frac{p-1}{p} =\frac{1}{6} $$ Let $p_1<\cdots<p_k$ be all the primes dividing $n$. Then $$6(p_1-1)(p_2-1)\cdots(p_k-1)=p_1p_2\cdots p_k \,.$$ Thus $$p_k \mid 6(p_1-1)(p_2-1)\cdots(p_k-1) \,.$$ Since $p_k$ is prime, and larger than each $p_i-1$, it follows that $p_k \mid 6$. Thus $p_k=2$ or $p_k=3$. This shows that $n=2^a3^b$. It is easy to check that no such number is a solution: if $a,b\geq 1$ then $\frac{\phi(n)}{n}=\frac12\cdot\frac23=\frac13$; if $b=0$ it is $\frac12$; if $a=0$ it is $\frac23$; in no case is it $\frac16$.
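A brute-force confirmation that nothing turns up (a sketch assuming SymPy is available; the search bound is arbitrary):

```python
from sympy import totient

print([n for n in range(1, 10**4) if 6 * totient(n) == n])  # []
```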
H: How can a group of matrices form a manifold? So for example, the group $GL(n,\mathbb{R})$. It is said that this group can be considered as a manifold - but I do not get how this is possible. How does one then assign a neighborhood of a matrix, and talk of compactness? (of course a manifold can be disconnected, but.) AI: Expounding on the comments: An $n \times m$ matrix is by definition a function $A: \{1, \ldots, n\} \times \{1, \ldots, m\} \to \mathbb{R}$. Most of the time we write $A_{ij}$ for its value at $(i,j)$ instead of $A(i,j)$. If you think about the definition of a vector, this is equivalent to looking at points of $\mathbb{R}^{mn}$. So $n \times n$ square matrices are just points of $\mathbb{R}^{n^2}$. On $\mathbb{R}^k$ we have a bunch of norms. It's a basic theorem that on a finite-dimensional space these are all equivalent and define the same topology (that is essentially what equivalence of norms means). Perhaps the simplest is just the $\ell_1$-norm: the sum of the absolute values of the entries of the matrix. So now we can talk about distance and other similar notions like compactness. For example, we have the function $\det: \mathbb{R}^{n^2} \to \mathbb{R}$. By its definition, it is a polynomial in the entries of the matrix. Hence it is a very nice smooth function. The group $\text{GL}(n,\mathbb{R})$ is $$ \text{GL}(n, \mathbb{R}) = \{ A \in \mathbb{R}^{n^2} \mid \det(A) \neq 0 \}$$ and so is the preimage of an open set under a continuous function. Therefore it is an open set. The special linear group consists of the matrices with determinant $1$, and so forms a closed subgroup. For a compact example, consider the orthogonal matrices. An orthogonal matrix, if you think about it, is one whose entries satisfy the $n^2$ quadratic equations coming from $A^TA = I$. So they form a closed set (a subgroup, even). Moreover, we can see that they are actually bounded (each column has Euclidean norm $1$, so no entry exceeds $1$ in absolute value), and so by the Heine-Borel theorem AKA the characterisation of compactness in $\mathbb{R}^k$ we know that they are a compact subgroup. A nice application of this topological nonsense is the Cayley-Hamilton theorem (any matrix is a root of its characteristic polynomial). Working over $\mathbb{C}$, one can show that the diagonalizable matrices are in fact dense in $\mathbb{C}^{n^2}$. The Cayley-Hamilton theorem is true for these, and hence extends to all complex matrices. Moreover, one can show by some algebraic nonsense that this implies it's true for all commutative rings (with unit).
H: How to Find the First Moment of Area of a Circular Segment by Integration Given a segment of circle symmetric about the $y$-axis, I'm wondering how to apply the integral $Q_x = \int y \, dA $ to find the first moment of area with respect to the $x$-axis. I'm having difficulties taking into account both the straight line and the circular portion of the segment. Any help is greatly appreciated. AI: It seems quite straightforward: assuming a circle of unit radius: $$ Q_x = \int y \, dA =\int_{-a}^a \int_h^{\sqrt{1-x^2}} y \, dy \, dx$$ with $a=\sqrt{1-h^2}$. Where did you get stuck?
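As a cross-check, the double integral can be evaluated symbolically for a concrete chord height; the closed form $Q_x=\frac23(1-h^2)^{3/2}$ used below comes from carrying out the two integrations by hand (a SymPy sketch; $h=1/2$ is an arbitrary test value):

```python
import sympy as sp

x, y = sp.symbols('x y')
h = sp.Rational(1, 2)  # chord height, unit circle
a = sp.sqrt(1 - h**2)
Qx = sp.integrate(y, (y, h, sp.sqrt(1 - x**2)), (x, -a, a))
print(sp.simplify(Qx - sp.Rational(2, 3) * (1 - h**2)**sp.Rational(3, 2)))  # 0
```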
H: How to integrate: $\int \frac{\mathrm dx}{\sin (x)-\sin(a)}$ How to integrate: $$\int \frac{\mathrm dx}{\sin (x)-\sin(a)}$$ AI: Using the Weierstrass substitution, $$\tan \frac x2=u$$ $$\implies \sin x=\frac{2u}{1+u^2}\text{ and } x=2\arctan u,dx=\frac{2du}{1+u^2}$$ $$I=\int\frac{dx}{\sin x-\sin \alpha} =\int\frac1{\frac{2u}{1+u^2}-\sin\alpha}\cdot\frac{2du}{1+u^2} =\int\frac{2du}{2u-(1+u^2)\sin\alpha}$$ Now, $$2u-(1+u^2)\sin\alpha=-\sin\alpha (1+u^2-2u\csc\alpha)=-\sin\alpha \left((u-\csc\alpha)^2-(\cot\alpha)^2\right)$$ Using $\int\frac{dx}{x^2-a^2}=\frac1{2a}\ln \left|\frac{x-a}{x+a}\right|+C$ $$I=-\frac1{\sin\alpha}\frac1{\cot\alpha}\ln\left|\frac{u-\csc\alpha-\cot\alpha}{u-\csc\alpha+\cot\alpha}\right|+C$$ where $C$ is an arbitrary constant for the indefinite integral. Using $\csc\alpha+\cot\alpha=\frac{1+\cos\alpha}{\sin\alpha}=\frac{2\cos^2\frac\alpha2}{2\sin\frac\alpha2\cos\frac\alpha2}=\cot\frac\alpha2$ and similarly, $\csc\alpha-\cot\alpha=\tan\frac\alpha2$ (as $\sin2A=2\sin A\cos A,\cos2A=2\cos^2A-1$) $$I=-\frac1{\cos\alpha}\ln\left|\frac{\tan\frac x2-\cot\frac \alpha2}{\tan\frac x2-\tan\frac\alpha2}\right|+C$$ Again, $$\ln\left|\frac{\tan\frac x2-\cot\frac \alpha2}{\tan\frac x2-\tan\frac\alpha2}\right|$$ $$=\ln\left|\frac{\cos\frac \alpha2\cos\frac x2\left(\sin\frac x2\sin \frac \alpha2-\cos\frac \alpha2\cos\frac x2\right)}{\sin\frac \alpha2\cos\frac x2\left(\sin\frac x2\cos\frac\alpha2-\sin\frac\alpha2\cos\frac x2\right)}\right|=\ln\left|-\cot\frac\alpha2\right|+\ln\left|\frac{\cos\frac{x+\alpha}2}{\sin\frac{x-\alpha}2}\right|$$ Clearly, $\ln\left|-\cot\frac\alpha2\right|$ is independent of $x,$ hence constant
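A numeric spot-check of the final antiderivative by differentiation (a SymPy sketch; the test angle and sample points are arbitrary):

```python
import sympy as sp

x = sp.symbols('x')
alpha = 0.4  # arbitrary test angle
F = -1/sp.cos(alpha) * sp.log((sp.tan(x/2) - sp.cot(alpha/2))
                              / (sp.tan(x/2) - sp.tan(alpha/2)))
dF = sp.diff(F, x)
for x0 in (1.0, 1.5, 2.0):
    # both columns should agree: F'(x0) vs 1/(sin(x0) - sin(alpha))
    print(float(dF.subs(x, x0)), float(1/(sp.sin(x0) - sp.sin(alpha))))
```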
H: Is there a contradiction in this definite integral computation? EDIT: This question is wrong. You don't need to waste your time trying to answer it. :D I need help showing that: $$ \int_a^b x f(x) dx = \frac {a+b} 2\int_a^bf(x)dx$$ My attempt. $$ I = \int_a^bxf(x)dx = \int_a^b(b+a -x)f(b+a-x)dx $$ $$ = \int_a^b (b+a)f(b+a-x)dx - \int_a^bxf(b+a-x)dx $$ I suspect that the 2nd term is $I$ itself (though I'm not sure). Hints? Btw, the reason I am asking this is, if the above stated theorem were true, then how could there be a contradiction between it and this question? You see, the question is of the same form. But in that case, the integral of $f(x) = \cfrac {\cos x} {1 + \sin^2x}$ from 0 to $\pi$ is 0. Interestingly, the above stated theorem works for $$ \int \limits_0^\pi \cfrac {x \sin x} {1 + \cos^2x} dx $$ AI: My suspicion is that this integral is true only for certain $f$. Take $f(x) = \dfrac{1}{b-a}$. In Probability, we refer to this specific $f$ as the density function for the continuous uniform distribution and $\int\limits_{a}^{b}\dfrac{x}{b-a} \text{ dx} = \dfrac{a+b}{2}$ as the expected value of the continuous uniform distribution. Also, for any density function, $\int\limits_{a}^{b}f(x) \text{ dx} = 1$, so we get $\int\limits_{a}^{b}xf(x) \text{ dx} = \dfrac{a+b}{2} * 1$, which is assuming that $f(x) = \dfrac{1}{b-a}$. EDIT: Of course, the expected value of other density functions will NOT always be $\dfrac{a+b}{2}$.
H: Why ring with only even numbers is not an integral domain? Let $S$ be a set of all even integers. According to my text book, $(S,+,\cdot)$ is a ring which is not an integral domain. It is stated as a fact without an explanation and I fail to see the reason for this. Why the ring from above is not an integral domain? EDIT: It can't be because of the lack of $1$ element. In the next example, $(Z,+,\cdot)$ (where $Z$ is the whole set of integers) is stated to be an integral domain. AI: I suspect that your book's definition of integral domain requires a multiplicative identity element, which $(S,+,\cdot)$ does not have.
H: equivalent measures, can one be finite and one not? Let $\mu$ be a non-negative and Borel-finite measure on $\mathbb{R}$ and $\nu$ a non-negative measure on $\mathbb{R}$. If $\mu$ and $\nu$ are equivalent (each absolutely continuous with respect to the other), is it true that $\nu$ is also Borel-finite? AI: The other often-seen example: Lebesgue measure on the real line, and the normalized Gaussian measure $$ \gamma(E) = \frac{1}{\sqrt{2\pi}}\int_E \exp(-x^2/2)\;dx $$ These two measures are mutually absolutely continuous, yet $\gamma$ is finite while Lebesgue measure is not; so the answer is no.
H: About the relation of rank(AB), rank(A), rank(B) and the zero matrix Let $A$ be a $2 \times 4$ matrix and $B$ be a $4 \times 4$ matrix; prove that if $rank(A) = 2$ and $rank(B)=3$ then $AB \neq 0$. I got stuck at $rank(AB) \leq 2$. How do I continue from here? AI: We know that $rank(AB)=rank(B)-\dim(\operatorname{Im}(B)\cap \ker A)$. Reason: consider the vector space $\operatorname{Im}(B)$, and let $T$ be the linear transformation on $\operatorname{Im}(B)$ given by the matrix $A$. Then by the rank-nullity theorem we have $rank(T)+\dim(\ker(T))=m$, where $m$ is the dimension of $\operatorname{Im}(B)$, which equals $rank(B)$. Now we have $rank(AB)=rank(T)$ and $\ker(T)=\{v\in \operatorname{Im}(B)\mid Av=0\}=\{v\in \operatorname{Im}(B)\mid v\in \ker(A)\}=\operatorname{Im}(B)\cap \ker(A)$. From this, $rank(AB)=rank(B)-\dim(\operatorname{Im}(B)\cap \ker A)$ follows. In this problem we have $\dim(\ker(A))=2$ (reason: $\dim(\ker(A))+rank(A)=4$ and $rank(A)=2$), so $\dim(\operatorname{Im}(B)\cap \ker A) \leq \dim(\ker(A))=2$ and therefore $rank(AB)=rank(B)-\dim(\operatorname{Im}(B)\cap \ker A) \geq 3-2 = 1$. So $AB\ne 0$.
H: Ring structure on subsets of the natural numbers Let $$\mathcal{N}=\{\{k_1,\ldots,k_s\}:\ s>0,\ \mbox{and the}\ k_i\ \mbox{are non-negative and pairwise different integers}\}\cup\{\emptyset\}.$$ Note that there is a bijection with the naturals, $$ \begin{array}{rccc} B:&\mathcal{N}&\longrightarrow &\mathbb{N}\\ &\emptyset& \longmapsto & 0\\ &\{k_1,\ldots,k_s\}& \longmapsto & 2^{k_1}+\cdots+2^{k_s} \end{array}. $$ For $K,L\in\mathcal{N}$ define $$K\oplus L=(K\cup L)\setminus(K\cap L),$$ $$K\otimes L=\bigoplus_{k\in K,\ l\in L}\{k+l\}.$$ The definition of the product doesn't make sense if either $K$ or $L$ is empty. In that case take the product to be $\emptyset$. Note that the sum I defined is just the Nim sum (justifying the CGT tag) but the product is not the usual Nim product. In particular I'm using the associativity of $\oplus$ to define $\otimes$. Here are my questions: 1) If I'm not mistaken $(\mathcal{N},\oplus,\otimes)$ is a commutative ring with unit. Can anyone confirm this? In principle the distributive property is the only tricky one. 2) If we do have a ring indeed, what is it known about it? AI: Your ring is isomorphic to $\Bbb F_2[X]$. For $k \ge 0$, define $1_k : \mathcal N \to \Bbb F_2$ by $1_k(A) = 1$ if $k\in A$, and $0$ otherwise. Then define $f : \mathcal N \to \Bbb F_2[X]$ by $f(A) = \sum_{k \ge 0} 1_k(A) X^k$. You can check that $f(A)+f(B) = \sum_{k \ge 0} (1_k(A)+1_k(B)) X^k = \sum_{k \ge 0} 1_k(A \oplus B) X^k = f(A \oplus B)$. And $f(A)f(B) = (\sum_{k \ge 0} 1_k(A)X^k)(\sum_{l \ge 0} 1_l(B)X^l) = \sum_{k,l \ge 0} 1_k(A)1_l(B)X^{k+l} = \sum_{k,l \ge 0} 1_k(A)1_l(B)f(\{k+l\}) = \sum_{k \in A, l \in B} f(\{k+l\}) = f(\bigoplus_{k \in A, l \in B} \{k+l\}) = f(A \otimes B)$
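Concretely, under the bijection $B$, the sum $\oplus$ becomes bitwise XOR and the product $\otimes$ becomes carry-less multiplication, i.e. multiplication in $\Bbb F_2[X]$; a small Python sketch (the operands are arbitrary):

```python
def clmul(a, b):
    # carry-less product: partial products are combined with XOR
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

K, L = 0b1011, 0b1110  # B({0, 1, 3}) and B({1, 2, 3})
print(K ^ L)           # B(K XOR-sum L)
print(clmul(K, L))     # B(K tensor L) = 98 = 0b1100010
```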
H: Is the endomorphism of $\mathbb{Z}_{p}$ induced by multiplication by $p^{n}$ surjective? Let $p$ be a prime number. Is it true that $p^{n}\mathbb{Z}_{p}\cong\mathbb{Z}_{p}$ as additive groups for any natural number $n$ and if so, why? Here, $\mathbb{Z}_{p}$ denotes the ring of $p$-adic integers. Any help would be very much appreciated. AI: The multiplication by $p^n$ is injective - after all, $\mathbb{Z}_p$ is an integral domain - and so it gives an isomorphism $$\mathbb{Z}_p \cong p^n \mathbb{Z}_p$$ to its image. That doesn't make it surjective as an endomorphism, though, and it isn't, because $p^n$ isn't invertible in $\mathbb{Z}_p$.
H: Evaluate the maximum of: $P = \sin A\cdot\sin ^2 B\cdot \sin ^3 C$ Given a triangle $ABC$. Evaluate the maximum of: $P = \sin A\cdot\sin ^2 B\cdot \sin ^3 C$ AI: let $$f(A,B,C)=\ln{\sin{A}}+2\ln{\sin{B}}+3\ln{\sin{C}}+\lambda (A+B+C-\pi)$$ then $$\begin{cases} \dfrac{\partial f}{\partial A}=\cot{A}+\lambda=0\\ \dfrac{\partial f}{\partial B}=2\cot{B}+\lambda=0\\ \dfrac{\partial f}{\partial C}=3\cot{C}+\lambda=0 \end{cases}$$ so $$\cot{A}=2\cot{B}=3\cot{C},\qquad\text{i.e.}\qquad \tan{A}:\tan{B}:\tan{C}=1:2:3.$$ Writing $\tan A=t,\ \tan B=2t,\ \tan C=3t$ and using $$\tan{A}+\tan{B}+\tan{C}=\tan{A}\tan{B}\tan{C}$$ we have $$6t=6t^3\Longrightarrow t^2=1,$$ and $t=1$ (with $t=-1$ all three tangents would be negative, i.e. all three angles obtuse, which is impossible in a triangle). So $\tan A=1,\ \tan B=2,\ \tan C=3$, and using $$\sin{x}=\sqrt{\dfrac{\tan^2{x}}{1+\tan^2{x}}}$$ we have $$\sin{A}=\frac{1}{\sqrt{2}},\quad\sin{B}=\frac{2}{\sqrt{5}},\quad\sin{C}=\frac{3}{\sqrt{10}}$$ so $$\sin{A}\sin^2{B}\sin^3{C}\le\frac{1}{\sqrt{2}}\cdot\frac{4}{5}\cdot\frac{27}{10\sqrt{10}}=\frac{27}{25\sqrt{5}}=\frac{27\sqrt{5}}{125}$$ At this critical point we have $$\begin{cases} \dfrac{\partial^2 f}{\partial A^2}=-\csc^2{A}=-2\\ \dfrac{\partial^2 f}{\partial B^2}=-2\csc^2{B}=-\dfrac{5}{2}\\ \dfrac{\partial^2 f}{\partial C^2}=-3\csc^2{C}=-\dfrac{10}{3}\\ \dfrac{\partial^2 f}{\partial AB}=\dfrac{\partial^2 f}{\partial BC}=\dfrac{\partial^2 f}{\partial AC}=0 \end{cases}$$ so the Hessian matrix $\mathrm{diag}\left(-2,-\dfrac{5}{2},-\dfrac{10}{3}\right)$ is negative definite and the critical point is indeed a maximum.
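A numeric cross-check by grid search over triangles (a NumPy sketch; the resolution is arbitrary):

```python
import numpy as np

A, B = np.meshgrid(np.linspace(0.01, 3.13, 1000), np.linspace(0.01, 3.13, 1000))
C = np.pi - A - B
P = np.where(C > 0, np.sin(A) * np.sin(B)**2 * np.sin(C)**3, 0.0)
print(P.max(), 27 * np.sqrt(5) / 125)  # both approximately 0.48300
```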
H: Proving that $-\frac{2 \;i\log(i^2)}{2} = \pi$ I'm trying to prove that: $$-\frac{2 \;i\log(i^2)}{2} = \pi$$ This is what I've tried: $$-\frac{2 \;i\log(i^2)}{2} = -i \log(i^2) = -i (i \pi)\implies$$ $$-x\;(x y)=-x\;y\;x\implies$$ $$-i\;(i \pi) = -i\;i\;\pi = \pi \implies$$ $$-i\;i = (-i^2) = (-(-1)) = 1$$ Is this considered to be sufficient proof? Is there an alternative approach? AI: Assuming the branch of the logarithm with $\arg\in(0,2\pi)$ (branch cut along the non-negative reals, i.e. domain $\;\Bbb C-\Bbb R_+\;$): $$-\frac{2i\log i^2}2=-\frac{2i\log(-1)}2=-\frac{2i\left(\ln|-1|+i\pi\right)}2=-\frac{-2\pi}{2}=\pi$$
H: Henkin vs. "Full" Semantics for Second-order Logic and Multi-Sorted First Order Interpretations In this paper by Jeff Ketland, he notes: With Henkin semantics, the Completeness, Compactness and Löwenheim-Skolem Theorems all hold, because Henkin structures can be re-interpreted as many-sorted first-order structures. What is it about the full semantics for second-order logic that defies re-interpretation into a many-sorted first-order logic? AI: Many-sorted first-order logic does not place any internal, logical, requirement on the relationship of the domains of the various sorts. In particular, then, a two-sorted logic, with one sort running over "objects" and another sort running over "properties", places no particular logical requirement on the relationship of the domains. It doesn't require, for example, that if there are $\kappa$ objects, then there are $2^{\kappa}$ properties. By contrast, full semantics for second-order logic does require this, by requiring that the second-order quantifiers running over properties (thought of extensionally) do run over the full powerset of the domain of the first-order quantifiers -- indeed, that's what makes it "full"!
H: Show $A$ and $B$ have a common eigenvector if $A^2=B^2=I$ Let $n$ be a positive odd integer and let $A,B\in M_n(R)$ such that $A^2=B^2=I$. Prove that $A$ and $B$ have a common eigenvector. AI: For $M=A$ or $B$, define $$V_M^\pm=\{v\pm Mv:v\in\Bbb R^n\}.$$ By definition, $V_M^\pm$ are linear subspaces of $\Bbb R^n$. From $M^2=I$ we know that $$w=v\pm Mv\in V_M^\pm \Rightarrow Mw=\pm w,$$ i.e. $V_M^+$ (resp. $V_M^-$) is either $\{0\}$ or the eigenspace of $M$ associated to the eigenvalue $+1$ (resp. $-1$), and in particular $V_M^+\cap V_M^-=\{0\}$. Moreover, from $$v=\frac{1}{2}(v+Mv)+\frac{1}{2}(v-Mv),\quad\forall v\in \Bbb R^n,$$ we know that $$V_M^+\oplus V_M^-=\Bbb R^n\Rightarrow \dim V_M^++\dim V_M^-=n.$$ Since $n$ is odd, it follows that one of $V_M^\pm$ has dimension strictly larger than $\frac{n}{2}$; denote it by $V_M$. Therefore, $$\dim V_A +\dim V_B>\frac{n}{2}+\frac{n}{2}=n\Rightarrow V_A\cap V_B\ne \{0\},$$ which implies that $V_A\cap V_B$ contains some common eigenvector of $A$ and $B$.
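The dimension-counting argument can be watched in action with numpy (a sketch; the dimension, the $\pm1$ multiplicities, and the random conjugating matrices are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                       # odd dimension

def random_involution(plus):
    # conjugate of diag(+1,...,+1,-1,...,-1); it squares to the identity
    D = np.diag([1.0] * plus + [-1.0] * (n - plus))
    S = rng.standard_normal((n, n))
    return S @ D @ np.linalg.inv(S)

def big_eigenspace(M):
    # orthonormal basis of the larger of (I+M)R^n and (I-M)R^n
    best = None
    for P in (np.eye(n) + M, np.eye(n) - M):
        r = np.linalg.matrix_rank(P)
        U, _, _ = np.linalg.svd(P)
        if best is None or r > best.shape[1]:
            best = U[:, :r]
    return best

A, B = random_involution(3), random_involution(2)
VA, VB = big_eigenspace(A), big_eigenspace(B)

# dim V_A + dim V_B > n, so [VA | -VB] has a null vector; read it off the SVD
_, _, Vt = np.linalg.svd(np.hstack([VA, -VB]))
c = Vt[-1]
v = VA @ c[: VA.shape[1]]                   # lies in both column spaces

for M in (A, B):
    w = M @ v
    lam = (w @ v) / (v @ v)
    print(round(lam), np.linalg.norm(w - lam * v))   # eigenvalue +-1, residual ~ 0
```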
H: Playing with a functional equation I was playing with a functional equation and proved the result below: Let $f$ be such that $$f(f(z))=z$$ If $f^{-1}$ exists then $$f(z)=f^{-1}(z)$$ If $f'$ exists then as $$(f^{-1}(z))'=\frac{1}{f'(f^{-1}(z))}=(f(z))'$$ then $(z)'=\frac{1}{f'(z)}$ so $f'(z)=1$ then $f(z)=z+C\rightarrow (z+C)+C=z\rightarrow C=0$ so $f(z)=z$. Any error? Did I assume unnecessary things? AI: If $g:\mathbb R^{\geq 0}\to \mathbb R^{\geq 0}$ is such that $g(0)=0$, continuous, differentiable, $1-1$ and onto, and $g'(0)=1$, then you can define a function: $$f(z) = -g(z) \text{ if } z\geq 0, g^{-1}(-z) \text{ if }z<0$$ Then it turns out that $f(f(z))=z$ for all $z$ and $f$ is differentiable everywhere. (The condition $g'(0)=1$ is required for differentiability at $0$.) So there are a large number of such functions, most of them non-linear. For example, if $g(z)=z^2+z$, then $g(0)=0$ and $g'(0)=1$ and $h(z)=\frac{-1+\sqrt{1+4z}}{2}$ is the inverse, and we can define $f(z)=-g(z)$ when $z\geq 0$ and $h(-z)$ when $z<0$. The general non-trivial solution is to pick an arbitrary $C\in\mathbb R$ and $g$ as above. Then define $$f(z)=\begin{cases}C-g(z-C)&z\geq C\\ C+g^{-1}(C-z)&z<C\end{cases}$$ In particular, $f(z)=C-z$ is an example.
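For the concrete choice $g(z)=z^2+z$ from the answer, the construction is easy to test numerically (a sketch):

```python
import math

def g(z):           # g(0) = 0, g'(0) = 1, increasing on [0, inf)
    return z * z + z

def h(z):           # inverse of g on [0, inf)
    return (-1 + math.sqrt(1 + 4 * z)) / 2

def f(z):
    return -g(z) if z >= 0 else h(-z)

for z in (-3.7, -1.0, -0.2, 0.0, 0.5, 2.0, 10.0):
    assert abs(f(f(z)) - z) < 1e-9, z
print("f(f(z)) = z holds; e.g. f(2) =", f(2.0), "and f(f(2)) =", f(f(2.0)))
```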
H: Is the outer boundary of a connected compact subset of $\mathbb{R}^2$ an image of $S^{1}$? A connected compact subset $C$ of $\mathbb{R}^2$ is contained in some closed ball $B$. Denote by $E$ the unique connected component of $\mathbb{R}^2-C$ which contains $\mathbb{R}^2-B$. The outer boundary $\partial C$ of $C$ is defined to be the boundary of $E$. Is $\partial C$ always a (continuous) image of $S^{1}$? AI: Even in two dimensions this is false. Let $C$ be the closed topologist's sine curve. It is compact and connected. It is also not hard to see that the outer boundary $\partial C$ is again $C$ itself. But $C$ is not the continuous image of $S^1$, since it is not path connected. For a path connected example, consider the "extended closed topologist's sine curve" $C'$ which is the union of $C$ with the segment $[0,1] \times \{1\}$. $C'$ is compact, path connected, and $\partial C' = C'$. But $C'$ is also not the continuous image of $S^1$, as I now argue. Suppose to the contrary that $f : S^1 \to C'$ is continuous and surjective. Let $y_n = (1/(2 \pi n + \frac{3\pi}{2}), -1)$ be the sequence of points at the bottom of the sine curve, and $y = (0,-1)$, so that $y_n \to y$. Also let $U = C' \cap (\mathbb{R} \times (-\infty,0))$ be the open lower half of the sine curve. By surjectivity, for each $n$ there is an $x_n \in S^1$ with $f(x_n) = y_n$. By compactness, the set $\{x_n\}$ has a limit point $x$, and by continuity of $f$, we must have $f(x) = y$. $U$ is open and contains $y$, so $f^{-1}(U)$ is open and contains $x$. Hence we can find an open arc $V$ with $x \in V \subset f^{-1}(U)$. Being open, $V$ contains some $x_k$ (indeed, infinitely many), so $f(V)$ contains $y_k$. To summarize, $V$ contains $x$ and $x_k$, is contained in $f^{-1}(U)$, and is connected. Hence $f(V)$ contains $y$ and $y_k$, is contained in $U$, and is connected. This is absurd since $y$ and $y_k$ are in different components of $U$. Actually, the same argument shows that $C'$ is not the continuous image of any compact, locally connected topological space. Moreover, I just saw this question which asserts that any Hausdorff space which is the continuous image of some compact, locally connected space must again be locally connected.
H: Does $x^2>x^3+1 \implies x < -{1\over P}$? How could one prove that: $$x^2>x^3+1 \implies x < -{1\over P}$$ where $P$ denotes the plastic constant, the unique real root of $x^3-x-1=0$? AI: First note that for $x \geq 0$ the inequality fails: if $0 \le x < 1$ then $x^3+1 \ge 1 > x^2$, and if $x \ge 1$ then $x^3 \ge x^2$, so $x^3+1 > x^2$. Hence any solution is negative. The function $x^2 - x^3 - 1$ is monotone decreasing on $\{x < 0\}$, so we need to look at the (unique) $x < 0$ where $$x^2 = x^3 + 1.$$ But for this $x$, dividing by $x^3$, $$-\frac{1}{x} = -1 - \frac{1}{x^3},$$ so $-\frac{1}{x}$ solves the defining equation for the plastic constant; so $-\frac{1}{x} = P$ and $x = \frac{-1}{P}$.
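A short numerical check (a sketch): the crossover point of $x^2$ and $x^3+1$ is the real root of $x^3-x^2+1=0$, and it coincides with $-1/P$:

```python
import numpy as np

real_roots = lambda cs: [r.real for r in np.roots(cs) if abs(r.imag) < 1e-9]
P = real_roots([1, 0, -1, -1])[0]        # plastic constant: x^3 - x - 1 = 0
x_star = real_roots([1, -1, 0, 1])[0]    # where x^2 = x^3 + 1

print(P, x_star, -1 / P)                 # x_star == -1/P ~ -0.75488
for x in (x_star - 0.1, x_star + 0.1, 0.5, 2.0):
    print(x, x**2 > x**3 + 1)            # True only to the left of x_star
```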
H: Quadrilateral congruency theorem Is there a congruence theorem that says that if three sides of two quadrilaterals are equal, then the two quadrilaterals are congruent? I am grading some homework and a student appealed to such a theorem, but I cannot find it anywhere. I'd like to give them credit if it is the case... AI: I'm sorry, but the student is out of luck. No, there isn't such a theorem, and for good reason. Indeed, even if all four sides of a quadrilateral are equal in length to the sides of another quadrilateral, we cannot conclude the quads are congruent. A simple counterexample suffices: consider an $(a \times a)$ square versus a (non-square) rhombus whose sides are all of length $a$ but do not meet at right angles. All four side lengths agree, yet the square and the flattened rhombus are clearly not congruent.
H: Is the limit of a descending sequence of connected sets still connected? Given a descending sequence of sets $$ F_1\supset F_2\supset\cdots F_n\supset\cdots $$ in which each $F_i$ is connected, I wonder whether the limit set $$ F=\bigcap_{i=1}^\infty F_i $$ is still connected. I believe it is, but cannot find a proof. Can anyone help? Updated: Samuel has shown a counterexample. So now I wonder: can I add some constraints such that the conclusion holds? I ask this problem because when I look up The Princeton Companion to Mathematics, chapter IV.14. Dynamics, section 2.8 The Mandelbrot Set M, there are the following words: It follows from the above that as $t$ approaches zero, the equipotential of potential $t$, together with its interior, gets closer and closer to M: that is, M is the intersection of all such sets. Hence, M is a connected, closed, bounded subset of the plane. I wonder why this argument shows that $M$, the Mandelbrot set, is connected. AI: No. Let $F_n$ be the plane $\mathbb R^2$ minus the ray $\{0\}\times(-\infty,n)$. Added: It is true when all the $F_n$ are compact subsets of $\mathbb R^N$. Suppose otherwise: then there exist open disjoint sets $A,B$ such that $F$ contains points of both $A$ and $B$ and $F$ is contained in $A\cup B$. Now consider $F_n\cap (\partial A)$. Since each $F_n$ is connected, and contains points in both $A$ and $B$, the intersection $F_n\cap (\partial A)$ must be nonempty, and moreover, for $n=1,2,3,\ldots$ it is a decreasing sequence of compact sets, and therefore the intersection of all $F_n\cap (\partial A)$ is nonempty. Contradiction. Thus $F$ is connected.
H: maximal ideal properly contains union of its square with the union of minimal prime ideals One of the first theorems one encounters in the study of commutative algebra is that if $I$ is an ideal of a ring $A$ not contained in any of the prime ideals $P_1,\cdots,P_n$, then $I$ is not contained in $\cup_{i=1}^n P_i$. This is proposition 1.11(i) from Atiyah-MacDonald and exercise 1.6 from Matsumura's commutative ring theory. Now consider the following situation: Let $(R,m)$ be a regular local ring of dimension $n>1$. Let $P_1,\cdots,P_r$ be its minimal prime ideals. Since $\operatorname{dim} R=n>1$, we can not have $m=m^2$, since otherwise we would get $m=0$ by Nakayama's Lemma and $R$ would be a field. Question: why is it true that there exists an $x \in m$ which is not inside $P_1\cup \cdots \cup P_r \cup m^2?$ Remark 1: We can not apply proposition 1.11(i) from Atiyah-MacDonald, since $m^2$ might not be prime. Remark 2: If $m \subset P_1 \cup \cdots \cup P_r \cup m^2$, then $m \subset P_1\cup \cdots \cup P_r + m^2 \Rightarrow m=P_1\cup \cdots \cup P_r + m^2$. Now if $P_1 \cup \cdots \cup P_r$ were an ideal, we could apply NAK and get $m=P_1\cup \cdots \cup P_r$ and we would be done. But $P_1 \cup \cdots \cup P_r$ might not be an ideal. And so what we get is $m = \langle P_1 \cup \cdots \cup P_r\rangle$. But now proposition 1.11(i) is not applicable. Remark 3: In the argument of Remark 2, I used neither the fact that $R$ is regular nor the fact that the $P_i$ are minimal. Reference: Matsumura's Commutative Ring Theory, proof of Theorem 14.3. AI: Are you aware of the more general theorem that has the same statement and conclusion, except that it allows at most two of the $P_i$ to be nonprime? This statement would allow you to draw the conclusion you want. Here is one place it appears. Pretty sure it appears in something like Kaplansky's commutative ring book. I think the proof is similar to the proof of the theorem you cited; it is just "more careful." I see it also appears on page 90 of Eisenbud's Commutative algebra with a view...
H: Convergence - formula There are sequences: $\{x^n\}_{n\in N}$, where $x^n=\langle x^n_1, x^n_2, x^n_3,...\rangle, n=1,2,...$ $x=\langle x_1,x_2,x_3,...\rangle$ How should I write that $x$ is the limit of $x^n$? I use the definition that $lim_{n \rightarrow \infty}x_n=x$ iff $(\forall \epsilon >0)(\exists n_0\in N)(\forall n\geq n_0)(d(x_n,x)<\epsilon)$ (There are some properties of these sequences, but I think they are not important, because I am confused just about how to write down the expression.) I have to use the $d_{\infty}(x^n,x)=sup\{|x^n_k-x_k|:k\in N\}$ metric. I wrote it as $(\forall \epsilon >0)(\exists p_0\in N)(\forall n\geq p_0)(\sup\{|x^n_k-x_k|:k\in N\}<\epsilon)$ but then I saw that the teacher wrote it as $(\forall \epsilon >0)(\exists p_0\in N)(\forall n\geq p_0)(\sup\{|x^k_n-x_n|:k\in N\}<\epsilon)$ and now I don't know which is correct. AI: I think this should make clear what I mean. You may write: $$x_n=\langle x_{n,1}, x_{n,2}, x_{n,3},\dots,x_{n,k},\dots\rangle, n=1,2,...$$ $$x=\langle x_1,x_2,x_3,\dots,x_k,\dots\rangle$$ where $k$ is the position of an element within a sequence and $n$ indexes the $n$-th sequence. $$\dots$$ $$(∀ϵ>0)(∃p_0∈N)(∀n≥p_0)(sup\{|x_{n,k}−x_k|:k∈N\}<ϵ)$$ In this notation your version is the correct one: the supremum runs over the position index $k$, while it is the sequence index $n$ that is sent to infinity. PS: provided I understood correctly what you want to say.
H: For $f$ an analytic function, evaluate $\int_{0}^{2\pi} f(e^{it})\cos t \,\mathrm dt$ Let $f$ be an analytic function. Could anyone tell me how to find the value of $$\int_{0}^{2\pi} f(e^{it})\cos t \,\mathrm dt\ ?$$ I am not able to apply any complex analysis result here; could anyone give me a hint? AI: Make a change of variables, $z = e^{it}$, which maps the interval $[0,2\pi]$ to the unit circle. You get \begin{align} \int_0^{2\pi} f(e^{it})\cos t\,dt &= \int_0^{2\pi} f(e^{it}) \frac{e^{it}+e^{-it}}{2} \,dt \\ &=\int_{|z|=1} f(z) \frac{z+\frac1z}{2} \frac{dz}{iz} \\ &=\frac{1}{2i} \int_{|z|=1} f(z) \frac{z^2+1}{z^2}\,dz \\ &=\frac{1}{2i} \int_{|z|=1} \frac{f(z)}{z^2}\,dz = \pi f'(0) \end{align} assuming $f$ is analytic on a neighborhood of the closed unit disc (using Cauchy's integral theorem and Cauchy's integral formula for $f'$).
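A numerical spot check of the identity (a sketch): with $f=\exp$ we have $f'(0)=1$, so the integral should equal $\pi$:

```python
import numpy as np
from scipy.integrate import quad

f = np.exp                     # entire, with f'(0) = 1

integrand = lambda t: f(np.exp(1j * t)) * np.cos(t)
re, _ = quad(lambda t: integrand(t).real, 0, 2 * np.pi)
im, _ = quad(lambda t: integrand(t).imag, 0, 2 * np.pi)
print(re, im, np.pi)           # ~ (3.14159, 0.0, 3.14159)
```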
H: $\|x -y\|+\|y-z\|=\|x-z\|$ implies $y= a x + b z$ where $a +b =1$ $\|x -y\|+\|y-z\|=\|x-z\|$ implies $y= a x + b z$ where $a +b =1$ Hint: Take $m=x-y$ and $n= y-z$. Does this follow from standard properties of inner product spaces (linearity, symmetry, and positive definiteness)? Or does Cauchy-Schwarz help? This is from online notes from KU. Edit: I then have $\|m\|^2 + \|n\|^2 + 2 \|m\| \cdot \| n \| = \|m + n \|^2$. I have Cauchy Schwarz for products, which term does the inequality factor into? Do I show it both ways to show equality? AI: I will solve it geometrically. Suppose $y$ is the point $A$, $x$ is the point $B$ and $z$ is the point $C$. Then the given relation says that the triangle $ABC$ satisfies $|AB|+|AC|= |BC|$, but this is only possible if $A$ lies on the side $BC$. So $y$ is a convex combination of $x$ and $z$. $\textbf{Another approach:}$ Using the hint, $$\| m\|^2+\| n\|^2+2\| m\|\cdot \| n\|=\| m+n\|^2=\| m\|^2+\| n\|^2+2\langle m,n\rangle$$ So you have $$\langle m,n\rangle =\| m\|\cdot \|n\|$$ Hence equality holds in the Cauchy-Schwarz inequality, so $m$ and $n$ are linearly dependent; since $\langle m,n\rangle\ge 0$, we may write (assuming $y\ne z$, the other case being trivial) $$x-y=r(y-z),\qquad r\ge 0,$$ which implies $$\frac{1}{1+r} x +\frac{r}{1+r}z=y$$
H: Understanding $\sum^n_{i=1}\int_{(i-1)\pi}^{i\pi}|\frac{\sin x}{x}|\,dx\ge\sum^n_{i=1}\frac{1}{i\pi}\int_{(i-1)\pi}^{i\pi}|\sin x|\,dx$ Could someone please explain the following inequality to me? $$\sum^n_{i=1} \int_{(i-1)\pi}^{i\pi} \left|{\frac{\sin x}{x}}\right| \, dx\geq \sum^n_{i=1}\frac{1}{i\pi}\int_{(i-1)\pi}^{i\pi}\left|\sin x\right| \, dx $$ Thank you! AI: For $i=1,\ldots,n$ and $(i-1)\pi\leq x\leq i\pi$ we have $x\leq i\pi$, so $$ \left|\frac{\sin x}{x}\right|=\frac{|\sin x|}{x}\geq \frac{1}{i\pi}\left|\sin x\right| $$
H: “$f$ is a function from $A$ to $B$” vs. “$f$ is a function from $A$ into $B$”? When we say that $f$ is a function from $A$ to $B$, is this different from saying $f$ is a function from $A$ into $B$? I know what injective ("1-1"), surjective ("onto"), and bijective functions are, but is there such a thing as an "into" function? AI: Both expressions say the same thing. But note that saying "$f: A \to B$ is a function from $A$ into $B$" does not imply that $f$ is into but not onto (i.e., it does not rule out that $f$ might be onto, so it is not the "converse" of, or the negation of, the descriptor "onto" or "surjective"). Indeed, it says nothing more, and nothing less, than the alternative: "$f$ is a function from $A$ to $B$." So we can either say that there is no such thing as an "into function" (i.e. it is NOT used to describe some, but not all, functions, nor does it describe particular kinds of functions), or we can say that every function is into (meaning every function $f: A \to B$ is a function from $A$ into $B$).
H: On chain complex morphisms The following seems quite obvious to me. Nevertheless I would like to have another opinion. Suppose $(A_\bullet,d_A)$ and $(B_\bullet,d_B)$ are chain complexes, such that $d_A$ is the trivial differential (i.e. $(d_A)_k(a)=0$ for all $k\in \mathbb{Z}$ and $a\in A_k$) and $d_B$ is not zero in any degree. Then any morphism of chain complexes $f_\bullet:A_\bullet \to B_\bullet $ has to be the zero morphism. Seems obvious at the moment to me. AI: This is not true. Let's just consider the case where $A_{\bullet}$ has a single non-zero piece, say $A_0$. Then to give a morphism $f_{\bullet}$ we just have to give a morphism $A_0 \to B_0$ whose image lies in the kernel of $d_{B,0}$. So unless $d_{B,0}$ is actually injective, there can be non-zero such maps (for a suitable choice of $A_0$).
H: One Step in Proving the Gamma of $1 \over 2$ Good Afternoon All. There is one step in the proof that I never quite understood. Let $I = \int_{0}^{\infty} e^{{-u}^2} du$. Then $$I^2 = \int_{0}^{\infty} e^{{-u}^2} du \int_{0}^{\infty} e^{{-v}^2} dv$$ Now, if I say that since $\int_{0}^{\infty} e^{{-u}^2} du$ is not a function of $v$, I can stack the integral as such $$I^2 = \int_{0}^{\infty} \int_{0}^{\infty} e^{{-u}^2} du (e^{{-v}^2}) dv \space (*)$$ Then because $e^{{-v}^2}$ is not a function of $u$, $$I^2 = \int_{0}^{\infty} \int_{0}^{\infty} e^{{-v}^2}e^{{-u}^2} du dv$$ However, to make such an argument in $(*)$, do I need to first know that $\int_{0}^{\infty} e^{{-u}^2} du$ converges? AI: Yes, you do need to know it converges. But the bound $\exp (-u^2) \le \frac 1{1+u^2}$, which holds for all $u$ since $e^{u^2}\ge 1+u^2$, together with $\int_0^\infty \frac{du}{1+u^2}=\frac\pi2$, is enough to show it.
H: Raising each side of a limit to a power? If you have $\lim_{x\to \infty} f(x)=L$ $\quad$(or as $x \rightarrow c$ for that matter), is it correct to say that $\lim_{x\to \infty} f(x)^{g(x)}=L^{\lim_{x\to \infty}g(x)}$ as long as $L \not= 1$? I'm not looking for a very rigorous response, I am just looking for whether this is a correct technique in general that can help you evaluate a limit. Thanks. AI: I followed the comment discussion where the question arose. I will give a sequence of more or less trivial cases. The easiest case is to ask whether $$ \lim_{n\to \infty} f(x_n)=f(\lim_{n\to \infty} x_n)$$ holds. This is always the case when $f$ is continuous, and might be true even if $f$ is not continuous. The next case is to ask whether $$ \lim_{n\to \infty} f_n(x_n)= \lim_{k\to \infty} f_k (\lim_{n\to \infty }(x_n))$$ holds. Here we need more than just every $f_k$ being continuous; we even need that the limit function is continuous. This is always the case if $f_n$ converges uniformly. In your case, as $$ f(x)^{g(x)} = \exp(\log(f(x))\cdot g(x)),$$ you see that the case where $f(x)$ goes to $1$ surely could cause trouble. Furthermore, negative values of $f(x)$ have to be handled in a different way. In the case where your question comes from, you looked at the sequence of functions $x^n$. This converges pointwise on $[0,1]$, diverges for $x>1$, and converges uniformly on $[0,1-\varepsilon]$ by Dini's Theorem. So whenever the limit is strictly lower than $1$ you may do it.
H: Prove normalizing constant on normal CDF So I know that the CDF of a standard normal will be: $$ \frac{1}{\sqrt{2\pi}}\int_{-\infty}^x e^{ \frac{-z^2}{2}} \, dz $$ How do I show that when I sub in mu and sigma, the equation will become: $$ \frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^x e^{-\frac{(y-\mu)^2}{2\sigma^2}} \,dy $$ The terms inside the integral are easy, but how do I know the sigma outside the integral correctly fixes the normalization? AI: Make the substitution $t=\frac{y-\mu}{\sigma}$. Then $\frac{dy}{\sigma}=dt$. Then from the fact that $\int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}}e^{-t^2/2}\,dt=1$, it follows that $\int_{-\infty}^\infty\frac{1}{\sigma\sqrt{2\pi}}e^{-(y-\mu)^2/(2\sigma^2)} \,dy=1$.
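A quadrature spot check (a sketch; the values of $\mu$ and $\sigma$ are arbitrary):

```python
from math import sqrt, pi, exp, inf
from scipy.integrate import quad

mu, sigma = 1.7, 2.3   # arbitrary parameters

pdf = lambda y: exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))
total, err = quad(pdf, -inf, inf)
print(total)           # ~ 1.0 for any mu and sigma > 0
```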
H: How to calculate the order of commutator of two elements of a group $G$ in terms of their orders? If $G$ is a group, $x,y \in G$, and $[x,y]=x^{-1}y^{-1}xy$ is the commutator of $x$ and $y$, is there a formula to compute $|[x,y]|$ in terms of $|x|, |y|$? AI: No. If $|x|=|y|=2$ then $|[x,y]|$ can have any order $1,2,\ldots,\infty$. Take $G$ dihedral of order $4n$ (with $4\infty = \infty$). Then the commutator of two generating reflections has order $n$. Take $G = \langle a,b : a^2 = b^2 = (ab)^{2n} = 1 \rangle$. Then $[a,b] = a^{-1} b^{-1} a b = a b a b = (ab)^2$ should have order $n$ if the presentation is not “lying” about the order of $ab$. However, the group of symmetries of a regular $2n$-gon satisfies this presentation with $a$ and $b$ “adjacent” reflections, and their product $ab$ a “primitive” rotation. Alternatively, you can start with the reflection $a$ and a primitive rotation $r$. Then define $b=ar$. By a fundamental geometric principle, $b$ is also a reflection, and $ab=r$ has order $2n$. At any rate, $[a,b] = [a,ar]$ is a “double” rotation, so only has order $n$. If $n=\infty$ then $a=(x\mapsto -x)$ is a reflection (over the line $x=0$ if you imagine this in the Euclidean plane) and $b=(x\mapsto 1-x)$ is a reflection (over the line $x=\tfrac12$) and $[a,b]= x \mapsto -x \mapsto 1-(-x) \mapsto -(1-(-x)) \mapsto 1 - ( -(1-(-x))) = x+2$ is a translation by 2, so it has infinite order.
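The dihedral computation is easy to experiment with numerically: model the symmetries of a regular $2n$-gon as maps on $\mathbb Z_{2n}$, with $a: x\mapsto -x$ and $b: x\mapsto 1-x$ as in the answer (a sketch):

```python
def commutator_order(n):
    m = 2 * n                              # vertex set Z_m of the regular 2n-gon
    a = lambda x: (-x) % m                 # reflection; a is its own inverse
    b = lambda x: (1 - x) % m              # "adjacent" reflection
    comm = lambda x: b(a(b(a(x))))         # [a,b] = a^-1 b^-1 a b, applying a first
    cur, k = [comm(x) for x in range(m)], 1
    while cur != list(range(m)):
        cur = [comm(x) for x in cur]       # compose one more copy of [a,b]
        k += 1
    return k

for n in range(1, 8):
    print(n, commutator_order(n))          # order of [a,b] is n in the group of order 4n
```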
H: change of variables using a substitution Let $D$ be the triangle with vertices $(0,0),(1,0)$ and $(0,1)$. Evaluate $$\iint_D \exp\left( \frac{y-x}{y+x} \right) \,dx\,dy$$ by making the substitution $u=y-x$ and $v=y+x$ My attempt Finding the domain, $D$ $$ (x,y) \rightarrow(u,v) $$ $$ (0,0) \rightarrow(0,0) $$ $$ (0,1) \rightarrow(1,1) $$$$ (1,0) \rightarrow(-1,1) $$ Thus the domain is $-1 \le u \le1$ and $0 \le v \le1$ and the jacobian is $-2$ and so the integral I get is $$ \int_0^1 \int_{-1}^1 -2e^{\frac{u}{v}} \,du\,dv $$ this is awfully too hard to integrate and i can't figure out where i have gone wrong AI: You're nearly right, but two things need fixing. First, the limits of integration on $u$: draw the resulting triangle in the $uv$-plane. $u$ only goes from $-1$ to $1$ when $v = 1$; in general, $-v \leq u \leq v$ for $0 \leq v \leq 1$ (draw a horizontal line from the left edge of the triangle to the right, for any $v$ between $0$ and $1$). Second, the Jacobian: what you computed is $\frac{\partial(u,v)}{\partial(x,y)}=-2$, but the factor that enters the integral is $\left|\frac{\partial(x,y)}{\partial(u,v)}\right|=\frac{1}{2}$. Thus, your integral is $$ \int_0^1 \int_{-v}^v \frac12 e^{\frac{u}{v}} \,dudv = \int_0^1 \frac{v}{2}\left(e-e^{-1}\right)dv = \frac{e-e^{-1}}{4}. $$
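A numerical cross-check (a sketch): integrate directly over the triangle and compare with $(e-e^{-1})/4$; the integrand is guarded at the corner $(0,0)$, where it is bounded but undefined:

```python
import math
from scipy.integrate import dblquad

def g(y, x):
    # guard the corner (0,0): the integrand is bounded (|(y-x)/(y+x)| <= 1)
    return math.exp((y - x) / (y + x)) if x + y > 0 else 1.0

# direct integral over the triangle 0 <= x, 0 <= y, x + y <= 1
direct, _ = dblquad(g, 0, 1, lambda x: 0, lambda x: 1 - x)
closed_form = (math.e - 1 / math.e) / 4
print(direct, closed_form)   # both ~ 0.58760
```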
H: If $G$ is a finite group, $H$ is a subgroup of $G$, and $H\cong Z(G)$, can we conclude that $H=Z(G) $? If $G$ is a finite group, $H$ is a subgroup of $G$, and $H\cong Z(G)$, can we conclude that $H=Z(G) $? I learnt that if two subgroups are isomorphic, it is not necessarily true that they behave in the same way with respect to the rest of the group. For example, if $H \cong K $ and both $H,K$ are normal in $G$, then it need not be true that $G/H \cong G/K$. Also, there is a sufficient condition (but not a necessary one, as I read afterwards) for there to be an automorphism that, when restricted to $H$, induces an isomorphism between $H$ and $K$. Now, is this true in this case? Or is the statement about the center and its isomorphic subgroups always true? AI: No. Let me explain why. You should think of $H\cong K$ as a statement about the internal structure of the subgroups $H$ and $K$. The isomorphism shows only that elements of $H$ interact with each other in the same way that elements of $K$ interact with each other. It doesn't say how any of these elements behave with the rest of the group - that is, the external behavior of elements of $H$ with the rest of $G$ may not be the same as the external behavior of elements of $K$ with the rest of $G$. Being the center of a group is a statement about external behavior. If we state that every element of $Z(G)$ commutes with every other element of $G$, then just because $H$ and $Z(G)$ have the same internal structure doesn't mean that every element of $H$ must then commute with every other element in $G$. For an easy counterexample, consider $G=S_3\times \mathbb{Z}_2$. Let $\alpha$ be any of the transpositions $(12)$, $(13)$, or $(23)$ in $S_3$, and let $\beta$ be the generator of the $\mathbb{Z}_2$. Here, by virtue of the direct product, $\beta$ commutes with every other element of $G$, and it is not difficult to see that no other nontrivial element of $G$ has this property, so $\langle\beta\rangle =Z(G)$. On the other hand, we know that all groups of order $2$ are isomorphic, so $\langle \alpha \rangle \cong \langle \beta \rangle$.
H: How to show that $ Ax \le b$ is convex? For $$ A \in \mathbb{R}^{m \times n}, b \in \mathbb{R}^m, c \in \mathbb{R} $$ one has to show that $$ K:= \{ x \in \mathbb{R}^n: Ax \le b \}$$ is convex. Now I'm aware that by definition, a set is convex $ \iff $ for all $x,y \in K, \lambda \in [0,1]$ any point $ \lambda x + (1- \lambda) y$ is again $ \in K$. However, I do not see how to apply that here. Can you give me any directions? Thanks! AI: I suggest you first try the case where $m=n=1$. The general case is exactly the same.
H: Find a subspace $W$ such that ... - am I right? Let $U$ be a subspace of $\mathbb R^4$ spanned by $v_1=(1,-1,1,2)$ and $v_2=(3,1,2,1)$. Find a subspace $W$ of $\mathbb R^4$ such that $U ∩ W = \{{0}\}$ and $\dim(U) + \dim(W) = \dim(\mathbb{R^4})$. Now obviously, $v_1,v_2$ form a basis for $U$ and thus $\dim(U)=2$. Now if I get it right, all I need to do is find two vectors $v_3,v_4$ such that all $4$ of them are linearly independent, thus forming a basis for $\mathbb R^4$ and complying with the constraints. And so for example $v_3=(1,0,0,0)$ and $v_4=(0,1,0,0)$ do the job, as then $\dim(W)=2$ and $\dim(U)+ \dim(W)=4$. right? AI: What you've done is correct. You should provide some justification that the vectors $v_1$, $v_2$, $v_3$, and $v_4$ are indeed independent (and maybe even why this suffices to solve the problem). Also, the problem asked for the subspace $W$; so, your answer should be phrased as "$W$ is the subspace spanned by $v_3$ and $v_4$".
H: $G$ is a tree $\iff G$ contains no cycles, but if you add one edge you create exactly one cycle - is my proof correct? $G$ is a tree $\iff G$ contains no cycles, but if you add one edge you create exactly one cycle Could anyone please be so kind as to check my proof? That would be very much appreciated. Thank you in advance. My proof: '$\rightarrow$' Assume $G$ is a tree. Then by definition it is connected. Furthermore, there is a unique $u, v$-path for every $u,v \text{ in } G$. For arbitrary $u$ and $v$, let this path be $u=u_1\rightarrow u_2\rightarrow \dots \rightarrow u_k=v$. Assume $\nexists$ an edge $e=\{u_i,u_j\}$ for some vertices $u_i, u_j \text{ } (j>i)$ in this path. Now add this edge $e=\{u_i,u_j\}$. Then we can follow the original $u,v$-path until we arrive at $u_j$ and then return to $u_i$ via $e$. Hence this is a cycle. '$\leftarrow$' Assume $G$ contains no cycles but if you add one edge you create exactly one cycle. To prove that $G$ is a tree we only need to prove connectedness, since it is already given that $G$ contains no cycles. Reason by contradiction. Assume that $G$ is not connected. Then $\exists u \exists v \text{ }(u\neq v)$ in $G$ s.t. $\nexists$ an edge $\{u,v\}$. Add this edge. This will not create a cycle. But this contradicts our original assumption. Hence $G$ must be connected, and thus $G$ is a tree. AI: To prove "exactly one" in the $\rightarrow$ direction, suppose the addition of $(u,v)$ created (at least) two cycles. Both must include $(u,v)$ else they were cycles before. However, we now can go from $u$ to $v$ along two different paths in the tree, a contradiction. In the $\leftarrow$ direction, if $G$ is not connected, choose $u,v$ from different components. Then adding $(u,v)$ cannot create a cycle.
H: Power of commutator formula A few people remember a commutator formula of the form $[a,b]^n = (a^{-1} b^{-1})^n (ab)^n c$ where $c$ is a product of only a few commutators (say $n-1$) of them. Here $a,b$ are in a (free) group and $[a,b] := a^{-1} b^{-1} a b$. Does anyone remember such a formula with proof? Some such formula must exist where $c$ is in the commutator subgroup of $\langle a,b\rangle$, but my recollection is that $c$ is a product of something more like $n^2$ commutators. Answers that only work for $n=2$ are less interesting to me. There should be a radical difference for $n \geq 3$. AI: Positive results are fairly nice. I explain them in this note for commutators of powers (see the last pages for pretty pictures and crazy formulas). I didn't get around to the special case of powers of commutators, but Culler and Bavard give definitive results. Negative results: These are just well known true formulas that have “$c$” being way too long, even allowing ourselves to omit longer commutators. If you've never done it, try to write $(ab)^3 = a^3 b^3 c$ and actually get a formula for $c$ only involving commutators (of commutators) of $a$ and $b$. Robinson, page 137, 5.3.5: $(xy)^m = x^m y^m [y,x]^{\binom{m}{2}} \mod \gamma_3$ has $c$ of quadratic length. Hall, Chapters 11 and 12: I don't actually see the formula (!) but the algorithms used to derive the formula and their $\mod \gamma_4$ brethren are there, as well as the application to regular $p$-groups, which says that you can at least choose $c$ to be a product of $n$th powers, but no bound on how many. This is mod a more difficult to describe subgroup, as well. Gorenstein, Chapter 5.6 has the technique (again modulo a hard to describe subgroup, due to the $p$ versus $c$ of $\gamma_c$ relation). It also has applications of the formula in 5.3.9, but not the formula proper. Leedham-Green–McKay, Corollary 1.1.7: $$[x,y]^n = (x^{-1} y^{-1})^n (xy)^n [y,x]^{\binom{n}{2}} [[y,x],x]^{\binom{n}{3}} \cdots$$ $$[y,x]^n = [y^n,x] [[x,y],y]^{\binom{n}{2}} [[[x,y],y],y]^{\binom{n}{3}} \cdots$$ modulo the subgroup generated by commutators containing at least 2 $x$s. Again quadratic (cubic, quartic, etc. if we don't mod out by $\gamma_3$), not linear. Culler's formula is $$[a,b]^3 = [b^a,a^{-1} b^a b^{-1}][bab^{-1},b^2]$$ expressing a product of three commutators as a product of two commutators, which is disturbing. In fact Culler showed that the commutator length of $[a,b]^n$ is less than or equal to $\tfrac{n}{2} + 1$, and Bavard showed one has equality (with the greatest integer less than or equal to $\tfrac{n}{2}+1$). Danny Calegari and Alden Walker have improved the algorithms used in these papers while using the same basic topological idea. I would also mention the diagrams used in these works are on the inside covers of Rotman's group theory textbook. Culler, Marc. “Using surfaces to solve equations in free groups.” Topology 20 (1981), no. 2, 133–145. MR605653 DOI:10.1016/0040-9383(81)90033-1 Bavard, Christophe. “Longueur stable des commutateurs.” Enseign. Math. (2) 37 (1991), no. 1-2, 109–150. MR1115747 DOI:10.5169/seals-58734 Two versus three: $n=2$ is special and has a finite formula with everything sorted: $$(ab)^2 = abab = aab[b,a]b = a^2 b^2 [b,a] [[b,a],b]$$ $n=3$ is more like the rest, and has no finite formula if you try to sort the commutators by weight.
Even the unsorted formula is pretty long: $$\begin{array}{ll} (ab)^3 &= ababab \\ &= aab[b,a]bab \\ &= aab[b,a]ab[b,a]b \\ &= aaba[b,a][[b,a],a]b[b,a]b \\ &= aaab[b,a]^2[[b,a],a]b[b,a]b \\ &= a^3b[b,a]^2 b [[b,a],a] [[[b,a],a],b] [b,a] b \\ &= a^3b^2 [b,a][b,a,b][b,a][b,a,b][[b,a],a] [[[b,a],a],b] [b,a] b \\ &= a^3 b^3 [b,a][b,a,b][b,a,b][b,a,b,b][b,a][b,a,b][b,a,b] \\ &\quad [b,a,b,b][b,a,a][b,a,a,b][b,a,a,b][b,a,a,b,b][b,a][b,a,b] \end{array}$$ Of course if we go mod $\gamma_4$ then we lose all that $[b,a,b,b]$ nonsense and are left with: $$(ab)^3 = a^3 b^3 [b,a]^3 [b,a,b]^5 [b,a,a]^1 \mod \gamma_4$$ The powers on those commutators are called Hall polynomials if you let $n$ vary. $$(ab)^n = a^n b^n [b,a]^{\binom{n}{2}} [b,a,a]^{\binom{n}{3}} [b,a,b]^{2\binom{n}{3}+\binom{n}{2}} \mod \gamma_4$$ has cubic growth. The Hall polynomial for $[b,a,a,\ldots,a]$ is always $\binom{n}{k}$. There are also three variable and higher versions. Commutator length: Actually, the commutator length may be much shorter than the formulas indicate. $$(ab)^3 = a^3 b^3 [ {(ab)}^{-1} b^2, b^{-1} (ab)^2 ]$$ $$(ab)^3 = (ba)^3 [aba,bab]$$ The first can be applied with $a=x^{-1} y^{-1}$ and $b=xy$ to answer the main question. Guided by scallop by Alden Walker and Danny Calegari, I found $$(ab)^4 = a^4 b^4 [a^{-1}b^2, a^{-2}ba][ ba^{-2}ba,(ba)^2b ]$$ I think these shorter formulas lose some of the theoretical importance that the “nicer” formulas had, but I worry this sort of thing will give a positive answer to the question. I continue to be interested in a positive answer. Positive result: To prove Schur's theorem that if $[G:Z(G)]$ is finite then so is $G'$, Ornstein showed this in a way very similar to Culler's formula and the commutator length ideas. The first step was the remembered claim $[a,b]^n = (ba)^{-n} (ab)^n u$ where $u$ is a product of $n-1$ commutators. This follows from induction on $n$ with $n=1$ being clear. $$\begin{array}{ll} [a,b]^n &= [a,b] [a,b]^{n-1} \\ &= [a,b] (ba)^{1-n} (ab)^{n-1} u_{n-2} \\ &= (ba)^{-1}(ab) (ba)^{1-n} (ab)^{n-1} u_{n-2} \\ &= (ba)^{-1}(ba)^{1-n} (ab)^{n-1} (ab) [ (ab), (ba)^{1-n} (ab)^{n-1} ] u_{n-2} \\ &= (ba)^{-n} (ab)^n u_{n-1} \end{array}$$ Thanks to Babak for finding this simple proof. This is used with $n=[G:Z(G)]$ since then $(ba)^{-n} (ab)^n = 1$ because $(ab)^n \in Z(G)$ and $(ab)^n = ((ba)^b)^n = ((ba)^n)^b = (ba)^n$. This gives that $[a,b]^n$ is a product of $n-1$ commutators, rather than $n$. In any product of commutators of minimal length, no commutator appears to the power $n$ or higher. Since $xyx = x^2 y^x$ and the commutator length of a conjugate is the same as the original, we can sort any such expression to bring all copies of a commutator into a power. Hence no commutator appears anywhere in a minimal expression $n$ or more times. Since there are only at most $n^2$ commutators, every element of $G'$ is a product of fewer than $n^3 = [G:Z(G)]^3$ commutators, which is the key finiteness step in Schur's theorem. I suspect the other formulas we have show that fewer than $3[G:Z(G)]^2$ commutators suffice.
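Several of the displayed identities can be machine-checked in a free group. Here is a sketch with sympy's free groups, using this thread's conventions $[x,y]=x^{-1}y^{-1}xy$ and $x^y=y^{-1}xy$; it verifies Culler's two-commutator formula and the $(ab)^3=(ba)^3[aba,bab]$ identity:

```python
from sympy.combinatorics.free_groups import free_group

F, a, b = free_group("a, b")

def comm(x, y):        # [x, y] = x^-1 y^-1 x y
    return x**-1 * y**-1 * x * y

def conj(x, y):        # x^y = y^-1 x y
    return y**-1 * x * y

# Culler: [a,b]^3 as a product of only two commutators
lhs = comm(a, b) ** 3
rhs = comm(conj(b, a), a**-1 * conj(b, a) * b**-1) * comm(b * a * b**-1, b**2)
print(lhs == rhs)      # True

# (ab)^3 = (ba)^3 [aba, bab]
print((a * b) ** 3 == (b * a) ** 3 * comm(a * b * a, b * a * b))  # True
```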
H: Connected, regular, bipartite graph and biconnected component I'm learning for an exam and I can't solve this problem: Prove that every undirected, connected, regular, bipartite graph has only one biconnected component. Give an example which shows that the assumption of bipartition is necessary. Can anyone help? AI: If the degree $d=1$, the only connected $1$-regular graph is $K_2$, whose single edge is its only biconnected component, so the statement holds trivially (a $1$-regular graph on more than two vertices is disconnected). Now assume $d \geq 2$ and suppose, for contradiction, that the graph has $\geq 2$ biconnected components. Then it has a cut vertex $v$. Let $C$ be one of the components of $G-v$, and let $t$ be the number of edges joining $v$ to $C$; since $G$ is connected and the neighbors of $v$ are spread over at least two components of $G-v$, we have $1 \le t \le d-1$. The graph is bipartite, so split $C$ as $C_1 \cup C_2$, where $C_1$ consists of the vertices colored oppositely to $v$ and $C_2$ of those colored like $v$. Every vertex of $C$ keeps all $d$ of its edges inside $C \cup \{v\}$, and every edge with an endpoint in $C$ joins $C_1$ to $C_2 \cup \{v\}$. Counting these edges from each side gives $$d\,|C_1| = d\,|C_2| + t,$$ so $d \mid t$, contradicting $1 \le t \le d-1$. Without the bipartite condition the statement fails: take two copies of the graph obtained from $K_4$ by subdividing one edge (five vertices, of degrees $3,3,3,3,2$) and join the two degree-$2$ vertices by an edge. The result is a connected $3$-regular non-bipartite graph with a bridge, hence with more than one biconnected component.
H: Integrating by using change of variables and by making a substitution Let $D$ be the region bounded by $x=0$, $y=0$, $x+y=1$ and $x+y=4$. Evaluate $$\iint_D \frac{dx\,dy}{x+y}$$ by making the change of variables $x=u-uv$, $y=uv$ My attempt I understand I must first find the domain. $$x=u-uv, y=uv$$ $$x=u-y$$ $$u=x+y$$ and $$v=\frac{u}{y}$$ This gives me the domain $$ (x,y) \rightarrow(u,v) $$ $$ (0,1) \rightarrow(1,1) $$ $$ (1,0) \rightarrow(1,0) $$$$ (0,4) \rightarrow(4,1) $$ $$ (4,0) \rightarrow(4,0) $$ After i graph these points on the new $(u,v)$ graph i get a box and so my limits are $$ 1 \le u \le 4$$ $$0 \le v \le 1 $$ and now I am having trouble finding the jacobian since u is a function of v. Any help would be great. AI: I believe the way you need to re-arrange this is $$x \ = \ u - uv \ = \ u \cdot (1-v) \ , \ y = uv \ \Rightarrow \ x = u - y \ \Rightarrow \ u = x + y $$ $$\Rightarrow \ y = (x + y) \cdot v \ \Rightarrow \ v = \frac{y}{x+y} \ . $$ So $u$ and $v$ are independent variables, and the Jacobian is $$\frac{\partial(x,y)}{\partial(u,v)} \ = \ \det\begin{pmatrix} 1-v & -u \\ v & u \end{pmatrix} \ = \ u(1-v)+uv \ = \ u \ > \ 0 \ ,$$ consistent with the fact that this transformation does not reverse the orientation of the boundary. Hence $dx\,dy = u\,du\,dv$, the factor $\frac{1}{x+y} = \frac{1}{u}$ cancels it, and $$\iint_D \frac{dx\,dy}{x+y} \ = \ \int_1^4 \int_0^1 \frac{1}{u}\cdot u \ dv\,du \ = \ 3 \ .$$
H: Maximum of $ f(x,y) = 1 - (x^2 + y^2)^{2/3} $ For the function $ f(x,y) = 1 - (x^2 + y^2)^{2/3} $ one has to find extrema and saddle points. Without applying much imagination, it is obvious that the global maximum is at $ (0,0)$. To prove that, I set up the Jacobian as $$ Df(x,y) = \left( -\frac{4}{3} x (x^2 + y^2)^{-1/3}, -\frac{4}{3} y (x^2 + y^2)^{-1/3}\right)$$ The only solution for $ Df(x,y) = 0$ yields indeed $(0,0)$. In this point the Hessian is $$ D^2f(0,0) = \left( \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array} \right)$$ which is both positive and negative semi-definite, suggesting that $(0,0)$ is both a minimum and a maximum - can that be? Or is it actually a saddle point? Thanks for your hints! AI: When analyzing this example forget about derivatives and Hessians. It is obvious that the function $$g(r):=1-r^{4/3}\qquad(r\geq0)$$ takes its maximum at $r=0$ and then strictly decreases with increasing $r$. Introducing polar coordinates $(r,\phi)$ in the $(x,y)$-plane your function $$f:\ (x,y)\mapsto 1-(x^2+y^2)^{2/3}$$ appears as $$\tilde f(r,\phi)=g(r)\ .$$ Therefore the graph of $f$ resembles a "paraboloid looking downwards", and we just have the global maximum you spotted right away and no saddle points whatever. Since $f(x,0)=1-x^{4/3}$ $\>(x\geq0)$ the given function is not twice differentiable at $(0,0)$; therefore the Hessian is not even defined there.
H: Prerequisites for understanding Borel determinacy I have just learned that the Gale-Stewart game is determined for open and closed sets, so naturally I'm interested to understand Borel determinacy which seems to be on a totally different level. What's the minimal set of knowledge to understand Borel determinacy without diving into a standard descriptive set theory textbook? I know, or have some exposure to, some basics of game theory, set theory, and mathematical logic at an introductory level. AI: Borel determinacy was proved by Tony Martin. You don't need much if you read the inductive proof, as presented in Kechris's book on Descriptive set theory. Just about the only "technical" notion you need is the concept of tree, which is a collection of finite sequences closed under initial segments, that is, if $\sigma$ is an element of a tree $T$, then $\sigma$ is a finite sequence, and any proper initial segment of $\sigma$ is also an element of $T$. This proof uses the key idea of "unraveling" of games. It proceeds by transfinite induction. The point is that the Borel sets are naturally stratified in a hierarchy of length $\omega_1$. One can form the stratification in several ways. For example: At the first level you have open sets. At level $\alpha$ you have sets that are countable unions of sets at levels before $\alpha$, or their complements (one can refine this a little). What the unraveling does is associate to a Borel set an open set, with an equivalent game, meaning that player I wins the game on the Borel set iff player I wins the game on the open set, and same for player II. This proves determinacy, by the Gale-Stewart result. The technical issue is that the open set is not a subset of the reals, but rather of a much larger space -- something like $\mathcal P^\alpha(\mathbb N)$ if the original Borel set was at level $\alpha$. The superscript $\alpha$ indicates we iterate the power set operation $\alpha$ times. I say "something like" since the precise computation requires a bit more care with the stratification. (One talks of "unraveling" because the associated open game is built up from the original Borel set $X$, its "history" (that is, what sets are used when taking countable unions and complements leading up from open sets to $X$), and the possible (not necessarily winning) strategies of the game on $X$.) This need to look at huge spaces is an essential technicality, which explains the difficulty of the result (it uses replacement in an unavoidable manner). The precise count of how many power sets any proof requires (level by level through the stratification) is a refinement of Martin of the original argument, due to Harvey Friedman. So: If you understand the Gale-Stewart result, and are comfortable with the basic theory of ordinals or transfinite induction, the result should be fairly accessible. Typically, the presentations of the argument use $\omega^\omega$ rather than $\mathbb R$. A bit more descriptive set theory shows that this is not an issue: There is a Borel bijection between $\mathbb R$ and $\omega^\omega$ that sends Borel sets to Borel sets (this is also explained in Kechris's book), and one can check that determinacy is preserved through this transformation. The idea of unravelings can be extended beyond Borel sets. Itay Neeman, for example, showed how to unravel $\Pi^1_1$ sets. That said, the proofs of projective determinacy and beyond proceed differently from the proof of Borel determinacy, and use large cardinals in an essential fashion. Edit, Sep.
24, 2013: Timothy Gowers has written a beautiful five-part series of posts discussing the proof, and motivating how one may go about discovering it: 1, 2, 3, 4, 5.
H: Questions about cosets, conjugate classes etc Some questions about subgroups, normal subgroups, conjugate classes etc, just to make sure I understand it :-) The index of a subgroup $H$ in $G$, written as $[G:H]$ is defined as the number of left cosets of $H$ in $G$. I know that a left coset of $H$ in $G$ is determined by $a$, where $$ aH= \{x\in G ; x=ah, h\in H\}$$ So my questions about this thing are: 1.) Are the left cosets of $H$ in $G$ disjoint? Why? 2.) Can we say that $$G = \bigcup_{x\in G} xH$$so $G$ is the union of all different left cosets in $G$? How can I see/prove this? 3.) I read $ \#G= [G:H]\cdot\#H$. Why is this? Is there any relation between this fact and Lagrange's theorem? 4.) What about the right cosets? I guess the union of all the distinct right cosets is the same set as the union of left cosets? What can one say about the relation between left and right cosets? AI: Questions 1 and 2 Define a relation on $G$ by $x\sim_H y$ if and only if $x^{-1}y\in H$. This is an equivalence relation. Can you prove it? What are the equivalence classes? The class of the element $x$ is the set of all elements $y\in G$ such that $x^{-1}y\in H$ which is easily seen to be $xH$. Thus the equivalence classes are exactly the (left) cosets. This implies, by general theory of equivalence relations, that the cosets are non empty, pairwise disjoint and their union is $G$. Question 3 Define the map $\varphi\colon H\to xH$ by $\varphi(h)=xh$. This is a bijection, therefore all cosets share the same cardinality as $H$ (which is the coset $1H$, by the way). In case $G$ is finite, this has the consequence that $$ |G|=|H|\cdot [G:H]. $$ Just count the elements, recalling that the union of the cosets is $G$. Question 4 Define a relation on $G$ by $x\mathrel{{}_H\!\sim} y$ if and only if $yx^{-1}\in H$. Repeat the same reasoning as above; the only difference is that the equivalence classes are the right cosets. Question 5 If $[G:H]=2$, then we know we have two distinct left cosets: $H$ and $xH$, where $x\in G\setminus H$. But, since $x\in G\setminus H$, the right coset $Hx$ is different from $H$. The cosets (left or right) define a partition of $G$: hence $xH=G\setminus H=Hx$, so any left coset is also a right coset. Question 6 Yes. Question 7 The usual notation is $G/N$. The case $\mathbb{Z}/n\mathbb{Z}$ is just a particular case (the only difference is that the operation is denoted by $+$).
H: Is Gödel's incompleteness theorem provable without any model-theoretic notion? The entry on Gödel's incompleteness theorem in Wikipedia says: Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true,[1] but not provable in the theory (Kleene 1967, p. 250). My understanding of Gödel's theorem derived from the proof sketch in the article above is that it is completely formal (axioms+logical axioms+inference rules) and does not rely on any model-theoretic notion where truth is established (a formal sentence being true iff every interpretation of it in every model is true). But is this correct, that is, are there proofs that use the apparatus of logic but do not use any model theory at all? One piece of evidence that they must is that the quote above says that a statement is true but not provable, and truth is a model-theoretic notion, unless of course there is a notion of truth different from model-theoretic truth? EDIT There is a very similar question here. From the answer there one notices that 'true' is taken in the standard model of the integers. This is not the same as the model-theoretic notion of true taken above. So, the question is refined to: can we dispense with this restricted notion of truth and make the proof completely formal? AI: "A formal sentence being true iff every interpretation of it in every model is true". Not so. A sentence of formal arithmetic may be true on the intended, standard, interpretation while not a logical truth, not true on every interpretation. "Gödel's theorem is completely formal." Not so. The theorem is a bit of ordinary informal mathematics. Its subject-matter is formal proofs in certain kinds of formal theories. But Gödel's theorem is a meta-result about those formal theories, and the usual proofs involve ordinary mathematical reasoning. [I say "ordinary" proofs because there have been projects to produce formal computer verifications of Gödel's theorem: but the theorem pre-dated the formal verifications by decades. And it is a nice question what we learn from having a formal proof in Coq, for example, all 37906 lines of it.] "Gödel's theorem does not rely on any model-theoretic notion." Actually as perhaps Mostowski was the first to make really clear in his exposition, sixty years ago, there are two different versions of the first theorem, one that presupposes that we are dealing with a sound theory, another that assumes only that we are dealing with an $\omega$-consistent theory. These are often referred to as the semantic vs the syntactic versions of the theorem. The semantic version, unsurprisingly, does rely on semantic notions. "Are there proofs that use the apparatus of logic but do not not use any model theory at all?" Yes, some results, including the syntactic version of Gödel's first theorem, are purely proof-theoretic, i.e. about syntax. But again note that a theorem's being about syntax doesn't itself make it "completely formal". (For another example, proofs about the normalizability of formal natural deduction proofs in first-order logic are themselves bits of informal mathematics.)
H: Probability density/mass function I am a bit confused as to the difference between the probability mass function and the probability density function for a distribution of discrete variables. I understand there would be no mass function for a continuous variable distribution, only a density function. But for discrete variable distributions, are there both mass and density functions or are my notes wrong and there is only a mass function? AI: The difference is in how they are evaluated. They are not quite the same thing. In a discrete distribution, you have a finite or countable number of events, and each event occurs with some probability. Therefore, since the set of events $E$ is countable, it makes sense to have a function $p$ that maps the events to their probabilities, $p : E \to [0,1]$. In a continuous distribution, any single value occurs with probability zero. So it doesn't make sense to have a function that, when evaluated, returns the probability of the variable being that value. In other words, for a continuous random variable $X$, $P(X = x) = 0$. What does make sense, however, is a function such that the area underneath it is the probability that the random variable is less than or equal to that value. This is a probability density function. The pdf is so named because it gives you a shape that mimics the density of the distribution of a variable in some domain. In both cases, the analog to area under the curve from the left gives the probability that the random variable is less than or equal to some value. In the discrete case, this is summation of all values to the left. In the continuous case, it is the area under the curve.
H: still trying to figure out details of Newton's method I understand mostly what going on when using Newton's method to approximate some value but there are some details I am still hazy on (like how you come up with the correct $f(x)$ for the update rule). I would appreciate it if someone could help me fill in the details. Here's what I understand so far: The $f(x)$ used for linear approximation seems to be different than the $f(x)$ used in Newton's method. For example, in estimating $\sqrt[3]{67}$, if we know beforehand somehow that $f(x)=x^3-67$, then $f(\sqrt[3]{67}) = 0$ $$ \begin{align} x_{n+1} &\approx x_n - \frac{f(x_n)}{f'(x_n)} \\ & \approx x_n - \frac{ {x_n}^3 - 67}{3{x_n}^2} \\ & \approx 4 - \frac{64-67}{3\cdot 16} \\ & \approx 4 + \frac{1}{16} \end{align} $$ Using linear approximation, we have $f(x) = \sqrt[3]{x}$, different from above. $$ \begin{align} f(x+h) & \approx f(x) + f'(x) \cdot h\\ & \approx \sqrt[3]{x} + \frac {1}{3x^{ \frac{2}{3} } }h\\ & \approx 64^{ \frac{1}{3} } + \frac{(67 - 64)}{3\cdot 64^{\frac{2}{3}}} \\ & \approx 4 + \frac{1}{16} \end{align} $$ To find an $f(x)$ for the update rule, we set our approximation of $f(x+h)$ to zero and solve for $x$. $$ \begin{align} 0 = f(x+h) &\approx f(x) + f'(x) \cdot h \\ -\frac{f(x)}{f'(x)} & \approx h \\ -\frac{f(x)}{f'(x)} & \approx x - a \\ a - \frac{f(x)}{f'(x)} &\approx x \\ 67 - \frac{x^{1/3}}{(3x^{\frac{2}{3}})^{-1}} & \approx \text{ new } f(x) \text { for the update rule}\\ 67 - x^3 & \approx \ ? \end{align} $$ I think I've done something wrong in this last part. The $f(x)$ I came up with is different than the one I was supposed to have guessed earlier, and it seems odd to me that you have to solve the linear approximation to get the $f(x)$ you use in the update rule. What's the proper way to find $f(x)$ for the update rule? What's the advantage from using the update rule? Why not just use linear approximation instead? Also, what do you do when the derivative is too hard to compute? AI: The $f(x)$ is no different. A graphical interpretation of Newton's method helps. You compute the derivative, which is the tangent of the curve. Then, you project this to the x-axis. This is your linear approximation! Recall that $y=ax+b$ is a linear curve, and you compute the slope $a$ as $f'(x_n)$. Now, you need the $y$-axis intercept of this curve. How do we get this? We know that at $x=x_n$, $y=f(x_n)$, because this curve must be tangent to the function. Therefore, we solve $$f(x_n) = f'(x_n)x_n + b \implies b = f(x_n)-f'(x_n)x_n.$$ Now, since we have $b$, which is the $y$-intercept, we can find the $x$ intercept by solving $$0 = ax+b.$$ The value of this $x$ gives us the starting point for the next iteration -- in essence, we are jumping back up to the function from this x-intercept. Newton's method, in short: Draw slope from curve Ski down slope to x-axis Take rocket pack back up to function Repeat until no more fun can be had
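Here is a minimal implementation of the update rule for this example (a sketch), with $f(x)=x^3-67$ and $x_0=4$:

```python
def newton(f, fprime, x0, steps=5):
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)   # ski down the tangent to its x-intercept
        print(x)
    return x

f = lambda x: x ** 3 - 67          # the root of f is the cube root of 67
fp = lambda x: 3 * x ** 2
newton(f, fp, 4.0)                 # first step: 4 - (64-67)/48 = 4 + 1/16 = 4.0625
print(67 ** (1 / 3))               # ~ 4.061548
```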
H: Normal subgroups and factor groups A normal subgroup $N$ is a subgroup where the left cosets are the same as the right cosets. $N$ is normal $\iff $ $xnx^{-1} \in N, \forall x\in G$. 5.) Why is it that if $[G:H]=2 \implies $ $H$ is a normal subgroup? 6.) Can we say that a factor group is just a group that has the left cosets of $N$ (a normal subgroup) as its elements? So if $N$ is a normal subgroup, then the left cosets of $N$ form a group under coset multiplication given by $aNbN = abN$. 7.) The group of left cosets of $N$ in $G$ is called the factor group; why do we denote this by $G/N$? Are these the same things as the integers modulo $n$ groups? How can I relate those exactly? AI: $(5)$ If $\;[G:H] = 2,\,$ then by definition of the index of $H$ in $G$, there exist exactly two distinct left cosets: $H$ and $gH$, for $g \in G \setminus H.\;$ Now, for $g\in G\setminus H$, the right coset $Hg \neq H$. We know the left cosets of $H$ partition $G$, as do the right cosets of $H$. It follows, necessarily, that $gH=G\setminus H=Hg$, so any left coset is also a right coset; hence any subgroup of index $2$ is normal. For $(6)$: Yes, indeed. Spot on. $(7)$ Yes, the usual notation for the factor group (also sometimes called a quotient group) is $G/N$. The integers modulo $n$, $\mathbb{Z}/n\mathbb{Z}$, are simply a particular example of such a group, where $G = \mathbb Z$ and $N = n\mathbb Z$.
H: Proving $\|e^A\|\le e^{\|A\|}$ I'm trying to prove this inequality: $\|e^A\|\le e^{\|A\|}$, where $A$ is a matrix and $\|A\|:=\sup_{|x|=1} |Ax|$. My attempt of solution: Since $e^A:=I+A+A^2/2!+A^3/3!+\ldots$ we have $$\|e^A\|=\|I+A+A^2/2!+A^3/3!+\ldots\|=\sup_{|x|=1}|(I+A+A^2/2!+A^3/3!+\ldots)x|$$ $$=\sup_{|x|=1}|Ix+Ax+(A^2x)/2!+(A^3x)/3!+\ldots| $$ $$\leq \sup_{|x|=1}|Ix| +\sup_{|x|=1}|Ax|+\frac{\sup_{|x|=1}|A^2x|}{2!}+\frac{\sup_{|x|=1}|A^3x|}{3!}+\ldots$$ Am I right so far? I couldn't go further; I need help! Thanks a lot. AI: If you've established a few basic properties (subadditivity, submultiplicativity, and continuity) of the norm, you can do it without having to bring in the $\sup$ manipulation. Namely, for any $n$, we have by subadditivity of the norm that $$ \| I+A+\frac{A^2}{2!}+\cdots +\frac{A^n}{n!} \| \leq \| I\| + \|A\|+\|\frac{A^2}{2!}\|+\cdots +\|\frac{A^n}{n!} \| $$ Then since we know $\| A^2\| \leq \| A\|^2$, we can pull all the exponents out: $$ \| I\| + \|A\|+\|\frac{A^2}{2!}\|+\cdots +\|\frac{A^n}{n!} \| \leq \| I\| + \|A\|+\frac{1}{2!} \|A\|^2 +\cdots +\frac{1}{n!} \|A \|^n $$ Then since the norm is a continuous function, we can actually pass onto the limit as $n\to \infty$ of the above inequality, and obtain $$ \|e^A \| = \|I+A+\cdots \| \leq \| I\| + \|A\|+\frac{1}{2!} \|A\|^2 +\cdots +\frac{1}{n!} \|A \|^n + \cdots = e^{\| A\|} $$
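A quick random-matrix illustration (a sketch; it uses scipy's `expm` and the operator 2-norm):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
for _ in range(5):
    A = rng.standard_normal((4, 4))
    lhs = np.linalg.norm(expm(A), 2)    # ||e^A|| (operator 2-norm)
    rhs = np.exp(np.linalg.norm(A, 2))  # e^{||A||}
    print(lhs <= rhs, lhs, rhs)
```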
H: Explanation of why $\frac{d}{dx} e^x=e^x$ I was taught all the way back in high school that $\dfrac{de^x}{dx}=e^x$ and $\int e^x dx=e^x+C$. Can someone explain why this is the case? AI: If you differentiate any exponential function, you can write something like this: $$ \frac{d}{dx} 2^x = \lim_{h\to0}\frac{f(x+h)-f(x)}{h} = \lim_{h\to0}\frac{2^{x+h} - 2^x}{h} = \lim_{h\to0}2^x\frac{2^h-1}{h}.\tag{1} $$ Now an odd thing: The last expression in ($1$) is equal to $$ 2^x\lim_{h\to0} \frac{2^h-1}{h}\tag{2} $$ because $2^x$ is "constant", but "constant" in this case means not depending on $h$, i.e. $2^x$ does not change as $h$ goes to $0$. But next we will say that the expression ($2$) is $$ (2^x\cdot\text{constant}) $$ where this time "constant" means not depending on $x$, i.e. that last quantity does not change as $x$ changes. In that way we see that $\dfrac{d}{dx}2^x = (2^x\cdot\text{constant})$. But what number is this constant? Letting $f(x)=2^x$ and $f'(x)=(2^x\cdot\text{constant})$, we see that $f'(0)=(2^0\cdot\text{constant})$. Since $2^0=1$, the "constant" is $f'(0)$; it's how fast $f$ is changing at that point. As $x$ goes from $-1$ to $0$ to $1$, $f(x)$ goes from $1/2$ to $1$ to $2$, and so its average rate of change between $-1$ and $0$ is $1/2$ and its average rate of change between $0$ and $1$ is $1$, and so its exact rate of change at $0$ is somewhere between $1/2$ and $1$. If we had considered $4^x$ instead of $2^x$, we would have had a bigger constant. By considering $f(x)$ when $x=-1/2$ and when $x=0$, we can see that $f'(0)>1$. When the base is $2$, the "constant" is somewhere between $1/2$ and $1$. When the base is $4$, the "constant" is bigger than $1$. When the base is $e$ the constant is $1$. That's what's "natural" about $e$. The number $e$ must therefore be somewhere between $2$ and $4$. With more work we can narrow it down to $e=2.71828\ldots$, but this particular method of narrowing it down is not efficient.
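The "constant" is easy to estimate numerically (a sketch): for base $b$ the difference quotients $(b^h-1)/h$ approach $\ln b$, which is $1$ exactly when $b=e$:

```python
import math

for b in (2.0, 4.0, math.e):
    for h in (1e-2, 1e-4, 1e-6):
        print(b, h, (b ** h - 1) / h)
    print("   ln(b) =", math.log(b))   # ~0.693 for 2, ~1.386 for 4, exactly 1 for e
```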
H: Distributing two distinct objects to identical boxes Hi, I've been stuck on a problem which deals with distributing objects of two distinct types, $p$ of one type and $q$ of the other, into three identical boxes. If there were only one type with $q$ copies I'd have used integer partitioning, and if all objects were distinct then I could use Stirling numbers of the second kind. But what about this mixed case...? Thanks! AI: Using Burnside's lemma I get $\frac16{p+2\choose2}{q+2\choose2}+\frac12\lfloor\frac{p+2}2\rfloor\lfloor\frac{q+2}2\rfloor+\frac13N$, where $N=1$ if $p$ and $q$ are both divisible by $3$, and $N=0$ otherwise.
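The formula can be cross-checked against brute force for small $p,q$ (a sketch; each distribution is canonicalized by sorting the three boxes' content pairs, since the boxes are identical):

```python
from fractions import Fraction
from math import comb

def burnside(p, q):
    N = 1 if p % 3 == 0 and q % 3 == 0 else 0
    return (Fraction(comb(p + 2, 2) * comb(q + 2, 2), 6)
            + Fraction(((p + 2) // 2) * ((q + 2) // 2), 2)
            + Fraction(N, 3))

def brute_force(p, q):
    seen = set()
    for a1 in range(p + 1):
        for a2 in range(p - a1 + 1):
            for b1 in range(q + 1):
                for b2 in range(q - b1 + 1):
                    boxes = ((a1, b1), (a2, b2), (p - a1 - a2, q - b1 - b2))
                    seen.add(tuple(sorted(boxes)))   # boxes are identical
    return len(seen)

for p in range(6):
    for q in range(6):
        assert burnside(p, q) == brute_force(p, q), (p, q)
print("formula agrees with brute force for p, q < 6")
```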
H: Calculate $e^{1/4}$ using Maclaurin series with accuracy of 0.001 I have to calculate $\sqrt[4]{e}$ with a deviation of less than $0.001$. I was guided to use the Maclaurin series to solve this exercise. So I've written down the series for $e^x$, and now I don't have any idea how to proceed. Any help? Thanks in advance. AI: As you may know, $$e^{1/4} = \sum_{k=0}^{\infty} \frac{1}{4^k\,k!}$$ The error in truncating the sum after the term $k=N$ is roughly the size of the first omitted term; that is, $$\text{error} \approx \frac{1}{4^{N+1} (N+1)!}$$ so find an $N$ such that $\frac{1}{4^{N+1} (N+1)!} \lt 0.001$ I get $N=3$; that is, the first 4 terms of the sum (those with $k=0,\dots,3$) are accurate to within that error. To elaborate: $$e^{0.25} \approx 1.28403$$ $$\sum_{k=0}^{3} \frac{1}{4^k\,k!} \approx 1.28385$$ $$\text{error} \approx 0.000171$$
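The truncation bound is easy to watch in code (a sketch):

```python
from math import exp, factorial

target = exp(0.25)
s = 0.0
for k in range(7):
    s += 1 / (4 ** k * factorial(k))
    print(k, s, abs(target - s))
# after k = 3 (four terms) the error is already ~1.7e-4 < 0.001
```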
H: How many integer solutions to this 5 integers equation? Ref to the question in Unusual 5th grade problem, how to solve it: Find a positive integer solution $(x,y,z,a,b)$ for which $$\frac{1}{x}+ \frac{1}{y} + \frac{1}{z} + \frac{1}{a} + \frac{1}{b} = 1\;.$$ Here is my question: How many solutions does this equation have? There was a C++ program for the answers with denominators under 100. But you can easily find answers outside 100, such as [x:5, y:220, z:4, a:2, b:22]. I can find 2477 answers with denominators <= 1806. When we change the upper limit, are there more answers? Is the total number of solutions limited? I guess so, but am not able to prove it. Can you prove it? AI: The total number of solutions is limited. In fact, the largest possible value for the variables would arise from the greedy algorithm (choosing the largest possible $\frac{1}{n}$ each time), which gives $$ 1 =\frac{1}{2} + \frac{1}{3} + \frac{1}{7} + \frac{1}{43} + \frac {1}{ 1806}$$ Hence, you're done (assuming that your calculations are correct). As to how to show that the greedy algorithm yields the largest possible value, read up on the Sylvester Sequence.
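The finiteness argument turns directly into a complete search (a sketch): at each level the next denominator is bounded above, so the recursion below terminates. It counts nondecreasing solutions, a different tally from the 2477 mentioned in the question, and it should report 1806 as the largest denominator that ever occurs:

```python
from fractions import Fraction

def search(remaining, k, start, sol, out):
    # find start <= x_1 <= ... <= x_k with 1/x_1 + ... + 1/x_k == remaining
    if remaining <= 0:
        return
    if k == 1:
        if remaining.numerator == 1 and remaining.denominator >= start:
            out.append(sol + [remaining.denominator])
        return
    x = max(start, int(Fraction(1) / remaining) + 1)
    while Fraction(k, x) >= remaining:   # k terms of size <= 1/x must still reach remaining
        search(remaining - Fraction(1, x), k - 1, x, sol + [x], out)
        x += 1

sols = []
search(Fraction(1), 5, 2, [], sols)
print(len(sols))                  # nondecreasing solutions (expect 147, cf. OEIS A002966)
print(max(max(s) for s in sols))  # 1806, matching the greedy/Sylvester bound
```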
H: Complex Analysis Question from Stein The question is #$14$ from Chapter $2$ in Stein and Shakarchi's text Complex Analysis: Suppose that $f$ is holomorphic in an open set containing the closed unit disc, except for a pole at $z_0$ on the unit circle. Show that if $$\sum_{n=0}^\infty a_nz^n$$ denotes the power series expansion of $f$ in the open unit disc, then $$\lim_{n\to\infty}\frac{a_n}{a_{n+1}}=z_0.$$ I've shown that we can take $z_0=1$ without a loss of generality, but I'm having trouble showing the proof otherwise. One of the problems I'm having is because we aren't told the definition of a pole except that it is a place where the function isn't holomorphic. Disregarding this fact, the other problem I'm running into is that I don't know the order of the pole. Making some additional assumptions, including that the pole is simple so we can write $F(z)=(z-1)f(z)$ as a holomorphic function, we see that $$F(z)=-a_0+z(a_0-a_1)+z^2(a_1-a_2)+\cdots$$ This almost gets me to the end with these added assumptions, but I don't think it's quite enough (why do we know some of the $a_i$'s aren't $0$, for example). On another note, if we know $\lim_{n\to\infty}\frac{a_n}{a_{n+1}}$ exists, then it is easy to see that $\lim_{n\to\infty}\frac{|a_n|}{|a_{n+1}|}=1=|z_0|$; I, however, do not see why the limit must exist. Are there any hints that someone can provide? Even a solution would be nice, especially if one can avoid making any assumptions about what a pole is or is not. EDIT: So there isn't any confusion, I know the definition of a pole and I'm inclined to believe that the problem, as stated, necessarily has a pole at $z_0$. The problem is that the exercise is in Chapter $2$, and poles are introduced in Chapter $3$. AI: I will assume we can write $$f(z) = \frac{c}{z_0-z} + \sum_{n=0}^{\infty} b_n z^n$$ for some value of $c \ne 0$, and $\lim_{n \to \infty} b_n = 0$. Then $$f(z) = \sum_{n=0}^{\infty} a_n z^n$$ where $$a_n = b_n + \frac{c}{z_0^{n+1}}$$ Then $$\begin{align}\lim_{n \to \infty} \frac{a_n}{a_{n+1}} &= \lim_{n \to \infty} \frac{\displaystyle b_n + \frac{c}{z_0^{n+1}}}{\displaystyle b_{n+1}+ \frac{c}{z_0^{n+2}}}\\ &= \lim_{n \to \infty} \frac{\displaystyle \frac{c}{z_0^{n+1}}}{\displaystyle \frac{c}{z_0^{n+2}}}\\ &= z_0\end{align}$$ as was to be shown. Note that the second step above is valid because $z_0$ is on the unit circle. For a nonsimple pole, we may write $$f(z) = \frac{c}{(z_0-z)^m} + \sum_{n=0}^{\infty} b_n z^n$$ for $m \in \mathbb{N}$. It might be known that $$(1-w)^{-m} = \sum_{n=0}^{\infty} \binom{m-1+n}{m-1} w^n$$ Then $$\lim_{n \to \infty} \frac{a_n}{a_{n+1}} = \lim_{n \to \infty} \frac{n+1}{n+m} z_0$$ EDIT @TCL observed that we can simply require that $b_n z_0^n$ goes to zero as $n \to \infty$. Then for a simple pole $$\lim_{n \to \infty} \frac{a_n}{a_{n+1}} = \lim_{n \to \infty} \frac{\displaystyle b_n z_0^n + \frac{c}{z_0}}{\displaystyle b_{n+1} z_0^n + \frac{c}{z_0^2}}$$ which you can see goes to $z_0$.
H: If $H$, $K$ are characteristic subgroups of $G$, when is $\operatorname{Aut}(H\times K) = \operatorname{Aut}(H) \times \operatorname{Aut}(K)$ true? If $H$, $K$ are characteristic subgroups of $G$, when is $\operatorname{Aut}(H\times K) = \operatorname{Aut}(H) \times \operatorname{Aut}(K)$ true? What if $H$, $K$ are not characteristic subgroups? In general, under what conditions is it true that $\operatorname{Aut}(H\times K) = \operatorname{Aut}(H) \times \operatorname{Aut}(K)$, for subgroups $H$, $K$ of $G$? AI: With no restriction on the two subgroups $H,K$ of $G$ other than finiteness: if $(|H|,|K|)=1$ then $$Aut(H\times K)=Aut(H)\times Aut(K)$$ Taking $H=K=\mathbb Z_p$ — so that $(|H|,|K|)=p>1$ — shows the property can fail: $Aut(\mathbb Z_p\times\mathbb Z_p)\cong GL(2,\mathbb F_p)$, which is strictly larger than $Aut(\mathbb Z_p)\times Aut(\mathbb Z_p)$. Sketch of the coprime case, with $G=H\times K$: since $(|H|,|K|)=1$, one can check that $H$ is exactly the set of elements of $G$ whose order divides $|H|$ (and similarly for $K$), so both are characteristic in $G$; hence every $f\in Aut(G)$ restricts to $$f_H=f|_H\in Aut(H),~~f_K=f|_K\in Aut(K)$$ Conversely, given $$f_H\in Aut(H),~~f_K\in Aut(K)$$ define $f=f_H\times f_K:HK\to HK$ by the rule $f(hk)=f_H(h)f_K(k)$; then $f\in Aut(G=HK)$. Now check that $$\psi:Aut(H)\times Aut(K)\to Aut(G),~~\psi(f_H,f_K)=f_H\times f_K\\\\ \phi: Aut(G)\to Aut(H)\times Aut(K),~~\phi(f)=(f_H, f_K)$$ are inverse to each other.
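A tiny brute-force check of the $\mathbb Z_p\times\mathbb Z_p$ counterexample for $p=2$ (a Python sketch of my own, representing the Klein four-group as bit pairs under XOR):

```python
from itertools import permutations

G = [(0, 0), (0, 1), (1, 0), (1, 1)]          # Z_2 x Z_2
op = lambda a, b: (a[0] ^ b[0], a[1] ^ b[1])  # componentwise addition mod 2

count = 0
for image in permutations(G):
    f = dict(zip(G, image))
    if all(f[op(a, b)] == op(f[a], f[b]) for a in G for b in G):
        count += 1

# Z_2 has only the trivial automorphism, so |Aut(Z_2) x Aut(Z_2)| = 1,
# while Aut(Z_2 x Z_2) = GL(2, F_2) has order 6:
print(count)  # prints 6
```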
H: Intersection of the $p$-Sylow and $q$-Sylow subgroups of a group $G$ What can we say about the intersection of a $p$-Sylow and a $q$-Sylow subgroup of a group $G$? It is not necessary that $p=q$. Are there general statements about intersections of Sylow subgroups? Answers under restricted conditions are also welcome! AI: Let $R = P\cap Q$, where $P\leq G$ is a $p$-Sylow subgroup and $Q\leq G$ is a $q$-Sylow subgroup, with $p\neq q$. Then any $r \in R$ has order that is simultaneously a power of $p$ and a power of $q$. What must $R$ "look like"? The same cannot be said for the case $p = q$, where the intersection of two distinct Sylow $p$-subgroups can be nontrivial.
H: Why does this limit not exist? Working through some limit exercises. The answer sheet says the limit below does not exist. Is this correct? Shouldn't it be $-\infty$? $$\lim_{x \to 0^+} \left( \frac{1}{\sqrt{x^2+1}} - \frac{1}{x} \right)\ \ \ $$ AI: It is true that as $x$ approaches $0$ through positive values, $\frac{1}{\sqrt{1+x^2}}-\frac{1}{x}$ becomes large and negative. However, some people do not allow $\infty$ or $-\infty$ as answers to a limit problem. As a simpler example, some would say that $\lim_{x\to\infty}x^2$ does not exist, and some would say $\lim_{x\to\infty} x^2=\infty$. Remark: Mathematical English has dialects. When you answer a question, it may be necessary to conform to the local dialect. (On a test, I would accept either answer if proper justification were given, but cannot guarantee that someone else would.)
H: Smooth retraction onto a differentiable manifold Let $M\subset\mathbb{R}^n$ be a smooth $k$-dimensional differentiable manifold (by which I mean that it is locally diffeomorphic to an open set in $\mathbb{R}^k$). Let us suppose $M$ compact for simplicity. How can one prove that there exists an open set $U\supset M$ and a smooth retraction of $U$ onto $M$? I heard that it has to do with the smooth dependence on initial conditions of the solution of an ODE. Can somebody shed light on this? (Remark: clearly $U$ can't be arbitrary, since for example $\mathbb{R}^2$ does not retract onto $S^1$.) AI: The important concept here is "tubular neighborhoods." These are embeddings of a particular vector bundle (the normal bundle of $M$) as an open subset of the ambient manifold, and they carry the zero section to $M$. In the case where the ambient manifold is not Euclidean space, the construction uses the exponential map, which may be why you heard about ODEs in connection with this. But I think that in the Euclidean case, the construction requires only some simple point-set arguments and the inverse function theorem. You can find this construction in lots of manifold theory books. In "Topology and Geometry" by Bredon it's on page 93. I think it's also in the newest edition of Lee's "Introduction to Smooth Manifolds".
H: Evaluating $\lim\limits_{x\to0}\frac{1-\cos(x)}{x}$ $$\lim_{x\to0}\frac{1-\cos(x)}{x}$$ Could someone help me with this trigonometric limit? I am trying to evaluate it without l'Hôpital's rule or derivatives. AI: Multiply by the conjugate: \begin{align} \lim_{x\to 0} \frac{1-\cos x}{x} &= \lim_{x\to 0} \frac{(1-\cos x)(1+\cos x)}{x(1+\cos x)} = \lim_{x\to 0} \frac{1-\cos^2 x}{x(1+\cos x)} \\ &= \lim_{x\to 0} \frac{\sin^2 x}{x(1+\cos x)} = \lim_{x\to 0} \frac{\sin^2 x}{x^2}\cdot\lim_{x\to 0}\frac{x}{1+\cos x} \\ &= 1\times 0 = 0. \end{align} Or, if l'Hôpital's rule were allowed, you could simply apply it: $$\lim_{x\to 0} \frac{1-\cos x}{x}=\lim_{x\to 0} \frac{(1-\cos x)'}{x'}=\lim_{x\to 0} \frac{\sin x}{1}=0$$
H: Positive integers a and b such that $\sqrt{7-2\sqrt{a}}=\sqrt{5}-\sqrt{b}$ There exist positive integers $a$ and $b$ such that $\sqrt{7-2\sqrt{a}}=\sqrt{5}-\sqrt{b}$. How am I supposed to find them? I squared both sides, but it turned out messy. Help appreciated! AI: After squaring both sides you should get: $$7-2\sqrt{a}=(5+b)-2\sqrt{5b}$$ Matching the rational parts and the surd parts on the two sides gives a system of equations: \begin{cases} 7=5+b\\ a=5b \end{cases} $\therefore b=2,$ $a=5b=10$. Check: $(\sqrt5-\sqrt2)^2=7-2\sqrt{10}$, as required.
H: If $m+n=5$ and $mn=3$, find $\sqrt{\frac{n+1}{m+1}} + \sqrt{\frac{m+1}{n+1}}$? It is known that $m+n=5$ and $mn=3$. So what is the value of: $$ \sqrt{\dfrac{n+1}{m+1}} + \sqrt{\dfrac{m+1}{n+1}} $$ I think we're supposed to solve the system of equations first, but I'm not getting any results that are useful. AI: \begin{align} \sqrt{\frac{n+1}{m+1}}+\sqrt{\frac{m+1}{n+1}}&=\frac{(n+1)+(m+1)}{\sqrt{(m+1)(n+1)}} \\ \\ &=\frac{m+n+2}{\sqrt{mn+m+n+1}} \\ \\ &=\frac{5+2}{\sqrt{3+5+1}} \\ \\ &=\frac{7}{\sqrt{9}}=\frac{7}{3} \end{align}
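For a quick numerical sanity check (a Python sketch, taking $m,n$ to be the roots of $t^2-5t+3=0$, one concrete pair with $m+n=5$ and $mn=3$):

```python
import math

# m and n are the roots of t^2 - 5t + 3 = 0, so m + n = 5 and mn = 3.
m = (5 + math.sqrt(13)) / 2
n = (5 - math.sqrt(13)) / 2
value = math.sqrt((n + 1) / (m + 1)) + math.sqrt((m + 1) / (n + 1))
print(value, 7 / 3)  # both print ~2.333333
```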
H: Proving that $\frac{1}{\sqrt{1}} + \frac{1}{\sqrt{2}} + \cdots + \frac{1}{\sqrt{100}} < 20$ How am I supposed to prove that $$\frac{1}{\sqrt{1}} + \frac{1}{\sqrt{2}} + \cdots + \frac{1}{\sqrt{100}} < 20\;?$$ Should I estimate it the way we compute $1+2+ \cdots+100$? Something like: since $1/5050 \lt 20$, the sum is indeed less than $20$? AI: We can estimate the sum by integration: the inverse square root function is strictly decreasing, hence for $n\in \mathbb{N}$ $$ \frac{1}{\sqrt{n}}< \frac{1}{\sqrt{x}} \quad \text{ for } x\in (n-1,n). $$ Integrating both sides over this interval: $$ \frac{1}{\sqrt{n}}= \int^n_{n-1}\frac{1}{\sqrt{n}}\,dx< \int^{n}_{n-1}\frac{1}{\sqrt{x}}\,dx. $$ Therefore $$ \frac{1}{\sqrt{1}} < \int_0^1 \frac{1}{\sqrt{x}}\,dx \\ \frac{1}{\sqrt{2}} < \int_1^2 \frac{1}{\sqrt{x}}\,dx \\ \cdots \\ \frac{1}{\sqrt{100}} < \int_{99}^{100} \frac{1}{\sqrt{x}}\,dx $$ and, summing, $$ \frac{1}{\sqrt{1}} + \frac{1}{\sqrt{2}} + \cdots + \frac{1}{\sqrt{100}} < \int_0^{100}\frac{1}{\sqrt{x}}\,dx =2\sqrt{100}=20. $$
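As a numeric cross-check (a Python one-off; the exact value of the sum is of course not needed for the proof):

```python
import math

s = sum(1 / math.sqrt(k) for k in range(1, 101))
bound = 2 * math.sqrt(100)  # the integral bound derived above
print(s, bound)  # ~18.59 < 20
```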
H: Simplify a recursive function in Maple I have the following problem. Out of the runtime analysis of a divide and conquer algorithm I got the following formula for the necessary flops: flops(n) := (89+1/3)*n^3 + 2*flops(n/2) and flops(1) := 0 I want to sum it up and remove the recursion with Maple, but I cannot get it working. Every time Maple complains: Error, (in flops) too many levels of recursion How can this be done? AI: You have too much recursion because your code assumes n to be even for the stopping criterion to work. If you call your flops count function with n equal to 3, it will call the function with argument 3/2 and your recursion will never end. You could replace your function with flops(n) := (89+1/3)*n^3 + 2*flops(floor(n/2)) and use the base case flops(0) := 0 But this will only work if you use numbers, not some arbitrary symbolic n. If you want to solve the recurrence equation, you should use the rsolve command: rsolve({flops(n) = (89+1/3)*n^3 + 2*flops(n/2), flops(1)=0}, flops(n)); which will give you $\frac{1072}{9}n(n^2-1)$ as an answer to your problem.
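An independent check of the rsolve output, outside Maple (a Python sketch of my own using exact rationals; note $89+\tfrac13=\tfrac{268}{3}$), verifying that the closed form satisfies the recurrence along powers of two, where the exact halving applies:

```python
from fractions import Fraction

def flops_closed(n):
    # Closed form returned by rsolve: (1072/9) * n * (n^2 - 1)
    return Fraction(1072, 9) * n * (n * n - 1)

assert flops_closed(1) == 0
for k in range(1, 12):
    n = 2 ** k
    assert flops_closed(n) == Fraction(268, 3) * n ** 3 + 2 * flops_closed(n // 2)
print("closed form satisfies flops(n) = (268/3) n^3 + 2 flops(n/2) for n = 2 .. 2^11")
```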
H: Question on a symmetric inequality Let $a, b, c$ be positive real numbers satisfying $a^2+b^2+c^2+2abc=1$. How can I prove that $a+b+c\le \dfrac{3}{2}$ ? AI: Since $a,b,c$ are positive and satisfy the constraint, each lies in $(0,1)$, so we may write $$a=\cos{A},b=\cos{B},c=\cos{C},\qquad A+B+C=\pi$$ with $A,B,C$ the angles of an acute triangle. (Why this is possible: pick acute $A,B$ with $\cos A=a$, $\cos B=b$; the constraint forces $a^2+b^2<1$, so the quadratic $t^2+2abt+(a^2+b^2-1)=0$ has exactly one positive root, and both $c$ and $\cos(\pi-A-B)=\sin A\sin B-ab$ are that root.) The substitution works because of the known identity $$\cos^2{A}+\cos^2{B}+\cos^2{C}+2\cos{A}\cos{B}\cos{C}=1$$ valid whenever $A+B+C=\pi$. Then, by the standard inequality for the angles of a triangle, $$a+b+c=\cos{A}+\cos{B}+\cos{C}\le\dfrac{3}{2}$$ with equality for the equilateral triangle, i.e. $a=b=c=\frac12$.
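A numerical sanity check of the direction of the inequality (a Python sketch; given $a,b$ with $a^2+b^2<1$, the constraint is a quadratic in $c$ whose unique positive root is $c=\sqrt{(1-a^2)(1-b^2)}-ab$, as noted above):

```python
import math, random

random.seed(0)
worst = 0.0
for _ in range(100_000):
    a, b = random.random(), random.random()
    if a * a + b * b >= 1:
        continue  # no positive c satisfies the constraint in this case
    c = math.sqrt((1 - a * a) * (1 - b * b)) - a * b  # the positive root
    assert abs(a * a + b * b + c * c + 2 * a * b * c - 1) < 1e-12
    worst = max(worst, a + b + c)
print(worst)  # stays below 1.5; the supremum is approached at a = b = c = 1/2
```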
H: closure of open set in topological space How to prove: If $X$ is a topological space, $U$ is open in $X$, and $A$ is dense in $X$, then $\overline{U}=\overline{U \cap A}$? AI: Note that $$ \overline U = \underset{ \mathcal F \text{ closed}}{ \bigcap_{U \subseteq \mathcal F} }\mathcal F, \quad \overline {U \cap A} = \underset{ \mathcal F \text{ closed}}{ \bigcap_{U \cap A \subseteq \mathcal F} }\mathcal F, $$ so to see that $(\subseteq)$ and $(\supseteq)$ hold, it suffices to see that $U \subseteq \mathcal F$ if and only if $U \cap A \subseteq \mathcal F$ when $\mathcal F$ is closed, $U$ is open and $A$ is dense in $X$. One direction is clear because $U \cap A \subseteq U$. For the other one, note that if $U \cap A \subseteq \mathcal F$, then $A \subseteq \mathcal F \cup U^c$, and since $U$ is open, $\mathcal F \cup U^c$ is closed, hence $X = \overline A \subseteq \mathcal F \cup U^c$ and $\mathcal F \cup U^c = X$, which means $U \cap \mathcal F^c = \varnothing$, thus $U \subseteq \mathcal F$. Hope that helps,
H: Simple ODE question Suppose I have an ODE $$f(y'',y',y,x) = 0.$$ Does the domain of $x$ on which $f$ is defined have anything to do with the domain on which a solution $y = g(x)$ satisfies the equation? Knowing, a priori, the domain of $x$ where $f(y'',y',y,x)$ is defined, do I know in advance something about the domain of the solution $y = g(x)$ that satisfies it? Is there any relation between those domains? Thanks in advance AI: In general, no. Take $$ y' = 1 + y^2, \; \; y(0) = 0. $$ You get $$ y = \tan x, \; \; - \pi / 2 < x < \pi / 2. $$ Here the right-hand side is defined for every $x$ (and every $y$), yet the maximal solution exists only on $(-\pi/2,\pi/2)$: solutions can blow up in finite time well inside the domain where the equation itself makes sense.
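One can watch the blow-up numerically (a minimal Python RK4 sketch of my own; the step size and the $10^6$ cutoff are arbitrary choices):

```python
import math

def f(y):
    return 1 + y * y  # right-hand side of y' = 1 + y^2 (autonomous)

x, y, h = 0.0, 0.0, 1e-4
while y < 1e6:
    # Classical fourth-order Runge-Kutta step.
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    x += h
print(x, math.pi / 2)  # the solution explodes just short of x = pi/2
```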
H: How many even numbers in a sequence are there? How many even numbers are among the following? $$\binom{k}{0},\binom{k}{1},\binom{k}{2},\ldots,\binom{k}{k}$$ My guess (which I know is wrong): all of them are odd iff $k$ is odd, and if $k$ is even then $\binom{k}{2i}$ is odd and $\binom{k}{2i+1}$ is even. AI: From the Wikipedia article on Pascal's Triangle: Parity: To count odd terms in row $n$, convert $n$ to binary. Let $x$ be the number of ones in the binary representation. Then the number of odd terms will be $2^x$. Consequently, the number of even terms in your list is $(k+1)-2^x$. (Note that your guess fails already for $k=5$: the row $1,5,10,10,5,1$ contains even entries.) A concise proof can be found here.
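A short Python check of the formula against direct counting (standard library only):

```python
from math import comb

def even_entries(k):
    # Row k of Pascal's triangle has 2^(popcount of k) odd entries,
    # hence (k + 1) - 2^popcount(k) even ones.
    return (k + 1) - 2 ** bin(k).count("1")

for k in range(32):
    direct = sum(1 for i in range(k + 1) if comb(k, i) % 2 == 0)
    assert even_entries(k) == direct
print("formula matches direct counting for k < 32")
```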