H: Prove an equation has no rational root. Prove that for all $n>1$, the equation $\sum _{k=1}^n \frac{x^k}{k!}+1=0$ has no rational root. I'm not sure whether the summand includes the $+1$, since there are no braces after the sigma. My thought is to prove the polynomial is irreducible over the rational field. AI: Multiplying by $n!$ we can make all coefficients integers. By the rational root theorem, any rational root has to be an integer dividing $n!$. Moreover, since all coefficients except the leading one are divisible by $n$, $x^n$ is divisible by $n$. Take any prime divisor $p$ of $n$. Then $p$ divides $x$, and it is enough to show that $x^k\frac{n!}{k!}$ is divisible by a higher power of $p$ than $n!$ is, for each $1\le k\le n$. The power of $p$ dividing $k!$ is less than $k/p+k/p^2+\cdots=\frac{k}{p-1}\le k$ (since $p\ge 2$), while $p^k$ divides $x^k$; hence $v_p\!\left(x^k\frac{n!}{k!}\right)\ge k+v_p(n!)-v_p(k!)>v_p(n!)$, and the result follows.
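A brute-force sanity check of the claim for small $n$ (a sketch in Python; the function name and the naive divisor search are mine):

    from math import factorial

    def has_rational_root(n):
        # Multiplied by n!, the equation reads: sum_{k=1}^{n} (n!/k!) x^k + n! = 0.
        # By the rational root theorem any rational root is an integer dividing n!.
        const = factorial(n)
        coeffs = {k: factorial(n) // factorial(k) for k in range(1, n + 1)}
        for d in range(1, const + 1):
            if const % d:
                continue
            for r in (d, -d):
                if sum(c * r ** k for k, c in coeffs.items()) + const == 0:
                    return True
        return False

    print([has_rational_root(n) for n in range(2, 8)])  # expect all False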
H: Using Maclaurin series when solving a multi-variable limit I need to determine whether the following function has a limit at $(x,y)=(0,0)$: $$\lim_{(x,y)\to(0,0)}\frac{e^{x(y+1)}-x-1}{\sqrt{x^2+y^2}}$$ In order to simplify the expression, can I use the Maclaurin series of $e^{x(y+1)}$? If so, it equals $e^{x(y+1)}=1+x(y+1)+o(x(y+1))$, and substituting it into the expression simplifies it to $\lim_{(x,y)\to(0,0)}\frac{xy+o(x(y+1))}{\sqrt{x^2+y^2}}$, and using polar coordinates I can prove the limit equals $0$. Is the above correct? Can I use the Maclaurin series? AI: Note that $xy + o(x(y+1)) = xy + o(xy) + o(x) = o(x)$ so it suffices to show $\frac{o(x)}{\sqrt{x^2+y^2}} \to 0$ as $(x,y)\to(0,0)$. This is immediate since $|x| = \sqrt{x^2} \leq \sqrt{x^2+y^2}$. (That is, $o(x) \subseteq o(\sqrt{x^2+y^2})$.)
H: Find the value which satisfies these equations $$C = 0.6Y +50$$ $$I=10$$ We want to find the $Y$ for which $Y=C+I$. This is a question in a chapter about discrete dynamic models, so we have to use an appropriate method. I tried to rewrite it and ended up with $y = 1\dfrac{2}{3} C - 73 \dfrac{1}{3}$, but that was a dead end. AI: Not really familiar with DDMs, but algebra says: $$ \begin{align} Y &= C + I \\ \implies Y &= 0.6Y + 50 + 10 \\ \implies 0.4 Y &= 60 \\ \implies Y &= \dfrac{60}{0.4} \\ &= 150 \end{align} $$
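One hedged reading of the "discrete dynamic model" framing is to iterate $Y_{t+1} = 0.6\,Y_t + 60$ and watch it settle at the fixed point; this is my guess at the intended method, not the book's stated one:

    Y = 0.0
    for _ in range(50):
        Y = 0.6 * Y + 60   # C + I with C = 0.6*Y + 50 and I = 10
    print(Y)               # converges to the fixed point 150.0 since |0.6| < 1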
H: The derivative of a linear transformation Suppose $m > 1$. Let $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map. Consider $f + Ax$ for $A \in \mathrm{Mat}_{m\times n}, x \in \mathbb{R}^n$. Define $F: \mathbb{R}^n \times \mathrm{Mat}_{m\times n} \rightarrow \mathrm{Mat}_{m\times n}$ by $F(x,A) = df_x + A$. So what is $dF_x$? (A) Is it $dF(x,A) = d(df_x + A) = d f_x + A$? And therefore, $$dF(x,A) =\left( \begin{array}{ccc} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_1} \\ \vdots & & \vdots \\ \frac{\partial f_1}{\partial x_n} & \cdots & \frac{\partial f_m}{\partial x_n}\end{array} \right) + \left( \begin{array}{ccc} a_{11} & \cdots & a_{1m} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nm}\end{array} \right)$$ (B) Or should it be $dF(x,A) = d(df_x + A) = d^2 f_x + I$? And therefore, $$dF(x,A) =\left( \begin{array}{ccc} \frac{\partial^2 f_1}{\partial x_1^2} & \cdots & \frac{\partial^2 f_m}{\partial x_1^2} \\ \vdots & & \vdots \\ \frac{\partial^2 f_1}{\partial x_n^2} & \cdots & \frac{\partial^2 f_m}{\partial x_n^2}\end{array} \right) + \left( \begin{array}{ccc} 1 & \cdots & 0 \\ \vdots & & \vdots \\ 0 & \cdots & 1\end{array} \right)$$ Does this look right? Thank you very much. AI: Since, by your definition, $F$ is a matrix-valued function, $DF$ would be a rank-3 tensor with elements $$ (DF)_{i,j,k} = \frac{\partial^2 f_i(x)}{\partial x_j \partial x_k} $$ Some authors also define matrix-by-vector and matrix-by-matrix derivatives differently by considering $m \times n$ matrices as vectors in $\mathbb{R}^{mn}$ and "stacking" the resulting partial derivatives. See this paper for more details.
H: How to show that a space is complete? Let $N_\alpha=\{(x_n)_{n=1}^\infty\mid \sum_{j=1}^n |x_j|\leq Mn^\alpha \text{ for some } M>0 \text{ and all } n\}$, where $\alpha\in \mathbb R$. Show that $N_\alpha$ is a Banach space with the norm $\|(x_n)_{n=1}^\infty\|=\sup_{n\in \mathbb N} n^{-\alpha} \sum_{j=1}^n|x_j|$. It is easy to check the properties of the norm, but I don't know how to prove completeness of the space $N_\alpha$ (that is, that every Cauchy sequence converges). Any kind of help is welcome. AI: Suppose $(x_n)$ is a Cauchy sequence in $N_\alpha$. We need to first produce a candidate limit, and then show that this limit is in $N_\alpha$. (I am assuming that you have shown the various properties of the norm, in particular, $\|x+y\| \le \|x\|+ \|y\|$.) Then for all $\epsilon>0$ there exists $N$ such that if $n,m \ge N$ then, for all $p$, $\sum_{k=1}^p |[x_n]_k-[x_m]_k| \le \epsilon p^\alpha$. In particular, for a fixed $p$, $|[x_n]_p-[x_m]_p| \le \epsilon p^\alpha$. Hence $[x_n]_k$ is Cauchy and converges to a limit. Let $[\hat{x}]_k = \lim_{n \to \infty} [x_n]_k$. The sequence $\hat{x}$ is our candidate limit. Now we must show that $\hat{x} \in N_{\alpha}$. Now fix $\epsilon =1$, and choose $N$ such that if $m,n \ge N$, then $\sum_{k=1}^p |[x_n]_k-[x_m]_k| \le p^\alpha$. If we fix $p$ and then take the limit as $n \to \infty$, and set $m=N$, we have $\sum_{k=1}^p |[\hat{x}]_k-[x_N]_k| \le p^\alpha$. If we let $[y]_k = [\hat{x}]_k-[x_N]_k$, the above shows that $y \in N_\alpha$. Since $\|x_N + y\| \le \|x_N\|+ \|y\|$, we have $x_N + y \in N_\alpha$, and since $[\hat{x}]_k = [x_N]_k + [y]_k$, we see that $\|\hat{x} \| \le \|x_N\|+ \|y\| < \infty$, hence $\hat{x} \in N_\alpha$.
H: Equivalent condition for Differentiability Is it possible to prove that $f:(a,b)\to\mathbb R$ is differentiable at $c \in (a,b)$ iff there exists a real number $f'(c)$ such that for all $x \in (a,b)$, $f(x) = f(c) + f'(c)(x-c) + e(x)$ for some function $e:(a,b) \to\mathbb R$ s.t. $e(x)\to 0$ as $x\to c$, without assuming that differentiability at $c$ implies continuity at $c$? (sorry for the lack of LaTeX, I'm in a rush) AI: That's wrong as stated. Your $e(x)$ should be replaced with $(x-c)e(x)$, making it a $o(x-c)$, using Landau notation. Modulo this correction, the equivalence holds. Proof of the 'only if': $$f(x) = f(c) + (x-c) \frac{f(x)-f(c)}{x-c} = f(c) +(x-c)f'(c) + (x-c)\left[\frac{f(x)-f(c)}{x-c} - f'(c)\right]$$ And the thing between brackets is actually an $e(x)$.
H: Sum of infinitely many integrals I know that $\int_0^\infty \frac{1}{x^2+1} \, dx=\dfrac{\pi}{2}$. What if I integrate $f(x)=\dfrac{1}{x^2+1}$ over infinitely many disjoint intervals, not of the same length, like $$\int_{x_{1}}^{x_{2}} f(x)\,dx+\int_{x_3}^{x_4} f(x)\,dx+\cdots +\int_{x_{2k+1}}^{x_{2k+2}} f(x)\,dx+\cdots$$ Is the sum of these integrals finite? AI: This amounts to showing that $\int_{\Bbb{R}} g_{n} $ converges as $n \to \infty$, where $$ g_{n}(x) = f(x) \sum_{k=1}^{n} \mathbf{1}_{[x_{2k-1}, x_{2k}]}(x). $$ The condition shows that $0 \leq g_n \leq f$, thus the dominated convergence theorem guarantees the convergence.
H: Need help with Cantor-Bernstein-Schroeder Proof at ProofWiki This concerns Proof 6 of the CBST theorem at ProofWiki. I am stuck on the line beginning "Similarly, let $g' = $" The 2nd equality on this line is not immediately obvious to me. How do you prove $A-X = g(B -f(X))$? AI: A few lines up from your quote, we see the statement: Thus by $(1)$: $g\left(B \setminus f(X)\right) = A \setminus X$ This is the only thing that is used. It is proved in the lines following the lemma.
H: Show that there can be no trail in $G$ that contains all edges. Let $G=(V,E)$ be a graph. Assume that $G$ contains at least three vertices with an odd degree. Show that there can be no trail in $G$ that contains all edges. First I checked the definition of a trail; that is a walk that does not contain any edge more than once. I tried reasoning by contradiction, assuming that there is a trail that contains all edges. Then there should exist a trail s.t. $v_0\rightarrow v_1 \rightarrow \dots \rightarrow v_n$ where every edge is unique. However, I am stuck here. Could anyone please give me a hint to this problem and, most of all, give me advice on how to approach these kinds of graph theory proofs in general? Thank you in advance! AI: HINT: Consider a vertex $v$ that is neither the first nor the last vertex of a trail $T$. $T$ may pass through $v$ more than once, but each time $T$ passes through $v$, it ‘uses up’ two edges incident at $v$: one on the way in, and one on the way out. If $T$ contains every edge of the graph, those must be all of the edges incident at $v$, so $v$ must have even degree. I can’t really offer any general advice. In this problem the at least $3$ in the problem statement might suggest that with just two odd vertices there could be a trail that contains all of the edges, and a bit of experimentation with small graphs would tend to confirm that. You’d probably also discover just where on the trail the odd vertices always seem to end up, which could well point you in the right direction.
H: Determinant of a matrix Having some problems with the determinant of a 4x4 matrix M. $ M = \left( {\begin{array}{cccc} 1 & 2 & 3 &-1 \\ 0 & 1 & 2 & 2 \\ 1 &1 &0 &0 \\ 3&1&2&0 \end{array} } \right) $ Went along and expanded it along the 4th column. So I end up with two matrices A and B. $ A = -1 \cdot det \left( {\begin{array}{ccc} 0 & 1 & 2 \\ 1 & 1 & 0 \\ 3 &1 &2 \\ \end{array} } \right) $ $ B = 2 \cdot det \left( {\begin{array}{ccc} 1 & 2 & 3 \\ 1 & 1 & 0 \\ 3 &1 &2 \\ \end{array} } \right) $ I get $A= (-1) \cdot((0 \cdot1\cdot2)+(1\cdot0\cdot3)+(2\cdot1\cdot1)-(3\cdot1\cdot2)-(1\cdot2\cdot2)-(1\cdot1\cdot0)) \\$ $A=(-1) \cdot(-6)=6$ $B= 2 \cdot((1\cdot1\cdot2)+(2\cdot0\cdot3)+(3\cdot1\cdot1)-(3\cdot1\cdot3)-(1\cdot2\cdot2)-(1\cdot1\cdot0)) \\$ $B = 2\cdot8=16$ $A+B=22$ which is wrong. Where is my mistake? The correct answer should be $-22$ but I don't get why my solution keeps being positive. Edit: I'm such a moron: A = 1 * det and B = -2 * det. Everything's clearing up while in bed. Hehe! AI: The formula to calculate the determinant is given by $$\sum_{\sigma \in S_n} (-1)^{\sigma} a_{1\sigma(1)} \times \cdots \times a_{n \sigma(n)},$$ if you want me to explain the whole formula, I can, but the important bit right now is the $(-1)^{\sigma}$, which corresponds to the sign of each entry you are expanding by. First, you have expanded by element $a_{1,4}$ and so the sign should be $(-1)^{1 + 4} = -1$, and so your determinant of $A$ should be $$(-1) \cdot (-1) \cdot (-6) = -6.$$ So, your sign when expanding by row $2$, column $4$ is going to be $(-1)^{2+4} = 1$. Another mistake you have made is that the determinant of $B$ isn't $16$, it's $-16$.
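A quick numeric cross-check of the final value (a sketch using NumPy, assuming it is available):

    import numpy as np

    M = np.array([[1, 2, 3, -1],
                  [0, 1, 2,  2],
                  [1, 1, 0,  0],
                  [3, 1, 2,  0]])
    print(round(np.linalg.det(M)))  # -22, matching the corrected cofactor signs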
H: Universal covering space of connected open subset of $\mathbb R^n$ Is the universal covering of an open connected subset $U$ of $\mathbb{R}^n$ homeomorphic to $\mathbb{R}^n$? AI: No. For example, let $n=3$ and $U=\Bbb{R}^3\setminus \{(0,0,0)\}$. Then the universal cover of $U$ is $U$ itself (since it's simply connected) which is not homeomorphic to $\Bbb{R}^3$ (since it has nontrivial $2$-homology, say).
H: Is $\mathbb{R}^2$ boundaryless? I just have a quick question, as stated in the title. Is $\mathbb{R}^2$ boundary-less? Thank you very much. :-) AI: That depends on whether you consider $\mathbb{R}^2 \subseteq \mathbb{R}^2$ or $\mathbb{R}^2 \subseteq \mathbb{R}^n$ for $n \geq 3$. In the former case, there are no points in the boundary; in the latter, $\partial \mathbb{R}^2 = \mathbb{R}^2$.
H: Let $f(\frac ab)=ab$, where $\frac ab$ is irreducible, in $\mathbb Q^+$. What is $\sum_{x\in\mathbb Q^+}\frac 1{f(x)^2}$? Let $f(\frac ab)=ab$, where $\frac ab$ is irreducible, in $\mathbb Q^+$. What is $\sum_{x\in\mathbb Q^+}\frac 1{f(x)^2}$? Club challenge problem. I don't think it's possible to do with only high school calculus. Help, please? AI: Note first that $f$ is multiplicative over the rationals in some sense. More precisely, if $\frac{a}{b}$ and $\frac{c}{d}$ are in reduced form and $(a,d) = (b,c) = 1$, we have that $f(\frac{ac}{bd}) = f(\frac{a}{b}) \cdot f(\frac{c}{d})$. You have to justify this next step, but (if the sum behaves nicely) we can thus split it up into an infinite product using unique factorization of the rationals: $$ \begin{align*} \sum_{x \in \mathbb{Q}^+} \frac{1}{f(x)^2} &= \prod_{p \text{ prime}} \left(\cdots + \frac{1}{f(p^{-2})^2} + \frac{1}{f(p^{-1})^2} + \frac{1}{f(1)^2} + \frac{1}{f(p^1)^2} + \frac{1}{f(p^2)^2} + \cdots\right) \\ &= \prod_{p \text{ prime}} \left( 1 + \frac{2}{p^2} + \frac{2}{p^4} + \frac{2}{p^6} + \cdots \right) \\ &= \prod_{p \text{ prime}} \left( \frac{2}{1 - \frac{1}{p^2}} - 1 \right) = \prod_{p \text{ prime}} \left( \frac{p^2 + 1}{p^2 - 1} \right) \end{align*} $$ I admit I am unsure of how to evaluate this product, but this seems helpful. Update: This product evaluates to $\frac52$! See the comments for an explanation.
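A numeric check of the Euler product (a self-contained sketch; analytically, $\prod_p \frac{p^2+1}{p^2-1}=\frac{\zeta(2)^2}{\zeta(4)}=\frac{(\pi^2/6)^2}{\pi^4/90}=\frac52$, agreeing with the update):

    def primes(limit):
        # simple sieve of Eratosthenes
        sieve = [True] * (limit + 1)
        sieve[0] = sieve[1] = False
        for i in range(2, int(limit ** 0.5) + 1):
            if sieve[i]:
                for j in range(i * i, limit + 1, i):
                    sieve[j] = False
        return [i for i, is_p in enumerate(sieve) if is_p]

    prod = 1.0
    for p in primes(10 ** 5):
        prod *= (p * p + 1) / (p * p - 1)
    print(prod)  # about 2.49999..., approaching 5/2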
H: Hartshorne Lemma(II 6.1) Hi, I am wondering what Hartshorne means by "an open affine subset on which $f$ is regular". This is part of the first sentence in the proof of Lemma 6.1 in Chapter II of the book "Algebraic Geometry" by Hartshorne. I tried looking up regular in the index, but this just refers me to regular functions on varieties, which as far as I know has absolutely nothing to do with the lemma. Could someone please tell me what he means by this? Thanks AI: Let $X$ be an integral scheme. Then, for any open subset $U$, the ring $\mathscr{O}_X (U)$ embeds as a subring of the function field $K (X)$ in a canonical way, and so for every point $x$ of $X$, the local ring $\mathscr{O}_{X, x}$ is canonically a subring of $K (X)$. Definition. An element $f$ of $K (X)$ is regular at $x$ if it is in $\mathscr{O}_{X, x}$, considered as a subring of $K (X)$, and it is regular on an open subset $U$ if it is in $\mathscr{O}_X (U)$, considered as a subring of $K (X)$.
H: Is there any good reason why a protractor starts from right to left, unlike a scale, which starts from left to right? While studying the number system, I noticed that the positive side goes from 0 to $+\infty$; the direction is left to right. However, this is opposite in the case of angles. The sort of curved number line on a protractor runs from 0 to 180 (from right to left). Is there any good reason why, unlike a number line, the direction of measuring angles is right to left? PS: I actually first thought that it's due to the $+x,+y$ quadrant coming on the right side. But then what about the $-x,+y$ quadrant coming on the left side? The angles should go negative after 90, i.e. $-91, -92, \ldots, -180$. The logic is probably something different! What is it? Thanks, V. AI: Mathematicians always measure angles in the counterclockwise direction; a clockwise angle the same size as one of $30^\circ$ is called a $-30^\circ$ angle. The protractor is consistent with this practice. Why mathematical practice measures counterclockwise, I do not know. I was going to justify it by reference to the common layout of the cartesian plane ($x$ coordinate increasing left to right, as the language is written, and $y$ coordinate increasing bottom to top in accordance with every linguistic metaphor of increase) but on further thought I saw no reason why the zero angle couldn't have been on the $y$-axis, with angle increasing clockwise, and I wonder why it wasn't done this way, for consistency with existing nautical practice. As Daniel McLaury points out, this would have made the vertical axis the real axis and the horizontal axis the imaginary axis. There is a real question of history here, which may or may not be resolvable; some of these things are just mysteries. I believe nobody knows why, when the first automobile traffic lights were constructed, they had red on the top and green on the bottom, opposite to the design of the railroad signal lights they imitated.
H: Area of Questionably Generated Manifold I might not possess the language to ask this question, but I'm going to try anyway. Consider a path c(t) : $\mathbb{R}\rightarrow \mathbb{R}^n$. Let c'(t) denote the tangent vector of the path c(t). Can we "stitch" together the tangent vectors given by evaluating c'(t) at each point along the path such that a smooth manifold is generated? Does this only work for simply connected closed paths? Rather, what are the constraints for a path that can potentially serve as a substructure for such a manifold? Can we meaningfully evaluate the area of this manifold? Is this manifold 1-dimensional? Are such manifolds unique to their paths? Can there exist a manifold for one orientation of the path but not the other? What properties would need to hold for the reverse orientation to generate a manifold of equivalent area to the standard orientation's manifold? How can we classify this object? Are there any applications for this? I will award the answer to whoever covers each question and provides both an interesting example of a path and a counterexample to the idea that any path can have such a manifold. I guess the simplest example of a substructure would be c($\theta$) : $\mathbb{R}\rightarrow \mathbb{R}^2$ given by c($\theta$) = < $\cos\theta$, $\sin\theta$ >. Would the area of the manifold generated be acquired using the integral $\int_{0}^{2\pi}\int_{1}^{\sqrt2} rdrd\theta$ ? I used intuition about the boundary and basic geometry to obtain this result. I may revise my questions after I get some responses and learn better language to approach this problem with. AI: So for any smooth manifold $M$ and $p \in M$ we can consider the tangent space to $M$ at $p$, denoted $T_p M$. This is a real vector space with dimension equal to the dimension of $M$. You can consider the space of all tangent spaces of $M$, which is called the tangent bundle of $M$. A point of the tangent bundle of $M$ is a pair: a point $p$ of $M$ and a vector in $T_p M$. If $M$ is $n$-dimensional, then the tangent bundle, usually denoted $TM$, is $2n$-dimensional. It is difficult to visualize in general, but for $S^1$, we have that $TS^1$ is a cylinder, for example. The object you're describing can be thought of as a submanifold of the tangent bundle of $\mathbb R^n$, and since we can canonically identify this with $\mathbb R^{2n}$, which has a canonical Riemannian metric, we can make sense of the area of this. If anyone can add anything more useful, that'd be great, because I'm not sure how useful this is for OP.
H: What is the defining characteristic of a quadratic function? I'm helping a high school student prepare for an exam, and I'm unsure how to answer this... Why is $x^3+2x^2$ not quadratic? I thought anything that had a power of 2 was quadratic. AI: A quadratic must be a polynomial and it must be of degree $2$. The degree of a polynomial in $x$ is the highest power of $x$ appearing in the function. So we have that your function is a degree $3$ polynomial, also known as a cubic.
H: Integrating a form and using Gauss' theorem. Given the 2-form $$ \varphi = \frac{1}{(x^2+y^2+z^2)^{3/2}}\left( x\,dy\wedge dz +y\,dz\wedge dx + z\,dx\wedge dy\right) \ . $$ (a) Compute the exterior derivative $\textbf{d}\varphi$ of $\varphi$. (b) Compute the integral of $\varphi$ over the unit sphere oriented by the outward normal. (c) Compute the integral of $\varphi$ over the boundary of the cube of side 4, centered at the origin, and oriented by the outward normal. (d) Can $\varphi$ be written $\textbf{d}\psi$ for some 1-form $\psi$ on $\mathbb{R}^3-\{0\}$? This is problem 6.23 from Hubbard's Vector Calculus text. This is not homework, I am just studying for my final. The only part I am having any trouble with is part (c). For part (a), note that $\varphi = \Phi_{\vec{F}}$ where $$ \vec{F} = \frac{1}{(x^2+y^2+z^2)^{3/2}}\begin{bmatrix} x\\y\\z \end{bmatrix} \ . $$ Then $\textbf{d}\varphi = M_{\nabla\cdot\vec{F}}=0.$ For part (b), we pick an orientation-preserving parametrization of the sphere, call it $\partial S$, and use it to evaluate the integral. Namely, we use $$ \gamma : \begin{pmatrix} \alpha \\ \beta \end{pmatrix} \mapsto \begin{pmatrix} \sin\alpha\cos\beta \\ \sin\alpha\sin\beta \\ \cos\alpha \end{pmatrix} $$ where $\alpha\in[0,\pi]$ and $\beta \in [0,2\pi)$. Then the pullback is $$ \gamma^*\varphi = (\sin^3\alpha+\sin\alpha\cos^2\alpha)\,d\alpha\,d\beta = \sin\alpha\,d\alpha\,d\beta $$ so that $$ \int\limits_{\partial S}\varphi = \int\limits_0^{2\pi}\int\limits_0^{\pi} \sin\alpha\,d\alpha\,d\beta = 4\pi. $$ Part (c) is where I am a bit stuck. Consider the region $R$ bounded between the described cube, say $C$ with boundary $\partial C$, and the unit ball at the origin, say $S$. We can use Guass' theorem since $\varphi$ is well defined there. We have $$ \int\limits_{\partial C} \varphi = \int\limits_C \textbf{d}\varphi = \int\limits_R \textbf{d}\varphi \, + \int\limits_S \textbf{d}\varphi = \int\limits_S \textbf{d}\varphi \stackrel{?}{=} 4\pi $$ by part (a). How do I go about obtaining the last equality since $\varphi$ is not defined at the origin? I'm not sure how to justify appealing to my result in part (b). AI: I would rather write $$0=\int_Rd\varphi=\int_{\partial R}\varphi=\int_{\partial C}\varphi-\int_{\partial S}\varphi,$$ i.e. both surface integrals coincide, and the origin is not involved at any moment.
H: Application of uniform boundedness principle Let $(a_n)$ be a sequence in $\mathbb{K}$ such that for each $(x_n) \in c_0$ also $(a_nx_n) \in c_0$. Derive from the uniform boundedness principle that $(a_n) \in l^\infty$. I see that the idea is to find a family $\{T_i\}_{i\in I}$ of linear, bounded operators that is pointwise bounded, and such that I can derive the desired result from the uniform boundedness. However, I can't figure out how to define my operators to achieve this goal. Can anyone drop me a hint? AI: Let $A$ be the given sequence. Let $AC$ be the sequence obtained by pointwise multiplication, given a sequence $C$. Clearly $A$ defines a linear map on sequences. Our hypothesis says precisely that $A$ is actually a map from $c_0$ to $c_0$. We need a norm on $c_0$. The sup norm seems nice, so let's do that. Which operators should we look at? Well, I would suggest looking at operators that "cut off" the sequence $A$ and make it zero after a certain point. EDIT: Adding some more details: if $1_N$ is the sequence that is $N$ ones followed by all zeros, then multiplication by the sequences $A1_N$ obviously defines a family of bounded linear operators on $c_0$ with the sup norm, and you can show $\sup_{1 \leq i \leq N} |a_i| \leq \|A1_N\|$.
H: Showing that $\{x\in\mathbb R^n: \|x\|=\pi\}\cup\{0\}$ is not connected I do have problems with connected sets so I got the following exercise: $X:=\{x\in\mathbb{R}^n: \|x\|=\pi\}\cup\{0\}\subset\mathbb{R}^n$. Why is $X$ not connected? My attempt: I have to find disjoint open sets $U,V\ne\emptyset$ such that $U\cup V=X$ . Let $U=\{x\in\mathbb{R}^n: \|x\|=\pi\}$ and $V=\{0\}$. Then $V$ is relative open since $$V=\{x\in\mathbb{R}^n:\|x\|<1\}\cap X$$ and $\{x\in\mathbb{R}^n:\|x\|<1\}$ is open in $\mathbb{R}^n$. Is this right? and why is $U$ open? AI: Yup, that's right! And similarly, you can see that $U$ is open because $U=W\cap X$ where $$W=\{x\in\mathbb{R}^n:\tfrac{\pi}{2}<\|x\|<\tfrac{3\pi}{2}\}$$ is an open set of $\mathbb{R}^n$. One way of seeing that $W$ is open is noting that the function $\|\cdot\|:\mathbb{R}^n\to\mathbb{R}$ is continuous, and $W$ is the preimage of the open set $(\frac{\pi}{2},\frac{3\pi}{2})$.
H: Selection Sort Algorithm Analysis While performing algorithm analysis on the following C code-snippet, void selection_sort(int s[], int n) { int i, j, min; for (i = 0; i < n; i++) { min = i; for (j = i + 1; j < n; j++) if (s[j] < s[min]) min = j; swap(&s[i], &s[min]); } } the author of the book I am reading writes that the if statement is executed $$ S(n)=\sum_{i=0}^{n-1}\sum_{j=i+1}^{n-1}1=\sum_{i=0}^{n-1}n-i-1 $$ times. To eventually express this using Big Oh notation, he first writes that $$ S(n)\approx\frac{n(n+1)}{2}, $$ because "we are adding up $n-1$ terms whose average value is about $n/2$." How did he conclude this? Ultimately, of course, he concludes that the running time is $\Theta(n^2)$. AI: The idea is that the $n - 1$ terms are ranging from $n - 1$ down to $1$, so the average of those numbers is about $n / 2$. The actual sum is $\frac{n(n - 1)}{2}$, so I don't see why he didn't just put the exact formula (it's not too hard to work out, and is a standard result). But yeah, it's not too hard to see that the average of $n - 1, n - 2, \ldots, 1$ is roughly $n / 2$.
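A quick empirical check that the `if` test runs exactly $\frac{n(n-1)}{2}$ times (a sketch in Python mirroring the loop bounds of the C code):

    def comparison_count(n):
        # replicate the loop structure: for i in [0, n), for j in [i+1, n)
        count = 0
        for i in range(n):
            for j in range(i + 1, n):
                count += 1
        return count

    for n in (5, 10, 100):
        print(n, comparison_count(n), n * (n - 1) // 2)  # the two counts agree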
H: '$R$-rational points,' where $R$ is an arbitrary ring On page 49 of Liu's Algebraic Geometry and Arithmetic Curves, we find Example 3.32. In it, he shows that if $k$ is a field and $X=k[T_1,\dots,T_n]/I$ is an affine scheme over $k$, the sections $X(k)$ (the $k$-rational points of $X$) are in bijection with the algebraic set $Z(I)$ cut out by $I$. This is the set of all $\alpha\in k^n$ with $P(\alpha)=0$ for every $P(T)\in I$. At the end of the section, he gives this exercise: $\textbf{3.6}$ Generalize Example 3.32 to the case where $k$ is an arbitrary ring. How is this possible? The proof in the example makes essential use of the fact that $k$ is a field. Clearly we want to consider the set cut out by $I$ over $R^n$, where $R$ is some ring, and relate this to the sections $X(R)$, but I do not see how to proceed. In particular, the proof identified the $k$-rational points with the points where the residue field was $k$. Since the residue field is always a field, but $R$ may not be, we can no longer use the residue field to identify the $R$-rational points. AI: Let $X = \text{Spec}\,A$ be an affine scheme of finite type over $R$. Then $X$ comes with a map $X\rightarrow \text{Spec}\,R$ so that the corresponding map of rings $R\rightarrow A$ makes $A$ a finitely generated $R$-algebra. So we can write $A \cong R[T_1,\dots,T_n]/I$ for some ideal $I$. Now $X(R)$, the $R$-points of $X$, are maps $\text{Spec}\,R \rightarrow X$ which are sections of the map $X \rightarrow \text{Spec}\,R$. Since we're talking about maps of affine schemes, these are in canonical bijection with ring maps $A \rightarrow R$ which are sections of the structure map $R\rightarrow A$, that is, $R$-algebra homomorphisms $R[T_1,\dots,T_n]/I\rightarrow R$. Now since $R[T_1,\dots,T_n]$ is the free $R$-algebra on $n$ generators, \begin{align} \text{Hom}_R(R[T_1,\dots,T_n]/I,R) &\cong \{f\in\text{Hom}_R(R[T_1,\dots,T_n],R)\,|\,I\subseteq \text{Ker}(f)\}\\ &\cong \{(r_1,\dots,r_n)\in R^n\,|\,p(r_1,\dots,r_n) = 0\,\text{for all}\,p\in I\}. \end{align}
H: Is the product of covering maps a covering map? I have a question about covering maps. If $\phi_1: X_1 \rightarrow Y_1$ is a covering map, and $\phi_2: X_2 \rightarrow Y_2$ is a covering map, then is it true that $\phi_1 \times \phi_2: X_1 \times X_2 \rightarrow Y_1 \times Y_2$ a covering map? AI: Yes. This is a case where you should just follow the definition. The map $\phi_1\times\phi_2$ is continuous and surjective, and every point of $Y_1\times Y_2$ has a neighborhood that is evenly covered (remember the definition of the product topology).
H: Showing that $\int_0^{\pi/3}\frac{1}{1-\sin x}\,\mathrm dx=1+\sqrt{3}$ Show that $$\int_0^{\pi/3}\frac{1}{1-\sin x}\,\mathrm dx=1+\sqrt{3}$$ Using the substitution $t=\tan\frac{1}{2}x$ $\frac{\mathrm dt}{\mathrm dx}=\frac{1}{2}\sec^2\frac{1}{2}x$ $\mathrm dx=2\cos^2\frac{1}{2}x\,\mathrm dt$ $=(2-2\sin^2\frac{1}{2}x)\,\mathrm dt$ How do you get this in the form of $t$ instead of $x$ using $\sin A=\dfrac{2t}{1+t^2}$ ? $$=\int_0^{1/\sqrt3}\frac{2-2\sin^2\frac{1}{2}x}{1-\frac{2t}{1+t^2}}\mathrm dt\,??$$ AI: I think you mean $$\int_0^{\pi/3} \frac{dx}{1-\sin{x}}$$ which may be accomplished using the substitution $t=\tan{\frac{x}{2}}$. Then $$dt = \frac12 \sec^2{\frac{x}{2}} dx= \frac12 (1+\tan^2{\frac{x}{2}}) dx = \frac12(1+t^2) dx$$ so that $dx = 2 dt/(1+t^2)$ Also $$t^2=\frac{1-\cos{x}}{1+\cos{x}} \implies \cos{x} = \frac{1-t^2}{1+t^2} \implies \sin{x} = \frac{2 t}{1+t^2} $$ so that $$1-\sin{x} = \frac{1+t^2-2 t}{1+t^2} = \frac{(1-t)^2}{1+t^2}$$ Then the integral is $$2 \int_0^{1/\sqrt{3}} \frac{dt}{1+t^2} \frac{1+t^2}{(1-t)^2} = 2 \left [ \frac{1}{1-t}\right]_0^{1/\sqrt{3}} = 1+\sqrt{3} $$
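A numeric cross-check with a simple midpoint rule (a self-contained sketch; the integrand is smooth on $[0,\pi/3]$, so nothing fancy is needed):

    from math import sin, sqrt, pi

    a, b, N = 0.0, pi / 3, 100_000
    h = (b - a) / N
    total = h * sum(1 / (1 - sin(a + (k + 0.5) * h)) for k in range(N))
    print(total, 1 + sqrt(3))  # both about 2.7320508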
H: Why is this object a sheaf? I would like to know why $ \mathcal{C} : U \to \mathcal{C} ( U , \mathbb{R} ) $ is a sheaf. Here $ U $ is an open set of $ E $, a finite-dimensional $ \mathbb{R} $-vector space, and $ \mathcal{C} ( U , \mathbb{R} ) $ consists of the continuous maps from $ U $ to $ \mathbb{R} $. Thanks a lot. P.S.: sorry about my English; I'm from another country. :-) AI: The proof is mostly straightforward, and this (nice functions with restriction of domain) is the "mother" example of all sheaves: Given $V=\bigcup U_i$ and $s_i\colon U_i\to\mathbb R$ with $s_i|_{U_i\cap U_j}=s_j|_{U_i\cap U_j}$ for $i,j\in I$, define $s\colon V\to \mathbb R$ by letting $s(x)=s_i(x)$ where $i\in I$ is arbitrary with $x\in U_i$. Such an $i$ exists because $x\in V=\bigcup U_i$. The function is well-defined because if we pick another $j\in I$ with $x\in U_j$, then $s_i(x)=s_j(x)$. Obviously $s|_{U_i}=s_i$ for all $i\in I$. This makes $s$ continuous at all $x\in U_i$, hence at all $x\in V$, i.e. $s$ is a section over $V$. This $s$ is unique, for if $s'\ne s$ then $s'(x)\ne s(x)$ for some $x$; then for $U_i$ with $x\in U_i$, we have $s'|_{U_i}(x)\ne s|_{U_i}(x)$, hence $s'|_{U_i}\ne s|_{U_i}=s_i$.
H: Complete theories - dense linear order There are two things I would like to prove. 1) DLO - the theory of dense linear orders - is complete; that means that when $\psi$ is a sentence of the language $\{<\}$ then $DLO\vDash\psi$ or $DLO\vDash\neg\psi$. 2) When $S$ is a recursively enumerable and complete set of axioms, then $M:=\{\phi: S\vDash\phi\}$ is recursive (i.e., decidable/computable). For 1 I do not have an idea. For 2 I use the fact that $M$ is recursively enumerable. Then I would like to argue (using the Church-Turing thesis) that in the case of complete theories the complement of $M$ is also recursively enumerable. Then I want to use the theorem that a set is recursive if and only if the set and its complement are recursively enumerable. AI: DLO is not complete: The intervals $[0,1]$, $(0,1)$, $[0,1)$, and $(0,1]$ are all models of this theory and are not elementarily equivalent. On the other hand, these $4$ theories are the only possibilities: It is a well-known theorem of Cantor that all countable models of DLO without end-points are isomorphic. This is typically proved using the back-and-forth technique. With this, it follows that if to DLO you add the sentence that says that there are no end-points, then you get a complete theory (using Lowenheim-Skolem, for example. One can think of this as a particular case of the fact that if $T$ is a countable theory that is $\kappa$-categorical for some infinite $\kappa$, and $T$ has no finite models, then $T$ is complete). You can similarly see that the other three extensions of DLO (adding both endpoints, or only one) are complete. For example, any countable model of DLO+ "There are endpoints" looks like $\bullet+\mathbb Q+\bullet$, so any two countable models are isomorphic. As for question 2, you have an algorithm for enumerating $S$. Using it, you can easily enumerate all consequences of $S$. The point is that if $\phi$ is provable from $S$, then for some $n$, $\phi$ is provable from the first $n$ axioms of $S$ with a proof that uses at most $n$ steps. Since $S$ is complete, eventually either $\phi$ or $\lnot\phi$ will appear listed this way. So you have a way of deciding whether $\phi$ is provable from $S$ or not (just wait until one of $\phi,\lnot\phi$ appears in your list of consequences).
H: Linear multiple-variable function I'm reading a differential equation book but I'm stuck on its definition of a linear ODE. Supposing our equation has $y$ as its dependent variable, $t$ as its independent variable, and $y^{(n)}$ refers to the $n$th derivative of $y$ with respect to $t$, my book defines any ordinary differential equation to be of the form $f( y^{(n)},y^{(n-1)},\,\ldots,y',y,t) = 0$, which I clearly agree with. Then it says that the ordinary differential equation is called linear iff $f$ is linear in $y^{(n)},y^{(n-1)},\,\ldots,y',y$. But my problem is that it says that if $f$ is linear in $y^{(n)},y^{(n-1)},\ldots,y',y$, then: $$f( y^{(n)},y^{(n-1)},\,\ldots,y',y,t) = a(t)y^{(n)} + b(t)y^{(n-1)} + \ldots + c(t)y' + d(t)y + g(t).$$ I'm trying to understand this in a simple case. Suppose we have a two-variable function $f(x,y)$. What would the requirement be for $f$ to be linear in $x$? Would it be that $f(c_1x_1 + c_2x_2,y) = c_1f(x_1,y) + c_2f(x_2,y)$? Or would it be that $f(c_1x_1 + c_2x_2,y) = f(c_1x_1,y) + f(c_2x_2,y)$? Because if I define $f(x,y) = a(y)x + b(y)$ (like the author of my book defined a multiple-variable linear function) then it doesn't pass either of the tests... Is the definition of a linear function of two variables completely unrelated to $f(c_1x_1 + c_2x_2,y) = c_1f(x_1,y) + c_2f(x_2,y)$ or $f(c_1x_1 + c_2x_2,y) = f(c_1x_1,y) + f(c_2x_2,y)$? Is it completely unrelated to homogeneity and superposition (like in one-variable functions)? What would be the test to verify that $f(x,y)$ is linear in $x$ (other than that it fits the form $a(y)x + b(y)$, for instance)? AI: You are right. What your book means by a linear function is not exactly linear; it's more like an affine function. Anyway, it's not that important IMHO. The definition $$f( y^{(n)},y^{(n-1)},\ldots,y',y,t) = a(t)y^{(n)} + b(t)y^{(n-1)} + \ldots + c(t)y' + d(t)y + e(t)$$ seems to be more than enough for the scope of your studies (I assume that $y$ is always a function $\mathbb{R} \rightarrow \mathbb{R}$). EDIT: You're absolutely right about your definition of linearity with respect to one variable. See the Wikipedia page on multilinear functions (http://en.wikipedia.org/wiki/Multilinear_map), even though your function $f$ is not multilinear, since there's no linearity with respect to $t$. Actually I can't recommend a book, since I only know French books. But I doubt you'll find a book about this; I feel that this is kind of useless if you want to study linear ODEs where the unknown function is real-valued. Algebra is useful with linear ODEs, but then you will need to study ODEs with vector-valued unknowns. At this point, I feel that your book is using the word linear very sloppily. A linear ODE would be defined as $$g(y^{(n)}(t), \ldots, y(t), t) - h(t) = 0$$ where $g$ is linear with respect to every variable except $t$, and $h$ is a function.
H: Primality test bound: square root of n I was reading about primality tests, and the Wikipedia page said that we just have to test the divisors of $n$ from $2$ to $\sqrt n$, but look at this number: $$7551935939 = 35099 \cdot 215161$$ Since $\sqrt {7551935939} \approx 86901.875$, I would only have to check the divisors from $2$ to $90000$, but one of the divisors ($215161$) is greater than $90000$. Also, do you guys have some ideas to improve my primality test? AI: Once you find the smaller divisor, you automatically find the larger divisor too: divisors pair up as $(d, n/d)$, and if $n$ is composite then at least one divisor in each pair is $\le \sqrt n$, so trial division up to $\sqrt n$ cannot miss a factorization.
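A minimal trial-division sketch illustrating the point; once the small factor is found, the large cofactor comes out as $n$ divided by it (names are mine):

    def smallest_factor(n):
        d = 2
        while d * d <= n:   # only candidates up to sqrt(n) are needed
            if n % d == 0:
                return d
            d += 1
        return n            # no divisor found: n is prime

    n = 7551935939
    d = smallest_factor(n)
    print(d, n // d)        # expect 35099 215161, per the factorization above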
H: First order sentences and Halting problem recursively enumerable I am searching for some examples of recursively enumerable sets, and I do not know how to prove that the following ones satisfy this property. Consider the set of first-order sentences. 1) The subset of the sentences which have a finite model is recursively enumerable. 2) The Halting problem is recursively enumerable. Ideas: For the first one I take the universal signature $\sigma_U$ which is given by $\sigma_U^{Op}=\{f_n^k | n,k\in\mathbb N\}$, $\sigma_U^{Rel}=\{R_n^k | n,k\in\mathbb N\}$ and $ar_{\sigma_U}(f_n^k)=ar_{\sigma_U}(R_n^k)=k$, but I do not know how to go on. For the second one I guess this is intuitively clear, because the set of pairs of the form (program, input) such that the program halts on that input should be recursively enumerable. AI: For the first, simply enumerate all the first-order sentences and check whether they have a model of cardinality $0,1,2,\ldots$. The only slight difficulty is that you cannot check one sentence after another, because you won't reach any sentence after the first one which doesn't have a finite model. To work around this, you have to interweave the two checks. You can for example check all sentences of length at most 1 for models of cardinality 1, then all sentences of length at most 2 for models of cardinality 2, and so on. For the second problem, the same trick works. You can enumerate all combinations of Turing machines and inputs and check for each one whether it halts within $0,1,2,\ldots$ steps. You'll again need to interweave to avoid getting stuck upon hitting the first non-halting pair of machine and input.
H: Limits and Inequalities My book claims the following: Let $f$ be a continuous function. $\lim s_n = x_0$ and since $f(s_n) < y$ for all $n$, we have $f(x_0) = \lim f(s_n) \leq y$. Can someone explain why the last inequality is $\leq$ and not just $<$? I feel like I am missing something very obvious. AI: You have $f(s_n) \in \{x \mid f(x)<y \}$, so $f(x_0)= \lim f(s_n)$ is in the closure of $\{x \mid f(x)<y \}$ which is precisely $\{x \mid f(x) \leq y\}$ (I assumed $f$ continuous).
H: Relationship between $PGL_2$ and $PSL_2$ I read somewhere that $PGL_2(\mathbb{C})=SL_2(\mathbb{C})/N$ where $N$ is the normal subgroup consisting of $\pm \left( \begin{matrix} 1 & 0\\ 0 & 1 \end{matrix}\right)$. It is unclear to me how this follows from the fact that $PGL_2(\mathbb{C})=GL_2(\mathbb{C})/aI$ where $aI$ are the scalar matrices ($I$ the identity and $a\in\mathbb{C}^*$). However, I did see somewhere the statement that $SL_2(\mathbb{C})/N$ is called $PSL_2(\mathbb{C})$ and that $PGL_2(\mathbb{C})\cong PSL_2(\mathbb{C})$. Two questions: 1) How does one show that $PGL_2(\mathbb{C})\cong PSL_2(\mathbb{C})$? 2) Is there another (perhaps easier) way of seeing that $PGL_2(\mathbb{C})=SL_2(\mathbb{C})/N$? Thanks. AI: Hint: If $M \in GL_2(\mathbb{C})$, let $\alpha$ be a square root of $\det(M)$. Then $\frac{1}{\alpha}M \in SL_2(\mathbb{C})$. Notice that the equality is false over $\mathbb{R}$.
H: find $\frac{ax+b}{x+c}$ in partial fractions $$y=\frac{ax+b}{x+c}$$ Find $a$, $b$ and $c$ given that there are asymptotes at $x=-1$ and $y=-2$ and the curve passes through $(3,0)$. I know that $c=1$ but I don't know how to find $a$ and $b$. I thought you expressed $y$ in partial fractions so that you end up with something like $y=Ax+B+\frac{C}{x+D}$. AI: You're on the right track; $$\frac{ax+b}{x+c}=\frac{ax+ac-ac+b}{x+c}=a+\frac{-ac+b}{x+c}$$ As $x\to \pm \infty$, the fraction disappears and you're left with $a$. Hence $a=-2$.
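Completing the remaining coefficient (my own continuation of the hint): with $a=-2$ and $c=1$, the point $(3,0)$ pins down $b$:

$$0=\frac{-2\cdot 3+b}{3+1}\quad\implies\quad b=6,\qquad y=\frac{-2x+6}{x+1}.$$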
H: Evaluating $\lim\limits_{x\to-1}\frac{\sqrt[3]{x^3+1}}{\sqrt[3]{x+1}}$? This is the limit: $$\lim_{x\to-1}\frac{\sqrt[\large 3]{x^3+1}}{\sqrt[\large 3]{x+1}}$$ I made the calculation and the result gave $\sqrt[\large 3]{0}$ in both the numerator and the denominator; I wonder if this is the correct result, and if not, what the correct computation would be. Thanks! AI: $$\lim_{x\to-1}\frac{\sqrt[\large 3]{x^3+1}}{\sqrt[\large 3]{x+1}}=\lim_{x\to-1}{\sqrt[\large 3]{\frac{x^3+1}{x+1}}}=\lim_{x\to-1}{\sqrt[\large 3]{\frac{(x+1)(x^2-x+1)}{x+1}}}=$$ $$=\lim_{x\to-1}{\sqrt[\large 3]{(x^2-x+1)}}=\sqrt[\large 3]3$$
H: Is there such a thing as a metric space of sets over an underlying metric space? Typically we think of a metric as a notion of distance between elements of a set subject to the following constraints... $$ d(x, y) ≥ 0,\quad d(x, x) = 0,\quad d(x, y) = d(y, x),\quad d(x, z) ≤ d(x, y) + d(y, z) $$ I want to know if there is an equivalent for distance between subsets of a known metric space? For example, if we partition integers between 1 and 20 into blocks of 5 and label these A, B, C and D, can we say that these labels are also metrics given a sensible method to compare them? E.g. $$ d(A,B) = \Big|\sum(1,2,3,4,5) - \sum(6,7,8,9,10)\Big| = 25,\\ d(A,A) = \sum(1,2,3,4,5) - \sum(1,2,3,4,5) = 0, \mathrm{etc}. $$ I'm sure there must be a solid definition of this somewhere with appropriate constraints on the sets. What I am particularly interested in is whether the concept of a metric distance is transitive from pairwise comparisons of elements to pairwise comparisons of subsets the metric space elements are taken from rather than an answer like "if you define a reasonable distance measure you can always call it a metric". AI: Given a metric space $X$, the Hausdorff metric is a metric on the set of non-empty compact subsets of $X$.
H: Least Upper Bounds based problem I am having difficulty with this problem from chapter 8 in Spivak's Calculus, any help is appreciated. (a) Suppose that $y - x > 1$. Prove that there is an integer $k$ such that $x < k < y$. Hint: Let $l$ be the largest integer satisfying $l \le x$, and consider $l + 1$. (b) Suppose $x < y$. Prove that there is a rational number $r$ such that $x < r < y$. Hint: If $1/n < y-x$, then $ny - nx > 1$ (c) Suppose that $r < s$ are rational numbers. Prove that there is an irrational number between $r$ and $s$. Hint: As a start, you know that there is an irrational number between $0$ and $1$. (d) Suppose that $x < y$. Prove that there is an irrational number between $x$ and $y$. Hint: It is unnecessary to do any more work; this follows from (b) and (c). AI: (a) Let $l$ be the largest integer satisfying $l \le x$. Then $l + 1 > x$, and we have $$x < l + 1 \le x + 1 < y$$ (b) Assume without loss of generality that $0 < x < y$. Choose $n \in \mathbb{N}$ such that $1/n < y - x$, and let $k$ be the largest positive integer such that $k/n \le x$. Then we have $$x < \frac{k+1}{n} = \frac{k}{n} + \frac{1}{n} \le x + \frac{1}{n} < y$$ (c) We have $$r < r + \frac{s-r}{\sqrt{2}} < r + s - r = s$$ (d) We can find rational numbers $r_1$ and $r_2$ such that $$x < r_1 < r_2 < y$$ by (b) and we can find an irrational between $r_1$ and $r_2$ by (c).
H: Are compositions of the Fourier sine and cosine transforms commutative? That is to say, is it true or false that $$\mathcal{F}_c(\mathcal{F}_s(f(x)))(\xi)\equiv\mathcal{F}_s(\mathcal{F}_c(f(x)))(\xi),$$ and if they are not then are there any conditions on $f$ for which they might be? I can't seem to find any documents online about general properties of the Fourier sine and cosine transforms (so far). My 7-th edition Table of Integrals, Series and Products only states the basic properties. AI: Yes, they do commute. Define $\mathcal F_-$ by $\mathcal F_-(f)(\xi) = \mathcal F(f)(-\xi)$, where $\mathcal F$ is the ordinary Fourier transform, $\mathcal F = \mathcal F_c + i\mathcal F_s$. Then $$ \mathcal F_c = {\mathcal F + \mathcal F_-\over 2},\quad \mathcal F_s = {\mathcal F - \mathcal F_-\over 2i}. \tag{1} $$ The Fourier inversion formula says that $\mathcal F \mathcal F_- = \mathcal F_- \mathcal F$ is a constant multiple of the identity. So, from $(1)$ and the fact that $\mathcal F$ and $\mathcal F_-$ commute, it follows that $$ \mathcal F_c\mathcal F_s = {\mathcal F^2 - \mathcal F_-^2\over4i} = \mathcal F_s\mathcal F_c. $$
H: Probability Question - Bayes' Rule Here's the question: Bob's burglary alarm is on. If there was a burglary, the alarm goes off with probability 50%. However, on any given day, there is a 1% chance the alarm is triggered by a dog. The burglary rate for the area is 1 burglary in $10^4$ days. What is the probability that Bob has been burglarized? My solution is to use Bayes' Rule: $P(burg|alarm) = \frac{P(alarm|burg)*P(burg)}{P(alarm)}=\frac{P(alarm|burg)*P(burg)}{P(alarm|burg)+P(alarm|noBurg)} = \frac{\frac{1}{2}*\frac{1}{10^4}}{\frac{1}{100}+\frac{1}{2}}$ but this is apparently incorrect. Could someone please help me understand where my reasoning went wrong? AI: Your denominator is wrong. The correct formula for the denominator: $P(alarm)=P(alarm|burg)*P(burg)+P(alarm|noBurg)*P(noBurg)$
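Plugging the numbers into the corrected formula (a quick sketch; the posterior is only about 0.5%, so the alarm is most likely the dog):

    p_burg = 1 / 10**4
    p_alarm_given_burg = 0.50
    p_alarm_given_no_burg = 0.01   # dog-triggered alarms

    p_alarm = (p_alarm_given_burg * p_burg
               + p_alarm_given_no_burg * (1 - p_burg))
    print(p_alarm_given_burg * p_burg / p_alarm)  # about 0.00498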
H: One sided limits (1) Do these limits exist: $$1)\lim_{x \to 0^+}\frac{x^2+1}{x^3}$$ $$2)\lim_{x \to 0^-}\frac{x^2+1}{x^3}$$ $$3)\lim_{x \to \infty}\frac{x^2+1}{x^3}$$ $$1)\lim_{x \to 0^+}\frac{x^2+1}{x^3}= \lim_{x \to 0^+}\frac{1+\frac{1}{x^2}}{x}=\lim_{x \to 0^+}\left(\frac{1}{x}+\frac{1}{x^3}\right)= " \frac {1}{0^+}+\frac {1}{(0^+)^3}"= "\infty+ \infty"=+\infty$$ No, the limit $\lim_{x \to 0^+}\frac{x^2+1}{x^3}$ does not exist; it diverges to $+\infty$. $$2)\lim_{x \to 0^-}\frac{x^2+1}{x^3}= \lim_{x \to 0^-}\frac{1+\frac{1}{x^2}}{x}=\lim_{x \to 0^-}\left(\frac{1}{x}+\frac{1}{x^3}\right)= " \frac {1}{0^-}+\frac {1}{(0^-)^3}"= "-\infty- \infty"=-\infty$$ No, the limit $\lim_{x \to 0^-}\frac{x^2+1}{x^3}$ does not exist; it diverges to $-\infty$. $$3)\lim_{x \to \infty}\frac{x^2+1}{x^3}= \lim_{x \to \infty}\frac{1+\frac{1}{x^2}}{x}=\lim_{x \to \infty}\left(\frac{1}{x}+\frac{1}{x^3}\right)= " 0 + 0"= 0$$ Yes, the limit exists: $\lim_{x \to \infty}\frac{x^2+1}{x^3}=0$. My questions are: Is what I did correct? Or do I have to do more to prove the existence of the limits than to calculate them? AI: Yes, what you are doing is, in essence, correct. You could provide a more formal proof if you wished. For example, you can determine that $\lim_{x \to 0^+}\frac{x^3}{x^2+1}=0$ by various logical arguments, including the kind you just presented. Then from the definition, for an arbitrarily large $E=\frac{1}{\epsilon}$, there exists $\delta$ such that $\frac{x^3}{x^2+1}-0<\epsilon$ for $0<x-0<\delta$. Taking the reciprocal on both sides of the inequality, we get that $\frac{x^2+1}{x^3}>\frac{1}{\epsilon}=E$ for all $0<x<\delta$. Thus, the function can be made arbitrarily large, and thus we have proved that a (finite) limit does not exist.
H: Some doubts about interpretation of an atomic formula in predicate calculus I have some doubts related to the interpretation of atomic formulas in predicate calculus. In predicate calculus a formula is interpreted over a specific domain, which is where the allowed values for the formula's constants are taken. So I have a formula $A$ such that: $\{p_1,\ldots,p_n\}$ is the set of all the predicates in the formula $A$; $\{t_1,\ldots,t_k\}$ is the set of all the constants that appear in the $n$ predicates. So an interpretation of my formula $A$ is a triple built in this way: $I = \langle D, \{R_1,\ldots,R_n\}, \{d_1,\ldots,d_k\}\rangle$ where: 1) $D$ is a non-empty domain (so the constants take values in $D$); 2) each $R_i$ is an $n$-ary relation on $D$ assigned to the predicate $p_i$; 3) each $d_j$ is an element of $D$ assigned to the constant $a_j$ (this also says that a constant symbol of my language must denote a value in the domain $D$). So for example if I have the formula $\forall x\, p(a,x)$, a valid interpretation for it could be $I=\langle\mathbb N,\{\le\},\{0\}\rangle$, in which I say that: 1) the domain $D$ is the set of natural numbers; 2) the relation associated to the predicate $p$ is the less-than-or-equal relation; 3) the value of the constant $a$ must be in the domain $D$ and is the natural number $0$. And this is true because it says that in the set of natural numbers there exists a special element, namely $0$, that is the minimum element of the set. Is my reasoning correct until now? I hope so... Now I have a big doubt about the meaning of what my book calls the INTERPRETATION OF AN ATOMIC FORMULA. It says that, given an atomic formula $p_i(a_1,\ldots,a_n)$, its interpretation is TRUE if $(d_1,\ldots,d_n)\in R_i$, and FALSE otherwise. I have some problems understanding what this assertion means. I think that $p_i$ is a predicate from the set of predicates of my language $\{p_1,\ldots,p_n\}$, but what are $a_1,\ldots,a_n$? I think they are the constants used in this predicate, but I am not understanding what it means. Can someone help me? Thanks, Andrea AI: I'm going to start from scratch because your post is very long. Let us start with a language $$ L=\{R_i \,:\, i\in I\}\cup\{f_j \,:\, j\in J\}\cup \{c_k\, :\, k\in K\} $$ The $R_i$ are relation symbols, the $f_j$ function symbols and the $c_k$ constant symbols, with $I,J,K$ indexing sets. Now an interpretation/structure/model for this language is a tuple $$ \mathfrak{A}=\langle A,\{\textbf{R}_i \,:\, i\in I\},\{\textbf{f}_j \,:\, j\in J\}, \{\textbf{c}_k\, :\, k\in K\}\rangle$$ Where $A$ is a (depending on your conventions, possibly required to be non-empty) set; if the symbol $R_i$ is $n$-ary then the corresponding $\textbf{R}_i\subseteq A^n$; if the symbol $f_j$ is $n$-ary then the corresponding $\textbf{f}_j$ is a function $A^n\rightarrow A$; and $\textbf{c}_k\in A$. The idea of this is that you just interpret a relation symbol as a relation on the domain, a function symbol as a function, and a constant symbol as an element. Now we can use this to define a satisfaction relation on the formulae in the language $L$. But we have to be careful, as we may have free variables in the formula. So, inductively, let $i:\{\text{variables}\}\rightarrow A$ be an assignment. Then we define an interpretation of terms, and use this to handle formulae.
$$ c_i^\mathfrak{A}=\textbf{c}_i $$ $$ x_i^\mathfrak{A}=i(x_i) $$ $$ f_i(t_1,t_2,\ldots,t_n)^\mathfrak{A}=\textbf{f}_i(t_1^\mathfrak{A},t_2^\mathfrak{A},\ldots,t_n^\mathfrak{A})$$ So the idea of this definition is that we find out the value of a term by evaluating any constants or variables according to $i$ and the structure $\mathfrak{A}$, and then apply functions as defined in the structure. Now we are ready to define satisfaction of an atomic formula. Atomic formulas look like $t_1=t_2$ where the $t_i$ are terms, or like $R(t_1,\ldots,t_n)$ where $R$ is an $n$-ary relation symbol and the $t_i$ are terms. We define $$ (\mathfrak{A},i)\models t_1=t_2 \Leftrightarrow t_1^\mathfrak{A}=t_2^\mathfrak{A} $$ and $$ (\mathfrak{A},i)\models R(t_1,\ldots,t_n) \Leftrightarrow \langle t_1^\mathfrak{A},\ldots,t_n^\mathfrak{A}\rangle\in \textbf{R} $$ So we say that a claim that $t_i=t_j$ is true if they are interpreted as the same object in the domain, and a claim that a relation $R$ holds of some $n$-tuple of terms is true if the tuple of interpretations of the terms lies in the relation $\textbf{R}$. Hope this clears a few things up.
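To make the atomic case concrete, here is a minimal Python sketch evaluating $p(a,x)$ in the structure $\langle\mathbb N,\le,0\rangle$ discussed in the question (all names are illustrative, not from any library):

    leq = lambda m, n: m <= n      # interpretation of the binary predicate p
    consts = {"a": 0}              # interpretation of the constant symbol a

    def eval_term(t, assignment):
        # a term here is either a constant symbol or a variable
        return consts[t] if t in consts else assignment[t]

    def eval_atomic(relation, terms, assignment):
        # p(t1, ..., tn) is TRUE iff (t1^A, ..., tn^A) lies in the relation
        return relation(*(eval_term(t, assignment) for t in terms))

    print(eval_atomic(leq, ["a", "x"], {"x": 5}))  # True:  0 <= 5
    print(eval_atomic(leq, ["x", "a"], {"x": 5}))  # False: 5 <= 0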
H: Is $S=\{(1,t)\mid t\in \mathbb{R}\}$ a subspace of $\mathbb{R}^2$? My professor introduced subspaces of $\mathbb{R}^n$ today and I don't think I understand them very well. He posed this question as an example: Is the set $S=\{(1,t)\mid t\in \mathbb{R}\}$ a subspace of $\mathbb{R}^2$? He said that it wasn't. Could anyone elaborate as to why it isn't? Can I simply say that the zero vector, $\vec{0}=(0,0)$ can never equal $(1,t)$ and be done with it? AI: To show that some set is a subspace, you must show two things: You can pick any two vectors in the set, add them together, and the result is in the set. You can pick any vector in the set, multiply it by any scalar, and the result is in the set. The easiest check is for the $\vec{0}$ to be in the set. Why? If it isn't in the set, then multiplying by zero (a scalar) results in a vector not in the set! (Violation of condition 2.) So, in short, you're perfectly right. ;)
H: How to interpret "computable real numbers are not countable, and are complete"? On page 12 of this (controversial) polemic http://web.maths.unsw.edu.au/~norman/papers/SetTheory.pdf Wildberger claims that Even the "computable real numbers" are quite misunderstood. Most mathematicians reading this paper suffer from the impression that the "computable real numbers" are countable, and that they are not complete. As I mention in my recent book, this is quite wrong. Think clearly about the subject for a few days, and you will see that the computable real numbers are not countable, and are complete. Think for a few more days, and you will be able to see how to make these statements without any reference to "infinite sets"... These claims are false, as far as I can tell, with the usual definitions. Is there a reasonable way to interpret them---more specifically, do they become true after replacing "countable" by "computably enumerable" and doing something similar with "complete", requiring only sequences that are "computably Cauchy" (feel free to modify my provisional definition below if convenient) to converge? I know nothing about computable real numbers, so I will appreciate it if you are gentle! Provisional definition: A sequence $a_n$ is computably Cauchy if there is a computable function $f: \mathbb{Z}_{> 0} \rightarrow \mathbb{Z}_{> 0}$ such that $|a_k-a_m|<1/n$ if $k,m \geq f(n)$. AI: Here we have two assertions: uncountability and completeness. In the usual set-theoretic tradition the computable real numbers are countable, since there are only countably many Turing machines. Funnily, you can show using Cantor's diagonal argument -the same that is used to show that the real numbers are not countable- that any such bijection from the natural numbers to the computable real numbers is non-computable. If you had such a computable bijection, you could compute a real number that is not in its range by diagonalization. Thus the computable real numbers are countable, but not effectively countable, i.e. you cannot give a computable bijection from the natural numbers onto the computable real numbers. About completeness, a Specker sequence exploits the fact that there are recursively enumerable sets that are not recursive/decidable. Take one of those sets $A\subset \mathbb{N}_1$, consider an enumeration of it given by $$a:\mathbb{N}\rightarrow A$$ and consider the number $$S_A=\sum_{n\in \mathbb{N}}\frac{1}{2^{a(n)}}$$ Then this number is the supremum of the family of computable numbers $\{S_{A,n}\}_{n\in\mathbb{N}}$, where $S_{A,n}$ is given by $$S_{A,n}=\sum_{i\leq n}\frac{1}{2^{a(i)}}\text{,}$$ but $S_A$ is not computable. (If it were computable, then $A$ would be recursive.) Thus we do not have completeness for arbitrary sequences of computable numbers. However, in the previous example not everything is lost. If we define a computable Cauchy sequence of computable real numbers as a computable sequence of computable real numbers $\{q_n\}_{n\in\mathbb{N}}$ such that there is a computable function $$r:\mathbb{N}_1\rightarrow \mathbb{N}$$ such that for every $n\in\mathbb{N}_1$ and all $i,j>r(n)$, $$|q_i-q_j|<1/n\text{,}$$ then the resulting limit can be shown to be a computable real number, and thus we have a sort of computable completeness. The previous two things are the facts: thus we have some sort of uncountability, if we add the requirement of computability to the bijection, and a sort of completeness, if we strengthen the requirements in the definition of a Cauchy sequence.
It has to be said that these results are very similar to those of Bishop's constructive analysis. The article you linked seems to me to be one of those written by people angry with infinity who think that mathematics has to be limited to what they consider philosophically valid. However, mathematics is a free creation of our mind (paraphrasing Dedekind) and thus we have to learn to enjoy it in all its forms, since all of them give us a broader view of the subject. EDIT: For the affirmation that the computable Cauchy sequence $\{q_n\}_{n\in\mathbb{N}}$ converges to a computable real number $q$, we only have to give an algorithm (Turing machine) that approximates this number to any desired rational accuracy. Remember that a number $x$ is computable if there is a computable function $$f_x:\mathbb{N}_1\rightarrow \mathbb{Q}$$ such that for all $n\in\mathbb{N}_1$, $$|f_x(n)-x|<1/n$$ Now, given our computable sequence, this fact translates to our having a computable function $$f:\mathbb{N}\times\mathbb{N}_1\rightarrow \mathbb{Q}$$ such that for all $n\in\mathbb{N}$ and $i\in\mathbb{N}_1$, $$|q_n-f(n,i)|<1/i$$ Now, since this sequence is Cauchy computable, we have that there is a computable function $$r:\mathbb{N}_1\rightarrow \mathbb{N}$$ such that for every $n\in\mathbb{N}_1$, for all $i,j>r(n)$, $$|q_i-q_j|<1/n$$ This fact will be essential for showing that the limit is computable in the considered case. Now, using classical analysis, one can show that for the limit $q$ we have that for every $n\in\mathbb{N}_1$, for all $i>r(n)$, $$|q_i-q|<2/n$$ In this way, the desired computable function approximating $q$ is given by $$h(n)=f(r(4n)+1,2n)$$
H: A question on limit Suppose that $p_k>0, k=1,2,\dots, $ and $$ \lim_{n\to\infty} \frac{p_{n}}{p_1+p_2+\dots+p_n}=0,\quad \lim_{n\to\infty} a_n=a. $$ Show that $$ \lim_{n\to\infty}\frac{p_1a_n+p_2a_{n-1}+\dots+p_na_1}{p_1+p_2+\dots+p_n}=a $$ Hints are much appreciated; I don't want a complete proof. Thanks for your help. AI: Hint: for some $N$, $a_N$ is close enough to $a$ (how close is that?) and so is $a_n$ for every $n>N$. For such an $n$, we can split the sum as $$ \frac{p_1a_n+p_2a_{n-1}+\dots+p_{n-N}a_{N+1}}{p_1+p_2+\dots+p_n} + \frac{p_{n-N+1}a_{N}+\dots+p_na_1}{p_1+p_2+\dots+p_n} $$ How can we show that the term on the left stays close to $a$ while the term on the right tends to $0$?
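A quick numeric illustration of the statement with $p_k\equiv 1$, so that $p_n/(p_1+\cdots+p_n)=1/n\to 0$, and $a_n=1+1/n\to 1$ (a sketch only):

    N = 10_000
    p = [1.0] * N
    a = [1 + 1 / k for k in range(1, N + 1)]   # a_k -> 1
    num = sum(p[k] * a[N - 1 - k] for k in range(N))
    print(num / sum(p))                        # close to the limit 1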
H: Why is this cardinal regular? I have the following problem in front of me. Show that if $\kappa$ is the least cardinal such that $2^\kappa>2^{\aleph_0},$ then $\kappa$ is regular. I've scribbled this: Suppose $$\kappa=\coprod_{\alpha<\lambda}X_\alpha,$$ where $\lambda<\kappa$ is a cardinal, and for each $\alpha<\lambda$ we have $|X_\alpha|<\kappa.$ Then $$\Large 2^{\aleph_0}<2^\kappa=2^{\coprod_{\alpha<\lambda}X_\alpha}=\prod_{\alpha<\lambda}2^{|X_\alpha|}=(\sup_{\alpha<\lambda}2^{|X_\alpha|})^\lambda.$$ If the last thing were equal to $2^\lambda$, I would get a contradiction, and I'd be done. But I have no idea what the last thing equals. I also tried to write down some formulas with cofinality and suprema, but I didn't get any farther. What's the simplest way to do it? The thing above uses some nontrivial facts that I don't even understand very well. AI: Let $\lambda=\operatorname{cf}\kappa$, and let $\langle\alpha_\xi:\xi<\lambda\rangle$ be a cofinal sequence in $\kappa$. Then $|\alpha_\xi|<\kappa$ for each $\xi<\lambda$, so $2^{|\alpha_\xi|}\le 2^\omega$ for each $\xi<\lambda$. Thus, $\sup_{\xi<\lambda}2^{|\alpha_\xi|}=2^\omega$, and $(2^\omega)^\lambda=2^\lambda$ (since $\lambda\ge\omega$). And if $\lambda<\kappa$, then $2^\lambda\le 2^\omega<2^\kappa$. Note that you don’t actually need equality in the last step of your displayed line: all you need for this result is that $$\prod_{\alpha<\lambda}2^{|X_\alpha|}\le\left(\sup_{\alpha<\lambda}2^{|X_\alpha|}\right)^\lambda\;,$$ which is pretty easy to see.
H: Show that: $\lim \limits_{n\to\infty}\frac{x_n-x_{n-1}}{n}=0 $ Here is an exercise: Suppose that $\{x_n\}$ is a sequence such that $\lim \limits_{n\to\infty}(x_n-x_{n-2})=0$. Show that: $$\lim \limits_{n\to\infty}\frac{x_n-x_{n-1}}{n}=0 $$ Thanks. AI: Hints: Let $y_n=|x_n-x_{n-1}|$. Note that $|y_n-y_{n-1}|\le |x_n-x_{n-2}|$. Then $$ |\frac {x_n-x_{n-1}}{n}|=\frac{y_n}{n} \le \frac{|y_n-y_{n-1}|+|y_{n-1}-y_{n-2}|+\dots+|y_{N+1}-y_N|}{n}+\frac{y_N}{n} $$
H: Show that the function $f: X \to \Bbb R$ given by $f(x) = d(x, A)$ is a continuous function. I'm studying for my Topology exam and I am trying to brush up on my metric spaces. Suppose $(X, d)$ is a metric space and $A$ is a proper subset of $X$. Show that the function $f: X \to \Bbb R$ given by $f(x) = d(x, A)$ is a continuous function. I know that showing the pre-image of an open set is open in $X$ is an option for continuity. Yet, I would like to know how to show continuity with open balls or neighborhoods given the context of the problem. By the way, is this the Euclidean metric? Or am I jumping the gun a bit there? AI: Credits to Martin. Fix $a\in A$. Since $d(x,a)\leq d(x,y)+d(y,a)$, taking $\inf\limits_{a\in A}$ we have that $$d(x,A)\leq d(x,y)+d(y,A)$$ By symmetry (i.e. $d(y,x)=d(x,y)$) $$d(y,A)\leq d(x,y)+d(x,A)$$ which gives that $$|f(x)-f(y)|\leq d(x,y)$$ Thus $f$ is $1$-Lipschitz continuous, whence it is continuous. $\blacktriangle$
H: Convergence of $\sum_{n=1}^{\infty}\dfrac{(-1)^n}{n^{1+1/n}}$ Does the series $$\sum_{n=1}^{\infty}\dfrac{(-1)^n}{n^{1+1/n}}$$ converge absolutely, converge conditionally, or diverge? I've tried applying the ratio test and the root test, and in both cases the limit is $1$, so I cannot conclude anything. AI: This series does not converge absolutely, by the limit comparison test $$ \lim_{n\to\infty} \frac{1/n^{1+1/n}}{1/n} = \lim_{n\to\infty} \frac{1}{n^{1/n}} = 1 $$ with the divergent harmonic series $\sum \frac{1}{n}$. But since $$ \frac{1}{n^{1+1/n}} = \frac{1}{n} \exp\left( - \frac{\log n}{n} \right) = \frac{1}{n} + O\left(\frac{\log n}{n^2} \right), $$ the series converges conditionally by noting that $$ \sum_{n=1}^{\infty} \frac{(-1)^{n}}{n^{1+1/n}} = \sum_{n=1}^{\infty} (-1)^{n} \left( \frac{1}{n} + O\left(\frac{\log n}{n^2} \right) \right), $$ which is a sum of two convergent series. This is my favorite style of argument for proving that an alternating series with a complicated general term converges. If you feel uncomfortable with the Big-Oh notation, then consider the function $$ f(x) = x^{-1-\frac{1}{x}} = \exp\left( - \frac{x+1}{x}\log x \right). $$ Then by logarithmic differentiation, $$\frac{f'(x)}{f(x)} = -\frac{x+1-\log x}{x^2} < 0 $$ for large $x$ and thus $f(x)$ is a non-negative decreasing function. Since it is immediate that $f(x) \to 0$ as $x \to \infty$, the conclusion follows from the alternating series test that $$ \sum_{n=1}^{\infty} \frac{(-1)^{n}}{n^{1+1/n}} = \sum_{n=1}^{\infty} (-1)^{n} f(n) $$ converges.
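As a purely numerical illustration (a Python sketch; the cutoff of $2\cdot 10^5$ terms is an arbitrary choice of mine), the alternating partial sums settle down while the absolute partial sums keep growing like the harmonic series:

def term(n):
    return (-1)**n / n**(1 + 1/n)

s, abs_s = 0.0, 0.0
for n in range(1, 200001):
    t = term(n)
    s += t           # alternating series: converges (conditionally)
    abs_s += abs(t)  # absolute series: grows without bound, like sum of 1/n
print(s, abs_s)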
H: Evaluate the limit: $\lim_{(x,y,z)\to(0,0,0)}\frac{xy+yz^2+xz^2}{x^2+y^2+z^4}$ Could someone give me a hint? I would like to continue to attempt it. The limit to evaluate that I would like a hint on is: $$\lim_{(x,y,z)\to(0,0,0)}\frac{xy+yz^2+xz^2}{x^2+y^2+z^4}$$ AI: Along the curve $x=y=z^2$, the function is identically $1$. Along the line $y=z=0$, the function is identically $0$. Along the line $x=y, z=0$, the function is identically $\frac 12$. Along the line $x=2y, z=0$, the function is identically $\frac 25$. Since these values along different paths disagree, the limit does not exist.
H: Show that $\int_{|z|=1}(z+1/z)^{2m+1}dz = 2\pi i {2m+1 \choose m}$ Show that $$ \int_{|z|=1}(z+1/z)^{2m+1}dz = 2\pi i {2m+1 \choose m} $$ for any nonnegative integer $m$. I can't solve this problem. I tried to find singularities but failed. $(z+1/z)$ is not familiar to me. Is there anyone who can help? AI: Due to the binomial theorem, $$(z+\frac{1}{z})^{2m+1}=\frac{(z^2+1)^{2m+1}}{z^{2m+1}}=\frac{\sum_{k=0}^{2m+1}{2m+1 \choose k}z^{2k}}{z^{2m+1}}=\sum_{k=0}^{2m+1}{2m+1 \choose k}z^{2k-2m-1}.\tag{1}$$ By Cauchy's integral formula or direct calculation with $z=e^{it}$ for $|z|=1$, $$\int_{|z|=1}z^ndz = \left\{\begin{array}{cc} 2\pi i& n={-1}\\0 &n\ne{-1} \end{array}\right..\tag{2}$$ The conclusion follows from $(1)$ and $(2)$: the exponent $2k-2m-1$ equals $-1$ exactly when $k=m$.
H: Why does the tangent of numbers very close to $\frac{\pi}{2}$ resemble the number of degrees in a radian? Testing with my calculator in degree mode, I have found the following to be true: $$\tan \left(90 - \frac{1}{10^n}\right) \approx \frac{180}{\pi} \times 10^n, n \in \mathbb{N}$$ Why is this? What is the proof or explanation? AI: The Laurent series for cotangent (for $x$ in radians) is $$\cot x = \frac{1}{x} - \frac{1}{3}x - \frac{1}{45}x^3 - \frac{2}{945}x^5 - \cdots \approx \frac{1}{x} \; \text{for $x$ small}$$ So, $$\tan\left( 90^\circ - \frac{1^\circ}{10^n} \right) = \cot\frac{1^\circ}{10^n} = \cot\frac{\pi/180}{10^n} \approx \frac{10^n}{\pi/180} = \frac{180}{\pi}\times 10^n$$ Congratulations for noticing the pattern on your calculator!
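A quick way to reproduce the pattern programmatically (a Python sketch; the argument is converted from degrees to radians to mimic the calculator's degree mode):

import math

for n in range(6):
    lhs = math.tan(math.radians(90 - 10**(-n)))  # tan of (90 - 10^-n) degrees
    rhs = (180 / math.pi) * 10**n
    print(n, lhs, rhs)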
H: Finding in a string S if it is possible to create a set of perfect cubes or perfect squares from elements of S. You have a sequence $S[1...n]$ of $n$ digits (0 to 9) and you want to know whether it is possible to break it into pieces that are all perfect squares or perfect cubes. For example, if $S = 1252714481644$, then the answer is $YES$ because $S$ can be split as $125, 27, 144, 81, 64, 4$, which are $125 = 5^3, 27 = 3^3, 144 = 12^2, 81 = 9^2, 64 = 8^2, 4 = 2^2$. Another possible solution is $1,25,27,144,8,16,4,4$. Write a dynamic programming algorithm that exhibits a sequence satisfying the condition (or reports that none exists). The complexity of the algorithm should be $O(n^2)$. In the case of $YES$, show the sequence you found. Is this a variant of the subset-sum problem? Assume that the time to verify whether a number is a perfect cube or perfect square is $O(1)$. AI: This being homework, one should not provide a ready-made solution that requires no additional effort to understand and use. For this reason I am sending a Perl program that does what you requested but requires some sophistication on the part of the reader. In fact it does a little bit more, collecting all solutions as opposed to just one. If you can read this program then you have definitely understood the method (dynamic programming). Good luck.

#! /usr/bin/perl -w
#
# Decide whether the digits $from..$to can be split into pieces that are
# all perfect squares or perfect cubes; results are memoized in $res.
sub seqsqcube {
    my ($dref, $from, $to, $res) = @_;
    return $res->[$from][$to] if defined($res->[$from][$to]);
    my %allres = ();
    # numeric value of the digit block $from..$to
    my $value = 0;
    for(my $dindx=$from; $dindx<=$to; $dindx++){
        $value = 10*$value + $dref->[$dindx];
    }
    my $sqroot = ($value>0 ? int(0.5+exp(log($value)/2.0)) : 0);
    my $cuberoot = ($value>0 ? int(0.5+exp(log($value)/3.0)) : 0);
    if(($value-$sqroot*$sqroot == 0) ||
       ($value-$cuberoot*$cuberoot*$cuberoot == 0)){
        # the whole block is itself a perfect square or cube
        $allres{direct} = join('', @$dref[$from..$to]);
    }
    # additionally, try every split point of the block
    for(my $pos=$from; $pos<$to; $pos++){
        my $left = seqsqcube($dref, $from, $pos, $res);
        my $right = seqsqcube($dref, $pos+1, $to, $res);
        if(scalar(%$left) && scalar(%$right)){
            push @{$allres{recursive}}, [$left, $right];
        }
    }
    $res->[$from][$to] = \%allres;
    return $res->[$from][$to];
}

# Expand the memo structure into the set of solution strings.
sub allsols {
    my ($ref, $memo) = @_;
    return $memo->{$ref} if exists($memo->{$ref});
    my %strings;
    if(exists($ref->{direct})){
        $strings{$ref->{direct}} = 1;
    }
    if(exists($ref->{recursive})){
        foreach my $pair (@{$ref->{recursive}}){
            my $left = allsols($pair->[0], $memo);
            my $right = allsols($pair->[1], $memo);
            foreach my $lstr (@$left){
                foreach my $rstr (@$right){
                    $strings{"$lstr,$rstr"} = 1;
                }
            }
        }
    }
    my @unique = keys(%strings);
    $memo->{$ref} = \@unique;
    return $memo->{$ref};
}

MAIN: {
    my $input = shift || '252781';
    my @digits = split(//, $input);
    my $count = scalar(@digits);
    my $result = [];
    seqsqcube(\@digits, 0, $count-1, $result);
    if(!scalar(%{$result->[0][$count-1]})){
        print "no solution found\n";
    }
    my $memo = {};
    foreach my $str (@{allsols($result->[0][$count-1], $memo)}){
        print "$str\n";
    }
}
H: Question about Group notation Say I have elements $g$ and $h$ in a group $G$. What does $g^h$ mean? Seeing this notation a lot but I can't find an explanation for it anywhere. AI: In work by group theorists, this is the right action of $G$ on itself by conjugation: $$g^h = h^{-1} g h$$ This has the nice property that $$(gh)^k = g^k h^k \quad \text{and} \quad g^{(hk)} = (g^h)^k$$ The commutator associated with this is $[g,h] = g^{-1} g^h$, the difference between ${}^h$ and ${}^1$, the identity. You will occasionally see other people use $g^h$ to mean $h gh^{-1}$ as a left-action. Sometimes this is called the topologist's convention, though we have some hope they will all adopt ${}^h g = h g h^{-1}$ so that their left action is on the left.
H: Equivalence for quantifications Considering: (1) $(\exists x)(E(x) \land (\forall y)(E(y)\rightarrow M(x,y)))$ There are equivalences that say: $\lnot(\forall x) A \equiv (\exists x) \lnot A$ $\lnot(\exists x) A \equiv (\forall x) \lnot A$ So, if I want to express (1) only with the "forall" quantifier I get: (2) $\lnot(\forall x)(E(x) \rightarrow \lnot((\forall y)(E(y)\rightarrow M(x,y))))$ Is this ok? What happens with the $\lnot((\forall y)(E(y)\rightarrow M(x,y))))$ part? Is it the same as $\lnot((\forall y))\lnot((E(y)\rightarrow M(x,y)))$? Moreover, if I want quantifiers at the start of the expression, can (1) be directly expressed as: (3) $(\exists x)(\forall y)((E(x) \land (E(y))\rightarrow M(x,y)))$? Or can something like (3) only be obtained coming from (2)?: $\lnot(\forall x)(\forall y)((E(x) \land (E(y))\rightarrow M(x,y)))$ EDIT: Here I go again, are the next expressions equivalent?: $(\exists x)(E(x) \land (\forall y)(E(y)\rightarrow M(x,y)))$ $\lnot(\forall x)\lnot(E(x) \land (\forall y)(E(y)\rightarrow M(x,y)))$ $\lnot(\forall x)(\lnot(E(x)) \lor \lnot((\forall y)(E(y)\rightarrow M(x,y))))$ $\lnot(\forall x)(E(x) \rightarrow \lnot((\forall y)(E(y)\rightarrow M(x,y))))$ $\lnot(\forall x)(E(x) \rightarrow \lnot((\forall y)(\lnot(E(y))\lor M(x,y))))$ $\lnot(\forall x)(E(x) \rightarrow (\exists y)\lnot((\lnot(E(y))\lor M(x,y))))$ $\lnot(\forall x)(E(x) \rightarrow (\exists y)(E(y)\land \lnot(M(x,y))))$ Is this rule valid?: $\lnot((\forall x) A) \equiv \lnot(\forall x) A \equiv (\exists x) \lnot A$ AI: (2) is correct. (3) follows from (1), but it is not equivalent to it: the witness $x$ in (3) is not required to satisfy $E(x)$, so (3) can hold vacuously (for instance, when nothing satisfies $E$) while (1) fails. A prenex form that is equivalent to (1) is $(\exists x)(\forall y)(E(x) \land (E(y)\rightarrow M(x,y)))$, which works because $E(x)$ does not depend on $y$. Not sure what the $\lnot((\forall y))\lnot((E(y)\rightarrow M(x,y)))$ means (why the double parentheses?). If you ignore those, then it's just $\exists y(E(y)\rightarrow M(x,y))$, which is weaker than what you started with; remember that $\lnot(\forall y)\varphi$ is $(\exists y)\lnot\varphi$, not $(\exists y)\varphi$. As for your EDIT: yes, all seven expressions are equivalent (each step is a correct application of De Morgan's laws, the definition of $\rightarrow$, or the quantifier-negation rules), and the rule at the end is valid.
H: Calculating mean and standard deviation for a set Suppose we have a set $A = \{a_1, a_2, a_3, a_4, a_5\}$ where $a_n \in \mathbb{R}$ and a set $B = \{b_1, b_2, b_3, b_4, b_5\}$ where $b_n \in \mathbb{R}$ and a set $C = \{ma_1 + nb_1, ma_2 + nb_2, ma_3 + nb_3, ma_4 + nb_4, ma_5 + nb_5\}$ where $m, n \in (0,1) \subset \mathbb{R}$. $A$ has mean $\mu_1$ and standard deviation $\sigma_1$. $B$ has mean $\mu_2$ and standard deviation $\sigma_2$. Do we have sufficient information to calculate the standard deviation of $C$? Note: The mean of $C$ is $m\mu_{1} + n\mu_{2}$? AI: We have $C=mA+nB$, so by the linearity of expectation, the mean of $C$ is: $$ E(C)=E(mA+nB)=m\cdot E(A)+n\cdot E(B)=m\mu_1 + n\mu_2 $$ As for the standard deviation of $C$, we have: $$ \begin{align*} \sqrt{Var(C)} &= \sqrt{Var(mA+nB)} \\ &= \sqrt{m^2\cdot Var(A)+n^2\cdot Var(B)+2mn\cdot Cov(A,B)} \\ &= \sqrt{m^2\sigma_1^2 +n^2\sigma_2^2+2mn\cdot Cov(A,B)} \\ \end{align*} $$ In the special case where $A$ and $B$ are independent, we have $Cov(A,B)=0$, so the standard deviation of $C$ reduces to: $$ \sqrt{m^2\sigma_1^2 +n^2\sigma_2^2} $$
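To see concretely why the covariance term cannot be dispensed with, here is a small Python sketch (the numbers are arbitrary choices of mine): both versions of $B$ below have the same mean and standard deviation, yet the standard deviation of $C$ differs, because the pairing with $A$, and hence $Cov(A,B)$, differs.

import statistics as st

A = [1, 2, 3, 4, 5]
B1 = [10, 20, 30, 40, 50]    # paired "in order" with A: positive covariance
B2 = list(reversed(B1))      # same multiset, opposite pairing: negative covariance
m, n = 0.5, 0.5

for B in (B1, B2):
    C = [m*a + n*b for a, b in zip(A, B)]
    print(st.pstdev(B), st.pstdev(C))   # same SD of B, different SD of C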
H: A function continuous on all irrational points Let $h:[0,1]\to\mathbb R$ $h(x)=\begin{cases}0&\text{if }x=1\\\frac{1}{n}& \text{otherwise if }x\in\mathbb Q,x=\frac{m}{n},\;m,n\in\mathbb N,\gcd(m,n)=1\\0&\text{otherwise if }x\in\mathbb R\setminus\mathbb Q\end{cases}$ How do you prove that $h$ is continuous on all irrational points within $[0,1]$? AI: That $h$ is discontinuous at all rational points is the easy part. Let $x\not\in \mathbb Q, \ \epsilon>0$ and consider the sets $$U_1=\left\{y\in[0,1]:h(y)<\epsilon\right\}\text{ and }U_2=\{y\in[0,1]:h(y)\geq\epsilon\}.$$ To show that $h$ is continuous at $x$ we have to show that $U_1$ contains an interval $(x-\delta,x+\delta)$ for sufficiently small $\delta$. Note that $U_2\subset \mathbb Q$ and that $\dfrac mn\in U_2\iff n\leq \dfrac1\epsilon$. Also for a fixed $n\in\mathbb N, \dfrac mn\in[0,1]\iff m\in\{0, 1, 2, \ldots, n\}$. Therefore $U_2$ is finite. Since $U_1\cup U_2=[0,1], \ U_1\cap U_2=\emptyset, \ x\in U_1$ and $U_2$ is finite, it follows that for some $\delta>0$, $U_2$ and $(x-\delta,x+\delta)$ are disjoint and therefore $(x-\delta,x+\delta)\subseteq U_1$. This means that $h$ is continuous at $x$.
H: Elementary proof, convergence of a linear combination of convergent series Could you tell me how to prove that if two series $ \sum_{n=0} ^{\infty}x_n, \sum_{n=0} ^{\infty} y_n$ are convergent, then $\sum_{n=0} ^{\infty}(\alpha \cdot x_n + \beta \cdot y_n)$ is also convergent and its sum equals $\alpha \cdot \sum_{n=0} ^{\infty} x_n + \beta \cdot \sum_{n=0} ^{\infty} y_n$? AI: Hint: Think about the definition of $\displaystyle \sum \limits_{n=0}^{+\infty}(a_n)\color{grey}{\left(=\lim \limits_{m\to+\infty}\left(\sum \limits_{n=0}^m(a_n)\right)\right)}$ and use $\lim$ properties.
H: Is $S=\{(a+b,a+c,2c)\mid a,b,c\in \mathbb{R}\}$ a subspace of $\mathbb{R}^3$? Is $S=\{(a+b,a+c,2c)\mid a,b,c\in \mathbb{R}\}$ a subspace of $\mathbb{R}^3$? I just made this question up to practice determining if sets are vector subspaces or not. From what I can tell, the zero vector, $\vec{0}=(0,0,0)$ exists, when $a=b=c=0$ and is in $S$. Now to confirm if scalar multiplication and addition still hold and exist in $S$. $(a+b,a+c,2c)=a(1,1,0)+b(1,0,0)+c(0,1,2)$, which is a general linear combination of the vectors $(1,1,0),(1,0,0),(0,1,2)$. Since the sum of any such two linear combinations is again a linear combination and the same goes for scalar multiples, the set $S$ is a subspace of $\mathbb{R}^3$. Is the above reasoning valid? AI: Yup, that's correct. In fact, you can deduce that $S$ is all of $\mathbb{R}^3$, because the three vectors you wrote out are linearly independent and hence form a basis of $\mathbb{R}^3$.
H: Question about radical of powers of prime ideals. Let $Q$ be an ideal of a commutative ring $A$ and $$r(Q) = \{x \in A : x^n \in Q \text{ for some } n >0 \},$$ the radical of $Q$. Suppose that $P$ is a prime ideal of $A$. How to show that $r(P^n) = P$ for all $n>0$? It is clear that $P \subseteq r(P^n)$ for all $n>0$. We have to show that $r(P^n) \subseteq P$ If $n=1$ and $x \in r(P)$, then there is some $k$ such that $x^k \in P$. Since $P$ is prime, $x \in P$. Therefore $r(P^n) \subseteq P$ is true for $n=1$. How to show $r(P^n) \subseteq P$ for $n>1$? Thank you very much. AI: Prove $I\subseteq J \implies r(I)\subseteq r(J).$ Then $P^n\subseteq P $ gives $r(P^n)\subseteq r(P)=P.$
H: Smallest positive $n$ divisible by $2$ and $3$, and which is an $n$th power and $m$th power, is $2^{\mathrm{lcm}(n,m)}3^{\mathrm{lcm}(n,m)}$ What is the smallest positive integer divisible both by 2 and 3 which is both a perfect square and a sixth power? Answer: $2^6 3^6$ More generally, what is the smallest positive integer $x$ divisible by both $2$ and $3$ which is both an $n$th power and an $m$th power, where $n,m \geq 2$? Answer: $2^{[n,m]} 3^{[n,m]}$ How would you I prove the last statement? AI: Any positive integer $x$ can be uniquely expressed as a product $$x=2^{a_1}3^{a_2}\cdots p_k^{a_k}\cdots$$ where $p_k$ denotes the $k$th prime number, the exponents $a_k$ are all $\geq 0$, and all but finitely many of the exponents $a_k$ are equal to $0$. $x$ is divisible by $2$ $\iff$ $a_1\geq 1$ $x$ is divisible by $3$ $\iff$ $a_2\geq 1$ $x$ is an $n$th power $\iff$ every $a_k$ is divisible by $n$ $x$ is an $m$th power $\iff$ every $a_k$ is divisible by $m$ an integer is divisible by both $n$ and $m$ $\iff$ it is divisible by $\mathrm{lcm}(n,m)$ Thus, the numbers that are divisible by $2$ and $3$, and are both $n$th powers and $m$th powers, are precisely those $x$'s of the form $$x=2^{\mathrm{lcm}(n,m)b_1}3^{\mathrm{lcm}(n,m)b_2}\cdots p_k^{\mathrm{lcm}(n,m)b_k}$$ where $b_1,b_2\geq 1$ and all but finitely many $b_k$'s are equal to $0$. Minimizing everything given these constraints tells us to choose $b_1=b_2=1$ and every other $b_k=0$. Because $$2^{\mathrm{lcm}(n,m)}3^{\mathrm{lcm}(n,m)}$$ will divide any other positive integer with the desired properties, it will be $\leq$ any other positive integer with the desired properties, and it has the desired properties itself, so we can conclude that it is the smallest.
H: joint pdf setting up the integral Let $X$ and $Y$ have the joint pdf $f_{x,y}(x,y)=2e^{-(x+y)}, 0<x<y, 0<y$. Find $P(Y<3X)$. My question is regarding setting up the integral: $\int_0^\infty\int_x^{3x}2e^{-x}e^{-y}dydx$ (1) I understand why the $y$ integral is $0$ to infinity, because it is given that $0<y$; however, why is the $x$ integral from $x$ to $3x$? This is confusing, especially the lower bound $x$: why not $0$? (2) Also, why are the $dy$ and $dx$ switched? The formula has them written as $dxdy$; why is that, and when is it appropriate to switch them this way? edit: I think I might have answered my own question (1): $x$ goes from $0$ to $y$ and $y$ goes to infinity, therefore $x$ is integrated from $0$ to infinity; as for $Y$, $Y$ is greater than $X$ and smaller than $3X$, therefore it is integrated from $X$ to $3X$ (can someone concur?). But why are the $dxdy$ switched to $dydx$? Is it because we are trying to find the prob. of $y$, and whatever we are trying to find must be the inner integral? (still confused) AI: The inner integral is an integral in the vertical $y$ direction. Note that the bound given in the joint pdf is $x < y$, while the bound sought after in the probability computation is $y < 3 x$, so the integration bound is $x < y < 3 x$; this probability is $$2 \int_0^{\infty} dx \, e^{-x} \int_x^{3 x} dy \, e^{-y} = \frac12$$ Note how I wrote the integral so that it is clear which variable goes with which integral. I do this to avoid the confusion you have experienced in how your integral is written - it is unclear which is inner and outer (aside from the limits dependent upon $x$). To illustrate, I can switch the order of integration. This involves solving for $x$ as a function of $y$: $x < y < 3 x \implies y/3 < x < y$. The integral is then $$2 \int_0^{\infty} dy \, e^{-y} \, \int_{y/3}^y dx \, e^{-x}$$ which you can show is $1/2$ as before.
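If you want to sanity-check the value $\frac12$ numerically: the marginal of $X$ is $\int_x^\infty 2e^{-(x+y)}dy=2e^{-2x}$, i.e. $X\sim\text{Exp}(2)$, and given $X=x$ the conditional density of $Y$ is $e^{-(y-x)}$ for $y>x$, i.e. $Y-X\sim\text{Exp}(1)$. That gives a simple Monte Carlo sketch in Python (sample size arbitrary):

import random

random.seed(0)
N, hits = 10**6, 0
for _ in range(N):
    x = random.expovariate(2.0)        # X ~ Exp(2), the marginal of X
    y = x + random.expovariate(1.0)    # given X = x, Y - x ~ Exp(1)
    if y < 3 * x:
        hits += 1
print(hits / N)    # should be close to 0.5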
H: Help on a proof of dimension of a vector space The proof shows that two bases have the same number of elements, and I can't understand one step. The proof goes: As $v_1, . . . , v_n$ is a basis of $V$, each $w_k$ can be expressed as a linear combination of the $v_j$ ; thus for each $k$ there are scalars $λ_{1k}, . . . , λ_{nk}$ such that $$w_k=\sum_{j=1}^nλ_{jk}v_j,~~~k=1,...,m$$ Likewise, for each $j$ there are scalars $μ_{1j}, . . . , μ_{mj}$ such that $$v_j=\sum_{i=1}^m μ_{ij}w_i,~~~j=1,...,n$$ These give: $$w_k=\sum_{j=1}^n\sum_{i=1}^mλ_{jk}μ_{ij}w_i,~~~k=1,...,m$$ As the $w_j$ form a basis, we may equate coefficients of $w_k$ on each side of this equation and obtain $$1=\sum_{j=1}^nλ_{jk}μ_{kj},~~~k=1,...,m$$ Could anybody shed light on this last step? As rest of the proof is easy. Thank you! AI: Since the $w_i$'s are a basis, whenever we have scalars $a_1, \dots a_m$ and $b_1, \dots b_m$ such that $$ a_1 w_1 + a_2 w_2 \cdots + a_m w_m = b_1 w_1 + \cdots + b_m w_m, $$ then $$ a_1 = b_1, \ a_2 = b_2 , \ \ \dots , a_m = b_m. $$ This is what is meant by "equating the coefficients", and it follows since the $w_i$'s are linearly independent. So your second-last equation can be written $$ 0 \cdot w_1 + 0 \cdot w_2 + \cdots + 0 \cdot w_{k-1} + 1 \cdot w_k + 0 \cdot w_{k+1} + \cdots + 0 \cdot w_m= \\ \left( \sum_{j=1}^n \lambda_{jk} \mu_{1j} \right) \cdot w_1 + \cdots + \left( \sum_{j=1}^n \lambda_{jk} \mu_{mj} \right) \cdot w_m $$ Comparing the coefficients of $w_k$ on both sides gives your last equation.
H: No members of a sequence is a perfect square Prove that no member of the sequence 11, 111, 1111, ... is a perfect square. I noticed that none of the first four terms of the sequence (above) is a perfect square, but I do not know what to do afterwards. AI: Since this particular question has been discussed before, let me add my five cents by proving that no number of the form $\overline{aaaa....aaaa}$ (with at least two digits) can be a perfect square. Indeed, suppose that $\overline{aaaa....aaaa}=k^2,$ in other words $$k^2=a\cdot\frac{10^n-1}{9}.$$ Rewriting the last equality we end up with $$a(10^n-1)=(3k)^2.$$ Considering both sides modulo $5,$ we get that $a=0,1,-1 \pmod 5.$ Note that $a\ne 5,$ because for $a=5$ the LHS is divisible by $5$ but not by $25.$ Considering both sides modulo $4,$ we have that $a=0,-1 \pmod 4$ because $n\ge 2.$ Combining the last two observations we end up with the only possibility $a=4.$ Then $k=2k_1$ and we end up with $$10^n-1=(3k_1)^2$$ which is impossible modulo $4$ for $n\ge 2.$
H: Poisson counting process Let $N_t$ be the number of customers that have arrived at a shop by time $t$. Let's say that the shop opens at 9:00. $N_t$, $t\geq0$ is a Poisson process with $\lambda=1$ per hour. What is the probability that there will be at least 2 arrivals between 10:00 and 10:30? We have the time interval (10:00,10:30] (can it be interpreted as $(0,30]=(0,0.5]$?). Thus the needed probability is: $$ P(N(1/2)-N(0)\geq 2) = P(N(1/2)\geq 2) $$ $$ = 1-\exp(-0.5\cdot0)\frac{(0.5\cdot0)^0}{0!}-\exp(-0.5\cdot1)\frac{(0.5\cdot1)^1}{1!}=0.09 $$ Am I correct? AI: The number of arrivals in a specified $1/2$ hour interval has Poisson distribution with parameter $0.5$. The probability that the number of arrivals is $\ge 2$ is, by reasoning similar to yours, equal to $$1-\left(e^{-0.5}\frac{(0.5)^0}{0!}+e^{-0.5}\frac{(0.5)^1}{1!}\right)=1-1.5\,e^{-0.5}\approx 0.0902,$$ so your numerical value of $0.09$ is essentially right, though the expression you wrote has some slips. For some reason, you incorrectly had $(-0.5)^0$, which by accident is OK, and $(-0.5)^1$, which is not. You also had an error in the part dealing with the probability that the number is $0$. You had $e^{(-0.5)(0)}$ as part of that expression, and it should be simply $e^{-0.5}$. Remark: The probability that the number of arrivals in a $t$ hour interval is exactly $k$, in the general case with parameter $\lambda$, is $$e^{-\lambda t}\frac{(\lambda t)^k}{k!}.$$ In our case, $\lambda=1$ and $t=0.5$. You appear to be working with a different (and incorrect) formula.
H: Reduction formulae question. $I_n=\int_0^\frac{1}{2}(1-2x)^ne^xdx$ Prove that for $n\ge1$ $$I_n=2nI_{n-1}-1$$ I end up (by integrating by parts) with: $I_n =e^x(1-2x)^n+2nI_{n-1}$ I am not sure how $e^x(1-2x)^n$ becomes $-1$? AI: Integrating by parts with $u=(1-2x)^n$ and $dv=e^x\,dx$, $\displaystyle I_n=\int_0^\frac{1}{2}(1-2x)^ne^xdx =[(1-2x)^ne^{x}]_{0}^{1/2}-\int_{0}^{1/2}e^x\,\frac{d}{dx}\big((1-2x)^{n}\big)\,dx$ $\displaystyle =[(1-2x)^ne^{x}]_{0}^{1/2}-\int_{0}^{1/2}e^x\,n(-2)(1-2x)^{n-1}dx$ $\displaystyle =[(1-2x)^ne^{x}]_{0}^{1/2}+2n\int_{0}^{1/2}e^x(1-2x)^{n-1}dx$ $\displaystyle =-1+2nI_{n-1},$ since the boundary term is $[(1-2x)^ne^{x}]_{0}^{1/2}=(1-2\cdot\tfrac{1}{2})^ne^{1/2}-(1-2\cdot 0)^ne^0=0-1=-1$ for $n\ge1.$
H: How to use the Chain rule for: $y=\cosh(a\sinh^{-1}x)$ $$y=\cosh(a\sinh^{-1}x)$$ where $a$ is a constant. What do I substitute for $u$ and $v$ to then find $\frac{du}{dx}$ and $\frac{dv}{dx}$? I am then supposed to prove that: $$(x^2+1)\frac{d^2y}{dx^2} +x\frac{dy}{dx}-a^2y=0$$ AI: HINT: So, $$\cosh^{-1}y=a\sinh ^{-1}x$$ Differentiating wrt $x,$ $$\frac1{\sqrt{y^2-1}}\frac{dy}{dx}=\frac a{\sqrt{1+x^2}}$$ Squaring we get, $$(1+x^2)\left(\frac{dy}{dx}\right)^2=a^2(y^2-1)$$ Again differentiating wrt $x,$  $$2x\left(\frac{dy}{dx}\right)^2+(1+x^2)2\frac{dy}{dx}\frac{d^2y}{dx^2}=a^2\cdot2y\frac{dy}{dx}$$ Divide by $2\frac{dy}{dx}$ assuming $\frac{dy}{dx}\ne0$
H: central limit theorem using expected value Considerable controversy has arisen over the possible aftereffects of a nuclear weapons test conducted in Nevada in 1957. Included as part of the test were some three thousand military and civilian "observers". Now, more then fifty years later, eight cases of leukemia have been diagnosed among those three thousand. The expected number of cases, based on the demographic characteristics of the observers, was three. Assess the statistical significance of these findings. Calculate an approximation based on the central limit theorem. the central limit theorem is as follows: $$P(a\le \frac{\bar w - \mu}{\sigma/\sqrt{n}} \le b)$$ the answer however is as follows: $$1-P(\frac{X-3}{\sqrt{3}} \le \frac{7.5-3 }{\sqrt{3}})$$ did i miss something why are they using the expected value instead of sigma divided by sqrt of n? should i assume that those two terms are equal? how are they equal? AI: I shudder at using CLT in this situation. However, the usual model used is the Poisson model, with parameter $\lambda=3$. The variance of the Poisson with parameter $\lambda$ is $\lambda$, so the standard deviation is $\sqrt{3}$. Another way of thinking about it is that the number of leukemia cases in a group of $3000$ has binomial distribution, $n=3000$, $p=\frac{1}{1000}$. The variance is $np(1-p)$. Since $p$ is very small, $1-p\approx 1$, and therefore the variance of the binomial is approximately $np$, which is $3$. Remark: Using the Poisson model, it is not really too hard to compute the probability that the number of cases is $\le 7$, under the null hypothesis that $\lambda=3$. So there is no real need to use a normal approximation.
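For comparison, here is a short Python sketch computing the exact Poisson tail $P(X\ge 8)$ for $\lambda=3$ alongside the normal approximation with continuity correction quoted above (the exact tail comes out near $0.012$, the normal approximation smaller, but both indicate statistical significance):

import math

lam = 3.0
# exact Poisson tail: P(X >= 8) = 1 - P(X <= 7)
exact = 1 - sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(8))

# normal approximation with continuity correction: P(Z >= (7.5 - 3)/sqrt(3))
z = (7.5 - lam) / math.sqrt(lam)
approx = 0.5 * math.erfc(z / math.sqrt(2))

print(exact, approx)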
H: How do I prove $\frac{M_1 \times M_2}{N_1 \times N_2} \simeq \frac{M_1}{N_1} \times \frac{M_2}{N_2}$? I seem to encounter this issue whenever a question involves quotient objects. In this case, I have modules $M_1$ and $M_2$ and subsets $N_1$ and $N_2$ thereof respectively. It is given that $N_1$ and $N_2$ are submodules, thus so too is their set-theoretic Cartesian product, namely $N_1 \times N_2$. Thus, all expressions in the formula below are well-defined. The task is to show that $$\frac{M_1 \times M_2}{N_1 \times N_2} \simeq \frac{M_1}{N_1} \times \frac{M_2}{N_2}.$$ As a first step, I'd like to show that there exists a function $$f : \frac{M_1 \times M_2}{N_1 \times N_2} \rightarrow \frac{M_1}{N_1} \times \frac{M_2}{N_2}$$ such that for all $(m_1,m_2) \in M_1 \times M_2$ it holds that $$f((m_1,m_2)+N_1 \times N_2) = (m_1+N_1, m_2 + N_2),$$ but I can't work out how to do it. Since functions are generalized by relations, one approach would be to define a relation with the right properties and then prove that it is a function. But I can't even work out the appropriate relation to define. Help, anyone? AI: An idea: define $$f: M_1\times M_2\to M_1/N_1\times M_2/N_2\;\;\text{by}\;\;f(m_1,m_2):=(m_1+N_1\,,\,m_2+N_2)$$ 1) Show the above is a surjective module homomorphism; 2) check that $\ker f = N_1\times N_2$; 3) apply the first isomorphism theorem.
H: How to prove this result on limit? $$\lim_{x \to 0} \frac{({a+x^m})^{\frac{1}{n}}-({a-x^m})^{\frac{1}{n}}}{x^m}=\frac{2}{n}a^{\frac{1}{n}-1}$$ I have tried it using L'Hôpital's rule, but I don't get any way out of that. Any help will be appreciated. Thanks. AI: Putting $y=\frac {x^m}a$ (with $a>0$), $$\lim_{x \to 0} \frac{({a+x^m})^{\frac{1}{n}}-({a-x^m})^{\frac{1}{n}}}{x^m} $$ $$= a^{\frac1n}\lim_{y \to 0} \frac{(1+y)^{\frac{1}{n}}-(1-y)^{\frac{1}{n}}}{ay} $$ $$= a^{\frac1n-1}\lim_{y \to 0} \frac{\left(1+\frac yn+O(y^2)\right)-\left(1-\frac yn+O(y^2)\right)}y \text{ (using the binomial expansion)}$$ $$= a^{\frac1n-1} \lim_{y \to 0}\frac {\frac {2y}n+O(y^2)}y$$ $$=\frac{2 a^{\frac1n-1}}n, \text{ since }y\ne0 \text{ as }y\to0$$
H: Can we get the line graph of the $3D$ cube as a Cayley graph? Given a graph $G=(V,E)$, the line graph of $G$ is a graph $\Gamma$ whose vertices are $E$ (the edges of $G$) and in $\Gamma$, two vertices $e_1,e_2$ are connected if, as edges in $G$, they share an endpoint. Now let $G$ be the $3D$ cube graph. It has $8$ vertices, $12$ edges and it is $3$-regular. It is actually the Cayley graph of $\mathbb{Z}_2^3$ with generators $(1,0,0),(0,1,0),(0,0,1)$. Let $\Gamma$ be the line graph of this $G$. Then $\Gamma$ has $12$ vertices and it is $4$-regular and highly symmetric. Is $\Gamma$ the Cayley graph of some group $H$ (of order $12$) with a symmetric set of generators $S \subset H$ ($|S|=4$)? (by a symmetric set I mean that $x\in S \Rightarrow x^{-1}\in S$, making the Cayley graph undirected). AI: I haven't checked this, but try $H=A_4$, with $S=\{g,h,g^{-1},h^{-1}\}$ where $g=(1,2,3)$ and $h=(2,3,4)$. So vertices of the cube are labelled alternately with $g$ and $h$, and multiplication of an element (edge) by a generator (vertex) corresponds to rotating the edge clockwise round that vertex (or anticlockwise for multiplication by the inverse).
H: Proving that the series $\sum\limits_{n=1}^\infty a(n)$ diverges, where $a(n) = (1/2)^k$ for $2^{k-1} \le n < 2^k$ Let $a(n)=(1/2)^k$ when $2^{k-1} \le n < 2^k$ and $k$ is a natural number. How do you prove that $\sum\limits_{n=1}^\infty a(n)$ diverges? AI: Compute the sum of the $a_n$'s for $n$ in such a $k$-block: there are $2^{k-1}$ terms, each equal to $(1/2)^k$, so each block contributes $2^{k-1}\cdot 2^{-k}=\frac12$. Hence the partial sums grow without bound; conclude using the Cauchy criterion.
H: Orthogonal complement of orthogonal complement Let $U$ be a subspace of $V$ (where $V$ is a vector space over $C$ or $R$). The orthogonal complement of the orthogonal complement of $U$ is not equal to $U$ in general (equal only for dim $V$ finite). Can anyone give me a simple example when the orthogonal complement of the orthogonal complement of $U$ is not $U$. AI: Let $$V:=\left\{ f:[0,1]\to\Bbb R\;;\; f\;\;\text{is continuous}\right\}\;\;\text{over}\;\;\Bbb R$$ and with the inner product $$\langle f,g\rangle:=\int\limits_0^1f(x)g(x)dx$$ Let $$U:=\{ f \in V\;;\;f(0)=0\}\implies U^\perp=\{0\}\;,\;\;U^{\perp\perp}=\{0\}^\perp=V\neq U$$ (Note that this $V$ is an inner product space which is not complete; in a genuine Hilbert space one always has $U^{\perp\perp}=\overline U$, so such an example requires either an incomplete space or a non-closed subspace.)
H: The Set $\{x\in \mathbb{R}^n : \|x\|\leq r\}$ is Closed How to show that a set $A=\{x\in \mathbb{R}^n : \|x\|\leq r\}$ is closed? May I do it by showing that for $x\in A, a\in \mathbb{R}$ it holds that $ax\in A$ and for $x,y\in A$ it holds that $x+y \in A$? AI: Here closed means that $\partial A$, the boundary of $A$, belongs to $A$. Set-theoretically, $$\partial A=\{x\in \mathbb R^n\mid \forall \rho>0,\;B(x,\rho)\cap A\ne \emptyset \text{ and }B(x,\rho)\cap A^c\ne \emptyset\}$$ where $B(x,\rho)=\{y\in \mathbb R^n\mid \|y-x\|<\rho\}$ is the open ball around $x$ of radius $\rho$. (Closure under sums and scalar multiples is what you would check to show $A$ is a subspace; it has nothing to do with being topologically closed, and $A$ is not a subspace anyway.) Now let $y\in \partial A$; I will show that $y\in A$. Since every ball around $y$ meets $A$, there exists a sequence $\{y_i\}_{i\ge 1}$ with $y_i\in A$ and $y_i\to y$ as $i\to \infty$. This means that for every $\epsilon>0$ there is $n\in\mathbb N$ such that $\|y_i-y\|<\epsilon$ for all $i\ge n$. Since $y_i\in A$ we know $\|y_i\|\le r$, and hence $$\|y\|=\|y-y_i+y_i\|\le \|y-y_i\|+\|y_i\|\le \epsilon+r.$$ As $\epsilon>0$ was arbitrary, this implies $\|y\|\le r$, that is, $y\in A$. Since $y\in \partial A$ was arbitrary, $\partial A \subseteq A$. Hence $A$ is closed.
H: Show that $x=2\ln(3x-2)$ can be written as $x=\frac{1}{3}(e^{x/2}+2)$ Show that $x=2\ln(3x-2)$ can be written as $x=\dfrac{1}{3}(e^{x/2}+2)$. Is there a rule for this? AI: Solve for the "other" $x$. Notice that: $$ \begin{align*} x &= 2\ln(3x-2) \\ \dfrac{x}{2} &= \ln(3x-2) \\ e^{x/2} &= 3x-2 \\ e^{x/2}+2 &= 3x \\ \dfrac{1}{3}(e^{x/2}+2) &= x \\ \end {align*} $$ as desired.
H: Showing that two real matrices are not congruent over $\mathbb{Q}$ Maybe it is a stupid question but I will still ask it here. How can I prove that the following matrices are not congruent over $\mathbb{Q}$? \begin{pmatrix} -1 & 0\\ 0 & 2\\ \end{pmatrix} \begin{pmatrix} -1 & 0\\ 0 & 1\\ \end{pmatrix} Thanks in advance AI: If $B=Q^TAQ$ then $\det(B)=\det(Q^T)\det(A)\det(Q)$, i.e. $\frac{\det(B)}{\det(A)}=\det(Q)^2$ must be the square of a rational number. Here the determinants are $-2$ and $-1$, and their ratio ($2$ or $1/2$) is not a square in $\mathbb{Q}$.
H: Expectation of maximum of a function whose expectation is concave An analysis of a data structure yields a property of the form $\qquad \mathbb{E}[ f(k) ] = H_k + H_{n-k+1} - 1$ for some natural $n$ and all $1 \leq k \leq n$. Note that the $f(k)$ are not independent. Now I am interested in the quantity $\qquad \mathbb{E}[\max_{k=1,\dots,n} f(k)]$. Since $\mathbb{E}[f(k)]$ is concave on $[1,k]$ if continued on the reals and the maximum is "nice", I think it should be possible to apply (the symmetric version of) Jensen's inequality to conclude $\qquad \mathbb{E}[\max_{k=1,\dots,n} f(k)] \leq \max_{k=1,\dots,n} \mathbb{E}[f(k)] = \mathbb{E}[f(n/2)] = 2H_{n/2} + \frac{1}{n/2 + 1} - 1$, glossing over odd $n$. Does this really hold? If so, how can I make the argument formal? If not, is there any handle on the desired quantity? Right now, I don't know more about the distribution for $f(k)$. Note that I can expect a logarithmic bound, so we should be able to make a connection. AI: Jensen's inequality states that, if $\phi$ is a convex function \begin{equation} \phi( \mathbb{E}X) \leq \mathbb{E}\phi(X) \end{equation} In your particular case, consider \begin{equation} \begin{aligned} \phi_n : x = (x_1,...,x_n) \mapsto \max_{i \in [1, n]} \{x_i\} \end{aligned} \end{equation} For all $n$, $\phi_n$ is convex since $\forall t \in [0,1]$, $\max \{ tx^1_i + (1-t)x^2_i\} \leq t \max\{x^1_i\} + (1-t) \max\{x^2_i\} $, therefore (unfortunately) \begin{equation} \max_i \mathbb E X_i \leq \mathbb{E} (\max_i X_i) \end{equation}
H: finiteness and first order sentences Lets consider a set of sentences $T$ and a signature $\sigma$. I proved (using compactness theorem) that when $T$ has arbitrary large models than also an infinite model. Now there are several consequences from this statement but I have problems formulating the connection to the statemant above. Finiteness is not first order characterizable, i.e there is no set of sentences $T$ such that $M$ is a $T$-model if and only if $M$ is finite. In my opinion this is a direct consequence because $T$ also has infinte models, therefore the finiteness of $M$ is no sufficient condition being a $T$-model. Infinity is not characterizable through one single sentence, i.e there is no sentence $\phi$ such that $M$ is a $\{\phi\}$-model if and only if $M$ is infinity. I do not know how to argue here. AI: As for your first point, you are correct. For, suppose there were such a set of sentences $T$ (hereafter, I speak of a theory). Then $T$ has arbitrarily large finite models, but no infinite model. This contradicts the result you have obtained. For the second, suppose $M \models \phi$ iff $M$ is infinite. Then what can we say about the theory $T = \{\neg \phi\}$ consisting only of the negation of $\phi$? When dealing with a certain fixed theory $T$ (e.g. the theory of groups), we can follow a similar strategy to prove that (essentially) the same results hold for models of $T$: Using Compactness, we prove that $T \cup \{\neg\phi_n:n \in \Bbb N\}$ (where $\phi_n$ is the standard "there are at most $n$ distinct elements" sentence) is consistent, i.e. that $T$ admits infinite models, from the fact that arbitrarily large finite models of $T$ exist. Analogous to the first point, we prove that "being a finite model of $T$" is not first-order axiomatisable. Finally, we prove that there is no single sentence axiomatising infinite $T$-models. Suppose $\phi$ were a single sentence expressing "being an infinite model of $T$". Then $T \cup \{\neg \phi\}$ is a first-order axiomatisation of "being a finite model of $T$", which we had shown to be impossible.
H: prove statement about fixed point iteration I have the following fixed point iteration: $$ p_{n+1} = \frac{p_n^3 + 3ap_n}{3p_n^2 + a} $$ By defining $$g(x) = \frac{x^3 + 3ax}{3x^2 + a}$$ and some algebra I found that the fixed point is $x = \sqrt{a}$. So $g(\sqrt{a}) = \sqrt{a}$. Now I need to show that this iteration process converges to the fixed point $\sqrt{a}$ for every $p_0$ with $0< p_0 < \sqrt{a}$. Maybe it isn't of any importance, but I have already proven the above statement for every $p_0$ with $0<\sqrt{a} < p_0$. AI: Note that $$g'(x)=\frac{(3x^2+3a)(3x^2+a)-(x^3+3ax)6x}{(3x^2+a)^2}=\frac{3x^4-6ax^2+3a^2}{(3x^2+a)^2}=\frac{3(x^2-a)^2}{(3x^2+a)^2}$$ Note that $\frac{p_{n+1}}{p_n}=\frac{p_n^2+3a}{a+3p_n^2}$ is $>1$ if $p_n^2+3a>a+3p_n^2$, i.e. if $(0<)\,p_n<\sqrt a$. Hence we have the following possibilities if $0<p_0<\sqrt a$: The sequence is monotonically increasing until for some $n$ we have $p_n\ge \sqrt a$. In this case, your (therefore important) argument shows that it converges $\to\sqrt a$ from then on. The sequence is monotonically increasing but stays below $\sqrt a$. In that case it must converge to some limit. This limit must be a fixed point of $g$, i.e. $p_n\to \sqrt a$.
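A quick numerical experiment (a Python sketch; the choices $a=7$ and $p_0=0.5$ are arbitrary) shows the predicted monotone convergence from below. Incidentally, this iteration is exactly what Halley's method produces for $f(x)=x^2-a$, which is why the convergence is so fast:

import math

def g(x, a):
    # the fixed-point map p_{n+1} = (p^3 + 3ap) / (3p^2 + a)
    return (x**3 + 3*a*x) / (3*x**2 + a)

a, p = 7.0, 0.5     # any 0 < p_0 < sqrt(a)
for k in range(6):
    p = g(p, a)
    print(k, p)
print("sqrt(a) =", math.sqrt(a))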
H: $x$ such that $\sqrt{x^y} > \sqrt{(x y)^2}$ Let $y>1$, $x>0$. Of what form are the numbers $x$ such that: $$\sqrt{x^y} > \sqrt{(x y)^2}$$ I.e.: What is the solution for $x$? EDIT: As the first answer implied: $$\sqrt{x^y} > \sqrt{(x y)^2} \implies \log(\sqrt{x^y}) > \log(\sqrt{(x y)^2})\implies$$ $$\log(x^y) > \log((x y)^2) \implies y \log(x)>2 \log(x y) \implies y \log(x)> 2(\log(x)+\log(y))$$ $$\implies (y-2) \log(x)>2 \log(y)$$ AI: Obviously $x$ has to be positive. Now take logarithms of both sides; the inequality you get is equivalent, so you have $\frac{y}{2} \log(x) > \log(x) + \log(y)$; now solve for $\log(x)$ and you're done. Oh, sorry: actually, if you allow $y$ to be an even natural number then the inequality still makes sense even for negative $x$'s, but it's still pretty much the same, because then it's the same for $x$ as for $-x$. So, doing it by my method you can find the solutions with the assumption $x > 0$; then you take the set of these solutions and also add all their negatives, so in this case $x$ is a solution iff $-x$ is a solution. You might want to clarify the question though, because in general we don't define $x^y$ for $x <0$; it's more of a philosophical question.
H: Solving a conjecture by bruteforcing Say we wanted to check the Beal conjecture ["If $A^x+B^y=C^z$, where $[A, B, C, x, y, z \in N] \wedge [x, y, z \gt 2] \to $ A, B and C must have a common prime factor", from the official site]. What is the most efficient way of checking it by brute force? First, define a set of possible solutions which meet the requirements, then test all these solutions [i.e. find a triple $A, B, C$ of numbers which have no common prime factor, find a triple $x, y, z >2$, then test if $A^x + B^y = C^z$]? Or first find solutions of the equation, then check if they meet the requirements [i.e. solve $A^x + B^y = C^z$, then test if the numbers have no common prime factor and if $x, y, z > 2$]? AI: I would let $S$ be the set of numbers of the form $A^x$ with $x > 2$ in $[1,\ldots,N]$, find $(S + S) \cap S$ (thus solutions of $A^x + B^y = C^z$), then check if $\gcd(A,B)$ and $C$ are coprime (using Euclid's algorithm, not factoring). Of course, assuming the conjecture is true, you won't be "solving it", just verifying that there are no counterexamples up to $N$.
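For what it's worth, here is a sketch of that strategy in Python (the function name and the bound $N$ are my own choices; bases start at $2$ and exponents at $3$):

from math import gcd

def beal_search(N):
    # all perfect powers A^x <= N with x > 2, remembering the possible bases
    powers = {}                                # value -> set of bases
    A = 2
    while A**3 <= N:
        v = A**3
        while v <= N:
            powers.setdefault(v, set()).add(A)
            v *= A
        A += 1
    values = sorted(powers)
    for i, s in enumerate(values):             # s = A^x
        for t in values[i:]:                   # t = B^y, with t >= s
            if s + t > N:
                break
            if s + t in powers:                # s + t = C^z
                for a in powers[s]:
                    for b in powers[t]:
                        for c in powers[s + t]:
                            if gcd(gcd(a, b), c) == 1:
                                print("counterexample?", a, b, c)

beal_search(10**6)   # prints nothing, consistent with the conjecture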
H: problem concerning continuity of functions Let $f,g:X \rightarrow \mathbb R$ be both continuous functions. Prove that if $X$ is open, then the set $A=\{ x \in X \mid f(x) \neq g(x) \}$ is also open. Show that if $X$ is closed, then $A=\{x \in X \mid f(x)=g(x)\}$ is closed. Is there anyone who can help me to solve this problem? AI: Let $\varphi: X\to \mathbb R,\ x\mapsto f(x)-g(x)$; as a difference of continuous functions, $\varphi$ is continuous. In the first case, $$A=\varphi^{-1}(\mathbb R\setminus\{0\})$$ is the preimage of an open set, hence open in $X$; and an open subset of the open set $X$ is open in the ambient space. In the second case, $$A=\varphi^{-1}(\{0\})$$ is the preimage of a closed set, hence closed in $X$; and a closed subset of the closed set $X$ is closed in the ambient space.
H: What is this method called - Abelization of a Group. Today, I wanted to make a post for this question. There are some approach in which we can overcome the problem like this and this. According to my knowledge, I could solve the problem via the approach I had learned. Since I don't know what is this way called, so I refused to post an answer. I am very thankful if somebody tells me what is the method named? Here is some of the preliminaries but not the whole things because it is difficult for me to translate whole story (sorry). Definition: Let $R=\{r_1,r_2,...,r_m\}$, $X=\{x_1,x_2,...,x_n\}$ and $G=\langle X| R\rangle$. And consider the abelian group $G/G'$. If $\alpha_{ij}$ be the sum of the powers of $x_j$ in relation $r_i$ so we can call the following matrix, the relation matrix of abelian group $G/G'$: $$M=\begin{pmatrix} \alpha_{11} & \alpha_{12} & ... & \alpha_{1n}\\ \alpha_{21} & \alpha_{22} & ... & \alpha_{2n}\\ \vdots &\vdots &\vdots &\vdots\\ \alpha_{m1} & \alpha_{m2} & ... & \alpha_{mn} \end{pmatrix}$$ Definition: Let $G=\langle X| R\rangle$, such that $|X|-|R|\le 0$. If we can make $M$ to have an standard diagonal form: $$D:=\begin{pmatrix} d_1 & 0 & ... & 0 & 0\\ 0 & d_2 & ... & 0 & 0\\ 0 & 0 & d_3 &0 & 0\\ \vdots &\vdots &\vdots &\vdots &d_k\\ 0 & 0 & 0 &0 & 0\\\vdots &\vdots &\vdots &\vdots &\vdots\\0 & 0 & 0 &0 & 0 \end{pmatrix}_{m\times n}$$ wherein $d_i\in\mathbb N\cup\{0\}$, by employing elementary row operations, then we can have $G/G'\cong\mathbb Z_{d_1}\times\mathbb Z_{d_2}\times...\mathbb Z_{d_k}$. For example: Let $$G=\langle a,b\mid a^{2^{n-1}}=1, a^{2^{n-2}}=b^2, b^{-1}ab=a^{-1}\rangle$$ so $$G/G'=\langle a,b\mid a^{2^{n-1}}=1, a^{2^{n-2}}=b^2, a^2=1, [a,b]=1\rangle$$ Now we have $$M=\begin{pmatrix} 2^{n-1} & 0\\ 2^{n-2} & -2\\ 2 &0\\ \end{pmatrix}\xrightarrow{R_3\leftrightarrow R_1}\begin{pmatrix} 2 & 0\\ 2^{n-2} & -2\\ 2^{n-1} &0\\ \end{pmatrix}\to\begin{pmatrix} 2 & 0\\ 0 & -2\\ 0 &0\\ \end{pmatrix}$$ So $G/G'\cong\mathbb Z_{2}\times\mathbb Z_{2}$. Thanks for your time reading my question. :) AI: This is called computing the Smith normal form, and the application you describe features in this question: Determining the Smith Normal Form
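If you want to automate such computations, recent SymPy versions can compute Smith normal forms over $\mathbb{Z}$ (a sketch, assuming your SymPy ships sympy.matrices.normalforms.smith_normal_form; shown for the example above with $n=4$):

from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

n = 4
M = Matrix([[2**(n-1), 0],
            [2**(n-2), -2],
            [2,        0]])
print(smith_normal_form(M, domain=ZZ))
# expected: diag(2, 2) plus a zero row, i.e. G/G' = Z_2 x Z_2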
H: Determinant of symmetric matrix Given the following matrix, is there a way to compute the determinant other than using laplace till there're $3\times3$ determinants? \begin{pmatrix} 2 & 1 &1 &1&1 \\ 1 & 2 & 1& 1 &1\\ 1& 1 & 2 & 1 &1\\ 1&1 &1 &2&1\\ 1&1&1&1&-2 \end{pmatrix} AI: You can substract the first row from every other rows and get matrix of form: $$\begin{pmatrix} 2 & 1 &1 &1&1 \\ -1 & 1 & 0& 0 &0\\ -1& 0 & 1 & 0 &0\\ -1&0 &0 &1&0\\ -1&0&0&0&-3 \end{pmatrix}.$$ Computing the determinant is now much easier.
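Continuing the elimination (for instance, expanding after clearing the first column) yields $\det=-14$; a one-line numerical check, as a NumPy sketch:

import numpy as np

A = np.array([[ 2, 1, 1, 1,  1],
              [ 1, 2, 1, 1,  1],
              [ 1, 1, 2, 1,  1],
              [ 1, 1, 1, 2,  1],
              [ 1, 1, 1, 1, -2]])
print(np.linalg.det(A))   # -14.0 up to floating-point noise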
H: A Surjective Local Smooth Diffeomorphism That is Not A Covering Map Let $\pi:M_1\rightarrow M_2$ be a surjective $C^{\infty}$ map between two connected manifolds with $d\pi$ an isomorphism. If $M_1$ is compact, it is seen that $|\pi^{-1}(m_2)|$ is finite, so $\pi$ is a covering. If we only have $M_2$ compact, do we have a counterexample showing that $\pi$ doesn't have to be a cover? AI: The map $t\mapsto\exp(2\pi it)$ is a standard covering map from the line to the unit circle of the complex plane. If you restrict it to an interval like $(0,2)$, you get a counterexample for your question.
H: Is there software to help with group presentation I wrote a computer program that generates group presentations. I would like to know the sizes of the resulting groups. I know that this is undecidable. Are there good heuristic programs that can try to compute the size of a group given by generators and relations? I am not interested in apporximating the size. Only in determining the exact size, when the software can do it, hopefully often in my cases. AI: You can use Magma Online Calculator. For instance, the following code: F<a, b> := FreeGroup(2); G<x, y>, phi := quo< F | a^2, b^3, (a*b)^4 >; #G
H: Why is big-Oh multiplicative? If $f$ is $O(g)$ over some base, this means that $f(x) = \beta(x)g(x)$, where $\beta$ is eventually bounded. So this means that eventually, $f$ is at most $c$ times $g$, where $c$ is some constant. But I thought $O$ was a way of expressing that two functions are eventually approximately the same. In that case, this seems like a poor definition. If $g$ is unbounded, then the difference between $f$ and $g$ is also unbounded, so the two functions might be $O$ of one another and yet still diverge wildly. I would have used an additive definition: $f(x)=g(x)+\beta(x)$, where $\beta$ is eventually bounded. So then we know that $f(x)=g(x)\pm \epsilon$, where $\epsilon$ is some positive constant. We now have a statement on the error incurred by assuming $f=g$. Similar objections can be raised against the definition of asymptotic equivalence. $x^2\sim x^2+x$ as $x\rightarrow+\infty$ even though their difference grows without bound. So in what sense are they equivalent? AI: To appreciate these comparisons we have to look at some actual numbers. Let's take the case of $x^2+x$ vs. $x^2$. Let's suppose a calculation takes a millisecond and we have one algorithm $A_1$ that takes $x^2$ calculations to complete a task and another algorithm $A_2$ that takes $x^2+x$ calculations to complete the same task where $x$ is the problem size. If $x=1,000,000$ then $A_1$ takes $10^{12}$ milliseconds which is $10^9$ seconds which is about $31.7$ years. $A_2$ takes $10^{12} + 10^6$ milliseconds which is $10^9 + 10^3$ seconds and since $10^3$ seconds is $16$ minutes and $40$ seconds $A_2$ takes about $31.7$ years and an additional $17$ minutes. Is that significant?
H: Prove that a function f is continuous (1) $f:\mathbb{R} \rightarrow \mathbb{R}$ such that $$f(x) = \begin{cases} x \sin(\ln(|x|))& \text{if $x\neq0$} \\ 0 & \text{if $x=0$} \\ \end{cases}$$ Is $f$ continuous on $\mathbb{R}$? I want to use the fact that the composition of 2 continuous functions $$f:I \rightarrow J (\subset \mathbb{R})$$ $$g:J \rightarrow \mathbb{R}$$ $$g \circ f:I \rightarrow \mathbb{R}, x \mapsto g(f(x))$$ is continuous. 1) For $f=\ln(|x|)$: "By the Fundamental Theorem of Calculus, since $\ln x$ is defined as an integral, it is differentiable and its derivative is the integrand $\frac{1}{x}$. As every differentiable function is continuous, $\ln x$ is continuous." So $f=\ln(|x|)$, $I = (0, \infty)$, is continuous. 2) For $g= \sin(x)$: if $\epsilon > 0, \exists \delta>0:$ $$x \in J \wedge |x - x_0| < \delta , x_0 \in \mathbb{R} $$ $$\Rightarrow |f(x) - f(x_0)| < \epsilon \Leftrightarrow |\sin(x)-\sin(x_0)|< \epsilon$$ $$|\sin(x)| \leq |x|$$ $$\Leftrightarrow |\sin(x)-\sin(x_0)|\le|x - x_0| < \delta = \epsilon$$ So $g$ is continuous on $\mathbb{R}$. 3) Because $x$ is also continuous on $\mathbb{R}$, $x \sin(\ln(|x|))$ is continuous. Is my proof correct? Are there shorter ways to get this result? AI: For all $x\in (-\infty;0)\cup(0;+\infty)$ the function is continuous, since it is a product and composition of continuous functions (I think it is necessary to show this in the task; as mentioned in the comments, the real problem is $x=0$). By definition, a function is continuous at $x_0$ if $$\lim_{x\rightarrow x_0}f(x)=f(x_0)$$ In this case: $$\lim_{x\rightarrow 0}x\sin(\ln(|x|))=0$$ because $\lim_{x\rightarrow0}x=0$ and $|\sin(\ln(|x|))|\leq1$ (sine is bounded).
H: Does $i = -\frac{2\,W({\pi\over2})}{\pi}$? Let $x = -\frac{2\,W({\pi\over2})}{\pi}$, where $W$ denotes the Lambert W-function. As $${\log(i^2)\over i} = \pi$$ and $${\log(x^2)\over x}=\pi$$ does $x = i$? AI: Hint: assume $x=a+ib$; then $${\log(x^2)\over x}=\pi\to \pi(a+ib)=\log(x^2)=\ln(|x^2|)+i\arg(x^2)=2\ln(|x|)+2i\arg(x)$$$$\to \begin{cases} 2\ln(\sqrt{a^2+b^2})=\pi a \\ \pi b=2\arg(x) \\ \end{cases}$$
H: Questions of the book Introduction to commutative algebra by M. F. Atiyah and I. G. Macdonald. I have some questions of the book Introduction to commutative algebra by M. F. Atiyah and I. G. Macdonald. On Line 8-9 of Page 42, it is said that $(xs-a)t=0$ for some $t\in S$ iff $xst\in \mathfrak{a}$. If $(xs-a)t=0$, then $xst=at \in \mathfrak{a}$. But if $xst \in \mathfrak{a}$, could we conclude that $(xs-a)t=0$ for some $a\in \mathfrak{a}$? On Line 10-11 of Page 42, it is said that $\mathfrak{a} \in C$ iff $\mathfrak{a}^{ec} \subseteq \mathfrak{a}$. But on Page 10, Proposition 1.17(iii), it is said that $\mathfrak{a} \in C$ iff $\mathfrak{a}^{ec} = \mathfrak{a}$. AI: I don't think the implication you are asking about is very clear either (but being unpracticed at commutative algebra, I could be overlooking something.) I think they are being a bit terse at readers' expense. At the very least, we could conclude that the last item implies the third item in the chain of implications. That is, $x\in\cup_{s\in S}(\mathfrak{a}:s)$ implies $xs=a\in\mathfrak{a}$ for some $s\in S$, whence $\frac{x}{1}=\frac{a}{s}$. This would make the line of implications into a circle, so that equivalence is guaranteed everywhere. In 1.17 i), they show that $\mathfrak{a}\subseteq\mathfrak{a}^{ec}$ in all cases. Then if $\mathfrak{a}\in C$, $\mathfrak{a}=\mathfrak{a}^{ec}$ would be equivalent with $\mathfrak{a}\supseteq\mathfrak{a}^{ec}$
H: Closed linear subset of a Hilbert space If $H$ is a Hilbert space, and if $$(a,b)_H=0$$ for every $b \in B \subset H$, where $B$ is a closed linear subset of $H$, does it follow that $a=0$, the zero element of $H$? AI: No, because if $B\neq H$, $B$ will have a non-zero orthogonal complement. That means precisely that there exist non-zero $a\in H$ such that $(a,B)=0$. For an extreme example, just take $B=0$ in a non-zero Hilbert space.
H: A simpler proof that if $m\mid n$ then there is a ring homomorphism from $\mathbb{Z}_n$ onto $\mathbb{Z}_m$? Question 18.I.4 from Pinter's A Book of Abstract Algebra asks for a proof of the following, where $\mathbb{Z}_m$ and $\mathbb{Z}_n$ are treated as rings: If $n$ is a multiple of $m$, then $\mathbb{Z}_m$ is a homomorphic image of $\mathbb{Z}_n$. After reading ahead to the next chapter on quotient rings, the method that suggests itself to me is to use cosets. Let $n = md$ and define $f: \mathbb{Z}_n \to \mathbb{Z}_m$ to send $a + n\mathbb{Z} \mapsto a + m\mathbb{Z}$. This is a well-defined map, because if $a + n\mathbb{Z} = b + n\mathbb{Z}$ then $a - b \in n\mathbb{Z} = md\mathbb{Z} \subset m\mathbb{Z}$, and so $a + m\mathbb{Z} = b + m\mathbb{Z}$. Then the homomorphism properties of $f$ are clear from the definitions of coset addition and multiplication. However, this argument is really about the rings $\mathbb{Z}/(m)$ and $\mathbb{Z}/(n)$, and since the book hasn't defined quotient rings or given the homomorphism theorems up to this point I suspect that a proof using more basic concepts is called for. In fact, the book has not given a formal definition of $\mathbb{Z}_m$ as a ring, so I am not sure how to prove things about it in general. Should its elements be thought of as equivalence classes of integers, or as the integers $\{0, 1, ... m-1\}$ with addition and multiplication defined by taking remainders modulo $m$? So I have two questions: Is the argument above actually a proof of this statement? Is there a better proof using only the more basic properties of modulo arithmetic? AI: The argument is OK with the caveat that I hope you have a concrete explanation with the part where you say "are clear." You were probably just saving space, but as you know some students often jump the gun on what is clear :) Both of the "ways of thinking about" the elements of $\Bbb Z_m$ are valid since really they are the same thing. That is, $n$ is equivalent to $k$ if and only if $n-k\in m\Bbb Z$, if and only if $n+m\Bbb Z=k+m\Bbb Z$ as cosets. So, each equivalence class corresponds to one and only one coset.
H: Converting data to a specified range I am trying to convert data to a particular range of 0-10. The actual data may vary from 50000 to 26214400. I have broken this down into 4 parts as follows: 50000 - 1048576 -----> 0 - 2.5 1048576 - 5242880 -----> 2.5 - 5.0 5242880 - 15728640 -----> 5.0 - 7.5 15728640 - 26214400 -----> 7.5 - 10 How can I convert this data? Is there any formula to do this? I found a way to normalize data to the range $[0, 1]$ but am not able to apply the same idea here. AI: You can use the two-point form for each of the four lines. For the first, it is $y-0=(2.5-0)\frac {x-50000}{1048576-50000}$ The others are similar. I showed the $0$ explicitly as the lower range of $y$ in that interval.
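Putting the four pieces together (a Python sketch using the breakpoints from your table; each segment applies exactly the two-point form above):

SEGMENTS = [
    (50000,    1048576,  0.0, 2.5),
    (1048576,  5242880,  2.5, 5.0),
    (5242880,  15728640, 5.0, 7.5),
    (15728640, 26214400, 7.5, 10.0),
]

def to_range(x):
    for x0, x1, y0, y1 in SEGMENTS:
        if x0 <= x <= x1:
            # two-point form: y = y0 + (y1 - y0) * (x - x0) / (x1 - x0)
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside [50000, 26214400]")

print(to_range(50000), to_range(1048576), to_range(26214400))  # 0.0 2.5 10.0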
H: Show that the following matrix is diagonalizable My question is related to this question discussed in MSE. Let $J$ be a $3\times 3$ matrix with all entries $1$. Then prove that $J$ is diagonalizable. Can someone explain it in terms of the A.M. and G.M. (algebraic and geometric multiplicity) concepts? Thanks in advance for your time. AI: So we have $$ J = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix} $$ Let's compute the characteristic polynomial $\chi_J(X) = \det(J - X)$: \begin{align*} \chi_J(X) &= \det \begin{pmatrix} 1-X & 1 & 1 \\ 1 & 1-X & 1 \\ 1 & 1 & 1-X \end{pmatrix}\\ &= (1-X)^3 + 2 - 3(1-X)\\ &= 1 - 3X + 3X^2 - X^3 + 2 - 3 + 3X\\ &= -X^3 + 3X^2\\ &= -X^2(X-3) \end{align*} So $0$ (with algebraic multiplicity $2$) and $3$ (with algebraic multiplicity $1$) are the eigenvalues of $J$. To check whether $J$ is diagonalizable we will compute the geometric multiplicity of $0$, that is $\dim \ker (J-0) = \dim\ker J$. We do Gaussian elimination: Subtracting the first row from the second and the third gives that $$ \ker J = \ker \begin{pmatrix} 1 & 1 & 1 \\ 0 & 0&0\\ 0 & 0 & 0 \end{pmatrix}$$ This matrix has rank $1$, so its kernel has dimension $2$. So $\dim\ker J = 2$: the geometric multiplicity of the eigenvalue $0$ equals its algebraic multiplicity $2$, and the eigenvalue $3$ has geometric multiplicity $1$. Since the geometric multiplicities add up to $3$, $J$ is diagonalizable.
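(In fact $J$ is real symmetric, so the spectral theorem already guarantees diagonalizability.) A numerical cross-check, as a NumPy sketch:

import numpy as np

J = np.ones((3, 3))
eigvals, eigvecs = np.linalg.eig(J)
print(np.round(eigvals, 10))            # 3, 0, 0
print(np.linalg.matrix_rank(eigvecs))   # 3 independent eigenvectors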
H: Question on Galois extension of field of fractions I would like to ask if the following is true: Suppose we have an integral domain $A$ and a group action $G$ on $A$. We consider $A^{G}$ the subring of $A$ fixed by $G$. Let $L$ be the field of fractions of $A^{G}$ and $K$ the field of fractions of $A.$ Can we safely say that $K$ is a Galois extension of $L$ and if so, can we say that $A$ is a free $A^{G}$ module? Thanks! AI: Partial answer only: Let $K$ be a field and $G$ a finite group of automorphisms of $K$. Then the extension $K/K^G$ is Galois. Proof: for every $x\in K$ the orbit $Gx$ of $x$ under $G$ is invariant under $G$. The coefficients of the separable polynomial $ f_x:=\prod\limits_{a\in Gx} (X-a) $ thus are in $K^G$. One concludes that $K/K^G$ is separable algebraic. Now let $g\in K^G[X]$ be an irreducible polynomial having $x\in K$ as a root. Then all elements of $Gx$ are roots of $g$. Hence $f_x$ divides $g$, that is $g=f_x$, which shows that $g$ splits completely in $K$. Consequently $K/K^G$ is normal. The argument also shows that the ring extension $A/A^G$ is integral. The assertion is wrong if $G$ is infinite: let $k$ be an infinite field and let $G$ be the group of all $k$-automorphisms $g$ of the rational function field $K:=k(X)$ with the property $g(X)=\frac{aX+b}{cX+d}$. Then $k(X)^G=k$. A rather simple case in which freeness holds: let $A$ be a 1-dimensional, noetherian, integrally closed domain and assume that $A^G$ is local. Then $A/A^G$ is free. Proof: $A^G$ is 1-dimensional; by the Lying-over-Theorem there exists a maximal ideal $M$ of $A$ lying over the maximal ideal of $A^G$. By assumption $A_M$ is a discrete valuation ring. One always has $A\cap K^G = A^G$. Consequently $A_M\cap K^G=A^G$, hence $A^G$ is a discrete valuation ring. Moreover $A^G$ is integrally closed, that is $A$ is the integral closure of $A^G$ in the Galois extension $K/K^G$, thus $A/A^G$ is finite. Finitely generated, torsion-free modules over a discrete valuation ring are free. Note that one can check locality of $A^G$ in $A$ itself: $G$ acts on the maximal ideals of $A$, and $A^G$ is local iff all maximal ideals lie in the same orbit. In general in my opinion the problem of freeness seems to be difficult: several things can go wrong. If $A/A^G$ is free it must have rank equal to $|G|$ -- in particular the ring extension must be finite. It is difficult to ensure this without assumptions about noetherianity of $A^G$. But it seems that $A$ noetherian does not imply that $A^G$ is noetherian. And even in the noetherian and finite case, moreover assuming that $A$ is integrally closed, the extension $A/A^G$ is usually not free.
H: Bounded variation and $\int_a^b |F'(x)|dx=T_F([a,b])$ implies absolutely continuous If $F$ is of bounded variation defined on $[a,b]$, and $F$ satisfies $$\int_{a}^b |F'(x)|dx=T_F([a,b])$$ where $T_F([a,b])$ is the total variation, how to prove that $F$ is absolutely continuous? My Attempt: I used the inequality that $P_F([a,x])$ and $N_F([a,x])$ (positive and negative variation) are both monotonic non-decreasing functions, and thus their derivatives exist a.e. Thus $$\int_{a}^b |F'(x)|dx\le \int_a^b |P_F'([a,x])|dx+\int_a^b|N_F'([a,x])|dx\le P_F([a,b])+N_F([a,b])=T_F([a,b])$$ Then by the condition, the middle inequalities should all be replaced by equalities. But I cannot derive any useful information from the equalities, since the annoying absolute value cannot be diminished. I tried to prove $$F(x)=F(a)+\int_{a}^x F'(t)dt$$ but this doesn't work. Applying the definition of absolute continuity also failed to give me a clearer view. Thanks for your attention! AI: First establish equality: for every $x\in [a,b]$ $$ \int_a^x |F'| = T_F(a,x). $$ For this write $$ \Big( T_F(a,x) - \int_a^x |F'| \Big) + \Big( T_F(x,b) - \int_x^b |F'| \Big) = 0 $$ and note that each parenthesis above is $\ge 0$ (we have $T_F' = P_F' + N_F' \ge |P_F' - N_F'| =|F'|$ a.e., and each of these is the derivative of an increasing function; this should be part (a) of that exercise). Consequently, $$ \int_{a_k}^{b_k} |F'| = T_F(a_k,b_k) $$ for any subinterval $[a_k, b_k]$. Since $F'$ is integrable, given $\epsilon>0$, let $\delta>0$ be such that $\int_E |F'|< \epsilon$ whenever $m(E)<\delta$. It follows that for any set of disjoint intervals $(a_k, b_k)$ with $k=1,\ldots,N $ and $\sum_1^N (b_k-a_k)<\delta$ $$ \sum_1^N |F(a_k)-F(b_k)| \le \sum_1^N T_F(a_k, b_k) = \int_{\cup(a_k,b_k)} |F'| < \epsilon $$
H: Notation for Multiple summation Is there an alternate way to represent the multiple summation given below? $\displaystyle \Large \sum_{i_k=k}^{n} \space \sum_{i_{k-1}=k-1}^{i_k} \dots \sum_{i_2=2}^{i_3} \space \sum_{i_1=1}^{i_2}$ I guess it is wrong to write it as a product of summations like $\displaystyle \large \prod_{\Large j=k}^{1}\space \sum_{\Large i_j=j}^{\Large i_{j+1}}$. What's a better, more concise notation? Update: Another way is $\displaystyle \quad \large \sum\limits_{\Large n \geq i_k \geq i_{k-1} \geq \dots \geq i_2 \geq i_1 \geq 1}$. Are there any other ways? AI: I would use $$ \large \sum_{\substack{1 \le i_1 \le i_2 \le \cdots \le i_k \le n \\ i_j \ge j \;\; \forall j}} $$ A summation in this form is understood to be taken over all ordered tuples $(i_1,i_2,\cdots,i_k)$ which satisfy the inequalities given. Note that the condition $i_j \ge j$ must be stated explicitly; it is not implied by the chain $1 \le i_1 \le \cdots \le i_k \le n$ alone, so the notation in the update, which omits it, ranges over a strictly larger index set. See Wikipedia for a list of various notational conventions.
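To make the index set concrete, here is a small Python sketch (an editorial addition, not from the original exchange; all names are illustrative) that enumerates the tuples both ways and checks that the two descriptions agree:

    from itertools import combinations_with_replacement

    def by_inequalities(n, k):
        """Weakly increasing tuples (i_1,...,i_k) with i_j >= j and i_k <= n."""
        return [t for t in combinations_with_replacement(range(1, n + 1), k)
                if all(t[j] >= j + 1 for j in range(k))]

    def by_nested_sums(n, k):
        """Direct transcription of the nested sums in the question."""
        out = []
        def rec(j, upper, tail):
            if j == 0:
                out.append(tuple(tail))
                return
            for i in range(j, upper + 1):   # i_j runs from j up to i_{j+1}
                rec(j - 1, i, [i] + tail)
        rec(k, n, [])
        return out

    assert set(by_inequalities(6, 3)) == set(by_nested_sums(6, 3))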
H: Why is the harmonic function $ \log(x^2 + y^2) $ not the real part of any function that is analytic in $ \mathbb{C} - \{0\} $? I would like to show that $ \log(x^2 + y^2) $ is not the real part of any analytic function in $ \mathbb{C} - \{0\} $. A similar question can be found here, but I don't think this argument is satisfactory. Here are my thoughts on the problem. On the domain $ \mathbb{C} - \{x \geq 0\} $, $ \log(x^2 + y^2) = \mathfrak{Re}(\log(z^2)) $. Therefore, if I can show that the only analytic function with real part $ \log(x^2 + y^2) $ is $ \log(z^2) $, then the problem is solved, since $ \log(z^2) $ is not analytic on the positive real axis. The problem is that I don't know how to show that the only analytic function with real part $ \log(x^2 + y^2) $ is $ \log(z^2) $. Any tips? AI: Be careful when saying that $\log(z^2)$ is "the only analytic function" with real part $\log(x^2 + y^2)$, because (a) however you define it, you will have some complex angle (branch cut) where it is undefined, and (b) $\log(z^2) + c$ also works for any purely imaginary constant $c$. So really the candidates are some branch of $\log(z^2)$, plus some imaginary constant. (In this case, the imaginary constant can also be absorbed into the choice of branch.) Then, when you conclude that this doesn't work because "$\log(z^2)$ is not analytic on the positive real axis", you should modify this to say it is not analytic on whatever branch cut you took. In general, if $a(x,y) + b(x,y)i$ and $a(x,y) + c(x,y)i$ are two analytic functions on some domain then, as O.L. mentions in the comments, taking the difference and dividing by $i$ shows that $b(x,y) - c(x,y)$ is analytic; being real-valued, the Cauchy-Riemann equations force it to be constant on a connected domain. So $b(x,y)$ and $c(x,y)$ differ by a constant.
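A branch-free way to finish, added here as a supplement to the answer above: if $f = u + iv$ were analytic on $\mathbb{C}\setminus\{0\}$ with $u = \log(x^2+y^2)$, the Cauchy-Riemann equations give $$f'(z) = u_x - iu_y = \frac{2x}{x^2+y^2} - i\,\frac{2y}{x^2+y^2} = \frac{2\bar{z}}{|z|^2} = \frac{2}{z},$$ and then $\oint_{|z|=1} f'(z)\,dz = 4\pi i \neq 0$, which is impossible because $f'$ has an antiderivative (namely $f$) on the whole domain.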
H: Can we conclude that this summation is positive? The summation is: $$S_n=n\sum_{i=1}^n x_i^2- \left ( \sum_{i=1}^nx_i\right)^2$$ where $n>1$ and $x_1,x_2,\ldots,x_n\in \mathbb{R}$. I'm trying to prove that if $x_i\neq x_j$ for $i\neq j$ then $S_n>0$. If $n=2$, it's easy: $$ S_2=(x_1-x_2)^2>0$$ But if $n\geq 3$ it seems hard to do. I thought about using induction, but I'm not able to finish. Can someone help me? Thanks. AI: Let $\displaystyle \bar{x}=\frac{\sum_{i=1}^{n}x_i}{n}$. Then $$\sum_{i=1}^n \left(x_i-\bar{x}\right)^2=\sum_{i=1}^n \left(x_i^2+\bar{x}^2-2x_i\bar{x}\right)=\sum_{i=1}^n x_i^2+n\bar{x}^2-2\bar{x}\sum_{i=1}^n x_i=\sum_{i=1}^n x_i^2+n\bar{x}^2-2n\bar{x}^2=\sum_{i=1}^n x_i^2-n\bar{x}^2$$ So we have $$S_n=n\left(\sum_{i=1}^n x_i^2-n\left(\frac{\sum_{i=1}^n x_i}{n}\right)^2\right)=n\left(\sum_{i=1}^n x_i^2-n\bar{x}^2\right)=n\sum_{i=1}^n (x_i-\bar{x})^2\ge 0,$$ with equality if and only if every $x_i$ equals $\bar{x}$. If the $x_i$ are pairwise distinct and $n>1$, they cannot all equal $\bar{x}$, so at least one term $(x_i-\bar{x})^2$ is strictly positive, and hence $S_n>0$.
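A quick numerical sanity check of the identity $S_n = n\sum_i (x_i-\bar{x})^2$ (an editorial sketch, not part of the original answer):

    import math, random

    def S(xs):
        n = len(xs)
        return n * sum(x * x for x in xs) - sum(xs) ** 2

    def n_times_centered_sum(xs):
        n = len(xs)
        mean = sum(xs) / n
        return n * sum((x - mean) ** 2 for x in xs)

    xs = [random.uniform(-10, 10) for _ in range(7)]
    assert math.isclose(S(xs), n_times_centered_sum(xs), rel_tol=1e-9, abs_tol=1e-9)
    assert S([3.0] * 5) == 0.0   # equality case: all x_i equal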
H: Verifying the trigonometric identity $\cos{x} - \frac{\cos{x}}{1 - \tan{x}} = \frac{\sin{x} \cos{x}}{\sin{x} - \cos{x}}$ I have the following trigonometric identity $$\cos{x} - \frac{\cos{x}}{1 - \tan{x}} = \frac{\sin{x} \cos{x}}{\sin{x} - \cos{x}}$$ I've been trying to verify it for almost 20 minutes but keep coming up with nothing. Thank you AI: Observe that we need to eliminate $\tan x$. So, using $\tan x=\frac{\sin x}{\cos x}$, $$\cos x-\frac{\cos x}{1-\tan x}$$ $$=\cos x\left(1-\frac{1}{1-\frac{\sin x}{\cos x}}\right)$$ $$=\cos x\left(1-\frac{\cos x}{\cos x-\sin x}\right) (\text{ multiplying numerator and denominator by }\cos x)$$ $$=\cos x\left(1+\frac{\cos x}{\sin x-\cos x}\right)$$ $$=\cos x\cdot\frac{\sin x-\cos x+\cos x}{\sin x-\cos x}$$ $$=\cos x\cdot\frac{\sin x}{\sin x-\cos x}=\frac{\sin x\cos x}{\sin x-\cos x},$$ as required.
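As a quick numerical double-check (an editorial addition, not part of the original answer):

    import math

    def lhs(x):
        return math.cos(x) - math.cos(x) / (1 - math.tan(x))

    def rhs(x):
        return math.sin(x) * math.cos(x) / (math.sin(x) - math.cos(x))

    # avoid x = pi/4 + k*pi, where 1 - tan(x) and sin(x) - cos(x) vanish
    for x in (0.3, 1.0, 2.5, -1.2):
        assert math.isclose(lhs(x), rhs(x), rel_tol=1e-12)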
H: Every perfect cube is the difference of two perfect squares? How would you prove this without induction? I know that one easy way is using al-Karaji's identity (namely that $1^3+2^3+3^3+\cdots+n^3=(1+2+3+\cdots+n)^2$), but are there other ways? Thanks! AI: If $a^2-b^2=n^3=n^2\cdot n$, we can set $a+b= n^2$ and $a-b=n$, so that $a=\frac{n^2+n}2=\frac{n(n+1)}2$, which is an integer as $n(n+1)$ is even. Similarly, $b=\frac{n^2-n}2=\frac{n(n-1)}2$. More generally, if $a^2-b^2=n^{k+1}=n^k\cdot n$ for integer $k\ge2$, we can set $a+b= n^k$ and $a-b=n$, so that $a=\frac{n^k+n}2=\frac{n(n^{k-1}+1)}2$. Observe that $n$ and $n^{k-1}+1$ have opposite parities, making the product even. Similarly, $b=\frac{n^k-n}2=\frac{n(n^{k-1}-1)}2$ is an integer by the same parity argument.
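A brief computational check of both constructions (an editorial sketch, not from the original answer):

    for n in range(1, 200):
        a, b = n * (n + 1) // 2, n * (n - 1) // 2
        assert a * a - b * b == n ** 3
        for k in range(2, 6):   # the generalization to n^(k+1)
            a, b = n * (n ** (k - 1) + 1) // 2, n * (n ** (k - 1) - 1) // 2
            assert a * a - b * b == n ** (k + 1)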
H: Limits preserve weak inequalities Does anyone have a quick way to show that if $f(x)$ tends to a limit $A$ as $x$ tends to $0$, and $f(x) \geq 0$, then $A \geq 0$? Thanks AI: For every $\epsilon>0$ there is a $\delta>0$ such that $|f(x)-A|<\epsilon$, i.e. $A-\epsilon < f(x) < A+\epsilon$, whenever $0<|x|<\delta$. If $A<0$, choose $\epsilon=|A|/2$; then $f(x)<A+|A|/2=A/2<0$ for all such $x$, contradicting $f(x)\ge 0$. Hence $A\ge 0$.
H: summary of a summary At the printing company I work for, we have different materials that we print on; part numbers for those materials are assigned based on the unique supplier, material type, and material width (so the same supplier could send us two different widths of the same material type, and they'd get different part numbers). With each part number, our supplier sends us a statistical summary of their quality-control data - how many samples were taken ($n$), the average thickness of the samples, and the standard deviation. For the most common parts, we might get a few hundred different averages, $n$'s, and standard deviations. I'm trying to group those summaries together. Since the sample size is so high, I'm fine averaging the average thicknesses for each part number; the numbers are closely grouped enough that it shouldn't make a significant difference. My question is, how do I summarize the standard deviation? The quality tests are standardized and the instruments used to measure the thickness are constantly calibrated, so I have no reason to think that the measurements would be inconsistent from one shipment to the next. AI: The standard unbiased estimator for the underlying variance (assuming, as you describe, a common variance across shipments) is the pooled variance, $s_p^2$, defined by the formula $$ s_p^2 = \frac{\sum_{i=1}^k(n_i-1)s_i^2}{\sum_{i=1}^k(n_i-1)}. $$ The pooled standard deviation is then $s_p$ (take the square root of the pooled variance). Here $n_i$ is the sample size of the $i$th sample, and $s_i$ is the $i$th sample standard deviation. Edit: I should also mention that the usual unbiased way to average the means (the method you described, simply averaging them, is unbiased, but is not the minimum-variance unbiased estimator) is the sample-size-weighted mean $$ \mu = \frac{\sum_{i=1}^kn_i\mu_i}{\sum_{i=1}^kn_i} $$ where $\mu_i$ is the $i$th sample mean.
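For readers who want to apply this directly, here is a short Python sketch of both formulas (an editorial addition; the function and variable names are illustrative, and it assumes every shipment reports $n_i \ge 2$):

    import math

    def pool(ns, means, sds):
        """Combine per-shipment summaries (n_i, mean_i, sd_i) into one summary,
        assuming a common underlying mean and variance across shipments."""
        total = sum(ns)
        pooled_mean = sum(n * m for n, m in zip(ns, means)) / total
        pooled_var = (sum((n - 1) * s ** 2 for n, s in zip(ns, sds))
                      / sum(n - 1 for n in ns))
        return total, pooled_mean, math.sqrt(pooled_var)

    # e.g. three shipments of the same part number (thicknesses in mm):
    n, mean, sd = pool([30, 25, 40], [0.101, 0.103, 0.099], [0.004, 0.005, 0.0045])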