H: Proving convergence of a Hilbert modular theta function $\vartheta(z):= \sum\limits_{x \in \mathcal{O}_F} e^{\pi i \operatorname{Tr}(x^2 z)}$ I'm trying to understand a somewhat sketchy proof that I found online of the convergence of the analog of Jacobi's theta function $\displaystyle{\theta(\tau) := \sum_{n = -\infty}^{\infty} e^{2 \pi i n^2 \tau } = 1 + 2 \sum_{n = 1}^{\infty}e^{2 \pi i n^2 \tau } }$ in the context of Hilbert modular forms for real quadratic fields. It is well known that this function is holomorphic on the upper half plane $\mathbb{H}$, and I made sure to understand this well and wrote down a detailed proof of it before asking this question. The context of my question is as follows. Let $F = \mathbb{Q}(\sqrt{d})$ be a real quadratic field, with $d > 0$ a positive square-free integer such that $d \equiv 2, 3 \pmod{4}$. Assume also that $F$ has class number $1$. Its ring of integers is then $\mathcal{O}_F = \mathbb{Z}[\sqrt{d}] = \mathbb{Z} \oplus\sqrt{d} \, \mathbb{Z} = \{ n + m\sqrt{d} \mid n, m \in \mathbb{Z} \}$. There are two real embeddings $\sigma_i : F \hookrightarrow \mathbb{R}$ given by $\sigma_1(a + b\sqrt{d}) = a + b\sqrt{d}$ and $\sigma_2(a + b\sqrt{d}) = a - b\sqrt{d}$, so that $\sigma_1$ is just the identity map, and $\sigma_2$ gives the "conjugate" of an element. Now for $z = (z_1, z_2) \in \mathbb{H} \times \mathbb{H}$ and $\alpha \in F$, we define the "trace" as follows: $\operatorname{Tr}(\alpha z) := \sigma_1(\alpha) z_1 + \sigma_2(\alpha) z_2 = \alpha z_1 + \alpha' z_2$, where we write $\alpha'$ for the conjugate of $\alpha$. Finally we define the holomorphic Hilbert modular theta function $\vartheta: \mathbb{H} \times \mathbb{H} \rightarrow \mathbb{C}$ by $$ \vartheta(z) := \vartheta(z_1, z_2) := \sum_{x \in \mathcal{O}_F} e^{\pi i \operatorname{Tr}(x^2 z)} $$ I'm trying to prove the absolute convergence of this infinite series. The argument I found online: To prove the convergence of the series, it is compared to the Jacobi theta function defined above. To be precise, the author observes that $$ |\vartheta(z)| = \left| \sum_{x \in \mathcal{O}_F} e^{\pi i \operatorname{Tr}(x^2 z)} \right| = \left| \sum_{x \in \mathcal{O}_F} e^{\pi i x^2 z_1} e^{\pi i x'^2 z_2} \right| \leq \left| \sum_{x \in \mathcal{O}_F} e^{2 \pi i x^2 z_1} \right|^{1/2} \left| \sum_{x \in \mathcal{O}_F} e^{2 \pi i x'^2 z_2} \right|^{1/2} $$ where apparently the Cauchy-Schwarz inequality has been used in the last step, although I'm not convinced that it is true as stated. In any case, it is then noted that since the Jacobi theta function defined above $\displaystyle{\theta(\tau) = \sum_{n = -\infty}^{\infty} e^{2 \pi i n^2 \tau }}$ is holomorphic on $\mathbb{H}$, each of the two series on the right-hand side of the above inequality converges absolutely on compact sets in $\mathbb{H}$, so that the original series $\vartheta(z_1, z_2)$ converges absolutely on compact subsets of $\mathbb{H} \times \mathbb{H}$. Questions: Can someone please help me understand how the convergence of each of the series on the right-hand side of the above inequality follows from the corresponding fact for the Jacobi theta function? I mean, since each of the two sums on the RHS of the inequality is over all algebraic integers $x \in \mathcal{O}_F = \mathbb{Z} \oplus \sqrt{d} \, \mathbb{Z}$, the sums are actually double sums, so it doesn't seem to follow trivially. Is the argument where the Cauchy-Schwarz inequality is used correct?
If I'm not mistaken, I think that the absolute value should be inside the infinite series, like this $$ \left | \sum x_n y_n \right| \leq \left( \sum |x_n|^2 \right ) ^{1/2} \left( \sum |y_n|^2 \right ) ^{1/2} $$ Thank you very much for any help. AI: The convergence of the sums on the right-hand side is essentially the absolute convergence of $$ \sum_{m,n\in \mathbb{Z}} e^{-am^2-bmn-cn^2}$$ where $ax^2+bxy+cy^2$ is a positive-definite quadratic form. Because of absolute convergence, there is no problem with using Cauchy-Schwarz; note that absolute convergence implies convergence. You are right that at the step where Cauchy-Schwarz is applied, the absolute values need to be inside the sums.
H: Show that $8 \mid (a^2-b^2)$ for $a$ and $b$ both odd If $a,b \in \mathbb{Z}$ and odd, show $8 \mid (a^2-b^2)$. Let $a=2k+1$ and $b=2j+1$. I tried to get $8\mid (a^2-b^2)$ into some equivalent form involving congruences and I started with $$a^2\equiv b^2 \mod{8} \Rightarrow 4k^2+4k \equiv 4j^2+4j \mod{8}$$ $$\Rightarrow k^2+k-j^2-j=2m$$ for some $m \in \mathbb{Z}$ but I am not sure this is heading anywhere that I can tell. Second attempt: Use Euler's Theorem: as $\gcd(a,8)=\gcd(b,8)=1$ and $\phi(8)=4$, $a^4 \equiv b^4 \equiv 1 \mod 8$ so $a^4-b^4\equiv 0 \mod{8}$. I haven't gotten much further; are there any hints? AI: HINT: $k^2-j^2+k-j=(k-j)(k+j+1)$ As $(k+j+1)-(k-j)=2j+1$ which is odd, they must be of opposite parity, so exactly one of them is divisible by $2$ Method 2: If $a,b$ are odd, observe that one of $(a-b),(a+b)$ is divisible by $4,$ the other by $2$ Method 3: $(2a+1)^2=4a^2+4a+1=8\frac{a(a+1)}2+1\equiv 1\pmod 8$
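A quick brute-force sanity check of Method 3's conclusion, added for illustration (a minimal Python sketch; the sample ranges are arbitrary):

```python
# Every odd square is congruent to 1 mod 8, hence 8 divides a^2 - b^2 for odd a, b.
assert all((n * n) % 8 == 1 for n in range(-99, 100, 2))
assert all((a * a - b * b) % 8 == 0
           for a in range(1, 50, 2)
           for b in range(1, 50, 2))
print("checked: odd squares are 1 mod 8, so 8 | a^2 - b^2")
```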
H: Drawing dynamic circles based on input value Is there a formula that will allow me to calculate the radius of a circle based on an input value? The input value could be as small as zero or as large as $10^7$, or larger. The circle is restricted to a minimum radius of $10$ and a maximum radius of $100$. Does anyone know how to calculate something like this? UPDATE The input values correspond to state/country population. I want to calculate the radius (how big the circle should be) of the circle based on the input value. AI: I'll expand on what I understood from your comment. Let $\{x_i\}$ be the set of inputs. Then the range of inputs is given by $\max\{x_i\}-\min\{x_i\}$. The radius for the $i$th input is then given by $$R=10+90\left(\frac{x_i-\min\{x_i\}}{\max\{x_i\}-\min\{x_i\}}\right)$$
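To see the answer's formula in use, here is a small Python sketch added for illustration (the function name and the sample population values are mine, not from the original post):

```python
def radius(x, xs, r_min=10.0, r_max=100.0):
    """Linearly map an input value x to a radius between r_min and r_max."""
    lo, hi = min(xs), max(xs)
    if hi == lo:                       # all inputs equal: avoid division by zero
        return (r_min + r_max) / 2
    return r_min + (r_max - r_min) * (x - lo) / (hi - lo)

populations = [0, 5_000, 250_000, 10_000_000]   # hypothetical inputs
print([round(radius(p, populations), 1) for p in populations])
# -> [10.0, 10.0, 12.2, 100.0]
```

The smallest input always maps to radius 10 and the largest to 100; everything else is interpolated linearly between them.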
H: Evaluation of a partial sum of complex numbers Let $w = e^{i\frac{2\pi}5}$. I would like to evaluate $$w^0 + w^1 + w^2 + w^3 +...+ w^{49}$$ Can anyone please give me an idea how to evaluate the expression? Thanks in advance AI: This is the sum of the fifth roots of unity, ten times over. See here for a discussion of why each of these sums is zero.
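Concretely: since $w \neq 1$ and $w^5 = 1$, the geometric series gives $\sum_{k=0}^{49} w^k = \frac{1-w^{50}}{1-w} = 0$ because $w^{50} = (w^5)^{10} = 1$. A floating-point check in Python, added for illustration:

```python
import cmath

w = cmath.exp(2j * cmath.pi / 5)        # a primitive fifth root of unity
s = sum(w ** k for k in range(50))      # w^0 + w^1 + ... + w^49
print(abs(s))                           # ~1e-14: zero up to rounding error
```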
H: Non-parallel vectors confusion I've got a section in my textbook about non-parallel vectors, it says: For two non-parallel vectors a and b, if $\lambda a + \mu b = \alpha a + \beta b$ then $\lambda = \alpha $ and $\mu = \beta $ Okay I get that you can equate coefficients and solve for mu and lambda, but how are the two sides of the equation equal in the first place? How can you just equate two different vectors to each other like that? I'm just confused and I'm not entirely sure what about. I've tried googling but not much turns up. I'd love it if someone could explain in basic terms what this equation is telling me (that non-parallel vectors are equal?), as I've only just been introduced to this topic recently. Thank you. AI: If you are in $\mathbb{R}^n$ or in any vector space, then you can always compare two vectors. They are just two elements of a set, so equality makes perfect sense. Perhaps the following description can help you. If you rewrite this equation, you will get $$(\lambda -\alpha)a=(\beta -\mu)b$$ which, provided $\lambda\neq\alpha$, is the same as $$a=\frac{\beta -\mu}{\lambda -\alpha}\,b.$$ That is, $a$ is a scalar multiple of $b$ (recall that if $x=cy$ for two vectors $x,y$ and a scalar $c$, then $x$ and $y$ are parallel). Therefore, if $a$ and $b$ are not parallel, this is impossible, so we must have $\lambda-\alpha=0$, and then $\beta-\mu=0$ as well, which gives the result.
H: How to take the dot product of two vectors in different planes? I have vectors $A$, $B$ and $C$; vectors $A$ and $B$ lie in the $xy$ plane while vector $C$ lies in the $xz$ plane. I need to find the dot product $A\cdot C$; how should I do that? My book says that the dot product of two vectors can be expressed in terms of their rectangular components. Vector $B$ lies along the $y$-axis, vector $A$ makes $60$ degrees with $B$, and vector $C$ makes $37$ degrees with the $x$-axis. $A=10$, $B=8$ and $C=5$. Please help me to solve this problem. AI: Since $B$ lies along the $y$-axis, $A$ makes $30$ degrees with the $x$-axis, so that: $$A=10\cos30^\circ\, i+10\sin30^\circ\,j\qquad A=5\sqrt3\,i+5\,j$$ $$C=5\cos37^\circ\,i+5\sin37^\circ\,k\approx 4\,i+3\,k$$ $$A\cdot C=20\sqrt3$$ (Here $\cos 37^\circ\approx 4/5$ and $\sin 37^\circ\approx 3/5$, the usual $3$-$4$-$5$ triangle approximation.)
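A numeric cross-check of the computation above, added for illustration (a small Python sketch using exactly the component representation in the answer):

```python
import math

# A lies in the xy-plane at 30 degrees from the x-axis; C lies in the xz-plane.
A = (10 * math.cos(math.radians(30)), 10 * math.sin(math.radians(30)), 0.0)
C = (5 * math.cos(math.radians(37)), 0.0, 5 * math.sin(math.radians(37)))
dot = sum(a * c for a, c in zip(A, C))
print(dot)                  # ~34.58 with the exact 37-degree angle
print(20 * math.sqrt(3))    # ~34.64 with the 3-4-5 approximation
```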
H: How do you deduce X given a set of truth values? Given some truth values, how can we deduce what the form is like? For example:

P Q R X
T T T T
T T F F
T F T T
T F F T
F T T T
F T F F
F F T F
F F F T

Here, given the truth values of X, can you deduce the form of X in terms of P, Q, R? I think if time is given one can guess the formula, but if the case becomes more complicated, it seems very inefficient to guess. Is there any more general technique or algorithm? AI: For any truth table, there are many logical expressions that have that truth table. We give a simple mechanical method for producing such an expression. The main disadvantage is that in general, the expression so obtained is quite a bit longer than necessary. Look at all the entries that give truth value $T$. The first one is $TTT$. Write down $P\land Q\land R$. The next one is $TFT$. Write down $P\land \lnot Q\land R$. The next one is $TFF$. Write down $P\land \lnot Q\land \lnot R$. Continue, one for every combination of truth values that yields $T$. The remaining two are $FTT$ and $FFF$. Write down the appropriate conjunctions. Put $\lor$'s between all the things you wrote down. We get the expression $$(P\land Q\land R)\lor (P\land \lnot Q\land R)\lor (P\land \lnot Q\land \lnot R) \lor (\lnot P\land Q\land R)\lor (\lnot P\land \lnot Q\land \lnot R) .$$ A little thinking shows why this gives the right truth table. It certainly gives $T$ at the right places. And at the places where the truth table yielded $F$, every one of the conjunctions that were $\lor$-ed is false. The above procedure gives us what is called the full disjunctive normal form (DNF). This form may be quite inefficient. There are good algorithms for simplifying a DNF to use a much smaller number of connectives. This minimization process is an important part of circuit design. We can also do some hand-optimization. For example, note that there are only $3$ combinations that give the value $F$, as against $5$ that give the value $T$. So write down the disjunctive normal form we would get if the $F$ and $T$ in the last column had been switched, put a $\lnot$ in front, and use De Morgan's Laws to simplify. We will get a conjunctive normal form that is simpler than the DNF we got. There are many other tricks/methods.
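The mechanical procedure in the answer is easy to automate. Below is a minimal Python sketch added for illustration (the function `full_dnf` and the string encoding of literals are my own); it reproduces the full DNF given above for the question's table:

```python
from itertools import product

def full_dnf(table, names=("P", "Q", "R")):
    """Build the full disjunctive normal form from a truth table.

    `table` maps each tuple of truth values for the variables to the value X.
    """
    terms = []
    for values in product([True, False], repeat=len(names)):
        if table[values]:
            # One conjunction per row that yields T, negating the F variables.
            lits = [n if v else "~" + n for n, v in zip(names, values)]
            terms.append("(" + " & ".join(lits) + ")")
    return " | ".join(terms)

# The X column from the question, in row order TTT, TTF, ..., FFF.
X = [True, False, True, True, True, False, False, True]
table = dict(zip(product([True, False], repeat=3), X))
print(full_dnf(table))
# (P & Q & R) | (P & ~Q & R) | (P & ~Q & ~R) | (~P & Q & R) | (~P & ~Q & ~R)
```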
H: How can I show this inequality? Let $n>1$ and $a_1,...,a_n \in \mathbb{R}^+$ be such that $\sum a_i=1$. For every $i$, define $b_i=\sum_{j=1,j\neq i}a_j$. Show that $\sum_{k=1}^n \dfrac{a_k}{1+b_k}\ge \dfrac{n}{2n-1}$ Thanks a lot for any suggestion. AI: Note that $b_i=1-a_i$. Thus using the AM-HM inequality gives \begin{align} \sum_{k=1}^{n}{\frac{a_k}{1+b_k}}=\sum_{k=1}^{n}{\frac{a_k}{2-a_k}}=\sum_{k=1}^{n}{(\frac{2}{2-a_k}-1)}& =2\left(\sum_{k=1}^{n}{\frac{1}{2-a_k}}\right)-n \\ & \geq \frac{2n^2}{\sum_{k=1}^{n}{(2-a_k)}}-n \\ &=\frac{2n^2}{2n-1}-n \\ & =\frac{n}{2n-1} \end{align}
H: Finite equivalence classes, same cardinality For an equivalence relation $\sim$, if each equivalence class has a finite number of elements, and $X$ is an infinite set, then is it true that $|X/\sim|=|X|$? I can prove injectivity one way by defining the map $f:X/\sim\rightarrow X$ by picking an element from each equivalence class, but I am not sure how I would go about the other way. AI: The quotient set $X / {\sim}$ is a partition of $X$ into finite sets $[X]_i$ with $i < |X / {\sim}|$ (the axiom of choice here is implicit: $i$ is an ordinal less than the cardinal number $|X / {\sim}|$). Map each $f_i : [X]_i \to \omega \times \{ i \}$ injectively since they are finite. Take the union $\displaystyle \bigcup_{i < |X / {\sim}|} f_i : X \to \omega \times |X / {\sim}|$, which is injective, since the images of the component functions are disjoint. So we have $$|X| \le |\omega \times X / {\sim}| \le |\omega \times X| = |X|$$ since $X$ is an infinite set. $|\omega \times X / {\sim}| = | X / {\sim}|$ because $X / {\sim}$ is infinite (easily seen because otherwise, $X = \bigcup X / {\sim}$ is the finite union of finite sets).
H: Schur's Lemma in Group Theory The analogue of the celebrated Schur's Lemma in group theory would be: "If $G$ is a finite simple group, and $\phi$ is a non-identity homomorphism from $G$ to $G$, then $\phi$ is an isomorphism". The proof follows exactly the same lines as in representation theory. I would like to ask the following question, which is a converse of Schur's lemma in some sense. Question 1: If $G$ is a finite group, $|G|>2$, such that every non-identity homomorphism $\phi\colon G\rightarrow G$ is an automorphism, should $G$ necessarily be simple? The progress towards the solution I made is the following: if $G$ satisfies the hypothesis in the question and $G$ is non-abelian, then $G$ is not solvable: if $G$ is non-abelian and solvable, then it has a non-identity abelian quotient $G^{ab}=G/[G,G]$. Let $p$ be a prime divisor of $|G^{ab}|$ (hence $p$ also divides $|G|$). Then there is a surjective homomorphism from $G^{ab}$ to $C_p$ (the cyclic group of order $p$). We can embed $C_p$ in $G$. Then, consider the composition of the homomorphisms: $G\rightarrow G^{ab} \rightarrow C_p\rightarrow G$. It is a non-identity homomorphism from $G$ to $G$ which is clearly not an automorphism. Note that if $G$ is abelian, and not cyclic of prime order, then we can easily find non-identity homomorphisms $G\rightarrow G$ which are not automorphisms. I would also like to ask a general question (removing the condition of finiteness in the above question). Question 2: If $G$ is an infinite group such that every non-identity homomorphism $\phi\colon G\rightarrow G$ is an automorphism, should $G$ necessarily be simple? AI: Nice question! The answer is no. The smallest potential counterexample works. Let $G$ be the binary icosahedral group. This is a perfect group of order $120$ (in fact the smallest nontrivial such group which is not simple). Its only nontrivial normal subgroup is its center $\pm 1$, hence its only nontrivial quotient is the icosahedral group $A_5$, which cannot occur as a subgroup of $G$, since any subgroup of $G$ of order $60$ is necessarily normal. Hence $G$ is not simple but every non-identity homomorphism $G \to G$ is an isomorphism. Dan Shved gives an example for your second question in the comments.
H: Prove $\sin \alpha+\sin \beta+\sin \gamma \geq\sin 2\alpha+\sin 2\beta+\sin 2\gamma $ Prove that $\sin \alpha+\sin \beta+\sin \gamma \geq\sin 2\alpha+\sin 2\beta+\sin 2\gamma $ where $\alpha, \beta, \gamma$ are the angles of a triangle AI: Use $$\sin{2A}+\sin{2B}=2\sin{C}\cos{(A-B)}\le 2\sin{C},$$ $$\sin{2B}+\sin{2C}\le 2\sin{A},$$ $$\sin{2C}+\sin{2A}\le 2\sin{B}.$$ Adding the three inequalities and dividing by $2$ gives the result.
H: determinants of 2 matrices with given property I have two $3\times3$ integer matrices $A$ and $B$ such that $AB=A+B$. I need to find all possible values of $\det(A-E)$, where $E$ denotes the identity matrix. Any help is appreciated. AI: Hints: $$\bullet\;\;\;\;\;AB=A+B\implies (A-I)(B-I)=I\;,\;\;I:=\text{the unit (identity) matrix}$$ $$\bullet\;\;\;\;\text{For square matrices $\,X,Y\,$ of the same order, it's true that}\;\;\det(XY)=\det X\cdot\det Y$$
H: Is the function identically zero? Let $f(x, y)$ be a continuous, real-valued function on $\mathbb{R}^2$. Suppose that, for every rectangular region $R$ of area 1, the double integral of $f(x, y)$ over $R$ equals 0. Must $f(x, y)$ be identically 0? AI: Hint: Consider $g(y)=\int_0^1 f(x,y)dx$. Since the integral of $f$ over $[0,1]\times[y,y+1]$ equals that over $[0,1]\times[y+\epsilon,y+1+\epsilon]$, we conclude $\int_y^{y+\epsilon} g(t)\,dt=\int_{y+1}^{y+1+\epsilon}g(t)\, dt$ and in the limit $g(y+1)=g(y)$, that is, $g$ is periodic with period $1$.
H: Integration with change of variables (multivariable). The following are the problems that I have been working on. They involve changes of variables with two and three variables respectively. (1) Let $R$ be the trapezoid with vertices at $(0,1),(1,0),(0,2)$ and $(2,0)$. Using the substitutions $u = y-x$ and $v = y +x$, evaluate $$\int\int_R e^{{y-x}\over{y+x}} dA.$$ (2) Evaluate the following integral $$\int\int\int_D (x^2y + 3xyz)dV$$ where $D$ is the region in 3-space defined by $1 \le x \le 2, 0 \le xy \le 2, 0 \le z \le 1$, using the substitution $u =x , v = xy, w=3z$. Here is what I tried. For (1), I understand the part where the parallel lines $x+y = 1$ and $x+y=2$ are used to find the limits of integration for $v$, but what am I supposed to do for $u$? One side of the trapezoid is vertical and the other side is horizontal, and they look nothing like the equation for $v$. I was thinking about doubling the trapezoid so that it becomes a parallelogram, but I am not sure if that even works. I understand the idea and the fact that $$\int\int_R f(x,y) dA = \int\int_S f(g(u,v),h(u,v))|{ \partial{(x,y)}\over{\partial(u,v)}}|dvdu,$$ but this is more of an algebraic thing that I am stuck with. For (2), again, I understand the theory, but I am stuck with the algebraic part. The approach I made was $$\int_0^1 \int_0^2 \int_1^2 (uv+3vw)|{1\over{3u}}|dudvdw$$ but I ended up having to integrate an improper integral with the result being $-\infty$... Can someone help out? AI: I am adding some points for the first one so you can find the limits easily. Try to do the rest by yourself. :) If we mark the 4 vertices above in the $xy$-plane, we get the trapezoidal region $R$. Now, as you probably know, we express $x$ and $y$ in terms of $u$ and $v$: $$u=y-x,v=x+y,~~~\to y=\frac{u+v}2,~~x=\frac{v-u}2~~~~~~~(I)$$ To find the limits for $u$ and $v$ in the $uv$-plane, apply $(I)$ to the vertices: $$(1,0)\longrightarrow u=-1,\;v=1\\ (2,0)\longrightarrow u=-2,\;v=2\\ (0,1)\longrightarrow u=1,\;v=1\\ (0,2)\longrightarrow u=2,\;v=2$$ This means that you have a new region, the trapezoid in the $uv$-plane with these four vertices, and the limits are $$1\le v\le 2, ~~~-v\le u\le v$$
H: Calculating the probabilities of different lengths of repetitions of numbers of length 4 I'm trying to calculate the probabilities of different lengths of repetitions in a number of length X. However, I know I'm doing it incorrectly, since when I add all the probabilities together they don't total 1. Here is my reasoning to calculate the probabilities of the different lengths of repetitions for length 4 (counts are out of 1000): Probability that there are 0 repeating sequences: e.g. WXYZ 10/10 * (9/10)^3 = 729 Probability that there is 1 repeating sequence of length 2: e.g. XXYZ or YXXZ or YZXX 10/10 * (9/10)^2 * 1/10 * 3 = 243 Probability that there are 2 repeating sequences of length 2: e.g. XXYY or YYXX 10/10 * 9/10 * (1/10)^2 * 2 = 18 Probability that there is a repeating sequence of length 3: e.g. XXXY or YXXX 10/10 * 9/10 * (1/10)^2 * 2 = 18 Probability that there is a repeating sequence of length 4: e.g. XXXX 10/10 * (1/10)^3 = 1 When I add the number of outcomes I get 1009, when I should be getting 1000. Does anyone know what I'm doing wrong? Thanks in advance! AI: The third entry in the list has a wrong count. One should not multiply by $2$, since shape XXYY is the same as shape YYXX.
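To confirm the correction, here is a short Python enumeration added for illustration (it classifies all $10^4$ strings by the lengths of their runs of equal adjacent digits; divide by 10 to compare with the out-of-1000 counts above):

```python
from itertools import product, groupby
from collections import Counter

def run_profile(s):
    """Lengths of the maximal runs of equal adjacent digits, longest first."""
    return tuple(sorted((len(list(g)) for _, g in groupby(s)), reverse=True))

counts = Counter(run_profile(s) for s in product(range(10), repeat=4))
for profile, c in sorted(counts.items()):
    print(profile, c)
# (1,1,1,1) 7290, (2,1,1) 2430, (2,2) 90, (3,1) 180, (4,) 10
# i.e. 729 + 243 + 9 + 18 + 1 = 1000: the double pair contributes 9, not 18.
```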
H: Searching for unbounded, non-negative function $f(x)$ with roots $x_{n}\rightarrow \infty$ as $n \rightarrow \infty$ If a function $y = f(x)$ is unbounded and non-negative for all real $x$, then is it possible that it can have roots $x_n$ such that $x_{n}\rightarrow \infty$ as $n \rightarrow \infty$. AI: The function $ y = |x \sin(x)|$ has infinitely many roots $x_n$ such that $x_{n}\rightarrow \infty$ as $n \rightarrow \infty$.
H: Some questions on Proof of Structure Theorem I understand the general idea of the proof of the Structure Theorem for finitely generated modules over a principal ideal domain, but I found it quite difficult to follow some lines of reasoning in the proof. I have spent hours trying to think about it and looking for other sources, but I am still unable to fully convince myself. Any detailed help is truly appreciated. The Theorem is stated as follows: Let $M$ be a finitely generated module over a principal ideal domain $R$. Then there exist elements $d_1, d_2, ..., d_k \in R$ satisfying $d_1\mid d_2\mid\cdots\mid d_k$ such that $$M\cong R/(d_1)\oplus R/(d_2)\oplus \cdots\oplus R/(d_k).$$ Proof: Since $M$ is finitely generated (by $k$ elements say), then $M\cong R^k/N$. But $N\le R^k\Rightarrow N\cong R^s$ for some $s\le k$. Let $\zeta:R^s\to N$ be an isomorphism. This gives a homomorphism $\varphi:R^s\to R^k$ with $\operatorname{im}(\varphi)= N, \ker(\varphi)=\{0\}$. Let $A\in M_{k\times s}(R)$ be the matrix of $\varphi$ with respect to the standard bases for $R^s$ and $R^k$. Then $A$ is equivalent to a matrix $$D=\begin{bmatrix}d_1 & 0 & \cdots & 0\\0 & d_2 & \cdots & 0\\ \vdots & & \ddots & \vdots \\0 & 0 & \cdots & d_s\\0 & 0 & \cdots & 0\\ \vdots & & & \vdots \\0 & 0 & \cdots & 0\end{bmatrix},$$ where $D = XAY$, for $X\in M_{k\times k}(R)$ invertible and $Y\in M_{s\times s}(R)$ invertible. So $D=[\varphi]_{C,B}$ for some bases $B$ for $R^s$, $C$ for $R^k$. Let $C=\{f_1,f_2,...,f_k\}$ be this basis for $R^k$; then $\varphi(B)=\{d_1f_1,d_2f_2,...,d_sf_s\}$ is a basis for $N\subseteq R^k$. So we have $$R^k=\langle f_1 \rangle \oplus \langle f_2 \rangle \oplus \cdots \oplus \langle f_k \rangle,$$ $$N=\langle d_1f_1 \rangle \oplus \langle d_2f_2 \rangle \oplus \cdots \oplus \langle d_sf_s \rangle \oplus \langle 0f_{s+1} \rangle \oplus \cdots \oplus \langle0f_k \rangle$$ where $d_i:=0$ for $s<i\le k$. So, $$R^k/N \cong \frac{\langle f_1 \rangle}{\langle d_1f_1 \rangle} \oplus \frac{\langle f_2 \rangle}{\langle d_2f_2 \rangle} \oplus \cdots \oplus \frac{\langle f_k \rangle}{\langle d_kf_k \rangle}.$$ But for any $v\in R^k\setminus \{0\}$, by the First Isomorphism Theorem, $\langle v \rangle/\langle dv \rangle \cong R/(d)$. Then I get: $$R^k/N\cong R/(d_1)\oplus R/(d_2)\oplus \cdots \oplus R/(d_k),$$ $$d_1 \mid d_2 \mid \cdots \mid d_{s} \mid d_{s+1}=0 \mid \cdots$$ Exactly as I wished. $\blacksquare$ My questions are: Why is $\ker(\varphi)=\{0\}$? I could not find any convincing explanation. I have been told that $B$ is given by the columns of $Y$ and $C$ is given by the columns of $X^{-1}$. I have some understanding of change of bases, but how can we show this in a more rigorous way? The parts that I struggle with the most are $R^k=\langle f_1 \rangle \oplus \langle f_2 \rangle \oplus \cdots \oplus \langle f_k \rangle$ and $N=\langle d_1f_1 \rangle \oplus \langle d_2f_2 \rangle \oplus \cdots \oplus \langle d_sf_s \rangle \oplus \langle 0f_{s+1} \rangle \oplus \cdots \oplus \langle 0f_k \rangle$: what does $\langle \cdot \rangle$ represent, why are we taking their direct sum, and how do we derive them? Lastly, why can we just divide $R^k/N\cong\frac{\langle f_1 \rangle}{\langle d_1f_1 \rangle }\oplus\frac{\langle f_2 \rangle}{\langle d_2f_2 \rangle} \oplus \cdots \oplus \frac{\langle f_k \rangle}{\langle d_kf_k \rangle}$ on both sides? How can I explain/convince myself rigorously?
I have consulted other sources; in the final step, it is given that the map $\psi:R^k\to R/(d_1)\oplus R/(d_2)\oplus ...\oplus R/(d_k)$ given by $\psi(\sum_{i=1}^{k}r_if_i)=(r_1+(d_1),...,r_k+(d_k))$ is a homomorphism of $R$-modules. The result follows from FIT since $\psi$ is surjective and $\ker(\psi)=N$. How can we show that $\psi$ is surjective and $\ker(\psi)=N$? Why, if $d_i=0$, is $d_j=0$ for all $j\ge i$? And why, if $d_i$ is a unit, is $d_j$ a unit for all $j\le i$? Many thanks in advance! AI: Some ideas: The homomorphism $\,\phi\,$ described there could be thought of as an embedding: note that $\,N\cong R^s\le R^k\,$, so one can embed the former in the latter. Of course, this embedding is usually far from being unique. From here $\,\ker\phi=0\,$ is immediate. The notation $\,\langle x\rangle\,$ usually denotes the cyclic (module, group, etc.) generated by $\,x\,$. In this case, for example, $\,\langle f_i\rangle\,$ denotes the cyclic $\;R$-module generated by $\,f_i\,$ in $\,R^k\,$, i.e.: $$\langle f_i\rangle:=\{rf_i\;;\;r\in R\}$$ Observe that any finitely generated $\,R$-module $\;M\;$ is the set of all the $\,R$-linear combinations of a finite number of elements of $\,M\,$, whence the term "basis". Dividing is just taking the quotient module. Apparently you already proved before this that the quotient takes that form (this is (5) in your questions): $$\bigoplus_{i=1}^k\langle f_i\rangle/\bigoplus_{i=1}^k\langle d_if_i\rangle \cong\bigoplus_{i=1}^k \langle f_i\rangle/\langle d_if_i\rangle$$ Your questions about $\,\psi\,$ could probably be partially answered if you read carefully about the CRT = Chinese Remainder Theorem, which features a very similar homomorphism: clearly $\,\psi\,$ is surjective since any element in the direct sum of quotient modules on the right is precisely of the form $\;r_j+(d_j)\;$, for some $\,r_j\in R\,$, and you can take an element in $\,R^k\,$ with these $\,r_j$-s as the corresponding coefficients of the basis $\,\{f_1,...f_k\}\,$. That $\,N=\ker\psi\,$ is even more trivial (Sorry, I'm not teasing you, honest) if one understands the role of each thing (and this is why I thought subdividing the question into several focused parts could help...)
H: Every subsequence of $x_n$ has a further subsequence which converges to $x$. Then the sequence $x_n$ converges to $x$. Is the following true? Let $x_n$ be a sequence with the following property: Every subsequence of $x_n$ has a further subsequence which converges to $x$. Then the sequence $x_n$ converges to $x$. I guess that it is true but I am not sure how to prove this. AI: True. Suppose not; then there exists an $\epsilon > 0$ such that for every $k$ there is an $n_k > k$ satisfying $|x_{n_k}-x| \ge \epsilon$. (If some $k$ had no such $n_k$, we could take $N=k$ in the definition of convergence, and $x_n$ would converge to $x$.) But then the subsequence $(x_{n_k})$ has no subsequence converging to $x$, contradicting the hypothesis.
H: Find $a,b,c \in \mathbb {Q}$ Find $a,b,c \in \mathbb {Q}$ such that: $\left\{\begin{array}{rl} x^3&\in \mathbb Q \\ x&\notin \mathbb{Q}\\ ax^2+bx+c &=0\end{array}\right.$ I tried Vieta's formulas, but it seems they didn't help. I think $a=b=c=0$ is the only solution. AI: $$a^2x^3=-ax(bx+c)=-b(ax^2)-cax=b(bx+c)-cax=x(b^2-ac)+bc$$ $$x^3=\frac{x(b^2-ac)+bc}{a^2}$$ which is in $\mathbb Q$ iff $x\in\mathbb Q$, unless $b^2-ac=0$. If $b^2=ac$, let $\frac cb=\frac ba=r$ (say) $\implies b=ar,c=br=ar^2$ $\implies ax^2+bx+c=a(x^2+rx+r^2)=0\implies x^2+rx+r^2=0$ as $a\ne0$ $\implies x=r\,\frac{-1\pm\sqrt3i}2$, which is not in $\mathbb Q$ for $r\in \mathbb Q$, $r\neq0$. But $x^3=r^3$, which is in $\mathbb Q$ for $r\in \mathbb Q$.
H: Solving Modular Equations With Identities $4+2x≡7 \pmod 8$ Find all possible solutions and note any identities. Identify how you found the solutions. Explain what identities are. AI: The equation reduces to $2x\equiv3\pmod 8$, i.e., $2x=3+8a$ for some integer $a$. So, as $3+8a$ is odd and $2x$ is even for integer $x,$ there is no solution. More generally, using the Linear congruence theorem (Proof), $ax\equiv b\pmod n$ is solvable iff $(a,n)$ divides $b$.
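An exhaustive check over the eight residues mod 8, added for illustration (Python):

```python
# 4 + 2x = 7 (mod 8) has a solution iff some residue x in {0,...,7} works.
solutions = [x for x in range(8) if (4 + 2 * x) % 8 == 7]
print(solutions)   # []: no solution, as gcd(2, 8) = 2 does not divide 3
```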
H: sum of a polynomial and its derivatives I have $P(x)$, a polynomial of degree $n$ with $P(x) \ge 0$ for all real $x$. I have to prove that $$f(x)=P(x)+P'(x)+P''(x)+\cdots+P^{(n)}(x) \ge 0$$ for all $x$. I tried different methods to solve it but I got stuck. Any suggestion or advice is welcome. AI: You can let $$G(x)=f(x)e^{-x},-\infty<x<\infty$$ Then we have $$G'(x)=f'(x)e^{-x}-f(x)e^{-x}=-P(x)e^{-x}\le 0,$$ since $f'(x)-f(x)=-P(x)$ (differentiating shifts each term of $f$ up one order, and $P^{(n+1)}=0$). So $G(x)$ is nonincreasing, and $$\lim_{x\to+\infty}G(x)=\lim_{x\to+\infty}f(x)e^{-x}=0,$$ since a polynomial grows more slowly than $e^x$. Hence $$f(x)e^{-x}=G(x)\ge \lim_{x\to+\infty}G(x)=0,$$ so $$f(x)\ge 0$$
H: Relation between the sum of a series and the limit of a sequence I'm stuck on this question. Let $\{a_{n}\}$ be a sequence of real numbers. I need to show that the series $\sum_{n=1}^{\infty}(a_{n} - a_{n-1})$ and the sequence $\{a_{n}\}$ have the same nature (convergent or divergent). Additionally, I need to give a relation between the sum of the series and the limit of the sequence $\{a_{n}\}$. Can anyone please help me? I'm completely lost... Thanks in advance AI: Hint: What is the $k$-th partial sum of the series? (Remember that a series converges to $L$ iff the sequence of its partial sums converges to $L$; so what will happen if you find out that the partial sums of the series give back the sequence of $a_n$'s?)
H: is a relation R total/linear/well-order Let $\mathcal{R}$ be a relation on $\mathbb{N}\times \mathbb{N}$, i.e. $\mathcal{R}\subseteq(\mathbb{N}\times \mathbb{N})\times (\mathbb{N}\times \mathbb{N})$, s.t. $(x,y)\mathcal{R}(z,w)$ iff $x<z$ or $x=z \wedge y\le w$. a) is $\mathcal{R}$ a total order/linear order? b) do the greatest and the smallest element exist? c) is $\mathcal{R}$ a well-order? My solution: Reflexivity: $(x,y)\mathcal{R}(x,y)$ as $x=x$ and $y\le y$ $\forall (x,y)\in \mathbb{N}\times \mathbb{N}$. Antisymmetry: $(x,y)\mathcal{R}(y,x)$ as $x<y\vee x=y \wedge y\le x$. I am not sure about this one; I considered two examples: $(1,2)\mathcal{R}(2,1)$ since $1<2$, so that is okay, but $(2,1)\not\mathcal{R}(1,2)$ as $2\not< 1$ and furthermore $2\neq 1$. So is $\mathcal{R}$ not antisymmetric, and thus not a total/linear order and not a well-order? AI: You have shown that the relation is reflexive, and that's fine. But the others you haven't done right.$\newcommand{\R}{\mathrel{\mathcal{R}}}$ For antisymmetry, for example, you need to assume that $(x,y)\R(z,w)$ and $(z,w)\R(x,y)$. From that assumption you need to show that $(x,y)=(z,w)$. This condition needs to be shown for every $(x,y),(z,w)\in\Bbb{N\times N}$. One example is not enough! You also have to show that this is a transitive relation. For the totality you also made a mistake. You have shown that there are two pairs such that $\R$ doesn't hold in one direction. But remember the definition of a total order: $(A,\preceq)$ is totally ordered if for every $a,b\in A$ it holds $a\preceq b$ or $b\preceq a$. So the fact that $(2,1)\R(1,2)$ is false doesn't prove anything. You need to show that $(1,2)\R(2,1)$ is also false -- but you can't, because $(1,2)\R(2,1)$ is in fact true. Finally, your title asks whether or not it is a total order, or a linear order, or a well-order. It is unclear whether or not you think that total and linear orders are different things; or that you think that linear orders and well-orders are the same thing. In either case it is wrong. All total orders are linear orders, and not all linear orders are well-orders. Now, let me reveal a little secret: this is indeed a well-order (so it is a total order as well). But I am going to leave you with the task of proving that.
H: Find the smallest possible integer that satisfies both modular equations Find the smallest positive integer that satisfies both. x ≡ 4 (mod 9) and x ≡ 7 (mod 8) Explain how you calculated this answer. I am taking a math for teachers course in university, so I'm worried about guessing too much in fear that I will teach myself incorrectly. Our textbook (Pearson Custom Math Text) is absolutely terrible! It skips briefly over certain elements. If I had to guess how to do this question, it would be to convert them to x-4 and x-7 respectively then use multiples of 9 and 8 to see if I could come up with a positive integer which left the respective remainders... Is there an easier way to solve this? Am I doing it correctly? AI: Apply the Chinese Remainder Theorem (the following is just applying the proof of the theorem to this case): $$1=1\cdot 9+(-1)\cdot 8\implies x:=(4)(-1)\cdot8+7\cdot 1\cdot9=31$$
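Since the moduli here are tiny, a brute-force check over one full period in Python (added for illustration) confirms the CRT computation:

```python
# The solution is unique mod 9*8 = 72 because gcd(9, 8) = 1 (Chinese Remainder Theorem).
x = next(x for x in range(72) if x % 9 == 4 and x % 8 == 7)
print(x)   # 31
```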
H: probability of getting a double six ($2$ dice) rolling them $24$ times This is what I got: $\dfrac{1}{6} \cdot \dfrac{1}{6} = 2.78\%$, and $2.78\% \cdot 24 = 66.72\%$. I believe that since each die is six-sided and you roll both of them simultaneously, the probability on one roll is $\dfrac{1}{6} \cdot \dfrac{1}{6}$. Since they are rolled $24$ times, I would just multiply by $24$, so $2.78\% \cdot 24$ would be $66.72\%$, which would mean I have a $66.72\%$ chance of rolling a double six. Is this correct? Am I doing this correctly? AI: We calculate the probability of rolling at least one double-six in $24$ rolls of two dice. The probability we roll a double-six is, as you point out, $\frac{1}{36}$. So, on any roll, the probability of not getting a double six is $\frac{35}{36}$. The probability of "failure" $24$ times in a row is therefore $\left(\frac{35}{36}\right)^{24}$. So the probability of at least one "success" in $24$ rolls is $1-\left(\frac{35}{36}\right)^{24}$. Remark: If we interpret the question as asking for the probability of exactly one double-six, the answer is different, and more complicated. In that case, we want the probability of $1$ success and $23$ failures. This probability is $\binom{24}{1}\left(\frac{1}{36}\right)\left(\frac{35}{36}\right)^{23}$. However, the usual interpretation is "at least one double-six." This problem goes back to the seventeenth-century beginnings of probability theory. The Chevalier de Méré was curious about what was more likely, at least one six in $4$ rolls of a die, or at least one double-six in $24$ rolls of two dice. Each has probability about $50\%$ and the expected number of successes is the same.
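The two classical probabilities from the remark are easy to evaluate numerically (Python, added for illustration):

```python
# At least one double-six in 24 rolls of two dice:
print(1 - (35 / 36) ** 24)   # ~0.4914, just under 1/2

# The companion game: at least one six in 4 rolls of one die.
print(1 - (5 / 6) ** 4)      # ~0.5177, just over 1/2

# Both games have the same expected number of successes:
print(24 * (1 / 36), 4 * (1 / 6))   # 0.666..., 0.666...
```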
H: Quadratic Functions Consider the strictly convex quadratic function $f(x) = \frac{1}{2}x^tPx - q^tx + r,$ where $P \in \mathbb{R}^{n \times n}$ is a positive definite matrix, $q \in \mathbb{R}^n$ and $r \in \mathbb{R}.$ Let $\mathcal{H} := \{H: H \text{ is a }k- \text{dimensional subspace in } \mathbb{R}^n\}.$ Clearly, the restriction of $f$ to any $H \in \mathcal{H}$ is again a strictly convex function. For any $H \in \mathcal{H},$ we will use $x_H$ to denote the unique optimal point of the following problem \begin{equation*} \underset{x \in H}{\text{min.}} \; f(x). \end{equation*} Now consider the map, $\psi(H) = x_H.$ Prove / Disprove: The map $\psi$ is bijective. Remark: It is assumed that $P$ is invertible and $q \neq \mathbf{0}.$ AI: Edit: Whether $\psi$ is surjective depends on the definition of its codomain. For example, if $n=2$ and $k=1$, the image of $\psi$ is a curve on $\mathbb{R}^2$. So, if the codomain of $\psi$ is $\mathbb{R}^2$, $\psi$ is certainly not surjective. At any rate, $\psi$ may not be injective even if the origin is not a global minimum. Consider $a=(1,0,0)^T$ and $f(x)=\|x-a\|^2$. Let $\{e_1,e_2,e_3\}$ be the canonical basis of $\mathbb{R}^3$. Then $x_H=a$ when $H=\operatorname{span}\{e_1,e_2\}$ or $\operatorname{span}\{e_1,e_3\}$.
H: A question regarding the Power set In the proofs that I have seen so far for showing that the power set $2^X$ of a set $X$ cannot be in bijection with $X$, the common idea is to assume that there exists a surjection $f \colon X \to 2^X$ and then consider the set $$ B = \{ x \in X \mid x \notin f(x)\} $$ Then the argument proceeds by saying that this set is contradictory, because its pre-image (say $y \in X$, i.e. $f(y) = B$) satisfies both $y \in B$ and $y \notin B$. On Wikipedia, this method is said to be analogous to Cantor's diagonal argument that is used to show that the interval $(0,1)$ of real numbers between $0$ and $1$ is uncountable. Here we assume that there exists an enumeration. Representing each number $x \in (0,1)$ by a decimal expansion, we obtain a list \begin{align} x_1 &= 0.a_{11}a_{12}a_{13}... \\ x_2 &= 0.a_{21}a_{22}a_{23}... \\ x_3 &= 0.a_{31}a_{32}a_{33}... \\ x_4 &= 0.a_{41}a_{42}a_{43}... \\ \vdots \end{align} where $a_{ij} \in \{0,1,\dots,9\}$ for each $i \in \mathbb{N}$ and $j \in \mathbb{N}$. Then one constructs an element from $(0,1)$ that is not in this list. For example, one can take the number $x = 0.b_{1}b_{2}b_{3}\dots$ where \begin{equation} b_i = \begin{cases} 1 & \text{if } a_{ii} = 5 \\ 5 & \text{if } a_{ii} \ne 5 \end{cases} \end{equation} Here, the number $x$ that is generated is well defined as an element of $(0,1)$, and it does not appear in the list above so we've reached a contradiction. On the other hand, for the argument above regarding the power set, assuming $f$ exists implies the object $B$ is not well defined as an element of $2^X$ since it is not the image of a function from $X$ to the set $\{0,1\}$ (it is multi-valued, namely the preimage of $B$ evaluates both to $0$ and $1$). In other words, I cannot use this object $B$ to derive a contradiction. What I can derive is that, if $f$ exists then $B$ is not a set, and if $B$ is a set for each function $f \colon X \to 2^X$ then no such $f$ can be surjective. The latter is what the proofs claim to be the only option; in other words, this object $B$ must be a well defined set for some other reason - what am I missing? AI: Cantor's theorem is indeed very close to the diagonal argument. The idea is a generalization of the following concept. We write a table: $$\begin{array}{|c|c|c|c|c} \hline \quad & f(x_1) & f(x_2) & f(x_3) & \ldots\\\hline x_1 & 0 & 1 & 0 & \ldots\\\hline x_2 & 1 & 1 & 0 & \ldots\\\hline x_3 & 0 & 0 & 1 & \ldots\\\hline \vdots \end{array}$$ where the entry in row $x$ and column $f(y)$ is $0$ if $x\notin f(y)$ and $1$ otherwise. The diagonal argument is to traverse across the diagonal, and consider those which have $0$ there. Collect this set, and you can in fact show that it is not $f(x)$ for any $x$. This $B$, as you denote it, or rather its indicator function (if you prefer considering $2^X$ rather than $\mathcal P(X)$ for one reason or another) exists because we define it. We explicitly gave a description of its members. Ordered pairs of the form $\langle x,i\rangle$ where $i=0$ if $x\in f(x)$ and $1$ otherwise. One of the earliest principles of set theory is comprehension. It's an important principle, mathematically and philosophically. If we can describe a collection then we want it to exist. 
And while the unrestricted comprehension (all the describable collections are sets) is inconsistent, when axiomatic set theory was formulated this was limited in the following sense: If $A$ is a set, and $\varphi(x,u_1,\ldots,u_n)$ is a formula, then for every choice of parameters, $p_1,\ldots,p_n$ the set $\{a\in A\mid\varphi(a,p_1,\ldots,p_n)\}$ exists. This is known as the axiom schema of specification as mentioned in the comments, and also as "restricted comprehension" and "separation" sometimes. Why does that help us? Well, if we assume that $X$ is a set, then we have defined a subset (from the parameter $f$) and therefore it exists. If you prefer the functional version, then consider the set $X\times\{0,1\}$ and apply the same argument. Therefore we proved that $B$ exists, and there is no surjection from a set onto its power set.
H: How can you prove Euler's phase angle formula for differential equations? How can you prove this formula: $C_1 e^{(\alpha + i\beta) t} + C_2 e^{(\alpha - i\beta)t}=Ke^{\alpha t}\cos {(\beta t + \phi)}$ This gives $x(t)$ in the second-order differential equation for an underdamped system with the characteristic equation $\lambda^2 + A\lambda + B = 0$, which gives $\lambda_{1,2} = \alpha \pm i\beta = \frac {-A \pm \sqrt{A^2 - 4B}} 2$, and $C_1$, $C_2$ and $K$ are constants. AI: $$e^{ix}=\cos(x)+i\sin(x)$$ Leading to: $$C_1e^{\alpha t} e^{i \beta t}+C_2e^{\alpha t} e^{-i \beta t}=e^{\alpha t} (C_1 \cos(\beta t)+iC_1 \sin(\beta t)+C_2 \cos(\beta t)-iC_2 \sin(\beta t))$$ $$=e^{\alpha t} ((C_1+C_2) \cos(\beta t)-(iC_2-iC_1) \sin(\beta t))$$ $$=e^{\alpha t} (A \cos(\beta t)-B \sin(\beta t))$$ Now, any $(A \cos(\beta t)-B \sin(\beta t))$ can be written in the form $K \cos (\beta t+ \phi)$: $$K \cos (\beta t+ \phi)=(K\cos(\phi)) \cos(\beta t)-(K\sin(\phi)) \sin(\beta t)$$ As cosine and sine are orthogonal, you can equate the coefficients: $$K\cos(\phi)=A, K\sin(\phi)=B \Rightarrow \phi=\arctan(\frac{B}{A})$$ This has a unique solution $\frac{-\pi}{2}<\phi<\frac{\pi}{2}$, and $$K^2(\cos(\phi)^2+\sin(\phi)^2)=A^2+B^2 \Rightarrow K=\sqrt{A^2+B^2}$$ So $$C_1e^{\alpha t} e^{i \beta t}+C_2e^{\alpha t} e^{-i \beta t}=e^{\alpha t}\sqrt{A^2+B^2}\cos(\beta t+ \arctan(\frac{B}{A}))$$ With $$A=C_1+C_2$$ $$B=iC_2-iC_1$$ Finally, $$C_1e^{\alpha t} e^{i \beta t}+C_2e^{\alpha t} e^{-i \beta t}=2e^{\alpha t} \sqrt{C_1C_2} \cos(\beta t+i \tanh^{-1}(\frac{C_2-C_1}{C_2+C_1}))$$
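A numeric spot-check of the final identity, added for illustration (the sample values are arbitrary; choosing $C_2=\overline{C_1}$ makes the solution real-valued, as in a physical underdamped system):

```python
import cmath, math

alpha, beta, t = -0.3, 2.0, 1.7      # arbitrary sample values
C1 = 1.0 + 2.0j
C2 = C1.conjugate()                  # conjugate pair => real-valued solution

lhs = C1 * cmath.exp((alpha + 1j * beta) * t) + C2 * cmath.exp((alpha - 1j * beta) * t)
K = 2 * cmath.sqrt(C1 * C2)          # K = 2*sqrt(C1*C2), real here since C1*C2 = |C1|^2
phi = 1j * cmath.atanh((C2 - C1) / (C2 + C1))
rhs = K * math.exp(alpha * t) * cmath.cos(beta * t + phi)
print(abs(lhs - rhs))                # ~1e-15: the two sides agree
```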
H: Understanding the (Partial) Converse to Cauchy-Riemann We have that for a function $f$ defined on some open subset $U \subset \mathbb{C}$, the following is true: Suppose $u=\mathrm{Re}(f), v=\mathrm{Im}(f)$ and that all partial derivatives $u_x,u_y,v_x,v_y$ exist and are continuous on $U$. Suppose further that they satisfy the Cauchy-Riemann equations. Then $f$ is holomorphic on $U$. The proof for this is readily available, though there is a subtlety that I can't understand. We essentially want to compute $\lim_{h \rightarrow 0} \dfrac{f(z+h)-f(z)}{h}$ where $h=p+qi \in \mathbb{C}$. We need a relationship like $u(x+p,y+q)-u(x,y)=pu_x(x,y)+qu_y(x,y)+o(|p|+|q|)$. Why is this relationship true? AI: One way to look at this is by interpreting $f$ as a function from $\mathbb{R}^2 \to \mathbb{R}^2$. This allows you to apply whatever you know about differentiability in $\mathbb{R}^2$ to $\mathbb{C}$. In other words, you interpret $f$ as the function $$ f \,:\, (u,v) \to \left(\textrm{Re}(f(u+iv)), \textrm{Im}(f(u+iv)\right) =: (f_1(u,v),f_2(u,v)) \text{.} $$ If all the partial derivatives $\frac{\partial f_1}{\partial u}$, $\frac{\partial f_1}{\partial v}$, $\frac{\partial f_2}{\partial u}$, $\frac{\partial f_2}{\partial v}$ exist and are continuous, $f$ is differentiable in the $\mathbb{R}^2$-sense, i.e. for every $\mathbf{x}=(u,v)$ there's a linear function $df$ which approximates $f$ around $\mathbf{x}$. In other words, $$\begin{aligned} f(\mathbf{x} + \Delta) &= f(\mathbf{x}) + df_{\mathbf{x}}\cdot \Delta + R_{\mathbf{x}}(\Delta) \quad\text{where}\quad \lim_{\Delta\to 0} \frac{\left|\left| R_{\mathbf{x}}(\Delta) \right|\right|}{||\Delta||} = 0 \quad\text{and}\quad \\ df_{\mathbf{x}} &= \left(\begin{matrix} \frac{\partial f_1}{\partial u} &\frac{\partial f_1}{\partial v} \\ \frac{\partial f_2}{\partial u} &\frac{\partial f_2}{\partial v} \end{matrix}\right) \text{.} \end{aligned}$$ If the Cauchy-Riemann differential equations hold, $df_\mathbf{x}$ is then a scaled rotation, i.e. there are $A_\mathbf{x}$ and $B_\mathbf{x}$ such that $$ df_\mathbf{x} = \left(\begin{matrix} A_\mathbf{x}& B_\mathbf{x} \\ -B_\mathbf{x}& A_\mathbf{x} \end{matrix}\right) \text{.} $$ This form of $df_\mathbf{x}$ allows you to translate back into $\mathbb{C}$, because it's exactly these matrices that correspond to complex multiplication. Simply set $$ f'(z) = f'(u+iv) = A_{(u,v)} + iB_{(u,v)} \text{.} $$ and translate the approximation back to $\mathbb{C}$, i.e. $$ f(z+\Delta) = f(z) + f'(z)\cdot\Delta + R_z(\Delta) \quad\text{where}\quad \lim_{\Delta\to 0} \frac{\left| R_{z}(\Delta) \right|}{|\Delta|} = 0 \text{,} $$ which is exactly what you want to show.
H: Prove that $\int_{-\infty}^{\infty} \sin x \, dx = 0 $ $$\int_{-\infty}^{\infty} \sin x \, dx$$ When I am doing the proof for this, why do I have to split it into $\int_{-\infty}^a \sin x \, dx + \int_a^\infty \sin x \, dx$, where $a$ is a constant? AI: The assertion that the integral is $0$ doesn't really make sense: the convergence of this improper integral requires that both \begin{gather} \lim_{a\to-\infty}\int_{a}^{0}\sin x \, dx \\ \lim_{b\to\infty}\int_{0}^{b}\sin x \, dx \end{gather} exist and are finite and neither does. The "break point" $0$ is arbitrary and can be any real number. This is by definition of improper integral, at least the most common definition one finds. The second limit doesn't exist, because if you compute it on the sequences $2n\pi$ or $2n\pi + \pi/2$ you get different limits: $$ \lim_{n\to\infty}\int_{0}^{2n\pi}\sin x \, dx= \lim_{n\to\infty}[-\cos x]_{0}^{2n\pi}=0 $$ $$ \lim_{n\to\infty}\int_{0}^{2n\pi+\pi/2}\sin x \, dx= \lim_{n\to\infty}[-\cos x]_{0}^{2n\pi+\pi/2}=1 $$ In the same way you show that the first limit doesn't exist (just change the variable with $x=-y$). Therefore, we can't say that $\displaystyle\int_{-\infty}^{\infty}\sin x\,dx$ is equal to a number, much less that it's zero, unless we give the symbol some other meaning than an improper integral. If you are considering the principal value, but you should clearly specify it, because it's a different thing than an improper integral, in general, you indeed get $0$, because the sine function is odd: $\sin(-x)=-\sin x$, so, for $a>0$, $$ \int_{-a}^{a}\sin x\,dx = 0 $$ hence $$ \mathrm{p.v.}\!\!\int_{-\infty}^{\infty}\sin x\,dx= \lim_{a\to\infty}\int_{-a}^{a}\sin x\,dx = 0 $$ The first equality in the above line is the definition of the principal value integral.
H: Irreducible representations of a tensor product Let $\varphi_n$ denote the standard irreducible representation of the $SU(2)$ group with highest weight $n$. What are the irreducible representations of $\varphi_2 \otimes \varphi_3$? AI: A result commonly attributed to Clebsch and Gordan gives you the answer: $$ \varphi_2\otimes \varphi_3\cong \varphi_5\oplus\varphi_3\oplus\varphi_1. $$ You can study the dimensions of weight spaces to see this. The weight five space is 1-dimensional (spanned by the tensor product of the highest weight vectors of $\varphi_2$ and $\varphi_3$). The weight three space is two-dimensional (the direct sum of two tensor products: that of the weight zero space of $\varphi_2$ and the weight three space of $\varphi_3$, and that of the weight two space of $\varphi_2$ and the weight one space of $\varphi_3$). The summand generated by the weight five vector consumes only one of the two dimensions, leaving us with a highest weight vector of weight three unaccounted for - resulting in a summand $\varphi_3$ on the r.h.s. Continuing in the same vein leaves us with an extra $\varphi_1$ component. Let's check the dimensions. $\dim\varphi_n=n+1$, so the tensor product $\varphi_2\otimes\varphi_3$ is of dimension $3\cdot4=12$. On the r.h.s. we have summands of dimensions $6+4+2=12$. Check. In quantum mechanics this result is known as the quantum mechanical addition of angular momenta. The general formula for the tensor product of two irreducible reps of $SU_2$ (or $sl_2$) reads (assume $m\ge n$): $$ \varphi_m\otimes\varphi_n=\varphi_{m+n}\oplus\varphi_{m+n-2}\oplus\varphi_{m+n-4}\oplus\cdots\oplus\varphi_{m-n+2}\oplus\varphi_{m-n} $$
H: Multiplication in $\mathcal D'(\mathbb R)$. I was reading in my textbook that multiplication of elements of $\mathcal D'(\mathbb R)$, where it is defined, is not a continuous operation, and that this can be seen from the fact that $\sin(nx)$ converges to $0$ in $\mathcal D'(\mathbb R)$ as $n$ goes to infinity, while $\sin^2(nx)$ does not. Please explain to me what the author means. Thank you in advance. AI: In the sense of distributions, $\sin nx \to 0$ as $n\to\infty$, since $$ \lim_{n\to\infty}\int_{-\infty}^\infty \phi(x)\,\sin nx\,dx = 0$$ for every test function $\phi$. (This follows for example from the Riemann-Lebesgue lemma.) On the other hand $\sin^2 nx = \frac12 - \frac12\cos 2nx$ so $$ \lim_{n\to\infty}\int_{-\infty}^\infty \phi(x)\,\sin^2 nx\,dx = \frac12 \int_{-\infty}^\infty \phi(x)\,dx,$$ which means that $\sin^2 nx$ tends to $1/2$ in the sense of distributions as $n \to \infty$. Hence, the operation $f \mapsto f^2$ can't be continuous on $\mathcal{D}'$. If it were, it would map a sequence of distributions tending to $0$ to another sequence of distributions tending to $0$. (Multiplication of two arbitrary distributions can't even be defined in general: what would $\delta^2$ be?)
H: Tutte's 1947 proof and paper on a family of cubical graphs It is known that if a graph is connected, cubic, simple and $t$-transitive, then $t \le 5$. A proof is given in [Biggs, Algebraic Graph Theory, Chapter 18], and this result is due to [Tutte, ``A family of cubical graphs,'' Proc. Cambridge Philosophical Society, 45, 459-474]. My question is: Is the proof given in Biggs' text the same as the one in Tutte's paper? I was unable to obtain Tutte's paper. I would appreciate if someone could electronically post or mail to me his paper. AI: It is essentially the same. Richard Weiss produced shorter proofs later - there was a lot of work on $s$-arc regular and $s$-arc transitive graphs - but even these used the same basic strategy.
H: The spectrum of an invertible element $x$ is $\sigma(x^{-1})=\{\lambda^{-1}: \lambda\in \sigma(x)\}$ Suppose $x$ is invertible in the unital Banach algebra $A$. How can I prove that $\sigma(x^{-1})=\{\lambda^{-1} : \lambda\in \sigma(x)\}$? AI: First note that $0\notin\sigma(x)$ and $0\notin\sigma(x^{-1})$, since both elements are invertible. Now for $\lambda \neq 0$ we have the identity $$x - \lambda 1 = -\lambda x\,(x^{-1} - \lambda^{-1} 1).$$ Since $-\lambda x$ is invertible, $x - \lambda 1$ is invertible if and only if $x^{-1} - \lambda^{-1} 1$ is invertible. Hence $\lambda \in \sigma(x)$ if and only if $\lambda^{-1} \in \sigma(x^{-1})$, which gives $\sigma(x^{-1})=\{\lambda^{-1} : \lambda\in \sigma(x)\}$.
H: Action of $S_7$ on the set of $3$-subsets of $\Omega$ Reviewing the great book on Permutation Groups by J.D. Dixon, I encountered the following problem: Show that $S_7$ acting on the set of $3$-subsets of $\Omega=\{1,2,3,4,5,6,7\}$ has degree $35$ and rank $4$ with subdegrees $1,4,12,18$. I wanted to check the problem by writing a short program in GAP, so I did: gap> S7:=SymmetricGroup(7);; gap> O:=[1..7];; gap> s3:=Combinations( O,3);; gap> Size(s3); 35 gap> G1:=Stabilizer(S7,s3[1],OnSets); Group([ (6,7), (5,7), (4,7), (2,3)(5,7), (1,2,3)(5,7) ]) gap> D1:=[];; gap> for k in [1..35] do D1[k]:=Size(Orbit(G1,s3[k],OnSets)); od;; gap> List([1..35],k->D1[k]); [ 1, 12, 12, 12, 12, 12, 12, 12, 12, 18, 18, 18, 18, 18, 18, 12, 12, 12, 12, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 4, 4, 4, 4 ] However, while I can read the subdegrees $1,4,12,18$ off this output, they are not listed in the order $1,4,12,18$, and I couldn't make each subdegree appear just once. AI: Try SortParallel(D1, s3); and then redo the steps D1 := [];; for k in [1..35] do D1[k]:=Size(Orbit(G1,s3[k],OnSets)); od;; D1; Is this what you want?
H: Rolling dice until each has taken a specific value I'm facing the following problem. Let's say I have $N$ dice in hand. I need to calculate how many times I should roll my dice until all of them show some selected (pre-defined) number. Each time I roll the selected number with some dice, I remove those dice from my hand and keep rolling the rest. Example: I have $2$ dice and I want to roll sixes. When I get one, I will remove that die and will roll one die instead of two. How many times do I need to roll my dice in order to get sixes on all of them (to make my hand empty)? I suppose that the correct answer is (for two dice) ${1\over6} +{1\over6} + {1\over6}\times{1\over6}$, but it seems to be wrong: I implemented an algorithm that runs 1M simulated games and computes the average number of required rolls, and it disagrees. Any help is appreciated. AI: I will try to interpret the question in a more formal way. Let $X_1, \ldots, X_n$ be i.i.d. geometric random variables with rate of success $p$ (which is $1/6$ in this case). Find $E[\max\{X_1, \ldots, X_n\}]$. My approach is to first compute the distribution of $Y = \max\{X_1, \ldots, X_n\}$. Suppose $y \in \mathbb N$ is given. \begin{align*} P(Y \le y) & = \prod_{i=1}^n P(X_i \le y) \\ & = \prod_{i=1}^n \left(1 - P(X_i > y)\right) \\ & = \left(1 - (1 - p)^y\right)^n \\ \therefore P(Y = y) & = (1 - (1 - p)^y)^n - (1 - (1 - p)^{y-1})^n. \end{align*} (Note that $P(Y = 1) = p^n$.) For ease of writing, let $q = 1 - p$. The expected value of $Y$ is \begin{align*} \sum_{y=1}^\infty yP(Y = y) & = \sum_{y=1}^\infty y\left((1 - q^y)^n - (1 - q^{y-1})^n\right). \end{align*} This is the simplest expression I can find. There might be simpler ones, but I haven't found any.
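A numeric check of this answer, added for illustration. The sketch below uses the equivalent tail-sum form $E[Y]=\sum_{y\ge 0}P(Y>y)$, a standard rewrite for nonnegative integer-valued variables, and compares it against a small simulation of the two-dice game (function names are mine):

```python
import random

def expected_max_geometric(n, p, tol=1e-12):
    """E[max of n iid geometric(p)] via E[Y] = sum over y >= 0 of P(Y > y)."""
    q, total, y = 1 - p, 0.0, 0
    while True:
        tail = 1 - (1 - q ** y) ** n     # P(Y > y)
        if tail < tol:
            return total
        total += tail
        y += 1

print(expected_max_geometric(2, 1 / 6))  # ~8.7273 (= 96/11) rolls for two dice

def rolls_until_six():
    k = 1
    while random.randrange(6) != 5:      # keep rolling until a six appears
        k += 1
    return k

trials = 100_000
print(sum(max(rolls_until_six(), rolls_until_six()) for _ in range(trials)) / trials)
```

The simulated average agrees with the series value, and both are far from the $1/6+1/6+1/6\times 1/6$ guess in the question.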
H: Open Cover / Real Analysis I have the following question: Let $K \subset \mathbb{R}^1$ consist of $0$ and the numbers $1/n$, for $n=1,2,3,\ldots$ Prove that $K$ is compact directly from the definition (without using Heine-Borel). I'm trying to understand compact sets, so I would be grateful if someone could give me some examples of open covers and subcovers. Thank you! AI: As you're trying to understand this, I'll try not to give too much away so you can work it out for yourself. Let $\mathcal{U}$ be a cover of the set $K$. As $\mathcal{U}$ is a cover of $K$, there must exist some open $U_0\in\mathcal{U}$ such that $0\in U_0$. From this, can you construct a finite subcover of $\mathcal{U}$ for $K$? I would suggest putting the set $U_0$ in your finite subcover. I'll try to help you establish why the set $U_0$ is so important in this proof. Suppose that $K'=K\setminus\{0\}$. If we tried to do something similar to the above, we would notice that we cannot be certain that there exists some open set in $\mathcal{U}$ which contains $0$. This really is crucial. With this information, we can quite easily construct an infinite cover of $K'$ which has no finite subcover. Let $\mathcal{U}=\{U_n\}$ where $U_n=\left(\frac{1}{n}-\frac{1}{2}(\frac{1}{n(n+1)}),\frac{1}{n}+\frac{1}{2}(\frac{1}{n(n+1)})\right)$. Now, the $U_n$ are defined so that $\frac{1}{n}\in U_n$ and no two distinct $U_n$ and $U_{n'}$ intersect in a non-empty set. So there are no proper subcovers of $\mathcal{U}$ for $K'$, let alone a finite subcover. It follows that $K'$ is not compact. Because $0$ is no longer in our set, the possible open covers now include covers in which no single open set covers all but finitely many of the elements, and this was the key to the proof for the original $K$ which included $0$.
H: Finite ultraproduct I got stuck when trying to prove: If the $A_\xi$ are domains of models of a first-order language with $|A_\xi|\le n$ for a fixed $n \in \omega$ and all $\xi$ in the index set $X$, and $\mathcal U$ is an ultrafilter on $X$, then $|\prod_{\xi \in X} A_\xi / \mathcal U| \le n$. My attempts: If $X$ is a finite set then $\mathcal U$ is principal. Then some singleton $\{x\}\in \mathcal U$ and $|\prod_{\xi \in X} A_\xi / \mathcal U| = |A_x|$. If $\mathcal U$ is not principal, then for every $x \in X$ there is $S_x \in \mathcal U$ with $x \notin S_x$. Then for every $k \in \omega$ there exists an equivalence class corresponding to $S_{x_1} \cap \dots \cap S_{x_k}$ with size greater than $|A_1|\cdot \dots \cdot |A_k|$. Can anything be said about the structure of the ultrafilter if $X$ is infinite? And how does one prove it? AI: The statement you are trying to prove is a consequence of Łoś's theorem - if every factor satisfies "there are no more than $n$ elements", then the set of factors that satisfy it is $X$, which is in $\mathcal{U}$, so by Łoś's theorem the ultraproduct will satisfy that sentence as well. Note that "there are no more than $n$ elements" is the sentence $$ (\exists x_1)\cdots(\exists x_n)(\forall y)[ y = x_1 \lor \cdots \lor y = x_n] $$ Thus one way to come up with a concrete proof of the statement you want is to examine the proof of Łoś's theorem and specialize it to the situation at hand. As a side note, if every factor is finite, but there is no bound on the sizes of the factors, then the ultraproduct will not be finite. The difference is that there is no longer a single sentence of interest that is true in all the factors, because finiteness is not definable in a first-order language. I assume that the OP figured out the hint, so let me spell out the answer for reference. Because $|A^\xi| = k$ for all $\xi \in X$, we can write $A^\xi = \{a^\xi_1, \ldots, a^\xi_k\}$ for each $\xi$. For $1 \leq i \leq k$ define $\alpha_i$ by $\alpha_i(\xi) = a^\xi_i$. Then every $\beta$ in the ultraproduct is equal to $\alpha_i$ for some $1 \leq i \leq k$. Proof: For $1\leq i \leq k$ let $B_i = \{\xi : \beta(\xi) = a^\xi_i\}$. Then $$X = B_1 \cup B_2 \cup\cdots\cup B_k.$$ Because $\mathcal{U}$ is an ultrafilter, one of the sets $B_i$ must be in $\mathcal{U}$, and if $B_i \in \mathcal{U}$ then $\beta = \alpha_i$ in the ultraproduct, QED. Thus we can explicitly name the $k$ elements of the ultraproduct: $\alpha_1, \alpha_2, \ldots, \alpha_k$.
H: Convergence in $L^p$ and $L^q$ - multiplication We have: $X_n \rightarrow X$ in $L^p$ and $Y_n \rightarrow Y$ in $L^q$. Moreover $p,q>1$ are such that $\frac{1}{p} + \frac{1}{q} =1$. Prove that $X_nY_n \rightarrow XY$ in $L^1$. Please, can you help? AI: Just use Hölder. We have $\def\norm#1{\left\|#1\right\|}$ \begin{align*} \norm{XY-X_nY_n}_1 &\le \norm{XY - XY_n}_1 + \norm{XY_n - X_nY_n}_1\\ &\le \norm X_p\norm{Y-Y_n}_q + \norm{X-X_n}_p\norm{Y_n}_q \end{align*} Now $(Y_n)$ is bounded in $L_q$, hence both terms tend to zero. So we are done.
H: Why doesn't integrating the area of the square give the volume of the cube? I had a calculus course this semester in which I was taught that integrating the area gives the size (volume): $$V = \int\limits_a^b {A(x)dx}$$ But this doesn't seem to work with the square. Since the area of the square is $x^2$, I took $A(x) = {x^2}$; then: $$V = \int\limits_{ - r}^r {{x^2}dx} = \left[ {\frac{{{x^3}}}{3}} \right]_{ - r}^r = \frac{{{r^3}}}{3} - \frac{{ - {r^3}}}{3} = \frac{2}{3}{r^3}$$ It's clear that this is not the volume of the cube. Why is this the case? Am I misunderstanding something? AI: It should be: $$V = \int_0^a a^2 dz$$ where $a$ is the length of one of the sides of the square. Or using your notation: $$V = \int_0^x x^2 dz$$ where $z$ is the dimension over which you are integrating. The point is that $A(x)$ in the slicing formula must be the area of the cross-section at position $x$. For a cube of side $a$, every cross-section is the same square, so the integrand is the constant $a^2$. Your integrand $x^2$ instead describes a solid whose square cross-section grows with $x$: two pyramids joined tip-to-tip at the origin, and $\frac{2}{3}r^3$ is indeed the volume of that solid.
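Evaluating the corrected integral is immediate, since the cross-sectional area $a^2$ does not depend on $z$: $$V = \int_0^a a^2\, dz = a^2 \cdot a = a^3,$$ the familiar volume of a cube of side $a$.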
H: How to calculate the norm of an ideal? Would someone please help explain how to calculate the norm of an ideal? I can't find a source that explains this clearly. For example, I know that the norm $N_{\mathbb{Q}(\sqrt{10})}(\langle2,\sqrt{10}\rangle)=2$, but I am not clear on how to get it myself. Many thanks AI: In general, given $I=(s_1,\dots,s_n)\lhd\mathcal{O}_K$ in the ring of integers of some number field $K$, we can factor it into primes to try and work out its norm. You can see the answer here which explains the rationale behind it, but the important points are that: If a prime $\mathfrak{p}\mid I$, then $s_1,\dots,s_n\in\mathfrak{p}$. Let $(p)=\mathfrak{p}\cap\mathbb{Z}$. Then by the multiplicativity of the norm, we know that $\text{N}(\mathfrak{p})\mid\text{N}(s_i)$, and $\text{N}(\mathfrak{p})$ is always a prime power. This means that the primes in $\mathcal{O}_K$ dividing $I$ are above the rational primes dividing $\gcd(\text{N}(s_1),\dots,\text{N}(s_n))$: this gives us a finite list to check! If $\mathcal{O}_K=\mathbb{Z}[\alpha]$, then find the possible primes $\mathfrak{p}=(p,g(\alpha))$. Then $\mathfrak{p}\mid I$ if and only if $s_1,\dots,s_n\in(\overline{g})\lhd(\mathbb{Z}/p\mathbb{Z})[\alpha]$. In your example, $\text{N}(a+b\sqrt{10})=a^2-10b^2$, so $\text{N}(2)=4$, and $\text{N}(\sqrt{10})=-10$. So the only primes dividing $I=(2,\sqrt{10})$ are of norm $2$. In $\mathcal{O}_K=\mathbb{Z}[\sqrt{10}]$, there is in fact only one prime ideal of norm $2$, namely $\mathfrak{p}_2=(2,\sqrt{10})=I$. Alternatively, you can use the definition that $\text{N}(I)=|\mathcal{O}_K/I|$, and this method may well be quicker with simpler examples such as these. It should be quite clear that $$ \mathcal{O}_K/I=\mathbb{Z}[\sqrt{10}]/(2,\sqrt{10})\cong\mathbb{Z}/2\mathbb{Z} $$
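As a numerical cross-check of the example, one can compute $\text{N}(I)=|\mathcal{O}_K/I|$ as a lattice index. A small Python sketch (my own setup, not from the answer): under the $\mathbb{Z}$-basis $\{1,\sqrt{10}\}$, the ideal $I=(2,\sqrt{10})$ is the $\mathbb{Z}$-span of the products of its generators with the basis, and the index of that sublattice of $\mathbb{Z}^2$ equals the gcd of the $2\times 2$ minors of a generating matrix (the second determinantal divisor from Smith normal form theory).

```python
from functools import reduce
from itertools import combinations
from math import gcd

# Z-module generators of I = (2, sqrt(10)) in Z[sqrt(10)],
# encoding a + b*sqrt(10) as the row (a, b):
rows = [(2, 0),    # 2 * 1
        (0, 2),    # 2 * sqrt(10)
        (0, 1),    # sqrt(10) * 1
        (10, 0)]   # sqrt(10) * sqrt(10) = 10

# gcd of all 2x2 minors = index of the sublattice = N(I)
minors = [abs(p[0]*q[1] - p[1]*q[0]) for p, q in combinations(rows, 2)]
print(reduce(gcd, minors))  # prints 2, matching N((2, sqrt(10))) = 2
```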
H: Any way to simplify this gcd totient function I have the following expression $$\frac{\gcd(a,b)}{\varphi(\gcd(a,b))}$$ $a,b$ are known positive integers. Is there any way to rephrase or simplify it? AI: In general, $$\frac{n}{\phi(n)} = \prod_{p|n} \frac{p}{p-1}$$ where the product is over all primes that divide $n$. So in your case, with $n=\gcd(a,b)$ this becomes: $$\prod_{p|a \land p|b} \frac{p}{p-1}$$ I don't think there is any better formula. Since $\gcd(a,b)$ can be any natural number, there is no real trick.
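A quick numerical sanity check of the product formula, as a Python sketch (the values of $a,b$ are arbitrary test inputs; sympy is assumed to be available):

```python
from math import gcd, prod
from sympy import primefactors, totient

a, b = 360, 84
n = gcd(a, b)                                      # n = 12
print(n / int(totient(n)))                         # 12 / phi(12) = 3.0
print(prod(p / (p - 1) for p in primefactors(n)))  # (2/1)*(3/2) = 3.0
```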
H: Any arbitrary closed smooth curve bounds an orientable surface? I've got a question: given an arbitrary closed smooth curve $C:[0,1]\rightarrow\mathbb{R}^3$, can you always find an orientable surface $\Omega$ that satisfies $\partial\Omega=C[0,1]$? I have no idea about this question, and I suppose that the surface $\Omega$ has no restriction such as “smooth”. Thanks a lot for your help. AI: Do we know that $C$ does not intersect itself, i.e. that $C$ restricted to the half-open interval $\left[ 0 , 1 \right[$ is injective? Then the orientable surface exists, can be taken to be compact and connected, and is then called a Seifert surface of the knot.
H: Definition of Lebesgue-Stieltjes measure on $\mathbb R$ Let $F:\mathbb R\to\mathbb R$ be a non-decreasing, left-continuous function. Let $a,b\in\mathbb R$, then define the Lebesgue-Stieltjes measure $$ m[a,b]=F(b+)-F(a), \quad m(a,b)=F(b)-F(a+) $$ $$ m(a,b]=F(b+)-F(a+), \quad m[a,b)=F(b)-F(a) $$ I am wondering what the notation $F(a+)$ or $F(b+)$ means. Since $F$ is left-continuous we could have, e.g. $F(x)=\begin{cases} 1 & ,x\le3\\ 2 & ,x>3\end{cases}$. Now is $F(3+)=1$ or $F(3+)=2$ ? Intuitively I would assume that e.g. $$m[3,4]=m(3,4)=m(3,4]=m[3,4)=F(4)-F(3)=2-1=1$$ in any case. But I am probably wrong. AI: The notation $f(a+)$ means "right hand limit," i.e., $\lim_{x\downarrow a} f(x)$. Similarly you have $$ f(a-) = \lim_{x\uparrow a} f(x).$$ These one-sided limits always exist for any monotone function; in particular they exist for your nondecreasing $F$ (for a cumulative distribution function one additionally has range contained in $[0,1]$). In your example, you have $F(3-) = 1$ and $F(3+) = 2.$ Since $F(3) = 1$, $F$ is left-continuous at 3. Remember, the limits disregard behavior at the target point.
H: $n$ divides $\phi(a^n -1)$ where $a, n$ are positive integers. Let $n$ and $a$ be positive integers with $a > 1$. I need to show that $n$ divides $\phi(a^n -1)$. Here, $\phi$ denotes the Euler totient function. Could any one give me a hint? AI: A group theoretic solution can be given (though this solution requires some advanced concepts, it is very elegant and beautiful). Let $m= a^n-1$. Consider the group $G = (\mathbb{Z} / m\mathbb{Z})^*$ or rather $(\mathbb{Z}_m)^*$. This group has $\phi(m)$ elements; that is, the order of the group is $\phi(m)$. Let $\bar a \in \mathbb{Z} / m \mathbb{Z}$ be the residue class of the integer $a$ modulo $m$. Then, $\bar a \in G$, as $\gcd(a,m) = \gcd(a,a^n-1)=1$. Consider the subgroup $H=\left<\bar a\right>$, that is, the subgroup generated by $\bar a$. Now $a^n\equiv 1 \mod m$ (where $m=a^n-1$), and $n$ is the smallest positive integer with this property, since no positive integer $i<n$ satisfies $a^i \equiv 1 \mod m$ (because $a>1$ makes $a^i - 1$ a positive integer smaller than $m$). This implies that the order of $H$ equals $n$. Now as the order of a subgroup always divides the order of the group, we have $n\mid\phi(a^n-1)$.
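For those who like to see a statement confirmed on small cases before proving it, here is a quick Python check (assuming sympy; factoring $a^n-1$ grows expensive quickly, so the ranges are kept small):

```python
from sympy import totient

for a in range(2, 8):
    for n in range(1, 11):
        # totient factors a**n - 1 internally; fine at this size
        assert totient(a**n - 1) % n == 0, (a, n)
print("n | phi(a^n - 1) holds for all tested a, n")
```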
H: What is the domain of $x^x$ when $x<0$? I know that $x^x$ is defined for all $x>0$, but for which negative values of $x$ does the function give a real number? For example $$f(-1)=(-1)^{-1}=-1\in \mathbb{R}$$ I tried to approach it with sequences but failed. Is there any help? Thanks for all. AI: $x^x$ is well defined as a real function for $$(0,\infty) \cup \{ -\frac{m}{2n+1}\mid m, n \in {\mathbb N} \}$$ (with $m\ge1$, $n\ge0$), that is, on the positive reals together with the negative rationals whose denominator, in lowest terms, is odd: odd roots of negative reals are real.
H: $\lim_{x\to \pi/2} \;\frac 1{\sec x+ \tan x}$ how to solve it The answer is $0$, but isn't $\frac 1{\infty + \infty}$ an indeterminate form? $$\lim_{x \to \pi/2} \frac 1{\sec x + \tan x}$$ AI: Clarification: $$\lim_{x \to \left(\frac{\pi}{2}\right)^-} \frac{1}{\sec x + \tan x} \to \frac{1}{+\infty} = 0$$ $$\lim_{x \to \left(\frac{\pi}{2}\right)^+} \frac{1}{\sec x + \tan x} \to \frac{1}{-\infty} = 0$$ $$\lim_{x \to \frac{\pi}{2}} \frac 1{\sec x + \tan x} = 0$$ (Note that as $x\to(\pi/2)^+$ the cosine is negative, so $\sec x+\tan x\to-\infty$ there.) In other words, a limit of the form $\,\frac 1{\pm\infty}\,$ is not indeterminate: in both one-sided cases the denominator blows up, so the quotient tends to zero. I do believe that recognizing your function $$f(x) = \frac 1{\tan x + \sec x} = \frac{\cos x}{1 + \sin x}$$ makes the limit as $x \to \pi/2$ perhaps more evident: the right-hand side is continuous at $\pi/2$ with value $0/2 = 0$.
H: The number of solutions of $ax^2+by^2\equiv 1\pmod{p}$ is $ p-\left(\frac{-ab}{p}\right)$ What I need to show is that for $\gcd(ab,p)=1$ and $p$ an odd prime, the number of solutions of the equation $ax^2+by^2\equiv 1\pmod{p}$ is exactly $$p-\left(\frac{-ab}{p}\right)\,.$$ I got a hint from the answer that I have to use the Legendre symbol. I think that I may count the solutions one by one. What I did: $$(ax)^2 \equiv a-aby^2 \pmod{p}$$ It suffices to count $y$ such that $(\frac{a-aby^2}{p})=1$. I tried to use the complete residue system or a primitive root but it didn't work. The factorization also didn't work. I think that the pigeonhole principle may not work because it just says the existence. Thanks in advance. AI: Using the Legendre symbol, the number of solutions can be written as $$ \sum_{y=0}^{p-1} \left(1+\left(\frac{a-aby^2}{p}\right)\right),$$ since if $a-aby^2$ is a nonzero square, you have to count two solutions in $x$, and if $a-aby^2$ is zero, then you have to count one solution in $x$ (namely 0), and if $a-aby^2$ is a non-square, then there are no solutions. Then rewrite the summation of the second term as $$ \sum_{y=0}^{p-1} \left(\frac{a-aby^2}{p}\right)=\left(\frac{-ab}{p}\right)\sum_{y=0}^{p-1}\left(\frac{y^2+d}{p}\right),$$ where $d=-b^*$ ($b^*$ is the inverse of $b$ modulo $p$). The summation on the right can be rewritten as $$\sum_{y=0}^{p-1} \left(1+\left(\frac{y}{p}\right)\right)\left(\frac{y+d}{p}\right)$$ Then the sum over the single Legendre symbol is zero, but it remains to compute $$\sum_{y=0}^{p-1}\left(\frac{y}{p}\right)\left(\frac{y+d}{p}\right)$$ This one involves a trick: using $\left(\frac{y^*}{p}\right)$ instead of $\left(\frac{y}{p}\right)$, where $y^*$ is the inverse of $y$. For $y\ne 0$ we have $\left(\frac{y}{p}\right)=\left(\frac{y^*}{p}\right)$, hence $\left(\frac{y}{p}\right)\left(\frac{y+d}{p}\right)=\left(\frac{y^*(y+d)}{p}\right)=\left(\frac{1+dy^*}{p}\right)$; as $y$ runs over the nonzero residues, $1+dy^*$ runs over all residues except $1$ (note $d\ne0$), and since the Legendre symbol sums to $0$ over a complete residue system, $$\sum_{y=0}^{p-1}\left(\frac{y}{p}\right)\left(\frac{y+d}{p}\right)=-\left(\frac{1}{p}\right)=-1$$ Now, putting it all together, you get the desired answer.
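The closed formula is easy to test by brute force. A small Python sketch (my own; it assumes sympy's legendre_symbol and only tries a few small odd primes):

```python
from sympy import legendre_symbol

def count_solutions(a, b, p):
    # direct count of pairs (x, y) with a*x^2 + b*y^2 = 1 (mod p)
    return sum((a*x*x + b*y*y) % p == 1 for x in range(p) for y in range(p))

for p in [3, 5, 7, 11, 13]:
    for a in range(1, p):
        for b in range(1, p):
            assert count_solutions(a, b, p) == p - legendre_symbol(-a*b % p, p)
print("count equals p - (-ab/p) for all tested cases")
```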
H: Find $x$ for which for every $a$ the equation has a solution $$a^{31x} \equiv a \mod 271$$ I need to find the variable $x$ for which the congruence holds for every $a$. How can I do this? Generally, modular equations have solutions when $\gcd(a^{31x}, 271) = 1$, or $\gcd(a^{31x}, 271) = d > 1$ with $d\mid a$. It also looks like I could use Fermat's little theorem... But I can't come up with anything... AI: Since 271 is prime, we can indeed apply Fermat's little theorem. Using this theorem we know that $a^{n}\equiv a \pmod{271}$ for any $n\equiv 1 \pmod{270}$. Hence, we need to find an $x$ such that $31x\equiv 1\pmod{270}$. To find an appropriate $x$ we may use the extended Euclidean algorithm: $ 0\cdot 31+1\cdot 270 = 270$ $1\cdot 31+0\cdot 270 = 31$ $ -8\cdot 31+1\cdot 270 = 22$ $ 9\cdot 31-1\cdot 270 = 9$ $-26\cdot 31+3\cdot 270 = 4$ $61\cdot 31-7\cdot 270 = 1$ Thus $61\cdot 31\equiv 1 \pmod{270}$ and hence any $x\equiv 61\pmod {270}$ will do.
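As a quick verification of the arithmetic (Python 3.8+, where pow accepts a negative exponent for modular inverses):

```python
x = pow(31, -1, 270)   # modular inverse of 31 mod 270, via extended Euclid
print(x)               # 61

# a^(31*61) ≡ a (mod 271) for every residue a:
assert all(pow(a, 31 * x, 271) == a for a in range(271))
print("verified for all residues mod 271")
```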
H: Proof that the interior of any countable union of closed sets with empty interior in a compact Hausdorff space is empty The question is pretty much in the title: I need to show that if $X$ is a compact Hausdorff space and $\left\{ A_n\right\}_{n=1}^\infty$ is a collection of closed subsets of $X$, each with empty interior, then $\bigcup_{n\in\mathbb{N}}A_n$ has empty interior as well. Out of curiosity, I was also wondering whether the countability of the collection is necessary for the claim, or if it's true for any such collection of subsets. Help would be appreciated :) AI: A space with the property that the union of countably many nowhere dense closed sets has empty interior is called a Baire space. It's easier if you dualize and show that the intersection of countably many open dense sets $D_n$ is dense. For if $A$ is closed and has empty interior, its complement $D$ is open and dense. Start with an open set $U$ and intersect it with $D_1$. Take a point $x_1$ in the intersection. There is an open neighborhood $U_1$ with compact closure $C_1\subset U\cap D_1$ (compactness plus Hausdorffness gives regularity, which supplies such a neighborhood). Then intersect $U_1$ with $D_2$ and repeat this process for a point $x_2$ in $U_1\cap D_2$. This produces a decreasing sequence of nonempty compact sets $C_1\supset C_2\supset\cdots$ with $C_n\subset U\cap D_1\cap\dots\cap D_n$; by compactness $\bigcap_n C_n\neq\varnothing$, and any point of it lies in $U\cap\bigcap_n D_n$, proving density. As for your side question: countability is essential. For instance, $[0,1]$ is the union of its uncountably many singletons, each of which is closed with empty interior.
H: Can anyone help me find a recurrence relation in combinatorics? I am having trouble finding a recurrence relation. A codeword is made up of the digits $0,1,2,3$ (order is important!). A codeword is defined as legitimate if and only if it has an even number of $0$’s. Let $a_n$ be the number of legitimate codewords of length $n$. Write a recurrence for $a_n$. The answer is $a_n = 3a_{n-1} + 4^{n-1} - a_{n-1}$ I know where the $3a_{n-1}$ came from, but the problem is, I can't figure out where the rest ($4^{n-1} - a_{n-1}$) came from! Please can anyone tell me how to approach the question? Thank you! AI: So let's look at a word $w = w_1 \ldots w_n$ of length $n$ with $w_i \in \{0,1,2,3\}$ and denote by $w' = w_2 \ldots w_n$ the word consisting of all but the first letter. $w$ is legitimate in exactly the following cases: $w_1 \ne 0$ and $w'$ is legitimate (as then the number of $0$s in $w$ equals that in $w'$). There are 3 possibilities for $w_1$ and $a_{n-1}$ for $w'$, hence $3a_{n-1}$ words $w$ of this form. $w_1 = 0$ and $w'$ is illegitimate (as then the number of $0$s in $w'$ has to be odd). There are $4^{n-1}$ words over the 4 letters with length $n-1$, of which $a_{n-1}$ are legitimate, hence $4^{n-1}-a_{n-1}$ illegitimate. So there are $4^{n-1}-a_{n-1}$ possibilities for $w$ in this case.
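The recurrence, and the closed form it implies, can be checked against direct enumeration. A short Python sketch (my own; I take $a_0=1$ for the empty word, which has zero $0$s, an even number):

```python
from itertools import product

def brute(n):  # count words over {0,1,2,3} with an even number of 0s
    return sum(w.count("0") % 2 == 0 for w in product("0123", repeat=n))

a = 1                                # a_0 = 1 (empty word)
for n in range(1, 9):
    a = 3 * a + 4 ** (n - 1) - a     # the recurrence from the question
    assert a == brute(n) == (4 ** n + 2 ** n) // 2   # closed form
print("recurrence and closed form (4^n + 2^n)/2 match enumeration")
```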
H: Proving that for $f\geq0$ on $X$, $\int_X f d\mu = 0$ iff $f = 0$ a.e. Okay, so the question is the following: Suppose $f \geq 0$ is a measurable function on the measure space $(X,\Sigma,\mu)$. Prove that \begin{align} \int_X f d\mu = 0 \text{ if and only if } f = 0 \text{ almost everywhere.} \end{align} I've sort of finished the proof, but my version is not very elegant or simple. I was wondering if there is a simple proof of the statement. It's meant to be using only the basics of Lebesgue integration, i.e., simple functions, etc. Thanks! AI: The (almost) only fact needed is Fact: If $0\le f\le g$, then $\int f\le \int g$. One direction of your assertion is easy: Suppose $f$ is nonnegative and that the set $E=\{ x : f(x)>0\}$ has measure zero. Then we have, using the Fact: $$0\le\int f\le\int \infty\cdot\chi_E=\infty\cdot\mu(E)=0.$$ For the other direction, prove the contrapositive: Assume the set $E=\{ x : f(x)>0\}$ has positive measure. We proceed as suggested by my comment above. For $n$ a positive integer, define the set $E_n=\{x: f(x)>1/n \}$. Note that $\ \ \ 1)$ $E=\bigcup\limits_{n=1}^\infty E_n$ and $\ \ \ 2)$ $E_1\subset E_2\subset E_3\subset\cdots\,$. From $1)$ and $2)$ (and a result referenced by the "almost" of the first paragraph, namely continuity of measure from below) it follows that $\mu(E)=\lim\limits_{n\rightarrow\infty} \mu(E_n)$. Consequently, it follows from our assumption that $\mu(E)>0$ that there is some $n$ with $\mu(E_n)>0$. So, with this in hand, using the Fact again, we have: $$ \int f \ge \int_{E_n} f\ge \textstyle{1\over n}\cdot\mu(E_n)>0. $$ Whether or not this is "elegant" is debatable ...
H: How to calculate $\lim\limits_{x\to1^+}\frac{1}{(x-1)^2} \int\limits_{1}^{x} \sqrt{1+\cos(\pi t)}\,\mathrm dt$ Can anyone help me by calculating this limit? I know that I need L'Hôpital, but how? $$ \lim_{x \to 1^+}\frac{1}{(x-1)^2} \int_{1}^{x} \sqrt{1+\cos(\pi t)} \,\mathrm dt $$ Thank you very much!! AI: Since my hints were not helpful, here is my solution $$ \begin{align} \lim\limits_{x\to 1^+}\frac{1}{(x-1)^2}\int\limits_{1}^{x}\sqrt{1+\cos(\pi t)}dt &=\lim\limits_{x\to 1^+}\frac{\left(\int\limits_{1}^{x}\sqrt{1+\cos(\pi t)}dt\right)'}{((x-1)^2)'}&\text{L'Hôpital's rule}\\ &=\lim\limits_{x\to 1^+}\frac{\sqrt{1+\cos(\pi x)}}{2(x-1)}&\text{substitute } y= x-1\\ &=\lim\limits_{y\to 0^+}\frac{\sqrt{1+\cos(\pi (y+1))}}{2y}&\cos(\pi+\alpha)=-\cos\alpha\\ &=\lim\limits_{y\to 0^+}\frac{\sqrt{1-\cos(\pi y)}}{2y}&1-\cos\alpha=2\sin^2\frac{\alpha}{2}\\ &=\lim\limits_{y\to 0^+}\frac{\sqrt{2}|\sin\frac{\pi y}{2}|}{2y}&y>0\\ &=\lim\limits_{y\to 0^+}\frac{\sqrt{2}\sin\frac{\pi y}{2}}{2y}&\sin\alpha\sim\alpha\\ &=\lim\limits_{y\to 0^+}\frac{\sqrt{2}\frac{\pi y}{2}}{2y}\\ &=\frac{\pi\sqrt{2}}{4} \end{align} $$
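A numerical sanity check of the value $\pi\sqrt{2}/4\approx1.1107$, as a small Python sketch (assuming scipy is available for the quadrature):

```python
from math import cos, pi, sqrt
from scipy.integrate import quad

def expr(x):
    integral, _ = quad(lambda t: sqrt(1 + cos(pi * t)), 1, x)
    return integral / (x - 1) ** 2

for h in [1e-1, 1e-2, 1e-3]:
    print(expr(1 + h))      # approaches the limit from the right
print(pi * sqrt(2) / 4)     # 1.1107...
```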
H: Question on Contractions Let $S = \{x \in \mathbb{R}^n ; \|x\| \le 1 \}$ and $f: S \to S$ be a contraction. Determine whether one can have $f(S) = S$. I really need some help with this question. In advance I wanted to give all the necessary definitions needed for this question. Definition: Let $(X, P)$, $(Y, P')$ be metric spaces and $f: X \rightarrow Y$ a function. Let $M \in \mathbb{R}$ with $M > 0$. We say that $f$ satisfies a Lipschitz condition with Lipschitz constant $M$ if $\forall x, y \in X: P'(f(x), f(y)) \le M P(x,y)$ Let $(X, P)$, $(Y, P')$ be metric spaces and $f: X \rightarrow Y$ a function. We say that $f$ is a contraction (mapping) if $f$ satisfies the Lipschitz condition on $X$ with some Lipschitz constant $M \in (0,1)$. AI: Let $\xi \in S$ be the fixed point of $f$, which exists by the Banach fixed point theorem. Now the function $d \colon S \to \mathbb R$, $x \mapsto \|x-\xi\|$ is continuous and attains its maximum on $S$, as $S$ is compact. Let $\eta \in S$ be such that $d(\eta) \ge d(x)$ for all $x \in S$; since $S$ contains points other than $\xi$, we have $d(\eta)>0$. Let $q \in [0,1)$ be a Lipschitz constant for $f$; then for all $x \in S$: $$ d(f(x)) = \|f(x) - \xi\| = \|f(x) - f(\xi)\| \le q \|x-\xi\| = qd(x) \le qd(\eta) < d(\eta). $$ Hence $f(x) \ne \eta$ for every $x \in S$, so $f$ isn't onto; in particular $f(S) = S$ is impossible.
H: Proof of the uniqueness of maximal ideal Let $R$ be a commutative ring with $1$. Let $M$ be a maximal ideal of $R$ such that $M^2 = 0$. Prove that $M$ is the only maximal ideal of $R$. AI: A maximal ideal is prime. Let $\mathfrak{m}$ be any maximal ideal and let $x\in M$; since $x^2=0\in\mathfrak{m}$ and $\mathfrak m$ is prime, you get $x\in\mathfrak{m}$. Therefore $M\subseteq\mathfrak{m}$, and since $M$ is maximal and $\mathfrak m$ is proper, this forces $M=\mathfrak{m}$. Note that the same holds if $M$ is a nil ideal.
H: A family of sets such that the each subfamily has a different union If it helps anything, please assume that everything below is finite. Let $\mathcal A$ be a family of subsets of a set $X$. I want to consider the following independence condition (C) on $\mathcal A$. (C). The function $\bigcup: 2^\mathcal A\to2^X$ is an injection. In other words, (C) says that each subfamily of $\mathcal A$ has a different union. Every pairwise disjoint family $\mathcal A$ satisfies this condition. If I put together pairwise disjoint pieces, I can recognize them in the union. But there are other examples. For $|\mathcal A|=1,$ (C) simply says that $\mathcal A\neq \{\varnothing\}$ because if $\mathcal A=\{\varnothing\},$ then $\varnothing,\{\varnothing\}\subseteq\mathcal A,$ and $\bigcup\varnothing=\bigcup\{\varnothing\}=\varnothing,$ but $\varnothing\neq\{\varnothing\},$ so $\bigcup$ is not injective; if $\mathcal A = \{Y\},$ $Y\neq\varnothing,$ then the only subfamilies of $\mathcal A$ are $\varnothing$ and $\{Y\}$, and they have different unions. For $|\mathcal A|=2$, $\mathcal A=\{Y,Z\},$ (C) says that $Y\setminus Z\neq\varnothing\neq Z\setminus Y.$ (In other words, neither of the sets is a subset of the other.) Indeed, if $Y\setminus Z=\varnothing,$ then $\bigcup\{Y,Z\}=\bigcup\{Z\}$ (and the same for $Z\setminus Y=\varnothing$); otherwise, the unions of $\varnothing,\{Y\},\{Z\},\{Y,Z\}$ are all different. For $|\mathcal A|=3,$ I already don't see a simple condition equivalent to (C). Is there one in the general case? (C) is of course a simple condition in terms of formulation, but it is quite a complex one in terms of the "little conditions" it comprises. I'm not certain that a condition I'm asking for exists, but perhaps it does, and someone here can help me see it? AI: The condition that $$\tag1 A\not\subseteq\bigcup(\mathcal A\setminus\{A\})\qquad\text{for all }A\in \mathcal A$$ is necessary to distinguish $\mathcal A$ from $\mathcal A\setminus\{A\}$ and sufficient as it implies that $A\in \mathcal B\subseteq \mathcal A$ iff $A\subseteq \bigcup\mathcal B$. For each $A\in \mathcal A$, $(1)$ allows us to select an element $a(A)\in A\setminus \bigcup(\mathcal A\setminus\{A\})$ that works as a "sensor" for $A$. That is, we can restrict everything from $X$ to $\{a(A)\mid A\in\mathcal A\}$ and by that end up in the simple case of disjoint singleton sets. In other words, an equivalent condition is: There exists a subset $X'\subseteq X$ such that $A\cap X'$ is a singleton for every $A\in\mathcal A$ and $A_1\ne A_2$ implies $A_1\cap X'\ne A_2\cap X'$.
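For small finite families, both condition (C) and the equivalent condition $(1)$ can be tested by brute force, which is handy for building intuition. A Python sketch (all naming is mine):

```python
from itertools import combinations

def unions_injective(family):      # condition (C)
    seen = set()
    for r in range(len(family) + 1):
        for idx in combinations(range(len(family)), r):
            u = frozenset().union(*(family[i] for i in idx))
            if u in seen:
                return False
            seen.add(u)
    return True

def condition_1(family):           # no member inside the union of the rest
    return all(not family[i] <= frozenset().union(
                   *(family[j] for j in range(len(family)) if j != i))
               for i in range(len(family)))

fam = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]
print(unions_injective(fam), condition_1(fam))   # False False
```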
H: Problem solving earthquake magnitude with logarithms I need help solving a similar type of equation to this; this one was easy though: An earthquake off the coast of Vancouver Island was measured at 8.9 on the Richter Scale and an earthquake off the coast of Alaska was measured at 6.5. How many times more intense, to the nearest whole number, was the earthquake off the coast of Vancouver Island than the one off the coast of Alaska? My solution for this problem was $10^{8.9}/10^{6.5} \approx 251$ times more intense. This is the one I'm having problems with: A major earthquake of magnitude 8.3 is 120 times as intense as a minor earthquake. Determine the magnitude, to the nearest tenth, of the minor earthquake. I think I should set up $10^{8.3} / 10^x = 120$ and solve for $x$? But it has to be a logarithmic equation. AI: Use the subtraction law of exponents: $$\frac{10^a}{10^b} = 10^{a-b}$$ so that $$\frac{10^{8.3}}{10^x} = 10^{8.3-x} = 120$$ Therefore $$8.3 - x = \log_{10}{120} = 2+\log_{10}{1.2} \approx 2.1$$ $$\implies x \approx 6.2$$
H: System of differential equations: Inverse matrix of a fundamental matrix I'm trying to show: Let $A:[0,\infty[\to \mathcal{M}(n,\mathbb{R})$ be a function and suppose that all solutions of the system of differential equations $$\vec{x'}(t)=A(t)\vec{x}(t) \ \ \ (\star)$$ are bounded when $t\geq 0$. If $X(t)$ is a fundamental matrix (a matrix of fundamental solutions) of the system $(\star)$, show that $X^{-1}(t)$ is bounded for $t\geq 0$ if and only if the function $t\to \int^t_0 \operatorname{tr}A(s)ds$ is bounded below. I know that when $A$ is constant, a fundamental matrix is $e^{tA}=e^{tPJP^{-1}}=Pe^{tJ}P^{-1}$ where $J$ is the Jordan form of $A$, but I have no idea how to prove the exercise. Thanks for your help. AI: The determinant $f(t)=\mathrm{det}\,X$ of the fundamental matrix satisfies the equation $$(\ln f)'=(\ln\mathrm{det}\,X)'=\left(\mathrm{Tr}\ln X\right)'=\mathrm{Tr}\left(X'X^{-1}\right)=\mathrm{Tr}\,A,$$ so that $\displaystyle f(t)=\mathrm{const}\cdot\exp\int_0^t\mathrm{Tr}\,A(s)\,ds$ (this is Liouville's formula). The statement for $X^{-1}$ follows from this relation and boundedness of $X$.
H: Odd and even functions- a direct sum? Question: Let $V$ be the vector space of all functions $\Bbb R\to \Bbb R$. Show that $V=U \oplus W$ for $$U=\{f\ | \ f(x)=f(-x)\ \ \forall x\}, \quad W=\{f \ |\ f(x)=-f(-x) \ \ \forall x\}$$ What I did: I did prove that $U \cap W$={$0$}. But proving that any function from $\mathbb{R}$ to $\mathbb{R}$ can be displayed as a sum of odds and evens wasn't a success. I tried saying that for $v \in V, w \in W: v=v-w+w$ and proving that $v-w \in U$ but that didn't work (that trick worked with some linear transformations we saw, but this isn't a linear transformation). AI: Hint: $$f(x)=\frac{f(x)+f(-x)}{2}+\frac{f(x)-f(-x)}{2}$$
H: Determine the inverse function of $f(x)=3^{x-1}-2$ Determine the inverse function of $$f(x)=3^{x-1}-2.$$ I'm confused: when you solve for the inverse, do you solve for $x$ instead of $y$? So would it be $x=3^{y-1}-2$? AI: $$ y=3^{x-1}-2 $$ $$ y+2=3^{x-1} \qquad\text{(now apply }\log_3\text{ to both sides)} $$ $$ \log_3(y+2)=\log_3 3^{(x-1)} $$ $$ \log_3(y+2)=x-1 $$ $$ x=\log_3 (y+2)+1 $$ Finally, interchange the roles of $x$ and $y$ to write the inverse function: $$ f^{-1}(x)=\log_3 (x+2)+1, \qquad x>-2. $$
H: The graph of the function $y=\log_bx$ passes through the point $(729,6)$. Determine $b$. The graph of the function $y=\log_bx$ passes through the point $(729,6)$. Determine $b$. Could someone show me a solution that is similar to mine if it is correct? $\log_b 729 = 6$ $729 = b^6$ $b = 729^{1/6} = 3$ AI: Your solution is fine. There are of course six different sixth roots of 729, but only one which is a positive real number.
H: Closed form for Exponential Conditional Expected Value & Variance I am wondering if there is a closed form for finding the expected value or variance for a conditional exponential distribution. For example: $$ E(X|x > a) $$ where X is exponential with mean $\lambda$. Same question for variance. What about for a joint distribution of independent exponentials? $$ E(X|y > a) $$ where X is exponential with mean $\lambda$, Y is exponential with mean $\theta$ and X & Y are independent. A sample problem for the actuarial P/1 exam (#124 for those also studying) asks: The joint density is $f(x,y) = 2e^{-x-2y}, ~ x > 0, ~ y > 0$. Calculate the variance of Y given $x > 3, ~ y > 3$. The solution goes like this: (math on the left, reasoning on the right) $Var (Y|x>3, y>3) =$ $Var (Y|y>3) = ~~~~~$Independence $Var (Y + 3) = ~~~~~$Memoryless $Var (Y) + Var (3) =~~~~~$Independence of Y and 3. $Var (Y) = ~~~~~ $ Since $Var (3) = 0$. $0.25 ~~~~~ $Exponential Variance, $\lambda = 2$. So this says to me that $Var (Y|y>3) = Var (Y)$. Is that true? If so, is it always true? If not, then how does this solution work? Could one also replace E(Y) for Steps 1 - 4, use $E(a) = a$, and get $E(Y| y>a) = E(Y) + a$? Shortcuts like this are immensely valuable for a timed test. (Not just faster, but less error prone). AI: Let $X$ be an exponentially distributed random variable with parameter $\lambda$. We can think of $X$ as the lifetime of a specific Carbon $14$ atom. Given that $X\gt a$, the additional lifetime $Y$ of the atom is exponentially distributed with parameter $\lambda$. This is a key property of the exponential, usually called memorylessness. Given that $X\gt a$, $X=a+Y$. It follows that $E[X|X\gt a]=a+E[Y]=a+E[X]$. The second question of the same kind is much simpler. If $X$ and $Y$ are any independent random variables, then $E[X|Y\gt a]=E[X]$. Back to the exponential. Since the random variable $X|X\gt a$ is just $a+Y$, where $Y$ is exponential, we have $\text{Var}(X|X\gt a)=\text{Var}(a+Y)=\text{Var}(Y)=\text{Var}(X)$. Since we know that an exponentially distributed random variable $X$ with parameter $\lambda$ has mean $\frac{1}{\lambda}$ and variance $\frac{1}{\lambda^2}$, we can write down with no trouble the mean and variance of the conditional version.
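The memorylessness step is easy to confirm by simulation. A Monte Carlo sketch in Python/numpy (the sample size is my choice; with rate $2$, only a fraction $e^{-6}$ of draws exceed $3$, hence the large sample):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=0.5, size=5_000_000)  # rate 2, so scale = 1/2
tail = y[y > 3]
print(y.var())      # ~0.25 = 1/lambda^2
print(tail.var())   # ~0.25 as well: Var(Y | Y > 3) = Var(Y)
print(tail.mean())  # ~3.5 = 3 + E[Y], matching E[Y | Y > a] = a + E[Y]
```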
H: calculate $\lim_{t\rightarrow1^+}\frac{\sin(\pi t)}{\sqrt{1+\cos(\pi t)}}$ How to calculate $$\lim_{t\rightarrow1^+}\frac{\sin(\pi t)}{\sqrt{1+\cos(\pi t)}}$$? I've tried to use L'Hospital, but then I'll get $$\lim_{t\rightarrow1^+}\frac{\pi\cos(\pi t)}{\frac{-\pi\sin(\pi t)}{2\sqrt{1+\cos(\pi t)}}}=\lim_{t\rightarrow1^+}\frac{2\pi\cos(\pi t)\sqrt{1+\cos(\pi t)}}{-\pi\sin(\pi t)}$$ and this doesn't get me further. Any ideas? AI: $$\lim_{t\rightarrow1^+}\frac{\sin(\pi t)}{\sqrt{1+\cos(\pi t)}}=\lim_{t\rightarrow1^+}\frac{\sin(\pi t)}{\sqrt{1+\cos(\pi t)}}\frac{\sqrt{1-\cos(\pi t)}}{\sqrt{1-\cos(\pi t)}}=\lim_{t\rightarrow1^+}\frac{\sin(\pi t)\sqrt{1-\cos(\pi t)}}{\sqrt{\sin^2(\pi t)}}$$ P.S. Pay attention to the sign of $\sin(\pi t)$: for $t\to1^+$ we have $\sin(\pi t)<0$, so $\sin(\pi t)/\sqrt{\sin^2(\pi t)}=-1$ and the limit equals $-\sqrt{1-\cos\pi}=-\sqrt{2}$.
H: Elementary theory of algebraic structures Could someone explain to me what the sentence "The elementary theory of finite fields is decidable" means? For example, I'm not sure whether, if I take $x\in \mathbb{F}_4$ and $y\in \mathbb{F}_5$, I can form a first order predicate logic sentence that contains both $x$ and $y$. And what kind of inputs can we give to a Turing machine if we have elements from many fields? AI: This means that there is a Turing machine which decides whether or not a statement in the language of fields is provable from the theory of finite fields. If we consider the theory as closed under consequences, then it is the same as asking whether or not the statement is in the theory. The Turing machine takes a statement in the language, not a statement about models of the theory. See also Wikipedia. Note that theories live in the syntax, whereas $\Bbb F_4$ and $\Bbb F_5$ are semantics. You can't write a statement about different structures in first-order logic.
H: "Translating" one value $- \infty$ to $+ \infty$ to another ($+ \infty$ to $ \gt 0$) Well even if I need to use the following in a computer game this is a math question. I have a world map which I can scroll with a scroll velocity $(f(x))$ with my mouse. And I have a zoom factor $(x)$ which is $0$ in its initial state and can be positive infinite and negative infinite. So I'm searching for a function where: $x$ can range from $- \infty$ to $+ \infty$ $f(x)$ should be $\gt 0$ $f(x)$ should tend to $0$ for $x$ tending to $+ \infty$ $f(x)$ should tend to $+ \infty$ for $x$ tending to $- \infty$ I bet the solution is somewhat so obviously easy I'm just to blind to see in the moment. AI: By request, try $f(x)=e^{-x}$. As @Andy points out, there are variations and tweaks one can make to the curve, such as $f(x)=e^{-Ax+B}$, where $A>0$ and $B\in \mathbb{R}$, which will give it different shape and move it around.
H: What can I do with this cos term to remove the divide by 0? I was asked to help someone with this problem, and I don't really know why, but I thought I'd still try. $$\lim_{t \to 10} \frac{t^2 - 100}{t+1} \cos\left( \frac{1}{10-t} \right)+ 100$$ The problem lies with the cos term. What can I do with the cos term to remove the divide by 0? I found the answer to be $100$ (Google), but I do not know what they did to the $\cos$ term. Is that even the answer? Thanks! AI: The cos term is irrelevant. It can only wiggle between $-1$ and $1$, and is therefore killed by the factor $\frac{t^2-100}{t+1}$, which approaches $0$ as $t\to 10$. For a less cluttered version of the same phenomenon, consider the function $f(x)=x\sin\left(\frac{1}{x}\right)$ (for $x\ne 0$). The absolute value of this is always $\le |x|$, so (by Squeezing) $f(x)\to 0$ as $x\to 0$.
H: Generating functions combinatorical problem In how many ways can you choose $10$ balls, of a pile of balls containing $10$ identical blue balls, $5$ identical green balls and $5$ identical red balls? My solution (not sure if correct, would like to have input): Define generated function: $$\begin{align} A(x) & =(x^0+x^1+x^2+...+x^{10})(x^0+x^1+x^2+...+x^5)(x^0+x^1+x^2+...+x^5) \\ & ={1-x^{11} \over 1-x}\cdot \left({1-x^6 \over 1-x}\right)^2 \\ & =(1-x^{11})(1-2x^6+x^{12}) \cdot {1 \over (1-x)^3} \\ & =(1-2x^6-x^{11}+x^{12}+2x^{17}-x^{23})\cdot \sum_{n=0}^∞ {n+2 \choose 2} \cdot x^n. \end{align}$$ We look for the coefficient of $x^{10}$, so we get: $$a_{10}=1 \cdot {10+2 \choose 2}-2\cdot {4+2 \choose 2}=36.$$ This seems incorrect (sadly I'm terrible in 'ordinary combinatorics' so I'm not sure how to calculate this 'regularly'). I would love to get input and hints. Thanks in advance. AI: Here’s one elementary approach. As you see, it confirms your result. Let $b,g$, and $r$ be the numbers of blue, green, and red balls chosen to make up a set of $10$ balls. You’re looking for the number of solutions in non-negative integers to the equation $$b+g+r=10\;,\tag{1}$$ subject to the condition that $g\le 5$ and $r\le 5$. (You also have to have $b\le 10$, but that imposes no additional constraint when the sum is to be $10$.) Without the upper bounds this is a standard stars-and-bars problem whose solution is $$\binom{10+3-1}{3-1}=\binom{12}2\;.$$ However, this count includes solutions with too many green or red balls. Let $g'=g-6$; then there is a bijection between solutions to $$b+g'+r=4\tag{2}$$ in non-negative integers and solutions to $(1)$ for which $g>5$. Thus, we need only count solutions to $(2)$ to get the number of solutions to $(1)$ with $g>5$. This is another stars-and-bars problem, and the answer is $$\binom{4+3-1}{3-1}=\binom62\;.$$ Similarly, there are $\dbinom62$ solutions to $(1)$ that have $r>5$. There are no solutions that exceed the upper limits on both $g$ and $r$, so the number of solutions to $(1)$ that satisfy all of the conditions is $$\binom{12}2-2\binom62=36\;.$$
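If you want to double-check the coefficient extraction, sympy will expand the generating function directly. A short sketch mirroring the computation above:

```python
from sympy import symbols, expand

x = symbols('x')
A = expand(sum(x**i for i in range(11))     # blue balls: 0..10
           * sum(x**i for i in range(6))    # green balls: 0..5
           * sum(x**i for i in range(6)))   # red balls: 0..5
print(A.coeff(x, 10))                       # 36
```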
H: What exactly is the analytic continuation of the product log function? When I solve an equation like $xe^x=z$ in Wolfram|Alpha, it gives the solution $x=W_n(z)$. I know about $x=W_0(z)$ and $x=W_{-1}(z)$, but I searched the internet and couldn't find an expression for general $x=W_n(z)$. Please can anyone explain $W_n(z)$ and give an example of how to find $W_n(t)$ for $t\in \mathbb C$ or $t\in \mathbb R$? AI: All that, and much more, is here: Corless, R.M.; Gonnet, G.H.; Hare, D.E.G.; Jeffrey, D.J.; and Knuth, D.E. "On the Lambert W Function." Advances in Computational Mathematics, Vol. 5 (1996): 329-359.
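Numerically, the branches $W_n$ are available in standard libraries; for instance mpmath's lambertw takes the branch index as a second argument. A small sketch (the test point $z$ is arbitrary) that also checks the defining relation $xe^x=z$ on each branch:

```python
from mpmath import lambertw, exp, mpc

z = mpc(1, 1)                    # arbitrary complex test point
for n in [-2, -1, 0, 1, 2]:
    x = lambertw(z, n)           # branch W_n
    print(n, x, x * exp(x))      # the last column should reproduce z
```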
H: Absolute value of an element in a C*-algebra Is the absolute value of a partial isometry a partial isometry itself? AI: We have, as square roots are self-adjoint $\def\abs#1{\left|#1\right|}$ \begin{align*} \abs v^*\abs v &= [(v^*v)^{1/2}]^*(v^*v)^{1/2}\\ &= (v^*v)^{1/2}(v^*v)^{1/2}\\ &= v^*v \end{align*} Now recall that an element $w$ is a partial isometry precisely when $w^*w$ is a projection. If $v$ is a partial isometry, then $v^*v$ is a projection, and the computation above shows $\abs v^*\abs v = v^*v$ is a projection too; hence $\abs v$ is a partial isometry. (In fact $\abs v = (v^*v)^{1/2} = v^*v$ here, since a projection is its own positive square root.)
H: If $T:X \to Y$ is a linear homeomorphism, is its adjoint $T^*$ a linear homeomorphism? $X$ and $Y$ denote Hilbert spaces. If $T:X \to Y$ is a linear homeomorphism, is its adjoint $T^*$ a linear homeomorphism? Homeomorphism means continuous map with continuous inverse. I think the answer is yes; the only thing I am unable to show is what the inverse of $T^*$ is and whether it is continuous. AI: Let $S$ be inverse to $T$. Then $ST=1_X$ and $TS=1_Y$. Applying the $^*$ operation to these equalities gives $T^*S^*=1_{X^*}$ and $S^*T^*=1_{Y^*}$. This means that $T^*$ is invertible and, what is more, $(T^*)^{-1}=S^*$. In this proof I implicitly assumed that $T^*\in \mathcal{B}(Y^*,X^*)$ whenever $T\in\mathcal{B}(X,Y)$. This is indeed true. To begin, recall one of the corollaries of the Hahn-Banach theorem $$ \Vert g\Vert=\sup\{|g(y)|:y\in\operatorname{Ball}_{Y}\} $$ where $g\in Y^*$. Then $$ \begin{align} \Vert T^*\Vert &=\sup\{\Vert T^*(g)\Vert: g\in\operatorname{Ball}_{Y^*}\}\\ &=\sup\{| T^*(g)(x)|: g\in\operatorname{Ball}_{Y^*}, x\in\operatorname{Ball}_{X}\}\\ &=\sup\{| g(T(x))|: g\in\operatorname{Ball}_{Y^*}, x\in\operatorname{Ball}_{X}\}\\ &=\sup\{\Vert T(x)\Vert: x\in\operatorname{Ball}_{X}\}\\ &=\Vert T\Vert \end{align} $$
H: Possible description of closed subset of a projective variety Let $k$ be an algebraically closed field. Let $P\subset \mathbb{P}^n(k)$ be a projective variety, and $X\subset P$ be a subset. Suppose that $X_i = X\cap U_i$ is an affine closed subset for every affine open $U_i$ of $\mathbb{P}^n$, i.e. that for every $i$ we have a polynomial $\;f_i \in A(U_i \cap P)$ such that $X_i = \mathcal{Z}(f_i)$. Is it always true that, then, $X$ is a closed subspace of $P$ ? Or do we need some extra conditions? AI: Thanks to Jared for his suggestion! Below there's a possible easy proof of the following fact (notice there's a slight difference: in my claim $U_i\cap X$ is closed in $U_i$, not in $X$. This is what I need to answer the above question) : If $P$ is a topological space, $X\subset P$, and ${U_i}$ is an open cover of $P$ such that $U_i\cap X$ is closed in $U_i$ for all $i$, then $X$ is closed in $P$. Proof: Since $X_i = U_i\cap X$ is closed in $U_i$, we have that $U_i \setminus X_i$ is open for every $i$. Then we find that $$ P\setminus X = \bigcup_i \left( U_i \setminus (\cup_j X_j)\right) = \bigcup_i U_i \setminus X_i $$ is open being a union of the open sets $U_i \setminus X_i$. Therefore $X$ is closed in $P$. Hence the answer to the above question is: yes, under those assumptions $X\subset P$ is a closed set.
H: All closed balls are compact, each isometry is bijective Let $(X,d)$ be a metric space in which all closed balls are compact and such that for any two points $x,y \in X$ there exists a function $u \in Iso(X,d)$ such that $u(x)=y$. Prove that then each isometry $u: X \rightarrow X$ is bijective. How can I prove this? I know that if $X$ is compact, then each isometry is bijective. Could you help me with that? AI: Let $u$ be an isometry of $X$, and $x$ some point of $X$. It is enough to show that $u$ is surjective (isometries are automatically injective). Taking $u'$ an isometry such that $u'(u(x)) = x$, given by your hypothesis, then $u' \circ u$ is an isometry that fixes $x$. Therefore, it stabilizes the closed balls centered at $x$, and you can show that $u' \circ u$ is surjective because those balls are compact: an isometric map from a compact metric space into itself is automatically surjective. The surjectivity of $u$ follows directly: if $y \in X$, then $\exists x \in X$ such that $(u' \circ u) (x) = u'(y)$ by surjectivity of $u' \circ u$. Then $u(x) = y$ by injectivity of $u'$. EDIT: If you remove one of the hypotheses, the conclusion fails: Take $\mathbb{N}$ with the classical metric. Then $n \mapsto n+1$ is an isometry which isn't surjective. Its closed balls are finite and therefore compact; but the isometries group of $\mathbb{N}$ is trivial, so its action can't be transitive. And in $l^2$, take the shift operator $e_i \mapsto e_{i+1}$. It is not surjective either. The action of its isometries group is transitive because it contains the translations; but it is known that its closed balls aren't compact.
H: How to define this pattern as $f(n)$ Given a binary table with n bits as follows: $$\begin{array}{cccc|l} 2^{n-1}...&2^2&2^1&2^0&row\\ \hline \\ &0&0&0&1 \\ &0&0&1&2 \\ &0&1&0&3 \\ &0&1&1&4 \\ &1&0&0&5 \\ &1&0&1&6 \\ &1&1&0&7 \\ &1&1&1&8 \end{array} $$ If I replace each $0$ with a $1$ and each $1$ with $g(n)$ as follows $$\begin{array}{cccc|l} 2^{n-1}...&2^2&2^1&2^0&row\\ \hline \\ &1&1&1&1 \\ &1&1&g(0)&2 \\ &1&g(1)&1&3 \\ &1&g(1)&g(0)&4 \\ &g(2)&1&1&5 \\ &g(2)&1&g(0)&6 \\ &g(2)&g(1)&1&7 \\ &g(2)&g(1)&g(0)&8 \end{array} $$ I want to define a $$f(n) = \sum_{m=1}^{2^n}\prod_{i=1}^n row_mcol_i$$ I'm not sure if I wrote the above equation correctly but in words it is the following, for each row multiply the columns values together and add it to the result of the previous row. e.g from the table above we would have the following: $$f(3) = 1 + g(0) + g(1) + g(0)g(1) + g(2) + g(2)g(0) + g(2)g(1) + g(2)g(1)g(0)$$ Any help in generalising the equation for $f(n)$ would be much appreciated. The tables above are merely to help illustrate the pattern, if possible I would prefer not to reference the rows and columns of some matrix. Thanks in advance. AI: This is just $$(1+g(0))(1+g(1))(1+g(2))\dots(1+g(n-1))$$ This could be written as: $$\prod_{k=0}^{n-1} (1+g(k))$$ Your expression could be written by defining $G(m,k)$ as $1$ if $m$ has zero in the $k$th bit and $g(k)$ otherwise. Then you want: $$\sum_{m=0}^{2^n-1}\prod_{k=0}^{n-1} G(m,k)$$ But this simplifies to my above expression. An alternative formulation is to define $[n]=\{0,1,\dots,n-1\}$ and then you can write your formula as: $$\sum_{S\subseteq [n]} \prod_{k\in S} g(k)$$ This uses the fact that the numbers from $0$ to $2^n-1$ encode the subsets of $[n]$.
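The identity is easy to test by comparing the sum over subsets with the factored product. A small Python sketch (the choice of $g$ is an arbitrary test function):

```python
from itertools import combinations
from math import prod

def f_by_subsets(n, g):
    # sum over all subsets S of {0,...,n-1} of the product of g over S
    return sum(prod(g(k) for k in S)
               for r in range(n + 1)
               for S in combinations(range(n), r))

g = lambda k: 2 * k + 1     # arbitrary test function
n = 5
print(f_by_subsets(n, g))                 # sum over all subsets
print(prod(1 + g(k) for k in range(n)))   # the same number, factored
```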
H: REVISITED$^1$: Circumstantial Proof: $P\implies Q \overset{?}{\implies} Q\implies P$ To prove that if a matrix $A\in M_{n\times n} ( F )$ has $n$ distinct eigenvalues, then $A$ is diagonalizable, is it enough to show that the converse holds? That is, that if $A$ is diagonalizable, then $A$ has distinct eigenvalues? Please don't answer the actual question. I want to do it myself. I just want to know if it suffices to show the converse in this particular case. EDIT$^1$: If $A$ is similar to a diagonalizable matrix, then $A$ is diagonalizable, right? AI: No. It would only prove the converse. $$\underbrace{P \implies Q}_{\text {implication}} \quad\not\equiv \underbrace{Q \implies P}_{\text{converse of implication}}$$ If you need to prove $P \implies Q$, you can prove its equivalent: $$\underbrace{\lnot Q \implies \lnot P}_{\text{contrapositive of implication}}$$
H: Complex number question For any complex numbers $z_1, z_2$, is the quantity $S$: $$ S = 4 \left(| z_1|^6 + |z_2 |^6\right ) + 4 |z_1|^3 |z_2 |^3 + \left(2 |z_1|^2\times \overline{z_1}^2\times z_2^2\right) + \left(2 |z_2|^2\times \overline{z_2}^2\times z_1^2 \right)$$ always real and nonnegative? Here the overline means the complex conjugate. AI: No. The only part we need to check is $2\left|z_1\right|^2\,\bar{z}_1^2\,z_2^2 + 2\left|z_2\right|^2\,\bar{z}_2^2\,z_1^2$. Its two summands have opposite arguments (polar angles), namely $\pm2(\arg z_2-\arg z_1)$, but different moduli (radii) whenever $\left|z_1\right|\neq\left|z_2\right|$. Therefore the imaginary part of $S$ is nonzero whenever $\left|z_1\right|\neq\left|z_2\right|$ and $2(\arg z_2-\arg z_1)$ is not an integer multiple of $\pi$. For instance, $z_1=1$ and $z_2=2e^{i\pi/4}$ give $S=292-24i$.
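A direct numerical check in Python (the test points are arbitrary; the second and third illustrate the two ways the imaginary part can vanish):

```python
from cmath import exp, pi

def S(z1, z2):
    return (4 * (abs(z1)**6 + abs(z2)**6) + 4 * abs(z1)**3 * abs(z2)**3
            + 2 * abs(z1)**2 * z1.conjugate()**2 * z2**2
            + 2 * abs(z2)**2 * z2.conjugate()**2 * z1**2)

print(S(1, 2 * exp(1j * pi / 4)))   # (292-24j): not real
print(S(1, 2))                      # real: aligned arguments
print(S(1j, 1))                     # real: |z1| = |z2|
```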
H: Polynomial Equations for the Rank of a Power of a Matrix If I have some $n \times n$ matrix $X$ (in my case, I happen to know that X is nilpotent and in Jordan normal form), how can I write the condition that $\text{rank}(X^r)= k$ as a polynomial equation (to represent a set of matrices as a variety) in terms of the entries of the matrix? I would be interested in a Sage program that does this, or writing my own program. EDIT: I know that $\text{rank}(X) \leq k$ if and only if the determinant of every $(k+1) \times (k+1)$ minor of $X$ is zero, but I'm not sure how to get a set of polynomial equations which will give me exact equality. AI: The set of matrices with rank exactly $k$ is the open subset of those of rank at most $k$ defined by the non-vanishing of at least one $k$ by $k$ minor---thus, it is not naturally realized as a closed subvariety of the space of all matrices. You can use the usual trick to identify this with an affine variety if you wish: the point is that the set of non-zero elements $x$ of a field $F$ is isomorphic to the affine variety $xy=1$ in $F^2$.
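As a sketch of how one might generate these equations programmatically (my own naming, and sympy instead of Sage, though the idea transfers): the polynomials below cut out $\text{rank}(X^r)\le k$, and rank exactly $k$ is then the locally closed set where additionally some $k\times k$ minor is nonzero, as described in the answer.

```python
from itertools import combinations
from sympy import Matrix, MatrixSymbol

def minor_equations(n, r, k):
    X = Matrix(MatrixSymbol('x', n, n))   # generic matrix of symbols x[i, j]
    P = X**r
    # all (k+1) x (k+1) minors of X^r; their common vanishing is rank <= k
    return [P[list(rows), list(cols)].det()
            for rows in combinations(range(n), k + 1)
            for cols in combinations(range(n), k + 1)]

eqs = minor_equations(3, 2, 1)   # equations for rank(X^2) <= 1, X a 3x3 matrix
print(len(eqs))                  # 9, one per 2x2 minor of X^2
```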
H: showing that the Euler's number is irrational Our teacher wants us to do the following: Suppose that e is rational i. e $e=\frac{a}{b}$ where $a,b\in\mathbb{N}$. Choose $n\in\mathbb{N}$ such that $n>b$ and $n>3$. Use the following inequality $0<e-(1+\frac{1}{1!}+\frac{1}{2!}+....+\frac{1}{n!})<\frac{3}{(n+1)!}$ and replace e with $\frac{a}{b}$. Then multiply both sides of the inequality with $n!$ to see that the inequality leads to existence of an integer N that satisfies $0<N<\frac{3}{4}$ which leads to a contradiction. I have tried to to it but I am not getting there... I choose $n=4$ and $b=3$ and $0<\frac{a}{b}-1-1-\frac{1}{2}-\frac{1}{6}-\frac{1}{24}<\frac{3}{5!}$ $0<8a-\frac{17}{24}<\frac{3}{5}$ $0<8a<\frac{157}{120}$ $0<a<\frac{157}{960}$..... How should I do it??? AI: You’re not supposed to pick a specific value of $n$, like $4$; you’re supposed to look at arbitrary integers $n>\max\{b,3\}$. Suppose that $n>b$ and $n>3$. Multiplying the given inequality by $n!$, you get $$0<en!-n!\left(1+\frac1{1!}+\frac1{2!}+\ldots+\frac1{n!}\right)<\frac{3n!}{(n+1)!}\;,$$ and if you now replace $e$ in this inequality by $\frac{a}b$, you get $$0<\frac{an!}b-n!\left(1+\frac1{1!}+\frac1{2!}+\ldots+\frac1{n!}\right)<\frac{3n!}{(n+1)!}\;.$$ Now multiply out and do some simplifying: $$0<\frac{an!}b-\left(n!+\frac{n!}{1!}+\frac{n!}{2!}+\ldots+\frac{n!}{n!}\right)<\frac3{n+1}\;.$$ Use the fact that $n>b$ to explain why $\frac{an!}b$ is an integer. All of the fractions $\frac{n!}{k!}$ with $1\le k\le n$ are integers (why?), so the whole middle expression is an integer lying strictly between $0$ and $\frac3{n+1}$. But $n>3$, so this is impossible; why?
H: Find generator of principal ideal The ideal $(9, 2 + 2\sqrt{10})$ of $\mathbb{Z}[\sqrt{10}]$ is a principal ideal; it is generated by $1+\sqrt{10}$. This is easy enough to check once it's been found, but can anyone tell me some way to arrive at this (or some other) generator? That could be done with pencil & paper? Ordinarily I would use the Euclidean algorithm, but we don't have that here... AI: We have the norm (even though it is not always positive) obtained by multiplying $a+b\sqrt {10}$ with its conjugate $a-b\sqrt {10}$, that is $N(a+b\sqrt{10})=a^2-10b^2\in\mathbb Z$. So we have $N(9)=81$ and $N(2+2\sqrt{10})=-36$. We can restrict our search to elements of norm dividing both $81$ and $36$, that is we must have $N(a+b\sqrt{10})\in\{\pm1,\pm3,\pm9\}$. Looking for small solutions of this you will stumble upon $1+\sqrt {10}$, which is both a divisor of $9$ and $2+2\sqrt{10}$ and a linear combination of these, namely $(1+\sqrt {10})\cdot 9-4\cdot (2+2\sqrt{10})$.
H: Unsure of attempt to determine $\dim(W_1+W_2)$ and $\dim(W_1 \cap W_2)$ Let $V=\mathbb{R}^4$. $W_1$ is a subspace of $V$ spanned by vectors $a_1=(1, 2, 0, 1)$ and $a_2=(1,1,1,0)$. $W_2$ is a subspace of $V$ spanned by vectors $b_1=(1,0,1,0)$ and $b_2=(1,3,0,1)$. Determine $\dim(W_1+W_2)$ and $\dim(W_1 \cap W_2)$. Attempt The vectors $a_1$ and $a_2$ are linearly independent and span $W_1$, so they form a basis for $W_1$. Hence $\dim(W_1)=2$. The vectors $b_1$ and $b_2$ are linearly independent and span $W_2$, so they form a basis for $W_2$. Hence $\dim(W_2)=2$. This is the step that I'm unsure of. I added vectors from $W_1$ and $W_2$ together, i.e. $a_1 + b_1, a_1 + b_2, a_2 + b_1, a_2 + b_2$ to form $$W_1+W_2=\left\{(2,2,1,1),(2,5,0,2),(2,1,2,0),(2,4,1,1) \right\}.$$ The first three vectors are linearly independent, but $$(2,4,1,1)= (2,5,0,2)+(2,1,2,0)-(2,2,1,1).$$ Hence there are only $3$ vectors that are linearly independent and span $W_1+W_2$, so $\dim(W_1+W_2)=3$. We know that $\dim(W_1+W_2) = \dim(W_1) + \dim (W_2) - \dim(W_1 \cap W_2)$. Therefore $\dim(W_1 \cap W_2)=2+2-3=1$. Question Is my attempt correct? Thank you for your time. AI: Your attempt is correct, but note that your understanding of $W_1 + W_2$ might be wrong. By definition, $W_1 + W_2 = \{\vec{w}_1 + \vec{w}_2 | \vec{w}_i \in W_i, i=1, 2\}.$ However, This doesn't mean that the bases have to be added together. In other words, you can simply take $\{a_1,a_2,b_1,b_2\}$ as a spanning set for the space $W_1 + W_2$ and then toss out any linearly dependent vectors. The easiest way to do this is to row reduce the matrix whose columns are the corresponding basis vectors. For your example, the matrix will row reduce to (why? -- try it!) $$\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0& 0 & 1 & 0 \\ 0 & 0 & 0 & 0\end{pmatrix}.$$ This tells you that you have three linearly independent vectors and so the dimension of the $W_1 + W_2$ is 3. Edit: another related question that is useful to ask yourself: if $a_1, a_2, a_3, a_4$ are linearly independent, are $a_1 + a_2, a_1 - a_2, a_3 + a_4, a_3 - a_4$? How would you check? (hint: checking independence can always be reduced to row reducing a matrix)
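The row-reduction checks can also be delegated to numpy's matrix rank, which reproduces all the dimensions at once (a quick sketch):

```python
import numpy as np

a1, a2 = [1, 2, 0, 1], [1, 1, 1, 0]
b1, b2 = [1, 0, 1, 0], [1, 3, 0, 1]

print(np.linalg.matrix_rank(np.array([a1, a2])))           # dim W1 = 2
print(np.linalg.matrix_rank(np.array([b1, b2])))           # dim W2 = 2
print(np.linalg.matrix_rank(np.array([a1, a2, b1, b2])))   # dim(W1+W2) = 3
# hence dim(W1 ∩ W2) = 2 + 2 - 3 = 1
```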
H: What is the relationship between "recursive" or "recursively enumerable" sets and the concept of recursion? I understand that "recursive" sets are those that can be completely decided by an algorithm, while "recursively enumerable" sets can be listed by an algorithm (but not necessarily decided). I am curious why the word "recursive" appears in their name. What does the concept of decidability/recognizability have to do with functions that call themselves? AI: There's a history here. In thumbnail version, in the 1930s various attempts were made to formally characterize the intuitively computable numerical functions, and relatedly the effectively decidable sets of numbers (i.e. those whose membership can be decided by a computable function). Thus we encounter, as formal accounts of computability, Church's idea of $\lambda$-computability, Turing computability, Herbrand-Gödel computability, and Gödel and Kleene's $\mu$-recursiveness (and more!). Of these, it is in the latter formal definition of computability where the notion of recursion in the sense of a function calling itself centrally features. Now as a matter of technical fact, the $\lambda$-computable functions, the Turing-computable functions, the Herbrand-Gödel computable functions, and the ($\mu$)-recursive functions turn out to be the same class of functions. And for various reasons the preferred term for this class of functions became "recursive". The technical fact that all these attempts (and other later ones) to characterize the intuitively computable functions converge on the recursive functions leads to the Church-Turing thesis that the computable functions in the intuitive sense just are these recursive functions (and the decidable sets of numbers are the recursive sets, i.e. those with a recursive characteristic function). I wouldn't myself say that "computable" means "recursive" (nor would I recommend a linguistic reform to this effect). Rather I'd put it like this: it is a discovery that the intuitive notion of an algorithmically computable function (dressed up a bit) picks out the class of recursive functions.
H: Solving the equation $\dfrac{(1+x)^{36} -1}{x} =20142.9/420$ for $x$. How would one solve for x in the following equation: $\dfrac{(1+x)^{36} -1}{x} =20142.9/420$ I tried factorising the top but that didnt really help much. $((1+x)^{18} - 1)((1+x)^{18}+1)$ Any help is appreciated thanks. AI: This is a standard example of an equation that does not have an explicit formula for its root. It can only be solved numerically. The important thing is to get a good first approximation to the root. For example, if you want to solve $\frac{(1+x)^n-1}{x} = a$, and you think that there might be a root close to $0$, you can approximate $(1+x)^n$ by $1+nx+n(n-1)x^2/2$ (the first terms of the binomial theorem) to get $n + n(n-1)x/2 = a$ or $x = 2(a-n)/(n(n-1))$. Then apply Newton's iteration.
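Concretely, the suggested scheme is a few lines of Python: start from the quadratic first approximation and run Newton's iteration (plain Python, derivative computed by hand; the tolerance and iteration cap are my choices):

```python
a, n = 20142.9 / 420, 36        # a ≈ 47.959

def f(x):
    return ((1 + x) ** n - 1) / x - a

def fprime(x):
    return (n * x * (1 + x) ** (n - 1) - ((1 + x) ** n - 1)) / x ** 2

x = 2 * (a - n) / (n * (n - 1))  # first approximation from the answer
for _ in range(30):
    step = f(x) / fprime(x)
    x -= step
    if abs(step) < 1e-15:
        break
print(x, f(x))                   # the root, with residual ≈ 0
```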
H: Compact metric space group $\operatorname{Iso}(X,d)$ is also compact Could you tell me how to prove that if metric space $(X,d)$ is compact, then the group $\operatorname{Iso}(X,d)$ is also compact? The group $\operatorname{Iso}(X,d)$ is considered with topology determined by a metric $\rho$ on $\operatorname{Iso}(X,d)$ such that $\lim _{n \rightarrow \infty} \rho(h_n, h) =0 \iff \forall x \in X: \lim _{n \rightarrow \infty} d(h_n(x), h(x))=0$ AI: Without the precise definition of the metric $\rho$, I'm not sure how to answer this definitively, but let me outline an approach that you might be able to take. For this approach, I take as a hidden assumption that the Principle of Dependent Choices holds, so to show that $Y:=\text{Iso}(X,d)$ is compact (considered as a metric space with metric $\rho$), it suffices to show that it is complete and totally bounded. To show that $\langle Y,\rho\rangle$ is a complete metric space, suppose that we have a $\rho$-Cauchy sequence of functions $h_n\in Y.$ Show that for each $x\in X,$ we have that $h_n(x)$ is a $d$-Cauchy sequence of points of $X$. Since $X$ is a compact space under the $d$-metric topology, then it is complete, so $h_n(x)$ converges in $X$. Define $h:X\to X$ by $h(x)=\lim_{n\to\infty}h_n(x).$ Show that $h\in Y,$ and that $\lim_{n\to\infty}\rho(h_n,h)=0,$ so $\langle Y,\rho\rangle$ is a complete metric space. To show that $\langle Y,\rho\rangle$ is a totally bounded metric space, we must show that for all $\epsilon>0$ there exist $f_1,...,f_n\in Y$ such that $Y$ is covered by the open $\rho$-balls about $f_1,...,f_n$ of radius $\epsilon$. Without a definition for $\rho,$ I'm afraid I have no suggestions for how to proceed with this. Edit: If julien is correct in his comment above, that you're simply considering $Y$ in the topology of pointwise convergence (which, in retrospect, it seems that you are), then I'm not certain that it's necessarily induced by a metric. Fortunately, we can proceed without worrying about such a metric. Let's proceed as follows, instead. Denote the set of all functions $X\to X$ by $X^X$. Given a point $x\in X$ and an open set $U$ in $X,$ we define $$P(x,U):=\{f\in X^X:f(x)\in U\},$$ and let $\mathcal B$ be the set of all sets of the form $$P(x_1,B_1)\cap\cdots\cap P(x_k,B_k)$$ for some positive integer $k$, with the $x_j\in X$ and the $B_j$ open balls in $X$. You can show that $\mathcal B$ is a basis for a topology on $X^X$, say $\mathcal T$, which is the set of unions of $\mathcal B$-sets. We call $\mathcal T$ the topology of pointwise convergence, because of the following Lemma: If $h_n$ is a sequence of functions $X\to X$ and $h:X\to X$, then $h_n\to h$ with respect to the topology $\mathcal T$ if and only if $h_n(x)\to h(x)$ for each $x\in X$. Proof: Suppose $h_n(x)\to h(x)$ for all $x\in X$, and take any $\mathcal B$-basic neighborhood of $h$, which will be of the form $$P(x_1,B_1)\cap\cdots\cap P(x_k,B_k)$$ for some $x_j\in X$ and some open balls $B_j$ with $h(x_j)\in B_j$. Since $h_n(x_j)\to h(x_j)$, then there exists $n_j$ such that $h_n(x_j)\in B_j$ for $n\ge n_j$. Putting $N=\max\{n_1,...,n_k\}$, we therefore have for $n\ge N$ that $$h_n\in P(x_1,B_1)\cap\cdots\cap P(x_k,B_k).$$ Thus, $h_n\to h$ with respect to $\mathcal T$. On the other hand, suppose that $h_n\to h$ with respect to $\mathcal T$. Take any $x\in X$ and any open ball $B$ in $X$ such that $h(x)\in B$. 
Since $P(x,B)$ is a $\mathcal T$-neighborhood of $h$ and $h_n\to h$, then there is some $N$ such that $h_n\in P(x,B)$ for $n\ge N$, meaning $h_n(x)\in B$ for all $n\ge N$. Thus, $h_n(x)\to h(x)$ for all $x\in X$. $\Box$ Now, julien refers to Tychonoff Theorem (a very useful theorem that requires the Axiom of Choice to prove) in his comment above. We really only need the following special case: Theorem: $X^X$ is compact in the topology of pointwise convergence if (and only if) $X$ is compact. The "only if" part is fairly easy to prove, and if you're interested, I can prove the other direction (though it can be a bit of a bear to prove without just using another form of the Tychonoff Theorem). Hence, all that is left for you to do is to show that $Y$ is a closed subspace of $X^X$, from which the result follows.
H: deciding if a chain is a composition series (sanity check) A small sanity check related to Question 2 from here: proof of the Krull-Akizuki theorem (Matsumura) Let $C$ be an $A$-module, with $A$ a commutative ring, and suppose that there exists a chain of submodules $C=C_0 \supset C_1 \supset \cdots \supset C_m=0$ such that $C_i/C_{i+1} \cong A/m_i$ where $m_i$ is a maximal ideal of $A$. It seems to me that this implies that this chain is actually a composition series and so $C$ has finite length, since every quotient is a field and hence a simple $A$-module. Is this assertion correct? AI: The only thing that's wrong is the "$C_i/C_{i+1}$ is a field" part. That quotient is a simple $A$ module, yes, but it is not necessarily a field. It is module isomorphic to $A/M$, but not ring isomorphic. (Added: of course, the quotient can be given a field structure by identification with $A/M$, but it does not match the multiplication of the rng quotient $M/N$. Since the identification does not match, I discard it as a natural thing.) For example, you can take a commutative Artinian ring whose ideals are linearly ordered to see why the quotients aren't fields. If $M/N$ was a field for proper ideals $M$ and $N$, that would mean that the identity of $M/N$ lifts to be an identity of $M$, but unfortunately $M$ does not contain any idempotent elements besides 0.
H: Proof of Egoroff's Theorem Let $\{f_n \}$ be a sequence of measurable functions, $f_n \to f$ $\mu$-a.e. on a measurable set $E$, $\mu(E) < \infty$. Let $\epsilon>0$ be given. Then $\forall \space n \in \mathbb{N} \space \exists A_n \subset E$ with $\mu(A_n) <\frac{\epsilon}{2^n}$ and $\exists N_n$ such that $\forall \space x \notin A_n$ and $k \ge N_n \space |f_k(x) - f(x)| < \epsilon$. That is: if we define $A = \cup_{n=1}^{\infty} A_n$ with $\mu(A) < \epsilon $ then ${f_n}$ converges uniformly on $E \setminus A$. $\mathbf{Proof}$: (taken from Royden's Real Analysis) Let $A = \cup_{n=1}^{\infty} A_n \Rightarrow A \subset E$ and $\mu(A) < \sum_{n=1}^{\infty} \frac{\epsilon}{2^n} = \epsilon. \mathbf{Q1}$. choose $n_0$ such that $\frac{1}{n_0} < \epsilon$. If $x \notin A$ and $k \ge N_{n_0}$ we have $\space |f_k(x) - f(x)| < \frac{1}{n_0} < \epsilon \space$. $ \square$ $\mathbf{Q1}$: First off, I don't see how $\sum_{n=1}^{\infty} \frac{\epsilon}{2^n} = \epsilon$. It's a geometric series: $ \epsilon \sum_{n=1}^{\infty} \frac{1}{2^n} = \epsilon \frac{1}{1-\frac12} = 2 \epsilon$. Am I wrong? $\mathbf{Q2}$: The idea behind Egoroff is in order to turn almost sure convergence into uniform convergence on $E$ we only need to take away a really small set, right? Interestingly, as $\epsilon \to 0$ (that is $f_n$ is getting closer to $f$), the measure of the set $A$ is getting proportionally smaller ($\mu(A) \to 0$). So are we ultimately taking away a set of zero measure? AI: A1: The geometric series starts at $n=1$, so $\sum_{n=1}^{\infty}\frac{1}{2^n}=\frac{1/2}{1-1/2}=1$ and the bound is exactly $\epsilon$; your computation used the sum starting at $n=0$, which is where the extra factor $2$ came from. (Also note the statement should read $|f_k(x)-f(x)|<\frac1n$, not $<\epsilon$, for $x\notin A_n$ and $k\ge N_n$; that is what the proof uses when it invokes $\frac1{n_0}$.) A2: You are correct that for arbitrarily small $\epsilon$ there is a set $A$ with $\mu(A)<\epsilon$ outside of which the convergence is uniform. So the measure of $A$ can be arbitrarily small. However, also keep in mind that uniform convergence requires a finite $N$ such that given $\delta>0$, for all $k>N$, $|f_k - f|<\delta$. Imagine that as $\epsilon$ gets smaller and smaller, for a fixed $\delta$ this $N$ may get larger and larger. Then in the limit as $\epsilon\to 0$, $N \to\infty$ and uniform convergence would fail. My favorite text for Egoroff's theorem and related topics is Lieb and Loss's Analysis book.
H: Let $A$, $B$ be positive operators in a Hilbert space and $\langle Ax,x \rangle=\langle Bx,x \rangle$ for all $x$, show that $A=B$ Let $A$ and $B$ be positive operators in a Hilbert space $H$, and suppose that $\langle Ax,x\rangle=\langle Bx,x\rangle$ for every $x$ in $H$. Show that $A=B$. AI: If they are positive then they are selfadjoint, and each has a unique positive square root. From the fact that $\langle Ax,x \rangle = \langle Bx,x \rangle$ we get that $A - B$ is also positive (indeed $\langle (A-B)x,x\rangle = 0 \ge 0$ for all $x$), and $$0 = \langle Ax - Bx,x\rangle = \|\sqrt{A - B}x\|^2$$ Then $\sqrt{A - B} = 0$, and by uniqueness of the square root $A - B = 0$, proving the claim.
H: How to calculate large exponents by hand? How did they calculate large exponents by hand in ancient times? Does it have something to do with prosthaphaeresis? For example, calculate $2^{15}$. AI: Use logarithms perhaps? $$\log_{10} 2^{15} = 15\log_{10} 2 \sim 15\cdot0.3 =4.5$$ So that: $$2^{15} \sim 10^{4.5} = 10^4\sqrt{10} \sim3\cdot 10^4$$ which is only about 8% away from the true value $2^{15}=32768$ (keeping $\sqrt{10}\approx3.16$ gets you within 4%), and it can be done in your head as long as you remember that $\log_{10} 2 \sim 0.3$, which most engineers probably do.
H: What's wrong with this Kuhn-Tucker optimization? The function $u(x,y,z) = xyz$ is to be maximized, under constraints: $ 0 \le x \le 1, y \ge 2, z \ge 0 $ and $ 4 - x - y - z \ge 0 $ Now I'm not quite sure how to translate the x-constraint into proper inequalities, but my first shot would be $ h_1 = x \ge 0$ and $h_2 = 1-x \ge 0 $. So I set up the Lagrangian like: $$ L = xyz + \lambda_1 x + \lambda_2(1-x) + \lambda_3(y-2) + \lambda_4 z + \lambda_5(4-x-y-z) $$ ... resulting in FOC like: [... irrelevant FOC omitted ...] $$ \frac{\delta L}{\delta z} = xy + \lambda_4 - \lambda_5 = 0 $$ $$ \frac{\delta L}{\delta \lambda_4} = z \ge 0, \lambda_4 \ge 0, z \lambda_4 = 0$$ Now, as $u=xyz$, neither $x,y,z$ can be $0$ in maximum; thus, $\lambda_4 = 0$. Also, as $u$ is isotone in all $x,y,z$, the fifth constraint has to be effective in maximum, so $\lambda_5 = 0 $ either. So plugging into the first FOC from above, $xy = 0$ - so either $x$ or $y$ has to be $0$, which cannot be a maximum though! So am I making a mistake in the way I set up the Lagrangian here? In the derivatives? Or can't I apply this method here at all? If so, why not? Thanks for any hint! AI: Since the fourth constraint $z\geq0$ will not bind at the maximum, you are correct that $\lambda_4=0$ (by the complementary slackness condition). However, since the 5th constraint will bind, complementary slackness does not force $\lambda_5$ to vanish; we only know $\lambda_5\geq0$. You are left with $xy=\lambda_5$ rather than $xy=0$.
H: Mapping of a Lens-shaped region by a Möbius Transformation Consider the 'lens' described by $\{z:|z-i|<\sqrt{2}\ \text{and}\ |z+i|<\sqrt{2} \}$ . We want to map this to the upper right quadrant using a Möbius transformation. The two circles meet at $z=1,-1$ and so if we use the map $f:z \mapsto \frac{z-1}{z+1}$ we should send the lens to a sector centered on the origin. I have two questions: How do we find the angle swept out by the sector? As $f$ is conformal, it will be equal to the angle at which the two circles meet; but how is this calculated? And is there some general method for working this out? After finding this angle, how do we work out how the sector is rotated relative to say the real axis after applying $f$? Again, is there some general way of doing this also? After establishing this, the question becomes simple as we can use a power map to adjust to the correct angle and then a rotation to position the sector. After responses to 1. and 2. I think my approach is sound, though I'd be interested in hearing if it isn't also. AI: The centers $\pm i$ of the circles and the circle intersection points $\pm1$ form a square, so the circles intersect at right angles (each side of the square is a radius of one circle and tangent to the other). Therefore, a power map is not needed here. (The general method involves some arctan manipulations with these four points.) For the second question, the lens is symmetric with respect to the real axis and your transform leaves the real axis invariant, therefore the image is also symmetric with respect to the real axis. Since $f(0)=-1$, the image is the $90^\circ$ sector $135^\circ<\arg w<225^\circ$ about the negative real axis, so you should finish with a rotation by $-135^\circ$, i.e. multiplication by $e^{-3\pi i/4}$.
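To see the sector concretely, one can evaluate $f$ at the arc midpoints $\pm i(\sqrt 2 - 1)$ (where the boundary arcs cross the imaginary axis) and at the center $0$ of the lens. A small numerical sketch of mine:

    import cmath

    f = lambda z: (z - 1) / (z + 1)
    top = 1j * (2 ** 0.5 - 1)            # topmost lens point, on |z + i| = sqrt(2)
    for z in (top, 0, -top):
        print(cmath.phase(f(z)) * 180 / cmath.pi)
    # prints ~135, 180, -135: a 90-degree sector about the negative real axis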
H: Lipschitz condition normed vector space Am I right that $g: (C^{1}[0,2],\|\cdot\|_{C^1[0,2]}) \rightarrow \mathbb{R}$ with $g(f)=f'(1)$ satisfies a Lipschitz condition? Because $|f'(1)-h'(1)|=|g(f)-g(h)|\le \max |f-h|+ \max |f'-h'|$, and the right-hand side is exactly the $C^1$-norm of $f-h$. AI: You are right: $$|g(f) - g(h)| = |f'(1) - h'(1)| \le \max_{[0,2]}|f'(x) - h'(x)| \le$$ $$\le \max_{[0,2]}|f'(x) - h'(x)| + \max_{[0,2]}|f(x) - h(x)| = \|f - h\|_{C^1},$$ so $g$ is Lipschitz continuous with Lipschitz constant $L = 1$.
H: Why do injective holomorphic functions have nonzero derivative For some open sets $U$, $V$ in the complex plane, let $f:U\rightarrow V$ be an injective holomorphic function. Then $f'(z) \ne 0$ for $z \in U$. Now I don't understand the proof, but here it is from my text. My comments are in italics. Suppose $f'(z_0) = 0$ for some $z_0 \in U$. $f(z) - f(z_0) = a(z - z_0)^k + G(z)$ for all $z$ near $z_0$, with $a \ne 0, k \ge 2.$ Also, $G$ vanishes to order $k+1$ at $z_0$. I'm not clear on what this "vanishing" thing means. Maybe it means that $G$ can be expressed as a power series of order $k+1$ around $z_0$. For sufficiently small $w$ we can write $f(z) - f(z_0) - w = F(z) + G(z)$, where $F(z) = a(z - z_0)^k - w$. I'm not sure why we need to have $w$ small. This equation will work for any $w$. Since $|G(z)| \lt |F(z)|$ on a small circle centered at $z_0$, and $F$ has at least two zeros inside that circle, Rouche's theorem implies that $f(z) - f(z_0) - w$ has at least two zeros there. Now I think that $|G(z)| \lt |F(z)|$ can follow simply from the fact that $F$ is a polynomial of degree $k$ while $G$ has degree $k+1$. And the remark about the two zeros can follow from the fact that $F$ must have $k$ zeros in the complex plane. But the first part requires that we consider $z$ only on a small circle. The second part requires that our circle be big enough to capture two zeros. How do we know that we can satisfy both? Since $f'(z) \ne 0$ for all $z \ne z_0$ sufficiently close to $z_0$, the roots of $f(z) - f(z_0) - w$ are distinct, so $f$ is not injective - a contradiction. I think that the derivative is never zero for values of $z$ other than $z_0$ because otherwise we would have a sequence of zeros limiting towards $z_0$ which would cause our function to be constant which is a contradiction. But again we have the same problem - we can only consider a small circle. The roots of $f$ may lie outside this circle. AI: First comment: Yes. I prefer to write $(z-z_0)^k\cdot(a+(z-z_0)H(z))$ in such proofs. Second comment: $w$ small is not needed immediately, but we can only use a small circle (as guaranteed by openness of $U$) and want to have $(z-z_0)^k=w/a$ at least once (and hence $k$ times) inside that circle. This is what forces $w$ to be small. Third comment: We make our circle even smaller (and may revise our choice of $w$) in order to make $|G|<| F|$. By the vanishing order of $G$, we have $|G(z)|\le c\cdot |z-z_0|^{k+1}$ for some $c$ as long as $z\approx z_0$ (with my notation above, you can take any $c>|H(z_0)|$). Then, to get $|G(z)|<|F(z)|$ when $|z-z_0|=r$, it suffices to choose $r<\frac{|a|}{2c}$ and then $|w|<\frac{|a|}2 r^k$. The very simple polynomial $F$ has $k$ zeroes in the circle precisely because we choose our $w$ small enough (smaller than $|a|r^k$ if $r$ is the radius of our small circle). Fourth comment: The zeroes of the holomorphic function $f'$ are isolated unless $f'$ is (locally) zero; and if $f'$ vanishes on a small open disk, then $f$ is constant there and already far from injective.
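A tiny numerical illustration (my own, for the model case $f(z)=z^2$, $z_0=0$, $k=2$) of the local $k$-to-$1$ behaviour that drives the contradiction: every small nonzero $w$ has two distinct preimages in an arbitrarily small disc around $z_0$.

    import cmath

    w = 1e-4 * cmath.exp(0.7j)     # a small nonzero w
    r1 = cmath.sqrt(w)
    r2 = -r1                       # the two preimages of w under z -> z**2
    print(abs(r1), abs(r2))        # both ~0.01: inside a tiny disc around 0
    print(r1 * r1, r2 * r2)        # both equal w, so z**2 is not injective near 0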
H: If $F$ and $R$ are subspaces of vector space $E$, then $F \cap R \neq \varnothing$ I need to prove the following: if $F$ and $R$ are vector subspaces of a vector space $E$, then their intersection is nonempty, $F \cap R \neq \emptyset$. Thanks in advance! AI: Hint: What vector must every vector space (or subspace, which is in its own right a vector space) contain?
H: Show $60 \mid (a^4+59)$ if $\gcd(a,30)=1$ If $\gcd(a,30)=1$ then $60 \mid (a^4+59)$. If $\gcd(a,30)=1$ then we would be trying to show $a^4\equiv 1 \mod{60}$, i.e. $(a^2+1)(a+1)(a-1)\equiv 0 \mod{60}$. We know $a$ must be odd, so $(a+1)$ and $(a-1)$ are both even and we at least have a factor of $4$ in $a^4-1$. I was thinking I could maybe try to show that there is also a factor of $3$ and a factor of $5$, giving $a^4-1\equiv0 \mod{60}$. Another thought: since $\operatorname{ord}_n(a) \mid \phi(n)=\phi(60)=16$, it would suffice to show that $\operatorname{ord}_n(a) \in \{1,2,4 \}$. Any hints? I have the exam soon =/ AI: Continue as you started: As $3 \nmid a$, we have $a \equiv \pm 1 \pmod 3$, giving $$ a^4 - 1 \equiv 1 - 1 = 0 \pmod 3 $$ which means $3 \mid a^4 - 1$. For $5$, we have either $a \equiv \pm 1 \pmod 5$, giving $$ a^4 - 1 \equiv 1 - 1 \equiv 0 \pmod 5 $$ or $a \equiv \pm 2 \pmod 5$, giving $$ a^4 - 1 \equiv 16 - 1 = 15 \equiv 0 \pmod 5$$ and, combined with the factor of $4$ you already found, we are done.
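An exhaustive check over one full period (my own sketch; since $a^4 + 59 \bmod 60$ depends only on $a \bmod 60$, testing $a = 0,\dots,59$ suffices):

    from math import gcd

    # 60 | a**4 + 59 for every a coprime to 30
    assert all((a**4 + 59) % 60 == 0 for a in range(60) if gcd(a, 30) == 1)
    print("verified")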
H: Composition of Linear Rotations and Reflections Prove that if $T_{1}$ is a rotation of $R^{2}$ about O, and $T_{2}$ a reflection in a line through O, then $T_{1}\circ T_{2}$ and $T_{2}\circ T_{1}$ are both reflections in a line through O. I'd prefer a hint to an answer, because I'm not sure how to begin. Thanks in advance for any help. AI: There are 4 different types of (non-trivial) isometries of the plane: rotations, reflections, translations and glide reflections. Hint: The non-trivial isometries of the plane are classified according to the existence of fixed points, and whether they preserve orientation. Hint: Both $T_1 \circ T_2$ and $T_2 \circ T_1$ have/don't have fixed points, and preserve/don't preserve orientation, hence they must be $\underline{ \quad \quad \quad} $.
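A concrete matrix check of the claim (my own sketch, with the standard rotation and reflection matrices): both compositions come out symmetric with determinant $-1$ and eigenvalues $\pm1$, which is exactly the signature of a reflection in a line through $O$.

    import numpy as np

    def rot(t):   # rotation about O by angle t
        return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

    def refl(p):  # reflection in the line through O at angle p
        return np.array([[np.cos(2*p), np.sin(2*p)], [np.sin(2*p), -np.cos(2*p)]])

    t, p = 0.7, 0.3
    for M in (rot(t) @ refl(p), refl(p) @ rot(t)):
        print(round(np.linalg.det(M), 6), np.round(np.linalg.eigvalsh(M), 6))
    # -1.0 [-1. 1.] for both compositions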
H: Can a knight visit every field on a chessboard? I was doing exercises about graph theory and I came across a quite interesting exercise (which probably has something to do with Hamiltonian cycles): "Is it possible to step on every field of a 4x4 or 5x5 chessboard just once and return to the starting point using a knight?" Does anyone have any idea how to tackle this problem? I am more interested in an outline of how to do it, or just some hints. AI: HINT: $$\begin{array}{|c|c|c|c|} \hline \cdot&&&\\ \hline &&\cdot&\\ \hline &\cdot&&\\ \hline &&&\cdot\\ \hline \end{array}$$
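For the $5\times5$ board a colouring argument settles the question at once: a knight alternates square colours, so a closed tour must have even length, but the board has $25$ squares ($13$ of one colour, $12$ of the other). The $4\times4$ case can also be confirmed by brute force; a small backtracking sketch of mine, not part of the hint:

    from itertools import product

    def closed_knights_tour(n):
        moves = [(1, 2), (2, 1), (-1, 2), (-2, 1),
                 (1, -2), (2, -1), (-1, -2), (-2, -1)]
        nbrs = {(r, c): [(r + dr, c + dc) for dr, dc in moves
                         if 0 <= r + dr < n and 0 <= c + dc < n]
                for r, c in product(range(n), repeat=2)}
        start = (0, 0)
        def extend(path, seen):
            if len(path) == n * n:
                return start in nbrs[path[-1]]    # can the cycle be closed?
            return any(extend(path + [q], seen | {q})
                       for q in nbrs[path[-1]] if q not in seen)
        return extend([start], {start})

    print(closed_knights_tour(4))   # False: no closed tour on the 4x4 board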
H: orthogonal matrix and elementary matrix The answer is False, but I can't think of a counterexample... Does anybody have one? Let $A$ be an orthogonal $4 \times 4$ matrix such that $$ Ae_1 = e_2, Ae_2 = e_3, Ae_3 = e_1$$ Then $$Ae_4 = e_4 $$ AI: Let $$A = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & -1 \end{pmatrix}$$ Then $Ae_4 = -e_4$. On the other hand, the given three conditions and orthogonality of $A$ imply $Ae_4 \in \{\pm e_4\}$: since $Ae_4 \cdot Ae_i = e_4 \cdot e_i = 0$ for $i \in \{1,2,3\}$ and $\{Ae_1, Ae_2, Ae_3\} = \{e_1, e_2, e_3\}$, we get $Ae_4 \cdot e_j = 0$ for $j \in \{1,2,3\}$. So $Ae_4 = \alpha e_4$ for some $\alpha \in \mathbb R$, and $1 = Ae_4 \cdot Ae_4 = \alpha^2$ gives $\alpha \in\{\pm 1\}$.
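Verifying the counterexample numerically (my own sketch):

    import numpy as np

    A = np.array([[0, 0, 1, 0],
                  [1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, -1]], dtype=float)
    e = np.eye(4)
    print(np.allclose(A.T @ A, np.eye(4)))   # True: A is orthogonal
    print(A @ e[0], A @ e[1], A @ e[2])      # e2, e3, e1 as required
    print(A @ e[3])                          # [ 0.  0.  0. -1.]: Ae4 = -e4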
H: Dimension of vector space and symmetric matrix Why is the following statement true? I am frustrated because I cannot find any way into this problem. The dimension of the vector space of all symmetric $4 \times 4$ matrices is 10. Please help me. AI: Hint: A symmetric $\,n\times n\,$ matrix is completely determined by the entries on its main diagonal and by those above it. How many such entries are there...?
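A counting sketch of mine that makes the hint concrete (one free entry per position on or above the diagonal, i.e. basis matrices $E_{ii}$ and $E_{ij}+E_{ji}$ for $i<j$):

    n = 4
    basis_size = sum(1 for i in range(n) for j in range(i, n))
    print(basis_size, n * (n + 1) // 2)   # 10 10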
H: Cocartesian squares in the category of abelian groups. Recently, I've been doing a recap of (basic) category theory and found an old exercise I seem to be unable to solve. The question is as follows. Let $A, B$ be abelian groups, $A'<A$ and $B'<B$ subgroups and $\phi:A\to B$ such that $\phi(A')\subseteq B'$. Let $\phi':A'\to B'$ and $\phi'':A/A'\to B/B'$ be the maps induced by $\phi$. Prove that the diagram $$\begin{array}{ccc} A' & \to & A \\ \downarrow{\phi'} & & \downarrow{\phi} \\ B' & \to & B \end{array}$$ is a cocartesian square if and only if $\phi''$ is an isomorphism. The exercise preceding this one looks like a dual statement which I did manage to prove (a similar square being cartesian iff $\phi'$ is an iso), yet just dualising the proof doesn't seem to work. That proof, however, suggests using the universal property of cocartesian squares for one of the implications ($\Rightarrow$), while using the explicit construction of pushouts in the category of abelian groups for the other one. The pushout of $A_1\overset{f_1}{\leftarrow} A_3 \overset{f_2}{\rightarrow} A_2$ in this category is given by $A_1\oplus A_2/\{(f_1(x),-f_2(x))\mid x\in A_3\}$. Can anyone provide me with a proof, or give a sketch of proof that might point out which step I fail to see just yet? AI: Take $A' \to A \to A/A'$, where the first map is the inclusion and the second is the quotient projection. Then take $B' \to B \to B/B'$, with the maps as above. These are both exact sequences, which means $\operatorname{im}(\text{inclusion}) = \ker(\text{projection})$. You should notice that $(\phi', \phi, \phi'')$ is a morphism of exact sequences, which means that this triple makes the obvious diagram commute. Now, you may check that the sequence $B' \to P \to \operatorname{coker}(i)$ is exact, if $P$ is the pushout of the inclusion $A' \subseteq A$ along $\phi'$ and $i$ is the standard inclusion of $B'$ into the direct sum you mention. If you put the two diagrams together (the one that describes the map $(\phi', \phi, \phi'')$ and the one with $(\phi', j, k)$, where $j\colon A \to P$ is the standard inclusion and $k\colon A/A' \to \operatorname{coker}(i)$), you notice that there is also a map from $P$ to $B$ because of the universal property of $P$. As you should know, this universal map is an iso, so $B \cong P$. Then you notice that $\operatorname{coker}(i)$ is iso to $B/B'$, because the inclusions $B' \subseteq B$ and $B' \subseteq P$ differ by an iso. This means that your square is in fact a pushout. This is a chain of if-and-only-ifs, so your claim is proved. This should be so much easier if you write down the diagrams I mention, which I cannot do here, since it looks like MathJax does not support xypic, and I'm not familiar with any other way of writing commutative diagrams. Clarification: I understand this is an exercise in category theory and I didn't use any strictly categorical methods, but you should notice that the category of abelian groups is an abelian category, and this is actually done in full generality in any abelian category if, instead of quotients, you take cokernels.
H: power series quotient of polynomial functions I am given $g(x)=\sum_{k=1}^\infty k^2x^k$. Why can one write $g:(-1,1)\rightarrow\mathbb R$ as a quotient of two polynomial functions? I only know that the radius of convergence is $1$, since $\limsup\limits_{k\rightarrow\infty}\sqrt[k]{k^2}=1$, and since $k^2x^k$ is not a null sequence for $|x|=1$, the series converges exactly for $|x|<1$. AI: Hints: for $\,|x|<1\,$ : $$\frac1{1-x}=\sum_{k=0}^\infty x^k\implies\frac1{(1-x)^2}=\sum_{k=1}^\infty kx^{k-1}\implies$$ $$\implies\frac2{(1-x)^3}=\sum_{k=2}^\infty k(k-1)x^{k-2}=\frac1{x^2}\sum_{k=1}^\infty k^2x^k-\frac1x\sum_{k=1}^\infty kx^{k-1}\;\ldots\ldots$$
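Following the hints to their conclusion yields the closed form $\sum_{k\ge1} k^2x^k = \frac{x(1+x)}{(1-x)^3}$, indeed a quotient of polynomials on $(-1,1)$. A quick numerical check (my own sketch):

    x = 0.37
    partial = sum(k * k * x**k for k in range(1, 2000))
    closed = x * (1 + x) / (1 - x) ** 3
    print(partial, closed)   # agree to near machine precision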
H: Composition of Reflection Is a Rotation? Prove that if $T_{1},T_{2}$ are reflections in lines through O then $T_{1}\circ T_{2}$ is a rotation about O. Once again, a hint would be preferable to an answer. I'm not familiar with these types of linear transformations, as I am not accustomed to thinking of them as functions. Thanks in advance for everyone who answers. AI: Since you requested a hint, go ahead and think of the transformations as matrices, and treat composition as multiplication. Alternatively, think about how a reflection changes chirality, so two reflections leave chirality unchanged.
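A matrix sketch of the hint (my own): the product of the reflections in lines through $O$ at angles $p$ and $q$ is precisely the rotation by $2(p-q)$.

    import numpy as np

    def refl(p):  # reflection in the line through O at angle p
        return np.array([[np.cos(2*p), np.sin(2*p)], [np.sin(2*p), -np.cos(2*p)]])

    def rot(t):   # rotation about O by angle t
        return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

    p, q = 1.1, 0.4
    print(np.allclose(refl(p) @ refl(q), rot(2 * (p - q))))   # True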
H: How do I prove that $\frac{1}{\pi} \arccos(1/3)$ is irrational? How do I prove that $\frac{1}{\pi} \arccos(1/3)$ is irrational? AI: Let $\theta = \arccos\dfrac13$ so that $\cos\theta=\dfrac13$. If $\theta$ is a rational multiple of $\pi$, say $\theta=\dfrac mn \pi$, then $\cos(n\theta)=\pm1$. Now $\cos(n\theta)=T_n(\cos\theta)$, where $T_n$ is the $n$th-degree Chebyshev polynomial. Using mathematical induction and some trigonometric identities, you can show that the leading coefficient of the $n$th-degree Chebyshev polynomial is $2^{n-1}$ for $n\ge2$ (and $n\ge2$ holds here, since $n=1$ would force $\cos\theta = \cos(m\pi) = \pm1 \ne \frac13$). We have $$ \pm1=T_n\left(\frac 13\right) = 2^{n-1}\left(\frac13\right)^n+\text{lower-degree terms}. $$ Multiplying both sides by $3^n$, we get $$ \pm3^n = 2^{n-1} + \text{terms divisible by $3$}. $$ And that says a positive power of $2$ is a multiple of $3$, which violates uniqueness of prime factorizations.
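Both ingredients, the leading coefficient $2^{n-1}$ and the fact that $3^n T_n(1/3)$ is an integer not divisible by $3$, can be spot-checked with sympy (my own sketch):

    from sympy import chebyshevt, Rational
    from sympy.abc import x

    for n in range(2, 8):
        Tn = chebyshevt(n, x).expand()
        assert Tn.coeff(x, n) == 2 ** (n - 1)
        val = 3 ** n * Tn.subs(x, Rational(1, 3))   # an integer
        print(n, val, val % 3)                      # val % 3 = 2**(n-1) % 3 != 0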