H: Cartesian product and union How can we prove that $(A\cup C)\times (B \cup D) \subset (A \times B) \cup (C \times D) \Rightarrow (C \subset A ~\land ~ D \subset B) ~~ \lor ~~ (A \subset C ~ \land ~ B \subset D) $? I've tried to prove it by contradiction but didn't manage to do it. I'll be grateful for any help. AI: If we apply the distributivity of $\times$ over $\cup$ we obtain the following condition from the antecedent: $$C \times B \cup A \times D \subseteq A\times B \cup C \times D$$ Now $C\times B \not\subseteq A \times B \cup C \times D$ iff both $C \setminus A$ and $B \setminus D$ are non-empty. Similarly, $A \times D \not\subseteq A \times B \cup C \times D$ iff both $A \setminus C$ and $D \setminus B$ are non-empty. Therefore, we conclude that: $$C \times B \cup A \times D \subseteq A\times B \cup C \times D$$ is equivalent to: $$\neg(C\setminus A \ne\varnothing \land B\setminus D \ne \varnothing) \land \neg(A \setminus C \ne\varnothing \land D\setminus B \ne\varnothing)$$ which, using De Morgan's laws for negation, translates to: $$(C\setminus A =\varnothing \lor B\setminus D = \varnothing) \land (A \setminus C = \varnothing \lor D\setminus B =\varnothing)$$ Using the equivalence of $X \setminus Y = \varnothing$ and $X \subseteq Y$, we obtain: $$(C\subseteq A \lor B\subseteq D) \land (A \subseteq C \lor D\subseteq B)$$ and by using distributivity of $\land$ over $\lor$, we obtain the disjunction: $$C =A \lor (C\subseteq A \land D\subseteq B) \lor (A \subseteq C \land B\subseteq D) \lor B = D$$ In conclusion, if we take $\subset$ to mean $\subsetneq$, two of these possibilities drop out, and we obtain the result. But if we take $\subset$ to mean $\subseteq$, it is false (as exhibited by Brian M. Scott).
H: How to prove $D^n/S^{n-1}\cong S^n$? In my textbook it is said that the quotient space $D^n/S^{n-1}$ is homeomorphic to $S^n$. I can imagine it for $n=2$, but fail to make a mathematical proof for any dimension. Can anyone provide a rigorous proof? AI: Theorem 1: Let $X$ be compact Hausdorff, let $p \in X$. If $Y$ is homeomorphic to $X \setminus \{p\}$ then $X$ is homeomorphic to the one-point compactification of $Y$. Theorem 2: if $X$ is compact Hausdorff and $A \subset X$ is closed, then $X / A$ (the quotient of $X$ under the equivalence relation that identifies $A$ to a point) is compact Hausdorff as well. Now apply Theorem 2 to $X = D^n$ and $A = S^{n-1} \subset D^n$. Then note that the result with the identified point removed is just the interior of $D^n$, which is homeomorphic to $\mathbb{R}^n$ (e.g. via the map $x \mapsto \frac{x}{1+\|x\|}$, a homeomorphism from $\mathbb{R}^n$ onto the open unit ball). So Theorem 1 then says that $X / A$ is homeomorphic to the one-point compactification of $\mathbb{R}^n$, which is $S^n$ (via the usual stereographic projection).
H: Second derivative of Brownian motion? My question is this: we give a meaning to the following expression: $$dX(t) = \mu(t,X(t))dt + \sigma(t,X(t))dW(t), \ \ X(0)=x.$$ where $W$ is a Wiener process. This equation can be thought of as $$\frac{dX(t)}{dt} = \mu(t,X(t)) + \sigma(t,X(t))\frac{dW(t)}{dt}, \ \ X(0)=x.$$ Now, if I wanted to differentiate again in an ODE I would write something like: $$\frac{d^2X(t)}{dt^2} = \mu'(t,X(t)) + \sigma'(t,X(t))\frac{d^2W(t)}{dt^2}, \ \ X(0)=x.$$ But, my question is, can we do this in Stochastic ODE's? Does the term $\frac{d^2W(t)}{dt^2}$ have a meaning? What would one do in this case? Any idea, literature, reference? Thank you for your help! AI: When you write $$dX(t) = \mu(t,X(t))dt + \sigma(t,X(t))dW(t), \ \ X(0)=x.$$ this is an 'abuse of notation' and should be read as the equation $$X(T)-x = \int_0^T\mu(t,X(t))dt + \int_0^T\sigma(t,X(t))dW(t),\qquad T\geq0.$$ This equation cannot be solved pathwise, since the term $\frac{dW(t)}{dt}$ would have no meaning; it has a more complex structure due to the stochastic integral $\int_0^T\sigma(t,X(t))dW(t)$, which is not differentiable in $T$.
H: which of the following is/are algebraic over rationals which of the following is/are true? $\sin 7^\circ$ is algebraic over $\mathbb{Q}$ $\sin^{-1}(1)$ is algebraic over $\mathbb{Q}$ $\cos (\pi/7)$ is algebraic over $\mathbb{Q}$ $\sqrt{2}+\sqrt{\pi} $ is algebraic over $\mathbb{Q}(\pi)$ An algebraic number is a number that is a root of a non-zero polynomial in one variable with rational coefficients. I am perfectly sure that options $1$ and $3$ are algebraic numbers. Could anyone help me with the other options? AI: For option 2: Evaluate $\sin^{-1}(1)$. It's easy. For $4$: $\sqrt{2}+\sqrt{\pi}$ is a root of the equation: $$(x^2-2-\pi)^2-8\pi=0$$ $$x^4-2(2+\pi)x^2+(\pi+2)^2-8\pi=0$$ $$x^4-(4+2\pi)x^2+\pi^2-4\pi+4=0$$ Hence $\sqrt{2}+\sqrt{\pi}$ is algebraic over $\mathbb{Q}(\pi)$. Here I will include the proofs of 2 useful facts: Fact 1: Let $E$ be an extension field of a field $F$. Let $u\in E$. Then $u$ is algebraic over $F$ iff $F(u)$ is finite dimensional over $F$. Proof: Forward direction: Consider the elements $1,u,u^2,...u^{[F(u):F]}$. If these $[F(u):F]+1$ elements were linearly independent over $F$, then the dimension of $F(u)$ over $F$ must be greater than or equal to $[F(u):F]+1$ (contradiction). Thus, we conclude that they are linearly dependent. Hence there exist $a_0,a_1,....,a_{[F(u):F]}\in F$ such that not all of them are zero and: $$a_{[F(u):F]}u^{[F(u):F]}+a_{[F(u):F]-1}u^{[F(u):F]-1}+...+a_1u+a_0=0$$ Hence $u$ is algebraic over $F$. Backward direction: We start by showing that $F(u)=F[u]$. Clearly, $F[u]\subseteq F(u)$. Since $F(u)$ is the intersection of all subfields that contain $F\cup\{u\}$, we get $F(u)\subseteq F[u]$ once we know $F[u]$ is a field. Now we verify that $F[u]$ is a field. Consider the ring homomorphism $\phi: F[x]\rightarrow F(u)$ that sends $p(x)$ to $p(u)$. Its image is $F[u]$. Hence $F[u]\cong F[x]/Ker\phi$. $Ker\phi=\{p(x)\in F[x]|p(u)=0\}$. Since $F$ is a field, therefore $F[x]$ is a PID. Since $Ker\phi$ is an ideal of $F[x]$, therefore $Ker\phi=(f(x))$ for some $f(x)\in F[x]$, and $f\neq0$ because $u$ is algebraic over $F$. It can be easily verified that $(f(x))$ is a prime ideal of $F[x]$. Since $F[x]$ is a PID, therefore $Ker\phi=(f(x))$ is a maximal ideal of $F[x]$. Hence $F[u]\cong F[x]/Ker\phi$ is a field. Thus $F(u)=F[u]$. Verify that $\{1,u,u^2,...,u^{deg(f)-1}\}$ is a basis for $F[u]$ over $F$. Hence, $F(u)=F[u]$ is a finite dimensional vector space over $F$. Fact 2: Let $E$ be an extension field of a field $F$. Let $K:=\{x\in E \mid x$ is algebraic over $F\}$. Then $K$ is a field. Proof: Let $x,y$ be algebraic over $F$. By fact 1, we deduce that $F(x)$ is finite dimensional over $F$. Since $y$ is algebraic over $F$, therefore $y$ is algebraic over $F(x)$. Thus, by fact 1 we deduce that $F(x)(y)$ is finite dimensional over $F(x)$. Since $F(x)$ is finite dimensional over $F$, it follows that $F(x)(y)$ is finite dimensional over $F$. Since $x+y\in F(x)(y)$, therefore $F(x+y)$ is a subfield of $F(x)(y)$. Thus, $F(x+y)$ is finite dimensional over $F$. By fact 1, we deduce that $x+y$ is algebraic over $F$. Hence it is an element of $K$. The rest of the field axioms can be verified for $K$ similarly. Since $\sqrt{2}$ is algebraic over $\mathbb{Q}$, it is algebraic over $\mathbb{Q}(\pi)$. Similarly, $\sqrt{\pi}$ is algebraic over $\mathbb{Q}(\pi)$ (as it is a root of the equation $x^2-\pi=0$). Thus, we know that $\sqrt{2}+\sqrt{\pi}$ is algebraic over $\mathbb{Q}(\pi)$ by fact 2.
H: Pair of compasses drawing a square (from children's fiction) I have read a children's book where an alien race of "square people" used a pair of compasses that drafted a perfect square when used. Now I wanted to explain to the child that it is not possible to have such a pair of compasses, but then I was not really sure. So is it possible to construct a mechanical tool with one fixed point and one moving point that would actually draft a square with one continuous movement? AI: It depends on how restrictive you are about what counts as a 'pair of compasses'. For instance, if you require the writing end to be a fixed (Euclidean) distance away from the fixed point at all times, then that pair of compasses will always draw a circle, since a circle is exactly the set of points at some fixed distance from a fixed point. One way you might be able to build such a pair of compasses is by using linkages. A linkage is like a more complicated version of a pair of compasses, where you have some collection of fixed points, movable joints, and bars connecting them, with a writing stylus attached at some point. A pair of compasses, then, can be viewed as an extremely simple type of linkage. A very famous example of a linkage, dating from 1784, is Watt's linkage, which was used in the first steam engines. It is special because it draws a nearly perfect straight line. James Watt once said that of all his inventions, this linkage was the one he was most proud of. It wasn't until 1864 that a linkage was discovered that could draw an exact straight line. It is called the Peaucellier-Lipkin linkage, and it earned its inventors the Prix Montyon. How do you get from here to a square? The following remarkable theorem was proved in 1994: Theorem (Thurston, Kapovich-Millson): given any polynomial map, there is a linkage and a vertex of that linkage such that the vertex traces out that polynomial map. In this context, a polynomial map is any curve on the plane that can be described by polynomial equations. Certain curves (like $y=\log(x)$) do not have polynomial maps describing them; however, any curve can be approximated to arbitrarily high accuracy by polynomial maps. We therefore have two nice corollaries: Corollary 1: There is a linkage that signs your name! Corollary 2: There is a linkage that draws a square (to arbitrarily high accuracy). Bear in mind that the theorem does not guarantee that the linkages will be simple! In fact, if you wanted to build a linkage that actually did sign your name, it would probably be extremely complicated! But it would nevertheless exist. I have no idea how complicated a linkage would have to be to draw a square. Further reading: the reason I know about all this is that I've read Joseph O'Rourke's excellent book How To Fold It, which has a website here. If this kind of thing interests you, I strongly recommend buying a copy.
H: Are there any perfect numbers which are also powerful? Powerful numbers are discussed in this paper by R. A. Mollin and P. G. Walsh. Wikipedia has more information. In particular, note that OEIS A001694 does not seem to contain any (even) perfect numbers. This is indeed the case, if we note that the Mersenne prime $2^p - 1$ has exponent $1$. (Recall that the general form for even perfect numbers is $N = {2^{p - 1}}(2^p - 1)$.) Now my question is: How about odd perfect numbers? I guess my main inquiry boils down to - Has anyone worked on considering properties of the set $O \bigcap P,$ where $O$ and $P$ are given as follows $$O = \{\text{odd perfect numbers}\}$$ $$P = \{\text{powerful numbers}\}$$ Any pointers to existing references in the literature will be appreciated. Thank you! AI: It is known that any odd perfect number is of the form $p^rn^2$ where $p$ is a prime, $p$ doesn't divide $n$, and $r\equiv1\pmod4$. The case $r=1$ has not been ruled out, so it has not been proved that every odd perfect is powerful; the cases $r\gt1$ have not been ruled out, so it has not been proved that no odd perfect number is powerful.
H: Use the Mean Value Theorem to prove $\cosh(x) \ge 1 + \frac{x^2}{2}$ in the interval $[0,x]$, given $\sinh(x) \ge x$ for all $x \ge 0$. Use the Mean Value Theorem to prove $\cosh(x) \ge 1 + \frac{x^2}{2}$ in the interval $[0,x]$, given $\sinh(x) \ge x$ for all $x \ge 0$. I tried using $f(x) = \cosh(x)$, but to no avail. All help appreciated, thanks! AI: What about the function $f(x)=\cosh (x)-1-\frac{x^2}2$? You clearly have $f(0)=0$. Can you show $f'(x)\ge0$? If you use these facts and the Mean Value Theorem, what do you get?
H: Putting the table on the ground If I have a smooth surface (which is the graph of some function $f(x,y)$), is it true that I can find 4 points forming a planar square lying on this surface? And is it true that the edge length of such a square may be any prescribed value (if $f$ is defined on $\mathbb R^2$)? Its motivation should be clear: can I put all four legs of a table on any smooth ground? AI: See The Wobbly Table Theorem. Or How to stabilize a wobbly table. Or Turning the tables. [Oops! just noticed @Ben already gave that last one in a comment]
H: In any vector space, ax=bx implies a=b The above statement is listed as false in my text, and I wanted to be sure I understood why that is. (I guess if it were written "properly" it would be $a\mathbf{x} = b\mathbf{x}$ implies $a = b$). Given the axioms we were given, it would seem that the statement should be true, no? A related statement -- also listed as false -- is that "in any vector space, $a\mathbf{x} = a\mathbf{y}$ implies that $\mathbf{x} = \mathbf{y}$." Again, given the axioms we have I am not sure why this is the case. AI: Directly from the axioms defining a vector space: Claim: in a $K$-vector space $V$, $\lambda\cdot x=0_V$ if and only if $\lambda =0_K$ or $x=0_V$. Proof: the facts that $0_K\cdot x=0_V$ and $\lambda\cdot 0_V=0_V$ for all $\lambda \in K$ and all $x\in V$ are two of the axioms. Conversely, assume $\lambda \cdot x=0_V$. If $\lambda \neq 0$, then it is invertible and $$ 0_V=\lambda^{-1}\cdot 0_V=\lambda^{-1}\cdot(\lambda\cdot x)=(\lambda^{-1}\lambda)\cdot x=1_K\cdot x=x. $$ So $\lambda =0_K$ or $x=0_V$. QED. First question: this is false as $$a\cdot x= b\cdot x\iff (a-b)\cdot x=0_V\iff a-b=0_K\mbox{ or } x=0_V\iff a=b\mbox{ or } x=0_V. $$ Take $a=0_K$, $b=1_K$, and $x=0_V$ for a counterexample. But what you mention becomes true with the assumption $x\neq 0_V$. Second question: this is false as $$ a\cdot x=a\cdot y\iff a\cdot(x-y)=0_V\iff a=0_K\mbox{ or } x-y=0_V\iff a=0_K\mbox{ or } x=y. $$ Take $a=0_K$ and any $x\neq y$ for a counterexample. This becomes true if you further assume $a\neq 0_K$.
H: Don't understand proof on pg 65 of Qing Liu There is a proposition on page 65 of Liu's book which is: $X$ an integral scheme with generic point $\xi$. Then if we identify $\mathcal{O}_X(U)$ and $\mathcal{O}_{X,x}$ with subrings of the function field, we have $\mathcal{O}_X(U) = \bigcap_{x \in U} \mathcal{O}_{X,x}$. His proof is like this. "By covering $U$ with affine open sets we can assume $U = \operatorname{Spec}(A)$ is affine. Let $f \in \text{Frac}(A)$ be contained in all the localizations $A_\mathfrak{p}$ for $\mathfrak{p} \in \operatorname{Spec}(A)$. Let $I = \{g \in A | fg \in A\}$. Then $I$ is not contained in any prime ideal, so that $I = A$. It follows that $f \in A$." I have two questions: How can we reduce to the case that $U = \operatorname{Spec}(A)$ is affine? If I write $\{V_i\}$ for a cover of $U$ by affines, then $\mathcal{O}(U) = \bigcap \mathcal{O}(V_i)$, but inside what is the intersection taken? Why is $I$ not contained in any prime ideal? AI: For an integral scheme $X$ with generic point $\xi$, for each open subset $U\subseteq X$, $\xi\in U$, so there is a canonical homomorphism $\mathscr{O}_X(U)\rightarrow\mathscr{O}_{X,\xi}=:k(X)$ (the function field of $X$). If $U=\mathrm{Spec}(A)$ is affine, then this can be identified with the canonical inclusion of the domain $A$ into its field of fractions $\mathrm{Frac}(A)$. It then follows for a general $U$ that the map to $k(X)$ is injective, because, if $s\in\mathscr{O}_X(U)$ maps to zero, then the restriction of $s$ to each affine open inside $U$ maps to zero, which means the restriction to that affine is zero (by the affine case!), so $s=0$. So you can take the intersection of all the $\mathscr{O}_X(U)$ inside the function field $k(X)$, and then your equality $\mathscr{O}_X(U)=\bigcap_i\mathscr{O}_X(V_i)$ for $U=\bigcup_i V_i$ an affine open cover (or any open cover whatsoever) makes sense. Similarly each $\mathscr{O}_{X,x}$ canonically injects into $k(X)$, because every point $x\in X$ is a specialization of the generic point $\xi$, and, again considering an affine open around each point, localizations of a domain all canonically inject into the domain's field of fractions. Now, the argument quoted proves that the natural map $\mathscr{O}_X(U)\rightarrow\bigcap_{x\in U}\mathscr{O}_{X,x}$ is an isomorphism for $U$ affine, but you don't actually need it. For general $U$, the map is injective simply because $\mathscr{O}_X$ is a sheaf (in fact, because $X$ is integral, each $\mathscr{O}_X(U)\rightarrow\mathscr{O}_{X,x}$, $x\in U$, is injective). Now if $s$ is an element of the intersection of the stalks (taken in $k(X)$), for each $x$ we can find an affine open $U_x\subseteq U$ containing $x$ such that $s$ is the image of a section $t(x)\in\mathscr{O}_X(U_x)$ (this is just an application of the definition of the stalk of a sheaf, and in the argument above, the $s_\mathfrak{p}$ were denoted $t(x)$). Then $U=\bigcup_x U_x$, and the sections $t(x)\vert_{U_x\cap U_y}$ and $t(y)\vert_{U_x\cap U_y}$ for $x\neq y$ map to the same element in $k(X)$ (namely $s$) so they are equal in $\mathscr{O}_X(U_x\cap U_y)$ by the injectivity proved above. So they agree on $U_x\cap U_y$, and therefore they can be glued to a section $t$ of $U$ which satisfies $t\vert_{U_x}=t(x)$, and in particular, the image of $t$ in $\bigcap_{x\in U}\mathscr{O}_{X,x}$ will be $s$. The reason $I$ is not contained in any prime ideal is that $f\in\bigcap_\mathfrak{p}A_\mathfrak{p}$.
If $\mathfrak{p}$ is a prime ideal of $A$, then because $f$ lies in $A_\mathfrak{p}$, it can be written as a fraction $a/g$ for some $g\in A\setminus\mathfrak{p}$, and then $gf=a\in A$, so $g\in I$. Therefore, for each prime ideal of $A$, there is some element of $A$ not in that prime ideal which is contained in $I$. So $I$ cannot be contained in any prime ideals of $A$. Let me now say why, when $X=\mathrm{Spec}(A)$ is affine, and I want to prove that $\mathscr{O}_X(X)\rightarrow\bigcap_{x\in X}\mathscr{O}_{X,x}$ is an isomorphism, meaning I want to prove that $A\rightarrow\bigcap_{\mathfrak{p}\in\mathrm{Spec}(A)}A_\mathfrak{p}\subseteq\mathrm{Frac}(A)$ is an isomorphism, the argument I gave translates to one involving the "ideal of denominators of $f$," which is the ideal $I$ of the previous paragraph. Given $f\in \mathrm{Frac}(A)$ which lies in all the stalks $A_\mathfrak{p}$ (more precisely their images in $\mathrm{Frac}(A)$), I know from the definition of localization that for each $\mathfrak{p}$ I can write $f=a_\mathfrak{p}/s_\mathfrak{p}$ for some $a_\mathfrak{p},s_\mathfrak{p}$ with $s_\mathfrak{p}\notin \mathfrak{p}$. In geometric terminology, this is saying precisely that $f$ comes from a section of the affine open $D(s_\mathfrak{p})$ of $\mathrm{Spec}(A)$, namely it comes from $a_\mathfrak{p}/s_\mathfrak{p}\in A_{s_\mathfrak{p}}=\mathscr{O}_X(D(s_\mathfrak{p}))$. Since $\mathfrak{p}\in D(s_\mathfrak{p})$ for each $\mathfrak{p}$ by construction, these open sets cover $\mathrm{Spec}(A)$. The equality $\mathrm{Spec}(A)=\bigcup_{\mathfrak{p}\in\mathrm{Spec}(A)}D(s_\mathfrak{p})$ is literally equivalent to: the ideal of $A$ generated by the $s_\mathfrak{p}$ (which is contained in the ideal $I$ of the previous paragraph) is equal to $A$. So the ideal of denominators is all of $A$, and that means $f=1\cdot f\in A$.
H: Intuitive meaning of Exact Sequence I'm currently learning about exact sequences in a grad school Algebra I course, but I really can't get the intuitive picture of the concept and why it is important at all. Can anyone explain them for me? Thanks in advance. AI: In the linear algebra of Euclidean space (i.e. $\mathbb R^n$), the consideration of subspaces and their orthogonal complements are fundamental: if $V$ is a subspace of $\mathbb R^n$ then we think of it as filling out "some of" the dimensions in $\mathbb R^n$, and then its orthogonal complement $V^{\perp}$ fills out the other directions. Together they span $\mathbb R^n$ in a minimal way (i.e. with no redundancies, i.e. $\mathbb R^n$ is the direct sum of $V$ and $V^{\perp}$). Now in more general settings (say modules over a ring) we don't have an inner product and so we can't form orthogonal complements, but we can still talk about submodules and quotients. So if $A$ is a submodule of $B$, then $A$ fills up "some of the directions" in $B$, and the remaining directions are encoded in $B/A$. Now by itself this doesn't seem like anything new, or worth memorializing with new terminology, but often what happens is that one has a submodule $A \subset B$, and then a surjection $B \to C$, given without any a priori relation to each other. However, if $A$ is precisely the kernel of the map $B \to C$, then we are (somewhat secretly) in the previous situation: $A$ fills out some of the directions in $B$, and all the complementary directions are encoded in $C$. So we introduce the terminology "$\, \, 0 \to A \to B \to C \to 0$ is a short exact sequence" to capture this situation. Since long (i.e. not necessarily short) exact sequences can always be broken up into a bunch of short exact sequences that are glued together, getting a feeling for short exact sequences is a good first step. Of course, you should be coupling your study of these homological concepts with examples, e.g. short exact sequences arising from tangent and normal bundles to submanifolds of manifolds, all the important long exact sequences in homology theory (from algebraic topology), and so on; without these examples of naturally occurring set-ups of the "$A, B, C$" form described above, it won't be so easy to get a feel for why this concept was isolated as being a fundamental one.
H: Converting equation to $y = mx + b$ My sister has an assignment to convert the equation below to slope-intercept form $y = mx + b$: $xy = 4$ Can anyone help? thanks in advance. ^_^ AI: This is not possible. Solving for $y$ gives $y = 4/x$; look at the graph of $xy=4$ and you will notice that it is a hyperbola, not a line. Your sister will therefore be unable to write it in the form $y=mx+b$, which would be a straight line.
H: Questions about Grothendieck groups. I have a question about exercise 26 on page 88 of the book Introduction to Commutative Algebra by Atiyah and Macdonald. In 26(iii), let $A$ be a field. Then finitely generated $A$-modules are finite dimensional $A$-vector spaces. Two finite dimensional vector spaces are isomorphic if and only if they have the same dimension. Therefore $F(A)$ is isomorphic to $\mathbb{Z}_{\geq 0}$. But $\mathbb{Z}_{\geq 0}$ is not a group. It is said that $F(A)$ is isomorphic to $\mathbb{Z}$. How to prove this? How to prove that $F(A)$ is isomorphic to $\mathbb{Z}$ for the case that $A$ is a principal ideal domain. Thank you very much. AI: It is not $F(A)$ that is isomorphic to $\mathbb{Z}$, but rather $K(A)$. In general, $F(A)$ does not have a group structure, and the Grothendieck group $K(A)$ is the most free way of imposing a group structure on $F(A)$ such that exact sequences behave as specified. An element of $C$ is a finite formal sum $\sum_{k \geq 0} n_k [V_k]$ where $V_k$ is a $k$-dimensional vector space over the field $A$ and $n_k \in \mathbb{Z}$. If you consider that element in $C/D = K(A)$, then since you can always find an exact sequence $$ 0 \rightarrow V_{k-1} \rightarrow V_k \rightarrow V_1 \rightarrow 0 $$ (where the subscript denotes the dimension of the vector space), it is not hard to show by induction that $[V_k] = k [V_1]$ in $K(A)$. Thus every element in $K(A)$ has the form $\sum_{k \geq 0} n_k k [V_1]$ and the map sending this element to $\sum_{k \geq 0} n_k k \in \mathbb{Z}$ is easily seen to be an isomorphism (exercise). Hint for the PID case: Use the exact sequence $$ 0 \rightarrow A \xrightarrow{p} A \rightarrow A / (p) \rightarrow 0 $$ to show that $[A / (p)] = 0$ in $K(A)$. What does every finitely generated module over a PID look like? Use this, the above observation and the argument in the case when $A$ is a field to prove that $K(A) \cong \mathbb{Z}$.
H: How to prove that the following iteration process converges? I have the following iteration process: $$ p_{n+1} = \frac{{p_{n}}^3 + 3 a p_{n} }{3 {p_{n}}^2 + a } , $$ where $a > 0$. Q1: How to prove that this iteration process converges for every number $p_0 > p > 0 $ to the fixed point $p = g(p), p > 0$ ? Q2: How to prove that this iteration process converges for every number $p > p_0 > 0$ to the fixed point $p = g(p) , p > 0 $ ? Perhaps the following theorem helps: Let $f \in C^2 [a,b]$ such that $f(p) = 0$ and $f'(p) \neq 0 $. Then there is a $\delta > 0 $ such that Newton's method generates a sequence $ \{ p_n \} $, which converges to $p$ for every $p_0 \in [p - \delta , p + \delta]$. In my case, $f(x) = x^2 - a$. I am guessing this theorem probably could help (especially for Q2), but I don't see how I can apply it. AI: Consider $p_{n+1} - p_n = \frac{-2p_n^3 + 2ap_n}{3p_n^2 + a}$. We have $$p_{n+1} - p_n =\begin{cases}\le 0, & \text{if } p_n \ge \sqrt{a}\\ = 0, & \text{if } p_n = \sqrt{a} \\ \ge 0, & \text{if } p_n \le \sqrt{a}\end{cases}$$ Moreover, let $f(x) = \frac{x^3 + 3ax}{3x^2 + a}$. Then $$f'(x) = \frac{(3x^2+3a)(3x^2 + a)-6x(x^3+3ax)}{(3x^2+a)^2}=\frac{3x^4-6ax^2+3a^2}{(3x^2+a)^2}=\frac{3(x^2-a)^2}{(3x^2+a)^2} \ge 0.$$ Thus it follows that $f$ is increasing. In particular $$p_{n+1} = f(p_n) \le f(\sqrt{a}) = \sqrt{a} \Leftrightarrow p_n \le \sqrt{a}.$$ The behaviour of the sequence is now clear: $p_n$ increases without going past $\sqrt{a}$ if $p_0 \le \sqrt{a}$ and decreases if $p_0 \ge \sqrt{a}$, again without going past $\sqrt{a}$. Thus as a bounded monotone sequence $p_n$ has a limit, which has to be $\sqrt{a}$ by taking limits on both sides of the defining equation of $p_{n+1}$.
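A quick numerical experiment makes the two-sided monotone convergence concrete. A minimal Python sketch (the value $a=2$ and the starting points are arbitrary choices):

```python
import math

def step(p, a):
    # one application of p -> (p^3 + 3*a*p) / (3*p^2 + a)
    return (p**3 + 3 * a * p) / (3 * p**2 + a)

a = 2.0
for p0 in (0.1, 10.0):          # one start below sqrt(a), one above
    p = p0
    for _ in range(8):
        p = step(p, a)
        print(p)
    print("target:", math.sqrt(a))
```

The iterates approach $\sqrt a$ monotonically from each side, exactly as the argument above predicts.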
H: Show that derivative less than 1 implies contraction. I am told that f has a continuous derivative and that $a \leq f(x) \leq b$ and $|f'(x)| < 1 \ \forall x \in [a,b]$ and I have to show that $f$ is a contraction. Now if I take any $x,y \in [a,b]$, the Mean-Value Theorem says that $\exists c \in (a,b)$ such that $$|f(x) - f(y)| = |f'(c)| |x-y|$$ and so clearly this is a contraction. However I haven't used the condition that $f$ has a continuous derivative or that $f$ is bounded by $a$ and $b$, why are these conditions necessary? My definition of contractive is that $|g(x) − g(y)| \leq a|x − y|$ for some real value $0 \leq a < 1$ and for all $x$, $y \in [a,b]$. Thanks AI: You only proved that $|f(x)-f(y)| < |x-y|$. But the definition of a contraction is that there exists a $C <1$ so that $$|f(x)-f(y)| \le C|x-y| \,.$$ Hint: If $f'$ is continuous on $[a,b]$, then so is $|f'|$, and hence $|f'|$ attains its maximum at some point $x_0 \in [a,b]$. Also, a contraction by definition is a function $f$ from some metric space to itself. Your domain is $[a,b]$ so your codomain also has to be $[a,b]$; that's why you need the other condition.
H: Line of Symmetry for Hyperbolas How might I find the equation for one of the lines of symmetry for the hyperbola $$y= 2 + \frac 6{x-4},\,\text{ where x cannot equal}\; 4.$$ I know that the lines of symmetry for the rational function $y=A/x$ are $y=x$ and $y=-x$...and that to find the lines of symmetry of $y=A/(x-h) + k$, where x cannot equal $h$...the equations become $y-k=x-h$ and $y-k = -(x-h)$...but I don't know how to apply that for the problem above...any ideas? AI: Graph your function, $$y = 2 + \dfrac 6{x - 4}$$ along with the lines $$y - 2 = x - 4 \iff y = x - 2\tag{1}$$ and $$ y - 2 = -(x - 4) \iff y =-x + 6\tag{2}$$ and see what you get. $(1), (2)$ follow from the information you provide: I know ... and that to find the lines of symmetry of $y=A/(x-h) + k$, where x cannot equal $h$...the equations become $y-k=x-h$ and $y-k = -(x-h)$ In your equation, $A = 6,\; k= 2,\; h = 4.$
H: Usefulness of the concept of equivalent representations Definition: Let $G$ be a group, $\rho : G\rightarrow GL(V)$ and $\rho' : G\rightarrow GL(V')$ be two representations of G. We say that $\rho$ and $\rho'$ are $equivalent$ (or isomorphic) if $\exists \space T:V\rightarrow V'$ linear isomorphism such that $T{\rho_g}={\rho'_g}T\space \forall g\epsilon G$. But I don't understand why this concept is useful. If two groups $H,H'$ are isomorphic, then we can translate any algebraic property of $H$ into $H'$ via the isomorphism. But I don't see how a property of $\rho$ can be translated to similar property of $\rho'$. Nor I have seen any example in any textbook where this concept is used. Can someone explain its importance? AI: This is true for all sorts of types of representations. Maybe it is easier to see for permutation representations. Suppose $G$ is a group that acts on polynomials. It has an element $g$ that swaps $x$ and $y$, and leaves $z$ alone. We could write $g=(x,y)(z)$ if we wanted. But then some jerk comes along and asks what we'd do if we needed a fourth variable. FINE. $G$ is a group that acts on polynomials. It has an element $g$ that swaps $x_1$ and $x_2$, and leaves $x_3$ alone. We could write $g=(x_1,x_2)(x_3)$ if we wanted. Nothing important has changed really; we just changed the names of the variables. We could go further and abbreviate $g=(1,2)(3)$. We replaced the variables with the identifying numbers. Maybe that is convenient. Saves ink (or electrons). No real change though. Vector space representations are the same. If $G$ acts on polynomials, then I guess we could apply $g$ to $2x + 3y + 5z$ to get $2y + 3x + 5z$. So we have $\rho(g)$ a linear transformation of the vector space with basis $\{x,y,z\}$. ARGH. I forgot about the jerk. $G$ acts on polynomials, so we could apply $G$ to $2x_1 + 3x_2 + 5x_3$ to get $2x_2 + 3x_1 + 5x_3$. So we have $\rho(g)$ a linear transformation of the vector space with basis $\{x_1,x_2,x_3\}$. The essential properties of $\rho(g)$ and of $\rho$ are not affected by what labels we give to the basis elements of the vector space. Changing the labels on a basis is exactly and only what $T$ does. $T$ doesn't affect how $\rho(g)$ acts, it just affects where $\rho(g)$ acts.
H: numerically evaluate a continued fraction I am looking at a continued fraction of the form $$ F_n = \cfrac{1}{1+\cfrac{p_1}{1+\cfrac{p_2}{1+\cfrac{p_3}{1+\ldots}}}} $$ where $p_n$ is a function I know. For simplicity I just take it to $p_n=n$ for now. I wish to evaluate this fraction numerically for a given $n$, but I am having conceptual difficulties on how to do it. I need a starting point, so I first evaluate (I take $n=4$ for the sake of the question): $$ x_0 = \frac{p_3}{1+p_4} $$ This is where I am stuck. How to go "up the ladder" recursively? I'd be happy to get a hint or two. AI: The continued fraction can be evaluated "top down," which is useful since that makes computations extensible. Let $F_n=\dfrac{a_n}{b_n}$. The numbers $a_k,b_k$ satisfy the following system of recurrences. $$a_{-1}=0,\qquad a_0=1, \qquad a_k=a_{k-1}+p_ka_{k-2},$$ $$b_{-1}=1,\qquad b_0=1, \qquad b_k=b_{k-1}+p_kb_{k-2},$$ for $k\ge 1$.
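Here is a minimal Python sketch of this top-down evaluation, using exact rational arithmetic and $p_n = n$ as in the question (the truncation depths are arbitrary choices):

```python
from fractions import Fraction

def convergent(p, N):
    """Return F_N = a_N / b_N for 1/(1 + p(1)/(1 + p(2)/(1 + ...)))."""
    a_prev, a = 0, 1   # a_{-1}, a_0
    b_prev, b = 1, 1   # b_{-1}, b_0
    for k in range(1, N + 1):
        a_prev, a = a, a + p(k) * a_prev
        b_prev, b = b, b + p(k) * b_prev
    return Fraction(a, b)

for N in (5, 10, 20, 30):
    print(N, float(convergent(lambda k: k, N)))
```

Because only the last two numerator/denominator pairs are kept, extending the fraction by one more level costs a single recurrence step, which is exactly the extensibility mentioned above.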
H: Fourier series identity I need to prove that $\dfrac{a \sin(bx)}{1 - 2a \cos(bx) + a^2} = \sum_{n=1}^\infty a^n \sin(nbx)$ where $|a| < 1$. It seems that this can be proved by using Euler's formula identities for $\cos(bx)$ and $\sin(bx)$ and substituting $z = e^{ibx}$. From that, I get $$ \dfrac{a \sin(bx)}{1 - 2a \cos(bx) + a^2} = \dfrac{a (z - 1/z)}{2i(1 - a(z + 1/z) + a^2)}$$ $$= \dfrac{a (z^2 -1)}{2i(z - a(z^2 +1) + za^2)}$$ $$= \dfrac{a (z^2 -1)}{-2ai(z^2 - z(1+a^2)/a + 1)}$$ I don't understand what I am supposed to do next. AI: $$\displaystyle\sum_{n=0}^{\infty} a^{n} \sin(nbx) = \text{Im} \sum_{n=0}^{\infty} (ae^{ibx})^{n} = \text{Im} \ \frac{1}{1-ae^{ibx}}$$ Multiplying numerator and denominator by $1-ae^{-ibx}$ gives $$\text{Im} \ \frac{1-ae^{-ibx}}{1-2a\cos(bx)+a^2} = \frac{a\sin(bx)}{1-2a\cos(bx)+a^2},$$ which is the claimed identity (the $n=0$ term of the series vanishes, so the sum may equally start at $n=1$).
H: What happens if there are several $a$ such that the equation has a finite number of solutions? The motivation for this question can be found in The equation $f(s)=a$ has a finite number of solution My question is: What happens (regarding the equation $f(z)=P(z)e^{g(z)}+a$) if there are several $a$ such that the equation has a finite number of solutions? (a) a finite number of $a$'s, (b) an infinite number of $a$'s. Is there a contradiction? AI: If $f$ is not a polynomial, then $z=\infty$ is an essential singularity for $f.$ The Big Picard theorem implies that $f$ takes each value infinitely often with at most one exception.
H: Introductory books on complex analysis? I'm a senior in my undergrad. years of college, and I haven't taken Complex Analysis yet. I have taken Real Analysis I (covered properties of $\mathbb{R}$, set theory, limits of sequences and functions, series, (uniform) continuity, uniform convergence) and Abstract Algebra I (covered $\mathbb{Z}_{n}$, an intro to group theory (groups, subgroups, quotient groups, isomorphism theorems, semidirect products), and an intro to ring theory (fields, ideals)). The book that we use at the university I attend isn't very analytical, from my understanding. (The book is Fundamentals of Complex Analysis with Applications to Engineering, Science, and Mathematics, 3rd ed. by Saff.) Of the courses I've taken in my undergrad, Real Analysis I has definitely been my favorite course so far, and I will be taking Real Analysis II (covers integration and differentiation in $\mathbb{R}^n$, Riemman-Stieltjes, and some other topics that I don't know about) this upcoming fall. Are there any books on complex analysis that you would suggest given my background? Thank you! Edit: Other courses I have taken: I have taken Calculus I through III (nothing on Differential Equations - although I do know what a first-order linear differential equation is), actuarial science courses (Probability (Calculus-based), Statistics (Calculus-based), Life Contingencies), and Linear Algebra (one semester using Larson's Elementary Linear Algebra and a second semester independent study using Axler). AI: I'd recommend you look at Gamelin's book Complex Analysis. It is suitable for a good undergraduate who's already had some real analysis.
H: Proving Binomial Random Variable Identity Good Morning All, May I ask for a clue to the following problem? I got stuck and I am now wondering if I understood the problem correctly. Let X, Y be independent random variables bin(m, p), bin(n, p) respectively. Show that X+Y is a binomial random variable with parameter (m+n, p) So $\mathbb P (X = i) = {m \choose i}{p^i}{q^{m-i}}$, $\mathbb P (Y = j) = {n \choose j}{p^j}{q^{n-j}}$ Let $m \le n$, $\mathbb P (X+Y = t) = $ $ \mathbb P (X=0, Y=t) + \mathbb P (X=1, Y=t-1) ... + \mathbb P (X=m, Y=t-m) $ $ = \sum_{i=0}^m \mathbb P (X=i) \mathbb P (Y=t-i)$ $ = \sum_{i=0}^m {m \choose i}{n \choose t-i} p^{i+t-i}q^{m+n-i-t+i}$ Now, I am stuck with the combinations. I can't get them to look like $m+n \choose t$. AI: You should in your third line only go to $X=t$, $Y=0$. So you want to show that $$\sum_{i=0}^t \binom{m}{i}\binom{n}{t-i}=\binom{m+n}{t}.$$ Here is combinatorial proof of the above identity. We have a group of $m$ boys and $n$ girls, and want to pick $t$ people. By definition, this can be done in $\dbinom{m+n}{t}$ ways. Let us count this another way. We can pick $0$ boys and $t$ girls. This can be done in $\binom{m}{0}\binom{n}{t}$ ways. Or else we can pick $1$ boy and $t-1$ girls. This can be done in $\binom{m}{1}\binom{n}{t-1}$ ways. Or else we can pick $2$ boys and $t-2$ girls. This can be done in $\binom{m}{2}\binom{n}{t-2}$ ways. Continue, and add up. We get the desired identity. Note: If $m$ is "small" and $t$ and $n$ are largish, some of the $i$ in the sum may be greater than $m$. The above expression is still correct, if we use the convention that $\binom{a}{b}=0$ if $a$ and $b$ are non-negative integers such that $a\lt b$.
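The identity (Vandermonde's convolution) has a quick brute-force check. A Python sketch; note that math.comb already returns $0$ when the lower index exceeds the upper, matching the convention in the note above:

```python
from math import comb

def lhs(m, n, t):
    return sum(comb(m, i) * comb(n, t - i) for i in range(t + 1))

# spot-check sum_i C(m,i) C(n,t-i) = C(m+n,t)
for m in range(6):
    for n in range(6):
        for t in range(m + n + 1):
            assert lhs(m, n, t) == comb(m + n, t)
print("identity verified for all m, n < 6")
```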
H: Matrix inverse of $\left(A-I\right)$ given $A^{-1}$ I am wondering if the inverse of $$B = A-I$$ can be written in terms of $A^{-1}$ and/or $A$. I am able to accurately compute $A$ and $A^{-1}$, which are very large matrices. Is it possible to calculate $B^{-1}$ without directly computing any inverses? For example, if $A = 2I$, then $B^{-1}=\frac{1}{2}A$. AI: It is not a complete answer, but if $\sum\limits_{k \geq 0} A^k$ converges then $$-(A-I) \sum\limits_{k \geq 0} A^k= \operatorname{Id}$$hence $$(A-I)^{-1}=- \sum\limits_{k \geq 0} A^k$$
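A small numerical illustration of this (a sketch assuming numpy; the example matrix is chosen with spectral radius below $1$ so the series converges, and $200$ terms is an arbitrary truncation):

```python
import numpy as np

# A with spectral radius < 1, so sum_k A^k converges
A = np.array([[0.2, 0.1],
              [0.0, 0.3]])
I = np.eye(2)

S = np.zeros_like(A)   # truncated Neumann series sum_k A^k
P = I.copy()
for _ in range(200):
    S += P
    P = P @ A

print(-S)                     # -sum_k A^k ...
print(np.linalg.inv(A - I))   # ... agrees with (A - I)^{-1}
```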
H: Proof for Sum of Sigma Function How to prove: $$\sum_{k=1}^n\sigma(k) = n^2 - \sum_{k=1}^n (n \bmod k)$$ where $\sigma(k)$ is the sum of divisors of $k$. AI: Using the identity \begin{align} n \bmod k = n - k \lfloor \tfrac{n}{k} \rfloor, \end{align} one has \begin{align} \sum_{k = 1}^{n} (n \bmod k) =\sum_{k = 1}^{n} \left( n - k \lfloor \tfrac{n}{k} \rfloor \right) = n^{2} - \sum_{k = 1}^{n} k \lfloor \tfrac{n}{k} \rfloor. \end{align} Finally, since \begin{align} \sum_{k = 1}^{n} k \lfloor \tfrac{n}{k} \rfloor = \sum_{k = 1}^{n} \sigma(k), \end{align} the claim follows.
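A brute-force check of the identity for small $n$ (a Python sketch; $\sigma$ is computed by trial division):

```python
def sigma(k):
    # sum of the divisors of k
    return sum(d for d in range(1, k + 1) if k % d == 0)

for n in (1, 5, 10, 50):
    left = sum(sigma(k) for k in range(1, n + 1))
    right = n * n - sum(n % k for k in range(1, n + 1))
    print(n, left, right, left == right)
```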
H: Limit of the root $\sqrt[k]{k}$ How can I calculate this limit? $$L=\lim_{k\to\infty}\sqrt[k]{k}$$ I suppose its value is one, but how can I prove it? Thanks AI: HINT: Note that $$\ln L=\lim_{k\to \infty}\frac{\ln k}{k}$$ Now use L'Hospital's rule.
H: Integer pair that satisfies 42x+55y How do I find the integer pair $(x,y)$ where $|100|\leq x,\;y\leq |200|$ that satisfies $42x+55y=1$? AI: Hint: Use the Euclidean algorithm to find the GCD of $42$ and $55$ then work it backwards to find the linear combination of $42$ and $55$.
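Carrying the hint out in code, here is a minimal sketch of the extended Euclidean algorithm:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(42, 55)
print(g, x, y)                 # 1 -17 13
assert 42 * x + 55 * y == 1
```

From the particular solution $(x,y)=(-17,13)$, the general solution is $(-17+55t,\;13-42t)$; for instance $t=3$ gives $(148,-113)$, which fits the stated bounds if they are read as $100 \le |x|,|y| \le 200$ (the bounds as written in the question are ambiguous).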
H: Number of rules in my fuzzy logic I have 6 variables with 4 membership functions such as "tiny,small,large,huge". I tried to write the rules and came up with 200 rules but the combinations are killing me and it is still incomplete. Can anyone tell me what is the exact number of rules that would cover every combination possible ? For example one rule would be - IF var1 IS huge AND var2 IS large AND var3 IS tiny AND var4 IS small AND var5 IS huge AND var6 IS huge THEN output IS small AI: If there are $6$ variables, and each can be assigned one of $4$ adjectives, then the number of possible rules is $4 \times 4 \times 4 \times 4 \times 4 \times 4 = 4^6 = 4096$. (Assuming each rule is in the form of IF a IS (adj) AND b IS (adj) ... THEN)
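The count is just the size of a Cartesian product, which a few lines of Python confirm (a sketch; only the antecedents are enumerated, since they alone determine the number of rules):

```python
from itertools import product

terms = ["tiny", "small", "large", "huge"]
rules = list(product(terms, repeat=6))  # one tuple per possible antecedent
print(len(rules))  # 4**6 == 4096
```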
H: Finding the smallest integer pair such that 123x+321y=1? How do I find the smallest integer pair $(x,y)$ such that $123x+321y=1$? AI: HINT: Using the Linear congruence theorem (Proof): as the greatest common divisor $\textrm{gcd}(123,321)=3$ does not divide $1$, there is no solution.
H: Finding the remainder of $11^{2013}$ divided by $61$ How am I suppose to find the remainder when $11^{2013}$ is divided by $61$? AI: HINT: $11^2=121\equiv-1\pmod{61}$ So, $11^{2013}=11\cdot (11^2)^{1006}\equiv11\cdot(-1)^{1006}\pmod{61}$ $\implies 11^{2013}\equiv11\pmod{61}$
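Python's three-argument pow does modular exponentiation, so both the hint and the conclusion are one-line checks:

```python
print(pow(11, 2, 61))     # 60, i.e. 11^2 is congruent to -1 (mod 61)
print(pow(11, 2013, 61))  # 11, matching 11^2013 congruent to 11 (mod 61)
```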
H: How to evaluate $\lim_{n \to \infty}{\frac{a_1 \cdot a_2\cdot a_3 \ldots a_{n}}{a_{n+1}}}$, given $a_{n+1} = a_{n}^{2} -1,a_1 = \sqrt{5}$? Let $a_{n+1} = a_{n}^{2} -1,a_1 = \sqrt{5}.$ How would one evaluate $$\lim_{n \to \infty}{\frac{a_1 \cdot a_2\cdot a_3 \ldots a_{n}}{a_{n+1}}}?$$ Added: Someone else asked me this question today, but unfortunately I don't know how to start. AI: Offset by one, this is a known sequence. Without the offset, we have $$a_{n}=\left\lceil c^{2^n}\right\rceil$$ (for $n\ge 2$) where $c$ is some constant between 1 and 2. The fraction in the limit is approximated by $$\frac{\sqrt{5}c^{2^1}c^{2^2}\cdots c^{2^n}}{c^{2^{n+1}}}=\frac{\sqrt{5}c^{2^{n+1}-2}}{c^{2^{n+1}}}=\frac{\sqrt{5}}{c^2}$$
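Since $a_{n+1}=a_n^2-1$, the ratio $r_n=\frac{a_1\cdots a_n}{a_{n+1}}$ satisfies $r_n=r_{n-1}\cdot\frac{a_n^2}{a_n^2-1}$, so its behaviour can be watched numerically without ever forming the huge product. A minimal Python sketch in floating point (eight steps suffice, because the update factors approach $1$ doubly exponentially):

```python
import math

a = 4.0                  # a_2 = a_1^2 - 1 with a_1 = sqrt(5)
r = math.sqrt(5) / a     # r_1 = a_1 / a_2
for n in range(2, 10):
    r *= a * a / (a * a - 1.0)  # r_n = r_{n-1} * a_n^2 / a_{n+1}
    a = a * a - 1.0             # a_{n+1} = a_n^2 - 1
    print(n, r)
```

The printed values settle near $0.59896$, which gives a concrete target against which any proposed closed form can be tested.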
H: How to imagine a plane defined by Cartesian Plane Equation? It isn't difficult for me to imagine a plane based on three points. Also it is quite simple to imagine a plane based on a point and a normal vector. Are there some tricks to imagine a plane defined by the plane equation $Ax+By+Cz+D=0?$ AI: The vector $\mathbf{a}=(A,B,C)$ is the normal vector of the plane, and the equation says that the dot product of the normal vector $\mathbf{a}$ and all vectors from the origin to a point on the plane is a constant given by $-D$. Let $\mathbf{x}=(x,y,z)$; then we have: $$\mathbf{a}\cdot\mathbf{x}=-D$$ If the normal vector $\mathbf{a}$ is normalized to length 1, then $|D|$ is the distance of the plane from the origin.
H: Dividing $2012!$ by $2013^n$ What's the largest power $n$ such that $2012!$ is divisible by $2013^n$? It doesn't look like its divisible at all since $2012<2013$; am I right? AI: $2013=3 \times 11 \times 61$. Thirty-two naturals $ \leq 2012$ are divisible by $61$ and none are divisible by $61^2$. At least thirty-two naturals are divisible by $11$ and by $3$, and so we have that $2013^{32}$ divides $2012!$ but no larger power of $2013$ does.
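Legendre's formula gives the exponent of each prime factor of $2013$ in $2012!$; a short Python check:

```python
def prime_exponent_in_factorial(p, n):
    # Legendre's formula: sum over k of floor(n / p^k)
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

for p in (3, 11, 61):
    print(p, prime_exponent_in_factorial(p, 2012))
# prints 1001, 199 and 32; the minimum, 32 (attained at p = 61),
# is the largest power of 2013 dividing 2012!
```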
H: Inequality with Gamma function: how to prove it? Let $\alpha \in (0,1)$ and $\Gamma(\alpha) = \int_0^{\infty}s^{\alpha - 1}e^{-s}ds$. I would like to prove that $$\int_0^{\infty}\frac{s^{-\alpha}}{1 + s}ds \le \Gamma(1 - \alpha)\Gamma(\alpha).$$ Basically I know the following two facts, but I don't know if they are needed: $$1) \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha + \beta)} = \int_0^1s^{\alpha - 1}(1 - s)^{\beta - 1}ds$$ $$2)\ \Gamma(1 - \alpha)\Gamma(\alpha) = \frac{\pi}{\sin(\pi\alpha)}$$ Thanks for the help! AI: $$\begin{align} \int_{0}^{\infty} \frac{x^{-\alpha}}{1+x} \ dx &= \int_{0}^{\infty} \int_{0}^{\infty} x^{-\alpha} e^{-(1+x)t} \ dt \ dx \\ &= \int_{0}^{\infty} e^{-t} \int_{0}^{\infty} x^{-\alpha} e^{-tx} \ dx \ dt \\ &= \int_{0}^{\infty} e^{-t} \int_{0}^{\infty} \left( \frac{u}{t} \right)^{-\alpha} e^{-u} \ \frac{du}{t} \ dt \\ &=\int_{0}^{\infty} t^{\alpha-1} e^{-t} \ dt \int_{0}^{\infty} u^{(1-\alpha)-1}e^{-u} \ du \\ &= \Gamma(\alpha) \Gamma(1-\alpha) \end{align}$$
H: Lowerbound of maximal cardinality of the set of pairwise disjoint non-null subset of the real line In a 3-page note on Bernstein sets, suppose $\operatorname{cf}(\mathfrak{c})= \mathfrak{c}$ and $\operatorname{non}({\bf{L}})= \min\{|X|: X \subset \Bbb{R}, X \text{ is not a Lebesgue measure zero subset of } \Bbb{R}\} = \mathfrak{c}$. Let $\operatorname{PD}(B)$ denote the family of all pairwise disjoint subsets of $B$, and let $\bf L$ denote the $\sigma$-ideal of Lebesgue measure zero subsets of the real line. Then: $$\operatorname{sat}({\cal{P}}(\Bbb{R}) / {\bf{L}})= \min\{ \kappa: \forall {X} \in \operatorname{PD}({\cal{P}}(\Bbb{R}) / {\bf{L}})(|X|< \kappa)\}$$ I can't understand Theorem $2$, which states that, given the above conditions, $\operatorname{sat}({\cal{P}}(\Bbb{R}) / {\bf{L}}) > \mathfrak{c}^{+}$. By Lemma $1$ in the note, we can find a continuum-sized family $\mathcal A$ of sets of cardinality continuum which forms a partition of $\Bbb R$, such that for any perfect set $B$ there exists $A \in \cal A$ such that $A \subseteq B$; then we can let "$\cal S$ be a family of selectors of $\cal A$, maximal with respect to inclusion, such that $$X, Y \in {\cal{ S}} \land X \neq Y \to |X \cap Y | < \mathfrak{c}"$$ In particular, I don't understand why "it's easy to check that $|\mathcal{S}|= \mathfrak{c}^{+}$". Moreover, I don't know what $[S]_{\bf L}$ means. AI: Maybe part of the confusion is that you used $\mathcal{P}(\mathbb{R}) \setminus \bf{L}$ instead of $\mathcal{P}(\mathbb{R}) / \bf{L}$, which is the set of equivalence classes of subsets of $\mathbb{R}$ where two sets are equivalent if their symmetric difference is in $\bf{L}$; the class of a set $S$ is then denoted by $[S]_{\bf{L}}$. This is a special case of a quotient of a Boolean algebra modulo an ideal; the result is again a Boolean algebra (where the class of $\emptyset$, i.e. all sets of $\bf{L}$, which are all equivalent, is the $0$-element), so $\operatorname{sat}$ is defined for it. Corollary 2 in the note follows because the equivalence classes of the $\mathfrak{c}$ many pairwise disjoint Bernstein sets that Corollary 1 gives us are all non-equivalent and non-zero classes in $\mathcal{P}(\mathbb{R})/\bf{L}$ that are still disjoint, and so $\operatorname{sat}( \mathcal{P}(\mathbb{R})/\bf{L} )$ cannot be $\mathfrak{c}$ (or less) any more. Hence it is at least $\mathfrak{c}^{+}$. As to the proof of Theorem 2: Note that $\operatorname{non}(\bf{L}) = \mathfrak{c}$ is just a reformulation of "every subset $A$ of $\mathbb{R}$ such that $|A| < \mathfrak{c}$ has Lebesgue measure $0$ (i.e. is in $\bf{L}$)". So if we have a family $\mathcal{S}$ of selectors for $\mathcal{A}$ with the property that any 2 different selectors $X$ and $Y$ have $| X \cap Y | < \mathfrak{c}$, and we take the classes of these sets modulo $\bf{L}$, then their pairwise intersections are in $\bf{L}$ (all sets smaller than continuum have measure $0$), so the classes indeed form a disjoint family in the quotient Boolean algebra. If we also know that there are at least $\mathfrak{c}^{+}$ many, then the $\operatorname{sat}$ of this algebra is strictly larger than $\mathfrak{c}^{+}$, as claimed (see the definition of $\operatorname{sat}$). It is clear by a Zorn's lemma argument that there indeed is a maximal (by inclusion) family of selectors for $\mathcal{A}$ with the intersection property. I think we only need that $\mathcal{A}$ is a partition of $\mathbb{R}$ of size $\mathfrak{c}$, all of whose members have size $\mathfrak{c}$.
In that case $\operatorname{cf}(\mathfrak{c}) = \mathfrak{c}$ should imply that a family of "almost disjoint" selectors (in the sense that pairwise intersections have size strictly less than $\mathfrak{c}$) that has size $\le \mathfrak{c}$ can be extended by a new set, and so cannot be maximal. Hence the existing maximal family must have size larger than $\mathfrak{c}$, as claimed. To see this: let $S_{\xi}, \xi < \mathfrak{c}$ be a family of selectors with pairwise intersections of size smaller than the continuum. Define $T_\xi = S_\xi \setminus \cup_{\eta < \xi} S_\eta$. Then by regularity of $\mathfrak{c}$, the $T_\xi$ are pairwise disjoint, and all of size $\mathfrak{c}$. Picking one element from each of them gives us a new selector that has intersection of size $< \mathfrak{c}$ with all the $S_\xi$.
H: Tail event, convergence of series How to prove that $\{ \omega : \sum_{k=1}^{\infty} X_k(\omega) \text{ is finite} \} \in \sigma(\mathcal{F}_1,\mathcal{F}_2, \ldots )$ where $\mathcal{F}_i=\sigma(X_i) = \{ X_i^{-1} (B) : B \in \mathcal{B} (\mathbb{R^n})\}$ Please write as many details as possible; I would like to understand clearly. AI: We use the fact that $\ell_1$ is separable: let $\{(x_{k,n})_{k=1}^\infty \}_{n=1}^\infty$ be a dense subset of summable sequences in $\ell_1$. Denote by $A$ the event $\{ \omega : \sum_{k=1}^{\infty} X_k(\omega) \text{ is finite} \}$. For $\omega\in A$, the sequence $(X_k(\omega))$ is in $\ell_1$, hence there are elements $(y_k)$ and $(z_k)$ of $\ell_1$ such that $y_k \le X_k(\omega)\le z_k$, and thus we may find such elements in our dense set. This observation allows us to write the event $A$ in the following way: $$ A = \bigcup_{m,n\ge1} \bigcap_{k\ge1} \{\omega: x_{k,m} \le X_k(\omega) \le x_{k,n}\} $$ Since each $ \{\omega: x_{k,m} \le X_k(\omega) \le x_{k,n}\} \in \sigma(X_k)$, it follows that $A\in \sigma(X_1, X_2,\ldots)$.
H: Finding time needed for a task, given two persons' distinct task/time rates John can dig constantly at $15$ inches per minute and Linda can dig constantly at $45$ inches per minute. A certain hole can be dug by John in $12$ hours. If the hole is dug by John for half the time and by both together for the rest of the time, how many minutes does it take to dig the hole? AI: We need to use the data about John to determine the depth of the hole: $$\underbrace{\dfrac{15 \;\text{inches}}{\text{minute}}}_{\text{rate of John's digging}} \times \underbrace{\frac{12 \text{ hours}}{1}}_{\text{time needed for John to dig hole}} \times \dfrac{60 \;\text{ min}}{\text{hour}} = \underbrace{15 \times 12 \times 60 \text{ inches}}_{\text{depth of hole}}$$ Let $x$ be the total number of minutes needed to dig the hole under the given restrictions. Then $$\left(\frac 12x\times 15\right) + \left(\dfrac 12 x(15 + 45)\right) = 15 \times 12\times 60$$ Solve for $x$. $$7.5 x + 30 x = 10800 \iff 37.5 x = 10800 $$ $$\iff x = \dfrac{10800}{37.5} = 288\;\text{ minutes total time for John and Linda to dig the hole}$$
H: Does $1 + \frac{1}{x} + \frac{1}{x^2}$ have a global minimum, where $x \in \mathbb{R}$? Does the function $$f(x) = 1 + \frac{1}{x} + \frac{1}{x^2},$$ where $x \in {\mathbb{R} \setminus \{0\}}$, have a global minimum? I tried asking WolframAlpha, but it appears to give an inconsistent result. AI: $g(x)=1+\frac{1}{x}+\frac{1}{x^2},x\ne 0$ $g'(x)=-\frac{1}{x^2}-2\frac{1}{x^3}=0\Rightarrow \frac{-2}{x^3}=\frac{1}{x^2}\Rightarrow x=-2$ $g''(x)=\frac{2}{x^3}+\frac{6}{x^4}|_{x=-2}> 0$ $\Rightarrow g(-2)$ is the minimum. Edit: This shows that $g$ has a local minimum at $x=-2$, but it is also the global minimum, because $g(x)-g(-2)=1+\frac{1}{x}+\frac{1}{x^2}-1-(\frac{-1}{2})-\frac{1}{2^2}=\frac{x+2}{2x}+\frac{(2-x)(2+x)}{4x^2}=(x+2)\frac{2x+2-x}{4x^2}=\frac{(x+2)^2}{4x^2}\ge0$, so $g(x)\ge g(-2)$ and $g(-2)$ is a global minimum.
H: Two hyperreal numbers infinitely close to each other; $100$ and $100+\epsilon$ $100$ is a real number, or we could call it a hyperreal number, as every element of $\mathbb R$ is also an element of $\mathbb R^*$. If we add an infinitesimal, say $\epsilon$, to $100$, the new number will be $100 + \epsilon$. We cannot give a numeric value to the number $100 + \epsilon$, because $\epsilon$ is not a real number and we can't ever suggest even an approximate value for $\epsilon$. The number $100 + \epsilon$ only tells how close $100 + \epsilon$ is to $100$, but it is not equal to $100$. Here, by 'numeric value' I mean that if $\epsilon$ would have some value like $\pi$ has a value, that is a non-terminating decimal. We cannot say that $100 + \epsilon$ is an infinitesimal, because in order for it to be an infinitesimal it must fulfill the condition $-a<100+\epsilon<a$ for all positive real numbers $a$. Can we call $100 + \epsilon$ a hyperreal number? I just want to confirm the name for such numbers, share my understanding of infinitesimals, and get some suggestions on whether I understand them correctly. AI: Things get a little fuzzy when you say "numeric value". But I do think I understand what you are saying: we cannot "pin down" $\epsilon$ to a particular numeric real value, given the standard definition of the reals. If $\epsilon$ is an infinitesimal, then yes, we call $100 + \epsilon$ a (non-real but) hyper-real number.
H: Fun combinatorics: How many numbers with some restrictions I came across this fun problem today: How many 8-digit numbers are there where: each digit appears only once the digits 1-4 appear sequentially (though not necessarily consecutively) 5 does not appear after 4 I basically counted all possibilities, but what would be a nice way to express the solution? Thank you. AI: Following the suggestion in the hint, we compute the number of ways $1-4$ appear in order, and subtract the number of ways $1-5$ appear in order. We split into two cases. Case 1: The first digit is $1$. Then, choose the slots for $2,3$, and $4$, and arrange other numbers to fill in. Here we find: $$\binom{7}{3}\cdot6\cdot5\cdot4\cdot3=12600$$ ways to do this. To find how many of these arrangements have $5$ after $4$, choose the $4$ slots for $2,3,4$, and $5$, and arrange other numbers to fill in. There are $$\binom{7}{4}\cdot5\cdot4\cdot3=2100$$ ways to do this, so in case 1, we find $10500$ ways total. Case 2: The first digit is not $1$. Then, choose the four slots for the numbers $1,2,3,$ and $4$, and fill in the remaining numbers, remembering $0$ can't be the first digit. There are: $$\binom{7}{4}\cdot5\cdot5\cdot4\cdot3=10500$$ ways to do this. To find how many of these arrangements have $5$ after $4$, choose the $5$ slots for $1,2,3,4$, and $5$, and fill in the remaining slots without $0$ in the first slot: $$\binom{7}{5}\cdot4\cdot4\cdot3=1008$$ So case 2 yields $9492$ ways. In all, there are $19,992$ such numbers.
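With only $10\cdot 9\cdots 3 = 1{,}814{,}400$ ordered choices of eight distinct digits, the count can also be brute-forced. A Python sketch (assuming the reading used above: $1,2,3,4$ must all appear in that relative order, and $5$, if it appears, must come before $4$):

```python
from itertools import permutations

count = 0
for p in permutations(range(10), 8):    # distinct digits automatically
    if p[0] == 0:
        continue                        # must be an 8-digit number
    pos = {d: i for i, d in enumerate(p)}
    if any(d not in pos for d in (1, 2, 3, 4)):
        continue                        # 1, 2, 3, 4 must all appear ...
    if not (pos[1] < pos[2] < pos[3] < pos[4]):
        continue                        # ... and in order
    if 5 in pos and pos[5] > pos[4]:
        continue                        # 5 must not come after 4
    count += 1
print(count)  # should reproduce 19992
```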
H: Why does $n = \sum_{k=0}^\infty \frac{1}{2^k\;e^{k-2}} \implies n = \frac{2\;e^3}{2\;e-1}$? Consider $$n = \sum_{k=0}^\infty \frac{1}{2^k\;e^{k-2}}$$ Why does $$n = \frac{2\;e^3}{2\;e-1}\;\;?$$ AI: HINT: $$n = \sum_{k=0}^\infty \frac{1}{2^k\;e^{k-2}}=e^2\sum_{k=0}^\infty\frac1{(2e)^k}$$ Observe that $\sum_{k=0}^\infty\frac1{(2e)^k}$ is an Infinite Geometric Series with the first term $\frac1{(2e)^0}=1$ and common ratio $\frac1{2e}$ which lies $\in(0,1)$
H: Stirling numbers with $k=n-2$ Is there a more general method of calculating: $$ \genfrac\{\}{0pt}{}{n}{n-2} $$ Like for $$ \genfrac\{\}{0pt}{}{n}{n-1} $$ we can use $\binom{n}{2}$. AI: Dividing $n$ objects into $n-2$ nonempty sets can be done in two ways. Either you can have two sets of size 2, or one set of size 3. Hence $$\genfrac\{\}{0pt}{}{n}{n-2} = \frac{1}{2}{n\choose 2}{n-2\choose 2} + {n\choose 3} $$ The $\frac{1}{2}$ is necessary because the two doubletons are indistinguishable, so we would otherwise be overcounting. We similarly have $$\genfrac\{\}{0pt}{}{n}{n-3} = \frac{1}{6}{n\choose 2}{n-2\choose 2}{n-4\choose 2} + {n\choose 3}{n-3\choose 2} + {n\choose 4} $$
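Both closed forms can be checked against the standard recurrence $\genfrac\{\}{0pt}{}{n}{k}=k\genfrac\{\}{0pt}{}{n-1}{k}+\genfrac\{\}{0pt}{}{n-1}{k-1}$; a Python sketch:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def stirling2(n, k):
    # Stirling numbers of the second kind via the usual recurrence
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

for n in range(4, 12):
    assert stirling2(n, n - 2) == comb(n, 2) * comb(n - 2, 2) // 2 + comb(n, 3)
    assert stirling2(n, n - 3) == (comb(n, 2) * comb(n - 2, 2) * comb(n - 4, 2) // 6
                                   + comb(n, 3) * comb(n - 3, 2) + comb(n, 4))
print("both formulas verified for n = 4..11")
```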
H: Evaluating an integral in physics question $$U_{C} = \frac{1}{C} \int\!\frac{\cos(100\pi t + \pi/4)}{10}\,dt$$ Find $U_{C}$; the answer is $U_{C}=\left(3.2\times 10^{-4}\right)/C\times \cos(100\pi t - \pi/4)$. Can someone show how to get this answer? AI: $$U_{C} = \frac{1}{C} \int\!\frac{\cos(100\pi t + \pi/4)}{10}\,dt$$ $$=\frac1C \frac{\sin(100\pi t+\frac\pi4)}{100\pi\cdot 10} +K$$ Now, as $\sin(\frac\pi2+x)=\cos x,$ $$\sin\left(100\pi t+\frac\pi4\right)=\sin\left(\frac\pi2+100\pi t-\frac\pi4\right)=\cos\left(100\pi t-\frac\pi4\right)$$ and use $$\frac{10}{\pi}\approx 3.18309886184\approx3.2$$
H: After how many hours does a quantity becomes less than 1% initial quantity? Life of substance reduces to half at the end of one hour i.e its quantity reduces to one half of what it was at the beginning of one hour . In how many hours , the quantity becomes less than $1$% initial quantity.. AI: After $1$ hour, we have $\frac{1}{2}$ left. After $2$ hours, we have one-half of $\frac{1}{2}$ left, so $\frac{1}{4}$. After $3$ hours, we have one-half of $\frac{1}{4}$ left, so $\frac{1}{8}$. After $4$ hours, we have one-half of $\frac{1}{8}$ left, so $\frac{1}{16}$. Continue. After $6$ hours, we have $\frac{1}{64}$ left, after $7$, we have $\frac{1}{128}$. Now we are below $1\%$. If the looked-for answer is an integer, that integer is $7$. But if the decay process takes place continuously, as it probably does, the answer will be a number between $6$ and $7$. To find that number, note that if at the beginning we have an amount $A$, then after $t$ hours we have an amount $A(t)$, where $$A(t)=\frac{A}{2^t}.$$ We want the time $t$ until $A$ decays to $\frac{A}{100}$. So we have $$\frac{A}{100}=\frac{A}{2^t}.$$ This simplifies to $2^t=100$. To solve this equation for $t$, we can use our calculator to hunt and peck our way. For example, my calculator says that $2^{6.5}\approx 90.51$, so $t\gt 6.5$. Soon you can zoom in on an excellent approximation. Or else we can take logarithms, to any base you like. We get $\log(2^t)=\log(100)$. So $$t\log 2=\log(100),$$ and therefore $$t=\frac{\log(100)}{\log 2}.$$ I get $t\approx 6.644$.
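The last step is a one-line computation:

```python
import math

t = math.log(100) / math.log(2)  # solve 2^t = 100
print(t)                         # approximately 6.6439
```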
H: Homomorphism between multiplicative group of integers modulo n Just looking for anybody to check the following: We have got a homomorphism $f: (\mathbb{Z}/42\mathbb{Z})^{*} \rightarrow (\mathbb{Z}/21\mathbb{Z})^{*}$, given by $f(a\text{ mod} 42)= a \text{ mod} 21$. a.) What is the kernel of $f$ ? b.) Is $f$ bijective? My answer would be: $$(\mathbb{Z}/42\mathbb{Z})^{*}= \{ \pm 1, \pm5, \pm 11, \pm13,\pm 17, \pm 19\}$$ $$(\mathbb{Z}/21\mathbb{Z})^{*} = \{1,2,4,5,8,10,11,13,16,17,19,20 \}$$ a.) $\ker(f) = \{x \in(\mathbb{Z}/42\mathbb{Z})^{*} ; x \equiv 1 \text{ mod} 21 \} \subset \{ ..., -20, 1, 21, 43, ...\}$ So I conclude $\ker f = \{1\}$ b.) I would say yes, because for all the positive values of $(\mathbb{Z}/42\mathbb{Z})^{*}$ you can just call them $\text{mod}21$. The same is true for the negative values. So I can connect all elements from $(\mathbb{Z}/42\mathbb{Z})^{*}$ uniquely to a specific element in $(\mathbb{Z}/21\mathbb{Z})^{*}$. For b (if it is correct), can I use other strategies? AI: You are correct for both (a) and (b). As an alternative for (a) use that $21\mid a-1 \iff 42\mid a-1$. This follows from the fact that $a\in(\mathbb{Z}/42\mathbb{Z})^{*}\Rightarrow a$ is odd. And for (b) note that it follows from (a) since $f$ is one to one (see this) and both sets $(\mathbb{Z}/42\mathbb{Z})^{*} \text{ and } (\mathbb{Z}/21\mathbb{Z})^{*}$ have 12 elements.
H: Looking for an integer for which the $(\mathbb{Z}/n\mathbb{Z})^*$ contains elements with certain orders I don't need a specific answer or whatever, but I'm looking for a strategy to solve this kind of problems. The specific question I have in mind is: Give an integer $n$ for which the multiplicative group $(\mathbb{Z}/n\mathbb{Z})^* $ contains elements that has order $4$, (some) elements of order $5$, but no elements of order $3$. How can I solve this? AI: There is probably a better way but my idea is to try $n = p$ prime. Then $(\mathbb{Z}/p\mathbb{Z})^{\times}$ has $p-1$ elements and is cyclic. In order for there to be an element of order $3$ you must have that $3| p-1$ (since if a prime divides the order of a group then there exists an element of that prime order). So for there NOT to be an element of order $3$ you would have to have that $p=3$ or $p\equiv 2 \bmod 3$. Similarly you must have $p\equiv 1 \bmod 5$ to have an element of order $5$ (eliminating the $p=3$ possibility) Since our group is cyclic then an element of order $4$ existing is equivalent to $p\equiv 1 \bmod 4$ (this wouldn't have worked for general $n$, the cyclic property is important). So our prime must satisfy $p\equiv 1 \bmod 4$, $p\equiv 2 \bmod 3$ and $p\equiv 1 \bmod 5$. The simultaneous solution is $p\equiv 41 \bmod 60$ so $p=41$ will work and so will any other prime in this class. This argument will ALWAYS give you a prime that works.
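Since $41$ is prime, the element orders in $(\mathbb{Z}/41\mathbb{Z})^{\times}$ can simply be listed (a Python sketch):

```python
def order(a, n):
    # multiplicative order of a modulo n (assumes gcd(a, n) == 1)
    k, x = 1, a % n
    while x != 1:
        x = x * a % n
        k += 1
    return k

orders = {order(a, 41) for a in range(1, 41)}
print(sorted(orders))  # the divisors of 40 -- no element of order 3
print(4 in orders, 5 in orders, 3 in orders)  # True True False
```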
H: Sub-dimensional subspaces a null set Let $m<n$ and $f: \mathbb{R}^m \rightarrow \mathbb{R}^n$ be continuously differentiable. Show $$\lambda^n(f(\mathbb{R}^m))=0$$ and conclude from this, that every linear subspace $E$ of $\mathbb{R}^n$ with $\dim E < n$ is a null set as well. The result seems rather obvious, yet I'm not able to prove it. About the second part: If $m:=\dim E<n$, then $E$ is clearly isomorphic to $\mathbb{R}^m$. But is there some canonical isomorphism that satisfies $f(\mathbb{R}^m)=E$ and is continuously differentiable? I tried to apply some isomorphism argument for the first part, too, but didn't find it very helpful because of the generality of $f$... AI: We can apply Sard's lemma: since $m<n$ and $f$ is $C^1$, we see that $\operatorname{rk} Df(x) < n$ for all $x$, so every point of $\mathbb{R}^m$ is a critical point and the whole image consists of critical values. Hence $f(\mathbb{R}^m)$ has measure zero. If $E \subset \mathbb{R}^n$ is a subspace with dimension $m<n$, then let $b_1,...,b_m$ be a basis and define $f: \mathbb{R}^m \to \mathbb{R}^n$ by $f(x) = \sum_k x_k b_k$. $f$ is smooth, hence from the above we have $f(\mathbb{R}^m) = E$ has measure zero.
H: Using either the Direct or Limit Comparison Tests, determine if $\sum_{n=2}^{\infty}\frac{1}{n\sqrt{n^2-1}}$ is convergent or divergent. Unless I've done some calculations wrong, both tests appear to be inconclusive. I have my doubts that this is the correct outcome. I've chosen my $\sum t_n$ to be $\sum_{n=2}^{\infty}\frac{1}{n}$ Using the Direct Comparison Test: \begin{align} \frac{1}{n\sqrt{n^2-1}}&\geq\frac{1}{n}\\ n\sqrt{n^2-1}&\leq n \end{align} Clearly this is not true for all $n\geq 2$. Now, using the Limit Comparison Test: \begin{align} \lim_{n\to\infty}\frac{a_n}{t_n}&=\lim_{n\to\infty}\frac{1}{n\sqrt{n^2-1}}\times\frac{n}{1}\\ &=\lim_{n\to\infty}\frac{1}{\sqrt{n^2-1}}\\ &=0 \end{align} Which is again inconclusive. So did I do something wrong here, or am I doing the comparison tests incorrectly? AI: Hint: try comparing your series to $$\sum_{n = 2}^\infty \dfrac 1{n^2}$$ $$a_n = \dfrac{1}{n\sqrt{n^2 - 1}} \sim \dfrac{1}{n\sqrt{n^2}} = \dfrac 1{n\cdot n} = \dfrac{1}{n^2}= t_n$$ The limit comparison test will work very nicely here. \begin{align} \lim_{n\to\infty}\frac{a_n}{t_n}&=\lim_{n\to\infty}\frac{1}{n\sqrt{n^2-1}}\times\frac{n^2}{1}\\ &=\lim_{n\to\infty}\frac{n}{\sqrt{n^2-1}}\\ &=1 \end{align}
H: Proving that tensor distributes over biproduct in an additive monoidal category I'm trying to prove that the tensor product distributes over biproducts in an additive monoidal category; namely that given objects $A,B,C$, we have $A \otimes (B \oplus C) \cong (A \otimes B) \oplus (A \otimes C)$. What I've attempted is: Now it's clear that the arrows from $A\otimes B$ to $A \otimes B$ collapse to $\mathrm{id}_{A\otimes B}$ and similarly for $A\otimes C$. I now want to show that $\varphi\circ\theta$ is $\mathrm{id}_{(A\otimes B)\oplus(A\otimes C)}$. I want to invoke the uniqueness of the (co)product again but this requires a map from $(A\otimes B)$ and $(A\otimes C)$ from the bottom of the diagram up to $(A\otimes B)\oplus(A\otimes C)$ at the top. I'd like to conclude that $p_{A\otimes B}\circ(\varphi\circ\mathrm{id}_A\otimes i_B) = \mathrm{id}_{A\otimes B}$ implies that $(\varphi\circ\mathrm{id}_A\otimes i_B)=i_{A\otimes B}$. This intuitively makes sense if considering a concrete example like modules or sets but seems unjustified here. (Attempting to complete the product diagram with the diagonal composite leads to the problem that it is not a zero morphism... it is $\mathrm{id}_A \otimes 0_{bc}$.) I found a similar proof in the setting of modules (http://unapologetic.wordpress.com/2007/04/17/more-on-tensor-products-and-direct-sums/) but unfortunately he glosses over the part where I am stuck. At least I seem to be considering the right morphisms though. I feel like I'm missing something obvious but I've been agonizing over this for many hours... does anyone know what to do here? AI: What's the definition of an additive monoidal category? Is it that tensor product distributes over addition of morphisms? If so, use the fact that a functor between additive categories preserves addition of morphisms iff it preserves biproducts (see for example this blog post).
H: Embedding of curves in projective spaces... typo? I'm reading from the book "Geometry of algebraic curves", by Griffiths, Harris, Arbarello and Cornalba. In the middle of page 5 they define the map $\phi_{\mathscr{D}}:C\to \mathbb{P}V^*$, from a curve $C$ to the projectified linear subspace $\mathbb{P}V$ of $H^0(C,L)$, by the prescription $\phi(p)=$"sections $s\in V$ which vanish at $p$". It doesn't make sense to me! It should be defined, instead, as $\phi(p)=$"sections $s\in V$ which don't vanish at $p$", so that the target is really $\mathbb{P}V^*$, since the zero section doesn't belong to the image of any point. Do you agree with me and this is a typo or am I losing something? AI: My guess would be that $p\in C$ defines a linear functional on $V$ by the rule $p(s) = s(p).$ We see that it is linear since we of course have $p(s_1+s_2)=(s_1+s_2)(p)=s_1(p)+s_2(p)$ and similarly for scalar multiplication. Thus, the set $\{s\in V:s(p)=0\}$ is a hyperplane in $V$ cut out by the linear functional $p.$ That is, we map $p$ to the point in $\mathbb P(V^*)$ corresponding to the hyperplane.
H: Why is $\{a + b\sqrt2 + c\sqrt3 : a\in\Bbb{Z}, b, c \in\Bbb{Q}\}$ not closed under multiplication? The set $R = \{a + b\sqrt{2} + c\sqrt{3}: a \in \Bbb{Z}, c, b \in \Bbb{Q}\}$ is not closed under multiplication, my textbook states. Why is this? And related to that: why then is $S = \{a + b\sqrt{2} : a, b \in \Bbb{Z}\}$ closed under multiplication? I figured the following, but I'm totally unsure about the correctness: A multiplication of two numbers of $S$ looks like this: $(\Bbb{Z} + \Bbb{Z}\sqrt{2}) \cdot (\Bbb{Z} + \Bbb{Z}\sqrt{2})$ is of the form $\Bbb{Z}^2 + 2\Bbb{Z}\Bbb{Z}\sqrt{2} + 2\Bbb{Z}$. Because any $\Bbb{Z}^2$ yields an integer, $2\Bbb{Z}$ yields an integer as well and $2\Bbb{Z}\Bbb{Z}\sqrt{2}$ yields a $\Bbb{Z}\sqrt{2}$, the result can again be expressed as $a + b\sqrt{2} : a, b \in \Bbb{Z}$ and therefore the set is closed under multiplication. However, a similar argument would also work for $R$, right? I would appreciate the answer as well as any feedback on the formal correctness of my arguments. AI: I think you want to show $\sqrt{2}\sqrt{3} = \sqrt{6}\notin R$.
H: Necessary conditions for not having roots Suppose $f(z)=\sum_0^\infty a_n z^n$ has a radius of convergence of $R$. What are necessary conditions, in terms of $\{a_n\}$, for $f(z)=0$ not to have any roots? Any combinations of real/complex restrictions on coefficients/roots can be assumed. EDIT: Does the Weierstrass product give any practical answers? That is: if $f$ has no roots, then there is $g(z)=\sum_{n=1}^\infty b_nz^n$ such that $\displaystyle {f(z)=f(0) e^{g(z)}}$. Now expand the right hand side and see if you can get the $b$'s out of the $a$'s. AI: In this case the function $$\log(f(z)):=\int_0^z \frac{f'(w)}{f(w)}\, dw $$ is well defined, and the radius of convergence of its Maclaurin series is at least $R$. This produces a necessary condition on $\{a_n\}$.
H: Number of occurrences needed to make an event probable (>95%) I'm trying to get a formula for the number of tries $x$ I need to make $P(A)$ occur with 95% probability $P(C)$. $$P(C) = \sum_{i=0}^x i \bigcup A $$ $$P(C) = \sum_{i=0}^x (P(i) + P(A) - P(i)*P(A))$$ I can decompose this summation as follows: $$P(C) = \sum_{i=0}^x (P(i) + P(A)) - \sum_{i=0}^x (P(i) * P(A))$$ These are equivalent to: $$P(C) = P(A)x - P(A)^x$$ Okay, I got it one step further: $$\frac{P(C)}{P(A)} = x - P(A)^{x-1} $$ Edit 2: I think I'm onto something. $$Log(\frac{P(C)}{P(A)} - x ) = Log(P(A)^{x-1})$$ $$Log(\frac{P(C)}{P(A)} - x ) = (x-1)Log(P(A))$$ Here is the part where I'm stuck. I'm not sure how to isolate the $x$. Any ideas? Thanks. AI: If you have an event $A$ with probability $P(A)$ and want to find the number of independent trials to have at least $C=0.95$ chance of one or more successes, it is easier to find the probability of all failures and subtract from $1$. The chance of no successes in $n$ trials is $(1-P(A))^n$, so the chance of at least one is $1-(1-P(A))^n$ So we have $$C=1-(1-P(A))^n\\(1-P(A))^n=1-C\\n=\frac {\log(1-C)}{\log(1-P(A))}$$ Your derivation doesn't work because $P(i)$ is not used correctly. It appears to be the chance that you get at least one success in $i$ tries. You can then write a recurrence $P(i)=P(i-1)+P(A)-P(i-1)P(A)$ but you don't want to be summing over $i$.
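A direct evaluation of the closed form from the answer (my addition; `p` is the per-trial success probability):

```python
from math import ceil, log

def trials_needed(p, C=0.95):
    # Smallest integer n with 1 - (1 - p)^n >= C
    return ceil(log(1 - C) / log(1 - p))

print(trials_needed(0.1))  # 29
print(trials_needed(0.5))  # 5
```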
H: Solving two greatest integer function equations Given that $$x\lfloor x\rfloor =39 \quad \text{and}\quad y\lfloor y \rfloor=68,$$ what is the value of $$\lfloor x\rfloor+\lfloor y \rfloor\,?$$ I don't know how to solve such problems. I would appreciate an insight regarding the general approach to such problems. AI: Notice that $\lfloor x\rfloor$ is not very different from $x$, so $x\lfloor x\rfloor $ is not much different from $ x^2$. If you want $x\lfloor x\rfloor = 39$, you need $x^2$ to be about $ 39$ also, which means $x$ is going to be around 6 or so, and $\lfloor x\rfloor$ will be exactly 6. Then $x=6\frac12$ does the trick. Do the same for $y$, and then add the results.
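A quick check of the hinted values (my addition; $y=8\frac12$ is the analogous guess for the second equation):

```python
from math import floor

x, y = 6.5, 8.5
print(x * floor(x), y * floor(y))  # 39.0 68.0
print(floor(x) + floor(y))         # 14
```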
H: Determine whether this integral converges: $\int_1^\infty\frac{(x+1)\arctan x}{(2x+5)\sqrt x}$ Determine whether the next integral converges: $$\int_1^\infty\frac{(x+1)\arctan x}{(2x+5)\sqrt x}$$ I had this one on a test and lost all my points on it. Since we were given no answers to the test I still have no idea how to solve it. Can you please give me the idea of how to solve that one? AI: Note that $$\lim_{x\to\infty} \arctan x=\frac{\pi}{2}$$ so $$\frac{(x+1)\arctan x}{(2x+5)\sqrt x}\sim_\infty\frac{\pi}{4\sqrt{x}}$$ and since the integral $$\int_1^\infty \frac{dx}{\sqrt{x}}$$ is divergent, the given integral is also divergent by limit comparison.
H: Find the expected value of a function $Y$ We have $EX = \int xf(x)dx$, where $f(x)$ is the density function of some random variable $X$. This is pretty understandable, but what if I had, for example, a function $Y = -X + 1$ and I had to calculate $EY$? How should I do this? AI: By the linearity of expectation, we have: $$ E(Y) = E(-X + 1) = -E(X) + 1 $$ Alternatively, you could use the definition directly. Let $Y=g(X)=-X+1$. Then: $$ \begin{align*} E(Y)&=E(g(X))\\ &=\int_{-\infty}^\infty g(x)f(x)dx\\ &=\int_{-\infty}^\infty(-x+1)f(x)dx\\ &=-\int_{-\infty}^\infty xf(x)dx + \int_{-\infty}^\infty f(x)dx\\ &=-E(X) + 1 \end{align*}$$
H: Convergence of $\sum_{n=1}^{\infty}\frac{1+\sin^{2}(n)}{3^n}$? Using either the Direct or Limit Comparison Tests, determine if $\sum_{n=1}^{\infty}\frac{1+\sin^{2}(n)}{3^n}$ is convergent or divergent. I seem to be completely stuck here. I've chosen my series to be $\sum\frac{1}{3^n}$, which is clearly a convergent geometric series. But when I do either of the two tests, I get inconclusive answers. \begin{align} \lim_{n\to\infty}\frac{1+\sin^2(n)}{3^n}\times\frac{3^n}{1}&=\lim_{n\to\infty}1+\sin^2(n)\\ &=\infty \end{align} and \begin{align} \frac{1+\sin^2(n)}{3^n}\leq\frac{1}{3^n}\Longrightarrow 1+\sin^2(n)\leq 1 \end{align} but this inequality is not true. But I just had a thought now... Am I allowed to choose the series $\sum\frac{2^n}{3^n}$ to help solve this? Clearly $1+\sin^2(n)\leq 2^n,\forall n\geq 1$, and $\sum\frac{2^n}{3^n}$ is also a convergent geometric series. AI: Better yet: $0\leq 1+(\sin (n))^2\leq 2$.
H: What does $\lim \limits_{n\rightarrow \infty }\sum \limits_{k=0}^{n} {n \choose k}^{-1}$ converge to (if it converges)? How we can show if the sum of $$\lim_{n\rightarrow \infty }\sum_{k=0}^{n} \frac{1}{{n \choose k}}$$ converges and then find the result of the sum if it converges? Thanks for any help. AI: $$ \begin{align} \sum_{k=0}^n\frac1{\binom{n}{k}} &=2+\frac2n+\sum_{k=2}^{n-2}\frac1{\binom{n}{k}} \end{align} $$ However $$ \begin{align} \sum_{k=2}^{n-2}\frac1{\binom{n}{k}} &\le\frac{n-3}{\binom{n}{2}}\\ &=\frac{2(n-3)}{n(n-1)}\\[9pt] &\to0 \end{align} $$ Thus, $$ \lim_{n\to\infty}\sum_{k=0}^n\frac1{\binom{n}{k}}=2 $$
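A numerical check of the limit (my addition):

```python
from math import comb

for n in (10, 50, 200):
    print(n, sum(1 / comb(n, k) for k in range(n + 1)))
# the partial sums approach 2 as n grows
```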
H: Finding the exponential form of $z=1+i\sqrt{3}$ and $z=1+\cos{a}+i\sin{a}$. Here is what I have been able to accomplish: For the first one I found that $|z|^2=z\bar{z}=4$, so $|z|=2$, and $\theta=\tan^{-1}{\sqrt{3}}=\frac{\pi}{3}$. So we get $2e^{\frac{\pi}{3}i}$. For the second one I have only been able to solve the following: $|z|=\sqrt{2+2\cos{a}}$. I'm stuck on how to treat the trig functions when converting to exponential form. AI: $$1+i\sqrt{3}=2\left(\dfrac{1}{2}+i\dfrac{\sqrt{3}}{2}\right)=2\left(\cos{\dfrac{\pi}{3}}+i\sin{\dfrac{\pi}{3}} \right)=2e^{{i\tfrac{\pi}{3}}}$$ The second expression is $$z=1+\cos{\alpha}+i\sin{\alpha}=2\cos^2{\dfrac{\alpha}{2}}+2i\sin{\dfrac{\alpha}{2}}\cos{\dfrac{\alpha}{2}} \\ =2\cos{\dfrac{\alpha}{2}}\left(\cos{\dfrac{\alpha}{2}}+i\sin{\dfrac{\alpha}{2}} \right)=2\cos{\dfrac{\alpha}{2}}\cdot e^{i{\tfrac{\alpha}{2}}}$$
H: Endomorphisms of a semisimple module Is there an easy way to see the following: Given a $k$-algebra $A$, with $k$ a field, and a finite dimensional semisimple $A$-module $M$. Look at the natural map $A \to \mathrm{End}_k(M)$ that sends an $a \in A$ to $$ M \to M: m \mapsto a \cdot m. $$ Then the image of $A$ is a finite-dimensional semisimple algebra. AI: Here's one way to look at it: Notice that the kernel of the map is exactly $ann(M)$, which necessarily contains the Jacobson radical $J(A)$ of $A$. Since $A/J(A)$ and all of its quotients are semiprimitive, it follows that $A/ann(M)$ is semiprimitive. Now view $M$ as a faithful $A/ann(M)$ module. Since the simple submodules of $M$ remain the same during this passage, $M$ is also still semisimple over this new ring. You can see in this question why a ring with a faithful module of finite length must be Artinian. Now we have that $A/ann(M)$ is Artinian and semiprimitive: so it is semisimple. I see I overlooked a simple way of concluding that the image is finite dimensional. Of course our image ring is a subalgebra of $End(M_k)$ which is finite dimensional... so the subring is finite dimensional as well. The argument I gave before essentially proves a more general case: "If $M$ is a semisimple $R$ module of finite length, the image of the natural map is a semisimple ring."
H: Limit. $\lim_{n \to \infty}\frac{1^p+2^p+\ldots+n^{p}}{n^{p+1}}$. Have you any idea how to find the limit of the following sum: $$\lim_{n \to \infty}\frac{1^p+2^p+\ldots+n^{p}}{n^{p+1}}.$$ Stolz-Cesaro? any more ideas? AI: The quickest way is using integral: $$ \lim \sum_{k=1}^n \frac{k^p}{n^p}\frac{1}{n}=\int_0^1 x^pdx=\frac{1}{p+1}. $$ Seeing this, you may find an elementary proof from $$ k^p\sim \frac{1}{p+1}\bigg[(k+1)^{p+1}-k^{p+1}\bigg]. $$
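A quick Riemann-sum check of the limit (my addition):

```python
# For large n, (1^p + 2^p + ... + n^p) / n^(p+1) should be close to 1/(p+1)
n, p = 10_000, 3
print(sum(k**p for k in range(1, n + 1)) / n**(p + 1))  # ≈ 0.25 = 1/(p+1)
```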
H: experiencing methodological consternation in correctly applying Newton's method In lecture, we were told that to find $\sqrt[3]{a}$, we use Newton's method as follows: $$ \begin{align} f(x) &= x^3 - a\\ f'(x) &= 3x^2\\ x_{n+1} &= x_n - \frac{f(x_n)}{f'(x_n)}\\ &= x_n - \frac{{x_n}^3-a}{3{x_n}^2}\\ &=\frac{2}{3}x_n + \frac{a}{3{x_n}^2} \end{align} $$ And for approximating $\frac{1}{a}$, $$ \begin{align} f(x) &= a - \frac{1}{x}\\ f'(x) &= \frac{1}{x^2}\\ x_{n+1} &= x_n - \frac{f(x_n)}{f'(x_n)}\\ &= x_n - \left(a - \frac{1}{x_n}\right){x_n}^2\\ &=2x_n - a{x_n}^2 \end{align} $$ However, I didn't follow why $f(x) =x^3 - a$ instead of $f(x)=\sqrt[3]{x}$ for the first case and why $f(x)=a - \frac{1}{x}$ instead of $f(x)= \frac{1}{x}$ in the second case. In each case, the teacher mentioned that the value we're trying to solve for can be realized as the root of $f(x).$ I'm not sure I know what that means in relation to finding a proper $f(x)$. From what I understand, we are trying to find successively closer $x_{n+1}$ values to the root (where the function is zero) by finding the $x$-intercept of the tangent line of our function at $x_n$ and then repeating that process. AI: A root for $f(x)$ is simply a value $x_0$ such that $f(x_0)=0$. For the functions you listed each root is the value you seek. For $f(x)=x^3-a$: if $f(x_0)=x_0^3-a=0$ then $x_0^3=a$, hence $x_0 = \sqrt[3]{a}$. For $f(x)=a-1/x$: if $f(x_0)=a-1/x_0=0$ then $1/x_0=a$, hence $x_0 = 1/a$. The reason not to use $f(x)=\sqrt[3]{x}$ or $f(x)=1/x$ instead is that their roots are not the values you seek ($\sqrt[3]{x}$ vanishes only at $x=0$, and $1/x$ has no root at all); the whole point is to choose an $f$ whose root is exactly the number you want, and whose Newton iteration uses only operations you can already compute.
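Both iterations are easy to try out; here is a minimal sketch (my function names and starting guesses, not from the lecture):

```python
def cbrt_newton(a, x=1.0, iters=30):
    # x_{n+1} = (2/3) x_n + a / (3 x_n^2), the cube-root iteration above
    for _ in range(iters):
        x = (2.0 / 3.0) * x + a / (3.0 * x * x)
    return x

def reciprocal_newton(a, x=0.1, iters=30):
    # x_{n+1} = 2 x_n - a x_n^2: approximates 1/a using no division at all
    for _ in range(iters):
        x = 2 * x - a * x * x
    return x

print(cbrt_newton(27.0))       # ≈ 3.0
print(reciprocal_newton(7.0))  # ≈ 0.142857...
```

Note that the reciprocal iteration only converges when the starting guess lies in $(0, 2/a)$; in practice one would pick it from a rough estimate of the magnitude of $a$.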
H: Solve $ax - a^2 = bx - b^2$ for $x$ Method 1 Solve for x $$ax - a^2 = bx - b^2$$ Collect all terms with x on one side of the equation $$ax - bx = a^2 -b^2$$ Factor both sides of the equation $$(a -b)x = (a+b)(a - b)$$ Divide both sides of the equation by the coefficient of $x$ (which is $a-b$) $$x = a + b$$ (where $a \neq b$ since this would mean dividing by $0$) Method 2 Solve for $x$ $$ax - a^2= bx - b^2$$ Bring all the terms to one side of the equation $$ax - a^2 -bx + b^2 = 0$$ Rearrange $$ax - bx -(a^2-b^2)=0$$ Factor $$(a - b)x - (a + b)(a - b) = 0$$ $$(a - b)( x - (a + b)) = 0$$ which is a true statement if $$a-b=0$$ $$a = b$$ or $$x-(a+b)=0$$ $$x = a + b$$ My question is I don't understand how this second method is consistent with the first in terms of the restriction on $a$ and $b$. AI: The two methods are indeed consistent. Note: if $a = b$, you have $0 = 0$ (just put $a = b$ and substitute into the original equation). Note, in the first method, you acknowledged that you could not have $a = b$ and validly divide by $(a - b).$ You just forgot to check what happens when $a = b$. When $a = b$, we have that any $x$ will solve the equation. That is, $a = b \implies x \in \mathbb R$. That doesn't tell you much! $a = b$ is not itself a solution for $x$.
H: Explain why a set is Jordan Measurable Problem Let $D\subset\mathbb{R}^3$ be the region $D=\{(x,y,z)\in\mathbb{R}^3:0\leq x,y,z$ and $x+y+z\leq 1\}$. Explain why $D$ is Jordan measurable, that is, show $1_D$ is Riemann integrable. Thoughts Intuitively, this is a hexagon in $\mathbb{R}^3$ with finite volume, but I'm not sure if I understand Jordan measurability. My definition here says that a set is Jordan measurable if $\varphi(bd(D))=0$, i.e. the outer measure of $bd(D)$ is zero. I'm having difficulty understanding outer measure and I don't have an intuition of what it is. Is there a visual intuition behind outer measure? What does it mean for the outer measure to equal zero? I've searched using Google for more details, but I can mostly only find treatises or very lengthy papers written at a level beyond my understanding. Thanks in advance! (Outer measure is as it's defined here: http://en.wikipedia.org/wiki/Outer_measure) AI: Intuitively, the outer measure of a set $X$ in $\mathbb{R}^3$ is the greatest lower bound on the total volume of rectangles you'd need to "cover" the set. The idea is that you find a finite or countably infinite collection of three-dimensional rectangles which completely cover the set $X$. That is, each point in $X$ is contained in at least one of the rectangles. The set $X$ can be covered by rectangles in many different ways. The outer measure of $X$ is defined as the greatest lower bound (infimum) of the set of all total volumes given by such collections. Each way $w$ of covering $X$ by rectangles produces a number $V_w$ which is the sum of the volumes of the rectangles in the cover. The outer measure is the greatest lower bound of the set $\{V_w\}$ of these numbers. In other words: If you can cover $X$ with a collection of rectangles whose total volume is no more than $20$, then the outer measure is no more than $20$. If you can cover $X$ with some collection (which depends on $\epsilon$) of rectangles with total volume less than $10 + \epsilon$ for any $\epsilon > 0$, then the outer measure is no more than $10 + \epsilon$. Since you could do this for any $\epsilon > 0$, the outer measure is no more than $10$. If in addition it is impossible to cover $X$ with a collection of rectangles whose total volume is less than $10$, then the outer measure is $10$. The outer measure of a set depends on the ambient space in which it exists. For example, the outer measure of a line segment considered as a subset of $\mathbb{R}^1$ is just the usual length of the segment. But the measure of a line segment (say a segment on the $x$-axis with $y = 0$ and $z = 0$) in $\mathbb{R}^2$ or $\mathbb{R}^3$ is zero. This is because, although the cover using rectangles has to cover the length of the line segment, the width and height of the rectangles can be made arbitrarily small. According to Spivak's Calculus on Manifolds, a set is defined as Jordan-measurable if it is bounded and its boundary has measure zero. Another way of looking at it is to recall that a function on a closed rectangle in $\mathbb{R}^n$ is Riemann integrable if and only if the set of discontinuities of the function has measure zero. In this case we're looking at the indicator function for the set $D$ in question, and this function is discontinuous on the boundary of the set. The set $D$ is a tetrahedral region in the first octant (where all coordinates are at least zero) of $\mathbb{R}^3$ bounded by the $(x,y)$ plane, the $(y,z)$ plane, the $(x,z)$ plane and the plane $x + y + z = 1$.
The boundary of this object is the collection of points in $D$ where either A) at least one of the coordinates is equal to zero, or B) $x + y + z = 1$, a triangular surface with corners $(1,0,0)$, $(0,1,0)$ and $(0,0,1)$. You can cover each of the "sides" for A) with one $1 \times 1 \times \epsilon$ rectangle for any $\epsilon$. You can cover the triangular surface section in B) with the region between $x + y + z = 1 - \delta$ and $x + y + z = 1 + \delta$. The covers for A) and B) cover the boundary of $D$, and since you could make the total volume of the cover arbitrarily small, you just showed the boundary has outer measure zero. This means the set $D$ is by definition Jordan-measurable.
H: Infinite Expected Value of Jointly Distributed Random Variables I am given a joint pdf function $f(x,y)$ of random variables $X$ and $Y$ such that $f(x,y) = cxy^{-2}$ when $0 < x < 1$, $1 < y$ $f(x,y) = 0$ otherwise where $c$ is a constant. I have calculated the value of $c$, and computed the marginal pdf's of $X$ and $Y $ -- which, from my computations, appear to be independent -- but am having trouble understanding how to calculate the expected value for $Y$. By integrating $f(x,y)$ over the domain $(0,1)$ with respect to $x$, I computed $f_{Y}(y) = y^{-2}$ Now, the problem asks to compute $E[Y]$, which led me to try to compute $\int\limits_1^\infty y(y^{-2})dy$, giving me the result $E[Y] = \lim_{y \to \infty} \ln(y)$. Intuitively, this doesn't make sense to me. Am I doing something wrong? If not, what does it mean for the expected value of a random variable to be infinite? AI: You're quite correct: the integral diverges, and $E[Y]$ does not exist: we can say $E[Y] = +\infty$ in this case, because it diverges to $+\infty$.
H: Probability of balls and buckets with random removal There are 5 buckets, and I have 3 balls to place into these buckets. I cannot place more than one ball in any bucket. After placing the balls in the buckets, 3 buckets are removed at random. What is the probability of there being at least 1 ball in the remaining buckets? What is the probability of not being able to retrieve any balls from the remaining buckets? AI: Let's start with question $2$. No matter how you distribute the balls into the buckets, exactly $3$ buckets will contain a ball and exactly $2$ buckets will contain no ball. Now suppose we want to remove exactly $3$ buckets such that the remaining $2$ buckets will contain no balls. Then the number of ways this can happen is: $$ \binom{3}{3} \binom{2}{0} = 1 $$ With no restrictions, the number of ways to choose $3$ buckets from $5$ is: $$ \binom{5}{3}=\dfrac{5 \cdot 4}{2} = 10 $$ so we obtain the probability of $\boxed{\dfrac{1}{10}}$. For question $1$, this is simply the complement, so we obtain the probability of $1-\dfrac{1}{10}=\boxed{\dfrac{9}{10}}$
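The two answers are small enough to confirm by brute force over all removals (a sketch I added; by symmetry it does not matter which three buckets hold the balls):

```python
from itertools import combinations

balls = {0, 1, 2}                           # the 3 buckets that hold a ball
removals = list(combinations(range(5), 3))  # all ways to remove 3 of 5 buckets
no_ball_left = sum(1 for r in removals if balls <= set(r))

print(no_ball_left / len(removals))      # 0.1, question 2
print(1 - no_ball_left / len(removals))  # 0.9, question 1
```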
H: Need help on part b of this trigonometric question. The title of the question is: Using graphical technique determine the single wave resulting from a combination of two waves of the same frequency and then verify the result using trigonometrical formula Already done part a, I'm on part b now, I have no idea what to do. If you can't read anything on the picture I can clarify it for you. AI: Use the sum-difference formula for $\sin$: $$\sin(\alpha \pm \beta) = \sin\alpha \cos\beta \pm \cos \alpha \sin \beta$$ Use the sum ($+$) when two angles are being added, and difference $(-)$ when two angles are being subtracted. Added: For your equations, you might also like to know that you can distribute the scalar A. E.g. $$v_1 = \;A\sin(\omega t + \phi_1) = A(\sin\omega t \cos\phi_1 + \cos \omega t \sin \phi_1) = A\sin\omega t \cos\phi_1 + A \cos \omega t \sin \phi_1$$
H: How can I find a mathematical way to count the number of triangles in the photo? How can I find a mathematical way to count the number of triangles in this photo? I am sure that the solution is something like a sequence or series, but how do I find it? And if I add a new line, what does the number of triangles become? Thanks for all. AI: Any triangle either has a topmost vertex, or a bottommost vertex. First, count triangles with a topmost vertex. Starting from the vertex at the top we have $6$ possible triangles. The two vertices in the second row each have $5$ possible triangles, and so on. The pattern continues: $$1\cdot6+2\cdot5+3\cdot4+4\cdot3+5\cdot2+6\cdot1=56$$ Next, count triangles with a bottommost vertex. Starting with the vertices at the bottom, there are $1+2+3+2+1=9$ such triangles. In the second row from the bottom, there are $1+2+2+1=6$ such triangles. Continuing we get $1+2+1=4$ and $1+1=2$ for the next two rows, finishing with $1$ near the top. This gives us $22$ triangles with a bottommost vertex. Altogether, we have $78$ triangles.
H: $\Delta u = \operatorname{div}f \ \ \mbox{in} \ \ B_1, f \in L^2 \Rightarrow \nabla u \in L^2$ I'm looking for results like: if $f \in L^p$ and $$ \begin{array}{rclcl} \Delta u & = & \operatorname{div}f & \mbox{in} & B_1\\ u&=&0& \mbox{on}& \partial B_1 \end{array} $$ then $$ \int_{B_1} \!\left| \nabla u \right|^2 \le \int_{B_1}\!\left|\,f\right|^2 .$$ That is, $f \in L^2 \Rightarrow \nabla u \in L^2$. Also, if you know it more generally: $f \in L^q \Rightarrow \nabla u \in L^q$. It would be great if you can write the solution; if not, a reference is good, preferably for non-general cases of second order elliptic equations. But if you only know references for the general case, I'm very grateful too. AI: Define $F:H_0^1(B_1)\to\mathbb{R}$ by $$F(u)=\frac{1}{2}\int_{B_1}|\nabla u|^2-\int_{B_1}f\cdot\nabla u$$ I - $F$ is weakly lower semicontinuous, i.e. if $u_n\to u$ weakly in $H_0^1(B_1)$, then $$F(u)\leq\liminf F(u_n)$$ To prove this, just note that the norm function is weakly lower semicontinuous (you can find this result in any good book on functional analysis). On the other hand, the expression $\int_{B_1} f\cdot\nabla v$ for $v\in H_0^1$ is a linear functional on $H_0^1$, then, by definition of weak convergence, we have that $$\int_{B_1} f\cdot\nabla u_n\to \int_{B_1} f\cdot\nabla u$$ The last convergence implies I. II - $F$ is coercive. Note that $-\int_{B_1} f\cdot\nabla v\geq -\|f\|_2\|\nabla v\|_2$, which implies coerciveness. With I and II, we know that there exists $u\in H_0^1(B_1)$ which minimizes $F$, then $\langle F'(u),v\rangle=0$ for all $v\in H_0^1$, or equivalently $$\tag{1}\int_{B_1}\nabla u\nabla v=\int_{B_1}f\cdot\nabla v,\ \forall\ v\in H_0^1$$ By taking $v=u$ in the last equality, we conclude by using the Hölder inequality that $$\int_{B_1}|\nabla u|^2\leq\int_{B_1}|f|^2$$ For the result that $f\in L^q$ implies $\nabla u\in L^q$, I suggest you take a look at Calderón-Zygmund theory, as suggested in another post. Remark: Note that $F$ is strictly convex, so the solution is unique. Remark 2: Note that if $f\in L^2(B_1)^N$, then $\operatorname{div}f\in H^{-1}(B_1)$, hence $\langle \operatorname{div}f, v\rangle =-\int_{B_1}f\cdot\nabla v$ for all $v\in H_0^1(B_1)$, which implies that $u$ satisfies (in the weak sense) $$\Delta u=\operatorname{div}f$$
H: How can we pick $f \in C(0,T;H)$ with $f(T) =0$ and $f(0) = h$, where $h$ is arbitrary? Let $C(0,T;H)$ be the space of continuous functions $f:[0,T]\to H$ where $H$ is Hilbert. For every $h \in H$, why is it possible to pick a function $f \in C(0,T;H)$ such that $f(0) = h$ and $f(T) = 0$? When $H = \mathbb{R}$, OK, I guess it's possible to do this as I can visualise a graph. But not sure about the general case. How to prove it? I ask because I see in the proof to parabolic PDE existence, one gets $$(u_0-u(0),v(0))_H=0$$ for all $v \in C(0,T;H)$ with $v(T)=0$, and from this everybody says that $u(0) = u_0$ since $v(0)$ is arbitrary. So this is why I ask the question. AI: $H$ is a Hilbert space, so is also a vector space (over $\mathbb{C}$ or $\mathbb{R}$). In addition, $H$ has an inner product $\langle \cdot , \cdot \rangle$. Pick a point $h \in H$. Consider the map $t \mapsto th$, where $t \in \mathbb{R}$ and $h \in H$. $\lVert th - sh \rVert^2 = \langle th-sh,\, th-sh\rangle$, since the Hilbert space norm is defined using its inner product. Then, $$\lVert th - sh \rVert^2 = \langle th-sh,\, th-sh\rangle = (t-s)^2\langle h,h\rangle = (t-s)^2 \lVert h \rVert ^2$$ and so $$\lVert th - sh \rVert = |t-s| \lVert h \rVert $$ This shows that the function $f: \mathbb{R} \to H$ given by $f(t) = (1 - \frac{t}{T})h$ is continuous, because $$\lim \limits _{s \to t}\lVert f(t) - f(s) \rVert = \lim \limits _{s \to t}\lVert (1 - \frac{t}{T})h - (1 - \frac{s}{T})h \rVert = \lim \limits _{s \to t}\frac{|t-s|}{T} \lVert h \rVert = 0$$ Since $f$ is continuous, it is in $C(0,T;H)$ by definition.
H: Repositioning and resizing when I change the size of my frame Moderator's Note: This question is cross-posted on gamedev.SE. I am trying to write a game. When someone resizes the window of my game, I need all the graphics I have drawn on the screen to reposition correctly, so that the ratio of each graphic's width and height remains correct and the new position determined by x and y is adjusted for the new screen size. In the following image, the top left corner is 0,0... all coordinates / widths / heights are measured from this point. What do I need to do to x, y, width and height, given the variable width and height of my screen, in order to keep this aspect ratio correct? SIDE NOTE: the aspect ratio of the screen will always be 16:9! AI: Here's an idea, assign each pixel as follows: $$ x_1=\left\lfloor x_0\cdot \frac{1376}{1920}\right\rfloor \\ y_1=\left\lfloor y_0\cdot \frac{768}{1080}\right\rfloor $$ where $(x_0,y_0)$ are the large screen coordinates, and $(x_1,y_1)$ are the small screen coordinates. Obviously, there are more pixels in the bigger screen than in the small screen, so some pixels map to the same location. In those cases, you need some way of choosing which pixel to take. This method will change the proportions very slightly.
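A sketch of that mapping as code (my addition; the two resolutions are the answer's example values and would be parameters in a real game):

```python
from math import floor

def scale_point(x0, y0, w0=1920, h0=1080, w1=1376, h1=768):
    # Map large-screen coordinates (x0, y0) onto the smaller screen
    return floor(x0 * w1 / w0), floor(y0 * h1 / h0)

print(scale_point(1920, 1080))  # corner -> (1376, 768)
print(scale_point(960, 540))    # centre -> (688, 384)
```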
H: How do you express the covariant cross product? If the covariant cross product is given by $(\mathbf{A}\times\mathbf{B})^i= \varepsilon^{ijk}A_{j}B_{k}$, then the Levi-Civita tensor must transform contravariantly for the indices to contract. But according to this physicsforums thread, it obeys the transformation law $\varepsilon^{i'j'k'}=|\det (\frac{\partial x'}{\partial x})|\varepsilon^{ijk}$, which means that it transforms neither covariantly nor contravariantly. What gives? AI: The cross product is not a vector but instead a pseudovector. Furthermore the Levi-Civita "tensor" $\varepsilon$ is not a tensor but instead a tensor density.
H: Question about the number of elements of order 2 in $D_n$ $$\text{Given}\;\; D_n = \{ a^ib^j \mid \text{ order}(a)=n, \text{ order}(b)=2, a^ib = ba^{-i} \}$$ $$\text{ how many elements does $D_n$ contain that have order $2$ ?}$$ My answer would be: We can write any element as $a^ib^j$. If $j$ is odd, then we can write it as $a^ib$, whereas if $j$ is even, then $b$ just cancels out. So to count: $b$ is one element of order $2$. If $j$ is odd we have $a^ib$. $(a^ib)^2= a^i(ba^i)b = a^i a^{-i} bb = e$. So all elements of the form $a^ib$ have order $2$. These are $n$ elements. If $j$ is even we can write the remaining elements as follows: $a^i$. This element has order $2$ only if $i=n/2$. This can only be true if $n$ is even. So my final answer would be $n+1$ if $n$ is odd, $n+2$ if $n$ is even... Do you think this is correct? Can I improve this somewhere? Thanks! AI: $D_3, D_4$, the dihedral groups of order $6, 8$ respectively, each serve as a counterexample to the odd, and even cases, respectively. In the odd case, the number of elements of order $2$ is $\bf n$. In the even case, we have $\bf n+1$ elements of order $2$. As pointed out in a comment, in your second case, if $i = n$, you have that $a^nb = b.$ That scenario was accounted for in the first case. Otherwise, you did a nice job, just over-counted each case by one. A "sanity check" was all that was needed: comparing your results with simpler $D_n$ that you know, as in $D_3, D_4$, reveals the slight over-count.
H: Can someone help me solve this problem please. For the real numbers $x=0.9999999\dots$ and $y=1.0000000\dots$ it is the case that $x^2<y^2$. Is it true or false? Prove it if you think it's true and give a counterexample if you think it's false. AI: Since $x=y$ (that is, $x = 0.\overline{9}=\sum_{n=1}^\infty \frac9{10^n}=1=y$), it must be the case that $x^2=y^2$. Thus, the statement is false.
H: Sum of Infinite Series with the Gamma Function I am computing the volume of an infinite family of polytopes and have run into the following sum, which I am not sure how to evaluate, as it looks similar to the Riemann zeta function, except with the gamma function being summed over instead of a regular integer $n$. That is, $$\sum_{n=1}^{\infty} \frac{1}{\Gamma(n)^2}$$ Has anyone seen this sum before, know any properties of it, what other functions it is related to, or what the sum converges to? I am also interested in what this sum is equal to for all other natural numbers in the power of the gamma function, not just 2. AI: Note that $$I_0(x) = \sum_{n=0}^{\infty} \frac{(x/2)^{2 n}}{(n!)^2}$$ where $I_0(x)$ is the modified Bessel function of the first kind of zero order. Then your sum is equal to $I_0(2) \approx 2.27959$.
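A partial-sum check against $I_0(2)\approx 2.27959$ (my addition; recall $\Gamma(n)=(n-1)!$ for positive integers $n$):

```python
from math import factorial

# Sum 1/Gamma(n)^2 = sum over m >= 0 of 1/(m!)^2, which is I_0(2)
s = sum(1 / factorial(n - 1) ** 2 for n in range(1, 25))
print(s)  # 2.2795853...  (scipy.special.iv(0, 2.0) gives the same value)
```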
H: Show that there are infinitely many triples of integers $(a,b,c)$ such that $2a^2 + 3b^2 - 5c^2 = 1997$ (Cono Sur Math Olympiad - 1997) Show that there are infinitely many triples of integers $(a,b,c)$ such that $2a^2 + 3b^2 - 5c^2 = 1997$. I tried to attribute a value to $a$ or $b$ to put this equation in the Pell's form, but I hadn't success. AI: I expect what they had in mind was this: as $1997 \equiv 5 \pmod {24}$ and is prime, it can be written as $1997 = 2 \cdot 31^2 + 3 \cdot 5^2.$ Note that the form $f(x,y)= 2 x^2 + 3 y^2$ represents all primes $p=2,3$ and all $p \equiv 5,11 \pmod {24}.$ So the first solution could be $$ (a,b,c) = (31,5,0). $$ The automorphism group of $2 a^2 - 5 c^2$ is not difficult. We actually can find all solutions to $2 a^2 - 5 c^2 = 1922,$ but we don't need to. For any solution $$ (a,b,c) $$ to the original problem, there is a new one $$ (19a + 30 c, b, 12 a + 19 c). $$ So, an infinite sequence of solutions is $$ (31,5,0), $$ $$ (589,5,372), $$ $$ (22351,5,14136), $$ $$ (848749,5,536796), $$ and so on. A different sequence is $$ (5,28,9), $$ $$ (365,28,231), $$ $$ (13865,28,8769), $$ $$ (526505,28,332991), $$ and so on. We can instead vary the $3 b^2 - 5 c^2.$ For any solution $$ (a,b,c) $$ to the original problem, there is a new one $$ (a,4 b + 5 c, 3 b + 4 c). $$ Mixing together the two Pell type transformations gives us a big mess'o solutions.
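The transformations are easy to verify mechanically; here is a sketch (my addition):

```python
def F(a, b, c):
    return 2 * a * a + 3 * b * b - 5 * c * c

sol = (31, 5, 0)
for _ in range(4):
    assert F(*sol) == 1997
    a, b, c = sol
    sol = (19 * a + 30 * c, b, 12 * a + 19 * c)  # preserves 2a^2 - 5c^2
print("first four solutions of the first sequence all give 1997")
```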
H: A question on factorial rings Is 31 irreducible in the ring $\mathbb{Z}\left[\sqrt{5}\right]=\left\{a+b\sqrt{5}:a,b\in\mathbb{Z}\right\}$? And is it prime in $\mathbb{Z}\left[\sqrt{5}\right]$? AI: Recall that for $x$ to be irreducible means that for any factorization of $x$, at least one of the factors is a unit (i.e. has a multiplicative inverse). As with the hints given by anon and vadim, you have $(6+\sqrt{5})(6-\sqrt{5})=31$. Now, is either of these factors a unit? Suppose $(a+b\sqrt{5})$ is a unit. Then, for some $(c+d\sqrt{5})$, we have: $$(a+b\sqrt{5})(c+d\sqrt{5})=1 \\ \text{(multiply this by the conjugate equation, obtained by replacing $\sqrt{5}$ with $-\sqrt{5}$)} \implies (a^2-5b^2)(c^2-5d^2)=1 $$ Since all of our variables are integers, this forces $a^2-5b^2=\pm 1$. You can easily check that neither $(6+\sqrt{5})$ nor $(6-\sqrt{5})$ satisfies this condition -- hence neither is a unit. Thus, we have a nontrivial factorization of $31$, so it is not irreducible. Now recall that in any integral domain, $x$ is a prime element $\implies$ $x$ is an irreducible element. Hence by contrapositive, $x$ is not an irreducible element $\implies$ $x$ is not a prime element. Since $31$ is not irreducible, it is not prime.
H: Question about order of elements of a subgroup Given a subgroup $H \subset \mathbb{Z}^4$, defined as the 4-tuples $(a,b,c,d)$ that satisfy $$ 8\mid (a-c); \quad a+2b+3c+4d=0$$ The question is: give all orders of the elements of $\mathbb{Z}^4 /H$. I don't have any idea how to start with this problem. Can anybody give some hints, strategies etc. to solve this one? Thanks. AI: It may be better to interpret the group $H$ as the kernel of the homomorphism $$\phi:\oplus_{i=1}^4\mathbb{Z}\rightarrow \mathbb{Z}_8\oplus\mathbb{Z}$$ that sends $(a,b,c,d)$ to $((a-c)\pmod8,\ a+2b+3c+4d)$. Verify that the kernel of $\phi$ is $H$. I think it would be easy to determine $\text{Im}(\phi)$. Using the first isomorphism theorem we get: $$\oplus_{i=1}^4\mathbb{Z}/H=\oplus_{i=1}^4\mathbb{Z}/\ker(\phi)\cong \text{Im}(\phi)$$ EDIT: Since you only need to determine the possible orders of elements of $\oplus_{i=1}^4\mathbb{Z}/H$, we can do the following: Since $\oplus_{i=1}^4\mathbb{Z}/H\cong \text{Im}(\phi)$, we only need to determine the possible orders of elements of $\text{Im}(\phi)$. $\text{Im}(\phi)$ is a subgroup of $\mathbb{Z}_8\oplus\mathbb{Z}$. Hence the set of possible orders of elements of $\text{Im}(\phi)$ is a subset of the set of possible element orders in $\mathbb{Z}_8\oplus\mathbb{Z}$, which is $\{1,2,4,8,\infty\}$. $\phi(0,0,0,0)$ is an element of order $1$ in $\text{Im}(\phi)$. $\phi(4,-2,0,0)$ is an element of order $2$ in $\text{Im}(\phi)$. $\phi(2,-1,0,0)$ is an element of order $4$ in $\text{Im}(\phi)$. $\phi(0,1,0,0)$ is an element of order $\infty$ in $\text{Im}(\phi)$. Finally, there is no element of order $8$ in $\text{Im}(\phi)$: such an element would need $a-c$ odd, but $a+2b+3c+4d=0$ forces $a+3c$ to be even, and hence $a-c=(a+3c)-4c$ is even.
H: Concerning the tangent space of an exotic $\mathbb R^4$ My geometric intuition is very poor, so my naive approach to this question is "if $M$ is an exotic $\mathbb R^4$, then $TM$ must be something like $\mathbb R^8$, which is not exotic". Of course, my statement "like $\mathbb R^8$ " is probably rubbish. So, my concrete question is: is the tangent space of an exotic $\mathbb R^4$ a "nice" manifold? I personally don't consider exotic manifolds "nice" (again, because of my poor geometric intuition, though certainly they are very respectable objects of study). I am open-minded about what manifolds a topologist would regard as "nice". AI: An exotic $\Bbb R^4$ (let us denote it by $\Sigma$) is homeomorphic to the usual $\Bbb R^4$, so $\Sigma$ is contractible. Any vector bundle over a contractible manifold is trivial, and hence the tangent bundle $T\Sigma$ is trivial: $T\Sigma \cong \Sigma \times \Bbb R^4$. Hence we have that $T\Sigma$ is homeomorphic to $\Bbb R^8$. Since it is known that there are no exotic $\Bbb R^n$'s for $n \neq 4$, it follows that $T\Sigma$ is diffeomorphic to $\Bbb R^8$ for any exotic $\Bbb R^4$ $\Sigma$.
H: Formal definition of the Differential of a function The formal definition of the differential of a differentiable function $f: x \mapsto y=f(x)$ is that it's a two-variable function, its name is $df$ and its value is $df(x,\Delta_X) = f'(x)\cdot\Delta_X$. It's used by Courant for instance, and I read in Wikipedia ( http://en.wikipedia.org/wiki/Differential_of_a_function#CITEREFCourant1937i ) that it's the modern treatment of differentials in differential calculus. I'm trying to see how we go from that to $df(x) = f'(x) dx$ and then, if $y=f(x)$, to the usual $dy = f '(x) dx$ that we see everywhere regarding linear approximation. First of all, what would $dx$ mean? It is the differential of what function? What about $dy$ or $df(x)$: they are the differentials of what functions? What would be the values of those differentials? Since the formal definition treats a differential like a function, I can't understand what these symbols "$dx$" and "$dy$" actually mean in the usual context. Any help highly appreciated. AI: Formally, $x:M\to\mathbb R$ is a map from a manifold $M$ into the reals. For one dimensional calculus, the manifold $M$ is usually taken to be $\mathbb R$ or a region thereof. $x(p)$ is a function used as a coordinate, and it tells you where on the manifold you are. Its argument is the abstract point on the manifold. Therefore, the manifold is the set of all possible points you might be sitting at. You usually think of just one point at a time. $y=f(x)$ is also a function on the manifold, and by the chain rule $\mathrm d y|_p= f'(x(p)) \mathrm d x|_p$ at $p$, a point on the manifold. You could also view $y$ as a local coordinate and then $x=x(y)$ locally and so on. A vector field $X^a$ on a manifold $M$ is a map from functions $f$ to their rate of change along that vector, $X(f)=X^a\partial_a f$ in any coordinates. In one dimension, a vector field has one component so we can write it as $X$. In fact, we can interpret its action for small values, $\Delta (f) = f' \times \Delta$, as being a predictor of the results of a small change in position (the flow along the integral curves of $\Delta$). A differential of a function $\mathrm d f$ is a map from vector fields to functions given by $\mathrm d f(\Delta) \equiv \Delta (f)$. That is, differentials of functions are maps from vector fields to the derivative/small change of the function along that vector field, or maps from vector fields/small changes and points to real numbers, which give the small change in that function at that point induced by following the vector field away from that point. Therefore $\mathrm d x$ just stores the information about how fast the coordinate $x$ changes. You make arguments like this: $$(\mathrm d f(x))(\Delta)(p)=\Delta(f(x(p)))= \Delta^a(p)\partial_a f(x(p)) = \Delta^a(p) \partial_a x(p)\times f'(x(p)) = f'(x(p)) \Delta (x(p)) = f'(x(p)) (\mathrm d x)(\Delta)(p)$$ and by linearity, comparing the left and right, we deduce $$\mathrm d f = f'(x) \mathrm d x$$ You can figure out a 'small change' interpretation of all this because the definition of a vector field is exactly what it needs to be for this to work. Note: By $\partial_a$ I mean a derivative with respect to the $a$th coordinate, which is arbitrary.
H: Does $\mathrm{Mat}_{m \times n}$ have boundary? To me, $\mathrm{Mat}_{m \times n}$ is isomorphic to $\mathbb{R}^{mn}$, hence is boundaryless. But this disqualifies the use of Sard's theorem in this question: An exercise on Regular Value Theorem. But it seems using Sard's theorem is on the right track. Thank you~ AI: $\mathrm{Mat}_{m \times n}$ has no boundary for exactly the reason you stated: it is diffeomorphic to $\Bbb R^{mn}$. With regards to your other question, you can still apply the Transversality Theorem on page 68 of Guillemin and Pollack. This is because a manifold $M$ in the usual sense is also a manifold with boundary; it just has empty boundary: $\partial M = \varnothing$. Just look at the definition of manifold with boundary and you'll see that there is no requirement that the manifold actually has boundary points; instead the definition merely says that every point is either an interior point or a boundary point.
H: Prove the following limit $ \lim_{n\to \infty} (3^n + 4^n)^{1/n} = 4 $ How do I prove the limit below? I've tried, but I got nothing. $ \lim\limits_{n\to \infty} (3^n + 4^n)^{1/n} = 4. $ Thanks for any help. AI: In general, let $\alpha_1,\alpha_2,\dots,\alpha_m$ be positive numbers. Let $A=\max\limits_{1\leq i\leq m}{\alpha_i}$. Then $$A^n\leq \alpha_1^n+\cdots+\alpha_m^n\leq mA^n$$ Thus $$A\leq (\alpha_1^n+\cdots+\alpha_m^n)^{1/n}\leq m^{1/n} A$$ So $$\lim_{n\to\infty} (\alpha_1^n+\cdots+\alpha_m^n)^{1/n}=\max\limits_{1\leq i\leq m}{\alpha_i}$$ since $m^{1/n}\to 1$.
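A numerical illustration of the sandwich (my addition; `math.log` accepts the large integers here without overflow):

```python
from math import exp, log

for n in (10, 100, 1000):
    print(n, exp(log(3**n + 4**n) / n))
# 4.0219..., then values rapidly approaching 4
```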
H: Understand "i-Equivalent Valuation" that satisfies formula From the book Logic for Mathematicians (A. G. Hamilton): Definition 3.19 Two valuations $v$ and $v'$ are $i$-equivalent if $v(x_j)=v'(x_j)$ for all $j\neq i$. (I know every single book out there have a different way to define this.) Then consider $A= (\forall x_1)(\exists x_2) P(x_1,f(x_2))$ under the interpretation $\cal I$ with domain $\mathbb{Z}$, and $P(x_1,x_2)$ means $x_1=x_2$, and $f(x)=x+1$. It is easy to see that $A$ is true for $\cal I$, but my goal is to write the proof in a more or less formal way (similar to the one that is handled in the same book). This is what I have now: Consider $v$ to be a valuation in $\cal I$, then $v(x_1), v(x_2) \in \mathbb{Z}$ and we interpret $P(x_1,f(x_2))$ as $v(x_1)=v(x_2)+1$ which is true for $v(x_1)=2$ and $v(x_2)=1$. Now we start to consider $i$-equivalent valuations. So for every valuation $v'$ 1-equivalent to $v$, there always exists a valuation $v'$ 2-equivalent to $v$ in which $v'(x_1)=v'(x_2)+1$. I don't know if that's OK, and really I don't know if I'm understanding valuations well at all. AI: I think you've grasped the concept of valuations. As you've noted, not every presentation of interpretations for first-order logic does this exactly the same way. The Wikipedia article on interpretations in logic has a section interpretations of a first-order language, and you'll notice that they first describe the easy-to-grasp intuition of how to understand whether an interpretation satisfies $\forall x.\phi(x)$ and $\exists x.\phi(x)$. This leaves the issue of how to interpret formulas of the form ∀ x φ(x) and ∃ x φ(x). The domain of discourse forms the range for these quantifiers. The idea is that the sentence ∀ x φ(x) is true under an interpretation exactly when every substitution instance of φ(x), where x is replaced by some element of the domain, is satisfied. The formula ∃ x φ(x) is satisfied if there is at least one element d of the domain such that φ(d) is satisfied. — Interpretations of a first-order language The problem arises however, that we can't actually talk about $\phi(d)$, because that isn't actually a formula: Strictly speaking, a substitution instance such as the formula φ(d) mentioned above is not a formula in the original formal language of φ, because d is an element of the domain. There are two ways of handling this technical issue. The first is to pass to a larger language in which each element of the domain is named by a constant symbol. The second is to add to the interpretation a function that assigns each variable to an element of the domain. Then the T-schema can quantify over variations of the original interpretation in which this variable assignment function is changed, instead of quantifying over substitution instances. — Interpretations of a first-order language The definition given by Hamilton follows the second of these options. It does mean, however, that we no longer get to speak of an interpretation $\cal I$ satisfying a formula, but we have to speak of $\cal I$ and a valuation $v$ satisfying a formula. The alternative is to include a valuation $v$ inside an interpretation, but we still end up having to say that $\cal I = (D,\dots,v)$ (where $\dots$ is everything else that is in the interpretation) satisfies $\forall x.\phi(x)$ if and only if every $\cal I' = (D,\dots,v')$ satisfies $\forall x.\phi(x)$, where $v'$ differs from $v'$ only on $x$. This amounts to the same thing, but with more syntax. 
Your proof is in pretty good shape as it is, I think. You could express some things a bit more rigorously, perhaps, but the intuition and general structure is correct. Since you're defining the domain to be $\mathbb{Z}$ and can reference mathematical functions in your semantic explanation, I think you can do something like the following. Since you are interpreting $P$ as the equality function on $\mathbb{Z}$ and $f$ as $+1$, I simply used equality in the following formula, and $s$ as the successor ($+1$) function. Interpretation $\cal I$ and valuation $v$ satisfy $\forall x_1.\exists x_2.(x_1 = s(x_2))$ if and only if $\cal I$ and every 1-equivalent valuation $v'$ satisfy $\exists x_2.(x_1 = s(x_2))$. The interpretation $\cal I$ and valuation $v'$ satisfy $\exists x_2.(x_1 = s(x_2))$ if and only if there is some valuation $v''$ that is 2-equivalent to $v'$ such that $\cal I$ and $v''$ satisfy $x_1 = s(x_2)$. Consider an arbitrary valuation $v$. Let $v'$ be an arbitrary valuation that is 1-equivalent to $v$. Then let $n = v'(x_1)$. Let $v''$ be the valuation that agrees with $v'$ on every value except $x_2$, which it maps to $n-1$. Observe that $\cal I$ and $v''$ satisfy $x_1 = s(x_2)$. Then, since there is some valuation (viz., $v''$) that is 2-equivalent to $v'$ which, with $\cal I$, satisfies $x_1 = s(x_2)$, then $\cal I$ and $v'$ satisfy $\exists x_2.(x_1 = s(x_2))$. Then, since $v'$ is an arbitrary valuation 1-equivalent to $v$, every valuation 1-equivalent to $v$, with $\cal I$, satisfies $\exists x_2.(x_1 = s(x_2))$, so $\cal I$ and $v$ satisfy $\forall x_1.\exists x_2.(x_1 = s(x_2))$.
H: Decimal pattern in division of two-digit numbers by 9 Can someone explain why this works? 1) 13 / 9 = 1.(1 + 3) = 1.444 ... 2) 23 / 9 = 2.(2 + 3) = 2.555 ... 3) 35 / 9 = 3.(3 + 5) = 3.888 ... 4) 47 / 9 = 4.(4 + 7) = 4.(11) → 4.(11 - 9) = 5.222 ... 5) 63 / 9 = 6.(6 + 3) = 6.(9) → 6.(9 - 9) = 7.0 = 7 AI: Let $a$ be the tens digit and $b$ be the ones digit so that our two digit number is $10a+b$. In general, the pattern seems to be: $$ \dfrac{10a+b}{9} = (a+j).kkk... $$ where: $$(j,k)= \begin{cases} (0,a+b) & \text{if }a+b \in \{0,1,2,...,8\} \\ (1,a+b-9) & \text{if }a+b \in \{9,10,11,...,17\} \\ (2,a+b-18) & \text{if }a+b =18 \\ \end{cases}$$ In other words, $j$ and $k$ are the quotient and remainder (respectively) upon dividing $a+b$ by $9$. To see why this makes sense, recall that $\boxed{0.kkk... = k/9}$. This can be proven a number of ways (for example, it is a convergent geometric series). Hence, observe that: $$ \dfrac{10a+b}{9} =\dfrac{9a+(a+b)}{9}=a+\dfrac{a+b}{9}=a+j+\dfrac{k}{9} = (a+j).kkk... $$
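The divmod description can be verified for every two-digit numerator (a check I added; exact arithmetic via fractions avoids rounding questions):

```python
from fractions import Fraction

for n in range(10, 100):
    a, b = divmod(n, 10)     # tens digit, ones digit
    j, k = divmod(a + b, 9)  # quotient and remainder of (a+b) by 9
    assert Fraction(n, 9) == (a + j) + Fraction(k, 9)
print("pattern holds for n = 10..99")
```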
H: $\operatorname{rank}AB\leq \operatorname{rank}A, \operatorname{rank}B$ Prove that if $A,B$ are any such matrices such that $AB$ exists, then $\operatorname{rank}AB \leq \operatorname{rank}A,\operatorname{rank}B$. I came across this exercise while doing problems in my textbook, but am not sure where to start for the proof of this. I think columnspace might be involved in the proof, although I am not sure. AI: I think it is best to prove two things that are stronger statements: $x\in \ker B \Rightarrow x\in \ker AB$, $y\in \text{im}\;AB \Rightarrow y\in \text{im}\;A$. Together with the rank-nullity theorem, I believe this provides a solution not in the link @Amzoti provided above. Edit- nevermind, it's in there, but not all in one place.
H: Cauchy condition for functions Prove that $f$ has a limit at $a$ if and only if for every $\epsilon > 0$, there exists $\delta>0$ such that if $0<|x-a|<\delta$ and $0<|y-a|<\delta$, then $|f(x)-f(y)|<\epsilon$. Forward direction: Suppose $f$ has a limit $L$ at $a$. Fix $\epsilon$. Then for some $\delta$ we have $|f(x)-L|<\epsilon/2$ whenever $0<|x-a|<\delta$. Then for $0<|x-a|,|y-a|<\delta$, we have $|f(x)-f(y)|\le|f(x)-L|+|f(y)-L|<\epsilon/2+\epsilon/2=\epsilon$. Backward direction: Suppose for every $\epsilon>0$ there exists $\delta>0$ such that if $0<|x-a|<\delta$ and $0<|y-a|<\delta$, then $|f(x)-f(y)|<\epsilon$. We want to show $f$ has a limit at $a$, which means that for some $L$, any sequence of $x_i$'s (with $x_i\neq a$) converging to $a$ has $f(x_i)$'s converging to $L$. How to proceed? AI: Suppose that $(x_n)$ converges to $a$, with $x_n\neq a$ for all $n$. By the Bolzano-Weierstrass theorem, we know that the sequence $(f(x_n))$ must have a monotone subsequence $(f(x_{n_j}))$. Further, $(f(x_{n_j}))$ must be bounded: taking $\epsilon=1$, there exists $\delta>0$ so that $0<\lvert x-a\rvert<\delta$ and $0<\lvert y-a\rvert<\delta$ implies $\lvert f(x)-f(y)\rvert<1$; in particular, for all $x\in(a-\delta,a+\delta)$ with $x\neq a$ we have $\lvert f(x)-f(a+\frac{\delta}{2})\rvert<1$, which implies $$ \lvert f(x)\rvert<\lvert f(a+\tfrac{\delta}{2})\rvert+1\text{ for all }x\in(a-\delta,a+\delta),\ x\neq a; $$ since $x_{n_j}$ is eventually contained in $(a-\delta,a+\delta)\setminus\{a\}$, the sequence $(f(x_{n_j}))$ is then bounded. So, $(f(x_{n_j}))$ is bounded and monotone, and therefore converges to some $L$. We claim that $(f(x_n))$ must converge to $L$ as well. Let $\epsilon>0$ be given. By assumption, there exists $\delta>0$ such that $0<\lvert x-a\rvert<\delta$ and $0<\lvert y-a\rvert<\delta$ implies $\lvert f(x)-f(y)\rvert<\frac{\epsilon}{2}$. Because $x_n\rightarrow a$, there exists $N\in\mathbb{N}$ so that $n>N$ implies $0<\lvert x_n-a\rvert<\delta$. Because $x_{n_j}\rightarrow a$ and $f(x_{n_j})\rightarrow L$, there exists $J\in\mathbb{N}$ such that $0<\lvert x_{n_J}-a\rvert<\delta$ and $\lvert f(x_{n_J})-L\rvert<\frac{\epsilon}{2}$. Then for $n>N$, $$ \lvert f(x_n)-L\rvert\leq\lvert f(x_n)-f(x_{n_J})\rvert+\lvert f(x_{n_J})-L\rvert<\epsilon. $$ So, $f(x_n)\rightarrow L$, as claimed.
H: Maths required to complete Project Euler What maths will help one complete most, if not all, of the Project Euler questions? The last time I attempted Project Euler I could not understand the questions/vocabulary, etc., and could only complete a few questions. I've since gone over set theory and college algebra, and looking through the questions again, they are more comprehensible. I'd assume knowledge of discrete maths will help most with Project Euler questions? AI: One text I'd recommend highly, and it will give you a very solid foundation in Discrete Math, and more, is Graham, Knuth, and Patashnik's Concrete Mathematics: A Foundation for Computer Science. From the description available at the link above: This book introduces the mathematics that supports advanced computer programming and the analysis of algorithms. The primary aim of its well-known authors is to provide a solid and relevant base of mathematical skills - the skills needed to solve complex problems, to evaluate horrendous sums, and to discover subtle patterns in data.... Concrete Mathematics is a blending of CONtinuous and disCRETE mathematics.... ...The book includes more than 500 exercises, divided into six categories. Complete answers are provided for all exercises, except research problems, making the book particularly valuable for self-study. Major topics include: *Sums *Recurrences *Integer functions *Elementary number theory *Binomial coefficients *Generating functions *Discrete probability *Asymptotic methods
H: Number of distinct conjugates of a subgroup is finite Let $H$ be a subgroup of $G$. I would like to prove that if $H$ has finite index in $G$, then there is only a finite number of distinct subgroups in $G$ of the form $aHa^{-1}$. (This is an exercise in [Herstein, Topics in Algebra], in the section on subgroups and Lagrange's theorem.) The assertion clearly holds if $|H|$ (or $|G|$) is finite. Also, the number of distinct conjugates $aHa^{-1}$ is 1 if $H \trianglelefteq G$. So we need to consider only the case $|H|=\infty, H \ntrianglelefteq G$. I am also wondering what would be some specific examples of such infinite groups $H \le G$ with $|G:H| < \infty$ and $H \ntrianglelefteq G$. AI: Are you familiar with the concept of group actions? The group $G$ acts on the conjugacy class of $H$ via the conjugation action. There is only one orbit, and its stabilizer is $N_G(H)$, hence the orbit is in (equivariant) bijection with the coset space $G/N_G(H)$ via $gN_G(H)\longleftrightarrow gHg^{-1}$. But $H\le N_G(H)\le G$, so $N_G(H)$ has finite index, so $G/N_G(H)$ is finite. The class of profinite groups exhibits a large number of examples of groups with subgroups of finite index - in fact they have a topology which is defined in terms of them.
H: Finding a function Let $\{f_j\}$ be an arbitrary sequence of positive real functions on $\mathbb{R}$. How can I find a function $f$ so that for all $n\in\mathbb{N}$: $\displaystyle\lim_{x\to\pm\infty}\dfrac{f(x)}{f_n(x)}=\infty$ ? AI: Hint: For example, for every $n\ge 1$, let $f(x)=n(f_1(x)+\cdots +f_n(x))$ when $n-1\le|x|<n$.
H: One and only one double root (quartic equation) I want to know how I can determine all positive real values of $b$ for which this equation will have one and only one double root: $x^4 +8x^3 + (288-72b)x^2 + (1088-32b)x + (4b-136)^2 = 0$. Any help appreciated. Thanks. AI: HINT: If a monic polynomial $f(x)$ has exactly one double root $a$, then $f'(a)=0$ and $\gcd(f(x),f'(x))=x-a$.
H: How to calculate Mortgage calculation

I have this formula for mortgage calculation, and now I want to get the loan amount using the same formula.

Loan amount = Monthly Payment / ((1 + Interest rate per annum/100) ^ Term of loan) * Term of loan * 12

For example: interest rate per annum is 1.09, term of loan is 30 years, monthly payment is S$ 3,049.40. So how do I get the loan amount? Any ideas or suggestions? Thanks.

AI: How? Substitution (you're given all the values you need to completely fill in all quantities on the right-hand side of the equation), simplification of the RHS, then calculation of the RHS. The loan amount (the left-hand side of the equation) equals the right-hand side, and when the right-hand side is evaluated using the given values, it will give you precisely what the loan amount is equal to.
$$\text{Loan amount}\; = \dfrac{\text{Monthly Payment}}{\left(1 + \dfrac{\text{Interest rate per annum}}{100}\right)^{\large \text{(Term of loan)}}} \times \text{Term of loan} \times 12 $$
$$\text{Loan amount}\; = \dfrac{3049.40}{\left(1 + \dfrac{1.09}{100}\right)^{30}} \times 30 \times 12$$
$$= 3049.4\times (1.0109)^{-30} \times 360 \approx 792996.01$$
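The substitution is easy to reproduce numerically; a minimal Python check, evaluating the formula exactly as stated above (whether that formula is financially appropriate is a separate question - it is not the standard annuity formula):

    # Values from the question, plugged into the formula as given.
    monthly_payment = 3049.40
    rate_per_annum = 1.09   # percent
    term_years = 30

    loan_amount = (monthly_payment / (1 + rate_per_annum / 100) ** term_years
                   * term_years * 12)
    print(f"{loan_amount:.2f}")  # approximately 792996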
H: Continuous on $[a,b]$ implies $|f(x)-f(y)|<\epsilon$ in whole interval

Prove that if $f$ is continuous on $[a,b]$ and $\epsilon>0$, there exists $\delta>0$ such that if $|x-y|<\delta$ and $x,y\in[a,b]$, then $|f(x)-f(y)|<\epsilon$.

Since $f$ is continuous on $[a,b]$, for every $p\in[a,b]$ and every $\epsilon$ there exists $\delta_p$ such that $|x-p|<\delta_p$ implies $|f(x)-f(p)|<\epsilon/2$. (For $p=a,b$ remove the absolute value and flip the sign if necessary.) This means for every $p$ there exists $\delta_p$ such that $|x-p|<\delta_p,|y-p|<\delta_p$ implies $|f(x)-f(y)|<\epsilon$. How can I turn this $\delta_p$ into a global $\delta$?

AI: One way: try contradiction. Suppose there exists an $\epsilon>0$ such that for all $\delta>0$ there exist $x,y \in [a,b]$ with $|x-y|< \delta$ such that $|f(x)-f(y)| \ge \epsilon$. Choose $\delta = \frac{1}{n}$, and let $x_n,y_n$ be the points that satisfy $|x_n-y_n | < \delta$ and $|f(x_n)-f(y_n)| \ge \epsilon$. Since $[a,b]$ is compact, there is some subsequence such that $x_{n_k} \to \hat{x}$ and $y_{n_k} \to \hat{y}$. Since $|x_n-y_n | < \frac{1}{n}$, we see that $\hat{x}= \hat{y}$. However, $f$ is continuous at $\hat{x}$, hence $|f(x_{n_k})-f(y_{n_k})| \to 0$, which contradicts $|f(x_n)-f(y_n)| \ge \epsilon$ for all $n$. Hence $f$ is uniformly continuous.

Or, another way (pay attention to the $\frac{1}{2}$s below!): Let $\epsilon>0$. Since $f$ is continuous at each $x$, there is some $\delta_x>0$ such that if $|x-y|< \delta_x$, then $|f(x)-f(y)| < \frac{1}{2}\epsilon$. Then $\{ B(x,\frac{1}{2} \delta_x) \}_{x \in [a,b]}$ (note the $\frac{1}{2}$) is an open cover of $[a,b]$, which is compact. Hence there is a finite subcover $\{ B(x_1, \frac{1}{2} \delta_{x_1}),..., B(x_n, \frac{1}{2} \delta_{x_n})\}$. Let $\delta = \min(\frac{1}{2} \delta_{x_1},...,\frac{1}{2} \delta_{x_n})$. If $|x-y| < \delta$, then $x \in B(x_k, \frac{1}{2} \delta_{x_k})$ for some $k$, and we have $|x_k-y| \le |x_k-x|+|x-y| < \frac{1}{2} \delta_{x_k}+ \delta \le \delta_{x_k}$, hence $y \in B(x_k, \delta_{x_k})$ (note the $\frac{1}{2}$ is missing here). Hence by construction, we have $|f(x)-f(y)| \le |f(x)-f(x_k)| + |f(x_k)-f(y)| < \epsilon$. Hence we have uniform continuity.
H: Quick multiplication

How can we explain this with equations? Quick multiplication of any two numbers whose last digits add up to 10 and whose remaining leading digits are the same:

$32\times 38 = 3\times (3+1)|(2\times 8)\implies (3\times 4)|16\implies 12|16 \implies 1216$

$81\times 89 \implies 8\times(8+1)|(1\times9) \implies(8\times9)|09\implies72 | 09 \implies 7209$

$124 \times 126 \implies 12\times(12+1) | (4\times6) \implies(12\times13)|24\implies156|24 =15624$

AI: $(10 a + b)(10 a + d) = 100 a^2 + 10 a (b + d) + b d$. Since $b + d = 10$, that's the same as $100(a^2 + a) + bd = 100a(a+1) + b d$.
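A one-minute check of the identity on the question's own examples (Python; `a` is the shared leading block, `b` and `d` the final digits):

    # 100*a*(a+1) + b*d, per the identity above; requires b + d == 10.
    def quick_mul(a, b, d):
        assert b + d == 10
        return 100 * a * (a + 1) + b * d

    print(quick_mul(3, 2, 8), 32 * 38)      # 1216 1216
    print(quick_mul(8, 1, 9), 81 * 89)      # 7209 7209
    print(quick_mul(12, 4, 6), 124 * 126)   # 15624 15624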
H: How can $\lim_{n\to \infty} (3^n + 4^n)^{1/n} = 4$?

If $\lim_{n\to \infty} (3^n + 4^n)^{1/n} = 4$, then $\lim_{n\to \infty} 3^n + 4^n=\lim_{n\to \infty}4^n$, which implies that $\lim_{n\to \infty} 3^n=0$, which is clearly not correct. I tried to do the limit myself, but I got $3$. The way I did it is that at the step $\lim_{n\to \infty} 3^n + 4^n=\lim_{n\to \infty}L^n$ I divided everything by $4^n$ and got $\lim_{n\to \infty} (\frac{3}{4})^n + 1=\lim_{n\to \infty} (\frac{L}{4})^n$. Informally speaking, the $1$ on the LHS is going to be very insignificant as $n \rightarrow \infty$, so $L$ would have to be $3$. Could someone explain to me why I am wrong and how the limit can possibly be equal to $4$? Thanks!

AI: $\infty-\infty$ is not well-defined: both $3^n+4^n$ and $4^n$ diverge, so you cannot subtract the two limit statements to conclude $3^n\to 0$. Note also that in your own computation the left-hand side $(\frac34)^n+1$ tends to $1$, not to $0$, so you need $(\frac{L}{4})^n\to 1$, which forces $L=4$, not $3$. Directly: $(3^n+4^n)^{1/n}=4\left(1+(\tfrac34)^n\right)^{1/n}\to 4\cdot 1=4$.
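A numeric sanity check (Python), showing the expression approaches $4$ rather than $3$:

    # (3^n + 4^n)^(1/n) for growing n; equals 4*(1 + (3/4)^n)^(1/n) -> 4.
    for n in (10, 50, 200):
        print(n, (3**n + 4**n) ** (1.0 / n))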
H: Is $f(x)=o(x^\alpha)$ for every $\alpha\gt0$ enough to know that $\int_c^x dt/f(t) \sim x/f(x)$?

Let $f$ be a monotone increasing function $[c,\infty)\rightarrow \mathbb{R}^{\geq 0}$ satisfying $f(x)/x^\alpha\to 0$ for all $\alpha>0$. Is it true that $\int_c^x \frac{dt}{f(t)} \sim \frac{x}{f(x)}$ as $x\to \infty$?

This question is inspired by the fact that $\operatorname{li}(x)\sim x/\log x$. It seems to me that on heuristic grounds it should be true: in the case $f(t)=\log t$, what's going on is that $\log$ grows slowly enough that it spends "most of the time" on the interval $[c,x]$ close to its value at the endpoint. Thus $t/\log t$ is well-approximated by $t/\log x$ on "most of" the interval of integration. The length of the interval is asymptotically $x$, and this is the explanation, morally anyway. Thus it should be that other functions that grow "slowly enough" work the same way. And it seems to me (this is extremely unrigorous) that if $f(x)/x^\alpha\to 0$ for all $\alpha>0$, that should be "slow enough", because
$$\int_c^x \frac{dt}{t^\alpha} = \frac{1}{1-\alpha}\cdot \frac{x}{x^\alpha} - \text{const.}$$
and $\frac{1}{1-\alpha}\to 1$ as $\alpha\to 0$.

However, I have been unable to satisfy myself with a proper proof. The proof I know in the $f=\log$ case doesn't generalize, because it involves the specifics of a particular integration by parts and facts like $\text{const.}/\log x\to 0$ that seem extrinsic to the result. Is there a proof that works in the generality in which I've posed the question? Or is it not actually true?

AI: Consider functions that are very close to step functions. You could arrange it so that $x/f(x)$ grows approximately linearly to something like $x/\log x$, then decreases rapidly to something like $x/(\log x)^2$. But your integral is an increasing function of $x$...
H: why $g_n$ is measurable in the proof of Fatou's Lemma

Fatou's Lemma: Suppose $\{f_n\}$ is a sequence of measurable functions with $f_n \geq 0$. If $\lim_{n\rightarrow\infty}f_n(x)=f(x)$ for a.e. $x$, then
$$\int f \leq \liminf_{n\rightarrow\infty}\int f_n.$$

Proof: Suppose $0\leq g \leq f$, where $g$ is bounded and supported on a set $E$ of finite measure. If we set $g_n(x)=\min(g(x),f_n(x))$, then $g_n$ is measurable. etc.

I don't know why $g_n$ is measurable, because the author didn't assume $g$ is measurable. Can anyone explain why? Thanks very much.

AI: $g$ is assumed to be measurable, but the author didn't explicitly say so. Once that is granted, $g_n=\min(g,f_n)$ is measurable because $\{g_n>a\}=\{g>a\}\cap\{f_n>a\}$ for every $a$, an intersection of two measurable sets.
H: To prove $f$ to be a monotone function

An open set is a set that can be written as a union of open intervals. If $f$ is a real-valued continuous function on $\mathbb{R}$ that maps every open set to an open set, then prove that $f$ is a monotone function.

AI: "Increasing" ("decreasing") means nonstrictly increasing (decreasing). We consider a function $f:\mathbb R\rightarrow\mathbb R$.

Lemma 1. If a function $f$ is non-monotone, then its restriction to some set of size $3$ or $4$ is non-monotone.

Proof. Suppose $f$ is non-monotone. Since $f$ is not decreasing, we can choose $a,b$ so that $a<b$ and $f(a)<f(b)$. Since $f$ is not increasing, we can choose $c,d$ so that $c<d$ and $f(c)>f(d)$. The set $\{a,b,c,d\}$ does the trick.

Lemma 2. If a function $f$ is non-monotone, then its restriction to some $3$-element set $X$ is non-monotone.

Proof. Suppose $f$ is non-monotone. By Lemma 1, we may assume that the restriction of $f$ to some $4$-element set $\{a,b,c,d\}$, with $a<b<c<d$, is non-monotone. Without loss of generality, we may assume that $f(a)\le f(d)$. Now we consider two cases.

Case I. For some $x\in(a,d)$, either $f(x)>f(d)$ or $f(x)<f(a)$. In this case, the set $X=\{a,x,d\}$ works.

Case II. For each $x\in(a,d)$ we have $f(a)\le f(x)\le f(d)$; in particular, $f(a)\le f(b)\le f(d)$ and $f(a)\le f(c)\le f(d)$. Since $f$ is not increasing on $\{a,b,c,d\}$, we must have $f(b)>f(c)$, and then $X=\{a,b,c\}$ works.

Theorem. An open continuous function $f:\mathbb R\rightarrow\mathbb R$ is monotone.

Proof. Consider an open continuous function $f:\mathbb R\rightarrow\mathbb R$, and assume for a contradiction that $f$ is non-monotone. By Lemma 2 there is a $3$-element set $X=\{a,b,c\}$, $a<b<c$, such that $f$ is non-monotone on $X$. Since $f$ is continuous, it has an absolute maximum value and an absolute minimum value on the closed interval $[a,c]$. If both the maximum and the minimum were attained at the endpoints, then $f$ would be monotone on $X$. Without loss of generality, we may assume that the absolute maximum value of $f$ on $[a,c]$, call it $M$, is attained in the open interval $(a,c)$. But this means that the image of $(a,c)$ under $f$ has a greatest element, namely $M$, and so it can't be an open set - contradicting the assumption that $f$ maps open sets to open sets.
H: Class of finite groups a Fraïssé Class?

Is the class of finite groups a Fraïssé class? Calling this class $K$, does $K$ satisfy the following:

1. Joint embedding property
2. Amalgamation property
3. Hereditary property: if $G \in K$ and $H \le G$, then $H \in K$

(1) holds because if $G, H \in K$, then $G \times H \in K$ as well. Obviously $G$ and $H$ are both subgroups of $G \times H$.

(2) seems to hold. Suppose $G_1, G_2 \in K$, with $H = G_1 \cap G_2$. Then the amalgamated free product $G_1 *_H G_2$ contains both $G_1$ and $G_2$ as subgroups. However, is it true that $G_1 *_H G_2$ is finite?

(3) holds because substructures of finite groups are again finite groups. By definition, substructures must contain $0$ and must be closed under the group operation. Furthermore, every element has an inverse, because they all have finite order (you will eventually reach the inverse by multiplying an element by itself successively). Associativity is universal, so also holds. So every substructure is also a (sub)group.

Is my reasoning correct? I would appreciate some kind of proof that amalgamated free products of finite groups are also finite, if this is true, or some counterexample if not, because I don't know much about amalgamated free products.

AI: Just for information, an amalgam $A*_{C}B$ of finite groups with $C = A \cap B$ is finite if and only if $C = A$ or $C = B$ (i.e., there is an inclusion between the groups $A$ and $B$) - this is proved in Serre's book "Trees". On the other hand, if $A$ is a finite group of order $m$ and $B$ is a finite group of order $n,$ then (by Cayley's theorem) the symmetric group $S_{h}$ (with $h = m+n$) is a finite group which contains isomorphic copies of $A$ and $B$ with a trivial intersection. The group generated by these isomorphic copies is an epimorphic image of the free product.
H: Cardinality and surjective functions

Let $A$ denote a set and $P(A)$ be the power set. By definition for cardinalities, $|A|\le|B|$ iff there exists an injection $A \hookrightarrow B$. Note that there is an obvious surjection $P(A) \to A$ (for nonempty $A$). Without the axiom of choice, can there also be a surjection $A \to P(A)$?

AI: No. Cantor's theorem does not depend on the axiom of choice; the proof is really constructive from this point of view. Given a function $f\colon A \to \mathcal P(A)$, we construct a set which is not in its image: $D=\{a\in A : a\notin f(a)\}$. Indeed, if $D=f(a_0)$ for some $a_0$, then $a_0\in D\iff a_0\notin f(a_0)=D$, a contradiction.
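A finite-set illustration of the diagonal construction (a Python sketch; of course this only demonstrates the idea, it is not the proof):

    import random
    from itertools import combinations

    A = {0, 1, 2}
    powerset = [frozenset(c) for r in range(len(A) + 1)
                for c in combinations(A, r)]

    # Any attempted surjection f: A -> P(A) misses the diagonal set D,
    # since D differs from f(a) on the element a itself.
    f = {a: random.choice(powerset) for a in A}
    D = frozenset(a for a in A if a not in f[a])
    print(D, D in set(f.values()))  # the membership test is always False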
H: If $K$ is $w$-compact and convex, $f\in X^\ast \implies f$ attains its maximum on $K$

Let $X$ be a real Banach space. If $K\subset X$ is weakly compact and convex, then for a given $f\in X^\ast$ (dual space) we can always find $k\in K$ such that
$$\displaystyle \sup_{x\in K}{f(x)}=f(k).$$
Any hints would be appreciated.

AI: By definition of the weak topology, $f$ is weakly continuous, so $f(K)$ is a compact subset of $\mathbb{R}$ and hence contains its supremum; that is, $\sup_{x\in K}f(x)=f(k)$ for some $k\in K$. Convexity of $K$ is not needed, but it ensures that $f(K)$ is actually an interval.
H: Is there a preference between proving a total order (strict vs partial)?

I know that proving a relation $\mathcal{R}$ to be a strict total order (asymmetric, transitive, and total) implies that the relation $S$ defined as $X\mathcal{S} Y \longleftrightarrow (X\mathcal{R}Y\vee X=Y)$ is a total order (antisymmetric, transitive, reflexive, and complete). Conversely, if a relation $\mathcal{R}$ is a total order, then the relation $S$ defined as $X\mathcal{S} Y \longleftrightarrow (X\mathcal{R}Y\wedge X\neq Y)$ is a strict total order.

My perception is that people tend to prove a relation to be a total order rather than proving that it is a strict total order. If proving one automatically gives the other, why don't people work with strict total orders? For example, in the definition of $\mathbb{N}$ I think it should be the same (even easier) to prove that it is a strictly totally ordered set, from which it is automatically a totally ordered set, but books prefer the other way around: they prove that it is a totally ordered set, and then the other version follows automatically. What am I missing? Is it just a matter of custom, style, or something else?

AI: Since I don't feel there is a lot of difference between total and partial orderings in this aspect, I will address all of them at once.

It seems that the natural form of most of the orders discussed is the inclusive form. For example, the ubiquitous subset relation $X \subseteq Y$ and its strict counterpart $X \subsetneq Y$. We express $X \subseteq Y$ by
$$\forall x: x\in X \to x \in Y.$$
This is a lot shorter than $X \subsetneq Y$:
$$\forall x(x \in X \to x \in Y) \land \exists y( y \in Y \land \neg (y \in X)),$$
which can of course be shortened by using properties of $\subseteq$, but in principle the expression above is what it comes down to. On the other hand, the membership relation $X \in Y$ itself (considered on transitive sets such as ordinals) is (almost) never discussed in the inclusive form, because (usually) we don't want to consider sets as elements of themselves.

If you happen to know the interpretation of (inclusively) ordered sets as categories, that could be a further indication of mathematical practice: the fact that specifically inclusive ordered sets arise as special categories indicates that reasoning about them may often be "more natural" (because categories are well-behaved structures many mathematicians have a good intuition for, whether they are aware of the term or not).

Ultimately, it is, I think, nothing but a custom. But it is a self-reinforcing one: we use inclusive ordered sets more, so we have a better intuitive grip on their properties. Hence they are studied more, used more in proofs (Zorn's lemma comes to mind), and appear more often in books. This all leads to positive feedback in the form of e.g. students using them more. (I have avoided appealing to "obvious" things like "it is natural to be able to compare an element to itself" because I am probably deeply influenced by tradition and custom myself, making it "obvious".)