H: Do there exist some non-constant holomorphic functions such that the sum of their moduli is a constant Do there exist some non-constant holomorphic functions $f_1,f_2,\ldots,f_n$ such that $$\sum_{k=1}^{n}\left|\,f_k\right|$$ is a constant? Can you give an example? Thanks very much AI: No. Suppose $f, g$ are holomorphic functions on the unit disc with $|f|+|g|=M$ constant (the same argument works for any number of functions). By the mean value property, for every $z_0$ and every circle $|z-z_0|=r$ in the disc, $$ 2\pi r M=2\pi r( |f(z_0)|+|g(z_0)|)=\left|\int_{|z-z_0|=r} f\,|dz|\right|+\left|\int_{|z-z_0|=r} g\,|dz|\right|\le \int_{|z-z_0|=r} (|f|+|g|)\,|dz|=2\pi r M, $$ so all the inequalities are equalities. In particular $|f(z_0)|$ equals the average of $|f|$ over every such circle, and likewise for $|g|$; thus $|f|$ and $|g|$ satisfy the mean value property and are harmonic, and the modulus of a holomorphic function is harmonic only when the function is constant. Hence $f, g$ are constants.
H: Can't it be proved that every standard positive real number is limited? I'm reading a short section about internal set theory (see here), in which $x$ is limited in case for some standard $r$ we have $|x| ≤ r$, while the predicate "standard" is not defined. I'm interpreting an element $x \in A$ as standard if it doesn't belong to $^*A \setminus A$. One exercise in it asks: Can one prove that every standard positive real number is limited? I think not, because both $\{x \in {\bf{R}} : x >0 \land x \text{ is standard} \}$ and $\{x \in {\bf{R}} : x \text{ is limited} \}$ are "illegal set formations" in the language of set theory, so there's no way to tell whether an element in one set necessarily belongs to the other. On the other hand, I'm doubtful about such reasoning, since if $x$ is a standard positive real number, so is $x+1$. Because $x < x+1$, we have that every standard positive real number must be limited. What's wrong? AI: There is a difference between saying that the set of limited numbers exists, or that the set of standard numbers exists, and saying that every standard number is limited. To see the full meaning of this, let me give you a close example. In $\Bbb R$ we cannot define $\Bbb Z$, but we can define each of the integers, and therefore we can define every natural number if we wish to define it. Still we cannot define $\Bbb N$ itself. Similarly here, we cannot define the set of standard reals, nor the set of limited reals. But we can prove that every standard real is limited: simply argue that if $r$ is standard then $|r|$ is also standard, and $|r|\leq|r|$, so $r$ is limited by the standard number $|r|$, as wanted.
H: Calculation of ordered triples $(x,y,z)$ in $x^2 = yz\;\;,y^2=zx\;\;,z^2 = xy$ (1) Total number of integer ordered triples $(x,y,z)$ satisfying $x^2 = yz\;\;,y^2=zx\;\;,z^2 = xy$ (2) Total number of integer ordered triples $(x,y,z)$ satisfying $x+yz = 1\;\;,y+zx = 1\;\;,z+xy = 1$ My try: (1) Clearly $x = 0,y = 0,z = 0$ is a solution of the given equations, and from the three equations we observe that $x,y,z$ have the same sign. Now if $x\neq 0,y\neq 0$ and $z\neq 0$, then $x^2-y^2 =-z(x-y)\Leftrightarrow (x-y)(x+y+z) =0$, meaning either $x=y$ or $x+y+z = 0$. $\bullet$ If $x = y$, then putting this in $z^2 = xy=x^2=y^2$ gives $z = \pm x = \pm y$; since $z=-x$ would force $x^2 = yz = -x^2$, i.e. $x=0$, this means $x = y = z$. So $(x,y,z) = (k,k,k)$ where $k\in \mathbb{Z}$. $\bullet$ If $x+y+z = 0$, then putting $z=-(x+y)$ in the third equation $z^2 = xy$ gives $x^2+y^2+xy = 0$. So $2(x^2+y^2+xy) = x^2+y^2+(x+y)^2 = 0\Leftrightarrow x = 0,\,y = 0,\,x+y = 0$. So $(x,y,z) = (0,0,0)$. So the given system has infinitely many solutions. My question: is my calculation right or not? If not, please explain. Thanks AI: Hint to (2): subtract the 2nd equation from the 1st: $$(x-y)(1-z)=0.\ \ \ (1)$$ Addition: Similarly $$(x-z)(1-y)=0,\ \ \ (2)$$ $$(z-y)(1-x)=0.\ \ \ (3)$$ If $z=1$, then $x+y=1, xy=0$ (from your $1$st and $3$rd equations), whence we obtain two solutions $$z=1, x=1, y=0,$$ $$z=1, x=0, y=1.$$ Similarly for $y=1$ we get one more solution from ($2$): $$y=1, x=1, z=0.$$ Further, let $x,y,z\ne 1$. Then $x=y=z$ and $x^2+x-1=0$ from your $1$st equation. Then we have two solutions: $$x=y=z=\frac{-1+\sqrt{5}}{2}$$ and $$x=y=z=\frac{-1-\sqrt{5}}{2}.$$
H: How to solve this integral $\int \frac{1+2x^2}{x^2(1+x^2)}dx$ How to solve this integral $$\int \frac{1+2x^2}{x^2(1+x^2)}dx$$ I thought the numerator should be $1+3x^2$, so that I could take $u = x+x^3$; differentiating gives $du = (1+3x^2)\,dx$, but the numerator here is $1+2x^2$, so this doesn't work. Please suggest how to proceed. AI: Hint: write the numerator as $1+x^2 + x^2$ and write the whole expression as two fractions, one with numerator $1+x^2$ and the other one with $x^2$.
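To spell the hint out: $$\int \frac{1+2x^2}{x^2(1+x^2)}\,dx=\int\frac{(1+x^2)+x^2}{x^2(1+x^2)}\,dx=\int\frac{dx}{x^2}+\int\frac{dx}{1+x^2}=-\frac{1}{x}+\arctan x+C.$$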
H: Finding $\int_{0}^{\infty}\frac{\cos(ax)}{(x^2 + 1)^2}\,dx $ I have a contour integral problem I need to solve, but I don't know the answer, so I wanted to verify that my work is correct. $$ \int_{0}^{\infty}{\frac{\cos(ax)}{(x^2 + 1)^2}dx} $$ For this one, the function being integrated is even, so I can just take the integral over the entire real line and multiply by $ \dfrac{1}{2} $. That is, $ \int_{0}^{\infty}{\dfrac{\cos(ax)}{(x^2 + 1)^2}dx} = \dfrac{1}{2}\int_{-\infty}^{\infty}{\dfrac{\cos(ax)}{(x^2 + 1)^2}dx} $. In the upper half-plane, the function being integrated has a double pole at $ i $. Therefore, I want to say that this is true: $$ \int_{-\infty}^{\infty}{\frac{\cos(ax)}{(x^2 + 1)^2}dx} = \operatorname{Re} \left[2 \pi i\ \operatorname{Res}\left(\dfrac{e^{iaz}}{(z^2 + 1)^2}, i\right)\right] $$ My solution yields: $ \int_{0}^{\infty}{\dfrac{\cos(ax)}{(x^2 + 1)^2}dx} = \dfrac{\pi}{4e} $ I have no way to verify the correctness of my answer, so is this correct or have I made a mistake somewhere? AI: You have: $$\int_{-\infty}^{\infty}{\frac{\cos(ax)}{(x^2 + 1)^2}dx} = \operatorname{Re} \left[2 \pi i\ \operatorname{Res}\left(\dfrac{e^{i a z}}{(z^2 + 1)^2}, i\right)\right]$$ And $$\operatorname{Res}\left(\dfrac{e^{i a z}}{(z^2 + 1)^2}, i\right)=\operatorname{Res}\left(\dfrac{e^{i a z}}{(z +i)^2(z -i)^2}, i\right)=\lim_{z \to i} \frac{d}{dz}\left( (z-i)^{2}f(z) \right)$$ where $f(z)=\dfrac{e^{i a z}}{(z +i)^2(z -i)^2}$. So $$\frac{d}{dz}\left( (z-i)^{2}\dfrac{e^{i a z}}{(z +i)^2(z -i)^2} \right)=\frac{i a e^{i a z}}{(z+i)^2}-\frac{2 e^{i a z}}{(z+i)^3}$$ Taking the limit, multiplying by $2\pi i$, taking the real part and finally multiplying by $\frac{1}{2}$ will give you $\frac{1}{4} \pi (a+1) e^{-a}$.
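A quick numerical sanity check of the closed form, as a minimal sketch assuming NumPy and SciPy are available:

    import numpy as np
    from scipy.integrate import quad

    a = 1.7  # arbitrary test value for the parameter a
    # integrate cos(a x) / (x^2 + 1)^2 over [0, inf) numerically
    val, err = quad(lambda x: np.cos(a * x) / (x**2 + 1)**2, 0, np.inf)
    closed_form = np.pi * (a + 1) * np.exp(-a) / 4
    print(val, closed_form)  # the two values should agree to high precision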
H: Plus construction of a presheaf factors every sheaf-valued morphism. I'm having some trouble understanding the correctness of a proof in Sheaves in Geometry and Logic (Mac Lane, Moerdijk). It concerns Lemma III.5.3: If $F$ is a sheaf and $P$ a presheaf, then any map $\phi \colon P \to F$ of presheaves factors uniquely through $\eta$ as $\phi = \tilde\phi \circ \eta$. $$ \begin{matrix} P & \stackrel \eta \to & P^+ \\ & \!\!\!\!\!\!_\phi\searrow & \downarrow \small{\tilde\phi} \\ & & \!\!\!\!F\end{matrix}$$ Recall the plus construction: $P^+(C)$ is the set of equivalence classes of matching families under the relation $(x_f)_{f\in S} \sim (y_g)_{g \in R}$ ($R,S$ covering sieves of $C$) iff there is some covering sieve $T \subseteq S \cap R$ on which $x_f = y_f$ for all $f \in T$. Recall the morphism $\eta$: $$\eta_C \colon P(C) \to P^+(C),\, x \mapsto [(Pf(x))_{f \colon D \to C}].$$ I easily understand the definition of $\tilde\phi_C(\mathbf x)$ for an equivalence class $\mathbf x = [(x_f)_{f\in S}]$: we push the matching family $(x_f)$ by $\phi$ into a matching family of $F$; being a sheaf, $F$ admits a unique amalgamation in $F(C)$ that we define to be $\tilde\phi_C(\mathbf x)$. Well-definedness is no problem. What bothers me is checking that $\tilde\phi = (\tilde\phi_C)_C$ is actually natural in $C$. We want a commutative diagram $$ \begin{matrix} P^+C & \stackrel {P^+h} \to & P^+D \\ \small{\tilde\phi_C}\downarrow \ \ \ & & \ \ \ \downarrow\small{\tilde\phi_D} \\ FC & \stackrel{Fh} \to & FD \end{matrix}$$ for all $h \colon D \to C$. For an element $\mathbf x = [(x_f)_{f\in S}]$ with $h \in S$, the stability axiom of Grothendieck topologies ensures $h^\ast S = \hom(-, D)$, and so the commutativity is immediate from the definition of $\tilde\phi_C(\mathbf x)$ as an amalgamation. But I'm stuck with the case where $h \notin S$, and (it seems to me that) it is not treated in the proof of Mac Lane. Maybe we can always find $\mathbf y = (y_g)_{g\in R}$ with $\mathbf x \sim \mathbf y$ and $h \in R$, but I can't see it. AI: Let me write $w$ for the equivalence class in $P^{+}C$ of a matching family $(x_{f})_{f\in R}$ for a covering sieve $R$ of $C$. Now the down-right path sends $w$ to $Fh(y)$, where $y$ is the unique amalgamation for the matching family $(\phi_{dom (f)}(x_{f}))_{f\in R}$. The right-down path maps $w$ to $z\in FD$, where $z$ is the unique amalgamation for $(\phi_{dom (hf')}(x_{hf'}))_{f'\in h^{\ast}(R)}=:k$ (recall the definition of $P^{+}h$ for an arrow $h$). In order to show that going the two ways gives the same result it suffices to show that $Fh(y)$ is an amalgamation of $k$: if $f'\in h^{\ast}(R)$, then $$Ff'(Fh(y))=(F(hf'))(y)=\phi_{dom (hf')}(x_{hf'}),$$ where the last equality is by definition of $y$, and we are done. I hope it's clear enough (fill in the details!) and that I didn't make any mistakes.
H: Legendre symbol calculation I'm trying to calculate the Legendre symbol $(3/383)$ without using the Quadratic Reciprocity Law, with not much success. I've thought about checking whether $2^{191}$ is congruent to $1$ modulo $383$, but it seems too complicated. I'd be grateful if someone could point me to the solution. Thanks in advance. AI: For the symbol $$ \left( \frac{2}{383} \right) $$ you're after: $p = 383$ is a prime, so by Euler's criterion you indeed calculate $$ 2^{(p-1)/2} = 2^{191} \pmod{p}, $$ and you get $1$, so $2$ is a square modulo $p$. If you have to do the calculation by hand, use the method of repeated squaring.
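For reference, the repeated-squaring computation is built into Python's three-argument pow, which performs modular exponentiation:

    # Euler's criterion: 2^((p-1)/2) mod p equals 1 exactly when
    # 2 is a quadratic residue mod p
    p = 383
    print(pow(2, (p - 1) // 2, p))  # prints 1, so (2/383) = 1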
H: A two-dimensional set of measure zero I have a 2D domain $[0,1]\times[0,1]$. This domain contains some set $A$ of measure zero, the latter understood as the Lebesgue measure in $\mathbb{R}^{2}$. Is the following true: for almost all $t\in[0,1]$, the set $A_{t}:=\{x\in[0,1]\mid(t,x)\in A\}$ has measure zero, in the standard $\mathbb{R}^{1}$ sense? Actually, I'm hoping to get a negative answer. This question can be made more general, for $A$ being $\Omega\times[0,1]$ in $\mathbb{R}^{n}$ with $\Omega$ being a bounded domain in $\mathbb{R}^{n-1}$. Thanks for your response! AI: Your statement is true. This can, for instance, be seen from Fubini's theorem: Let $\chi_A$ denote the characteristic function of $A$. Then $$ 0 = \int_{[0,1]^2} \chi_A(t,x)\,dt\,dx = \int_0^1 \bigg( \int_0^1 \chi_A(t,x)\,dx \bigg)\, dt $$ and hence for almost every $t\in[0,1]$, the inner integral $\int_0^1 \chi_A(t,x)\,dx$ vanishes, which implies that $A_t$ as defined by you has measure zero.
H: Solve the roots of a cubic polynomial? I have had trouble with this question - mainly due to the fact that I do not fully understand what a 'geometric progression' is: "Solve the equation $x^3 - 14x^2 + 56x - 64 = 0$" if the roots are in geometric progression. Any help would be appreciated. AI: It means the roots are of the form $a, ar, ar^2$. Here it is not too difficult to see that $2, 4, 8$ are ok, just look at the constant term, which is $- (a \cdot ar \cdot ar^2) = - (a r)^3$, and check that $2, 4, 8$ fit with the other coefficients $$ 14 = 2 + 4 + 8, \qquad 56 = 2 \cdot 4 + 2 \cdot 8 + 4 \cdot 8. $$
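For a systematic route instead of guessing, write the roots as $a, ar, ar^2$ and use Vieta's formulas: the product of the roots is $(ar)^3 = 64$, so the middle root is $ar = 4$; the sum $\frac{4}{r}+4+4r = 14$ then gives $2r^2-5r+2=0$, so $r=2$ or $r=\tfrac12$, and the roots are $2, 4, 8$ either way.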
H: Finding $A,B,C,D \subseteq \{1,2,...,n\}$ such that $A \cup B \cup C \cup D = \{1,2,...,n\}$ I have this combinatorial question: Find the number of quadruples $(A,B,C,D)$ of sets $A,B,C,D \subseteq \{1,2,...,n\}$ such that $A \cup B \cup C \cup D = \{1,2,...,n\}$ I started by saying: We choose the $k$ elements that are in $A$, which is $n \choose k$ ways, and then we distribute all the $n-k$ elements to the remaining 3 sets. Which is obviously wrong, because even after choosing the $k$ elements of $A$, other sets can still share elements with $A$. I'll be happy if anyone could give me a direction. EDIT: Perhaps the answer is $4^n$? For every element we choose whether it is in $A, B, C$ or $D$. AI: For each element $k$ we can choose, independently for each of $A, B, C, D$, whether $k$ belongs to it. This amounts to $2^4 = 16$ possible choices for $k$. For only one of these choices do we have $k \notin A \cup B \cup C \cup D$. Since we have such a consideration for each $1 \le k \le n$, the solution is $15^n$. In general, when distributing elements over sets, if counting from the point of view of sets is hard, a good first reflex is to try and approach the problem from the point of view of the elements. Often this makes for an easy solution.
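A brute-force check of the $15^n$ count for small $n$, as a short Python sketch (sets encoded as bitmasks):

    from itertools import product

    def count(n):
        # enumerate all 4-tuples (A, B, C, D) of subsets of {0,...,n-1},
        # encoded as n-bit masks, and count those whose union is everything
        subsets = range(1 << n)
        full = (1 << n) - 1
        return sum(1 for A, B, C, D in product(subsets, repeat=4)
                   if A | B | C | D == full)

    for n in (1, 2, 3):
        print(n, count(n), 15**n)  # the two counts agree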
H: Whether an infinite series can be tested by the integral test I am asked whether the following infinite series can be proved to be convergent by the integral test. $$\sum_{n=1}^\infty n e^{6 n}$$ So I integrate it, $$\int_1^{\infty} x e^{6x}\, dx,$$ and find it diverges, so I concluded that the above series also diverges by the integral test. However, the answer is that the integral test cannot be used to test this infinite series. What is wrong in my deduction? AI: Hint: Is the sequence $\,\{ne^{6n}\}_{n\in\Bbb N}\,$ monotone decreasing? (The integral test only applies to series whose terms are positive and decreasing; here the terms increase, so the test does not apply at all — even though the series certainly diverges, since its terms do not tend to $0$.)
H: Is there an explicit formula for the inverse of $\cot\left(\frac{x}{2}\right)\sqrt{1-\cos(x)}$? I apologize if this is trivial but I am stuck. Given the bijective function $f:(0,2\pi) \to (-2,2)$ with $$ f(x)=\cot\left(\frac{x}{2}\right)\sqrt{1-\cos(x)} $$ where $\cot$ is the cotangent, how can I find an inverse $g:(-2,2)\to (0,2\pi)$? Is there an explicit formula? AI: Hint: Use the fact that $$\frac{1-\cos(x)}{2}=\sin^2(x/2)$$ and that if $x\in(0,2\pi)$, then $|\sin(x/2)|=\sin(x/2)$.
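Carrying the hint through: for $x\in(0,2\pi)$, $$f(x)=\cot\left(\frac{x}{2}\right)\cdot\sqrt{2}\,\sin\left(\frac{x}{2}\right)=\sqrt{2}\cos\left(\frac{x}{2}\right),$$ so an explicit inverse is $g(y)=2\arccos\left(y/\sqrt{2}\right)$. (Note that this makes the range of $f$ equal to $(-\sqrt{2},\sqrt{2})$ rather than the $(-2,2)$ stated in the question.)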
H: Is this language decidable? Is this language decidable? $$\{x\mid \text{$x$ is the code of a Turing machine that always halts on $y$ in less than $y^3$ steps}\}$$ I think it is, because it halts in a finite number of steps. Am I right? Is there a more classic way of saying so? AI: No, it is not computable. First, the intuition is that you cannot know that $\Phi_x$ halts on all $y$ in less than $y^3$ steps without actually checking it for all of the infinitely many $y$. Let $A$ denote the set above. Now let $\bar{K}$ denote the complement of the halting set $K$. Given $x$, define a Turing machine $T_x$ as follows: On input $y$, run $\Phi_x(x)$ for $y$ steps. If $\Phi_x(x)$ has not halted, then $T_x(y)$ halts immediately, well within $y^3$ steps. If $\Phi_x(x)$ has halted by $y$ steps, then $T_x(y)$ will keep running without ever halting. It is clear that there is a computable $f$ that takes $x$ to the index of the Turing machine $T_x$ constructed above. If $x \in \bar{K}$, then $f(x)$ is the index of a Turing machine that halts on every input $y$ in fewer than $y^3$ steps. So $f(x) \in A$. If $x \notin \bar{K}$, then $x \in K$. So $\Phi_x(x)$ will halt at some stage $k$. Then $T_x$ is a Turing machine that does not even halt on any $y > k$. So $f(x) \notin A$. This is a many-one reduction of $\bar{K}$ to $A$. Since $\bar{K}$ is not computable, neither is $A$.
H: (Non) equivalence of regular cardinal definitions The usual definition of a regular cardinal is "$\kappa$ is regular if $cf(\kappa) = \kappa$", which, assuming the axiom of choice, is equivalent to this definition: "$\kappa$ is regular iff it cannot be expressed as a union of less than $\kappa$ sets, all of which are of cardinality less than $\kappa$." Now consider Definition 2: a cardinal $\kappa$ is regular if for every subset $X \subset \kappa$ such that $|X| < \kappa$ we have $\bigcup X < \kappa$. I think these definitions are not equivalent in ZF, because $X$ and $\bigcup X$ can be well-ordered without invoking the axiom of choice. In particular, I guess one could prove in ZF that with Definition 2 all successor cardinals are regular, which is not true with the usual definition. Questions: Is my intuition correct? How can I show the non-equivalence? AI: Let me rewrite the answer completely. You wrote three definitions for a regular cardinal, and you made two false claims. First let me write the definitions, so we'll be clear on those. Let $\kappa$ be an infinite initial ordinal (i.e. a cardinal in the context of $\sf ZFC$). $\kappa$ is $1$-regular if every unbounded subset of $\kappa$ has order type $\kappa$. $\kappa$ is $2$-regular if every union of less than $\kappa$ subsets of $\kappa$, each of cardinality less than $\kappa$, is itself of cardinality less than $\kappa$. $\kappa$ is $3$-regular if every $X\subseteq\kappa$ such that $|X|<\kappa$ has the property that $\bigcup X<\kappa$. You also claimed that $1$ and $2$ are equivalent if we assume the axiom of choice, and you wanted to show that $3$ is inequivalent to those without the axiom of choice. But the truth is that the axiom of choice is not needed in order to show that all three are equivalent. If $\kappa$ is $1$-regular, then whenever $X\subseteq\kappa$ and $|X|<\kappa$, the order type of $X$ (as a well-ordered set) has to be less than $\kappa$, so $\bigcup X=\sup X<\kappa$ by the assumption that $\kappa$ is $1$-regular. If $\kappa$ is $3$-regular, let $A$ be an unbounded subset of $\kappa$; then $\bigcup A=\sup A=\kappa$, so by the definition of $3$-regularity we have to have $|A|=\kappa$. But since $\kappa$ is an initial ordinal, it does not have subsets of size $\kappa$ whose order type is not $\kappa$ itself. Therefore $A$ has order type $\kappa$. This establishes the equivalence between $1$ and $3$. There was absolutely no use of the axiom of choice; note that $\bigcup X$ is an ordinal because it is a union of ordinals. Now to show that these are equivalent to $2$-regularity. If $\kappa$ is $2$-regular, let $X\subseteq\kappa$ be such that $|X|<\kappa$. Since $\kappa$ is an initial ordinal, all the members of $X$ have cardinality strictly less than that of $\kappa$. Therefore we take a union of less than $\kappa$ sets of size less than $\kappa$, and by $2$-regularity $|\bigcup X|<\kappa$. But this means that $\sup X<\kappa$, as wanted. If $\kappa$ is $3$-regular, let $P$ be a set of less than $\kappa$ subsets of $\kappa$, each of cardinality less than $\kappa$. Then for every $A\in P$ we have that $\bigcup A=\sup A<\kappa$ by the definition of $3$-regularity. Let $X=\{\sup A\mid A\in P\}$; then $|X|\leq|P|<\kappa$ and therefore $\bigcup X<\kappa$. It follows that $|\bigcup P|<\kappa$ as well, because every element of $\bigcup P$ is at most $\sup X$, so $\bigcup P\subseteq\sup X+1$, an ordinal of cardinality less than $\kappa$. Therefore $\kappa$ is $2$-regular as wanted.
$\square$ In the comments you remarked that your teacher challenged you with the following task: if $X\subseteq\aleph_{\alpha+1}$ and $|X|<\aleph_{\alpha+1}$, then you are to find an injection from $\bigcup X$ into $\aleph_\alpha\times\aleph_\alpha$. But that means that you have to assume that $\aleph_{\alpha+1}$ is regular. Whereas in $\sf ZFC$ every successor cardinal is indeed regular, it is consistent with $\sf ZF$ that successors are not regular. In fact, assuming very large cardinals are consistent, we can construct a model of $\sf ZF$ in which there are no regular cardinals except $\aleph_0$. So in order to find that injection one has to assume that $\aleph_{\alpha+1}$ is regular, which is an assumption we cannot prove without the help of the axiom of choice, and therefore we cannot write an explicit injection from $\bigcup X$ into $\aleph_\alpha\times\aleph_\alpha$.
H: $g \colon [0,1] \to [0,1]$ be a continuous map and consider the iteration $x_{n+1}=g(x_n)$. I came across the following problem: Let $g \colon [0,1] \to [0,1]$ be a continuous map and consider the iteration $x_{n+1}=g(x_n)$. Then which of the following maps will yield a fixed point for $g$? The options are as follows: a. $g(x)=\frac{x^2}{4}$, b. $g(x)=\frac{x^2}{32}$, c. $g(x)=\frac{x^2}{8}$, d. $g(x)=\frac{x^2}{16}$. Can someone point me in the right direction? Thanks in advance for your time. AI: The answer is certainly all of them, according to a very general and powerful theorem called the Brouwer fixed point theorem. You'll probably get some funny looks if you turn that in, though, as I think it's quite a bit easier than that. Don't all the maps have an obvious common fixed point? Is that really the whole problem?
H: solving $1+\frac{1}{x} \gt 0$ In solving a larger problem, I ran into the following inequality which I must solve: $$ 1+\frac{1}{x} \gt 0.$$ Looking at it for a while, I found that $x\gt 0$ and $x\lt -1$ are solutions. Please how do I formally show that these are indeed the solutions. AI: There are two cases. If $x>0$ we may multiply both sides by $x$ and the inequality is unchanged, i.e. $x+(1/x)x>0(x)$, which simplifies to $x+1>0$ or $x>-1$. This case is the combination of $x>0$ and $x>-1$, which is $x>0$. If $x<0$ we multiply both sides by $x$ and the inequality reverses, i.e. $x+(1/x)x<0(x)$, which simplifies to $x+1<0$ or $x<-1$. This case is the combination of $x<0$ and $x<-1$, which is $x<-1$.
H: Guides/tutorials to learn abstract algebra? I recently read up a bit on symmetry groups and was interested by how they apply to even the Rubik's cube. I'm also intrigued by how group theory helps prove that "polynomials of degree $\gt4$ are not generally solvable". I love set theory and stuff, but I'd like to learn something else of a similar type. Learning about groups, rings, fields and what-have-you seems like an obvious choice. Could anyone recommend any informal guides to abstract algebra that are written in (at least moderately) comprehensible language? (PDFs etc. would also be nice) AI: I can highly recommend "A Book of Abstract Algebra", by Charles C. Pinter. You'll learn about groups, rings and fields. You will also learn enough Galois Theory to understand why polynomials of degree higher than $4$ are, in general, not solvable by radicals. It is 'formal' in the sense that it is rigorous, but the author is also very good at explaining the intuition behind all ideas. It is much less dense than most abstract algebra books and is, in my opinion, an excellent introduction to the subject. Furthermore, it isn't expensive and it contains solutions to numerous exercises. See the Amazon page of this book for more positive reviews. Again, highly recommended! Added: once you've finished this book, you're ready for more advanced treatments of abstract algebra. After Pinter's book, you could try "A First Course in Abstract Algebra" by John B. Fraleigh. After that one, a great option is "Abstract Algebra" by Dummit and Foote. This is quite an advanced textbook, but a good one nevertheless. Once you've worked your way through these books (I advise you not to just read through them, but actually soak up the information by doing the exercises and reading actively) you will have a strong basis of knowledge in abstract algebra. By then you can tackle more advanced topics.
H: How to show that $A=B-C$ How to show that for a real symmetric matrix $A,~A$ can be written as $A=B-C$ where $B,C$ are positive definite real symmetric matrices? Please help me ! I'm clueless. AI: Let $C=cI$ where $I$ is the identity matrix and $c\gt0$ is chosen so that for each eigenvalue $\lambda$ of $A$, $c+\lambda\gt0$.
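To make the hint concrete: one admissible choice is $c = 1+\max_i|\lambda_i(A)|$, giving $$B = A + cI,\qquad C = cI,$$ where $B$ is symmetric with eigenvalues $\lambda_i + c \geq 1 > 0$, hence positive definite, $C$ is clearly symmetric positive definite, and $A = B - C$ by construction.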
H: Simple Diffy-Q problem So as a fun project, I'm trying to work my way through Kreyszig's "Advanced Engineering Mathematics". But I've gotten stuck on a really simple problem: $$xy' = 2y$$ where I know the solution is $x^2$, but for the life of me I can't figure out how to integrate this really simple problem properly. I keep ending up with: $$\lg(x) = 1/2\lg(2y), $$ but I don't think that's right. Help? AI: This is a problem where we may use separation of variables. That is, we begin by moving the $x$ terms to the right side and the $y$ terms to the left so that we have $$\frac{y'}{y} = \frac{2}{x}.$$ We then integrate both sides and have $$\int \frac{1}{y}\, dy = \int \frac{2}{x}\, dx$$ which results in $$\ln(y) =2\ln(x) + C.$$ Solving for $y$ by exponentiating both sides yields $$y=e^{\ln(x^2) + C} = e^{\ln(x^2)}e^C = Ax^2$$ where $A = e^C$.
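As a check, the equation can also be handed to a computer algebra system; a minimal sketch assuming SymPy is available:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')
    # solve x*y' = 2*y symbolically
    print(sp.dsolve(sp.Eq(x * y(x).diff(x), 2 * y(x)), y(x)))
    # expected output: Eq(y(x), C1*x**2)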
H: Probability of getting split pill from bottle? I have a bottle of 100 pills. The daily dose is 1/2 pill, so if the first pill I extract is a whole pill, I split it and put 1/2 back. Just out of my own general curiosity, I'd like to model the probability of extracting a whole pill vs. a half pill over time, but I'm not sure how to start. AI: I have written a paper on this which will be published in the American Mathematical Monthly sometime in the next year or so. The title is "A drug-induced random walk." The main theorem is this: Consider a bottle of $n$ pills. Every day, you remove a pill from the bottle at random (with each pill equally likely to be chosen). If it is a whole pill, you cut it in half, take half of the pill, and return the other half to the bottle. If it is a half pill, then you take it and nothing is returned to the bottle. At any time, let $x$ be the fraction of the original pills in the bottle that are still whole, and let $y$ be the fraction that are now half pills. ($x+y$ may be less than $1$, since some pills may have been used up completely.) Then the point $(x,y)$ executes a random walk in the plane, starting at the point $(1,0)$ (all pills whole) and ending at $(0,0)$ (no pills left). The theorem says that for large $n$, the random walk will approximately follow the curve $y = -x \ln x$. More precisely, the theorem says that for every $\epsilon > 0$, the probability that the walk stays within $\epsilon$ of the curve $y = -x \ln x$ approaches $1$ as $n$ approaches infinity. The paper also answers the questions "What is the expected number of whole pills removed before the first half pill is removed?" and "What is the expected number of half pills removed after the last whole pill is removed?"
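The theorem is easy to probe empirically; here is a minimal simulation sketch of the walk (standard library only), comparing sampled points of one run against the curve $y=-x\ln x$:

    import math
    import random

    def pill_walk(n, seed=0):
        """Simulate the bottle; return (x, y) after each draw, where x is the
        fraction of pills still whole and y the fraction now half pills."""
        rng = random.Random(seed)
        whole, half = n, 0
        path = [(1.0, 0.0)]
        while whole + half > 0:
            # every pill in the bottle is equally likely to be drawn
            if rng.random() < whole / (whole + half):
                whole -= 1   # a whole pill is split...
                half += 1    # ...and its other half goes back in
            else:
                half -= 1    # a half pill is used up
            path.append((whole / n, half / n))
        return path

    for x, y in pill_walk(100000)[::30000]:
        curve = -x * math.log(x) if x > 0 else 0.0
        print(f"x={x:.3f}  y={y:.3f}  -x ln x={curve:.3f}")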
H: Counting exercise Three players A, B, C take turns in a game according to the following rules: at the start A and B play (so C does not play). The winner of the first trial plays against C, and so on until one of the players wins two trials in a row. Possible outcomes are aa, acc, acbb, acbaa, bb, bcc, bcaa, bcabb, etc. We have to prove that the winning probabilities are p(A) = 5/14, p(B) = 5/14, p(C) = 2/7. I have been stuck on this problem for a long time. The only thing I have been able to find out so far is that C can never win on even turns. Re-edit: Players continue to play until one of them wins two times consecutively, and all players are equally good at playing the game. Apologies for leaving out such crucial information. I have finally solved this problem with a different approach. Sample space: aa, acc, acbb, acbaa, acbacc, acbacbb, acbabcaa, ... and bb, bcc, bcaa, bcabb, bcabcc, bcabcaa, bcabcabb, ... Each point in the sample space has probability $1/2^k$, where $k$ is the number of turns; for example, the probability of the point (aa) is 1/4. Now consider the event that C wins overall. C can only win when k = 3, 6, 9, 12, ..., so P(C) = P(C3) + P(C6) + P(C9) + ..., where P(Ck) is the probability of C winning overall after k turns: P(C) = 1/4 + 1/32 + ..., the sum of an infinite GP with first term a = 1/4 and ratio r = 1/8, so P(C) = a/(1 - r) = 2/7, and P(A) = 5/14 = P(B). AI: First, let us assume $A$ wins the first game. Let $a$ be the probability that $A$ wins overall, $b$ the probability that $B$ wins overall, and $c$ the probability that $C$ wins overall. Then we can write (in this case) $a=\frac 12 + \frac b2$, because $A$ either wins the second game (and wins overall) or loses the second game and is now in $B$'s position. Similarly, $c=\frac 12a$, because if $C$ wins his first game he is in $A$'s position, while if he loses his first game $A$ wins overall. Finally, $b=\frac 12c$, because $B$ needs $C$ to win the next game, after which $B$ is the player coming in, which is $C$'s current position. This gives $$a=\frac 12 + \frac b2,\qquad c=\frac a2,\qquad b=\frac c2,$$ so $$a=\frac 12+\frac c4=\frac 12+\frac a8\implies a=\frac 47,\qquad c=\frac 27,\qquad b=\frac 17.$$ This is correct for $C$, but we assumed $A$ won the first game. Clearly $A$ and $B$ have the same winning probability at the start, so we can split their total chances evenly, giving $$P(A)=\frac 5{14}, P(B)=\frac 5{14},P(C)=\frac 27$$
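The answer's probabilities are also easy to confirm by simulation; a minimal Monte Carlo sketch:

    import random

    def play(rng):
        # players 0 (A), 1 (B), 2 (C); A vs B first, winner then plays C, etc.
        in_game, waiting = [0, 1], 2
        last_winner = None
        while True:
            winner = rng.choice(in_game)  # each trial is a fair coin flip
            loser = in_game[0] + in_game[1] - winner
            if winner == last_winner:
                return winner             # two wins in a row ends the game
            last_winner = winner
            in_game, waiting = [winner, waiting], loser

    rng, N = random.Random(1), 200000
    wins = [0, 0, 0]
    for _ in range(N):
        wins[play(rng)] += 1
    print([w / N for w in wins])  # approx [5/14, 5/14, 2/7] = [0.357, 0.357, 0.286]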
H: Dimension problems Let $f: \mathbb{R}^n \to \mathbb{R}^n$ be a linear transformation. Prove that there exists a subspace $S \subseteq \mathbb{R}^n$ that verifies $\ker(f^2) = \ker(f) \bigoplus S$ and $\dim S = \dim f(S) \leq \dim(\ker(f))$. My attempt at a solution: So far I extended from $\ker(f)$ to $\ker(f^2)$ like this: let a basis of $\ker(f)$ be $B = \{v_1,\ldots,v_r\}$; extending to a basis of $\ker(f^2)$ we get $B' = \{v_1,\ldots,v_r,v_{r+1},\ldots,v_m\}$. So $v_1,\ldots,v_r$ span $\ker(f)$, and the span of $v_{r+1},\ldots,v_m$ is my desired $S$. This verifies part of the second thing I'm asked, which is that $\dim S = \dim f(S)$; the thing is, I'm having trouble with the inequality $\dim f(S) \leq \dim \ker(f)$. I've been trying to get the inequality using the dimension theorem but I can't seem to get it. AI: You're defining $S$ to be the subspace with basis $\{v_{r+1}, \dots, v_m\}$, where $m \leq n$, right? The key property of this set is that $S \subset \ker(f^2)$ but $S \cap \ker(f) = \{0\}$. That means that if $w \in S$, then $f(f(w)) = 0$, i.e. $f(w) \in \ker(f)$ (and $f(w) \neq 0$ unless $w = 0$). You can conclude that $f(S) \subset \ker(f)$, which gives your inequality.
H: Linear programming problem neither max nor min Heres the actual question: television provider broadcasts two movie channels, A and B. Channel A broadcasts 1 romantic movie, 3 action movies and 3 comedies per month at a cost of 50 Euro. Channel B broadcasts 3 romantic movies, 4 action movies and 1 comedy per month at a cost of 25 Euro. Suppose that you like to see 8 romantic movies, 19 action movies and 7 comedies at minimal cost in the coming months. For how many months should you request both channels? a. Formulate this problem as an LP problem Now my way of thinking was that I would first write it down more clearly, A and B are the television channels. $A: 1R,3A,3C => 50 euro$ $B: 3R, 4A, 1C => 25 euro$ And I would define a new variable $x_i :=$ cost of taking a channel one month. But then I stumble upon the problem of defining the hard limits they have set, the 8 R, 19 A, and 7 C. In short: how do I define a constraint that is hard (e.g. not a maximum) AI: Normally I'd just give you hints but I actually have a couple of problems with the problem formulation. First of all, the proper model here is not a linear program (LP), but an integer linear program (IP). I'm assuming that you cannot subscribe to fractional months, after all, since that is standard practice in the cable industry. This is the model I'm thinking of: $$\begin{array}{ll} \text{minimize} & 50 A + 25 B \\ \text{subject to} & A + 3 B \geq 8 \\ & 3 A + 4 B \geq 19 \\ & 3 A + B \geq 7 \\ & A, B \geq 0 \\ & A, B ~ \text{integer} \end{array}$$ Now of course, it's possible they're permitting you to subscribe to fractional months. But you'd better ask the person requesting the model if that's the case. If they do allow fractional months, then indeed you can go ahead and drop the integer constraint, and you have an LP. (LATE EDIT: actually, fractional months really don't make sense, because then you'd get fractional movies!) But even this is not enough, because this only tells you the total number of months to subscribe to A and B, not the number of months you must subscribe to both channels. To answer that question, you need more information than is provided. Specifically, you need to know the maximum number of months you are allowed to wait to complete your movie watching task. For instance, suppose the model yields $A=3$, $B=2$. If you're allowed to take 5 months to watch the movies on your list, then you don't need to subscribe to both channels at the same time ever. Just subscribe to A for three months, then switch to B. On the other hand, if you're given an additional rule that you must subscribe for a minimum number of total months, then of course the total number of months required is $\max\{A,B\}$, and the number of months subscribed to both channels is $\min\{A,B\}$. But again, without additional information, the answer cannot be given---and neither answer is obtained directly from the LP or IP model, but rather from post processing the model output. So the task "Formulate this problem as an LP problem" is, actually, ill-posed. Not only does it likely require an IP, but it requires additional modeling work after the optimization model is solved. "Solve this problem using an LP" would be OK, if they make it clear that fractional months are permitted.
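For completeness, here is how the IP above might look in code; a minimal sketch using the PuLP modeling library (an assumed dependency — any IP solver would do):

    from pulp import LpProblem, LpVariable, LpMinimize, LpInteger, value

    prob = LpProblem("movie_channels", LpMinimize)
    A = LpVariable("A", lowBound=0, cat=LpInteger)  # months of channel A
    B = LpVariable("B", lowBound=0, cat=LpInteger)  # months of channel B

    prob += 50 * A + 25 * B      # objective: total subscription cost
    prob += A + 3 * B >= 8       # romantic movies
    prob += 3 * A + 4 * B >= 19  # action movies
    prob += 3 * A + B >= 7       # comedies

    prob.solve()
    print(value(A), value(B), value(prob.objective))  # expect A=1, B=4, cost 150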
H: On proving $\ker(TT^*+T^*T)=\ker(T^*)\cap \ker(T)$ Let $T:V\to V$ be a linear transformation of finite dimensional inner product space. I want to show that $$\ker(TT^*+T^*T)=\ker(T^*)\cap \ker(T).$$ I showed already that $$\ker(T^*)\cap \ker(T)\subseteq \ker(TT^*+T^*T).$$ How do I showing the other inclusion? ($T^*$ is adjoint of $T$) Thank you! AI: Let $x \in \ker(TT^* + T^*T)$. Then $TT^*x = - T^*Tx$, hence \begin{align*} \|Tx\|^2 &= (Tx,Tx)\\ &= (x, T^*Tx)\\ &= -(x, TT^*x)\\ &= -(T^*x, T^*x)\\ &= -\|T^*x\|^2 \end{align*} As both $\|Tx\|^2$ and $\|T^*x\|^2$ are non-negative, this is only possible for $Tx = T^*x = 0$.
H: Do we know if there are more primes with even leading digits or odd leading digits? I was just wondering, out of curiosity, do we know if there are more primes with even leading digits or odd leading digits? For example, primes with even leading digits would be $23$ or $29$ and primes with odd leading digits would be $31$ or $97$. If we haven't figured it out then how do we do something like that? Thanks AI: The Prime Number Theorem can be used to find the supremum and infimum of the relative densities. In particular for primes starting with odd digits the upper bound is 7/9 = 0.777... (if you sample numbers starting 1999...) and the lower bound is 41/81 = 0.506... (if you sample numbers starting 8999...). The supremum of a set is its least upper bound. For a finite set, a supremum is its maximum. For infinite sets, though, the supremum may be ("slightly") greater than all the members of the set. For example, the set $\{x:x<3\}$ has supremum 3 even though all members are less than 3. You can see that there is no maximum for this set, since if you choose some $y$ in the set it is less than $(3+y)/2$ which is less than 3, and so also in the set. Similarly, the infimum is the greatest lower bound of the set, the generalization of the minimum. What you might like in a situation like this would be the relative density of the primes starting with odd digits in the set of primes. This would be $$ \lim_{x\to\infty}\frac{\text{# of primes}\le x\text{ starting with an odd digit}}{\text{# of primes}\le x} $$ But what happens if this limit does not exist? For example, it might go up to 70%, then back down to 55%, then back up to 70%, and repeat ad infinitum. In fact this is precisely what happens! So instead of looking for the limit, which does not exist (just like the maximum of $\{x:x<3\}$ does not exist), we look for the limit supremum and limit infimum. These always exist, and they are equal exactly when the limit exists (in which case they are equal to the limit). But what is a limit supremum (lim sup for short)? In this case, it is $$ \lim_{y\to\infty}\sup_{x\ge y}\frac{\text{# of primes}\le x\text{ starting with an odd digit}}{\text{# of primes}\le x} $$ or in other words, the largest value that the function becomes arbitrarily close to infinitely often. (If it happened only finitely often, then it happens for the last time at some $y_0$, and the limit goes beyond $y_0$ to $\infty.$) Of course the limit inferior (lim inf) is defined similarly with $\inf$ in place of $\sup.$ So much for the preliminaries. Now it's time to look at the Prime Number Theorem. It says that there are $(1+o(1))(x/\log x)$ primes below $x.$ The meaning of $o(1)$ is technical, but basically it's just some number that becomes arbitrarily small as $x$ increases. Now for some constant $0<\alpha<1$ the number of primes between $\alpha x$ and $x$ is $(1+o(1))(x/\log x) - (1+o(1))(\alpha x/\log(\alpha x))=(1+o(1))(1-\alpha)x/\log x.$ (The different $o(1)$ are actually different numbers, so they don't cancel the way you'd expect. The notation is funny, but that's the way it's usually written.) So basically there are just the number of primes you'd expect between $\alpha x$ and $x$ since the logarithm in the density grows so slowly. Now we can use this to derive the densities we need. 
First, let's look at the relative density of the odd-starting primes between $10^{n-1}$ and $10^n.$ 5/9 start with an odd digit and 4/9 start with an even digit, since by the above use of the Prime Number Theorem the densities are what we'd expect. Now you can do the same with the primes between $10^{n-2}$ and $10^{n-1}$, etc., so for the primes below $10^n$ you have 5/9 starting with an odd digit. But we don't want to restrict ourselves to looking just at powers of 10. What if we looked at numbers below $2\cdot10^n$? Then half would be below $10^n$, of which 5/9 would start with an odd digit, and the half between $10^n$ and $2\cdot10^n$ all start with odd digits. Thus (5/9+1)/2 = 7/9 start with odd digits. It's not hard to see that this is the best you can do. Similarly, on the other side, if you look at $9\cdot10^n$ you have nine groups: the ones below $10^n$ have 5/9 odd first digits, the $(n+1)$-digit ones starting with 1, 3, 5, or 7 are all odd-initial, and the $(n+1)$-digit ones starting 2, 4, 6, 8 are all even-initial. Overall that's (5/9+4)/9 = 41/81. And we're done!
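These densities can also be observed numerically; a small sieve-based sketch in Python (at cutoffs $2\cdot10^k$ the odd-leading-digit fraction should sit near $7/9\approx 0.778$):

    def primes_up_to(n):
        """Plain sieve of Eratosthenes."""
        sieve = bytearray([1]) * (n + 1)
        sieve[0] = sieve[1] = 0
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
        return [i for i in range(n + 1) if sieve[i]]

    for x in (2 * 10**4, 2 * 10**5, 2 * 10**6):
        ps = primes_up_to(x)
        odd = sum(1 for p in ps if int(str(p)[0]) % 2 == 1)
        print(x, round(odd / len(ps), 4))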
H: Question about bipartite graphs. I have a fairly basic inquiry, but I would sleep better at night if I saw a proof of it. Q: I know that if I take a connected subgraph with at least 2 vertices of any simple bipartite graph $G$, it has to be bipartite. How would one go about proving that this holds for any simple bipartite graph $G$? I think that if $G$ had vertices all of degree 2 I could prove it easily, but beyond that I am not sure. AI: Hint: If $G$ is bipartite, then its vertices can be split $V=V_1 \cup V_2$ so that the edges are a subset of $V_1 \times V_2$. Let $H$ be any subgraph of $G$ and let $V' \subset V$ be its vertices. Then you can split the vertices of $V'$: $$V'= (V'\cap V_1) \cup (V' \cap V_2)$$ and it is easy to see that this yields $H$ bipartite. P.S. If you are familiar with the fact that bipartite means two-colorable: if you color the vertices of $G$ with two colors, that is a good coloring for any subgraph...
H: Proving that the Flag Variety $Fl(n;m_1,m_2)$ is connected. I wish to prove that the flag variety $Fl(n;m_1,m_2) = \{ W_1 \subset W_2 \subset V \mid \dim W_i = m_i \}$, for $0 \le m_1 \le m_2 \le n$, where $V$ is an $n$-dimensional vector space over $\mathbb{C}$ and $W_1, W_2$ are vector subspaces of $V$, is connected. I'm following a similar method we have been using in lectures for other Lie groups. I have already shown that $U(n)$ acts transitively on $Fl(n;m_1,m_2)$ and that $U(n)$ is connected. I'm wondering if what follows is proof that $Fl(n;m_1,m_2)$ is connected. $Fl(n;m_1,m_2)=U(n)/U(n)_{W_1 \subset W_2}$, since $U(n)$ acts transitively on $Fl(n;m_1,m_2)$. Thus, since $U(n)$ is connected, if we can show that $U(n)_{W_1 \subset W_2}$ is connected then it follows that $Fl(n;m_1,m_2)$ is connected. Consider $U(n)_{W_1 \subset W_2}=\{F\in U(n) \mid F(W_1)=W_1, F(W_2)=W_2 \}$. It consists of maps with matrices of the form: $A= \left( \begin{array}{ccc} A_{m_1,m_1} & 0 & 0 \\ 0 & B_{m_2-m_1,m_2-m_1} & 0 \\ 0 & 0 & C_{n-m_2,n-m_2} \end{array} \right)$ where the index of each block denotes its size. This is because a unitary $F$ preserving $W_1$ and $W_2$ also preserves the orthogonal complements $W_2\cap W_1^{\perp}$ (spanned by $e_{m_1+1},...,e_{m_2}$) and $W_2^{\perp}$ (spanned by $e_{m_2+1},...,e_n$), where $e_1,...,e_n$ is an orthonormal basis of $V$ chosen so that $e_1,...,e_{m_1}$ spans $W_1$ and $e_1,...,e_{m_2}$ spans $W_2$. Now I think that I have to show that the blocks in the matrix are themselves contained in a connected component of a Lie group, which would mean that I could connect each of them to the identity matrix of the respective size and thus connect any matrix in the stabiliser subgroup I'm considering to the identity, but I'm not quite sure how to proceed. Could anyone please give me any hints as to the next step? Many thanks. AI: I think you were already done after the introductory remark: If the (continuous) action of a connected group $G$ on a space $X$ is transitive, then $X$ is connected. In fact, assume $X=U_1\cup U_2$ with $U_1,U_2$ open and $U_1\cap U_2=\emptyset$. Select $x\in X$. Then the map $f\colon G\to X$, $g\mapsto gx$ is continuous, hence $f^{-1}(U_1), f^{-1}(U_2)$ are open and disjoint and $f^{-1}(U_1)\cup f^{-1}(U_2)=G$. By connectedness of $G$, one of $f^{-1}(U_1), f^{-1}(U_2)$ is empty. But by transitivity, $f$ is onto, hence $f^{-1}(U_i)=\emptyset$ implies $U_i=\emptyset$ and we conclude that $X$ is connected.
H: Given $G$, when can we find a division ring $R$ with $R^*=G$? This is motivated by a characterization of finite cyclic groups, in which one proves: Let $G$ be a finite group. If $\#\{g\in G\colon g^d=e\}$ is at most $d$ for every $d\geq 1$, then $G$ is cyclic. The proof is not actually difficult, but an unnecessarily complicated idea occurred to me (maybe because the first time I saw such a result, it was used to prove that finite subgroups of the multiplicative group of a field are cyclic): what if we can construct this $G$ as the multiplicative group of a certain division ring $R$, that is, $G=R^*\colon=\{r\in R\colon r\neq 0\}$? Then we know $G$ is abelian since finite division rings are commutative, and the abelian case follows from a direct use of the structure theorem of finite abelian groups. Of course such a proof is unnecessarily complicated and very likely results in some circular arguments since it uses two very big structure theorems. But I guess it would still be nice to know what kind of groups are multiplicative groups of division rings. Thanks very much! AI: "The answer" for classifying which finite groups are subgroups of units of division rings was given by Amitsur in Finite subgroups of division rings. Trans. Amer. Math. Soc. 80 (1955), 361–386. This is generally considered to be the analogue of the "finite subgroups cyclic" theorem for the unit groups of fields. (In Lam's paper on the quaternions, he describes an interesting lead-up to that discovery about Coxeter's work determining the finite subgroups of $\Bbb H^\ast$.) This clearly acts as a "local" restriction on elements of finite order.
H: Length of the sum of two submodules Let $M$ be an $R$-module with finite length and let $K$ and $N$ be submodules of $M$. Prove that $l(K+N)+l(K\cap N)=l(K)+l(N)$. My proof: First, assuming that $K\cap N=\{0\}$, we can conclude that $l(K+N)=l(K)+l(N)$. The details: Let $n=l(K)$ and $m=l(N)$. Let $\{0\}= K_0\subsetneq K_1 \subsetneq \cdots \subsetneq K_n=K$ be a composition series of $K$, and let $\{0\}= N_0\subsetneq N_1 \subsetneq \cdots \subsetneq N_m=N$ be a composition series of $N$. Define $\phi: K \rightarrow K+N $ by $\phi(k)=k$ and $\psi:K+N \rightarrow N$ by $\psi(k+n)=n$ (well-defined since $K\cap N=\{0\}$). For $0 \leq i \leq n$, let $M_i=\phi(K_i)$, and for $n+1\leq i\leq n+m$, let $M_i=\psi^{-1}(N_{i-n})$; then $\{M_i\}_{i=0}^{n+m}$ is a composition series for $K+N$. Hence, $l(K+N)=n+m=l(K)+l(N)$. Could you help me prove the general case, that is, $K\cap N \neq \{0\}$? AI: The second isomorphism theorem says we have an exact sequence $$0 \to K \cap N \to K \to (K+ N)/N \to 0$$ and so $l(K) = l(K\cap N) + l\left( (K+N)/N\right) = l(K \cap N)+ l(K+N) - l(N)$. Thus $$l(K) + l(N) = l(K \cap N) + l(K+N).$$
H: Geometric sequence, finding the first term using only the sum, the number of terms and the value of one term. In a geometric series: S = 56, a(2) = 16 and n = 3, where S is the sum, a(2) the second term, and n the number of terms. Is it possible to get a(1) and a(3) from here? (If yes, hints would be awesome.) Thank You! AI: So we have $S = a_1 + a_2 + a_3$; as we are considering a geometric sequence, we have $a_2 = ra_1$, $a_3 = r^2a_1$. The first one gives $a_1 = r^{-1}a_2$; plugging this into the second gives us $a_3 = ra_2$. So $$ S = r^{-1}a_2 + a_2 + ra_2 = \left(\frac 1r + 1 + r\right)a_2 $$ From this we can compute $$ \frac 1r + 1 + r = \frac S{a_2} $$ hence $$ 1 + r + r^2 = \frac {Sr}{a_2} $$ which is a quadratic equation for $r$. I'm sure you can do it from here.
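Finishing the computation with the given numbers: $S/a_2 = 56/16 = 7/2$, so $$\frac1r+1+r=\frac72\ \Longrightarrow\ 2r^2-5r+2=0\ \Longrightarrow\ r=2\ \text{or}\ r=\tfrac12,$$ giving $(a_1,a_2,a_3)=(8,16,32)$ or $(32,16,8)$; both triples indeed sum to $56$.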
H: The set of all functions mapping set $A$ to set $B$ Is $F$ as defined here the set of all functions from set $A$ to set $B$? $F=\{f\in 2^{A\times B}:\forall x(x\in A\rightarrow\exists y (y\in B\wedge (x,y)\in f))\wedge \forall x,y_1,y_2 ((x,y_1)\in f\wedge (x,y_2)\in f\rightarrow y_1=y_2)\}$ If $A$ was non-empty, then $B$ would also have to be non-empty, but my construction in no way depended on either set being non-empty. How can this be? Have I made a mistake? AI: Assuming that you’re using the notation $2^S$ for the power set of $S$, then yes, you’ve correctly described the set of functions from $A$ to $B$. If $A\ne\varnothing$ and $B=\varnothing$, $A\times B=\varnothing$, so the only $f\in 2^{A\times B}$ is $\varnothing$, and it fails to satisfy the first clause of the definition and is therefore not actually in $F$. Thus, $F=\varnothing$ in this case, just as it should.
H: What's the intuition behind this equality involving combinatorics? What is the intuition behind $$ \binom{n}{k} = \binom{n - 1}{k - 1} + \binom{n - 1}{k} $$ ? I can't grasp why picking a group of $k$ out of $n$ bijects to first picking a group of $k-1$ out of $n-1$ and then a group of $k$ out of $n-1$. AI: We have a group of $n$ people, one of whom is John. We want to pick a committee of $k$ people. By definition this can be done in $\binom{n}{k}$ ways. There are $\binom{n-1}{k}$ committees of $k$ that don't include John, for we can choose any $k$ of the people other than John. And there are $\binom{n-1}{k-1}$ committees of $k$ that do include John, for we can choose any $k-1$ people to join John. Note that automatically a committee that doesn't include John is different from a committee that includes John. So we have divided the $\binom{n}{k}$ possible committees into two groups, one of which has $\binom{n-1}{k}$ elements, and the other of which has $\binom{n-1}{k-1}$ elements.
H: Properties of basic set theory The question is about a set: $$B=\{a_1,a_2,a_3,...,a_n\} \subset \mathbb R$$ and I would like to know how to calculate $B^n$, where $n \in\Bbb N$. AI: Assuming $B = \{a_1,a_2,a_3,...,a_n\} \subset \mathbb R$, with $|B| = n$, $B^n$ is the set of all ordered n-tuples of elements of $B$: the exponent refers to the operation of the Cartesian product of $B$ with itself, n times: $$B^n = \underbrace{B \times B \times \cdots \times B}_{\Large\text{n times}} =\{(b_1, b_2,\cdots, b_n) \mid b_i \in B\}.$$ Can you figure out how to calculate $|B^n|$? There are $n$ choices for position 1, $n$ choices for position 2, ..., $n$ choices for position $n$: $$|B^n| = \underbrace{n\times n\times \cdots \times n}_{\Large \text{n times}} = n^n$$
H: Second Derivative of basic fraction using quotient rule I know this is a very basic question but I need some help. I have to find the second derivative of: $$\frac{1}{3x^2 + 4}$$ I start by using the Quotient Rule and get the first derivative to be: $$\frac{-6x}{(3x^2 + 4)^2}$$ This I believe to be correct. Following that I proceed to find the second derivative in the same manner but I get this as my answer: $$\frac{(54x^4 + 144x^2 +96) - (-36x^3 + 48x)}{(9x^4 +24x^2 +16)^2}$$ This I believe to be correct just not simplified. However the answer I need to get is: $$- \frac{6(4 - 9x^2)}{(3x^2 + 4)^3}$$ I do not know what the best way to approach this would be, should I multiply out the denominator and try to cancel? Could someone point me in the right direction, I want to solve it myself but I need some guidance. Thanks AI: The first derivative is correct. Now we want to differentiate $\frac{-6x}{(3x^2+4)^2}$. The main thing to remember is do not "simplify" unless there is good reason to do so. The derivative of $\frac{-6x}{(3x^2+4)^2}$ is $$\frac{(3x^2+4)^2 (-6)-(-6x)(6x)(2)(3x^2+4)}{(3x^2+4)^4}.$$ Cancel a $3x^2+4$, and simplify the top. Remark: I probably would want to take out that ugly $-6$ from the top, which is an invitation to minus sign errors and other errors, and differentiate $\frac{x}{(3x^2+4)^2}$.
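Carrying out the last step: after cancelling one factor of $3x^2+4$, $$\frac{-6(3x^2+4)+72x^2}{(3x^2+4)^3}=\frac{54x^2-24}{(3x^2+4)^3}=\frac{6(9x^2-4)}{(3x^2+4)^3}=-\frac{6(4-9x^2)}{(3x^2+4)^3},$$ which is exactly the target form.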
H: How to apply a hint to a question involving the pigeonhole principle The following question is from cut-the-knot.org's page on the pigeonhole principle. Question: Prove that however one selects 55 integers $1 \le x_1 < x_2 < x_3 < ... < x_{55} \le 100$, there will be some two that differ by 9, some two that differ by 10, a pair that differ by 12, and a pair that differ by 13. Surprisingly, there need not be a pair of numbers that differ by 11. The Hint: Given a run of $2n$ consecutive integers: $a + 1, a + 2, ..., a + 2n - 1, a + 2n$, there are n pairs of numbers that differ by n: $(a+1, a+n+1), (a + 2, a + n + 2), \dots, (a + n, a + 2n)$. Therefore, by the Pigeonhole Principle, if one selects more than n numbers from the set, two are liable to belong to the same pair that differ by $n$. I understood the hint but have no concrete idea as to how to apply it. Here are my current insights: break the set of 100 possible choices into $m$ runs of $2n$ (where $n \in \{9,12,13\}$) consecutive numbers, and since 55 numbers are to be chosen, show that however one chooses, there will be $n+1$ in one of the $m$ partitions, and if so, by the hint there will exist a pair of two numbers that differ by $n$. AI: The easiest example is to do $10$. Then break up your set into $5$ sets, $$\{1,\dots,20\}\\\{21,\dots 40\}\\\vdots\\\{81,\dots,100\}$$ Since you have selected $55$ elements, you must have selected $11$ elements from one of these partitions. Then from the hint, you are done. The case for $10$ is easy because $20$ divides $100$. The case of $12$ is harder. Partition the numbers as: $$\{1,\dots,24\}\\\{25,\dots,48\}\\\{49,\dots,72\}\\\{73,\dots, 96\}\\\{97,98,99,100\}$$ Now we know that we've selected at least $51=55-4$ from the first four sets, and hence at least $13$ from one of them, and again we are done by the hint. The problem for $11$ is that the "remainder" is large. When we partition $100$ into sets of $22$, we are left with $12$ elements. That's a large enough number of elements left to "break" the logic above - we can pick up to $11$ different elements from that remainder, and we have $44$ to parcel out amongst the first four partitions, which we can do by putting $11$ in each partition. It is not hard to find a counter-example for $11$ precisely this way. More generally, given $N$ and $k$, let $m=k\left\lfloor \frac{N}{2k}\right\rfloor$. Then you can select from $\{1,\dots,N\}$ the following number of elements:$$m + \min(k,N-2m)$$ and not get a pair that differ by $k$. But if you try to select more than that number, you must get a pair with difference $k$. In the case $N=100$ and $k=11$, $m=44$ and $\min(k,N-2m)=11$ so we can select $55$ such numbers.
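The counterexample for $k=11$ suggested by the last paragraph (take the first 11 numbers from each block of 22) is easy to verify; a short Python check:

    # choose {1..11, 23..33, 45..55, 67..77, 89..99}: 11 numbers per block of 22
    S = [22 * j + i for j in range(5) for i in range(1, 12)]
    assert len(S) == 55 and max(S) <= 100
    diffs = {abs(a - b) for a in S for b in S if a != b}
    print(11 in diffs)  # False: no two of the 55 numbers differ by 11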
H: Hilbert space with all subspaces closed Does there exist an infinite-dimensional Hilbert space with all subspaces closed? AI: If $H$ is an infinite dimensional Hilbert space and $\{x_n:n\geq 1\}$ is a countably infinite set of linearly independent elements, then the abstract span of these elements, which is an infinite dimensional subspace, cannot be closed. If it were closed, it would be a Hilbert space itself, and infinite dimensional Hilbert spaces must have uncountably infinite (Hamel) dimension: by the Baire category theorem, a complete normed space cannot be a countable union of proper finite-dimensional (hence closed and nowhere dense) subspaces, which is what a countable Hamel basis would produce.
H: If the union of $A$ and $B$ is linearly independent then the intersection of the spans $= \{0\}$ $\newcommand{\sp}{\operatorname{sp}}$ Let $V$ be a vector space over a field $F$, and let $A,B$ be two different, disjoint, non-empty sets of vectors from $V$. Prove or disprove the following: If $A \cup B$ is linearly independent, then $\sp(A) \cap \sp(B) = \{0\}$. It's easy to check examples over $\mathbb{R}^2$, because the span of any set of $\mathbb{R}^2$ vectors contains $(0,0)$, but something keeps telling me that it might be disproved for other vector spaces. AI: Hint: $$x\in \sp(A)\cap \sp(B)\implies \exists\,a_1,...,a_k\in A\;,\;b_1,...,b_m\in B\,,\,c_1,...,c_{k+m}\in \Bbb F\;\;s.t.$$ $$x=\sum_{i=1}^kc_ia_i=\sum_{i=1}^mc_{k+i}b_i\implies c_1a_1+\ldots+c_ka_k-c_{k+1}b_1-\ldots-c_{k+m}b_m=0\implies$$ $$\implies c_r=0\;\;\forall r=1,\ldots,k+m\;,\;\text{since}\;a_i,b_j\in A\cup B\;\text{are linearly independent}\ldots$$
H: How to directly show that Figure 8 injective immersion is not a monomorphism I'm working in the category of smooth manifolds. The injective immersion that takes the open unit interval $(0,1)$ to the figure 8 is a well-known example of an injective immersion that is not an embedding. Now embeddings are the monomorphisms in our category. So the figure 8 immersion (let's call it $f$) must not be a monomorphism. Hence, there must exist two distinct maps $g_1,g_2:X\rightarrow (0,1)$ such that $f\circ g_1 = f\circ g_2$. What is an example of such maps? AI: Let $f : X \to Y$ be any smooth map whatsoever. Suppose $f (x) = f (x')$. Then $x$ and $x'$, considered as smooth maps $1 \to X$ (where $1$ denotes the one-point manifold), have the property that $f \circ x = f \circ x'$. So if $f$ is monic in the category of manifolds it must be injective. Conversely, suppose $f : X \to Y$ is an injective smooth map. Then, for any smooth maps $g, g' : T \to X$, if $f \circ g = f \circ g'$, then $g = g'$ (because smooth maps are determined by their action on points). Thus any injective smooth map is monic. In particular the figure-8 injective immersion is monic, so the premise that embeddings are exactly the monomorphisms is mistaken.
H: Integral involving gaussian function I would like to calculate the following integral: $$\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\quad (x^2+y^2)^k\exp\left(-\dfrac{(x-x_0)^2+(y-y_0)^2}{a^2}\right)\,\mathrm dx\,\mathrm dy$$ Any clue on how to proceed? Thanks AI: You can use the fact that $$(x^2+y^2)^k = \sum_{n=0}^k {k \choose n}x^{2n}y^{2(k-n)}$$ So $$\mathrm{I}=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}(x^2+y^2)^k\, e^{-\frac{(x-x_0)^2+(y-y_0)^2}{a^2}}\,dx\,dy=\sum_{n=0}^k{k \choose n}\int_{-\infty}^{+\infty}x^{2n}e^{-\frac{(x-x_0)^2}{a^2}}dx \int_{-\infty}^{+\infty}y^{2(k-n)}e^{-\frac{(y-y_0)^2}{a^2}}dy$$ One can treat these factors as scaled moments of a normally distributed random variable. It is well known that $$\operatorname{E} \left[ X^p \right] =\sigma^p \cdot (-i\sqrt{2}\,\mathrm{sgn}\,x_0)^p \; U\left( {-\frac{1}{2}p},\, \frac{1}{2},\, -\frac{x_0^2}{2\sigma^2} \right)$$ for the probability density $ \frac{1}{\sigma\sqrt{2\pi}} e^{ -\frac{(x-x_0)^2}{2\sigma^2} }$; our Gaussian $e^{-(x-x_0)^2/a^2}$ is the case $\sigma^2=a^2/2$, up to the normalizing multiplier $\frac{1}{a\sqrt{\pi}}$. So $$\int_{-\infty}^{+\infty}x^{2n}e^{-\frac{(x-x_0)^2}{a^2}}dx=a\sqrt{\pi}\operatorname{E} \left[ X^{2n} \right]$$ and $$\int_{-\infty}^{+\infty}y^{2(k-n)}e^{-\frac{(y-y_0)^2}{a^2}}dy=a\sqrt{\pi}\operatorname{E} \left[ Y^{2(k-n)} \right]$$ So one can obtain: $$\mathrm{I}=(-1)^k (a^2)^{k+1}\pi \sum_{n=0}^k{k \choose n}U\left(-n,\frac{1}{2},-\frac{x_0^2}{a^2} \right)U\left(n-k,\frac{1}{2},-\frac{y_0^2}{a^2} \right)$$ (the $\mathrm{sgn}$ factors drop out since they appear to even powers). If $x_0=y_0=0$ then $$\mathrm{E}\left[X^{2n}\right] = \left(\frac{a}{\sqrt{2}}\right)^{2n}\,(2n-1)!! $$ and $$\mathrm{E}\left[Y^{2(k-n)}\right] = \left(\frac{a}{\sqrt{2}}\right)^{2(k-n)}\,(2(k-n)-1)!! $$ So everything gets even simpler: $$\mathrm{I}=\pi a^2\left(\frac{a^2}{2}\right)^{k}\sum_{n=0}^k{k \choose n}(2n-1)!!(2k-2n-1)!!$$
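A numerical spot check of the $x_0=y_0=0$ formula for $k=1$, $a=1$ (where it predicts exactly $\pi$); a minimal sketch assuming NumPy and SciPy:

    import numpy as np
    from scipy.integrate import dblquad

    # integrate (x^2 + y^2) * exp(-(x^2 + y^2)) over the whole plane
    val, err = dblquad(lambda y, x: (x**2 + y**2) * np.exp(-(x**2 + y**2)),
                       -np.inf, np.inf, lambda x: -np.inf, lambda x: np.inf)
    print(val, np.pi)  # both approximately 3.14159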
H: Planar Geometry question Suppose I have three points $p$, $f_1$, and $f_2$. I want to place a third point $f_3$ such that if you extend the line segments $pf_1$ and $pf_2$ into full lines, $f_3$ is going to be on the opposite side of $pf_1$ as $f_2$ is, and $f_3$ will also be on the opposite side of $pf_2$ from where $f_1$ is. What I want to show is that the point $p$ will be in the triangle formed by $f_1$, $f_2$, and $f_3$. I'm about 98% sure this is a correct statement, I'm just not sure how to go about proving its true beyond "drawing pretty pictures". EDIT Clarification: If you take the line through $p$ and $f_1$, then you have partitioned the plane into two parts. The assumption is that $f_3$ must be in the partition that $f_2$ is not in. AI: Here is an idea: By rotating, translating, reflecting, and rescaling, we can assume that $p$ is the origin, $f_1=(1,0)$, and $f_2=(x_0,y_0)$ with $y_0>0$. Case $1$: $x_0=0$. Then $f_3=(x_1,y_1)$ is any point in the third quadrant. The line from $f_3$ to $f_2$ has positive $y$-intercept, and the line from $f_3$ to $f_1$ has negative $y$-intercept, so that $p$ is inside the triangle formed by $f_1$, $f_2$, and $f_3$. Case $2$: $x_0>0$. Then $f_3=(x_1,y_1)$ is any point satisfying $$\frac{y_0}{x_0}x_1<y_1<0$$ These are exactly the inequalities you need to show that the line from $f_3$ to $f_2$ has positive $y$-intercept, and the line from $f_3$ to $f_1$ has negative $y$-intercept, so that $p$ is inside the triangle formed by $f_1$, $f_2$, and $f_3$. Case $3$: $x_0<0$. Proceed as in Case $2$ using the inequalities: $$y_1<0,\qquad y_1<\frac{y_0}{x_0}x_1$$
H: What about this $\lim_{x \to \infty}\frac{3x+4}{\sqrt[5]{x^9+3x^4+1}}$? When I saw this limit, I didn't even try to solve it by an algebraic method. I thought about the asymptotic concept. In the example, $$\frac{3x+4}{\sqrt[5]{x^9+3x^4+1}}\sim \frac{3x}{\sqrt[5]{x^9}}\quad(x \to \infty)$$ So, $$\lim_{x \to \infty}\frac{3x}{\sqrt[5]{x^9}}=3\lim_{x \to \infty}\frac{x}{x^{\frac{9}{5}}}=3\lim_{x \to \infty}\frac{1}{x^{\frac{4}{5}}}=0$$ And then the original limit equals zero. Is my thought correct? Would the algebraic method be less difficult than I thought? Thanks. AI: Yes. Both your line of thought and final answer are correct. If you want to be a bit more rigorous, you could proceed as below: For $x>0$, we have $$x^9 < x^9 + 3x^4 + 1 \implies \sqrt[5]{x^9} < \sqrt[5]{x^9 + 3x^4+1} \implies 0 < \dfrac{3x+4}{\sqrt[5]{x^9 + 3x^4+1}} < \dfrac{3x+4}{\sqrt[5]{x^9}}$$ Now use the squeeze theorem or the sandwich theorem, to conclude what you want.
H: How to integrate $\int \sqrt{x^2+a^2}dx$ $a$ is a parameter. I have no idea where to start AI: I will give you a proof of how they can get the formula above. As a heads up, it is quite difficult and long, so most people use the formula usually written in the back of the text, but I was able to prove it, so here goes. The idea is to, of course, do trig-substitution. The form of $$\sqrt{a^2+x^2} $$ suggests that $x=a\tan(\theta)$ would be a good one, because the expression simplifies to $$a\sec(\theta)$$ We can also observe that $dx$ will become $a\sec^2(\theta)d\theta$. Therefore $$\int \sqrt{a^2+x^2}dx = a^2\int \sec^3(\theta)d\theta$$ Now there are two big things that we are going to do. One is to do integration by parts to simplify this expression so that it looks a little better, and later we need to be able to integrate $\int \sec(\theta)d\theta$. So the first step is this. It is well known and natural to let $u=\sec(\theta)$ and $dv=\sec^2(\theta)d\theta$ because the latter integrates simply to $\tan(\theta)$. Letting $A = \int \sec^3(\theta)d\theta$, you will get the following (using $\tan^2(\theta)=\sec^2(\theta)-1$ in the second step) $$A = \sec(\theta)\tan(\theta) - \int{\sec(\theta)\tan^2(\theta)d\theta}$$ $$=\sec(\theta)\tan(\theta) - \int\sec^3(\theta)d\theta + \int\sec(\theta)d\theta$$ therefore, $$2A = \sec(\theta)\tan(\theta)+\int \sec(\theta)d\theta$$ Dividing both sides by $2$ gives you $$A = 1/2\left[\sec(\theta)\tan(\theta)+\int \sec(\theta)d\theta\right]$$ I hope you see now why all we need to be able to do is to integrate $\sec(\theta)$. The chance that you know how is rather high because you are solving this particular problem, but let's just go through it for the hell of it. This is a very common trick in integration using trig, but remember the fact that $\sec^2(\theta)$ and $\sec(\theta)\tan(\theta)$ are derivatives of $\tan(\theta)$ and $\sec(\theta)$, respectively. So this is what we do. $$\int \sec(\theta)d\theta = \int {{\sec(\theta)(\sec(\theta)+\tan(\theta))} \over {\sec(\theta)+\tan(\theta)}} d\theta$$ Letting $w = \sec(\theta)+\tan(\theta)$, $$= \int {dw \over w} = \ln|w|$$ So, long story short, $$\int \sqrt{a^2+x^2}dx = a^2/2\left[\sec(\theta)\tan(\theta) + \ln|\sec(\theta)+\tan(\theta)|\right]$$ Substituting back $\tan(\theta)={x \over a}$ and $\sec(\theta)={\sqrt{a^2+x^2} \over a}$, $$= \dfrac{1}{2} \left({x\sqrt{a^2+x^2}} + {{a^2\ln \left|{x+\sqrt{a^2+x^2} \over a}\right|}}\right) + C$$
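A quick symbolic check of the final antiderivative (a sketch using sympy; the `positive=True` assumption on $a$ and $x$ just keeps the logarithm's argument positive):

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)
F = sp.Rational(1, 2) * (x * sp.sqrt(a**2 + x**2)
                         + a**2 * sp.log((x + sp.sqrt(a**2 + x**2)) / a))
# differentiating the antiderivative should give back the integrand
print(sp.simplify(sp.diff(F, x) - sp.sqrt(a**2 + x**2)))  # prints 0
```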
H: Group extension reference request I'm looking for a reference for the following "well known" result: Let $C$ be an abelian group and $G$ a finite group, and let $$0 \rightarrow C \rightarrow W \rightarrow G \rightarrow 0$$ be a group extension of $G$ by $C$ corresponding to the fundamental class in $H^2(G,C)$. Now the result I have says that if we have a homomorphism $f$ from $C$ to some abelian group $A$ and an exact sequence of groups $$0 \rightarrow A \rightarrow A \rtimes G \rightarrow G \rightarrow 0, $$ then you can extend $f$ to some map $$f': W \rightarrow A \rtimes G$$ such that both these sequences fit into a commutative diagram with the first vertical arrow given by $f$ and the third given by the identity map if and only if $f$ is $G$-invariant and it "sends" (I don't quite get what this means) the fundamental class in $H^2(G,C)$ to zero. The place I got this result from just gives a reference to Serre's Local fields but not a page number or anything that might narrow where this result is. I can imagine it is pretty standard but I can't seem to find anything of this sort. Thank you AI: We had better suppose that $A$ is abelian, and so a $G$-module; otherwise I'm not sure the question makes sense (because the $H^2$ with coeffs. in $A$ would not be defined). Group cohom. is functorial in the coefficients, so $C \to A$ induces $f_*: H^2(G,C) \to H^2(G,A)$. This latter map has an interpretation in terms of extensions: it sends the extension $0 \to C \to W \to G \to 0$ to the extension $0 \to A \to W' \to G \to 0$ obtained by "pushing out" along the morphism $C \to A$. One way to think about $W'$ is that it is the unique extension of $G$ by $A$ compatible with the given action of $G$ on $A$, and which receives a map from $W$ that restricts to $f$ on $C$ and induces the identity on $G$. Now by definition the $0$ class in $H^2$ corresponds to the split extension, i.e. the semi-direct product. Thus $f_*$ sends the class of $W$ (an element of $H^2(G,C)$) to zero in $H^2(G,A)$ if and only if $W'$ is a semidirect product. Now if you think about the above description of the push-out, you will see that $W'$ equals the semidirect product if and only if $W$ admits a map to the semidirect product, compatibly with the given map $f: C \to A$ and the identity on $G$.
H: Constructing Distribution By Coin Flipping I am interested in any example of constructing a distribution by coin flipping. Actually I want to show the process of constructing a random variable $X$ with distribution $(p_1,...,p_n)$ by coin flipping and to prove that the expected number of coin flips is at least the entropy $H(X)$. So far I have no idea where to start. I would appreciate examples and hints. AI: I'm not sure exactly what you're trying to ask here. You'll probably be able to find this answer in a textbook somewhere. If you were asking something else it would be helpful if you gave a little more detail in your question. Suppose $p_i< 2^{-k}$. If I toss a coin $k$ times I have $2^k$ possible events each of probability $2^{-k}$. So I cannot generate an event of probability $p_i$ with $k$ coin flips. Therefore if my generator returns $X=i$ it must have flipped the coin at least $-\log_2(p_i)$ times. So if $K$ is the number of coin flips before my algorithm returns a value we must have $$\mathbb E(K|X=i) \geq -\log_2(p_i).$$ Now we just add these up $$\begin{array}{rcl}\mathbb E(K) &=& \sum_{i=1}^n \mathbb E(K|X=i)\mathbb P(X = i) \\ &\geq& \sum_{i=1}^n -\log_2(p_i)p_i \\ &=&\mathcal H(X).\end{array}$$ In reality it can take much longer. If I have a random variable with $p_1 = 0.50001$ and $p_2 = 0.49999$ I need a very large number of coin flips to simulate that exactly, even though the entropy is pretty much one. The easiest way to generate your random number is to think of an infinite string of coin flips as a random number. If you replace every Head with a $1$ and every Tail with a $0$, then a sequence of coin flips $THTHHT\dots$ would represent a binary number $0.010110\dots\approx 0.34375$. Let's make this rigorous. Set $$\begin{array}{rcl} d_k &=& \left\{\begin{array}{rl} 2^{-k} &\text{if the $k$th toss is a head} \\0 &\text{if the $k$th toss is a tail}\end{array}\right. \\ X_i &=& \sum_{k=1}^i d_k \\ X &=& \sum_{k=1}^\infty d_k \\ q_i&=& \sum_{k=1}^i p_k \\ &=& \mathbb P(M\leq i). \end{array}$$ Now we want to set $M=i$ whenever $q_{i-1} < X < q_{i}$. So if we toss the coin $k$ times we know that $X_k < X < X_k + 2^{-k}$, so we stop tossing the coin the first time $k$ such that $q_{i-1} <X_k < X_k + 2^{-k} < q_{i}$ for some $i$.
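The last paragraph is, in effect, arithmetic decoding. A minimal sketch of that generator (the helper name `sample` is hypothetical; `random.random() < 0.5` stands in for the fair coin):

```python
import random
from fractions import Fraction

def sample(p, flip=lambda: random.random() < 0.5):
    """Draw an index i (1-based) with P(i) = p[i-1] using fair coin flips.

    Refine the dyadic interval [X_k, X_k + 2^-k) until it fits inside
    some [q_{i-1}, q_i), where q_i are the partial sums of p.
    """
    q = [Fraction(0)]
    for pi in p:
        q.append(q[-1] + Fraction(pi))
    low, width = Fraction(0), Fraction(1)
    while True:
        width /= 2
        if flip():          # head -> take the upper half of the interval
            low += width
        for i in range(1, len(q)):
            if q[i - 1] <= low and low + width <= q[i]:
                return i

counts = {1: 0, 2: 0, 3: 0}
for _ in range(10000):
    counts[sample([Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)])] += 1
print(counts)  # roughly 5000 / 3333 / 1667
```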
H: Prove $\gcd\left((a^{2m}−1)/(a+1),a+1\right)=\gcd(a+1,2m)$ Show or prove that $$ \gcd\left(\frac{a^{2m}−1}{a+1},a+1\right)=\gcd(a+1,2m), $$ and that $$ \gcd\left(\frac{a^{2m+1}+1}{a+1},a+1\right)=\gcd(a+1,2m+1). $$ AI: Hint $$\frac{a^{2m}-1}{a+1}=\frac{a^2-1}{a+1}(a^{2m-2}+a^{2m-4}+...a^2+1)$$ $$=(a+1)(a^{2m-2}+a^{2m-4}+...a^2+1)-2(a^{2m-2}+a^{2m-4}+...a^2+1)$$ $$=(a+1)(a^{2m-2}+a^{2m-4}+...a^2+1)-2(a^{2m-2}-1+a^{2m-4}-1+...a^2-1+m)$$ $$=(a+1)(a^{2m-2}+a^{2m-4}+...a^2+1)-2[(a^{2m-2}-1)+(a^{2m-4}-1)+...(a^2-1)]-2m$$ (each of the $m-1$ power terms is rewritten as $(a^{2j}-1)+1$, which together with the trailing $1$ contributes the $m$). Hint 2 Each of $(a^{2m-2}-1),(a^{2m-4}-1),...,(a^2-1)$ is divisible by $a+1$, so modulo $a+1$ the quotient is congruent to $-2m$, and $\gcd(a+1,-2m)=\gcd(a+1,2m)$.
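A quick numeric confirmation of both identities on a small range (a sketch; the ranges for $a$ and $m$ are arbitrary):

```python
from math import gcd

for a in range(2, 40):
    for m in range(1, 30):
        assert gcd((a**(2*m) - 1) // (a + 1), a + 1) == gcd(a + 1, 2*m)
        assert gcd((a**(2*m + 1) + 1) // (a + 1), a + 1) == gcd(a + 1, 2*m + 1)
print("both identities hold on the tested range")
```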
H: Combinations problem help Four couples have reserved seats in a row for a concert. In how many different ways can they be seated if the two members of each couple wish to sit together? At first I thought that this was 8!/4!*4! but then I was told by a teacher that this was wrong. I really don't understand the logic behind this problem, can someone lead me through? Why can't we use distinguish-ability? AI: First think of each couple as a single entity: you can arrange the $4$ couples in $4!$ different orders along the row. Then the members of each couple can sit in either of $2$ orders, so there are altogether $4!\cdot 2^4$ ways to seat the couples. Your figure of $\frac{8!}{4!}\cdot 4!$ reduces to $8!$, which is the number of ways to seat the $8$ people in the row without any restrictions on who sits next to whom. Since that includes, for instance, the arrangements in which the husbands sit in the first $4$ seats and the wives in the last $4$ seats, it cannot be right: you don’t want to count any of those arrangements.
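A brute-force confirmation over all $8!$ seatings (a sketch; the couples are labelled $(0,1),(2,3),(4,5),(6,7)$ for convenience):

```python
from itertools import permutations

def couples_adjacent(seating):
    pos = {person: i for i, person in enumerate(seating)}
    return all(abs(pos[2*c] - pos[2*c + 1]) == 1 for c in range(4))

count = sum(couples_adjacent(s) for s in permutations(range(8)))
print(count, 24 * 16)  # both print 384, i.e. 4! * 2^4
```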
H: There does not exist an entire function which satisfies $f({1\over n})={1\over 2^n}$? There does not exist an entire function which satisfies $f({1\over n})={1\over 2^n}$. What I tried: if possible, define $g(z)=f(z)-{1\over 2^{1\over z}}$. Then $g({1\over n})=0$, so $g(z)$ is entire and its zero set has a limit point, and so $f(z)={1\over 2^{1\over z}}$, which is not analytic at $0$? Please help! Edit: Oops! The way I defined $g(z)$, it is not entire! Could anyone give me a hint? AI: Hint: If $f(0) = 0$ and $f$ is not identically zero, we can find another entire function $g$ so that $f(z) = z^k g(z)$ for some positive integer $k$ and $g(0) \ne 0$.
H: Determining whether a quadratic congruence is solvable using Legendre symbol I'm trying to determine whether the congruence $2x^2 + 5x - 9 \equiv 0$ modulo $101$ is solvable. I think I'll be able to detect whether there is or there is no solution using the Legendre symbol, but I can't figure out how. I'll be grateful if someone could point me to the solution. Thanks in advance. AI: Edit: The original modulus was $89$. We keep the original calculation. A small appended modification deals with $101$. The given congruence has a solution if and only if the congruence $$16x^2+40x-72\equiv 0\pmod{89}$$ has a solution. (We multiplied through by $8$ as a preparation to completing the square.) This congruence has a solution if and only if $$(4x+5)^2-25-72\equiv 0\pmod{89}$$ has a solution. Equivalently, we want to find out whether $(4x+5)^2\equiv 8\pmod{89}$ has a solution. The congruence $w^2\equiv 2\pmod{89}$ has a solution. We can see this from the fact that the Legendre symbol $(2/89)$ is equal to $1$, since $89$ is of the shape $8k+1$. Since $8=2\cdot 2^2$, it follows that $(2w)^2\equiv 8\pmod{89}$ is also solvable. Since for any $w$, the congruence $4x+5\equiv w\pmod{89}$ has a solution, our original congruence does. Added: For the new modulus of $101$, we arrive in the same way at the congruence $w^2\equiv -4\pmod{101}$. We have $(-4/101)=(-1/101)$. But $101$ is of the form $4k+1$, so $(-1/101)=1$, and thus the congruence has a solution.
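A direct check mod $101$ (a sketch; the second line evaluates the Legendre symbol $(-4/101)$ via Euler's criterion):

```python
p = 101
roots = [x for x in range(p) if (2*x*x + 5*x - 9) % p == 0]
print(roots)                      # [19, 29]: non-empty, so solvable
print(pow(-4, (p - 1) // 2, p))   # 1, so -4 is a quadratic residue mod 101
```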
H: Normal intersecting a sphere Let $\textbf{x}$ and $\textbf{y}$ be two points on the sphere. Show that the normal to the plane determined by the great circle through $\textbf{x}$ and $\textbf{y}$ intersects the sphere at the points $\pm \textbf{z}$, where $$\bf z = \frac{x \times y}{||x \times y||}$$ Now obviously, if we let the center of the sphere lie at the origin, $\bf x \times y$ will be the normal vector to the plane that crosses the sphere and makes that great circle. Since that plane divides the sphere into two equal parts, the points where the normal intersects the sphere will be opposites of one another, thus $\pm \textbf{z}$. I don't really get why we need to adjust that point by $\bf ||x \times y||$? Could somebody explain? Also, the problem itself is easier, as I am not asked to find the points, just to show that they are valid. But I don't really know how I could do it? I could reason that, if we place that sphere at the origin, the equation of the plane that cuts the sphere and makes the great circle is described by: $${\bf w}\cdot({\bf x\times y})=0$$ But what next? Thanks! AI: I don't really get why we need to adjust that point by $\bf ||x\times y||$? Could somebody explain? The magnitude of the cross product is $||{\bf x\times y}|| = ||{\bf x}|| \cdot ||{\bf y}|| \cdot \sin (\angle {\bf xOy})$. We're assuming that the sphere has unit radius, so $||{\bf x}|| = ||{\bf y}|| = 1$, and we want the points where the normal intersects the sphere. Therefore unless $\bf x \perp y$ we need to normalise. I'm afraid I can't help with the second half, because I'm not sure what definitions you're working from. Obviously you don't already have a definition or theorem to the effect that the equation of a plane is ${\bf P . N} = 0$ where $\bf N$ is its normal, or you would have used it already.
H: Understanding the average slope of a curve This question mainly asks: Is my understanding of the average slope correct? This question is somewhat related to my previous question. However, it's different from the previous question to a certain extent. If $P(x_0,y_0)$ and $Q(x_0+\Delta x, y_0+\Delta y)$ are two points on the curve of the graph of the square function ($y=x^2$), then the average slope of the curve between these two points can be given by the equation: $\bar m = 2x_0+\Delta x$. If $\Delta x$ is infinitely small, it can be neglected and we're left with $m=2x_0$ which is the formula to find the slope of a point at the curve. Let $P(2,4)$ and $Q(2.00005, 4.000200003)$ be two points on the curve. The average slope between these two points of the curve is $4.00005$. Now, let's take 3 points which are between the curve $PQ$, and find their slopes by the formula $2x_0$. Let those 3 points be $(2.00002, 4.00008),(2.00003, 4.0001200009)$ and $(2.00004, 4.000160002)$. Calculating the slopes at the given points, 1) $m=4$ at $(2, 4)$ 2) $m=4.00004$ at $(2.00002, 4.00008)$ 3) $m=4.00006$ at $(2.00003, 4.0001200009)$ 4) $m=4.00008$ at $(2.00004, 4.000160002)$ 5) $m=4.0001$ at $(2.00005, 4.000200003)$ Now we shall calculate the average of above values: $m_{av} = \frac{4+4.00004+4.00006+4.00008+4.0001}{5}$ $m_{av} = 4.000056$ Conclusion: The average slope $\bar m = 2x_0 + \Delta x$ is called so, because its value is approximately equal to the average slope found by dividing the sum of the slopes at certain points in an interval by the total no. of points, or $\bar m ≐ m_{av}$. I know that $\Delta x=0.00005$ is not infinitely small nor negligible, but for the sake of simplicity I've used it here. $4.00005≈4.000056$. They indeed are approximately equal. My question is: Is the way I understand the average slope correct? The way I've given above by calculating those long values? AI: I'm not clear as to what your question is, but your conclusion is correct. You will learn shortly why $y = x^2 \implies \;\text{slope}\; \approx 2x_0 + \Delta x$, and why as $\Delta x$ grows smaller, slope $\approx 2x$ at the point $(x, x^2),$ when you learn about derivatives. As you calculated: $$ m_{av} = 4.000056,$$ and we can compute $$\overline m = 2 x_0 + \Delta x = 2\cdot 2 + (2.00005 - 2) = 4 + .00005 = 4.00005$$ So can you see how $\overline m \approx m_{av}$? Added: Yes, your "experiment" using this function, and calculating slopes using $m = 2x$ at various points for which the change in $x$ is very small and dividing by the number of slopes you calculated, indicates what average slope over a small interval seeks to estimate.
H: Is this function from $[-1,1] \rightarrow \mathbb{R} \cup \infty$ continuous? just a short question. So I was wondering about functions from compact sets into $\mathbb{R} \cup \{+\infty\}$. Let's say we have a function $f : [-1,1] \rightarrow \mathbb{R} \cup \{+\infty\}$, defined by $f(x) = -1/x$ if $x \in [-1,0)$, $f(0) = +\infty$, $f(x) = 1/x$ if $x \in (0,1]$. I think this should be continuous, although somehow my brain doesn't want to believe that it is. AI: This is clearly continuous everywhere on $[-1,0)\cup(0,1]$, so we need only check continuity at $x = 0$. To talk about continuity, we need a topology, but taking a lead from the $\epsilon - \delta$ definition of continuity in real analysis, we can define $f(x)$ to be continuous with value $\infty$ at $x = a$ if for all $N > 0$, there exists $\delta > 0$ such that $\left|\,f(x)\right| > N$ whenever $\left|x - a\right| < \delta$. In this case, we have $\left|\frac{1}{x}\right| > N$ whenever $\left|x\right| < \frac{1}{N}$, so that $f$ is indeed continuous with value $\infty$ at $x = 0$ (you would have to check that this definition of continuity agrees with your topology on $\Bbb R\cup \{\infty\}$).
H: Differentiate $y = \sqrt {{{1 + 2x} \over {1 - 2x}}} $ logarithmically $\eqalign{ & y = \sqrt {{{1 + 2x} \over {1 - 2x}}} \cr & \ln y = {1 \over 2}\ln (1 + 2x) - {1 \over 2}\ln (1 - 2x) \cr & {1 \over y}{{dy} \over {dx}} = {1 \over 2} \times {2 \over {(1 + 2x)}} - {1 \over 2} \times {{ - 2} \over {(1 - 2x)}} \cr & {1 \over y}{{dy} \over {dx}} = {1 \over {(1 + 2x)}} + {1 \over {(1 - 2x)}} \cr & {{dy} \over {dx}} = {{(1 - 2x) + (1 + 2x)} \over {(1 + 2x)(1 - 2x)}} \times y \cr & {{dy} \over {dx}} = {{2{{(1 + 2x)}^{{1 \over 2}}}} \over {(1 + 2x)(1 - 2x){{(1 - 2x)}^{{1 \over 2}}}}} \cr & {{dy} \over {dx}} = {2 \over {{{(1 + 2x)}^{{1 \over 2}}}{{(1 - 2x)}^{{3 \over 2}}}}} \cr} $ However the answer is: ${{dy} \over {dx}} = {2 \over {(1 - 2x)\sqrt {(1 - 4{x^2})} }}$ Where have I gone wrong? AI: $$\frac{2}{(1+2x)^\frac{1}{2}(1-2x)^\frac{3}{2}} = \frac{2}{(1+2x)^\frac{1}{2}(1-2x)^\frac{1}{2}(1-2x)} = \frac{2}{(1-4x^2)^\frac{1}{2}(1-2x)}$$
H: Solving dependent systems When I'm solving a system of equations and realize that I have a dependent system, I need to express the answer in terms of y = {some value} where x is any real number, OR x = {some value} where y is any real number. How do I choose which one to use? For instance, I have this problem in my homework: x+4y = 5 5x + 20y = 25 x = 5-4y 5(5-4y) + 20 y = 25 25-20y+20y = 25 25 = 25 : DEPENDENT SYSTEM I then solve for x using the first equation: x + 4y = 5 And arrive at the answer of: x = -4y + 5, where y is any real number However, the answer in the back of the packet is: y = -(x/4) + (5/4) where x is any real number I see how they arrived at this, but my question is: Why did they choose to solve for y instead of solving for x? Is this choice arbitrary? Would my answer be correct as well? AI: It's just a standard mathematical convention to express a function in 2D space as $x$ being the independent variable, and $y$ being the dependent variable (dependent on $x$). Your answer is still correct, just a different way of expressing it. This is because $x$ is usually the horizontal axis, and $y$ is usually the vertical axis, and looking at a function from left to right is easier to interpret than looking at it from top to bottom.
H: Proof on showing if F(x,y,z)=0 then product of partial derivatives (evaluated at an assigned coordinate) is -1 The task is as follows: Given: $$F(x,y,z) = 0$$ Goal: Show $\frac{\partial z}{\partial y}|_x \frac{\partial y}{\partial x}|_z \frac{\partial x}{\partial z} |_y = -1$ Here is my work so far: (1) Differentiate with respect to y, I get: $0 + F_2 + F_3 \frac{\partial z}{\partial y} = 0$ So $ F_3 \frac{\partial z}{\partial y} = - F_2$ (2) Differentiate with respect to x, I get: $F_1 + F_2 \frac{\partial y}{\partial x} + 0 = 0$ So $ F_2 \frac{\partial y}{\partial x} = - F_1$ (3) Differentiate with respect to z, I get: $F_1 \frac{\partial x}{\partial z} + 0 + F_3 = 0$ So $ F_1 \frac{\partial x}{\partial z} = - F_3$ (4) After some manipulations with the $F_i$, I get to the conclusion that $\frac{\partial z}{\partial y}* \frac{\partial y}{\partial x} * \frac{\partial x}{\partial z} = -1$, so when evaluated with x, z, y respectively, the conclusion is still true My questions are: (1) Is my proof correct? (2) For example, when I differentiate with respect to y, I "let" $F_1$ be 0 and find partials for other coordinates. I had a hard time trying to explain to my friend the reason(s) why I can do such a "let be 0" thing. Although I think if I can't do that, then there is no way that I can reach the conclusion, but I somehow feel confused about the fact too. Since my book is doing it that way, my understanding is that I can do such a "let be 0" thing based on the independence of x with respect to y, when I differentiate with respect to y. But is my thought ok? Would someone please help me on this question? Thank you very much ^_^ AI: A simpler way is the total differential $$dF=\frac {\partial F}{\partial x} dx+\frac {\partial F}{\partial y} dy+\frac {\partial F}{\partial z} dz=0$$ If it says "evaluated at $x$" it means that $x$ is fixed and $dx=0$ $$\frac {\partial F}{\partial y} dy+\frac {\partial F}{\partial z} dz=0\Rightarrow \frac{dz}{dy}=-\frac{\partial F/\partial y}{\partial F/\partial z}$$ and similarly $$dy=0\Rightarrow\frac {\partial F}{\partial x} dx+\frac {\partial F}{\partial z} dz=0\Rightarrow \frac{dx}{dz}=-\frac{\partial F/\partial z}{\partial F/\partial x}$$ $$dz=0\Rightarrow\frac {\partial F}{\partial x} dx+\frac {\partial F}{\partial y} dy=0\Rightarrow \frac{dy}{dx}=-\frac{\partial F/\partial x}{\partial F/\partial y}$$ And it follows $$\frac{dz}{dy}\frac{dy}{dx}\frac{dx}{dz}=\bigg(-\frac{\partial F/\partial y}{\partial F/\partial z}\bigg)\bigg(-\frac{\partial F/\partial x}{\partial F/\partial y}\bigg)\bigg(-\frac{\partial F/\partial z}{\partial F/\partial x}\bigg)=-1$$
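A concrete check of the triple product rule on a sample relation (a sketch using sympy; any $F$ whose partials don't vanish works here, the particular $F$ below is just an example):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = x + y**2 + z**3 - 7   # an arbitrary relation F(x, y, z) = 0
Fx, Fy, Fz = sp.diff(F, x), sp.diff(F, y), sp.diff(F, z)

dz_dy = -Fy / Fz   # x held fixed
dy_dx = -Fx / Fy   # z held fixed
dx_dz = -Fz / Fx   # y held fixed
print(sp.simplify(dz_dy * dy_dx * dx_dz))  # prints -1
```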
H: Finding a kernel of a linear transformation of linear transformations. Question: Let V,W be vector spaces over field F. We mark L(V,W) as the vector space of linear transformations from V to W. Let $v_0 \ne 0$. We define a transformation: $\Psi: L(V,W) \to W$ that sends a linear transformation $T \in L(V,W)$ to $T(v_0) \in W$ What is the image of $\Psi$ ? Evaluate $dim(Ker \Psi)$ What I did: I said that $dim(V)=m, dim(W)=n$, therefore $dim L(V,W)=mn$ I see that $\Psi$ can't be injective because the dimensions of L and W are not equal. $Im\Psi=\Psi(T)=T(v_0)$ I can't figure out how to calculate the dimension of the kernel. A thought I had is that since dimW < dimL then if $\Psi$ is surjective then $dimKer\Psi = mn-m$, but I don't know what if it's not surjective. Thanks in advance AI: Well, if I'm not missing something, $\Psi$ is always surjective (for any $V, W$), at least assuming $dim(V)$ and $dim(W)$ are finite, as you implicitly seem to do when you write $dim(V)=m$ and $dim(W)=n$. To show this, complete $v_{0}$ to a basis of $V$, let's say $(v_{0},\dots,v_{m-1})$ (a single non-zero vector is linearly independent so you can do this). For any $w\in W$ consider the linear transformation $T\in L(V,W)$ such that $T(v_{0})=w$ and $T(v_{i})=0$ for any $i=1,\dots,m-1$. Obviously $\Psi (T)=w$ so that $Im (\Psi)=W$ and then one gets $dim(Ker(\Psi))=mn-n=dim(L(V,W))-dim(Im(\Psi))$.
H: Computing $\iiint_\mathbb{R^3} e^{-x^2-y^2-z^2}dxdydz$ using substitution Consider this integral: $$\iiint_\mathbb{R^3} e^{-x^2-y^2-z^2}dxdydz$$ How would you compute it? I already solved this problem this way: $$\iiint_\mathbb{R^3} e^{-x^2-y^2-z^2}dxdydz = \left( \int_\infty^\infty e^{-x^2} \right)^3 = \pi^{3/2}$$ But I wanted to find it using substitution (spherical coordinates) but this is all I could do: $$\iiint_\mathbb{R^3} e^{-x^2-y^2-z^2}dxdydz = \lim_{j\,\rightarrow\infty}\int_0^jdu\int_0^\pi dv\int_0^{2\pi} e^{-u^2}u^2\sin(v) dw=$$ $$=2\pi\lim_{j\,\rightarrow\infty}\int_0^jdu\int_0^\pi e^{-u^2}u^2\sin(v)dv=4\pi\lim_{j\,\rightarrow\infty}\int_0^je^{-u^2}u^2du$$ But it doesn't get me anywhere. Help would be very appreciated, thanks in advance. AI: To finish off your problem, you only need $$\int_{0}^{\infty}e^{-x^2}x^2dx=\frac{\sqrt{\pi}}{4}$$ which can be shown by integrating $$\int_{0}^{\infty}e^{-x^2}dx=\frac{\sqrt{\pi}}{2}$$ by parts. That is, we have: $$\frac{\sqrt{\pi}}{2}=\int_{0}^{\infty}e^{-x^2}dx=xe^{-x^2}\vert_{0}^{\infty}+2\int_{0}^{\infty}e^{-x^2}x^2dx=2\int_{0}^{\infty}e^{-x^2}x^2dx$$ Hence your spherical-coordinate computation gives $4\pi\lim_{j\to\infty}\int_0^j e^{-u^2}u^2\,du = 4\pi\cdot\frac{\sqrt{\pi}}{4} = \pi^{3/2}$, in agreement with the first method.
H: Deduce that there exists a prime $p$ where $p$ divides $x^2 +2$ and $p≡3$ (mod 4) I am revising for a number theory exam and have a question that I am struggling with, any help would be greatly appreciated. First I am asked to show that for an odd number $x$, $x^2+2 ≡3$(mod 4). I can do this part of the question, but next I am asked to deduce that there exists a prime $p$ where $p$ divides $x^2 +2$ and $p≡3$ (mod 4) I am struggling to see how to attempt the second part and how the first part relates. My thoughts so far are that I want to show $x^2≡-2$(mod p) ? And perhaps Fermat's Little Theorem could be of use here somehow? Not sure if I'm barking up the wrong tree though. Thanks in advance. AI: Consider the prime factorization of $x^2+2$. Since $x$ is odd, $x^2+2$ is odd, implying $2$ will not show up in the factorization. Now consider the primes that DO show up in the prime factorization. If they are all $1$ modulo $4$, then their product will also be $1$ modulo $4$. This is not true though, since you know that $x^2+2$ is $3$ modulo $4$. Therefore, there must be a prime in the prime factorization of $x^2+2$ that is not $1$ modulo $4$. Since primes other than $2$ that are not $1$ modulo $4$ are $3$ modulo $4$, this completes the reasoning.
H: Poisson distribution question, tips needed! A car dealership opens every day with a fresh stock of $A$ cars. Let $N$ be the r.v. corresponding to the number of purchases per day. Suppose $N$ is distributed according to the Poisson distribution with parameter $\lambda=2$. What should $A$ be for the dealership to run out of cars to sell at most once every 10 days? I have no clue how to begin to answer this question. How should I approach this problem? I'm not looking for a ready solution, but rather an explanation/tips on how to deal with this sort of problem. All I know is how the Poisson distribution looks like, and what the parameter means, but beyond that, I'm stuck. AI: We need to interpret "at most once every $10$ days." With bad luck one could run out every day for a week, no matter how big $A$ is. So we interpret "at most once every $10$ days" as meaning "with daily probability $\frac{1}{10}$." So our stock should be such that $\Pr(N\le A) \ge 0.9$. Presumably we want the smallest such $A$.
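Under the answer's reading of the problem ($\Pr(N\le A)\ge 0.9$), the smallest adequate stock can be computed directly (a sketch; `poisson_cdf` is an ad hoc helper, not a library call):

```python
import math

def poisson_cdf(a, lam=2.0):
    # P(N <= a) for N ~ Poisson(lam)
    return sum(math.exp(-lam) * lam**n / math.factorial(n) for n in range(a + 1))

A = 0
while poisson_cdf(A) < 0.9:
    A += 1
print(A, poisson_cdf(A))  # 4, ~0.947 (while P(N <= 3) is only ~0.857)
```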
H: What is the reason for the name *left* coset? Let $G$ be a group and let $H \leq G$ be a subgroup. It seems that it is now standard to call the cosets $$gH=\{gh \ | h \in H \}$$ the left cosets of $H$ in $G$. I have to admit to being slightly annoyed with this convention: these are the orbits for the right action of $H$ on $G$. Therefore I am perpetually tempted to refer to them as right cosets. Is there a second good reason (I happily admit convention is a very good reason) for calling these left cosets? (The fact that the "g" is written on the left does not count, as far as I am concerned). AI: Don't think of them as the orbits of the right action of $H$ on $G$. The set of left cosets itself admits a natural left $G$-action, and every transitive left $G$-action arises in this way.
H: Why is this derivative incorrect? We have to find the derivative of $$f(x) = \dfrac{\tan(2x)}{\sin(x)}$$ I would like to know why my approach is incorrect: $$f'(x) = \dfrac{\sin(x) \cdot \dfrac{2}{\cos^2(2x)} - \tan(2x) \cdot \cos(x)}{\sin^2(x)}$$ $$ = \dfrac{ 2 \sin(x) - \tan(2x) \cdot \cos(x)}{\cos^2(2x) \cdot \sin^2(x)}$$ $$ = \dfrac {2 \sin(x) - \sin(2x) \cdot \cos(x)}{\cos^3(2x) \cdot \sin^2(x)}$$ p.s. - To avoid confusion ; I wanted to get rid of the $\tan$. I'm sure there is a shorter method than this but I don't want it; I just want to know why this is wrong. AI: The third line should be $ \dfrac{2 \sin(x) - \tan(2x) \cdot \cos(x)\cdot\cos^2(2x)}{\cos^2(2x) \cdot \sin^2(x)}$ instead of $\\\\ \dfrac{ 2 \sin(x) - \tan(2x) \cdot \cos(x)}{\cos^2(2x) \cdot \sin^2(x)}$ When you multiply the numerator and denominator by $\cos^2(2x)$ to clear the fraction inside the fraction, the factor $\cos^2(2x)$ has to multiply every term of the numerator; you applied it only to the first term.
H: Simplify summation with factorial and binomial coefficients I would like to know how to simplify the following summation: $$\sum_{p=0}^n\quad n!\frac{(2p)!}{(p!)^2}\frac{(2(n-p))!}{((n-p)!)^2}$$ Which rules should I use to simplify it? Thanks! AI: You may start with the generating function of $\displaystyle \frac{(2k)!}{(k!)^2}$ which is : $$f(x):=\sum_{k=0}^\infty\ \frac{(2k)!}{(k!)^2}x^k=\sum_{k=0}^\infty\ \binom{2k}{k}x^k$$ but this is the well known central binomial generating function : $$f(x)=\frac 1{\sqrt{1-4x}}$$ and compute the square of $f(x)$ so that the coefficients of the 'diagonal terms' $\;\sum_{k=0}^n \binom{2k}{k}\binom{2(n-k)}{(n-k)}x^kx^{n-k}\,$ will be your sums for increasing values of $n$ (ignoring the 'constant' factor $n!$) : $$f(x)^2=\frac 1{1-4x}=\sum_{n=0}^\infty\ 4^n\;x^n$$ From this we deduce simply : $$\boxed{\displaystyle n!\sum_{p=0}^n\ \frac{(2p)!}{(p!)^2}\frac{(2(n-p))!}{((n-p)!)^2}=\ n!\,4^n}$$
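A quick check of the boxed identity for small $n$ (a sketch):

```python
import math

for n in range(8):
    lhs = math.factorial(n) * sum(
        math.comb(2*p, p) * math.comb(2*(n - p), n - p) for p in range(n + 1))
    assert lhs == math.factorial(n) * 4**n
print("identity verified for n = 0..7")
```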
H: How to show that if $p(A) = 0 \implies p(\lambda_0)=0$? Let $V$ be a finite dimensional vector space and $V \ne \{ 0 \}, A\in L(V), \lambda_0 \in \sigma(A)$. If $p(\lambda)$ is an arbitrary polynomial for which the following applies: $p(A) = 0 $, prove that $p(\lambda_0)=0$ $L(V)$ = The set of all linear mappings (linear operators) from V to V $\lambda_0$ = eigenvalue $\sigma(A)$ = spectrum of A If I am correct this would involve the Hamilton-Cayley theorem but I don't know how to apply it. I know that $\lambda_0$ is a zero of the polynomial $det(A-\lambda I)$ but I don't even know where to start. Edit (hint from mrf): Since $V$ is finite-dimensional, we can mark the degree of the polynomial with n $\implies p(\lambda) = \alpha_n\lambda^n + \alpha_{n-1}\lambda^{n-1} + ... + \alpha_2\lambda^2 + \alpha_1\lambda + \alpha_0$ So $p(A) = \alpha_nA^n + \alpha_{n-1}A^{n-1} + ... + \alpha_2A^2 + \alpha_1A + \alpha_0I$ Let $v$ be a corresponding eigenvector to the eigenvalue $\lambda_0 \implies Av = \lambda_0v$ $p(A)v = (\alpha_nA^n + \alpha_{n-1}A^{n-1} + ... + \alpha_2A^2 + \alpha_1A + \alpha_0I)v $ $p(A)v = \alpha_nA^nv + \alpha_{n-1}A^{n-1}v + ... + \alpha_2A^2v + \alpha_1Av + \alpha_0Iv$ If $\lambda \in \sigma(A) \implies \lambda^k \in \sigma(A^k)$ Proof: $\lambda \in \sigma(A) \implies \exists v \in V$ so that $Av = \lambda v$ $Av = \lambda v \hspace{10px}| A$ $A^2v = A(Av) = A(\lambda v) =$ A is linear $ =\lambda(Av) = \lambda^2v$ $A^2v = \lambda^2v \hspace{10px}| A$ $A^3v = A(A^2v) = A(\lambda^2 v) =$ A is linear $ =\lambda^2(Av) = \lambda^3v$ And by induction we get: $A^kv = \lambda^k v$ $\implies$ $p(A)v = \alpha_n\lambda_0^nv + \alpha_{n-1}\lambda_0^{n-1}v + ... + \alpha_2\lambda_0^2v + \alpha_1\lambda_0v + \alpha_0v = p(\lambda_0)v$ Since $p(A) = 0$, this gives $0 = p(\lambda_0)v$, and since $v \neq 0$ (it is an eigenvector), it follows that $p(\lambda_0) = \alpha_n\lambda_0^n + \alpha_{n-1}\lambda_0^{n-1} + ... + \alpha_2\lambda_0^2 + \alpha_1\lambda_0 + \alpha_0 = 0$. (One cannot literally "divide by $v$"; the conclusion comes from $v$ being a nonzero vector.) AI: Hint: Let $v$ be a corresponding eigenvector. Compute $p(A)v$.
H: Example for a sequence of operators converging pointwise, but not with respect to the operator norm I am trying to understand the following example. Define $$T_n: l^2 \rightarrow l^2$$ $$T_n(x)=(0, ..., 0, x_{n+1}, ...).$$ It's rather clear that $T_n(x)$ converges to $0$ for every $x \in l^2$. However, the script says that $(T_n)_{n \in \mathbb{N}}$ does not converge with respect to the operator norm. The last part is difficult to understand for me. So I thought maybe my understanding of the first statement is wrong? If we arbitrarily choose $x \in l^2$, we have $\sum_{i=0}^\infty |x_i|^2 < \infty$. We also have that $\|T_n(x)\|= \left( \sum_{i=n+1}^\infty |x_i|^2 \right)^{1/2}$ is decreasing for increasing $n$. If we look at the subsequence of all $x_i \neq 0$, it becomes strictly decreasing and by assumption bounded. So $\|T_n(x)\|$ should converge to $0$. Is that reasoning flawed? For the operator norm, I have to look at the limit of $$\|T_n\| = \sup \left\{ \|T_n(x)\| : \|x\|=1 \right\}.$$ Intuitively, it makes sense that the limit is not $0$, since infinitely many elements of my sequence always remain and thus the supremum over all $x \in l^2$ shouldn't be $0$. Yet, I have some trouble with the coincidence of limit and supremum here. E.g., how could I calculate the supremum for a fixed $n \in \mathbb{N}$? Does the sequence $\|T_n\|$ converge to a value that is not zero or not at all? Clearly, I wouldn't know how to properly examine this problem. Occurrence of suprema in general has always provided difficulties for me. AI: First of all, if $x = (a_k) \in \ell^2$ is the sequence with $a_{n+1} = 1$ and $a_k = 0$ otherwise, we have that $$\|T_n x\|_{\ell^2} = \|(0, \dots, 0, 1, 0, \dots)\|_{\ell^2} = 1.$$ Hence $\|T_n\|_{\text{op}} \geq 1$. On the other hand, note that for any $y = (b_k) \in \ell^2$, $$\|T_n y\|_{\ell^2}^2 = \sum_{k = n+1}^\infty |b_k|^2 \leq \sum_{k = 1}^\infty |b_k|^2 = \|y\|_{\ell^2}^2.$$ Therefore $$\|T_n\|_{\text{op}} = \sup \{ \|T_n y\|_{\ell^2} : \|y\|_{\ell^2} = 1\} \leq 1.$$ Since $\|T_n\|_{\text{op}} \geq 1$ and $\|T_n\|_{\text{op}} \leq 1$, it must be that $$\|T_n\|_{\text{op}} = 1.$$ In particular, $\|T_n - 0\|_{\text{op}} = 1$ for every $n$, so $(T_n)$ does not converge to $0$ in the operator norm, even though $T_n x \to 0$ for each fixed $x$.
H: Conditional expectation is square-integrable I am given the following definition: Let $(G_i:i\in I )$ be a countable family of disjoint events, whose union is the probability space $\Omega$. Let the $\sigma$-algebra generated by these events be $\mathcal{G}$. Let $X$ be an integrable random variable, that is $E|X|<\infty$. The conditional expectation of X given $\mathcal{G}$ is given by $Y=\sum_iE(X1_{G_i})1_{G_i}/P(G_i)$. I am told that $Y$ is square integrable, that is $E(Y^2)<\infty$. $1_A$ denotes the indicator function for the event $A$. But I could not prove that $Y$ is square integrable. Can anyone offer a proof? Thanks. AI: If $X$ is $\mathcal G$-measurable, then $Y=X$. Thus, with the only assumption that $X$ is integrable, there is no way to deduce that $Y$ is square integrable. Counterexample: $X=\sum\limits_ix_i\mathbf 1_{G_i}$ with $\sum\limits_ix_iP[G_i]$ finite and $\sum\limits_ix_i^2P[G_i]$ infinite. On the other hand, in the general case, if $X$ is square integrable then $E[Y^2]\leqslant E[X^2]$ hence $Y$ is square integrable.
H: An equivalent expression of Cauchy Criterion? For a sequence $\{a_n\}$, if $$ \forall \epsilon>0 \ \exists N>0, \forall k \in \mathbf{N}, \ |a_{N+k}-a_N|<\epsilon \ $$ Then $\{a_n\}$ converges and hence is a Cauchy sequence. Now how about changing the inequality above to $|a_{N+k}-a_N|< a_{N+k}\cdot \epsilon$, or equivalently $|\frac{a_N}{a_{N+k}} - 1| < \epsilon$? Does the sequence still converge? AI: Does the sequence still converge? Yes. Assume the criterion in the question holds and, for every $n\geqslant1$, let $N_n$ such that, for every $j\geqslant i\geqslant N_n$, $|a_i-a_j|\leqslant a_j/2^n$. Then, for every $j\geqslant N_1$, $\frac23a_{N_1}\leqslant a_j\leqslant2a_{N_1}$, hence $(a_k)$ is bounded. Furthermore, for every $n\geqslant1$, for every $j\geqslant i\geqslant N_n$, $a_i\leqslant(1+2^{-n}) a_j$ and $(1-2^{-n})a_j\leqslant a_i$ hence $(1-2^{-n})\limsup a_k\leqslant\liminf a_k$. Thus, $(a_k)$ converges. Note that the condition $$ \forall \epsilon>0, \ \exists N>0, \forall k \geqslant0, \ |a_{N+k}-a_N|<a_N\cdot\epsilon, $$ also implies convergence (the proof is similar).
H: Is $\operatorname{Aut}(\mathbb{I})$ isomorphic to $\operatorname{Aut}(\mathbb{I}^2)$? Is $\def\Aut{\operatorname{Aut}}\Aut(\mathbb{I})$ isomorphic to $\Aut(\mathbb{I}^2)$ ? ($\mathbb{I},\mathbb{I}^2$ have their usual meaning as objects in $\mathsf{Top}$). I show some of one of my attempts. I was trying to show that the two are not isomorphic by considering the elements of order $2$ of both groups. If $\Aut(\mathbb{I})$ has a finite number of elements of order $2$, then $\Aut(\mathbb{I}^2)$ would have more elements of order $2$ (because for each $f\in \Aut(I)$ of order $2$, the functions $f_1,f_2$ given by $f_1(x,y)=(f(x),f(y)),f_2(x,y)=(x,f(y))$ are elements of order $2$ of $\Aut(\mathbb{I}^2)$ ). Shortly, I found that $\Aut(\mathbb{I})$ has infinitely many elements of order $2$, thus this does not work. AI: If you knew that there were only a non-zero finite number of elements of $\text{Aut}(\Bbb I)$ of order $2$, then your approach would do the trick. However, since you know that each map $x\mapsto(1-x^n)^{1/n}$ with $n\in\Bbb Z_+$ is an element of $\text{Aut}(\Bbb I)$ of order $2$, then there are infinitely many such elements in both automorphism groups. Without something more--like showing that the respective infinite collections of order $2$ elements are of different cardinality--this won't be enough. Consider instead the function $g:\Bbb I^2\to\Bbb I^2$ given by $$g(x,y)=(1-y,x).$$ You should be able to show that $g$ is an automorphism of order $4$. Does $\text{Aut}(\Bbb I)$ have any elements of order $4$? Hints: Each auto-homeomorphism on $\Bbb I$ will either be strictly increasing or strictly decreasing. The only strictly increasing auto-homeomorphism on $\Bbb I$ of finite order is the identity function. Consider the multiplicative subgroup $C_2:=\{1,-1\}$ of $\Bbb C^\times,$ and the function $\phi:\text{Aut}(\Bbb I)\to C_2$ taking strictly increasing functions to $1$ and strictly decreasing functions to $-1$. You should be able to show that $\phi:\text{Aut}(\Bbb I)\to C_2$ is a surjective homomorphism. From this, we see that if $f$ is a strictly decreasing auto-homeomorphism of $\Bbb I$ of finite order, then $f$ has even order. How is the order of $f$ related to the order of $f^2$? Is $f^2$ strictly increasing or strictly decreasing? What can we then conclude about the order of $f$?
H: Are Taylor series and power series the same "thing"? I was just wondering in the lingo of Mathematics, are these two "ideas" the same? I know we have Taylor series, and their specialisation the Maclaurin series, but are power series a more general concept? How does either/all of these ideas relate to generating functions? AI: Taylor series are a special type of power series. A Taylor series has a very special form, given by $$T_f(x) = \sum_{n = 0}^{\infty}\frac{f^{(n)}(x_0)}{n!}(x-x_0)^n,$$ and a general power series looks like $$P(x) = \sum_{n = 0}^{\infty} a_n (x - x_0)^n,$$ where the $a_k$'s are just the constants associated to this power series in particular. The $a_n$'s may not have the form $f^{(n)}(x_0)/n!$, so that not every power series is a Taylor series (although every Taylor series is a power series). Edit: as Matt noted, in fact each power series is a Taylor series, but Taylor series are associated to a particular function, and if the $f$ associated to a given power series is not obvious, you will most likely see the series described as a "power series" rather than a "Taylor series." Both of these types of series can be generalized to forms involving more variables, and you can also come up with types of series that involve negative powers of $x$. As for generating functions, these are more formal objects, the analysis of which doesn't really deal with the issue of convergence as much as the analysis a power series or a Taylor series does. In this case, the coefficients are encoding information about some sequence of numbers $\{a_n\}$, and we examine the series formally to gather information about this sequence.
H: Number of Solutions for Congruency Equations I'm learning congruence equations, so for example: $$ ax \equiv b \pmod m $$ I have that the number of solutions will be equal to $d$, where $$ d = \gcd(a, m). $$ And the solutions are: $$ x, x+m/d, x+2m/d, x+3m/d, \ldots , x+(d-1)m/d $$ Now, having understood how I solve the equation, I am still not entirely sure as to why the number of solutions is given by $d$... Can anybody clarify, please? Cheers! :) AI: Let $d$ be a common divisor of $a$ and $m$. If the congruence $ax\equiv b\pmod{m}$ has a solution, then $ax-b$ is a multiple of $m$. Since $d$ divides both $m$ and $a$, it must divide $b$. In particular, if $d$ is the gcd of $a$ and $m$, the congruence has no solutions unless $d$ divides $b$. So from now on we suppose that $d$ divides $b$. Let $a=a_1d$, $m=m_1d$, and $b=b_1d$. The congruence $a_1dx\equiv b_1d\pmod{m_1d}$ holds if and only if $m_1d$ divides $a_1d x -b_1d$. This is the case if and only if $m_1$ divides $a_1 x-b_1$, that is, if and only if $a_1x\equiv b_1\pmod{m_1}$. So now consider the congruence $a_1x\equiv b_1\pmod{m_1}$. Since $d$ is the greatest common divisor of $a$ and $m$, it follows that the numbers $a_1$ and $m_1$ are relatively prime. Because $a_1$ and $m_1$ are relatively prime, the congruence $a_1x\equiv b_1\pmod{m_1}$ has a unique solution modulo $m_1$. For a proof of uniqueness, we can use the fact that $a_1$ has an inverse $c$ modulo $m_1$. Then $a_1x\equiv b_1\pmod{m_1}$ if and only if $ca_1x\equiv cb_1\pmod{m_1}$, that is, if and only if $x\equiv cb_1\pmod{m_1}$. (One can also give a direct proof of uniqueness without using the inverse.) So now we ask: given the unique solution $x$ modulo $m_1$, which numbers are congruent to $x$ modulo $m_1$? Certainly the numbers $x+\frac{im}{d}$ are, where $i$ ranges from $0$ to $d-1$. For these are the numbers $x+im_1$, and they are all congruent to $x$ modulo $m_1$. Are there any others? Suppose that $y\equiv x\pmod{m_1}$. Then $m_1$ divides $y-x$. So the possible remainders when $y-x$ is divided by $m$ are $0, m_1,2m_1,\dots, (d-1)m_1$. It follows that $y-x \equiv im_1 \pmod{m}$ for some $i$ where $0\le i\le d-1$. This just says that $y\equiv x+im_1 \pmod{m}$, that is, $y\equiv x+\frac{mi}{d}\pmod{m}$, for some $i$ ranging from $0$ to $d-1$. Remark: The important part was the uniqueness modulo $m_1$. The rest is straightforward, and you know it well. To give a simple numerical example, suppose that we know that $x\equiv 3\pmod{16}$. What can we say about $x$ modulo $80$? We can say that $x$ is congruent to one of $3$, $19$, $35$, $51$, $67$.
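A small sketch that enumerates the solutions exactly as described (find the unique solution modulo $m_1 = m/d$, then shift by multiples of $m/d$; requires Python 3.8+ for `pow(a, -1, m)`):

```python
from math import gcd

def solve_congruence(a, b, m):
    """All x in [0, m) with a*x = b (mod m), via the m/d-shift description."""
    d = gcd(a, m)
    if b % d:
        return []                          # no solutions unless d | b
    a1, b1, m1 = a // d, b // d, m // d
    x = (b1 * pow(a1, -1, m1)) % m1        # unique solution mod m1
    return [x + i * m1 for i in range(d)]  # d solutions mod m

print(solve_congruence(6, 9, 15))  # d = 3: [4, 9, 14]
```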
H: For natural $n$, prove $\prod_{k=1}^n \tan\left(\frac{k\pi}{2n+1}\right) = 2^n \prod_{k=1}^n \sin\left(\frac{k\pi}{2n+1}\right)=\sqrt{2n+1}$ Prove that, for a natural number $n$, $$\prod_{k=1}^n \tan\left(\frac{k\pi}{2n+1}\right) = 2^n \prod_{k=1}^n \sin\left(\frac{k\pi}{2n+1}\right)=\sqrt{2n+1}$$ This follows from a continued fraction identity for which, I think, there is a lengthy proof. But, I thought, that there may be a direct geometric or another proof. Constructing a polynomial with the sines and tangents roots may be helpful. AI: The proof for this should be identical to the one for: $$\prod_{k=1}^{n-1}\sin\frac{k \pi}{n} = \frac{n}{2^{n-1}}$$
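Numerically, both products indeed agree with $\sqrt{2n+1}$ (a sketch):

```python
import math

for n in range(1, 10):
    prod_tan = math.prod(math.tan(k*math.pi/(2*n + 1)) for k in range(1, n + 1))
    prod_sin = 2**n * math.prod(math.sin(k*math.pi/(2*n + 1)) for k in range(1, n + 1))
    print(n, prod_tan, prod_sin, math.sqrt(2*n + 1))  # three matching columns
```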
H: Do uniformly continuous functions map complete sets to complete sets? Let $f: (M, d) \rightarrow (N, \rho)$ be uniformly continuous. Prove or disprove that if M is complete, then $f(M)$ is complete. If I am asking a previously posted question, please accept my apologies and tell me to bugger off. I saw a similar problem but the solution was dealing with a Bi-Lipschitz function or some such business. I believe this statement to be true and here is a rough sketch of my reasoning: Since $f$ is uniformly continuous, then $f$ maps Cauchy to Cauchy. Let $(x_n)$ be a Cauchy sequence in $M$. Since $M$ is complete, $x_n \rightarrow x \in M$. Again, because of $f$'s uniform continuity, we now have $(f(x_n))$ is Cauchy in $N$ and $f(x_n) \rightarrow f(x) \in N$. Thus $N$ is complete. By the way, I am studying for an exam. This is certainly not homework. I gladly accept your criticisms. Thank you in advance for your help. AI: Take $f:\mathbb R\longrightarrow \mathbb R $, $f(x)=\arctan x$. Then $$f(\mathbb R)=(-\frac\pi2,\frac\pi2)$$ and $f'(x)=\dfrac{1}{1+x^2}\leq 1$ which implies that $f$ is uniformly continuous. However, $\mathbb R$ is complete, while $f(\mathbb R)=(-\frac\pi2,\frac\pi2)$ is not complete. (The gap in your argument: an arbitrary Cauchy sequence in $f(M)$ has the form $(f(x_n))$ for some preimages $x_n$, but those preimages need not form a Cauchy sequence in $M$; here $f(n)=\arctan n$ is Cauchy while $x_n=n$ is not. Also, the limit you produced lies in $N$, not necessarily in $f(M)$.)
H: How can I show that $x^4+6$ is reducible over $\mathbb{R}$? How can I show that the polynomial $x^4 + 6$ is reducible over $\mathbb{R}$ without explicitly finding factors? I was trying to find a non-prime ideal that would generate it but I'm kind of lost as to how to proceed. Is there some sort of criterion that will allow me to show that it's reducible in $\mathbb{R}$? AI: Hint: Fundamental Theorem of Algebra. Then use the fact that if a complex number $z$ is a root of a real polynomial, then $\bar z$ is also a root. It follows that over $\Bbb R$ only the polynomials of degree one and the quadratic polynomials with negative discriminant are irreducible. Since $x^4+6$ has degree $4$, it must therefore be reducible.
H: Proof of: If $x_0\in \mathbb R^n$ is a local minimum of $f$, then $\nabla f(x_0) = 0$. Let $f \colon \mathbb R^n\to\mathbb R$ be a differentiable function. If $x_0\in \mathbb R^n$ is a local minimum of $f$, then $\nabla f(x_0) = 0$. Where can I find a proof for this theorem? This is a theorem for max/min in calculus of several variables. Here is my attempt: Let $x_0$ = $[x_1,x_2,\ldots, x_n]$ Let $g_i(x) = f(x_0+(x-x_i)e_i)$ where $e_i$ is the $i$-th standard basis vector of dimension $n$. Since $f$ has local min at $x_0$, then $g_i$ has local minimum at $x_i$. So by Fermat's theorem, $g'(x_i)= 0$ which is equal to $f_{x_i}(x_0)$. Therefore $f_{x_i}(x_0) = 0$ which is what you wanted to show. Is this right? AI: Do you know the proof for $n=1$? Can you try to mimic it for more variables, say $n=2$? Since $\nabla f(t)$ is a vector what you want to prove is that $\frac{\partial f}{\partial x_i}(t)=0$ for each $i$. That is why you need to mimic the $n=1$ proof, mostly. Recall that for the case $n=1$, we prove that $$f'(t)\leq 0$$ and $$f'(t)\geq 0$$ by looking at $x\to t^{+}$ and $x\to t^{-}$. You should do the same in each $$\frac{\partial f}{\partial x_i}(t)=\lim_{h\to 0}\frac{f(t_1,\dots,t_i+h,\dots,t_n)-f(t_1,\dots,t_n)}h$$ ADD Suppose $f:\Bbb R\to \Bbb R$ is differentiable and $f$ has a local minimum at $t$. Then $f'(t)=0$. P Since $f$ has a local minimum at $t$, for suitably small $h$, $$f(t+h)-f(t)\geq 0$$ If $h>0$ then this gives $$\frac{f(t+h)-f(t)}{h}\geq 0$$ While if $h<0$ we get $$\frac{f(t+h)-f(t)}{h}\leq 0$$ Since $f'$ exists, the side limits also exist and equal $f'(t)$. From the above we conclude $f'(t)\geq 0$ and $f'(t)\leq 0$, so that $f'(t)=0 \;\;\blacktriangle$. Now, just apply that coordinatewise, and you're done.
H: Can countability coexist with infinity? This question concerns the countability of the real numbers. First I will show how I count the numbers between 0 and 1 on the real line. It is done by reversing digits behind the comma, so that e.g. 0,761 maps to 167. Obviously this is a 1 to 1 mapping, but there are infinitely many of those unique mappings depending on the chosen reasonable radix. In decimal number system I could count numbers like this: $ 0 , 0.1 , 0.2 , 0.3 ... 0.9 , \\ 0.01 , 0.11 , 0.21 , 0.31 , ... , 0.91 , \\ 0.02 , 0.12 , 0.22 , 0.32 , ... , 0.92 , \\ 0.03 , 0.13 , 0.23 , 0.33 , \dots , 0.93 , \\ \vdots \\ 0.09 , 0.19 , 0.29 , 0.39 , \dots , 0.99 , \\ 0.001 , 0.101 , 0.201 , 0.301 , \dots , 0.901 , \\ \vdots \\ 0.002 , \dots , 0.902, \\ \vdots \\ 0.092, \dots , 0.992 , \\ 0.003 , \dots , 0.903 , \\ \vdots \\ 0.004, \dots , 0,904,\\ \vdots \\ \infty \\ $ Now, given that I can "succeed" to count to infinity, I would also count all irrational numbers. There is no reason to haste. But then all irrational numbers are "somewhere" in the infinity. So either counting to infinity allows me to write irrational numbers backwards, or infinity and countability can not coexist. Which one is true ? How does your solution compare to rational numbers ? AI: Let's denote the $n^\text{th}$ number in your list by $f(n)$. To use your method to prove that $[0,1)$ is countable, you need to show that $f$ is a surjection, that is, for every real number $x \in [0,1)$ there is a natural number $n$ such that $f(n) = x$. In particular, you have to show that there is a natural number $n$ such that $f(n) = 1/3$. Note that $n$ here is just an ordinary, finite, natural number. So you have to show that $1/3$ appears at some finite stage in your list (e.g. it is the tenth number listed, or the millionth number listed.) This is not the case, and it is not correct to argue that it must appear sometime because $\mathbb{N}$ is so very large. For no value $n \in \mathbb{N}$ do we have $f(n)=1/3$, any more than we have $f(n) = -1$ or $f(n) = 37$. These numbers simply do not appear in the list.
H: Need help with Fourier transform problem I'm trying to calculate the Fourier transform of the unit step function, $$\mathcal{F}[u(t)] \ = \int_{-\infty}^{\infty}u(t)e^{-i\omega t}dt \ = \int_{0}^{\infty}e^{-i\omega t} dt. \tag{1}$$ This simplifies to, $$U(\omega) = (i\omega)^{-1},\ (\omega \not = 0). \tag{2}$$ However, my book claims that $(1)$ simplifies to $ \pi \delta(\omega) + (i\omega)^{-1}. \tag{3}$ Here, $\delta(\omega)$ is the unit impulse function. I don't have my book with me right now but I think they use the differentiation property to derive it by calculating the transform of the derivative of $u$ (which is $\delta$). My question is, isn't the appearance of $\delta$ in the result they obtain irrelevant? Since, at $\omega = 0$, $(3) = \infty$ and elsewhere $(3) = (2).$ So, why would they write $(3)$ instead of $(2)$? I should mention this is not in a mathematics textbook, but an engineering textbook. AI: The reason you get a delta function is that the step function is actually defined as $$\theta(t) = \begin{cases} 1 & t \gt 0\\ 1/2 & t=0\\ 0 & t \lt 0\end{cases}$$ That nonzero value at $t=0$ is a bit troublesome. Better to consider the signum function $\text{sgn}(t) = 2 \theta(t)-1$. The FT of $\text{sgn}(t)$ is $$\int_0^{\infty} dt \, e^{-i \omega t} - \int_0^{\infty} dt \, e^{i \omega t} = \frac{2}{i \omega}$$ (the improper integrals here are understood distributionally, e.g. as $\epsilon\to0^+$ limits after inserting a damping factor $e^{-\epsilon t}$). Note that the value at $t=0$, being zero, does not contribute to the FT. The FT of $\theta(t)$ follows from this, because $$\theta(t) = \frac12 \text{sgn}(t) + \frac12$$ so that its FT is $$\frac{1}{i \omega} + \frac12 \int_{-\infty}^{\infty} dt \, e^{-i \omega t} = \frac{1}{i \omega} + \pi \delta(\omega)$$
H: $\frac{d}{dx}\int_{0}^{e^{x^{2}}} \frac{1}{\sqrt{t}}dt$ I'm having trouble understanding how to apply the $\frac{d}{dx}$ when taking the anti-derivative. $$\frac{d}{dx}\int_{0}^{e^{x^{2}}} \frac{1}{\sqrt{t}}dt$$ In class it was mentioned we'll end up taking the derivative of $e^{x^{2}}$ which is $2xe^{x^{2}}$ My attempt at solving this is $$\frac{d}{dx}\int_{0}^{e^{x^{2}}} \frac{1}{\sqrt{t}}dt = [2t^{\frac{1}{2}}]_0^{e^{x^2}}$$ Then $$[2(e^{x^{2}})^{\frac{1}{2}}]-[2(0)^{\frac{1}{2}}]=2(e^{x^{2}})^{\frac{1}{2}}$$ Am I missing something? AI: You just got a little "sloppy" with notation, and forgot that your aim is to take the derivative of the integral you found: First, when integrating, after evaluating the indefinite integral, drop the integral sign: $$\frac{d}{dx}\int_{0}^{e^{x^{2}}} \frac{1}{\sqrt{t}}dt = \frac d{dx}\left[ 2t^{\frac{1}{2}}\Big|_0^{e^{x^2}}\right]$$ Then $$\dfrac d{dx}\left[[2(e^{x^{2}})^{\frac{1}{2}}]-[2(0)^{\frac{1}{2}}]\right]= \frac{d}{dx} \left[2(e^{x^{2}})^{\frac{1}{2}}\right]$$ Precisely...Now, use what you know about $\frac d{dx}\left(e^{x^2}\right) = \frac d{dx}\left(x^2\right)\cdot \left(e^{x^2}\right)$ and apply the same strategy (chain rule) to your result of evaluating the indefinite integral, as you did well. $$\frac{d}{dx} \left[2(e^{x^{2}})^{\frac{1}{2}}\right] = \frac{d}{dx} \left[2\left(e^{\large\frac{x^{2}}{2}}\right)\right] = 2\frac{d}{dx}\left(\frac{x^2}{2}\right)\cdot e^{\large\frac{x^2}{2}}= 2\cdot \dfrac 12\cdot 2xe^{\large\frac{x^{2}}{2}} = 2xe^{\frac{x^{2}}{2}}$$
H: $\lim_{R \to \infty} \int_0^R \frac{dx}{(x^2+x+2)^3}$ $$\lim_{R \to \infty} \int_0^R \frac{dx}{(x^2+x+2)^3}$$ Please help me in this integral, I've tried some substitutions, but nothing work. Thanks in advance! AI: First complete the square: $$ x^2 + x + 2 = \left( x^2 + x + \tfrac{1}{4} \right) + \tfrac{7}{4} = \left( x + \tfrac{1}{2} \right)^2 + \left( \tfrac{\sqrt{7}}{2} \right)^2. $$ Now, make the (inverse) trigonometric substitution: $$ \tan t = \frac{x + \tfrac{1}{2}}{\tfrac{\sqrt{7}}{2}}. $$ This choice of ratio is motivated by the sum of square expression above. As a consequence, we have $$ x^2 + x + 2 = \left( \tfrac{\sqrt{7}}{2} \tan t \right)^2 + \left( \tfrac{\sqrt{7}}{2} \right)^2 = \tfrac{7}{4} \left( \tan^2 t + 1 \right) = \tfrac{7}{4} \sec^2 t. $$ Now, the differential is $$ dx = \tfrac{\sqrt{7}}{2} \sec^2 t \, dt $$ and the limits of integration become $$ \begin{align} x &= 0 &\Longleftrightarrow \quad t(0) &= \arctan \tfrac{1}{\sqrt{7}} \\ x &= R &\Longleftrightarrow \quad t(R) &= \arctan \tfrac{2R + 1}{\sqrt{7}}. \end{align} $$ Now, substitute: $$ \int_0^R \frac{dx}{(x^2 + x + 2)^3} = \int_{t(0)}^{t(R)} \frac{\tfrac{\sqrt{7}}{2} \sec^2 t \, dt}{\left( \tfrac{7}{4} \sec^2 t \right)^3} = \left( \tfrac{2}{\sqrt{7}} \right)^5 \int_{t(0)}^{t(R)} \cos^4 t \, dt. $$ Can you finish it from here, using power reducing identities?
H: Find expression for $dy/dx $ + state where it is valid Hopefully you guys can shed some insight into this question I'm working on. Given $xy+y^{2}-e^{x^{2}} = 6$ find an expression for $dy/dx$ and state where it is valid. So, what I did was differentiate it, which resulted in: $x+3y-2xe^{x^{2}} = 0 $ Although I am unsure whether this is correct, and I do not understand the "state where it is valid" part. Any help or direction is much appreciated! AI: $xy+y^2 - e^{x^{2}}=6---(1)$ Differentiate with respect to $x$ to get $y+xy'+2yy' - 2xe^{x^{2}}=0$ $y'(x+2y)=2xe^{x^{2}}-y$ $\displaystyle y'=\frac{2xe^{x^{2}}-y}{x+2y}---(2)$ Solving $(1)$ we have $\displaystyle y=\frac{-x\pm \sqrt{x^2+4e^{x^{2}}+24}}{2}$ Put $y$ in $(2)$ $\displaystyle y'=\frac{2xe^{x^{2}}-\left(\frac{-x\pm \sqrt{x^2+4e^{x^{2}}+24}}{2}\right)}{x+2\left(\frac{-x\pm \sqrt{x^2+4e^{x^{2}}+24}}{2}\right)}$ $\displaystyle y'=\frac{4xe^{x^{2}}+x\mp \sqrt{x^2+4e^{x^{2}}+24}}{{\pm 2\sqrt{x^2+4e^{x^{2}}+24}}}$ Since the radicand $x^2+4e^{x^{2}}+24$ is positive for every real $x$, the denominator $\pm 2\sqrt{x^2+4e^{x^{2}}+24}$ is defined and nonzero everywhere, so the expression for $y'$ is valid on all of $\mathbb{R}$.
H: the set of countable sets of Real numbers I would like to ask some hints towards the proof that The set of countable sets in $\mathbb{R}$ is equinumerous to the set $\mathbb{R}$ AI: Hint: $\Bbb{R^N\sim(N^N)^N\sim N^{N\times N}\sim N^N}$. Show that there exists a surjection from that set onto the set of countable subsets and use the axiom of choice to conclude there is an injection in the reverse direction. Note that the axiom of choice has to be used, it is consistent that the axiom of choice fails and there is no bijection between the two sets (but there is still a surjection as above)!
H: What's the difference between arccos(x) and sec(x) My question might sound dumb, but I don't really see why the graphics of $\arccos(x)$ and $\sec(x)$ are different. As far as I know $\arccos(x)$ is the inverse cosine function $(\cos(x)^{-1})$ and $\sec(x)$ = $\frac{1}{\cos(x)}$ (source Wolfram|Alpha). So why aren't they equal? Thanks in advance. AI: That's a problem of notation and probably a lack of definitions. We define $\sec x$ as the multiplicative inverse of $\cos x$, in other words, fixed $a \in \mathbb{R}$, $\sec a$ is the number such that $\sec a \cos a = 1$. Now $\arccos x$ is a little different thing: it's the inverse function of $\cos x$. I don't know if you've learned this but the formal definition of a function is that of a collection of ordered pairs. In other words, since a function from a set $A$ to a set $B$ should be a rule assigning for each $a \in A$ some $b \in B$ we can simply define a function as the set of all ordered pairs of elements in $A$ together with related elements in $B$. However, we require the additional property that if $(a,b) \in f$ and if $(a,c)\in f$ then $b = c$ and this is just the formal way to state the "vertical line rule". Since the second element in each pair is unique we give it a name: if $(a,b) \in f$ then $b = f(a)$. Also to state starting and ending sets we write functions from $A$ to $B$ as $f: A \to B$. Now, if you have a function you have a collection of ordered pairs right? So, you can create a new set of ordered pairs by reversing the pairs. So if $f : A \to B$ is a function from $A$ to $B$ we define the inverse $f^{-1}$ by the property that $(a,b) \in f^{-1}$ when $(b,a)\in f$. Now it's not at all clear when $f^{-1}$ is a function. Just to show you that consider the following function that maps naturals to naturals: $$f = \{(1,2), (3,2), (4,1)\}\subset \mathbb{N}\times \mathbb{N}$$ This is a function by our definition. Now the inverse is $f^{-1} = \{(2,1), (2,3), (1,4)\}$, now this isn't a function because $(2,1)\in f^{-1}$ and $(2,3)\in f^{-1}$. So $f^{-1}$ will be a function if the original function also satisfies $f(x) = f(y)$ implying $x = y$. This kind of function is called one-one, and so if $f$ is one-one, $f^{-1}$ will be a function called the inverse function. Also, if $f: \mathbb{R} \to \mathbb{R}$ has an inverse function $f^{-1}:\mathbb{R} \to \mathbb{R}$ then $f(f^{-1}(x)) = x$ and $f^{-1}(f(x)) = x$. So $\arccos$ is defined precisely this way: fixing one interval where $\cos$ is one-one, you define $\arccos$ in that interval by the property that $\arccos x$ is the number $y$ such that $\cos y = x,$ in other words, it returns you the value of the angle whose cosine is $x$. Just a reference to finish: you can find treatments like this in books like Spivak's Calculus or Apostol's Calculus Vol. 1. I hope the way I exposed this helps you a little. Good luck! EDIT: The problem of notation I've mentioned and forgot talking about is that both the multiplicative inverse and the inverse function are in some contexts denoted by $\cos^{-1}$ and this usually happens to all trigonometric functions. So to avoid confusion, I recommend writing $\arccos$, $\arcsin$ and so on for the inverse functions.
H: Does the definition of countable ordinals require the power set axiom? I am trying to understand the consequences of the different axioms of ZFC. In particular, I was trying to understand what you get from ZFC-power set (ZFC minus the power set axiom). If you have any references that I could read, please let me know. In particular, I have a question. From the definition of ordinal number (for instance, Jech, p.19 and above) I believe that you do not need the power set axiom to define infinite ordinals beyond $\omega$, but I am not completely sure. If you don't need it, which is the largest ordinal that you can reach without using power set? Can you reach $\omega_1$? AI: You may be very interested in the following paper: Victoria Gitman, Joel David Hamkins, Thomas A. Johnstone, What is the theory ZFC without power set? (arXiv, 2011) Note that you cannot prove the existence of $\omega_1$ either: this is because $H(\omega_1)$, the set of hereditarily countable sets, contains all the countable ordinals and is a model of $\sf ZFC-Pwr$, but $\omega_1\notin H(\omega_1)$, because $\omega_1$ itself is not hereditarily countable.
H: Is it possible that the union of a Bernstein set and a singleton isn't a Bernstein set? Since the construction of a partition into two Bernstein sets is almost identical to that of a partition into three in an uncountable Polish space, it's possible that the union of a Bernstein set and a singleton is a Bernstein set. But is it possible otherwise, that is, that the union of a Bernstein set and a singleton contains an uncountable closed set? AI: No. To see this, notice that if you remove a point from a perfect set, what remains will still contain a perfect set, so the intersection of a Bernstein set with a perfect set is not just nonempty, but also infinite. In fact, it's dense (in the perfect set) by a similar argument. Therefore, removing (or adding) a single point does not turn a Bernstein set into a non-Bernstein set. Using measure arguments you can show that the intersection is uncountable: the intersection of a Bernstein set with any perfect set has full outer measure with respect to any continuous probability measure on the perfect set, so it can't be countable. (I vaguely recall that any Bernstein set is a union of $\mathfrak c$-many Bernstein sets, so that the intersection is actually of cardinality of the continuum, but I don't remember the proof, exactly, and might be mixing something up.)
H: Every curve is a geodesic?? I've been reading up on how isometries send geodesics to geodesics. I recently saw a proof of another theorem that used the fact: The set of fixed points of an isometry is a geodesic. But isn't the identity always an isometry, which would then imply every curve, in say the Poincaré half plane, is a geodesic? What's wrong with my reasoning? Thanks AI: You're probably recalling or interpreting the fact they used incorrectly. The relevant fact is the following: If $M$ is a Riemannian manifold and $f: M \longrightarrow M$ an isometry, then each connected component of $\mathrm{Fix}(f)$ is a closed totally geodesic submanifold of $M$. Note that a submanifold $N \subset M$ being totally geodesic doesn't mean that $N$ is a geodesic in $M$, it means that every geodesic in $N$ is a geodesic in $M$ as well.
H: Sketch the function $y = {1 \over {{x^2}}}\ln x$ I don't know where to begin with this; the ${1 \over {{x^2}}}$ part of the function throws me off. How do I go about this? How does one generally approach a question like this? AI: It is indeed an interesting function. You can try finding a lot of points: select a range of $x$ values, and evaluate $$f(x) = \dfrac{\ln x}{x^2}$$ for each $x$. Given enough data points, plotting them as you go along, you can try to "connect the dots" to see what you get. You can also check for asymptotes and critical values, if any (differentiate!!), to find any extrema, if they exist, etc.: that information will help reveal if and where the function peaks and/or bottoms out, where it is increasing, decreasing, etc. For example, if we take the limit of $f(x)$ as $x\to +\infty$ and as $x \to 0^+$, we see some interesting things: $f(x) \to 0$ as $x \to +\infty$, while $f(x) \to -\infty$ as $x \to 0^+$. Computing the derivative gives us $$f'(x) = \frac{1-2\ln x}{x^3}$$ So the derivative $f'(x) = 0$ at $x=e^{1/2}$, is positive for $x\lt e^{1/2}$ (hence the function is increasing there) and is negative, hence the function is decreasing, when $x\gt e^{1/2}$. Maximum: $x=e^{1/2}$. You can also "take a peek" at a plot to see what you're trying to approximate. [Disclaimer: Wolfram Alpha's graph takes, I assume, $\ln x$ in the numerator to be $\ln|x|$, in which case you'd have a symmetrical graph (the left portion mirroring the right-hand portion).]
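If you want to let the computer draw the dots for you, here is a minimal plotting sketch in Python (assuming numpy and matplotlib are available):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.01, 10, 1000)            # ln x needs x > 0
f = np.log(x) / x**2

plt.plot(x, f)
plt.axvline(np.sqrt(np.e), linestyle='--') # maximum at x = e^(1/2)
plt.axhline(0, color='gray', linewidth=0.5)
plt.ylim(-3, 1)                            # f plunges to -infinity as x -> 0+
plt.xlabel('x')
plt.ylabel('ln(x) / x^2')
plt.show()
```

The picture confirms the analysis: a root at $x=1$, a single hump peaking at $x=e^{1/2}$, and decay to $0$ from above as $x \to +\infty$.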
H: Tricky logarithmic problem? It is given that $\log_9 p = \log_{12} q = \log_{16} (p+q) $. Find the value of $q/p$. I can see that the bases have common factors, but I don't exactly know how to exploit that. I tried many approaches to this, but I couldn't get it. The farthest I got was probably $q/p=4/\sqrt{p}$ (if I remember right). Thanks! AI: Let all of them equal $x$. We then have $$p=9^x; q = 12^x; p+q = 16^x$$ Hence, we need solutions for $$9^x + 12^x = 16^x \implies 1 + \left(\dfrac43\right)^x = \left(\dfrac{16}9\right)^x$$ Now let $\left(\dfrac43 \right)^x = y$. We then have $$y^2 = y+1 \implies y = \dfrac{1+\sqrt5}2$$ Hence, $$\dfrac{q}p = \dfrac{12^x}{9^x} = \left(\dfrac43\right)^x = y = \dfrac{1+\sqrt5}2$$
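As a sanity check, one can verify numerically that this value of $q/p$ is consistent with all three of the original logarithms (a quick sketch in Python):

```python
from math import log, sqrt, isclose

phi = (1 + sqrt(5)) / 2        # claimed value of q/p
x = log(phi) / log(4 / 3)      # recover x from (4/3)^x = phi
p, q = 9**x, 12**x

assert isclose(log(p, 9), log(q, 12))      # log_9 p = log_12 q
assert isclose(log(p, 9), log(p + q, 16))  # = log_16 (p + q)
print(q / p)                               # 1.618..., the golden ratio
```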
H: Are the continuous functions on $G$ dense in $L^{1}(G)$? If $G$ is a locally compact group, is the set $C_{c}(G)$ of all continuous functions on $G$ with compact support dense in $L^{1}(G)$? AI: Yes. A more general result is the following: Proposition. Let $X$ be a locally compact Hausdorff space, and let $\mu$ be a Radon measure on $X$. Then for any $p \in (0, \infty)$, $C_c(X)$ is a dense subset of $L^p(X, \mu)$. See Proposition 3 in Terry Tao's blog post for a proof (this post also appears as a chapter in Tao's book An Epsilon of Room). Another reference is Proposition 7.9 in Folland's Real Analysis.
H: Evaluating limits involving absolute values: $\lim_{x\to\pm6} \frac{2x+12}{|x+6|}$ I have two questions regarding limits involving absolute values. How do I evaluate the following: $\displaystyle\lim_{x\to -6} \frac{2x+12}{|x+6|}$ $ \displaystyle \lim_{x\to 6} \frac{2x+12}{|x+6|}$. To handle the first problem, I considered two cases: $x\gt -6$ and $x\lt -6$ and thus get $2$ and $-2$ respectively for the right and left hand limits. However, I am a bit confused regarding the second problem. How do I handle that? AI: For the second problem, when $x$ approaches $6$, $x+6$ will be positive, so the absolute values are unnecessary; they're just there for show. Near $x=6$ we have $\frac{2x+12}{|x+6|}=\frac{2(x+6)}{x+6}=2$, so the limit is $2$. (More generally, if $a\neq 0$ and $x_n\operatorname*{\to}_{n\to\infty} a$, then there exists $N \geq 0$ such that for all $n\geq N$, $x_n$ and $a$ have the same sign.)
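Both limits are easy to check numerically (a small Python sketch, just tabulating values near the two points):

```python
f = lambda x: (2 * x + 12) / abs(x + 6)

for x in (-6.01, -6.001, -5.999, -5.99):
    print(x, f(x))   # -2 from the left of -6, +2 from the right: no limit at -6

for x in (5.99, 5.999, 6.001, 6.01):
    print(x, f(x))   # steadily 2: the limit at 6 is 2
```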
H: Convention for locally compact groups? $\bf{\text{Suppose I find the phrase:}}$ Let $G$ be a locally compact group, and $\mathcal{U}$ a basis of neighborhoods of $1$. $\bf{\text{Question:}}$ Is it a convention to automatically take each $U\in\mathcal{U}$ to be compact? Clearly this can be done if the need arises by the locally compact assumption on $G$, but when reading material, should this be my first interpretation of such a statement? AI: I wouldn't think so, unless it is explicitly stated as a convention in what you are reading. (Usually, an author would explicitly signal that $U$ is a compact neighbourhood if this is what was meant.)
H: Specific element in a set Given a set $A=\{a,b,c,d\}$, how can I take, for example, the first element? Like this: $A(0)=a; A(1)=b; A(0).A(1)=a.b$. I think there's no way to do that with sets, because a set isn't ordered. AI: For a set $A$, there is no way to refer to its first element, second element, or any such enumeration of elements, simply because a set is not ordered. You can ask whether an element is in a given set, and the answer is either yes or no. You can't ask where an element is placed in a given set, since no ordering exists in a set. It is possible to consider some extra structure on a set, turning it into a poset (short for partially ordered set). Then you might be able to speak of first element, second element, and so on, but even that is not guaranteed. Posets that most closely resemble an ordering like you are alluding to are called well-orders. A finite well-order really looks just like a sequence of elements in order. But infinite well-orders can be very complicated.
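Python's built-in set type happens to mirror this behaviour, which makes for a concrete illustration (a minimal sketch):

```python
A = {'a', 'b', 'c', 'd'}   # unordered, like a mathematical set

print('a' in A)            # True -- membership is the only question a set answers
# A[0]                     # TypeError: 'set' object is not subscriptable

# Imposing an order gives a different structure (here, a sorted list):
L = sorted(A)
print(L[0], L[1])          # a b -- now "first" and "second" make sense
```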
H: Finding a linear recurrence regarding strings The question is Let $T(n)$ be the number of length-$n$ strings of letters $a$, $b$ and $c$, that do not contain three consecutive $a$'s. Give a recurrence relation for $T(n)$ and justify it. (You do not have to solve it.) How would I go about solving this problem? I found the initial conditions of $n = 0, T(n) = 1$ $n = 1, T(n) = 3$ $n = 2, T(n) = 9$ $n = 3, T(n) = 26$ But I do not know what to do after that point. AI: Hint: Call such strings good. Let $p(n)$ be the number of good strings of length $n$ that end in a letter other than $a$. Let $q(n)$ be the number of good strings that end in a single $a$. Let $r(n)$ be the number of good strings that end in two $a$'s. Note that $p(n)+q(n)+r(n)=T(n)$. We have $$p(n+1)=2(p(n)+q(n)+r(n))=2T(n).\tag{$1$}$$ This is because we get a good word of length $n+1$ that doesn't end in $a$ by appending either $b$ or $c$ to any good word of length $n$. By similar reasoning, we have $$q(n+1)=p(n),\tag{$2$}$$ and $$r(n+1)=q(n).\tag{$3$}$$ Bump up the indices in Equation $(3)$ by $1$. We get $r(n+2)=q(n+1)=p(n)$. Add the right-hand sides and left-hand sides of the three displayed equations. We get $$T(n+1)=2T(n)+p(n)+q(n)=3T(n)-r(n).$$ But $r(n)=p(n-2)=2T(n-3)$. Thus $$T(n+1)=3T(n) -2T(n-3).$$
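The recurrence is easy to check against brute-force enumeration for small $n$ (a quick Python sketch):

```python
from itertools import product

def brute(n):
    # count length-n strings over {a, b, c} with no three consecutive a's
    return sum('aaa' not in ''.join(s) for s in product('abc', repeat=n))

T = [1, 3, 9, 26]                       # T(0) .. T(3), the initial conditions
for n in range(3, 10):
    T.append(3 * T[n] - 2 * T[n - 3])   # T(n+1) = 3T(n) - 2T(n-3)

print(all(T[n] == brute(n) for n in range(9)))  # True
```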
H: Show the map $f(x)=\frac12 (x+1/x)$ has an attractive fixed point in $(0,\infty)$ From numerical tests, I know $x=1$ is an attractive fixed point of the function $$ f(x)=\frac12 \left(x+\frac{1}{x}\right), $$ on $(0,\infty)$. Is there a way to prove it? Since $$ f'(x)=\frac12\left(1-\frac{1}{x^2}\right), $$ then $$ |f'(x)| < 1, $$ on $[1,\infty)$, and the Banach contraction principle gives the result. But how to proceed for $(0,1)$? Let $a_0 \in (0,1)$ and choose $$ 0< \epsilon < a_0, $$ then the derivative is bounded on $(\epsilon,\infty)$. But I don't see how to build a contraction on $(\epsilon,\infty)$, which gives the result. I tried $$ g(x)=\frac{1}{M}f(x), $$ with $$ M=\sup_{x\in(\epsilon,\infty)}|f'(x)|+1, $$ but it doesn't work. AI: Use the inequality $x+\frac{1}{x}\ge 2$, which can be proved many ways. For example, we can note that $\left(\sqrt{x}-\frac{1}{\sqrt{x}}\right)^2\ge 0$. Or else we can quote AM/GM. In particular, $f(x)\ge 1$ for every $x>0$, so after a single iteration the orbit lands in $[1,\infty)$, where your contraction argument takes over. Remark: The fixed point is in fact very attractive, since the derivative there is $0$.
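Numerically the attraction is easy to see: starting anywhere in $(0,\infty)$, one step lands in $[1,\infty)$ and the iterates then collapse onto $1$ (a small Python sketch):

```python
def f(x):
    return 0.5 * (x + 1 / x)

for x0 in (0.01, 0.5, 10.0):
    x = x0
    for _ in range(10):
        x = f(x)           # the first step already lands in [1, oo)
    print(x0, '->', x)     # all three sequences converge to 1
```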
H: How to improve mathematical creativity? To introduce myself: I'm an undergraduate mathematics student in Germany. Currently I'm studying in the second semester, and until now I'm doing well, but I still have the feeling that my ability to develop proofs (or to solve complex problems in general) needs to be improved. Let me explain myself: Sometimes, (especially) when it comes to proof tasks I'm being given, I feel like I don't know how to find my way through the proof; occasionally I don't even know where to start. Somehow I'm missing the feeling that my thoughts guide me to the correct answer, which I always used to have in school. This makes me feel a lack of "creativity", or in other words the ability to find my own path to reaching the solution. If there is any advice you could give me, I would appreciate that. AI: I remember having a similar feeling in my first undergraduate year. I could comprehend the proofs I was taught, and could mimic them afterwards on very similar problems, but I felt I lacked the creativity to actually think of proofs myself. Two realizations helped me get through this stage: Practice actually helps. The more you prove, the more tools are added to your inventory, even if this is not immediately apparent. Two years from now, you'll look back and may not even understand what you found difficult. Each of these proof "tools" was developed by someone very smart, generally over long periods of time thinking about the problems at hand. If you managed to come up with all the tricks and techniques by yourself, without first seeing some similar examples online or in books, you would indeed be a genius. In short, you shouldn't feel too bad about not "getting" proofs immediately - this does not mean you aren't creative, just that you have more to learn, and that things presented as trivial actually took quite a while to get to.
H: Find $M$, where $M^7=I$ and $M\neq I$, $M$ has only 0's and 1's. Find a $3 \times 3 $ matrix $M$ with entries 0 and 1 only such that $M^7=I$ and $M\neq I$. This was a short question in a recent exam. I tried with permutation matrices but couldn't find $M^{\text{odd}}=I$ except for order $3$. AI: There is no such matrix over $\mathbb{R}$. The matrix satisfies the polynomial $x^7-1$, which implies that the eigenvalues lie amongst the $7^{\text{th}}$ roots of unity and that the matrix is diagonalizable. This implies that the characteristic polynomial, being a real cubic, has either three real roots, or one real root and a conjugate root pair. If all roots are real then we must have $1$ repeated three times as the eigenvalues. Diagonalizability then forces $M=I$. If we have a conjugate root pair, say $\left(\omega, \overline{\omega}\right)$, and a real root (which is again $1$) then we know that $$1 + \omega + \overline{\omega} = 1 + 2\Re(\omega) = \mathrm{tr}(M)$$ But since $M$ has only $0$s and $1$s as entries, this implies that the trace is integral and hence $2\Re(\omega)$ is integral. This is impossible: the possible values $2\Re(\omega)=2\cos(2k\pi/7)$ for $k=1,\dots,6$ are irrational (they are roots of the cubic $x^3+x^2-2x-1$). If you allow other fields then this is possible. As julien points out, if we work over $\mathbb{Z}/7\mathbb{Z}$ then $$M=\begin{pmatrix}1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ is a working example.
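Julien's example over $\mathbb{Z}/7\mathbb{Z}$ is easy to verify by machine (a quick check in Python with numpy): here $M^7$ has a $7$ in the corner, which vanishes mod $7$.

```python
import numpy as np

M = np.array([[1, 0, 1],
              [0, 1, 0],
              [0, 0, 1]])

P = np.linalg.matrix_power(M, 7) % 7            # compute M^7, reduce mod 7
print(P)                                        # the 3x3 identity
print(np.array_equal(P, np.eye(3, dtype=int)))  # True, while M != I
```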
H: Divide by a number without dividing. Can anyone come up with a way to divide any given $x$ by any given $y$ without actually dividing? For example, to add any given $x$ to any given $y$ without adding you would just do: $x-(-y)$ And to subtract any given $x$ from any given $y$ (that is, $y-x$) you could do: $y+xe^{i\pi}$ *edit: well, since $i$ is $\sqrt{-1}$ and that is technically subtracting, this one might not work perfectly, but for the sake of the riddle and for the sake of example, I'm using that equation :) How can you divide without dividing? Can anyone come up with equations that work for all $x$ and $y$ values? (For all intents and purposes we will leave out dividing by zero issues and what-not... don't worry about that...) AI: Look at the equation $\frac{1}{x}=a$. We use Newton's Method to approximate the solution. Let $f(x)=\frac{1}{x}-a$. The standard Newton iteration gives $$x_{n+1}=x_n -\frac{f(x_n)}{f'(x_n)}=x_n -\frac{\frac{1}{x_n}-a}{-\frac{1}{x_n^2}}.$$ This simplifies to $$x_{n+1}=x_n(2-ax_n).$$ Remark: Note that only subtraction and multiplication are used. If we start with $x_0$ close enough to $\frac{1}{a}$, the method converges rapidly. It was once used to implement reciprocal in software.
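Here is a minimal Python sketch of the iteration; only subtraction and multiplication appear in the loop, and $x/y$ is then $x\cdot\frac{1}{y}$. The initial guess is a crude assumption that works for moderate values; real implementations pick it from the floating-point exponent.

```python
def reciprocal(a, iterations=30):
    # Newton iteration x <- x * (2 - a*x): no division anywhere
    x = 0.01 if a > 1 else 1.0   # crude guess; needs 0 < x0 < 2/a to converge
    for _ in range(iterations):
        x = x * (2 - a * x)
    return x

def divide(x, y):
    return x * reciprocal(y)     # x / y without dividing

print(divide(10, 4))             # 2.5 (up to floating-point error)
```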
H: What model do you get from PA without addition and multiplication? I have the feeling that this question is trivial, but I cannot figure out the answer by myself nor from the stuff I have read. So the question is whether addition (and multiplication) can be shown as a theorem of PA (without the arithmetical axioms, but with successor and induction). If not, they are independent, and first-order PA${}+\neg A$ has a model (where $A$ is the addition and multiplication axioms). What does a model of such a system look like? AI: I assume you mean a standard first-order PA setup. So we have a language $L$ with constant symbol $0$, unary function symbol $S$, and binary function symbols $f$ and $g$ (their common names, when they are written in the middle, are $+$ and $\times$). Make an $L$-structure by using the non-negative integers as the underlying set, and interpreting $0$ and $S$ in the usual way. We want to interpret $f$ and $g$ so that the usual axioms about addition and multiplication are false. It is awfully easy to make things false. For example, interpret $f(x,y)$ as $x^2+2y^2$, and $g(x,y)$ as $2x^2+y^2$. Almost anything will work.
H: Oscillation and Hölder continuity I am studying a proof of a theorem, and I have the following situation in the proof: Consider $\Omega$ a bounded open set of $\mathbb R^n$ and $u: \Omega \to \mathbb R$ a function satisfying $$\operatorname{osc}_{B(x_0,R)} u \leq (1-\delta) \operatorname{osc}_{B(x_0,4R)} u,$$ for all $B(x_0,4R) \subset \Omega$, for some $0<\delta <1$ ($\delta$ is independent of the open ball). The book says: Iterating this inequality we have that $u$ is Hölder continuous. Can someone help me understand the "iteration" part of the proof? Thank you! AI: I prefer to write the inequality as $$\operatorname{osc}_{B(x_0,4^{-1}R)} u \leq (1-\delta) \operatorname{osc}_{B(x_0,R)} u \tag1$$ Iteration means applying (1) with $B(x_0,R)$ replaced by $B(x_0,4^{-1}R)$, then by $B(x_0,4^{-2}R)$, etc., and then chaining these inequalities together. Thus, $$\operatorname{osc}_{B(x_0,4^{-2}R)} u \leq (1-\delta) \operatorname{osc}_{B(x_0,4^{-1}R)} u \leq (1-\delta)^2 \operatorname{osc}_{B(x_0,R)} u \tag2$$ and in general, $$\operatorname{osc}_{B(x_0,4^{-k}R)} u \leq (1-\delta)^k \operatorname{osc}_{B(x_0,R)} u \tag3$$ Inequality (3) can be written in the form of the Hölder condition. Indeed, let $\alpha>0$ be the number such that $(1-\delta)=4^{-\alpha}$. Given $x$ near $x_0$, let $k$ be the integer such that $4^{-k-1}R\le |x-x_0|< 4^{-k}R$. By (3), $$|u(x)-u(x_0)|\le (1-\delta)^k \operatorname{osc}_{B(x_0,R)} u \le C |x-x_0|^\alpha \tag4$$ where $C=(4/R)^\alpha \operatorname{osc}_{B(x_0,R)} u$ is independent of $x$. Thus, $u$ is Hölder continuous.
H: A simple proof about $e^x$? Do you guys think this is correct? I am trying to prove that there is no single-term polynomial function (oxymoron, I know) $f(x)$ which is always (or at least as $x$ approaches infinity) greater than $e^x$ (I will try to expand this to any polynomial function later). Let $f(x)$ be such a function with the lowest possible degree. Then that means that the slope of $f$ will have to be greater than the slope of $e^x$ as $x$ approaches infinity. This means that $f'(x)>e^x$ as $x$ approaches infinity. However, we already assumed that $f(x)$ is of the lowest degree which is greater than $e^x$, but $f'(x)$ is one degree lower than $f$. Therefore, this is a contradiction. What do you guys think? Is this valid? Thanks! AI: I like this proof a lot. I think we can try to tighten it up a bit, though. The main thing I would like you to justify is that "the slope of $f$ will have to be greater than the slope of $e^x$ as $x$ approaches infinity." In other words, there exists some $N\in \mathbb{R}$ such that $f'(x) \geq e^x$ for all $x\geq N$. Can you prove this? Hint: Try supposing $f(x)\geq e^x$ for $x\geq M$, but $e^x \geq f'(x)$ for all $x\geq N$ and get a contradiction through integration.
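A quick numeric illustration (not a proof): $x^{10}$ beats $e^x$ for a while, but the exponential overtakes it for good near $x \approx 35.77$, the larger solution of $x = 10\ln x$. A small Python sketch:

```python
import math

n = 10
x = 2.0                          # x^10 > e^x here
while x**n >= math.exp(x):       # march right until e^x catches up
    x += 0.001
print(x)                         # ~ 35.77; beyond this, e^x wins forever
```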
H: Finding partial derivatives for equations expressed in terms of $z$ where $z=f(x,y)$ to find tangent plane I am having trouble finding partial derivatives. If $f(x,y)=2x^2+y^2$ then, $$f_x=4x$$ $$f_y=2y$$ That's simple enough. But when I see a $z$ in the equation, I get stumped. I know $z=f(x,y)$. I don't really see the process. For example, if $z=2x^2+y^2$ then do we differentiate both sides with respect to $x$ like this? $$f_x:\frac{dz}{dx}=4x$$ $$f_y:\frac{dz}{dy}=2y$$ Even worse, what am I supposed to do for something like this? Same thing? $$x+y^2+z^3=3$$ $$z^3=-x-y^2+3$$ $$f_x:3z^2\frac{dz}{dx}=-1$$ $$f_x:\frac{dz}{dx}=\frac{-1}{3z^2}$$ Likewise, $$f_y:\frac{dz}{dy}=\frac{-2y}{3z^2}$$ I'm pretty sure that's wrong but I don't know why. Can someone please help me understand? Thanks. Edit Some context for the last example (it's from a homework problem): Find the equation of the tangent plane to the surface with equation $x+y^2+z^3=3$ at the point (2,1,0). I know that the equation for the tangent plane is $$z=f(x_0,y_0)+[f_x(x_0,y_0)](x-x_0)+[f_y(x_0,y_0)](y-y_0)$$ Since the $f_x$ and $f_y$ found above contain $z$, do I plug in $z_0$? So I'm trying to find $f_x$ and $f_y$. Edit 2 I get it now AI: What you are looking for is called the chain rule. If you have: $$g(z(x,y))=0$$ Then: $$\frac{\partial g}{\partial x}=\frac{dg}{dz}\cdot\frac{\partial z}{\partial x}$$ So if $x + y^2 + z^3 = 3$, you take the derivative according to $x$: $$1+0+3z^2 \cdot \frac{\partial z}{\partial x}=0 \ \ \to \ \ \frac{\partial z}{\partial x}=\frac{-1}{3z^2}$$ What does this mean? This is a differential equation that may or may not be easy to solve for $z$, but if you do know what $z$ is, you could just plug n' play: $$\frac{\partial z}{\partial x}=\frac{-1}{3z^2}=\frac{-1}{3(3-x-y^2)^{2/3}}$$
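You can check this symbolically with sympy, using the implicit-differentiation formula $\frac{\partial z}{\partial x} = -F_x/F_z$ for a surface $F(x,y,z)=0$ (a minimal sketch):

```python
from sympy import symbols, diff

x, y, z = symbols('x y z')
F = x + y**2 + z**3 - 3          # the surface F(x, y, z) = 0

z_x = -diff(F, x) / diff(F, z)   # dz/dx = -F_x / F_z
z_y = -diff(F, y) / diff(F, z)   # dz/dy = -F_y / F_z
print(z_x)                       # -1/(3*z**2)
print(z_y)                       # -2*y/(3*z**2)
```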
H: What is the (parametric) intersection of a plane and a sphere? Can someone please show me how to prove that the intersection of the plane $$x+y+z=0$$ and the sphere $$x^2+y^2+z^2=1$$ can be expressed as $$x(t)=\frac{\cos t-\sqrt3 \cdot\sin t}{\sqrt6}$$ $$y(t)=\frac{\cos t+\sqrt3 \cdot\sin t}{\sqrt6}$$ $$z(t)=\frac{-2\cos t}{\sqrt6}$$ Ps: Also, why am I not getting the correct notation? I am using a macbook pro (safari) if that is a concern? AI: You can solve for $z=-x-y$ and plug in $x^2+y^2+(-x-y)^2=1$, which rearranges to $2x^2+2xy+2y^2=1$. Hence, projected to the $xy$-plane, you have a rotated ellipse. Let $u=(x+y/2)$. Then $u^2=x^2+xy+y^2/4$, so the equation becomes $2u^2+\frac{3}{2}y^2=1$. In standard form this is: $$\frac{u^2}{(\frac{1}{\sqrt{2}})^2}+\frac{y^2}{(\frac{\sqrt{2}}{\sqrt{3}})^2}=1$$ We can solve this parametrically as $u=\frac{1}{\sqrt{2}}\sin t$, $$y=\sqrt{\frac{2}{3}}\cos t$$ We can now find $$x=u-y/2=\frac{1}{\sqrt{2}}\sin t-\frac{1}{\sqrt{6}}\cos t$$ and $$z=-x-y=-\frac{1}{\sqrt{2}}\sin t+\left(\frac{1}{\sqrt{6}}-\sqrt{\frac{2}{3}}\right)\cos t=-\frac{1}{\sqrt{2}}\sin t-\frac{1}{\sqrt{6}}\cos t$$ Alas, you posted your revisions after I'd made my choices so my solution will not agree with yours. There are three choices at the first step (solve for $z,y,x$), and two choices at the second step (replace $x$ or $y$). One of those six choices might give the result you have. :-) Followup: A cross term ($xy$) is a rotation and can always be eliminated by a change of variables, pointing in the directions of the major and minor axes. There is a systematic way to do this; see here or here, or you can do it by the seat of your pants, like I did. I didn't directly rotate, instead I sheared, which made the computations a bit simpler. Edit: fix sign error, thanks @Mhenni. Edit 2: add general method
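A numeric check of this parametrisation (a short Python sketch): every point should satisfy both the plane and the sphere equations.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 100)
x = np.sin(t) / np.sqrt(2) - np.cos(t) / np.sqrt(6)
y = np.sqrt(2 / 3) * np.cos(t)
z = -np.sin(t) / np.sqrt(2) - np.cos(t) / np.sqrt(6)

print(np.allclose(x + y + z, 0))           # on the plane x + y + z = 0
print(np.allclose(x**2 + y**2 + z**2, 1))  # on the unit sphere
```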
H: How to show C_e is closed and not dense in C. Let $C_{e}([-1,1],\mathbb{R})$ denote the set of even functions in $C([-1,1],\mathbb{R})$ (a) Show $C_e$ is closed and not dense in $C$. (b) Show the even polynomials are dense in $C_e$, but not in $C$. I can't start on it... I can't find any clue. AI: For closedness, suppose even functions $f_{i}$ converge to $f$ in the sup norm. Then in particular $f_{i} \to f$ pointwise, so $f(-x)=\lim f_{i}(-x)=\lim f_{i}(x)=f(x)$: the limit is even, and $C_e$ is closed. For non-density, suppose even functions $f_{i}$ converge to the odd function $g(x)=x$ in the sup norm, so that $$\sup_{[-1,1]} |f_{i}(x)-x|<\epsilon$$ for large enough $i$. Since each $f_{i}(x)\,x$ is odd, $$\int_{-1}^{1} f_{i}(x)\,x\,dx=0$$ for every $i$; but uniform convergence forces $$\int_{-1}^{1} f_{i}(x)\,x\,dx\to\int_{-1}^{1} x^{2}\,dx=\frac{2}{3}\neq 0,$$ which is absurd. So $g(x)=x$ is not in the closure of $C_e$, and $C_e$ is not dense. For $(b)$: write an even $f$ as $f(x)=h(x^{2})$ with $h$ continuous on $[0,1]$ and apply Weierstrass to $h$; the resulting polynomials in $x^{2}$ are even and converge to $f$ uniformly, so even polynomials are dense in $C_e$. They cannot be dense in $C$, since they lie in $C_e$, which is not dense.
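The obstruction is easy to see numerically: no even function gets within sup-distance $1$ of $g(x)=x$, since at the pair $\pm 1$ it would have to be close to both $+1$ and $-1$ simultaneously. A small Python illustration:

```python
import numpy as np

x = np.linspace(-1, 1, 2001)
target = x                                  # the odd function we try to approximate

for f in (np.zeros_like(x), x**2, np.abs(x), np.cos(x)):
    print(np.max(np.abs(f - target)))       # sup-norm error, always >= 1
```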
H: Proof regarding unitary self-adjoint linear operators I'm stuck on how to do the following Linear Algebra proof: Let $T$ be a self-adjoint operator on a finite-dimensional inner product space $V$. Prove that for all $x \in V$, $||T(x)\pm ix||^2=||T(x)||^2+||x||^2.$ Deduce that $T-iI$ is invertible and that $[(T-iI)^{-1}]^{*}=(T+iI)^{-1}.$ Furthermore, show that $(T+iI)(T-iI)^{-1}$ is unitary. My attempt at a solution (to the first part): $||T(x)\pm ix||^2=\left< T(x)\pm ix, T(x)\pm ix\right>$ $=\left< T(x), T(x) \pm ix\right>\pm \left<ix, T(x)\pm ix \right>$ ... $=\left<T(x), T(x) \right>+ \left<x,x \right>$ $=||T(x)||^2+||x||^2$ The ... is the part I'm stuck on (I know, it's the bulk of the first part). I have yet to consider the next parts since I'm still stuck on this one. Any help would be appreciated! Thanks. AI: We have $$(Tx+ix,Tx+ix)=(Tx,Tx)+(ix, Tx)+(Tx, ix)+(ix,ix)=|Tx|^{2}+i(x,Tx)-i(x,Tx)+|x|^{2},$$ where I assume you define the inner product to be Hermitian (linear in the first slot) and use $(Tx,x)=(x,Tx)$, which holds since $T$ is self-adjoint. For $Tx-ix$ it is similar. For invertibility, note that the identity gives $\|(T-iI)x\|^{2}=\|Tx\|^{2}+\|x\|^{2}\ge\|x\|^{2}$, so $T-iI$ is injective, and an injective operator on a finite-dimensional space is invertible. Next, $(T+iI)^{*}=T-iI$: indeed, $(Tx+ix,y)=(Tx,y)+(x,-iy)=(x,Ty)+(x,-iy)=(x,(T-iI)y)$. Likewise $(T-iI)^{*}=T+iI$, and therefore $$[(T-iI)^{-1}]^{*}=[(T-iI)^{*}]^{-1}=(T+iI)^{-1}.$$ If we want $(T+iI)(T-iI)^{-1}$ to be unitary, then we want $$((T+iI)(T-iI)^{-1}x,(T+iI)(T-iI)^{-1}x)=(x,x)$$ for all $x$. Moving $(T+iI)$ to the first slot via its adjoint, this is the same as $$((T^{2}+I)(T-iI)^{-1}x,(T-iI)^{-1}x)=(x,x)$$ and since $(T^{2}+I)(T-iI)^{-1}=(T+iI)$, this is the same as $$((T+iI)x, (T-iI)^{-1}x)=(x,x)$$ and the result follows because we know $(T+iI)^{*}=T-iI$.