H: Primitive of a rational function Does there exist some simple criterion to know when the primitive of a rational function in $\mathbb{C}(z)$ is still a rational function? In fact my question is more about the stability of this property. Let $P$ and $Q$ be two coprime polynomials and let $A$ and $B$ be two coprime polynomials such that $$\frac{A}{B}= \left(\frac{P}{Q}\right)'= \frac{P'Q-PQ'}{Q^2}.$$ Then, considering a perturbation $A^\varepsilon$ of $A$ (in the sense that the roots of $A^\varepsilon$ converge to those of $A$, with the same multiplicities, as $\varepsilon$ goes to zero): does $\dfrac{A^\varepsilon}{B}$ admit a primitive which is a rational function? AI: One simple criterion is the following. If the denominator of $\,f\,$ has squarefree factorization $\,Q_1 Q_2^2\cdots Q_n^n\,$ then $\,f\,$ has a partial fraction expansion $\, P_1/Q_1+\cdots+ P_n/Q_n^n,\,$ which is the derivative of a rational function $\iff$ each $\,P_i/Q_i^i\,$ is $\iff Q_i\mid W(P_i,Q_i',(Q_i^2)',\ldots,(Q_i^{i-1})'),\, $ where $\,W\,$ denotes the Wronskian. Recall that the squarefree factorization of a polynomial over a field of characteristic $\,0\,$ may be quickly computed by gcds. Special cases of this criterion go back to G. H. Hardy's classical treatise on integration in finite terms, see The Integration of Functions of a Single Variable, 1916, 5 (iv), p. 19. For a proof of the general case, see the following paper, which is freely accessible. Jan Mařík. A note on the integration of rational functions. Mathematica Bohemica, Vol. 116 (1991), No. 4, 405-411.
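As a concrete illustration of the simplest case: a partial fraction term with a simple pole is never the derivative of a rational function, while a higher-order pole is. For example, $$\int \frac{dz}{z^2} = -\frac{1}{z}\quad\text{(rational)},\qquad\qquad \int \frac{dz}{z} = \log z\quad\text{(not rational)},$$ so, equivalently, $f$ has a rational primitive exactly when every residue of $f$ vanishes; the criterion above packages a test of this kind in terms of gcds and Wronskians.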
H: Is there a compact contractible manifold? Does there exist a compact connected manifold (without boundary) that has a trivial homotopy type? AI: No. A closed manifold $M$ of dimension $n\ge 1$ has $H^n(M;\mathbb Z/2)\cong\mathbb Z/2$, whereas a contractible space has trivial cohomology in every positive degree.
H: The equation $f(s)=a$ has a finite number of solutions Let $f$ be an analytic function. I am asking about this problem: What are sufficient conditions (and possibly necessary conditions) under which the equation $$f(s)=a$$ has a finite number of solutions with respect to $s$? Here $a≠0$. AI: It depends on the domain of your function. Since you have not specified it, I will assume that your function is defined over the complex plane $\mathbb C$ and that for $a\in\mathbb{C},$ $f(z)=a$ has a finite number of solutions. Let $P$ be a polynomial that has the same roots (counting multiplicities) as $f(z)-a$, and only those. Then, $h(z)=\frac{f(z)-a}{P(z)}$ is an entire function that does not vanish. Therefore, $h(z)=e^{g(z)}$ for some entire function $g.$ Thus, $f(z)=P(z)e^{g(z)}+a.$
H: Distinguishable telephone poles being painted Each of n (distinguishable) telephone poles is painted red, white, blue or yellow. An odd number are painted blue and an even number yellow. In how many ways can this be done? Can someone give me a hint on how to approach this problem? AI: Proposition$^{[1]}$ Fix $k\in\mathbb{P}$ and functions $f_1,f_2,\dots,f_k:\mathbb{N}\to K$. Define a new function $h:\mathbb{N}\to K$ by $$h(\#S)=\sum f_1(\#T_1)\ f_2(\#T_2)\dots f_k(\#T_k),$$ where $(T_1,\dots,T_k)$ ranges over all weak ordered partitions of $S$ into $k$ blocks, i.e., $T_1,\dots,T_k$ are subsets of $S$ satisfying: (i) $T_i\cap T_j=\emptyset\ \text{if}\ i\neq j$, and (ii) $T_1\cup \dots\cup T_k=S$. Then: $$E_h(x)=\prod^k_{i=1}e_{f_i}(x)$$ Solution Using this proposition, now we let $h(n)$ be the desired number. \begin{align} E_h(x)&=(\sum_{n\geq0}\frac{x^n}{n!})^2(\sum_{n\geq0}\frac{x^{2n+1}}{(2n+1)!})(\sum_{n\geq0}\frac{x^{2n}}{(2n)!}) \\ \\ &=e^{2x}(\frac{e^x-e^{-x}}{2})(\frac{e^x+e^{-x}}{2}) \\ \\ &=\frac{1}{4}(e^{4x}-1) \\ \\ &=\sum_{n\geq 1}4^{n-1}\frac{x^n}{n!} \end{align} Thus, $\boxed{h(n)=4^{n-1}}$. $[1]:\ \ \ \ \ $Stanley, Richard. Enumerative Combinatorics: Volume 2. 1st ed. Cambridge, United Kingdom: Cambridge University Press, 2011. 3. Print.
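A quick sanity check of the result: for $n=1$ the single pole must be blue (one blue, zero yellow), so $h(1)=1=4^0$; for $n=2$ exactly one pole is blue and the other must be red or white (a yellow pole would make the number of yellows odd, a blue one would make the number of blues even), giving $2\cdot 2=4=4^1$.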
H: Is this equality in a double category true? Caveat: This is an utterly trivial question from a person who always learned to manipulate diagrams in a double category "from the ground"; I'll be glad even if you simply point me to any source which gives precise rules of transformation for such diagrams (and this explains the "reference-request" tag). My problem is the following: I wonder if the following equality holds (where $\alpha\colon G\to G'$, $\beta\colon F\to F'$ are natural transformations between functors, but I think you can "interpret" them as 2-cells elsewhere). Can you help me? AI: Let $F,F': \mathcal C\rightarrow \mathcal D$ and $G,G':\mathcal D\rightarrow \mathcal D'$ be functors between the categories $\mathcal C, \dots, \mathcal D'$. With $\alpha: G\rightarrow G'$ and $\beta:F\rightarrow F'$ we denote natural transformations between them. Then, by definition, $\beta_{Y}\circ F(f)=F'(f)\circ \beta_{X}$ and $\alpha_{W}\circ G(g)=G'(g)\circ \alpha_{Z}$ for any $X,Y\in\mathcal C$, $Z,W\in\mathcal D$ and morphisms $f:X\rightarrow Y$ in $ \mathcal C$, $g:Z\rightarrow W$ in $\mathcal D$. Each subdiagram involving one natural transformation on both sides of your identity commutes by definition. You can use this fact to deduce the commutativity of all the other subdiagrams; both diagrams in your identity express commutativity as a way of identifying "composition along the upper right corner" with "composition along the lower left corner".
H: A subgroup containing the kernel of a group homomorphism into an abelian group is a normal subgroup. Let $ f \colon G \rightarrow H $ be a group homomorphism, where $ H $ is an abelian group. If $\ker f \subset N $ for some subgroup $ N \leq G $, then $ N $ is a normal subgroup of $ G $. I don't know how to start to prove the statement. Any hint? AI: By the Lattice Theorem, subgroups of $G$ containing $\ker f$ are in one-to-one correspondence with subgroups of $G/\ker f$, and this correspondence carries normal subgroups to normal subgroups. By the first isomorphism theorem, $$G/\ker f \cong f(G),$$ which is abelian because $H$ is. Every subgroup of an abelian group is normal, so all subgroups of $G/\ker f$ are normal, and hence the corresponding subgroups of $G$ containing $\ker f$ are normal in $G$.
H: Notes about evaluating double and triple integrals I'm looking for notes and exercises about multiple integrals for calculating volumes, but the information I find on the internet is very poor. Can someone recommend me a book, pdf, videos, website... whatever, to learn about this? The type of exercise I have to learn to do is similar to the following: Calculate the volume enclosed by these surfaces: $x^2+y^2=4, z=0, x+y+z=4$. AI: You might want to consult Paul's Online Notes, a tutorial-style site that addresses topics ranging from trig to single- and multivariable calculus, and differential equations. In particular, see Calculus III and click on Multiple Integrals in the menu on the left. You might also want to check out the Khan Academy for videos/tutorials on topics in multivariable calculus. Just scan the menu to the left of the linked webpage for particular topics to brush up on.
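For what it's worth, the sample exercise above reduces to a single double integral over the disc $x^2+y^2\le 4$, since the plane $x+y+z=4$ stays above $z=0$ there ($|x+y|\le 2\sqrt2<4$): $$V=\iint_{x^2+y^2\le 4}(4-x-y)\,dA = 4\cdot(\text{area of the disc}) = 4\cdot 4\pi = 16\pi,$$ the integrals of $x$ and $y$ vanishing by the symmetry of the disc.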
H: Given $y=\arccos(x)$ find $\arcsin(x)$ in terms of y Given that $y = \arccos x$, $ - 1 \le x \le 1$ and $0 \le y \le \pi $, express $\arcsin x$ in terms of y. The best I know how to do this is: $$\eqalign{ & \cos y = x \cr & {\cos ^2}y + {\sin ^2}y = 1 \cr & {\sin ^2}y = 1 - {\cos ^2}y \cr & \sin y = \sqrt {1 - {x^2}} \cr & \arcsin \left( {\sqrt {1 - {x^2}} } \right) = y \cr} $$ However, this isn't what is asked for. How do I go about getting things in terms of y? Thanks AI: HINT: $$x=\cos y=\sin\left(\frac{\pi }{2}-y\right)$$
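Completing the hint: since $0\le y\le\pi$, the angle $\frac{\pi}{2}-y$ lies in $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, which is exactly the range of $\arcsin$, so $$\arcsin x=\frac{\pi}{2}-y.$$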
H: Prove that a function f is continuous (2) $$f:\mathbb{R} \rightarrow \mathbb{R}:$$ $$f(x) = \begin{cases} \cos\left(\frac{1}{x}\right) & \text{if $x\neq0$} \\ 0 & \text{if $x=0$} \end{cases}$$ Is the function $f$ continuous at $x=0$? 1) $\displaystyle\lim_{x \to 0^-} \cos\left(\frac{1}{x}\right)$ with the sequence $x_n=-\frac{1}{n}$: $\displaystyle\lim_{n \to \infty}-\frac{1}{n}= 0^-$ $\Rightarrow \displaystyle\lim_{n \to \infty} \cos\left(\frac{1}{-\frac{1}{n}}\right) = \displaystyle\lim_{n \to \infty} \cos(-n)$, which doesn't exist. 2) $\displaystyle\lim_{x \to 0^+} \cos\left(\frac{1}{x}\right)$ with the sequence $x_n=\frac{1}{n}$: $\displaystyle\lim_{n \to \infty}\frac{1}{n}= 0^+$ $\Rightarrow \displaystyle\lim_{n \to \infty} \cos\left(\frac{1}{\frac{1}{n}}\right) = \displaystyle\lim_{n \to \infty} \cos(n)$, which doesn't exist. $f$ is not continuous at $x=0$ because, by (1) & (2), both one-sided limits at $0$ fail to exist. Is this sufficient to answer the question? AI: Your idea to use a sequence $x_n$ is correct, but you should go about it in a different way. If you find two sequences $x_n$ and $y_n$, both converging to $0$, such that $$\lim_{n\to\infty}f(x_n)\neq\lim_{n\to\infty}f(y_n),$$ then you will have proved (ab absurdo) that $f$ is not continuous at $0$. Try for example $x_n=\frac1{2\pi n}$ and $y_n=\frac1{2\pi n+\pi}$.
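Carrying the suggested sequences through: $$f(x_n)=\cos(2\pi n)=1\qquad\text{and}\qquad f(y_n)=\cos(2\pi n+\pi)=-1\quad\text{for every }n,$$ so the two sequences converge to $0$ but give different limits of $f$, and $f$ cannot be continuous at $0$ (no matter what value is assigned at $0$).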
H: Need help with combinatorics question (probably cyclical permutation) A host invites 6 of his friends to a meeting. In how many different arrangements can they, along with the host's wife, sit at a round table if the host and his wife always sit together? Is this a cyclical permutation problem? Please explain. AI: It is probably intended that any rotation of the people is considered to produce the "same" arrangement, meaning that yes, this is intended to be a cyclic permutation problem. That is not the only possible interpretation, since the host might like to be nearest to the kitchen. And the views from the chairs are different. To solve, since we can rotate the people without changing the arrangement, let one of the chairs be a throne, and let the host's spouse sit there. Then the host can be placed in $2$ ways. And for each of these ways, the $6$ guests $\dots$.
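Finishing the count under that interpretation: fixing the spouse's chair absorbs the rotations, the host can sit on her left or right ($2$ ways), and the $6$ guests fill the remaining $6$ chairs in $6!$ ways, so there are $$2\cdot 6!=1440$$ arrangements.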
H: Are there any integer solutions to $\gcd(\sigma(n), \sigma(n^2)) = 1$ other than for prime $n$? A good day to everyone! Are there any integer solutions to $\gcd(\sigma(n), \sigma(n^2)) = 1$ other than for prime $n$ (where $\sigma = \sigma_1$ is the sum-of-divisors function)? Note that, if $n = p$ for prime $p$ then $$\sigma(p) = p + 1$$ $$\sigma(p^2) = p^2 + p + 1 = p(p + 1) + 1.$$ These two equations can be put together into one as $$\sigma(p^2) = p\sigma(p) + 1,$$ from which it follows that $$\sigma(p^2) + (-p)\cdot\sigma(p) = 1.$$ The last equation implies that $\gcd(\sigma(p), \sigma(p^2)) = 1$. I now attempt to show that prime powers also satisfy the number-theoretic equation in this question. If $n = q^k$ for $q$ prime, then $$\sigma(q^{2k}) = \frac{q^{2k + 1} - 1}{q - 1} = \frac{q^{2k + 1} - q^{k + 1}}{q - 1} + \frac{q^{k + 1} - 1}{q - 1} = \frac{q^{k + 1}(q^k - 1)}{q - 1} + \sigma(q^k).$$ Re-writing the last equation, we get $$(q - 1)\left(\sigma(q^{2k}) - \sigma(q^k)\right) = q^{k + 1}(q^k - 1).$$ Since $\gcd(q - 1, q) = 1$, then we have $$q^{k + 1} \mid \left(\sigma(q^{2k}) - \sigma(q^k)\right).$$ But we also have $$\sigma(q^{2k}) - \sigma(q^k) = q^{k + 1} + q^{k + 2} + \ldots + q^{2k} \equiv 0 \pmod {q^{k + 1}}.$$ Alas, this is where I get stuck. (I know of no method that can help me express $1$ as a linear combination of $\sigma(q^{2k})$ and $\sigma(q^k)$, from everything that I've written so far.) Anybody else here have any ideas? Thank you! AI: Yes, $\sigma(p^k)$ and $\sigma(p^{2k})$ are relatively prime if $p$ is prime. It is I think best not to sum the geometric series. We have $$\sigma(p^k)=1+p+p^2+\cdots+p^k,$$ and $$\sigma(p^{2k})=1+p+p^2+\cdots +p^k+p^{k+1}+\cdots +p^{2k}.$$ The second sum is $$\left(1+p+\cdots+p^k\right)+\left(p^k+p^{k+1}+\cdots+p^{2k}\right)-p^k$$ (we added and subtracted $p^k$). Thus $$\sigma(p^{2k})=(1+p^k)\sigma(p^k)-p^k.$$ So the gcd of $\sigma(p^k)$ and $\sigma(p^{2k})$ is the same as the gcd of $p^k$ and $1+p+\cdots+p^k$. This is is obviously $1$, since the only prime that divides $p^{k}$ is $p$, and $p$ does not divide $\sigma(p^k)$.
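A small numerical check of the key identity $\sigma(p^{2k})=(1+p^k)\sigma(p^k)-p^k$: for $p=2$, $k=2$ we have $\sigma(2^2)=7$ and $\sigma(2^4)=31$, and indeed $(1+4)\cdot 7-4=31$, with $\gcd(7,31)=1$ as the argument predicts.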
H: Is there a simple method to do LU decomposition by hand? Today my professor in numerical analysis pointed out that in the exam we will probably have to do LU decomposition by hand. I understand how the decomposition works theoretically, but when it comes to actually getting my hands dirty, I'm never sure if I'm writing the row operation at the right place in the L matrix. Do you know a mnemonic which allows one to efficiently compute the LU decomposition by hand? AI: Note that I didn't write this answer; taken from: LU Decomposition Steps This is also useful: Upper and Lower Triangular Matrices Let's go step by step. We want an equation of the following form: (An example is given below) $$\begin{pmatrix}1&2&3&4\\5&6&7&8\\1&-1&2&3\\2&1&1&2\end{pmatrix}=\begin{pmatrix}\star&0&0&0\\\star&\star&0&0\\\star&\star&\star&0\\\star&\star&\star&\star\end{pmatrix}\begin{pmatrix}\star&\star&\star&\star\\0&\star&\star&\star\\0&0&\star&\star\\0&0&0&\star\end{pmatrix}$$ From the first column and first row of our known matrix, it's not too hard to see that we can start with this: $$\begin{pmatrix}1&2&3&4\\5&6&7&8\\1&-1&2&3\\2&1&1&2\end{pmatrix}=\begin{pmatrix}1&0&0&0\\5&\star&0&0\\1&\star&\star&0\\2&\star&\star&\star\end{pmatrix}\begin{pmatrix}1&2&3&4\\0&\star&\star&\star\\0&0&\star&\star\\0&0&0&\star\end{pmatrix}$$ Next, we can choose the diagonal elements of our upper triangular matrix to be $1$, and fill in the lower triangular matrix column by column: $$\begin{pmatrix}1&2&3&4\\5&6&7&8\\1&-1&2&3\\2&1&1&2\end{pmatrix}=\begin{pmatrix}1&0&0&0\\5&-4&0&0\\1&-3&\star&0\\2&-3&\star&\star\end{pmatrix}\begin{pmatrix}1&2&3&4\\0&1&\star&\star\\0&0&1&\star\\0&0&0&1\end{pmatrix}$$ $$\begin{pmatrix}1&2&3&4\\5&6&7&8\\1&-1&2&3\\2&1&1&2\end{pmatrix}=\begin{pmatrix}1&0&0&0\\5&-4&0&0\\1&-3&5&0\\2&-3&1&\star\end{pmatrix}\begin{pmatrix}1&2&3&4\\0&1&2&\star\\0&0&1&\star\\0&0&0&1\end{pmatrix}$$ $$\begin{pmatrix}1&2&3&4\\5&6&7&8\\1&-1&2&3\\2&1&1&2\end{pmatrix}=\begin{pmatrix}1&0&0&0\\5&-4&0&0\\1&-3&5&0\\2&-3&1&7/5\end{pmatrix}\begin{pmatrix}1&2&3&4\\0&1&2&3\\0&0&1&8/5\\0&0&0&1\end{pmatrix}$$
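If you want to double-check a hand computation like the one above, multiplying the factors back together takes only a couple of lines in any numerical tool. Here is a minimal R sketch (the variable names are just for illustration) that rebuilds the product from the factors found above:
L <- matrix(c(1,5,1,2,  0,-4,-3,-3,  0,0,5,1,  0,0,0,7/5), nrow = 4)  # lower factor, entered column by column
U <- matrix(c(1,0,0,0,  2,1,0,0,  3,2,1,0,  4,3,8/5,1), nrow = 4)     # unit-diagonal upper factor
L %*% U  # reproduces the original matrix (rows 1 2 3 4, 5 6 7 8, 1 -1 2 3, 2 1 1 2)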
H: Upper bound for expression involving logarithms Let $N = 2^p$ for some $p \in \mathbb{N}$. Find the smallest upper bound for $\frac{N}{2}\log\left(\frac{N}{2}\right) + \frac{N}{4}\log\left(\frac{N}{4}\right) + \ldots + 1$ I guess I could first rewrite this to $\frac{2^p}{2}\log\left(\frac{2^p}{2}\right) + \frac{2^p}{4}\log\left(\frac{2^p}{4}\right) +\ldots+ 1$ and then to $2^{p-1}\log(2^{p-1}) + 2^{p-2}\log(2^{p-2}) +\ldots+1$ but I still don't know how I should proceed. All help appreciated. Edit: Now I thought of also writing it in the form $(p-1)2^{p-1} + (p-2)2^{p-2} + ... + 1 = \sum_{i=1}^{\log_2 N} (p-i)2^{p-i}$ but I still feel like ...... ;-( (yes, the log is base 2, sorry, forgot to mention that) AI: You want $\begin{align} \sum_{k=1}^p \frac{n}{2^k}\ln \frac{n}{2^k} &=\sum_{k=1}^p \frac{n}{2^k}(\ln n- \ln {2^k})\\ &=\sum_{k=1}^p \frac{n}{2^k}(\ln n- k\ln {2})\\ &=n \ln n\sum_{k=1}^p \frac{1}{2^k} -n\ln 2\sum_{k=1}^p \frac{k}{2^k} \\ \end{align} $ For $p$ not small, $\sum_{k=1}^p \frac{1}{2^k} \approx 1$. Since $\sum_{k=1}^{\infty} k x^k = \frac{x}{(1-x)^2}$, $\sum_{k=1}^p \frac{k}{2^k} \approx \frac{1/2}{(1-1/2)^2} =2 $, so your sum $\approx n \ln n- 2 n \ln 2$. You can get it more exactly by getting the exact form for $\sum_{k=1}^p k x^k$, but the difference is of order $\frac{p}{2^p}=\frac{\ln n}{n}$, so I will leave that to you.
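Since the logarithm is base $2$ and $N=2^p$, the sum can in fact be evaluated exactly rather than only asymptotically, using the standard identity $\sum_{j=1}^{m} j\,2^{j}=(m-1)2^{m+1}+2$: $$\sum_{k=1}^{p}\frac{N}{2^k}\log_2\frac{N}{2^k}=\sum_{j=0}^{p-1}j\,2^{j}=(p-2)2^{p}+2=N\log_2 N-2N+2,$$ which matches the $n\ln n-2n\ln 2$ approximation in the answer once the natural logarithms are converted to base $2$.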
H: Exponential practice exam question Okay bear with me, this is one of those cumulative questions The amount of a certain type of drug in the bloodstream t hours after it has been taken is given by the formula: $$x = D{e^{ - {1 \over 8}t}}$$ where x is the amount of the drug in the bloodstream in milligrams and D is the dose given in milligrams. A dose of 10 mg of the drug is given. (a) Find the amount of the drug in the bloodstream 5 hours after the dose is given. $\eqalign{ & x = D{e^{ - {1 \over 8}t}} \cr & x = 10{e^{ - {1 \over 8}\left( 5 \right)}} \cr & x = 5.353 \cr} $ (b) A second dose of 10 mg is given after 5 hours. Show that the amount of the drug in the bloodstream 1 hour after the second dose is 13.549 mg to 3 decimal places. So: $$\eqalign{ & 10{e^{ - {1 \over 8}\left( 6 \right)}} + 10{e^{ - {1 \over 8}\left( 1 \right)}} \cr & = 10{e^{ - {6 \over 8}}} + 10{e^{ - {1 \over 8}}} \cr & = 13.549 \cr} $$ Okay this is the part I'm stuck on: (c) No more doses of the drug are given. At time T hours after the second dose is given, the amount of the drug in the bloodstream is 3 mg. Find the value of T. How do I go about solving this, I know x=3, but how about D? There is no dose in this case, and t=t, so what do I do? Form some sort of simultaneous equation? AI: Since the two doses were spaced 5 hours apart, the amount of the drug still present in the patient's bloodstream is $ \ x = 10 \cdot [ \ e^{-t/8} + e^{-(t+5)/8} \ ] $ mg. , with $ \ t \ $ being the number of hours from the second dose; we must introduce a "time delay" into one of the exponents. This actually doesn't make the equation difficult to solve, since we can use properties of exponents to write $$ \ x = 10 \cdot [ \ e^{-T/8} + e^{-(T+5)/8} \ ] = 10 \cdot [ \ e^{-T/8} \ + \ e^{-T/8} \cdot e^{-5/8} \ ] $$ $$ = 10 \cdot [ \ 1 \ + \ e^{-5/8} \ ] \ \cdot \ e^{-T/8} \ = \ 3 . $$ The quantity in brackets is just a constant and the variable $ \ T \ $ that we wish to solve for only appears once. Note -- we can make a quick estimate of the value of $ \ T \ $ to see if the calculated result is in the right neighborhood. The amount of drug present due to the second dose will dominate the amount remaining from the first, so we can consider the behavior of that dose. The "biological half-life" of the drug is given by $ \ \tau = \frac{\ln 2}{1/8} \approx 5.5 $ hours. The amount of drug present at $ \ t = 0 \ $ is about 13 mg. and 3 mg. is about one-quarter of this, so somewhat over two "half-lives" will be required to have the amount of drug fall to 3 mg. Thus, $ \ T \ $ should be somewhat larger than 11 hours (in fact, it comes out close to 13 hours).
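Finishing part (c) from the displayed equation: $$e^{-T/8}=\frac{3}{10\left(1+e^{-5/8}\right)}\qquad\Longrightarrow\qquad T=8\ln\!\left(\frac{10\left(1+e^{-5/8}\right)}{3}\right)\approx 13.1\text{ hours},$$ which matches the closing remark that $T$ comes out close to 13 hours.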
H: Wikipedia's definition of isolated point. Wikipedia defines an isolated point of a subset $S \subseteq X$ to be a point $x \in S$ such that there exists a neighborhood $U$ of $x$ not containing any other points of $S$. Furthermore, it claims that this is equivalent to saying $\{x\}$ is open in $X$. Question: How is the last sentence true? This seems to be false since for example $1$ is an isolated point of $\{1\} \cup (3, 4)$, but $\{1\} \subseteq \mathbb{R}$ is not open. AI: It should (and now does) say that $x$ is isolated in $S$ iff $\{x\}$ is open in $S$, not in $X$.
H: Approximation of a harmonic function on the unit disc by harmonic polynomials. Let $u$ be a real valued harmonic function on the open unit disc $D_1(0) \subseteq \mathbb{C}$. Show that there exists a sequence of real valued harmonic polynomials that converges uniformly on compact subsets of $D_1(0)$ to $u$. Now, consider the harmonic function $u = \ln|z|$ on the open set $\Omega = \{ z : 1<|z| < 2\}$. Is it possible to find a sequence of harmonic polynomials which converge uniformly on compact subsets of $\Omega$ to $u$? I am preparing for my qualifying exam in complex analysis and this question has come up. I will admit that I have not made much progress, but I do have a few thoughts, which I give below. My hope is to receive a few nice hints, and then I will post a solution of my own. From my complex analysis course, I know that the convexity of $D_1(0)$ allows us to define a harmonic conjugate $v$ for $u$ on $D_1(0)$. Then $f \equiv u + iv$ is analytic on $D_1(0)$. Then maybe I can use a theorem from complex analysis to produce this sequence of harmonic polynomials? For the second question, I am assuming that the answer is no, likely due to the fact that we cannot define a harmonic conjugate for $ u = \ln|z|$ on the set $\Omega$ under investigation. Hints are greatly appreciated. AI: For the first question, following your thoughts, let $f$ be holomorphic (i.e. analytic) on $D_1(0)$, such that ${\rm Re}f=u$, i.e. $u$ is the real part of $f$. Then the Taylor expansion of $f$ $$f(z)=\sum_{k=0}^\infty a_kz^k$$ converges to $f$ uniformly on compact subsets of $D_1(0)$ (and hence so do the corresponding real and imaginary parts). For every $n\ge 0$, let $$u_n(z)={\rm Re}\big(\sum_{k=0}^n a_kz^k \big).$$ Since $u_n$ is the real part of a (holomorphic) polynomial of $z=x+iy$, $u_n$ is a real valued harmonic polynomial of $x$ and $y$. Moreover, $u_n$ converges to the real part of $f$, which is just $u$, uniformly on compact subsets of $D_1(0)$. For the second question, assume that there is a sequence of real valued harmonic polynomials $(u_n)$, such that it converges to $\ln|z|$ uniformly on compact subsets of $\Omega$. Note that $u_n$ is in fact harmonic on $D_2(0)$, so by the maximum/minimum principle, it is easy to see that $u_n$ converges uniformly on compact subsets of $D_2(0)$, and let us denote the limit by $u$. Then using the Poisson integral formula or otherwise, it is easy to see that $u$ is harmonic on $D_2(0)$. However, $u$ is then harmonic on all of $D_2(0)$ and agrees with $\ln|z|$ on $\Omega$; since harmonic functions are real analytic, $u=\ln|z|$ on all of $D_2(0)\setminus\{0\}$, which is impossible because $\ln|z|\to-\infty$ as $z\to0$ while $u$ is continuous at $0$. So such a sequence $(u_n)$ does not exist.
H: Proving $\alpha+(\beta+\gamma) = (\alpha+\beta)+\gamma$ for ordinals I am following Jech's construction, by definition $\alpha+0 = \alpha, \alpha+(\beta+1)=(\alpha+\beta)+1$, and for limit $\beta$ we define $\alpha+\beta = \cup\{\alpha+\xi: \xi<\beta\}$. Jech's proof of associativity just says "By induction on $\gamma$." so I am trying to go through it. The statement I am inducting on is $\forall \alpha,\beta \in Ord~ (\alpha+(\beta+\gamma) = (\alpha+\beta)+\gamma)$. I have done $\gamma = 0$ and $\gamma=c+1$ and have left to show it holds for $\gamma$ a limit. In this case I would like to say $$(\alpha+\beta)+\gamma = \cup\{(\alpha+\beta)+\xi:\xi < \gamma\}$$ $$ = \cup\{\alpha+(\beta+\xi):\xi < \gamma\} $$ $$ = \alpha + \cup\{\beta+\xi:\xi<\gamma\} $$ $$ = \alpha + (\beta+\gamma). $$ However I have been stuck trying to show $ \cup\{\alpha+(\beta+\xi):\xi < \gamma\} = \alpha + \cup\{\beta+\xi:\xi<\gamma\}. $ How can I proceed? AI: For establishing your last statement, it suffices to prove two things (neither is hard): $\beta+\gamma$ is a limit ordinal; $\cup\{\alpha+(\beta+\xi):\xi<\gamma\} = \cup\{\alpha+\xi:\xi < \beta+\gamma\}$. For, the right-hand side of this equality is precisely the definition of $\alpha+(\beta+\gamma)$ when $\beta+\gamma$ is a limit ordinal. Alternatively, you can take the general route and prove that the composition of normal sequences is normal, and directly conclude that: $$\sup_{\xi <\gamma}(\alpha +(\beta+\xi)) = \alpha+(\beta+\gamma)$$ because the sequences $\delta_\zeta = \alpha+\zeta$ and $\epsilon_\xi = \beta+\xi$ are both normal, and thus so is the sequence $\eta_\xi = \delta_{\epsilon_\xi} = \alpha + (\beta + \xi)$. This approach will save you time when dealing with the multiplication and exponentiation cases.
H: Exponential function and uniform convergence of polynomials. How can I prove that no sequence of polynomials converges uniformly to the exponential function? Thanks in advance for any help. AI: If a sequence $(f_n)$ of polynomials converges uniformly to $x\mapsto e^x$ on all of $\mathbb R$ (or, as the following argument merely assumes, just on $(-\infty,2]$), then for $\epsilon=1$ there exists $N$ such that $|f_n(x)-e^x|<1$ for all $n>N$ and all $x\in\mathbb R$. Since $|e^x|<1$ for $x<0$, we conclude that $|f_n(x)|<2$ for $x<0$. It is easy to see that this is the case only for constant polynomials (all others go to $\pm\infty$ as $x\to-\infty$) and so $f_n(x)=c$ with $|c|<2$. But then $|f_n(2)-e^2|>e^2-2>1$, contradiction.
H: Finding the ratio of two persons' time spent driving to a meeting Mark and Pat drive separately to a meeting. Mark's average driving speed is $1/3$ greater than Pat's, and Mark drives twice as many miles as Pat. What is the ratio of the number of hours Mark spends driving to the meeting to the number of hours Pat spends driving to the meeting? AI: $m_s$: Mark's driving speed, $\quad p_s$: Pat's driving speed $$m_s = p_s + \frac 13 p_s = \frac 43 p_s\tag{1}$$ $m_d$: Distance Mark drives, $\quad p_d$: distance Pat drives $$m_d = 2 p_d\tag{2}$$ $m_t$: Time Mark spends driving, $\quad p_t$: Time Pat spends driving $$\large\frac{m_{t}}{p_{t}} \ = \ \frac{m_{d}/m_s}{p_d/p_s}\; = \; \frac{m_{d}}{p_d} \cdot \frac{p_s}{m_s} $$ $$= \frac{2p_d}{p_d}\cdot \frac{p_s}{\frac 43 p_s}\tag{substitution (1), (2)}$$ $$ = 2\cdot \frac{1}{4/3} = \frac 32\tag{cancel units and evaluate}$$
H: Confusion regarding direct sum decomposition of representations from Serre's book Sorry if the question is dumb. I am trying to learn representation theory of finite groups from J.P.Serre's book by myself. In section 2.6 on canonical decomposition, he says that let V be a representation of a finite group G, $W_1,...,W_h$ be the distinct irreducible representations of G, and let V = $U_1 \oplus ... \oplus U_m$ be some decomposition of V into irreducible subrepresentations. Then we can write V = $V_1\oplus ...\oplus V_h$, where $V_i$ is the direct sum of irreducible subrepresentations among $U_i$'s which are $isomorphic$ to $W_i$. This much is clear. But then he says that : Next, if needed, one chooses a decomposition of $V_i$ into a direct sum of irreducible representations each isomorphic to $W_i$: $$V_i = W_i \oplus ...\oplus W_i$$ The last decomposition can be done in infinitely many ways; it is just as arbitrary as the choice of a basis in a vector space. I am confused with this part. I understand $external$ direct sums of same spaces, but how is the $internal$ direct sum of same spaces $W_i$ defined in general? I think I might be facing some notational difficulty. Thanks in advance. AI: In the internal decomposition, the spaces aren't all equal to $W_i$ in a literal sense, just isomorphic to it as representations of $G$ - what you really have is $V_i=V_{i,1}\oplus\cdots\oplus V_{i,n}$ with $V_{i,j}\cong W_i$ for all $j$. My preference would be to write $V_i\cong W_i\oplus\cdots\oplus W_i$ instead of $V_i=W_i\oplus\cdots\oplus W_i$, which I thinks makes it a bit clearer (and emphasises the fact that the decomposition is non-canonical).
H: Formal grammar for the language $L = \{w\in\{a,b\}^*,\,w=xx,\,x=a^nb^na^mb^m,\,n\ge0,\,m\ge0\}$ What is the grammar of this language? $$L = \{w\in\{a,b\}^*,\,w=xx,\,x=a^nb^na^mb^m,\,n\ge0,\,m\ge0\}$$ For example: $abab$, $abaabbabaabb$ AI: I think you need to do it in two steps: First a grammar to generate your $x$, or actually strings of the form $Xu^nv^nu^mv^mY$. You can easily adapt the grammar we discussed in the above comments to make these strings. Now you need to create a copying machine to duplicate the string. I used $u$ and $v$ instead of $a$ and $b$ here, intending to convert them to $a$ and $b$ as the copying progresses. Here is a set of productions to achieve this: $$\begin{aligned} X&\rightarrow XR&Ra&\rightarrow aR&Rb&\rightarrow bR&RY&\rightarrow\varepsilon\\ Ru&\rightarrow Ua&Rv&\rightarrow Vb&XU&\rightarrow XaR&XV&\rightarrow XbR\\ aU&\rightarrow Ua&bU&\rightarrow Ub&aV&\rightarrow Va&bV&\rightarrow Vb \end{aligned}$$ The set is not complete yet! But look: $X$ can spawn an $R$ which moves right past all $a$s and $b$s until it hits a $u$ or $v$, where it converts that to $a$ or $b$ and morphs into $U$ or $V$, which travels left until it hits $X$, where it becomes $a$ or $b$ – that is the copy – and a new $R$ to travel right and continue the process. You may need a few more productions to finish it off. Perhaps you only need to add $X\rightarrow\varepsilon$ to finish the job. This is a lot like programming. Think of the nonterminals as little special purpose machines traveling back and forth doing their thing. Finally, a disclaimer: I have not debugged the above! There may well be bugs, like in any program. I will leave the debugging and verification to you. Good luck.
H: Elementary lower bounds for $n^{1/n}$ I can show that $n^{1/n} > 1+1/n$ for integer $n \ge 3$ by completely elementary means - no logs, exponentials, or calculus. Are there better bounds that can be proved in an elementary way? Here is my proof: The bound is equivalent to $n > \frac{(n+1)^n}{n^n}$ or $\frac{(n+1)^n}{n^{n+1}} < 1$. For $n=3$, this is $\frac{4^3}{3^4} =\frac{64}{81} < 1$. The ratio of consecutive terms is $\begin{align} \frac{\frac{(n+2)^{n+1}}{(n+1)^{n+2}}}{\frac{(n+1)^n}{n^{n+1}}} &=\frac{(n(n+2))^{n+1}}{(n+1)^{2n+2}}\\ &=\frac{(n^2+2n)^{n+1}}{(n^2+2n+1)^{n+1}}\\ &< 1 \end{align} $ so the terms are decreasing and thus less than $1$. Of course, since $e^x > 1+x$, $n^{1/n} = e^{\ln n/n} > 1+\ln n/n$, but this is non-elementary. AI: $$\left(1+\frac1n\right)^n=\sum_{k=0}^n\frac{n!}{k!(n-k)!\cdot n^k}$$ After cancelling $n!$ against $(n-k)!$ we are left with $k$ factors $\le n$ in the numerator. Since we have $k$ factors $=n$ in the denominator, we have $\frac{n!}{k!(n-k)!\cdot n^k}\le \frac1{k!}$. This brings us almost to the series $\sum \frac 1{k!}$ for $e$, but remains elementary enough, I guess. The summands with $k=0$ and $k=1$ are each at most $1$, and for $k\ge1$ we have $\frac1{k!}\le\frac1{2^{k-1}}$, so $$\left(1+\frac1n\right)^n\le 1+1+\frac12+\frac14+\cdots<3.$$ For $n\ge4$ this gives the (very weak) bound $\left(1+\frac1n\right)^n<3\le 1+\frac n2$, and for $n=3$ one checks directly that $\left(\frac43\right)^3=\frac{64}{27}<\frac52$. Hence for $n> 2$ $$\sqrt[n]n>\sqrt[n]{1+\frac n2} \ge 1+\frac1n.$$ The same bound $\left(1+\frac1n\right)^n<3$ makes $$ \sqrt[n]n\ge\sqrt[n]9> \left(1+\frac1n\right)^2=1+\frac2n+\frac1{n^2}\qquad \text{for }n\ge 9$$ and so on.
H: Are real numbers also hyperreal? Are there hyperreal $\epsilon$ between $-a$ and $a$ for any positive real $a$? The set of all hyper-real numbers is denoted by $R^*$. Every real number is a member of $R^*$, but $R^*$ has other elements too. The infinitesimals in $R^*$ are of three kinds: positive, negative and zero. I think zero is not one of the new elements extending the set of real numbers. Question 1: Can we call any real number a hyper-real number, too? For example, $2$ is a real number, can we say that $2$ is a hyper-real number? Question 2: Does the set of hyper-real numbers $R^*$ include infinitesimals, say $\epsilon$, such that $-a<\epsilon<a$ for every positive real number $a$? Additionally: Is it true that if $\epsilon$ is a positive infinitesimal, then $\epsilon>0$, while $-\epsilon$, which is a negative infinitesimal, is less than zero? And that $0$, $\epsilon$ and $-\epsilon$ are all greater than any negative real number and less than any positive real number? AI: Yes to both questions: Note that in your definition (second statement) the reals are among ("members of" and so included in) the hyper-reals. Every real number is a member of $R^∗$, but $R^∗$ has other elements too. And since zero is a real number, it is also in the hyperreals. And $-a < 0 \lt a$ for all positive $a$.
H: What is the interpretation of the magnitude of a matrix? Consider a vector $v$. The magnitude of this vector (if it describes a position in euclidean space) is equal to the distance from the origin: $$(v^Tv)^{\frac{1}{2}} = \sqrt{(v^Tv)}$$ that is, the square root of the dot product. Suppose we compute this value for not just a vector but a matrix M, which describes an operator that transports a position vector. What is the interpretation of the magnitude? AI: For a vector $v\in\mathbb{R}^n$, $\sqrt{v^Tv}$ is just the Euclidean norm $\|v\|_2$. There are infinitely many different norms for vectors and any one of them gives rise to a legitimate measure of "magnitude". The same also holds for matrices. You can measure the "magnitudes" of matrices by using matrix norms. If you stack up the columns of an $n\times n$ matrix $A$ successively with the first column on top, you get a long vector $v$ in $\mathbb{R}^{n^2}$. The Euclidean norm of this vector (i.e. $\|v\|_2$) is called the Frobenius norm of the matrix and it is denoted by $\|A\|_F$. Again there are other kinds of matrix norms as well.
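As a quick illustration in base R of the point that the Frobenius norm is just the Euclidean norm of the stacked-up entries (the example matrix is arbitrary):
A <- matrix(c(1, 2, 3, 4), nrow = 2)   # any matrix will do
sqrt(sum(A^2))                          # Euclidean norm of the stacked entries
norm(A, type = "F")                     # built-in Frobenius norm; same value, here sqrt(30)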
H: Need explanation of passage about Lebesgue/Bochner space From a book: Let $V$ be Banach and $g \in L^2(0,T;V')$. For every $v \in V$, it holds that $$\langle g(t), v \rangle_{V',V} = 0\tag{1}$$ for almost every $t \in [0,T]$. What I don't understand is the following: This is equivalent to $$\langle g(t), v(t) \rangle_{V',V} = 0\tag{2}$$ for all $v \in L^2(0,T;V)$ and for almost every $t \in [0,T]$. OK, so if $v \in L^2(0,T;V)$, $v(t) \in V$, so (2) follows from (1). How about the reverse? Also is my reasoning really right? I am worried about the "for almost every $t$ part of these statements, it confuses me whether I am thinking correctly. Edit for the bounty: as Tomas' comment below, is the null set where (1) and (2) are non-zero the same for every $v$? If not, is this a problem? More details would be appreciated. AI: For $(2)$ implies $(1)$, consider the function $v\in L^2(0,T;V)$ defined by $$v(t)=w, \forall\ t\in [0,T]$$ where $w\in V$ is fixed. Hence, you have by $(2)$ that $$\langle g(t),v(t)\rangle=\langle g(t),w\rangle=0$$ for almost $t$. By varying $w$, you can conclude.
H: Inertial Frames of Reference I am told that in Newtonian mechanics, no coordinate system is "superior" to any other. Also, all inertial frames are in a state of constant, rectilinear motion with respect to one another. So am I right to understand that "inertness of coordinate systems" is an equivalence relation on all the coordinate systems in a space? Furthermore, one should not talk of an inertial coordinate system on its own. In order to talk about inertness one has to choose two coordinate systems and compare them. Finally, no equivalence class is superior to another, whatever "superior" means in its usage in the first paragraph (a meaning I am not sure of). If any of this is not true, please include an example as well. AI: This is true for both Newtonian mechanics and special relativity. The transformation between inertial frames is different in the two. One inertial frame can be much more convenient than others for calculation, but that does not make it superior in theory.
H: Prove $x^{n}-5x+7=0$ has no rational roots This question arises in STEP 2011 Paper III, question 2. The paper can be found here. The first part of the question requires us to prove the result that if the polynomial $$x^{n}+a_{n-1}x^{n-1}+a_{n-2}x^{n-2}+\ldots+a_{0},$$ where each of the $a_{i}$ is an integer, has a rational root, then that root is an integer. It does not give a name for this result. EDIT: user69810 has pointed out that this is in fact the rational root theorem. We are then to use this result to prove that the polynomial $$x^{n}-5x+7=0$$ has no rational solutions for $n\ge 2$. My argument was the following: If there exists a rational root, then it must be an integer. If there exists an integer root, then $$x^{n}=5x-7$$ for some integers $x$ and $n\ge 2$. Then the LHS is divisible by $x$, meaning that $7$ must be divisible by $x$. Therefore $x \in \{-7,-1,1,7\}$. Checking each of these shows that there is no rational root. Is this solution correct? It isn't the one in the solutions, but if it's right then I think it's more elegant than theirs. AI: You have identified the possible rational roots correctly and shown that none of them work. It is a fine application of the rational root theorem.
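Spelling out the four checks: $x=1$ gives $1-5+7=3\neq0$; $x=-1$ gives $(-1)^n+5+7\in\{11,13\}\neq0$; $x=7$ gives $7^n-28\neq0$ since $7^n\ge 49$ for $n\ge2$; and $x=-7$ gives $(-7)^n+42\neq0$ since $7^n\ge 49>42$. So there is no integer root, and hence no rational root.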
H: derivative, the way to show in a graph http://www.wolframalpha.com/input/?i=derivative+x%5E2%2C+x%5E2 Just reminding myself of some math. Is it OK to show the derivative in the way shown in the link above for the function $x^2$? A derivative is supposed to be taken at some point of the function. So I should see lots of lines, not just one. But if there is only one, at what 'point' is it applied (i.e. for which point of the function is the line depicted)? AI: You are confusing "derivative" and "tangent line". A derivative is a function which keeps track of the slopes of tangent lines. At each point the derivative is a number (the slope of the corresponding tangent at that point) whereas the tangent line is determined by an equation (which keeps track of not only the slope but also the point of tangency). To incorporate kaine's comment: You can get Wolfram Alpha to draw a specific tangent line at a point using... tangent line of SOMETHING at x=SOME NUMBER For example: tangent line of x^2 at x=0.5
H: Normalizing an exponential function Given the equation $a^\frac yx + a^x=b$ is there a way to normalize this function into a form where $y=$...? In short can I express $y$ in terms of $x$ if $a$ and $b$ are constants? AI: First rewrite to $a^{y/x}=b-a^x$, then take the $\log$ on both sides to get $\frac{y}{x}\log(a) = \log(b-a^x)$. Finally, we obtain $$y = x\frac{\log(b-a^x)}{\log(a)}.$$
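A quick numerical check of the formula: with $a=2$, $x=2$, $y=2$ we get $b=a^{y/x}+a^{x}=2+4=6$, and indeed $$x\,\frac{\log(b-a^x)}{\log a}=2\cdot\frac{\log(6-4)}{\log 2}=2=y.$$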
H: Concrete Mathematics Prerequisite Question I've been very interested in the book Concrete Mathematics (Graham,Knuth,Patashnik) and I've been reading it for the past few weeks. I'm at the chapter about Sums (Chapter 2), specificaly, the lesson about Finite and Infinite Calculus. My question is if any serious calculus or advanced math background will be needed from this point on? I'm going into 10th grade and the highest math I can take as a 10th grader is Precalculus (in my school at least), so anything I would want to do with calculus at school would have to wait until I'm in 11th grade. I know a few topics about Calculus due to my own interest in the topic, but they've been entirely self-taught and no formal lecturing about them ( maybe this will hold me back?). I seem to understand this part of the book without too much difficulty, but should I first complete a Calculus course before moving on with the book and gather a firm background, or is it not needed? Thanks! AI: You do not need calculus to understand the material on finite calculus; you’ll simply miss some of the analogies until you actually learn the calculus side of them. Most of the material through Section $5.3$ is also accessible without calculus, at least in principle, though some of it is far from easy. The same applies to much of Chapter $6$. On the whole I’d say that you should give it a try if you find it interesting (as I certainly would have done, had it existed when I was that age).
H: Find the maximum value of $r$ when $r=\cos\alpha \sin2\alpha$ Find the maximum value of $r$ when $$r=\cos\alpha \sin2\alpha$$ $$\frac{\rm dr}{\rm d\alpha}=(2\cos2\alpha )(\cos\alpha)-(\sin2\alpha)(\sin\alpha)=0 \tag {at maximum}$$ How do I now find alpha? AI: We have $$\cos\alpha\sin 2\alpha=\cos\alpha(2\sin\alpha\cos\alpha)=2(1-\sin^2\alpha)\sin\alpha=2\sin\alpha-2\sin^3\alpha.$$ Examine the function $t-t^3$, where $-1\le t\le 1$.
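Following the hint through: with $t=\sin\alpha$ we have $r=2(t-t^3)$, and $\frac{d}{dt}(t-t^3)=1-3t^2=0$ gives $t=\frac1{\sqrt3}$, so the maximum value is $$r_{\max}=2\left(\frac1{\sqrt3}-\frac1{3\sqrt3}\right)=\frac{4}{3\sqrt3}=\frac{4\sqrt3}{9}\approx0.77.$$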
H: In diagonalization, can the eigenvector matrix be any scalar multiple? One can decompose a diagonalizable matrix (A) into A = C D C^−1, where C contains the eigenvectors and D is a diagonal matrix with the eigenvalues in the diagonal positions. So here's where I get confused. If I start with a random eigenvalue matrix D > D [,1] [,2] [1,] 7 0 [2,] 0 5 and a random eigenvector matrix C > C [,1] [,2] [1,] 4 2 [2,] 1 3 There should be some matrix A with those eigenvectors and eigenvalues. When I compute A by multiplying C D C^-1 I get > A<-C%*%D%*%solve(C) > A [,1] [,2] [1,] 7.4 -1.6 [2,] 0.6 4.6 My understanding is that if I then work backwards and diagonalize A I should get the same matrices C and D I started with to get A in the first place. But for reasons that escape me, I don't. > eigen(A) $values [1] 7 5 $vectors [,1] [,2] [1,] 0.9701425 0.5547002 [2,] 0.2425356 0.8320503 The first eigenvector is a multiple of column 1 of C and the second eigenvector is a multiple of column 2 of C. For some reason it feels strange that I have this relationship: xC D (xC)^-1 = C D C^-1 where x is a scalar. Did I screw up somewhere or is this true? AI: It's true. In general, if $a$ is a nonzero scalar and $M$ is an invertible matrix, then $(aM)^{-1} = a^{-1}M^{-1}$. So $xCD(xC)^{-1} = xCDx^{-1}C^{-1} = xx^{-1}CDC^{-1} = CDC^{-1}$. Note that the second equality in this chain is only true because $x$ is a scalar (rather than a matrix).
H: Michelson-Morley Experiment I've looked everywhere and I cannot find a complete derivation that includes the step I'm looking for lol...hopefully this will add another link for google. So the full time for light to travel both directions with the ether then against as follows: $$T = \frac{L}{(c+v)}+\frac{L}{(c-v)}\tag 1$$ then you get by adding fractions: $$T = \frac{2Lc}{(c^2-v^2)}\tag 2$$ Then everything I read online and my textbook says that equals: $$T=\frac{2L}{c}\frac{1}{1-\frac{v^2}{c^2}}\tag 3$$ then of course: $$T=\frac{2L}{c(1-\frac{v^2}{c^2})}\tag 4$$ I'm not understanding how they get from $(2)$ to $(3)$. AI: You just divide numerator and denominator by $c^2$: $$T = \frac{2Lc}{(c^2-v^2)}\tag 2=\frac{\frac{2Lc}{c^2}}{\frac{(c^2-v^2)}{c^2}}=\frac{2L}{c}\frac{1}{1-\frac{v^2}{c^2}}$$
H: Order of Permutation : If $\tau \in S_n$ has order $m$, then $\sigma \tau\sigma^{-1}$ also has order $m$. I don't understand the following very simple statement: If $\tau \in S_n$ has order $m$, then $\sigma \tau\sigma^{-1}$ also has order $m$. The proof is: Suppose $\tau$ has order $m$. $(\sigma \tau \sigma^{-1})^{m} = \sigma \tau^{m}\sigma^{-1}= (1)$ Then (using http://www.math.niu.edu/~beachy/abstract_algebra_2ed/guide/soln23.pdf exercise 17) they say that the order of $\sigma\tau\sigma^{-1}$ cannot be less than $n$ (remember we are in $S_n$), since $(\sigma\tau\sigma^{-1})^k=1 $ implies $\sigma\tau^k\sigma^{-1} =(1)$, and then $\tau^k=\sigma^{-1}(1)\sigma = (1)$ I don't understand the last part.... Can anyone help me? AI: I think it's a typo, it should say $m$ instead of $n$: From the fact that $(\sigma \tau\sigma^{-1})^m=1$, we deduce the order must be less than or equal to $m$. But it cannot be less, because if the order were some $k<m$, then it would imply, as they show, that $\tau^k=(1)$, which is false because the order of $\tau$ is $m>k$; therefore the order of $\sigma\tau\sigma^{-1}$ is $m$.
H: $\lim_{n\rightarrow\infty}\frac{a_{n+1}}{a_n} = L$ implies $\lim_{n\rightarrow\infty}a_n^{1/n}=L$ Let $\{a_n\}$ be a sequence of positive numbers. Prove that if $$\lim_{n\rightarrow\infty}\frac{a_{n+1}}{a_n} = L$$ then $$\lim_{n\rightarrow\infty}a_n^{1/n}=L.$$ The first condition means that for any $\epsilon$, there exists $N$ such that for all $n\geq N$, we have $$L-\epsilon < \dfrac{a_{n+1}}{a_n} < L+\epsilon.$$ This means $$(L-\epsilon)^na_N<a_{N+n}<(L+\epsilon)^na_N$$ for all $n\geq 0$. Then $$(L-\epsilon)^{\frac{n}{N+n}}a_N<a_{N+n}^{\frac{1}{N+n}}<(L+\epsilon)^{\frac{n}{N+n}}a_N$$ How can I finish from here? AI: You forgot to exponentiate the $a_N$; actually, your final expression is: $$(L-\epsilon)^{n/(N+n)}a_N^{1/(N+n)}<a_{N+n}^{1/(N+n)}<(L+\epsilon)^{n/(N+n)}a_N^{1/(N+n)}$$ Now by e.g. using the product rule for limits and some standard limits involving powers (note that $N$ is fixed): $$\lim_{n\to\infty} (L-\epsilon)^{n/(N+n)}a_N^{1/(N+n)} = \lim_{n\to\infty}(L-\epsilon)^{n/(N+n)} \lim_{n\to\infty} a_N^{1/(N+n)} = (L-\epsilon)\cdot 1 = L-\epsilon$$ I trust you can conclude from here.
H: Continuous right derivative implies differentiability A book of mine says the following is true, and I am having some trouble proving it. (I've considered using the Lebesgue differentiation theorem and absolute continuity, as well as elementary analysis methods.) Let $f: [0, \infty) \rightarrow \mathbb{R}$ be continuous and have right derivatives at each point in the domain, with the right derivative function being continuous. Then $f$ is differentiable. AI: Let us denote the right derivative of $f$ by $g$. Lemma: Given $a<b$ and $m\le M$, if $m\le g\le M$ on $[a,b]$, then $$m\le\frac{f(b)-f(a)}{b-a}\le M.$$ Proof: Define $$L(a)=g(a)\quad\text{and}\quad L(x)=\frac{f(x)-f(a)}{x-a},\ x\in(a,b].$$ By definition, $L$ is continuous on $[a,b]$, and it suffices to show that for every $\delta>0$, $$E_\delta:=\Big\{x\in[a,b]\,\Big|\, m-\delta\le L(y)\le M+\delta, \forall y\in[a,x] \Big\}=[a,b].$$ By definition and the continuity of $L$, we know that $E_\delta=[a,c]$ for some $c\in[a,b]$, and from $m\le g(a)\le M$ we know $c>a$. Then from $c\in E_\delta $ and $m\le g(c)\le M$ it is easy to see that $c<b$ is impossible. Therefore, $c=b$ and the lemma follows. $\quad\square$ Now let us show that $f$ is differentiable for any $x>0$. Since $g$ is continuous, given $0<h<x$, we can define $$m_h=\min_{y\in[x-h,x]}g(y),\quad M_h=\max_{y\in[x-h,x]}g(y),$$ and we know that $$\lim_{h\to 0^+}m_h=\lim_{h\to 0^+}M_h=g(x).$$ Due to the lemma, for $a=x-h$, $b=x$, $m=m_h$ and $M=M_h$, we have $$m_h\le\frac{f(x-h)-f(x)}{-h}\le M_h.$$ Letting $h\to 0^+$, it follows that the left derivative of $f$ at $x$ exists and is equal to $g(x)$, i.e. $f$ is differentiable at $x$. $\quad\square$
H: Binomial coefficients equal to a prime squared I am looking for some reading on when binomial coefficients are equal to $p^2$ for $p$ a prime. In general I imagine this is rare, as there are simply too many factors. Concretely, I am looking for pairs $(n, k)$ such that ${n + k - 1 \choose k} = p^2$. AI: For the equality $$ {m\choose k}=p^2 $$ to hold for a prime $p$, we must obviously have $2p\le m$. Otherwise $m!$ won't be divisible by $p^2$ so neither will the binomial coefficient. I claim that this implies $k=1$. Without loss of generality we can assume that $k\le m/2$. If $2<k\le m/2$, then $$ {m\choose k}\ge {m\choose 3}=\frac{m(m-1)(m-2)}6, $$ which is larger than $m^2/4\ge p^2$ whenever $m\ge6$. This means that we must have $k=1$ or $k=2$ or $m<6$. The last possibility won't concern us - a brute force check tells us in a few seconds that there are no counterexamples. So we need to take a look at the case $k=2$. But $$ {m\choose2}=\frac{m(m-1)}2. $$ Here $m$ and $m-1$ have no common factors, so from $m(m-1)=2p^2$ we get that either $m-1=2, m=p^2$ or $m-1=p, m=2p$ and both are impossible. The claim follows from this. Obviously you can arrange ${m\choose 1}=m$ to be anything you want.
H: Embedding into a morphism of distinguished triangles Everything in this question happens in a triangulated category $\mathbf{D}$. I am trying to prove that in a diagram like this $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllll} X & \xrightarrow{f} & Y &\rightarrow& Z\\ & & \da{\alpha} & & \\ X' & \rightarrow & Y' & \xrightarrow{k} & Z' \\ \end{array} $$ where $k\circ \alpha\circ f=0$ and horizontal rows are distinguished triangles, it is possible to find two (non unique) morphisms $\beta:X\to X'$ and $\gamma: Z\to Z'$ such that $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllll} X & \xrightarrow{f} & Y &\rightarrow& Z\\ \da{\beta}& & \da{\alpha} & & \da{\gamma} \\ X' & \rightarrow & Y' & \xrightarrow{k} & Z' \\ \end{array} $$ is a morphism of distinguished triangles. I have managed to show it for an easier case, when the upper triangle is $X\to X\to 0\to X[1]$, which is distinguished by the definition of $\mathbf{D}$. Suppose that we have a diagram $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllll} X & \xrightarrow{\mathrm{id}} & X &\rightarrow& 0\\ & & \da{\alpha} & & \da{\gamma} \\ X' & \xrightarrow{j} & Y' & \xrightarrow{k} & Z' \\ \end{array} $$ such that $k\circ \alpha=0$, note that the only choice for $\gamma$ is 0. I want to show that $\alpha=j\circ \beta$ for some $\beta:X\to X'$. I can then get $\beta[1]$ from the diagram $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllll} X & \rightarrow & 0 &\rightarrow& X[1]&\xrightarrow{-\mathrm{id}}&X[1]\\ \da{\alpha} & & \da{\gamma} & & \da{\beta[1]}&&\da{\alpha[1]} \\ Y' & \xrightarrow{k} & Z' & \xrightarrow{} & X'[1]&\xrightarrow{-j[1]}& Y'[1] \\ \end{array} $$ and recover $\beta$ from $\beta[1]$. Can this construction be generalised to arbitrary distinguished triangles? AI: Use cohomological functors: $D$ is triangulated, so $Hom(X,-)$ is a cohomological functor for any object $X$ in $D$. In particular, the short sequence $Hom(X,X')\rightarrow Hom(X,Y')\rightarrow Hom(X,Z')$ (*) is exact, with $X'\stackrel{q}{\rightarrow} Y'\rightarrow Z'$ distinguished in $D$. As $\alpha\circ f$ belongs to the kernel of $Hom(X,Y')\rightarrow Hom(X,Z')$ by hypothesis, there exists an $r\in Hom(X,X')$ s.t. $q\circ r= \alpha\circ f$ by exactness of (*). The last morphism between the distinguished triangles $X\rightarrow Y\rightarrow Z$ and $X'\stackrel{q}{\rightarrow} Y'\rightarrow Z'$ can be found using the axiom T3.
H: Evaluating $\lim_{x\to1}\frac{\sqrt{x^2+3}-2}{x^2-1}$? I tried to calculate, but couldn't get out of this: $$\lim_{x\to1}\frac{x^2+5}{x^2 (\sqrt{x^2 +3}+2)-\sqrt{x^2 +3}}$$ then multiply by the conjugate. $$\lim_{x\to1}\frac{\sqrt{x^2 +3}-2}{x^2 -1}$$ Thanks! AI: You were right to multiply "top" and "bottom" by the conjugate of the numerator. I suspect you simply made a few algebra mistakes that got you stuck with the limit you first posted: So we start from the beginning: $$\lim_{x\to1}\frac{\sqrt{x^2 +3}-2}{x^2 -1}$$ and multiply top and bottom by the conjugate, $\;\sqrt{x^2 + 3} + 2$: $$\lim_{x \to1} \, \frac{\sqrt{x^2 + 3} -2 }{x^2 - 1} \cdot \frac{\sqrt{x ^ 2 + 3} +2}{\sqrt{x^2+3}+2}$$ You were correct to do that. You just miscalculated and didn't actually need to expand the denominator, that's all. In the numerator, we have a difference of squares (which is the reason we multiplied top and bottom by the conjugate), and expanding the factors gives us: $$(\sqrt{x^2+3}-2)(\sqrt{x^2 + 3} + 2) = (\sqrt{x^2+3})^2 - (2)^2 = (x^2 + 3) - 4 = x^2 - 1$$ And now there's no reason to waste time trying to simplify the denominator, since we can now cancel the factor $(x^2 - 1)$ from top and bottom: $$\lim_{x\to1}\frac{\color{blue}{\bf (x^2- 1)}}{\color{blue}{\bf (x^2 - 1)}(\sqrt{x^2 +3}+2)}\; = \;\lim_{x \to 1} \dfrac{1}{\sqrt{x^2 + 3}+2}\; = \;\frac{1}{\sqrt{1+3} + 2} \;=\; \dfrac 14$$
H: Find the values of the constants in the following identity $2x^3+3x^2-14x-5=(ax+b)(x+3)(x+1)+C$ I'm working through identities but I can't figure out how to get further than multiplying out the above to get: $$2x^3+3x^2-14x-5=ax^3+4ax^2+3ax+bx^2+4bx+3b+C$$ Can someone give me a hint on what to do next? AI: Good luck in finding such constants! Let $x=-1$. The right-hand side is $0$, and the left-hand side isn't. Presumably there is a typo. Once it is fixed, we can go on. Edit: The above was a response to the question before there was a $C$ on the right. Below is a solution of the corrected problem. Put $x=-1$. Then the right-hand side is $C$, and the left-hand side is $10$, so $C=10$. Put $x=0$. The right-hand side is $3b+10$, the left is $-5$. So $3b+10=-5$, and now we know $b$. For $a$, the coefficient of $x^3$ on the left is $2$, and on the right it is $a$. Now we need to verify that with the $a$, $b$, and $C$ we have found, the identity actually holds. This is necessary, for all we have shown is that if there are $a$, $b$, and $C$ that satisfy the equation, then $a$, $b$, $C$ must be as calculated. But perhaps there are no $a$, $b$, $C$ with the desired property. Remark: Multiplying out and setting up a system of equations will get you there too, but it is somewhat tedious, and the probability of error increases.
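Carrying the substitutions out: $x=-1$ gives $C=10$; $x=0$ gives $3b+10=-5$, so $b=-5$; comparing the $x^3$ coefficients gives $a=2$. The verification step then checks out: $$(2x-5)(x+3)(x+1)+10=(2x-5)(x^2+4x+3)+10=2x^3+3x^2-14x-5.$$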
H: Finding a maximum likelihood function Let $X_{1},X_{2},\dots,X_{n}$ represent a random sample from a distribution with probability density function $$f(x;\theta)=\frac{x}{\theta}e^{-x^{2}/2\theta}\hspace{1em}x>0 $$ Find the maximum likelihood function. I did: $$L(\theta)=\prod_{i=1}^{n}f(x_{i};\theta)=\prod_{i=1}^{n}\frac{x_{i}}{\theta}e^{-x_{i}^{2}/2\theta} $$ $$\ell(\theta) = \ln(L(\theta)) = \sum_{i=1}^{n}\left(-\frac{x_{i}^{2}\theta}{2}+\ln x_{i}-\ln\theta\right) $$ $$\begin{eqnarray*} \frac{\delta\ell}{\delta\theta} & = & \frac{\delta}{\delta\theta}\left(\sum_{i=1}^{n}\left(-\frac{x_{i}^{2}\theta}{2}+\ln x_{i}-\ln\theta\right)\right)\\ 0 & = & \sum_{i=1}^{n}\frac{x_{i}^{2}-2\theta}{2\theta^{2}}\\ 2\theta^{2} & = & \sum_{i=1}^{n}x_i^{2}-2\theta\\ 2\theta^{2}+2n\theta & = & \sum_{i=1}^{n}x_i^{2}\\ \end{eqnarray*}$$ but it feels wrong. Is it wrong? How should I fix it? AI: Given the distribution family and maximum likelihood estimate definition: $$\displaystyle f(x; \theta) = \frac{x}{\theta} e^{-x^2 / (2\theta)}$$ $$\displaystyle \mathcal{L}(\theta) =\prod_{i = 1}^{n} \frac{x_i}{\theta} e^{-x_i^2 / (2\theta)}$$ Your derivation of the log likelihood should look closer to this: $$\displaystyle \mathcal{l}(\theta) = - n \ln{\theta} + \sum_{i = 1}^{n} \left ( \ln{x_i} + \frac{-x_i^2}{2\theta} \right)$$ And consequently your optimization step of taking the derivative of the log likelihood like: $$\displaystyle \frac{d \mathcal{l}}{d\theta} = -\frac{n}{\theta} + \sum_{i=1}^{n}\frac{x_i^2}{2\theta^2}$$ Which when solving for a derivative of zero, gives you your estimator $$\displaystyle \theta = \frac{1}{2 n} \sum_{i=1}^{n}x_i^2 = \frac{1}{2} \overline{ X^2 }$$ (Where $\overline{ X^2 }$ is the mean of the samples squared, not the square of the mean of the samples.)
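To sanity-check the estimator numerically, here is a small R sketch; the density above has CDF $1-e^{-x^2/(2\theta)}$ (a Rayleigh-type law), so inverse-CDF sampling is straightforward, and the variable names are just illustrative:
set.seed(1)
theta <- 2                        # true parameter value for the experiment
u <- runif(1e5)
x <- sqrt(-2 * theta * log(u))    # inverse-CDF sample, using that u and 1-u have the same distribution
mean(x^2) / 2                     # the maximum likelihood estimate derived above; should be close to 2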
H: Polynomial lower bound on a sequence Let $0<s<1$ and $$a_i=\left(i+1\right)^s-(i)^s, \ i \in \mathbb{N}.$$ I'm trying to find a lower bound on $(a_i)_{i\in\mathbb{N}}$ of the form $$ a_i \geq i^k \ \mbox{for large enough i}.$$ That is, it does not have to hold for small $i$, but only eventually. Of course, we must have $k<0$. I guess I could prove such bound holds for small enough $k$, however I'd like to have some expression for $k$ (which probably depends on $s$, e.g., $k<-1/s$). Any thoughts? Many thanks. AI: By the Mean Value Theorem, we have $$\frac{(i+1)^s-i^s}{1}=f'(\xi),$$ where $f(x)=x^s$ and $\xi$ lies between $i$ and $i+1$. That produces a lower bound of $\dfrac{s}{(i+1)^{1-s}}$, which is not quite of the right shape. But for any $k\gt 1-s$, and large enough $i$ (depending on $k$) this is $\gt \dfrac{1}{i^k}$. Remark: Prompted by the comment of Calvin Lin, we note that $\dfrac{s}{i^{1-s}}$ is an upper bound. Since the ratio of upper bound to lower bound is close to $1$ for $i$ large, one cannot expect to improve on the $k$.
H: Help with a different approach to extracting a polynomial equation from differences It is well known that the degree of a polynomial can be determined by finding when the differences become constant, i.e. if the second differences are the same, it is a polynomial of the 2nd degree, and if the third differences are the same, it is a polynomial of the 3rd degree. I've noticed that the coefficient of the highest degree in the equation can be determined by looking at the corresponding differences. I.e. Given:
x    y     D1y   D2y   D3y
1    1     -4    12    12
2    -3    8     24    12
3    5     32    36
4    37    68
5    105
D here stands for Delta. Right away, you can tell the equation that matches this will be a polynomial of the third degree. I've found that the third difference $\Delta^3 y$ is related to the leading coefficient such that $\ a\ = \frac{\Delta^3 y}{3!} $ where $\ a\ $ is the coefficient for $\ x^3\ $ Here this would be a = 2 What I would like to figure out is how to determine the coefficient for $\ x^2\ $. I know it is possible to create a matrix of equations and solve, but I am curious about a different approach. My thought was to consider the equation as if it had no $\ x^3\ $ term. So in order to do that I must change the differences as if $\ x^3\ $ wasn't a part of the equation and this would let me extract the $\ x^2\ $ term. So I calculated out all of the $\Delta y$ terms using an equation of just $\ 2x^3\ $:
x    y     D1y   D2y   D3y
1    2     14    24    12
2    16    38    36    12
3    54    74    48
4    128   122
5    250
I found that if I subtracted these values from the ones in the original equation I would be left with a constant second difference of -12, meaning that $\ b = \frac{-12}{2!}\ = -6 $ where b is the coefficient of $\ x^2\ $ What I can't figure out is a quick way to determine a value of $\Delta^2 y$ for $\ 2x^3\ $ without adding up all of the numbers manually. If I could do this then the process would be much faster and thus this process would be more practical for polynomial equations of very large degrees. AI: From the first line of values of $x, y, D1y, D2y, D3y$ which are $1, 2, 14, 24, 12$, you can get that the polynomial is $$ \frac{12}{3!} (x-1)(x-2)(x-3) + \frac{24}{2!} (x-1)(x-2) + \frac{14}{1} (x-1) + 2 =2x^3$$ This is known as the method of differences. It should be easy to see how this generalizes. As to why this works, you should convince yourself of the following: There exists a unique polynomial of degree $n$ in which the first $n-1$ differences are 0, and the last difference is 1. Show that this polynomial has the form $\frac{1}{n!} \prod_{i=1}^n (x-i)$. The starting value of $x$, which is 1, tells us that we want the polynomial product to start with $(x-1)$. If your starting value of $x$ was 0 (more common), then the polynomials will be shifted by 1 and hence have the form $x(x-1)(x-2) ... $.
H: Finding a length of arc, what's wrong? Find: $$ \int \sqrt{x^{2}+y^{2}}dl$$ $$L: x^{2}+y^{2}= Rx$$ (at image $p' = -R\cdot \sin(\phi)$ ) AI: You want to evaluate $$\oint_L d\ell \sqrt{x^2+y^2}$$ where $$\left ( x-\frac{R}{2}\right)^2+y^2=\frac{R^2}{4}$$ Parametrizing: $$x(t) = \frac{R}{2} + \frac{R}{2} \cos{t} = R\cos^2{(t/2)}$$ $$y(t)=\frac{R}{2} \sin{t}$$ $$d\ell = \sqrt{x'^2+y'^2} dt = \frac{R}{2} dt$$ $$\sqrt{x^2+y^2} = \sqrt{R x} = R |\cos{(t/2)}|$$ Then the integral is equal to $$\frac{R}{2} R \int_0^{2 \pi} dt\, |\cos{(t/2)}| = R^2 \int_0^{\pi} dt \, \cos{(t/2)} = 2 R^2$$
H: Finding an Orthogonal Transformation with 2 given vectors There are two possible orthogonal transformations of $\mathbb{R}^2$ that leave the origin fixed and send the point $(0,13)$ to $(5,12)$. Find their matrices and describe them geometrically. Can anyone explain to me how I'd go about working this out? Sorry but I know nothing about orthogonal transformations and nowhere seems to explain how I can work them out well enough... AI: Hint: Consider the action on a basis. You want $ (0,1) $ to get mapped to $( \frac{5}{13}, \frac {12}{13})$. Since you want an orthogonal transformation, $(1,0)$ must get mapped to a unit vector orthogonal to that, so either to $(\frac{12}{13}, -\frac{5}{13})$ or to $(-\frac{12}{13}, \frac{5}{13})$; this is where the two possibilities come from. Which matrices do this?
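For checking your answer, one way to write them out: taking the images of the standard basis vectors as columns, the two candidate matrices are $$\frac{1}{13}\begin{pmatrix}12 & 5\\ -5 & 12\end{pmatrix}\qquad\text{and}\qquad \frac{1}{13}\begin{pmatrix}-12 & 5\\ 5 & 12\end{pmatrix}.$$ Both send $(0,13)$ to $(5,12)$; the first has determinant $1$ (a rotation) and the second has determinant $-1$ (a reflection).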
H: Minimizing the distance between two boats. the source of this problem is Stewart's Essential Calculus (Early Transcendentals) 2nd ed. A boat leaves a dock at 2:00PM and travels due south at a speed of $20$ km/h. Another boat has been heading due east at $15$ km/h and reaches the same dock at 3:00 PM. I want to find at what time the two boats were closest together. My plan of approach is to find the positions of the boats at any time $t$, and minimize the square of the distance between them. For the boat traveling due east, its position could be written as $(f(t), 0)$ and for the one due south, $(0, g(t))$. My difficulty lies in determining what $f(t)$ and $g(t)$ are. I'd appreciate it if someone could assist. Thanks. AI: First, draw a picture describing the starting position of the boats and the dock. We know it takes 1hr for the green boat to reach the dock, so I've marked that distance as 1hr. We want the units of $f(t)$ and $g(t)$ to both be in kilometers, so let's start with writing down a function that gives the starting position for each when they're at $t=0$. I'm using $f$ for the vertical boat, $g$ for the horizontal boat. $$f(t) = 0 + \text{something to be added}$$ $$g(t) = -(1\text{ hr})\left(15 \frac{\text{km}}{\text{hr}}\right) + \text{something to be added}$$ The constants give me my offset because they don't start at the same place. See if you can figure out the rest. If you can't, see (mouse-over) below: Moving on, we need to figure out what that "something" is. Well, we know $\text{distance} = \text{rate}\times\text{time}$, so let's try that: $$f(t) = 0 -\left(20\frac{\text{km}}{\text{hr}}\right)t$$ $$g(t) = -(1\text{ hr})\left(15 \frac{\text{km}}{\text{hr}}\right) + \left(15\frac{\text{km}}{\text{hr}}\right)t$$ Why do we need the minus sign in $f$? Well, it's because we're going south, which is in the negative $y$ direction.
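In case you want to check the final answer once you have filled in the rest: with these positions the squared distance between the boats is $$D(t)^2=f(t)^2+g(t)^2=(20t)^2+(15t-15)^2=625t^2-450t+225,$$ whose derivative $1250t-450$ vanishes at $t=0.36\ \mathrm{h}\approx 21.6$ minutes, so the boats should be closest at about 2:21:36 PM.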
H: Subobject classifiers as internalizations I recently read the article on internalizations on nlab, but I am not quite sure what falls under that description. Is it fair to say that subobjects are internalizations of subsets and that the subobject classifier internalizes characteristic functions? AI: Internalization is the process of describing a particular concept completely in terms of category-theoretic language. Once that is done, the same concept can be interpreted (internally, i.e., without need for any external ideas or concepts other than the category-theoretic ones) in any category (provided the category has enough structure for the description to be valid). It is fair to say that subobjects are internalizations of subsets. A subobject classifier, though, is not the concept corresponding to characteristic functions. Instead, a subobject classifier is an internalization of the codomain of characteristic functions. It is what enables characteristic functions to exist (and characterize subobjects).
H: Impossibility to find the General Term of a sequence (or series) Is there a way to formally show that the General Term of a sequence cannot be inferred from some set of given information? If so, how can one do that? An example to illustrate what I mean is: Find the General Term of the series $$\frac{1}{2} + \left(\frac{2}{3}\right)^{4} + \left(\frac{3}{4}\right)^{9} + \left(\frac{4}{16}\right)^{16} + \cdots$$ I'm actually asking this question, because my real analysis exam was typed wrong and the fourth term was printed $(4/16)^{16}$ instead of $(4/5)^{16}$, which would make it possible to show that the General Term is : $$a_n=\left(\frac{n}{n+1}\right)^{n^2}.$$ I was warned to change it, but then this question came to my mind. AI: When we write "$\ldots$" in mathematics, we are really saying "and continue on in the obvious way". But, as you point out, "the obvious way" is open to interpretation. The answer to your question is 'no': for one way to see why, consider polynomial interpolation. Given any $n$ values $x_1,\ldots,x_n$, you can find an (at most) $n$th degree polynomial $p$ so that $p(i)=x_i$ for all $i\in\{1,2,\ldots,n\}$. So, if you are given a sequence of some number of terms, you can always choose ANY further set of finite terms you want, use them to come up with a polynomial, use that polynomial as a rule for a sequence, and feel completely justified that your answer is "reasonable". That's why we have to be so careful with our "and so on", "obvious", "clear", "$\ldots$", etc.
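To make the interpolation argument concrete, here is a small sketch using numpy (the variable names and the invented fifth value are mine): whichever value you choose for the "next" term, a polynomial rule accommodates it.

```python
import numpy as np

given = [1/2, (2/3)**4, (3/4)**9, (4/5)**16]   # the intended first four terms
invented_fifth = 42.0                          # pick absolutely any value here

xs = np.arange(1, 6)
ys = np.array(given + [invented_fifth])
p = np.poly1d(np.polyfit(xs, ys, deg=4))       # degree-4 polynomial through the 5 points

print(p(xs))  # reproduces all five prescribed values (up to floating-point error)
```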
H: Simplify trig function and calculate limit $\lim\limits_{x \to 0} \frac{\tan x-\sin x}{\sin^2 x}$ Using the fact that $\lim\limits_{x \to 0} \frac{\sin(x)}{x}=1$, please help me show that $$\lim\limits_{x \to 0} \frac{\tan(x)-\sin(x)}{\sin^2(x)}=0.$$ Because I am not familar with L'Hôpital's rule and Taylor's Theorem, please avoid the use of either in your solution. AI: If you multiply top and bottom by $\cot x$, you get: $$\lim_{x\to 0}\frac{1}{\cos x}\frac{1-\cos x}{\sin x}$$ Using the taylor series for sine and cosine, we have: $$\lim_{x\to 0}\frac{1}{1-\frac{x^2}{2!}+\ldots}\frac{\frac{x^2}{2!}+\ldots}{x-\frac{x^3}{3!}+\ldots}=\lim_{x\to 0}\frac{1}{1-\frac{x^2}{2!}+\ldots}\frac{\frac{x}{2!}+\ldots}{1-\frac{x^2}{3!}+\ldots}=0$$ after canceling an $x$ and plugging in $0$.
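If you would rather avoid Taylor series altogether (as the question asks), the same first step finishes the job: multiply the second factor by $\frac{1+\cos x}{1+\cos x}$ to get $$\frac{1}{\cos x}\cdot\frac{1-\cos x}{\sin x}=\frac{1}{\cos x}\cdot\frac{1-\cos^2 x}{\sin x\,(1+\cos x)}=\frac{\sin x}{\cos x\,(1+\cos x)}\longrightarrow\frac{0}{1\cdot 2}=0\quad\text{as } x\to 0.$$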
H: Linear Transformation of a straight line Let $L_{1}: x-y-2=0$ be a straight line in the x-y coordinate system. Find a coordinate system $(x_{1},y_{1})$ having its origin at $(0,0)$ and relative to which $L_{1}$ has equation $y_{1}= $constant. I know I keep asking these stupid questions, but my final is tomorrow and I can't help feeling like these notes aren't helping me at all. Can somebody explain the steps to me? AI: $L_1$ has a slope of $1$ so it is at a $45^{\circ}$ angle to both of the current coordinate axes. We know that lines of the form $y=k$ are parallel to the $x$ axis and perpendicular to the $y$ axis, so we must rotate our coordinate axes by $45^{\circ}$ in the counterclockwise direction, which makes the new $x_1$-axis parallel to $L_1$. When we do that we get that $y_1$ is the line $y=-x$ in our current coordinate system and $x_1$ is the line $y=x$. Can you show that the equation for $L_1$ in this new coordinate system is $y_1=-\sqrt{2}$?
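Explicitly, in case it helps to verify the last claim: with that rotation the new coordinates are $x_1=\frac{x+y}{\sqrt2}$ and $y_1=\frac{y-x}{\sqrt2}$. On $L_1$ we have $y-x=-2$, so every point of $L_1$ satisfies $y_1=-\frac{2}{\sqrt2}=-\sqrt2$, while $x_1$ is unrestricted.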
H: Bombing of Königsberg problem A well-known problem in graph theory is the Seven Bridges of Königsberg. In Leonhard Euler's day, Königsberg had seven bridges which connected two islands in the Pregel River with the mainland, laid out like this: And Euler proved that it was impossible to find a walk through the city that would cross each bridge once and only once. And more generally, that an Eulerian path does not exist for a graph with more than two nodes of odd degree. World War 2 changed the topology of the city by destroying two of the bridges. (It also brought other changes such as the transfer of the territory from Germany to Russia, but this is not topologically relevant.) This lead to the similar but solvable problem of the Five Bridges of Kaliningrad. I doubt that the British and Soviet air forces made creation of an Euler walk a priority when they were conducting their attacks. But if they had, one would have to criticize their inefficiency, because they could have achieved the same objective by bombing only one bridge: Generalizing the problem: What is the minimum number of edges that need to be removed from a graph so that the remaining edges form an Eulerian path? My first conjecture was that it would be half the number of “extra” odd degree nodes. However, a simple X-shaped graph provides a counterexample: There are 4 odd nodes (and 1 even one), but two edges need to be removed, not just one. AI: Color the vertices with an odd number of edges white, and the other vertices black. We consider sets of paths, each path from a white vertex to another white vertex. Such a set of paths with a minimal number of edges, that still connects to every white vertex, constitutes a minimum number of bridges to destroy that leaves an Eulerian cycle. If instead we consider such a set of paths with a minimal number of edges, that connects to every white vertex but two, that will answer the original question, giving the minimal number of bridges to destroy that leaves an Eulerian path.
H: Show that $\langle G^+\rangle=G$ in a directed group Lemma 2.1.8 of Glass' Partially Ordered Groups states: $G$ is a directed group if and only if $\langle G^+\rangle=G$ (where $G^+=\{x:x\geq1\}$) This doesn't make any sense to me. For example, $(\mathbb Z,+)$ is a directed group (every two elements have an upper and lower bound) so the lemma implies that you can add some number of positive integers together to create a negative integer, which clearly isn't true. Is he using some weird definition of $\langle \cdot \rangle$ (e.g. where the inverses of elements are included)? What am I not understanding? AI: Given a group $G$ and a subset $S$ of $G$, $\langle S\rangle$ is (typically) defined to be the smallest subgroup of $G$ containing $S$ (i.e.: the intersection of all subgroups of $G$ containing $S$). Thus, inverses are included.
H: radius of convergence for $\sum_{n=1}^{\infty}(n^{\frac1n}-1)x^n$ Find the radius of convergence $R$ for the power series $$\sum_{n=1}^{\infty}(n^{\frac1n}-1)x^n$$ Applying the ratio test, this becomes $\lim_{n\rightarrow\infty}\dfrac{(n+1)^{\frac{1}{n+1}}-1}{n^{\frac{1}{n}}-1}$, which is $\dfrac00$ and seems hard to use L'Hospital. Applying the root test, this becomes $\limsup (n^{\frac1n}-1)^{\frac1n}$, and I don't know what that is. AI: Write $$n^{1/n}=e^{\frac 1 n\log n}$$ For large $n$ (that is, near the origin for $e^x$, since $n^{-1}\log n\to 0$), we can then write $$n^{1/n}=1+\frac{\log n}{n}+\frac{\log^2n}{2n^2}+O(n^{-2})$$ since for large enough $n$, $$\frac{\log^3n}{n^3}\leq \frac 1 {n^2},$$ so $$n^{1/n}-1=\frac{\log n}{n}+\frac{\log^2n}{2n^2}+O(n^{-2})$$ Can you take it from here?
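To spell out one way to finish: the expansion gives $n^{1/n}-1=\frac{\log n}{n}\,(1+o(1))$, so $$\limsup_{n\to\infty}\left(n^{1/n}-1\right)^{1/n}=\lim_{n\to\infty}\left(\frac{\log n}{n}\right)^{1/n}=1,$$ since $\frac1n\log\frac{\log n}{n}\to 0$; hence the radius of convergence is $R=1$.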
H: Absolute convergence condition for radius of convergence Let $\sum_{n=0}^{\infty}a_n(x-t)^n$ be a power series. Let $X=\{|x-t|:\sum_{n=0}^{\infty}|a_n||x-t|^n$ converges$\}$. Let $R$ be the least upper bound of $X$ if $X$ is bounded and let $R=\infty$ if $X$ is unbounded. Prove that $R$ is the radius of convergence of the series $\sum_{n=0}^{\infty}a_n(x-t)^n$. If $b\in X$, then any $a\in[0,b]$ is also in $X$ because $a^n<b^n$ and we use the comparison test. So suppose $|b|<R$. Then $|b|\in X$, so $\sum_{n=0}^{\infty}a_nb^n$ converges absolutely. Now suppose $|b|>R$. So $\sum_{n=0}^{\infty}|a_n||b|^n$ diverges. We must prove that $\sum_{n=0}^{\infty}a_nb^n$ diverges. How can we do it? AI: Hint: Prove and use the following result: If a series $\sum a_n z^n$ converges then $\sum a_n (cz)^n$ converges absolutely for any $-1<c<1$. Further hint: To show the above notice that we must have $a_nz^n$ bounded by some $K$. Now how can you rewrite the second sum with absolute value signs?
H: Confused by $\Re(z)$ and closed contours, why isn't it the integral 0? Let the contour $\gamma$ be the triangle with vertices $\ 0, 1, 1+i $, taken anticlockwise. As this is a closed contour and I understand $$\oint_\gamma z\ dz = 0 $$ as it is analytic inside the contour (and everywhere else). However, how are you meant to evaluate $$\oint \Re(z) \ dz$$ As it's not analytic inside the contour, I split the triangle into three parametrised lines: $$\gamma_1: t\quad 0\le t\le 1 \\ \gamma_2: 1+it\quad 0\le t\le 1 \\ \gamma_3: 1+i -(1+i)t\quad 0\le t\le 1 $$ When you find the derivatives and substitute, then integrate, my final answer is $\not1$, but the answers say $\frac12 i$ Have I made a calculation error, or am I approaching the question the wrong way? Edit my working, answer is still wrong now right.. : $$ \oint_\gamma \Re(z) = \int_0^1 \Re(t)\ dt + i\int_0^1\Re(1+i)\ dt - (1+i) \int_0^1 \Re(1+i - t(1+i))\ dt \\= \int_0^1 \not1 t\ dt + i \int_0^1 1\ dt - (1+i)\int_0^1(1 - t) \ dt \\= \not 1\frac12 + i - \frac{(1+i)}{2} \\ = \frac12 i $$ Final edit: Made a error on second line, was a calculation error. AI: $\gamma_1$ gives rise to $1/2\times 1$. $\gamma_2$ gives rise to $1\times i$. This is from $\int 1 d(1+it)=\int 1 dt\times i$. $\gamma_3$ gives rise to $(1-1/2)\times(-(1+i))$. Summing gives $1/2+i-(1+i)/2=i/2$. Edit: Your current error is that the real part of $t$ is not 1 but $t$ in the first integral.
H: Can a vector space have more than one zero vector? The question above is really it. The reason I ask is that my text says a vector space can have more than one zero vector (It's a true/false question: A vector space may have more than one zero vector). But if the zero vector in any space is unique, then it has only one zero vector, no? Or am I reading "unique" wrong? AI: By unique we mean exactly that there is only one zero vector, so the true/false statement is false. To see this, suppose that we have a vector space with two zero vectors, $x,x'$. Then we have $$ x = x+x' = x' + x = x'. $$ Thus the zero vector is indeed unique.
H: Harnack's inequality Evans' PDE book This is on page 33 of my edition, under the proof of Harnack's inequality. Let $V\subset \overline{V} \subset U$ with $\overline{V}$ compact. Let $r=\frac{1}{4}\text{dist}\left(V,\partial U\right)$. Let $x,y\in V$ s.t. $\left|x-y\right|\leq r$. It seems as though Evans uses $$\int_{B\left(x,2r\right)} u dz\geq \int_{B\left(y,r\right)} udz,$$ but this does not seem obvious to me. What if $u$ is negative in $B\left(x,2r\right)\setminus B\left(y,r\right)$? AI: Harnack's inequality is a statement about non-negative functions that are harmonic in $U$, so $u\geq 0$ throughout $U$. By the choice of $r$, $x$, and $y$ we have $B(y,r)\subset B(x,2r) \subset U$, so the region $B(x,2r)\setminus B(y,r)$ contributes a non-negative amount to the integral, and hence the inequality holds.
H: Show $G=H_3H_5$ Let $G$ be a group and $H_3$ and $H_5$ normal subgroups of $G$ of index $3$ and $5$ respectively. Prove that every element $g\in G$ can be written in the form $g=h_3h_5$, with $h_3\in H_3$ and $h_5\in H_5$. AI: Key facts: The product $HN=\{hn:h\in H,n\in N\}$ is a subgroup of $G$ if $H$ and/or $N$ is normal. Index is transitive; if $A\le B\le G$ then $[G:A]=[G:B][B:A]$ and $[G:B]\mid [G:A]$. So $H_3H_5\le G$ is in fact a subgroup. As both $H_3$ & $H_5$ are subgroups of $H_3H_5$, $[G:H_3H_5]$ must divide both $[G:H_3]$ & $[G:H_5]$...
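To spell out the last step: $[G:H_3H_5]$ divides both $3$ and $5$, and $\gcd(3,5)=1$, so $[G:H_3H_5]=1$, i.e. $H_3H_5=G$. That is exactly the statement that every $g\in G$ can be written as $g=h_3h_5$ with $h_3\in H_3$ and $h_5\in H_5$.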
H: Differential Equation - Solve $y'=y\cot 2x, y (\frac{\pi}{4})=2$ Solve the following differential equation : $$y'=y\cot 2x,\; y (\frac{\pi}{4})=2.$$ My approach : $$\frac{dy}{dx}=y\cot 2x \Rightarrow \frac{dy}{y}=\cot 2x dx$$ Integrating both sides we get : $$\Rightarrow \log y = \frac{\log|\sin 2x|}{2}+c$$ $$\Rightarrow \log\left(y^2\right)= \log|\sin 2x| +c$$ Please guide further.....Thanks.. AI: $$ \int \frac{dy}{y}= \int \cot 2x\; dx$$ $$\implies \;\log |y| = \dfrac 12 \log|\sin 2x| + c$$ $$\iff \log\left(y^2\right)= \log|\sin 2x| +c$$ $$\exp\left(\log\left(y^2\right)\right) = y^2 = \exp\left(\log |\sin 2x| + c\right) = C\,\Big|\sin (2x)\Big|$$ Now simply evaluate $\;y^2 = C\,\Big|\sin(2x)\Big|\;$ at $\,(\pi/4,\;2)\,$ to solve for $\;C$.
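For checking the final answer: substituting $(\pi/4,\,2)$ gives $4=C\,\bigl|\sin(\pi/2)\bigr|=C$, so $C=4$, and on an interval around $x=\pi/4$ where $\sin 2x>0$ and $y>0$ the solution is $y=2\sqrt{\sin 2x}$.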
H: How to think of $\vec{u}-\vec{v}$ Assume I have two vectors, $\vec{u}$ and $\vec{v}$. I know that I can think of their sum via Triangle or Parallelogram Law, but I'm having trouble knowing which way the vector would point depending on if it was $\vec{u}-\vec{v}$ or if it was $\vec{v}-\vec{u}$. Is there an easy way to remember which way the arrow would point? AI: Instead of memorizing the direction, you could try thinking $$\vec{u} = (\vec{u}-\vec{v}) + \vec{v},$$ i.e., $\vec{u}-\vec{v}$ would point to the direction from the tip of $\vec{v}$ to the tip of $\vec{u}$.
H: Likelihood Functon. $n$ random variables or a random sample of size $n$ $\quad X_1,X_2,\ldots,X_n$ assume a particular value $\quad x_1,x_2,\ldots,x_n$ . What does it mean? The set $\quad x_1,x_2,\ldots,x_n$ constitutes only a single value? or, $\quad x_1,x_2,\ldots,x_n$ are n values , that is, $X_1$ assumes the value $x_1$, $X_2$ assumes the value $x_2$,and so on? Why Likelihood Function is a function of parameter $\theta$? Why not is the function of random variables $\quad X_1,X_2,\ldots,X_n$? Let $n=1$ and $X_1$ has the normal density with mean, $\mu=6$ and variance $\sigma^2=1.$ Then the value of $X_1$ which is most likely to occur is $X_1=6.$ How it has been computed? I have thought in the way that since $X_1$ is a random variable it can assume the values $x_1,\ldots,x_n$ [Then how $\quad X_1,X_2,\ldots,X_n$ can assume $\quad x_1,x_2,\ldots,x_n$?] and since the mean of the random variable is $6$ so The value which is most likely to occur is $6$. Is it the real process ? AI: In your first question, what is meant is that $X_1$ assumes the value $x_1$ and $X_2$ assumes the value $x_2$, etc. That may be expressed by saying that the tuple or vector $(X_1,\ldots,X_n)$ assumes the value $(x_1,\ldots,x_n)$. To your third question, one must recognize first that the probability that $X_1$ assumes any particular value is $0$, since this is a continuous distribution. But the probability that $X_1$ is in any interval centered at $6$ is greater than the probability that $X_1$ is in any equally long interval centered anywhere else. Just look at the bell-shaped curve and remember that the probability is the area under the curve. The reason that the likelihood is a function of $\theta$ with $x_1,\ldots,x_n$ fixed is just that the term "likelihood function" is conventionally used to refer to that function. In the case of the family of all normal distributions, this would be a function of the pair $(\mu,\sigma)$ or of the pair $(\mu,\sigma^2)$.
H: Convergence of a series $\sum\limits_{n=1}^\infty\left(\frac{a_n}{n^p}\right)^\frac{1}{2}$ I've got a question about the convergence of a series that came up while studying analysis. If I know that a series of positive real numbers $$\sum_{n=1}^\infty a_n$$ converges, why does $$\sum_{n=1}^\infty\left(\frac{a_n}{n^p}\right)^\frac{1}{2}$$ also converge for $p>1$? Although I know about many convergence tests, I don't know how to apply those tests in this case. Since this problem is of the form "series A converges → series B converges", I've been thinking that it must be verified by using some "comparison" test. Is this thinking correct? All advice is welcome^_^ Thanks. AI: Since $$ab\leq a^2+b^2,$$ applying this with $a=\sqrt{a_n}$ and $b=n^{-p/2}$ gives $$\left(\frac{a_n}{n^p}\right)^{\frac{1}{2}}=\sqrt{a_n}\cdot\frac{1}{n^{p/2}}\leq a_n+\frac{1}{n^p},$$ and hence $$\sum_{n=1}^\infty\left(\frac{a_n}{n^p}\right)^\frac{1}{2}\leq \sum_{n=1}^\infty a_n+\sum_{n=1}^\infty\frac{1}{n^p}<\infty. $$
H: $\epsilon$-$\delta$ proof that $\lim\limits_{x \to 1} \frac{1}{x} = 1$. I'm starting Spivak's Calculus and finally decided to learn how to write epsilon-delta proofs. I have been working on chapter 5, number 3(ii). The problem, in essence, asks to prove that $$\lim\limits_{x \to 1} \frac{1}{x} = 1.$$ Here's how I started my proof, $$\left| f(x)-l \right|=\left| \frac{1}{x} - 1 \right| =\left| \frac{1}{x} \right| \left| x - 1\right| < \epsilon \implies \left| x-1 \right| < \epsilon |x|$$ I haven't made any further progress past this point. Is it possible to salvage this proof? Should I try an alternate approach? AI: Update 2/19/2018: It appears that this answer has received a lot of attention, which I'm very glad to know about. When you're reading through this answer and you're trying to learn about $\delta$-$\epsilon$ proofs for the first time, I would recommend skipping the sections labeled Addendum. on your first read. Please let me know of any other clarifications that you would like with this answer. Whenever I am doing a $\delta$-$\epsilon$ proof, I do some scratch work (note, this is NOT part of the proof) to figure out what to choose for $\delta$. I always tell students to think about the following: What are you given? What do you want to show? In the definition of the limit, you are given an arbitrary $\epsilon > 0$ and you want to find $\delta$ such that $$0 < |x - 1| < \delta$$ implies $$\left|\dfrac{1}{x} - 1 \right| < \epsilon\text{.}$$ You have control over what to choose for your $\delta$ in this case. The idea of this $\delta$-$\epsilon$ proof is to work with the expression $|x - 1| < \delta$ and get $\left|\dfrac{1}{x} - 1 \right| < \epsilon$ at the end. Let's do some scratch work (again, NOT part of the proof). Scratch Work Let's start with what we want to show for our scratch work (starting with what you want to show is bad to do $100\%$ of the time when you're doing proofs - again, this is scratch work and not actually part of the proof). We want to show that $\left|\dfrac{1}{x} - 1 \right| < \epsilon$. Let's work backwards and try to turn the expression $\left|\dfrac{1}{x} - 1 \right|$ into some form of $|x-1|$. So, note that $$\left|\dfrac{1}{x} - 1 \right| =\left|\dfrac{1-x}{x} \right| = \left|\dfrac{-(x-1)}{x} \right| = \left|\dfrac{x-1}{x} \right|$$ since $|y|=|-y|$ for all $y$ in $\mathbb{R}$. The last expression can be rewritten as $\dfrac{\left|x-1 \right|}{\left| x \right|}$. Looking at this expression, we do have $|x-1|$ in the numerator, which is good. But we have that pesky $|x|$ in the denominator. Since we do have control of what $|x-1|$ is less than (this is our $\delta$), let's choose a really convenient, small number to work with that is greater than $0$. Let's say $\delta = \dfrac{1}{2}$. Well, if $|x - 1| < \dfrac{1}{2}$, then $$-\dfrac{1}{2} < x-1 < \dfrac{1}{2} \implies \dfrac{1}{2} < x < \dfrac{3}{2} \implies \dfrac{1}{2} < |x| < \dfrac{3}{2}\text{.}$$ So if we choose $\delta = \dfrac{1}{2}$, $\dfrac{1}{2} <|x| < \dfrac{3}{2}$. Addendum. In many examples, $\delta$ is usually chosen to be $1$. Why did we elect not to do that in this case? It's because it wouldn't work. Intuitively, here's why it doesn't: when you consider the neighborhood of radius $1$ centered around $x = 1$, you get the interval $(0, 2)$. $f(x) = \dfrac{1}{x}$ doesn't have a finite limit at $x = 0$, so this makes $\delta = 1$ a bad choice. This isn't the case if $\delta = 1/2$. The neighborhood of radius $1/2$ around $x = 1$ is $(1/2, 3/2)$. 
$f$ has limits at every $x$-value in the interval $(1/2, 3/2)$, including the endpoints. In terms of the algebra, if we had chosen $\delta = 1$, the algebra wouldn't have worked out. We would've gotten $0 < x < 1$ and would not have been able to obtain a finite upper bound for $\dfrac{1}{x}$. That is, $$0 < x < 1 \implies 1 < \dfrac{1}{x} < \infty\text{.}$$ We do not have a finite upper bound for $\dfrac{1}{x}$ in this case, and hence why $\delta = 1$ will not work for this purpose. If $\dfrac{1}{2} <|x| < \dfrac{3}{2}$, then $$\dfrac{2}{3} <\dfrac{1}{|x|} < 2$$ and $$\dfrac{1}{|x|} < 2 \implies \dfrac{\left|x-1 \right|}{\left| x \right|} < 2\left| x-1 \right|\text{.}$$ Now we have control over what $|x-1|$ is less than. So to get $\epsilon$, we choose $\delta = \dfrac{\epsilon}{2}$. But, wait - didn't I say that we chose $\delta = \dfrac{1}{2}$ earlier? A simple solution would be to minimize $\delta$, i.e., make $\delta = \min\left(\dfrac{\epsilon}{2} , \dfrac{1}{2} \right)$. Addendum. To see why $\delta = \min\left(\dfrac{\epsilon}{2} , \dfrac{1}{2} \right)$ works, suppose $\dfrac{\epsilon}{2} > \dfrac{1}{2}$, so that $\delta = \dfrac{1}{2}$. Then $\epsilon > 1$. Then $$\dfrac{\left|x-1 \right|}{\left| x \right|} < 2|x-1| < 2 \cdot \dfrac{1}{2} = 1 < \epsilon\text{.}$$ Now suppose $\dfrac{\epsilon}{2} \leq \dfrac{1}{2}$, so that $\delta = \dfrac{\epsilon}{2}$. Then $$\dfrac{\left|x-1 \right|}{\left| x \right|} < 2|x-1| < 2 \cdot \dfrac{\epsilon}{2} = \epsilon\text{.}$$ In both cases, we have $\dfrac{\left|x-1 \right|}{\left| x \right|} < \epsilon$, as desired. See also Why do we need min to choose $\delta$?. So now we've found our $\delta$ and can use this to write out the proof. The Proof Proof. Let $\epsilon > 0$ be given. Choose $\delta := \min\left(\dfrac{\epsilon}{2} , \dfrac{1}{2} \right)$. Then $$\left|\dfrac{1}{x} - 1 \right| = \left|\dfrac{x-1}{x} \right| = \dfrac{\left|x-1 \right|}{\left| x \right|} < 2\left| x-1 \right|$$ (since if $|x - 1| < \dfrac{1}{2}$, $\dfrac{1}{|x|} < 2$) and $$2\left| x-1 \right| < 2\delta \leq 2\left(\dfrac{\epsilon}{2}\right) = \epsilon\text{. }\square$$ Success at last. Addendum. Note that the end goal above was achieved, namely to show that $$\left|\dfrac{1}{x}-1\right| < \epsilon\text{.}$$ In the step $$2\left| x-1 \right| < 2\delta \leq 2\left(\dfrac{\epsilon}{2}\right) = \epsilon\text{,}$$ textbooks usually omit the step with the $\delta$ and just write $$2\left| x-1 \right| < 2\left(\dfrac{\epsilon}{2}\right) = \epsilon\text{.}$$ Addendum. It may seem that the note "(since if $|x - 1| < \dfrac{1}{2}$, $\dfrac{1}{|x|} < 2$)" may be an additional assumption added to the problem - i.e., that $\delta$ has to be $\dfrac{1}{2}$. This is not the case for the following reason: given $|x-1| < \delta$, we have $$|x-1| < \min\left(\dfrac{\epsilon}{2}, \dfrac{1}{2}\right)\text{.} $$ Obviously, if $\epsilon \geq 1$, we end up with $|x - 1| < \dfrac{1}{2}$, as stated above. But let's suppose that $\epsilon < 1$. Then $$|x - 1| < \dfrac{\epsilon}{2} < \dfrac{1}{2}$$ and you end up with $|x - 1| < \dfrac{1}{2}$, so the $\dfrac{1}{|x|} < 2$ implication holds in either case.
H: Linear Systems: Application Zoltan (who is behind Mali) and Mali are 18 km apart and begin to walk at the same time. If they walk in the same direction they meet after 6 h. If they walk towards each other they meet in 2 h. Find their speeds. AI: Let $z$ be the speed of Zoltan, assumed constant, and let $m$ be the speed of Mali. Then $$6(z-m)=18 \qquad\text{and}\qquad 2(z+m)=18.\tag{1}$$ The first equation holds because the rate of gain by Zoltan is $z-m$, and in $6$ hours Zoltan gains $18$ km. The second equation holds because when they are travelling toward each other, the distance between them is decreasing at $z+m$ km per hour. From the two equations in (1), we find that $z-m=3$ and $z+m=9$. Add. We get $2z=12$, so $z=6$. Now we can find $m$.
H: single valued function in complex plane Let $$f(z)=\int_{1}^{z} \left(\frac{1}{w} + \frac{a}{w^3}\right)\cos(w)\,\mathrm{d}w$$ Find $a$ such that $f$ is a single valued function in the complex plane. AI: $$ \begin{align} \left(\frac1w+\frac{a}{w^3}\right)\cos(w) &=\left(\frac1w+\frac{a}{w^3}\right)\left(1-\frac12w^2+O\left(w^4\right)\right)\\ &=\frac{a}{w^3}+\color{#C00000}{\left(1-\frac12a\right)}\frac1w+O(w) \end{align} $$ The residue at $0$ is $1-\frac12a$, so setting $a=2$ gives a residue of $0$.
H: point out the mistakes $1=2$? Q:prove $1=2$ ? method1:- let us consider $x=1$ then $x=x^2$ $x-1 =(x^2)-1$ $x-1=(x-1)(x+1)$ $1=x+1$ finally $1=2$ i have little confusion here, is the mistake is considering $x=1$ or else any thing other than this. method2:- $-2=-2$ $1-3=4-6$ $(1^2)-(\frac{2*1*3}{2})=(2^2)-(\frac{2*2*3}{2})$ $(1^2)-(\frac{2*1*3}/2)+((\frac{3}{2})^2)=(2^2)-(\frac{2*2*3}{2})+((\frac{3}{2})^2)$ $(1-\frac{3}{2})^2=(2-\frac{3}{2})^2$ $1-\frac{3}{2}=2-\frac{3}{2}$ [here the formula if $x^n=y^n$ then $x=y$ is failed] $1=2$ point out the mistakes in above two methods exactly and please explain AI: For the first: Don't divide by zero. (That's what you did going from $x-1=(x-1)(x+1)$ to $1=x+1$, since $x=1$.) For the second: $x^2=y^2$ does not imply that $x=y$. After all, $(-1)^2=1=1^2,$ but $-1\ne 1$. The best we can conclude is that $x=\pm y$.
H: Finding % of remaining students in the class would be girls? In class of $120$ students, boys constitute $40$% of total. If $\dfrac 13^{rd}$ of boys and $4$ girls drop out of class to join a camp , what % of remaining students in the class would be girls ? AI: Total number of students, $ T = 120 $. Number of boys, $ N_B = 40\% \text{ of } T = 0.4 * 120 = 48 $. Thus, number of girls, $ N_G = 120 - 48 = 72 $. Now, $ \frac{1}{3}^{rd} $ of boys are leaving, $ \implies (\frac{1}{3} * 48) + 4 = 16 + 4 = 20 $ students are leaving. New count $ N = 120 - 20 = 100 $. New percentage of girls population: $$ P_G = \dfrac{72 - 4}{N} * 100 \% \ = 68 \% $$
H: Computing $ \int_0^\infty \frac{\log x}{\exp x} \ dx $ I know that $$ \int_0^\infty \frac{\log x}{\exp x} = -\gamma $$ where $ \gamma $ is the Euler-Mascheroni constant, but I have no idea how to prove this. The series definition of $ \gamma $ leads me to believe that I should break the integrand into a series and interchange the summation and integration, but I can't think of a good series. The Maclaurin series of $ \ln x $ isn't applicable as the domain of $ x $ is not correct and I can't seem to manipulate the integrand so that such a Maclaurin series will work. Another thing I thought of was using $ x \mapsto \log u $ to get $ \int\limits_{-\infty}^\infty \frac{\log \log u}{u^2} \ du $ and use some sort of contour integration, but I can't see how that would work out either. AI: lemma1: $$\int_{0}^{1}\dfrac{1-e^{-x}}{x}dx-\int_{1}^{\infty}\dfrac{e^{-x}}{x}dx=\gamma$$ pf:use \begin{align}\sum_{i=1}^{n}\dfrac{1}{i}&=\int_{0}^{1}\dfrac{1-t^n}{1-t}dt\\ &=\int_{0}^{n}\dfrac{1-(1-\frac{x}{n})^n}{x}dx \end{align} and $$\ln{n}=\int_{1}^{n}\dfrac{1}{x}dx$$ so $$\gamma=\lim_{n\to\infty}(\sum_{i=1}^{n}\dfrac{1}{i}-\ln{n})=\lim_{n\to\infty}\left(\int_{0}^{1}\dfrac{1-(1-x/n)^n}{x}dx-\int_{1}^{n}\dfrac{(1-x/n)^n}{x}dx\right)$$ so $$\gamma=\int_{0}^{1}\dfrac{1-e^{-x}}{x}dx-\int_{1}^{\infty}\dfrac{e^{-x}}{x}dx=\int_{0}^{1}(1-e^{-x})d(\ln{x})-\int_{1}^{\infty}e^{-x}d(\ln{x})=\cdots=-\int_{0}^{\infty}e^{-x}\ln{x}dx$$
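The step hidden in the $\cdots$ is an integration by parts; a sketch: $$\int_{0}^{1}(1-e^{-x})\,d(\ln x)=\Bigl[(1-e^{-x})\ln x\Bigr]_0^1-\int_0^1 e^{-x}\ln x\,dx=-\int_0^1 e^{-x}\ln x\,dx,$$ since $(1-e^{-x})\ln x\sim x\ln x\to 0$ as $x\to 0^+$ and $\ln 1=0$, while $$\int_{1}^{\infty}e^{-x}\,d(\ln x)=\Bigl[e^{-x}\ln x\Bigr]_1^{\infty}+\int_1^{\infty}e^{-x}\ln x\,dx=\int_1^{\infty}e^{-x}\ln x\,dx.$$ Subtracting the second from the first gives $\gamma=-\int_0^{\infty}e^{-x}\ln x\,dx$, i.e. $\int_0^{\infty}e^{-x}\ln x\,dx=-\gamma$.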
H: Closed Form of Recurrence Relation I have a recurrence relation defined as: $$f(k)=\frac{[f(k-1)]^2}{f(k-2)}$$ Wolfram Alpha shows that the closed form for this relation is: $$ f(k)=\exp{(c_2k+c_1)} $$ I'm not really sure how to go about finding this solution (it's been a few years...). Hints? AI: Your recurrence relation can be rewritten to $$\frac{f(k)}{f(k-1)} = \frac{f(k-1)}{f(k-2)}.$$ If we define $g(k)=\frac{f(k)}{f(k-1)}$, this becomes $g(k)=g(k-1)$. Obviously, this implies $g(k)=a$. By definition of $g$ we now have the remaining equation $\frac{f(k)}{f(k-1)} = a$ or $f(k) = af(k-1)$. This is solved by $f(k) = b\cdot a^k$. To get Wolfram Alpha's version of the solution, we just pick $c_2=\log(a)$ and $c_1=\log(b)$.
H: evaluate the following limit on trigonometry given that \begin{equation} \lim_{y \rightarrow 0} \frac{\sin y}{y}=1 \end{equation} evaluate the following \begin{equation} \lim_{x \rightarrow 0} \frac{2-2\cos^2 x-2 \cos x \sin ^2 x}{x^4} \end{equation} AI: $$\begin{align} 2-2\cos^2{x}-2\cos{x}\sin^2{x} &=2(1-\cos^2{x})-2\cos{x}\sin^2{x} \\[6pt] &=2 \sin^2x-2\cos x \sin^2x \\[6pt] &=2\sin^2{x}(1-\cos{x})\\[6pt] &=2\sin^2{x}\cdot(2\sin^2{\dfrac{x}{2}}) \end{align}$$ so $$\lim_{x\to 0}\dfrac{2-2\cos^2{x}-2\cos{x}\sin^2{x}}{x^4}=\lim_{x\to 0}\left(\dfrac{\sin{x}}{x}\right)^2\cdot\left(\dfrac{\sin{\dfrac{x}{2}}}{\dfrac{x}{2}}\right)^2=1$$
H: Help on showing a function is Riemann integrable Problem Let $(a_k)_{k=1}^\infty$ be the sequence of values of $\mathbb{Q}\cap[0,1]$ (which is countable set). Let $g:\mathbb{R}\rightarrow\mathbb{R}$ by $g=\sum_{k=1}^\infty {1 \over k}\cdot \chi_{a_k}$, that is, $g(x)=0$ if $x\notin\mathbb{Q}\cap[0,1]$ and $g(a_k)={1\over k}$ for all $k\in\mathbb{N}$. Show $g$ is Riemann integrable. (Note: My attempt is below, but I was told that there are many ways to prove this efficiently.) Attempted Work My main problem is understanding the function $g$. I thought at first glance that it was the harmonic series, which is unbounded and so then not integrable, but I was told to think more carefully, but I'm stumped for some reason. I think it's the way how its defined twice to me: seems contradictory definition. But regardless, I went ahead to try... Proof so far: Let $g$ be defined as above. For $g(a_k)={1\over k}$ we have that $g$ is bounded with compact support in $(a_k)\subseteq\mathbb{Q}\cap[0,1]$. Now let $g={1 \over k}={1\over k^m}+{k^{m-1}-1\over k^m}$ for $k=2,...,\infty$, where $f_1={1\over k^m}$ is bounded above (by 1) and $f_2={k^{m-1}-1\over k^m}$ is bounded by some $\epsilon_k$>0. Both $f_1$ and $f_2$ have thier support in $(a_k)$. $f_1$ is Riemann integrable (is this circular logic?) and $||f_2||_\infty\leq\epsilon_k$. Hence, by a Theorem about Lower and Upper Riemann Sums provided, $g$ is Riemann integrable. (I know this last part leaves much to be desired) Lemma provided Basically we were provided a nameless lemma that said if we could show that: (1) $f_1$ and $f_2$ existed such that $g=f_1+f_2$ with $f_1$ and $f_2$ bounded with support on $(a_k)$ and (2) $f_1$ is Riemann integrable and $||f_2||_\infty\leq\epsilon$ for all $\epsilon>0$. Then $g$ is Riemann integrable. Any tips, advice or help is really appreciated. Thanks so much in advance! AI: I think it is easier to work from the definition here. It should be clear that for any partition $\pi$, we have $L(g,\pi) = 0$. Pick $\epsilon>0$. Choose $N$ such that $\frac{1}{N} < \frac{1}{2} \epsilon$. It should be clear that we can pick a partition $\pi$ of $[0,1]$ such that the points $a_1,...,a_N$ are contained in intervals of total length $\frac{1}{2} \epsilon$. Let $\{I_k\}$ be the intervals of the partition $\pi$, and let ${\cal I} \subset \{I_k\}$ be the collection of intervals containing any point $a_1,...,a_N$. Then \begin{eqnarray} U(g,\pi) &=& \sum_k (\sup_{x \in I_k} g(x)) \ l(I_k) \\ &=& \sum_{I \in {\cal I}} (\sup_{x \in I} g(x)) \ l(I) +\sum_{I \in {\cal I}^c} (\sup_{x \in I} g(x)) \ l(I) \\ &\le& \sum_{I \in {\cal I}} (1) \ l(I) + \sum_{I \in {\cal I}^c} (\frac{1}{N} l(I)) \\ & \le& \frac{1}{2} \epsilon + \frac{1}{N} \\ &<& \epsilon \end{eqnarray} It follows that $g$ is integrable and $\int_0^1 g(x)dx = 0$. Note: You could view the above as writing $g = f_1+f_2$, where $f_1 = \sum_{k=1}^N \frac{1}{k} 1_\{a_k\}$, and $f_2 = g-f_1$. It should be clear that $\|f_2\|_\infty < \frac{1}{2} \epsilon$, however, I think it is simpler to do the above.
H: Standard logic notation in mathematics My profesor is always complaining that my proofs are very long and difficult to read because I never use notation, meaning I say everything in words. Tired of that I decided to study logic by myself and develop my proofs by using the methods of logic. The problem to me now is that I don't know exactly the point of being pedantic in writing everything in symbols of logic and writing in words. For example I don't know if it's ok or how to simplify my logic statement $\forall X\forall Y(X\in Cam(R)\wedge Y\in Cam(R)\wedge (X,Y)\in R\longleftrightarrow (F(X),F(Y))\in S)$. or for example if it's the case of having a lot of quantifiers like $\forall X \forall Y \forall Z(Z\in \mathbb{N} \wedge Y\in \mathbb{N} \wedge Z\in \mathbb{N}\wedge \phi(X,Y)\longrightarrow \psi(X,Z))\wedge Z>Y)$ Also I always see that, when doing deductions, people write things like $\phi(X)\longrightarrow \psi(X)$ $\longrightarrow \psi_{1}(X)$ $\longrightarrow \psi_{2}(X)$ . . . $\longrightarrow \psi_{n}(X)$ $\therefore \psi_{n}(X)$ This means we suppose $\phi$. Then we get $\psi$. This implies $\psi_{1}$ . Then this implies $\psi_{2}$, and then it implies $\psi_{3}$... etc. So, basically what I need is "tips" (the more,the better) to develop my proofs in a more or less standard way, just like you would do in your tests or the way you would write your proof on the blackboard, or to hand in your homework, etc. (If you know a book, a paper, or something that might help me with this I will be very gratefull). AI: It’s possible that your proofs are too wordy and don’t use enough notation, but even if that’s true, what you’re proposing here is not the solution: arguments expressed almost entirely in formal logical notation are at least as hard to read as arguments that drown the reader in verbiage. Of course you want to be sure that what you write is correct, but after that the most important thing is saying it clearly. Generally that requires a well-chosen mixture of symbols and words; the ideal mixture is different for different people, but for most people both extremes are hard to read. Here’s an example, modified from an earlier question, of three ways of defining a certain function: Define a function $f$ by $$f:\wp(\Bbb N)\setminus\{\varnothing\}\to\Bbb N \\ x\mapsto y \owns y\in x\land\forall z:z\in x\to z\geq y\;.$$ Define a function $f$ by $$f:\wp(\Bbb N)\setminus\{\varnothing\}\to\Bbb N:x\mapsto\min x\;.$$ Given a nonempty set $A$ of natural numbers, denote its least element by $f(A)$. The first is horrible to read. The second is much clearer and would be appropriate wherever a concise but readable definition is wanted. And the third, though by far the most wordy, is instantly clear. I would never use the first; the choice between the second and third would depend on the context and the intended audience. When writing for your professor, for instance, you might want to lean towards the second version. Besides tending to make mathematics hard to read, excessive use of formal notation can get in the way of thinking about a problem. For a possible example of this phenomenon, see this question and my answer to it. Please note that I do not mean to deny the possibility that you really are muddying things by using words when symbols would be preferable: a version of the quadratic formula and its derivation that used no mathematical symbols would be extremely hard to follow. Because judgement of readability is highly subjective, it’s very difficult to give concrete, objective advice. 
The best that I can suggest is to pay attention to the way proofs are written in textbooks; you’ll find considerable variation, but in general it will be within the range of acceptable practice, especially in your more advanced courses.
H: Change of Coordinate Matrix question. I have this question and the wording is very confusing, I dont understand how to answer it. Any help will be greatly appreciated. I have tried answering it and I just dont know where to begin. EDIT: I'm not just looking for an answer, I genuinely want to understand how it works. Thank you. AI: To find the target matrix for b, we have to solve $$b_1=\alpha_1 c_1+\beta_1 c_2\\\ b_2=\alpha_2 c_1+\beta_2 c_2 $$ or equivalently: $$ \left\{ \begin{array}{ll} (4,~4)=\alpha_1 (2,~2)+\beta_1 (-2,~2)\to \left\{ \begin{array}{ll} 4=2\alpha_1-2\beta_1 \\ 4=2\alpha_1 +2\beta_1 \end{array} \right.\\ (8,~4)=\alpha_2 (2,~2)+\beta_2 (-2,~2)\to \left\{ \begin{array}{ll} 8=2\alpha_2-2\beta_2 \\ 4=2\alpha_2 +2\beta_2 \end{array} \right. \end{array} \right. $$
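If it helps to check your work: the first system gives $\alpha_1=2,\ \beta_1=0$ and the second gives $\alpha_2=3,\ \beta_2=-1$. So, under the usual convention that the coordinate vectors of $b_1,b_2$ relative to $\{c_1,c_2\}$ form the columns (check which convention your course uses), the change-of-coordinates matrix is $\begin{pmatrix}2&3\\0&-1\end{pmatrix}$.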
H: Euler's sum of divisors recurrence relation Euler came up with the following recurrence relation for the sum of divisors $$\sigma(n) = \sigma(n-1) + \sigma(n-2) - \sigma(n-5) - \sigma(n-7) \dots$$ Since $\sigma(p) = p+1$, where $p$ is a prime number, we can use the recurrence relation to verify if a number is prime. It seems fairly fast to add a few numbers, especially since the numbers subtracted increase quadratically. I'm wondering how efficient it is to use this method to find a prime, or to build all the primes up to a number. AI: It's recursive. $\sigma(n)$ calls on $\sigma(n-1)$. This means that, in the process of calculating $\sigma(5271009)$, you will have calculated $\sigma(n)$ for every integer $n$ from $1$ to $5271009$. This may be the least efficient known way of determining whether $5271009$ is prime.
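If you do want to experiment with the recurrence, a bottom-up table avoids the deep recursion; here is a minimal Python sketch (the function name and the primality check are mine). Each $n$ uses the generalized pentagonal numbers up to $n$ (roughly $2\sqrt{2n/3}$ of them), so building the table up to $N$ takes on the order of $N^{3/2}$ additions of ever-larger numbers, which supports the point above: noticeably more work than a standard sieve if all you want is the primes up to $N$.

```python
def sigma_table(N):
    """sigma(n) for n = 1..N via Euler's pentagonal-number recurrence (index 0 unused)."""
    sigma = [0] * (N + 1)
    for n in range(1, N + 1):
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:
            sign = 1 if k % 2 == 1 else -1
            for g in (k * (3 * k - 1) // 2, k * (3 * k + 1) // 2):
                if g <= n:
                    # when g == n the term "sigma(0)" is replaced by n itself
                    total += sign * (n if g == n else sigma[n - g])
            k += 1
        sigma[n] = total
    return sigma

sigma = sigma_table(30)
print(sigma[1:13])                                      # [1, 3, 4, 7, 6, 12, 8, 15, 13, 18, 12, 28]
print([n for n in range(2, 31) if sigma[n] == n + 1])   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```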
H: For a given image $\mathbf X$, the equivalence class for pixels $p$ with labels $l$. $\left[l\right]=\left\{p \in\mathbf X|\,p\sim l\right\}$. This is taken from this paper on image segmentation, page $2$. I don't know how to interpret this, do they mean "all the pixels on image $\mathbf X$ that share a label $l$ on algorithm step $t$ become equivalent"? AI: Yes, that's correct. The equivalence classes partition X by grouping together the pixels with the same label. For the purposes of this algorithm, pixels that share the same label are considered equivalent.
H: Finding the cumulative distribution function of the minimum of a sample The Weibull pdf is given as the following. $$f(x) = \begin{cases} \frac{1}{\alpha}mx^{m-1}e^{-x^m/\alpha} \quad \text{if } x > 0 \\ 0 \quad \text{else} \end{cases}$$ If a random sample of size $n$ is taken from a Weibull distribution, what is the pdf and cdf of the minimum of the sample, $Y_i$? I realize $g_1(y_1) = n[1-F(y_1)]^{n-1}f(y_1)$; however how do I find the cdf $F(y_1)$? I tried integrating the pdf from 0 to $y$ but am having difficulty. AI: Let $X_1,X_2,\dots,X_n$ be iid Weibull, and let $W$ be the minimum of the $X_i$. Then $W\gt w$ if and only if all the $X_i$ are $\gt w$. This has probability $(1-F_X(w))^n$, where $F_X$ is the cdf of one of the Weibulls. Thus the cdf of $W$ is given by $$F_W(w)=1-\Pr(W\gt w)=1-(1-F_X(w))^n$$ (for positive $w$). To find $1-F_X(w)$, we need to calculate $$\int_w^\infty \frac{1}{\alpha}mx^{m-1}\exp(-x^m/\alpha)\,dx.$$ Make the substitution $\frac{x^m}{\alpha}=t$. Then $dt=\frac{1}{\alpha}mx^{m-1}\,dx$, and the integral becomes the easy $$\int_{t=w^m/\alpha}^\infty e^{-t}\,dt.$$
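Carrying out the substitution: $1-F_X(w)=e^{-w^m/\alpha}$ for $w>0$, so $$F_W(w)=1-e^{-n w^m/\alpha},\qquad f_W(w)=\frac{nm}{\alpha}\,w^{m-1}e^{-n w^m/\alpha}\quad (w>0),$$ that is, the minimum of the sample is again Weibull, with $\alpha$ replaced by $\alpha/n$.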
H: Probability of adjacent seating A homework question states: A room holds two rows of six seats each. Two friends are assigned randomly to the 12 seats. What is the probability that the 2 friends sit in adjacent seats? Note: Friends sitting behind friends don't count. Friends sitting diagonally adjacent to each other don't count. Only friends setting beside each other (left/right) in the same row count. $$ \cdot~~~~~= Empty~seat $$ $$ \times = Occupied~seat $$ $$ \begin{bmatrix} \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \end{bmatrix} $$ By drawing out the favorable possibilities: $ \begin{bmatrix} \times & \times & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \end{bmatrix} $$ \begin{bmatrix} \cdot & \times & \times & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \end{bmatrix} $$ \begin{bmatrix} \cdot & \cdot & \times & \times & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \end{bmatrix} $$ \begin{bmatrix} \cdot & \cdot & \cdot & \times & \times & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \end{bmatrix} $$ \begin{bmatrix} \cdot & \cdot & \cdot & \cdot & \times & \times \\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \end{bmatrix} $ $ \begin{bmatrix} \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \times & \times & \cdot & \cdot & \cdot & \cdot \\ \end{bmatrix} $$ \begin{bmatrix} \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \times & \times & \cdot & \cdot & \cdot \\ \end{bmatrix} $$ \begin{bmatrix} \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \times & \times & \cdot & \cdot \\ \end{bmatrix} $$ \begin{bmatrix} \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \times & \times & \cdot \\ \end{bmatrix} $$ \begin{bmatrix} \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \times & \times \\ \end{bmatrix} $ It seems like there are a total of 10 favorable situations. I hope I'm right in saying there are a total of ${12 \choose 2}$ total possible situations (friends can sit in any two seats)? So is the probability that 2 friends sit adjacent to each other in this room of 12 seats: $$ \frac{10}{{12 \choose 2}} = \frac{10}{66} = 0.1515152$$ Whether that's right or wrong, I guess, what's the better mathematical approach (using the whole ${X \choose Y}$ thing to think about this problem? AI: The calculation is correct, and efficiently done. There are $\binom{12}{2}$ equally likely ways to select $2$ seats from $12$. You then counted the "favourables" well, though it was quite unnecessary to list separately the favourables in the first row and the favourables in the second. Since you did the problem in a nice way, let me do it an uglier way. Call the people Alicia and Bob, and seat Alicia first. We could (i) put her in an end seat or (ii) not in an end seat. The probability we put Alicia in an end seat is $\frac{4}{12}$, Given she is in an end seat, the probability Bob ends up beside her is $\frac{1}{11}$. The probability Alicia is not in an end seat is $\frac{8}{12}$, and then Bob has probability $\frac{2}{11}$ of ending up beside her. Thus our probability is $$\frac{4}{12}\cdot\frac{1}{11}+\frac{8}{12}\cdot\frac{2}{11}.$$
H: Another Error in Neukirch's Algebraic Number Theory? I'm reading Neukirch's Algebraic Number Theory and trying to do the exercises. I think I may have found another error, but am not sure... Exercise 7. In a noetherian ring $R$ in which every prime ideal is maximal, each descending chain of ideals $a_1 \supseteq a_2 \supseteq \ldots$ becomes stationary Hint: Show as in (3.4) that (0) is a product $p_1 \ldots p_r$ of prime ideals and that the chain $R \supseteq p_1 \supseteq p_1p_2 \supseteq \ldots \supseteq p_1\ldots p_r = (0)$ can be refined into a composition series. So, my main issue is that the ring $\mathbb{Z}$ is noetherian and every prime ideal is maximal, yet the chain $$(2) \supsetneq (4) \supsetneq (8) \supsetneq \ldots \supsetneq (2^n) \supsetneq \ldots$$ Never becomes stationary. Additionally, I don't know what (correct) statement he could be asking me to prove. (I also realize there may well be a translational error.) So am I reading it wrong or what? AI: The (0) ideal in $\mathbb{Z}$ is prime, but not maximal. So $\mathbb{Z}$ does not satisfy the hypotheses.
H: Does every infinite group have a maximal subgroup? $G$ is an infinite group. Is it necessarily true that there exists a maximal subgroup $H$ of $G$? Is it possible that there exists a series $H_1 < H_2 < H_3 <\cdots <G $ with the property that for every $H_i$ there exists $H_{i+1}$ such that $H_i < H_{i+1}$? AI: Rotman p. 324, problem 10.25: The following conditions on an abelian group $G$ are equivalent: (1) $G$ is divisible; (2) every nonzero quotient of $G$ is infinite; (3) $G$ has no maximal subgroups. It is easy to see that the above points are equivalent. In particular, a divisible abelian group such as $\mathbb{Q}$ has no maximal subgroups, so the answer to your first question is no; and inside $\mathbb{Q}$ the subgroups $\mathbb{Z}<\tfrac12\mathbb{Z}<\tfrac14\mathbb{Z}<\cdots$ form a strictly increasing chain of the kind described in your second question. If you need the details, I can add them here.
H: Question regarding the definition of direct sum decomposition of a representation Please bear with me. I am trying to learn representation theory of finite groups from J.P. Serre's book by myself. Here, the author has used the word 'representation' for the homomorphism $\rho : G\rightarrow GL(V)$, as well as the vector space $V$, interchangeably. But I am a little confused about the definition of direct sum of subrepresentations. Please correct me if I have understood it right: A $subrepresentation$ of $V$ is a subspace $W\subseteq V$ such that $\rho_g (W) \subseteq W $ $\forall g\in G$, and $V$ is said to be the $direct$ $sum$ of two subrepresentations $W,W'$ if $V=W\oplus W'$ as 'a vector space', where the direct sum is $internal$. What I am confused about is what it means for a representation to be the direct sum of two subrepresentations, where by 'representation' we mean the 'homomorphism' $\rho$ itself, i.e. the exact formulation of $\rho$ in terms of $\rho_{|W}$ and $\rho_{|W'}$. I understand that if a proper basis is chosen, then the matrix of $\rho_g$ can be written as $$[\rho_g] = \begin{pmatrix} [\rho_{g|W}] & 0\\ 0&[\rho_{g|W'}]\end{pmatrix}$$ But I want a description of $\rho_g$ in terms of $\rho_{g|W}$ and $\rho_{g|W'}$ which does not depend on the choice of a basis, e.g. if $T_1: V_1 \rightarrow W_1, T_2:V_2 \rightarrow W_2$ are two linear transformations of vector spaces, we can define the direct sum $$T_1\oplus T_2 : V_1\oplus V_2 \rightarrow W_1\oplus W_2 $$ in a canonical way, when the direct sum is understood to be $external$, without reference to a basis. I guess I am confused about the $internal$ direct sum version of it. AI: To make the distinction between internal and external direct sum more clear, let me write external ones as $S\times T$ (in fact the underlying set of the external direct sum is the cartesian product set). Then elements of $S\times T$ are pairs $(s,t)$ with $s\in S, t\in T$, and $$ T_1\times T_2 : V_1\times V_2 \rightarrow W_1\times W_2 \qquad\text{maps}\quad (v_1,v_2)\mapsto (T_1(v_1),T_2(v_2)) $$ for $(v_1,v_2)\in V_1\times V_2$. Saying $V=U_1\oplus U_2$ is an internal direct sum of subspaces $U_1,U_2\subseteq V$ just means that the map $\alpha_{U_1,U_2}:U_1\times U_2\to V: (u_1,u_2)\mapsto u_1+u_2$ (which is defined because $u_1,u_2\in V$) is an isomorphism of vector spaces (surjectivity means that $U_1+U_2=V$, injectivity that $U_1\cap U_2=\{0\}$). Now suppose you have a representation of $G$ on $V$, and $V$ can be written as the direct sum of two invariant subspaces $U_1,U_2$. Then by restriction one has representations $\rho_1,\rho_2$ of $G$ on $U_1$ respectively on $U_2$, called subrepresentations of $G$. The original representation can be recovered from just the subrepresentations by using, for each $g\in G$, the composite isomorphism $$ V \overset{\alpha_{U_1,U_2}^{-1}}\longrightarrow U_1\times U_2 \overset{\rho_1(g)\times\rho_2(g)}\longrightarrow U_1\times U_2 \overset{\alpha_{U_1,U_2}}\longrightarrow V. $$ Explicitly, to find the image by $\rho(g)$ of $v\in V$, write $v=u_1+u_2$ in the unique possible way with $u_i\in U_i$ for $i=1,2$ (this is applying $\alpha_{U_1,U_2}^{-1}$), then form $\rho_i(g)(u_i)\in U_i$ for $i=1,2$ (this is applying $\rho_1(g)\times\rho_2(g)$) and finally add the results as elements of $V$ (this is applying $\alpha_{U_1,U_2}$). All in all $$ \rho(g): v=u_1+u_2 \mapsto \rho_1(g)(u_1)+\rho_2(g)(u_2) \qquad\text{where $u_1\in U_1,\;u_2\in U_2$}. $$
H: Which ring homomorphisms preserve/reflect what? Exams are coming up and I'm getting kind of desperate. So more now than ever, whatever help you're able to provide is much appreciated. In the abstract algebra exam I'm currently preparing for, there's a lot of focus on the following ring-theoretic concepts. units irreducible elements prime elements subrings ideals prime ideals maximal ideals Clearly these are all preserved and reflected under ring isomorphisms. But I need to know: which of these concepts are preserved and/or reflected under possibly injective and/or surjective ring homomorphisms? I'd work it out myself, but I really, really need to go on to the next section of the course (modules) in my revision. Thank you for your time. All answers are welcome, no matter how incomplete. AI: Here's what I got so far (rings are assumed to be commutative with identity). Units are preserved by all ring homomorphisms, but not necessarily reflected by monomorphisms (take $\mathbb{Z}\to \mathbb{Q} $, e.g. $2\in \mathbb{Q}$ is not a unit in $\mathbb{Z}$) or epimorphisms (quotient $k[x] \to k$, $k$ a field). Prime elements are not necessarily preserved under monomorphisms, take $\mathbb{Z} \to \mathbb{Z}[i]$, or $\mathbb{Z} \to \mathbb{Q}$. The same example shows irreducible elements are not necessarily preserved under monomorphisms. Subrings are preserved by all homomorphisms. The image of an ideal under a monomorphism need not be an ideal (take $\mathbb{Z} \to \mathbb{Q}$, only the $(0)$ ideal gets mapped to an ideal). The image of an ideal under an epimorphism is ideal. The inverse image of an ideal is an ideal. The inverse image of a prime ideal is prime. The inverse image of a maximal ideal need not be maximal (take $\mathbb{Z} \to \mathbb{Q}$, $(0)\subset \mathbb{Q}$ is maximal).
H: smooth approximate parameterization to polygonal boundary I can "almost" parameterize the boundary of a square using $${\bf r}(t) = (\cos t)^{1/p} {\bf i} + (\sin t)^{1/p} {\bf j},$$ $0\leq t\leq 2 \pi$, and $p$ is odd. This parameterization is smooth (or at least $C^1$), and of course is the unit ball in the $L^p$ norm. Letting $p\rightarrow \infty$ makes our approximation better. Now suppose I have a triangle, or in general, a convex polygon with vertices $a_1, a_2, \dots, a_n$. Is there some relatively simple, explicit smooth approximate parameterization of the boundary? It should, of course, have some tweakable parameter that allows for convergence, like $p$ in the example I gave. The simpler, and the "cuter", the better. Another issue is the following. With my parameterization of the square, if I want to get a list of roughly equally distributed points on the boundary, I cannot do this by letting $t=2 \pi i/N$. In that case the points collect around the corners on the square. Is there some nice way of fixing this for the square, and for my polygon in general? AI: An arc-length parametrization of your polygon uses piecewise-linear functions: if $a_j = (b_j, c_j)$, $r(t) = \sum_j u_j(t) a_j$ where $u_j(t)$ is a "triangular" function of the form $$ u_j(t) = \cases{ \dfrac{t-t_{j-1}}{t_j - t_{j-1}} & for $t_{j-1} \le t \le t_j$\cr \dfrac{t_{j+1} - t}{t_{j+1}-t_j} & for $t_j \le t \le t_{j+1}$\cr 0 & otherwise}$$ You can smooth it out e.g. by replacing $u_j(t)$ by $$\frac{p}{\pi} \int_{-\infty}^\infty \frac{u_j(s)}{p^2 + (t-s)^2}\ ds$$ for $p \to 0+$. This is a bit messy, but can be written explicitly using logarithms and arctan.
H: Number of ways to distribute 5 distinguishable balls between 3 kids such that each of them gets at least one ball How many ways are there to distribute 5 distinguishable balls between 3 kids such that each of them gets at least one ball? My approach is $ \binom{5}{3} 3! $ + $ \binom{2}{2} \binom{3}{2}2!$ which is equal to $66$? AI: It’s not the quickest or most elegant approach, but this problem is straightforwardly done by considering cases. If each kid gets at least one ball, the balls must be distributed either $3$-$1$-$1$ or $2$-$2$-$1$. $3$-$1$-$1$: There are $\binom31=3$ ways to choose which kid gets $3$ balls, and $\binom53=10$ ways to choose $3$ balls for that kid. There are then $2$ ways to distribute the remaining $2$ balls to the other $2$ kids. This case therefore accounts for $3\cdot10\cdot2=60$ possible distributions. $2$-$2$-$1$: There are $3$ ways to choose which kid gets only $1$ ball, and $5$ ways to pick the ball for that kid. There are then $\binom42=6$ ways to choose which $2$ balls go to the next kid in line, and the remaining kid gets the remaining $2$ balls. This case accounts for another $3\cdot5\cdot6=90$ possible distributions. The correct total, therefore, is $60+90=150$. Added: It appears to me that you reasoned something like this: First we choose $3$ of the $5$ balls and distribute one of them to each kid; that can be done in $\binom533!$ ways. Then we take both of the remaining $2$ balls, pick $2$ of the $3$ kids, and distribute the last $2$ balls in one of the $2$ possible ways to the $2$ lucky kids. There are several problems with this approach. You’ve considered only the second of my two cases. You’re combining the results of successive choices, not the counts of disjoint cases, so you should be multiplying, not adding: $\binom533!\cdot\binom22\binom322!=60\cdot6=360$. You’re overcounting. Suppose that the balls are labelled A, B, C, D, and E. Then you’ve counted the distribution AD | BE | C four times: once as the result of distributing A, B, and C in the first step and then distributing D and E to the first and second kids in the second step; once as the result of distributing D, B, and C in the first step and then distributing A and E to the first and second kids in the second step; once as the result of distributing A, E, and C in the first step and then distributing D and B to the first and second kids in the second step; and once as the result of distributing D, E, and C in the first step and then distributing A and B to the first and second kids in the second step.
H: Why is $1 =1 $? why is it so why cant be $1 =$ something else? It may sound stupid but why is $1=1$ or $n=n$ if thats the case does $1/0 = 1/0.$ AI: Because we define equality to be an equivalence relation, which must satisfy three properties: reflexivity, meaning $x\sim x$; symmetry, meaning $x\sim y \implies y\sim x$; and transitivity, meaning $x\sim y$ and $y\sim z$ imply $x\sim z$; where $x\sim y$ means that the relation holds between $x$ and $y$.
H: Geometrical solution to a search problem Given: an infinite grid a robot that can move to adiacent cells you have available up(), down(), left(), right() and distanceToDestination() functions destination coordinates and robot coordinates are unknown To do find an algorithm to move the robot to destination asap I already have a solution which searches on one axis and then changes, which solves the problem, but I am trying to improve and find a better algorithm. And I was trying to approach the problem from a geometrical perspective. Triangulation calculate distance for current point A = dA generate a new point and get dB generate a new point and get dC So I have 3 (or more) distances to a specific point in a plane. Can I calculate the position of the destination point relative to the origin? I think there is only one mathematical solution for 3 circles intersection with center in A,B,C and radius dA, dB, dC, but I would like a confirmation from you if this solution works. AI: Starting at $(0,0)$ let $A$ be the distance to the target. Move right one step. Let $B$ be the distance to the target. Move up one step, let $C$ be the distance to the target. So the target is located at $(x,y)$ with $$\begin{align} x^2 + y^2 &= A^2 \\ (x-1)^2 + y^2&=B^2 \\ (x-1)^2 + (y-1)^2&=C^2 \end{align}$$ Subtracting the first equation from the second and the second equation from the first. $$\begin{array}{rll} A^2-B^2 &= x^2 + y^2 - \left((x-1)^2 + y^2\right) &=2x-1 \\ B^2-C^2 &= (x-1)^2 + y^2 - \left((x-1)^2 + (y-1)^2\right)&=2y-1 \end{array}$$ Therefore $$\begin{align} x &= \frac{1+A^2-B^2}2 \\ y &= \frac{1+B^2-C^2}2 \end{align}$$ Go Robot Go!
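A sketch of that procedure in Python, using the movement and distance primitives named in the question (the function name, the rounding, and the assumption that distanceToDestination() returns the straight-line Euclidean distance are mine):

```python
# assumes the question's primitives right(), up(), distanceToDestination() are available,
# and that distanceToDestination() returns the straight-line (Euclidean) distance
def locate_destination():
    A = distanceToDestination()   # measured at the start cell, call it (0, 0)
    right()
    B = distanceToDestination()   # measured at (1, 0)
    up()
    C = distanceToDestination()   # measured at (1, 1)
    x = (1 + A * A - B * B) / 2   # from A^2 - B^2 = 2x - 1
    y = (1 + B * B - C * C) / 2   # from B^2 - C^2 = 2y - 1
    return round(x), round(y)     # should be integers up to measurement error
```

Once $(x,y)$ is known the robot, now standing at $(1,1)$, simply walks $|x-1|$ cells horizontally and $|y-1|$ cells vertically, so the whole search costs three distance queries plus a direct walk. If the sensor reports grid (Manhattan) distance instead, this particular algebra no longer applies.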
H: How to integrate $\int \frac{1}{\cos(x)}\,\mathrm dx$ could you help me on this integral ? $$\int \frac{1}{\cos(x)}\,\mathrm dx$$ Here's what I've started : $$\int \frac{1}{\cos(x)}\,\mathrm dx = \int \frac{\cos(x)}{\cos(x)^2}\,\mathrm dx = \int \frac{\cos(x)}{1-\sin(x)^2}\,\mathrm dx$$ Now, I did : $u = \sin(x)$, so $\mathrm du = 1$. Now I have : $$\int \frac{\text{???}}{1-u^2}\,\mathrm du$$ But at this point, I think I did the most of the job but I'm stuck. Could you help me to solve this integral please (to the integration by substitution at the end) ? Thanks EDIT : Now I follow the steps and I got : $$\int \frac{1}{1-u^2}\,\mathrm du$$ Doing the partial fraction I got $A = 1/2$ and $B = 1/2$. So basically I have : \begin{align} & \int \frac{1}{1-u^2}\,\mathrm du = \int \frac{1/2}{1+u}\,\mathrm du + \int \frac{1/2}{1-u}\,\mathrm du \\[8pt] = {} & \frac 1 2 \left(\int \frac{1}{1+u} \, du - \int \frac{1}{1-u} \, du\right) \\[8pt] = {} & \frac 1 2 \ln\left(\frac{1+u}{1-u}\right) = \ln\left(\left(\frac{1+\sin(x)}{1-\sin(x)}\right)^{1/2}\right) \\[8pt] = {} & \ln \left(\frac{\sqrt{1+\sin(x)}}{\sqrt{1-\sin(x)}}\right) = \ln\left(\frac{\sqrt{1+\sin(x)}}{\sqrt{\cos(x)^2}}\right) \\[8pt] = {} & \ln\left(\frac{\sqrt{1+\sin(x)}}{\cos(x)}\right) \end{align} Here's what my teacher got : What's wrong with what I did ? Did I miss something ? AI: Alternatively, observe that: $$ \int \dfrac{1}{\cos x} dx = \int \sec x~dx = \int \sec x \left(\dfrac{\sec x + \tan x}{\sec x + \tan x}\right) dx = \int\dfrac{\sec^2 x + \sec x \tan x}{\sec x + \tan x} dx $$ Now let $u=\sec x + \tan x$ so that $du = (\sec x \tan x + \sec^2x)~dx$. Then we obtain: $$ \int\dfrac{(\sec x \tan x + \sec^2 x)~dx}{\sec x + \tan x} = \int \dfrac{du}{u}=\ln|u|+C= \boxed{\ln|\sec x + \tan x|+C} $$
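As a sanity check (a small SymPy sketch, not part of either derivation above), one can confirm symbolically that $\ln|\sec x+\tan x|$ differentiates back to $\sec x$, and that SymPy's own antiderivative of $1/\cos x$ differs from it only by a constant:

```python
import sympy as sp

x = sp.symbols('x')
F = sp.log(sp.sec(x) + sp.tan(x))                 # candidate antiderivative

print(sp.simplify(sp.diff(F, x) - 1/sp.cos(x)))   # expected: 0, i.e. F' = sec x

G = sp.integrate(1/sp.cos(x), x)                  # SymPy's own antiderivative
print(sp.simplify(sp.diff(G - F, x)))             # expected: 0, so G and F differ by a constant
```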
H: Induced map between fundamental groups from covering map is injective Question: Let $f : X \to Y$ be a continuous map and let $x \in X$, $y \in Y$ be such that $f(x) = y$. Then there is an induced map $f_* : \pi_1(X, x) \to \pi_1(Y, y)$ such that $f_*([\gamma]) = [f \circ \gamma]$. If $f$ is a covering map (call it $p$), show that $p_*$ is injective. My Ideas: I need to show for $\gamma, \gamma' \in \pi_1(X, x)$, that if $p \circ \gamma$ and $p \circ \gamma'$ are homotopic, then $\gamma$ and $\gamma'$ are homotopic. I feel like I am supposed to use the path lifting lemma somehow, but I don't see exactly how to do this. $p \circ \gamma$ is a path in $Y$, and so for any point $(p \circ \gamma)(t)$ there is an open neighborhood $U$ such that $p^{-1}(U)$ is a disjoint union of open sets, each mapped homeomorphically by $p$ onto $U$. $\gamma(t) \in p^{-1}(U)$. I was thinking I could somehow use this to construct the homotopy between $\gamma$ and $\gamma'$ but again I am drawing a blank. Can anyone help point me in a direction? Thanks! AI: Instead of working with two loops, take a loop $\gamma$ at $x$ such that $p\gamma \simeq k_y$ (constant loop at $y$) by a homotopy $\gamma_t$ keeping the endpoints fixed. By the homotopy lifting property, there is a homotopy $\gamma'_t:I \to X$ starting at $\gamma$ such that $p\gamma'_t = \gamma_t$. Now prove that $\gamma'_t(0) = \gamma'_t(1) =x$ for all $t$, and that $\gamma'_1=k_x$, the constant loop at $x$. This shows that $\ker p_*$ is trivial, i.e. $p_*$ is injective.
H: Find maximum value of $f(x)=2\cos 2x + 4 \sin x$ where $0 < x <\pi$ Find the maximum value of $f(x)$ where \begin{equation} f(x)=2\cos 2x + 4 \sin x \ \ \text{for} \ \ 0<x<\pi \end{equation} AI: Using $\cos 2x = 1-2\sin^2 x$, we have $f(x)=2-4\sin^2(x)+4\sin(x)$. Let $t=\sin (x)$. Since $0<x<\pi$, we have $0<t\le 1$. This gives a new function $$g(t)=-4t^2+4t+2,$$ a downward-opening parabola whose vertex is at $t=\frac12$, which lies in $(0,1]$. So the maximum of $f(x)$ is $g\!\left(\frac12\right)=3$.
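A quick numerical check of the value $3$ (just a throwaway script): sampling $f$ on a fine grid over $(0,\pi)$ shows the maximum is attained where $\sin x=\frac12$.

```python
import numpy as np

x = np.linspace(1e-6, np.pi - 1e-6, 200_001)
f = 2*np.cos(2*x) + 4*np.sin(x)

i = np.argmax(f)
print(f[i])   # approx 3.0
print(x[i])   # near pi/6 or 5*pi/6, the two points in (0, pi) where sin(x) = 1/2
```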
H: Finding $a$ s.t the cone $\sqrt{x^{2}+y^{2}}=za$ divides the upper half of the unit ball into two parts with the same volume My friend gave me the following question: For which value of the parameter $a$ does the cone $\sqrt{x^{2}+y^{2}}=za$ divide $$\{(x,y,z):\, x^{2}+y^{2}+z^{2}\leq1,z\geq0\}$$ into two parts with the same volume? I am having some difficulties with the question. What I did: First, a ball with radius $R$ has volume $\frac{4\pi}{3}R^{3}$, hence the volume of the upper half of the unit ball is $\frac{2\pi}{3}$, so each of the two parts must have volume $\frac{\pi}{3}$. Secondly, I found where the cone intersects the boundary of the ball: $$ \sqrt{x^{2}+y^{2}}=za $$ hence $$ z=\sqrt{\frac{x^{2}+y^{2}}{a^{2}}} $$ and $$ x^{2}+y^{2}+z^{2}=1 $$ substituting for $z$ we get $$ x^{2}(\frac{a^{2}+1}{a^{2}})+y^{2}(\frac{a^{2}+1}{a^{2}})=1 $$ hence $$ x^{2}+y^{2}=\frac{a^{2}}{a^{2}+1} $$ and using the equation for the cone we get $$ z=\frac{1}{\sqrt{a^{2}+1}} $$ I then took (and I am unsure about the boundaries) $0<z<\frac{1}{\sqrt{a^{2}+1}},0<r<az$ and, using the coordinates $x=r\cos(\theta),y=r\sin(\theta),z=z$, I got that the volume that the cone encloses in the ball is $$ \int_{0}^{\frac{1}{\sqrt{a^{2}+1}}}dz\int_{0}^{az}dr\int_{0}^{2\pi} r d\theta $$ which evaluates to $$ \frac{\pi a^{2}}{3(a^{2}+1)^{\frac{3}{2}}} $$ I then required this to be equal to $\frac{\pi}{3}$, the volume each part must have: $$ \frac{\pi a^{2}}{3(a^{2}+1)^{\frac{3}{2}}}=\frac{\pi}{3} $$ and got $$ a^{2}=(a^{2}+1)^{\frac{3}{2}} $$ which has no real solution, according to WA. Can someone please help me understand where I am wrong and how to solve this question? AI: Check your bounds again. I believe they should be $$\begin{align}0<&z<\frac{1}{\sqrt{a^2+1}}\\az<&r<\sqrt{1-z^2}\end{align}$$ These bounds describe the part of the half-ball lying outside the cone; your bounds $0<r<az$ capture only the part of the cone below the intersection height and miss the spherical cap above it. Setting the volume over these bounds equal to $\frac{\pi}{3}$ and finishing the integral should yield $a=\sqrt3$.
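A Monte Carlo sanity check of $a=\sqrt3$ (a rough sketch, assuming only the geometry described above): sample points uniformly in the upper half of the unit ball and check that about half of them satisfy $\sqrt{x^2+y^2}\le \sqrt3\, z$, i.e. lie inside the cone.

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.sqrt(3)

# Uniform points in the upper half of the unit ball via rejection sampling.
pts = rng.uniform(-1, 1, size=(2_000_000, 3))
pts = pts[(pts**2).sum(axis=1) <= 1]
pts = pts[pts[:, 2] >= 0]

inside_cone = np.hypot(pts[:, 0], pts[:, 1]) <= a * pts[:, 2]
print(inside_cone.mean())   # approx 0.5, so the cone with a = sqrt(3) halves the half-ball
```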
H: Isometry and equivalence Let $(X,d_1)$ and $(X,d_2)$ be two metric spaces on the same set $X$. Is there any relation between $d_1$ and $d_2$ being equivalent and $(X,d_1)$ and $(X,d_2)$ being isometric? If not, can anyone give examples where $d_1$ and $d_2$ are equivalent but $(X,d_1)$ and $(X,d_2)$ are not isometric; and where $d_1$ and $d_2$ are not equivalent, but $(X,d_1)$ and $(X,d_2)$ are isometric? AI: If I understand your question correctly, there is no relation between being equivalent and being isometric. To see that "equivalent" does not imply "isometric", take $X=\mathbb R$ with $d_1(x,y)=\vert x-y\vert$ and $d_2(x,y)=\frac{\vert x-y\vert}{1+\vert x-y\vert}$. Then $d_1$ and $d_2$ are equivalent but $(\mathbb R,d_1)$ cannot be isometric with $(\mathbb R, d_2)$ because $d_2$ is bounded and $d_1$ is not. To see that "isometric" does not imply "equivalent", take again $X=\mathbb R$. Let $\phi$ be any discontinuous bijection from $\mathbb R$ onto $\mathbb R$ (where $\mathbb R$ has the usual topology). Define $d_1(x,y)=\vert x-y\vert$ and $d_2(x,y)=\vert \phi(x)-\phi(y)\vert$. Then $(\mathbb R, d_1)$ and $(\mathbb R, d_2)$ are isometric by definition, but $d_1$ and $d_2$ are not equivalent.
H: Eigenvalue calculation. I am getting confused by this simple eigenvalue calculation. Calculate the eigenvalues of $\begin{bmatrix} 5 & -2\\ 1 & 2\end{bmatrix}$. Firstly, I row reduce it, to go from $\begin{bmatrix} 5 & -2\\ 1 & 2\end{bmatrix} \to \begin{bmatrix} 6 & 0\\ 1 & 2\end{bmatrix}$ by performing $R_1 \to R_1 + R_2$. The resulting equation is $(6 - \lambda)(2 - \lambda) = 0$ so the eigenvalues should be $6$ and $2$, but I check on wolfram alpha and it says they are $4$ and $3$. Can someone please explain? AI: An eigenvalue of the $n\times n$ matrix $A$ is a number $\lambda$ such that there is a nonzero vector $v$ with $$ Av=\lambda v. $$ Depending on the context you may be looking for $\lambda$ in the real or complex numbers. In any case, the equation is equivalent to $$ (A-\lambda I)v=0 $$ where $I$ is the $n\times n$ identity matrix. This has a nonzero solution if and only if the rank of $A-\lambda I$ is less than $n$ or, equivalently, $$ \det(A-\lambda I)=0. $$ It turns out that the expression $\det(A-\lambda I)$ is a polynomial in $\lambda$ of degree exactly $n$, called the characteristic polynomial of $A$. Its roots are precisely the eigenvalues of $A$. It is not possible to use row-reduction for finding eigenvalues, because elementary row operations do not in general preserve them. Computing the roots of the characteristic polynomial is not the only way: in some special cases other methods are available. Nevertheless, this method is always available. In your case $$ A-\lambda I= \begin{bmatrix} 5 & -2 \\ 1 & 2 \end{bmatrix} - \lambda \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} $$ so the characteristic polynomial is $$ \det(A-\lambda I)=\det \begin{bmatrix} 5-\lambda & -2 \\ 1 & 2 - \lambda \end{bmatrix} =(5-\lambda)(2-\lambda)+2=\lambda^2-7\lambda+12 $$ and the roots are easily computed to be $3$ and $4$.
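A quick numerical confirmation (just a sanity check, not a substitute for the computation above): NumPy's eigenvalue routine and the roots of $\lambda^2-7\lambda+12$ both give $3$ and $4$.

```python
import numpy as np

A = np.array([[5, -2],
              [1,  2]])

print(np.sort(np.linalg.eigvals(A)))   # eigenvalues 3 and 4
print(np.roots([1, -7, 12]))           # roots of lambda^2 - 7*lambda + 12: also 4 and 3
```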
H: Concerning the proof of the Cantor–Bernstein theorem I've seen two proofs for the Cantor–Bernstein theorem which says that for two sets $X$ and $Y$ if $\#X \le \#Y$ and $\#Y \le \#X$ then $\#X=\#Y$, equivalently if we can find an injection from $X$ to $Y$ and $Y$ to $X$ then we can find a bijection between the two sets. One proof says that one can assume WLOG $Y \subset X$ another says WLOG $X \cap Y =\emptyset$ . What I don't get is that if we say $Y \subset X$ , we can find a set such that $Y \not\subset X$ and $X \not\subset Y$. How is generality not lost? AI: If $f:Y\to X$ is an injection, we can identify $Y$ with $f[Y]$, replacing each $y\in Y$ with its image $f(y)\in X$. The injection $g:X\to Y$ then has to be replaced with the injection $f\circ g:X\to f[Y]$. If we let $Y'=f[Y]$ and $g'=f\circ g$, we now have $Y'\subseteq X$ and an injection $g':X\to Y'$. If we can find a bijection $h:X\to Y'$, $f^{-1}\circ h$ will be the desired bijection from $X$ to $Y$. Thus, there’s no harm in assuming from start that $Y\subseteq X$: if it isn’t, we work with $Y'$ and then use $f^{-1}$ to transfer the resulting bijection into the one that we really wanted. Similarly, if $X\cap Y\ne\varnothing$, let $X'=X\times\{0\}$ and $Y'=Y\times\{1\}$. Then $X'$ and $Y'$ are disjoint, and there are obvious bijections $\varphi:X\to X':x\mapsto\langle x,X\rangle$ and $\psi:Y\to Y':y\mapsto\langle y,1\rangle$. If $f:Y\to X$ and $g:X\to Y$ are the original injections, we can replace them by the injections $$f':Y'\to X':\langle y,1\rangle\mapsto\langle f(y),0\rangle$$ and $$g':X'\to Y':\langle x,0\rangle\mapsto\langle g(x),1\rangle$$ and work with those instead. In both cases the point is that we can replace $Y$ with any other set $Y'$ such that there is a bijection between $Y$ and $Y'$, and similarly for $X$: we just adjust the injections correspondingly.
H: Map from a manifold to $[0,1]$ I am looking through a practice exam to prepare for an upcoming final and I am having trouble with this question. Question: Let $M$ be a manifold, $p \in M$, and $U \subset M$ an open set containing $p$. Show there is a continuous function $f : M \to [0, 1]$ such that $f(p) = 1$ and $\overline{f^{-1}((0, 1])}$ is a subset of $U$. The bar is the closure of the set. My Ideas: Around $p$ there is an open set $V$ which is homeomorphic to the unit ball in $\mathbb{R}^n$ for some $n$. Let $g : V \to B_1(0)$ witness this. I am not sure if there is a way to extend $g$ so that its domain is all of $M$. But even if I did get this I am not sure how to make $f(p) = 1$. If I am not mistaken, once I can map to the ball I can project onto $[0, 1]$ to get a continuous map there. Thanks for any help. AI: You can use Urysohn's lemma to get such a continuous function. If you want your function to be smooth as well, then look up mollifiers and/or partitions of unity.
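For a concrete picture of what such an $f$ can look like inside a chart: a standard smooth bump is $f(x)=e^{\,1-1/(1-\|x\|^2)}$ for $\|x\|<1$ and $f(x)=0$ otherwise; it equals $1$ at the origin, takes values in $[0,1]$, and vanishes outside the unit ball. Identifying a small chart ball around $p$ (shrunk so that its closure lies in $U$) with the unit ball and extending by $0$ gives the required function. A small numerical sketch of this bump (my own illustration, with the chart identification assumed):

```python
import numpy as np

def bump(x):
    """Standard smooth bump on R^n: 1 at the origin, values in [0, 1], zero for |x| >= 1."""
    x = np.asarray(x, dtype=float)
    r2 = float(np.dot(x, x))
    if r2 >= 1.0:
        return 0.0
    return float(np.exp(1.0 - 1.0 / (1.0 - r2)))

print(bump([0.0, 0.0, 0.0]))   # 1.0: the value at the centre of the chart, i.e. at p
print(bump([0.9, 0.0, 0.0]))   # small positive value
print(bump([1.2, 0.0, 0.0]))   # 0.0: outside the chart ball, hence outside U
```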
H: Question about primary decompositions. I am reading the book Introduction to commutative algebra by Atiyah and Macdonald. On page 50, Line -7, it is said that "if $f: A \to B$ and $\mathfrak{q}$ is a primary ideal in $B$, then $A/\mathfrak{q}^c$ is isomorphic to a subring of $B/\mathfrak{q}$". How to prove this result? Thank you very much. AI: $\def\p{\mathfrak p}\def\q{\mathfrak q}$Let $\pi \colon B \to B/\q$ denote the projection, consider $\pi f \colon A \to B/\q$. $\pi f$ is a homomorphism with kernel $f^{-1}(\q) = \q^c$. So we get a monomorphism $g \colon A/\q^c \to B/\q$ by $g(a + \q^c) = f(a) + \q$. As $g$ is one-to-one, we may regard $A/\q^c$ as a subring of $B/\q$.
H: What is the upper limit of a binomial expansion with fractional power? It's known that a binomial expansion can be written as a sum, $\displaystyle (a+b)^n=\binom{n}{0}a^n+ \binom{n}{1} a^{n-1}b+\binom{n}{2}a^{n-2}b^2+.....$ If the power, $n$, is a natural number, the last term of this sum will be $\displaystyle \binom{n}{n}b^n$, and so $$ \displaystyle (a+b)^n=\sum_{k=0}^{n}\binom{n}{k}a^{n-k}\cdot b^k$$ But what if the power is a positive rational number? For instance, $(1+x)^{\frac{1}{3}}=1+ \frac{1}{3}x+\frac{1}{3}(-\frac{2}{3})\frac{x^2}{2!}+\frac{1}{3}(-\frac{2}{3})(-\frac{5}{3})\frac{x^3}{3!}+......$ How far can I go with this sum? AI: Firstly, keep in mind that fractional powers can only be expanded this way when $|x|<1$. In that case $x^k\to 0$ as $k\to \infty$, so the terms become smaller and smaller the further you go. When the exponent is not a natural number the sum never terminates (there is no last term), but because the terms become so small, stopping after a certain stage produces no noticeable change in the value of the expression.
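A small Python sketch of what "stopping after a certain stage" means in practice: compute the partial sums of the series for $(1+x)^{1/3}$ using the recursion $\binom{n}{k+1}=\binom{n}{k}\frac{n-k}{k+1}$ for the generalized binomial coefficients, and compare them with the true value.

```python
def binomial_series(n, x, terms):
    """Partial sum of the binomial series for (1 + x)**n with |x| < 1 (n need not be an integer)."""
    total, coeff = 0.0, 1.0              # coeff starts at C(n, 0) = 1
    for k in range(terms):
        total += coeff * x**k
        coeff *= (n - k) / (k + 1)       # C(n, k+1) = C(n, k) * (n - k) / (k + 1)
    return total

x = 0.5
print(binomial_series(1/3, x, 5))    # already close after a few terms
print(binomial_series(1/3, x, 30))   # very close
print((1 + x) ** (1/3))              # the value the partial sums approach
```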
H: Random walk as a martingale? Let $S_0$, $Z_1$, $Z_2$, $\ldots$ be independent random variables. $S_n=S_0+Z_1+\cdots+Z_n$, $n=0,1,2,\ldots$ $S_n$ is a random walk starting in a random point, $S_0$. I need to find out when it is a martingale. Is it enough to show that $E(S_{n+1}|\mathcal F_n)=S_n$? Does "find out when" also concern the filtration? AI: The definition of a martingale has three items. A sequence $(X_n)_{n\geq 1}$ is said to be a martingale with respect to a filtration $(\mathcal{F}_n)_{n\geq 1}$ if $(X_n)_{n\geq 1}$ is adapted to $(\mathcal{F}_n)_{n\geq 1}$, i.e. $X_n$ is $\mathcal{F}_n$-measurable for each $n$, $X_n$ is integrable, i.e. ${\rm E}[|X_n|]<\infty$ for each $n$, and $X_n$ satisfies the martingale condition, i.e. ${\rm E}[X_{n+1}\mid\mathcal{F}_n]=X_n$ for each $n$. So in order for you to answer the question of when $(S_n)_{n\geq 1}$ is a martingale you first need to address the first two bullets. Let us therefore assume that all variables are integrable, and that the filtration we are working with is indeed the natural filtration, i.e. $$ \mathcal{F}_n=\sigma(S_0,Z_1,\ldots,Z_n),\quad n\geq 1. $$ Then you're correct that you just have to show when ${\rm E}[S_{n+1}\mid\mathcal{F}_n]=S_n$ for all $n$. You correctly calculated that $$ {\rm E}[S_{n+1}\mid\mathcal{F}_n]={\rm E}[S_n\mid\mathcal{F}_n]+{\rm E} [Z_{n+1}\mid\mathcal{F}_n]=S_n+{\rm E}[Z_{n+1}\mid\mathcal{F}_n] $$ and hence ${\rm E}[S_{n+1}\mid\mathcal{F}_n]=S_n$ if and only if ${\rm E}[Z_{n+1}\mid\mathcal{F}_n]=0$. All that is left is to recognize ${\rm E}[Z_{n+1}\mid\mathcal{F}_n]$ as ${\rm E}[Z_{n+1}]$ due to the independence between $Z_{n+1}$ and $\mathcal{F}_n$.
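A simulation cannot verify the conditional-expectation condition itself, but it can illustrate its best-known consequence: when ${\rm E}[Z_k]=0$ the mean ${\rm E}[S_n]$ stays equal to ${\rm E}[S_0]$ for every $n$, while a nonzero step mean makes it drift. A rough sketch (the step distributions below are my own arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
paths, steps = 200_000, 8

S0 = rng.uniform(-1.0, 1.0, size=paths)            # random starting point, E[S_0] = 0
Z = rng.choice([-1.0, 1.0], size=(paths, steps))   # centred steps: E[Z_k] = 0
S = S0[:, None] + np.cumsum(Z, axis=1)
print(np.round(S.mean(axis=0), 3))     # every column approx 0: the mean is preserved

Zb = Z + 0.2                                       # biased steps: E[Z_k] = 0.2
Sb = S0[:, None] + np.cumsum(Zb, axis=1)
print(np.round(Sb.mean(axis=0), 3))    # drifts by about 0.2 per step: not a martingale
```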
H: Equivalence relation - Proof question Prove that the relation, two finite sets are equivalent if there is a one-to-one correspondence between them, is an equivalence relation on the collection $S$ of all finite sets. I'm sure I know the gist of how to do it, but I'm a beginner in proofs, and I'm not sure if I've written it down correctly. I absolutely encourage nitpicking in the following proof, as I wish to learn how proofs are correctly written. Thanks! Proof Let $S$ be the class of all finite sets. Let $A,B$ and $C$ be three finite sets. Reflexive property Now, $n(A)=n(A)$, and hence there exists a one-to-one correspondence between $A$ and $A$ Therefore, $A≈A$ ------------------$(1)$ Symmetric property Let $A≈B$ $⇒n(A)=n(B)$ $⇒n(B)=n(A)$, and hence there exists a one-to-one correspondence between $B$ and $A$ $⇒B≈A$ Therefore, $A≈B⇒B≈A$----------------$(2)$ Transitive property Let $A≈B$ $⇒n(A)=n(B)$---------------------$(3)$ Also, let $B≈C$ $⇒n(B)=n(C)$---------------------$(4)$ From $(3)$ and $(4)$, $n(A)=n(C)$ $⇒A≈C$ Therefore, $A≈B$ and $B≈C⇒A≈C$--------(5) From $(1), (2)$ and $(5)$, it is clear that the relation, two finite sets are equivalent if there is a one-to-one correspondence between them, is an equivalence relation. Q.E.D AI: Your argument for reflexivity is circular, I’m afraid. When you write that $n(A)=n(A)$, you’re already assuming that $A\approx A$: both statements mean that there is a one-to-one correspondence between $A$ and $A$. To prove them, you must demonstrate that there really is such a correspondence. Fortunately, this isn’t at all difficult: we just use the identity map $$\varphi:A\to A:a\mapsto a\;.$$ You make a similar mistake in your argument for symmetry: when you say that $n(A)=n(B)$ implies that $n(B)=n(A)$, you’re assuming the conclusion that you’re trying to reach. All that you’re actually given is that $A\approx B$. This means that there is a bijection $\varphi:A\to B$. You want to show from this that there is also a bijection from $B$ to $A$. Since $\varphi$ is a bijection (one-to-one correspondence), it has an inverse, $\varphi^{-1}$, that is also a bijection, and it goes from $B$ to $A$. That is, $\varphi^{-1}:B\to A$ is a bijection, and therefore $B\approx A$, as desired. And you’ve done it again in the argument for transitivity, but this time I’ll just get you started on the right track. Your hypothesis is that $A\approx B$ and $B\approx C$. This means that you have bijections $\varphi:A\to B$ and $\psi:B\to C$. How can you combine $\varphi$ and $\psi$ to show that there is a bijection from $A$ to $C$, thereby showing that $A\approx C$ and hence that $\approx$ is transitive?
H: Question about zero-divisors and a quotient of a polynomial ring by an ideal in the book Introduction to commutative algebra by Atiyah and Macdonald. I am reading the book Introduction to commutative algebra by Atiyah and Macdonald. I have two questions about page 51. On Line 5 of Page 51, it is said that the zero-divisors in $A/\mathfrak{q} \cong k[y]/(y^2)$ are all the multiples of $y$. I think that $A/\mathfrak{q} \cong k[y]/(y^2)=\{a+by : a, b \in k\}$. Let $a \neq 0$ and $a+by \in A/\mathfrak{q}$. We have $y (a+by)=ay+by^2 = ay \neq 0$. So $y$ is not a zero-divisor of $A/ \mathfrak{q}$. But it is said that $y$ is a zero-divisor of $A/\mathfrak{q}$. I am confused. On Line 12 of Page 51, it is said that $A/\mathfrak{p} \cong k[y]$. Here $A=k[x, y, z]/(xy-z^2)$ and $\mathfrak{p}=(\bar{x}, \bar{z})$, $\bar{x}=x+(xy-z^2), \bar{z}=z+(xy-z^2)$. I think that $A/\mathfrak{p} = k[y]/(xy-z^2)$. Why is $A/\mathfrak{p} \cong k[y]$? Thank you very much. AI: Your reasoning is wrong: put $a = 0$ and $b = 1$ to get $y \cdot y = y^2 = 0$, and so $y$ is a zero divisor of $k[y]/(y^2)$. For your second question, the quotient by $\mathfrak{p}$ is isomorphic to $$k[x,y,z]/(xy - z^2, x,z) \cong k[x,y,z]/(xy,x,z) \cong k[y].$$
H: Find the range of $ f(x)=9^x - 3^x+1$ Problem: Find the range of the function $ f(x)=9^x - 3^x+1$, where the domain of $f$ is $\mathbb R$. Solution: $ f(x)=9^x - 3^x+1$. Let $f(x)=y$. Then $$ \begin{split}y&=9^x - 3^x+1\\ y&=3^{2x} - 3^x+1 \end{split}$$ Let $3^x= u$. Then $ y=u^2 - u+1$, so $$ u^2 - u+1-y=0.$$ Am I on the right track? AI: Looks good. To finish it off, we complete the square and notice that: $$ y = u^2 - u + 1 = \left(u^2-u+\dfrac{1}{4}\right)+\dfrac{3}{4} = \left(u-\dfrac{1}{2}\right)^2+\dfrac{3}{4} \ge 0+\dfrac{3}{4}=\dfrac{3}{4} $$ since the square of any real number must be nonnegative. Conversely, every $y \ge \dfrac{3}{4}$ is attained: solving $\left(u-\dfrac{1}{2}\right)^2 = y-\dfrac{3}{4}$ gives $u = \dfrac{1}{2}+\sqrt{y-\dfrac{3}{4}} > 0$, which is a legitimate value of $u = 3^x$. Hence, the range is: $$ \{y\in \mathbb{R} \mid y \ge3/4\} $$
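A quick numerical check (just a throwaway script): the minimum of $9^x-3^x+1$ over a fine grid is about $0.75$, attained near $x=-\log_3 2$ (where $3^x=\frac12$), consistent with the range $[\,3/4,\infty)$.

```python
import numpy as np

x = np.linspace(-10, 10, 2_000_001)
y = 9.0**x - 3.0**x + 1

print(y.min())                         # approx 0.75
x_star = -np.log(2) / np.log(3)        # the point where 3^x = 1/2
print(x_star, 9.0**x_star - 3.0**x_star + 1)   # approx -0.6309, and the value 0.75
```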
H: finding probability function, bills and dice There are 2 white bills and 4 green bills. We throw a die: if it gives 6, we take two bills; if it gives 1/2/3/4/5, we take one bill. The variable $X$ is the number of white bills taken out. I need to find the probability function for $X$ and draw the cumulative distribution function. How can I do this? AI: I suppose we draw the bills at random from the 6 bills you describe. $P(X = 2) = \frac{1}{6} \cdot \frac{\binom{2}{2}}{\binom{6}{2}}$, because in order to get two white bills, we need to throw a 6 and then draw the 2 white bills out of the 6 we have, and we multiply these probabilities, as the throw and the draw are independent events. $P(X = 1) = \frac{1}{6}\cdot P(X = 1 | \mbox{6 thrown}) + \frac{5}{6} \cdot P(X = 1| \mbox{1,2,3,4 or 5 thrown})$ by the law of total probability. $P(X = 1 | \mbox{6 thrown}) = \frac{\binom{2}{1}\binom{4}{1}}{\binom{6}{2}}$, because we select 2 bills and exactly one of them is white, the other green. $P(X = 1 | \mbox{1,2,3,4 or 5 thrown}) = \frac{2}{6}$, as we draw only one bill and it has to be white. $P(X = 0) = 1 - P(X=2) - P(X= 1)$, as we only have 3 possible values.
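To double-check the numbers (and to have the jump sizes for the CDF), here is a short exact enumeration in Python using fractions; it reproduces $P(X=0)=\frac{28}{45}$, $P(X=1)=\frac{11}{30}$ and $P(X=2)=\frac{1}{90}$, so the CDF is a step function jumping by these amounts at $0$, $1$ and $2$.

```python
from fractions import Fraction
from itertools import combinations

bills = ['W', 'W', 'G', 'G', 'G', 'G']          # 2 white, 4 green
pmf = {0: Fraction(0), 1: Fraction(0), 2: Fraction(0)}

# Die shows 1-5 (probability 5/6): draw one bill uniformly at random.
for b in bills:
    pmf[int(b == 'W')] += Fraction(5, 6) * Fraction(1, len(bills))

# Die shows 6 (probability 1/6): draw two bills uniformly at random.
pairs = list(combinations(range(len(bills)), 2))
for i, j in pairs:
    whites = (bills[i] == 'W') + (bills[j] == 'W')
    pmf[whites] += Fraction(1, 6) * Fraction(1, len(pairs))

print(pmf)                 # {0: Fraction(28, 45), 1: Fraction(11, 30), 2: Fraction(1, 90)}
print(sum(pmf.values()))   # 1
```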