H: Domain and range of a multiple non-connected lines from a function? How do you find the domain and range of a function that has multiple non-connected lines? Such as, $ f(x)=\sqrt{x^2-1}$. Its graph looks like this: I'm wondering how you would write this with a set, e.g.: $(-\infty, \infty)$. P.S. help me out with the title. Not sure how to describe this. AI: The domain of your function includes all values of $x$ for which the function is defined: $$f(x)=\sqrt{x^2-1}$$ is defined if and only if $\quad x^2 - 1 = (x+1)(x-1) \geq 0$. So we need to rule out all values for which $\;x^2 -1 < 0$, which happens when and only when $|x| < 1$, that is, for any $x$ in the interval $ (-1, 1)$. So we need to exclude the open interval $(-1, 1)$ from the domain of $f$, giving us as the domain of $f$ all of $\mathbb R\setminus (-1,1)$, where $f$ is defined. So that gives us that the domain of $f$ is equal to $(-\infty, -1] \cup [1, +\infty)$. As you can see by your graph, the range of $f(x)$ includes all values $y = \sqrt{x^2 - 1} \geq 0$. Hence the range of the function lies within the interval $[0, +\infty)$. Your graph is "off" a bit, on the $x$-intercepts: the two curves you see should intersect the x-axis at the points $(-1, 0)$ and $(1, 0)$. Here's a zoomed-in graph of the function near the origin: $f(x) = \sqrt{x^2 - 1}$ And here is the same function zoomed "out"
H: Let $A$ and $B$ be $n \times n$ complex matrices. Then Let $A$ and $B$ be $n \times n$ complex matrices. Then a) $\lim_{k→∞} A^k =0 ⇔$ all eigenvalues of $A$ have absolute value less than $1$ b) $e^A\cdot e^B=e^{(A+B)}$ c) If $A$ and $B$ are nilpotent, then so is $A + B$. d) $I + A + A^2 + A^3 + \dots$ converges ⇒ $A$ is nilpotent. I have no idea other than $b$, which I know is true for all matrices of the same order. $c$ is false for sure! AI: (a) $\Rightarrow$ is easy. Let $\lambda$ be an eigenvalue and $x$ a corresponding eigenvector. Then $\lambda^kx=A^kx \to 0$. Since $x$ is not the zero vector you get $\lambda^k \to 0$. $\Leftarrow$ is easy to prove for Jordan forms. (c) $$A=\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \, B=\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$$ (d) Hint: Consider $A$ a diagonal matrix. When is $I+A+A^2+\dots$ convergent?
H: Sequence $f_n$ of continuous functions on $[0,1]$, such that $0 \le f_n \le 1$ and $\lim\limits_{n\to\infty}\int_{0}^{1} f_n(x)\,\mathrm dx=0$ Let $f_n$ be a sequence of continuous functions on $[0,1]$, such that $0 \le f_n \le 1$ and $$\lim_{n\to\infty}\int_{0}^{1} f_n(x)\,\mathrm dx=0.$$ a) The sequence ${f_n (x)}$ converges uniformly. b) The sequence ${f_n (x)}$ converges for some $x$. c) The sequence ${f_n (x)}$ might not converge for any $x$. d) The sequence ${f_n (x)}$ converges for every $x$ but not necessarily uniformly. The answer is c? I have no counter example in my hand. AI: Consider the following sequence of sets, constructed as follows. Let $I_1=[0,1]$. Split $[0,1]$ in half, and let $I_2$ and $I_3$ be the first and second halves. Split $[0,1]$ in fourths, and let $I_4,I_5,I_6,I_7$ be the fourths in order. Continue this way, split $[0,1]$ in $2^n$ths and let $I_{2^n},\cdots, I_{2^{n+1}-1}$ be the $2^n$ pieces in order. Now let $$f_n(x)=\text{ an isosceles triangle over } I_n \text{ of height } 1$$ Then $$\int_0^1 f_n\to 0$$ but if $x\neq m2^{-k}$ then $$\tag 1 \limsup_{n\to\infty} f_n(x)=1 \; ; \;\liminf_{n\to\infty} f_n(x)=0$$ ADD To fix the above, we can do the following: let $\langle g_n\rangle $ be the same as $\langle f_n\rangle $, but with the triangles shifted so that the peaks of the triangles fall on the binary rational points. Thus, $g_1$ will look like (the right) half an isosceles triangle on $[0,1/2]$ plus (the left) half of an isosceles triangle on $[3/4,1]$, $g_2$ will be an isosceles triangle over $[1/2,3/4]$, $g_3$ will be the two halves at $[0,1/4]$,$[7/8,1]$, and so on. Then $\langle g_1,f_1,g_2,f_2,g_3,f_3,\ldots\rangle$ should do: now the rational binary points oscillate between $0$ and $1$.
H: Mean-value for three functions If $f,g,h$ are continuous functions on $[a,b]$ which are differentiable on $(a,b)$, then prove that there exists $c\in(a,b)$ such that $$f(a)[g(b)h'(c)-h(b)g'(c)]+h(a)[f(b)g'(c)-g(b)f'(c)]=g(a)[f(b)h'(c)-h(b)f'(c)]$$ This looks very much like (generalized) mean-value theorem, so I'm tempted to write $$i(x)=[h(a)g(b)-g(a)h(b)]f(x)+[f(a)h(b)-h(a)f(b)]g(x)+[g(a)f(b)-f(a)g(b)]h(x)$$ If there exists $c$ such that $i'(c)=0$, we would be done. But here we don't have $i(a)=i(b)$, so I don't know how to reach that conclusion. AI: Hint: Define the determinant: $$ D(x)=\left| \begin{array}{ccc} f(x) & g(x) & h(x) \\ f(a) & g(a) & h(a) \\ f(b) & g(b) & h(b) \\ \end{array}\right| $$ Note that $D(a)=D(b)=0$, since there will be identical rows. Notice that: $$ D'(x)=\left| \begin{array}{ccc} f'(x) & g'(x) & h'(x) \\ f(a) & g(a) & h(a) \\ f(b) & g(b) & h(b) \\ \end{array}\right| $$ and consider using cofactor expansion along the second row. Note: Your argument is essentially the same; in fact you do have $i(a)=i(b)$.
H: If $G=(V,E)$ is a planar graph, all vertex degrees are $3$, all faces are of five/six edges, how many five-edged faces are there? Given a graph $G = (V,E)$, a planar graph where every vertex has degree $3$ and all faces are five-edged or six-edged. How many five-edged faces are there? It was a question in one of our previous exams, the answer is $12$, but I have no clue on how to solve it. I mean, I can have as many faces as I want, why is it limited to $12$ and $12$ only? AI: Since the graph is planar $$V-E+F=2 \,.$$ As each vertex has degree $3$, by Handshaking lemma we have $2E=3V$. Let $f_1$ be the number of $5$ edged faces and $f_2$ be the number of $6$ edges faces. Then $$2E=5f_1+6f_2 \,.$$ $$f_1+f_2=F \,.$$ From here we get $$2E=6F-f_1 \,.$$ Thus, plugging everything in the first relation, you get: $$\frac{2E}{3}-E+\frac{2E+f_1}{6}=2 \,.$$ Since all $E$'s cancel you are done.
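The cancellation in the answer above can be checked symbolically; the following is a small verification sketch (sympy is an assumed convenience, not something the answer uses), substituting $V=2E/3$ and $F=(2E+f_1)/6$ into Euler's formula and solving for $f_1$.

```python
# Verification sketch: substitute V = 2E/3 and F = (2E + f1)/6 into Euler's
# formula V - E + F = 2 and solve for f1; the E terms cancel as in the answer.
import sympy as sp

E, f1 = sp.symbols('E f1', positive=True)
euler = sp.Eq(sp.Rational(2, 3) * E - E + (2 * E + f1) / 6, 2)
print(sp.solve(euler, f1))  # expected [12], independent of E
```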
H: multiple choice question on topology of matrices Which of the following are true? a) The set of symmetric positive definite matrices is connected. b) The closed unit ball centred at $0$ and of radius $1$ in $l_1$ is compact. c) The set of invertible matrices on $\Bbb R$ forms a set of measure $0$. d) The set of symmetric matrices in $M_n (\Bbb R)$ is dense. I am clueless!! Thanks in advance, all. AI: a) True: they are path connected; for symmetric positive definite $A$ and $B$, $x^T[tA+ (1-t)B]x>0$, so $tA+(1-t)B$ is again positive definite symmetric. b) False, as $l_1$ is not a finite-dimensional normed linear space. c) False, as the set of invertible matrices on $\Bbb R$ is dense and open, so it must contain an open ball, which has positive measure (intuitively). d) False: they form a closed proper subset of $M_n(\mathbb{R})$, since they are the inverse image of the $0$ matrix under the continuous map $f:M_n(\mathbb{R})\to M_n(\mathbb{R})$, $f(X)=X-X^T$.
H: number of solutions of polynomial with parameter Given the polynomial $P(x) = x^5 - 20x^2 + a$ with $a\in\mathbb{R}$, for which values of $a$ does this polynomial have only one solution? From the fact that $\lim_{x\to+\infty}{P(x)} = +\infty$ and $\lim_{x\to-\infty}{P(x)} = -\infty$ there is $a_0$ such that $P(a_0)<0$ and $b_0$ such that $P(b_0) > 0$, so from the intermediate value theorem there is at least one solution between $a_0$ and $b_0$. Now I need to prove that there is at most one solution. I'm not really sure how to continue from here. I tried to calculate the derivative of $P(x)$ and I got that $P'(x) = 0$ when $x = 0$ or $x = 2$, and it seems like when $P(2) > 0$, in other words $a > 48$, the polynomial has only one solution, but I have no idea why this is correct or how to explain it. I hope someone can explain this and what the correct way to solve this kind of problem is. Thanks in advance. edit: by "solution" I mean root of the polynomial. AI: You're right in looking at the derivative: $$ P'(x)=5x^4-40x=5x(x^3-8)=5x(x-2)(x^2+2x+4) $$ Therefore $P(x)$ has a local maximum at $0$ and a local minimum at $2$. Moreover $\lim_{x\to-\infty}P(x)=-\infty$ and $\lim_{x\to\infty}P(x)=\infty$. Because of this we know that $P(0)>P(2)$. Thus three cases can happen: $P(0)<0$ (and $P(2)<0$): $P(x)$ assumes the value $0$ only once, for some $x>2$. $P(2)>0$ (and $P(0)>0$): $P(x)$ assumes the value $0$ only once, for some $x<0$. $P(0)>0$ and $P(2)<0$: $P$ has three roots, one less than $0$, one between $0$ and $2$, one greater than $2$. So, in order to find when $P$ has only one root you have to see when $P(0)<0$ or $P(2)>0$. Since $P(0)=a$ and $P(2)=a-48$, you can conclude that $P$ has only one root if and only if $a<0$ or $a>48$.
H: Average number of $U[0,1]$ random numbers to reach $x$ Let $P(x)$ be a random process where you keep selecting random numbers, uniformly distributed between 0 and 1, until the sum reaches $x$. From memory, the expected value of the number of terms needed by $P(x)$ is $e^x Q(x)$, where $Q(x)$ is a piecewise polynomial whose degree increases at each integer value of $x$. If I remember, it was $$Q(x) = \sum_{k=0}^{\lfloor x\rfloor} (-1)^k\frac{(x-k)^k}{(k+1)!}$$ My question is: This formula numerically gives $(2/3)+2x$ plus exponentially decaying but oscillating terms. How do you go from $e^x Q(x)$ to $2/3 + 2x$ plus exponentially decaying terms? AI: Short version: do not use the formula for $Q(x)$. Long version: let $U$ denote the first random number, then, conditionally on $U$, the mean number of draws needed to overcome $x$ is $1$ if $x\leqslant U$ and distributed like $1+P(x-U)$ otherwise. Thus, $$ P(x)=1+E(P((x-U)^+))=1+\int_{(x-1)^+}^xP(t)\mathrm dt. $$ The Laplace transform is $$ L(s)=\int_0^{+\infty}\mathrm e^{-sx}P(x)\mathrm dx. $$ Massaging the identity involving $P$ yields $$ L(s)=\frac1{s-1+\mathrm e^{-s}}=\frac2{s^2}+\frac2{3s}+O(1), $$ when $s\to0$. Since $P$ is monotonous, the $\frac2{s^2}$ term implies that $P(x)=2x+o(x)$, and a little more work is needed to show that the $\frac2{3s}$ term implies that $P(x)-2x\to\frac23$.
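As a sanity check on the asymptotics $P(x)\approx 2x+\tfrac23$ derived above, here is a small Monte Carlo sketch; Python and the sample size are arbitrary choices, not part of the original answer.

```python
# Monte Carlo sketch: estimate P(x), the mean number of U[0,1] draws needed
# for the running sum to reach x, and compare with 2x + 2/3.
import random

def draws_to_reach(x):
    total, n = 0.0, 0
    while total < x:
        total += random.random()
        n += 1
    return n

trials = 200_000
for x in (1.0, 2.0, 5.0, 10.0):
    est = sum(draws_to_reach(x) for _ in range(trials)) / trials
    print(x, round(est, 4), round(2 * x + 2 / 3, 4))
```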
H: At which points is the given function $f$ not differentiable? Let $f(x) = \cos (|x-5|) + \sin (|x-3|) + |x + 10 | ^ 3 - (|x| + 4 )^2$; at which points is the function $f$ not differentiable? Since $|x-a|$ is not differentiable at $x= a$, the function $f$ is not differentiable at $x=5,3,-10$ and $0$. Am I right? Please guide me. AI: We first deal in detail with the $|x+10|^3$ part. Looks like there might be trouble at $x=-10$. But there isn't! There are various ways to prove this. One way is to go back to the definition of the derivative. Let $p(x)=|x+10|^3$. We have $$\frac{p(-10+h)-p(-10)}{h}= \frac{|h|^3}{h}=h^2\frac{|h|}{h}\tag{1}$$ As $h$ approaches $0$, the right-hand side of (1) approaches $0$, and quickly at that! So the derivative of $|x+10|^3$ exists at $x=-10$, and indeed is $0$. You will find a similar phenomenon for the cosine part of your function. It is differentiable at $x=5$. Equivalently, let us see that $\cos| t|$ is differentiable at $0$. We could again go back to the definition of the derivative, but for a change, let's not. The cosine function is even, $\cos (-u)=\cos(u)=\cos(|u|)$, so our problem reduces to whether $\cos t$ is differentiable at $0$. It sure is! The sine part, however, will give non-differentiability at $3$. We leave this part to you for a while. For the part about $(|x|+4)^2$, expand and you will see that we have non-differentiability at $0$.
H: Differentiability of multivariable functions, and its relation to the chain rule. I'm struggling with the conditions for the applicability of the chain rule. $${df(C(t))\over dt} = \mathrm{grad}f(C(t))\cdot C'(t)$$ Where $C$ is in $\Bbb R^n$ and $f$ is differentiable in the following sense: $f:\Bbb R^n\rightarrow \Bbb R$ is differentiable at $X$ if we have, for sufficiently small $H$ (in the sense of being close to the null vector) $$f(X+H)-f(X) = \mathrm{grad}f(x)\cdot H+||H||g(H)$$ for some real-valued $g$ which tends to $0$ as $H$ tends to the null vector. $C$ also has to be differentiable in that all of its partial derivatives exist. These requirements seem overly strong to me. What we really need is that $f\circ C:\Bbb R^n \rightarrow \Bbb R$ be differentiable (in the traditional sense of the word). For that, I don't see that either $f$ nor $C$ even need to be continuous. What if we have $$C(t) = (0, t)$$ for $t$ irrational, and $$C(t) = (t, 0)$$ for $t$ rational. Then define $f$ on $C(\Bbb R)$ by extracting $f(0, t) = f(t, 0) = t$. Do we not have $f\circ C$ as the identity function, thus differentiable? AI: "Looking at the proof, it almost seems like the only reason the chain rule formula is obtained is because the author's definition of differentiability is specifically set up for it to emerge." The essential content of mathematics are examples, definitions and theorems. The definition of differentiability is not "the author's definition", but it is the essence that has been cristalized after 250 years of "calculus" out of various manifestations of differentiability in onedimensional and pluridimensional contexts. The chain rule is a prime feature of differentiability, having a high intuitive-geometrical power (I know that students see the chain rule mainly as a recipe to formally differentiate composed function terms). Therefore no general concept of differentiability could do without it. It's no big deal when a crazy function composed with, say, a constant function is differentiable. But it is essential that the composition of any two differentiable functions is again differentiable, and that we have a formula for the derivative of this composition. Now to your example: Assume (i) that the vector-valued function $$t\mapsto {\bf c}(t)=\bigl(c_1(t),\ldots,c_n(t)\bigr)\qquad (-h<t<h)$$ representing a curve in ${\mathbb R}^n$ is differentiable at $t=0$, i.e., that $${\bf c}'(0)=\bigl(c_1'(0),\ldots,c_n'(0)\bigr)$$ exists. Assume (ii) that the scalar function $$f:\quad {\mathbb R}^n\to{\mathbb R},\qquad{\bf x}=(x_1,\ldots,x_n)\to f(x_1,\ldots, x_n)$$ representing, say, a variable temperature in space, is differentiable at the point ${\bf p}:={\bf c}(0)$. This means that we have an estimate of the form $$f({\bf p}+{\bf X})-f({\bf p})=A.{\bf X} + o(|{\bf X}|)\qquad({\bf X}\to{\bf 0})\ ,\tag{1}$$ where $$A=:df({\bf p}):\quad T_{\bf p}\to{\mathbb R}$$ is a certain linear map (a linear functional, to be exact). This implies (but is not implied by) that $f$ has partial derivatives $f_{.k}({\bf p})={\partial f\over\partial x_k}({\bf p})$ $\>(1\leq k\leq n)$. These partial derivatives can be collected into the gradient vector $\nabla f({\bf p})$, and $(1)$ can then be written in the form $$f({\bf p}+{\bf X})-f({\bf p})=\nabla f({\bf p})\cdot{\bf X} + o(|{\bf X}|)\qquad({\bf X}\to{\bf 0})\ ,$$ where the dot denotes the scalar product in ${\mathbb R}^n$. 
Now we are interested in the composed function $$\phi:\quad ]-h,h[\ \to {\mathbb R},\qquad t\mapsto f\bigl({\bf c}(t)\bigr)$$ which represents the temperature measured along time by an experimenter sitting in the spaceship $t\mapsto{\bf c}(t)$. The chain rule tells us that $$d\phi(0)=df({\bf p})\circ d{\bf c}(0)\ ,$$ which when unpacked goes over into $$\phi'(0)=\nabla f({\bf p})\cdot {\bf c}'(0)\ .$$
H: Pell-like equations and continued fractions Why does the continued fraction method work? Could be applied in order to solve, for example, $x^{17}-19y^{17}=1$ ? AI: I tried that for cubes. It was not that successful. With $x^2 - 19 y^2 = 1,$ you are guaranteed an infinite set of solutions. Also, using Lagrange's method of "neighboring forms," it is not necessary to use high decimal accuracy or cycle detection, the cycle ends in only one way, so no memory at all is required. Furthermore, in an infinite sequence, once you have a solution $(x,y)$ you get the next solution at $(170 x + 741 y, 39 x + 170 y).$ In comparison, by Thue's theorem and Thue-Siegel-Roth, your equation has only finitely many solutions. If any are nontrivial, they will show up in the continued fraction expansion. However, owing to the degree (17) you are stuck using very high accuracy, plus there may be no non-trivial answers. The whole thing is a lose-lose. Still interesting, of course.
H: Differential equation on $\Bbb R$ We have a differential equation on $\Bbb R$ of the form $$\frac {d^2}{dx^2}u = \chi_{[0,1]},$$ where $\chi_{[0,1]}$ is the characteristic function of the interval $[0, 1] ⊂ \Bbb R$. I want to find a generalized solution for this differential equation. I also want to know that will the solution be unique? Definition: If $\Omega$ is a domain in $\Bbb R^n$. We say that $u\in \mathcal D'(\Bbb R^n)$ is a generalized solution of $$\sum_{|\alpha|\le m} a_\alpha D^\alpha u=f(x)$$ in $\Omega.$ If $u$ satisfies $$\sum_{|\alpha|\le m} a_\alpha \langle D^\alpha u,\phi\rangle =\langle f(x),\phi\rangle $$ for every $\phi\in\mathcal D(\Omega).$ Here $f\in\mathcal D'(\Bbb R^n)$ and constant coefficients $a_\alpha\in \Bbb R^n $. Thank you. AI: Let's solve the problem in $\mathbb{R}$ i.e. find $u\in \mathcal{D}'(\mathbb{R})$ such that $$\tag{1}\langle u'',\varphi\rangle=\langle \chi_{[0,1]},\varphi\rangle,\ \forall\ \varphi\in \mathcal{D}(\mathbb{R})$$ Suppose that $u\in\mathcal{D}'(\mathbb{R})$ satisfies $(1)$, then for all $\varphi\in \mathcal{D}(\mathbb{R})$ \begin{eqnarray} \langle u'',\varphi\rangle &=& \langle \chi_{[0,1]},\varphi\rangle \nonumber \\ &=& \int_0^1\varphi \nonumber \\ &=& -\int_0^1x\varphi'+\varphi(1) \\ &=& \int_0^1\frac{x^2}{2}\varphi''+\varphi(1)-\frac{\varphi'(1)}{2} \end{eqnarray} We conclude that $\langle u,\varphi''\rangle=\langle f(x)+g(x)+h(x),\varphi''\rangle$ for all $\varphi\in \mathcal{D}(\mathbb{R})$ where $$f(x)=\chi_{[0,1]}\frac{x^2}{2}$$ $$ g(x) = \left\{ \begin{array}{cc} 0 &\mbox{ if $x<1$} \\ x-1 &\mbox{ otherwise} \end{array} \right.$$ $$ h(x) = \left\{ \begin{array}{c} 0 &\mbox{ if $x<1$} \\ \frac{1}{2} &\mbox{ otherwise} \end{array} \right. $$ Hence, $u=f(x)+g(x)+h(x)+ax+b$, where $a,b\in\mathbb{R}$ is the solution for your problem. Remark 1: To find $g$, you have to solve the equation $$\langle g,\varphi''\rangle =\langle \delta_1,\varphi''\rangle$$ where $\delta_1$ is Dirac in the point $1$. To find $h$ you do a similar thing. Remember that the Heaviside function is the "integral" of the Dirac distribution. Remark 2: If you look for a solution in $[0,1]$, then $u=f(x)+ax+b$ is the solution.
H: Two-form wedge product Consider a two-form $\alpha$, then $d\alpha \wedge d\alpha$ is not necessarily zero. Is this statement true? Consider $\beta = \alpha \wedge d \alpha$. Then $d\beta = d(\alpha \wedge d \alpha) = d \alpha \wedge d \alpha + \alpha \wedge d^2 \alpha = d\alpha \wedge d \alpha.$ Is this statement true? Thank you! AI: Hints: $d\alpha$ is a 3-form. The commutativity relationship between a $k$-form $\omega$ and a $l$-form $\eta$ is $$ \omega \wedge\eta = (-1)^{kl}\eta \wedge \omega .$$ Also, $$ d(\omega \wedge\eta) = d\omega \wedge\eta + (-1)^{k}\omega \wedge d\eta. $$ The solutions should follow immediately from these properties (together with $d^2=0$). (For the second part, only the last equality is off. Edit: fixed)
H: Why aren't the strong LLNs and CLT contradicting each other? Given $n$ i.i.d. random variables $\{X_1, X_2, \dots , X_n\}$, each with mean $M$ and variance $V$, both strong and weak LLNs seem to say that the average of the $n$ random variables, $S_n = \frac{X_1 + X_2 + \dots + X_n }{n}$, approaches $M$, as $n \to \infty$. The CLT seems to say that, as $n \to \infty$, the distribution of this average $S_n$ approaches a normal distribution with mean $M$ and variance $V$. The problem I'm having is that it seems like the distribution of the average should converge to something like a discrete variable with a PMF like $1$ at $M$ and $0$ everywhere else. This is because the strong LLN says the average must be $M$, as $n$ approaches infinity. Instead, the normal distribution given by the CLT seems to say that there's a chance of the average not being $M$, as $n$ approaches infinity, which seems to contradict the strong LLN. Where's the flaw in my reasoning? AI: The problem is that you left out the scaling. The average does converge to a constant: what converges to a normal distribution is $$\left(\text{average} - M\right)\sqrt{n}$$
H: How to derive compositions of trigonometric and inverse trigonometric functions? To prove: $$\begin{align} \sin({\arccos{x}})&=\sqrt{1-x^2}\\ \cos{\arcsin{x}}&=\sqrt{1-x^2}\\ \sin{\arctan{x}}&=\frac{x}{\sqrt{1+x^2}}\\ \cos{\arctan{x}}&=\frac{1}{\sqrt{1+x^2}}\\ \tan{\arcsin{x}}&=\frac{x}{\sqrt{1-x^2}}\\ \tan{\arccos{x}}&=\frac{\sqrt{1-x^2}}{x}\\ \cot{\arcsin{x}}&=\frac{\sqrt{1-x^2}}{x}\\ \cot{\arccos{x}}&=\frac{x}{\sqrt{1-x^2}} \end{align}$$ AI: I'll be ignoring domains and possible roots of negative numbers. (If you let $\mbox{$x\in \textbf{]}0,\pi/2[$}$ everything works fine). Given $f\circ g$, the trick is too relate $f$ with $g^{-1}$. I did some. You should be able to handle the remaining ones. $\bullet \sin (\arccos (x))=\sqrt {1-(\cos (\arccos (x) ))^2}=\sqrt {1-x^2}$ $\bullet \sin (\arctan (x))=\dfrac{\tan (\arctan (x))}{\sqrt {1+(\tan (\arctan (x)))^2}}=\dfrac{x}{\sqrt {1+x^2}}$ For this one I used $$\begin{align}(\cos(x))^2+(\sin (x))^2=1 &\implies 1+(\tan(x))^2=(\sec(x))^2\\ &\implies 1+(\tan (x))^2=\dfrac{1}{1-(\sin (x))^2}\\ &\implies 1-(\sin(x))^2=\dfrac{1}{1+(\tan(x))^2}\\ &\implies \sin (x)=\dfrac{\tan(x)}{\sqrt{1+(\tan(x))^2}}\end{align}$$ $\bullet \tan (\arcsin (x))=\dfrac{\sin (\arcsin (x))}{\sqrt{1-(\sin(\arcsin (x)))^2}}=\dfrac{x}{\sqrt{1-x^2}}$ For this one I used $$\begin{align}(\cos(x))^2+(\sin (x))^2=1 &\implies 1+(\tan(x))^2=(\sec(x))^2\\ &\implies 1+(\tan (x))^2=\dfrac{1}{1-(\sin (x))^2}\\ &\implies \tan(x)=\dfrac{\sin(x)}{\sqrt{1-(\sin(x))^2}}\end{align}$$ $\bullet \cot (\arcsin(x))=\dfrac{\sqrt{1-(\sin (\arcsin(x)))^2}}{\sin (\arcsin(x))}=\dfrac{\sqrt{1-x^2}}{x}$ For this one I used $$\begin{align}(\cos(x))^2+(\sin (x))^2=1 &\implies (\cot (x))^2+1=(\csc(x))^2\\ &\implies (\cot (x))^2=\dfrac{1-(\sin(x))^2}{(\sin(x))^2}\\ &\implies \cot(x)=\dfrac{\sqrt{1-(\sin(x))^2}}{\sin (x)}\end{align}$$ $\bullet \cot (\arccos(x))=\dfrac{\cos (\arccos(x))}{\sqrt{1-(\cos (\arccos(x)))^2}}=\dfrac{x}{\sqrt{1-x^2}}$. For this one I used $$\begin{align}(\cos(x))^2+(\sin (x))^2=1 &\implies (\cot (x))^2+1=(\csc(x))^2\\ &\implies (\cot (x))^2=\dfrac{1-(\sin(x))^2}{(\sin(x))^2}\\ &\implies (\cot(x))^2=\dfrac{(\cos(x))^2}{1-(\cos(x))^2}\\ &\implies \cot(x)=\dfrac{\cos(x)}{\sqrt{1-(\cos(x))^2}}\end{align}$$
H: Is it possible to solve this PDE It would be pretty sweet if I could solve this for $A$. Is it possible? $$\frac{dA}{dx}+\frac{dA}{d\tau}=wx\tau$$ where $w$ is a constant and $x$ is a function of $\tau$. It might help that it is also known that: $$w\tau=\frac{d^2A}{dx^2}$$ So the equation to be solved could also be expressed: $$\frac{dA}{dx}+\frac{dA}{d\tau}=\frac{d^2A}{dx^2}x$$ AI: First of all, it's a PDE (partial differential equation), not an ODE. Let $B = A + w x^3/6 - w x^2 \tau/2$. Then your first equation becomes $$ \dfrac{\partial B}{\partial x} + \dfrac{\partial B}{\partial \tau} = 0$$ which is equivalent (apart from quibbles about differentiability) to $B = F(x - \tau)$. Then $\dfrac{\partial^2 A}{\partial x^2} = F''(x-\tau) - w(x - \tau)$. Now the bad news: that can't be $w \tau$, it must be a function of $x - \tau$.
H: calculate $ \pi_{1}\left(T_{2}\#T_{2}\right) $ How can I calculate $\pi_{1}\left(T_{2}\#T_{2}\right)$ ? I know that this is a covering space of a torus which means that this group is a subgroup of $\mathbb{Z}^{2}$. Thanks! AI: Write the connected sum as the union of the tori with disk removed; the intersection of these open sets is a (thickened) circle. The fundamental group of a torus with a disk removed is $\mathbb{Z}\star\mathbb{Z}$ (look at the usual CW complex giving the torus, remove a disk, and you have a wedge of circles). Write $h_1, v_1$ for the generators of this free group on two letters coming from the first torus, and $h_2, v_2$ for the same coming from the second torus. Then the circle included on the intersection is identified with $h_1v_1h_1^{-1}v_1^{-1}$ on the first torus and $h_2v_2h_2^{-1}v_2^{-1}$ on the second (draw the same picture of a standard CW complex with a disk removed). Hence by Van Kampen the fundamental group of the connected sum is $$ \langle h_1, v_1, h_2, v_2 | h_1v_1h_1^{-1}v_1^{-1} = h_2v_2h_2^{-1}v_2^{-1}\rangle. $$
H: Density function as derivative (Self-study) I'm trying to do the Society of Actuaries' example problems. I am having trouble with no. 62, which says: A random variable $X$ has CDF $$ F(x) = \begin{cases} 0 & \text{for $x < 1$} \\ \frac{x^2 - 2x + 2}{2} & \text{for $1 \leq x < 2$} \\ 1 & \text{for $x \geq 2$} \end{cases} $$ Compute the variance of $X$. I understand the basic steps for computing the variance: Transform the CDF into a density $f$, by differentiating. Compute the expectation $E(X)$ by integrating $x f(x)$. Compute the expectation $E(X^2)$ by integrating $x^2 f(x)$. Compute the variance $V(X) = E(X^2) - E(X)^2$. The problem I'm having is that the density I compute does not match up with the one in the solutions page. In particular, I get $$f(x) = \begin{cases} x - 1 & \text{for $1 \leq x < 2$}\\ 0 & \text{otherwise} \end{cases} $$ whereas they get $$ f(x) = \begin{cases} \frac{1}{2} & \text{for $x = 1$} \\ x - 1 & \text{for $1 \leq x < 2$}\\ 0 & \text{otherwise} \end{cases} $$ Where is that $\frac{1}{2}$ coming from? Edit: I corrected the CDF, and now the derivative (if not the density) is correct. AI: Note that as $x$ approaches $1$ from the left, the cdf stays at $0$. Now calculate the cdf at $x=1$. Substituting $x=1$ into the formula, we have $F(1)=\frac{1}{2}$. So no "weight" up to but not including $1$, and then a sudden weight of $\frac{1}{2}$ at $1$. Our overall distribution is neither discrete nor continuous. It has some features from each. That sort of distribution is often called a mixed distribution. Before we do the remaining detailed calculations, we need to worry about the expression $\frac{x^2+2x-2}{2}$. Note that for $x$ near $2$ but to the left of $2$, this gives a number much bigger than $1$. That is absolutely impossible for a cdf. Added: Now the cdf for $1\le x\lt 2$ has been fixed to $\frac{x^2-2x+2}{2}$, it is indeed a cdf. To compute the variance, we probably first want to compute the mean. Differentiating, we find that the density function for $1\lt x\lt 2$ is $x-1$. So we have a point mass of $\frac{1}{2}$ at $x=1$, and a wire of varying density stretching from $1$ to $2$. For the mean, we need to treat the two contributions to the mean separately, and add. We get $$E(X)=(1)\left(\frac{1}{2}\right) +\int_{x=1}^2 x(x-1)\,dx.$$ This I think simplifies to $\frac{4}{3}$. To complete the calculation of variance, we need $E(X^2)$. This is $$(1^2)\left(\frac{1}{2}\right) +\int_{x=1}^2 x^2(x-1)\,dx.$$
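Here is a small exact-arithmetic sketch of the moment calculations set up in the answer (point mass $\tfrac12$ at $x=1$ plus density $x-1$ on $(1,2)$); sympy is simply a convenient choice and is not part of the original solution.

```python
# Sketch: E(X), E(X^2) and the variance for the mixed distribution above.
import sympy as sp

x = sp.symbols('x')
EX  = sp.Rational(1, 2) * 1 + sp.integrate(x * (x - 1), (x, 1, 2))
EX2 = sp.Rational(1, 2) * 1 + sp.integrate(x**2 * (x - 1), (x, 1, 2))
print(EX, EX2, sp.simplify(EX2 - EX**2))  # expect 4/3, 23/12, 5/36
```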
H: Finding the range of $f(x) = 1/((x-1)(x-2))$ I want to find the range of the following function $$f(x) = \frac{1}{(x-1)(x-2)} $$ Is there any way to find the range of the above function? I have found one idea. But that is too critical. Please help me in solving this problem . AI: HINT: Let $$y=\frac1{(x-1)(x-2)}$$ $$\implies (y)x^2-(3y)x+2y-1=0$$ which is a Quadratic Equation in $x$ As $x$ is real, the discriminant must be $\ge 0$ $$\implies (3y)^2-4\cdot y\cdot(2y-1)\ge0$$ $$\implies y^2+4y\ge 0\iff y(y+4)\ge 0$$ Either $y\ge 0$ and $y+4\ge 0\implies y\ge -4\implies y\ge0$ or $y\le 0$ and $y+4\le 0\implies y\le -4\implies y\le-4$
H: Linearly independent rows and matrices Let M be an $m \times n$ matrix whose rows are linearly independent. Suppose that $k$ columns $c_{i_1}, ... , c_{i_k}$of M span the column space of M. Let C be the matrix obtained from M by deleting all columns except $c_{i_1}, ... , c_{i_k}$. Show that the rows of C are also linearly independent. Since the rows are linearly independent in M, we know that every row has a pivot position. We know that the row rank is equal to the column rank. Since the row rank is at least m and the column space is at most k, we know that k is either equal to m or greater than it. Case 1 Let k=m. Then $c_{i_1}, ... , c_{i_k}$ is both a linearly independent and a spanning set, so the rows must also be linearly independent. Case 2 Let k>m. Since $c_{i_1}, ... , c_{i_k}$ is a spanning set, it must contain the linearly independent set. In other words, it must contain all columns in M that have a pivot position. So the rows of matrix C also have a pivot in every row, which implies that the rows are linearly independent. Do you think my answer is correct? Thanks in advance AI: I think your "proof by cases" works just fine, though I personally would do three cases, with first case "if $k < m \implies$ Contradiction", basically, then writing the very reasons you concluded that $k \geq m$. Note: That's just how I'd do it, since you are already considering cases, we might as well exhaust them all, which you did (consider them all, just not explicitly so). But my preference has nothing to do with the correctness of your proof. Your logic, which you included, for good reason, does the job.
H: Prove that $\dfrac{(n^2)!}{(n!)^n}$ is an integer for every $n \in \mathbb{N}$ Prove that $\dfrac{(n^2)!}{(n!)^n}$ is an integer for every $n \in \mathbb{N}$ I know that there are tools in Number theory to proves the required but I want to use the tool that says that if you can prove that an expression solves a combinatorial problem then it represents an integer for every $n \in \mathbb{N}$ My solution is, assume we are given $n^2$ beads, $n$ beads in every color of $n$ colors, and we want to place them in a row ($(n^2)!$) now since we have $n$ groups of colors and every group has $n$ beads in it, we need to cancel the inner sort of each of the groups, which gives us $(n!)^n$ for all the possible inner sorts of the beads. Will that proof work? What are its flaws? AI: Note that $\dfrac{(a+b)!}{a! b!}$ is an integer. In general, $$\dfrac{\left(\displaystyle \sum_{k=1}^m a_k \right)!}{\displaystyle \prod_{k=1}^m a_k!}$$ is an integer. The above can be shown to be an integer by a combinatorial argument. Consider $\displaystyle\sum_{k=1}^m a_k$ balls, where $a_l$ balls are of color '$l$'. Then the number of possible arrangements of these balls in a straight line is given by $\dfrac{\left(\displaystyle \sum_{k=1}^m a_k \right)!}{\displaystyle \prod_{k=1}^m a_k!}$, which is an integer. In your case, take $m=n$ and $a_k = n$, $\forall k \in \{1,2,\ldots,n\}$.
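A brute-force check of the claim for small $n$ is easy to run; this quick sketch only confirms divisibility numerically and is not a substitute for the combinatorial argument above.

```python
# Check that (n^2)! is divisible by (n!)^n for the first few n.
from math import factorial

for n in range(1, 8):
    remainder = factorial(n * n) % factorial(n) ** n
    print(n, remainder == 0)  # expect True every time
```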
H: For a $k$ form $\eta$ and a $l$ form $\omega$, what is $d(\eta\wedge\omega)?$ For a $k$ form $\eta$ and a $l$ form $\omega$, what is $d(\eta\wedge\omega)?$ Thank you very much for your help and guidance! AI: Use the Leibniz rule, i.e. $$d(\eta\wedge\omega)=d\eta\wedge\omega+(-1)^{k}\eta\wedge d\omega.$$ It is one of the properties satisfied by the operator $d$.
H: Airy function and modified Bessel function I have got a question concerning the Airy functions in relation to the Bessel function. From Wiki, it is possible to see how $$ Ai(x)=\frac{1}{\pi}\sqrt{\frac{x}{3}}K_{1/3}\left(\frac{2}{3}x^{\frac{3}{2}}\right) $$ The question is: how can the Airy function retrieve a 0.3550 value when evaluated in $ x = 0 $, if $$ K_{1/3}\left(\frac{2}{3}0^{\frac{3}{2}}\right) = \infty $$ It's probably a naive question but I would say that $$ Ai(0) = 0*\infty = NaN$$ looking at the above equivalence. I thank you in advance for supporting. AI: For fixed $\nu>0\;$ and from the reference DLMF you have the equivalence for $z$ near $0$ : $$\operatorname{K}_{\nu}(z)\sim \frac{\Gamma(\nu)}2\left(\frac z2\right)^{-\nu}$$ From this you may deduce that near $0$ the Airy function $\operatorname{Ai}(x)$ is equivalent to : \begin{align} \frac{1}{\pi}\sqrt{\frac{x}{3}}\operatorname{K}_{1/3}\left(\frac{2}{3}x^{\frac{3}{2}}\right)&\sim \frac{\Gamma(1/3)}{2\pi}\sqrt{\frac{x}{3}}\left(\frac{1}{3}x^{\frac{3}{2}}\right)^{-1/3}\\ &\\ &\sim \frac{\sqrt[3]{3}\ \Gamma(1/3)}{\sqrt{3}\;2\,\pi}\\ &\\ &\sim 0.355028053887817239260063186\cdots \end{align} Getting even a closed form for the limit.
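The closed form obtained above can be compared numerically with the standard value $\operatorname{Ai}(0)=3^{-2/3}/\Gamma(2/3)$; the snippet below is a quick check using only the standard library (scipy's Airy routines would be an alternative, but are not assumed here).

```python
# Compare the derived limit (3^(1/3) * Gamma(1/3)) / (sqrt(3) * 2 * pi)
# with the standard value Ai(0) = 3^(-2/3) / Gamma(2/3).
from math import gamma, pi, sqrt

closed_form = 3 ** (1 / 3) * gamma(1 / 3) / (sqrt(3) * 2 * pi)
ai_zero = 1 / (3 ** (2 / 3) * gamma(2 / 3))
print(closed_form, ai_zero)  # both approximately 0.3550280538878172
```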
H: Set Relations Quick Question Can someone please explain how this answer was reached? I know that relation of A is just A * A but wouldn't that just be $4^{2}? $ Let A = {1, 2, 3, 4}. How many relations are there on a set A? Solution: $2^{4^2} = 2^{16} = 65,536$ AI: By definition, a relation on $A$ is a subset of $A^2$. Since $|A^2|=16$, the number of subsets of $A^2$ is $2^{16}$, which means there are $2^{16} = 65,536$ relations on $A$.
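For a concrete check of the counting argument, one can enumerate all subsets of $A\times A$ for a smaller set and then apply the same formula to the four-element set; this is just an illustrative sketch.

```python
# Validate "relations = subsets of A x A" by explicit enumeration on a small
# set, then apply the formula 2^(|A|^2) to A = {1, 2, 3, 4}.
from itertools import combinations, product

def count_relations(A):
    pairs = list(product(A, repeat=2))
    return sum(1 for k in range(len(pairs) + 1)
                 for _ in combinations(pairs, k))

print(count_relations({1, 2}), 2 ** (2 ** 2))   # 16, 16
print(2 ** (4 ** 2))                            # 65536 for the 4-element set
```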
H: Inequality with square root and squaring each side of the inequality My book says if I take $\sqrt{x^2 + y^2} \lt 1,\;$ and it says if I "square each side of the inequality" the result will give the inequality $\;x^2 + y^2\lt 1,\;$ but I don't understand the concept. If you find the square root of $5^2$ isn't that $5$? Then isn't the square root of $\,x^2 + y^2\,$ equal to $\,x + y\;?$ Am I thinking about this correctly or am I mistaken somehow? AI: Please understand: While it is true that $\;\sqrt{(x + y)^2} = x+y,\,$ note that $$ \begin{align} \sqrt{x^2 + y^2} & \color{blue}{\;\large \bf \neq\;}x + y \\ \\ \text{so}\quad \left(\sqrt{x^2 + y^2}\right)^2 & \neq (x + y)^2 \\ \\ \text{because}\quad x^2 + y^2 & \neq x^2 + 2xy + y^2 \end{align}$$ Regarding your inequality (for why I'm addressing this, please note the OP's comment below, and the fact that the original post before editing, included a statement about being perplexed as to why the following is true): $$\begin{align} \sqrt{x^2 + y^2} \lt 1 & \implies \left(\sqrt{x^2 + y^2}\right)^2 \lt (1)^2\\ \\ & \implies x^2 + y^2 \lt 1\end{align}$$
H: Simplifing formulas using tensor notation Im trying to symplify formulas like: $$\operatorname{div}(\operatorname{rot}\vec{F}),\qquad \operatorname{rot}(\operatorname{rot}\vec{F}) $$ or something more strange like: $$\operatorname{rot}(\vec{r}\operatorname{div}(r^4\operatorname{grad}(r^4)))$$ To do this I want to use the tensor notation and by this I mean using the Einstein convention, Levi-Civita symbol, Kronecker delta and all that. The problem is that I don't understand the rules and what its allowed and what not. As an example: $$\operatorname{div}(f( r)\cdot \textbf r)= \partial_i(f( r)\cdot \textbf r)_i = \partial_i(f( r)\cdot x_i)= \partial _if(r)x_i+f(r)\partial_ix_i= f'(r)\frac{x_i}{r}x_i+3f(r)=rf'(r)+3f(r) $$ Any help on how to do this kind of problems or where I can find useful examples? I'm working in flat space. Thanks! AI: The gradient, divergence, curl, and Laplacian can be written in the following way: $$\begin{eqnarray*} (\mathrm{grad}\, f)_i &=& \partial_i f \\ \mathrm{div}\,{\bf F} &=& \partial_i F_i \\ (\mathrm{rot}\,{\bf F})_i &=& \epsilon_{ijk} \partial_j F_k \\ \Delta &=& \partial^2 = \partial_i\partial_i. \end{eqnarray*}$$ The Levi-Civita symbol has some key properties. For example, it is totally antisymmetric so $\epsilon_{ijk}=-\epsilon_{jik} = \epsilon_{jki}$. In addition it has an important multiplicative property: $$\begin{eqnarray*} \epsilon_{ijk}\epsilon_{ilm} &=& \delta_{jl}\delta_{km}-\delta_{jm}\delta_{kl}. \end{eqnarray*}$$ The Levi-Civita symbol and the Kronecker delta are just numbers, so they commute with derivatives. Below we give some sample calculations. Example 1: $$\begin{eqnarray*} \mathrm{div}(\mathrm{rot}\,{\bf F}) &=& \partial_i (\mathrm{rot}\,{\bf F})_i \\ &=& \partial_i (\epsilon_{ijk} \partial_j F_k) \\ &=& \epsilon_{ijk} \partial_i \partial_j F_k \\ &=& \epsilon_{ijk} \partial_j \partial_i F_k \\ &=& -\epsilon_{jik} \partial_j \partial_i F_k \\ &=& -\epsilon_{ijk} \partial_i \partial_j F_k \\ &=& 0 \end{eqnarray*}$$ Example 2: $$\begin{eqnarray*} [\mathrm{rot}(\mathrm{rot}\,{\bf F})]_i &=& \epsilon_{ijk} \partial_j (\mathrm{rot}\,{\bf F})_k \\ &=& \epsilon_{ijk} \partial_j (\epsilon_{klm}\partial_l F_m) \\ &=& \epsilon_{ijk} \epsilon_{klm} \partial_j \partial_l F_m \\ &=& \epsilon_{kij} \epsilon_{klm} \partial_j \partial_l F_m \\ &=& (\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl}) \partial_j \partial_l F_m \\ &=& (\partial_m \partial_i - \delta_{im} \partial^2)F_m \\ &=& \partial_i(\mathrm{div}\,{\bf F}) - (\Delta{\bf F})_i \end{eqnarray*}$$ Therefore, $$\mathrm{rot}(\mathrm{rot}\,{\bf F}) = \mathrm{grad}(\mathrm{div}\,{\bf F}) - \Delta{\bf F}.$$
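Example 2 above can also be verified mechanically by expanding both sides in Cartesian components; the following sketch does that with sympy (an assumed tool, not something the answer itself uses) for a generic smooth vector field.

```python
# Symbolic check of rot(rot F) = grad(div F) - Laplacian(F) in flat 3-space,
# written out component by component with explicit partial derivatives.
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
F = [sp.Function(f'F{i}')(x, y, z) for i in range(3)]

def curl(V):
    return [sp.diff(V[2], y) - sp.diff(V[1], z),
            sp.diff(V[0], z) - sp.diff(V[2], x),
            sp.diff(V[1], x) - sp.diff(V[0], y)]

div = lambda V: sum(sp.diff(V[i], coords[i]) for i in range(3))
grad = lambda f: [sp.diff(f, c) for c in coords]
lap = lambda f: sum(sp.diff(f, c, 2) for c in coords)

lhs = curl(curl(F))
rhs = [grad(div(F))[i] - lap(F[i]) for i in range(3)]
print(all(sp.simplify(lhs[i] - rhs[i]) == 0 for i in range(3)))  # expect True
```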
H: Can saturation drop when passing to an elementary extension. Suppose I have a model $\mathcal{U}$ with some theory $T$ such that $\mathcal{U}$ is $\kappa$-saturated for some infinite cardinal $\kappa$. If $\mathcal{V} \succeq \mathcal{U}$ is an elementary extension of $\mathcal{U}$, is it true that $\mathcal{V}$ is (at least) $\kappa$-saturated? I.e. is it possible for the saturation to drop when you pass to an elementary extension. One would hope that the saturation doesn't drop but I can't seem to get an argument for why this is the case. Thanks! AI: Actually it is not true. $\cal V$ would realise types over subsets of $\cal U$ (since they are realised in $\cal U$) but it may omit other types. Let $T$ be the theory of dense linear orders without endpoints. Let $\cal U$ be an $\aleph_1$-saturated model of $T$. Consider ${\cal V} = {\cal U} + \mathbb Q$, by which I mean the structure whose universe is the disjoint union of $\mathbb Q$ and the universe of $\cal U$ and every element in $\cal U$ is smaller than any element in $\mathbb Q$. Now $\cal V$ is a dense linear order and hence $\cal U \preceq V$ by model completeness. But $\cal V$ is not $\aleph_1$-saturated, since the type $\{v > p : p \in \mathbb Q\}$ is not realised in $\cal V$.
H: When $(\mathbb{Q}(\zeta_n) : \mathbb{Q}) = 2$? maybe this is a stupid question . Anyway, when $(\mathbb{Q}(\zeta_n) : \mathbb{Q}) = 2$, i.e., $\phi(n) = 2$ (where $\phi$ is the Euler's totient function)? Thanks in advance. AI: Hint: if $n=p_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}$, then $$\varphi(n)=(p_1-1)p_1^{a_1-1}\cdots(p_k-1)p_k^{a_k-1}$$ (this formula is an easy restatement of the one listed on Wikipedia). Thus, if $\varphi(n)=2$, could $n$ be divisible by $5$, for instance? What can you deduce about the possible powers of $2$ or $3$ going into $n$?
H: If the difference of cubes of two consecutive integers is a square, then the square can be written as the sum of squares of two different integers. How can I prove the statement that if the difference of cubes of two consecutive integers is a perfect square, then that square can be written as the sum of squares of two different integers? For example: $$8^3 - 7^3 = 13^2 = 12^2 + 5^2$$ Any help appreciated. Thanks. AI: We can see that $$n^3-(n-1)^3=3n^2-3n+1.$$ By looking at congruences modulo 4, we see that $$n^3-(n-1)^3 \equiv 1\pmod 4.$$ Recalling that you assumed that the difference was a square, you can apply Fermat's theorem on sums of two squares to conclude.
H: Proving a minimum vertex cover for a family of 3-uniform hypergraphs Let $H_n$ be a $3$-uniform hypergraph. For every $\{a, b, c\} \subseteq [n]$ we have an edge $\{\{a, b\}, \{a, c\}, \{b, c\}\} \in H_n$. What is the minimum vertex cover of $H_n$? The following construction is a vertex cover of $H_n$ (call it $C_n$), but I am not sure how to prove it is a minimum vertex cover (and it may not be a minimum). Let $Sub_k(S) = \{A \subseteq S \mid |A| = k\}$ (does this have a standard notation?) and partition $[n]$ into two disjoint sets $L_n$ of the odd elements and $R_n$ of the even elements. Then, $$C_n = Sub_2(L_n) \cup Sub_2(R_n)$$ This is indeed a vertex cover, since for any edge $\{\{a, b\}, \{a, c\}, \{b, c\}\}$ we have, $a, b$ are both odd, then $\{a, b\} \subseteq L_n$ $a, b$ are both even, then $\{a, b\} \subseteq R_n$ Otherwise assume $a$ is odd and $b$ is even, then either, $c$ is odd and $\{a, c\} \subseteq L_n$ $c$ is even and $\{b, c\} \subseteq R_n$ AI: Commonly used notations for the set of all $k$-element subsets of a set $S$ are $[S]^k$ and $\binom S k$. Your construction is indeed the (essentially unique) minimum vertex cover of your $H_n$. To put it in terms of ordinary graphs ($2$-uniform hypergraphs), you're looking for the smallest subgraph $G$ of $K_n$ which contains at least one edge of every triangle in $K_n$. In other words, you want the complementary graph $\overline G$ to be a triangle-free graph on $n$ vertices with the maximum possible number of edges. This is answered by Mantel's theorem, which is a special case of Turán's theorem: the answer is $\overline G=K_{\lfloor n/2\rfloor,\lceil n/2\rceil}$, a complete bipartite graph with $\lfloor n^2/4\rfloor$ edges. http://en.wikipedia.org/wiki/Tur%C3%A1n%27s_theorem
H: Why $x^2 + 7$ is the minimal polynomial for $1 + 2(\zeta + \zeta^2 + \zeta^4)$? Why is $f(x) = x^2 + 7$ the minimal polynomial for $1 + 2(\zeta + \zeta^2 + \zeta^4)$ (where $\zeta = \zeta_7$ is a primitive root of unity) over $\mathbb{Q}$? Of course it's irreducible by the Eisenstein criterion; however, it apparently does not have $1 + 2(\zeta + \zeta^2 + \zeta^4)$ as a root. I tried to calculate several times, but I couldn't get $f(1 + 2(\zeta + \zeta^2 + \zeta^4)) = 0$. Thanks in advance. AI: Compute: $$\begin{align*} (1+2(\zeta+\zeta^2+\zeta^4))^2+7&=\bigg[1^2+4(\zeta+\zeta^2+\zeta^4)+4(\zeta+\zeta^2+\zeta^4)^2\bigg]+7\\[0.1in] &=\bigg[1+4(\zeta+\zeta^2+\zeta^4)+4(\zeta^2+\zeta^4+\zeta^8+2\zeta^3+2\zeta^5+2\zeta^6)\bigg]+7\\[0.1in] (\mathsf{\text{because }}\zeta^8=\zeta)\quad&=\bigg[1+4(\zeta+\zeta^2+\zeta^4)+4(\zeta^2+\zeta^4+\zeta+2\zeta^3+2\zeta^5+2\zeta^6)\bigg]+7\\[0.1in] &=\bigg[1+8\zeta+8\zeta^2+8\zeta^3+8\zeta^4+8\zeta^5+8\zeta^6\bigg]+7\\[0.1in] &=(-7)+8+8\zeta+8\zeta^2+8\zeta^3+8\zeta^4+8\zeta^5+8\zeta^6+7\\[0.2in] &=(-7)+8\Phi_7(\zeta)+7\\[0.1in] \left(\begin{array}{c}\mathsf{\text{because }}\Phi_7\mathsf{\text{ is the}}\\ \mathsf{\text{minimal poly of }}\zeta\end{array}\right)\quad&=(-7)+7\\[0.1in] &=0 \end{align*}$$
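A quick numerical check of the computation above, taking $\zeta=e^{2\pi i/7}$; floating point only confirms the algebra up to rounding, of course.

```python
# Check numerically that (1 + 2(z + z^2 + z^4))^2 + 7 vanishes for a
# primitive 7th root of unity z, so x^2 + 7 has the given element as a root.
import cmath

z = cmath.exp(2j * cmath.pi / 7)
w = 1 + 2 * (z + z**2 + z**4)
print(w**2 + 7)  # expect a value negligibly close to 0
```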
H: How to solve $u'' + k u + \epsilon u^3 = 0$? I am looking at the project for my ODE class; there is one problem saying we have to solve $u'' + k u + \epsilon u^3 = 0$. The problem gives us some values of $k$, $\epsilon$ and says you should experiment with different initial values with Euler's method. I have solved the equation using Euler's method. But now I am confused: how can I know I got the right solution? In previous exercises the book gave the answer for the function and we could compare it with the Euler answer; the project page says nothing about the answer. I tried letting $u = e^{rt}$ like in class to get the characteristic equation if both constants are 1: $$ r^2 + r + e^{2rt}=0 $$ but I can't solve this. Thanks a lot. The initial values are $u(0)=0$, $u'(0)=1$, and $k=\epsilon =1$. AI: It looks like you are supposed to use perturbation theory to solve this; the $\epsilon$ is a tipoff. Initially you set $\epsilon$ to zero, which gives an equation you know how to solve. We designate its solution $u_0$ as the zero order solution and hope that as the other term is small it will not perturb the solution too much. So $u_0=a \cos (\sqrt k t + \phi)$. Now you plug this into the perturbation term and solve the new equation, giving $u_1''+ku_1+\epsilon u_0^3=0$ or $u_1''+ku_1+\epsilon a^3 \cos^3 (\sqrt k t + \phi)=0$. Now solve this to get $u_1(t)$. Plug that into the original equation to get $u_2''+ku_2+\epsilon u_1^3=0$. You should get new terms with multiples of the original frequency. Keep going until you get tired. As long as $\epsilon$ is small enough, you hope that the corrections are getting smaller and smaller. With your edit, I would be worried that $\epsilon =1$ might perturb the solution enough that things don't converge. Even in the first order, you might get a secular (frequency zero) term representing unbounded motion.
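Since the question is also about checking a numerical solution, here is a minimal forward-Euler sketch for the stated data; the energy check at the end is a verification idea not taken from the answer above (the exact flow conserves $E=\tfrac12 v^2+\tfrac12 ku^2+\tfrac14\epsilon u^4$), and the step size and horizon are arbitrary choices.

```python
# Forward Euler for u'' + k*u + eps*u**3 = 0, u(0) = 0, u'(0) = 1, k = eps = 1,
# written as the first-order system u' = v, v' = -(k*u + eps*u**3).
k, eps = 1.0, 1.0
energy = lambda u, v: 0.5 * v**2 + 0.5 * k * u**2 + 0.25 * eps * u**4

u, v = 0.0, 1.0
h, steps = 1e-4, 100_000          # integrate up to t = 10
E0 = energy(u, v)
for _ in range(steps):
    u, v = u + h * v, v + h * (-(k * u + eps * u**3))

print(u, v, energy(u, v) - E0)    # the energy drift shrinks as h is reduced
```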
H: How to prove that: $|f(x+h)-f(x)| \leq 2|f(x)|$ for continuous $f$ and sufficiently small $h$. If I have a continuous function $f$, how can I prove that $$|f(x+h)-f(x)| \leq 2|f(x)| $$ for all sufficiently small $h$? for example if I suppose that $f$ is not continous. I know only that $f\in L^{1}(\mathbb{R})$ how can I prove the inequality? thanks :) AI: Recall the definition of $f$ being continuous at $x$: $$\forall \epsilon > 0: \exists \delta > 0: \forall h: |h| < \delta \implies |f(x+h)-f(x)| < \epsilon$$ The desired inequality follows from applying this to $\epsilon = 2|f(x)|$. Note that we had to assume $f(x) \ne 0$ (since $\epsilon > 0$ is required).
H: Confused about a limit proof and Big O. I gave an incorrect proof here : How can evaluate $\lim_{x\to0}\frac{\sin(x^2+\frac{1}{x})-\sin\frac{1}{x}}{x}$ I am confused, as when considering the mistakes in my proof it seems the limit cannot be $0$. The method must thus be completely wrong, even more wrong than the comments suggest. Yet I believe there should be a big O proof possible, and even one similar to the one I posted. The "paradox" I'm getting at is perhaps more clearly understood when considering that my method/the correct method should also work if sine is replaced by another function that has a Taylor series at $0$ with all $a_n$ larger than or equal to $0$ (and for which the limit should also be $0$ of course). Very confused. Please keep in mind that I want a proof based on big O and not on trig identities or l'Hôpital's rule. AI: A Taylor series of $f(x)$ around $x=0$ is of little help when estimating $f(x^2+\frac1x)$ as $x\to 0$. Note that $x^2+\frac1x\to\infty$. Rather you need the Taylor series developed around $\frac1x$ (and for general $f$ have to make sure that the constant hidden in the big-O does not depend on the point of development): $$ \tag1\sin(x_0+h)=\sin(x_0)+h\cos(x_0)-\frac12h^2\sin(x_0)+O(h^3)$$ seems to give us $$\tag2 \sin(x^2+\frac1x)-\sin(\frac1x)=x^2\cos(\frac1x)-\frac12x^4\sin(\frac1x)+O(x^6)$$ but the constant hidden in $(1)$ may depend on $x_0$, that is, in $(2)$ the constant depends on $x$, which of course makes the $O$ useless. However, one can argue that because of the periodicity, the $O$ is uniform in $x_0$ here.
H: How do we derive new inference rules? I've been toying with a system of inference rules for propositional logic. I can use the system to prove theorems; but my question is, can I use the system to obtain new inference rules? Here are the details. We begin with 5 inference rules, and 0 axioms. (1) Proof by contradiction. We can open a proof/subproof by assuming $\neg A$. If we obtain a contradiction, we may close the proof/subproof and deduce $A$. (2) Double Negation Elimination. $$\neg\neg A \vdash A$$ (3) Conjunction Elimination. $$A \wedge B \vdash A,\quad A \wedge B \vdash B$$ (4) De Morgan's Principles. $$\neg(A \wedge B) \vdash \neg A \vee \neg B,\quad \neg(A \vee B) \vdash \neg A \wedge \neg B$$ (5) [Don't Know What To Call This] $$(\neg A \vee \neg B), A \vdash \neg B,\quad (\neg A \vee \neg B), B \vdash \neg A$$ Now it seems to me that I can prove a lot using just these rules; I'm guessing the logic is complete. For a simple example, lets prove $\neg(A \wedge B) \vee (B \wedge A).$ Assume not. Then by De Morgan, we get $\neg\neg(A \wedge B) \wedge \neg(B \wedge A)$. Using both versions of rule (3), we obtain $\neg\neg(A \wedge B)$ $\neg(B \wedge A)$ which (by 2) becomes $A \wedge B$ $\neg(B \wedge A)$ which using both versions of rule (3) again, becomes $A$ $B$ $\neg(B \wedge A)$ which (thank you for the correction, Doug) becomes $A$ $B$ $\neg B \vee \neg A.$ Finally, using rule (5), from $B$ and $\neg B \vee \neg A$ we deduce $\neg A$. Since both $A$ and $\neg A$ are written down, we're done! Okay, so we've proved the theorem $\neg(A \wedge B) \vee (B \wedge A).$ But, I'd like to view this as a new rule of inference rather than a theorem. Am I allowed to deduce that $A \wedge B \vdash B \wedge A$ is an admissible rule of inference? In general, how does one deduce a new rule of inference? For instance, is it a general principle that if we have a theorem of the general form $\neg \Gamma \vee \Delta$, then it follows that $\Gamma \vdash \Delta$ is an admissible rule of inference? Is the converse true? Similarly, suppose we want to demonstrate that a rule of the form $\Gamma, \Gamma' \vdash \Delta$ is admissible. Is it sufficient to begin the game with $\Gamma$ and $\Gamma'$ written down, and deduce $\Delta$? If so, why? I'm having difficulty organizing my thoughts on the issue. AI: What new rule of inference would this be? What would such a rule say you could infer from (¬(A∧B)∨(B∧A))? Yes, you can get rules of inference via writing proofs. For your case you would show that from (A∧B) you can deduce (B∧A) for your system (but you can't discharge a supposition like you did in your proof... proof-by-contradiction discharges the supposition). Then you would have (A∧B)⊢(B∧A) as a derived rule of inference. And yes you can have multiple things on the left. You already have two such rules here, rules covered by (5). But, I guess you wanted to see at least three things on the left to convince you of this. 
Let's say our only rules of inference are detachment (basically modus ponens) and substitution for variables and our axioms (in Polish notation) are the following theses: 1) CCqrCCpqCCrsCps Sorites 2) CCpCqrCCpqCpr Frege 1 q/CpCqr, r/CCpqCpr * C2-3 (n x/Czy means x is substituted with Czy in the formula with numeral n, Ca-b indicates that thesis n is the antecedent of the formula after we've made the substitutions in the n formula, it has consequent b, and thus we detach thesis b) 3 CCpCpCqrCCCCpqCprsCps From thesis 3 and detachment it follows that "CpCpCqr, CCCpqCprs, p⊢s" is a derivable rule of inference. We can demonstrate that as follows: 1 CpCpCqr assumption 2 CCCpqCprs assumption 3 p assumption 4 CCCCpqCprsCps thesis 3, 1 detachment 5 Cps 4, 2 detachment 6 s 5, 3 detachment Also, if you look at the justification for "CpCpCqr, CCCpqCprs, p⊢s" as a derivable rule of inference, you can find other derivable rules of inference also. I'll clarify this if you like. If you have a Deduction (meta) Theorem for your system, or a rule of conditional introduction, you can get another rule of inference R' from any rule of inference R which has multiple premises by applying the rule of conditional introduction/the theory's Deduction Theorem. For example, with "CpCpCqr, CCCpqCprs, p⊢s" supposing we had a deduction theorem or a rule of conditional introduction, we might infer "CpCpCqr, CCCpqCprs⊢Cps" as a rule of inference. Now, I did say "supposing" there. The system {CCqrCCpqCCrsCps, CCpCqrCCpqCpr} under detachment and substitution doesn't actually have a conditional-in rule or a Deduction Theorem. But, we can still get that second rule. One half of what some authors call the Deduction Theorem, or what I'd call The Detachment Theorem says this: For any propositions $\alpha$, $\beta$ "If C$\alpha$$\beta$ holds, if $\alpha$ also, then $\beta$ holds also." (this isn't quite like detachment which says "from C$\alpha$$\beta$, as well as $\alpha$, we may infer $\beta$ we use detachment to justify The Detachment Theorem, but not conversely.") The upshot comes as that with a set of premises $\gamma$, from $\gamma$ $\vdash$Cpq we may infer ($\gamma$ U {p}) $\vdash$ q. Or in other symbolism we might write "from $\alpha$, ..., $\omega$ $\vdash$ Cpq, we may infer $\alpha$, ..., $\omega$, p $\vdash$ q." So, since we derived CCpCpCqrCCCCpqCprsCps we have $\vdash$CCpCpCqrCCCCpqCprsCps. Thus, as rules of inference we can derive using the Detachment Theorem: 1: CpCpCqr $\vdash$ CCCCpqCprsCps. From 1 and the Detachment Theorem we can derive 2: CpCpCqr, CCCpqCprs $\vdash$ Cps. From 2 and the Detachment Theorem we can derive 3: CpCpCqr, CCCpqCprs, p $\vdash$ s. If we wanted to justify each of those rules without The Detachment Theorem in one proof we might write the following: 1 CCpCpCqrCCCCpqCprsCps by the derivation above 2 $\alpha$ CpCpCqr assumption 3 $\alpha$ CCCCpqCprsCps 2, 1 detachment (thus rule 1 above) 4 $\alpha$$\beta$ CCCpqCprs assumption 5 $\alpha$$\beta$ Cps 3, 4 detachment (thus rule 2 above) 6 $\alpha$$\beta$$\gamma$ p assumption 7 $\alpha$$\beta$$\gamma$ s 6, 7 detachment (thus rule 3 above)
H: Need help identifying notation, groups, $(G:1)$ Let $G$ be a finite group and let $p$ be a prime integer. (a) Show that if $p$ divides $(G:1)$, then $G$ contains an element of order $p$. (You may assume this holds if $G$ is abelian) My only question is what does the notation $(G:1)$ refer to? I am preparing for qualifying exams, still not an official graduate student yet but I saw this in one of the past algebra qualifying exams. From my experience, the little bit I have had, in abstract algebra, it would seem to me that $(G:1)$ could in some way be referring to the order of the group but I kind of doubt this as I am not sure where the $1$ comes from. I am guessing it is something I haven't heard of before or I just haven't seen the notation for it if I have. AI: If $H$ is a subgroup of $G$, then $(G:H)$ is a notation for the index of $H$ in $G$. So $(G:1)$ is just the order of $G$.
H: Maximum of a trigonometric function without derivatives I know that I can find the maximum of this function by using derivatives but is there an other way of finding the maximum that does not involve derivatives? Maybe use a well-known inequality or identity? $f(x)=\sin(2x)+2\sin(x)$ AI: The idea is to use $\sin^2 x + \cos^2x = 1$ to reduce to dealing with only 1 trigonometric function, and then proceed as a standard 1-variable inequality. We wish to find the maximum of $f(x) = \sin 2x + 2 \sin x = 2 \sin x ( 1 + \cos x )$. It is clear that we may assume $\sin x \geq 0, \cos x \geq 0$, to maximize this product. Let's consider $$[f(x)]^2 = 4 \sin^2 x ( 1 + \cos x)^2 = 4 (1-\cos^2x ) ( 1 + \cos x)^2 = 4 ( 1 - \cos x ) ( 1 + \cos x)^3 $$ By AM-GM, applied to $ 3(1-\cos x) , (1 + \cos x), (1 + \cos x), (1 + \cos x)$, we get that $ \sqrt[4]{3 ( 1 - \cos x ) ( 1 + \cos x)^3} \leq \frac{ 6}{4}$, or that $ ( 1 - \cos x ) ( 1 + \cos x)^3 \leq \frac{27}{16}$. Hence, $ [f(x)]^2 \leq \frac{27}{4}$ so $f(x) \leq \frac{3 \sqrt{3} } {2} $. It remains to verify that equality can occur, which it does at $3(1-\cos x) = 1 + \cos x$ of $\cos x = \frac{1}{2}$.
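A crude numerical check of the bound $\tfrac{3\sqrt3}{2}$ obtained above; the grid resolution is an arbitrary choice.

```python
# Grid-check the maximum of f(x) = sin(2x) + 2*sin(x) on [0, 2*pi].
from math import sin, sqrt, pi

n = 1_000_000
best = max(sin(2 * x) + 2 * sin(x) for x in (2 * pi * i / n for i in range(n)))
print(best, 3 * sqrt(3) / 2)  # both close to 2.5980762...
```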
H: Big Rudin 1.40: Open Set is a countable union of closed disks? Reading through Big Rudin, I have come across the following statement in the proof of Theorem 1.40: Let $S \subset \mathbb{C}$ be a closed set [in the topology induced by the complex modulus]. Let $\Delta = \{z \in \mathbb{C}: |z-\alpha|\leq \epsilon\}$ be a closed disc about the point $\alpha \in \mathbb{C}-S$. Then $\mathbb{C}-S$ is given by a countable union of closed discs $\Delta$. An (arbitrary) open set is the union of a countable number of closed sets? This seems like an error to me, but I can find no mention of it online. For what it's worth, the proof seems to work if we take $\Delta$ to be an open disc, as the standard topology on $\mathbb{C}$ is second-countable. Edit: Does anyone know of a good errata for the book? The only ones making themselves known are for Baby Rudin. AI: An arbitrary open set of $\mathbb{C}$ is a countable union of open balls - specifically, if $U\subseteq \mathbb{C}$ is open, then $U$ is the union of the "rational" open balls contained in $U$, that is, those open balls whose centers are complex numbers whose real and imaginary parts are rational, and whose radii are rational (since $\mathbb{Q}$ is countable, there can only be countably many such open balls). Now note that any open ball is a countable union of closed balls - specifically, $\{z\in\mathbb{C}:|z-\alpha|<\epsilon\}$ is equal to the union $$\bigcup_{n\in\mathbb{N}}\{z\in\mathbb{C}:|z-\alpha|\leq\epsilon-\tfrac{1}{n}\}.$$ Lastly, note that a countable union of countable unions is still a countable union.
H: combinatorics: Calculating number of different possibilities This is a HW question Part a) A normal pizza can have up to 3 toppings out of 18 possible choices, or can be one of four speciality pizzas. Calculate the number of different pizzas possible (including the pizza with no toppings). I came to the conclusion that it should be $4+ \big(19 C 3\big) $; the reason I wrote 19 is because the 19th topping is "none". Is this right? Part b) A 4-for-all pizza consists of any four pizzas from part a arranged in a box. Calculate the total number of 4-for-all pizzas. Keep in mind that the four pizzas could be all different, or three different kinds of pizzas, or two different kinds of pizzas, or the four could be all the same. The ordering of the pizzas in the box does not matter. For this I believe it has to do with combination with repetition. So for Part a) I got $969$ and therefore for Part b) I get $ \big((969+4-1)C4\big) $. Am I right? If I am wrong somewhere please provide a hint. AI: No, the $(19C3)$ term isn't quite right. "Up to three toppings" means exactly zero of the 18, exactly one of the 18, exactly two of the 18, or exactly three of the 18. This means you have $(18 C 0) + (18C1) + (18C2) + (18C3)$ options (plus the additional 4 specialty pizzas). Also, for the record, there's a "homework" tag that usually goes with homework questions.
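Following the corrected count in the answer for part (a), and the asker's own combinations-with-repetition approach for part (b) (which the answer does not itself confirm), the numbers come out as in this sketch:

```python
# Part (a): up to 3 of 18 toppings, plus 4 specialty pizzas.
# Part (b): choose 4 pizzas from part (a) with repetition, order irrelevant.
from math import comb

part_a = sum(comb(18, k) for k in range(4)) + 4
part_b = comb(part_a + 4 - 1, 4)
print(part_a, part_b)
```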
H: Can an eigenvalue $\lambda_i=0$? I was doing some work with diagonalization of a matrix $A$ in order to find a matrix $P$ such that $\,P^{-1}AP\,$ was diagonal. In order to do that I set $\;\det(A-\lambda I_{n})=0\;$ and found the characteristic polynomial and its roots. When I factored my characteristic polynomial I obtained $\;\lambda^2(\lambda-2),\,$ so $\,\lambda=0,\,2$. I was taught that the eigenvalues $\,\lambda_{i}\,$ I found become the entries of the diagonal matrix $\,P^{-1}AP.\,$ If this is indeed true, then two of the diagonal entries would be $\,0.\,$ Is this allowed, or must a diagonal matrix strictly have non-zero diagonal entries? AI: Absolutely yes: It is very possible $\lambda = 0$. Zero is allowed. You may be mixing up what you know about eigenvectors --- the zero vector cannot be an eigenvector. But for an eigenvalue $\lambda$, it is certainly possible and admissible that $\lambda = 0$. With respect to your last question: "($\lambda = 0$): Is this allowed, or does a diagonal matrix strictly have to have the diagonal entries as non-zero?" Yes, it is allowed for zeros to be on the diagonal. No, the diagonal entries need not be non-zero.
H: failed application of magicry in Taylor expansion of $1/x^2$ near $x=2$ It's straightforward to find the Taylor expansion for $\frac{1}{x^2}$ near $x=2$ using the Taylor series definition. This turns out to be $\frac{1}{4} - \frac{1}{4} (x-2) + \frac{3}{16}(x-2)^2 + \cdots$ I was trying to be cute by finding the expansion using the geometric series: $$ \begin{align} \frac{1}{x^2} &= \frac{1}{1 -(1 - x^2)} = \frac{1}{1-z} &\text{ where } z = 1-x^2\\ \end{align} $$ This is where I run into trouble. My initial guess was to expand around $x=0$ and then shift over by 2. $$ \begin{align} &1 + z + z^2+ \cdots\\ &1 + (1-x^2) + (1-x^2) ^2 + \cdots \\ &1 + (\ 1-(x-2)^2\ ) + (\ 1-(x-2)^2\ )^2+\cdots \end{align} $$ I looked at the Wikipedia page on the geometric series and see that the formula is different when the common ratio is not 1. $$a + ar + a r^2 + a r^3 + \cdots + a r^{n-1} = \sum_{k=0}^{n-1} ar^k= a \, \frac{1-r^{n}}{1-r}$$ This helped me to see that I had missed the $a$ term. I'm taking it on faith that the common ratio is less than one. The article points out that when $|r|<1$, the series will be $\frac{a}{1-r}$, which is the familiar geometric series. Somehow, my substitution seems to be wrong. Any help will be appreciated. AI: Your idea is sound... iff $$|1-x^2|<1\implies 0<x^2<2\implies |x|<\sqrt2$$ Now, you want to develop around $\,x=2\,$...not good, uh? My idea: $$\frac1x=\frac1{2+(x-2)}=\frac12\frac1{1+\left(\frac{x-2}2\right)}=\frac12\left(1-\frac{x-2}2+\frac{(x-2)^2}4-\ldots\right)\implies$$ $$\frac1{x^2}=\frac1x\frac1x=\frac14\left(1-\frac{x-2}2+\frac{(x-2)^2}4-\ldots\right)^2=$$ $$=\frac14\left(1-(x-2)+\frac34(x-2)^2-\frac12(x-2)^3+\ldots\right)$$ The only thing you need now to justify the above is to prove that $$\left|\frac{x-2}2\right|<1\iff |x-2|<2\iff\;\ldots$$
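A quick symbolic cross-check of the coefficients (a sympy sketch, substituting $u=x-2$); note that the cubic coefficient inside the parentheses works out to $-\frac12$, i.e. $-\frac18(x-2)^3$ in the expansion itself:

```python
import sympy as sp

u = sp.symbols('u')                                       # u stands for x - 2
expansion = sp.series(1 / (2 + u)**2, u, 0, 4).removeO()
print(sp.Poly(expansion, u).all_coeffs())
# [-1/8, 3/16, -1/4, 1/4]  ->  1/4 - u/4 + 3u^2/16 - u^3/8
```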
H: Marginal density, indicator function I'm trying to figure out how to find the marginal density for $Y$ of the following function: $$f_{X,Y}(x,y)=\begin{cases} 1 &\text{if } 0\le x \le2, \max(0,x-1)\le y \le \min(1,x) \\ 0 & \text{otherwise}\end{cases} $$ $$f_Y(y)=\int_{-\infty}^\infty f_{X,Y}(x,y) \, dx = \int_0^2 1 \cdot \mathbb{I}(x) \, dx$$ where I get that $$\mathbb{I(x)}=\begin{cases} 1 &\text{if } 0 \le y \le x \le 2-y \le 2 \\ 0 & \text{otherwise}\end{cases}$$ but this leads to $$f_Y(y)=\int_{-\infty}^\infty f_{X,Y}(x,y) \, dx = \int_0^2 1 \cdot \mathbb{I}(x) \, dx = \int_y^{2-y} 1 \, dx = 2(1-y) \text{ for } 0\le y\le1$$ This is not the right answer though ($Y$ should be uniformly distributed). AI: The support of the $f_{X,Y}$ is a parallelogram; rearranging the inequalities yields $$ f_{X,Y}(x,y)=\begin{cases}1 & \text{if $0\leq y\leq 1$ and $y\leq x\leq y+1$}\\ 0 & \text{else}\end{cases} $$ Hence for $0\leq y\leq 1$, $$ f_Y(y)=\int_{-\infty}^{\infty}f_{X,Y}(x,y)\,dx=\int_{y}^{y+1}1\,dx=1. $$ and $f_Y(y)=0$ for all other $y$. From here, all that is left is to observe that this is precisely the probability density function for the uniform distribution on $[0,1]$.
H: A question concerning the Hilbert space trace I am stuck with an equation regarding the trace on a Hilbert space $H$. The trace is defined in the book by Pedersen ("Analysis now", Sect. 3.4) as follows. We choose an orthonormal basis $\{ e_j \mid j \in J\}$ for $H$, and for every positive operator $T \in B(H)$ (the latter denoting the space of bounded operators mapping $H$ to itself) we derfine the trace of $T$ by $$ \text{tr}(T) = \sum \langle Te_j, e_j \rangle \,. $$ One can show that $$ \tag1 \text{tr}(T^*T) = \text{tr}(TT^*) $$ for every $T \in B(H)$, and from this one then shows that (Corollary 3.4.4. in Pedersen) $$ \tag2 \text{tr}(UTU^*) = \text{tr}(T) \qquad \text{if $U$ is unitary and $T \ge 0$}\,. $$ by using the fact that $T = (T^{1/2})^2$ together with replacing $T$ by $UT^{1/2}$ in equation (1). But here I am stuck, because from this I only get \begin{align} \text{tr}(UT^{1/2}(T^{1/2})^*U^*) &= \text{tr}((UT^{1/2})(UT^{1/2})^*) \\ &= \text{tr}((UT^{1/2})^*(UT^{1/2}))\\ &= \text{tr}((T^{1/2})^*T^{1/2}) \end{align} To conclude the identity (2) I need to assume that every positive operator $T$ can be written as $T = T^{1/2}(T^{1/2})^*$ .. is this correct ? If so, I'd love to understand why, how can I prove this? If not, what am I doing wrong? Thanks! AI: I believe that since $T$ is a positive operator, it has a positive square root. A positive operator is self-adjoint, so $T^{\frac{1}{2}} = (T^{\frac{1}{2}})^*$. Therefore, you can decompose $T$ into $T^{\frac{1}{2}} T^{\frac{1}{2}} = T^{\frac{1}{2}} (T^{\frac{1}{2}})^*$.
H: How do you formally prove that a function in several variable is really a function Let say for example that we define $f:\mathbb{R}^{3}\longrightarrow \mathbb{R}^{3}$ such that $f(x,y,z)=(y^{2},xz,xy^{2})$. My informal argument would be just that there is only one object that can be defined having three real numbers, but I'm just saying this intuitively. How do you prove it formally? AI: Often, it's useful to analyze a given function as a composition of "smaller" elementary functions. Here, we have the three coordinate functions $f_j : \mathbb{R}^3 \rightarrow \mathbb{R}$ ($j=1,2,3$) defined by $f_1(x,y,z)=y^2$, $f_2(x,y,z)=xz$ and $f_3(x,y,z)=xy^2$. You could even break these down further. For instance, $f_3$ is the product of the projection $(x,y,z) \mapsto x$ and the function $(x,y,z)\mapsto y^2$, and the product of two real-valued functions with common domain is also a real-valued function. Finally, given three real-valued functions $f_1,f_2,f_3$ with common domain $\mathbb{R}^3$, we can form the function $f(x,y,z)=(f_1(x,y,z),f_2(x,y,z),f_3(x,y,z))$ with domain $\mathbb{R}^3$ and codomain $\mathbb{R}^3$. This example kind of clouds the set theory; here's another example. Suppose $f : A \rightarrow B$ and $g : A \rightarrow C$ are both functions. Note that they share the domain $A$. Then we can form a function $h : A \rightarrow B \times C$ where $B \times C$ is the Cartesian product of the sets $B$ and $C$. The function $h$ is defined in terms of $f$ and $g$ via the formula $h(a)=(f(a),g(a))$ for $a \in A$. I don't believe this construction has a name. In fact, it can be written as a composition of the diagonal map $a \mapsto (a,a)$ going from $A \rightarrow A \times A$ along with the Cartesian product of the functions $f$ and $g$. Note that here I'm talking about the Cartesian product of two functions. Finally, to glue all of this thinking together, you need to prove that the composition of two functions is actually a function. It sounds silly, but this type of thinking is useful in more advanced classes: the composition of continuous functions is continuous, the composition of differentiable functions is differentiable, the composition of homomorphisms is a homomorphism, etc.
H: Show $X=\left\{x \in [0,1]: x \neq \frac1n\text{ for any }n \in \Bbb N\right\}$ is neither compact nor connected I am stuck on the following question: Let $X=\{x \in [0,1]: x \neq \frac1n: n \in \Bbb N\}$ be given the subspace topology. Then I have to prove that $X$ is neither compact nor connected. Can someone give some explanation about how to tackle it? Thanks in advance for your time. AI: Let's consider a picture of $X$: For convenience / as a suggestion, define $$U_n=(\tfrac{1}{n+1},\tfrac{1}{n})$$ so that $$X=U_1\cup U_2\cup\cdots\cup\{0\}.$$ Can you find an infinite open cover of $X$ with no subcover (demonstrating that $X$ is not compact)? Can you find two disjoint non-empty open sets of $X$ whose union is $X$? (recall that any union of open sets is itself an open set) Edit: I wrote this answer mistaking thinking that $\{0\}$ is open in $X$, when of course it is not. Therefore, the answer that was (intentionally) heavily suggested by what I wrote for compactness is not actually correct. TonyK's suggestion below, to give an infinite open cover of one of the $U_n$'s and then cover the rest of $X$, is correct. For connectedness, you can still break $X$ into $U_1$ and $U_2\cup U_3\cup\cdots\cup\{0\}$ because the latter is just $X\cap (-\frac{1}{2},\frac{1}{2})$, and hence is indeed an open set of $X$.
H: An identity for Lagrangian function For $a_1,a_2,\cdots,a_n$, let $f_i(x)$ be $$f_i(x)=\frac{\prod_{j\neq i}(x-a_j)}{\prod_{j\neq i}(a_i-a_j)}.$$ For $b_1,b_2,\cdots,b_n$, let $g_i(x)$ be $$g_i(x)=\frac{\prod_{j\neq i}(x-b_j)}{\prod_{j\neq i}(b_i-b_j)}.$$ Can we prove such an equality: $$\sum_{i=1}^n f_j(-b_i)g_i(-a_j)=1?$$ I need this in the calculation of the detrminant of $$\left(\frac{1}{a_i+b_j}\right)_{1\leq i,j\leq n}$$ by induction on $n$. AI: It's true and no horrible algebra needed to prove it. Remember that the Lagrange polynomials are defined in this way so that if given the points $ (b_1, y_1), \dots, (b_n, y_n) $, the function: $$ L(x) = \sum_i y_i g_i(x) $$ will go through the points, ie. $ L(b_i) = y_i $ for all i. Now for your sum, let: $$ f(x) = \sum_{i=1}^n f_j(-b_i)g_i(x) $$ This is the polynomial that goes through the n points $ (b_1, f_j(-b_1)), \cdots, (b_n, f_j(-b_n)) $. There is one unique (n-1)-order polynomial that goes through n distinct points, so $f(x)$ must be exactly $ f_j(-x) $. The fact that $ f(-a_j) = f_j(a_j) = 1 $ is the usual normalization, easily checked.
H: Greatest integer $n$ where $n \lt (\sqrt5 +\sqrt7)^6$ I'm really not sure how to do this. I factored out a power of $3$ and squared so that I have $2^3 (6+\sqrt{35})^3 \gt n$ , and I know that if I can prove that $12^3-1 \le (6+\sqrt{35})^3 \lt 12^3$ I am basically done, but I don't know how to do that. Any help is appreciated. Thanks! AI: Let $$N=(\sqrt{7}+\sqrt{5})^6 +(\sqrt{7}-\sqrt{5})^6.$$ If we imagine expanding via the Binomial Theorem we see that $N$ is an integer. The second term is pretty small, since already $\sqrt{7}-\sqrt{5}$ is well below $1$. So our $n$ is equal to $N-1$. Now expand. There is a fair bit of cancellation. We get $$N=2\left(7^3 + \binom{6}{2}(7^2)(5)+\binom{6}{4}(7)(5^2)+5^3\right).\tag{1}$$ Finding $N$ explicitly is fairly easy, it is exact arithmetic . Remarks: $1.$ Much more efficiently, we can use exactly the same idea, but make use of your calculation of the square. Consider $$(6+\sqrt{35})^3+(6-\sqrt{35})^3.$$ This is equal to $2\left(6^3+3(6)(35)\right)$. $2.$ What we did is not a trick. The general idea is of wide applicability, it is a method.
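A short numeric check of the value produced by the method above (Python sketch):

```python
from math import comb, sqrt

# N = (sqrt7 + sqrt5)^6 + (sqrt7 - sqrt5)^6, computed exactly from expansion (1)
N = 2 * (7**3 + comb(6, 2) * 7**2 * 5 + comb(6, 4) * 7 * 5**2 + 5**3)
print(N)                         # 13536
print((sqrt(5) + sqrt(7))**6)    # ~13535.9953, so the greatest integer below it is N - 1 = 13535
```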
H: Can $\mathbb{R}$ be obtained as union of disjoint translations of a dense subset? Doing some homework I had to find out maximal subgroups of $\mathbb{R}$ and my first approach was that subgroups of $\mathbb{R}$ are discrete or dense and, of course, a maximal subgroup $G$ can't be discrete and $\mathbb{R}/G \simeq \mathbb{Z}_p$ for some $p$ prime number. Since $\mathbb{Z}_p$ is cyclic then there should be $u \in \mathbb{R}$ for which $$\mathbb{R} = G \bigsqcup (u + G) \bigsqcup (2u + G) \bigsqcup \cdots \bigsqcup ((p-1)u + G) $$ and so I turned out to prove There isn't any proper dense subset $X \subset \mathbb{R}$ for which $\mathbb{R}$ can be obtained my means of union of a finite number of disjoint translations of $X$. I accomplished my original problem in another way, although I couldn't prove or disprove my original approach. AI: Let $$ X_n \;=\; \Bigl(\bigl[n,n+\tfrac12\bigr)\cap \mathbb{Q}\Bigr) \,\cup\, \Bigl(\bigl[n+\tfrac12,n+1\bigr)-\mathbb{Q}\Bigr) $$ for each $n\in\mathbb{Z}$, and let $X = \bigcup_{n\in\mathbb{Z}} X_n$. Then $X$ is clearly dense, and $\mathbb{R}$ is the disjoint union of $X$ and $X+\tfrac12$.
H: Does $n \log\left(\cos\left(\frac{\pi\,n!}{n^2}\right )\right ) \neq 0 \implies n = p$? Let $p$ denote a prime $> 3$. Take any $\text{odd }n\geq3,\;n \in \mathbb{N}$. How could one (dis)prove that: $$n \log\left(\cos\left(\frac{\pi\,n!}{n^2}\right )\right ) \neq 0 \implies n = p$$ Although it doesn't seem practical for checking whether $x = p$, it is certainly nice lookin'! AI: The implication actually goes both ways. Clearly $$n\ln\left(\cos\dfrac{\pi n!}{n^2}\right)=0$$ if and only if $\cos\dfrac{\pi n!}{n^2}=1$. Now $\cos k\pi=1$ if and only if $k$ is an even integer, so $\cos\dfrac{\pi n!}{n^2}=1$ if and only if $\dfrac{n!}{n^2}=\dfrac{(n-1)!}n$ is an even integer. If $n$ is prime, this fraction is not even an integer, so it’s certainly not an even integer, and $$n\ln\left(\cos\dfrac{\pi n!}{n^2}\right)\ne 0\;.$$ If $n\ge 3$ is composite, then $\dfrac{(n-1)!}n$ is an integer, and the numerator has at least one even factor, $n-1$. Thus, the numerator is even, and the denominator is an odd divisor of that numerator, so the quotient is even, and $$n\ln\left(\cos\dfrac{\pi n!}{n^2}\right)=0\;.$$
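The equivalence proved above (the expression is nonzero exactly when $\frac{(n-1)!}{n}$ fails to be an even integer) can be checked for small odd $n$; the Python sketch below tests the divisibility condition rather than the floating-point cosine, which would be numerically useless once $n!$ is large:

```python
from math import factorial

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for n in range(3, 40, 2):                      # odd n >= 3
    nonzero = factorial(n - 1) % (2 * n) != 0  # (n-1)!/n is NOT an even integer
    assert nonzero == is_prime(n), n
print("nonzero exactly at the odd primes, as claimed")
```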
H: What is the difference between a ball and a neighbourhood? I am presently reading chapter two of Rudin, Principles of Mathematical Analysis (ed. 3). He provides the following definitions: Definition: If $\boldsymbol{x} \in \mathbb{R} ^ k$ and $r > 0$, the open ball $B$ with center at $\boldsymbol{x}$ and radius $r$ is defined to be the set of all $\boldsymbol{y} \in \mathbb{R} ^ k$ such that $| \boldsymbol{y} - \boldsymbol{x} | < r.$ Definition: A neighbourhood of a point $p$ is a set $N_r(p)$ consist of all points $q$ such that $d(p,q) < r$. The number $r$ is called the radius of $N_r(p)$. What I have been attempting to figure out is what the difference between these two definitions are. Ilya, in the following question provides the following description - "The neighborhood of a point $x\in \Bbb R$ is any subset $N_x\subseteq \Bbb R$ which contains some ball $B(x,r)$ around the point $x$. Note that in general one does not ask neighborhood to be open sets, but it depends on the author of a textbook you have in hands." In Kaplansky, Set Theory and Metric Spaces he presents an example where by setting the distrance function $d(p,q) = |a-b|$ then we obtain a metric space. Then combining this statement and the part of Ilya's answer that a neighbourhood contains some ball, a ball is like a special case of a neighbourhood. The following two theorems seem to give support for this case: Theorem $27$ from Kaplansky: Any open ball in a metric space is an open set. Theorem $2.19$ from Rudin: Every neighbourhood is an open set. So essentially they are both open sets, however, the neighbourhood has a more general distance function. I would appreciate some clarification on this matter. AI: As you have seen, different texts define their terms somewhat differently, but the most common definitions are as follows: If $(X,d)$ is a metric space (or a pseudometric space), then the open ball of radius $r > 0$ about the point $x \in X$ is the set of all $y \in X$ such that $d(x, y) < r$. If $(X,d)$ is a metric space (or a pseudometric space), then a set $U \subseteq X$ is open iff for each $x \in U$ there is an $r > 0$ such that the open ball about $x$ of radius $r$ is a subset of $U$. An open neighborhood of a point $x$ in a metric space (or, in fact, any topological space) is any open set containing $x$. A neighborhood of a point $x$ in a metric space (or any topological space) is any subset of the space including, as a subset, an open neighborhood of $x$. Beware: the notations used for open balls vary radically among texts, with almost all imaginable permutations of where the point goes, where the radius goes, and (in some cases) where the name of the metric goes.
H: Why doesn't sign (appear to) change in inequality? Given the equation $\frac{1}{x}\gt -1$, I would assume one would work it as $$\frac{1}{x}\gt-1$$ $$x\cdot\frac{1}{x}\gt-1\cdot x$$ $$1 \gt -x$$ $$-1\cdot 1 \gt -x \cdot -1$$ $$-1 \lt x $$ which is incorrect, because the answers should be $x\gt0$ or $x\lt-1$. When I put the correct answers in, it makes sense, but I can't see how to get them algebraically. Is there some rule that I'm missing or misusing? AI: Remember that when you multiply an inequality by a negative number, you reverse the direction of the inequality. Thus, $$x\cdot\frac1x>-1\cdot x$$ is true only if $x>0$. If $x<0$ you’re multiplying the original inequality by a negative number, and you get $$x\cdot\frac1x<-1\cdot x\;.$$ Thus, you actually end up with two cases: $$x>0\quad\text{and}\quad x\cdot\frac1x>-1\cdot x\tag{1}$$ and $$x<0\quad\text{and}\quad x\cdot\frac1x<-1\cdot x\;.\tag{2}$$ $(1)$ simplifies to $x>0$ and $1>-x$ and thence to $x>0$ and $-1<x$; $x>0$ already implies that $x>-1$, so this case gives you the solutions $x>0$. $(2)$ simplifies to $x<0$ and $1<-x$ and thence to $x<0$ and $-1>x$; $x<-1$ already implies that $x<0$, so this case gives you the solutions $x<-1$.
H: Understanding Equivalence and Relations Can someone please explain these answers? I have reviewed the slides and read about properties of equality but I still don't understand how to apply it to these sets. For each of the following relations on the set of integers list all that apply (Reflexive, Symmetric, Antisymmetric, Transitive, or none): $R1 = \{(a, b)\mid a \neq b\}$; Solution: Symmetric $R2 = \{(a, b)\mid a < b\}$; Solution: Antisymmetric, Transitive $R3 = \{(a, b) \mid a = b\text{ or }a = b + 1\}$; Solution: Reflexive, Antisymmetric $R4 = \{(a, b) \mid a = b\}$; Solution: Reflexive, Symmetric, Antisymmetric, Transitive $R5 = \{(a, b) \mid a = 2b\}$; Solution: Antisymmetric $R6 = \{(a, b) \mid a < 10 - b\}$; Solution: Symmetric AI: Let's review what each property means. Your question deals with relations on the integers; so, let's say that we are working with a relation $R\subseteq\mathbb{Z}\times\mathbb{Z}$. Symmetric: $R$ is called symmetric if for all $(a,b)\in R$, we also have $(b, a)\in R$. For instance, $R_1$ is symmetric: if $(a,b)\in R_1$, then $a\neq b$, which implies $b\neq a$ and therefore $(b,a)\in R_1$. On the other hand, $R_3$ is not symmetric; $(5,4)\in R_3$, but $(4,5)\notin R_3$. Antisymmetric: $R$ is called antisymmetric if whenever $(a,b)\in R$ and $a\neq b$, we have $(b,a)\notin R$. So, for instance, $R_1$ is not antisymmetric; $(3,5)$ and $(5,3)$ are both elements of $R_1$. On the other hand, $R_5$ is antisymmetric: if $a=2b$ and $a\neq 0$, then $b\neq 2a$ and so $(b,a)\notin R_5$; if $a=0$ and $a=2b$, then $a=b=0$, so it is fine. Transitive: $R$ is called transitive if whenever $(a,b)\in R$ and $(b,c)\in R$, we also have $(a,c)\in R$. For instance, $R_1$ is not transitive: $(3,5)\in R_1$ and $(5,3)\in R_1$, but $(3,3)\notin R_1$. On the other hand, $R_2$ is transitive: if $(a,b),(b,c)\in R_2$, then $a<b$ and $b<c$; therefore $a<c$, and $(a,c)\in R_2$. Reflexive: $R$ is called reflexive if $(a,a)\in R$ for all $a$. So, $R_1$ is not reflexive: if $(a,a)\in R_1$, then $a\neq a$... which is clearly not true. On the other hand, $R_3$ is reflexive; since any $a\in\mathbb{Z}$ satisfies $a=a$ (and, therefore, either $a=a$ or $a=a+1$), we have $(a,a)\in R_3$. Hope this helps.
H: $k$-cells: Why $a_i < b_i$ instead of $a_i \le b_i$ In Rudin, The Principles of Mathematical Analysis, there is the following definition: Definition: If $a_i < b_i$ for $i=1,2,...,k$, the set of all points, $ \boldsymbol{x} = ( x_1, x_2, ..., x_k )$ in $ \mathbb{R} ^ k $ whose coordinates satisfy the inequalities $ a_i \le x_i \le b_i$ $( 1 \le i \le k )$ is called a $k$-cell. The examples that are provided give a $1$-cell as an interval and a $2$-cell as a rectangle. What I am trying to figure out is why the definition is given this particular way. In particular, why the definition is not phrased the same way, with the exception that $a_i \le b_i$. Upon consideration, the only difference between the two situations is what may arise in a particular $k$-cell. For example, consider a $3$-cell. This would provide the set of all $\boldsymbol{x} = (x_1, x_2, x_3)$ where each $a_i \le x_i \le b_i$. Such a characterization gives a cuboid. However, by allowing for the possibility that $a_i = b_i$, we allow for the possibility that at least one $x_i$ is fixed in the set. Thus in the $3$-cell case we would be allowed to have cuboids, planes, lines, and points. Thus the restriction $a_i \le b_i$ seems to be more flexible. Thus my question is, why is the more restrictive $a_i < b_i$ used rather than $a_i \le b_i$. AI: Well, think about the specific case of a $2$-cell under your less-restrictive definition. For instance, we could take $\vec{a}=(0,0)$ and $\vec{b}=(0,1)$. The result is actually the line segment $\{(0,y)\mid 0\leq y\leq 1\}$. This is a 1-dimensional object... and yet we would be calling it a 2-cell. The big idea with Rudin's definition of a $k$-cell is that we have to have room to move in each of the $k$ directions - so that the object is actually $k$-dimsional, rather than at most $k$-dimensional.
H: Commuting squares in abelian categories Here $A,B,C$ and $D$ are all objects in an Abelian category. $\require{AMScd} \begin{CD} A @> >> B @> >> C;\\ @VVV @VVV @VVV\\ D @> >>E @> >> F; \end{CD} $ The square $ABCDEF$ commutes (the outer square ) and the square $BEFC$ also commutes. Is it true that the square $ADBE$ commutes? If so why? AI: If $F=0$, the hypotheses hold trivially, and there is no reason for the conclusion to hold.
H: Showing that a transformation $T:\mathbb R^3 \to \mathbb R^2$ is linear OK, I am trying to prove the following transformation is linear, and find the basis for $\ker(T)$ and Im$(T)$ (also denoted in our textbook by $N(T)$ and $R(T)$ ). Then we're suposed to find the nullity and rank of $T$. $T: \Bbb{R}^3 \rightarrow \Bbb{R}^2$ defined by $T(a_1, a_2, a_3) = (a_1-a_2, 2a_3)$ We want to see that the transformation preserved addition and scalar multiplication. So I define vector $a$ as $(a_1, a_2, a_3)$ and $b$ as $(b_1, b_2, b_3)$. So the first question is whether $T((a_1+b_1, a_2+b_2, a_3+b_3)$ = $T((a_1-a_2), 2a_3) + T(b_1-b_2, 2b_3)$ and when I plug in vectors $a+b$ to the transformation I get: $((a_1+b_1)-(a_2+b_2), 2(a_3+b_3))$ which works. So addition is preserved. The next question is whether it preserves scalar multiplication, or if $T(ca+b) = cT(a) + T(b)$ and as it happens: $T(ca_1+b_1, ca_2+b_2, ca_3+b_3) = ((ca_1+b_1-ca_2+b_2), 2(ca_3+b_3))$ and then if we break up the vectors we find that we get $(ca_1-ca_2, 2ca_3)+(b_1-b_2, 2ba_3)$ so the transformation is linear. To find the kernel we look for the set of vectors for which $T(a_1,a_2,a_3) = 0$. That happens whenever $a_1 = a_2$ and $a_3 = 0$ But that is where I get stuck because the definition of a kernel doesn't seem to fit. What is the basis for the kernel in this case? If a kernel is a set of vectors then this is making little or no sense to me from the get-go. Because I am not sure what the basis would be if the set of vectors are all those where $a_1 = a_2$ unless it's something like $(a_1, a_2, 0)$. And the dimension of the kernel is 2, I wold think intuitively, but I want to better understand why that is so I can get through the rest of the problem. AI: Your verification that $T$ is a linear transformation could use some work. Try expanding out the first expression to show explicitly that $T(x + y) = T(x) + T(y)$. Also, check your (linear) algebra in your proof that $T$ preserves scalar multiplication. As for the kernel, you are correct $T(a_1, a_2, a_3) = 0$ when $a_1 = a_2$ and $a_3 = 0$. So $$N(T) = \{(a, a, 0) : a \in \mathbb{R}\}$$ Can you see why $\{(1, 1, 0)\}$ is a basis for $N(T)$? It's pretty clear that $T$ is onto. For any $(x,y) \in \mathbb{R}^2$, $(x, 0, y/2) \mapsto (x, y)$ so $R(T) = \mathbb{R}^2$. You can also use the rank-nullity theorem, which gives us $$3 = \dim \mathbb{R}^3 = \dim N(T) + \dim R(T) = 1 + \dim R(T)$$
H: Motivation for Jordan Canonical Form I took linear algebra and understood the proof that linear operators on a vector space over an algebraically closed field have a Jordan Canonical Form. Why should I care about this theorem? I understand that it can be useful in doing some computations, but it seems that these computations are quite rare. Indeed, I am not puzzled by diagonalization or triangularization at all. They both have practical and theoretical uses, but even more than that, they just seem like nice things to have. Can someone explain why Jordan Canonical Form is a "nice thing to have"? AI: The most generic answer: any time that we can reduce a problem over an incredibly general object (say, a matrix) to a problem in which we have more information at our fingertips (say, the same problem but over matrices that are in JCF), we make life easier - both in terms of proving theory and in terms of practical computations. To be more specific to the situation at hand: the Jordan canonical form is sort of the next-best-thing to diagonalization. If the matrix is diagonalizable, then its JCF is diagonal; if it isn't, then what you get is at least block diagonal, and the blocks come in a predictable form.
H: $\lim_{x\rightarrow 0^+}x^x$ How can I calculate $\lim_{x\rightarrow 0^+}x^x$? I can only write it in the form $e^{x\ln x}$. I would like to use L'Hospital rule somehow, but I can't write it in form of fractions. AI: HINT: $$y=x^x\iff \ln y=x\ln x=\frac{\ln x}{\frac1x}$$ which is of the form $\frac\infty\infty$ as $x\to 0^+$ So applying L'Hospital's Rule $$\lim_{x\to0^+} \ln y=\lim_{x\to0^+}x\ln x=\lim_{x\to0^+}\frac{\ln x}{\frac1x}=\lim_{x\to0^+}\frac{\frac1x}{-\frac1{x^2}}=\lim_{x\to0^+}(-x)=0$$
H: Number of positive eigenvalues of a symmetric matrix $A$ with a given relationship Suppose $A$ is a $3\times 3$ symmetric matrix s.t. $$\begin{pmatrix} x & y & 1 \\ \end{pmatrix} A \begin{pmatrix} x \\ y\\ 1\end{pmatrix} = xy -1 $$ Let $p$ be the number of positive eigenvalues of $A$ and $q = \operatorname{rank}(A)-p$; then $p=?$ $q=?$ I can't find what matrix $A$ should be. AI: Hint: I would try brute force, if you have no other idea, i.e. take an arbitrary symmetric $3\times 3$-matrix $\begin{pmatrix}a&b&c\\b&d&e\\c&e&f\end{pmatrix}$. Then compute the left hand side and compare coefficients to the right hand side. You get a system of easy linear equations that you can solve. Added: As @HenningMakholm showed for a Duplicate of this Question, solving for the entries of symmetric $A$ gives: $$ \begin{pmatrix} 0 & \frac{1}{2} & 0 \\ \frac{1}{2} & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix} $$ a block diagonal form as promised in the Comment below. The upper $2\times 2$ block has one positive and one negative eigenvalue, $\pm \frac{1}{2}$, and the lower block obviously has one negative eigenvalue.
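A quick numerical check (Python/numpy sketch) that this matrix really reproduces $xy-1$ and has eigenvalues $-1,-\frac12,\frac12$, so that $p=1$ and $q=\operatorname{rank}(A)-p=3-1=2$:

```python
import numpy as np

A = np.array([[0, 0.5, 0],
              [0.5, 0, 0],
              [0, 0, -1.0]])

rng = np.random.default_rng(0)
for x, y in rng.normal(size=(5, 2)):          # a few random test points
    v = np.array([x, y, 1.0])
    assert np.isclose(v @ A @ v, x * y - 1)

print(np.linalg.eigvalsh(A))   # [-1.  -0.5  0.5]  ->  p = 1, rank = 3, q = 2
```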
H: related rates (calculus) questions The minute and hour hands of the GPO clock are 2m and 1.5m long respectively. How fast are their ends approaching at (a) 2 o'clock (b) 6 o'clock? A fuel storage tank is in the shape of a right circular cone of base diameter 10m and height 20m and is being filled at a constant rate of 20m^3 per hour. Find the rate at which the fuel level is rising at the instant when it is 8m deep. Please help! AI: For the second part: Volume of a right circular cone $\;V=\dfrac 13\pi r^2 h$. Rate of volume increase $\;\dfrac {dV}{dt}=20\,m^3/\text{hour}$. Using similar triangles we can find the radius of the fuel surface when the fuel is 8 m deep: $$\Delta ABC\sim \Delta ADE$$ $$\dfrac {AB}{BC}=\dfrac {AD}{DE}$$ $$\dfrac{20}{5}=\dfrac {20-8}{r_1}$$ $$r_1=3$$ Now for the rate of change of the fuel height at height $=8$ m: the empty space above the fuel is a smaller cone of height $H=20-h$ and radius $r=\dfrac H4$ (here $H=12$ m and $r=3$ m). Since $$V_{\text{fuel}}=\dfrac 13\pi\, 5^2\cdot 20-\dfrac 13\pi \left(\dfrac H4\right)^2 H=\dfrac{500\pi}{3}-\dfrac{\pi H^3}{48}$$ $$\dfrac {dV}{dt}=-\dfrac{\pi H^2}{16}\dfrac {dH}{dt}=\dfrac{\pi H^2}{16}\dfrac {dh}{dt}\qquad(\text{since } H=20-h)$$ $$20=\dfrac{\pi\cdot 12^2}{16}\dfrac {dh}{dt}=9\pi\dfrac {dh}{dt}$$ $$\dfrac {dh}{dt}=\dfrac {20}{9\pi}\text{ m per hour} $$
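A numerical sanity check of this rate (Python sketch, taking the tank apex-up as in the similar-triangle step, so the fuel surface has radius 3 m when the fuel is 8 m deep):

```python
import numpy as np

def fuel_volume(h):
    """Volume of fuel at depth h in an apex-up cone of height 20 m and base radius 5 m."""
    full = (1 / 3) * np.pi * 5**2 * 20
    empty = (1 / 3) * np.pi * ((20 - h) / 4) ** 2 * (20 - h)   # empty cone above the fuel
    return full - empty

h, dh = 8.0, 1e-6
dV_dh = (fuel_volume(h + dh) - fuel_volume(h - dh)) / (2 * dh)  # ~ 9*pi
print(20 / dV_dh, 20 / (9 * np.pi))   # both ~ 0.7074 m per hour
```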
H: Remainders of compactifications are images of the Stone-Čech remainder. I need to show that if $\gamma X$ is a compactification of $X$, then $\gamma X\setminus X$ is the continuous image of $\beta X\setminus X$. I know that there exists a continuous function from $\beta X$ onto $\gamma X$ that is the identity on $X$. Furthermore, I know that this function can be defined as follows: For $u\in\beta X$ if $u\in X$ let $f(u)=u$ else, let $f(u)\in \bigcap_{Z\in u} \text{cl}_{\gamma X} Z$ Another way of stating what I want to prove is, for $x\in X$, $f(p)=x$ implies $p=x$. To show this I think it would suffice to show $p$ (the ultrafilter) contains every zero set which contains $x$. Here is my attempted proof... Suppose $x\in X$ and $f(p)=x$, but $p\neq x$. Then there is a zero set $Z$ with $x\in Z$ but $Z\notin p$. Then (by an ultrafilter property) $Z\cap K=\phi$ for some zero set $K\in p$. I want to get a contradiction by showing these disjoint zero set do not have disjoint closures in $\beta X$. $x\in \text{cl}_{\beta X}Z$ and $p\in \text{cl}_{\beta X}K$ so $f(p)=x\in f[\text{cl}_{\beta X}K]=\text{cl}_{\gamma X}K$ So $x\in \text{cl}_{\beta X}Z\cap \text{cl}_{\gamma X}K$. But this is not quite what I need! AI: One can actually prove a more general result without having to deal with the details of any construction of the Čech-Stone compactification: Lemma. Let $X$ be a Hausdorff space and $f:X\to Y$ a continuous surjection. Let $D$ be a dense subset of $X$. If $f\upharpoonright D$ is a homeomorphism from $D$ to $f[D]$, then $f[X\setminus D]=Y\setminus f[D]$. Proof. Suppose not, and let $p\in D$ and $q\in X\setminus D$ be such that $f(p)=f(q)=y$. Let $U_p$ and $U_q$ be disjoint open nbhds of $p$ and $q$, respectively. Then $U_p\cap D$ is an open nbhd of $p$ in $D$, so $f[U_p\cap D]$ is an open nbhd of $y$ in $f[D]$. Thus, $f[U_p\cap D]=V\cap f[D]$ for some open nbhd $V$ of $y$. Let $W$ be any open nbhd of $q$; then $W\cap U_q\cap D\ne\varnothing$, so we may pick $x\in W\cap U_q\cap D$. Then $f(x)\in f[U_q\cap D]$, and $f[U_q\cap D]$ is disjoint from $f[U_p\cap D]=V\cap f[D]$ (since $f\upharpoonright D$ is a homeomorphism), so $f(x)\notin V$. That is, $f[W]\nsubseteq V$, and $f$ is not continuous at $q$, contrary to hypothesis. $\dashv$ In your setting $X$ is a dense subset of the Hausdorff space $\beta X$, and $f\upharpoonright X$ is the identity, which is certainly a homeomorphism.
H: Proving principle of the Iterated Suprema Let $X$ and $Y$ be nonempty sets and let $h : X\times Y \to \mathbb{R}$ have bounded range in $\mathbb{R}$. Let $F: X \to\mathbb{R}$ and $G : Y \to \mathbb{R}$ be defined by $F(x):=\sup\{h(x,y) : y\in Y\}$, and $G(y) := \sup\{h(x,y) : x\in X\}$. Establish the Principle of the Iterated Suprema: $\sup\{h(x,y) :x \in X, y \in Y\} = \sup\{F(x) : x \in X\} = \sup\{G(y) : y \in Y\}$. This proof is quite long but I'm having trouble with it. Here is what I have so far. Proof Since $h(x,y)$ is bounded we know that $\sup\{h(x,y) :x \in X, y \in Y\}$ exists. Let $T=\sup\{h(x,y) :x \in X, y \in Y\}$. By definition we know that $T \geq h(x_0,y_0)$ for arbitrary $x_0 \in X$ and $ y_0 \in Y$, since $h(x_0,y_0) \in\{h(x,y) :x \in X, y \in Y\}$ and $h(x_0,y_0) \in \{h(x_0,y) : y \in Y\}$. Since $x_0$ and $y_0$ are arbitrary it follows that $T$ is an upper bound for $F(x_0)= \sup\{h(x_0,y) : y\in Y\}$. $\sup\{F(x) : x \in X\}$ exists since $F(x_0)$ was arbitrary. Thus $T \geq \sup\{F(x) : x \in X\}$. This is where I stopped, since I do not know if what I have done so far is correct and I am stuck proving $T \leq \sup\{F(x) : x \in X\}$ in order to show $T=\sup\{F(x) : x \in X\}$. AI: I am omitting the $x \in X$, etc, for simplicity. $\sup_{x',y'} h(x',y') \ge h(x,y)$ for all $x,y$. Hence $\sup_{x',y'} h(x',y') \ge G(y)=\sup_{x'} h(x',y)$ for all $y$, hence $\sup_{x',y'} h(x',y') \ge \sup_{y'}G(y')$. Similarly for $F$. Let $\epsilon>0$, then we have $x_1,y_1$ such that $h(x_1,y_1) \ge \sup_{x',y'} h(x',y') - \epsilon$. Clearly $\sup_{y'}G(y') \ge G(y_1) \ge h(x_1,y_1) \ge \sup_{x',y'} h(x',y') - \epsilon$. Since $\epsilon$ was arbitrary, we have $\sup_{y'}G(y') \ge \sup_{x',y'} h(x',y')$. Similarly for $F$.
H: Let $A$ and $B$ be $n \times n$ complex matrices. Pick out the true statements. Let $A$ and $B$ be $ n \times n$ complex matrices. Pick out the true statements: a) If $A$ and $B$ are diagonalizable, so is $A + B$ b) If $A$ and $B$ are diagonalizable, so is $AB$ c) If $A^2$ is diagonalizable, then $A$ is diagonalizable d) If $A$ is real symmetric and $A^2$ is diagonalizable, then $A$ is diagonalizable. I am not sure about a and b, but c is true and d is false, as $A^2$ is diagonalizable so $D=PA^2P^{-1}$ and we can write $\sqrt{D}=\sqrt{P}A\sqrt{P^{-1}}$, as $A$ is real symmetric so it may have negative eigenvalue so square root may not exists, but if it were complex matrix then d would have also true. am I right? AI: Hints for first two parts: a) Try $A=\begin{pmatrix} 1&3\\ 1&2 \end{pmatrix},\ B=\begin{pmatrix} -1&-2\\ -1&-2 \end{pmatrix}$. [edited] b) $A=\begin{pmatrix} 0&0\\ 1&1 \end{pmatrix},\ B=\begin{pmatrix} 1&0\\ 1&0 \end{pmatrix}$.
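A quick symbolic check of the two counterexamples in the hint (a sympy sketch):

```python
import sympy as sp

# Part (a): A and B diagonalizable, but A + B is not.
A = sp.Matrix([[1, 3], [1, 2]])
B = sp.Matrix([[-1, -2], [-1, -2]])
print(A.is_diagonalizable(), B.is_diagonalizable(), (A + B).is_diagonalizable())
# True True False   (A + B = [[0,1],[0,0]] is a nonzero nilpotent matrix)

# Part (b): A and B diagonalizable, but AB is not.
A = sp.Matrix([[0, 0], [1, 1]])
B = sp.Matrix([[1, 0], [1, 0]])
print(A.is_diagonalizable(), B.is_diagonalizable(), (A * B).is_diagonalizable())
# True True False   (AB = [[0,0],[2,0]] is a nonzero nilpotent matrix)
```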
H: If a morphism between affine schemes is dominant, is the corresponding ring morphism injective? Suppose we have $\phi$ a ring morphism from $A$ to $B$, let $X=\operatorname{Spec}A$ , $Y=\operatorname{Spec}B$ and $\psi$ is the induced morphism of affine schemes. Is it true that if $\psi$ dominant, than $\phi$ is injective? AI: Hint: If $A$ and $B$ each only have one prime ideal, then $X$ and $Y$ are singletons, so we would have to have $\psi(Y)=X$, hence $\psi(Y)$ is dense in $X$, hence $\psi$ is dominant. Can you think of a ring morphism between two such rings $A$ and $B$ that is not injective?
H: Substituting an equation into itself, why such erratic behavior? Until now, I thought that substituting an equation into itself would $always$ yield $0=0$. What I mean by this is for example if I have $3x+4y=5$, If I substitute $y=\dfrac {5-3x}{4}$, I will eventually end up with $0=0$. However, consider the equation $\large{\sqrt {x+1}+\sqrt{x+2}=1}$ . If we multiply by the conjugate, we get $\dfrac {-1}{\sqrt{x+1}-\sqrt{x+2}}=1$, or $\large{\sqrt{x+2}-\sqrt{x+1}=1}$. Now we can set this equation equal to the original, so $\sqrt{x+2}-\sqrt{x+1}=\sqrt {x+1}+\sqrt{x+2}$ , and you get $0=2 \sqrt{x+1}$ which simplifies to $x=-1$ , which is actually a valid solution to the original! So how come I am not getting $0=0$ , but I am actaully getting useful information out of this? Is there something inherently wrong with this? Thanks. AI: You didn't actually substitute anything (namely a solution for $x$) into the original equation; if you would do that the $x$ would disappear. Instead you combined the equation with a modified form of itself to obtain a new equation that is implied by the original one; the new equation may or may not have retained all information from the original one. As it happens the new equation has a unique solution and it also solves the original equation; this shows the new equation implies the original one, and you did not in fact loose any information. If you consider the operation of just adding a multiple of an equation to itself, you can see what can happen: in most cases you get something equivalent to the original equation, but if the multiple happened to be by a factor$~-1$ then you are left with $0=0$ and you have lost (in this case all) information contained in the equation.
H: Which of these statements about biholomorphic functions $f \colon D(0, 1) → D(0, 1)$ is true? $f \colon D(0, 1) → D(0, 1)$ is a biholomorphic function. a) $f$ must be constant b) $f$ must have a fixed point c) $f$ must be a rotation d) $f$ must fix the origin. Any such map looks like $e^{i\alpha}{(z-a)\over (1-\bar{a}z)}$, $a\in D$, $\alpha\in [0,2\pi]$, so is c the only correct option? AI: It helps to keep in mind the classification of biholomorphic self-maps of $\mathbb D$ (also known as orientation-preserving isometries of the hyperbolic plane in the disk model) into three types: hyperbolic, which look like translation and have no fixed points in $\mathbb D$; parabolic, which look like rotation about a boundary point (which is not a point in $\mathbb D$) and likewise have no fixed points in $\mathbb D$; and elliptic, which look like rotation about a point in $\mathbb D$ (which is a fixed point). (The author of the materials linked above is Colleen Robles, as far as I can tell.) a) is clearly false: a bi-(anything) function cannot be constant. b), c), d) are each disproved by an example of a hyperbolic or parabolic isometry.
H: Theorem by Whitney For $0<k<\infty$ and any $n$-dimensional $C^k$ manifold the maximal atlas contains a $C^\infty$ atlas on the same underlying set by a theorem due to Whitney. Could someone please point me to where I can find the theorem and a proof thereof? AI: Morris Hirsch's Differential Topology text contains a proof. I don't have the text here, but it appears fairly early in the textbook. The key idea is in his theorem on increasing the order of differentiability of a function.
H: Why isn't the Ito integral just the Riemann-Stieltjes integral? Why isn't the Ito integral just the Riemann-Stieltjes integral? What I mean is, given a continuous function $f$, some path of standard brownian motion $B$, and the integral: $$\int_0^Tf(t)\;dB(t).$$ So what if we can't apply the change of variables formula to make sense of $$\int_0^Tf(t)B'(t)\;dt,$$ the Riemann-Stieltjes integral never required differentiability of the integrator anyways. Is there a reason to distinguish the Ito integral from the Riemann-Stieltjes integral above and beyond the need to develop a theory (Ito Calculus) to get around all the problems caused by the failure of change of variables? AI: First of all, Brownian motion is almost surely nondifferentiable meaning that you can't just apply the rule you cite to Stieltjes integrals. Second, in essence the Ito integral is Riemann Stieltjes integration when you observe the path of Brownian motion, but not entirely. You can think of it as Stieltjes integration where the "integrator" as you call it has an extra variable of dependence, in this case on the probability space of realizable Brownian paths. Specifically $B_t$ is a random variable. The definition is then essentially the same as its a limit over partitions. However Brownian motion is not of bounded variation so you cannot apply the usual definitions of Riemann Stieltjes to evaluate the integrals along some realized path. In particular you get the integral evaluated as a limit of sums over partitions converges in probability.
H: Sentential Logic Below is a question comes from the book How to Prove It written by Daniel J. Velleman. Let $P$ stand for the statement, “I will buy the pants” and $S$ for the statement “I will buy the shirt.” What English sentences are represented by the following expressions? $\lnot(P \land \lnot S)$ The given answer is, “I won't buy the pants without the shirt.“ I don't understand the answer. Can someone explain? Or maybe the answer is wrong? AI: Remember that, by DeMorgan's Laws, $$\lnot (A\land B)\iff(\lnot A\lor\lnot B),$$ so $$\lnot (P\land \lnot S)\iff \lnot P\lor(\lnot\lnot S)\iff\lnot P\lor S.$$ Our goal is to describe what it means for $\lnot (P\land \lnot S)$ to be true. So, let's agree that it is true, and see what we conclude. If you don't buy the pants, then $\lnot P$ is true, and hence so is $\lnot P\lor S$, regardless of what $S$ is. However, if you do buy the pants, then $\lnot P$ is false, and in this case, the only way $\lnot P \lor S$ can be true is if $S$ is true, i.e. if you buy the shirt. Thus, to say that $\lnot (P\land \lnot S)$ is true is to say that maybe you won't buy the pants, maybe you will; but if you do, you'll definitely also buy the shirt.
H: How does this vector addition work in geometry? I saw the accepted answer to the question: Finding a point along a line a certain distance away from another point! I am not getting how to use it actually to find the coordinates of the new point at a given distance. This is because I am confused between how to translate to/from the Cartesian system and the vector system. So please explain me the following by walking through the solution suggested in that answer with the following example data. Suppose I have two points $(0,0)$ and $(1,1)$ and I want to find a point at a distance which is 3/5th of the total distance between the points (i.e. $\frac{3}{5}\sqrt{2})$ from the point $(0,0)$ and lies on the segment. How do I use the vectors mentioned in the solution given there to find the required coordinates? Edit: Precisely speaking, What I do expect is the explanation of: What is vector $\mathbf v$ there if $(x_1,y_1) = (1,1)$ and $(x_0,y_0) = (0,0)$ What is the normalized vector $d\mathbf u$? How do I do the addition $(x_0,y_0) + d\mathbf u$? AI: We have $\mathbf v = (x_1,y_1)-(x_0,y_0) = (1,1)-(0,0)=(1,1)$. Note that its length (or norm) is: $$ ||\mathbf v|| = \sqrt{v_1^2+v_2^2} = \sqrt{1^2+1^2}=\sqrt{2} $$ Thus, normalizing the vector yields: $$ \mathbf u = \frac{\mathbf v}{||\mathbf v||} = \frac{1}{\sqrt{2}}(1,1) $$ Since we want the point that is at a distance that is three-fifths the total distance between the two points, we have $d=\dfrac{3}{5}||\mathbf v||=\dfrac{3\sqrt{2}}{5}$. Hence, the desired point is: $$ \begin{align*} (x_0,y_0)+d\mathbf u &= (0,0)+\dfrac{3\sqrt{2}}{5}\left(\frac{1}{\sqrt{2}}(1,1)\right)\\ &= (0,0)+\dfrac{3}{5}(1,1) \\ &= (0,0)+(3/5,3/5)\\ &=\left(\dfrac{3}{5},\dfrac{3}{5}\right) \end{align*} $$
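The same computation packaged as a small reusable Python sketch (the function name is mine, not from the answer):

```python
import numpy as np

def point_along_segment(p0, p1, distance):
    """Return the point at the given distance from p0 in the direction of p1."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    v = p1 - p0                    # direction vector
    u = v / np.linalg.norm(v)      # normalized direction
    return p0 + distance * u

d = (3 / 5) * np.sqrt(2)           # three-fifths of the total distance
print(point_along_segment((0, 0), (1, 1), d))   # [0.6 0.6]
```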
H: Determine the trace and determinant of a block upper triangular matrix Let $A$ be a $5\times5$ real skew symmetric matrix and $B$ be a $5\times5$ real symmetric matrix whose $(i,j)$-th entry is the binomial coefficient ${i \choose j}$ when $i\ge j$. Now, if $$C= \begin{pmatrix}A & A+B \\ 0 & B\\ \end{pmatrix},$$ then $\mathrm{trace}(C)$=? $\det(C)$=? Since $A$ is real and skew symmetric, $\mathrm{trace}(A)=0$; also, $\mathrm{trace}(B)=5$, so $\mathrm{trace}(C)=5$. Am I right? And the stated answer is $\det(C)=0$; how can I show that $\det(C)=0$? Can anyone help me please... AI: Your answer that $\mathrm{trace}(C)=5$ and your reasoning are correct. In general, if $A$ and $B$ are both $n\times n$, since $C$ is block upper triangular, we have $\mathrm{trace}(C)=\mathrm{trace}(A)+\mathrm{trace}(B)=0+n=n$. We also have $\det(C)=\det(A)\det(B)$. When $n$ is odd, the determinant of an $n\times n$ skew symmetric matrix $A$ is zero. (Hint: For any matrix -- skew symmetric or not -- we have $\det(A)=\det(A^T)$. For skew symmetric matrix, we also have $\det(A^T)=\det(-A)$. Therefore $\det(A)=\det(-A)$. How to prove that $\det(A)=0$ from here? Why is the oddness of $n$ essential?) Therefore $\det(C)=0$. For even $n$, the determinant of a skew symmetric matrix is not necessarily zero. Since the entries of $A$ are unspecified, there is no way to find its determinant. Also, the determinants of $B$ from $n=1$ to $5$ are resp. $1,-3,15,-97,628$. According to the OEIS database (A079689), there apparently isn't any explicit formula for $\det(B)$ in terms of $n$.
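A random-instance check (Python/numpy sketch; the skew-symmetric part is an arbitrary instance, since the argument above does not depend on its entries):

```python
import numpy as np
from math import comb

n = 5
rng = np.random.default_rng(1)
M = rng.normal(size=(n, n))
A = M - M.T                                   # an arbitrary real skew-symmetric matrix
B = np.array([[float(comb(max(i, j), min(i, j)))   # symmetric, (i,j) entry C(i,j) for i >= j
               for j in range(1, n + 1)] for i in range(1, n + 1)])

C = np.block([[A, A + B], [np.zeros((n, n)), B]])
print(np.trace(C))          # 5.0  (= trace A + trace B = 0 + 5)
print(np.linalg.det(C))     # ~0 up to floating-point error (det A = 0 for odd skew-symmetric A)
```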
H: how to prove a parametric relation to be a function For example lets suppose that I have given the functions $f:\mathbb{R}\longrightarrow \mathbb{R}$ and $g:\mathbb{R}\longrightarrow \mathbb{R}$. If my relation is $R=\{(x,(y,z))\in \mathbb{R}\times \mathbb{R}^{2}: y=f(x) \wedge z=g(x)\}$ How to prove formally (from a set theoretic stand point) that $R$ is a function. I have a try but I'm not convince: Let suppose to have $(x,(y,z))\in R$ and also $(x,(y',z'))\in R$. Then $y=f(x), z=g(x)$ and also $y'=f(x), z'=g(x)$ by definition. Then $y=y'$ and also $z=z'$. Therefore $(y,z)=(y',z')$. In a more general case if I have the functions $f_1, f_2,...,f_n:\mathbb{R}^m\longrightarrow \mathbb{R}$ and I define the function $f:\mathbb{R}^{m}\longrightarrow\mathbb{R}^{n}$ such that $f(x_{1},x_{2},...,x_{m})=(f_{1}(y),f_{2}(y),...,f_{n}(y))$ with $y=(x_1,x_2,...,x_m)$, how to justify that it is indeed a function? If my try is fine I suppose this can be done by induction. Any comment will be appreciated. AI: Let's prove a vastly more general statement. Let $I$ be an index set, and for every $i\in I$ let $f_i$ be a function. Then the relation $F$ defined on $X=\bigcap_{i\in I}\operatorname{dom}(f_i)$ by $F=\{\langle x,\langle f_i(x)\mid i\in I\rangle\rangle\mid x\in X\}$ is a function. Proof. Let $x\in X$, and suppose that $\langle x,\langle y_i\mid i\in I\rangle\rangle,\langle x,\langle z_i\mid i\in I\rangle\rangle\in F$, then for every $i\in I$ we have $y_i=f_i(x)=z_i$, therefore the sequences are equal and $F$ is a function. $\square$ Now you care about the case where $\operatorname{dom}(f_i)$ are all equal, so the intersection creating $X$ is trivial.
H: How to find solutions to this equality $\; \mathrm{x} = \mathrm{a^2x \, (1-x)\,(1-ax\,(1-x))}$ We have the following equality: $$ \mathrm{x} = \mathrm{a^2x \, (1-x)\,(1-ax\,(1-x))}$$ Some of the solutions I found: $\mathrm{x} = 0$ Also for $\mathrm{a}=0$, every $\mathrm{x}$ is a solution I believe I tried getting everything out of the brackets but that just gave a nasty equality which I couldn't solve. Let's take $-1 \leq \mathrm{a} \leq 1$. I was also wondering if there is a way of knowing how many solutions this equality has beforehand? Or do we just have to look at the cases $\mathrm{a} = 1$, $\mathrm{a} = -1$, $\mathrm{a} \neq 0$ and $\mathrm{a} = 0$ individually to find every solution (i.e. both of the bounds, and for a (not) equal to 0)? AI: I've solve upto some extent $$ x = \mathrm{a^2x \, (1-x)\,(1-ax\,(1-x))}$$ $$ {a^2x(1-x)(1-ax(1-x))-x=0}\implies x=0$$ $$ {a^2(1-x)(1-ax(1-x))-1=0}$$ $$ {a^2(x-1)(1+ax(x-1))+1=0}$$ put x-1 =t $$ {a^2t(1+at(t+1))+1=0}$$ $$ {a^2t(1+at^2+at)+1=0}$$ $$a^3t^3+a^3t^2+a^2t+1=0$$ for this equation if we take a=1 $$t^3+t^2+t+1=0\implies t=-1,i,-i\implies x=0,1+i,1-i$$ so based on value of a there are different solution of t and x. solution in continuity based on @maming's comment $$a^3t^3+a^3t^2+a^2t+1=0$$ $$a^3t^3+1+a^3t^2+a^2t=0$$ $${(at)}^3+1^3+a^3t^2+a^2t=0$$ $$(at+1)(a^2t^2-at+1)+a^2t(at+1)=0$$ $$(at+1)(a^2t^2-at+1+a^2t)\implies (at+1)=0\;,(a^2t^2-at+1+a^2t)=0$$ $$t=\frac{-1}{a}\implies \mathbf{x=1-\frac 1a}\;\;,a^2t^2+t(a^2-a)+1=0$$ $$t=\dfrac {-(a^2-a)\pm\sqrt{{(a^2-a)}^2-4\cdot a^2\cdot 1}}{2a^2}$$ $$t=\dfrac {a\left((1-a)\pm\sqrt{{(a-1)}^2-4}\right)}{2a^2}$$ $$t=\dfrac {(1-a)\pm\sqrt{{(a-1)}^2-4}}{2a}\implies x=1+\dfrac {(1-a)\pm\sqrt{{(a-1)}^2-4}}{2a}$$ $$x=\dfrac {(a+1)\pm\sqrt{{(a-1)}^2-4}}{2a}$$
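A symbolic spot-check of these roots (sympy sketch, with an arbitrary sample value $a=\tfrac12$), confirming they satisfy the original equation:

```python
import sympy as sp

x = sp.symbols('x')
a = sp.Rational(1, 2)                       # arbitrary nonzero sample value of a
f = a**2 * x * (1 - x) * (1 - a * x * (1 - x))

roots = [0,
         1 - 1 / a,
         (a + 1 + sp.sqrt((a - 1)**2 - 4)) / (2 * a),
         (a + 1 - sp.sqrt((a - 1)**2 - 4)) / (2 * a)]

for r in roots:
    print(sp.simplify(f.subs(x, r) - r))    # each prints 0
```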
H: Are the real numbers a nontrivial simple extension of another field? Is there a proper subfield $K$ of the real numbers and a real number $\theta$ such that $\mathbb R = K(\theta)$? I thought of this question earlier idly wondering about what the structure of the poset of all subfields of $\mathbb C$ looks like and I'm surprised that I have no idea how to answer it. AI: This is impossible. If $\theta$ is algebraic over $K$, then $\Bbb{R}$ has finite degree over $K$, and so $\Bbb{C}$ has finite degree over $K$. But this is impossible: by Artin-Schreier, if the algebraic closure of a field is a finite extension, its degree is $1$ or $2$. If $\theta$ is transcendental over $K$, then $\Bbb{R}=K(\theta)$ does not contain a square root of $\theta$ or of $-\theta$. But of course this is impossible: every positive real has a real square root.
H: Defining holomorphic function as an integral in $\mathbb{R}^n$ Let $U$ be a bounded connected subset of $\mathbb{R}^n$. Let $f:\mathbb{C} \times U \to \mathbb{C}$ such that 1) for all $(u_1,\ldots,u_n) \in U$ the function $z \mapsto f(z,u_1,\ldots,u_n)$ is holomorphic 2) for all $z\in \mathbb{C}$ the function $(u_1,\ldots,u_n) \mapsto f(z,u_1,\ldots,u_n)$ is continuous Can we conclude $F(z) = \int_U f(z,u_1,\ldots,u_n) \;du_1 \cdots du_n$ is holomorphic on $\mathbb{C}$ ? AI: The answer is NO! The function $$ f:\mathbb{C}\times(0,1) \to \mathbb{C},\ f(z,u)=\exp\left(\frac{z}{u}\right) $$ satisfies all the conditions. However, for $z$ with $\operatorname{Re} z>0$ the function $(0,1) \to \mathbb{C},\ u \mapsto f(z,u)$ is not integrable (e.g. $\int_0^1 e^{1/u}\,du=\infty$). In particular the function $F: \mathbb{C} \to \mathbb{C},\ F(z)=\int_0^1f(z,u)\,du$ is not well defined, and therefore it cannot be holomorphic.
H: Prove that $\sum_{i=1}^{m-1} i^k$ is divisible by $m$ Prove that $\sum\limits_{i=1}^{m-1} i^k$ for odd numbers $m,k \in \mathbb{N}$ is divisible by $m$. Because $m \mid m^k$, it is equivalent to the following: Prove that $m \mid \sum\limits_{i=1}^{m} i^k$ for odd numbers $m,k \in \mathbb{N}$. Say $m = 2t + 1$ with $t \in \mathbb{Z}$. For $k=1$, it is pretty obvious: $$ \sum\limits_{i=1}^{2t+1} i = \frac{(2t+1)(2t+2)}{2} = (t+1)(2t+1) $$ which is an integer. For $k=3$, it is not that difficult either: $$ \sum\limits_{i=1}^{2t+1} i^3 = \left( \sum\limits_{i=1}^{2t+1} i \right)^2 = \left( (t+1)(2t+1) \right)^2 $$ which again is an integer. However, I have used formulas for special sums, which do not apply to all odd numbers $k$. Can someone give a hint on how I can prove the statement for all odd numbers $k$? AI: Hint: $l^k + (m-l)^k = 0 \mod m$ for $l = 1,2,3 \cdots, m-1$ Use $k$ is odd.
H: local isometry for riemannian manifolds is not transitive Let $(M_1,g_1)$ and $(M_2,g_2)$ be Riemannian manifolds of the same dimension, and let $\phi: M_1 \to M_2$ be a smooth map. We say that $\phi$ is a local isometry if $g_2 (\phi_* X, \phi_* Y ) = g_1 (X, Y )$ for all $m \in M_1$ and $X, Y \in T_m M_1,$ where $\phi_* : T_m M_1 \to T_{\phi(m)} M_2$ is the derivative of the map $\phi$ at $m.$ The relation of being locally isometric for Riemannian manifolds is not symmetric, it is of course reflexive: is it transitive? AI: The relation is transitive. Let $\phi \colon (M_1,g_1) \to (M_2,g_2)$ and $\psi \colon (M_2,g_2) \to (M_3,g_3)$ be local isometries. The chain rule $(\psi \circ \phi)_\ast = \psi_\ast \circ \phi_\ast$ yields for all $X,Y \in T_{m}M_1$ that $$ \begin{align*} g_{3}((\psi \circ \phi)_\ast X, (\psi \circ \phi)_\ast Y) & = g_{3}(\psi_\ast\phi_\ast X, \psi_\ast\phi_\ast Y) &&\text{chain rule}\\ &= g_{2}(\phi_\ast X, \phi_\ast Y) && \psi \text{ is a local isometry}\\ &= g_{1}(X,Y) && \phi \text{ is a local isometry} \end{align*} $$ so $\psi \circ \phi \colon (M_1,g_1) \to (M_3,g_3)$ is a local isometry.
H: Relation in probability As part of the solution of an exercise I have the following relation: $$\sum_{k=0}^{\infty}k(1-p)^{k-1}=\frac{1}{(1-(1-p))^2}$$ Where $p$ is a probability. I don't understand where this is coming from? AI: Hints: For $\,x\in\Bbb R\;,\;\;|x|<1\,$ , we have the well known power series development $$\frac1{1-x}=\sum_{n=0}^\infty x^n\stackrel{\text{differentiation}}\implies\frac1{(1-x)^2}=\sum_{n=1}^\infty nx^{n-1}$$ But $\;p\,$ is a probability; as long as $\,0<p\le 1\,$ we have $\,|1-p|<1$, so take $\,x=1-p\,$ (if $\,p=0\,$ the sum on the left diverges and the identity does not apply).
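A quick numerical check of the identity (Python sketch, with $p=0.3$ chosen arbitrarily):

```python
p = 0.3
partial = sum(k * (1 - p) ** (k - 1) for k in range(1, 2000))
print(partial, 1 / (1 - (1 - p)) ** 2)   # both ~ 11.111..., i.e. 1/p**2
```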
H: Linearly independent elements over a vector space over $\mathbb R$. Let $n\geq3$ be an integer, let $u_1,u_2,u_3,\ldots,u_n$ be $n$ linearly independent elements over a vector space over $\mathbb{R}$. Set $u_0=0$ and $u_{n+1}=u_1$ and define $v_i=u_i+u_{i+1}$ and $w_i=u_{i-1}+u_{i}$ for $i=1,2,\ldots,n$, then $v_1,v_2,v_3,\ldots,v_n$ are linearly independent if $n=2010$. $v_1,v_2,v_3,\ldots,v_n$ are linearly independent if $n=2011$. $w_1,w_2,w_3,\ldots,w_n$ are linearly independent if $n=2010$. $w_1,w_2,w_3,\ldots,w_n$ are linearly independent if $n=2011$. I am stuck on this problem. Can anyone help me please? AI: Hint that should illustrate the general line of thought: $$v_1-v_2+v_3-v_4+ \cdots \pm v_k \\= u_1 +u_2 - (u_2 + u_3) + (u_3 + u_4) - \cdots \pm (u_{k}+ u_{k+1}) = u_1 \pm u_{k+1},$$ the sign depending on whether $k$ is even or odd.
H: What is the semantic of square brackets after the set denoting coefficients of polynomial? I have the following excerpt: Unless stated otherwise, we assume all polynomials take integer coefficients, i.e. a polynomial $f \in \mathbb{Z}[{\bf y}, x]$ is of the form $$f(y, x) = a_m · x^{d_m} + a_{m−1} · x^{d_{m−1}} + · · · + a_1 · x^{d_1} + a_0 ,$$ where $0 < d_1 < · · · < d_m$ , and the coefficients $a_i$ are in $ \mathbb{Z}[{\bf y}] $ with $a_m \ne 0$. What exactly is this $\mathbb{Z}[{\bf y}]$? What is y (and why is it bold)? I would assume (based on text) that it means that they are integers, but then this notations doesn't make any sense to me (I've seen this before as a notation for polynomials over integers). AI: Without further context, I think this is the correct interpretation: This is another way of viewing a multivariate polynomial. We can view $\mathbb{Z}[x,y]$ as polynomials in $x$ and $y$, or we can view them as $(\mathbb{Z}[x])[y]$ — if you follow the formalism through, these are equivalent. In the first, we're looking at expressions like $$ p = a_{00} + a_{10}x^1 y^0 + a_{01}x^0 y^1 + a_{11}x^1 y^1 + \cdots$$ and in the latter, expressions like $$ p = q_0 + q_1 x + q_2 x^2 + \cdots $$ where the $q_i$ are themselves polynomials in $y$. It's essentially just rearranging the sum (which is finite, despite appearances). This is more convenient for various things. I've discussed this just for the case of two variables, but we can repeat this with as many as we like: hence we could write $\mathbf{y}$ for $y_1, y_2, \ldots, y_n$ (note that this certainly is typographically more compressed), and then the $q_i$ are polynomials in the various $y$s. Dropping the $x$, I think I may have seen this notation as an instance of multi-index notation.
H: Tangent of circumscribed circle I found a solution online which said: "It's easy noted that $AG.AE$ = $AD^2$ = $AF^2$ (Using tangent of circumscribed circle)" I found this not obvious at all. I know that $AD = AF$, but why should $AD^2$ equal the product $AG\cdot AE$ of the two collinear segments? AI: Nevermind, I've found it! It's called "secant-tangent theorem", "intersecting chords theorem", or the "power-of-a-point theorem", which you can learn about at this link: http://en.wikipedia.org/wiki/Power_of_a_point
H: Product and Sum of Polynomial Roots The ratio of the sum of the roots of the equation $8x^3+px^2-2x+1=0 $ to the product of the roots of the equation $5x^3+7x^2-3x+q=0 $ is $3:2$. What is the value $\frac{p-q}{p+q}$? Well I found out the sum of the roots of the 1st equation is $-\frac{p}{8}$ and the product of the roots of the second equation is $-\frac{q}{5}$. Now what to do further? How to get in terms of $\frac{p-q}{p+q} $ ? AI: Using Vieta's Formulas, the sum of the roots of $\ 8x^3+px^2-2x+1=0$ is $\frac{-p}8$ and the product of the roots of $\ 5x^3+7x^2-3x+q=0$ is $\frac {-q}5$. So, $$\frac {\frac{-p}8}{\frac {-q}5}=\frac32\implies \frac pq=\frac{3\cdot 8}{2\cdot 5}=\frac{12}5$$ Now, use Componendo and dividendo or use $$\frac{p-q}{p+q}=\frac{\frac pq-1}{\frac pq+1}=\frac{\frac {12}5-1}{\frac {12}5+1}=\frac{12-5}{12+5}$$ or use $$\frac p{12}=\frac q5=r(\ne0)\text{(say)}\implies p=12r,q=5r$$
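A numerical cross-check (Python sketch, using the representative values $p=12$, $q=5$ from the last line of the answer):

```python
import numpy as np

p, q = 12, 5                                   # any pair with p/q = 12/5 works
sum_roots = sum(np.roots([8, p, -2, 1]))       # roots of 8x^3 + px^2 - 2x + 1
prod_roots = np.prod(np.roots([5, 7, -3, q]))  # roots of 5x^3 + 7x^2 - 3x + q

print((sum_roots / prod_roots).real)   # ~1.5, i.e. the required ratio 3:2
print((p - q) / (p + q))               # 0.4117..., i.e. 7/17
```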
H: If $E(X^2)$ exists, $E(X)$ exists How do I prove this? Is it even true? I proved the obvious cases, but don't know how to prove/disprove the $0\lt x\lt 1$ case. If $E(X^2)$ exists, $\sum_{x\in\omega}x^2P(x)\lt \infty$. Therefore: $$\forall x \le 0:\space \sum_{x\in\omega}xP(x) \le \sum_{x\in\omega}x^2P(x) = E(X^2)\lt \infty \\ \forall x\ge 1:\space \sum_{x\in\omega}xP(x) \le \sum_{x\in\omega}x^2P(x) = E(X^2)\lt \infty$$ Thank you. AI: Jensen's inequality is the fastest way to show this: applied to $|X|$ it gives $$E(|X|)^2 \leq E(X^2),$$ so $E|X|<\infty$ and hence $E(X)$ exists. (Alternatively, take expectations in the pointwise bound $|X|\le \frac{1+X^2}{2}$.)
H: What exactly are limits? So as far as I know, a limit of a function is the $y$ value that we get as $x$ gets closer and closer to the limit but remains distinct from the limit. Also, limit when approached from right should be the same as limit when approached from left or else the limit does not exist. The problem I have is with limits involving complex functions. I was watching a video on this and the presenter said that if we were to make a circle of radius $1$ around the $x$ value we input into the function, the $y$ value will be in the same neighborhood. Can someone please tell me what that is? AI: The point the guy in your lecture was making was right in line with what you were already understanding about limits. Take for example the limit(X->0) of 1/|x|. If you approach X from the left, the y value approaches infinity. Thus, the left-hand limit is infinity. If you approach x from the right, the y value is also infinity. Thus, the right-hand limit is infinity. Since Left-hand limit = right-hand limit, the two limits are equal. Now consider what the professor said about a circle. If you draw a circle around 1/|X| where X is really close to zero (can't use zero for obvious reasons) and X is positive, ten your circle will enclose both the left-hand side of the graph and the right. That's what he means by being in the same neighborhood. For a counter example, consider f(x) = 1/x. As we approach from the left, the limit is negative infinity. As we approach from the right, the limit is positive infinity. If you now draw your circle of radius 1 somewhere near X=0 for x positive, it will not incorporate the left-hand side of the function because it is negative. Thus, the left-hand and right-hand limits are not in the same neighborhood. Therefore, the limit does not exist.
H: Which of the following groups is not cyclic? Which of the following groups is not cyclic? (a) $G_1 = \{2, 4,6,8 \}$ w.r.t. $\odot$ (b) $G_2 = \{0,1, 2,3 \}$ w.r.t. $\oplus$ (binary XOR) (c) $G_3 =$ Group of symmetries of a rectangle w.r.t. $\circ$ (composition) (d) $G_4 =$ $4$th roots of unity w.r.t. $\cdot$ (multiplication) Can anyone explain this question to me? AI: Hint: For a group to be cyclic, there must be an element $a$ so that every element can be expressed as a power $a^n$ for some integer $n$. The terminology comes from the fact that this is the structure of $\Bbb Z/n\Bbb Z$, where $a=1$ works (and often other generators do too). I can't quite make out what the operator in your first example is meant to be. For (b), try combining each element $\oplus$ with itself. What do you get? For (c), there are two different types of symmetry: those that turn the rectangle upside down and those that do not.
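A brute-force check for (b) and (d) (a sketch; and, if $\odot$ in (a) means multiplication modulo $10$, which the question doesn't spell out, the same kind of loop shows that $2$ generates $G_1$):

```python
# (b) G2 = {0,1,2,3} under XOR: look for a single generator.
for a in [0, 1, 2, 3]:
    generated, x = {0}, 0
    for _ in range(4):
        x ^= a
        generated.add(x)
    print(a, sorted(generated))
# Every a satisfies a ^ a = 0, so no element generates more than {0, a}: G2 is NOT cyclic.

# (d) G4 = 4th roots of unity under multiplication: i is a generator.
powers_of_i = {complex(round((1j**k).real), round((1j**k).imag)) for k in range(4)}
print(powers_of_i)   # {1, 1j, -1, -1j}: all four roots, so G4 IS cyclic.
```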
H: not injective/not surjective linear maps Let $S$ be the vector space of real sequences, and for $x=(x_1,x_2,\dots)$ define $\alpha(x)=(0,x_1,x_2,\dots)$ and $\beta(x)=(x_2,x_3,\dots)$. The problem asks for a few other things as well, but I got stuck at showing that the first is not injective and the second is not surjective. Now, I realize I need to find two distinct arguments (two different sequences), plug them into $\alpha$ and get the same value, which would show that it is not injective. But I cannot think of any example. Could somebody guide me on how I should structure my search for such an example? As for the second, could I take for example a sequence with one member $\{x_1\}$? Then it would be undefined for $\beta$, proving it's not surjective, right? AI: Maybe this is backwards? $\alpha(x)$ looks injective to me and $\beta(x)$ looks surjective. $\alpha(x)$ is not surjective though, and I think you have the right idea. Basically, the realization is that $\alpha(x)$ has a $0$ as its first coordinate for each $x$, and so in particular, there is no sequence that maps to $(1, 1, 1,...)$. $\beta(x)$ is not injective. Intuitively, this is because $\beta(x)$ does not give you enough information to reconstruct $x$. In particular, you have no idea what the first coordinate of $x$ is given $\beta(x)$. In fact, you can make this into a proof by picking $x, x'$ that differ only in the first coordinate and showing they map to the same thing.
H: How to define Conway's class of all games rigorously? I'm reading John Conway's On Numbers And Games. In the course of defining the surreal numbers, Conway defines the term game more or less as follows. $(\emptyset,\emptyset)$ is a game. If $L$ and $R$ are sets whose elements are games, then $(L,R)$ is a game. All games are constructed in this way. Obviously this is an informal definition, so I'd like to see a formalized version. However, the 'solution' to this problem discussed in the Appendix to Part 0 doesn't make sense to me. Can someone provide a formal definition of the predicate $\mathrm{Game}$ which returns true iff the input is a game? Note that the issue here is quite distinct from the issue discussed here: are surreal numbers actually well-defined in ZFC?. What I've tried so far. It's tempting to proceed as follows. Define that a class $C$ is closed iff whenever $L$ and $R$ are sets with $L,R \subseteq C,$ then $(L,R) \in C.$ Then the class of games is defined as the intersection of all closed classes incorporating $(\emptyset,\emptyset)$. There's a few issues with this. Firstly, in ZFC, we don't have classes, just predicates. So, how do we rehash the notion of an intersection of all closed classes in terms of predicates? Is it even possible? I'm willing to use NBG or MK if its strictly necessary, but I don't think it is. Secondly, how do we know there exists a closed class? We don't want to use the cumulative hierarchy to construct such a class, because the whole point of Conway's approach is that we get the ordinals for free as a subset of the surreals. Edit. In the comments, it was mentioned that $\in$-recursion makes Conway's definition rigorous. Can someone explain precisely how this is done? For instance, does $\in$-recursion allow us to define 'game' by asserting that $x$ is a game iff there exist sets $L,R$ such that $(L,R)=x$ and the elements of $L$ and $R$ are games. AI: Here's a definition of the notion of "game" in ZF; it's essentially re-doing, for this particular example, the general technique for eliminating recursion in favor of explicit definitions. I'll begin with an auxiliary definition, that I'll call "evidence"; the intuition is that a set is evidence if it contains enough information to verify that each of its elements is a game. Formally, a set $E$ is evidence if it is a set of ordered pairs and, for each pair $(L,R)\in E$, both $L$ and $R$ are sets and all their members are also in $E$. [Technically, I could omit the clause "both $L$ and $R$ are sets," because in ZF everything is a set, but I think the definition is easier to understand with that clause included.] Note that this is an explicit definition of what it means for a set to be evidence. Now define a set $G$ to be a game iff it is a member of some $E$ that is evidence.
H: Orthogonal projection $\alpha$ of $\mathbb{R}^3$ onto a plane Let $\Pi$ be the plane in $\mathbb{R}^3$ that contains $\textbf0,\textbf j,\textbf k$. Show that the orthogonal projection $\alpha$ of $\mathbb{R}^3$ onto $\Pi$ is a linear map. I'm more interested in finding this orthogonal projection, as I still don't understand what exactly is meant by that, while showing its linearity I think is quite easy. Now, this plane will be given by $\textbf x \cdot \textbf i=0$, right? Thus, the projection I think will be given by $\textbf x \cdot \textbf i + t=0$, where $t$ is chosen such that $\textbf x + t\textbf i$ lies on $\Pi$. If I'm right, then linearity of this map follows from linearity of the dot product. So, am I right, or does this not make much sense? Thank you! AI: "Thus, the projection I think will be given by $\textbf x \cdot \textbf i + t=0$" This is somewhat confused! The projection is $\alpha(\textbf{x}) = \textbf{x} + t\textbf{i}$, where $t = -\textbf{x}\cdot\textbf{i}$. You can check that $\alpha(\textbf{x})\cdot\textbf{i} = 0$, so that $\alpha(\textbf{x})$ lies in $\Pi$, and that $\alpha(\textbf{x}) - \textbf{x}$ is orthogonal to $\Pi$.
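A quick numerical check of this projection formula (a sketch; $\Pi$ here is the plane $\mathbf x\cdot\mathbf i = 0$, i.e. the plane spanned by $\mathbf j$ and $\mathbf k$, and the test vectors are arbitrary):

```python
import numpy as np

i = np.array([1.0, 0.0, 0.0])

def alpha(x):
    """Orthogonal projection onto the plane x . i = 0: subtract the i-component."""
    return x - np.dot(x, i) * i

x = np.array([3.0, -1.0, 2.0])
y = np.array([0.5, 4.0, -2.0])
a, b = 2.0, -3.0

print(np.dot(alpha(x), i))                                     # 0.0: the image lies in the plane
print(np.allclose(alpha(a*x + b*y), a*alpha(x) + b*alpha(y)))  # True: alpha is linear
print(np.dot(x - alpha(x), [0, 1, 0]),
      np.dot(x - alpha(x), [0, 0, 1]))                         # 0.0 0.0: x - alpha(x) is orthogonal to Pi
```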
H: Need help in proving that $\frac{\sin\theta - \cos\theta + 1}{\sin\theta + \cos\theta - 1} = \frac 1{\sec\theta - \tan\theta}$ We need to prove that $$\dfrac{\sin\theta - \cos\theta + 1}{\sin\theta + \cos\theta - 1} = \frac 1{\sec\theta - \tan\theta}$$ I have tried, and it gets confusing. AI: $$\frac{\sin\theta-\cos\theta+1}{\sin\theta+\cos\theta-1}$$ $$=\frac{\tan\theta-1+\sec\theta}{\tan\theta+1-\sec\theta}\ (\text{dividing the numerator and the denominator by }\cos\theta)$$ $$=\frac{\tan\theta-1+\sec\theta}{\tan\theta-\sec\theta+(\sec^2\theta-\tan^2\theta)}\ (\text{putting } 1=\sec^2\theta-\tan^2\theta)$$ $$=\frac{\tan\theta+\sec\theta-1}{\tan\theta-\sec\theta-(\tan\theta-\sec\theta)(\tan\theta+\sec\theta)}$$ $$=\frac{\tan\theta+\sec\theta-1}{-(\tan\theta-\sec\theta)(\tan\theta+\sec\theta-1)}$$ $$=\frac1{\sec\theta-\tan\theta}$$ Alternatively, using the tangent half-angle substitution $\tan\frac\theta2=t$, $$\text{LHS}=\frac{\sin\theta-\cos\theta+1}{\sin\theta+\cos\theta-1}=\frac{\frac{2t}{1+t^2}-\frac{1-t^2}{1+t^2}+1}{\frac{2t}{1+t^2}+\frac{1-t^2}{1+t^2}-1}$$ $$=\frac{2t-(1-t^2)+1+t^2}{2t+(1-t^2)-(1+t^2)} =\frac{2t+2t^2}{2t-2t^2}=\frac{1+t}{1-t},\quad\text{assuming }t\ne0$$ $$\text{RHS}=\frac1{\sec\theta-\tan\theta}=\frac1{\frac{1+t^2}{1-t^2}-\frac{2t}{1-t^2}}=\frac{1-t^2}{(1-t)^2}=\frac{1+t}{1-t},\quad\text{assuming }t\ne1$$
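A symbolic/numeric check of the identity with sympy (a sketch; the spot-check angles are arbitrary points where both sides are defined):

```python
import sympy as sp

theta = sp.symbols('theta')
lhs = (sp.sin(theta) - sp.cos(theta) + 1) / (sp.sin(theta) + sp.cos(theta) - 1)
rhs = 1 / (sp.sec(theta) - sp.tan(theta))

# Symbolic check: expect 0 (if simplify doesn't close the gap, the spot-checks below still do)
print(sp.simplify(lhs - rhs))

# Numeric spot-checks at a few angles where both sides are defined
for t in (0.3, 1.0, 2.5):
    print(float(lhs.subs(theta, t)), float(rhs.subs(theta, t)))
```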
H: Probability $P(A < B)$ Given two independent and continuous random variables $A$ and $B$ with cumulative distributions $F_A$ and $F_B$, show that $$P(A<B) = \int_{-\infty}^{\infty} F_A(x)\, F'_B(x)\,dx.$$ Is this something obvious and available in textbooks? AI: Let the density functions be $f_A(x)$ and $f_B(y)$. So the joint density is $f_A(x)f_B(y)$. To find the probability that $A\lt B$, we integrate the joint density over the part of the plane made up of points $(x,y)$ such that $x\lt y$. So we are integrating over the part of the plane above the line $x=y$. This double integral can be expressed as the iterated integral $$\int_{y=-\infty}^\infty\left( \int_{x=-\infty}^y f_A(x)\,dx\right) f_B(y)\,dy.$$ The inner integral is $F_A(y)$, and $f_B(y)=F'_B(y)$, so we end up with $$\int_{-\infty}^\infty F_A(y)F_B'(y)\,dy.$$ What you quoted uses a different name for the dummy variable of integration.
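A numerical check of the formula for one concrete pair of distributions (a sketch; $A\sim N(0,1)$ and $B\sim N(1,2)$ are just an example, chosen because $P(A<B)$ also has a closed form via $A-B\sim N(-1,\sqrt5)$):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

A, B = stats.norm(0, 1), stats.norm(1, 2)   # independent normals, as an example

# Right-hand side: integral of F_A(x) * F_B'(x) dx
rhs, _ = quad(lambda x: A.cdf(x) * B.pdf(x), -np.inf, np.inf)

# Exact value: A - B ~ N(-1, sqrt(5)), so P(A < B) = P(A - B < 0)
exact = stats.norm(-1, np.sqrt(5)).cdf(0)

# Monte Carlo estimate of P(A < B)
rng = np.random.default_rng(0)
mc = np.mean(A.rvs(10**6, random_state=rng) < B.rvs(10**6, random_state=rng))

print(rhs, exact, mc)   # all three agree, ≈ 0.673
```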
H: Complex integral, correct? I am supposed to do the integral $$ \int_{\gamma_2} \frac{\sin(z)}{z+\frac{i}{2}} dz$$ where $\gamma_2:[-\pi, 3\pi] \rightarrow \mathbb{C}$ , $\gamma_2(t)=\exp(it)$ for $ t\in [-\pi,\pi]$, $\gamma_2(t)=(1+t-\pi)\exp(it)$ for $t\in [\pi,2\pi)$ and $\gamma_2(t)=(1+3\pi-t) \exp(it)$ for $t\in[2\pi,3\pi]$. My idea was to say that this is equal to $2 \cdot 2\pi i \sin(-\frac{i}{2})$. Since we have two loops and the rest is Cauchy's integral formula, is this correct? AI: Yes, indeed, Lipschitz: your conclusion is correct. The contour winds twice around $-\frac i2$ and $\sin z$ is entire, so Cauchy's integral formula gives $2\cdot 2\pi i\,\sin\!\left(-\tfrac i2\right)=4\pi\sinh\tfrac12$. Nice work.
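A direct numerical check, parametrizing the three pieces of $\gamma_2$ as given and integrating $f(\gamma_2(t))\,\gamma_2'(t)$ with simple trapezoidal quadrature (a sketch):

```python
import numpy as np

pi = np.pi

def f(z):
    return np.sin(z) / (z + 0.5j)

def piece(gamma, dgamma, t0, t1, n=200_000):
    t = np.linspace(t0, t1, n)
    return np.trapz(f(gamma(t)) * dgamma(t), t)

I1 = piece(lambda t: np.exp(1j*t),
           lambda t: 1j*np.exp(1j*t),                         -pi,    pi)
I2 = piece(lambda t: (1 + t - pi)*np.exp(1j*t),
           lambda t: (1 + 1j*(1 + t - pi))*np.exp(1j*t),        pi,  2*pi)
I3 = piece(lambda t: (1 + 3*pi - t)*np.exp(1j*t),
           lambda t: (-1 + 1j*(1 + 3*pi - t))*np.exp(1j*t),   2*pi,  3*pi)

numeric = I1 + I2 + I3
closed_form = 2 * 2j*pi * np.sin(-0.5j)   # winding number 2 around z = -i/2
print(numeric, closed_form)               # both ≈ 4*pi*sinh(1/2) ≈ 6.548
```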
H: Analyticity of Laplace transform Let $f(x)$ be the bilateral Laplace transform of a measure $\mu$: $$ f(x)=\int_{-\infty}^{+\infty} e^{-xt}\, d\mu(t). $$ Suppose that $f(x)$ converges absolutely in $(a,b)$, and $(a,b)$ does not contain the origin. Is it always true that $f(x)$ is analytic in $(a,b)$? Or is it true just for a finite measure $\mu$? Moreover, if the measure $\mu$ is finite, must $(a,b)$ contain the origin? Thank you! AI: The following references investigate the moment generating function (the finite-measure case): a classical reference for the case where the interval $(a,b)$ contains the origin, a recent extension to the case of a general open interval $(a,b)$, and a paper that gives a nice overview and proves the converse to Curtiss's theorem in the general case. Refs: Curtiss, J. H., 1942. A note on the theory of moment generating functions. Ann. Math. Statist., 13 (4): 430-433. Mukherjea, A., Rao, M. and Suen, S., 2006. A note on moment generating functions. Statistics & Probability Letters, 76: 1185-1189. Chareka, P., 2008. "The converse to Curtiss' theorem for one-sided moment generating functions". arXiv: 0807.3392.
H: Question on a theorem on Riemann surfaces In the book "Lectures on Riemann Surfaces" by Forster, on page 23, there is a theorem as follows: Suppose $X$ and $Y$ are Riemann surfaces, $p: Y\rightarrow X$ is an unbranched holomorphic map and $f:Z\rightarrow X$ is any holomorphic map. Then every lifting $g:Z\rightarrow Y$ of $f$ is holomorphic. I understand the idea of the proof there; however, in the proof there is a claim: Let $c$ be an arbitrary point in $Z$. Let $b=g(c)$ and $a=p(b)=f(c)$. There exist open neighborhoods $V$ of $b$ and $U$ of $a$ such that $p|V\rightarrow U$ is biholomorphic. I do not understand why such a restriction should be biholomorphic, and I can't even find the definition of a biholomorphic map between two open subsets of Riemann surfaces from Section 1 up to there. Could you please explain for me why $p|V\rightarrow U$ is biholomorphic? Thanks. AI: I follow Forster's book as well. Use Corollary 2.5 on p. 11: as $p:Y\rightarrow X$ is a (non-constant) unbranched holomorphic map, it is locally a homeomorphism. Let us denote this homeomorphism by $p|_{V}:V\rightarrow U$. This implies, in particular, that $p|_{V}$ is injective. Now Corollary 2.5 implies that $p|_{V}:V\rightarrow U$ is biholomorphic. The proof of this result uses Theorem 2.1 on p. 10.
H: A Differential operator. What are the fundamental solutions for the operator $$\mathcal D=a{\partial^2\over\partial x_1^2}+b{\partial^2\over\partial x_2^2}$$ on $\Bbb R^2$ with standard coordinates $(x_1,x_2)$? Here $a,b\in \Bbb R$. Definition: A distribution $E\in D'(\Bbb R^2)$ is called a fundamental solution of a differential operator $P(D)$ if $$P(D)E= \delta(x).$$ Here $$P(D)=\sum_{|\alpha|\le m} a_\alpha D^\alpha.$$ The differential operator in the above problem looks like a kind of generalization of the Laplace operator. Actually, I don't know; is there any generalization of the Laplace operator? But if we take $a,b=1$ then it is the Laplace operator, and in this case maybe we can show that $E=\frac1 {2\pi} \log|x|$ (for $n=2$) is a fundamental solution... Thank you. AI: If $a,b$ are both positive (the case where both are negative follows by multiplying through by $-1$), then scaling will do the trick. Let $$x_1' = c x_1,\quad x_2' = d x_2,$$ then $$ \frac{\partial^2 u}{\partial x_1'^2} = \frac{1}{c^2}\frac{\partial^2 u}{\partial x_1^2}, \quad \frac{\partial^2 u}{\partial x_2'^2} = \frac{1}{d^2}\frac{\partial^2 u}{\partial x_2^2}. $$ Comparing with the equation we take $c = 1/\sqrt{a}$ and $d = 1/\sqrt{b}$, so that $$ a\frac{\partial^2 u}{\partial x_1^2}+ b\frac{\partial^2 u}{\partial x_2^2}= \frac{\partial^2 u}{\partial x_1'^2} + \frac{\partial^2 u}{\partial x_2'^2}. $$ Note also that the delta function picks up a Jacobian factor: $\delta_{(0,0)}(x_1,x_2)=\delta_{(0,0)}(\sqrt a\, x_1',\sqrt b\, x_2')=\frac1{\sqrt{ab}}\,\delta_{(0,0)}(x_1',x_2')$. Hence the fundamental solution to $$ a\frac{\partial^2 u}{\partial x_1^2}+ b\frac{\partial^2 u}{\partial x_2^2}= \delta_{(0,0)}(x_1,x_2) $$ is $$ E = \frac{1}{4\pi\sqrt{ab}} \ln(x_1'^2 + x_2'^2) = \frac{1}{4\pi\sqrt{ab}} \ln\left(\frac{x_1^2}{a} + \frac{x_2^2}{b}\right). $$ To verify it, notice your operator $\mathcal{D}$ acting on $u$ is $$ \mathcal{D}u = \nabla \cdot (A\nabla u) = a\frac{\partial^2 u}{\partial x_1^2}+ b\frac{\partial^2 u}{\partial x_2^2}, \quad \text{ where } A = \begin{pmatrix}a &0\\0&b\end{pmatrix}. $$ $\nabla$ is just $D$, and $\nabla \cdot$ is the divergence operator. You can find that $$ \nabla E = \frac{1}{4\pi\sqrt{ab}}\left(\frac{2 b x_1}{b x_1^2+a x_2^2}, \frac{2 ax_2}{b x_1^2+a x_2^2}\right), $$ and acting $A$ on $\nabla E$ we have: $$ A\nabla E = \frac{\sqrt{ab}}{2\pi}\left(\frac{x_1}{b x_1^2+a x_2^2}, \frac{x_2}{b x_1^2+a x_2^2}\right). $$ When $(x_1,x_2)\neq (0,0)$, take the divergence: $$ \nabla \cdot (A\nabla E) = \frac{\partial }{\partial x_1} \left(\frac{\sqrt{ab}}{2\pi}\,\frac{x_1}{b x_1^2+a x_2^2}\right)+ \frac{\partial }{\partial x_2} \left(\frac{\sqrt{ab}}{2\pi}\,\frac{x_2}{b x_1^2+a x_2^2}\right) = 0. $$ At the origin, the outward flux of $A\nabla E$ through any small curve enclosing $(0,0)$ equals $1$ (this is exactly where the $\frac{1}{4\pi\sqrt{ab}}$ normalization is needed), so the divergence is a Dirac delta of unit mass; you can refer to my answer in this question: Green's theorem and flux. If $a,b$ have different signs, then this is not an elliptic operator anymore but a one-dimensional wave operator; the solutions are plane waves given by d'Alembert's formula, and the fundamental solution is built from the Heaviside step function $H(x_2 -\sqrt{-b/a}\, x_1)$.
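A quick check of this (a sketch: sympy confirms the operator annihilates $E$ away from the origin, and a numerical flux integral of $A\nabla E$ through the unit circle recovers the unit mass of the delta; the values $a=2$, $b=3$ are just a sample choice):

```python
import numpy as np
import sympy as sp
from scipy.integrate import quad

x1, x2, a, b = sp.symbols('x1 x2 a b', positive=True)
E = sp.log(x1**2/a + x2**2/b) / (4*sp.pi*sp.sqrt(a*b))

# Away from the origin the operator annihilates E
print(sp.simplify(a*sp.diff(E, x1, 2) + b*sp.diff(E, x2, 2)))   # 0

# Flux of A*grad(E) through the unit circle should equal 1 (the mass of the delta)
av, bv = 2.0, 3.0
def flux_density(t):
    x, y = np.cos(t), np.sin(t)
    c = 1.0 / (4*np.pi*np.sqrt(av*bv)) / (x**2/av + y**2/bv)
    AgradE = np.array([av * 2*x/av, bv * 2*y/bv]) * c   # A * grad(E) at (x, y)
    return AgradE[0]*x + AgradE[1]*y                    # dot with outward normal (cos t, sin t)

print(quad(flux_density, 0, 2*np.pi)[0])   # ≈ 1.0
```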
H: Open sets and projections maps in metric spaces Let $E_1$ and $E_2$ be metric spaces and $E=E_1\times E_2$ a metric space with some metric $d$. Let $\pi_1$ and $\pi_2$ be the projection maps $E_1\times E_2\rightarrow E_1$ and $E_1\times E_2\rightarrow E_2$ respectively, i.e., $$\pi_1(x, y)=x,\,\,\,\,\,\,\,\,\,\pi_2(x, y)=y;\,\,\,\,\,\,\,\,\forall\,\,(x, y)\in E_1\times E_2$$ I know that if $A\subseteq E$ is open then $\pi_1(A)$ is also open in $E_1$ and $\pi_2(A)$ is also open in $E_2$, but is the reverse true? I.e., if $\pi_1(B)$ is open in $E_1$ and $\pi_2(B)$ is open in $E_2$, is $B$ open in $E_1\times E_2$? AI: No, that's not true. A slight variation on Cameron Buie's example: let $E_1=E_2=\mathbb{R}$ and let $$A=\{(x,0):x\in (-1,1)\}\cup\{(0,y):y\in(-1,1)\},$$ i.e., a "plus sign" in the plane $\mathbb{R}^2$. Then, projected in either direction onto $\mathbb{R}$, we get $(-1,1)$ which is open, but $A$ itself is not open in $\mathbb{R}^2$.
H: weak derivative of a vector valued function Consider $T>0$ and $U$ an open, bounded subset of $\mathbb R^n$ with smooth boundary. Consider ${\Omega}_T = U \times (0,T]$. Let $u: {\Omega}_T \rightarrow \mathbb R$ be a smooth function. Define $h : [0,T] \rightarrow H^{1}_0 (U)$ by $h(t) = u(\cdot ,t)$. What is the weak derivative of $h$? I believe it is the function $v : [0,T] \rightarrow H^{1}_0 (U)$ given by $v(t) = u_t (\cdot,t)$. Am I right? Can someone give me a hint to prove or disprove this? I think this is the answer because $\lim_{s \rightarrow 0} \frac{h(t+s) - h(t)}{s} = u_t (\cdot,t)$. Then I believe that in this abstract context the classical derivative will be the weak derivative (as in Sobolev spaces). For the definition of weak derivative see: about weak derivative of Bochner integrable function thanks in advance. AI: We have to find $g:[0,T]\to H_0^1(U)$ such that $$\tag{1}\int_0^Th(t)\varphi'(t)dt=-\int_0^T g(t)\varphi(t)dt,\ \forall\ \varphi\in C_0^\infty(0,T).$$ Fix $\varphi\in C_0^\infty(0,T)$. Note $$\int_0^T h(t)\varphi'(t)dt = \int_0^T u(\cdot,t)\varphi'(t)dt = -\int_0^T\frac{\partial u(\cdot,t)}{\partial t}\varphi(t)dt.$$ How do we justify the integration by parts for the Bochner integral? Try to find out what $$\int_0^T (u(\cdot,t)\varphi(t))'dt$$ is.
H: A question on boundary of open set Let $\mathbb Q$ and $\mathbb R$ have the usual topologies. Is there an open set $N\subseteq \mathbb Q\times \mathbb R$ (with product topology) such that for every non-empty open subset $U\subseteq N$, the boundary of $U$ is not compact? AI: Sure. Let $N = (0,1)^2 \cap (\mathbb{Q}\times\mathbb{R})$, and let $U\subseteq N$ be a nonempty open subset of $N$. I claim that $\partial U$ is not compact. Let $p_1\colon \mathbb{Q}\times\mathbb{R}\to\mathbb{Q}$ be projection onto the first coordinate. Then $p_1(U)$ is a nonempty open subset of $\mathbb{Q}$. Since the extreme points of any vertical cross-section of $U$ lie in the boundary, we know that $p_1(\partial U) \supseteq p_1(U)$, and therefore $p_1(\partial U)$ contains an open interval $(a,b)\cap\mathbb{Q}$. This in turn contains a nondegenerate closed interval $[c,d]\cap \mathbb{Q}$. This is a closed subset of $p_1(\partial U)$ which is not compact, so $p_1(\partial U)$ cannot be compact, and hence $\partial U$ is not compact.
H: Proving that $\iint_S (\nabla \times F) \cdot \hat{n} dS =0$ I have the following question: Prove that $$\iint_S (\nabla \times \vec{F}) \cdot \hat{n}\, dS =0$$ for any closed surface $S$ and twice differentiable vector field $\vec F:\mathbb{R^3} \to \mathbb{R^3}$. I need to prove this using Stokes' theorem. The only thing I want to verify is whether or not for every closed surface $S$ we have $$\iint_S (\nabla \times \vec{F}) \cdot \hat{n}\, dS =\int_C \vec F \cdot d\vec r,$$ and the last term is trivially zero because $C=\emptyset$ ($S$ is a closed surface). Is this correct? Thanks in advance AI: Yes, correct. Here's an alternative proof: choose $C$ to be a closed curve across the surface, like an equator. This cuts $S$ into two surfaces, so divide the surface integral into two: $$\iint\limits_{S_1}\left(\nabla\times\vec F\right)\cdot \hat n\mbox{ d}S+\iint\limits_{S_2}\left(\nabla\times\vec F\right)\cdot \hat n\mbox{ d}S.$$ By Stokes' theorem, each piece equals a line integral over $C$, but $S_1$ and $S_2$ induce opposite orientations on $C$, so the two line integrals cancel.
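Yet another way to see it, as a quick symbolic check: $\operatorname{div}(\nabla\times\vec F)=0$ identically, so for a closed surface bounding a solid region the divergence theorem also gives zero flux. A sympy sketch with arbitrary (symbolic) component functions:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# Arbitrary smooth component functions
P = sp.Function('P')(x, y, z)
Q = sp.Function('Q')(x, y, z)
R = sp.Function('R')(x, y, z)
F = P*N.i + Q*N.j + R*N.k

print(sp.simplify(divergence(curl(F))))   # 0: mixed partials cancel, so div(curl F) vanishes identically
```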
H: The most general linear map of $\mathbb{R}^n$ to $\mathbb{R}$ Find the most general linear map of $\mathbb{R}^n$ to $\mathbb{R}$. Obviously, the dot product with a fixed vector is one such map. However, I'm not sure how to show that it's the most general of them. Or maybe something else is even more general? Hint appreciated! AI: As with any two finite dimensional vector spaces $V$, $W$, you can fix a basis for $V$ and a basis for $W$, and then the linear transformations from $V$ to $W$ become rectangular matrices acting by matrix multiplication. So, one could say that a general linear transformation from $\Bbb R^n$ to $\Bbb R$ can be identified with a matrix $[a_1,\dots, a_n]$ where $T(x_1,\dots, x_n):=[a_1,\dots, a_n][x_1,\dots, x_n]^\top \in \Bbb R$. You can see that this means all linear maps arise as the dot product of some fixed vector with other vectors.
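A small numerical illustration of this identification (a sketch: the representing vector is recovered from the map's values on the standard basis; the starting vector and test point are arbitrary):

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)
a = rng.normal(size=n)                 # some fixed vector

T = lambda x: float(np.dot(a, x))      # a linear map R^n -> R

# The representing row matrix [a_1, ..., a_n] is T evaluated on the standard basis
a_recovered = np.array([T(e) for e in np.eye(n)])

x = rng.normal(size=n)
print(np.allclose(a, a_recovered))               # True
print(np.isclose(T(x), np.dot(a_recovered, x)))  # True: T is the dot product with a_recovered
```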
H: Operator Norm of a Matrix composed of Standard Basis and Fourier Basis Let $\mathbf{A}_n$ be an $n\times 2n$ matrix (where $n=2^k$) composed of Fourier basis and standard basis; that is, $$\mathbf{A}_n = \begin{bmatrix}\mathbf{I}_n & \mathbf{F}_n\end{bmatrix}$$ where $\mathbf{I}_n$ is the identity matrix and $\mathbf{F}_n$ is the DFT matrix. Generally, a DFT matrix $\mathbf{F}_n$ is defined as $$ \mathbf{F}_n = \frac{1}{\sqrt{n}} \begin{bmatrix} \omega^0 & \omega^0 & \omega^0 & \cdots & \omega^0 \\ \omega^0 & \omega^1 & \omega^2 & \cdots & \omega^{n-1} \\ \omega^0 & \omega^2 & \omega^4 & \cdots & \omega^{2(n-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \omega^0& \omega^{n-1}& \omega^{2(n-1)}& \cdots & \omega^{(n-1)(n-1)} \end{bmatrix} $$ where $\omega = \exp(-2\pi i / n)$ is the primitive $n^\mathrm{th}$ root of unity. For example, if $n=2^2$, $\mathbf{A}_4$ is the concatenation of $\mathbf{F}_4$ and $\mathbf{I}_4$: $$ \mathbf{A}_4= \begin{bmatrix} 1 & 0 & 0 & 0 & 1/2 & 1/2 & 1/2 & 1/2 \\ 0 & 1 & 0 & 0 & 1/2 & -i/2 & -1/2 & i/2 \\ 0 & 0 & 1 & 0 & 1/2 & -1/2 & 1/2 & -1/2 \\ 0 & 0 & 0 & 1 & 1/2 & i/2 & -1/2 & -i/2 \end{bmatrix} $$ One can easily obtain $\mathbf{F}_4$ by plugging in $\omega = \exp(-2\pi i / 4) = -i$. I'm interested in computing the operator norm of $\mathbf{A}$ defined as: $$||\mathbf{A}||_{op} = \max_{||\mathbf{x}||_2 = 1} ||\mathbf{A}\mathbf{x}||_2$$ where $||\mathbf{x}||_2$ is the Euclidean norm of the vector $\mathbf{x}$. Do we have a closed form of $||\mathbf{A}_n||_{op}$ for any $n$? AI: The singular values of $(I,F)$ are the square roots of the eigenvalues of $(I,F)(I,F)^\ast=I+FF^\ast$. Since the DFT matrix $F$ is unitary, $I+FF^\ast=2I$. Consequently, all singular values and in turn the operator $2$-norm of $(I,F)$ are equal to $\sqrt{2}$, regardless of the size $n$.
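A quick numerical confirmation of the $\sqrt2$ answer for several sizes (a sketch; `np.fft.fft` applied to the identity gives the unnormalized DFT matrix, so it is divided by $\sqrt n$ to make it unitary):

```python
import numpy as np

for k in range(1, 7):
    n = 2**k
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT matrix
    A = np.hstack([np.eye(n), F])
    print(n, np.linalg.norm(A, 2))           # spectral norm ≈ sqrt(2) ≈ 1.41421 for every n
```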
H: What is the dimension of an open subset of an affine euclidean space? And why? My question comes from an exercise in my differential geometry book: An open subset $M \subset \mathcal E$ is a smooth manifold of maximal dimension, i.e. $\dim(M) = \dim(\mathcal E)$, and, for all $p \in M$, $T_pM = E$. I used the restriction to $W = \xi^{-1}(M)$ of a global chart $\xi: \mathbb R^m \rightarrow \mathcal E$ to prove it is a manifold, but I'm not sure how to determine the dimension. Help, anyone? AI: Since $W = \xi^{-1}(M)$ is an open subset of $\mathbb{R}^m$, and you have, in the process of showing that $M$ is a manifold, constructed a chart $$\xi|_W : W \to M,$$ by (the most common) definition of dimension, the dimension is simply $m$, i.e. the same as that of $\mathcal{E}$.