H: existence/uniqueness of solution and Ito's formula Given the Ito SDE $$ dX_t=a(X_t,t)dt + b(X_t,t) dB_t $$ where $a(X_t,t)$ and $ b(X_t,t)$ satisfy the Lipschitz condition for existence and uniqueness of solutions. Given a function $f(X_t,t) ∈ C^2$, using Ito's formula I can derive the SDE $$ df = \frac{\partial f} {\partial t} dt + \frac{\partial f}{\partial x} dX_t + \frac{1}{2} \frac{\partial^2 f}{\partial x^2} dX_t^2 $$ where $dX_t^2$ is computed using the Ito multiplication rules. The question is: are there any requirements that $f(X_t,t)$ must satisfy to guarantee the existence and uniqueness of solutions (I would say yes). Any reference is welcome. Thanks in advance. AI: You should distinguish between an equation and the 'direct' definition of something. For example, whenever you have an expression of the form $$ x = g(x) \tag{1} $$ where $g$ is a certain function/operator that is given to you, you may be asked to find $x$ which satisfies such an expression. You can never be sure whether such an $x$ exists, or whether there is only one such $x$. Indeed, if you change the $x$ in the RHS, the LHS changes as well since it depends on $x$. Anyway, suppose we found such an $x$ and it is unique. Now, imagine that you also have an expression $$ y = h(x) \tag{2}. $$ This is not an equation, but rather a definition of $y$. Indeed, to find the value of $y$ the only thing we need to do is substitute $x$ (found in the previous step) as an argument of $h$ - that's it. What you have in your original post (OP) is that $x$ is the process $(X_t)_{t\geq 0}$ which satisfies $$ X_T = X_0 + \int_0^Ta(X_t,t)\mathrm dt + \int_0^Tb(X_t,t)\mathrm dB_t \tag{$1^\prime$} $$ which can be compactly written through differentials as in your case. Here the operator $g$ as in $(1)$ is just these integrals and the functions $a,b$ applied to $X$. Again, the LHS and the RHS both contain $X$, so it is not obvious whether there exists such an $X$ which makes both sides equal. Thus, we need conditions on $a$ and $b$ to assure such existence, or uniqueness. Now, let $Y_t = f(X_t,t)$ be another process. For $f\in C^2$ we can use an alternative definition of $Y$: $$ Y_T = Y_0 + \int_0^T\frac{\partial f}{\partial t}(X_t,t)\mathrm dt + \int_0^T \frac{\partial f}{\partial X}(X_t,t)\mathrm dX_t + \frac{1}{2}\int_0^T \frac{\partial^2 f}{\partial X^2}(X_t,t)\mathrm dX_t^2 \tag{$2^\prime$} $$ which yet again can be written in a compact symbolic form via differentials as you did. Here you can think of the RHS as a function of the process $X$: $h((X_t)_{t\geq 0})$. It is only used to define $Y$. The only thing that you need to take care of is that $h$ is defined for a particular value of the argument, that is, all the integrals in the RHS of $(2')$ are well-defined. Actually, here the only Ito integral is the middle one (the part with $\mathrm dX_t$) and you indeed should check that $$ Z_t:=b(X_t,t)\frac{\partial f}{\partial X}(X_t,t) $$ satisfies for example $(iii)$ in Definition 3.1.4 or $(iii)'$ in Definition 3.3.2 of Oksendal: "Stochastic Differential Equations". As a result, you should not talk about solutions of $(2')$, just as you don't talk about the solutions of $y = 3$. Instead, you talk about solutions of $x = x^2+1$.
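A numerical illustration of the point above: once $X$ has been solved for, $(2')$ merely evaluates $Y_T=f(X_T,T)$. The short Python sketch below (the linear drift $a(x,t)=-\theta x$, constant $b=\sigma$, and all parameter values are illustrative assumptions, not from the question) simulates $X$ by Euler-Maruyama and checks pathwise that $f(X_T,T)-f(X_0,0)$ agrees with the RHS of $(2')$, which for $f(x,t)=xe^{\theta t}$ reduces to $\int_0^T\sigma e^{\theta t}\,\mathrm dB_t$ since the drift terms cancel.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, T, n = 0.5, 0.3, 1.0, 100_000   # assumed illustrative parameters
dt = T / n
t = np.linspace(0.0, T, n + 1)
dB = rng.normal(0.0, np.sqrt(dt), n)          # Brownian increments

X = np.empty(n + 1)
X[0] = 1.0
for i in range(n):                            # Euler-Maruyama for dX = -theta*X dt + sigma dB
    X[i + 1] = X[i] - theta * X[i] * dt + sigma * dB[i]

lhs = X[-1] * np.exp(theta * T) - X[0]        # f(X_T, T) - f(X_0, 0)
rhs = np.sum(sigma * np.exp(theta * t[:-1]) * dB)  # Ito integral along the same path
print(lhs, rhs)                               # agree up to O(sqrt(dt)) discretization error
```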
H: Exterior Algebra of smooth differential forms I'm a little bit confused about the exterior algebra of smooth differential forms $\Omega(M)$ on a manifold M. The definition of k-forms is clear to me, but I don't understand how to put them together, s.t. they form $\Omega(M)$ so to speak. Maybe you can help me to get rid of my problems: First of all there is some confusion about the definition of $\Omega(M)$. Some people write $\Omega(M):=\bigoplus\limits_{k=0}^{\dim(M)}\Omega^k(M)$ and some $\Omega(M):=\sum\limits_{k=0}^{\dim(M)}\Omega^k(M)$. I never saw the second notation before; are they both the same by definition, or is there a different meaning to the second one? Furthermore, if we accept the definition $\Omega(M):=\bigoplus\limits_{k=0}^{\dim(M)}\Omega^k(M)$ then the elements of $\Omega(M)$ will consist of tuples like $(\omega_0,...,\omega_{\dim(M)})$, where $\omega_k\in\Omega^k(M)$. How do I extend the definition of the wedge product of single forms, i.e. $\omega_i\wedge\omega_j$, to elements of the algebra, i.e. $(\omega_0,...,\omega_{\dim(M)})\wedge (\alpha_0,...,\alpha_{\dim(M)})=?$ I suggest that one writes the elements as "formal sums" like $(\omega_0,...,\omega_{\dim(M)})=:\omega_0+...+\omega_{\dim(M)}$ and then extends the wedge product bilinearly. Is that right? I hope someone can help me by answering my two questions. Regards AI: Honestly I do not remember the notation $\sum$ you introduce above. I will be quite sloppy with notation trying to keep all simple. Let $n$ denote the dimension of $M$, a finite dim. real manifold. The $k$-differential forms on $M$ are the elements of the space $\Omega^{k}(M)$; it follows from the very definition that such $k$-diff. forms can be locally written as $\omega=\omega_{i_1\dots i_k}(p) dx^{i_1}\wedge\dots\wedge dx^{i_k}$, where $p\in M$, the $\omega_{i_1\dots i_k}(p)$ are smooth functions and $\{dx^{\bullet}\}$ denotes a basis of the finite dimensional vector space $\wedge^k T^{*}_p(M)$. From the very definition of wedge product it follows that $ dx^{i_k}\wedge dx^{i_l} =-dx^{i_l}\wedge dx^{i_k}$. So $\Omega^{k}(M)=0$ if $k>n$ (try to write a wedge product of $dx^{\bullet}$'s with more than $n$ terms: by antisymmetry you can prove that the product is zero). The direct sum $\Omega(M)=\oplus_{k=0}^n\Omega^{k}(M)$ is a graded algebra; the "grading" is the integer $k$ we introduced above. The algebra structure is given by the associative product $*:\Omega^{k}(M)\otimes\Omega^{l}(M)\rightarrow \Omega^{k+l}(M)$, $\omega_k *\omega_l:=\omega_k\wedge\omega_l$. Note that the product "respects" the grading: the structure $\Omega^{k}(M)\otimes\Omega^{l}(M)$ has degree $k+l$ (it follows from its very definition as tensor product), which is the same degree as $\Omega^{k+l}(M)$. The product is then extended bilinearly: $(\omega_1+\dots+\omega_k) *\omega_l:=\omega_1\wedge\omega_l+\dots+\omega_k\wedge\omega_l$ and $\omega_s*(\omega_1+\dots+\omega_t) :=\omega_s\wedge\omega_1+\dots+\omega_s\wedge\omega_t$. Hope this can help
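To make the bilinear extension concrete, here is a small Python sketch (a toy model with constant coefficients, not actual smooth forms): basis monomials $dx^{i_1}\wedge\dots\wedge dx^{i_k}$ are encoded as index tuples, antisymmetry produces the sign, and the product of two formal sums of mixed degree is extended bilinearly exactly as in the answer.

```python
def wedge_basis(a, b):
    """Wedge two basis monomials given as index tuples.
    Returns (sign, sorted index tuple), or None if the product vanishes."""
    idx = a + b
    if len(set(idx)) < len(idx):
        return None  # a repeated dx^i makes the product zero by antisymmetry
    # parity of the sorting permutation = number of inversions
    inv = sum(1 for i in range(len(idx))
                for j in range(i + 1, len(idx)) if idx[i] > idx[j])
    return (-1) ** inv, tuple(sorted(idx))

def wedge(omega, alpha):
    """Bilinear extension to formal sums {index tuple: coefficient} of mixed degree."""
    out = {}
    for a, ca in omega.items():
        for b, cb in alpha.items():
            res = wedge_basis(a, b)
            if res is not None:
                sign, key = res
                out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

# omega = 2*dx1 + dx2^dx3 (mixed degree), alpha = dx2
omega = {(1,): 2, (2, 3): 1}
alpha = {(2,): 1}
print(wedge(omega, alpha))  # {(1, 2): 2}: the dx2^dx3^dx2 term vanished
```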
H: Help on this integral I'd like to know why this holds. If one has $f(x_t,t)=x_t\mathrm{e}^{\theta t}$, then $\int_0^t df(x_t,t)=x_t\mathrm{e}^{\theta t}-x_0$. Shouldn't that be only $x_t\mathrm{e}^{\theta t}$? AI: This is a definite integral: $\int_0^t df(x_s,s) = f(x_s,s)\big|_0^t = x_t e^{\theta t} - x_0 e^{\theta \cdot 0} = x_t e^{\theta t} - x_0$.
H: Do closure operators on arbitrary posets give rise to complete lattices? The notion of a closure operator is defined here for an arbitrary partially ordered set. Now consider an arbitrary set $x$, and call its powerset $P.$ Furthermore, let us order $P$ by inclusion, thereby obtaining a complete lattice. Under these circumstances, it is well known that given a closure operator $\mathrm{cl} : P \rightarrow P$, we can obtain a complete lattice $Q \subseteq P$ as follows. The elements of $Q$ are precisely the $a \in P$ such that $\mathrm{cl}(a)=a.$ The meet of $A \subseteq Q$ is simply $\bigcap A.$ (By intersection, I simply mean the meet operation of $P$. Thus if $A$ is empty, its meet is simply $x$). The join of $A \subseteq Q$ is $\mathrm{cl}\left(\bigcup A\right).$ So I'm wondering. What happens in more general situations? e.g. What if $P$ is an arbitrary poset, or an arbitrary lattice, or even an arbitrary complete lattice? Edit: For example, if $P$ is a lattice, do we obtain a new lattice? (Clearly it needn't be complete - as Carl Mummert explains in the comments, we can always take $\mathrm{cl}$ to be the identity operator). And what if $P$ is a join-semilattice, or a meet-semilattice? AI: The fixpoints of a closure operator on a (complete) meet semilattice always form a (complete) meet semilattice, with the inherited meet, whereas the fixpoints of a closure operator on a (complete) join semilattice form a (complete) join semilattice, but possibly with a different join – but your formula still works. This generalises to arbitrary categories, so long as we replace "closure operator" with "idempotent monad".
H: To prove that the following equation has no solution The question is: Prove that there are no real numbers $(x,y)$ that satisfy the equation $$ 13+12[\arctan(x)]=62[\ln(x)]+8[e^x]+4[\arccos(y)] $$ $ [\ ] \text{ denotes the greatest integer function} $ I tried writing the possible values of $\arctan(x), \arccos(y)$, thereby leading to the values of $x$ and $y$; that turned out to be useless. Can someone help me, or is there an alternative approach? AI: The LHS and RHS of the equation are both integers (why?). The RHS is an even integer: observe the common factor $2$ in each term of the RHS. The LHS, irrespective of the value of $[\arctan(x)]$, is an odd integer. Since LHS=RHS, this would mean that an odd number equals an even number, which is clearly a contradiction. Hence there are no $(x,y)$ that satisfy the above equation.
H: limit of evaluated automorphisms in a Banach algebra Let $\mathcal{A}=\operatorname{M}_k(\mathbb{R})$ be the Banach algebra of $k\times k$ real matrices and let $(U_n)_{n\in\mathbb{N}}\subset\operatorname{GL}_k(\mathbb{R})$ be a sequence of invertible elements such that $U_n\to 0$ as $n\to\infty$. Define $\sigma_n\in\operatorname{Aut}(\mathcal{A})$ via $X\mapsto U_nXU_n^{-1}$. Suppose I have a sequence $(W_n)_{n\in\mathbb{N}}\subset\mathcal{A}$ such that $W_n\to W\in\mathcal{A}$ as $n\to\infty$. I would like to determine $\lim_{n\to\infty}\sigma_n(W_n)$. My question is how can I approach such a problem? It looks like something that should have a general answer (for $\mathcal{A}$ not necessarily finite-dimensional Banach algebra over the reals) in the theory of operator algebras, but I have a rather poor background there. If it is something relatively easy, I would rather appreciate a hint or reference than a full answer, so I can work it out further on my own (I am just trying to get back on the math track after some time of troubles). Thanks in advance for any help! AI: This limit need not exist. For example, let's work in $M_2(\mathbb R)$. If $$ U_n= \left( \begin{array}{cc} \frac{1}{n} & 0 \\ 0 & \frac{1}{n^2} \end{array} \right), $$ then $\Vert U_n \Vert \to 0$ as $n \to \infty$, and $$ U_n^{-1}= \left( \begin{array}{cc} {n} & 0 \\ 0 & {n^2} \end{array} \right). $$ If we now let $$ W_n= \left( \begin{array}{cc} 0 & \frac{1}{\sqrt{n}} \\ 0 & 0 \end{array} \right), $$ then $W_n \to 0$ as $n \to \infty$, but $$ \sigma_n (W_n) = U_n W_n U_n^{-1}= \left( \begin{array}{cc} 0 & \sqrt{n} \\ 0 & 0 \end{array} \right), $$ which does not converge as $n \to \infty$.
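For what it's worth, the counterexample is easy to watch numerically; here is a short sketch evaluating $\sigma_n(W_n)=U_nW_nU_n^{-1}$ for a few values of $n$:

```python
import numpy as np

# ||U_n|| -> 0 and W_n -> 0, yet the (1,2) entry of U_n W_n U_n^{-1} grows like sqrt(n).
for n in [10, 100, 1000]:
    U = np.diag([1 / n, 1 / n**2])
    W = np.array([[0.0, 1 / np.sqrt(n)],
                  [0.0, 0.0]])
    print(n, U @ W @ np.linalg.inv(U))   # top-right entry equals sqrt(n)
```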
H: no. of solutions of the equation $[x]^2+a[x]+b = 0$ If $a$ and $b$ are odd integers, then find the number of solutions of the equation $[x]^2+a[x]+b = 0$, where $[x] = $ greatest integer function. My Try: Let $[x] = y$. Then the equation becomes $y^2+ay+b = 0$. Now if the given equation has real roots, then $\displaystyle y = \frac{-a\pm \sqrt{a^2-4b}}{2}$. Now $a^2-4b = k^2\Leftrightarrow a^2-k^2=4b$, where $k\in \mathbb{Z}$. How can I proceed after that? Help required. Thanks AI: Note that $-a$ is the sum and $b$ is the product of the roots. If the roots are integers, their sum is odd only if they have different parity, and their product is odd only if they both are odd. Hence no solution.
H: Integrating secant squared times tangent Integrate the function. $$ \int \sec^2 x \tan x \, dx $$ I'm trying to get a proper substitution, but I couldn't get anything proper. AI: This is a good one. Hope you are familiar with integration by substitution. Case 1: put $\tan x=t$, so $dt=\sec^2x\,dx$. Substituting, we get $I= \int t\,dt = \frac {t^2}{2}+c_1$, that is, $I= \frac{\tan^2x}{2} + c_1$. Case 2: put $\sec x=u$, so $du=\sec x\tan x \, dx$ and $I= \int u\,du = \frac {u^2}{2}+c_2 = \frac{\sec^2x}{2}+c_2$. Since both integrals are equal, $\frac{\tan^2x}{2}+c_1=\frac {\sec^2x}{2}+c_2 $, which implies $c_1 -c_2 =\frac {1}{2} $. So we understand that if a function has more than one antiderivative, the antiderivatives differ by a constant.
H: Trouble with complex numbers Is my following calculation true? $e^{a+ib}e^{\overline{a+ib}}=e^{a+ib}e^{a-ib}=e^{2a}$? for $a,b$ real numbers. Or in general, what is $\overline{{z}^{w}}$ if $z,w$ are complex numbers? AI: Yes, $\color {green}{e^{a+ib}e^{\overline{a+ib}}=e^{a+ib}e^{a-ib}=e^{2a}} $ is true. Hint:$$z^w=e^{w \log z}$$ $$\log z=\ln|z|+i\arg(z)$$ Edit: $e^{\bar{z}}=e^{x}(\cos (y)+i\sin(-y))=e^{x}(\cos (y)-i\sin(y))$ and $\overline{e^z}=e^{x}(\cos (y)-i\sin(y))$, so the two agree.
H: log base 1 of 1 What is $\log(1)$ to the base of $1$? My teacher says it is $1$. I beg to differ, I think it can be all real numbers! i.e., $1^x = 1$, where $x\in \mathbb{R}$. So I was wondering where I have gone wrong. AI: The reason why it is not convenient to define $\log$ for the base of $1$ is simple: $$\log_11=\frac{\log_e 1}{\log_e 1}$$ But the denominator is $0$ and thus the division doesn't make any sense unless we're working with limits :)
H: Is this notation for Stokes' theorem? I'm trying to figure out what $\iint_R \nabla\times\vec{F}\cdot d\textbf{S}$ means. I have a feeling that it has something to do with the classical Stokes' theorem. The Stokes' theorem that I have says $$ \int\limits_C W_{\vec{F}} = \iint\limits_S \Phi_{\nabla\times\vec{F}} $$ where $\vec{F}$ is a vector field, $W_{\vec{F}}$ is the work form of $\vec{F}$, and $\Phi_{\nabla\times\vec{F}}$ is the flux form of the curl of $\vec{F}$. Is the notation in question the same as the RHS of the above equation? AI: It seems to me that the integrals $$\int\limits_C W_{\vec{F}}~~~~\text{and}~~~~\oint_{\mathfrak{C}}\vec{F}\cdot d\textbf{r}$$ have the same meaning. I don't know the notation $ \Phi_{\nabla\times\vec{F}}$, but if it means $$\textbf{curl F}\cdot \hat{\textbf{N}} ~dS$$ then your answer is yes.
H: How to prove $(a-b)^3 + (b-c)^3 + (c-a)^3 -3(a-b)(b-c)(c-a) = 0$ without calculations I read somewhere that I can prove this identity below with abstract algebra in a simpler and faster way without any calculations, is that true or am I wrong? $$(a-b)^3 + (b-c)^3 + (c-a)^3 -3(a-b)(b-c)(c-a) = 0$$ Thanks AI: The easiest way to show this is by observing that the sum of cubes is such that all the cubed terms cancel, so it is quadratic in each variable individually; then notice that the sum of cubes vanishes for $a=b,c$ and for $b=c$. Consequently it must factorize as $(a-b)(a-c)(b-c)\times d$ for some $d$. (Why? One gets $f(b,c)\times (a-b)(a-c)$ by thinking of it in terms of a quadratic in $a$; then the form of $f$ follows by thinking in terms of $b$ or simply symmetry.) Letting $a,b,c=0,1,2$ tells you the constant. Alternatively, note that $(a-b)$ must be a factor, so by cyclic symmetry $(a-b)(b-c)(c-a)$ must be. The result can be deduced similarly from the above.
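If you want to double-check the identity (and the claimed factorization of the sum of cubes) symbolically, a short SymPy sketch suffices:

```python
from sympy import symbols, expand, factor

a, b, c = symbols('a b c')
identity = (a-b)**3 + (b-c)**3 + (c-a)**3 - 3*(a-b)*(b-c)*(c-a)
print(expand(identity))                        # 0, confirming the identity
# the sum of cubes alone factors as 3*(a-b)*(b-c)*(c-a)
# (SymPy may print the factors in a different order/sign arrangement):
print(factor((a-b)**3 + (b-c)**3 + (c-a)**3))
```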
H: Proof of an equivalent definition of a continuous function In Dudley, Real Analysis and Probability (2nd ed.), Theorem 2.1.2 states: Given topological spaces $(X,\mathcal{T})$ and $(Y,\mathcal{U})$ and a function $f:X\to Y$, if for every convergent filter base $\mathcal{F}\to x$ in $X$, $f[[\mathcal{F}]]\to f(x)$ in $Y$, then $f$ is continuous. Proof. Take any $U\in\mathcal{U}$ and $x\in f^{-1}(U)$. The filter $\mathcal{F}$ of all neighborhoods of $x$ converges to $x$, so $f[[\mathcal{F}]]\to f(x)$. For some neighborhood $V$ of $x$, $f[V]\subset U$, so $V\subset f^{-1}(U)$, and $f^{-1}(U)\in\mathcal{T}$. My question is: how does the last assertion follow? From $V\subset f^{-1}(U)$ we know that $f^{-1}(U)$ is a neighborhood of $x$ and therefore $f^{-1}(U)\in\mathcal{F}$. AI: Perhaps a slight re-wording of the argument makes it clearer. To show that $f^{-1} [ U ]$ is open in $X$, it suffices to show for each $x \in f^{-1}[U]$ that there is an open neighbourhood $V_x$ of $x$ such that $V_x \subseteq f^{-1} [ U ]$. (Then $f^{-1} [ U ]$ is the union of the sets $V_x$, and is thus open.) The last little bit of the argument given is really a couple of steps compressed into one. First, for a given $x \in f^{-1}[U]$ we have a neighbourhood $V$ of $x$ such that $f [ V ] \subseteq U$, and so $V \subseteq f^{-1} [ f[V] ] \subseteq f^{-1} [ U ]$. So we have found an appropriate neighbourhood for that particular element of $f^{-1} [ U ]$. But this $x$ was just some arbitrary element of $f^{-1} [ U ]$, so we have actually shown that for all elements of $f^{-1} [ U ]$ there is an appropriate neighbourhood. And therefore $f^{-1} [ U ]$ is open, as desired.
H: Logic Circuit Question 1) Write the boolean expression after every GATE 2) Write the boolean expression of GATE 3 3) Try to simplify the boolean expression of GATE3 I need to know if what I did is right + your advice if there is another way to answer those questions. 1) $GATE_1 = (A'B)' = A+B'$ and $GATE_2 = 1 ⊕ C$ 2) $GATE_3 = ((A+B')+0+C')'$ 3) $A'BC$ AI: Yes you are correct, and I used the same method to reach the answers as you have above.
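As a sanity check, one can brute-force the 8 input combinations in Python (a sketch assuming, as in the work above, that the third input feeding GATE 3 is the constant 0, and using $1 ⊕ C = C'$):

```python
from itertools import product

for A, B, C in product([0, 1], repeat=3):
    gate1 = int(not ((not A) and B))          # (A'B)' = A + B'
    gate2 = 1 ^ C                             # 1 xor C = C'
    gate3 = int(not (gate1 or 0 or gate2))    # ((A + B') + 0 + C')'
    simplified = int((not A) and B and C)     # A'BC
    assert gate3 == simplified
print("GATE3 == A'BC for all inputs")
```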
H: Polygon Inequality We know that to form a triangle the 3 sides should obey the triangle inequality. So is there any rule to be followed by the sides of an $n$-sided convex polygon? E.g., $1,2,4$ cannot form a triangle; so, if we are given $n$ line segments, can we tell whether we can make an $n$-sided convex polygon? AI: The longest side must be shorter than the sum of the rest.
H: prove that: $D_{4n} $ is isomorphic to $ D_{2n} \times Z_2 $ when $n$ is odd Let $n$ be an odd integer; prove that $D_{4n} $ is isomorphic to $ D_{2n} \times Z_2 $. It's an example which the text proves, but I can't understand anything from the argument. I tried to prove it by constructing the isomorphism directly, but every time there was a tiny gap. So does anyone know a nice proof for this? AI: Note that $Z_n\times Z_2\cong Z_{2n}$ if $n$ is odd and the inverse of $(x,y)\in Z_n\times Z_2$ is $(-x,y)$; that is, we have an operation of $Z_2$ on $Z_{2n}$ by inversion that boils down to the trivial operation on the summand $Z_2$ and again inversion on $Z_n$. Therefore $$ D_{4n}\cong Z_{2n}\rtimes Z_2\cong(Z_n\times Z_2)\rtimes Z_2\cong(Z_n\rtimes Z_2)\times Z_2\cong D_{2n}\times Z_2.$$
H: $L^2$-lower semicontinuity of an integral operator on $G(x,\nabla w(x))$ In the paper "A posteriori error estimates for variable time-step discretizations of nonlinear evolution equations" by Nochetto, Savaré, Verdi we find the following claim in Example 2.4: Let $\mathcal H:=L^2(\Omega),\ p>1$ $$ \phi(w):=\int_\Omega G(x,\nabla w(x))dx,\quad D(\phi):=L^2(\Omega)\cap W_0^{1,p}(\Omega)$$ If $G(x,\xi):\Omega\times\mathbb R^d\rightarrow\mathbb R$ is a Carathéodory function, convex and continuously differentiable in $\xi$ for a.e. $x\in\Omega$, such that $$ G(x,\xi)\geqslant\alpha_0|\xi|^p-\alpha_1,\quad|\nabla_\xi G(x,\xi)|\leqslant\alpha_2 (1+|\xi|^{p-1}), \quad\forall\xi\in\mathbb R^d,\ \text{a.e. } x\in\Omega$$ with some positive constants $\alpha_i$, then $\phi$ is convex and lower semicontinuous in $\mathcal H=L^2(\Omega)$. Note that l.s.c. is stated in $L^2$. (At least for $p=2$) I am interested in either a reference to such a result with proof or a hint of how to prove this. I have difficulties relating this to classical results like the theorem of Serrin and generalizations, as they yield $${\lim \inf}_{n\rightarrow\infty} \phi(u_n)\geqslant\phi(u)$$ only for sequences $u_n\rightarrow u$ wrt the $L^1$ norm where the $u_n$ and $u$ are in $W^{1,1}(\Omega)$, while assumptions are usually weaker. I also wonder whether or not such a result carries over to the vector-valued case, i.e. $\mathcal H=L^2(\Omega,\mathbb R^N)$. NB: I'm a numerics guy with little practice in analysis... AI: Ad 1: I will give a possible idea of a proof. Take a sequence $w_k \to w$ in $L^2(\Omega)$. If $\liminf\phi(w_k) = +\infty$, we are fine. Otherwise take a bounded subsequence (still denoted by $w_k$). Now you have $$C \ge \phi(w_k) \ge \alpha_0 \int_\Omega \lvert \nabla w_k \rvert^p dx - \alpha_1 \, \int_\Omega dx.$$ Hence, $w_k$ is bounded in $W^{1,p}(\Omega)$ and you can select a weakly convergent subsequence (still converging towards $w$). Finally, you obtain the lower semicontinuity, since $G$ is convex in $\xi$ (use: convex + continuous gives weakly lower semicontinuous). Ad 2: Along these lines, one should do similar things for the vector-valued case.
H: Solution of a Dirichlet problem on the unit disk Find the solution of the Dirichlet problem: $$\Delta u(r,\phi)=0, r<1, u(1,\phi)=f(\phi)$$ where $x=r\cos\phi$ and $y=r\sin\phi$ and $$f(\phi)=\sin^3(\phi).$$ I start by doing the following: Enter the polar coordinates $x=r\cos\phi$ and $y=r\sin\phi$. Deriving: $$\begin{gather} u_r=u_x\cos\phi+u_y\sin\phi \tag{1}\\ u_\phi=-ru_x\sin\phi+ru_y\cos\phi \tag{2}\\ u_{rr}=u_{xx}\cos^2(\phi)+2u_{xy}\cos\phi \sin\phi+u_{yy}\sin^2\phi \tag{3}\\ u_{\phi\phi}=r^2u_{xx}\sin^2\phi-ru_{x}\cos\phi-2r^2u_{xy}\sin\phi \cos\phi+r^2u_{yy}\cos^2\phi-ru_y\sin\phi \tag{4} \end{gather}$$ Adding equation $(3)$ and equation $(4)$ divided by $r^2$, we obtain $$u_{rr}+\frac{1}{r^2}u_{\phi\phi}=u_{xx}+u_{yy}-\frac{1}{r}u_x\cos\phi-\frac{1}{r}u_y\sin\phi$$ Using $(1)$, we obtain $$\Delta u=\frac{1}{r} (ru_r)_r+\frac{1}{r^2}u_{\phi\phi}$$ Therefore in polar coordinates, the problem takes the form: $$\frac{1}{r}(ru_r(r,\phi))_r+\frac{1}{r^2}u_{\phi\phi}(r,\phi)=0 \text{ with } u(1,\phi)=f(\phi).$$ My biggest problem is to reach the conclusion that the solution is $$r(3\sin\phi-r^2\sin 3\phi)/4.$$ AI: Use the fact that $\sin^3{\phi} = \frac{3}{4} \sin{\phi} - \frac14 \sin{3 \phi}$, and that the general solution to the interior Dirichlet problem in polar coordinates is $$u(r,\phi) = \frac{a_0}{2} + \sum_{k=1}^{\infty} r^k (a_k \cos{k \phi} + b_k \sin{k \phi})$$ Now, $u(1,\phi) = \sin^3{\phi} = \frac{3}{4} \sin{\phi} - \frac14 \sin{3 \phi}$ means that we may simply conclude $a_k=0$ for all $k$ and $b_k=0$ for all $k$ except $k=1$ and $k=3$, where $b_1=\frac{3}{4}$ and $b_3=-\frac14$. This is true because the sines and cosines form an orthogonal basis set over the unit circle. Thus, the solution to this interior Dirichlet problem is $$u(r,\phi) = \frac{3}{4} r \sin{\phi} -\frac14 r^3 \sin{3 \phi}$$
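A quick symbolic check of the result (a sketch, using the polar Laplacian derived above):

```python
from sympy import symbols, sin, Rational, diff, simplify

r, phi = symbols('r phi', positive=True)
u = Rational(3, 4)*r*sin(phi) - Rational(1, 4)*r**3*sin(3*phi)

# polar Laplacian: (1/r)(r u_r)_r + (1/r^2) u_phiphi
lap = diff(r*diff(u, r), r)/r + diff(u, phi, 2)/r**2
print(simplify(lap))                          # 0, so u is harmonic
print(simplify(u.subs(r, 1) - sin(phi)**3))   # 0, so u(1, phi) = sin^3(phi)
```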
H: normal distribution, two independent random variables If $X$ and $Y$ are independent normally distributed random variables and $T=2X-Y-1$ and $E[X]=E[Y]=1$ and $Var(X)=Var(Y)=4$, what is $Var(T)$? I get $E[T]=E[2X-Y-1]=2-1-1=0$, but I don't know how to get $Var(T)$. AI: This answer (which is more of an outline of an approach, or rather a hint) intends to indicate a means of deriving some useful properties of the variance $\operatorname{Var}(X)$. It is not necessary to know that $X$ and $Y$ are normal, as long as we have the information that $\operatorname{Var}(X)$ and $\operatorname{Var}(Y)$ exist (and that $X,Y$ are independent). Recall the definition of $\operatorname{Var}(X)$: $$\operatorname{Var}(X) = \Bbb E[(X-\Bbb E X)^2] = \Bbb E[X^2] - \Bbb E[X]^2$$ where the last equality was derived using the following properties of $\Bbb E$, for independent $X, Y$ and $\lambda \in \Bbb R$: \begin{align*} \Bbb E[X+\lambda Y] &= \Bbb E[X]+\lambda\Bbb E[Y]\\ \Bbb E[XY] &= \Bbb E[X]\Bbb E[Y] \end{align*} (Note that $X$ is not independent of $X$ itself, so the last equation does not help for computing $\Bbb E[X^2]$!) Using only these, it is a good and enlightening exercise to prove the following about the variance: $$ \operatorname{Var}(X+\lambda Y) = \operatorname{Var}(X) + \lambda^2\operatorname{Var}(Y) $$ Once you've done this, computing $\operatorname{Var}(T)$ should be a piece of cake.
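For a numerical sanity check of whatever you derive (the exercise's formula gives $\operatorname{Var}(T)=2^2\operatorname{Var}(X)+\operatorname{Var}(Y)=16+4=20$ here), a quick Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(1, 2, 1_000_000)   # E[X] = 1, sd = 2, so Var(X) = 4
Y = rng.normal(1, 2, 1_000_000)
T = 2*X - Y - 1
print(T.mean(), T.var())          # approximately 0 and 20
```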
H: Product of two primitive polynomials I'm having troubles with one of the problems in the book Introduction to Commutative Algebra by Atiyah and MacDonald. It's on page 11, and is the last part of the second question. Given $R$ a commutative ring with unit. Let $R[x]$ be the ring of polynomials in an indeterminate $x$ with coefficients in $R$. We say that a polynomial $f = \sum\limits_{i=0}^n r_ix^i \in R[x]$ (with coefficients $r_0, r_1, \ldots, r_n$) is primitive if $\langle r_0,r_1,...,r_n\rangle = R$, i.e., the ideal generated by the coefficients of $f$ is $R$. Let $f$ and $g$ be two polynomials in $R[x]$. Prove that $fg$ is primitive iff $f$ and $g$ are both primitive. The $\Rightarrow$ part is easy. Say, $f = \sum\limits_{i=0}^n r_ix^i$, $g = \sum\limits_{i=0}^m s_ix^i$; then $fg = \sum\limits_{i=0}^{m+n} c_ix^i$, where $c_k = \sum\limits_{i + j = k}r_is_j$. Since $fg$ is primitive, there exists a set of $\{\alpha_i\} \subset R$, such that $\sum\limits_{i=0}^{m+n} \alpha_ic_i = 1$, to prove $f$ is primitive, I just need to write all $c_i$'s in terms of $r_i$'s, and $s_j$'s, then group all $r_i$ accordingly, rearranging it a little bit, and everything is perfectly done. And the proof of the primitivity of $g$ is basically the same. The $\Leftarrow$ part is just so difficult. Say $f = \sum\limits_{i=0}^n r_ix^i$, $g = \sum\limits_{i=0}^m s_ix^i$ are both primitive, then there exists $\{\alpha_i\}; \{\beta_i\} \subset R$, such that $\sum\limits_{i=0}^{n} \alpha_ir_i = 1$, and $\sum\limits_{i=0}^{m} \beta_is_i = 1$. At first, I thought of multiplying the two together, but it just didn't work. So, I'm stuck since I cannot see any way other than multiplying the two sums together. I hope you guys can give me a small push on this. Thanks very much in advance, And have a good day. AI: To summarize the WP reference I gave in a comment: supposing $fg$ is not primitive, form the quotient of $R$, and consequently $R[x]$, by any maximal ideal (any prime ideal will do too) of$~R$ containing all coefficients of$~fg$. Then $fg$ is killed but neither $f$ nor $g$ is; however this is impossible in $K[x]$ where $K$ is the quotient field (or quotient integral domain) of $R$ by the mentioned ideal, since $K[X]$ is an integral domain when $K$ is.
H: Prove/disprove: In a graph with at least one component that does not contain a Hamilton circuit, we can make it Hamiltonian by adding a vertex Prove/disprove: In a graph $G$ with at least one component that does not contain a Hamiltonian circuit, we can add a vertex $x$ and certain edges that connect it with certain vertices in the graph, such that we get a graph where every component of the graph has a Hamiltonian circuit. My answer was: Disprove. Take for example the claw graph with 3 vertices. Any addition of $x$ and certain edges will not make a Hamiltonian circuit. (It does make a Hamiltonian path, but not a circuit.) Is that correct or am I missing something? AI: I don't know what you mean by "claw graph with 3 vertices". If you mean $P_3$ or $K_{1,2}$, then indeed we can make a Hamiltonian circuit, by creating a square. If you mean $K_{1,3}$, then your example is correct but your proof is incomplete. It can only help (in terms of producing a Hamiltonian circuit) to connect $x$ to every vertex in $K_{1,3}$. Consider the three vertices of degree 2 that result. Each path from one to another must pass through either $x$ or the other vertex of degree 4. But in a Hamiltonian circuit one needs to be able to get from each to the next (three paths) via routes that do not intersect. Contradiction. Specific errors/omissions in OP's solution: Does not discuss which edges are added to $x$, except for the general and unsupported claim "any". Does not justify why such an addition will not make a Hamiltonian circuit, or why it will make a Hamiltonian path. Uses the nonstandard term "claw graph with 3 vertices" for $K_{1,3}$ (which has 4 vertices).
H: Find the probability $P(0.5 < X < 5)$ We select two balls without replacement from a box where there are 7 red balls and 3 green balls. Let $X$ be the random variable denoting the number of selected green balls. Please compute $P(0.5 < X < 5)$ (if you can explain to me how to do $P(a<X<b)$ in general that will be great). Any pointers in the right direction? AI: In general, $P(a<X<b)=P(X<b)-P(X\leq a)$. (You won't need that here though; this property will especially be useful when dealing with continuous distributions, and intuition is handier in this case.) Here, we need $X$ to be at least $0.5$. Since we cannot draw half a green ball, we need $X\geq 1$. Now reason in the following way: which integer numbers satisfy the condition that they are $>0.5$ (here: $\geq 1$) and $<5$? Those are $1$ and $2$ (since it is impossible to draw more than $2$ green balls, as we have only $2$ trials). This means that we need $P(X=1)+P(X=2)$: $$P(0.5<X<5)=P(X=1)+P(X=2)$$
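Carrying the computation out (a sketch; the counts come from choosing which balls are drawn, out of $\binom{10}{2}$ equally likely pairs):

```python
from math import comb

p1 = comb(3, 1) * comb(7, 1) / comb(10, 2)   # exactly one green, one red
p2 = comb(3, 2) * comb(7, 0) / comb(10, 2)   # two green
print(p1 + p2)                               # 24/45 = 8/15 ≈ 0.5333
```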
H: What does $\mathbb Z_+$ mean? I am not so sure whether the meaning of $$\mathbb Z_+$$ is very clear. How many different definitions are there? Does the definition that is used depend on whether the writer is English or German? In French maths, this notation doesn't exist. AI: Many people would interpret this to mean $\{1,2,3,\ldots\}$, although some might argue for $\{0,1,2,3,\ldots\}$. Absent any other context I don't think any other interpretations are likely. Sadly, many authors use notations without defining them, because they are "standard" in their little corner of mathematics.
H: Constructing a number system I have just started working through a book on higher algebra. I'm just at the beginning, where the authors introduce the notation and talk about the various number systems. I found this particular paragraph confusing:- "The basic idea in the construction of new set of numbers is to take a set, call it $ S $, consisting of mathematical objects, such as numbers you are already familiar with, partition the set $ S $ into a collection of sets in a suitable way, and then attach names or labels, to each of the subsets. These subsets will be elements of a new number system." What does the author mean, when he says a "suitable way" here? Does it mean, that I can partition in any way that I find suitable, or are there requirements to be met, for any number system that is constructed by me? For instance, I'm familiar with the set of natural numbers. So, can I construct $S= ${$1,2,3,4,5,6,7,8,9,10$} and call it a subset of a new number system? Can I go as far as to say that this subset is the only element of my new number system? AI: Maybe an example would help. We're all familiar with the natural numbers $\mathbb{N} = \{0, 1, 2, \ldots \}$. Then we also have the set of ordered pairs of natural numbers $\mathbb{N} \times \mathbb{N}$. If we partition $\mathbb{N} \times \mathbb{N}$ using the equivalence relation $(a, b) \sim (c, d)$ when $a + d = b + c$, then this equivalence relation partitions $\mathbb{N} \times \mathbb{N}$ out into subsets. You can think of each subset as containing pairs $(a, b), (c, d), \dots$ where $a - b = c - d = \ldots$. So each subset can be thought of as an integer by picking any pair $(a, b)$ out of it and thinking of it as the integer $a - b$. But this is a bit circular, since the point is that we imagine we didn't formally have the integers yet, but we had an intuition for them, and just formally constructed them from $\mathbb{N}$. We just came up with a new number system, and attached names to the subsets; for example, the subset that contains the pair $(1, 4)$ can be called $-3$. So suitable means it fulfils some sort of intuition we have about something we think exists in a real-world sense, or at least has some abstract reality. We can come up with silly, meaningless number systems which are technically equally valid, but mean less to human beings. "Suitable" isn't a mathematical term, it's a social and psychological one.
H: Period of derivative is the period of the original function Let $f:I\to\mathbb R$ be a differentiable and periodic function with prime/minimum period $T$ (it is $T$-periodic), that is, $f(x+T) = f(x)$ for all $x\in I$. It is clear that $$ f'(x) = \lim_{h\to 0}\frac{f(x+h)-f(x)}{h} = \lim_{h\to 0} \frac{f(x+T+h) - f(x+T)}{h} = f'(x+T), $$ but how to prove that $f'$ has the same prime/minimum period $T$? I suppose that there exists $\tilde T < T$ such that $f'(x+\tilde T) = f'(x)$ for all $x\in I$ but can't find the way to get a contradiction. AI: To see what happens, simply integrate (I will use $\tilde{T}$ instead of $T'$, since prime is being used for derivatives): \begin{align*} f'(x+\tilde{T}) &= f'(x)\\ \Rightarrow \int^y f'(x+\tilde{T})\,dx &= \int^y f'(x)\,dx \\ \Rightarrow \int^{y+\tilde{T}} f'(\tilde x)\,d\tilde x &= \int^y f'(x)\,dx \\ \end{align*} where we have substituted $\tilde x = x+\tilde{T}$. So we get $$ f(y+\tilde{T}) = f(y) + C $$ for some constant $C$. But we already know $f$ is periodic, so we must have $C = 0$ (otherwise iterating would make $f$ unbounded). Hence $f(y+\tilde{T}) = f(y)$, so $\tilde{T}$ is some integer multiple of $T$ (since by assumption, $T$ is the prime period of $f$). In particular, no period $\tilde{T}$ with $0 < \tilde{T} < T$ is possible, which is the desired contradiction: the prime period of $f'$ is exactly $T$.
H: CW-pairs are good pairs Hatcher uses in a proof the fact that every subcomplex of a CW-complex is a deformation retract of some neighborhood. In what way can I see this in the infinite-dimensional case? AI: Have you checked the appendix? I think there are more explanations there about the topology of CW complexes.
H: Is it always true that "max $\ge$ average + sigma"? Assume that for $i$ from $1,\ldots,N$, $x_i \ge 0$ and: $$\mathrm{avg} = \frac{\sum_i x_i}{N}$$ $$\sigma = \sqrt\frac{\sum_i{(x_i-\mathrm{avg})^2}}{N}$$ Is it true that: $$\max_i x_i \ge \mathrm{avg} + \sigma\text{ ??}$$ REFERENCE http://en.wikipedia.org/wiki/Average http://en.wikipedia.org/wiki/Variance http://en.wikipedia.org/wiki/Standard_deviation AI: No. As a counterexample, consider the set of numbers $\{0,6,6\}$: the mean is $4$, the variance is $8$, and the standard deviation is $\sqrt8 \approx 2.83$, so $\mathrm{avg}+\sigma \approx 6.83 > 6 = \max_i x_i$.
H: Let $G$ be the graph whose vertices are binary sequences of length 4, two vertices are adjacent if they have exactly 2 bits different. Is it planar? Given $G$, a graph whose vertices are binary sequences of length 4. Two vertices are adjacent in $G$ if and only if they differ by exactly two bits. Is it planar? Here's what I tried: I wrote out many examples trying to find a $K_5$ or $K_{3,3}$, but have not succeeded. Then I thought about the cube graph, which is built on sequences of length $3$ where two vertices are connected if and only if they differ in exactly $1$ bit, as I remember; but given that our $G$ has sequences of length $4$, this is not helpful at all. Any direction would be appreciated! Perhaps calculating the number of edges? AI: First problem: if they differ in exactly $2$ bits, then each vertex has degree $\binom{4}{2}=6$. Since any planar graph has a vertex of degree at most $5$, the graph cannot be planar. P.S. If you don't know this, you can use the fact that in any planar graph you have $$e \leq 3v-6 \,.$$ Your graph fails this relation. The proof of this result is pretty simple; it can be found in almost any textbook on graph theory. Second problem: if they differ in exactly $3$ bits. Hint: your graph is bipartite, since the endpoints of any edge have an odd and an even number of ones respectively. In any bipartite planar graph (more generally in any planar graph without triangles) you have $$e \leq 2v-4 \,.$$ P.S. This is probably easier: prove first that any planar graph without triangles has a vertex of degree at most $3$.
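Both the edge count and the non-planarity are easy to verify by machine; here is a sketch using networkx's planarity test:

```python
import networkx as nx
from itertools import product

# Vertices: 4-bit strings; edges: pairs differing in exactly 2 bits.
verts = list(product([0, 1], repeat=4))
G = nx.Graph()
G.add_nodes_from(verts)
for u in verts:
    for v in verts:
        if u < v and sum(a != b for a, b in zip(u, v)) == 2:
            G.add_edge(u, v)

print(G.number_of_edges())           # 48 > 3*16 - 6 = 42, so e <= 3v - 6 fails
is_planar, cert = nx.check_planarity(G)
print(is_planar)                     # False
```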
H: Simple generalized integral The integral to compute is $\displaystyle\int_0^\infty \frac{1}{3+x^2} \ \mathrm dx$. I know how to compute the indefinite integral of this function - I obtained: $$\frac{\sqrt{3}}{3} \arctan\left(\frac{x}{\sqrt{3}}\right).$$ But when I compute the definite integral it now gives me : $$\frac{\sqrt{3}}{3} \left(\lim\limits_{x \to \infty }\arctan\left(\frac{x}{\sqrt{3}}\right)-\arctan(0)\right).$$ Then I don't understand why my teacher writes that $\arctan(0)=0$ because it also can be equal to $\pi$, and more strangely, I don't know how $\lim\limits_{x \to \infty }\arctan\left(\frac{x}{\sqrt{3}}\right)=\pi/2$. Can someone explain how the limit is evaluated? Thank you for help ! EDIT : I only need to know how to compute the limit. AI: Actually, you don't really have to have $\arctan(0)=0$, you can have $\arctan(0)=\pi$ but your arctan function should be continuous. Hence if you decide to make your range of arctan such that $\arctan(0)=\pi$, you should have that $\lim_{x\rightarrow \infty}\arctan(x)=\frac{3\pi}{2}$, which will give the same answer as your teacher's.
H: Why is it linearly dependent when the linear combination is zero only with nonzero coefficients in 3D? Title says it all. I'm asking about the geometrical sense. I know it is linearly independent if the linear combination of the vectors is zero only when all the coefficients are zero, and correspondingly for dependent. Independent: $\sum a_n\mathrm{v}_n=0$ only for $a_n=0$. Dependent: $\sum a_n\mathrm{v}_n=0$ with not all $a_n=0$. For mathematical notation. I do understand it in 2D, but I really don't understand why this works in 3D. AI: Those are not clear statements of dependence and independence of the set $\{v_1,\dots v_n\}$. The set would be dependent if: There exist $\alpha_i$, not all of which are zero, such that $\sum \alpha_iv_i=0$ The set would be independent if it satisfies the negation, that there will not be such a set of nonzero coefficients: If $\sum \alpha_iv_i=0$, then all the $\alpha_i=0$ If all the $\alpha_i=0$ then $\sum \alpha_iv_i=0$ holds all the time, so it is not interesting! It is a special case that always works. A linearly independent set is special precisely because you can't get a combination to add up to zero unless you use all zeros (which will always work.) In any number of dimensions, linear independence expresses the idea that one vector is not in the span of the other vectors. For example, if $\sum \alpha_iv_i=0$ where at least one of the alphas is nonzero, (say for convenience, $\alpha_1$) then $v_1=-\sum_{i=2}^n \alpha_1^{-1}\alpha_iv_i$, and so $v_1$ can be generated by $v_2\dots v_n$. Then we could just throw $v_1$ out, since we know the other $v_i$ can already generate it. So when a set is linearly independent, it means that each member really does contribute to the vector space they generate. Each element adds something new that can't be produced by the other vectors.
H: Using natural numbers $1,2,...,n$, in how many ways can the number $n$ be formed from the sum of **one** or more smaller natural numbers? Using natural numbers $1,2,...,n$, in how many ways can the number $n$ be formed from the sum of one or more smaller natural numbers? I thought it would be an easy problem but I couldn't figure it out. Example: For $n = 4$, we have $4$, $1 + 3$, $2 + 2$, $1 + 1 + 2$, $1 + 1 + 1 + 1$ for a total of $5$. AI: The answer is complicated. You are looking for partitions - the link has a good bibliography. The exact formula is surprisingly complicated. Note that if "smaller natural numbers" is read strictly (so that $n$ by itself does not count, unlike in your example), your question involves subtracting 1 from the number of partitions as conventionally calculated.
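If you want the numbers rather than the formula, the standard dynamic-programming computation of the partition function $p(n)$ is short (a sketch; it counts $n$ itself as a partition, so subtract $1$ for the strict reading):

```python
def partitions(n):
    """Number of partitions p(n), via the standard coin-style DP."""
    ways = [1] + [0] * n
    for part in range(1, n + 1):          # allow parts 1..n (includes n itself)
        for total in range(part, n + 1):
            ways[total] += ways[total - part]
    return ways[n]

print([partitions(n) for n in range(1, 9)])  # [1, 2, 3, 5, 7, 11, 15, 22]
```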
H: Is There a Better Strategy for this Combination Scenario? An elevator containing five people can stop at any of seven floors. What's the probability that no two people get off at the same floor? Assume that the occupants act independently and that all floors are equally likely for each occupant. For the denominator, since each person has seven choices, there are $7^5$ ways that the passengers can occupy the 7 floors. Then, for the numerator, let the 5 passengers form one group, and 2 imaginary passengers form a second group. So there are 7! permutations, but since the order among the 5 passengers and the order among the 2 imaginary passengers do not matter, you have $ 7!\over 5!2!$ ways that the 5 passengers can occupy the floors such that each floor has only one or zero passengers. Altogether, you have $\binom{7}{5} / 7^5$. I am pretty sure that I got the numerator correct. It took me a long time to figure out the appropriate analogy; is there a more direct method to this question? Furthermore, why is the denominator $7^5$ and not $6^7$? In the context of this question, $6^7$ means that each floor can take 0, 1, ... 5 passengers. Thanks! AI: Note that in your denominator, you've allowed different arrangements of people on the same floors to count multiple times, i.e. Person A on Floor 1 and Person B on Floor 2 is counted differently to Person A on Floor 2 and Person B on Floor 1, hence your numerator should also allow different arrangements to be counted. So the numerator should be $^7P_5=\frac{7!}{2!}$ not $^7C_5=\binom{7}{5}$. The denominator wouldn't be $6^7$ as you claim because you've removed all dependency on the number of people. This method of counting would allow for anywhere between 0 and 35 people.
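Putting the corrected count together: $P = {}^7P_5/7^5 = 2520/16807 \approx 0.1500$. A short sketch computing this and checking it by simulation:

```python
from math import perm     # Python 3.8+
import random

# Exact value with the corrected numerator: 7P5 / 7^5.
print(perm(7, 5) / 7**5)  # 2520/16807 ≈ 0.1500

# Simulation: 5 independent uniform floor choices; count all-distinct outcomes.
trials = 100_000
hits = sum(len(set(random.choices(range(7), k=5))) == 5 for _ in range(trials))
print(hits / trials)      # close to 0.15
```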
H: cardinality of all real sequences I was wondering what the cardinality of the set of all real sequences is. A random search through this site says that it is equal to the cardinality of the real numbers. This is very surprising to me, since the cardinality of all rational sequences is the same as the cardinality of reals, and it seemed fairly intuitive to me that if cardinality of a set $A$ is strictly greater than the cardinality of the set $B$, then cardinality of $A^{\mathbb{N}}$ should be strictly greater than cardinality of $B^{\mathbb{N}}$. It turns out to be false. Some technical answers have appeared in this forum elsewhere but I do not understand them. As I am not an expert in this topic, could some one explain me in simple terms why this is happening? Also is the cardinality of all functions from reals to reals also the same as the cardinality of reals? AI: Identify $\mathbb R$ as the set of functions $f : \mathbb N \to \{ 0,1\} $. Then any sequence $\{ x_n \}$ becomes a sequence $\{f_n \}_n$ where $f_n : \mathbb N \to \{ 0,1\}$. But then, this is simply a function $g : \mathbb N \times \mathbb N \to \{ 0,1\}$: $$g(m,n) =f_n(m) \,.$$ This way you can construct a bijection from the sequences of real numbers to the set of functions from $\mathbb N \times \mathbb N \to \{ 0,1\}$. Now, since $\mathbb N \times \mathbb N$ and $\mathbb N$ have the same cardinality, you get a bijection from the sequences of real numbers to the set of functions from $\mathbb{N} \times \mathbb{N} \to \{ 0,1\}$, which is just $\mathbb R$.
H: Open and Closed Set in Zariski Topology I'm confused about the definition of closed and open sets in the Zariski topology. It is said that the sets $$V(I)=\{P \in \operatorname{Spec}(R)\mid I \subseteq P\}$$ are the closed sets in the Zariski topology. But it is said in James Munkres's Topology that a subset $U$ of $X$ is an open set of $X$ if $U$ belongs to the collection $\tau$. So assuming that the $V(I)$ are the closed sets of the Zariski topology on $\operatorname{Spec}(R)$, shouldn't the collection of $D(r)$, in which $$D(r)=\{P \in \operatorname{Spec}(R)\mid r \notin P\},$$ be the topology $\tau$ on $\operatorname{Spec}(R)$? But, in a lecture note about the Zariski topology www.math.kth.se/~laksov/courses/algebradr01/notes/rings5.pdf , proposition 5.3 to be precise, the one that is proved to be the topology $\tau$ is the collection $V(I)$ instead. Could somebody explain this to me? Thank you. AI: What's proved is that arbitrary intersections and finite unions of such sets are again sets of that form. This is exactly dual to the requirements of a topology - and so the complements of sets $V(I)$ are the open sets in the Zariski topology. Complements of $V(I)$ are not generally of the form $D(r)$, but they are unions of such sets - so we call the collection of $D(r)$ a basis of the topology.
H: Is perspective transform affine? If it is, why is it impossible to perspective a square by an affine transform, given by a matrix and shift vector? I'm a bit confused. I want to program a perspective transformation and thought that it is an affine one, but seemingly it is not. As an example, I want to perspective a square into a quadrilateral (as shown below), but it seems impossible to represent such a transform as a matrix multiplication+shift: 1) What I can't understand is that by definition an affine transform is one that preserves all straight lines. Can you provide an example of a straight line which is not preserved in this case? 2) How do I represent perspective transforms like this one numerically? Thank you. AI: There is no affine transformation that will do what you want. If two lines are parallel before an affine transformation then they will be parallel afterwards. You start with a square and want a trapezium. This is not possible. The best you can get is a parallelogram. You will need to move up a level and look at projective transformations. Affine transformations form a subset of the projective transformations. They are the ones that fix the line at infinity. (Parallel lines are thought to cross at infinity.) In local coordinates, a projective transformation is given by: $$(x,y) \longmapsto \left(\frac{ax+by+c}{gx+hy+k},\frac{dx+ey+f}{gx+hy+k}\right) $$ It is possible to find all of the constants by substituting and solving. I get: $$T : (x,y) \longmapsto \left( \frac{12x+3y}{4y+16} , \frac{3y}{y+4} \right) . $$
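To answer question 2) concretely: numerically one represents such a map by a $3\times 3$ homography matrix acting on homogeneous coordinates $(x,y,1)$, dividing by the last coordinate afterwards. A sketch applying the answer's $T$ (rewritten over the common denominator $4y+16$, so the middle row is $(0,12,0)$; the resulting quadrilateral of course depends on the figure the answer was solving for) to the unit square's corners:

```python
import numpy as np

# T(x, y) = ((12x + 3y)/(4y + 16), 12y/(4y + 16)) as a homography matrix.
H = np.array([[12.0,  3.0,  0.0],
              [ 0.0, 12.0,  0.0],
              [ 0.0,  4.0, 16.0]])
corners = np.array([[0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
mapped = (H @ corners.T).T
print(mapped[:, :2] / mapped[:, 2:])   # images of the square's corners
```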
H: Green functions and Sturm-Liouville problem I have the following problem. Find the Green function in the problem of Sturm-Liouville: $$Ly=-y''.$$ $$y'(0)=y(0),\quad y'(1)+y(1)=0$$ I can not find the eigenvalues and the eigenfunctions, but how do I find the Green's function? The solution in the book is $\frac{1}{3}\left\{\begin{matrix} (x+1)(2-\xi ), 0\leq x \leq \xi & \\ (\xi +1)(2-x), \xi\leq x\leq 1 & \end{matrix}\right.$ Thanks for all and any help AI: Recall that I did find your eigenvalues and eigenfunctions, as well as the Green's function in terms of an expansion in those eigenfunctions. To find the Green's function directly in the form you want, note that you are solving $$\frac{d^2}{d x^2} G(x,x') = -\delta(x-x')$$ The general solution to this equation may be immediately written down as $$G(x,x') = \begin{cases}A x + B & 0 \le x\le x'\\ C x+D & x' < x \le 1 \end{cases}$$ The constants are determined through boundary conditions, as well as conditions imposed on the Green's function itself through the differential equation. The boundary conditions at $x=0$ imply that $A=B$. The boundary conditions at $x=1$ imply that $$C+C+D = 0 \implies D=-2 C$$ We need two other conditions to completely determine the Green's function. First, we require that $G$ be continuous at $x=x'$: $$A x'+B = C x'+D$$ The last condition is a little tricky. We integrate the diff. equation with respect to $x$ in a small neighborhood about $x=x'$; note that the integral of a delta function that goes through $x=x'$ is $1$. Thus, we have $$\lim_{\epsilon \to 0}\left(\left[\frac{d}{dx} G(x,x')\right]_{x=x'+\epsilon}-\left[\frac{d}{dx} G(x,x')\right]_{x=x'-\epsilon}\right) = -1$$ This is saying that the derivative of $G$ is discontinuous at $x=x'$. What this means for our solution is that $$C-A=-1$$ You then solve: four equations for four unknowns. Actually, these equations are simple enough, and you will find that $$A = B = \frac13(2-x')$$ $$C=-\frac13(1+x')$$ $$D=\frac{2}{3}(1+x') $$ The sought result then follows: $$G(x,x') = \begin{cases}\frac13 (2-x')(1+x) & 0 \le x\le x'\\ \frac13 (1+x')(2-x) & x' < x \le 1 \end{cases}$$
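All four determining conditions can be verified symbolically in a few lines (a sketch, with $x'$ written as xp):

```python
from sympy import symbols, diff, simplify, Rational

x, xp = symbols('x xp')
G_left  = Rational(1, 3)*(2 - xp)*(1 + x)   # branch for 0 <= x <= x'
G_right = Rational(1, 3)*(1 + xp)*(2 - x)   # branch for x' < x <= 1

# continuity at x = x'
print(simplify((G_left - G_right).subs(x, xp)))                               # 0
# jump of dG/dx across x = x' equals -1
print(simplify(diff(G_right, x).subs(x, xp) - diff(G_left, x).subs(x, xp)))   # -1
# boundary conditions: G_x(0) = G(0) and G_x(1) + G(1) = 0
print(simplify(diff(G_left, x).subs(x, 0) - G_left.subs(x, 0)))               # 0
print(simplify(diff(G_right, x).subs(x, 1) + G_right.subs(x, 1)))             # 0
```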
H: Concrete Mathematics Summation Question I'm sorry if this question is too novice, but I am just beginning discrete math. I've been working through the book Concrete Mathematics (Graham, Knuth, Patashnik) and I reached a double summation that has me very confused. I've been trying to work it out, and I think I have a solution but I'm not sure if it is the correct way to solve it. The question comes from chapter 2 section 4 and it goes as follows: \begin{equation} S = \displaystyle\sum\limits_{1 \le j < k \le n}{(a_k - a_j)(b_k - b_j)} \end{equation} The authors then go on to say "We have symmetry when $j$ and $k$ are interchanged:" and write the new sum: \begin{equation} S = \displaystyle\sum\limits_{1 \le k < j \le n}{(a_j - a_k)(b_j - b_k)} = \displaystyle\sum\limits_{1 \le k < j \le n}{(a_k - a_j)(b_k - b_j)} \end{equation} I understand how they can get to the 2nd sum, because all you're doing is changing index names. But how do the authors get from the 2nd sum to the 3rd sum? Or how do they use their previously mentioned "Rocky Road" formula to achieve this result? Thanks, EDIT: Sorry for not making the Rocky Road formula clear. The Rocky Road formula is as follows: \begin{equation} \displaystyle\sum\limits_{j \in J}{\displaystyle\sum\limits_{k \in K(j)}{a_{j,k}}} = \displaystyle\sum\limits_{k \in K'}{\displaystyle\sum\limits_{j \in J'(k)}{a_{j,k}}} \end{equation} AI: All we're doing in moving from the second sum to the third sum is essentially multiplying each bracket by $-1$, but since there are two brackets, the two factors of $-1$ cancel out, so the sum is unchanged.
H: What does the notation $[V]^2$ mean (in graph theory)? In graph theory, a graph is a pair $G=(V,E)$ of sets satisfying $E\subseteq[V]^2$. But what is $[V]^2$? I suppose that it is the same as $V\times V=V^2$, but I do not know where the square brackets come from. Thanks in advance! AI: In set theory (and graph theory, apparently), $[X]^n$ denotes the set of subsets of $X$ that have precisely $n$ elements. Symbolically: $$[X]^n = \{Y \subseteq X: |Y| = n\}$$ In combinatorics, the notation $\dbinom X n$ is also seen for $[X]^n$.
H: $f$ continuously differentiable implies even or odd? Suppose that $f(x)$ is continuously differentiable for all $x\in\mathbb{R}$. Then $f$ is continuous and Riemann integrable, so that $\int f(x)\,dx = F(x)+c$. My question is: Can we conclude that $F$ must be either odd or even? In other words, can we rule out the possibility that it is neither? I would like to use this fact in a proof if it is so, but I haven't been able to prove it or provide a counterexample. AI: Surely $f(x)=e^x$ is a counterexample.
H: How to isolate $v_m$? Note: I am not asking anything pertaining to the physics of this question; only the mathematics. The physics is just given as a context to the problem for those interested, as opposed to simply saying "simplify this equation". The double ball drop problem is as follows: A ball of mass $m$ is placed on top of a ball of mass $M$ (where $m < M$), and the balls are dropped simultaneously from some height $h$. When the balls hit the floor, the ball on top will shoot upwards. What is the velocity of this ball at the moment it shoots upwards? Here is a picture of accompany this description: I have attempted to solve this problem, but am stuck with the following equation: $$\sqrt{\frac{2gh(m + M) - mv_m^2}{M}} = \frac{(m + M)\sqrt{2gh} - mv_m}{M}$$ What I ask is to help isolate $v_m$. I attempted to using the quadratic equation, but that resulted in a very challenging equation that I am unable to simplify. I feel as though there may be an easier way to solve for $v_m$. Any help is appreciated. AI: \begin{array}{ccc} \sqrt{\frac{2gh(m + M) - mv_m^2}{M}} &=& \frac{(m + M)\sqrt{2gh} - mv_m}{M} \\ \\ \frac{2gh(m + M) - mv_m^2}{M} &=& \frac{((m + M)\sqrt{2gh} - mv_m)^2}{M^2} \\ \\ (2gh(m + M) - mv_m^2)M &=& ((m + M)\sqrt{2gh} - mv_m)^2 \\ \\ (2gh(m + M) - mv_m^2)M &=& 2gh(m + M)^2 - 2\sqrt{2gh}(m + M)mv_m + m^2v_m^2 \\ \\ 2gh(m + M)M - mMv_m^2 &=& 2gh(m + M)^2 - 2\sqrt{2gh}(m + M)mv_m + m^2v_m^2 \\ \\ 2gh(m + M)M &=& 2gh(m + M)^2 - 2\sqrt{2gh}(m + M)mv_m + (m^2+mM)v_m^2 \\ \\ 2gh(m + M)M &=& 2gh(m + M)^2 - 2\sqrt{2gh}(m + M)mv_m + m(m+M)v_m^2 \\ \\ 2ghM &=& 2gh(m + M) - 2\sqrt{2gh}mv_m + mv_m^2 \\ \\ 0 &=& 2ghm - 2\sqrt{2gh}mv_m + mv_m^2 \\ \\ 0 &=& m(2gh - 2\sqrt{2gh}v_m + v_m^2) \\ \\ 0 &=& m(\sqrt{2gh} - v_m)^2 \\ \\ \end{array} Hence $v_m = \sqrt{2gh}$.
H: The nested self-composition of $f(x) = \frac{\sqrt3}2x+\frac12\sqrt{1-x^2}$ The function $f(x)$ is defined, for $|x|\leqslant1$ by $$f(x)=\frac{\sqrt 3}{2}x+\frac{1}{2}\sqrt{1-x^2}.$$ Find an expression for $$f^n(x)=\underbrace{f \circ f \circ \cdots \circ f(x)}_{\text{n times}},$$where $n\in\mathbb{Z^+}$. Now what I did was to first find $f^2(x)$ and my intention was to find $f^3(x)$ and another few cases so as to recognise a pattern. However, $f^2(x)$ is actually very complicated and does not simplify too much. I am looking for hints on how to approach the problem and not complete solutions. Thanks in advance. AI: If you note that $x$ may be written as $\cos{\phi}$, then $$f(\cos{\phi}) = \cos{\left(\phi-\frac{\pi}{6}\right)}$$ On the next iteration, $$f[f(\cos{\phi})] = f\left [ \cos{\left(\phi-\frac{\pi}{6}\right)} \right ] = \cos{\left(\phi-\frac{2 \pi}{6}\right)} $$ I hope you see the pattern... EDIT This answer is not totally correct - it is merely formally correct, as pointed out by @achille hui. The problem lies with the evenness of the cosine. The iteration for $\phi=0$ produces an incorrect result. In fact, it is incorrect for $0 \le \phi \le \pi/6$. What to do? In the $k$th iteration, consider the sign of $\phi-k \pi/6$: $$f^{(k)}(\cos{\phi}) = \begin{cases}\cos{\left[\phi - (1-(-1)^k)\frac{\pi}{12} \right]} & 0 \le \phi \le k \frac{\pi}{6} \\ \cos{\left(\phi-k \frac{\pi}{6}\right)} & k \frac{\pi}{6} \lt \phi \le \pi \end{cases}$$ Repeat the process for $k > 6$.
H: Find the Jordan form of a given matrix Let $n,k\in \mathbb N$ and $\lambda \in F$. Find the rank and the Jordan form of the matrix $$A={ J }_{ n }{ (\lambda ) }^{ k }$$ AI: For $\lambda\neq 0$: $J_n(\lambda)^k$ is an upper-triangular matrix whose diagonal elements are all $\lambda^k$ (and whose first superdiagonal entries are $k\lambda^{k-1}\neq 0$). Thus its rank is $n$, and since $A-\lambda^k I$ is nilpotent of index $n$, its Jordan form is the single block $J_n(\lambda^k)$. For $\lambda=0$ the answer is different: $J_n(0)^k$ is the $k$-fold shift matrix, its rank is $\max(n-k,0)$, and its Jordan form is a direct sum of smaller nilpotent blocks rather than a single block.
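A quick SymPy sketch checking the $\lambda\neq 0$ case for, say, $n=3$, $\lambda=2$, $k=2$:

```python
from sympy import jordan_cell

J = jordan_cell(2, 3)     # the Jordan block J_3(2)
A = J**2
print(A.rank())           # 3
P, Jform = A.jordan_form()
print(Jform)              # the single block J_3(4), since 2**2 = 4
```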
H: Bound on Expectation of a convex function of a Random variable My friend asked me the following question, which I at first thought was simple and straightforward: If $X$ is an integrable random variable and $g$ is a convex function(all real valued), then is it true that $\forall a>0$ $$E[g(X)] < \infty \Rightarrow E[g(aX)] < \infty \quad?$$ Question: My guess is that this would be true although when I tried to prove it, I got stuck. Let me describe my attempt: $$E[g(aX)] = \int_{u \in \mathbb{R}}g(au)dF_X(u)$$ Put $au=t$, then you get $$\frac{1}{a}\int_{t \in \mathbb{R}}g(t)dF_X(t/a)$$ If the $dF_X(t/a)$ was $dF_X(t)$, I'd be done. But I'm probably a step away. Additionally I have not used convexity anywhere so it's probably superfluous. I'd appreciate it if someone could shed some light on this matter. Kindly request clarifications if necessary. AI: Counter-example: suppose that $X$ has exponential distribution with $E[X]=1$, i.e. $F_X(x)=0$ when $x\le 0$ and $F_X(x)=1-e^{-x}$ when $x>0$. Then $$E[g(X)]=\int_0^{+\infty}g(x)e^{-x}dx.$$ For $g(x)=e^{\frac{x}{2}}$, $g$ is convex, $E[g(X)]=2$ is finite, but when $a\ge 2$, $E[g(aX)]=+\infty$.
H: Find Gross from Net and Percentage I would like to know if a simple calculation exists that would allow me to determine how much money I need to gross to receive a certain net amount. For example, if my tax rate was 30%, and my goal was to take home $700, I would need to have a gross salary of $1000. AI: Let $x$ be the gross amount needed, $y$ be the desired net amount, and $r$ be the rate of taxation. Then, $$x\times\left(1-\frac{r}{100}\right) = y\\\implies x=\frac{y}{1-\frac{r}{100}} = \frac{100y}{100-r}$$
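A trivial sketch of the inversion as a helper function:

```python
def gross_needed(net, tax_rate_percent):
    """Invert net = gross * (1 - r/100) to find the required gross amount."""
    return 100 * net / (100 - tax_rate_percent)

print(gross_needed(700, 30))  # 1000.0
```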
H: Solve these equations simultaneously Solve these equations simultaneously: $$\eqalign{ & {8^y} = {4^{2x + 3}} \cr & {\log _2}y = {\log _2}x + 4 \cr} $$ I simplified them first: $\eqalign{ & {2^{3y}} = {2^{2\left( {2x + 3} \right)}} \cr & {\log _2}y = {\log _2}x + {\log _2}{2^4} \cr} $ I then had: $\eqalign{ & 3y = 4x + 6 \cr & y = x + 16 \cr} $ Solving: $\eqalign{ & 3\left( {x + 16} \right) = 4x + 6 \cr & 3x + 48 = 4x + 6 \cr & x = 42 \cr & y = \left( {42} \right) + 16 \cr & y = 58 \cr} $ This is the wrong answer. I would like to understand where I went wrong so I don't make the same mistake again. Your help is greatly appreciated, thanks! AI: $$\log _2y = {\log _2}x + \log_22^4$$ $$\log m+\log n=\log (mn)$$ $$\log _2y = \log _2({x\cdot 2^4})$$ $$y=16x$$ This will be the second equation, so the system is $3y = 4x + 6\;\;$ and $y=16x$. Solving these: $3\times16x = 4x + 6\implies44x=6\implies x=\dfrac3{22}\;,\;y=\dfrac{24}{11}$
H: $f : \mathbb{R} \to \mathbb{R}$ be injective then $f^{ −1} (\mathbb{Q} \cap [0, 1])$ is Let $f : \mathbb{R} \to \mathbb{R}$ be injective; then $f^{ −1} (\mathbb{Q} \cap [0, 1])$ is (a) measurable and its measure is 0. (b) measurable and its measure is 1. (c) measurable and its measure is ∞. (d) need not be measurable. I am totally clueless, please help. I know that $[0,1]$ and $\mathbb{Q}$ are Borel measurable and hence so is their intersection; they are also Lebesgue measurable. AI: Since $\Bbb Q\cap [0,1]$ is countable, and $f$ is one-to-one, you have that $f^{-1}(\Bbb Q\cap[0,1])$ is countable as well. Hence: measurable and of measure ...
H: Small doubt about Dirichlet's problem Find the solution of the Dirichlet problem: $\Delta u(r,\phi)=0, r<1, u(1,\phi)=f(\phi)$ where $x=r\cos\phi$ and $y=r\sin\phi$ and $f(\phi)=\cos^2(\phi)$ I start by doing the following: Enter the polar coordinates $x=r\cos\phi$ and $y=r\sin\phi$. Deriving: $u_r=u_x\cos\phi+u_y\sin\phi$ (1) $u_\phi=-ru_x\sin\phi+ru_y\cos\phi$ (2) $u_{rr}=u_{xx}\cos^2(\phi)+2u_{xy}\cos\phi \sin\phi+u_{yy}\sin^2\phi$ (3) $u_{\phi\phi}=r^2u_{xx}\sin^2\phi-ru_{x}\cos\phi-2r^2u_{xy}\sin\phi \cos\phi+r^2u_{yy}\cos^2\phi-ru_y\sin\phi$ (4) Adding equation (3) and equation (4) divided by $r^2$, we obtain $u_{rr}+\frac{1}{r^2}u_{\phi\phi}=u_{xx}+u_{yy}-\frac{1}{r}u_x\cos\phi-\frac{1}{r}u_y\sin\phi$ Using (1), we obtain $\Delta u=\frac{1}{r} (ru_r)_r+\frac{1}{r^2}u_{\phi\phi}$ Therefore in polar coordinates, the problem takes the form: $\frac{1}{r}(ru_r(r,\phi))_r+\frac{1}{r^2}u_{\phi\phi}(r,\phi)=0$ with $u(1,\phi)=f(\phi)$. My biggest problem is with the conclusion: the book gives the solution as $1+r^2\cos(2\phi)$, but to me the solution is $\frac{1+r^2\cos(2\phi)}{2}$. Thanks for all and any help AI: Hint: Use the facts that $$u(1,\phi)=f(\phi)=\cos^2(\phi)=\frac{1+\cos(2\phi)}{2}$$ and $$u(r,\phi)=\frac{1}{2}a_0+\sum_{n=1}^\infty r^n(a_n\cos(n\phi)+b_n\sin(n\phi)).$$ For a derivation of the latter formula, see here.
H: Proof that binomial coefficients are integers - combinatorial interpretation For any integers $0 \le k \le n$ there is an injective group homomorphism $$S_k \times S_{n-k} \rightarrow S_n$$ such that a tuple $(\sigma, \tau)$ permutes $\{1,...,n\}$ by letting $\sigma$ act on $\{1,...,k\}$ and $\tau$ act on $\{k+1,...,n\}$. By Lagrange's theorem, $k!(n-k)! = |S_k \times S_{n-k}|$ divides $|S_n| = n!$, so $\binom{n}{k} = \frac{n!}{k!(n-k)!}$ is an integer. In the comments of some question from several months ago (that I can't find now) it was asked whether we can find a combinatorial interpretation of this result - that the cosets of $S_k \times S_{n-k}$ in $S_n$ should correspond naturally to ways of choosing $k$ elements from a set of $n$. To me there is no obvious correspondence and I would like to know if anyone has an interpretation in that sense. AI: Let $X$ be the set of $k$-element subsets of $\{1, \ldots, n\}$. $S_n$ acts on $X$ (applying a permutation to $k$ distinct numbers yields another $k$ distinct numbers) and it should be clear that this action is transitive. Thus the orbit-stabilizer relation says that for any choice $x \in X$ there is a natural bijection $X \simeq S_n/\operatorname{Stab}(x)$, where $\operatorname{Stab}(x)$ consists of the permutations in $S_n$ that fix $x$. If we let $x = \{1, \ldots, k\} \in X$ then $\operatorname{Stab}(x) = S_k \times S_{n - k}$, therefore $X \simeq S_n/(S_k \times S_{n - k})$.
H: Let $f : [0, 1] \to [0, 1]$ be strictly increasing then Let $f : [0, 1] \to [0, 1]$ be strictly increasing. Then (a) f is continuous. (b) If f is continuous then f is onto. (c) If f is onto then f is continuous. (d) None of the above. It is certainly injective, but right now no counterexample comes to mind. Could anyone help me? AI: Monotonic functions can only have jump discontinuities. If $f$ is onto and monotonic, we cannot have jump discontinuities, so it must be continuous. There are counterexamples to the others.
H: If $f$ is an entire function such that $f(iy) = \exp(iy)$ where $0 \leq y \leq 1$. Is $f(x+iy) = \exp(x+iy)$? $(1)$ If $f$ is an entire function such that $f(iy) = \exp(iy)$ where $0 \leq y \leq 1$. Then, is $f(x+iy) = \exp(x+iy)$ for every $x$ and every $y$? $(2)$ If $f$ is an entire function such that $f(iy) = iy$ where $0 \leq y \leq 1$. Then: $\qquad (a)$ $f(z) = y$ for every $x$ and every $y$ $\qquad (b)$ $f(z) = z$ for every $x$ and for every $y$ $\qquad(c)$ $f(z) = z$ only for $0 \leq y \leq 1$ $\qquad (d)$ $f(z) = z$ whenever $0 \leq y \leq 1$ My thoughts: I think analytic continuation will play a part here. So, this is how I visualise this: If we draw a circular neighborhood at $z = i$ then, since $f$ is analytic, it must display the same limit behavior everywhere in that neighborhood. So, $f(z) = e^z$ everywhere? Regarding the second question too, I think the same: $f(z) = z$ everywhere? However, I know that my argument is not concrete enough. Could you tell me a more definite and concrete way to tackle this kind of problem? Thanks. AI: This is a consequence of the identity theorem for holomorphic functions: If $f,g$ are holomorphic on a domain $D$ (a connected open set) and agree on infinitely many points (here, the line segment $\{iy\mid 0\le y\le 1\}$), and these points have an accumulation point in $D$, then $f=g$.
H: For $X_i$ iid $ Var( \sum_{i=1}^n X_i )= \sum_{i=1}^n Var (X_i)$? My contention is that it's true. I thought of two ways of proving it, unsure which one is better (and/or correct): Suppose $X_i \sim N(\mu, \sigma^2)$, then it's moment generating function (MGF) is $$\Psi_{X_i}(t) = \exp\{\mu t + \frac12 \sigma^2 t^2\}$$ If we've got the iid property, then MGF of the sum is equal to the product of the MGF's which is: $$\Psi_{\sum_{i=1}^n X_i}(t) = \prod_{i=1}^n \exp\{\mu t + \frac12 \sigma^2 t^2\} = \exp\{n \mu t + n \frac12 \sigma^2 t^2\} $$ Which gives us that $\sum_{i=1}^n X_i \sim N(n \mu, n \sigma^2)$. It should probably hold in general (can't think of an argument, why?) From the definition of variance \begin{align} Var( \sum_{i=1}^n X_i ) &= \mathbb{E} \Bigg (\sum_{i=1}^n X_i- \mathbb{E} \sum_{i=1}^n X_i \Bigg)^2\\ &= \mathbb{E} \Bigg( (\sum_{i=1}^n X_i)^2- 2 \sum_{i=1}^n X_i \sum_{i=1}^n\mathbb{E} X_i + (\sum_{i=1}^n\mathbb{E} X_i)^2 \Bigg)\\ &=\sum_{i=1}^n\mathbb{E} X_i^2- (\sum_{i=1}^n\mathbb{E} X_i)^2\\ &=n \mathbb{E}X_1^2 - n^2 (\mathbb{E}X_1)^2 \end{align} since $\mathbb{E}[\sum_{i \neq j} X_i X_j ]=0 $ and we have the iid property, i.e.: $\mathbb{E}X_1 = \mathbb{E}X_i \space \forall i$. Whereas \begin{align} \sum_{i=1}^n Var(X_i) &=\sum_{i=1}^n \mathbb{E} \Bigg( X_i - \mathbb{E}X_i \Bigg)^2 \\ &= \sum_{i=1}^n \mathbb{E} \Bigg( X_i^2- 2X_i \mathbb{E}X_i + (\mathbb{E}X_i)^2 \Bigg)\\ &= n \mathbb{E}X_1^2 - n (\mathbb{E}X_1)^2 \end{align} I can't seem to get the $n^2$ here the same way I got (above). This would suggest that my contention is wrong. Is it or did I make a mistake in the calculations? AI: $$\begin{align} \operatorname{var}\left( \sum_{i=1}^n X_i \right) &= \mathbb{E} \Bigg (\sum_{i=1}^n X_i- \mathbb{E} \sum_{i=1}^n X_i \Bigg)^2\\ &= \mathbb{E} \Bigg (\sum_{i=1}^n (X_i- \mathbb{E}X_i) \Bigg)^2\\ &= \sum_{i=1}^n \mathbb{E} (X_i- \mathbb EX_i)^2 + \mathbb E\sum_{i=1}^n ~\sum_{j=1, j \neq i}^n (X_i -\mathbb{E} X_i) (X_j -\mathbb{E} X_j)\\ &=\sum_{i=1}^n\operatorname{var}(X_i)+ \sum_{i=1}^n~ \sum_{j=1, j \neq i}^n \mathbb E(X_i -\mathbb{E} X_i)\mathbb E(X_j -\mathbb{E} X_j)\\ &= \sum_{i=1}^n\operatorname{var}(X_i) \end{align}$$
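If it helps to see the answer's conclusion numerically, here is a small Monte Carlo sketch (the distribution and sample sizes are arbitrary choices): for iid variables the variance of the sum grows like $n$, not $n^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10, 200_000
# iid Exponential(1) variables: Var(X_i) = 1, so Var(sum) should be about n = 10
sums = rng.exponential(1.0, size=(trials, n)).sum(axis=1)
print(sums.var())  # close to 10, not 100
```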
H: Simple Differential Equation We have $$\frac{dx}{dt}=x-y-1$$ $$\frac{dy}{dt}=x-y+1$$ Express $y$ in terms of only $x$ (i.e. no $t$ term). My professor gave me the hint "use $\frac{d}{dt}(x-y)$", but I don't know how this is supposed to help me. AI: Use both $x-y$ and $x+y$. I get $$\frac{d}{dt}(y-x) = 2 \implies y-x=2 t+C,$$ and we may take $C=0$ by shifting the origin of time (the system is autonomous). Then $$\frac{d}{dt}(y+x) = 2 (x-y) = -4 t \implies y+x=-2 t^2 +C'$$ Taking $C'=0$ as well picks out one particular trajectory; some algebra then gives $$t^2+t+x=0 \implies t=-\frac12 \pm \frac12 \sqrt{1-4 x}$$ Then $$y=t-t^2 = x-1\pm\sqrt{1-4 x}.$$ For the general solution, the surviving constant of integration sits inside the square root: $y=x-1\pm\sqrt{1-4x+K}$.
H: Prove $e^{x+y}=e^{x}e^{y}$ by using Exponential Series In order to show $e^{x+y}=e^{x}e^{y}$ by using Exponential Series, I got the following: $$e^{x}e^{y}=\Big(\sum_{n=0}^{\infty}{x^n \over n!}\Big)\cdot \Big(\sum_{n=0}^{\infty}{y^n \over n!}\Big)=\sum_{n=0}^{\infty}\sum_{k=0}^n{x^ky^n \over {k!n!}}$$ But, where should I go next to get $e^{x+y}=\sum_{n=0}^{\infty}{(x+y)^n \over n!}$. Thanks in advance. AI: You should have gotten $$ \sum_{n=0}^\infty \sum_{k=0}^n \frac{x^k}{k!}\cdot\frac{y^{n-k}}{(n-k)!}. $$ After that, you can write $$ \sum_{n=0}^\infty \sum_{k=0}^n \frac{1}{n!} \cdot \frac{n!}{k!(n-k)!} x^k y^{n-k}. $$ Since the factor $\dfrac{1}{n!}$ does not depend on $k$, you can pull it out: $$ \sum_{n=0}^\infty \left(\frac{1}{n!} \sum_{k=0}^n \frac{n!}{k!(n-k)!} x^k y^{n-k}\right). $$ Then you have $$ \sum_{n=0}^\infty \frac{1}{n!} (x+y)^n. $$ How does $\displaystyle\left(\sum_{n=0}^\infty b_n\right) \left(\sum_{m=0}^\infty c_m\right)$ become $\displaystyle\sum_{n=0}^\infty \sum_{m=0}^\infty (b_n c_m)$? And how does $\displaystyle\sum_{n=0}^\infty \sum_{m=0}^\infty a_{n,m}$ become $\displaystyle\sum_{n=0}^\infty \sum_{k=0}^n a_{k,n-k}$? In the first sum above, notice that $\sum_{m=0}^\infty c_m$ does not depend on $n$ so it can be pushed inside the other sum and become $$ \sum_{n=0}^\infty \left( b_n \sum_{m=0}^\infty c_m \right). $$ Then the factor $b_n$ does not depend on $m$, so that expression becomes $$ \sum_{n=0}^\infty \sum_{m=0}^\infty (b_n c_m). $$ That answers the first bolded question above. Next consider the array $$ \begin{array}{ccccccccc} a_{0,0} & a_{0,1} & a_{0,2} & a_{0,3} & \cdots \\ a_{1,0} & a_{1,1} & a_{1,2} & a_{1,3} & \cdots \\ a_{2,0} & a_{2,1} & a_{2,2} & a_{2,3} & \cdots \\ a_{3,0} & a_{3,1} & a_{3,2} & a_{3,3} & \cdots \\ \vdots & \vdots & \vdots & \vdots \end{array} $$ The sum $\displaystyle\sum_{n=0}^\infty \sum_{k=0}^n a_{k,n-k}$ runs down diagonals: $$ \begin{array}{ccccccccc} & & & & & & & & n=3 \\ & & & & & & & \swarrow \\ a_{0,0} & & a_{0,1} & & a_{0,2} & & a_{0,3} & & \cdots \\ \\ & & & & & \swarrow \\ a_{1,0} & & a_{1,1} & & a_{1,2} & & a_{1,3} & & \cdots \\ \\ & & & \swarrow \\ a_{2,0} & & a_{2,1} & & a_{2,2} & & a_{2,3} & & \cdots \\ & \swarrow \\ a_{3,0} & & a_{3,1} & & a_{3,2} & & a_{3,3} & & \cdots \\ \vdots & & \vdots & & \vdots & & \vdots \end{array} $$
H: Why does the graph of $x^n$ only have an imaginary component if $n$ is not an integer? I was playing around graphing equations and noticed that only non-integer exponents of x yield imaginary graphs. Try it on Wolfram Alpha. Why is that? AI: An imaginary component is a component containing $i=\sqrt{-1}=(-1)^{\frac{1}{2}}$. Since $(x^{b})^{c}=x^{bc}$, any exponent in the form $x^{\frac{p}{2q}}$, where $x$ is real and $\frac{p}{2q}$ is a reduced fraction so $p$ is not even, can be rewritten as $(x^{\frac{1}{2}})^{\frac{p}{q}}=((\sqrt x)^{p})^{\frac{1}{q}}$. When $x$ is negative this can be seen to be $(i\sqrt{|x|})^{\frac{p}{q}}=(i)^{\frac{p}{q}}(|x|)^{\frac{p}{2q}}$. Since $|x|$ is positive and real, $(|x|)^{\frac{p}{2q}}$ is real, and since $\frac{p}{q}$ is non-even, $(i)^{\frac{p}{q}}$ has an imaginary component. So the graph of $x^{\frac{p}{2q}}$ will have an imaginary component when $x$ is negative. As Fly by Night pointed out in a comment, other fractional powers can indeed have imaginary parts because there are $n$ roots of unity for $x^{\frac{1}{n}}$. Many graphing programs will ignore these if you have an odd root, but a quick check shows that Wolfram Alpha is not among them and will include the information about the other roots of unity. An integer power will always have any fractional powers exactly canceled out, so that any roots of unity are taken back to unity, that is $1$, by exponentiation. Thus a graph of an integer power of real $x$ should not have an imaginary component.
H: Probability of winning the game "1-2-3" Ok, game is as follow, with spanish cards (you can do it with poker cards using the As as a 1) You shuffle, put the deck face bottom, and start turning the cards one by one, saying a number each time you turn a card around ---> 1, 2, 3; 1, 2, 3; etc. If when you say 1 a 1 comes out, you lose, same with 2 and 3. If you finish the deck without losing, you win. I know some basics of probabilities, but is there a way to calculate the probability of winning the game, given a random shuffled deck? AI: For $i,j\in\{1,2,3\}$, let $a_{i,j}$ denote the number of $i$ cards being dealt with number $j$ spoken. We have $\sum_j a_{i,j}=4$ and for a winning game $a_{i,i}=0$. The number of winning positions for a given $(a_{i,j})$ is $$\frac{18!}{a_{2,1}!a_{3,1}!(18-a_{2,1}-a_{3,1})!}\cdot\frac{17!}{a_{1,2}!a_{3,2}!(17-a_{1,2}-a_{3,2})!}\cdot\frac{17!}{a_{1,3}!a_{2,3}!(17-a_{1,3}-a_{2,3})!}. $$ We need to sum this over all $(a_{i,j})$ and divide by the total count $$ \frac{52!}{4!4!4!40!}.$$ (Actually, we need just let $a_{1,2}, a_{2,3}, a_{3,1}$ run from $0$ to $4$ and this determines $a_{1,3}=4-a_{1,2}$ etc.) The final result is $$p=\frac{58388462678560}{7151046448045500}=\frac{24532967512}{3004641364725}\approx 0.008165 $$ (I just noted that Harold has performed a Monte Carlo simulation with matching result)
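Since the answer cross-checks against a Monte Carlo simulation, here is a minimal sketch of one in Python (ranks $4$ through $13$ stand in for all the cards that can never match a spoken number):

```python
import random

def wins():
    deck = [rank for rank in range(1, 14) for _ in range(4)]  # 52 cards
    random.shuffle(deck)
    # the number spoken at position i (0-indexed) is (i % 3) + 1
    return all(card != (i % 3) + 1 for i, card in enumerate(deck))

trials = 200_000
print(sum(wins() for _ in range(trials)) / trials)  # roughly 0.0082
```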
H: The fundamental group functor is not full. Counterexample? Subcategories with full restriction? Anyone aware of a nice counterexample to "The fundamental group functor is full?" (Which is...false, right?) and are there a nontrivial subcategories on which its restriction is full? I.e. Can you think of an example of topological spaces $X$ and $Y$ such that there is a group homomorphism $\phi: \pi_1(X, x_0) \to \pi_1 (Y,y_0)$ such that $\phi$ is not induced by any continuous map $f: X \to Y$ ? Sometimes every homomorphism is induced by a continuous map: For example every automorphism of $\pi_1(S^1)$ is induced by a map $S^1 \to S^1$. Are all automorphisms of fundamental groups induced by continuous maps? What conditions on spaces $X$ and $Y$ ensure every fundamental group homomorphism is induced by a continuous map? AI: Consider $X = \mathbb R P^3$ and $Y = \mathbb R P^2$. They both have $\pi_1$ equal to $\mathbb Z/2\mathbb Z$, but I don't think there is any continuous map $X \to Y$ which induces the identity on $\pi_1$s.
H: Proof that a normal subgroup $N$ of $G$ is the identity coset in the group of cosets of $N$ I am not sure how to prove the following: Let $N$ be a group. Prove that for any $n\in N, nN=N$. (Or maybe the following, but I'm not sure it's correct: $n\in N$ $\Leftrightarrow nN = N$) I haven't been asked to prove this directly, but I seem to run into many proofs that require the above, yet don't bother to do any explanation regarding it (So I assumed the above fact is trivial). Any guidance/explanation would be appreciated. Thanks in advance! AI: The correct formulation is the following. Suppose $N$ is a subgroup of the group $G$. Let $g \in G$. Then $g N = N$ if and only if $g \in N$. You should know that for two cosets one has $$ \text{$a N = b N$ if and only if $a^{-1} b \in N$.}\tag{fact}$$ So $N = 1 N = g N$ if and only if $1^{-1} g = g \in N$. To prove (fact), you may start from the relation $R$ on $G$ defined by $a R b$ if and only if $a^{-1} b \in N$. You can prove this is an equivalence relation, and the class of an element $g$ is $g N$.
H: 2 questions regarding compactness and closed Let $(X,d)$ be a metric space. Let $E$, $F$ be two disjoint non-empty subsets of $X$ with $E$ compact and $F$ closed. Show that $\inf\{d(x,y): x\in E, y\in F\}>0$ Show that this is no longer true if $E$ is not compact: find two disjoint closed subsets $E$ and $F$ of $\mathbb{R}^2$ so that $\inf\{d(x,y): x\in E, y\in F\}=0$ I have been trying to use the fact that, in a metric space, sequential compactness is equivalent to compactness. I haven't had much luck. Any help would be hugely appreciated. AI: Hint. Can you show that the function $f : E \to \mathbb{R}$ defined as $f(x) = \inf \{d(x, y) : y \in F\}$ is continuous and greater than $0$? For the second part, a standard pair is $E=\{(x,0):x\in\mathbb{R}\}$ and $F=\{(x,e^x):x\in\mathbb{R}\}$: both sets are closed and disjoint, yet the distance between them tends to $0$ as $x\to-\infty$.
H: Two problems about rings. Somebody can to help me in such exercices: (1) A ring R such that $a^2 = a$ for all $a\in R$ is called a Boolean ring. Prove that every Boolean ring R is commutative and $a + a = 0$ for all $a \in R$. (2) Let R be a ring with more than one element such that for each nonzero $a\in R$ there is a unique $b \in R$ such that $aba = a$. Prove: (a) R has no zero divisors. (b) $bab = b$. (c) R has an identity. (d) R is a division ring. My greatest difficulty in the question 2 is to prove that R has no zero divisors. Thanks! AI: $a+a=(a+a)^2=a^2+a^2+a^2+a^2=a+a+a+a$, so $a+a=0$. $a+b=(a+b)^2=a^2+ab+ba+b^2=a+ab+ba+b$, so $ab+ba=0$. Using the previous fact, $ab-ba=0$ so $ab=ba$.
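Regarding the stated difficulty with 2(a), here is one standard trick (a sketch, using only the hypotheses of the problem): suppose $a\neq 0$ and $ac=0$, and let $b$ be the unique element with $aba=a$. Then $$a(b+c)a = aba + aca = a + (ac)a = a,$$ so by the uniqueness of $b$ we get $b+c=b$, i.e. $c=0$. The same computation with $ca=0$ (so that $aca=a(ca)=0$) shows that $a$ is not a right zero divisor either.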
H: How to do I calculate the conditional probability distribution? The Chicago Cubs are playing a best-of-five-game series (the first team to win 3 games win the series and no other games are played) against the St. Louis Cardinals. Let X denotes the total number of games played in the series. Assume that the Cubs win 59% of their games versus their arch rival Cardinals and that the probability of winning game is independent of other games. (b) calculate the conditional mean and standard deviation for X given that the Cardinals win the first game. I asked a similar question the other day and I learned about the negative binomial distribution here: Can you use combinatorics rather than a tree for a best of five match? Now in this part of the question, the conditions are slightly tweaked in that the Cardinals won the first game so it has to be modified. I tried doing this using the previous post but couldn't get the same answer. This is my approach: Let $X$= number of games for any $k$ in the interval $w\le X\le 2w-1$ where $w$ equals the number of wins to win the series (in this case $3$). Also, let $p$ equal the probability the cubs win which is $59\%$. Cardinals win: k=4 The key inference to make is the following: In order to have the series of length k, the winning team must have a win in its last game. This is because if there are $w$ wins in the $(k-1)$ interval, then the series would be over before $k$ games are played. So for cardinals winning in $k=4$, we have: W _ _ W. So there are $2$ remaining spots left which is represented as $(k-2)$ and we have to place $1$ win. So I would write this as: $\Pr(\text{Cardinals Win})=\binom{k-2}{w-2} \cdot (1-p)^w \cdot p^{k-w} \text{ for } k=3,4,5\tag{1}$ We could test this with k=5 and get the same derivation: W _ _ _ W. Here we have to place 1 win in 3 spots. Cubs Win: k=5 The cubs lost the first game, must win the last game, and place 2 wins in the remaining spots: L _ _ _ W. I wrote this as: $\Pr(\text{Cubs Win})=\binom{k-2}{w-1} \cdot (p)^w \cdot (1-p)^{k-w} \text{ for } k=4,5\tag{2}$ Note that for the case k=3, the Cubs lost the first game and they cannot win for that case. Since the event that the teams win are disjoint, the probability for the number of games played is simply the sum of equations (1) and (2). I then solved the equations for each value of $k$: $Pr(\text{Cubs win in 4 games})=p^3 \cdot (1-p)= (.59)^3(.41)$ $Pr(\text{Cubs win in 5 games})=3 p^3 \cdot (1-p)^2=3(.59)^3(.41)^2$ $Pr(\text{Cardinals win in 5 games})=3 (1-p)^3 \cdot (p)^2=3(.59)^3(.41)^2$ This is not the same thing as the book. What's even more confusing is that the term numbers don't even add up to the value of $k$. If I drew a tree diagram, the probability of an event is the product of the nodes of the tree. So if I have $k=4$, I would expect to have a product of 4 values, but they have 1 less. Can someone please tell me what I am doing wrong? Thank you again in advance. AI: Cardinals winning the first game complicates things, since the situation becomes less symmetrical. Let $p=0.59$. If we can find the probability distribution function of $X$, the number of games (conditional on Cards winning the first game), then finding the mean and variance is staightforward. Easiest is to find the (conditional) probability that $X=3$. the Cards have to win the next $2$ games. This has probability $(1-p)^2$. Next we find the probability that $X=4$. This can happen in $2$ different ways: (i) Cards win in $4$ and (ii) Cubs win in $4$. 
(i) Cards win in $4$ if they lose exactly one of Games 2 or 3, and win the rest. This has probability $(2)p(1-p)(1-p)$. (ii) Cubs win in $4$ if they win Games 2, 3, 4. This has probability $p^3$. For the probability that $X=4$, add the numbers obtained in (i) and (ii). Next we find the probability that $X=5$. We could do a cases analysis. But we know the answer! It is $1-\Pr(X=3)-\Pr(X=4)$. Remark: In the post, there is, among other things, the assertion that the probability Cubs win in $4$ games is $p^3(1-p)$. This is true for unconditional probabilities. But the Cards won the first game, that's a fact. So the probability the Cubs win in $4$ games, given that fact, is simply $p^3$.
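If you want the actual numbers, here is a short Python sketch following the answer's case analysis (recall $p=0.59$ is the Cubs' single-game win probability, and the Cardinals already lead the series 1-0):

```python
p, q = 0.59, 0.41
pmf = {3: q**2}                 # Cards sweep games 2 and 3
pmf[4] = 2*p*q*q + p**3         # Cards win in 4, or Cubs take games 2-4
pmf[5] = 1 - pmf[3] - pmf[4]    # everything else goes the distance

mean = sum(k*prob for k, prob in pmf.items())
sd = (sum(k*k*prob for k, prob in pmf.items()) - mean**2) ** 0.5
print(pmf, mean, sd)
```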
H: Primitive element (fields) I'm re-reading the primitive element lemma and I can't follow the following point. Let $f,g\in F[x]$ be in the polynomial ring of one variable over the field $F$, and let those two polynomials have a unique common simple root $\beta$. Then the monic greatest common divisor of $f$ and $g$, with coefficients in $F$, is $x-\beta$. How come $x-\beta\in F[x]$? AI: Euclid's algorithm for calculating the gcd of $f$ and $g$ will never leave the ring $F[x]$. Because $F$ is a field you can normalize the result to be monic (= leading coefficient equal to one) in the end.
H: Ratio between trigonometric sums: $\sum_{n=1}^{44} \cos n^\circ/\sum_{n=1}^{44} \sin n^\circ$ What is the value of this trigonometric sum ratio: $$\frac{\displaystyle\sum_{n=1}^{44} \cos n^\circ}{\displaystyle \sum_{n=1}^{44} \sin n^\circ} = \quad ?$$ The answer is given as $$\frac{\displaystyle\sum_{n=1}^{44} \cos n^\circ}{\displaystyle \sum_{n=1}^{44} \sin n^\circ} \approx \displaystyle \frac{\displaystyle\int_{0}^{45}\cos n^\circ dn}{\displaystyle\int_{0}^{45}\sin n^\circ dn} = \sqrt{2}+1$$ Using the fact $$\displaystyle \sum_{n = 1}^{44}\cos\left(\frac{\pi}{180}\cdot n\right)\approx\int_0^{45}\cos\left(\frac{\pi}{180}\cdot x\right)\, dx $$ My question is that I did not understand the last line of this solution. Please explain to me in detail. Thanks. AI: The last line in the argument you give could say $$ \sum_{n=1}^{44} \cos\left(\frac{\pi}{180}n\right)\,\Delta n \approx \int_1^{44} \cos n^\circ\, dn. $$ Thus the Riemann sum approximates the integral. The value of $\Delta n$ in this case is $1$, and if it were anything but $1$, it would still cancel from the numerator and the denominator. Maybe what you didn't follow is that $n^\circ = n\cdot\dfrac{\pi}{180}\text{ radians}$? The identity is ultimately reducible to the known tangent half-angle formula $$ \frac{\sin\alpha+\sin\beta}{\cos\alpha+\cos\beta}=\tan\frac{\alpha+\beta}{2} $$ and the rule of algebra that says that if $$ \frac a b=\frac c d, $$ then this common value is equal to $$ \frac{a+c}{b+d}. $$ Just iterate that a bunch of times, until you're done. Thus $$ \frac{\sin1^\circ+\sin44^\circ}{\cos1^\circ+\cos44^\circ} = \tan 22.5^\circ $$ and $$ \frac{\sin2^\circ+\sin43^\circ}{\cos2^\circ+\cos43^\circ} = \tan 22.5^\circ $$ so $$ \frac{\sin1^\circ+\sin2^\circ+\sin43^\circ+\sin44^\circ}{\cos1^\circ+\cos2^\circ+\cos43^\circ+\cos44^\circ} = \tan 22.5^\circ $$ and so on. Now let's look at $\tan 22.5^\circ$. If $\alpha=0$ then the tangent half-angle formula given above becomes $$ \frac{\sin\beta}{1+\cos\beta}=\tan\frac\beta2. $$ So $$ \tan\frac{45^\circ}{2} = \frac{\sin45^\circ}{1+\cos45^\circ} = \frac{\sqrt{2}/2}{1+(\sqrt{2}/2)} = \frac{\sqrt{2}}{2+\sqrt{2}} = \frac{1}{\sqrt{2}+1}. $$ In the last step we divided the top and bottom by $\sqrt{2}$. What you have is the reciprocal of this. Postscript four years later: In my answer I explained why the answer that was "given" was right, but I forgot to mention that $$ \frac{\displaystyle\sum_{n=1}^{44} \cos n^\circ}{\displaystyle \sum_{n=1}^{44} \sin n^\circ} = \sqrt{2}+1 \text{ exactly, not just approximately.} $$ The reason why the equality is exact is in my answer, but the explicit statement that it is exact is not.
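A quick numeric check of the exactness claim, in Python:

```python
import math

s_cos = sum(math.cos(math.radians(n)) for n in range(1, 45))
s_sin = sum(math.sin(math.radians(n)) for n in range(1, 45))
print(s_cos / s_sin)     # 2.414213562373095...
print(math.sqrt(2) + 1)  # 2.414213562373095..., agreeing to machine precision
```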
H: Is $f'(x)-3f(x) = 0$ subspace of differentiable functions $f\colon (0,1)\to \mathbb{R}$ $V$ is the space of differentiable functions $f\colon(0,1) \to \mathbb{R}$, and $W$ is the subset of $V$ consisting of those $f$ that satisfy $f'(x) - 3f(x) = 0$ for all $x\in (0,1).$ Is the subset $W$ a subspace of $V$? I know that I have to prove that it's closed under scalar multiplication and under addition. But I forgot all the stuff about differentiable functions, so I don't know how to do this. My intuition tells me that since the zero function satisfies the equation, it should be a valid subspace :) If anyone could elaborate on this I would be grateful. AI: Certainly. Let $f, g$ be two differentiable functions satisfying the given conditions (i.e., $f, g \in W$), and let $c$ be a scalar in $\mathbb{R}$. Then we can see: $$(f+g)'(x) - 3(f+g)(x) = f'(x)+g'(x) - 3(f(x) + g(x)) = (f'(x) - 3f(x)) + (g'(x) - 3g(x)) = 0 + 0 = 0,$$ i.e., $(f+g)(x) \in W$, and $$(cf)'(x) - 3(cf(x)) = cf'(x) - 3cf(x) = c(f'(x) - 3f(x)) = c(0) = 0,$$ i.e., $cf(x) \in W$. So $W$ is clearly closed under addition and scalar multiplication. Observing that $f(x) = 0$ is obviously in $W$, we are done.
H: calculate integral $\;2\int_{-2}^{0} \sqrt{8x+16}dx$ I want to calculate the integral $$2\int_{-2}^{0} \sqrt{8x+16}dx$$ The answer is $\;\dfrac {32}{6}\;$ but I don't know how to get it. AI: $$I = F(x) = \int_{-2}^0 \sqrt{(8x + 16)}\,dx$$ We use substitution: Let $u = 8x + 16,\;\;du = 8\,dx \implies dx = \dfrac 18 du$ Change limits of integration: When $x = -2, u = 0$, when $x = 0, u = 16$. Substituting equivalent expressions and changing the limits of integration then gives us: $$\int_{-2}^0 \sqrt{(8x + 16)}\,dx=\int_{0}^{16} \sqrt{u}\,\left(\frac 18 du\right) = \dfrac 18 \int_0^{16} u^{1/2}\,du$$ Now we use the power rule to integrate: $\quad \int u^a\,du = \dfrac{u^{a + 1}}{a+1} + C,\quad\text{for all}\;a\neq -1$ We integrate with respect to $u$ and evaluate the result $I = F(u)$: $F(16) - F(0)$. $$ \dfrac 18 \int_0^{16} u^{1/2}\,du = \frac{1}{8}\left(\frac{2}{3}u^{3/2}\right)=\frac{1}{12}\left(u^{3/2}\right)\Bigg|^{16}_{0} = \frac{1}{12}\Bigl[(64) - (0)\Bigr] = \frac{32}{6}$$ If your integral was, as you write it, given as $2I = 2F(x) = 2F(u)$, then our result will be $$2\cdot \frac {32}{6} = \frac{32}{3}$$
H: Applying the contrapositive of the finite intersection property I'm reading a proof which has the following setting. I have a family $D$ of compact sets with empty intersection. The next line takes a finite subset of $D$ with empty intersection. This is clearly possible if families of compact sets enjoy the finite intersection property. Upon trying to verify this I realize it might be much harder than it sounds. Is this the case or am I barking up the wrong tree? AI: It’s true in any Hausdorff space. (Hausdorffness ensures that compact sets are closed.) Let $\mathscr{K}$ be a family of compact sets in a space $X$, and suppose that $\bigcap\mathscr{K}=\varnothing$. Fix $K_0\in\mathscr{K}$, and let $\mathscr{K}_0=\mathscr{K}\setminus\{K_0\}$. For each $K\in\mathscr{K}_0$ let $U_K=X\setminus K$, and let $\mathscr{U}=\{U_K:K\in\mathscr{K}_0\}$. Then $\mathscr{U}$ is an open cover of $K_0$, and $K_0$ is compact, so $\mathscr{U}$ has a finite subcover. Let $\mathscr{F}$ be a finite subset of $\mathscr{K}_0$ such that $\{U_K:K\in\mathscr{F}\}$ covers $K_0$; then $\mathscr{F}\cup\{K_0\}$ is a finite subset of $\mathscr{K}$ with empty intersection.
H: The Fitting subgroup centralizes minimal normal subgroups in finite groups Let $G$ be a finite group: If $N$ is a minimal normal subgroup of $G$, then $F(G) \leq C_G(N)$. Here $C_G(N)$ denotes the centralizer of $N$ in $G$, and $F(G)$ denotes the Fitting subgroup of $G$. AI: Write $F=F(G)$. If $N$ is abelian, then $N \leq F$, and $[F,N]$ is a normal subgroup of $G$ properly contained in $N$: if we had $[F,N]=N$, then iterating would put $N$ inside every term of the lower central series of $F$, contradicting the nilpotency of $F$. By minimality of $N$, $[F,N]=1$, as claimed. If $N$ is non-abelian, then $N$ cannot lie inside the nilpotent group $F$ (a minimal normal subgroup is a direct product of isomorphic simple groups, and in the non-abelian case such a product is not nilpotent), so $F \cap N$, being a normal subgroup of $G$ properly contained in $N$, is trivial; hence $[F,N] \leq F \cap N = 1$, as claimed. You may try to prove sort of a converse: $F(G)$ is the intersection of the centralizers, not just of the minimal normal subgroups, but of all $K/L$ where $K$ is a minimal normal subgroup of $G/L$.
H: What are the last two digits of $3^{3^{100}}$? What are the last two digits of $3^{3^{100}}$? I had this on an exam, just curious. AI: From Euler's theorem (the generalization of Fermat's little theorem), $a^{\phi(n)}\equiv 1\pmod n$ if $\gcd(a,n)=1$. With $a=3$ and $n=100$, we conclude $3^{40}\equiv1\pmod{100}$. Hence if $3^{100}=m\cdot 40+r$, we only need to calculate $3^r\pmod{100}$. By the same reasoning, we find $3^{16}\equiv 1\pmod{40}$, hence $3^{100}\equiv 3^4=81\equiv 1\pmod {40}$, hence our $r=1$ and ultimately the desired last digits are "03".
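Python's built-in modular exponentiation confirms this in two lines (valid because $\gcd(3,100)=1$):

```python
r = pow(3, 100, 40)    # 1, reducing the tower mod phi(100) = 40
print(pow(3, r, 100))  # 3, so the last two digits are "03"
```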
H: Find the length of the parametric curve Find the length of the parametric curve defined by: $x=t+\dfrac{1}{t}$ and $y=\ln{t^2}$ on the interval $(1 \le t \le 4)$. AI: Hint: The derivative of your first function is $1-\frac{1}{t^2}$. The derivative of the second is $\frac{2}{t}$. Note that the sum of the squares of the derivatives is $$\left(1+\frac{1}{t^2}\right)^2.$$ Now we can take the square root easily, and integrate. Added: We have, using the familiar $(a+b)^2=a^2+2ab+b^2$, $$\left(1-\frac{1}{t^2}\right)^2=1-\frac{2}{t^2}+\frac{1}{t^4}.$$ Add the square of $\frac{2}{t}$, that is, $\frac{4}{t^2}$. We get $$1+\frac{2}{t^2}+\frac{1}{t^4},$$ which is equal to $\left(1+\frac{1}{t^2}\right)^2$. Remark: This sort of trickery happens often in arclength problems, parametric or not. The point is that if we take two reasonable functions $x(t),y(t)$, and calculate $$\sqrt{\left(\frac{dx}{dt}\right)^2+\left(\frac{dy}{dt}\right)^2},$$ we typically get a function which has no elementary antiderivative. Thus many of the arclength problems in calculus books involve very specially selected functions for which there is "magic" cancellation that makes the thing inside the square root the square of something nice. Note that if we change the function a little bit, taking in our case $x(t)=t+\frac{1.2}{t}$ we end up with an integral that cannot be done in terms of elementary functions.
H: Multiple Integral I'd like to know exactly when we are allowed to swap the order of the variables in a multiple integral, and which cases cause problems. (When is $\int_a^b \int_c^d \int_e^f f(x,y,z) \, \mathrm dx\, \mathrm dy \, \mathrm dz \neq \int_c^d \int_a^b \int_e^f f(x,y,z) \, \mathrm dx\, \mathrm dz \, \mathrm dy$?) And when we are allowed to swap, what changes, if any, do we have to make to the function? AI: The order of integration can matter only when the integral of the absolute value is not finite; finiteness of the integral of $|f|$ is equivalent to Lebesgue integrability, and in that case Fubini's theorem guarantees that every order of integration gives the same value, with no change needed to the function.
H: Compute dependent probability I have $X = \pmatrix{-2 & -1 & 0 & 1 & 2 \\ .05 & .2 & .3 & .4 & .05} $, $$F(X) = \begin{cases} 0 & X < -2 \\ .05 & -2 \le X < -1 \\ .25 & -1 \le X < 0 \\ .55 & 0 \le X < 1 \\ .95 & 1 \le X < 2 \\ 1 & 2 \le X \end{cases} $$ I must compute: $$ P(X > -2.1\ |\ X < 1.3) \\ $$ I know Bayes' formula, which says: $$ P(A|B) = \frac{P(B|A) \times P(A)}{P(B)} $$ I computed $P(A) = P(X > -2.1) = 1 - F(-2.1) = 1$ and $P(B) = P(X<1.3) = F(1.3) = \frac{95}{100}$, but I don't know how to compute $P(B|A)$. AI: For this one, no complex formula seems necessary. Note all values for $X$ are in $[-2,2]$, so $X$ is always at least $-2.1$. Thus (since $\mathbb{P}[X<1.3] \neq 0$), your probability is $\mathbb{P}[X>-2.1|X<1.3]=1$.
H: Sheafification of singular cochains Let $S^k$ be the presheaf on a space $X$ that assigns to every open set $U$ the abelian group $S^k(U)$ of singular $k$-cochains on $U$. This is clearly not a sheaf. Consider the sheafification $F^k$ of each $S^k$. These sheaves form an exact resolution of the constant sheaf of integers. We can take global sections on this sheaf resolution to obtain a cochain complex $F^*(X)$. Does the cohomology of this cochain complex coincide with ordinary singular cohomology? AI: You haven't said this explicitly, but your claim is that the sheaf cohomology of the constant sheaf $\Bbb Z_X$ is isomorphic to the singular cohomology of $X$: $$H^\ast(X, \Bbb Z_X) \cong H^\ast_{\text{sing}}(X; \Bbb Z).$$ This is true for $X$ locally contractible. One checks that the complexes $S^\ast(X)$ and $\Gamma(X, F^\ast)$ are quasi-isomorphic. More generally, the assertion holds for any abelian group $G$: $$H^\ast(X, G_X) \cong H^\ast_{\text{sing}}(X; G)$$ when $X$ is locally contractible. I don't have any references on hand, but I would expect that you could find the proof in most books that introduce the concept of sheaf cohomology.
H: Algebra simplification in mathematical induction . I was proving some mathematical induction problems and came through an algebra expression that shows as follows: $$\frac{k(k+1)(2k+1)}{6} + (k + 1)^2$$ The final answer is supposed to be: $$\frac{(k+1)(k+2)(2k+3)}{6}$$ I walked through every possible expansion; I combine like terms, simplify, factor, but never arrived at the answer. Could someone explain the steps? AI: A good idea for this sort of thing is to use Wolfram Alpha to ensure that the two things are, indeed, equal. In this case, they are, so we can spend some time looking to factor. $$\begin{align} \frac{k(k+1)(2k+1)}{6} + (k + 1)^2 &= \frac{k(k+1)(2k+1) + 6(k + 1)^2}{6}\\ &= \frac{(k+1)(k(2k+1) + 6(k + 1))}{6}\\ &= \frac{(k+1)(2k^2 +7k + 6)}{6}\\ &= \frac{(k+1)(k+2)(2k + 3)}{6}\\ \end{align}$$ EDIT: A helpful trick is the one I did in the step from line 1 to line 2: I factored out the $k+1$ immediately, rather than expanding the whole expression. This makes it easier to deal with--most people have more practice factoring quadratics than cubics. EDIT in response to comment: $$\begin{align} \frac{k(k+1)(2k+1) + 6(k + 1)^2}{6} &= \frac{k\color{red}{(k+1)}(2k+1) + 6\color{red}{(k+1)}(k+1)}{6}\\ &=\frac{\color{red}{(k+1)}\Big(k(2k+1) + 6(k+1)\Big)}{6} \end{align}$$
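In the same spirit as the Wolfram Alpha check, a sympy one-liner verifies the identity symbolically:

```python
from sympy import symbols, factor

k = symbols('k')
expr = k*(k + 1)*(2*k + 1)/6 + (k + 1)**2
print(factor(expr))  # (k + 1)*(k + 2)*(2*k + 3)/6
```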
H: About the sum of the digits of $k^{105}$. I read here that We cannot find an integer $k>2$ such that the sum of the digits of $k^{105}$ is $k$. Does anyone know a proof of this? AI: A dumb-but-working approach could look like this: $k^{105}$ has $1+\lfloor\log_{10} k^{105}\rfloor=1+\lfloor 105\log_{10} k\rfloor$ digits. A number with $n$ digits has digit sum between $1$ and $9n$. Thus, the maximum possible digit sum of $k^{105}$ is $9(1+\lfloor 105\log_{10}k\rfloor)$. This quantity is smaller than $k$ for $k\geq 3330$. This reduced the problem to a finite one, which can be easily finished by a computer.
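The answer reduces the claim to checking $k<3330$, which a few lines of Python can do by brute force (this takes a few seconds, since each $k^{105}$ has a few hundred digits):

```python
found = [k for k in range(3, 3330)
         if sum(map(int, str(k**105))) == k]
print(found)  # expected to be empty, matching the quoted claim
```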
H: How many even numbers can you form which are greater than 100 subject to the following constraints? How many 3 digit even numbers can be made from the numbers 0,1,2,3 which are greater than 100. The book answer says 20 but I am getting 23. AI: Your answer is correct. As a first approximation there are $3$ choices for the first digit, $4$ choices for the second digit, and $2$ choices for the last digit, since it must be $0$ or $2$. That’s a total of $3\cdot4\cdot2$ numbers. However, one of those is $100$, which has to be excluded, so there are only $23$. Added: And indeed the numbers involved are small enough to permit direct verification: $$\begin{align*} &102,110,112,120,122,130,132,\\ &200,202,210,212,220,222,230,232,\\ &300,302,310,312,320,322,330,332 \end{align*}$$
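Brute force confirms the count (a throwaway Python check):

```python
count = sum(1 for n in range(101, 1000)
            if n % 2 == 0 and set(str(n)) <= set('0123'))
print(count)  # 23
```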
H: A convergence test for improper integrals ($\mu$-test) I came across a convergence test for improper integrals referred to as the $\mu$-test while I was looking through a textbook. I'm interested in understanding the idea behind the test since no explanation is given in the textbook. Let $f(x)$ be unbounded at $a$ and integrable in the interval $[a+ \epsilon, b]$ where $0 < \epsilon < b-a$. If there is a number $\mu$ between $0$ and $1$ such that $\lim_{x \to a^{+}} (x-a)^{\mu} f(x)$ exists, then $\int_{a}^{b} f(x) \ dx$ converges absolutely. If there is a number $\mu$ greater than or equal to $1$ such that $\lim_{x \to a^{+}} (x-a)^{\mu} f(x)$ exists and is nonzero, then $\int_{a}^{b} f(x) \ dx$ diverges, and the same is true if $\lim_{x \to a^{+}} (x-a)^{\mu} f(x) = \pm \infty$. In the case $f(x)$ is unbounded at $b$, we should find $\lim_{x \to b^{-}} (b-x)^{\mu} f(x)$, other conditions remaining the same. If $\mu =1$ (or any positive integer) and the limit exists and is nonzero, then $f(x)$ would have a pole at $a$ if $f(x)$ were a function of a complex variable. So from that perspective I can see why the integral would diverge. AI: If $\lim_{x\to a^+} (x-a)^\mu f(x) = L$, then $|f(x)| \le (|L|+1)|x-a|^{-\mu}$ near $x=a$. Since $$\int_a^b |x-a|^{-\mu}\,dx$$ converges for $0 < \mu < 1$, the integral $$\int_a^b f(x)\,dx$$ converges absolutely. The other assertions are similar.
H: Question on statistics - mean and standard deviation? I'm not sure what to do here: A class of 30 students were weighed. Their mean was found to be 58kg with a standard deviation of 5.5kg. What percentage of students will have a mass between 52.5 kg and 63.5 kg? AI: For the answer, you are expected to assume that the weights have normal distribution. This is a very dubious assumption. You are also expected to believe that this normal distribution has standard deviation $5.5$. You are also expected to believe that the probability that such a normal lies within $1$ standard deviation unit of the mean gives us the right percentage. That need not be true. At best it gives an estimate of the percentage that is likely to be not very far from the truth. But let us hold our noses and make all these assumptions. Note that $52.5=58-5.5$ and $63.5=58+5.5$, so we are asked about the interval within one standard deviation of the mean. From tables of the standard normal, the probability that a normal with mean $\mu$ is $\le \mu+\sigma$, where $\sigma$ is the standard deviation, is approximately $0.8413$. We just need to look up $\Pr(Z\le 1)$, where $Z$ is standard normal. So the probability it is between $\mu$ and $\mu+\sigma$ is about $0.3413$. Double this to find the probability of lying between $\mu-\sigma$ and $\mu+\sigma$: about $0.6826$, i.e. roughly $68\%$ of the students.
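For a check without tables, Python's `math.erf` gives the standard normal CDF directly:

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma = 58, 5.5
print(Phi((63.5 - mu) / sigma) - Phi((52.5 - mu) / sigma))  # about 0.6827
```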
H: Integral inequality In my statistics book, Chebyshev's inequality is proven. In several steps this inequality is used: $$ \int_a^{+\infty} \phi(x) f_X(x)dx \quad \geq \quad \phi(a) \int_a^{+\infty} f_X(x)dx $$ and also: $$ \int_{-\infty}^{-a} \phi(x) f_X(x)dx \quad \geq \quad \phi(-a) \int_{-\infty}^{-a} f_X(x)dx $$ Here $a \geq 0$, $\phi:\mathbb{R}\to\mathbb{R}^+$ is a positive function, and $f_X$ is a pdf. Why is this valid? AI: Since $\phi$ is increasing (this is important here) and $f_X$ is positive, we have for $a>0$ and $x\ge a$, $\phi(x)\ge\phi(a)$. Therefore, $$ \begin{align} \int_a^\infty\phi(x)f_X(x)\,\mathrm{d}x &\ge\int_a^\infty\phi(a)f_X(x)\,\mathrm{d}x\\ &=\phi(a)\int_a^\infty f_X(x)\,\mathrm{d}x\\ \end{align} $$ The other inequality is simply a change of variables.
H: Coordinate geometry and translations: rotations and composites oh my! Here's the problem: Let $R_y$ be a reflection in the $y$-axis and $T : (x,y) \rightarrow (x-3,y+1)$. Which one of the following transformations is equivalent to $R_y \circ T$? Here's my thought process: okay, so this means that first we perform $R_y$ on $(x,y)$, which gives us $(-x,y)$. Then we perform the transformation on $(-x,y)$ $T: (x,y) \to (x-3,y+1)$ (original translation) $T: (-x,y) \to (-x-3,y+1)$ (now we substitute in the coordinates from $R_y$) One good strategy is to substitute values for $x$ and $y$, so I did that, using the numbers $1$ and $2$: $(x,y) \to (1,2)$ $(1,2)(R_y) \to (-1,2)$ $((-1)-3,(2)+1) \to (-4,3)$ The issue is that my answer doesn't match any of the answer choices! What am I doing wrong? AI: It seems like you're doing the operations in the wrong order. Note that $$(R_y\circ T)(x, y) = R_y(T(x, y)).$$ First you apply $T$, then $R_y$; you applied $R_y$ then applied $T$. You can see the answer below, but try it for yourself first. $(R_y\circ T)(x, y) = R_y(T(x, y)) = R_y(x-3, y+1) = (-x+3, y+1)$
H: Minimal subgroups lie in the center so group is nilpotent Let $G$ be a group of odd order. If every minimal subgroup lies in the center, prove that $G$ is nilpotent . Thanks! AI: Proposition: Let $G$ be a finite group, $p$ an odd prime, such that every subgroup of order $p$ is contained in the center of $G$. Then $G$ is $p$-nilpotent. Corollary: If $G$ has odd order and every subgroup of prime order is central, then $G$ is nilpotent. Lemma: (Thompson) If $P \leq G$ is a $p$-group for $p$ an odd prime, and $Q \leq N_G(P)$ is a subgroup whose order is relatively prime to $p$, and $Q$ centralizes every element of $P$ of order $p$, then $Q$ centralizes $P$. Proof: See Gorenstein's Finite Groups theorem 5.3.10, page 184. $\square$ Proof: (of the proposition) Let $P$ be a $p$-subgroup of $G$ and let $x \in N_G(P)$ have order relatively prime to $p$. Since every subgroup of order $p$ is central, $x$ centralizes every element of order $p$, and thus by the lemma, $x \in C_G(P)$. Hence $[N_G(P):C_G(P)]$ is a power of $p$. By Frobenius's normal $p$-complement theorem (Theorem 7.4.5, page 253), $G$ is $p$-nilpotent. $\square$ Proof: (of the corollary) Such a group is $p$-nilpotent for every odd prime $p$ by the proposition. Since the group itself has odd order, it is $p$-nilpotent for every prime $p$ dividing the order of $G$. Hence the group is nilpotent (the intersection of the normal Sylow $q$-complements for $q \neq p$ is a normal Sylow $p$-subgroup). $\square$
H: Why is it impossible to find distinct $z_1,z_2,z_3, z_4\in \mathbb C$ such that $|z_1- z_2|=|z_1-z_3|=|z_2-z_3|=|z_1-z_4|=|z_2-z_4|=|z_3-z_4|$? A. It is possible to find distinct $z_1,z_2,z_3\in \mathbb C$ such that $|z_1-z_2|=|z_1-z_3|=|z_2-z_3|$. Answer: True B. It is possible to find distinct $z_1,z_2,z_3, z_4\in \mathbb C$ such that $|z_1- z_2|=|z_1-z_3|=|z_2-z_3|=|z_1-z_4|=|z_2-z_4|=|z_3-z_4|$. Answer: False I don't understand why the second one is false. Can anyone help me out? AI: You can think about $\mathbb{C}$ as a two dimensional plane (that is, like $\mathbb{R}^2$). Each complex number defines a point in the plane, and given two complex numbers $z$ and $w$, $|z - w|$ is the distance between them, or the length of the line segment which joins them. For three complex numbers $z_1$, $z_2$, $z_3$, the condition $|z_1 - z_2| = |z_1 - z_3| = |z_2 - z_3|$ implies that the three points are the vertices of an equilateral triangle. So if you draw an equilateral triangle in the complex plane, and label the vertices $z_1$, $z_2$ and $z_3$, you will have three complex numbers which satisfy the required condition. In your comment you asked whether $z_1 = z_2 = z_3$. This is not the case as the question asked for the numbers to be distinct. For four complex numbers $z_1$, $z_2$, $z_3$, $z_4$, the condition $$|z_1 - z_2| = |z_2 - z_3| = |z_3 - z_4| = |z_1 - z_4|$$ implies that the four points are the vertices of a rhombus (labelled cyclically). However, the question asks about four complex numbers which satisfy the stronger condition $$|z_1 - z_2| = |z_2 - z_3| = |z_3 - z_4| = |z_1 - z_4| = |z_1 - z_3| = |z_2 - z_4|.$$ If four such complex numbers did exist, they would have to be the vertices of a rhombus labelled cyclically, but note that $|z_1 - z_3|$ and $|z_2 - z_4|$ are the lengths of the diagonals of the rhombus. So $z_1$, $z_2$, $z_3$, $z_4$ would have to be the vertices of a rhombus which had equal side and diagonal lengths, but no such rhombus exists: at least one diagonal of a rhombus is always longer than its side, since the diagonals $d_1$, $d_2$ and the side $s$ satisfy $d_1^2+d_2^2=4s^2$.
H: Find CDF of uniformly distributed variable Suppose $X$ is uniformly distributed over $[-1,3]$ and $Y=X^2$. Find the CDF $F_{Y}(y)$ From the definition, I know that $X$'s PDF is $\displaystyle f_{X}(x)=\begin{cases} \frac{1}{4}, & -1\leq x\leq 3, \\ 0, & \text{otherwise}. \end{cases}$ Thus $\displaystyle F_{Y}(y)=P(X^2\leq y)=\int_{-\sqrt{y}}^{\sqrt{y}}f_{X}(x)dx$. Besides, $0\leq Y \leq9$, thus $F_{Y}(y)=0$ when $y<0$ and $F_{Y}(y)=1$ when $y>9$. Due to the fact that the integral depends on $y$, I need to split the interval into a few parts. Any tips on how to choose the $y$ intervals? Thanks. AI: For $0\le y\le 1$, we have $Y\le y$ if and only if $-\sqrt{y}\le X\le \sqrt{y}$. But for $0\le y\le 1$, $$\Pr(-\sqrt{y}\le X\le \sqrt{y})=\frac{2\sqrt{y}}{4}.$$ The reason is that the interval from $-\sqrt{y}$ to $\sqrt{y}$ has length $2\sqrt{y}$. For $1\lt y\lt 9$, we still want the probability that $-\sqrt{y}\le X\le \sqrt{y}$. However, $X$ "can't" be below $-1$. So equivalently we want the probability that $-1\le X\le \sqrt{y}$. The interval now has length $\sqrt{y}-(-1)=\sqrt{y}+1$, so the required probability is $$\frac{\sqrt{y}+1}{4}.$$ Remark: For the uniform distribution, setting up integrals is unnecessary. However, the cdf is always given by $$F_Y(y)=\Pr(Y\le y)=\int_{-\sqrt{y}}^{\sqrt{y}} f(x)\,dx$$ where $f(x)$ is the density function of $X$. But we must take account of the fact that the density function is $0$ to the left of $-1$. So for $0\le y\le 1$, we have $$F_Y(y)=\int_{-\sqrt{y}}^{\sqrt{y}}\frac{dx}{4},$$ while for $1\lt y\le 9$ we get $$F_Y(y)=\int_{-1}^{\sqrt{y}}\frac{dx}{4}.$$ (The lower bound is $-1$, not $-\sqrt{y}$, because the density function of $X$ to the left of $-1$ is $0$, not $\frac{1}{4}$.)
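A quick simulation check of both branches of the formula (the test points $y=0.5$ and $y=4$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
y_samples = rng.uniform(-1, 3, size=1_000_000) ** 2
for y in (0.5, 4.0):
    empirical = (y_samples <= y).mean()
    exact = 2*np.sqrt(y)/4 if y <= 1 else (np.sqrt(y) + 1)/4
    print(y, empirical, exact)
```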
H: How many Jordan normal forms are there for this characteristic polynomial? Given the characteristic polynomial of a matrix $A \in \mathbb{C}^{6x6}$ with $p(A)=(\lambda-2)^2(\lambda-1)^4$, we were supposed to determine all Jordan normal forms that have this characteristic polynomial. I determined 10 (is this correct?) and was wondering whether this is a general way to compute the number of them for an arbitrary characteristic polynomial. AI: Let's see: up to order of the Jordan Blocks and the eigenvalues, we have, with $\,m_A(x)=$ the matrix's minimal polynomial: $$m_A(x)=(x-1)(x-2)\;\;\;:\;\;\;\;\begin{pmatrix}1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&2&0\\ 0&0&0&0&0&2\end{pmatrix}$$ $$m_A(x)=(x-1)(x-2)^2\;\;\;:\;\;\;\;\begin{pmatrix}1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&2&1\\ 0&0&0&0&0&2\end{pmatrix}$$ $$(1)\;\;\;m_A(x)=(x-1)^2(x-2)\;\;\;:\;\;\;\;\begin{pmatrix}1&1&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&2&0\\ 0&0&0&0&0&2\end{pmatrix}$$ $$(2)\;\;\;m_A(x)=(x-1)^2(x-2)\;\;\;:\;\;\;\;\begin{pmatrix}1&1&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&1&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&2&0\\ 0&0&0&0&0&2\end{pmatrix}$$ $$m_A(x)=(x-1)^3(x-2)\;\;\;:\;\;\;\;\begin{pmatrix}1&1&0&0&0&0\\ 0&1&1&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&2&0\\ 0&0&0&0&0&2\end{pmatrix}$$ $$(1)\;\;\;m_A(x)=(x-1)^2(x-2)^2\;\;\;:\;\;\;\;\begin{pmatrix} 1&1&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&1&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&2&1\\ 0&0&0&0&0&2\end{pmatrix}$$ $$(2)\;\;\;m_A(x)=(x-1)^2(x-2)^2\;\;\;:\;\;\;\;\begin{pmatrix} 1&1&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&2&1\\ 0&0&0&0&0&2\end{pmatrix}$$ $$m_A(x)=(x-1)^3(x-2)^2\;\;\;:\;\;\;\;\begin{pmatrix} 1&1&0&0&0&0\\ 0&1&1&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&2&1\\ 0&0&0&0&0&2\end{pmatrix}$$ $$m_A(x)=(x-1)^4(x-2)\;\;\;:\;\;\;\;\begin{pmatrix} 1&1&0&0&0&0\\ 0&1&1&0&0&0\\ 0&0&1&1&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&2&0\\ 0&0&0&0&0&2\end{pmatrix}$$ $$m_A(x)=(x-1)^4(x-2)^2\;\;\;:\;\;\;\;\begin{pmatrix} 1&1&0&0&0&0\\ 0&1&1&0&0&0\\ 0&0&1&1&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&2&1\\ 0&0&0&0&0&2\end{pmatrix}$$ Yes, I also get $\,10\,$ different (i.e., non-similar) JCF's for that matrix, check we agree (I, you or both could be wrong), and I don't think there's a general method to come up with the different JCF's . I, for example, try to do it by checking the different possibilities for the minimal polynomial, according to the its different possbile degrees...
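As for a general way to count them (if not to write them all down): the Jordan form is determined by one partition of the algebraic multiplicity per eigenvalue, so the number of non-similar forms is the product of the partition numbers of the multiplicities. A sympy check for this example:

```python
from sympy import npartitions

# eigenvalue 1 has multiplicity 4, eigenvalue 2 has multiplicity 2
print(npartitions(4) * npartitions(2))  # 5 * 2 = 10
```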
H: Can we see a ring $R$ as a subring of $S^{-1}R$? I know that we can consider an integral domain $D$ as a subring of its quotient field, I'm wondering why we can't consider any commutative ring with identity as a subring of $S^{-1}R$ identifying $r\in R$ as $r/1_R\in S^{-1}R$. Thanks in advance. AI: This is because the natural ring homomorphism $R \to S^{-1}R$ is not always injective. It is injective when $R$ is an integral domain, which is why we can make this identification.
H: What leads us to believe that 2+2 is equal to 4? My professor of the Epistemological Basis of Modern Science discipline was questioning what we consider knowledge and what makes us believe or not in its reliability. To test us, he asked us to write down our justifications for why we accept as true that 2 plus 2 is equal to 4. Everybody, including me, answered that we believe in it because we can prove it: I can take 2 beans and 2 more beans and in the end I will have 4 beans. But the professor told us: "And if all the beans in the universe disappear?", and of course he can extend this to any object we choose for the proof. What he was trying to show us is that the logical-mathematical universe is independent of our universe. I was pretty delighted with this question and I want to go deeper. I have already searched about the Peano axioms and the Zermelo-Fraenkel axioms, although I think the answer that I am looking for can't be explained by an axiom. It is a complicated question for me, very confusing, but try to understand: what I want is the background process, the gears of addition. For instance, you can say that a+0=a and then say a+1 = a+S(0) = S(a+0) = S(a), although that doesn't show what the addition operation itself is. Can addition be represented graphically? Like rows that rotate, or lines that join? Summarizing, I think my question is: How can I understand addition, not only learn how to do it, not just reproduce what teachers have taught me like a machine? How can I make a mental construct of this mathematical operation? AI: I've always liked this approach: that a naming precedes a counting. [Illustrations from the original answer omitted.]
H: Notation for set of all closed sets Is there a common notation for the set of all closed sets of a topological space? I have been using $(X,\tau)$ to denote a topological space with $\tau$ being the topology, set of all open sets. I was wondering if there is something like that this is used widely but for all the closed sets. AI: There is no such standard notation. The safest approach is to let $\langle X,\tau\rangle$ (or $\langle X,\mathscr{T}\rangle$, etc.) be a topological space and then explicitly to name the collection of closed sets, e.g., by letting $\mathscr{F}=\{X\setminus U:U\in\tau\}$. Since $F$ (from French fermé) is one of the letters that I conventionally use for closed sets, I’m likely to use $\mathscr{F}$ or $\mathscr{C}$ (for closed) for the collection of closed sets unless those letters have been pre-empted.
H: Irreducible characters form orthonormal basis of set of class functions I am reading Serre's book (Linear Representations of Finite Groups). Theorem 6 in chapter 2 says that the irreducible characters $\chi_1,\dotsc,\chi_h$ of a finite group $G$ form an orthonormal basis of $H$, the set of class functions on $G$. It says in the proof (given the $\chi_i$ form are orthonormal) that 'it is enough to show that every element of $H$ orthogonal to the $\chi_i^\ast$ is zero', where $\ast$ is the complex conjugate. Why is this so? Will appreciate any hints etc. AI: If $V$ is a finite dimensional inner product space, and $W$ is a subspace, then: $$V=W\oplus W^{\perp}$$ Now, let $V$ be the space of class functions, and let $W$ be the subspace generated by the irreducible characters. Then the statement you've written means to show that $W^{\perp}=0$ from which it follows that $V=W$. Perhaps we should also note that the subspace generated by irreducible characters is the same as that generated by their conjugates. Can you see why this is true?
H: Probability question/"puzzle" Given I have some number X. I draw a random number R from the uniform distribution on the unit interval and construct two new numbers Y and Z through the following procedure: Y = R*X Z = X - Y What is the probability that neither Y nor Z is below 1? As far as I can see, the probability would be $(1-\frac{2}{X})$ - is this correct? AI: Assuming $X \gt 1$ (if not, both $Y$ and $Z$ will be below $1$), $P(Y \lt 1)=\frac 1X$ and $P(Z \lt 1)= \frac 1X$. As long as $X \gt 2$ these events are disjoint, and the chance that one of them is less than $1$ is $\frac 2X$. If $1 \lt X \lt 2$, one of them is certain to be below $1$. If $X \gt 2$, the chance that neither $Y$ nor $Z$ is less than $1$ is then $1-\frac 2X$
H: For subspaces, if $N\subseteq M_1\cup\cdots\cup M_k$, then $N\subseteq M_i$ for some $i$? I have a vector space $V$ over a field of characteristic $0$. If $M_1,\dots,M_k$ are proper subspaces of $V$, and $N$ is a subspace of $V$ such that $N\subseteq M_1\cup\cdots\cup M_k$, how can you tell $N\subseteq M_i$ for some $i$? I was first trying to show it just in the case $N\subseteq M_1\cup M_2$, and hoping to extend it to finitely many $M_i$. If either of the $M_i$ contains the other, the claim follows, so I suppose neither $M_i$ contains the other. In hopes of a contradiction, I suppose $N\not\subseteq M_1$ and $N\not\subseteq M_2$. Picking $x_1\in N\setminus M_1$ and $x_2\in N\setminus M_2$, I'd have $x_1+x_2\in N\subseteq M_1\cup M_2$. The only thing I could conclude was that actually $x_1,x_2\notin M_1$ and $x_1,x_2\notin M_2$, which seems like a dead end. What's the right way to do this? AI: In this problem, you can try pointing out that if $N$ is not a subset of $M_1$, then it must be a subset of $M_2$. I'll give you some hints on this: Since $N \not\subseteq M_1$, there exists some $x \in N \ \backslash \ M_1$. You should notice that, since $N \subseteq M_1 \cup M_2$ (i.e., every element of $N$ must belong to either $M_1$ or $M_2$), and we have $x \in N \ \backslash \ M_1$, so $x \in M_2$. Now, can you try showing that for all $n \in N$, $n$ must belong to $M_2$? Hint: Take their sum, and consider 2 cases. ;) To show it holds not only for $k = 2$, but for any $k \in \mathbb{N} \ \backslash \ \{ 0 \}$, you can try using Proof by Induction. Case $k = 1$ is trivial, and $k = 2$ has already been done, can you continue from here? :)
H: Probability puzzle - the 3 cannons (Apologies if this is the wrong venue to ask such a question, but I don't understand how to arrive at a solution to this math puzzle). Three cannons are fighting each other. Cannon A hits 1/2 of the time. Cannon B hits 1/3 the time. Cannon C hits 1/6 of the time. Each cannon fires at the current "best" cannon. So B and C will start shooting at A, while A will shoot at B, the next best. Cannons die when they get hit. Which cannon has the highest probability of survival? Why? Clarification: B and C will start shooting at A. AI: The problem can be done by straightforward calculation. Initially no one is shooting at C, so at some point A or B will be hit. At that point either A and C survive, B and C survive, or C alone survives (if A and B are hit simultaneously). A and C survive. This occurs when A hits B before B or C hits A. The probability that all three miss on any given turn is $\frac12\cdot\frac23\cdot\frac56=\frac5{18}$, and the probability that A hits and the other two miss on any given turn is also $\frac12\cdot\frac23\cdot\frac56=\frac5{18}$, so this event occurs on turn $n$ with probability $\left(\frac5{18}\right)^n$. The probability that it occurs at all is therefore $$\sum_{n\ge 1}\left(\frac5{18}\right)^n=\frac{\frac5{18}}{1-\frac5{18}}=\frac5{13}\;.$$ B and C survive. This occurs when B or C hits A before A hits B. As before, the probability that all three miss on any given turn is $\frac12\cdot\frac23\cdot\frac56=\frac5{18}$. The probability that neither B nor C hits on a given turn is $\frac23\cdot\frac56=\frac59$, so the probability that A misses and at least one of B and C hits is $\frac12\left(1-\frac59\right)=\frac29$. This event therefore occurs on turn $n$ with probability $$\left(\frac5{18}\right)^{n-1}\left(\frac29\right)\;,$$ and the probability that it occurs at all is $$\frac29\sum_{n\ge 1}\left(\frac5{18}\right)^{n-1}=\frac29\sum_{n\ge 0}\left(\frac5{18}\right)^n=\frac{\frac29}{1-\frac5{18}}=\frac4{13}\;.$$ C alone survives. This occurs when A hits B and, on the same turn, B or C hits A. Since the probability that A hits on the $n$ turn is the same as the probability that A misses, the calculation for this case is identical to that for the previous case, and this case therefore occurs with probability $\frac4{13}$. (Alternatively, of course, one can simply note that the probability must be $1-\frac5{13}-\frac4{13}=\frac4{13}$.) Now consider the outcome of case (1). At this point A and C are shooting at each other. The probability that both miss on any given turn is $\frac12\cdot\frac56=\frac5{12}$, and the probability that A hits and C misses on any given turn is the same, so A becomes the sole survivor on turn $n$ with probability $\left(\frac5{12}\right)^n$. 
Should the battle reach this case, then, the probability that A is the sole survivor is $$\sum_{n\ge 1}\left(\frac5{12}\right)^n=\frac{\frac5{12}}{1-\frac5{12}}=\frac57\;.$$ The probability that A misses and C hits on any given turn is $\frac12\cdot\frac16=\frac1{12}$, so the probability that C becomes the sole survivor on turn $n$ is $\left(\frac5{12}\right)^{n-1}\left(\frac1{12}\right)$, and the overall probability that C becomes the sole survivor, given that the battle reaches this case, is $$\frac1{12}\sum_{n\ge 1}\left(\frac5{12}\right)^{n-1}=\frac1{12}\sum_{n\ge 0}\left(\frac5{12}\right)^n=\frac{\frac1{12}}{1-\frac5{12}}=\frac17\;.$$ Finally, once this case is reached there is a probability of $1-\frac57-\frac17=\frac17$ that A and C will kill each other, leaving no survivors. The analysis for case (2) is entirely similar. The probability that B is the sole survivor is $$\left(\frac13\cdot\frac56\right)\sum_{n\ge 1}\left(\frac23\cdot\frac56\right)^{n-1}=\frac{\frac5{18}}{1-\frac59}=\frac58\;,$$ the probability that C is the sole survivor is $$\left(\frac23\cdot\frac16\right)\sum_{n\ge 1}\left(\frac23\cdot\frac56\right)^{n-1}=\frac{\frac19}{1-\frac59}=\frac14\;,$$ and the probability that there are no survivors is $1-\frac58-\frac14=\frac18$. The overall probabilities of survival are therefore $$\begin{align*} &\text{A:}\quad\frac5{13}\cdot\frac57=\frac{25}{91}\\\\ &\text{B:}\quad\frac4{13}\cdot\frac58=\frac5{26}\\\\ &\text{C:}\quad\frac5{13}\cdot\frac17+\frac4{13}\cdot\frac14+\frac4{13}=\frac{40}{91}\;. \end{align*}$$ As a quick check, the missing $\frac{17}{182}$ should be the probability that none of them survives; since this is $$\frac5{13}\cdot\frac17+\frac4{13}\cdot\frac18\;,$$ which is indeed $\frac{17}{182}$.
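A Monte Carlo sketch in Python (simultaneous volleys, each cannon aiming at the most accurate opposing survivor) agrees with these fractions:

```python
import random

ACC = {'A': 1/2, 'B': 1/3, 'C': 1/6}

def battle():
    alive = set(ACC)
    while len(alive) > 1:
        hits = set()
        for shooter in alive:
            target = max(alive - {shooter}, key=ACC.get)
            if random.random() < ACC[shooter]:
                hits.add(target)
        alive -= hits  # all shots in a volley land simultaneously
    return next(iter(alive), 'none')

trials = 200_000
tally = {}
for _ in range(trials):
    outcome = battle()
    tally[outcome] = tally.get(outcome, 0) + 1
print({k: round(v / trials, 3) for k, v in sorted(tally.items())})
# expect about A: 0.275 (25/91), B: 0.192 (5/26), C: 0.440 (40/91), none: 0.093 (17/182)
```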
H: Confusion about the usage of points vs. vectors As far as definitions go, I understand the difference between a vector and a point. A vector can be translated and still be the same vector, whereas a point is fixed. But I would like some clarification on the usage of vectors and the usage of points, because it seems like in many cases they are used interchangeably. For example: It has always been my understanding that addition is not defined for two points. But in this question, two points are being added together in this equation: $\overline{C}= \lbrace tx+(1-t)x': x\in C, x' \in C', t\in [0,1]\rbrace$ Sometimes $\mathbb{R}^n$ is used to denote the set of $n$ dimensional vectors, and sometimes it denotes the set of points in $n$ dimensional space. In vector calculus, it is often said that a function with multiple inputs takes a vector as an input, but I have rarely seen a function written as $f(\vec{v})$. Even though I understand that what it means to say is, the vector that originates at the origin, to me, it doesn't seem entirely correct to say that the input is a vector without explicitly saying that the vector's tail is at the origin. Can anyone clarify these points of confusion? Are point and vector interchangeable in these cases? Also, is there a notation for "converting" one to the other? E.g. how to "convert" $\langle{x,y,z}\rangle$ to $(x,y,z)$ or vice versa? AI: I'll try to convince you that they are geometrically quite obviously different, but when it comes to naming them, they begin to look alike :) Geometrically you probably already have a good picture of what a point is: it's just the primitive notion of a point you have in geometry. That is, a single dimensionless location in space. A vector should be thought of as having two qualities: a ray that has direction and magnitude. In basic vector algebra in $\Bbb R^n$, we learn that such a ray can slide all around $\Bbb R^n$, and as long as you aren't changing the direction or the length of the ray, then it is still the same vector. Now when it comes to naming these two things, they start to look alike! With Cartesian coordinates, points in $\Bbb R^n$ are labeled by their projections to the axes, and that creates a list of real numbers. Similarly, when we go about naming vectors, we have this convention of sliding the vector so that it is being emitted from the origin, and then we check to see what point is on its arrowhead. The vector is named after this point. So in both cases, a similar list of real numbers is used to identify the object. Since this is the case, it's common to just start referring to any ordered $n$-tuple of things from a field (like $\Bbb R$) as a "vector," even if we aren't thinking of it as a ray in that application. One example is that of vector fields. Since these are functions of position, the inputs they take are points of $\Bbb R^n$ (which look like an ordered $n$-tuple). The outputs are vectors (which again look like an ordered $n$-tuple), but we are interpreting these as the vectors they represent, slid over from the origin to the point we're at. You can, of course, really have vector inputs! For instance, the length of a vector in $\Bbb R^n$ creates a function from vectors into $\Bbb R$. Of course, the same function could be reinterpreted as the distance-to-zero function on points of $\Bbb R^n$. So, the difference is all in how you are interpreting that particular list of numbers. For #1 in your post, you are probably thinking of it as the line segment between points $x$ and $x'$.
The addition that's going on is vector addition though. Drawing the vectors that $x$ and $x'$ represent, you see you have two vectors extending from the origin to these two points. For any two vectors $v,w$, $v-w$ yields the vector which fits between the two tips of $w$ and $v$, and points to the tip of $v$. So, you can see that $x-x'$ has the point $x$ on its tip. What does the $t$ contribute? If you multiply out $tx+(1-t)x'=x'+t(x-x')$, you can see that the vector $x-x'$ is being scaled by $t$ to something shorter, and then is being concatenated onto the tip of $x'$. The tip of this arrow gives another point on the segment. Ranging over all $t$ between 0 and 1, you get vectors pointing to all points on that segment.
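To make the point-versus-vector bookkeeping concrete, here is a small NumPy sketch (purely illustrative; the arrays stand in for both points and vectors, which is exactly the conflation discussed above):

```python
import numpy as np

x, xp = np.array([3.0, 1.0]), np.array([1.0, 2.0])  # two points x and x'

# x - xp is best read as a *vector*: the displacement from x' to x
v = x - xp

# x' + t*(x - x') is a *point* on the segment, for t in [0, 1]
for t in (0.0, 0.25, 0.5, 1.0):
    p = xp + t * v
    print(t, p)   # t=0 gives x', t=1 gives x
```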
H: A question about the validity of a notation I am writing a paper and using the following notation. Do you think that it is mathematically a reasonable notation? $$ \hat{{\cal{P}}}_{i}=\{\hat{Q}: \hat{Q}_i|G_i[q_1/q_0<t]\stackrel{i=1}{\underset{i=0}{\gtreqqless}}Q_i|G_i[q_1/q_0<t] \}, $$ Thank you very much. EDIT: so I should do it this way? $$ \hat{{\cal{P}}}_{0}=\{\hat{Q}: \hat{Q}_0|G_0[q_1/q_0<t]\leq Q_0|G_0[q_1/q_0<t]\}, $$ and $$ \hat{{\cal{P}}}_{1}=\{\hat{Q}: \hat{Q}_1|G_1[q_1/q_0<t]\geq Q_1|G_1[q_1/q_0<t]\}, $$ AI: Apart from the typesetting issues (e.g. the outer braces and hats are too small), the object in the middle is difficult to parse and takes up too much room. Some alternatives: If these are nonzero numbers, divide one by the other, name the result, and compactly say that the quotient is bigger (resp. less) than 1, if $i=0$ (resp. $i=1$). This has the advantage of emphasizing the close similarities between the two sides. If these are possibly zero numbers, subtract one from the other and continue as above. Since the only values you're considering are $i=0,1$, then eliminate $i$ altogether and simply define the two things you want. Followup: Try $H_i=G_i[q_1/q_0<t]$; that will make your two expressions much shorter.
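For instance, following the followup's suggestion, the shortened version might be typeset roughly as below (a sketch only; the macro name `\Hcond` is my own invention, and `\mid` is used where the original had `|`):

```latex
\newcommand{\Hcond}[1]{G_{#1}[q_1/q_0 < t]} % shorthand for the repeated condition

\[
\hat{\mathcal{P}}_0 = \{\hat{Q} : \hat{Q}_0 \mid \Hcond{0} \le Q_0 \mid \Hcond{0}\},
\qquad
\hat{\mathcal{P}}_1 = \{\hat{Q} : \hat{Q}_1 \mid \Hcond{1} \ge Q_1 \mid \Hcond{1}\}.
\]
```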
H: Verify the real solution of a linear system of differential equations I'm trying to solve $Y' = AY$ where $A= \left[ \begin{array}{ c c } -2 & 6 \\ -3 & 4 \end{array} \right]$ I have found the eigenvalues $1 \pm 3i$ with eigenvector for $1+3i$: $v = \left[ \begin{array}{ c c } 1-i \\ 1 \end{array} \right]$ Which seems to be correct by testing $Av = (1+3i)v$, but when I try to write it as a real solution I don't seem to get the right answer. $$\left[ \begin{array}{ c c } 1-i \\ 1 \end{array} \right](\cos 3t + i\sin 3t) = \left[ \begin{array}{ c c } \cos 3t + \sin 3t - i\cos 3t + i\sin 3t \\ \cos 3t + i\sin 3t \end{array} \right]$$ $$v_1 = \left[ \begin{array}{ c c } \cos 3t + \sin 3t \\ \cos 3t \end{array} \right],\ v_2 = \left[ \begin{array}{ c c } \cos 3t + \sin 3t \\ \sin 3t \end{array} \right]$$ If I now verify by $v_1' = Av_1$ $$\left[ \begin{array}{ c c } 3(\cos 3t - \sin 3t) \\ -3\sin 3t \end{array} \right] \neq \left[ \begin{array}{ c c } 4\cos 3t - 2\sin 3t \\ \cos 3t - 3\sin 3t \end{array} \right]$$ AI: You are missing an exponential term; we have: $e^{\lambda t}v_1 = e^{(1+3i)t}\begin{bmatrix}1-i\\1\end{bmatrix} = e^te^{3it}\begin{bmatrix}1-i\\1\end{bmatrix} = e^t(\cos 3t + i \sin 3t)\begin{bmatrix}1-i\\1\end{bmatrix} = \begin{bmatrix}e^t(\sin 3t + \cos 3t+i (\sin 3 t-\cos 3 t))\\ e^t(\cos 3t + i \sin 3t) \end{bmatrix} $ So, our solution can be written as (because we know that the real and imaginary parts are both independent solutions): $$Y(t) = c_1 e^t\begin{bmatrix}\sin 3t + \cos 3t\\ \cos 3t \end{bmatrix}+ c_2e^t\begin{bmatrix}\sin 3t - \cos 3t\\ \sin 3t \end{bmatrix}$$ (Note also that the first component of the imaginary part is $\sin 3t - \cos 3t$, not $\cos 3t + \sin 3t$, so the $v_2$ you wrote down contains a second error.)
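A quick mechanical check of the corrected solution (a sketch using SymPy; purely illustrative):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[-2, 6], [-3, 4]])

# The two real solutions extracted from e^{(1+3i)t} (1-i, 1)^T
Y1 = sp.exp(t) * sp.Matrix([sp.sin(3*t) + sp.cos(3*t), sp.cos(3*t)])
Y2 = sp.exp(t) * sp.Matrix([sp.sin(3*t) - sp.cos(3*t), sp.sin(3*t)])

for Y in (Y1, Y2):
    # residual Y' - A*Y should simplify to the zero vector
    assert sp.simplify(Y.diff(t) - A * Y) == sp.zeros(2, 1)
print("both real solutions satisfy Y' = AY")
```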
H: A confusion on Axiom of infinity I'm currently working "the elements of advanced mathematics" by steven g. krantz, currently on Chapter 5. I came to "Axiom of Infinity" which roughly states: $$\exists A \; s.t. \; \phi \in A \; and \; \forall a\in A, a\cup \left \{ a \right \} \in A$$ Now, doesn't this mean: $$\exists B=\left \{ \phi,\left \{ \phi \right \},\left \{ \phi,\left \{ \phi \right \} \right \},\left \{ \phi,\left \{ \phi \right \},\left \{ \phi,\left \{ \phi \right \} \right \} \right \},... \right \}$$ which results in: $$B \in B$$ ??? (the 'last' element of B will be B itself...) did I get this wrong, or are there some "premises" that I'm missing? Thanks :D p.s: Any good source for learning ZFC??? the book seems to fly off in a hurry and doesn't explain much, and my google-fu isn't giving me any "ZFC-for-dummies" OK, so is this a good summary of the answers? Notice that B[0] = $\phi$ B[1] = $\left \{ \phi \right \}$ B[2] = $\left \{ \phi,\left \{ \phi \right \} \right \}$ and so on (thus, for B[n], as n++, B[n] -> B) BUT, similar to "limits doesn't mean the value actually approaches lim", B[n] never reaches B AI: But there is no last element to $B$. Indeed the next "step" would be $B\cup\{B\}$, but that set is not equal to $B$ anymore. More to the point: every element of $B$ in your list is a finite set (the $n$-th one has exactly $n$ elements), while $B$ itself is infinite, so $B$ cannot occur among its own elements. Just as with limits, the stages $B[n]$ approach $B$ without any of them ever being $B$.
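As a toy illustration of why no stage ever "reaches" $B$, here is a Python sketch of the first few von Neumann naturals built with the successor operation $a \mapsto a\cup\{a\}$ (frozensets stand in for sets; this is necessarily a finite illustration, since $B$ itself is infinite):

```python
def succ(a):
    """The successor a ∪ {a}."""
    return a | frozenset({a})

stages = [frozenset()]          # B[0] = the empty set
for _ in range(4):
    stages.append(succ(stages[-1]))

for n, s in enumerate(stages):
    print(n, len(s))            # B[n] has exactly n elements

# Each stage is a *member* of the next stage, but no stage
# contains itself as an element:
assert all(s not in s for s in stages)
```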
H: Question about inner product. Let $V=C([-1, 1])$ and $$\langle f, g\rangle=\int_{-1}^1 f(x)g(x)dx$$ Let $W=\{f \in V \mid f\text{ is even}\}$. Find $W^\perp$. Progress: I know that every odd function belongs to $W^\perp$ and I suspect $W^\perp=\{f \in V \mid f\text{ is odd}\}$. AI: Let $g$ be an odd function. Then $fg$ is odd for all even functions $f$ in $C([-1, 1])$, therefore $\langle f, g\rangle=0$ for all $f \in W$, so $g \in W^\perp$. Now suppose $g$ isn't odd. Let: $$u(x)=\frac{g(x)+g(-x)}{2}$$ $$v(x)=\frac{g(x)-g(-x)}{2}$$ It's clear that $g=u+v$, $u$ is even and $v$ is odd. Since $g$ isn't odd, $u\neq0$. Finally: $$\langle u, g\rangle=\int_{-1}^1 u(x)g(x)dx=\int_{-1}^1 u(x)v(x)dx+\int_{-1}^1 u(x)u(x)dx=0+\int_{-1}^1 u(x)^2dx>0$$ Since $u$ is even and continuous, $u \in W$, so $\langle u, g\rangle>0$ shows that $g \notin W^\perp$. We have shown that $g \in W^\perp \Leftrightarrow g \text{ is odd}$. Thanks to: vadim123
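A quick numeric sanity check of the even/odd decomposition (purely illustrative, with $g(x)=x^3+x^2$ as an arbitrary function that is neither even nor odd):

```python
import sympy as sp

x = sp.symbols('x')
g = x**3 + x**2                      # neither even nor odd

u = (g + g.subs(x, -x)) / 2         # even part (here x**2)
v = (g - g.subs(x, -x)) / 2         # odd part  (here x**3)

assert sp.expand(u + v - g) == 0
print(sp.integrate(u * v, (x, -1, 1)))   # 0, since even*odd is odd
print(sp.integrate(u * g, (x, -1, 1)))   # 2/5 > 0, since u != 0
```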
H: Why Gaussian Elimination only works over field? When I was solving a system of linear congruences ($n$ variables, $n$ equations), like this: $AX \equiv b \pmod p$ I was told that ordinary Gaussian Elimination works if $p$ is prime. And I figured out that when $p$ is prime, integers $\pmod p$ form a field, otherwise it doesn't form a field, but a ring. Here comes my problem: why does Gaussian Elimination not work over a ring? AFAIK, the difference between a ring and a field is that nonzero elements in a field are multiplicatively invertible, but does it have something to do with the applicability of Gaussian Elimination? If yes, how? AI: You need to be able to invert the pivot element on the main diagonal (as long as it is non-zero). If $p$ is not prime, you may not be able to do this. For example, let $p=6$; then $2,3$ have no inverses. So try $$\begin {pmatrix} 2&0\\0&3 \end {pmatrix}\begin {pmatrix} X_1 \\ X_2 \end {pmatrix}=\begin {pmatrix} 1 \\ 1 \end {pmatrix}\pmod 6$$ If you complain that the Gaussian elimination is already done in this example, change the $0$'s to $1$'s. If $p$ is prime you are guaranteed an inverse of any non-zero element and the equation will be soluble. Do it $\pmod 7$ and you will succeed.
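For concreteness, here is a minimal sketch of Gaussian elimination over $\mathbb{Z}_p$ for prime $p$. `pow(a, -1, p)` (Python 3.8+) computes the modular inverse, and that inversion is exactly the step that can fail when $p$ is composite:

```python
def solve_mod_p(A, b, p):
    """Solve A x = b (mod p), assuming p is prime and A is invertible mod p."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]      # augmented matrix
    for col in range(n):
        # find a row with a nonzero pivot (nonzero => invertible, p prime)
        piv = next(r for r in range(col, n) if M[r][col] % p != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)                 # needs gcd(pivot, p) = 1
        M[col] = [v * inv % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] % p != 0:
                f = M[r][col]
                M[r] = [(v - f * w) % p for v, w in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

print(solve_mod_p([[2, 1], [1, 3]], [1, 1], 7))  # [6, 3] works mod 7
# whereas pow(2, -1, 6) raises ValueError: 2 is not invertible mod 6
```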
H: Let $f(x)= \frac {1}{\sqrt{|[|x|-1]|-5}}$ where $[ .]$ is greatest integer function, Find domain of $f(x)$ ?? Problem: Let $$f(x)= \dfrac {1}{\sqrt{|\bigl[|x|-1\bigr]|-5}}$$ where $\bigl[.\bigr]$ is the greatest integer function. Find the domain of $f(x)$. Solution: The function $f$ is defined for $|\bigl[|x|-1\bigr]|-5>0$. So $$|\bigl[|x|-1\bigr]|>5$$ $$5>\bigl[|x|-1\bigr]>-5$$ I don't know whether I am doing this right or wrong. AI: You have it all correct until the very end! You're right to restrict $|[|x|-1]| - 5 > 0$. However, at the end, you conclude that $5 > [|x|-1] > -5$, when in fact $|[|x|-1]| > 5$ implies that $[|x|-1] > 5$ or $[|x|-1] < -5$. From here, we note that $[|x| - 1] = [|x|] - 1$, so we want to see where $[|x|] > 6$ and where $[|x|] < -4$. Since $|x| \geq 0$, the latter is clearly impossible. We thus see that we must have $[|x|] > 6$, i.e. $[|x|] \geq 7$, i.e. $|x| \geq 7$, which means that $x \geq 7$ or $x \leq -7$.
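A quick brute-force check of the claimed domain (purely illustrative):

```python
import math

def defined(x):
    # f is defined exactly when the quantity under the root is positive
    return abs(math.floor(abs(x) - 1)) - 5 > 0

# sample points around the claimed boundary |x| = 7
for x in (-7.5, -7.0, -6.999, 0.0, 6.999, 7.0, 7.5):
    print(x, defined(x))
# prints True exactly for |x| >= 7
```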
H: What is the kernel of the evaluation homomorphism? I'm studying Sharp's Steps in Commutative Algebra, and I need a hint how to proceed with this exercise in the page 26: First of all, I didn't understand even the notation, what did the author mean by $(X_1-\alpha_1,...,X_n-\alpha_n)$? Thanks a lot. AI: As for the notation: $(X_1-\alpha_1,\ldots,X_n-\alpha_n)$ denotes the ideal generated by the polynomials $X_1-\alpha_1,\ldots,X_n-\alpha_n$, i.e. the set of all combinations $f_1\cdot(X_1-\alpha_1)+\ldots+f_n\cdot(X_n-\alpha_n)$ with polynomial coefficients $f_i$. Basically the problem can be translated as "Show that if a polynomial in $n$ variables has $(\alpha_1,\ldots,\alpha_n)$ as a root, then it is of the form $f_1\cdot(X_1-\alpha_1)+\ldots +f_n\cdot (X_n-\alpha_n)$ for some polynomials $f_1,\ldots,f_n$." Can you do this for a polynomial in 1 or 2 variables?
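To see the claim concretely in two variables, one can use SymPy's multivariate polynomial division (a sketch; the polynomial and the point are arbitrary choices of mine, and `reduced` returns quotients plus a remainder with respect to the given divisors):

```python
import sympy as sp

x, y = sp.symbols('x y')
a1, a2 = 1, 2                       # the point (alpha_1, alpha_2)
f = x**2 * y + 3*x - y**2 + 2       # an arbitrary example polynomial

# g vanishes at (a1, a2), so it should lie in the ideal (x - a1, y - a2)
g = f - f.subs({x: a1, y: a2})
(q1, q2), r = sp.reduced(g, [x - a1, y - a2])

print(q1, q2, r)                    # remainder r is 0
assert sp.expand(q1*(x - a1) + q2*(y - a2) - g) == 0
```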
H: Determining whether or not a group has an element of a specific order If $|G| = 55$, must it have an element of order $5$ and/or $11$? I'm not quite sure how to determine this. I know it could be possible by Lagrange's Theorem, but I'm stuck otherwise. Any help would be appreciated. Edit: I haven't learned material about Cauchy's or Sylow Theorems yet. So I'm trying to prove this by very elementary facts. AI: Yes. Lagrange's theorem implies that every non-identity element has order 5, 11, or 55. If there is an element of order $55$, call it $g$. Then $g^{11}$ is an element of order $5$ and $g^5$ is an element of order $11$. So let's suppose that there are no elements of order $55$. If there are elements of order $5$ and $11$, then we are done. So suppose there are only elements of order $11$. Such an element generates a cyclic subgroup of $10$ non-identity elements. Moreover, any other element in the subgroup generates the same subgroup, and two distinct subgroups of order $11$ can only meet in the identity (their intersection is a subgroup of both, so its order divides $11$). Now pick another element in $G$ not in that cyclic subgroup. Each time we do this, we get $10$ more non-identity elements. But then the group must have $11, 21, 31, 41, 51, 61,\ldots$ elements. In particular, such a group cannot have $55$ elements. A similar argument shows that a group with elements only of order $5$ cannot have $55$ elements. Unwinding all of this, we see that we reach a contradiction unless there exist elements of order $5$ and $11$ in the group.
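As a sanity check in one concrete case, we can count element orders in the cyclic group of order 55 (just one example of such a group) using SymPy's permutation-group machinery (illustrative only):

```python
from collections import Counter
from sympy.combinatorics.named_groups import CyclicGroup

G = CyclicGroup(55)
orders = Counter(g.order() for g in G.elements)
print(orders)
# counts: 1 element of order 1, 4 of order 5, 10 of order 11, 40 of order 55
```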
H: Prove that if $a b c=1$, $a^2+b^2+c^2\ge 3$. (for $a,b,c\in\mathbb{R}$) I can do this problem using calculus minimization techniques, using Lagrange multipliers to find the equations $a^2=b^2=c^2$, so with $abc=1$, $a$, $b$, and $c$ are either $-1$ or $1$ (making sure you don't end up with $abc$ negative). So if the minimum distances are at those points, then $a^2+b^2+c^2=3$ is the closest distance, so all other solutions are further away. I just pieced together (while writing this) that there's another solution: The arithmetic-geometric inequality gives $3(abc)^{1/3}\le a+b+c$, and by the inequality of this question, we have $9(abc)^{2/3}\le (a+b+c)^2\le 3(a^2+b^2+c^2)$, so since $abc=1$, we have $3\le a^2+b^2+c^2$. But, is it possible to prove this using other inequality tricks? It's just that I had no idea about the inequality in the link I gave, so I'm uncomfortable using it. AI: By AM/GM (which applies because $a^2,b^2,c^2\ge 0$), we have $$\frac{a^2+b^2+c^2}{3}\ge (a^2b^2c^2)^{1/3}=\left((abc)^2\right)^{1/3}=1.$$
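A quick randomized check of the inequality (purely illustrative):

```python
import random

for _ in range(10_000):
    a = random.uniform(-5, 5) or 1.0   # guard against the measure-zero case a = 0
    b = random.uniform(-5, 5) or 1.0
    c = 1.0 / (a * b)                  # enforce the constraint abc = 1
    # small tolerance for floating-point roundoff near the minimum a=b=c=1
    assert a*a + b*b + c*c >= 3 - 1e-9
print("inequality held on all samples")
```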
H: Definition of measurable functions defined w.r.t. topology (Big) Rudin's "Real and Complex Analysis" defines (definition 1.3) a measurable function from a measurable space into a topological space as one that has the property that the inverse image of every open set in the range space is measurable in the domain space. Is this definition somehow more general than the one between two measurable spaces that pulls back measurable sets to measurable sets? My understanding is that topologies and sigma-algebras do not necessarily coincide, so I'm not sure why Rudin is using this definition for measurable functions. AI: This definition isn't any more or less general; it's just the way to define a measurable function from a measurable space into a topological space. In fact it agrees with the two-measurable-spaces definition once you equip the target with its Borel $\sigma$-algebra (the one generated by the open sets): the collection of subsets of the target whose preimages are measurable is itself a $\sigma$-algebra, so if it contains the open sets it contains all Borel sets. Until you've given it a topology, a measurable space is just that, and vice versa: until you define a sigma algebra, a topological space is not a measurable space. Rudin uses this definition because he needs topological structure (and not measurable structure) on his target space. Also, it's always good practice to go through each theorem and figure out which structures were necessary in the proof.