H: Dirichlet problem on $[0,\pi] \times [0, 1]$ Let $\Omega := [0, \pi] \times [0,1]$. We are searching for a function $u$ on $\Omega$ s.t. $$ \Delta u =0 $$ $$ u(x,0) = f_0(x), \quad u(x,1) = f_1(x), \quad u(0,y) = u(\pi,y) = 0 $$ with $$ f_0(x) = \sum_{k=1}^\infty A_k \sin kx, \quad f_1(x) = \sum_{k=1}^\infty B_k \sin kx $$ If I use separation of variables, say $u(x,y) = f(x)g(y)$, I get $$ f''(x)+\lambda f(x) = 0 , \quad g''(y)-\lambda g(y) = 0 $$ with $f(0) = f(\pi) = 0$, where I use that $f,g \neq 0$. $\lambda$ is some constant. How can I proceed? Thanks in advance. AI: Hint: what is the most general solution of $$f''(x)=-\lambda f(x)$$ and $$g''(y)=\lambda g(y)?$$ You need to consider linear combinations of exponentials. Such exponentials have real or complex exponents depending on the sign of $\lambda$, i.e. $\lambda >0$, $\lambda<0$ (not necessarily in this order!). Also check quickly what happens if $\lambda=0$. To determine which choice of sign for $\lambda$ is the correct one for your problem, apply the boundary conditions you wrote for $f$ and $g$ at $0$ and $\pi$. Once you are there, apply superposition and match the boundary conditions with the Fourier series. You are done.
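For completeness, here is a sketch of where the hint leads, assuming the domain is $[0,\pi]\times[0,1]$ as the boundary conditions indicate. The condition $f(0)=f(\pi)=0$ forces $\lambda=k^2$ with $f(x)=\sin kx$, so $g$ is a combination of $\sinh(ky)$ and $\cosh(ky)$. Superposing and matching the Fourier data at $y=0$ and $y=1$ gives $$ u(x,y)=\sum_{k=1}^\infty \sin kx\left[A_k\,\frac{\sinh\big(k(1-y)\big)}{\sinh k}+B_k\,\frac{\sinh(ky)}{\sinh k}\right], $$ where each term is harmonic and the two fractions are $1$ and $0$ at $y=0$, and $0$ and $1$ at $y=1$.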
H: Showing $x^3$ is not uniformly continuous on $\mathbb{R}$ Referring to the original source http://math.stanford.edu/~ksound/Math171S10/Hw8Sol_171.pdf Prove that $f(x) = x^3$ is not uniformly continuous on $\Bbb R$. I want to use a more elementary method, no mention of metric spaces. Their selection of $\delta$ confuses me, so I want to present a more "elementary" method. I am unsure if it is correct. Here I picked $|a| > \dfrac{\epsilon}{3\delta}$ and $x > a - \delta$ Thus $$\begin{align} |x^3 - a^3| &= |x - a||x^2 + ax + a^2| \\&>|x^2+ax +a^2| \\&> |(a-\delta)^2+a(a - \delta) + a^2|\\&=|3a^2-3a\delta+\delta^2| \\&> 3\delta|a| \end{align}$$ AI: Their solution doesn't really use any ideas from metric spaces. Here is an elaboration. Suppose $f(x)=x^3$ were uniformly continuous. Then for any $\epsilon$, there is a $\delta$ that works for all $a$. Let's take $\epsilon=1$; we are now given $\delta$ such that for all $a$ and for all $x$ with $|x-a|<\delta$, we must have $|f(x)-f(a)|<\epsilon$. Choose $a$ large enough so that $\frac{3\delta a^2}{2}>1$; for example $a=\sqrt{\frac{2}{3\delta}}+1$ works. Now take $x=a+\frac{\delta}{2}$; this satisfies $|x-a|=\frac{\delta}{2}<\delta$. Hence we SHOULD have $|f(x)-f(a)|<\epsilon=1$. Instead we have $|f(x)-f(a)|=|f(a+\frac{\delta}{2})-f(a)|=|(a+\frac{\delta}{2})^3-a^3|=\left|3a^2\frac{\delta}{2}+3a\frac{\delta^2}{4}+\frac{\delta^3}{8}\right|\ge \left|3a^2\frac{\delta}{2}\right|>1$. This is a contradiction, and hence $f(x)$ is not uniformly continuous.
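A quick numerical illustration of why no single $\delta$ can work for all $a$ (a sketch; the names are mine):

```python
# For f(x) = x^3 and a fixed delta, the gap |f(a + delta/2) - f(a)| is roughly
# (3/2) * a^2 * delta, which grows without bound as a grows: exactly the
# failure of uniform continuity exploited in the proof above.
def gap(a, delta):
    return abs((a + delta / 2) ** 3 - a ** 3)

delta = 0.01
for a in [1, 10, 100, 1000]:
    print(a, gap(a, delta))
```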
H: Question on the relationships of two and three manifolds The Question is: Let $W_c = \{ ( x,y,z,w) \in R^4 | xyz = c \}$ and $Y_c = \{ ( x,y,z,w) \in R^4 | xzw = c \}$. For what real numbers $c$ is $Y_c$ a three-manifold? For what pairs $(c_1,c_2)$ is $W_{c_1} \cap Y_{c_2}$ a two-manifold? I think that we can say for the first half that $Y_c$ is a three-manifold when $c \neq 0$. This isn't exactly fleshed out, but I can imagine what $xyz = c$ looks like for $c \neq 0$ with none of the variables being $0$, and locally this basically is a piece of $\mathbb{R}^3$. But my intuition is really lacking on the intersection of manifolds, and I want to say something like when both $c_1$ and $c_2$ are non-zero, as their $x$ and $z$ variables are the ones we need to intersect to form a manifold. Is this the right way of thinking about this problem, and what gaps could I fill in to explain this more concisely? Thanks for your help! AI: You need to make sure that $(c_1,c_2)$ is a regular value of $f\colon\mathbb R^4\to\mathbb R^2$ given by $f(x,y,z,w)=(xyz,xzw)$. That is, you need to check that $\text{rank}(Df)=2$ at every point $(x,y,z,w)$ with $f(x,y,z,w)=(c_1,c_2)$. This is equivalent to checking that the gradient vectors of the components are everywhere non-parallel on the level set.
H: Matrices and vectors I am wondering if you can write a $2\times 2$ matrix $$\left(\begin{array}{cc} a & b \\ c & d\end{array}\right)$$ as a single vector $$\left(\begin{array}{c} a \\ b \\ c \\ d\end{array}\right).$$ AI: I think you can do this in two ways: Row-major Order Column-major Order They both allow you to write a matrix in a linear/sequential manner. In the first one you do it as you explained in the example, by writing the first row first, then the second one, and so on. In the latter one, you do it by writing the first column first, then the second, and so on. Edit: A useful link might be: http://en.wikipedia.org/wiki/Row-major_order
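For illustration, NumPy exposes both conventions directly (a small sketch, not part of the original answer):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])

print(A.flatten(order="C"))  # row-major order:    [1 2 3 4]
print(A.flatten(order="F"))  # column-major order: [1 3 2 4]
```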
H: How can I measure the distance between two cities on a map? I know that the distance between Moscow and London is about 2,519 km, and the distance between Moscow and London on my map is about 30.81 cm, so the scale of my map is 1 cm = 81.865 km. But when I tried to measure the distance between two other cities, for example London and Berlin, using my map scale, the result was wrong. I think that is because of the sphericity of the Earth? Now I want to know: how can I measure the distance between two cities on a map, and how can I find the scale of a map? AI: The calculation is somewhat complex. A simplification is to assume that the Earth is a sphere and find the great-circle distance. A more complex calculation instead uses an oblate spheroid as a closer approximation.
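For the great-circle simplification, a common concrete recipe is the haversine formula (a hedged sketch assuming a spherical Earth of radius 6371 km; the coordinates below are approximate):

```python
from math import radians, sin, cos, asin, sqrt

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine formula: great-circle distance on a spherical Earth."""
    phi1, lam1, phi2, lam2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((phi2 - phi1) / 2) ** 2 + cos(phi1) * cos(phi2) * sin((lam2 - lam1) / 2) ** 2
    return 2 * radius_km * asin(sqrt(h))

# Approximate coordinates for Moscow and London:
print(great_circle_km(55.75, 37.62, 51.51, -0.13))  # roughly 2500 km
```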
H: How to find the probability of damaged goods transported by air? A company transports goods in three ways: air, sea and road. Based on the past records, of the total transported goods, 10% were by air, 30% by sea and 60% by road. It is found that 1% of goods transported by air were damaged, 5% of the goods transported by sea were damaged and 10% of the goods transported by road were damaged. If a good is found to be damaged, what is the probability that it was transported by air? AI: An idea: form a probability tree, with three first branches $\,A, S, R\,$ (air, sea, road), with probabilities $\,0.1\,,\,0.3\,,\,0.6\,$ . From each of these branches two second branches grow: $\,D,G\,$ (damaged, good/not damaged), and write down the probabilities. So the probability for a damaged good is $$\color{red}{0.1\cdot0.01}+0.3\cdot0.05+0.6\cdot0.1=0.076$$ And now calculate the conditional probability $$P(A\mid D)=\frac{P(A\cap D)}{P(D)}=\frac{\color{red}{0.001}}{0.076}=0.013158$$
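The same tree, written out in code (a minimal sketch):

```python
# Law of total probability + Bayes' rule, mirroring the probability tree above.
p_mode = {"air": 0.10, "sea": 0.30, "road": 0.60}
p_damage_given = {"air": 0.01, "sea": 0.05, "road": 0.10}

p_damage = sum(p_mode[m] * p_damage_given[m] for m in p_mode)       # 0.076
p_air_given_damage = p_mode["air"] * p_damage_given["air"] / p_damage
print(p_damage, p_air_given_damage)                                 # 0.076  ~0.0132
```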
H: Does this right adjoint of a geometric morphism preserve directed colimits? Let $E$ be a sheaf topos $E=\operatorname{Shv}(C)$ and $\{x_i:\operatorname{Sets}\to E\}$ be a set of geometric morphisms $x_i:Sets\to E$ such that the morphism $$ x^*\colon E\to \prod_i \operatorname{Sets} $$ induced by the left adjoints $(x_i)^*$ is faithful (this is saying that $\{x_i\}$ consists of enough points). The morphism $x^*$ has a right adjoint denoted by $x_*$. Does $x_*$ preserve directed colimits? If not, what is a counterexample? AI: No, and there is no reason to expect this anyway. For example, let $\mathbf{sSet}$ be the category of simplicial sets – this is a presheaf topos by definition and so has enough points: just take $x_i : \mathbf{Set} \to \mathbf{sSet}$ to be the unique (essential) geometric morphism whose inverse image functor $x_i^* : \mathbf{sSet} \to \mathbf{Set}$ sends a simplicial set $X$ to the set $X_i$. Thus we have a geometric morphism $x : \mathbf{sSet} \to \mathbf{Set}^{\mathbb{N}}$. It is well-known that $\mathbf{Set}^\mathbb{N}$ and $\mathbf{sSet}$ are locally finitely presentable (l.f.p.) categories, and it is also not hard to check that a right adjoint between l.f.p. categories preserves filtered colimits if and only if the left adjoint sends compact (a.k.a. finitely presentable) objects to compact objects. Now, $\Delta^1$ is compact as a simplicial set, yet its inverse image in $\mathbf{Set}^\mathbb{N}$ is not compact. (An object $Y$ in $\mathbf{Set}^\mathbb{N}$ is compact if and only if $\coprod_{n \in \mathbb{N}} Y_n$ is a finite set.) Hence $x_* : \mathbf{Set}^\mathbb{N} \to \mathbf{sSet}$ does not preserve filtered colimits, and hence does not preserve directed colimits. That said, it is not hard to check that $x^*$ sends compact objects in $\mathbf{sSet}$ to $\aleph_1$-compact objects in $\mathbf{Set}^\mathbb{N}$, so it follows that $x_* : \mathbf{Set}^\mathbb{N} \to \mathbf{sSet}$ preserves $\aleph_1$-filtered colimits. More generally, for any Grothendieck toposes $\mathcal{E}$ and $\mathcal{F}$ and any geometric morphism $f : \mathcal{E} \to \mathcal{F}$, there exists a regular cardinal $\kappa$ such that $f_* : \mathcal{E} \to \mathcal{F}$ preserves $\kappa$-filtered colimits. This is because Grothendieck toposes are always locally presentable (but not necessarily l.f.p.) and the accessible adjoint functor theorem ensures that any right adjoint between locally presentable categories is accessible (but not necessarily $\aleph_0$-accessible).
H: How can I solve this second-order differential equation $y^{\prime\prime}=a\sqrt{1+y^{\prime}}$? My problem is to solve the given equation: $y^{\prime\prime}=a\sqrt{1+y^{\prime}}$ I don't know how to handle $a$, and besides that, it is a second-order equation, so I thought I would first have to find out what $y^{\prime}$ looks like. But I don't know how: $$y^{\prime}=\int a\sqrt{1+y^{\prime}}\ dx$$ (or $dy$?) $$y^{\prime}=a\cdot \int \sqrt{1+y^{\prime}}\ dx$$ I could use some help solving this equation. AI: Denote $u=y'$, then $$u'=a\sqrt{1+u}\;\Longrightarrow\; \int\frac{du}{\sqrt{1+u}}=\int adx.$$ This in turn implies $$2\sqrt{1+u}=ax+C\;\Longrightarrow\; u=\frac{(ax+C)^2}{4}-1.$$ Recalling that $u=y'$ and integrating once more, we find $$y=\frac{(ax+C)^3}{12a}-x+D,$$ where $C$, $D$ denote two arbitrary constants of integration.
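A quick SymPy check of the final formula (a hedged sketch; the symbols are declared positive so the square root simplifies cleanly):

```python
import sympy as sp

x, a, C, D = sp.symbols("x a C D", positive=True)
y = (a * x + C) ** 3 / (12 * a) - x + D   # candidate solution from the answer

# Residual of y'' - a*sqrt(1 + y'); it should simplify to 0.
residual = sp.simplify(y.diff(x, 2) - a * sp.sqrt(1 + y.diff(x)))
print(residual)  # 0
```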
H: Regular space which is not Hausdorff I know that normality in the absence of $T_{1}$ does not imply regularity (the Sierpiński space being a counterexample, as it is vacuously normal but not regular). I have the feeling that, similarly, regularity in the absence of $T_{1}$ does not imply Hausdorff. I tried to think of a counterexample, but obviously every such counterexample must fail to be $T_{1}$, and I'm not familiar with many spaces which are not $T_{1}$ (the only ones that come to mind are trivial topologies and the Sierpiński space). Help would be appreciated :) AI: Let $X=\{0,1,2,3\}$, and endow $X$ with the topology $$\tau=\big\{\varnothing,\{0,1\},\{2,3\},X\big\}\;;$$ then $\langle X,\tau\rangle$ is regular but not Hausdorff. ($X$ is homeomorphic to the product of the discrete two-point space with the indiscrete two-point space.) Added: Given a space $\langle X,\tau\rangle$, we can define an equivalence relation $\sim$ on $X$ by setting $x\sim y$ iff $x$ and $y$ have the same open nbhds. If we identify equivalent points (i.e., take the quotient $X/\sim$), we always get a $T_0$-space. An $R_0$-space is one in which we get a $T_1$-space. As you can see, the example above is $R_0$: the quotient $X/\sim$ is just a discrete two-point space. You can start with any $T_3$-space and ‘fatten up’ some points to get an $R_0$, regular space that is not Hausdorff. The Wikipedia article on separation axioms has definitions of some of the more obscure ones, including $R_0$, as well as of the familiar ones.
H: Outer measure fundamental question Royden's introductory definition of outer measure. Let $A\subset \mathbb{R}$. Then: $$m^*(A)=\text{inf}\left\{\sum_{n=1}^{\infty}l(I_n):\,A\subset \cup_{n=1}^{\infty}I_n\right\},$$ where $I_n=(a_n,b_n)$. However, in reading this, I suddenly question whether it is a fact that any subset of $\mathbb{R}$ can be covered by a countable collection of open, bounded intervals of the form $(a,b)$. AI: For $\mathbb{R}$, this is definitely possible. For example, for every integer $i$ take the interval $I_i = (i-1,i+1)$. For every $i$ the interval is open and bounded. The countable (since $\mathbb{Z}$ is countable) union gives: $$\bigcup_{i \in \mathbb{Z}} I_i = \mathbb{R} \ .$$ This fulfills your requirements, because the union is all of $\mathbb{R}$, which contains every subset of $\mathbb{R}$.
H: Evaluating $\frac{1}{\pi} \int_{0}^{\pi} e^{2\cos{\theta}} d\theta$ I ran into this integral when computing the volume of a family of polytopes and I'm not sure how to evaluate it analytically (I know Wolframalpha says 2.27...). Any ideas? I tried using complex analysis (Cauchy Integral Formula, Residue Theorem, etc.) but nothing seemed applicable. $$\frac{1}{\pi} \int_{0}^{\pi} e^{2\cos{\theta}} d\theta$$ AI: $$ I=\frac{1}{\pi}\int_0^\pi e^{2\cos\theta}\,d\theta=\frac{1}{\pi}\sum_{k=0}^\infty\frac{2^k}{k!}A_k, $$ with $$ A_k=\int_0^\pi\cos^k\theta\,d\theta \quad \forall k \ge 0. $$ We have $$ A_0=\int_0^\pi\,d\theta=\pi. $$ For every $k \ge 1$, if we set $\varphi=\pi-\theta$, then $$ \int_{\pi/2}^\pi\cos^k\theta\,d\theta=\int_0^{\pi/2}\cos^k(\pi-\varphi)\,d\varphi=(-1)^k\int_0^{\pi/2}\cos^k\varphi\,d\varphi. $$ Therefore, for every $k \ge 1$ we have $$ A_k=\int_0^{\pi/2}\cos^k\theta\,d\theta+\int_{\pi/2}^\pi\cos^k\theta\,d\theta=[1+(-1)^k]\int_0^{\pi/2}\cos^k\theta\,d\theta. $$ We deduce that $A_{2k+1}=0$ for all $k \ge 0$. For every $k\ge 1$ we have \begin{eqnarray} B_k&:=&A_{2k}=2\int_0^{\pi/2}\cos^{2k}\theta\,d\theta=2\int_0^{\pi/2}(\sin\theta)'\cos^{2k-1}\theta\,d\theta\\ &=&2(2k-1)\int_0^{\pi/2}\sin^2\theta\cos^{2k-2}\theta\,d\theta=2(2k-1)\int_0^{\pi/2}(1-\cos^2\theta)\cos^{2k-2}\theta\,d\theta\\ &=&(2k-1)(B_{k-1}-B_k), \end{eqnarray} i.e. $$ B_k=\frac{2k-1}{2k}B_{k-1} \quad \forall k \ge 1. $$ Thus $$ B_k=\frac{(2k-1)\cdot(2k-3)\ldots3\cdot1}{(2k)\cdot(2k-2)\ldots4\cdot2}B_0=\frac{(2k)!}{[(2k)(2k-2)\ldots4\cdot2]^2}B_0=\frac{(2k)!}{2^{2k}(k!)^2}\pi. $$ The given integral is then $$ I=\frac{1}{\pi}\sum_{k=0}^\infty\frac{2^{2k}}{(2k)!}A_{2k}=\frac{1}{\pi}\sum_{k=0}^\infty\frac{2^{2k}}{(2k)!}\cdot\frac{(2k)!}{2^{2k}(k!)^2}\pi=\sum_{k=0}^\infty\frac{1}{(k!)^2}. $$
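For what it's worth, the resulting series $\sum_{k\ge 0} 1/(k!)^2$ is the modified Bessel function value $I_0(2)\approx 2.2795853$, matching the Wolfram Alpha figure quoted in the question. A quick numerical cross-check (a sketch; assumes SciPy is available):

```python
import math
from scipy.integrate import quad

integral, _ = quad(lambda t: math.exp(2 * math.cos(t)) / math.pi, 0, math.pi)
series = sum(1 / math.factorial(k) ** 2 for k in range(20))
print(integral, series)  # both ~2.2795853..., i.e. the modified Bessel value I_0(2)
```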
H: permutations combinations for shirt How many different ways can 6 identical green shirts and 6 identical red shirts be distributed among 12 children such that each child receives a shirt? Now what will be the answer if they can get any number of shirts? EDIT: Actually I was supposed to solve the modified question, because the first question can be solved by simple combination theory. AI: The second problem is a typical urn problem as well. Distributing the 6 red shirts is like choosing from an urn of 12 people (with replacement, putting chosen people back into the urn) where the order does not matter. The number of ways to do that is given by the multiset coefficient: $$\binom{n+k-1}{k}$$ Now $n=12$ and $k=6$, and you do this once for each of the two colors, so the answer is $$\binom{12+6-1}{6}^2=\binom{17}{6}^2=153165376$$
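For reference, both counts are easy to verify numerically (a sketch; the first question, as the asker notes, is plain combination counting, $\binom{12}{6}=924$):

```python
from math import comb

part1 = comb(12, 6)               # choose which 6 children get green: 924
part2 = comb(12 + 6 - 1, 6) ** 2  # one multiset coefficient per colour
print(part1, part2)               # 924 153165376
```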
H: Collision between a circle and a rectangle I am trying to create a simple model for collisions between a circle and a rectangle to be used in a computer game. The reason I am asking this question here rather than stack overflow is that the problem is not one of programming but rather of mechanics. The problem I am experiencing can be reduced to the following: A circle with centre (cx,cy) collides with the top-left corner of a rectangle (rx,ry) The rectangle is stationary and cannot move. Given the circle collided with the rectangle at velocity (cvx,cvy), what velocity does the circle bounce off with, assuming the collision is perfectly elastic? Next, consider that the rectangle is also moving with velocity (rvx,rvy), but its velocity is unaffected by a collision. What velocity does the circle bounce off with, again assuming the collision is perfectly elastic? This is similar to Angle of reflection off of a circle?, but the solution there does not seem applicable as a tangent cannot be drawn off the corner of a rectangle. Please note that I have not studied mechanics for over 2 years and then it was at a fairly low level. AI: Physically, the force exerted on the circle during collision acts radially on the line through the vertex of the rectangle and the center (of gravity) of the circle. This is true at least as far as the momentum is concerned; some kinetic energy may be converted (via tangential force) to rotational energy, with the details depending on the "surface structure" and friction. By your assumption that the rectangle is not affected, we should assume that the rectangle has infinite mass and no energy is transferred between the two objects (whereas momentum is, but that doesn't matter for the rectangle). If at the moment of collision the circle center is at $(c_x,c_y)$ and the corner at $(r_x,r_y)$ and the velocity before is $(v_x,v_y)$, then the velocity after the collision is determined by $(v_x',v_y')=(v_x+q(c_x-r_x), v_y+q(c_y-r_y))$ and - assuming for simplicity that no rotation is produced - same kinetic energy: $v'^2=v^2$. You can use this to find the nonzero solution for $q$ and hence the new velocity: $$\begin{align}v_x^2+v_y^2&=(v_x+q(c_x-r_x))^2+(v_y+q(c_y-r_y))^2\\ \implies\quad0&=2q(v_x(c_x-r_x)+v_y(c_y-r_y))+q^2((c_x-r_x)^2+(c_y-r_y))^2\\ \implies \quad q&=-\frac{2(v_x(c_x-r_x)+v_y(c_y-r_y))}{R^2} \end{align}$$ where $R$ is the radius of your circle. If the rectangle (still of infinite mass) is moving, it is best to assume the rectangle at rest (or rather the common center of gravity - but that is the center of gravity of the rectangle), i.e. you first subtract the rectangle velocity from the circle velocity, then compute the new circle velocity and add back the rectangle velocity. More realistically, however, you really do have to consider rotational energy (pre and post collision) as well.
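For the game-programming side, here is a direct translation of the formulas above into code (a hedged sketch: the function and variable names are mine, and it assumes it is called exactly at the moment of contact, with the circle moving toward the corner):

```python
def bounce(cx, cy, cvx, cvy, rx, ry):
    """Reflect the circle's velocity off the rectangle corner (rx, ry).

    Implements v' = v + q (c - r) with q = -2 v.(c - r) / |c - r|^2 from the
    derivation above; at the moment of contact |c - r| equals the circle radius.
    """
    dx, dy = cx - rx, cy - ry
    q = -2.0 * (cvx * dx + cvy * dy) / (dx * dx + dy * dy)
    return cvx + q * dx, cvy + q * dy


def bounce_moving(cx, cy, cvx, cvy, rx, ry, rvx, rvy):
    """Same, for a rectangle moving at (rvx, rvy): work in its rest frame."""
    vx, vy = bounce(cx, cy, cvx - rvx, cvy - rvy, rx, ry)
    return vx + rvx, vy + rvy
```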
H: How to prove this statement on finite groups? In Fulton and Harris, Chapter 3.2, we have that if $\mathbb{C}^n$ is the permutation representation of $S_n$ (symmetric group) then we can write $\mathbb{C}^n=V\oplus U$, where $U$ is the trivial representation and $V$ is the standard representation. Then they claim that $V$ is irreducible iff $(\chi_{\mathbb{C}^n},\chi_{\mathbb{C}^n})=2$, but I can't see the left direction. What I see is: $$2=(\chi_{\mathbb{C}^n},\chi_{\mathbb{C}^n})=(\chi_{V}+\chi_U,\chi_{V}+\chi_U)= (\chi_{V},\chi_V)+2(\chi_{V},\chi_U)+(\chi_{U},\chi_U)= (\chi_{V},\chi_V)+2(\chi_{V},\chi_U)+1$$ since $U$ is irreducible, so we have $(\chi_{V},\chi_V)+2(\chi_{V},\chi_U)=1$, but how can we now conclude the irreducibility of $V$, i.e. $(\chi_{V},\chi_V)=1$? Thanks in advance! AI: In general, if $U$ is irreducible, then $(\chi_U,\chi_V)\neq 0$ implies that $U$ is a sub-representation of $V$. So you need only prove that $U$ is not a sub-representation of $V$ in this case; that is, that there is no nonzero subspace of $V$ on which the action of $G$ is trivial.
H: Question using $f(A) =\mathcal Sf(\Lambda)\mathcal S^{-1}$ Given $$A=\pmatrix{0&1\\-2&3}$$ I found $\lambda_1=1$ and $\lambda_2=2$ $$\mathcal S=\pmatrix{1&1\\1&2}$$ $$\mathcal S^{-1}=\pmatrix{2&-1\\-1&1}$$ Using the formula $\mathcal S\Lambda\mathcal S^{-1}$ Calculate: a) $(A-I)^{4000}$ b) $e^{2A}$ c) $(3A-5I)(I+4A-4A^2+A^3)^{-1}$ For parts a) and c), I am having difficulty with the $(A-I)$ calculation when I replace $A$ with the eigenvalues and insert them into the $\Lambda$ matrix. Any help is greatly appreciated! :) AI: As you stated in the question title, the idea is that when $A = S \Lambda S^{-1}$ is the diagonalization, then $f(A) = S f(\Lambda) S^{-1}$ holds for sufficiently nice $f$ (in particular, those $f$ that are analytic in a domain containing the eigenvalues of $A$). So for part $(a)$, it is true that $(A - I)^{4000} = S (\Lambda - I)^{4000} S^{-1}$. You can justify this by using the fact that $A - I = S( \Lambda - I) S^{-1}$. Hopefully this should allow you to immediately calculate the answer for part $(a)$. You should try something similar for the other parts.
H: For covariant tensors, why is it $\bigwedge^k(V)$, not $\bigwedge^k(V^*)$? In learning the very basics of differential geometry, I have seen the exterior product defined a couple of ways: First, I have seen it as the image of the covariant tensors (which I believe are essentially the $k$-fold tensor product of $V^*$ with itself) transformed by the mapping "Alt". Second, I have seen it defined somewhat more abstractly as the $k$-fold tensor product of $V$ with itself, modulo certain relations which make it anti-symmetric. (However, at this early stage it seems we have no need for contravariant tensors.) In either case, we write it $\bigwedge^k(V)$. Since we are focusing on covariant tensors, why do we not write $\bigwedge^k(V^*)$? Is there something I am missing which makes these two equivalent? AI: You're right: this is a notation failure (on the part of many differential geometry texts). The correct notation should be $\bigwedge^k (V^*)$, as you say. However, confusingly many texts write $\bigwedge^k(V)$ to mean $\bigwedge^k(V^*)$.
H: Customary layout of $\phi_i(v_j)$ What is the customary layout of $\phi_i(v_j)$? Is it $$\pmatrix{\phi_1(v_1)&\phi_1(v_2) & \cdots \\ \phi_2(v_1) & \phi_2(v_2) & \cdots\\ \vdots &\vdots & \ddots},$$ or$$\pmatrix{\phi_1(v_1)&\phi_2(v_1) & \cdots \\ \phi_1(v_2) & \phi_2(v_2) &\cdots \\ \vdots &\vdots & \ddots}?$$ This question comes from the following problem: Show that whenever $\phi_1, \ldots, \phi_k \in V^*$, and $v_1, \ldots, v_k \in V$, then $$\phi_1 \wedge \cdots \wedge \phi_k (v_1, \ldots, v_k) = \frac{1}{k!}\det[\phi_i(v_j)].$$ Thanks! AI: The general consensus seems to be that $i$ denotes the row and $j$ the column, so your first variant is correct. Of course the determinant of the transpose is the same so it wouldn't really matter here for the result.
H: Why do we only consider quadratic domains as Euclidean domains with squarefree integers? I have been reading "Introductory algebraic number theory" by Alaca and Williams, and in the opening chapters they use the quadratic domains $\mathbb{Z}+\mathbb{Z}(\sqrt{m})$ for non-square $m$ and $\mathbb{Z}+\mathbb{Z}\Big(\frac{1+\sqrt{m}}{2}\Big)$ where $m\equiv1 \mod{4}$ and non-square. In the chapter on Euclidean domains, it defines and proves a number of results about the norms $\phi_m$ where for $x,y \in \mathbb{Q}, \phi_m(x+y\sqrt{m})=|x^2-my^2|$ where $m$ is now a squarefree integer, and goes into much detail about conditions under which e.g. $\mathbb{Z}+\mathbb{Z}(\sqrt{m})$ is a Euclidean domain with respect to this function, but only for squarefree $m$. My question is: Why do we make this distinction? As far as I can tell, there is no reason to exclude, for example, $\mathbb{Z}+\mathbb{Z}(\sqrt{8})$. $\phi_8$ appears to behave in all the ways we want it to and seems to be a candidate for a euclidean function on the domain. I know that this is a subdomain of $\mathbb{Z}+\mathbb{Z}(\sqrt{2})$, but the book itself has exercises where it is shown that certain subdomains of norm-euclidean domains are not euclidean with respect to any function, so we can't always make observations about subdomains from our knowledge of larger ones. AI: The reason they do this is because $\mathbb{Z}+\mathbb{Z}(\sqrt{8})$, or more generally $R=\mathbb{Z}+\mathbb{Z}(k\sqrt{m})$ for $m$ squarefree and $k>1$, is not integrally closed. This means that there are elements $x$ of the fraction field $\operatorname{Frac}R=K=\mathbb{Q}(\sqrt{m})$ that are not in $R$ which satisfy a monic equation of the form $$ x^n +a_{n-1}x^{n-1}+\ldots+a_0=0 $$ with coefficients $a_i\in R$. In our case $x=\sqrt{m}$ is such an element, satisfying $x^2-m=0$. However all euclidean domains are integrally closed, as they are unique factorisation domains, and every unique factorisation domain is integrally closed. Added proof of the latter fact (here $R$ denotes any unique factorisation domain and $K$ its fraction field): If $x\in K$ we have a prime decomposition $x=\prod_i p_i^{e_i}$, where the $e_i$ can also be negative. Suppose $e_1,\ldots,e_k$ are negative. If $x$ is integral over $R$ then $$ x^n = -(a_{n-1}x^{n-1}+\ldots+a_0). $$ After multiplying both sides by $(p_1^{-e_1}\cdot\ldots\cdot p_k^{-e_k})^n$, the left hand side is in $R$ and not divisible by $p_1$, while the right hand side is in $R$ and divisible by $p_1$, so we have a contradiction. So $x\in R$.
H: How to find the base $n$ such that $2_{n}^{12_n}=2_{10}^{6_{10}}\cdot 5_{10}$? I have come across the following problem: How to find the base $n$ such that $2_{n}^{12_n}=2_{10}^{6_{10}}\cdot 5_{10}$? So far I have no idea how to solve it. I could try the conversion with some bases, but that would be tedious. Is there a general method? AI: We have $12_n=1\cdot n + 2=n+2$. Hence $2_n^{12_n}=2^{n+2}$, a power of $2$. By the Fundamental Theorem of Arithmetic, there is only one way to factor this into primes in the integers, namely as the product of $n+2$ copies of the prime $2$. The other side, however, has a factor of the prime $5$. Hence the two sides cannot be equal. If you instead tried to solve $2_n^{12_n}=2_{10}^{6_{10}}$, that would have unique solution $n=4$, because in this case $n+2=6$.
H: If $T(W)⊆W$ show $W$ is spanned by eigenvectors. Let $T$ be a linear transformation of a finite dimensional real vector space $V$ and assume that $V$ is spanned by eigenvectors of $T$. If $T(W)⊆W$ for some subspace $W⊆V$, show that $W$ is spanned by eigenvectors. Any suggestions? Thanks. AI: Note that the following are equivalent for $T\in L(V)$ on a finite-dimensional $K$-vector space $V$: 1- $T$ is diagonalizable. 2- there exists $p(X)\in K[X]$ which splits over $K$ with simple roots such that $p(T)=0$. 3- $V$ is spanned by eigenvectors of $T$. Proof: the only non-completely standard implications are $3\Rightarrow 1$ or $2$. But if there is a spanning set of eigenvectors, one can extract a basis of eigenvectors from that. So $T$ is diagonalizable. $\Box$. Now assuming $V$ is spanned by eigenvectors of $T$, we know that there exists a polynomial $p(X)$ splitting over $K$ with simple roots, such that $p(T)=0$. E.g. the minimal polynomial of $T$. Then if $W$ is invariant under $T$, i.e. $T(W)\subseteq W$, we can consider $T_W$ the restriction of $T$ to $W$. Note that every power of $T$ leaves $W$ invariant as well whence $p(T_W)=p(T)_W=0$. Note also that every eigenvector of $T_W$ is an eigenvector of $T$. It follows that $W$ is spanned by eigenvectors of $T$. Note: in short, if $T$ is diagonalizable, so is every restriction to an invariant subspace.
H: $k \subset A \subset B$, $B\supset k$ f.g., $\text{codim}_k(A) < \infty$ $\Rightarrow$ $B \supset A$ f.g. module? Does this hold? Let $k \subset A \subset B$ where $k$ is a field and $A,B$ are commutative rings. If $B$ is a finitely-generated ring over $k$ and $\dim_k(B/A) < \infty$ then $B$ is a finitely-generated $A$-module. I think the above is used in a proof that I'm trying to understand. But it's not carried out so I guess it must be really obvious. AI: If $B=k[x_1,\dots,x_n]$ and $\dim_k(B/A)<\infty$ we get that $x_i$ is integral over $A$ for each $i$. The powers of the element $\hat x_i\in B/A$ are linearly dependent over $k$, so there exists $N\ge 1$ and $a_1,\dots,a_N\in k$ such that $\hat x_i^N+a_1\hat x_i^{N-1}+\cdots+a_N=0$ in $B/A$, that is, there exists $a\in A$ such that $x_i^N+a_1x_i^{N-1}+\cdots+a_N-a=0$. This shows that $x_i$ is integral over $A$, so the extension $A\subset B$ is finitely generated and integral, and therefore it's finite.
H: The implication of zero mixed partial derivatives for multivariate function's minimization Suppose $f(\textbf x)=f(x_1,x_2) $ has mixed partial derivatives $f''_{12}=f''_{21}=0$, so can I say: there exist $f_1(x_1)$ and $f_2(x_2)$ such that $\min_{\textbf x} f(\textbf x)\equiv \min_{x_1}f_1(x_1)+ \min_{x_2}f_2(x_2)$? Or even further, as follows: $$f(\textbf x)\equiv f_1(x_1)+ f_2(x_2)$$ A simple positive case is $f(x_1,x_2)=x_1^2+x_2^3$. I cannot think of any counterexamples, but I am not so sure about it and may need a proof. AI: For a mixed derivative $f_{xy} = 0$, integrating with respect to $y$ gives: $$ f_x(x,y) = \int f_{xy} \,dy + h(x). $$ Integrating with respect to $x$: $$ f(x,y) = \iint f_{xy} \,dydx + \int h(x)dx + g(y). $$ A similar result follows if we start from $f_{yx}$; this now implies $$ f(x,y) = f_1(x) + f_2(y), $$ which is exactly the conclusion in your question (and the statement about the minima follows at once, since the two terms can be minimized independently).
H: If $L$ is a chain, prove it is finite. I need ideas on the following problem. Suppose $L$ is a poset and every subset $S$ of $L$ has a top and bottom element. Prove $L$ is a finite chain. All I need to do is prove that $L$ is finite (I have already proved $L$ is a chain). Any ideas or suggestions on solving this problem would be great! Thanks. AI: You've proved that $L$ is a chain, so since every non-empty subset of $L$ has a bottom element, then $L$ is well-ordered. Thus, $L$ is order-isomorphic to a unique ordinal, say $\alpha$. Can you show that $\alpha$ must be finite? (Hint: Use the fact that every subset of $L$ has a top element.) Alternate approach (related to my answer to this question, asked previously by another user): Let $x_0$ be the least element of $L,$ $x_1$ the second least element of $L,$ and so on. This sequence is well-defined recursively, because every non-empty subset of $L$ has a bottom element--so $x_1$ is the bottom element of the subset of $L$ containing all but $x_0$, $x_2$ is the bottom element of the subset of $L$ containing all but $x_0$ and $x_1,$ etc.--and at some point, this process must stop (namely, when we reach the top element of $L$) after finitely-many steps. If not, then the set of all $x_n$ (with $n$ a nonnegative integer) is a subset of $L$ without a top element! (Why?) This contradicts our assumption. Thus, our process will stop after finitely-many steps (say at $x_n$), giving us an increasing finite sequence $x_0,x_1,...,x_n$ of points of $L$. Show that every point of $L$ is in this sequence (use the recursive definition), and you're done.
H: Closed form of $\sum\limits_{i=1}^n k^{1/i}$ or asymptotic equivalent when $n\to\infty$ Is there a "closed form" for $\displaystyle S_n=\sum_{i=1}^n k^{1/i}$ ? (I don't think so) If not, can we find a function that is asymptotically equivalent to $S_n$ as $n\to\infty$ ? AI: Cesaro (particular case of the Stolz-Cesaro theorem) says $$ \lim_{n\rightarrow +\infty}a_n=a\quad\Rightarrow \quad \lim_{n\rightarrow+\infty}\frac{\sum_{i=1}^na_i}{n}=a . $$ I assume $k>0$ and $k\neq 1$ (as $S_n=n$ is trivial if $k=1$). Since $\lim_{n\rightarrow +\infty}k^\frac{1}{n}=1$, we get $$ \lim_{n\rightarrow+\infty}\frac{\sum_{i=1}^nk^\frac{1}{i}}{n}=1\quad\Rightarrow\quad S_n=\sum_{i=1}^nk^\frac{1}{i}\;\;\sim\; n. $$ I doubt there is a closed form. But we can go further in the asymptotics. Recall that if $a_n\sim b_n$ and if $b_n $ is (eventually) positive with $\sum_{i=1}^{+\infty}b_i$ divergent, then $\sum_{i=1}^{n}a_i\sim \sum_{i=1}^nb_i$. That's again Stolz-Cesaro if you want. Now by the first derivative of $k^x$ at $0$ $$ k^\frac{1}{n}-1\sim\frac{\ln k}{n} \quad\Rightarrow\quad S_n-n=\sum_{i=1}^n\left(k^\frac{1}{i}-1\right)\sim\sum_{i=1}^n\frac{\ln k}{i}=\ln k\cdot H_n\sim\ln k\cdot \ln n $$ where $H_n$ is the $n$th harmonic number. Hence $$ S_n=n+\ln k\cdot\ln n+o(\ln n). $$ Going up to the second derivative of $k^x$, we get $$ k^\frac{1}{n}-1-\frac{\ln k}{n}\sim \frac{(\ln k)^2}{2n^2}. $$ So the resulting series converges and therefore, using $H_n=\ln n +O(1)$, $$ S_n-n-\ln k \cdot H_n=O(1)\quad\Rightarrow\quad S_n=n+\ln k\cdot \ln n+O(1). $$
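A quick numerical check of the final asymptotic (a sketch; $k=e$ is chosen so that $\ln k=1$): the remainder $S_n-n-\ln k\cdot\ln n$ should stay bounded as $n$ grows.

```python
import math

k = math.e  # any k > 0, k != 1; here ln k = 1
for n in (10**2, 10**4, 10**6):
    S = sum(k ** (1 / i) for i in range(1, n + 1))
    print(n, S - n - math.log(k) * math.log(n))  # the O(1) remainder
```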
H: Proof that there are infinitely many positive rational numbers smaller than any given positive rational number. I'm trying to prove this statement: "Let $x$ be a positive rational number. There are infinitely many positive rational numbers less than $x$." This is my attempt at proving it: Assume that $x=p/q$ is the smallest positive rational number. Consider $p/q - 1$ $= (p-q)/q$ Case I: $p$ and $q$ are both positive Then, $p-q<p$ And hence, $(p-q)/q < p/q$ Since $p$ and $q$ are integers, $(p-q)$ is also an integer. Thus, $(p-q)/q$ is a rational number smaller than $p/q$. Therefore, our assumption is wrong, and there always exists a rational number smaller than any given rational number $x$. Case II: $p$ and $q$ are both negative Then, let $p/q = -s/-t$, where $s$ and $t$ are both positive integers. Then, $-s-(-t)>-s \implies (-s+t)/-t < -s/-t \implies (p-q)/q <p/q$ Since $p$ and $q$ are integers, $(p-q)$ is also an integer. Thus, $(p-q)/q$ is a rational number smaller than $p/q$. Therefore, our assumption is wrong, and there always exists a rational number smaller than any given rational number $x$. Q.E.D Is my proof correct? And there are a couple of questions that I've been pondering over: 1) How do I justify the subtraction of $1$ from $p/q$? I mean, I assumed that $p/q$ is the smallest rational number, so how do I even know if this operation is valid? 2) I proved that there always exists a smaller rational number given any positive rational number. But how do I prove that there's always a smaller positive rational number? 3) Also, I don't seem to have proved that there are infinitely many smaller rational numbers than $x$. If I use a general integer $k$ instead of $1$, this would be taken care of, right? But then again, how do I justify this subtraction? I'd be really grateful if someone could help me with this! Thanks! AI: First, you don't need Case II. If $x\in\mathbb Q_{>0}$, then you can assume that $x=\frac{p}{q}$, where $p,q>0$. Your general idea is good: assume that $x$ is smallest possible and find an even smaller one, which is then a contradiction. Now let's answer your questions: 2) You already noticed that you only proved that there is a smaller rational number, not necessarily a positive one. Your proof is basically "If $x$ is the smallest, then $x-1$ is smaller, a contradiction." 1) Of course this is a valid operation; it actually disproved your assumption that $x$ was the smallest rational number. 3) is right, too. With an arbitrary $k$, you get infinitely many smaller rationals. To prove the positive case, notice that if $x\in\mathbb Q_{>0}$, then $0<\frac{x}{k}<x$ for every integer $k>1$.
H: Probability that random subspaces intersect Given an ambient space $\mathbb{R}^d$ and two randomly oriented subspaces $A,B$ with dimensions $a,b$ respectively, how can I express the probability that $A$ and $B$ intersect non-trivially? AI: Assuming a uniform independent distribution (say, picking orthogonal vectors to span each space one at a time via uniform distributions on spheres), the probability is zero for $a+b\le d$ and one for $a+b>d$. To see why, take $A$ to be fixed by a rotation to some standard subspace. Then consider picking $b$ orthogonal vectors to span $B$. Note that a nontrivial intersection is equivalent to a linear dependence amongst these two bases. But the probability that the first vector introduces such a dependence is zero unless $A=\mathbb R^d$, because a uniformly random vector lies in a fixed proper subspace with probability $0$. Clearly the answer is one in the $a=d$ case. But then, instead of immediately picking the second vector orthogonal to the first, simply project onto the orthogonal complement of the first. $A$ retains the same dimension, but now $b\to b-1$, $d\to d-1$. Thus we have the result above by induction.
H: Complement vector subspaces and the direct sum. From Advanced Linear Algebra (Roman): Let $\dim(V) < \infty$ and suppose that $V = U \oplus S_1 = U \oplus S_2$. What can you say about the relationship between $S_1$ and $S_2$? What can you say if $S_1 \subseteq S_2$? I couldn't completely answer it, but here are my thoughts: We know that $U \cap S_1 = \{0\}$ and $U \cap S_2 = \{0\}$. We also know that all elements of $V = U \oplus S_i$ are of the form $u + s_i$ where $s_i \in S_i$ for i=1,2. If $s_i = 0$, then $u+s_i=u$, so we don't really need to examine this one since it doesn't really say anything about $S_1$ or $S_2$. What we want to look at is $u+s_1$ for $u=0$ and $u \not=0$ (where $s_i \not= 0$). The subspaces $S_1$ and $S_2$ do not need to be equal, since an element $w \in S_1$ doesn't need to be in $S_2$ as long as we have $u_1 + s_1 = w$ for some $u_1 \in U, s_1 \in S_1$. However, for $S_i$ (where $i = 1,2$), the elements of the form $u+s_i$ where $u\not=0$ must be different from those of the form $u + s_i$ where $u=0$ (we are assuming that $s_i \not= 0$ in these cases). Suppose that they are equal. Then $u+s_i = s_i'$ for some $s_i, s_i' \in S_i$. But then we have $s_i' - s_i \in S_i$, contradicting the fact that $U$ and $S_i$ are disjoint. From here, we know that there are $|S_i| + |U|$ elements of the form $u+s_i$ where $u=0$ and where $u \not=0$ (and $s_i \not= 0$ in both cases). In other words, form here we can conclude that both $S_1$ and $S_2$ have the same number of elements; since there are $|S_i| + |U|$ possible elements and we know that they are all different. So if $S_1 \subseteq S_2$, then $S_1 = S_2$. So we know that both subspaces must be of the same cardinality. However, the textbook mentions somewhere that such subspaces must be isomorphic. But I wasn't able to prove that. I'm guessing that I need to define a module homomorphism from $S_1$ to $S_2$ and show that it is injective or surjective. But I'm not really sure how. In other words, how can I define a module homomorphism when we only know that we have two subspaces without any further detail? Or is there another way to prove this without using a module homomorphism? Thank you in advance AI: I don't think you want to argue that $S_1$ and $S_2$ have the same cardinality. Rather you want to show they have the same dimension. Then they must be isomorphic since all vector spaces of the same dimension are isomorphic. To see they have the same dimension notice that a basis for $U$ joined with a basis for $S_i$ is a basis for $V$. So $\dim(S_i)=\dim V-\dim U$.
H: Confidence Interval for a Binomial Having trouble with this question from my textbook. I was wondering if anyone could help me out. The following set of $10$ data points are independent realizations from a Binomial model $X$ ~ $\mathrm{Bin}(36,\pi)$ $$10,12, 7, 6, 6,11, 7,12, 9,10.$$ Compute numerically, showing all your work, the $95$% confidence interval for $\pi$. I know that the confidence interval for a binomial looks like this: $$p\pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{p(1-p)}{n}}$$ But I don't know where to go from there. AI: It looks as if we took a sample of $36$, then did it again, and again, until we had a sample of $360$. The $10$ numbers $10$, $12$, $7$, and so on represent the number of successes in the various trials. We can lump all these trials together. What we had was in effect $360$ independent Bernoulli trials, and, if I am adding correctly, a total of $90$ successes. Thus the sample proportion of successes is $\frac{90}{360}=0.25$. The true probability $\pi$ of success is unknown. However, the random variable represented by the sample mean has mean $\pi$, and standard deviation $\sqrt{\frac{\pi(1-\pi)}{360}}$. Because $360$ is fairly large, we can expect that the true standard deviation $\sqrt{\frac{\pi(1-\pi)}{360}}$ is well-approximated by $\sqrt{\frac{(0.25)(1-0.25)}{360}}$. Call this number $s$. Note that $s\approx 0.02282$. By the normal approximation, a $95\%$ confidence interval for $\pi$ is $$0.25-1.96s \le \pi \le 0.25+1.96s.$$
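Putting the computation into code (a hedged sketch that mirrors the normal-approximation interval above):

```python
from math import sqrt

data = [10, 12, 7, 6, 6, 11, 7, 12, 9, 10]
successes = sum(data)      # 90
n = 36 * len(data)         # 360 Bernoulli trials lumped together
p_hat = successes / n      # 0.25

s = sqrt(p_hat * (1 - p_hat) / n)           # ~0.0228
lo, hi = p_hat - 1.96 * s, p_hat + 1.96 * s
print(p_hat, s, (lo, hi))                   # interval roughly (0.205, 0.295)
```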
H: What is the difference in the meaning of equality symbol "$=$" as a logical symbol vs. a parameter? In my text, the author says that he considers $=$ to be a logical symbol, and he adds that this makes the translation of the equality symbol to English different from if $=$ were a parameter. But the examples don't explain what the difference is. Can anyone explain, please, what the difference is? When should we use $=$ as a logical symbol and when should we use it as a parameter? AI: See the Wikipedia entry on First-Order Logic: Equality and its Axioms. There, you'll find distinctions in the way the equality symbol is used in logics: FOL (First Order Logic) with identity (where $=$ is a primitive logical symbol), and this is what your author intends when discussing $=$ as a logical symbol, which is its most common usage in first-order logic. As a logical symbol, its incorporation adds the "Axioms of Equality" to the deductive system one is using. Reflexivity. Substitution for functions. Substitution for formulas. The above are "axiom schemes," each of which specifies an infinite set of axioms. The third scheme is known as Leibniz's law, "the principle of substitutivity", "the indiscernibility of identicals", or "the replacement property". The second scheme, involving the function symbol $f$, is (equivalent to) a special case of the third scheme. Many other properties of equality are consequences of the axioms above, for example: Symmetry and Transitivity. FOL without identity, where the equal sign does not denote "identity". The entry also discusses the prospect of defining equality within a theory. It might also be the case that your author is distinguishing $=$ as a binary predicate from $=$ as a logical symbol. Perhaps you can include a definition of parameter, as given in your text, so we can help disambiguate the two uses of equality your text is introducing.
H: Mapping two disjoint intervals into one interval I'm trying to find a continuous function that maps $[0,1] \cup [2,3]$ onto $[0,1]$. Could someone give me a hint? (Note I haven't taken topology.) AI: One such function is: $$f(x)=\begin{cases}x&:\ x\in[0,1]\\ x-2&:\ x\in[2,3]\end{cases}.$$ We can think of this as fixing the $[0,1]$ interval and translating the interval $[2,3]$ to the left by $2$ units.
H: What is the column space in a $5\times 5$ invertible matrix? If $A$ is any $5 \times 5$ invertible matrix, then what is its column space? Why? I'm totally lost with column space. Any ideas? AI: If $A$ is invertible, is there any vector $v \in \mathbb R^5$ which cannot be expressed as a linear combination of the columns of $A$ (i.e. the column space)? What does that say about the column space? EDIT: I can't imagine that I am providing anything that you haven't read before, but here goes... Span is the set of all possible linear combinations of a set of vectors. In the simplest case the span of a single vector $v$ is a line. If we have two vectors, the span could be a line or a plane; if they are linearly independent, then they span a plane. The simplest example of this is the familiar coordinate plane spanned by $\vec i$ and $\vec j$. Any point can be represented by $a\vec i+ b\vec j$ for some $a$ and $b$. In that case we say that $\vec i$ and $\vec j$ span the plane and because they are linearly independent they also form a basis for a 2-D space (i.e. $\mathbb R^2$). Just for kicks, let's represent the coordinate point (2,0); it is clear that $2\vec i+ 0\vec j$ represents that point. We like $\vec i$ and $\vec j$ because they are easy to work with but the idea of span extends to any set of vectors. For example, we could have just as easily chosen $\vec v = \vec i + \vec j$ and $\vec u = \vec i - \vec j$ and because they are linearly independent we can also represent any point in the coordinate plane as a linear combination represented as $c\vec v+ d\vec u$. Now let's revisit our coordinate point (2,0); can you see how $1\vec v+ 1\vec u$ represents the same point? We can continue the same thought process to 3, 4 and 5 dimensions and so on... For your question we have five vectors arranged in a matrix $A$. Each vector is taken as a column of the matrix. If we have a vector represented as a $5\times 1$ column vector $x$ then the product $Ax$ represents a linear combination of the columns of $A$ by the weights in $x$. Thus for the equation $$Ax = b$$ the linear independence of the columns of $A$ tells us that we can represent any point $b \in \mathbb R^5$ as a linear combination of the columns of $A$. So the bottom line is that in order to solve $Ax = b$, the vector $b$ has to be in the column space of $A$. That is why column space is important.
H: How to go About Undergraduate Research I apologize in advance if this question is out of the scope or focus here. I was just wondering about the whole prospect of researching as an undergraduate. How do I do it? Who should I talk to in my department (UCLA), and what should I do in general? To give you an idea of where my knowledge of math is and my interests, I have gone through a few graduate texts on algebra and group theory. I have been trying to read the papers on arXiv and am able to follow them (albeit taking a few hours and multiple Wikipedia searches). Most of my learning has been outside of a classroom setting though, and I am only taking an introduction to algebra class at this point. Adding to my problems is both the fact that I have just transferred into my school and that I have generally never known how to interact with my teachers. I am a rising junior by the way. I really have no idea what to do at this point. I would like to start researching, but don't really know how to do it. Any help would be useful. Thank you all very much. AI: I know a few students who did undergraduate research at UCLA with Professor Robert Brown and Professor Liam Watson. They do mostly topology. Just talk to your professors during office hours and ask them about undergraduate research. Even if they are not actively involved in anything, they will know who is, and they may be able to give you a recommendation to get you into a program. There are also some summer programs, although you are past the deadline now. UCLA also has a Logic summer program which supposedly brings you to research level in logic, http://www.math.ucla.edu/~ineeman/Summer-school/ RIPS is a summer undergraduate research program. http://www.ipam.ucla.edu/rips/ Also, toward the middle of spring quarter, you will get emails about money available for summer undergraduate research. You will need to get a professor as an adviser, but it is my understanding that you mostly get to choose what you want to work on. Most summer programs are applied to at the beginning of Winter quarter, so go to your professors' office hours in the Fall and make friends for letters of recommendation. Ask in the math undergraduate office, and they will also tell you what programs you can apply for and what the deadlines are.
H: prove that $\cos x,\cos y,\cos z$ do not form a strictly decreasing arithmetic progression let $x,y,z\in R$ be such that $$\sin y-\sin x=\sin z-\sin y\ge 0 $$ show that: $$\cos x,\cos y,\cos z$$ do not form a strictly decreasing arithmetic progression my idea: we have $$2\sin y=\sin x +\sin z\cdots\cdots\tag 1$$ and assume that there exist $x,y,z$ such that $$2\cos y=\cos x+\cos z\cdots\cdots \tag2$$ and adding the squares of $(1)$ and $(2)$, we have $$4=2+2(\sin x\sin z+\cos x\cos z)=2+2\cos(x-z)$$ then $$\cos(x-z)=1\Longrightarrow x=z+2k\pi,k\in Z$$ so $$\cos x=\cos z,\quad\sin x=\sin z$$ Then? AI: The three points $(\cos x, \sin x)$, $(\cos y, \sin y)$, and $(\cos z,\sin z)$ lie on the unit circle, and by assumption are distinct. The y-coordinates are given to be in arithmetic progression, and we are asked to show the $x$-coordinates are not. If both sets of coordinates were in arithmetic progression, the three points would be collinear. A simple geometric proof would be that a line cannot intersect a circle in three points.
H: How do I find out the coordinates of every point between two points? Suppose all I am given, is the coordinates of two points like the following: What are some ways I could go about finding the values of every point on this line segment? Like the y-value at 2.3, 2.4, 2.7 etc. Any suggestions as to how I could go about doing this would be appreciated. AI: Here is another way. Choose $t \in [0,1]$, then let $p(t) = (x_1+ t (x_2-x_1), y_1+t(y_2-y_1))$. Then $p(0) = (x_1,y_1)$, $p(1) = (x_2,y_2)$ and for $t \in (0,1)$, $p(t)$ will be a point in between. This scheme works even if the two $x$ coordinates are the same.
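In code, the parametric form in the answer becomes a standard linear-interpolation ("lerp") helper (a sketch; the names are mine):

```python
def lerp(p1, p2, t):
    """Point on the segment p1-p2 at parameter t in [0, 1]."""
    (x1, y1), (x2, y2) = p1, p2
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Sample the segment at a few parameters:
for t in (0.0, 0.25, 0.5, 1.0):
    print(lerp((1, 1), (3, 4), t))
```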
H: Distribution of the ratio of two independent exponential random variables Hey, I need a little help with this question. Let $X$ and $Y$ be independent, exponentially distributed random variables. What is the distribution of $Z=\frac X {X+Y}$? I just can't figure it out because of the sum in the denominator, thank you for your help. AI: We will assume that the parameters of the exponentials are not necessarily the same. Let $X$ have parameter $\kappa$ and let $Y$ have parameter $\lambda$. Then the joint density (when $x\gt 0$, $y\gt 0$) is $\kappa \lambda e^{-\kappa x}e^{-\lambda y}$. We go after the cdf $F_Z(z)$ of $Z$. Note that $z$ ranges over the interval $[0,1]$ and it does no harm to assume that $z$ is neither $0$ nor $1$. We have $\frac{X}{X+Y} \le z$ if and only if $X\le (X+Y)z$ if and only if $Y\ge X\frac{1-z}{z}$. Thus $$F_Z(z)=\Pr(Z\le z)=\int_{x=0}^\infty \kappa \exp(-\kappa x)\left(\int_{y=x(1-z)/z}^\infty \lambda \exp(-\lambda y)\,dy\right)\,dx.$$ The inner integral is easy to evaluate, it is just the right-tail of an exponential, and is equal to $\exp(-x\lambda(1-z)/z)$. So now we need to evaluate $$\int_0^\infty \kappa \exp\left(-x(\kappa +\lambda(1-z)/z )\right)\,dx.$$ The integration is straightforward, we get a quite simple function of $z$. So now we have the cdf of $Z$. Differentiate to find the density.
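Carrying the last integration through gives $F_Z(z)=\frac{\kappa z}{\kappa z+\lambda(1-z)}$, so in particular $Z$ is uniform on $[0,1]$ when $\kappa=\lambda$. A quick Monte Carlo check of that formula (a sketch; the parameter values are arbitrary):

```python
import random

kappa, lam, z = 2.0, 5.0, 0.4
N = 200_000
count = 0
for _ in range(N):
    x = random.expovariate(kappa)
    y = random.expovariate(lam)
    if x / (x + y) <= z:
        count += 1

print(count / N)                                 # empirical F_Z(z)
print(kappa * z / (kappa * z + lam * (1 - z)))   # closed form: ~0.2105
```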
H: $K$ is weakly-compact $\Longleftrightarrow$ $\Pi(K)$ is weak*-compact Let $X$ be a Banach space and $K\subset X$. $\displaystyle \Pi:X \longrightarrow X^{**}$ canonical injection $\Pi(x)(f)=f(x)$ How can we prove that: $K$ is weakly-compact $\Longleftrightarrow$ $\Pi(K)$ is weak*-compact Any hints would be appreciated. AI: I'll assume the base field is $\mathbb{C}$, but this could be $\mathbb{R}$ without changing anything. The weak*-topology on $X^{**}$ is the weakest topology that makes the point evaluations $e_{x^*}:x^{**}\longmapsto x^{**}(x^*)$ continuous. It follows that a map $\phi:Y\longrightarrow (X^{**},w^*)$ is continuous if and only if $y\longmapsto \phi(y)(x^*)$ is continuous from $Y$ to $\mathbb{C}$ for every $x^*\in X^*$. The weak topology on $X$ is the weakest topology that makes the functionals of the topological dual $X^*$ continuous. It follows that a map $\theta: Z\longrightarrow (X,w)$ is continuous if and only if $x^* \circ \theta:Z\longrightarrow \mathbb{C}$ is continuous for every $x^*\in X^*$. Now for every $x^*\in X^*$, the map $x\longmapsto \Pi(x)(x^*)=x^*(x)$ is just the functional $x^*\in X^*$, which is continuous on $(X,w)$. Therefore $\Pi:(X,w)\longrightarrow (X^{**},w^*)$ is continuous. Conversely, for every $x^*\in X^*$, the map $\Pi(x)\longmapsto (x^*\circ \Pi^{-1})(\Pi(x))=x^*(x)=\Pi(x)(x^*)$ is continuous on $(\Pi(X),w^*)$ as the restriction of $e_{x^{*}}$. Hence $\Pi^{-1}:(\Pi(X),w^*)\longrightarrow (X,w)$ is continuous. Conclusion: the isometric embedding $\Pi:X\longrightarrow X^{**}$, which is obviously a norm-norm homeomorphism onto its range, is also a $w-w^*$ homeomorphism onto its range. In particular, since the continuous image of a compact is compact, $K$ is $w$-compact if and only if $\Pi(K)$ is $w^*$-compact. Note that since both topologies are Hausdorff, such compacts are Hausdorff as well. Note: we never used that $X$ is a Banach space. This is true for any normed vector space. And even for a topological vector space, where we only lose the isometry of $\Pi$.
H: Sum of Independent Folded-Normal distributions Let $X$ and $Y$ be independent, normally distributed random variables. How is $|X| + |Y|$ distributed? Is it known to be $|Z|$, where $Z$ is distributed normally? AI: For $\alpha > 0$, $$F_{|X|+|Y|}(\alpha) = P\{|X|+|Y|\leq \alpha\} = P\{(X,Y) \in A\}$$ where $A$ is a square region with vertices $(\alpha,0), (0,\alpha), (-\alpha, 0), (0,-\alpha)$. Assume that $X$ and $Y$ have $0$ mean and identical variance $\sigma^2$. Then, since the joint density of $X$ and $Y$ has circular symmetry, we can rotate the square about the origin so that the sides are parallel to the axes and at distance $\alpha/\sqrt{2}$ from them. Consequently, \begin{align}F_{|X|+|Y|}(\alpha) &= P\{(X,Y) \in A\}\\ &= \left[\Phi\left(\frac{\alpha}{\sqrt{2}\sigma}\right) - \Phi\left(\frac{-\alpha}{\sqrt{2}\sigma}\right)\right]^2\\ &= \left[2\Phi\left(\frac{\alpha}{\sqrt{2}\sigma}\right) - 1\right]^2. \end{align} Can you get the density of $Z$ from this? (Hint: think of the chain rule for differentiation from basic calculus, and remember that you know the derivative of $\Phi(x)$) Note that $Z$ is not the absolute value of a normal random variable. More generally, for arbitrary independent normal random variables, we have that \begin{align} F_{|X|+|Y|}(\alpha) &= P\{(X,Y) \in A\}\\ &= \int_{-\alpha}^0\int_{-\alpha-x}^{\alpha+x}f_X(x)f_Y(y)\mathrm dy \mathrm dx + \int_0^{\alpha}\int_{x-\alpha}^{-x+\alpha}f_X(x)f_Y(y)\mathrm dy\mathrm dx. \end{align} Rather than computing the integrals and then differentiating with respect to $\alpha$ to find the density of $|X|+|Y|$, one can directly differentiate the integrals with respect to $\alpha$. If you don't remember the details of how to do so, see the comment following this answer and remember that when you are differentiating the outer integral (the one with respect to $x$), the integrand (a.k.a. the value of the inner integral) is also a function of $\alpha$.
H: How do I find the radius of convergence of these power series Please help me find the radius of convergence of the following power series including the method of solving them. $$\sum_{n=0}^{\infty}2^{n!}x^{n!}\tag{1}$$ $$\sum_{n=0}^{\infty}\frac{n^2}{2^n}x^{n^2}\tag{2}$$ AI: The formula for the convergence radius of a power series is $$\frac 1 R= \limsup_{n \to \infty} |a_n|^{1/n}.$$ In the first case $a_k =0$ if $ k \neq n!$ and $a_k =2^k$ if $k=n!$. Thus $$\frac 1 R=\lim_{k \to \infty} (2^k)^{1/k}=2.$$ In the second case $a_k=0$ if $k \neq n^2$ and $a_k=\frac k {2^\sqrt{k}}$ if $k=n^2$. Thus $$\frac 1 R=\lim_{k \to \infty }\left(\frac k {2^{\sqrt{k}}}\right)^{1/k}= \frac {\lim_{k \to \infty} k^{1/k}} {\lim_{k \to \infty} \left(2^{\sqrt{k}}\right)^{1/k}}=\frac 1 {\lim_{k \to \infty} 2^{k^{-1/2}}}=\frac 1 {2^{\lim_{k \to \infty} (k^{-1/2})}}=1.$$
H: How is this particular inequality deduced in finding the limit of a vector function? There is a proposition in differential geometry which states: Let $\vec{x}$ be a vector function from a subset of $\mathbb{R}$ into $\mathbb{R}^n$ that is defined over an interval containing $a$, though perhaps not at $a$ itself. Suppose in coordinates we have $\vec{x}(t) = (x(t), y(t))$ wherever $\vec{x}$ is defined. If $\vec{w} = (w_1, w_2)$, then $\lim_{t \to a} \vec{x}(t) = \vec{w}$ if and only if $\lim_{t \to a} x(t) = w_1$ and $\lim_{t \to a} y(t) = w_2$. Proof: Suppose first that $\lim_{t \to a}\vec{x}(t) = \vec{w}$. Let $\varepsilon > 0$ be arbitrary and let $\delta > 0$ satisfy the definition of the limit of the vector function. Note that $|x(t) - w_1| < ||\vec{x}(t) - \vec{w}||$ and $|y(t) - w_2| < ||\vec{x}(t) - \vec{w}||$ Hence, $0 < |t - a| < \delta$ implies $|x(t) - w_1| < \varepsilon$ and $|y(t) - w_2| < \varepsilon$ Thus, $\lim_{t \to a} x(t) = w_1$ and $\lim_{t \to a} y(t) = w_2$ My Question: How did the author of this theorem deduce that $$ |x(t) - w_1| < ||\vec{x}(t) - \vec{w}|| \text{ and } |y(t) - w_2| < ||\vec{x}(t) - \vec{w}|| $$ Is it possible to help me see why each component is bounded by the norm? AI: By definition, $||\vec{x}(t) - \vec{w}||$ is the square root of the sum of the squares of the components: $||\vec{x}(t) - \vec{w}||^2 = (x(t)-w_1)^2 + (y(t)-w_2)^2$. Since each summand is non-negative, $(x(t)-w_1)^2 \le ||\vec{x}(t) - \vec{w}||^2$, and taking square roots gives $|x(t)-w_1| \le ||\vec{x}(t) - \vec{w}||$; likewise for $|y(t)-w_2|$. (Strictly speaking the inequality is $\le$ rather than $<$, with equality when the other component vanishes, but the proof goes through unchanged: $0<|t-a|<\delta$ still forces $|x(t)-w_1|\le ||\vec{x}(t) - \vec{w}|| < \varepsilon$.)
H: Problem understanding proof for positive solutions of parabolic PDE in Friedman's textbook This is Lemma 5 in the chapter on maximum principles in Friedman's book Partial Differential Equations of Parabolic Type. I am having trouble understanding one of the steps in the proof. Let $$ \Omega_{0}\equiv\mathbb{R}^{n}\times\left(0,T\right]\text{ and }\Omega\equiv\mathbb{R}^{n}\times\left[0,T\right]. $$ Further let $$ L\equiv\sum_{i,j=1}^{n}a_{ij}\left(x,t\right)\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}+\sum_{i=1}^{n}b_{i}\left(x,t\right)\frac{\partial}{\partial x_{i}}+c\left(x,t\right)-\frac{\partial}{\partial t} $$ be a parabolic operator with continuous coefficients in $\Omega_{0}$. If $c\leq0$ and $Lu\leq0$ in $\Omega_{0}$, $u\left(x,0\right)\geq0$ and $$ \liminf_{\left|x\right|\rightarrow\infty}u\left(x,t\right)\geq0 $$ uniformly w.r.t. $t$ ($0\leq t\leq T$) then $u\left(x,t\right)\geq0$ in $\Omega$. Proof: For any $\epsilon>0$, we have $u\left(x,0\right)\geq0$ on $t=0$ and hence $u\left(x,0\right)+\epsilon>0$. Similarly, $u\left(x,t\right)+\epsilon>0$ for $\left|x\right|=R$, $0\leq t\leq T$ provided $R$ is sufficiently large (from the $\liminf$ assumption). Since $$ L\left(u+\epsilon\right)=\left[\sum_{i,j=1}^{n}a_{ij}\left(x,t\right)\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}+\sum_{i=1}^{n}b_{i}\left(x,t\right)\frac{\partial}{\partial x_{i}}+c\left(x,t\right)-\frac{\partial}{\partial t}\right]\left(u+\epsilon\right)\leq c\epsilon\leq0, $$ $u\left(x,t\right)+\epsilon>0$ for $\left|x\right|\leq R$, $0\leq t\leq T$ (this is the part I don't understand). Taking $\left(x,t\right)$ fixed and letting $\epsilon\rightarrow0$, it follows that $u\left(x,t\right)\geq0$. AI: The author used the maximum principle for parabolic PDE (see, for example, Theorem 9 in the corresponding chapter of Evans). Denote $\Omega_T = \{|x|<R\}\times (0,T]$ and $\omega_T = \{|x|\leq R\}\times \{T\}$; then since $L (u+\epsilon) \leq 0$ in $\Omega_T$ (case 2 of that theorem), we have: $$ \inf_{\Omega_T} (u+\epsilon) = \min_{\bar{\Omega}_T} (u+\epsilon) \geq -\max_{\partial \Omega_T\backslash \omega_T} (u+\epsilon)^- =0. $$ Here $(u+\epsilon)^- = 0$ on $\partial \Omega_T\backslash\omega_T$ because $u+\epsilon> 0$ there (on $t=0$ and on $|x|=R$). Therefore, by the above, the minimum is attained on the parabolic boundary, and we have $$ \inf_{\Omega_T} (u+\epsilon) \geq 0\implies u+\epsilon >0 \text{ in }\Omega_T. $$ Since $u+\epsilon \in C^{2,1}(\Omega_T)\cap C (\overline{\Omega}_T)$, you can resolve the case when $t=T$; however, I think the inequality should then be weakened to $\geq 0$ at $t= T$.
H: approximate $[0, 1]$ continuous function with 2d basis. Hi everyone. I've been thinking of this problem when reading papers about Fourier series. I think I can state my question as follows: in the interval $[0, 1]$, I want to approximate an unknown continuous function with maximum frequency $f_0$ with mean $0$ (don't worry about the boundaries). I know it can be represented losslessly by a weighted sum of sine waves. However, if I am only allowed to use two functions in $[0,1]$ (any continuous functions), what should they be so that I can minimize the $L_2$ error? i.e. if the two optimal functions are $f_1$ and $f_2$, and the unknown function is $f$, I want to minimize the expectation of the following; you can assume the occurrence of any function is 'uniform'. $$\min_{a_1, a_2}\int_0^1 (f-a_1 f_1 - a_2 f_2)^2 \,dx $$ where $a_1$ and $a_2$ are real numbers. I am also wondering if there is an information theory viewpoint: those two optimal functions can be considered as the most expressive ones under the constraints, and therefore minimize the conditional entropy of the unseen function? This seems to be a quite general and easy question, I doubt it is still open. Can someone point me to the right direction? Is there a well-established theory for such approximation? I am not sure if the conditions are strong enough, please make any assumptions as you like. Thanks, any input will be appreciated! AI: There is a general theory for this: the theory of Hilbert spaces. Your function space with the $L_2$ inner product is a Hilbert space $H$, and for a fixed pair $f_1, f_2$ the inner minimization you wrote is exactly the classical problem of finding the closest element in a closed subspace $M \subset H$, here $M=\operatorname{span}\{f_1,f_2\}$: the minimizer $a_1f_1+a_2f_2$ is the orthogonal projection of $f$ onto $M$, characterized by the normal equations $\langle f-a_1f_1-a_2f_2,\,f_i\rangle=0$ for $i=1,2$.
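To see the normal equations at work, here is a minimal numerical sketch in Python: discretize $[0,1]$ and solve for $a_1, a_2$ by least squares. The particular $f$, $f_1$, $f_2$ below are arbitrary choices of mine, purely for illustration.

```python
import numpy as np

# Minimal sketch: project f onto span{f1, f2} in discretized L^2([0,1]).
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
f = np.exp(x)                      # the function to approximate (arbitrary)
f1 = np.sin(2 * np.pi * x)         # first basis function (arbitrary)
f2 = np.cos(2 * np.pi * x)         # second basis function (arbitrary)

A = np.column_stack([f1, f2])
(a1, a2), *_ = np.linalg.lstsq(A, f, rcond=None)  # discrete normal equations
err = np.sqrt(np.sum((f - a1 * f1 - a2 * f2) ** 2) * dx)  # discrete L2 error
print(a1, a2, err)
```

The coefficients returned are exactly the discretized solution of $\langle f-a_1f_1-a_2f_2, f_i\rangle=0$; choosing the best pair $(f_1,f_2)$ over an ensemble of functions is the separate, outer optimization the question asks about.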
H: Finding rank of matrix Suppose $B$ is a non-zero real skew-symmetric matrix of order $3$ and $A$ is a non-singular matrix with inverse $C$. Then rank of $ABC$ is: (A) $0, 1, 2$ (B) definitely $1$ (C) definitely $2$ (D) definitely $3$ Here we are given $B^{T}=-B$ and $A$ is non-singular i.e. $A^{-1}$ exists and $A^{-1}=C$ Now, $rank(ABC)=rank(ABA^{-1})=rank(B)$ Since $B$ is non-zero, option (A) is incorrect but what about (B), (C) and (D)? AI: Can a skew-symmetric $3\times 3$ matrix be non-singular? HINT: If $A$ is $n\times n$ and skew-symmetric, then $$\det A=\det A^T=\det(-A)=k\,\det A\;,$$ where $k=\ldots$ what? That gives you part of what you need; for the rest see this question.
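Completing the hint: for $n=3$ one gets $k=(-1)^3=-1$, so $\det B=-\det B$ forces $\det B=0$, ruling out rank $3$; and since a nonzero real skew-symmetric matrix always has even rank, $\operatorname{rank}(B)=2$, i.e. option (C). A quick numerical spot check (the random entries are arbitrary):

```python
import numpy as np

# A generic nonzero 3x3 real skew-symmetric matrix.
a, b, c = np.random.randn(3)
B = np.array([[0.0, a, b],
              [-a, 0.0, c],
              [-b, -c, 0.0]])
print(np.linalg.det(B))          # ~0 up to rounding error
print(np.linalg.matrix_rank(B))  # 2 for a generic nonzero choice
```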
H: Proof that $n \in \mathbb{N}$ by combinatorial analogue? (Disclaimer: I'm a high school student, and my highest knowledge of mathematics is some elementary calculus. This may not be the correct terminology.) A while ago, I saw the following problem: prove, for natural numbers $a$, $b$, $c$, $d$ with $a \geq b + c + d$ that $\frac{a!}{b!c!d!}$ is a natural number. I had a neat idea about this when I realized that the expression $\frac{a!}{b!c!d!}$ actually gives the answer to a combinatorial problem. Namely, it is the number of unique permutations of $a$ objects when $b$ are of one type (indistinguishable from each other), $c$ are of another type, and $d$ are of another type. Obviously, one cannot have a non-natural number of permutations, so this must always be a natural number. Is this a valid method of proof? Can one establish that an expression is always a natural number by assigning to it a combinatorial meaning? AI: Yes: you’ve discovered one form of combinatorial proof, an elegant and often very informative kind of proof that is very frequently used in combinatorics. However, your argument needs a little bit of repair. Let $e=a-(b+c+d)$; then the multinomial coefficient $$\binom{a}{b,c,d,e}=\frac{a!}{b!\,c!\,d!\,e!}$$ is the number of distinguishable permutations of $a$ objects consisting of $b$ indistinguishable objects of one type, $c$ indistinguishable objects of another type, $d$ indistinguishable objects of a third type, and $e$ indistinguishable objects of a fourth type. This is clearly a non-negative integer, since it counts a discrete collection of entities, and $$\frac{a!}{b!\,c!\,d!}=\binom{a}{b,c,d,e}\cdot e!$$ is therefore also a non-negative integer. Alternatively, you could take the $e$ remaining elements to be distinguishable from one another and from the other three types, in which case $\frac{a!}{b!\,c!\,d!}$ is indeed the number of distinguishable permutations of the $a$ objects, and you don’t need the intermediate step. If this is what you had in mind, the only repair needed is to make this a bit clearer!
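If you want to convince yourself empirically as well as combinatorially, a quick brute-force check of the integrality claim (the random ranges below are arbitrary):

```python
from math import factorial
from random import randint

# Spot-check that a!/(b! c! d!) is an integer whenever a >= b + c + d.
for _ in range(1000):
    b, c, d = randint(0, 5), randint(0, 5), randint(0, 5)
    a = b + c + d + randint(0, 5)
    assert factorial(a) % (factorial(b) * factorial(c) * factorial(d)) == 0
print("all checks passed")
```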
H: Evaluating $\lim\limits_{x\to 0^{+}} \frac{x}{\ln^2 x}$ How can I find: $$\lim_{x\to 0^+} \frac{x}{\ln^2 x} $$ I know that the limit is $0$. I tried sandwich theorem but I don't know what could be bigger. Thanks in advance. AI: HINT: You need only consider what happens when $x$ is close to $0$, and as long as $0<x\le 1$, you have $$\frac{x}{\ln^2x}\le\frac1{\ln^2x}\;.$$ (I’m assuming that by $\ln^2x$ you mean $\left(\ln x\right)^2$, as the other possible meaning, $\ln\ln x$, is undefined for $0<x<1$.)
H: How can I solve this Cauchy-Euler equation $x^{3}y^{\prime\prime\prime}+2xy^{\prime}-2y=0$? My problem is this given Cauchy-Euler equation: $$x^{3}y^{\prime\prime\prime}+2xy^{\prime}-2y=0$$ My approach was: I can see this is an ordinary differential equation of third order and I think it is linear. I was told that this equation should have variable coefficients and that a strategy to solve this equation should include a transformation into an equation with constant coefficients. In this case, I think the coefficients are $x^3$ and $2x$. So let $y=x^m$ with $m$ being a number. Then: $$y=x^m\to y'=mx^{m-1},~~y'''=m(m-1)(m-2)x^{m-3}$$ applied into the equation gives: $$x^{3}y'''+2xy'-2y=0\Longrightarrow x^{3}m(m-1)(m-2)x^{m-3}+2x\cdot m\cdot x^{m-1}-2x^{m}=0$$ $$m^{3}x^{m}-3m^{2}x^{m}+4m\cdot x^{m}-2x^{m}=0$$ $$(m^{3}-3m^{2}+4m-2)x^{m}=0$$ While $x\neq 0$: That's why: $$m^{3}-3m^{2}+4m-2=0$$ So $m_1 = 1$ ... so now I am stuck. What can I do with the $m$, since I have it now? And do I need only one $m$ or do I need all possible $m$? AI: Following up on @Boris' hint, you can divide $m^{3}-3m^{2}+4m-2$ by $m-1$ to see that $$m^{3}-3m^{2}+4m-2=(m-1)(m^2-2m+2)=(m-1)(m-\alpha)(m-\beta)$$ where $\alpha=1+i, ~\beta=1-i$. You need all three roots. Since $x^{1\pm i}=x\,e^{\pm i\ln x}=x(\cos\ln x\pm i\sin\ln x)$, taking real and imaginary parts of the complex solutions gives the real general solution $$y_c(x)=C_1x+C_2x\cos(\ln x)+C_3x\sin(\ln x)$$
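A quick way to double-check the three solutions is to substitute them back into the equation; a small sympy sketch:

```python
import sympy as sp

# Verify that x, x*cos(ln x) and x*sin(ln x) all solve
# x^3 y''' + 2 x y' - 2 y = 0.
x = sp.symbols('x', positive=True)
for y in (x, x * sp.cos(sp.log(x)), x * sp.sin(sp.log(x))):
    expr = x**3 * sp.diff(y, x, 3) + 2 * x * sp.diff(y, x) - 2 * y
    print(sp.simplify(expr))  # 0 each time
```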
H: How to distinguish convergence with limited fluctuation in a non-standard setting? I'm reading Edward Nelson's Radically Elementary Probability Theory, and got confused about two concepts, convergence and limited fluctuation, in a non-standard setting. On pages 21-22 (see here): Let $T$ be a subset of $\bf R$, and let $\xi: T \to \bf R$. We say that $\xi$ admits $k$ $\epsilon$-fluctuations in case there exist elements $t_0 < t_1 \ldots < t_k$ of $T$ with $$|\xi(t_0)-\xi(t_1)| \geq \epsilon, |\xi(t_1)-\xi(t_2)| \geq \epsilon \ldots |\xi(t_{k-1})-\xi(t_k)| \geq \epsilon$$ We say that an infinite sequence is convergent iff for all $\epsilon > 0$ there exists a $k$ such that the sequence doesn't admit $k$ $\epsilon$-fluctuations. We say that a finite or infinite sequence is of limited fluctuation in case for all $\epsilon \gg 0$ and all $k \simeq \infty$, it doesn't admit $k$ $\epsilon$-fluctuations. $\epsilon \gg 0$ means that $\epsilon$ is positive but not a positive infinitesimal. $k \simeq \infty$ means that $\frac{1}{k}$ is a positive infinitesimal. To highlight the difference between the two concepts, the author provides an example which I don't follow: Let $i \leq \nu$ be unlimited, and let $x_n = 0$, for $n \leq i$, and $x_n = 1$, for $i < n \leq \nu$. Then this sequence is of limited fluctuation, but not convergent. I can't understand why this is the case. This sequence only takes on two values: for $k = 2$, either $|\xi(t_0)-\xi(t_1)|$ or $|\xi(t_1)-\xi(t_2)|$ has to equal zero. Thus it doesn't admit $k$ $\epsilon$-fluctuations when $k \geq 2$. It should be convergent. What's wrong? AI: You've misquoted Nelson slightly. He says "Now an infinite sequence is convergent if and only if for all $\epsilon > 0$ there exists $k$ such that the sequence does not admit $k$ $\epsilon$-fluctuations". But the example you describe is not an infinite sequence: note the third paragraph on p20, where he explicitly contrasts sequences whose indices are in $[1,\ldots,\nu]$ with "infinite" sequences. $\nu$ is finite (being a natural number), hence so is the sequence; it just happens to be very large. Of course, using the definition that $x_*$ is convergent to $x$ if for all unlimited $r \leq \nu$, $x_r \simeq x$, you easily get nonconvergence: $x_*$ is not convergent because the only possible limit is $1$, whereas for any unlimited $m < i$ we have $x_m=0$.
H: Question about convergence in probability (topic confusion) I'm taking second year stats and was introduced to the concepts below. For the third one, we use that to estimate the mean squared error in the case where the estimator is a nonlinear function of the sample mean. However, I can't find it anywhere in mathematical statistics textbooks (don't you hate it when that happens?). Where can I find out more about the above topics? AI: The concept is called "Convergence in Probability". Although for a sequence of numbers there's only one way of interpreting $x_n\to x$ as $n\to\infty$, for functions or random variables there are several different ideas about convergence. If $X_n$ are random variables then $X_n$ converges to $x$ in probability if $$\mathbb P\left[ \|X_n - x\| <\varepsilon\right] \to 1 \text{ as } n\to \infty$$ for every $\varepsilon>0$. This is quite close to the definition for real numbers; notice that it's not the same thing as $$\mathbb P\left[X_n\to x \text{ as } n\to\infty\right]=1$$ The second statement is much stronger and is known as "Almost Sure Convergence". There are lots of other important ones. I mentioned almost sure convergence to try and convince you there's a point to having lots of different ones. It's a good exercise to try and come up with an example of a sequence of random variables that converge in probability but not almost surely. The results of your statements follow from the definition. Firstly, if $\mathbb E(X_n)\equiv c$ and $\mathop{SD}(X_n)\to 0$ then by Chebyshev's inequality we have $$\mathbb P\left[\|X_n - c\|>\varepsilon\right] \leq \frac{\mathop{SD}(X_n)^2}{\varepsilon^2} \to 0$$ as $n\to\infty$. Next, if $\lim_{x\to c}g(x) = \ell$ then for every $\varepsilon>0$ there exists some $\delta>0$ such that $\|x-c\|<\delta\Rightarrow\|g(x)-\ell\|<\varepsilon$, hence $$ \mathbb P\left[\left|g(X_n) - \ell\right| <\varepsilon\right] \geq \mathbb P\left[\left|X_n - c\right| <\delta\right]\to 1 $$ as $n\to\infty$. The final statement follows directly from the second by noting that if $g$ is differentiable at $c$ then $$\lim_{x\to c} \frac{g(x)-g(c)}{x-c} = g'(c).$$
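A small simulation can make the Chebyshev step concrete; the Normal family below is my own toy choice of a sequence with constant mean and shrinking standard deviation:

```python
import numpy as np

# X_n ~ Normal(c, sd = 1/sqrt(n)): E[X_n] = c and SD(X_n) -> 0, so the
# estimated probability P(|X_n - c| > eps) should tend to 0.
rng = np.random.default_rng(0)
c, eps = 2.0, 0.1
for n in (10, 100, 1000, 10000):
    X = rng.normal(c, 1 / np.sqrt(n), size=100_000)
    print(n, np.mean(np.abs(X - c) > eps))
```

Chebyshev bounds each printed frequency by $\mathrm{SD}(X_n)^2/\varepsilon^2 = 100/n$, and the empirical values fall well under that.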
H: Showing that the functions $\sqrt{2x}\exp(2\pi inx^2)$ form a complete orthonormal system for $L^2([0,1])$ How do I show that the functions $$g_n(x) :=\sqrt{2x}\exp(2\pi inx^2)$$ where $n$ is an integer, are a complete orthonormal set in $L^2([0, 1])$? I am relatively new to this and need some help getting started. AI: Hint: $$\langle g_n, g_n\rangle =\int_{0}^{1}2x\exp(2\pi inx^2-2\pi i n x^2)dx$$ by definition of the scalar product $\langle,\rangle$ on $L^2([0,1])$. Then $$\langle g_n, g_n\rangle = \int_{0}^{1}2xdx=2\frac{1}{2}x^2|^{1}_0=1,$$ for all $n$. Now we consider $$\langle g_n, g_m\rangle =\int_{0}^{1}2x\exp(2\pi inx^2-2\pi i mx^2)dx= \int_{0}^{1}2x\exp(2\pi i(n-m)x^2)dx$$ with $m\neq n$. Using $$\frac{d}{dx}\exp(2\pi i(n-m)x^2)=4\pi i(n-m)x\exp(2\pi i(n-m)x^2)$$ we arrive at $$\langle g_n, g_m\rangle =\frac{2}{4\pi i(n-m)}\int_{0}^{1}\frac{d}{dx}\exp(2\pi i(n-m)x^2)dx=\frac{1}{2\pi i(n-m)}\exp(2\pi i(n-m)x^2)|^{1}_{0}= \frac{1}{2\pi i(n-m)}[\exp(2\pi i(n-m))-1]=0,$$ because $q:=n-m\in\mathbb Z$ and $\exp(2\pi i q)=1$. This proves orthonormality. For completeness, substitute $u=x^2$ (so $du=2x\,dx$): then $\int_0^1 f(x)\overline{g_n(x)}\,dx=\int_0^1 F(u)e^{-2\pi i n u}\,du$ with $F(u):=\frac{1}{\sqrt{2}}f(\sqrt{u})\,u^{-1/4}$, and the same substitution shows $\|F\|_{L^2([0,1])}=\|f\|_{L^2([0,1])}$. So if $f$ is orthogonal to every $g_n$, all Fourier coefficients of $F$ vanish, hence $F=0$ a.e. and therefore $f=0$ a.e.; completeness of $\{g_n\}$ thus reduces to the known completeness of $\{e^{2\pi i n u}\}$ in $L^2([0,1])$.
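The orthonormality computation is easy to sanity-check numerically; a crude quadrature sketch (grid size and index pairs arbitrary):

```python
import numpy as np

# Numerical check of <g_m, g_n> for g_n(x) = sqrt(2x) exp(2*pi*i*n*x^2).
x = np.linspace(0, 1, 200_001)
dx = x[1] - x[0]

def g(n):
    return np.sqrt(2 * x) * np.exp(2j * np.pi * n * x**2)

for m, n in [(0, 0), (1, 1), (3, 3), (1, 2), (0, 5), (-2, 4)]:
    inner = np.sum(g(m) * np.conj(g(n))) * dx  # crude Riemann sum
    print(m, n, np.round(inner, 4))            # ~1 if m == n, ~0 otherwise
```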
H: Inequality in $L^p$ involving integral This is from Folland chapter 6. exercise 30. Suppose that K is a nonnegative measurable function on $(0,\infty)$ such that $\int_0^{\infty}K(x)x^{s-1}=\theta(s)<\infty$ for $0<s<1$. If $1 < p <\infty,\ p^{-1} + q^{-1} = 1$, and $f, g$ are nonnegative measurable functions on $(0,\infty)$, then (with $\int = \int_0^{\infty}$) $$\int\int K(xy)f(x)g(y) dxdy \leq \theta(p^{-1})\left[ \int x^{p - 2} f(x)^p dx\right]^{\frac{1}{p}} \left[\int g(x)^q dx\right]^{\frac{1}{q}}$$ The operator $Tf(x) = \int_0^{\infty} K(xy)f(y) dy$ is bounded on $L^2((0, \infty))$ with norm $\leq\theta\left(\frac{1}{2}\right)$ For item 1. I've found the answer :-) For item 2. I'm not sure which norm to use for $Tf$. Based on previous item it would be natural to take $||\cdot||_1$ but then the second item $||g||_q$ fails to be finite. And if it is $||\cdot||_2$ I have no idea what to do. Any help would be appreciated. Thank you. AI: I understand that bounded on $L^2$ means bounded as an operator from $L^2$ to $L^2$, so that you have to take $p=q=2$. The inequality you have proved is then $$ \Bigl|\int Tf(y)\,g(y)\,dy\Bigr|\le\theta(1/2)\|f\|_2\|g\|_2\quad\forall f,g\in L^2. $$
H: Closed form for the definite integral $\int_3^5\exp\left[\frac{-3.91}{V}\left(\frac{\lambda}{0.55}\right)^{-q}R\right]\,\mathrm d\lambda$ I am stuck on the following definite integral: $$\tau=\int_3^5\exp\left[\frac{-3.91}{V}\left(\frac{\lambda}{0.55}\right)^{-q}R\right]\,\mathrm d\lambda$$ Is it possible to solve it in a closed form ? Thanks. AI: Since the limits of the integration are pretty "exotic" (not $0,1$ or $\infty$), I would only expect the closed form if the indefinite integral has it. For the latter, WA gives relatively closed solution which may be of interest to you - just follow the link.
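If a closed form is not actually needed, numerical quadrature is straightforward; the values of $V$, $q$, $R$ below are placeholders I made up, since the question does not fix them:

```python
import numpy as np
from scipy.integrate import quad

V, q, R = 10.0, 1.3, 5.0   # placeholder constants, not from the question

def integrand(lam):
    return np.exp(-3.91 / V * (lam / 0.55) ** (-q) * R)

tau, err_estimate = quad(integrand, 3, 5)
print(tau, err_estimate)
```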
H: If there is a continuous function between linear continua, then this function has a fixed point? Let $f:X\to X$ be a continuous map and $X$ be a linear continuum. Is it true that $f$ has a fixed point? I think the answer is "yes" and here is my proof: Assume to the contrary that for any $x\in X$, either $f(x)<x$ or $f(x)>x$. Then, $A=\{x: f(x)<x\}$ and $B=\{x: f(x)>x\}$ are disjoint and their union gives $X$. Now if we can show that both $A$ and $B$ are open we obtain a contradiction because $X$ is connected. How can we show that $A$ and $B$ are open in $X$? AI: The function $f(x)=x+1$ is a counterexample. Here both sets $A$ and $B$ are open, but one of them is empty :-) Brouwer fixed point theorem asserts that the closed ball has the property you are looking for: every continuous self-map will have a fixed point. But the proof requires tools well beyond the general topological arguments you outlined. The most straightforward proof passes via relative homology or homotopy, and exploits the nontriviality of certain homology (resp. homotopy) classes.
H: Continuity of scalar product In a Hilbert space $H$ with inner product and associated norm, why does $\|x-x_n\| \longrightarrow 0$ and $\|y-y_n\| \longrightarrow 0$ imply $\langle x_n,y_n\rangle \longrightarrow\langle x,y\rangle$? I understand that by Cauchy-Schwarz $\lvert\langle x-x_n,y-y_n\rangle\rvert \leq \|x-x_n\|\cdot\|y-y_n\|\xrightarrow{n\to\infty} 0$ but how do I get to $\lvert\langle x,y\rangle-\langle x_n,y_n\rangle \rvert\longrightarrow 0$? AI: Hint: $$|\langle x,y\rangle-\langle x_n,y_n\rangle|=|\langle x,y\rangle-\langle x_n,y\rangle+\langle x_n,y\rangle-\langle x_n,y_n\rangle|;$$ Grouping the terms, using the triangle inequality for $|\cdot|$ and Cauchy-Schwarz helps: $|\langle x-x_n,y\rangle|\leq\|x-x_n\|\,\|y\|\to 0$ and $|\langle x_n,y-y_n\rangle|\leq\|x_n\|\,\|y-y_n\|\to 0$, where $\|x_n\|$ is bounded because the sequence converges.
H: Explicit formula for the series $ \sum_{k=1}^\infty \frac{x^k}{k!\cdot k} $ I was wondering if there is an explicit formulation for the series $$ \sum_{k=1}^\infty \frac{x^k}{k!\cdot k} $$ It is evident that the converges for any $x \in \mathbb{R}$. Any ideas on a formula? AI: You can have the closed form $$\sum_{k=1}^{\infty}\frac{x^k}{k k!}= -\gamma-\ln(-x)-\Gamma(0, -x), $$ where $\Gamma(s,x)$ is the upper incomplete gamma function. Another possible form is $$ \sum_{k=1}^\infty \frac{x^k}{k!\cdot k}=-\gamma-\ln \left( -x \right) -{\it Ei} \left( 1,-x \right), $$ where $$ Ei(a, z) = \int_{1}^{\infty} \frac{e^{-tz}}{t^a}dt,\quad 0 < Re(z),$$ which is known as the exponential integral. The following relation between the exponential integral and the upper incomplete gamma function is useful $$ Ei(a, z) = z^{a-1}\Gamma(1-a, z). $$
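A numeric cross-check of the closed form, using the equivalent expression $\operatorname{Ei}(x)-\gamma-\ln x$, which is valid for real $x>0$:

```python
from mpmath import mp, ei, euler, log, factorial

# Compare partial sums of sum_{k>=1} x^k/(k*k!) with Ei(x) - gamma - ln(x).
mp.dps = 30
x = mp.mpf('0.7')
partial = sum(x**k / (k * factorial(k)) for k in range(1, 40))
closed = ei(x) - euler - log(x)
print(partial)
print(closed)   # the two agree to working precision
```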
H: Complete metric space question Suppose $(X,d)$ is a nonempty and complete metric space and $f:X \to X$ is a contraction. Show that there exists exactly one $x \in X$ such that $f(x)=x$. AI: This is the Banach fixed-point theorem (also called the contraction mapping theorem). Proof idea: iterate $f$ from any starting point $x_0$; the contraction property makes the sequence $(f^n(x_0))$ Cauchy, completeness gives it a limit, and continuity of $f$ shows the limit is a fixed point. Uniqueness follows because two fixed points $x,y$ would satisfy $d(x,y)=d(f(x),f(y))\leq q\,d(x,y)$ with $q<1$, forcing $d(x,y)=0$. For the full proof see the link.
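The proof strategy is easy to watch in action: $\cos$ is a contraction on $[0,1]$ (since $|\cos'|=|\sin|\leq\sin 1<1$ there, and $\cos$ maps $[0,1]$ into itself), so iterating it converges to its unique fixed point:

```python
import math

# Fixed-point iteration for the contraction cos on [0, 1].
x = 0.5
for _ in range(50):
    x = math.cos(x)
print(x)                 # ~0.739085..., the unique fixed point
print(math.cos(x) - x)   # ~0, i.e. f(x) = x
```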
H: Convert a line in $ \Bbb R^3 $ given as intersection of two planes to parametric form. We have a line in $ \Bbb R^3 $ given as intersection of two planes: $$ \left\{ \begin{aligned} A_1x+B_1y+C_1z + D_1 &=0 \\ A_2x+B_2y+C_2z + D_2 &=0 \\ \end{aligned} \right. $$ How to represent it in parametric form: $$ \left\{ \begin{aligned} x &= x_0 +at\\ y &= y_0 +bt \\ z &= z_0 +ct \\ \end{aligned} \right. $$ ? example I'm doing is: $ l: \left\{ \begin{aligned} x + y - z + 1 &=0 \\ x - y + z - 1 &=0 \\ \end{aligned} \right. $ AI: What you are looking for is a point on the line $\left(x_0, y_0, z_0\right)$ and a direction vector of the line $\left(a, b, c\right)$. To find a point on the line, you can for example fix $x$ and find $y,z$ (there are some cases where this won't work). To find a direction vector, note that the vector $\left(A_1,B_1,C_1\right)$ is orthogonal to the first plane and therefore to the line. Likewise, $\left(A_2,B_2,C_2\right)$ is orthogonal to the line. If you take their vector product you will get a direction vector. Another way to find a direction vector is to find another point on the line and subtract one point from the other.
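For the concrete example, a short numpy sketch; here the point on the line is picked as the minimum-norm least-squares solution of the $2\times 3$ system, which avoids the "fix $x$" degenerate cases:

```python
import numpy as np

n1, d1 = np.array([1.0, 1.0, -1.0]), 1.0   # x + y - z + 1 = 0
n2, d2 = np.array([1.0, -1.0, 1.0]), -1.0  # x - y + z - 1 = 0

direction = np.cross(n1, n2)               # perpendicular to both normals
A = np.vstack([n1, n2])
b = -np.array([d1, d2])
point, *_ = np.linalg.lstsq(A, b, rcond=None)  # min-norm solution lies on the line
print("point:", point)          # (0, -0.5, 0.5), satisfies both plane equations
print("direction:", direction)  # (0, -2, -2), i.e. parallel to (0, 1, 1)
```

This reproduces the parametrization $x=0$, $y=-1-2t$, $z=-2t$ up to the choice of base point and scaling of the direction.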
H: Prove this inequality concerning integral average. Let $f\in L^1([a,b])$, and extend $f$ to be $0$ outside $[a,b]$. Let $$ \phi(x)=\frac{1}{2h}\int_{x-h}^{x+h}f $$ How to prove $$ \int_a^b\left | \phi\right | \leqslant\int_a^b\left|f\right| $$ To @martini: I asked this question because I find something strange: if we let $f(x)=x^2$, then $\phi(x)=x^2+\frac{1}{3}h^2$, so the inequality doesn't hold apparently. Clearly, the problem is I do not extend $f$ to be $0$ outside $[a,b]$, so $\phi$ should be smaller near $a$ and $b$. However, when I put this example into your proof, I cannot figure out what goes wrong. It seems you have proved $$ \int_a^b(x^2+\frac{1}{3}h^2) \leqslant\int_a^b x^2 $$ I think the problem lies in your exchange of $\chi$, but still not clear. AI: We have where $\chi_I$ denotes the indicator function of $I$$\def\abs#1{\left|#1\right|}$ \begin{align*} \int_a^b \abs{\phi(x)}\, dx &= \frac 1{2h}\int_a^b \abs{\int_{x-h}^{x+h} f(\xi)\, d\xi}\, dx\\ &\le \frac 1{2h}\int_a^b \int_{x-h}^{x+h} \abs{f(\xi)}\, d\xi\, dx\\ &= \frac 1{2h}\int_a^b \int_a^b \abs{f(\xi)}\chi_{[x-h,x+h]}(\xi)\, dx\, d\xi\\ &= \frac 1{2h}\int_a^b\abs{f(\xi)} \int_a^b \chi_{[\xi-h,\xi+h]}(x)\, dx\,d\xi\\ & \text{note that $x-h \le \xi\le x+h \iff \xi-h\le x \le \xi+h$}\\ &= \frac 1{2h}\int_a^b\abs{f(\xi)}\int_{\xi-h}^{\xi+h}\, dx\, d\xi\\ &\le \frac {2h}{2h} \int_a^b \abs{f(\xi)}\, d\xi. \end{align*}
H: Basis of a space of upper triangular matrices with trace 0 What would a basis of the space of $n \times n$ upper triangular matrices with trace 0 be? Is it trivial? AI: Hint: if $\operatorname{tr} A=0$ then $\sum_{i=1}^na_{ii}=0$, so $a_{nn}=-a_{11}-\dots-a_{n-1\,n-1}$: $$ A=\begin{pmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ 0 & a_{22} & a_{23} & \cdots & a_{2n} \\ \vdots & \vdots& \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & (-a_{11}-\dots-a_{n-1\,n-1}) \end{pmatrix}$$ $$=\pmatrix{ \bullet&\bullet&\bullet&\bullet&\bullet\\ \color{white}\circ&\bullet&\bullet&\bullet&\bullet\\ \color{white}\circ&\color{white}\circ&\bullet&\bullet&\bullet\\ \color{white}\circ&\color{white}\circ&\color{white}\circ&\bullet&\bullet\\ \color{white}\circ&\color{white}\circ&\color{white}\circ&\color{white}\circ&\color{white}\circ\\ }$$ Let $B=\{E_{ij}:i\lt j\}\cup\{E_{ii}-E_{nn}:i=1,2,\dots,n-1\}$. (Note that $E_{ii}$ alone has trace $1$ and so does not lie in the space; the diagonal basis elements must themselves be traceless, which is what $E_{ii}-E_{nn}$ achieves.) Then $B$ is a basis for the space of upper triangular matrices with trace $0$, because $B$ generates the space and is linearly independent; in particular the dimension is $\frac{n(n-1)}{2}+(n-1)=\frac{n(n+1)}{2}-1$.
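A quick dimension check of this basis for $n=3$, where we expect $\frac{n(n+1)}{2}-1=5$ independent, traceless, upper triangular matrices:

```python
import numpy as np

n = 3
basis = []
# E_ij for i < j: the strictly upper triangular part.
for i in range(n):
    for j in range(i + 1, n):
        E = np.zeros((n, n)); E[i, j] = 1.0
        basis.append(E)
# E_ii - E_nn for i = 1..n-1: the traceless diagonal part.
for i in range(n - 1):
    E = np.zeros((n, n)); E[i, i] = 1.0; E[n - 1, n - 1] = -1.0
    basis.append(E)

assert all(abs(np.trace(E)) < 1e-12 for E in basis)   # all traceless
M = np.array([E.flatten() for E in basis])
print(len(basis), np.linalg.matrix_rank(M))           # 5 and 5: independent
```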
H: Integral closure of $\mathbb{Z}$ in $\mathbb{Q}[i]$ I am trying to compute the integral closure of $\mathbb{Z}$ in $\mathbb{Q}[i].$ I have managed to show that $\mathbb{Z}[i]$ is inside the integral closure, and I suspect it is the entire thing. Can someone give me a nudge in the right direction? AI: $\mathbb{Z}[i]$ is a Euclidean ring with respect to the norm $N(x+iy) = x^2 + y^2$, so it's a unique factorization domain, and therefore integrally closed. Since every element of $\mathbb{Q}(i)$ that is integral over $\mathbb{Z}$ is in particular integral over $\mathbb{Z}[i]$, integral closedness forces it to lie in $\mathbb{Z}[i]$; combined with the inclusion you already proved, the integral closure is exactly $\mathbb{Z}[i]$.
H: The smallest positive integer in the set $\{24u+60v+200w : u,v,w \in \Bbb Z\}$is given by which of the following number? I am stuck on the following problem: The smallest positive integer in the set $\{24u+60v+200w : u,v,w \in \Bbb Z\}$is given by which of the following number? The options are: $2,4,6,24$. Since $24u+60v+200w=4(6u+15v+50w)$, I think the answer is between 4 and 24. Can someone explain it? AI: HINT: It shouldn't be too hard to convince yourself (or even prove) that the greatest common divisor of 24, 60 and 200 must also divide $24u+60v+200w$ for any $u,v,w\in \mathbb{Z}$ and in fact we can find $u,v$ and $w$ for any multiple of the gcd.
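A quick check that the gcd is indeed $4$ and that $4$ is actually attained; the coefficients below are one solution obtained from the extended Euclidean algorithm:

```python
from math import gcd

print(gcd(gcd(24, 60), 200))      # 4
u, v, w = -34, 17, -1             # one explicit combination
print(24 * u + 60 * v + 200 * w)  # 4, so the smallest positive value is 4
```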
H: if $1=\int _{ 1 }^{ \infty }{ \frac { ax+b }{ x(2x+b) } dx } $ then $a+b=?$ I have the following question: if $1=\int _{ 1 }^{ \infty }{ \frac { ax+b }{ x(2x+b) } dx } $ then $a+b=?$ a) $0$ b) $e$ c) $2e-2$ d) $1$ I tried finding it's anti-derivative but that doesn't seem to help. I also tried to get it to $\int _{ 1 }^{ \infty }{(\alpha-1) \frac { 1 }{ x^\alpha } dx } $ with $\alpha>1$ since that equals $1$ but without finding an answer. What am I missing here? Thanks. AI: Note that, for the integral to converge, $a$ has to equal $0$ which implies $$ 1=\int _{ 1 }^{ \infty }{ \frac { b }{ x(2x+b) } dx }=\ln(b+2)-\ln(2)=\ln(\frac{b+2}{2}) $$ $$ \implies \frac{b+2}{2}=e \implies b=2 e -2 \implies a+b= 2 e-2 .$$ Note: Observe that, the integrand $$ \frac { ax+b }{ x(2x+b) }\sim \frac{ax}{2x^2}=\frac{a}{2x}, $$ which is not integrable on $[1,\infty)$.
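A numerical confirmation of the answer: with $a=0$ and $b=2e-2$ the integral evaluates to $\ln\!\big(\tfrac{b+2}{2}\big)=\ln e=1$.

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.0, 2 * np.e - 2
val, err = quad(lambda x: (a * x + b) / (x * (2 * x + b)), 1, np.inf)
print(val)   # ~1.0, so a + b = 2e - 2 (option c)
```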
H: The number of limit points of the set $\left\{\frac1p+\frac1q:p,q \in \Bbb N\right\}$ is which of the following: I am stuck on the following problem: The number of limit points of the set $\left\{\frac1p+\frac1q:p,q \in \Bbb N\right\}$ is which of the following: $1$ $2$ Infinitely many Finitely many If I take $p$ to be fixed (say=$k$) and let $q \to \infty$, then the limit point is given by $\frac{1}{k}$. Since $k$ is an arbitrary natural number, the number of limit points is infinite. The same case can be continued after taking $q$ to be fixed (say=$k_1$). I think option 3 is the right choice. Am I on the right track? Can someone give further explanation? AI: Your answer is correct. You have shown that there are infinitely many limit points, which is enough information to answer the multiple choice question. You can prove that the set of limit points of this set is $\{0\}\cup\{1/k:k\in\mathbb N\}$. Since $\frac{1}{k} = \frac{1}{2k} + \frac{1}{2k}$, that means that $0$ is the only limit point that is not in your original set.
H: Find the distance between two lines in $ \Bbb R^3 $ There are two lines in $ \Bbb R^3 $ given in parametric form: $$ l_1: \left\{ \begin{aligned} x &= x_1 +a_1t\\ y &= y_1 +b_1t \\ z &= z_1 +c_1t \\ \end{aligned} \right. $$ $$ l_2: \left\{ \begin{aligned} x &= x_2 +a_2s \\ y &= y_2 +b_2s \\ z &= z_2 +c_2s \\ \end{aligned} \right. $$ What's the simplest method (or formula) for finding (the shortest) distance beteen them? example I'm doing: $ l_1: \left\{ \begin{aligned} x &= 0 \\ y &= -1 - 2t \\ z &= -2t \\ \end{aligned} \right. $ $ l_2: \left\{ \begin{aligned} x &= 3s \\ y &= 1 - s \\ z &= 2 + 4s \\ \end{aligned} \right. $ AI: Hint: Find a vector $\mathbf A$ perpendicular to both lines and then find the projection onto $\mathbf A$ of any vector joining $\ell_1$ and $\ell_2$. Pictures help.
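Putting the hint into numbers for the given example; as it happens, these two particular lines intersect (at $(0,1,2)$), so the distance comes out $0$:

```python
import numpy as np

P1, d1 = np.array([0.0, -1.0, 0.0]), np.array([0.0, -2.0, -2.0])  # l1
P2, d2 = np.array([0.0, 1.0, 2.0]), np.array([3.0, -1.0, 4.0])    # l2

A = np.cross(d1, d2)   # perpendicular to both lines
dist = abs(np.dot(P2 - P1, A)) / np.linalg.norm(A)
print(dist)            # 0.0: the lines meet at (0, 1, 2)
```

The same formula $|(P_2-P_1)\cdot A|/\|A\|$ gives the shortest distance whenever the lines are not parallel (in which case $A=0$ and one projects onto a normal of either line instead).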
H: Poles of complex function Let $f:\mathbb{C} \to \mathbb{C} $ be meromorphic and $\{ z_j \}$ be its poles. In the text I am reading $f$ also satisfies the identity $$ f(z)^{-1} = \overline{f(\overline{z})} \qquad \text{for } \operatorname{Re}z >0. $$ It also happens to satisfy the property that $$ g(z) := \frac{f'(z)}{f(z)}, \qquad z\in \mathbb{C}, $$ is even. Then it states that, consequently, the function $f(z)^{-1}$ will have poles at $\{-z_j \}$. Why? AI: Since the derivative of $\log(f)$ is even, $\log(f)$ is odd. That means that $f=e^h$ where $h$ is an odd function. Thus, $$ f(-z)=e^{h(-z)}=e^{-h(z)}=1/f(z) $$ Therefore, if $z_j$ is a pole of $f$, $-z_j$ will be a pole of $1/f$. Since $g$ is even, if there is a pole at the origin, it must have residue $0$ (i.e. the coefficient of $1/z$ must be $0$). Thus, $h$, the anti-derivative of $g$, can be well-defined in a neighborhood of the origin. This is enough to establish that $f(z)f(-z)=1$ in a neighborhood of the origin. By analytic continuation, we can extend this property to the domain of $f$ connected to the origin.
H: An element of $L^2(0,T;V_n)$. Let $V$ be Hilbert with basis $w_j.$ Let $V_n = \text{span}(w_1, ..., w_n)$. Is it true that every element $v \in L^2(0,T;V_n)$ can be written as $$v(t) = \sum_{j=1}^n a_j(t)w_j?$$ I think so. But my doubt comes because I am told that $L^2(0,T;V_n)$ has basis $$\{l_iw_j : i \in \mathbb{N}, j = 1,...,n\}$$ where $l_i$ is the basis for $L^2(0,T)$, so this contradicts what I wrote above... AI: There is no contradiction. Since $\{l_i(t)\cdot w_j : i \in \mathbb{N}, j = 1,...,n\}$ is a basis, you can write $$ v(t)=\sum\limits_{i=1}^\infty\sum\limits_{j=1}^n \alpha_{ij} l_i(t) w_j $$ After changing the order of summation we get $$ v(t) =\sum\limits_{j=1}^n \sum\limits_{i=1}^\infty \alpha_{ij} l_i(t) w_j =\sum\limits_{j=1}^n w_j \sum\limits_{i=1}^\infty \alpha_{ij} l_i(t) $$ It remains to set $$ a_j(t)=\sum\limits_{i=1}^\infty \alpha_{ij} l_i(t) $$
H: Trigonometric identity: $\frac {\tan\theta}{1-\cot\theta}+\frac {\cot\theta}{1-\tan\theta} =1+\sec\theta\cdot\csc\theta$ I have to prove the following result : $$\frac {\tan\theta}{1-\cot\theta}+\frac {\cot\theta}{1-\tan\theta} =1+\sec\theta\cdot\csc\theta$$ I tried converting $\tan\theta$ & $\cot\theta$ into $\cos\theta$ and $\sin\theta$. That led to a huge expression which I wasn't able to simplify. Please help!!! AI: You are on the right track. Writing $\tan\theta$ as $\dfrac {\sin\theta}{\cos\theta}$ and $\cot\theta$ as $\dfrac {\cos\theta}{\sin\theta}$, we get $\dfrac {\frac {\sin\theta}{\cos\theta} }{1-\frac {\cos\theta}{\sin\theta} }+\dfrac {\frac {\cos\theta}{\sin\theta} }{1-\frac {\sin\theta}{\cos\theta} }$ $= \dfrac {\sin^2\theta}{\cos\theta\cdot(\sin\theta-\cos\theta)} + \dfrac {\cos^2\theta}{\sin\theta\cdot(\cos\theta-\sin\theta)}$ (how?) $= \dfrac {\sin^2\theta}{\cos\theta\cdot(\sin\theta-\cos\theta)} - \dfrac {\cos^2\theta}{\sin\theta\cdot(\sin\theta-\cos\theta)}$ $=\dfrac{1}{(\sin\theta-\cos\theta)}\Big(\dfrac {\sin^2\theta}{\cos\theta}-\dfrac {\cos^2\theta}{\sin\theta}\Big)$ $=\dfrac{1}{(\sin\theta-\cos\theta)}\Big(\dfrac {\sin^3\theta-\cos^3\theta}{\sin\theta\cdot\cos\theta}\Big)$ $=\dfrac{\sin\theta-\cos\theta}{\sin\theta-\cos\theta}\cdot\dfrac{\big(\sin^2\theta+\sin\theta\cdot\cos\theta+\cos^2\theta\big)}{\sin\theta\cdot\cos\theta}$ (how?) $=1\cdot \dfrac{1+\sin\theta\cdot\cos\theta}{\sin\theta\cdot\cos\theta}$ (why?) which is $1+\sec\theta\cdot\csc\theta$ QED.
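A symbolic check of the identity, with a numeric spot check as a fallback (the evaluation point is arbitrary):

```python
import sympy as sp

t = sp.symbols('theta')
lhs = sp.tan(t) / (1 - sp.cot(t)) + sp.cot(t) / (1 - sp.tan(t))
rhs = 1 + sp.sec(t) * sp.csc(t)
print(sp.simplify(lhs - rhs))   # should reduce to 0
print((lhs - rhs).subs(t, 0.7)) # numeric spot check, ~0
```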
H: Inegrable functions $f_k$ with $\frac 1 {2 \pi} \int_0^{2 \pi} |f_k(x)|^2 dx \rightarrow 0$ where $\lim_{k \rightarrow \infty} f_k$ does not exist I am searching for integrable functions $(f_k)_{k=0}^\infty$ on the circle with $$ \lim_{k \rightarrow \infty} \frac 1 {2\pi} \int_0^{2 \pi} |f_k(x)|^2 dx = 0 $$ and s.t. $\lim_{k \rightarrow \infty} f_k(x)$ does not exist for all $x$. AI: Take the usual counterexample on the unit interval: $\ \ f_1=\chi_{[0,1]}$, $f_2=\chi_{[0,{1\over2}]}$, $f_3=\chi_{[{1\over2},1]}$, $f_4=\chi_{[0,{1\over4}]}$, $f_5=\chi_{[{1\over4},{2\over4}]}$, $f_6=\chi_{[{2\over4},{3\over4}]}$, $f_7=\chi_{[{3\over4},1]}$, $\ldots$ and "wrap them around the circle". So, $f_1$ is the characteristic function over the entire unit circle, $f_2$ is the characteristic function of the upper half of the unit circle, $f_3$ is the characteristic function of the lower half of the unit circle, $...\,\,$.
H: Show $A^*A$ is a nonnegative operator This is another exercise from my lecturer; I haven't found any idea how to start. Let $A$ be an arbitrary linear operator acting on the unitary space $\mathbb{C}^n$, and let $A^*$ be its adjoint. Prove that $A^*A$ is a nonnegative operator. Prove that $A^*A$ is a positive operator if $A$ is nonsingular. Correct me if the statement of the problem seems wrong. Thank you. AI: Just compute the inner product: $\langle x \mid A^\ast A x\rangle = \langle Ax \mid Ax \rangle \geqslant 0$; and if $A$ is nonsingular, $Ax \neq 0$ for $x \neq 0$, so then $\langle Ax \mid Ax \rangle > 0$ and $A^*A$ is positive.
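The matrix version of this statement is easy to test numerically: the eigenvalues of $A^*A$ are nonnegative, and strictly positive when $A$ is nonsingular (the random matrix below is a generic, hence nonsingular, example):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = A.conj().T @ A                 # A* A, a Hermitian matrix
print(np.linalg.eigvalsh(H))       # all >= 0; all > 0 since this A is nonsingular
```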
H: Domain and Range in function composition I just came out of an exam that included this question. Let $$g(x)=\sqrt{x^2-1}, ~|x|\geqslant1,$$$$f(x)=\sqrt{x^2+1},~x\in \mathbb{R}.$$ Find $gf$ and $fg$, stating their domain and range. My answer was $$gf=g(f(x))=\sqrt{(\sqrt{x^2+1})^2-1}=\sqrt{x^2+1-1}=|x|$$with $D_{gf}:x\in \mathbb{R}$ and $R_{gf}:0\leqslant gf<\infty$ and $$fg=f(g(x))=\sqrt{(\sqrt{x^2-1})^2+1}=\sqrt{x^2-1+1}=|x|$$with $D_{fg}: |x|\geqslant 1$ and $R_{fg}: 1\leqslant fg<\infty$. Can somebody tell me whether my answer is correct? In particular, although the $fg=gf$ they have different domain and range because they are obtained in a different way, is this right? Thanks. Edit: It was given in the question that $fg$ refers to the composition of the two functions. AI: $gf$ usually refers to the product of the functions $g$ and $f$, in which case $(fg)(x) = f(x)g(x) = g(x)f(x) = (gf)(x)$. The domain will be $\{x \in \mathbb R : |x| \geq 1\} \;$ (which is the intersection of the domains of $f$ and $g$), and the range will be $\mathbb{R^+_0}$. If indeed $fg$ is referring to the composition of $f$ and $g$, your answer appears correct!
H: Proof that restriction of hermitian operator to its invariant subspace is also hermitian Proof that restriction of hermitian operator to its invariant subspace is also hermitian What would be the most elegant way to prove this? AI: Let $L$ be invariant subspace of $A$, then $$ \begin{align} A\text{ is Hermitian}&\Longleftrightarrow \forall x,y\in H\quad \langle Ax,y\rangle=\langle x,Ay\rangle\\ &\Longrightarrow \forall x,y\in L\quad \langle Ax,y\rangle=\langle x,Ay\rangle\\ &\Longleftrightarrow A|_L \text{ is Hermitian} \end{align} $$
H: Jacobian Transformation Multiple Integral The question says: Sketch the region under the transformation $u=x+y$ and $v=y$ for $$R=\{(x,y): 0\leqslant x\leqslant 1 , 0\leqslant y \leqslant1\}$$ Find the area of the region. The given answer is $1$. I just need help on the calculation of the area; no need to draw the graph. Could someone guide me to start? AI: Hint: the set $R(x,y)$ is the square $[0,1]\times [0,1]$ in $\mathbb R^{2}$. You are looking for the transformation of variables $$(x,y)\mapsto (u-v,v).$$ To deduce it I used the definitions of $u$ and $v$. In fact: $u=x+y$ implies $x=u-y$; as $v=y$, we arrive at $x=u-v$. Now you need to find the image of the square $R(x,y)$ under the above coordinate transformation. In other words, you need to solve the inequalities $$0\leq u-v\leq 1$$ and $$0\leq v\leq 1$$ (in this second case there is nothing to do!). Can you draw the set $\tilde{R}(u,v)=\{(u,v): 0\leq u-v\leq 1, 0\leq v\leq 1\}$ in the $u$-$v$-plane? Once the geometry is clear, in order to find the area of $\tilde{R}(u,v)$ you need to compute the integral $\Sigma_{\tilde{R}(u,v)}=\int_{\tilde{R}(u,v)}dudv=...$ and the exercise is done. (Indeed $\tilde R$ is a parallelogram with base $1$ and height $1$, so its area is $1$; consistently, the Jacobian of the map is $\det\begin{pmatrix}1&1\\0&1\end{pmatrix}=1$, so the transformation preserves area, matching the given answer.)
H: Does removing random items effect probability? Let us say you have a finite set of things (maybe playing cards?) called $S$. It has $n$ things. No let us say we want a thing from an arbitrary set $E$. First we remove $k$ number random things from S, were $0 \leq k<n$. Now I pick a random thing from $S$. Does the probability that the thing is in $E$ in terms of $E$, $k$, $n$ and $S$ actually depend upon $k$? If so, how? AI: Denote with $R$ the set that remains after removing $k$ elements from $S$. What the question boils down to is if the following equality holds: $$\Pr(x \in E) = \Pr(x \in E \mid x \in R)$$ where the right-hand side is a conditional probability. This conditional probability can be calculated by the identity: $$\Pr(x \in E \text{ and } x \in R) = \Pr(x \in E \mid x \in R) \Pr(x \in R)$$ Now because $R$ picks (uniformly) randomly from $S$, $x \in R$ is independent from $x \in E$, that is, $\Pr(x \in E \text{ and } x \in R) = \Pr(x \in E)\Pr(x \in R)$. It finally follows that (using $\Pr(x \in R) > 0$): $$\begin{align} \Pr(x \in E \mid x \in R) &= \frac{Pr(x \in E \text{ and } x \in R)}{\Pr(x \in R)}\\ &= \frac{\Pr(x \in E)\Pr(x \in R)}{\Pr(x \in R)}\\ &= \Pr(x \in E) \end{align}$$ which means that the probability that an $x \in R$ is in $E$ actually does not depend on $R$ whatsoever (as long as $R$ is not empty, i.e. $k < n$).
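A Monte Carlo version of the argument; the set sizes and the value of $k$ below are arbitrary choices:

```python
import random

S = list(range(100))     # the full set, n = 100
E = set(range(30))       # target subset, |E| / |S| = 0.3
k = 60                   # number of items removed first

trials, hits = 200_000, 0
for _ in range(trials):
    R = random.sample(S, len(S) - k)  # what remains after removing k items
    if random.choice(R) in E:
        hits += 1
print(hits / trials)     # ~0.3, independent of k (as long as k < n)
```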
H: Calculate the limit $\lim_{x \to \infty} \ \frac{1}{2}\sum\limits_{p \leq x} p \log{p}$ (here p is a prime) Calculate the limit $\lim_{x \to \infty} \ \frac{1}{2}\sum\limits_{p \leq x} p \log{p}$ (here the sum goes over all the primes less than or equal to x) using the Prime Number Theorem. I think I've managed to show by definition that the limit is infinity but couldn't think of an elegant way of calculating it using the Prime Number Theorem. Any ideas ? AI: $$\lim_{x\to\infty}\sum\limits_{p \leq x} p \log{p}>\lim_{x\to\infty}\sum\limits_{p \leq x}1$$ $$=\lim_{x\to\infty}\pi(x)\tag{$\pi(x)$ is prime counting func.}$$ $$=\lim_{x\to\infty}\left(\frac{\pi(x)}{\frac{x}{\log x}}\right)\left(\frac{x}{\log x}\right)=\left(\lim_{x\to\infty}\frac{\pi(x)}{\frac{x}{\log x}}\right)\left(\lim_{x\to\infty}\frac{x}{\log x}\right)$$ $$=\lim_{x\to\infty}\frac{x}{\log x}\to\infty$$ Here, $\lim_{x\to\infty}\frac{\pi(x)}{\frac{x}{\log x}}=1$ by prime number theorem.
H: Recursive Properties As They Relate To Domains I'm struggling with understanding what restrictions recursive properties generally place on domains. also on a more global level I don't think I fully understand how functions are described in symbolic logic. I hope these two examples will illustrate these issues: Say there's a function $g:N \rightarrow N$ that satisfies the recursive property $g(n+1) = 7n + 2g(n)$ $\forall n \in N$ My first question is this: Let's say I define a new function $g(n) = n^2 + 5n + 2$. What's the difference between $g(n)$ and the $g$ I described earlier? If I went on and described another $g(n)$ as being $g(n) = 10n$, what claim would I be making about both $g(n) = n^2 + 5n + 2$ and $g(n) = 10n$? Is this allowed? Are they the same? Are they different? Finally, if I stated that $g(n) = 10n $ holds true for $\forall n \in N$, would I be making the following claim? $\forall n \in N, g(n) = 10(n) \Rightarrow g(n+1) = 7n + 2g(n)$ or would I be making the following claim? $\forall n \in N, g(n) = 10(n) \Leftrightarrow g(n+1) = 7n + 2g(n)$ AI: Ok, firstly, when you want to define a new function PLEASE use a new symbol, especially when you go on to reference both. So I am going to say $f(n+1)=7n+2f(n), g(n)=n^2+5n+2, h(n)=10n$. Now $$g(n+1)=(n+1)^2+5(n+1)+2=n^2+2n+1+5n+5+2=n^2+5n+2+2n+6=g(n)+2n+6$$ So firstly $g$ does not satisfy the recurrence relation for $f$. Also notice at this point that we know precisely what function $g$ is, i.e. on input $n$ we can work out $g(n)$. However we don't know how to calculate $f$, we only know how to calculate it if we know smaller values, which we don't. So the recursive relation which $f$ satisfies does not uniquely specify the function $f$; we need to specify $f(0)$. To illustrate: if we have a function $i(n)$ which satisfies $i(n+1)=i(n)+2n+6$, do we know that $i=g$? Regarding $h$: certainly $g$ and $h$ are different functions, for instance $g(0)=2$ and $h(0)=0$. For your second question you are getting really confused: both displayed claims are false for $h(n)=10n$, since then $h(0)=0$ and the recurrence would force $h(1)=7\cdot 0+2h(0)=0$, yet $h(1)=10$; so $h$ satisfies neither direction of the implication for all $n$.
H: Family of complemented subspaces Let $X$, $Y$, $A$, $B$ be topological vector spaces. Given two jointly continuous families of linear injective maps $P: Y \times A \rightarrow X$ and $R: Y \times B \rightarrow X$, such that for $y=0$ we have the topological complementation $X = Im\, P_0 \oplus Im\, R_0$. Here and in the following $P_y$ denotes the contracted map $A \rightarrow X, a \rightarrow P(y,a)$. I would like to have criteria on $P$ and $R$ for the following claim: There exists an open subset $W$ in $Y$ around $0$ such that $X = Im\, P_y \oplus Im\, R_y$ for all $y \in W$. I might have a (fuzzy) proof for the case when $A$ (or $B$) is finite dimensional, but are there other cases? Partial answers for $X$ being a Banach space are also ok, but I prefer at least Fréchet spaces. AI: I will assume $A,B, X$ are Banach spaces, so that I can use the Banach isomorphism theorem (BIT) and the fact that $GL(X)$ (the bounded invertible operators on $X$) is open in $B(X)$ (the bounded linear operators on $X$). I only assume that $Y$ is a topological space, and I fix the initial condition at some $y_0$ in $Y$. Note that $P_0:A\longrightarrow \mbox{Im} P_0$ is injective continuous onto a closed subspace of $X$, whence a Banach space. Thus $P_0$ is a linear homeomorphism onto its range by BIT. Likewise, $R_0$ is a linear homeomorphism from $B$ onto its range. So up to composing with the isomorphism from $X$ onto $A\times B$ given by $(P_0^{-1},R_0^{-1})$, we can assume that $X=A\times B$ and $P_0$ (resp. $R_0$) is the canonical injection of $A$ (resp. $B$). Now consider $\overline{P_y}:A\times B\longrightarrow A\times B$ defined by $\overline{P_y}(a,b):=P_y(a)$. Likewise set $\overline{R_y}(a,b):=R_y(b)$. By your assumptions, the map $$ y\longmapsto S_y:=\overline{P_y}+\overline{R_y}:(a,b)\longmapsto P_y(a)+R_y(b) $$ is continuous from $Y$ to $B(X)=B(A\times B)$ the bounded linear operators on $X=A\times B$. The initial condition is $S_{y_0}=\mbox{Id}_X$. This is in particular invertible. Since $GL(X)$ is open when $X$ is a Banach space, it follows that there exists an open neighborhood $W$ of $y_0$ in $Y$ such that $S_y$ be invertible for every $y\in W$. If $S_y$ is invertible, in particular it is surjective whence $X=A\times B=\mbox{Im} S_y=\mbox{Im}P_y+\mbox{Im} R_y$ for every $y\in W$. And also $S_y$ is injective. So if $x\in \mbox{Im}P_y\cap\mbox{Im} R_y$, we get $x=P_y(a)=R_y(b)$ whence $S_y(a,-b)=P_y(a)-R_y(b)=x-x=0$. Thus $a=b=0$ and $x=0$. Therefore $\mbox{Im}P_y\cap\mbox{Im} R_y=\{0\}$ for every $y\in W$. Since $S_y^{-1}$ is bounded, note that both $\mbox{Im}P_y$ and $\mbox{Im} R_y$ are closed for every $y\in W$. Conclusion: there does exist an open neighborhood $W$ of $y_0$ in $Y$ such that we have the topological complementation $X=\mbox{Im}P_y\oplus \mbox{Im} R_y$ for every $y\in W$. Removing the Banach space assumptions on $A,B,X$ seems impossible, since we really need $GL(X)$ to be open, which is not true in a general Fréchet space. Unless we make the ad hoc assumption on $P_y$ and $R_y$ from the beginning that $S_y$ be invertible near $y_0$. Then the argument works the same with $A,B,X$ Fréchet spaces, as BIT holds.
H: Projection of a vector I am reading some material on the Householder reflection and it describes the projection of a vector $\vec{x}$, onto $\vec{v}$ in vector notation to be: $\frac{\vec{x}\cdot \vec{v}}{||\vec{v}||^2}\vec{v}$ Where $a\cdot b$ is the dot product of vectors a and b. So how is that? I know if θ is the angle between $\vec{x}$ and $\vec{v}$, then in trigonometric terms, the projection of $\vec{x}$ on $\vec{v}$ would be $x\cos(θ)$. How are these two equivalent? Thanks. AI: $\frac{\vec{x}\cdot\vec{v}}{\vec{v}\cdot\vec{v}}\,\vec{v}$ is the projection vector of $\vec{x}$ onto $\vec{v}$, while $\|\vec{x}\|\cos\theta=\|\vec{x}\|\cdot\frac{\vec{x}\cdot\vec{v}}{\|\vec{v}\|\,\|\vec{x}\|}=\frac{\vec{x}\cdot\vec{v}}{\|\vec{v}\|}$ is just the length (magnitude) of that projection vector. Indeed, the projection vector of $\vec{x}$ on $\vec{v}$ is its length times the unit vector in the direction of $\vec{v}$: $$\frac{\vec{x}\cdot\vec{v}}{\|\vec{v}\|}\cdot\frac{\vec{v}}{\|\vec{v}\|}=\frac{\vec{x}\cdot\vec{v}}{\|\vec{v}\|^2}\,\vec{v}=\frac{\vec{x}\cdot\vec{v}}{\vec{v}\cdot\vec{v}}\,\vec{v}.$$
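A tiny numeric illustration of the two viewpoints agreeing (the vectors are arbitrary):

```python
import numpy as np

def project(x, v):
    """Projection of x onto the line spanned by v."""
    return (np.dot(x, v) / np.dot(v, v)) * v

x = np.array([3.0, 4.0])
v = np.array([1.0, 0.0])
p = project(x, v)
print(p)                   # (3, 0), the projection vector
print(np.linalg.norm(p))   # 3 = |x| cos(theta), the scalar projection
```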
H: A group with two non trivial subgroups is cyclic Let $G$ be a group. Suppose that $G$ has at most two nontrivial subgroups. Show that $G$ is cyclic. Can anyone help me please to solve the problem? AI: If $G$ has no nontrivial subgroups, it is clearly cyclic. If $G$ has exactly one nontrivial subgroup $H$, consider the subgroup generated by a nonidentity element $g\in G\setminus H$. Now suppose that $H$ and $K$ are the only nontrivial subgroups of $G$. Recall that a group is never the union of two proper subgroups. So pick some element $g\in G\setminus (H\cup K)$. What must $\langle g\rangle$ be?
H: Study , according to the values of x , the relative position of line & Curve We have $F(x) = x + \ln \left(\frac{1+2e^{-2x}}{1+e^{-x}}\right)$ represents curve $C$ and a line $d: y=x$ In an exercise it is required to study the relative position of these two functions, I know we have to subtract them from each other but I can't continue it AI: Let us ask for what $x$ the curve $y=F(x)$ is above the line $y=x$. This happens precisely when $$x +\ln\left( \frac{1+2e^{-2x}}{1+e^{-x}} \right)\gt x.\tag{A}$$ The inequality (A) holds precisely if $$\ln\left( \frac{1+2e^{-2x}}{1+e^{-x}} \right)\gt 0.\tag{B}$$ The inequality (B) holds precisely if $$ \frac{1+2e^{-2x}}{1+e^{-x}}\gt 1.\tag{C}$$ The inequality (C) holds precisely if $$1+2e^{-2x} \gt 1+e^{-x}.\tag{D}$$ (Here we used the fact that $1+e^{-x}$ is always positive, so multiplying both sides of (C) by $1+e^{-x}$ yields an equivalent inequality.) The inequality (D) is equivalent to $2e^{-2x}\gt e^{-x}$, which in turn is equivalent to the inequality $2\gt e^x$ (we multiplied both sides by $e^{2x}$). Finally, the inequality $e^x \lt 2$ holds precisely when $x\lt \ln 2$. So at any $x\lt \ln 2$, the curve $y=F(x)$ lies above the line $y=x$. When $x=\ln 2$, the curve and the line meet. And when $x\gt \ln 2$, the curve $y=F(x)$ is below the line $y=x$.
H: What is the summation notation for the Fibonacci numbers? I learned about summation notation the other day, and I'm looking for a way to write the Fibonacci numbers with it. What would it look like? AI: $$F_n=\sum_{k=n-2}^{n-1}F_k$$ Given the initial conditions: $$F_0=0$$ $$F_1=1$$ It's trivial, but it does use the summation notation.
H: Is it true in general that $\int \dots \int_{0 \le x_1 \le \dots \le x_n,\ 0 \le x_n\le1}dx_1\dots dx_n=\left(\frac{1}{2}\right)^n?$ I was looking at an example with the following integral: $$\iiiint_{0 \le x \le y \le z \le t,\ 0 \le t \le \frac{1}{2}} 1 \,dx\,dy\,dz\,dt = \frac{1}{16}$$ Is it true in general that $$\int \dots \int_{0 \le x_1 \le \dots \le x_n,\ 0 \le x_n\le1}dx_1\dots dx_n=\left(\frac{1}{2}\right)^n?$$ (edit: here $0 \le x_n \le 1$ but in the example $0 \le t \le \frac{1}{2}$ so this can't be true...). Intuitively it would seem so since each dimension is "cut in half" by each inequality. Are there some other results for multiple integrals with this domain that have other integrand that is not a constant? Is it true that $$\int \dots \int_{0 \le x_1 \le \dots \le x_n,\ a \le x_n\le b}f(\boldsymbol{x})\,dx_1\dots dx_n=\int_a^b \int_0^{x_{n-1}} \int_0^{x_{n-2}} \dots \int_0^{x_2}f(\boldsymbol{x})\,dx_1\dots dx_n$$ AI: To find the value of $$\int \dots \int_{0 \le x_1 \le \dots \le x_n,\,0 \le x_n\le L}dx_1\dots dx_n,$$ note that the first integration gives $x_1 |^{x_2}_0 = x_2$, the next one gives $x_3^2/2$, and so on; after integrating $x_1,\dots,x_{n-1}$ the integrand is $x_n^{n-1}/(n-1)!$. One last integration over $0\le x_n\le L$ then yields $$\int \dots \int_{0 \le x_1 \le \dots \le x_n,\,0 \le x_n\le L}dx_1\dots dx_n=\frac{L^{n}}{n!},$$ where $L$ is the upper limit. So for $L=1$ the value is $1/n!$, not $(1/2)^n$; and for your original example ($n=4$, $L=1/2$) the value is $(1/2)^4/4! = 1/384$, so the claimed $1/16$ is not correct either. Your last identity is true: for nonnegative (or integrable) $f$ it is just rewriting the integral over the region as an iterated integral (Tonelli/Fubini applied to the indicator of the region).
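A symbolic check of the corrected formula for $n=4$, $L=1/2$:

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
L = sp.Rational(1, 2)
val = sp.integrate(
    sp.integrate(
        sp.integrate(
            sp.integrate(1, (x1, 0, x2)),
            (x2, 0, x3)),
        (x3, 0, x4)),
    (x4, 0, L))
print(val, L**4 / sp.factorial(4))   # both 1/384
```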
H: Integral Inequality Absolute Value: $\left| \int_{a}^{b} f(x) g(x) \ dx \right| \leq \int_{a}^{b} |f(x)|\cdot |g(x)| \ dx$ Suppose we are given the following: $$\left| \int_{a}^{b} f(x) g(x) \ dx \right| \leq \int_{a}^{b} |f(x)|\cdot |g(x)| \ dx$$ How would we prove this? Does this follow from Cauchy Schwarz? Intuitively this is how I see it: In the LHS we could have a negative area that reduces the positive area. In the RHS the area can only increase because we take the absolute values of the functions first. AI: The big idea here is this: First: it is enough to show that $$ \left\lvert\int_a^b f(x)\,dx\right\rvert\leq\int_a^b\lvert f(x)\rvert dx, $$ since you can replace $f(x)$ by $f(x)\cdot g(x)$ to get the desired result. Now, notice that $$ -\lvert f(x)\rvert\leq f(x)\leq \lvert f(x)\rvert $$ for all $x$; hence $$ -\int_a^b\lvert f(x)\rvert\,dx\leq \int_a^b f(x)\,dx\leq\int_a^b\lvert f(x)\rvert\,dx. $$ Can you finish it from here?
H: Finding multiple integral on bounded area. Today I just learned about multiple integrals, and somehow this question is quite confusing. Find the area of the first-quadrant region bounded by the curves $y=x^3$, $y=2x^3$ and $x=y^3$, $x=4y^3$ using the substitution method (Jacobian method). Let $y=ux^3$ and $x=vy^3$. Could someone please give me some steps on how to proceed by changing the variables? AI: This is an exercise in computing a Jacobian: $$dx \, dy = |J(u,v)| du \, dv$$ where $$J = \det{\left (\begin{array}\\ \frac{\partial x}{\partial u} & \frac{\partial x}{\partial v} \\ \frac{\partial y}{\partial u} & \frac{\partial y}{\partial v}\end{array}\right )}$$ $u=y/x^3$ and $v=x/y^3$ implies $$ x=u^{-3/8} v^{-1/8} \quad y=u^{-1/8} v^{-3/8} $$ Compute the Jacobian from this. Next, what are the bounds on $u$ and $v$? You should be able to see that $u \in [1,2]$ and $v \in [1,4]$ from the bounds given to you. The area is then $$\int_1^2 du \, \int_1^4 dv \, |J(u,v)|$$ I get as an answer $(2-\sqrt{2})/8$.
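The Jacobian and the final area can be checked symbolically:

```python
import sympy as sp

u, v = sp.symbols('u v', positive=True)
x = u**sp.Rational(-3, 8) * v**sp.Rational(-1, 8)
y = u**sp.Rational(-1, 8) * v**sp.Rational(-3, 8)

J = sp.simplify(sp.Matrix([[x.diff(u), x.diff(v)],
                           [y.diff(u), y.diff(v)]]).det())
print(J)   # 1/(8*(u*v)**(3/2)), which is positive on the region
area = sp.integrate(J, (v, 1, 4), (u, 1, 2))
print(sp.simplify(area - (2 - sp.sqrt(2)) / 8))   # 0, confirming (2 - sqrt(2))/8
```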
H: Amount of homomorphisms $S_5$ to $C_6$ I think I understand the amount of homomorphisms $C_6$ to $S_5$: Because $C_6$ is cyclic, we only have to look where we can send $\langle a\rangle$, because $f(\langle a\rangle)$ must have order $1, 2, 3$, or $6$ we have: -for $1$: trivial homomorphism. -for $2$: the ten $(xy)$ cycles in $S_5$, the fifteen $(xy)(ab)$ cycles in $S_5$ -for $3$: the twenty $(xyz)$ cycles in $S_5$ -for $6$: the twenty $(xyz)(ab)$ cycles in $S_5$ so total $= 66$ the other way around I find more difficult. Can anyone help? Bonus question: is the size of the conjugacy class of an element in the group always the amount of elements of the same shape? so in $S_5$, $(12)$ has conjugacy class size 10, so there are ten $(xy)$ in $S_5$? AI: The kernel of a homomorphism $f: S_5 \rightarrow C_6$ must be a normal subgroup of $S_5$. There are only three such: $S_5, A_5$ and $\{1\}$. Since $|S_5| > |C_6|$ we cannot have $\ker f = \{1\}$. If $\ker f = S_5$ then $f$ just sends everything to the identity. If $\ker f = A_5$ then by first isomorphism theorem im$f \cong C_2$, which there is only one copy of inside $C_6$, so we have $f(a) = g^3$ if $a \notin A_5$, $f(a) = 1$ if $a \in A_5$.
H: Show that $\sum\limits_{p \leq x} \frac{1}{p}$ ~ ${\log\log{x}}$ when ${x \to \infty}$ (here p is a prime) I saw that some of you were upset over my last question, so I decided to ask a more interesting question: Show that $\sum\limits_{p \leq x} \frac{1}{p}$ ~ ${\log\log{x}}$ when ${x \to \infty}$ (here the sum goes over all the primes less than or equal to x) Edit: I've changed the question and wrote it as it appeared in the exam. AI: yanbo is right. Merten's second theorem states that $\lim_{n \to \infty} \big(\sum_{p \le n} \frac1{p}-\ln\ln n \big) =M $ where $M$ is the Meissel–Mertens constant. Dividing by $\ln\ln n$, we get $\lim_{n \to \infty} \big(\frac1{\ln\ln n}\sum_{p \le n} \frac1{p}-1 \big) =\lim_{n \to \infty} \frac{M}{\ln\ln n} = 0 $ (since $\ln \ln n$ goes reluctantly to $\infty$) so $\lim_{n \to \infty} \big(\frac1{\ln\ln n}\sum_{p \le n} \frac1{p} \big) = 1 $
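Numerically, the difference $\sum_{p\le x}1/p-\log\log x$ settles near the Meissel-Mertens constant $\approx 0.2615$, which is exactly what the asymptotic predicts (the cutoffs below are arbitrary; the largest takes a few seconds):

```python
import math
from sympy import primerange

for x in (10**3, 10**4, 10**5, 10**6):
    s = sum(1 / p for p in primerange(2, x + 1))
    print(x, round(s, 4), round(s - math.log(math.log(x)), 4))
# The last column approaches ~0.2615, so s / log(log(x)) -> 1.
```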
H: Big-$O$ inside a log operation I would appreciate help in understanding how: $$\log \left(\frac{1}{s - 1} - O(1)\right) = \log \left(\frac{1}{s - 1}\right) + O(1)\text{ as }s \rightarrow 1^+$$ I thought of perhaps a Taylor series for $\log(x - 1)$, but that would have an $O(x^2)$ term. Also I would appreciate any reference or tutorial recommendations to get a good understanding of big-$O$ notation in general. Thanks very much. AI: $f(s)=O(1)$ as $s\rightarrow1^+$ means that there is a constant $C$ such that $|f(s)|<C$ for $s$ close to $1^+$. One thing to always keep in mind is that equalities between things with $O$'s or $o$'s are not really symmetric equalities. This means that it is probably safe to prove this 'in both directions'. The left hand side is talking about a function $f(s)$ such that there is a constant $C$ such that $|e^{f(s)}-\frac{1}{s-1}|<C$ for $s$ close to $1^+$. The right hand side is talking about a function $g(s)$ such that there is a constant $D$ such that $|g(s)-\log(\frac{1}{s-1})|<D$. Let us check that the function on the left $f$ satisfies the description on the right. According to its description, we can write $f(s)$ as $\log(\frac{1}{s-1}+h(s))$, where $h$ is bounded as $s\rightarrow 1^+$. We subtract $\log(\frac{1}{s-1})$ and get $\log(1+(s-1)h(s))$. Since $h$ is bounded and $(s-1)\rightarrow 0$ then this subtraction is bounded. This was the description on the right. Now, we do the other direction. Let us check that $g$ satisfies the description on the left. From what we know of $g$ we can write it as $\log(\frac{1}{s-1})+a(s)$ with $a$ bounded. We now compute $e^{\log(1/(s-1))+a(s)}-\frac{1}{s-1}=\frac{e^{a(s)}}{(s-1)}-\frac{1}{(s-1)}=\frac{e^{a(s)}-1}{(s-1)}$ ... oops. I guess that is why this is not really a symmetric equality. So, reading from left to right the equality is true. Reading from right to left, it is not.
H: $U_n= \int_0^1 \frac{e^{nx}}{e^x +1} \mathrm d x $ I have this integral in a sequence question $$U_n= \int_0^1 \frac{e^{nx}}{e^x +1} \mathrm dx $$ how to solve it ? AI: Try looking at $$U_{n+1}+U_n = \int_0^1\frac{e^{(n+1)x}+e^{nx}}{e^x+1}\,\mathrm dx$$ and see how that helps simplify the integral.
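A numerical check of the simplification suggested in the answer, namely $U_{n+1}+U_n=\int_0^1 e^{nx}\,dx=(e^n-1)/n$:

```python
import numpy as np
from scipy.integrate import quad

def U(n):
    return quad(lambda x: np.exp(n * x) / (np.exp(x) + 1), 0, 1)[0]

for n in (1, 2, 3):
    print(U(n + 1) + U(n), (np.exp(n) - 1) / n)   # the pairs agree
```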
H: How to find $dy/dx$ for $y= \frac{x^2+\sin2x}{2x+\cos^2 x}$ Find $$\frac{dy}{dx}\quad\text{for}\quad y= \frac{x^2+\sin2x}{2x+\cos^2x}.$$ AI: We need the quotient rule and the chain rule to find $\frac {dy}{dx} = y'$ given $$y= \frac{x^2+\sin2x}{2x+\cos^2x}$$ Given a quotient of functions: $f(x) = \frac{g(x)}{h(x)}$ $$f'(x) = \frac{g'(x)h(x) - g(x)h'(x)}{[h(x)]^2}$$ In your case, we have $$f(x) = \frac{x^2+\sin2x}{2x+\cos^2x}$$ So put $g(x) = x^2 + \sin 2x,\;$ and $\;h(x) = 2x + \cos^2x$. Now, $g'(x) = 2x + 2\cos 2x,\;$ and $\;h'(x) = 2 - 2\cos x \sin x = 2 - \sin(2x)$. So $$f'(x) = \frac{g'(x)h(x) - g(x)h'(x)}{[h(x)]^2}$$ $$f'(x) = \frac{dy}{dx} = \frac{(2x + 2\cos 2x)(2x + \cos^2 x) - (x^2 + \sin 2x)(2 - \sin 2x)}{\left(2x + \cos^2 x\right)^2}$$ The rest of the work is merely algebraic simplification.
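A symbolic check of the quotient-rule formula (note the minus sign between the two products in the numerator):

```python
import sympy as sp

x = sp.symbols('x')
y = (x**2 + sp.sin(2 * x)) / (2 * x + sp.cos(x)**2)
num = ((2 * x + 2 * sp.cos(2 * x)) * (2 * x + sp.cos(x)**2)
       - (x**2 + sp.sin(2 * x)) * (2 - sp.sin(2 * x)))
claimed = num / (2 * x + sp.cos(x)**2)**2
print(sp.simplify(sp.diff(y, x) - claimed))   # 0
```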
H: Exponential distribution for the lifetime of an LCD screen: probability that it functions for $50{,}000$ hours If the probability that an LCD screen functions for $x$ hours is defined by the density function: $$f(x)=0.01*\exp(-x/100)I_{[0,\infty)}(x)$$ where $I$ is an indicator function and $x$ is measured in thousands of hours, then the probability that the screen functions for $50{,}000$ hours is given by: $$P(50)= \int_{50}^\infty f(x)\,\mathrm dx = 0.6065$$ So the only event $x=50$ is considered as the interval $[50,\infty)$, right? Because if we integrate the event $x=50$, $\int_{50}^{50} f(x)\,\mathrm dx$ will always be $0$. AI: Let $X$ be the lifetime of a screen, in thousands of hours. We are told that $X$ has density function $(0.01)e^{-x/100}$ in the interval $[0,\infty)$, and $0$ elsewhere. Then $$\Pr(X\gt 50)=\int_{50}^\infty (0.01)e^{-x/100}\,dx.$$ This is approximately $0.60653$. Note that $\Pr(X\gt 50)=\Pr(X\ge 50)$. For any random variable $X$ with continuous distribution, $\Pr(X=a)=0$ for any $a$.
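The integral is elementary here, $\int_{50}^\infty (0.01)e^{-x/100}\,dx=e^{-1/2}\approx 0.6065$, and easy to confirm numerically:

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: 0.01 * np.exp(-x / 100), 50, np.inf)
print(val, np.exp(-0.5))   # both ~0.60653
```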
H: Proving that $\int \frac{dx}{(1-x^2)}$ equals the inverse hyperbolic tangent of $x$ I prove the following fact $$\int\limits \dfrac{dx}{(1-x^2)}=\tanh^{-1}x$$ I show, by integrating by substitution, that the integral equals $$- \dfrac{\ln(1-x^2)}{2x}.$$ Setting $x=\tanh z$, we get $$-\dfrac{\ln\operatorname{sech} z}{\tanh z}.$$ Then, since $z=\tanh^{-1}x$, the above equation yields $$-\dfrac{\ln(1-x^2)}{2x}.$$ Is this sufficient to prove the first equation? AI: Your integration is incorrect; the derivative of $-\frac{\ln(1-x^{2})}{2x}$ is $$\frac{1}{1-x^{2}}+\frac{\ln(1-x^{2})}{2x^{2}},$$ which is not $\frac{1}{1-x^{2}}$. The correct way to proceed is by partial fractions; notice the following: $$\frac{1}{1-x^{2}}=\frac{1}{(1-x)(1+x)}=\frac{1}{2(1-x)}+\frac{1}{2(1+x)}$$ And so, integrating this, we get $$\int\frac{1}{1-x^{2}}dx=\int\frac{dx}{2(1-x)}+\int\frac{dx}{2(1+x)}=\frac{1}{2}\left(-\ln(1-x)+\ln(1+x)\right)=\frac{1}{2}\ln\left(\frac{1+x}{1-x}\right)$$ Which is another expression for $\tanh^{-1}(x)$ - are you familiar with this? As an aside, I think I can see where your incorrect expression has come from, and it is important to see that it is wrong. When integrating by substitution, you appear to know that we must compute the derivative of our substitution - in this case, the derivative of $1-x^{2}$ is $-2x$. However, we cannot bring this outside of the integral.
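A quick symbolic check that the partial-fraction antiderivative really has the right derivative, plus a numeric spot check that it agrees with $\tanh^{-1}$:

```python
import sympy as sp

x = sp.symbols('x')
F = sp.log((1 + x) / (1 - x)) / 2
print(sp.simplify(sp.diff(F, x) - 1 / (1 - x**2)))            # 0
print(sp.N((F - sp.atanh(x)).subs(x, sp.Rational(1, 3))))     # ~0 on (-1, 1)
```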
H: $f:(0,\infty)\to [0,\infty)$ concave implies $f$ bounded below? Let $f:(0,\infty)\to [0,\infty)$ be a concave function such that $f$ is not indentically zero. It seems to me that the following statements are true: I - For every $a>0$ fixed, $f_{|[a,\infty)}$ is bounded below by a positive constant depending on $a$, II - $f$ cannot attain a maximum value in $[a,\infty)$. Am I seeing things or those statements are true? Thank you AI: The first is true, the second false. Simple counterexample for the second, constants, or $f(x) = \min \lbrace x, 1\rbrace$. If you meant strictly concave, the second is also true, because then $f$ must be strictly monotonically increasing. The first is true because a concave function that is non-negative must be monotonic(ally nondecreasing), and cannot have a zero larger than $0$ unless it is identically $0$. A concave function that is not monotonically non-decreasing cannot be bounded below: Let $x_1 < x_2$ with $f(x_1) > f(x_2)$. Let $h = x_2 - x_1$ and $\delta = f(x_1) - f(x_2)$. Then $$ f(x_1 + k\cdot h) \leqslant f(x_1) - k \cdot \delta\;, k \geqslant 1.$$ That is because by the definition of concaveness, $$f(x_1) - \delta = f(x_2) = f\left(\frac{k-1}{k}x_1 + \frac{1}{k}(x_1 + k\cdot h)\right) \geqslant \frac{k-1}{k}f(x_1) + \frac{1}{k}f(x_1 + k\cdot h)$$ Thus (subtracting the first summand of the right hand side) $$ \frac{1}{k}f(x_1) - \delta \geqslant \frac{1}{k}f(x_1 + k \cdot h) \iff f(x_1) - k\cdot \delta \geqslant f(x_1 + k\cdot h). $$ By assumption $\delta > 0$, so $f$ is not bounded below. Geometrically, concaveness means the slope of the graph is monotonically non-increasing ($f'' \leqslant 0$, if by $f''$ we understand the distribution derivative, if the classical derivative doesn't exist), thus once the slope is negative, it remains so, and can only become more negative, never less.
H: Large exponential modular Proof $2011^{2011^{2011}}-2011 \equiv 0 \mod 30030$ By Chinese Remainder Theorem this is equivalent to proving: $2011^{2011^{2011}}-2011 \equiv 0 \mod 2$ $2011^{2011^{2011}}-2011 \equiv 0 \mod 3$ $2011^{2011^{2011}}-2011 \equiv 0 \mod 5$ $2011^{2011^{2011}}-2011 \equiv 0 \mod 7$ $2011^{2011^{2011}}-2011 \equiv 0 \mod 11$ $2011^{2011^{2011}}-2011 \equiv 0$ mod $13$ $2011 \equiv 1 \mod (2,3,5)$ so $2011^{2011^{2011}}-2011 \equiv 1^{2011^{2011}}- 1 \equiv 0 \mod (2, 3,5)$ But I do not know how to continue. AI: The first three cases are quite simple, since $2011\equiv 1 \mod 2,3,5$, so $$2011^{2011^{2011}}-2011\equiv 1^{2011^{2011}}-1=1-1=0 \mod 2,3,5$$ The other cases can be treated like that (I'll only do $7$, the others are similar): As $(7,2011)=1$, Euler's theorem applies, which tells $$2011^{\varphi(7)}=2011^6\equiv 1 \mod 7$$ So if we write $2011^{2011}=6q+r$, then $$2011^{2011^{2011}}=2011^{6q+r}=(2011^{6})^q\cdot 2011^r\equiv 1^q\cdot 2011^r\equiv 2011^r \mod 7$$ Again, as $(6,2011)=1$, apply Euler's theorem: $$2011^{\varphi(6)}=2011^2\equiv 1 \mod 6$$ So, as before: $$2011^{2011}=2011^{2*1005+1}\equiv 2011\equiv 1 \mod 6$$ Now $2011^{2011}=6q+1$ for some $q$, so $$2011^{2011^{2011}}\equiv 2011^1\equiv 2011$$
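A direct computational check of all six congruences, reducing the tower exponent mod $p-1$ exactly as Fermat/Euler allows (since $2011$ is coprime to each prime here):

```python
# 30030 = 2*3*5*7*11*13.  For each prime p, compute 2011^(2011^2011) mod p
# by reducing the exponent mod p-1; an exponent of 0 mod p-1 (with a
# positive tower) may be replaced by p-1, since a^(p-1) = 1 = a^0 mod p.
for p in (2, 3, 5, 7, 11, 13):
    e = pow(2011, 2011, p - 1)
    e = e if e != 0 else p - 1
    print(p, (pow(2011, e, p) - 2011) % p)   # 0 for every p
```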
H: Solving $L - L\sqrt{1-\frac{u^2}{C^2}} = u T$ for $u$ Hello how can i solve this for $u$ ($L,C,T$ are constants). \begin{aligned} L - L\sqrt{1-\frac{u^2}{C^2}} = u T \end{aligned} AI: $$\text{Rearranging: }L-uT=L\sqrt{1-\frac{u^{2}}{C^{2}}}$$ $$\text{Squaring both sides: }(L-uT)^{2}=L^{2}\left(1-\frac{u^2}{C^{2}}\right)$$ $$\text{Expanding our squares: }L^{2}-2LuT+u^{2}T^{2}=L^{2}-\frac{u^{2}L^{2}}{C^{2}}$$ $$\text{Cancelling $L^{2}$, and collecting coefficients: }\left(\frac{L^{2}}{C^{2}}+T^{2}\right)u^{2}=2LTu$$ If $u$ is not zero, we can divide by it (and if it is, then our orginal equation becomes $L=L$), so: $$\text{Bringing our fraction over a common denominator: }\left(\frac{L^{2}+C^{2}T^{2}}{C^{2}}\right)u=2LT$$ $$\text{Multiplying by its reciprocal}\implies u=\frac{2LTC^{2}}{L^{2}+C^{2}T^{2}}$$ In general, when solving algebraic expressions it is generally best to consider what is in the way, and get rid of the most difficult expressions first. In this case, the square root is the most unpleasant expression, so we square both sides to eliminate it. Bring all our terms with $u$ together allowed us certain cancellations which made the overall procedure easier.
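Squaring can in principle introduce an extraneous root, so it is worth substituting the result back; with concrete positive values satisfying $L^2\geq C^2T^2$ (my own arbitrary choices) the formula checks out exactly:

```python
import sympy as sp

u, L, C, T = sp.symbols('u L C T', positive=True)
sol = 2 * L * T * C**2 / (L**2 + C**2 * T**2)
residual = L - L * sp.sqrt(1 - sol**2 / C**2) - sol * T
print(sp.simplify(residual.subs({L: 3, C: 1, T: 2})))   # 0
print(sp.simplify(residual.subs({L: 5, C: 2, T: 1})))   # 0
```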
H: Show that $\arg(\exp(z)) = y + 2\pi k$ for any $\arg(\exp(z))$ and some integer $k$. The entire question is: Show that $\operatorname{mod}(\exp(z)) = e^x$. Show that $\arg(\exp(z)) = y + 2\pi k$ for any $\arg(\exp(z))$ and some integer $k$. I could do the first part. I do not know how to do the second. Guidance please. AI: Write $z = x + iy$. Then $$\exp(z)=e^{x+iy}=e^xe^{iy}=e^x\cos(y)+ie^x\sin(y)$$ Since $e^x > 0$ and $\cos(y)+i\sin(y)$ lies on the unit circle, this is already a polar decomposition: the modulus is $e^x$ (which is the first part) and the angle is $y$. The argument of a nonzero complex number is determined only up to integer multiples of $2\pi$, so every value of the argument has the form $$\color{green}{\arg(\exp(z)) = y + 2\pi k}$$ for some integer $k$. (A caution: the formula $\arg(z')=\arctan\left(\frac{y'}{x'}\right)$ for $z'=x'+iy'$ holds only up to the correct choice of quadrant, since $\arctan$ takes values in $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$; in particular $\arctan(\tan(y))$ equals $y$ only after that adjustment, so reading the angle directly off the polar form is safer.)
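A numerical spot-check with the standard library (a sketch; cmath.phase returns the principal argument in $(-\pi,\pi]$, so the comparison is made modulo $2\pi$):

```python
import cmath
import math

for x, y in [(0.3, 1.0), (-1.2, 4.0), (2.0, -7.5)]:
    w = cmath.exp(complex(x, y))
    assert math.isclose(abs(w), math.exp(x))      # mod(exp(z)) = e^x
    k = (y - cmath.phase(w)) / (2 * math.pi)      # should be an integer
    assert math.isclose(k, round(k), abs_tol=1e-9)
```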
H: Interpolating missing points in 3D data-set Given the following x, y, z points (z is actually a signal strength indicator in dBm):
63 371 -21
142 371 -9
233 374 -18
288 371 -36
310 373 -38
349 374 -39
415 348 -44
507 334 -49
689 337 -56
635 254 -57
422 284 -42
380 278 -39
281 280 -39
214 299 -34
146 285 -30
81 302 -39
76 246 -39
80 214 -44
137 200 -44
64 134 -48
73 87 -48
200 101 -46
230 202 -44
246 105 -53
285 109 -53
278 191 -50
334 87 -54
395 189 -56
513 208 -58
510 99 -61
553 101 -62
593 100 -65
634 101 -64
679 102 -68
731 196 -70
How would I go about interpolating the missing z values for the remaining points, assuming they cover an area that is 800x400 units? (The area is of course arbitrary, but for this data-set it should suffice). I realize that this question is not as simple as it first seems. My goal is to end up with a numerical representation of the entire space (ideally a 2D array holding the Z values, in C#, for those who are interested); however, I'm not even sure how to fit a suitable surface to these points. Once complete, the 2D array should hold a predicted signal strength at every x,y point. A single, "best fit" plane will not suffice, as the point of this is to generate a signal strength heat map with usable values at each point, based on measurements at various points on the floor plan. In other words, the attenuation from various walls/furniture/etc. must be taken into account. Cheers AI: One way to do this would be to construct a Delaunay triangulation for the 2D point cloud, extruding this triangulation to the 3D values. Each triangle has a plane equation which can be used to find the rest of the $z$ values for "new" points $x,y$ in the corresponding triangle. Note the equation for the plane defined by three points is given by: $$\begin{vmatrix} x - x_1 & y - y_1 & z - z_1 \\ x_2 - x_1 & y_2 - y_1 & z_2 - z_1 \\ x_3 - x_1 & y_3 - y_1 & z_3 - z_1 \end{vmatrix} = 0$$
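The Delaunay-plus-plane-equation scheme above is exactly what SciPy's LinearNDInterpolator implements, so a Python sketch may help even though the asker works in C# (the grid resolution and the nearest-neighbour fallback outside the convex hull are my additions, not part of the original answer):

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator, NearestNDInterpolator

# First three measurements shown; fill in all 35 (x, y) / z values from above.
pts = np.array([[63, 371], [142, 371], [233, 374]])
z = np.array([-21.0, -9.0, -18.0])

lin = LinearNDInterpolator(pts, z)    # plane interpolation per Delaunay triangle
near = NearestNDInterpolator(pts, z)  # fallback outside the convex hull

xs, ys = np.meshgrid(np.arange(800), np.arange(400))
grid = lin(xs, ys)                    # NaN outside the triangulation
mask = np.isnan(grid)
grid[mask] = near(xs[mask], ys[mask]) # grid[y, x] = estimated dBm at (x, y)
```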
H: A new question about solvability of a direct product I had asked whether, if $G$ is a direct product of a $2$-group and a simple group, it is possible that $G$ is a solvable group. The answer was no! But a Remark in T.M. Gagen, Topics in Finite Groups, London Math. Soc. Lecture Note Ser., vol. 16, Cambridge Univ. Press, Cambridge, 1976, confuses me! I will write the Remark below: Definition 11.3. A group $G$ with an abelian Sylow $2$-subgroup is said to be an $A^*$-group, if $G$ has a normal series $1\subseteq N \subseteq M \subseteq G$ where $N$ and $G/M$ are of odd order and $M/N$ is a direct product of a $2$-group and simple groups of type $L_2(q)$ or $JR$. Theorem A. Let $G$ be a finite group with an abelian Sylow $2$-subgroup. Then $G$ is an $A^*$-group. Remark. If a group $G$ has an abelian Sylow $2$-subgroup $T$ of rank $1$, then $G$ is solvable and $2$-nilpotent and clearly an $A^*$-group. If $T$ has rank $2$, then $G$ is $2$-nilpotent unless $T$ is of type $( 2^\alpha , 2^\alpha )$. But then if $\alpha > 1$, $G$ is solvable by [7] and clearly an $A^*$-group since it has $2$-length $1$. Thus we may assume that $|T| = 4$ and then we can apply the results of [13]. Again $G$ is an $A^*$-group. By the Remark, it is possible that $G$ (with abelian Sylow $2$-subgroup) is a solvable group. In this case $M$, and then $M/N$, must be solvable, and cannot have a simple group as a direct factor: a contradiction with the Definition! Am I right?! AI: This is a matter of convention. When $G$ is solvable, $M/N$ is an (abelian) 2-group. It is still the case that $M/N$ is the direct product of a 2-group and a set of simple groups, each isomorphic to $L_2(q)$ or $J_1$ or $^2G_2(q)$. In case $G$ is solvable, that set of simple groups is empty. The remark is trying to clarify what happens in the low rank cases, where typically the result is a solvable group. In the $2^a$ case, Cayley showed you can take $G/M=1$, $M/N$ the cyclic Sylow 2-subgroup. In the $2^a \times 2^a$ ($a>1$) case Brauer's result shows $G/M$ can be taken to have order 1 or 3, and $M/N$ to be the homocyclic Sylow 2-subgroup. In the non-homocyclic case, Burnside or Frobenius shows you can take $G/M=1$ and $M/N$ to be the Sylow 2-subgroup. Only in the case $C_2 \times C_2$ (amongst rank 1 or 2) do you get to the case supporting a non-solvable $M/N$. Rank 3 is interesting because of $J_1$ and $^2G_2(q)$. I'll mention that Gagen, Bender, and Gorenstein all use the same phrasing, and all intend to allow the groups to be solvable as well as non-solvable. Another small point: Gagen and Bender use the name “$A^*$-group” rather than “$A$-group” used by Gorenstein. The definitions are actually nearly identical; I believe they are trying to avoid confusion with a similar (weaker) result classifying A-groups in the sense of Hall: groups in which all Sylow subgroups are abelian.
H: A helix problem This equation $$ x^2 + y^2 - \left(\tan^{-1} \frac{y}{x}\right)^2 = 0 $$ describes a helix. What is the capacity of the first twist? AI: Polar co-ordinates simplify the analysis greatly. With $r^2=x^{2}+y^{2}$ and $\tan(\theta)=\frac{y}{x}$, your equation becomes simply $r=\theta$ (taking the non-negative branch), which is the well-known equation of an Archimedean spiral, a planar curve rather than a helix. What I think you are trying to ask is "What is the area enclosed by this curve as $\theta$ varies from $0$ to $2\pi$ (one revolution)?" I will present the solution by integration, which I presume you have seen. The area enclosed by the polar curve $(r,\theta)$ as $\theta$ varies from $a$ to $b$ is given by the integral $$A=\frac{1}{2}\int_{a}^{b}r^{2}d\theta$$ In your case, $r=\theta$, so we get $$A=\frac{1}{2}\int_{0}^{2\pi}\theta^{2}d\theta=\frac{1}{2}\left[\frac{1}{3}\theta^{3}\right]^{2\pi}_{0}=\frac{1}{6}(2\pi)^{3}=\frac{4}{3}\pi^{3}$$ If you are unfamiliar with integral calculus (which I somehow doubt) there is an article here which gives a derivation from Archimedes' principles.
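A one-line numerical confirmation of the integral (a sketch using SciPy; any quadrature routine would do):

```python
from math import pi
from scipy.integrate import quad

area, err = quad(lambda t: 0.5 * t**2, 0, 2 * pi)  # (1/2) ∫ θ² dθ over one turn
assert abs(area - 4 * pi**3 / 3) < 1e-9
print(area)  # ≈ 41.3417, i.e. 4π³/3
```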
H: $\mathbf{F} = \nabla f$ Get the function f, when given a vector field For conservative vector fields F the following equation holds $$\int_c \mathbf{F} \cdot d\mathbf{r} = \int_c \nabla f \cdot d\mathbf{r} = f(\mathbf{r}(b)) - f(\mathbf{r}(a))$$ Now I have a lot of trouble finding the function f so that $\mathbf{F} = \nabla f$ holds. Is there a systematic way to do this? An example field: $$\mathbf{F}(x,y,z) = e^y\mathbf{i} + xe^y\mathbf{j} + (z + 1) e^z\mathbf{k}$$ How would I convert this field to a function f? AI: This is in almost every multivariable calculus book. Integrate $\partial f/\partial x = e^y$ with respect to $x$ to get $f(x,y,z) = xe^y+g(y,z)$. Differentiate with respect to $y$ and compare to the second component of $\mathbf F$: this forces $\partial g/\partial y = 0$, so $g(y,z)=h(z)$. Then use the last bit of info to get $h'(z)=(z+1)e^z$; integrating by parts gives $h(z)=ze^z$, so assembling all the information, $f(x,y,z) = xe^y + ze^z + C$.
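A symbolic check (sketch) with SymPy that the gradient of the assembled potential matches $\mathbf F$ componentwise:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x * sp.exp(y) + z * sp.exp(z)    # the potential found above (constant dropped)
F = (sp.exp(y), x * sp.exp(y), (z + 1) * sp.exp(z))
assert all(sp.simplify(sp.diff(f, v) - Fv) == 0 for v, Fv in zip((x, y, z), F))
```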
H: How to compute $x$ and $y$ How can one find, in an efficient way, $x,y \in \mathbb{Z}$ with $\max\{|x|,|y|\} > 0$ as small as possible such that $\mid \pi x + e y \mid < 10^{-4}$? I have reduced the following lattice basis: $$\begin{pmatrix} 1 & 0 \\ 10^4\pi & 10^4e \end{pmatrix}$$ but is this the right one? And how can I use the reduced basis? AI: Taking $x>0$ and $y<0$, say, you are looking for $\left|1 -\frac {|y|}{x}\cdot\frac e\pi\right| \lt \frac 1{\pi x} 10^{-4}$, i.e. for $\frac{x}{|y|}$ to be a very good rational approximation of $\frac e\pi$. Look at the continued fraction or Farey sequence converging to $\frac e\pi$. Following the Farey sequence (start with $\frac 01$ and $\frac 11$, add numerators and denominators, replace one endpoint with the new fraction) leads to $\frac{13453}{15548}$. Using Wolfram Alpha, $\mid 13453 \pi - 15548 e \mid$ is just under $10^{-4}$.
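A high-precision check of the claimed pair (a sketch; mpmath is used because about nine significant digits cancel in the subtraction, so extra working precision removes any doubt at the $10^{-4}$ scale):

```python
from mpmath import mp, mpf, pi, e

mp.dps = 50                  # 50 decimal digits of working precision
val = 13453 * pi - 15548 * e
print(val)                   # ≈ 9.986e-5, just under 1e-4
assert abs(val) < mpf('1e-4')
```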
H: Set of convergent sequences as a vector space, show associativity I am trying to prove that the set of convergent sequences $S=\{(a_n) : (a_n) \text{ convergent sequence of real numbers} \}$ is a vector space. I guess I have to show that all 8 axioms are valid, but I have a problem with how to show associativity. Should I do it with the epsilon criterion, or is it enough to say that the $+$ operation on sequences behaves just like it would on the set of real numbers? AI: To argue for associativity of addition, your second proposal is correct - there is nothing special about convergence here, and no limit-related argument is necessary. Addition of three non-convergent sequences $(x_n)$, $(y_n)$, and $(z_n)$ is associative for exactly the same reason. In contrast, showing closure under the operations of addition and scalar multiplication requires a (comparatively) non-trivial amount of work. It is possible to add two non-convergent sequences together and get a convergent sequence, so we see that something about convergence must be used.
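Spelled out, the pointwise argument is a single line: for sequences $x = (x_n)$, $y = (y_n)$, $z = (z_n)$ in $S$, $$\big((x+y)+z\big)_n = (x_n + y_n) + z_n = x_n + (y_n + z_n) = \big(x+(y+z)\big)_n \quad\text{for every } n,$$ using only associativity of addition in $\mathbb{R}$; two sequences that agree termwise are equal, so $(x+y)+z = x+(y+z)$.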
H: Normal at every localization implies normal I'm having some trouble with basic ring theory. Let $A$ be an integral domain and $\alpha$ an element of its fraction field integral over $A$. I am trying to understand a proof that $\alpha\in A$ under the hypothesis that $A_{\mathfrak p}$ is normal for every prime ideal $\mathfrak p$ of $A$. Recall an integral domain is normal if it is integrally closed in its field of fractions. My question has two parts. $1.$ Let $I=\operatorname{Ann}((\alpha A +A )/A)$. Why does it suffice to show that $I$ is all of $A$? $2.$ Assume $I$ is not all of $A$. Then it is contained in a prime $\mathfrak p$, and we can consider $A_{\mathfrak p}$. Why does this localization being normal imply the existence of $s\in A\backslash \mathfrak p$ with $s\alpha \in A$? (It then follows that $s$ pushes everything in $(\alpha A +A )$ into $A$, so $s\in I$, and we get a contradiction, which proves the sufficiency noted above.) AI: $I$ is the annihilator (in $A$) of $\hat{\alpha}\in K/A$, where $K$ denotes the fraction field of $A$. If $I=A$, then $1\in I$, so... It seems you forgot that $\alpha$ is integral over $A$. Then $\alpha$ is integral over $A_{\mathfrak p}$, so $\alpha\in A_{\mathfrak p}$, which means that there exists $s\in A\backslash\mathfrak p$ such that...
H: Closed-form expression for the exponent? Assume we have a simple equation $a^x = y, \quad a, x \in \mathbb{R}, \; a \neq 0$ from which $x$ needs to be evaluated. If we set the restriction $a > 0$, there is a simple logarithm expression available: $x = \log_a y = \frac{\log y}{\log a}$. Still, I'm not sure how to deal with cases such as $(-2)^x=-8$. Eventually, I would like to solve for $\theta$ in $a^{\theta} \exp (-\theta \sum\limits_{k=1}^{N} f(k)) = c \qquad a,c \neq 0, \; N \in \mathbb{N}^{+}$ Any help would be highly appreciated. AI: If $a,y \lt 0$ in $\Bbb R$, real-valued exponentiation restricts $x$ to odd integers (or, more generally, rationals with odd numerator and odd denominator), and we can use $x=\frac {\log(-y)}{\log(-a)}$. If $a \lt 0, y \gt 0$ we are restricted to $x$ being an even integer (or a rational with even numerator and odd denominator) and can use $x=\frac {\log(y)}{\log(-a)}$
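For the final equation, note that when $a, c > 0$ the left-hand side is a single positive exponential, so taking logarithms linearizes the problem (this step is my addition; write $S = \sum_{k=1}^{N} f(k)$): $$a^{\theta} e^{-\theta S} = c \iff \theta\left(\log a - S\right) = \log c \iff \theta = \frac{\log c}{\log a - S}, \qquad \log a \neq S.$$ If $a < 0$ or $c < 0$, the parity restrictions above apply to $\theta$ first. A quick numeric check with hypothetical values:

```python
import math

# Hypothetical instance: a, c > 0 and f(k) = 1/k with N = 5.
a, c, N = 3.0, 7.0, 5
S = sum(1.0 / k for k in range(1, N + 1))
theta = math.log(c) / (math.log(a) - S)   # requires log(a) != S
assert abs(a**theta * math.exp(-theta * S) - c) < 1e-9
```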