H: Clues for $\lim_{x\to\infty}\sum_{k=1}^{\infty} \frac{(-1)^{k+1} (2^k-1)x^k}{k k!}$ Some clues for this question? $$\lim_{x\to\infty}\sum_{k=1}^{\infty} \frac{(-1)^{k+1} (2^k-1)x^k}{k k!}$$ AI: Hint: You're looking at $f(2x) - f(x) = \int_x^{2x} f'(t)\ dt$ where $f(z) = \sum_{k=1}^\infty (-1)^{k+1} z^k/(k k!)$. What is $f'(z)$?
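A numerical sanity check of where the hint leads (my own sketch, not part of the thread): since $f'(z)=(1-e^{-z})/z$, the sum equals $\int_x^{2x}(1-e^{-t})/t\,dt$, which should tend to $\ln 2$.

```python
from math import exp, log

def fprime(t):
    # f'(t) = (1 - e^{-t}) / t
    return (1.0 - exp(-t)) / t

def f2x_minus_fx(x, steps=20000):
    # midpoint rule over [x, 2x]
    h = x / steps
    return h * sum(fprime(x + (i + 0.5) * h) for i in range(steps))

for x in (10.0, 100.0, 1000.0):
    print(x, f2x_minus_fx(x))   # tends to log(2) = 0.6931...
print(log(2))
```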
H: Calculate the limit of the sequence of roots of the equation $f(x) = g(x)$. Consider the functions $\;f,g:(0,\frac{\pi}{2})\rightarrow \textbf{R}\;$ given by $f(x)=\tan x, \;g(x)=nx,\;$ where $n\in\textbf{N}, n\neq0$, and define the sequence $\,x_{n}\,$ given by the roots of the equation $f(x) = g(x)$, namely $f(x_{n}) = g(x_{n})$. Find $\;\lim_{n\rightarrow\infty} x_{n}$. Using the graphical representation of the functions $f$ and $g$ it can easily be seen that $\lim_{n\rightarrow\infty} x_{n}=\frac{\pi}{2}$. Is this the only way? AI: First, $x_n$ is uniquely determined (easy to show). Next, show that $\frac{ \tan x} { x} $ is strictly increasing on $(0, \frac{\pi}{2})$. This requires differentiation. Hence, conclude that the limit exists, and is $\frac{\pi}{2}$.
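To watch the convergence concretely, here is a small bisection sketch of my own (assuming $n\ge 2$, so that $\tan x - nx$ changes sign on $(0,\pi/2)$):

```python
from math import tan, pi

def x_n(n, tol=1e-12):
    # tan(x) - n*x is negative just above 0 (tan x ~ x) and huge near pi/2
    lo, hi = 1e-9, pi / 2 - 1e-9
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tan(mid) - n * mid < 0:
            lo = mid
        else:
            hi = mid
    return lo

for n in (2, 10, 100, 1000):
    print(n, x_n(n))   # approaches pi/2 = 1.5707963...
```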
H: Recommendations for website/journal/magazine in applied mathematics Which website/journal/magazine would you recommend to keep up with advances in applied mathematics? More specifically, my interests are: multivariate/spatial interpolation, numerical methods, computational geometry, geostatistics, etc. I am looking for a fairly high-level and broad-ranging source of info. AI: Try the SIAM Review. It features Survey and Review papers of wide interest.
H: Can all real/complex vector spaces be equipped with a Hilbert space structure? Let $X$ be a vector space over $\mathbb K \in \{\mathbb R, \mathbb C\}$. Does there exist a pairing $X \times X \rightarrow \mathbb K$ that induces a Hilbert space structure on $X$? I have been thinking that one may choose an arbitrary basis of $X$, which exists by the axiom of choice, and declare it to be an orthonormal basis of the Hilbert space, but then limits of series might become unwieldy. AI: No. If $X$ is a space of countably infinite dimension, then the Baire category theorem implies that $X$ can't be given the structure of a Hilbert space (or even a Banach space): $X$ would be the union of countably many finite-dimensional subspaces (the spans of the first $n$ basis vectors), each of which is closed and nowhere dense. Your idea fails since, in an infinite-dimensional Hilbert space, an orthonormal basis is never an algebraic (Hamel) basis.
H: If $ 5x+12y=60$ , what is the minimum of $\sqrt{x^2+y^2}$? I know this can be easily done by solving for $y$ and substituting, so that you only have to find the minimum value of the parabola $\large x^2 + \left ( \frac {60-5x}{12} \right)^2$ using standard techniques, but is there a less messy way to do this using inequalities? I tried various things such as $AM-GM$, but I don't get anywhere. One thing which I did notice about the restriction is that it is of the form $ax+by=ab$, but I don't know how to make any use of that. Thanks. AI: Note that $$(5x+12y)^2+(12x-5y)^2=(13^2)(x^2+y^2).$$ Since $5x+12y$ is given, we minimize $x^2+y^2$ by minimizing $(12x-5y)^2$. The minimum value of this is $0$. It follows that the minimum value of $\sqrt{x^2+y^2}$ is $\dfrac{60}{13}$.
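As a quick numerical cross-check of the answer (my own sketch), scanning points on the line gives the same minimum $60/13\approx 4.6154$:

```python
from math import hypot

# points on the line 5x + 12y = 60, parametrized by x
best = min(hypot(x, (60 - 5 * x) / 12)
           for x in (i / 1000 for i in range(-5000, 10001)))
print(best, 60 / 13)   # both ~ 4.61538...
```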
H: equation of the plane knowing two conditions How can I find the equation of the plane $\pi$ such that: $$\pi \parallel \pi_{1}: x_{1}+3x_{2}-2x_{3}+15=0$$ and $$d_{1}: \frac{x_{1}+3}{4}=\frac{x_{2}-2}{2}=\frac{x_{3}-1}{5} \subset \pi.$$ thanks :) AI: Since the normal vector for $\pi_1$ is $(1,3,-2)$, the equation of any plane parallel to $\pi_1$ has normal vector $(1,3,-2)$. So, the equation of the plane parallel to $\pi_1$ and containing the point $(a,b,c)$ is $$1(x_1-a)+3(x_2-b)-2(x_3-c)=0.$$ Take any point $(a,b,c)$ on the line $d_1$ and substitute into the above. The resulting plane will contain $d_1$, since the direction vector $(4,2,5)$ of $d_1$ is orthogonal to the normal $(1,3,-2)$: indeed $4\cdot 1+2\cdot 3+5\cdot(-2)=0$.
H: Holomorphic function on $D\subset \mathbb{C}$ has to be $\mathcal{C}^\infty (D)$? I'm confused about this, in my notes I have the following: Theorem: Let $f$ be holomorphic inside and on the boundary ($C$, itself a contour) of a simply connected region $D$. Then $\oint_Cf(z)dz=0$. Proof: Assuming that $f'$ is continuous in $D$ and on $C$,... Now I always thought that a holomorphic function on $D$ is $\mathcal{C}^\infty(D)$. Is that not the case? AI: A standard assumption in this context is that $f$ is holomorphic in $D$ (which is open, so doesn't include the boundary) and continuous on its closure $\overline{D}$. It doesn't have to be differentiable on the boundary.
H: Question about Green's function How do I find the Green's function associated to the operator $\frac{-d^2}{dx^2}$ and to the Dirichlet boundary conditions $u(a)=u(b)=0$? Please help me. Thank you. AI: This is a great set of notes on this exact Green's function with identical boundary conditions. They're from an MIT math course. The author sketches out the derivation in the beginning.
H: How can one prove the inequality $(|r_1s_1|+\cdots+|r_ns_n|)^2\leq(r_1^2+\cdots+r_n^2)(s_1^2+\cdots+s_n^2)$ in $\mathbb{R}^n$? In the inner product space $\mathbb{R}^n$, Cauchy's inequality tells us that $$ (r_1s_1+\cdots+r_ns_n)^2\leq(r_1^2+\cdots+r_n^2)(s_1^2+\cdots+s_n^2). $$ Apparently the inequality can be improved to $$(|r_1s_1|+\cdots+|r_ns_n|)^2\leq(r_1^2+\cdots+r_n^2)(s_1^2+\cdots+s_n^2). $$ How can this be proven? I attempted with induction on $n$. The base case is clear, and I observed $$ (|r_1s_1|+\cdots+|r_ns_n|+|r_{n+1}s_{n+1}|)^2 $$ can be expanded as $$(|r_1s_1|+\cdots+|r_ns_n|)^2+2(|r_1s_1|+\cdots+|r_ns_n|)|r_{n+1}s_{n+1}|+|r_{n+1}s_{n+1}|^2. $$ On the other hand, $(r_1^2+\cdots+r_{n+1}^2)(s_1^2+\cdots+s_{n+1}^2)$ can be expanded as $$ (r_1^2+\cdots+r_n^2)(s_1^2+\cdots+s_n^2)+(r_1^2+\cdots+r_{n+1}^2)(s_{n+1}^2)+(s_1^2+\cdots+s_{n+1}^2)(r_{n+1}^2)+r_{n+1}^2s_{n+1}^2. $$ By induction, it then suffices to show $$ 2(|r_1s_1|+\cdots+|r_ns_n|)|r_{n+1}s_{n+1}|\leq(r_1^2+\cdots+r_{n+1}^2)(s_{n+1}^2)+(s_1^2+\cdots+s_{n+1}^2)(r_{n+1}^2). $$ However, I'm not sure this inequality is even true, as even the base case is not clear. AI: Just note that if $r_i s_i$ is negative for any $i$, you can replace $r_i$ with $-r_i$, and Cauchy's inequality still applies. But the right hand side does not change!
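The sign-flip argument is easy to test numerically (a throwaway sketch of my own):

```python
import random

random.seed(0)
for _ in range(5):
    r = [random.uniform(-5, 5) for _ in range(6)]
    s = [random.uniform(-5, 5) for _ in range(6)]
    lhs = sum(abs(a * b) for a, b in zip(r, s)) ** 2
    rhs = sum(a * a for a in r) * sum(b * b for b in s)
    print(lhs <= rhs + 1e-9)   # True every time
```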
H: Show that the Beta Function $\beta (x,y)$ Converges When $x \gt 0, \space y \gt 0$ Given $ \beta (x, y) = \int_0^1 t^{x-1}(1-t)^{y-1} dt$, Show that $\beta (x,y)$ converges when $x \gt 0, \space y \gt 0$. $$\int_0^1 t^{x-1}(1-t)^{y-1} dt = \int_0^{0.5} t^{x-1}(1-t)^{y-1} dt + \int_{0.5}^1 t^{x-1}(1-t)^{y-1} dt$$ Now, according to the text, for the integral from $0$ to $1 \over 2$, $t^{x-1}(1-t)^{y-1} \le t^{x-1}$. $\int_0^{0.5} t^{x-1} dt \lt {\infty}$ entails that $\int_0^{0.5} t^{x-1}(1-t)^{y-1} dt$ converges. Well, $t^{x-1}(1-t)^{y-1} \le t^{x-1}$ when $y \ge 1$, not for all $y \gt 0$. Then the text does something similar for the integral from ${1 \over 2}$ to $1$: Since $t^{x-1}(1-t)^{y-1} \le (1-t)^{y-1}$, $\int_{0.5}^1 (1-t)^{y-1} dt \lt {\infty}$ entails that $\int_{0.5}^1 t^{x-1}(1-t)^{y-1} dt$ converges. Well, $t^{x-1}(1-t)^{y-1} \le (1-t)^{y-1}$ when $x \ge 1$. So the text shows that $\beta (x,y)$ converges when $x, y \ge 1$. Did I make a mistake somewhere or misunderstand something? AI: It should say that if $y > 0$ there is a constant $c$ such that $t^{x-1}(1-t)^{y-1} \le c t^{x-1}$ for $0 < t < 1/2$.
H: Find the equation of the tangent plane given a vector instead of a point Find the equation of the tangent plane at $\mathbf p = (0,0)$ on the surface $z=f(x,y)=\sqrt{1-x^2-y^2}$. Give an intuitive geometric argument to support the result. However $\mathbf p$ is a vector. I see that the surface $z=f(x,y)$ is the upper half of the unit sphere. Thanks. Additional: I found the tangent plane equation to be $0=1-z$, but I am having trouble answering the second question. AI: The normal vector at a point $(x_0,y_0)$ is given by: $$ \mathbf{n} = (-f_x,-f_y,1)|_{(x,y)= (x_0,y_0)} = \left(\frac{x_0}{\sqrt{1-x_0^2 -y_0^2}}, \frac{y_0}{\sqrt{1-x_0^2 -y_0^2}},1\right) = (0,0,1). $$ The tangent plane is $$ \mathbf{n}\cdot(x-x_0,y-y_0,z - f(x_0,y_0)) = (0,0,1)\cdot (0,0,z-1)= 0; $$ geometrically speaking, this means: the tangent plane at $(x,y)= (0,0)$ is perpendicular to the normal vector of the surface $z = f(x,y)|_{(x,y)= (0,0)}$. Hence the plane is $z -1=0$. Moreover, the normal vector is $(0,0,1)$, which points along the positive direction of the $z$-axis, so a plane perpendicular to this vector must be a "vertical" shift (a shift along the $z$-axis) of the $xy$-plane $z=0$; hence it must be $z = a$. Now since the plane must pass through $(x_0,y_0,f(x_0,y_0))$, which is $(0,0,1)$ in your case, this implies $a=1$.
H: Finite union of compact sets is compact Let $(X,d)$ be a metric space and $Y_1,\ldots,Y_n \subseteq X$ compact subsets. Then I want to show that $Y:=\bigcup_i Y_i$ is compact only using the definition of a compact set. My attempt: Let $(y_n)$ be a sequence in $Y$. If $\exists 1 \leq i \leq n\; \exists N \in \mathbb N \; \forall j \geq N\; y_j \in Y_i$ then $(y_n)$ has a convergent subsequence because $Y_i$ is compact. Otherwise, $$ \forall 1 \leq i \leq n \; \forall N \in \mathbb N\; \exists j \geq N\; y_j \notin Y_i $$ Assuming for the moment that $n = 2$ and using induction later we have that $$ \forall N \in \mathbb N \; \exists j \geq N \; y_j \in Y_1 \backslash Y_2 $$ With this we can make a subsequence $\bigl(y_{n_j}\bigr)_{j=0}^\infty$ in $Y_1 \backslash Y_2$. This sequence lies in $Y_1$ and thus has a convergent subsequence. This convergent subsequence of the subsequence will then also be a convergence subsequence of the original sequence. Now we may use induction on $n$. AI: It looks like your definition of compactness is that every sequence has a convergent subsequence. There is something you need to be cautious about, here: you are using $n$ for two different things! You use it first as the highest index of your $Y_i$s, and then as the index variable of your arbitrary sequence. Instead, let's go with $Y_1,...,Y_k$ as your compact sets. Now, your proof gets the job done, but your detour into the "all but finitely many $y_n$ lie in some $Y_i$" possibility (how you start your proof) is unnecessary, as is induction. We can get there more simply if we recall that a union of finitely-many finite sets is again a finite set. For $1\le i\le k,$ let $$\mathcal I_i=\{n\in\Bbb N:y_n\in Y_i\}.$$ (That is, $\mathcal I_i$ is the set of indices of sequence elements lying in $Y_i$.) Since the $y_n$ are all in $Y=\bigcup_{i=1}^kY_i,$ then $\bigcup_{i=1}^k\mathcal I_i=\Bbb N,$ whence at least one of the $\mathcal I_i$ is infinite (by the fact we recalled earlier). Without loss of generality, suppose $\mathcal I_1$ is infinite, so that the points $y_n$ lying in $Y_1$ form a subsequence of $\{y_n\}_{n=0}^\infty.$ Then we can proceed as you did.
H: What is $A$ in the definition of atlas? The definition of atlas I encountered is An atlas for $M$ is a family {$\varphi_\alpha: U_\alpha \rightarrow U_\alpha^\prime: \alpha \in A $} of charts such that {$U_\alpha: \alpha \in A$} is an open cover of $M$. Intuitively, I guess it is just the index set, a place that holds all the names of the subsets. But this is rather ungrounded. The entry on Wikipedia uses $\alpha$ without defining which set it belongs to: http://en.wikipedia.org/wiki/Atlas_%28topology%29. And I checked Clifford Taubes' Differential Geometry; he defines charts without mentioning atlases at all. AI: You are correct. $A$ is just an index set. It merely gives a label to each chart. Note that a chart is an element of an atlas; therefore to define a chart you won't mention an atlas. It's the other way around: an atlas is a special collection of charts. Taubes's book defines an atlas without an indexing set, but the definition he gives is equivalent. This will just change some notation. For example $$\bigcup_{\alpha \in A} U_\alpha$$ would be written $$\bigcup_{(U, \varphi) \in \mathcal{U}} U$$ in Taubes's book.
H: If two maps induce the same homomorphism on the fundamental group, then they are homotopic This is exercise 15.11(d) in C. Kosniowsky's book A first course in algebraic topology: Prove that two continuous mappings $\varphi,\ \psi:X\to Y$, with $\varphi(x_0)=\psi(x_0)$ for some point $x_0\in X$, induce the same homomorphism from $\pi(X,x_0)$ to $\pi(Y,\varphi(x_0))$ if $\varphi$ and $\psi$ are homotopic relative to $x_0$. This is easy to solve. We were asked to prove the converse, but is it true to begin with? If so, is it a standard exercise or a tough one? AI: Hint: Consider $\phi\colon S^2\to S^2$ the identity map.
H: If $\langle v,s\rangle+\langle s,v\rangle\leq \langle s,s\rangle$ for all $s\in S$, why is $v\in S^\perp$? Suppose you have a complex inner product space $V$, a subspace $S$, and some vector $v\in V$ such that $$\langle v,s\rangle+\langle s,v\rangle\leq\langle s,s\rangle$$ for all $s\in S$. How can you determine $v\in S^\perp$? I jotted down that for a given $\epsilon>0$ and $s\in S$, there exists $c\in\mathbb{C}$ depending on such $\epsilon$ and $s$ such that $\langle cs,cs\rangle=|c|^2\|s\|^2<\epsilon$. Then $$ \begin{align*} \langle v,cs\rangle+\langle cs,v\rangle &=\bar{c}\langle v,s\rangle+c\langle s,v\rangle\\ &=\bar{c}\langle v,s\rangle+c\overline{\langle v,s\rangle}=2\Re(\bar{c}\langle v,s\rangle)\\ &\leq \langle cs,cs\rangle=|c|^2\|s\|^2<\epsilon \end{align*} $$ My hope was to somehow show $\Re\langle v,s\rangle$ and $\Im\langle v,s\rangle$ equal $0$, but it seems to be going in the wrong direction. Is there a more useful approach? AI: Hint: Let $s\in S$ be a nonzero vector. Then $\varepsilon \alpha s\in S$ for every $\varepsilon>0$ and every $\alpha\in\mathbb{C}$. Hence $\langle v,\varepsilon \alpha s\rangle+\langle \varepsilon \alpha s,v\rangle\leq \langle \varepsilon \alpha s,\varepsilon \alpha s\rangle$ for all $\varepsilon>0$. Argue that $\mathrm{Re}(\langle \alpha s,v\rangle)=0$. As $\alpha$ is arbitrary, argue further that $\langle s,v\rangle=0$.
H: A simple way to obtain $\prod_{p\in\mathbb{P}}\frac{1}{1-p^{-s}}=\sum_{n=1}^{\infty}\frac{1}{n^s}$. Let $ p_1<p_2 <\cdots <p_k < \cdots $ be the increasing list of the set $\mathbb{P}$ of all prime numbers. By the sum of an infinite geometric series we have $\sum_{k=0}^\infty r^k = \frac{1}{1-r}$, for $0<r<1$. For all $s>1$ and $r=\frac{1}{p_k^{s}}$ we have $$ \begin{array}{cccccc} \dfrac{1}{1-p_{1}^{-s}} & = & 1+\dfrac{1}{(p_1^s)^1}+\dfrac{1}{(p_1^s)^2}+\dfrac{1}{(p_1^s)^3}+ & \!\!\cdots\!\! & +\dfrac{1}{(p_1^{s})^{\alpha_1}}+ & \cdots \\ \dfrac{1}{1-p_{2}^{-s}} & = & 1+\dfrac{1}{(p_2^s)^1}+\dfrac{1}{(p_2^s)^2}+\dfrac{1}{(p_2^s)^3}+ & \!\!\cdots\!\! & +\dfrac{1}{(p_2^s)^{\alpha_2}}+ & \cdots \\ \dfrac{1}{1-p_{3}^{-s}} & = & 1+\dfrac{1}{(p_3^s)^1}+\dfrac{1}{(p_3^s)^2}+\dfrac{1}{(p_3^s)^3}+ & \!\!\cdots\!\! & +\dfrac{1}{(p_3^s)^{\alpha_3}}+ & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots &\vdots \\ \dfrac{1}{1-p_{k}^{-s}} & = & 1+\dfrac{1}{(p_k^s)^1}+\dfrac{1}{(p_k^s)^2}+\dfrac{1}{(p_k^s)^3}+ & \!\!\cdots\!\! & +\dfrac{1}{(p_k^s)^{\alpha_k}}+ & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \end{array} $$ And the Fundamental Theorem of Arithmetic tells us that every integer $ n> 1$ can be decomposed uniquely as a product $$ n= p_{i_1}^{\alpha_{i_1}}p_{i_2}^{\alpha_{i_2}}\cdots p_{i_k}^{\alpha_{i_k}} $$ of powers of prime numbers $p_{i_1}< p_{i_2}< \cdots < p_{i_k}$ for integers $\alpha_{i_1},\alpha_{i_2},\ldots,\alpha_{i_k}\geq 1$. Since $ n^s= (p_{i_1}^s)^{\alpha_{i_1}}(p_{i_2}^{s})^{\alpha_{i_2}}\cdots (p_{i_k}^s)^{\alpha_{i_k}}$, using brute force I can prove that $$ \prod_{p\in\mathbb{P}}\frac{1}{1-p^{-s}}=\sum_{n=1}^\infty \frac{1}{n^s} $$ But I would like to know if there is a simple and elegant way to achieve this result through the above list. AI: I absolutely love this result, I literally cannot stop from getting goosebumps and smiling whenever I think about it. It is a proof from probability theory! I learned it in David William's Probability with Martingales, of which it is part of exercise E4.2. Fix $s>1$ and recall that $\zeta(s) = \sum_{n \in \mathbb{N}} n^{-s}$, so we aim to show that $1/\zeta(s) = \prod_p(1-p^{-s})$ where of course $p$ ranges over the primes. First, define a probability measure $P$ and an $\mathbb{N}$-valued random variable $X$ such that $P(X=n) = n^{-s}/\zeta(s)$ (for example take $P(\{n\}) = n^{-s}/\zeta(s)$ and $X(\omega)=\omega$). Let $E_k := \{X \text{ is divisible by } k\}$. We claim that the events $(E_p : p \text{ prime})$ are independent. We note that $$ P(E_k) = \sum_{i=1}^\infty P(X=ik) = \sum_{i=1}^\infty \frac{(ik)^{-s}}{\zeta(s)} = k^{-s} \frac{\zeta(s)}{\zeta(s)} = k^{-s}. $$ Then if $p_1,\ldots,p_n$ are distinct primes we have $$\bigcap_{i=1}^n E_{p_i} = E_{\prod_{i=1}^np_i},$$ so that $$ P\left(\bigcap_{i=1}^n E_{p_i}\right) = P(E_{\prod_{i=1}^np_i}) = \left(\prod_{i=1}^n p_i \right)^{-s} = \prod_{i=1}^n p_i^{-s} = \prod_{i=1}^n P(E_{p_i}) $$ so our independence claim is proved. Then we note that $1$ is the unique positive integer which is not a multiple of any prime. Hence $$ \frac{1}{\zeta(s)} = P(X=1) = P\left(\bigcap_p E_p^c\right) = \prod_p(1-P(E_p)) = \prod_p(1-p^{-s}). $$
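A numerical illustration (my own sketch): truncating both sides at $s=2$, the product over primes and the partial zeta sum both approach $\pi^2/6$.

```python
from math import pi

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, ok in enumerate(sieve) if ok]

s = 2.0
prod = 1.0
for p in primes_up_to(10000):
    prod *= 1.0 / (1.0 - p ** (-s))
zeta = sum(n ** (-s) for n in range(1, 100000))
print(prod, zeta, pi ** 2 / 6)   # all close to 1.64493...
```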
H: Computing directional derivative of $f(x,y,z)$ Find the directional derivative of $f(x,y,z) = xy + yz + xz$ at the point $P(1,1,1)$ in the direction of $v = \langle7,3,-6\rangle$. I got the answer as $32/\sqrt{94}$. Is that right? If not, what is the right answer? Please help. AI: $$ \left . f_v \right |_P = \left . \left ( \nabla f\cdot \frac v{|v|} \right )\right |_P = \left . \left ( \frac {\langle y + z, x + z, x + y\rangle \cdot \langle 7, 3, -6\rangle}{\sqrt{49+9+36}} \right ) \right |_P = \left . \left ( \frac {-3x+y+10z}{\sqrt{94}}\right ) \right |_P = \frac 8{\sqrt{94}} $$
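A symbolic check with sympy (my own sketch) confirms $8/\sqrt{94}$ rather than $32/\sqrt{94}$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x*y + y*z + x*z
grad = sp.Matrix([f.diff(v) for v in (x, y, z)])
v = sp.Matrix([7, 3, -6])
deriv = (grad.T * v)[0] / v.norm()
print(sp.simplify(deriv.subs({x: 1, y: 1, z: 1})))   # 8/sqrt(94) (= 4*sqrt(94)/47)
```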
H: Gradient of a function in multi-variate calculus please help Find the gradient of the function at the given point. $g(x, y) = 3xe^{y/x}$, at point $(3, 0)$. How do you compute the gradient of this function. Please help me. AI: Initially OP wrote $g(x,y) = 3xy^{\frac{y}{x}}$ but realized it was incorrect. I have left the initial work for posterity. The actual answer is given in the edit. As per my comment above, we want to take the logarithm of $g$ as standard calculus identities do not apply to this function. Doing so we have: $$\begin{eqnarray} \log g &=& \log(3xy^{\frac{y}{x}}) \\ &=& \log(3x) + \frac{y}{x}\log(y). \end{eqnarray} $$ Differentiating this with respect to $x$ gives: $$\frac{\partial\log g}{\partial x} = \frac{1}{g}\frac{\partial g}{\partial x} = \frac{1}{x} - \frac{y}{x^2}\log(y).$$ Multiplying both sides of this by $g$, we get that $$\frac{\partial g}{\partial x} = 3xy^{\frac{y}{x}}\left(\frac{1}{x}-\frac{y}{x^2}\log(y)\right). $$ Do you see how to do this for $y$? Knowing these two, do you see how to get the gradient? Edit: Since the actual definition of $g$ is $g(x,y) = 3x\exp\left(\frac{y}{x}\right)$, I will use standard calculus techniques. $$\frac{\partial g}{\partial x} = \frac{\partial}{\partial x}\left(3x\exp\left(\frac{y}{x}\right)\right) = 3\exp\left(\frac{y}{x}\right) + 3x\exp\left(\frac{y}{x}\right)\left(-\frac{y}{x^2}\right).$$ Above I used product rule to differentiate the separate functions of $x$ and then used chain rule on the exponential. Knowing this, do you see how to get $\frac{\partial g}{\partial y}$?
H: What is an effective and practical means to teach about natural logarithms and log laws to high school students? My students are quite practically minded, and I have found that teaching them concepts in a practical manner to be very helpful (maths 'experiments'; modelling on the smartboard etc). I am looking for a practical means (hands on preferably) to teach about the log laws of natural logarithms. AI: For introducing logarithm, perhaps something like starting with exponential population growth and then noting that the graph is linear in log-space? You can then change back and forth between representations and see how things like base-change affect the plot and the interpretation. This is also probably representative of where less-mathematical scientists run into logarithms the most.
H: Why can't we divide by zero?! On Arabic math sites, I find many topics like "here is a proof that 0=2", and we answer that the proof is wrong because we can't divide by zero. But I really wonder: why can't we divide by zero? I think the reason mathematicians refuse division by zero is that it leads to contradictions like $1=2=3=\ldots$, but those are facts about the physical world; why should mathematicians obey the outside world? I also read a news item on the BBC about a new theorem that finds special cases where we can divide by zero, but I think no details were mentioned. Does anyone have any idea about this? AI: You are correct that division by zero results in statements such as $1 = 2 = 3 =\cdots$, but that is not just a statement about facts in the real world (or "physical world"): mathematics as we know it would fall apart if this were allowed. What, after all, can be said, mathematically, if we have that $1 = 2 = 3 = \cdots$? Just consider the implications, as they are vast, if we had division by zero defined, and hence "meaningful"... How would you define division if you allow division by zero? See, e.g., this answer regarding division by zero, with respect to how we define division; division as we know it would fail if we were to permit division by zero. Indeed, how would we define $0^{-1}$ so that $0\cdot 0^{-1} = 1$? These questions are simple prompts to suggest that to allow/define division by zero would entail having to redefine all axioms of arithmetic, and the field axioms, and much more. I.e., any successful redefining and reconstruction of a consistent system which is also consistent with allowing division by zero would yield, as first suggested in the comments, a system which would be exceedingly uninteresting, even perhaps meaningless.
H: Real number construction: prove that $Q$ is a subfield of $R$ I am slightly confused about the proof presented in Rudin. It says that the ordered field $Q$ is isomorphic to the ordered field $Q*$ whose elements are the rational cuts. It is this identification of $Q$ with $Q*$ which allows us to regard $Q$ as a subfield of $R$. Now, I know that if I have to prove that $A$ is a subfield of $(B,+,.)$ then first I need to prove that $A$ is a subset of $B$ and that $A$ is closed under the field operations of $B$: for example, if $a, b \in A$ then $a+b \in A$ should be true. Now for the above case, let $p$ and $q$ be two rational numbers, each represented by the set of rational numbers less than it in $R$. Now if we add them using the $+$ operation in $R$, does that give the rational number $p+q$? I don't think so. AI: We deal with your question about $p+q$. That is a large step toward proving that $\mathbb{Q}$, under the usual operations, and $\mathbb{Q}^\ast$, under the newly defined addition and multiplication, are isomorphic. For any rational number $r$, let $r^\ast$ be the cut associated with $r$. So we want to prove that $(p+q)^\ast=p^\ast+q^\ast$. Note that $(p+q)^\ast$ and $p^\ast +q^\ast$ are sets. Presumably you have already proved that in fact $p^\ast+q^\ast$ is a cut, since that is needed to show that the operation $+$ on cuts (the reals) is well-defined. To show that two sets $X$ and $Y$ are equal, we show that any element of $X$ is an element of $Y$, and vice-versa. So let $r\in (p+q)^\ast$. Then $r\lt p+q$. Let $t=\frac{p+q-r}{2}$, and let $x=p-t$, $y=q-t$. Note that $x+y=p+q-2t$ and therefore $x+y=r$. Then $x\lt p$, so $x\in p^\ast$. Similarly, $y\in q^\ast$. So by definition $x+y\in p^\ast +q^\ast$. But $x+y=r$. This completes the proof in one direction. The other direction is easier. Let $w\in p^\ast+q^\ast$. Then $w=x+y$ for some $x\in p^\ast$ and $y\in q^\ast$. So $x\lt p$ and $y\lt q$, and therefore $x+y\lt p+q$. It follows that $x+y\in (p+q)^\ast$, that is, $w\in (p+q)^\ast$. There is more to do to show that the map that takes $r$ to $r^\ast$ is an isomorphism. We need to show that the mapping is onto (very easy), and one to one (straightforward). We also need to verify that multiplication also behaves well, that is, that $(pq)^\ast=p^\ast q^\ast$. One direction of this will take some work.
H: Why inverse modular exponentiation is harder than inverse exponentiation without a modulus I am new to number theory. I read that in cryptography, inverting modular exponentiation is used because it is hard. But I couldn't understand its advantage over inverse exponentiation without a modulus. Could someone please explain the advantage? AI: Here's an example that might make things more clear. Suppose you want to solve $3^x=80$. You know that $3^4=81$, so the answer is a little less than 4. Using inverse exponentiation (logs) on the reals, we can find the exact answer as $3.988769\ldots$. Because the function is continuous, we can do all sorts of nice approximations like the bisection method or Taylor series to home in on the answer quickly. Now let's try to solve $3^x\equiv 80\pmod{17}$. As it happens, $80\equiv 12\pmod{17}$, so we want to solve $3^x\equiv 12\pmod{17}$. Since we have only integers at our disposal, there's no really efficient way to figure out that the only answer is $x=13$. Knowing that $81$ is close to $80$ doesn't help at all; $4$ isn't close to $13$. I found $13$ by trial and error. If $17$ were a much larger number, trial and error would take too long.
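To make the contrast concrete, here is a brute-force discrete-log search (my own sketch; real cryptographic moduli are far too large for this to work):

```python
from math import log

# continuous case: logarithms invert exponentiation quickly
print(log(80, 3))          # ~ 3.98877

# modular case: no continuity to exploit, so we try exponents one by one
def dlog(base, target, mod):
    v = 1
    for x in range(mod):
        if v == target % mod:
            return x
        v = (v * base) % mod
    return None

print(dlog(3, 80, 17))     # 13, found only by exhaustive trial
```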
H: extrema and saddle points Examine the following function for relative extrema and saddle points: $$f(x, y) = 9x^2-5y^2-54x-40y+4.$$ I did this and got that the point should be at $(3, -4, 3)$. Is that right? Also, how do I know if it is a saddle point or a minimum? AI: Hints: Your solution is correct: the critical point is $(3, -4)$, and the function value there is $f(3,-4) = 3$. There are no global min or max. There are no local min or max. To determine if it is a saddle, you look at the determinant of the Hessian: $$\det(H) = -180 < 0 \rightarrow \text{saddle}$$ So we have a saddle at the critical point. See my response here for details: Maximum and minimum absolute of a function $(x,y)$
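The same computation in sympy (my own sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = 9*x**2 - 5*y**2 - 54*x - 40*y + 4
crit = sp.solve([f.diff(x), f.diff(y)], [x, y])
print(crit)                          # {x: 3, y: -4}
print(f.subs(crit))                  # 3
print(sp.hessian(f, (x, y)).det())   # -180 < 0, hence a saddle
```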
H: If $C⊆A$ and $D⊆B$ and $A$ and $B$ are disjoint, then $C$ and $D$ are disjoint. Let $A,B,C$ and $D$ be sets. How do I prove: if $C\subseteq A$ and $D\subseteq B$ and $A$ and $B$ are disjoint, then $C$ and $D$ are disjoint? Could anyone please explain to me how to approach this problem? Thanks. AI: Use the definition of disjoint sets. If $A$ and $B$ are disjoint, then: for any $x \in A$ we have $x\notin B$, and for any $y \in B$ we have $y\notin A$; i.e., $A$ and $B$ are disjoint if and only if $A\cap B = \varnothing$. Now, we can prove your statement using a proof by contradiction: Suppose $C$ and $D$ are NOT disjoint, i.e. suppose $C\cap D\neq \varnothing$. Then there exists an $x \in C\cap D.\;$ Then $x \in C$ and $x \in D$. In particular, that element $x$ is in $A$ since it's in $C\subseteq A$, and it's in $B$, since it's in $D\subseteq B$. So $x\in A\cap B$, which means $A\cap B \neq \varnothing.\;$ But wait: by hypothesis, $\;A\cap B =\varnothing.\;$ Contradiction... Therefore our supposition must be false, as it leads to a contradiction. Hence, it must be the case that $C$ and $D$ are disjoint.
H: equation of the tangent plane Find an equation of the tangent plane to the surface at the given point. $g(x, y) = x^2-y^2$ at $(7, 2, 45)$. I know the answer is either $14(x-7)-4(y-2)+(z-45)=0$ or $14(x-7)-4(y-2)-(z-45)=0$. I think it is $14(x-7)-4(y-2)+(z-45)=0$. Am I right? If not, please explain why I am not. AI: The equation of the tangent plane is given by $$ z=g(a,b)+g_x(a,b)(x-a)+g_y(a,b)(y-b), $$ or, if you prefer the other form using the gradient, $$ \nabla (z-g(x,y))\Big|_{(7,2,45)}\cdot((x,y,z)-(7,2,45))=0. $$ Here $g_x(7,2)=2\cdot 7=14$ and $g_y(7,2)=-2\cdot 2=-4$, so the plane is $z=45+14(x-7)-4(y-2)$, i.e. $14(x-7)-4(y-2)-(z-45)=0$: the second form.
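A quick sympy confirmation (my own sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
g = x**2 - y**2
a, b = 7, 2
plane = g.subs({x: a, y: b}) \
    + g.diff(x).subs({x: a, y: b}) * (x - a) \
    + g.diff(y).subs({x: a, y: b}) * (y - b)
print(sp.expand(plane))   # 14*x - 4*y - 45, i.e. z = 45 + 14(x-7) - 4(y-2)
```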
H: Definition of differentiable manifold. In the notes above, I am not quite sure what the upside-down $\Pi$ is, nor the notation $/ \sim$. $\amalg$ means disjoint union, as Peter Tamaroff advised. As for the notation $/ \sim$, I guess it means taking away subsets of the manifold that are mapped multiple times? Therefore, each element in the manifold is covered by a coordinate chart exactly once, hence is disjoint? What does it mean by "One just needs to check the resulting space is Hausdorff" at the end? The reason it mentions Hausdorff at the end of the remark is because of the definition of manifold: A second-countable Hausdorff space $\mathcal{X}$ is an $n$-dimensional (abstract) manifold if for every $p \in \mathcal{X}$ there exists a neighborhood $\mathcal{U}$ and a homeomorphism $\varphi: \mathcal{U} \rightarrow \mathcal{U}^\prime$ to an open set $\mathcal{U}^\prime \subseteq \mathbb{R}^n$. AI: The notation "$\coprod$" refers to disjoint union. The notation "$/\sim$" refers to taking the quotient by the equivalence relation $\sim$. That is, you identify those points which are related by $\sim$. In short, the definition is "take the disjoint sets $\mathcal U'_\alpha$ and for each point $p\in \mathcal U'_\alpha$, glue $p$ to $(\varphi_\beta\circ\varphi_\alpha^{-1})(p)$."
H: Determine whether $F(x)= 5x+10$ is $O(x^2)$ Please, can someone here help me to understand the Big-O notation in discrete mathematics? Determine whether $F(x)= 5x+10$ is $O(x^2)$ AI: Let us give a definition for Big-O notation: Suppose $g(x)\geq 0$. We say that $f(x)=O(g(x))$ as $x\rightarrow\infty$ if: Loosely: $\lvert f(x)\rvert$ is bounded by a constant multiple of $g(x)$ for $x$ sufficiently large. Rigorously: There exists $C>0$ and $X\in \mathbb{R}$ such that for all $x>X$, we have $\lvert f(x)\rvert \leq Cg(x)$. In your case, we deal with $f(x)=5x+10$. So, we want to show that for $x$ sufficiently large, $f(x)$ can be bounded by $Cx^2$ for some $C$. To make life simple, let's assume $x\geq 10$, so that $\lvert 5x+10\rvert=5x+10\leq 5x+x=6x$. Now, if $x\geq 10$, then $x\leq x^2$. So, for $x\geq 10$, we have $$ \lvert 5x+10\rvert=5x+10\leq 6x\leq 6x^2. $$ Hence it is true that $5x+10=O(x^2)$, as you were asked. The big idea with Big-O notation is this: all it asks you to do is to think about the rate of growth of the function, once $x$ is large enough that only leading terms really matter. In this case, the exact order of your function is $x$; for $x\rightarrow\infty$, of course $x^2$ is a faster rate of growth than $x$ is.
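The witnesses $C=6$ and $X=10$ from the argument can be checked directly (a trivial sketch of my own):

```python
# |5x + 10| <= 6*x**2 should hold for every x >= 10
assert all(abs(5 * x + 10) <= 6 * x**2 for x in range(10, 100000))
print("bound holds with C = 6 for x >= 10")
```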
H: Question about $\langle b\rangle$ in a Hausdorff Topological Group Let $G$ be a topological Hausdorff group, if $b\in G$, does there exist an open neighborhood of $b$ such that $U\cap \langle b\rangle$ is finite. I know this is a really odd question, but I would really appreciate to know whether this is true or not. Thanks AI: Not necessarily. For example, let $G$ be the unit circle in the complex plane, that is, the points $e^{it}$, where $t$ ranges over the reals, under the usual multiplication. Let $b=e^{it_0}$, where $t_0$ is not a rational multiple of $\pi$. It can be shown that the group generated by $b$ is dense in the unit circle. You may be more familiar with the result in the following form. Suppose that $\tau$ is irrational. Then the set of fractional parts of $n\tau$, as $n$ ranges over the natural numbers, is dense in the unit interval.
H: Finding the angle of a moving target I'm developing a submarine game and found a mathematical problem that exceeds my knowledge. A submarine has $x$ and $y$ coordinates in the plane, a speed $v$, and two angles: one indicates the direction in which it moves and the other the direction in which it shoots. If a submarine in motion plans to shoot another that is also in motion, what is the angle of rotation of the weapon that results in the destruction of the target? Besides the known data of the submarines we know that: Rotational speed of the weapon: $\tfrac {20^\circ}{\text{tick}}$. Shooting speed: $20 - \frac{1200}{\text{distance from target}} \frac{\text{space unit}} {\text {tick}}$. Maximum shooting speed: 3. The submarine is a rectangle of width $w$ and height $h$. Does anyone have any idea how to get a formula? The case I need most is when the shooter is stationary, so if someone can solve at least that case, I'm grateful. AI: Let's assume that the shooter is not moving, and that the target is moving on a known trajectory at a fixed speed. Calculate where the weapon's trajectory and the target's trajectory cross. Then calculate when each of them arrives. If the weapon arrives $k$ ticks sooner, then recalculate based on waiting $1$ tick, $2$ ticks, etc. (not necessarily $k$ ticks since the shooting speed depends on distance to target). If instead the target arrives before the weapon, then turn the weapon 20 degrees (leading the target) and recalculate as previously. If instead the shooter is moving at a fixed speed, then subtract that velocity from the target's and pretend the shooter isn't moving. This is equivalent to changing the reference frame to be centered at the shooter. All of this depends on the physics of the game. It seems strange that shooting speed depends on how far the target is. It also seems strange that the weapon doesn't slow down in the water, and that gravity doesn't act on the weapon.
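Here is one minimal way to code the stationary-shooter case (a sketch under the stated assumptions: straight-line target motion, the distance-based shot-speed formula clamped to $(0,3]$ since the spec is ambiguous, and ignoring weapon turn time; all names are illustrative):

```python
from math import atan2, cos, degrees, hypot, sin

def intercept_angle(shooter, target, heading, speed):
    # Step the target forward tick by tick and aim at the first future
    # position that a shot fired now could reach by that same tick.
    sx, sy = shooter
    tx, ty = target
    vx, vy = speed * cos(heading), speed * sin(heading)
    for tick in range(1, 100000):
        px, py = tx + vx * tick, ty + vy * tick
        dist = hypot(px - sx, py - sy)
        shot_speed = min(max(20 - 1200 / dist, 0.5), 3)  # "maximum shooting speed: 3"
        if dist / shot_speed <= tick:      # shot arrives no later than the target
            return degrees(atan2(py - sy, px - sx))
    return None

# target starting at (500, 0), heading +y at speed 2, shooter at the origin
print(intercept_angle((0, 0), (500, 0), 1.5708, 2.0))
```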
H: An inequality property of the Fibonacci sequence Given the Fibonacci sequence $F_n$, Wikipedia says (http://en.wikipedia.org/wiki/Fibonacci_number#List_of_Fibonacci_numbers) $$ F_{2n-1} = F_n^2+F_{n-1}^2$$ so that $$F_{2n-1}>F^2_n$$ What is the smallest such $k$ for which $$F_{n+k}>F^2_n\,\,?$$ I'm not sure where to start, or how to find smaller values than $2n-1$. AI: $k=n-1$ is the minimum possible value. We have $F_{2n-1}=F_n^2+F_{n-1}^2$. Rewrite the LHS as $F_{2n-2}+F_{2n-3}$, so $$F_{2n-2}=F_n^2+F_{n-1}^2-F_{2n-3}$$ But now $F_{2n-3}=F_{n-1}^2+F_{n-2}^2$. Substituting, we get $$F_{2n-2}=F_n^2-F_{n-2}^2<F_n^2$$ Hence even the term immediately preceding $F_{2n-1}$ is less than $F_n^2$.
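An empirical check of minimality (my own sketch), computing for each $n$ the least $k$ with $F_{n+k}>F_n^2$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def F(n):
    return n if n < 2 else F(n - 1) + F(n - 2)

for n in range(3, 15):
    k = next(k for k in range(1, 2 * n) if F(n + k) > F(n) ** 2)
    print(n, k)   # prints k = n - 1 every time
```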
H: Let A be a family of pairwise disjoint sets. Prove that if B⊆A, then B is a family of pairwise disjoint sets. I know that for this problem I have to use contradiction. Could anyone check my work and guide me through the problem if it's wrong? So far, this is what I have. Thanks!! Contradiction: $\mathcal B\subseteq \mathcal A$, then $\mathcal B$ is not a family of pairwise disjoint sets. If $\mathcal B$ is not pairwise disjoint then $\mathcal A \neq \mathcal B$ or $\mathcal A \cap \mathcal B\neq \varnothing $ then, x $\in \mathcal A$ and x$\notin \mathcal B$ However, x $\in \mathcal B\subseteq \mathcal A$ , which is a contradiction since we said that x$\notin \mathcal B$. AI: There is no need to use contradiction. Suppose that $B_0,B_1$ in $\mathscr{B}$ with $B_0\ne B_1$. Then $B_0$ and $B_1$ are distinct members of $\mathscr{A}$, so $B_0\cap B_1=\varnothing$ (since $\mathscr{A}$ is a pairwise disjoint family). Thus, every pair of distinct members of $\mathscr{B}$ are disjoint, and by definition $\mathscr{B}$ is a pairwise disjoint family. The argument that you’ve given does not make sense. $\mathscr{B}$ is a family of sets, and in order to show that this family is pairwise disjoint, you must show that if $B_0,B_1\in\mathscr{B}$, then either $B_0=B_1$, or $B_0\cap B_1=\varnothing$. There is no point to looking at $\mathscr{A}\cap\mathscr{B}$: we know that $\mathscr{A}\cap\mathscr{B}=\mathscr{B}$ simply because $\mathscr{B}\subseteq\mathscr{A}$. This fact has in itself no bearing on whether $\mathscr{B}$ is pairwise disjoint.
H: Linear Differential Equation So I asked this question a while back, but the answer I recorded really did not help me solve it. Find the general solution to: $a_0 + a_1x + a_2y + a_3\,dy/dx = 0$ If $a_0 = 0$ this becomes quite easy to solve (just set $y = vx$ and factor it), but separation of variables doesn't work otherwise. I think it is important to note that this is non-homogeneous. Can someone give the general solution with an explanation of how it was solved (and, if it isn't too much, an explanation of the intuitions/motivations behind the methods used)? AI: Hints: Rewrite it as $\displaystyle y' + \frac{a_2}{a_3} y = -\frac{a_0}{a_3} - \frac{a_1}{a_3} x$ Find an integrating factor After those steps, you'll end up with: $$\displaystyle \large y(x) = -\frac{a_0}{a_2}+\frac{a_1 a_3}{a_2^2}-\frac{a_1 x}{a_2}+c_1 e^{-\frac{a_2 x}{a_3}}$$ As an alternate approach, you could solve the homogeneous equation and then guess at a particular solution and find the constants.
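A substitution check of the stated general solution in sympy (my own sketch):

```python
import sympy as sp

x, a0, a1, a2, a3, c1 = sp.symbols('x a0 a1 a2 a3 c1')
y = -a0/a2 + a1*a3/a2**2 - a1*x/a2 + c1*sp.exp(-a2*x/a3)
residual = a0 + a1*x + a2*y + a3*sp.diff(y, x)
print(sp.simplify(residual))   # 0, so the solution satisfies the ODE
```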
H: Is a uniformly continuous function bounded? Let $f$ be uniformly continuous on $(a,b)$. How do you prove that it is bounded on $(a,b)$? AI: Hint: Any uniformly continuous function on a dense subset extends continuously to the whole space; here, $f$ extends continuously to $[a,b]$.
H: finding probability density function Let $x$ be a random variable with cdf $F(x)=0,\ x\leq 0$ and $F(x)=1-e^{-x},\ x>0$. The value of $P(\frac{1}{4}\leq e^{-x}\leq \frac{1}{3})$ is (A) $\frac{1}{12}$ (B) $\frac{1}{3}$ (C) $\frac{1}{4}$ (D) $e^{-3}-e^{-4}$. I know that $P(a\leq X\leq b)=F(b)-F(a)+P(X=a)$; I am using this but I get none of the above answers. AI: We will change notation, and use caps for random variables. So the random variable will be called $X$. Then $e^{-X}$ is also a random variable. We have $$\frac{1}{4}\le e^{-X} \le \frac{1}{3}\quad\text{ if and only if}\quad \ln(1/4) \le -X\le \ln(1/3),$$ that is, if and only if $\,\ln 3\le X\le \ln 4$. Now we can use the formula you quoted. $$\Pr(\ln 3\le X\le \ln 4)=(1-e^{-\ln 4}) -(1-e^{-\ln 3})=e^{-\ln 3}-e^{-\ln 4},$$ that is, $\frac{1}{3}-\frac{1}{4}$, which simplifies to $\frac{1}{12}$.
H: Question about modular arithmetic and divisibility If $$a^3+b^3+c^3\equiv0\pmod 7$$ calculate the residue after dividing $abc$ by $7$. My first idea here was trying the 7 possible remainders and then raising to the third power: $$a+b+c\equiv x \pmod 7$$ $$a^3+b^3+c^3+3(a+b+c)(ab+bc+ac)-3abc\equiv x^3\pmod 7$$ $$3(a+b+c)(ab+bc+ac)-3abc\equiv x^3 \pmod 7$$ If I replace $x=0$ the result is immediate, $abc\equiv0 \pmod7$. But with $x=1$ $$3(7n+1)(ab+bc+ac)-3abc\equiv x^3 \pmod 7$$ $$3(ab+bc+ac)-3abc\equiv x^3 \pmod 7$$ And there is nothing more to simplify. I know the LHS is a multiple of $3$, but what can I do with that? Is it necessary that $x^3$ or $7-x^3$ be a multiple of $3$? Any help is greatly appreciated. AI: If one of $a,b,c$ is divisible by $7$, then $abc\equiv0\pmod 7$. Else $n^3\equiv \begin{cases} 1 &\mbox{if } n \equiv 1,2,4\pmod 7 \\ -1 & \mbox{if } n \equiv 3,5,6\pmod 7 \end{cases} \pmod 7$ Observe that no sum of three terms, each $\pm1$, is $\equiv 0\pmod 7$. Hence $$a^3+b^3+c^3\equiv0\pmod 7\implies 7\text{ must divide at least one of } a,b,c.$$
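Enumerating cubes mod 7 makes the case analysis concrete (my own sketch):

```python
from itertools import product

print(sorted({n**3 % 7 for n in range(7)}))   # [0, 1, 6]: cubes are 0 or ±1 mod 7

# a^3 + b^3 + c^3 = 0 (mod 7) really does force 7 | abc
for a, b, c in product(range(7), repeat=3):
    if (a**3 + b**3 + c**3) % 7 == 0:
        assert (a * b * c) % 7 == 0
print("abc = 0 (mod 7) in every case")
```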
H: Monte Carlo Integration - determining if a random x,y coordinate falls within the circle or square My textbook says you can take any random $(x,y)$ coordinate between -1 and 1, like (-.3, .5) or (.4, -.7), and determine if the given coordinate falls within the circle by checking whether $\sqrt{x^2+y^2} < 1$. The part I don't understand is: why are you testing the $(x,y)$ coordinate against 1? What is the rationale behind squaring x and y and seeing if it is less than 1? How do you arrive at 1? Also, do you typically only use -1 and 1 when you're dealing with Monte Carlo integration, or can you use -5 and 5 or -10 and 10 or any other numbers? AI: The equation $\sqrt{x^2+y^2}<1$ is just the equation of the interior of a circle of radius one, and the box between $\pm 1$ is just a convenient bounding box whose area is known. This can be extended arbitrarily; for instance, to calculate the area between the function $\sin x$ and the $x$ axis between $0$ and $\pi/2$, you can use a Monte Carlo method and generate points with $x \in [0,\pi/2]$ and $y \in [0,1]$.
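The standard sketch: estimate $\pi$ as $4\times$ the fraction of random points in the square that land inside the circle.

```python
import random

random.seed(1)
N = 1_000_000
inside = sum(1 for _ in range(N)
             if random.uniform(-1, 1)**2 + random.uniform(-1, 1)**2 < 1)
# area(circle) / area(square) = pi / 4
print(4 * inside / N)   # ~ 3.14
```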
H: Mathematical Economics - Utility maximization I am thankful for any hints. What I have: Simple log-utility form: $u = \log c_1 + \beta \log c_2$ Budget constraints: $c_1 + s \leq w$ $c_2 \leq R\; s$ Problem: For utility maximization: $s = \frac{\beta}{1+\beta} \cdot w \; \; \; \; \;$ I am not getting this! I have tried this: Since the utility is monotonic, we use equality, and then I substitute $c_1$ and $c_2$, so I get: $ \max_s \; \log (w-s) + \beta\log (Rs)$ Thus, $\frac{1}{w-s} + \frac{\beta}{s} = 0$ $ \implies s = - \beta w - \beta s \therefore$ $s = \frac{- \beta}{1 + \beta} w$ !! Any ideas are appreciated. AI: The utility is an increasing function of $c_1$, so we can take $c_1 = w-s$. If $\beta<0$, then we can choose $c_2$ close to zero to get arbitrarily large utility, so presumably we have $\beta \ge 0$. In this case, the utility is a non-decreasing function of $c_2$, so we should take $c_2 = Rs$. The utility now reduces to the unconstrained $f(s)= \log (w-s) + \beta \log (Rs)$. Differentiating gives $-\frac{1}{w-s} + \beta \frac{1}{s} = 0$ (note the first minus sign!), which yields $s = \frac{\beta}{1+\beta} w$.
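The corrected first-order condition can be checked symbolically (my own sketch, taking $\beta, w, R > 0$):

```python
import sympy as sp

s, w, R, beta = sp.symbols('s w R beta', positive=True)
u = sp.log(w - s) + beta * sp.log(R * s)
print(sp.solve(sp.diff(u, s), s))   # [beta*w/(beta + 1)]
```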
H: Finding Big-O with Fractions I want to know how I can find the lowest integer n such that f(x) is big-O($x^n$) for a) $f(x) = \frac {x^4 + x^2 + 1}{x^3 + 1}$ I've fooled around with this a bit and tried going from $\frac {x^4 + x^2 + 1}{x^3 + 1} \le \frac {x^4 + x^2 + x}{x^3 + 1} \le \frac {x^4 + x^2 + x}{x^3}$ but I'm honestly just taking wild stabs at the problem. The single fractional problem in my textbook talks about making the fraction larger by making the numerator bigger and/or the denominator smaller. That's the only basis I have for what is written above and I don't know how to progress from there. b) $f(x) = \frac {x^4 + 5 \log x}{x^4 + 1}$ I've got no clue where to start with this one. I tried looking at it as a combination of functions ($x^4 +5 \log x)$ but I have no idea what to do with $5 \log x$. When writing your reply please try to keep it as simple as possible. Edit (June 24): Because the question asks the answer to use the lowest exponents possible, the book lists $2x$ where x > 1 as a possible solution for a) and $2x^0$ where x > 1 as a possible solution to b). AI: Intuitively, the idea is that the "order" of a function is that of its highest-order term. In $x^4 + x^2 + 1$, you can usually ignore everything except the $x^4$, and say that it is $O(x^4)$ and not $O(x^d)$ for any smaller $d < 4$. Similarly $x^3 + 1$ is of the order of $x^3$, and in general a polynomial of degree $d$ is of the order of $x^d$. So in your first function $f(x) = \frac {x^4 + x^2 + 1}{x^3 + 1}$, the numerator is of order $x^4$ and the denominator is of order $x^3$, so the fraction $f(x)$ is of order $\frac{x^4}{x^3} = x$. That's the idea of what's really going on, but to prove it formally, you can do the division: write $f(x) = \frac {x^4 + x^2 + 1}{x^3 + 1} = x + \frac{x^2 - x + 1}{x^3 + 1}$, and observe that the remainder term $\frac{x^2 - x + 1}{x^3 + 1} = \frac{1}{x+1} \to 0$ as $x \to \infty$, so $f(x) = O(x)$ and not $O(x^{1 - \epsilon})$ for any $\epsilon > 0$. You don't even have to do the division completely: just get rid of the $x^3$ terms from both numerator and denominator. Observe that $$f(x) = \frac {x^4 + x^2 + 1}{x^3 + 1} = \frac{(x^4 + x^2 +1)/x^3}{(x^3 + 1)/x^3} = \frac{x + \frac{1}{x} + \frac{1}{x^3}}{1 + \frac{1}{x^3}}$$ and you can prove directly from this that $f(x) = O(x)$. Similarly, for the second function, $f(x) = \frac {x^4 + 5 \log x}{x^4 + 1}$, both numerator and denominator are of the same order $x^4$, so you should expect the fraction to be $O(1)$. You can prove it by writing $$f(x) = \frac {x^4 + 5 \log x}{x^4 + 1} = \frac {(x^4 + 5 \log x)/x^4}{(x^4 + 1)/x^4} = \frac{1 + \frac{5 \log x}{x^4}}{1 + \frac{1}{x^4}} $$ and proving from there that $f(x) = O(1) = O(x^0)$. (E.g. you can show that, for $c_1 = \frac12$ and $c_2 = 2$ (say) and sufficiently large $x$, we have $c_1 \le f(x) \le c_2$, since $f(x) \ge \frac{1}{1 +\frac1{x^4}} \ge \frac{1}{2}$ when $x \ge 1$, and similarly $f(x) \le \frac{1 + \frac{5\log x}{x^4}}{1} \le 2$ when $\frac{5 \log x}{x^4} \le 1$, which again is true for sufficiently "large" $x$. Actually in this case any constants $c_1 < 1 < c_2$ will do, since $\lim_{x\to\infty} \frac{f(x)}{x} = 1$, but we don't need all this just to prove that $f(x) = O(1)$.)
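sympy reproduces both decompositions (my own sketch):

```python
import sympy as sp

x = sp.symbols('x')
f1 = (x**4 + x**2 + 1) / (x**3 + 1)
print(sp.apart(f1))              # x + 1/(x + 1), so f1 = O(x)

f2 = (x**4 + 5*sp.log(x)) / (x**4 + 1)
print(sp.limit(f2, x, sp.oo))    # 1, so f2 = O(1)
```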
H: Double integrals question please? We have the double integral$$\int_{0}^{1} dx\int_{0}^{x} {f(x,y)} dy$$ In my book it says that we change the order of the integral and we have $$\int_{0}^{1}dy \int_{y}^{1} {f(x,y)} dx$$ How is this even possible? Can you explain this to me? AI: The region of integration is the triangle $D=\{(x,y): 0\le x\le 1,\ 0\le y\le x\}$ below the line $y=x$. To reverse the order, describe the same triangle with $y$ as the outer variable: $y$ runs over $[0,1]$, and for each fixed $y$, $x$ runs from $y$ to $1$. In symbols, $$\left\{\begin{aligned} 0\le x&\le 1 \\ 0\le y&\le x \end{aligned}\right. \iff \left\{\begin{aligned} 0\le y&\le 1 \\ y\le x&\le 1 \end{aligned}\right.$$
H: Finite groups $G$ with epimorphisms $\phi_1,\phi_2 : F_2 \rightarrow G$ s.t. $|\phi_1(x^{-1}y^{-1}xy)|\neq|\phi_2(x^{-1}y^{-1}xy)|$. Can someone give an example of a finite (ideally nonabelian) group $G$ and two surjective homomorphisms $\phi_1,\phi_2 : F_2 \rightarrow G$ (where $F_2$ is the free group on the generators $x,y$), such that $\phi_1(x^{-1}y^{-1}xy)$ and $\phi_2(x^{-1}y^{-1}xy)$ have different orders? Is this possible? Are there conditions on $G$ that make this impossible? (In other words, as suggested by Mariano Suárez-Alvarez, are there conditions on $G$, still assumed to be a 2-generated group, such that the commutator $[a,b]$ always has the same order for any pair of generators $a,b$?) thanks AI: Consider the elements $a=(12345678)$, $b=(12)$ and $c=(12)(34)$. The sets $\{a,b\}$ and $\{a,c\}$ both generate $S_8$. Compute the orders of the corresponding commutators.
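The suggested computation, carried out with sympy's permutation groups (my own sketch; sympy permutations are 0-indexed):

```python
from sympy.combinatorics import Permutation, PermutationGroup

a = Permutation([1, 2, 3, 4, 5, 6, 7, 0])   # the 8-cycle (12345678)
b = Permutation([1, 0, 2, 3, 4, 5, 6, 7])   # (12)
c = Permutation([1, 0, 3, 2, 4, 5, 6, 7])   # (12)(34)

# both pairs generate S_8 (order 40320), as the answer claims
print(PermutationGroup(a, b).order(), PermutationGroup(a, c).order())
print((~a * ~b * a * b).order())   # 3
print((~a * ~c * a * c).order())   # 5 -- the commutator orders differ
```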
H: Finding mathematical expectation Suppose that the distribution of a random variable $X$ is given by $P(X=-1)=\frac{1}{6}=P(X=4)$ and $P(X=0)=\frac{1}{3}=P(X=2)$. Then find the value of $E\left(\frac{X}{X+2}\right)$. (A) $\frac{1}{36}$ (B) $\frac{1}{18}$ (C) $\frac{1}{9}$ (D) $\frac{1}{6}$. I am finding $E(X)=\frac{7}{6}$ and $E(X+2)=\frac{19}{6}$. AI: $E(X)$ and $E(X+2)$ will not help. You want to find the value of $\frac{X}{X+2}$ in the cases $X=-1$, $X=4$, $X=0$, and $X=2$, multiply by the relevant probability, and add up. It will not be difficult. For example, when $X=-1$, we have $\frac{X}{X+2}=-1$, so one term of your sum will be $(-1)(\frac{1}{6})$.
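Exact arithmetic confirms option (C) (my own sketch):

```python
from fractions import Fraction as Fr

dist = {-1: Fr(1, 6), 4: Fr(1, 6), 0: Fr(1, 3), 2: Fr(1, 3)}
print(sum(p * Fr(v, v + 2) for v, p in dist.items()))   # 1/9
```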
H: Prove that $(3+5\sqrt{2})^m=(5+3 \sqrt{2})^n$ has no positive integer solutions? Is my proof ok? I set $b=3+5 \sqrt{2}$, so that we have $b^m=(b+2-2 \sqrt{2})^n$, or $b^m=(b+\sqrt2(\sqrt2- 2))^n$. Since $RHS<LHS$, $n>m$. However, from what we know about binomial expansion, we will have a $b^n$ on the $RHS$ which will not cancel out with anything since it is the highest power, so we will never be able to reduce the $RHS$ to just $b^m$. I don't really like my proof; is there a better one? And does this have a solution for rational powers, and how can you prove that it does/doesn't? Thanks. AI: If there is a solution, there is one where the $\sqrt{2}$'s are replaced by $-\sqrt{2}$. Multiply. We get $$(3+5\sqrt{2})^m(3-5\sqrt{2})^m=(5+3\sqrt{2})^n(5-3\sqrt{2})^n.$$ That gives $(-41)^m=7^n$, impossible unless $m=n=0$. Can't have non-zero rationals either. Just raise to the appropriate integer power and use basically the same argument.
H: Find $\lim_{n\to\infty}2^n\underbrace{\sqrt{2-\sqrt{2+\sqrt{2+\dots+\sqrt2}}}}_{n \textrm{ square roots}}$. Find $\displaystyle \lim_{n\to\infty}2^n\underbrace{\sqrt{2-\sqrt{2+\sqrt{2+\dots+\sqrt2}}}}_{n \textrm{ square roots}}$. By a geometric method, I know that this is $\pi$. But is there an algebraic method to find this? Thank you. AI: Let $2=4\cos^{2}{\theta}$; we get $$\underbrace{\sqrt{2-\sqrt{2+\sqrt{2+\dots+\sqrt2}}}}_{n \textrm{ square roots}}=2\left|\sin{\frac{\theta}{2^{n-1}}}\right|$$ And as $n \longrightarrow \infty$, $\frac{\theta}{2^{n-1}} \longrightarrow 0$. Hence $\displaystyle \lim_{n \to \infty}2^{n-1}\sin{\frac{\theta}{2^{n-1}}}=\theta$, so $$\text{Limit}=\lim_{n\to\infty}2^n\cdot 2\left|\sin{\frac{\theta}{2^{n-1}}}\right|=|4\theta|$$ As $\cos{\theta}=\pm \frac{1}{\sqrt{2}}$ we have $\theta=\pm \frac{\pi}{4}+n\pi$ $\quad ,n \in \mathbb{Z}$ Here we take $\theta =\pm \frac{\pi}{4}$ (to know why, please refer to the note at the bottom). Hence the value of the limit is $\pi$ Note: the reason why $\theta=\pm \frac{\pi}{4}$ only. We know that by the definition of the square-root function $\sqrt{x^2}=|x|$; using the same reasoning, we have $\sqrt{2+2\cos{x}}=2\cos{\frac{x}{2}}$ iff $\cos{\frac{x}{2}}\ge 0 \Rightarrow \frac{x}{2}\in \left[2n\pi,\frac{\left(4n+1\right)\pi}{2}\right] \cup \left[\frac{\left(4n-1\right)\pi}{2},2n\pi\right] ,n\in \mathbb{Z}$. As we applied the process $n$ times, for every whole number $j \le n$ we need to have the following inequality: $\cos{\frac{x}{2^j}}\ge 0 \Rightarrow \frac{x}{2^j}\in \left[2N\pi,\frac{\left(4N+1\right)\pi}{2}\right] \cup \left[\frac{\left(4N-1\right)\pi}{2},2N\pi\right] ,N\in \mathbb{Z}$. Now it is easy to reason out that our $\theta$ must belong to the interval $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$
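Numerically (my own sketch; the radicals are evaluated from the inside out):

```python
from math import sqrt, pi

def nested(n):
    # n square roots: n-2 inner '+' radicals around sqrt(2), one '-' on top
    r = sqrt(2)
    for _ in range(n - 2):
        r = sqrt(2 + r)
    return sqrt(2 - r)

for n in (5, 10, 20):
    print(n, 2**n * nested(n))   # approaches pi
print(pi)
```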
H: Boston Celtics VS. LA Lakers - Expectation of series of games? The Boston Celtics & LA Lakers play a series of games. The first team to win 4 games wins the whole series. The probability of winning or losing each game is equal (1/2). a. What is the expectation of the number of games in the series? So I defined an indicator: $x_i=1$ if game $i$ was played. It is clear that $E[x_1]=E[x_2]=E[x_3]=E[x_4]=1$. For the 5th game, for each team we have different scenarios (W=win, L=lose) and the probability to win is: W W W L $((\frac12)^3=\frac18)$ W W L L $((\frac12)^2=\frac14)$ W L L L $((\frac12)^1=\frac12)$ $\frac12+\frac14+\frac18=\frac78$ We can use the complementary event: if only one game is L in a series of 4 games (so the 5th game will be played): $1-\frac18=\frac78$ The same calculation is for the 6th and 7th games: 6th: W W W L L or W W L L L => $(\frac18+\frac14=\frac38)$ 7th: W W W L L L => $(\frac18)$ And the expectation is $E[x]=1+1+1+1+\frac78+\frac38+\frac18=\frac{43}8$ What am I missing here, and how can I fix it? AI: We find the probability the series lasts $4$ games, $5$, $6$, $7$. Then we can find the expectation using a basic calculation. Let $X$ be the number of games. It is clear that $\Pr(X=4)=\frac{2}{2^4}$. For the probability the Celtics win in $4$ is $\frac{1}{2^4}$. Double that to take into account the probability the Lakers win in $4$. To find $\Pr(X=5)$, again there are two cases, Celtics win series in $5$ games and Lakers win series in $5$. Let's find the probability Celtics win in $5$. They must win exactly $3$ of the first $4$ games, and win the fifth. The probability of this is $\binom{4}{1}(1/2)^3(1/2)(1/2)$. Multiply by $2$ for the probability the series lasts exactly $5$ games. Similarly, the probability the Celtics win in $6$ is $\binom{5}{3}(1/2)^3(1/2)^2(1/2)$. Again, multiply by $2$ to find $\Pr(X=6)$. Similar reasoning gives us the probability the series lasts $7$ games. Or else we can find that by subtracting the sum of the other $3$ probabilities from $1$. As mentioned at the beginning, now that you have the distribution of $X$, calculation of $E(X)$ is straightforward. Remarks: $1.$ Note that the probability the series lasts exactly $6$ games is equal to the probability that the series lasts $7$. For suppose that after the $5$-th game, the series has not been decided. Then one team is leading, $3$ games to $2$. With probability $\frac{1}{2}$ it will win the $6$-th game, and the series will be over in $6$. And with probability $\frac{1}{2}$ it will lose the $6$-th game, and the series will require $7$ games. Thus $\Pr(X=6)=\Pr(X=7)$. Precisely the same is true of a series where the first team to win $n+1$ games, where $n\ge 1$, wins the series. The probability the series lasts $2n$ games is equal to the probability the series lasts $2n+1$ games. (We are again assuming independence, and that each team has probability $\frac{1}{2}$ of winning any particular game.) $2.$ During our calculations, we implicitly assumed independence. We need to know not only that each team has probability $1/2$ of winning any game, we must also assume that game results are independent.
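Computing the distribution exactly (my own sketch) gives $E[X]=93/16=5.8125$; by my calculation, the slips above are $P(\text{6th game played})=5/8$ (not $3/8$) and $P(\text{7th game played})=5/16$ (not $1/8$):

```python
from fractions import Fraction as Fr
from math import comb

# P(series ends in exactly g games) = 2 * C(g-1, 3) * (1/2)^g
p = {g: 2 * comb(g - 1, 3) * Fr(1, 2) ** g for g in range(4, 8)}
print(p)                                   # {4: 1/8, 5: 1/4, 6: 5/16, 7: 5/16}
print(sum(g * pg for g, pg in p.items()))  # 93/16 = 5.8125
```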
H: Isomorphism between quotient rings of $\mathbb{Z}[x,y]$ I need to find the condition on $m,n\in\mathbb{Z}^+$ under which the following ring isomorphism holds: $$ \mathbb{Z}[x,y]/(x^2-y^n)\cong\mathbb{Z}[x,y]/(x^2-y^m). $$ My strategy is to first find a homomorphism $$ h:\mathbb{Z}[x,y]\rightarrow\mathbb{Z}[x,y]/(x^2-y^m) $$ and then calculate the kernel of $h$. To achieve this, I furthermore try to identify the isomorphism between $\mathbb{Z}[x,y]$ and itself, which I guess is $$ f:p(x,y)\mapsto p(ax+by,cx+dy) $$ where $ad-bc=\pm 1,a,b,c,d\in\mathbb{Z}$. Then $f$ induces a homomorphism $h$. But from here I failed to move on. I believe there is some better idea, can anyone help? Updated: It should be isomorphism between quotient rings, not groups. Very sorry for such mistake. AI: Well, the additive group of $\mathbb{Z}[x,y]/(x^2 - y^n)$ is just a free abelian group on the generators $1,x, y, xy, xy^2, xy^3,\ldots$ - ie its the free abelian group on countably many generators, which is independent of $n$. Hence, $m,n$ can be anything. Basically, the $\mathbb{Z}$-module $\mathbb{Z}[x,y]$ is generated by all monomials with coefficient 1, ie, $1, x, y, x^2, y^2, xy, x^3, y^3, x^2y, xy^2,\ldots$. The relation $x^2 - y^n$ just allows you to replace any $x^k$ you see with $x^{k-2}y^n$ (for $k\ge 2$). However, in both cases you still end up having countably infinitely many generators (and hence countably infinite rank), and since a free abelian group is determined up to isomorphism by its rank, they're isomorphic. If you're talking about ring isomorphisms, then Potato is right - if $n\ne m$, then the two rings are not isomorphic. Let $R_n = \mathbb{Z}[x,y]/(x^2 - y^n)$. As to why they're not isomorphic as rings, this seems to me to be a rather deep question, and I feel like the best explanation is through algebraic geometry. Essentially, the polynomial $x^2 - y^n$ defines a curve $C_n$ in the plane (namely the set of points (a,b) where $a^2 - b^n = 0$). These curves $C_n$ are birationally defined by their function fields, which in this case is just the quotient field of your ring $R_n$. If the rings $R_n,R_m$ are isomorphic, then their quotient fields must be isomorphic as well, and so the curves $C_n,C_m$ they define must be birationally equivalent. However, it can be computed via the Riemann-Hurwitz formula on the coordinate function $y$, viewed as a function from your curve to $\mathbb{P}^1$ that the curve associated to $R_n$ has geometric genus $(n-1)(n-2)/2$ (as long as $n\ge 1$, see exercise 2.7 in Silverman's book "The Arithmetic of Elliptic Curves"), which being a birational invariant, tells you that the function fields for your curves $C_n,C_m$ are not isomorphic for $n\ne m$, and hence $R_n, R_m$ could not be isomorphic either. Finally it's easy to see that $R_0$ is not isomorphic to $R_n$ for any $n\ge 1$ since $R_0$ has nilpotent elements, and $R_n$ for $n\ge 1$ does not. I can think of some other proof ideas, but essentially they all rely on some form of algebraic geometry. Many of these ideas I could phrase purely ring-theoretically, but it would seem complicated and completely unmotivated without explaining the connection to geometry.
H: How Josiah Willard Gibbs introduced the notions of dot product and vector product Before posting this question, I realised that it might repeat a topic some users have asked about before, but I still cannot find a satisfying answer. So please forgive me for repeating the question: How did J.W. Gibbs invent the dot product and vector product? (Certainly the definition $|v||w|\cos\theta$ cannot just come out of nowhere, and the proof $\|v-w\|^2=\|w\|^2+\|v\|^2-2\|v\|\|w\|\cos\theta$ isn't helpful either; why is the dot product used in this proof instead of the cross product, $v\cdot w$ instead of $v\times w$?) Did he base these notions on other people's work? Can someone please briefly explain the history of the origin of these products? Are there any books that talk about the history of vectors? Thanks. AI: Dot product and vector product are just a different presentation of the product of quaternions. Quaternions were invented by Hamilton as a result of a twenty year long attempt at generalizing the $\mathbb R$-algebra $\mathbb C$ of complex numbers. In modern notation, if two quaternions $q,q'$ are presented as the sum of a scalar in $\mathbb R$ and a vector in $3$-space, $q=a+\vec v$, $q'=b+\vec {w}$, then we have for their quaternionic product: $$ q\cdot q'=(ab-\langle \vec v,\vec {w}\rangle) + ( a\vec {w} +b\vec {v}+\vec {v}\times\vec {w} ) $$ and in particular we have the fundamental relation for two pure quaternions, that is quaternions with zero scalar part: $$ \vec v\cdot \vec {w}= - \langle \vec v,\vec {w}\rangle + \vec {v}\times\vec {w} $$ [I have written the scalar product as $\langle \vec v,\vec w\rangle$ in order not to confuse it with the quaternionic product $v\cdot w$] So Gibbs's contribution was to notice that, given two pure quaternions, it is very convenient in their quaternionic product to isolate the scalar part, yielding the scalar product up to sign, and the vector part, yielding the cross product. A great psychological advantage is that now everything takes place in $\mathbb R^3$ and there is no need to resort to $\mathbb R^4$, the underlying vector space of the $\mathbb R$-algebra of Hamilton's quaternions: I suppose that at the time four-dimensional space was regarded with some unease by physicists like Gibbs and by engineers.
H: Polynomial whose only values are squares Given a polynomial $ P \in \Bbb Z [X] $ such that, $ P (x)$ is the square of an integer for all integers x, is $ P $ necessarily of the form $ P (x)= Q (x)^2$ with $ Q \in \Bbb Z [X]$? AI: Yes, see this answer, which gives a proof, and references this article, which deals with the multivariate case.
H: Is there a power series which is pointwise convergent but not uniformly convergent on $(-1,1)$? I was recently reading that power series of the form $\sum_{n=0}^\infty b_n(x-a)^n$ converge uniformly to some limit function on compact intervals $[a-r,a+r]$ if $r$ is less than the radius of convergence. I was curious about the case of an open, noncompact interval. In particular, is there an example of a formal power series $\sum_{n=0}^\infty b_nx^n$ which is pointwise convergent on $(-1,1)$ but does not converge uniformly? AI: The series $\sum_{n=0}^\infty x^n$ converges pointwise on the interval $(-1,1)$ to the function $\frac{1}{1-x}$. If it were to converge uniformly on $(-1,1)$, then the limit function would have to be bounded, since each partial sum is a polynomial (hence bounded on $(-1,1)$) and a uniform limit of bounded functions is bounded; but $\frac{1}{1-x}$ is unbounded there. So this is an example of the kind you are looking for.
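To see the failure of uniform convergence numerically (my own illustration): the sup-distance between the $n$-th partial sum and $\frac{1}{1-x}$ over $(-1,1)$ stays enormous no matter how large $n$ is, because the error $\left|\frac{x^{n+1}}{1-x}\right|$ blows up near $x=1$:

```python
import numpy as np

x = np.linspace(-0.999, 0.999, 4001)       # grid approaching the endpoints
f = 1.0 / (1.0 - x)

for n in (5, 20, 100):
    s_n = (1.0 - x**(n + 1)) / (1.0 - x)   # n-th partial sum, closed form
    print(n, np.abs(f - s_n).max())        # sup error does not shrink
```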
H: Find the sum of a geometric series with changing signs? I need to find the sum of the following series: $$\sum_{n=0}^\infty \frac{3 \cdot (-1)^{n+1}}{2^{n}}$$ I started to simplify this: $$3 \cdot\sum_{n=0}^\infty \frac{(-1)^{n+1}}{2^{n}}$$ And then I got stuck. Any idea how to continue? Thank you. AI: Recall that, for $|x|<1$, $$\sum_{n=0}^{\infty} x^n = \frac{1}{1-x}$$ Now substitute $x=-y$ and get $$\sum_{n=0}^{\infty} (-y)^n = \frac{1}{1+y}$$ Note that $(-y)^n = (-1)^n y^n$. Substituting $y=1/2$ gives $\sum_{n=0}^\infty \frac{(-1)^n}{2^n} = \frac{1}{1+1/2} = \frac{2}{3}$. Finally, since $(-1)^{n+1}=-(-1)^n$, your sum equals $3\cdot\left(-\frac{2}{3}\right)=-2$.
H: Reference request for derivatives of complex functions I have been searching for a reference on derivatives of complex functions. All I have found so far were texts that were too convoluted for me to grasp. I was (and still am) searching for a reference that is aimed at beginners in complex analysis. I have one more question in relation to this: is taking the derivative of a complex function the same as taking the derivative of a real function? Just to make it unambiguous: $$\frac{d}{dx} x^2 + 3x + 9$$ is $$2x + 3 + 0$$ So, for complex functions, will it be the same? AI: Yes, all the standard formulas for derivatives of elementary functions (not including the absolute value function) work over the complex numbers. You might try http://people.math.gatech.edu/~cain/winter99/complex.html
H: Double integral question area? Calculate the integral: $$\iint_{D}dx\, dy$$ where $D :=\{(x,y):\ x^2+y^2\le a^2\}$. So this is obviously a circle, but how do I use it to integrate? AI: Without polar coordinates, but suffering a little more. The integral is $$\int\limits_{-a}^adx\int\limits_{-\sqrt{a^2-x^2}}^{\sqrt{a^2-x^2}}dy=2\int\limits_{-a}^a\sqrt{a^2-x^2}\,dx\stackrel{\text{even func.}}=4\int\limits_0^a\sqrt{a^2-x^2}\,dx=$$ $$=4a\int\limits_0^a\sqrt{1-\left(\frac xa\right)^2}\,dx=:I$$ and now substitution: $$\frac xa=\sin u\implies dx=a\cos u\,du\;,\;\;\begin{cases}x=a\implies u=\frac\pi2\\{}\\x=0\implies u=0 \end{cases}$$ Thus $$I=4a^2\int\limits_0^{\frac\pi2}\cos^2u\,du=\left.4a^2\,\frac{u+\sin u\cos u}2\right|_0^{\pi/2}=4a^2\cdot\frac\pi 4=\pi a^2$$
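For comparison, here is the polar-coordinates route that the answer above deliberately avoids (my addition); it is immediate: $$\iint_D dx\,dy=\int_0^{2\pi}\!\!\int_0^a r\,dr\,d\theta=2\pi\cdot\frac{a^2}{2}=\pi a^2.$$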
H: Representation of discrete distributions It is well-known that any Borel probability distribution on $[0,1]$ can be obtained starting from the probability space $([0,1],\mathscr B([0,1]),\lambda)$ where $\lambda$ is the Lebesgue measure. I wonder whether a similar result exists for the case of discrete distributions on $\Bbb N_0$. Namely, let the sample space be $\Omega = \Bbb N_0$. Does there exist a probability measure $\mu$ on $(\Omega,2^\Omega)$ such that for any other probability measure $\nu$ on the latter space there exists a measurable function $f_{\mu\nu}$ such that $\nu = (f_{\mu\nu})_*\mu$? If there are several measures $\mu$ with the latter property, are there any sufficient conditions which assure the existence of $f_{\mu\nu}$? In particular, it is clearly necessary that $\mu(\omega)>0$ for infinitely many $\omega$. Is it also a sufficient condition? Inspired by this problem. AI: No. A measure defined on $2^\Omega$ is necessarily purely atomic. Let $n$ be such that $\mu(\{n\})>0$, and let $\nu$ be a probability measure all of whose atoms have mass smaller than $\mu(\{n\})$. Then for any $f$, the pushforward $f_*\mu$ gives the point $f(n)$ mass at least $\mu(\{n\})$, so $f_*\mu\neq\nu$.
H: A problem on limits of functions Find two functions $ f(x) $ and $ g(x)$ such that 1) $\lim_{x\to 0} g(x)=10$; $\quad\lim_{x\to 10} f(x)=100$ but $\lim_{x\to 0} f(g(x))$ doesn't exist. 2) $\lim_{x\to 0} g(x)=10$; $\quad\lim_{x\to 10} f(x)=100$ but $\lim_{x\to 0+} f(g(x))=10 $ and $\lim_{x\to 0-} f(g(x))=100 $ 3) $\lim_{x\to 0} g(x)=10$; $\quad\lim_{x\to 10} f(x)=100$ but $\lim_{x\to 0+} f(g(x))=10 $ and $\lim_{x\to 0-} f(g(x)) $ doesn't exist I could only do the 1st one and found the other 2 difficult hence any help would be appreciated. My solution to the 1st one was $$g(x) = \begin{cases} x+10, & \text{if $x<0$ } \\ 10, & \text{if $x\ge 0$ } \\ \end{cases} \qquad \text{and} \qquad f(x) = \begin{cases} 100, & \text{if $x<10$ or x>10} \\ 50, & \text{if $x= 10$ } \\ \end{cases} $$ Then: $$f(g(x)) = \begin{cases} 100, & \text{if $x<0$ } \\ 50, & \text{if $x\ge 0$ } \\ \end{cases}$$ Is it correct? and please help on other 2 AI: Example for 1st: Yours Example for 2nd: Just modify your example by defining $f$ to be 10 when $x$ is 10, instead of 50. Example for 3rd: Let $$g(x)=\begin{cases}10+x\cos\frac1x&x<0\\10&x\ge0\end{cases}$$ $$f(x)=\begin{cases}10&x=10\\100&x\ne10\end{cases}$$ See that $g$ does have limit of $10$ at $0$ (for $x\cos\frac1x$ is continuous at $0$) and that $g$ keeps going above and below 10 as $x$ gets arbitrarily close to $0$. $g\left(-\frac1{2n\pi}\right)<10$ so $f\circ g$ there is $100$. $g\left(-\frac2{(2n+1)\pi}\right)=10$ so $f\circ g$ there is $10$. Enlarging $n$ in each case you keep approaching zero. We clearly see that the left hand limit does not exist. The right hand limit however clearly exists.
H: Proof of $e^z \neq 0$ Proof: Let $a \in \mathbb{C}$ be s.t. $e^{a} =0$. Then $0=e^a e^{-a} = e^{-a+a} = e^0 =1$, contradicting the existence of $a$. But why can we multiply by $e^{-a}$?? If $e^a=0$, then $e^{-a}=\frac{1}{e^a}=\frac{1}{0}$ which can't be defined? AI: How are you defining $e^a$ over the complex numbers in the first place? If this is done by the usual series for $$e^x=1+\sum_1^\infty \frac {x^n}{n!}$$ extending the definition from the real numbers, then the series converges absolutely for all $x$, so $e^{-a}$ exists and is defined independently of $e^a$; it is not defined as $\frac{1}{e^a}$. The property $e^xe^{-x}=1$ can be shown from the definition (for instance by the Cauchy product of the two series). Then everything is in place to derive a contradiction from the assumption $e^a=0$. But how you approach the problem depends on how $e^x$ has been defined in the first place.
H: The notations change as we grow up In school life we were taught that $<$ and $>$ are strict inequalities while $\ge$ and $\le$ aren't. We were also taught that $\subset$ was strict containment but. $\subseteq$ wasn't. My question: Later on, (from my M.Sc. onwards) I noticed that $\subset$ is used for general containment and $\subsetneq$ for strict. The symbol $\subseteq$ wasn't used any longer! We could have simply carried on with the old notations which were analogous to the symbols for inequalities. Why didn't the earlier notations stick on? There has to be a history behind this, I feel. (I could be wrong) Notations are notations I agree and I am used to the current ones. But I can't reconcile the fact that the earlier notations for subsets (which were more straightforward) were scrapped while $\le$ and $\ge$ continue to be used with the same meaning. So I ask. AI: This is very field dependent (and probably depends on the university as well). In my M.Sc. thesis, and in fact anything I write today as a Ph.D. student, I still use $\subseteq$ for inclusion and $\subsetneq$ for proper inclusion. If anything, when teaching freshman intro courses I'll opt for $\subsetneqq$ when talking about proper inclusion. On the other hand, when I took a basic course in algebraic topology the professor said that we will write $X\setminus x$ when we mean $X\setminus\{x\}$, and promptly apologized to me (the set theory student in the crowd).
H: Isomorphism between Hom and tensor product I am looking for an explicit isomorphism $Hom(V,V^*)\rightarrow V^*\otimes V^*$ where $V$ is a vector space. I thought of: $\phi\mapsto ((u,v)\mapsto \phi(u)(v))$ But I'm not sure this works. Does anyone have a suggestion? AI: I normally think of it the other way around, that is, $$ V^{\star} \otimes W \to \hom(V, W), \qquad \varphi \otimes w \mapsto (v \mapsto \varphi(v) w). $$ PS I am assuming $V, W$ to be finite-dimensional, see the comments below.
H: Multiobjective optimization with two real functions over two real vector spaces Question: Does anyone know about a book, a paper or an algorithm for the following optimization problem? What are the sufficient conditions for the existence of the joint optimum, and how to find it?: Specs: Let $f : \mathbb{R}^m \times \mathbb{R}^n \rightarrow \mathbb{R}$ and let $g : \mathbb{R}^m \times \mathbb{R}^n \rightarrow \mathbb{R}$. Both $f$ and $g$ are twice continuously differentiable. Furthermore, they can be written as sums of squared non-linear functions, on the form $f(X, Y) = \sum_{i \in I} \left( \hat{f_i} (X, Y) \right)^2$ and $g(X, Y) = \sum_{j \in J} \left( \hat{g_j} (X, Y) \right)^2$, with $X \in \mathbb{R}^m$ and $Y \in \mathbb{R}^n$. The problem, stated informally: I would like to minimize $f(X, Y)$ with respect to $X \in \mathbb{R}^m$ while at the same time minimizing $g(X, Y)$ with respect to $Y \in \mathbb{R}^n$. That is, I am looking for a solution $(X^*, Y^*)$ such that $f(X^*, Y^*)$ is locally minimal in $X^*$ and $g(X^*, Y^*)$ is locally minimal in $Y^*$. Preferably, I would like $f(X^*, Y^*)$ to be globally optimal in $X^*$ but I realize that it may be difficult to find a globally optimal point, so therefore a locally optimal point may do. The problem, stated more formally: For local optimality, I am looking for a point $(X^*, Y^*)$ such that $(i) \quad \exists \; \gamma > 0$ such that $\forall X \in \mathbb{R}^m: \quad \| X^* - X \|_2 < \gamma \Rightarrow f(X^*, Y^*) \leqslant f(X, Y^*)$ and $(ii) \quad \exists \; \mu > 0$ such that $\forall Y \in \mathbb{R}^n: \quad \| Y^* - Y \|_2 < \mu \Rightarrow g(X^*, Y^*) \leqslant g(X^*, Y)$ For global optimality, $(i)$ is replaced by: $(i) \quad \forall X \in \mathbb{R}^m: f(X^*, Y^*) \leqslant f(X, Y^*)$ Conditions for optimality: At an optimal point $(X^*, Y^*)$, I would expect that $\frac{\partial f}{\partial X} (X^*, Y^*) = 0$ and $\frac{\partial g}{\partial Y}(X^*, Y^*) = 0$. For instance, if $f$ and $g$ are quadratic functions, this condition yields a linear system of equations and a an optimal point $(X^*, Y^*)$ is a solution to that linear system. Motivation, hints: The above problem appears in my research and I am looking for a practical approach for solving it numerically on a computer. I can easily compute first-order derivatives for a $f$ and $g$ and also resort to packages such as ADOL-C for automatic differentiation in case I need higher order derivatives. Also note that both $f$ and $g$ are sums of squared, non-linear functions. This suggests that some Gauss-Newton-inspired approach could be appropriate for solving it. To sum up, can anyone direct me to a specific paper, a book, an algorithm, a program, a webpage or anything that can be helpful for solving the above problem? Background: The optimization problem appears in a computer-vision algorithm that I am working on. Essentially, $X$ is a vector with with 2D coordinates for points in an image and $Y$ is a vector of corresponding depths at every point. The function $f(X, Y)$ is derived from observations in an image captured from a camera and $g(X, Y)$ represents a physically related energy. AI: There is no structural distinction between $X$ and $Y$ variables in your question, so the equivalent form would be: given two non-negative functions $f,g:\Omega\to\Bbb R$, where $\Omega$ is e.g. $\Bbb R^{m+n}$, how to find $\omega\in \Omega$ which "minimizes" both $f$ and $g$. 
Even more, the nature of $\Omega$ does not really matter for the existence of a solution of the multi-objective optimization problem. The point is that $(f,g)\in \Bbb R^2$, which does not carry a natural linear ordering, so you have to specify which notion of optimality you are interested in. There are many of them, and the problem is in fact very close to a game-theoretic framework. You may start by reading about Pareto optimality. To address your original question: unless you are in a very special case, a joint optimum $\omega^* = (X^*,Y^*)$ need not exist, even locally.
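One practical heuristic for the poster's setting (a minimal sketch under my own assumptions, not a method endorsed by the answer) is best-response iteration: alternately minimize $f$ over $X$ with $Y$ frozen and $g$ over $Y$ with $X$ frozen, stopping when neither block of variables moves. If the iteration converges, the limit satisfies conditions (i) and (ii) locally; convergence itself is not guaranteed. The objectives below are toy placeholders, not the poster's functions:

```python
import numpy as np
from scipy.optimize import minimize

def best_response(f, g, x0, y0, iters=100, tol=1e-8):
    """Alternate minimization: f(x, y) over x, then g(x, y) over y.

    If the loop stops because neither block moves, the returned pair
    is locally optimal in each block separately. Heuristic only."""
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(iters):
        x_new = minimize(lambda x_: f(x_, y), x).x
        y_new = minimize(lambda y_: g(x_new, y_), y).x
        if np.linalg.norm(x_new - x) + np.linalg.norm(y_new - y) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y

# Toy smooth objectives (hypothetical, just to exercise the loop):
f = lambda x, y: np.sum((x - y) ** 2) + np.sum(x ** 2)
g = lambda x, y: np.sum((y - 1.0) ** 2) + np.sum((x - y) ** 2)
print(best_response(f, g, np.zeros(2), np.zeros(2)))
```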
H: f(z) can be expressed as u(x,y), iv(x,y). Am I doing this right? So, $f(z) = z^2 + 4z - 6i$ and I need to express this as $u(x,y)$, $iv(x,y)$. So, I plug in $z = x+iy$ and simplify. I am left with this: $f(x+iy) = x^2 - y^2 + 2xy + 4x + 4iy -6i$. Now, how do I express this as $u(x,y)$ and $iv(x,y)$? The reason I am doing this is because I read that the derivative of $f(z)$, that is $f'(z)$, is $u_x(x,y) + iv_x(x,y)$. AI: When you expand $(x+iy)^2$ you get $x^2-y^2+2ixy$, so you have a small typo in your question. Now put $u(x,y) = x^2-y^2+4x$ and $v(x,y) = 2xy+4y-6$. Then $$f(x+iy) = u(x,y) + iv(x,y).$$
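As a quick check of the Cauchy-Riemann equations for this example (my addition): $$u_x = 2x+4 = v_y,\qquad u_y = -2y = -v_x,$$ so $f$ is differentiable everywhere, and $f'(z) = u_x + iv_x = (2x+4)+2iy = 2z+4$, which agrees with differentiating $z^2+4z-6i$ directly.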
H: Question about matrix discretisation numerical methods Tomorrow I have an exam about Numerical Methods, and I came up with the following question. Let $$-\frac{d}{dr} \left ( \frac{1}{r} \frac{dy}{dr} \right ) = 1 $$ with $r\in [1,2], y(1) = 1 \mbox{ and } y(2)=10$. Take $h = \frac{1}{n+1}$ and $r_i = ih + 1$. (We have made a grid of $n+2$ points, of which $n$ are interior.) Now I need to give the equation for the $i^{th}$ point on the grid, which is (according to my book) $$-\frac{1}{h^2}\left(\frac{1}{r_{i+0.5}}(u_{i+1}-u_i) - \frac{1}{r_{i-0.5}}(u_i - u_{i-1})\right) = 1$$ Sorry if my terms are not correct, I don't follow this course in English, so I don't know all the correct English terms. AI: This looks quite correct. If you denote by $u_i$ the approximation of $y$ at $r_i$, you approximate the outer derivative using a centered finite difference with step $h/2$: $$\frac{d}{dr} \left( \frac{1}{r} \frac{dy}{dr} \right) \approx \frac{1}{h} \left( \left(\frac{1}{r}\frac{dy}{dr}\right)(r_{i+0.5}) - \left(\frac{1}{r}\frac{dy}{dr}\right)(r_{i-0.5})\right)$$ where $r_{i+0.5} = (i+0.5)h+1$ (and the same for $r_{i-0.5}$). Now, if you reuse the centered finite difference for the two derivatives appearing in the previous equation (with step $h/2$ again): $$\frac{dy}{dr}(r_{i+0.5}) \approx \frac{1}{h}(y(r_{i+1}) - y(r_{i}))$$ and $$\frac{dy}{dr}(r_{i-0.5}) \approx \frac{1}{h}(y(r_{i}) - y(r_{i-1}))$$ Collecting all the terms and replacing the $y(r_{i})$ by their approximations $u_i$, you obtain the approximation from your book.
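To make the discretisation concrete, here is a small sketch (my own illustration with NumPy; the variable names are mine, not the book's) that assembles the resulting tridiagonal system, folds the boundary values into the right-hand side, and solves it:

```python
import numpy as np

n = 50                                  # number of interior grid points
h = 1.0 / (n + 1)
r = 1.0 + h * np.arange(n + 2)          # r_0 = 1, ..., r_{n+1} = 2
ya, yb = 1.0, 10.0                      # boundary values y(1) and y(2)

A = np.zeros((n, n))
b = np.ones(n)                          # right-hand side is identically 1
for i in range(1, n + 1):               # equation at interior node i
    w_plus = 1.0 / (r[i] + h / 2)       # 1 / r_{i+1/2}
    w_minus = 1.0 / (r[i] - h / 2)      # 1 / r_{i-1/2}
    k = i - 1                           # row index in A
    A[k, k] = (w_plus + w_minus) / h**2
    if k > 0:
        A[k, k - 1] = -w_minus / h**2
    if k < n - 1:
        A[k, k + 1] = -w_plus / h**2
    else:
        b[k] += w_plus / h**2 * yb      # fold u_{n+1} = 10 into the RHS
    if k == 0:
        b[k] += w_minus / h**2 * ya     # fold u_0 = 1 into the RHS

u = np.linalg.solve(A, b)               # approximate y at the interior nodes
```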
H: How to solve this integral? - A proof is needed I am trying to solve this integral $$\int_{0}^{\pi /2}\sin^n x\, dx.$$ I think we should solve it separately for: a) odd numbers $2n+1$: $$\int_0^{\pi /2}\sin^{2n+1}x\, dx = \int_0^{\pi /2}\sin x\cdot \sin^{2n}x\, dx=\int_0^{\pi /2}\sin x\cdot (1-\cos^2x)^n\, dx$$ Let $t=\cos(x)$ and $dt=-\sin(x) \, dx$; then: $$\int_0^{\pi /2}\sin x\cdot (1-\cos^2x)^n\, dx=-\int_1^0 (1-t^2)^n \, dt$$ Unfortunately I cannot solve this integral. Would you please help me to finish it? b) even numbers $2n$: $$\int_0^{\pi /2}\sin^{2n}x\, dx = \int_0^{\pi /2}\left( \frac {1-\cos 2x} 2\right)^n \, dx$$ Unfortunately I cannot solve this integral either. Would you please help me to finish it? I tried to search for something useful on the Internet and I found these two formulas: $$\int_0^{\pi /2}\sin^{2n+1}x\, dx = \int_0^{\pi /2}\cos^{2n+1}x\, dx = \frac{2\cdot 4\cdot 6\cdot \ldots \cdot 2n}{1\cdot 3\cdot 5\cdot \ldots \cdot (2n+1)}$$ $$\int_0^{\pi /2}\sin^{2n}x\, dx = \int_0^{\pi /2}\cos^{2n}x\, dx = \frac{1\cdot 3\cdot 5\cdot \ldots \cdot (2n-1)}{2\cdot 4\cdot 6 \cdot \ldots \cdot 2n}\frac{\pi}{2}$$ (note that the factor $\frac{\pi}{2}$ appears only in the even case). If you could write proofs of these two formulas, that would solve my problem. Thank you AI: You can do this by integration by parts, or equivalently using the product formula for differentiation. Let $I_n$ be the integral involving $\sin^n x$. Note that $$\frac {d}{dx}(\cos x\sin^{n-1}x)=-\sin^n x+(n-1)\cos^2x\sin^{n-2}x$$$$= -\sin^n x+(n-1)(1-\sin^2x)\sin^{n-2}x=-n\sin^nx+(n-1)\sin^{n-2}x$$ Now integrate both sides, noting that $\cos x\sin^{n-1}x=0$ at the limits of integration, to obtain $$0=-nI_n+(n-1)I_{n-2}$$ which becomes $$I_n=\frac {(n-1)}nI_{n-2}$$ Using this successively gives the products you see above, and takes you down to evaluating $I_1=1$ (odd case) or $I_0=\frac\pi2$ (even case), which is why only the even formula carries the factor $\frac\pi2$.
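If you want to double-check both closed forms numerically, a short script does it (my addition; it assumes SciPy is available):

```python
from math import pi, prod
from scipy.integrate import quad
import numpy as np

for n in range(1, 6):
    odd, _ = quad(lambda x: np.sin(x) ** (2 * n + 1), 0, pi / 2)
    even, _ = quad(lambda x: np.sin(x) ** (2 * n), 0, pi / 2)
    # 2*4*...*2n / (1*3*...*(2n+1)), no factor pi/2:
    odd_formula = prod(range(2, 2 * n + 1, 2)) / prod(range(1, 2 * n + 2, 2))
    # 1*3*...*(2n-1) / (2*4*...*2n), times pi/2:
    even_formula = prod(range(1, 2 * n, 2)) / prod(range(2, 2 * n + 1, 2)) * pi / 2
    assert abs(odd - odd_formula) < 1e-10
    assert abs(even - even_formula) < 1e-10
```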
H: Direct proportion Mr Tan's monthly savings ($S$) are directly proportional to the square root of his monthly income ($I$). His income in January and February 2011 was 3600 and 2500 respectively. His savings in January were 80 more than in February. Find the amount he saved in January 2011. AI: $S_{Jan}$ $-$ January savings; $S_{Feb}$ $-$ February savings; main rule: $S = k \cdot \sqrt{I}$, $\quad$ where $k$ is a constant. $S_{Jan} = k \cdot \sqrt{3600} = 60k$, $S_{Feb} = k \cdot \sqrt{2500} = 50k$, and $S_{Jan}-S_{Feb}=80$. It remains to find the value of $k$ from these equations; that part is for you.
H: Signal compression I believe I have an extremely simple question but I can't seem to figure it out. This image shows $x[n]$ and I have to draw $y[n]=x[2n-4]$ by first doing a compression and then a time shift. The image from the compression $y[n] = x[2n]$ is this: I understand how the time shift is done but can't seem to figure out how, starting from the first image, the compression produces the second image. Can someone explain to me how it's done? AI: The second graph is $x(2n-4),$ not $x(2n)$. To obtain the correct graph of $x(2n)$ note that $x(n)$ is nonzero only when $n$ lies in the interval $[-4,2].$ Therefore $x(2n)$ will be nonzero only when $2n$ lies in the interval $[-4,2],$ which is to say, when $n$ lies in the interval $[-2,1].$ Let $z(n)=x(2n).$ To compute an example, $z(-1)=x(2\cdot(-1))=x(-2)=0.$ In general, the value that $z$ takes at $n$ is the same as the value that $x$ takes at $2n.$ Geometrically, this is saying that if the horizontal distance to the vertical axis of every point on the graph of $x(n)$ is halved, then one has the graph of $z(n)=x(2n).$ This results in a horizontal compression of the graph. The graph of $y(n)=x(2n-4)$ is the graph of $z(n)=x(2n)$ shifted two units to the right, since $y(n)=x(2(n-2))=z(n-2).$ This is what is seen in your second picture. So the rising portion of the graph of $x(n)$ occurs between $n=-4$ and $n=2.$ The corresponding rising portion in the graph of $z(n)$ occurs between $n=-2$ and $n=1.$ Finally the rising portion of $y(n)$ occurs between $n=0$ and $n=3.$
H: Bounds of $\oint_{\partial R}\left((x^2-2xy)dx+(x^2y+3)dy\right)=\iint_{R}\left(2xy+2x\right)dxdy$ I'm just having some trouble figuring out the bounds and boundary of the following integral. Question as follows: Evaluate $$\oint(x^2-2xy)dx+(x^2y+3)dy$$ around the boundary of the region contained by $y^2=8x$ and $x=2$. Obviously I apply Green's Theorem to get $$\oint_{\partial R}\left((x^2-2xy)dx+(x^2y+3)dy\right)=\iint_{R}\left(2xy+2x\right)dxdy$$ and now my only problem is finding the bounds for the integral on the right. I think the bounds should be $\frac{1}{8}y^2<x<2$ and $-4<y<4$,$$\int_{-4}^{4}\int_{\frac{1}{8}y^2}^{2}\left(2xy+2x\right)dxdy=32$$ but the answers give $\frac{128}{15}$. Are my bounds wrong? As far as I can tell my arithmetic isn't. Thanks! AI: Your setup is fine. $$ \begin{align} \int_{-4}^4\int_{\frac18y^2}^2(2xy+2x)\,\mathrm{d}x\,\mathrm{d}y &=\int_{-4}^4\int_{\frac18y^2}^22(y+1)x\,\mathrm{d}x\,\mathrm{d}y\\ &=\int_{-4}^42(y+1)\left(2-\frac1{128}y^4\right)\,\mathrm{d}y\\ &=\int_{-4}^42\left(2-\frac1{128}y^4\right)\,\mathrm{d}y\\ &=32-\frac{32}{5}\\ &=\frac{128}{5} \end{align} $$ In the third equation, we dropped the odd powers of $y$ since they integrate to $0$ over a symmetric region.
H: Find a subdivision of K4 in the Grötzsch graph. It is known that the Grötzsch graph is 4-chromatic. Hence it contains a subdivision of $K_4$. But where is this subdivision? AI: $a$ is joined to $b$, $e$, $f$. $b$ is joined to $e$ via $j$, and to $f$ via $h$ and $k$. $e$ is joined to $f$ via $g$. So $a$, $b$, $e$, $f$ span a $K_4$, subdivided at $j$, $h$, $k$, and $g$.
H: Calculating area of astroid $x^{2/3}+y^{2/3}=a^{2/3}$ for $a>0$ using Green's theorem Question as follows. Show that for any planar region $\Omega$, $$\mathrm{area}\left(\Omega\right)=\frac{1}{2}\oint_{\partial\Omega}(xdy-ydx).$$ Use this result to find the area enclosed by the astroid $x^{2/3}+y^{2/3}=a^{2/3}$ for $a>0$. The first part's easy: $$\begin{array}{l c l} \mathrm{RHS}&=&\frac{1}{2}\oint_{\partial\Omega}(xdy-ydx)\\ &=&\frac{1}{2}\oint_{\partial\Omega}(-ydx+xdy)\\ &=&\frac{1}{2}\iint_{\Omega}(1+1)dA\\ &=&\iint_{\Omega}1dA\\ &=&\mathrm{area}(\Omega)\\ &=&\mathrm{LHS}. \end{array}$$ But the next part really has me stumped. In this case $\Omega=\{ (x,y) : x^{2/3}+y^{2/3}\le a^{2/3} \}$ and $$\mathrm{area}(\Omega)=\iint_{\Omega}1dA=\frac{1}{2}\oint_{\partial\Omega}(xdy-ydx)$$ but I have a lot of problems coming up with $\partial\Omega$ to make it work. Any ideas would be welcome! Thanks. AI: $$x^{2/3}+y^{2/3}=a^{2/3}\longleftrightarrow\;x=a\cos^3t\;,\;\;y=a\sin^3t\;,\;\;0\le t\le 2\pi$$ So $$\frac12\int\limits_0^{2\pi}\left[(a\cos^3t\cdot3a\sin^2t\cos t)-(a\sin^3t\cdot(-3a\cos^2t\sin t))\right]dt =$$ $$=\frac{3a^2}2\int\limits_0^{2\pi}(\cos^4t\sin^2t+\cos^2t\sin^4t)dt=\frac{3a^2}2\int\limits_0^{2\pi}\cos^2t\sin^2t\,dt=$$ $$=\left.\frac{3a^2}{64}\left(4t-\sin 4t\right)\right|_0^{2\pi}=\frac{3a^2}{64}8\pi=\frac38a^2\pi$$
H: When a limit does not exist, can its derivative be found? (Disclaimer: I'm a high school student, and my knowledge extends only to some elementary calculus.) I am learning derivatives of complex numbers (functions, actually), and what a learned community member pointed out to me is that there is a subtle difference from finding derivatives of real functions. He said that the derivative of a complex function can be calculated iff it satisfies the Cauchy-Riemann equations, which means the limit at the point $z$ exists. Back to reals. Have a look at the diagram above. I can calculate derivatives in both the cases: the 'mechanical' derivative using the chain rule, quotient rule, product rule, etc. The derivative will fail if I plug in $x = 13$ (in both the cases), but for all other values of $x$ I can calculate the derivative (slope). In other words, $f(x)$ is not differentiable at $x = 13$. The same concept applies in the case of complex functions, right? You use the Cauchy-Riemann equations to see if $f(z)$ is differentiable at a given $z$. Even if it is not differentiable, I can still calculate the derivative mechanically, right? And the derivative will fail if I plug in the given $z$ at which it was not differentiable. Please clarify. AI: Judging from the captions of the pictures, I think we should still talk about real derivatives for a bit. Brief answer Neither of the functions depicted in your graphs is going to be differentiable at the discontinuities depicted. After you fill in a removable discontinuity of a function like the one on the left, it could be either differentiable or nondifferentiable at the point. Jump discontinuities of functions on the real line are always nondifferentiable, but they might have one-sided derivatives that are well-defined. Longer answer First of all, remember that the derivative at a point is, intuitively, a "limit of slopes as calculated from the left and from the right." From the left you take a limit of $\frac{f(x)-f(x-h)}{h}$ over very small positive values of $h$, and on the right the same happens with $\frac{f(x+h)-f(x)}{h}$. (It can be the case that both of these are defined but they don't match, and in that case the derivative isn't defined at that point.) Notice also that it is critical for $f(x)$ to be defined to carry out these computations, and so you won't get anywhere at all without settling on a value for $f(x)$. If you insist that there's no value for $f(x)$, then the slope is formally undefined. If you are willing to fill in removable discontinuities, though, you can proceed. The derivative may or may not exist after the point is filled in (consider $f(x)=|x|$ with the $x=0$ point removed/replaced). That leaves the case of the jump discontinuity, which you've depicted in the right-hand picture. Jump discontinuities always make one of the slope limits on the right or on the left jump to infinity. Here's what I mean. Suppose the value $f(x)$ is anything except the one filling in the lower circle in your right-hand picture. Then as you shrink $h$ in $\frac{f(x+h)-f(x)}{h}$, the associated picture is that of a line which always lies on $(x,f(x))$ and $(x+h,f(x+h))$, the latter of which lies on the branch on the right. You see that as $h$ shrinks, $x+h$ approaches $x$ from the right. Since $f(x)$ is not on that lower empty circle, this line tips ever more steeply as $h$ shrinks. Thus its slope goes to either $+\infty$ or $-\infty$, and the slope there is undefined.
If $f(x)$ happened to land on that empty lower right circle, then you are guaranteed it wouldn't land on the upper left circle, so you would then deduce that the slope estimate from the left would go off to infinity, and the derivative at the point would again not exist.
H: Archimedean Property - The use of the property in basic real analysis proofs I've been looking for something like this in the previous answers on the topic, but I didn't encounter anything similar, so here is my problem. First of all, here is the definition of the Archimedean Property (AP) that I found in the book I am self-studying: $\forall (a,b)\in \mathbb{R}_{++} \times \mathbb{R}, \exists m \in \mathbb{N} (b<ma)$ My problem is with the way in which a proof of the fact that $\sup\{q\in \mathbb{Q}:q^2<2\}=\sqrt{2}$ is presented. Indeed, the author first of all points out that the Completeness Axiom ensures that $s:=\sup S$ is a real number (where $S:=\{q\in \mathbb{Q}:q^2<2\}$). Then, he supposes that $s^2>2$, from which we have that $s^2 - 2>0$. At this point, he states that by AP there exists an $m \in \mathbb{N}$ s.t. $m(s^2 - 2) >2s$. Problem 1 Here, the author is using the AP in the following way: $m$ is the $m$ of the original formulation, $(s^2 - 2)$ is the $a$ of the original formulation, and $2s$ is the $b$. Am I right? Problem 2 Where does the $2s$ come from? Anyway, the author moves on and states that, from $m(s^2 - 2) >2s$, we have that $$ \left( s- \frac{1}{m} \right)^2=s^2 - \frac{2s}{m} + \frac{1}{m^2} > s^2 - (s^2 - 2) = 2$$ which means that $(s-\frac{1}{m})^2>q^2$. Problem 3 Where does $(s-\frac{1}{m})^2>q^2$ come from? To me it really looks a bit like a rabbit-out-of-the-hat proof. What am I missing exactly? Is my way of looking at the AP the correct one (see Problem 1)? How can I figure out, in this kind of case (which seems quite common in Real Analysis), where the author takes the elements of his proof from? Any help, hint or feedback is more than welcome. Thanks a lot. AI: First of all, you are right: AP states that for any $a,b>0$ there exists $m$ such that $am>b$. In your case you would like to pick $m$ in such a way that $m(s^2-2)>2s$, so your choice of $a$ and $b$ is correct. Second, we want to show that if we assume that $s = \sup S$ is such that $s^2>2$, then something contradictory will follow. For example, in your case the author shows that $s^2>2$ would imply that $(s-\frac1m)^2>2$ and hence $(s-\frac1m)^2>q^2$ for all $q\in S$. As a result, it means that $s-\frac1m$ is an upper bound for $S$ as well, which can't be true since $s$ was the least upper bound and $s-\frac1m<s$ is even less. However, to show all the latter facts, as vadim123 mentioned, the author had to use some tricks which at first glance are indeed magical. On the other hand, after reading 3-5 proofs on similar topics you'll get an intuition for which tricks to try when you have to prove something yourself, just by looking for analogies. Let's take a backward look at how this magic works, and why it is intuitive. We assume that $s^2>2$ where the strict inequality holds, so intuitively there is a gap between $2$ and $s^2$, and hence we should be able to insert another positive number $x^2$ in between, such that $$ s^2>x^2>2. \tag{1} $$ That would mean exactly that $s\neq\sup S$ since $x\geq \sup S$ from $(1)$, and $s>x$. Now, instead of looking for $x$ that satisfies $(1)$, it is notationally more convenient to denote $x = s-v$ for some $v>0$ and look for some $v$ small enough such that $(1)$ holds. We have $$ (s-v)^2 = s^2 - 2sv + v^2\geq s^2 - 2sv. $$ Now, to choose $v>0$ in such a way that $(s-v)^2>2$, we have to have that $$ s^2 - 2sv>2 \iff \frac1v(s^2 - 2)>2s. $$ The question now is how to pick $v>0$ so that the LHS is greater than the RHS.
Here the AP comes to help us: it provides an integer $m$ with $m(s^2-2)>2s$, and we set $v=\frac1m$. Now, after following this backward logic, we know how to prove the theorem. However, it is neater (though less understandable) to write the proof in the forward direction, keeping all the results of the backward one that we already found.
H: Proving that every finite group has a composition series I'm having trouble understanding one step in the proof that every finite group has a composition series. We proved this with induction over $d = |G|$ in our algebra lectures. The case $d=1$ is trivial. For $d > 1$, let $H$ be a proper normal subgroup of $G$ with minimal index $[G:H]$. Now we claim that $G/H$ is simple (i.e. has no normal subgroups apart from itself and the trivial subgroup). Why is this true? AI: If $G/H$ is not simple, then there exists a proper normal subgroup of $G$ with index smaller than that of $H$, a contradiction. In particular, let $K/H$ be a proper nontrivial normal subgroup of $G/H$ (correspondence theorem). Then $K$ is a proper normal subgroup of $G$ containing $H$, and $[G:H]=[G:K]\,[K:H] \Rightarrow [G:K] < [G:H]$, contradicting the minimality of $[G:H]$.
H: Interval of convergence of a Laplace-Stieltjes transform I have a two-sided Laplace-Stieltjes transform, $$ \int_{-\infty}^{+\infty} e^{-xt}d\mu(t) $$ that converges absolutely in $(a,b)$. If the measure $\mu$ is finite, then $$ \int_{-\infty}^{+\infty}d\mu(t)=\mu(\mathbb{R})<\infty; $$ can I conclude that $(a,b)$ MUST contain the origin? In general, how does the interval of convergence of a two-sided Laplace-Stieltjes transform change with respect to $\mu$? Thank you AI: Even if $\mu$ is not a positive measure, its total variation measure $|\mu|$ is, and "the integral converges absolutely" means $\int_{\mathbb R} e^{-xt}\ d|\mu|(t) < \infty$. Now $\int_{A} e^{-xt}\ d|\mu|(t)$ is a convex function of $x$ for every measurable $A \subseteq \mathbb R$, so the set where this is finite is a convex set, i.e. an interval. If $\mu$ is a finite measure, that says $\int_{\mathbb R} e^{-xt}\ d|\mu|(t) < \infty$ for $x=0$, so $0$ is in the interval. Note, however, that the interval doesn't have to be open. For example, take the measure with density $1/x^2$ for $x > 1$, $0$ elsewhere: this has interval $[0,\infty)$.
H: Find the number of solutions of $|2x^2-5x+3|+x-1=0$ Problem: Find the number of solutions of $|2x^2-5x+3|+x-1=0$ Solution: Case 1: When $2x^2-5x+3 \geq 0$. Then we get $2x^2-5x+3+x-1=0$, so $x=1,1$. Case 2: When $2x^2-5x+3 < 0$. Then we get $-2x^2+5x-3+x-1=0$, so $x=1,2$. In both cases, the common value of $x$ is $1$. Hence the solution is $x=1$. Am I doing this right? Somebody told me the solution is $x=2$. AI: Nope, if you have $|f(x)| + g(x) = 0$ $(1)$ you solve it as follows: assume that $f(x)\geq 0$, find solutions of $f+g = 0$, say $x_1$. Check whether $f(x_1)\geq 0$. If it holds, then $x_1$ solves $(1)$ as well. If it does not hold, it's not a solution of the original equation. Then assume $f(x)<0$, find solutions of $g-f = 0$ and check whether $f<0$ holds for them. In your case: for $f+g = 0$ you get a solution $x = 1$, which satisfies $f(1)\geq 0$, so that this is indeed one of the solutions of the original equation. For the case $f<0$ you get $x = 1$ or $x = 2$. We don't have to check $x = 1$ since we know it's already a solution, so we only check whether $f(2)<0$. Since the latter does not hold: $2\cdot 2^2 -5\cdot 2+3 = 1>0$, we obtain that $x=2$ is not a solution, and hence $1$ is the only solution.
H: For $G$ group and $H$ subgroup of finite index, prove that $N \subset H$ normal subgroup of $G$ of finite index exists Let $G$ be a group and $H$ be a subgroup of $G$ with finite index. I want to show that there exists a normal subgroup $N$ of $G$ with finite index and $N \subset H$. The hint for this exercise is to find a homomorphism $G \to S_n$ for $n := [G:H]$ with kernel contained in $H$. The standard solution suggests to choose $\varphi$ as the homomorphism induced by left-multiplication $\varphi: G \to S(G/H) \cong S_n$. I'm not 100% sure if I understand this correctly. What exactly does $\varphi$ do? We take $g \in G$ and send it to a bijection $\varphi_g: G/H \to G/H, xH \mapsto gxH$? If so, how can I see that its kernel is contained in $H$? Also, the standard solution claims its image is isomorphic to $G/N$ and thus $N$ has a finite index in $G$, how can I see that the image is isomorphic to $G/N$? Thanks in advance for any help. AI: Your definition of $\varphi$ looks fine. Anything in the kernel must in particular fix $H$, and $gH = H$ is equivalent to $g \in H$. On the other hand I think $N = \ker \varphi$ can be a proper subgroup of $H$. As an example, which is silly because the group is finite, if you take $G = S_3$ and $H = \{1, (12)\}$ then this process produces $N = \{1\}$. For the second question, this is just the "first" isomorphism theorem.
H: Is the derivative the natural logarithm of the left-shift? (Disclaimer: I'm a high school student, and my knowledge of mathematics extends only to some elementary high school calculus. I don't know if what I'm about to do is valid mathematics.) I noticed something really neat the other day. Suppose we define $L$ as a "left-shift operator" that takes a function $f(x)$ and returns $f(x+1)$. Clearly, $(LLL\ldots LLLf)(x)=f(x+(\text{number of $L$s}))$, so it would seem a natural extension to denote $(L^hf)(x)=f(x+h)$. Now, by the definition of the Taylor series, $f(x+h)=\sum\limits_{k=0}^\infty \frac{1}{k!}\frac{d^kf}{dx^k}\bigg|_{x}h^k$. Let's rewrite this as $\sum\limits_{k=0}^\infty \left(\frac{\left(h\frac{d}{dx}\right)^k}{k!}f\right)(x)$. Now, we can make an interesting observation: $\sum\limits_{k=0}^\infty \frac{\left(h\frac{d}{dx}\right)^k}{k!}$ is simply the Taylor series for $e^u$ with $u=h\frac{d}{dx}$. Let's rewrite the previous sum as $\left(e^{h\frac{d}{dx}}f\right)(x)$. This would seem to imply that $(L^hf)(x)=\left(e^{h\frac{d}{dx}}f\right)(x)$, or equivalently, $L=e^\frac{d}{dx}$. We might even say that $\frac{d}{dx}=\ln L$. My question is, does what I just did have any mathematical meaning? Is it valid? I mean, I've done a bit of creative number-shuffling, but how does one make sense of exponentiating or taking the logarithm of an operator? What, if any, significance does a statement like $\frac{d}{dx}=\ln L$ have? AI: Well, it seems that you have just discovered a beautiful theory of (semi)group generators by yourself. To give some basics of it, let us consider a collection of "nice" functions of a real variable - e.g. bounded and having continuous derivatives. The action of the operators $L^h$ on this space has a semigroup structure: $$ L^s(L^tf(x)) = L^sf(x+t) = f(x+s+t) = L^{s+t}f(x). \tag{1} $$ Also, you have that $L^0f(x) = f(x)$, so $L^0$ is the identity operator - it does not change its argument. You can see that although there are a lot of operators in the collection $(L^h)_{h}$, they have to satisfy the semigroup property $L^{s+t} = L^sL^t$, and so there is not much freedom in choosing them. Even more, one can define the generator of the semigroup (also sometimes called its derivative) by $$ \mathscr Af(x):=\lim_{h\to 0}\frac{1}{h}(L^hf - f) $$ which in your case exactly coincides with the derivative of the function. However, if you consider the semigroup $K^hf(x) = f(x + v\cdot h)$ for some constant $v$, you'll see that the generator is a bit different. Anyway, under certain conditions, if you don't know the semigroup $L$ but are given just the generator $\mathscr A$, it is possible to reconstruct $L$ from $\mathscr A$ by the so-called exponential map, that is $$ L^h:=\mathrm e^{h\mathscr A} $$ where the exponential of the operator is indeed given by the Taylor series, with e.g. $\mathscr A^2 f(x) = \mathscr A(\mathscr A f(x))$. As a result, you can indeed consider $\mathscr A$ to be a certain logarithm of $L$, and it comes as no surprise that it has a similar Taylor expansion. However, although the name "exponential map" from $\mathscr A$ to $L^h$ is commonly used, I haven't heard of the inverse being called the "logarithm". One rather uses the "generator" or "derivative." If you are further interested, I would suggest reading the linked Wikipedia article.
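To see the identity $L^h=e^{h\frac{d}{dx}}$ in action numerically (my illustration): take $f=\sin$, whose derivatives cycle through $\cos,-\sin,-\cos,\sin$, truncate the exponential series, and compare with the shifted function:

```python
import math

def shifted_sin(x, h, terms=30):
    # Partial sum of sum_k h^k/k! * f^(k)(x) with f = sin;
    # the k-th derivative of sin cycles with period 4.
    cycle = [math.sin, math.cos,
             lambda t: -math.sin(t), lambda t: -math.cos(t)]
    return sum(h**k / math.factorial(k) * cycle[k % 4](x)
               for k in range(terms))

x, h = 0.7, 2.5
print(shifted_sin(x, h))   # agrees with sin(x + h) to many digits
print(math.sin(x + h))
```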
H: Show that $\frac{a^2+1}{b+c}+\frac{b^2+1}{a+c}+\frac{c^2+1}{a+b} \ge 3$ If $a,b,c$ are positive numbers then show that $$\dfrac{a^2+1}{b+c}+\dfrac{b^2+1}{a+c}+\dfrac{c^2+1}{a+b} \ge 3$$ I am stuck at the first stage. Please give me some hints so that I can solve the problem. Thanks in advance. AI: Hint: Note that $a^2+1 \ge 2a$ and also use the Nesbitt inequality. Indeed, $(a-1)^2\ge 0 \implies a^2+1 \ge 2a$; alternatively, use $AM \ge GM$ for $a^2$ and $1$ to get $a^2+1 \ge 2a$. Combining, $$\dfrac{a^2+1}{b+c}+\dfrac{b^2+1}{a+c}+\dfrac{c^2+1}{a+b} \ge 2\left(\dfrac{a}{b+c}+\dfrac{b}{a+c}+\dfrac{c}{a+b}\right) \ge 2\cdot\dfrac32=3$$ by the Nesbitt inequality: $$\dfrac{a}{b+c}+\dfrac{b}{a+c}+\dfrac{c}{a+b} \ge \dfrac 3 2$$ For a proof of the Nesbitt inequality you can see here.
H: Proving that every finite group with $|G| = pqr$ for $p,q,r$ distinct primes is solvable Without loss of generality, let $p < q < r$. Using Sylow's theorem, the amount of $r$-Sylow groups is 1 mod $r$ and is a factor of $pq$, so only $1$ and $pq$ are possible. Now the proof states that in the case that there only exists one $r$-Sylow group, it is a normal subgroup. Why? AI: A subgroup $H\le G$ is normal iff for all $g\in G$, $g^{-1}Hg=H$. Let $H\le G$ be a $p$-Sylow group, then for all $g\in G$, $g^{-1}Hg$ is also a $p$-Sylow group. Therefore, if there is only a single $p$-Sylow group, it is normal. If, on the other hand, there is more than one $p$-Sylow group, then all $p$-Sylow groups are not normal since they are all conjugate to each other.
H: Uniform convergence of Fourier Series, how do I check it? Let $f(x)=x(\pi-x)$, $x\in (0,\pi)$. The function satisfies the Dirichlet conditions, so its Fourier series $S_f$ converges pointwise to $f$. The definition of a Fourier series of $f$ on $[a,a+L]$ is: $$S(x)=\frac{a_0}{2}+\sum_{n=1}^\infty a_n\cos{ \frac{2n\pi x}{L}} + b_n \sin { \frac{2n\pi x}{L}} $$ with $$a_n = \frac{2}{L}\int_{a}^{a+L} f(x) \cos { \frac{2n\pi x}{L}}dx$$ $$b_n = \frac{2}{L}\int_{a}^{a+L} f(x) \sin { \frac{2n\pi x}{L}}dx$$ With $L=\pi$, $a=0$, this becomes: $$a_n = \frac{2}{\pi}\int_{0}^{\pi} f(x) \cos(2n x) \ dx = -\frac{1}{n^2}$$ $$b_n = \frac{2}{\pi}\int_{0}^{\pi} f(x) \sin(2n x) \ dx = 0$$ $$a_0 = \frac{\pi^2}{3}$$ Thus $$S (x) = \frac{\pi^2}{6} - \sum_{n=1}^\infty \frac{\cos(2nx)}{n^2}$$ So, if I denote $$S_n (x) = \frac{\pi^2}{6} - \sum_{k=1}^n \frac{\cos(2kx)}{k^2}$$ I have $S_n \rightarrow f$ pointwise, that is $$\forall x\in[0,\pi], \ f(x) = S(x) = \lim_{n \rightarrow \infty} S_n (x).$$ Now I have to decide whether the convergence is uniform. I would do this by looking at the limiting behaviour of $$\sup_{x\in[0,\pi]} \vert x(\pi - x) + \sum_{k=1}^n \frac{\cos(2kx)}{k^2} - \frac{\pi^2}{6} \vert $$ but I have no idea what to do with this expression. Is there a simpler way to determine uniform convergence? AI: To prove that an infinite sum of functions $\sum_n f_n(x)$ converges uniformly, all that is needed is to show that the "tails" get small independent of $x$. Here, you'd want to show that, given $\epsilon>0$, there is $N$ such that for $m,n\ge N$, $\sum_{m\le k\le n} |\cos(2kx)/k^2|\;<\;\epsilon$. That is, there's no necessity of referring to the (known) limit to prove uniform convergence. In this example the tail estimate is immediate: $|\cos(2kx)/k^2|\le 1/k^2$ for every $x$, and $\sum_k 1/k^2$ converges, so the tails are uniformly small. (This is exactly the Weierstrass M-test, and it gives uniform convergence on all of $\mathbb{R}$, in particular on $[0,\pi]$.)
H: Proof of formula for the number of perfect powers $\leq n$ I recently came across this OEIS sequence: http://oeis.org/A069623. A Perfect Power refers to a number which can be expressed in the form $x^y$ where $x > 0$ and $y > 1$. The sequence lists the number of distinct Perfect Powers less than or equal to $n$. There, the formula is given as: $a(n) = n - \sum_{k = 1}^{\lfloor\log_2 n\rfloor} \mu(k)\,\lfloor n^{1/k}-1\rfloor$ where $\mu(k)$ is the Möbius function as usual. I want to see a derivation of this closed form for the sequence. AI: The sum $\sum\limits_{k = 1}^{\lfloor \log_2 n\rfloor} \mu(k)\cdot\lfloor n^{1/k}-1\rfloor$ gives you the number of non-perfect powers from $2$ to $n$. For $k = 1$, the summand is $n-1$, that is, the number of all numbers from $2$ through $n$. The basic fact is that for a number $k$, $\lfloor n^{1/k} - 1\rfloor$ is the number of $k$-th powers $\leqslant n$ (for a base $> 1$), as should be pretty clear. So for the primes $p \leqslant \log_2 n$, the term $\mu(p) \cdot \lfloor n^{1/p} - 1\rfloor$ subtracts the number of $p$-th powers from the number of $2 \leqslant a \leqslant n$. The rest is the inclusion-exclusion principle: if a number $a = b^m \quad (b > 1)$ is an $m$-th power for a composite $m$, it's subtracted for each of $m$'s distinct prime divisors. If $m = p^k$ is a prime power, $a$ is subtracted exactly once, and all is fine. If $m$ has at least two distinct prime divisors, it is subtracted too often, hence it must be added again for each pair of distinct prime divisors of $m$. If $m$ has at least $3$ distinct prime divisors, $a$ has been re-added too often, hence it must be subtracted again for each triple of distinct prime divisors, ... Altogether, for a composite $m$, the term $\mu(m) \cdot \lfloor n^{1/m} - 1\rfloor$ adds or subtracts the number of $m$-th powers $2 \leqslant b^m \leqslant n$. If $1 < a = b^m \leqslant n$ is a perfect power and $b$ is not a perfect power, then $a$ is counted in the terms for all divisors of $m$ and no other terms, so it is added with a factor of $\sum\limits_{d \mid m} \mu(d) = 0$ in the total sum. If $a$ is not a perfect power, it appears only in the term for $k = 1$, that is, it is added with a factor of $1$, hence $n -\sum\limits_{k = 1}^{\lfloor \log_2 n\rfloor} \mu(k)\cdot\lfloor n^{1/k}-1\rfloor = n - \#\lbrace a \colon (2 \leqslant a \leqslant n) \land (a = b^m \Rightarrow m = 1) \rbrace$ $\qquad\qquad = \#\lbrace a \colon (1 \leqslant a \leqslant n) \land \bigl(\exists m > 1, b\bigr)(a = b^m)\rbrace.$
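The formula is easy to test against a brute-force count (a sketch of mine; the Möbius function is computed by trial division, which is plenty for $k \le \log_2 n$, and integer $k$-th roots avoid floating-point pitfalls):

```python
def mobius(k):
    result, d = 1, 2
    while d * d <= k:
        if k % d == 0:
            k //= d
            if k % d == 0:
                return 0              # squared prime factor => mu = 0
            result = -result
        d += 1
    return -result if k > 1 else result

def iroot(n, k):
    # Largest integer r with r**k <= n, avoiding floating-point error.
    r = max(1, round(n ** (1.0 / k)))
    while r ** k > n:
        r -= 1
    while (r + 1) ** k <= n:
        r += 1
    return r

def a(n):
    # n minus the number of non-perfect-powers in 2..n, as in the answer.
    return n - sum(mobius(k) * (iroot(n, k) - 1)
                   for k in range(1, n.bit_length()))

def brute(n):
    powers = {1} | {b ** m for m in range(2, n.bit_length() + 1)
                    for b in range(2, iroot(n, m) + 1)}
    return len(powers)

assert all(a(n) == brute(n) for n in (1, 2, 3, 4, 100, 1000, 10**5))
```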
H: A vector field is a section of $T\mathcal{M}$. By definition, a vector field is a section of $T\mathcal{M}$. I am familiar with the concept of vector field, as well as tangent plane of a manifold. But such definition is not intuitive to me at all. Could some one give me some intuition? Thank you very much! AI: Remember that a point of the tangent bundle consists of pair $(p,v)$, where $p \in M$ and $v \in T_pM$. We have the projection map $\pi: TM \to M$ which acts by $(p,v) \to p$. A section of $\pi$ is a map $f$ so that $\pi \circ f$ is the identity. So for each $p \in M$, we have to choose an element of $TM$ that projects back down to $p$. So for each $p \in M$ we're forced to choose a pair $(p,v)$ with $v \in T_pM$. This is the same information as choosing a tangent vector in each tangent space, which is the same information as a vector field. If we insist that $f$ is smooth (as we usually do), then we get a smooth vector field.
H: Find a $2$-Sylow subgroup of $\mathrm{GL}_3(F_7)$ We have $|\mathrm{GL}_3(F_7)| = 7^3 \cdot 2^6\cdot 3^4\cdot 19$. I can find the $3,7,19$-Sylow subgroup of it, but failed to find a $2$-Sylow subgroup. Can one help? AI: Short version: $ \newcommand{\GF}[1]{\mathbb{F}_{#1}} \newcommand{\GL}{\operatorname{GL}} \newcommand{\Gal}{\operatorname{Gal}} \newcommand{\Sym}[1]{\operatorname{Sym}(#1)} \newcommand{\bm}[5]{\left[\begin{array}{r|rr} #1 & 0 & 0 \\ \hline 0 & #2 & #3 \\ 0 & #4 & #5 \end{array}\right]} $A Sylow 2-subgroup is a direct product of the Sylow 2-subgroup of $\GF{7}^\times$ with the Sylow 2-subgroup of $\Gal(\GF{49}/\GF{7}) \ltimes \GF{49}^\times$. Remember that the Galois group of $\GF{49}$ acts $\GF{7}$-linearly on $\GF{49}$. Similarly $\GF{49}^\times$ (and $\GF{7}^\times$) act $\GF{7}$-linearly on $\GF{49}$ (and $\GF{7}$ respectively). Thus those groups end up acting on $\GF{7} \times \GF{49} \cong \GF{7}^3$. Explicit version: In terms of matrices, that is $$ \bm{-1}{1}{0}{0}{1}, \qquad \bm{1}{1}{0}{-1}{-1}, \qquad \bm{1}{0}{1}{1}{-1}$$ We split the matrices into block matrices. The top-left $1 \times 1$ block is $\GF{7}$. The bottom-right $2\times 2$ block is $\GF{49}$. The first matrix generates the Sylow 2-subgroup of $\GF{7}^\times$, the second matrix is the Frobenius automorphism of $\GF{49}$, and the last generates the Sylow 2-subgroup of $\GF{49}^\times$ of order 16. Here I choose a basis of $\GF{49}$ of the form $\{1,\xi\}$ where $\xi$ is a primitive $16$th root of unity in $\GF{49}$. For a choice that makes the second matrix prettier, see below. Intuitive version: I'll repeatedly use a simple but useful fact: If $H \leq G$ are finite, and $p$ does not divide the index $[G:H]$, then every Sylow $p$-subgroup of $H$ is a Sylow $p$-subgroup of $G$. I'll choose $H$ so that we can use induction. I'll try to describe all Sylow $p$-subgroups for all prime powers $q$, but I'll ignore $p$ divides $q$ (upper unitriangulars form a Sylow $p$-subgroup in this case), and I'll assume $q$ is odd to make a few statements smoother. I put a more complete version on my website. The order of $\GL(3,q)$ is $(q^3-1)(q^3-q)(q^3-q^2) = q^3(q-1)^3(q+1)(q^2+q+1)$ with the latter factorization driving most of the case by case analysis. The main subgroups we consider are $\GF{q}^\times \cong \GL(1,q)$, $\GF{q^2}^\times \leq \GL(2,q)$, and $\GL(1,q) \times \GL(1,q) \times \GL(1,q) \leq \GL(1,q) \times \GL(2,q) \leq \GL(3,q)$. Assume $q$ is an odd prime power. Then $[\GL(3,q):\GL(1,q) \times \GL(2,q)] = q^2(q^2+q+1)$. For many primes $p$, a Sylow $p$-subgroup is contained in $\GL(1,q) \times \GL(2,q)$. The exceptions are those $p$ such that $q$ has order 3 mod $p$, and of course the defining characteristic where $p$ divides $q$. At any rate, if $q$ is odd, then $p=2$ does not divide $q^2(q^2+q+1) \equiv 1 \mod 2$, so a Sylow 2-subgroup is contained in $\GL(1,q) \times \GL(2,q)$. Thus we want the direct product of a Sylow $p$-subgroup of $\GL(1,q)$ and a Sylow $p$-subgroup of $\GL(2,q)$, at least for our $p=2$. The Sylow 2-subgroup of $\GL(1,q)\cong \GF{q}^\times$ is isomorphic to the Sylow 2-subgroup of $\GF{q}^\times$. In case $q=7$, it is generated by $\begin{bmatrix}-1\end{bmatrix}$ of order 2. In general, I'll just call $\zeta$ a generator of the Sylow $p$-subgroup of $\GF{q}^\times$. The Sylow 2-subgroup of $\GL(2,q)$ comes in two varieties. 
The index $[\GL(2,q):\Sym{2} \ltimes (\GL(1,q) \times \GL(1,q))] = q(1+q)/2$, so assuming $p$ does not divide $q$ (automatic if $q$ is odd and $p=2$) and $p$ does not divide $(1+q)/2$ (for $p=2$ and $q$ odd this just means $1 \equiv q \mod 4$), then the Sylow $p$-subgroup is contained in $\Sym{2} \ltimes (\GL(1,q) \times \GL(1,q))$. In such a case, $1 \equiv q \mod 4$, the full Sylow 2-subgroup is generated by the matrices: $$ \bm{\zeta}{1}{0}{0}{1}, \qquad \bm{1}{0}{1}{1}{0}, \qquad \bm{1}{\zeta}{0}{0}{1} $$ In the other case, $3 \equiv q \mod 4$, view $\GF{q}^2$ as $\GF{q^2}$ by choosing a basis. I suggest choosing a basis $\{x,x^q\}$ so that the linear transformation $\GF{q^2} \to \GF{q^2}:t \mapsto t^q$ is represented by the matrix $\left[\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right]$. Now choose $\xi \in \GF{q^2}$ a generator of the Sylow $p$-subgroup of $\GF{q^2}^\times$. Multiplication by $\xi$ is a $\GF{q}$-linear map of $\GF{q^2}$, so it has a matrix, denote it $A$. At any rate, $[\GL(2,q):\Gal(\GF{q^2}/\GF{q})\ltimes \GF{q^2}^\times] = q(q-1)/2$ and since $3 \equiv q \mod 4$, $p=2$ does not divide this index. In this case, $3 \equiv q \mod 4$, the full Sylow 2-subgroup is generated by the matrices: $$\bm{\zeta}{1}{0}{0}{1}, \qquad \bm{1}{0}{1}{1}{0}, \qquad \left[\begin{array}{r|r} 1 & 0 \\ \hline 0 & A \end{array}\right] $$ In the explicit answer above, I chose a different basis of $\GF{q^2}$, $\{1,\eta\}$, and with respect to this basis $A$ has matrix $\left[\begin{smallmatrix}0&1\\1&-1\end{smallmatrix}\right]$ because the minimum polynomial of $\eta$ is $\eta^2+\eta-1$. This is easy to calculate, but the Frobenius map is more of a pain, $\eta^7 = -1-\eta$, so its matrix is now $\left[\begin{smallmatrix}1&0\\-1&-1\end{smallmatrix}\right]$.
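One can confirm with a few lines of code (my own check, not part of the answer) that the three explicit matrices really do generate a group of order $2^6=64$: represent them as $3\times3$ matrices over $\mathbb F_7$ and close under multiplication:

```python
P = 7  # work over F_7; entries are reduced mod 7, so -1 is written as 6

# The three generators from the explicit version above:
G1 = ((6, 0, 0), (0, 1, 0), (0, 0, 1))   # diag(-1, 1, 1)
G2 = ((1, 0, 0), (0, 1, 0), (0, 6, 6))   # Frobenius of F_49 on the 2x2 block
G3 = ((1, 0, 0), (0, 0, 1), (0, 1, 6))   # multiplication by a 16th root of unity

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3)) % P
                       for j in range(3)) for i in range(3))

gens = [G1, G2, G3]
group = set(gens)
changed = True
while changed:                 # close under right multiplication by generators
    changed = False
    for A in list(group):
        for B in gens:
            C = mul(A, B)
            if C not in group:
                group.add(C)
                changed = True

print(len(group))              # 64 == 2**6, the order of a Sylow 2-subgroup
```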
H: Why is this integral zero for every $n \in \mathbb{Z}$? $\int_0^\pi x(\pi -x) \sin 2nx\,dx$. The integral $$\int_0^\pi x(\pi -x) \sin 2nx\,dx$$ evaluates to zero, but the function can't be said to be even or odd. What argument, other than pure calculation (as I did), would give the value $0$ immediately? The factor multiplying the sine in the integrand is symmetric (w.r.t. reflection about $x=\pi/2$) on the interval, but that's about all I can really say. Can something more general be extrapolated from this? AI: You're on the right track. You need to think about the symmetries and antisymmetries about $x= \pi/2$. The quadratic part you've already noted; now think about the sine function: under the reflection $x\mapsto \pi-x$, the factor $x(\pi-x)$ is unchanged while $\sin(2n(\pi-x))=-\sin(2nx)$, so the integral equals its own negative and must be $0$. You may want to draw a few of the sine functions out if you're stuck; after this it should be clear why the integral is zero.
H: Determining the smallest possible value If both $11^2$ and $3^3$ are factors of the number $a \times 4^3 \times 6^2 \times 13^{11}$, then what is the smallest possible value of $a$? Is there any trick to answer this type of question quickly? AI: HINT: As $(4,11)=(6,11)=(13,11)=1$, $11^2$ must divide $a$. Observe that the highest power of $3$ in $4^3 \times 6^2 \times 13^{11}$ is $2$. So $3$ must divide $a$ $\implies$ lcm$(11^2,3)$ must divide $a$, and the smallest such positive value is $a=11^2\cdot 3=363$.
H: Translation of; If X is any real number other than 1, then... I've just started reading a book on number theory and am trying to follow along with the example proofs of theorems. I've not had too much trouble once I have managed to "translate" the mathematical notation into an English sentence. Could somebody state in plain English what this says? If $x$ is any real number other than $1$, then $$\sum_{j = 0}^{n -1} x^j = 1 + x + x^2 + \cdots + x^{n-1} = \frac{x^n-1}{x-1}$$ AI: It says that if you add $1$ to $x$ to $x^2$ and so on, where each term is $x$ times the previous term, until you've added $n$ terms in total, then the result equals the expression on the right, $\frac{x^n-1}{x-1}$.
H: Why is this derivative correct? Why is the following correct? Can't understand it. $$\dfrac{d}{dz}\bigg(\dfrac{e^z-e^{-z}}{e^{z}+e^{-z}}\bigg)=1-\dfrac{(e^z-e^{-z})^2}{(e^{z}+e^{-z})^2}$$ AI: We use the quotient rule for taking derivatives: $$\dfrac{d}{dz}\bigg(\dfrac{\overbrace{e^z-e^{-z}}^{\large f(x)}}{\underbrace{e^{z}+e^{-z}}_{\large g(x)}}\bigg)$$ $$\frac{d}{dx}\left(\frac{f(x)}{g(x)}\right) = \frac{f'(x)g(x) - f(x)g'(x)}{g^2(x)}$$ In this case, $\;f'(x) = e^z + e^{-z} = g(x),\;$ and $\;g'(x) = e^z - e^{-z} = f(x),\;$ so we get that the derivative $$\frac{d}{dx}\left(\frac{f(x)}{g(x)}\right) = \frac{g^2 (x) - f^2(x)}{g^2(x)} = 1 - \dfrac{f^2(x)}{g^2(x)} = 1-\dfrac{(e^z-e^{-z})^2}{(e^{z}+e^{-z})^2}$$
H: The Poisson distribution? Particles are suspended in a liquid medium at a concentration of 6 particles per ml. A large volume of the suspension is thoroughly agitated, and then 3 ml are withdrawn. What is the probability that exactly 15 particles are withdrawn? AI: The usual model is that the number of particles in a $3$ ml sample has Poisson distribution with parameter $\lambda=(6)(3)$. If $X$ is the number of particles in the sample, the model gives $$\Pr(X=15)=e^{-18} \frac{(18)^{15}}{15!}.$$
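Numerically (my addition), this probability comes out to about $0.0786$:

```python
from math import exp, factorial

lam = 6 * 3          # rate: 6 particles/ml in a 3 ml sample
k = 15
print(exp(-lam) * lam**k / factorial(k))   # about 0.0786
```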
H: What is the area of the shaded region of the square? To find the area of the shaded portion in the figure below; the picture is generated by the following Mathematica code. Block[{cond = {x^2 + (y - 1/2)^2 < 1/4 && x > 0, (x - 1)^2 + (y - 1)^2 < 1 && x < 1 && y < 1, (x - 1)^2 + y^2 < 1 && y > 0 && x < 1} }, RegionPlot[Evaluate@Append[cond, And @@ cond], {x, -#, #}, {y, -#, #}, PlotPoints -> 40, PlotRange -> {{-0.2, 1.2}, {-0.2, 1.2}}, PlotStyle -> {None, None, None, Cyan}, Axes -> 1, Frame -> 0] &@1.5 ] AI: Solving the circle equations pairwise, the vertices of the region are the top point $(\frac 25,\frac 45)$, the bottom point $(\frac 25,\frac 15)$, and the leftmost point $(1-\frac{\sqrt 3}2,\frac 12)$. The region is symmetric about the line $y=\frac12$, so I would cut it horizontally at $y=\frac 12$, integrate the area above that line, and double the result.
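To check any closed-form answer numerically (a sketch of mine, independent of the Mathematica code), rejection sampling over the unit square gives the area directly, since the sampling box has area $1$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10**6
x = rng.uniform(0.0, 1.0, N)
y = rng.uniform(0.0, 1.0, N)

# The same three conditions as in the Mathematica cond list:
inside = (
    (x**2 + (y - 0.5) ** 2 < 0.25) & (x > 0)
    & ((x - 1) ** 2 + (y - 1) ** 2 < 1) & (x < 1) & (y < 1)
    & ((x - 1) ** 2 + y**2 < 1) & (y > 0)
)
print(inside.mean())   # Monte Carlo estimate of the shaded area
```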
H: How to integrate $\int x\sin {(\sqrt{x})}\, dx$ I tried using integration by parts twice, the same way we do for $\int \sin {(\sqrt{x})}\,dx$, but in the second integral I'm not getting an expression that is equal to $\int x\sin {(\sqrt{x})}\,dx$. I let $\sqrt x = t$; thus $$\int t^2 \sin({t})\cdot 2t\, dt = 2\int t^3\sin(t)\,dt = 2\left[-\cos(t)\cdot t^3 + \int 3t^2\cos(t)\,dt\right] = 2\left[-\cos(t)\cdot t^3+\sin(t)\cdot 3t^2 - \int 6t \sin(t)\,dt\right]$$ which I can't find useful. AI: Yes, indeed, continue as you did: treat $\int 6t\sin(t)\,dt$ as a separate integral, use integration by parts once more ($\int 6t\sin t\,dt=-6t\cos t+6\sin t$), and subtract that result from your earlier work; you will end with an expression with no integrals remaining: $$2\int t^3\sin t\,dt = 2\left(-t^3\cos t + 3t^2\sin t + 6t\cos t - 6\sin t\right) + C,$$ which you can verify by differentiating. Substituting back $t=\sqrt x$ gives $$\int x\sin\sqrt x\,dx = -2x\sqrt x\cos\sqrt x + 6x\sin\sqrt x + 12\sqrt x\cos\sqrt x - 12\sin\sqrt x + C.$$
H: How to calculate the number of banner appearances based on monthly page views. Hello fellow mathematicians. I have a website that gathers more than 44,000 page views per month. On my website I have 1 rotating placement with 4 banner positions; each time it rotates, 4 new banners appear. So the client asks me: how many times will my banner appear on the website? I came up with the following equation to help me answer that question: ((page views) $\times$ (available banner positions)) / (total banners in the stack) = (how many times each banner will appear) So if I have 4 banner positions that rotate each time and 6 banners in my stack, I can calculate that within 7 page views each banner appears $(7\times 4)/6\approx 4.67$ times. Here is a table I came up with in order to check whether my answer is correct. \begin{array}{c c c c c} page views& Banner pos 1& Banner pos 2 & Banner pos 3& Banner pos 4\\ 1& banner 1 & banner 2 & banner 3 & banner 4 \\ 2& banner 5 & banner 6 & banner 1 & banner 2 \\ 3& banner 3 & banner 4 & banner 5 & banner 6 \\ 4& banner 1 & banner 2 & banner 3 & banner 4 \\ 5& banner 5 & banner 6 & banner 1 & banner 2 \\ 6& banner 3 & banner 4 & banner 5 & banner 6 \\ 7& banner 1 & banner 2 & banner 3 & banner 4 \\ \end{array} Indeed it looks like Banner 1 will appear 5 times and Banner 6 will appear 4 times, so $4.67$ means "4 to 5 times". I am not sure if this is correct though, so I need your validation. Thank you AI: Assuming you have $k$ banners and $n$ places to show banners, then each banner will be shown $n/k$ times per page view, assuming your rotation scheme is a fair one - which yours (essentially) is. So, you are correct; moreover, if the number of page views is a multiple of the number of banners you have, then each banner will have been shown the same (integer) number of times. If you look at your table but ignore row 7, you will see that each banner gets shown 4 times, which is what we'd expect: $6 \times 4 \div 6 = 4$. Hope that helps!
H: Does the Stone-Čech compactification respect subspaces? That is, is it true that if $X$ and $Y$ are topological spaces (assume they are Tychonoff, if necessary), with $X \subseteq Y$, then $\beta X$ is homeomorphic to a subspace of $\beta Y$? If so, how does one prove this? If not, what would be a counter-example? My lecturer has asked us to come up with questions for the exam in 4 days. I've been trying to prove this, but juggling all the various spaces and topologies is making my head hurt, and I'm hitting so many dead ends that I'm beginning to think it's not true. Thanks for any help! AI: If $X=\mathbb N$ and $Y=\mathbb N^*$, the one-point compactification of $\mathbb N$, then $\beta Y\approx \mathbb N^*$, which is countable. However, $\beta\mathbb N$ is not countable, hence there cannot be an injection into $\beta\mathbb N^*$
H: How do we define $\sin(\theta)$ or $\cos(\theta)$ On the interval $[0,2\pi]$, how do we define either sine or cosine? Obviously if we have one, the other is straight forward to generate as a phase shift of the other one. To expand a little, we know what properties we want these functions to satisfy: $2\pi$-periodic, max/min of $1/-1$, $2$ zero crossings, alternating sign and function for derivatives. But the behavior near maxima/minima and zero crossings seem difficult to define without resorting to preexisting information about derivatives of the functions; for example, how fast should cosine level off as it goes to $\frac{\pi}{2}$? Obviously one way to define these is just to find a Taylor Series which exhibits these characteristics, but it feels like there should be a more intuitive or geometric way. One method I thought of was to define linear distance around the unit circle as a function of time, $f:[0,2\pi] \to [0,2\pi], t \mapsto t$, and then try to extrapolate the x and y coordinates (and therefore sine/cosine values) from the position on the circle. I'm not exactly sure how I would go about doing this, short of using sine and cosine themselves, in which case I suppose I've found a definition? It also seems that there should be some way to at least define sine or cosine from $[0,\frac{\pi}{2}]$ this way using the Pythagorean Theorem and some differentials to convert between linear/angular velocity (since they are the same in this case) and position, in which case it seems straight forward to extend this to the rest of the function. AI: There are several ways of defining sine and cosine, one of which is by their respective Taylor series. Another way of defining say sine, is as the unique solution to the differential equation $y''=-y$, $y(0)=0$, $y'(0)=1$. Then we can define cosine to be its derivative.
H: Does the nerve of a category preserve directed colimits? The nerve $N(C)$ of a category $C$ is a simplicial set and defines a functor $$ N\colon\operatorname{Categories}\to \operatorname{sSets} $$ from the category of small categories to simplicial sets. It is given by $N(C)_n=\operatorname{Hom}_{\operatorname{Categories}}(\Delta[n],C)$, where $\Delta[n]$ is the obvious category. Does $N$ preserve directed colimits? This is true if $\Delta[n]$ is a compact object in $\operatorname{Categories}$. AI: Yes, $\Delta[n]$ is compact: it has finitely many objects and morphisms, so any functor from it into a directed colimit of categories factors through one of the stages!
H: Can we *ever* use certain log/exp identities in the complex case? This article on Wikipedia points out that certain identities for the log and exponential functions which are familiar from the real case require care when used in the complex case. Failures in the following identities are pointed out $$\log{z^w} \equiv w \log {z}\\(zw)^{\omega}\equiv z^{\omega}w^{\omega}\\e^{zw} \equiv (e^z)^w$$ The article alludes to the fact that the identities do not always work, even if we consider them as an assertion about equalities of sets (and consider the functions as multivalued). Now, how can we work with these identities without leading ourselves into error? Is the answer simply never to employ them in the presence of a complex base or exponent? Occasionally I see uses of these identities; here is one example: Here is a scanned page from Schaum's outline on complex variables which shows how to evaluate $\int_{0}^{\infty}\frac{x^{p-1}}{1+x}dx$ with a keyhole contour (for $0<p<1$). As usual, on the return path, we have to parameterize the path as $z = e^{2\pi i}x$, $x\in \mathbb{R}$, rather than simply $z=x$. Now when the author exponentiates this term, he uses $(x e^{2\pi i})^{p-1}=x^{p-1}e^{2\pi i(p-1)}$, employing the second identity above. What justifies this, given that the identity does not hold in general? AI: By definition $z^w = e^{w \log(z)}$ (for whichever branch of $\log(z)$ you are using), so $$ \log(z^w) = w \log(z) + 2 \pi i n$$ for some integer $n$. If, for example, you are using the branch of $\log$ that has imaginary part in $(\alpha, \alpha + 2 \pi]$, you choose $n$ so that the imaginary part is in that interval. Similarly, $(zw)^\omega = e^{\omega \log(zw)}$, and $\log(zw) = \log(z) + \log(w) + 2 \pi i n$ for suitable integer $n$, so $$(zw)^\omega = e^{\omega(\log(z) + \log(w) + 2 \pi i n)} = e^{2 \pi i n \omega} z^\omega w^\omega $$ And $$(e^z)^w = e^{w \log(e^z)} = e^{w (z + 2 \pi i n)} = e^{2 \pi i n w} e^{wz}$$ EDIT: In Schaum's example, it is misleading to write $z = e^{2 \pi i} x$ (which would be the same as $x$), it's really $e^{(2 \pi - \epsilon) i} x$ where $\epsilon > 0$ is arbitrarily small. Then $\log(z) = \log(x) + (2 \pi - \epsilon) i$ (for the branch of log with imaginary part in $[0, 2 \pi)$), and $$z^{p-1} = e^{(p-1)\log(z)} = e^{(p-1)\log(x) + (p-1)(2\pi - \epsilon) i} = x^{p-1} e^{(p-1)(2\pi - \epsilon) i}$$
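One can watch the correction factor $e^{2\pi i n \omega}$ appear numerically. The sketch below uses Python's built-in complex arithmetic, which (like cmath) works with the principal branch, $\operatorname{Im}\log z \in (-\pi, \pi]$; the particular $z$, $w$, $\omega$ are arbitrary choices that force $n=-1$.

```python
import cmath

# ** and cmath.log use the principal branch: Im(log z) in (-pi, pi].
z = -1 + 1j        # Arg(z) = 3*pi/4
w = -1 + 1j        # Arg(z) + Arg(w) = 3*pi/2 falls outside (-pi, pi]
omega = 0.5 + 0.3j

lhs = (z * w) ** omega
rhs = z ** omega * w ** omega
print(lhs, rhs)    # the two sides disagree, so (zw)^w = z^w w^w fails here

# Here log(zw) = log(z) + log(w) - 2*pi*i, i.e. n = -1, and the predicted
# factor e^{2 pi i n omega} reconciles the two sides:
print(rhs * cmath.exp(2j * cmath.pi * -1 * omega))  # matches lhs
```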
H: Bernoulli Random Variables and Variance The question is: Suppose $Z_1, Z_2, \ldots $ are iid $\operatorname{Bernoulli}\left(\frac{1}{2}\right)$ and let $S_n = Z_1 + \ldots +Z_n$. Let $T$ denote the smallest $n$ such that $S_n = 3$. Calculate $\operatorname{Var}(T)$. What I know is that $\operatorname{Var}(T) = E(T^2) - E(T)^2$ but I am not sure how to calculate the expectation from the given information. Perhaps need to go through moment-generating function and the formula $M^{(r)}(0) = E(X^r)$? AI: The moment generating function idea is in this case a good one. Let $T_1$ be the smallest $n$ such that $S_n=1$. More informally, $T_1$ is the waiting time until the first "success." Let $T_2$ be the waiting time from the first success to the second, and let $T_3$ be the waiting time from the second success to the third. Then the $T_i$ are independent and identically distributed, and $T=T_1+T_2+T_3$. Thus the moment generating function of $T$ is the cube of the mgf of $T_1$. We proceed to find the mgf of $T_1$. So we want $E( e^{tT_1})$. Note that $T_1=k$ with probablity $\frac{1}{2^k}$. So for the moment generating function of $T_1$ we want $$\sum_{k=1}^\infty \frac{1}{2^k}e^{tk},$$ This is an infinite geometric progression with first term $\frac{e^t}{2}$ and common ratio $\frac{e^t}{2}$. Thus the moment generating function of $T_1$ is $$\frac{e^t}{2(1-\frac{e^t}{2})}.$$ Cube this to get the mgf of $T$, and use that mgf to find $E(T)$ and $E(T^2)$. Remark: The fact that the probabilities were $\frac{1}{2}$ was not of great importance. And neither was the fact that we are interested in the waiting time until the third success. Our $T$ has distribution which is a special case of the negative binomial. The method we used adapts readily to find the mgf of a general negative binomial.
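To finish the calculation (the geometric series above converges for $t<\ln 2$, which is all we need near $t=0$): cubing the mgf and differentiating at $0$ gives $E(T)=6$ and $\operatorname{Var}(T)=6$, agreeing with the negative binomial formulas $r/p$ and $r(1-p)/p^2$ for $r=3$, $p=\frac 12$. A symbolic cross-check, as a sketch with sympy:

```python
import sympy as sp

t = sp.symbols('t')
mgf_T1 = (sp.exp(t) / 2) / (1 - sp.exp(t) / 2)  # mgf of one geometric waiting time
mgf_T = mgf_T1**3                                # T = T1 + T2 + T3 (independent)

ET = sp.diff(mgf_T, t).subs(t, 0)                # E(T)   = M'(0)
ET2 = sp.diff(mgf_T, t, 2).subs(t, 0)            # E(T^2) = M''(0)
print(sp.simplify(ET), sp.simplify(ET2 - ET**2)) # 6 6
```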
H: Probability question involving playing cards and more than one players I've been struggling with this problem for a while, so I have to ask you guys for a little mental push. My problem involves a full deck of playing cards and 4 players, but I'll simplify it to 4 cards and 2 players with the hopes to be able to apply the logic behind the simple question to the bigger one. Okay, here goes: There are the following cards in a deck: (K of Hearts), (10 of Diamonds), (8 of Diamonds), (10 of Hearts) There are two players and each one gets 2 cards. What is the probability of AT LEAST one of the players to have (at the same time) NO hearts AND at least one card of diamonds? Can anyone walk me through the logic and calculations and get me to the answer? Thanks in advance! AI: Any player that has no hearts has two diamonds, so you want the chance that the two heart cards are in the same hand. If you deal the first heart to somebody, he gets one of the remaining three cards, so the chance is $\frac 13$. This approach may not help you with the larger problem. Added: for a slightly larger problem, take a standard deck, a player drawing two cards without replacement and we ask the chance that he gets no hearts and at least one diamond. The first card can be a diamond, probability $\frac 14$, in which case the second card can be any non-heart, probability $\frac {38}{51}$ or the first card can be a club or spade, probability $\frac 12$ and the second a diamond, probability $\frac {13}{51}$. The total is then $\frac 14 \cdot \frac {38}{51}+\frac 24\cdot \frac {13}{51}=\frac {64}{204}=\frac {16}{51}$. It takes care to get all the possibilities, and once each. In this case it would be easy to double count the draws of two diamonds. Added: to have two players have no hearts each and at least one diamond, you just keep going. The new twist is that the chance of the second getting this is influenced by whether the first has two diamonds or not. So now we figure the chance the first player has two diamonds: $\frac 14 \cdot \frac {12}{51} = \frac 3{51}$ Then the chance the first player has exactly one diamond and one non-heart is $\frac {16}{51}-\frac 3{51}=\frac {13}{51}$ Given two diamonds in the first hand, the second player can draw a diamond then a non-heart with probability $\frac {11}{50}\cdot \frac {36}{49}$ and a club or spade then a diamond with probability $\frac {26}{50}\cdot \frac {11}{49}$. Given only one diamond and no hearts in the first hand, the second player can draw a diamond then a non-heart with probability $\frac {12}{50}\cdot \frac {36}{49}$ and a club or spade then a diamond with probability $\frac {25}{50}\cdot \frac {12}{49}$. So the total is $\frac 3{51}\cdot (\frac {11}{50}\cdot \frac {36}{49}+\frac {26}{50}\cdot \frac {11}{49})+\frac {13}{51}(\frac {12}{50}\cdot \frac {36}{49}+\frac {25}{50}\cdot \frac {12}{49})$. I'll leave it to you to gather all this together and to do the three card case.
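Careful case analysis like this is easy to get wrong, so a Monte Carlo cross-check is reassuring. The sketch below simulates the two-card, two-player version with a full deck; the suit encoding and trial count are arbitrary choices of mine.

```python
import random

def one_deal():
    """Deal two cards each to two players from a shuffled standard deck.
    Suits: 0 = hearts, 1 = diamonds, 2 = clubs, 3 = spades."""
    deck = [(rank, suit) for rank in range(13) for suit in range(4)]
    random.shuffle(deck)
    def no_hearts_some_diamond(hand):
        suits = [s for _, s in hand]
        return 0 not in suits and 1 in suits
    return no_hearts_some_diamond(deck[0:2]), no_hearts_some_diamond(deck[2:4])

N = 100_000
deals = [one_deal() for _ in range(N)]
print(sum(a for a, _ in deals) / N, 16 / 51)  # single-player event, ~0.3137
print(sum(a and b for a, b in deals) / N)     # both players, cf. the final sum above
```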
H: Solving a first-order differential equation Please help me with the calculation, or with a method so I can do it myself. Maybe the substitution $$y=uv?$$ $$ y'=\frac{x+y-2}{y-x-4} $$ AI: Hints: This is a first-order nonlinear equation. Rewrite it as $M\,dx + N\,dy = 0$, i.e. $(x+y-2)\,dx + (x-y+4)\,dy = 0$. Test to see if it is an exact equation (here $\partial M/\partial y = 1 = \partial N/\partial x$, so it is). Solve to get: $$y(x) = x \pm \sqrt {2} \sqrt{x^2+2 x+c_1}+4$$
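The stated solution can be verified symbolically; below is a sketch with sympy that substitutes one branch back into the equation (the value $5$ for the constant $c_1$ is an arbitrary choice of mine).

```python
import sympy as sp

x = sp.symbols('x')
c1 = 5  # arbitrary value for the constant of integration

# one branch of the claimed solution
yx = x + 4 + sp.sqrt(2) * sp.sqrt(x**2 + 2*x + c1)

# substitute into y' = (x + y - 2)/(y - x - 4) and check the residual
residual = sp.diff(yx, x) - (x + yx - 2) / (yx - x - 4)
print(sp.simplify(residual))  # 0, so the solution checks out
```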
H: Number of letters required to make three-letter names If a monster has 63 children and he wants to give each of them a distinct 3-letter name, with the condition that the same letter can be used more than once, how many letters at minimum does the monster need to name its children? (Source: BdMO 2011.) I have done a bit of listing and I think it is 5. As others pointed out, my listing was wrong. AI: Let $n$ be the number of letters we need to form $63$ distinct three-letter names. So, we have $n$ choices for the first letter, $n$ choices for the second letter and $n$ choices for the third letter: This means we have $n\times n\times n = n^3$ possible combinations of distinct names. We need $n^3 \geq 63$ where $n$ is an integer. $n = 4$ gives us $4^3 = 64$ distinct names, and this is indeed the least integer giving us at least $63$ distinct names. (For $n = 3$, we have only $3^3 = 27$ possible distinct names, and for $n = 5$, we have as many as $5^3 = 125$ distinct three-letter names.) Hence, the monster needs minimally $4$ distinct letters to create 63 distinct three-letter names.
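For the skeptical, brute-force enumeration confirms the count; the alphabet string here is an arbitrary stand-in.

```python
from itertools import product

for n in range(1, 6):
    names = {"".join(p) for p in product("ABCDE"[:n], repeat=3)}
    print(n, len(names))  # 1, 8, 27, 64, 125: the first count >= 63 is at n = 4
```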
H: A question about the composite function of a derivative This may seem dumb, but I'm trying to understand the proof of the chain rule, and here is my issue: By definition, the derivative is the following: $f'(a)=\lim\limits_{x\rightarrow a}(\frac{f(x)-f(a)}{x-a})$ So far, so good. But then, if I were to evaluate the derivative at $g(a)$, I would do it like this: $f'(g(a))=\lim\limits_{x\rightarrow g(a)}(\frac{f(x)-f(g(a))}{x-g(a)})$ I mean, isn't $a$ the input to the $f'(x)$ function? But the proof states that the derivative of the composite function is the following: $f'(g(a))=\lim\limits_{x\rightarrow a}(\frac{f(g(x))-f(g(a))}{x-a})$ And I don't understand why. AI: You are correct. Note that: $$ f'(g(a))=\lim\limits_{y\rightarrow g(a)}\frac{f(y)-f(g(a))}{y-g(a)} $$ while on the other hand: $$ (f \circ g)'(a)=\lim\limits_{x\rightarrow a}\frac{f(g(x))-f(g(a))}{x-a} $$ (To see this, consider the function $h(x)=f(g(x))$.) Proving the Chain Rule involves proving that: $$ (f \circ g)'(a) = f'(g(a))\cdot g'(a) $$
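To see how the two limits connect - as a sketch that glosses over the possibility $g(x)=g(a)$ for $x$ arbitrarily close to $a$, which a complete proof must handle separately - write
$$\frac{f(g(x))-f(g(a))}{x-a}=\frac{f(g(x))-f(g(a))}{g(x)-g(a)}\cdot\frac{g(x)-g(a)}{x-a}.$$
As $x\to a$, differentiability (hence continuity) of $g$ forces $y=g(x)\to g(a)$, so the first factor tends to $f'(g(a))$ (this is exactly the limit with $y\to g(a)$ above) and the second tends to $g'(a)$, giving $(f \circ g)'(a)=f'(g(a))\cdot g'(a)$.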
H: Is my application of Cauchy-Riemann right? Question: Given $f(z) = 3z^2 + 9z^3 -z$. 1. Find $f^\prime(z)$ 2. Find $f(z)$ when $z = 3 + 2i$ 3. Use Cauchy-Riemann to find if $f(z)$ is differentiable at $3 + 2i$ My Attempt: 1. $f^\prime(z) = 6z + 27z^2 - 1$ 2. $$ \begin{align} f(3 + 2i) &= 3(3+2i)^2 + 9(3+2i)^3 - (3 + 2i)\\ &= 3(9-4+6i)+9(9-8i+54i-36) - (3+2i)\\ &= (15 + 18i) + (81 - 72i + 486i - 324) -(3 + 2i)\\ &= -231 + 430i\\ \end{align} $$ 3. For this, I substituted $z = x + iy$ in $f(z)$, then expanded the brackets. After that, I grouped the like terms together and expressed $f(z) = u(x,y) + iv(x,y)$. I got this as a result of my calculations: $$ \begin{align} u(x,y) &= 3x^2 - 3y^2 + 9x^3 -9y^3 - 27xy^2\\ v(x,y) &= 6xiy + 2x^2iy - iy \end{align} $$ Then, I calculated the partial derivatives $u_x, v_y, u_y$ and $v_x$. For Cauchy-Riemann, $u_x = v_y$ and $u_y = -v_x$. I substituted x = 3 and y = 2 in the partial derivatives and found that they fail the C-R equations. Hence, NO, IT IS NOT DIFFERENTIABLE AT $3 + 2i$ Can someone please verify this for me ? AI: Wrong. This is a polynomial, so it is differentiable everywhere, and the C-R equations should hold at every point. (Your $u$ and $v$ must be real-valued, yet your $v$ contains $i$ - a sign that the expansion went astray. Part 2 also has an arithmetic slip: $(3+2i)^2 = 5 + 12i$, not $5 + 6i$, so in fact $f(3+2i) = -69 + 448i$.) EDIT: In this case I get $$\eqalign{ u &= 9\,{x}^{3}-27\,x{y}^{2}+3\,{x}^{2}-3\,{y}^{2}-x, \cr v &= 27\,{x}^{2}y-9\,{y}^{ 3}+6\,xy-y\cr u_x &= 27 x^2 - 27 y^2 + 6 x - 1 = v_y\cr u_y &= -54 x y - 6 y = - v_x}$$
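A symbolic check confirms all of this. The sketch below (using sympy) verifies that the Cauchy-Riemann equations hold identically for this polynomial and recomputes part 2.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y
f = sp.expand(3*z**2 + 9*z**3 - z)
u, v = sp.re(f), sp.im(f)

print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))  # 0: u_x = v_y everywhere
print(sp.simplify(sp.diff(u, y) + sp.diff(v, x)))  # 0: u_y = -v_x everywhere

# part 2, recomputed:
print(sp.expand(3*(3 + 2*sp.I)**2 + 9*(3 + 2*sp.I)**3 - (3 + 2*sp.I)))  # -69 + 448*I
```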
H: Non-symmetric $A^T=A$ Wikipedia says that symmetric matrices are square ones which have the property $A^T=A$. This phrasing suggests that a matrix could satisfy $A^T=A$ without being square and, because it fails the squareness condition, fail to be symmetric - in which case the squareness condition does real work and the definition is right as stated. Is that right? Or is the definition redundant and misleading: would it be better to define a symmetric matrix by the single property $A^T=A$, with squareness following from symmetry rather than being part of the definition? The question in short: when defining a symmetric matrix, should I say that it is square? AI: If $A$ is an $n$-by-$m$ matrix, then $A^T$ is an $m$-by-$n$ matrix. Two matrices can be equal only if they have the same numbers of rows and columns, so $A=A^T$ already forces $n=m$: squareness follows from the condition $A^T=A$, and stating it separately in the definition is redundant (though harmless).
H: Hölder-continuous functions $f:I \rightarrow \mathbb R$ is said to be Hölder continuous if there exist $M>0$ and $\alpha$ with $0<\alpha\leq1$ such that $|f(x)-f(y)| \leq M|x-y|^\alpha$ for all $x,y \in I$. Prove that $f$ Hölder continuous $\Rightarrow$ $f$ uniformly continuous, and that if the estimate holds for some $\alpha>1$, then $f$ is constant. To prove that $f$ Hölder continuous $\Rightarrow$ $f$ uniformly continuous, it is enough to note that, given $\varepsilon>0$, the choice $\delta=(\varepsilon/M)^{1/\alpha}$ gives $|f(x)-f(y)| \leq M|x-y|^\alpha < \varepsilon$ whenever $|x-y|<\delta$. (Note that one cannot simply reduce to the Lipschitz case via $|x-y|^\alpha \leq |x-y|$: for $\alpha\leq 1$ that inequality holds only when $|x-y|\geq 1$.) But how can I prove that if $\alpha >1$, then $f$ is constant? AI: Hint: With $\epsilon=\alpha-1>0$, for all $x\ne y$ you have $\Bigl|{f(x)-f(y)\over x-y}\Bigr|\le M|x-y|^\epsilon$. Why must $f'(x)$ exist? What is the value of $f'(x)$?
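Spelling the hint out - one way to finish, not the only one: for every $x\in I$ and $y\ne x$,
$$\left|\frac{f(y)-f(x)}{y-x}\right|\le M|y-x|^{\alpha-1}\longrightarrow 0\quad\text{as } y\to x,$$
since $\alpha-1>0$. Hence $f'(x)$ exists and equals $0$ at every point of $I$, and a function with identically zero derivative on an interval is constant by the mean value theorem.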
H: Probability of an event happening at least once in a sequence of independent events? I want to find the probability of flipping heads at least once if you flip a coin two times. The possible outcomes (keeping track of the order of the flips, so that all outcomes are equally likely) are $TT$, $TH$, $HT$, $HH$. Three out of four have an $H$ in them, so the probability is $\large \frac 34$. Is this correct? Is there a better, more efficient way (especially when dealing with a higher number of flips)? Please use only very basic terminology and concepts from probability because I've never taken a class. Thanks. AI: We have the following equally likely outcomes: T T T H <-- H T <-- H H <-- In three of the four outcomes, a Heads appears: the probability of at least one head is indeed $\dfrac 34$. Put differently, note that the probability that no heads appears is 1 out of 4, or $\frac 14$. So the probability of at least one head is $1$ minus the probability of getting NO heads, which is $1$ minus the probability of getting all tails: $1 - \frac 14 = 1 - \frac{1}{2^2} = 1 - \frac 12 \cdot \frac 12$. In the above demonstration, it is quite easy to list out the "probability space", which is, essentially, all possible outcomes. The "at least one head" qualifier is handy, because it allows you to simplify the determination of probability for any arbitrarily large number of flips. If you flip a coin $n$ times, and want to know the probability of getting at least one head, note that the outcome of getting all tails is the "complement" of the set of outcomes in which you get at least one head. The probability of getting all/only tails, when flipping a coin $n$ times, is equal to $$\underbrace{\dfrac 12 \cdot \frac 12 \cdot \cdots \cdot \frac 12}_{\large n\; \text{times}} = \left(\frac 1{2}\right)^n$$ So...$$\begin{align} \text{probability of getting at least one head}\; & = 1 -\text{probability of not getting any heads}\; \\ \\ & = 1 - \;\text{probability of getting all tails}\;\\ \\& = 1 - \left(\dfrac{1}{2}\right)^n\end{align}$$
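The formula is easy to check by brute force for small $n$; a minimal sketch (the function names are mine):

```python
from itertools import product

def p_at_least_one_head(n):
    """Closed form: 1 minus the probability of getting all tails."""
    return 1 - (1 / 2) ** n

def brute_force(n):
    """Enumerate all 2**n equally likely flip sequences and count directly."""
    outcomes = list(product("HT", repeat=n))
    return sum("H" in o for o in outcomes) / len(outcomes)

for n in (2, 3, 10):
    print(n, brute_force(n), p_at_least_one_head(n))  # agree; 0.75 for n = 2
```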
H: practical arithmetic in prime factorizations I am quite adept at doing arithmetic mentally or on paper, but I know little about the relatively sophisticated stuff that software experts use to crunch numbers. My question is whether the following idea is frequently used by such experts. Say I'm trying to find the prime factorization of some immense number $N$, and I find that it's divisible by a bunch of small primes, e.g. $$ N = 2\cdot2\cdot2\cdot2\cdot3\cdot 7 \cdot19\cdot53\cdot79\cdots\cdots\cdots. $$ I've still got a long way to go to get anywhere near $\sqrt{N}$, but then I notice that the quotient of $N$ by the product of the prime factors I've found so far has a square root just smaller than the biggest prime I've tried so far. So I draw this conclusion: that quotient is prime. Might this method of establishing primality be used in actual software? AI: All general-purpose factorization software does trial division by a (usually) very large list of small primes before switching to the heavy machinery. [I say "general-purpose" factorization because there are special-purpose algorithms for cases where the target number is a semi-prime and very large. See the Number Field Sieve, for example.] In the case you cite above, your trick is correct (of course) but I have never seen it used. Instead what normally would happen is this: all small factors are divided out, as you indicated, and then the remaining number is submitted to a primality test. The primality test can be slow if the remaining quotient $Q$ is large, but in your case it's not: you suppose that $\sqrt{Q}$ is slightly smaller than the largest prime in your list. In practice, the largest prime in the "small primes list" will be around, say, $2^{30}$. This means $Q \approx 2^{60}$ and a primality test (which is $O(n^3)$ for an $n$-bit number) will be very fast. Your trick requires only a square root (or, even trickier: square the largest prime in your list and compare, which requires one multiplication), which is definitely faster, but we're talking about milliseconds here. Note: There is no denying that, by hand, your idea is a very worthwhile step to take.
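A minimal sketch of trial division augmented with the square-comparison trick from the question. The numbers and prime list are illustrative only; real factorization code would use a far larger prime list and a proper primality test for the leftover quotient.

```python
def divide_out_small_primes(N, small_primes):
    """Trial-divide N by the given primes (assumed to be every prime up to
    their maximum, in increasing order), then apply the square-comparison
    trick: if the leftover quotient Q satisfies 1 < Q < p**2, where p is the
    largest prime tried, Q has no factor <= sqrt(Q) and is therefore prime."""
    factors, Q = [], N
    for p in small_primes:
        while Q % p == 0:
            factors.append(p)
            Q //= p
    p = small_primes[-1]
    if 1 < Q < p * p:
        factors.append(Q)  # Q certified prime by the trick
        Q = 1
    return factors, Q      # Q > 1 here would mean 'not fully factored'

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53]
# 2797 is prime and 2797 < 53**2 = 2809, so the trick certifies it:
print(divide_out_small_primes(2 * 2 * 3 * 7 * 2797, primes))
# ([2, 2, 3, 7, 2797], 1)
```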