H: Calculating expected value of unknown random variable The question: Micro Insurance Company issued insurance policies to $32$ independent risks. For each policy, the probability of a claim is $1/6$. The benefit amount given that there is a claim has probability density function $$ f(y) = \begin{cases} 2(1-y) & 0<y<1, \\0 & \text{otherwise}. \end{cases}$$ Calculate the expected value of total benefits paid. My attempt: I'm not sure on how to define my random variable. Its expected value should sum from 1 to 32, each with probability $\frac{1}{6} \int f(y) dy$, I think. AI: Let $Y_{n}$ be the amount paid to the $n$-th policyholder assuming that the claim is made. Let $\mathbb{I}_{n}$ be $0$ when the $n$-th claim is not made and $1$ otherwise. Then, the total benefits paid is $$ X=\sum_{n=1}^{32}\mathbb{I}_{n}Y_{n}. $$ We need to calculate $$ \mathbb{E}\left[X\right]=\mathbb{E}\left[\sum_{n=1}^{32}\mathbb{I}_{n}Y_{n}\right]=\sum_{n=1}^{32}\frac{1}{6}\int_{0}^{1}yf\left(y\right)dy. $$ I think you can do the rest yourself.
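A quick numerical companion to the answer above (a Python sketch; the inverse-CDF sampler and trial count are choices of this illustration, not part of the original post). The closed form gives $\mathbb{E}[X]=32\cdot\frac{1}{6}\cdot\int_0^1 y\cdot 2(1-y)\,dy=32\cdot\frac{1}{6}\cdot\frac{1}{3}=\frac{16}{9}\approx 1.778$, and a Monte Carlo run agrees:

    import random

    n_policies, p_claim = 32, 1 / 6
    exact = n_policies * p_claim * (1 / 3)  # E[Y] = integral of 2y(1-y) on (0,1) = 1/3

    def sample_benefit():
        # invert the CDF F(y) = 1 - (1-y)^2 of the density 2(1-y) on (0,1)
        return 1 - (1 - random.random()) ** 0.5

    trials, total = 100_000, 0.0
    for _ in range(trials):
        total += sum(sample_benefit()
                     for _ in range(n_policies) if random.random() < p_claim)

    print(exact, total / trials)  # both approximately 1.778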
H: What is the equation for a 3D line? Just like we have the equation $y=mx+b$ for $\mathbb{R}^{2}$, what would be an equation for $\mathbb{R}^{3}$? Thanks. AI: You can describe a line in space as the intersection of two planes. Thus, $$\{(x,y,z)\in{\mathbb R}^3: a_1x+b_1y+c_1z=d_1 \text{ and } a_2x+b_2y+c_2z=d_2\}.$$ Alternatively, you can use vector notation to describe it as $$\vec{p}(t) = \vec{p}_0 + \vec{d}t.$$ This is largely a topic that you will learn about in a third semester calculus course, at least in the States.
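A minimal numerical sketch of the vector form (the base point and direction below are made-up example values):

    import numpy as np

    p0 = np.array([1.0, 2.0, 3.0])  # a point on the line (example values)
    d = np.array([0.5, -1.0, 2.0])  # direction vector (example values)

    ts = np.linspace(-2.0, 2.0, 5)
    points = p0 + np.outer(ts, d)   # one row per parameter value t
    print(points)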
H: What is "group of graph"? I'm reading some old article and I have one small question: what in general is the group of a graph? By the article, definition should be in Harary's Graph Theory, but unfortunately I don't have any access to that book. AI: The group of $G$ just means the automorphism group of $G$, i.e. the collection of all isomorphisms of $G$ with itself, the group operation being composition. Here is the relevant section from Harary's Graph Theory:
H: Show that a local ring is equicharacteristic iff it contains a subfield A local ring $(A,\mathfrak m)$ is equicharacteristic if $\operatorname{char} A=\operatorname{char} \kappa (\mathfrak m)$. Need hints to solve the following question: A local ring is equicharacteristic iff it contains a subfield. AI: $\kappa (\mathfrak m):=A/\mathfrak m$, and $\operatorname{char} A=\operatorname{char} \kappa (\mathfrak m)$ iff $A$ contains a field. If $A$ contains a field $k$, then we have $\operatorname{char} A=\operatorname{char} k$ and $k\subseteq A\to A/\mathfrak m$, so $\operatorname{char}A/\mathfrak m=\operatorname{char}k$. If $\operatorname{char} A=\operatorname{char} \kappa(\mathfrak m)$ and $\operatorname{char} \kappa(\mathfrak m)=p>0$, then obviously $\mathbb Z/p\mathbb Z\subseteq A$. If $\operatorname{char} \kappa(\mathfrak m)=0$, then $\mathbb Z\subseteq A$ and every non-zero integer remains non-zero in $\kappa(\mathfrak m)$. It follows that every non-zero integer is invertible in $A$, so $\mathbb Q\subseteq A$.
H: Sobolev spaces - about smooth approximation Consider $\Omega$ an open and bounded subset of $\mathbb R^n$. Let $u \in H^{1}(\Omega)$ be a bounded function. I know that there exists a sequence $u_m \in C^{\infty} (\Omega)$ with $u_m \rightarrow u$ in $H^{1}(\Omega)$. Can I assert that there exists $C>0$ such that $|u_m (x)| \leq C$ for all $x$ and all $m$? I have no idea how to prove this or how to find a counterexample. AI: Yes, it is possible to find a uniformly bounded approximating sequence. Let $\phi:\mathbb R\to\mathbb R$ be a bounded $C^\infty$ function such that (i) $\phi(x)=x$ for $x\in [-M,M]$, where $M=\operatorname{ess\,sup}|u|$, and (ii) $0\le \phi'(x)\le 1$ for all $x$. (As a consequence, $|\phi(x)|\le |x|$ for all $x$.) For every $m$, the function $\phi\circ u_m$ is smooth and satisfies $\|\phi\circ u_m\|_{H^1}\le \|u_m \|_{H^1}$. Since the sequence $(u_m)$ converges in the $H^1$ norm, it is bounded in the $H^1$ norm. Therefore, the sequence $(\phi\circ u_m)$ is bounded in the $H^1$ norm. It follows that $(\phi\circ u_m)$ has a subsequence that converges in the weak topology of $H^1(\Omega)$. On the other hand, $\phi\circ u_m\to \phi\circ u=u$ in $L^2(\Omega)$. Therefore, the aforementioned weak limit is indeed $u$. Furthermore, it is the strong limit because $\limsup\|\phi\circ u_m\|_{H^1}\le \limsup\| u_m\|_{H^1}=\|u\|_{H^1}$.
H: What does it say about a multivariate polynomial to be zero on a linear subspace? If a univariate polynomial $f(x)$ vanishes at a point $x_0$, we conclude that $x - x_0$ divides $f(x)$, and in particular that $f$ is reducible if $\deg f > 1$. Can anything of significance be said if a multivariate polynomial $g : \mathbb{R}^d \to \mathbb{R}$ vanishes along an entire linear subspace $V < \mathbb{R}^d$? The case $\dim V = 1$ can occur quite easily: if $g$ is homogeneous, any zero of $g$ produces such a $V$. Thus, I am particularly interested in $\dim V = 2$ or higher, or in whether there is a taxonomy of different ways the $\dim V = 1$ case can occur. This question is related to an earlier question on the Schwartz-Zippel lemma linked below. The motivation for both is that I have an algorithm that chooses a random linear subspace and succeeds whenever a particular multivariate polynomial doesn't vanish entirely on that subspace. Analogue of the Schwartz–Zippel lemma for subspaces AI: If $V$ has codimension $1$ and is cut out by a single linear equation $\sum_{i=1}^d a_i x_i = 0$, then you can conclude that $\sum a_i x_i$ divides $g$. The proof is fairly short: by applying a suitable change of coordinates you can assume WLOG that the linear equation is $x_1 = 0$, and then it's clear. In higher codimension you can't conclude much. For example, when $d = 2$ and $V$ has codimension $2$ then $V$ is a point, and vanishing on a point doesn't tell you too much in this case.
H: How is the area of a country calculated? As countries' or states' borders are not straight lines but they are irregular in nature, I wonder how anyone can calculate the area of a country or state. When do you think the area of a country or state was first calculated? Was it before satellites provided us accurate picture of the earth? Note: I am not asking about surface area of a country. They are assumed as flat while calculating the area. AI: I guess you could ask Google or WolframAlpha. Interestingly, these answers differ substantially. Perhaps, that's just a matter of how territories are interpreted but it illustrates the point that, there's really no easy answer. The question is at once terribly elementary and, on the other hand, fabulously interesting. Mandelbrot asked the question "How long is the coast of Britain?" Turns out that it depends strongly on how carefully you measure it. So, the short answer is - it's super simple, in that you do it just like any other spherical polygon. Dealing with data at this level, as well as territorial disputes, is a bit more complicated. To illustrate more concretely, consider the image below. You can see that, in a very simple sense, each "country" can be triangulated. These triangles can then be further sub-divided to get a good approximation and the areas of the triangles can be added up. Thus, again, on the most basic level it's no harder than adding up the areas of some triangles. To fully implement this, though, requires a fair understanding of computational geometry as well as access to solid data, which is likely disputable anyway. The real question is - can you get good upper and lower bounds? Addendum Per request, here's a look at the other side of the planet: And here's a look on the map of just India and its neighboring countries.
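To make the "spherical polygon" remark concrete, here is a Python sketch that computes the area of a convex spherical polygon by fanning it into spherical triangles and applying L'Huilier's theorem for the spherical excess (the convexity assumption and the sample coordinates are choices of this illustration):

    import math

    def unit(lat_deg, lon_deg):
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        return (math.cos(lat) * math.cos(lon),
                math.cos(lat) * math.sin(lon),
                math.sin(lat))

    def arc(u, v):
        # central angle between two unit vectors
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
        return math.acos(dot)

    def excess(a, b, c):
        # L'Huilier: spherical excess (= area on the unit sphere) from side arcs
        s = (a + b + c) / 2
        t = (math.tan(s / 2) * math.tan((s - a) / 2)
             * math.tan((s - b) / 2) * math.tan((s - c) / 2))
        return 4 * math.atan(math.sqrt(max(t, 0.0)))

    def convex_area(latlons, radius=6371.0):
        # fan triangulation from vertex 0; valid for convex spherical polygons
        p = [unit(*q) for q in latlons]
        e = sum(excess(arc(p[0], p[i]), arc(p[i], p[i + 1]), arc(p[i + 1], p[0]))
                for i in range(1, len(p) - 1))
        return e * radius ** 2  # square kilometers

    # a 10-by-10 degree patch at the equator, roughly 1.23 million km^2
    print(convex_area([(0, 0), (0, 10), (10, 10), (10, 0)]))

The result is close to the exact area of the corresponding latitude-longitude cell, $R^2\,\Delta\lambda\,(\sin\varphi_2-\sin\varphi_1)\approx 1.23\times 10^6$ square kilometers; the two differ slightly because the polygon's edges here are great circles rather than parallels.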
H: doubt about basic definition I have a very basic doubt regarding my last post understanding the basic definition. Can anybody clarify what the result would be if I take $j=1$? For example, if I take $(a,x,1)$ and $(b,x,1)$ as the vertices, will they be adjacent or not? Here the value of $j$ is 1. I made them adjacent according to the definition... vertices $(x_1, x_2, \dots , x_k)$, $(y_1, y_2, \dots , y_k)$ are adjacent if for some index $j \in \{1, 2, \dots , k\}$ we have $x_jy_j \in E(G_j)$ and $x_i = y_i$ for each $1\leq i < j$. In the example I took $V(P_3) = \{a,b,c,d\}$, $V(K_2) = \{x,y\}$ and $V(K_2)=\{1,2\}$, and the product is $P_3\circ K_2\circ K_2$. Thanks a lot. AI: If $x_1$ and $y_1$ are adjacent in $G_1$, then $\langle x_1,x_2,\dots,x_k\rangle$ and $\langle y_1,y_2,\dots,y_k\rangle$ are adjacent in $G_1\circ G_2\circ\ldots\circ G_k$ no matter what $x_2,\dots,x_k$ and $y_2,\dots,y_k$ are. In this case the condition that $x_i=y_i$ for $1\le i<j$ is vacuously satisfied, because there are no such $i$. Thus, in your example $\langle a,u,m\rangle$ is adjacent to $\langle b,v,n\rangle$ in $P_3\circ K_2\circ K_2$ for any choice of $u,v\in\{x,y\}$ and $m,n\in\{1,2\}$. Here’s another way to describe the adjacency relation in $G_1\circ G_2\circ\ldots\circ G_k$. Given distinct vertices $x=\langle x_1,x_2,\dots,x_k\rangle$ and $y=\langle y_1,y_2,\dots,y_k\rangle$, let $$d(x,y)=\min\Big\{i\in\{1,\dots,k\}:x_i\ne y_i\Big\}\;;$$ then $x$ and $y$ are adjacent in $G_1\circ G_2\circ\ldots\circ G_k$ if and only if $x_{d(x,y)}$ and $y_{d(x,y)}$ are adjacent in $G_{d(x,y)}$. In words, examine the coordinates of $x$ and $y$ from left to right. Since $x\ne y$, you will eventually come to one at which $x$ and $y$ differ. Say that this first occurs at position $j$: then $x$ and $y$ are adjacent in $G_1\circ G_2\circ\ldots\circ G_k$ if and only if $x_j$ and $y_j$ are adjacent in $G_j$. (This $j$ is the $d(x,y)$ of the previous paragraph.) This $j$ might be $1$, in which case you don’t even have to look at any other coordinate: if $x_1$ and $y_1$ are adjacent in $G_1$, then $x$ and $y$ are adjacent in $G_1\circ G_2\circ\ldots\circ G_k$. Or $x_1$ and $y_1$ might be equal, in which case you go on and look at $x_2$ and $y_2$. If $x_2\ne y_2$, then $x$ and $y$ are adjacent in $G_1\circ G_2\circ\ldots\circ G_k$ if and only if $x_2$ and $y_2$ are adjacent in $G_2$; if $x_2=y_2$, you go on and look at $x_3$ and $y_3$. And so on.
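The $d(x,y)$ rule in the answer transcribes directly into code (a sketch using the graphs from the question):

    def product_adjacent(x, y, edge_sets):
        # compare coordinates left to right; at the first position j where
        # x and y differ, they are adjacent iff x_j y_j is an edge of G_j
        for j, (xj, yj) in enumerate(zip(x, y)):
            if xj != yj:
                return frozenset((xj, yj)) in edge_sets[j]
        return False  # x == y

    E_P3 = {frozenset(e) for e in [("a", "b"), ("b", "c"), ("c", "d")]}
    edges = [E_P3, {frozenset(("x", "y"))}, {frozenset((1, 2))}]

    print(product_adjacent(("a", "x", 1), ("b", "x", 1), edges))  # True
    print(product_adjacent(("a", "x", 1), ("b", "y", 2), edges))  # True: j = 0 already decides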
H: Geometric similarities between points in an algebraic variety If $f : \mathbb{R} \to \mathbb{R}$ is a univariate irreducible polynomial, Galois theory says that all roots are equivalent up to field automorphism (specifically, an automorphism of the field extension fixing the base field). Can anything similar be said for the multivariate case? Specifically, if $f : \mathbb{R}^d \to \mathbb{R}$ is an irreducible multivariate polynomial, is the local geometry of the zero set $V = f^{-1}(0)$ similar at different points in some sense? Clearly this isn't true at all points of $V$; the easiest counterexample is the lemniscate (http://mathworld.wolfram.com/Lemniscate.html), where $V$ is a manifold except at one point. However, $V$ does have some self-similarity away from this lower dimensional set: in particular, the curvature elsewhere is nonzero. As another example, I would expect that $V$ is locally a ruled surface either almost everywhere or almost nowhere. Is there a general statement that would encompass these and related similarities between different points of $V$? A hint at a possible answer: I believe the statement is true for any geometric property that can be expressed algebraically in terms of the differential structure near $x \in V$, since otherwise adding these equations would separate $V$ into multiple components, contradicting the irreducibility of $V$. AI: In general, it is not true that points on an algebraic variety "look the same". If $V$ is a smooth variety, and $p, q \in V$, it is not true that there are necessarily neighborhoods $U,V$ of $p,q$ which are isomorphic by an isomorphism taking $p$ to $q$ (a property which is obvious in differential geometry). If $V$ is smooth and projective, such an isomorphism would actually extend to an automorphism of $V$ taking $p$ to $q$. So in fact, on a sufficiently generic variety, no two points look the same! What I've said amounts to the following thing: for different points, the local rings $\mathcal O_p$ are generally different. However, the process of completion allows us to make sense of the intuition from differential geometry. The local ring $\mathcal O_p$ is a local ring with maximal ideal $\mathfrak m_p$. Its dimension is equal to the dimension of $V$. We can take the formal completion $\hat{\mathcal O_p}$ along the maximal ideal $\mathfrak m_p$; this has essentially the effect of replacing rational functions with power series. Taking the Taylor expansion of a rational function allows us to embed the ring $\mathcal O_p$ into $\hat{\mathcal O_p}$. If $V$ is smooth at $p$, the local ring $\mathcal O_p$ is a regular local ring (this is what it means to be smooth); and we have the following theorem characterizing the completion of a regular local ring of dimension $n$ over a field: Theorem: If $(A, \mathfrak m)$ is a regular local $k$-algebra of dimension $n$, then $\hat A \cong k[[x_1, \dots, x_n]]$, the ring of formal power series in $n$ indeterminates. Thus, all of the complete local rings $\hat{\mathcal O_p}$ are isomorphic, if $V$ is smooth. However, there is generally no preferred isomorphism from $\hat{\mathcal O_p}$ to $\hat{\mathcal O_q}$.
H: Finding the sum- $x+x^{2}+x^{4}+x^{8}+x^{16}\cdots$ If $S = x+x^{2}+x^{4}+x^{8}+x^{16}\cdots$ Find S. Note:This is not a GP series.The powers are in GP. My Attempts so far: 1)If $S(x)=x+x^{2}+x^{4}+x^{8}+x^{16}\cdots$ Then $$S(x)-S(x^{2})=x$$ 2)I tried finding $S^{2}$ and higher powers of S to find some kind of recursive relation. 3)When all failed I even tried differentiating and integrating S.Obviously that was of no good either. Could anyone give me a hint to solve this?Thanks! AI: I have worked on this series before. There is no simple closed form as a geometric series has. (such as $x+x^2+x^3+x^4+...=\frac{x}{1-x}$ where $|x|<1$). You can see more information in the link about Lacunary function. The series can be expressed in closed form of double integral. I shared my result below. $x+x^{2}+x^{4}+x^{8}+\dots=F(x)$ Let's transform $x=e^{-2^t} \tag{1}$ Where $-\infty<t<\infty$ Thus x will be in $ (0,1)$ and $F(x)$ is not divergent in this range. $e^{-2^t}+e^{-2^{t+1}}+e^{-2^{t+2}}+\dots=F(e^{-2^t})=H(t)$ $e^{-2^t}+H(t+1)=H(t)$ $H(t+1)-H(t)=-e^{-2^t}$ The Fourier transform of both sides $$\int_{-\infty}^{+\infty} H(t+1)e^{-2πift} \mathrm{d}t-\int_{-\infty}^{+\infty} H(t)e^{-2πift} \mathrm{d}t=-\int_{-\infty}^{+\infty} e^{-2^{t}}e^{-2πift} \mathrm{d}t$$ $$V(f)= \int_{-\infty}^{+\infty} H(t)e^{-2πift} \mathrm{d}t$$ $$\int_{-\infty}^{+\infty} H(t+1)e^{-2πift} \mathrm{d}t=\int_{-\infty}^{+\infty} H(z)e^{-2πif(z-1)} \mathrm{d}z=V(f)e^{2πif}$$ $$e^{2πif}V(f)-V(f)=-\int_{-\infty}^{+\infty} e^{-2^{t}}e^{-2πift} \mathrm{d}t$$ $$V(f)=\int_{-\infty}^{+\infty} \frac{e^{-2^{t}}e^{-2πift}}{1-e^{2πif}} \mathrm{d}t$$ Now we need to take the inverse Fourier transform $$H(z)=\int_{-\infty}^{+\infty} V(f) e^{2πifz} \mathrm{d}f=\int_{-\infty}^{+\infty} e^{2πifz}\int_{-\infty}^{+\infty} \frac{e^{-2^{t}}e^{-2πift}}{1-e^{2πif}} \mathrm{d}t\,\mathrm{d}f $$ The closed form of $H(z)$ in integral expression: $$H(z)=\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} e^{2πifz} \frac{e^{-2^{t}}e^{-2πift}}{1-e^{2πif}} \mathrm{d}t\,\mathrm{d}f $$ $$\sum_{k=0}^\infty x^{2^k}=H(\log_2(-\ln x))=\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} e^{2πif\log_2(-\ln x)} \frac{e^{-2^{t}-2πift}}{1-e^{2πif}} \mathrm{d}t\,\mathrm{d}f$$ Where $0<x<1$ $$\sum_{k=0}^\infty x^{2^k}= \int_{-\infty}^{+\infty} \frac{e^{2πif\log_2(-\ln x)}}{1-e^{2πif}} \int_{-\infty}^{+\infty} e^{-2^{t}-2πift} \mathrm{d}t\,\mathrm{d}f=\int_{-\infty}^{+\infty} e^{-2^{t}} \int_{-\infty}^{+\infty} \frac{e^{2πif(\log_2(-\ln x)-t)}}{1-e^{2πif}} \mathrm{d}f\,\mathrm{d}t$$ Note: I made the update on 07/29/2016 . Variable change was $x=e^{2^t}$ at Tag (1) and it had problems as @leonbloy 's comment below. Thanks for the comment. Please let me know if you notice something else in definitions.
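Although no elementary closed form exists, the series is trivial to evaluate numerically for $0<x<1$, and the relation $S(x)-S(x^2)=x$ from the question is easy to confirm (a short sketch):

    def S(x, terms=30):
        # partial sum; the exponents 2^k grow so fast that 30 terms
        # exceed what double precision can distinguish
        return sum(x ** (2 ** k) for k in range(terms))

    x = 0.5
    print(S(x))                      # 0.8164215090...
    print(abs(S(x) - S(x * x) - x))  # ~0: verifies S(x) - S(x^2) = x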
H: Get a result and calculate back to its divisor In the calculation 28 / 7 = 4 the result is 4 and the divisor is 7. From the result I want to calculate back to the divisor. In other words, all I have to do in this case is 4 + 3 and I get back to the divisor 7. But that does not work when I do 28 / 2 = 14. In this case I have to do 14 - 12 = 2 to get back to the divisor. Is there a formula or equation that I can apply that calculates the result back to the divisor every time? AI: Yes, if you know both the original product and the resulting quotient. If all you're allowed to use is your final result, then no. Given a calculation of the form $P/D = Q$ (where $P$ is the original product, $D$ is the divisor, and $Q$ is the resulting quotient), you can "calculate the result back to the divisor" by using the formula $D=P/Q$. Thus, in your first example, we have $P = 28$ and $Q=4$, so the divisor was $D=28/4=7$. For your second example, we have $P = 28$ and $Q=14$, so the divisor was $D=28/14=2$.
H: Slight generalisation of the Baire category theorem? By the Baire category theorem, one cannot write a complete metric space $X$ as a countable union of closed nowhere dense subsets of $X$. Can this be generalised to say that there is no injection $f: X \hookrightarrow X$, $f(X) \subseteq \cup_{n \ge 1} X_n$, where the $X_n$ are closed and nowhere dense? It seems like not much of a jump, and that it should be true, but I can't quite get it out. Thanks for any insight. AI: Since you don’t require continuity, there is an injection of $\Bbb R$ into the middle-thirds Cantor set $C$, which is closed and nowhere dense, because $|\Bbb R|=|C|=2^\omega=\mathfrak c$. If you insist on a union of countably infinitely many closed, nowhere dense sets, just inject each $[n,n+1)$ into a Cantor set in $\left[n,n+\frac12\right]$.
H: About multivariate Fourier series If $f(x,y)$ is $2\pi$ periodic with respect to $x$ and $2\pi$ periodic with respect to $y$ respectively, then can I write $$ f(x,y) = \sum_{j,k \in \mathbb Z} c_{jk} e^{ijx} e^{iky}$$ where $$ c_{jk} = \frac{1}{4\pi^2}\int_0^{2 \pi} \int_0^{2 \pi} f(x,y) e^{-ijx} e^{-iky} dxdy \;?$$ I am wondering what the sufficient condition is to write it this way. Is the condition $f(x + 2\pi, \cdot) = f(x, \cdot)$ and $f(\cdot, y + 2 \pi) = f(\cdot, y)$ sufficient? Or do I need $f(x + 2 \pi, y+2\pi) = f(x,y)$ ? AI: The conditions that $f$ is $2\pi$ periodic in the $x$ and $y$ directions separately together imply that $f(x + 2\pi, y + 2\pi) = f(x,y)$; the converse fails, so it is the two separate periodicity conditions that are needed, not just the joint one. However, there is also another condition that is required for the above to be true, which is that $f$ must be square-integrable; that is, $ \int_0^{2\pi} \int_0^{2\pi} |f(x,y)|^2 \, dx \, dy$ must be defined and finite, in which case the series converges to $f$ in the $L^2$ sense.
H: Why does $(k-2)!-k \left\lfloor \frac{k!}{(k-1) k^2}\right\rfloor = 1,\;k\ge2\;\implies\;\text{isPrime}(k)$ Let $k$ be an integer such that $k\ge2$ Why does $$(k-2)!-k \left\lfloor \frac{k!}{(k-1) k^2}\right\rfloor = 1$$ only when $k$ is prime? Example: $$\pi(n) = \sum _{k=4}^n \left((k-2)!-k \left\lfloor \frac{k!}{(k-1) k^2}\right\rfloor \right),\;n\ge4$$ where $k=4,$ since: $$\pi(4)\quad=\quad(4-2)!-4 \left\lfloor \frac{4!}{(4-1) 4^2}\right\rfloor = 2$$ I've tried to evaluate it in different forms, and I am probably just overlooking something obvious; so if anyone has any information in regard to this, please share. AI: If $k$ is composite and $k>4$, then the expression is $0$ modulo $k$, since then $(k-2)! \equiv 0 \pmod{k}$. (The case $k=4$ gives the value $2$, which is why the sum in the example starts at $k=4$: that single term accounts for the primes $2$ and $3$.) Conversely if $k$ is a prime, then by Wilson's theorem $(k-2)! \equiv 1 \pmod{k}$. Edit: (To complete the argument.) If $k$ is a prime, then $(k-2)! = 1 + l k$ for some $l \ge 0$. Equivalently $l = \frac{(k-2)! - 1}{k}$ and we must show that $l = \left\lfloor \frac{k!}{(k-1)k^2} \right\rfloor$. But this is true since $$\left\lfloor \frac{k!}{(k-1)k^2} \right\rfloor = \left\lfloor \frac{(k-2)!}{k} \right\rfloor$$ and the greatest number below $(k-2)!$ that is divisible by $k$ is $(k-2)! - 1$.
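A brute-force check of the identity (a sketch; the helper names are mine):

    from math import factorial

    def term(k):
        return factorial(k - 2) - k * (factorial(k) // ((k - 1) * k * k))

    def is_prime(k):
        return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

    for n in range(4, 40):
        wilson_pi = sum(term(k) for k in range(4, n + 1))
        true_pi = sum(1 for k in range(2, n + 1) if is_prime(k))
        assert wilson_pi == true_pi, n
    print("identity verified for n = 4..39")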
H: Math equation a little hard for me I have this equation to solve, and I'm not very good at equations: $( A\times \cos(B) ) \times C = D$ I want this form $( A \times ? ) \times \cos(B \times ?) = D$ I want to do something like merging C into A and B while still getting the result D AI: If we have the equation: $(A\cdot \cos{B}) \cdot C = D$ Then by commutativity/associativity we can rewrite it as: $(A \cdot C) \cdot \cos{B} = D$ We can NOT "distribute" the $C$ in the way that you have suggested. That is, in general: $(X \cdot Y) \cdot Z \ne (X \cdot Z) \cdot (Y \cdot Z)$ $(\cos{X}) \cdot Y \ne \cos(X \cdot Y)$
H: Find an integrable $g(x,y) \ge |e^{-xy}\sin x|$ I want to use Fubini theorem on $$\int_0^{A} \int_0^{\infty} e^{-xy}\sin x dy dx=\int_0^{\infty} \int_0^{A}e^{-xy}\sin x dx dy$$ Must verify that $\int_M |f|d(\mu \times \nu) < \infty$. I'm using the Lebesgue theorem, so far I've come up with $g(x,y)=e^{-y}$ but am not sure whether it's correct. My argument is that if $x\in (0,1)$ then the $\sin x$ part is going to ensure that the inequality holds. AI: Try $g(x,y)=x\mathrm e^{-xy}$, then $|\mathrm e^{-xy}\sin x|\leqslant g(x,y)$ for every nonnegative $x$ and $y$. Furthermore, $\int\limits_0^\infty g(x,y)\mathrm dy=1$ for every $x\gt0$ hence $\int\limits_0^A\int\limits_0^\infty g(x,y)\mathrm dy\mathrm dx=A$, which is finite.
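A one-line symbolic confirmation of the key computation (a sympy sketch, assuming $x>0$):

    import sympy as sp

    x = sp.symbols("x", positive=True)
    y = sp.symbols("y")
    print(sp.integrate(x * sp.exp(-x * y), (y, 0, sp.oo)))  # 1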
H: Extrema homework — maximizing the viewing angle of a picture on a wall I have hit a problem in my homework and don't know how to solve it. Here it is: "A picture with height of 1.4 meters hangs on the wall, so that the bottom edge of the picture is 1.8 meters from the viewer's eye. How far does the viewer have to stand in order to have the best viewing angle?" I understand that the viewing angle has to be maximum, but I have no idea how to begin solving it. Hints and tips would really help! Thank you in advance! AI: Sketch the problem: let $x$ and $y$ be the distances from the viewer's eye to the top and bottom edges of the picture, which lie $3.2$ and $1.8$ meters above eye level respectively. Write down the cosine rule: $${ x }^{ 2 }+{ y }^{ 2 }-2xy\cos { \alpha } ={ 1.4 }^{ 2 }\\\alpha =\arccos { \left( \frac { { x }^{ 2 }+{ y }^{ 2 }-{ 1.4 }^{ 2 } }{ 2xy } \right) } $$ Now find $y$ in terms of $x$ using the Pythagorean theorem: $${ x }^{ 2 }-{ 3.2 }^{ 2 }={ y }^{ 2 }-{ 1.8 }^{ 2 }$$ Maximize $\alpha$ and find $x$. The distance is: $$d=\sqrt { { x }^{ 2 }-{ 3.2 }^{ 2 } } $$ Also see: Regiomontanus' angle maximization problem
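A numerical cross-check (a sketch) of the classical Regiomontanus answer $d=\sqrt{1.8\cdot 3.2}=2.4$ meters, maximizing $\alpha(d)=\arctan(3.2/d)-\arctan(1.8/d)$ directly:

    import math

    def alpha(d):
        # viewing angle at distance d from the wall
        return math.atan(3.2 / d) - math.atan(1.8 / d)

    best_d = max((k / 1000 for k in range(1, 10001)), key=alpha)
    print(best_d, math.sqrt(1.8 * 3.2))  # both 2.4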
H: If $a,b,c \in R$ are distinct, then $-a^3-b^3-c^3+3abc \neq 0$. If $a,b,c \in R$ are distinct, then $-a^3-b^3-c^3+3abc \neq 0$. I think it is trivial because they are distinct. So I wonder whether just saying "Since they are distinct" is enough to prove it? Of course there could be several more detailed versions but I just want to know whether that reasoning is true or not. AI: HINT: $$a^3+b^3+c^3-3abc$$ $$=(a+b)^3-3ab(a+b)+c^3-3abc=(a+b)^3+c^3-3ab(a+b+c)$$ $$=(a+b+c)\{(a+b)^2-(a+b)c+c^2\}-3ab(a+b+c)$$ $$=(a+b+c)\{(a+b)^2-(a+b)c+c^2-3ab\}$$ $$=(a+b+c)(a^2+b^2+c^2-ab-bc-ca)$$ $$=(a+b+c)\frac{\{(a-b)^2+(b-c)^2+(c-a)^2\}}2,$$ which is positive, zero, or negative according as $a+b+c$ is positive, zero, or negative, since for real distinct $a,b,c$ each of $(a-b)^2,(b-c)^2,(c-a)^2>0$. In particular, the expression vanishes exactly when $a+b+c=0$, so distinctness alone is not enough: for example $a=1,b=0,c=-1$ are distinct, yet $-a^3-b^3-c^3+3abc=0$.
H: Axiom of choice and function with empty codomain I'm having a little problem here, namely if the axiom of choice (Wikipedia) is $$\forall X \left[ \emptyset \notin X \implies \exists f: X \to \bigcup X \quad \forall A \in X \, ( f(A) \in A ) \right]$$ and I choose the nonempty $X=\{\emptyset\}\neq \emptyset$ for which only $\emptyset\in X$ and hence $A=\emptyset$, then I get $$\exists f:\{\emptyset\}\to\emptyset\ \ \text{such that}\ \ f(\emptyset) \in \emptyset.$$ The last point seems wrong in itself though. Of course, there is no actual value $f(\emptyset)$, so $f(\emptyset) \in \emptyset$ doesn't say much. It might be just taken to be a legal statement. I haven't seen that written down anywhere though. AI: Note that $X=\{\varnothing\}$ satisfies the statement vacuously, so it's fine. You can change the statement to the following: $$\forall X\exists f\left[f\colon X\to\bigcup X\cup\{\varnothing\}\land\forall A\in X(f(A)\in A\lor A=\varnothing)\right]$$ It allows $\varnothing\in X$, and therefore $X=\{\varnothing\}$. But it's harder to understand it this way, so it's easier to just require $\varnothing\notin X$.
H: Verifying that $\mathbb Q=\bigcup_{n\ge 1} H_n$ Let $G=(\mathbb Q,+)$ and $r_1=p_1/q_1, r_2=p_2/q_2\in G$. I want to prove that: $\langle r_1,r_2\rangle\subseteq \langle\frac{1}{q_1q_2}\rangle$ If $r_1,r_2,...,r_n\in G$ then $\langle r_1,r_2,...,r_n\rangle$ is cyclic. If $H_n=\langle \frac{1}{n!}\rangle$ then every $H_n$ is cyclic, $$H_1\subset H_2\subset H_3\subset...$$ and $\mathbb Q=\bigcup_{n\ge 1} H_n$. I can see that $r_1\in\langle\frac{1}{q_1q_2}\rangle$ and the same is true for $r_2$; this proves $1$. I guess I should use the first part to see the second part clearly, so I think I should have $r_i\in \langle\frac{1}{q_1q_2q_3...q_n}\rangle$. Is my guess right, and does it prove the second part? For the third part it is obvious that every $H_n$ is cyclic. Assuming that the chain of inclusions is correct and that $\bigcup_{n\ge 1} H_n\subseteq\mathbb Q$, just give me a small final hint. Please check my approach; I would also be thankful for any other points of view. Thanks AI: Your intuition seems correct, but your arguments need to be more rigorous. The crucial step in proving all of these statements is knowing that $a \in \langle b \rangle$ means there is $n \in \Bbb Z$ for which $a = nb$. This follows from the definition of cyclic groups in the additive notation. Another important fact is that $a \in \langle b \rangle$ implies $\langle a \rangle \subset \langle b \rangle$. Try to prove this if you aren't already familiar with it. To get to specifics, 1 follows from the statements above. For 2, it's not sufficient to prove $r_i \in \left\langle \frac{1}{q_1 \cdots q_n} \right\rangle$. This would imply $\langle r_1, \ldots, r_n \rangle \subset \left\langle \frac{1}{q_1 \cdots q_n} \right\rangle$. You need to show $\langle r_1 \cdots r_n \rangle = \langle s \rangle$ for some $s \in \Bbb Q$. For 3 and 4, use my initial statements and make sure to show containment in both directions in 4!
H: Square Matrices Problem Let A, B, C, D & E be five real square matrices of the same order such that ABCDE=I, where I is the unit matrix. Then, (a)$B^{-1}A^{-1}=EDC$ (b)$BA$ is a nonsingular matrix (c)$ABC$ commutes with $DE$ (d)$ABCD=\frac{1}{det(E)}AdjE$ More than one option may be correct. Also, taking the special case A=B=C=D=E=I makes all these options true, but the answer key states (a) is incorrect; how? AI: First, a special case can only show you that an answer is wrong, not that it is true in general. Regarding the options: (a) We have $$ 1 = ABCDE \iff A^{-1} = BCDE \iff B^{-1}A^{-1} = CDE $$ so choosing $C$, $D$, $E$ such that $CDE \ne EDC$ will give you an example for (a) being wrong. (b) If $BA$ were singular, then $$ 1 = \det(ABCDE) = \det(AB)\det(CDE) = \det(BA)\det(CDE) = 0. $$ (c) We have $$ ABCDE = 1 \iff (ABC)^{-1} = DE $$ and every matrix commutes with its inverse. (d) We have $E^{-1} = ABCD$ and $E^{-1} = \frac 1{\det E} \mathrm{adj}\, E$.
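A random-matrix sanity check of option (a) (a numpy sketch; the size and seed are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))
    E = np.linalg.inv(A @ B @ C @ D)  # forces ABCDE = I

    lhs = np.linalg.inv(B) @ np.linalg.inv(A)
    print(np.allclose(lhs, C @ D @ E))  # True:  B^{-1} A^{-1} = CDE
    print(np.allclose(lhs, E @ D @ C))  # False: generically not equal to EDC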
H: How to prove this function is surjective I'm trying to solve this question: In order to solve the question above, I found this function: $r/w\mapsto (r/s)/(w/s)$ such that $w/s\in T$. I have almost proved this map is an isomorphism; I'm stuck just on the surjectivity part. If we get an element $(r/s)/(w/s)$ of $T^{-1}(S^{-1}R)$, fine. However, an element of $T^{-1}(S^{-1}R)$ can be for example $(r/s_1)/(w/s_2)$ with $s_1\neq s_2$. I need help with this part. Thanks in advance. AI: Hint: Note that in $S^{-1}R$ we have $(ws_1)/(s_1s_2) = w/s_2 \in T$, so $ws_1 \in S_*$. Now consider $rs_2/ws_1 \in S_*^{-1}R$.
H: MCQ on a function $f(x)=\left( \ln \left( \frac{\left( 7x-x^{2} \right)}{12} \right) \right)^{\frac{3}{2}}$ Choose correct options , more than one may be correct . (a) $f$ is defined on $R^+$ and is strictly increasing. (b)$f$ is defined on an interval of finite length and is strictly increasing (c)range of function include 1 (d)$f$ is defined on an interval of finite of length and is bounded . Now answer says (c) & (d) I am getting (b) & (d) and that too when I don't take the natural domain of the function i.e. on all real values it could possibly be defined . Here is the graph of the function http://www.wolframalpha.com/input/?i=y%3D%5Cleft%28+%5Cln+%5Cleft%28+%5Cfrac%7B%5Cleft%28+7x-x%5E%7B2%7D+%5Cright%29%7D%7B12%7D+%5Cright%29+%5Cright%29%5E%7B%5Cfrac%7B3%7D%7B2%7D%7D&dataset= How can the function be bounded when it approaches $-\infty$ from both sides . AI: Worry not about Alpha. We are ultimately going to take a square root, so $$\ln\left(\frac{7x-x^2}{12}\right)$$ has to be non-negative, that is, we must have $\frac{7x-x^2}{12}\ge 1$. Equivalently, we want $x^2-7x+12\le 0$, which happens only for $3\le x\le 4$. Note that $f(x)$ is $0$ at $3$ and $4$, so cannot be strictly increasing.
H: What does proving the Riemann Hypothesis accomplish? I've recently been reading about the Millennium Prize problems, specifically the Riemann Hypothesis. I'm not near qualified to even fully grasp the problem, but seeing the hypothesis and the other problems I wonder: what practical use will a solution have? Many researchers have spent a lot of time on it, trying to prove it, but why is it important to solve the problem? I've tried relating the situation to problems in my field. For instance, solving the $P \ vs. NP$ problem has important implications if $P = NP$ is shown, and important implications if $P \neq NP$ is shown. For instance, there would be implications regarding the robustness or security of cryptographic protocols and algorithms. However, it's hard to say WHY the Riemann Hypothesis is important. Given that the Poincaré Conjecture has been resolved, perhaps a hint about what to expect if and when the Riemann Hypothesis is resolved could be obtained by seeing what a proof of the Poincaré Conjecture has led to. AI: The Millennium problems are not necessarily problems whose solution will lead to curing cancer. These are problems in mathematics and were chosen for their importance in mathematics rather for their potential in applications. There are plenty of important open problems in mathematics, and the Clay Institute had to narrow it down to seven. Whatever the reasons may be, it is clear such a short list is incomplete and does not claim to be a comprehensive list of the most important problems to solve. However, each of the problems solved is extremely central, important, interesting, and hard. Some of these problems have direct consequences, for instance the Riemann hypothesis. There are many (many many) theorems in number theory that go like "if the Riemann hypothesis is true, then blah blah", so knowing it is true will immediately validate the consequences in these theorems as true. In contrast, a solution to some of the other Millennium problems is (highly likely) not going to lead to anything dramatic. For instance, the $P$ vs. $NP$ problem. I personally doubt it is probable that $P=NP$. The reason it's an important question is not because we don't (philosophically) already know the answer, but rather that we don't have a bloody clue how to prove it. It means that there are fundamental issues in computability (which is a hell of an important subject these days) that we just don't understand. Solving $P \ne NP$ will be important not for the result but for the techniques that will be used. (Of course, in the unlikely event that $P=NP$, enormous consequences will follow. But that is about as likely as it is that the Hitchhiker's Guide to the Galaxy is based on true events.) The Poincaré conjecture is an extremely basic problem about three-dimensional space. I think three-dimensional space is very important, so if we can't answer a very fundamental question about it, then we don't understand it well. I'm not an expert on Perelman's solution, nor the field to which it belongs, so I can't tell what consequences his techniques have for better understanding three-dimensional space, but I'm sure there are.
H: Product of two functions converging in $L^1(X,\mu)$ Let $f_n\to f$ in $L^1(X,\mu)$, $\mu(X)<\infty$, and let $\{g_n\}$ be a sequence of measurable functions such that $|g_n|\le M<\infty\ \forall n$ with some constant $M$, and $g_n\to g$ almost everywhere. Prove that $g_nf_n\to gf$ in $L^1(X,\mu)$. This is a question from one of my past papers, but unfortunately there is no solution provided. Here is as far as I have gotten: $$\int|gf-g_nf_n|=\int|(f-f_n)g_n+(g-g_n)f|\le\int|f-f_n||g_n|+\int|g-g_n||f|$$ $$\le M\int|f-f_n|+\int|g-g_n||f|$$ We know that $f_n\to f$ in $L^1$, so $\int|f-f_n|\to 0$, and by Lebesgue's bounded convergence theorem it follows that $\int|g-g_n|\to 0$. But I am unsure whether this also implies $\int|g-g_n||f|\to0$. AI: Hint: Observe that $2M|f|$ is an integrable bound for $|g_n - g|\cdot |f|$ and the latter converges a.e. to $0$. Now apply the dominated convergence theorem (the dominating function $2M|f|$ is integrable but need not be bounded, so it is dominated rather than bounded convergence that finishes the argument).
H: Finding the singularity type at $z=0$ of $\frac{1}{\cos(\frac{1}{z})}$ I have the following homework problem: What kind of singular point does the function $\frac{1}{\cos(\frac{1}{z})}$ have at $z=0$ ? What I tried: We note (visually) that $z_{0}$ is the same type of singularity for both $f,f^{2}$ hence the type of singularity of $f(z)=\frac{1}{\cos(\frac{1}{z})}$ have at $z=0$ is the same type of singularity $f^{2}(z)=\frac{1}{\cos^{2}(\frac{1}{z})}$. We recall $$ \frac{1}{\cos^{2}(z)}=1+\tan^{2}(z) $$ We also note that for any constant $z_{0}\in\mathbb{C}$ the singularity of $f,f+z_{0}$ are the same, this can be proved by noting the Laurent expansions of both functions are the same, up to an additive constant. It remains to determine the singularity type at the origin of $$ \tan(\frac{1}{z}) $$ This is where I'm stuck, we didn't study what the Taylor series of $\tan(z)$. I also know that type of singularity $$\cos(\frac{1}{z})$$ have, but I don't know how to connect this with the singularity type of $$\frac{1}{\cos(\frac{1}{z})}$$ Can someone please hint me in the right direction ? AI: The singularity is not an isolated singularity, as $\cos(1/z)$ has a sequence of zeroes approaching $0$ (namely $z_n=1/(n\pi+\pi/2)$). In particular, $0$ cannot be an essential singularity of the function.
H: How to show that $\sqrt{1+\sqrt{2+\sqrt{3+\cdots\sqrt{2006}}}}<2$ $\sqrt{1+\sqrt{2+\sqrt{3+\cdots\sqrt{2006}}}}<2$. I struggled on it, but I didn't find any pattern to solve it. AI: $$\begin{aligned}\sqrt{1+\sqrt{2+\sqrt{3+\cdots \sqrt{n}}}}&<\sqrt{1+\sqrt{2+\sqrt{2^2+\cdots \sqrt{2^{2^{n-1}}}}}}\\&<\sqrt{1+\sqrt{2+\sqrt{2^2+\cdots }}}\\&=\sqrt{1+\sqrt{2}\cdot\sqrt{1+\sqrt{1+\cdots }}}\\&=\sqrt{1+\sqrt{2}\phi}\\&<2\end{aligned}$$ We can get a tighter bound for the limit by breaking the pattern a little further down the line: $$\begin{aligned}\sqrt{1+\sqrt{2+\sqrt{3+\cdots \sqrt{n}}}}&<\sqrt{1+\sqrt{2+\sqrt{3+\sqrt{2^2+\cdots}}}}\\&=\sqrt{1+\sqrt{2+\sqrt{3+2\sqrt{1+\cdots }}}}\\&=\sqrt{1+\sqrt{2+\sqrt{4+\sqrt{5}}}}\approx 1.7665398\end{aligned}$$ $$\sqrt{1+\sqrt{2+\sqrt{3+\cdots }}}\approx 1.7579327$$
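Evaluating the radical from the inside out (a short sketch) confirms both the bound and the tighter estimates in the answer:

    import math

    value = 0.0
    for k in range(2006, 0, -1):  # innermost term sqrt(2006) first
        value = math.sqrt(k + value)
    print(value)  # 1.7579327566... < 2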
H: If $b = c \times a$ and $c = a \times b$, and length $b$ = length $c$, $a$ is a unit vector. If $\vec b = \vec c \times \hat a\,$ and $\,\vec c = \hat a \times \vec b\,$, and $|\vec b|$ = $|\vec c|$. Assuming $\vec b \ne 0$. I have managed to prove $\vec a$, $\vec b$ and $\vec c$ are orthogonal, but not much else. Help appreciated! Thanks. AI: We have \begin{align*} b \times c &= b \times (a \times b)\\ &= \bigl(b \cdot b\bigr)a - \bigl((c\times a) \cdot a\bigr)b\\ &= |b|^2 a \end{align*} and \begin{align*} b \cdot c &= (c \times a)\cdot c\\ &= 0, \end{align*} hence $|b \times c| = |b||c| = |b|^2$ and therefore \[ |a| = \frac{|b \times c|}{|b|^2} = 1. \]
H: What are 'contexts' actually called? Consider the following argument by contradiction. \begin{array}{|l} \mbox{We wish to deduce A.} \\ {\begin{array}{|l} \mbox{Suppose not A.} \\ \hline \\ \mbox{Then B. Thus C. Therefore, contradiction. } \end{array} } \\ \mbox{Thus, A.} \end{array} So to actually perform the argument by contradiction, we had to open a new ‘context’ wherein additional assumptions hold. What are these ‘contexts’ actually called? And which deductive systems actually utilize them? AI: Natural deduction systems often use such a context for a few different rules. Say we want to prove Cpr ($p$ implies $r$) in some proof. Then we'll open such a new context, take $p$ as a supposition and show that $r$ follows. Then we infer Cpr outside of the context. Also, you can open a new context supposing $p$, show that a contradiction follows, and then infer Np (the logical negation of $p$) outside of the context. I've seen these new contexts called "subproofs".
H: Chebyshev-Gauss quadrature with $\tan(x)$ With Chebyshev-Gauss quadrature, evaluate $\int_0^{\pi/4}x\tan^2(x)\,dx$ for $n=3$. One first needs to change variables so that the limits of integration become $-1$ and $1$, reducing the integral to the form $$\int_{-1}^1{\dfrac{f(x)}{(1-x^2)^{1/2}}}\,dx$$ http://upload.wikimedia.org/math/3/7/3/3739c7537ace93cb3bc05e3957a44ff3.png Thanks a lot, this is really important for me. Greetings, Tanya AI: Hint: make the substitution $$4x - \frac{\pi}{2} = \arcsin(t).$$ You will need to simplify some horrific looking trig. functions in the integrand but this substitution should give you the correct limits of integration and a factor of $\sqrt{1 - t^2}$ in the denominator.
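Putting the hint together with the $n$-point rule $\int_{-1}^1 g(t)(1-t^2)^{-1/2}\,dt\approx\frac{\pi}{n}\sum_k g(t_k)$, $t_k=\cos\frac{(2k-1)\pi}{2n}$, gives the following sketch; the exact value used for comparison is $\int_0^{\pi/4}x\tan^2x\,dx=\frac{\pi}{4}+\ln\cos\frac{\pi}{4}-\frac{\pi^2}{32}\approx 0.1304$:

    import math

    def g(t):
        # after 4x - pi/2 = arcsin(t): x = (arcsin t + pi/2)/4 and
        # dx = dt / (4 sqrt(1 - t^2)), which supplies the Chebyshev weight
        x = (math.asin(t) + math.pi / 2) / 4
        return x * math.tan(x) ** 2 / 4

    def chebyshev_gauss(g, n):
        nodes = (math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1))
        return (math.pi / n) * sum(g(t) for t in nodes)

    exact = math.pi / 4 + math.log(math.cos(math.pi / 4)) - math.pi ** 2 / 32
    print(chebyshev_gauss(g, 3), exact)  # ~0.119 for n = 3; closer as n increases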
H: ZF: Regularity axiom or axiom schema? I have seen the axiom system ZF for set theory described including a single axiom of regularity (aka "foundation"), namely $$\forall x\neq\emptyset \, \exists y\in x \ y\cap x = \emptyset$$ and also including regularity as an infinite axiom schema, with an axiom for every formula $\varphi(x,x_1,..,x_n)$: $$\forall x_1,..,x_n \,\exists x \left(\varphi \rightarrow \exists x \, \left( \varphi \land \forall y\in x \ \neg \varphi\frac{y}{x}\right)\right)$$ The second version states that each non-empty class has an $\in$-minimal element, while the first one states that every non-empty set has an $\in$-minimal element. Is the second one stronger? Is it needed? AI: Let $\phi(x,x_1, \ldots, x_n)$ be a given formula. For given $x_1, \ldots, x_n$, suppose there is an $x$ such that $\phi(x, x_1, \ldots, x_n)$ holds. Let $X$ be the transitive closure of $\{x\}$ (which is a set) and \[ z = \{y \in X \mid \phi(y, x_1, \ldots, x_n) \} \] Then $z$ is non-empty, by regularity, $z$ has an $\in$-minimal element $x'$. Let $y \in x'$, then $y \in X$ (as $X$ is transitive) and $y \not\in z$ (as $x'$ is $\in$-minimal), so $\neg\phi(y,x_1,\ldots, x_n)$. That is, $x'$ is an $\in$-minimal element of the class $\phi$. So the schema follows from the other axioms of $\mathsf{ZF}$.
H: Relationship between Mean Value Theorem and the maximum norm I am seeking assistance with the following application of the Mean Value Theorem: Let $x \in \Omega$ and construct an associated neighbourhood $N_x = (a, a+ \sqrt{\epsilon})$, such that $x \in N_x$ and $N_x \subset \Omega$. Then, by the Mean Value Theorem, for some $y \in\overline{N}_x $, $$ \frac{u(a + \sqrt{\epsilon}) - u(a)}{\sqrt{\epsilon}} = u'(y)...........(1) $$ It follows that $$ |u'(y)| \le 2\epsilon^{-1/2}||u||_{N_x} \le C\epsilon^{-1/2}............(2)$$ My problem is that I do not quite follow how expression (2) is obtained from expression (1). AI: We have $\def\abs#1{\left|#1\right|}\def\norm#1{\left\|#1\right\|_{N_x}}$ \begin{align*} \abs{u'(y)} &= \abs{\frac{u(a+ \sqrt \epsilon) - u(a)}{\sqrt \epsilon}}\\ &\le \epsilon^{-1/2} \cdot \bigl(\abs{u(a + \sqrt \epsilon)} + \abs{u(a)}\bigr)\\ &\le \epsilon^{-1/2} \cdot \bigl( \norm u + \norm u\bigr)\\ &= 2 \epsilon^{-1/2} \norm u. \end{align*}
H: solving an equilibrium equation I have the following example: $ pa_1=a_0\\ pa_0+qa_1+pa_2=a_1\\qa_0+qa_2=a_2 $ where $p+q=1$. I can see how to get $a_1=(1/p)a_0$, but they then use the third equation to produce $a_2 = (q/p)a_0$. Then the values are substituted into the normalising equation $a_0 + a_1 + a_2 =1$, with final answer $a_0 =p/2,\ a_1=1/2,\ a_2=q/2$. Update: I can see how they get $a_2 = (q/p)a_0$: $$qa_0+qa_2=a_2 \;\Rightarrow\; qa_0=a_2 - qa_2 \;\Rightarrow\; qa_0=a_2(1-q) \;\Rightarrow\; a_2 = \frac{qa_0}{1-q},$$ and as $p+q=1$, $p=1-q$, hence $a_2=(q/p)a_0$. AI: $ p\ a_1=a_0\\ p\ a_0+q\ a_1+p\ a_2=a_1\\q\ a_0+q\ a_2=a_2 $ Now replacing $a_1$ from the first equation gives: $$p\left[ \left(1+\frac{q-1}{p^2}\right)a_0+a_2\right]=0\\ q\ a_0=\left(1-q\right)a_2$$ Using the fact that $p=1-q$, the third equation gives (which is the same as what you might have got from the second equation): $$a_2=\frac{q}{p}a_0$$ Now replacing $a_1$ and $a_2$ in the relation $a_0+a_1+a_2=1$, we get: $$a_0+a_1+a_2=a_0+\frac{1}{p}a_0+\frac{q}{p}a_0=\frac{a_0}{p}\left(p+1+q\right)=\frac{2}{p}a_0=1\\ \Rightarrow a_0=\frac{p}{2},a_1=\frac{1}{2},a_2=\frac{q}{2}$$
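The whole computation can be verified symbolically (a sympy sketch, substituting $q=1-p$):

    import sympy as sp

    p, a0, a1, a2 = sp.symbols("p a0 a1 a2")
    q = 1 - p
    sol = sp.solve(
        [p * a1 - a0,
         p * a0 + q * a1 + p * a2 - a1,
         a0 + a1 + a2 - 1],          # normalising equation
        [a0, a1, a2])
    print(sol)  # {a0: p/2, a1: 1/2, a2: 1/2 - p/2}, i.e. a2 = q/2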
H: Prove that if $p_1,\dots,p_k$ are distinct odd primes then 1 has $2^k$ square roots $\mod m$ where $m$ is the product of the primes. I think I am most of the way through this proof but I am stuck. Here was my approach: I looked at the square roots of $1$ mod $105$, and noticed that each one corresponded to one less than an integer multiple of a subset of the prime factors of $105$, i.e. we have for instance $71=24\cdot3-1$ or $29=2\cdot(3\cdot 5)-1$ and so on. I conjectured that given a subset of the prime factors of $m$, I could construct a unique square root of $1$ mod $m$, which would show that there are (at least) $2^k$ of them. So I did that and I think that it worked. Pick some set of the prime factors of $m$, say $p_{a_1},\dots,p_{a_j}$. Claim: we can find a corresponding $n$ such that $n\prod_{i=1}^jp_{a_i}-1$ is a square root of $1$ mod $m$. Proof: We are looking for a solution to $$\left(n\prod_{i=1}^jp_{a_i}-1\right)^2\equiv 1 \pmod m$$ which may be rewritten as $$\left(\prod_{i=1}^jp_{a_i}\right)^2n^2 - 2\prod_{i=1}^jp_{a_i}n\equiv 0 \pmod m$$ Now since $\prod_{i=1}^jp_{a_i}|m$ we may write this as $$\prod_{i=1}^jp_{a_i}n^2-2n \equiv 0\mod \frac{m}{\prod_{i=1}^jp_{a_i}}$$ or $$n\left(\prod_{i=1}^jp_{a_i}n-2\right) \equiv 0 \mod {\frac{m}{\prod_{i=1}^jp_{a_i}}}$$ This has solutions when $n\equiv0\mod \frac{m}{\prod_{i=1}^jp_{a_i}}$ or when $\prod_{i=1}^jp_{a_i}n-2\equiv 0 \mod \frac{m}{\prod_{i=1}^jp_{a_i}}$. In the first case we end up with the same root for any choice of $p_{a_i}$'s so that's sort of a degenerate one. But in the second case we have $\prod_{i=1}^jp_{a_i}n\equiv 2 \mod \frac{m}{\prod_{i=1}^jp_{a_i}}$, which has a unique and nonzero solution for $n$. Plugging that $n$ back into $n\prod_{i=1}^jp_{a_i}-1$ thus gives a square root of $1$ mod $m$ which corresponds uniquely to your choice of prime factors, thus showing that there are (at least) $2^k-1$ square roots of 1. Then we add 1 in which corresponds to the empty set of prime factors and we get the full $2^k$. My problem is ruling out the possibility of more square roots of 1, i.e. if $x^2\equiv1\mod m$ I need to show that it corresponds uniquely to something of the form $n\prod_{i=1}^jp_{a_i}-1$. This is proving to be difficult if not impossible. I put a lot of work into going the one way and I really don't want it all to be for nothing. Can anyone see a way to go the other direction? AI: For each odd prime $p$, the congruence $x^2\equiv 1 \pmod{p}$ has exactly two solutions, namely $x\equiv\pm1$. Now apply the Chinese remainder theorem: a residue modulo $m=p_1\cdots p_k$ is a square root of $1$ if and only if it reduces to $\pm1$ modulo each $p_i$, and each of the $2^k$ choices of signs corresponds to exactly one residue modulo $m$. This gives the count $2^k$ directly and, since every square root must arise this way, it also rules out any additional ones.
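A brute-force illustration for $m=3\cdot 5\cdot 7=105$ (a sketch), confirming that there are exactly $2^3=8$ square roots of $1$:

    m = 3 * 5 * 7
    roots = [x for x in range(m) if (x * x) % m == 1]
    print(len(roots), roots)  # 8 [1, 29, 34, 41, 64, 71, 76, 104]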
H: Essential singularities of $\frac 1{e^z-1}$ How do I show that $\frac 1{e^z-1}$ has essential singularities (instead of say, poles) at $z=2n\pi i(n\in \mathbb Z)$? I can't figure out how to show that the function does not go to infinity near $0$, or that it assumes every possible value near $0$. Exhibiting the laurent series around $0$ isn't general enough to show that essential singularities occur at all the stated points. AI: You don't! The function in question has simple poles. The easiest way to see this is to note that $\exp(2n\pi i)=1$, so by continuity $e^z-1$ tends to zero as $z \rightarrow 2n\pi i$ and thus the reciprocal tends to infinity. To show that the poles are simple, note that $e^z-1$ has a zero of degree $1$ at $2n\pi i$ so the reciprocal has a simple pole and so the residue is: $$\frac{1}{\frac{d}{dz}(e^z-1)}=\frac{1}{e^{2n\pi i}}=1$$
H: How to find the Laurent expansion for $1/\cos(z)$ How to find the Laurent series for $1/\cos(z)$ in terms of $(z-\frac{\pi}{2})$ for all $z$ such that $0<|z-\frac{\pi}{2}|<1$ AI: Note $\frac{1}{\cos z}=-\frac{1}{\sin (z-\frac{\pi}{2})}$. Let $t:=z-\frac{\pi}{2}$. Then $0<|t|<1$, $\sin t=t-\frac{t^3}{3!}+\frac{t^5}{5!}-\frac{t^7}{7!}+\dots$, so $$ \begin{align} \frac{1}{\sin t}&=\frac{1}{ t-\frac{t^3}{3!}+\frac{t^5}{5!}-\frac{t^7}{7!}+\dots } \\ &=\frac{1}{t}\frac{1}{1-\left(\frac{t^2}{3!}-\frac{t^4}{5!}+\frac{t^6}{7!}+\dots\right)}\\ &=\frac{1}{t}\left[1+\left(\frac{t^2}{3!}-\frac{t^4}{5!}+\frac{t^6}{7!}+\dots\right)+\left(\frac{t^2}{3!}-\frac{t^4}{5!}+\frac{t^6}{7!}+\dots\right)^2+\dots\right]\\ &=t^{-1}+\frac{1}{3!}t+\left[\left(\frac{1}{3!}\right)^2-\frac{1}{5!}\right]t^3+\dots \end{align} $$ The higher coefficients do have a closed form in terms of Bernoulli numbers: $$\frac{1}{\sin t}=\frac1t+\sum_{n=1}^{\infty}\frac{(-1)^{n+1}\,2\,(2^{2n-1}-1)\,B_{2n}}{(2n)!}\,t^{2n-1},$$ which reproduces the coefficient $\frac{1}{3!}=\frac16$ of $t$ and $\left(\frac{1}{3!}\right)^2-\frac{1}{5!}=\frac{7}{360}$ of $t^3$.
H: Splitting of conjugacy class in alternating group Browsing the web I came across this: The conjugacy class of an element $g\in A_{n}$: splits if the cycle decomposition of $g\in A_{n}$ comprises cycles of distinct odd length. Note that the fixed points are here treated as cycles of length $1$, so it cannot have more than one fixed point; and does not split if the cycle decomposition of $g$ contains an even cycle or contains two cycles of the same length. Anybody with a proof? AI: Note the following: (1) The conjugacy class in $S_n$ of an element $\sigma \in A_n$ splits, iff there is no element $\tau \in S_n\setminus A_n$ commuting with $\sigma$. For if there is one, for each $\tau' \in S_n\setminus A_n$ we have $$ \tau'\sigma{\tau'}^{-1} = \tau'(\sigma\tau)\tau^{-1}\tau'{}^{-1} = \tau'(\tau\sigma)\tau^{-1}\tau'{}^{-1} = (\tau'\tau)\sigma(\tau'\tau)^{-1} $$ and $\tau'\tau \in A_n$. On the other hand, if $\tau\sigma\tau^{-1}$ and $\sigma$ with $\tau \in S_n\setminus A_n$ are conjugate in $A_n$, then for some $\tau' \in A_n$, we have $\tau\sigma\tau^{-1} = \tau'\sigma\tau'^{-1}$, giving $$ \tau'{}^{-1}\tau \sigma = \sigma\tau'{}^{-1}\tau $$ and hence $\tau'{}^{-1}\tau \in S_n\setminus A_n$ commutes with $\sigma$. Now suppose, $\sigma$ has a cycle $c_i$ of even length. A cycle of even length is an element of $S_n \setminus A_n$, and as $\sigma$ commutes with its cycles, we are done by the above. If $\sigma$ has two cycles $(a_1\ldots a_\ell)$ and $(b_1 \ldots b_\ell)$ of the same odd length $\ell$, then $(a_1b_1) \ldots (a_\ell b_\ell)$ is a product of $\ell$ transpositions (hence odd, so an element of $S_n \setminus A_n$) commuting with $\sigma$. Now suppose $\sigma = c_1 \cdots c_s$ is a product of odd cycles $c_i$ of distinct length $d_i$. Let $\tau \in S_n$ be a permutation commuting with $\sigma$. Then $\tau$ must fix each of the $c_i$, that is, $\tau$ must be of the form $\tau = c_1^{a_1} \cdots c_s^{a_s}$ for some $a_i \in \mathbb Z$. But as the $c_i$ are even permutations (as cycles of odd length), we have $\tau \in A_n$. So no $\tau \in S_n \setminus A_n$ commutes with $\sigma$ and we are done.
H: Solve the following system of equations Solve the following system of equations: $\left\{\begin{matrix} x^3(1-x)+y^3(1-y)=12xy+18\\ \left | 3x-2y+10 \right |+\left | 2x-3y \right |=10 \end{matrix}\right.$ AI: Perhaps asking Mathematica (WolframAlpha gives the answer as well) to solve it: Solve[{x^3 (1 - x) + y^3 (1 - y) == 12 x y + 18, Abs[3 x - 2 y + 10] + Abs[2 x - 3 y] == 10}, {x, y}, Reals] immediately gives: $$\left\{\left\{x\to -\sqrt{3},y\to \sqrt{3}\right\}\right\}$$ P.S. I will probably (if I find a reason to) add an analytic answer later.
H: $\Delta_{0}$ formulas I am working through the Jech Set Theory book, and at the moment I am stuck at his definition of the $\Delta_{0}$ formulas: A formula of set theory is a $\Delta_{0}$ formula if: (i) it has no quantifiers, or (ii) it is $\varphi \wedge \psi$, $\varphi \vee \psi$, $\neg \varphi$, $\varphi\rightarrow\psi$ or $\varphi\leftrightarrow\psi$, with $\varphi$ and $\psi$ $\Delta_{0}$ formulas, or (iii) it is $(\exists x\in y)\varphi$ or $(\forall x\in y)\varphi$ where $\varphi$ is a $\Delta_{0}$ formula. I'm not really sure I understand what a $\Delta_{0}$ formula is. In class we wrote the same, only in the first point $\varphi$ is $\Delta_{0}$ if it is atomic: $x\in y$, $x=y$. Is a $\Delta_{0}$ formula hence only the combination of $x\in y$ or $x=y$ with (ii)? Afterwards we have in the notes: $x=\mathscr{P}(y)$ is not $\Delta_{0}$. Is that because $x=\mathscr{P}(y)\leftrightarrow (\forall u\in x)(u\subseteq y)\wedge (\forall u)(u\subseteq y\rightarrow u\in x)$ and the quantifier $\forall u$ is unbounded, so it is not of the form allowed by (iii)? Pointers to other papers about this definition would also be helpful! Thanks, Luca AI: The definitions are the same. This is because omitting quantifiers effectively reduces the language to propositional calculus. Namely, given atomic formulae, if we only apply (ii), we get exactly the quantifier-free formulae. So the (i) of Jech basically amounts to what one can achieve with the (i) and (ii) from your class, without using (iii). As such, it is not minimal, but it does avoid tedious applications of (ii), when the actual interest in $\Delta_0$ formulae lies in the use of the restricted quantifications. Regarding $x = \mathscr P(y)$, you are correct. More intuitively, since the definition of $\mathscr P(y)$ requires an unrestricted universal quantifier, it cannot be a $\Delta_0$ formula. Unfortunately, Jech is the only source I have actually studied on the subject, so I can't help you with other references. I hope that the above clarified the definition a bit for you, though.
H: Error in book's definition of open sets in terms of neighborhoods? The following is copied verbatim from a book (I. Protasov, Combinatorics of numbers, p. 14): Suppose that to each point $x$ of a set $X$ a collection $\mathcal{B}(x)$ of subsets of $X$, which are called neighborhoods of $x$, is assigned so that the following conditions are satisfied: (B1) $x\in U$ for every neighborhood $U \in \mathcal{B}(x)$;(B2) if $U \subseteq V, U \in \mathcal{B}(x)$, then $V\in \mathcal{B}(x)$;(B3) if $U_1, \dots, U_n \in \mathcal{B}(x)$, then $U_1 \cap \dots \cap U_n\in \mathcal{B}(x)$;(B4) if $U\in \mathcal{B}(x)$, then there is a neighborhood $V\in\mathcal{B}(x)$ such that $U\in \mathcal{B}(y)$ for every $y\in V$. A subset $A\subseteq X$ is defined to be open, if $A$ is a neighborhood of each of its points, i.e. $A\in\mathcal{B}(x)$ for every $x\in A$. Evidently, open sets satisfy the following properties: (O1) $X, \varnothing$ are open sets;(O2) if $U_1, \dots, U_n$ are open sets, then $U_1 \cap \dots \cap U_n$ is an open set;(O3) if $U_\alpha, \alpha\in J$, is a collection of open sets, then $\bigcup\{U_\alpha : \alpha \in J\}$ is an open set. I don't see why (O1) is true. More generally, I don't see what guarantees that $\mathcal{B}(x) \neq \varnothing$ for any $x \in X$. In fact, if we set $\mathcal{B}(x) = \varnothing$ for all $x \in X$, conditions (B1)-(B4) are all satisfied vacuously. In this case there could be no open sets, so (O1) could not hold. Am I missing something? If not, there must be some error in the book, and I'd like to know how to fix it such that what results agrees with the standard way of defining open sets in terms of neighborhoods. AI: I think I finally got it. All we need to do is replace B3 with (B3) each collection $\mathcal{B}(x)$ is closed with respect to finite (including empty) intersections; Then the empty intersection (namely $X$) will belong to every $\mathcal{B}(x)$. (It could be argued that the original (B3) already implies this.)
H: Two questions about implications between $\mathsf{DC}, \mathsf{BPI}$ and $\mathsf{AC}_\omega$ Does the implication $\mathsf{DC} \implies \mathsf{BPI}$ hold? And does the implication $\mathsf{BPI} \implies \mathsf{AC}_\omega$ hold? I checked with Howard/Rubin's "Consequences of the Axiom of Choice" in part V where they list relations between forms, in particular, I could not find any mention of either of the above implications on page 324 and 326 respectively. AI: No implication whatsoever. In Cohen's first model $\sf BPI$ holds, while both $\sf AC_\omega$ and $\sf DC$ fail. In Shelah's model where every set has the Baire property, $\sf BPI$ fails, and $\sf DC$ holds (and consequently $\sf AC_\omega$ holds). One can also use for this Solovay's model or models of $\sf AD+\it V=L(\Bbb R)$, if one is willing to take large cardinals for granted.
H: How to get ${n \choose 0}^2+{n \choose 1}^2+{n \choose 2}^2+\cdots+{n \choose n}^2 = {x \choose y}$ I found this in my test book, any hints? Given $${n \choose 0}^2+{n \choose 1}^2+{n \choose 2}^2+\cdots+{n \choose n}^2 = {x \choose y}$$ Then find the values of x and y in terms of n. According to the answer provided in the last pages of that book, it's $2n \choose n$. What I don't understand is how to get the answer (there's no explanation written there). Thank you so much, and I'm sorry for my bad English (English is not my native language) and my messy post. AI: Let $E=\{a_1,\ldots a_n,b_1,\ldots,b_n\}$ be a set with $2n$ elements. There are $2n\choose n$ subsets of $E$ with $n$ elements: each such subset consists of $k$ elements from $\{a_1,\ldots,a_n\}$ and $n-k$ elements from $\{b_1,\ldots,b_n\}$ for some $k=0,\ldots,n$, hence we have $${2n\choose n}=\sum_{k=0}^n{n\choose k}{n\choose n-k}=\sum_{k=0}^n{n\choose k}^2$$
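A quick mechanical check of the identity with Python's math.comb:

    from math import comb

    for n in range(12):
        assert sum(comb(n, k) ** 2 for k in range(n + 1)) == comb(2 * n, n)
    print("sum of C(n,k)^2 equals C(2n,n) for n = 0..11")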
H: sum of coefficients of polynomial? If $\sqrt{2+(\sqrt3 +\sqrt5)}$ is a root of a polynomial of eighth degree, then the sum of the absolute values of the coefficients of the polynomial is? I found this question on https://brilliant.org/assessment/s/algebra-and-number-theory/1974729/ I want to know: is there any simple way to solve it? AI: Call your number $x$. That means that we have the relation $x = \sqrt{2+(\sqrt3 +\sqrt5)}$ It follows by squaring both sides that $x^2 - 2 = \sqrt{3} + \sqrt{5}$. Further squaring and subtracting leftover integers yields $$ ((x^2 - 2)^2 - 8)^2 - 60 = 0 $$ and you have your polynomial. Now all you need to do is multiply out the parentheses, and you're more or less there. Note that to get a unique answer, we have to require the polynomial to be monic. I assume that this is the case.
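Expanding the polynomial symbolically (a sympy sketch) gives $x^8-8x^6+8x^4+32x^2-44$, whose absolute coefficients sum to $93$:

    import sympy as sp

    x = sp.symbols("x")
    poly = sp.expand(((x ** 2 - 2) ** 2 - 8) ** 2 - 60)
    print(poly)  # x**8 - 8*x**6 + 8*x**4 + 32*x**2 - 44
    print(sum(abs(c) for c in sp.Poly(poly, x).all_coeffs()))  # 93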
H: Binary search complexity In a sorted array of numbers binary search gives us complexity of O(logN). How will the complexity change if we split the array into 3 parts instead of 2 during the search? AI: Same. You will get a running time only differing by a constant factor ($\log_2 3=\frac{\ln 3}{\ln 2}$).
H: From geometric sequence to function I have this question: Find the function which equals the sum: $$ x + x^3 + x^5 + \cdots $$ Now, I can see in my result list that it's supposed to give $$ \frac{x}{1-x^2} $$ I can see why the numerator should be $x$, but I fail to see why the denominator should be $$ 1-x^2 $$ Can someone please explain this to me? David AI: Starting from the geometric series (valid for $|x|<1$) we have $$\frac{1}{1-x}=1+x+x^2+x^3+\cdots$$ Substituting $x^2$ for $x$ we get $$\frac{1}{1-x^2}=1+x^2+x^4+x^6+\cdots$$ Multiplying by $x$: $$\frac{x}{1-x^2}=x+x^3+x^5+x^7+\cdots$$ On the other hand, a function is uniquely determined by its Taylor expansion, so the only possible answer to this question is $\frac{x}{1-x^2}$.
H: Slope of tangent in $(x,y)$ on a circle $K=\{(x,y)\in\mathbb{R}^2|x^2+y^2=r^2\}$ with initial conditions I came across an exercise with a sample solution that I unfortunately don't fully understand given that it's shortened. Let $(x,y)$ be a point on a circle $K=\{(x,y)\in\mathbb{R}^2|x^2+y^2=r^2\}$ with $y\neq0$. Show that the slope of the tangent on $K$ in $(x,y)$ is $-\frac{x}{y}$ and find two solutions for the differential equation $y'=-\frac{x}{y}$ that adhere to the initial conditions $y(1)=1$ or $y(-1)=2$. The sample solution is as follows: $$x^2+y^2=r^2$$ $$2x+2y\frac{dy}{dx}=0$$ $$\Rightarrow\frac{dy}{dx}=-\frac{x}{y}$$ $$\int y\;dy=\int -x\;dx$$ $$y^2=c-x^2$$ $$c=2\;or\;c=5$$ $$\Rightarrow x^2+y^2=2\;or\;x^2+y^2=5$$ First of all I'm interested in line 2. From what I could find and understand, we come to this point through implicit differentiation of line 1. I assume the $r^2$ disappeared because it is treated as a constant but why exactly is that the case? Furthermore, why is the constant of integration omitted on one side of the equation and how exactly do we find the solutions $2$ and $5$ for it? Thank you very much in advance! AI: $r$ is indeed a constant. Think about it this way - the distance from the origin, which would be $x^2 + y^2$ remains constant. Geometrically, that's a circle. In another sense, if $r$ weren't a constant, the implication is that it depends on $x$. A circle's radius shouldn't depend on anything. The constant of integration is omitted on one side because the sample solution "skipped a step." Really, both integrals should end up with a constant $c$. However, since both constants are arbitrary, we can just lump them into one constant and put it on one side of the equation. You can solve for $c$ by plugging in the initial values for $x$ and $y$ into your solution equation.
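Both particular solutions can be checked symbolically (a sympy sketch):

    import sympy as sp

    x = sp.symbols("x")
    for c, x0, y0 in [(2, 1, 1), (5, -1, 2)]:
        y = sp.sqrt(c - x ** 2)
        assert sp.simplify(sp.diff(y, x) + x / y) == 0  # y' = -x/y
        assert y.subs(x, x0) == y0                      # initial condition
    print("both solutions verified")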
H: Geodesic and Euler - Lagrange equation If we have a metric $g_{\mu \nu}$, defined in a Riemannian manifold, we can write the equation of the geodesic: $$\frac{d^2x^\mu}{dt^2}+\Gamma^\mu_{\alpha\beta}\frac{dx^\alpha}{dt}\frac{dx^\beta}{dt}=0$$ in which $\Gamma^\mu_{\alpha\beta}$ are the Christoffel symbols. The geodesic in that metric can be obtained also using the Euler - Lagrange equations: $$\frac{d}{dt}\frac{\partial L}{\partial \dot q_k}-\frac{\partial L}{\partial q_k}=0$$ Where the $q_k$ are the generalized coordinates. How can I find the Lagrangian $L$ in order to find the same geodesic obtained using the affine connection symbols? AI: The action is the standard measurement of a distance. That is $$ S=-\kappa\int ds $$ with $\kappa>0$ a constant, which in your case is $$ S=-\kappa\int\sqrt{g_{\mu\nu}dx^\mu dx^\nu}. $$ Then you are supposed to parametrize the coordinates as $x_\mu=x_\mu(t)$ and the computation becomes a standard one. Notice that, for a Minkowski metric, this reduces to the action of a relativistic free particle.
H: Truncating a Time-Series Graph I'm looking for a way to truncate outliers in a time-series graph. Some context: I'm plotting two different metrics on a single graph. It's important to understand what portion "Metric A" takes up relative to "Metric B". "Metric B" can at times spike to a high value, thus increasing the Y-axis scale, which then minimizes the visibility of "Metric A". Are there any good formulas that help normalize this data, or visual treatments that can make the graphic honest as well as accessible? See screenshot for more context. The blue area is "Metric B". Thanks! AI: If Metric B spikes up too much (say to more than twice the maximum of Metric A), you can display it as a vertical blue column with a jagged break in the middle to indicate that it goes off the top of the graph. I have also seen it with a jagged break and an arrow on the top. You can also list the specific data point to show how tall the spike is.
H: Find point nearest to the origin Find the points on the curve $5x^2 - 6xy + 5y^2 = 4$ that are nearest the origin. The first method I've tried is I've taken the derivative of the equation to optimize (Pythagorean Theorem) and also the function of the curve using implicit differentiation and plugged stuff in (the way I've done it so far). I got $y = x$ which I tried to plug into the function of the curve but didn't match the answers at the back of the textbook. I stared at it and figured I made a sign error somewhere since $-y = x$ worked but couldn't find it. The second method I tried was solving for the function of the curve in terms of $y$ and tried both completing the square and using the quadratic formula to find $x$ which gave the same failed answer where $y$ subtracted to nonexistence when I plugged them into the function of the curve. I don't know what Lagrange multipliers are, which was the only solution on the web. I hope it could be done just with simple algebra (or calculus) since the textbook expects that (I think). AI: Hint: Using a change of coordinates $X=x+y$, $Y= x-y$, the problem becomes $$ 5\frac{X^2+Y^2}{2} -6\frac{X^2-Y^2}{4} = 4,$$ or that $$ X^2 + 4Y^2 = 4.$$ You should recognize this as an ellipse, with minor axis __ and major axis __. Hence, the answer is __. With regards to your differentiation solution: $x^2 + y^2$ is minimized when $2x + 2y \frac{dy}{dx} = 0$, or that $\frac{dy}{dx} = -\frac{x}{y}$. In the original equation, we have $ 10x - 6y - 6x \frac{dy}{dx} + 10y \frac{dy}{dx} = 0$, or that $\frac{dy}{dx} = \frac{6y-10x}{10y-6x} $. Hence, solving $-\frac{x}{y} = \frac{dy}{dx} = \frac{6y-10x}{10y-6x}$, we get $-x(10y-6x)=y(6y-10x)$, or that $-10xy +6x^2 = 6y^2 - 10xy$, so $6(x^2-y^2)=0$. This has solution set $x=y, x=-y$. You then need to check the second order condition, to see which (if any) gives you a global minimum.
H: Number of solutions I have a polynomial in integers $\psi (x)$ of degree $k$. Consider the number of solutions $$ \psi(z) \equiv u \pmod{p^r} $$ with $$ (\psi'(z),p)=1. $$ I was wondering how I can show that the number of solutions is $O(1)$? Thank you! AI: There are at most $p$ solutions to $\psi (z) \equiv u \pmod{p}$ (and at most $\min(k,p)$ if $\psi$ is nonconstant modulo $p$). Consider lifting solutions from $p^r$ to $p^{r+1}$. Hensel's lifting lemma tells us that if $z$ is a solution $\pmod{p^r}$ with $\psi'(z) \not \equiv 0 \pmod{p}$, then there exists a unique solution $z^*$ in the higher modulus power such that $z^* \equiv z \pmod{p^r}$. Hence each of the at most $p$ solutions modulo $p$ with $(\psi'(z), p) = 1$ lifts to exactly one solution modulo $p^r$, so the number of solutions with $(\psi'(z), p) = 1$ is at most $p$, which is $O(1)$ for fixed $p$.
H: is $\hat{\theta}$ unbiased Consider a random sample of size $n$ from a distribution with pdf $f(x;\theta)=\frac{1}{\theta}$ for $0<x\leq \theta$ and zero otherwise, where $\theta > 0$. Now the first question was to find the MLE $\hat{\theta}$, which I found to be $X_{n:n}$; now they want to find out if it is unbiased. My work so far: $$ \begin{align} E[\hat{\theta}] &=E[X_{n:n}] \\ &= E[n\frac{1}{\theta}[ln(\theta)]^{n-1}] \end{align} $$ This is probably where I went wrong. Isn't the cdf of $X_{n:n}$: $$ nf(x)[F(x)]^{n-1} $$ ? AI: The cdf of the maximum is given by $F(x)^n$. Thus, the pdf is $n f(x) F(x)^{n-1}$ (what you wrote down is the pdf, not the cdf). In your case, we have for $0 < x \le \theta$: $$f(x) = \frac{1}{\theta}$$ $$F(x) = \frac{x}{\theta}$$ Thus, the expected value of $\hat{\theta}$ is given by: $$E[\hat{\theta}]=\int_0^{\theta}x\cdot\frac{nx^{n-1}}{\theta^n}\,dx=\frac{n}{n+1}\,\theta,$$ which is not equal to $\theta$, so the MLE is biased. That should hopefully help.
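As an illustrative check, a quick Monte Carlo sketch (assuming the uniform model above) shows the sample maximum concentrating around $\frac{n}{n+1}\theta$ rather than $\theta$:
import random
theta, n, trials = 1.0, 5, 200_000
est = sum(max(random.uniform(0, theta) for _ in range(n)) for _ in range(trials)) / trials
print(est, n / (n + 1) * theta)   # both approximately 0.8333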
H: Why can a $2k$-regular graph be $k$-factored? From Wikipedia: If a connected graph is $2k$-regular it may be $k$-factored, by choosing each of the two factors to be an alternating subset of the edges of an Euler tour. I don't understand why those alternating subsets form $k$-factors. AI: In a connected $2k$-regular graph $G=(V,E)$, you can find a closed walk $\pi=v_0 e_1 v_1 e_2\dots e_n v_0$, taking each edge $e_i$ of the graph exactly once (an Eulerian circuit). Then if you look at the parts $E_1=\{e_1,e_3,\dots\}$ and $E_2=\{e_2,e_4,\dots\}$ of the resulting partition, they form a $k$-factorization of $G$. Indeed, each vertex $v$ is adjacent to exactly $k$ edges from $E_1$, and $k$ edges from $E_2$, because every time an edge from $E_1$ enters $v$ in $\pi$, then the next one is from $E_2$ and exits $v$, and vice versa. However this only works if $e_n\in E_2$, otherwise the above statement is not true for $v_0$. This means that we have to additionally require that $|E|$ is even. $K_3$ is a counterexample for the odd case.
H: Def. of Homology Cell I would like an explanation for the following definition. "A metric space is a homology cell if it is nonempty and homologically trivial (acyclic) in all dimensions." What does "homologically trivial (acyclic)" mean? What are intuitive examples of homology cells? AI: From your question, I'm guessing you don't know what homology is, so answering this will be challenging. The first homology $H_1(X)$ is easy to define. It is the abelianization of the fundamental group. So in particular the fundamental group of a homology cell must be perfect (equal to its own commutator subgroup.) The definitions of $H_2$ and higher homology groups require a fair bit of machinery. A simple example of a homology cell is any contractible space. (This is actually a homotopy cell, which is a stronger condition.) A less trivial example arises by taking the Poincare homology sphere (Google it!) and removing the interior of a small ball. This space has a nonzero fundamental group so it cannot be a homotopy ball, but all of its (reduced) homology groups vanish.
H: Problem on filters $\mathcal{F}$ is a filter on a set $I$, but not an ultrafilter. Prove that there exist $X, Y\notin\mathcal{F}$ such that for all $Z\in\mathcal{F}$, $X\cap Z\neq Y\cap Z$. Thank you for your time. AI: Do you know that a filter $\mathcal F$ on a set $I$ is an ultrafilter if and only if for any $X\subseteq I$ exactly one of the sets $X$ and $I\setminus X$ belongs to $\mathcal F$? Using this fact we get that if $\mathcal F$ is not an ultrafilter, then there exists a set $X$ such that $X\notin\mathcal F$ and $I\setminus X\notin\mathcal F$. What could be a good candidate for $Y$ in such a situation?
H: How to formulate continuum hypothesis without the axiom of choice? Please correct me if I'm wrong, but here is what I understand from the theory of cardinal numbers: 1) The definition of $\aleph_1$ makes sense even without choice, as $\aleph_1$ is an ordinal number (whose construction doesn't depend on the axiom of choice) with some minimal property. With or without choice, there is no cardinal number $\mathfrak{a}$ such that $\aleph_0 < \mathfrak{a} < \aleph_1$. 2) In ZFC, all cardinal numbers are $\aleph$s and are comparable (by the trichotomic property of the ordinals). The continuum hypothesis states that $\aleph_1 = 2^{\aleph_0}$. 3) In ZF, $2^{\aleph_0}$ need not be an $\aleph$. However, I don't know if talking about $2^{\aleph_0}$ in ZF makes sense or not. Does it make sense in ZF to define CH to be the statement that if a set is larger than the natural numbers, then it must contain a copy of the reals (up to a relabeling of the elements) no matter what the cardinality of the reals is? AI: You are correct that without the axiom of choice $2^{\aleph_0}\newcommand{\CH}{\mathsf{CH}}$ may not be an $\aleph$. Therefore the continuum hypothesis splits into two inequivalent statements: $(\CH_1)$ $\aleph_0<\mathfrak p\leq2^{\aleph_0}\rightarrow2^{\aleph_0}=\frak p$. $(\CH_2)$ $\aleph_1=2^{\aleph_0}$. Whereas the second variant implies that the continuum is well-ordered, the first one does not. You suggested a third variant: $(\CH_3)$ $\aleph_0<\mathfrak b\rightarrow 2^{\aleph_0}\leq\mathfrak b$. Let's see why $\CH_3\implies\CH_2\implies\CH_1$, and that neither implication is reversible. Note that if we assume $\CH_3$, then it has to be that $2^{\aleph_0}\leq\aleph_1$ and therefore must be equal to $\aleph_1$. If we assume that $\CH_2$ holds, then every cardinal less or equal to the continuum is finite or an $\aleph$, so $\CH_1$ holds as well. On the other hand, there are models of $\sf ZF+\lnot AC$ such that $\CH_1$ holds and $\CH_2$ fails. For example, Solovay's model in which all sets are Lebesgue measurable is such a model. But $\CH_2$ does not imply $\CH_3$ either, because it is consistent that $2^{\aleph_0}=\aleph_1$, and there is some infinite Dedekind-finite set $X$, that is to say $\aleph_0\nleq |X|$. Therefore we have that $\aleph_0<|X|+\aleph_0$. Assuming $\CH_3$ would mean that if $X$ is infinite, then either $\aleph_0=|X|$ or $2^{\aleph_0}\leq|X|$. This is certainly false for infinite Dedekind-finite sets (one can make things stronger, and use sets that have no subset of size $\aleph_1$, while being Dedekind-infinite). One can also think of the continuum hypothesis as a statement saying that the continuum is a certain kind of successor to $\aleph_0$. As luck would have it, there are $3$ types of successorship between cardinals in models of $\sf ZF$, and you can find the definitions in my answer here. It is easy to see that $\CH_1$ states "$2^{\aleph_0}$ is a $1$-successor or $3$-successor of $\aleph_0$", and $\CH_3$ states that "$2^{\aleph_0}$ is a $2$-successor of $\aleph_0$" (while not stated explicitly, this follows from the argument used to prove $\CH_3\implies\CH_2$). So where does $\CH_2$ fit in? It doesn't exactly fit. Where $\CH_1$ and $\CH_3$ are statements about all cardinals, $\CH_2$ is a statement only about the cardinality of the continuum and $\aleph_1$.
So in order to subsume it into the $i$-successor classification we need to add an assumption on the cardinals in the universe, for example that every cardinal is comparable with $\aleph_1$ (which is really the statement "$\aleph_1$ is a $2$-successor of $\aleph_0$"). All in all, the continuum hypothesis can be phrased and stated in many different ways, and not all of them are going to be equivalent in $\sf ZF$, or even in slightly stronger theories (e.g. $\sf ZF+AC_\omega$). Without the axiom of choice we can have two notions of ordering on the cardinals, $\leq$ which is defined by injections and $\leq^*$ which is defined by surjections, that is to say, $A\leq^* B$ if there is a surjection from $B$ onto $A$, or if $A$ is empty. These notions are clearly the same when assuming the axiom of choice but often become different without it (we do not know whether the equivalence of the two orders implies the axiom of choice, although the evidence suggests it should; all the models we know violate this). So we can formulate $\CH$ in a few other ways. An important fact is that $\aleph_1\leq^*2^{\aleph_0}$ in $\sf ZF$, so we may formulate $\CH_4$ as $\aleph_2\not\leq^*2^{\aleph_0}$. This formulation fails in some models while $\CH_1$ holds, e.g. in models of the axiom of determinacy, as mentioned by Andres Caicedo in the comments. On the other hand, it is quite easy to come up with models where $\CH_4$ holds, but all three formulations above fail. For example the first Cohen model has this property. All in all, there are many, many ways to formulate $\CH$ in $\sf ZF$, which can end up being inequivalent without some form of the axiom of choice. I believe that the correct way is $\CH_1$, as it captures the essence of Cantor's question. Interesting links: What's the difference between saying that there is no cardinal between $\aleph_0$ and $\aleph_1$ as opposed to saying that... Relationship between Continuum Hypothesis and Special Aleph Hypothesis under ZF
H: About Regulated Functions Definition. Let $X$ be a Banach space. A mapping $f:[a,b]\to X$ is called regulated if it has one sided limits. In the setting of a Hausdorff topological vector space $X$, can we still define regulated functions? AI: As the one sided limit of a function is a well-defined object in a Hausdorff topological vector space, you surely can do this, exactly as in the case of Banach spaces. (You do not even need the Hausdorff condition or the vector space structure, as "having a limit" does not require them.)
H: How to solve these equations with $x^2$, $xy$, and $y^2$? Is there any good method to solve equations like this? $$ \begin{cases} 2y-2xy-y^2=0\\2x-x^2-2xy=0\end{cases} $$ This is what I did: $$ \begin{cases} y(2-2x-y)=0\\x(2-x-2y)=0\end{cases} $$ now I see, that: $$ x=0 $$ $$ y=0 $$ and it's the first solution, now: $$ \begin{cases} 2-2x-y=0\\2-x-2y=0\end{cases} $$ and: $$ x=\frac{2}{3}, y=\frac{2}{3} $$ I have 2 solutions, but wolfram found more... Is there any good method for equations like this? AI: You are on the right track, but need to consider your cases carefully. Case 1. $y= 0$, $x=0 $. Done. Case 2. $y=0 $, $2-x-2y = 0$. Do this. Case 3. $2-2x-y = 0$, $x = 0$. Do this. Case 4. $2-2x-y = 0$, $2-x-2y = 0$. Done.
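Completing cases 2 and 3 gives the remaining solutions $(2,0)$ and $(0,2)$; a short sympy sketch (assuming sympy is available) recovers all four:
import sympy as sp
x, y = sp.symbols('x y')
print(sp.solve([2*y - 2*x*y - y**2, 2*x - x**2 - 2*x*y], [x, y]))
# [(0, 0), (0, 2), (2, 0), (2/3, 2/3)]  (order may vary)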
H: Rudin Theorem 2.41 - Heine-Borel Theorem When proving Theorem 2.41 in Principles of Mathematical Analysis: Let $E \subset \mathbb{R}^k$. If every infinite subset of $E$ has a limit point in $E$, then $E$ is closed and bounded. Rudin says, "If $E$ is not bounded then $E$ contains points $x_n$ with $$\vert x_n \vert > n \hspace{2cm} (n = 1,2,3,...).$$ The set $S$ consisting of these points $x_n$ is infinite and clearly has no limit point in $\mathbb{R}^k$, hence has none in $E$." My first reaction to this was that it is indeed obvious as $S$ is composed of discrete points so I could find a ball around any given point not containing a point of $S$. So, if $p \in \mathbb{R}^k$ is the point I'm constructing a ball around, taking the radius of the ball as $r = \min_{x_n \neq p \in S} \{ \vert x_n - p \vert \}$ should work and as $p$ was arbitrary $S$ doesn't have a limit point. However, I'm not sure if I can take the minimum over an infinite set? I honestly don't know why I shouldn't be able to, but nonetheless I decided to try another proof: Suppose that $S$ has a limit point $p \in \mathbb{R}^k$. Then for any $r > 0$ $B_r(p)$ has infinitely many points of $S$. So in particular, $B_{\frac{1}{2}}(p)$ contains infinitely many points of $S$. Therefore, there exists a point $y_m \in B_{\frac{1}{2}}(p)$ with $\vert y_m \vert > m > \vert p \vert + 1$ (as otherwise, there would be only finitely many points of $S$ in $B_{\frac{1}{2}}(p)$). Therefore, we have $$\vert y_m - p \vert \geq \big\vert \vert y_m \vert - \vert p \vert \big\vert > 1 $$ However, this is a contradiction as we also have $\vert y_m - p \vert < \frac{1}{2}$. Therefore, as $p$ was arbitrary, $S$ has no limit point in $\mathbb{R}^k$. My questions: Can I take the min over an infinite set as in my first argument? Is my second argument okay? I appreciate any answers. Thanks. AI: Fix an arbitrary point $p\in{\mathbb R}^k$. Put $n_p:=\bigl\lceil|p|\bigr\rceil+1$. All $x_n\in S$ with $n\geq n_p$ satisfy $$|x_n|>n\geq n_p\geq|p|+1\ .$$ Therefore all these $x_n$ are outside the ball of radius $1$ around $p$. It follows that the chosen $p$ cannot be a limit point of $S$, and since $p$ was arbitrary the set $S$ cannot have a limit point in ${\mathbb R}^k$ whatsoever.
H: how can we prove that this function is a homomorphism? If $G_1 , ... , G_n$ are groups let $G=G_1\times G_2\times\dotsb\times G_n$. $E_p\colon G\to G_{p^{-1}(1)}\times G_{p^{-1}(2)}\times\dotsb\times G_{p^{-1}(n)}$ where $p \in S_n$ is defined as $(g_1 , g_2 , \dotsc , g_n )\mapsto(g_{p^{-1}(1)} , g_{p^{-1}(2)} , \dotsc ,g_{p^{-1}(n)})$. Now in the case $G_1 = G_2 = \dotsb = G_n$, show that the function $A\colon S_n\to\operatorname{Aut}(G)$ defined by $p\mapsto E_p$ is a homomorphism. I tried to solve it (the idea is not difficult), but every time I found it's not a homomorphism! So I need help, thanks. This is exercise #8 in Dummit and Foote's abstract algebra book, page 156, but I have changed the symbols because I can't use the Greek symbols on this site. AI: Just compute. Let $\sigma, \tau\in S_n$ and $g = (g_1, \ldots, g_n) \in G$, then we have, denoting by $()_i\colon G \to G_i$ the projection onto the $i$-th coordinate: \begin{align*} (E_\sigma E_\tau g)_i &= (E_\tau g)_{\sigma^{-1}(i)}\\ &= g_{\tau^{-1}\sigma^{-1}(i)}\\ &= g_{(\sigma\tau)^{-1}(i)}\\ &= (E_{\sigma\tau}g)_i \end{align*} Hence $E_\sigma E_\tau = E_{\sigma\tau}$, as wished.
H: Commutativity in rings of matrices Let $n$ be an arbitrary positive integer and $R$ an arbitrary ring (perhaps non-associative). Let's denote by $M_n(R)$ the set of all $n \times n$ matrices with entries from $R$. As I know, if $R$ is a non-trivial commutative ring without zero divisors then scalar matrices are the only matrices which commute with all other matrices over $R$. By scalar matrix I mean a diagonal matrix such that all its diagonal entries are equal. I have the following question: what if $R$ is a non-trivial ring WITH zero divisors? Is it possible that there is a matrix $M \in M_n(R)$ such that $\forall A\in M_n(R) ~~ AM = MA$ but $M$ is non-diagonal or at least non-scalar? Thanks in advance. AI: $M$ is in the center of $M_n(R)$ iff $M$ commutes with all matrices $E_{ij}$ (the $(i,j)$th entry equals $1$ and others $0$). From here you can easily obtain the answer.
H: Zorn's lemma implies the well-ordering principle I am a little confused about the proof given here http://euclid.colorado.edu/~monkd/m6730/gradsets05.pdf On the second page, when defining $P$, the author says that $B\subset A$ and $(B,<)$ is a well-ordering structure. Isn't this exactly what we want to prove? How do we know that $B$ can be well-ordered? What happens if $P$ is empty? AI: If $A$ is non-empty then every finite subset of $A$ is well-orderable, by the definition of finite. So $P$ is never empty if $A$ is non-empty.
H: Integral proof of logarithm of a product property In one of my textbooks, the product property of the logarithm is proved using integrals: $$\ln (xy) = \int_1^{xy}\frac{dt}{t} = \int_1^x\frac{dt}{t} + \int_x^{xy}\frac{dt}{t}$$ Then, let $ u = t/x $ and substitute in the second integral: $$= \int_1^x \frac{dt}{t} + \int_1^y \frac{du}{u} = \ln x + \ln y$$ While the expansion is rather quaint, I find the u-substitution difficult to follow: If $u = t/x$ then $du/dt = 1/x$ and $du = dt/x $ ... I am unsure how $ du/u $ is obtained for substitution. Any help or hints are appreciated! AI: Because $ u = \frac{t}{x} $, we have that $ t = ux \implies dt = x \cdot du $. Hence, $ \frac{1}{t} \ dt = \frac{1}{ux} \cdot x \ du = \frac{1}{u} \ du $. The bounds follow from the transformation and are calculated independently of the integrand in this case.
H: How to solve equation like this? Is there any good method to solve equations like this? $$ \begin{cases} x^2=y^2\\-6xy+10y^2=16\end{cases} $$ From the first equation, I see: $$ 1) x=1, y=1 $$ $$ 2)x=1, y=-1 $$ $$ 3)x=-1, y=1 $$ $$ 4)x=-1, y=-1 $$ When I plug 2) and 3) to the second equation, I see it works. So now I have 2 solutions, but wolfram found more... Is there any good method for equations like this? AI: From the first equation we can just conclude $x=\pm y$, nothing more. If $x=y,$ from the second equation $-6(y)y+10y^2=16\implies 4y^2=16\implies y^2=4\implies y=\pm 2$ If $x=-y, $ from the second equation $-6(-y)y+10y^2=16\implies y^2=1\implies y=\pm 1$
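A quick sympy check (assuming sympy is available) confirms that these are exactly the four solutions:
import sympy as sp
x, y = sp.symbols('x y', real=True)
print(sp.solve([x**2 - y**2, -6*x*y + 10*y**2 - 16], [x, y]))
# [(-2, -2), (-1, 1), (1, -1), (2, 2)]  (order may vary)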
H: Show the surface area of revolution of $e^{-x}$ is finite I need to show that the surface area of revolution of $e^{-x}$ is finite when the region from $x=0$ to $x=\infty$ is rotated about the x-axis. I tried using the surface area formula, but got stuck on the integration, so I thought that maybe there was a trick to show that it was finite without actually finding the definite integral. This is where I got with my integration: $$ \begin{align} SA &= 2\pi\, \int^b_a\, f(x)\sqrt{1+f'(x)^2}\, \mathrm{d}x \\ &= 2\pi\, \int^\infty_0\, e^{-x}\sqrt{1+e^{-2x}}\, \mathrm{d}x \\ &= \lim_{t \to \infty} 2\pi\, \int^t_0\, e^{-x}\sqrt{1+e^{-2x}}\, \mathrm{d}x \\ \end{align} $$ $$ \begin{align} \mathrm{let\,\,} u &= e^{-x}.\\ \mathrm{d}u &= -e^{-x}\, \mathrm{d}x\\ \mathrm{d}x &= \frac{-1}{e^{-x}} \,\mathrm{d}u\\ \mathbf{But:}\, u &= e^{-x}, \,\mathrm{so}\\ \mathrm{d}x &= \frac{-1}{u} \mathrm{d}u. \end{align} $$ When $x=0$, $u=1$ and when $x=t$, $u=e^{-t}$. However, as $t \to \infty$, $u \to 0$. (Is this a bad assumption to make here?) So, $$ \begin{align} SA &= -2\pi\int_1^0 \sqrt{1+u^2}\, \mathrm{d}u. \end{align} $$ This looks like it could be a nice enough integral, but I can't immediately see how to do it, and Wolfram|Alpha seems to use magic (and $\sinh^{-1}$, which is almost the same thing... =P) Remember that I have only been asked to show the surface area is finite. I don't actually have to find the surface area. (But if you know a method that doesn't involve too many advanced tricks, I'm happy to hear it.) Can we just say that the area under that curve is obviously finite, because it is the area bounded by a continuous curve between $x=0 $ and $x=1$? Should we perhaps say that this area is less than the area under $-2\pi \int_1^0 1+x^2\,\mathrm{d}x$, which is obviously finite, and thus this area itself is finite? What is the best way to answer this? AI: We want to calculate the surface area $S(M)$ from $0$ to $M$, and see what happens to $S(M)$ as $M$ gets large. You had done all the necessary work. We have $$S(M)=\int_0^M 2\pi e^{-x}\sqrt{1+e^{-2x}}\,dx.$$ Note that $\sqrt{1+e^{-2x}}\le \sqrt{2}$ in our interval. So the function we are integrating is between $0$ and $2\sqrt{2}\pi e^{-x}$. The integral $$\int_0^\infty 2\sqrt{2}\pi e^{-x}\,dx$$ is an upper bound for $S(M)$, so $$\int_0^\infty 2\pi e^{-x}\sqrt{1+e^{-2x}}\,dx$$ converges. Remark: There are various ways to find explicit expressions for the integral. The most pleasant is through a hyperbolic function substitution. Quite a bit uglier, but doable, is your substitution, followed by $u=\tan t$. Your substitution process that leads to $\int_0^1 \sqrt{1+u^2}\,du$ can be justified. The part that made you justifiably nervous can be dealt with by noting that for $S(M)$ we are integrating from $u=e^{-M}$ to $1$. And there is no issue of convergence for $\int_0^1 2\pi \sqrt{1+u^2}\,du$.
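For a numerical sanity check, here is a small sketch (assuming scipy is available); the hyperbolic substitution gives the closed form $\pi(\sqrt{2}+\sinh^{-1} 1)$, and the improper integral agrees with it:
import math
from scipy.integrate import quad
val, err = quad(lambda x: 2*math.pi*math.exp(-x)*math.sqrt(1 + math.exp(-2*x)), 0, math.inf)
print(val, math.pi * (math.sqrt(2) + math.asinh(1)))   # both approximately 7.2118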
H: Two closed subsets $A$ and $B$ in $\mathbb{R}$ with $d(A,B)=0$ I am looking for two closed subsets A and B (with $A\cap B = \emptyset$) of $\mathbb{R}$ with $d(A,B)=0$. I found a solution in $\mathbb{R}^2$, namely $A=\{(x,\frac{1}{x})\mid x>0\}$ and $B=\{(x,0)\mid x>0\}$. I know that those subsets have to be unbounded because there is a theorem that says: the distance between two closed subsets of which one is bounded, is greater than 0. AI: Set $A=\mathbb{N}\setminus \{1\}$ and $B=\{x+\frac{1}{x}:x\in A\}$
H: What is the limit of $\lim _{ n\rightarrow \infty }{ \sum _{ k=1 }^{ n }{ { k }^{ \left( \frac { 1 }{ 2k } \right) } } } $ When I was trying to solve this question I came up with this: $$\lim _{ n\rightarrow \infty }{ \sum _{ k=1 }^{ n }{ { k }^{ \left( \frac { 1 }{ 2k } \right) } } } ={ 1 }^{ 1/2 }+{ 2 }^{ 1/4 }+{ 3 }^{ 1/6 }+...+{ n }^{ 1/(2n) }.$$ Does it converge? What is the limit of the sum? AI: This has no finite limit, since $k^{\frac{1}{2k}}\to 1$ as $k\to\infty$, so the terms do not tend to $0$. You don't really need that to see that it can't converge: since $k^{\frac{1}{2k}}\geq 1$ for all $k$, the $n$-th partial sum is at least $n$, so the sum diverges to $+\infty$.
H: $\operatorname{\mathcal{Jac}}\left( \mathbb{Q}[x] / (x^8-1) \right)$ $\DeclareMathOperator{\Jac}{\mathcal{Jac}}$ Using the fact that $R := \mathbb{Q}[x]/(x^8-1)$ is a Jacobson ring and thus its Jacobson radical is equal to its Nilradical, I already computed that $\Jac \left( \mathbb{Q}[x] / (x^8-1) \right) = \{0\}$. But I'd like to calculate this in an elementary way, preferably using nothing beyond the very basic properties of the Jacobson radical, quotients of rings and maximal ideals. And here starts my struggle. AI: Hints: $\Bbb Q[x]$ is a principal ideal domain The ideals (including the maximal ideals) sitting above $(x^8-1)$ just depend on the factorization $x^8-1=(x-1)(x+1)(x^2+1)(x^4+1)$. If $p$ and $q$ are coprime polynomials, then $(p)\cap (q)=(pq)$. The nilradical of a commutative ring is certainly always contained in the Jacobson radical.
H: Evaluate the given limit by recognizing it as a Riemann sum. Please note that this is homework. Please excuse my lack of $\LaTeX{}$ knowledge. The Problem: Evaluate the given limit by first recognizing the sum (possibly after taking the logarithm to transform the product into a sum) as a Riemann sum of an appropriate function associated to a regular partition of $[0,1]$. $\lim_{n\to \infty}\frac{1}{n}\sqrt[n]{\frac{(2n)!}{n!}}$ My petty attempt: Okay, so taking the hint from the problem, I tried this: $\Delta x = \dfrac{1-0}{n} = \frac{1}{n}$ Then $x^*_k = 0 + k\Delta x = \frac{k}{n}$ Then I took the logarithm and expanded it: $e^{[\lim_{n\to \infty}\ln{(\frac{1}{n}\sqrt[n]{\frac{(2n)!}{n!}})}]}$ $e^{[\lim_{n\to \infty}\ln{(\frac{1}{n})} + \ln{(\sqrt[n]{\frac{(2n)!}{n!}})}]}$ $e^{[\lim_{n\to \infty}-\ln{(n)} + \frac{\ln(\frac{(2n)!}{n!})}{n}]}$ $e^{[\lim_{n\to \infty}-\ln{(n)} + \frac{\ln((2n)!) - \ln(n!)}{n}]}$ Now after this I am totally stuck. I am not even sure if I am going on the right track. All I ask is for some guidance. I know that the Riemann sum is defined as: $\sum^n_{k=1}{f(x^*_k)\Delta x_k}$ I also understand that the stuff inside the limit is already in closed form. Somehow I have to use the properties of the logarithm to transform it to a $\sum$, but how? AI: We apply the $\log$ function, recognize the Riemann sum, and integrate: $$\log\left(\frac{1}{n}\sqrt[n]{\frac{(2n)!}{n!}}\right)=-\log n+\frac{1}{n}\sum_{k=1}^n\log(k+n)=\frac{1}{n}\sum_{k=1}^n\log\left(\frac{k}{n}+1\right)\to\int_0^1 \log(1+x)dx=2\log(2)-1$$ hence $$\lim_{n\to\infty}\frac{1}{n}\sqrt[n]{\frac{(2n)!}{n!}}=\frac{4}{e}$$
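As a numerical sanity check, a small sketch using the log-Gamma function (to avoid overflow in the factorials) shows the sequence approaching $\frac{4}{e}$:
import math
def a(n):
    # (1/n) * ((2n)!/n!)**(1/n), computed via lgamma for numerical stability
    return math.exp((math.lgamma(2*n + 1) - math.lgamma(n + 1)) / n - math.log(n))
print(a(10**6), 4 / math.e)   # both approximately 1.4715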
H: Dual vector space, proof of inclusion in a span Question: V is a vector space (of finite dimension) over F. We assume $\alpha, \beta \in V^*$ (The dual space) and they satisfy: $\forall v: (\alpha (v)=0 \Rightarrow \beta (v)=0$) prove that $\beta \in sp(\alpha)$. What I thought: What I understand is that I need to prove that $\beta$ is actually $\alpha$ multiplied by a scalar. I also think that $\operatorname{Ker}\alpha \subseteq \operatorname{Ker}\beta$ (from what we were given). Would love some hints at first. AI: Your hypothesis is indeed $\ker(\alpha) \subseteq \ker(\beta)$. There are two possibilities. Either $\beta = 0$, and then... Otherwise $\beta \ne 0$. Then the kernels of $\alpha$ and $\beta$ are two subspaces of the same dimension, one contained in the other, and then... Finally, extend a basis of $\ker(\alpha)$ to a basis of $V$, and see how $\alpha$ and $\beta$ act on this basis.
H: Holomorphic vs differentiable (in the real sense). Why a holomorphic function is infinitely differentiable just because of satisfying the Cauchy Riemann equations, but on the other side, a two variable real function that is twice differentiable is not infinitely differentiable? I'm asking this for two reasons: 1) $\mathbb{R}^2$ is somehow an analog to the complex plane (they are isomorphic). 2) Holomorphic means that is differentiable in any direction of the complex plane, so why if a function is differentiable in any direction of the real plane then it is not infinitely differentiable? AI: Even though $\mathbb R^2$ and $\mathbb C$ are isomorphic as real vector spaces, they are very different in some algebraic respects, which crucially influence the notion of differentiability. Recall the key idea that a function is differentiable at a point if it has a best linear approximation (more precisely, a constant plus a linear transformation) near that point. In the context of functions $\mathbb R^2\to\mathbb R^2$, "linear transformation" means a transformation that respects addition of vectors and multiplication by real scalars. In other words, the transformation respects the real vector space structure of $\mathbb R^2$. It is well known that such linear transformations are given by $2\times2$ real matrices (once one has chosen a basis for $\mathbb R^2$). In the context of functions $\mathbb C\to\mathbb C$, on the other hand, "linear transformation" means a transformation that respects addition of vectors and multiplication by complex scalars. In other words, the transformation respects the complex vector space structure of $\mathbb C$. It is well known that such linear transformations are just multiplication by a single complex number. That's much more restrictive than multiplying by an arbitrary $2\times 2$ real matrix. Specifically, if we use $\{1,i\}$ as our basis for $\mathbb R^2$, then the $2\times 2$ real matrices that correspond to complex linear transformations are just those of the form $\pmatrix{a & b\\-b & a}$. Because complex linear transformations are a very special sort of real linear transformations, complex differentiable functions are a very special sort of real differentiable functions. That "specialness" ultimately accounts for all the miraculous consequences of complex differentiability.
H: A binomial inequality with factorial fractions: $\left(1+\frac{1}{n}\right)^n<\frac{1}{0!}+\frac{1}{1!}+\frac{1}{2!}+...+\frac{1}{n!}$ Prove that $$\left(1+\frac{1}{n}\right)^n<\frac{1}{0!}+\frac{1}{1!}+\frac{1}{2!}+...+\frac{1}{n!}$$ for $n>1 , n \in \mathbb{N}$. AI: We have by the binomial identity that \begin{align*} \left(1 + \frac 1n \right)^n &= \sum_{k=0}^n \binom nk \frac 1{n^k}\\ &= \sum_{k=0}^n \frac{n!}{(n-k)! n^k} \cdot \frac 1{k!}\\ &= \sum_{k=0}^n \frac{n \cdot (n-1) \cdots (n-k+1)}{n \cdot n \cdots n} \cdot \frac 1{k!}\\ &\text{now the first factor is $<1$ for $k\ge 2$}\\ &< \sum_{k=0}^n \frac 1{k!} \end{align*} for $n \ge 2$.
H: Convergence of infinite series $\sum (-1)^{n+1}\frac{1}{n!}$ I have this question: Do the series converge absolutely or conditionally? $$ \sum (-1)^{n+1}\frac{1}{n!} $$ I would say it does not converge absolutely, since I suggest, by using the ratio test, that $$ \frac{a_{n+1}}{a_n} $$ does not approach a limit $$ L<1 $$ but rather $$ L=1 $$ However, I can see from the result list that I am wrong, and it does converge absolutely, can someone please explain to me why? AI: You do indeed have absolute convergence, and by the ratio test: $$\lim_{n\to \infty} \left|\frac{a_{n+1}}{a_n}\right| = \lim_{n\to \infty} \frac{\frac{1}{(n+1)!}}{\frac 1{n!}} = \lim_{n\to \infty} \frac{n!}{(n+1)!} = \lim_{n\to \infty} \frac 1{n+1} = 0 < 1$$ Added: recall that the factorial is a product of factors, and so $$\frac{n!}{(n+1)!} = \frac{(n!)}{(n+1)(n!)} = \frac 1{n+1}$$ The factors given by $n! = n(n-1)(n-2)\cdots (2) \cdot (1)$ are common in both the numerator and denominator, and hence cancel, leaving only the factor of $1$ in the numerator, and $(n+1)$ in the denominator.
H: Coordinate geometry: calculating the height of an equilateral triangle If I have equilateral $\Delta ABC$ with A being $(-x,0)$ and B being $(x,0)$, how can I solve for the coordinates of C in terms of $x$? I tried the following: $(2x)^2 = x^2 + b^2 $ -- Pythagorean theorem, since we know that one side of the triangle is two times the length of half of the base. $3x^2 = b^2$ -- simplification. $ x\sqrt3 = b $ -- so this says that the height of point C must be $x\sqrt{3}$ units. This gives me an end result with the coordinates $(0, 2)$. However, frankly, this doesn't seem logical to me...what I'm basically saying with that result is that all equilateral triangles are two units tall...what?! This sounds completely incorrect. Any help would be great; thanks. edited: I actually had the right idea, I just made a stupid mistake with my simplification. ._. AI: Since the triangle is equilateral, the $x$-coordinate of $C$ must be $0$, so that it falls in the middle of the $x$-coordinates of $A$ and $B$. The base has length $|AB|=2x$ and so all of the sides have length $2x$. The point $C$ will lie on the circle, centre $A$ and radius $2x$. The whole circle is parametrized by $(-x,0) + 2x(\cos\theta,\sin\theta)$, where $\theta = 0$ corresponds to the base. Since we have an equilateral triangle, we have $\theta = 60^{\circ}$: \begin{array}{ccc} (-x,0)+2x(\cos 60^{\circ}, \sin 60^{\circ}) &=& (-x,0) + 2x(\tfrac{1}{2},\tfrac{\sqrt{3}}{2}) \\ \\ &=& (0,x\sqrt{3}) \end{array} If you want an "inverted" triangle then let $C=(0,-x\sqrt{3})$.
H: Decreasing function Given two increasing functions $f(x)$ and $g(x)$ for all $x \ge 0$. Moreover, $f(x) > g(x) $ for all $x \ge 0$ and $$\mathop {\lim }\limits_{x \to \infty } \left( {f(x) - g(x)} \right) = 0.$$ Is $d(x) = f(x)-g(x)$ a decreasing function ? AI: It is not even eventually decreasing. Consider $$ f(x) = e^x \ \ \mbox{ and } \ \ g(x) = \begin{cases} x & \mbox{ if } x \leq 1 \\ e^x - \frac{1}{x} & \mbox{ if } x \in (2n-1,2n], \ n \geq 1 \\ e^x - \frac{1}{2n+1} & \mbox{ if } x \in (2n, 2n+1], \ n \geq 1; \\ \end{cases} $$ both functions are increasing and they satisfy your conditions, but on intervals of the form $(2n,2n+1]$ the difference $f(x) - g(x) = \frac{1}{2n+1}$ is constant and therefore not decreasing. There are infinitely many such intervals.
H: $f$ is continuous at $a$ iff for each subset $A$ of $X$ with $a\in \bar A$, $f(a)\in \overline{ f(A)}$. Definition. $f$ is continuous at $a$ provided that for each open set $V$ in $Y$ containing $f(a)$ there is an open set $U$ in $X$ containing $a$ such that $f(U) \subset V$. Problem. $f$ is continuous at $a$ iff for each subset $A$ of $X$ with $a\in \overline{A}$, $f(a)\in \overline{ f(A)}$. AI: Suppose $f$ is continuous at $a$, according to the definition. Let $A$ be a subset with $a \in \overline{A}$; we want to show that $f(a) \in \overline{f[A]}$. We will use that a point is in the closure of a set iff every open neighbourhood of the point intersects that set. So let $V$ be an open neighbourhood of $f(a)$. Using the definition, we find an open neighbourhood $U$ of $a$ such that $f[U] \subset V$. As $a \in \overline{A}$ we know that $U$ intersects $A$, say $a' \in A \cap U$. But then $f(a') \in V \cap f[A]$ (as $f[U] \subset V$), so indeed $V$ intersects $f[A]$, and as $V$ was arbitrary, $f(a) \in \overline{f[A]}$. This shows one implication. Now let $f$ fulfill the closure condition at $a$; we want to see that $f$ is continuous at $a$. So let $V$ be an open neighbourhood of $f(a)$. Suppose now (striving for a contradiction) that for all open neighbourhoods $U$ of $a$ we have that $f[U] \not\subset V$. This means that every neighbourhood of $a$ contains points of $X \setminus f^{-1}[V]$, so $a \in \overline{X \setminus f^{-1}[V]}$, and so, by the closure condition, $f(a) \in \overline{f[X \setminus f^{-1}[V]]} \subset \overline{Y \setminus V}$, but this is false, as $f(a)\in V$ and $V$ does not intersect $Y \setminus V$. This contradiction shows the continuity of $f$ at $a$.
H: definition of discriminant and traces of number field. Let $K$ be a number field and $A$ the ring of integers of $K$. Let $(x_1,\cdots,x_n)\in A^n$. What does $D(x_1,\cdots,x_n)$ usually mean? Is it $\det(Tr_{K/ \Bbb Q} (x_ix_j))$ or $\det(Tr_{A/ \Bbb Z} (x_ix_j))$? Or do they always have the same value? I searched some definitions, but it is not explicitly stated. AI: I'm not entirely certain what $\operatorname{Tr}_{A/\mathbb{Z}}$ is, but the notation $D(x_1,\dots,x_n)$ or $\Delta(x_1,\dots,x_n)$ usually means the discriminant of $K$ with respect to the basis $x_1,\dots,x_n$, so I would say it's most likely the former. After all, $x_1,\dots,x_n\in A\subset K$, so it still makes sense to talk about the trace of these as elements of $K$, and that is what the definition of the discriminant of $K$ with respect to a basis is (assuming that $x_1,\dots,x_n$ are indeed a basis for $K/\mathbb{Q}$!).
H: For all $f: D_1(0) \to D_1(0)$ analytic with $f(\frac{i}{3}) = 0$, find $\displaystyle \sup_f\{\operatorname{Im} f(0) \}$ Let $\mathcal{F}$ denote the family of all analytic functions $f$ that map the unit disc onto itself with $f(\frac{i}{3}) = 0$. Find $M \equiv\sup\{\operatorname{Im} f(0) : f \in \mathcal{F}\}$. I am preparing for my qualifying exam in complex analysis and I'm pretty stuck on this question. I will admit that I have not gotten very far and I'm just looking for a few hints to get me started. Hopefully then I will be able to post an answer to my own question. A few thoughts. I know that that, for each $f \in \mathcal{F}$, $\operatorname{Im} f$ is a harmonic function and thus satisfies a maximum principle on each $\overline{D_r(0)} \subset D_1(0)$, $0 < r <1$. Could this be useful? Also, I do know that the automorphism $\phi : D_1(0) \to D_1(0)$ given by $\phi(z) = \frac{\frac{i}{3} - z}{1 + \frac{i}{3}z}$ is an element of $\mathcal{F}$, with $\operatorname{Im} \phi(0) = \frac{1}{3}$, so $M \ge \frac{1}{3}$. Any hints, or even a solution, are greatly appreciated. AI: Take arbitrary $f \in \mathcal{F}$ and look at $f \circ \phi : D_1(0) \to D_1(0)$. Since $f \circ \phi (0) = 0$, the Schwarz Lemma applies. In particular, we get $|f\circ \phi (\frac{i}{3})| \le |\frac{i}{3}| = \frac{1}{3}$. But then $\operatorname{Im} f (0) \le |f(0)| = |f\circ \phi (\frac{i}{3})| \le \frac{1}{3} \implies M \le \frac{1}{3}$. Since we already mentioned that $M \ge \frac{1}{3}$ above, the proof is complete.
H: Infinite series for arctan of x This is a bit of a vague question, so I won't be too surprised if I get vague responses. $$\tan^{-1}(x) = x - (x^3 / 3) + (x^5 / 5) - (x^7 / 7) + \cdots $$ ad infinitum. I'm using this, where $x = (\sqrt{2} - 1)$, to calculate $\pi$, as $\pi = 8 \tan^{-1}(x)$. I have never really learnt about infinite series before, and obviously when I translate this into Python code, I can't use a range from $(0, \infty)$. So, my question is this: how do I/can I represent this idea in the form of a geometric series, or is that the wrong way of going about it? Thanks. AI: You are only going to get an approximation good to some number of decimal places. If you use $N$ terms in the series, then the error is approximately the magnitude of the $(N+1)$th term. The question you must ask yourself is: if I want $M$ decimal places of accuracy, then how big must $N$ be? Example: say you want $M=6$ places of accuracy. Then $$\frac{(\sqrt{2}-1)^{2 N+1}}{2 N+1} < 10^{-6}$$ By trial and error, I get $N=6$. That means you only need $6$ terms in the series to get that level of accuracy.
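Translating this into Python, here is a minimal sketch of the truncated series with the $N = 6$ from the estimate above:
import math
x = math.sqrt(2) - 1
N = 6   # enough terms for about 6 decimal places, per the bound above
arctan_x = sum((-1)**k * x**(2*k + 1) / (2*k + 1) for k in range(N))
print(8 * arctan_x, math.pi)   # agree to about 5 decimal places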
H: On the notion of sheaf Citing from Borceux, Handbook of categorical algebra, in the preface to volume 3: The crucial idea behind the notion of a sheaf is to work not just with a "plain" set of elements, but with a whole system of elements at various levels. Of course, reasonable rules are imposed concerning the interactions between the various levels: an element at some level can be restricted to all lower levels and, if a compatible family of elements is given at various individual levels, it is possible to "glue" the family into an element defined at the global level covered by the individual ones. The various notions of sheaves depend on the way the words "level", "restriction" and "covering" are defined. The easiest examples are borrowed from topology, where the various "levels" are the open subsets of a fixed space $X$: for example a continuous function on $X$ may very well be defined "at the level of the open subset $U\subseteq X$", without being the restriction of a continuous function defined on the whole of $X$. Could someone explain to me the bold part? AI: Rephrasing it, that means that you can have a continuous function $f:U \to Y$ that can't be extended continuously to the whole of $X$. Moreover, in that case the restriction morphism $p_{XU}$ is not surjective, of course.
H: Prove positive definite matrix and determinant inequality $A$ and $B$ are two real symmetric matrices and $A \succeq B$ (meaning $A-B$ is a psd matrix). Does it hold that $|A|\ge|B|$? And why? AI: Of course not. Consider, e.g. $A=I_2$ and $B=-2I_2$ with $\det A=1<4=\det B$. Edit: However, the statement is true if $B$ is positive semidefinite. In that case $A\succeq B$ implies that $A$ is positive semidefinite too. If $\det B=0$, the statement obviously holds. If $\det B>0$, then both $A$ and $B$ are positive definite. Hence $A\succeq B$ implies that $B^{-1/2}AB^{-1/2}\succeq I$ and in turn $\det(B^{-1/2}AB^{-1/2})\ge 1$, i.e. $\det A\ge\det B$. (In fact, if $A\succeq B\succ0$, we have $\lambda_k(A)\ge\lambda_k(B)$ for every $k$. This is a property of positive definite ordering, but we do not need this fact here.)
H: secant method in maple secant method in maple. Find a root of the statement $x^3-3x^2+4x-1=0$ with the initial value $x_0=0$ and $x_1=1$ with 5 digits point approximation. AI: If this is homework and you're supposed to program it yourself then you could show what you've accomplished on your own already. If you just want to see answers from such an algorithm then you could use the Student:-NumericalAnalysis command. Eg, [Student:-NumericalAnalysis:-Secant(x^3-3*x^2+4*x-1,x=[0,1], tolerance=1e-5, maxiterations=50)]: evalf[5](%)[]; 0.31767 [Student:-NumericalAnalysis:-Secant(x^3-3*x^2+4*x-1,x=[0,1], tolerance=1e-5, output=sequence, maxiterations=50)]: evalf[5](%)[]; 0., 1., 0.50000, 0.20000, 0.33624, 0.31947, 0.31764, 0.31767, 0.31767 Programming questions about Maple are better asked on stackoverflow, unless it is the mathematics behind the algorithm that is your central concern.
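For comparison, here is a minimal plain-Python sketch of the same secant iteration; it reproduces the digits above:
def secant(f, x0, x1, tol=1e-5, maxit=50):
    for _ in range(maxit):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant update
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1
print(round(secant(lambda x: x**3 - 3*x**2 + 4*x - 1, 0.0, 1.0), 5))   # 0.31767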
H: Symbol for Mutual Inclusive events Is there any symbol for mutually inclusive (the opposite of mutually exclusive) events in probability? What I mean is: for OR we have the $\cup$ symbol in set theory. Likewise, is there any symbol for mutual inclusiveness? AI: You want to say that two events $A$ and $B$ satisfy If $A$, then $B$; If $B$, then $A$. That is, $A$ if and only if $B$. Thus $A$ and $B$ are mutually inclusive if and only if $A$ and $B$ happen simultaneously. We might say that $A$ and $B$ are equivalent events, or that $A$ and $B$ are concurrent events. (I don't think either notation is standard.) Symbolically, I suppose you'd want to use $A \equiv B$ or $A=B$ (denoting the fact that mutual inclusion is an equivalence relation).
H: Splitting open sets in perfect spaces Suppose $V$ is a Hausdorff space which is perfect and $U\subset V$ is a non-empty open set. Can we find two disjoint, non-empty open sets $U_1, U_2\subset U$? Is there any natural class of spaces having this property? AI: Sure: just pick distinct $x,y\in U$ (possible since $V$ is perfect, so the non-empty open set $U$ cannot be a singleton), use the fact that $V$ is Hausdorff to choose disjoint open nbhds $V_x$ and $V_y$ of $x$ and $y$, respectively, and let $U_1=V_x\cap U$ and $U_2=V_y\cap U$.
H: Probability example for homework There's a highway between two towns. To reach the other town, people must pay 10 dollars for cars and 25 dollars for bigger vehicles. How big an income can we expect if 60 percent of the vehicles are cars and there are 25 incoming vehicles per hour? What kind of distribution does it follow, and how do we calculate with it? AI: This is a discrete distribution. $\text{P(Vehicle is car)} = 0.6$ and $\text{P(Vehicle is a bigger vehicle)} = 0.4$ Expected revenue per vehicle is: $\text{E(Revenue per vehicle)} = \text{P(Vehicle is car)} (\text{Revenue from car}) + \text{P(Vehicle is a bigger vehicle)} (\text{Revenue from a bigger vehicles})$ Thus, total expected revenue is: $\text{Total expected revenue} = (\text{Total number of vehicles}) \text{E(Revenue per vehicle)}$
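Plugging in the given numbers (a minimal sketch of the computation above):
p_car, p_big = 0.6, 0.4
rev_per_vehicle = p_car * 10 + p_big * 25   # expected revenue per vehicle: 16 dollars
print(25 * rev_per_vehicle)                 # expected revenue per hour: 400 dollars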
H: Integrating over a triangle Let $\hat T$ be the triangle spanned by $(0,0)$, $(1,0)$ and $(0,1)$. Let $$I(r,s) = \int_{\hat T} x^r y^s d(x,y)$$ with $r,s\in\mathbb N\cup\{0\}$. Prove that $$I(r,s) = \frac{r!s!}{(2+r+s)!}$$ To show this I rewrote the integral to $$\int_0^1 \int_0^{-x+1} x^r y^s dy dx = \frac{1}{s+1}\int_0^1 x^r (1-x)^{s+1}dx$$ but I don't know how to continue from this point. How can I solve this? Or is there another way to prove the proposition? AI: I think the trick is using integration by parts: $$I\left[m,n\right] = \int_0^1 x^m (1-x)^{n}dx= \int_0^1 (x^m dx)(1-x)^n$$ Now using the formula $\int{v\ du}= uv-\int{u\ dv}$: $$I\left[m,n\right]=\frac{x^{m+1}}{m+1}(1-x)^n\Big|_0^1+ \frac{n}{m+1} \int_0^1 x^{m+1}\left(1-x\right)^{n-1}dx=0+\frac{n}{m+1}I\left[m+1,n-1\right]$$ Now using this formula repeatedly in our case will give us: $$\frac{1}{s+1}I\left[r,s+1\right]=\frac{1}{r+1}I\left[r+1,s\right]=\frac{s}{(r+1)(r+2)}I\left[r+2,s-1\right]=\cdots=\frac{s!\ r!}{(r+s+1)!}I\left[r+s+1,0\right]$$ But now, $I[r+s+1,0]$ is simply $\frac{1}{r+s+2}$, ergo: $$I=\frac{s!\ r!}{(r+s+2)!}$$
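A short sympy sketch (assuming sympy is available) verifies the formula for sample values of $r$ and $s$:
import sympy as sp
x, y = sp.symbols('x y')
r, s = 3, 2
I = sp.integrate(sp.integrate(x**r * y**s, (y, 0, 1 - x)), (x, 0, 1))
print(I, sp.factorial(r) * sp.factorial(s) / sp.factorial(r + s + 2))   # 1/420 1/420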
H: elementary substructure of $H(\kappa)$ $H(\kappa)=\{x:|TC(x)|<\kappa\}$, with $\kappa$ a regular cardinal and $TC(*)$ the transitive closure. Now we defined an elementary substructure: $M\prec H(\kappa)$ if for every formula $\varphi$ and all $a_{1},\dots, a_{n}\in M$ it holds that $\varphi^{M}(a_{1},\dots,a_{n})\leftrightarrow\varphi^{H(\kappa)}(a_{1},\dots, a_{n})$. My first question is, what does this exactly mean? It was the end of the lecture, and so we defined this quite hastily. And afterwards the professor wrote: Let $\kappa > \omega_{1}$ and $M\prec H(\kappa)$ be a countable elementary substructure. Note: $M$ is definitely not transitive. How do I quickly see that $M$ cannot be transitive? (Definition: a set $T$ is transitive if $x\in T$ implies $x\subset T$.) AI: This means that the same first order sentences are true, using one binary relation symbol in the language, interpreted as '$\in$'. $M$ is a substructure of some $(T,\in)$ means that the relation symbol of $M$ is also interpreted as $\in$, and that it is an elementary substructure means that in addition all formulas have the same truth values when variables are evaluated in $M$. $\kappa>\omega_1$ implies that $H(\kappa)\models\,$"$ \exists x:\nexists f:x\hookrightarrow \omega$". This latter one means that there is a set which cannot be embedded into $\omega$, and this sentence can be written as a correct first order formula, using only '$\in$'. By 1., the same sentence also holds in $M$. Now suppose $M$ were transitive, and pick a witness $x\in M$ for this sentence in $M$. By transitivity $x\subseteq M$, so $x$ is countable, and hence there is an injection $f:x\to\omega$; such an $f$ belongs to $H(\kappa)$, so $H(\kappa)$ satisfies "$\exists f:x\hookrightarrow\omega$" with the parameter $x\in M$. By elementarity $M$ satisfies it as well, a contradiction. So $M$ cannot be transitive.
H: A nonlinear differential equation We are to solve $$y=(y'-1)\cdot e^{y'}$$ Let $p=y'$, so $$y= (p-1)\cdot e^p$$ Differentiate: $$dy=(e^p + pe^p-e^p)dp=pe^p\, dp$$ From $$dx=\dfrac{dy}{p}=e^p\,dp$$ I find $$x=\int e^p\,dp=e^p,$$ but the answer in my book is $$y=[\ln(x-c)-1]\cdot(x-c)$$ Help? AI: $$x=e^p \implies p=\mathrm d y/\mathrm d x = \log x$$ You need to integrate this for $y=y(x)$. Note that to get the constant shift, you should use $x =\int e^p\,\mathrm d p = e^p+c$. You should also go back and check the original equation to make sure you haven't got any incorrect solutions by accident. This is always a good idea with nonlinear equations. (The point is that you differentiated a first order equation, whose solution carries one arbitrary constant, so you introduced an extra fictitious one.)
H: Linear Independence of $v$, $Av$, and $A^2v$ Let A be a 3x3 matrix and $ v \in \Re^3 $ with $ A^3v = 0 $ but $ A^2v \neq 0 $. Show that the vectors $ v, Av $ and $ A^2v $ are linearly independent. (From Alan MacDonald's Linear and Geometric Algebra, Problem 3.1.5). AI: First, you might try to see what would happen if they were linearly dependent? Certainly, you could write an equation down from that assumption. That equation would involve a linear combination and, once you've got a linear combination, you can multiply through by $A^2$ to get the result. Edit Per your request, here's a more geometric way to think of it. First, $v$ cannot be an eigenvector of $A$. (Can you see why?) Thus, $v$ and $Av$ together span a 2D subspace, say $S$, of $\mathbb R^3$. Next, we need to show that $A^2v$ does not lie in $S$. Well, to say that $A^3v=0$, while $A^2v\neq 0$, is to say that $A^2 v$ lies in the null space of $A$. Thus, if $A^2v$ did lie in $S$, there would be some linear combination of $v$ and $Av$ that lies in the null space of $A$. But since $Av$ already maps to the null space under application of $A$, we would get that $v$ must also map to the null space of $A$, which is a contradiction. More generally, you might think of multiplication by $A$ as folding up space, even in higher dimensions. Multiplication by $A$ once sends the one-dimensional null space to zero and something else maps to that null space. Multiplication again does it, well, again.
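To see the statement in action, here is a small numerical sketch with a concrete nilpotent matrix (the shift matrix), assuming numpy is available:
import numpy as np
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])          # A^3 = 0 but A^2 != 0
v = np.array([0, 0, 1])            # A^2 v is the first basis vector, nonzero
M = np.column_stack([v, A @ v, A @ A @ v])
print(np.linalg.matrix_rank(M))    # 3, so v, Av, A^2 v are linearly independent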
H: Counting number of solutions with restrictions I want to count the number of non-negative integer solutions to an equation such as $$x+5y+8z=n$$ I can do this using generating functions; for example, the answer here is $$[x^n]\frac{1}{(1-x)(1-x^5)(1-x^8)}$$ where $[x^n]$ is the coefficient of $x^n$. But what if I add a restriction between variables such as $y \le z$? I have no idea how to count the number of solutions with this additional constraint. Any suggestions? AI: If you want $y\leq z$, use the substitution $z = y + z'$, and you're solving $x + 5y + 8(y+z') = n$, which is equivalent to non-negative integer solutions for $x + 13y + 8 z' = n$. If you want $x\leq y \leq z$, a similar method will work. You could also do this for $2y \leq z$. Of course, with more restrictions, this can get harder to clearly do. It could be difficult to solve subject to $x \leq y, z \leq 2y, x \geq \pi z$ (assuming any solutions exist).
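The substitution can be sanity-checked numerically; this sketch compares a brute-force count of solutions with $y \le z$ against the transformed equation $x + 13y + 8z' = n$:
def count_direct(n):
    return sum(1 for y in range(n + 1)
                 for z in range(y, n + 1)
                 if n - 5*y - 8*z >= 0)
def count_transformed(n):
    return sum(1 for y in range(n + 1)
                 for zp in range(n + 1)
                 if n - 13*y - 8*zp >= 0)
print(all(count_direct(n) == count_transformed(n) for n in range(100)))   # True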
H: Drawing a triangle in a unit circle This is a question I came up with a long time ago. It asks: if we draw a triangle inscribed in a unit circle, can all the arc lengths $(\alpha ,\beta ,\theta)$ and sides of the triangle $(a,b,c)$ be rational numbers? Intuitively I believe that the rationality does not hold, but I can't derive a proof. Do you have any ideas? AI: $\alpha + \beta + \theta = 2 \pi$, so they can't all be rational. You might want to write the arc lengths as $\alpha \pi, \beta \pi, \theta \pi$, in which case $\alpha + \beta + \theta = 2$, and it gets slightly more interesting. The answer is still no. (WLOG, we have $0 \leq \alpha, \beta, \theta \leq 2$.) This arises because $a= 2| \sin \frac{\alpha\pi}{2}|$, which is rational if and only if $\frac{\alpha}{2} \in \{ 0, \frac{1}{6} ,\frac{1}{2}, \frac{5}{6}, 1\} $. This is known as Niven's Theorem. The only way to get $\frac{\alpha}{2} + \frac {\beta}{2} + \frac{\theta}{2} = 1$ is then to require that $\alpha\beta\theta=0$ (slight checking involved), which gives the degenerate cases (that I'm ignoring). These cases correspond to either having a point on the circumference, or a chord of the circle.
H: How to isolate j? Can anyone explain to me how to isolate the variable $j$, please? $$q = \frac{1 - (1 + j)^{-n}}{j} p $$ TIA AI: This can be solved exactly ("isolating $j$") only under very special circumstances. If $n > 4$ it is probably hopeless (and for $n = 3$ or $4$ the exact solution will turn out to be a horrible mess).
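Since there is no closed form in general, in practice $j$ is isolated numerically. Here is a minimal bisection sketch; the bracket $(0,1)$ and the monotonicity of the annuity factor in $j$ are assumptions you should check for your data:
def solve_j(q, p, n, lo=1e-9, hi=1.0, tol=1e-12):
    # f(j) = p*(1 - (1+j)**-n)/j - q is decreasing in j
    f = lambda j: p * (1 - (1 + j) ** (-n)) / j - q
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
print(solve_j(q=8.1109, p=1.0, n=10))   # approximately 0.04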
H: A property of radical ideals Let $A$ be a commutative ring with $1 \neq 0$. Theorem (Atiyah-MacDonald 1.13 (v)). Let $\mathfrak{a, b} \subseteq A$ be ideals. Then $\sqrt{\mathfrak{a + b}} = \mathfrak{\sqrt{\sqrt{a} + \sqrt{b}}}$. Question. Is the following generalization true? For any finite collection of ideals $\mathfrak{a}_{1}, \dots, \mathfrak{a}_{n} \subseteq A$, we have $$\sqrt{\mathfrak{a}_{1} + \cdots + \mathfrak{a}_{n}} = \sqrt{\sqrt{\mathfrak{a}_{1}} + \cdots + \sqrt{\mathfrak{a}_{n}}}.$$ I'm pretty sure that it is true since if $x_{j} \in \sqrt{\mathfrak{a}_{j}}$ so that $x_{j}^{n_{j}} \in \mathfrak{a}_{j}$ for chosen $n_{j} \geq 1$, each term of $(x_{1} + \cdots + x_{n})^{N}$ is $x_{1}^{r_{1}} \cdots x_{n}^{r_{n}}$ and not all $r_{j} < n_{j}$ if we take $N$ large. (This is a recycled argument of Proposition 1.7 in Atiyah-MacDonald.) I just want to make sure that this statement is correct as I don't have a reference for it. I am interested in this statement because I'd like to show that given finitely many elements $f_{j} \in A$, we have $\sqrt{(f_{1}, \dots, f_{n})} = \sqrt{0}$ if and only if $f_{j} \in \sqrt{0}$ for all $j$. It is true when $n = 1, 2$, as follows. Lemma. Let $f \in A$. We have $\sqrt{(f)} = \sqrt{0}$ if and only if $f \in \sqrt{0}$. Proof. Since $f \in \sqrt{(f)}$, we only need to show one implication. Let $f \in \sqrt{0}$. Then $(f) \subseteq \sqrt{0}$. Since taking radical does not change the order of inclusion, we have $\sqrt{(f)} \subseteq \sqrt{\sqrt{0}} = \sqrt{0} \subseteq \sqrt{(f)}$. We showed that $f \in \sqrt{0}$ implies $\sqrt{(f)} = \sqrt{0}$. Q.E.D. Corollary. Let $f, g \in A$. We have $\sqrt{(f, g)} = \sqrt{0}$ if and only if $f, g \in \sqrt{0}$. Proof. Since $f, g \in \sqrt{(f, g)}$, we only need to show one implication. Suppose that $f, g \in \sqrt{0}$. Then by Lemma, we have $\sqrt{(f)} = \sqrt{(g)} = \sqrt{0}$, so $\sqrt{(f, g)} = \sqrt{(f) + (g)} = \sqrt{\sqrt{(f)} + \sqrt{(g)}} = \sqrt{0}$. Q.E.D. AI: $$\sqrt{\mathfrak{a}_{1} + \cdots + \mathfrak{a}_{n}} = \sqrt{\sqrt{\mathfrak{a}_{1}} + \cdots + \sqrt{\mathfrak{a}_{n}}}$$ Take into account that the radical of an ideal is the intersection of all prime ideals containing it. Now let $\mathfrak p$ be a prime ideal of $R$ containing $\sqrt{\mathfrak{a}_{1}} + \cdots + \sqrt{\mathfrak{a}_{n}}$. This is equivalent to $\sqrt{\mathfrak{a}_{i}}\subseteq\mathfrak p$ for all $i$. But $\sqrt{\mathfrak{a}_{i}}\subseteq\mathfrak p$ iff $\mathfrak{a}_{i}\subseteq\mathfrak p$ and you are done. (You can also prove this by induction on $n\ge1$.) $$\sqrt{(f_{1}, \dots, f_{n})} = \sqrt{0} \text{ if and only if } f_{j} \in \sqrt{0} \text{ for all } j=1,\dots,n$$ This is easy to prove without using the above equality for radicals. We have $(f_{1}, \dots, f_{n})\subseteq\sqrt{(f_{1}, \dots, f_{n})}=\sqrt 0$, so $f_j\in\sqrt 0$ for all $j$. Conversely, if $f_{j} \in \sqrt{0}$ for all $j$, then $(f_{1}, \dots, f_{n})\subseteq\sqrt0$ and therefore $\sqrt{(f_{1}, \dots, f_{n})}\subseteq\sqrt 0$. Since $(0)\subseteq (f_{1}, \dots, f_{n})$ we finally get an equality.
H: Number of abelian groups Vs Number of non-abelian groups I would like to see a table that shows the number of non-abelian groups for every order $n$. It is preferable if the table contains the number of abelian groups of order $n$ (this is not necessary though). If anyone could provide me with such a table, I would be grateful. Note: I already checked Wikipedia. They have a table similar to the one I want but it's small. Here is a similar table to the one I want: http://oeis.org/wiki/Number_of_groups_of_order_n#Table_of_number_of_distinct_groups_of_order_n Thank you AI: As it's a reference request and the amounts given there seem sufficient: The number of groups of order $n$ is listed at OEIS as A000001. On its description page is a link to a table of $n$, $a_n$ for $n=1,\ldots, 2047$. For abelian groups, A000688 lists the counts up to $n=10000$, as these are quite easily calculated. Intriguingly, A060689 (non-abelian groups) gives only a list up to $n=2015$, so you can earn merit by computing the differences from the tables and submitting a bigger table.
H: Principal ideal domain not euclidean Can anyone give an example of a principal ideal domain that is not Euclidean and is not isomorphic to $\mathbb{Z}[\frac{1+\sqrt{-a}}{2}]$, $a = 19,43,67,163$? I believe it is conjectured that no other integer rings of number fields have this property. What about other rings? AI: One such simple example of a non-Euclidean PID is $ K[[x,y]][1/(x^2\!+\!y^3)]\,$ over any field $\,K,\,$ i.e. adjoin the inverse of $\,x^2\!+\!y^3$ to a bivariate power series ring over a field. For the proof, and a general construction method see D.D. Anderson. An existence theorem for non-euclidean PID’s, Communications in Algebra, 16:6, 1221-1229, 1988. For number rings, by Weinberger (1973), assuming GRH, a UFD number ring R with infinitely many units is Euclidean, e.g. real quadratic number rings are Euclidean $\!\iff\!$ PID $\!\iff\!$ UFD.
H: Supposed counterexample to Liouville's theorem I'm trying to understand Liouville's theorem, and I don't see why $f(z)=e^{-|z|^2}$ isn't a counterexample. It's bounded ($0 < f(z) \leq 1$), so somehow it must fail to be holomorphic. Isn't it differentiable everywhere? AI: No: it is complex-differentiable at the origin and nowhere else, so in particular it is not holomorphic on any open set. First off, the modulus function $f(z)=|z|$ is nowhere complex-differentiable in $\mathbb{C}$, and since any point other than the origin is contained in some open set on which holomorphic branches of the logarithm and square root are defined, if $g(z) = e^{-|z|^2}$ were differentiable there then so too would be $(-\log(g(z)))^{1/2}=|z|$, which is a contradiction. At the origin, $g(0)=1$, so: $$\frac{g(re^{i\theta})-g(0)}{re^{i\theta}}=\frac{e^{-r^2}-1}{re^{i\theta}}.$$ Taking the limit as $r\rightarrow 0$, we get $$e^{-i\theta}\frac{d}{dx}\bigg(e^{-x^2}\bigg)\biggr\rvert_0,$$ where we take the real function $e^{-x^2}$, which is (real) differentiable at the origin. This derivative equals zero, so the function is complex-differentiable at the origin with derivative $0$.
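An alternative check, added here (not in the original answer), via the Cauchy-Riemann equations: writing $g(x+iy)=u(x,y)+iv(x,y)$ with $u=e^{-(x^2+y^2)}$ and $v\equiv 0$, we get $$u_x=-2x\,e^{-(x^2+y^2)},\qquad u_y=-2y\,e^{-(x^2+y^2)},\qquad v_x=v_y=0,$$ so the equations $u_x=v_y$ and $u_y=-v_x$ hold only when $x=y=0$, confirming that the origin is the only point of complex differentiability.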
H: Need help simplifying complicated rational expression. Studying for my final and I can't figure this out. Simplify: $$\large\frac{\frac{3}{x^3y} + \frac{5}{xy^4}}{\frac{5}{x^3y} -\frac{4}{xy}}$$ AI: The shortest route is to note that the least common multiple of all four denominators of the ‘small’ fractions is $x^3y^4$ and to multiply the expression by $1=\frac{x^3y^4}{x^3y^4}$: $$\large{\frac{\frac{3}{x^3y} + \frac{5}{xy^4}}{\frac{5}{x^3y} -\frac{4}{xy}}\cdot\frac{x^3y^4}{x^3y^4}}\;.$$ I’ll let you check that this does the trick. It’s analogous to seeing that the least common multiple of $2,3,4$, and $6$ is $12$, so that multiplying $$\large\frac{\frac12+\frac23}{\frac14+\frac56}$$ by $\frac{12}{12}$ will clean it up nicely.
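For the reader who wants to check: carrying out the multiplication term by term gives $$\frac{\frac{3}{x^3y}+\frac{5}{xy^4}}{\frac{5}{x^3y}-\frac{4}{xy}}\cdot\frac{x^3y^4}{x^3y^4}=\frac{3y^3+5x^2}{5y^3-4x^2y^3}=\frac{3y^3+5x^2}{y^3\left(5-4x^2\right)},$$ and the numeric analogue works out to $\frac{6+8}{3+10}=\frac{14}{13}$.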
H: Comparing $2^n$ to $n!$ Comparing $\;2^n\;$ and $\;n!\;$ I need to rewrite one or the other so they can be more easily compared (i.e. have similar form). Because of the factorial, I'm a little lost as to how to compare the two functions. Normally, I would take the logarithm when trying to re-express a power like $2^n$, but I don't think this will work, since I don't know how to take the logarithm of a factorial, if at all. Might the series expansions of either function be the right approach? If not, could someone point me in the right direction? AI: $$2^n \;=\;\; \underbrace{\;2\cdot 2 \cdot 2\cdot 2\cdot\, \cdots \,\cdot 2 \; \cdot \;2\;}_{\text{$n$ factors of $2$}}$$ $$n! = \underbrace{1\cdot 2\cdot 3 \cdot 4\cdot\, \cdots\,\cdot (n - 1)\cdot n}_{\text{$n$ factors}}$$ When we compare the products factor-by-factor, we can easily see that when $n \geq 4$, $\;n!\;>\;2^n$. This can, of course, be formalized using induction on $n$.
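To pick up the logarithm idea from the question (an addition, not part of the original answer): the logarithm of a factorial is simply a sum of logarithms, $$\log n! = \sum_{k=1}^{n}\log k \;\geq\; \frac{n}{2}\log\frac{n}{2},$$ since the top half of the terms are each at least $\log\frac{n}{2}$, while $\log 2^n = n\log 2$ grows only linearly in $n$. Stirling's approximation sharpens this to $\log n! = n\log n - n + O(\log n)$. Either way, $\log n!$ eventually exceeds $\log 2^n$, so $n!$ eventually exceeds $2^n$.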
H: Is $\sum_{n=1}^{\infty} {x^2 e^{-nx}}$ uniformly convergent on $[0,\infty)$? Is $\sum_{n=1}^{\infty} {x^2 e^{-nx}}$ uniformly convergent on $[0,\infty)$? So I started by noting that, as a geometric series with $a=x^2$ and ratio $|r| = e^{-x} < 1$ for $x > 0$ (every term vanishes at $x=0$), the series converges pointwise. But how exactly do I prove that it converges uniformly? I am quite sure it is by the Weierstrass test, but I cannot find an upper bound to compare it to! Any direction would be appreciated! AI: Let $$f_n(x)=x^2e^{-nx},$$ then we have $$f'_n(x)=e^{-nx}\left(2x-nx^2\right)=0\iff x=0\ \text{or}\ x=\frac{2}{n},$$ so $$||f_n||_\infty=f_n\left(\frac{2}{n}\right)=\frac{4}{n^2}e^{-2}.$$ Hence $\displaystyle \sum_{n=1}^\infty ||f_n||_\infty=\frac{4}{e^2}\sum_{n=1}^\infty \frac{1}{n^2}$ converges, and by the Weierstrass M-test the series $\displaystyle \sum_{n=1}^\infty f_n$ converges uniformly.
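As a numerical sanity check of the sup-norm computation, added here (assuming NumPy is available; the grid and the sample values of $n$ are arbitrary choices of mine):

    import numpy as np

    # compare the max over a fine grid of x^2 e^{-nx} with the closed form (4/n^2) e^{-2}
    x = np.linspace(0.0, 10.0, 200001)
    for n in (1, 2, 5, 10):
        numeric = np.max(x**2 * np.exp(-n * x))
        closed_form = 4.0 / n**2 * np.exp(-2.0)
        print(n, numeric, closed_form)  # the two columns should agree closely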
H: $\lVert A\rVert<1$ implies $1-A$ invertible: only true in complete spaces? It is a well-known fact that if in a Banach space $X$ a bounded linear operator $A:X\to X$ satisfies $\lVert A\rVert<1$, then $1-A$ has a bounded inverse. I was wondering whether completeness is actually a necessary condition. I think the following argument is correct and doesn't use completeness: Clearly for any $N\in\mathbb{N}$ $$\left(\sum_{k=0}^N A^k\right)(1-A)=1-A^{N+1}.$$ Since $\lVert A^{N+1}-0\rVert\leq\lVert A\rVert^{N+1}\rightarrow 0$, the limit of the right-hand side exists and equals $1$. Hence also $$\lim_{N\rightarrow\infty}\left(\sum_{k=0}^N A^k\right)(1-A)=1$$ and therefore $(1-A)^{-1}=\lim_{N\rightarrow\infty} \sum_{k=0}^N A^k$. To see that the inverse is bounded, I note that by continuity of the norm $$\lVert(1-A)^{-1}\rVert=\lim_{N\rightarrow\infty} \left\lVert \sum_{k=0}^NA^k\right\rVert\leq\sum_{k=0}^\infty \lVert{A}\rVert^k=\frac{1}{1-\lVert A\rVert}<\infty.$$ AI: Without some completeness hypothesis, the infinite sum expressing the inverse may not converge in the given space. The gap in the argument is the step "and therefore $(1-A)^{-1}=\lim_{N\rightarrow\infty} \sum_{k=0}^N A^k$": the partial sums form a Cauchy sequence in operator norm, since their tails are bounded by $\sum_{k=M}^{N}\lVert A\rVert^{k}$, but without completeness a Cauchy sequence need not converge, so this limit need not exist.
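A concrete counterexample (added here; a standard one, not part of the original answer): let $X$ be the space of polynomial functions on $[0,\tfrac{1}{2}]$ with the sup norm, and let $A$ be multiplication by the variable, $(Ap)(t)=tp(t)$. Then $\lVert A\rVert=\tfrac{1}{2}<1$, but $1-A$ is not surjective: $((1-A)p)(t)=(1-t)p(t)$ can never equal the constant function $1$, since two polynomials agreeing on an interval are identical and $(1-t)p(t)=1$ is impossible for degree reasons. Indeed, the Neumann partial sums applied to $1$ are $\sum_{k=0}^{N}t^{k}$, a Cauchy sequence in $X$ whose would-be limit $\frac{1}{1-t}$ is not a polynomial; this is exactly where completeness is needed.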
H: Coloring dots in a circle with no two consecutive dots being the same color I ran into this question; it is not homework. :) I have a simple circle with $n$ dots, $n\geqslant 3$, numbered from $1$ to $n$. Each dot needs to be coloured red, blue, or green, and no dot can be coloured the same as its neighbours. How many ways are there to colour the dots? Hint: we need to use a recurrence relation. AI: Let $a_n$ be the number of ways of coloring a ring of $n$ dots. Now consider a ring of $n+1$ dots that have been properly colored. There are two possibilities: either dots $1$ and $n$ have the same color, or they do not. If they do not have the same color, then removing dot $n+1$ leaves one of the $a_n$ properly colored rings of $n$ dots. Conversely, if we insert a dot $n+1$ into a properly colored ring of $n$ dots, we must give it the third color in order to have a properly colored ring of $n+1$ dots. Thus, there are $a_n$ properly colored rings of $n+1$ dots in which dots $1$ and $n$ have different colors. If dots $1$ and $n$ have the same color, imagine removing dot $n+1$ and merging dots $1$ and $n$ into a single dot; you now have a properly colored ring of $n-1$ dots. Conversely, if you start with a properly colored ring of $n-1$ dots, you can split dot $1$ into two dots of the same color, one adjacent to dot $2$ and one adjacent to dot $n-1$; call the former dot $1$ and the latter dot $n$. Now insert dot $n+1$ between the new dots $1$ and $n$; this time you have two choices for its color, so there are $2a_{n-1}$ properly colored rings of $n+1$ dots in which dots $1$ and $n$ have the same color. The recurrence is therefore $a_{n+1}=a_n+2a_{n-1}$, and I’ll leave it to you to work out the initial conditions.
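A quick check of the recurrence (an addition; the original answer deliberately leaves the initial conditions to the reader): the base cases are $a_3=6$ and $a_4=18$, and solving $a_{n+1}=a_n+2a_{n-1}$ with these values gives the closed form $a_n=2^n+2(-1)^n$. A minimal sketch verifying this against brute force; the helper names are mine:

    from itertools import product

    def brute_force(n, colors=3):
        # count colourings of a cycle of n dots where adjacent dots differ
        return sum(
            1
            for c in product(range(colors), repeat=n)
            if all(c[i] != c[(i + 1) % n] for i in range(n))
        )

    def by_recurrence(n):
        a, b = 6, 18  # a_3 and a_4
        for _ in range(n - 3):
            a, b = b, b + 2 * a  # slide the window: (a_m, a_{m+1}) -> (a_{m+1}, a_{m+2})
        return a

    for n in range(3, 10):
        assert brute_force(n) == by_recurrence(n) == 2**n + 2 * (-1)**n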