H: Convergence of $\sum \frac{a_n}{(a_1+\ldots+a_n)^2}$ Assume that $0 < a_n \leq 1$ and that $\sum a_n=\infty$. Is it true that $$ \sum_{n \geq 1} \frac{a_n}{(a_1+\ldots+a_n)^2} < \infty $$ ? I think it is but I can't prove it. Of course if $a_n \geq \varepsilon$ for some $\varepsilon > 0$ this is obvious. Any idea? Thanks. AI: Recall how you prove that $\sum \frac{1}{n^2}$ converges.
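For reference, here is how that hint plays out (a sketch of the standard telescoping argument, writing $S_n = a_1+\cdots+a_n$ and mirroring the comparison $\frac{1}{n^2}\le\frac{1}{n(n-1)}$): $$\frac{a_n}{S_n^2} \le \frac{a_n}{S_{n-1}S_n} = \frac{S_n-S_{n-1}}{S_{n-1}S_n} = \frac{1}{S_{n-1}}-\frac{1}{S_n} \qquad (n\ge 2),$$ so $\sum_{n=2}^{N} \frac{a_n}{S_n^2} \le \frac{1}{S_1}-\frac{1}{S_N} \le \frac{1}{a_1}$; the partial sums are bounded, hence the series converges.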
H: Show that a function is continuous Let $K$ be continuous and bounded on $\mathbb{R}^{n}$ and let $f$ be Lebesgue integrable on $\mathbb{R}^{n}$. Show that the function $g$ defined on $\mathbb{R}$ by $g(t) = \int_{\mathbb{R}^{n}} K(tx)f(x)dx$ is well defined and continuous on $\mathbb{R}$. Well defined: $g(t) \leq |g(t)| \leq |\int K(tx)f(x)dx| \leq \int |K(tx)||f(x)|dx \leq N\int|f(x)|dx < \infty $, where $N = \sup \{ |K(x)| : x \in \mathbb{R}^{n} \}$. Now to prove it is continuous: I think I have come up with a rather complicated approach. Since $f$ is integrable, we have for a positive number $M \in \mathbb{R}$ that $\int_{B(0,M) \cap\mathbb{R}^{n}} |f(x)|dx = \int |f(x)| dx - \epsilon'$, where $B(0,M)$ is a ball in $\mathbb{R}^{n}$ with center $0$ and radius $M$. Now, since $K$ is continuous on $\mathbb{R}^{n}$, it is uniformly continuous on the compact set $B(0,M) \cap\mathbb{R}^{n}$. Letting $|\frac{t_{1} - t_{0}}{M} | < \frac{\delta}{M}, \delta > 0$, we get that on this compact subset $|g(\frac{t_{1}}{M}) - g(\frac{t_{0}}{M}) | = |\int_{B(0,M) \cap\mathbb{R}^{n}} (K(\frac{t_{1}x}{M}) - K(\frac{t_{0}x}{M}))f(x)dx| $. Since $||\frac{t_{1}x}{M} - \frac{t_{0}x}{M}|| = |\frac{t_{1}}{M} - \frac{t_{0}}{M}|$ $||x|| < \delta $, $|K(\frac{t_{1}x}{M}) - K(\frac{t_{0}x}{M})| < \epsilon$ $\forall x \in B(0,M)\cap \mathbb{R}^{n}$ and we get $|\int_{B(0,M) \cap\mathbb{R}^{n}} (K(\frac{t_{1}x}{M}) - K(\frac{t_{0}x}{M}))f(x)dx| \leq \int_{B(0,M) \cap\mathbb{R}^{n}} \epsilon|f(x)| dx =\epsilon (\int_{\mathbb{R}^{n}} |f(x)|dx - \epsilon')$. Since we can make $\epsilon$ and $\epsilon'$ arbitrarily small, $g$ is continuous. Now this seems to me like a complicated solution (if it is valid), but at the moment I can't come up with something simpler. I appreciate feedback on my approach, whether my proof is correct or if something is wrong. A simpler solution would also be nice! Thanks in advance! /Erik AI: For continuity you can apply the Dominated convergence theorem: Consider a $t\in\mathbb R$ and a sequence $t_n\to t$. Then $$g(t_n) = \int_{\mathbb{R}^{n}} K(t_nx)f(x)dx$$ Let $h_n(x)=K(t_nx)f(x)$. It is apparent that $h_n\to h=K(tx)f(x)$ since $K$ is continuous. Also $|h_n(x)|=|K(t_nx)||f(x)|\leq \sup|K(x)|\cdot |f(x)|=N|f(x)|$. Now note that $N|f(x)| $ is integrable, therefore the dominated convergence theorem applies: $$g(t) = \int_{\mathbb{R}^{n}} K(tx)f(x)dx=\int_{\mathbb R^n} \lim\limits_{n\to\infty}K(t_nx)f(x)dx=$$ $$\lim\limits_{n\to\infty }\int_{\mathbb R^n}K(t_nx)f(x)dx=\lim\limits_{n\to\infty}g(t_n)$$thus $g$ is continuous.
H: Distribution for Response Times I have samples from a response time population for a web transaction. I want to be able to use them to describe a distribution for the population but don't know a proper one to use. I have shied away from a Normal since it would result in some probability of getting a negative time. Could someone suggest a reasonable approach? Please go easy with the responses; I'm an engineer, not a math guy. Thank you. AI: You might want to try fitting a log-normal distribution. If you have more interesting data, it's possible that some sort of power-law distribution would fit better. See, for instance, the Lévy distribution.
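A hedged sketch of how one might fit a log-normal in practice with SciPy (the synthetic samples below are just a stand-in for real response-time measurements):

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for measured response times (seconds); always positive.
samples = np.random.lognormal(mean=-1.0, sigma=0.5, size=1000)

# Fit a log-normal with the location pinned at 0 (times cannot be negative).
shape, loc, scale = stats.lognorm.fit(samples, floc=0)
print("sigma-hat:", shape, "exp(mu-hat):", scale)

# Rough goodness-of-fit check against the fitted distribution.
print(stats.kstest(samples, 'lognorm', args=(shape, loc, scale)))
```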
H: Why are there problems when interpolating $f(x)=\arctan(10x)$? Given $f(x)=\arctan(10x)$, there would be a problem when we interpolate it by using Lagrange's method. This would have something to do with the derivatives of $f(x)$. I plotted some derivatives of $f$ but I did not come up with an answer. Can anybody tell me what goes wrong if we interpolate $\arctan(10x)$, and why this comes from the derivatives of $f(x)$? Thanks :-) Oh.. second question: I also would like to know why this problem does not occur when using cubic Hermite interpolation... In addition, I want to know why this problem does not occur when using cubic spline interpolation? I cannot figure this out AI: If what you got is an interpolant that oscillates wildly near the endpoints, then this is Runge's phenomenon. What you describe is a typical instance of this phenomenon: using a single high-degree polynomial to approximate a continuous function on evenly spaced sample points. This happens when we approximate some smooth function on a given interval, given $n$ evenly distributed sample points, using a single globally defined polynomial. If you use a cubic Hermite spline, we do not have this phenomenon because of the interpolation's piecewise nature: it is defined piecewise on each small interval. We have multiple locally defined polynomials, and we only glue these different polynomials together using continuity (namely $f$ and $f'$ are continuous across the sample points). The same piecewise locality is why cubic spline interpolation avoids the problem as well.
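A small numerical sketch of the phenomenon (equally spaced nodes on $[-1,1]$; `numpy.polyfit` is used for the interpolating polynomial, so expect conditioning warnings at higher degrees):

```python
import numpy as np

f = lambda x: np.arctan(10 * x)
xs = np.linspace(-1, 1, 1001)

for n in (5, 10, 20):
    nodes = np.linspace(-1, 1, n + 1)            # n+1 equispaced interpolation nodes
    coeffs = np.polyfit(nodes, f(nodes), n)      # degree-n interpolating polynomial
    err = np.max(np.abs(np.polyval(coeffs, xs) - f(xs)))
    print(f"degree {n}: max error = {err:.3f}")  # the error grows with n, worst near x = ±1
```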
H: How to calculate $\lim_{n \to \infty}\frac{r^n}{n}$ I'm a bit rusty on calculus and I'm not able to solve this rather simple limit: $$\lim_{n \to \infty}\frac{r^n}{n}$$ In my case $r = -1$, and "just by looking at it" I'd guess that for $\left|r\right| = 1$ and $n \to \infty$, $\frac{(\pm1)^n}{n} \to 0$. But I was wondering if there's a general rule, since given values like $0.1, e, -2$ for $r$ the limit does not seem so obvious to me. AI: Hint: For $|r|\le 1$ we have $$-\frac{1}{n}\le \frac{r^n}{n}\le \frac{1}{n}$$ and for $|r|>1$ the values $\frac{r^n}{n}$ are unbounded.
H: Smooth Quartics in $\Bbb{P}^3$ Algebraic category. Ground field $\Bbb{C}$. This is a naive question: are all smooth quartic surfaces in $\Bbb{P}^3$ isomorphic? The answer is NO if and only if there is a smooth quartic in $\Bbb{P}^3$ containing some (-1)-curve. AI: A smooth quartic in $\mathbb P^3$ is a K3 surface, and they are typically non-isomorphic. If you quotient out by the obvious projective symmetries, you get a 19-dimensional family, and the K3's honestly depend on these 19 moduli. (There is a Torelli theorem to this effect, I think.) [But, by adjunction, there are no -1 curves on a K3. I don't really understand your (-1)-curve remark. Details: Adjunction says that $2g - 2 = C \cdot C + C\cdot K,$ but on a K3 we have $K = 0$, so $2g-2 = C \cdot C.$ The left-hand side is even, and so a curve on a K3 cannot have self-intersection $-1$.]
H: Is this piecewise-defined function on $\mathbb{R}^2$ continuous at $(0,0)$? What about differentiable? Is this function continuous at the point $(0,0)$? Is it differentiable there? How can one show that? $$ f(x,y)=\begin{cases} \frac{x^{3}y+xy^{3}}{x^{2}+y^{4}} &\text{if }(x,y)\neq (0,0),\\ 0 & \text{if }(x,y)=(0,0) \end{cases} $$ Thanks a lot! AI: This is an unusual question, but a great one. @guru's method still applies. You should be able to argue continuity. For differentiability, note that both partials at the origin are $0$. So you need to consider $$\lim_{(x,y)\to (0,0)} \frac {f(x,y)}{\sqrt{x^2+y^2}} \overset{?}= 0\,.$$ The polar coordinates approach should address this as well.
H: Question about a proof that $\mathbb{Q}$ is dense in $\mathbb{R}$ This is from Ross's elementary analysis book. The statement is if $a,b \in \mathbb{R}$ such that $a<b$ then there exists a rational $r \in \mathbb{Q}$ such that $a<r<b$. I don't understand an important part of the proof which I will point out. Here is the proof: Since $b-a>0$, by the Archimedean property there exists a natural number, call it $n$, such that $n(b-a)>1$. Now we must prove there exists an integer $m$ such that $an<m<bn$. By the Archimedean property again there exists an integer $k$ such that $k>\max(|an|, |bn|)$ so that $-k<an<bn<k$. Then the set $$\{j \in \mathbb{Z}: {-k<j \leq k}\text{ and }an<j\}$$ is finite and nonempty and we can set $m = \min\{j \in \mathbb{Z}: {-k<j \leq k}\text{ and }an<j\}$. Then $an < m$ but $m - 1 \leq an$. Also, we have $m = (m - 1) + 1 \leq an + 1 < an + (bn - an) = bn$. Comment: I get up to the point where the author sets $k$ to be the integer that is larger than both $an$ and $bn$. And since $a<b$ we get the bounded inequality $-k<an<bn<k$. From here on this is where I get confused. He creates a set $\{j \in \mathbb{Z}: {-k<j \leq k}\text{ and }an<j\}$ which he says is finite (which I see) and nonempty (I'm guessing since this set has $k$ as an upper bound, thus it is nonempty). But then he lets $m = \min\{j \in \mathbb{Z}: {-k<j \leq k}\text{ and }an<j\}$, from which it follows that $an<m$ but $m - 1 \leq an$. I don't get how the author gets to that point. AI: The set is finite because there are at most $2k$ elements in it. It is non-empty because by assumption $k$ is in it. Since a non-empty set of integers that is bounded below admits a minimum, we can take $m$ to be that minimum. The minimum $m$ is in that set, but by minimality, $m-1$ is not in that set, because otherwise $m-1$ would be an element of the set smaller than its minimum. This is why $an < m$ (i.e. $m$ is in the set) but $m-1 \le an$ (i.e. $m-1$ is not in that set). Hope that helps,
H: True or false? About unitary operators. Let $V$ be a finite-dimensional inner product space. Let $T:V\to V$ be a linear transformation. Suppose that $v_1,...,v_n$ is an orthonormal basis of $V$ such that $(Tv_i,Tv_i)=1$ for every $1\leq i\leq n$. Then $T$ is a unitary operator. Is that statement true? AI: Unitary operators are invertible and therefore surjective. There are non-surjective maps which satisfy the condition above, so not all operators satisfying that condition are unitary. Consider the map $T: \mathbb{R}^2 \to \mathbb{R}^2$ given by $(x,y) \to (x+y,0)$. Taking the standard inner product on $\mathbb{R}^2$, we find that $\langle T(1,0), T(1,0) \rangle = 1$ and $\langle T(0,1), T(0,1) \rangle = 1$. Therefore $T$ satisfies the above condition, but it is not invertible and therefore not unitary.
H: Application Church-Turing thesis I would like to give examples of problems which are solvable with an algorithm, for example the function $f$ which maps the tuple $(n,m)$ to the greatest common divisor. This map is recursive. I would do it in the following way: 1st step: If $m$ equals $0$, then $\mathrm{gcd}(n,m)=n$. 2nd step: otherwise $\mathrm{gcd}(n,m)=\mathrm{gcd}(m,n \bmod m)$. What about the set of polynomials in one variable with integer coefficients? I want to argue that the subset of the polynomials which have an integer root is recursive. Moreover, why are the addition and multiplication of rational numbers recursive functions? What about the exponentiation of two rational numbers? AI: What about the exponentiation of two rational numbers? I assume you mean positive rational numbers with a rational exponent. As a formal symbol $q^r$ with related rules of calculation, it all boils down to prime factorization and rules for integer exponentiation, which are recursive in any reasonable discrete representation. However, if you mean calculation of real numbers equal to the exponentiated rationals, you will need infinite series (or more complicated things) to define a recursive sequence of rational approximations, and basic operations on the real number representation (like checking which of two expressions with rational exponentials is larger) are either not recursive or it can be difficult to prove correctness or termination of the algorithms. The other questions are simpler. Testing for an integer root of a polynomial with integer coefficients can be done by searching through divisors of the first and last coefficients, which reduces the problem to integer factorization. Addition and multiplication are done by polynomial formulas, so if you know integer addition and multiplication are recursive, this is enough to get the recursiveness of arithmetic on rational fractions.
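A direct Python transcription of the two steps above (an illustrative sketch):

```python
def gcd(n: int, m: int) -> int:
    # 1st step: if m equals 0, then gcd(n, m) = n.
    if m == 0:
        return n
    # 2nd step: otherwise gcd(n, m) = gcd(m, n mod m).
    return gcd(m, n % m)

assert gcd(252, 105) == 21
```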
H: What is the terminology for the non-repeating portion of a rational decimal? Given a number co-prime with 10, such as thirteen, we can construct a repeating decimal from its reciprocal: $\frac{1}{13}$ = 0.(076923). If we successively divide this number by a factor of 10 (i.e., 2 or 5) we get a sequence of numbers which have a fixed (non-repeating) and repeating portion (shown in parentheses): $\frac{1}{26} = 0.0(384615)$ $\frac{1}{52} = 0.01(923076)$ $\frac{1}{104} = 0.009(615384)$ $\frac{1}{208} = 0.0048(076923)$ $\frac{1}{416} = 0.00240(384615)$ $\frac{1}{832} = 0.001201(923076)$ What is the correct terminology for the non-repeating and repeating portions of the decimals? AI: Given an eventually-periodic sequence, the piece occurring before the periodicity starts is called the preperiod (at least, this is the term I've always heard for it; note, however, that it might also be used to refer to the length of this piece). I don't know if there's a more specialized term for when we are talking about digits of a decimal expansion, but I think it's a good name for it regardless.
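A short long-division sketch that computes the preperiod and period digits (a hypothetical helper, matching the examples above):

```python
def decimal_expansion(num: int, den: int):
    """Split num/den (with 0 < num < den) into (preperiod digits, period digits)."""
    seen = {}                  # remainder -> index of the digit it produced
    digits, r = [], num % den
    while r != 0 and r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(r // den)
        r %= den
    if r == 0:
        return digits, []      # terminating decimal: empty period
    start = seen[r]
    return digits[:start], digits[start:]

print(decimal_expansion(1, 26))   # ([0], [3, 8, 4, 6, 1, 5])  i.e. 0.0(384615)
print(decimal_expansion(1, 13))   # ([], [0, 7, 6, 9, 2, 3])   i.e. 0.(076923)
```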
H: Ideal of finite intersection of algebraic sets In general if $X_1$ and $X_2$ are two algebraic sets in $k^n$ with $k$ a field of characteristic zero, we have that $I( X_1 \cap X_2 ) = \sqrt{ I(X_1) + I(X_2) }.$ Is it possible in general to compute $I(X_1 \cap X_2 \cap \dots \cap X_n)$ in terms of $I(X_1), \dots , I(X_n)$? Thanks in advance. AI: Remember that for any ring $R$ and ideal $J\subseteq R$, $$\sqrt{J}=\bigcap_{\large\substack{\text{prime ideals}\\ P\supseteq J}} \!\!\!\!\!\!\!P.$$ Also, remember that for ideals $J_1,J_2,J_3\subseteq R$, we have $$J_1\supseteq J_2+J_3\iff J_1\supseteq J_2\quad\text{and}\quad J_1\supseteq J_3.$$ Use this to show that for any ideals $J_1,J_2$ of $R$, $$\sqrt{J_1+\sqrt{J_2}}=\sqrt{J_1+J_2}$$ so that you can use induction to prove what you want.
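Carrying out the suggested induction (a sketch): $$I(X_1\cap\cdots\cap X_n)=\sqrt{I(X_1\cap\cdots\cap X_{n-1})+I(X_n)}=\sqrt{\sqrt{I(X_1)+\cdots+I(X_{n-1})}+I(X_n)}=\sqrt{I(X_1)+\cdots+I(X_n)},$$ where the last equality is the displayed identity with $J_1=I(X_n)$ and $J_2=I(X_1)+\cdots+I(X_{n-1})$.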
H: Is it possible to make integers a field? Is it possible to define addition and/or multiplication on the set of a) natural numbers (including $0$: $0,1,2,3,...$) b) integers $(..., -2, -1, 0, 1, 2, ...)$ in such way that they will become fields? Thanks in advance. AI: Let $X$ be any countably infinite set (such as $\mathbb{Z}$ or $\mathbb{N}$). Choose any bijection $f:X\to\mathbb{Q}$. Define operations $\oplus,\otimes$ on $X$ by $$a\oplus b=f^{-1}(f(a)+f(b)),\qquad a\otimes b=f^{-1}(f(a)\cdot f(b))$$ where $+$ and $\cdot$ are the operations on $\mathbb{Q}$. Then these operations make $X$ into a field, indeed one that is isomorphic to $\mathbb{Q}$.
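A toy sketch of this transport of structure in Python, taking $X=\mathbb{N}$ and, as one concrete choice of $f$, an enumeration of $\mathbb{Q}$ built from the Calkin-Wilf sequence (the answer works for any bijection; `f_inv` here is a naive search, fine only for tiny examples):

```python
from fractions import Fraction
from itertools import count

def calkin_wilf():
    """Enumerate every positive rational exactly once (Calkin-Wilf order)."""
    q = Fraction(1)
    while True:
        yield q
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)

def f(n: int) -> Fraction:
    """Bijection N -> Q: 0 -> 0, 2k-1 -> k-th positive rational, 2k -> its negative."""
    if n == 0:
        return Fraction(0)
    g = calkin_wilf()
    for _ in range((n - 1) // 2):
        next(g)
    q = next(g)
    return q if n % 2 else -q

def f_inv(q: Fraction) -> int:
    return next(n for n in count() if f(n) == q)

def oplus(a: int, b: int) -> int:    # transported addition on N
    return f_inv(f(a) + f(b))

def otimes(a: int, b: int) -> int:   # transported multiplication on N
    return f_inv(f(a) * f(b))

print(f(1), f(2), f(3), f(4))  # 1 -1 1/2 -1/2
print(oplus(1, 2))             # 0: since f(1) + f(2) = 0, index 0 is the additive identity
```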
H: a group is not the union of two proper subgroups - how to internalize this into other categories? A well-known fact from group theory is that a group cannot be the union of two proper subgroups. I wonder, does this statement internalize into other categories than the category of sets? That is, is there some corresponding statement for group objects in, say, cartesian categories? It is certainly true for topoi, interpreted in a suitable way, since the usual proof is intuitionistic (however, note that $G=G_1 \cup G_2 \Rightarrow G=G_1 \vee G=G_2$ fails for disconnected topoi). What about the following: Let $G:=(|G|,m,i,e)$ be a group object in the cartesian category $\mathcal{C}$. Let $G_1 \to G$ and $G_2 \to G$ be homomorphisms of group objects, which are not isomorphisms. Assume that $|G_1| \to |G|$ and $|G_2| \to |G|$ are monomorphisms. Is it not possible that $|G| \to |G|$ is their union, i.e. their sup in the preorder of subobjects of $|G|$? Well as I've said, it fails for disconnected topoi. What do we have to assume on $\mathcal{C}$ so that it becomes true? AI: First, another easy counterexample. Recall that an internal group in $\mathbf{Grp}$ is just an abelian group. So consider $G = \mathbb{Z} / 6 \mathbb{Z}$, let $G_1$ be the unique subgroup of order 2, and let $G_2$ be the unique subgroup of order 3. Then the smallest subgroup of $G$ containing $G_1$ and $G_2$ is $G$ itself, so the claim fails in $\mathbf{Grp}$ as well. Now, for some slightly more positive results. Let's recall the standard proof. Let $G_1$ and $G_2$ be subgroups of $G$. Suppose $G = G_1 \cup G_2$, $g_1 \notin G_2$, $g_2 \notin G_1$. Consider $g_1 g_2$. If $g_1 g_2 \in G_1$, then $g_2 \in G_1$, a contradiction; and if $g_1 g_2 \in G_2$, then $g_1 \in G_2$, also a contradiction. Thus, $g_1 g_2 \notin G_1 \cup G_2$, so $G \ne G_1 \cup G_2$. As you say, this is intuitionistically valid. In fact it can even be interpreted in any coherent category. The only problem is that it does not mean what you think it means! The right formulation is the following: If $G_1$ and $G_2$ are subgroups of $G$ and we (simultaneously) have (generalised) elements of $G \setminus G_1$ and $G \setminus G_2$, then $G \ne G_1 \cup G_2$. More formally, $$g_1 \notin G_2, g_2 \notin G_1 \vdash_{g_1 : G, g_2 : G} \lnot (\forall g : G . g \in G_1 \lor g \in G_2)$$ but having $g_2 \notin G_1$ is generally a stronger condition than $G_1 \hookrightarrow G$ not being an isomorphism. In general, $G_1 \hookrightarrow G$ is an isomorphism (in a topos) if and only if the following formula holds in the internal logic: $$\forall g : G . g \in G_1$$ Unfortunately, it is not possible to express internally that a formula does not hold! Although we may write $$\lnot (\forall g : G . g \in G_1)$$ the above formula asserts that $G_1 \hookrightarrow G$ is not even an isomorphism locally: in a topos, the above formula is valid if and only if, for every $U$, the induced map $\mathrm{Hom}(U, G_1) \to \mathrm{Hom}(U, G)$ is not a surjection. Clearly, this is a stronger condition than $G_1 \hookrightarrow G$ not being an isomorphism. Nonetheless, we may deduce that the claim is true for complemented proper subgroups $G_1$, $G_2$ in a topos in which the subobjects of $1$ form an irreducible locale: because under these hypotheses, the locus $U$ over which both $G \setminus G_1$ and $G \setminus G_2$ are inhabited is non-empty, so $G \ne G_1 \cup G_2$ in the topos over $U$. 
But this is enough to prove $G \ne G_1 \cup G_2$ globally as well – because passing to the topos over $U$ is a logical functor. Postscript. A more sophisticated approach allows us to deduce another version of the claim. Suppose we have subgroups $G_1$ and $G_2$ of an internal group $G$ in a topos such that $G_1 \times G \cup G \times G_2$ is a proper subobject of $G \times G$. Noting that $G = G_1 \cup G_2$ if and only if this is true in all covers of the topos, by passing to a boolean cover if necessary, we may assume the law of excluded middle. But then in that case it is clear that $$(G \setminus G_1) \times (G \setminus G_2) = (G \times G) \setminus (G_1 \times G \cup G \times G_2)$$ so the LHS is non-empty. Thus we have reduced the question to the well-known case.
H: Need help with algebra portion of calculus finding slope of secant line The example problem is: Given f(x) = $x^2$, find and simplify the slope of the secant line for a = 1 and h = any non-zero number. The answer is as follows: For a = 1 and h any non-zero number, the secant line goes through (1, f(1)) = (1,1) and $(1+h,f(1+h))$ = $(1+h,(1+h)^2)$ and its slope is (1)$$\frac {f(1+h) - f(1)} {h} =$$ (2) $$\frac{(1+h)^2 - 1^2} {h} =$$ (3) $$\frac{1 + 2h + h^2 -1} {h} = $$ (4) $$\frac {h(2+h)} {h} = $$ (5) $$2 + h$$ where $h \neq 0$. My question here is the algebra. I do not understand how they got from step (2) to step (3) to step (4). In the book step (2) is annotated as "square the binomial". Step (3) is annotated as "combine like terms and factor the numerator". Step (4) is annotated as "cancel" - that much I understand. So can anyone walk me through steps (3) to (5)? Thanks! AI: From steps 2 to 3 they used: $$(a+b)^2=a^2+2ab+b^2$$ From steps 3 to 4 they used $$1+2h+h^2-1=2h+h^2$$ then $$2h+h^2=h(2+h)$$
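If it helps to double-check the algebra mechanically, a one-liner with SymPy (illustrative):

```python
import sympy as sp

h = sp.symbols('h')
print(sp.cancel(((1 + h)**2 - 1**2) / h))   # h + 2  (valid for h != 0)
```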
H: convergence of series with $k!$ Check if the following series converges: $\sum\limits_{k=1}^{\infty} (-1)^k \dfrac{(k-1)!!}{k!!}$ where $k!!=k(k-2)(k-4)(k-6)...$ I came across this exercise while going through some old exams. I'm pretty sure we have to bound the sequence and apply the Leibniz criterion, but after a while I gave up. If you have a little hint for me just to get me started, that would be great. AI: Set $a_k = (k-1)!!/k!!$, so that we're looking at the series $\sum_{k=1}^\infty (-1)^k a_k$. Notice that the even-indexed terms are exactly the normalized central binomial coefficients: $$ a_{2n} = \frac1{4^n} \binom{2n}n. $$ By known asymptotics, $a_{2n} \sim 1/\sqrt{\pi n}$ as $n$ grows large. Similarly, $$ a_{2n-1} = \frac1{2na_{2n}} \sim \frac{\sqrt{\pi n}}{2n} = \frac{\sqrt\pi}{2\sqrt n} $$ as $n$ grows large. Therefore \begin{align*} \sum_{k=1}^{2N} (-1)^k a_k &= \sum_{n=1}^N ( -a_{2n-1}+a_{2n} ) \\ &\approx \sum_{n=1}^N \bigg( {-}\frac{\sqrt\pi}{2\sqrt n}+\frac1{\sqrt{\pi n}} \bigg) \sim c\sqrt N \end{align*} where $c = -\sqrt\pi+2/\sqrt\pi < 0$. So the sum diverges to $-\infty$.
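A quick numerical check of the $c\sqrt N$ growth (a sketch; it uses the recurrence $a_k a_{k-1} = 1/k$, which follows from $k!! = k\,(k-2)!!$):

```python
import math

def partial_sum(K: int) -> float:
    """sum_{k=1}^{K} (-1)^k (k-1)!!/k!!"""
    a, s = 1.0, -1.0               # a_1 = 0!!/1!! = 1; first term is -a_1
    for k in range(2, K + 1):
        a = 1.0 / (k * a)          # a_k * a_{k-1} = 1/k
        s += a if k % 2 == 0 else -a
    return s

c = 2 / math.sqrt(math.pi) - math.sqrt(math.pi)       # ≈ -0.6441
for N in (10**2, 10**4, 10**6):
    print(N, partial_sum(2 * N) / math.sqrt(N), c)    # the ratio approaches c
```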
H: Image of the Norm on a Finite Dimensional Extension of $\mathbb{Q}_p$ I've been trying to see whether the following assertion is true in order to give a quick proof of another problem I was doing: if $K$ is a finite dimensional extension of the $p$-adic numbers $\mathbb{Q}_p$, we have the multiplicative field norm $N_{K/\mathbb{Q}_p}: K \rightarrow \mathbb{Q}_p$. While the norm is not guaranteed to be surjective, is it always possible to find some $k \in K$ such that $N_{K/\mathbb{Q}_p}(k)$ has ($p$-adic) absolute value $1/p$? One way I would imagine to do this is to show that there exist $\alpha, \beta \in K$ such that $|N(\alpha)| = p^A$ and $|N(\beta)| = p^B$ with $A, B$ relatively prime. The assertion then follows because there exist integers $x, y$ such that $xA + yB = 1$, whence $\alpha^{-x}\beta^{-y} \in K$ with $|N(\alpha^{-x}\beta^{-y})| = |N(\alpha^{-x})N(\beta^{-y})| = |a|^{-x}|b|^{-y} = p^{-Ax}p^{-By} = 1/p$. My claim seems to me altogether reasonable since otherwise there exists some prime $q$ such that the absolute value $p^s$ of every $N(k)$ as $k$ runs through all of $K$ will be such that $s$ is always divisible by $q$, which doesn't seem right. AI: You ask whether, given a finite extension $K/\mathbf Q_p$, the image of the norm $N_K : K^* \to \mathbf Q_p^*$ contains a uniformiser $\pi \in \mathbf Q_p^*$. In other words, does $\nu(N_K(a)) = 1$ have a solution $a \in K$? In general, it is not true. For simplicity, suppose $K/\mathbf Q_p$ is Galois of degree $n$. Since all the conjugates of $a$ have the same $p$-adic valuation, $\nu (N_K(a)) = n\cdot \nu(a)$ for every $a$. Recall that $\nu(K^*) =\frac{1}{e}\mathbf Z$, where $e$ is the ramification index of $K/\mathbf Q_p$. Therefore $\nu(N_K(K^*)) = n\cdot \nu(K^*) = (n/e) \mathbf Z$; so a necessary and sufficient condition for a uniformiser to be in the image of the norm, in the case of a Galois extension, is that $n=e$; i.e. that $K/\mathbf Q_p$ be totally ramified.
H: Introductory/Intuitive Functional Analysis Book Can you recommend a gentle introduction to the abstract thinking and motivation of functional analysis? I'm looking for a book that holds you by the hand and shows the details of exercises, etc. Thanks. AI: Of the well written books on Functional analysis that I've seen, Kreyszig is the most elementary. The Amazon reviews seem to agree with me.
H: Cauchy's Theorem- Trigonometric application any help would be very much appreciated. The question asks to evaluate the given integral using Cauchy's formula. I plugged in the formulas for $\sin$ and $\cos$ ($\sin= \frac{1}{2i}(z-1/z)$ and $\cos= \frac12(z+1/z)$) but did not know how to proceed from there. $$\int_0^{2\pi} \frac{dθ}{3+\sinθ+\cosθ}$$ Thanks. AI: Real Method Often, this type of integral is workable using the substitution $$ z=\tan(\theta/2)\quad\text{and}\quad\mathrm{d}\theta=\frac{\mathrm{2\,d}z}{1+z^2}\\ \sin(\theta)=\frac{2z}{1+z^2}\quad\text{and}\quad\cos(\theta)=\frac{1-z^2}{1+z^2} $$ then $$ \begin{align} \int_0^{2\pi}\frac{\mathrm{d}\theta}{3+\sin(\theta)+\cos(\theta)} &=\int_{-\pi}^\pi\frac{\mathrm{d}\theta}{3+\sin(\theta)+\cos(\theta)}\\ &=\int_{-\infty}^\infty\frac{2\mathrm{d}z}{3(1+z^2)+2z+(1-z^2)}\\ &=\int_{-\infty}^\infty\frac{\mathrm{d}z}{z^2+z+2}\\ &=\int_{-\infty}^\infty\frac{\mathrm{d}z}{(z+1/2)^2+7/4}\\ &=\frac{2\pi}{\sqrt7} \end{align} $$ Complex Method $$ \begin{align} \int_0^{2\pi}\frac{\mathrm{d}\theta}{3+\sin(\theta)+\cos(\theta)} &=\oint\frac1{3+\frac1{2i}(z-\frac1z)+\frac12(z+\frac1z)}\frac{\mathrm{d}z}{iz}\\ &=\oint\frac{2z}{6z-i(z^2-1)+(z^2+1)}\frac{\mathrm{d}z}{iz}\\ &=\oint\frac{-2i}{(1-i)z^2+6z+(1+i)}\mathrm{d}z\\ \end{align} $$ The singularities of the integrand are at $\frac{-3+\sqrt7}{2}(1+i)$ and $\frac{-3-\sqrt7}{2}(1+i)$. The second is outside the unit circle, so we only need to compute the residue at the first, which is $$ \frac{-2i}{2(1-i)z+6}=\frac{-2i}{2\sqrt7} $$ Thus, the integral is $2\pi i$ times the residue: $$ \int_0^{2\pi}\frac{\mathrm{d}\theta}{3+\sin(\theta)+\cos(\theta)}=\frac{2\pi}{\sqrt7} $$
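A numerical cross-check of both methods (illustrative):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda t: 1.0 / (3 + np.sin(t) + np.cos(t)), 0, 2 * np.pi)
print(val, 2 * np.pi / np.sqrt(7))   # both ≈ 2.37482
```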
H: Ordered combinations If I have 2 sets of elements, say $\{A, B, C, D\}$ and $\{P, Q, R, S\}$, how can I calculate combinations of combined set $$\{A, B, C, D, P, Q, R, S\}$$ such that order $A\to B\to C\to D$ and $P\to Q\to R\to S$ is maintained in each combination. For example $$\{A, B, C, D, P, Q, R, S\},\\\{A, P, Q, R, B, S, C, D\}, \\\{P, Q, R, S, A, B, C, D\}\hphantom{,}$$ are valid combinations whereas $$\{A, B, C, D, Q, P, R, S\}, \\ \{A, B, D, C, P, Q, R, S\}\hphantom{,}$$ are invalid. AI: Hint: Once you know which 4 positions $A, B, C, D$ are in, how many possible ordered combinations are there?
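A brute-force confirmation of the count the hint leads to, $\binom{8}{4}=70$ (illustrative):

```python
from itertools import permutations
from math import comb

def order_kept(s: str, word: str) -> bool:
    # the letters of `word` appear in s in their original relative order
    return [c for c in s if c in word] == list(word)

valid = sum(order_kept(''.join(p), "ABCD") and order_kept(''.join(p), "PQRS")
            for p in permutations("ABCDPQRS"))
print(valid, comb(8, 4))   # 70 70
```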
H: What does area represent? Since any two Euclidean shapes have an infinite number of points inside of them, shapes with different area have the same infinite number of points in them (and any object has the same number of points in it as are inside the plane it is inside). So area isn't a measure of the amount of points in an object, right? And if this is true, what does the area of a shape actually represent? In other words beyond the abstract idea of the amount of space in an object, what does an area of, for example, $4$, mean? AI: Area (in $\mathbb R^2$) starts with agreeing that a rectangle of sides $a$ and $b$ has area equal to $a\cdot b$. You may take that as an axiom. Now, the area of more complicated shapes is obtained by approximating the shape by rectangles. This can be done in different ways and there are plenty of things to prove here, as well as some surprises. For instance, one can approximate a given shape from the outside: cover it by countably many rectangles, and add their areas. Take the infimum over all such covers, and that will give you a measure of the area of the shape measured from the outside. Similarly, you can inscribe rectangles inside the shape, add their areas, and take the supremum over all such numbers. This will give you a measure of the area of the shape measured from the inside. Your question is a prelude to measure theory, a very important part of modern analysis. One of the important surprises is that the concept of area (and even the concept of length) can't be extended sensibly to measure all subsets of the plane (line). A famous example is Vitali's Set. So, the resulting theory is quite subtle.
H: Is second derivative of a convex function convex? If $f$ is twice differentiable and convex, is it true that $f''$ is a convex function? AI: I thought I'd take a personal challenge to find a counterexample on the entire real line. Here's a method for constructing such a function: let $g_1(x)$ be any nonconvex but positive real function such that the integrals $g_2(x)=\int_{0}^x g_1(z) dz$, and $g_3(x)=\int_{0}^x g_2(z) dz$ both exist for all $x\in\mathbb{R}$. Then $g_3$ is convex, but its second derivative $g_1$ is not. If you'd like a specific example, consider $$f(x) = x \mathop{\textrm{erf}}(x) + \tfrac{1}{\sqrt{\pi}} e^{-x^2}$$ I chose this deliberately, because $$f'(x) = \mathop{\textrm{erf}}(x), \quad f''(x) = \tfrac{2}{\sqrt{\pi}} e^{-x^2}.$$ (Plots of the function and of its second derivative omitted.)
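A quick numerical sanity check of this counterexample (using SciPy's `erf`; a sketch only):

```python
import numpy as np
from scipy.special import erf

f  = lambda x: x * erf(x) + np.exp(-x**2) / np.sqrt(np.pi)
f2 = lambda x: (2 / np.sqrt(np.pi)) * np.exp(-x**2)    # f'' is a Gaussian bump

# f is convex: its second derivative is strictly positive everywhere.
print(np.all(f2(np.linspace(-5, 5, 101)) > 0))          # True
print(f(-1) + f(1) >= 2 * f(0))                         # True, consistent with convexity of f

# But f'' itself is not convex: midpoint convexity fails at x = ±1.
print(f2(-1) + f2(1) >= 2 * f2(0))                      # False
```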
H: Continuous Linear Mapping $C[0,1]\rightarrow C[0,1]$ Show that $L(f)(x)= \int_0^x f(t) dt $ is a continuous linear mapping from $C[0,1]$ into itself. Do I only have to show that the operator is bounded? How do I explicitly choose my $M$ such that $\|Lf\|<M\|f\|$? AI: Yes, continuity is equivalent to boundedness among linear mappings between normed spaces. Choose $M$ to be the length of the given interval; here $M=1$ will do it: for every $x\in[0,1]$, $\left| \displaystyle\int_0^x f(t)\,dt\right| \ \le \ \int_0^x|f(t)|\,dt\ \le \ \int_0^1|f(t)|\,dt\ \le\ \int_0^1\|f\|_{\max}\,dt = 1\cdot \|f\|_{\max},$ and taking the supremum over $x$ gives $\|Lf\|_{\max}\le 1\cdot\|f\|_{\max}$.
H: Proving Bijections in $\mathbb{R}^n$ I have a question that may seem trivial or silly, however I'll try to make my point clear. I'm a student of Mathematical Physics and unfortunately my college doesn't offer a set theory course, so we are not taught to prove bijections between arbitrary sets. The single variable calculus course we have teaches us to prove bijections by looking for one inverse function in a very clear way: we write $y = f(x)$, try to solve for $x$ and then create the function desired (it's of course more formal than that, first we prove injectivity showing that $f(x)=f(y)$ implies $x=y$ and then we look for the inverse to prove surjectivity). The multivariable calculus course on the other hand doesn't talk about proving bijections from subsets of $\mathbb{R}^n$ to subsets of $\mathbb{R}^m$. On the other hand we have the analysis course, the topology course and the differential geometry course that all presume we know how to prove those bijections and so on. Now, I'm confused, because none of our courses teaches how to prove that kind of thing. If we have some function $f: \mathbb{R}^n \to \mathbb{R}^m$, proving injectivity is the easiest part, but for surjectivity, seeking an inverse seems kind of painful. By now I'm studying differential geometry of curves and surfaces (it uses Do Carmo's book), and the only thing I'm needing is this basic notion on how to prove bijections in these cases (for instance, when we come to show that something is a chart, proving continuity is rather easy, we have been taught how to do it, but proving the bijection is complicated). Are there any general guidelines as there are in the case of functions from $\mathbb{R}$ to $\mathbb{R}$? Is there any reference that can be given that treats this in a good way? Again, I know this is basic and that we all should know it before coming to things like differential geometry, but since the course doesn't teach this, I came to ask here to fill this gap. Thanks very much in advance, and sorry if this question is silly. AI: For example, $\tan$ gives a (smooth) bijection $(-\frac\pi2,\,\frac\pi2)\to \Bbb R$. By linear maps and shifting we can get bijections between any two open intervals. Similarly, there is a bijection from the open $n$ dimensional (unit) ball to $\Bbb R^n$, using the previous bijection on each ray. These bijections (and their inverses) are also continuous, so they are homeomorphisms. An $n$ dimensional manifold is a topological space that has an $n$ dimensional ball around each of its points. To formulate it, we need the bijections to the balls (or equivalently to $\Bbb R^n$).
H: Any practical difference between the metrics $d_1=\sup\{\left|{x_j-y_j}\right|:j=1,2,...,k\}$ and $d_2=\max\{\left|x_j-y_j\right|:j=1,2,...k\}$? I've been asked to do a proof showing that $d_1\left({x,y}\right)$ is a metric on $\mathbb{R}^k$, but is there any difference between this and $d_2\left({x,y}\right)$, for which I've already done a proof? AI: No, $\sup$ and $\max$ are the same for finite sets. In fact, they are the same for infinite sets as well whenever both exist, however an infinite set of real numbers may fail to have a maximum even if it's bounded above (consider $\{1-\frac1n:n\in\mathbb N\}$).
H: Show that $4mn-m-n$ can never be a square Let $m$ and $n$ be positive integers. Show that $$4mn-m-n$$ can never be a square. In my attempt I started by assuming for the sake of contradiction that $$4mn-m-n=k^2$$ for some $k \in \mathbb{N_0^+}$. Then I considered $k^2 \pmod3$ (I couldn't find a way for $\pmod4$ or $\pmod8$ to help) $$k^2 \equiv 0,1 \pmod3$$ This yields three possibilities for $m$ and $n$ $\pmod3$: either $$m,n \equiv 0 \pmod3$$ $$m \equiv 0,\;\;\;n \equiv 2 \pmod3 \; \; \;(WLOG)$$ $$m,n \equiv 2 \pmod3$$ Case $1$: $m,n \equiv 0 \pmod3$ Let $m=3m'$ and $n=3n'$ then $k^2=36m'n'-3m'-3n'$ Hence $3|k^2 \implies 3|k$ Let $k=3k'$ then $$9k'^2=36m'n'-3m'-3n'$$ $$\implies3k'^2=12m'n'-m'-n'$$ Here I'm stuck and don't know if anything I've done is helpful and don't feel inclined to pursue the other cases given what happened here. I was hoping to reduce this to some sort of infinite descent argument - that didn't happen. I now think that my approach was probably completely wrong. Any help would be greatly appreciated. AI: Assume on the contrary that $4mn-m-n=k^2$. Then $(4m-1)(4n-1)=(2k)^2+1$. Since $4m-1 \equiv 3 \pmod{4}$ and $4m-1>0$, $4m-1$ has a prime factor $p \equiv 3 \pmod{4}$. Then $p \mid (2k)^2+1$, which gives a contradiction since $(\frac{-1}{p})=(-1)^{\frac{p-1}{2}}=-1$.
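A small brute-force check of the claim for $1 \le m,n \le 200$ (illustrative only; the proof above covers all cases):

```python
from math import isqrt

def is_square(x: int) -> bool:
    r = isqrt(x)
    return r * r == x

assert not any(is_square(4*m*n - m - n)
               for m in range(1, 201) for n in range(1, 201))
print("no squares of the form 4mn - m - n for 1 <= m, n <= 200")
```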
H: Five digit re-write game In the habit of factoring numbers, a notebook I bought had a five digit item number $77076$, which factors as $2^2 3^2 2141$, which may also be $9 \cdot 8564$, and in this form the count of digits is again five. (Repeated digits count separately). [Note I thank Calvin Lin for pointing out I initially had the wrong product for $77076$.] So I wondered how often this can work for five digit numbers, and started at $10001=73 \cdot 167$, $10002=2 \cdot 3 \cdot 1667=6\cdot 1667,$ and so on. The rules I decided to stick to were only that the five digit number has to be re-written as a product of two or more factors, where the total number (with repetitions) of digits occurring in the factors is again five. Of course one runs into problems if the five digit number is itself a prime, for example at $10007$ and $10009$. There may be other general restrictions, statable in terms of the form of the prime factorization of the given five digit number; if so I'd be interested in that. Sometimes the initial factorization into primes has to be juggled with. For example $10010=2 \cdot 5 \cdot 7 \cdot 11 \cdot 13$, which as it stands is two digits over the goal of five. We can lower the digit count by $1$ if we can for example multiply a one digit by a two digit prime in the factorization, and get only a two digit result. For this case we can use $2 \cdot 11=22,\ 5 \cdot 7=35,$ to get $$10010=7 \cdot 22 \cdot 65,$$ so the five digit requirement is met. In this same example we could instead use $2 \cdot 13=26,\ 5 \cdot 11=55$ and get another re-write: $$10010=7 \cdot 26 \cdot 55.$$ I know this is not serious math, hence the recreational tag; maybe someone might find it amusing to look at these rewrites. Just to make a few specific questions: Is there a nice characterization, say in terms of the numbers of digits in the primes occurring in the factorization of $n$, which would say which composite $n$ with five digits did not have five digit rewrites as above? What happens in case we increase the number of digits to say 6 or 7? AI: Since $(10^k-1)^2=10^{2k}-2\cdot10^k+1\lt10^{2k}$ (e.g. $9\cdot9\lt100$), the number of digits in a factorization cannot decrease by further subdivision of the factors. Thus a number $n$ with $k$ digits has a non-trivial factorization with $k$ digits if and only if it has such a factorization with two factors. For a factorization of $n$ with two factors to have $k$ digits, it is necessary and sufficient that one of the factors has a higher mantissa than $n$. Thus $n$ has a factorization with $k$ digits if and only if it has a divisor with a higher mantissa than its own. That explains why you found no counterexamples other than primes when you started with the lowest mantissas. At the other end of the spectrum, there are no factorizations with $5$ digits of any $5$-digit number greater than $99\cdot999=98901$.
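The divisor criterion in the answer is easy to test by machine; a sketch (`mantissa` and `has_same_length_rewrite` are our hypothetical helpers, comparing leading-digit significands):

```python
def mantissa(n: int) -> float:
    return n / 10 ** (len(str(n)) - 1)      # scale n into [1, 10)

def has_same_length_rewrite(n: int) -> bool:
    """True iff n has a nontrivial factorization using exactly as many digits as n."""
    d = 2
    while d * d <= n:
        if n % d == 0 and (mantissa(d) > mantissa(n) or mantissa(n // d) > mantissa(n)):
            return True
        d += 1
    return False

print(has_same_length_rewrite(77076))   # True: e.g. 9 * 8564
print(has_same_length_rewrite(10010))   # True: e.g. 7 * 22 * 65
print(has_same_length_rewrite(98901))   # True: 99 * 999
```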
H: Questions about $\mathrm{Aut}(\mathbb{Q} (\sqrt{2}, \sqrt{3})/\mathbb{Q})$ Consider the extension $\mathbb{Q} \subset\mathbb{Q} (\sqrt{2}, \sqrt{3})$. How many elements are there in $\text{Aut}(\mathbb{Q} (\sqrt{2}, \sqrt{3})/\mathbb{Q})?$ Describe all elements in $\text{Aut}(\mathbb{Q} (\sqrt{2}, \sqrt{3})/\mathbb{Q})$. Find all subgroups of $\text{Aut}(\mathbb{Q} (\sqrt{2}, \sqrt{3})/\mathbb{Q})$ and their fixed fields. I'm using Dummit & Foote Chapter 14 (Galois Theory) if it needs to be referenced. Chapter 13 was a breeze for me so I just feel like I need someone to explain how easy of a connection this is and it'll just click (or maybe it's not and I need to be told that, too). I really just need direction. AI: Hint: An automorphism fixes the integers, indeed the rationals. So it has to carry $\sqrt{2}$ to $\pm\sqrt{2}$, and $\sqrt{3}$ to $\pm\sqrt{3}$. There are not many candidates.
H: For a given real square matrix $A$ what is meant by $e^{kA}$ where $k$ is real. For a given real square matrix $A$ what is meant by $e^{kA}$ where $k$ is real? I have a problem involving this notion and I wondered if $e^{kA}=(e^{ka_{ij}})$ where $A=(a_{ij}).$ AI: It's the matrix exponential. By definition, $$e^A = \sum_{i=0}^{\infty}\frac{A^i}{i!}.$$ In your case, the matrix we're exponentiating is $kA$. Your "intuition" that this can be computed component-wise is generally only true for the diagonal entries of diagonal matrices.
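To see concretely that the matrix exponential differs from the componentwise exponential (SciPy's `expm` computes $e^A$; illustrative):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])       # nilpotent: A @ A = 0, so e^A = I + A exactly

print(expm(A))                   # [[1, 1], [0, 1]]  -- the matrix exponential
print(np.exp(A))                 # [[1, e], [1, 1]]  -- componentwise, NOT the same
```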
H: Total area of squares. We have a square whose side length is $1$ unit. Each time, we rotate by $\theta$ and scale the square so that the new square sits inscribed in the previous one, as in the image. Does the total area of the squares converge as $\theta$ goes to $0$? AI: Consider a square of side $a$. If the new square rotated by $\theta$ has side of length $b$, we then have $$b \cos(\theta) + b \sin(\theta) = a \implies b = \dfrac{a}{\sqrt2 \sin(\theta+\pi/4)}$$ Hence, the $n^{th}$ square will have a side of length $$\dfrac{a}{2^{n/2}\sin^n(\theta+\pi/4)}$$ and thereby an area of $$\dfrac{a^2}{2^n \sin^{2n}(\theta+\pi/4)}$$ Hence, we want to know for what $\theta$ $$\sum_{n=0}^{\infty} \dfrac{\csc^{2n}(\theta+\pi/4)}{2^n}$$ converges. Can you finish it off from here?
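To finish it off (a sketch): the series is geometric with ratio $r(\theta)=\dfrac{1}{2\sin^{2}(\theta+\pi/4)}$, so $$\sum_{n=0}^{\infty} \frac{\csc^{2n}(\theta+\pi/4)}{2^n}=\frac{1}{1-r(\theta)} \quad\text{converges}\iff r(\theta)<1 \iff \sin^2(\theta+\pi/4)>\tfrac{1}{2} \iff 0<\theta<\tfrac{\pi}{2},$$ and as $\theta\to 0^{+}$ we have $r(\theta)\to 1$, so the total area $\frac{1}{1-r(\theta)}$ tends to infinity: the total area does not converge in the limit $\theta \to 0$.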
H: Is there a procedure to solve Diophantine Equations? How would you go about solving a multivariable, non-linear Diophantine Equation? AI: By a famous result of Matiyasevich, there is no universal algorithm which, when fed any Diophantine equation, will determine whether or not that equation has a solution in integers. Interestingly, it is still unknown whether there is an algorithm that will always determine whether or not a Diophantine equation has a solution in rationals. For quadratic Diophantine equations, in any number of variables, there is an algorithm. For degree $4$ equations, it is known that there is not. The question for cubic equations is unresolved.
H: What would be the expected number of targets which didn't get hit by any of the shooters? Assume there are 5 shooters and 5 targets. Each shooter would choose a target randomly and when the signal is given all shooters will fire at the same time and hit their target. What would be the expected number of targets which didn't get hit by any of the shooters? My attempts: Initially, there are $5^5$ different possible cases. Now I let a r.v. $X$ be the number of unshot targets. So $P(X=0)=5!/5^5, P(X=1)=5!/5^5,P(X=2)=C_2^5\times3!/5^5$ and so on, but it seems that the probability I get is wrong, and I am not sure which part goes wrong and how to get the correct one. Any solution or hints are welcome, thanks. AI: Define random variables $X_1,X_2,\dots, X_5$ by $X_i=1$ if Target $i$ didn't get hit, and by $X_i=0$ otherwise. Then the number of targets that didn't get hit is $Y=X_1+\cdots+X_5$. By the linearity of expectation, $$E(Y)=E(X_1)+E(X_2)+\cdots+E(X_5).$$ To find $E(X_i)$, we only need to find the probability Target $i$ didn't get hit. This is (if the shooters choose their targets uniformly and independently) equal to $\left(\frac{4}{5}\right)^5$. This is $E(X_i)$. Remark: Note that we do not need to know the distribution of $Y$ to calculate the expectation. The $X_i$ are not independent, but linearity of expectation always holds. Also, the approach works just as smoothly with $k$ targets and $n$ shooters.
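A quick Monte Carlo cross-check of $E(Y)=5\,(4/5)^5=1.6384$ (illustrative):

```python
import random

trials, shooters, targets = 100_000, 5, 5
missed = sum(targets - len({random.randrange(targets) for _ in range(shooters)})
             for _ in range(trials))
print(missed / trials, 5 * (4 / 5) ** 5)   # both ≈ 1.6384
```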
H: Order of the largest cyclic subgroup of $\mathrm{Aut}(\mathbb{Z}_{720})$ Back to easier problems for a bit... I have been told that it is possible to find the order of the largest cyclic subgroup of $\mathrm{Aut}(\mathbb{Z}_{720})$ without considering automorphisms of $\mathbb{Z}_{720}$. Here's what I have: We have that $\mathrm{Aut}(\mathbb{Z}_{720}) \cong U(720)$. Then since $720=2^4 \cdot 3^2 \cdot 5$ we have $U(720)\cong U(16) \oplus U(9) \oplus U(5)$. Some simple computations show that $U(16)$ is not cyclic, while $U(9)$ and $U(5)$ are, so $U(720) \cong U(16) \oplus \mathbb{Z}_6 \oplus \mathbb{Z}_4$. Since $\gcd(6,4) \ne 1$, $\mathbb{Z}_6 \oplus \mathbb{Z}_4$ is not cyclic, so this, coupled with the fact that the noncyclic $U(16)$ of order 8 would have a cyclic subgroup of order 4 at the most, gives us that the largest cyclic subgroup of $\mathrm{Aut}(\mathbb{Z}_{720})$ is of order $6$. Is this correct? I didn't look at automorphisms of $\mathbb{Z}_{720}$ per se, but I did consider elements of $\mathrm{Aut}(\mathbb{Z}_{720})$ up to isomorphism... Is there a way to derive this result just from looking at the structure of $\mathbb{Z}_{720}$? Thanks. AI: If we find an element of maximal order in $Aut({\mathbb Z_{720}})$, then this element will be a generator of the maximal order cyclic subgroup of $Aut({\mathbb Z_{720}})$. First, we use the theorem $Aut({\mathbb Z_n})\cong U(n)$. You had the first step: $$Aut({\mathbb Z_{720}})\cong U(720)=U(16\cdot 9\cdot 5)\cong U(16)\oplus U(9)\oplus U(5)\cong {\mathbb Z_2}\oplus{\mathbb Z_4}\oplus{\mathbb Z_6}\oplus{\mathbb Z_4} $$ We see immediately that the orders of the elements of $U(720)$ can only be $1,2,3,4,6,$ and $12$, since an element from ${\mathbb Z_2}\oplus{\mathbb Z_4}\oplus{\mathbb Z_6}\oplus{\mathbb Z_4}$ has the form $(a,b,c,d)$, where $|a|\in\{1,2\}$, $|b|\in \{1,2,4\}$, $|c|\in\{1,2,3,6\}$, and $|d|\in\{1,2,4\}$. Since the order of an element of this form $(a,b,c,d)$ is the least common multiple of the orders of its components, it is easy to see that $12$ is the maximum order of an element in $U(720)$. Hence the maximal order cyclic subgroup of $Aut({\mathbb Z_{720}})$ has order $12$. We can arrive at the same conclusion considering the properties of $Aut({\mathbb Z_{720}})$ alone: Let $\phi\in Aut({\mathbb Z_{720}})$. Then the mapping $\phi$ is completely determined by $\phi(1)$, which must be relatively prime to $720$. Note that ${\mathbb Z_{720}}\cong {\mathbb Z_{16}}\oplus {\mathbb Z_9}\oplus {\mathbb Z_5}$. Now, under this isomorphism, $\phi(1)$ corresponds to a triple $(a,b,c)$, where $a$, $b$, and $c$ are multiplicative units of ${\mathbb Z_{16}}$, ${\mathbb Z_9}$, and ${\mathbb Z_5}$, respectively. But these groups of units are isomorphic to ${\mathbb Z_2}\oplus {\mathbb Z_4}, {\mathbb Z_6}$, and ${\mathbb Z_4}$ respectively. The order of $\phi$ is the least common multiple of $(|a|,|b|,|c|)$, and therefore is at most $12$.
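The conclusion can also be checked directly with SymPy's multiplicative-order function (illustrative):

```python
from math import gcd
from sympy.ntheory import n_order

orders = [n_order(a, 720) for a in range(1, 720) if gcd(a, 720) == 1]
print(max(orders))             # 12
print(sorted(set(orders)))     # [1, 2, 3, 4, 6, 12]
```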
H: Why does $3+ (-1)\left(\left\lfloor\sum_{k=1}^{|\{x\in P,\;x\le n\}|} \frac{P_k}{1-P_k}\right\rfloor\right) = \pi(n),\quad n\ge1223$? Let $P$ denote the set of primes, and $\pi(x)$ denote the number of primes $\le x$. Here's my first question: Why does $$3+ (-1)\left(\left\lfloor\sum_{k=1}^{|\{x\in P,\;x\le n\}|} \frac{P_k}{1-P_k}\right\rfloor\right) = \pi(n),\quad n\ge1223$$ And similarly: $$2+ (-1)\left(\left\lfloor\sum_{k=1}^{|\{x\in P,\;x\le n\}|} \frac{P_k}{1-P_k}\right\rfloor\right) = \pi(n),\quad 11\le n\lt1223$$ I know that it is a fairly weak statement as of yet, but I can't find this (or a statement similar enough) anywhere, which seems weird, as it seems to be an interesting way of defining $\pi(x)$. If anyone has any explanation on why this is the case, please share. Thanks in advance! Edit: If the statement is redefined as $j+ (-1)\left(\left\lfloor\sum_{k=1}^{|\{x\in P,\;x\le n\}|} \frac{P_k}{1-P_k}\right\rfloor\right) = \pi(n)$, is there a way to determine $j$? Edit 2: As the accepted answer indicates: $$j + (-1)\left(\left\lfloor\sum_{k=1}^{|\{x\in P,\;x\le n\}|} \frac{P_k}{1-P_k}\right\rfloor\right),\quad j>0,\; e^{e^{j-1}} < n < e^{e^{j}} = \pi(n)$$ AI: Note that $$-\frac {P_k}{1-P_k}=1+\frac1{P_k-1}>1+\frac1{P_k}$$ and the sum of prime reciprocals diverges. Therefore, for $n$ big enough, your $\sum$ will be bigger than $\pi(n)+r$ for any $r$, thus disproving your conjecture. (The divergence is slow, however; for the $r$-th step you have to go up to $n$ on the order of $e^{e^{r}}$.)
H: A paradox? Or a wrong definition? Let $A$ be a commutative ring with $1 \neq 0$. Then writing $V(1) = V((1))$, we have $\bigcap_{\mathfrak{p} \in V(1)}\mathfrak{p} = \sqrt{(1)} = (1)$. But then $\bigcap_{\mathfrak{p} \in V(1)}\mathfrak{p} = \{x : x \in \mathfrak{p} \text{ for all prime ideals } \mathfrak{p} \supseteq (1)\} \supseteq \text{Set}$, where the last containment is justified since there is no prime ideal that contains $(1) = A$, so $x$ can be anything by vacuous truth. I had a similar confusion like this in my topology course, but the answer that I heard from my teacher was that "of course, $A$ is the universe that we consider," so $\bigcap_{\mathfrak{p} \in V(1)}\mathfrak{p} = \{x \in A : x \in \mathfrak{p} \text{ for all prime ideals } \mathfrak{p} \supseteq (1)\} = (1)$. But then why "pure" definition from (naive) set theory fails and forces us to consider $A$ as the ambient space? AI: Consider the defining formula for the class $\bigcap\cal S$, where $\cal S$ is a collection of sets: $$\bigcap\mathcal S=\{x\mid\forall S\in\mathcal S, x\in S\}$$ When $\cal S=\varnothing$, the assumption is satisfied by every object in the universe. And as we know, the collection of all objects in the universe is not a set. Therefore $\bigcap\varnothing$ is not a set, it doesn't "exist" in the sense that we want it to exist. But if we agree that the ambient space is $A$ itself, our ring, then this is just all the objects which are in $A$, that is to say $A$ itself. Since $A$ exists, it is a set, it's fine. When we agree on an ambient space, we actually say that whatever formula is used to pick whatever elements, they will always be members of that ambient space. So if we agree that $A$ is our ambient space, and $\cal S$ is a collection sets, then we actually write: $$\bigcap\mathcal S=\{x\in A\mid\forall S\in\mathcal S, x\in S\}$$ Now the empty intersection would be $A$ itself, which we assume is a set. Also see: intersection of the empty set and vacuous truth Unary intersection of the empty set
H: Finding an example of a bounded sequence in a complete metric space such that the sequence has no partial limit I'm working through an analysis text and I've come across this exercise: Give an example of a complete metric space $X$ and a bounded sequence $\left(x_{n}\right)$ in $X$ such that the sequence $\left(x_{n}\right)$ has no partial limit. The reason that I'm stuck with this one is because I'm shaky about what it means for a sequence to be bounded. Does this mean that for any positive integers $n$ and $m$, the set of all possible $|x_{n}-x_{m}|$ is a bounded set of real numbers? If so, I'm not sure how to relate the boundedness of a sequence to the completeness of a metric space. I know that a metric space $X$ is said to be complete if every Cauchy sequence in $X$ converges to a point in $X$... what does this have to do with bounded sequences?!? Thanks in advance for any help. EDIT: I'm going to add a couple of definitions. A sequence is said to be bounded if there exists a constant $C\gt 0$ such that for any positive integers $n$ and $m$ the inequality $|x_{n}-x_{m}|\lt C$ holds. A point $x$ in a metric space $X$ is said to be a partial limit of a sequence $\left(x_{n}\right)$ if for every $\epsilon\gt 0$ there are infinitely many values of $n$ for which $x_{n}\in B\left(x,\epsilon\right)$. AI: Fix any point $y_0\in X$, and define a sequence $(x_n)$ to be bounded if there exists a constant $C>0$ such that $d(x_n,y_0)\leq C$ for all $n$. This definition does not depend on $y_0$, since if you choose any other $y_1\in X$, then by the triangle inequality, $$|d(x_n,y_0)-d(x_n,y_1)|\leq d(y_1,y_0),$$ where the right-hand-side is a constant, so the sequence $(x_n)$ is bounded "with respect to $y_0$" if and only if it is bounded "with respect to $y_1$". Hint for the exercise: Consider a space with no limit points.
H: Diffeomorphism preserves dimension I read from Milnor's book $\textit{Topology from the Differentiable Viewpoint}$ this assertion: "If $f$ is a diffeomorphism between open sets $U\subset R^k$ and $V\subset R^l$, then $k$ must equal $l$, and the linear mapping $$df_x:R^k\rightarrow R^l$$ must be nonsingular." The proof was: The composition $f^{-1}\circ f$ is the identity map of $U$; hence $d(f^{-1})_v\circ df_x$ is the identity map of $R^k$. Similarly $df_x \circ d(f^{-1})_v$ is the identity map of $R^l$. Thus $df_x$ has a two-sided inverse, and it follows that $k=l$. My question is as follows: I don't understand why the fact that $df_x$ has a two-sided inverse implies $k=l$. The way I would prove it is as follows: Instead of proving $df_x$ has a two-sided inverse, I would say a one-sided inverse will suffice. Since $d(f^{-1})_v\circ df_x$ is the identity, it follows that $df_x$ must be an isomorphism (structure preservation comes from linearity, and bijectivity comes from the identity map). Therefore $k=l$. Thanks so much for your help! AI: It is not true that if a linear map $f:V\to W$ has a one sided inverse, then $\dim V=\dim W$ nor that $f$ is an isomorphism.
H: If $F$ is a finite field of size $q=p^n$ and $b\in K$ is algebraic over $F$ then $b^{q^{m}} = b$ for some $m > 0$ I want to apply a similar type of argument to the one showing that $\alpha^{q} = \alpha$ for all $\alpha\in F$, where we know that the characteristic of $F$ is $p$ and $|F| = q = p^{n}$. But I don't know how to incorporate the condition that $b\in K$ is algebraic over $F$. I know there is some $f\in F[t]$ such that $f(b) = 0$. Also, $F(b)/F$ is finite... I don't know how to proceed. Any suggestions? AI: If $b=0$ then you're done. Otherwise it is a unit and the multiplicative order of a unit $b$ in the group of units divides the order of said group. Given that $F(b)$ is a vector space over a field of size $q$, we find that $|F(b)|=q^m$ for $m=\dim_{\,F}F(b)$. What is the size of the group of units in $F(b)$ then?
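For reference, the step this hint is driving at (a sketch): $F(b)^{\times}$ is a group of order $q^m-1$, so Lagrange's theorem gives $b^{q^m-1}=1$ for $b\neq 0$, hence $$b^{q^{m}}=b,$$ which also holds trivially for $b=0$; the special case $b\in F$ (where one may take $m=1$) recovers $\alpha^{q}=\alpha$ for all $\alpha\in F$.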
H: Find the flaw in my proof that $z^2 =1$ has more than $2$ distinct solutions. Let $z \in \mathbb{C}$ be any number that satisfies the equation $z^2=1$. Certainly, $z=\pm1$ are two possible solutions to this equation. I claim that $z^k$ is also a solution to this equation for any $k \in \mathbb{R}$, resulting in (probably) at least one other distinct solution (for example, the solution $z=(-1)^{\pi}$). Proof: Choose any $k \in \mathbb{R}$. Then $(z^k)^2=z^{2k}=(z^2)^k=1^k=1$, as desired. So, can someone tell me what's wrong with my proof? I know that this has something to do with roots of unity and that this equation should only have 2 distinct solutions, so something must be wrong with my proof. Please bear in mind that I understand very little about complex numbers besides the definition that $i^2 = -1$. Thanks! AI: $$1=\sqrt{1}=\sqrt{(-1)(-1)}=\sqrt{-1}\sqrt{-1}=i^2=-1$$ The rule $\sqrt{x} \sqrt{y}=\sqrt{xy}$ is generally valid only if both $x$ and $y$ are non-negative real numbers. Look at one more: $$e^{i 2\pi}=1$$ $$(e^{2 \pi i})^i=1^i$$ $$e^{-2 \pi}=1$$ The error here is that the rule for multiplying exponents, used when going to the third line, does not apply unmodified to complex exponents, even if only the principal value is chosen when raising both sides to the power $i$.
H: Continuity of the real and imaginary parts of a continuous complex-valued function If a complex-valued function is continuous, are the component real and imaginary parts $u(x,y)$ and $v(x,y)$ necessarily continuous? If so, why? AI: The functions $\operatorname{Re}, \operatorname{Im}: \mathbb{C} \to \mathbb{R}$ are continuous since $\operatorname{Re} (z_1+z_2) = \operatorname{Re} z_1 + \operatorname{Re} z_2$ and $|\operatorname{Re} z | \le |z|$, and similarly for $\operatorname{Im}$. Hence $\operatorname{Re} \circ f$ and $\operatorname{Im} \circ f$ are continuous.
H: Give an example of $a,b,c$ which satisfy the conditions in the presentation How does one derive a specific realization of the generators of a group given its presentation? For example, when $$G=\langle a,b,c|a^3=b^3=c^2=1,ab=ba,ca=a^2c,cb=b^2c\rangle$$ we can embed $G$ in $S_3\times S_3$, and let $$a=((123),1),b=(1,(123)),c=((12),(12))$$ to get the desired result. However, when $3$ is replaced by a general $p$, where $p$ is an odd prime, say, $$G=\langle a,b,c|a^p=b^p=c^2=1,ab=ba,aca=c,bcb=c\rangle$$ I cannot specifically give an example of $a,b$ and $c$ which satisfy "$a^p=b^p=c^2=1,ab=ba,aca=c,bcb=c$". Is there any quick way to do so? AI: I suspect that you are trying to find an embedding of your group $G \hookrightarrow S_p \times S_p$, the product of symmetric groups. Let $$ a = ((1\;2\; \cdots \; p), 1), $$ $$ b = (1, (1\;2\; \cdots \; p)), $$ $$ \text{and} \quad c = (r, r), $$ where $r = (1, p-1)(2, p-2)\cdots$ is the reflection that fixes $p$. (This actually embeds $G$ in $D_p \times D_p$, the product of dihedral groups.)
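A machine check of these relations for, say, $p=5$ (SymPy permutations, $0$-indexed, so here the reflection fixes the point $p-1$; illustrative):

```python
from sympy.combinatorics import Permutation

p = 5
rho = Permutation(list(range(1, p)) + [0])                    # the p-cycle (0 1 ... p-1)
r = Permutation([p - 2 - i for i in range(p - 1)] + [p - 1])  # reflection fixing p-1
e = Permutation(list(range(p)))                               # identity

# With a = (rho, e), b = (e, rho), c = (r, r), the relations reduce, componentwise, to:
assert rho ** p == e and r ** 2 == e
assert rho * r * rho == r        # gives a c a = c and b c b = c (ab = ba is automatic)
print("relations verified for p =", p)
```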
H: Square bracket $[X]$ with finite fields and polynomial rings I understand that parentheses are used for functional notation. I do not have any confusion about this one. However, in some literature, I find square brackets ($[X]$) after some finite field notation. Source: Encyclopedia of Cryptography and Security, p356 Since $Q$ has been mentioned as a polynomial, I understand that $F_q[X]$ must be some polynomial. Indeed, the previously cited article in Wikipedia mentions the polynomial ring. This is further elaborated here. This is where things go beyond me. I was wondering whether it would be possible to explain a polynomial ring in simple terms. Yes, something like Polynomial Ring for Dummies. I hope that I am not asking for too much. AI: No, $F_q[X]$ is not a polynomial, but rather the collection of all polynomials over the field $F_q$. The $[X]$ notation denotes the formal variable; $3X^2+1$, for example, is a member of $F_5[X]$. The set of all polynomials has naturally defined addition and multiplication operations; these make it into a structure called a ring, which is why this is also called the polynomial ring $F_q[X]$.
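A concrete taste of arithmetic in $F_5[X]$ using SymPy (illustrative; `GF(5)` is SymPy's name for the field with five elements):

```python
from sympy import symbols, Poly, GF

X = symbols('X')
p = Poly(3*X**2 + 1, X, domain=GF(5))
q = Poly(2*X + 4, X, domain=GF(5))

print(p + q)    # coefficients are reduced mod 5: 3*X**2 + 2*X
print(p * q)    # the product also stays inside F_5[X]
```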
H: What are morphisms of functors I have not been able to understand what a morphism between two functors is. I have gone through the formal definition involving a commutative diagram. Can someone explain it to me in a bit more detail or in pictures? AI: Given categories $\mathcal{C}$ and $\mathcal{D}$, a functor $F:\mathcal{C}\to\mathcal{D}$ consists of two pieces of information: A rule assigning, to each object $x\in \mathrm{ob}(\mathcal{C})$, a corresponding $Fx\in\mathrm{ob}(\mathcal{D})$. A rule assigning, to each morphism $f\in\mathrm{Hom}_{\mathcal{C}}(x,y)$, a corresponding $Ff\in\mathrm{Hom}_{\mathcal{D}}(Fx,Fy)$. "Pictorially": $${\mathrm{ob}(\mathcal{C})\xrightarrow{F\text{ (on objects)}}\mathrm{ob}(\mathcal{D})\atop x\;\;\;\;\;\;\;\;\to\;\;\;\;\;\;\;\; Fx}\qquad\qquad {\mathrm{Hom}_{\mathcal{C}}(x,y)\xrightarrow{F\text{ (on morphisms)}}\mathrm{Hom}_{\mathcal{D}}(Fx,Fy)\atop \begin{pmatrix} x\\ \downarrow\!\scriptsize f\\ y\end{pmatrix} \;\;\;\;\;\;\;\;\to\;\;\;\;\;\;\;\;\begin{pmatrix} Fx\\ \downarrow\!\scriptsize Ff\\ Fy\end{pmatrix} }$$ Of course, there are also properties that these rules must have, to ensure that we have some consistency: we must have $F(\mathrm{id}_x)=\mathrm{id}_{Fx}$ and $F(g\circ f)=Fg\circ Ff$. We can't just haphazardly move objects and morphisms around without respecting composition. Now, what would a "morphism of functors" be? We need a sort of "meta-rule"; a rule for turning the rules defining $F$ into the rules defining $G$. $$\begin{array}{ccc} & & Fx\\ & \nearrow & \color{red}{\downarrow}\\ x & & \color{red}{\downarrow}\\ & \searrow & \color{red}{\downarrow}\\ & & Gx \end{array}\qquad\qquad \begin{array}{ccc} & & (Fx\xrightarrow{Ff}Fy)\\ & \nearrow & \color{red}{\downarrow}\\[-0.4in] (x\xrightarrow{f} y) & & \color{red}{\downarrow}\\ & \searrow & \color{red}{\downarrow}\\ & & (Gx\xrightarrow{Gf}Gy) \end{array}$$ So, to specify a morphism of functors (let's call it $\alpha$), we can figure that we'll need to do two things: for each $x\in\mathrm{ob}(\mathcal{C})$, we want to choose a morphism $\alpha_x:Fx\to Gx$; for each $f\in\mathrm{Hom}_\mathcal{C}(x,y)$, we want to choose a "map of arrows" from $Ff$ to $Gf$. What is a map of arrows? Why, it's a choice of arrows on each end that make it a commutative square. And hey, we've already chosen maps $\alpha_x:Fx\to Gx$ and $\alpha_y:Fy\to Gy$, so we actually don't need to be choosing anything more; we just add the requirement that the $\alpha_x$'s we chose make a commutative diagram $$\begin{array}{ccc} Fx & \xrightarrow{Ff} & Fy\\ \scriptsize\alpha_x\!\normalsize\downarrow & & \downarrow \!\scriptsize\alpha_y\\ Gx& \xrightarrow[Gf]{} & Gy \end{array}$$ for any morphism $f$ in the category $\mathcal{C}$. This choice of arrows $\alpha_x$ for each $x\in\mathrm{ob}(\mathcal{C})$ defines a natural transformation $\alpha$ from $F$ to $G$. There is actually a very deep connection between the notion of natural transformation, and the notion of homotopy (from the field of topology). In a typical animation of a homotopy (Wikipedia has such a gif; the image is not reproduced here), we start with two different plane curves, $f:X\to \mathbb{R}^2$ and $g:X\to\mathbb{R}^2$, where $X$ stands for some closed interval. A homotopy is a "morphing" of one curve to the other. But what does this really mean? For each $s\in X$, we have chosen a path $h_s:I\to \mathbb{R}^2$ that starts at $f(s)$ and ends at $g(s)$ ($I$ stands for the interval $[0,1]$). 
Using $t$ to denote the variable for these $h_s$'s, you can think of $t$ as time: at time $t=0$, we have $h_s(0)=f(s)$, and at time $t=1$, we have $h_s(1)=g(s)$, and the curve $h_s(t)$ is the path traced out as $f(s)$ moves to $g(s)$. Of course, there must be some consistency; nearby points on one curve can't have too wildly different paths: the maps $h_s$ must "continuously vary" with $s$. Sounds familiar! In fact, there is a category analogous to $I$, let's call it $\mathcal{I}$: it has two objects, named $0$ and $1$, and it has one non-identity morphism $T:0\to 1$. To be poetic, we might call it the "arrow of time". Just as a homotopy from a map $f:X\to Y$ to a map $g:X\to Y$ is a map $h:X\times I\to Y$ such that $h(x,0)$ is $f(x)$, $h(x,1)$ is $g(x)$, and for each $x\in X$, $h(x,t)$ traces out a curve from $f(x)$ to $g(x)$, a natural transformation $\alpha$ from a functor $F:\mathcal{C}\to\mathcal{D}$ to a functor $G:\mathcal{C}\to\mathcal{D}$ can be identified with (or even alternatively defined as) a functor $H:\mathcal{C}\times \mathcal{I}\to\mathcal{D}$, where $H(x,0)=Fx$, $H(x,1)=Gx$, and $H(x,T)=\alpha_x$. See this MathOverflow thread Natural transformations as categorical homotopies and this nLab article geometric realization of categories.
H: simplifying equation with logs I have the following equation: $\ln\left(\frac{z_0}{z_e}\right) = b_1Re^{0.25}+b_2$. I would like to solve this for $z_e$. I have found the same equation expressed in terms of $z_e$ in another paper: $z_e = z_0\exp(-(b_1Re^{0.25}+b_2))$. I can't get my head around how this works. My own attempt gave an answer that matched, but there was something about the last step that didn't seem right to me. Can anyone explain the correct process for simplifying the first equation in terms of $z_e$? AI: Be careful: $e^{a+b}$ is not the same as $e^a+e^b$ (it's actually $e^a\cdot e^b$). When it comes to the final step, one only needs to note that $e^{-a}=\frac{1}{e^a}$. The full derivation might look like this: $$\ln\left(\frac{z_0}{z_e}\right) = b_1Re^{0.25}+b_2$$ Get rid of the logarithm: $$\frac{z_0}{z_e} = \exp(b_1Re^{0.25}+b_2)$$ Multiply by $z_e$ and divide by the exponential: $$z_e = z_0\frac{1}{\exp(b_1Re^{0.25}+b_2)}$$ Use the fact that $\exp(-a)=\frac{1}{\exp(a)}$: $$z_e = z_0\exp(-(b_1Re^{0.25}+b_2))$$
H: Calculation Of Integral Related To Sequence Let's evaluate the following integral. Many trials but no success. $$\int_{-\pi}^{\pi}\dfrac{\sin nx}{(1+\pi^{x})\sin x}dx$$ AI: Hint: Substitute $x \to -x$, so $dx \to -dx$: $$I=-\int_{\pi}^{-\pi}\dfrac{\sin n(-x)}{(1+\pi^{-x})\sin (-x)} dx$$ $$I= \int_{-\pi}^{\pi}\dfrac{\sin n(-x)}{(1+\pi^{-x})\sin (-x)} dx$$ $$I=\int_{-\pi}^{\pi}\dfrac{\pi^x \sin nx}{(\pi^x+1)\sin x} dx$$ Add this to $I=\int_{-\pi}^{\pi}\dfrac{\sin nx}{(1+\pi^{x})\sin x}dx$: $$2I= \int_{-\pi}^{\pi} \dfrac{\sin nx}{\sin x} \cdot\dfrac{\pi^x+1}{1+\pi^x}dx=\int_{-\pi}^{\pi} \dfrac{\sin nx}{\sin x}dx$$ If you have $\int_{-a}^a f(x)\, dx$ and $f(x)$ is even, the same integral can be written as $2\int_0^af(x)\,dx$. I believe you can do the further calculations.
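Added: a quick numerical sanity check (my own, not part of the hint). After the symmetrization, $I=\int_0^\pi\frac{\sin nx}{\sin x}\,dx$, which should come out to $\pi$ for odd $n$ and $0$ for even $n$:

```python
import numpy as np

def I(n, pts=2_000_000):
    # even number of points so the grid never hits the removable singularity at x = 0
    x = np.linspace(-np.pi + 1e-9, np.pi - 1e-9, pts)
    y = np.sin(n * x) / ((1 + np.pi ** x) * np.sin(x))
    dx = x[1] - x[0]
    return float(np.sum(y[:-1] + y[1:]) * dx / 2)   # trapezoid rule by hand

for n in range(1, 6):
    print(n, round(I(n), 6))   # ~pi for n = 1, 3, 5 and ~0 for n = 2, 4
```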
H: Prove that the tangent space has the same dimension as the manifold I asked this question a couple of days ago. And I thought that I totally understood the question. However it turned out that I didn't, since the argument I constructed was proved to be wrong just now: Diffeomorphism preserves dimension The original text from Guillemin and Pollack's book was: "The dimension of the vector space $T_x(X)$ is, as you expect, the dimension $k$ of $X$. To prove this, we use the smoothness of he inverse $\phi^{-1}$. Choose an open set $W$ in $\mathbf{R}^N$ and a smooth map $\Phi': \mathbf{R}^N \rightarrow \mathbf{R}^k$ that extends $\phi^{-1}$. Then $\Phi'\circ\phi$ is the identity map of U, so the chain rule implies that the sequence of linear transformations $\mathbf{R}^k\xrightarrow{d\phi_0}T_x(X)\xrightarrow{d\Phi_x'}\mathbf{R}^k$ is the identity map of $\mathbf{R}^k$. It follows that $d\phi_0 :\mathbf{R}^k \rightarrow T_x(X)$ is an isomorphism, so $\dim T_x(X)=k$." Just for everyone's information, in GP's book, the convention is that $\phi$ is a diffeomorphism from $\mathbf{R}^k$ to a k-dimensional manifold $X\subset \mathbf{R}^N$ ($\phi :\mathbf{R}^k\rightarrow X$). My question is here: How does $d\phi_0 :\mathbf{R}^k \rightarrow T_x(X)$ is an isomorphism follow from previous argument? From the link given, it was shown that the existence of one-sided inverse does not guarantee that a linear map is an isomorphism. Thus I think there is a loophole in arguing "it follows that $d\phi_0 :\mathbf{R}^k \rightarrow T_x(X)$ is an isomorphism"... Thanks a lot for your help! AI: That $d\phi_0$ has a left inverse implies that it is an injective linear map. The tangent space $T_x(X)$ is defined by Guillemin and Pollack to precisely be the image of $\mathbb{R}^k$ under the map $d\phi_0$, so it is also a surjection. Thus $d\phi_0$ is a linear isomorphism.
H: Invariant subspaces of tensor product of SU(2) Let $\varphi_n$ denote the standard irreducible representation of the $SU(2)$ group with highest weight $n$. I know that $\varphi_2 \otimes \varphi_3 = \varphi_5 \oplus \varphi_3 \oplus \varphi_1$ (according to the Clebsch-Gordan decomposition). What will be the invariant subspaces of $\varphi_2 \otimes \varphi_3$? What will be the explicit form of the actions of the irreducible reps on the invariant subspaces? AI: Working under the premise that you want to find, e.g., the submodule isomorphic to $\varphi_3$ inside the tensor product $\varphi_2\otimes\varphi_3$. Let the vectors ${\cal B}_2=\{x_0,x_1,x_2\}$ be a basis consisting of weight vectors of $\varphi_2$ (of respective weights 2,0,-2). Let similarly ${\cal B}_3=\{y_0,y_1,y_2,y_3\}$ be a basis of weight vectors of $\varphi_3$ (of respective weights 3,1,-1,-3). Then the space of vectors of weight three in the tensor product consists of the linear combinations $$ a x_0\otimes y_1+bx_1\otimes y_0, $$ where $a,b$ are arbitrary scalars. Calculate the effect of the raising operator (= a ladder operator that increases the weight) on such a linear combination. I dare not do that myself, because the details depend on how you normalized things and produced the bases ${\cal B}_2$ and ${\cal B}_3$ in the first place. You will find that - up to a scalar multiple - there is exactly one such linear combination that is annihilated by the ladder operator. That linear combination is then a high weight vector of weight three, and generates the copy of $\varphi_3$. You can similarly find (up to scalar multiple) a unique linear combination $$ cx_0\otimes y_2+dx_1\otimes y_1+ex_2\otimes y_0 $$ of weight one vectors that is annihilated by the raising operator. That generates the remaining summand $\varphi_1$ inside $\varphi_2\otimes\varphi_3$.
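Added: since the coefficients depend on the chosen normalization, here is a numerical sketch (my own; it assumes the standard physics normalization $E\,|j,m\rangle=\sqrt{(j-m)(j+m+1)}\,|j,m+1\rangle$ for the raising operator) that locates the highest-weight vectors in $\varphi_2\otimes\varphi_3$:

```python
import numpy as np

def raising(n):
    """Raising operator on the irrep of highest weight n (dimension n + 1),
    basis ordered |m = j>, |m = j-1>, ..., |m = -j> with j = n/2."""
    j, d = n / 2, n + 1
    E = np.zeros((d, d))
    for i in range(1, d):              # column i holds |m = j - i>
        m = j - i
        E[i - 1, i] = np.sqrt((j - m) * (j + m + 1))
    return E

E = np.kron(raising(2), np.eye(4)) + np.kron(np.eye(3), raising(3))
weights = np.add.outer([2, 0, -2], [3, 1, -1, -3]).ravel()

for target in (5, 3, 1):
    cols = np.where(weights == target)[0]
    M = E[:, cols]                        # E restricted to that weight space
    print(target, len(cols) - np.linalg.matrix_rank(M))   # kernel dimension
    _, _, Vt = np.linalg.svd(M)
    print("  highest-weight vector:", np.round(Vt[-1], 4))
```

The kernel in each of the weight spaces $5, 3, 1$ is one-dimensional, confirming the decomposition; the printed vectors are the $(a,b)$ and $(c,d,e)$ of the answer for this particular normalization.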
H: Symmetric, transitive and reflexive properties of a matrix Say I had a relation \begin{align} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \end{align} where $a,b,c,d \in \mathbb{R}$, where $X$ is related to $Y$ if and only if $\det(X) = \det(Y)$ (where $\det(A) = ad-bc$) So I say it is reflexive since $xRx$ since $\det(X) = \det(X)$ However would it be Symmetric: $xRy$ implies $yRx$ (I would say yes, since to be related $\det(x)=\det(Y)$ so therefore $\det(y) = \det(x)$?) Transitivity is a hard, $xRy yRb$ then $xRb$ (I would say yes since if $xRy$, $\det(x)=\det(y)$ and then $\det(y) = \det(b)$, so $xRb$ since $\det(b) = \det(x)$ And not anti symmetric since if $X=\{1, 2, 3, 4\}$ and $Y = \{4, 2, 3, 1\}$ then they would be $xRy$ but $X$ does not equal $Y$. Am I on the right track with these? AI: You're correct. Since the definition of the given relation uses the equality relation (which is itself reflexive, symmetric, and transitive), we get that the given relation is also reflexive, symmetric, and transitive pretty much for free. To show that the given relation is not antisymmetric, your counterexample is correct. If we choose matrices $X,Y\in \left\{ \left[ \begin{array}{cc} a & b \\ c & d \end{array}\right] \text{ } \middle| \text{ } a,b,c,d\in \mathbb{R}\right\}$ , where: $$X=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \end{array}\right] \text{ and } Y=\left[ \begin{array}{cc} 4 & 2 \\ 3 & 1 \end{array}\right]$$ Then certainly $X$ is related to $Y$ since $\det(X)=1\cdot4-2\cdot3=-2=4\cdot1-2\cdot3=\det(Y)$. Likewise, since the relation was proven to be symmetric, we know that $Y$ is related to $X$. Yet $X\ne Y$.
H: Does a product measure on a product space constructed from two sub-fields of the same space determine a measure on the underlying space? Let $\mathcal{A}_1,\mathcal{A}_2$ be $\sigma$-algebras on $\Omega$. Let $P$ be a probability on $\mathcal{A}_1$ and let $Q$ be a Markov kernel from $\mathcal{A}_1$ to $\mathcal{A}_2$. Set $K:=P\otimes Q$, so that $K$ is a probability measure on $\mathcal{A}_1\otimes\mathcal{A}_2$. Does there exist a probability measure $K'$ on $\sigma\left(\mathcal{A}_1,\mathcal{A}_2\right)$, that satisfies $$ K'\left(D\cap E\right)=K\left(D\times E\right) $$ for all $D\in\mathcal{A}_1$, $E\in\mathcal{A}_2$? AI: The answer is: no. $K'$ is not even necessarily consistent. Set $\Omega:=\left\{0,1\right\}$, and $\mathcal{A}_1:=\mathcal{A}_2:=\mathcal{P}\Omega$. Define $$ P(0):=P(1):=\frac{1}{2} $$ $$ Q\left(\omega,B\right):=P(B) $$ Then $K=P^2$. Set $D_1:=D_2:=E_1:=\left\{1\right\}$, $E_2:=\Omega$. Then $D_1\cap E_1=D_2\cap E_2$, but $$ K\left(D_1\times E_1\right)=\frac{1}{4}\neq\frac{1}{2}=K\left(D_2\times E_2\right) $$
H: Question Regarding the Axiom of Extensionality Jech's text on Set Theory states the following: If X and Y have the same elements, then X = Y : ∀u(u ∈ X ↔ u ∈ Y ) → X = Y. The converse, namely, if X = Y then u ∈ X ↔ u ∈ Y, is an axiom of predicate calculus. Thus we have X = Y if and only if ∀u(u ∈ X ↔ u ∈ Y). The axiom expresses the basic idea of a set: A set is determined by its elements. To check my understanding, are the following true? (1) The axiom of extension states only that if two sets have the same members, then those two sets equal. (2) It is, on the other hand, an axiom of the language of set theory (i.e., predicate calculus) that if two sets equal, then they have the same members. A third question: (3) It seems that the axioms of ZFC are very much independent of the language being worked with. That is, the axioms of ZFC don't state we must be working in the language predicate calculus. Then I take it there exist languages we could be working with that don't have the property that if two sets are equal, they have the same members (so that we could only conclude if two sets have the same members, they are the same and not the reverse). Is this true? Do mathematicians ever work in languages besides predicate calculus? AI: (1) and (2) are correct. As for (3), recall Leibniz's Law -- if $a$ and $b$ are one and the very same thing, then whatever property $a$ has $b$ must have too. Applied to sets, if $a$ and $b$ are one and the very same set, then whatever property $a$ has $b$ must have too. So in particular, if $a$ and $b$ are one and the very same set, if $a$ has the property of having $x$ as a member, $b$ has the property of having $x$ as a member. In symbols, if that helps, if $a = b$ then $x \in a \to x \in b$. And of course, we likewise have if $a = b$ then $x \in b \to x \in a$. So: if $a$ and $b$ are sets, then if $a = b$ then $x \in a \equiv x \in b$ But $x$ was arbitrary, so there's implicit generalization here which we can make explicit, again borrowing logical notation if $a$ and $b$ are sets, then if $a = b$ then $\forall x(x \in a \equiv x \in b)$ Now, in getting this far we are just using the informal Leibniz's Law, notation for set-membership, and some handy logical notation. We aren't appealing to a formal system; rather we are appealing to informal mathematical reasoning (the sort of reasoning that logic books aim to formalize using the classical predicate calculus). Because we do want the formal predicate calculus to replicate this informal mathematical reasoning, we will get -- inside the calculus, applied to a language with $\in$ available $a = b \to \forall x(x \in a \equiv x \in b)$ But the ultimate grounds for this half of the formal version of the extensionality principle are the background informal reasoning using Leibniz's Law which we aim to regiment formally. What's going on here is not an arbitrary appeal to one formal calculus rather than another.
H: $\sum_i x_i^n = 0$ for all $n$ implies $x_i = 0$ Here is a statement that seems prima facie obvious, but when I try to prove it, I am lost. Let $x_1 , x_2, \dots, x_k$ be complex numbers satisfying: $$x_1 + x_2+ \dots + x_k = 0$$ $$x_1^2 + x_2^2+ \dots + x_k^2 = 0$$ $$x_1^3 + x_2^3+ \dots + x_k^3 = 0$$ $$\dots$$ Then $x_1 = x_2 = \dots = x_k = 0$. The statement seems obvious because we have more than $k$ constraints (constraints that are in some sense, "independent") on $k$ variables, so they should determine the variables uniquely. But my attempts so far of formalizing this intuition have failed. So, how do you prove this statement? Is there a generalization of my intuition? AI: For a slightly different method than Potato's second answer (but the idea is mainly the same): Without loss of generality, the system of equations can be written as: $$\left\{ \begin{array}{lcl} \lambda_1x_1 + \lambda_2x_2+ \dots + \lambda_k x_k &= &0 \\ \lambda_1x_1^2 + \lambda_2x_2^2+ \dots + \lambda_k x_k^2 & = & 0 \\ & \vdots & \\ \lambda_1x_1^k + \lambda_2x_2^k+ \dots + \lambda_k x_k^k & = & 0 \end{array} \right.$$ where $\lambda_i>0$, $x_i \neq 0$ and $x_i \neq x_j$ for $i \neq j$. Indeed, if $x_i=x_j$ replace $x_i+x_j$ with $2x_i$ and if $x_i=0$ just remove it. By contradiction, suppose $k \geq 1$. Now, the family $\{ (\lambda_1 x_1^j, \dots , \lambda_k x_k^j) \mid 1 \leq j \leq k \}$ cannot be linearly independent since the vector space $\{(y_1,\dots, y_k) \mid y_1+ \dots+ y_k=0 \}$ has dimension $k-1$ (it is a hyperplane). Therefore, the matrix $$A:=\left( \begin{matrix} \lambda_1x_1 & \lambda_2x_2 & \dots & \lambda_k x_k \\ \lambda_1x_1^2 & \lambda_2x_2^2 & \dots & \lambda_k x_k^2 \\ \vdots & \vdots & & \vdots \\ \lambda_1x_1^k & \lambda_2x_2^k & \dots & \lambda_k x_k^k \end{matrix} \right)$$ is not invertible. Using Vandermonde formula, $$0= \det(A)= \prod\limits_{i=1}^k \lambda_i \cdot \prod\limits_{i=1}^k x_i \cdot \prod\limits_{i<j} (x_i-x_j) $$ which is nonzero by assumption.
H: Linear algebra classic, Farkas lemma application $A \in M_{m \times n}(\mathbb{R})$ and $b \in \mathbb{R}^m$. Farkas' lemma says exactly one of the following holds: (a) there exists some $x \in \mathbb{R}^n$, $x \geq 0$, such that $Ax = b$ (b) there exists some vector $p \in \mathbb{R}^m$ such that $p^TA \geq 0$ and $p^Tb < 0$. where $0$ means all-zero vector of dimension $n$ and comparing two vectors means comparing all components of the vectors. Let $x_1,x_2,\dots,x_m \in \mathbb{R}^n$. My task is to derive from Farkas’ Lemma that exactly one of the following holds: (a) $0 \in$ convex.hull$\{ x_1,x_2,\dots,x_m \}$ (b) there is a row vector $y \in \mathbb{R}^n$ such that $yx_i > 0$ for $i = 1, . . . , m$. I tried but cannot proceed. From the first you can deduce that $x_i$ are linearly dependent but then i am stuck since i don't see how to apply the lemma :( AI: $\def\Mat{\mathrm{Mat}}\def\R{\mathbb R}$Let $A \in \Mat_{n+1,m}(\R)$ given by $$A = \begin{pmatrix} x_1 & x_2 & \cdots & x_m\\ 1 & 1 & \cdots & 1 \end{pmatrix} $$ and $b \in \R^{n+1}$ by $b = \binom{0_n}1$. Then, by Farkas, either (a) for some $\lambda\in\R_{\ge 0}^m$, we have $A\lambda = b$, which is equivalent to $\sum_{i=1}^m \lambda_i x_i = 0$, $\sum_i \lambda_i = 1$, that is $0 \in \mathrm{conv}\{x_1, \ldots, x_m\}$. or (b) for some $y = (\bar y, y_{n+1})^t \in \R^{n+1}$ we have $y^TA \ge 0$ that is $\bar y^t x_i + y_{n+1} \ge 0$ for each $i$, and $y^t b = y_{n+1} < 0$. So this holds, if for each $i$ $\bar y^t x_i \ge -y_{n+1} > 0$.
H: Show that for all $a,b,c>0$, $\frac 1 {\sqrt[3]{(a+b)(b+c)(c+a)}}\geq\frac 3 {2(a+b+c)}$. Show that for all $a,b,c>0$, $\displaystyle\frac 1 {\sqrt[3]{(a+b)(b+c)(c+a)}}\geq\frac 3 {2(a+b+c)}$. I tried to cube the both sides, and expand it, but that'll be too troublesome, is there simpler ways? Thasnk you. AI: HINT: Using AM-GM inequality, $$\frac{a+b+b+c+c+a}3\ge \sqrt[3]{(a+b)(b+c)(c+a)}$$
H: Being inside or outside of an ellipse Let $A$ be a point not belonging to an ellipse $E$. We say that $A$ lies inside $E$ if every line passing through $A$ intersects $E$. We say that $A$ lies outside $E$ if some line passing through $A$ does not intersect $E$. Let $E$ be the ellipse with semi-axes $a$ and $b$. Show that (a) a point $A=(x,y)$ is inside $E$ if $\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}< 1$ (b) a point $A=(x,y)$ is outside $E$ if $\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}> 1$. I wrote down the equation of an arbitrary line $L$ passing through $A$. If $A$ is inside $E$ then clearly there is another point, say $B$, that belongs to $E\cap L$. I ended up with a bunch of big equations which didn't give any result. AI: Let the line $L$ pass through the origin and a point $A(x_0,y_0),x_0\neq0$ inside the ellipse, so $$L: y=\frac{y_0}{x_0}x$$ This line should intersect the ellipse, so we consider the two equations below simultaneously: $$y=\frac{y_0}{x_0}x,~~~ \frac{x^2}{a^2}+\frac{y^2}{b^2}=1$$ Then we get $$\left(\frac{y_0^2}{b^2x_0^2}+\frac{1}{a^2}\right)x^2=1~~~~\text{or}~~~~~\frac{y_0^2}{b^2}+\frac{x_0^2}{a^2}=\frac{x_0^2}{x^2}$$ Since $A$ lies inside $E$, the left-hand side is less than $1$, so $\frac{x_0^2}{x^2}<1$, i.e. $x^2>x_0^2$: the line crosses the ellipse at two points with $A$ strictly between them. The same computation, with an arbitrary line through $A$ written in parametric form, handles lines not through the origin.
H: Number of equivalent rectangular paths between two points I am trying to determine the number of paths between two points. I am representing the paths as a list of steps "ruru" = right -> up -> right -> up For my purposes, we can assume that there will never be "backwards" steps (lrlr), and we will only be working with diagonal cases (equal number of steps in each direction, ie: ulul, NOT luu) Example: Between points (0,0) and (2,2) there are 6 different paths (uurr, urur, urru, ruur, ruru, rruu) How would I find this value mathematically? If it is of any help, I found this by exhaustion: 20 paths between (0,0) and (3,3), 6 paths between (0,0) and (2,2), and 2 paths between (0,0) and (1,1). Note: I will need to be able to do this for both 2 and 3 dimensional cases. The same assumptions are true for the 3D case, I just have a 3rd variable (f and b for forward and back) a valid 3D path might look something like: furrfu. My ideal answer would be in the form "f(dimensions,numSteps) = numPaths" AI: To get from $\langle 0,0\rangle$ to $\langle m,n\rangle$ you must take $m$ steps to the right and $n$ steps up. These $m+n$ steps may be taken in any order. As soon as you know which $m$ of them are to the right, you know the whole path. There are $\binom{m+n}m$ ways to select $m$ of the $m+n$ steps to be the rightward steps, so there are $$\binom{m+n}m=\binom{m+n}n=\frac{(m+n)!}{m!\,n!}$$ possible paths. The three-dimensional problem is similar: to get from the origin to $\langle k,m,n\rangle$ you must choose which $k$ of the steps are to the right and which $m$ are up; the remaining $n$ must be in the third direction. There are $$\binom{k+m+n}k\binom{m+n}m$$ to make these choices, so that is the number of paths. This can be expressed more compactly as the multinomial coefficient $$\binom{k+m+n}{k,m,n}=\frac{(k+m+n)!}{k!\,m!\,n!}\;.$$
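Added: in the requested shape f(dimensions, numSteps), the diagonal case ($n$ steps in each of $d$ directions) specializes the multinomial above to $(dn)!/(n!)^d$; a small sketch:

```python
from math import factorial

def f(dimensions, num_steps):
    # multinomial coefficient (d*n)! / (n!)^d for n steps in each of d directions
    return factorial(dimensions * num_steps) // factorial(num_steps) ** dimensions

print(f(2, 1), f(2, 2), f(2, 3))   # 2 6 20  (matches the values found by exhaustion)
print(f(3, 2))                      # 90 paths from the origin to (2, 2, 2)
```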
H: property of complementary cumulative distribution function $$\Bbb{E}(X) = \int_0^\infty xf(x)dx \ge \int_0^c xf(x)dx + c \int_c^\infty f(x)dx$$ I'm having trouble understanding the above formula. I do understand that $ \Bbb{E}(X) = \int_0^\infty xf(x)dx \ge \int_0^c xf(x)dx$ but I don't yet get where $c \int_c^\infty f(x)dx$ comes from , how it relates to it all.. Please explain in mathnoob-friendly terms. AI: For any $c>0$ $$ \int_0^\infty xf(x)\,\mathrm dx=\int_0^cxf(x)\,\mathrm dx+\int_c^\infty xf(x)\,\mathrm dx $$ and $$ \int_c^\infty xf(x)\,\mathrm dx\geq \int_c^\infty cf(x)\,\mathrm dx $$ since $0\leq cf(x)\leq xf(x)$ for all $x\in [c,\infty)$.
H: For which values is $x^3$ less than or equal to $3x$? The title says it all. The answers say: $x\le -\sqrt{3}$ and $0\le x\le \sqrt{3}$. I'm not sure how to attempt this question. When I simplify, I get $x^2\le 3$, so $x\le \pm\sqrt{3}$. Thanks! AI: The strict inequality can be solved very easily. $$x^3< 3x\iff x(x^2-3)< 0 $$ If $x<0$, we need $x^2-3>0$. As $x^2-3>0\implies$ either $x>\sqrt3$ or $x<-\sqrt3$, and as $x<0$, the required region will be $x<-\sqrt3$. Similarly, if $x>0$, we need $x^2-3<0\implies -\sqrt3<x<\sqrt3$, and as $x>0$, the required region will be $0<x<\sqrt3$. For the non-strict version, note that equality $x^3=3x$ holds exactly at $x=0,\pm\sqrt3$, so those boundary points are included, giving $x\le-\sqrt3$ or $0\le x\le\sqrt3$. Alternatively, HINT: $$x^3< 3x\iff x(x-\sqrt3)(x+\sqrt3)< 0 $$ This will hold true if an odd number (one or three) of the factors is $< 0$. Now check the ranges $(-\infty,-\sqrt3);[-\sqrt3,0);[0,\sqrt3);[\sqrt3, \infty)$
H: Is it true that a monotonic, differentiable function with non-zero derivative has a continuous inverse? Is it true that a strictly monotonic, differentiable function on $\mathbb{R}$ with non-zero derivative has a continuous inverse? This is a small caveat in a problem I'm working on; if this is true then I'm all good. AI: Something stronger is true. A strictly monotone and continuous function on $\mathbb R$ has a continuous inverse. The function doesn't even have to have a derivative.
H: Continuity of an $\mathbb {R}^2$ function Let $f$ be an $\mathbb{R}^2$ endomorphism and $N:\mathbb{R}^2\to\mathbb{R}^+$ defined by $$\forall u \in \mathbb {R }^2, N(u) = ||f(u)|| $$ I need to show $N$ is continuous. The problem is that $N$ is only a seminorm, otherwise it would have verified Lipschitz criterion and I'd be done. Thanks for your help. AI: Let's prove that a linear function is continuous. Recalling the definition: A function $f:\Bbb{R}^n\to\Bbb{R}^k$ is continuous iff for each $x\in\Bbb{R}^n$ and for each $\epsilon>0$ there exists a $\delta>0$ such that for all $y\in\Bbb{R}^n$ with $||y-x||<\delta$ we have $||f(y)-f(x)||<\epsilon$. Now if $f$ is linear, then it takes the form $f(x)=Ax$ for some $k\times n$ matrix $A$. Then $$||f(y)-f(x)||=||Ay-Ax||=||A(y-x)||,$$ and we want to show that this is less than $C||y-x||$ for some constant $C$ dependent on $A$ only. This would finish the proof, because for any $\epsilon$, if we let $\delta=\epsilon/C$ we have that $$||f(y)-f(x)||=||A(y-x)||\le C||y-x||<C\delta=\epsilon,$$ and so $f$ is continuous. But let's back up the claim that for all $x\in\Bbb{R}$, $||Ax||\le C||x||$ for some $C$. (The smallest upper bound for $C$ here is conventionally called the induced matrix norm $||A||$.) An easy-to-prove upper bound here is the Frobenius norm, defined by $||A||_F^2=\sum_{i=1}^k\sum_{j=1}^nA_{ij}^2$. Writing out all these norms as sums (and squaring both sides for convenience): $$||Ax||^2=\sum_{i=1}^k\left(\sum_{j=1}^nA_{ij}x_j\right)^2$$ $$||A||_F^2||x||^2=\left(\sum_{i=1}^k\sum_{j=1}^nA_{ij}^2\right)\sum_{k=1}^nx_k^2=\sum_{i=1}^k\left(\sum_{j=1}^nA_{ij}^2\cdot\sum_{k=1}^nx_k^2\right)$$ We wish to prove that $||Ax||^2\le ||A||_F^2||x||^2$, but given the above expressions, this reduces to $$\left(\sum_{j=1}^nA_{ij}x_j\right)^2\le\sum_{j=1}^nA_{ij}^2\cdot\sum_{k=1}^nx_k^2,$$ which is exactly the famous Cauchy–Schwarz inequality. This completes the proof. Now that we know that linear functions are continuous, let's return to the problem. We want to know that an endomorphism of $\Bbb{R}^2$, i.e. a linear function $f:\Bbb{R}^2\to\Bbb{R}^2$, composed with the norm operation $||\cdot||:\Bbb{R}^2\to\Bbb{R}$, is continuous. If it is accepted that the norm operation is continuous and the composition of continuous functions is continuous, then the above proof is sufficient to prove that $N(x)=||f(x)||$ is continuous. Although the above was proven in full generality of $\Bbb{R}^n\to\Bbb{R}^k$, for application to the problem we need only $\Bbb{R}^2\to\Bbb{R}^2$, and the only linear functions $\Bbb{R}^2\to\Bbb{R}^2$ are functions of the form $f(x)=Ax$ for $A$ a $2\times2$ matrix; this yields the representation $f(x)=(ax_1+bx_2,cx_1+dx_2)$. Note that these functions (choosing arbitrary constant $a,b,c,d\in\Bbb{R}$) are the only endomorphisms of $\Bbb{R}^2$.
H: Proving $f(x) = x^2 \sin(1/x)$, $f(0)=0$ is differentiable at $0$, with derivative $f'(0)= 0$ at zero I need a solution for this question. I've been trying out this question for days and I haven't been able to find out its solution yet. And some explanation would help too. Show that the function f defined by: $$f(x):= \begin{cases} x^2\sin(1/x) &:\text{if $x \ne 0$} \\ 0 &:\text{if $x=0$} \end{cases}$$ is differentiable at $x=0$, and that $f'(0)=0$. AI: Hint: We have $-x^2 \le f(x) \le x^2$ since $|\sin(1/x)|\le 1$, hence $$ \left|\frac{f(x) - f(0)}{x-0}\right| = \left|\frac{f(x)}x\right| \le |x| $$ for $x \ne 0$. Letting $x \to 0$, the Squeeze Theorem shows that the difference quotient tends to $0$; that is, $f'(0)$ exists and $f'(0)=0$.
H: Plane geometry tough question $\triangle ABC$ is right angled at $A$. $AB=20, CA= 80/3, BC=100/3$ units. $D$ is a point between $B$ and $C$ such that the $\triangle ADB$ and $\triangle ADC$ have equal perimeters. Determine the length of $AD$. AI: Write $x=AD$ and $y=BD$, so that $DC=\frac{100}{3}-y$. Add up the sides of $\triangle ADB$ and $\triangle ADC$ to get their perimeters, and then equate them: $x+\frac{100}{3}-y+\frac{80}{3}=x+y+20$ Solve for $y$ to get $y=20$, and then find $x$ using the cosine rule in $\triangle ABD$ (here $\angle B = \tan^{-1}\frac{CA}{AB}$, so $\cos\angle B=\frac{3}{5}$): $x=\sqrt{20^2+20^2-2(20)(20)\cos\left(\tan^{-1}\frac{80}{20\times3}\right)}=\sqrt{800-480}$, giving $x=8\sqrt{5}$.
H: Is there any simple way to find out all divisors of $n+1$ under the given conditions? Assuming I have been given a really large number $n \in \mathbb{N}$ (let's say, $10^{80} \le n \le 10^{100}$) and I know all the divisors of every number $x=0,1,\ldots,n-1$. Is there any simple, universal and not too time-consuming way to find out all divisors of $n$? AI: Apart from electronically, no. Knowing the divisors of numbers less than $n$ isn't really going to help. You are still going to have to test divisors, which means that you might have to search for the first multiple of $p$ under the required number. On the other hand, there are some relatively fast proggies that will factorise numbers in these ranges. 'factor for OS/2 and Windows etc'. They're quite good. One could consider finding the factors, say of 121, knowing the factors of 2 to 120. Even for simple tests, one has to search back to find the previous multiple of 7, or 11, or 13, etc. On the other hand, if you are trying to factorise a mob of numbers that follow each other, such as 14400, 14401, 14402, 14403, 14404, 14405, 14406, ..., 14409, it is really useful to do things like pick out the multiples of 2, 3, 5, 7, etc, because if 7 divides 14406, it does not divide any of the others in the block. But this sort of test supposes that you have access to all of the primes less than the square root.
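Added: the "factorise a block of consecutive numbers" idea from the last paragraph, as a rough Python sketch (sympy's prime generator is just an assumed convenience; any list of primes up to the square root works):

```python
from math import isqrt
from sympy import primerange   # assumed available; any prime generator works

def factor_block(N, width):
    """Factor all of N, N+1, ..., N+width-1 by sieving primes <= sqrt(N+width)."""
    remaining = list(range(N, N + width))
    factors = [[] for _ in range(width)]
    for p in primerange(2, isqrt(N + width) + 1):
        start = ((N + p - 1) // p) * p          # first multiple of p in the block
        for m in range(start, N + width, p):
            while remaining[m - N] % p == 0:
                remaining[m - N] //= p
                factors[m - N].append(p)
    for i, r in enumerate(remaining):
        if r > 1:
            factors[i].append(r)                 # leftover cofactor is itself prime
    return {N + i: factors[i] for i in range(width)}

print(factor_block(14400, 10))
```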
H: I don't understand equivalence classes with relations I am not quite understanding equivalence classes. For example I have this problem: Let $A$ be the set of integers and $\quad a\;R\;b\quad$ if and only if $\quad |a| = |b|$. I have proved that this is an equivalence relation, (its reflexive, symmetric and transitive), but how do I show the equivalence classes? AI: Equivalence classes (wrt your equivalence relation) are subsets of elements of the original set with the following property: every element of a certain equivalence class must be equivalent (wrt the equivalence relation) to any other in it and inequivalent to any other out of it. This implies that equivalence classes are disjoint subsets of the original set. (*) You are asked to construct the maximal subsets of $\mathbb Z$, such that - picked one - every element in it has the same absolute value as any other element of the subset. It should be fairly easy to understand how many elements can belong to every equivalence class in this case (there is an important exception, though: $0$). Notice that in general the cardinalities of the equivalence classes of a set wrt an equivalence relation are different! (*) This second part of the definition was added after the useful comments of two users below.
H: There are 50 rooms in a line. If there are 26 rooms with girls, prove there are two girls exactly 5 rooms apart. There are 50 rooms in a line. If there are 26 rooms with girls, prove there are two girls exactly 5 rooms apart. My idea was to place 25 girls into pairs of rooms so that there is no scenario in which two girls are 5 apart. Then, if you place one more, by the pigeonhole principle there's a girl exactly 5 away - but I want a more formal way of explaining this. Does anyone have a way? Cheers. AI: A hint: Color the rooms with five colors according to the remainder mod 5 of the room number. To make it formal: each color class consists of $10$ rooms spaced exactly $5$ apart, say $r, r+5, r+10, \dots, r+45$. If no two girls were exactly $5$ rooms apart, then no two consecutive rooms of a class could both contain girls, so each class would contain at most $5$ girls, and the five classes at most $25$ girls in total - contradicting the $26$ given.
H: Find the limiting distribution Find the limiting distribution for $n\rightarrow \infty$ of $\prod\limits^n_{i=1}X_i$. Given is that $f(x)=\frac{1}{2x\sqrt{2\pi}}e^{-\frac{1}{8}(\ln x-\theta)^2}, x\geq 0$. AI: The density of the individual $X_i$ is nearly the density of a normal distribution, but it has $\log x$ where one would expect $x$ and that additional $\frac{1}{x}$-factor. That means that the $X_i$ are log-normally distributed, i.e. $\log X_i$ is normally distributed. See http://en.wikipedia.org/wiki/Log-normal_distribution. By taking the logarithm of the product $S_n = \prod_{i=1}^n X_i$, you get $$ \log S_n = \log \prod_{i=1}^n X_i = \sum_{i=1}^n \log X_i \text{.} $$ $\log S_n$ is thus normally distributed too, and $S_n$ is thus log-normally distributed. The mean and variance of $\log S_n$ are simply $n$ times the mean and variance, respectively, of $\log X_i$, which can easily be found by looking at the density of $X_i$. Using that $\log S_n \sim N(\theta n,4n)$, i.e. that $P(\log S_n \leq x) = \Phi\left(\frac{x-n\theta}{2\sqrt{n}}\right)$, where $\Phi$ is the CDF of the standard normal distribution, you get $$ F_n(x) := P(S_n \leq x) = \Phi\left(\frac{\log x-n\theta}{2\sqrt{n}}\right) $$ for the CDF of $S_n$. If $S_n$ is to converge in distribution to $S$ with CDF $F$, then $F_n \to F$ pointwise, except for discontinuity points of $F$. You thus have the following cases. If $\theta > 0$, $F_n(x) \to 0$ as $n \to \infty$ for all $x > 0$, and thus by monotonicity any possible limit distribution would have to be identically zero, which is impossible. The product thus doesn't converge in this case. If $\theta < 0$, $F_n(x) \to 1$ as $n \to \infty$ for all $x > 0$. In other words, $F(x)=1$ if $x > 0$, and by right-continuity of CDFs also $F(0)=1$. On the other hand, $F_n(x) = 0$ if $x < 0$, and thus $$ F(x) = \begin{cases} 0& \text{if $x < 0$} \\1 &\text{if $x \geq 0$}\end{cases} $$ If $\theta = 0$, $F_n(x) \to \frac{1}{2}$ as $n \to \infty$ for all $x > 0$. This is also impossible, since you need $F(x) \to 1$ for $x \to \infty$ if $F$ is to be a CDF. The product $S_n$ thus converges in distribution exactly if $\theta < 0$, and the limit distribution is a point distribution with $P(S=0)=1$, $P(S\neq0)=0$ in that case. Intuitively, this makes sense. If the means of the logarithms are less than zero, on average multiplying will make the numbers smaller and smaller. Otherwise, the numbers get larger and larger and the product diverges.
H: a recurrence equation interpolating linear and exponential $f(n+1) = f(n) + f(n)^{a}$ where $a \in (0,1)$ and $n \ge 1$ with $f(1) = m$. If $a=0$, we see $f(n) = m + n - 1$ and if $a=1$, we see $f(n) = 2^{n-1}m$. So the recursion seems to interpolate between linear and exponential forms. Is there a closed form or tight approximation or tight asymptotics for $f(n)$ in terms of $n$, $a$ and $m$? Update after leshik's answer Can the same asymptotics hold if $a \ge 1$? What is the asymptotics/tight approximation/exact expression if $a \ge 1$? AI: We "guess" the asymptotic by making use of Stolz-Cesaro theorem. The idea is to show that the sequence behaves like $n^{\alpha}$ for an appropriate choice of $\alpha.$ By Stolz- Cesaro, $$A=\lim_{n\to\infty}\frac{x_{n+1}}{(n+1)^{\alpha}}=\lim_{n\to\infty}\frac{x_{n+1}-x_n}{(n+1)^{\alpha}-n^{\alpha}}=\lim_{n\to\infty}\frac{x_n^a}{\alpha n^{\alpha-1}}=\frac{1}{\alpha}\lim_{n\to\infty}\left(\frac{x_n}{ n^{{\alpha-1}/a}}\right)^a.$$ Now, we want to have $\frac{\alpha-1}{a}=\alpha$ or $\alpha=\frac{1}{1-a}.$ The equation for our limit can be rewritten as $A=\frac{1}{\alpha}A^a$ which gives $A=(1-a)^{\frac{1}{1-a}}.$
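Added: a quick numerical check of the resulting asymptotic $x_n \sim \big((1-a)\,n\big)^{1/(1-a)}$ for $a\in(0,1)$ (my own sketch; the parameters are arbitrary, and it says nothing about the $a\ge1$ case from the update):

```python
def f(n, a, m):
    # iterate f(k+1) = f(k) + f(k)^a starting from f(1) = m
    x = m
    for _ in range(n - 1):
        x += x ** a
    return x

a, m = 0.5, 3.0
for n in (10**2, 10**4, 10**6):
    predicted = ((1 - a) * n) ** (1 / (1 - a))   # here (n/2)^2
    print(n, f(n, a, m) / predicted)             # ratio drifts toward 1 as n grows
```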
H: Curve fitting: $a f_1(x)+b f_2(x)+c f_3(x)+d f_4(x)$ I want to fit my data $y$ with a method such as $$a f_1(x)+b f_2(x)+c f_3(x)+d f_4(x)$$ However I don't know how to obtain coefficients $(a,b,c,d)$ of the known functions $(f_1(x),f_2(x),f_3(x),f_4(x))$. I don't know least squares very well. Therefore I want a matrix formula that I can get the coefficients for any data easily, without getting into learn least squares since I don't have time. Could you please help me? AI: Define a grid of $m$ values $x_i$ corresponding to your data $y_i$. Then you get an overdetermined system of linear equations: $$\mathbf{Ax}=\mathbf{y}$$ where $\mathbf{A}$ is a $m\times 4$ matrix with elements $a_{ij}=f_j(x_i)$, $\mathbf{x}=[a,b,c,d]^T$ is your vector of unknowns, and the vector $\mathbf{y}$ contains your data. You can solve this system in a least squares sense. Depending on the software you use, you can either directly solve it like $$\tt{x=A \backslash y}$$ in Matlab or Octave. Otherwise you need to compute the pseudo-inverse and obtain the least squares solution like this: $$\mathbf{x}=(\mathbf{A}^T\mathbf{A})^{-1}\mathbf{A}^T\mathbf{y}$$
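Added: the same recipe in NumPy, in case that is more accessible than Matlab (the basis functions here are placeholders for your known $f_1,\dots,f_4$):

```python
import numpy as np

f1, f2, f3, f4 = np.sin, np.cos, np.exp, lambda x: x**2   # example basis functions

x = np.linspace(0, 1, 50)                   # grid of the x_i
y = 2*f1(x) - 3*f2(x) + 0.5*f3(x) + f4(x)   # fake data with known coefficients

A = np.column_stack([f1(x), f2(x), f3(x), f4(x)])   # a_ij = f_j(x_i)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)      # least-squares solve
print(coeffs)   # recovers [ 2.  -3.   0.5  1. ]
```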
H: is this symmetric A is the set of all functions $\mathbb{R}$ $\to$ $\mathbb{R}$. $f$ is related to $g$ if and only if $f(x) \le g(x)$ for all $x \in \mathbb{R}$. I said it's reflexive since it is less than OR equal, so $f(x)=f(x)$. However, would it be symmetric and transitive? I said it would not be symmetric (counter-example): take $x,y \in \mathbb{R}$; if $x=3$, $y=10$ then it would not be symmetric, but I'm not too sure... Also, since it is a partial order, is it a total order? AI: It is surely not symmetric, as you pointed out in your question. It is actually anti-symmetric, hence your relation (being also reflexive and transitive) turns out to be a (partial) order, not an equivalence. It is not a total order, though: for instance $f(x)=x$ and $g(x)=-x$ are incomparable, since neither $f(x)\le g(x)$ for all $x$ nor $g(x)\le f(x)$ for all $x$.
H: Simple meaning to Center of a group Recently I was learning Center of groups and on referencing the group table, I observed is that all the rows that are also present as columns are the centers of any group. So, I made a small program to check it for various groups and found somewhat consistent answers. I wrote a small blog on it but I have a fear in my heart if it is only true or not and my post might send some wrong information to other person. The way I made program was to find similar rows and columns instead of calculating $xg=gx$ as per definition. Is it perfectly fine to let it be so? AI: Yes it is. For some element $g \in G$, the $g$th row (let's write this short for the row corresponding to $g$) consists of the elements $gx$ for $x \in G$ in some particular order, the $g$th column of the elements $xg$, $x \in G$ (same order on the $x$s). If row and column are equal elementwise, this means $gx = xg$ for all $x \in G$, that is exactly true if $g \in Z(G)$.
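Added: your row/column test, written out as a short sketch (the Cayley table is assumed to be an $n\times n$ array with table[g][x] $= g\cdot x$ over elements named $0,\dots,n-1$):

```python
def center(table):
    """Elements whose row equals their column, i.e. g with g*x == x*g for all x."""
    n = len(table)
    return [g for g in range(n)
            if all(table[g][x] == table[x][g] for x in range(n))]

# Example: Z_4 under addition mod 4 is abelian, so every element is central.
z4 = [[(a + b) % 4 for b in range(4)] for a in range(4)]
print(center(z4))   # [0, 1, 2, 3]
```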
H: 3 cards be selected from a pack of 52 playing cards if at least one of them is an ace? not more than one is an ace? Distributing 9 books to 3 peoples… I am currently studying Extension 1 Mathematics. I missed two classes and I figured out that tomorrow I will have a quiz. Can you help me to solve this permutation and combination question: In how many ways 3 cards be selected from a pack of 52 playing cards if: (i) at least one of them is an ace; (ii) not more than one is an ace. In how many ways can 9 books be distributed amongst a man, a woman and a child, if the man receives $4,$ the woman $3,$ and the child $2$? I know the solution (below), but I don't understand how to get there. $1$ (i) $4804$ (ii) $21808$ $2$ $1260$ Thanks in advance! AI: Hint: 1. (i) There are $\binom{52}{3}$ ways to draw $3$ cards. There are $\binom{48}{3}$ ways to draw $3$ cards without aces. Hint: 1. (ii) There are $4$ aces and $\binom{48}{2}$ ways to choose $2$ cards without aces. Remember that no aces need be drawn. Hint: 2. First think of the woman and child as one group and the man as another. How many ways are there to give $9$ books to them if the man gets $4$ and the group gets $5$? For each of those, how many ways can the $5$ books be given to the woman and child if the woman gets $3$ and the child gets $2$? Bonus: Consider that $\binom{9}{5}\binom{5}{2}=\dfrac{9!}{4!\,3!\,2!}$ and how that relates to 2. The answers in the book are correct.
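Added: the hints evaluated with math.comb, as a quick arithmetic check:

```python
from math import comb, factorial

print(comb(52, 3) - comb(48, 3))        # 4804   (at least one ace)
print(comb(48, 3) + 4 * comb(48, 2))    # 21808  (at most one ace)
print(comb(9, 4) * comb(5, 3))          # 1260   (books: man 4, woman 3, child 2)
print(factorial(9) // (factorial(4) * factorial(3) * factorial(2)))   # 1260 again
```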
H: How to show $x^4 - 1296 = (x^3-6x^2+36x-216)(x+6)$ How to get this result: $x^4-1296 = (x^3-6x^2+36x-216)(x+6)$? It is part of a question about finding limits at mooculus. AI: Hints: $1296=(-6)^4$ and $a^n-b^n=(a-b)(a^{n-1}+a^{n-2}b+\ldots+ab^{n-2}+b^{n-1})$. (Take $a=x$ and $b=-6$: then $a-b=x+6$ and the other factor is exactly $x^3-6x^2+36x-216$. You can also verify directly: $x^4-1296=(x^2-36)(x^2+36)=(x+6)(x-6)(x^2+36)$, and $(x-6)(x^2+36)=x^3-6x^2+36x-216$.)
H: How to evaluate limiting value of sums of a specific type We know that if $f$ is integrable in (0,1) then $$ \lim_{n \to \infty} \frac{1}{n}\sum_{k=1}^{n}f(k/n) = \int_{0}^{1}f(x)dx. $$ Recently I found the following sum $$ \lim_{n \to \infty} \sum_{k=1}^{n}\frac{k}{n^2 + k^2} = \frac{\ln 2}{2}. $$ This sum cannot be expressed in the form $\lim_{n \to \infty} \frac{1}{n}\sum_{k=1}^{n}f(k/n)$. Rather it is of the form $$ \lim_{n \to \infty} \frac{1}{n^2}\sum_{k=1}^{n} k f(k/n). $$ Part 1 Is there any technique to evaluate limits of this form? Part 2 Generalizing the question one step further, how can we asymptotics of summations of the form $$ \sum_{k=1}^{n} f(k)g\Big(\frac{k}{n}\Big). $$ AI: Let's rewrite your sum. Note, that we have $$ \frac 1{n^2} \sum_{k=1}^n k\cdot f(k/n) = \frac 1n \sum_{k=1}^n \frac kn \cdot f(k/n) $$ So, if we define $g \colon [0,1] \to \mathbb R$ by $g(x) = xf(x)$, we get $$ \frac 1{n^2} \sum_{k=1}^n k\cdot f(k/n) = \frac 1n \sum_{k=1}^n g(k/n), $$ which for $n \to \infty$ converges, as you noted to $$ \int_0^1 g(x) \, dx = \int_0^1 x\cdot f(x)\, dx. $$
H: Series of Vectors In $\mathbb{R}^n$ we define sequences of it's elements in a very natural manner, we say that a sequence is a function $x : \mathbb{N} \to \mathbb{R}^n$ and we denote it by $(x_k)$ as in the $n=1$ case. Then defining limit of a sequence and all of that is pretty straightforward and acts like one extension of everything done in $\mathbb{R}$. My doubt is: is there something that stops us from defining the sum of a sequence in $\mathbb{R}^n$? All the books I've seem until now define sequences in $\mathbb{R}^n$ but not series. I thought it was possible to define it in an analogous manner: we consider the sequence of partial sums of some sequence $(x_k)$, this will give us another sequence defined in $\mathbb{R}^n$ and we could say that the series converges if the sequence of partial sums converges. Is it possible to do this or there is some inconsistency I'm failing to see? Thanks very much in advance. AI: It is possible. The sum of a series of vectors has, as its $k$th component, the sum of the series formed by the $k$th components of the vectors; each of these component series is an ordinary series in $\mathbb{R}$. The vector series converges if and only if all of the component series converge.
H: Simple conceptual Fundamental Thm of Calculus question When applying the Fundamental Thm of Calculus in complex analysis, what does it mean for an open connected set to contain a loop? For example, does my red-color open annulus contain the black colour loop? I think so, but I'm struggling with understanding this: When asked to integrate $\frac 1{(z-5)}$ around a circle centred at 5, why can't I use the Fundamental Thm of Calculus? After all, the function is continuous in the red annulus, and furthermore, its primitive $\ln z$ is analytic throughout the annulus which does not touch the negative real axis. AI: The fundamental theorem of calculus is applied to a special contour: intervals of the real line. And it applies to functions that have an antiderivative, or primitive. In the complex plane, the analogous result states that a function with a (single-valued) antiderivative on an open set may be integrated by evaluating that antiderivative at the endpoints of a contour in the set, no matter the shape of the contour. In your case, the function $1/(z-5)$ has a pole in the interior of the black contour, and the value of the integral is $i 2 \pi$. You can see this from a parametrization $z=5+r e^{i \phi}$. Now, suppose you evaluate the integral via a Fundamental Theorem of calculus mindset: let's say we want to evaluate $\log{(z-5)}$ at the beginning and end of the contour. Parametrize by $z=5 + r e^{i \phi}$, and so we will be subtracting the value of the antiderivative at $\phi=0$ from the value at $\phi=2 \pi$: $$\log{(r e^{i 2 \pi})} - \log{(r e^{i 0} )} = \log{r} + i 2 \pi - \log{r} - i 0 = i 2 \pi$$ It is the multivaluedness of the log function that causes the nonzero value: there is no single-valued antiderivative of $1/(z-5)$ on any region containing the whole loop, so the hypothesis of the theorem fails. But you see that it all agrees and provides for a consistent basis for computation. A word about your annulus: yes, $1/(z-5)$ is analytic throughout your annulus. The integral about the boundary of the annulus is indeed zero: you also have to integrate about the inner circle in the opposite direction, so the integral is $i 2 \pi - i 2 \pi = 0$.
H: Question about an almost split sequence. On page 124 of the book Elements of representation theory of associative algebras, volume 1, Example 3.10, I computed the modules in this example. $$ S(3)=0\leftarrow 0 \rightarrow K \leftarrow 0, \\ P(2) = K\overset{1}{\leftarrow} K \overset{1}{\rightarrow} K \leftarrow 0,\\ P(4) = 0\overset{}{\leftarrow} 0 \overset{}{\rightarrow} K \overset{1}{\leftarrow} K,\\ P(2) \oplus P(4) = K\overset{1}{\leftarrow} K \overset{\left(\begin{matrix} 0 \\ 1 \end{matrix}\right)}{\rightarrow} K^{2} \overset{\left(\begin{matrix} 1 \\ 0 \end{matrix}\right)}{\leftarrow} K. $$ Let $f_1: S(3) \to P(2)$ be the embedding and $f_2: S(3) \to P(4)$ be the embedding. Let $f=\left(\begin{matrix} f_1 \\ f_2 \end{matrix}\right): S(3) \to P(2) \oplus P(4)$. Then $f$ is injective. Is $f: S(3) \to P(2) \oplus P(4)$ the map in the sequence $$ 0 \to S(3) \overset{f}{\to} P(2) \oplus P(4) \to (P(2) \oplus P(4))/S(3) \to 0 \quad (1) $$ in Example 3.10? I think that $$ (P(2) \oplus P(4))/(Im(S(3))) = (P(2) \oplus P(4))/(Im f) = K\overset{1}{\leftarrow} K \overset{0}{\rightarrow} 0 \overset{0}{\leftarrow} K, (2) \\ (P(2) \oplus P(4))/S(3) = K\overset{1}{\leftarrow} K \overset{1}{\rightarrow} K \overset{0}{\leftarrow} K. (3) $$ I am not sure about the maps in (3). Should $(P(2) \oplus P(4))/S(3)$ in the sequence (1) in Example 3.10 be $(P(2) \oplus P(4))/(Im(S(3)))$? I think that the direct sum of $S(3)$ and (3) is $P(2) \oplus P(4)$, which contradicts the fact that (1) is non-split. Thank you very much. AI: Their notation in the example is sloppy, but understandable if you keep the preceding lemma in mind. Note that the end terms of an almost split sequence are both indecomposable modules, so both of your guesses are incorrect. What they meant by $M=(P(2)\oplus P(4))/S(3)$ is the indecomposable module given by the non-split extension: $0 \to P_4 \to M \to P(2)/S(3) \to 0$. To be slightly more precise yet intuitive, the quiver is: $1 \xleftarrow{\alpha} 2 \xrightarrow{\beta} 3 \xleftarrow{\gamma} 4$. The map from $S(3) \to P(2)\oplus P(4)$ is $a \overline{e_3} \mapsto a(\beta+\gamma)$ for any $a\in kQ$. By using the philosophy mentioned in Specific projective dimension of a module over bound quiver, the cokernel $M$ of the injection is obtained by "tearing half of the $S(3)$ from $P(2)$ and half from $P(4)$, and then gluing the remainders together"; hence it is an indecomposable module with sincere support. You can verify this by first writing down a basis of $M$, and then considering the action of $\beta$ and $\gamma$ on it.
H: Infinite prime numbers from a sum of powers I am not sure if it's possible to get infinite prime numbers from this sum: $$p=k^j+j^k$$ with $j\in\mathbb{N}, k\in\mathbb{N}$ I tried for $j=1,2,...9,k=1,2,...9$ and I get only eleven prime numbers. If I consider the matrix: $$A(k,j)=k^j+j^k$$ in which the components $A(j,k)=1$ iff $p$ is prime this is a sparse matrix in which the prime numbers are mostly in the first four rows. Can someone give me some hint to prove $A$ contains infinite prime numbers in the limit $j\to \infty$, $k\to\infty$ AI: Hint You get infinitely many primes when $k=1$ or $j=1$.
H: If three corners of a parallelogram are known solve for the 3 possible 4th corners. An example would be three corners being the points: (1,1), (4,2) and (1,3). I understand the specific solution for this example: (4,4), (4,0) or (-2,2). Which I reasoned when i drew it out. The example came from a linear algebra textbook and i'm curious if it can be done some other way. AI: The three possibilities come from adding the coordinates of two points and subtracting the third. So $(4,0)=(4,2)+(1,1)-(1,3)$. There are three choices of which point to subtract. You are translating the point you subtract to the origin, then adding the vectors to the other two points to find the opposite one, then translating back. So you could look at it as $(4,0)=[(4,2)-(1,3)]+[(1,1)-(1,3)]+(1,3)$
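Added: the "add two, subtract one" rule as a tiny function (points as $(x,y)$ tuples; a sketch):

```python
def fourth_corners(p, q, r):
    """The three possible fourth corners of a parallelogram on corners p, q, r."""
    pts = [p, q, r]
    return [tuple(pts[i][k] + pts[j][k] - pts[l][k] for k in range(2))
            for (i, j, l) in ((0, 1, 2), (0, 2, 1), (1, 2, 0))]

print(fourth_corners((1, 1), (4, 2), (1, 3)))   # [(4, 0), (-2, 2), (4, 4)]
```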
H: How do you solve for Y? I know there has got to be a way to solve for $Y$ but I just can't seem to figure it out. Does anyone know how to solve this? Please help :) $$5(Y(8))=C$$ $$C(Y(4))=B$$ $$B(Y(2))=A$$ $$A(Y(1))=0.50$$ AI: Hint: Substitute for $C$ from the first equation into the second to get: $160Y^2 = B$. Then substitute for $B$ into the third equation and so on: $B\cdot 2Y = 320Y^3 = A$, and finally $A\cdot Y = 320Y^4 = 0.50$, so $Y^4 = \frac{1}{640}$ and (taking the positive real root) $Y=(1/640)^{1/4}\approx 0.1988$.
H: Simple Linear Regression Question Let $Y_{i} = \beta_{0} + \beta_{1}X_{i} + \epsilon_{i}$ be a simple linear regression model with independent errors and iid normal distribution. If $X_{i}$ are fixed what is the distribution of $Y_{i}$ given $X_{i} = 10$? I am preparing for a test with questions like these but I am realizing I am less up to date on these things than I thought. Could anyone explain the thought process used to approach this kind of question? AI: Let $\epsilon_i \sim N(0,\sigma^2)$. Then, we have: $$Y_i \sim N(\beta_0 + \beta_1 X_i,\sigma^2)$$ Further clarification: The above uses the following facts: (a) Expectation is a linear operator, (b) Variance of a constant is $0$, (c) Covariance of a random variable with a constant is $0$ and finally, (d) A linear combination of normals is also a normal. Does that help?
H: Vector space: $\forall a\in K, v \in E ( a \cdot_E v=0_E \to a=0_K \lor v=0_E)$ I need to prove the following: let $E$ be a vector space over $K$; then $\forall a\in K, v \in E ( a \cdot_E v=0_E \to a=0_K \lor v=0_E)$. Thanks in advance! AI: It suffices to prove that if $a v = 0_E$ (I take it there's a misprint in your formula, and I'm simplifying notation a bit), and $a \ne 0_K$, then $v = 0_E$. Now if $a \ne 0_K$, then $a$ has an inverse $a^{-1}$ in the field $K$. Thus $$ 0_E = a^{-1} \cdot 0_E = a^{-1} (a v) = (a^{-1} a) v = 1 v = v. $$
H: How do I solve $x^2 + y^2 + xy = z$ for $y$ How do I solve the following equation for $x$ or $y$ (does not matter because you can swap them): $$ x^2+y^2+xy=z $$ AI: To solve for $x$, say, just consider $y$ and $z$ as constants, and use the usual formula for the quadratic equation: rewriting as $x^2+yx+(y^2-z)=0$ gives $$x=\frac{-y\pm\sqrt{y^2-4(y^2-z)}}{2}=\frac{-y\pm\sqrt{4z-3y^2}}{2}.$$ By the symmetry of the equation, the same formula with $x$ and $y$ exchanged solves for $y$.
H: Probability of two variables of having the same value Let $X$ and $Y$ be two random variables, whose PDFs $f_X$ and $f_Y$ are uniform. $f_X$ and $f_Y$ may overlap. For instance, they could represent two score distributions for two tuples $x$ and $y$ in a database. Which is the probability for $X$ and $Y$ of having the same value $v$ (e.g., for the tuples $x$ and $y$ of having the same score)? I have tried with an integral, but this returns me back $0$ as a result, since it is evaluated on an interval that is a point (the value $v$), and an integral that is evaluated on a point returns $0$ as a result. AI: Since $X, Y$ are continuous, it is indeed the case that $\text{Prob}(X=Y)=0$: assuming a joint density exists, $P(X=Y)$ is the integral of that density over the diagonal $\{(x,y): x=y\}$, which has measure zero in the plane. If you want a meaningful notion of the two scores "agreeing", compute $P(|X-Y|<\varepsilon)$ for a chosen tolerance $\varepsilon$ instead.
H: Why obtain the coordinates of vectors in the basis that they themselves belong to? Let $ \space T: \mathbb{R^2} \to \mathbb{R^3}$ be a linear transformation defined as $ \space T(x,y)=(3x+2y,x+y,-2x-y)$, where $\beta=\{(1,-1),(0,1)\}$ is a basis of $\mathbb{R^2}$. No basis is specified for $\mathbb{R^3}$. From the expression of $ \space T(x,y)$ one knows that $$T(1,-1)=(1,0,-1)$$ $$T(0,1)=(2,1,-1)$$ The matrix will have $3$ rows and $2$ columns. Since no specific basis was given for $\mathbb{R^3}$ I assumed the canonical basis. So the coordinates of $T(1,-1)$ and $T(0,1)$ in the canonical $\mathbb{R^3}$ basis are themselves. The matrix will be $\begin{bmatrix}1 & 2 \\ 0 & 1 \\ -1 & -1 \end{bmatrix}$ But, $\begin{bmatrix}1 & 2 \\ 0 & 1 \\ -1 & -1 \end{bmatrix}\begin{bmatrix}1 \\ -1 \end{bmatrix}\neq$$\begin{bmatrix}1 \\ 0 \\ -1 \end{bmatrix}$ My doubt is: why does one need to write each vector of $\beta$ as a linear combination of $\beta$ itself to get its coordinates in the basis it belongs to? Thanks AI: Since $(1,-1)$ is the first vector of $\beta$, it has coordinates $(1,0)$ with respect to $\beta$. When you use matrices to apply the linear transformation you are using the coordinates of the vector on a fixed basis. So you should write $$\begin{bmatrix}1 & 2\\ 0& 1\\ -1 & -1\end{bmatrix}\begin{bmatrix}1 \\ 0\end{bmatrix}.$$ If $\{v_1,\ldots,v_n\}$ is a basis and $v=\sum_{i=1}^{n} \alpha_iv_i$ then you can say that the coordinates of $v$ on this basis are $(\alpha_1,\ldots,\alpha_n)$. So what are the coordinates of $v_i$ on this basis? They are $(\underbrace{0,\ldots,0}_{i-1},1,0,\ldots,0)$.
H: Prove that the expected value of $X$ is at least that of $Y$, given that $P(X\ge Y)=1$ I have to prove that $E(X)$ (the expected value of a random variable $X$) is at least $E(Y)$, given that $P(X\ge Y)=1$. My thoughts so far: I know from the $P(X\ge Y)=1$ statement that the values $X$ takes are (almost surely) at least the values $Y$ takes, and because the expected value of a random variable is a weighted mean of its values, of course $E(X) \ge E(Y)$. But how do I prove it formally? AI: Take a new random variable, say $Z$, where $Z = X - Y$. Then you have $P(Z \geq 0) = 1$. Now try to show $E(Z) \geq 0$.
H: Embedded Lp spaces Let $L^\infty(Ω,F,P)$ be the vector space of bounded random variables $(X ∈ L^\infty (Ω,F,P)$ means that there exists a constant C such that $|X(ω)|≤C$, a.s.$)$. Show that $$L^\infty(Ω,F,P)⊂L^2(Ω,F,P)⊂L^1(Ω,F,P)$$ AI: It is a consequence of the Hölder inequality $$ E[|XY|]\leq E[|X|^p]^{1/p}E[|Y|^q]^{1/q}. $$ For $L^2\subset L^1$, take $Y\equiv 1$ and $p=q=2$: then $E[|X|]\le E[|X|^2]^{1/2}<\infty$ (this uses $E[1]=1$, i.e. that $P$ is a probability measure). For $L^\infty\subset L^2$, Hölder is not even needed: $|X|\le C$ a.s. gives $E[|X|^2]\le C^2<\infty$ directly.
H: Regarding $\lim_{n \to \infty} n^{\frac{1}{n}}$ Suppose $\lim_{n \to \infty} n^{\frac{1}{n}} = l \in \mathbb{R}$. The function $f(x) = x^n$ is continuous, then $$l^n=\left (\lim_{n \to \infty} n^{\frac{1}{n}} \right)^n=\lim_{n \to \infty} \left ( \left (n^{\frac{1}{n}} \right)^n \right ) =\lim_{n \to \infty} n = \infty.$$ Then it follows that $l = \infty $. This is false, but I couldn't find which step is wrong. Could you please help me? Thank you for your time. AI: You cannot swap the $n$ around, that is, inside and outside the limit. The step $$\left (\lim_{n \to \infty} n^{\frac{1}{n}} \right)^n=\lim_{n \to \infty} \left ( \left (n^{\frac{1}{n}} \right)^n \right )$$ is wrong. The $n$ outside the LHS is fixed, while the $n$ inside, in the RHS, varies like the other $n$. The result would be correct, if you said, say, $$\left (\lim_{n \to \infty} n^{\frac{1}{n}} \right)^3=\lim_{n \to \infty} \left ( \left (n^{\frac{1}{n}} \right)^3 \right )$$ As Thomas pointed out, $n$ is a dummy variable, that is $$\lim_{n\to\infty}n^{1/n}=\lim_{k\to\infty}k^{1/k}=\lim_{\sigma \to\infty}\sigma^{1/\sigma}$$ so you cannot really force $n$ to "match up" in the limit.
H: $\sum_{k=1}^n m(k)$, where $m(k)$ is defined by $2^{m(k)} || k$. I'm looking at the sum: $$f(n) = \sum_{k=1}^n m(k),$$ where $m(k)$ is defined by $2^{m(k)} || k$, i.e. $2^{m(k)}$ is the largest power of $2$ that divides $k$. For example, we have $f(8) = 0+1+0+2+0+1+0+3 = 7$. Here's a table of $n$, $m(n)$, and $f(n)$ for a few small $n$: n m(n) f(n) 1 0 0 2 1 1 3 0 1 4 2 3 5 0 3 6 1 4 7 0 4 8 3 7 The values $m(n)$ in the table are the last $m(k)$-value for the given $n$, i.e. the value of $m(k)$ when $k=n$. I haven't spent much time trying to figure this out, mostly because it's a funny little problem and I thought I'd share it. It seems to me that we have $f(2^j) = 2^j-1 \forall j$ and $f(n)\sim n$. AI: Let $m(k)$ be the exponent of $2$ occurring in $k$, i.e. $2^{m(k)}\|k$. One quickly deduces $$\tag1f(2n+1)=f(2n)=n+f(n)$$ as only the even numbers contribute and $2k$ contributes one more to $2n$ and $2n+1$ than $k$ does to $n$, i.e. $m(2k)=m(k)+1$. Since $f(1)=m(1)=0$, we find $f(3)=f(2)=1$, $f(5)=f(4)=3$, and so on. The sequence goes $$ 0, 1, 1, 3, 3, 4, 4, 7, 7, 8, 8, 10, 10, 11, 11, 15, 15, 16, 16, 18, 18, 19, 19, 22, 22, 23, 23, 25, 25, 26, 26,\ldots$$ And according to OEIS, this equals $m(n!)$ and also $n$ minus the number of $1$s in the binary expansion of $n$. One easily verifies that these two discriptions indeed follow the recursion formula $(1)$.
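Added: a short script checking both the recursion $(1)$ and the closed form $f(n)=n-{}$(number of $1$s in the binary expansion of $n$) from the OEIS match:

```python
def m(k):
    # exponent of 2 in k: isolate the lowest set bit, then take its position
    return (k & -k).bit_length() - 1

def f(n):
    return sum(m(k) for k in range(1, n + 1))

for n in range(1, 33):
    assert f(n) == n - bin(n).count("1")          # closed form
    if n % 2 == 0:
        assert f(n) == n // 2 + f(n // 2)          # recursion f(2n) = n + f(n)

print(f(8), f(16), f(1024))   # 7 15 1023, consistent with f(2^j) = 2^j - 1
```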
H: Find an interval of convergence and an explicit formula for $f(x)$ Let $f(x) = 1 + 2x + x^2 + 2x^3 +x^4+...$ If $c_{2n} = 1$ and $c_{2n+1} = 2$ $\forall n \ge 0$ find the interval of convergence and an explicit formula for $f(x)$. The answers are $I = (-1,1)$ and $f(x) = \frac{1 + 2x}{1 - x^2}$ Can anyone please give me an idea how to get it... Thanks in advance AI: The radius of convergence is $\dfrac1{\limsup\limits_{n\to\infty}|a_n|^{1/n}}$ and since $1\le a_n\le2$, the Squeeze Theorem says the radius of convergence is $1$. $$ \begin{align} &\hphantom{(}1+2x\hphantom{)}+\hphantom{(}x^2+2x^3\hphantom{)}+\hphantom{(}x^4+2x^5\hphantom{)}+\hphantom{(}x^6+2x^7\hphantom{)}+\dots\\[8pt] =&(1+2x)+(1+2x)x^2+(1+2x)x^4+(1+2x)x^6+\dots\\[8pt] =&(1+2x)(1+x^2+x^4+x^6+\dots)\\[4pt] =&\frac{1+2x}{1-x^2} \end{align} $$ $$ \begin{align} &1+2x+x^2+2x^3+x^4+2x^5+x^6+2x^7+\dots\\[12pt] =&1+\hphantom{2}x+x^2+\hphantom{2}x^3+x^4+\hphantom{2}x^5+x^6+\hphantom{2}x^7+\dots\\ &\hphantom{1}+\hphantom{2}x\hphantom{\,+\;x^2}+\phantom{2}x^3\hphantom{\,+\;x^4}+\phantom{2}x^5\hphantom{\,+\;x^4}+\phantom{2}x^7\dots\\[4pt] =&\frac1{1-x}+\frac{x}{1-x^2}\\[9pt] =&\frac{1+2x}{1-x^2} \end{align} $$
H: Behaviour of $f$ in the neighbourhood of $c$ if $f'(c)= \cdots = f^{(n)}(c)=0$, and $f^{(n+1)}(c) \gt 0$ What can I say about the behaviour of $f$ in the neighbourhood of $c$ if $f'(c)= \cdots = f^{(n)}(c)=0$, and $f^{(n+1)}(c) \gt 0$? I know the behaviour of $f$ if $n \le 2$, but I do not know how to generalize this. I appreciate your help. AI: What do you think you can say about the behavior of $f(x)=(x-c)^{n+1}$? That model captures the general case: by Taylor's theorem with the Peano remainder, $$f(x)-f(c)=\frac{f^{(n+1)}(c)}{(n+1)!}(x-c)^{n+1}+o\big((x-c)^{n+1}\big),$$ so near $c$ the sign of $f(x)-f(c)$ is the sign of $(x-c)^{n+1}$. Hence if $n+1$ is even, $f$ has a strict local minimum at $c$; if $n+1$ is odd, $f(x)<f(c)$ just to the left of $c$ and $f(x)>f(c)$ just to the right, so there is no extremum.
H: Show that a vector that is orthogonal to every other vector is the zero vector I have the following question, and I'd like to get some tips on how to write the proof. I know why it is, but I'm still not so great at writing it mathematically. If $u$ is a vector in $\mathbb{R}^n$ that is orthogonal to every vector in $\mathbb{R}^n$, then $u$ must be the zero vector. Why? I'm starting off like this, but I don't know if it's the right way to do it, or if it is and I just don't know how to continue. \begin{align} (\exists u\in\mathbb{R}^n)(\forall v\in\mathbb{R}^n)[\text{u is orthogonal to v}]&\iff u\cdot v=0\\ &\iff ? \end{align} From here, instinctively I want to divide both sides by $v$, but I don't know if there is such a thing as dividing a dot product. AI: The dot product $\cdot$ is an unusual type of multiplication. It takes two vectors in and produces a scalar out. Imagine a mommy elephant and a daddy elephant giving birth to a giraffe. There's no such thing as division in this context. You need to do the proof by looking at components of the vectors. $u=(u_1,u_2,\ldots, u_n), v=(v_1, v_2,\ldots, v_n)$, and $$u\cdot v=u_1v_1+u_2v_2+\cdots+u_nv_n$$ Then, following Jared's hint, you can try specific values for $v$ and learn things about the components of $u$: taking $v=e_i$ (the $i$th standard basis vector) gives $u_i=0$ for each $i$; even faster, taking $v=u$ itself gives $u\cdot u=u_1^2+\cdots+u_n^2=0$, which forces every $u_i=0$.
H: Can a finitely generated $\mathbb{Z}$-algebra contain $\mathbb{Q}$? Is there a ring between $\mathbb{Q}$ and $\mathbb{R}$ that is finitely generated as an algebra over $\mathbb{Z}$? My guess is there isn't. I can see that it would have to be finitely generated over $\mathbb{Q}$ as well, and I think I can deal with algebraic generators. But if there are algebraically dependent transcendentals, I don't see how to exclude some rational. Why couldn't there be $\alpha$, $\beta$ transcendental, such that for every prime $p$, $1/p$ is given by some integer polynomial in $\alpha,\beta$? AI: The residue fields of a finitely generated $\mathbb{Z}$-algebra (i.e. the quotients by maximal ideals) are finite fields; this is a standard fact, a form of the Nullstellensatz for Jacobson rings. There is no (unital) ring homomorphism from $\mathbb{Q}$ to a finite field, so no finitely generated $\mathbb{Z}$-algebra can contain $\mathbb{Q}$.
H: calculate $ \lim_{k\rightarrow\infty}\sin\left(kx\right) $ if $ x \notin \pi\mathbb{N} $ Why does the limit $ \lim_{k\rightarrow\infty}\sin\left(kx\right) $ not exist? AI: Suppose, for contradiction, that $\sin(kx)\to L$ as $k\to\infty$. Note first that $\sin x\neq 0$, since $x$ is not an integer multiple of $\pi$. From the identity $$\sin((k+2)x)-\sin(kx)=2\sin(x)\cos((k+1)x),$$ the left side tends to $L-L=0$, hence $\cos(kx)\to0$. Similarly, $$\cos((k+2)x)-\cos(kx)=-2\sin(x)\sin((k+1)x)$$ forces $\sin(kx)\to0$, i.e. $L=0$. But then $\sin^2(kx)+\cos^2(kx)\to0$, contradicting $\sin^2(kx)+\cos^2(kx)=1$ for every $k$. So the limit does not exist.
H: A probability question regarding combinatorics From a class of $300$ students, three are selected at random to receive three identical prizes. Of the students, $200$ are from department $A$, $60$ from department $B$ and $40$ from department $C$. Find the probability that the three winners come from different departments. Find the probability that all three winners come from the same department. Suppose that all three are from the same department. Compute the probability that they are all from department $A$. Here is my attempt. I am not sure if this is correct. $P(\text{different departments})=3!\binom{200}{1}\binom{60}{1}\binom{40}{1}$ or $3!\frac{200}{300}\times \frac{60}{299} \times\frac{40}{298}$? $P(\text{same department})=\dfrac{\binom{200}{1}+\binom{60}{1}+\binom{40}{1}}{\binom{300}{3}}= \dfrac{3}{44551}$. $\begin{align} P(\text{from department A}|\text{same department}) & = \dfrac{P(\text{from department A}) P(\text{same department})}{P(\text{same department})} \\ & = \dfrac{\binom{200}{1} \frac{3}{4451}}{\frac{3}{4451}} \\ &=\binom{200}{1}. \end{align}$ Thank you in advance. AI: Note: I am interpreting the problem to mean that the three winners must be different people. Unfortunately all three proposed solutions are incorrect. There are ${300 \choose 3}$ subsets of size 3. This will be the denominator for the first two questions. In the first question, the numerator is ${200\choose 1}{60\choose 1}{40 \choose 1}$. We want one of our three winners to be from department $A$, one from $B$, and one from $C$; these three choices are made independently, so we use the multiplication principle. The numerator of the second question is ${200\choose 3}+{60\choose 3}+{40\choose 3}$. There are three types of subsets, those entirely from $A$, those entirely from $B$, and those entirely from $C$. These are disjoint subsets of ${300\choose 3}$, so we use the addition principle. To count the subsets entirely from $A$, we need to pick 3 winners. The denominator in the third question is the same as the numerator of the second question, since now we are restricting to only those triplets that are from a single department, which the numerator from the second question was counting. The numerator of the third question is ${200\choose 3}$.
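Added: the three probabilities as exact fractions, for checking (a verification sketch):

```python
from fractions import Fraction
from math import comb

total = comb(300, 3)
p_diff = Fraction(comb(200, 1) * comb(60, 1) * comb(40, 1), total)
same = comb(200, 3) + comb(60, 3) + comb(40, 3)
p_same = Fraction(same, total)
p_A_given_same = Fraction(comb(200, 3), same)

print(p_diff, p_same, p_A_given_same)
print(float(p_diff), float(p_same), float(p_A_given_same))  # ~0.108, ~0.305, ~0.968
```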
H: Derivatives for a piecewise defined function $f$ is a function on the real line and $f(x)=\begin{cases} x^2& x<0\\ 2x+x^2&x\ge 0\end{cases}$. Could anyone tell me which of the following is/are true? 1. $f'(0)$ doesn't exist 2. $f'(x)$ exists for all $x\neq 0$ 3. $f''(x)=2$ 4. $f''(0)$ does not exist I have checked myself that 1 is true, since $\lim_{x\uparrow 0} f'(x)=0\neq \lim_{x\downarrow 0} f'(x)=2$; that is, the left-hand derivative is $0$ but the right-hand derivative is $2$. But I do not see how to find $f''$. Please help. AI: Hints: On each side of $\,x=0\,$ the function is polynomial and thus infinitely differentiable, which already answers (2) and almost-almost (really almost!) (3). Now, just as $\,f'(x_0)\,$ cannot exist if $\,f\,$ isn't defined at $\,x_0\,$, we get that (4) is true after we take a peek at what we did in (1) ...
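Spelling the hints out as a worked computation (my own summary of the answer): $$f'(x)=\begin{cases} 2x & x<0\\ 2+2x & x>0\end{cases}$$ with $f'(0)$ undefined, as the question showed. Differentiating once more gives $$f''(x)=2 \quad\text{for all } x\neq 0,$$ so statement 3 is only "almost" true: it holds everywhere except at $0$. And since $f'$ is not even defined at $0$, $f''(0)$ cannot exist, so statement 4 is true for the same reason that statement 1 is.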
H: Integral with delta Dirac power Is it possible to calculate the integral: $$J=\int_{-\infty}^{+\infty}f(x)\delta(x-x_0)^kdx$$ wih $k\in\mathbb{R}$? I know that in the Colombeau algebra the distribution $\delta(x)^2$ is defined. What happens if the Delta function is raised to a real number different from $2$? Thanks in advance. AI: For $k$ being an integer: You certainly know that $\int f(x) \delta (x-x_0) dx = f(x_0)$ This is true regardless of what $f(x)$ is, even when $f(x)$ itself contains a $\delta$-function. So your integral gives the highly singular result: $\int f(x)\delta^{k-1}(x-x_0) \delta(x-x_0) dx = f(x_0)\delta^{k-1}(x_0-x_0)$ But you asked about all reals. Don't know what to tell you for non-integer $k$.
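A brief contextual note (my addition, not the answerer's): in classical Schwartz distribution theory, powers $\delta^k$ for $k\geq 2$ are simply undefined, and Schwartz's impossibility result shows, roughly, that distributions cannot be embedded in an associative algebra whose multiplication extends the pointwise product of continuous functions while differentiation obeys the Leibniz rule. Colombeau algebras sidestep this by embedding distributions into a larger algebra, but the resulting product is in general no longer a distribution, which is why even $\delta^2$ has no value one can integrate against in the classical sense, let alone $\delta^k$ for non-integer $k$.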
H: Is there a text book containing a self-contained and complete proof of the Jordan Curve theorem? I seem to remember (in my undergraduate years) encountering a book on complex analysis which contained a proof of the Jordan Curve Theorem, building up from first principles - so self-contained and complete. I am now looking for such a proof, and haven't been able to find one. Does anyone know of such a book, or alternatively an online resource which starts at the beginning and finishes at the end? The level I'm looking for is roughly a demanding first course in complex analysis. Edit: Does Beardon's book "Complex Analysis: The Argument Principle in Analysis and Topology" do this? I can't find a review or synopsis. AI: I don't know if this is what you are looking for, but I found the following resources: https://luisto.fi/misc.shtml Click on "An essay about the jordan curve theorem". It is in the spirit of complex analysis. "Introduction to the theory of functions of a complex variable" by Wolfgang J. Thron. Hope this helps!
H: $\sum a_n$ be convergent but not absolutely convergent, $\sum_{n=1}^{\infty} a_n=0$ Let $\sum a_n$ be convergent but not absolutely convergent, with $\sum_{n=1}^{\infty} a_n=0$, and let $s_k$ denote the partial sums. Could anyone tell me which of the following is/are correct? 1. $s_k=0$ for infinitely many $k$ 2. $s_k>0$ and $s_k<0$ for infinitely many $k$ 3. $s_k>0$ for all $k$ 4. $s_k>0$ for all but finitely many $k$ If we take $a_n=(-1)^n{1\over n}$, then $\sum a_n$ is convergent but not absolutely convergent, but I don't know whether $\sum_{n=1}^{\infty} a_n=0$ holds for it, so I am puzzled. Could anyone tell me how to proceed? AI: None of them are necessarily true. We can easily compute a series from its partial sums, so let's specify the $s_k$. Define $$ s_k=\left\{\begin{array}{} -\frac1k&\text{if $k$ is odd}\\[4pt] -\frac1{k^2}&\text{if $k$ is even} \end{array}\right. $$ Then $a_1=-1$ and for $k\gt1$, $$ a_k=\left\{\begin{array}{} \frac1{(k-1)^2}-\frac1k&\text{if $k$ is odd}\\[4pt] \frac1{k-1}-\frac1{k^2}&\text{if $k$ is even} \end{array}\right. $$ Show that this series is not absolutely convergent, its sum is $0$, and it fails to satisfy any of the conditions.
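A verification sketch of the three claims left to the reader (my own working, following the answer's construction): every $s_k$ is negative, so conditions 1, 3, and 4 fail outright, and condition 2 fails because $s_k>0$ never occurs. The sum is $\lim_{k\to\infty} s_k = 0$. And the series is not absolutely convergent because for even $k$, $$|a_k| = \frac{1}{k-1}-\frac{1}{k^2} \geq \frac{1}{2(k-1)}$$ (equivalent to $k^2\geq 2(k-1)$, which always holds), so $\sum_k |a_k|$ dominates a multiple of the harmonic series. Incidentally, the asker's example $a_n=(-1)^n\frac1n$ sums to $-\ln 2\neq 0$, so it does not satisfy the hypothesis; that is why a custom construction is needed here.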
H: How to show that every complex matrix with orthonormal columns can be supplemented into a unitary matrix? Show that every matrix $A \in M_{n,k}(\mathbb{C})$ whose columns are orthonormal vectors in $M_{n,1}(\mathbb{C})$ can be supplemented with additional $n-k$ columns to a unitary matrix $U \in M_{n}(\mathbb{C})$ $A = \begin{bmatrix} a_{11} & a_{12} & ... & a_{1k} \\ a_{21} & a_{22} & ... & a_{2k} \\ ... & ... & ... & ... \\ a_{n1} & a_{n2} & ... & a_{nk} \end{bmatrix}$ Now, I can mark the columns with $c[i]$ for example, where $c[i] = \begin{bmatrix} a_{1i} \\ a_{2i} \\ ... \\ a_{ni} \end{bmatrix} ; i \in \{1, ..., k\}$ We know that $\| c[i] \|^2 = \langle c[i], c[i] \rangle = 1$ and $\langle c[i], c[j] \rangle = 0 \space \forall i \ne j$ Since there is a bijection between the spaces $M_{n,1}(\mathbb{C})$ and $\mathbb{C}^n$ we can view the set $\{ c[1], c[2], ... , c[k] \}$ as an orthonormal set in the space $\mathbb{C}^n$ and by applying the Gram-Schmidt process we can get an orthonormal basis for $\mathbb{C}^n$: $\{ c[1], c[2], ... , c[k], v[k+1], v[k+2], ... , v[n] \}$ And by supplementing the vectors $v[k+1], ..., v[n]$ as columns to the original matrix $A$ we get: $A' = \begin{bmatrix} a_{11} & a_{12} & ... & a_{1k} & v_{1,k+1} & ... & v_{1n} \\ a_{21} & a_{22} & ... & a_{2k} & v_{2,k+1} & ... & v_{2n} \\ a_{31} & a_{32} & ... & a_{3k} & v_{3,k+1} & ... & v_{3n} \\ a_{41} & a_{42} & ... & a_{4k} & v_{4,k+1} & ... & v_{4n} \\ ... & ... & ... & ... & ... & ... & ... \\ a_{n1} & a_{n2} & ... & a_{nk} & v_{n,k+1} & ... & v_{nn} \end{bmatrix}$ $(A')^* = \begin{bmatrix} \overline{a_{11}} & \overline{a_{21}} & ... & \overline{a_{k1}} & \overline{v_{k+1,1}} & ... & \overline{v_{n1}} \\ \overline{a_{12}} & \overline{a_{22}} & ... & \overline{a_{k2}} & \overline{v_{k+1,2}} & ... & \overline{v_{n2}} \\ \overline{a_{13}} & \overline{a_{23}} & ... & \overline{a_{k3}} & \overline{v_{k+1,3}} & ... & \overline{v_{n3}} \\ \overline{a_{14}} & \overline{a_{24}} & ... & \overline{a_{k4}} & \overline{v_{k+1,4}} & ... & \overline{v_{n4}} \\ ... & ... & ... & ... & ... & ... & ... \\ \overline{a_{1n}} & \overline{a_{2n}} & ... & \overline{a_{kn}} & \overline{v_{k+1,n}} & ... & \overline{v_{nn}} \end{bmatrix}$ And by multiplying the matrices, we notice that we can write the elements of the product as: $(A')^*A' = \begin{bmatrix} \sum_{i=1}^na_{i1}\overline{a_{i1}} & \sum_{i=1}^na_{i1}\overline{a_{i2}} & ... & \sum_{i=1}^na_{i1}\overline{v_{in}} \\ \sum_{i=1}^na_{i2}\overline{a_{i1}} & \sum_{i=1}^na_{i2}\overline{a_{i2}} & ... & \sum_{i=1}^na_{i2}\overline{v_{in}} \\ ... & ... & ... & ... \\ \sum_{i=1}^na_{in}\overline{a_{i1}} & \sum_{i=1}^na_{in}\overline{a_{i2}} & ... & \sum_{i=1}^na_{in}\overline{v_{in}} \end{bmatrix}$ The standard inner product on $\mathbb{C}^n$ is $\langle \alpha,\beta\rangle=\sum_{i=1}^n \alpha_i \overline{\beta_i}$, and since each entry of the product is exactly such an inner product of two columns, which are vectors from an orthonormal basis of $\mathbb{C}^n$, we get $\implies (A')^*A' = I $ But the following multiplication remains: $A'(A')^* = \begin{bmatrix} \sum_{i=1}^na_{1i}\overline{a_{1i}} & \sum_{i=1}^na_{1i}\overline{a_{2i}} & ... & \sum_{i=1}^na_{1i}\overline{v_{ni}} \\ \sum_{i=1}^na_{2i}\overline{a_{1i}} & \sum_{i=1}^na_{2i}\overline{a_{2i}} & ... & \sum_{i=1}^na_{2i}\overline{v_{ni}} \\ ... & ... & ... & ... \\ \sum_{i=1}^na_{ni}\overline{a_{1i}} & \sum_{i=1}^na_{ni}\overline{a_{2i}} & ... & \sum_{i=1}^na_{ni}\overline{v_{ni}} \end{bmatrix}$ The resulting matrix consists of sums of products of the corresponding elements of the rows of the matrix. But since we are dealing with a matrix, we know that the number of linearly independent columns equals the number of linearly independent rows, and conclude that the rows of the matrix also form vectors which form a basis for $\mathbb{C}^n \implies A'(A')^* = I$ Since $A'(A')^* = (A')^*A' = I$ we can take $ U = A' $. The statement follows. Is my reasoning correct? If yes, do I need to add some details/be more formal in some places? If no, it would be great if you could give me some hints. AI: Seems about right to me. The point is that a matrix is unitary if and only if its columns form an orthonormal basis with respect to the complex Euclidean inner product. In fact, if $U = [v_1 | \cdots | v_n]$ then $(UU^\dagger)_{ij} = \langle v_i, v_j \rangle$ (or $U^\dagger U$ depending on your convention). Thus your question really boils down to: can any orthonormal set be extended to an orthonormal basis (in finite dimensions)? Yes, by Gram-Schmidt, as you indicated.
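Since the heart of the argument is the Gram-Schmidt extension, here is a small numerical illustration of that construction (a sketch of my own, not part of either post; the function name, the tolerance, and the use of NumPy are my choices):

```python
import numpy as np

def complete_to_unitary(A, tol=1e-10):
    """Extend an n-by-k complex matrix with orthonormal columns to an
    n-by-n unitary matrix by Gram-Schmidt against the standard basis."""
    n, k = A.shape
    cols = [A[:, j] for j in range(k)]
    for i in range(n):
        if len(cols) == n:               # already have a full basis
            break
        v = np.zeros(n, dtype=complex)
        v[i] = 1.0                       # candidate: standard basis vector e_i
        for c in cols:
            v = v - np.vdot(c, v) * c    # remove the component along c
        norm = np.linalg.norm(v)
        if norm > tol:                   # e_i added something new to the span
            cols.append(v / norm)
    return np.column_stack(cols)

# Quick check on a random 4-by-2 example with orthonormal columns.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))
A, _ = np.linalg.qr(M)                   # A has orthonormal columns
U = complete_to_unitary(A)
print(np.allclose(U.conj().T @ U, np.eye(4)))  # True: U*U = I
print(np.allclose(U[:, :2], A))                # True: A sits in U's first columns
```

The loop implements exactly the extension step of the proof: each standard basis vector is orthogonalized against the columns collected so far and is kept only if a nonzero vector survives, which happens for exactly $n-k$ of them.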
H: Prove that a cut edge is in every spanning tree of a graph Given a simple, connected graph $G = (V,E)$ and an edge $e \in E$, prove: $e$ is a cut edge if and only if $e$ is in every spanning tree of $G$. I have been thinking about this question for a long time and have made no progress. AI: Suppose $e$ is not a cut edge. Then $G\setminus e$ is connected. Now, taking any spanning tree $T$ of $G\setminus e$, we see that $T$ is a spanning tree of $G$ as well, and it avoids $e$. Conversely, let $e$ be a cut edge and let $T$ be a spanning tree of $G$ with $e\notin T$. Then $T$ would be a connected spanning subgraph of the disconnected graph $G\setminus e$, a contradiction.
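A concrete illustration, if it helps intuition (my own example): in the triangle $C_3$ no edge is a cut edge, and indeed each of the three edges is absent from one of the three spanning trees, since each spanning tree is a path using two of the three edges. Now attach a new pendant vertex to the triangle by an edge $e$: then $e$ is a cut edge, and every spanning tree must contain $e$, because it is the only way to reach the new vertex.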
H: Solving $\sqrt{7x-4}-\sqrt{7x-5}=\sqrt{4x-1}-\sqrt{4x-2}$ Where do I start to solve an equation for $x$ like the one below? $$\sqrt{7x-4}-\sqrt{7x-5}=\sqrt{4x-1}-\sqrt{4x-2}$$ After squaring it, it's too complicated; but there's nothing to factor or to expand? Ideas? AI: Divide $$(7x-4)-(7x-5)=(4x-1)-(4x-2)\ \ \ \ \ (1)$$ (both sides of which equal $1$) by the original equation, which is $$\sqrt{7x-4}-\sqrt{7x-5}=\sqrt{4x-1}-\sqrt{4x-2}\ \ \ \ \ (2)$$ Using $(\sqrt{a}-\sqrt{b})(\sqrt{a}+\sqrt{b})=a-b$ on each side, you should get:$$\sqrt{7x-4}+\sqrt{7x-5}=\sqrt{4x-1}+\sqrt{4x-2}\ \ \ \ \ (3)$$Now add (2) and (3), you get: \begin{align} 2\sqrt{7x-4}&=2\sqrt{4x-1} \\ \\ 7x-4&=4x-1 \\ \\ x&=1 \end{align}
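Two sanity checks worth adding (my own remarks): first, the division in the answer is legitimate because $7x-4 > 7x-5$ wherever both radicands are defined, so $\sqrt{7x-4}-\sqrt{7x-5}$ is strictly positive, and likewise for the right-hand side. Second, substituting $x=1$ into the original equation gives $$\sqrt{3}-\sqrt{2}=\sqrt{3}-\sqrt{2},$$ which holds, and all four radicands $7x-4=3$, $7x-5=2$, $4x-1=3$, $4x-2=2$ are positive, so $x=1$ is indeed a valid solution.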