H: Finding a unique representation as a linear combination ok, another problem suggested by my prof. the vectors $u_1 = (1,1,1,1)$, $u_2 = (0,1,1,1)$, $u_3 = (0,0,1,1)$, $u_4 = (0,0,0,1)$, are a basis for $F^4$. Find a unique representation of an arbitrary vector $(a_1, a_2, a_3, a_4)$ as a linear combination of $u_1, u_2, u_3, u_4$. OK, as I understand it I have to find some scalar coefficients such that $(a_1, a_2, a_3, a_4)$ = $c_n(u_1 + u_2 + u_3 + u_4)$ = $c_1(1,1,1,1) + c_2(0,1,1,1) + c_3(0,0,1,1) +c_4(0,0,0,1)$, correct? assuming that is the case, though, I get stuck. if I just add the u-vectors I get (1, 2, 3, 4) but I am unsure what to do after that. I feel like this should be simple but I have been so bloody lost in this class.. AI: You need $a_1u_1$ or else you won't get the right first coefficient. The only way to get $a_2$ in the second coefficient is by taking $u_2$, but if you take $a_2u_2$ you'll get $a_1+a_2$ in the second coefficient, which is too many. Hence, you must take $(a_2-a_1)u_2$, which will give you just $a_2$ in the second coefficient when added to $a_1$ from the $u_1$ previously. Continuing in this way you will get $$a_1u_1+(a_2-a_1)u_2+(a_3-a_2)u_3+(a_4-a_3)u_4$$
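A quick numerical sanity check of the representation above (a sketch in Python with NumPy; the particular coefficients are example values, not part of the question):

```python
import numpy as np

# The given basis of F^4 (taking F = R for the numerical check).
u1, u2, u3, u4 = map(np.array, [(1, 1, 1, 1), (0, 1, 1, 1), (0, 0, 1, 1), (0, 0, 0, 1)])

# An arbitrary target vector (a1, a2, a3, a4).
a1, a2, a3, a4 = 3.0, -1.5, 7.0, 2.0

# Coefficients from the answer: a1, a2 - a1, a3 - a2, a4 - a3.
combo = a1 * u1 + (a2 - a1) * u2 + (a3 - a2) * u3 + (a4 - a3) * u4

assert np.allclose(combo, [a1, a2, a3, a4])  # reproduces (a1, a2, a3, a4)
print(combo)
```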
H: $A$ is some fixed matrix. Let $U(B)=AB-BA$. If $A$ is diagonalizable then so is $U$? This is from Hoffman and Kunze 6.4.13. I am studying for an exam and trying to solve some problems in Hoffman and Kunze. Here is the question. Let $V$ be the space of $n\times n$ matrices over a field $F$. Let $A$ be a fixed matrix in $V$. Let $T$ and $U$ be linear operators on $V$ defined by $T(B)=AB$ and $U(B)=AB-BA$. a) (True or False) If $A$ is diagonalizable then $T$ is diagonalizable. This is true. I can show that both $A$ and $T$ have the same minimal polynomial. So if $A$ is diagonalizable then the minimal polynomial of $T$ should be a product of distinct linear factors, proving that $T$ is diagonalizable. But it's the next question I am having trouble with. b) (True or false) If $A$ is diagonalizable then $U$ is diagonalizable. I am thinking this is false. But I can't think of a counterexample. Maybe I am wrong. Can anybody help? Thanks for all your help. AI: (a) True. Indeed $p(T)B = p(A)B$ for every polynomial in $F[X]$ whence $p(T) = 0$ if and only if $p(A)=0$. By the minimal polynomial characterization of diagonalizability in finite dimension (splits with simple roots), it follows that $T$ is diagonalizable if and only if $A$ is diagonalizable. (b) True. If $A=P^{-1}DP$ is diagonalizable, then $U$ is similar to $V(B)=DB-BD$ with $D=\mbox{diag}(d_1,\ldots,d_n)$ diagonal. Precisely, $U=S\circ V\circ S^{-1}$ with $S(B)=P^{-1}BP$. Then denoting $E_{ij}$ the canonical basis of $M_n(F)$, we find $$V(E_{ij})=DE_{ij}-E_{ij}D=(d_i-d_j)E_{ij}.$$ Whence $V$ is diagonal in the canonical basis, and $U$ is therefore diagonalizable.
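A small numerical illustration of part (b) (a sketch in Python with NumPy, not part of the original answer; the size and the diagonal entries are arbitrary choices): by the computation $V(E_{ij})=(d_i-d_j)E_{ij}$, the eigenvalues of $U$ for a diagonalizable $A$ should be exactly the pairwise differences of the eigenvalues of $A$.

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)

# A random diagonalizable A: conjugate a diagonal matrix with distinct entries.
d = np.array([1.0, 2.5, -0.5, 3.0])
P = rng.normal(size=(n, n))
A = np.linalg.inv(P) @ np.diag(d) @ P

# Matrix of U(B) = AB - BA in the basis E_ij (columns = images of basis matrices).
M = np.zeros((n * n, n * n))
for k in range(n * n):
    E = np.zeros((n, n))
    E.flat[k] = 1.0
    M[:, k] = (A @ E - E @ A).flatten()

# Eigenvalues of U should be all differences d_i - d_j.
expected = np.sort((d[:, None] - d[None, :]).flatten())
computed = np.sort(np.linalg.eigvals(M).real)
print(np.allclose(expected, computed, atol=1e-6))  # True
```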
H: Why is $\frac{1}{\frac{1}{X}}=X$? Can someone help me understand in basic terms why $$\frac{1}{\frac{1}{X}} = X$$ And my book says that "to simplify the reciprocal of a fraction, invert the fraction"...I don't get this because isn't reciprocal by definition the invert of the fraction? AI: Maybe this will help you see why $\;\dfrac 1{\large \frac 1X}= X.\;$ We multiply numerator and denominator by $X$, which we can do because we can multiply any number by $\dfrac XX = 1$ without changing the actual value of the number: $$\frac 1{\Large \frac 1X}\cdot \frac XX = \frac X1 = X$$ $$ $$
H: the field $\mathbb{Z}_3[i]$ is ring-isomorphic to the field $\mathbb{Z}_3[x]/(x^2 + 1)$ Let $\mathbb{Z}_3[i] =${$a+bi | a, b \in \mathbb{Z}_3$} . Show that the field $\mathbb{Z}_3[i]$ is ring-isomorphic to the field $\mathbb{Z}_3[x]/(x^2 + 1)$ how can I able to do this?can someone help? AI: Consider the ring homomorphism $\mathbb Z_3[x]\rightarrow \mathbb Z_3[i]$ sending $x$ to $i$. This exists by the universal property of polynomial rings. It is clear that no constant or linear polynomial is in the kernel, but $x^2+1$ is. Since $\mathbb Z_3$ is a field, we can use the division algorithm for polynomials to see that any element of the kernel must be a multiple of $x^2+1$. Conversely, all such multiples work. (This is just reiterating the proof of the theorem that a polynomial ring over a field is a PID. You can just quote that theorem if you wish.) So the kernel is the ideal generated by $x^2+1$, and First Isomorphism Theorem for rings gives $$\frac{\mathbb Z_3[x]}{(x^2+1)} \cong \mathbb Z_3[i].$$ If you need more detail or explanation, just ask. As @Omnivium notes in the comments, this is a very common way of showing two rings are isomorphic.
H: Square roots and powers This is a rather silly question. In what order does one evaluate a combination of powers and fractional powers? I have the question phrased: $\sqrt{ 1/4^2 }$ OR ${ 1/4 }$? Which is greater? I answered that it cannot be determined, because the ${ 1/4^2 }$ could be evaluated first and then the square root, yielding $\pm 1/4$. I understand it could also be the multiplication of exponents, yielding +1/4. Which of these is correct? AI: $\sqrt x$ means, by definition, the positive number $r$ with $r^2=x$. EDIT: make that, the non-negative number $r$ with $r^2=x$. Wouldn't want to leave out zero.
H: What's the best way to calculate all of the points for a curve given only a few points? I've been reading up on curves, polynomials, splines, knots, etc., and I could definitely use some help. (I'm writing open source code, if that makes a difference.) Given two end points and any number of control points (including e.g. $0$ and $100$), I need to calculate many different points for curve. This curve must pass through all points, including the end points. I'm not sure if this means that there is even a difference between the end points and the control points or not; I guess the difference would be that the end points don't have any points on the "outside", and thus they are different in that regard. I have tried and succeeded with the "De Casteljau's algorithm" method, but the curve it generates doesn't (necessarily) pass through the control points (unless on a straight line or something). I have also looked into solving for the curve's equation using a generic polynomial curve equation, e.g.: $y = a + b x + c x ^ 2 + \dots + j x ^ 9$ plugging points into it, and then solving systematic equations. The problem with this approach is that it solves for a function, so then the curve wouldn't be able to go "backwards" any, right (unless it's a "multivalued" function maybe)? Based on browsing through Wikipedia, I think what I might want is to calculate a spline curve, but even though I know some Calculus I'm having trouble understanding the math behind it. I asked this question in the Mathematics section because I'm expecting a mathematical answer, but if the solution is easier to explain with pseudocode or something then I'll take that too. :) Thanks! Update: I'm looking to curve fit using Spline (low-degree) polynomial interpolation given some points. Order matters (as marty cohen explained it), and I want each polynomial to be continuous in position, tangent, and curvature. I also want minimalized wiggles and to avoid high degree polynomials if possible. :D AI: It looks like your set of endpoints and control points can be any set of points in the plane. This means that the $order$ of the points is critical, so that the generated curve goes through the points in a specified order. This is much different than the ordinary interpolation problem, where the points of the form $(x_i, y_i)$ are ordered so that $x_i < x_{i+1}$. As I read your desire, if you gave a set of points on a circle ordered by the angle of the line from the center to each point, you would want the result to be a curve close to the circle. There are a number of ways this could be done. I will assume that you have $n+1$ points and your points are $(x_i, y_i)_{i=0}^n$. The first way I would do this is to separately parameterize the curve by arc length, with $d_i = \sqrt{(x_i-x_{i-1})^2+(y_i-y_{i-1})^2}$ for $i=1$ to $n$, so $d_i$ is the distance from the $i-1$-th point to the $i$-th point. For a linear fit, for each $i$ from $1$ to $n$, let $t$ go from $0$ to $d_i$ and construct separate curves $X_i(t)$ and $Y_i(t)$ such that $X_i(0) = x_{i-1}$, $X_i(d_i) = x_i$, and $Y_i(0) = y_{i-1}$, $Y_i(d_i) = y_i$. Then piece these together. For a smoother fit, do a spline curve through each of $(T_i, x_i)$ and $(T_i, y_i)$ for $i=0$ to $n$, where $T_0 = 0$ and $T_i = T_{i-1}+d_i$. To get a point for any $t$ from $0$ to $T_n$, find the $i$ such that $T_{i-1} \le t \le T_i$ and then, using the spline fits for $x$ and $y$ (instead of the linear fit), get the $x$ and $y$ values from their fits. 
Note that $T_i$ is the cumulative length from $(x_0, y_0)$ to $(x_i, y_i)$, and $T_n$ is the total length of the line segments joining the consecutive points. To keep the curves from not getting too wild, you might look up "splines under tension". Until you get more precise, this is as far as I can go.
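A sketch of the parametrized-spline approach described above, in Python with SciPy (the sample points and the use of `CubicSpline` are my own choices, not from the answer):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Ordered points the curve must pass through (example: points around a circle).
pts = np.array([[1, 0], [0, 1], [-1, 0], [0, -1], [1, 0]], dtype=float)

# Cumulative chord length T_i, used as the curve parameter.
d = np.sqrt(np.sum(np.diff(pts, axis=0) ** 2, axis=1))
T = np.concatenate([[0.0], np.cumsum(d)])

# Separate splines x(t) and y(t) through (T_i, x_i) and (T_i, y_i).
sx = CubicSpline(T, pts[:, 0])
sy = CubicSpline(T, pts[:, 1])

# Evaluate many points along the curve; it passes through all the given points.
t = np.linspace(0.0, T[-1], 200)
curve = np.column_stack([sx(t), sy(t)])
print(curve[:3])
```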
H: Characterization of open subset of $\mathbb{R}$ Let $X$ be an open nonempty subset of $\mathbb{R}$. Prove that there exists a unique countable set of open intervals $\{(a_n,b_n)\}_{n=1}^{\infty}$ such that (a) $\cup_{n=1}^{\infty}(a_n,b_n)=X$ (b) $(a_m,b_m)\cap(a_n,b_n)=\emptyset$ if $n\neq m$ ($a_n=-\infty$ and $b_n=\infty$ may occur.) How I did it: Suppose $x\in X$. We'll show that $x$ is in one such open interval. Consider the set $L_x=\{y<x\mid y\not\in X\}$. If $L_x$ is empty, $x$ is in the open interval with left endpoint $-\infty$. Otherwise, $L_x$ is not empty, and must have a least upper bound $b$. If $b\in X$, there must be some open ball around $b$ contained in $X$, which is impossible since there are elements outside $X$ arbitrarily close to $b$. So $b\not\in X$. So $x$ is in the interval with left endpoint $b$. Similarly, we can find the right endpoint for the interval containing $x$. We can remove this interval, and choose a new $x'\in X$ if there is any. So we get a union of disjoint intervals. This union must be unique, since no two intervals can join, and no interval can be broken into two disjoint intervals. The number of intervals must be countable, since each interval contains a rational number. Is this solution good? I'm worried about the part about removing the interval and choosing a new $x'\in X$. AI: It needs a bit of work precisely at the point that worries you in order to make it fully convincing. It would be better to carry out that argument for each point of $X$ and then show that any two of the resulting intervals are either equal or disjoint. In this answer I gave a pretty detailed writeup of a slightly different approach that gives you all of the intervals at once.
H: What % of remaining coats are full length A garment supplier stores $800$ coats in a warehouse, of which $15\%$ are full length coats. If $500$ of the shorter length coats are removed from the warehouse, what % of the remaining coats are full length? AI: Step 1: $15\%$ of $800$ is $120$. So out of the $800$ coats, we know $120$ are full length. Step 2: if $500$ shorter ones are removed from the total of $800$, then $300$ are left; of these $300$, we have $120$ full length and the rest are shorter ones. Step 3: Calculate the % of full length coats among the $300$ left: it is $\frac {120}{300} \times 100 = 40\%$.
H: Upper bound to a series with ceiling If we know that: $$a_{i+1} \leq {a_i \over 2}$$ Then we can calculate an upper bound for every n: $$a_{n} \leq {a_0 \over 2^n}$$ But what if we keep the elements integer, by taking the ceiling: $$a_{i+1} \leq \lceil {a_i \over 2} \rceil$$ As long as $a_i > 1$, the series is decreasing. However, the rate of decrease is slower, and the above upper bound doesn't hold. What is the best upper bound (as a function of $a_0$ and $n$) that we can find? $$a_{n} \leq F(a_0,n) = ??? $$ AI: You have $a_{i+1} \le a_i/2 + 1/2$, so $a_n \le 1 + (a_0 - 1) 2^{-n}$. This is best possible in that it is an equality if $a_0 \equiv 1 \mod 2^n$.
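A quick check of the bound $a_n \leq 1+(a_0-1)2^{-n}$ against the worst case $a_{i+1}=\lceil a_i/2\rceil$ (a sketch in Python; the tested ranges are arbitrary):

```python
import math

def worst_case(a0, n):
    """Iterate the extremal recurrence a_{i+1} = ceil(a_i / 2) for n steps."""
    a = a0
    for _ in range(n):
        a = math.ceil(a / 2)
    return a

for a0 in range(1, 200):
    for n in range(0, 12):
        assert worst_case(a0, n) <= 1 + (a0 - 1) / 2 ** n

# Equality when a0 = 1 (mod 2^n), as noted in the answer: a0 = 1 + 5*2^7 gives a_7 = 6.
assert worst_case(1 + 5 * 2 ** 7, 7) == 1 + 5
print("bound holds on all tested cases")
```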
H: What is ratio of total number of people who read B newspaper In a survey conducted with $95$ people on readership of two newspapers $A$ and $B$ , it was noted that $30$ people read both ,$20$ people read only $A$, $5$ Read only $B$ and balance $40$ read neither. What is ratio of people who read A to the total number of people who read $B$? AI: HINT: First calculate the total number who read $A$: that’s $30+20=50$. Then calculate the total number who read $B$; I’ll leave that to you. Finally form a fraction: if $a$ and $b$ are two numbers, the ratio of $a$ to $b$ is $\frac{a}b$.
H: Differentiability of $f$ if $f^{-1}$ is differentiable Suppose that $f$ is a one-to-one function and that $f^{−1}$ has a derivative which is nowhere $0$. Prove that $f$ is differentiable. AI: Since $f$ is one-to-one, $f^{−1} $ is a function whose inverse is $f$. Since $f^{−1} $ is differentiable, $f^{−1} $ is continuous. Let $y = f(x)$ , so $f^{−1} (y) = x$. $$\lim_{h\to0}\frac{f(x+h)−f(x)}{h}=\lim_{h\to0}\frac{f(x+h)−y}{h}.$$ Now since $f^{−1}$ is continuous, we can write $x + h = f^{−1} (y + k)$ where $k \to 0 $ as $ h \to 0.$ Now $$\lim_ {h\to0} \frac {f(x+h)−y}{h} = \lim_{h\to0} \frac {f(f^{−1} (y+k))−y}{(x+h)−x} = \lim_{h\to0} \frac {(y+k)−y}{f^{−1} (y+k)−f^{−1} (y)} = \lim _{k\to0} \frac {k}{f^{−1} (y+k)−f^{−1} (y)} $$ as "$h \to 0 \implies k \to 0$". Hence $$ \lim_{h\to0} \frac {f(x+h)−y}{h} = \lim _ {k\to0}\frac {k} { f^{−1}(y+k) − f^{−1}(y) } = \frac {1} {(f^{−1})′(y) } = \frac {1} {(f^{−1})′(f(x)) }.$$ So f is differentiable.
H: Finding consecutive odd number The negative of the sum of 2 consecutive odd numbers is less than -45 , which of the following may be one of the numbers? $A)21 $ $B) 23 $ $C) 26 $ $D) 22 $ $ C) 24$ What will be logic to solve this problem. AI: Let the consecutive odd numbers be $2n-1, 2n+1$ where $n$ is any integer So, $-(2n-1+2n+1)<-45\iff 4n>45\implies n\ge 12$ For $n=12, 2n-1=23;2n+1=25$ For $n=13, 2n-1=25;2n+1=27$ and so on
H: Permutations with exactly $k$ inversions Let $I_{n,k}$ denotes the number of permutations of $\left\{1,..,n\right\}$ that have exactly $k$ inversions. Prove that: $$\sum_k I_{n,k}x^k = \frac{\prod_{i=1}^n (1-x^i)}{(1-x)^n}$$ The only one fact I came up with is recursive formula $I_{n,k}=\sum_{i=0}^{}I_{n-1,i}$, but it's rather useless here, I think. Can anybody help? AI: Given any permutation $\tau$, we denote the number of inverse pairs in $(\tau(1),\tau(i)),(\tau(2),\tau(i)),...,(\tau(i-1),\tau(i))$ by $V(\tau,i)$. Then $\tau\rightarrow (V(\tau,1),V(\tau,2),...,V(\tau,n))$ is a bijection from permutations to $n$-tuples with $a_i \le i-1$, with number of inversions correspond to sum of all coordinates. Expand the RHS and you see it is the generating function for the number of such tuples, while the LHS is counting permutations with a given number of inversions.
H: Rationale behind MLE of $f_{\theta}(x) = \frac{1}{\theta} I_{\{1, \dots,\theta\}}(x)$ Our probability density for $\theta \in \{1,\dots,\theta_0\}$ is $$f_{\theta}(x) = \frac{1}{\theta} I_{\{1, \dots,\theta\}}(x)$$ Let $X_{(n)}$ be the largest order statistic. According to the solution, the likelihood function is $$\frac{1}{\theta^n} I_{\{X_{(n)}, \dots,\theta_0\}}(x)$$ I don't get why. Also, $X_{(n)}$ is supposed to be the MLE of $\theta$. Why? AI: The likelihood based on the sample $(x_k)$ is by definition $L(\theta)=f_\theta(x_1)f_\theta(x_2)\cdots f_\theta(x_n)$. In the present case, $L(\theta)=\theta^{-n}\mathbf 1_{x_1\leqslant\theta}\mathbf 1_{x_2\leqslant\theta}\cdots\mathbf 1_{x_n\leqslant\theta}=\theta^{-n}\mathbf 1_{\max(x_k)\leqslant\theta}$ hence $L(\theta)=0$ if $\theta\lt\max(x_k)$ and $\theta\mapsto L(\theta)$ is decreasing on $\theta\geqslant\max(x_k)$. Thus, $L(\theta)$ is maximal at $\theta=\max(x_k)$.
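A small numerical illustration of the answer (a sketch in Python; the sample and the value of $\theta_0$ are made up): the likelihood $\theta^{-n}\mathbf 1_{\max(x_k)\leqslant\theta}$ is zero below the sample maximum and decreasing above it, so it peaks at $\theta=\max(x_k)$.

```python
# Likelihood L(theta) for an observed sample from the discrete uniform {1, ..., theta}.
sample = [3, 7, 2, 5, 7, 1]          # hypothetical data
n = len(sample)
theta0 = 15                          # assumed known upper bound for the parameter

def likelihood(theta):
    if theta < max(sample):
        return 0.0                   # some observation exceeds theta: impossible
    return theta ** (-n)

values = {theta: likelihood(theta) for theta in range(1, theta0 + 1)}
mle = max(values, key=values.get)
print(mle, max(sample))              # both are 7
```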
H: Finding gallons of milk for $2$% How many gallons of milk that is $2$% butterfat must be mixed with milk that is $3.5$% butterfat to get $10$ gallons that are $3$% butterfat? AI: Set up the following system $$\left\{ {\begin{array}{*{20}{c}} {.02x + .035y = .03 \cdot 10} \\ {x + y = 10} \end{array}} \right.$$ where $x$ is the number of gallons of $2\%$ milk and $y$ is the number of gallons of $3.5\%$ milk. Then, solve this system using standard techniques (such as adding/subtracting equations or substitution) to obtain $$x = \frac{10}{3} \text{ gallons} \ \ \ \text{and } \ \ \ y = \frac{20}{3} \text{ gallons.}$$
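The same system solved numerically (a sketch in Python with NumPy, just to confirm the arithmetic):

```python
import numpy as np

# 0.02 x + 0.035 y = 0.3   (butterfat balance)
#      x +       y = 10    (total gallons)
A = np.array([[0.02, 0.035],
              [1.0,  1.0]])
rhs = np.array([0.3, 10.0])

x, y = np.linalg.solve(A, rhs)
print(x, y)  # approximately 3.333... and 6.666..., i.e. 10/3 and 20/3
```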
H: Comparing $\gamma^e$ and $e^\gamma$ How can I calculate, without a calculator or anything like that, the values of $\gamma^e$ and $e^\gamma$ in order to compare them? ($\gamma$ the Euler-Mascheroni constant) Note: the shape of this question is borrowed from the beautiful question of Mirzodaler >>> here. AI: Since $$\gamma^e=e^{e\ln\gamma},$$ the problem is comparing $\gamma$ and $e\ln\gamma$, or equivalently finding the sign of $\gamma-e\ln\gamma$. Consider the function $$f(x)=x-e\ln x$$ and its monotonicity.
H: Calculate the Laplace transform. Can anybody help me with the answer of this question? Find the inverse Laplace transform of $$f(t)=10$$ AI: The LT of a constant $a$ is $$\hat{f}(s) = a \int_0^{\infty} dt \, e^{-s t} = \frac{a}{s}$$
H: Find $x$ in terms of $n$ A survey of $n$ people found that $60$ % preferred brand $A$. An additional $x$ people were surveyed who all preferred brand $A$. $70$% of all people surveyed preferred brand $A$ . Find $x$ in terms of $n$ AI: HINT: So, initially $60$% of $n$ i.e., $n\cdot \frac{60}{100}=\frac{3n}5$ people preferred brand $A$ So, the total people preferring brand $A$ is $\frac{3n}5+x$ which is the $70$% of all the people $n+x$ $$\implies \frac{3n}5+x=\frac{70}{100}\cdot\left(n+x\right)$$ Can you take it from here?
H: How many coordinates are unreachable? I wanted to know, if a man was to go from $(0,0)$ to $(46,46)$ moving only straight and up with the following constraints:- If he walks right, he will walk atleast $4$ consecutive coordinates. If he moves up, he will walk atleast $12 $ consecutive coordinates. How many coordinates are unreachable by him in this $46 \times 46$ grid? (Assumption:A coordinate is reachable by the man if he runs through the coordinate.) Any help is appreciated. Thanks. AI: The reachable points are those that lie on the $1,4,8,12,\dots,40,44$ numbered columns and $1,12,24,36$ numbered rows(numbering being done from bottom to top and from left to right.)
H: Why does a first axiom space have to be $T_{1}$ in order for limit points to have a sequence converging to them? My textbook General Topology by Pervin) says "If $x$ is a point and $E$ a subset of a $T_{1}$-space $X$ satisfying the first axiom of countability, then $x$ is a limit point of $E$ iff there exists a sequence of distinct points in $E$ converging to $X$"? I don't understand why $X$ needs to be a $T_{1}$-space for this to be true. Let $X$ be a first axiom topological space, and $E\subseteq X$. If a point $a\in X$ is a limit point of $E$, then there is a sequence in $E$ whose limit is $a$. Proof: let $B_{n}(a)$ be a monotone decreasing countable open base containing $a$. All these sets will contain at least one point from $E$. If we keep on choosing a countable number of points from each $B_{i}(a)\cap E$, we will generate one such sequence converging to $a$. It is also clearly true that if there is a sequence in $E$, which converges to $a$, then $a$ is a limit point of $E$. Nowehere in this argument was the fact that $E$ is a $T_{1}$-space is used. I feel $X$ being a $T_{1}$-space only proves that $\bigcap_{i}B_{i}(a)=\{a\}$. This fact I feel is irrelevant in making the argument concerned. Thank you for your time. AI: The key word here is "distinct". So let $X$ be a fixed $T_1$-space. Let $E \subset X$. First note that a limit point of $E$ by the standard definition is a point $x$ such that every (open) neighbourhood $U$ of $x$ contains a point of $E$ different from $x$. Now, for a $T_1$-space we can say something stronger: $x$ is a limit point of $E$ iff every (open) neighbourhood of $x$ contains infinitely many points from $E$. Proof: if every neighbourhood contains infinitely many points of $E$, it will certainly contain one different from $x$, so one implication is trivial, so to see the other one: suppose a limit point $x$ of $E$ has an open neighbourhood $U$ such that $ U \cap E$ is finite. Then $F = (U \cap E) \setminus \{x\}$ is also finite, and thus a closed set (here we use $T_1$: singletons and thus finite sets are closed), and so $X \setminus F$ is open, and so is $U' = U \cap (X \setminus F)$. As the latter set is thus an open neighbourhood of $x$ such that $U' \cap E \subset \{x\}$ (i.e. either empty or just $x$, depending on whether $x \in E$ or not), this contradicts that $x$ is a limit point of $E$. This is the reason the fact is stated for $T_1$-spaces. In a non-$T_1$ space like Sierpinski space $\{0,1\}$ with open sets $\{\emptyset, \{0,1\},\{0\}\}$, which is $T_0$, the point $1$ is a limit point of the finite set $E = \{0\}$, but we certainly cannot find infinitely many different points from $E$ converging to $1$, even though the space is trivially first countable (and the constant sequence of $0$'s does converge to $1$). Now if there is a sequence of distinct points from $E$ that converges to $x$, then clearly $x$ is a limit point of $E$ (every open neighbourhood of $x$ contains a tail of the sequence, and so even infinitely many different points of $E$, so we get the strong version of limit point for free). This holds without $T_1$ or first countability. Now if $X$ is $T_1$ and also first countable, and if $x$ is a limit point, then let $U_n$ be a local base at $x$, and pick $x_1 \in E \cap U_1$. When we have chosen $x_1,\ldots,x_n$ (all distinct and from $E$) such that $x_i \in U_j$ for all $j \le i$ ( for $i=1 \ldots n$), then pick $x_{n+1} \in E \cap (U_1 \cap \ldots \cap U_n \cap U_{n+1})$, such that $x_{n+1}$ is distinct from all $x_1,\ldots,x_n$. 
This can be done as $U_1 \cap \ldots \cap U_n \cap U_{n+1}$ is an open neighbourhood of $x$ and every one of them intersects $E$ in infinitely many different points by the strengthened version of limit point, which is valid in $T_1$-spaces. This defines a sequence by recursion, and it clearly converges to $x$ and consists of pairwise distinct points from $E$.
H: Things that can happen to a differential equation We have a list of things that can happen to a differential equation $y'(t)=f(t,y(t)), y(t_0) \in \mathbb{R}^n$ and $ f: G \rightarrow \mathbb{R}^n$ continuous. That is given by (i) $ b = \infty$ (ii) $\limsup \limits_{t \rightarrow b} \|y_{\max}(t)\|=\infty$ (iii) $ \liminf \limits_{t \rightarrow b} \text{dist}((t,y_{\max}(t)),\partial G)=0 $ where $ y_{\max}$ is the maximal solution of the IVP. Now I have trouble interpreting the meaning of (iii). Can someone give me a hint on how to interpret this? AI: Your function $f$ is only defined on $G$. If the solution left $G$, the differential equation would cease to be defined. (iii) is saying that as $t$ approaches $b$, you come close (at least some of the time) to the boundary of $G$. For example, perhaps you don't need the "inf": $(t,y(t))$ actually approaches a point on the boundary $\partial G$, and then there is no differential equation to solve. In more pathological cases, $y(t)$ can fluctuate wildly near the boundary of $G$, so that $\lim_{t \to b-} y(t)$ doesn't exist.
H: Proof regarding factorials. Suppose $a$ and $k$ are positive integers, then how would you prove(not intuitively) that: $a!k! \leq (ak)!$ Although it is apparent that the inequality is correct, but how can I show this algebraically? AI: Assume $a, k\geq1$. \begin{align} \frac {(ak)!} {k!} &= (ak)(ak-1)\cdots(k+1) \\\\ a!&=a(a-1)\cdots2 \\\\ \implies \frac {(ak)!} {a!k!} &= \frac {(ak)(ak-1)\cdots(k+1)}{a(a-1)\cdots2} \geq 1 \end{align} The numerator has $ak-(k+1)+1=(a-1)k$ terms, while the denominator has $a-1$ terms (which is the same or less by our assumption). Matching each term from the denominator with one from the numerator: $$ak\geq a\\ ak-1\geq a-1\\ \vdots$$ you get the inequality.
H: How to convert a permutation group into linear transformation matrix? is there any example about apply isomorphism to permutation group and how to convert linear transformation matrix to permutation group and convert back to linear transformation matrix AI: Hint:Prove that Permutatation matrices form a subgroup of $\mathbb{GL}_n(R)$(under multiplication) and this subgroup is isomorphic to $S_n$. You can visualize it if you think of the way permutation matrices work when they are applied from the left to any matrix. Permutation matrices just change the positions of some rows when they are applied from the left to some matrix. Multiplying two permutation matrix results in another permutation matrix because In a matrix if we apply two permutation matrices from the left we are left with another matrix haing the same rows but in a different position so it can again be represented by another permutation matrix. Clearly the permutation matrix doing the reverse change of these rows will be its inverse. The homomorphism that shows the two groups to be isomorphic is $\phi(P)=(\sigma_1,\sigma_2,\dots ,\sigma_n)$. Here $\sigma_i $ is the row number where the $i$ th row of any matrix go when $P$ is applied to it from the left.
H: A question on geometry? I wanted to know, given quadrilateral ABCD such that $AB^2+CD^2=BC^2+AD^2$, prove that $AC\perp BD$. Help. Thanks. AI: Hints: I'll do it with analytic geometry: place the vertices at $$D=(0,0)\;,\;C=(x,0)\;,\;A=(a,b)\;,\;B=(\alpha,\beta)$$ Then we get that $$AB^2+CD^2=(a-\alpha)^2+(b-\beta)^2+x^2=\beta^2+(\alpha-x)^2+a^2+b^2=BC^2+AD^2\iff$$ $$a\alpha+b\beta=x\alpha\iff \frac\alpha\beta=\frac b{x-a}$$ Now check what the slopes of $\,AC\,,\,BD\,$ are and what's the condition they must fulfill in order to get $\,AC\perp BD\;$ ...
H: property of the exterior derivative $d \circ d=0$ for a $\mathcal C^\infty$ function One of the properties of the exterior derivative is that $d\circ d=0$. We're trying to prove this for the case $f\in\mathcal C^\infty (U)$ on an open set $U\subset \mathbb R^n $. The proof starts with the uniqueness of the exterior derivative using the properties: 1) $d$ is $\mathbb R$-linear; 2) $d\circ d=0$; 3) If $f$ is in $\mathcal C^\infty(U)$, $df$ is the differential of $f$; 4) $d(\omega\wedge\eta)=d\omega\wedge \eta+(-1)^k \omega \wedge d\eta$. Now to the existence of $d$: What I'm interested in is the proof of $d\circ d=0$. Let $f\in\mathcal C^\infty (U)$. $$ \begin{eqnarray} ddf &=& d(\sum_{i=0}^n \frac{\partial f}{\partial x_i} dx_i) \\ &=& \sum_{i=0}^n d(\frac{\partial f}{\partial x_i}) \cdot dx_i+ \sum_{i=0}^n (\frac{\partial f}{\partial x_i}) d(dx_i) \end{eqnarray} $$ The argument for the left part of the sum is Schwarz's lemma and the alternating property, and I got the solution there. The other part is the problem. Now $d(dx_i)$ is the exterior derivative of a 1-form; as we're trying to prove $d\circ d=0$, it is not clear to me how to deduce that this part of the sum is $0$. AI: In your list of properties you should add locality, i.e. if 2 forms $\alpha$ and $\beta$ coincide on an open set $U$, then $d\alpha=d\beta$ on $U$. More generally, denoting by $M$ a given manifold, if $\omega\in\Omega^{n}(M)$ is locally represented by $w=w_{i_1\dots i_n}dx_{i_1}\wedge\dots\wedge dx_{i_n}$, then, denoting by $d'$ an operator that satisfies all properties 1-4 above, by using 4) $$d'w=(d'w_{i_1\dots i_n})dx_{i_1}\wedge\dots\wedge dx_{i_n}+ w_{i_1\dots i_n}d'(dx_{i_1}\wedge\dots\wedge dx_{i_n}); $$ using 3) we arrive at $ d'w_{i_1\dots i_n}=dw_{i_1\dots i_n}$ and $d'x_{i_l}=dx_{i_l}$. Using 3) and 4) we have then $d'(dx_{i_1}\wedge\dots\wedge dx_{i_n})=0$, as $d'x_{i_l}=dx_{i_l}$ for all $l=1,\dots, n$. So $d'$ is the uniquely defined operator $d$, where $d$ is the operator $$dw=\frac{\partial w_{i_1\dots i_n}}{\partial x_k}dx_k\wedge dx_{i_1}\wedge\dots\wedge dx_{i_n}.$$ Property 2) is satisfied as $$ d(dw)=\frac{\partial^2 w_{i_1\dots i_n}}{\partial x_k\partial x_r}dx_r\wedge dx_k\wedge dx_{i_1}\wedge\dots\wedge dx_{i_n}=0,$$ as we sum over all $r,k=1,\dots, n$, with $dx_r\wedge dx_k=-dx_k\wedge dx_r$.
H: Does $f(z)$ exist such that $f'$ and $f''$ exist in $\mathbb{C}$ but $f'''$ does not? Is it possible to find $f(z)$ defined on $\mathbb{C}$ such that $f'$ and $f''$ exist everywhere on $\mathbb{C}$ but $f'''$ does not? I'm guessing no such $f(z)$ exists, but I don't know how to prove this. What I've done so far: Let $f(z)=u(x,y)+iv(x,y)$ If $f'$ exists on $\mathbb{C}$ then $f(z)$ is analytic and the Cauchy-Riemann equations hold everywhere in $\mathbb{C}$: $u_x=v_y$ and $u_y=-v_x$ Similarly $f''$ exists on $\mathbb{C}$ so $\Delta u=0$ and $\Delta v=0$ . I'm not sure where to proceed from here. AI: No such function exists. See this article. Convergent power series are infinitely differentiable in the interior of the disk of convergence.
H: Ordered set partitions Let $a_n$ be the number of ordered partitions of the set $\left\{1,\ldots,n\right\}$, which means that the order of elements in a block is not relevant, but the order of blocks does matter. (So $a_n = \sum_k\left\{n\atop k\right\}k!$, if I'm not mistaken.) Prove the recurrence relation: $$a_0=1; \ a_n=\sum_{i=0}^{n-1}{n\choose i} a_i$$ I don't know if I'm tired or it is just difficult. I tried with a combinatorial interpretation but nothing came to my mind, so I tried with the formula $a_n = \sum_k\left\{n\atop k\right\}k!$ but this also led me nowhere. Can anybody help? AI: I suppose the combinatorial interpretation is: if I want to partition $n$ elements, then I first choose $n-i > 0$ elements that will be put in the first block and then I divide the rest of them in $a_i$ ways, which yields the desired formula. Also note that ${n\choose i}$ = ${n \choose {n-i}}$
H: Topology : R open domain and closed domain I dont understand why we can consider the domain R and $\varnothing$ as close and open domains ? I have no idea of how to demonstrates it and like to see how to do it? Edit : My definition : an open space is circle where each point contained can be circled by a circle with a very small diameter and all the points contained in this small circle are contained in the big one. A closed ones doesn't have this property because on the border of the circle we can't trace this small circle such as all points contained inside are contained in the set. AI: Your definition is not a correct one. There are plenty of open sets that are not a circle. Let's use the definition that you may have in your mind: a set $U \subseteq \mathbb{R}^{2}$ is open if for any $x \in U$ there is a circle without the boundary $B$ such that $x \in B \subseteq U$. We answer if $\emptyset$ is open. If $x \in \emptyset$, then by vacuous truth, there exists a circle $B$ without a boundary such that $x \in B \subseteq \emptyset$, so $\emptyset$ must be open. I think it is pictorially clear that $\mathbb{R}^{2}$ is open, using our definition.
H: Let $g(x) = \int_{0}^{2^x} \sin(t^2)\,dt $. What is $g'(0)$? I'm kinda stuck in this exercise : Let $\displaystyle g(x) = \int_{0}^{2^x}\sin{(t^2)\,dt}$. What is $g'(0)$? How do I approach this kind of thing? I was thinking about Riemann sums but does this have any application here? AI: We will use the Fundamental Theorem of Calculus. Let $F(t)$ be an antiderivative of $\sin(t^2)$. We will not find an explicit formula for $F(t)$, but we won't need to. We have $$g(x)=F(2^x)-F(0).$$ Differentiate, using the Chain Rule. Note that since $2^x=e^{x\ln 2}$, the derivative of $2^x$ is $(\ln 2)e^{x\ln 2}=(\ln 2)2^x$. It follows that $$g'(x)=(\ln 2)2^x F'(2^x).$$ But $F'(2^x)=\sin((2^x)^2)=\sin(2^{2x})$. Putting things together, we find that $$g'(x)=(\ln 2)2^x\sin(2^{2x}).$$ One can write the above argument more compactly. By the Fundamental Theorem of Calculus, we have for any continuous function $f(t)$, that the derivative of $\int_a^u f(t)\,dt$ with respect to $u$ is $f(u)$. Letting $u=2^x$, and $a=0$, we find that $$g(x)=\int_a^u \sin(t^2)\,dt.$$ Then $$\frac{dg}{dx}=\frac{dg}{du}\frac{du}{dx}.$$ But $\frac{dg}{du}=\sin(u^2)$ and $\frac{du}{dx}=(\ln 2)2^x$, and we are essentially finished.
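A numerical cross-check of $g'(x)=(\ln 2)\,2^x\sin(2^{2x})$ at the point the question asks about, $x=0$ (a sketch in Python with SciPy):

```python
import numpy as np
from scipy.integrate import quad

def g(x):
    # g(x) = integral from 0 to 2^x of sin(t^2) dt
    val, _ = quad(lambda t: np.sin(t ** 2), 0.0, 2.0 ** x)
    return val

x = 0.0
h = 1e-6
numeric = (g(x + h) - g(x - h)) / (2 * h)          # central difference
formula = np.log(2) * 2 ** x * np.sin(2 ** (2 * x))  # at x = 0 this is ln(2) * sin(1)
print(numeric, formula)  # the two values agree to several decimal places
```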
H: Laplace Transform of $100e^{-5t}\sin10t$ Can anybody help me with the answer of this question? $$100e^{-5t}\sin10t$$ AI: We have: $$\mathcal{L}(\sin 10 t) = \dfrac{10}{s^2 + 100}$$ We have: $$\mathcal{L}(e^{-5 t} \sin 10 t) = F(s+5) = \dfrac{10}{(s+5)^2 + 100}$$ Lastly, we multiply by $100$, to yield: $$\mathcal{L}(100e^{-5t}\sin10t) = \dfrac{1000}{(s+5)^2 + 100}$$
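The same computation checked symbolically (a sketch in Python with SymPy):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = 100 * sp.exp(-5 * t) * sp.sin(10 * t)

F = sp.laplace_transform(f, t, s, noconds=True)
print(sp.simplify(F - 1000 / ((s + 5) ** 2 + 100)))  # 0, i.e. F matches the answer
```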
H: Need to find function by the data I have found interesting sequence, but I can't find its function. Here are the input and output data: 0 30 1 26.7 2 24 3 21.8 4 20 5 18.5 6 17.1 7 16 8 15 9 13.3 10 12 11 10.9 12 10 13 9.2 14 8.6 15 8 16 7.5 17 6.7 18 6 19 5.5 20 5 21 4.6 22 4.3 23 4 24 3.7 25 3.3 26 3 27 2.7 28 2.5 29 2.3 30 2.1 31 2 Does anybody know what is behind this sequence? Please help. AI: Using MATLAB: Fourier with 3 terms: General model Fourier3: f(x) = a0 + a1*cos(x*w) + b1*sin(x*w) + a2*cos(2*x*w) + b2*sin(2*x*w) + a3*cos(3*x*w) + b3*sin(3*x*w) Coefficients (with 95% confidence bounds): a0 = 1.095e+11 (-8.143e+14, 8.145e+14) a1 = -1.641e+11 (-1.221e+15, 1.221e+15) b1 = -5.669e+09 (-3.516e+13, 3.515e+13) a2 = 6.553e+10 (-4.879e+14, 4.88e+14) b2 = 4.533e+09 (-2.811e+13, 2.812e+13) a3 = -1.089e+10 (-8.118e+13, 8.116e+13) b3 = -1.132e+09 (-7.026e+12, 7.024e+12) w = 0.00206 (-2.553, 2.557) Goodness of fit: SSE: 0.9513 R-square: 0.9995 Adjusted R-square: 0.9994 RMSE: 0.1991 Polynomial of degree 5: Linear model Poly5: f(x) = p1*x^5 + p2*x^4 + p3*x^3 + p4*x^2 + p5*x + p6 Coefficients (with 95% confidence bounds): p1 = -3.164e-06 (-5.386e-06, -9.426e-07) p2 = 0.0002865 (0.0001134, 0.0004596) p3 = -0.0103 (-0.01514, -0.005463) p4 = 0.2077 (0.1496, 0.2659) p5 = -3.051 (-3.331, -2.772) p6 = 29.66 (29.25, 30.06) Goodness of fit: SSE: 1.484 R-square: 0.9992 Adjusted R-square: 0.9991 RMSE: 0.2389 Smoothing Spline: Smoothing spline: f(x) = piecewise polynomial computed from p Smoothing parameter: p = 0.9 Goodness of fit: SSE: 0.04464 R-square: 1 Adjusted R-square: 0.9999 RMSE: 0.06053 Exponential with 2 terms: General model Exp2: f(x) = a*exp(b*x) + c*exp(d*x) Coefficients (with 95% confidence bounds): a = 28.49 (28.09, 28.9) b = -0.08572 (-0.08712, -0.08433) c = 1.528 (0.9799, 2.076) d = -1.313 (-2.479, -0.1475) Goodness of fit: SSE: 1.015 R-square: 0.9995 Adjusted R-square: 0.9994 RMSE: 0.1904
H: Hartshorne exercise 1.6.4 : Is it true that $\mathcal{O}_{P,X} \cong \mathcal{O}_{\varphi(P),\Bbb{P}^1}$? Let us work over a fixed algebraically closed field $k$ and consider a non-singular projective curve $X$ and $\varphi : X \to \Bbb{P}^1$ a non-constant morphism. My question is: For $P \in X$, do we have an isomorphism $$\mathcal{O}_{P,X} \cong \mathcal{O}_{\varphi(P),\Bbb{P}^1}?$$ The reason I ask this question is because I want to prove that $\varphi$ is surjective. I believe I have almost done this, and this is the last part in the proof that I basically need. Now I have determined that $\varphi$ is actually a dominant morphism (by topological considerations and using that the cardinality of $X$ is necessarily infinite). So actually I already know that $$\varphi_P^\ast : \mathcal{O}_{\varphi(P),\Bbb{P}^1} \to \mathcal{O}_{P,X}$$ is injective. How can I prove that it has to be surjective? Do I know that $\mathcal{O}_{P,X}$ is finitely generated (as a module) over the image of $\mathcal{O}_{\varphi(P),\Bbb{P}^1}$? AI: These local rings are not isomorphic, unless $\varphi$ itself is an isomorphism. The situation, from an algebraic perspective, is similar to the inclusion of $\mathbb Z$ into $\mathbb Z[i]$. This is not an isomorphism, and does not become one if you localize at $2$ and at the prime above $2$. One way to see it is that if this were an isomorphism, it would induce an isomorphism on fraction fields, i.e. an isomorphism $K(\mathbb P^1) \cong K(X)$, but such an isomorphism implies that $X$ itself is isomorphic to $\mathbb P^1$ (since a smooth projective curve is determined by its function field).
H: Question - Möbius inversion formula I need your help in the next question: Prove directly from the definition the Möbius inversion formula. (Möbius function defined as follows: μ(n) = 1 if n is a square-free positive integer with an even number of prime factors. μ(n) = −1 if n is a square-free positive integer with an odd number of prime factors. μ(n) = 0 if n is not square-free.) I'm not sure what I need to do and how Thanks ! AI: Well, since you already know about the abelian group structure, things are way easier: Moebius Inversion Formula: For arithmetic functions $\,f\,,\,g\,$ we have $$f(n)=\sum_{d\mid n}g(d)\iff g(n)=\sum_{d\mid n}f(d)\mu\left(\frac nd\right)$$ Proof: With the functions $$u(n)=1\;\;\forall n\in\Bbb N\;,\;\;I(n)=\begin{cases}1&\,\;\;n=1\\0&,\,\,n>1\end{cases}$$ we get that $$f=g*u\stackrel{\text{mult. by $\,\mu\,$}\;}\implies\,f*\mu=(g*u)*\mu=g*(u*\mu)=g*I=g\;\;\;\;\;\square$$
H: Criterion for sum/difference of unit fractions to be in lowest terms Pick two nonzero integers $a$ and $b$, so $(a,b)\in (\mathbb{Z}\setminus\{0\})\times(\mathbb{Z}\setminus\{0\})$. We want to add the fractions $1/a$ and $1/b$ and use the standard algorithm. First carefully find the least common multiple of $a$ and $b$ (it is only well-defined up to a sign but that will not be important). Then convert each of the two fractions to get this common number as their denominators. Finally, add the two new numerators and keep the denominator. For the scope of this question, let's call $(a,b)$ "easy" iff the resulting sum fraction is already in lowest terms (irreducible fraction). Examples: If $a=12$ and $b=16$, then $$\frac{1}{12}+\frac{1}{16}=\frac{4}{48}+\frac{3}{48}=\frac{7}{48}$$ so $(12,16)$ is "easy". On the other hand, since $$\frac{1}{10}+\frac{1}{15}=\frac{3}{30}+\frac{2}{30}=\frac{5}{30}$$ the pair $(10,15)$ is not "easy". The question: Is there an equivalent or simpler way to define this "easiness"? For example in terms of the signs and prime factorizations of $a$ and $b$? Does this property already have a conventional name? Note that signs seem to matter since $(3,6)$ is not easy while $(3,-6)$ is. AI: Let $\gcd(a,b)=d$, and write $a=da', b=db'$, where $\gcd(a',b')=1$. Then you have $$\frac{1}{a}+\frac{1}{b}=\frac{1}{da'}+\frac{1}{db'}=\frac{b'+a'}{da'b'}$$ Now, $\gcd(a'+b',a'b')=1$ because if prime $p|\gcd(a'+b',a'b')$ then $p|a'b'$ then wlog $p|a'$; but then $p|(a'+b')$ and hence $p|b'$, a contradiction. Hence the sum is "easy" exactly when $$\gcd(a'+b',d)=1$$ In the example $a=10, b=15$, we have $a'=2, b'=3, d=5$ and $a'+b'=5$. In the example $a=3, b=\pm 6$, we have $a'=1, b'=\pm 2, d=3$, and $a'+b'$ is $3$ or $-1$.
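A brute-force confirmation of the criterion $\gcd(a'+b',d)=1$ against the direct definition of "easy" (a sketch in Python; the search range is arbitrary):

```python
from math import gcd

def easy_direct(a, b):
    """Add 1/a + 1/b over a least common multiple and test for lowest terms."""
    L = abs(a * b) // gcd(abs(a), abs(b))   # one choice of lcm (well-defined up to sign)
    num = L // a + L // b                   # exact, since a and b divide L
    return gcd(abs(num), L) == 1

def easy_criterion(a, b):
    d = gcd(abs(a), abs(b))
    ap, bp = a // d, b // d                 # a = d*a', b = d*b', gcd(a', b') = 1
    return gcd(abs(ap + bp), d) == 1

for a in range(-20, 21):
    for b in range(-20, 21):
        if a != 0 and b != 0:
            assert easy_direct(a, b) == easy_criterion(a, b)
print("criterion matches the definition on the tested range")
```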
H: $\mathbb Q$-basis of $\mathbb Q(\sqrt[3] 7, \sqrt[5] 3)$. Can someone explain how I can find such a basis ? I computed that the degree of $[\mathbb Q(\sqrt[3] 7, \sqrt[5] 3):\mathbb Q] = 15$. Does this help ? AI: Try first to find the degree of the extension over $\mathbb Q$. You know that $\mathbb Q(\sqrt[3]{7})$ and $\mathbb Q(\sqrt[5]{3})$ are subfields with minimal polynomials $x^3 - 7$ and $x^5-3$ which are both Eisenstein. Therefore those subfields have degree $3$ and $5$ respectively and thus $3$ and $5$ divide $[\mathbb Q(\sqrt[3]7,\sqrt[5]3) : \mathbb Q]$, which means $15$ divides it. But you know that the set $\{ \sqrt[3]7^i \sqrt[5]3^j \, | \, 0 \le i \le 2, 0 \le j \le 4 \}$ spans $\mathbb Q(\sqrt[3]7, \sqrt[5]3)$ as a $\mathbb Q$ vector space. I am letting you fill in the blanks. Hope that helps,
H: matrix calculus (differentiation of complex matrix) I know that $f(x)=||Ax-b||_2^2$ (real matrix) has gradient $\partial f/\partial x=A^T(Ax-b)$. Now suppose $A$ is complex, then how can I prove that $\partial f/\partial x=A^*(Ax-b)$? AI: I assume that $x$ is real in both cases. For real-valued $A$ and $b$ you have $$f(x)=||Ax-b||_2^2=(Ax-b)^T(Ax-b)=x^TA^TAx-2x^TA^Tb+b^Tb$$ So $$\partial f/\partial x=2A^T(Ax-b)$$ For complex $A$ and $b$ we get $$f(x)=||Ax-b||_2^2=(Ax-b)^H(Ax-b)=x^TA^HAx-x^TA^Hb-b^HAx+b^Hb=\\ =x^TA^HAx-2x^T\text{Re}\{A^Hb\}+b^Hb$$ where $^H$ denotes Hermitian conjugation. Since $x$ is real, $x^TA^HAx=x^T\,\text{Re}\{A^HA\}\,x$ (the imaginary part of the Hermitian matrix $A^HA$ is antisymmetric), so the derivative w.r.t. $x$ is $$\partial f/\partial x=2\left(\text{Re}\{A^HA\}x-\text{Re}\{A^Hb\}\right)=2\,\text{Re}\{A^H(Ax-b)\}$$
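A finite-difference check of the real-$x$ formula $\partial f/\partial x = 2\,\mathrm{Re}\{A^H(Ax-b)\}$ (a sketch in Python with NumPy; the sizes and the random data are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 3
A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
b = rng.normal(size=m) + 1j * rng.normal(size=m)
x = rng.normal(size=n)                        # x is real

f = lambda x: np.linalg.norm(A @ x - b) ** 2  # real-valued objective
grad = 2 * np.real(A.conj().T @ (A @ x - b))  # claimed gradient

# Central finite differences, one coordinate at a time.
h = 1e-6
numeric = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                    for e in np.eye(n)])
print(np.max(np.abs(numeric - grad)))         # tiny: the formula matches
```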
H: Are weak* topology and strong topology the same in $L^\infty$? Let $(\Omega, \mathcal{F}, R)$ be a reference probability space. For short, we use $\mathbb E[\cdot]$ to denote the expectation operator $\mathbb E^{R}[\cdot]$ under probability $R$. We consider the following two layers of function spaces. The lower level function space is $L^{1}$, the collection of $\mathcal F$-measurable integrable random variables $X$ with endowed with norm $\|X\|_{1} = \mathbb E[|X|].$ The upper level function space is $L^{\infty}$, that is the collection of all random variables $X$ with $\hbox{esssup}_{\Omega} |X| <\infty$, endowed with weak* topology, which is equivalent to the norm induced by $$\|\phi\|_{*} = \sup\{ \mathbb E[ \phi X] : \mathbb E[X] \le 1\}.$$ Compared weak* norm in $L^{\infty}$, one can define different norm by $$\|\phi\|_{\infty} = \hbox{esssup} |\phi|.$$ We refer weak* topology and strong topology in $L^{\infty}$ induced by $\|\cdot\|_{*}$ and $\|\cdot\|_{\infty}$, respectively. Now we have two opposite claims. [Claim 1.] First, $\| \cdot\|_{*}$ and $\| \cdot \|_{\infty}$ should be different norms in $L^{\infty}$. This can be seen from the following fact: The closed unit ball $B^{*}(1) = \{\phi: \|\phi\|_{*} \le 1 \}$ is compact under weak* topology by Alaoglu's theorem, while the closed unit ball $B^{\infty}(1) = \{\phi: \|\phi\|_{\infty} \le 1\}$ is not compact under $\|\cdot\|_{\infty}$. [Claim 2.] Second, we can show that $\|\phi\|_{*} = \|\phi\|_{\infty}$ for all $\phi \in L^{\infty}$ as follows. One direction is straightforward: $$\|\phi\|_{*} = \sup\{ \mathbb E[ \phi X] : \mathbb E[X] \le 1\} \le \|\phi\|_{\infty} \sup\{ \mathbb E[ X] : \mathbb E[X] \le 1\} = \|\phi\|_{\infty}.$$ The other direction is to show $\|\phi\|_{*} \ge \|\phi\|_{\infty}$. For arbitrary $\phi \in L^{\infty}$, set $$A_{n} = \{\omega\in \Omega: \phi(\omega) \ge \|\phi\|_{*} + 1/n\}.$$ If $R(A_{n}) >0$, then with indicator $I_{A_{n}}$, we have the following contradiction: $$\|\phi\|_{*} \ge \mathbb E[\phi R^{-1}(A_{n}) I_{A_{n}}] \ge \|\phi\|_{*} + 1/n. $$ This implies $R(A_{n}) = 0$ for all $n>0$ and hence $\phi \le \|\phi\|_{*}$ almost surely in $R$. This implies the other direction. [Q.] Can you indicate which one is false among two opposite claims? AI: It's quite true that $\|\cdot\|_\ast = \|\cdot\|_\infty$. The problem is that the norm $\|\cdot\|_\ast$ does not induce the weak-* topology. For an explicit example, take $\Omega = [0,1]$ with $R$ Lebesgue measure. Let $\phi_n = 1_{(0, 1/n)}$. It follows from the dominated convergence theorem that $E[\phi_n X] \to 0$ for any $X \in L^1(R)$, so $\phi_n \to 0$ in the weak-* topology. But clearly we have $\|\phi_n\|_\ast = \|\phi_n\|_\infty = 1$ for every $n$. In fact, except in trivial cases (where $L^1(R)$ and $L^\infty(R)$ are finite dimensional), the weak-* topology is not induced by any norm. For instance, it is not first countable.
H: Electric field of finite sheet: Full analytical solution of integration? I am trying to work out the integral $$ \operatorname{E}_{z}\left(x,y,z\right) = \alpha\int\int\frac{z\,\mathrm{d}x'\,\mathrm{d}y'} {\left[\left(x - x'\right)^{2} + \left(y -y'\right)^{2} + z^{2}\right]^{3/2}} $$ with the limits $$-\frac{a}{2}\leq dx'\leq\frac{a}{2}\qquad-\frac{b}{2}\leq dy'\leq\frac{b}{2}$$ Mathematica does not return the solution in a reasonable time and I can't seem to find it. This integral is for the $z$ component of the electric field of a homogeneously charged finite sheets in the $z=0$ plane. The sheets dimensions are $a \cdot b$. The integrals that need to be solved are detailed on the second page of this text. Note that The three integrals are for $E_x$, $E_y$ and $E_z$ (typo in the linked document). I solved the $E_x$ and $E_y$ integrals in Mathematica and I would be grateful for some help or a pointer to get the analytical expression for $E_z$. AI: This integral may be done analytically as far as I can see. First, for the inner integral over $y'$, use a trig substitution $y'=y+\sqrt{(z^2+(x-x')^2} \tan{\theta}$ to transform the integral into $$\alpha z \int_{-a/2}^{a/2} \frac{dx'}{z^2+(x-x')^2} \, \int_{-\arctan{((b/2)+y)/\sqrt{(z^2+(x-x')^2}}}^{\arctan{((b/2)-y)/\sqrt{(z^2+(x-x')^2}}} d\theta \frac{\cos{\theta}}{\sin^3{\theta}}$$ The inner integral has a simple antiderivative, and after some algebra, we are down to single integrals: $$\frac{\alpha z}{2} \left (\frac{b}{2}+y\right)^2 \int_{-a/2}^{a/2} \frac{dx'}{\left[ (x-x')^2+\left (\frac{b}{2}+y\right)^2+z^2\right ]\left[(x-x')^2+z^2 \right ]} - \\\frac{\alpha z}{2} \left (\frac{b}{2}-y\right)^2 \int_{-a/2}^{a/2} \frac{dx'}{\left[ (x-x')^2+\left (\frac{b}{2}-y\right)^2+z^2\right ]\left[(x-x')^2+z^2 \right ]}$$ Using partial fraction decompositions, we may simplify the above expression drastically to get $$\frac{\alpha z}{2} \int_{-a/2}^{a/2} dx' \left [\frac{1}{(x-x')^2+\left (\frac{b}{2}-y\right)^2+z^2}- \frac{1}{(x-x')^2+\left (\frac{b}{2}+y\right)^2+z^2} \right ]$$ These integrals are expressible in terms of arctan, and I assume you can take it from here.
H: How to prove $\frac{\sin{(A-B)}\sin{(A-C)}}{\sin{2A}}+\frac{\sin{(B-C)}\sin{(B-A)}}{\sin{2B}}+\frac{\sin{(C-A)}\sin{(C-B)}}{\sin{2C}}\ge 0$ Let $0<A,B,C<\dfrac{\pi}{2}$ and $A+B+C=\pi$; prove that $$\dfrac{\sin{(A-B)}\sin{(A-C)}}{\sin{2A}}+\dfrac{\sin{(B-C)}\sin{(B-A)}}{\sin{2B}}+\dfrac{\sin{(C-A)}\sin{(C-B)}}{\sin{2C}}\ge 0$$ My idea: since $\sin 2A,\sin 2B,\sin 2C>0$, this is equivalent to $$\sin{2B}\sin{2C}\sin{(A-B)}\sin{(A-C)}+\sin{2A}\sin{2C}\sin{(B-C)}\sin{(B-A)}+\sin{2A}\sin{2B}\sin{(C-A)}\sin{(C-B)}\ge 0$$ but I can't get any further. I searched for a while and found a similar question: http://www.artofproblemsolving.com/Forum/viewtopic.php?p=128597&sid=55397cc8fe6b896712e5495b124dd247#p128597 AI: $$\sin{2B}\sin{2C}\sin{(A-B)}\sin{(A-C)}=\\\sin(2(\pi-A-C))\sin(2(\pi-A-B))\sin{(A-B)}\sin{(A-C)}\\=4[\sin(A+C)\cos(A+C)][\sin(A+B)\cos(A+B)]\sin{(A-B)}\sin{(A-C)}\\ =4\cos(B)\cos(C)[\sin^{2}(A)-\sin^{2}(B)][\sin^{2}(A)-\sin^{2}(C)]$$ Using the extended Schur's Inequality (the common positive factor $4$ can be dropped), $$\sum \cos(B)\cos(C)[\sin^{2}(A)-\sin^{2}(B)][\sin^{2}(A)-\sin^{2}(C)]\geq0$$
H: Explain why the integral test can't be used to determine whether the series is convergent I have a homework question on the integral test chapter of my book and I'm not sure i'm answering this correctly. $$\sum_{n=1}^\infty = \frac{\cos \pi n}{\sqrt n } $$ Now I know that in order for me to apply the integral test I have to have a decreasing, continuous, positive function. So my answer to this question is I cant apply the integral test because this function is not a decreasing positive function on the interval $[1,\infty)$. Is this an accurate statement? I guess I can actually ask this question in a more precise manner and that is. How can i look at a function and tell if its decreasing and positive? I seem to have a conceptual block on this one. Thanks in advance for your response. AI: This is an alternating series, you probably want the alternating series test, not the integral test. The integral test is for positive series, while this one alternates $+,-$. For that reason alone you can't use the integral test.
H: How to show that a given set is a subspace OK I just want to be sure I have done this correctly. Given: $R^3$, are the following sets subspaces? (a) $W_1$ = {($a_1$,$a_2$,$a_3$) $\in R^3: a_1 = 3a_2$ and $a_3 = -a_2$ Since the set you get when you plug in the values to the set you get (3$a_2$,$a_2$,$-a_2$), and that is the same as $a_2$(3,1,-1). Which means that the set is closed under addition since the vector you want is a linear combination -- it can be expressed as a multiple of (3, -1, 1). It's also closed under multiplication since any arbitrary $a_2$ I plug in is still in $R^3$. So, the addition and multiplication are all defined in the relevant vector space. Is that correct? AI: However what you did seems right, it would be nice verifying the definition of a subspace. Of course $0=0~(3,1,-1)\in W$ and if we took $v=(a_1,a_2,a_3),w=(b_1,b_2,b_3)\in W$ then we would see $$\alpha v+\beta w\in W$$ as well wherein $\alpha,\beta\in\mathbb R$. It can be seen just by doing some manipulation.
H: Show that this set is linearly independent Notation: $V$ is a vector space of real functions $g:X\rightarrow\mathbb{R}$; $\{g_1,...,g_m\}$ is a subset of $V$; $\{x_1,...,x_n\}$ is a subset of $X$, where $x_i\neq x_j$ when $i\neq j$; $v_i=\left (g_i(x_1),...,g_i(x_n)\right )$ for all $i=1,...,m$; $\{v_1,...,v_m\}$ is a subset of $\mathbb{R}^n$. I want to prove that if $\{g_1,...,g_m\}$ is linearly independent, then $\{v_1,...,v_m\}$ is linearly independent. Could someone give me a hint? Thanks. AI: The result is not correct. For example, it is possible that the functions $g_1, \ldots, g_m$ are linearly independent as functions even though they share a common set of zeros $x_1, x_2, \ldots, x_n$. In that case $v_1, v_2, \ldots, v_m $ are all the zero vector, and so that collection is dependent.
H: Help with anti-image matrix First of all, I am very sorry but I don't know the mathematics terminology in English, so I'll try to explain as well as I can, but I will probably make some mistakes since it's not my native language. I have this problem: given the endomorphism $f_k(x,y,z) = (x+y+kz, x+ky, 2x+(k+1)y+kz)$ from $\mathbb{R}^3$ to $\mathbb{R}^3$, determine for $k \in \mathbb{R}$ a basis of $\mathop{\rm Im} f_k$ and a basis of $\mathop{\rm Ker} f_k$, and the set $f^{-1}(0,2,2)$. I have a good idea on how to solve the first one, but I am blocked on the second one. I'd like some help if possible. Again, sorry for the many mistakes, and thanks a lot for your patience and attention. AI: Definition: $f^{-1}(0,2,2) = \{ (x,y,z): f(x,y,z) = (0,2,2) \}$. So, you have a system of equations: \begin{align*} 0 &= x + y + kz, \\ 2 &= x + ky, \\ 2 &= 2x + (k+1)y + kz. \end{align*} Note that the sum of the first two equations gives you the third, so the system is not regular and will have many solutions or none at all. I'll assume this is enough of a hint. If you need help with solving the system, feel free to ask in the comments. Edit (solving the system) I got $x = 2-ky$ from the second equation, and then I substituted $x$ in the first one, obtaining $$y = \frac{2+kz}{k-1}.$$ Then I've put that in $x = 2-ky$ and got $$x = -\frac{2+k^2z}{k-1}.$$ Wolfram|Alpha gives the same solution, so it should be correct. Obviously, for $k = 1$ this solution is not acceptable, as we have a division by zero. This means that we have to solve it separately for $k = 1$ (we could ignore the third equation, as it is still the sum of the first two, but I'll write it anyway): \begin{align*} 0 &= x + y + z, \\ 2 &= x + y, \\ 2 &= 2x + 2y + z. \end{align*} Now, we have $x = 2 - y$. Putting that into the first equation, we get $$z = -2.$$ There are no more equations to use, so we're done here. So, our final solution is: \begin{align*} f^{-1}(0,2,2) &= \left\{ \left(-\frac{2+k^2z}{k-1}, \frac{2+kz}{k-1}, z \right): z \in \mathbb{R} \right\}, \quad \text{for $k \ne 1$}, \\ f^{-1}(0,2,2) &= \left\{ \left(2 - y, y, -2 \right): y \in \mathbb{R} \right\}, \quad \text{for $k = 1$}. \end{align*} We could have solved by $(x,z)$ (see here) or by $(y,z)$ (see here), which would give us different parametrizations of the same solution. To check that these solutions are really correct, try calculating $f(x,y,z)$ of the above parametrized solutions (both cases $k \ne 1$ and $k = 1$). As a result, you should get $(0,2,2)$ in all cases (for all $k$ and $z$).
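A symbolic check of the $k\ne 1$ parametrization above (a sketch in Python with SymPy):

```python
import sympy as sp

x, y, z, k = sp.symbols('x y z k')
f = (x + y + k * z, x + k * y, 2 * x + (k + 1) * y + k * z)

# The parametrized solution for k != 1 from the answer.
sol = {x: -(2 + k ** 2 * z) / (k - 1), y: (2 + k * z) / (k - 1)}
print([sp.simplify(expr.subs(sol)) for expr in f])  # [0, 2, 2]
```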
H: Product rule question about Alphabet I am trying to understand the product rule and I have a simple example it says, If I have a license plate with two English letters how many different plates can be made? The answer is 26^2 Now another question is in the same format but it asks how many plates can be made with upper and lower case letters. Would this still be counted as 26^2? Or would this be 52^2? Or are they all different sets, making it 78^2? I looked through lecture slides and they were no help. Thanks again. AI: To think about the problem intuitively, imagine just choosing one letter out of the set of upper and lower case letters. You have $26$ lower case letters and $26$ upper case letters, giving you $26+26=52$ characters in total. Therefore there are $52$ potential choices for the first character. Choosing for the second letter is exactly the same, and so you can choose any of the $52$ characters; combining these is a simple matter of multiplication (because for each of the $52$ choices for the first character there are $52$ choices for the second character). Therefore we have: $$52\times52=52^{2}=2704\text{ combinations}$$
H: $G_1/H\cong G_2\implies G_1\cong H\times G_2$? Lagrange's lets us write the deceptively tidy relation: $$\left|\frac{G}{H}\right|=\frac{|G|}{|H|}$$ and from this we can do neat things like, in the proof of the Orbit-stabiliser theorem, $$\frac{G}{\text{stab}(s)}\cong \text{orb}(s)\implies\left|\frac{G}{\text{stab}(s)}\right|=|\text{orb}(s)|\implies\frac{|G|}{|\text{stab}(s)|}=|\text{orb}(s)|\\\implies|G|=|\text{orb}(s)|\times|\text{stab}(s)|$$ Which leads to my question: is it always/sometimes/ever possible to use naive reasoning and say that $$\frac{G_1}{H}\cong G_2\implies G_1\cong H\times G_2$$ AI: Take the cyclic group $\mathbb{Z}/4\mathbb{Z}$ of order $4$. This has a subgroup of order $2$, let us call it $H$. $H \cong \mathbb{Z}/2\mathbb{Z}$, since this is the only group of order $2$. By order-counting, $G/H \cong \mathbb{Z}/2\mathbb{Z}$, because it has order $2$ as well. But $\mathbb{Z}/4\mathbb{Z}$ is not $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$. The idea here is the following: there are many ways to build a new group out of smaller pieces besides for the direct product.
H: How to efficiently generate five numbers that add to one? I have access to a random number generator that generates numbers from 0 to 1. Using this, I want to find five random numbers that add up to 1. How can I do this using the smallest number of steps possible? Edit: I do want the numbers to be uniformly distributed. AI: The following method results in a uniform distribution on all sequences with sum one. Generate four random numbers and sort them in ascending order to get $x_1 \leq x_2 \leq x_3 \leq x_4$ then take $$x_1, x_2-x_1, x_3-x_2, x_4-x_3, 1-x_4$$ as your final sequence of numbers that add to one. In contrast, the method suggested in another answer (rescaling) results in sequences that are biased towards the centroid of the set of all sequences of sum one (i.e. the summands are biased towards $\frac{1}{5}$). Here are some results of a simple experiment to demonstrate the bias. I implemented both methods (called "sort" and "scale") and generated ten sequences with each. I determined the standard deviation of all these sequences and sorted them in descending order. Here's one result set (all numbers rounded to three decimal places): # method sd sequence - ------ -- -------- 1: sort 0.302 (0.802, 0.081, 0.065, 0.026, 0.027) 2: sort 0.182 (0.344, 0.001, 0.139, 0.475, 0.040) 3: sort 0.180 (0.290, 0.499, 0.040, 0.165, 0.005) 4: sort 0.179 (0.369, 0.445, 0.009, 0.017, 0.160) 5: sort 0.172 (0.011, 0.198, 0.308, 0.023, 0.461) 6: scale 0.171 (0.325, 0.452, 0.031, 0.191, 0.001) 7: sort 0.159 (0.413, 0.129, 0.064, 0.028, 0.366) 8: scale 0.138 (0.003, 0.090, 0.392, 0.256, 0.259) 9: scale 0.136 (0.335, 0.346, 0.082, 0.233, 0.004) 10: sort 0.133 (0.375, 0.086, 0.349, 0.093, 0.096) 11: scale 0.128 (0.082, 0.021, 0.328, 0.232, 0.337) 12: scale 0.118 (0.256, 0.038, 0.342, 0.083, 0.281) 13: sort 0.103 (0.205, 0.229, 0.028, 0.348, 0.191) 14: scale 0.096 (0.121, 0.361, 0.117, 0.142, 0.260) 15: sort 0.091 (0.246, 0.284, 0.181, 0.258, 0.031) 16: scale 0.080 (0.157, 0.281, 0.192, 0.294, 0.077) 17: sort 0.074 (0.146, 0.179, 0.328, 0.228, 0.119) 18: scale 0.065 (0.264, 0.259, 0.202, 0.083, 0.193) 19: scale 0.027 (0.219, 0.215, 0.213, 0.147, 0.206) 20: scale 0.020 (0.235, 0.201, 0.200, 0.185, 0.179) So the "scale" methods tends to produce sequences with smaller standard deviation.
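A direct implementation of the sorting method from the answer (a sketch in Python using the standard library):

```python
import random

def random_partition_of_one(parts=5):
    """Return `parts` nonnegative numbers summing to 1, uniform over such sequences."""
    cuts = sorted(random.random() for _ in range(parts - 1))
    points = [0.0] + cuts + [1.0]
    return [b - a for a, b in zip(points, points[1:])]

sample = random_partition_of_one()
print(sample, sum(sample))  # five numbers; the sum is 1 (up to floating-point rounding)
```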
H: Proving that a matrix is diagonalizable Let $ T $ be the linear operator on $ \Bbb R^3 $ which is represented by the matrix $$ A = \begin{bmatrix} 6 & -3 & -2 \\ 4 & -1 & -2 \\ 10 & -5 & -3 \\ \end{bmatrix} $$ Prove that $ T $ is diagonalizable. AI: Hints: $$\det(xI-A)=\begin{vmatrix}x-6&3&2\\-4&x+1&2\\-10&5&x+3\end{vmatrix}=$$ $$(x-6)(x^2+4x+3)-60-40-10(x-6)+12(x+3)+20(x+1)=$$ $$=x^3-2x^2-21x-18-100-10x+60+12x+36+20x+20=$$ $$=x^3-2x^2+x-2=(x-2)(x^2+1)$$ Thus, $\,A\,$ is diagonalizable over the complex $\,\Bbb C\,$...but not over the reals, say.
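A numerical confirmation of the characteristic polynomial and its roots (a sketch in Python with NumPy), supporting the conclusion that the eigenvalues are $2$ and $\pm i$:

```python
import numpy as np

A = np.array([[6, -3, -2],
              [4, -1, -2],
              [10, -5, -3]], dtype=float)

print(np.poly(A))            # approx. [1, -2, 1, -2]: coefficients of x^3 - 2x^2 + x - 2
print(np.linalg.eigvals(A))  # approximately 2, i, -i (distinct, so diagonalizable over C)
```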
H: homeomorphism between the same spaces with different topologies Is $\mathbb{R}^2$ with two different topologies on it homeomorphic as a topological space? For example, with the discrete topology and the usual topology. What I need is a continuous bijection whose inverse is also continuous; from the usual topology to the discrete topology any continuous map must be constant, so I think they are not homeomorphic. Thank you for the help and discussion. AI: No, they are not homeomorphic. $\Bbb R^2$ with the discrete topology is not connected, and $\Bbb R^2$ with the usual topology is connected, so there isn’t even a continuous map from $\Bbb R^2$ with the usual topology onto $\Bbb R^2$ with the discrete topology: continuous maps preserve connectedness. There are many other ways to see that they aren’t homeomorphic. For instance, the usual topology is separable, and the discrete topology isn’t. The usual topology has a countable base; the discrete topology does not.
H: $\operatorname{SL}(2,\mathbb R)$ is not isomorphic to $S^1 \times \mathbb R^2 $ as a Lie group? I try to prove $\operatorname{SL}(2,\mathbb R)$ is not isomorphic to $S^1 \times \mathbb R^2 $ as a Lie group. My idea is that since $\exp\colon \mathfrak{sl}(2,\mathbb{R}) \to \operatorname{SL}(2,\mathbb{R})$ is not surjective, it is enough to show that $\exp\colon \operatorname{Lie}(S^1 \times \mathbb{R^2}) \to S^1 \times \mathbb{R^2}$ is surjective. But I couldn't find how to do it. Are there other good ideas? Could you help me? AI: $S^1 \times \Bbb{R}^2$ is abelian while $\text{SL}_2(\Bbb{R})$ is not.
H: Solve $g(g(x))=f(x)$ If $f(x)$ is a continuous and monotonically increasing function on an interval $(0,∞)$ and $f(x)>0$ for every $x>0$, then does there always exist a continuous and monotonically increasing function $g(x)$ on $(0,∞)$ so that for every $x\in (0,∞),g(x)\in (0,∞)$ and $g(g(x))=f(x)$? If $f(x)=c~x^k~(c>0,k>0),$ and $g(x)=r~x^{\sqrt{k}},$ where $c=r^{\sqrt{k}+1},$ then $g(g(x))=f(x)~(x>0).$ But how to solve it under general conditions? Thanks in advance! AI: EDIT: the answer is YES. This is Theorem 11.2.2 on pages 423-424 of the KCG book mentioned below. I think I will put the jpeg at the end so that my text is readable. A preliminary answer. The book you want is Iterative Functional Equations by Kuczma, Choczewski, and Ger. I got a used copy at a very reasonable price. I put several items about the same problem for real analytic functions at BAKER. The person who initiated this was Helmuth Kneser, father of Martin Kneser. He showed that there was a real analytic solution solving $g(g(x)) = e^x$ on the entire real line, meaning that it extends to a holomorphic function on an open set surrounding the real axis. Note that the question of commutation is exactly right: Baker re-worded the problem into one of commuting functions and formal power series. For the most difficult case an elaborate procedure was given by Ecalle in about 1973, see QUESTION and my own ANSWER. The main obstruction in the continuous case is that you cannot have fixed points of the target function with negative slope. However, you have said monotonic increasing. In case you have something like $x+ \sin x$ I think there is material in the KCG book for solving on each interval between fixpoints, but I will need to check that. If your function has no fixpoints or, say, only one, I think you can do it. But there may be no simple recipe or formula, just an existence result.
H: Conditional probability with $5$ sided dice Question: In Brooklyn people play a fair die with $5$ sides, numbered $1,2,3,4,5$. Jack rolls the die over and over again. What is the probability that the results $2$ or $4$ will come up before the result $5$? What we want to understand: we saw two solutions which we didn't understand. The first one being: $\frac {0.2}{0.2+0.2+0.2}$ (we thought that $\Omega=\{2,4,5\}$ ???) And the second solution is: Let $a$ be the desired probability, so we condition on the result of the first throw: $a=0.2 \cdot 0 + 0.4 \cdot 1 + 0.4 \cdot a $ Any explanations will be welcome :-) AI: Solution 0 (none of the ones listed): One of $2,4,5$ will come up first, from those three. Each of the three is equally likely to be the one that appears first. Solution 1: the displayed ratio is the probability that a $5$ is rolled, conditioned on the event that a $2$, $4$, or $5$ is rolled. You actually want to subtract this from 1, or instead $\frac{0.2+0.2}{0.2+0.2+0.2}$. Solution 2: Let $a$ be the desired probability of winning. After one roll, there is a $0.2$ chance of loss, a $0.4$ chance of winning, and a $0.4$ chance of rolling again, wherein you win with probability $a$ again.
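A quick Monte Carlo check of the value $2/3$ (a small sketch assuming a fair five-sided die):

```python
import random

def two_or_four_before_five():
    """Roll a fair 5-sided die until 2, 4, or 5 appears; report whether 2 or 4 won."""
    while True:
        roll = random.randint(1, 5)
        if roll in (2, 4):
            return True
        if roll == 5:
            return False

trials = 100_000
wins = sum(two_or_four_before_five() for _ in range(trials))
print(wins / trials)  # close to 2/3, about 0.667
```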
H: How to show all eigenvalues are positive? Could you help me to show that the following matrix has all its eigenvalues positive? $$H= \begin{bmatrix} \sum_{k=1}^ng_1(x_k)^2 & \sum_{k=1}^ng_1(x_k)g_2(x_k) & \cdots & \sum_{k=1}^ng_1(x_k)g_m(x_k)\\ \sum_{k=1}^ng_2(x_k)g_1(x_k) & \sum_{k=1}^ng_2(x_k)^2 & \cdots & \sum_{k=1}^ng_2(x_k)g_m(x_k)\\ \vdots& \vdots & \ddots &\vdots \\ \sum_{k=1}^ng_m(x_k)g_1(x_k) & \sum_{k=1}^ng_m(x_k)g_2(x_k) & \cdots & \sum_{k=1}^ng_m(x_k)^2 \end{bmatrix}$$ where $g_1,...g_n$ are linearly independent real functions and $x_i\neq x_j$ when $i\neq j$. Thanks. AI: $H= \begin{bmatrix} \sum_{k=1}^ng_1(x_k)^2 & \sum_{k=1}^ng_1(x_k)g_2(x_k) & \cdots & \sum_{k=1}^ng_1(x_k)g_m(x_k)\\ \sum_{k=1}^ng_2(x_k)g_1(x_k) & \sum_{k=1}^ng_2(x_k)^2 & \cdots & \sum_{k=1}^ng_2(x_k)g_m(x_k)\\ & & \vdots & \\ \sum_{k=1}^ng_m(x_k)g_1(x_k) & \sum_{k=1}^ng_m(x_k)g_2(x_k) & \cdots & \sum_{k=1}^ng_m(x_k)^2 \end{bmatrix}$ $= \begin{bmatrix} g_1(x_1) & g_1(x_2) & \cdots & g_1(x_n)\\ g_2(x_1) & g_2(x_2) & \cdots & g_2(x_n)\\ & & \vdots & \\ g_m(x_1) & g_m(x_2) & \cdots & g_m(x_n) \end{bmatrix}\begin{bmatrix} g_1(x_1) & g_2(x_1) & \cdots & g_m(x_1)\\ g_1(x_2) & g_2(x_2) & \cdots & g_m(x_1)\\ & & \vdots & \\ g_1(x_n) & g_2(x_n) & \cdots & g_m(x_n) \end{bmatrix}=AA^T$ Here $A=\begin{bmatrix} g_1(x_1) & g_1(x_2) & \cdots & g_1(x_n)\\ g_2(x_1) & g_2(x_2) & \cdots & g_2(x_n)\\ & & \vdots & \\ g_m(x_1) & g_m(x_2) & \cdots & g_m(x_n) \end{bmatrix}$ Let $\lambda$ be an eigen value of $H$ then $\exists 0\ne v\in V$ such that $Hv=\lambda v$ Let $\left<,\right>$ be the standard dot product, $\lambda\left< v,v\right>=\left< v,\lambda v\right>=\left<v,AA^tv\right>=\left<A^tv,A^tv\right>\ge 0$ As $v\ne 0$ so $\left< v,v\right>>0\Rightarrow \displaystyle \lambda =\frac{\left<A^tv,A^tv\right>}{\left<v,v\right>}$ So $\displaystyle \lambda =\frac{\left<A^tv,A^tv\right>}{\left<v,v\right>}\ge 0$
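A small numerical illustration of the factorization $H=AA^T$, with hypothetical choices of the functions $g_i$ and the points $x_k$ (they are not from the question):

```python
import numpy as np

# Hypothetical example: g_i are monomials, x_k are distinct sample points.
gs = [lambda x: 1.0, lambda x: x, lambda x: x**2]   # m = 3 functions
xs = np.array([0.5, 1.0, 2.0, 3.0])                 # n = 4 distinct points

A = np.array([[g(x) for x in xs] for g in gs])      # A_{ik} = g_i(x_k), an m x n matrix
H = A @ A.T                                          # the Gram-type matrix from the question

print(np.allclose(H, H.T))               # H is symmetric
print(np.linalg.eigvalsh(H) >= -1e-12)   # its eigenvalues are (numerically) nonnegative
```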
H: Confirm the meaning of Prime and Primitive in a Galois(2) polynomial. Here it discusses primality (or more accurately irreducibility) and primitivity of polynomials in $G(2)$. More specifically it states that $x^6 + x + 1$ is irreducible and primitive. But here I can divide $x^7 + 1$ by $x^6 + x + 1$ and get $x$ remainder $x^2+x+1$. Surely this means the the order of $x^6 + x + 1$ is at most $7$ and therefore nowhere near the required $2^6-1 = 63$ as required for primitivity. What mistake am I making? I guess it is my interpretation of The order of a polynomial $f(x)$ for which $f(0)$ is not $0$ is the smallest integer $e$ for which $f(x)$ divides $x^e+1$. A polynomial over GF(2) is primitive if it has order $2^n-1$. AI: According to the definition in the first link you provided, to see that the order of $x^6+x+1$ is $2^6-1$, you only need to check that $(x^6+x+1)\nmid (x^{n}+1)$ for all $n<2^6-1$, and that $x^6+x+1$ does divide $x^{2^6-1}+1$. A computer check easily confirm that this is the case. I don't see any reason why the division of $x^7+1$ by $x^6+x+1$, giving quotient x and remainder $x^2+x+1$, would imply that the order is at most $7$, as you claimed.
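A small sketch that computes the order of $x^6+x+1$ over GF(2) by brute force (polynomials are encoded as bitmasks; the helper names are made up for this illustration):

```python
def gf2_mod(a, m):
    """Remainder of a modulo m, where ints encode GF(2) polynomials bit by bit."""
    while a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

def order_of_x(f):
    """Smallest e with x^e = 1 mod f, i.e. f divides x^e + 1 over GF(2)."""
    e, p = 1, gf2_mod(0b10, f)      # p = x mod f
    while p != 1:
        p = gf2_mod(p << 1, f)      # multiply by x and reduce mod f
        e += 1
    return e

f = 0b1000011                       # x^6 + x + 1
print(order_of_x(f))                # 63 = 2**6 - 1, so the polynomial is primitive
print(bin(gf2_mod(0b10000001, f)))  # x^7 + 1 leaves remainder 0b111 = x^2 + x + 1
```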
H: Express $y=|-x^2+1|$ as a piecewise function. I'm unsure of how to start this problem. Any help would be greatly appreciated. AI: As for real $z,$$$ |z|=\begin{cases} z &\mbox{if } z\ge0 \\ -z & \mbox{ otherwise } \end{cases}$$ $$|-x^2+1|=\begin{cases} -x^2+1 &\mbox{if } -x^2+1\ge0 \iff -1\le x\le 1 \\ -(-x^2+1)=x^2-1 & \mbox{ otherwise } \end{cases}$$
H: Rewrite fraction, probability generating function I'm looking at an example from probability (generating function) where the following fraction came up: $g_Y(t)=\frac{q+tp}{2-p-tq}=\frac{1}{q(2-p)} (-p(2-p) + \frac{1}{1-\frac{qt}{2-p}})=\frac{1-p}{2-p}+\frac{t}{(2-p)^2}+\frac{qt^2}{(2-p)^3}+ \dots$ $p+q=1$ From which the probabilities $P(Y=1)$, $P(Y=2)$, etc, can be read out. My question is how one can from the fraction $\frac{q+tp}{2-p-tq}$ rewrite to another fraction that can be expanded as a geomteric series. I can of course check that their answer is correct, but what method should I use to tackle such a question if it comes up in a different form? AI: You know that if it can be written as $a + b \frac{1}{1-ct}$ then expansion is possible, so equate your expression to this, cross multiply, and since in your expression the numerator and denominator are linear in $t$, you'll get to three equations in three unknowns $a,b,c$ for the coefficients (in terms of the constant coefficients $p,q$). Then solving that system will give the $a,b,c$, hopefully.
H: What's the difference between $\frac{dy}{dx}$ and $dy$? Ok, so I was doing a substitution problem and I realized that $dy = u\ dx + x\ du$ and not $\frac{dy}{dx} = u\ dx + x\ du$ and I was wondering what the difference was between those two. My first guess would be that $\frac{dy}{dx}$ means the differential of $y$ with respect to $x$, and $dy$ would be the differential of $y$, but I don't know what that implies exactly. Can you provide examples so I can wrap my head around that concept? If $\frac{dy}{dx}$ = integral of $3x$, what would $y$ be? AI: Both $dx$ and $dy$ are just symbols which mean nothing out of context, but probably there is a way to explain. $dx$ is the increase of $x$ and $dy$ is the increase of $y$. That means that if you add an "infinitesimal" $dx$ to $x$, then $y$ will grow by $dy$. In other words, $$dy=y(x+dx)-y(x).$$ If you write $\frac{dy}{dx}$, you mean $$ \frac{dy}{dx} = \lim_{\Delta x\rightarrow 0}\frac{y(x+\Delta x) - y(x)}{\Delta x}, $$ since $dx$ was assumed infinitesimal. When you write $dy$, you always assume there is some $dx$. For example, for $y(x)=\sin(x)$ $$dy = \cos(x)\,dx;$$ for $y(x)=e^x$ $$dy = e^xdx = y(x) dx.$$ So, you can't just write something like $dy = 5$. This may seem to make little sense, but the logic of infinitesimals can sometimes simplify proofs (and it was the only way to differentiate before Cauchy!) In contrast, $\frac{dy}{dx}$ is always a number (we are still talking about real functions, right?), which can depend on $x$. For example, for $y(x) = \sin(x)$ $$\frac{dy}{dx} = \cos(x).$$ In your example $y(x, u) = x\cdot u$, and $$dy=y(x + dx, u + du) - y(x, u) = u\,dx + x\,du + dx\cdot du,$$ and $dx \cdot du$ is omitted because it is much smaller than the other summands and thus "insignificant". We can no longer form $\frac{dy}{dx}$, because we would have to divide the infinitesimal $du$ by the infinitesimal $dx$, but there is $$\frac{\partial y}{\partial x} = \lim_{\Delta x\rightarrow 0}\frac{y(x+\Delta x, u) - y(x, u)}{\Delta x} = u,$$ the partial derivative, which allows us to write the equation for $dy$ in the form $$dy = \frac{\partial y}{\partial x}dx + \frac{\partial y}{\partial u}du.$$ In conclusion, $dy$ is the full increase of $y$, depending on the increases of all variables ($dx$, $du$, etc.), and $\frac{dy}{dx}$ or $\frac{\partial y}{\partial x}$ is the rate that determines how much $y$ grows when only the single variable $x$ grows.
H: Recursive algorithm I am trying to understand how this works. My instructor is teaching his first class, in summer on top of that, and only had 3 slides on this and had to rush over it. His example is: Give a recursion algorithm for computing 0+1+2+...+n, where n is non-negative His answer is procedure add(n: nonnegative integer) if n = 0 then return 0 else return n + (n - 1) He says the output is 0+1+2+...+n but how? If you use n + (n - 1) and input 2, you would have 3, and if you input 3, you would have 5. If you continue with 4, you end up with 7. So it seems to be jumping by 2, not 1. What am I not understanding? AI: The idea of recursion is to break down a big task into similar, smaller tasks. This can be imagined as delegating a task to an assistant (who may have his own assistant, and so on). Imagine you are supposed to add the numbers from 1 to 100. Instead of adding them all yourself, you ask your assistant to add the numbers from 1 to 99. You wait, (he asks his assistant for the sum from 1 to 98, etc.) and eventually he gives you an answer: 4950. Then you add 100 to that to get 5050 for the final result. The program you give (when corrected) implements this behavior: If $n$ is 0, there is no way to make the task simpler, so it just returns 0. Otherwise we ask for the result of the problem for $n-1$ (which is what "add(n - 1)" gives) and then add on $n$ to get the correct total.
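For comparison, here is the corrected procedure written out in Python; the key point is that the recursive call add(n - 1) replaces the plain expression (n - 1):

```python
def add(n):
    """Return 0 + 1 + 2 + ... + n by delegating the smaller sum to a recursive call."""
    if n == 0:
        return 0
    return n + add(n - 1)   # NOT n + (n - 1): the recursion computes the rest of the sum

print(add(2), add(3), add(4))  # 3 6 10
```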
H: How to calculate $ \int_{0}^{\infty} \frac{ x^2 \log(x) }{1 + x^4} $? I would like to calculate $$\int_{0}^{\infty} \frac{ x^2 \log(x) }{1 + x^4}$$ by means of the Residue Theorem. This is what I tried so far: We can define a path $\alpha$ that consists of half a half-circle part ($\alpha_r$) and a path connecting the first and last point of that half circle (with radius $r$) so that we have $$ \int_{-r}^{r} f(x) dx + \int_{\alpha_r} f(z) dz = \int_{\alpha} f(z) dz = 2 \pi i \sum_{v = 1}^{k} \text{Res}(f;a_v) $$ where $a_v$ are zeros of the function $\frac{x^2 \log(x) }{1+x^4}$. If we know $$\lim_{r \to \infty} \int_{\alpha_r} f(z) dz = 0 \tag{*} $$ then we know that $$\lim_{r \to \infty} \int_{-r}^{r} f(x) dx = \int_{-\infty}^{\infty} f(x) dx = 2 \pi i \sum_{v=1}^{k} \text{Res}(f;a_v) $$ and it becomes 'easy'. Q: How do we know (*) is true? AI: It's a bit more tricky that what you describe, but the general idea is correct. Instead of integrating from $0$ to $\infty$, one can integrate from $-\infty$ to $+\infty$ slightly above the real axis. Because of the logarithm, the integral from $-\infty$ to $0$ will give a possibly non-zero imaginary part, but the real part will be an even function of $x$. So we can write: \begin{align} \int_0^{\infty}\frac{x^2\ln x}{1+x^4}dx&=\frac12\mathrm{Re}\,\int_{-\infty+i0}^{\infty+i0} \frac{x^2\ln x}{1+x^4}dx=\\&=\pi\cdot \mathrm{Re}\left[ i\left(\mathrm{res}_{x=e^{i\pi/4}}\frac{x^2\ln x}{1+x^4}+\mathrm{res}_{x=e^{3i\pi/4}}\frac{x^2\ln x}{1+x^4}\right)\right]=\\ &=\pi\cdot \mathrm{Re}\left[ i\left(\frac{\pi e^{i\pi/4}}{16}- \frac{3\pi e^{3i\pi/4}}{16}\right)\right]=\\ &=\pi\cdot\mathrm{Re}\frac{(1+2i)\pi}{8\sqrt{2}}=\frac{\pi^2}{8\sqrt{2}}. \end{align} Now as far as I understand the question was about how can one justify the vanishing of the integral over the half-circle $C$ which in its turn justifies the application of residue theorem. Parameterizing that circle as $x=Re^{i\varphi}$, $\varphi\in(0,\pi)$, we see that \begin{align} \int_C \frac{x^2\ln x}{1+x^4}dx=\int_0^{\pi}\frac{iR^3e^{3i\varphi}\left(i\varphi+\ln R\right)}{1+R^4e^{4i\varphi}}d\varphi=O\left(\frac{\ln R}{R}\right), \end{align} which indeed vanishes as $R\rightarrow \infty$.
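A numerical cross-check of the value $\frac{\pi^2}{8\sqrt 2}$ (a sketch using scipy, independent of the residue computation):

```python
import numpy as np
from scipy.integrate import quad

value, _ = quad(lambda x: x**2 * np.log(x) / (1 + x**4), 0, np.inf)
print(value)                         # about 0.8724
print(np.pi**2 / (8 * np.sqrt(2)))   # the same number
```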
H: Connectivity (graph theory) - proof A graph $G$ has $n$ vertices. Prove that $G$ is connected if and only if we can find an injective sequence $(v_{1},\dots,v_{n})$ of its vertices such that for each $i>1$ there exists $j < i$ with $v_{i} v_{j} \in E(G)$. AI: Proof by induction. For $n=1$ both things are vacuously true. For $n=2$ both statements hold if the two vertices are connected, and both fail if they are not. Suppose now $n>2$. If $G$ is connected, then some vertex $v$ can be removed without disconnecting the graph (for instance, any leaf of a spanning tree of $G$). $G\setminus \{v\}$ now has $n-1$ vertices, and remains connected. By induction, take the injective sequence for $G\setminus \{v\}$, then put vertex $v$ at the end; since $G$ is connected, $v$ is adjacent to some earlier vertex. If instead we have an injective sequence $(v_1, \ldots, v_n)$, then $(v_1,\ldots, v_{n-1})$ is an injective sequence and by induction its corresponding graph $G\setminus \{v_n\}$ is connected. But $v_n$ is connected to one of the other vertices by the defining property of the sequence, hence $G$ is connected as well.
H: Is $\sum_{k=4}^{\infty }{k^{\log(k)}}/{(\log(k))^{k}}$ convergent or divergent? I came across this problem in a textbook, and the question is to investigate the convergence/divergence of the following series: $$\sum_{k=4}^{\infty }\frac{k^{\log(k)}}{(\log(k))^{k}}$$. I have no idea how to start solving this problem. I tried to call $a_{k}=\frac{k^{\log(k)}}{(\log(k))^{k}}$ and then proving that this limit maybe doesn't tend to zero and hence by the n-th term test the series diverges, but I couldn't do it. Any help is appreciated!! AI: First hint: $$ \left(\frac{k^{\log k}}{(\log k)^k}\right)^{1/k} = \frac{k^{\frac{\log k}{k}}}{\log k}. $$ Second hint: $$ \lim_{k \to \infty} k^{\frac{\log k}{k}} = 1. \qquad \text{(why?)} $$
H: How to prove $f( D(a,c) ) \le f [ D(a,b) + D(b,c) ] \implies f( D(a,c) ) \le f ( D(a,b) ) + f ( D(b,c) )$? I am presently working on an exercise from Kaplansky, I. Set Theory and Metric Spaces (ex. 7, pg 70). The question is as follows: Suppose that: $f$ is concave, $f(0) = 0$, $f(x) > 0$ for $x > 0$, and $f$ is montone in the weak sense. Let $M$ be a metric space with distance function $D$. Then $f(D)$ is also a distance function on $M$. The basic properties are easy to prove, where I am having difficulty is the triangle inequality, which asserts: $$f( D(a,c) ) \le f ( D(a,b) ) + f ( D(b,c) ) \tag{1}$$ Since $f$ is weakly monotonic, I know that following must hold: $$f( D(a,c) ) \le f [ D(a,b) + D(b,c) ] \tag{2}$$ I have been attempting to use the definition of concavity to demonstrate that $(2) \implies (1)$, but have been unable to figure anything out. I would appreciate some assistance on what to do. AI: If $f$ is concave and $f(0)=0$, then $f$ is sub-additive, i.e. $$ f(x+y) \leq f(x)+f(y). $$ (see the wikipedia article http://en.wikipedia.org/wiki/Concave_function for a proof.) Applying this to your line (2) finishes the argument.
H: Help with graphing an inequality? Looking at this question, and I'm unsure of how to do it properly. I'm trying to graph y≥3x+4 if x<2 and y≥-3. Any help would be greatly appreciated! AI: Using Geogebra, which is a really neat piece of software
H: $\left(-\frac{1}{2}\right)! = \sqrt{\pi}?$ I recently learned that $\left(-\frac{1}{2}\right)! = \sqrt{\pi}$ but I don't understand how that makes sense. Can someone please explain how this is possible? Thanks! AI: In order to extend the factorial function to any real number, we introduce the Gamma Function, which is a strange object defined as follows: $$ \Gamma(s)=\int_0^\infty t^{s-1}e^{-t} \, dt $$ The gamma function comes with the special property that $n!=\Gamma(n+1)$ for natural numbers $n$, so to evaluate $(-1/2)!$, (which by itself is not technically defined) we define it to be $\Gamma(1/2)$ and hence we evaluate the integral $$ (-1/2)!:=\Gamma(1/2)=\int_0^\infty\frac{e^{-t}}{\sqrt{t}} \,dt $$ To evaluate this integral, we make the substitution $u=\sqrt{t}$, which results in the well known Gaussian integral: $$ \int_0^\infty \frac{e^{-t}}{\sqrt{t}}dt=2\int_0^\infty \frac{e^{-u^2}}{u}u \, du=\int_{-\infty}^\infty e^{-u^2} \, du=\sqrt{\pi} $$
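A quick numerical confirmation (the standard library exposes the Gamma function as math.gamma):

```python
import math

print(math.gamma(0.5))     # about 1.7724538509
print(math.sqrt(math.pi))  # about 1.7724538509 -- Gamma(1/2) = sqrt(pi)
```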
H: Euler graph and cycles Prove that the graph $G$ is an Euler graph if and only if the set of its edges can be divided into separate non-empty subsets, each of which induces simple cycle in $G$. AI: Each connected component of a graph $G$ is Eulerian if and only if the edges can be partitioned into disjoint sets, each of which induces a simple cycle in $G$. Proof by induction on the number of edges. Assume $G$ has $n\ge 0$ edges and the statement holds for all graphs with $<n$ edges. If $G$ has more than one connected component, the claim holds for each (smaller) component, hence for all of $G$. Hence we now assume that $G$ is connected. The case $n=0$ is trivial, hence we may assume $n>0$. $\Rightarrow$: Consider an Euler cycle $v_0v_1v_2\ldots v_n$ with $v_0=v_n$. Consider all $(i,j)$ with $0\le i<j\le n$ and $v_i=v_j$. Such pairs exist because $v_0=v_n$ (and $n>0$!). Select $(i,j)$ among these pairs such that $j-i$ is minimized. Then $v_iv_{i+1}\ldots v_j$ is a simple cycle $\gamma$ and $v_0\ldots v_iv_{j+1}\ldots v_n$ is an Euler cycle of the graph $G':=G-\gamma$. By induction, $G'$ can be partitioned into disjoint cycles. Together with $\gamma$ we obtain a partitioning of $G$ into disjoint cycles. $\Leftarrow$: Assume $G$ can be partitioned into disjoint cycles. Since $n>0$, there must be at least one cycle in this partitioning. If there is exactly one cycle, it is automatically an Euler cycle and we are done. Otherwise, select one cycle $\gamma=v_0v_1\ldots v_k$ (with $k>0$ and $v_k=v_0$). Then we obtain a partitioning of $G-\gamma$ into disjoint cycles, hence by induction each component of $G-\gamma$ is Euler. Also, $\gamma$ shares at least one vertex with each connected component of $G-\gamma$. Construct an Euler cycle of $G$ as follows: Start with $\gamma$ and extend it as follows: If $G'$ is a connected component of $G-\gamma$, select $v\in G'$ that occurs in $\gamma$ (and hence also in the Euler cycle as constructed so far) and replace the first occurance of $v$ in the cycle as constructed so far by an Euler cycle of $G'$ that begins and ends in $v$. After processing all components $G'$ like this, we obtain an Euler cycle of $G$.
H: Number of generators of the maximal ideals in polynomial rings over a field Hi I'm trying to prove the following If $K$ is a field (not necessarily algebraically closed) then every maximal ideal of $K[x_{1},\dots,x_{n}]$ is generated by exactly $n$ elements. I know that if $K$ is algebraically closed, then by Hilbert's Nullstellensatz the problem is done. But if $K$ is not algebraically closed, I know that the Krull dimension of $K[x_{1},\dots,x_{n}]$ is $n$ and by the generalized Krull theorem we have that every ideal has height at most $n$, but I'm stuck at this point. Thank you for any help. AI: There's no need to use any dimension theory. Rather, the proof is by induction. The case $n = 1$ is well known, so I won't repeat it here. Now, suppose $\mathfrak{m}$ is a maximal ideal of $A = K [x_1, \ldots, x_n]$. By a generalised Nullstellensatz, we know that $A / \mathfrak{m}$ is a finite field extension of $K$; in particular, the image of $x_n$ in $A / \mathfrak{m}$ has some minimal polynomial $f_n (t)$ over $K$. Clearly, $f_n (x_n) \in \mathfrak{m}$. Let $L = K[x_n] / (f_n (x_n))$. Then $A / \mathfrak{m} \cong L[x_1, \ldots, x_{n-1}] / \mathfrak{n}$ for some maximal ideal $\mathfrak{n}$ of $L [x_1, \ldots, x_{n-1}]$. Using the induction hypothesis, we know that $\mathfrak{n}$ is generated by $n - 1$ elements, say $f_1 (x_1, \ldots, x_n), f_2 (x_2, \ldots, x_n), \ldots, f_{n-1} (x_{n-1}, x_n)$. Then $\mathfrak{m}$ is generated by $f_1 (x_1, \ldots, x_n), f_2 (x_2, \ldots, x_n), \ldots, f_n (x_n)$, because $K [x_1, \ldots, x_n] / (f_n (x_n)) \cong L [x_1, \ldots, x_{n-1}]$.
H: Testing convergence of $\sum_{n=0}^{\infty }(-1)^n\ \frac{4^{n}(n!)^{2}}{(2n)!}$ Can anyone help me to prove whether this series is convergent or divergent: $$\sum_{n=0}^{\infty }(-1)^n\ \frac{4^{n}(n!)^{2}}{(2n)!}$$ I tried using the ratio test, but the limit of the ratio is equal to 1, which is inconclusive. Any hints please! AI: Hint: Check whether the hypotheses of the alternating series test hold, in particular whether the terms even tend to $0$. The work you already did is not wasted: $$\left|\frac{a_{n+1}}{a_n}\right|=\frac{4(n+1)^2}{(2n+2)(2n+1)}=\frac{2n+2}{2n+1}>1$$ Hence the terms do not approach 0 in absolute value, and by the $n^\textrm{th}$ term test, the series diverges.
H: Simple question about closure Let $p, A, B$ be open sets such that $p \subset A$, $\bar A \subset \bar B$ and $p \cap A\neq \emptyset$. Does $p \cap B \neq \emptyset$ holds? I need this for a proof. AI: As $p \cap A$ is non-empty and open, and as $p \cap A \subset A \subset \overline{A} \subset \overline{B}$, it must intersect $B$ non-emptily: any point $x$ in $p \cap A$ is in $\overline{B}$ so any neighbourhood of $x$, including $p$, must intersect $B$.
H: I don't understand this proof of the AM-GM inequality? The proof uses this lemma which I understand: $\mathbf {Lemma}$: Suppose $x$ and $y$ are positive real numbers such that $x>y$. If we decrease $x$ and increase $y$ by some positive quantity $E$ such that $x-E \ge y+E$, then $(x-E)(y+E) \gt xy$ . $\;$Hence, by subtracting $E$ from $x$ and adding it to $y$, we leave the average of the two numbers unchanged while increasing their product. $\mathbf {Proof}:$ Suppose $a_{1}, a_{2}, a_{3}... a_{n}$ are positive real numbers with average $A$ and product $P$. If all $a_{i}$ are equal, then both the geometric mean and the arithmetic mean are equal to $A$. Let $a_{j}$ be one number closest to $A$ without being equal to $A$. Without loss of generality, let $a_{j} \lt A$ . Since the average of the numbers is $A$, there is at least one member of the set greater than $A$. Let $a_{k}$ be the greatest of these numbers. Clearly we must have $a_{k}-A \gt A-a_{j}$ since $a_j$ is closer to $A$ than any other $a_i$ not equal to $A$. We now use our lemma. Replace $a_j$ with $A$ and $a_k$ with $a_k-(A-a_j)$ . Note that $a_k-(A-a_j) \ge a_j +(A-a_j)$ , so we can apply our lemma with $(A-a_j)$ as our $E$ . By our lemma, the average of the numbers in the new set is the same, but the product is now higher. If we continue this process, we make one of the members of the set equal to $A$ with each application of the process. Hence, in some finite number of steps, we will make all the numbers equal to $A$. Thus, we prove that of all the sets of positive numbers with average $A$, the set with maximum product has all the elements equal to $A$. There are three things I don't understand about this proof: $1)$ I don't understand why they don't loose generality when they say to let $a_j$ be the number closest to $A$ and let $a_j \lt A$. It certainly is possible for this not to be the case, for example the set ${2, 10, 10, 10}$. The average is $8$, but the number closest to $A$ is greater than $A$, so I don't see how the proof can apply to this set. $2)$ I don't see how this process makes makes the elements of the set equal to $A$. If you want $a_j+E$ and $a_k-E$ to be equal to $A$, then $a_j$ and $a_k$ have to be equidistant from $A$. $3)$ if you do bring one pair of terms at a time equal to $A$, then that means you must have an equal number of therms below and above $A$. Any help is appreciated, thanks! AI: 1) You are correct, there needs to be some tweaking. 2) It makes one more number equal to A, so by induction eventually they all will be. 3) You don't make both equal to A; you make at least one equal to A. Perhaps a simpler proof of the middle part, avoiding the first issue, is this: Let $a,b$ be chosen so that $a<A<b$; if this is not possible then all the $a_i$'s are already equal. Set $c=\min(|A-b|,|A-a|$). We replace $a$ by $a+c$ and $b$ by $b-c$. By the same calculation as in the lemma, the average remains the same and the product increases. We have now made at least one of $a,b$ equal to $A$. Continue until all $a_i=A$.
H: Does setting derivative to zero suffice always for minimization of convex functions? I have this convex function in $X$, given by $Trace(AX^TBX)$ where $A$, $B$ are p.s.d and all entries are real. Now suppose I have a linear function $l(X)$ that prevents a trivial zero-matrix solution for $X$ in the minimization of $Trace(AX^TBX)$ w.r.t $X$. Would setting the first derivative (gradient) of $Trace(AX^TBX) + \lambda l(X)$ w.r.t $X$ to zero and solving for $X$ suffice to get the optimal solution for $X$ given the convexity, or are there more conditions/directional derivatives and so forth to consider? For your convenience, the gradient is $(B \otimes A^T + B^T \otimes A)vecX + \lambda \nabla_Xl(X)$ and the gradient $\nabla_Xl(X)$ does not contain $X$ in it, but has other terms. $\lambda$ is the Lagrange multiplier for enforcing the linear constraint. AI: Yes. For a differentiable convex function, any point where the gradient is $0$ is a global minimum; conversely, at any minimum of a differentiable function the gradient must be $0$. So setting the gradient to zero and solving is both necessary and sufficient here, and no further directional-derivative conditions are needed.
H: Acceleration to velocity with coordinate $x$ Can someone please help me in finding the formula used to get the answer to this question? An object moves along $x$-axis. In any coordinate $x$, the acceleration is $a=x^4$ (SI units). If the object goes from rest in $x=1$m, what velocity it will get at $x=2$m? \begin{align*} A)\quad 4.59 \textrm{ m/s}\\ B)\quad 3.52 \textrm{ m/s}\\ C)\quad 2.47 \textrm{ m/s}\\ D)\quad 5.66 \textrm{ m/s}\\ E)\quad 1.41 \textrm{ m/s}\\ \end{align*} Answer B is the correct answer AI: If $v=\frac{dx}{dt}$ is the velocity at time $t$, we have $$\frac{dv}{dt}=x^4.$$ There is a standard trick for solving this kind of DE. Multiply both sides by $v$. We get $$v\frac{dv}{dt}=vx^4=x^4\frac{dx}{dt}.$$ Now we can integrate with repect to $t$. On the left, we get $\frac{1}{2}v^2$, and on the right we get $\frac{1}{5}x^5 +C$. Thus $$\frac{1}{2}v^2=\frac{1}{5}x^5 +C.$$ Use your initial condition to find that $C=-\frac{1}{5}$. Now we know $v^2$ at position $x$.
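Finishing the computation numerically with the initial condition $C=-\frac15$ (a small check, not in the original reply): at $x=2$, $\frac12 v^2=\frac{32}{5}-\frac15$, so $v^2=\frac{62}{5}$.

```python
import math

v_squared = 2 * (2**5 / 5 - 1 / 5)      # from (1/2)v^2 = x^5/5 - 1/5 at x = 2
print(v_squared, math.sqrt(v_squared))  # 12.4  3.5213..., matching answer B (3.52 m/s)
```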
H: Proof of equivalent definitions of continuity of a function Let $X$ and $Y$ be metric spaces, and $f : X\rightarrow Y$ a function. I have to prove the equivalence of: (1) $f : X\rightarrow Y$ is continuous; (3) for every closed $F \subset Y$, $f^{-1}(F)$ is closed. For $(3) \Rightarrow (1)$ I know, for a continuous function: $\forall \, A \subset X: f(\overline A)\subset \overline{f(A)}$ I can prove it by proving the contraposition: $(\exists\,F \subset Y$ closed: $f^{-1}(F)$ is open$)$ $\Rightarrow $($f : X\rightarrow Y$ is continuous$)$ But I can't seem to get any further in the proof, since I have no experience with proving equivalent definitions like that :/ Can anyone help please? AI: Firstly notice that your condition 3 is equivalent to the following: (*) For every open $U\subseteq Y$, $f^{-1}(U)$ is open. Proof: $U\subseteq Y$ is open iff $U^c$ is closed, also notice that $f^{-1}(U)^c=f^{-1}(U^c)$ hence $f^{-1}(U)$ is open iff $f^{-1}(U^c)$ is closed. So now it suffices to show that $(1) \Leftrightarrow (*)$. $(1)\Rightarrow (*)$ If $f$ is continuous let $U\subseteq Y$ be open; if $U$ is empty its preimage is empty and hence open. So we assume that $U, f^{-1}(U)$ are non-empty. Let $x\in f^{-1}(U)$, as $f$ is continuous, for every $\epsilon>0$ there exists a $\delta>0$ such that $d_X(x,x')<\delta$ implies $d_Y(f(x),f(x'))<\epsilon$. So pick $\epsilon>0$ such that $B(f(x),\epsilon)\subseteq U$ (we can do this as $U$ is open), then there is a $\delta>0$ such that for all $x'$ such that $d_X(x,x')<\delta$, $d_Y(f(x),f(x'))<\epsilon$. But then $f(B(x,\delta))\subseteq B(f(x),\epsilon)\subseteq U$, therefore $B(x,\delta)\subseteq f^{-1}(U)$ and so the preimage is open. ($*$)$\Rightarrow$ (1) Let $f$ be as in ($*$) and let $\epsilon>0$ and $x\in X$. Consider $B(f(x),\epsilon)\subseteq Y$; this is open, hence its preimage is open, and it contains $x$. So let $\delta>0$ be such that $B(x,\delta)\subseteq f^{-1}(B(f(x),\epsilon))$, then for every $x'$ such that $d_X(x,x')<\delta$, $x'\in B(x,\delta)$ and hence is in the preimage of $B(f(x),\epsilon)$. Thus $f(x')$ is in $B(f(x),\epsilon)$ and so $d_Y(f(x),f(x'))<\epsilon$ as required.
H: Given a linear transformation matrix, T, find the equation for the curve that T transforms a circle into. Given the linear transformation matrix: $$T=\pmatrix{2&-3\\1&1}$$ Find the equation for the curve that $T$ transforms a circle with equation $x^2+y^2=6$ into. What I know: My basis is going to be $[1,0]^T$ and $[0,1]^T$ because I'm in $\Bbb{R}^2$. I have my transformation matrix $T$ (usually called $A$?). $T(x)=Ax$ I have one equation for the circle. Need to find the equation for the curve. (circle transformed into curve) I've been working vector/matrix type transformations, so the 'equation' type has me confused. Any help would be greatly appreciated! Thanks!! AI: May I suggest a less sophisticated approach for this problem. Let us define $T(x,y)=(u,v)$ hence $$ u = 2x-3y \qquad v = x+y $$ Solve these equations for $x$ and $y$ as functions of $u,v$ and plug-these into $x^2+y^2=6$ and you'll obtain the formula for the image of the circle under the $T$-transformation. Of course, you can understand what happens in terms of deeper theory, but the approach I outline is totally valid for this problem.
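Carrying out the suggested elimination symbolically (a sketch with sympy; the image turns out to be an ellipse):

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')

# Invert u = 2x - 3y, v = x + y for x, y in terms of u, v.
sol = sp.solve([sp.Eq(u, 2*x - 3*y), sp.Eq(v, x + y)], [x, y])
print(sol)  # {x: u/5 + 3*v/5, y: -u/5 + 2*v/5}

# Substitute into x^2 + y^2 = 6 and clear denominators to get the image curve.
curve = sp.expand((sol[x]**2 + sol[y]**2 - 6) * 25)
print(sp.Eq(curve, 0))  # 2u^2 + 2uv + 13v^2 - 150 = 0, an ellipse
```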
H: Hausdorff space and compact subspaces Let $X$ be Hausdorff and $A,B \subseteq X$ be disjoint compact subspaces of $X$. Prove that there are $U$ and $V$ open disjoint sets in $X$, $A\subseteq U$ and $B\subseteq V$. I know that: $A$ and $B$ are closed in $X$. But I have no idea how to prove the statement using that $X$ is Hausdorff since with that information I know what to do only with points :/ Maybe just an idea? AI: Fix $a \in A$. For every $x \in B$, we can find disjoint neighborhoods $U_x(a)$ of $a$ and $V_x(a)$ of $x$. Since $B$ is compact, the open cover $\{V_x(a) : x \in B\}$ has a finite subcover $\{V_{x_1}(a) , \dots, V_{x_n}(a)\}$. Let $V(a) = \bigcup_{i = 1}^n V_{x_i}(a)$ and $U(a) = \bigcap_{i=1}^n U_{x_i}(a)$. Then $U(a)$ and $V(a)$ are disjoint neighborhoods of $a$ and $B$. The open cover $\{U(a) : a \in A\}$ of $A$ has a finite subcover $\{U(a_1), \dots, U(a_k)\}$. Let $$U = \bigcup_{i = 1}^k U(a_i)$$ $$V = \bigcap_{i = 1}^k V(a_i)$$ It follows that $U$ and $V$ are disjoint neighborhoods of $A$ and $B$.
H: The form of maximal ideal in the real polynomial ring $\mathbb R[x,y]$ Every maximal ideal of the real polynomial ring $\mathbb R[x,y]$ is of the form $(x-a, y-b)$ for some $a,b \in \mathbb R$. True or false? Any suggestions? AI: Hint: How about the ideal $(x, y^2+1)$?
H: How to find exponent coefficients in a sum of exponents? It is easy to determine a coefficient 'c' of exp(c*x), just log it and find slope. Or if it's exp(c1*x) + exp(c2*x) then after log from 0 to the right of left we would find 'c1' and 'c2'. But what if we have more terms? For example, such a sum exp(x) + exp(1.5*x) + exp(2.5*x) + exp(3*x) if you loged it, almost would have no difference between exp(x) + exp(3*x) So, please, how could I find them in another way? AI: If you know, that $$y(x) = \exp(c_1 x)+\exp(c_2 x)+\cdots+\exp(c_n x),$$ then calculate $y(1), y(2), \ldots, y(n)\;$: $\;p_1 = y(1), p_2 = y(2), \ldots, p_n=y(n)$. You'll get system of equations: $$\left\{ \begin{array}{r} \exp(c_1)+\exp(c_2)+\cdots+\exp(c_n)=p_1; \\ \exp(2c_1)+\exp(2c_2)+\cdots+\exp(2c_n)=p_2; \\ \cdots \cdots \cdots \qquad \qquad \qquad \qquad \\ \exp(nc_1)+\exp(nc_2)+\cdots+\exp(nc_n)=p_n. \end{array} \right. $$ If denote $s_1=\exp(c_1), s_2=\exp(c_2), \ldots, s_n=\exp(c_n)$, then $$\left\{ \begin{array}{r} s_1+s_2+\cdots+s_n=p_1; \\ s_1^2+s_2^2+\cdots+s_n^2=p_2; \\ \cdots \cdots \cdots \qquad \qquad \\ s_1^n+s_2^n+\cdots+s_n^n=p_n. \end{array} \right. $$ With the help of Power sum symmetric polynomial you'll find values $e_1,e_2, \ldots, e_n$ $-$ elementary symmetric polynomials: $$\left\{ \begin{array}{l} e_1 = s_1+s_2+\cdots+s_n = p_1; \\ e_2 = s_1s_2+s_1s_3+\cdots+s_{n-1}s_n = \dfrac{1}{2}(p_1^2-p_2); \\ \cdots \cdots \cdots\qquad \qquad \\ e_n = s_1s_2\cdots s_n=\dfrac{1}{n}\sum\limits_{j=1}^n (-1)^{j-1}e_{n-j}p_j; \end{array} \right. $$ so $s_1,s_2,\ldots,s_n$ are roots of equation $$s^n-e_1\cdot s^{n-1}+\cdots+(-1)^{n-1}e_{n-1}\cdot s+(-1)^n e_n=0.$$ Finally, when you will find roots $s_1,s_2,\ldots,s_n$, $$c_j=\ln(s_j), \; j=1,\ldots,n.$$
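A numerical sketch of this procedure (the helper name is made up; the method is numerically delicate for large $n$, but works for the four-term example in the question):

```python
import numpy as np

def recover_exponents(y, n):
    """Recover c_1..c_n from y(x) = sum_i exp(c_i * x) via power sums."""
    p = [y(k) for k in range(1, n + 1)]          # p_k = sum_i s_i^k with s_i = exp(c_i)
    e = [1.0]                                    # elementary symmetric polynomials, e_0 = 1
    for k in range(1, n + 1):                    # Newton's identities
        e.append(sum((-1) ** (j - 1) * e[k - j] * p[j - 1]
                     for j in range(1, k + 1)) / k)
    # s_1..s_n are the roots of x^n - e_1 x^(n-1) + e_2 x^(n-2) - ... + (-1)^n e_n
    coeffs = [(-1) ** k * e[k] for k in range(n + 1)]
    return np.log(np.roots(coeffs).real)

cs = [1.0, 1.5, 2.5, 3.0]
y = lambda x: sum(np.exp(c * x) for c in cs)
print(np.sort(recover_exponents(y, len(cs))))    # approximately [1.  1.5  2.5  3.]
```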
H: A trouble about the Ekeland variational principle I am having trouble with a step in the proof of the EVP theorem: why does $\lim (\varphi(y_n))$ exist? Any hints would be appreciated. AI: By construction, the sequence $\varphi(y_n) + 2^{1-n}$ is non-increasing. It is also bounded from below (since $\varphi$ is bounded from below by assumption). It follows that $\varphi(y_n) + 2^{1-n}$ converges to some value $c\in \mathbb R$. Now $2^{1-n}$ converges to zero, thus $$c = \lim_{n\to \infty} \left(\varphi(y_n)+ 2^{1-n}\right) = \lim_{n\to \infty} \varphi(y_n).$$
H: Constructing a functions with Gelfand Naimark If $X$ and $Y$ are compact Hausdorff spaces, show that for any algebra homomorphism $$ F:C(Y) \to C(X) $$ there exists a continuous function $f:X\to Y$ such that $$ F(\phi)=\phi \circ f, \forall \phi \in C(Y) $$ The spaces are compact Hausdorff, so presumably one should use the Gelfand-Naimark theorem and construct a continuous function from the spectra of $C(X)$ and $C(Y)$. To be honest I'm pretty confused. I don't know exactly what I should try to do; using characters and spectra instead of points in spaces doesn't seem to get me closer to the answer. Any help would be appreciated. Please note that this is not homework. EDIT: I've found the solution. Proof. Let $A=C(X)$ and $B=C(Y)$ be two commutative algebras over $\mathbb{R}$. If $X$ and $Y$ are compact Hausdorff spaces, show that for any algebra homomorphism $$ F:B \to A $$ there exists a continuous function $f:X\to Y$ such that $$ F(\phi)=\phi \circ f, \forall \phi \in C(Y) $$ We first define all the relevant maps. Let $X_A$ and $Y_B$ be the spectra of $A$ and $B$ respectively. Let $\sigma: Y_B \to Y$ and $\tau: X_A \to X$ be the homeomorphisms induced by the Gelfand Naimark theorem, since $X$ and $Y$ are compact Hausdorff. Let $\lambda: A \to \mathbb{R}$ and $\chi: B\to \mathbb{R}$ be two characters in $X_A$ and $Y_B$ respectively. Notice that $\lambda \circ F$ is a function from $B \to \mathbb{R}$ that lies in $Y_B$ since $F$ respects the $\mathbb{R}$-linearity and multiplicative identities of the algebra structure on $\mathbb{B}$ and $\lambda$ is a character and therefore respects the same identities by definition. So there exists an injective function $h: X_A \to Y_B$. We prove that $h$ is continuous. Apply Exercise 4: we have that $ f_a \circ h = f_a(h(\lambda))= f_a(\lambda \circ F)=\lambda(F(a))= g_{F(a)}$ with $g_a: X_A \to \mathbb{R}$ and $g_a(\lambda)=\lambda(a)$ and $f_a: Y_B \to \mathbb{R}$ and $f_a(\chi)=\chi(a)$. Define $f$ as follows: $f(x):= (\sigma \circ h \circ \tau)(x)$. The composition of continuous function is continuous so f is continuous. We now prove the identity $\phi \circ f=F(\phi)$. \begin{align*} (\phi \circ f)(x)&= (\phi \circ \tau \circ h \circ \sigma)(x) \\ &= (\phi \circ \tau\circ h) (\chi_x)\\ &= (\phi \circ \tau \circ \chi_x \circ F)\\ &= (\phi \circ \tau \circ \chi_y)\\ &=\phi(y)\\ &= \chi_y(\phi)\\ &=(\chi_x \circ F)(\phi)\\ &=F(\phi)(x) \end{align*} AI: Any point $x\in X$ gives rise to a maximal ideal $I_x=\{f\in C(X):f(x)=0\}$. Then $F^{-1}(I_x)$ is a maximal ideal of $C(Y)$, so, thanks to Gelfand-Naimark (and compactness and Hausdorffness), there is a unique $y\in Y$ such that $F^{-1}(I_x)=\{g\in C(Y):g(y)=0\}$. Define $f(x)$ to be $y$. There's a lot that needs to be checked --- continuity of $f$ and the formula $F(\phi)=\phi\circ f$, but I believe all the relevant work is already implicit in the proof (if not the statement) of the Gelfand-Naimark theorem. (My computer's spell-checker tells me that "Hausdorffness" isn't a word, and I have to agree with that, but you know what it means anyway. My computer also tells me that "Gelfand" isn't a word, but apparently "Naimark" is OK.)
H: Does $\frac{\mathrm d}{\mathrm dx} \ln(x)=\frac{\mathrm d}{\mathrm dx} \ln|x|$? For some time, I've seen different solutions for the same problems. Let $f$ be any continuous function, differentiable on its domain such that, $$\int \frac{\mathrm d f}{f}=\ln(f)$$ but some authors say, $$\int \frac{\mathrm d f}{f}=\ln|f|$$ I know that, $$\displaystyle \frac{\mathrm d}{\mathrm dx}\ln |x|=\frac{\mathrm d}{\mathrm dx}\ln \sqrt{x^2}=$$ $$\displaystyle \frac{\mathrm d}{\mathrm dx}\ln (x^2)^{\frac{1}{2}}=\frac{\mathrm d}{\mathrm dx}\frac{1}{2}\ln (x^2)=$$ $$\frac{\mathrm d}{\mathrm dx}\ln (x)$$ And so I concluded that: $$\frac{\mathrm d}{\mathrm dx} \ln|x|=\frac{\mathrm d}{\mathrm dx} \ln(x)$$ Can I generalize that, $$\int \frac{\mathrm d f}{f}=\ln(f)= \ln|f|$$ AI: The functions $\ln x$ and $\ln |x|$ are defined on different regions: The former is defined for positive $x$ while the latter is defined for every non-zero $x$. Using the chain rule, and the fact that the derivative of the absolute value is the signum function, one can see that $$\frac{d}{dx} \ln |f(x)| =\frac{1}{|f(x)|} (\text{sgn} f(x)) f'(x).$$ Since $\text{sgn}=\pm1$ we can move it to the denominator, so $$\frac{d}{dx} \ln |f(x)| =\frac{1}{(\text{sgn} f(x))|f(x)|} f'(x).$$ And lastly, for any real $t$, $|t| \cdot \text{sgn}t=t$. Thus $$\frac{d}{dx} \ln |f(x)| =\frac{f'(x)}{f(x)}.$$ Which coincides with the derivative of $\ln f(x)$. To summarize: $\ln f$ and $\ln |f|$ have the same derivative but the second one is defined over a greater set.
H: Formula for the $nm$th cyclotomic polynomial when $(n,m) = 1$ Let $n,m$ be coprime. I want to find a formulae for $\Phi_{n\cdot m, \mathbb Q}$. I conjecture that because $$d \mid nm \implies d \mid n \lor d \mid m,$$ that $$ \Phi_{n\cdot m, \mathbb Q} = \Phi_{n, \mathbb Q} \cdot \Phi_{m, \mathbb Q} = \prod_{d|n,~(d,n) = 1} \Phi_{d, \mathbb Q} \cdot \prod_{d|m,~(d,m) = 1} \Phi_{d,\mathbb Q}. $$ Am I right? I didn't find this formula anywhere. AI: No, that's not correct. Consider the example of $n=2$, $m=3$, which satisfy $\gcd(n,m)=1$. Then $$\Phi_6=x^2-x+1\neq(x+1)(x^2+x+1)=\Phi_2\Phi_3.$$ By the way, what you've written has other problems. For instance: for any integer $n$, the only $d$ such that $\gcd(d,n)=1$ and $d\mid n$ is $d=1$. Also is not true that $$\Phi_{n} = \prod_{d|n} \Phi_{d}$$ nor is it true that $$\Phi_n=\prod_{\gcd(d,n)=1}\Phi_d.$$ Perhaps what you were thinking of was $$\Phi_n=\prod_{d\mid n}(x^d-1)^{\mu(n/d)}$$ or $$\Phi_n=\prod_{\substack{1\leq k\leq n\\\gcd(k,n)=1}}(x-e^{2\pi ik/n})$$ both of which are true.
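A quick check of the counterexample (a sketch; sympy provides cyclotomic polynomials directly):

```python
import sympy as sp

x = sp.symbols('x')
phi2 = sp.cyclotomic_poly(2, x)   # x + 1
phi3 = sp.cyclotomic_poly(3, x)   # x**2 + x + 1
phi6 = sp.cyclotomic_poly(6, x)   # x**2 - x + 1
print(phi6, sp.expand(phi2 * phi3))  # x**2 - x + 1  vs  x**3 + 2*x**2 + 2*x + 1
```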
H: Suppose that a $3\times 3$ matrix $M$ has an eigenspace of dimension $3$. Prove that $M$ is a diagonal matrix. How would I go about this? I realise that having dimension 3 means that the solution to $(A-\lambda I)\mathbf b = \mathbf 0$ has 3 free parameters, which would in turn mean that $(A-\lambda I)$ is the zero matrix, so $A = \begin{bmatrix}\lambda & 0 & 0\\0 & \lambda & 0\\0 & 0 & \lambda\end{bmatrix}$, but the first part of that argument is a bit too heuristic. How would I prove it formally? AI: Hint: Imagine you have three linearly independent (column) eigenvectors for $\lambda$, and put them into the columns of a matrix $B$, which is necessarily nonsingular. Then $AB=B(\lambda I_3)=(\lambda I_3)B$. Can you see how to conclude this, remembering that $B$ has an inverse? In general, no matter what the eigenvalues are, you can use this observation to find a matrix $B$ such that $AB=BD$ where $D$ has the eigenvalues of $A$ on the diagonal, and is zero elsewhere. But unless all the eigenvalues are the same, you can't always conclude that $BD$ and $DB$ are the same. They are quite often different. Can you see why?
H: if $A$ is Abelian group , $B$ is subgroup of $A$ , Is $B \times A/B \cong A$? If $A$ is abelian group and $B$ is a subgroup of $A$, $B$ is normal subgroup of $A$. Is it true that $B \times A/B \cong A$? I ask because I was watching an online lecture from a course in abstract algebra at Harvard extension school. And the lecturer (whose name is Peter) was taking about vector spaces and said that if $V$ is a vector space and $W$ is a subspace, then $V/W \times W \cong V$. So the question which I thought of is: "is this true for all Abelian groups?" Also, is there a less restricted condition on groups which will make this property hold? AI: As suggested by DonAntonio: We need only take $A =\mathbb Z_4$, $B = \mathbb Z_2$. Then $A/B = \mathbb Z_4/\mathbb Z_2 \cong \mathbb Z_2 = B.\;$ Clearly, $$\mathbb Z_4 \not\cong \mathbb Z_2 \times \mathbb Z_2 = B \times A/B $$ The same "conjecture" would fail for $A = \mathbb Z_9$ and $B = \mathbb Z_3$, for the same reason Indeed, what we can say is that if $A$ is a finite abelian group, then your conjecture will hold provided there is no prime $p$ such that $p^2$ divides $\,|A|$.
H: Equivalence relation help If $|A| = 30$ and the equivalence relation $R$ on $A$ partitions $A$ into (disjoint) equivalence classes $A_1$, $A_2$, and $A_3$, where $|A_1| = |A_2| = |A_3|$, then what is $|R|$? AI: We have $R = A_1 \times A_1 \cup A_2 \times A_2 \cup A_3 \times A_3$, so $|R|=|A_1\times A_1|+|A_2\times A_2|+|A_3\times A_3|$.
H: Develop five terms in the Taylor series Develop five terms in the Taylor series around $x_0=\pi$ for the function $f(x)=\cos\left({x\over3}\right)$ $f^0(x)=\cos\left({x\over3}\right) \Big|_\pi $ $f^{'}(x)=-\sin\left({x\over3}\right) {1\over3} \Big|_\pi$ $f^{''}(x)=-\cos\left({x\over3}\right) {1\over9} \Big|_\pi$ $f^{'''}(x)=\sin\left({x\over3}\right) {1\over{27}} \Big|_\pi$ $f^{iv}(x)=\cos\left({x\over3}\right) {1\over{81}} \Big|_\pi$ So (...) $S={\cos\left({\pi\over3}\right)}{-\sin\left({\pi\over3}\right) {1\over3}}{-\cos\left({\pi\over3}\right) {1\over9}}+{\sin\left({\pi\over3}\right) {1\over{27}}}+{\cos\left({\pi\over3}\right) {1\over{81}}}$ *EDIT: * I forgot the factorial division and multiplication by $x_0$ $S= {\cos\left({\pi\over3}\right)}- {{1\over3}\sin\left({\pi\over3}\right)(x-\pi)}- \frac{{1\over9}\cos\left({\pi\over3}\right) }{2}(x-\pi)^2+ \frac{{1\over{27}}\sin\left({\pi\over3}\right) }{6}(x-\pi)^3+ \frac{{1\over{81}}\cos\left({\pi\over3}\right) }{24}(x-\pi)^4$ My question is: this is correct? AI: Not quite. You forgot the factorial terms in the denominator, for one. You forgot the powers of $x-\pi,$ as well. Also, you should be able to explicitly evaluate those trigonometric expressions. Edit: It looks better, but you can still evaluate those trig expressions and simplify those fractions. Also, I just realized that you incorrectly calculated your fourth derivative.
H: How to prove that the set $\{\sin(x),\sin(2x),...,\sin(mx)\}$ is linearly independent? Could you help me to show that the functions $\sin(x),\sin(2x),...,\sin(mx)\in V$ are linearly independent, where $V$ is the space of real functions? Thanks. AI: Suppose that, for every $x\in\Bbb R$ we have $$a_1\sin x+a_2\sin 2x+\cdots+a_m\sin mx=0$$ Take $i\in \{1,\dots,m\}$, and consider $\sin ix$. Multiply throughout and integrate from $x=0$ to $x=2\pi$. Do this for $i=1,\dots,m$. Use that $$\int_0^{2\pi} \sin mx\sin nxdx=\begin{cases}0& m\neq n\\ \pi &m=n\end{cases}$$ ADD If the above wasn't entirely clear, for each $1\leq k\leq m$ $$\begin{align}\sum_{j=1}^m a_j\sin jx&=0\\ \sum_{j=1}^m a_j\sin kx\sin jx&=0\\ \sum_{j=1}^m a_j\int_0^{2\pi}\sin kx\sin jxdx&=0\\ a_k \pi&=0\\ {}&{}\\ a_k&=0\end{align}$$ since, of course, $\pi\neq 0$.
H: Why is the length R cosine theta? Why is the length described as R cosine theta (the top where the Sphere is sliced off)? I've been staring at the geometry for quite a bit & can't figure. Thanks AI: By SOHCAHTOA (a special case of the law of sines that only works for right triangles), the cosine of an acute angle of a right triangle is the length of the leg adjacent to the angle, divided by the length of the hypotenuse. Equivalently, the length of a leg is the length of the hypotenuse times the cosine of the acute angle adjacent to the leg. In this case, the hypotenuse length is $R$ and the acute angle is $\theta$. Thus the length of the leg adjacent to $\theta$ is $R\cos\theta$.
H: problem about continuity and limits Let $f\colon\mathbb R \to \mathbb R$ be a continuous function. Suppose that $\lim_{x \to +\infty} f(x) = \lim_{x \to -\infty} f(x) = +\infty$. Prove that $f$ has a minimum, i.e., $\exists x_0 \in \mathbb R: \forall x \in \mathbb R: f(x) \geq f(x_0)$. My solution: Suppose $\exists x_0 \in \mathbb R:\forall x \in\mathbb R: f(x) \geq f(x_0)$ is false. Then, $\forall x_0 \in \mathbb R: \exists x\in\mathbb R: f(x) < f(x_0)$ (1). $\lim_{x \to +\infty} f(x) = +\infty \implies \forall \varepsilon >0: \exists M>0: x>M \implies f(x) > \varepsilon$. In particular, for $\varepsilon=f(x_0)$, we have $f(x)>f(x_0)$, which is absurd, because it contradicts the hypothesis (1). Therefore $\exists x_0 \in \mathbb R: \forall x\in\mathbb R:f(x) \geq f(x_0)$. I'd like to hear from you if my solution is correct or not. AI: $f$ is continuous and $\lim_{x\to \infty}f(x)=\lim_{x\to-\infty}f(x)=\infty$. Choose an arbitrary $y_0\in \mathbb{R}$; then there is an $M_1$ such that if $x>M_1$ then $f(x)>f(y_0)$, and an $M_2>0$ such that if $x<-M_2$ then $f(x)>f(y_0)$. Now, observe that $[-M_2,M_1]$ is compact, and $f$ is continuous, therefore it has a maximum and a minimum in this set. And the minimum $x_0 \in [-M_2,M_1]$ is global: if $x\in [-M_2,M_1]$ then $f(x)\geq f(x_0)$, and by our choice of $M_1$ and $M_2$ we have $y_0\in[-M_2,M_1]$, so $f(x_0)\leq f(y_0)$. Now if $x \in \mathbb{R} - [-M_2,M_1]$ then $f(x)> f(y_0)\geq f(x_0)$. And you have a global minimum. A remark on your argument: the negation of the statement "$f$ has a minimum" is "$f$ does not have a minimum". What you are doing is picking an element $x_0$ and showing that it is not a maximum, because you find an $x$ such that $f(x)>f(x_0)$; this does not contradict (1), do you see?
H: question about epsilon, delta limit definition Sometimes, when describing the closeness of $x$ to $a$ as being less than $\delta$, it's stated as $|x-a|<\delta$ and sometimes it's stated as $0<|x-a|<\delta$. What is the " $0<$ " part that's sometimes included in the definition, I'm a bit confused about that. Is it just telling us that the distance can't be zero? thanks. AI: The "$0 <$" part explicates the interest in a neighborhood around a point. For any desired error, there exists a nonzero deviation from the point of interest such that the function evaluated inside this neighborhood does not change by more than the desired error.
H: The solutions of $x^2+ax+b=0\pmod n$ in $\mathbb Z_n$ For every positive integer $n\ge 6$ which is not prime, there exist integers $a$ and $b$ such that the congruence $x^2+ax+b\equiv 0 \pmod n$ has more than two solutions modulo $n$. I have no idea how to prove the above statement. Any suggestions? AI: Hint: $(x-u)(x-v)\equiv 0 \pmod n$ can also hold for $x-u\equiv k$ and $x-v\equiv l$ if $kl=n$, not only for $x-u\equiv 0$ or $x-v\equiv 0$.
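A small brute-force illustration of the hint (a sketch that searches for $a,b$ giving more than two roots modulo a composite $n$):

```python
def quadratic_with_many_roots(n):
    """Find (a, b) such that x^2 + a x + b = 0 (mod n) has more than two roots."""
    for a in range(n):
        for b in range(n):
            roots = [x for x in range(n) if (x * x + a * x + b) % n == 0]
            if len(roots) > 2:
                return a, b, roots
    return None

for n in (6, 8, 9, 12):
    print(n, quadratic_with_many_roots(n))
# e.g. modulo 8, x^2 + 7 (that is, x^2 - 1) has the four roots 1, 3, 5, 7
```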
H: A question about Baby Rudin Theorem 2.27 (a) Theorem 2.27: If $X$ is a metric space and $E \subset X$, then $\bar E$ (the closure of $E$) is closed. The proof says: If $p \in X$ and $p \not \in \bar E$ then $p$ is neither a point of $E$ nor a limit point of $E$. Hence $p$ has a neighborhood which does not intersect $E$. The complement of $\bar E$ is therefore open. Hence $\bar E$ is closed. I'm particularly questioning about ''Hence $p$ has a neighborhood which does not intersect $E$. The complement of $\bar E$ is therefore open.'' Should we also prove that the neighborhood of $p$ also does not intersect $E'$ (the set of all limit points of $E$)? Here's what I tried to prove, by contrapositive: ''For any $p \in {\bar E}^c$, if $N_r(p) \cap E' \ne \emptyset$ then $N_r(p) \cap E \ne \emptyset$''. Proof: For any $p \in {\bar E}^c$, if $N_r(p) \cap E' \ne \emptyset$, then take $q \in N_r(p) \cap E'$, $\exists N_h(q)$ s.t. $N_h(q) \subset N_r(p)$. Since $q \in E'$ is a limit point, $N_h(q) \cap E \ne \emptyset$, and hence $N_r(p) \cap E \ne \emptyset$. I'm not quite sure whether this is necessary. Or is there anything I missed from Rudin's proof? AI: $p$ was chosen randomly, so could be any point not in $E$, not in $E'$, as any such $p$ was explicitly ruled out as a limit point of $E$, hence the existence of a neighborhood of $p$ will not intersect $E'$. Your proof is fine. You explained nicely why Rudin's claim follows. Rudin is notorious for leaving some of the "links" between "stepping stones" (steps in his proofs) "to the reader". You've just filled in some of those unwritten details, and you are correct in those details. It's always a good idea to do so as you read any text, when anything is not immediately apparent to you while reading through a proof. In particular, that can often be the case when reading Rudin. When you revisit the proofs, then, that "extra work" will pay off, as you'll have made the connections, and will then be able to reread Rudin's proofs and follow them without so much effort. Sometimes filling in the details may amount to simply "unpacking" the definition(s) of the terms being used in a theorem (what it means to be a limit point, e.g.). You'll find that's a sound way to learn the definitions inside-out, and to review theorems when they are used in subsequent proofs. (In all honesty, I personally "wrote, expanded upon, rewrote, extended, rewrote again" virtually all of Baby Rudin when I first encountered the text.
H: Why does the Tower of Hanoi problem take $2^n - 1$ transfers to solve? According to http://en.wikipedia.org/wiki/Tower_of_Hanoi, the Tower of Hanoi requires $2^n-1$ transfers, where $n$ is the number of disks in the original tower, to solve based on recurrence relations. Why is that? Intuitively, I almost feel that it takes a linear number of transfers since we must transfer all but the bottommost disc to the buffer before transferring the bottommost disc to the target tower. AI: Your intuition is right. All but the bottom disk must be moved TWICE, so you should expect (one more than) twice the number of transfers for one fewer disk. We have $$2(2^n-1)+1=2^{n+1}-1$$
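A small recursive sketch that counts the transfers and reproduces $2^n-1$:

```python
def hanoi_moves(n):
    """Transfers needed for an n-disk tower: move n-1 disks away, move the base, move n-1 back."""
    if n == 0:
        return 0
    return 2 * hanoi_moves(n - 1) + 1

print([hanoi_moves(n) for n in range(1, 8)])  # [1, 3, 7, 15, 31, 63, 127] = 2^n - 1
```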
H: Why does derivation use lim? Alternative method possible! Okai, so we were learning about Newtons method of differentiation and I came to questions why Isaac Newton or Leibniz use the following function. $${f(x+h) - f(x)\over h}$$ $h$ is the distance at the X axis of the point we wish to find. However, this equation requires us to use a function $\lim$ which really means that the number $h$ is of such small significance that we simply remove it from the equation. Which in my eyes is more about relativity than maths. Lets say we use the function = $$x^2+4x$$ We all know that derivative is = $$2x + 4$$ So I came up with a solution like this. $${f(x+h) - f(x-h)\over 2h}$$ Why dont we use this method instead of learning to use a method that requires us to use relativity and take it into consideration? EDIT: $$f(x+h) = (x+h)^2 + 4(x+h)$$ $$f(x-h) = (x-h)^2 + 4(x-h)$$ Derivate is: $$(((x+h)^2 + 4(x+h)) - ((x-h)^2 + 4(x-h))/h)\over 2$$ which gives out to be = 2x + 4 once all the calculation has been carried out. AI: Your method gives us what is known as the symmetric derivative (at least, if you take it to the limit). If you'll read over the article, you'll see that such a beast may be defined where $f$ isn't, or where $f$ is defined but not continuous, or where $f$ is defined and continuous but has a "sharp corner". This is probably why differentiability is preferred in many circumstances (symmetric differentiability is "too weak"), even though the two types of derivatives agree at points of differentiability. If you don't take the limit with your expression, then all you've given is the slope of a particular sort of secant line. This tells us little in general (if anything) about the behavior of the function at the point in question, unless we take $h$ to be small enough (that is, take the limit as $h\to 0$). For example, consider $f(x)=x^3$. A perfectly friendly function, no? But $$\frac{f(x+h)-f(x-h)}{2h}=3x^2+h^2,$$ which is not the same as the derivative (and may be very different!), unless you take the limit as $h\to 0$.
H: Proof of Cauchy-Schwarz inequality, why does this work? My books says to prove that the following inequality is true, and to use it to prove Cauchy-Schwarz: $$(a_1x+b_1)^2+(a_2x+b_2)^2+(a_3x+b_3)^2+\dots+(a_nx+b_n)^2 \ge 0$$ This is easy to prove because by the trivial inequality each term on the LHS is $\ge 0$. However, to prove Cauchy-Schwarz using this this book gives the hint: Write the left side as a quadratic equation in $x$ and note that a quadratic equation is non negative for all $x$ if and only if the discriminant is non positive. If you multiply this out and set the discriminant $\le 0$, the Cauchy-Schwarz inequality follows very straightforwardly. My question is: If there were no restrictions on $a$, $b$, or $x$ before I multiplied it out, why are there restrictions on them now? I know that before I was considering a series of quadratics and now I'm considering one giant one, but still it's the same inequality as before. AI: We know the LHS is a quadratic with non-positive discriminant, because it does not dip below the $x$-axis. Here's a concrete example: $(2x-b)^2+(x-b)^2\ge 0$. This is a quadratic, namely $5x^2-6bx+2b^2$. The discriminant is $36b^2-40b^2=-4b^2$, which is non-positive, for all values of $x,b$. In this case it's easy to see that $-4b^2$ is non-positive, but if the quadratic were more complicated, the discriminant would necessarily be non-positive because the quadratic can be rearranged into the sum of squares.
H: Infinitesimal $SO(N)$ transformations An infinitesimal $SO(N)$ transformation matrix can be written: $$R_{ij} = \delta_{ij}+\theta_{ij}+O(\theta^2)$$ Now it has to be shown that $\theta_{ij}$ is real and anti-symmetric. I've started with the orthogonality condition as follows: $$R^TR=\boldsymbol 1$$ $$\implies (R^TR)_{ij}=\delta_{ij}$$ $$\implies \sum_kR^T_{ik}R_{jk}=\delta_{ij}$$ $$\implies \sum_kR_{ik}R_{jk}=\delta_{ij}$$ $$\implies \sum_kR_{ik}R_{ik}=\delta_{ii}$$ $$\implies \sum_jR_{ij}R_{ij}=1$$ Now I can plug my infinitesimal form of $R_{ij}$ into the above formula: $$\sum_j(\delta_{ij}+\theta_{ij}+O(\theta^2))(\delta_{ij}+\theta_{ij}+O(\theta^2))=1$$ $$\implies \sum_j (\delta_{ij}\delta_{ij}+2\theta_{ij}\delta_{ij}+O(\theta^2))=1$$ As you can easily see, my calculations are going nowhere. AI: First, $$R^{T}_{ij}=\delta_{ij}+\theta_{ij}^{T}=\delta_{ij}+\theta_{ji}$$ Now in the last sum you wrote, you get a different result, $$\sum_j(\delta_{ij}+\theta_{ij}+O(\theta^2))(\delta_{ij}+\theta_{ji}+O(\theta^2))=\sum_j(\delta_{ij}+\theta_{ji})(\delta_{ij}+\theta_{ij})=\sum_j(\delta_{ij}\delta_{ij}+\delta_{ij}(\theta_{ij}+\theta_{ji})+\theta_{ij}\theta_{ji})=1$$ This holds if $\theta_{ij}=-\theta_{ji}$. To actually derive the anti-symmetry, keep both indices free: expanding $(R^{T}R)_{ij}=\delta_{ij}$ to first order gives $\delta_{ij}+\theta_{ij}+\theta_{ji}+O(\theta^2)=\delta_{ij}$, hence $\theta_{ij}+\theta_{ji}=0$, i.e. $\theta_{ij}=-\theta_{ji}$. To prove that $\theta_{ij}$ is real, evaluate $\mathrm{Im}[(R^{T}R)_{ij}]$. You will find that $\mathrm{Im}[(R^{T}R)_{ij}]=0$.
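As a numerical sanity check (this sketch and its names are mine, not part of the original answer): for a small random real anti-symmetric $\theta$, the matrix $R=\boldsymbol 1+\theta$ satisfies $R^TR=\boldsymbol 1$ up to terms of order $\theta^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps = 4, 1e-4

A = rng.standard_normal((N, N))
theta = eps * (A - A.T)          # small real anti-symmetric matrix
R = np.eye(N) + theta            # infinitesimal rotation, to first order

# R^T R should equal the identity up to O(theta^2), i.e. of order eps^2
residual = R.T @ R - np.eye(N)
print(np.max(np.abs(residual)))  # roughly 1e-8 for eps = 1e-4
```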
H: Show $G$ and $H$ are isomorphic Let $G$ and $H$ be finite abelian groups of the same order $2^n$. If for each integer $m$, $$\left|\left\{x\in G\mid x^{\large 2^m}=1\right\}\right|=\left|\left\{x\in H\mid x^{\large 2^m}=1\right\}\right|$$ then $G$ and $H$ are isomorphic. How to show the above statement? Thanks. AI: Hint: use the fundamental theorem of finitely generated Abelian groups to determine what kind of factorizations into direct products of cyclic groups that $G$ and $H$ can have. Then use the given fact to conclude something about the cyclic groups that the elements of $G$ and $H$ generate, and use that fact to draw an isomorphism between $G$ and $H$ using the previous factorization.
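As a concrete illustration of the hint (this check is my own, not part of the argument): among the abelian groups of order $8$, the counts of solutions of $x^{2^m}=1$ already separate the three isomorphism types $\mathbb Z_8$, $\mathbb Z_4\times\mathbb Z_2$, and $\mathbb Z_2^3$.

```python
from itertools import product

def counts(factors, max_m=3):
    """For the abelian group Z_{n1} x ... x Z_{nk}, written additively,
    count elements x with 2^m * x = 0 for m = 0, ..., max_m."""
    elems = list(product(*[range(n) for n in factors]))
    return [sum(all((2**m) * x % n == 0 for x, n in zip(e, factors)) for e in elems)
            for m in range(max_m + 1)]

for factors in ([8], [4, 2], [2, 2, 2]):
    print(factors, counts(factors))
# [8]        -> [1, 2, 4, 8]
# [4, 2]     -> [1, 4, 8, 8]
# [2, 2, 2]  -> [1, 8, 8, 8]
```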
H: Differentiability of a function at a point, to prove it is differentiable everywhere under the given condition A simple question which I came across recently. Just wanted to confirm if my logic on it is right.... Suppose $f(x)$ is a function and it's given that it's differentiable everywhere except possibly at $0$, but for the point "$0$", it's given that $\lim_{x\to 0} f'(x) = 0.$ We need to prove that it's differentiable everywhere. What I think is the left hand derivative will be equal to $\lim_{x\to 0^-} f'(x) = 0$ and similarly the right hand derivative will be equal to $0$ too, on similar lines. Is this right? I thought of the question myself as I was working on some other question and this popped into my head. Can we do it this way, or should there be some more condition given in the question to make the proof work? Thanks. AI: Suppose $f$ is continuous at $0$. Without this assumption, user math_man's answer shows that the statement is not true. We need to argue that $\displaystyle f'(0)=\lim_{h\to 0}\frac{f(h)-f(0)}h$ exists. This is immediate from L'Hôpital's rule; here is the direct argument via the mean value theorem. The assumptions that $f$ is continuous at $0$ (and therefore everywhere), and differentiable away from $0$, give us that we can apply the mean-value theorem. This allows us to conclude that for any $h$ there is a $\xi_h$ in between $0$ and $h$ such that $$ \frac{f(h)-f(0)}h=\frac{f'(\xi_h)}1=f'(\xi_h). $$ Now note that $\xi_h\to0$ as $h\to 0$. Since we are given that $\lim_{t\to0}f'(t)=0$, it follows that $\lim_{h\to0}f'(\xi_h)=0$ as well, so $f'(0)$ exists and is $0$.
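A small numerical illustration of the mean value theorem argument (the example function is my own choice, picked because $f'(x)\to 0$ as $x\to 0$): for $f(x)=|x|^{3/2}$ the difference quotient at $0$ shrinks with $h$, consistent with $f'(0)=0$.

```python
def f(x):
    # continuous everywhere; away from 0, f'(x) = 1.5*sign(x)*sqrt(|x|) -> 0 as x -> 0
    return abs(x) ** 1.5

for h in (0.1, 0.01, 0.001, 0.0001):
    print(h, (f(h) - f(0)) / h)   # equals sqrt(h), which tends to 0 as h -> 0
```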
H: Probability: Two people get multiple choice questions I was wondering... Suppose two people take a multiple choice test, and the questions are taken from a pool of questions (the size of the pool is unknown). Both people must be given 30 questions. They both end up with 15 of the same questions (and the other 15 different). What is the most likely size of the pool? This question is not from a book or anything, it is a question that I'm just wondering about. AI: Let the number of questions be $M$. We make two draws of $30$ without replacement, but allowing replacement between the draws, and ask the probability of an overlap of $15$. The naive answer is that since half the questions overlap, we are seeing half the questions, so $M=60$. We now prove this (under one large assumption). This probability is $\frac {{30 \choose 15}{M-30 \choose 15}}{M \choose 30}$ because we make the first draw, then choose $15$ questions of the first $30$ and choose $15$ of the rest. In a classic reversal (the probability of $M$ given $15$ overlap is the probability of $15$ overlap given $M$; this assumes that the probability distribution of $M$ is uniform, but there is no uniform distribution over all the naturals), we look for the maximum of this to determine $M$. It is not surprising that this is maximized near $M=60$ in this Alpha graph. If we imagine a change of $M$ from $60$, going upward by $1$ we multiply the probability by $\frac {\frac {31}{16}}{\frac {61}{31}} = \frac{961}{976}\lt 1$, since the numerator ${M-30\choose 15}$ grows by a factor of $\frac{31}{16}$ while the denominator ${M\choose 30}$ grows by a factor of $\frac{61}{31}$. Similarly if we go down by $1$ we multiply by $\frac {\frac {30}{15}}{\frac {60}{30}}=1$, and exact computation shows that $M=59$ is equally probable, but beyond that it falls off.
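Here is a small sketch of my own, using the likelihood formula above, that evaluates ${30\choose15}{M-30\choose15}/{M\choose30}$ exactly for a range of pool sizes and confirms that $M=59$ and $M=60$ tie for the maximum:

```python
from math import comb
from fractions import Fraction

def likelihood(M):
    # exact probability of exactly 15 overlapping questions, for pool size M >= 45
    return Fraction(comb(30, 15) * comb(M - 30, 15), comb(M, 30))

vals = {M: likelihood(M) for M in range(45, 101)}
best = max(vals.values())
print([M for M, v in vals.items() if v == best])   # [59, 60]
```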
H: Does $a^3 + 2b^3 + 4c^3 = 6abc$ have solutions in $\mathbb{Q}$ Does $a^3 + 2b^3 + 4c^3 = 6abc$ have solutions in $\mathbb{Q}$? This is not a homework problem. Indeed, I have no prior experience in number theory and would like to see a showcase of common techniques used to solve problems such as this. Thanks. Edit: Apart from $a=b=c=0$. AI: First, note that $(a,b,c)$ is a solution if and only if $(ka,kb,kc)$ is; hence we may assume $a,b,c$ are integers with no common factor (divide by that common factor if necessary). Because $6abc$ and $2b^3+4c^3$ are even, so is $a^3$ and hence $a$. Write $a=2a'$ and we have $$8(a')^3+2b^3+4c^3=12a'bc$$ and hence $$4(a')^3+b^3+2c^3=6a'bc$$ By similar logic, $b$ is even, so write $b=2b'$ and we have $$4(a')^3+8(b')^3+2c^3=12a'b'c$$ But now $$2(a')^3+4(b')^3+c^3=6a'b'c$$ and hence $c$ is even. Hence $a,b,c$ are all even; this contradicts $a,b,c$ having no common factor. Followup: The same proof works if the coefficients $\{1,2,4,6\}$ are replaced by $\{\alpha_1, \alpha_2, \alpha_3,\alpha_4\}$ so long as there is some prime $p$ with $\nu_p(\alpha_1)=0, \nu_p(\alpha_2)=1, \nu_p(\alpha_3)=2, \nu_p(\alpha_4)\ge 1$. (Here $\nu_p(\cdot)$ denotes the $p$-adic valuation.) For example, apart from $(0,0,0)$, there are no rational solutions to $$7a^3+15b^3+18c^3=45abc$$ where here $p=3$. Double followup: The same proof works with $n$ variables $$\alpha_0a_0^n+\alpha_1a_1^n+\cdots+\alpha_{n-1}a_{n-1}^n=\alpha_n(a_0a_1\cdots a_{n-1})$$ provided that $\nu_p(\alpha_i)=i$ (for $0\le i\le n-1$) and $\nu_p(\alpha_n)\ge 1$.
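A quick brute-force check consistent with the descent argument (my own sketch; a finite search proves nothing by itself, it just illustrates the claim): no nonzero integer triple with small entries satisfies the equation.

```python
from itertools import product

N = 30  # search |a|, |b|, |c| <= N
solutions = [(a, b, c)
             for a, b, c in product(range(-N, N + 1), repeat=3)
             if a**3 + 2*b**3 + 4*c**3 == 6*a*b*c and (a, b, c) != (0, 0, 0)]
print(solutions)   # [] -- only the trivial solution appears in this range
```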
H: Products in a Set Let: $$S := \{1,2,3,\dots,1337\}$$ and let $n$ be the smallest positive integer such that the product of any $n$ distinct elements in $S$ is divisible by $1337$. What are the last three digits of $n$? I'm having a bit of trouble with this problem: the context is that my prof. gave this as an 'extra' exercise to me to do for fun. Any help would be appreciated, thanks! AI: $1337=7\times191$, and there are $190$ numbers with a factor of $7$ before $1337$ and $6$ numbers with a factor of $191$ before $1337$. Suppose you've chosen all the numbers in the set except those that have a factor of either $7$ or $191$. That'd be $1337$ minus $1$ (for $1337$) minus $6$ (for numbers with factor $191$) minus $190$ (for numbers with factor $7$): $1337-1-6-190=1140$. Now the worst possible situation is that you've also chosen all $190$ numbers that have a factor of $7$ but no number with a factor of $191$. This means you have plenty of factors of $7$ in the product but no factor of $191$ to pair them with, so the product is still not a multiple of $1337$. So now you have $1140+190=1330$. Adding any one of the remaining $7$ numbers supplies a factor of $191$ and makes the product divisible by $1337$. So $n=1330+1=1331$, and therefore the last $3$ digits of $n$ are $331$.
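To double-check the counting (my own sketch): the largest subset of $S$ whose product avoids divisibility by $1337$ is obtained by throwing away every multiple of $191$, and its size is $1330$, so $n=1331$.

```python
S_SIZE = 1337               # S = {1, ..., 1337}, and 1337 = 7 * 191

mult_7   = S_SIZE // 7      # 191 multiples of 7, including 1337 itself
mult_191 = S_SIZE // 191    # 7 multiples of 191, including 1337 itself

# A subset avoids a product divisible by 1337 iff it misses every multiple of 7
# or misses every multiple of 191; the larger option is to drop the multiples of 191.
largest_bad = max(S_SIZE - mult_7, S_SIZE - mult_191)
print(largest_bad, largest_bad + 1)   # 1330 1331
```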
H: From a deck of 52 cards, the face cards and four 10's are removed. From these 16 cards four are chosen. From a deck of 52 cards, the face cards and four 10's are removed. From these 16 cards four are chosen. How many combinations are possible that have at least 2 red cards? My solution: I'm not sure how to enter the expression into the calculator (TI-84), but I believe my reasoning is correct; please do correct me if I am wrong. At least 2 red cards: 2 red: 4C2 x 10C2; 3 red: 4C3 x 10C1; 4 red: 4C4 x 10C0. A. 154 B. 518 C. 1302 D. 784 AI: The $16$ "removed cards" will contain $8$ red cards and $8$ black cards. There are three possibilities to consider: the selections of four cards that we want to count are those where $2$ are red and $2$ are black, $3$ are red and $1$ is black, or all $4$ are red. Summing gives us $$\binom 82 \cdot \binom 82 + \binom 83\cdot \binom 81 + \binom 84 = \left(\frac{8!}{6!2!}\right)^2 + \frac{8!}{5!3!}\cdot 8 + \frac{8!}{4!4!}= 1302$$
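A brute-force enumeration confirms the count (this check is my own): label the $16$ cards with $8$ red and $8$ black and count the 4-card combinations containing at least two reds.

```python
from itertools import combinations

cards = ["R"] * 8 + ["B"] * 8          # 8 red, 8 black among the 16 removed cards
count = sum(1 for hand in combinations(range(16), 4)
            if sum(cards[i] == "R" for i in hand) >= 2)
print(count)   # 1302
```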
H: is this subset a subspace - redux OK, I have been bothering people here with this for days and with luck I finally have this. People have helped a lot here so far. (Doing these examples is, I hope, helping me learn the proofs, but I want to know that I am doing this right). Let W be a subset of vector space V. Is it a subspace as well? W = {($a_1$, $a_2$, $a_3$) ∈ $ℝ^3$ : $2a_1 - 7a_2 + a_3=0$} So, to check if this is a subspace I need to satisfy the following: That 0 is in the set. Plugging (0,0,0) into the equation $2a_1 - 7a_2 + a_3=0$ yields 0=0, so yes, it is. That it is closed under addition. Let ($b_1, b_2, b_3$) be an arbitrary vector in W. For this to be closed under addition, ($b_1, b_2, b_3$)+($a_1, a_2, a_3$) ∈ W. $2(a_1+b_1) - 7(a_2+b_2) + (a_3+b_3) = 0$ can also be written as $(a_3+b_3) = -2(a_1+b_1) + 7(a_2+b_2)$. There are real-valued solutions to this, whenever $b_i = -a_i$ is one, so the answer is yes, it is closed under addition. Is it closed under multiplication? Any arbitrary λ($2a_1 - 7a_2 + a_3)=0 =(λ)0$ So since that's still part of the set, it is closed under multiplication. So, did I do this one correctly? God I hope so. AI: Excellent work. It will get easier! Trust me. Very nicely done. You "covered all the bases", a bit awkwardly, but you got the job done! I'd just add: "Therefore, (since ${\bf 0} \in W$, and $W$ is closed under vector addition and scalar multiplication,) $\;W\,$ is a subspace of $\,V$." This is how I'd approach the "closed under addition" component of the proof. Let $w_1 = (a_1, a_2, a_3), w_2 = (b_1, b_2, b_3) \in W$. So $2a_1 - 7 a_2 + a_3 = 0, \text{ and}\; 2b_1 - 7b_2 + b_3 = 0.$ Now, it follows that $w_1 + w_2 = (a_1 + b_1, a_2 + b_2, a_3 + b_3).$ And since we have $$2(a_1+b_1) - 7(a_2+b_2) + (a_3+b_3)$$ $$ = 2a_1 + 2b_1 -7a_2 -7b_2 + a_3 + b_3 $$ $$ = (2a_1-7a_2+a_3)+(2b_1-7b_2+b_3) $$ $$ = 0 + 0 = 0,$$ ...the sum $w_1 + w_2$ satisfies the defining equation, i.e. $w_1 + w_2 \in W$, and so $W$ is closed under vector addition.
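A quick numerical sanity check of the closure properties (my own sketch, not a substitute for the proof): build random vectors satisfying $2a_1-7a_2+a_3=0$ and confirm that their sums and scalar multiples satisfy it too.

```python
import random

def in_W(v, tol=1e-9):
    a1, a2, a3 = v
    return abs(2*a1 - 7*a2 + a3) < tol

def random_W_vector():
    a1, a2 = random.uniform(-10, 10), random.uniform(-10, 10)
    return (a1, a2, -2*a1 + 7*a2)      # solve the defining equation for a3

for _ in range(1000):
    u, v = random_W_vector(), random_W_vector()
    lam = random.uniform(-10, 10)
    assert in_W(tuple(x + y for x, y in zip(u, v)))   # closed under addition
    assert in_W(tuple(lam * x for x in u))            # closed under scalar multiples
print("all closure checks passed")
```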