H: Prove inequality: $\sqrt {(x^2y+ y^2z+z^2x)(y^2x+z^2y+x^2z)} \ge \sqrt[3]{xyz(x^2+yz)(y^2+xz)(z^2+xy)} + xyz$ Let $x,y,z\in \mathbb R^+$ prove that: $$\sqrt {(x^2y+ y^2z+z^2x)(y^2x+z^2y+x^2z)} \ge xyz + \sqrt[3]{xyz(x^2+yz)(y^2+xz)(z^2+xy)}$$ The inequality $\sqrt {(x^2y+ y^2z+z^2x)(y^2x+z^2y+x^2z)} \overset{C-S}{\ge} 3xyz$ will not help because $3xyz \le RHS$. How to prove above inequality ? It is too hard for me. AI: The inequality is homogeneous, so we may assume that $xyz=1$. Then if we expand left and right hand side, we see that we have to show that $$\sqrt{x^3 + y^3 + z^3 + \frac{1}{x^3} + \frac{1}{y^3} + \frac{1}{z^3} + 3} \ge 1 + \sqrt[3]{x^3 + y^3 + z^3 + \frac{1}{x^3} + \frac{1}{y^3} + \frac{1}{z^3} + 2}.$$ Let $$u = \sqrt[3]{x^3 + y^3 + z^3 + \frac{1}{x^3} + \frac{1}{y^3} + \frac{1}{z^3} + 2}.$$ If we square both sides of the expanded inequality, we get the equivalent inequality $$u^3 \ge 2u + u^2,$$ or, $$u^2 - u - 2 \ge 0.$$ Therefore it is enough to show that $u \ge 2$. But this is certainly true since $x^3 + \frac{1}{x^3}, y^3 + \frac{1}{y^3}, z^3 + \frac{1}{z^3} \ge 2$.
H: Check my work: $\lim a_n = 0 \Rightarrow \lim \sqrt{a_n} = 0 $? (for $a_n$ positive) I'm trying to prove, as "properly" as possible the following:$$\left[ \lim z_n = z \right] \iff \left[ \lim x_n = x \quad \wedge \quad \lim y_n = y \right]$$ where $z_n = x_n + i y_n$ and $z=x+iy$. For the converse direction, I have $\lim |x_n - x|=0$ and $\lim |y_n - y|=0$, then I show this implies $\lim |x_n - x|^2=0$ and $\lim |y_n - y|^2=0$ so that $$\lim |x_n - x|^2 +|y_n - y|^2 =0 $$ Now I want to say this implies $\lim \sqrt{|x_n - x|^2 +|y_n - y|^2 }=0 = \lim |z_n-z|$ because of the fact that $f:x \rightarrow \sqrt{x}$ is continuous at $x=0$. But is this justified, given that $f$ is undefined to the left of $0$? Does that matter at all? AI: If $0\leq a<\varepsilon^2$, then $0\leq\sqrt{a}<\varepsilon$.
H: Prove that there exists a path in G of length K Let $G=(V,E)$ be a nonempty graph and let $d(v)\geq k$ for all $v\in V$. Prove that there exists a path in $G$ of length $k$. Alright, so my first intuition is $$\sum_{i=1}^{n} d(v_i) \geq nk \quad(n=|V|)$$ So a path contains each vertex once and has no cycle. Seeing as each vertex has a degree $\geq k$, in the worst situation (because this limits the length of our path) we have: $$\sum_{i=1}^{n} d(v_i) = nk$$ We also have $\sum_{v\in V} d(v) = 2|E|$, so $$ 2|E| = nk. $$ This seems logical but I can't translate this into the length of a path. Any tips? Edit: maybe we can say, because we have $\frac{nk}{2}$ edges and $n$ vertices, we could maybe use induction? AI: Hint. Pick a vertex $v_1$. It is connected to at least $k$ vertices; pick one of them, $v_2$. In turn, $v_2$ is connected to at least $k$ vertices, at least $k-1$ of which are not yet in the path (only $v_1$ can be); pick one of them, $v_3$. Then $v_3$ is connected to at least $k-2>0$ vertices not in the path; pick one of them, $v_4$, and repeat.... Induction on $k$ is probably the simplest way to write the solution....
H: Zero, a mystery. Is zero even? Is zero a multiple of ANY number? I did some reading, and I found the following: for 1) I see a whole Wiki article saying that this is true, but I just can't understand why? for 2) I think it is, but I don't have a precise definition, and the only thing I remember is that in the junior classes, I used to write that the smallest multiple of any number is the number itself and it was approved by my teachers...so a proof on 1) and 2) would be highly appreciated... Thanks a Ton! AI: $1$. It depends on your definition of even number. If you define the set of even numbers as $$\mathbb{E} = \{x \in \mathbb{R}: x = 2n, \text{ where } n \in \mathbb{Z}^+\}$$ then $0$ is not even. However, if you define the set of even numbers as $$\mathbb{E} = \{x \in \mathbb{R}: x = 2n, \text{ where } n \in \mathbb{Z}\}$$ then it is. $2$. For the second question again it depends on your definition of multiple of a number. If you define, the set of multiples of $x$ as $$M_x = \{y \in \mathbb{R}: y = z \times x, \text{ where }z \in \mathbb{Z} \}$$ then $0$ is indeed a multiple of any number since $0 = 0 \times x$ for any $x$ and in fact, $0$ is the only number present in all the sets, i.e., $\cap_{x \in \mathbb{R}} M_x = \{0\}$, i.e., $0$ is the only number that is a multiple of all numbers.
H: On the general definition and nature of preordered sets I found the following statement in a book after the usual definition of preorder and preordered set and I am not sure about how I should interpret it. "While a preordered set $(X, \succsim)$ is not a set, it is convenient to talk as if it is a set when referring to properties that apply only to $X$. For instance, by a 'finite preordered set', we understand a preordered set $(X, \succsim)$ with $|X|< \infty$." So the question (probably trivial) is, in the end is the preordered set a set or not? My interpretation is that here the only set is $X$, and even if we put a preorder on it, and we call the result a preordered set, still the only set around is $X$, even if we had some characteristics (i.e. the predorder) that describe the behavior of the elements of $X$ with more details. Is this the right interpretation of this paragraph? PS I was not sure about the title of this question. I hope it is ok. AI: This is a small, but sometimes relevant, key issue. Are ordered pairs sets? Recall that preordered sets are ordered pairs, $(X,R)$ such that $X$ is a [non-empty] set and $R$ is a preorder of $X$. If you treat ordered pairs as non-set objects, then preorders are not sets for that very reason. But when we talk about the preordered set $(X,R)$, we are saying that $X$ is a set, and it is preordered by $R$. This is how you should interpret this paragraph. Often, however, it is very good to confuse between $(X,R)$ and $X$. It helps us read things better, as the example about finiteness proposes.
H: Probability that Brownian motion makes a loop around the origin Let $B$ be a complex Brownian motion started at $\epsilon > 0$. I am trying to prove that as $\epsilon \rightarrow 0$, the probability that $B$ makes a loop around the origin before hitting the unit circle tends to 1. I tried looking at $\operatorname{arg} B_t$, and rescaling the problem, but neither of these seemed to get me anywhere. Does anyone have any suggestions? AI: For each $\epsilon$, by scaling, this is the probability that a Brownian motion starting at $1$ makes a loop around the origin before hitting the circle of radius $1/\epsilon$. When $\epsilon\to0$, the hitting time of the circle of radius $1/\epsilon$ goes to infinity hence, in the limit, one is after the probability that a Brownian motion starting at $1$ makes a loop around the origin, ever. The conclusion then comes from a representation of the argument process of a plane Brownian motion... which you might know?
H: Economic Elasticity: where does the elasticity equation come from? I know the equation for economic elasticity is: $$\varepsilon = \frac{\%\,\Delta Y}{\%\,\Delta X} = \frac{\partial Y(X)}{\partial X}\frac{X}{Y} = \frac{\partial \log(Y)}{\partial \log(X)}$$ In fact, this is the generalization of any sensitivity analysis or elasticity for a given function; in this case $Y(X)$. But where does it come from? I mean, why is this the equation and not, for example, just the partial derivative of the given function? Thanks in advance! AI: The elasticity gives you the percentage change of the dependent variable with respect to the percentage change of the independent variable. Elasticity by definition is dimensionless and I believe this is part of the motivation behind using this over the derivative like you mentioned.
H: the nonexistence of a division algebra on $\mathbb R^{2n+1}$ Prove that there is no division algebra structure on $\mathbb R^{2n+1}$. The hint says Suppose that there is such a structure on $\mathbb R^{2n+1}$. Take a nonzero $a \in \mathbb R^{2n+1}$. Consider $f: S^{2n} \rightarrow S^{2n}, x \mapsto \frac{ax}{|ax|}$. Prove that $f$ and $-f$ are homotopic. But how is this homotopy constructed? Thanks for the help. AI: Hint: Find a path between $a$ and $-a$ that does not go through $0$.
H: Difference between Bi-annual and semi-annual in Financial Maths I was wondering what the difference is between compounding interest when they use bi-annual and semi-annual, and hence how to change your value of $i$. I think semi-annual means twice in 1 year, so your $i$ would be $i/2$, and then you would multiply your years by two as well. However, I'm not sure how to deal with bi-annual; I think it means once every 2 years, so would you take your $i$ and divide it by 0.5, as well as your number of years? AI: I assume you are using the formula for compound interest: $$A = P \left(1 + \frac{i}{n}\right)^{nt}$$ where $A$ is the future value, $P$ is the present value, $i$ is the annual interest rate (as a decimal), $n$ is the number of times compounded per year and $t$ is the length of time in years. It is very important here that the question states interest as the annual interest rate. Semi-annual means twice in one year. Therefore, your $n$ will equal 2. Hence, your formula becomes $$A = P \left(1 + \frac{i}{2}\right)^{2t}.$$ You are correct that bi-annual means once every two years. Therefore, the interest is compounded "half" a time per year (1 compounding every 2 years gives $n = \frac{1}{2}$). Now we have $n = \frac{1}{2}$ and $$A = P \left(1 + \frac{i}{\frac{1}{2}}\right)^{\frac{1}{2}t}$$
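As a quick numerical illustration of the compound-interest formula in the answer above, here is a small Python sketch; the figures $P=1000$, $i=6\%$, $t=10$ years are made-up example values, not taken from the question.

```python
def future_value(P, i, n, t):
    """Compound interest: A = P * (1 + i/n)**(n*t), with i the annual rate."""
    return P * (1 + i / n) ** (n * t)

P, i, t = 1000.0, 0.06, 10.0
print(future_value(P, i, 2, t))    # semi-annual: compounded twice per year
print(future_value(P, i, 0.5, t))  # bi-annual in the sense above: once every two years
print(future_value(P, i, 1, t))    # annual compounding, for comparison
```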
H: Intersection of two lines in vector form Sorry about formatting, this is my first question here, and I have no idea how to do it properly. Assuming I have two lines $\displaystyle l_1 = \binom{x_1}{y_1} + a\binom{u_1}{v_1}$ $\displaystyle l_2 = \binom{x_2}{y_2} + b\binom{u_2}{v_2}$ Thus the intersection holds $\displaystyle \binom{x_1}{y_1} + a\binom{u_1}{v_1} = \binom{x_2}{y_2} + b\binom{u_2}{v_2}$ and have made sure those two lines will intersect (so $u_1,v_1$ and $u_2,v_2$ aren't parallel), how do I find the intersection point elegantly? Currently (I am using this in C++ code), I need to do a lot of if() cases, since I cannot divide by 0, but in order to find the point, I need to substitute b in the formula I get $a = \dfrac{x_2-x_1+bu_2}{u_1}$ so here I have to check for $u_1\neq0$, and then substitute in the second line, where I then need to check for $v_2\neq0$, and this creates one hell of a lot of code. Is there a better way, am I just missing something easy? AI: The most elegant formulation (IMO) uses Cramer's Rule. You can reformulate your two equations into $$\begin{bmatrix} u_1 & -u_2\\ v_1 & -v_2 \end{bmatrix} \cdot \begin{bmatrix} a \\ b\end{bmatrix} = \begin{bmatrix} x_2-x_1 \\ y_2 -y_1 \end{bmatrix}$$ and use determinants to solve for $a$. Another nice way to think about it is areas: For segments AB and CD, the fraction along AB at which the intersection occurs is equal to the area of the triangle ACD divided by the area of the quadrilateral ACBD.
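A minimal sketch of the Cramer's-rule approach from the answer above, written in Python for brevity (the question is about C++, but the formula ports directly); the function name `intersect` and the sample coordinates are invented for illustration. Note that only one division is needed, guarded by a single determinant check.

```python
def intersect(p1, d1, p2, d2):
    """Intersection of p1 + a*d1 and p2 + b*d2 via Cramer's rule.

    Assumes the direction vectors d1, d2 are not parallel."""
    x1, y1 = p1; u1, v1 = d1
    x2, y2 = p2; u2, v2 = d2
    det = u2 * v1 - u1 * v2          # determinant of [[u1, -u2], [v1, -v2]]
    if det == 0:
        raise ValueError("lines are parallel or coincident")
    a = (u2 * (y2 - y1) - v2 * (x2 - x1)) / det
    return x1 + a * u1, y1 + a * v1

print(intersect((0, 0), (1, 1), (4, 0), (-1, 1)))  # expected intersection: (2.0, 2.0)
```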
H: $f:\mathbb{R}\to\mathbb{R}$ is continuous and $\int_{0}^{\infty} f(x)dx$ exists $f:\mathbb{R}\to\mathbb{R}$ is continuous and $\int_{0}^{\infty} f(x)dx$ exists. Could anyone tell me which of the following statements are correct? $1. \text{if } \lim_{x\to\infty} f(x) \text{ exists, then it is 0}$ $2. \lim_{x\to\infty} f(x) \text{ must exist, and it is 0}$ $3. \text{ in case f is non-negative, } \lim_{x\to\infty} f(x) \text{ exists, and it is 0}$ $4. \text{ in case f is differentiable, } \lim_{x\to\infty} f'(x) \text{ exists, and it is 0}$ I solved one problem in the past which says: if $f$ is uniformly continuous and $\int_{0}^{\infty} f(x)dx$ exists, then $\lim_{x\to\infty} f(x)=0$. So does the condition in 1 say that $f$ is uniformly continuous, and hence $1$ is true? Well, I have no idea about the other statements. I will be pleased for your help. AI: The point is that the limit need not exist. You can construct something according to the comment by J.J. For instance, you can consider a function such that at any integer the height is one but the width is $1/n^2$. Then the area of each rectangle is $1/n^2$ and $$ \int_0^\infty f(x)dx=\sum\frac{1}{n^2}<\infty. $$ But the limit does not exist. You get $$ \lim_{n\rightarrow\infty} f(n)=1, $$ $$ \lim_{n\rightarrow\infty} f(n-\delta)=0,\;\; 0<\delta\ll 1 $$ Thus, 1 is true. If the limit exists, it should be zero (if it isn't, then the function can't be integrable, right?). 2 is not true (see the hint by J.J. and the first part of my answer above). I think that 3, 4 are not true due to the same reason. Edit: To prove that 4 is not true you can construct a (more decaying) example and then integrate it.
H: Complex numbers - separate real/imaginary parts $$K(\omega) = \frac{1}{1 + j\omega RC}$$ Uhm...How do I separate the real part from the imaginary part here? :U And how can I find the argument? I mean, if the document I got this formula from is right, the argument should be negative... AI: If you have $\frac{z_1}{z_2}$ a ratio of complex numbers, multiplying $$ \frac{z_1}{z_2}\cdot\frac{\overline{z_2}}{\overline{z_2}} = \frac{z_1\overline{z_2}}{z_2\overline{z_2}} $$ will give you a real denominator ($\overline{z_2}$ is the complex conjugate of $z_2$). Once you do that, you need only write $z_1\overline{z_2} = x + jy$, $x,y\in\Bbb R$, as then $$ \frac{z_1}{z_2} = \frac{x + jy}{z_2\overline{z_2}} = \frac{x}{z_2\overline{z_2}} + j\frac{y}{z_2\overline{z_2}}. $$
H: Non-trivial solutions implies row of zeros? If there exist non-trivial solutions, the row echelon matrix of the homogeneous augmented matrix A has a row of zeros. True or False? I'm not sure where to begin to see why this would be true or false. I know that if there is a row of zeros it means that there are infinitely many solutions, but I'm not sure how I can tell if that means there are non-trivial solutions. Any help would be appreciated. Thanks in advance. AI: Recall that a system can have either $0$, $1$, or infinitely many solutions. Thus, the fact that there is at least one nontrivial solution (other than the trivial solution consisting of the zero vector) implies that there are infinitely many solutions. Thus, your statement is false; as a counterexample, consider the following homogeneous augmented matrix (conveniently in reduced row echelon form): $$ A= \left[ \begin{array}{ccc|c} 1 & 0 & 2 & 0 \\ 0 & 1 & 3 & 0 \end{array}\right] $$ Notice that $A$ has infinitely many solutions (the third column has no pivot, so the system has one free variable), yet there is no row of zeroes. Note: The converse is not necessarily true either. That is, it is NOT the case that: if the row echelon matrix of a homogeneous augmented matrix A has a row of zeroes, then there exists a nontrivial solution. As a counterexample, consider: $$ A= \left[ \begin{array}{cc|c} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array}\right] $$ Notice that $A$ has only the trivial solution (every column has a pivot, so the system has no free variables), yet $A$ has a row of zeroes.
H: Young's inequality for three variables Let $x, y, z \geqslant 0$ and let $p, q, r > 1$ be such that $$ \frac{1}{p} + \frac{1}{q} + \frac{1}{r} = 1. $$ How can one show that under these hypotheses we have $$ xyz \leqslant \frac{x^p}{p} + \frac{y^q}{q} + \frac{z^r}{r} $$ with equality if and only if $x^p = y^q = z^r$, using twice the standard two-parameters Young's inequality which says that for all $x, y \geq 0$ and for all $p, q > 1$ for which $\frac{1}{p} + \frac{1}{q} = 1$ we have $$ xy \leqslant \frac{x^p}{p} + \frac{y^q}{q} $$ with equality if and only if $x^p = y^q$ ? I've tried to apply it twice directly, to multiply two inequalities and to add two inequalities, but in each case it gets quite messy and I can't get the desired result, even though I'm sure it should be quite simple. AI: Here's a solution by applying Young's inequality twice. First apply the inequality to $x$ and $yz$ with $p$ and $\frac{p}{p-1}$ to get $$xyz \le \frac{x^p}{p} + \frac{(yz)^{\frac{p}{p-1}}}{\frac{p}{p-1}}.$$ Then apply it to $y^{\frac{p}{p-1}}$ and $z^{\frac{p}{p-1}}$ with $\frac{p-1}{p} q$ and $\frac{\frac{p-1}{p}q}{\frac{p-1}{p}q - 1} = \frac{(p-1)q}{pq-p-q}$ to get $$xyz \le \frac{x^p}{p} + \frac{p-1}{p} \left(\frac{y^q}{\frac{p-1}{p}q} + \frac{z^{\frac{pq}{pq-p-q}}}{\frac{(p-1)q}{pq-p-q}}\right)$$ Notice that $\frac{pq}{pq-p-q} = r$, so you get $$xyz \le \frac{x^p}{p} + \frac{y^q}{q} + \frac{z^r}{r}$$ as wanted.
H: Divisibility and factors 1) Can factors be negative? Please prove your opinion. 2) If the prime factorization is given to you, how will you find out how many composite factors there are? Not the factors, just how many. For 2), my teacher gave me an easy method: let $x = a^{a_1} \cdot b^{b_1} \cdot c^{c_1} \cdots$ Then your answer is: $(a_1 +1)(b_1 + 1)(c_1 +1)...$ But again, he didn't prove it, and I didn't have any time left in the class to ask him, so please help me prove it... AI: 1) Prime factors are certainly positive only, since prime numbers are positive by definition. But a divisor of a number can certainly be negative. For example, all numbers in the set $\{-10,-5,-2,-1,1,2,5,10\}$ divide $10$. 2) Say $x = p_1^{e_1} * p_2^{e_2} *p_3^{e_3} * \dots * p_n^{e_n}$, where $p_i$ is the $i$th prime factor of $x$ and $e_i$ the exponent of that factor. If $a$ divides $x$, then all prime factors of $a$ have to be in the prime factorisation of $x$, and the exponent of such a factor in the factorisation of $a$ can be no more than the exponent of that factor in the factorisation of $x$. Therefore, the exponent of $p_1$ in the prime factorisation of $a$ is in the set $\{0,1,2,\dots , e_1\}$. More specifically, it is one of $e_1 + 1$ different values. Similarly, the exponent of $p_2$ in the prime factorisation of $a$ can take $e_2 + 1$ values. Multiplying all these numbers together shows that the number of (positive) divisors of $x$ is indeed $(e_1+1)*(e_2+1)*(e_3+1)*\dots *(e_n+1)$. For example, if $x = 12 = 2^2 * 3^1$, a divisor $a$ of $x$ can have a factor $2$ in its prime factorisation, two of them, or zero of them. Three cases for the factor $2$. Also, it can either have or have not a factor $3$ in its prime factorisation. Two cases for the factor $3$. Therefore, the possible total number of values $a$ can have is $3 * 2 = 6$ and indeed this is true: the positive factors of $12$ are $1,2,3,4,6,12$. As I said above, a divisor can also be negative. If you want to count not only the number of positive divisors but the number of positive and negative divisors combined, you just have to multiply the outcome by $2$, because the number of positive and negative divisors are the same.
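A small Python check of the counting argument in the answer above; the example $x = 360 = 2^3\cdot 3^2\cdot 5$ is an assumed illustration, not taken from the question.

```python
from functools import reduce

def divisors_from_factorization(factorization):
    # factorization is a dict {prime: exponent}; the number of positive
    # divisors is the product of (exponent + 1) over the prime factors
    return reduce(lambda acc, e: acc * (e + 1), factorization.values(), 1)

def divisors_bruteforce(x):
    return sum(1 for d in range(1, x + 1) if x % d == 0)

print(divisors_from_factorization({2: 3, 3: 2, 5: 1}))  # (3+1)(2+1)(1+1) = 24
print(divisors_bruteforce(360))                          # 24 again
```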
H: Evaluating the series $\sum\limits_{n=1}^\infty \frac1{4n^2+2n}$ How do we evaluate the following series: $$\sum_{n=1}^\infty \frac1{4n^2+2n}$$ I know that it converges by the comparison test. Wolfram Alpha gives the answer $1 - \ln(2)$, but I cannot see how to get it. The Taylor series of logarithm is nowhere near this series. AI: Rewrite the series as follows: $$\sum_{n=1}^\infty\frac{1}{4n^2+2n}=\sum_{n=1}^\infty\frac{1}{2n(2n+1)}=\sum_{n=1}^\infty\left(\frac{1}{2n}-\frac{1}{2n+1}\right)=\sum_{m=2}^\infty\frac{(-1)^m}{m}.$$ You may now evaluate it using the Taylor series of logarithm. Added: The third equality is justified as follows: Write $a_N=\sum_{n=1}^N\left(\dfrac{1}{2n}-\dfrac{1}{2n+1}\right)$ and $b_M=\sum_{m=2}^M\dfrac{(-1)^m}{m}$. The sequence of partial sums $b_M$ is convergent by the Leibniz criterion. Furthermore, $a_N = b_{2N+1}$ holds for all $N\in\mathbb N$, i.e. $(a_N)_{N=1}^\infty$ is a subsequence of $(b_M)_{M=1}^\infty$. Therefore these sequences converge to the same limit, which justifies the equality.
H: Non-Recursive Fundamental Recurrence Formulas Is there a non-recursive version of the fundamental recurrence formulas for continued fractions? I am trying to compute $A_{1000}$, and it is taking me an extremely long time. By the way, I am expanding $\sqrt2=1+1/(2+\cdots)$. AI: I'm guessing this is about the 57th problem from Project Euler, and feel that it might be more of a question on programming and creating an optimised algorithm. You can try to set up better recurrences. Let $d_t$ and $n_t$ denote the $t$-th denominator and numerator respectively. Then you'll find the following recurrence relations. $$ \begin{align*} d_t&=d_{t-1}+n_{t-1} \\ n_t&=2d_{t-1}+n_{t-1} \\ \Rightarrow n_t&=d_t+d_{t-1} \end{align*} $$ You can also find recurrence relations which are exclusive of one another (so the denominator recurrence relation contains no terms dependent upon the numerator and vice versa). $$ \begin{align*} d_t&=2d_{t-1}+d_{t-2} \\ n_t&=2n_{t-1}+n_{t-2} \end{align*} $$ Given that $d_0=n_0=1$, and $d_1=2$, $n_1=3$, the denominators actually form the sequence of Pell numbers, and so there is a closed form for the denominator that I'm familiar with. $$ d_{t}=\frac{(1+\sqrt{2})^{t+1}-(1-\sqrt{2})^{t+1}}{2\sqrt{2}} $$ You can also calculate a closed form for the numerator. $$ n_{t}=\frac{(1+\sqrt{2})^{t+1}+(1-\sqrt{2})^{t+1}}{2} $$
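In the spirit of the answer above, a short Python sketch using the decoupled recurrences (big integers make them painless); the helper name `convergent` and the choice to report digit counts are illustrative assumptions, and "the 1000th expansion" is read as $n_{1000}/d_{1000}$ with the indexing used in the answer ($n_0/d_0 = 1/1$, $n_1/d_1 = 3/2$).

```python
def convergent(t):
    # d_t = 2*d_{t-1} + d_{t-2} and n_t = 2*n_{t-1} + n_{t-2},
    # starting from n_0/d_0 = 1/1 and n_1/d_1 = 3/2
    d_prev, d = 1, 2
    n_prev, n = 1, 3
    for _ in range(t - 1):
        d_prev, d = d, 2 * d + d_prev
        n_prev, n = n, 2 * n + n_prev
    return n, d

n, d = convergent(1000)
print(len(str(n)), len(str(d)))  # digit counts of numerator and denominator
print(n * n - 2 * d * d)         # convergents of sqrt(2) satisfy n^2 - 2*d^2 = +/-1
```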
H: Choosing an answer randomly You are asked a question and you don't have a clue about it. But, luckily you are given $K$ ($K \ge 4$) possible answers and only one is the right one. Because you choose your answer randomly you would like to increase your chance of answering the question right. You have two options: Choose an answer randomly. Choose an answer randomly, exclude it and then choose another answer randomly from the other $K-1$ given answers. So the question, would you have better chance if you choose option 2 than option 1? And if it's better, how many times would be the best option, and stopping before it become a bad idea (excluding large part of the answers). Let's say. $K=4$. Then you'll have $25$% odds choosing the right answer with option 1. If you exclude that answer you have $33$% or $0$% odds of choosing the right answer. AI: These are two different procedures, whose outcome is selecting one of the $K$ answers. All $K$ outcomes are equally likely, so the procedures are identical in outcome. Procedure 1 is simpler and quicker, so is to be preferred. Clarifying edit: Think of these procedures as "black boxes" that produce a guess. It doesn't matter how the procedures work, they produce a guess. By symmetry, none of the answers are any more likely than any of the others to be chosen, because the black box doesn't know the answer to the question. Hence every such mechanism is identical to every other such mechanism.
H: A simple proof about finite subsets of a set What is the simplest proof that for every cartesian square $A\times A$, which is a subset of $\Gamma$, the set $A$ is finite? $$\Gamma = \left\{ ( x ; x) \hspace{.5em} | \hspace{.5em} x \in [ 0 ; 1] \right\} \cup \left\{ ( x ; 0) \hspace{.5em} | \hspace{.5em} x \in [ 0 ; 1] \right\} \cup \left\{ ( 0 ; x) \hspace{.5em} | \hspace{.5em} x \in [ 0 ; 1] \right\} .$$ The graph of $\Gamma$ follows: AI: Suppose that $A$ contains more than one non-zero point: $y,z$. Without loss of generality $y<z$. So we have that $(y,z)$ is not of the form $(y,y)$ nor $(0,y)$ nor $(y,0)$. Therefore $A\times A$ is not a subset of $\Gamma$. So it's not only finite, it has at most one non-zero point, so at most two points.
H: help to understand the partial derivative of $f(x,y) = {1 + h(x) \over 1 + (g(y))^2}$ Can you please help me to understand how given this function: $f(x,y) = {1 + h(x) \over 1 + (g(y))^2}$ Its partial derivative is: $\frac{\partial f}{\partial x} = {1 \over 1 + (g(y))^2}(1 + h'(x))$ My initial attempt was: $\frac{\partial f}{\partial x} = {1 \over 1 + (g(y))^2}(h'(x))$ AI: $$f(x,y)=\frac{1+h(x)}{1+(g(y))^2}=\underbrace{\frac{1}{1+(g(y))^2}}_{\text{independent of }x}+\underbrace{\frac{1}{1+(g(y))^2}}_{\text{independent of }x}\,h(x)$$ When taking the partial derivative with respect to $x$, you can treat expressions independent of $x$ as constants. Hence $$f'_x(x,y)=0+\frac{1}{1+(g(y))^2}h'(x)$$ and your attempt is correct.
H: Making a logarithmic equation that starts at $(0,0)$ and passes through $(x, y)$? I'm writing a computer program, and for fading sound it's best to do it with a logarithmic equation. What I need is to find a graph of the "volume" that starts at (0, 0) [x is the time, y is the volume] and goes to a certain volume after a certain time, increasing based on the logarithmic graph. What's a step-by-step way I can compute this? (So I can translate it into code.) Sorry, I just realized my question didn't make sense. By "ends at (x, y)" I mean passes through (x, y). AI: For clarity, I'm going to change notation and say it has to pass through $(x_0,y_0)$. Then we can start with $f(x) = A \, \ln(x+1)$, which is logarithmic and passes through $(0,0)$. We need $y_0 = A \, \ln (x_0+1)$, so we have $A = \frac{y_0}{\ln(x_0+1)}$. So the answer is $f(x) = \frac{y_0}{\ln(x_0+1)} \, \ln(x+1)$.
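Since the question is ultimately about code, here is a minimal Python sketch of the answer's formula; the target point $(x_0, y_0) = (3\text{ s}, 1.0)$ is an assumed example value.

```python
import math

def volume(t, x0, y0):
    # f(t) = y0 / ln(x0 + 1) * ln(t + 1): equals 0 at t = 0 and y0 at t = x0
    return y0 / math.log(x0 + 1) * math.log(t + 1)

x0, y0 = 3.0, 1.0   # reach full volume 1.0 after 3 seconds of fade-in
for k in range(7):
    t = 0.5 * k
    print(round(t, 1), round(volume(t, x0, y0), 4))
```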
H: Basis given the rank of A is equal to n Let $A$ be an $m \times n$ matrix with columns $C_1,C_2,\dots,C_n$. If $\operatorname{rank} (A) = n$, show that $\{ A^T C_1,\dots,A^T C_n\}$ is a basis for $\mathbb{R}^n$. I'm really confused about how I should do this. Could anyone please help me? AI: Hint: what are the properties of a basis in general? Edit: some more hints. You need to show two things, namely: linear independence, and that the vectors span $\mathbb{R}^n$. What do you know of the rank of $A^T$ given what you know of that of $A$? How could you represent $\{ A^T C_1,\dots,A^T C_n\}$ in a different way? Alright, to be more specific: $\{ A^T C_1,\dots,A^T C_n\}$ are the columns of $A^TA$. We have $\operatorname{rank}(A^TA)=\operatorname{rank}(A)=n$. By the definition of rank, these $n$ columns are all linearly independent, and therefore together they span $\mathbb{R}^n$. Hence, they form a basis for $\mathbb{R}^n$.
H: Difference between $P(Y \ge y)$ and $P(Y > y)$. In the book "Probability: Theory and Examples", there is this theorem: Lemma 2.2.8 If $Y \ge 0$ and $p > 0$ then $\mathbb E(Y^p) = \int_0^\infty py^{p-1} P (Y > y) \, \mathrm dy$. The proof is by using Fubini's Theorem as follows \begin{align*} \int_0^\infty py^{p-1} \mathbb P (Y > y) \mathrm dy & = \int_0^\infty \int_\Omega py^{p-1} 1_{(Y > y)} \, \mathrm dP \, \mathrm dy \\ & = \int_\Omega \int_0^\infty py^{p-1} 1_{(Y > y)} \, \mathrm dy \, \mathrm dP \\ & = \int_\Omega \int_0^Y py^{p-1} \mathrm dy \mathrm dP \\ & = \int_\Omega Y^p \mathrm dP = \mathbb E Y^p \\ \end{align*} For me it seems totally fine to replace $Y > y$ with $Y \ge y$ in this proof, but the conclusion would be $\mathbb E(Y^p) = \int_0^\infty py^{p-1} P (Y \ge y) \, \mathrm dy$. So if $Y$ can only take nonnegative integer values, we have $\mathbb E Y = \sum_{y \ge 0} P(Y \ge y)$, which is wrong. So why we can not replace $Y > y$ with $Y \ge y$? AI: Lemma 2.2.8 is equally valid when one replaces $P[Y\gt y]$ in the integral by $P[Y\geqslant y]$ because $P[Y\gt y]=P[Y\geqslant y]$, for almost every $y$ with respect to the Lebesgue measure, since the set of values of $y$ such that $P[Y\gt y]\ne P[Y\geqslant y]$ is at most countable. This holds for every nonnegative random variable $Y$, either continuous or discrete or neither continuous nor discrete.
H: Putnam 1967 problem, integration I have some problems understanding this solution of this problem: Given $f,g : \mathbb{R} \rightarrow \mathbb{R}$ - continuous functions with period $1$, prove that $$\lim _{n \rightarrow \infty} \int_0^1 f(x)g(nx) \, dx = \int_0^1 f(x) \, dx \cdot \int_0^1g(x) \, dx$$ It says there that if we split this integral into $n$ parts to get $$\int_0^1 f(x)g(nx) \, dx = \sum_{r=0}^{n-1} \int_{\frac{r}{n}}^{\frac{r+1}{n}} f(x)g(nx) \, dx$$ we will have $$\int_0^1 f(x)g(nx) \, dx = \sum_{r=0}^{n-1} f\left(\frac{r}{n}\right) \int_{\frac{r}{n}}^{\frac{r+1}{n}}g(nx) \, dx$$ because for large $n$, $f$ is roughly constant over the range. Could you tell me why $f$ is roughly constant over the range for $n$ large enough? I would really appreciate all your insight. AI: I hesitate to post an answer after an accepted answer is there, but somebody should say this. $f$ is roughly constant on small intervals because $f$ is uniformly continuous. The fact that $f$ is continuous means you can make $f$ "roughly contstant" on a small interval about a specified point. But intervals small enough to make $f$ roughly constant near one point may fail to be small enough to make $f$ roughly constant near other points. That's where "uniformly" helps: $f$ is continuous on a closed bounded interval $[0,1]$ and that makes it uniformly continuous, so that once you've specified that you want $f$ not to change by more than a certain amount, the intervals that are small enough will be small enough at all points.
H: Finding the second derivative I have this question: Show that $$ \frac{\partial^2r}{\partial x^2} = \frac{y^2}{r^3} $$ in two ways: (1) find the $x$ derivative of $$ \frac{\partial r}{\partial x} = \frac{x}{\sqrt{x^2+y^2}} $$ and (2) find the $x$ derivative of $$ \frac{\partial r}{\partial x} = \frac{x}{r} $$ by the chain rule. I am quite lost. I mean I can take the $x$ derivative of the two equations, but I don't seem to in any way connect it to $$ \frac{\partial^2 r}{\partial x^2} = \frac{y^2}{r^2}. $$ Can anybody help? David EDIT It is supposed to be $$ r^3 $$ in the denominator. AI: (1) $$ \begin{align} \frac{\partial^2 r}{\partial x^2} &= \frac{\partial}{\partial x} \left( \frac{\partial r}{\partial x} \right) \\ &= \frac{\partial}{\partial x} \left( \frac{\partial}{\partial x} \sqrt{x^2 + y^2} \right) \\ &= \frac{\partial}{\partial x} \left( \frac{x}{\sqrt{x^2 + y^2}} \right) \\ &= \frac{\sqrt{x^2 + y^2} \cdot 1 - x \cdot \frac{x}{\sqrt{x^2 + y^2}}}{\left(\sqrt{x^2 + y^2}\right)^2} \\ &= \frac{(x^2 + y^2) - x^2}{\left(\sqrt{x^2 + y^2}\right)^3} \\ &= \frac{y^2}{r^3} \end{align} $$ (2) $$ \begin{align} \frac{\partial^2 r}{\partial x^2} &= \frac{\partial}{\partial x} \left( \frac{\partial r}{\partial x} \right) \\ &= \frac{\partial}{\partial x} \left( \frac{x}{r} \right) \\ &= \frac{r \cdot \frac{\partial x}{\partial x} - x \cdot \frac{\partial r}{\partial x}}{r^2} \\ &= \frac{r - x \cdot \frac{\partial r}{\partial x}}{r^2} \\ &= \frac{r - x \cdot \frac{x}{r}}{r^2} \\ &= \frac{r^2 - x^2}{r^3} \\ &= \frac{y^2}{r^3} \\ \end{align} $$
H: Question about integration I am currently revising some calculus notes, and I have a question I hope somebody could help me with. I've seen the following done in various places (not in my own notes, but in other proofs) $\int \text{some-expression-in-terms-of-y }\frac{dy}{dx} dx = \int \text{some-expression-in-terms-of-y } dy$ I was told never to think of $\frac{dy}{dx}$ as a fraction, but rather as a symbol. So what exactly is happening there? It's like part of the integral is evaluated, changing the integral so it's now with respect to $dy$. AI: It's not a fraction, and you are right not to think of it as a fraction. What you've seen is the substitution rule (the opposite of the chain rule you're used to), which states that $$\int{f(u(x))\cdot u'(x)} dx=\int{f(u)} du$$ You're not "cancelling" the dx you were integrating against before with the dx on the bottom - it's more like you're changing to a more convenient coordinate system. The idea that you can "cross out" the dx's in $\frac{dy}{dx}dx$ to get dy is an abuse of notation that many nonrigorous, pedagogical texts adopt so their students can remember the rule better.
H: Confusion regarding probability of microbe producing everlasting colony. My question is about the given solution to problem 4 in Newman's book 'A Problem Seminar'. Note that the book is available online at Springer. Problem 4 A microbe either splits into two perfect copies of itself or else disintegrates. If the probability of splitting is p, what is the probability that one microbe will produce an everlasting colony? Solution: We can (almost) solve this problem by expressing the unknown probability, $x$, in terms of itself. For, after all, how can the one microbe multiply forever? Only in one way. It must at the first moment split, and then at least one of its two daughters must succeed in the task of the mother (i.e., lead to an eternal progeny). In short $x= p\cdot(1-(1-x)^ 2)$. We can solve for x and get the answer! ... From my understanding, $(1-(1-p)^2)$ means the probability of at least one microbe splitting. However, I have no idea why $x= p\cdot(1-(1-x)^ 2)$ instead of $x= p\cdot(1-(1-p)^ 2)$. How does $x$ get into the $(1-x)^2$? What does $(1-(1-x)^ 2)$ intuitively mean? AI: Since $x$ is the probability that a microbe produces an everlasting colony, $1-x$ is the probability that a microbe’s line will die out. Now we start with one microbe; with probability $p$ it splits. The daughter lines survive or die out independently of each other, so the probability that both daughter lines die out is $(1-x)^2$. The probability that at least one daughter line is everlasting is therefore the complementary probability $1-(1-x)^2$. Thus, the probability that the original microbe splits and has at least one everlasting daughter line is $p\big(1-(1-x)^2\big)$. But this is just the probability that the original microbe’s line is everlasting, so $x=p\big(1-(1-x)^2\big)$.
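A small numerical check of the fixed-point equation in the answer above, as a Python sketch: iterating $q \mapsto (1-p)+pq^2$ is the standard branching-process extinction recursion, and $x = 1-q$ then satisfies $x = p\,(1-(1-x)^2)$ exactly as derived. The sample values of $p$ and the iteration count are assumptions for illustration.

```python
def survival_probability(p, iterations=100_000):
    # q_{k+1} = (1 - p) + p * q_k^2 is the probability that the line dies out
    # within k+1 generations; its limit q is the extinction probability,
    # so the survival probability is x = 1 - q.  Convergence is slow near p = 1/2.
    q = 0.0
    for _ in range(iterations):
        q = (1 - p) + p * q * q
    return 1 - q

for p in (0.3, 0.5, 0.6, 0.75, 0.9):
    x = survival_probability(p)
    closed_form = max(0.0, (2 * p - 1) / p)  # non-zero root of x = p*(1 - (1-x)^2)
    print(p, round(x, 5), round(closed_form, 5))
```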
H: Problem to find the matrix of linear transformation Let $B = \left(\vec{e}_1, \vec{e}_2, \vec{e}_3\right)$ be a direct orthonormal basis of $V^3$ and let $\vec{a}$ and $\vec{b}$ be two nonzero vectors of $V^3$ such that $\vec{a}=a_{1}\vec{e}_1+a_{2}\vec{e}_2+a_{3}\vec{e}_3$ $\vec{b}=b_{1}\vec{e}_1+b_{2}\vec{e}_2+b_{3}\vec{e}_3$ Give the matrix associated with the following linear transformations $T(\vec{u})=(\vec{a}\cdot\vec{u})\vec{b}$ $T(\vec{u})=\vec{a}\times\vec{u}$ Can anyone please give me an idea how to find these matrices? I know the answers are $$ \begin{bmatrix} a_{1}b_{1} & a_{2}b_{1} & a_{3}b_{1} \\ a_{1}b_{2} & a_{2}b_{2} & a_{3}b_{2} \\ a_{1}b_{3} & a_{2}b_{3} & a_{3}b_{3} \\ \end{bmatrix} $$ $$ \begin{bmatrix} 0 & -a_{3} & a_{2} \\ a_{3} & 0 & -a_{1} \\ -a_{2} & a_{1} & 0 \\ \end{bmatrix} $$ but I have no idea what I need to do to get them Thanks in advance AI: My suggestion would be to evaluate the transformation in $e_1$, $e_2$ and $e_3$. Also you must note that the matrix associated to a transformation in a given basis $\{e_1,e_2,e_3\}$ is $[T(e_1)T(e_2)T(e_3)]$ where each $T(e)$ is a column vector.
H: Integrating $x^x$ and getting a graph I've heard many times of functions that cannot be integrated. For example, $x^x$, which is the most common. But what I don't know is how could you, even if the graph has no equation, plot this integral. If anyone could give me a graph, I would be extremely pleased. Or at least, could you tell me how to graph it? Maybe in Wolfram Mathematica? AI: Here is a quick picture of $\ x\mapsto \int_0^x t^t\;dt$ I added the plot of the real and imaginary part of the integral in the complex plane $\ z\mapsto \int_0^z t^t\;dt\ $ with $z=x+iy$ (note the 'branch jump' on the negative axis $(-\infty,0)$).
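Since the question asks how such a plot could be produced at all, here is a rough numerical sketch in Python (rather than Mathematica) using composite Simpson's rule; the step count and sample points are arbitrary choices, and $t^t$ is set to $1$ at $t=0$ by continuity.

```python
def f(t):
    return 1.0 if t == 0 else t ** t   # t^t -> 1 as t -> 0+

def F(x, steps=10_000):
    # composite Simpson's rule approximation of F(x) = integral of t^t from 0 to x
    if steps % 2:
        steps += 1
    h = x / steps
    s = f(0.0) + f(x)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3

for x in (0.5, 1.0, 1.5, 2.0):
    print(x, F(x))    # these (x, F(x)) pairs trace out the graph for x >= 0
```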
H: Find the greatest integer $N$ such that... Find the greatest integer $N$ such that $N<\dfrac{1}{\sqrt{33+\sqrt{128}}+\sqrt{2}-8}$. The way I did it is this: first, I rewrote the biggest square root as $\sqrt{1+2*16+8\sqrt{2}}$. Then I made the substitution $x=\sqrt{2}$, so this became $\sqrt{16x^2+8x+1}$, which factors to $\sqrt{(4x+1)^2}$ which is equal to $4x+1$. Then I substituted back: $\dfrac{1}{4\sqrt{2}+1+\sqrt {2}-8}$, or $\dfrac 1{5 \sqrt{2}-7}$. This is where I have a problem. I am able to find the answer because I know that $1.4<\sqrt 2 <1.5$. However, is there any way to find this answer without knowing this? Thanks! AI: Picking up from where you left off, we have: $$ \begin{align*} N &= \left\lfloor \dfrac{1}{5\sqrt{2}-7} \right\rfloor\\ &= \left\lfloor \dfrac{1}{5\sqrt{2}-7} \cdot \dfrac{5\sqrt{2}+7}{5\sqrt{2}+7} \right\rfloor\\ &= \left\lfloor \dfrac{5\sqrt{2}+7}{50-49} \right\rfloor\\ &= \lfloor 5\sqrt{2}+7 \rfloor\\ &= \lfloor \sqrt{50}+7 \rfloor\\ &= \lfloor \sqrt{50} \rfloor+7 \\ &= \sqrt{49}+7 \\ &= 14 \\ \end{align*}$$
H: property of extremum of two variable function What can we say about $u_{xx}(x_0,y_0)$, $u_{yy}(x_0,y_0)$ and $u_{xy}(x_0,y_0)$ if we know $u$ reaches its maximum at $(x_0,y_0)$? Especially, what is the sign of each term? AI: When a local maximum is attained: $H(x_0,y_0) = u_{xx}(x_0,y_0) \,u_{yy}(x_0,y_0) - (u_{xy}(x_0,y_0) )^2 \geq 0 $. $u_{xx}(x_0,y_0) \leq 0$, $u_{yy}(x_0,y_0) \leq 0 $. We only know $|u_{xy}(x_0,y_0)| \leq \sqrt{u_{xx}(x_0,y_0) \,u_{yy}(x_0,y_0) }$, but sign we can't tell much, could be either case.
H: Getting generating functions for a weirdly-defined weight function Let $n\in\mathbb{N}$. For a permutation $\sigma:[n]\rightarrow[n]$, we ue the notation $(\sigma(1)\sigma(2)\cdots\sigma(n))$ to describe the mapping. A pair of integers $(i,j)$ is called an inversion of $\sigma$ if $i<j$ and $\sigma(i)>\sigma(j)$. For example, the permutation $(32415)$ on $[5]$ has 4 inversions: $(1,2),(1,4),(2,4),(3,4)$. Define the weight function $w$ on a permutation $\sigma$ to be the number of inversions in $\sigma$. Let $S_n$ be the set of all permutations of $[n]$. (a) Determine the generating functions for $S_1$, $S_2$, $S_3$ with respect to the weight function $w$. (b) Prove that for $n\geq 2$, $\Phi_{S_{n}}(x)=(1+x+\cdots+x^{n-1})\Phi_{S_{n-1}}(x) $. You may use the following non-standard notation: if $\sigma$ is a permutation of $[n]$, then $\sigma '$ is the permutation of $[n-1]$ obtained from $\sigma$ by removing the element $n$. (c) Prove that the number of permutations of $[n]$ with $k$ inversions is $$[x^{k}]\frac{\prod_{i=1}^{n}(1-x^{i})}{(1-x)^{n}}$$ (a) is doable by brute-forcing. (b) seems to be doable by the product lemma if only the weight function was something more tractable, since I need to prove that the weight is equal to adding the weights of some product. (c) is completely over my head since I can't get any general formula for the number of inversions in a permutation. AI: If you have (a) and (b), you have (c): $$\begin{align*} \frac1{(1-x)^n}\prod_{i=1}^n(1-x^i)&=\prod_{i=1}^n\frac{1-x^i}{1-x}\\\\ &=\prod_{i=1}^n\left(1+x+x^2+\ldots+x^{i-1}\right)\;, \end{align*}$$ so you need only verify that $\Phi_{S_1}(x)=1$. One way to prove that $$\Phi_{S_n}(x)=\prod_{i=1}^n\left(1+x+x^2+\ldots+x^{i-1}\right)$$ is by induction on $n$: let $$c_n(k)=[x^k]\Phi_{S_n}(x)\;,$$ the number of $n$-permutations with $k$ inversions, and $$a_n(k)=[x^k]\prod_{i=1}^n\left(1+x+x^2+\ldots+x^{i-1}\right)\;,$$ and show that $$c_n(k)=\sum_{i=\max\{0,k-n+1\}}^kc_{n-1}(i)\tag{1}$$ and $$a_n(k)=\sum_{i=\max\{0,k-n+1\}}^ka_{n-1}(i)\;.\tag{2}$$ $(2)$ is pretty straightforward. For $(1)$, think about starting with a permutation of $[n-1]$ with $i$ inversions and inserting $n$ so that it precedes the last $k-i$ elements of the permutation; how many inversions does that add to the permutation? (The non-standard notation mentioned in (b) appears to be designed to let you talk easily about the inverse of this operation.)
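A quick Python cross-check of the identity in part (c) of the answer above: the coefficient list of $\prod_{i=1}^{n}(1+x+\cdots+x^{i-1})$ is compared against a brute-force inversion count; $n=5$ is an arbitrary test size.

```python
from itertools import permutations

def counts_bruteforce(n):
    counts = {}
    for perm in permutations(range(1, n + 1)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        counts[inv] = counts.get(inv, 0) + 1
    return counts

def counts_from_product(n):
    # coefficient list of prod_{i=1}^{n} (1 + x + ... + x^{i-1})
    coeffs = [1]
    for i in range(1, n + 1):
        new = [0] * (len(coeffs) + i - 1)
        for k, c in enumerate(coeffs):
            for j in range(i):
                new[k + j] += c
        coeffs = new
    return coeffs

n = 5
gf = counts_from_product(n)
bf = counts_bruteforce(n)
print(gf)
print([bf.get(k, 0) for k in range(len(gf))])  # same list, so the formula checks out
```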
H: How to find change of basis matrices Suppose $U$ has a basis $e_1$, $e_2$, $e_3$ and $V$ has a basis $f_1$, $f_2$ and that $T:U \rightarrow V$ is a linear transformation whose matrix $A$ with respect to the given bases is $A=\begin{pmatrix} 0 & 1 & 2\\ 3 & 4 & 5 \end{pmatrix}$. The first part of the questions asks to show that $e_1$, $e_1$ + $e_2$, $e_1$ + $e_2$ + $e_3$ forms a basis of $U$ and that $f_1$, $f_1$ + $f_2$ forms a basis of $V$. I've shown that. Now there are two things I need help with. Write a matrix $A'$ of $T$ with respect to $e_1$, $e_1$ + $e_2$, $e_1$ + $e_2$ + $e_3$ and $f_1$, $f_1$ + $f_2$. Write a change of basis matrix $P$ from $e_1$, $e_2$, $e_3$ to $e_1$, $e_1$ + $e_2$, $e_1$ + $e_2$ + $e_3$. $$P=\begin{pmatrix} 1 & -1 & 0\\ 0 & 1 & -1\\ 0 & 0 & 1 \end{pmatrix}$$ The answer is given, but I do not know how to work it out. Could you help me? Also, are $e_1$, $e_2$, $e_3$ and $f_1$, $f_2$ supposed to be standard bases? Thank you for your time. AI: I'm not sure it's necessary to say the bases are standard at all. One of the big problems I perceive with writing out matrices as arrays is that people lose the ability to think about them abstractly--they no longer know what the matrices mean. I think this is part of the way matrices are taught, so it's not your fault. Let's look at the matrix $A$. What does it mean? Well, any matrix can be thought of as a linear function. You can equivalently describe $A$ as so: $$A(e_1) = 3 f_2, \quad A(e_2) = f_1 + 4 f_2, \quad A(e_3) = 2 f_1 + 5 f_2$$ You'll notice each of these is a column, but with the bases written out. How does this approach help? It makes problem (1) really trivial. Let $f_2' = f_1 + f_2$. Then $f_2 = f_2' - f_1$. Similarly, let $e_2' = e_1 + e_2$ and $e_3' = e_1 + e_2 + e_3$. Now, you should be able to evaluate the components. Consider $A(e_2')$: $$\begin{align*}A(e_2') &= A(e_1) + A(e_2) \\ &= 3f_2 + f_1 + 4 f_2 \\ &= f_1 + 7 f_2 \\ &= f_1 + 7 (f_2'- f_1) \\&= -6 f_1 + 7 f_2'\end{align*}$$ All it takes to evaluate that is simple substitution. Now, for (2), what is it we want to do when we use the change of basis matrix? We want to feed in a vector in the new (primed) basis, convert it back to the old basis, use the map $A$, and the convert back to the primed basis. The change of basis map should then take a basis vector in the primed basis and convert it to a basis vector in the original, unprimed basis. So like this: $$P(e_1') = e_1, \quad P(e_2') = e_2, \quad P(e_3') = e_3$$ I'll handle only one of these cases, the last one. You should observe that $e_3' = e_2' + e_3$, or $e_3 = -e_2' + e_3'$. That basically does it--this is exactly the third column of $P$ that you're given. All you have to do for this part is express $e_1, e_2, e_3$ as linear combinations of $e_1', e_2', e_3'$. Since $e_1 = e_1'$, you only really have to do 2 equations, at that.
H: Number of possible pairs from $\{1,\dots, n\}$ with $i < j$ I am not sure how to find the above number of possible pairs. For example, for $\Sigma = \{1,2,3,4\}$, the desired number would be $6$. I see that this is half of ${4 \choose 2}$ and I am wondering if this is in fact the way to go about this: take all possible two element selections from a set of $n$ elements, and then equate choosing $(i, j)$ with choosing $(j, i)$. AI: ${4\choose 2}$ is exactly $6$! ${n\choose 2}$ is the number of ways of choosing 2 elements from a set of $n$, without regard to order. So it will just give you the number of pairs $(i, j)$ - but that is exactly what you want, as in any pair $(i,j)$ one of the elements must be greater than the other.
H: What is $\lim\limits_{n→∞}(\frac{n-x}{n+x})^{n^2}$? What is $$\lim_{n\rightarrow\infty}\left(\frac{n-x}{n+x}\right)^{n^2},$$ where $x$ is a real number. Mathematica tells me the limit is $0$ when I put an exact value for $x$ in (Mathematica is inconclusive if I don't substitute for $x$), but using $f(n)=(n-x)^{n^2}$ and $g(n)=(n+x)^{n^2}$, then $$\lim_{n\rightarrow\infty}f(n)=\lim_{n\rightarrow\infty}g(n)=\infty,$$ and by L'Hopital's rule we have $$\lim_{n\rightarrow\infty}\frac{f(n)}{g(n)} = \lim_{n\rightarrow\infty}\frac{f'(n)}{g'(n)} = \lim_{n\rightarrow\infty}\frac{(s-z)^{-1+s^2} (s+z)^{1-s^2} (s+2 (s-z) \text{Log}[s-z])}{s+2 (s+z) \text{Log}[s+z]} = 1.$$ I'm not sure which to believe - Mathematica or L'Hopital. AI: I think this goes to zero for positive $x$. Consider the log of the expression: $$n^2 \log{\left(\frac{1-(x/n)}{1+(x/n)}\right)} = n^2 \left[ \log{\left(1-\frac{x}{n}\right)}-\log{\left(1+\frac{x}{n}\right)}\right]$$ For fixed $x>0$, the value of $x/n$ is small compared with $1$, so Taylor expand and get $$n^2 \left ( -\frac{x}{n} - \frac{x}{n}\right) = -2 n x$$ which clearly $\to -\infty$ as $n \to \infty$. Taking exponentials, $e^{-\infty} = 0$, loosely speaking. For $x<0$, this diverges. For $x=0$, the limit is $1$.
H: Simplifying $a(a-2) = b(b+2)$ I reduced a number theory problem to finding all ordered pairs $(a,b)$ that satisfy the equation $a(a-2) = b(b+2)$ in a certain range. After thinking about this for a while, I figured that either $a = b + 2$, $a = -b$, $a = b = 0$ or $a = 2 \text{ and } b = -2$. It is easy to prove that these values for $a$ and $b$ will all satisfy the equation, but how would I solve this equation had I not come up with these solutions? And how do I know I did not miss one? In short, can this equation be solved (more) rigorously? Edit: Silly of me I did not recognize that my last two solutions were already covered by my first two. AI: Add $1$ to both sides to get $$a^2 - 2a + 1 = b^2 + 2b + 1$$ i.e. $$(a-1)^2 = (b+1)^2.$$ Hence $a-1 = b+1$ or $a-1 = -b-1$ and these are all solutions.
H: What conditions do I need for this calculation to work? (Product rule differentiation of bilinear form) Let $b(\cdot,\cdot):X \times X \to \mathbb{R}$ be a bilinear mapping where $X$ is a Banach space. Consider functions $f:[0,T]\to X$ and $g:[0,T] \to X$. What assumptions do I need on $b$ so that this is valid: $$\lim_{h \to 0}\frac{b(f(t+h),g(t+h))-b(f(t),g(t))}{h} = \lim_{h \to 0}\frac{b(f(t+h),g(t+h))\pm b(f(t+h), g(t)) -b(f(t),g(t))}{h} = \lim_{h \to 0}\frac{b(f(t+h),g(t+h)-g(t))+b(f(t+h)-f(t),g(t))}{h}$$ $$=\lim_{h \to 0}\frac{b(f(t+h),g(t+h)-g(t))}{h}+\lim_{h \to 0}\frac{b(f(t+h)-f(t),g(t))}{h}=b(\lim_{h \to 0}f(t+h),\lim_{h \to 0}\frac{g(t+h)-g(t)}{h})+b(\lim_{h \to 0}\frac{f(t+h)-f(t)}{h},g(t))$$ $$=b(f(t), g'(t)) + b(f'(t), g(t))$$ Obviously $f$ and $g$ need to be differentiable in time. So say $f, g \in C^1([0,T];X)$. But not sure how to formulate the condition on $b$. It should be something like "$b$ is continuous".. AI: You need that the bilinear form is a bounded operator (i.e. continuous). If it is such, you can freely pass limits through.
H: Adaptation to Banach–Mazur theorem I'm trying to prove the following: for every normed linear space $X$, there exists an isometric isomorphism of $X$ into $C(K)$, where $K$ is a compact space. I know from the Banach–Mazur theorem that every separable Banach space satisfies this, so I tried to look at the proof of that theorem for an idea of how to solve this, but to prove that the map defined in the sketch of the proof is in fact an isometry, we have to use Hahn-Banach; since the space $X$ is not necessarily complete, I can't use this here and hence I'm stuck. Thanks for any help! AI: To begin with, the theorem should be read as follows: Theorem For every normed space $X$ there is an isometric embedding into $C(K)$ for some compact space $K$. Proof. Consider $K:=\operatorname{Ball}_{X^*}(0,1)$, the closed unit ball of the dual space, with the weak-$^*$ topology. By the Banach-Alaoglu theorem it is compact. By definition of the weak-$^*$ topology the map $$ J(x): K\to\mathbb{C}:f\mapsto f(x) $$ is continuous for each $x\in X$. So we have a well-defined map $$ J:X\to C(K):x\mapsto J(x) $$ Note that $$ \Vert J(x)\Vert=\sup\{|J(x)(f)|:f\in K\}=\sup\{|f(x)|:f\in \operatorname{Ball}_{X^*}(0,1)\}=\Vert x\Vert $$ for each $x\in X$. In the last step we use a corollary of the Hahn-Banach theorem. Thus $J$ is an isometric embedding. Note that the corollary of the Hahn-Banach theorem does not require completeness, so its usage was valid in the proof above.
H: Determinant of unspecified matrices Suppose A and B are $5\times 5$ matrices with $\det(A) = -1/3$ and $\det(B) = 6$, find the determinant of $ 2AB$. Solution: $$= \det(2AB) $$ $$= 2^5 \det(A)\det(B) $$ $$= (32)(-1/3)(6)$$ $$= -64$$ I understand how all of this works, except for where $2^5$ comes from, can anyone explain how this happens? AI: When you multiply a matrix by $2$, that multiplies all the rows of the matrix by $2$. The determinant is linear in each row, and the matrix $AB$ has five rows, so it multiplies the resulting determinant by $2^5$.
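A quick numerical sanity check of the $2^5$ factor, as a Python sketch (NumPy is assumed to be available; the random matrices are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

lhs = np.linalg.det(2 * A @ B)
rhs = (2 ** 5) * np.linalg.det(A) * np.linalg.det(B)
print(lhs, rhs)   # the two values agree up to floating-point rounding
```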
H: hyperbola curve formula in 3 dimensions Cartesian formula for 2d hyperbola curve is $x^2/a^2-y^2/b^2 = 1$. What is the formula for a 3d hyperbola curve? AI: What is a 3D hyperbola curve? How about $\frac {x^2}{a^2}-\frac {y^2}{b^2}=1, z=0$. If you want a curve in 3D, you need two equations, just like a line. I suspect this is not what you want, so please explain. Alternately, you might want $\frac {x^2}{a^2}-\frac {y^2+z^2}{b^2}=1$, which is a surface that rotates the 2D curve around the $x$ axis. Added in response to comment: Given your three points, you can solve for the $a,b,c$ that are the equation of the plane. Be careful: we have been using $a,b$ as parameters of the hyperbola and now you are reusing them. The $m$ in my previous comment corresponds to $\frac ba$ here, but my plane went through the $z$ axis while if $c \ne 0$ it will miss the $z$ axis by that much. A hyperbola with its axis parallel to $z$ that is in the plane $ax+by=0$ is $\frac {z^2}{d^2}-\frac {x^2+y^2}{e^2}=1,\ y=-\frac ba x$. To get it into the plane $ax+by=c$ we add $\frac c{\sqrt {1+(\frac ab)^2}}$ to $x$ and $\frac ab\frac c{\sqrt {1+(\frac ab)^2}}$ to $y$, giving $$\left\{(x',y',z)\left|\frac {z^2}{d^2}-\frac {x^2+y^2}{e^2}=1,y=-\frac ba x,x'=x+\frac c{\sqrt {1+(\frac ab)^2}},y'=y+\frac ab\frac c{\sqrt {1+(\frac ab)^2}}\right.\right\}$$
H: Why do no complete graphs have genus 7? My teacher mentioned this in class, but he rushed his explanation due to time constraints. Why is it that complete graphs ($K_n$) never have genus $7$? AI: It is a theorem of Ringel and Youngs that for all $n \geq 3$, the genus of the complete graph $K_n$ -- i.e., the minimal genus of a compact orientable surface into which $K_n$ embeds -- is $\lceil \frac{(n-3)(n-4)}{12} \rceil$. Granting this result, what you ask about is very straightforward: the given function is weakly increasing. For $n = 12$ it takes the value $6$. For $n = 13$ it takes the value $8$. Thus it never takes the value $7$ (the first of infinitely many values that it skips). Not being a graph theorist, I confess that I don't know the proof of the Ringel-Youngs Theorem, and the paper was not freely available to me on the internet. For all I know it may be possible to avoid this result and give an argument more specific to your precise question...but I will guess that it will probably be more fruitful to learn the proof of the general result. Ringel, G. and Youngs, J. W. T. "Solution of the Heawood Map-Coloring Problem." Proc. Nat. Acad. Sci. USA 60, 438-445, 1968.
H: show $\lim_{x\rightarrow\infty} x^n\exp(-x^2)=0$ I want to show for all $n\in\mathbb N$ $$\lim_{x\rightarrow\infty}\frac{x^{n}}{\exp(x^2)}=0$$ I am pretty sure that I have to use L'Hospital. I've tried induction: $n=1$: $$\lim_{x\rightarrow\infty}\frac x{\exp(x^2)}=\lim_{x\rightarrow\infty}\frac1{2x\exp(x^2)}=0$$ And for $n\rightarrow n+1$: $$\lim_{x\rightarrow\infty}\frac{x^{n+1}}{\exp(x^2)}=\lim_{x\rightarrow\infty}\frac{(n+1)x^n}{2x\exp(x^2)}$$ And now I am stuck. The term $2x$ really annoys me for my induction hypothesis. Any hints? AI: From where you left it: $$\lim_{x\to\infty}\frac{(n+1)\color{red}{x^n}}{2\color{red} xe^{x^2}}=\frac{n+1}2\lim_{x\to\infty}\frac{x^{n-1}}{e^{x^2}}\stackrel{\text{Inductive Hyp.}}=\frac{n+1}2\cdot 0=0$$
H: Differential equation with start condition and maximum capacity A house with 1000 people has 1 person infected with an unknown virus. The number of people who are newly infected is proportional to the product of the current number of healthy people and the current number of infected people. After 2 days, 5 people are infected. How many of them will be infected in a week? I have calculated this with a differential equation and I got 1000 people. Is that OK? Thanks for answers. My solution: MAX people: 1000; start: 1 infected; after 2 days: 5 infected. $P (7) = (1000* (\frac{2}{998} e^{10})*e^{5*7}) / ( 1 + (\frac{2}{998} e^{10}) * e^{5*7}) = 999.99 \approx 1000$ AI: Let $y$ be the number infected at time $t$, where $y(0)=1$. The model we are asked to use is $$y'=ky(1000-y).$$ This is a separable differential equation, which can be rewritten as $$\frac{dy}{y(1000-y)}=k\,dt,$$ and then, using partial fractions, as $$\frac{1}{1000}\left(\frac{1}{y}+\frac{1}{1000-y}\right)\,dy=k\,dt$$ and then as $$\left(\frac{1}{y}+\frac{1}{1000-y}\right)\,dy=1000k\,dt.$$ Integrate. We get, since $y$ will stay between $1$ and $1000$, $$\ln\left(\frac{y}{1000-y}\right)=1000kt+C.\tag{1}$$ Since $y(0)=1$, we get $C=\ln(1/999)=-\ln 999$. We have $y(2)=5$. Substituting in our equation we get $$\ln(5/995)=2000k -\ln 999,$$ and therefore $$k=\frac{1}{2000}\ln(999/199).$$ Finally, take the exponential of both sides of (1). We get $$\frac{y}{1000-y}=\frac{e^{1000kt}}{999},$$ where $k$ is known. Set $t=7$. With some manipulation, we get a linear equation for $y(7)$. Solve. Remark: The model should not be taken entirely seriously. One can expect at best a rough fit with reality.
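Evaluating the closed form derived in the answer numerically, as a Python sketch (the function name `infected` is made up; the formula is just equation (1) rearranged with the constants found above):

```python
import math

k1000 = 0.5 * math.log(999 / 199)      # this is 1000*k from the answer

def infected(t):
    # from ln(y / (1000 - y)) = 1000*k*t - ln(999), so y/(1000-y) = e^{1000 k t}/999
    r = math.exp(k1000 * t) / 999
    return 1000 * r / (1 + r)

print(infected(0))   # reproduces y(0) = 1
print(infected(2))   # reproduces y(2) = 5
print(infected(7))   # about 221 infected after a week, not 1000
```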
H: Spivak Calculus chapter 7 theorem 9 I am working through Spivak Calculus chapter 7 theorem 9. There is one statement that I can't quite understand. The theorem states: If $n$ is odd, then any equation $$ x^n+a_{n-1}x^{n-1} +\cdots+a_0 = 0 $$ has a root. proof: we would like to prove that $f$ is sometimes positive and sometimes negative. The intuitive idea is that for large $|x|$, the function is very much like $g(x) = x^n$ and, since $n$ is odd, this function is positive for large positive $x$ and negative for large negative $x$. A little algebra is all we need to make this intuitive idea work. $$ f(x) = x^n+a_{n-1}x^{n-1} +\cdots+a_0 = x^n \left(1+\frac{a_{n-1}}{x}+\cdots+\frac{a_0}{x^n}\right) $$ Note that $$ \left|\frac{a_{n-1}}{x}+\frac{a_{n-2}}{x^2}+\cdots+\frac{a_0}{x^n} \right|\le \frac{|a_{n-1}|}{|x|}+\frac{|a_{n-2}|}{|x^2|}+\cdots+\frac{|a_{0}|}{|x^n|} $$ Consequently if we choose $x$ satisfying $$ |x|>1,2n|a_{n-1}|,\ldots,2n|a_0| \tag{*} $$ I am not sure how he comes to $(*)$. Thanks in advance. AI: You need to keep reading on the same page. He wants to make $$ \left|\frac{a_{n-1}}{x}+\frac{a_{n-2}}{x^2}+\cdots+\frac{a_{0}}{x^n} \right|\le \frac12, $$ in order to guarantee that the term multiplying $x^n$ is positive (one plus a number of absolute value less than a half will be positive). So he chooses $x$ such that $|a_{n-k}|/|x^k|\leq 1/(2n)$ (for all $k$), and then the sum will be less than $1/2$. He needs $|x|\geq1$ so that $|x^k|\geq|x|$.
H: Generalized eigenspace decomposition of vector space I was curious whether the direct sum of generalized eigenspaces in a finite dimensional vector space is the largest decomposition of this space into vector spaces that are invariant under the endomorphism, or are there counterexamples available? AI: No. If you take the sum of two generalized eigenspaces, it will still be an invariant subspace, since generalized eigenspaces correspond to the blocks in the Jordan decomposition. Even in finite dimension, the number of invariant subspaces can be infinite. To see the most basic example, consider the identity endomorphism: it obviously has infinitely many invariant subspaces. Or consider $$ V=\begin{bmatrix}1&0&0\\0&0&0\\0&0&0\end{bmatrix} $$ as an endomorphism of $\mathbb R^3$. Then, for each $t\in[0,1]$, the subspace $$ X_t=\{(0,ct,c(1-t)):\ c\in\mathbb R\} $$ is invariant for $V$.
H: Constructing a primary decomposition for principal ideals in a Krull domain Let $A$ be a Krull ring and $\left\{R_l\right\}_{l \in L}$ a defining family of DVRs. Let $a \in A, a \neq 0$. Then $a R_l \neq R_l$ only for a finite number of indices $l$ (otherwise the image of $a$ under infinitely many valuations would be non-zero, contradiction). Let $R_1,\dots, R_t$ be the DVRs such that $a R_i \neq R_i$. Define $\mathfrak q_i = aR_i\cap A$ and $\mathfrak p_i=rad(R_i) \cap A$. Then $\mathfrak q_i$ is primary belonging to $\mathfrak p_i$. Question: How can we see that $aA=\mathfrak q_1\cap \cdots \cap \mathfrak q_t$? Remark: Certainly, $aA \subseteq \mathfrak q_1 \cap \cdots \cap \mathfrak q_t$, since $aA \subset \mathfrak q_i$ for every $i$. How about the other direction? AI: Prove that $$aR_1\cap\cdots\cap aR_t\cap A=aA.$$ "$\supseteq$" is clear. "$\subseteq$" is easy: take $x\in aR_1\cap\cdots\cap aR_t\cap A$. Then $a^{-1}x\in R_1\cap\cdots\cap R_t$. Since $aR_l=R_l$ for all $l\neq 1,\dots,t$ it follows that $a^{-1}\in R_l$, so $a^{-1}x\in R_l$, and that's all.
H: What is wrong with my approach for orthonormal procedure? I am given a matrix $$A=\left(\begin{matrix}1&1&0&1\\1&2&1&0\\0&1&3&-2\\1&0&-2&4\end{matrix}\right)$$ which defines a scalar product in $\mathbb{R}^4$ as $\langle x,y\rangle=x^TAy$. I have to find an orthonormal system. I try to orthonormalize the canonical basis using the Gram–Schmidt procedure, so: $$v_1=e_1\\v_2=\frac{e_2-\langle e_2,v_1\rangle v_1}{||e_2-\langle e_2,v_1\rangle v_1||}\\v_3=\frac{e_3-\langle e_3,v_2\rangle v_2-\langle e_3,v_1\rangle v_1}{||e_3-\langle e_3,v_2\rangle v_2-\langle e_3,v_1\rangle v_1||}$$ and so on. But when I do this I get $\langle v_2,v_3\rangle\neq0$, so this method must be wrong for arbitrary scalar products and I have to do some adjustments, but I can't see them. Hope you can help me, thanks in advance! AI: You should get \begin{align*} v_1 &= e_1,\\ v_2 &= (-1,1,0,0)^T,\\ v_3 &= \frac1{\sqrt{2}}(1,-1,1,0)^T. \end{align*} Note that, when you normalise a vector $w$, you should not use the Euclidean 2-norm, but calculate the norm as $\|w\|=\sqrt{\langle w,w\rangle}=\sqrt{w^TAw}$.
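A small NumPy sketch of Gram–Schmidt carried out entirely with the inner product $\langle x,y\rangle = x^T A y$, which reproduces the answer's vectors (NumPy is assumed to be available; the classical projection-subtraction form is used for clarity).

```python
import numpy as np

A = np.array([[1, 1, 0, 1],
              [1, 2, 1, 0],
              [0, 1, 3, -2],
              [1, 0, -2, 4]], dtype=float)

def inner(x, y):
    return x @ A @ y                      # <x, y> = x^T A y

def gram_schmidt(vectors):
    basis = []
    for w in vectors:
        for v in basis:
            w = w - inner(w, v) * v       # v is already normalised w.r.t. <.,.>
        basis.append(w / np.sqrt(inner(w, w)))
    return basis

v = gram_schmidt(list(np.eye(4)))
for vec in v:
    print(np.round(vec, 6))

# the Gram matrix of the result should be the identity
print(np.round([[inner(x, y) for y in v] for x in v], 6))
```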
H: Integral $\frac{\sqrt{e}}{\sqrt{2\pi}}\int^\infty_{-\infty}{e^{-1/2(x-1)^2}dx}$ gives $\sqrt{e}$. How? To calculate the expectation of $e^x$ for a standard normal distribution I eventually get, via exponential simplification: $$\frac{\sqrt{e}}{\sqrt{2\pi}}\int^\infty_{-\infty}{e^{-1/2(x-1)^2}dx}$$ When I plug this into Wolfram Alpha I get $\sqrt e$ as the result. I'd like to know the integration step(s) or other means I could use to obtain this result on my own from the point I stopped. I am assuming that Wolfram Alpha "knew" an analytical solution since it presents $\sqrt e$ as a solution as well as the numerical value of $\sqrt e $. Thanks in advance! AI: This is because $$\int_{\Bbb R} e^{-x^2}dx=\sqrt \pi$$ Note that your shift $x\mapsto x-1$ doesn't change the value of integral, while $x\mapsto \frac{x}{\sqrt 2}$ multiplies it by $\sqrt 2$, giving the desired result, that is, $$\int_{\Bbb R} e^{-\frac 1 2(x-1)^2}dx=\sqrt {2\pi}$$
H: Is there a model of ZFC such that $\aleph_\omega^{\aleph_0}=\aleph_{\omega+2}$? By Easton's theorem, it is consistent with ZFC that $2^{\aleph_0}=\aleph_{\omega+1}$ holds, and if we assume $2^{\aleph_0}=\aleph_{\omega+1}$ then $$\aleph_{\omega+1}\le 2^{\aleph_0}\le \aleph_\omega^{\aleph_0}\le \aleph_{\omega+1}^{\aleph_0} =(2^{\aleph_0})^{\aleph_0}=2^{\aleph_0}=\aleph_{\omega+1}$$ So it is consistent with ZFC that $\aleph_\omega^{\aleph_0}=\aleph_{\omega+1}$, and I conjecture that $\aleph_\omega^{\aleph_0}=\aleph_{\omega+2}$ is consistent with ZFC. But how to prove it? Thanks for any help. AI: Yes. Easily (if one is familiar with forcing). Start with a model of $\sf GCH$. Now force that for every $\alpha\in\omega\cup\{\omega+1\}$: $$2^{\aleph_\alpha}=\aleph_{\omega+2}$$ Then we have the wanted equality, by the same considerations as in your question. (In fact, if one starts with $\sf GCH$ then we only need to force $2^{\aleph_0}=\aleph_{\omega+2}$, the rest follows.)
H: If I have a holomorphic function on the unit disc, do I know anything about the radius of convergence of its series expansion about zero? I'm looking at a proof that assumes only that $f : \mathbb{D} \rightarrow \mathbb{D}$ is holomorphic with $f(0) = 0$. The first step in the proof is to "expand $f$ in a power series centered at $0$ and convergent in all of $\mathbb{D}$." In other words, $f(z) = a_0 + a_1 z + a_2 z^2 + ...$ inside the disc. Is this a valid step? How do I know that the function's series expansion is valid in the whole disc? AI: Following Tunococ's link, I found the following: "the fact that the radius of convergence is always the distance from the center a to the nearest singularity; if there are no singularities (i.e., if ƒ is an entire function), then the radius of convergence is infinite. Strictly speaking, this is not a corollary of the theorem but rather a by-product of the proof." Since $f$ is holomorphic on all of $\mathbb{D}$, its nearest singularity is at distance at least $1$ from the center $0$, so the radius of convergence is at least $1$ and the series expansion is valid on the whole disc.
H: Is this correct? How many distinct $(a,b,c)$ satisfy $a^2+bc=b^2+ac$ where $a,b,c$ are integers between $1$ and $5$ inclusive. $a(a-c)=b(b-c)$, $a/b=(b-c)/(a-c)=(a+b-c)/(a+b-c)=1$, so $a=b$, so the answer is 25 solutions. My only concern is that there seem to be no restrictions on the variable $c$. Is this ok? AI: Note that from your equation you get $a^2-b^2=ac-bc$ which is $$(a-b)(a+b)=(a-b)c$$ One option is thus $a=b$. Now assume $a\neq b$. Then you get $a+b=c$, and within the allowed range $1\le a,b,c\le 5$ this still has solutions (for instance $(a,b,c)=(1,2,3)$), so the count of $25$ misses some triples.
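Because the ranges are tiny, a brute-force count (my own sketch) settles the question; it reports $33$ triples, the $25$ with $a=b$ plus $8$ more with $a\neq b$ and $a+b=c$:

```python
from itertools import product

count = 0
for a, b, c in product(range(1, 6), repeat=3):
    if a * a + b * c == b * b + a * c:
        count += 1
print(count)   # 33 = 25 (a == b) + 8 (a != b and a + b == c)
```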
H: Convexity of expected value I am trying to understand if the expected value of a variable is convex in that variable or not. I know that expectation is a linear operator, so must be convex. But I do not see why it does not depend on what characteristics the probability vector $p(x)$ has. In other words, why $E(x)= x^Tp(x)$ is convex for any $p(.)$ such that $p\ge0, p(x)^T1=1$? Is $p$ constant wrt $x$? Confused. AI: The expected value does not depend on $x$ as we integrate out $x$. So, your question is not meaningful.
H: Determine invariant subspaces Imagine that a matrix of an endomorphism has the characteristic polynomial $(\lambda-2)^2(\lambda-3)$. Now I was wondering whether all invariant subspaces can be determined by $0,V$ and $\ker(A-2)^2, \ker(A-2), \ker(A-3)$? Or how do I find them? AI: No, you miss a few invariant subspaces. One in the non-diagonalizable case. Infinitely many (assuming the field is infinite) in the diagonalizable case. So $A$ is a $3\times 3$ matrix over a field $K$. By Cayley-Hamilton, $(A-2I)^2(A-3I)=0$ and $K^3=\ker (A-2I)^2\oplus \ker (A-3I)$. Note that $\{0\}\subsetneq \ker (A-2I)\subseteq \ker (A-2I)^2$, with $\dim \ker(A-2I)^2=2$ and $\dim\ker(A-3I)=1$. Let $T$ be the linear operator whose matrix is $A$ in the canonical basis. Of course it has the same characteristic polynomial and eigenvalues/spaces as $A$. The invariant subspaces of dimension $0$ and $3$ are $\{0\}$ and $K^3$. It remains to determine those of dimension $1$ and $2$. Let $F$ be such an invariant subspace. In a basis starting with a basis of $F$, the matrix of $T$ becomes block upper triangular. In particular, the upper left block is the matrix of $T_{|F}$ whose characteristic polynomial divides that of $T$. Case 1: $A$ is diagonalizable, i.e. $\dim \ker (A-2I)=2$ and $\ker (A-2I)=\ker (A-2I)^2$. If $\dim F=1$, then $F$ is the span of an eigenvector of $T$. So it is either $\ker (A-3I)$, or any one-dimensional subspace of $\ker (A-2I)$. If $\dim F=2$, then the characteristic polynomial of $T_{|F}$ is either $(\lambda-2)^2$ or $(\lambda-2)(\lambda-3)$. In the first case, $F=\ker (A-2I)^2=\ker (A-2I)$. In the second case, $F=Kv\oplus \ker(A-3I)$ for any eigenvector $v\in\ker (A-2I)$. Case 2: $A$ is not diagonalizable, i.e. $\dim\ker(A-2I)=1$. If $\dim F=1$, now $F=\ker(A-3I)$ or $\ker(A-2I)$. If $\dim F=2$ and the characteristic polynomial of $T_{|F}$ is $(\lambda-2)(\lambda-3)$, then $F=\ker(A-2I)\oplus\ker(A-3I)$. And if it is $(\lambda-2)^2$, then $F=\ker (A-2I)^2$. Remark: an easy case, in any dimension, is when $A$ is diagonalizable with pairwise distinct eigenvalues. Then the invariant subspaces are the $2^n$ direct sums built with the eigenspaces.
H: How can a parallel edge be identified? An edge will have the same vertices as another edge that it is parallel to, so how can it be uniquely described? AI: For an undirected multigraph, we simply have a multiset of edges, not a set. Thus, if $V=\{a,b,c\}$, we might have $$E=\{[(a,b),2],[(a,c),3]\},$$ which specifies that there are two edges connecting vertices $a$ and $b$, and three edges connecting vertices $a$ and $c$. We still can't technically distinguish "individual edges" between the same vertices; all we know is how many edges there are between any two vertices. However, for many purposes, that's all that's really necessary. If you want to point to individual edges, you could take the approach typically used for directed multigraphs: we take a quadruple $(V,E,s,t)$, where $V$ is the set of vertices, $E$ is the set of edges, $s:E\to V$ is the function taking an edge to the vertex it starts at (its "source") and $t:E\to V$ is the function taking an edge to the vertex it ends at (its "target"). Then edges $e_1,e_2\in E$ are parallel with the same direction if $s(e_1)=s(e_2)$ and $t(e_1)=t(e_2)$, and parallel with opposite directions if $s(e_1)=t(e_2)$ and $t(e_1)=s(e_2)$.
H: The number of odd-degree vertices is even in a finite graph? If you add up the degrees of all the vertices of a finite graph then the result will be twice the number of edges, and consequently the number of odd-degree vertices is even. What is the name of this theorem? Why does it require that the graph be undirected? AI: It is known as the handshaking lemma because if a bunch of people shake hands and you add up the number of hands each person shook, you get an even number. Each handshake adds two to the total. It does not require that each vertex be of odd degree, but it shows there are an even number of vertices of odd degree. For a directed graph, this is still true if you add all the indegrees and outdegrees for the same reason, but the sum of indegrees need not be even. Think of a directed graph with two vertices and one directed edge connecting them.
H: $f:\mathbb{R}^n\to\mathbb{R}^n,f(x)=x\|x\|^2$ Then $f:\mathbb{R}^n\to\mathbb{R}^n,f(x)=x\|x\|^2$ Then $1.$ $(Df)(0)=0$ $2.$ $(Df)(x)=0\forall x$ $3.$ $f$ is one one $4$. $f$ has an inverse. let, $x=(x_1,x_2,x_3,\dots,x_n)$, $\|x\|^2=(x_1^2+\dots,+x_n^2)$ ,Then $Df=3\|x\|^2$ am I right? Then $1$ is true, $2,3,4$ are false. AI: Your $f$ maps $(x_1,\dots,x_n)$ to $$((x\cdot x)x_1,\dots,(x\cdot x)x_n)$$ Let $f_k(x)=(x\cdot x)x_k$. Then $$\nabla f_k(x)=\nabla (x\cdot x)x_k+(x\cdot x)\nabla (x_k)$$ $$\nabla f_k(x)=2x_k x+(x\cdot x)e_{k}$$ Thus $$M_{(Df)(x)}=2 \begin{pmatrix} x_1x_1&x_1x_2&\cdots&x_1x_n\\x_2x_1&x_2x_2&\cdots&x_2x_n\\ \vdots &\vdots&\ddots&\vdots\\x_nx_1&x_nx_2&\cdots&x_nx_n\end{pmatrix}+\Vert x\Vert^2{\bf Id}$$ Note that your functions is of the form $f(x)=\lambda(x)\operatorname{id}(x)$ with $\lambda:\Bbb R^n\to \Bbb R$. We know that $\lambda(x)$ is not injective, but we know that $\operatorname{id}(x)$ is. Note $f$ is injective on every $\{\Vert x\Vert =r\}$, so we must look at $x,y$ with $\Vert x\Vert \neq \Vert y \Vert$, assume $\Vert x\Vert \geq \Vert y \Vert$. Since the norm is always non-negative, $f$ may only fail to be injective when each coordinate of $x$ has the same sign as the corresponding coordinate of $y$. Could you look at what happens in said case? Aim to show $x\neq y\implies f(x)\neq f(y)$.
H: Finding a direct basis for tangent space of piece with boundary of an oriented manifold. I have the following definition (from Hubbard's vector calculus book) for an oriented boundary of piece with boundary of an oriented manifold: Let $M$ be a $k$ dimensional manifold oriented by $\Omega$ and $P$ a piece with boundary of $M$. Let $x$ be a point of the smooth boundary $\partial^{ \ S}_MP$ and let $\vec{V}_{\text{out}}\in T_xM$ be an outward pointing bector. Then the function $\Omega^\partial : \mathcal{B}(T_x\partial P)\to\left\{+1,-1\right\}$ given by $$ \Omega_x^\partial(\vec{v}_1,...,\vec{v}_{k-1}) = \Omega_x(\vec{V}_{\text{out}},\vec{v}_1,...,\vec{v}_{k-1}) $$ defines an orientation on the smooth boundary $\partial_M^{ \ S}P,$ where $\vec{v}_1,...,\vec{v}_{k-1}$ is an ordered basis of $T_x\partial_M^{ \ S}P$. I'm working on a problem that asks me to find a basis for the $T_x\partial P$ that is direct to a certain orientation (given by an elementary 3-form). My question is this: When I choose a basis for $T_x\partial P$, does this basis also need to lie in $T_xM$? Also, are there restrictions to how I should choose $\vec{V}_{\text{out}}$? In other words, does $\vec{V}_{\text{out}}$ need only lie in $T_xM$ and not in $T_x\partial P$? AI: Yes, $T_x\partial P$ is a hyperplane in $T_xM$. $\vec V_{\text{out}}$ is by definition the outward-pointing normal to $\partial P$. This means that if $\vec v_1,\dots,\vec v_{k-1}$ are chosen as a basis for $T_x\partial P$, then $\vec V_{\text{out}},\vec v_1,\dots,\vec v_{k-1}$ will give you a basis for $T_xM$. The whole point of this orientation stuff is that when you pick $\vec v_1,\dots,\vec v_{k-1}$ so that $\vec V_{\text{out}},\vec v_1,\dots,\vec v_{k-1}$ gives you a positively-oriented basis for $T_xM$, then you win: You have achieved a positively-oriented basis for $T_x\partial P$.
H: Curious basic Euclidean geometry relation I was plotting some equations and I got with the curious relation If we build the triange Such that it follows the following relation: $$AD=a$$ $$DB=b$$ $$AC=ak$$ $$CB=bk$$ Then when we vary $k$ from the smallest possible value to the greatest possible one according to triangle inequality, the point C plots a circle around the midpoint of the maximum and minimum solutions, with it's diameter being the segment between them. In a more formal way If $C$ is the center of the hypotetic circle, $$\frac{AD}{DB}=\frac{AE}{EB}$$ and according to triangle inequality $$\frac{AD}{DB}=\frac{FA}{FB}$$ Prove that $$CD=CE$$ That is, prove that the point E in fact lies in the hypotetic circle. This came out of curiosity when i plotted the point C as k varies, I was expecting a more weird shape but I got with a circle!.I have empirically verified it in geogebra and such, but i have not been able to prove it.The closest i have gotten is that $CA=\frac{AD^2}{DB-AD}$, imposing that $DB>AD$. If they are equal, a division by cero occurs, but if $DB<AD$, the circles moves to the other side. AI: Here's a brute-force coordinate approach ... Edit. My point names aren't consistent with OP's (which vary between his diagrams anyway :). In case of edits to the question, I'll explain the roles of my points here. We take point $D$ on segment $AB$, with $a := |AD|$ and $b := |BD|$. A variable point $P$ is defined by $|AP|=k |AD|$ and $|BP|=k|BD|$. Note that, if $a=b$ the locus of $P$ is the perpendicular bisector of $AB$ (which happens to be the limiting case of circles), so we take $a > b > 0$. Coordinatizing with points $A( -a, 0 )$, $B( b, 0 )$, $D( 0, 0 )$, $P(x,y)$, we have $$\begin{align} |AP| &= k |AD|: \qquad ( x + a )^2 + y^2 = a^2 k^2 \\ |BP| &= k |BD|: \qquad ( x - b )^2 + y^2 = b^2 k^2 \end{align}$$ Equating $k^2$ obtained from both equations gives $$\frac{(x+a)^2 + y^2}{a^2}=k^2=\frac{(x-b)^2+y^2}{b^2}$$ so that $$b^2 (x+a)^2 + b^2 y^2 = a^2 ( x-b)^2 + a^2 y^2$$ $$( a^2 - b^2 )x^2 - 2 x a b ( a + b ) + ( a^2 - b^2 ) y^2 = 0$$ $$( a - b ) x^2 - 2 x a b + ( a - b ) y^2 = 0$$ Since the (non-zero!) coefficients of $x^2$ and $y^2$ match (and there's no $xy$ term), we definitely have a circle; without a constant term, we see that the circle passes through the origin (point $D$). For more information, we put the equation into standard form: $$\left( x - \frac{a b}{a-b} \right)^2 + y^2 = \left(\frac{ab}{a-b}\right)^2$$ So, the circle has center $C = \left(\frac{ab}{a-b},0\right)$ and radius $|CD| = \frac{ab}{a-b}$, and we can verify your conjecture: $$|CA| = \frac{ab}{a-b}+a = \frac{a^2}{a-b} \qquad |CB| = \frac{ab}{a-b}-b=\frac{b^2}{a-b}$$
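A numerical spot check of the coordinate computation above (my own sketch; the values $a=3$, $b=1$ and the sampled $k$ are arbitrary):

```python
import numpy as np

a, b = 3.0, 1.0                           # |AD| = a and |BD| = b, with a > b > 0
center = np.array([a * b / (a - b), 0.0]) # predicted centre of the locus circle
radius = a * b / (a - b)                  # predicted radius

d = a + b                                 # distance between A(-a, 0) and B(b, 0)
for k in np.linspace(1.05, 1.9, 8):       # k restricted by the triangle inequality
    r1, r2 = k * a, k * b                 # |AP| and |BP|
    ell = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from A to the foot of P on AB
    y2 = r1**2 - ell**2
    if y2 <= 0:
        continue                          # no intersection for this k
    P = np.array([-a + ell, np.sqrt(y2)]) # upper intersection of the two circles
    print(round(np.linalg.norm(P - center) - radius, 12))   # ~0 each time
```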
H: Proof of uniqueness of the bounded linear transformation extended in the Bounded Linear Transformation theorem B.L.T Theorem (from Reed/Simon): Suppose $T$ is a bounded linear transformation from a normed linear space $\langle V_1, \|\cdot\|\rangle$ to a complete normed linear space $\langle V_2, \|\cdot\|\rangle$. Then $T$ can be uniquely extended to a bounded linear transformation (with the same bound), $\widetilde{T}$, from the completion of $V_1$ to $\langle V_2, \|\cdot\|\rangle$. The proof for $\widetilde{T}$ being bounded was given and very straightforward, and proving that it was linear was pretty simple as well. I have been having issues with proving that $\widetilde{T}$ is unique, however, despite it probably being easy. I tried supposing towards a contradiction and using that $V_1$ is dense in its completion, $\tilde{V_1}$, the extending transformations must agree on $V_1$, the extending transformations must both have the same bound as $T$, and since this implies that the extending transformations must have different bounds to be different themselves, this proves that $\widetilde{T}$ is unique. I don't think that this reasoning is right since I think that a transformation can act differently on some subset of $\tilde{V_1}\setminus{V_1}$ without changing the bound, and my only other idea was to use that both of these spaces are $T_1$, so if a sequence converges, it must converge to a unique point, and since $T$ is bounded and therefore continuous, and any extensions must be bounded and therefore continuous, we'll have sequences being mapped to their limits, and since these limits are unique, we will only yield one extension $\widetilde{T}$ that works for all of $\tilde{V_1}$. Again, I think that this reasoning is missing something. Any insights, whether it be a nudge in the right direction or full proofs, are very welcome! Thanks in advance. AI: Let $\{x_n\}$ be a Cauchy-Sequence in $V_1$. Then $\{T(x_n)\}$ is a Cauchy-Sequence in $V_2$. (This follows from the boundedness of $T$) Then the natural way to extend $T$ is using the limit of Cauchy-Sequence in $V_2$. (which is possible by completeness of $V_2$) For the proof of uniqueness, suppose there are two extensions $T_1$ and $T_2$, and use the fact that $T_1-T_2$ is bounded linear. (hence continuous) Apply to the Cauchy-Sequence $\{x_n\}$. Then for $x^*=\lim x_n \in \bar{V_1}$, $(T_1-T_2)(x_n) = 0$ for all $n$. Thus $T_1(x^*) = T_2(x^*)$ by continuity.
H: Two questions about Euler's number $e$ I am on derivatives at the moment and I just bumped into this number $e$, "Euler's number". I am told that this number is special especially when I take the derivative of $e^x$, because its slope at any point is 1. Also it is an irrational ($2.71828\ldots$) number that never ends, like $\pi$. So I have two questions, I can't understand What is so special about this fact that it's slope is always 1? Where do we humans use this number that is so useful, how did Mr Euler come up with this number? and how come this number is a constant? where can we find this number in nature? AI: You don't take the derivative of a constant. You could, but it's zero. What you should be talking about is the exponential function, $ e^x $ commonly denoted by $ \exp(\cdot ) $. Its derivative at any point is equal to its value, i.e. $ \frac{d}{dx} e^x \mid_{x = a} = e^a $. That is to say, the slope of the function is equal to its value for all values of $ x $. As for how to arrive at it, it depends entirely on definition. There are numerous ways to define $ e $, the exponential function, or the natural logarithm. One common definition is to define $$ \ln x := \int\limits_1^x \frac{1}{t} \ dt $$ From here, you can define $ e $ as the unique positive real $x$ such that $ \ln x = 1 $ and arrive at it numerically. Another common definition is $ e = \lim\limits_{n \to \infty}\left(1 + \frac{1}{n}\right)^n $, although in my opinion it is easier to derive properties from the former definition.
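A tiny numerical illustration of the limit definition mentioned at the end (my own addition):

```python
import math

for n in [1, 10, 100, 10_000, 1_000_000]:
    approx = (1 + 1 / n) ** n
    print(f"n = {n:>9}  (1 + 1/n)^n = {approx:.8f}  error = {math.e - approx:.2e}")
# The values approach e = 2.71828182..., though convergence is slow (error roughly e/(2n)).
```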
H: How to prove a linear map to be the trace function,3x Let $M_n(F)$ be the matrix space on the field $F$, $f$ be the linear map from $M_n(F)$ to $F$ such that $f(I)=n$, where $I$ is the identity matrix. Furthermore, $f(AB)=f(BA)$ for any $A,B\in M_n(F)$. Show that $f=tr$, where $tr(A)=\sum_{i=1}^n a_{ii}$ for $A=(a_{ij})$. I do not have any idea.... AI: It is probably already on this forum, this is well known. Denote by $E_{i,j}$ the $n\times n$ matrix with a $1$ in position $(i,j)$ and zero elsewhere. For $i,j$ in $\{1,\ldots,n\}$ we have $f(E_{i,i}) = f(E_{i,j}E_{j,i}) = f(E_{j,i}E_{i,j}) = f (E_{j,j})$, so all the $f(E_{i,i})$ are equal; since $n=f(I)=\sum_{i=1}^n f(E_{i,i})$, each $f(E_{i,i})=1$. If $i\neq j$, we have $f(E_{i,j}) = f (E_{i,j}E_{j,j}) = f (E_{j,j}E_{i,j}) = f (0) = 0$. We conclude writing $A= \sum_{i,j=1}^n a_{i,j}E_{i,j}$.
H: Problem from Roman: a lower bound for trace of |T|^2 I'm working on a problem from Stephen Roman's Linear Algebra text, #20 on p. 235: Suppose $\tau \in \mathcal{L}(\mathbb{C}^n)$ and let the characteristic polynomial $\chi_{\tau}(x)$ have roots $\lambda_1, \dotsc, \lambda_n$ (with multiplicity). Show that $$\sum_{i=1}^n|\lambda_i|^2\leq \text{Tr}(\tau^*\tau),$$with equality holding iff $\tau$ is normal. The last part is clear, since if $\tau$ is normal, then there is an ON basis $B$ diagonalizing $\tau$ and then $[\tau^* \tau]_B$ is diagonal with entries $|\lambda_i|^2$. I'm trying to solve the first part. I note that $\text{Tr}(\tau^*\tau)$ is $\sum\limits_{i,j}|a_{ij}|^2$, if $(a_{ij})=[\tau]_B$ for $B$ an ON basis. I am also trying to exploit that $\tau^*\tau\geq0$. I'm stuck, though. Any advice? AI: Hint: Matrices over $\mathbb{C}$ can be unitarily triangularised. So, you may assume that the matrix of $\tau$ is lower triangular.
H: If a series is absolutely convergent, can the series be regrouped, changing the order of its terms? I am just thinking about why this is true. Can I change it to Q1. If a series is convergent then the series can be regrouped without changing the order of terms. For example the sum of $(-1)^n$ is an alternating sequence and it is divergent, so I can't regroup them? Q2. Can I claim that a convergent, non-alternating series is absolutely convergent? As there is no difference after the terms become absolute values, it should still be convergent after taking absolute values of the terms? Q3. What does conditionally convergent mean? If a sequence is either convergent or absolutely convergent then it is conditionally convergent? AI: Answers: An absolutely convergent series can be summed in any fashion. The value of the sum is not affected by order. Your supposition about $(-1)^n$ is completely correct. Based on regrouping and rearranging, you could assign a wealth of values to the series with $(-1)^n$ as the summand. Even a conditionally convergent series can be rearranged to give any value. A convergent series whose terms all have the same sign is also absolutely convergent, since you can either factor out a negative (if the series is made of only negative terms) or leave it as is and it will be absolutely convergent. It does not follow, however, that any convergent series is also absolutely convergent. Case in point: $\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = \log(2)$, but the harmonic series diverges. Conditionally convergent means that a series converges as is, but does not converge absolutely. Absolute convergence is much stronger than conditional convergence.
H: Determinant of $4\times4$ Matrix I tried to solve for a $4 \times 4$ matrix, but I'm unsure if I did this properly, can anyone tell me if I did this correct? Or if there were any mistakes where at? Also, I know this is an inefficient method for finding the determinant, however I want to get practice with solving like so: $$A= \begin{bmatrix} 2 & 4 & 0 & 1 \\ 0 & 8 & 0 & 2 \\ 0 & 3 & 0 & 5 \\ 1 & 2 & 1 & 1 \end{bmatrix} $$ $$\begin{align*} \det(A)&=2 \begin{vmatrix} 8 & 0 & 2 \\ 3 & 0 & 5 \\ 2 & 1 & 1 \end{vmatrix} -4 \begin{vmatrix} 0 & 0 & 2 \\ 0 & 0 & 5 \\ 1 & 1 & 1 \end{vmatrix} +0 -1 \begin{vmatrix} 0 & 8 & 0 \\ 0 & 3 & 0 \\ 1 & 2 & 1 \end{vmatrix}\\[0.3in] &=2\left(8 \begin{vmatrix} 0 & 5 \\ 1 & 1 \end{vmatrix}-0+2 \begin{vmatrix} 3 & 0\\ 2 & 1 \end{vmatrix}\right)\\[0.1in] &\quad{}-4\left(0-0+2 \begin{vmatrix} 0 & 0\\ 1 & 1 \end{vmatrix}\right)\\[0.1in] &\quad{}+0\\[0.1in] &\quad{}-1\left(0-8 \begin{vmatrix} 0 & 0\\ 1 & 1 \end{vmatrix}+0\right)\\[0.3in] &=2(8(0-5)-0+2(3-0))\\[0.1in] &\quad{}-4(0-0+2(0))\\[0.1in] &\quad{}+0\\[0.1in] &\quad{}-1(0-8(0)+0)\\[0.3in] &= 2(8(-5)-0+2(3))\\[0.3in] &=2(-45+6)\\[0.3in] &=2(-39)\\[0.3in] &=-78 \end{align*}$$ Sorry for the long post, I tried to make the readability easy for everyone. AI: As pointed out by Amzoti in the comment: your calculations are all correct, save for the glitch at the end, when you erroneously computed $8(-5) = -45$, at the very end, which results in your final calculation being off by $2(-5) = -10$, giving you $-78$ instead of $-68$. Correct that, and you're good to go! But the "hard part" was all perfectly correct. (I double checked the rest of your work, too, and you did just fine.) so if your point was to "practice" finding the determinant of a matrix by expanding along the first row, to get the process right, you did the key parts well. ADDED: If you've learned how elementary row operations alter the determinant of the matrix on which you're operating, doing so can greatly simplify the computation of the determinant of a matrix! See, e.g. Java Man's comment: if you had performed the following elementary row operations $-2R_4 + R_1 \to R_1$, you could have expanded along the first column, greatly simplifying the process. If you haven't yet learned how ERO's change the determinant, once you learn that, your work will be greatly reduced in the future! (For example, if you are able to use ERO's on some matrix to reduce it to an equivalent system/matrix containing a row, or a column, of all zeros, the determinant of the entire (original and reduced) matrix will be $0$. The more practice you get, and the more "shortcuts" you learn, the less tedious computing the determinant of a matrix will become.
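As an independent check of the corrected value $-68$ (my own addition, not part of the original exchange):

```python
import numpy as np

A = np.array([[2, 4, 0, 1],
              [0, 8, 0, 2],
              [0, 3, 0, 5],
              [1, 2, 1, 1]], dtype=float)
print(round(float(np.linalg.det(A))))   # -68
```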
H: which parameters always make this rational equation evenly divisible? Hi guys I have the following equation: $$x = \dfrac{a + b \times c - b}{c}$$ This is what I know about each variable: $$a \ge 64$$ $$b \ge 0$$ $$8 \le c \le a$$ My questions is there a concise way for me to pick a, b, and c so that x will always be a positive integer >= 8. For example if $a = 64$; $b= 0$; $c = 8$ (the smallest possible values for each) $x = (64 + 0 \times 8 - 0)/8$ $x = 8$ which is a positive integer $\ge 8$ However, if let's say $a = 200$; $b = 2$; $c = 10$ $x = (200 + 2 \times 10 - 2)/10$ $x = \dfrac{218}{10}$ which is $21.8$ and is not a positive integer greater than or equal to $ 8$ How can I always know that $x$ will be a positive integer?? Thanks! AI: If $$x=(a+b*c-b)/c$$ You can split it into $x=\frac{a-b}{c}+b$ Then, the only divisibility condition is $$\frac{a-b}{c}=k$$ $$a-b=ck$$ $$a=b+ck$$ So, you can pick any $b$,$c$ and $k$ integer such that x is an integer. Replacing a you get $k+b \ge 8$ Finally, you can pick any integers $b$ and $k$ such that $k+b \ge 8$ and any integer $c$ that follow your initial inequalities, then operate for $a$.
H: arrange k identical robots in 15 chairs, with limitations I ran into this question, want to challenge you. in how many ways can you arrange a number of identical robots on $15$ chairs? limitations: 1) $2$ robots cannot sit next to each other. 2) each empty chair has at least one neighbour with a robot. what i got so far: i know that it has something to do with $\sum_{5}^{8}$ because the minimum amount of robots is $5$ and maximum is $8$. i'm not sure how to proceed from here though. Thank you very much in advance, Yaron AI: Consider $xRExRExRExRExRx$. The $R$'s stand for robots, the $E$'s stand for empty chairs. All the $x$'s can be zero or one empty chairs. However there are 15 chairs altogether, so in fact each $x$ must be one empty chair. Consider now one more robot. $xRExRExRExRExRExRx$. We have six robots and five empty chairs accounted for, of 15 chairs. That means there are four empty chairs to distribute among the seven possible $x$'s, which can be done in ${7\choose 4}$ ways. I leave the cases of 7 or 8 robots for you to consider.
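A brute-force enumeration of all $2^{15}$ seatings (my own verification sketch) agrees with the gap counting: $1$, $\binom74=35$, $\binom82=28$ and $1$ arrangements for $5,6,7,8$ robots respectively, $65$ in total:

```python
from itertools import product

total_by_robots = {}
for seating in product([0, 1], repeat=15):        # 1 = robot, 0 = empty chair
    # constraint 1: no two robots sit next to each other
    if any(seating[i] and seating[i + 1] for i in range(14)):
        continue
    # constraint 2: every empty chair has at least one neighbouring robot
    ok = True
    for i, s in enumerate(seating):
        if s == 0:
            neighbours = [seating[j] for j in (i - 1, i + 1) if 0 <= j < 15]
            if not any(neighbours):
                ok = False
                break
    if ok:
        k = sum(seating)
        total_by_robots[k] = total_by_robots.get(k, 0) + 1

print(dict(sorted(total_by_robots.items())))   # {5: 1, 6: 35, 7: 28, 8: 1}
print(sum(total_by_robots.values()))           # 65
```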
H: character of a group is constant on each conjugacy class in the group I'm trying to solve this problem and I'm not quite sure what I need to prove here. Can you guys please help?. Here is the question. Definition: Character of a group $G$ is defined to be a group homomorphism $\chi: G \rightarrow \mathbb{C^*}$. Prove that the character of a group $G$ is constant on each conjugacy class in $G$. My answer: Let $x\in G$ be fixed. Then $\chi(gxg^{-1})=\chi(g) \chi(x) \chi(g) ^{-1}$ since $\chi$ is a homomorphism. But $\mathbb{C^*}$ is abelian so $\chi(gxg^{-1})=\chi(x)$. Have I proved what I needed to prove? :-). I guess my question is what exactly do they mean by "constant on each conjugacy class in $G$". Thanks for all your help. AI: You are absolutely right in your work. Conjugacy class of an element $x$ in $G$ is precisely the set $\{gxg^{-1} | g \in G\}$. Thus, you have accomplished what you wanted.
H: If $X^\ast $ is separable $\Longrightarrow$ $S_{X^\ast}$ is also separable Let $X$ be a Banach space such that $X$* (Dual space of $X$) is separable How can we prove that $S_{X^\ast}$ (Unit sphere of $X$*) is also separable Any hints would be appreciated. AI: Note: here we consider the question for $X^*$, and $S_{X^*}$, equipped with the topology induced by the norm. See this thread for the same question with the weak* topology. This is a consequence of a much more general fact which has nothing to do with duals and Banach spaces. Fact: if $(X,d)$ is a separable metric space, then for any subset $Y\subseteq X$, the subspace $(Y,d)$ is separable. Note: as pointed out by Nate Eldredge, another approach is to observe that a subspace of a second countable space is obviously second countable. So it only remains to use or prove that a metric space is separable if and only it is second countable. Proof: let $S=\{x_n\,;\,n\geq 1\}$ a dense subset of $X$. For every $n\geq 1$, consider the distance $d(x_n,Y)=\inf\{d(x_n,y)\,;\,y\in Y\}$. By definition of an infimum=greatest lower bound, for every $k\geq 1$, $d(x_n,Y)+\frac{1}{k}$ is not a lower bound of $\{d(x_n,y)\,;\,y\in Y\}$. So there exists $y_n^k\in Y$ such that $d(x_n,y_n^k)< d(x_n,Y)+\frac{1}{k}$. Since $T=\{y_n^k\,;\,n\geq 1, k\geq 1\}$ is a countable union of countable sets, it is countable. We will now show that it is dense in $Y$. Take $y\in Y$ and $\epsilon>0$. By density of $S$ in $X$, there exists $n\geq 1$ such that $d(x_n,y)<\frac{\epsilon}{2}$. In particular, $d(x_n,Y)\leq d(x_n,y)<\frac{\epsilon}{2}$. So for $k$ large enough, precisely: $k>\left(\frac{\epsilon}{2}-d(x_n,Y)\right)^{-1}$, we have $d(x_n,Y)+\frac{1}{k}<\frac{\epsilon}{2}$. Then for this $n$ and such a $k$, we get $$ d(y_n^k,y)\leq d(y_n^k,x_n)+d(x_n,y)<d(x_n,Y)+\frac{1}{k}+\frac{\epsilon}{2}<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon. $$ So there are elements of $T$ arbitrarily close to $y$ for every $y\in Y$. That is $T$ is dense in $Y$. QED. Remark: what really is about Banach spaces $X$ such that $X^*$ is separable is that $X$ is separable. And it follows that the unit sphere of $X$, $S_X$, is separable as well.
H: Why doesn't the circle retract to a point? OK, this appears to me like perhaps a dumb question. I am reading Allen Hatcher's Algebraic Topology. I've seen bits and pieces of further material here and there before, now I'm restarting from the beginning. OK, visually I can see why, say, the 2-sphere $S^2$ can not deformation retract onto it's equator. Intuitively, we can't do this without tearing a hole (or rather two holes). Even with visualizing the 'cylinder' $S^2$ x I (well as a volume) for the function $F$ to $S^1$, $F:S^2 \times I \rightarrow S^1$, $F(x,t) = f_t(x)$ the family of homotopic maps. You can see that this mapping cylinder won't work in trying to identify points $F(x,1)$ to points in the circle. The mapping cylinder for a map and spaces $f:X \rightarrow Y$, is the quotient space $[(X \times I) \amalg Y]/((x,1) \sim f(x))$. Or even $S^1$ to its equator, $S^0$ the union of two separate points: not "deformation-retractable". Again, looking at $S^1 \times I$ doesn't appear to have a cont. function to $\{x\} \cup \{y\}$ for distinct points $x,y$ in $S^1$. It doesn't seem to have any mapping cylinder as well. Disclaimer: OBVIOUSLY a circle is NOT homotopic to a point. Just in case anyone gets any wrong ideas as to where I'm going with this. It's just a question on my part :) If anyone could guide me to the a better intuition or visualization to correct the error of what I'm seeing. However, if we just pick one point $x_0$ from $S^1$, then the mapping cylinder looks just like a cone, with $F(x,0) = x$, $F(x_0, t) = x_0$ (which is a line going down the cone from the circle to the bottom apex), and $F(x,1) = x_0$. And it looks like the mapping cylinder shows how to continuously deform the circle to the point, even if the point is an element of the circle. I am thinking this "homotopic picture" of what merely looks like, the circle def. retracting to a point, is somehow misleading me, in the sense that I am just missing something or looking at it wrong or etc... Can anyone elucidate? AI: First of all, let's clarify one thing. A circle does retract onto a point, because a retract of a circle to a point on it is just a constant map $r : S^1 \to \{p\}$. What you're really asking about is the fact that a circle doesn't deformation retract onto a point. A deformation retract would be a homotopy $F : S^1 \times I \to S^1$ taking the circle to one of its points, so to deformation retract a circle to a point you'd need to retract it to a point on the circle, via a series of maps $F_t$ that map the circle to itself. Your map seems to be shrinking the circle to a point, which doesn't work because you're moving the rest of the circle off of itself into the "empty space inside of it," which isn't allowed. In other words, you're viewing the circle as being embedded in the plane, like you'd draw a circle on a piece of paper. But, topologically, the points "inside the circle" don't exist -- there's only the circle itself.
H: Adriaan van Roomen's 45th degree equation in 1593 Adriaan van Roomen proposed a 45th degree equation in 1593(see this book, picture reference as follows): $$ \begin{gathered} f(x) = x^{45} - 45x^{43} + 945x^{41} - 12300x^{39} + 111150x^{37} - \color{red}{740459}x^{35} + 3764565x^{33} \\- 14945040x^{31} + 46955700x^{29} - 117679100x^{27} + 236030652x^{25} - 378658800x^{23} \\+ 483841800x^{21} - 488494125x^{19} + 384942375x^{17} - 232676280x^{15} + 105306075x^{13} \\ - \color{red}{34512074}x^{11} + 7811375x^9 - 1138500x^7 + 95634x^5 - 3795x^3 +45x \\ = \sqrt{\frac{7}{4}-\frac{\sqrt{5}}{4}- \sqrt{\frac{15-3\sqrt{5}}{8}}} \approx 0.4158234. \end{gathered}\tag{1} $$ Another source said the right side is: $\sqrt{\frac{7}{4}-\frac{\sqrt{5}}{4}-\frac{5\sqrt{3}}{8}} $. ( EDIT: According to the answer, the coefficients in red are wrong.) Another mathematician Viète noticed that left side is nothing but the expansion of sine using $$ \sin3\theta = 3\sin\theta - 4\sin^3\theta, $$ recursively, and the equation (1) reduced to: $$ \sin(\alpha) = c, \quad \text{ for } \alpha \in (0,\frac{\pi}{2}) \tag{2} $$ where $x = \sin(\alpha/45)$ (EDIT: According to the answer, this is not correct, should be $x = 2\sin(\alpha/45)$). Now $2k\pi+\alpha$ satisfies (2) too, hence the solutions are: $$ x = \sin\left(\frac{\alpha + 2k\pi}{45}\right), \quad \text{ for } k = 0,1,2,\ldots,44. $$ Totally 45 roots in $(-1,1)$. However, when drawing the graph of $f(x)-c$, something does not look quite right: There are two other roots near roughly 3 and -3? Am I missing something here? AI: Typos! Scroll to the end to see which coefficients are typos. $$\begin{align} \sin(45t)&=\Im\left(e^{45it}\right)\\ &=\Im\left(\left(e^{it}\right)^{45}\right)\\ &=\Im\left(\left(\cos(t)+i\sin(t)\right)^{45}\right)\\ &=\Im\left(\sum_{k=0}^{45}\binom{45}{k}\left(i\sin(t)\right)^k\cos^{45-k}(t)\right)\\ &=\frac{1}{i}\left(\sum_{k=0}^{22}\binom{45}{2k+1}\left(i\sin(t)\right)^{2k+1}\cos^{44-2k}(t)\right)\\ &=\sum_{k=0}^{22}\binom{45}{2k+1}(-1)^k\sin^{2k+1}(t)\cos^{44-2k}(t)\\ &=\sum_{k=0}^{22}\binom{45}{2k+1}(-1)^k\sin^{2k+1}(t)\left(1-\sin^2(t)\right)^{22-k}\\ &=\sum_{k=0}^{22}\binom{45}{2k+1}(-1)^k\sin^{2k+1}(t)\sum_{j=0}^{22-k}\binom{22-k}{j}(-1)^j\sin^{2j}(t)\\ &=\sum_{k=0}^{22}\sum_{j=0}^{22-k}\binom{45}{2k+1}\binom{22-k}{j}(-1)^{j+k}\sin^{2k+2j+1}(t)\\ &=\sum_{k=0}^{22}\sum_{j=k}^{22}\binom{45}{2k+1}\binom{22-k}{j-k}(-1)^{j}\sin^{2j+1}(t)\\ &=\sum_{j=0}^{22}\sum_{k=0}^{j}\binom{45}{2k+1}\binom{22-k}{j-k}(-1)^{j}\sin^{2j+1}(t)\\ &=\sum_{j=0}^{22}(-1)^{j}\left[\sum_{k=0}^{j}\binom{45}{2k+1}\binom{22-k}{j-k}\right]\sin^{2j+1}(t)\\ &=\sum_{j=0}^{22}(-1)^{j}\left[\sum_{k=0}^{j}\binom{45}{2k+1}\binom{22-k}{j-k}\right]x^{2j+1}\\ &=45x+\cdots+17592186044415x^{45} \end{align}$$ But now if we replace $x$ by $\frac{x}{2}$ and multiply the whole thing by $2$, we have $$ \sum_{j=0}^{22}\left(\frac{-1}{4}\right)^{j}\left[\sum_{k=0}^{j}\binom{45}{2k+1}\binom{22-k}{j-k}\right]x^{2j+1} $$ which expands as $$ \begin{gathered} f(x) = x^{45} - 45x^{43} + 945x^{41} - 12300x^{39} + 111150x^{37} - 740\mathbf{2}59x^{35} + 3764565x^{33} \\- 14945040x^{31} + 46955700x^{29} - 117679100x^{27} + 236030652x^{25} - 378658800x^{23} \\+ 483841800x^{21} - 488494125x^{19} + 384942375x^{17} - 232676280x^{15} + 105306075x^{13} \\ - 3451207\mathbf{5}x^{11} + 7811375x^9 - 1138500x^7 + 95634x^5 - 3795x^3 +45x \\ \end{gathered} $$ and the coefficients of $x^{35}$ and of $x^{11}$ are typos. 
Note that $x=2\sin(\alpha/45)$ is the substitution, not $x=\sin(\alpha/45)$, and that we multiplied by $2$, so that doubles the value of $c$.
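The two corrected coefficients can be confirmed directly from the closed-form sum derived above; here is a short sketch of mine using exact integer arithmetic (Python 3.8+ for math.comb):

```python
from math import comb
from fractions import Fraction

# Coefficient of x^(2j+1) in 2*sin(45*t) written as a polynomial in x = 2*sin(t),
# i.e. (-1/4)^j * sum_k C(45, 2k+1) * C(22-k, j-k), as in the derivation above.
def coeff(j):
    inner = sum(comb(45, 2 * k + 1) * comb(22 - k, j - k) for k in range(j + 1))
    return Fraction(-1, 4) ** j * inner

print(coeff(17))   # should print -740259   (corrected x^35 coefficient)
print(coeff(5))    # should print -34512075 (corrected x^11 coefficient)
print(coeff(22))   # leading coefficient, 1
print(coeff(0))    # coefficient of x, 45
```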
H: how to solve vectors in 3D? We had our examination a few days ago. I was not able to answer this question. So I decided to remember it and answer at home but still I can't answer this on my own. Please help me to answer this so that if this type of question will be asked again in our incoming major exam, I can answer it. AI: First use trigonometry to find the x, y and z components of each vector.
H: Computing the dimension of $O(n)$ - why is this matrix invertible? I am looking at Lee's Smooth Manifolds second edition and on page 167 he computes the dimension of $O(n)$. Now there he defines a map $$\begin{eqnarray*}\Phi : &\text{GL}(n,\Bbb{R})& \to M(n,\Bbb{R}) \\ &A& \mapsto A^T A \end{eqnarray*}$$ and wants to compute the rank of $\Phi$. Now he says for any $B \in T_I\text{GL}(n,\Bbb{R})$ let $\gamma : (-\epsilon,\epsilon) \to \text{GL}(n,\Bbb{R})$ be the smooth curve $\gamma(t) = I + tB$ and so on. My question is: Why should the matrix $I + tB$ be an invertible matrix for all $t \in (-\epsilon,\epsilon)$? My guess is this: at $t = 0$ the determinant of $I + (0\cdot B)$ is $1$. Now consider the map $$f(t)\stackrel{\text{def}}{\equiv} \det \gamma(t) : \Bbb{R} \to \Bbb{R}.$$ Now $f(0) = 1$ and because $\Bbb{R} - \{0\}$ is open I can choose a $\varepsilon > 0 $ so that $B_\varepsilon(1) \subseteq \Bbb{R} - \{0\}$. Then by continuity there exists $\delta > 0$ so that $f(B_\delta(0)) \subseteq B_\varepsilon(1)$, that is to say I can choose an $\delta > 0$ so that $\gamma(t)$ is invertible for all $t \in (-\delta,\delta)$. AI: You are correct. The curve $\gamma$ you defined is continuous from $\mathbb{R}$ to $M(n, \mathbb{R})$ and the determinant function is continuous from $M(n, \mathbb{R})$ to $\mathbb{R}$. So, the map $t \mapsto \operatorname{det}(\gamma(t))$ is continuous on $\mathbb{R}$, taking the value $1$ at $t = 0$. Therefore there is a small interval around $0$ on which $\operatorname{det}(\gamma(t))$ is nonzero.
H: Proving that free modules are flat (without appealing projective modules) Suppose $R\neq 0$ is a commutative ring with $1$. Let $M$ be a free $R$-module. I would like to prove that $M$ is a flat $R$-module. Everywhere I have looked (mostly online) this is proved by first proving that every free module is projective, and then proving that every projective module is flat. Unfortunately, Atiyah & Macdonald's "Introduction to Commutative Algebra" (Chapter 2) does not discuss projective modules. But the result that every free module is flat comes very handy in the exercises. So my question is, Is it possible to prove that every free module is flat just by definitions and without appealing to projective modules? Thanks! AI: You can just show that the functor $M\otimes_R(-)$ is left-exact. Write $M=\bigoplus_{i\in I} R$ and let $$0\to N'\xrightarrow{\quad\iota\quad} N\to N''\to 0$$ be an exact sequence of $R$-modules. Note that $M\otimes N=\bigoplus_{i\in I} R\otimes_R N = \bigoplus_{i\in I} N$, so the sequence $$0 \to M\otimes N' \xrightarrow{\quad\mathrm{id}\otimes\iota\quad} M\otimes N \to M\otimes N'' \to 0$$ is the same as $$0 \to \bigoplus_{i\in I} N' \xrightarrow{\textstyle\quad\bigoplus_{i\in I} \iota\quad} \bigoplus_{i\in I}N \to \bigoplus_{i\in I}N'' \to 0$$ and the morphism $\bigoplus_{i\in I} \iota$ is clearly injective if $\iota$ is injective.
H: Prove a closed one-dimensional submanifold of $X\times I$. Suppose $F(x,t): X\times I \rightarrow R$ is a homotopy of Morse functions. That is, $f_t: X \rightarrow R$ is Morse for every $t$. Show that the set $C = \{(x,t)\in X\times I : d(f_t)_x = 0\}$ forms a closed, smooth submanifold of dimension one of $X\times I$. Assume the homotopy is constant near the ends of I and use an open interval. What I have done so far is that I showed it is a smooth manifold by the inverse function theorem. But I don't know how to show it is closed and has dimension 1. Thank you very much for your guidance! AI: Thank you for @DanielMcLaury's comment, I finally figured it out after his very wise guidance. Submanifold and dimension one: In local coordinates, consider the map $G(x,t)=d(f_t)_x$ from the $(n+1)$-dimensional manifold $X\times I$ (where $n=\dim X$) to $\mathbb R^n$, so that $C=G^{-1}(0)$. Because each $f_t$ is Morse, the Hessian $d^2(f_t)_x$ is nondegenerate at every point of $C$, so the derivative of $G$ in the $X$-directions is already surjective there; hence $G$ is a submersion along $C$, and by the implicit function theorem $C$ is a smooth submanifold of dimension $(n+1)-n=1$. Closed: $C$ is the preimage of $0$ under the continuous map $G$, hence it is closed in $X\times I$.
H: Gram-Schmidt algorithm to find orthonormal basis I must use the Gram-Schmidt algorithm to find an orthonormal basis for Sp$\{1,x,x^2 \}$ in $L^2[-1,1]$. Suppose the elements of the orthonormal basis are $e_1, e_2$ and $e_3$. Then they must be pairwise orthogonal and $\|e_i \|=1$ for each $i$. I thought of trying with the standard inner product $\langle f,g\rangle=\int_X f \bar{g} \, d\mu$ on $L^2$, where $ X=[-1,1]$. However I have no idea of how to determine the orthogonal basis. AI: See Gram-Schmidt's orthogonalization process for a description. Start with $v_1=1$: since $\|v_1\|^2=\int_{-1}^1 1\,dx=2$, we get $e_1=\frac{1}{\sqrt2}$. Then (note that everything is real, so you don't have to worry about complex conjugates) \begin{align} u_2 &= v_2 - \operatorname{proj}_{e_1} v_2 \\ &= v_2 - \langle e_1,v_2 \rangle e_1 \\ &= x - \left( \int_{-1}^1 x \cdot \frac{1}{\sqrt2} \,dx\right)\frac{1}{\sqrt2} = x \end{align} and $$ e_2 = \frac1{\|u_2\|}u_2 = \left( \int_{-1}^1 x^2\,dx \right)^{-1/2} x = \frac{x\sqrt3}{\sqrt2}.$$ I'll leave the computation of $e_3$ to you.
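If you want to check the computation and obtain $e_3$ as well, here is a small SymPy sketch of the same Gram-Schmidt steps (my own addition, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')

def ip(f, g):
    # Inner product on L^2[-1, 1] (real case, so no conjugation needed).
    return sp.integrate(f * g, (x, -1, 1))

vs = [sp.Integer(1), x, x**2]
es = []
for v in vs:
    u = v - sum(ip(v, e) * e for e in es)      # subtract projections onto earlier e's
    es.append(sp.simplify(u / sp.sqrt(ip(u, u))))

print(es)
# Expected (up to SymPy's formatting): sqrt(2)/2, sqrt(6)*x/2, 3*sqrt(10)*(x**2 - 1/3)/4
```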
H: Pointwise convergence implies $L^{2}$ convergence If we have a sequence of bounded functions $f_{n}$ converging almost everywhere to another bounded function $f$ in a finite measure space such that $$ |f_{n}(t)| \leq c$$ for some constant $c$. Then $$ \int |f_{n}(t) - f(t)|^{2} d\mu$$ goes to zero. Since we are in a finite measure space, the constants are integrable and hence DCT applies. However, it only gives the result that $$ \int f_{n}(t)d\mu \rightarrow \int f(t)d\mu$$ and $$ \int |f_{n}(t) - f(t)| d\mu \rightarrow 0$$. How do we prove the convergence in $L^{2}$ norm? Here is what I tried: $$ \int |f_{n}(t) - f(t)|^{2} d\mu = \int (f_{n}-f)(\overline{f_{n}}-\overline{f})$$ $$ = \int |f_{n}|^{2} - \int f\overline{f_{n}} - \int f_{n}\overline{f} + \int |f|^{2} $$ which goes to zero by DCT (as $f_{n}\overline{f} \rightarrow |f|^{2}$ and so on). Am I right in all this or am I missing something? I am apprehensive because I intuitively feel that pointwise convergence should not imply $L^{2}$ convergence. AI: You are right, but you can also argue more directly. Let $g_n(t) = |f_n(t) - f(t)|^2$. Then $g_n \to 0$ almost everywhere and $|g_n| \le 4c^2$ almost everywhere. As we are in a finite measure space, $$ \|f_n - f\|_{L^2}^2 = \int g_n \, d\mu \to 0. $$
H: Combinatorics problem: # of arithmetic problems possible "How many arithmetic problems of the following form are possible? You must use each of the digits 1 through 9, they must appear in numerical order from left to right, and you can use any combination of the + and * symbols you like, as long as the resulting expression makes mathematical sense. For example, 1234+5678+9 and 123456+789 and 123456789 are three possibilities, but 1**23456789 is not." We know the numbers are ordered as: 1_2_3_4_5_6_7_8_9 where the _'s can be filled by one of the three operations '+', '*', and ''. There are 2*3{n-1} ways to arrange these operations. I just don't know how to use the information correctly for the problem. Where 'n' is 8 I believe. Advice/Hints? edit: 2*3^{n-1} is totally wrong. Brain was stuck on previous problem. Sorry. AI: Each '_' may be either empty, $*$, or $+$ -- so a total of three possibilities. We have 8 such '_'. So by the fundamental principle of counting we are left with a total of $3^8$ possibilities.
H: How to find the number of combinations of $x,y,z$ that satisfy an equation The equation is $x + y + z = 18$, with the constraints: $2 \leq x \leq 8$ $1 \leq y \leq 7$ $1 \leq z \leq 6 $ First I solved it so the lower bounds where zero, by substituting the following: $x' = x - 2$, $y' = y - 1$, $z'= z - 1$. Which gave me the combination of $16,2 = 120$, now I have to subtract the number of combination of $x,y,z$ that are over, so $x'\geq7$, $y'\geq7$, or $z'\geq6$. The problem is when I plug those into the equation I end up with a negative answer and do not know how to deal with the problem. AI: As you say, you need to count the solutions in non-negative integers to $x'+y'+z'=14$ that violate at least one of the constraints $x'\le 6$, $y'\le 6$, and $z'\le 5$. This is an inclusion-exclusion argument. First count the solutions with $x'\ge 7$. That’s the number of solutions in non-negative integers to $x''+y'+z'=7$, where $x''=x'-7$. You know how to compute this: it’s $$\binom{7+3-1}{3-1}=\binom92=36\;.$$ Similarly, you find that there are $\dbinom92=36$ solutions with $y'\ge 7$ and $\dbinom{10}2=45$ with $z'\ge 5$. These all have to be subtracted from $120$, leaving $120-36-36-45=3$ valid solutions. However, some invalid solutions were removed twice, because they violated two constraints; these must be added back in once. How many solutions to $x'+y'+z'=14$ have $x'\ge 7$ and $y'\ge 7$? How many have $x'\ge 7$ and $z'\ge 6$? How many have $y'\ge 7$ and $z'\ge 6$? All of those must be added back in. If there were any solutions that violated all three constraints, they would now have been counted once in the $120$, subtracted $3$ times in the initial correction, and added back in $3$ times in the second correction, so they would be included in the total and would need to be subtracted. However, there are none, so no further correction is needed.
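Since the ranges are small, the finished inclusion-exclusion count can be confirmed by brute force (my own sketch); it agrees with $120-117+7=10$:

```python
count = sum(1
            for x in range(2, 9)
            for y in range(1, 8)
            for z in range(1, 7)
            if x + y + z == 18)
print(count)   # 10
```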
H: Solving a Linear Congruence I've been trying to solve the following linear congruence with not much success: $19\equiv 21x\pmod{26}$ If anyone could point me to the solution I'd be grateful, thanks in advance AI: Hint: $26 = 2 \cdot 13$ and the Chinese remainder theorem. Modulo $2$ we have to solve $1 \cong x \pmod 2$, that is $x = 2k + 1$ for some $k$, now solve $19 \cong 42k + 21 \pmod{13}$.
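To finish the computation, here is a one-liner of mine using Python's built-in modular inverse (available since Python 3.8):

```python
# Solve 21*x = 19 (mod 26).  gcd(21, 26) = 1, so 21 is invertible mod 26.
x = 19 * pow(21, -1, 26) % 26
print(x)                    # 17
print(21 * x % 26)          # 19, confirming the solution x = 17 (mod 26)
```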
H: $\displaystyle\lim_{z\to 0} |zf(z)| = 0$ implies $z = 0$ is a removable singularity. Given that $f$ is analytic around $z = 0$ and that $\displaystyle\lim_{z\to 0} |zf(z)| = 0$, we can conclude that $f(z)$ has a removable singularity at $z = 0$. I think that we should deduce that $f(z)$ is bounded around $z = 0$ and use Riemann to conclude that it has a removable singularity. However, I have no idea how to show that this must be bounded. Am I on the right track? AI: Let $g(z) = zf(z)$. By assumption $g$ is bounded near $z=0$, so $g$ has a removable singularity at $z=0$. Hence $$f(z) = \frac{g(z)}{z},$$ on a punctured neighbourhood of $z=0$. Since $g(0) = 0$, it follows that $g(z) = zh(z)$ for some analytic function $h$, i.e. $$f = h$$ on a punctured neighbourhood of $z=0$, which shows that $f$ is bounded near $0$.
H: Characterization of line graphs I'm confused about how you to recognize line graphs. From the Wikipedia page on the subject, I read that A graph is a line graph if and only if it does not contain one of these nine graphs as an induced subgraph: Which explains why this graph is not a line graph as it induces a claw, which is one of the forbidden subgraph. However here's a graph and its subgraph (in green). This line graph also contains a claw, if you look at the vertex (4,3) which is adjacent to (1,3), (1,4) and (4,5). What am I missing? AI: When we say induced subgraph, we mean you choose some vertices and take all edges between them. So when you choose (4,3), (1,3), (1,4), and (4,5) you get a diamond (with an edge across its middle), not the claw.
H: Find the value of the constants $a$ and $b$ such that the following limit is satisfied. Find the value of the constants $a$ and $b$ such that the following limit is satisfied. $\lim_{x\to a}\dfrac{bx-1}{x-a}=\frac12 $. $\lim_{x\to a}\dfrac{bx+9}{x-a}=a-6 $. $\lim_{x\to a}\dfrac{ax+b}{x^2+3x-4}=4$. I desperately need help with these 3 problems as I do not understand them. Thanks! AI: For $(bx-1)/(x-a)\to\frac12$ as $x\to a$: the denominator tends to $0$, so for a finite limit the numerator must vanish at $x=a$ too, and since both are linear this means $bx-1=\frac12(x-a)$. Reduce and equate coefficients:$$bx-1=\frac12(x-a)\\bx-1=\frac12x-\frac{a}2\\\implies b=\frac12,a=2$$ Similarly, for our second limit, we see that $bx+9=(a-6)(x-a)$, reduce, and equate coefficients:$$bx+9=(a-6)x-(a-6)a$$Immediately we see that $b=a-6$ and that $-9=(a-6)a=a^2-6a$. Moving terms to our right-hand side, we have a factorizable quadratic equation in $a$:$$a^2-6a+9=0\\(a-3)^2=0\\\implies a=3$$... and thus $b=-3$. Can you use a similar approach to tackle our third limit? Recognize that our denominator is easily factorizable to yield $x^2+3x-4=(x+4)(x-1)$. If $a$ were a zero of the denominator (that is, $a=1$ or $a=-4$), a finite limit would force the linear numerator to vanish there too, and a short check shows the resulting limits would be $\frac15$ and $\frac45$, never $4$; so $a\neq1,-4$ and we may "distribute" our limit safely to yield:$$\frac{\lim\limits_{x\to a}(ax+b)}{\lim\limits_{x\to a}(x^2+3x-4)}=4\\\frac{a^2+b}{(a+4)(a-1)}=4\\\implies a^2+b=4(a+4)(a-1)$$Expand our right-hand side to yield $4(a+4)(a-1)=4(a^2+3a-4)=4a^2+12a-16$, yielding $$a^2+b=4a^2+12a-16$$This is a single equation in the two unknowns, so many pairs work; a convenient choice is $a=0$ and $b=-16$ (check: $\lim_{x\to0}\frac{-16}{x^2+3x-4}=\frac{-16}{-4}=4$).
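A quick symbolic check of the three answers (my own sketch using SymPy; for the third limit, $a=0$, $b=-16$ is just one of the many valid pairs):

```python
import sympy as sp

x = sp.symbols('x')

# Part 1: a = 2, b = 1/2
print(sp.limit((sp.Rational(1, 2) * x - 1) / (x - 2), x, 2))        # 1/2

# Part 2: a = 3, b = -3  (and indeed a - 6 = -3)
print(sp.limit((-3 * x + 9) / (x - 3), x, 3))                       # -3

# Part 3: the choice a = 0, b = -16
print(sp.limit((0 * x - 16) / (x**2 + 3 * x - 4), x, 0))            # 4
```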
H: A question about the following theorem: $c(A\cup B)=c(A)\cup c(B)$. Let there be a point $p\in(A\cup B)'$ such that some open sets containing it contain points in $A\setminus B$, and other open sets containing it contain points in $B\setminus A$. $A\setminus B=A-(A\cap B)$ Then $p$ is not a limit point of either $A$ or $B$. However, it is a limit point of $A\cup B$!! Is the theorem $c(A\cup B)=c(A)\cup c(B)$ untrue then? EDIT: $c(A)$ is the closure of set $A$ in a topological space. $A'$ is the set of limit points (the derived set) of $A$ in the topological space AI: There is no such point: Let $p \in c(A \cup B)$, then any open set containing $p$ contains points of $A$ or $B$. Suppose there is an open set $U$ such that $p \in U$, $U \cap A = \emptyset$ and another open set $V$ such that $p \in V$, $V \cap B = \emptyset$. Then $U \cap V$ is open, $p \in U \cap V$, but $(U \cap V) \cap (A \cup B) = \emptyset$, contradicting $p \in c(A \cup B)$. Hence a point as you describe cannot exist, and therefore $c(A \cup B) = c(A) \cup c(B)$.
H: If $A,B$ are positive definite matrix then I need to prove so is $ABA^*$ If $A,B$ are positive definite matrix then I need to prove so is $ABA^*$, here is what I have done $$x^*ABA^*x=(A^*x)^*B(A^*x)=y^*By>0$$, is it okay? $y=A^*x$ AI: Yes, that's almost ok. You should mention that $y = A^*x$ cannot be $0$ for $x \ne 0$, as $A$ is invertible. Moreover your computation shows that $A > 0$ isn't necessary, $A$ invertible suffices for $ABA^*$ being positive definite for positive definite $B$.
H: The sine inequality $\frac2\pi x \le \sin x \le x$ for $0<x<\frac\pi2$ There is an exercise on $\sin x$. How could I see that for any $0<x< \frac \pi 2$, $\frac 2 \pi x \le \sin x\le x$? Thanks for your help. AI: For $x \in \left[0, \frac{\pi}{2}\right]$, we have $\sin''(x) = -\sin(x) \le 0$. So the sine function is concave on $\left[0, \frac{\pi}{2}\right]$. So the inequality follows from the principle (I suggest drawing the graph to see it clearly): $$\textrm{secant} \le \textrm{function} \le \textrm{tangent}$$ Here the secant is the chord joining $(0,0)$ to $\left(\frac{\pi}{2},1\right)$, which is the line $y=\frac{2}{\pi}x$, and the tangent is the tangent line at $0$, which is $y=x$.
H: If $\gcd(a,b)=1$ and $a$ and $b$ divide $c$, then so does $ab$ Using divisibility theorems, prove that if $\gcd(a,b)=1$ and $a|c$ and $b|c$, then $ab|c$. This is pretty clear by UPF, but I'm having some trouble proving it using divisibility theorems. I was looking at this Math.SE problem, and if this could be proven, then the solution would follow immediately. AI: If $\gcd(a,b)=1$ then there exist integers $s$ and $t$ such that $a s +b t =1$. Hence $c a s +c b t =c$. Now since $a \vert c$ and $b \vert c$ there exist integers $m$ and $n$ such that $c = m a$ and $c= n b$. Hence $n b a s + m a b t =c$. Since $a b $ divides the entire left hand side, it must also divide the right hand side.
H: Basis of a basis I'm having trouble understanding the concept of coordinates in Linear Algebra. Let me give an example: Consider the following basis of $\mathbb R^2$: $S_1=\{u_1=(1,-2),u_2=(3,-4)\}$ and $S_2=\{v_1=(1,3),v_2=(3,8)\}$ Let $w=(2,3)$ be a vector with coordinates in $S_1$, then $w=2u_1+3u_2=2(1,-2)+3(3,-4)=(11,-16)$. When I tried to find the coordinates of $w$ in $S_2$, I found the following problem: Which basis does $(11,-16)$ belong to? I suppose the same as $u_1$ and $u_2$, but which basis do $u_1$ and $u_2$ belong to? And if I discover the basis of $u_1$ and $u_2$, what's the basis of the basis of $u_1$ and $u_2$? I found an infinite recurrence problem and I was stuck there. Maybe I'm making things more complicated than they are, but it seems that there is a deeper and philosophical question inside of this doubt: I couldn't see what a coordinate really is. I would be very grateful if anyone could help me with this doubt. AI: The vector $(11,-16)$ is written in the standard basis - it means $11\times(1,0)+(-16)\times(0,1)$. This is exactly the computation that you do when you write: $$2u_1+3u_2=2(1,-2)+3(3,-4)=(11,-16)$$ If we write $e_1=(1,0)$ and $e_2=(0,1)$, then what you're writing is: $$2u_1+3u_2=2(e_1-2e_2)+3(3e_1-4e_2)=11e_1-16e_2$$ so you've changed coordinates from $u_1$ and $u_2$ to $e_1$ and $e_2$. It's not strictly correct to say that a vector $(a,b)$ "belongs to" a basis, but you do need to know what basis you're working in to be able to interpret it. More about vectors vs. coordinates Normally we take $\mathbb{R}^2:=\{(a,b):a,b\in\mathbb{R}\}$ to be the set of pairs of real numbers, with vector space structure given by: $$(a,b)+(c,d)=(a+c,b+d)$$ $$\lambda(a,b)=(\lambda a,\lambda b)$$ So its points (vectors) are pairs $(a,b)$. We also have that $(a,b)=a(1,0)+b(0,1)$, so the vectors $e_1=(1,0)$ and $e_2=(0,1)$ span (I'm not claiming they're a basis yet, but of course they are), but I can still understand $(a,b)$ without knowing about $e_1$ and $e_2$. Now pick a basis $v_1,v_2$ of $\mathbb{R}^2$ (not necessarily the standard one). Then for any $u\in\mathbb{R}^2$, we have $u=xv_1+yv_2$, so we could write $u=(x,y)$ in the coordinates $v_1$ and $v_2$. However, because $u$ is a point of $\mathbb{R}^2$, it is a pair $(a,b)$ of real numbers. But unless $v_1=e_1$ and $v_2=e_2$, we won't have $a=x$ and $b=y$. This is confusing (now we can truthfully say $u=(a,b)$ and $u=(x,y)$, but the pair $(a,b)$ isn't equal to the pair $(x,y)$), so instead I'll write $u=[x,y]_v$ to mean that $u$ has coordinates $x$ and $y$ in the basis $v_1$ and $v_2$. Now we can say $(a,b)=[x,y]_v$ without getting confused, and our notation separates a vector in $\mathbb{R}^2$, which is just a pair of numbers $(a,b)$ that doesn't depend on any basis, from a coordinate representation $[x,y]_v$, which requires the basis $v$ to be understood. (Note that we do have $(a,b)=[a,b]_e$). Now to return to your example, $u_1$ really is the vector $(1,-2)$, and $u_2$ really is the vector $(3,-4)$ - I don't need any basis to understand this. In our new coordinate-emphasising notation, your calculation is now: $$[2,3]_u=2u_1+3u_2=2(1,-2)+3(3,-4)=(11,-16)$$ where $(11,-16)$ is interpreted as just being a vector in $\mathbb{R}^2$, with no chosen basis (although we could think of it as $(11,-16)=[11,-16]_e$ if we wanted). Now to write it in terms of the basis $v_1$ and $v_2$ you need to find $x,y$ such that $[x,y]_v=(11,-16)$, or rather such that: $(11,-16)=[x,y]_v=xv_1+yv_2=x(1,3)+y(3,8)=(x+3y,3x+8y)$
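To actually finish the example and find the coordinates of $w$ in $S_2$, you solve that small linear system; a NumPy sketch of mine:

```python
import numpy as np

# Columns are the basis vectors v1 = (1, 3) and v2 = (3, 8) of S_2.
V = np.array([[1, 3],
              [3, 8]], dtype=float)
w = np.array([11, -16], dtype=float)     # w written in the standard basis

coords = np.linalg.solve(V, w)
print(coords)                            # [-136.  49.], i.e. w = -136*v1 + 49*v2
```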
H: Determining planarity?? Sorry I couldn't think of any way to ask this so I took a picture of it and pasted it below. I'll start with what little I do know: firstly, this graph is not homeomorphic to $K_5$ simply because that would require 5 vertices of degree 4, which we clearly don't have (only 3). I also can't seem to figure out how to make it homeomorphic to $K_{3,3}$; though I think I could label it as a bipartite graph in some way where everything was degree 3, I can't find a way to do that with only 6 vertices. Lastly, I don't think it's planar. Why? If you cut out b1b4, b2b5 and a6b3 it becomes obvious by looking at it that you can't untwist it in two dimensions. Please help, thx. AI: Being non-planar doesn't mean being homeomorphic to $K_5$ or $K_{3,3}$. Rather, a graph is non-planar if and only if it contains either $K_5$ or $K_{3,3}$ as a minor. Here, a graph $H$ is a minor of a graph $G$ if $H$ can be obtained from $G$ by deleting vertices, deleting edges, and/or contracting edges. (This minor characterization is Wagner's theorem; the closely related Kuratowski theorem instead asks for a subgraph homeomorphic to $K_5$ or $K_{3,3}$, and for these two graphs both criteria identify exactly the non-planar graphs.) Here's a hint for this problem: show that your graph contains $K_{3,3}$ as a minor.
H: Simple explanation of difference between $\cos(x)$ and $\cos(x^2)$ integrals. I'm not looking for the solved integrals, I'm just looking for a simple explanation for why $\displaystyle\int\cos(x^2)dx$ is so much more complicated than $\displaystyle\int\cos(x)dx$ With simple I mean something I can say to explain the difference to a person just starting to learn about integration. Perhaps something visual? AI: For one thing, look at their graphs. Since $x$ increases at a constant rate, $\cos(x)$ has a constant period but $x^2$ increases faster and faster for larger $x$ so the "period" of $\cos(x^2)$ keeps getting smaller and smaller.
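For something visual (my own sketch), plotting the two functions side by side makes the quickening oscillation of $\cos(x^2)$ obvious:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 2000)
plt.plot(x, np.cos(x), label='cos(x)')        # constant period 2*pi
plt.plot(x, np.cos(x**2), label='cos(x^2)')   # "period" shrinks as x grows
plt.legend()
plt.show()
```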
H: Proving that $T^{-1}(V)$ is a subspace of $X$ when $V$ is a subspace of $X$? I just came across a question which is as follows: Let $X$ and $Y$ be normed linear spaces and let $ T : X \rightarrow Y$ be a linear operator with domain $D(T) \subset X$ and range $R(T) \subset Y$. The first part of the question asks us to show that $T^{-1}$ is linear as well, which I have been able to do. However, I'm confused by the second part. It says: Let $V$ be a subspace of $X$ and $W$ be a subspace of $Y$. Show that $T^{-1}(V)$ is a subspace of $X$ and $T^{-1}(W)$ is a subspace of $Y$. I simply don't understand how $T^{-1}(V)$ can be a subspace of $X$. Shouldn't it have been: Show that $T^{-1}(W)$ is a subspace of $X$ and $T(V)$ is a subspace of $Y$? AI: Yes, I agree with your diagnosis. The notation $T^{-1}(V)$ isn't really justified, since $V$ is a subspace of $X$ rather than of $Y$, and preimages under $T$ are taken of subsets of $Y$. It's much more likely that $T(V)$ and $T^{-1}(W)$ were meant. By the way, you mentioned you were able to show "$T^{-1}$ is linear." Whenever $T$ isn't one-to-one, $T^{-1}$ isn't even a well-defined map, so the statement doesn't quite make sense in that generality. When $T$ is one-to-one, though, $T^{-1}$ can be regarded as a linear map from a subspace of $Y$ (namely $R(T)$) onto a subspace of $X$, yes. One good thing to keep in mind here is that the preimage $T^{-1}(W):=\{v\in D(T)\mid T(v)\in W\}$ is defined even if $T^{-1}$ isn't a function.
H: Cauchy Criterion for Sequences as opposed to Series As I've been venturing through baby Rudin, I came upon his definition of a Cauchy sequence: A sequence $\{ p_n \}$ in some metric space $X$ is said to be Cauchy if $$ \forall \; \epsilon > 0 \; \exists \; N \in \mathbb{N} \; s.t. \; d(p_n, p_m) < \epsilon \; \forall \; n,m \ge N $$ He then talks about the Cauchy criterion for convergence, namely that a Cauchy sequence converges to a point of the metric space containing it (a metric space with this property is called complete). He goes on to say that the Cauchy criterion for convergence of series can be restated as the following: A series $\sum a_n$ converges if and only if $$ \forall \; \epsilon > 0 \; \exists \; N \in \mathbb{N} \; s.t. \left\lvert \sum_{k=n}^{m} a_k \right\rvert \le \epsilon \; \forall \; n,m \ge N $$ Note that these series live in $\mathbb{R}^k$. I can see how one would get: A series $\sum a_n$ converges if and only if $$ \forall \; \epsilon > 0 \; \exists \; N \in \mathbb{N} \; s.t. \left\lvert \sum_{k=n}^{m} a_k \right\rvert < \epsilon \; \forall \; n,m \ge N $$ but I don't understand why the new definition has a $\le$ rather than a $<$. Could anybody advise me as to why? Thanks in advance! AI: The two formulations are equivalent; proving the first from the second is trivial. For the other direction, apply statement 1 to $\epsilon/2$; the $N$ obtained will then suffice for $\epsilon$ in formulation 2. Occasionally, allowing equality will simplify one's proof (because it makes the choice of $N = N(\epsilon)$ easier). In fact, we can do better than that, and give a useful form for determining convergence that allows you to skip right through tedious computations when the sufficiency of $N$ is apparent -- typically of the form "when $N$ satisfies this complicated inequality involving $N$ and $\epsilon$". For functions $f,g$, define a variant of Big O notation: $f \in \mathcal O(g; 0)$ iff there exist $\delta$ and $C$ such that $|x|<\delta$ implies $|f(x)| < C\, |g(x)|$. Now we have the following result (where $\epsilon$ in $\mathcal O(\epsilon;0)$ signifies the identity function): Let $(p_k)_k$ be a real sequence (or, for that matter, a sequence in any metric space). Then $(p_k)_k$ is Cauchy iff there exists a function $f(\epsilon) \in \mathcal O(\epsilon; 0)$ such that: $$\forall \epsilon > 0: \exists N: \forall m,n>N: d(p_m,p_n) < f(\epsilon)$$ The proof is not hard; expanding the given property of $f$ is really all there is to it. Of course, we can apply this to the sequence of partial sums of a series to obtain the Cauchy criterion for convergence of series (in a suitable space, e.g. a complete normed vector space, or the more down to earth $\Bbb R^k$).
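As a concrete numerical illustration of the series criterion (a sketch using $\sum 1/k^2$ as a stand-in convergent series, not taken from Rudin): once $N$ is large enough, every block sum $\left|\sum_{k=n}^m a_k\right|$ with $m,n\ge N$ is small, and it makes no practical difference whether the bound is written with $\le\epsilon$ or $<\epsilon$.

```python
# Illustrate the Cauchy criterion for series with a_k = 1/k^2 (a convergent series).
def block_sum(n, m):
    """|a_n + a_{n+1} + ... + a_m| for a_k = 1/k^2, with n <= m."""
    return abs(sum(1.0 / k**2 for k in range(n, m + 1)))

eps = 1e-3
N = 1001   # the whole tail from N onwards is below 1/(N - 1) = eps

print(block_sum(N, N + 10**4))   # well below eps
print(block_sum(N, N + 10**5))   # still below eps; any m, n >= N works
```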
H: canonical one-form The canonical one-form is defined here: http://books.google.nl/books?id=uycWAu1yY2gC&lpg=PA128&dq=canonical%20one%20form%20hamiltonian&pg=PA128#v=onepage&q&f=false My problem is this: It states that if $(x_1,\dots x_n)$ are local coordinates in $N$, a 1-form $\alpha\in T^*_x N$ is represented by $\alpha = \Sigma^n_{j=1} y_jdx_j$ It then goes on to define a special 1-form $\lambda$ on $T^*N$ by $\lambda = \Sigma^n_{j=1} y_jdx_j$. This to me looks the same as $\alpha$ and as a 1-form on $N$ and not on $T^*N$. What am I missing here? AI: This sort of thing can be a bit confusing! A local one-form on $N$ is given by $\sum_j y_j(x) dx_j$; this gives coordinates $(y_1,\ldots,y_n)$ on the fibres of $T^*N$, so we have local coordinates $(x_1,\ldots,x_n,y_1,\ldots,y_n)$ on the total space of $T^*N$. Therefore a local one-form on $T^*N$ is $\sum_j\big(\alpha_j(x,y) dx_j + \beta_j(x,y) dy_j\big)$. The canonical one-form on $T^*N$ is given by $\alpha_j = y_j,~ \beta_j = 0$. In words, it has no components 'pointing along the fibre', and its transverse components are given by the point we're sitting at in the fibre.
H: A characterisation of semi-continuity Show that a function $J$ is lower semicontinuous at a point $v\in U$ iff $\forall \epsilon >0$ there exists $\delta >0$ such that $\forall u\in U\cap L_\delta (v)$, $J(u)>J(v)-\epsilon$. Here $L_\delta (v)$ is the open ball with center $v$ and radius $\delta$. The definition of lower semicontinuity (semicontinuity from below) is: "A function $J: U\rightarrow \mathbb R$, $U\subset \mathbb R^n$, is lower semicontinuous at a point $v\in U$ if $\liminf\limits_{k \to \infty} J(u_k)\geq J(v)$ for every sequence $\{u_k\}_{k\in \mathbb N}$ in $U$ that converges to $v$." Please, can anyone help me show this? AI: "If" part: Suppose that the "$\varepsilon/\delta$" condition holds, and let $u_k$ be a sequence as in the definition of lower semicontinuity. Fix $\varepsilon$, and take $\delta$ as in the condition. Then for some $n_0$, for $k > n_0$ we have $u_k \in L_\delta(v)$. Hence $J(u_k) > J(v) - \varepsilon$. Passing to the limit inferior, $\liminf_k J(u_k) \geq J(v) - \varepsilon$. Since $\varepsilon > 0$ was arbitrary, you get $\liminf_k J(u_k) \geq J(v)$. "Only if" part: Suppose that $J$ is lower semicontinuous. If the "$\varepsilon/\delta$" condition were false, then for some $\varepsilon > 0$ we would be able to find $u_k$ with $|u_k - v| < \frac{1}{k}$ but $J(u_k) \leq J(v) - \varepsilon$ (otherwise, $\delta = 1/k$ would work). Passing to the limit inferior we find $\liminf_k J(u_k) \leq J(v) - \varepsilon$, contradicting lower semicontinuity.
H: Continuity of bilinear form Let $X$ be a Banach space and $b(\cdot,\cdot):[0,C] \times X \to \mathbb{R}$ be a form that is continuous w.r.t. the first argument, linear w.r.t. the second argument and satisfies the boundedness condition $$b(t,x) \leq K\lVert x \rVert_X$$ for all $x$ and all $t \in [0,C]$. Suppose $x_h \to x$ in $X$ as $h \to 0.$ Does $$b(t+h, x_h) \to b(t,x)$$ as $h \to 0?$ How do I prove this? AI: Fix $\varepsilon >0$. Since $b$ is continuous w.r.t. the first argument, there exists $\delta>0$ such that $|h|<\delta$ implies $|b(t+h,x)-b(t,x)|\leq \dfrac{\varepsilon}{2}$. Note also that, by linearity in the second argument, $b(t,-y)=-b(t,y)$, so the boundedness condition in fact gives $|b(t,y)|\leq K\lVert y\rVert_X$ for all $y$ and $t$. Now, since $x_h\rightarrow x$, there also exists $\eta>0$ such that $|h|<\eta\Rightarrow \|x_h-x\|<\dfrac{\varepsilon}{2K}$. When $|h|<\min(\delta,\eta)$, $$\begin{align*}|b(t+h,x_h)-b(t,x)| & =|b(t+h,x_h+x-x)-b(t,x)| \\ &=|b(t+h,x_h-x)+b(t+h,x)-b(t,x)|\end{align*}$$ by linearity w.r.t. the second argument, so $$\begin{align*}|b(t+h,x_h)-b(t,x)| & \leq |b(t+h,x_h-x)|+|b(t+h,x)-b(t,x)| \\ & \leq |b(t+h,x_h-x)|+\dfrac{\varepsilon}{2} \\ & \leq K\|x_h-x\| + \dfrac{\varepsilon}{2} \\ & \leq \varepsilon\end{align*}$$
H: Lebesgue–Stieltjes integral from 0 to $\infty$ on $\mathbb{R}^+$ In the stochastic analysis course we encountered the following integral: $\int_0^\infty H^2_s\,d[M,M]_s$, where $H_s$ is a predictable process, $M_s$ is a uniformly integrable martingale in $L^2$, and $[M,M]_s$ is the quadratic variation. We were working on the measure space $(\mathbb{R}^+ \times \Omega, \mathcal{B}(\mathbb{R}^+) \otimes \mathcal{F})$. As far as I understand, we consider the integral $\omega$-wise ($\omega \in \Omega$) as a Lebesgue–Stieltjes integral, a stochastic process being, for each fixed $\omega$, a function $s \mapsto X_s(\omega)$, $X_\cdot(\omega): \mathbb{R}^+ \rightarrow \mathbb{R}$. The question is: how is the integral defined (taking into account that we are in $\mathbb{R}^+$ and the integral runs to $\infty$)? My guess was: we define it via a limit, as $s \rightarrow \infty$, of the integral from $0$ to $s$. However, my professor rejected this (saying "there is no problem here as with a Riemann integral"). AI: Let $\omega\in\Omega$ be fixed and let $F\!:\mathbb{R}_+\to \mathbb{R}_+$ be defined as $F(s)=[M,M]_s(\omega)$. Then $F$ is a non-negative, right-continuous and non-decreasing function with $F(0)=0$, and hence $$ \mu_F((a,b])=F(b)-F(a),\quad0\leq a<b<\infty $$ defines a Borel measure on $(\mathbb{R}_+,\mathcal{B}(\mathbb{R}_+))$. If $g\!:\mathbb{R}_+\to \mathbb{R}_+$ is a measurable function, then integrating with respect to $F$ is just ordinary Lebesgue integration with respect to $\mu_F$, i.e. $$ \int_0^\infty g(s)\,\mathrm dF(s):=\int_{\mathbb{R}_+}g(s)\,\mu_F(\mathrm ds).\tag{1} $$ Now, the integral $$ \int_0^\infty H_s^2\,\mathrm d[M,M]_s $$ is just the $\omega$-wise construction of the integral in $(1)$.
H: finding the number of non-appearing integers in three different sequences English, Math and Science classes started on the very first day of a month. The Mathematics class schedule is 1, 3, 5, 7, 9, ... The schedule for English is 1, 4, 7, 10, 13, ... and for Science it is 1, 5, 9, 13, 17, ... In the next three months, on how many days will there be no classes in any of the above subjects? I first tried to find the number of odd integers between 0 and 181, including the latter, and argued that the same odd numbers will appear in the next two pattern sequences, but of course there will be a few more odd numbers. I tried to do some tedious calculation and ended up with nothing. Then I tried to see the pattern in the numbers that do not appear, but that did not help either. A prod in the right direction would help at this stage, I think. AI: Hint: try looking at the sequences modulo $12$. You will find that certain values mod $12$ do not appear in any sequence.
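A brute-force check of the hint (a sketch that assumes "three months" means days 1 through 90, which may differ from the intended month lengths): Math meets on days $d\equiv 1\pmod 2$, English on $d\equiv 1\pmod 3$, Science on $d\equiv 1\pmod 4$, so the pattern of class-free days repeats every $\operatorname{lcm}(2,3,4)=12$ days.

```python
# Count class-free days among days 1..90 (assuming "three months" = 90 days).
def has_class(d):
    return d % 2 == 1 or d % 3 == 1 or d % 4 == 1   # Math, English, Science

free_days = [d for d in range(1, 91) if not has_class(d)]
print(len(free_days))                       # number of class-free days
print(sorted({d % 12 for d in free_days}))  # residues mod 12 that never see a class
```

Under that 90-day assumption this prints 30 free days, and the free residues mod 12 are $\{0,2,6,8\}$, which is the pattern the hint points at.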
H: Show that $\sin6\alpha\equiv \sin2\alpha(16\cos^4\alpha-16\cos^2\alpha+3)$ $$\sin6\alpha\equiv \sin2\alpha(16\cos^4\alpha-16\cos^2\alpha+3)$$ Can you help me with De Moivre's theorem and how I would go about tackling this question? I understand that De Moivre's theorem states that $(\cos\alpha+i\sin\alpha)^n \equiv \cos n\alpha+i\sin n\alpha$, but I don't see how this would come into use in this question. AI: From De Moivre's theorem, $$ \cos 6\alpha+i\sin 6\alpha=(\cos\alpha+i\sin \alpha)^6=\\(\cos^6\alpha+\ldots)+i\left({6\choose 1}\cos^5\alpha\sin\alpha-{6\choose 3}\cos^3\alpha\sin^3\alpha+{6\choose 5}\cos\alpha\sin^5\alpha\right). $$ Comparing imaginary parts, $$\sin 6\alpha={6\choose 1}\cos^5\alpha\sin\alpha-{6\choose 3}\cos^3\alpha\sin^3\alpha+{6\choose 5}\cos\alpha\sin^5\alpha=\\ 2\cos\alpha\sin\alpha\left(3\cos^4\alpha-10\cos^2\alpha\sin^2\alpha+3\sin^4\alpha\right),$$ and substituting $\sin^2\alpha=1-\cos^2\alpha$ and $2\cos\alpha\sin\alpha=\sin2\alpha$ gives the stated identity.
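A symbolic sanity check of the identity (a sketch assuming SymPy; it is not part of the De Moivre argument, just a way to confirm the signs above):

```python
import sympy as sp

a = sp.symbols('a', real=True)
lhs = sp.sin(6 * a)
rhs = sp.sin(2 * a) * (16 * sp.cos(a)**4 - 16 * sp.cos(a)**2 + 3)

# Expand both sides into sin(a), cos(a) and simplify the difference.
diff = sp.expand_trig(lhs - rhs)
print(sp.simplify(diff))   # expected output: 0, confirming the identity
```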
H: How to express exclusive intersections? I have a function $f:\mathbb{R}^2 \rightarrow\mathbb{R}, \ \mathbf{x} \mapsto \sum_{i = 1}^{n} w_i \mathbf{1}_{A_i}(\mathbf{x})$ for $w_i \in \mathbb{R}$, where the $A_i$ are allowed to intersect. Thus for the case $n=3$ we might have something like a three-set Venn diagram ($A_1 = A, A_2 = B, A_3 = C$ here), where at each exclusive intersection, e.g. $AB = A \cap B \setminus C$, $f$ takes the value $w_A + w_B$. Now, I would like to express $f$ as the sum of indicators over non-intersecting sets (i.e. a partition of the underlying space): $$f(\mathbf{x}) = (w_A + w_B + w_C) \mathbf{1}_{ABC}(\mathbf{x}) + (w_A + w_B) \mathbf{1}_{AB}(\mathbf{x}) + (w_A + w_C) \mathbf{1}_{AC}(\mathbf{x}) + \ldots$$ How to express this for general $n$? (In the same vein as XOR, the notion of exclusive intersection would really help here, but I'm unsure of its notation.) AI: Let $X=\{A_1,A_2,\ldots, A_n\}$. For each nonempty $S\in\mathcal P(X)$ denote $M(S)=\bigcap_{A\in S}A\setminus \bigcup_{A\in X\setminus S}A$ and $W_S=\sum_{A\in S}w_A$. Then $$f(\mathbf x)=\sum_{\emptyset\neq S\subseteq X} W_S\,\mathbf 1_{M(S)}(\mathbf x)$$ is such a sum: the sets $M(S)$ are pairwise disjoint, and $M(S)$ is exactly the "exclusive intersection" of the $A_i\in S$. Alternatively, we have a map $\phi\colon\mathbb R^2\to \mathcal P(X)$, $\mathbf x\mapsto \{A\in X\mid \mathbf x\in A\}$, and can write $\phi^{-1}(\{S\})$ instead of $M(S)$ above.
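If a computational restatement helps, here is a small Python sketch (the function and variable names are mine, not from the question) that evaluates $f$ both ways, once as $\sum_i w_i\mathbf 1_{A_i}$ and once as a sum over the exclusive intersections $M(S)$, and checks that the two agree:

```python
from itertools import combinations

def f_original(x, sets, weights):
    """f(x) = sum_i w_i * 1_{A_i}(x); each A_i is a membership predicate."""
    return sum(w for A, w in zip(sets, weights) if A(x))

def f_partitioned(x, sets, weights):
    """The same f, written as a sum over the exclusive intersections M(S)."""
    n = len(sets)
    total = 0.0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            in_all_S = all(sets[i](x) for i in S)
            in_no_other = not any(sets[i](x) for i in range(n) if i not in S)
            if in_all_S and in_no_other:          # x lies in M(S)
                total += sum(weights[i] for i in S)
    return total

# Example: three overlapping discs in the plane, as in a three-set Venn diagram.
A = lambda p: (p[0] - 0)**2 + (p[1] - 0)**2 <= 4
B = lambda p: (p[0] - 2)**2 + (p[1] - 0)**2 <= 4
C = lambda p: (p[0] - 1)**2 + (p[1] - 2)**2 <= 4
sets, weights = [A, B, C], [1.0, 2.0, 4.0]

for p in [(0, 0), (1, 0), (1, 1), (5, 5)]:
    assert f_original(p, sets, weights) == f_partitioned(p, sets, weights)
print("both expressions agree on the sample points")
```

The point of the design is that each $\mathbf x$ lies in exactly one $M(S)$ (namely the one with $S=\phi(\mathbf x)$), so the inner loop adds $W_S$ at most once per point, reproducing $f(\mathbf x)$.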
H: A question about a limit in a Hilbert space Suppose $H$ is a Hilbert space, $v_{n},z \in H$, and suppose that $$ \lim\langle x, v_{n}\rangle = \langle x, z \rangle$$ for all $x$ in some dense subset of $H$. Then can I say that the sequence $v_{n}$ converges to $z$? Note: $v_{n}$ is not given to be convergent. AI: No; the classic example is the space $L^2([0,2\pi])$ with $v_n(t)=\sin(nt)$ and $z=0$. By the Riemann-Lebesgue lemma (http://en.wikipedia.org/wiki/Riemann%E2%80%93Lebesgue_lemma), $\langle x, v_n\rangle=\int_0^{2\pi}x(t)\sin(nt)\,dt\to 0$ for every $x\in L^2([0,2\pi])$ (indeed for every $x\in L^1([0,2\pi])$), so the hypothesis holds not just on a dense subset but on all of $H$. Yet $\lVert v_n\rVert=\sqrt{\pi}$ for every $n$, so $v_n$ does not converge to $z=0$ in norm: the hypothesis only gives weak convergence.
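A quick numerical illustration of that example (a sketch, not a proof, assuming NumPy): for a fixed test function $x$ the inner products $\langle x,\sin(n\cdot)\rangle$ shrink as $n$ grows, while $\lVert\sin(n\cdot)\rVert$ stays near $\sqrt{\pi}$, so the convergence is only weak.

```python
import numpy as np

t, dt = np.linspace(0, 2 * np.pi, 200000, endpoint=False, retstep=True)
x = np.exp(-t) * (1 + t)                   # an arbitrary test function in L^2([0, 2*pi])

for n in (1, 10, 100, 1000):
    vn = np.sin(n * t)
    inner = np.sum(x * vn) * dt            # <x, v_n>; tends to 0 (Riemann-Lebesgue)
    norm = np.sqrt(np.sum(vn * vn) * dt)   # stays near sqrt(pi) ~ 1.7725
    print(n, round(inner, 6), round(norm, 6))
```

The printed norms stay essentially constant while the inner products decay roughly like $1/n$ for this smooth test function.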
H: A simple problem of linear algebra in infinite dimension Let $E$ be a vector space (of infinite dimension) and $u : E \rightarrow E$ a linear map. Suppose that $E/u(E)$ has finite dimension. Is it true then that this dimension is equal to $\dim \ker u$? In finite dimension the rank theorem says yes. But here $E$ is typically infinite-dimensional. I realize it is still true that there is an isomorphism $\overline{u} : E/\ker u \rightarrow u(E)$, so I guess if we could prove that $E/(E/\ker u) \simeq \ker u$ it would be enough. EDIT: I'm not sure if this is relevant, but in my situation there is a (Banach) topology on $E$ and $u$ is continuous. Maybe this is important if we need direct sum decompositions. However, the question is purely algebraic, so I don't really know... AI: No, this isn't necessarily true. One of the main novel aspects of infinite-dimensional vector spaces is that they can be isomorphic to proper subspaces, i.e. $u$ may map $E$ onto $D\subsetneq E$ with no kernel. This is essentially no different from the set-theoretic fact that infinite sets are in bijection with proper subsets. For a concrete example, consider the vector space $\Bbb{R}[X]$ of real polynomials and let $u(p)=Xp$. Then $u$ has no kernel, but the one-dimensional subspace of constant polynomials is missing from the image, so $\dim E/u(E)=1\neq 0=\dim\ker u$. It doesn't matter that your $E$ is a Banach space, as similar examples are then forthcoming; for instance, the right-shift operator on $\ell^2$ is continuous and injective, yet its image has codimension $1$.