H: Value of $f''(c)$ Suppose $f$ is defined in $[a,b]$ and $c\in(a,b)$ and suppose $f'$ exists in an open interval containing $c$ and that $f''(c)$ exists. Show that $$\lim_{h\rightarrow 0}\frac{f(c+2h)-2f(c+h)+f(c)}{h^2}=f''(c)$$ Give an example to show that the limit above may exist even though $f'(c)$ does not exist. So following the definitions, we have $$f''(c)=\lim_{h\rightarrow0}\dfrac{\lim_{i\rightarrow0}\dfrac{f(c+h+i)-f(c+h)-f(c+i)+f(c)}{i}}{h}$$ I don't know how to show that this quantity is equal to the quantity in question. Also, this post shows that it's not possible to pull out the limit for any function. AI: Hint Using Taylor's Theorem one can write $$f(c+2h)=f(c)+f'(c)2h+f''(c)2h^2+o(h^2)$$ $$2f(c+h)=2f(c)+f'(c)2h+f''(c)h^2+o(h^2)$$ Thus $$f(c+2h)-2f(c+h)+f(c)=f''(c)h^2+o(h^2)$$ Which is what we wanted.
H: Drawing points on Argand plane The points $5 + 5i$, $1− 3i$, $− 4 + 2i$ and $−2 + 6i$ in the Argand plane are: (a) Collinear (b) Concyclic (c) The vertices of a parallelogram (d) The vertices of a square So when I drew the diagram, I got a rectangle in the 1st and 2nd quadrant. So, are they vertices of a parallelogram? I am not sure! AI: It's not a rectangle, and it's certainly not contained only in the first and second quadrants. Try plotting the points again. If you make further efforts on your own and need help, the following Mathematica code plots and labels the points: Show[ListPlot[{{5, 5}, {1, -3}, {-4, 2}, {-2, 6}}, PlotRange -> {{-7, 7}, {-7, 7}}, AspectRatio -> 1, PlotStyle -> {Red, PointSize[0.02]}, LabelStyle -> Directive[12]], Graphics[Text[Style["5+5\[ImaginaryI]", Directive[18]], {5, 4.1}]], Graphics[Text[Style["1-3\[ImaginaryI]", Directive[18]], {2.2, -3}]], Graphics[Text[Style["-4+2\[ImaginaryI]", Directive[18]], {-4, 2.7}]], Graphics[Text[Style["-2+6\[ImaginaryI]", Directive[18]], {-3.4, 6}]]]
H: Find a recursive formula for the following problem Let $a_n$ be the number of brick paths of length $n \geq 1$. We have 3 types of bricks: blue ($2$ cm long), red ($3$ cm long), and green ($1$ cm long), with the restriction that a blue brick can't be placed next to a green brick. I was trying to work this out but I'm getting into an endless loop: I know that after a red brick we have $a_{n-3}$ legal paths, but I can't go any further because both other colors are getting me into the loop. AI: Hint: Let $b_n$ be the number of paths that are $n$ long and end with blue, $g_n$ that end with green, $r_n$ that end with red. Then $a_n=b_n+g_n+r_n$. You should be able to find a coupled set of recurrences. For example, $r_n=g_{n-3}+r_{n-3}+b_{n-3}=a_{n-3}$ as you say.
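Not part of the original answer: a minimal Python sketch of one possible coupled recurrence along the lines of the hint, under the assumption that the blue/green restriction applies in both orders; the base cases and the helper name `count_paths` are my own choices.

```python
def count_paths(N):
    # dp[n][c]: number of legal paths of total length n whose last brick is c.
    # Brick lengths: green = 1, blue = 2, red = 3; blue and green may not touch
    # (assumed symmetric, i.e. in either order).
    dp = {n: {"G": 0, "B": 0, "R": 0} for n in range(N + 1)}
    a = [0] * (N + 1)
    for n in range(1, N + 1):
        # green: the brick before it (if any) must be green or red
        dp[n]["G"] = 1 if n == 1 else dp[n - 1]["G"] + dp[n - 1]["R"]
        # blue: the brick before it (if any) must be blue or red
        if n >= 2:
            dp[n]["B"] = 1 if n == 2 else dp[n - 2]["B"] + dp[n - 2]["R"]
        # red: anything (or nothing) may come before it, so a_{n-3} choices
        if n >= 3:
            dp[n]["R"] = 1 if n == 3 else a[n - 3]
        a[n] = dp[n]["G"] + dp[n]["B"] + dp[n]["R"]
    return a

print(count_paths(10)[1:])   # a_1, ..., a_10 = [1, 2, 2, 4, 6, 9, 14, 21, 33, 50]
```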
H: How to calculate the density of $Y=X^4$ Let $X$ be a uniformly distributed variable on $[0,1]$. What is the density of $Y=X^4$? How do you calculate it? Thank you AI: We take the slow way, by first computing the cumulative distribution function $F_Y(y)$ of $Y$. So we want $\Pr(Y\le y)$. First do the really easy parts. If $y\le 0$, then $F_Y(y)=0$. If $y\ge 1$, then $F_Y(y)=1$. Now we deal with the interesting part, where $0\lt y\lt 1$. For such $y$, we have $$F_Y(y)=\Pr(Y\le y)=\Pr(X^4\le y)=\Pr(X\le y^{1/4})=y^{1/4}.$$ For the density function $f_Y(y)$ of $Y$, find the derivative of $F_Y(y)$.
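A quick Monte Carlo sanity check (my addition, not part of the original answer) of the CDF computed above; the comment also records the density obtained by differentiating $y^{1/4}$.

```python
import numpy as np

# Monte Carlo check: if X ~ Uniform[0,1] and Y = X^4, then F_Y(y) = y^(1/4) on (0,1),
# so differentiating gives the density f_Y(y) = (1/4) * y**(-3/4).
rng = np.random.default_rng(0)
y_samples = rng.random(1_000_000) ** 4

for y in (0.1, 0.3, 0.7):
    print(y, (y_samples <= y).mean(), y ** 0.25)   # empirical CDF vs. y^(1/4)
```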
H: How to put a norm on an ultrapower of normed spaces? I'm trying to understand how to form the ultrapower of a normed space. I read how to construct the underlying vector space here: http://en.wikipedia.org/wiki/Ultraproduct But I don't see an obvious norm which can be defined on the resulting equivalence classes. Is there a standard one which always works? AI: Let $I$ be an index set, let $\mathcal U$ be an ultrafilter on $I$. For each $i \in I$ let $(X_i, \|\cdot\|_i)$ be a normed space. Let $X$ be the $l^\infty$-product of the $X_i$, consisting of all families $(x_i)_{i \in I}$ such that $x_i \in X_i$ for all $i$ and $\sup_i \|x_i\|_i < \infty$. The ultraproduct is the quotient $X/N$ where $N$ consists of all $(x_i)$ with $\lim_{i,\mathcal U} \|x_i\|_i = 0$. (That's the limit along an ultrafilter.) On the quotient space $X/N$, the norm of the equivalence class of $(x_i)$ is $\lim_{i,\mathcal U}\|x_i\|_i$.
H: Find the standard deviation of $ \frac{\gamma}{\sqrt{2\pi\sigma}}\exp\left(-\frac{\gamma^2}{\sigma}\frac{(x-\mu)^2}{2}\right)$ Given $\frac{\gamma}{\sqrt{2\pi\sigma}}\exp\left(-\frac{\gamma^2}{\sigma}\frac{(x-\mu)^2}{2}\right)$ as a normal distribution PDF with mean $\mu$, I'd like to solve for the std deviation in terms of $\gamma$ and $\sigma$. Any hints or tips on how to think about this and solve it would be appreciated. I am familiar with the normal distribution PDF: $\frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$ and I can see that the former and latter PDF are slightly different in terms of $\sigma $ and $\gamma$ and I know that the integral of the PDF over the real line equals 1. I tried doing substitutions with different quotients of $\sigma$ and $\gamma $, like $\frac{\gamma}{\sqrt{\sigma}} $ and then simplifying, etc. Still, as of now I haven't figured it out. Thanks. AI: The expression in your title does not define a normal density unless you assume that $\sigma > 0$, and indeed most people would use $\sigma^2$ in the normal density as you have indeed done in the text of your question. Provided that $\sigma > 0$, note that $$\frac{\gamma}{\sqrt{2\pi\sigma}}\exp\left(-\frac{\gamma^2}{\sigma}\frac{(x-\mu)^2}{2}\right) = \frac{1}{\sqrt{2\pi(\sigma/\gamma^2)}}\exp\left(-\frac{(x-\mu)^2}{2(\sqrt{\sigma}/\gamma)^2}\right)$$ is a normal pdf with mean $\mu$ and variance $\left(\frac{\sqrt{\sigma}}{\gamma}\right)^2 = \frac{\sigma}{\gamma^2}$ and so you can read off the standard deviation as $\frac{\sqrt{\sigma}}{\gamma}$ without resorting to integration.
H: Optimizing the area of a square and circle I am supposed to optimize the sum of the areas of a square and a circle made from a 12cm piece of wire so as to have the smallest area. To me this problem seems kind of obvious. A circle is a more efficient use of space so I know that it will have more area since it doesn't waste space on corners. This is a math thing I learned and I try and justify it by doing math stuff. I want the square as large as possible. I know that the math on this is wrong when I do it but I can't make sense of it. I know that a square is more inefficient so don't I want to simply minimize the amount of the circle being used? Where does the logic come in that it should be somewhere in between? The answer is 5.28cm for the circle but I don't understand the logic behind it, I thought I would want as small a circle as possible. There must be some weird exponential growth of a circle past a certain point that I don't understand. AI: Let the amount of wire used on the square be $x$ so that the amount of wire used on the circle is $12-x$. The sum of the areas in terms of $x$ is then $$f(x) = \frac{x^2}{16} + \frac{(12-x)^2}{4 \pi}$$ Find the $x$ that minimizes area by solving $f'(x)=0$: $$\frac{x}{8} - \frac{12-x}{2 \pi}=0 \implies x=\frac{6/\pi}{(1/8)+(1/(2 \pi))} = \frac{48}{\pi +4}\approx 6.72119$$ Note that $f''(x) > 0$, so this is indeed a minimum. Plug this value of $x$ into $f(x)$ to get this minimum area.
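As a numeric companion to the answer (my addition), the sketch below evaluates the area function at the critical point and at the two endpoints; the endpoint values show that giving all the wire to the circle actually maximizes the total area, which is the intuition the asker was missing.

```python
import math

# x = wire given to the square's perimeter, 12 - x = wire given to the circle.
def total_area(x):
    return (x / 4) ** 2 + (12 - x) ** 2 / (4 * math.pi)

x_star = 48 / (math.pi + 4)          # critical point from f'(x) = 0
print(x_star, 12 - x_star)           # ~6.72 cm to the square, ~5.28 cm to the circle
print(total_area(x_star))            # minimal total area, ~5.04 cm^2
print(total_area(0), total_area(12)) # all wire to the circle (~11.46) vs. all to the square (9)
# The all-circle split gives the largest area, so minimising the total pushes
# wire toward the square rather than toward the circle.
```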
H: Does the Convolution of Two Series Require Absolute Convergence? Let $A=(a_n)_{n=0}^\infty$ and $B=(b_n)_{n=0}^\infty$ be sequences of real numbers. Let $C = (c_n)_{n=0}^\infty$ be the sequence such that $$c_n = {a_0}{b_n} + {a_1}{b_{n-1}} +\cdots+ {a_n}{b_0}.$$ Let $G_A(s), G_B(s), G_C(s)$ denote the generating functions of sequences $A,B$ and $C$, respectively. My textbook claims that $$G_C(s) = \sum_{n=0}^{\infty}{c_n}{s^n} = \sum_{n=0}^{\infty} ({\sum_{k=0}^{n}{a_k}{b_{n-k}}}){s^n}= (\sum_{k=0}^{\infty}{a_k}{s^k})(\sum_{n=k}^{\infty}{b_{n-k}}{s^{n-k}}) = G_A(s) G_B(s).$$ I was able to verify the equality in the finite case via induction. In the infinite case, I think that absolute convergence is necessary. However, the book did not mention anything about this. Going slightly off topic, I'd like to ask if this is why you cannot use induction to prove that two expressions of $n \in \mathbb N$ are equivalent when $n$ goes to infinity. In this example, induction does not say anything about convergence. The instructor in my Analysis class made that point, but I did not really understand it at the time. AI: Convergence is not an issue: these are formal power series, and as such are to be thought of as algebraic objects with an associated ‘arithmetic’, not as real- or complex-valued functions. As Herbert S. Wilf says in generatingfunctionology, ‘A generating function is a clothesline on which we hang up a sequence of numbers for display’. It is in the first instance a convenient way to manipulate the sequence of coefficients. In particular, if $A=\langle a_n:n\in\Bbb N\rangle$ and $B=\langle b_n:n\in\Bbb N\rangle$ are sequences (not series!) of real numbers, we can construct their convolution, $C=\langle c_n:n\in\Bbb N\rangle$, directly, by defining $c_n=\sum_{k=0}^na_kb_{n-k}$, or we can get it ‘automagically’ by taking the product of the generating functions of $A$ and $B$: $$\begin{align*} G_A(x)G_B(x)&=\left(\sum_{n\ge 0}a_nx^n\right)\left(\sum_{n\ge 0}b_nx^n\right)\\\\ &=\sum_{n\ge 0}\left(\sum_{k=0}^na_kb_{n-k}\right)x^n\\\\ &=G_{A*B}(x)\;. \end{align*}$$ There’s actually nothing to prove here: the step $$\left(\sum_{n\ge 0}a_nx^n\right)\left(\sum_{n\ge 0}b_nx^n\right)=\sum_{n\ge 0}\left(\sum_{k=0}^na_kb_{n-k}\right)x^n\;,$$ is a matter of definition. We define the product of formal power series to be the Cauchy product, generalizing the ordinary product of polynomials. Chapter $2$ of generatingfunctionology starts with a brief introduction to formal power series; you can freely download the whole book here. Now it’s true that sometimes we do want to treat generating functions as analytic objects rather than as purely formal algebraic objects, and then we do have to worry about convergence. But when we have convergence, the operations behave as one would expect.
H: Finding probabilities for a discrete random variable using a CDF I have a question about notation and I want to make sure I really understand the homework. The discrete random variable $X$ has cdf $F$ such that $$F(x)=\begin{cases} 0 & x < 1 \\ 1/4 & 1 \le x < 3 \\ 3/4 & 3 \le x < 4 \\ 1 & x \ge 4 \end{cases}$$ Find $P(X=2)$, $P(X=3.5)$, $P(X=4)$. Here is what I found. Can you tell me where I am making a mistake? Thanks $$P(X=2) = P(X \le 2) - P(X < 2) = \ ?$$ $$P(X=3.5) = P(X \le 3.5) - P(X \le 3) = 3/4 - 3/4 = 0$$ $$P(X=4) = P(X \le 4) - P(X < 4) = 1 - 3/4 = 1/4$$ AI: You are trying to use arithmetic/algebra to solve the problem. I prefer to try to visualize the cdf, and use the visualization to tell the story. You know, of course, that $F(x)$ is the probability that $X\le x$. It is the "weight" up to and including $x$. We will have to be very careful: for these discrete distributions, there can be a big difference between $\le$ and $\lt$. At any point $x\lt 1$, the weight of the stuff $\le x$ is $0$. Then the cdf jumps to $\frac{1}{4}$ at $1$. So there must be a weight of $\frac{1}{4}$ at $1$. In probability terms, $\Pr(X=1)=\frac{1}{4}$. Then the cdf stays steady at $\frac{1}{4}$ for quite a while, and jumps to $\frac{3}{4}$ at $x=3$. So there must be a weight of $\frac{3}{4}-\frac{1}{4}$ at $x=3$, that is, $\Pr(X=3)=\frac{1}{2}$. Then the cdf stays at $\frac{3}{4}$ until $x=4$, and jumps to $1$. Thus $\Pr(X=4)=\frac{1}{4}$. Our weights at $1$, $3$, and $4$ add up to $1$, so that's all there is. Adding up also provides a nice little check against minor errors. Now we can answer any question. The probability that $X=2$ is $0$, no weight at $x=2$. Same with $3.5$. And we already knew that the weight at $4$ is $\frac{1}{4}$. Formal remark: Let $F(x)$ be a cdf. We give an explicit "formula" for $\Pr(X=a)$: $$\Pr(X=a)=F(a)-\lim_{x\to a^-} F(x).$$ By $\lim_{x\to a^-} F(x)$ we mean the limit of $F(x)$ as $x$ approaches $a$ from the left.
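A small numeric illustration (my addition) of the jump formula quoted at the end of the answer, with the CDF hard-coded as a Python function.

```python
# P(X = a) = F(a) - lim_{x -> a^-} F(x), approximated by stepping just left of a.
def F(x):
    if x < 1:
        return 0.0
    elif x < 3:
        return 0.25
    elif x < 4:
        return 0.75
    return 1.0

def point_mass(a, eps=1e-9):
    return F(a) - F(a - eps)   # approximates the left limit

for a in (2, 3.5, 4, 1, 3):
    print(a, point_mass(a))    # 0, 0, 0.25, 0.25, 0.5
```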
H: Deleted Exponential Series and Injectivity (1)? Consider $\exp(z):=\sum_{n=1}^{\infty}\frac{z^n}{n!}.$ We know that the radius of convergence for this series is infinity, and hence it defines a holomorphic map which is not injective. I have the following questions: If we delete from the exponential series those terms with positive even powers, does the series we are left with define a holomorphic function which is injective? What about in general, i.e., let $\{n_\nu:\nu\in\mathbb{N}\setminus\{1\}\}$ be a strictly increasing sequence and delete those terms in the exponential series with power indices coming from this set. Now consider the deleted series. When does this define an injective map? AI: As the limit of $1/(n!^{1/n})$ is 0 (as $n \to \infty$), deleting any subset of terms produces a power series whose radius of convergence is $\infty$. In the case where the new power series still has infinitely many terms, then $\infty$ is an essential singularity, so the function cannot be injective, by the big Picard theorem. In the case where the new power series is a polynomial, then clearly it cannot be injective except in the cases when it's linear.
H: Symmetry in reduced residue systems This may be a stupid question, but it looks to me like the reduced residue systems modulo N are symmetrical about N/2; that is to say, there is the same number of integers not divisible by a factor of N that are smaller than N/2 as there are that are bigger. Is this the case, and is there literature I would need to reference in a paper that uses this fact? Just to be clear, in this case, I use "Reduced Residue System" to refer to the system of numbers counted by the totient of N, i.e. those numbers smaller than N that are relatively prime with N. AI: This is certainly true. You would not need to cite any literature to use such a thing in a paper, as the proof follows immediately from the definition of divisibility (I wrote two below). If you really wanted to cite something, cite any introductory number theory book that explains divisibility and modular arithmetic. For completeness, a pair of proofs. Claim: If $m < n$ has $\gcd(m,n) = 1$, then $\gcd(n-m,n)=1$. proof 1: $d|(n-m)$ and $d|n \iff d|m$ and $d|n$ by divisibility definitions. $\spadesuit$ proof 2: Let's use Bezout, as an exercise. Then $\gcd(n,m)=1 \implies$ there are integers $a,b$ s.t. $an + bm = 1$. But then $-b(n-m) + (a+b)n = 1$, so that $\gcd(n-m,n) = 1$ too. $\clubsuit$
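A quick computational check of the symmetry claim (my addition), using the pairing $k \mapsto N-k$ that the claim above makes rigorous.

```python
from math import gcd

# For each N, the residues in {1, ..., N-1} coprime to N split evenly below and above N/2.
for N in range(3, 50):
    coprime = [k for k in range(1, N) if gcd(k, N) == 1]
    below = sum(1 for k in coprime if k < N / 2)
    above = sum(1 for k in coprime if k > N / 2)
    assert below == above, N   # the map k -> N - k pairs them up
print("symmetry holds for N = 3..49")
```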
H: Explanation of an example of linear transformation This is an example from a text (Linear Algebra, Freidberg). I am trying to follow along, and I feel like I should know this from vector calc but I am missing something silly. The example is: Define the Linear transformation $T:P_2(R)\rightarrow M_{2x2}(R)$ by $$T(\,f(x)) = \begin {pmatrix} f(1)-f(2) & 0 \\ 0 & f(0) \end{pmatrix} \DeclareMathOperator{\span}{span}$$ since $\beta = ${$1, x, x^2$} is a basis for $P_2(R)$ we have \begin{align*} R(T) &= \span(T(\beta)) \\ &= \span(\{T(1), T(x), T(x^2)\}) \\ &= \span\left(\left\{\begin {pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \begin {pmatrix} -1 & 0 \\ 0 & 0 \end{pmatrix}, \begin {pmatrix} -3 & 0 \\ 0 & 0 \end{pmatrix} \right\}\right) \\ &= \span\left(\left\{\begin {pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \begin {pmatrix} -1 & 0 \\ 0 & 0 \end{pmatrix} \right\}\right). \end{align*} I wasn't sure how they got from the matrix defined above to the ones below. Neither $f(1)$ nor $f(2)$ are really defined, and plugging an $x$ or $x^2$ in there doesn’t help. This is frustrating, because I am trying to just get through an example so I can figure out this stuff. The book says “thus we have found a basis for $R(T)$”, but it is far from obvious to me. I understand they are using the theorem that says $$R(T) = \span(T(\beta)) = \span(\{T(v_1), T(v_2)...T(v_n)\}),$$ where the $v_i$ are vectors that form a basis. It’s getting through the matrix calculation above I am struggling with. AI: Your transformation is from the space of polynomials of degree at most 2 to the space of $2\times 2$ matrices with real coefficients. They tell you that the polynomials $1$, $x$, and $x^2$ are a basis for $P_2(\mathbb{R})$; the next three matrices are just the results of applying your transformation to these three polynomials: If $f(x)=1$ for all $x$, then $$ T(\,f(x))=\begin{pmatrix}f(1)-f(2) & 0\\0 & f(0)\end{pmatrix}=\begin{pmatrix}1-1 & 0\\ 0 & 1\end{pmatrix}=\begin{pmatrix}0&0\\0&1\end{pmatrix}. $$ If $f(x)=x$, then $$ T(\,f(x))=\begin{pmatrix}f(1)-f(2) & 0\\0 & f(0)\end{pmatrix}=\begin{pmatrix}1-2 & 0\\0 & 0\end{pmatrix}=\begin{pmatrix}-1 & 0\\0 & 0\end{pmatrix}.$$ Finally, if $f(x)=x^2$, then $$ T(\,f(x))=\begin{pmatrix}f(1)-f(2) & 0\\0 & f(0)\end{pmatrix}=\begin{pmatrix}1-4 & 0\\0 & 0\end{pmatrix}=\begin{pmatrix}-3 & 0\\0 & 0\end{pmatrix}. $$
H: Collection of Borel sets contains all $[a,b)$ Show that the collection of Borel sets is the smallest $\sigma$-algebra that contains intervals of the form $[a,b)$, where $a<b$. So I first prove that the collection $S$ of Borel sets contains those intervals. $S$ contains the closed interval $[a,b]$ and the (closed) single point $b$, so it contains the difference $[a,b)$. Now I have to prove that any $\sigma$-algebra containing all intervals $[a,b)$ must contain all elements of $S$. If I can prove that it must contain all open intervals $(a,b)$, I'll be done because any open set in $\mathbb{R}$ is a countable union of open intervals. But how do I show that $(a,b)$ belongs to such a $\sigma$-algebra? AI: $\sigma$-algebras are closed under countable unions. You need to use this to show that any $\sigma$-algebra containing the half open intervals contains the intervals as well. For instance $(0,1)= \bigcup_{n=1}^{\infty} [\frac{1}{n}, 1)$.
H: Three doors logic problem Imagine three doors where behind one door $\text{A}$ there is a new car, behind door $\text{B}$ there is a goat, and behind door $\text{C}$ there is a new car and a goat. The problem is that each door is labeled incorrectly... If you can open only one door, is it possible to label all the doors correctly? I'm trying to answer this in logical form, but I am unsure if my answer is good enough: Denote doors as $D_1, D_2, D_3$ $D_n $ is equal to either $A,B$ or $C$. If we assume that the correct solution is $$D_1 = A $$ $$D_2 = B $$ $$D_3 = C $$ then $$\begin{align} D_3 = A &\implies D_1=B \implies D_2 = C\\ D_3 = B &\implies D_1=C \implies D_2 = A\\ \end{align}$$ $$\Leftrightarrow$$ $$\begin{align} C = D_1 \implies D_2 = A \implies D_3 = B\\ C = D_2 \implies D_1 = B \implies D_3 = A\\ \end{align}$$ Would you consider this a valid solution? AI: The key is the assertion that each door is labelled incorrectly. If capitals stand for the correct labels and lower case letters for the incorrect ones currently present, you will find there are just two possibilities: $$Ab: Bc :Ca$$ $$Ac :Ba :Cb$$ Each of the doors is labelled differently in the two cases. Whichever door you open, you will therefore know which case applies, and will be able to apply correct labels without opening a second door.
H: Maclaurin Series for $\ln(x+\sqrt{1+x^2})$ Is there a trick to finding the Maclaurin series for $f(x)=\ln(x+\sqrt{1+x^2})$ fast? Vaguely, I recall this being some sort of inverse hyperbolic function, but I'm not sure about which one, and what its derivatives are. This is a past exam question and I would like to know, should something similar appear again, if there is a quick method of finding this series without using inverse hyperbolic functions. The derivatives of this look absolutely painful to calculate, although easy at $x=0$, but I'm not sure if I could easily see a pattern for the $n$-th derivative at $0$. AI: The easiest method: note that $$ \frac{d}{dx}\left[\ln(x+\sqrt{1+x^2})\right]=\frac{1}{\sqrt{1+x^2}}=(1+x^2)^{-1/2}. $$ This last can be expressed through a binomial series; you can then integrate term by term, and solve for the constant of integration, to get the series you want.
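If a computer algebra system is available, the series can also be checked directly; this SymPy sketch (my addition, not part of the original answer) compares the direct expansion with the term-by-term integration of the binomial series described in the answer.

```python
import sympy as sp

x = sp.symbols('x')

# Direct expansion of ln(x + sqrt(1 + x^2)) = asinh(x).
direct = sp.series(sp.log(x + sp.sqrt(1 + x**2)), x, 0, 8)

# Binomial series of (1 + x^2)^(-1/2), integrated term by term.
integrated = sp.integrate(sp.series((1 + x**2)**sp.Rational(-1, 2), x, 0, 8).removeO(), x)

print(direct)       # x - x**3/6 + 3*x**5/40 - 5*x**7/112 + O(x**8)
print(integrated)   # same polynomial part
```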
H: MLE problem - the likelihood function has no maximum. The probability density function is: $f(x)=e^{\theta -x}, \ 0 \le \theta \le x $ Given an n-element sample, the likelihood function is: $$L(\theta)=\exp \left( n\theta - \sum_{i=1}^n x_i \right)$$ Since the function has no maximum, but increases with $\theta$, I would take the estimator to be $\hat{\theta}=\max X_i$, following an example I did in class. But the answers I have (not 100% correct) say to take the minimum. Can someone shed some light on this? AI: This will be clear if you write the likelihood function more carefully as \begin{align} L(\theta) &= e^{ \left( n\theta - \sum_{i=1}^n x_i \right) }\prod_{i=1}^n 1_{\{ \theta \le x_i\}} \\ &= e^{ \left( n\theta - \sum_{i=1}^n x_i \right) }1_{\{ \theta \le \min\limits_{1\le i\le n} x_i\}} \end{align} As you correctly noted, $L(\theta)$ is an increasing function of $\theta$ on the interval $[0, \min x_i]$. And for $\theta > \min x_i$, the likelihood is zero. Thus, $\hat{\theta} = \min X_i$.
H: Number of distinct paths in a graph with $n$ vertices Let $T = (V , E)$ be a tree with $|V | = n\geqslant 2$. How many distinct paths are there (as subgraphs) in $T$? I already have the answer to this question as $\binom{n}{2}$. The problem that I'm having is finding anything in the text that helps me to figure out how to arrive at this answer. AI: Any two vertices of a tree are connected by a unique path, so there are exactly $\dbinom n2$ paths of length at least one. If you also count one-point paths (why wouldn't you?) there are $n$ of those, making a total of $\dbinom n2+\dbinom n1=\dbinom{n+1}2=\dfrac{n(n+1)}2$ paths in a tree $T=(V,E)$ with $|V|\ge1$.
H: Existence of maximum area triangle among all whose vertices are in a compact subset of $\mathbb{R}^2$ Prove that among all triangles whose vertices are in a compact subset $K$ of $\mathbb{R}^2$, there exists at least one with maximum area. I am at a loss as to how to rigorously show that you can generate a finite open cover of $K$ using triangles at least one of which has maximum possible area. Below is my first attempt. Any help would be greatly appreciated. Define the set $T=\{(x,y,z) : x,y,z\in K, x\neq y, x \neq z, y \neq z\}$, and let $\operatorname{tri}(t)$ represent the triangle with vertices at $t=(x,y,z) \in T$. Moreover, define $\operatorname{tri}(a) \cap \operatorname{tri}(b)$ to be empty if the two triangles overlap only on an edge. Choose $t_0 \in T$. If there exists $u \in T$ such that $\operatorname{tri}(t_0) \subset \operatorname{tri}(u)$, then set $t_0 = u$. Repeat this process until no such $u\in T$ exists. Since $K$ is closed and bounded, this process will terminate. (I have no proof of this.) Choose $t_i \in T$ such that $$ \operatorname{tri}(t_i) \bigcap \left( \bigcup_{k=0}^{i-1} \operatorname{tri}(t_k) \right) = \emptyset $$ and repeat the same process used to find $t_0$ above. Since $K$ is compact, the $\cup_{i} \operatorname{tri}(t_i)$ forms a finite open cover of $K$ (I have no proof of this) where the area of each $\operatorname{tri}(t_i)$ is greater than that of all other $t\in T$ such that $\operatorname{tri}(t)\subset \operatorname{tri}(t_i)$. Thus, since we have a finite cover, there exists $\operatorname{max}_i \operatorname{area}( \operatorname{tri}(t_i))$. The above is just shoddy and unconvincing, and most likely needlessly complicated and incorrect. AI: Let $A(x,y,z)$ denote the area of a triangle with vertices $x,y,z$ in $K$. It is easy enough to show that $A$ is continuous, for example by noting that $$A(x,y,z)=\frac12\|(y-x)\times (z-x)\|$$ where $\times$ denotes the cross-product. Furthermore, since $K$ is compact, $K^3$ is compact by Tychonoff's theorem. Since $K^3$ is the domain of $A$, and continuous functions on compact domains attain their supremum, we conclude that $A$ obtains its supremum, i.e. that we have some $x_0,y_0,z_0\in K$ such that $A(x_0,y_0,z_0)$ is maximal. Thus the triangle with vertices $x_0,y_0,z_0$ has maximal area.
H: How to solve infinite square root of 1+ itself or: $\varphi=\sqrt{1+\varphi}$ How do I find $\varphi$ for $\varphi=\sqrt{1+\varphi}$ or $\varphi=\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{1+...}}}}}$? AI: The commentators have already given a hint for how to get the possible value of $\varphi=\varphi_\infty$; it remains to show the sequence converges. Let $\varphi_n=\underbrace{\sqrt{1+\sqrt{1+\sqrt{\dots}}}}_{n}$. Then it is clear that $\varphi_n>\varphi_{n-1}$, and $\varphi_\infty>\varphi_n$ provided $\varphi_\infty>\varphi_{n-1}$, so by induction the sequence is increasing and bounded above by $\varphi_\infty$. Hence $\varphi_n$ converges.
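A numerical illustration (my addition): iterating $\varphi \mapsto \sqrt{1+\varphi}$ approaches the positive root of $\varphi^2 = 1+\varphi$, the golden ratio.

```python
import math

phi = 1.0
for n in range(1, 11):
    phi = math.sqrt(1 + phi)   # phi_{n+1} = sqrt(1 + phi_n)
    print(n, phi)

print("limit candidate:", (1 + math.sqrt(5)) / 2)   # 1.618033988749...
```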
H: Why is this composition of scheme morphisms proper? I am learning about proper morphisms from Liu's book. I have a question about the proof of the Lemma 3.17 on page 104. Let $A$ and $B$ be rings and suppose $\operatorname{Spec} B$ is proper over $A$. The lemma says that $B$ is finite over $A$. The proof begins by supposing that $B$ is singly generated over $A$, so that there is a surjective morphism $A[T]\rightarrow B$. We can consider $\operatorname{Spec} A[T]$ as an open subscheme of $\operatorname{Proj} A[T_1,T_2]$ by identifying it with the principal open set $D_+(T_2)$, setting $T=T_1/T_2$. We have the following composition of morphisms, where the first is a closed immersion and the second is an open immersion. $$\operatorname{Spec} B \rightarrow \operatorname{Spec} A[T] \rightarrow \operatorname{Proj} A[T_1,T_2]$$ Why is this proper? To justify this, the book quotes a result saying that if one has a proper composition of two morphisms $f\circ g$, and $f$ is separated, then $g$ is proper. But here, we know the first morphism is proper and we want to deduce the composition is proper. If we knew this was a closed immersion, I think we could compose it with a map to $\operatorname{Spec} A[T_1,T_2]$ and use the fact that projective morphisms are proper in conjunction with the result cited above. But it does not appear to be a closed immersion. AI: When [poorly] learning this stuff I found these proofs confusing. I think what you want to do is factor the given map as \[ \operatorname{Spec} B \to \operatorname{Proj} A[T_1, T_2] \to \operatorname{Spec} A. \] The first map is the one you want to prove is proper; the second is the canonical map, which is projective and hence separated. The composition is assumed to be proper, so you can apply your result.
H: Proving that $f(x)=\sum^{\infty}_{n=1}\frac{1}{n^2-x^2}$ is continuous. Prove that $f(x)=\sum^{\infty}_{n=1}\frac{1}{n^2-x^2}$ is continuous at all $x \notin \Bbb N$. An attempt: We should consider showing that $\sum^{\infty}_{n=1}\frac{1}{n^2-x^2}$ converges uniformly. Also, $$f(x)=\sum^{\infty}_{n=1}\frac{1}{n^2-x^2}=\sum^{\infty}_{n=1}\frac{1}{2n}\left(\frac{1}{n-x}+\frac{1}{n+x}\right)$$ AI: Let $k\in \mathbb Z$ and $a,b\in\mathbb R$ such that $a<b$ and $[a,b]\subset (k,k+1)$ then: $$\frac{1}{|n^2-x^2|}\leq c_n=\max\left(\frac{1}{|n^2-a^2|},\frac{1}{|n^2-b^2|}\right)\quad\forall\ x\in [a,b]$$ and the series $$\sum_n c_n$$ is convergent since $c_n\sim_\infty \frac{1}{n^2}$ so the given series is uniformly convergent in every interval $[a,b]$ and with the continuity of the functions $x\mapsto \frac{1}{n^2-x^2}$ we can conclude.
H: Conditional probability is not a probability measure, but it does satisfy each of the requisite axioms with probability 1. I don't quite understand what the statement in the question means, which appears in this paragraph of a textbook I am reading. How can it not be a probability measure (not even almost surely) but satisfy each of the requisite axioms with probability 1? Why are the two things not the same? AI: Consider for example the summability identity $S(\mathbf a)$ for a given sequence $\mathbf a=(A_n)_n$ of disjoint events, namely the fact that $$ \sum_nP(A_n\mid\mathcal G)=P\left(\bigcup_nA_n\mid\mathcal G\right). $$ Then each $S(\mathbf a)$ holds except on a negligible event $N(\mathbf a)$, hence all these identities hold simultaneously on the complement of $$ \mathcal N=\bigcup_\mathbf aN(\mathbf a). $$ If there are uncountably many different sequences $\mathbf a$, the set $\mathcal N$ may not be negligible but, for $P(\ \mid\mathcal G)(\omega)$ to be a probability measure for some $\omega$ in $\Omega$, one needs $\omega$ to be in $\Omega\setminus N(\mathbf a)$ for every $\mathbf a$, that is, to be in $\Omega\setminus\mathcal N$.
H: $\sum_{k=1}^{\infty}(a_k+\epsilon/2^k)$, where $a_k\ge 0$ I know that if $\sum a_k$ and $\sum b_k$ converge, then: $$\sum_{k=1}^{\infty}(a_k+b_k)=\sum_{k=1}^{\infty}a_k+\sum_{k=1}^{\infty}b_k$$ However, as I read books on measure theory, I come across lines like this all the time: $$\sum_{k=1}^{\infty}(m^*(E_k)+\epsilon/2^k)=\sum_{k=1}^{\infty}m^*(E_k)+\sum_{k=1}^{\infty}\epsilon/2^k$$ They are not the same concept, similar, but not the same as in the second case $\sum_{k=1}^{\infty}m^*(E_k)$ does not necessarily converge. At this stage of the proof, we do know that $0\le m^*(E_k)<+\infty$. So, can I ask someone to provide a rigorous proof of the following statement: If $a_k\ge 0$ for all $k$, then: $$\sum_{k=1}^{\infty}(a_k+\epsilon/2^k)=\sum_{k=1}^{\infty}a_k+\epsilon$$ Note, I understand that $\sum_{k=1}^{\infty}\epsilon/2^k=\epsilon$, so you don't have to worry about showing that. Thanks. AI: If $\sum{a_k}$ does not converge, it is $+\infty$ by your assumption that $a_k\ge 0$ for all $k$, and therefore the right hand side is $+\infty$. But then the left hand side is also $+\infty$, by term by term comparison (since $a_k+\epsilon/2^k\ge a_k$ for all $k$). The other possibility is that $\sum a_k$ converges, and then we can apply the result you quote. In both cases, we have equality.
H: Infinite Cartesian Product minus AC example I've been looking for an example of an empty Cartesian product whose factors are non-empty. From what I've gathered so far, this statement is equivalent to the negation of AC, ie. AC fails. So constructing an example means finding a collection of sets for which no choice function exists. But I haven't a clue how to go about such a proof. Can you actually construct an empty cartesian product where each factor is nonempty, or is it not possible by virtue of the definition? If the latter, is there a proof of such a statement? Obviously if it is, it's probably over my head, but I'd be interested nonetheless. Thanks in advance! AI: No. You cannot really construct something like that. The negation of the axiom of choice is as non-constructive as the axiom of choice itself. All it tells us is that somewhere in the universe of sets there exists a family of non-empty sets whose product is empty. If you want more, you will have to assume more. What does that mean? For example we know that we can assume that there is a family of sets of real numbers whose product is empty, and therefore the product of all non-empty sets of real numbers is empty. But it is also consistent that the real numbers can be well-ordered, and so the collection of all non-empty sets of real numbers does admit a choice function. So you have to assume one way or another. But generally, you cannot point out at a non-well orderable set, or a family of non-empty sets whose product is empty.
H: Proof that $\mathbf{R}[\omega]_\times\mathbf{R} = [\mathbf{R}\omega]_\times$ I have to prove that $$\mathbf{R}[\omega]_\times\mathbf{R}^\mathrm{T} = [\mathbf{R}\omega]_\times$$ Herein $\omega$ is a vector with elements. The notation $[\mathbf{a}]_\times$ is a conversion of the vector $\mathbf{a}$ to to a matrix to compute the cross-product e.g. $\mathbf{a} \times \mathbf{b} = [\mathbf{a}]_\times \mathbf{b} = [\mathbf{b}]_\times^\mathrm{T} \mathbf{a}$. With: $$[\mathbf{a}]_\times = \begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix}$$ Further $\mathbf{R}$ is an 3D rotation matrix. Because I didn't really know how to proof this I tried just to write everything out in terms of $\mathbf{r}_{ij}$ and $\omega_i$ This gives for $[\mathbf{R}\omega]_\times$: $$\begin{bmatrix} 0 & -r_{31}w_1 - r_{32}w_2 - r_{33}w_3 &r_{21}w_1 + r_{22}w_2 + r_{23}w_3 \\ r_{31}w_1 + r_{32}w_2 + r_{33}w_3 & 0 & -r_{11}w_1 - r_{12}w_2 - r_{13}w_3 \\ -r_{21}w_1 - r_{22}w_2 - r_{23}w_3 & r_{11}w_1 + r_{12}w_2 + r_{13}w_3& 0 \end{bmatrix}$$ And $\mathbf{R}[\omega]_\times \mathbf{R}^\mathrm{T}$ becomes $$\begin{bmatrix} \begin{bmatrix} r_{13}(r_{11}w_2 - r_{12}w_1) - r_{12}(r_{11}w_3 - r_{13}w_1) + r_{11}(r_{12}w_3 - r_{13}w_2) \\ r_{23}(r_{11}w_2 - r_{12}w_1) - r_{22}(r_{11}w_3 - r_{13}w_1) + r_{21}(r_{12}w_3 - r_{13}w_2) \\ r_{33}(r_{11}w_2 - r_{12}w_1) - r_{32}(r_{11}w_3 - r_{13}w_1) + r_{31}(r_{12}w_3 - r_{13}w_2) \end{bmatrix}^T\\ \begin{bmatrix}r_{13}(r_{21}w_2 - r_{22}w_1) - r_{12}(r_{21}w_3 - r_{23}w_1) + r_{11}(r_{22}w_3 - r_{23}w_2) \\ r_{23}(r_{21}w_2 - r_{22}w_1) - r_{22}(r_{21}w_3 - r_{23}w_1) + r_{21}(r_{22}w_3 - r_{23}w_2)\\ r_{33}(r_{21}w_2 - r_{22}w_1) - r_{32}(r_{21}w_3 - r_{23}w_1) + r_{31}(r_{22}w_3 - r_{23}w_2)\end{bmatrix}^T \\ \begin{bmatrix}r_{13}(r_{31}w_2 - r_{32}w_1) - r_{12}(r_{31}w_3 - r_{33}w_1) + r_{11}(r_{32}w_3 - r_{33}w_2) \\ r_{23}(r_{31}w_2 - r_{32}w_1) - r_{22}(r_{31}w_3 - r_{33}w_1) + r_{21}(r_{32}w_3 - r_{33}w_2)\\ r_{33}(r_{31}w_2 - r_{32}w_1) - r_{32}(r_{31}w_3 - r_{33}w_1) + r_{31}(r_{32}w_3 - r_{33}w_2)\end{bmatrix}^T \end{bmatrix}$$ This however leads to nothing. So how do I proof this any tips and help is appriciated? Quick matlab code R = sym('r',3); syms w1 w2 w3; w = [w1; w2; w3]; W = [ 0 -w3 w2; ... w3 0 -w1; ... -w2 w1 0]; R*w % note: not in [a]_x form R*W*transpose(R) AI: For convenience, I write $w$ in place of $\omega$ and $R$ in place of $\mathbf{R}$. \begin{align*} R[w]_\times R^T = [Rw]_\times \Leftrightarrow& R[w]_\times = [Rw]_\times R \\ \Leftrightarrow& z\cdot(R[w]_\times x) = z\cdot([Rw]_\times Rx)\quad \forall x,z\\ \Leftrightarrow& Rv\cdot(R[w]_\times x) = Rv\cdot([Rw]_\times Rx)\quad \forall v,x\\ \Leftrightarrow& Rv\cdot R(w\times x) = Rv\cdot(Rw\times Rx)\quad \forall v,x\\ \Leftrightarrow& v\cdot (w\times x) = Rv\cdot(Rw\times Rx)\quad \forall v,x\\ \Leftrightarrow& \det(v,w,x) = \det(Rv,Rw,Rx). \end{align*} Now the last line is true because $$\det(Rv,Rw,Rx)=\det\left(R(v,w,x)\right)=\det(R)\det(v,w,x)=\det(v,w,x).$$
H: Do I need to present a formula in this proof? Well, I've been studying Rudin's Principles of Mathematical Analysis and then I've thought about the following exercise: Let $A, B$ be two countable sets, then $A\times B$ is countable. My idea to prove this was to arrange the elements of $A\times B$ in a sequence. Indeed, I've used that argument of constructing the array: $$\begin{matrix} (a_1,b_1) & (a_1,b_2) & (a_1,b_3) & (a_1,b_4) & \cdots \\ (a_2,b_1) & (a_2,b_2) & (a_2,b_3) & \cdots & \cdots \\ (a_3,b_1) & (a_3,b_2) & \cdots & \cdots & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots\\ \end{matrix}$$ And then arrange the sequence $(a_1,b_1), (a_1,b_2), (a_2,b_1), (a_3,b_1), (a_2,b_2), (a_1,b_3)\dots$ with this I would be defining a sequence of distinct elements, in other words a bijection $f: \Bbb N \to A\times B$ showing that $A\times B$ is countable. I've then shown this to someone I know that studies set theory and he said that this proof wasn't good: I needed to exhibit the function $f$ explicitly. Is this really necessary? Because the argument shows that this must be a bijection: since $A$ is countable its elements can be put into a sequence $(a_i)$ and the same for the elements of $B$ which would give a sequence $(b_i)$. Both sequences are of distinct terms. Now, on this arrangement, we have elements $(a_i, b_j)$, so all the pairs are distinct because the sequences $(a_i)$ and $(b_j)$ are of distinct elements. Is it necessary to find $f$? If that's the case, how could I find it? I didn't notice any obvious pattern in the sequence I've made. AI: It's a slightly delicate point. You don't need to explicitly write a formula for $f$, but you need to define it from the fact that $A$ and $B$ are countable. That is to say, you need to fix two bijections $f_A, f_B$ and then use them to write the formula for $f$ itself. Essentially this means writing a formula for a bijection $F\colon\Bbb{N\to N\times N}$, and use that with $f_A$ and $f_B$ to write a function $f$ as wanted.
H: Uniform convergence of $\sum_{n=1}^{\infty}\ln\left(1+\frac{x^2}{n^2}\right)$ Find the convergence domain and determine if $\sum_{n=1}^{\infty}\ln(1+\frac{x^2}{n^2})$ converges uniformly on (1) $\mathbb{R}$ and (2) $[a,b]$ (some closed interval). An Attempt: Using $\ln(1+t)\leq t$, we will have: $$\sum_{n=1}^{\infty}\ln\left(1+\frac{x^2}{n^2}\right)<\sum_{n=1}^{\infty}\frac{x^2}{n^2}=x^2\sum_{n=1}^{\infty}\frac{1}{n^2}<\infty$$ Thus, the series converges for every $x\in\mathbb{R}$. For (2): We can use the M-test and get $\sum_{n=1}^{\infty}\ln\left(1+\frac{x^2}{n^2}\right)<b^2\sum_{n=1}^{\infty}\frac{1}{n^2}<\infty$ What do I need to do for (1)? AI: If $\sum\limits_{n=1}^\infty f_n(x)$ is uniformly convergent on $I$, then it is uniformly Cauchy on $I$. That is, for each $\epsilon>0$, there is an $N$ so that for all $m\ge n\ge N$, we have $$ \Bigl| \sum_{i=n}^m f_i(x) \Bigr|<\epsilon, \ \text{for all}\ x\in I. $$ See this post for a proof of this. From the above, it follows that if $\sum\limits_{n=1}^\infty f_n(x)$ is uniformly convergent on $I$, then the sequence $(f_n)$ converges to $0$ uniformly on $I$. The terms of your series do not converge to $0$ uniformly on $\Bbb R$. To see this, evaluate the $n$'th term of the series at $x=n$.
H: Method for solving ODE with power series when trying to solve second order linear homogeneous variable coefficient ODEs using a power series method, there seem to be two different general forms cropping up in my notes. The first uses an ordinary point $$x_0$$ $$y = \sum_{m = 0}^{\infty}a_m(x-x_0)^m$$ The second uses a factor of $$x^r$$ $$y = x^r\sum_{m = 0}^{\infty}a_mx^m$$ How do I know which of these forms to use for my solution? AI: Your first solution is a power series. Your second solution is the Frobenius solution which allows for $r$ non-integer. There are 4 cases to consider for the second solution. The way in which the second solution is found differs slightly if: you have distinct exponents which do not differ by an integer you have distinct exponents which do differ by an integer you have repeated exponents you have complex exponents The simplest example illustrating these is the Cauchy-Euler problem $ax^2y''+bxy'+cy=0$ which is the quintessential example of a second order ODE with a regular singular point. To answer your question, if you seek a solution at an ordinary point use the power series solution. If you face a regular singular point then invoke the method of Frobenius.
H: Given that $xyz=1$ , find $\frac{1}{1+x+xy}+\frac{1}{1+y+yz}+\frac{1}{1+z+xz}$? I think I solved this problem, but I don't feel $100$ percent sure of my solution. We have: $xy=\large {\frac 1z}$ $xz=\large \frac 1y$ $yz=\large \frac 1x$ So let's substitute these into our sum: $\large \frac{1}{1+x+\frac 1z}+\frac{1}{1+y+\frac 1x}+\frac{1}{1+z+\frac 1y}$ If we rewrite with a common denominator we get $\large \frac{z}{1+z+xz}+\frac {x}{1+x+xy}+\frac{y}{1+y+zy}$ If $\large \frac{1}{1+x+xy}+\frac{1}{1+y+yz}+\frac{1}{1+z+xz}=\frac {x}{1+x+xy} +\frac{y}{1+y+zy}+\frac{z}{1+z+xz}$, then $(x, y, z)=1$ and we can compute that the sum is equal to $1$. The problem I have with this is that I found what $(x, y, z)$ had to be, but I only had one weak restriction on the variables. Secondly, what are other ways of doing this that I can learn from? Thanks. AI: We only need to continue a little further on the path you took. The first term is $$\frac{1}{1+x+xy}.\tag{1}$$ Consider the second term $$\frac{1}{1+y+yz}.$$ Using your substitution, we (you) find that this is $$\frac{x}{1+x+xy}.\tag{2}$$ Consider the third term $\dfrac{1}{1+z+zx}$ and replace $z$ by $\frac{1}{xy}$, and $zx$ by $\frac{1}{y}$. We end up with $$\frac{xy}{1+x+xy}.\tag{3}$$ Add up the first term, the second, and the third, or equivalently (1), (2), and (3). We get $$\frac{1}{1+x+xy}+\frac{x}{1+x+xy}+\frac{xy}{1+x+xy},$$ which is $1$.
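A quick numeric check of the result (my addition): for random positive $x, y$ with $z = 1/(xy)$, the three fractions sum to $1$.

```python
import random

random.seed(1)
for _ in range(5):
    x, y = random.uniform(0.1, 5), random.uniform(0.1, 5)
    z = 1 / (x * y)   # enforce xyz = 1
    s = 1 / (1 + x + x * y) + 1 / (1 + y + y * z) + 1 / (1 + z + z * x)
    print(s)          # 1.0 up to rounding error
```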
H: rigorous definition of a "logic" It's been a couple of years since I've had a course in logic (the course was propositional and first order logic, up to Gödel's completeness theorem). I've been looking at some papers online, and they seem to talk about systems of logic like $\mathrm K$ and $\mathrm{S4}$ as "logics" (the noun). What does this mean? A quick Google search doesn't seem to have turned up anything useful. Is it just the set of well-formed formulas in the system using the language, inference rules, and starting with some basic axioms? Feel free to redirect me to a source—for some reason I feel like this should be in some book…. And thanks for any answers or advice! Sincerely, John AI: These terms are subject to some amount of variability, but what follows is a common way of defining “a logic.” A logic is a logical language (as described in the linked notes) that defines the sentences of the language. Along with the logical language, a provability relationship is defined, often by defining inference rules and axiom schemata, by which sentences may be derived in a proof. A related, and often present, feature of a logic is that of a semantics for the logic. The notion that $Q$ is derivable from $P \land Q$ by the inference rule $\land$-elimination is purely syntactic; it is concerned only with the manipulation of strings of symbols. A semantics for a logic is a way of saying, “a sentence of a particular form has this meaning, or this interpretation.” With those sorts of notions, we can say things like ”the interpretation of a sentence $\phi \land \psi$ is true if and only if the interpretation makes $\phi$ true and makes $\psi$ true.” In a logic with a semantics, we can talk about more properties of the logic. For instance, we can talk about the soundness of inference rules; an inference rule is said to be sound if and only if its conclusion must be true if all of its premises are true. We can also talk about the completeness of a proof calculus (i.e., a set of rules for constructing proofs); a proof calculus is complete if every sentence which must, by the semantics, be true, is also provable. (There are actually other notions of complete as well, some of which are more syntactic in nature. See this question for more details.) Each of the following is an example of a logic: propositional calculus, first order logic, $\mathrm K$, and $\mathrm{S4}$. In each case, there is a set of rules for constructing sentences of the logical system, and a set of inference rules and axiom schemata (or natural deduction proof systems, etc.) for inferring new sentences from base principles. All of those logics also have associated semantics. Some of the logics were developed before they had semantics; i.e., they were developed as syntactic systems, and semantic interpretations were only developed later. In all of these cases, there are various formalizations of the logics and associated semantics. For instance: provably equivalent axiomatizations of of propositional calculus; logical languages that take certain connectives as primitives and logical languages that take them as abbreviations for other combinations of connectives (e.g., taking $\phi \to \psi$ as a primitive, or taking it as an abbreviation for $\lnot \phi \lor \psi$); various semantics for modal logics (the Kripke semantics are popular, but there are plenty of alternatives, too).
H: Prove this inequality by using the mean value theorem I want to prove that $x<\frac{2x}{2-x}, \forall x \in (0,1)$, by using the mean value theorem. So, consider $f(x)=\frac{2x}{2-x} -x$. $f(0)=0$. $f'(x)=\frac{2x-2}{(2-x)^2} - 1$ and $f'(x)<0, \forall x \in (0,1)$. By the mean value theorem: $$\exists c \in (0,1)~~~\text{such that}~~~f(x)-f(0) = f'(c)(x-0)~~~\rightarrow~~~f(x)<0 ~~~\rightarrow~~~x> \frac{2x}{2-x}$$ So, it doesn't work. AI: It turns out that $$f'(x)=\frac{4}{(2-x)^2}-1.$$ This is safely positive on our interval, so the MVT argument goes through. Remark: It seems you were asked to use MVT explicitly. The ordinary way is to use the theorem, derived using the MVT, that if the derivative of a function is positive on an interval, then the function is increasing on that interval.
H: Conditional distribution problem Given random variables X and Y with joint density $$f(x,y) = 2(x + y), \text{ for } 0 < y < x < 1,$$ I am trying to compute the probability that $$P(Y < 0.5 \mid X = 0.1)$$ To that end, I computed the marginal density $$f_X(x) = \int_0^x 2(x + y) \, \mathrm d{y} \\ = 3x^2 $$ We now have enough information to compute the conditional distribution $$f(y|x) = \frac{2(x + y)}{3x^2}$$ and, fixing the condition, $$f(y|x = 0.1) = \frac{0.2 + y}{3(0.1)^2}$$ Finally, to compute the probability, we find the conditional distribution $$F(y \mid X=0.1) = \frac{1}{3(0.1)^2} \int_0^y 0.2 + 2y \, \mathrm d{y} \\ = \frac{1}{3(0.1)^2} 0.2y + y^2$$ and substitute $$F(0.05 \mid X=0.1) = \frac{1}{3(0.1)^2} 0.1 + (0.05)^2$$ Plugging these into a calculator yields approximately 4.17, instead of 0.417, which my solution sheet says is the answer. Where am I going wrong? Thanks! AI: All your work is correct (ignoring a few typos); it's just a calculator error. Plugging in $y=0.05$ yields: $$ F(0.05 | X=0.1) = \dfrac{1}{3(0.1)^2} (0.2(0.05) + (0.05)^2) = 0.41666... $$
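A numeric double-check of the final value (my addition), approximating the integral of the conditional density with a Riemann sum and comparing with the closed form.

```python
# Conditional density f(y | x) = 2(x + y) / (3 x^2) on 0 < y < x;
# integrate it over 0 < y < 0.05 at x = 0.1 with a midpoint Riemann sum.
x = 0.1
n = 100_000
h = 0.05 / n
riemann = sum(2 * (x + (k + 0.5) * h) / (3 * x ** 2) * h for k in range(n))
print(riemann)                                      # ~0.41666
print((0.2 * 0.05 + 0.05 ** 2) / (3 * 0.1 ** 2))    # closed form, the solution sheet's 0.417
```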
H: Chain rule for functions of two variables Suppose that $f(x,y)$ is a function of two variables with $f_x(0,2) = 2$ and $f_y(0,2) = -1$. Using the chain rule compute the numerical value of $f_\theta(r\cos\theta,r\sin\theta) = 2$ at $r=2$, $\theta=\frac{\pi}{2}$. Any hints on how to do this question would be appreciated. Thanks in advance. AI: The problem is not quite properly written, but you should be able to understand what is meant by the problem. I can try to make it more rigorous: Let $f_1$ and $f_2$ be the partial derivatives of $f:\mathbb R^2 \to \mathbb R$ with respect to the first variable and the second variable respectively. $x(r, \theta) = r\cos\theta$ $y(r, \theta) = r\sin\theta$ $f_1(0, 2) = 2$ $f_2(0, 2) = -1$ $g(r, \theta) = f(x(r, \theta), y(r, \theta))$ Find $g_2(2, \pi/2)$. To solve this problem, use the chain rule: $$ g_2(r, \theta) = f_1(x(r,\theta),y(r,\theta))x_2(r,\theta) + f_2(x(r,\theta),y(r,\theta))y_2(r,\theta). $$ At $r = 2$ and $\theta = \pi/2$, you should be able to evaluate all terms on the right-hand side.
H: What is $2012^{2011}$ modulo $14$? $$2012^{2011} \equiv x \pmod {14}$$ I need to calculate that, all the examples I've found on the net are a bit different. Thanks in advance! AI: By the Chinese remainder theorem, knowing what something is modulo $2$ and what it is modulo $7$ is equivalent to knowing what it is modulo $14$. Clearly, $2012^{2011}\equiv 0\bmod 2$. Next, note that because$$2012\equiv 3\bmod 7$$ we have $$2012^{2011}\equiv 3^{2011}\bmod 7.$$ By Fermat's little theorem, we know that $$3^6\equiv 1\bmod 7$$ so that $$2012^{2011}\equiv 3^{2011}\equiv 3^{(6\cdot 335)+1}\equiv (3^6)^{335}\cdot 3\equiv 3\bmod 7.$$ Putting this back together with the help of the Chinese remainder theorem (or just direct observation if you prefer) we see that $$2012^{2011}\equiv 10\bmod 14.$$
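The whole computation can be confirmed with Python's built-in modular exponentiation (my addition).

```python
# One-line check of 2012^2011 mod 14.
print(pow(2012, 2011, 14))                       # 10

# The CRT bookkeeping from the answer: 0 mod 2 and 3 mod 7 pin down 10 mod 14.
print(pow(2012, 2011, 2), pow(2012, 2011, 7))    # 0 3
```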
H: Prove that the additive inverse of an odd integer is an odd integer This is a homework problem, but I don't want the answer, just a little guidance: Prove that the additive inverse of an odd integer is an odd integer. When approaching a problem like this, how much is it safe to assume? Is it safe to assume that "the additive inverse of an integer is an integer?" Or does that need to be proven first, before we can start talking about odds and evens? I have two ideas about how to approach this, and that is to either: 1) Use absolute value to negate the fact that something is negative so that the absolute values of something like $4$ and $-4$ are both $4$. But is it safe to assume something like "the absolute values of any integer positive or negative are equal?" 2) Do something like subtract $2$ times a number to get the negative or positive: e.g. the additive inverse of $4$ is $(4 - 2(4))$. The additive inverse of $-4$ is $(-4 - 2(-4))$. Exactly where I would follow those ideas to, I'm not sure yet, but I'd like to at least know I'm on the right track and not completely going off in the wrong direction. AI: An integer $n$ is odd if and only if there exists an integer $k$ such that $n = 2k+1$. (Note, $2$ does not divide $n = 2k + 1$: $2$ divides $2k$, but not $1$, hence $n = 2k + 1$ is not even, therefore is odd). So let $n$ be an arbitrary odd integer; i.e. $n = 2k+1$ where $k$ is some integer. Then $\begin{align} \;-n & = -(2k+1) \\ & = -2k -1 \\ & = -2k + (- 2 + 2) - 1 \\ & =(-2k - 2) + (2 - 1) \\ &=2(-k-1) + 1\end{align}$. Now, $\;j = (-k - 1)\,$ is some integer (because $k$ is some integer). So we have that $-n = 2(-k -1) + 1 = 2j + 1$, which, by definition, is an odd integer.
H: Error in Fibonacci recurrence proof by induction? I'm working on a problem from a number theory book (Number Theory by George E. Andrews - problem 1-1-11). The text reads: Prove: $\displaystyle F_1F_2+F_2F_3+F_3F_4+\ldots+F_{2n-1}F_{2n}=F_{2n}^2$ I started by setting up a summation: $$\sum_{j=1}^{n}F_{2j-1}F_{2j}=F_{2n}^2$$ From this, I worked inductively: $$\begin{align*} \sum_{j=1}^{n+1}F_{2j-1}F_{2j}&=F_{2n+2}^2\\ F_{2(n+1)-1}F_{2(n+1)}+\sum_{j=1}^{n}F_{2j-1}F_{2j}&=\ldots\\ F_{2n+1}F_{2n+2}+F_{2n}^2&=F_{2n+2}^2 \end{align*}$$ This doesn't mathematically work, though. Let $n=3$, and evaluate. $$\begin{align*}F_7F_8+F_6^2&=F_8^2\\13\cdot21+8^2&=21^2\\337&\neq441\end{align*}$$ Where did I go wrong? Any ideas? AI: Hint: When we increment $n$ by $1$, we add two terms to the sum, not one.
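A quick computational check (my addition). Note that the terms in the original identity are the consecutive products $F_kF_{k+1}$ for $k=1,\dots,2n-1$ (with $F_1=F_2=1$), which is why the hint says that passing from $n$ to $n+1$ adds two terms, $F_{2n}F_{2n+1}$ and $F_{2n+1}F_{2n+2}$.

```python
def fib(m):
    # 1-indexed Fibonacci numbers with F_1 = F_2 = 1; F[0] is a placeholder.
    a, b = 1, 1
    F = [0, 1, 1]
    for _ in range(m - 2):
        a, b = b, a + b
        F.append(b)
    return F

F = fib(30)
for n in range(1, 11):
    lhs = sum(F[k] * F[k + 1] for k in range(1, 2 * n))   # k = 1, ..., 2n-1
    assert lhs == F[2 * n] ** 2, (n, lhs, F[2 * n] ** 2)
print("identity verified for n = 1..10")
```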
H: A set theory problem - proof Prove that $X \cap (\bigcup_{i \in I} Y_i) = \bigcap_{i \in I} (Y_i \cap X) $ Is the index set finite? How do I know it isn't $1 \leq i < \infty$? Also, isn't the RHS just $Y_1 \cap Y_2 \cap ....\cap X$? (assuming we have a finite intersection). EDIT A quick counterexample from myself disproves this. $Y_1 = \{1,2 \}$ and $Y_2 = \{ 2\}$ and $X = \{1\}$ EDIT2 Yes, yes, yes; thank you all for your answers. AI: The indexing set needn't be finite. Maybe it should read $$X\cap \bigcup Y_i=\bigcup (X\cap Y_i)\text{ ? }$$ You can argue as follows: First suppose $x\in X\cap\bigcup Y_i$. Then $x\in X$ and $x\in\bigcup Y_i$. Thus $x\in X$ and $x\in Y_i\;$ for at least one $i$, say it is $j$. Thus, $x\in X\cap Y_j$. This means $x\in \bigcup (X\cap Y_i)$. Can you do the other direction?
H: second and first derivative growth function I have a function I would like to take the first and second derivative from $$f(t)= a\left(1-\frac{1}{1+(b(t+i))^e+(c(t+i))^f+(d(t+i))^h)}\right)$$ I have taken the following steps $$u(t)={\left(\mathrm{b}\, \left(\mathrm{i} + t\right)\right)}^{\mathrm{e}} + {\left(\mathrm{c}\, \left(\mathrm{i} + t\right)\right)}^{\mathrm{f}} + {\left(\mathrm{d}\, \left(\mathrm{i} + t\right)\right)}^{\mathrm{h}} + 1$$ $f(t) = a (1-1/u(t))$ $f(t) = a ((u(t)-1)/u(t))$ for simplicity u(t) = u and f(t)=f f= a*(u-1)/u quotient rule dy = d(u-1) df = a*((du*u-du*(u-1))/u^2) df = a * du/u^2 quotient rule d2f = a*((d2u*u^2-du*d(u^2))/u^4) Is the above reasoning correct? df= (a*(b*e*(b*(i + t))^(e - 1) + c*f*(c*(i + t))^(f - 1) + d*h*(d*(i + t))^(h - 1)))/((b*(i + t))^e + (c*(i + t))^f + (d*(i + t))^h + 1)^2 d2f=(a*((b^2*e*(e - 1)(b(i + t))^(e - 2) + c^2*f*(f - 1)(c(i + t))^(f - 2) + d^2*h*(h - 1)(d(i + t))^(h - 2))((b(i + t))^e + (c*(i + t))^f + (d*(i + t))^h + 1)^2 - (2*(b*e*(b*(i + t))^(e - 1) + c*f*(c*(i + t))^(f - 1) + d*h*(d*(i + t))^(h - 1))^2 + 2*(b^2*e*(e - 1)(b(i + t))^(e - 2) + c^2*f*(f - 1)(c(i + t))^(f - 2) + d^2*h*(h - 1)(d(i + t))^(h - 2))((b(i + t))^e + (c*(i + t))^f + (d*(i + t))^h + 1))*(b*e*(b*(i + t))^(e - 1) + c*f*(c*(i + t))^(f - 1) + d*h*(d*(i + t))^(h - 1))))/((b*(i + t))^e + (c*(i + t))^f + (d*(i + t))^h + 1)^4 $\dfrac{d}{dt} f(t) = \frac{a\, \left(b\, e\, {\left(b\, \left(i + t\right)\right)}^{e - 1} + c\, f\, {\left(c\, \left(i + t\right)\right)}^{f - 1} + d\, h\, {\left(d\, \left(i + t\right)\right)}^{h - 1}\right)}{{\left({\left(b\, \left(i + t\right)\right)}^e + {\left(c\, \left(i + t\right)\right)}^f + {\left(d\, \left(i + t\right)\right)}^h + 1\right)}^2}$ $\dfrac{d2}{d2t} f(t) =\frac{a\, \left(\left(b^2\, e\, \left(e - 1\right)\, {\left(b\, \left(i + t\right)\right)}^{e - 2} + c^2\, f\, \left(f - 1\right)\, {\left(c\, \left(i + t\right)\right)}^{f - 2} + d^2\, h\, \left(h - 1\right)\, {\left(d\, \left(i + t\right)\right)}^{h - 2}\right)\, {\left({\left(b\, \left(i + t\right)\right)}^e + {\left(c\, \left(i + t\right)\right)}^f + {\left(d\, \left(i + t\right)\right)}^h + 1\right)}^2 - \left(2\, {\left(b\, e\, {\left(b\, \left(i + t\right)\right)}^{e - 1} + c\, f\, {\left(c\, \left(i + t\right)\right)}^{f - 1} + d\, h\, {\left(d\, \left(i + t\right)\right)}^{h - 1}\right)}^2 + 2\, \left(b^2\, e\, \left(e - 1\right)\, {\left(b\, \left(i + t\right)\right)}^{e - 2} + c^2\, f\, \left(f - 1\right)\, {\left(c\, \left(i + t\right)\right)}^{f - 2} + d^2\, h\, \left(h - 1\right)\, {\left(d\, \left(i + t\right)\right)}^{h - 2}\right)\, \left({\left(b\, \left(i + t\right)\right)}^e + {\left(c\, \left(i + t\right)\right)}^f + {\left(d\, \left(i + t\right)\right)}^h + 1\right)\right)\, \left(b\, e\, {\left(b\, \left(i + t\right)\right)}^{e - 1} + c\, f\, {\left(c\, \left(i + t\right)\right)}^{f - 1} + d\, h\, {\left(d\, \left(i + t\right)\right)}^{h - 1}\right)\right)}{{\left({\left(b\, \left(i + t\right)\right)}^e + {\left(c\, \left(i + t\right)\right)}^f + {\left(d\, \left(i + t\right)\right)}^h + 1\right)}^4}$ AI: Let $m1 = a, m2 = b, m3 = c, m4 = d , m5 = e, m6 = f, m7 = h, m8 = i$. 
$\dfrac{d}{dt} f(t) = \dfrac{d}{dt}\left[(a (1-1/(1+(b (t+i))^e+(c (t+i))^f+(d (t+i))^h)))\right] = \dfrac{(a (e (b (i+t))^e+f (c (i+t))^f+h (d (i+t))^h))}{((i+t) ((b (i+t))^e+(c (i+t))^f+(d (i+t))^h+1)^2)}$ $\dfrac{d^2}{dt^2} f(t) = \dfrac{d^2}{dt^2} \left[(a (1-1/(1+(b (t+i))^e+(c (t+i))^f+(d (t+i))^h)))\right] = a\left[\dfrac{b^2 (e-1) e (b (i+t))^{e-2}+c^2 (f-1) f (c (i+t))^{f-2}+d^2 (h-1) h (d (i+t))^{h-2}}{\left((b (i+t))^e+(c (i+t))^f+(d (i+t))^h+1\right)^2}-\dfrac{2 \left(b e (b (i+t))^{e-1}+c f (c (i+t))^{f-1}+d h (d (i+t))^{h-1}\right)^2}{\left((b (i+t))^e+(c (i+t))^f+(d (i+t))^h+1\right)^3}\right]$ Update If we write $f(t) = a\left(1 - \dfrac{1}{u(t)}\right)$, then, by the quotient rule, we have: $$f'(t) = a\left(0 - \dfrac{0 - u'(t) \cdot 1}{u^2(t)}\right) = a\dfrac{u'(t)}{u^2(t)}$$
H: Generalizing the Definition of Convexity The definition of convexity can be given as: Definition: Call a subset of $\mathbb{R} ^ k$, which will be denoted $E$, convex if given two elements of $E$, $\boldsymbol{x}$ and $\boldsymbol{y}$ and $0 < \lambda < 1$ the following holds: $$ \lambda \boldsymbol{x} + (1 - \lambda ) \boldsymbol{y} \in E$$ What I have been wondering is whether this is limited to two elements of $E$. In other words, can this be generalized to $n$ elements of $E$ and still retain the same properties? Thus I propose the following definition: Definition: Call a subset of $\mathbb{R} ^ k$, which will be denoted $E$, "generally convex" if given $ \boldsymbol{x}_i \in E$, for $i = 1,2,...,n$, and $0 < \lambda_i < 1$ with $\sum_{i=1}^n \lambda_i = 1$, the following holds: $$ \sum_{i=1}^n \lambda_i \boldsymbol{x}_i \in E $$ An open ball is an example of a convex set, and it is also the case that a ball is generally convex. Proof: Let $\boldsymbol{x}_1, \boldsymbol{x}_2, \dots, \boldsymbol{x}_n$ be points of the open ball of radius $r$ centred at $\boldsymbol{x}$, so that $|\boldsymbol{x}_i - \boldsymbol{x}| < r$ for each $i$. Consider the following: $ | \sum_{i=1} ^n \lambda_i \boldsymbol{x}_i - \boldsymbol{x} | $; we show that it is also smaller than $r$. $$| \sum_{i=1} ^n \lambda_i \boldsymbol{x}_i - \boldsymbol{x} | = | \sum_{i=1} ^n \lambda_i (\boldsymbol{x}_i - \boldsymbol{x}) | $$ $$\le \sum_{i=1} ^n |\lambda_i (\boldsymbol{x}_i - \boldsymbol{x}) |$$ $$< \sum_{i=1}^n \lambda_i r $$ $$=r$$ This, among other things, satisfies both the definition for convexity and general convexity. This leads me to the following conjecture: Conjecture: Consider a subset of $\mathbb{R} ^k$, $E$. $E$ is convex if and only if $E$ is generally convex. I am having difficulty proving whether or not my conjecture is true. My first thought was that given an element of $E$, it should be able to be expressed as a linear combination of other elements of $E$. And thus using this fact, expand the convexity definition to the general convexity one. The only difficulty with this is figuring out whether or not this can be done for every element of $E$. Online I have been able to find arguments that seem to support this for the case of vector-spaces, but nothing that states it for complex sets. Proof Sketch: Suppose that it is the case that any $\boldsymbol{x}_i \in E$ may be expressed as a linear combination of other elements in $E$. Then given $\lambda \boldsymbol{x}$ as presented in the definition of convexity, the $\boldsymbol{x}$ can be "split" into two new elements of $E$. This can be re-iterated until we have the $n$ elements needed for the general convexity definition. Conversely, we just need to "combine" $\boldsymbol{x}_i$'s in the general convexity definition until only the two remaining elements we desire remain. Thus this leads to my questions: How do I go about proving whether my conjecture is true or false (linear combinations or some other method)? Does this "general convexity" definition have any mathematical significance? AI: General convexity is equivalent to convexity. This follows from a short induction proof. Specifically, suppose $E$ is convex. Then $E$ satisfies the general convexity condition for $n=2$.
Proceeding by induction on $n$: for any $n\ge 3$, any points $x_1,\ldots,x_n\in E$, and any $0<\lambda_1,\ldots,\lambda_n<1$ with $\lambda_1+\cdots+\lambda_n=1$, we have $$ \sum_{i=1}^n \lambda_i x_i \;=\; (1-\lambda_n)\left(\sum_{i=1}^{n-1} \frac{\lambda_i}{1-\lambda_n} x_i\right) \,+\, \lambda_n x_n. $$ The coefficients $\frac{\lambda_i}{1-\lambda_n}$ are positive and sum to $1$, so by the induction hypothesis the quantity in parentheses lies in $E$, and therefore, by convexity of $E$, the quantity on the left lies in $E$ as well.
H: For a morphism of affine schemes, the inverse of an open affine subscheme is affine This seems ridiculously simple, but it's eluding me. Suppose $f:X\rightarrow Y$ is a morphism of affine schemes. Let $V$ be an open affine subscheme of $Y$. Why is $f^{-1}(V)$ affine? I noted that $V$ is quasi-compact and wrote it as a finite union of principal open sets. Because the pullbacks of principal open sets are principal open sets, we can write $f^{-1}(V)$ as the union of such sets. But I'm not sure how to show this union is affine. I don't think this is the right way to go, because such unions are not affine in general. I'm particularly perplexed because this occurs as an exercise in the chapter on separated morphisms and base change in Liu. Of course, all affine schemes are separated, but I don't see the relevance of that here. AI: The slogan I remember is, "open immersions are stable under base change". Suppose you have a diagram \[ \begin{matrix} & & X' \\ & & \downarrow {\scriptstyle f} \\ U & \xrightarrow{i} & X \end{matrix} \] and $i$ is an open immersion, in which case we might as well identify $U$ with an open subset of $X$. The fibered product exists and one realization of it is the open subscheme $f^{-1}(U)$ of $X'$. To show this you would just verify that it has the universal property. [You don't need to know that fibered products exist in general to do this.] In the setting of the problem $X, X'$, and $U$ are affine, so you have another realization of $U \times_X X'$ as the spectrum of a ring. Hence there is an isomorphism between it and $f^{-1}(U)$.
H: Matrix of a relation on a set If I have a Matrix $A=\begin{bmatrix} 0&0&0\\ 0 & 0 & 0 \\0&0&0\end{bmatrix}$ why is this both symmetric and anti-symmetric? If I had a Matrix $B=\begin{bmatrix} 1&1&1\\1&1&1\\1&1&1\end{bmatrix}$ would this also be symmetric and anti-symmetric? AI: The first matrix represents the empty relation on a $3$-element set. So both symmetry and anti-symmetry are satisfied vacuously. Now $B$ represents the relation on a $3$-element set where any two elements are related. Hence, it is symmetric, but not anti-symmetric (e.g $1 \sim 2$ and $2 \sim 1$, but $1 \ne 2$ )
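As a quick sanity check of the two examples (not part of the original answer; the helper names are mine), one can test symmetry and antisymmetry directly from the 0/1 matrix of a relation in Python:

    import numpy as np

    def is_symmetric(M):
        # a R b implies b R a: the matrix equals its transpose
        return np.array_equal(M, M.T)

    def is_antisymmetric(M):
        # a R b and b R a never both hold for distinct a, b
        n = len(M)
        return all(not (M[i, j] and M[j, i])
                   for i in range(n) for j in range(n) if i != j)

    A = np.zeros((3, 3), dtype=int)   # the empty relation
    B = np.ones((3, 3), dtype=int)    # the complete relation

    print(is_symmetric(A), is_antisymmetric(A))   # True True  (both hold vacuously)
    print(is_symmetric(B), is_antisymmetric(B))   # True False (1 ~ 2 and 2 ~ 1, but 1 != 2)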
H: You have 4 prizes, 3 tickets, n tickets- what is the probability of winning You have bought 3 tickets in a lottery. There are n total tickets and 4 prizes. What are the odds of winning at least one prize? I thought of it like this: The total possible ways of extracting 4 prizes is: a= $${n\choose 4}$$ The possibilities of extracting at least 1 winning prize is: b=$${4\choose 1} + {4\choose 2} +{4\choose 3}$$ so the probability of winning is $$\frac{{4\choose 1} + {4\choose 2} +{4\choose 3}}{{n\choose 4}}$$ If I'm wrong, which might be the case, please tell me what the right solution is, as this is the only one I could come up with. Thank you AI: We give several approaches. The first one is closest in spirit to yours. You have bought your $3$ tickets. Now the lottery corporation is choosing the $4$ winning tickets. These can be chosen in $\binom{n}{4}$ ways. If the lottery is well run, the process makes more or less sure that the choices are all equally likely. How many ways can they choose the $4$ winning tickets so that none of them are yours? Clearly $\binom{n-3}{4}$. So the probability you win no prize is $$\frac{\binom{n-3}{4}}{\binom{n}{4}}\tag{A}.$$ (The expression in (A) can be considerably simplified.) The probability you win at least one prize is therefore $1$ minus the answer of (A). Another way: Imagine the lottery corporation picks the winning tickets one at a time. The probability the first ticket it picks is bad (not one of yours) is $\frac{n-3}{n}$. Given that the first ticket picked was bad, there are $n-4$ bad ones left out of $n-1$. So given that the first ticket was bad, the probability the second is bad is $\frac{n-4}{n-1}$. Thus the probability the first two are bad is $\frac{n-3}{n}\cdot\frac{n-4}{n-1}$. Continue for two more rounds. The same reasoning shows that the probability that all four winning tickets are bad is $$\frac{n-3}{n}\cdot\frac{n-4}{n-1}\cdot\frac{n-5}{n-2}\cdot\frac{n-6}{n-3}.$$ Still another way: Let's switch points of view. Imagine that the winning tickets have already been determined. So there are $4$ "good" tickets and $n-4$ bad. We will find the probability that you pick all bad. There are $\dbinom{n}{3}$ ways that we could choose our three tickets. There are $\binom{n-4}{3}$ ways to choose them so they are all bad. So the probability that we choose all bad is $$\frac{\binom{n-4}{3}}{\binom{n}{3}}.\tag{B}$$ The probability of at least one good is $1$ minus the number in (B). Remark: We could use a strategy like yours: Find the probability of winning exactly $1$ prize, exactly $2$ prizes, exactly $3$ prizes, and add up. We calculate the probability of winning exactly one prize. The other calculations are roughly similar. So you have bought $3$ tickets. We find the probability that exactly one is good. There are $\binom{n}{4}$ ways for the corporation to choose $4$ tickets. How many ways are there to choose $1$ that you have and $3$ that you don't have? The one you have can be chosen in $\binom{3}{1}$ ways. For each such choice, the $3$ you don't have can be chosen in $\binom{n-3}{3}$ ways. Thus the probability of exactly one winning ticket is $$\frac{\binom{3}{1}\binom{n-3}{3}}{\binom{n}{4}}.$$
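Not part of the original answer: a short Python check that the closed forms (A) and (B) give the same winning probability and that a simulation agrees (the choice $n=50$ and the trial count are arbitrary):

    from math import comb
    import random

    def p_win_A(n):
        # probability of at least one prize, from (A)
        return 1 - comb(n - 3, 4) / comb(n, 4)

    def p_win_B(n):
        # same probability from the ticket-holder's point of view, from (B)
        return 1 - comb(n - 4, 3) / comb(n, 3)

    def p_win_mc(n, trials=200_000):
        mine = {0, 1, 2}                      # my three tickets
        hits = 0
        for _ in range(trials):
            winners = random.sample(range(n), 4)
            if mine.intersection(winners):
                hits += 1
        return hits / trials

    n = 50
    print(p_win_A(n), p_win_B(n), p_win_mc(n))
    # the two closed forms agree exactly; the simulation is close to both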
H: Looking for an easy lightning introduction to Hilbert spaces and Banach spaces I'm co-organizing a reading seminar on Higson and Roe's Analytic K-homology. Most participants are graduate students and faculty, but there are a number of undergraduates who might like to participate, and who have never taken a course in functional analysis. They are strong students though, and they do have decent analysis, linear algebra, point-set topology, algebraic topology... Question: Could anyone here recommend a very soft, easy, hand-wavy reference I could recommend to these undergraduates, which covers and motivates basic definitions and results of Hilbert spaces, Banach spaces, Banach algebras, Gelfand transform, and functional calculus? It doesn't need to be rigourous at all- it just needs to introduce and to motivate the main definitions and results so that they can "black box" the prerequisites and get something out of the reading seminar. They can do back and do things properly when they take a functional analysis course next year or so. AI: I don't know how useful this will be, but I have some lecture notes that motivate the last three things on your list by first reinterpreting the finite dimensional spectral theorem in terms of the functional calculus. (There is also a section on the spectral theorem for compact operators, but this is just pulled from Zimmer's Essential Results of Functional Analysis.) I gave these lectures at the end of an undergraduate course on functional analysis, though, so they assume familiarity with Banach and Hilbert spaces.
H: Mean or standard deviation? Which one of mean or standard deviation can be used to solve the following problem? A light bulb is considered defective if it lasts less than 400 hours. The following claim is made: 'Brand A light bulbs are more likely to be defective than Brand B light bulbs.' Is the claim correct? $$ \begin{array}{l|cc} & \text{Mean} & \text{Standard deviation}\\ \hline \text{Brand A} & 450 & 25 \\ \text{Brand B} & 500 & 50 \\ \end{array} $$ My guess is that the claim is incorrect. The reason is that Brand B's standard deviation is higher. This shows that although Brand B has a higher mean, its data is more spread out. However I cannot prove my guess. Is there a way that I can find the proportion of bulbs with a lifetime above 400 hours, so I can make a comparison? AI: Assuming that bulb lifetimes are normally distributed, the z-scores of the 400-hour cutoff for brands A and B are Z(A) = (450 - 400) / 25 = 2 sigma and Z(B) = (500 - 400) / 50 = 2 sigma, so the probabilities of being defective are the same for both brands; according to the one-tailed normal distribution, each is approximately P = 0.0228. Hence the claim is incorrect.
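For a numeric check (my addition; it only uses the table values and the normality assumption stated in the answer), the standard normal CDF can be evaluated with math.erf:

    from math import erf, sqrt

    def p_defective(mean, sd, cutoff=400.0):
        # P(X < cutoff) for X ~ Normal(mean, sd), via the standard normal CDF
        z = (cutoff - mean) / sd
        return 0.5 * (1 + erf(z / sqrt(2)))

    print(p_defective(450, 25))   # brand A: z = -2, about 0.0228
    print(p_defective(500, 50))   # brand B: z = -2, about 0.0228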
H: How to continue this problem to find distribution function of $|X-Y|$ The joint probability density function is $$f(x,y) = \begin{cases} 1/a^2, & \text{if }0 \le x \le a\text{ and }0 \le y \le a \\ 0, & \text{otherwise} \\ \end{cases} $$ How can you prove that $|X-Y|$ and $\min(X,Y)$ have the same distribution function? I tried: Firstly $f_X(x)=1/a$, $f_Y(y)=1/a$, so the CDF of $X$ is $x/a$ and the CDF of $Y$ is $y/a$, right?? $Z=\min(X,Y)$ The CDF of $\min(X,Y)$ is $\Pr(Z < z)=1-(1-\Pr(X < z))(1-\Pr(Y < z))$ because at least one must be smaller than $z$ to be included in the CDF of $\min(X,Y)$, so it equals $1-(1-z/a)^2$. But I am stuck with finding the CDF for $|X-Y|$. And finding the CDF for $|X-Y|$, will I have to differentiate both CDFs and get to the same function? I am really stuck, I hope someone can help. Thanks AI: Let $W=|X-Y|$. We find the cdf of $W$. So we want to find $\Pr(W\le w)$. The only interesting part is for $0\le w\le a$. Draw the square where our joint density for $X$ and $Y$ lives. Now draw the two lines $x-y=w$ and $x-y=-w$. Then the probability that $W\le w$ is the area of the part of the square between these two lines, divided by $a^2$. It is easier to find the area of the rest of the square. The rest of the square is made up of two right-angled isosceles triangles. Together they make up a little square. Find the side of that square. Note that the right-angled isosceles triangles are the north-west and south-east corners of the $a\times a$ square. Let's find the leg length of the one at the south-east corner. The line $y=x-w$ meets the $x$-axis at $x=w$. So the leg length is $a-w$. Thus the combined area of the two triangles is $(a-w)^2$, and therefore $$\Pr(W\le w)=1-\left(\frac{a-w}{a}\right)^2.$$ You will find that this bears a very nice resemblance to the cdf you calculated for $\min(X,Y)$: the distributions of the two random variables are the same. Remark: Instead of using geometry, we could have integrated. But why bother? There is undoubtedly a way of seeing that the distributions are the same with no calculation. Maybe tomorrow!
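A Monte Carlo illustration (my addition; the value $a=2$ and the sample size are arbitrary) that both $|X-Y|$ and $\min(X,Y)$ follow the CDF $1-\left(\frac{a-w}{a}\right)^2$:

    import random
    from bisect import bisect_right

    a = 2.0
    N = 200_000
    xs = [random.uniform(0, a) for _ in range(N)]
    ys = [random.uniform(0, a) for _ in range(N)]

    diff = sorted(abs(x - y) for x, y in zip(xs, ys))
    mins = sorted(min(x, y) for x, y in zip(xs, ys))

    def ecdf(sample, w):
        # empirical P(sample <= w)
        return bisect_right(sample, w) / len(sample)

    for w in (0.25, 0.5, 1.0, 1.5):
        exact = 1 - ((a - w) / a) ** 2
        print(w, round(ecdf(diff, w), 3), round(ecdf(mins, w), 3), round(exact, 3))
    # both empirical CDFs track 1 - ((a - w)/a)^2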
H: Transforming the Area of Integration in the Beta Function My text derived the Beta function by change of variable and the Jacobian determinant. $$\Gamma(x)\Gamma (y) = \int_0^{\infty} \int_0^{\infty} e^{-s-t} \space t^{x-1} \space s^{y-1} dt ds$$ Let $t = u(1-v)$ and $s = uv$. Then $ 0 \le u < {\infty}$ and $0 \le v \le 1$. How did the boundary of $u, v$ become as such? Now, I have that $u = t + s$. So when either $t$ or $s$ goes to infinity, so does $u$. Then, $v = {s \over {s+t}} = {1 \over {1+(t/s)}}$, but dow do you justify that $v$ goes to $1$ when $t, s$ approach infinity? AI: What you need to do is to maximize and minimize the two functions $$ u=t+s,v={\frac {s}{t+s}},\quad 0\leq t \leq \infty,\, 0\leq s \leq \infty. $$ The first one is easy to see its max and min. For the second one, we shift to the polar coordinates $t=r\cos(\theta)$ and $s=r\sin(\theta)$ which implies $$ v=\frac{\sin(\theta)}{\sin(\theta)+\cos(\theta)},\quad \theta \in [0,\pi/2]. $$ Now, the min and max of the above function is $0$, $1$.
H: Simplifying a Recurrence Relation $(n_i) $ is a sequence of integers satisfying $n_{i+1}=a_{i+1}n_i+n_{i-2}$. Consider a subsequence $(n_{i_j}).$ Can $n_{j_{i+1}}$ be written in terms of $n_{j_i}$? An attempt is to use the recurrence relation and write $$n_{j_{i+1}}=a_{j_{i+1}}n_{j_{i+1}-1}+n_{j_{i+1}-2}=a_{j_{i+1}}(a_{j_{i+1}-1}n_{j_{i+1}-2}+n_{j_{i+1}-3})+n_{j_{i+1}-2} $$ and so on, but I'm not sure how to obtain a closed formula containing $n_{j_i}$ Another idea is to write $n_i=q^i$. Then $$q=\frac{a_{j_{i+1}}+\sqrt{a_{j_{i+1}}^2+4}}{2}, $$ but there's no $n_{j_i}.$ AI: Here is an example to show what is not possible. Set $\forall n: a_n=0 $, then you have $n_i=n_{i-3}$, but $n_i$ need bear no relation at all to $n_{i-2}$, and in fact you end up with three independent constant sequences interleaved with each other. You can pick a subsequence which picks out the values in any order (e.g. based on the $n^{th}$ digit of $\pi$). With a little jiggling, you will see that the $a_i$ can be chosen to give a range of behaviours - so unless you have some control over the $a_i$ there seems to be no general way to begin to answer the question.
H: What is the density of $1-X^3$ if $X$ is a Cauchy random variable? What is the density function of $Y=1-X^3$, if $X$ is a Cauchy random variable? My approach: $$Pr(Y<y)=Pr(1-X^{3}<y)=Pr(X<(1-y)^{-3})=\int^{y}_{-\infty}\left(1-\frac{1}{\pi(1+t^{2})}\right)^{-3}\:dt$$ - is this ok? And then the density function would be the derivative of the function above? EDIT1 $$Pr(Y<y)=Pr(1-X^{3}<y)=Pr(X>(1-y)^{-3})=1-\int^{(1-y)^{-3}}_{-\infty}\left(\frac{1}{\pi(1+t^{2})}\right)\:dt=1-A$$ The CDF for Cauchy distribution, according to Wikipedia Cauchy Distribution - do I use it correctly? $$A=\frac{1}{\pi}arctan((1-y)^{-3})+\frac{1}{2} - \frac{1}{\pi}arctan(-\infty)-\frac{1}{2}=\frac{1}{\pi}arctan((1-y)^{-3})-\frac{1}{\pi}\frac{-\pi}{2}=\frac{1}{\pi}arctan((1-y)^{-3})+\frac{1}{2}$$ Than the distribution function would be the derivative of 1-A ? is this correct? AI: Apart from the problem mentioned by @Avitus in a comment, let me mention that the identity $$ P(X\lt g(x))=\int_{-\infty}^xg(f_X(t))\mathrm dt, $$ where $f_X$ is the density of $X$, is quite wrong. Please use the correct $$ P(X\lt g(x))=\int_{-\infty}^{g(x)}f_X(t)\mathrm dt. $$ When, as in your case, the function $g$ is increasing and differentiable, this is also $$ P(X\lt g(x))=\int_{-\infty}^{x}g'(s)f_X(g(s))\mathrm ds. $$ Likewise, if $g$ is decreasing and differentiable and $Y=g^{-1}(X)$, then $$ P(Y\lt x)=P(X\gt g(x))=\int_{g(x)}^{+\infty}f_X(t)\mathrm dt=\int_{-\infty}^{x}(-g'(s))f_X(g(s))\mathrm ds. $$
H: A gambler with the devil's luck? A gambler with $1$ dollar intends to make repeated bets of $1$ dollar until he wins $20$ dollars or is ruined. Probabilities of win/loss are $p$ and $(1-p)$, and each bet brings a gain/loss of $1$ dollar. Unfortunately, the devil is active, and ensures that every time he reaches $19, he loses! Obviously, the poor guy will get ruined sooner or later! The question is, what is the expected number of bets he makes until he is ruined? AI: Let $f(n)$ be the expected number of bets given that the gambler has £$n$. (My gambler is British to save messing around with dollar signs). For every integer $0<n<19$ we have $$f(n) = 1 + pf(n+1) + (1-p) f(n-1)$$ Solutions to this equation look like $$f(n) = \alpha + \beta\left(\frac{1-p}p\right)^n + \frac n{1-2p}\tag 1$$ and by recursion this formula must hold for $0\leq n\leq19$. We must have $f(0)=0$ and because of the unholy involvement we have $f(19) = 1+f(18)$, that is $$\begin{align} \alpha + \beta &= 0 \\ \alpha + \beta \left(\frac{1-p}p\right)^{19} + \frac {19}{1-2p} &= \alpha + \beta \left(\frac{1-p}p\right)^{18} + \frac {18}{1-2p} + 1 \end{align}$$ Rearranging $$\begin{align} \frac{2p}{1-2p}&= \beta\left(\frac{1-p}p\right)^{18}\frac{2p-1}p \\ \beta &=-\left(\frac{1-p}p\right)^{-18}\frac{2p^2}{(1-2p)^2} \end{align}$$ So substituting into $(1)$ we get the same answer as given by Did. Edit: As pointed out below this answer is invalid when $p=\frac 12$ because the particular solution $\frac{n}{1-2p}$ is infinite. In this case notice that $f(n) = -n^2$ satisfies $f(n) = 1+\frac{f(n+1) + f(n-1)}2$ hence all solutions will be in the form $$f(n) = \alpha + \beta n -n^2.$$ Which is solved as before with $\alpha = 0, \beta=38$, giving $f(n)=38n-n^2$ and, for the gambler starting with £$1$, an expected $f(1)=37$ bets.
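As an independent sanity check of the $p=\tfrac12$ case (my addition; the simulation structure and names are mine, and the devil is modelled as a forced loss whenever the gambler holds £19):

    import random

    def play_once(start=1, target=20, devil_at=19, p=0.5):
        n, bets = start, 0
        while 0 < n < target:
            bets += 1
            if n == devil_at:
                n -= 1                               # the devil forces a loss at 19
            else:
                n += 1 if random.random() < p else -1
        return bets

    trials = 100_000
    avg = sum(play_once() for _ in range(trials)) / trials
    print(avg)                  # close to 37 for p = 1/2
    print(38 * 1 - 1 ** 2)      # f(1) = 38n - n^2 evaluated at n = 1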
H: To show a function is integrable I came across this question while studying for my exam: To show the function $f(x) = \frac{(\sin x)^2}{x^2}$ is Lebesgue integrable on [0, $\infty$). I wonder if there is any smarter way of proving it (like, using absolute continuity) without going all the way back to the definition. Thanks AI: A standard way to show that a "complicated" non-negative function is integrable is to find an integrable upper bounding function (for which integrability is simpler to prove). Since for all $t$, $|\sin t|\leqslant \min\{|t|,1\}$, we get for $x$ positive, $$0\leqslant \frac{\sin^2x}{x^2}\leqslant \chi_{(0,1]}+\chi_{(1,+\infty)}\frac 1{x^2}.$$
H: Jordan form of matrices So my professor gave me this question: $A=\begin{pmatrix} 0 & 2 & 5\\ -5 & 5 & 10\\ 2 & -2 & -4 \\ \end{pmatrix}$ I had to calculate $\forall 0 < i$ $kerA^{i}$ and $ImA^{i}$ So after calculating I reached those results: $kerA$={$\begin{pmatrix} 1 \\ 5 \\ -2 \\ \end{pmatrix}$} and $\forall 1<i$ $kerA^{i}$={$\begin{pmatrix} 1 \\ -1 \\ 0 \\ \end{pmatrix}$,$\begin{pmatrix} -3 \\ 0 \\ 1 \\ \end{pmatrix}$} regarding the image it is just the span of the columns of $A^{i}$ Then he asked us to calculate $\forall 0 < i$ $ker(A-I)^{i}$ and $Im(A-I)^{i}$ So after calculating I reached those results: $\forall 0<i$ $ker(A-I)^{i}=$ {$\begin{pmatrix} 0 \\ -5/2 \\ 1 \\ \end{pmatrix}$} regarding the image it is just the span of the columns of $(A-I)^{i}$ Then he asked us to show the Jordan form of $A$ How can I conclude that from what I proved? As far as I know the diagonal of the Jordan form contains the eigenvalues of $A$ (each eigenvalue repeated on the diagonal according to its algebraic multiplicity) and nothing more, but how can I conclude from what I proved the eigenvalues of $A$? Any help would be appreciated Thanks in advance!! AI: For a general method to compute the normal form in such a case: For each eigenvalue $\lambda$, start with the power such that the kernel of $(A-\lambda I)^i$ does not change anymore (call that power $m_\lambda$). Take a basis of a complement of $\operatorname{ker}(A-\lambda I)^{m_\lambda-1}$ in $\operatorname{ker}(A-\lambda I)^{m_\lambda}$. Now apply $(A-\lambda I)$ to that basis. The elements will lie in $\operatorname{ker}(A-\lambda I)^{m_\lambda -1}$. Now take a basis of the complement of the span of these vectors and $\operatorname{ker}(A-\lambda I)^{m_\lambda-2}$ in that space. Continue inductively. You will then get a basis for your whole space. If you change your matrix to that basis, you will get a matrix in Jordan normal form. In your example (assuming your calculations are correct) take for example $b_0=(1,-1,0)^T$ and $Ab_0$ (for the eigenvalue $0$) and $b_1=(0,-5/2,1)^T$ for the eigenvalue $1$. Then your matrix represented in Jordan normal form will look like $$\begin{pmatrix}0&1&0\\0&0&0\\0&0&1\end{pmatrix},$$ the first column corresponding to $Ab_0$, the second to $b_0$ and the third to $b_1$
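A quick computer-algebra cross-check (my addition, using sympy; the block ordering sympy returns may differ from the one written above):

    import sympy as sp

    A = sp.Matrix([[0, 2, 5], [-5, 5, 10], [2, -2, -4]])

    print(A.eigenvals())       # {0: 2, 1: 1}: eigenvalue 0 with multiplicity 2, eigenvalue 1 once
    print(len(A.nullspace()))  # 1: a single eigenvector for 0, hence one 2x2 Jordan block for 0
    P, J = A.jordan_form()
    print(J)                   # the Jordan form, equal to the matrix above up to block order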
H: Result of $ \sum_{i=1}^{\infty} \frac{1}{(2i-1)^2}$ What is the result of $$ \sum_{i=1}^{\infty} \frac{1}{(2i-1)^2}$$ I feel that the answer should be obvious, but somehow I can't find it. The series $$ 1 + \frac{1}{9} + \frac{1}{25}\ ... $$ doesn't look familiar to any other 'known' sequence. So how would you proceed here? Thanks! AI: First, assume that you know the identity $$S=\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{4^2}+\cdots=\frac{\pi^2}{6}$$ Then, $$\frac{1}{2^2}+\frac{1}{4^2}+\frac{1}{6^2}+\frac{1}{8^2}+\cdots=\frac{1}{2^2}\left(\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{4^2}+\cdots\right)=\frac{S}{4}$$ Therefore, $$\frac{1}{1^2}+\frac{1}{3^2}+\frac{1}{5^2}+\frac{1}{7^2}+\cdots=S-\frac{S}{4}=\frac{3S}{4}=\frac{\pi^2}{8}$$ We can do this rearrangement in the last step since the series converges absolutely, so splitting it into even and odd terms is harmless.
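A two-line numerical confirmation (my addition):

    from math import pi

    partial = sum(1 / (2 * i - 1) ** 2 for i in range(1, 200_001))
    print(partial, pi ** 2 / 8)   # 1.23370... in both cases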
H: Why is $ \lim_{n \to \infty} \left(\frac{n-1}{n+1} \right)^{2n+4} = \frac{1}{e^4}$ According to WolframAlpha, the limit of $$ \lim_{n \to \infty} \left(\frac{n-1}{n+1} \right)^{2n+4} = \frac{1}{e^4}$$ and I wonder how this result is obtained. My approach would be to divide both numerator and denominator by $n$, yielding $$ \lim_{n \to \infty} \left(\frac{1-\frac{1}{n}}{1 + \frac{1}{n}} \right)^{2n+4} $$ As $ \frac{1}{n} \to 0 $ as $ n \to \infty$, what remains is $$ \lim_{n \to \infty} \left(\frac{1-0}{1 + 0} \right)^{2n+4} = 1 $$ What's wrong with my approach? AI: If you already know that $$\lim_{n\to\infty}\left(1+\frac x{f(n)}\right)^{f(n)}=e^x\;,\;\;\forall\,x\in\Bbb R$$ and for any function $\,f\,$ s.t. $\,f(n)\xrightarrow[n\to\infty]{}\infty\;$ , then $$\left(\frac{n-1}{n+1}\right)^{2n+4}=\left[\left(1-\frac2{n+1}\right)^{n+1}\right]^2\left(1-\frac2{n+1}\right)^2\xrightarrow[n\to\infty]{}(e^{-2})^2\cdot1^2=e^{-4}$$
H: Do these two definitions of Disconnectedness coincide? Here are 2 different versions of the definition of Disconnectedness that I know of, one is taught in class, and the other one is the version in Gouvêa's $p$-adic numbers - An Introduction. But I just don't seem to be able to connect the 2 definitions. The set $D$ is called disconnected iff First version There exist 2 open sets $O_1, O_2$ such that: $D \cap O_1 \cap O_2 = \emptyset$ $D \cap O_1 \neq \emptyset \wedge D \cap O_2 \neq \emptyset$ $D \subset (D \cap O_1) \cup (D \cap O_2)$ Second version (Gouvêa) There exists 2 open sets $O_1; O_2$ such that: $\color{red}{O_1 \cap O_2 = \emptyset}$ $D \cap O_1 \neq \emptyset \wedge D \cap O_2 \neq \emptyset$ $D = (D \cap O_1) \cup (D \cap O_2)$ The second version is no doubt stronger than the first one. The way I understand it, the open sets $O_1; O_2$ in the second version can be obtained from the first by some kind of shrinking. But given the first version, how can we find such a way to 'shrink' both $O_1, O_2$, so that they become disjoint? And here's a problem from Gouvêa's book that I'm having troubles with. I can prove it one way using the first version, but I can just prove 1 way using the second definition. Problem Prove that a set $S$ is disconnected iff we can write it as a union $S = A \cup B$, for some arbitrary sets $A, B$, and moreover $\overline{A} \cap B = \emptyset$, $A \cap \overline{B} = \emptyset$. How can I prove the part that: Given $S = A \cup B$, and $\overline{A} \cap B = \emptyset$, $A \cap \overline{B} = \emptyset$, then $S$ is disconnected? How can I find $O_1; O_2$, such that they are disjoint? AI: HINT: To show that the first definition implies the second, suppose that $U$ and $V$ are open, $U\cap V\cap D=\varnothing$, $U\cap D\ne\varnothing\ne V\cap D$, and $D\subseteq U\cup V$. Let $W=X\setminus\operatorname{cl}U$; clearly $U\cap W=\varnothing$. Show that $W\cap D=V\cap D$, and conclude that $U\cap D\ne\varnothing\ne W\cap D$, and $D\subseteq U\cup W$. For the other problem, you don’t need to find disjoint $O_1$ and $O_2$: you just need to be sure that $O_1\cap O_2\cap S=\varnothing$, as in the first definition of disconnectedness. What if you use $X\setminus\operatorname{cl}A$ and $X\setminus\operatorname{cl}B$?
H: Balls arrangements in a line We randomly arrange balls numbered 1-100 in a line. What is the probability that there is a spot which splits the balls into two groups: All the balls preceding the splitting point are placed in ascending order (regarding their number), and all the balls from that point forward are place in decending order? What I did: 100 different balls are placed in a line. Therefore $|\Omega| = 100!$ There are 100 possible "split-points". We'll mark $N$="split-point". For every $N$, the possibility of it being a split-point is $\binom{100}{N}$, as we choose the first $N$ balls and have only one way to arrange them and the remaining $100-N$ balls. Therefore: $$P(\text{a split point exists})=\frac{\sum\limits_{N=1}^{100}\binom{100}{N}}{100!}=\frac{\sum\limits_{N=0}^{100}\binom{100}{N}-1}{100!}=\frac{2^{100}-1}{100!}$$ But the answer given is:$$\frac{2^{99}}{100!}$$ Am I wrong? How? Thank you for your time and effort. AI: You are overcounting in one respect (or possibly two). First consider that you identify the position of the "split" with the location of a ball, so you say there are 100 possible "split" points. But really we are talking about where the 100 ball (highest point) should go. Once you pick that position k, the subsets of size k-1 and 100-k to the left and right can be chosen (and the arrangement of balls within the two stretches is forced). But really you are picking subsets of 99 balls at this point. So redo the calculation and I think you'll get the book answer.
H: $||f-f_n||_{L^1} \rightarrow 0$ but $f_n \rightarrow f$ for no $x$ Show that there are $f\in L^1(\mathbb{R}^d)$ and a sequence $\{ {f_n}\}$ with ${f_n}\in L^1(\mathbb{R}^d)$ such that $||f-f_n||_{L^1} \rightarrow 0$ but $f_n \rightarrow f$ for no $x$ Thanks. AI: Consider the sets $$\begin{array}{llll} S_{1,1}:=[0,1] & & & \\ S_{2,1}:=[0,2^{-1}) & S_{2,2}:=[2^{-1},1) & & \\ S_{3,1}:=[0,2^{-2}) & S_{3,2}:=[2^{-2},2\cdot 2^{-2}) & S_{3,3}:=[2\cdot 2^{-2},3\cdot 2^{-2}) & S_{3,4}:=[3\cdot 2^{-2},1) \end{array}$$ and more generally, $S_{n,k}:=[(k-1)2^{-(n-1)},k\,2^{-(n-1)})$ for $1\le k\le 2^{n-1}$. Writing the sets in lines as above, let $f_N$ be the characteristic function of the $N$-th set in this enumeration, going from the top to the bottom and left to right. With $f=0$, the lengths of the sets shrink to $0$, so $\|f-f_N\|_{L^1}\to 0$; but every $x\in[0,1)$ lies in one set of each line, so $f_N(x)$ takes the value $1$ infinitely often and the value $0$ infinitely often, hence converges for no such $x$.
H: Can we always find an analytic function if we know countable points? I hope to find an analytic function such that $f(n)=b_{n}$, $n\in \mathbb{N}$? Can we always take an analytic function $f$? AI: Yes, you can find such a function. We can do it inductively. Start with $f_0(x) = 0$, the constant function. For convenience, let $h_n$ be a polynomial with $h_n(m) = 0$ for $m < n$ and $h_n(n) = 1$. Suppose you have $f_n$ constructed for some $n$ so that $f_n(m) = b_m$ for $m < n$. We want to construct $f_{n+1}$. Let $N$ be a (huge) integer, which we will determine shortly. Consider the map $g_n(x) = (b_n - f_n(n)) (x/n)^N h_n(x)$. It is clear that $g_n(m) = 0$ and $g_n(n) = b_n - f_n(n)$. Define $f_{n+1} := f_n + g_n$; then $f_n(m) = b_m$ for $m < n+1$. Now, let use determine how large $N$ is supposed to be. We would like $|g_n(x)| < 2^{-n}$ for $|x| < n-1$. This will hold if $N$ is so large that $\left(\frac{n-1}{n}\right)^N \cdot \max_{|x| < n-1} (b_n - f_n(n)) h_n(x) < 2^{-n}$, but we don't want to describe $N$ in any more explicit way. Now, $f_n$ is uniformly convergent on compact sets, and all $f_n$ are analytic. It follows that their limit, call it $f$, is analytic as well. And clearly $f(m) = b_m$ for all $m$.
H: A Basic question on intuition of rational cut set in the construction of real numbers The intuition for cuts presumably comes from the standard experience of approximation by terminating decimals. For example, we can approximate $\sqrt{2}$ by the sequence $1,1.4,1.41,1.414,1.4142, \dots$. Now, why a cut set for $\sqrt{2}$ requires "all" rationals less than $\sqrt{2}$. Intuitively, only these rationals $1,1.4,1.41,1.414,1.4142, \dots$ uniquely identifies $\sqrt{2}$. Are not they sufficient ? Related question : Why the addition of two cut set is defined as the set of all possible summation of elements of the two cutsets ? AI: The answer to your question is somewhat complicated - the successive decimal approximations define a number whose square is $2$ in a context in which every increasing sequence which is bounded above has a limit. There is a good discussion here, and more related topics here. But there is a bigger issue. The square root of $2$ can be taken to be the limit of many different sequences of rational numbers (try working in base $9$ or $11$ instead of base $10$ for starters). If we take sections of "all" the rationals, it becomes easier to work with real numbers because we don't have to keep proving that two representations are equivalent (eg for the purposes of multiplying numbers).
H: Why is $X^4+1$ reducible over $\mathbb F_p$ with $p \geq 3,$ prime I have proven that in $\mathbb F_{p^2}^*$ exists an element $\alpha$ with $\alpha^8 = 1$. Let $f(X) := X^4+1 \in \mathbb F_p[X]$. How can I prove that $f$ is reducible over $\mathbb F_p$? Has $f$ a zero in $\mathbb F_p$ ? AI: I think I know, finally!, what you meant with this exercise. Be sure you can follow and prove all the following: Claim: $\,x^4+1\in\Bbb F_p[x]\,$ is reducible for any prime $\,p\,$ Proof : For $\,p=2\,$ the claim follows from $\,x^4+1=(x+1)^4\bmod 2\,$ , so let us suppose $\,p\,$ is odd. Note that for any such prime, $\,p^2-1=0\bmod 8\,$ (why?) , so that the cyclic multiplicative group $\,\Bbb F_{p^2}^*\,$ has a cyclic subgroup of order $\;8\;$ , and from here that you have an element of order $\,8\,$ in $\,\Bbb F_{p^2}^*\;$ (and not merely an element s.t. $\,\alpha^8=1\,$ , which is completely trivial). Since $$x^8-1=(x^4+1)(x^4-1)$$ over any field, all the roots of the right hand side are contained in $\;\Bbb F_{p^2}^*\;$ , and since this is an extension of degree two over the prime field $\,\Bbb F_p\,$ (and observe that $\,x^4+1\,$ is always a polynomial over this prime field!), it cannot be this polynomial is irreducible over the prime field since then any of its roots would form an extension of $\,\Bbb F_p\,$ of degree $\;4\;$ , which is absurd. $\;\;\;\;\;\;\;\;\;\;\;\;\;\square\;$
H: Show that $ \mathbf u^2 \mathbf v^2 = (\mathbf u \cdot \mathbf v)^2 - (\mathbf u \wedge \mathbf v)^2 $ where $ \mathbf u $ and $ \mathbf v $ are vectors. From Linear and Geometric Algebra by Alan Macdonald. AI: You can use $$ \begin{align} (u\cdot v)^2 - (u\wedge v)^2 & = \frac{1}{4} \left( (uv+vu)^2 - (uv-vu)^2\right) \\ & = \frac{1}{4} \left(uvuv + uvvu + vuuv + vuvu - uvuv + uvvu + vuuv - vuvu \right) \\ & = \frac{1}{2} \left( uvvu + vuuv\right) \\ & = \frac{1}{2} \left( u^2v^2+u^2v^2\right) \\ & = u^2v^2 \end{align} $$ The third line to the fourth line is true because $$uvvu = uv^2u = uuv^2 = u^2v^2$$ where we can re-order terms because $u^2$, $v^2$ are scalars.
H: norm induced by inner product and triangle inequality Let $\langle\cdot,\cdot\rangle$ be a scalar product on a space $X$, and let $\lVert \cdot\rVert$ denote the norm induced by this scalar product. I need to show that for $x,y\in X$, $\lVert x+y\rVert=\lVert x\rVert+\lVert y\rVert $ holds $\Longleftrightarrow$ $x$ and $y$ are linear transformations of each other, i.e. $\exists \alpha\geq 0$ with $x = \alpha y$. While one direction is clear ($\Longleftarrow$) I am struggling with $\Longrightarrow$. How can I construct such an $\alpha$? AI: Do you know that the Cauchy-Schwarz inequality $$\lvert \langle x, y \rangle \rvert \leq \lVert x \rVert \lVert y \rVert$$ is an equality if and only if $x = \alpha y$ for some $\alpha$? (If not, here's a proof.) With that, you can prove the other direction by noting that $$\lVert x \rVert^2 +\lVert y \rVert^2 + 2\lVert x \rVert \lVert y \rVert = (\lVert x \rVert + \lVert y \rVert)^2 = \lVert x + y \rVert^2 =\langle x+y , x+y \rangle \\ = \langle x , x \rangle + \langle y , y \rangle + 2\langle x, y \rangle = \lVert x \rVert^2 + \lVert y \rVert^2 + 2 \langle x , y \rangle.$$ This implies that $\langle x , y \rangle = \lVert x \rVert \lVert y \rVert$, which by the first claim means that $x = \alpha y$ for some $\alpha$.
H: How to show this identity? Suppose $A,B,A+B$ are invertible matrices (or just elements of some arbitrary, possibly non-commutative unital ring). I know that we have: $$ (A-A(A+B)^{-1}A)^{-1}=A^{-1}+B^{-1} $$ This is easy to check by direct calculation. The question is, how can I reasonably arrive at the simplified expression on the right hand side without knowing it beforehand, or at least reasonably guess it so that it makes sense to check it? If it helps, I can assume that $A,B$ commute. AI: The trick is to make $A+B$ appear, in order to simplify the most problematic term, namely, the inverse of the sum: \begin{align} (A-\color{red}A(A+B)^{-1}A)^{-1}&=(A-\color{red}{(A+B)}(A+B)^{-1}A\color{red}{+B}(A+B)^{-1}A)^{-1}\\ &=(B(A+B)^{-1}A)^{-1}\\ &=A^{-1}(A+B)B^{-1}\\ &=(I+A^{-1}B)B^{-1}\\ &=B^{-1}+A^{-1}. \end{align} It will work in any non-commutative ring, and we don't need to assume $A$ and $B$ to commute.
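As a quick numerical spot check (my addition; random matrices are almost surely invertible, so no special structure is assumed):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4))
    B = rng.normal(size=(4, 4))

    lhs = np.linalg.inv(A - A @ np.linalg.inv(A + B) @ A)
    rhs = np.linalg.inv(A) + np.linalg.inv(B)
    print(np.allclose(lhs, rhs))   # True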
H: Finding the complementary language of a given language I'm trying to figure out what's the complementary language of: L = {w#w : w∈{a,b}*, |w| = k} I think it's the language of all the words w#w where |w|!=k. I think my answer is not correct. How should I think about this? And what is the correct answer? AI: Hint: Since L contains only words like $abb#abb$ with the property that they repeat themselves (with a "#" in the middle), words like $abb#bab$, where the second part differs from the first part, will not be in your language. So, among the words of the form $a\text{#}b$ with $|a|=|b|=k$, the ones lying in the complement are exactly $$\{a\text{#}b\mid a\ne b,\ |a|=|b|=k\};$$ the full complement also contains every word that is not of this shape at all (wrong lengths, or not exactly one #).
H: Prove that the polynomial divided by a fraction of the power of n is equal to the sum of fractions of constants and successive powers of $ax+b$ Let $n\geq1$ be an integer, and let $P(x)$ be a polynomial with $\deg P(x)<n$. Prove that if $ a \in \Bbb R\setminus\{0\} $ then: $ \frac{P(x)}{(ax+b)^n} = \frac{c_1}{ax+b} + \frac{c_2}{(ax+b)^2}+...+\frac{c_n}{(ax+b)^n}$ for appropriately selected constants $ c_1, c_2, ... c_n$. So let's write P(x) as a Taylor series. $ P(x) = f(t) + f'(t) \frac{x-t}{1} + ... + f^{(k)}(t)\frac{(x-t)^k}{k!} + R_k$ and $k<n$ I need to find a good t. $ax+b = x-t$ ? I don't know what is next. AI: Hints: Taylor polynomial around $\,\alpha:=-\frac ba\;$ : $$P(x)=P(\alpha)+\frac{P'(\alpha)(x-\alpha)}{1!}+\frac{P''(\alpha)(x-\alpha)^2}{2!}+\ldots\implies$$ Please observe that $\,x-\alpha=x+\frac ba\;$: $$\frac{P(x)}{(ax+b)^n}=\frac1{a^n}\frac{P(x)}{\left(x+\frac ba\right)^n}=\ldots$$
H: Establishing equality between two functions $\mathbb{Z}_5[x]$; $a(x) = x^5 + 1$ and $b(x) = x-4$ Consider the following two polynomials in $\mathbb{Z}_5[x]$: $$ a(x) = x^5 + 1 $$ $$ b(x) = x - 4 $$ You may check that $a(0) = b(0), a(1) = b(1), \ldots, a(4) = b(4)$, hence $a(x)$ and $b(x)$ are equal functions from $\mathbb{Z}_5$ to $\mathbb{Z}_5$. Why and how $a(0)=b(0)$ for the above two functions? AI: $a(0) = (0)^5 + 1 \equiv 1$ mod $5$. $b(0) = (0) - 4 = -4 \equiv1$ mod $5$ Remember numbers in $\mathbb{Z}_5$ are the same if they differ by a multiple of 5.
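A one-loop check of all five residues (my addition; by Fermat's little theorem $x^5 \equiv x \pmod 5$, which is why the two polynomials agree as functions):

    for x in range(5):
        a = (x ** 5 + 1) % 5
        b = (x - 4) % 5
        print(x, a, b, a == b)   # the last column is True for every x in Z_5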
H: I'm looking for a function $f(x,y,z)$, which has partial derivatives only in single point Function must be defined in $\mathbb R$. I know that Dirichlet function is involved somehow, but i still can't find out an example. AI: As usually, let $D(x)$ be the characteristic function of the set of all rational numbers $\mathbb{Q}$ as a subset of the set of all real numbers $\mathbb{R}$ (see Wiki for more info). The function $f(x,y,z):=(x^2+y^2+z^2)D(x^2+y^2+z^2)$ has the required property, being differentiable only at the origin.
H: looking for a combinatorial interpretation given positive integers $n,m$ does the fraction $$ \frac{(nm)!}{n!^mm!} $$ count something? Namely does it correspond to the number of possibilities to do something? AI: This counts the number of partitions of $nm$ items into $m$ groups of $n$ items. $$\frac{(nm)!}{(n!)^m}=\binom{nm}{n,n,\dots,n}$$ is the number of ordered partitions of $nm$ elements into $m$ sets of $n$ elements. Dividing by $m!$ then counts unordered partitions of this sort.
H: How to check if function has global extremes? I need to determine if function: $f(x,y)=x+2y-2\log(xy) $ has global minimum/maximum. I've found local minimum at $(2,1)$, but that's not any proof of global minimum. AI: Put $x=1/y$. You get $f(x,1/x)=x+2/x$. Approach $x$ to zero from below and from above. What do you get?
H: Show that $e^x \geq (3/2) x^2$ for all non-negative $x$ I am attempting to solve a two-part problem, posed in Buck's Advanced Calculus on page 153. It asks "Show that $e^x \geq \frac{3}{2}x^2$ $\forall x\geq 0$. Can $3/2$ be replaced by a larger constant?" This is after the section regarding Taylor polynomials, so I have been attempting to leverage the Taylor expansion for $e^x$ at $0$. $e^x \geq 1+x+\frac{x^2}{2}$, and by quadratic formula, we have $1+x+\frac{x^2}{2}\geq \frac{3}{2}x^2$ for $x\in [0, \frac{1+\sqrt{5}}{2}]$. Now also $e^x\geq 1+x+\frac{x^2}{2}+\frac{x^3}{6}$. We know there exists a $c\in \mathbb{R}^{\geq 0}$ such that for all $x\geq c$, we have $1+x+\frac{x^2}{2}+\frac{x^3}{6}\geq \frac{3}{2}x^2$. I want to find this point without messing with the cubic formula, etc. I think I am missing a simpler way. Any ideas? AI: Consider a quadratic $y=a x^2$ that is tangent to $y=e^x$. That is, we match function and derivative at a point $x=x_0$. Then $$e^{x_0}=a x_0^2$$ $$e^{x_0}=2 a x_0$$ Then $a x_0^2=2 a x_0$ so that $x_0=2$. Plugging this back into one of the above equations, we get that $$a=\frac{e^2}{4} \approx 1.84726$$ Consider, as @julien suggests, the function $g(x)=a x^2 e^{-x}$ which has derivative $a (2-x) x e^{-x}$ and second derivative $a (x^2-4 x+2) e^{-x}$. This function represents the ratio of the quadratic to the exponential and is a maximum at $x=2$ for all $a>0$. As $a x^2$ monotonically increases in $a$, we have $$e^x \ge \frac{e^2}{4} x^2 \gt \frac{3}{2} x^2$$
H: Why is the determinant the volume of a parallelepiped in any dimensions? For $n = 2$, I can visualize that the determinant $n \times n$ matrix is the area of the parallelograms by actually calculating the area by coordinates. But how can one easily realize that it is true for any dimensions? AI: If the column vectors are linearly dependent, both the determinant and the volume are zero. So assume linear independence. The determinant remains unchanged when adding multiples of one column to another. This corresponds to a skew translation of the parallelepiped, which does not affect its volume. By a finite sequence of such operations, you can transform your matrix to diagonal form, where the relation between determinant (=product of diagonal entries) and volume of a "rectangle" (=product of side lengths) is apparent.
H: Showing that if $\lim_{x\to\infty}xf(x)=l,$ then $\lim_{x\to\infty}f(x)=0.$ Let $l \in\mathbb{R} $ and $ f:(0,\infty) \to \Bbb R $ be a function such that $ \lim_{x\to \infty} xf(x)=l $ . Prove that $ \lim_{x\to \infty} f(x)=0 $. Any help would be appreciated - I found this difficult to prove AI: $\lim\limits_{x\to \infty} xf(x)=l$ $\lim\limits_{x\to \infty} \frac{1}{x}=0$ $\lim\limits_{x\to \infty} \frac{1}{x}xf(x) =\lim\limits_{x\to \infty}f(x)$ $\lim\limits_{x\to \infty} \frac{1}{x}xf(x)=\left(\lim\limits_{x\to \infty} \frac{1}{x}\right)\left(\lim\limits_{x\to \infty} xf(x)\right)=0l=0$ $\boxed{\lim\limits_{x\to \infty}f(x)=0}$
H: How to examine if multivariable functions are differentiable? How to examine if the functions $f(x,y)=|x+y|$ and $g(x,y)=\sqrt{|xy|}$ are differentiable at the points $(0,0)$ for $f(x,y)$ and $(0,1)$ for $g(x,y)$ AI: Normally, if you suspect a function to be differentiable, the easiest is to show that its partial derivatives exist and are continuous. This will imply differentiability (although it is not a necessary condition). If you think a function is not differentiable in a given point, then you might be able to obtain a contradiction with the definition of differentiability by approaching the given point from various directions. For example, in the first of your two cases, you may draw on your knowledge of what happens in the one-variable case and remember that $x \mapsto \lvert x \rvert$ is not differentiable in the point $0$ to come up with the guess that something similar is probably also the case in two variables. More precisely, you can prove that $f$ is not differentiable in $(0,0)$ by approaching $0$ along the $x$-axis, letting $y= 0$ and obtaining two different results for the limits of the difference quotients (so that the limit does not exist).
H: Simple operator proof Let $A$ and $B$ be linear operators on vector space $\nu$. There exist an operator $X$ on $\nu$ for which $AX = B$ holds, only if $\text{img}(B) \subseteq \text{img}(A)$. I would like to see the proof of this statement. AI: Assume that such an $X$ exists, and let $y \in \mathrm{img}(B)$ so that $Bx = y$ for some $x$. Then also $A(Xx) = Bx = y$, which says that $y \in \mathrm{img}(A)$.
H: Does every function have a value of 0 for the nth derivative? Just out of curiosity, and for no other reason then I've been staring at the ceiling wondering, does every function have a value of zero at the nth derivative? And if it does, is there a function that can be used to tell you at which derivative the value of the function is zero? AI: No. In fact, any smooth function (one that can be differentiated arbitrarily many times) which is not a polynomial is such an example. This is a result of the following: Theorem: Let $f(x)$ be a $n$-times differentiable. If $f^{(n)}\equiv 0$ (meaning $f^{(n)}(x)=0$ for all $x$) then $f(x)$ is a polynomial. Proof: $$f(x)=\int\cdots\iiint f^{(n)} = \int\cdots \iint C_0=\int\cdots\int (C_0x+C_1)=\cdots=C_0x^{n-1}+\cdots+C_{n-1}$$
H: Prime ideal in the ring of polynomials I'm trying to do the following: Let $R = K[X,Y,Z]$ and $\mathfrak{p}$ = $(X+Y,Z^{2}-X)$. Show that $\mathfrak{p}$ is prime and find the transcendence degree of $R/\mathfrak{p}$. If I prove that $\mathfrak{p}$ is prime the question is over just using the fact that the quotient is integral over $K[y]$. But my problem is to show that $\mathfrak{p}$ is prime. I know so far that $\mathfrak{p}$ is generated by a regular sequence in $R$, i.e., $X+Y$ is regular in $R$ and $Z^{2}-X$ is regular in $R/(X+Y)$, and thus every minimal prime of $\mathfrak{p}$ has height $2$. I tried to do it by contradiction but I get nothing. Thank you for any help. AI: Consider the following map: $T: K[x,y,z]/(x+y, z^2-x) \to K[z]$ where $T$ is defined by $1 \mapsto 1$, $x \mapsto z^2, y \mapsto -z^2, z \mapsto z$. Then $x+y$ and $z^2-x$ both map to $0$ and so $T$ is well defined. $T$ is surjective and if $Tf=0$, then $f(z^2, -z^2, z)=0$. But in the domain space, $x=z^2, y=-z^2$ and so then $f(x,y,z)=0$. Thus, the map is an isomorphism, and since its image $K[z]$ is an integral domain, the quotient $R/\mathfrak{p}$ is a domain and $\mathfrak{p}$ is prime.
H: Figure out rate of change of speed for fixed distance Say I have two points. I know the distance between those points. And let's say I have a bobbin that's starting at point A at a fixed speed of 100 jiggers/second By point B, I want my bobbin to only be traveling 20 jiggers/second. What equation can I use to figure out what rate of change of speed I would need to apply to my bobbin to slow it down to 20 jiggers/second? How would I do it so that it eased into the new speed? I apologize if this is poorly worded or an asininely easy question. I'm a game developer, and I'm not really sure how to ask this question in a Google-friendly fashion. AI: This is a basic kinematics problem, in which two things are known: The distance between points: $d$ The initial and final velocities: $v_i=100$ j/s and $v_f=20$ j/s To find the desired acceleration, use the equation: $$v_f^2=v_i^2+2ad$$ where $a$ is the acceleration; solving for it gives $$a=\frac{v_f^2-v_i^2}{2d},$$ which comes out negative here, i.e. a constant deceleration.
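A small worked sketch of how this plugs into a game loop (my addition; the track length of 300 jiggers and the 60 fps time step are made-up example values):

    def constant_deceleration(v_i: float, v_f: float, d: float) -> float:
        """Rate of change of speed needed to go from v_i to v_f over distance d."""
        return (v_f ** 2 - v_i ** 2) / (2.0 * d)   # negative when slowing down

    a = constant_deceleration(100.0, 20.0, 300.0)
    print(a)   # -16.0 jiggers/second^2

    # per-frame update: the speed then ramps down linearly over time,
    # which already "eases" into 20 compared with an instant change
    dt = 1.0 / 60.0
    speed, travelled = 100.0, 0.0
    while travelled < 300.0 and speed > 20.0:
        travelled += speed * dt
        speed = max(20.0, speed + a * dt)
    print(round(speed, 2), round(travelled, 2))   # roughly 20 and 300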
H: The wedge product I have seen the wedge-product as being defined in differential geometry in the definition of a differential form or p-form. Now in the course we have proven the basic properties of this product and how to take the differential. Now when we apply this to the differentiation of a function in on a curved manifold, we make the following change in the integral $\int dx^n\rightarrow\int dx^0\wedge dx^1\wedge...\wedge dx^{n-1}$. I don't see how the wedge product is related to the volume element, I was hoping that you guys might be able to clear that up for me ? Second, I already asked around about this, the wedge product is said to be the generalisation of the cross vector product and the 3D volume element to higher dimensions. I was woundering of anyone could explain that ? AI: The right place to start is with determinants. The determinant of a 2 by 2 matrix can be thought of geometrically as the (signed) area of the parallelogram formed by the column vectors. The exterior product is a way of formalizing this intuition. The expression $dx^0\wedge dx^1\wedge...\wedge dx^{n-1}$ is by definition the volume element. The wedge product is a "generalisation" of cross product only in the sense that both involve the area of the parallelogram in one way or the other. Otherwise there are significant differences between them; thus, the exterior product is associative but the cross product is not. There is a good introduction to these ideas here.
H: Calculate the highest possible chunk size I need to find the highest possible value to multiply with in the following scenario. I have a collection of items, all items are of the same type and they have a fixed number of columns. The problem is that I need to find out the highest possible value that I can multiply with. X is the fixed column size of an item, in this case 9. Y is unknown; I don't know how to find out the highest possible value here. M is the maximum allowed size in one set of items, in this case 2100. $$ XY \leq M,\quad 9Y\leq 2100$$ I have not managed to come up with any formula to get Y. AI: You're looking (I think) for the greatest integer $Y$ such that $9Y\le2100.$ That is, the greatest integer less than or equal to $$\frac{2100}9=\frac{700}3=\frac{699}3+\frac13=233+\frac13.$$ Can you see what that integer is? More generally, given $X$ and $M$, your maximum $Y$ will be $$\left\lfloor\frac{M}{X}\right\rfloor,$$ where $\lfloor\cdot\rfloor$ is the floor function.
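In code this is just integer (floor) division; a minimal sketch (my addition, with made-up function and variable names):

    def max_chunk(column_size: int, max_set_size: int) -> int:
        """Largest Y with column_size * Y <= max_set_size."""
        return max_set_size // column_size   # floor division

    print(max_chunk(9, 2100))   # 233, since 9 * 233 = 2097 <= 2100 < 9 * 234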
H: How to evaluate a $A_{x\times y}B_{y\times z}$ where $ A$ and $B$ are matrices, $x\neq z$ I know how to evaluate a Cx,y * Dy,x (C rows equal to D columns), but how do I evaluate a matrix multiplication in which the involved matrices (A and B) have respectively different number of rows and columns? Here is the "mask": $A_{x\times y} B_{y\times z}$ (x,y in A and y,z in B are the matrices order). EDIT: I wanna know how to evaluate a multiplication of matrices that have A number of columns = B number of rows, but A number of rows != B number of columns. You usually multiply the elements of the A rows with the elements of B columns, but if the number of rows are different from the number of columns some elements would remain, what should I do? AI: You're getting what the indices denote mixed up: $C_{x\times y}\times D_{y \times x} = E_{x\times x}$ denotes the product of a matrix $C$ with $x$ rows and $y$ columns, times a matrix $D$ having $y$ rows and $x$ columns. This results in a matrix $E_{x\times x}$ which has $x$ rows and $x$ columns: i.e. $E$ is then a square matrix. Multiplication of matrices $A\cdot B$ is defined if and only if the number of columns of $A$ is equal to the number of rows of $B$. So in your case $A_{\large {\bf x}\times\color{blue}{\bf y}} \cdot B_{\large \color{blue}{\bf y}\times {\bf z}}$ is defined, and results in a matrix $C_{\large \bf x\times z}$. Here the first index denotes the number of rows of a matrix, and the second index denotes the number of columns of a matrix. We see that multiplication is defined because there are $\color{blue}{\bf y}$ columns in $A$, and $\color{blue}{\bf y}$ rows in $B$. So, e.g., $A_{x\times y}$ denotes a matrix with $x$-rows, and $y$-columns, and $x\times y$ is called the dimension of the matrix $A$. Example: Let's multiply: $A_{2\times 3}\cdot B_{3\times 3} = \begin{bmatrix}1 & 2 & 3 \\ 1 & 2 & 0\end{bmatrix} \cdot \begin{bmatrix} 0 & 1 & 0\\ 2 & 0 & 1\\ 1 & 0 & 2\end{bmatrix}$ We will obtain a product $$ \begin{align}A\cdot B =C_{2\times 3} & = \begin{bmatrix} (1\cdot 0 + 2\cdot2 + 3\cdot 1) & (1\cdot 1 + 2\cdot 0 + 3\cdot 0) & (1\cdot 0 + 2\cdot 1 + 3 \cdot 2)\\ (1\cdot 0 + 2\cdot 2 + 0\cdot 1) & (1\cdot 1 + 2\cdot 0 + 0\cdot 0) & (1\cdot 0 + 2\cdot 1 + 0\cdot 2)\end{bmatrix} \\ \\ &= \begin{bmatrix} 7 & 1 & 8 \\ 4 & 1 & 2 \end{bmatrix}\end{align}$$
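Checking the worked example with numpy (my addition):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [1, 2, 0]])        # 2 x 3
    B = np.array([[0, 1, 0],
                  [2, 0, 1],
                  [1, 0, 2]])        # 3 x 3

    print(A @ B)                     # [[7 1 8]
                                     #  [4 1 2]], a 2 x 3 matrix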
H: Prove that $q(a_i)\in \{a_1,..., a_n\}$ Let $p(x)$ and $q(x)$ be polynomials with rational coefficients such that $p(x)$ is irreducible over $\mathbb{Q}$. Let $a_1,..., a_n\in \mathbb{C}$ be the complex roots of $p$, and suppose that $q(a_1)=a_2$. Prove that $q(a_i)\in \{a_1,..., a_n\}$ for all $i\in {2, 3,..., n}$. AI: Hint: The given bits imply that $a_1$ is a zero of the polynomial $p(q(x))$. Why does this imply that $\gcd(p(x),p(q(x)))=p(x)$?
H: Faithful group actions and dimensions Just a quick question. I'm trying to understand the answer to one of my previous questions. The precise problem I want to show is as follows. Let $G$ be a group acting faithfully on a manifold $X$. If $G$ is such that $\dim G$ makes sense (for example, $G$ is also a manifold), then $\dim G \leq \dim X$. Suppose not, i.e., $\dim G > \dim X$. Then we wish to show that the kernel of the homomorphism $G \to \operatorname{Aut}(X)$ is nontrivial. This should follow if $\dim \operatorname{Aut}(X) = \dim X$, but in general $\operatorname{Aut}(X)$ can be much larger than $X$. (Also, does $\dim \operatorname{Aut}(X)$ make sense?) Any suggestions? EDIT: It turns out that this proposition is false as the examples below show, and the answer I linked to has been retracted (and is being rewritten?). Thanks to everyone for the help. AI: This is not true. For example, $SO(3)$ acts faithfully on the $2$-sphere $S^2$, but $SO(3)$ has dimension $3$, and $S^2$ only has dimension $2$.
H: Prove if $|z|=|w|=1$, and $1+zw \neq 0$, then $ {{z+w} \over {1+zw}} $ is a real number If $|z|=|w|=1$, and $1+zw \neq 0$, then $ {{z+w} \over {1+zw}} \in \Bbb R $ i found one link that had a similar problem. Prove if $|z| < 1$ and $ |w| < 1$, then $|1-zw^*| \neq 0$ and $| {{z-w} \over {1-zw^*}}| < 1$ AI: HINT: Let $z=\cos A+i\sin A, w=\cos B+i\sin B$ $\implies z\cdot w=\cos(A+B)+i\sin(A+B)$ Using $\cos C-\cos D=-2\sin\frac{C-D}2\sin\frac{C+D}2$ and $\sin C-\sin D=2\sin\frac{C-D}2\cos\frac{C+D}2$ $$\frac{z-w}{1-zw}=\frac{\cos A-\cos B+i(\sin A-\sin B)}{1-\{\cos(A+B)+i\sin(A+B)\}}$$ $$=\frac{-2\sin\frac{A-B}2\sin\frac{A+B}2+i2\sin\frac{A-B}2\cos\frac{A+B}2}{2\sin^2\frac{A+B}2-i2\sin\frac{A+B}2\cos\frac{A+B}2}$$ $$=\frac{2i\sin\frac{A-B}2\left(i\sin\frac{A+B}2+\cos\frac{A+B}2\right)}{-2i\sin\frac{A+B}2\left(i\sin\frac{A+B}2+\cos\frac{A+B}2\right)}$$ $$=\frac{\sin\frac{A-B}2}{-\sin\frac{A+B}2}$$ EDIT: The changed question can be addressed in the same way using $\cos C+\cos D=2\cos\frac{C-D}2\cos\frac{C+D}2$ and $\sin C+\sin D=2\sin\frac{C+D}2\cos\frac{C-D}2$ $$\frac{z+w}{1+zw}=\frac{\cos A+\cos B+i(\sin A+\sin B)}{1+\{\cos(A+B)+i\sin(A+B)\}}$$ $$=\frac{2\cos\frac{A+B}2\cos\frac{A-B}2+i2\sin\frac{A+B}2\cos\frac{A-B}2}{2\cos^2\frac{A+B}2+i2\sin\frac{A+B}2\cos\frac{A+B}2}$$ $$=\frac{2\cos\frac{A-B}2\left(\cos\frac{A+B}2+i\sin\frac{A+B}2\right)}{2\cos\frac{A+B}2\left(\cos\frac{A+B}2+i\sin\frac{A+B}2\right)}$$ $$=\frac{\cos\frac{A-B}2}{\cos\frac{A+B}2}$$
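A quick numerical check of the final formula (my addition; the angles are random and the excluded case $1+zw=0$ is skipped):

    import cmath, math, random

    for _ in range(5):
        A, B = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
        z, w = cmath.exp(1j * A), cmath.exp(1j * B)      # |z| = |w| = 1
        if abs(1 + z * w) > 1e-9:                        # avoid 1 + zw = 0
            q = (z + w) / (1 + z * w)
            closed_form = math.cos((A - B) / 2) / math.cos((A + B) / 2)
            print(abs(q.imag), abs(q.real - closed_form))   # both essentially zero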
H: How to build a $K(G,1)$ space for every group $G$? I am reading the book Algebraic Topology by Allen Hatcher. And in page 89 when he explains how to build a $K(G,1)$ space for every group $G$: He builds a $\Delta$-complex with the $n$-simplices $\left[g_{0},\dots,g_{n}\right] $ of elements of $G$, and he attaches to it the $n-1$-simplices with the corresponding vertices (the elements of $G$) and calls it $EG$. Then, he defines an action of $G$ on $EG$ by: for each $g\in G$ taking the simplex $\left[g_{0},\dots,g_{n}\right] $ linearly onto the simplex $\left[gg_{0},\dots,gg_{n}\right]$. He concludes that this action is covering space action. The question: I understand that $EG$ is a cover of $EG/G$. Why is the universal cover? Equivalent question: why is $\pi_{1}\left(EG\right)=\left\{ e\right\} $? AI: He argues, beginning on line 6, that $EG$ is contractible (and thus has trivial fundamental group) by a homotopy that slides every point $x \in [g_0,\dots,g_n]$ to $[e]$ along the line in $[e,g_0,\dots,g_n]$. Edit. Ah, too slow. :)
H: Prove that in a tree with maximum degree $k$, there are at least $k$ leaves Prove that in a tree with a maximum degree for each vertex is $k$, there are at least $k$ leaves. So I said: $2|E| = \sum_{v \in V} {\deg(v)} \leq k $ which is, if we say that we have AT MOST $k-1$ leaves (I used the contradiction method to prove) \begin{align} \sum_{v \in V} {\deg(v)} &= \sum_{v \; \text{is a leaf}} {1} + \sum_{v \; \text{is not a leaf}} {\deg(v)} \\ &\leq (k-1) + k(n-k+1) \\ &= 2k - k^2 + kn -1. \end{align} But that obviously tells us nothing. All extra information I know is that in a tree the sum of all degrees is $2|E| = 2(n-1) = 2n - 2$ so somehow I should get to an inequality/equality regarding $n$ only. Any help is appreciated AI: Let me give you a hint. Start off by proving that every tree with at least two vertices must have at least 2 leaves. Now, if the maximum degree is $k$, then there is a vertex $v$ of degree $k$. Consider the graph obtained by deleting $v$ from your tree. What does it look like? Prove that it is a collection of $k$ non-empty trees. Consider each of these components. If the component has only one vertex, then this vertex was a leaf in the original graph: its only neighbor was $v$. If the component has two or more vertices, then by the first claim it must have two leaves. Of these, only one could have been connected to $v$, since otherwise they form a cycle; hence at least one of them must have been a leaf in the original tree. So, we get at least one leaf from each of the $k$ components of the new graph.
H: Doubt on Binomial Series Sorry to ask this simple question, but I am finding it hard to get a convincing answer As we all know, binomial series is defined as $$(1 + x)^k = 1 + \frac{k}{1!}x + \frac{k(k-1)}{2!} x^2 + \dots,$$ where $k$ is a real number and $-1 < x < 1$. Why the restriction $-1 < x < 1$? Even without this restriction, the series works fine, right? Why this restriction in every definition I see? AI: The restriction $-1<x<1$ is only necessary for binomials of the form $(1+x)^{\alpha}$ where $\alpha\notin\mathbb{N}$. In this case, rather than a finite sum, you need an infinite number of terms of the series - and it turns out that the series has radius of convergence 1.
H: Numerical Methods Texts, mid-level. what are the 3-4 numerical analysis/methods texts, that you value as best? I would prefer a mid-level treatment, balanced between theory and applications. Right now, I'm using the book by Sauer. It is pretty good, but I wish to couple it with a deeper treatment. Many Thanks. AI: Here are some recommendations (you can peruse their table-of-contents online): Theory and applications (1) Numerical Analysis - Burden and Faires (2) Numerical Methods - Pal (3) Numerical Methods for Engineers, Chapra, Canale If price is an issue, you might want to look at the following excellent books (note, some of these are more theory than applied, but I included a mix - check out the TOCs online). Dover Books (1) Analysis of Numerical Methods, Isaacson, Keller (2) Numerical Methods, Dahlquist, Bjorck (3) Numerical Methods for Scientists and Engineers, Hamming (4) A First Course in Numerical Analysis: Second Edition, Ralston , Rabinowitz (5) Introduction to Numerical Analysis: Second Edition, Hildebrand You might also want to check out your local university library and look into notes/lectures at places like Open Courseware (like MIT and many others). Lastly, you should certainly have an environment that allows you to explore these methods like a CAS and you can get a commercial one (like Mathematica, Maple, MatLab...) or a free one (like SAGE, Maxima, GP/Pari...).
H: In $S_9$: for given $\sigma$ is there $\tau$ with $\tau^2=\sigma, \; \tau^3 = \sigma$? Struggling with these: Let $\sigma$ in $S_9$ be given by $\sigma=(8\,9)(5\,6\,7\,1\,2\,3\,4)$. 1: Is there a $\tau\in S_9$ with $\tau^2=\sigma\,?\;$ Tip: think of $ \epsilon ( \sigma)$. 2: Is there a $\tau\in \langle\sigma \rangle$ with $\tau^3=\sigma\,?\;$ Tip: think of $ \operatorname{order}( \sigma)$. Thanks in advance! AI: Use the tips you were given. What is $\epsilon(\sigma)$? What do you know about $\epsilon$? Can you use what you know about $\epsilon$ to find such a $\tau,$ or prove that no such $\tau$ exists? (Edit: Nice work!) What is the order of $\sigma$? What then do you know about the structure of the group $\langle\sigma\rangle$? Can you then find such a $\tau$?
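If you want to compute the two quantities the tips point at, sympy can do it (my addition; the permutation is rewritten 0-indexed, so $(8\,9)(5\,6\,7\,1\,2\,3\,4)$ becomes cycles on $\{0,\dots,8\}$):

    from sympy.combinatorics import Permutation

    sigma = Permutation([[7, 8], [4, 5, 6, 0, 1, 2, 3]], size=9)
    print(sigma.signature())   # epsilon(sigma) = -1
    print(sigma.order())       # order of sigma = 14 = lcm(2, 7)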
H: Prove that the matrix is totally unimodular Is there any (theoretic) way I can prove the matrix is totally unimodular? I have tested it by Matlab and know it is TU, however I cannot prove it.

    -1 -1 -1 -1  0  0  0  0  0  0  0  0
     0  0  0  0 -1 -1 -1 -1  0  0  0  0
     0  0  0  0  0  0  0  0 -1 -1 -1 -1
     1  1  1  1  0  0  0  0  0  0  0  0
     0  0  0  0  1  1  1  1  0  0  0  0
     0  0  0  0  0  0  0  0  1  1  1  1
     1  0  0  0  1  0  0  0  1  0  0  0
     0  1  0  0  0  1  0  0  0  1  0  0
     0  0  1  0  0  0  1  0  0  0  1  0
     0  0  0  1  0  0  0  1  0  0  0  1

AI: First, observe that since $R_1= - R_4$, $R_2= - R_5$, $R_3 = - R_6 $, then if any submatrix uses any of these pairs of rows, then the determinant must be 0. Hence, (WLOG) we may assume that rows $1, 2, 3$ are removed (and replaced with rows 4, 5, 6 respectively in the submatrix determinant calculation). Now call this matrix $A$. If you look up the Wikipedia article, you will see a sufficient condition for Totally Unimodular matrices: Every column of $A$ contains at most two non-zero entries. Every entry in $A$ is 0, +1, or −1. The rows of $A$ can be partitioned into two disjoint sets $B$ and $C$ such that: If two non-zero entries in a column of $A$ have the same sign, then the row of one is in $B$, and the other in $C$. If two non-zero entries in a column of $A$ have opposite signs, then the rows of both are in $B$, or both in $C$. Observe that if $B$ is the set of rows 4, 5, 6 and $C$ is the set of rows 7, 8, 9, 10, then this will satisfy the conditions. Hence we are done.
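Not part of the answer: a brute-force Python check of the definition (every square submatrix has determinant $-1$, $0$ or $1$), which is presumably what the Matlab test did. It is exponential in general, but for this $10\times12$ matrix it only has to look at about $6.5\times10^5$ minors:

    import numpy as np
    from itertools import combinations

    rows = """-1 -1 -1 -1  0  0  0  0  0  0  0  0
               0  0  0  0 -1 -1 -1 -1  0  0  0  0
               0  0  0  0  0  0  0  0 -1 -1 -1 -1
               1  1  1  1  0  0  0  0  0  0  0  0
               0  0  0  0  1  1  1  1  0  0  0  0
               0  0  0  0  0  0  0  0  1  1  1  1
               1  0  0  0  1  0  0  0  1  0  0  0
               0  1  0  0  0  1  0  0  0  1  0  0
               0  0  1  0  0  0  1  0  0  0  1  0
               0  0  0  1  0  0  0  1  0  0  0  1"""
    M = np.array([[int(v) for v in line.split()] for line in rows.splitlines()])

    def is_totally_unimodular(M):
        m, n = M.shape
        for k in range(1, min(m, n) + 1):
            for R in combinations(range(m), k):
                sub = M[list(R), :]
                for C in combinations(range(n), k):
                    d = int(round(float(np.linalg.det(sub[:, list(C)]))))
                    if d not in (-1, 0, 1):
                        return False
        return True

    print(is_totally_unimodular(M))   # True (takes a little while)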
H: generalisations of Lagrange's four-square theorem 1. For which positive integers $a, b, c, d$ can every natural number $n$ be represented as $$n=ax^2+by^2+cz^2+dw^2$$ where $x, y, z, w$ are integers? Lagrange's four-square theorem states that $(a,b,c,d)=(1,1,1,1)$ works. Ramanujan proved that there are exactly $54$ possible choices for $a, b, c, d$. 2. For which positive integers $a, b, c, d$ is $$n=ax^2+by^2+cz^2+dw^2$$ solvable in integers $x, y, z, w$ for all positive integers $n$ except one number? For example, $n=x^2+y^2+2z^2+29w^2$ is solvable for all natural numbers $n$ except $14$, while $n=x^2+2y^2+7z^2+11w^2$ and $n=x^2+2y^2+7z^2+13w^2$ are solvable for all $n$ except $5$. P. R. Halmos proved that there are exactly $88$ possible choices for $a, b, c, d$. How does one get these results? Can you recommend some references? Thanks AI: As I recall, Ramanujan mistakenly included $(1,2,5,5)$, which does not represent 15. The main thing you need is the list of regular ternary forms $$ a x^2 + b y^2 + c z^2 $$ with $$ 1 \leq a \leq b \leq c, $$ which I put at KAPLANSKY as a pdf under the name Dickson_Diagonal_1939. The list also requires $$ \gcd(a,b,c) = 1, $$ so one needs to check for when that might make a difference. I also put the proof of the 15 theorem as Bhargava_2000. Manjul's main observation was that any positive universal form has a regular ternary section, which greatly shortened the search. For the Halmos result, I do not remember that a regular section is necessary. Indeed, Halmos found $(1,2,7,11)$, which lacks a regular ternary section, so there you go. If I were to do this, I would simply do a quadruple loop in some computer language: once $(a,b,c)$ represents all numbers up to $c$, or misses at most one value, try each possible $d \geq c$ so that no more values are missed (up to 100, say). For the Halmos problem it is acceptable to consider $a=1,2.$ As it happens, we now know for sure that the single number missed is 15 or smaller, but Halmos did not need to assume that. In case of actually computing this: Given $a=1,2,$ there are infinitely many numbers not represented. Call the smallest number missed $A_1,$ call the second $A_2.$ So, then we are considering $a x^2 + b y^2$ with $ a \leq b \leq A_2,$ otherwise we miss two values. Call the smallest number missed $B_1,$ the next $B_2.$ We move on to consider $a x^2 + b y^2 + c z^2$ with $ b \leq c \leq B_2.$ Now, it is not entirely trivial that any positive ternary still misses infinitely many values. A short discussion, not quite a full proof, is at the very end of The Sensual Quadratic Form by J. H. Conway. Anyway, call the smallest number missed $C_1,$ the next $C_2.$ We move on to consider $a x^2 + b y^2 + c z^2 + d w^2$ with $ c \leq d \leq C_2.$ Positivity is used in an essential way in all of this. As to indefinite forms that are otherwise the same, given any integer $N,$ the forms $$ x^2 - y^2 + (2N+1)z^2 $$ and $$ x^2 - y^2 + (4N+2)z^2 $$ are universal.
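For concreteness, here is the kind of quadruple loop I mean, written in Python (my own sketch; it only checks representability up to a finite bound, here 200, so it gives evidence rather than a proof):

from math import isqrt

def represented(n, a, b, c, d):
    # is n = a x^2 + b y^2 + c z^2 + d w^2 for some non-negative integers x, y, z, w?
    for x in range(isqrt(n // a) + 1):
        ra = n - a * x * x
        for y in range(isqrt(ra // b) + 1):
            rb = ra - b * y * y
            for z in range(isqrt(rb // c) + 1):
                rc = rb - c * z * z
                if rc % d == 0 and isqrt(rc // d) ** 2 == rc // d:
                    return True
    return False

def missed(a, b, c, d, N=200):
    return [n for n in range(1, N + 1) if not represented(n, a, b, c, d)]

print(missed(1, 1, 1, 1))    # []   (Lagrange)
print(missed(1, 2, 5, 5))    # [15] (the form Ramanujan mistakenly listed)
print(missed(1, 2, 7, 11))   # [5], matching the example quoted in the question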
H: The minimal polynomial of a primitive $p^{m}$-th root of unity over $\mathbb{Q}_p$ Proposition 7.13 of Neukirch's ANT states that for a primitive $p^{m}$-th root of unity $\zeta$ ($p$ prime) the extension $\mathbb{Q}_{p}(\zeta)/\mathbb{Q}_{p}$ is totally ramified of degree $(p-1)p^{m-1}$. In the proof he shows that the polynomial $\phi(X)=X^{(p-1)p^{m-1}}+X^{(p-2)p^{m-1}}+\cdots+1$ is the minimal polynomial of $\zeta$ over $\mathbb{Q}_p$. I can't see how to obtain the congruence $\phi(X)=(X^{p^m}-1)/(X^{p^{m-1}}-1)\equiv(X-1)^{p^{m-1}(p-1)}$ mod $p$ as claimed in the book. Here, I take it that mod $p$ means mod $p\mathbb{Z}$ and not mod $p\mathbb{Z}_{p}$? Also, does Eisenstein's irreducibility criterion extend to polynomials in $\mathbb{Q}_{p}[X]$? Any help would be very much appreciated. AI: In $\mathbf{F}_p[X]$ (that is, you reduce the polynomial modulo $p\mathbf{Z}$ and regard it as an element of $\mathbf{F}_p[X]$), because you're in characteristic $p$, $X^{p^m}-1=(X-1)^{p^m}$, and similarly $X^{p^{m-1}}-1=(X-1)^{p^{m-1}}$, so the quotient, modulo $p$, is $(X-1)^{p^m-p^{m-1}}=(X-1)^{p^{m-1}(p-1)}$. This is the congruence $\phi(X)=(X^{p^m}-1)/(X^{p^{m-1}}-1)\equiv (X-1)^{p^{m-1}(p-1)}\pmod{p}$. Eisenstein's criterion holds for any UFD $R$ and its field of fractions. In particular it holds for $\mathbf{Z}_p$ and $\mathbf{Q}_p$ (using the prime ideal $(p)$ of $\mathbf{Z}_p$). EDIT: To answer the question posed in the comments, in $\mathbf{F}_p[X]$, $X^{p^m}-1=(X-1)^{p^m}$ and $X^{p^{m-1}}-1=(X-1)^{p^{m-1}}$. Since the latter clearly divides the former (they are both powers of the same polynomial and the former is a larger power) the quotient is an element of $\mathbf{F}_p[X]$, which can be checked directly to be the image (i.e. reduction) of $\phi(X)$ in $\mathbf{F}_p[X]$, simply because, in $\mathbf{Z}[X]$, $\phi(X)=(X^{p^m}-1)/(X^{p^{m-1}}-1)$ (this equality is written in the OP's question so I assume it is understood).
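If it helps to see the congruence concretely, here is a small sanity check in plain Python for $p=3$, $m=2$ (my own illustration): $\phi(X)=X^{6}+X^{3}+1$ and $(X-1)^{6}$ have the same coefficients modulo $3$.

from math import comb

p, m = 3, 2
deg = p ** (m - 1) * (p - 1)                  # 6
phi = [0] * (deg + 1)
for k in range(p):                             # phi = sum_k X^(k * p^(m-1))
    phi[k * p ** (m - 1)] = 1
# coefficients of (X - 1)^deg reduced mod p
expanded = [comb(deg, i) * (-1) ** (deg - i) % p for i in range(deg + 1)]
print(phi == expanded)                         # True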
H: Chaotic iterative example needed I'm using a very simple numerical method to find solutions to an equation. Start with an equation $\operatorname{f}(x)=0$ that you need to solve. Rearrange it to give $x=\operatorname{g}(x)$ and then use the recurrence relation $x_{n+1} = \operatorname{g}(x_n)$ to hopefully tend towards a solution. I'm trying to find an interesting example where starting with $x_0 < \ell$, but very close to $\ell$, gives a sequence tending towards one limit, while starting with $x_0 > \ell$, but very close to $\ell$, gives another (where $\operatorname{f}(\ell) \neq 0$). I can find uninteresting examples where negative $x_0$ gives a negative limit and positive $x_0$ gives a positive limit. Can anyone suggest an interesting example? For example, some $x_0 > 2$, very close to $2$, gives a sequence tending towards a negative limit, and some $x_0 < 2$, very close to $2$, gives a sequence tending towards a positive limit. EDIT: I am looking for a single, smooth function $\operatorname{g}$. I want a good old-fashioned elementary function. If possible, a polynomial or rational function would be great. AI: Let us take it stepwise. As your title says, I guess you are looking for some type of (deterministic) chaotic behaviour in a one-dimensional iterative process that maps $\Bbb R$ to $\Bbb R$. I also guess this because you are asking for sensitivity to initial conditions, which is a prerequisite for deterministic chaos. For such behaviour it is necessary (1) that $g$ is non-linear with respect to $x_n$ and/or some $x_{n-i}$ (associative memory), where $i<n$ is a positive integer, and (2) that there are at least $2$ discrete elementary modes (better $3$); an intuition would be to build the second mode either into the non-linear memory terms $x_{n-i}$ or, if that is not possible, into a separate variable or variables, so that $\overrightarrow x_n$ becomes a vector (this possibly fits your case). An alternative would indeed be to try a fractal version where the process maps $\Bbb C$ to $\Bbb C$, the complex domain. Are these conditions something you could accept in your approach? A posteriori, the issue is not the real domain itself; the key point is that your proposal seems to pursue only one elementary mode, while chaos is provoked by a certain type of higher-order self-organisation that demands at least 2-3 elementary modes cooperating. A one-modal equation will not provide this precondition. Either you would need more dimensions via a vector over the reals, e.g. $$\overrightarrow x_{n+1}=[x_{n+1,1},x_{n+1,2},x_{n+1,3},\dots]^T=\overrightarrow g(x_{n,1},x_{n,2},x_{n,3},\dots)$$ or such memory over the reals, e.g. $$x_{n+1}=g(x_{n},x_{n-1},x_{n-2},\dots)$$ or both, and in both cases $g$ must be a properly non-linear function. Your case would probably fit in (2). If you pursue (2), I suggest you have a look at the simple example of the logistic map. I hope the LaTeX above renders correctly.
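To complement the general remarks above with something concrete in the spirit of the question (this example and the code are mine, not the answerer's): the cubic $g(x)=x-\frac{(x+1)(x-2)(x-5)}{18}$ has superattracting fixed points at $-1$ and $5$ and a repelling fixed point at $2$, so iterates started just below $2$ tend to $-1$, while iterates started just above $2$ tend to $5$ (the signs of the two limits are swapped relative to the exact wish in the question, but the mechanism is the same).

def g(x):
    # fixed points at -1, 2, 5; g'(-1) = g'(5) = 0 (attracting), g'(2) = 1.5 (repelling)
    return x - (x + 1) * (x - 2) * (x - 5) / 18

for x0 in (1.99, 2.01):
    x = x0
    for _ in range(200):
        x = g(x)
    print(x0, "->", round(x, 6))   # 1.99 -> -1.0,  2.01 -> 5.0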
H: Test to see if a degree $\leq4$ polynomial is factorable I'm in the middle of a programming project and we'd like to have tests to determine if polynomials in $\mathbb{Z}[x]$ of degrees up to 4 are factorable over $\mathbb{Q}$. A test that computes the discriminant (or something similar) and then asks if that is a perfect square, perfect cube, etc. is hard to implement, because I don't know how to test if a given floating point real (like $\sqrt{\Delta}$) is rational. Not to mention that there could be rounding error. So for degrees 2 and 3, what I've come up with is to apply the rational root theorem. This requires prime-factoring the leading and constant coefficients and testing each ratio until a root is found. So if there are many prime factors, this is costly. But it works. Then again it's overkill, since it's actually factoring rather than just testing for factorability. With degree 4, I'm not sure what to do. Is there an out here? Should I instead focus on roots of discriminants? I could restructure the data type for coefficients to be rational numbers that separately keep track of numerators and denominators, and then checking to see if $\Delta$ is a perfect square would be easy. But is there something less intense? AI: First, by Gauss's lemma, it suffices to first remove the content, then factor the primitive part over the integers. That is, everything is integers and you need not worry about rationals at all. Next, since you're prepared to factor integers to use the rational root theorem, I recommend Kronecker's method. To summarize, evaluate the (primitive) polynomial at three values. Suppose $f(1)=3$. If $f(x)=f_1(x)f_2(x)$, we must have $f_1(1)\mid 3$ and $f_2(1)\mid 3$. This is not very many possibilities. Doing this at three places, we have three values for each of $f_1, f_2$, which specifies it uniquely (since a quadratic is determined by its values at three points). Test if they both have integer coefficients, and if their product is in fact $f(x)$. We do this for each combination; i.e. $f_1(1)=1, f_2(1)=3$, and again for $f_1(1)=-3, f_2(1)=-1$, etc. Since you get to pick the three places, you can be selective. For example, if $f(1)$ is really big or has many factors, skip it and instead try $f(2)$. Note that for a quartic with no rational root, the only possible non-trivial factorization is into two quadratics, so this search settles reducibility.
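For completeness, here is a rough Python sketch of the quadratic-factor stage of Kronecker's method (the function names are mine; it assumes the quartic is primitive with integer coefficients and that rational roots have already been ruled out by the rational root theorem, so reducibility is equivalent to having an integer quadratic factor):

from itertools import product
from fractions import Fraction

def divisors(n):
    n = abs(n)
    pos = [d for d in range(1, n + 1) if n % d == 0]
    return [s * d for d in pos for s in (1, -1)]

def f_eval(coeffs, x):
    # coeffs[i] is the coefficient of x**i
    return sum(c * x ** i for i, c in enumerate(coeffs))

def divides(f, g):
    # long division of f by g over Q; True iff the remainder is zero
    r = [Fraction(c) for c in f]
    d = len(g) - 1
    while len(r) - 1 >= d:
        q = r[-1] / Fraction(g[-1])
        shift = len(r) - 1 - d
        for i, gi in enumerate(g):
            r[shift + i] -= q * gi
        r.pop()                          # leading coefficient is now exactly zero
        while r and r[-1] == 0:
            r.pop()
    return all(x == 0 for x in r)

def has_quadratic_factor(coeffs):
    # coeffs = [c0, c1, c2, c3, c4], integer and primitive, c4 != 0
    pts = (0, 1, -1)
    vals = [f_eval(coeffs, p) for p in pts]
    if any(v == 0 for v in vals):
        return True                      # integer root at 0, 1 or -1
    for v0, v1, v2 in product(*(divisors(v) for v in vals)):
        # interpolate the quadratic g with g(0)=v0, g(1)=v1, g(-1)=v2
        a = Fraction(v1 + v2 - 2 * v0, 2)
        b = Fraction(v1 - v2, 2)
        if a == 0 or a.denominator != 1 or b.denominator != 1:
            continue                     # g must be a genuine integer quadratic
        if divides(coeffs, [v0, int(b), int(a)]):
            return True
    return False

print(has_quadratic_factor([1, 0, 1, 0, 1]))   # x^4+x^2+1 = (x^2+x+1)(x^2-x+1): True
print(has_quadratic_factor([1, 0, 0, 0, 1]))   # x^4+1 is irreducible over Q: False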
H: Is the notation $[x,\to[$ common? I recently started reading Topology and Groupoids by Ronald Brown and this notation came up. The notation is $$[x,\to[ \; =\{z \mid x \leq z\}$$ and there is a similar notation for other types of intervals. I have never seen this before, and I was baffled wondering if this was a funny $\LaTeX$ macro mistake where "2" was used as "\to" in some situations. I am wondering if it is at all common? Is it common in some area of mathematics? If it is not clear, I am asking about using the "$\to$" arrow, not really the brackets (although I rarely see the bracket notation). AI: It is not uncommon. When discussing (partially) ordered sets, people sometimes use $\leftarrow$ and $\to$ rather than $-\infty$ and $+\infty$, so the interval $(\leftarrow,b)$ means the same as what is otherwise written as $(-\infty,b)$, that is, $\{z\mid z<b\}$. The use of $],[$ is also common enough (and dates back to Bourbaki), with $]a,b[$ meaning the same as $(a,b)$, etc. Here are some examples: 1, 2.
H: Prove these block matrices are similar Prove that the block matrices $ \left( \begin{array}{cc} AB & 0\\ B & 0\\ \end{array} \right) $ and $ \left( \begin{array}{cc} 0 & 0\\ B & BA\\ \end{array} \right) $ are similar, where $\mathbf{K}$ is any field, $A\in \mathbf{K}^{m\times n}$, $B\in \mathbf{K}^{n\times m}$, and both block matrices lie in $\mathbf{K}^{(m+n)\times (m+n)}$. I searched the Internet well enough and found no similar problem. Thanks in advance! AI: From Fuzhen Zhang, "Matrix Theory - Basic Results and Techniques" (Springer, 1999), page 52: \begin{align*} \begin{bmatrix} {\rm I}_m & A \\ 0 & {\rm I}_n \end{bmatrix} \begin{bmatrix} 0 & 0 \\ B & 0 \end{bmatrix} = \begin{bmatrix} AB & 0 \\ B & 0 \end{bmatrix}, \\ \begin{bmatrix} 0 & 0 \\ B & 0 \end{bmatrix} \begin{bmatrix} {\rm I}_m & A \\ 0 & {\rm I}_n \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ B & BA \end{bmatrix}, \end{align*} and it follows that $$\begin{bmatrix} {\rm I}_m & A \\ 0 & {\rm I}_n \end{bmatrix}^{-1} \begin{bmatrix} AB & 0 \\ B & 0 \end{bmatrix} \begin{bmatrix} {\rm I}_m & A \\ 0 & {\rm I}_n \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ B & BA \end{bmatrix}.$$
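If you want to convince yourself numerically before writing up the proof, here is a quick spot-check with numpy and random integer matrices (my own sketch; the sizes m = 3, n = 4 and the entry range are arbitrary choices):

import numpy as np

m, n = 3, 4
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(m, n))
B = rng.integers(-3, 4, size=(n, m))

S = np.block([[np.eye(m), A], [np.zeros((n, m)), np.eye(n)]])
M1 = np.block([[A @ B, np.zeros((m, n))], [B, np.zeros((n, n))]])
M2 = np.block([[np.zeros((m, m)), np.zeros((m, n))], [B, B @ A]])

# S^{-1} M1 S should equal M2
print(np.allclose(np.linalg.solve(S, M1 @ S), M2))   # True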
H: Notation of "defined for all complex numbers except the negative integers and zero" The Euler and Weierstrass forms of the gamma function are: $$\mathop{\mathrm{\Gamma}}\left(z\right)=\frac{1}{z}\prod^{\infty}_{n=1}\frac{\left(1+\frac{1}{n}\right)^{z}}{\left(1+\frac{z}{n}\right)}=\frac{e^{-\gamma z}}{z}\prod^{\infty}_{n=1}\frac{e^{\frac{z}{n}}}{\left(1+\frac{z}{n}\right)},\quad\mbox{for}\;z \in \mathbb{C},$$ but how do you write mathematically, instead of $z\in\mathbb{C}$: "for all complex numbers except the negative integers and zero"? (+ bonus question: can you confirm that the Euler and Weierstrass forms are strictly equivalent?) AI: If your natural numbers include $0,$ you could say $$z\in\Bbb C\setminus\{-n:n\in\Bbb N\},$$ and if not, you could say $$z\in\Bbb C\setminus\bigl\{-n:n\in\Bbb N\cup\{0\}\bigr\}.$$ If you're used to $\Bbb C^\times$ meaning the non-zero complex numbers (those with multiplicative inverses), then regardless of whether your natural numbers include $0,$ you could just say $$z\in\Bbb C^\times\setminus\{-n:n\in\Bbb N\}.$$ The Euler and Weierstrass forms are indeed equivalent.
H: distinguishable and indistinguishable people and ticket offices In how many ways can we arrange $p$ people in the queues at the $5$ ticket offices, when a) people are distinguishable, ticket offices are distinguishable; b) people are distinguishable, ticket offices are indistinguishable; c) people are indistinguishable, ticket offices are distinguishable; d) people are indistinguishable, ticket offices are indistinguishable? For a) I got $\binom{p+4}{4}$. How can I do the rest? AI: I am assuming that each ticket office has its own queue. If the people are indistinguishable, all that matters is how many are in each queue. In (c) we can let $x_1,x_2,x_3,x_4$, and $x_5$ be the numbers of people in the first, second, third, fourth, and fifth queues, respectively, and those five numbers will completely determine the arrangement. Thus, the answer to (c) is the number of solutions of the equation $$x_1+x_2+x_3+x_4+x_5=p$$ in non-negative integers. This is a standard stars-and-bars problem; the article at the link gives both a formula and a reasonably clear explanation of why the formula is correct. For (d) you just want the number of ways of writing $p$ as a sum of at most $5$ non-negative integers. This is the number of partitions of $p$ into at most $5$ parts. This is sometimes denoted by $q(p,5)$, as for example here; there is no nice formula for it. If the people are distinguishable, then their order in each queue matters. To count the arrangements in case (a), when the ticket offices are distinguishable, we can imagine taking an arrangement, with queues $Q_1,Q_2,Q_3,Q_4$, and $Q_5$ at the first, second, third, fourth, and fifth ticket offices, respectively, and lining them up in one large queue in that order: $Q_1Q_2Q_3Q_4Q_5$. That results in a permutation of the $p$ people, and every one of the $p!$ permutations can be obtained in that way. Going in the other direction, from a single long queue of everyone to the five individual queues, we just decide where to split the long queue. If we think of the long queue as consisting of $p$ people and $4$ splitting points, we see that the splitting points can be anywhere in this extended long queue, so there are $\binom{p+4}4$ ways to choose where to place them. Thus, there are $p!$ long queues and $\binom{p+4}4$ ways to split any one of them into the five ticket office queues, so there are altogether $$p!\binom{p+4}4=p!\cdot\frac{(p+4)!}{p!\cdot 4!}=\frac{(p+4)!}{4!}$$ arrangements. Your answer is actually too small by a factor of $p!$. In (b) we still want to divide the people into five ordered queues, but we no longer care which queue goes with which ticket office. Suppose that we have the five ticket office queues $Q_1,Q_2,Q_3,Q_4$, and $Q_5$. In (a) we order them: the arrangement $\langle Q_1,Q_2,Q_3,Q_4,Q_5\rangle$, with $Q_1$ at the first ticket office, $Q_2$ at the second, and so on, is different from the arrangement $\langle Q_3,Q_1,Q_2,Q_4,Q_5\rangle$, with $Q_3$ at the first ticket office, $Q_1$ at the second, and so on. In (b), however, these two arrangements are not distinguished, because we can’t tell one ticket office from another: in both of them we have the same five queues, and that’s all that matters. There are $5!$ permutations of a given set of five queues. Each of these $5!$ permutations gives us a different arrangement in (a), but they all give the same arrangement in (b). Using that fact, how can you modify the answer to (a) to get the answer to (b)?
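If you want to sanity-check the counts in (a) and (c), here is a small brute-force verification in Python for $p=3$ (my own check, not part of the argument):

from itertools import product
from math import comb, factorial

p, k = 3, 5

# (c): indistinguishable people, distinguishable offices -- count tuples of queue sizes
c_count = sum(1 for sizes in product(range(p + 1), repeat=k) if sum(sizes) == p)
print(c_count, comb(p + k - 1, k - 1))              # 35 35

# (a): distinguishable people, distinguishable offices -- assign each person a queue,
# then order every queue internally
a_count = 0
for assign in product(range(k), repeat=p):
    orders = 1
    for q in range(k):
        orders *= factorial(assign.count(q))
    a_count += orders
print(a_count, factorial(p + 4) // factorial(4))    # 210 210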
H: Characterization of the Subsets of Euclidean Space which are Homeomorphic to the Space Itself I have no real experience in topology (although I have done a course in metric spaces), but in the course of a project I am doing it has become useful to produce (if possible) a characterization of the subsets of arbitrary-dimensional Euclidean space (with the usual metric and topology) that are homeomorphic to the whole space. I started by looking at the sorts of properties which are preserved under homeomorphism and found that such a subset is open and connected. I have also shown that convex open sets are homeomorphic to $\mathbb{R}^n$. However, what I am really looking for is an equivalence between subsets homeomorphic to $\mathbb{R}^n$ and subsets with a list of specific properties (e.g. open, convex) that I can use to identify any possible homeomorphic subset. Hints and statements of characterization would be appreciated as starting points. However, I would like to work through the necessary proofs on my own if possible. Thank you. AI: According to this previous question, a subset of $\mathbb{R}^n$ is homeomorphic to $\mathbb{R}^n$ if and only if it is open, contractible, and simply connected at infinity. Note that the last condition is necessary. For example, the Whitehead manifold is a contractible open subset of $\mathbb{R}^3$, but it is not homeomorphic to $\mathbb{R}^3$.
H: Resolvent lemma I would like to prove a lemma that I am quite sure should be correct, as I found it somewhere; I am writing a thesis about quantum walks and need this to get through an article. Let $X$ be a Banach space, $A \in B(X)$, and $\lambda \in \rho(A)$ (the resolvent set of the operator). I need to prove that: (1) $B(\lambda, \; \frac{1}{\|R_{\lambda}\|}) \subset \rho(A)$, where $B$ denotes an open ball and $R_{\lambda}$ is the resolvent; (2) $\operatorname{dist}(\lambda, \sigma(A)) \ge \frac{1}{\|R_{\lambda}\|}$. Thanks, I am more into physics and I am not very good at this. AI: The key ingredient here is the following variant of the von Neumann series. Proposition: Let $A \in B(X)$ be invertible, and let $B \in B(X)$ be such that $\|BA^{-1}\|<1$. Then $A + B$ is invertible. Taking this fact for granted, let us see how it implies the desired result. Let $\lambda \in \rho(A)$ and suppose $|\lambda' - \lambda| < \|R_\lambda\|^{-1}$. Then $\|(\lambda' - \lambda)I R_\lambda\| < 1$, so the proposition tells us that $A - \lambda'I = (A - \lambda I) + (\lambda - \lambda')I$ is invertible. In other words, $\lambda' \in \rho(A)$. The second part is a simple consequence of the first, because we know that if $\lambda \in \rho(A)$ and $|\lambda - \lambda'| < \|R_\lambda\|^{-1}$ then also $\lambda' \in \rho(A)$. It follows that if $\lambda' \notin \rho(A)$ then $|\lambda - \lambda'| \ge \|R_\lambda\|^{-1}$. Thus $|\lambda - \lambda'| \ge \|R_\lambda\|^{-1}$ for all $\lambda' \in \sigma(A)$, which means $d(\lambda, \sigma(A)) \ge \|R_\lambda\|^{-1}$. Now let's prove the proposition. Proof: Define operators $T_N \in B(X)$ by $$T_N =\sum_{n=0}^N (-1)^n A^{-1}(BA^{-1})^n.$$ We will first show that $T_N$ is a Cauchy sequence in $B(X)$. For $N > M$ we have $$\|T_N - T_M\| = \|\sum_{n = M+1}^N(-1)^n A^{-1}(BA^{-1})^n\|$$ $$ \le \sum_{n=M+1}^{N} \|A^{-1}\| \|BA^{-1}\|^n = \|A^{-1}\|\|BA^{-1}\|^{M+1} \sum_{n=0}^{N-M-1}\|BA^{-1}\|^n.$$ Bounding the geometric sum (using $\|BA^{-1}\|<1$) we get $$\le \|A^{-1}\|\|BA^{-1}\|^{M+1} \frac{1}{1-\|BA^{-1}\|}.$$ Now we let $M \to \infty$ and remember that $\|BA^{-1}\| < 1$ to see that the entire term tends to $0$. Thus $T_N$ is a Cauchy sequence and hence has a limit $T \in B(X)$. We now show that $T$ is a right inverse of $A+B$; an analogous telescoping computation, using $A^{-1}(BA^{-1})^n=(A^{-1}B)^nA^{-1}$, shows that $T$ is also a left inverse, which finishes the proof. Notice that $$(A+B)T_N = \sum_{n=0}^N (-1)^n(A+B)A^{-1}(BA^{-1})^n$$ $$= I + \sum_{n=1}^N(-1)^n(BA^{-1})^n + \sum_{n=1}^{N+1} (-1)^{n-1} (BA^{-1})^n$$ $$= I + (-1)^N(BA^{-1})^{N+1} \to I$$ as $N \to \infty$. Since $A+B$ is continuous, this means $$(A+B)Tx = \lim_{N \to \infty} (A+B)T_Nx = x$$ for all $x \in X$, which (together with the left-inverse computation) means $T = (A+B)^{-1}$.
H: What is the maximum of a set of random variables? I'm a bit confused by a problem I'm supposed to solve: Given a sample of random variables $(X_1,...,X_n)$, uniformly distributed in the interval $[0,2\vartheta]$, with $\vartheta>0$ unknown. What value does the parameter $\gamma$ have to take for the estimator $$\theta_3=\gamma\left(\max_{i=1,...,n}X_i+\min_{j=1,...,n}X_j\right)$$ to be unbiased? I reasoned something like $$\mathbb E\theta_3=\gamma\mathbb E\left(\max_{i=1,...,n}X_i+\min_{j=1,...,n}X_j\right)=2\gamma\mathbb EX_n=2\gamma\vartheta $$ and therefore $\gamma=\frac{1}{2}$. I'm pretty sure that's wrong though, especially considering that I'm not even sure what $$\max_{i=1,...,n}X_i$$ is supposed to mean - random variables are functions, right? ($\mathbb E$ is supposed to be the expected value; I'm not very familiar with the terms used in English-speaking countries.) AI: $X= \max_{i=1,...,n} X_i$ is the function $X:\Omega \to \Bbb R$ which sends $\omega \in \Omega$ to $\max_{i=1,...,n} X_i(\omega)$. The following simple symmetry argument shows that $\gamma=1/2$: Consider the random variables $Y_i=2\theta - X_i$. Obviously, the $Y_i$ are also independent and uniformly distributed on $[0,2\theta]$. Define $m=\Bbb E[\max_{i=1,...,n}X_i]$. Now, since $(Y_1,...,Y_n)$ are distributed in the same way as $(X_1,...,X_n)$, we also have $m=\Bbb E[\max_{i=1,...,n}Y_i]$. But $$\max_{i=1,...,n}Y_i=\max_{i=1,...,n}(2\theta - X_i)=2\theta - \min_{i=1,...,n}X_i,$$ yielding $$\Bbb E[\min_{i=1,...,n}X_i]=2\theta -\Bbb E[\max_{i=1,...,n}Y_i]=2\theta -m.$$ Hence $$\Bbb E[\min_{i=1,...,n}X_i]+\Bbb E[\max_{i=1,...,n}X_i]=(2\theta-m)+m=2\theta,$$ so $\Bbb E\theta_3=2\gamma\theta$ and unbiasedness forces $\gamma=\tfrac12$.
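If it helps to build intuition, here is a quick Monte Carlo check in Python with numpy (my own illustration; the values of $\theta$, $n$ and the number of repetitions are arbitrary) that $\frac12\bigl(\max_i X_i+\min_i X_i\bigr)$ indeed has mean $\theta$:

import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 3.0, 10, 200_000
X = rng.uniform(0, 2 * theta, size=(reps, n))    # reps independent samples of size n
est = 0.5 * (X.max(axis=1) + X.min(axis=1))
print(est.mean())                                 # close to 3.0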
H: Stochastic Automaton accepting every word with same probability I am looking for a stochastic automaton which induces the same probability $c \in [0,1]$ for all words in $\Sigma^*$, where $\Sigma$ is some finite alphabet. A stochastic automaton over an alphabet $\Sigma$ is a tuple $\mathcal A = (\mathcal Q, \Sigma, \pi, (P_a)_{a\in \Sigma}, f)$ where $\mathcal Q = \{1,2,\dots,n\}$ is a finite set of states; $\pi \in [0,1]^{1\times n}$ with $\sum_{i=1}^n \pi_i = 1$ is the initial probability distribution; for each $a \in \Sigma$, $P_a$ is an $n \times n$ transition matrix with entries in $[0,1]$ such that every row sums to $1$, i.e. $\sum_{j=1}^n (P_a)_{ij} = 1$ for all $i = 1,\dots,n$; and $f \in \{0,1\}^{n \times 1}$ is the final-state vector, with $f(i) = 1$ iff state $i$ is final. If $\pi = (p_1,\dots, p_n)$ is the initial distribution, the probability of reaching state $j$ by reading a word $w = a_1 \cdots a_m \in \Sigma^*$ is given by $(\pi \cdot P_w )_j = \sum_{i=1}^n \pi_i \cdot (P_w)_{ij}$, where $P_w = P_{a_1}\cdots P_{a_m}$. If the automaton $\mathcal A$ has $f = (f_1,\dots, f_n)$ as its final vector, it is then in a final state with probability $\sum_{j=1}^n (\pi \cdot P_w)_{j} \cdot f_j = \pi \cdot P_w \cdot f$. Example: $w = aab$ gives $\pi \cdot P_w \cdot f = \pi \cdot P_a^2 \cdot P_b \cdot f$. I constructed the following automaton so that every word has the same probability: $\mathcal Q = \{1\}$, $\pi = (1)$, $f = (1)$, $P_\epsilon = (1)$ and $P_a = (1)$ for every $a \in \Sigma$. Then $\pi \cdot P_w \cdot f = 1 = c$ for every word $w$. But in this case $c = 1$ and not arbitrary. My observations: two states are necessary, since the probability $c$ has to be split into $c$ and $1-c$; and for the $a \in \Sigma$ transitions there are different options to involve $c$: in $\pi$, in the matrices $P_a$, and/or in $f$. Can anybody give me a hint? AI: Let $P_a$ be the $2\times 2$ identity for every letter $a$, $\pi=(c,1-c)$, $f=(1,0)^T$. Then all words have the same probability $c$ of being accepted.
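To see the construction in action, here is a small numerical check in Python with numpy for $c=0.3$ and a two-letter alphabet (my own verification, with arbitrarily chosen test words):

import numpy as np

c = 0.3
pi = np.array([c, 1 - c])
P = {'a': np.eye(2), 'b': np.eye(2)}   # identity transition matrix for every letter
f = np.array([1.0, 0.0])

def accept_prob(word):
    v = pi
    for letter in word:
        v = v @ P[letter]
    return v @ f

for w in ["", "a", "ab", "ba", "aabba"]:
    print(repr(w), accept_prob(w))     # always 0.3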