H: Solve $\sqrt{x+4}-\sqrt{x+1}=1$ for $x$
Can someone give me some hints on how to start solving $\sqrt{x+4}-\sqrt{x+1}=1$ for x?
Like I tried to factor it expand it, or even multiplying both sides by its conjugate but nothing comes up right.
AI: Start by squaring it to get
$$x+4-2\sqrt{(x+4)(x+1)}+x+1=1\;,$$
which simplifies to
$$\sqrt{(x+4)(x+1)}=x+2\;.$$
Now square again.
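For reference, squaring once more gives $x^2+5x+4=x^2+4x+4$, hence $x=0$; and indeed $\sqrt{0+4}-\sqrt{0+1}=2-1=1$.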
H: Like term reduction
In finding the derivative of $f(x) = 4x - x^2$ we first compute the difference $f(x + h) - f(x)$ in the numerator. We have $f(x + h) = 4(x + h) - (x + h)^2 = 4x + 4h - x^2 - 2xh - h^2$, from which we subtract $f(x) = 4x - x^2$. This makes the entire expression in the numerator:
$$4x + 4h - x^2 -2xh -h^2 - (4x - x^2) $$
My book reduces this to $4h - 2xh - h^2$. Now I understand how the $4x$ and the $-4x$ cancel. But isn't $-x^2 - x^2 = 2x^2$? How did they eliminate this term in the reduction? If I plug in numbers (e.g. $x= 2 h = 3$) the reduction is not equal to the original equation.
In my experience I am wrong more than any math book so how am I going astray here? Am I missing a sign somewhere?
AI: You are missing that $- (-x^2) = +x^2$.
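Spelled out, the subtraction distributes as
$$4x + 4h - x^2 - 2xh - h^2 - 4x + x^2 = 4h - 2xh - h^2\;,$$
which is exactly the book's reduction.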
H: Is there an easy way to see that all derivatives are bounded?
Show that all derivatives of $f:\mathbb{R}\to\mathbb{R}$ given by $$f(x):=\frac{1}{\sqrt{x^2+1}+1}$$ are bounded.
It's easy to see that all derivatives are continuous. So the only potential problem is that a derivative might blow up at $\infty$. Is there an easy way to see that this does not happen? I guess deriving an explicit form of the n-th derivative is not the way to do it (I'd doubt there is an easy closed form).
AI: It is clear by composition that $f$ is smooth. So we need only prove that the derivatives are bounded on, say $A=\{x\;;|x|\geq 2\}$. Since $f$ is even we can restrict to $I=[2,+\infty)$.
This form of $f$ is not tractable. Observe that for $x\neq 0$,
$$
f(x)=\frac{\sqrt{x^2+1}-1}{(\sqrt{x^2+1}+1)(\sqrt{x^2+1}-1)}=\frac{\sqrt{x^2+1}-1}{x^2}=\frac{\sqrt{x^2+1}}{x^2}-\frac{1}{x^2}.
$$
It is trivial to check that the derivatives of $\frac{1}{x^2}$ are all bounded on $I$, so we can focus on
$$
g(x)=\frac{\sqrt{x^2+1}}{x^2}=\frac{1}{x}\sqrt{1+\frac{1}{x^2}}.
$$
Now we can use the power series representation of $(1+u)^{1/2}$ at $0$, whose radius of convergence is $1$, to get
$$
g(x)=\frac{1}{x}\sum_{n\geq 0}\binom{1/2}{n}\left(\frac{1}{x^2}\right)^n=\sum_{n\geq 0}\binom{1/2}{n}x^{-2n-1} \quad \forall x\geq 2
$$
with normal convergence, since $0\leq x^{-1}\leq \frac{1}{2}<1$ on $I$. Term-by-term differentiation works out well: after $k$ differentiations the terms yield
$$
\frac{1}{x^{k+1}}\sum_{n\geq 0}\binom{1/2}{n}(-2n-1)(-2n-2)\cdots(-2n-k)(x^{-2})^n.
$$
Of course $\frac{1}{x^{k+1}}$ is bounded on $I$, and the remaining power series evaluated at $x^{-2}$ still has radius $1$. So the latter converges normally on $I$ for each $k$. Hence a simple induction justifies simultaneously term-by-term differentiation and boundedness on $I$.
H: Is it possible to rationalize a denominator containing two cube roots?
The fraction in question is
$$-\frac{12}{\sqrt[3]{12\sqrt{849} + 108} - \sqrt[3]{12\sqrt{849} - 108}}$$
It was reached while calculating a solution to $x^4 - x - 1 = 0$. I've tried all the standard methods, including multiplying by a conjugate via $(a+b)(a-b) = a^2 - b^2$, but that doesn't work for cube roots, because once you have the square of one of them the two middle terms will not cancel each other out.
AI: As imranfat suggests in their comment, you should use the identity
$$
\frac1{\sqrt[3]a-\sqrt[3]b} = \frac{\sqrt[3]{a^2}+\sqrt[3]{ab}+\sqrt[3]{b^2}}{a-b}
$$
which can be verified via cross-multiplication. In your case, take $a=12\sqrt{849} + 108$ and $b=12\sqrt{849} - 108$ and then work through simplifying the resulting expression. When I do so, I obtain
$$
-\frac{48+\left(12 \sqrt{849}-108\right)^{2/3}+\left(108+12 \sqrt{849}\right)^{2/3}}{18}.
$$
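In carrying out the simplification it helps to note that here $a-b=216$ and $ab=(12\sqrt{849})^2-108^2=122256-11664=110592=48^3$, so that $\sqrt[3]{ab}=48$; this is where the $48$ and the denominator $18=216/12$ come from.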
H: I need help with relations
Let $S$ be the power set $P(\{1,2,\dots,10\})$; that is, $S$ is the set of all subsets of $\{1,2,\dots,10\}$. Define the relation $\mathcal R$ on $S$ by:
For all subsets $A,B$ of $\{1,2,\dots,10\}$, $A\mathcal RB$ if and only if $A\cup B$ has exactly $3$ elements.
a) Is $\mathcal R$ reflexive? Symmetric? Transitive?
b) Find and simplify the number of subsets $A$ of $\{1,2,\dots,10\}$ such that $A \mathcal R\{1,2,7\}$.
c) Find and simplify the number of subsets $A$ of $\{1,2,\dots,10\}$ such that $A\mathcal R\varnothing$.
Any help would be appreciated seeing that I don't know where to start
AI: HINT: You can quite easily determine whether $R$ is reflexive, symmetric, or transitive directly from the definition.
Reflexive: If $A\subseteq\{1,\dots,10\}$, is it always true that $A\mathbin{R}A$? In other words, is it always true that $|A\cup A|=3$?
Symmetric: If $A,B\subseteq\{1,\dots,10\}$, and $A\mathbin{R}B$, is it always true that $B\mathbin{R}A$? In other words, if $|A\cup B|=3$, is it always true that $|B\cup A|=3$?
Transitive: If $A,B,C\subseteq\{1,\dots,10\}$, $A\mathbin{R}B$, and $B\mathbin{R}C$, is it always true that $A\mathbin{R}C$? In other words, if $|A\cup B|=3$, and $|B\cup C|=3$, must it be the case that $|A\cup C|=3$? What if $A=C=\varnothing$, for instance?
For the other parts of the question you should begin by trying to understand just how $R$ works. Consider the set $A=\{1,3,7,8\}$; is there any set $B\subseteq\{1,\dots,10\}$ such that $A\mathbin{R}B$? No: no matter what set you choose for $B$, $A\cup B$ is going to have at least $4$ elements, namely, the $4$ elements of $A$. Now what if $A=\{1,3,7\}$? If $B$ contains at least one element that is not already in $A$, $A\cup B$ is going to contain at least $4$ elements. Thus, the only hope of getting $A\mathbin{R}B$ is to have $B\subseteq A$, and that always works: in this case $A\mathbin{R}B$ if and only if $B\subseteq A$. Now what if $A=\{1,3\}$? In order to have $A\mathbin{R}B$, $B$ must contain exactly one element that is not in $A$: $B$ must be of the form $X\cup\{b\}$, where $X\subseteq A$, and $b\in\{1,\dots,10\}\setminus A$.
It shouldn’t be too hard to extrapolate from these examples to be able to say for any $A\subseteq\{1,\dots,10\}$ exactly which $B\subseteq\{1,\dots,10\}$ satisfy $A\mathbin{R}B$, and once you’ve done that, it’s not very hard to count those sets $B$.
H: Finding example of sets that satisfy conditions
give examples of sets such that:
i)$A\in B$ and $A\subseteq B$
My answer : $B=\mathcal{P(A)}=\{\emptyset,\{1\},\{2\},\{1,2\}\}$ and $A=\{1,2\}$ then $A\in B$ and $A\subseteq B$
ii) $|(C\cup D)\setminus(C\cap D)|=1$
My answer is: $C=\{1,2,3\}$, $D=\{2,3\}$ then $C\cup D=\{1,2,3\}$ and $C\cap D=\{2,3\}$ so $(C\cup D)\setminus(C\cap D)=\{1\}$ and $|(C\cup D)\setminus(C\cap D)|=1$
Can we find sets A and B such that $A\in B$ and $B\subseteq A$?
My answer is no.
Are my answers correct?
AI: (i) This doesn’t quite work, unfortunately, because $A\nsubseteq B$, i.e., $\{1,2\}\nsubseteq\{\varnothing,\{1\},\{2\},\{1,2\}\}$. In order for $A$ to be a subset of $B$, each element of $A$ must be an element of $B$. The elements of $A$ are $1$ and $2$, and neither of them is an element of $B$. It’s true that $\{1\}\in B$ and $\{2\}\in B$, but that’s very different from having $1\in B$ and $2\in B$. Try the same idea with $A=\varnothing$.
(ii) This is fine.
(iii) Your answer is correct: if $A\in B\subseteq A$, then $A\in A$, which is ruled out by the axiom of regularity (also called foundation).
H: Equivalence classes of the relation that the largest digit of integer a = largest digit of integer b.
Define the relation $\mathcal{R}$ on the set of all positive integers by: for all positive integers $a$ and $b$, $a\,\mathcal{R}\,b$ if and only if the largest digit of $a$ is equal to the largest digit of $b$. For example, $271\,\mathcal{R}\,770$ because the largest digit of $271$ is $7$, which is also the largest digit of $770$.
a) Find the number of equivalence classes of $\mathcal{R}$.
b) Find and simplify the number of positive integers between $100$ and $1000$ which are in the equivalence class $[271]$.
I have an idea of where to begin but I think I am wrong. For (a) would there be $9$ equivalence classes?
And for (b) I got my answer to be $56$.
AI: Your answer to (a) is correct. There is one equivalence class for each possible largest digit, and since you’re looking only at positive integers, the largest digit must be non-zero.
For (b) you need to count the three-digit numbers whose largest digit is $7$. One of these is $777$. How many are there with exactly two sevens? There are $3$ places to put the digit that isn’t $7$, and it must be one of the seven digits $0,1,2,3,4,5,6$, so there are $3\cdot7=21$ such numbers. OOPS: one of those is $077$, which isn’t in the required range. Thus, there are really only $20$ of them. Now how many are there with exactly one $7$? If the $7$ is the first digit, there are $7^2=49$ ways to fill out the other two places with digits less than $7$. Otherwise the first digit must be one of the six digits $1,2,3,4,5,6$. There are then $2$ places to put the $7$ and $7$ choices for the other digit, for a total of $6\cdot2\cdot7$ possibilities. Thus, there are $49+84=133$ numbers with exactly one $7$. The grand total for (b) is therefore $1+20+133=154$.
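A quick brute-force check in Python (relying on the fact that single digit characters compare in numerical order) confirms the count:

    count = sum(1 for n in range(100, 1000) if max(str(n)) == '7')
    print(count)  # prints 154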
H: Unexplainable Divergent area?
Somebody came up to me recently with the following problem. Consider $y = \dfrac{1}{x^2}$ with $x>0$. Now there is this square $S$ sitting under the curve, connecting $(0,0) , (1,0) , (1,1)$ and $(0,1)$. The area under the curve from $x = 1$ to infinity is equal to $1$ through an improper integral. This area sits on the right of $S$. Due to symmetry, the same area can be found on top of $S$. This area is then between the $y$-axis and the curve. The area of $S$ is finite ($1$ of course) and so the area of $S$ plus the area on top of it should be $2$. But evaluating the integral from $0$ to $1$ is divergent. How can this be explained?
AI: There is no such symmetry: that argument would require the curve to be its own reflection in the line $y=x$, which holds for $y=1/x$ but not for $y=1/x^2$. On this range the inverse function is $x=1/\sqrt{y}$, which, when integrated over $[1,\infty)$, gives a divergent result.
H: find $\theta_{MLE}$ for a function
For
$$
f(x;\theta)=(\theta+1)x^{-\theta-2}
$$
find the maxmimum likelihood estimators (MLEs) for $\theta$ based on a random sample of size $n$.
My work so far:
$$
\begin{align}
\prod_{i=1}^n \log(f(x_i;\theta)) &= \sum \log(\theta+1)-\log(x_i)(\theta+2) \\
&=n\log(\theta+1)-(\theta+2)\sum \log(x_i)
\end{align}
$$
Now take the derivative, and set it equal to 0:
$$
\begin{align}
\frac{d}{dx} \log(f(x;\theta)) &= 0-(\theta+2)\sum \frac{1}{x_i} =0 \\
&=\sum\frac{1}{x_i}=0
\end{align}
$$
I can't make any sense of this. What exactly have I done here? Did something go wrong? Shouldn't I get an equation with $\theta$ in it?
AI: You should take the derivative with respect to $\theta$.
Also, $\prod_{i=1}^n \log(f(x_i;\theta))$ should be $\sum_{i=1}^n \log(f(x_i;\theta))$.
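Carrying that out: differentiating $n\log(\theta+1)-(\theta+2)\sum \log(x_i)$ with respect to $\theta$ and setting the result to zero gives
$$\frac{n}{\theta+1}-\sum \log(x_i)=0 \quad\Longrightarrow\quad \hat\theta=\frac{n}{\sum \log(x_i)}-1\;.$$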
H: Can the second term of the Schur complement of a symmetric matrix be undefined?
Given the next symmetric matrix conformably partitioned
$$\begin{bmatrix}
A &B \\ B^T &C
\end{bmatrix}$$
I know that $A$ and $C$ are positive definite matrices.
The Schur complement is $S=C-B^TA^{-1}B$
What can I say about $B^TA^{-1}B$? Is this undefined in general? or it is positive/negative (semi)definite.
Probably it is an easy question, but I do not see why.
Thanks in advance.
AI: If $A \succ 0$, $S$ is positive semidefinite iff the block matrix is positive semidefinite.
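As for the specific question: when $A\succ 0$, the matrix $B^TA^{-1}B$ is always positive semidefinite, since $A^{-1}\succ 0$ and $x^TB^TA^{-1}Bx=(Bx)^TA^{-1}(Bx)\geq 0$ for every $x$.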
H: Simplifying a Product of Summations
I have, for a fixed and positive even integer $n$, the following product of summations:
$\left ( \sum_{i = n-1}^{n-1}i \right )\cdot \left ( \sum_{i = n-3}^{n-1} i \right )\cdot \left ( \sum_{i = n-5}^{n-1}i \right )\cdot ... \cdot \left (\sum_{i=5}^{n-1}i \right )\cdot \left (\sum_{i=3}^{n-1}i \right )\cdot \left (\sum_{i=1}^{n-1}i \right )$
Where there are $\frac{n}{2}$ groups of summations multiplied together.
For example, consider the case where $n=4$ :
$\left ( \sum_{i = 3}^{3}i \right )\cdot \left ( \sum_{i = 1}^{3} i \right ) = \left ( 3 \right )\left ( 1+2+3 \right ) = 18$
I have tried in vain to simplify the product. Perhaps there are identities I could make use of.
Edit : I can expand the product to clarify:
$$\left ( n-1 \right )\cdot \left [ (n-3)+(n-2)+(n-1) \right ]\cdot \left [ (n-5)+...+(n-1) \right ]\cdot ... \cdot\left [3+4+...+(n-1)\right ]\cdot \left [1+2+...+(n-1) \right ]$$
From where I can see a $(n-1)^{\frac{n}{2}}$ term, but the others are quite jumbled.
AI: Let $n=2m$. Your expression is then
$$\begin{align*}
&(n-1)\cdot\frac32(2n-4)\cdot\frac52(2n-6)\cdot\ldots\cdot\frac{n-1}{2}\cdot n\\\\
&\qquad=1\cdot(2m-1)\cdot3(2m-2)\cdot5(2m-3)\cdot\ldots\cdot(2m-1)m\\\\
&\qquad=\prod_{k=1}^m(2k-1)(2m-k)\\\\
&\qquad=(2m-1)!!\frac{(2m-1)!}{(m-1)!}\\\\
&\qquad=\frac{(2m)!}{2^mm!}\cdot\frac{(2m-1)!}{(m-1)!}\\\\
&\qquad=\frac{(2m)!}{2^m}\binom{2m-1}m\\\\
&\qquad=\frac{n!}{2^{n/2}}\binom{n-1}{n/2}\;.
\end{align*}$$
H: Matrices manipulation
I am having difficulty with the following question
I have to determine if the following claim is true or not.
If it is true I have to prove it; otherwise I need to give a counterexample.
I believe it is not true, but I cannot find a counterexample.
$A$ is a matrix (with real entries).
$A$ is symmetric.
If there exists $k>0$ such that $A^{k}=Id$, then $A^{2}=Id$.
How can I prove it if it is not true?
To disprove it, would I need to exhibit, say, $k=3$ with $A^{2}\neq Id$?
(Although I still believe this is not true.)
Thanks in advance
AI: Hints: a real symmetric matrix is diagonalizable (via an orthogonal matrix, but you don't even need that here) in $M_n(\mathbb{R})$ and the condition $A^k=Id$ implies that the eigenvalues of $A$, which are all real, belong to $\{\pm 1\}$. Finally, note that the result is true.
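Spelling the hint out: write $A=PDP^{-1}$ with $D$ real diagonal. Then $D^k=Id$ forces every diagonal entry $\lambda$ to satisfy $\lambda^k=1$ with $\lambda$ real, so $\lambda=\pm1$. Hence $D^2=Id$, and therefore $A^2=PD^2P^{-1}=Id$.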
H: Subalgebras of certain C*-algebras
Let $A$ be a C*-subalgebra of $C(X, M_{n}(\mathbb{C}))$ where $X$ is a compact Hausdorff space; does it follow that $A$ is isomorphic to $C(Y, M_{m}(\mathbb{C}))$ for some $Y\subseteq X$ and $m\leq n$?
AI: This is not even true for a singleton $X=\{\ast\}$. In this case $C(X,M_n(\mathbb{C}))\simeq M_n(\mathbb{C})$. Consider, for instance, $A$ the subalgebra of diagonal matrices. Then $A\simeq \mathbb{C}^n$. So for $n\geq 2$, this is not isomorphic to any $C(Y,M_m(\mathbb{C}))$ for $Y\subseteq X$, as the latter are isomorphic to $M_m(\mathbb{C})\not\simeq \mathbb{C}^n$.
H: Square of sum of matrices
I'm trying to follow these lecture notes on Linear Discriminant Analysis (LDA) but I can't seem to figure out how the author gets from:
$$ \sum_{x\in\omega_{i}} (w^{T}x - w^{T}\mu_{i})^2$$
to
$$ \sum_{x\in\omega_{i}} w^{T}(x-\mu_{i})(x-\mu_{i})^Tw$$
AI: Since $w^{T}x - w^{T}\mu_{i}$ is a scalar, we can write it as:
$w^{T}x - w^{T}\mu_{i} = (w^{T}x - w^{T}\mu_{i})^T$
Thus,
$(w^{T}x - w^{T}\mu_{i})^2 = (w^{T}x - w^{T}\mu_{i}) (w^{T}x - w^{T}\mu_{i})^T$
Now, we know that:
$(AB)^T = B^TA^T$
Thus, we have:
$(w^{T}x - w^{T}\mu_{i}) (w^{T}x - w^{T}\mu_{i})^T = w^{T} (x - \mu_{i}) (w^{T} (x - \mu_{i}))^T$
Or in other words:
$(w^{T}x - w^{T}\mu_{i}) (w^{T}x - w^{T}\mu_{i})^T = w^{T} (x - \mu_{i}) (x - \mu_{i})^T w$
H: On Lévy collapsing the reals
Consider the Lévy forcing notion. Let $M$ be a transitive standard model of $\mathsf{ZFC}$. Let $\aleph_n$ be the cardinality of the real numbers $2^\omega$ in $M$. Now collapse $\aleph_n$ to $\omega$. The resulting model $M[G]$ is again a model of $\mathsf{ZFC}$ but it is provable in $\mathsf{ZF}$ that $2^\omega$ is uncountable.
Could someone help me resolve this seeming contradiction? I'd be most grateful.
And I have a second question: Where can I access the original document containing the forcing notion explained? Or if that is not available: are there any other resources available? (I have a feeling that I might have been able to sort out my confusion on my own with more documentation available but Wiki and Jech are rather too concise for me.)
AI: Note that for every $\alpha<\aleph_{n+1}$ we add a bijection between $\omega$ and $\alpha$ to $M$. So in $M[G]$ all those ordinals are countable ordinals. So we added subsets of $\omega$ which encode these bijections (or the order type, if you prefer to think about that). So there are $\aleph_{n+1}$ new subsets to $\omega$ in $M[G]$.
Therefore we added $(\aleph_{n+1})^M$ real numbers to $M$, and changed $(\aleph_{n+1})^M$ to be $(\aleph_1)^{M[G]}$, because now whenever $\alpha<(\aleph_{n+1})^M$ we have that $M[G]\models|\alpha|=\aleph_0$. Therefore in $M[G]$ the least ordinal not in bijection with $\omega$ is $(\aleph_{n+1})^M$, which makes it $(\aleph_1)^{M[G]}$.
So we now have that $M[G]\models |2^\omega|=(\aleph_{n+1})^M=\aleph_1$.
As for the reference request, I think that Kanamori's The Higher Infinite gives a nice exposition of the Levy collapse, as well as one of its famous uses (constructing the Solovay model where all sets of reals are measurable).
H: Using the Chebyshev Inequality
This is the question:
A fair coin is tossed $20$ times ("H" means heads).
I have to bound the probability that I will get $n/2+n/100$ heads using the Chebyshev inequality [$n=20$ in this case], so:
$$n/2+n/100 = 20/2+20/100 = 10.2$$
How do I do it?
The result I get is much higher than one.
Am I right?
AI: OK, if I got the question right, it should be
$$
P(|S_{20} - 10|>10.2)<\frac{20}{4 \cdot 10.2^2} \approx 0.048
$$
H: Evaluating $48^{322} \pmod{25}$
How do I find $48^{322} \pmod{25}$?
AI: Recall Euler's theorem: $a^{\phi(n)} \equiv 1 \pmod n$, where $\gcd(a,n)=1$. Since $\phi(25)=25-5=20$, we have
$$48^{20} \equiv 1 \pmod{25}$$
Hence,
$$48^{322} \equiv \left(48^{20}\right)^{16} \cdot 48^2 \pmod{25} \equiv 48^2 \pmod{25} \equiv (-2)^2 \pmod{25} \equiv 4 \pmod{25}$$
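A one-line check in Python agrees:

    print(pow(48, 322, 25))  # prints 4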
H: Minimal Red-Black tree with depth 3
I'd like to ask what the minimal red-black tree (RBT) with black depth $3$ is. Is it the following RBT? Values are not important. And that tree can't have depth $2$ or $1$.
AI: According to Wikipedia, a red-black tree should be "roughly balanced". The rule is "that the path from the root to the furthest leaf is no more than twice as long as the path from the root to the nearest leaf". In your example, every node has a leaf. The root has a leaf, so the path in between has length $1$. Also, the furthest node has a leaf, with the path from the root having length $3$. Thus, that tree does not meet the requirements of a red-black tree.
You can obtain a red-black tree by adding another node to the root. Since your tree was minimal as a binary tree of depth $3$, the tree you obtain must be minimal as a red-black tree of depth $3$.
H: Finding the MLE of $\theta$ where $\theta \leq x$
consider the following PDF
$$
f(x;\theta)=
\begin{cases}
\dfrac{2\theta^2}{x^3} & \theta \leqslant x\\
0 & x < \theta
\end{cases}
\qquad (0 < \theta)
$$
Now the answer states $X_{1:n}$, i.e. the minimum of the $X_i$, but this cannot be deduced via my normal method (log-likelihood). How am I to approach this, and other questions like it?
AI: Hint: At what value of $\theta$ is the likelihood maximized keeping in mind that $\theta \le x_i$ for all $i$?
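Explicitly: the likelihood is $L(\theta)=\prod_{i=1}^n 2\theta^2 x_i^{-3}=2^n\theta^{2n}\prod_i x_i^{-3}$, which is strictly increasing in $\theta$, and the constraint $\theta\le x_i$ for all $i$ means $\theta\le \min_i x_i$. So the maximum is attained at the boundary: $\hat\theta=X_{1:n}=\min_i X_i$.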
H: On a question chosen at random, what is the probability that the student answers it correctly?
I'm really confused about this question. I appreciate your help.
A student takes a multiple choice exam where each question has five possible answers. He answers correctly if he knows the answer, otherwise he guesses at random. Suppose he knows the answer to $70$% of the questions.
Question 1
On a question chosen at random, what is the probability the student answers it correctly?
We know $P(\text{know the answer}) = 0.7$. On a question chosen at random, he either knows the answer or he doesn't. If he does, then $P(\text{answer correctly}|\text{know correct answer})= 1$. If he doesn't, then $P(\text{answer correctly}|\text{doesn't know correct answer})= 0.7(0.3)^4$. I don't know how to continue the answer.
Question 2
Given that he did answer correctly, what is the probability that he actually knew the correct answer?
All I can think of is the following conditional probability.
$$P(\text{know correct answer}\mid\text{answer correctly}) =\frac{P(\text{know correct answer and answer correctly})}{P(\text{answer correctly})}.$$
AI: For the first one, drawing a probability tree might help.
At first, you have the two branches of whether he knows the answer (0.7) and he doesn't know the answer (0.3).
If he knows the answer, then he answers correctly (1), so that, the probability that he knows and answer AND answers correctly is (0.7*1 =) 0.7.
If he doesn't know the answer, he has a probability of (1/5 =) 0.2 of getting the right answer, so that the probability that he doesn't know the answer AND answers correctly becomes: (0.3*0.2 =) 0.06
The rest should be a little easier.
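For completeness, this gives $P(\text{answer correctly})=0.7+0.06=0.76$ for Question 1, and then Question 2 is $P(\text{know}\mid\text{correct})=0.7/0.76\approx 0.92$.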
H: Continuity of $d(x,A)$
I am doing a head-check here. I keep seeing this theorem quoted as requiring $A$ to be closed (as in Is the distance function in a metric space (uniformly) continuous?), but I don't think that it is needed.
Theorem. Let $(X,d)$ be a metric space and $\emptyset \neq A\subseteq X$. Then $x \longmapsto d(x,A)$ is Lipschitz continuous.
Proof. Fix $x,y \in X$. We note that for any $a \in A$ we have $d(x,a) \leq d(x,y) + d(y,a)$.
Taking the infimum over $a \in A$ then gives $d(x,A) \leq d(x,y) + d(y,A)$. It follows quickly that
$$
|d(x,A)-d(y,A)| \leq d(x,y)
$$
and the claim is proved.
Have I accidentally used closedness of $A$ somewhere in my proof? I don't think it is necessary. Maybe the reason that it is usually quoted with $A$ closed is so that there is an $a \in A$ with $d(x,A) = d(x,a)$? (As @Martin points out, this also requires some compactness assumption on the unit ball, e.g. it holds if $X=\mathbb{R}^n$ with Euclidean metric)
AI: Summarizing the comments and expanding slightly:
Yes, your argument is perfectly okay as it stands and you didn't make any hidden use of closedness of $A$ anywhere, only non-emptiness of $A$ is used to ensure that $d(\cdot,A)$ is a real-valued function.
A good additional exercise would be to show that $1$ is the best Lipschitz constant (unless $A$ is dense in $X$ where $d(\cdot,A) = 0$).
Why people often state this result for closed $A$ only is unclear to me, but I agree that it is quite common to make this unnecessary assumption. See for example this question or this question for further examples. The answers point out more or less explicitly that this is not needed.
It could be that the reason is related to the fact you mention---that the infimum is a minimum for closed subsets of $\mathbb{R}^n$, or, more generally, in metric spaces for which closed balls are compact. Another possible "reason" could be that $\overline{A} = \{x \in X \mid d(x,A) = 0\}$, which shows that a closed set $A$ can be recovered from the zero-set of $d(\cdot,A)$.
H: Unity in the rings of matrices
Suppose we are given an arbitrary ring $R$. Then the set $M_n(R)$ of all square matrices with elements from $R$, together with the usual matrix addition and multiplication, forms a ring. If $R$ is a unitary ring, then so is $M_n(R)$.
My question seems to be very trivial but is it possible that $M_n(R)$ has unity while $R$ hasn't?
Thanks in advance.
AI: Let
$$
\begin{bmatrix}a & b\\c& d\end{bmatrix}
$$
be the identity in $M_2(R)$. Then, for any $r$ in $R$, we have
$$
\begin{bmatrix}r & 0\\0& 0\end{bmatrix}=
\begin{bmatrix}r & 0\\0& 0\end{bmatrix}
\begin{bmatrix}a & b\\c& d\end{bmatrix}=
\begin{bmatrix}ra & rb\\0& 0\end{bmatrix}
$$
so $ra=r$, for all $r\in R$, and $a$ is a right unity in $R$. Similarly
$$
\begin{bmatrix}r & 0\\0& 0\end{bmatrix}=
\begin{bmatrix}a & b\\c& d\end{bmatrix}
\begin{bmatrix}r & 0\\0& 0\end{bmatrix}=
\begin{bmatrix}ar & 0\\cr& 0\end{bmatrix}
$$
so $a$ is also a left unity, and hence a two-sided unity in $R$. So the answer to your question is no: if $M_n(R)$ has a unity, then so does $R$.
The example generalizes easily to any matrix size.
H: Prove that if $n\geq\text{lcm}(a,b)$ and $\gcd(a,b)|n$ then $n=xa+yb$ for some integers $x,y\geq 0$
I thought I had it, but then I realized I didn't. Even just a hint—am I going in the right direction or should I try something completely different?
We know that $\gcd(a,b)=wa+zb$ for some integers $w,z$. Then since $\gcd(a,b)|n$ we have that $k \gcd(a,b) = n$ for some integer $k$. Then $n=kwa+kzb$, so letting $x=kw$ and $y=kz$ gives the result.
Except that it doesn't because I realized that there's no promises about $kw$ and $kz$ being nonnegative and in fact I think that in general they are not.
I think there must be some way to reconcile this using $n\geq\text{lcm}(a,b)$ but I am not sure how to. Or maybe this argument was doomed from the start. Anyone have a hint?
AI: Hint: If $n=xa + yb$, then $n=(x-b)a + (y+a)b$.
If you start with $x>0$ and $y<0$, you can keep applying this hint repeatedly, until $y$ becomes positive. Of course, $x$ will keep decreasing at the same time... but can it become negative before $y$ jumps above zero?
H: Does $P(A\cap B) + P(A\cap B^c) = P(A)$?
Based purely on intuition, it would seem that the following statement is true, when thinking of the events as sets:
$$P(A\cap B) + P(A\cap B^c) = P(A)$$
However, I am not sure if this is true, and cannot find out how to prove it, or describe a straight forward intuition of it.
Is this equation true? And if not, is there another way to link $P(A\cap B)$ and $P(A\cap B^c)$ with $P(A)$?
AI: As $A\cap B$ and $A\cap B^c$ are disjoint, we have
$$P(A\cap B)+P(A\cap B^c)=P((A\cap B)\cup(A\cap B^c))=P(A\cap(B\cup B^c))=P(A).$$
H: Statistics Question
This is probably super simple to most of you on here, but I was chatted by a friend earlier with a question. It reads just like this:
A sample of n =7 scores has a mean of M =5. After one new score is added to the sample,
the new mean is found to be M =6. What is the value of the new
score? (Hint: Compare the values for E X before and after the score was added.)
She said the "E X" looks like Greek symbols; presumably it is $\Sigma X$, the sum of the scores. Can someone please explain this problem to me and provide a solution or tell me how to solve it? I guess I don't understand the question or how to set up an equation to solve it. Thanks!
AI: The sample mean is given by:
$M=\dfrac{\sum_{i=1}^n x_i}{n}$
where the $x_i$ are your individual observations and $n$ is the number of observations.
Before you added the observation you have:
$5=\dfrac{\sum_{i=1}^7 x_i}{7}$
And after you add the observation you have:
$6=\dfrac{\sum_{i=1}^8 x_i}{8}$
Given these you can solve for the sums of $x_i$
These turn out to be:
$\sum_{i=1}^7 x_i=5 \times 7 = 35$
$\sum_{i=1}^8 x_i=6 \times 8 = 48$
We know that the above summations share the first $7$ observations. Therefore we can find the value of the new observation by taking the difference.
$x_8=\sum_{i=1}^8 x_i- \sum_{i=1}^7 x_i=48 - 35 = 13$
Therefore your new observation is $13$
Some further explanation:
We may also write $\sum_{i=1}^7 x_i=x_1 + x_2 + x_3 +x_4 + x_5 +x_6 +x_7$
Likewise, we may write: $\sum_{i=1}^8 x_i = x_1 + x_2 + x_3 +x_4 + x_5 +x_6 +x_7 +x_8$
We had shown that $\sum_{i=1}^7 x_i=x_1 + x_2 + x_3 +x_4 + x_5 +x_6 +x_7=35$
Therefore, we may substitute this into the equation for $\sum_{i=1}^8 x_i$:
$\sum_{i=1}^8 x_i = x_1 + x_2 + x_3 +x_4 + x_5 +x_6 +x_7 +x_8 = 35 +x_8$
Since, using the formula for the mean, we also showed:
$\sum_{i=1}^8 x_i= 48$
Then we have:
$48 = 35 + x_8$
and simply solving for $x_8$:
$x_8 = 13$
H: Does Euler totient function gives exactly one value(answer) or LEAST calculated value(answer is NOT below this value)?
I was studying RSA when I came across the Euler totient function. The definition states that it gives the number of positive values less than $n$ which are relatively prime to $n$.
I thought I had it, until I came across this property:
Euler Totient function is multiplicative function, that is: $\varphi(mn) = \varphi(m)\varphi(n)$
Now, if $p$ is a prime number, $\varphi(p)=p-1$.
Putting values of $p$ as 11 and 13 one by one,
$$\varphi(11)=10$$
$$\varphi(13)=12$$
Applying above stated property,
$$\varphi(11\cdot 13)=\varphi(11)\varphi(13)$$
$$\varphi(143)=12 \cdot 10$$
$$\varphi(143)=120$$
Is it correct? Does that mean we have $23$ values between $1$ and $143$ which are not relatively prime to $143$? Sorry if it's something basic I'm missing. I'm not some genius at maths and came across this during my study of the RSA algorithm. Thanks.
AI: You used it correctly, but left something out when stating multiplicativity: A number-theoretic function such as $\phi$ is called multiplicative if $\phi(nm)=\phi(n)\phi(m)$ holds if $n,m$ are relatively prime. For example $\phi(4)=2\ne\phi(2)\phi(2)$.
Otherwise, your result is correct: Precisely the multiples of $11$ and the multiples of $13$ are not relatively prime to $143$, so that's $11,22,33,\ldots , 143$ and $13,26,39,\ldots,143$ (with $143$ occurring in both exception lists).
H: Analogy between prime numbers and singleton sets?
While trying -- in vain -- to write an alternative answer for another question (If $\cup \mathcal{F}=A$ then $A \in \mathcal{F}$. Prove that $A$ has exactly one element.), I discovered the following property for sets: $$A \textrm{ is a singleton set} \;\equiv\; \langle \forall B : B \subseteq A : B = \emptyset \;\not\equiv\; B = A \rangle$$ and I noticed the similarity to the following definition for positive whole numbers: $$n \textrm{ is prime} \;\equiv\; \langle \forall d : d \textrm{ divides } n : d = 1 \;\not\equiv\; d = n\rangle$$ So it looks like singleton sets act a bit like prime numbers. Which is not strange, come to think of it, given that both are 'indivisible atoms' of some sort.
So what common theory underlies both concepts of indivisibility?
AI: We can unify the two definitions by passing to their common generalisation in order theory. To begin, note that the set $\mathscr{P} (X)$ of all subsets of a fixed set $X$ is partially ordered by inclusion, that the set $P$ of all positive integers can be partially ordered by divisibility, and that both $\mathscr{P} (X)$ and $P$ have a bottom element ($\emptyset$ and $1$, respectively).
Moreover, both $\mathscr{P} (X)$ and $P$ have the property that, for any two elements $a$ and $b$, there exists a unique element $a \vee b$ such that, for all $c$ with $a \le c$ and $b \le c$, $a \vee b \le c$. In the case of $\mathscr{P} (X)$, this is the operation of taking the union of two subsets, and in the case of $P$, this is the operation of taking the l.c.m. of two numbers. A partially ordered set with a bottom element $\bot$ and this binary operation $\vee$ is said to be a join semilattice.
Definition. An indecomposable element of a join semilattice $L$ is an element $c$ such that, if $c = a \vee b$, then either $a = \bot$ or $a = c$ (but not both!).
As you have observed, the indecomposable elements of $\mathscr{P} (X)$ are precisely the singleton subsets, and the indecomposable elements of $P$ are precisely the prime numbers.
H: A proof of $n*0=0$?
The only proof I've seen for this assumes that $0$ follows all the rules of arithmetic. How can we make that assumption when dividing by $0$ is a problem? I know that some people don't agree that all of the numbers follow the rules for arithmetic; for example, people say that the proof of $.99999...=1$ is invalid because arithmetic can't deal with these "infinite numbers".
AI: We take:
$$0=0$$
by zero property of addition:
$$0+0=0$$
multiplying both sides by $a$:
$$a\cdot(0+0)=a\cdot0$$
by distributive law:
$$a\cdot0 + a\cdot0 = a\cdot0$$
by cancellation law:
$$a\cdot0=0$$
The cancellation law isn't among the field axioms, so it requires a proof for the above to be complete. Here's a proof:
We want to prove that if $a+c=b+c$, then $a=b$.
By the additive inverse property, we have a $-c$ such that $(-c)+c=0$. So, adding $-c$ to both sides:
$$(-c)+a+c=(-c)+b+c$$
by associativity and commutativity of addition:
$$((-c)+c)+a=((-c)+c)+b$$
by definition of $-c$:
$$0+a=0+b$$
by zero property of addition:
$$a=b$$
So we have proven the cancellation law.
H: If for every $x_n$ such that $x_n \rightarrow x$, there exists a $x_{n_k}$ such that $Tx_{n_k} = Tx$, is $T$ continuous?
Let $X$ and $Y$ be Banach spaces and $T$ be the (possibly nonlinear) map $T\colon X \rightarrow Y$. $T$ is continuous if for every $x_n \in X$ such that $x_n \rightarrow x$, then $Tx_n \rightarrow Tx$. Is $T$ also continuous if instead, for every $x_n \in X$ such that $x_n \rightarrow x$, then there exists a subsequence $x_{n_k}$ such that $Tx_{n_k} = Tx$?
AI: I misunderstood the question at first. Indeed the answer is yes. If $Tx_n$ does not go to $Tx$, one can choose a subsequence $x_{n_k}$ that is bounded away from $Tx$, e.g. $\|Tx_{n_k}-Tx\|\ge\varepsilon$; applying the hypothesis to this subsequence (which still converges to $x$) yields a further subsequence on which $T$ takes the value $Tx$, a contradiction.
H: An exercise about zerodivisors
If $A$ is a commutative ring with unity, $f\in A$ and $x\in \operatorname{Spec}A$, by the notation $f(x)$ I mean the coset $f+x\in A/x$. Now look at this exercise:
Prove that a nonzero element $f\in A$ is a zerodivisor if and only if there exists a decomposition $SpecA=X\cup X'$ where $X\subseteq$ $SpecA$ and $X'\subsetneq Spec A$ such that $f(x)=0$ for all $x\in X$.
One direction is easy: ($\Rightarrow$) If $f$ is a zerodivisor, it's not invertible, so $f$ is contained in a maximal ideal $\mathfrak m$. We have that $SpecA= V(f)\cup D(f)$ where $D(f)\neq Spec A$ because $\mathfrak m\notin D(f)$.
I have problems to show that the other direction. Any hint?
AI: Nice for you that you have problems showing the other direction, since it is completely false!
Take for example $A=\mathbb Z$ and $f=2$:
Clearly $f=2$ is not a zero divisor. However $\text {Spec}(A)$ can be written as $\text {Spec}(A)=X\cup X'$, with $X=V(2)=\{(2)\}$, $X'=\text {Spec}(A)\setminus V(2)\subsetneq \text {Spec}(A)$ and $f(x)=0$ for all $x\in X$.
H: How can I measure variance efficiently?
I have a bunch of values, for example $\{1,2,3,4\}$.
I need to measure variance in a very efficient way.
On wikipedia variance is defined as sum of squared differences
between the data examples and the mean, and then you normalize
that sum with 1/n, where n is the number of data examples.
One improvement is to skip the normalization. Any other ways
I can improve this and make it less expensive to compute?
Thanks
AI: I presume you're talking about implementing the calculation in software (if it is to be done by hand, some of the remarks might be irrelevant).
The averaging/normalization at the end is usually the least expensive step of the calculations; after all, it's just one arithmetical operation. Depending on what your measure of efficiency is, you might find the following formula useful:
$$\mathrm{Var}=\mathrm{E}(X^2)-(\mathrm{E}(X))^2$$
If you are dealing with $n$ samples $x_1, x_2, \ldots, x_n$, it can be expressed as follows:
$$\mathrm{Var}=\frac{1}{n}\sum_{i=1}^n x_i^2 - \left(\frac{1}{n}\sum_{i=1}^n x_i\right)^2$$
Thus, in order to calculate the variance of the given samples, you only need to find their sum and sum of their squares, both divided by $n$ and perform one squaring and one subtraction at the end. The total number of arithmetic operations will then be $(n+1)$ squarings, $(2n-2)$ additions and one subtraction... which looks quite efficient to me. As an extra bonus, you will also get the (arithmetic) mean of the samples. Finally, this approach also allows you to process the samples as they are appearing without storing them (of course, unless you need them for other purposes); allowing you to process lots of samples without increasing the memory requirements. The only possible problem could arise if the variance is very small and thus the two quantities you're going to subtract are very close; you'd need to be well aware of the capabilities and limitations of your real-number processing tools.
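As a minimal sketch of this approach in Python (the function name is just illustrative), processing the samples one at a time without storing them:

    def variance(samples):
        # One-pass population variance via Var = E(X^2) - (E(X))^2.
        n = 0
        total = 0.0
        total_sq = 0.0
        for x in samples:
            n += 1
            total += x
            total_sq += x * x
        mean = total / n
        # Caveat from above: this final subtraction can lose precision
        # when the variance is tiny compared to the mean.
        return total_sq / n - mean * mean

    print(variance([1, 2, 3, 4]))  # 1.25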
H: Can't understand homework assignment
Let $p\in\mathbb{N}$ be an odd prime. Prove that if $p \equiv 3 \pmod 4$ then $−1$ is not a square modulo $p$.
$\textbf{Hint}$ : recall that $\mathbb{Z}/p\mathbb{Z}$ is a field, so that its multiplicative group is a cyclic group
of order $p − 1$. Prove that in any such group the order of $−1$ is $(p − 1)/2$, that the
order of any square is even and deduce that the equation $x^2 = -1$ cannot be solved if $p \equiv 3 \pmod 4$.
I can't understand the Hint. What does it mean "in any such group"? I mean, what is $-1$ in a generic cyclic group of order $p-1$? Suppose the group is the multiplicative group of $\mathbb{Z}/p\mathbb{Z}$. Why the order is claimed to be $(p-1)/2$?? Isn't the order of $-1$ always equal to $2$?
AI: You're right, this seems to be a different usage of "order" than is standard. You should look at a fixed generator $\omega$ of the group; that is, $(\mathbb{Z}/p\mathbb{Z})^{\times} = \{\omega^n : n \in \mathbb{N}\}$, and call the order of $x$ the minimal $n$ such that $\omega^n = x$ - then the hint will work.
H: $f_n(z)={z^n\over n}$, $z\in D$ open unit disk then
1.$\sum f_n$ converges uniformly on $D$?
2.$f_n$ and $f'_n$ converges uniformly on $D$?
3.$\sum f'_n$ converges on $D$ pointwise?
4.$f_n''(z)$ does not converge unless $z=0$
Clearly $1$ is false, as $\sum {1\over n}$ is divergent. I know $f_n$ and $f'_n$ converge to $0$ on $D$, but I don't know whether the convergence is uniform, so I am not sure about $2$. $3$ is true: since $|z|<1$, the infinite geometric series $\sum z^{n-1}$ converges! For $4$: $f_n''(z)=(n-1)z^{n-2}$ converges to $0$ on $D$ as $|z|<1$, so $4$ is false.
AI: In what follows $\|\cdot\|_{C^0(D)}$ denotes the supremum norm $\|f\|_{C^0(D)} = \sup_{z\in D}|f(z)|$.
Your reasoning for (1) is right, perhaps you should mention that $\| \sum_{n=1}^N f_n\|_{C^0(D)} \ge \sum_{n=1}^N \frac 1n$, so as the partial sums are unbounded, we cannot have uniform convergence.
We have $\|f_n\|_{C^0(D)} = \frac 1n\to 0$, so $f_n \to 0$ uniformly. For $f_n'(z) = z^{n-1}$, note that $f_n'(z) \to 0$ pointwise, but $\|f_n'\|_{C^0(D)} = 1$, if $(f_n')$ were uniformly convergent, necessarily $f_n' \to 0$, contradicting $\|f_n'\|_{C^0(D)} \to 1$.
Yes, as you said, on $D$, we have $\sum_{n=1}^\infty z^{n-1} = \frac 1{1-z}$ pointwise (as fgp mentioned in his comment, not uniformly, but this isn't asked).
For (4) we have, as you say, $f_n''(z) = (n-1)z^{n-2} \to 0$ for $z\in D$.
H: A question regarding the Poisson distribution
The number of chocolate chips in a biscuit follows a Poisson distribution with and average of $5$ chocolate chips per biscuit. Assume that the numbers of chocolate chips in different biscuits are independent of each other. What is the probability that at least one biscuit in a box of $20$ has more than $7$ chocolate chips?
Let $X$ be the number of chocolate chips in a biscuit. We know that $\lambda = E[X]=5$.
Then the probability that each biscuit has more than $7$ chocolate chips is
$$\Pr(X \gt 7) = 1 - \Pr(X \le 6) =p_1,$$
where $p_1$ is a value to be found.
Let $Y$ be the number of biscuits in a box of $20$ that have more than $7$ chocolate chips. Then the probability that at least one biscuit in a box of $20$ has more than $7$ chocolate chips is
$$\Pr(Y \le 1) = \Pr(Y=0) + \Pr(Y=1) = \frac{{p_1}^0 e^{-p_1}}{0!} + \frac{{p_1}^1 e^{-p_1}}{1!}=e^{-p_1}(1+p_1).$$
Is this correct? I appreciate your help.
AI: The probability that at least one has more than $7$ is, in the notation of your post,
$$1 -\left(\Pr(X\le 7)\right)^{20}.$$
This because the event "at least one has more than $7$" is the complement of the event "all $20$ have $\le 7$." By independence, the probability that $20$ cookies in a row have $\le 7$ is $\left(\Pr(X\le 7)\right)^{20}$.
Remark: In the post, you were looking at $\Pr(X\le 6)$. That's not quite the relevant probability, since the problem says more than $7$. Similarly, in the attempted calculation of the probability of at least one, the wrong index was being talked about. The probability of at least one is $1$ minus the probability of none.
In the answer, I assumed you know how to find $\Pr(X\le 7)$. We have
$$\Pr(X\le 7)=\sum_{k=0}^7 e^{-\lambda}\frac{\lambda^k}{k!},$$
where $\lambda=5$. A somewhat tedious calculation!
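If you want the number itself, a short Python computation (the variable names are just illustrative) gives roughly $0.943$:

    from math import exp, factorial

    lam = 5
    # P(X <= 7) for X ~ Poisson(5)
    p_le_7 = sum(exp(-lam) * lam**k / factorial(k) for k in range(8))
    # P(at least one of 20 biscuits has more than 7 chips)
    print(1 - p_le_7**20)  # about 0.943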
H: REVISITED$^1$ - Order: Modular Arithmetic
Question:
Observe that $2^{10}=1024≡−1 \pmod{25}$.Find the order of $2$ modulo $25$.
Thoughts:
Direct answers are OK, but I'd like to know if I'm right that what I'm really looking for is this:
$$\inf\left\{\frac{2^x-2}{25}~:~x\in \mathbb{N}\setminus \{0\}\right\}$$
EDIT$^1$:
What I meant was, I'm trying to find the smallest such $x$ so that $2^{10}+25k+2=2^x$, where $k\in \mathbb{Z}$. How do I write that in $\inf$ terms?
EDIT$^2$:
Am I right in thinking that I could think of it this way too:
$$ \inf{\{\log_2(2^{10}+25x+2)~:~x\in \mathbb{Z}\}} $$
AI: Hint: The order is $\le 20$, since $2^{20}\equiv (-1)^2\pmod{25}$. And the order divides $20$, but does not divide $10$. That does not leave many possibilities to rule out.
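Completing the hint: the divisors of $20$ that do not divide $10$ are $4$ and $20$, and since $2^4=16\not\equiv 1\pmod{25}$, the order of $2$ modulo $25$ is $20$.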
H: Solving System of Differential equations
The general solution to the differential equation
$$x'=Ax,$$ where $A$ is a square matrix, is found by solving for the eigenvalues and then the eigenvectors of the matrix $A$. However, is there a general method if I have $$x'=Ax+b$$ where $b$ is a vector of the same length as $x$?
The straight method to attack it would be to integrate to obtain:
$$x-A\frac{x^2}{2}-b=0$$
and solve:
$$x^2-2A^{-1}x+2A^{-1}b=0$$
$$x=\frac{2A^{-1}\pm\sqrt{(2A^{-1})^2-8A^{-1}}}{2}$$
Is there any step ahead of this?
Thank you.
AI: If there exists a vector $v$ such that $Av=b$ then consider the variable $y=x+v$. One finds $y'=Ay$.
If there is no such $v$ then $A$ is not invertible. If $A$ is still diagonalizable, however, then the tricky part of $b$ is the bit lying in the kernel of the matrix. Thus write $b=b_0+c$ where $Ab_0=0$ but there is some solution $v$ of $Av=c$. Try computing $y=x+v-tb_0$.
That's the easiest cases to deal with.
By the way, your attempt to integrate is doomed to failure, because these are vectors, not numbers, and you've integrated one side with respect to $t$ and the other with respect to $x$!
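More generally, whether or not such a $v$ exists, the standard variation-of-parameters formula solves the inhomogeneous system:
$$x(t)=e^{At}x(0)+\int_0^t e^{A(t-s)}b\,ds\;.$$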
H: Minimal polynomial matrix
I want to show that $x^n-1$ is the minimal polynomial of the permutation matrix $P:=(e_2,e_3,\ldots,e_n,e_1)$, where $e_i$ is the $i$-th unit vector written as a column vector.
And now I have to show that over an arbitrary field $x^n-1$ is the minimal polynomial. This seems to be tough. I have already proved that $P^n-Id=0$. Does anybody of you know how to do this or have a hint for me?
AI: Since you observed that $P$ is annihilated by $X^n-1$, it only remains to check that no nonzero polynomial of degree $\leq n-1$ annihilates $P$ to conclude that the minimal polynomial is $X^n-1$. Since the latter has degree $n$ and divides the characteristic polynomial which is also monic of degree $n$, it will also follow that the characteristic polynomial is $X^n-1$.
So assume that a polynomial of degree $\leq n-1$ annihilates $P$, that is
$$
a_0I_n+a_1P+\ldots+a_{n-1}P^{n-1}=0.
$$
Applying the latter to the first vector $e_1$ of the canonical basis, we get
$$
a_oe_1+a_1Pe_1+\ldots+a_{n-1}P^{n-1}e_1=a_0e_1+a_1e_2+\ldots+ a_{n-1}e_n=0.
$$
That is a linear combination of the basis, whence, by linear independence of the latter, $a_0=a_1=\ldots=a_{n-1}=0$. So no nontrivial polynomial of degree $\leq n-1$ annihilates $P$, which concludes the proof.
Note: this is a special case of a companion matrix $C(q)$, for which in general the characteristic and the minimal polynomials are both equal to $q$.
H: Are absolute extrema only in continuous functions?
The Extreme Value Theorem says that if $f(x)$ is continuous on the interval $[a,b]$ then there are two numbers $c,d\in[a,b]$, so that $f(c)$ is an absolute maximum for the function and $f(d)$ is an absolute minimum for the function.
So, if we have a continuous function on $[a,b]$ we're guaranteed to have both absolute maximum and absolute minimum, but functions that aren't continuous can still have either an absolute min or max?
For example, the function $f(x)=\frac{1}{x^2}$ on $[-1,1]$ isn't continuous at $x=0$ since the function is approaching infinity, so this function doesn't have an absolute maximum.
Another example: suppose a graph is on a closed interval and there is a jump discontinuity at a point $x=c$, and this point is the absolute minimum.
The extreme value theorem requires continuity in order for absolute extrema to exist, so why can there be extrema where the function isn't continuous?
AI: First of all, $f(x) = x^{-2}$ isn't well defined on the domain $[-1,1]$, specifically where $x = 0$, so you can't really say that it is discontinuous. But if you assign it any value at $x = 0$, so $f(0) := c$ for $c \in \mathbb{R}$, it is discontinuous.
Then, as shown by yourself, continuity isn't needed to find a function on an interval $[a,b]$ having absolute extrema. This is reflected by the extreme value theorem, because it only guarantees extreme values for a continuous, real function on an interval $[a,b]$, but does not say that a function, having such extreme values, necessarily is continuous.
H: How are these two equations equal?
$$\dfrac{1}{1+e^{-x}} = \dfrac{e^x}{1+e^x}$$
I was told to sketch a curve but couldn't figure out the first step. The solution manual rewrote the left hand side of the equation above as the right hand side. I cannot figure out what they did to get this. Could someone explain this to me?
AI: $$\frac{1}{1+e^{-x}}=\frac{e^x}{e^x}\frac{1}{1+e^{-x}}=\frac{e^x}{e^x+e^x\cdot e^{-x}}=\frac{e^x}{e^x+1}$$
H: How to prove $4\times{_2F_1}(-1/4,3/4;7/4;(2-\sqrt3)/4)-{_2F_1}(3/4,3/4;7/4;(2-\sqrt3)/4)\stackrel?=\frac{3\sqrt[4]{2+\sqrt3}}{\sqrt2}$
I have the following conjecture, which is supported by numerical calculations up to at least $10^5$ decimal digits:
$$4\times{_2F_1}\left(-\frac{1}{4},\frac{3}{4};\frac{7}{4};\frac{2-\sqrt{3}}{4}\right)-{_2F_1}\left(\frac{3}{4},\frac{3}{4};\frac{7}{4};\frac{2-\sqrt{3}}{4}\right)\,\stackrel?=\,\frac{3\sqrt[4]{2+\sqrt{3}}}{\sqrt{2}},$$
where $_2F_1$ denotes the hypergeometric function.
Can you suggest any ideas how to prove it?
The conjectural closed form was obtained using WolframAlpha query
ToRadicals[RootApproximant[2.94844576626425580599908814238570067699233]]
AI: This is the identity 15.5.12 from DLMF, with $a=-1/4$, $b=3/4$, $c=7/4$ and the special form
$$ F(b,a,a,x)=(1-x)^{-b}. $$
Is this how you got your identity?
H: Derive Cauchy-Bunyakovsky by taking expected value
In my notes, it is said that taking the expectation on both sides of this inequality $$|\frac{XY}{\sqrt{\mathbb{E}X^2\mathbb{E}Y^2}}|\le\frac{1}{2}\left(\frac{X^2}{\mathbb{E}X^2}+\frac{Y^2}{\mathbb{E}Y^2}\right)$$ can lead to the Cauchy-Bunyakovsky (Schwarz) inequality $$\mathbb{E}|XY|\le\sqrt{\mathbb{E}X^2\mathbb{E}Y^2}$$ I am not really good at taking expected values; may anyone guide me how to go about it?
Note: I am familiar with the linearity and monotonicity of expected values, what I am unsure about is the derivation that leads to the inequality, especially when dealing with double expectation.
Thanks.
AI: You can simplify your inequality as follows, for the left side:
$|\frac {XY}{\sqrt{EX^{2}EY^{2}}}|=\frac {|XY|}{\sqrt{EX^{2}EY^{2}}}$
for the right side, take the expectation:
$\frac{1}{2}E\left( \frac{X^2}{EX^2}+\frac{Y^2}{EY^2}\right)= \frac{1}{2} E \left( \frac{X^2 EY^2+Y^2 EX^2}{EY^2 EX^2} \right)$
Now, $E(X^2 EY^2+Y^2 EX^2)=2\cdot EX^2EY^2$, using the fact that $EX^2$ and $EY^2$ are constants, so e.g. $E(X^2\,EY^2)=EX^2\,EY^2$.
Plug in and you get the result.
H: Example of ring such that the nil radical is prime and 0 is not
I was just trying to think about an example of a ring that is not a domain and the nilradical is prime, however I could not find anyone.
Thanks in advance.
AI: $\mathbb{Z} / 4 \mathbb{Z}$ works: its nilradical is $(2)$, which is prime (indeed maximal), while $(0)$ is not prime because $2\cdot 2=0$.
H: Complex Analysis boundedness and limits
True or false. If $f: \mathbb{C}\to\mathbb{C}$ is bounded, then $\lim_{z\to 0} f(z)$ exists.
Recall that "$f$ is bounded" means that there is some $M\in\mathbb{R}$ such that $|f(z)|\leq M$ holds for all $z\in\mathbb{C}$.
My Answer
Would $\frac{z}{|z|+1}$ be an example of such a function? Bounded above by 1 and below by -1.
AI: Without additional assumptions, this doesn't hold: Consider
$f(z) = \begin{cases} 1\text{, if }\Re(z) \ge 0 \\ 0 \text{, if } \Re(z) < 0 \end{cases}$
H: Solutions of $x\frac{\partial \psi}{\partial x} + y \frac{\partial \psi}{\partial y} + \psi = f(x)e^{-2\pi i y}$?
I stumbled across the following PDE for a function $\psi(x, y)$:
$$
x\frac{\partial \psi}{\partial x} + y \frac{\partial \psi}{\partial y} + \psi = f(x)e^{-2\pi i y}
$$
where $f$ is some arbitrary function. I was wondering if anyone knows anything about equations of this kind or how to solve them. Separation of variables didn't seem to work, so I'm fresh out of ideas.
AI: Maple says
$$
\psi(x,y) = \frac{1}{x}\left(\int_a^xf(t)\exp\left(\frac{-2\pi i y t}{x}\right)\;dt+F\left(\frac{y}{x}\right)\right)
$$
where $F$ is an arbitrary function.
H: Equation of a line passing through a point and forming a triangle with the axes
How can I find the equation of a line that:
passes through the point $(8, 6)$, and
forms a triangle of area $12$ with the axes?
So I tried to start using $A = |{\frac{mn}{2}}|$ and got that $m\cdot n$ is either -24 or 24. What now?
AI: The equation of a line through $(8,6)$ with slope $a$ is $y=ax+6-8a$,
which meets the axes at the points $(0,6-8a)$ and $\left(8-\frac{6}{a},0\right)$. The area of the right triangle is then
$$S=\frac{\left|\left(8-\frac{6}{a}\right)(6-8a)\right|}{2}=12.$$
After simplification (the case $\left(8-\frac{6}{a}\right)(6-8a)=+24$ gives no real solutions, so the product must equal $-24$) we get the equation
$$16a^2-30a+9=0$$
with solutions
$$a_1=\frac{3}{2},\qquad a_2=\frac{3}{8},$$
so there exist two lines with this property:
$$l_1:y=\frac{3}{2}x-6$$
$$l_2:y=\frac{3}{8}x+3$$
H: Finitely generated flat modules that are not projective
Over left noetherian rings and over semiperfect rings, every finitely generated flat module is projective. What are some examples of finitely generated flat modules that are not projective?
Compare to our question f.g. flat not free where all the answers are f.g. projective not free.
AI: Over a von Neumann regular ring, every right module (and every left module) is flat. Let $V$ be a countably infinite-dimensional $F$-vector space, and let $R$ be the ring of endomorphisms of that vector space. It's known that $R$ is a von Neumann regular ring with exactly three ideals.
The nontrivial ideal $I$ consists of the endomorphisms with finite dimensional image. Then $R/I$ is flat but it cannot be projective. If it were projective, then $I$ would be a summand of $R$... but it is not, because it's an essential ideal.
A second example over any non-Artinian VNR ring: you can take $R/E$ for any maximal essential right ideal $E$ to get a nonprojective, simple flat module. The reasons are very much the same, since a proper essential right ideal can't be a summand of the ring.
You can even make a commutative version: take an infinite direct product of fields $\prod F_i$ (this is von Neumann regular). The ideal $I=\oplus F_i$ is an essential ideal, and $R/I$ is flat, nonprojective. (This one also has the added benefit of supplying examples of ideals which are projective but not free. Any summand of the ring will do, since the ring has IBN. The argument at the other post can be carried out again.)
Incidentally, Puninski and Rothmaler have written a nifty paper investigating which rings have all f.g. flat modules projective.
H: How do you determine the points of inflection for $f(x) = \frac{e^x}{1+e^x}$?
$$f(x) =\dfrac{e^x}{1+e^x}$$
I know we can find points of inflection using the second derivative test. The second derivative for the function above is $$f''(x) = \dfrac{e^x(1-e^x)}{(e^x+1)^3}$$ I have found one critical point for the second derivative which is $0$. I then determined that the function is concave up from $(-\infty,0)$ and concave down from $(0,\infty)$. I am now asked to find the points of inflection. How would I determine the exact points from where the function switches from concave up to concave down?
AI: You've found the inflection point by identifying the value of $x$ at which the graph shifts from concave up to concave down. Plus, it matches the solution to $f''(x) = 0$.
That gives you an inflection point at $\left(0, \frac 12\right)$.
H: Differential Equations Reference Request
Currently I'm taking the Differential Equations course at college; however, the problem is the book used. I'll try to make my point clear, and sorry if this question is silly or anything like that: the textbook used (William Boyce's book) seems to assume that the reader has no familiarity with abstract math, so it lacks the structure of presenting motivations, then definitions, then theorems and corollaries, as we see in books like Spivak's Calculus or Apostol's Calculus.
I've already seen a question like this on Math Overflow; however, unfortunately some people felt offended somehow and said: "are you saying that Boyce is easy? It doesn't matter if you know how to prove things, you must learn to compute", and that is not my point: Apostol and Spivak also teach how to compute, but since their books are aimed at mathematicians, they take care to build everything very carefully, and their main preoccupation is indeed the theoretical aspects.
I really don't like the approach of: well, in some strange way we found that this equation works, so memorize it, know how to compute things and everything is fine. I really want to understand what's going on, and so far I haven't found this possible with Boyce's book (certainly there are people who find it possible, but I'm used to books like Spivak's and Apostol's, so I don't really do well with books like Boyce's).
I've already seen Arnold's book on differential equations, but the prerequisites for reading it are greater: he uses diffeomorphisms many times, and although I'm currently also studying differential geometry, I don't yet feel comfortable reading a book like his.
Can someone recommend a book that covers ordinary differential equations, systems of differential equations, partial differential equations, and so on, but that can be read without many prerequisites, and that still has the structure of a book of mathematics? By "structure of a book of mathematics" I mean being like Spivak's and Apostol's books: not mixing up definitions, theorems and examples inside stories of how the theory developed. Since I'm a student of mathematical physics, motivations and examples from physics are of course welcome, but not all mixed up in the text.
In truth I don't really believe that a book like the one I described exists (if a book on this topic is good in my view, I believe it'll have a lot of prerequisites). Anyway, I hope my question isn't misunderstood, and I'm really sorry if it is silly in some way.
AI: Try Birkhoff/Rota and Hirsch/Smale (I actually don't know the latest edition with Devaney).
H: How are these two equivalent?
$$\frac{\ln(e^x+x)}{x}=\frac{e^x+1}{e^x+x}$$
I see that they did something to get rid of the natural log. I couldn't find any properties that would allow me to do this. I also think that they raised both the numerator and denominator to the power of $e$. I tried it and did not get the same result. Does anyone know how the solution manual got this? I am supposed to use L'Hopital's rule to find the limit as $x$ approaches $0$.
AI: $$\frac{\ln(e^x+x)}{x} \; \neq \;\;\frac{e^x+1}{e^x+x}$$
It's the application of l'Hospital's rule to the left hand side: take the derivative of the LHS's numerator and of the denominator.
$$\large \frac{e^x+1}{e^x+x}=\frac{\frac{e^x+1}{e^x+x}}{1} = \frac{\frac d{dx}\left({\ln(e^x+x)}\right)}{\frac d{dx}(x)}$$
So $$\lim_{x\to 0} \frac{\ln(e^x+x)}{x} = \lim_{x\to 0} \frac{e^x+1}{e^x+x}$$
H: Example of a continuous function which is bounded and not contained in any $L_p$-space ($p\gt 0$)
I'm struggling to find an example of a continuous function $f:(0,\infty)\to \mathbb R$ which is bounded, not contained in any $L_p$-space ($p\gt 0$) and goes to zero when x goes to infinity.
I need help.
Thanks a lot!
AI: Consider
$$
f(x)=\frac{1}{\log(x+3)}
$$
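To see that this works: $f$ is continuous and bounded on $(0,\infty)$ and tends to $0$ at infinity, while for any fixed $p>0$ we eventually have $\log^p(x+3)\le x^{1/2}$, so $f(x)^p\ge x^{-1/2}$ for large $x$; since $\int^\infty x^{-1/2}\,dx$ diverges, $f\notin L_p$ for every $p>0$.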
H: Finding the limit, multiplication by the conjugate
I need to find $$\lim_{x\to 1} \frac{2-\sqrt{3+x}}{x-1}$$
I tried and tried... friends of mine tried as well and we don't know how to get out of:
$$\lim_{x\to 1} \frac{x+1}{(x-1)(2+\sqrt{3+x})}$$
(this is what we get after multiplying by the conjugate of $2 + \sqrt{3+x}$)
How to proceed? Maybe some hints, we really tried to figure it out, it may happen to be simple (probably, actually) but I'm not able to see it. Also, I know the answer is $-\frac{1}{4}$ and when using l'Hôpital's rule I am able to get the correct answer from it.
AI: Multiplying by the conjugate does indeed work. You just forgot to carry the negative sign throughout. After multiplying by the conjugate, the correct expression is $\frac{1-x}{(x-1)(2+\sqrt{3+x})}$.
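From there, cancelling $\frac{1-x}{x-1}=-1$ gives
$$\lim_{x\to 1}\frac{-1}{2+\sqrt{3+x}}=-\frac{1}{2+2}=-\frac14\;,$$
the answer you expected.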
H: Representing sums of matrix algebras as group rings
Let $A = M_{n_1}(\mathbb R) \oplus M_{n_2}(\mathbb R) \oplus ... \oplus M_{n_m}(\mathbb R)$ be a direct sum of real matrix algebras. Under what conditions does there exist a group ring $\mathbb R[G]$ which is isomorphic to $A$?
I know every group ring is isomorphic to some such $A$ by Wedderburn's theorem, and I want to determine the extent to which the converse holds. I know that $A$ must have a $\mathbb R$-summand, corresponding to the linear span of $\sum_{g\in G} g \in \mathbb R[G]$. Is this condition sufficient as well? If not, is there a nice criterion, at least for small numbers of summands?
AI: This is similar to Qiaochu Yuan's answer (+1) but makes the claims more strongly and uses Miller's classification of groups with very few conjugacy classes.
Suppose $G$ is a finite group such that $\mathbb{R}[G]$ is the direct product of matrix rings $M_{n_i}(\mathbb{R})$ for $n_1 \leq n_2 \leq \dots \leq n_m$.
The original question claims this is true for any group, but actually this places fairly severe restrictions on the group which we can use to classify those groups with small $m$. Since all representations are real, we have that the $n_i$ are the character degrees of $G$, and so we have that $n_i$ divides $\sum n_i^2 = |G|$ and that $m$ is the number of conjugacy classes of $G$ (which must all be real). The number of the $n_i$ that are equal to 1 is the index $[G:G']$, and in particular is at least one. Since all the representations are real, we also have that $G$ contains exactly $\sum n_i$ elements of order dividing 2, and so $|G|$ is even.
If $m=1$, there is only one degree and we know it is $n_1=1$ so that $|G|=1$ and $G$ is the trivial group.
If $m=2$, then $G$ has only two conjugacy classes, so $|G|=2$ and $G$ is cyclic of order 2 with $n_1=n_2=1$.
If $m=3$, then $G$ has only three conjugacy classes, so $|G| \in \{3,6\}$, but only the nonabelian group of order 6 has only real conjugacy classes, so $n_1= n_2=1$ and $n_3=2$.
If $m=4$, then $G$ has only four conjugacy classes, so $|G| \in \{4,10,12\}$, but only $G = C_2 \times C_2$ and $G = D_{10}$ work, giving either $n_1=n_2=n_3=n_4=1$ or $n_1=n_2=1$ and $n_3=n_4=2$.
$$\begin{array}{cccc|c}
n_1 & n_2 & n_3 & n_4 & G \\ \hline
1 & 1 & 1 & 1 & C_2 \times C_2 \\
1 & 1 & 2 & 2 & D_{10}
\end{array}$$
If $m=5$, then we get only four possibilities:
$$\begin{array}{ccccc|c}
n_1 & n_2 & n_3 & n_4 & n_5 & G \\ \hline
1 & 1 & 1 & 1 & 2 & D_8 \\
1 & 1 & 2 & 2 & 2 & D_{14} \\
1 & 1 & 2 & 3 & 3 & S_4 \\
1 & 3 & 3 & 4 & 5 & A_5
\end{array}$$ |
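As a quick sanity check on the last row, the degrees must satisfy $\sum_i n_i^2=|G|$, and indeed
$$1^2+3^2+3^2+4^2+5^2=1+9+9+16+25=60=|A_5|.$$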
H: Solving Simple Mixed Fraction problem?
How do you wrap your head around mixed fractions? Does anyone know how to figure them out? Can someone give me an example of how such a problem can be solved?
AI: $$a +\frac bc = \frac {a\times c}c + \frac bc = \frac{(a\times c) + b}{c} $$
What we do first is like finding a common denominator between two fractions; for the whole part $a$ above, we have $a = \dfrac a1$:
We multiply $a = \dfrac a1$ by $\dfrac cc = 1$, to get $\dfrac a1 \times \dfrac cc = \;\dfrac{a\times c}{c}\;$ and then add that to the fraction $\dfrac bc$.
For example
$$3 \frac 58\; = \;3 + \frac 58\; = \;\frac{3 \times 8}{8} + \frac 58\; = \;\frac{(3\times 8)+ 5}{8}\; = \;\frac{24 + 5}{8} \;= \;\frac{29}{8}$$ |
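If you want to check such conversions numerically, here is a small Python sketch using the standard fractions module (the function name is just for illustration):

    from fractions import Fraction

    def mixed_to_improper(whole, num, den):
        # A mixed number "whole num/den" equals (whole*den + num)/den.
        return whole + Fraction(num, den)

    print(mixed_to_improper(3, 5, 8))  # prints 29/8, matching the example above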
H: Basic question on the transformation of Exponential distribution.
Why do the central moments coincide for the random variables $V\sim E(a,h)$ and $Y\sim E(h)$, where $a$ is the location parameter and $h$ is the scale parameter?
AI: Because $E(a,h)$ is $E(h)$ translated by $a$. Taking central moments (by subtracting the mean) removes the translation. |
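Spelled out: if $V=Y+a$, then $E[V]=E[Y]+a$, and so for every $k$
$$E\big[(V-E[V])^k\big]=E\big[(Y+a-E[Y]-a)^k\big]=E\big[(Y-E[Y])^k\big].$$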
H: Solution for Summation of $\cos^2x$
Can you give me the solution for the summation
$$
\sum_{n=0}^{\infty} \cos^2(\pi n)
$$
Edit: Please explain how it is calculated, and also give the final answer as an integer.
AI: Have you noticed how the terms behave? $\cos(\pi n) = (-1)^n$, so $\cos^2(\pi n) = 1$. This series is just summing a bunch of ones, so it diverges.
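Concretely, the partial sums are
$$\sum_{n=0}^{N}\cos^2(\pi n)=N+1\to\infty\quad\text{as }N\to\infty,$$
so there is no finite answer to give.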
Hope that helps, |
H: Topics to Study and Books to Read?
I'm an undergraduate studying materials science and engineering with a concentration in polymer science. I would like to go to graduate school and focus on theory and computation of synthetic polymer and biopolymer systems. So I'm planning of studying things such as Hamiltonian mechanics, statistical mechanics, molecular dynamics and monte carlo simulations.
I feel like college doesn't teach math that well. I want to study more topics and go over topics I studied in classes before, and hopefully build my mathematical intuition. The areas I was thinking about were: single and multivariable calculus, linear algebra, ODEs and PDEs, and statistics and probability. I also saw that some researchers use topology and graph theory to study protein structures. Are there any other areas I should include?
Besides what area to study, my other question is which books to study in a particular area? I took Courant's book "Introduction to Calculus and Analysis" out of the library today. I'm trying to find the "best" or most recommended books because I would like to go through all the topics this summer, thus I don't have a ton of time. Good thing is a lot of it will be things I already know but just analyzed in more detail.
Would I be better off reading textbooks or doing MIT OpenCourseWare courses? I appreciate any responses or recommendations.
AI: Do read Apostol's calculus books, both volumes. They are probably among the best calculus books available. They include almost all the topics you've listed, in detail and with rigour and applications. Just go for those two books by Apostol; you won't need any other math book. For graph theory, you can pick up Diestel; it is available for free on the internet.
And you can never know all the math you'll require for these topics. Just read these books and work out their exercises. If you then need to learn anything else, you will have to pick it up along the way from the internet or buy another book. I don't think anyone can give you a complete list of all the math books you should read. However, I am sure that after Apostol's books you are good to go. They will teach linear algebra, calculus, vector calculus, differential equations, and probability and statistics as well; you also get an introduction to modern mathematics.
You will have to balance between MIT OCW and books; for example, there is a separate OCW course called Advanced Calculus that specifically uses Apostol's books. You can do that. Nothing is ever learnt in full depth from just a single source.
H: Does such $A,B$ exist?
True/false test: there are $n\times n$ matrices $A,\ B$ with real entries such that $(I-(AB-BA))^n=0$.
I'm clueless about where to begin.
AI: Let $C = AB - BA$ and assume that $(C - I)^n = 0$.
Then the minimal polynomial $m(x)$ of $C$ divides $(x - 1)^{n}$. Thus $m(x) = (x-1)^{k}$ for some $1 \leq k \leq n$. This shows that the only eigenvalue of $C$ (considered in $\Bbb{C}$) is $1$, hence the characteristic polynomial of $C$ is $(x-1)^{n}$. In particular, $\operatorname{tr}(C) = n > 0$. This contradicts the fact that $\operatorname{tr}(C) = 0$. Therefore no such matrices $A$ and $B$ exist. |
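The last step uses the basic identity that a commutator has zero trace:
$$\operatorname{tr}(C)=\operatorname{tr}(AB)-\operatorname{tr}(BA)=0,$$
since $\operatorname{tr}(AB)=\operatorname{tr}(BA)$ for any square matrices $A,B$.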
H: A counterexample of normal subgroup with cyclic Sylow 2-subgroup
We know that when a group $G$ has order $2^k m$, where $m$ is an odd integer, $G$ should have a normal subgroup with order $m$ from here. When $k=1$, this implies the index of the normal subgroup is $2$. However, when $k=2$, $m$ is a prime, can we also find a normal subgroup which has index $2$?
That is to say, when $|G|=4p$, where $p$ is a prime congruent to $1$ or $3$ modulo $4$, does a normal subgroup of index $2$ always exist? I don't know how to construct a counterexample; any ideas or solutions are welcome. Thanks!
AI: If a group $G$ has a normal subgroup $N$ of index $2^k$, then $G/N$ is a group of order $2^k$ and so has normal subgroups of index $2^i$ for all $i=0,1,\ldots,k$, and in particular of index 2. The lattice isomorphism theorem is that the subgroups of $G/N$ are exactly the $H/N$ for subgroups $H$ containing $N$, and that $H/N$ is normal in $G/N$ if and only if $H$ is normal in $G$.
If $G$ is a finite group with a cyclic Sylow 2-subgroup, then $G$ has a normal 2-complement, a subgroup whose index is equal to the order of the Sylow 2-subgroup. By the previous, $G$ has normal subgroups of index $2^i$ for all $2^i$ dividing the order of the Sylow 2-subgroup. As long as $G$ has even order, it must have a normal subgroup of index 2.
If the Sylow 2-subgroup is not cyclic, then $G$ need not have any normal subgroups of index 2, as the alternating group on four points shows. |
H: How do I solve this solution-mixing problem?
A chemist has a 55% acid solution and a 40% acid solution. How many liters of each should be
mixed in order to produce 100 liters of a 46% acid solution?
AI: Let $X_1$ denote the total amount of acid solution 1. Let $X_2$ denote the total amount of acid solution 2. Now, the total amount of solution is $X_1 + X_2$. So we want $$X_1 + X_2 = 100.$$ Now for the acid content. Here, we'll have $0.55X_1$ liters of acid from the first solution, and $0.40X_2$ liters of acid from the second solution. Our second condition says we want $$0.55X_1 + 0.40X_2 = 0.46\times 100 = 46.$$ So we have a system of two equations. Solve it and get your answers!
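If you want to verify your hand computation afterwards, here is a small Python sketch that solves this particular system by substitution (just a check, not part of the method):

    # X1 + X2 = 100 and 0.55*X1 + 0.40*X2 = 46.
    # Substituting X2 = 100 - X1 gives 0.15*X1 = 6.
    x1 = (46 - 0.40 * 100) / (0.55 - 0.40)  # liters of the 55% solution
    x2 = 100 - x1                           # liters of the 40% solution
    print(x1, x2)  # prints 40.0 60.0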
H: Check if $\lim_{x\to\infty}{\log x\over x^{1/2}}=\infty$, $\lim_{x\to\infty}{\log x\over x}=0$
Could anyone tell me which of the following is/are true?
$\lim_{x\to\infty}{\log x\over x^{1/2}}=0$, $\lim_{x\to\infty}{\log x\over x}=\infty$
$\lim_{x\to\infty}{\log x\over x^{1/2}}=\infty$, $\lim_{x\to\infty}{\log x\over x}=0$
$\lim_{x\to\infty}{\log x\over x^{1/2}}=0$, $\lim_{x\to\infty}{\log x\over x}=0$
$\lim_{x\to\infty}{\log x\over x^{1/2}}=0$, but $\lim_{x\to\infty}{\log x\over x}$ does not exists.
For I know that $\lim_{x\to\infty}{1\over x}$ exists, and so does $\lim_{x\to\infty}\frac{1}{x^{1/2}}$.
AI: $3$ is correct as $\log x$ grows slower than any $x^n$. So $x^{-1}$ and $x^{-\frac{1}{2}}$ will manage to pull it down to $0$. And $\displaystyle\lim_{x\to\infty}{1\over x}=\lim_{x\to\infty}{1\over \sqrt{x}}=0$. |
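One way to make this precise is l'Hôpital's rule: for any fixed $a>0$,
$$\lim_{x\to\infty}\frac{\log x}{x^{a}}=\lim_{x\to\infty}\frac{1/x}{a\,x^{a-1}}=\lim_{x\to\infty}\frac{1}{a\,x^{a}}=0,$$
and taking $a=1$ and $a=\frac12$ gives both limits in statement $3$.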
H: Example of sequence converging in $d_{l^\infty}$ but not in $d_{l^1}$.
I'll denote by $X$ the space of real sequences $(a_n)$ such that $\sum |a_n|$ converges. Let $d_{l^1}$ be the metric
$$
d_{l^1}((a_n),(b_n))=\sum|a_n-b_n|
$$
and $d_{l^\infty}$ be the metric
$$
d_{l^\infty}((a_n),(b_n))=\sup\{|a_n-b_n|\}.
$$
It seems clear to me that if a sequence (of sequences) $(x^{(n)})$ in $X$ converges with respect to $d_{l^1}$, then it must also converge in $d_{l^\infty}$.
However, I believe I recall that $d_{l^1}$ and $d_{l^\infty}$ are not equivalent, so I'm trying to find an example of a sequence in $X$ which converges in $d_{l^\infty}$ but does not converge in $d_{l^1}$. Does anyone have an example? Cheers.
AI: $(1/n,1/n,\ldots,1/n,0,0,\ldots)$ with $1/n$ in $n$ spots. |
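To verify that this sequence of sequences, call it $x^{(n)}$, does the job:
$$d_{l^\infty}(x^{(n)},0)=\frac1n\to 0,\qquad d_{l^1}(x^{(n)},0)=n\cdot\frac1n=1,$$
so it converges to $0$ in $d_{l^\infty}$ but not in $d_{l^1}$; and since any $d_{l^1}$-limit would also be a $d_{l^\infty}$-limit, it cannot converge in $d_{l^1}$ at all.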
H: Is there a way to eliminate an extraneous index from this sum?
Fix a natural number $n$. Suppose we have a triple sum of the form
$$ \sum_{i=1}^n \sum_{j=1}^n \sum_{k=j+1}^{i-1} f(i,k) $$
where the summands only depend on $i$ and $k$.
Is there a way to rewrite the sum so that it does not include the index $j$?
AI: This sum can be written as
$$
\sum_{1 \le j < k < i \le n} f(i,k)
$$
(do you see why?)
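In case it is not clear: the inner sum over $k$ is nonempty exactly when $j+1\le k\le i-1$, so the triples $(i,j,k)$ that actually contribute are those with
$$1\le j,\qquad j<k<i,\qquad i\le n,$$
which is precisely the condition $1\le j<k<i\le n$.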
From this form, it should be clear that for fixed $i,k$ the term $f(i,k)$ appears $k - 1$ times in the sum. So you can write the sum as
$$
\sum_{2 \le k < i \le n} (k - 1) f(i,k).
$$
In general, a very helpful approach to multiple nested sums is to write them as a single sum over some inequality in the variables, as I have done above. |
H: How do you define definition symbol :=?
How is "$:=$" defined formally and why? "$\iff$", "$=$", ...?
AI: There is no need of a formal definition of a definition. A definition is just an abbreviation. For example, in formal arithmetic (number theory) we write $x\mid y$ ($x$ divides $y$) as an abbreviation for $\exists z (y=x\times z)$. Whenever $s\mid t$ is used, it can be replaced in principle by the "long" form. |
H: Is there still any hope that the GCH could be equivalent to some large cardinal axiom?
Is there still any hope that the GCH could be equivalent to some large cardinal axiom?
Even a simple yes or not answer will be fine. Thanks!!
AI: No.
The term "large cardinals" has no explicit and well-defined meaning, but its general meaning is an additional axiom which is strictly stronger in consistency than $\sf ZFC$, that is, some $\varphi$ such that $\sf ZFC+\varphi$ can prove the consistency of $\sf ZFC$.
On the other hand, $\sf ZFC+GCH$ is equiconsistent with $\sf ZFC$ so it is not a large cardinal axiom per se.
(To slightly elaborate on the above remark that the term has no well-defined meaning: $0^\#$ is considered a large cardinal axiom, but it really just says that a certain real number exists; and a Jonsson cardinal need not even be an inaccessible one, but its existence implies that inaccessible cardinals do exist. Both these, and more, are considered large cardinal axioms despite not being directly related to inaccessible cardinals and so on.)
H: Lagrange's Theorem for further elementary consequences
Question: Let $G$ be a finite group, and let $H$ and $K$ be subgroups of $G$. Prove: if $H$ and $K$ are not equal, and both have order the same prime number $p$, then $H\cap K=\{e\}$.
These are my proof steps:
Proof: $G$ is finite, and $H, K$ are subgroups of $G$; therefore $H\cap K$ is a subgroup of $G$, and also a subgroup of both $H$ and $K$.
Then the order of $H\cap K$ is a common divisor of the order of $H$ and the order of $K$, since $p$ is a prime number. Show that $G$ is a cyclic group and that any $a$ belonging to $G$ is a generator. Therefore $H, K, H\cap K$ are cyclic groups too, because every subgroup of a cyclic group is cyclic. Since $H$ is not equal to $K$, then …..
I got stuck here, because I feel I am missing some part. Can anyone give me some advice? Thanks so much!
AI: You can't assume $G$ is cyclic, nor can you prove it, because not all finite groups are cyclic.
Your proof is very good up to where you say the order of $H \cap K$ is a common divisor of the orders of both the groups $H$ and $K$. At this point, note that the only divisors of a prime $p$ are 1 and $p$ itself. What does this mean about $H \cap K$? How many elements can it have? |
H: Natural logarithm
Can someone please suggest how one proves:
$(1+2x)\ln(1+\frac{1}{x}) -2 >0$ where $x>0$.
I plotted the function in a program and the inequality should be correct.
AI: Here is one approach. Let $f(x)=(1+2x)\ln(1+\frac{1}{x})$. You can show that $\lim\limits_{x\to+\infty}f(x)=2$, for example using l'Hôpital's rule, with a rewrite as $\dfrac{\ln\left(1+\frac{1}{x}\right)}{\left(\dfrac{1}{1+2x}\right)}$. You can show that $f$ is always decreasing by showing that $f'$ is always negative. In case working directly with $f'$ is too cumbersome, it turns out to be easy to show that $f''$ is always positive, and then it would suffice to show that $\lim\limits_{x\to+\infty}f'(x)=0$. |
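Alternatively, the tail behaviour can be read off from the expansion $\ln(1+u)=u-\frac{u^2}{2}+\frac{u^3}{3}-\cdots$ with $u=\frac1x$: for large $x$,
$$(1+2x)\ln\left(1+\frac1x\right)=2+\frac{1}{6x^2}+O\!\left(\frac{1}{x^3}\right),$$
which makes the limit $2$ transparent and shows that $f$ approaches $2$ from above; combined with the monotonicity argument this gives the inequality for all $x>0$.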
H: Why is Cauchy's integral formula always written with the function as the subject?
Scouring textbooks, lecture notes, Wikipedia, etc., I notice that the standard presentation of Cauchy's integral formula is $$f(w)=\frac1{2\pi i}\int_L\frac{f(z)}{z-w}\,\mathrm dz\tag1$$ rather than $$\int_L\frac{f(z)}{z-w}\,\mathrm dz=2\pi i f(w).\tag2$$
I'm new to complex analysis, and have so far directly utilised only form $(2)$ (that is, that equation as an integral formula҂), which, to boot, is a little more compact and reads more intuitively than form $(1).$
Or is the canonical form $(1)$ actually more natural? Please shed light on its intuitive interpretation that I must be missing.
҂A formula is typically a rule for expressing a subject in terms of some other variable(s), so the circle-area formula being canonically presented as $$r=\sqrt{\frac A\pi}$$ would similarly cause a double take: is this a quirky misnomer, is $r$ more fundamental than $A$ when discussing circles, does this square-root form lend itself better to typical applications of the equation, etc.?
AI: I guess the focus of the formula is not that you can evaluate certain integrals by plugging in function values, but that you can recover a function's values by an integral along a circle. |
H: How to write $\pi$ as a set in ZF?
I know that from ZF we can construct some sets in a beautiful form obtaining the desired properties that we expect to have these sets. In ZF all is a set (including numbers, elements, functions, relations, etc...).
For example we can define a copy of $\mathbb{N}$, using the empty set axiom and others, called sometimes $\omega$ and defining,
\begin{align}
0_\mathbb{N}&=\{\}\\
1_\mathbb{N}&=\{\{\}\}\\
2_\mathbb{N}&=\{\{\},\{\{\}\}\}\\
3_\mathbb{N}&=\{\{\},\{\{\}\},\{\{\},\{\{\}\}\}\}\\
4_\mathbb{N}&=\{\{\},\{\{\}\},\{\{\},\{\{\}\}\}, \{\{\},\{\{\}\},\{\{\},\{\{\}\}\}\}\}\\
&\ \vdots
\end{align}
Well, ZF allows us to build these types of sets, which are just arrangements of brackets and commas. We can continue with $\mathbb{Z}$, since we can define $(a,b)=\{\{a\},\{a,b\}\}$:
$\mathbb{Z}$ is defined as the set of equivalence classes $\mathbb{Z}=(\mathbb{N}\times\mathbb{N})\big/\sim$ where
$$\sim\, =\{\big((m,n),(h,k)\big)\in(\mathbb{N}\times\mathbb{N})\times (\mathbb{N}\times\mathbb{N}):(m+_\mathbb{N} k)= (h+_\mathbb{N} n)\}$$
here, the integers are sets more complicated than natural numbers. For example,
\begin{align}
-2_\mathbb{Z}&=\{(1_\mathbb{N},3_\mathbb{N}),(2_\mathbb{N},4_\mathbb{N}),(3_\mathbb{N},5_\mathbb{N}),\ldots,(n_\mathbb{N},(n+2)_\mathbb{N}),\ldots\}\\
&=\{\{\{1_\mathbb{N}\},\{1_\mathbb{N},3_\mathbb{N}\}\},\{\{2_\mathbb{N}\},\{2_\mathbb{N},4_\mathbb{N}\}\},\ldots\}\\
-2_\mathbb{Z}&=\{\{\{\{\{\}\}\},\{\{\{\}\},\{\{\},\{\{\}\},\{\{\},\{\{\}\}\}\}\}\},\\
&\quad\{\{\{\{\},\{\{\}\}\}\},\{\{\{\},\{\{\}\}\},\{\{\},\{\{\}\},\{\{\},\{\{\}\}\}, \{\{\},\{\{\}\},\{\{\},\{\{\}\}\}\}\}\}\},\ldots\}\\
\end{align}
Here we note the importance of notation. We continue with $\mathbb{Q}=(\mathbb{Z}\times(\mathbb{Z}\setminus\{0_\mathbb{Z}\}))\big/\sim$ where
$$\sim\, =\{\big((m,n),(h,k)\big)\in(\mathbb{Z}\times(\mathbb{Z}\setminus\{0_\mathbb{Z}\}))\times (\mathbb{Z}\times(\mathbb{Z}\setminus\{0_\mathbb{Z}\})):m\odot_\mathbb{Z}k=h\odot_\mathbb{Z}n\}$$
And for example:
$$(0.2)_\mathbb{Q}=\{(1_\mathbb{Z},5_\mathbb{Z}),(2_\mathbb{Z},10_\mathbb{Z}),(3_\mathbb{Z},15_\mathbb{Z}),\ldots,(n_\mathbb{Z},(5n)_\mathbb{Z}),\ldots\}$$
Imagine if we wrote the integer numbers as before and wrote the ordered pairs in unabridged form ($(0.2)_\mathbb{Q}$ is a nice abbreviation for this monster). Nevertheless, in principle we could.
Finally we define $\mathbb{R}$ as the set of all Dedekind cuts, for example:
$$(0.2)_\mathbb{R}=\{x\in\mathbb{Q}:x<_\mathbb{Q} (0.2)_\mathbb{Q}\}$$
Note that $(0.2)_\mathbb{R}$ is even more monstrous than $(0.2)_\mathbb{Q}$. Also I can write $(\sqrt{2})_\mathbb{R}$ showing its elements in a simple form,
$$(\sqrt{2})_\mathbb{R}=\{x\in\mathbb{Q}:(x^2<_\mathbb{Q} 2_\mathbb{Q}) \lor (x<_\mathbb{Q} 0_\mathbb{Q})\}$$
But I don't know how to do it with $\pi_\mathbb{R}$, since
$$\pi=\lim_{k\to\infty}\sum_{n=0}^{k}\cfrac{2^{n+1} n!^2}{(2n + 1)!}$$
I only know that
$$(\pi)_\mathbb{R}=\bigcup_{k=1}^{\infty} \left(\sum_{n=0}^{k}\cfrac{2^{n+1} n!^2}{(2n + 1)!}\right)_\mathbb{R}$$
Since the partial sums converge monotonically to $\pi$, we have
$$(\pi)_\mathbb{R}=\bigcup_{k=1}^{\infty} \left\{x\in\mathbb{Q}:x<_\mathbb{Q} \left(\sum_{n=0}^{k}\cfrac{2^{n+1} n!^2}{(2n + 1)!}\right)_\mathbb{Q}\right\}$$
Is there any way to avoid the infinite union (and the choice of a particular convergent sequence), as in the case of $(\sqrt{2})_\mathbb{R}$? If not, why?
Can we write the set representing $\pi$ by listing its elements as we do with integers or rationals? (As $\mathbb{Q}$ is countable, I guess this should be possible, but I don't know how.)
If in ZF everything is a set, it is surprising that so many things can be defined. So my last question is:
How many more things can be built using ZF? Could ZF define for us what a derivative, an integral, a limit or a measure is?
Thanks in advance.
AI: Note that when we define the natural numbers we have a good sense of addition and multiplication (ordinal arithmetics), and from those we can define the operations on $\Bbb Z$ and $\Bbb Q$ and then by using Dedekind cuts construction we can extend these to $\Bbb R$ as well.
So we have that $\Bbb R$ has the operations $+,\cdot$ and they all satisfy all the things we know they do from the times we did mathematics without writing all the sets explicitly.
Now we can use these things to start and define anything else that we desire using the $+$ and $\cdot$ and whatnot as our stones. For example you can define $\pi$ to be the length of the semi-circle of radius $1$.
How do we do that? We define what is an integral, and a path integral, and so on. All from the sets which are addition and multiplication and so on, and then we can define $\pi$ in a painfully tedious way.
The whole point of using set theory, and in this case $\sf ZF$, as our foundation is that we can do things: once we can define the real numbers with their basic properties, we have formulas which define things from that structure, and we don't have to write everything in set-form explicitly.
Once we have the real numbers (with the order) it is easy to define the collection of open intervals, and then it is easy to define the standard topology (the smallest collection containing the intervals and having certain properties), from there we can define the Borel sets, the Lebesgue sets, and the Lebesgue measure (being the unique function from the Lebesgue sets into the real numbers which satisfies certain properties), then we can define integration and with respect to the measure, and we can define derivation.
All these things end up being immensely long and complicated formulas, but the point is that we can write them up. And all this with just $\in$ and the axioms of $\sf ZF$. (Although we may want to add $\sf DC$ or even $\sf AC$ if we discuss measure theory.)
But if you do want to insist on $\pi$ being written in set form:
$$\pi = \left\{x\in\Bbb Q\mathrel{}\middle|\mathrel{} x<_\Bbb Q0\lor \left(x\geq_\Bbb Q0\land\exists k\in\Bbb N:\frac{x^2}6<_\Bbb Q\sum_{n=1}^k\frac1{n^2}\right)\right\}$$ |
H: How to find the amount to added every month or year to get the required amount after certain years?
I want to write a Java application for which, after giving the current savings, the rate of interest, and the required amount after a specified number of years, it has to show how much a person has to save per month or per year to achieve the target.
I'm aware of the formula $$\text{present value}={\text{future value} \over (1+\text{interest rate})^{\text{no. of years}}}$$
Now, after getting the present value, should I divide the present value by the number of months in those years to get the amount he has to invest per month?
AI: The formula you need is the future value of an ordinary annuity:
$$F=R \times \frac{(1+i)^n-1}{i}$$where F is the value at the time of the last payment, R is the size of the regular payment, i is the interest rate per payment period and n is the number of regular payments.
The compounding period of the interest rate must match the payment period...
REVISED ANSWER:
Assume you start with B dollars, and wish to accumulate a total of C dollars after n years, making annual deposits of R dollars, and earning an annual rate of i per year as you go.
The original amount will grow, all by itself, to $B'$ at the end of the payments:
$$B'=B \times (1+i)^n $$Assuming that this is less than your target of C dollars, the regular payments must make up the difference of $C-B'$. Re-arranging the formula for the future value of an annuity: $$R=(C-B') \times \frac{i}{(1+i)^n-1}$$ |
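A small Python sketch of this revised answer (the function name and the example numbers are just for illustration):

    def required_payment(B, C, i, n):
        # Annual deposit R needed to grow initial savings B to target C
        # after n years at annual rate i (ordinary annuity, annual deposits).
        B_grown = B * (1 + i) ** n  # the initial amount grown on its own
        return (C - B_grown) * i / ((1 + i) ** n - 1)

    # Example: start with 1000, target 20000 in 10 years at 5% per year:
    print(round(required_payment(1000, 20000, 0.05, 10), 2))  # about 1460.59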
H: Natural Deduction proof for $\forall x \neg A \implies \neg \exists xA$
$\forall x \neg A \implies \neg \exists xA$
I won't ask you to solve this for me, but can you please give some guiding lines on how to approach a proof in NDFOL?
There are many tricks that the TA shows in class, that I could not dream of...
P.S. I managed to prove $\neg \exists xA \implies \forall x \neg A$ but could not get on from there.
Thanks!
After the proposed answer, let me see if I got this correct:
$\exists x A \implies \exists x A$ (axiom assumption)
$\exists x A \implies A$ (from 1)
$\exists x A, \forall x \neg A \implies \forall x \neg A$ (axiom assumption)
$\exists x A, \forall x \neg A \implies \neg A$ ($\forall$ extraction, from 3)
$\forall x \neg A \implies \neg \exists x A$ (from 2,4)
Am I correct?
I could not understand the justification going from (1) to (2)
AI: To prove an implication, the general guideline for constructing formal proofs is: Assume the premise and the negation of the consequence, and derive a contradiction.
In your present case:
Assume $\exists x A(x)$. By Existential Instantiation, we have $A(t)$ for some (unspecified but fixed) $t$.
Assume $\forall x \neg A(x)$. By Universal Instantiation, we have $\neg A(t)$.
Now $A(t)$ and $\neg A(t)$ combine into a contradiction $\bot$.
We use Negation Introduction on the open assumption $\exists x A(x)$ to conclude $\neg \exists x A(x)$.
Finally, by Implication Introduction, we conclude the desired $\forall x \neg A(x) \implies \neg \exists x A(x)$ holds without any assumption.
Q.E.D.
Addressing OP's efforts:
As you see, there is a difference in our notations. I have parametrised $A$ as $A(x)$, while you haven't. However, this is crucial for the inference of $(2)$ from $(1)$. The expression $A(t)$ contains a $t$, which we can think of as an arbitrary "witness" of $\exists x A(x)$. It is this witness $t$ we apply Universal Instantiation / $\forall$ Extraction to: Since $\forall x \neg A(x)$, in particular $\neg A(t)$, where $t$ is the witness to $\exists x A(x)$.
This working with witnesses requires some practice, and even then one can sometimes mix things up. They are however crucial for the validity of the reasoning, so be sure to try and derive some more "trivialities" containing existential quantifiers!
A final remark (inspired by the comment by Peter Smith): where you wrote "axiom", it is better to write "assumption" or "hypothesis", because these words have different meanings in mathematical lingo.
H: A theorem about the Poisson Point process.
In the proof of the Levy-Khintchine theorem, I saw a theorem about the Poisson
point process.
The theorem states: let $\Pi$ be a Poisson point process on $S$ with intensity measure $\mu$, let $f:S\rightarrow\lbrack0,\infty)$ be a measurable function, and define
$$
Z=\int_{S}f\left( x\right) \Pi\left( dx\right)
$$
We have the following,
$$
E\left[ Z\right] =\int_{S}f\left( x\right) \mu\left( dx\right)
$$
and if $E\left[ Z\right] <\infty,$ then,
$$
Var\left( Z\right) =\int_{S}f\left( x\right) ^{2}\mu\left( dx\right)
$$
How should I prove this? I'm thinking about starting the proof by assuming $f$ is a simple function, and applying DCT somehow. Are there any proofs of this theorem?
AI: The proof is given in Proposition 19.5 of Lévy Processes and Infinitely Divisible Distributions by Ken-iti Sato. It goes along the following lines:
Show that $Z$ follows a compound Poisson distribution with characteristic function
$$
\varphi_Z(z):={\rm E}[e^{izZ}]=\exp\left(\int_S(e^{izf(x)}-1)\,\mu(\mathrm dx)\right),\quad z\in\mathbb{R}.
$$
Use the derivatives of the characteristic function to obtain expressions for the first and second moments, i.e.
$$
i{\rm E}[Z]=\frac{\mathrm d}{\mathrm dz}\varphi_Z(z)\bigg|_{z=0}=i\int_Sf(x)\,\mu(\mathrm dx)
$$
and
$$
i^2{\rm E}[Z^2]=\frac{\mathrm d^2}{\mathrm dz^2}\varphi_Z(z)\bigg|_{z=0}=i^2\int_Sf(x)^2\mu(\mathrm dx)+\left(i\int_Sf(x)\,\mu(\mathrm dx)\right)^2.
$$
Use these expressions to find ${\rm E}[Z]$ and ${\rm Var}(Z)$.
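Carrying out the last step with the two moments above:
$$
{\rm E}[Z]=\int_Sf(x)\,\mu(\mathrm dx),\qquad
{\rm Var}(Z)={\rm E}[Z^2]-({\rm E}[Z])^2=\int_Sf(x)^2\,\mu(\mathrm dx).
$$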
H: Show that If k is odd, then $\Bbb{Q}_{4k}$ is isomorphic to $\Bbb{Z}_k \rtimes_{\alpha} \Bbb{Z}_4$ for some $\alpha$
Show that if k is odd, then $\Bbb{Q}_{4k}$ is isomorphic to $\Bbb{Z}_k \rtimes_{\alpha} \Bbb{Z}_4$ for some $\alpha: \Bbb{Z}_4 \rightarrow Aut(\Bbb{Z}_k)$. Calculate $\alpha$ explicitly.
We know that $\Bbb{Q}_{4k} = \{ b^i, b^ia \mid 0 \leq i < 2k \}$, and that everything outside of the cyclic group $\langle b \rangle$ is of order $4$. What confuses me is that we only have one cyclic group in $\Bbb{Q}_{4k}$, right? However, in order to write it in the form $\Bbb{Z}_k \rtimes_{\alpha} \Bbb{Z}_4$, we need to find a normal cyclic subgroup of order $k$ in $\Bbb{Q}_{4k}$, and I can't really see that.
For example, in the group $\Bbb{Q}_{12} = \{e, b, b^2, b^3, b^4, b^5, a, ab, ab^2, ab^3, ab^4, ab^5\}$, we only have $\langle b \rangle$ as the cyclic group. I cannot see any normal cyclic group of order 3 here.
For the second part of the problem, suppose we accept the fact that $\Bbb{Q}_{4k} = \Bbb{Z}_k \rtimes_{\alpha} \Bbb{Z}_4$ for some $\alpha: \Bbb{Z}_4 \rightarrow Aut(\Bbb{Z}_k)$.
There is a theorem in the textbook that states:
Corollary Let $\bar{m}$ have exponent k in $\Bbb{Z}^×_n$, and let $α : \Bbb{Z}_k → \Bbb{Z}^×_n$ be the homomorphism that takes the generator to $\bar{m}$. Then writing $\Bbb{Z}_n = \langle b \rangle$, $\Bbb{Z}_k = \langle a \rangle$, and identifying $\Bbb{Z}^×_n$ with $Aut(\Bbb{Z}_n)$, we obtain $$\Bbb{Z}_n \rtimes_{α} \Bbb{Z}_k = \{b^ia^j | 0 ≤ i < n, 0 ≤ j < k\},$$ where b has order n, a has order k, and the multiplication is given by $$b^ia^jb^{i'}a^{j'} = b^{i+m^ji'}a^{j+j'}.$$
Moreover, the $nk$ elements $\{b^ia^j \mid 0 \leq i < n, 0 \leq j < k\}$ are distinct.
Since we know that $aba^{-1}=b^{-1}$ in the quaternion groups, we need to find an $\bar{m}$ such that $b^0aba^{-1}= b^{0+m}a^{-1+1} = b^{-1}$. So $\bar{m} = \bar{-1}$, and $\alpha$ must send the generator to $\bar{-1}$. I guess this is what the question is asking for when it tells us to calculate $\alpha$ "explicitly". Is that correct?
However, I cannot say that until I prove that $\Bbb{Q}_{4k} \cong \Bbb{Z}_k \rtimes_{\alpha} \Bbb{Z}_{4}$, so I was wondering if anybody could help me with that.
Thank you in advance
AI: Let's be clear on one significant point first:
What confuses me is that we only have one cyclic group in Q, right?
No. Remember that any chosen element $g$ in a group $G$ generates a cyclic group $\langle g\rangle$, so in general a group will have many cyclic subgroups. In fact, since any cyclic group has a generator, any cyclic subgroup of some $G$ will be obtained by generating it from some single element $g\in G$. Moreover, a group $G$ has only one nontrivial cyclic subgroup iff $G$ is itself a cyclic group of prime order.
To prove this, observe that if $G$ is not cyclic of prime order, then either
$G$ is cyclic of composite order $n$ generated by say $g$, in which case $\langle g^d\rangle$ is a proper nontrivial subgroup of $G$ when $d\mid n$ is a proper divisor and $\ne1$ hence $\langle g^d\rangle$ and $G$ itself are two distinct nontrivial subgroups of $G$ that are cyclic, or
$G$ is not cyclic, so pick $x\in G\setminus\{e\}$, then pick $y\in G\setminus\langle x\rangle$, in which case $\langle x\rangle$ and $\langle y\rangle$ are two distinct nontrivial cyclic subgroups of $G$.
Now, back to generalized quaternion groups. The group presentation is
$$Q_{4k}=\langle a,b~|~b^{2k}=a^4=1,~b^k=a^2,~a^{-1}ba=b^{-1} \rangle$$
What are some cyclic subgroups? By experimentation, $\langle b\rangle$, $\langle b^2\rangle$, $\langle b^k\rangle$, $\langle a\rangle$, $\langle a^2\rangle$, etc. (At this point, assume $k$ is odd.) What are some subgroups of order $4$? The obvious one is $\langle a\rangle$. What about normal cyclic subgroups of order $k$? Well, $\langle b^2\rangle$ is cyclic of order $k$. We see that $b=a^2(b^2)^{(1-k)/2}\in\langle a,b^2\rangle$, which we can use to prove that $\langle a\rangle\langle b^2\rangle=Q_{4k}$. Furthermore $\langle b^2\rangle\cap\langle a\rangle=\{1\}$. We can check that $a\langle b^2\rangle a^{-1}=\langle b^{-2}\rangle=\langle b^2\rangle$, hence $\langle b^2\rangle$ is normalized by $a$ and $b$, hence normalized by the subgroup generated by $a$ and $b$, which is the full group $Q_{4k}$. So $\langle b^2\rangle$ is normal, cyclic and of order $k$.
Thus, $Q_{4k}=\langle b^2\rangle\rtimes\langle a\rangle$. Apply the relation $a^{-1}ba=b^{-1}$ to each factor of $b^2$ to get $a^{-1}b^2a=b^{-2}$, hence we conclude that the homomorphism $\alpha:\langle a\rangle\to{\rm Aut}(\langle b^2\rangle)$ sends $a$ to the inverse map $x\mapsto x^{-1}$ (this is an automorphism of any abelian group), hence sends $a^2$ to the identity and also sends $a^3=a^{-1}$ to the inverse map. In terms of cyclic groups and unit groups,
$$Z_4\to U(k)={\rm Aut}(Z_k):0,2\mapsto 1;~1,3\mapsto -1.$$
However you'll have to judge for yourself from the context of the text and the material what constitutes an explicit bijection for the purposes of this exercise. |
H: Self-adjoint and eigenvalues properties
I am wondering about something.
Let $V$ be an inner product space
$T\colon V\to V$ is a linear map
$T$ is self-adjoint and all the eigenvalues of $T$ are not negative
I need to prove that for all $v$ in $V$, $(T(v),v)\ge0$.
So I think that if all the eigenspaces of all the eigenvalues span $V$ I am done (It is very easy).
But what guarantees that? I do not know any theorem like that (am I wrong?).
So which theorem can I use? All I know is that eigenvectors from different eigenspaces are orthogonal and nothing more.
AI: There is a theorem that says that self-adjoint maps (on finite-dimensional inner product spaces) are always diagonalizable, so there is an eigenbasis and you are done.
Let's sketch the proof: Let $\lambda\in \mathbb C$ be an eigenvalue of $T$, and $E_\lambda := \ker(\lambda- T)$ the eigenspace. Then $E_\lambda$ is $T$-invariant, and by self-adjointness its orthogonal complement $E_\lambda^\bot$ is $T$-invariant as well: for $v \in E_\lambda^\bot$, $w \in E_\lambda$, we have
$$ (Tv, w) = (v,Tw) = (v,\lambda w) = 0$$
so $Tv \in E_\lambda^\bot$. If we restrict $T$ to $T' \colon E_\lambda^\bot \to E_\lambda^\bot$, we can inductively find a $T$-eigenbasis of $E_\lambda^\bot$ and are done. |
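For completeness, here is the easy final step alluded to in the question: writing $v=\sum_i c_iv_i$ in an orthonormal eigenbasis with $Tv_i=\lambda_iv_i$ and $\lambda_i\ge 0$,
$$(T(v),v)=\Big(\sum_i \lambda_ic_iv_i,\sum_j c_jv_j\Big)=\sum_i \lambda_i|c_i|^2\ge 0.$$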
H: Approximating sums of powers of integers: why does $\sum_{i=1}^n i^r \div \frac{n^{r+1}}{r+1} \to 1$ as $n \to \infty$?
I know there are exact formulas for sums of integer powers of integers ($\sum_{i=1}^n i^r$ with $r \in \mathbb{N}$), but I was interested in approximating them. One way occured to me through a geometrical argument. If you arrange $1,2,3,...,n-1,n$ as a right-angled triangle, they fill up about half of the square $n^2$. Similarly if you arrange $1^2,2^2,3^2,...,(n-1)^2,n^2$ as a 'right-angled' pyramid, they fill up about a third of the cube $n^3$. This suggests the general approximation: $$\sum_{i=1}^n i^r \approx \frac{n^{r+1}}{r+1}$$
and indeed I've found numerically that $$\lim_{n\to\infty}{\frac{\sum_{i=1}^n i^r}{\frac{n^{r+1}}{r+1}}} = 1.$$ How can I prove this?
AI: One can show by induction on $r$ that $\sum_{i=1}^n i^r$ is a polynomial function in $n$ of degree $d=r+1$. Then, noting that $(n+1)^d=n^d+d\cdot n^{d-1}+{d\choose 2}n^{d-2}+\ldots$, you get the asymptotic result.
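The leading coefficient can then be read off as well: writing $S_r(n)=\sum_{i=1}^n i^r=a_dn^d+\ldots$ with $d=r+1$, the identity $S_r(n)-S_r(n-1)=n^r$ compares leading terms as
$$a_d\,d\,n^{d-1}+\ldots=n^r\implies a_d=\frac{1}{r+1},$$
which is exactly the asymptotic $\sum_{i=1}^n i^r\sim\frac{n^{r+1}}{r+1}$.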
H: One of $2^1-1,2^2-1,...,2^n-1$ is divisible by $n$ for odd $n$
Let $n$ be an odd integer greater than 1. Show that one of the numbers $2^1-1,2^2-1,...,2^n-1$ is divisible by $n$.
I know that pigeonhole principle would be helpful, but how should I apply it? Thanks.
AI: Observe that there are $n$ such number with the possible number of remainders is $n-1$ from $1,2,\cdots,n-2,n-1$ because if $0$ is a remainder for some $r,$ then $n|(2^r-1)$ and we are done.
Otherwise using Pigeonhole Principle, at least for two distinct values $r$ we shall have same remainder
Let $2^t-1\equiv2^s-1\pmod n$ where $t>s$
$\implies n|\{(2^t-1)-(2^s-1)\}\implies n| 2^s(2^{t-s}-1)\implies n|(2^{t-s}-1)$ as $(2,n)=1$
Clearly, $0<t-s<n$ as $n\ge t>s\ge1$ |
H: Show that eigenvalues are negative
I have to consider the eigenvalue problem:
$$ L[u] := \frac{d^2 u}{dx^2}= λu,x \in (0,1)\quad u(0)-\frac{du}{dx}(0)=0, u(1)=0.$$
I need to show that the eigenvalues are negative.
AI: Suppose we have $\lambda > 0$, then $u'' = \lambda u$ gives us $$u(t) = \alpha\exp(-\sqrt\lambda t) + \beta\exp(\sqrt\lambda t)$$
with $\alpha, \beta \in \mathbb R$. Now
\begin{align*}
u(0) - u'(0) &= \alpha + \beta -\sqrt \lambda \alpha + \sqrt \lambda \beta\\
&= \alpha(1 - \sqrt \lambda) + \beta(1 + \sqrt\lambda)\\
\text{and } u(1)&= \alpha\exp(-\sqrt \lambda) + \beta\exp(\sqrt\lambda)
\end{align*}
So we must have by the boundary conditions
$$\beta = -\alpha \exp(2\sqrt\lambda), \quad \beta = -\alpha\cdot\frac{1-\sqrt \lambda}{1+\sqrt \lambda} $$
that is $\alpha = \beta = 0$ or
$$ \exp(2\sqrt \lambda) = \frac{1-\sqrt\lambda}{1+ \sqrt\lambda} $$
But this is impossible for $\lambda > 0$ as $\lambda \mapsto \exp(2\sqrt\lambda)$ is strictly increasing and $\lambda \mapsto \frac{1-\sqrt\lambda}{1+\sqrt \lambda}$ is decreasing and they are both equal at $0$. So $\alpha = \beta = 0$ and no $\lambda > 0$ is an eigenvalue. |
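To rule out $\lambda=0$ as well: $u''=0$ gives $u(x)=\alpha+\beta x$, and the boundary conditions force
$$u(0)-u'(0)=\alpha-\beta=0,\qquad u(1)=\alpha+\beta=0,$$
so $\alpha=\beta=0$ and $\lambda=0$ is not an eigenvalue either.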
H: Product of variables
Suppose we have a set of variables $\{a_i,b_i| i=1,2,3\}$ which can take values $\pm 1$ according to some probabilities.
If there's a constraint that $a_1b_2b_3=a_2b_1b_3=a_3b_1b_2$ must equal $1$, why then must we have $$a_1a_2a_3=1?$$
I thought of saying that $a_1b_2b_3a_2b_1b_3a_3b_1b_2=a_1a_2a_3(b_1b_2b_3)^2=1$, and therefore the result follows; but I don't think this argument is valid, since the first $b_1$ may differ from the second: they are variables, and there is a non-zero probability for both $\pm 1$...
Also, I don't understand why we couldn't have $$a_1=a_2=+1, a_3=-1$$ and have the $\{b_i\}$'s take appropriate values; since there is a probability, it's possible, right?
Or have I totally misunderstood the question?
AI: Your argument involving $a_1a_2a_3(b_1b_2b_3)^2$ is correct.
The vector $(a_1,a_2,a_3, b_1,b_2,b_3)$ can take $2^6=64$ possible values ($\pm 1$ for each variable). The constraint limits us to the values that satisfy $a_1b_2b_3=a_2b_1b_3=a_3b_1b_2 = 1$. So there are only $8$ possible values that satisfy this constraint. (We are free to choose the $b_i$ however we like, but this fixes the $a_i$.)
Your probability distribution then assigns a probability to each of these $8$ vectors, and $0$ to the remaining $56$.
So all you have to show is that $a_1a_2a_3 = 1$ for all $8$ possible values. Which you've already done. |
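Equivalently, one can avoid the counting altogether: since each $b_i^2=1$, the constraints give $a_1=b_2b_3$, $a_2=b_1b_3$ and $a_3=b_1b_2$, hence on every admissible outcome
$$a_1a_2a_3=(b_2b_3)(b_1b_3)(b_1b_2)=b_1^2b_2^2b_3^2=1.$$
This is also why $a_1=a_2=+1$, $a_3=-1$ can never occur under the constraint, whatever the probabilities are.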
H: Order of convergence of a sum
Let $(X_t)_{t\geq 0},\;X_0=0$, be a positive stochastic process such that
\begin{align*}
\mathbb{E}\left[\sum_{n=1}^{\infty}X_t^n\right]=\sum_{n=1}^{\infty}\mathbb{E}[X_t^n]<\infty.
\end{align*}
Assume that $\mathbb{E}[X_t]=O(t),\;t\to 0$. Clearly for each fixed $N\in\mathbb{N}$,
\begin{align*}
\sum_{n=1}^{N}\mathbb{E}[X_t^n]=O(t).
\end{align*}
Does we also have the following:
\begin{align*}
\sum_{n=1}^{\infty}\mathbb{E}[X_t^n]=O(t)?
\end{align*}
AI: (This is a revised version which addresses the mention (silently added to the question) that the limit to consider is when $t\to0$, not when $t\to\infty$. As regards the other modification, that in contradiction to the original question there would be only one process $(X_t)$, I refuse to address it, for reasons explained in the comments.)
No. Try $\mathbb E[X_t^n]=t\mathbf 1_{nt\leqslant1}$. |
H: Help Understanding Fields
I came across this problem in a Linear Algebra text today:
Let $u$ and $v$ be distinct vectors in a vector space $V$ over a field $F$. Prove that $\{u,v\}$ is linearly independent if and only if $\{u+v,u-v\}$ is linearly independent.
Working on ($\Rightarrow$), I must show that
$$c_1(u+v) + c_2(u-v) = 0 \implies c_1 = c_2 = 0 \text{, where } c_1,c_2 \in F.$$
Using linear independence of $\{u,v\}$ I arrive at the equation $c_1 + c_1 = 0$. However, if $F = \mathbb{Z}_2$ then $1 + 1 = 0$. Also, if $c_1$ and $c_2$ were "integers" then in a field of characteristic $2$ I have
$$c_1 + c_1 = \underbrace{(1 + \cdots + 1)}_{c_1 \text{ times}} + \underbrace{(1 + \cdots + 1)}_{c_1 \text{ times}} = (1 + 1) + \cdots + (1 + 1) = 0 + \cdots + 0 = 0$$
However, I'm assuming that they're integers and I'm not sure (as for example $\mathbb{R}$ is a field which isn't only integers) that $x + x = 0$ in any field with characteristic $2$. Can someone clarify?
Also, I often see the restriction that $F$ be of characteristic not equal to $2$. Why is that?
My knowledge of fields is very limited so any references would also be appreciated.
Thank-you.
AI: The statement you are trying to prove is, indeed, not true if the field has characteristic $2$. If $u=(1,0)$ and $v=(0,1)$ then $u$ and $v$ are linearly independent but $u+v=u-v=(1,1)$. |
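As for the last question about characteristic $2$: in the forward direction one gets $(c_1+c_2)u+(c_1-c_2)v=0$, so independence of $\{u,v\}$ gives $c_1+c_2=0$ and $c_1-c_2=0$; adding these,
$$2c_1=0,$$
which forces $c_1=c_2=0$ precisely when $2\neq 0$ in $F$, i.e. when the characteristic is not $2$. (And it is not required that $c_1$ be an "integer": $x+x=(1+1)x=0$ for every $x$ in a field of characteristic $2$.)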
H: Can someone explain to me what are these 2 statements talking about?
I have to prove that these 2 statements are equivalent, but I can't even understand them.
There exist $\epsilon_0>0$ such that for all $k\in\mathbb N$, there exist $n_k\in\mathbb N$ such that $n_k\geq k$ and $|x_{n_k}-x|\geq\epsilon_0$?
There exist $\epsilon_0>0$ and a subsequence $(x_{n_k})$ of sequence $(x_n)$ such that $|x_{n_k}-x|\geq\epsilon_0$ for all $k\in\mathbb N$.
Thank you.
AI: This looks like an exercise to get you thinking about the logic of these kinds of statements, and to get used to reading them - you are likely to encounter many more of similar form. It is therefore worth putting in some effort to decode them. Sometimes it is easiest to work from the end, because this is the target statement, and the rest is needed to frame it accurately in context.
So in the first one, this is aiming at $|x_{n_k}-x|\geq\epsilon_0$, which we can interpret as $x_{n_k}$ is not close to $x$ - so we are trying to find terms of the sequence which are further away from $x$ than $\epsilon_0$.
We are indexing these terms with positive integers $n_k$, and we want an infinite number of them. We could do that in different ways - here it is done by setting $n_k \ge k$, so we can't choose $n_k=1$ every time, because the sequence eventually has to grow.
This is an interesting way of doing it, because it doesn't specify that the $n_k$ are growing all the time, rather it says that they eventually get bigger than any definite integer. So the sequence $n_k$ could begin $10, 9, 8, 7, 6, 6 \dots$ but by the time we reach $n_{11}$ the value has to be at least 11.
Note that this example does not give you a subsequence, but you should be able to show that a subsequence does satisfy the condition $n_k \ge k$, hence that a subsequence defined as in $2$ - which we can see is also looking for elements of the original sequence which are not close to $x$ - satisfies all the conditions of definition 1.
The real work is showing that you can pick an increasing sequence of integers from the $n_k$ in definition 1, which will index your subsequence. You will need to use the observation that the $n_k$ grow eventually. One approach is to assume you have picked the first $r$ numbers and to show that you can then pick the $(r+1)th$. |
H: Conditioning a martingale increment by earlier increments
I have an $L^1$-martingale ($E[|X_t|]<\infty$) defined on $(\Omega,\mathcal F , \mathbb P)$, with constant expectation $EX_t$, and I have to prove that $$E\{(X_v-X_u)|(X_t-X_s)\}=0$$ for $0\le s<t\le u<v$.
Can I consider $X_t-X_s$ as a filtration, then apply the linearity property?
AI: Notice that $X_t-X_s$ is $\mathcal F_u$-measurable. Therefore by the tower property we have
$$\begin{array}{rl}E(X_v|X_t-X_s) &= E(E(X_v|\mathcal F_u)|(X_t-X_s)) \\ &=E(X_u|(X_t-X_s)).\end{array}$$
Hence
$$\begin{array}{rl}E(X_v-X_u|X_t-X_s) &=E(X_v|X_t-X_s)-E(X_u|(X_t-X_s)) \\&=0.\end{array}$$ |
H: Determinant of product of symplectic matrices
In optical ray tracing it's possible to use symplectic matrices. I have a problem with them.
If a matrix $M$ is symplectic, this means that for $M$ the following equation hols:
$$M^T\Omega M=\Omega$$
where
$$\Omega =
\begin{pmatrix}
0 & I_n \\
-I_n & 0
\end{pmatrix}$$
The determinant of $M$ is one:
$$det(M)=1$$
If I have a product of symplectic matrices:
$$M_t=\prod_{k=1}^NM_k$$
the determinant of the product is likewise $1$.
So, how is it possible to prove:
$$det(M_t)=1$$
Thanks
AI: Use $det(AB...Z)=det(A)det(B)...det(Z)$ and the fact that $det(M_k)=1$. |
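Written out:
$$\det(M_t)=\det\left(\prod_{k=1}^NM_k\right)=\prod_{k=1}^N\det(M_k)=1^N=1.$$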
H: weighted average
I'm not sure the correct term for my problem is weighted average. But let me explain.
I've conducted a survey where participants answer on a scale betweeen $1$ and $7$.
The questions fall into three categories. In categories one and two there are $12$ questions each, and in category three there are four questions.
Let's assume there are nine participants and the distribution of the answers is $579$ points in category one, $450$ points in category two, and $87$ in category three.
I want to calculate/show the answer distribution of each category (compared to the total points), taking into account that category three is three times smaller.
I hope I'm making sense.
AI: Just multiply the points in the third category by 3 to make points in different categories comparable.
BTW, there is a separate forum for statistics: Cross Validated.
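A small Python sketch of this weighting, using the numbers from the question (the rounding is mine):

    points = {"cat1": 579, "cat2": 450, "cat3": 87}
    questions = {"cat1": 12, "cat2": 12, "cat3": 4}

    # Scale every category to a common size of 12 questions (so cat3 is tripled).
    scaled = {c: points[c] * 12 / questions[c] for c in points}
    total = sum(scaled.values())
    shares = {c: round(scaled[c] / total, 3) for c in scaled}
    print(shares)  # {'cat1': 0.449, 'cat2': 0.349, 'cat3': 0.202}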
H: Is $e^x$ the only morphism for addition and multiplication?
I find it interesting that the only (more or less) function that is equal to its own derivative also happens to be a morphism from the reals under addition to the positive reals under multiplication. This would be even more interesting if it were the only such morphism.
Is this the case?
AI: Once you know that $\def\R{\mathbb{R}}(\R_{>0},\cdot)$ is isomorphic to $(\R,+)$ via $\exp$, what you're asking is classifying the additive isomorphisms of $(\R,+)$ onto itself.
If the isomorphism is supposed to be continuous, then it is of the form $x\mapsto ax$, for some $a\in\R$, $a\ne0$. But there are many others, because $\R$ is a vector space over $\def\Q{\mathbb{Q}}\Q$ and vector space isomorphisms are in particular additive isomorphisms of $(\R,+)$ onto itself.
Since the cardinality of a basis of $\R$ over $\Q$ is the same as $|\R|$, there are at least $|\R|$ permutations of the basis, which give as many isomorphisms. |
H: confused about the limit of a trigonometric function
I am trying to calculate the limit of the following function for general $a$:
$$\lim_{x\to a}[\cos(2 \pi x)-\sin(2 \pi x) \cot(\frac{\pi x}{a})]$$
I believed this would be infinite, but Mathematica calculates, for $a=3$ for instance, that the limit is $5$.
And for $a$ in general Mathematica returns an infinite term that says:
DirectedInfinity[a Sin[2 a [Pi]]]
I cannot understand the role of $a\sin(2a \pi)$.
Can someone here help me out of this confusion?
AI: DirectedInfinity is, I believe, a term that Mathematica uses to indicate a limit of infinity in the complex plane. If $a_n$ is an increasing sequence of positive numbers and $k$ is some complex number, then the limit of $ka_n$ will be expressed as DirectedInfinity[k]. (And the same for sequences that approach this type of sequence.) In the case where $a$ is real, it just means that the expression tends to either $\infty$ or $-\infty$, and the sign of $a \sin(2a\pi)$ will tell you which one.
As for the first part of your question, I think that Mathematica is substituting $x=3$ and rearranging your expression to get something like
$$ \cos(\pi) [ \frac{\cos(6\pi)}{\cos(\pi)} - \frac{\sin(6\pi)}{\sin(\pi)} ]$$.
Since $\cos(6\pi)$ can be expressed as a sixth-degree polynomial in $\cos(\pi)$ and the same for $\sin$, the part in brackets probably ends up evaluating to $-5 + 0$ or similar.
This might not be what you want, but it could be what someone else wants when writing this expression. Try the limit for $a$ not an integer, and this simplification shouldn't happen. |
H: Polynomial differential equation
I came across this problem in an old olympiad paper (Putnam?)
Find all polynomials $p(x)$ with real coefficients satisfying the differential equation
$7\dfrac{d }{dx } [xp(x)]=3p(x)+4p(x+1)$ $\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\infty<x<\infty$
I didn't find any "official" solution on the internet, and it would be interesting to see your approaches to this one.
My approach was the following; I don't know if I'm right.
Consider the leading coefficient $a_n$ on both sides. Then we have $7na_n=7a_n$.
Suppose $a_n\neq 0$; then we get $n=1$. I plugged the general solution $p(x)=a_0+a_1x$ into the differential equation and solved for $a_1$, which turned out to be $0$, a contradiction.
The other option is that $a_n=0$, but then $a_{n-1}$ would be the leading coefficient of the polynomial, which would again be $0$ (by the same reasoning). Hence by induction the only coefficient that can be nonzero is $a_0$.
Could you tell me if this is ok? Thanks in advance.
AI: By the product rule
$$ \frac d{dx}\bigl(xp(x)\bigr) = xp'(x) + p(x) $$
so we get
$$ 7xp'(x) = 4\bigl(p(x+1) - p(x)\bigr) $$
Obviously, constant $p$'s are solutions. If $p$ were not constant, say $n = \deg p \ge 1$, then the degree of $p(x+1) - p(x)$ is less than $n$. On the other hand, $\deg p' = n-1$, hence the degree of $xp'(x)$ is $n$. So the two sides cannot have equal degree, which is impossible; hence constant polynomials are the only solutions.
H: Which one is the correct series expansion?
Is
$$p^{n+1} = p^0+p^1+ \dots + p^n$$
or
$$p^{n+1} = p^0\times p^1\times \dots \times p^n\text{ ?}$$
I am confused.
Please explain which one is correct.
AI: Hints:
$$\begin{align*}\bullet&\;\;\;\;1+q+q^2+\ldots+q^n=\frac{q^{n+1}-1}{q-1}\\\bullet\bullet&\;\;\;\;1+2+\ldots+n=\frac{n(n+1)}2\\\bullet\bullet\bullet&\;\;\;a\cdot a^2\cdot\ldots\cdot a^n=a^{1+2+\ldots+n}\end{align*}$$ |
H: A problem on matrices : Sum of elements of skew-matrix
If $A=[a_{ij}]$ is a skew-symmetric matrix, then write the value of $$ \sum_i \sum_j a_{ij}$$
My doubt is: what is the meaning of $ \sum_i \sum_j$? Is it the same as $\sum_{ij}$?
Please offer your assistance.
Thank you
AI: As for the notation: it is easier to understand with a concrete example. Say $n=3$; then
$$
\begin{align}
\sum\limits_{i=1}^3\sum\limits_{j=1}^3a_{ij}
&=\sum\limits_{i=1}^3(a_{i1}+a_{i2}+a_{i3})\\
&=(a_{11}+a_{12}+a_{13})+(a_{21}+a_{22}+a_{23})+(a_{31}+a_{32}+a_{33})\\
&=(a_{11}+a_{12}+a_{13}+a_{21}+a_{22}+a_{23}+a_{31}+a_{32}+a_{33})\\
&=\sum\limits_{i,j=1}^3 a_{ij}
\end{align}
$$
As for the original problem: elements of the diagonal are zeros (why?). So you need to perform calculations only for the nondiagonal entries. Divide them into pairs $a_{ij}+a_{ji}$, recall the definition of a skew-symmetric matrix, and conclude, as spelled out below.
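Spelled out: since $a_{ji}=-a_{ij}$ (and in particular $a_{ii}=-a_{ii}$, so the diagonal vanishes),
$$\sum_i\sum_j a_{ij}=\sum_{i<j}(a_{ij}+a_{ji})+\sum_i a_{ii}=0.$$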
H: A problem on matrices : Powers of a matrix
If $ A=
\begin{bmatrix}
i & 0 \\
0 & i \\
\end{bmatrix}
, n \in \mathbb N$, then $A^{4n}$ equals?
I guessed the answer as $ A^{4n}=
\begin{bmatrix}
i^{4n} & 0 \\
0 & i^{4n} \\
\end{bmatrix}
=\begin{bmatrix}
1 & 0 \\
0 & 1 \\
\end{bmatrix}$ which actually was the answer.
Can you provide me with the correct method of getting the answer?
Thank you.
AI: You can see that $A^2=\begin {pmatrix}i^2&0\\0&i^2 \end{pmatrix}$.
Assume that the formula holds for $n-1$:
$A^{n-1}=\begin {pmatrix}i^{n-1}&0\\0&i^{n-1} \end{pmatrix}$
Then:
$A^n=A\cdot A^{n-1}=\begin {pmatrix}i^n&0\\0&i^n \end{pmatrix}$
In particular, $i^{4n}=(i^4)^n=1^n=1$, so $A^{4n}=I$.
H: Necessary condition for local minima; non-negative Hessian matrix
The problem I have is the following. Any results on Taylor expansions etc. can be assumed:
Let $F : \mathbb{R}^n \to \mathbb{R}$ be a $C^2$ function. Let $x_0$ be a local minimum of $F$. Prove that the Hessian matrix of $F$ at $x_0$ is non-negative.
Thanks for your help.
AI: Let $\def\R{\mathbb R}h \in \R^n$. Define $g \colon \R\to \R$ by $g(t) = F(x_0 + th)$. Then $g$ is a $C^2$-function, having a local minimum at $0$. Hence $g'(0) = 0$, $g''(0) \ge 0$. By the chain rule, we have for $t\in \R$:
\begin{align*}
g'(t) &= F'(x_0 + th)h\\
g''(t) &= F''(x_0 + th)[h,h]
\end{align*}
For $t = 0$, we get
$$ g'(0) = F'(x_0)h, g''(0) = F''(x_0)[h,h] $$
So we have $0 \le F''(x_0)[h,h]$. As $h$ was arbitrary, the bilinear mapping $F''(x_0)$ is non-negative, and hence so is its representing matrix, the Hessian.
H: There are infinitely many choices of $(\alpha_1,\dots,\alpha_n)$ such that $f(\alpha_1,\dots,\alpha_n)\neq 0$
I'm trying to solve this exercise on page 10 of this book.
Maybe I'm forgetting something, but I couldn't solve this exercise; I need a hint or something to begin with.
Thanks in advance
AI: Deal with the case $n = 1$ first. Here you use that a non-zero polynomial in one variable over a field has only finitely many roots.
When $n > 1$, argue by induction.
(Thanks to user78535 for suggesting some extra care here.)
If $X_1$ does not appear in $f$, then we have a polynomial in $n-1$ variables, and induction applies.
If $X_1$ does appear in $f$, write $f$ as a polynomial in $X_1$, with coefficients in $K[X_2, \dots, X_n]$. Clearly the coefficient of one of the powers $X_1^i$, for $i > 0$ has to be non-zero as a polynomial. Thus by induction there are infinitely many choices of $(\alpha_{2}, \dots, \alpha_{n}) \in (K \setminus \Lambda)^{n-1}$ for which this coefficient is non-zero as an element of $K$, when evaluated at $(\alpha_{2}, \dots, \alpha_{n})$.
Now appealing to the case $n = 1$, for each such choice of $(\alpha_{2}, \dots, \alpha_{n})$ there are infinitely many choices of $\alpha_{1}$ such that $f(\alpha_{1}, \dots, \alpha_{n}) \ne 0$. |
H: Complex analysis: examples of bounded complex functions whose limit does not exist at $0$
What are some examples of complex functions that are bounded but whose limit does not exist as $z\to 0$?
AI: $f(z)=\frac{z}{|z|}$.
Also, let $g(z)$ be any analytic function with $g(0)\neq 0$ and let
$$h(z)=\frac{z}{|z|}g(z) \,.$$ |
H: Are these functions orthonormal?
Are the following set of functions orthonormal over the interval $0$ to $1$?
$$Y_r(x) = \sin{\beta_r x}-\sinh{\beta_r x}-\frac{\sin\beta_r-\sinh\beta_r}{\cos\beta_r-\cosh\beta_r}\left(\cos\beta_r x-\cosh\beta_r x\right)$$
where $\beta_r$ are the positive solutions to:
$$1-\cos\beta_r\cosh\beta_r=0$$
I know that the functions $Y_r(x)$ are orthogonal. I went to check whether they were orthonormal by evaluating $\int_0^1 Y_r Y_r \, dx$ numerically and checking whether the integral equaled $1$. For $\beta_r = 4.7300$ I get that the integral is equal to $1.03593$. For larger values of $\beta_r$ the integral approaches $1$, so that makes me think that the set of functions is orthonormal and I am just encountering numerical error. Any thoughts?
AI: I agree with your computations; these functions are not normalized. In fact, Mathematica is able to compute the exact value of the squared norm of the first function:
$$\int_0^1 Y_1(x)^2\,dx = \frac{(\sin (\beta_1 )-\sinh (\beta_1 ))^2}{(\cos (\beta_1 )-\cosh (\beta_1))^2}.$$
Since $\beta_1 \approx 4.73$, we get about $1.0359$ for the integral.
I assume this computation is coming out of a Sturm-Liouville problem and I don't find this type of behavior to be at all unusual in that context. |
H: $\mathbb{Z}[i]$ is a Dedekind domain
I know that $\mathbb{Z}[i]$ is a PID, and that every PID is a Dedekind. But I want to show that $\mathbb{Z}[i]$ is a Dedekind, without using PID.
One strategy could be to show that $\frac{\mathbb{Z}[i]}{\mathfrak{P}}$ is a field for every nonzero prime ideal $\mathfrak{P}$. Could someone suggest to me how to prove this?
Furthermore, what else am I supposed to prove to ensure that $\mathbb{Z}[i]$ is a Dedekind?
AI: Besides proving that every nonzero prime ideal is maximal, you need to prove that $\mathbb{Z}[i]$ is Noetherian and integrally closed.
To prove that $\frac{\mathbb{Z}[i]}{\mathfrak{P}}$ is a field for every nonzero prime ideal $\mathfrak{P}$, prove that it is finite, because every finite domain is a field. See the spoiler below.
To prove that $\frac{\mathbb{Z}[i]}{\mathfrak{P}}$ is finite, consider $I=\mathfrak{P} \cap \mathbb{Z}$. Then $I$ is a prime ideal of $\mathbb{Z}$ and so $I=p\mathbb{Z}$ for some prime $p$. This implies that $\mathfrak{P}$ contains $p\mathbb{Z}[i]$, and so $\frac{\mathbb{Z}[i]}{\mathfrak{P}}$ is a quotient of $\frac{\mathbb{Z}[i]}{p\mathbb{Z}[i]}$ and has at most $p^2$ elements.
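As for the remaining two conditions, one standard sketch: $\mathbb{Z}[i]\cong\mathbb{Z}[X]/(X^2+1)$ is Noetherian by the Hilbert basis theorem, and it is integrally closed because it is the full ring of integers of $\mathbb{Q}(i)$: an element $\alpha=a+bi\in\mathbb{Q}(i)\setminus\mathbb{Q}$ that is integral over $\mathbb{Z}$ satisfies the monic polynomial
$$x^2-2ax+(a^2+b^2)$$
with integer coefficients, and one checks this forces $a,b\in\mathbb{Z}$.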
H: Gateaux and Fréchet differentials in $\ell^1$
I am in trouble trying to solve the following:
Let $X = \ell^1$ with the canonical norm and let $f \colon \ell^1 \ni x\mapsto \Vert x \Vert$. Then $f$ is Gateaux differentiable at a point $x = (x_1, x_2, \ldots )$ if and only if $x_i \ne 0$ for every $i \in \mathbb N$ with Gateaux differential given by
$$
\Phi_G f(x) = \left( \frac{x_1}{\vert x_1 \vert}, \ldots , \frac{x_n}{\vert x_n \vert}, \ldots \right )
$$
Well, I show you what I've done.
1. If for some $i$ we have $x_i = 0$, then I am quite sure the limit
$$
\lim_{t \to 0} \frac{f(x + te_i) - f(x)}{t}
$$
does not exist (by analogy with the one-variable function $x \mapsto \vert x \vert$), but I can't manage to prove it rigorously (how can I evaluate the left and right limits?).
2. Secondly, I do not know how to show that $f$ is Gateaux differentiable if $x_i \ne 0$ for every $i$. How can I do this?
3. Finally, here is a self-posed question: is the function $f$ Fréchet differentiable at some point $x$? My guess is "no", since the Gateaux differential is non-linear, but I have some doubts...
I thank you for any help.
AI: Fix some $v\in \ell^1$ and note that $$\tag{1}\frac{\|x+tv\|-\|x\|}{t}=\frac{\sum_{i=1}^\infty(|x_i+tv_i|-|x_i|)}{t}$$
Since $\bigl|\,|x_i+tv_i|-|x_i|\,\bigr|\le |t|\,|v_i|$ and $v\in\ell^1$, the difference quotients are dominated by the summable sequence $(|v_i|)_i$, so it follows from $(1)$ (by dominated convergence) that $$\tag{2}\lim_{t\to 0}\frac{\|x+tv\|-\|x\|}{t}=\sum_{i=1}^\infty \lim_{t\to 0}\frac{|x_i+tv_i|-|x_i|}{t}$$
Now the question is: at which points is the function $|\cdot|:\mathbb{R}\to\mathbb{R}$ differentiable, and what is its derivative at those points?
For your third question: the fact that the Gateaux differential is non-linear has nothing to do with $f$ being Fréchet differentiable or not. Try to verify whether, at the points where $f$ is Gateaux differentiable, the derivative is continuous.
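If it helps to see the claimed differential concretely, here is a quick finite-dimensional sanity check in Python (a sketch of my own, truncating to finitely many coordinates):

    import numpy as np

    rng = np.random.default_rng(0)
    # Entries bounded away from 0, so a small perturbation cannot flip signs
    x = rng.uniform(0.5, 1.5, size=50) * rng.choice([-1.0, 1.0], size=50)
    v = rng.normal(size=50)

    t = 1e-7
    quotient = (np.abs(x + t * v).sum() - np.abs(x).sum()) / t
    predicted = (np.sign(x) * v).sum()  # sum_i (x_i / |x_i|) v_i
    # With no sign flips, |x_i + t v_i| = |x_i| + t sign(x_i) v_i exactly,
    # so the two values agree up to floating-point roundoff
    print(quotient, predicted)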
H: Finding the convergence of the following function: $f(x)=\sum_{n=1}^\infty\frac{(\ln n)^3}{n}x^n$
How can I find where the following series converges: $f(x)=\sum_{n=1}^\infty\frac{(\ln n)^3}{n}x^n$?
I thought about this question for quite a while. What's the trick?
AI: The ratio test here will work nicely:
Recall that we apply the ratio test to the entire summand:
$$a_n=\frac{(\ln n)^3}{n}\,|x|^n$$
and doing so here will give you the result $\;1\cdot |x|.\;$ So the series will converge $\forall x: |x| \lt 1$, and will diverge $\forall x:|x| > 1$.
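Explicitly, since $\frac{\ln(n+1)}{\ln n}\to 1$ and $\frac{n}{n+1}\to 1$,
$$\lim_{n\to\infty}\frac{a_{n+1}}{a_n}=\lim_{n\to\infty}\left(\frac{\ln (n+1)}{\ln n}\right)^{3}\frac{n}{n+1}\,|x|=|x|.$$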
What happens at $x = +1, x = -1\,$ remains to be determined, so be sure to test for convergence/divergence at $\;x = 1,\;$ and also at $\; x = -1.\;$ |
H: Trace of a differential operator
Given the differential operator:
$$A=\exp(-\beta H)$$
where $$H=\frac{1}{2}\left( -\frac{d^2}{dx^2}+x^2 \right)$$
and $\beta\gt 0$
How can I get the trace of this operator?
Thanks in advance.
AI: The trace of this operator is easily obtained in the following way:
$$
Z={\rm Tr}\exp(-\beta H).
$$
that is equivalent to
$$
Z=\sum_n \langle n|\exp(-\beta H)|n\rangle.
$$
Assuming $H|n\rangle=E_n|n\rangle$, this is just
$$
Z=\sum_n\exp(-\beta E_n).
$$
Your case is the harmonic oscillator, with $E_n=n+\frac{1}{2}$, and the sum is just a geometric series that is easy to evaluate.
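Explicitly,
$$
Z=\sum_{n=0}^\infty e^{-\beta\left(n+\frac{1}{2}\right)}=\frac{e^{-\beta/2}}{1-e^{-\beta}}=\frac{1}{2\sinh(\beta/2)}.
$$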
H: Runge-Kutta method and step doubling
I am studying Runge-Kutta and step size control and came up with a few doubts. Because they are related with this integration method, I will divide it in two parts. First, allow me to introduce the problem.
$1^{st}$ part - questions about Runge-Kutta Method
Consider a $2^{nd}$ order Runge-Kutta with general form:
$k_{1}=hf(x_{n},y_{n})$
$k_{2}=hf(x_{n}+\frac{1}{2}h,y_{n}+\frac{1}{2}k_{1})$
$y_{n+1}=y_{n}+k_{2}+O(h^{3})$
where $f(x_{n},y_{n}) = y'(x_{n})$
Now, if we are to consider $4^{th}$ order Runge-Kutta we would get
$k_{1}=hf(x_{n},y_{n})$
$k_{2}=hf(x_{n}+\frac{1}{2}h,y_{n}+\frac{1}{2}k_{1})$
$k_{3}=hf(x_{n}+\frac{1}{2}h,y_{n}+\frac{1}{2}k_{2})$
$k_{4}=hf(x_{n}+h,y_{n}+k_{3})$
$y_{n+1}=y_{n}+\frac{1}{6}k_{1}+\frac{1}{3}k_{2}+\frac{1}{3}k_{3}+\frac{1}{6}k_{4}+O(h^{5})$
This leads me to the following questions:
I understand why the 2nd order method has the $y_{n+1}$ indicated above. Yet, shouldn't the $4^{th}$ order version of the method be expressed simply as $y_{n+1}=y_{n}+k_{4}+O(h^{5})$, given that $k_{4}$ implicitly contains the values of $k_{1}$ to $k_{3}$?
Why does the $4^{th}$ order version have those fractional coefficients?
$2^{nd}$ part - questions about step-size control
Denote the exact solution for an advance from $x$ to $x+2h$ by $y(x+2h)$, and the two approximate solutions by $y_{1}$ (one step of size $2h$) and $y_{2}$ (two steps each of size $h$). Considering the $4^{th}$ order method we have:
$y(x+2h)=y_{1}+(2h)^{5}\phi+O(h^{6})$
$y(x+2h)=y_{2}+2h^{5}\phi+O(h^{6})$
The difference between these estimates permits estimating the truncation error:
$\Delta=y_{2}-y_{1}$
Which we then use to improve the numerical estimate of the true solution:
$y(x+2h)=y_{2}+\frac{\Delta}{15}+O(h^{6})$
This brings me to the following questions:
In the two expressions of y(x+2h) at the top, does $\phi$ include the terms $k_{1}...k_{4}$ in the first part of this post?
Why is the final expression $5^{th}$ order, if the original problem was $4^{th}$ order?
Where does the coefficient $\frac{1}{15}$ come from?
Thank you for all the insight!
AI: For the first part, the Wikipedia article http://en.m.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods has a detailed discussion of the different members of the family along with a derivation of RK-4. The key idea is that you can do whatever you want in terms of writing down the form of your approximation; all that matters is that doing a Taylor expansion makes all the error terms up to the order of approximation vanish. This is how the coefficients are chosen.
Then in the next part, $\phi$ is the expression for the error made in the step, without the $(2h)^5$ factor. It should be thought of as an expression in terms of the derivatives of $y$ at $x$, but its exact form isn't important, just its independence from $h$. It is the same in both lines to leading order in $h$, so the remaining errors are hidden away in the correction term. Note that "fourth order" means the error term is $O(h^5)$ per step, which it is throughout. You then use two fourth-order approximations to get a better one by estimating the error; this is reasonable since you've used more data.
As to the $1/15$ factor, that simply arises because $(2h)^5-2(h)^5=(16-1)h^5=15h^5$ and if you make only a step of size $h$ you get a factor of $h^5$. Try carefully defining and writing down the different things you work out, then you can see why you take the difference in this way. |
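To make the step-doubling recipe concrete, here is a minimal Python sketch (the function names and the test problem are my own, not from the question):

    import numpy as np

    def rk4_step(f, x, y, h):
        # One classical 4th-order Runge-Kutta step for y' = f(x, y)
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

    def double_step(f, x, y, h):
        # Advance from x to x + 2h two ways and combine the results
        y1 = rk4_step(f, x, y, 2 * h)                     # one step of size 2h
        y2 = rk4_step(f, x + h, rk4_step(f, x, y, h), h)  # two steps of size h
        delta = y2 - y1                                   # truncation error estimate
        return y2 + delta / 15, delta

    # Test on y' = y, y(0) = 1, whose exact solution is e^x
    y_improved, delta = double_step(lambda x, y: y, 0.0, 1.0, 0.1)
    print(y_improved - np.exp(0.2), delta)  # the corrected value is far more accurate

Note how the $\Delta/15$ correction comes out of exactly the $(2h)^5-2h^5=15h^5$ cancellation described above.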
H: What is the purpose to define central moment?
What is the purpose to define central moment?
I searched Google and all I could find was a bunch of properties.
AI: I'm not sure I got your question; one of the 'purposes' of the central moment is that setting $n=2$ gives the variance of the random variable, which is a measure of dispersion around the mean. Higher central moments (e.g. the fourth, which underlies kurtosis) can also be used for this purpose.
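For reference, the $n$-th central moment of a random variable $X$ is
$$\mu_n = \mathbb{E}\left[(X-\mathbb{E}[X])^n\right],$$
so $\mu_2$ is the variance, and the kurtosis is the normalized fourth central moment $\mu_4/\mu_2^2$.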
H: Time has dimension $2$ with respect to the Ricci flow scaling
Terence Tao in his lecture notes on Ricci flow has written:
If we are to find a scale-invariant (and diffeomorphism-invariant) monotone quantity for Ricci flow, it had better be constant on the gradient shrinking soliton. In analogy with $\frac{d}{dt}\mathcal{F}_m=2\int_M |Ric+Hess(f)|^2dm$, we would therefore like the variation of this monotone quantity with respect to Ricci flow to look something like
$$2\int_M\left|Ric+Hess(f)-\frac{1}{2\tau}g\right|^2\,dm \tag{$*$}$$
where $\tau$ is some quantity decreasing at the constant rate $\dot{\tau}=-1.$
But the scaling is wrong; time has dimension $2$ with respect to the Ricci flow scaling, and so the dimension of a variation of a scale-invariant quantity should be $-2$, while the expression $(*)$ has dimension $-4$. (Note that $f$ should be dimensionless (up to logarithms), $\tau$ has the same dimension as time, i.e. $2$, and $\int_M dm=1$ is of course dimensionless.)
Question:
In the note what is the meaning of this sentence: time has dimension $2$ with respect to the Ricci flow scaling?
Thanks. (Sorry if the question is too trivial.)
AI: Let us start with the simple example of heat equation
$$ \partial_t u(t,x) = \triangle u(t,x) $$
we see that if $u(t,x)$ is a solution, then $u_\lambda(t,x) = u(\lambda^2 t,\lambda x)$ is also a solution, as
$$ (\partial_t u_\lambda)(t,x) = \lambda^2 (\partial_t u)(\lambda^2 t,\lambda x) = \lambda^2 (\triangle u)(\lambda^2 t,\lambda x) = (\triangle u_\lambda)(t,x)~.$$
This shows that if we change $x$ by a scale factor $\lambda$, we need to change $t$ by scale factor $\lambda^2$. So if $x$ has unit "Length", the units for $t$ should be "Length$^2$".
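One can verify this scaling symbolically on any explicit solution; here is a sketch using SymPy (the example solution $e^{-t}\sin x$ is my own choice):

    import sympy as sp

    t, x, lam = sp.symbols('t x lam', positive=True)

    def heat_residual(u):
        # Returns u_t - u_xx, which vanishes iff u solves the heat equation
        return sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2))

    u = sp.exp(-t) * sp.sin(x)  # a classical solution of u_t = u_xx
    u_scaled = u.subs([(t, lam**2 * t), (x, lam * x)], simultaneous=True)

    print(heat_residual(u), heat_residual(u_scaled))  # both print 0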
This fact is usually true for parabolic equations (though nonlinearities sometimes throw a wrench into the proceedings; in those cases it is also important to keep track of the solution $u$'s units; note that in the linear case $u$'s units do not matter, since by linearity they must appear in the same "strength" for all terms).
For the Ricci flow, consider the metric $g_{\mu\nu}$ as a function in local coordinates. Let the scaled metric be $g^{(\lambda)}_{\mu\nu}(x) = g_{\mu\nu}(\lambda x)$ and compute the corresponding Christoffel symbols; you will see that $\Gamma^{(\lambda)} = \lambda \Gamma$ and that the curvature scales like $\mathrm{Riem}^{(\lambda)} = \lambda^2 \mathrm{Riem}$. Taking the trace does not change the scaling, so the same analysis as in the linear case gives that if $g_{\mu\nu}(t,x)$ solves, in a fixed coordinate system, the Ricci flow, then so does $g^{(\lambda)}_{\mu\nu}(t,x) = g_{\mu\nu}(\lambda^2 t,\lambda x)$.
Response to followup:
Again let us first start with a simple example. Think back to classical physics. The distance $x$ an object has traveled has unit "Length". The time $t$ it took to travel has unit "Time". So the "variation of the distance with respect to time" $v = \frac{\mathrm{d}x}{\mathrm{d}t}$ has unit "Length / Time".
Now in the case of a parabolic equation, suppose $\mathcal{F}$ is something that has no units (a pure number, or in SI the unit of "one"), then the variation of $\mathcal{F}$ with respect to "time", which we recall has unit "Length$^{2}$", would have unit
$$ \frac{\text{Unit of } \mathcal{F}}{\text{Unit of time}} = \frac{1}{\text{Length}^2} = \text{Length}^{-2} $$ |