H: Derivations on semisimple Lie algebra
First recall some definitions:
Let $B$ be the Killing form on a Lie algebra $\mathfrak{g}$ over ${\bf R}$, defined by
$B(X,Y)\doteq Tr(ad_Xad_Y)$.
$\mathfrak{g}$ is semisimple if $B$ is non-degenerate.
Define $\partial \mathfrak{g} \doteq \{ D \in \operatorname{End}(\mathfrak g) \mid D[X,Y]=[DX,Y] + [X,DY] \}$, the space of derivations of $\mathfrak g$. Clearly it contains $ad(\mathfrak{g})\doteq \{ ad_X \mid X\in \mathfrak{g}\}$.
My question is about proof of $\partial \mathfrak{g}=ad(\mathfrak{g})$ :
I am reading Helgason's book. We can show easily that $ad(\mathfrak{g}) \cong \mathfrak{g}$ (the kernel of $ad$ is the center of $\mathfrak g$, which vanishes by semisimplicity).
I cannot proceed with the proof, since $B$ is defined on $\mathfrak{g}$ only.
How can I complete the proof? Thank you in advance.
AI: Here's a proof from Humphreys's excellent book:
First of all, show that $I=ad(\mathfrak g)\subset\partial\mathfrak g$ is an ideal of the Lie algebra $\partial\mathfrak g$: if $x\in\mathfrak g$ and $\delta\in\partial\mathfrak g$, then for every $y\in\mathfrak g$ $$[\delta,ad(x)](y)=\delta[x,y]-[x,\delta y]=[\delta x,y]+[x,\delta y]-[x,\delta y]=[\delta x,y],$$ so that $$[\delta,ad(x)]=ad(\delta(x))\in I.$$ This implies that the Killing form on $\mathfrak g$ coincides with that induced by the Killing form on $\partial\mathfrak g$. Since $\mathfrak g$'s Killing form is non-degenerate by semisimplicity, we have
$$I\oplus I^{\perp}=\partial\mathfrak g$$
with respect to the Killing form on the derivations. Now consider $\delta\in I^{\perp}$. For any $x\in\mathfrak g$, since both $I$ and $I^{\perp}$ are ideals in $\partial\mathfrak g$, $$[\delta,ad(x)]\in I\cap I^{\perp}=\lbrace 0\rbrace$$ equals $ad(\delta(x))$ by the calculation above, so for all $x\in\mathfrak g,~\delta(x)\in\ker(ad)=\lbrace 0\rbrace$. That is, $\delta=0$ and $I^{\perp}=\lbrace 0\rbrace$, so that $ad(\mathfrak g)=\partial\mathfrak g$. |
H: Proving that Euclidean space having the infinity metric is a complete metric space (stuck)
I am trying to prove that the space ${\mathbb{R}}^k$ with the $\infty$-metric is a complete metric space.
I know that I need to show that every Cauchy sequence in the metric space ${\mathbb{R}}^k$ with the $\infty$-metric converges to a point $x\in{\mathbb{R}}^k$. Part of my work so far involved proving that the space ${\mathbb{R}}^k$ with the old Pythagorean norm is a complete metric space, but I’m not sure if I should be using that at all in this proof. Here goes:
Proof:
Suppose that $k\in{\mathbb{N}}$ and that $\left(x_{n}\right)$ is a Cauchy sequence in the space ${\mathbb{R}}^k$ with the $\infty$-metric.
Suppose that $\epsilon>0$.
Using the fact that $\left(x_{n}\right)$ is a Cauchy sequence, choose an integer $N$ such that the inequality $||x_{m}-x_{n}||_\infty<\frac{\epsilon}{2}$ holds for all integers $m$ and $n$ such that $m\ge{N}$ and $n\ge{N}$.
This is where I’m stuck. I feel like I want to use the triangle inequality from here…
Since the space ${\mathbb{R}}^k$ is a metric space, we know that the inequality $$||{x_n}-{x_m}||_{\infty}\le||{x_n}-{x}||_{\infty}+||{x_m}-{x}||_{\infty}\lt\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$$ holds for all positive integers $n$ and $m$.
However, I know that this isn’t a good thing to state. I think that I’m assuming that there is some point $x$ in the space that the sequence converges to… this is exactly what I'm trying to prove!
I’m totally stuck. Any advice would be GREATLY appreciated!
EDIT: I'm including the updated version of my proof based on advice from Brian M. Scott. Here it is:
Proof:
Suppose that $k\in{\mathbb{N}}$ and that $\left(x_{n}\right)$ is a Cauchy sequence in the space ${\mathbb{R}}^k$ with the $\infty$-metric.
Suppose that $\epsilon>0$.
For each $n\in\mathbb{Z}^{+}$ we have $x_{n}= \left(x_{n,1},x_{n,2},\ldots,x_{n,k}\right)$.
Consider the real-valued sequences $\left(x_{n,j}\right)$ for each $j=1,2,\ldots,k$.
Using the fact that $\left(x_{n}\right)$ is a Cauchy sequence, choose $N\in\mathbb{Z}$ such that the inequality $||x_{m} - x_{n}||_{\infty}\lt \frac{\epsilon}{2}$ holds for all $n\ge N$ and $m\ge N$.
Since the inequality $|x_{m,j} - x_{n,j}|\le||x_{m} - x_{n}||_{\infty}$ holds for all $m$ and $n$, we have the inequality $|x_{m,j} - x_{n,j}|\lt\frac{\epsilon}{2}$ for all $m\ge N$ and $n\ge N$.
Thus for $j=1,2,\ldots,k$ the sequence $\left( x_{n,j} \right)$ is a Cauchy sequence in $\mathbb{R}$.
Using the fact that $\mathbb{R}$ is complete, define $$x_{j}=\lim_{n\to\infty}x_{n,j}$$ for $j=1,2,\ldots,k$.
The point $x=\left(x_{1},x_{2},\ldots,x_{k}\right)$ is our candidate for the limit of $\left(x_{n}\right)$.
From the inequality $$||x-x_{n}||_\infty\le||x-x_{m}||_{\infty}+||x_{m}-x_{n}||_{\infty}\lt\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon,$$ which holds for all $n\ge N$ once we fix some $m\ge N$ large enough that $||x-x_{m}||_{\infty}\lt\frac{\epsilon}{2}$ (possible because each coordinate of $x_{m}$ converges to the corresponding coordinate of $x$), we see that the sequence $\left(x_{n}\right)$ converges to the point $x\in\mathbb{R}^{k}$ and the space $\mathbb{R}^{k}$ is a complete metric space.
AI: HINT: There are at least two reasonable approaches.
Start with a Cauchy sequence $\langle x^n:n\in\Bbb N\rangle$, where $x^n=\langle x_1^n,\dots,x_k^n\rangle$ for each $n\in\Bbb N$. Now look at the real-valued sequences $\langle x_i^n:n\in\Bbb N\rangle$ for $i=1,\dots,k$. Show that each is Cauchy in the usual metric on $\Bbb R$, let $x_i$ be the limit (since $\Bbb R$ is complete in the usual metric), and show that $\langle x^n:n\in\Bbb N\rangle$ converges to $\langle x_1,\dots,x_k\rangle$ in $\Bbb R^k$.
Show that the $\infty$ metric is equivalent to the Euclidean metric. Then a sequence is Cauchy in one of these metrics if and only if it’s Cauchy in the other, it converges to a point $x$ in one if and only if it converges to $x$ in the other, and you already know that the Euclidean metric is complete. |
H: Proof of Bienayme Inequality
I have a bit of trouble about the proof of Bienayme Inequality.
Bienayme Inequality is as follows:
If X has mean $\mu$ and variance $\sigma^2$, then
$$\mathbb{P}\left(\frac{|X-\mu|}{\sigma}\ge k\right)\le\frac{1}{k^2}.$$
Bienayme's Proof:
Let $B = \{|X-\mu|\ge k\sigma\}$ and $\mathbb{1}_B$ be the indicator random variable. Then $(X-\mu)^2\mathbb{1}_B\ge\sigma^2k^2\mathbb{1}_B$ and $\mathbb{E}(\sigma^2k^2\mathbb{1}_B)=\sigma^2k^2\mathbb{P}(B)$, hence
$$\sigma^2=\mathbb{E}[(X-\mu)^2]$$
$$\ge\mathbb{E}((X-\mu)^2\mathbb{1_B})$$
$$\ge\mathbb{E}(\sigma^2k^2\mathbb{1}_B)$$
$$=\sigma^2k^2\mathbb{E}(\mathbb{1}_B)$$
$$=\sigma^2k^2\mathbb{P}(B).$$
There is only one thing that I don't understand:
Why is $\mathbb{E}[(X-\mu)^2]\ge \mathbb{E}((X-\mu)^2\mathbb{1_B})$. How can we explain /derive that?
Many different explanations and interpretations are appreciated.
Thanks.
AI: $1_B$ is an indicator random variable. It can take one of two values, $0$ or $1$. So for any non-negative random variable $X$ we have $X1_B \leq X$ with probability one.
Therefore $E(X1_B)\leq E(X)$ for every non-negative random variable $X$ and every event $B$. Applying this with the non-negative random variable $(X-\mu)^2$ in place of $X$ gives exactly the inequality you ask about. |
H: Show relation for integrals
Let $f \in C^{1}([a,b];\mathbb{R})$ with $|f'(x)-f'(y)| \le L |x-y|$ for all $x,y\in[a,b]$;
then we have $|\int_a^b f(x) dx -f(\frac{a+b}{2})(b-a)| \le L\frac{(b-a)^3}{4}$.
I am having trouble showing this inequality. The problem is that I need a difference of derivatives of the function in order to use the Lipschitz condition.
AI: Denote $c=\frac{a+b}{2}$. By the mean value theorem, for every $x\in[a,c]$ (resp. $[c,b]$), there exists $t\in[x,c]$ (resp. $[c,x]$) such that
$$f(x)-f(c)=f'(t)(x-c).$$
It follows that
$$|f(x)-f(c)-f'(c)(x-c)|=|(f'(t)-f'(c))(x-c)|\le L|(t-c)(x-c)|\le L(x-c)^2.$$
Integrating the inequality above over $[a,b]$, and noting that $\int_a^b f'(c)(x-c)\,dx=0$ while $\int_a^b L(x-c)^2\,dx=\frac{L(b-a)^3}{12}$, it follows that
$$\big|\int_a^bf(x)dx-f(\frac{a+b}{2})(b-a)\big|\le\frac{L(b-a)^3}{12},$$ which is even stronger than the required bound, since $\frac{1}{12}<\frac{1}{4}$. |
H: How to graph the equation: $y=\frac {x-2}{x+1}$?
the title says it all.
I'm pretty sure this is a hyperbola, but is there an alternative way of doing this besides a table of values?
"Graph the equation $y=\frac {x-2}{x+1}$"
I know that $x$ cannot equal $-1$ but I'm not sure how to carry on from there.
Any help would be appreciated.
If you could provide a step by step explanation, and a picture of the graph itself, that would be great :D!
AI: When you want to draw the graph of a function, you need to check a couple of things:
Check if there are any points in the domain where the function is undefined; you know there's something happening there. For instance, here you saw that when $x=-1$ it is undefined.
Then, $y$ can be equal to $0$, there's no problem with that, and you should find all the $x$ such that $y=0$. In this case, when $x=2$, $y=0$. Thus you know that the function goes through the point $(2,0)$.
Now that you have all the points where something special is happening, you can check where the function is positive and where it is negative. You need to look at all the intervals between the special points. In your case: $(-\infty,-1),\,(-1,2),\,(2,\infty)$. Take $(-\infty,-1)$ for example. On this interval, you know that the numerator is always negative and the denominator is always negative as well, so the value of $y$ for $x\in(-\infty,-1)$ will always be positive. Do that for all the intervals.
Finally, you need to check some limits to see what happens at those special points and also as you tend to infinity (in your case only $-1$ is a special point, because you already know that the value of your function at $x=2$ is $0$). So you should check:
$$\lim_{x\rightarrow\infty}f(x),\; \lim_{x\rightarrow-\infty}f(x),\; \lim_{x\rightarrow-1^+}f(x),\; \lim_{x\rightarrow-1^-}f(x).$$
That's it, you have everything you need to graph your function! (If you want to go into further details you can also check the derivative for extrema and the second derivative for inflexion points. This means that you have to compute the derivatives and check for which values of $x$ they are equal to $0$.)
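And since you asked for a picture: the quickest substitute is to plot it yourself. Here's a minimal sketch (using Python with matplotlib, which is my assumed choice of tool, not part of the question):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-8, 8, 2000)
y = (x - 2) / (x + 1)
y[np.abs(x + 1) < 0.05] = np.nan   # gap the curve at the vertical asymptote x = -1

plt.plot(x, y)
plt.axvline(-1, linestyle="--")    # vertical asymptote x = -1 (function undefined)
plt.axhline(1, linestyle="--")     # horizontal asymptote y = 1 (both limits at infinity)
plt.scatter([2], [0])              # the zero found above: (2, 0)
plt.ylim(-10, 10)
plt.show()
```

The picture shows exactly the hyperbola you expected: two branches separated by the line $x=-1$, flattening toward $y=1$ far out. |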
H: the domain for $\dfrac{1}{x}\leq\dfrac{1}{2}$
What is the domain for $$\dfrac{1}{x}\leq\dfrac{1}{2}$$
According to the rules for taking reciprocals, $A\leq B \Leftrightarrow \dfrac{1}{A}\geq \dfrac{1}{B}$, the domain should be simply $$x\geq2$$
However, negative numbers less than $-2$ also satisfy the original inequality. What am I missing in my understanding?
AI: The equivalence
$$A\leq B\iff \frac{1}{A}\geq\frac{1}{B}$$
only holds for numbers $A$ and $B$ that have the same sign (i.e., are both positive or both negative). Remember, when $c>0$, we have
$$A\leq B \iff c A\leq c B$$
and when $c<0$, we have
$$A\leq B\iff cA\geq cB.$$
To go from $A\leq B$ to
$$\frac{1}{A} \mathbin{\fbox{$\leq$ or $\geq$}} \frac{1}{B}$$
you'll multiply both sides by $c=\frac{1}{AB}$. This $c$ is greater than $0$ when $AB>0$, which happens when $A$ and $B$ have the same sign, and it is less than $0$ when $AB<0$, which happens when $A$ and $B$ have opposite signs.
Thus,
$$\frac{1}{x}\leq\frac{1}{2}\iff\begin{cases} x\geq 2 &\text{ if }x\text{ has the same sign as 2},\\
x\leq 2 &\text{ if }x\text{ has the opposite sign as 2}.
\end{cases}$$
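Concretely, with $B=2>0$: for $x>0$ (same sign) the inequality is equivalent to $x\geq 2$, while for $x<0$ (opposite sign) the condition $x\leq 2$ is automatic, and indeed $\frac1x<0\leq\frac12$ always holds there. The full solution set is therefore
$$x<0\quad\text{or}\quad x\geq 2,$$
which accounts for the negative numbers you noticed. |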
H: Compute value of $\pi$ up to 8 digits
I am quite lost on how to approximate the value of $\pi$ up to 8 digits with a confidence of 99% using Monte Carlo. I think this requires a large number of trials, but how can I know how many trials?
I know that a 99% confidence interval is 3 standard deviations away from the mean in a normal distribution. From the central limit theorem the standard deviation of the sample mean (or standard error) is proportional to the standard deviation of the population $\sigma_{\bar X} = \frac{\sigma}{\sqrt{n}}$
So I have something that relates the size of the sample (i.e. number of trials) with the standard deviation, but then I don't know how to proceed from here. How does the "8 digit precision" comes into play?
UPDATE
Ok, I think I am close to understanding it. From the CLT we have $\displaystyle \sigma_{M} = \frac{\sigma}{\sqrt{N}}$, so in this case $\sigma = \sqrt{p(1-p)}$ and therefore $\displaystyle \sigma_{M} = \frac{\sqrt{p(1-p)}}{\sqrt{N}}$
Then from the Bernoulli distribution, $\displaystyle \mu = p = \frac{\pi}{4}$ therefore
$$\sigma_{M}=\frac{\sqrt{\pi(4-\pi)}}{\sqrt{N}}$$ but what would be the value of $\sigma_{M}$? and then I have $\pi$ in the formula but is the thing I am trying to approximate so how does this work? and still missing the role of the 8 digit precision.
AI: I may be completely off the mark, but I guess that Monte Carlo here refers to approximating some integral by a random process, and that the integral must be chosen in a way that knowing its value allows us to calculate $\pi$.
Let's guess that the integral we are interested in is
$$
\frac\pi4=\int_0^1\sqrt{1-x^2}\,dx
$$
giving the probability that a random point in the square $[0,1]\times[0,1]$ (so $x$ and $y$ independent and uniformly distributed in $[0,1]$) is also in the unit disk.
Assume that we generate $N$ such points $(x,y)$ and record the number of successes (points in the unit disk) $M$. We know from crude estimates for $\pi$, say $3<\pi<4$, that the success rate $p$ of an individual point is between $3/4$ and $1$. Therefore the standard deviation of $M$ is bounded from above by
$$\sigma(M)=\sqrt{Np(1-p)}<\frac{\sqrt{3N}}4.$$
We approximate
$$
\pi\approx\frac{4M}N.
$$
This has SD $\sigma(\pi)<4\sigma(M)/N=\sqrt{3/N}$.
For 99% confidence we want $3\sigma(\pi)<10^{-8}$ or equivalently $\sqrt{N}>3\sqrt{3}\cdot 10^8$. This suggests that generating $N\approx 27\cdot10^{16}$ random points on the unit square would do the trick :-)
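If you want to see the estimator in action, here is a minimal sketch in Python (with a far more modest $N$ than the $27\cdot10^{16}$ above, so only a few digits come out right):

```python
import random

N = 10**6   # modest sample size; 8-digit accuracy would need N ~ 2.7e17
M = sum(random.random()**2 + random.random()**2 <= 1.0 for _ in range(N))
pi_hat = 4 * M / N                # the estimator pi ~ 4M/N from above
margin = 3 * (3.0 / N) ** 0.5     # 99% margin, using the bound sigma(pi) < sqrt(3/N)
print(f"pi is approximately {pi_hat:.5f} +/- {margin:.5f}")
```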
After this gedankenexperiment I quite appreciate Machin's formula. Even the alternating series for $\arctan 1$ beats this.
So this leaves open the possibility that something completely different was wanted? |
H: The harmonic conjugate of $\Im e^{z^2}$?
It is obvious that $e^{z^2}$ is analytic, right? So the harmonic conjugate of $\Im e^{z^2}$ is $\Re e^{z^2}$, isn't it?
However, the solutions manual I'm consulting gives the answer as $\Im (-ie^{z^2})$, which is not the same function, and I don't understand.
AI: For real-valued functions $u(x,y)$ and $v(x,y)$, we say that $v$ is a harmonic conjugate of $u$ if $u+iv$ is analytic. This is not a symmetric relation between $u$ and $v$. For example, $e^x\sin y$ is a harmonic conjugate of $e^x\cos y$ because $e^x\cos y+ie^x\sin y=e^z$ is analytic, but $e^x\cos y$ is not a harmonic conjugate of $e^x\sin y$ because $e^x\sin y+ie^x\cos y$ is not analytic (check the Cauchy-Riemann equations).
If $v$ is a harmonic conjugate of $u$, meaning that the function $f(z)=u+iv$ is analytic, then $-if(z)=v-iu$, being a constant multiple of $f(z)$, is also analytic; this shows that $-u$ is a harmonic conjugate of $v$, in other words, $-\Re f(z)$ is a harmonic conjugate of $\Im f(z)$.
To repeat one more time: although $\Im f(z)$ is a harmonic conjugate of $\Re f(z)$ (assuming $f(z)$ is analytic), it's $-\Re f(z)$, not $\Re f(z)$, that is a harmonic conjugate of $\Im f(z)$. This is true for any analytic function $f(z)$, including your problem function $f(z)=e^{z^2}$. |
H: Find $p$ if $(x + 3)$ is a factor of $x^3 - x^2 + px + 15$.
I'm just making sure I answered this correctly.
If $(x+3)$ is a factor, then $P(-3)$ would equal $0$, correct?
AI: Yes, that's correct. If $(x+3)$ is a factor, then $P(-3) = 0$ by the Factor Theorem. So
\[P(-3) = (-3)^3-(-3)^2-3p+15 = -27-9-3p+15 = -3p-21 = 0,\]
and so $p = -7$.
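As a quick check: with $p = -7$ the cubic factors as
\[x^3 - x^2 - 7x + 15 = (x+3)(x^2-4x+5),\]
confirming that $(x+3)$ is indeed a factor. |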
H: Evaluate $\int \dfrac{1}{\sqrt{1-x}}\,dx$
Find $$\int \dfrac{1}{\sqrt{1-x}}\,dx$$
I did this and got $\dfrac23(1-x)^{\frac32} + c$.
But an online calculator is telling me it should be $-2(1-x)^{\frac12}$.
Which one is on the money, and if it's not mine, why?
AI: $$
\int \frac{1}{\sqrt{1-x}}~dx=\int(1-x)^{-1/2}~dx
$$
Let $u=1-x$, $du=-dx$, so
$$
\int(1-x)^{-1/2}~dx=-\int u^{-1/2}~du
$$
Add one to the power of $u$, and divide by the new power
$$
\int \frac{1}{\sqrt{1-x}}~dx=-\int u^{-1/2}~du=-\frac{u^{1/2}}{1/2}+c=-2(1-x)^{1/2}+c
$$
What you have done is integrate $\sqrt{1-x}$ by mistake:
$$
\int \sqrt{1-x}~dx=\int (1-x)^{1/2}~dx=-\int u^{1/2}~du=-\frac{2}{3}u^{3/2}+c=-\frac{2}{3}(1-x)^{3/2}+c
$$
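A quick way to double-check any antiderivative is to differentiate it, symbolically if you like. Here's a minimal sketch using sympy (assuming it is available; the check itself is just the chain rule):

```python
import sympy as sp

x = sp.symbols('x')
F = -2 * sp.sqrt(1 - x)                                  # the calculator's answer
print(sp.simplify(sp.diff(F, x) - 1 / sp.sqrt(1 - x)))   # 0: F' equals the integrand

G = sp.Rational(2, 3) * (1 - x)**sp.Rational(3, 2)       # your answer
print(sp.simplify(sp.diff(G, x)))                        # -sqrt(1 - x), not the integrand
```

The second printout is $-\sqrt{1-x}$ rather than $\frac{1}{\sqrt{1-x}}$, which pinpoints the mistake described above. |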
H: Find the minimum values of $a,b,c,d,e,f$ that satisfy following equations
${ a }^{ 2 }+{ b }^{ 2 }={ c }^{ 2 }\\ { a }^{ 2 }+{ \left( b+c \right) }^{ 2 }={ d }^{ 2 }\\ { a }^{ 2 }+{ \left( b+c+d \right) }^{ 2 }={ e }^{ 2 }\\ { a }^{ 2 }+{ \left( b+c+d+e \right) }^{ 2 }={ f }^{ 2 }$
where $a,b,c,d,e,f$ positive integers. Moreover, prove that we can always find positive integer solutions when the number of equations goes to infinity, i.e.
${ a }^{ 2 }+{ b }^{ 2 }={ c }^{ 2 }\\ { a }^{ 2 }+{ \left( b+c \right) }^{ 2 }={ d }^{ 2 }\\ { a }^{ 2 }+{ \left( b+c+d \right) }^{ 2 }={ e }^{ 2 }\\ { a }^{ 2 }+{ \left( b+c+d+e \right) }^{ 2 }={ f }^{ 2 }\\ \vdots $
Note: You probably see this problem for the first time here.
Edit: A small example:
${ 24 }^{ 2 }+{ 7 }^{ 2 }={ 25 }^{ 2 }\\ { 24 }^{ 2 }+{ \left( 25+7 \right) }^{ 2 }={ 40 }^{ 2 }$
AI: We consider the rational solutions first, and then multiply by an integer to make them integers.
Assume $a=1$; then $1+b^2=c^2$, i.e. $(c-b)(c+b)=1$. Let $c+b=t_1$, $c-b=\dfrac{1}{t_1}$; then we get
$$c=\frac{1}{2}(t_1+\frac{1}{t_1}),b=\frac{1}{2}(t_1-\frac{1}{t_1}),$$
we need $1+(b+c)^2=1+t_1^2=d^2$; in the same way, we get
$$d=\frac{1}{2}(t_2+\frac{1}{t_2}),t_1=\frac{1}{2}(t_2-\frac{1}{t_2}),$$
$$e=\frac{1}{2}(t_3+\frac{1}{t_3}),t_2=\frac{1}{2}(t_3-\frac{1}{t_3}),$$
$$f=\frac{1}{2}(t_4+\frac{1}{t_4}),t_3=\frac{1}{2}(t_4-\frac{1}{t_4}),$$
$$\cdots$$
For example, let $t_4=11,$ then $t_3=\frac{60}{11},t_2=\frac{3479}{1320},t_1=\frac{10361041}{9184560},b=\frac{22995028210081}{190323205453920}.$
Then multiplying by $190323205453920$, we get
$a=190323205453920,b=22995028210081,c=191707312997281,d=286914652560962,e=536509581434876,f=1055428684789920.$
You must choose $t_4$ so that $t_3,t_2,t_1,b>0$. |
H: What is the professional term for the outcomes of selecting n out of the total m elements?
I know the number of combinations is called ${}_nC_r$, but what about all the exact outcomes?
For example: I have $3$ elements $a,b,c$ and for the parameter $2$, I will have outcomes
$$ab,\quad ac,\quad ba,\quad bc,\quad ca,\quad cb$$
I want to search for different implementations of this, but I don't know what term I should input into Google.
AI: If I have a set $X$ with $n$ elements, and I wanted to refer one of the possible "outcomes" of choosing $k$ elements (without the order mattering), I would just call it "a $k$-element subset of $X$". Thus, the number of $k$-element subsets of $X$ is $\binom{n}{k}$ or ${}_nC_k$, depending on your notational preferences. I'm not aware of any more specialized term.
I would use the term "$k$-tuple of elements of $X$" to mean an ordered list of $k$ elements of $X$ (possibly with repetitions). Thus, the number of $k$-tuples of distinct elements of $X$ is ${}_nP_k$, and the number of $k$-tuples of elements of $X$ is $n^k$. |
H: How to Find the Center of a Parallelogram
I want to find the center of a parallelogram in order to use it in my Java program. I have the four coordinates of the parallelogram, and I want to find its center coordinate. It seems I need to find the intersection point of the diagonals, which I couldn't figure out with my poor math skills. Given the four coordinates of the parallelogram, what is the formula for finding its center? I hope the intersection point of the diagonals is the center of the parallelogram; correct me if it's not. If I'm wrong, what I need is that intersection point, not the center. From another viewpoint, it seems the center of the bounding rectangle is the center of the parallelogram.
Any help would be appreciated
AI: Yes, it's the intersection point of the diagonals. In fact, the diagonals bisect one another; so, if $(A,B)$ is one vertex and $(C,D)$ is the opposite one, the center of the parallelogram is just $(\frac{A+C}{2}, \frac{B+D}{2}).$
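Since you mention wanting this for a program, here's a minimal sketch of the formula (written in Python for brevity; porting it to Java is a matter of translating one line of arithmetic):

```python
def parallelogram_center(a, c):
    """Center of a parallelogram, given two OPPOSITE vertices a = (ax, ay)
    and c = (cx, cy). The diagonals bisect each other, so the center is
    the midpoint of either diagonal."""
    return ((a[0] + c[0]) / 2.0, (a[1] + c[1]) / 2.0)

# Example: the parallelogram (0,0), (4,1), (5,4), (1,3).
# Opposite pairs are ((0,0),(5,4)) and ((4,1),(1,3)); both give (2.5, 2.0).
print(parallelogram_center((0, 0), (5, 4)))
print(parallelogram_center((4, 1), (1, 3)))
```

One caveat for your program: the formula needs opposite vertices. If the four points arrive in an unknown order, computing the midpoints of the candidate pairings and seeing which two agree tells you which vertices are opposite. |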
H: If a number is a square modulo $n$, then it is also a square modulo any of $n$'s factors
Say we have $a \equiv x^2 \bmod n$. How would we prove that this implies:
$$\forall \text{ prime }p_i\text{ such that }\,p_i\mid n,\;\exists y\,\text{ such that }\, a \equiv y^2 \bmod p_i$$
AI: If $n$ is a natural number and $a$ and $b$ are integers, when we say $$a\equiv b\bmod n,$$ we just mean that $n$ divides $b-a$. This clearly implies that any factor of $n$ also divides $b-a$, so that if $m$ is any factor of $n$, we also have $a\equiv b\bmod m$.
Thus, if we start with our integer $a$, and there exists an integer $x$ such that
$a\equiv x^2\bmod n$, we must also have $a\equiv x^2\bmod m$ for any number $m$ that divides $n$, and in particular, $a\equiv x^2\bmod p$ for any prime factor $p$ of $n$. |
H: Show $\|f\|_p\leq \liminf\|f_n\|_p$
$\Omega$ is a bounded domain of $\mathbb R^n$. If $\{f_n\}\subset L^p(\Omega)$ and $f_n\rightarrow f\in L^p(\Omega)$ weakly, then
$$\|f\|_p\leq \liminf_{n\rightarrow\infty}\|f_n\|_p$$
AI: You are asking why $$\left(\int |f|^p\right)^{1/p} \leq \liminf_{n\rightarrow \infty} \left(\int {|f_n|^p}\right)^{1/p},$$
so it is enough to prove that $\left(\int |f|^p\right)\leq \liminf_{n\rightarrow \infty} \left(\int {|f_n|^p}\right)$. Now if, in addition, $f_n \to f$ pointwise a.e. (this is the case the argument below covers; it does not follow from weak convergence alone), then $|f_n|^p \to |f|^p$ pointwise a.e. Thus
$$\begin{eqnarray*} \int |f|^p &=& \int \lim_{n \rightarrow \infty} |f_n|^p \\
&=&\int \liminf_{n \rightarrow \infty} |f_n|^p \\
&\leq& \liminf_{n \rightarrow \infty} \int |f_n|^p \end{eqnarray*}$$
where the last step was made using Fatou's lemma. Q.E.D. |
H: Find the antiderivative of $\sqrt{3x-1} dx$
Find the antiderivative of $\sqrt{3x-1} dx$.
I got $\frac{2}{3}(3x-1)^{3/2}+c$ but my book is saying $\frac{2}{9}(3x-1)^{3/2}+c$
Can some one please tell me where the $2/9$ comes from?
AI: $$
\int \sqrt{3x-1}~dx=\int(3x-1)^{1/2}~dx
$$
Let $u=3x-1$, $du=3~dx$, so
$$
\int(3x-1)^{1/2}~dx=\frac{1}{3}\int u^{1/2}~du
$$
Add one to the power of $u$, and divide by the new power
$$
\int \sqrt{3x-1}~dx=\frac{1}{3}\int u^{1/2}~du=\frac{1/3}{3/2}u^{3/2}+c=\frac{2}{9}(3x-1)^{3/2}+c
$$
Your problem is that you need to take into consideration the derivative of $3x-1$, which means you need to divide by $3$, giving you the factor of $\frac{1}{3}$ missing from your answer.
As was suggested in the comments of your previous question, if you differentiate the answer that you got with the chain rule, then you'll see why you've made a mistake
$$
\frac{d}{dx}\left(\frac{2}{3}(3x-1)^{3/2}\right)=\frac{3}{2}\cdot\frac{2}{3}(3x-1)^{1/2}\underbrace{\frac{d}{dx}(3x-1)}_{=3}=3\sqrt{3x-1}
$$ |
H: How to find inverse of the function $f(x)=\sin(x)\ln(x)$
My friend asked me to solve it, but I can't.
If $f(x)=\sin(x)\ln(x)$, what is $f^{-1}(x)$?
I have no idea how to find the solution. I try to find
$$\frac{dx}{dy}=\frac{1}{\frac{\sin(x)}{x}+\ln(x)\cos(x)}$$
and try to solve it for $x$ by some replacing and other things, but I failed.
Can anyone help? Thanks to all.
AI: The function fails the horizontal line test, and very badly in fact. One-to-one means that for any $x$ and $y$ in the domain of the function, $f(x) = f(y) \Rightarrow x = y$; that is, each point in the range comes from a unique point in the domain. Many functions can be made one-to-one $(1-1)$ by restricting the interval over which values are taken, for example the inverse trig functions and the square root function (any even root). We typically take only the positive square root because otherwise the "function" would have two answers and each $x$ wouldn't have a unique $y$ value (the so-called vertical line test; this is also why, generally, functions that fail the horizontal line test don't have an inverse). If you graph this function, it looks very much like a sinusoid of growing amplitude, and it cannot be restricted in any canonical way that makes a single inverse definable. |
H: 3rd grade exercise: "make your own turning pattern"
My 8 year old has been given a worksheet of numeric sequences, e.g. "what are the next numbers in the sequence 11, 12, 14, 15, 17, ..." and "make your own number pattern" and "make your own colour repeating pattern". I've had no problem helping her with these.
The final exercise is "Make your own turning pattern". No other explanation is given.
Apart from sending her back to her teacher to ask "what's a "turning pattern"?", what would you say this is?
AI: My guess would be they want something like this (as examples): the "turning pattern"
$$\underbrace{\fbox{$\,\uparrow\,\strut$}\;\fbox{$\rightarrow\strut$}\;\fbox{$\,\downarrow\,\strut$}\;\fbox{$\rightarrow\strut$}}_{\large\mathtt{part\, that\, repeats}}\;\fbox{$\,\uparrow\,\strut$}\;\fbox{$\rightarrow\strut$}\;\fbox{$\,\downarrow\,\strut$}\;\fbox{$\rightarrow\strut$}\;\cdots$$
corresponds to walking in a snaking line, and
$$\underbrace{\fbox{$\,\uparrow\,\strut$}\;\fbox{$\,\uparrow\,\strut$}\;\fbox{$\rightarrow\strut$}}_{\substack{\large\mathtt{part\, that}\\ \large\mathtt{repeats}}}\;\fbox{$\,\uparrow\,\strut$}\;\fbox{$\,\uparrow\,\strut$}\;\fbox{$\rightarrow\strut$}\;\cdots$$
corresponds to repeatedly moving like a knight can in chess.
It's worth pointing out that "guess what comes next" exercises like
Find the next numbers in the sequence: 11, 12, 14, 15, 17, ...
are really not very good problems (in my opinion). All they judge is whether you can guess what the teacher is thinking of; they do not ask a real mathematical problem. This is because any finite sequence can be extended in literally any way you want; there is always a justification that can be made for it. There is no answer for what the sequence "should" be.
There is a rigorous sense in which what I've said above is true, but sometimes it can be a little hard to get one's head around for someone not mathematically inclined; it might be easier to note that there is a similar problem with analogies. If you were given an analogy to complete,
$\mathsf{grass} : \mathsf{green}\, ::\, \mathsf{ocean} : \mathord{?}$
you can guess that the answer should be "blue", because the teacher is likely thinking of the relation "grass is green", but there is just as much reason to say any other color. For example, "orange" works, because for all we know, the intended relationship between green and grass is "they start with the same letter" (of course, other colors may require more convoluted explanations). |
H: How to show that $f : {}^\ast \Bbb R \to {}^\ast \Bbb R$ is bounded, if it obtains a limited value everywhere?
Let $f : {}^\ast \Bbb R \to {}^\ast \Bbb R$. For all $x \in {}^\ast \Bbb R $, there exists $y \in \Bbb R $ such that $f(x) \leq y$. How to show that there exists $r \in \Bbb R$, such that for all $z \in {}^\ast \Bbb R$ we have $f(z) \leq r$?
I know this can be shown by a direct application of the dual form of idealization principle. Is there any other way to show this, say, using ultrapower construction?
Moreover, if $f$ is continuous, is it true that $f$ necessarily obtains a maximum value?
AI: Our goal is to prove the statement
$$ (\forall z \in {}^{*}\Bbb{R} ) (\exists r \in \Bbb{R}) ( f(z) \leq r ) \quad \Longrightarrow \quad (\exists r \in \Bbb{R})(\forall z\in {}^{*} \Bbb{R})( f(z) \leq r). $$
We prove the contrapositive:
$$ (\forall r \in \Bbb{R})(\exists z\in {}^{*} \Bbb{R})( f(z) > r) \quad \Longrightarrow \quad (\exists z \in {}^{*}\Bbb{R} ) (\forall r \in \Bbb{R}) ( f(z) > r ). $$
Assume that for any $r \in \Bbb{R}$, there exists $z \in {}^{*}\Bbb{R} $ such that $f(z) > r$. For each fixed $r \in \Bbb{R}$, applying the transfer principle, we have
$$(\exists z \in {}^{*}\Bbb{R})( f(z) > r ) \quad \Longrightarrow \quad (\exists z \in \Bbb{R})( f(z) > r ). $$
Thus we have
$$ (\forall r \in \Bbb{R})(\exists z \in \Bbb{R})( f(z) > r ). $$
Passing to the transfer principle, we obtain
$$ (\forall r \in {}^{*}\Bbb{R})(\exists z \in {}^{*}\Bbb{R})( f(z) > r ). $$
In particular, for a given infinitely large $R \in {}^{*}\Bbb{R}$, we have $f(z) > R$ for some $z \in {}^{*}\Bbb{R}$. This can be rephrased as
$$ (\exists z \in {}^{*}\Bbb{R} ) (\forall r \in \Bbb{R}) ( f(z) > r ). $$
This is exactly what we wanted, and hence the proof is completed.
To show that $f$ may fail to achieve its maximum, we consider the function
$$ f(x) = \frac{|x| + 1}{|x| + 2}. $$
It is clear that for all $x \in \Bbb{R}$ we have $f(x) < 1$ while for each $r < 1$ there exists $x \in \Bbb{R}$ such that $f(x) > r$. This can be transferred into the corresponding statement in the hyperreal field, and thus $f(x)$ has no maximum on $\Bbb{R}^{*}$. |
H: Logarithm question involving different base
Calculate the values of $z$ for which $\log_3 z = 4\log_z3$.
AI: Hint
$$\log_3 z = 4\log_z3 \Rightarrow \dfrac{\log z}{\log 3}=4\dfrac{\log 3}{\log z}$$
Answer (don't look until you try the hint)
$$\color{lightgrey}{\dfrac{\log z}{\log 3}=4\dfrac{\log 3}{\log z}\Rightarrow (\log z )^2=4(\log 3)^2 \Rightarrow \log z =\pm 2\log 3 \Rightarrow \log z =\log 3^{\pm 2} \Rightarrow z = 3^{\pm 2}}$$
$\color{lightgrey}{\text{Therefore} \ z=9 \ \text{or} \ z=\frac{1}{9}.}$ |
H: Is my solution correct? Generating functions question: How many non-negative solutions does the equation $x_1+x_2+x_3+x_4+x_5+x_6=12$ have?
So we began studying this subject, and I tried solving this question: how many non-negative whole ($\in \Bbb Z$) solutions does the equation $x_1+x_2+x_3+x_4+x_5+x_6=12$ have?
I would like to know if my solution is correct.
For non negative solutions it means $0\leq x_1, x_2, x_3, x_4, x_5, x_6$, so I built the next function:
$A(x)=(1+x^1+x^2+x^3+...)^6$, and due to the formula of the sum of an infinite geometric sequence:
$A(x)= {1 \over (1-x)^6}= \sum_{n=0}^\infty {n+6-1 \choose 6-1}x^n$.
Since we need the sum to be 12, we need the coefficient of $x^{12}$, so I set $n=12$ and got the number of solutions to be 6188.
Now this seemed a bit weird to me. Is it so?
Thanks for any input!
AI: Your method of solving this is correct. It comes down basically to remembering the formula for the coefficients of $(1-x)^{-n}$. If you just call the coefficient of $x^k$ in that power series $(-1)^k\binom{-n}k$ (after all the coefficient of $x^k$ in $(1-x)^m$ is $(-1)^k\binom mk$ for $m\geq0$ too), then these new coefficients continue to satisfy
$$
\binom nk=\frac{n\times(n-1)\times\cdots\times(n-k+2)\times(n-k+1)}
{k\times(k-1)\times\cdots\times2\times1}
\qquad\text{for all $n$,}
$$
and therefore in particular
$$
\binom {-n}k=\frac{-n\times(-n-1)\times\cdots\times(-n-k+2)\times(-n-k+1)}
{k\times(k-1)\times\cdots\times2\times1}
=(-1)^k\frac{n\times(n+1)\times\cdots\times(n+k-2)\times(n+k-1)}
{k\times(k-1)\times\cdots\times2\times1}
=(-1)^k\binom {k+n-1}k.
$$
Therefore $(-1)^k\binom{-n}k=\binom {k+n-1}k=\binom {k+n-1}{n-1}$, which is (probably) your formula for the coefficients. (Be warned that although the first transformation is always valid, the symmetry of binomial coefficients used in the second step is only valid if the top index, here $k+n-1$, is non-negative! So there was no possibility of applying symmetry before the first transformation.)
What I'm saying is basically that you can remember the identity, for all $n$ and for $k\in\mathbf N$, $\binom {-n}k=(-1)^k\binom {k+n-1}k$ (or if you prefer it in the form $\binom nk=(-1)^k\binom {k-n-1}k$) instead of explicitly remembering the formula for the coefficients of $(1-x)^n$. (The first identity, and more generally the use of binomial coefficients with "strange" upper indices, is a very useful tool, but there is no harm in remembering the latter as well, as it comes in handy rather often.)
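As for the number you got: with $n=12$ the formula gives
$$
\binom{12+6-1}{6-1}=\binom{17}{5}=\frac{17\times16\times15\times14\times13}{5\times4\times3\times2\times1}=\frac{742560}{120}=6188,
$$
so $6188$ is indeed correct; such counts just grow quickly. |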
H: Uniformly regular measure "Babiker"
A regular Borel (Radon) probability measure $\mu$ on a compact Hausdorff space $X$ is called uniformly regular if:
There is a countable family $\mathcal{A}$ of compact $G_\delta$-subsets of $X$ such that for every open set $U\subseteq X$ and every $\epsilon >0$, there is $A\in\mathcal{A}$ such that $A\subseteq U$ and $\mu(U\setminus A)<\epsilon$,
or equivalently,
There is a sequence $\{U_n\}$ of open subsets of $X$ such that $\mu(K)=\displaystyle\lim_{n\to\infty}\mu(U_n)$ for every compact set $K\subseteq X$ and $K\subseteq U$.
I have seen it said in many papers:
"Clearly" every measure $\mu$ on a compact metric space is uniformly regular.
I do not know how to prove this fact; any idea is appreciated!
AI: In a metric space, closed sets are $G_\delta$ and there is $(U_k,k\in\Bbb N)$ a countable basis of open sets. For each $k$, consider $(K_{k,j},j\in\Bbb N)$ a sequence of compact subsets of $U_k$ such that $\mu(U_k\setminus K_{k,j})<2^{-k}j^{-1}$.
It is possible since in this context an open set is a countable union of closed sets.
Let $U$ open and $\varepsilon>0$. Then $U=\bigcup_{k\in I}U_k$ for some $I\subset\Bbb N$. There is $N$ such that $\mu(U\setminus\bigcup_{k\in I\cap [N]}U_k)<\varepsilon$. Now for $k\in [N]$, take $j$ such that $j^{-1}<\varepsilon$. Then $K:=\bigcup_{k=1}^NK_{k,j}$ gives what we want. |
H: Uniform distribution on the n-sphere.
I have the following random vector:
$$\underline{W}=\frac{\underline{X}}{\frac{||\underline{X}||}{\sqrt{n}}}$$
where the $X_i \sim N(0,1)$ are independent.
It's a random vector, and I want to show that it has a uniform distribution on the $n$-sphere with radius $\sqrt{n}$.
I understand that it has this radius (just calculate it). But I don't see how calculating the CDF gets me to the uniform distribution.
Thanks in advance, MP.
AI: You can prove that the $n$-dimensional Gaussian is invariant under transformation by $T$ for any orthogonal matrix $T$. (This is well known)
For orthogonal $T$ we have $\|TX\|=\|X\|$ hence $T\underline W = \sqrt n\frac{TX}{\|TX\|}$. Therefore the distribution of $W$ is also invariant under transformation by $T$
So the distribution of $W$ is invariant under any isometry of the sphere and
the uniform distribution is the only distribution on the sphere that satisfies this condition. |
H: The negative square root of $-1$ as the value of $i$
I have a small point to be clarified. We all know $ i^2 = -1 $ when we define complex numbers, and we usually take the positive square root of $-1$ as the value of "$i$", i.e., $i = (-1)^{1/2} $.
I guess it's just a convention that has been accepted in maths, and the value $i = -[(-1)^{1/2}] $ is neglected, as I have never seen this value of "$i$" being used. What I wanted to know is: if we use $i = -[(-1)^{1/2}] $ instead of $ (-1)^{1/2} $, would we be doing anything wrong? My guess is there is nothing wrong with it as far as the fundamentals of maths go. Just wanted to clarify it with you guys. Thanks.
AI: ... and we usually take the positive square root of −1 as the value of "$i$".
There's no such thing as a positive or negative complex number. By treating $\mathbb{R}$ as a subset of $\mathbb{C}$ you can call some complex numbers positive or negative, but only those which actually are real numbers. There's no way to define a total order on $\mathbb{C}$ which would behave like the usual order on $\mathbb{R}$ does.
What happens is that one simply picks any root of $-1$, i.e. any solution of $x^2 + 1 = 0$, and calls it $i$. There's then always a second solution, and that solution is $-i$. You can't even say which solution you picked for $i$ - the two solutions of $x^2 + 1=0$ are algebraically indistinguishable, i.e. you cannot tell them apart with algebraic means.
Imagine a friend hands you a bag containing two balls, of equal size and material. You can pick one arbitrarily and call it "ball 1", and take a pen and mark it with a big "1". The other is then "ball 2", and gets marked with a big "2". Now, imagine you had picked the other ball. Would you end up in a different situation? No! You'd still have two balls, one marked "1" and one marked "2", and indistinguishable otherwise. Now, after you've marked the balls, they are of course different. You can now for example put both back into the bag, let your friend pick one, and ask "Which ball have you picked?". But that only works after you marked them! |
H: having trouble intuiting analyticity
My textbook seems to suggest that the analytic functions are precisely the functions that can be written in terms of $z$ alone (no $x$ or $y$ or conjugate-$z$).
Am I inferring correctly?
Does this mean that $\sin (z+x)$ is not analytic? [where $z=x+iy$]
AI: More precisely, an analytic function can be expressed locally in terms of its power series. Here one must distinguish between two related notions: real analyticity and complex analyticity. Thus, the function you mentioned is real analytic (of two variables), but is not complex-analytic as a function of $z$. |
H: Multiplication properties in rings of matrices
Let $R$ be an arbitrary ring and let $M_n(R)~~(n>1)~$ be the ring of all $n \times n$ matrices with elements from $R$, with the usual matrix addition and multiplication.
1) Is it right that there are zero divisors in $M_n(R)$ iff $R$ is non-trivial?
2) Which necessary and sufficient conditions on $R$ do we need for $M_n(R)$ to be non-commutative?
3) Suppose $R$ has no unity. Is it right that this condition is sufficient for each element of $M_n(R)$ to be a zero divisor? If not, how can we construct an example of a matrix that isn't a zero divisor?
Thanks in advance.
EDIT
By zero-divisors I mean NON-ZERO elements.
AI: Perhaps the meaning of (1) is that there are non-trivial zero divisors...etc.? But
$$\begin{pmatrix}0&0&\ldots&0&r\\0&0&\ldots&0&0\\\vdots&\vdots&&\vdots&\vdots\\0&0&\ldots&0&0\end{pmatrix}$$
is a nilpotent matrix, and thus it is a zero divisor (which is non zero if $\,0\ne r\in R\,$)
Also
$$\begin{pmatrix}0&r\\0&0\end{pmatrix}\begin{pmatrix}s&0\\0&0\end{pmatrix}\neq \begin{pmatrix}s&0\\0&0\end{pmatrix}\begin{pmatrix}0&r\\0&0\end{pmatrix}$$
whenever $\;sr\neq 0\;$. So, assuming the ring $\,R\,$ is non-trivial and has non-trivial multiplication (unlike, say, the zero multiplication we can define on any abelian group by declaring every product to be zero), for $\,n>1\,$ the ring $M_n(R)$ won't be commutative.
The last part: $\,2\Bbb Z\,$ is a non-unitary ring with no zero divisors at all, so in $M_n(2\Bbb Z)$ a matrix such as $2I$ is not a zero divisor; the absence of a unity is therefore not sufficient. |
H: Geometric question?
First of all, is it geometry?
Image of the sketch:
I need help solving this question, and I am completely lost on how to solve it.
Could anyone explain the way of solving this geometry question?
Here is a sketch of the two lines I and II,
with 3 formulas (a), (b), (c):
(a) $y = 2x + 8$,
(b) $y = - 2x + 8$,
(c) $y = x + 2$.
A. For each of the lines I and II, find the matching formula among (a), (b), (c), and explain your answer.
B. Find the point of intersection of the two lines.
C. Find the formula of the line that passes through the point (5, 2) and is parallel to line II.
How do I do this?
This is a translated version.
AI: Hints: for the straight line $\,y=ax+b\,$:
$\bullet\;\;$ The line above is ascending iff $\,a>0\,$ , and it is constant iff $\,a=0\,$ ;
$\bullet\;\;$ the $\,y$-intersection of the above line is the point $\,(0,b)\,$ ;
$\bullet\;\;$ The line above intercepts the line $\,y=mx +n\,$ at the point $\,(x_0,y_0)\,$ which is the solution of the linear system
$$\begin{cases}y=ax+b\\{}\\y=mx+n\end{cases}$$
How to solve: compare $\,y$-s:
$$ax+b=mx+n\implies(a-m)x=n-b\implies \,\text{if}\;a\neq m\;,\;x=\frac{n-b}{a-m}$$
If $\,a=m\,$ the lines are parallel, and thus they are the very same line or else they have no common point. |
H: How do I show that the degree $n$ Taylor polynomials of $f$ about two points are equal?
Question
Suppose that $f(x)$ is a polynomial of degree $d$, and that $n \ge d$. Let $x_0 \neq x_1$. Show that the degree $n$ Taylor polynomials of $f$ about $x_0$ and $x_1$ are equal.
Attempt
Let the polynomial be $f(x) = \sum_{k=0}^{d}a_kx^k$. Then the Taylor polynomial of $f$ about $x_0$ is $$P_n(x) = \sum_{k=0}^{n} \dfrac{f^{(k)}(x_0)(x-x_0)^k}{k!}$$
and the Taylor polynomial of $f$ about $x_1$ is $$Q_n(x) = \sum_{k=0}^{n} \dfrac{f^{(k)}(x_1)(x-x_1)^k}{k!}.$$
For them to be equal, I think it would have to be the case that $P_n(x) - Q_n(x)= 0$
I tried with a simple example. Let $d=1$ and $n=3$. Then
$$f(x) = \sum_{k=0}^{1}a_kx^k = a_0 + a_1x ,$$
$$P_3(x) = \sum_{k=0}^{3} \dfrac{f^{(k)}(x_0)(x-x_0)^k}{k!} = a_0 + a_1(x-x_0) ,$$
$$Q_3(x) = \sum_{k=0}^{3} \dfrac{f^{(k)}(x_1)(x-x_1)^k}{k!} = a_0 + a_1(x-x_1) .$$
However, $P_n(x) - Q_n(x)= 0 \Rightarrow x_0 = x_1$ or $a_1 = 0$. Where did I go wrong?
Thank you for your time.
AI: The $k=0$ term in $P_3(x)$ should be $f(x_0)$, which is $a_0+a_1x_0$, not $a_0$. Similar remarks apply to $Q_3(x)$.
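Indeed, with that correction everything collapses: $$P_3(x)=(a_0+a_1x_0)+a_1(x-x_0)=a_0+a_1x=f(x),$$ and likewise $Q_3(x)=f(x)$, so $P_3=Q_3$, exactly as the general statement predicts. |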
H: How to show this matrix is invertible?
Let $f:H \times H \to \mathbb{R}$ be a mapping with $H$ a Hilbert space.
Let $A$ be a matrix with entries $a_{ij}=f(b_i, b_j)$ with
$$a_{ii}=f(b_i, b_i) \geq C\lVert b_i\rVert_{H}^2.$$
Suppose $b_i \neq b_j$ and $(b_i, b_j)_H = 0$ for $i \neq j$.
How do I show that $A$ is invertible?
AI: If $f(u,v)$ is given by a scalar product $(Bu,v)_H$, with $B\in\mathcal L(H,H)$ a symmetric continuous linear operator, then $B$ is positive definite (because $f$ is coercive). If the $b_j$ are linearly independent, then the matrix $A$ is a metric tensor on $\text{span} \{b_j\}$ and it should be invertible.
Edit: I'll expand a little on this case. Suppose my hypothesis is true and $a_{i,j}=(Bb_i,b_j)_H$, $i,j=1..n$. Suppose that $A$ is singular; then there exists $u\in\mathbb R^n$, $u\neq 0$, such that $(Au,u)_{\mathbb R^n} =0 $, but
$\displaystyle (Au,u)_{\mathbb R^n} =\sum_i\sum_j (Bb_i,b_j)_Hu_i u_j = \left( B\left(\sum_i b_i u_i\right),\left(\sum_j b_j u_j\right)\right)_H \ge C\left\|\sum_i b_i u_i\right\|_H^2>0$, where the last inequality holds because the $b_i$ are orthogonal and nonzero, so $\sum_i b_i u_i\neq 0$. Hence $A$ is invertible. As is easy to see, this proof relies heavily on the fact that $f$ is given by a scalar product. |
H: Why is Lie derivative smooth?
Let $G$ be a linear Lie group, say $\mathrm{SL}(2,\mathbb{R})$. Suppose $X\in\mathfrak{g}$ and $f:G\to\mathbb{R}$ is smooth. The Lie derivative of $f$ with respect to $X$ is the function $\mathcal{L}_X f:G\to\mathbb{R}$ defined as $$\mathcal{L}_X f(y):=\left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0}f\left(ye^{tX}\right).$$ Why is $\mathcal{L}_X f$ smooth? In his book on $\mathrm{SL}(2,\mathbb{R})$, Lang just says so without any comment (p. 90), but I failed to provide a rigorous justification up to now. A reference is also welcome.
AI: Because the map $$\phi:G\times\Bbb R\to G,~(g,t)\mapsto g\exp(tX)$$ is smooth, as it is obtained from the smooth map $\Bbb R\to G,~t\mapsto \exp(tX)$ and the smooth map $\mu:G\times G\to G,~(g,g')\mapsto gg'$. The Lie derivative of $f$ is the derivative with respect to $t$ of the smooth map $$f\circ\phi:G\times\Bbb R\to\Bbb R,~(g,t)\mapsto f(\phi(g,t))$$ and it is therefore smooth.
To be $100\%$ precise (and pretty pedantic), it is obtained as the composite
$$\begin{array}{c}
G&\to &T(G\times\Bbb R)&\to& T\Bbb R\simeq \Bbb R\times\Bbb R&\to&\Bbb R\\
g&\mapsto& {\frac{\partial}{\partial t}}\bigg|_{(g,0)}&\mapsto & T_{(g,0)}(f\circ\phi)\bigg({\frac{\partial}{\partial t}}\bigg|_{(g,0)}\bigg)= \underbrace{\lambda}_{=\mathcal L_Xf(g)}\frac{\partial}{\partial t}\bigg|_{f(g)}&\mapsto&\lambda
\end{array}$$ |
H: Eigenvectors and differential equations
I was able to find part (a): I got $4$ and $-1$ for the eigenvalues, and from these values I got eigenvectors $[1,1]$ and $[-3,2]$, but I don't know what to do for parts (b) and (c).
AI: Let ${\bf y}=(x,y)$. Then the coupled system can be written as $${\bf y}'=A{\bf y}$$ The general solution is $${\bf y}=c_1e^{4t}(1,1)+c_2e^{-t}(-3,2)$$ assuming that you did the eigenvector/eigenvalue calculations correctly. |
H: Conditional joint probability and independence
Let's have a joint probability of three events, $\mathbf{P}(X,A,B)$. If $\mathbf{P}(X|A) = \mathbf{P}(X)$, can we show that $\mathbf{P}(X|A,B) = \mathbf{P}(X|B)$? If so, how?
AI: Toss a fair coin twice, and let $A$ be the event that the first toss is a head and $B$ the event that the second toss is a head.
Now let $X$ be the event that the result of the two tosses is the same.
$P(X|A) = P(X|B)=P(X)=\dfrac 12$ but $P(X|A,B)=1$. |
H: What is the idea behind sheaves of rings on distinguished open sets
In Mumford's book on algebraic geometry (which can be found here), he says:
We want to enlarge the ring $R$ into a whole sheaf of rings on $SpecR$, written $\mathcal{O}_{SpecR}=\mathcal{O}_{X}$.
So he needs to define $\mathcal{O}_{X}(U)$ for $U$ an open set in $X$; in particular he needs to define $\mathcal{O}_{X}(X_f)$ for a distinguished open subset $X_f$. And he says:
We want to define : $\mathcal{O}_{X}(X_f)=R_f$=localization of the ring $R$ w.r.t multiplicative system $\lbrace 1,f,f^2,..\rbrace$
My question is: why do we want to define the sheaf $\mathcal{O}_{X}(X_f)$ to be $R_f$?
Back to the case of an affine variety $V$: $V\setminus V(f)$ is a quasi-affine variety with ring of regular functions $k[V]_f$. So can we think of the sections of $\mathcal{O}_{X}(X_f)$ as regular functions on some space?
AI: You can think of $X_f$ as the the set of those points where $f$ does not vanish (this is true verbatim in the case of classical varieties, but also for schemes when you use the structure sheaf and think of $f$ as a function valued in the residue fields). Thus $f$ should be invertible on $X_f$, and therefore $r/f^k$ should be a regular function on $X_f$, for every regular function $r$ on $X$. There is no reason to impose more regular functions. So we just define $\mathcal{O}(X_f) := R_f$.
In the end, it is just a definition, and when you work with it you will see that it is the right one. You have already mentioned that this fits perfectly to the case of classical varieties. |
H: Given $x,y,z\geq0$ and $x^2+y^2+z^2+x+2y+3z=13/4$. Find the minimum of $x+y+z$.
Given $x,y,z\geq0$ and $x^2+y^2+z^2+x+2y+3z=13/4$. Find the minimum of $x+y+z$.
I tried many methods, such as AM-GM, but all of them failed.
Thank you.
AI: use this
$$(x+y+z)^2+3(x+y+z)\ge x^2+y^2+z^2+x+2y+3z$$
and let $x+y+z=t$,
then $$t^2+3t\ge \dfrac{13}{4}$$
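A possible way to finish from here: since $t\ge 0$ and $t\mapsto t^2+3t$ is increasing there, $t^2+3t\ge\frac{13}{4}$ forces
$$t\ge\frac{-3+\sqrt{9+13}}{2}=\frac{\sqrt{22}-3}{2}.$$
This value is attained at $(x,y,z)=\left(0,0,\frac{\sqrt{22}-3}{2}\right)$: there both inequalities used above hold with equality, and the constraint reads $z^2+3z=\frac{13}{4}$. So the minimum of $x+y+z$ is $\frac{\sqrt{22}-3}{2}$. |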
H: Series of $\int_0^z \zeta^{-1} \sin \zeta d \zeta$
This is a homework question, so I would appreciate some good hints. I have $f(z) = \int_0^z \zeta^{-1} \sin \zeta \,d \zeta$. Can this be written as a power series in $\mathbb C$ around $z = 0$?
AI: Hint: What is the power series for $\frac{\sin z}{z}$ around $z=0$?
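Spelling out where the hint leads: from $$\frac{\sin\zeta}{\zeta}=\sum_{n=0}^\infty\frac{(-1)^n\,\zeta^{2n}}{(2n+1)!},$$ which converges on all of $\mathbb C$ (the singularity at $\zeta=0$ is removable), termwise integration gives $$f(z)=\sum_{n=0}^\infty\frac{(-1)^n\, z^{2n+1}}{(2n+1)\,(2n+1)!},$$ so yes, $f$ is entire and in particular has a power series around $z=0$. |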
H: Rings with zero divisors
Is there a ring $~R~$ with non-trivial multiplication (i.e. $~\exists a,b\in R ~~~ ab\neq 0$) such that each non-zero element of $~R~$ is a zero-divisor?
AI: The simplest example might be $R=2\mathbb Z/8\mathbb Z$. Then $2\cdot 2\neq 0$ but $a\cdot 4=0$ for all $a\in R$. |
H: Basic concepts in finite fields
I need some help with clearing up some some basic concepts in finite fields.
I understand that $\mathbb{F}_p = GF(p)$ where $p$ is a prime is a finite field, which is isomorphic to $\mathbb{Z}/p\mathbb{Z}$. However, I get quite confused with $GF(p^n)$.
Since $GF(p)$ is a finite field, is $GF(p^n)$ a vector space of dimension $n$ over $GF(p)$ ($n$-tuples of elements from $GF(p)$), or another finite field with more ($p^n$) elements? Can I understand this intuitively, like when $\mathbb{R}$ is the field, $\mathbb{R}^n$ is an $n$-dimensional vector space over $\mathbb{R}$?
Another thing is the $\mathbb{F}_p^n $. Is this the same as $GF(p^n)$? But what's $\mathbb F_{p^n}$ then? Are they just two different notations of the same thing? It seems more logical to me that the first one means something like $GF(p)^n$ but is it something meaningful?
Finally, when someone states "Let $V$ be an $n$-dimensional vector space over a finite field..." - does he explicitly mean $GF(p^n)$ for some $p$ prime or something else?
AI: If $q = p^n$ for some prime $p$, then the notations $\rm{GF}(q)$ and $\mathbb{F}_q$ mean the same, namely the (unique up to isomorphism) field with $q$ elements.
This is indeed a vector space over $\rm{GF}(p)$ of dimension $n$.
$\rm{GF}(p)^n$ and $\mathbb{F}_p^n$ also mean the same, namely the set of $n$-tuples of elements from the field $\rm{GF}(p)$, which is also a vector space of dimension $n$ over $\rm{GF}(p)$.
Since these two vector spaces have the same dimension, they are isomorphic as vector spaces. However, they are not isomorphic as rings, since the first is a field, and the other has non-trivial zero-divisors (take for example an element like $(1,0)$ in $\rm{GF}(p)^2$).
When someone says "a vector space of dimension $n$ over $\rm{GF}(p)$" he means just that. You can take any of the two above and it will not matter, since as vector spaces, we cannot distinguish them. |
H: The algebraic possibilities of the (topological) procedure of the compactification of a space
If $X$ is a locally compact $K$-vector space, then $X\cup \{\infty\}$ is, via the Alexandroff compactification, a compact space.
But this purely topological procedure tells me nothing about the algebraic relationship of $\infty$ to the rest of the $x\in X$. What can be said about this relationship?
What kinds of definitions for $x+\infty$ and $\lambda\cdot \infty$ are meaningful? And for which $x\in X$ and $\lambda \in K$?
What can be said about the algebraic structure of $X\cup \{\infty\}$?
AI: For the one-point compactification $\overline{\mathbb{C}}$ of $\mathbb{C}$, i.e. $\overline{\mathbb{C}} = \mathbb{C} \cup \{\infty\}$, you don't even get that $(\overline{\mathbb{C}},+)$ is a group because $\infty$ has no additive inverse.
So very little of the algebraic structure of a topological space transfers to its one-point compactification. |
H: Simplify square of sinc functions
I need to simplify if possible the following:
$$\left(i^n\cdot \operatorname{sinc}\big(\pi(x-\tfrac{n}{2})\big)+(-i)^n\cdot \operatorname{sinc}\big(\pi(x+\tfrac{n}{2})\big)\right)^2$$
with $n \in \mathbf{N}$ and $\operatorname{sinc}(x)=\sin(x)/x$.
Thanks
AI: Note that $\;\pi\left(x-\frac n2\right)\;$ and $\;\pi\left(x+\frac n2\right)\;$ differ by $n\pi$, so that the sines of the two arguments will be equal for $n$ even and of opposite sign for $n$ odd.
This allows to get :
For $n$ even :
$$\left(\frac{2\,x\sin(\pi x)}{\pi\left(x^2-(n/2)^2\right)}\right)^2$$
while for $n$ odd :
$$-\left(\frac{2\,x\cos(\pi x)}{\pi\left(x^2-(n/2)^2\right)}\right)^2$$ |
H: Finding the range of a vector valued function
For a scalar-valued function, I can infer whether the function is monotone from its derivative.
For a vector-valued function, is it possible to infer monotonicity from the directional derivative?
For example, define
$$
D=[1,2]\times[1,2],
$$
and
$$
f(x,y)=\left( \frac{2}{1/x+1/y},\sqrt{xy} \right).
$$
Is it possible to show that $f$ maps $D$ to $D$ from its gradient $\nabla f$?
The gradient is
$$
\nabla f = \begin{pmatrix}
\frac{2}{\left(1+x/y\right)^2} & \frac{2}{\left(1+y/x\right)^2} \\
\frac{y^{1/2}}{2x^{1/2}} & \frac{x^{1/2}}{2y^{1/2}}
\end{pmatrix},
$$
whence, for $(x,y)$ in $D$, the directional derivative
$$
\left(\nabla f(x,y)\right)\begin{pmatrix} x \\ y \end{pmatrix},
$$
is always positive and I would like to conclude that, on $D$,
$$
\text{$f$ is minimal at $(1,1)$},\\
\text{$f$ is maximal at $(2,2)$}.
$$
Is it the right way to proceed?
The graph of each component of $f$ looks like this
AI: Note that the Jacobian determinant is
$$\det J_f(x,\, y)=\det\begin{bmatrix}
\frac{2}{\left(1+x/y\right)^2} & \frac{2}{\left(1+y/x\right)^2} \\
\frac{y^{1/2}}{2x^{1/2}} & \frac{x^{1/2}}{2y^{1/2}}
\end{bmatrix}=\\
=\dfrac{1}{{\left(\dfrac{1}{x} + \dfrac{1}{y}\right)}^{2} \sqrt{x y} x} - \dfrac{1}{{\left(\dfrac{1}{x} + \dfrac{1}{y}\right)}^{2} \sqrt{x y} y}= \\
=\dfrac{y-x }{{\left(\dfrac{1}{x} + \dfrac{1}{y}\right)}^{2} (xy)^{\frac{3}{2}}},$$ therefore $\det J_f(x,\, y)=0$ on the line $y=x.$
Addition:
Denote $$G_1=\{(x, \ y)\colon \;\; {1} < {x} < {2},\;\; x < {y} < {2} \}, \\
G_2=\{(x, \ y)\colon \;\; {1} < {y} < {2},\;\; y < {x} < {2} \}$$ Then $\det J_f (x,\ y)\ne{0}, \;\;\; \forall(x, \ y)\in{G_1\cup G_2},$
thus $f(x, \ y)$ is a diffeomorphism on each $G_1$ and $G_2$ and
$$f(\partial{G_1})=\partial{f(G_1 )}, \\
f(\partial{G_2})=\partial{f(G_2 )}.$$
As noted by Ted Shifrin, $$\gamma_1=\left\{ f(1,\ y),\;\; 1 < y < 2\right\}=\left\{\left(\dfrac{2y}{y+1},\sqrt y\right)\colon \;\; 1 < y < 2\right\}, \\
\gamma_2=\left\{ f(x,\ 2),\;\; 1 < x < 2\right\} = \left\{\left(\frac{4x}{x+2},\sqrt{2x}\right)\colon \;\; 1 < x < 2\right\}, \\
\gamma_3=\left\{ f(x,\ x),\;\; 1 < x < 2\right\} = \left\{\left(x,x\right)\colon \;\; 1 < x < 2\right\}.$$
Therefore the interior of $G_1$ is mapped by $f$ into the region bounded by $f(\partial{G_1}) = \gamma_1\cup\gamma_2\cup \gamma_3.$
Due to the symmetry of $f$ with respect to $x$ and $y$, the interior of $G_2$ is also mapped by $f$ into the region bounded by $\gamma_1\cup\gamma_2\cup \gamma_3.$
Different signs of the Jacobian determinant on both sides of the diagonal indicate the opposite orientation of parts of the image of $D.$ |
H: Integrate: $\int_0^{\pi} \log ( 1 - 2 r \cos \theta + r^2)d\theta$
If $r \in \Bbb R$, how does one integrate $\displaystyle \int_0^{\pi} \log ( 1 - 2 r \cos \theta + r^2)d\theta$?
I need some hints. As a special case, if $r = 1$ then I know the above integral is zero.
Here is my working
\begin{align*}
\int_0^{\pi}\log (1 - 2 r \cos \theta + r^2)d\theta &= \int_0^\pi\log ((1 - re^{i \theta})(1 -re^{-i\theta} )) d\theta\\
&= \int_0^\pi \log(1 - r e^{i\theta})d\theta + \int_0^\pi\log(1 - re^{-i\theta})d\theta\\
&= \int_0^{2\pi} \log( 1 - re^{i\theta})d\theta \\
&= 0
\end{align*}
AI: Two hints:
The integrand is an even function of $\theta$ $\Rightarrow$ the integral can be written as $\frac12\int_0^{2\pi}$.
$1-2r\cos\theta+r^2=(1-re^{i\theta})(1-re^{-i\theta})$.
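For reference, carrying the hints through: for $|r|<1$ your computation is valid, since $\int_0^{2\pi}\log(1-re^{i\theta})\,d\theta=0$ is the mean value property of the analytic function $\log(1-rz)$ over the unit circle (it equals $2\pi\log(1-r\cdot 0)$); for $|r|>1$, write $1-2r\cos\theta+r^2=r^2\left(1-\frac{2}{r}\cos\theta+\frac{1}{r^2}\right)$ and apply the first case to $1/r$. Altogether $$\int_0^{\pi} \log ( 1 - 2 r \cos \theta + r^2)\,d\theta=\begin{cases}0,&|r|\le1,\\2\pi\log|r|,&|r|>1.\end{cases}$$ |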
H: Diffeomorphic surfaces and Jacobian
Suppose $S$ and $T$ are bounded (open) surfaces in $\mathbb{R}^n.$ Let them have boundary $\partial S$ and $\partial T$.
Suppose $F:S \to T$ is a $C^k$ diffeomorphism.
Under what conditions on $F$, $S$, $T$, and their boundaries do we get that the determinant of the Jacobian matrix of $F$ is bounded above and below by positive numbers?
AI: If you mean your surfaces to be the interior of compact manifolds with boundary (as your question seems to imply) then the existence of bounds is immediate from the compactness of the boundary, so long as $k$ is at least $1$.
If you don't assume from the start that the map extends to a diffeomorphism of the boundary, then the claim is not true, as can be seen already in the 1-dimensional case. Thus, the self-map of the interval $[0,1]$ given by $x\mapsto x^2$ is a diffeomorphism of the interior, but the Jacobian on the boundary is singular.
To make sure the map extends to the boundary, one would impose a suitable condition of uniform convergence (see http://en.wikipedia.org/wiki/Uniform_convergence for the case of sequences). To get $C^k$ one would need such bounds for the $k$-th derivatives. |
H: Prime with digits reversed is prime?
Well, just another idea that came to my mind, and I have no idea how to solve it :D
Are there infinitely many prime numbers which are not repunits and whose inverse is also prime? (For example, the inverse of 31 is 13, which is also prime. I didn't have any other name to describe the function!)
P.S.: I now know they are called emirps.
AI: Actually, it has a name, and it's quaintly called an "emirp" (the word "prime" in reverse). The link given is to the On-Line Encyclopedia of Integer Sequences' list,
$13, 17, 31, 37, 71, 73, 79, 97, 107, 113, 149, 157, 167,\dots$
While there are an infinite number of primes, I believe it is an open problem if there is an infinite number of emirps.
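If you want to generate some yourself, here's a minimal sketch in Python (sympy's primality test is an assumption of convenience; any isprime will do):

```python
from sympy import isprime

def is_emirp(n):
    r = int(str(n)[::-1])
    # the reversal must be a *different* prime (palindromic primes excluded)
    return isprime(n) and isprime(r) and r != n

print([n for n in range(10, 170) if is_emirp(n)])
# prints the list quoted above: [13, 17, 31, 37, 71, 73, 79, 97, 107, 113, 149, 157, 167]
```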
P.S. Regarding terminology, given $x$, then its "multiplicative inverse" (or "reciprocal") is $1/x$. For functions, for ex, given $\sin(x)$, then its "inverse function" is $\arcsin(x)$. |
H: Find the value of $\int_{0}^{\infty}\frac{x^3}{e^x-1}\ln(e^x - 1)\,dx$
I'm trying to figure out how to evaluate the following:
$$
J=\int_{0}^{\infty}\frac{x^3}{e^x-1}\ln(e^x - 1)\,dx
$$
I tried considering $I(s) = \int_{0}^{\infty}\frac{x^3}{(e^x-1)^s}\,dx\implies J=-I'(1)$, but I couldn't figure out what $I(s)$ was. My other idea was contour integration, but I'm not sure how to deal with the logarithm. Mathematica says that $J\approx24.307$.
I've asked a similar question and the answer involved $\zeta(s)$ so I suspect that this one will as well.
AI: Mathematica says that the answer is
$$\pi^2\zeta(3)+12\zeta(5)$$
I will try to figure out how this can be proven.
Added: Let me compute the 2nd integral in Ron Gordon's answer:
\begin{align}\int_{0}^{\infty}\frac{x^3 e^{-x}}{1-e^{-x}}\ln(1-e^{-x})\,dx
&=-\frac32\int_0^{\infty}x^2\ln^2(1-e^{-x})\,dx=\\&=-\frac32\left[\frac{\partial^2}{\partial s^2}\int_0^{\infty}e^{-sx}\ln^2(1-e^{-x})\,dx\right]_{s=0}=\\
&=-\frac32\left[\frac{\partial^2}{\partial s^2}\int_0^{1}t^{s-1}\ln^2(1-t)\,dt\right]_{s=0}=\\
&=-\frac32\left[\frac{\partial^4}{\partial s^2\partial u^2}\int_0^{1}t^{s-1}(1-t)^u\,dt\right]_{s=0,u=0}=\\
&=-\frac32\left[\frac{\partial^4}{\partial s^2\partial u^2}\frac{\Gamma(s)\Gamma(1+u)}{\Gamma(1+s+u)}\right]_{s=0,u=0}=\\
&=-\frac{1}{2}\left(\pi^2\psi^{(2)}(1)-\psi^{(4)}(1)\right).
\end{align}
To obtain the last expression, one should expand the ratio of gamma functions to 2nd order in $u$, then to expand the corresponding coefficient to 2nd order in $s$.
Then we can use that $\psi^{(2)}(1)=-2\zeta(3)$ and $\psi^{(4)}(1)=-24\zeta(5)$ (cf formula (15) here) to obtain the quoted result. |
H: Apparently simple integral
I am having trouble solving this apparently simple integral:
$\int\frac{x}{3+\sqrt{x}}dx$
Hints would be preferable than complete answer...
Thanks!
AI: Substitute $x = y^2$ and use long division to simplify the integrand.
Thus, we have $dx = 2 y dy$. Substituting in the original integrand, we have:
$$\int \frac{x}{3+\sqrt{x}} dx = \int \frac{y^2}{3+y} 2y dy$$
i.e.,
$$\int \frac{x}{3+\sqrt{x}} dx = \int \frac{2y^3}{3+y} dy$$
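To finish the hint (a sketch of the remaining steps): long division gives
$$\frac{2y^3}{3+y} = 2y^2 - 6y + 18 - \frac{54}{y+3},$$
so
$$\int \frac{2y^3}{3+y}\,dy = \frac{2y^3}{3} - 3y^2 + 18y - 54\ln|y+3| + C,$$
and substituting back $y=\sqrt{x}$ gives $\frac{2}{3}x^{3/2} - 3x + 18\sqrt{x} - 54\ln(\sqrt{x}+3) + C$.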
H: The value of $\lim_{x\to +\infty} \dfrac{e^{x^2}}{10^{|x|}}$
I want to find the value of
$$\lim_{x\to +\infty} \dfrac{e^{x^2}}{10^{|x|}}.$$
Since $x \rightarrow + \infty$, I only consider the value of the function for $x \ge 0$, i.e.
$$\lim_{x\to +\infty} \dfrac{e^{x^2}}{10^{x}}.$$
Noticing that the numerator and denominator both approach $+\infty$ as $x$ approaches $+\infty$, I tried using L'Hôpital's Rule. However, it makes things worse. I did check at Wolfram Alpha, and found out that the value of the limit is $\infty$.
I then noticed that both the numerator and the denominator have powers of some function of $x$. Taking the natural logarithm of the numerator and denominator, we have $$\dfrac{\ln e^{x^2}}{\ln 10^{x}}=\dfrac{x^2 \ln e}{x \ln 10}=\dfrac{x}{\ln 10}.$$
This seems really silly, but would I be justified if I say that $$\lim_{x\to +\infty} \dfrac{e^{x^2}}{10^{x}} = \lim_{x\to +\infty}\dfrac{x}{\ln 10}= + \infty?$$
If this is incorrect, kindly give me a hint to tackle this problem.
Thank you for your time.
AI: You can take the natural logarithm of the fraction and use your idea.
$$\lim_{x\to +\infty} \dfrac{e^{x^2}}{10^{x}}=e^{ \lim_{x\to +\infty} \ln\dfrac{e^{x^2}}{10^{x}}}$$
A much simpler solution is to observe that $10 <e^3$ and hence
$$ \dfrac{e^{x^2}}{10^{x}}> \dfrac{e^{x^2}}{e^{3x}}$$
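Explicitly, $\dfrac{e^{x^2}}{e^{3x}}=e^{x^2-3x}$ and $x^2-3x\to+\infty$, so the right-hand side, and hence the original quotient, tends to $+\infty$.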
Added: To clarify why I can interchange the exponential and the limit.
Since $\lim_{x\to +\infty} \ln\dfrac{e^{x^2}}{10^{x}} = \infty$, writing $f(x) = \ln\dfrac{e^{x^2}}{10^{x}}$, for each $M>0$ there exists some $N>0$ so that $x>N$ implies $f(x) >\ln (M)$. Then for $x >N$ we have
$$e^{f(x)} >e^{\ln(M)} =M \,.$$
Thus, for each $M>0$ there exists some $N>0$ so that $x>N$ implies $e^{f(x)} >e^{\ln(M)} =M$. This shows that
$$\lim_{x\to +\infty}e^{ \ln\dfrac{e^{x^2}}{10^{x}}} = \infty \,.$$
Had $\lim_{x \to \infty} f(x)$ been finite, to show that
$$\lim_{x \to \infty} e^{f(x)}=e^{\lim_{x \to \infty} f(x)}$$
you'd need to use the continuity of $e^x$ and the following Theorem, which can be typically found in all Calculus/Analysis textbooks:
Theorem If $\lim_{x \to a}f(x)=b$ and $g$ is continuous at $x=b$, then
$$\lim_{x \to a} g(f(x)) = g(b) \,.$$
This Theorem is used to prove that the composition of continuous functions is continuous, and it can typically be found just before that result.
H: Group extension of $\mathbb Z_4$ by $\mathbb Z_2$
Let $f : G →\mathbb Z_2$ be an extension of $\mathbb Z_4$ by $\mathbb Z_2$. Suppose that the induced action $α_f :\mathbb Z_2 →\mathbb Z^{\times}_4$ carries the generator of $\mathbb Z_2$ to $−1$. Then $G$ is isomorphic to either $D_8$ or $Q_8$.
Proof
Let $\mathbb Z_4 = \langle b \rangle$ and $\mathbb Z_2 = \langle a \rangle$. For any $x \in G$ with $f(x) = a$, we have $xb^kx^{-1} = \alpha_f(a)(b^k) = b^{-k}$ for all $k$. Also, $f(x^2) = a^2 = e$, so $x^2 \in \ker f = \langle b \rangle$. Thus, $x^2 = b^k$ for some $k$.
Note that $x$ fails to commute with $b$ and $b^{-1}$, so neither of these can equal $x^2$. Thus, either $x^2 = e$ or $x^2 = b^2$.
If $x^2 = e$, then $x$ has order 2, and hence there is a section of $f$, specified by $s(a) = x$. In this case, $G \cong\mathbb Z_4 \rtimes_{\alpha_f}\mathbb Z_2$, which, for this value of $\alpha_f$, is isomorphic to the dihedral group $D_8$.
I'm a bit confused about how they got from $x^2=e$ to $G \cong Z_4 \rtimes_{\alpha_f} Z_2$.
If s(a) = x, then the section $s: Z_2 \rightarrow G$ sends 1 to e and -1 to x, right? So we must have an element of order 2 in G. Additionally, we know that we have a normal subgroup of order 4. However, if we want to construct a semidirect product, we need to make sure that $Z_2$ and $Z_4$ do not have any elements in common, except for the identity, right? But what if $x=b^2$?
The remaining case gives $x^2 = b^2$. But then there is a homomorphism $h : Q_8 → G$
with $h(a) = x$ and $h(b) = b$.
In my notes, it says that any group G is isomorphic to $Q_{2n}$ if the following conditions hold:
$\bullet$ $x^4=e$
$\bullet$ $b^{2n}=e$
$\bullet$ $x^{-1}bx=b^{-1}$
$\bullet$ $x^2 = b^n$
Of course, in this case, all of the first three conditions hold. But what about the 4th one? We know that $x^2 = b^2 \not= b^4$. Then how does that homomorphism exist?
As the image of h contains more than half the elements of G, h must be onto, and hence an isomorphism. (Alternatively, we could use the Five Lemma for Extensions to show that h is an isomorphism.)
I'm not sure if I understand this. The only thing we know about G is that it contains a normal subgroup of order 4, right? So what if we map the group of order 4 in $Q_8$ to the group of order 4 in G? Then we won't have more than half the elements of G in the image, right?
Also, for the Five Lemma for Extensions... in order to use that theorem, we need an extension for $Q_8$, right? But we're not given any. In other words, we have $\langle b \rangle \rightarrow G \rightarrow \mathbb Z_2$. But For $Q_8$, we have $\langle b' \rangle \rightarrow Q_8 \rightarrow ?$.
$Q_8$, of course, is not a split extension of $\langle b \rangle$ by $\mathbb Z_2$, for two reasons. First, we know that every element of $Q_8$ which is not in $\langle b \rangle$ has order 4, and hence it is impossible to define a section from $Q_8/\langle b \rangle$ to $Q_8$. Second, if it were a split extension, then it would be isomorphic to $D_8$, which we know is not so.
Why is it impossible to define a section? Is it not possible to say $s(a)=b^2$. I guess this is related to my previous question.
We have classified the extensions of $\mathbb Z_4$ by $\mathbb Z_2$ corresponding to the homomorphism
from $Z_2$ to $Z^{\times}_4$ that takes the generator of $Z_2$ to −1. Since $Z^{\times}_4$ is isomorphic to $Z_2$, there is only one other homomorphism from $Z_2$ to $Z^{\times}_4$ : the trivial homomorphism.
So if we look at all the possibilities of where the generator is being sent, then that would be like classifying all groups of order 8, right?
Thank you in advance
AI: But what if $x=b^2$?
Then $b^x = b^{b^2} = b \neq b^{-1}$. In other words, $x$ and $b$ do not commute, but $b^2$ and $b$ do commute. Equality of $x=b^2$ implies $x$ commutes with $b$, despite our assumption that $x^{-1} b x = b^{-1}$.
In my notes, it says that any group G is isomorphic to $Q_{2n}$ if the following conditions hold:
This should say $Q_{4n}$ not $Q_{2n}$.
I'm not sure if I understand this.
The image of $h$ contains $x$ and $\langle b \rangle$, 5 elements out of 8, so it must contain all 8 by Lagrange's theorem. $x \notin \langle b \rangle$ because $x$ and $b$ do not commute.
…in order to use that theorem, we need an extension for $Q_8$, right?
Yes, we take $1 \to \langle i \rangle \to Q_8 \to C_2 \to 1$ where the surjection takes $\pm j, \pm k$ to the non-identity element of $C_2$ and $\pm1, \pm i$ to the identity of $C_2$. The five lemma probably also requires some homomorphisms from one exact sequence to the other. Map $i\mapsto b$ and $j\mapsto x$. This is basically the same as $h$, but I'm not positive of your notation for $Q_8$.
Why is it impossible to define a section?
$Q_8$ has only one element of order $2$, and that element is central, but the image of $a$ is not central.
So if we look at all the possibilities of where the generator is being sent, then that would be like classifying all groups of order 8, right?
More or less, yes. A group of order 8 contains a group of order 4 and index 2. If that subgroup is cyclic, then the quoted argument classifies all non-abelian possibilities ($D_8$ and $Q_8$). It indicates one could also handle the abelian possibilities ($C_4 \times C_2$ and $C_8$). However, one also has to deal with the possibility that there is no cyclic subgroup of order 4 (then the group has exponent 2 and is a vector space over $\mathbb{Z}/2\mathbb{Z}$ so we get $E_8 = C_2 \times C_2 \times C_2$). Be careful that if one tries to classify the extensions of $C_2 \times C_2$ by $C_2$ you'll get some repeats, $D_8$ and $C_4 \times C_2$, as well as the desired $E_8$.
H: $e^{At}$ for $A = B^{-1}DB$ with $D$ diagonal
For a homework problem, I have to compute $ e^{At}$ for
$$ A = B^{-1} \begin{pmatrix}
-1 & 0 & 0 \\
0 & 2 & 0 \\
0 & 0 & 3 \end{pmatrix} B$$
I know how to compute the result for $2 \times 2$ matrices where I can calculate the eigenvalues, but this is $3 \times 3$, and I cannot compute eigenvalues, so is there any identity or something which allows computing such exponentials?
Thanks!
AI: Note that $A$ has been diagonalized, and that: $$e^{At}=I+At+\frac{1}{2!}(At)^2+\frac{1}{3!}(At)^3+\cdots $$ $$ \Longrightarrow e^{BDB^{-1}t}=I+BDB^{-1}t+\frac{1}{2!}(BDB^{-1}t)^2+\frac{1}{3!}(BDB^{-1}t)^3+\cdots $$ $$ =BB^{-1}+B(Dt)B^{-1}+\frac{1}{2!}B(Dt)^2B^{-1}+\frac{1}{3!}B(Dt)^3B^{-1}+\cdots $$ $$ =B\left(I+Dt+\frac{1}{2!}(Dt)^2+\frac{1}{3!}(Dt)^3+\cdots\right)B^{-1}$$ What is the expression in the middle?
Edit: Forgot the $t$.
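To finish (using the answer's convention $A=BDB^{-1}$; for the question's $A=B^{-1}DB$ simply swap the roles of $B$ and $B^{-1}$): the expression in the middle is exactly $e^{Dt}$, and for a diagonal matrix the exponential is computed entrywise, so
$$e^{At}=B\,e^{Dt}\,B^{-1}=B\begin{pmatrix}e^{-t}&0&0\\0&e^{2t}&0\\0&0&e^{3t}\end{pmatrix}B^{-1}.$$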
H: Binomial sum of derivatives
I would like to know the result of the following sum:
$$\sum_{p=0}^m \binom{m}{p}(-1)^{p-1}\frac{\partial^{p-1}}{\partial x^{p-1}}f(x)\cdot(-1)^{m-p-1}\frac{\partial^{m-p-1}}{\partial x^{m-p-1}}g(x)$$
with $$\frac{\partial^{-1}}{\partial x^{-1}}=\int dx$$
Thanks
AI: You can pull out the factors of $-1$ to yield an overall factor $(-1)^m$. Also it's unclear why you're using partial derivatives, as there's only one variable; I'll use total derivatives. Then
\begin{align}
&
\sum_{p=0}^m\binom mp\left(\frac{\mathrm d}{\mathrm dx}\right)^{p-1}f(x)\left(\frac{\mathrm d}{\mathrm dx}\right)^{m-p-1}g(x)
\\
=&
\sum_{p=0}^m\binom mp\left(\frac{\mathrm d}{\mathrm dx}\right)^p\int f(x)\mathrm dx\left(\frac{\mathrm d}{\mathrm dx}\right)^{m-p}\int g(x)\mathrm dx
\\
=&
\left(\frac{\mathrm d}{\mathrm dx}\right)^m\left(\int f(x)\mathrm dx\int g(x)\mathrm dx\right)\;.
\end{align}
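Reinstating the sign factor pulled out at the start, the original sum is therefore
$$(-1)^m\left(\frac{\mathrm d}{\mathrm dx}\right)^m\left(\int f(x)\,\mathrm dx\int g(x)\,\mathrm dx\right),$$
i.e. the general Leibniz rule read in reverse.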
H: space of riemann integrable functions not complete
Define norm as $\int |f|$ (Riemann integral) on $\mathcal R^1[0,1]$, the space of riemann integrable functions on $[0,1]$ with identification $f=g$ iff $\int |f-g|=0$.
Let $\{ r_1,r_2,\cdots \}$ be the rationals in $[0,1]$, and let $f_n=1_{\{r_1,\cdots,r_n\}}$. Then $f_n$ is a cauchy sequence in $\mathcal R^1[0,1]$. I want to show that there is no $f\in \mathcal R^1[0,1]$ such that $f_n$ converges to $f$ in norm. How can I show it?
Obviously the pointwise limit $f=1_\mathbb{Q}$ is not contained in $\mathcal R^1[0,1]$, but can I use this fact? I think that there can be other candidates, since convergence in norm and pointwise convergence are different.
AI: Recall the condition that $f=g$ if and only if $\int|f-g| = 0$. This means that elements of $\mathcal R^1$ are not functions in the classical sense, because they're only defined up to sets of measure $0$. You can't evaluate $f(x)$, because for every pair $(x,y)$ there's some $g\in\mathcal R^1$ with $g=f$ but $g(x)=y$: we just change the value of $f$ at a single point.
So in your example we have $f_n = 0$ for every $n$.
Consider instead the functions
$$g_n(x) = \min\left(n,-\log x\right)$$.
Now each $g_n$ is Riemann integrable, and it's easy to see that the sequence is Cauchy: $\int|g_n - g_m|\leq 1 - \int g_n$ for $m>n$, and the right-hand side tends to $0$.
But there is no limit in $\mathcal R^1$. If there were a limit it would have to be $x\mapsto-\log(x)$ (almost everywhere), but that isn't Riemann integrable because it's unbounded.
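To see the Cauchy estimate concretely:
$$\int_0^1 g_n(x)\,\mathrm dx=\int_0^{e^{-n}}n\,\mathrm dx+\int_{e^{-n}}^1(-\log x)\,\mathrm dx=ne^{-n}+\left(1-e^{-n}-ne^{-n}\right)=1-e^{-n},$$
so for $m>n$ we get $\int|g_n-g_m|\le 1-\int g_n=e^{-n}\to 0$.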
H: Permutation combination problem
This is how Edward’s Lotteries work. First, 9 different numbers are selected. Tickets with exactly 6
of the 9 numbers randomly selected are printed such that no two tickets have the same set of numbers.
Finally, the winning ticket is the one containing the 6 numbers drawn from the 9 randomly. There is
exactly one winning ticket in the lottery system. How many tickets can the lottery system print?
AI: The number of tickets is the number of ways to choose $6$ numbers from $9$.
How do we calculate that that number? It is simplest if we know a formula for the binomial coefficient $\binom{9}{6}$ aka $C(9,6)$ aka $C^9_6$. One expression for this is $\frac{9!}{6!3!}$. Now we compute. There is a lot of cancellation.
Or we can figure out the number from first principles. How many ways are there to choose $6$ numbers (the ones that will appear on tickets) from $9$? Just as many as there are ways to choose the $3$ numbers that don't appear. (This "feels" easier, since $3$ is smaller than $6$.)
There are $(9)(8)(7)$ three number strings made up of different numbers. The $6$ strings $abc,acb, bac,bca,cab,cba$ all represent the same ticket, the same choice of rejected numbers. So the total number of tickets we can print is $\frac{(9)(8)(7)}{6}$.
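Carrying out the arithmetic either way:
$$\binom{9}{6}=\frac{9!}{6!\,3!}=\frac{9\cdot8\cdot7}{3\cdot2\cdot1}=\frac{(9)(8)(7)}{6}=84,$$
so the lottery system can print $84$ tickets.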
H: Pigeon holes principle
Let $P$ be a set whose elements are $257$ sentences in which only the atomic sentences $A,B,C$ occur (e.g. $A \iff B,\space\space A \wedge B \wedge C, \space\space...$). Show that there exist two different $p_1, p_2 \in P$ such that the sentence $p_1 \iff p_2$ is a tautology.
The pigeons are the $257$ sentences, but I can't think of a way to prove what is asked.
AI: HINT: Notice that $257=2^8+1$, so you might guess that there will be $2^8$ pigeonholes. You have three atomic sentences. They can have $2^3=8$ different combinations of truth values. How many different truth tables can you build from these $8$ combinations of truth values? Now notice that if $p_1\iff p_2$ is a tautology, then $p_1$ and $p_2$ have the same truth table. (Why?)
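To spell out the count (completing the hint): each of the $2^3=8$ truth-value combinations can be assigned either T or F by a truth table, so there are $2^8=256$ possible truth tables, while $|P|=257=2^8+1$. By the pigeonhole principle two different sentences $p_1,p_2\in P$ must share a truth table, and then $p_1\iff p_2$ is true under every assignment, i.e. a tautology.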
H: Getting rid of a floor function in the next expression:$\left\lfloor\frac{(x-2)^2}{4}\right\rfloor $, It is known x is odd.
I was wondering if there's a way in which you can get rid of a floor function in the next expression:$$\left\lfloor\frac{(x-2)^2}{4}\right\rfloor $$ It is known x is odd.
AI: Since $x$ is odd, $x=2n+1$ for some integer $n$, and therefore $(x-2)^2=(2n-1)^2=4n^2-4n+1$. Thus,
$$\left\lfloor\frac{(x-2)^2}4\right\rfloor=\left\lfloor n^2-n+\frac14\right\rfloor=n^2-n\;.$$
And $n=\dfrac{x-1}2$, so the expression reduces to
$$\left(\frac{x-1}2\right)^2-\frac{x-1}2=\frac{x^2-4x+3}4\;.$$
H: Study of Set theory: Book recommendations?
Can you suggest a good book for set theory?
I have just started reading about Group theory and want to learn set theory on my own.
Thanks in advance
AI: A couple of ‘entry level’ treatments that can be confidently recommended.
Herbert B. Enderton, Elements of Set Theory (Academic Press, 1997) is particularly clear in marking off the informal development of the theory of sets, cardinals, ordinals etc. (guided by the conception of sets as constructed in a cumulative hierarchy) and the formal axiomatization of ZFC. It is also particularly good and non-confusing about what is involved in (apparent) talk of classes which are too big to be sets – something that can mystify beginners. It is written with a certain lightness of touch and proofs are often presented in particularly well-signposted stages. The last couple of chapters or so perhaps do get a bit tougher, but overall this really is quite exemplary exposition.
Derek Goldrei, Classic Set Theory: For guided independent study (Chapman & Hall/CRC 1996) is written by a staff tutor at the Open University in the UK. It is as you might expect extremely clear, it is quite attractively written (as set theory books go!), and is indeed very well-structured for independent reading. The coverage is very similar to Enderton’s, and either book makes a fine introduction (but for what it is worth, I prefer Enderton ).
For more see entries on set theory in this Guide to reading on logic.
H: How can I solve this Laws of Sines problem?
This is a homework question that was set by my teacher, but it's to see the topic our class should go over in revision, etc.
I have calculated $AB$ to be 5.26m for part (a). I simply used the law of cosines and plugged in the numbers.
Part (b) is the question I have been trying for quite a while. I tried to use the law of sines, to no avail. To calculate $BC$ I need the angle opposite it, which I do not have (or know how to work out). The triangle $BDC$ has a right angle, but this does not help, as $\sin(90^\circ) = 1$.
A step in the right direction would be more beneficial than a full answer.
AI: Steps to solve this problem:
1)
Use the cosine formula in $\Delta ABD$ to find side $AB$.
$AB^2=88.3^2+91.2^2-2\times88.3\times 91.2\times \cos2.8^\circ$
$AB=5.257$m
2)
Find $\angle ABD$ in the above triangle using the same cosine formula.
$\cos \angle ABD=\dfrac{5.257^2+88.3^2-91.2^2}{2\times 5.257\times 88.3}$
$\angle ABD=122.069^\circ$
3)
Now find out $\angle BDC=\angle ABD-90^\circ=32.069^\circ$
4)
Now we can use simple trigonometry to solve
$\dfrac{BC}{DC}=\tan 32.069^\circ\implies DC=\dfrac{BC}{\tan 32.069^\circ}$
and in $\Delta ACD$
$\dfrac{AB+BC}{DC}=\tan (32.069^\circ+2.8^\circ)$
$\dfrac{5.257+BC}{DC}=\tan 34.869^\circ$
${5.257+BC}={DC}\tan 34.869^\circ$
${5.257+BC}=\dfrac{BC}{\tan 32.069^\circ}\tan 34.869^\circ$
$BC=46.879$m and $AB=5.257$m
H: proving $n!>2^n\;\;\forall \;n≥4\;$ by mathematical induction
My teacher proved the following $n!>2^n\;\;\;\forall \;n≥4\;$ in the following way
Basis step: $\;\;4!=24>16$ ok
Induction hypothesis: $\;\;k!>2^k$
Induction step: $\qquad\qquad(k+1)!=k!(k+1)>(k+1)2^k>2^k\cdot 2=2^{k+1}$
I wonder how did he assume that $2^k(k+1)>2^{k}\cdot 2\quad\forall k≥4$?
Don't we have to show it by induction too?
AI: We need only take advantage of the proof's hypothesis, when we assume from the start that $k \geq 4$, so those are the only values of $k$ that need to be considered. Clearly, $$\forall\;k\geq 4 \implies k + 1 \geq 4 + 1 = 5 > 2$$
This is where we get that $$2^k \cdot \underbrace{(k + 1)}_{\large > 2} \;\gt \; 2^k \cdot 2 = 2^{k+1},\quad\text{as desired}.$$
H: Two questions on finite abelian groups
Which of the following are true?
1.Every group of order $6$ abelian.
2.Two abelian groups of the same order are isomorphic
AI: Neither of them is true.
For 1, consider $S_3$: it has order $6$ but is non-abelian.
For 2, consider $\mathbb{Z}_4$ and $\mathbb{Z}_2 \times \mathbb{Z}_2$: both have order $4$, but they are not isomorphic, since only the former has an element of order $4$.
H: Finite group is generated by a set of representatives of conjugacy classes.
Could you tell me how to prove that a finite group is generated by a set of representatives of conjugacy classes?
I've read this https://mathoverflow.net/questions/26979/generating-a-finite-group-from-elements-in-each-conjugacy-class but I don't know
why if $H$ is a proper subgroup of a finite group $G$ intersecting every conjugacy class of $G$, then $G = \bigcup_{g \in G} g H g^{-1}$
and why the fact that we can't cover $G$ with sets in $\bigcup_{g \in G/H} g H g^{-1}$ implies that the group is generated by a set of representatives of conjugacy classes?
Maybe you know a simpler proof?
Thank you.
AI: Let $x \in G$. $x$ is in a conjugacy class, $x^G = \{ g^{-1} x g : g \in G \}$, and $H \cap x^G \neq \varnothing$ by assumption, so let $h \in H \cap x^G$. Since $h \in x^G$, there is some $g \in G$ with $g^{-1} x g = h$ and so $x = ghg^{-1} \in gHg^{-1} \subseteq \cup_{g \in G} gHg^{-1}$.
Since $G$ is not in fact such a union (by the Lemma below), there is no such subgroup $H$. Every subgroup that intersects all conjugacy classes must not be proper, it must be all of $G$.
If $H$ is generated by a set of representatives of the conjugacy classes, then it intersects every conjugacy class (for instance, in that chosen generator), and so it cannot be proper; instead $H=G$.
Lemma: No finite group is the union of conjugates of a proper subgroup.
Proof: [Standard and included in Hagen's answer] Let $G = \cup_{g \in G} gHg^{-1}$ for some subgroup $H \leq G$. Since $gh H = gH$ and $H(gh)^{-1} = Hh^{-1} g^{-1} = Hg^{-1}$, we don't need to union over all $g \in G$, since $g$ and $gh$ give the same subgroup $gHg^{-1} = ghH(gh)^{-1}$. We only need $[G:H]$ different $g$. Now consider the non-identity elements of $G$. Each one has to be in at least one of the $gHg^{-1}$, but it won't be the identity element of that subgroup. Hence the $|G|-1$ different non-identity elements of $G$ lie in one of the subsets $gHg^{-1} \setminus \{1\}$ of size $|H|-1$. Since there are at most $[G:H]$ of those subsets we get an inequality, $|G| - 1 \leq [G:H](|H|-1)$, but the right hand side is just $|G|- [G:H]$. Simplifying $|G|-1 \leq |G|-[G:H]$ gives $[G:H] \leq 1$, but since $[G:H]$ is a positive integer, this means $[G:H]=1$ and $H=G$. $\square$
H: What kind of functions can be moment-generating functions for a random variable?
Given an infinitely differentiable function $ g: \mathbb{R} \rightarrow \mathbb{R}$, can we always find a distribution function $f_X$ of some random variable $X$ so that
$g(t) = \int_{-\infty}^\infty e^{tx}f_X(x) dx$?
If my question is too vague or ill-posed, can anyone recommend any literature on the characterization of moment-generating functions?
AI: No, there are a few necessary conditions, I don't know if there's a sufficient one.
If $g(t) = \mathbb E(e^{tX})$ we must have $g(0) = 1$ and, at least when $X \ge 0$, $\lim_{t\to -\infty}g(t) = \mathbb P(X=0) \in [0,1]$.
We also have $\mathbb E(X) = g'(0)$ and $\operatorname{Var}(X) = g''(0) - g'(0)^2 \ge 0$.
We also have by Jensen's inequality
$$\begin{array}{rl}g(t) &=\mathbb E(e^{tX}) \\&\geq e^{t\mathbb E(X)} \\ &= e^{tf'(0)}\end{array}.$$
So there are lots of probabilistic statements you can make that must be satisfied by anything that's a moment-generating function. It's quite easy to find counterexamples.
H: Image of a union of collection of sets as union of the images
I am having problems with establishing the following basic result. Actually, I found a previous post that is close in nature (it is about inverse image), but I was interested in this specific one, with the following notation, because it is the one I found in the book I am self-studying and I guess that most of my problems are actually related with the notation itself. Morevoer I would like to take it as an opportunity to find out how to write proof with equalities, because I tend in those cases to write them in a cumbersome way, with necessary and sufficient conditions.
[I assume that this is partly due to the influence of "How to prove it: a structured approach", a book that I loved, but that left me the tendency to be quite mechanical in proving any sort of result].
Theorem:
Let $X$ and $Y$ be nonempty sets and $f \in Y^X$. Prove that, for any (nonempty) classes $\mathcal{A} \subseteq 2^X$, we have
$ f(\cup \mathcal{A}) = \cup \{f(A):A \in \mathcal{A} \} $
Here, it's how I approach the problem.
Tentative Proof:
First of all, by definition of (direct) image of a function, we have
$$f(\cup \mathcal{A}) := \{f(x):x\in \cup \mathcal{A} \}. \hspace{1cm} (*)$$
Let $X$, $Y$ be arbitrary (nonempty) sets. Let $f$ be an arbitrary function that maps from $X$ to $Y$ and let $\mathcal{A}$ be a family of sets, subset of the powerset of $X$. By $(*)$, the result we have to prove becomes
$$ \{f(x):x \in \cup \mathcal{A} \} = \cup \{ f(A):A \in \mathcal{A}\} \hspace{1cm} (1)$$
In order to prove it, rephrase $(1)$ as
$$ \forall y ( y \in \{f(x):x \in \cup \mathcal{A} \} \leftrightarrow y \in \cup \{ f(A):A \in \mathcal{A}\}). \hspace{1cm} (2)$$
We start by proving the necessary condition. Let $y$ be an arbitrary element and assume that $y$ is a member of $\{f(x):x \in \cup \mathcal{A} \}$. This means that $\exists x(x \in \cup \mathcal{A} \land y=f(x))$. At the same time, we have to prove that $\exists A(A \in \mathcal{A} \land y\in f(A))$.
And here I got stuck...
I really don't see how from my premises I can get the desired result. Indeed, what $A$ should be?
I assume there is a problem with the way in which I translate the problem in logical terms.
AI: You’re confusing yourself by using more notation than is necessary. Suppose that $y\in f\left[\bigcup\mathscr{A}\right]$; then by definition there is an $x\in\bigcup\mathscr{A}$ such that $y=f(x)$. By the definition of union we know that $x\in A_x$ for some $A_x\in\mathscr{A}$, so $y=f(x)\in f[A_x]\subseteq\bigcup\left\{f[A]:A\in\mathscr{A}\right\}$. Since $y$ was arbitrary, this shows that
$$f\left[\bigcup\mathscr{A}\right]\subseteq\bigcup\left\{f[A]:A\in\mathscr{A}\right\}\;.$$
That’s all you have to say, and it’s far more readable than something cluttered with quantifiers and other formal logical notation.
The argument to show the reverse inclusion is essentially just running the same steps in reverse.
H: Easy way to check for a valid solution in this triple equality?
Let's say I have the following equalities
$a_1x_1 + a_2x_2 + a_3x_3 + a_4x_4 = b_1x_1 + b_2x_2 + b_3x_3 + b_4x_4 = c_1x_1 + c_2x_2 + c_3x_3 + c_4x_4$
Where the $a$'s, $b$'s, and $c$'s are known, non-negative integers.
Is there an efficient way to check if a solution exists (the $x$'s) such that they are non-negative real numbers (except for the trivial case of all $x$'s being 0)? I don't need to actually calculate them, just need some way to see if a solution even exists.
AI: After subtracting one of your expressions from the other two, you have two homogeneous linear equations, which you can write as
$ A x = 0$, where $A$ is a $2 \times 4$ matrix. By a theorem of Gordan (or the duality theory of linear programming) this has a solution with $x \ge 0$ and $x \ne 0$ if and only if
the system $A^T y < 0$ has no solution, i.e. there are no real $y_1, y_2$ with
all four entries of $A^T y < 0$. By homogeneity it suffices to look at the three cases
$y_1 = 1$, $y_1 = 0$ and $y_1 = -1$. In each case, the four entries of $A^T y$ give us four inequalities on $y_2$, and it is easy to check if a solution exists.
EDIT: In your example, subtract the third expression from the other two to get
$$ \eqalign{x_1 - x_2 + x_3 &= 0\cr -x_1 \phantom{- x_2} + x_3 &= 0\cr}$$
The system $A^T y < 0$ says
$$ \eqalign{y_1 - y_2 &< 0 \cr
-y_1 \phantom{-y_2} &< 0\cr
y_1 + y_2 &< 0\cr}$$
The second row says $y_1 > 0$, so we take $y_1 = 1$, and then we need
$1 - y_2 < 0$, i.e. $y_2 > 1$, and $1 + y_2 < 0$, i.e. $y_2 < -1$. These
are contradictory, so we conclude that $A^T y < 0$ has no solution, and your
system does have a nonnegative nonzero solution.
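If you want to automate this check, here is a minimal Python sketch (assuming SciPy is available; the matrix encodes the example system above). The idea: a nonzero $x\ge 0$ with $Ax=0$ exists iff the LP with the extra normalization $\sum_i x_i=1$ is feasible, since the normalization rules out $x=0$ and any feasible ray can be rescaled.

    import numpy as np
    from scipy.optimize import linprog

    # The two homogeneous equations from the example:
    #   x1 - x2 + x3 = 0   and   -x1 + x3 = 0
    A = np.array([[1.0, -1.0, 1.0],
                  [-1.0, 0.0, 1.0]])

    # Feasibility of {A x = 0, sum(x) = 1, x >= 0} is equivalent to the
    # existence of a nonzero nonnegative solution of A x = 0.
    A_eq = np.vstack([A, np.ones(A.shape[1])])
    b_eq = np.append(np.zeros(A.shape[0]), 1.0)
    res = linprog(c=np.zeros(A.shape[1]), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * A.shape[1])
    print("nontrivial nonnegative solution exists:", res.success)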
H: Find the first 5 terms of the expansion in a power series
Find the first 5 terms of the expansion in a power series
$$y′=xe^{x}+2y^{2}$$
I've got a riccati equation $$ x e^{x}+2y^{2}, y(0)=0$$
After solving: $$y=e^{x}(x-1)+\frac{2}{3}y^{3} - 1$$
And I don't know how to go forward. Please help me.
AI: You assume that there exist $a_i$ such that $$y = a_0 +a_1x + a_2x^2 + \cdots.$$
From here it is very easy to calculate power series expansions for $y', xe^x,$ and $2y^2$. Then you equate like powers of $x$ on both sides of the equality to get a relation between the $a_i$, and get a sequence of fairly simple equations in the $a_i$. The initial condition $y(0)=0$ tells you immediately that $a_0 = 0$ and from there you can usually get the other coefficients.
Once you have $a_0,\ldots a_4$, you have the answer to the question.
To take a much simpler example that gives an idea of the method, suppose the equation were $$y' = y + 1$$ with the initial condition $y(0) = 0$.
We start with:
$$\begin{array}{rlrrr}
y &= &a_0 &+a_1x &+ a_2x^2 &+ \cdots \\
y+1 &=& 1+ a_0 &+a_1x &+ a_2x^2 &+ \cdots \\
y' &=& a_1 &+2a_2x &+ 3a_3x^2 &+ \cdots
\end{array}$$
Then equating the last two we have $$a_1 + 2a_2x + 3a_3x^2 + \cdots = 1 + a_0 + a_1x + a_2x^2 + \cdots$$
and equating coefficients of like powers gives equations in the $a_i$:
$$
\begin{align}
a_1 & =& 1+a_0\\
2a_2 & = &a_1 \\
3a_3 & = & a_2\\
&\vdots&
\end{align}
$$
Now $y(0) = 0$ tells us immediately that $a_0 = 0$. And the equations above then give us $a_1 = 1, a_2 = \frac12, a_3 = \frac16,\ldots$. So the series for $y$ is $$y = x + \frac12x^2 + \frac16x^3 + \cdots$$
which is in fact the power series for $y=e^x-1$.
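Applying exactly the same procedure to the original equation $y' = xe^x + 2y^2$, $y(0)=0$ (a sketch worth double-checking): with $y = a_1x + a_2x^2 + \cdots$ (the initial condition gives $a_0=0$) and $xe^x = x + x^2 + \frac{x^3}{2} + \frac{x^4}{6} + \cdots$, equating coefficients gives $a_1=0$, $2a_2=1$, $3a_3=1$, $4a_4=\tfrac12$, $5a_5=\tfrac16+2a_2^2=\tfrac23$, so
$$y = \frac{x^2}{2}+\frac{x^3}{3}+\frac{x^4}{8}+\frac{2x^5}{15}+\cdots$$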
H: Dimension of the set of self-adjoint operators
I'm trying to figure out what the dimension of the set of self-adjoint operators on V would be, or in more concrete terms:
Let $dim V =n$. Let $S(V)$ denote the set of self-adjoint linear operators on V. What is its dimension?
The only thing I know that somewhat resembles this might be that $dim L(V) = (dim V)^2$ but I'm not sure how I might be able to apply that to this problem.
Any tips or assistance would be greatly appreciated. Thanks!
EDIT: Also, V is a finite dimensional inner product (or Hermitian) space.
AI: Use matrix representations to figure out the answer. Let $\mathcal B$ be an orthonormal basis of $V$, and let $u\in\mathrm{End}_{\Bbb C}(V)$ be an endomorphism of $V$. Then $u$ is selfadjoint iff its matrix in the basis $\mathcal B$ is hermitian i.e. $$u\text{ is selfadjoint iff }^t\overline{\mathrm{Mat}(u,\mathcal B)}=\mathrm{Mat}(u,\mathcal B).$$
What is the (real) dimension of the subspace of hermitian matrices $H_n\subset M_n(\Bbb C)$? Consider a matrix $(u_{ij})\in M_n(\Bbb C)$: it is hermitian iff for all $i,j\in\lbrace 1,\dots,n\rbrace,~\overline{u_{ji}}=u_{ij}$. You can thus choose the upper coefficients ($i<j$) freely in $\Bbb C$, the diagonal coefficients ($i=j$) have to be real but can be chosen arbitrarily, and the coefficients below the diagonal ($i>j$) are completely determined by those above the diagonal. Therefore the real dimension of $H_n$ is $2\frac{n(n-1)}{2}+n=n^2$.
H: Confusing math problem
How would I solve this question? I came across it and is really confused.
The payment of Jon was bigger by $960$ than the payment of David.
After the payment of David got increased by $10\%$, Jon and David got the same payment amount.
$\text{A}$. Denote by $X$ David's payment in the beginning and, using $X$, express Jon's payment.
$\text{B}$. Find out the payment of David in the beginning.
How can I solve this? Any hints?
AI: Let $J$ = payment of Jon and $X$ = payment of David. Then we know that $J = X + 960$. Furthermore, $1.1X = J$. Now substituting the first equation into the second we obtain: $1.1X=X+960 \iff 0.1X=960 \iff X=9600 \rightarrow$ $J=9600 + 960 = 10560$ (we obtain $J$ by substituting $9600$ into the first equation).
H: Results following from Analyticity on a domain
This is part of an old Oxford exam paper (1997 2602 Q2) I'm working on for revision.
Suppose we have a function $f$ which is holomorphic on the disc radius $R$ about $0$. We want to show that there is a sequence $\{p_n\}$ of polynomials such that $\{p_n\} \rightarrow f$ uniformly on the circle centre $0$ radius $r<R$.
My thoughts are that by Taylor's Theorem we can expand $f$ as a power series on $D(0;R)$ so that $f(z)=\sum^{\infty}_{i=0}c_i z^i$ for $z \in D(0;R)$.
This leads me to propose defining $$p_n(z)= \sum^{n}_{i=0}c_i z^i$$
Clearly then $\{p_n\} \rightarrow f$ so we just require the convergence to be uniform.
Can anyone help me show the convergence is uniform?
The question then continues by asking us to evaluate $\int_{\alpha}z^k dz$ where $\alpha$ is the positively-oriented circle centre $0$ radius $r$.
I think that this integral is simply equal to zero by Cauchy's Theorem, though I'm more than happy to be corrected.
Now, we have to use this to show that there is no sequence of polynomials that converge uniformly to $\frac{1}{z}$ on $\alpha$. Can someone please assist with this part also.
Thanks
AI: Hint: if $r < t < R$, show that $|c_n| t^n $ is bounded, and use that to get uniform estimates for $\left|\sum_{i= n}^\infty c_i z^i\right|$ for $|z| \le r$.
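In more detail (a sketch): if $|c_n|t^n\le M$ for all $n$, then for $|z|\le r$,
$$\left|f(z)-p_n(z)\right|=\left|\sum_{i=n+1}^\infty c_iz^i\right|\le M\sum_{i=n+1}^\infty\left(\frac rt\right)^i=M\,\frac{(r/t)^{n+1}}{1-r/t}\to 0,$$
and the bound is independent of $z$, so the convergence is uniform on the circle of radius $r$.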
For the second part: yes, the integral is $0$ for nonnegative integers $k$. And so $\int_\alpha p(z)\ dz = 0$ if $p$ is a polynomial. For $1/z$, on the other hand, ...
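To finish the last part: $\int_\alpha z^{-1}\,dz=2\pi i\ne 0$. If polynomials $p_n\to 1/z$ uniformly on $\alpha$, then integration over the (finite-length) contour commutes with the uniform limit, giving
$$0=\lim_{n\to\infty}\int_\alpha p_n(z)\,dz=\int_\alpha\frac{dz}{z}=2\pi i,$$
a contradiction.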
H: Application of the Identity Theorem to $|x|^3$ for $-1<x<1$
Oxford Exam $2602$ $1997$ $Q3$
We want to show that there is no function $f$ which is holomorphic in $D(0;1)$ and such that $f(x)=|x|^3$ for $-1<x<1$.
Here are my thoughts thus far:
Suppose there is. Then $g(x)=f(x)-|x|^3=0$ for $-1<x<1$. Then by the Identity Theorem we must have that $g^{-1}(\{0\})$ has a limit point in $S:=\{x:-1<x<1\}$.
I'm hoping this leads to a contradiction, but I can't see how. Perhaps the method is incorrect.
Any help appreciated. Thanks.
AI: Hint: If $f(x)=|x|^3$ for $x\in (0,1)$, then $f(x)$ and $x^3$ agree on the interval $(0,1)$. Now use the identity theorem, and derive a contradiction.
Edit: For reference, my version of the identity theorem states:
If $f(z)$ and $g(z)$ are holomorphic functions on a domain $U$ (an open, path connected set), then if $\{ z \in U : f(z)=g(z)\}$ has a non-isolated point $w$, then $\forall z \in U,f(z)=g(z) $.
$\bigg($My definition of $w$ being a limit point for a set $S$ is that $\forall \epsilon > 0 , (B_\epsilon(w)\setminus\{w\})\cap S\not=\varnothing\bigg)$
H: General Linear Groups with Homomorphisms
Let $G=\mathrm{GL}_n(\mathbb R)$ and $H=\mathbb R^*$. Let $\phi : G=\mathrm{GL}_n(\mathbb R) \rightarrow \mathbb R^*$ be the map defined by $\phi(A)=\det(A)$. Show that $\phi$ is a group homomorphism.
If $\phi : G \rightarrow H$ is a group homomorphism, then the set $\{g \in G : \phi(g)=e_H\}$ where $e_H$ is the identity element of $H$ is called the kernel of $\phi$. Show that the kernel of $\phi$ is a subgroup of $G$.
AI: HINTS
You have $G=\text{GL}(n,\mathbb{R})$ with the operation of matrix multiplication and $H=\mathbb{R}^*$ with the operation of multiplication. You are given $\det : \text{GL}(n,\mathbb{R}) \to \mathbb{R}^*$. You need to show that composing two elements of $G$ and then moving the result over to $H$ gives the same as moving the two elements of $G$ over to $H$ and composing them there. Can you show that $\det(AB) = \det(A)\det(B)$?
To show that $\ker \phi < G$, you just need to check all of the group axioms. If $a,b \in \ker \phi$, then can you show that $ab \in \ker\phi$, i.e. if $\phi(a) = \phi(b) = 1_H$, can you show that $\phi(ab) = 1_H$? Associativity is inherited from $G$. Next you need to show that $1_G \in \ker\phi$, i.e. $\phi(1_G) = 1_H$. Finally, if $a \in \ker\phi$, can you show that $a^{-1} \in \ker\phi$?
H: Rotation of a point in 3d space
I'm trying to rotate a point around a single axis of a 3D system.
Given $P=\begin{pmatrix}
101 \\
102 \\
103
\end{pmatrix}
$,
And the rotation matrix formula for rotation around the X axis only, I get:
$Rx(\psi=90^\circ)=
\begin{pmatrix}
1 & 0 & 0 \\
0 & 0 & -1 \\
0 & 1 & 0
\end{pmatrix}$
The rotation yields
$Rx*P =\begin{pmatrix}101 \\ -103 \\ 102\end{pmatrix}$
...but I expected a 90 degree rotation around $x$ would mirror the $y$ coordinate, yielding $\begin{pmatrix}101 \\ -102 \\ 103\end{pmatrix}$. By now I'm utterly confused by the various rotation conventions, and I'd be grateful for any help in clearing up where I went wrong with this simple operation.
Thanks!
AI: The matrix multiplication you performed is correct... the expectation sentence is wrong.
Rotations and reflections are different, in general. Rotations are generally taken to be orientation preserving rigid motions, and it turns out that any matrix for such a rotation has determinant 1. Reflections, on the other hand, are orientation reversing, and they have to have determinant -1.
If it helps, we can think about plane geometry for a second. If you have a triangle drawn on a sheet of paper and a point marked as the origin, you can rotate this paper in place around the point on the paper. To perform a reflection you have to flip the paper over! This is the difference between a rotation and reflection in the 2-d plane.
Exercises to convince yourself the computation you did is correct:
1) You can confirm that the reflection you were performing was $\begin{bmatrix}1&0&0\\0&-1&0\\0&0&1\end{bmatrix}$, and that the determinants of this matrix and your original matrix are -1 and 1 respectively. This shows the two operations are definitely of different characters.
2) Look at the "shadow" of the point in the $y-z$ plane as the sun shines down the $x$ axis. (I mean $(102,103)$). If you rotate that 90 degrees around the $x$ axis, you can see that it lands in the second $y-z$ quadrant. Of course, this point is the shadow of the original point after rotation. Draw a triangle between the old shadow, the new shadow and the origin, and verify that it's a right triangle.
3) It would also be a good exercise to try a 45 degree rotation instead. Do a 45 degree rotation in the $y-z$ plane, and check to see where the point $(0,100,100)$ goes. It will certainly not be $(0,-100,100)$ :)
H: Why the root of this tree has to be "1"?
Arrange $2^{n-1}-1$ zeroes and $2^{n-1}$ ones in a balanced full binary tree of depth $n$. If we want the number of edges that connect the same (and respectively different) digits are the same, then one claims that the root of this tree has to be a one. Why is that?
For example, if $n=2$, then we need to arrange 1 zero and 2 ones. One arrangement that makes the number of edges that connect the same digits (which in this case is only one: the edge with a "+" who connects 2 ones,) and the number of edges that connect two different digits (which in this case is also one: the edge with a "-") are the same. Note that the root of this tree is unexceptionally 1.
1
/ \
+ / \ -
/ \
1 0
And here is a case where $n=3$.
1
/ \
+ / \ -
/ \
1 0
+ / \ - + / \ -
/ \ / \
1 0 0 1
Again, we see that the root is a one. If we change the root to a zero, however, then we can never find an arrangement that makes the number of "+" edges equals to the number of "-" edges. Why is that?
AI: Consider a balanced depth-$n$ binary tree of $0$'s and $1$'s and label each edge $+$ or $-$ as you did.
If all edges were $+$, all $2^n-1$ vertices would be the same as the root. You can switch
an edge from $+$ to $-$ or vice versa if you flip all the vertices above that edge (i.e. those such that the edge is on the path from the vertex to the root). This always involves
flipping an odd number of vertices, so the parity of the numbers of $1$'s and of $0$'s changes. If you do an even number of edge-switches, you have the same parity you started with. Now your tree has $2^n-2$ edges, and you need to do $2^{n-1}-1$ switches to get an equal number of $+$ and $-$ edges. If $n \ge 2$, this is odd, so the parity is different than at the start. At the start, there were an odd number the same as the root, so after the switching you must have an even number. Since you want the number of $0$'s to be odd,
the root must be $1$.
H: For x < 5 what is the greatest value of x
It can't be $5$. And it can't be $4.\overline{9}$ because that equals $5$. It looks like there is no solution... but surely there must be?
AI: There isn't one. Suppose there were; let's call it $y$, where $y<5$.
Let $\epsilon = 5 -y$, the difference between $y$ and 5. $\epsilon$ is positive, and so $0 < \frac\epsilon2 < \epsilon$, and then $y < y+\frac\epsilon2 < y+\epsilon = 5$, which shows that $y+\frac\epsilon2$ is even closer to 5 than $y$ was.
So there is no number that is closest to 5. Whatever $y$ you pick, however close it is, there is another number that is even closer.
Consider the analogous question: “$x < \infty$; what is the greatest value of $x$?” There is no such $x$.
H: Proof by induction that $\left(\sum_{k=1}^nk\right)^2\ge\sum_{k=1}^nk^2$
I am a bit new to logical induction, so I apologize if this question is a bit basic.
I tried proving this by induction:
$$\left(\sum_{k=1}^nk\right)^2\ge\sum_{k=1}^nk^2$$
Starting with the base case $n=1$:
$$1^2\ge1^2$$
Then to prove that $P(n)\to P(n+1)$:
$$\left(\sum_{k=1}^{n+1}k\right)^2\ge\sum_{k=1}^{n+1}k^2$$
$$\left((n+1)\sum_{k=1}^nk\right)^2\ge(n+1)^2\sum_{k=1}^nk^2$$
$$(n+1)^2\left(\sum_{k=1}^nk\right)^2\ge(n+1)^2\sum_{k=1}^nk^2$$
Since they're both multiplied by $(n+1)^2$, it's easy to see that if $\left(\sum_{k=1}^nk\right)^2\ge\sum_{k=1}^nk^2$, then $(n+1)^2\left(\sum_{k=1}^nk\right)^2\ge(n+1)^2\sum_{k=1}^nk^2$. But if the $\ge$ were replaced with a $\le$, the proof would still be valid, even though it's demonstrably not true.
I can see that strong induction would fix this problem, but if I didn't know by observation that $\left(\sum_{k=1}^nk\right)^2\le\sum_{k=1}^nk^2$ isn't true, how would I know to use strong induction?
AI: You're not quite on the right track. Your base case is just fine. For your induction step, suppose that $$\left(\sum_{k=1}^nk\right)^2\ge\sum_{k=1}^nk^2.$$ Note then that $$\left(\sum_{k=1}^{n+1}k\right)^2=\left((n+1)+\sum_{k=0}^nk\right)^2=(n+1)^2+2(n+1)\sum_{k=1}^nk+\left(\sum_{k=1}^nk\right)^2$$ and that $$\sum_{k=1}^{n+1}k^2=(n+1)^2+\sum_{k=1}^nk^2.$$ Can you get the rest of the way from there?
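A possible finish, using the expansions above together with the inductive hypothesis and the fact that $2(n+1)\sum_{k=1}^nk\ge 0$:
$$\left(\sum_{k=1}^{n+1}k\right)^2 \ge (n+1)^2+\left(\sum_{k=1}^nk\right)^2 \ge (n+1)^2+\sum_{k=1}^nk^2 = \sum_{k=1}^{n+1}k^2.$$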
Alternately, you can proceed directly, noting that $$\begin{align}\left(\sum_{k=1}^nk\right)^2 &= \left(\sum_{k=1}^nk\right)\left(\sum_{j=1}^nj\right)\\ &= \sum_{k=1}^nk\sum_{j=1}^nj\\ &= \sum_{k=1}^n\sum_{j=1}^njk\\ &= \sum_{k=1}^n\left(k^2+\underset{j\ne k}{\sum_{j=1,}^n}jk\right)\\ &= \sum_{k=1}^nk^2+\sum_{k=1}^n\underset{j\ne k}{\sum_{j=1,}^n}jk\\ &\ge \sum_{k=1}^nk^2.\end{align}$$
H: Trigonometric substitution integral
Trying to work around this with trig substitution, but end up with messy powers on sines and cosines... It should be simple use of trigonometric properties, but I seem to be tripping somewhere.
$$\int x^5\sqrt{x^2+4}dx $$
Thanks.
AI: You don't even need a trigonometric substitution. Let $u^2=x^2+4$. Then $2u\,du=2x\,dx$, so $x\,dx=u\,du$.
Rewrite $x^5\sqrt{x^2+4}\,dx$ as $x^4\sqrt{x^2+4}\,x\,dx$, and substitute. Note that $x^2=u^2-4$, so $x^4=(u^2-4)^2$. Finishing the substitution, we get $(u^2-4)^2 u^2\,du$. Expand and integrate. No trig!
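Carrying the computation through (a sketch): $(u^2-4)^2u^2=u^6-8u^4+16u^2$, so
$$\int x^5\sqrt{x^2+4}\,dx=\frac{u^7}{7}-\frac{8u^5}{5}+\frac{16u^3}{3}+C,\qquad u=\sqrt{x^2+4}.$$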
H: First derivative of $\sqrt[\large 5]{\frac{t^3 + 1}{t + 1}}$
I have yet another derivative I need help with. I have to differentiate :
$$\sqrt[\uproot{3}{\Large 5}]{\frac{t^3 + 1}{t + 1}}$$
with respect to $t$.
I had two thoughts about this, use the chain rule then the quotient rule and multiply out, but then I am left with a mess of:
$$\left(\frac{t^3+1}{t+1}\right)^{-4/5}\cdot\frac{0.6t^3+0.6t^2-0.2t^4+0.2t}{(t+1)^2}$$
This is turning into a real mess and the answer I should get is:
$$\frac{2t-1}{5(t^2-t+1)^{4/5}}$$
Am I going the correct way about this or should I try a different route?
Thanks
AI: The process you describe is one way to approach this: you can use both the chain rule and the quotient rule. But it looks to me as though you misunderstand the quotient rule:
$$h(x) = \frac{f(x)}{g(x)}$$
$$h'(x) = \frac{f'(x)\cdot g(x) - f(x)\cdot g'(x)}{g^2(x)}$$
where the factors in the numerator are multiplied, not added. (It also looks like you multiplied the coefficients by $1/5$?)
In your case, the rational function within the 5th root is $$h(x) = \frac{f(x)}{g(x)} = \frac{t^3 + 1}{t+1}$$
$$f(x) = t^3 + 1, \implies f'(x) = 3t^2;\quad g(x) = t+ 1\implies g'(x) = 1$$
$$h'(x) = \frac{f'(x)\cdot g(x) - f(x)\cdot g'(x)}{g^2(x)} $$
$$h'(x) = \frac{3t^2(t+1) - (t^3 +1)\cdot 1}{(t+1)^2}\;\;=\;\;\frac{2t^3 + 3t^2 - 1}{(t+1)^2}$$
Now, multiply $h'(x)$ by the result you got for the first factor: $$\frac 15\left(\frac{t^3+1}{t+1}\right)^{-4/5}= \frac15\left(\frac{t+1}{t^3 + 1}\right)^{4/5}$$
Which gives us:
$$\frac 15\left(\frac{t+1}{t^3 + 1}\right)^{4/5}\cdot \frac{2t^3 + 3t^2 - 1}{(t+1)^2}$$
Now, we can simplify. (The Chaz's method greatly simplifies the process of taking the derivative of your function. But I think it is a good idea to ensure you understand the quotient rule.)
Chaz's tip will also simplify the left factor above to $\frac15\left(\dfrac{1}{t^2 - t + 1}\right)^{4/5}$. After that, there is really no need to simplify. Answers can be correct, yet not match the "textbook's solution" perfectly, simply based on how we choose to simplify.
$$\frac{2t - 1}{5(t^2 - t + 1)^{4/5}}$$
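In fact the two answers agree exactly: the numerator factors as $2t^3+3t^2-1=(t+1)^2(2t-1)$, so
$$\frac15\left(\frac{1}{t^2-t+1}\right)^{4/5}\cdot\frac{(t+1)^2(2t-1)}{(t+1)^2}=\frac{2t-1}{5(t^2-t+1)^{4/5}},$$
which is the textbook's solution.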
H: Find integer solutions of $x^2 -px +q=0$, where $p$ and $q$ are prime
Quick number theory question that I have just come across, was wondering if anyone could shed some light on it.
So $p$ and $q$ are given to be prime numbers, and we are told that the equation $x^2 -px +q=0$ has two integer solutions. How might one find them?
Thanks for your help.
AI: If the roots are two integers $a$ and $b$, then $p=a+b$ while $q=ab$. Since $q$ is prime, without loss of generality we may assume $a=\pm 1$. Suppose $a=1$; then $b=q$ and $p=q+1$, which forces $q=2,p=3$ (since both $p$ and $q$ are prime and differ by one). If instead $a=-1$, then $b=-q$ and $p=-q-1<0$ cannot be prime, so there is no solution in that case.
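So the only possibility is $p=3$, $q=2$: the equation $x^2-3x+2=(x-1)(x-2)=0$ has the integer solutions $x=1$ and $x=2$.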
H: Inequality proof 2
How to prove the inequality : for real numbers $\alpha_1, \ldots \alpha_n, \beta_1, \ldots \beta_n$:
$$\sqrt{(\alpha_1 + \beta_1)^2+\cdots+(\alpha_n + \beta_n)^2} \leq \sqrt{\alpha_1^2 + \cdots + \alpha_n^2}+\sqrt{\beta_1^2+\cdots+\beta_n^2}.$$
Thanks!
AI: Hint: Square both sides
$(\alpha_1 + \beta_1)^2+\cdots+(\alpha_n + \beta_n)^2 \leq \alpha_1^2 + \cdots + \alpha_n^2+\beta_1^2+\cdots+\beta_n^2+ 2\sqrt{\left(\alpha_1^2 + \cdots + \alpha_n^2\right) \left(\beta_1^2+\cdots+\beta_n^2\right)}$
$\alpha_1 \beta_1+\cdots+\alpha_n \beta_n \leq \sqrt{\left(\alpha_1^2 + \cdots + \alpha_n^2\right) \left(\beta_1^2+\cdots+\beta_n^2\right)}$
The last inequality is exactly the Cauchy–Schwarz inequality.
H: Confused on definition of strong induction
I found the following statement in Munkres' Topology:
Theorem 4.2 (Strong induction principle). Let $A$ be a set of positive integers. Suppose that for each positive integer $n$, the statement $S_n \subset A$ [here $S_n = \{1, 2, \dots, n\}$] implies the statement $n \in A$. Then $A = \mathbb{Z}_+$.
Now, I think I understand strong induction. But what I don't understand here is, doesn't $S_n \subset A$ always imply $n \in A$? It's pretty much by definition of a subset, that if $\{1,2,\dots,n\} \subset A$ then $n \in A$. Did the author mean "Suppose that for each positive integer $n$ the statement $S_n \subset A$ is true"?
AI: If you check the previous page, you should see that $S_n$ is the set of positive integers that are less than $n$. $S_n$ here denotes the section of $\Bbb Z_+$ by $n$. See also the more general definition immediately before Lemma 10.2.
H: Prove or disprove: $\sum\limits_{i=1}^n i^2 = O(n^2) $
Prove or disprove:
$$\sum_{i=1}^n i^2 = O(n^2) $$
If we want to prove this, find some summation that we know the $ O(n)$ runtime for, and is $ O(n^2) $ or smaller.
Otherwise, we could disprove this by finding some summation that is less than this one, but has a $O(n)$ runtime that is greater than $ O(n^2)$.
No idea where to go from here though. Any assistance is appreciated.
AI: Did you know that $$\sum_{k=1}^n k^2=\frac{n(n+1)(2n+1)}6\text{ ? }$$
More generally, $$f_p(n)=\sum_{k=1}^n k^p$$ is a polynomial of degree $p+1$, with leading term $$\frac{n^{p+1}}{p+1}$$ You can prove this by induction.
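In particular,
$$\sum_{k=1}^nk^2=\frac{n(n+1)(2n+1)}{6}\ge\frac{n^3}{3},$$
which is not $O(n^2)$, so the claim is disproved; the correct statement is $\sum_{k=1}^nk^2=\Theta(n^3)$.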
H: Convergence of these series
$$\sum_{n=1}^\infty \frac{2nx^{n}}{(n+1)^{2}3^{n}} \tag{1}$$
$$\sum_{n=1}^\infty x^{n}\tan\frac{x}{4^{n}}\tag{2}$$
Is there any good article that describes equivalents like $$ \lim_{n\to\infty} \sin\frac{2\pi}{3^{n}} \sim \frac{2\pi}{3^{n}}\tag{3}$$
(am I right about $(3)$?)
AI: The assertion
$$ \lim_{x\to\infty} \sin\frac{2\pi}{3^{n}} \sim \frac{2\pi}{3^{n}}$$
is perhaps not precise enough for our needs. But it can be made precise, in a form useful for Ratio Test calculations.
We have
$$\lim_{t\to 0}\frac{\sin t}{t}=1.$$
We can replace $\sin\left(\frac{2\pi}{3^n}\right)$ by
$$\frac{\sin\left(\frac{2\pi}{3^n}\right)}{\frac{2\pi}{3^n}}\frac{2\pi}{3^n}.$$
The first term goes safely to $1$, while $\frac{2\pi}{3^n}$ "plays nicely" with the Ratio Test.
We can do precisely the same thing with $\tan\left(\frac{x}{4^n}\right)$ for $x\ne 0$. Since
$$\lim_{t\to 0}\frac{\tan t}{t}=1,$$
use
$$\frac{\tan\left(\frac{x}{4^n}\right)}{\frac{x}{4^n}}\frac{x}{4^n}$$
in the Ratio Test limit calculation. The front part has limit $1$, so it does not affect the Ratio Test limit.
Remark: More informally, we can simply replace $\tan(x/4^n)$ by $x/4^n$. However, that is likely to be considered too informal in a homework problem when an important focus of the course is careful justification. Later, when one is doing "real" work, we can be more casual, for we know that formal detail could be supplied.
In answer to an earlier comment of yours, $\cos x$ is nearly $1$ if $x$ is close to $0$. But it can be useful to know that since the first $2$ terms of the power series of $\cos x$ are $1-\frac{x^2}{2}$, near $0$ the function $1-\cos x$ behaves like $\frac{x^2}{2}$.
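For instance, combining this with the Ratio Test (a sketch): for series $(1)$,
$$\left|\frac{a_{n+1}}{a_n}\right|=\frac{2(n+1)|x|^{n+1}}{(n+2)^2\,3^{n+1}}\cdot\frac{(n+1)^2\,3^{n}}{2n\,|x|^{n}}\longrightarrow\frac{|x|}{3},$$
so $(1)$ converges for $|x|<3$ and diverges for $|x|>3$. Similarly, replacing $\tan\frac{x}{4^n}$ by $\frac{x}{4^n}$ as above, series $(2)$ has ratio limit $\frac{|x|}{4}$ and so converges for $|x|<4$.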
H: What is the name of a game that cannot be won until it is over?
Consider the following game:
The game is to keep a friend's secret. If you ever tell the secret, you lose. As long as you don't, you are winning. Clearly, it's a game that takes a lifetime to win.
Another example would be honor. Let's say, for our purposes, once you lose your honor you can't get it back. Call it a game. Then the only way to win is to never lose your honor.
Both of these games are highly unfavorable for the player. Because they are games that can never be won, until they are over. Is there a mathematical name for such a game?
AI: In some contexts, such a game is called a closed game. To understand the terminology, let I and II denote two players, and suppose that the players alternate indefinitely playing elements $m$ from some set $M$ of possible moves.
If we imagine that the game is played for infinitely many moves, we obtain an infinite sequence of moves $(m_1,m_2,m_3,\ldots)$. Let $M^{\mathbb{N}}$ denote the space of such sequences.
This space $M^{\mathbb{N}}$ has a natural topology, namely the product of the discrete topologies on $M$. Suppose that the payoff set (set of move sequences that count as a "win") for player I is closed in this topology. This means that whenever player I loses, it is because he or she made a mistake (e.g. telling the secret, in your example) at some finite stage of the game, or because the game was simply unwinnable. On the other hand, if player I wins it is because he or she has forever avoided making mistakes.
I am stretching your example here by assuming that the players are immortal, in which case the only way to win is by keeping the secret literally forever.
In this situation, the game is "over" when infinitely many years have passed. I think that this is the most interesting interpretation of the question; otherwise one could simply consider "taking the secret to the grave" as one of the possible moves $m \in M$, and this move has the property that playing it results in an immediate win for player I, which is not very interesting.
The next few paragraphs do not directly address the question, but help to explain why some people (e.g. set theorists) consider closed games to be interesting:
A nice feature of closed games (relative to infinite games in general) is
that they are determined. This is known as the Gale–Stewart theorem. To say that a game is determined means that one player or the other has a winning strategy that prescribes the moves to make in any given situation in order to win. If player I (the player with the closed payoff set) has a winning strategy, that strategy can be described simply as "don't make any mistakes" where a mistake is defined as a move after which player II can force a win in finitely many moves.
The more interesting case of the Gale–Stewart theorem is when player I does not have a winning strategy.
If the set $M$ of moves is finite, as in chess, then the lack of a winning strategy for player I means that there is some fixed finite number $n$ such that player II can force a win in $n$ moves (e.g. perhaps the starting position in chess is a mate in 73 moves for black, although this seems very unlikely.) Then a winning strategy for player II proceeds by always playing so as to reduce this number (e.g. mate in 72, then on the next move mate in 71, and so on until he or she inevitably wins.)
On the other hand, if the set of moves is infinite then the construction of a winning strategy for player II in the case that player I has no winning strategy is more complicated and involves transfinite ordinals. The issue is that, for example, if the possible moves for player I are $m_1,m_2,m_3,\ldots,$ it could be that making any of these moves leads to a win for player II, but making move $m_i$ allows player I to survive for $i$ many moves, so there is no fixed upper bound on how long it takes player II to win. In this case we would take the supremum and say that player II can force a win in $\omega$ moves. By always choosing moves that decrease this ordinal rank, player II always wins, because there is no infinite decreasing sequence of ordinals.
H: Extension of valuation to the algebraic extension of a number field.
I am trying to get the idea how we can extend the $p$-adic valuation on $\mathbb Q$ to an algebraic extension. In particular, how to extend the $p$-adic valuation for $p = 5$ from $\mathbb Q$ to $\mathbb Q(5^{1/3})$. Thank you.
AI: The analogue of the $p$-adic valuation of a number field depends upon the prime ideals of its ring of integers. Let $K$ be a number field with ring of integers $\mathcal{O}_K$, and let $\mathfrak{p}\lhd\mathcal{O}_K$ be a non-zero prime. Then for any non-zero element $x\in K$, we can factor the fractional ideal $x\mathcal{O}_K\subset K$ uniquely into a product of prime ideals, say $x\mathcal{O}_K=\mathfrak{p}_1^{e_1}\mathfrak{p}_2^{e_2}\cdots$, where the $e_i\in\mathbb{Z}$ and each $\mathfrak{p}_i\lhd\mathcal{O}_K$ is prime. The $\mathfrak{p}_i$-adic valuation $\text{ord}_{\mathfrak{p}_i}(x)$ is then the exponent $e_i$ in the prime factorisation of $x\mathcal{O}_K$.
In your particular example, $K=\mathbb{Q}(\sqrt[3]{5})$, the rational prime $5$ ramifies as $(5)=\mathfrak{p}^3$, where $\mathfrak{p}=(5,\sqrt[3]{5})$, so you could consider the $\mathfrak{p}$-adic valuation on $K$.
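Concretely (a sketch worth double-checking): here $\mathcal O_K=\mathbb Z[\sqrt[3]{5}]$ and in fact $\mathfrak p=(5,\sqrt[3]{5})=(\sqrt[3]{5})$, since $5=(\sqrt[3]{5})^3$. Hence $\operatorname{ord}_{\mathfrak p}$ restricts to $3\operatorname{ord}_5$ on $\mathbb Q$, and the normalized valuation
$$v(x)=\tfrac13\operatorname{ord}_{\mathfrak p}(x)$$
extends the $5$-adic valuation to $K$, with $v(\sqrt[3]{5})=\tfrac13$.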
H: Some questions about variations of fixed point method
I'm doing some exercises in Fixed Point Iteration methods with Matlab. I have to find roots for $f(x)=e^x -x -1.9\cos x$ by using $x_{n+1}=g(x_n)$. I know how to choose $g(x)$ such that I can find both roots. The next part of the exercise is as follows:
In general we can rewrite $f(x)=0$ to $x=g(x)$ with $g(x)=x+cf(x)$, where $c\neq 0$ is a constant.
I am given the function $f(x)=e^x-x-1.9\cos x$, and I have to determine values for $x_0$ and $c$ such that the sequences obtained from $x_{n+1}=g(x_n)$ both converge quickly, to roots $x_{r_1}$ and $x_{r_2}$.
I'm told that this can be obtained by picking a point $x_0$ close to the zero and setting $g'(x_0)=0$ by an appropriate choice of $c$. My question is, what is this all about? What should I do?
Then, another part says
Increase the order of convergence of the fixed point method $x_{n+1}=x_n+cf(x_n)$ by changing $c$ each iteration instead of using a constant value.
How can I do that? What do they mean?
AI: A central tenet in calculus is that we can understand the local behavior of a function near a point by examining the linear approximation of the function at that point. Thus, let's begin by exploring the behavior of iteration near fixed points of linear functions.
To this end, consider the function $\ell(x) = x_f + m(x-x_f)$. This clearly has a fixed point at $x_f$. Now, suppose that we iterate this function starting from some point $x_0$. Note that
$$\ell(\ell(x_0)) = x_f + m((x_f + m(x_0-x_f)) - x_f) = x_f + m^2 (x_0 - x_f).$$
More generally,
$$\ell^p(x_0) = x_f + m^p (x_0 - x_f).$$
As a result, we see quite easily that the orbit of $x_0$ tends towards $x_f$ iff $|m|<1$. Furthermore, the smaller the value of $m$, the faster the convergence. The convergence is fastest (instantaneous, in fact) when $m=0$.
Now, let's consider the iteration of a differentiable function $g$ near a fixed point $x_f$, thus $g(x_f)=x_f$. Suppose we iterate $g$ starting from some point $x_0$ near $x_f$. By analogy with the linear example we just examined, we might expect $x_f$ to be attractive if the slope of $g$ is less than $1$ in absolute value at the point $x_f$. More precisely, we need $|g'(x_f)|<1$.
Now, in your example, you have a given function $f(x)=e^x-x-1.9\cos(x)$ and you'll generate $g_c$ by setting $g_c(x)=x+c\,f(x)$. Since you have a computational tool in your hands, I suggest that you examine the graph of $g_c$ together with the graph of $y=x$ to help you choose a good value of $c$. For example, here's the graph of $y=g_{-1/2}(x)$ together with the graph of $y=x$.
The points of intersection correspond to the roots of $f$. It appears that one such point is around $0.7$ or so and that iteration of $g_{-1/2}$ starting near there should converge to that fixed point. It does appear, though, that you could pick a value of $c$ that is better than $-1/2$. Note that it is important to plot the graph in it's correct aspect ratio to get a good feel for the magnitude of the derivative.
Finally, if you start with a good $c$ value, then refinement should not be necessary. I guess the idea, though, is to choose $c$ at the $i^{\text{th}}$ step so that $g_c'(x_i)=0$. Since $x_i$ is close to $x_f$, we'd then expect $g_c'(x_f)$ to be close to zero.
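To make both parts of the exercise concrete (a sketch): since $g_c'(x)=1+c\,f'(x)$, requiring $g_c'(x_0)=0$ gives
$$c=-\frac{1}{f'(x_0)},$$
which makes the slope of $g_c$ near the fixed point close to $0$ and hence the convergence fast. If instead you update $c=-1/f'(x_n)$ at every iteration, the scheme becomes
$$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)},$$
i.e. Newton's method, which raises the order of convergence to $2$ near a simple root.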
H: 1st derivative of $\frac{2x}{\sqrt{x^2 + 1}}$
Another simple question that I can't work out today, yet I would work it out two weeks ago!
I need to find the 1st derivative of $$\frac{2x}{\sqrt{x^2 + 1}}$$.
So I use the Quotient rule and I get: $$\frac{(x^2 + 1)^{0.5} (2) - (2x)(0.5x^2 + 0.5)^{-0.5}}{x^2+1}$$
Am I heading in the correct direction and do I just need to multiply and try to get rid of the exponents somehow?
Thanks
AI: The numerator is not quite right. We want $(x^2+1)^{1/2}(2)-2x g'(x)$ where $g(x)=(x^2+1)^{1/2}$. That derivative will be $(2x)(1/2)(x^2+1)^{-1/2}$.
The denominator is right. Now, if it is useful, you can do some manipulation ("simplification").
Remark: For square roots, I think you will find $\sqrt{x^2+1}$ easier and safer to work with than the exponent notation. For other roots, the advantage goes over to the exponent notation. |
H: Probability of two random n-digit numbers dividing each other
Let $n$ be a positive integer. Suppose $a$ and $b$ are randomly (and independently) chosen two $n$-digit positive integers which consist of digits 1, 2, 3, ..., 9. (So in particular neither $a$ nor $b$ contains digit 0; I am adding this condition so that division by $b$ will be possible, and that we don't get numbers of the form $0002$ and so on). Here "randomly" means each digit of $a$ and $b$ is equally likely to be one of the 9 digits from $\{1,2,3,..., 9\}$.
My question concerns the divisibility of these integers:
1) What is the probability that $b$ divides $a$ ?
The answer, of course, will depend on $n$. Denote this probability by $p(n)$. I would be happy with rough estimates for $p(n)$ as well :)
2) Is it true that $p(n)\to 0$ as $n\to\infty$?
I think the answer to question 2) is yes (just by intuition).
AI: As $a,b$ are $n$-digit numbers, we have $10^{n-1}\le a,b<10^n$. If $b|a$, this implies $\frac ab\in\{1,2,\ldots,9\}$. So at most $9$ of the $9^n$ possible $b$ are divisors of $a$. $p(n)\le \frac 9{9^n}$.
While this estimate is far from sharp, it shows $p(n)\to0$ as $n\to\infty$.
Can we calculate $p(n)$ more precisely? Your forbidding zeroes partly makes this more difficult, as divisibility of $a$ by the small numbers $d\in\{1,2,\ldots,9\}$ becomes less easily predictable.
We need to calculate, for $d\in\{1,\ldots,9\}$, the probability that $a$ is both a multiple of $d$ and is $\ge d\cdot 111\ldots1$ (where $111\ldots1$ is the smallest $n$-digit number without zeroes) and $\frac ad$ has no zeroes.
The probability that $a\ge d\cdot 111\ldots 1$ is approximately(!) $\frac{9-d}8$.
The probability that $a$ is divisible by $d$ is approximately(!) $\frac1d$ and is almost(!) independent of whether or not $a$ is $\ge d\cdot 111\ldots1$.
The probability that, if $a$ is indeed divisible by $d$, the number $\frac ad$ has no zeroes is at most $1$; in fact for $d>1$ it looks like it decreases $\to0$ as $n\to\infty$. Therefore we estimate the expected number of zero-free $n$-digit divisors of $a$ as approximately (and likely less than)
$$\sum_{d=1}^9 \frac{9-d}{8}\cdot\frac1d=\frac{4609}{2240}\approx 2.06$$
and hence get $$\tag1p(n)\approx \frac{2.06}{9^n}.$$
For small $n$, this estimate is quite a bit off. For example, it gives $p(1)\approx0.2286$ instead of the correct value $p(1)=\frac{23}{81}=0.28395\ldots$, next $p(2)\approx 0.0254$ instead of $p(2)=\frac{163}{6561}=0.02484\ldots$, and next $p(3)\approx 0.00282$ instead of $p(3)=\frac{463}{177147}=0.00261\ldots$. However, we observe already here (though only empirically!) that the estimate $(1)$ is an upper bound for $n\ge 2$.
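These exact values are cheap to check by brute force. The following sketch (my own code, not part of the argument above) uses the observation from the first paragraph that $b\mid a$ forces $a/b\in\{1,\ldots,9\}$:

```python
from itertools import product
from fractions import Fraction

def p(n):
    """Exact p(n): count pairs (a, b) of zero-free n-digit numbers with b | a."""
    zero_free = {int("".join(t)) for t in product("123456789", repeat=n)}
    hits = sum(1 for a in zero_free for d in range(1, 10)
               if a % d == 0 and a // d in zero_free)
    return Fraction(hits, 9 ** (2 * n))

for n in (1, 2, 3):
    print(n, p(n), float(p(n)))  # compare with the exact values quoted above
```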
H: Radioactive isotopes differential equation
I am having a hard time finding the correct differential equation to my problem. The problem is :
There's 2 isotopes: A and B. A is transforming into B at a rate proportional to its quantity, and B is decreasing at a rate proportional to its quantity.
I need to find an equation that gives the quantity of the B isotope at time t.
I've found the equation for the loss of A over time, but I don't know how to incorporate it into the differential equation for B.
The quantity of the A isotope that has turned into B over time is $y(t) = y_i e^{kt}$.
edit
B is not a fixed quantity: it is what A loses. The rate at which B is lost is proportional to the quantity of B at that time, which equals the quantity of A lost so far minus the quantity of B that has been lost over time.
Thanks for the help
AI: Let $A(t)$ be the amount of substance A at time $t$, and let $B(t)$ be the amount of substance B at time $t$.
For suitable decay constants $a$ and $b$, we have
$$A'(t)=-aA(t),$$
and
$$B'(t)=-A'(t)-bB(t)=aA(t)-bB(t).$$
The second equation comes from the fact that B atoms are "born" (through decay) at rate $aA(t)$, and die at rate $bB(t)$.
How we handle these is a matter of taste. If it is familiar, we can express the system in matrix notation. The more straightforward approach is to solve the first equation explicitly for $A(t)$, and substitute in the second equation. We obtain a fairly simple but non-homogeneous linear equation. |
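For concreteness, here is that computation carried out, assuming $a\neq b$ and initial amounts $A(0)=A_0$, $B(0)=B_0$. The first equation gives $A(t)=A_0e^{-at}$, so the second becomes $B'(t)+bB(t)=aA_0e^{-at}$. Multiplying by the integrating factor $e^{bt}$ and integrating from $0$ to $t$ yields
$$B(t)=B_0e^{-bt}+\frac{aA_0}{b-a}\left(e^{-at}-e^{-bt}\right).$$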
H: How to show in a clean way that $z^4 + (x^2 + y^2 - 1)(2x^2 + 3y^2-1) = 0$ is a torus?
How to show in a clean way that the zero-locus of $$z^4 + (x^2 + y^2 - 1)(2x^2 + 3y^2-1) = 0$$ is a torus?
AI: This took me longer than it should have, given the simplicity of the answer.
First, analyze the equation for when it has solutions: clearly, we require $z^4 \geq 0$, and thus $(x^2 + y^2 - 1)(2x^2 + 3y^2 - 1) \leq 0$; that is, one factor must be nonnegative and the other nonpositive. Each factor vanishes on a plane curve that's a conic section: respectively the unit circle centered at $(0,0)$ and the ellipse centered at $(0,0)$ with semi-major axis of length $1/\sqrt{2}$ along the $x$-axis and semi-minor axis of length $1/\sqrt{3}$ along the $y$-axis. The points $(x,y)$ with a corresponding $z$ above them are exactly those inside one of those curves and outside the other; since the ellipse is completely contained in the circle, the corresponding set of points is the region bounded by the circle with the interior of the ellipse removed, an annular region.
Now, for any $(x,y)$ in the annulus, the corresponding $z$ is a fourth root of some associated nonnegative number; any positive real number has exactly two real fourth roots, one positive and one negative, while zero of course has only one, which is zero.
Putting this together, literally, we see that the implicit equation describes a shape that's obtained from the annulus as follows:
Make two copies, superimposed;
Glue their edges to the $xy$-plane;
With one copy, "inflate upwards", and with the other, "inflate downwards". The edges remain fixed but the interiors flex away from the $xy$-plane.
The result is, in short, two annuli glued in the natural way along their boundaries, which is a topological torus.
Of course, I didn't use any details from the question other than to verify that the "shadow" of the torus was an annulus. In fact, that's all that matters: that the equation is of the form
$$z^2 + f(x,y) g(x,y) = 0$$
where the zero sets of $f$ and $g$ are closed plane curves one contained in the interior of the other. The resulting surface will always be a torus (and you won't always be able to use polar coordinates to figure it out).
(edit: Just noticed I wrote $z^2$ there rather than $z^4$. Actually, that would work too, as would any even power of $z$.) |
H: Riemann integral show $f(x)=g(x)$ for at least 1 $x$ in [a,b]
Let $f$ and $g$ be continuous functions on $[a,b]$ such that $\int_a^b f = \int_a^b g$. Show that there exists $x\in [a,b]$ such that $f(x) = g(x) $.
I want to assume not and then show that the integrals cannot be equal. But perhaps an argument on upper or lower sums of both functions could find the point?
AI: Let $f$ and $g$ be continuous functions on $[a, b]$ such that
$$
\int_a^b f(x)\ dx = \int_a^b g(x)\ dx.
$$
Then
$$
\int_a^b \left(f(x)-g(x)\right)\ dx = 0.
$$
Since $f$ and $g$ are continuous, $f-g$ is continuous. Hence, if $f-g$ is never zero on $[a, b]$, then by the Intermediate Value Theorem it must be strictly positive or strictly negative on all of $[a, b]$. But then $\int_a^b (f(x)-g(x))\ dx \neq 0$, since a continuous function of constant sign on $[a,b]$ has nonzero integral, which is a contradiction.
H: Confusion about Lemma 13.2 in Munkres' topology (property which implies that a collection is a basis for a topology)
Lemma 13.2 and its proof confuse me.
$X$ is a topological space and $C$ is a collection of open sets of $X$ satisfying a property. A specific topology is not mentioned in the lemma. In the proof, he assumes the properties of topologies are properties of $C$. Is $C$ a topology? Is this something that should be proven, or was it implicit in the language of the lemma?
He says he will often omit mentioning the topology $T$. But he defined open just in terms of the basis, and so in my mind, I don't associate an open set with any particular topology. Or in this case, a collection of open sets doesn't translate to topology. Is it supposed to?
AI: As you say, the topology is usually omitted, i.e. left implicit. The statement
Let $X$ be a topological space.
means
Let $X$ be a set, together with a chosen topology $\mathcal{T}$ on $X$.
the $\mathcal{T}$ being implicit. The collection $\mathcal{C}$ is exactly what he says in the statement of the theorem, namely,
$\mathcal{C}$ is a collection of open sets of $X$ such that, for each open set $U$ of $X$ and each $x\in U$, there is an element $C$ of $\mathcal{C}$ such that $x\in C\subset U$.
In other words,
$\mathcal{C}$ is a subset of $\mathcal{T}$ such that, for each $U\in\mathcal{T}$ and each $x\in U$, there is an element $C\in\mathcal{C}$ such that $x\in C$ and $C\subset U$.
The topology on $X$ was already chosen; he did not define the topology on $X$ in terms of any basis. The goal here is to prove that, if $\mathcal{C}$ is a collection of subsets of $X$ satisfying the above criterion, then $\mathcal{C}$ is a basis for the topology $\mathcal{T}$ on $X$.
Because elements of $\mathcal{C}$ are open subsets of $X$ (as defined by the topology $\mathcal{T}$), or in other words $\mathcal{C}\subset\mathcal{T}$, the elements of $\mathcal{C}$ will have the properties of open sets: for example, an intersection of elements of $\mathcal{C}$ will be an open set, because the intersection of any two open sets is again an open set. It need not be the case that an intersection of elements of $\mathcal{C}$ will again be an element of $\mathcal{C}$. |
H: Laurent expansion problem
Expand the function $$f(z)=\frac{z^2 -2z +5}{(z-2)(z^2+1)} $$ on the ring $$ 1 < |z| < 2 $$
I used partial fractions to get the following $$f(z)=\frac{1}{(z-2)} +\frac{-2}{(z^2+1)} $$
then
$$ \frac{1}{z-2} = \frac{-1}{2(1-z/2)} = \frac{-1}{2} \left[1+z/2 + (z/2)^2 + (z/2)^3 +\cdots\right]
$$
but I'm stuck with $$ \frac{-2} {z^2+1} $$ how can I expand it?
AI: Hint: Find $A,B$ such that $$\frac{-2}{z^2+1}=\frac{A}{z+i}+\frac{B}{z-i},$$ then proceed in a similar fashion to what you did with the term $\frac1{z-2}.$ Alternately, note that $1<|z|$ if and only if $1<|z|^2=|z^2|$ if and only if $|\frac1{z^2}|<1,$ and just work with $\frac{-2}{z^2+1}.$ |
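To spell the second route out: on the annulus we have $\left|\frac{1}{z^2}\right|<1$, so a geometric series applies directly,
$$\frac{-2}{z^2+1}=\frac{-2}{z^2}\cdot\frac{1}{1+\frac{1}{z^2}}=\frac{-2}{z^2}\sum_{n=0}^{\infty}\frac{(-1)^n}{z^{2n}}=\sum_{n=0}^{\infty}\frac{2(-1)^{n+1}}{z^{2n+2}},\qquad |z|>1.$$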
H: Closed form of an integral
Is there a closed form of $$\int\limits_0^1\frac{\arctan(\sqrt{x^2+2})}{(1+x^2)\cdot(\sqrt{x^2+2})}dx \quad \text{?}$$
I just know that $\int\limits_0^1\frac{\arctan(\sqrt{x^2+2})}{(1+x^2)\cdot(\sqrt{x^2+2})}dx = 0.514042...$
AI: This is called the Ahmed integral.
$$ \int_{0}^{1} \frac{\tan^{-1}\sqrt{x^2 + 2}}{(x^2 + 1)\sqrt{x^2 + 2}} \, dx = \frac{5\pi^2}{96} . $$
You can see a solution here. |
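As a quick numerical sanity check (a sketch assuming SciPy is available; `quad` is its standard adaptive quadrature routine):

```python
from math import atan, sqrt, pi
from scipy.integrate import quad

integrand = lambda x: atan(sqrt(x * x + 2)) / ((x * x + 1) * sqrt(x * x + 2))
value, abserr = quad(integrand, 0, 1)
print(value)           # ≈ 0.514042...
print(5 * pi**2 / 96)  # ≈ 0.514042...
```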
H: Diagonalizable matrices in $M_{2\times 2}(\mathbb{F}_2)$
List all diagonalizable $2\times 2$ matrices over a field $F$ consisting of two elements $0$ and $1$.
I want to try and do this using C++, but perhaps this isn't the place to ask. I have an idea as to how I'd do it.
AI: If $M$ is diagonalizable, we have $M=PDP^{-1}$ with $D$ diagonal. The diagonal entries of $D$ lie in $\mathbb{F}_2=\{0,1\}$ and each satisfies $x^2=x$, so $D^2=D$, and hence $M^2=M$. Conversely, if $M^2=M$, then $M$ is annihilated by $X^2-X=X(X-1)$, which has simple roots and splits over $\mathbb{F}_2$. So $M$ is diagonalizable.
Hence the diagonalizable matrices are exactly the idempotents, i.e. $M$ such that $M^2=M$. This gives three possibilities for the minimal polynomial of $M$: $X$, $X-1$, and $X^2-X$. In the latter case, it must be equal to the characteristic polynomial as well.
We get the null and the identity matrix with the first two cases, and the matrices whose characteristic polynomial is $X^2-X$ in the last case. The latter correspond to the matrices $M$ with $\mbox{trace} \;M=1$ and $\det M=0$:
$$
M=\pmatrix{a&b\\ c&d}\qquad a+d=1\qquad ad-bc=0.
$$
This leaves a few cases to consider, and in the end this gives eight diagonalizable matrices:
$$
\pmatrix{0& 0\\0&0}\;\pmatrix{1& 0\\0&1}\;\pmatrix{1& 0\\0&0}\;\pmatrix{1& 1\\0&0}\; \pmatrix{1& 0\\1&0}\;\pmatrix{0& 0\\0&1}\;\pmatrix{0& 0\\1&1}\;\pmatrix{0& 1\\0&1}.
$$
Conclusion: $50\%$ of the matrices are diagonalizable in $M_2(\mathbb{F}_2)$. |
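Since you mentioned wanting to do this computationally: by the argument above it suffices to enumerate the idempotents. Here is a brute-force sketch (in Python rather than C++, but the idea ports directly):

```python
from itertools import product

def square_mod2(a, b, c, d):
    """Square of the matrix [[a, b], [c, d]] with entries reduced mod 2."""
    return ((a * a + b * c) % 2, (a * b + b * d) % 2,
            (c * a + d * c) % 2, (c * b + d * d) % 2)

idempotents = [M for M in product((0, 1), repeat=4) if square_mod2(*M) == M]
print(len(idempotents))  # 8, i.e. half of the 16 matrices over F_2
for a, b, c, d in idempotents:
    print([[a, b], [c, d]])
```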
H: How to find the unknown values in this Numerical Integration type?
Given the following type of numerical integration:
$$I(f)=\int_0^1 f(x) \, dx \approx \frac 12 f(x_{0}) +c_1 f(x_1) $$
a) Find the values of the coefficient $c_1$ and the points $x_0$ and $x_1$ so that the above numerical integration formula is as accurate as possible.
b) Find the error of the numerical integration formula in (a).
c) What is the degree of accuracy of the numerical integration formula in (a)?
AI: We need to interpret "as accurate as possible." One common interpretation is that it is dead on for $f(x)$ identically equal to $1$, for $f(x)=x$, for $f(x)=x^2$, and so on for as long as possible.
If the formula is to give the right answer for $f(x)=1$ (and hence for $f(x)$ any constant function) we need $c_1=\frac{1}{2}$.
For the formula to give the right answer for $f(x)=x$, we then need
$\frac{1}{2}(x_0+x_1)=\int_0^1 x\,dx=\frac{1}{2}$. So we need $x_0+x_1=1$.
For the formula to give the right answer for $f(x)=x^2$, we need
$\frac{1}{2}(x_0^2+x_1^2)=\int_0^1 x^2\,dx=\frac{1}{3}$. So we need $x_0^2+x_1^2=\frac{2}{3}$.
Solve the equations $x_0+x_1=1$, $x_0^2+x_1^2=\frac{2}{3}$ for $x_0$ and $x_1$. We get $x_0=\frac{1}{2}\left(1-\frac{\sqrt{3}}{3}\right)$ and $x_1=\frac{1}{2}\left(1+\frac{\sqrt{3}}{3}\right)$.
Remarks: $1.$ This little problem connects with important mathematics. For detail, look for Gaussian quadrature. It turns out that by choosing suitable not uniformly distributed points, and suitable weights, one can produce numerical integration procedures that are extremely efficient. This is particularly important in situations where function evaluation is "expensive."
$2.$ We did not try to get the right answer for $x^3$, since obviously $x_0$ and $x_1$ were determined when we dealt with $x^2$. However, we get a nice bonus! It turns out the formula is exact for $x^3$. This is not hard to show, all we need to do is to verify that $\frac{1}{2}(x_0^3+x_1^3)=\frac{1}{4}$. That is a short calculation.
So the formula gives dead on right answers for $1$, $x$, $x^2$, and $x^3$. Thus, by linearity the formula is dead on for all polynomials of degree $\le 3$. |
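A quick numerical check of this (a minimal sketch; the rule and nodes are exactly those derived above):

```python
from math import sqrt

x0 = 0.5 * (1 - sqrt(3) / 3)
x1 = 0.5 * (1 + sqrt(3) / 3)

def rule(f):
    """The two-point rule (1/2) f(x0) + (1/2) f(x1)."""
    return 0.5 * f(x0) + 0.5 * f(x1)

for k in range(5):
    approx = rule(lambda x: x ** k)
    exact = 1 / (k + 1)        # integral of x^k over [0, 1]
    print(k, approx, exact)    # agree for k = 0, 1, 2, 3; differ for k = 4
```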
H: Ring Inside an Algebraic Field Extension
Let $E|F$ be an algebraic field extension and let $K$ be a ring such that $F\subseteq K\subseteq E$. Is it true that $K$ is a field?
AI: Yes. Suppose $0\ne k\in K$. Since $k\in E$ and $E/F$ is algebraic, we have some minimal polynomial $x^n+a_{n-1}x^{n-1}+\cdots+a_0$ with coefficients in $F$ that is satisfied by $k$. By minimality the coefficient $a_0$ must be nonzero, so it has an inverse $a_0^{-1}$ in $F$. Then $k(-a_0^{-1})(k^{n-1}+a_{n-1}k^{n-2}+\cdots+a_1)=1$, so $k^{-1}=(-a_0^{-1})(k^{n-1}+a_{n-1}k^{n-2}+\cdots+a_1)$. Since $k$ and each $a_i$ are in $K$, it follows that $k^{-1}$ is in $K$. Thus $K$ is a field. |
H: Question regarding $\lim_{x \to 0} \left(\exp(\sin (x)) + \exp \left(\frac{1}{\sin (x)}\right)\right)$
I wanted to find out whether the following limit exists, and find the value if it does.
$$\lim_{x \to 0} \left(\exp(\sin (x)) + \exp \left(\frac{1}{\sin (x)}\right)\right).$$
Attempt
After many attempts to prove that the limit exists, I looked it up on Wolfram Alpha, and it turns out that it doesn't. So I set out to prove that. By the algebra of limits we have
$$\lim_{x \to 0} \left(\exp(\sin (x)) + \exp \left(\frac{1}{\sin (x)}\right)\right)= \lim_{x \to 0} \exp(\sin (x)) + \lim_{x \to 0} \exp \left(\frac{1}{\sin (x)}\right).$$
Now, let $$f(x)=\frac{1}{\sin (x)} \ \text{and} \ g(x)=\exp (x),$$ and consider the sequences
$$(y_n)=\dfrac{1}{\frac{\pi}{2}+ 2n\pi} \ \text{and} \ (z_n)=\dfrac{1}{-\frac{\pi}{2}+ 2n\pi }\text{.}$$
Then $(y_n) \rightarrow 0$ and $(z_n) \rightarrow 0$ as $n \rightarrow \infty$, however $f(y_n)=1$ and $f(z_n)=-1$ for all $n \in \mathbb{N}$.
Hence $\lim_{n \to \infty}f(y_n) \neq \lim_{n \to \infty}f(z_n)$ and $\lim_{x \to 0} \frac{1}{\sin (x)}$ does not exist.
If the working above is correct, I then would like to show that
(1) since $\lim_{x \to 0} \frac{1}{\sin (x)}$ does not exist, then $\lim_{x \to 0} \exp \left(\frac{1}{\sin (x)}\right)$ does not exist,
(2) if $\lim_{x \to 0} \exp \left(\frac{1}{\sin (x)}\right)$ does not exist, then $\lim_{x \to 0} \exp(\sin (x)) + \lim_{x \to 0} \exp \left(\frac{1}{\sin (x)}\right)$ does not exist.
Question
A. Is my working above correct?
B. How do I show (1) and (2)? I know the rule for finding the limit of the composition of two functions and the addition of limits, if the limit exists, but not if the limit doesn't exist.
C. Is there any other way to tackle this question?
D. Could you give me some tips to identify if the limit of a function does not exist, without any aid of computers? (In order to not waste time trying to prove the existence of the limit when it actually doesn't.)
Thank you for your time.
AI: Note that $\frac{1}{\sin x}$ blows up as $x$ approaches $0$ from the right. So the limit does not exist. Even if we allow infinite limits, the limit does not exist, for our function approaches $1$ as $x$ approaches $0$ from the left.
Remark: If the question is changed to $\lim_{x\to 0^-}$ (limit as $x$ approaches $0$ from the left), then the limit does exist, and is equal to $1$.
The calculations in the post for values of $x$ of special form are not useful for this problem. The function $\sin x$ behaves very smoothly for $x$ near $0$, and it makes no difference if $x$ is of special shape such as $\frac{1}{2n\pi+\frac{\pi}{2}}$. Such values of $x$ would be very important if we were studying expressions that involve $\sin(1/x)$ for $x$ near $0$.
As to your question about ways other than using computers, it is a matter of visualizing the behaviour of these fairly familiar functions. For positive $x$ near $0$, $\sin x$ is close to $0$ and positive, so $\exp(1/\sin x)$ is really really big. For negative $x$ near $0$, $\frac{1}{\sin x}$ is big negative, so $\exp(1/\sin x)$ is close to $0$. |
H: Why is $2\pi i \neq 0?$
We know that $e^{\pi i} = -1$ because of Euler's formula ($e^{\pi i} = \cos \pi + i\sin \pi = -1$).
Suppose we square both sides and get $e^{2\pi i} = 1$ (which you also get from Euler's formula); then shouldn't $2\pi i=0$? What am I missing here?
AI: You have shown that $e^{2\pi i} = e^0$. This does not imply $2\pi i = 0$, because $e^z$ is not injective.
You have to give up your intuition about real functions when you move them to the complex plane, because they change a lot. $e^z$ is actually periodic for complex $z$, with period $2\pi i$: indeed $e^{z+2\pi i}=e^z e^{2\pi i}=e^z$ for every $z$.
H: Help with modular arithmetic
If $r_1,r_2,r_3,r_4,\ldots,r_{\phi(a)}$ are the distinct positive integers less than $a$ and coprime to $a$, is there some way to easily calculate $$\prod_{k=1}^{\phi(a)}\operatorname{ord}_{a}(r_k)\,?$$
AI: The claim is true, with the stronger condition that there is some $i$ with $e_i=1$ and all other exponents are zero. The set of $r_i$'s is called a reduced residue system.
The second (now deleted) claim is false. Let $a=7$. Then $2^13^1=6^1$, two different representations. |
H: Weierstrass $M$-test problem, $f_n(x)=(nx^2)/(n^3+x^3)$
Use the Weierstrass M-test to show $$f(x)=\sum_{n=1}^\infty \frac{nx^2}{n^3+x^3}$$ converges uniformly on any finite interval $[-R,R]$.
This was an exam question I had. My attempt was to find an upper bound for $\frac{nx^2}{n^3+x^3}$ by taking the derivative and finding critical points. One issue I had was that the function isn't defined if $x = -n$ for any $n \in \mathbb{N}$. I suppose we could just take the sum to start at $\mathrm{Ceil}(R)$.
My attempts to find critical points for the derivative led me to concluding $f_n'(x)=0$ if $x=0$ or $x$ has
$$-nx^3-n^4x+2n^4=0.$$
The class is over, is there any kind of hints to do this or was I on the right track? Thanks very much
AI: For $n>R\geq x>0$, $$\frac{nx^2}{n^3+x^3}\leq \frac{x^2}{n^2}\leq \frac{R^2}{n^2}$$
and a similar bound can be found when $x<0$, by the fact that $-R\leq x$.
which gives you uniform convergence on any interval $[-R,R]\backslash (-\mathbb{N})$. As you rightfully point out, $x$ cannot be a negative integer. Notice that uniform convergence essentially only cares about how the tail of the series behaves, and not about a fixed (i.e. not depending on $x$) finite number of terms, in this case roughly $R$ of them.
H: $\int^{\pi/2}_{0}\log|\sin x| \,dx = \int^{\pi/2}_{0}\log|\cos x| \,dx $
Prove that :
$$\int^{\pi/2}_0 \log|\sin x| \,dx = \int^{\pi/2}_0 \log|\cos x| \,dx $$
I tried to cut the integral into a sum of parts and change variables, but it didn't work out right. I don't know how to solve this kind of problem any other way; any hint will be much appreciated!
AI: Hint: Use the fact $$\int_0^a f(x)dx = \int_0^a f(a-x)dx$$ You can show this by letting $u = a-x$. |
H: limsup and liminf and the product of sequences
I'm trying to show that if $ \limsup s_n = +\infty$ and $\liminf t_n > 0$, then $\lim\sup s_n t_n = +\infty$.
Could someone check my proof/give feedback?
Since $\lim\inf t_n > 0$, we know that there is a natural $N_1$ such that $m = \inf \{t_n \ | \ n > N_1 \} > 0$. Also because $\lim\sup s_n = +\infty$, there exists $N_2 \in \mathbb{N}$ and $M > 0 $ such that $\displaystyle \sup s_n > \frac{M}{m}$ for $n > N_2$. Then for $n > \max \{ N_1, N_2 \}$, we have $s_n t_n \geq s_n m$. Now we can write $\displaystyle \lim\sup s_n t_n \geq \lim\sup s_n m = m \lim\sup s_n > m \left( \frac{M}{m} \right) = M$. Since $M$ was any arbitrary number greater than 0, we see that $\lim\sup s_n t_n = +\infty$.
Edit
Since $\limsup s_n = +\infty$, there exists a subsequence $s_{n_j} \to +\infty$ as $j \to \infty$. Then we can also take a subsequence $t_{n_j}$ of $t_n$. Now we have $s_{n_j}t_{n_j} \geq s_{n_j}m$ and $\limsup s_{n_j}t_{n_j} \geq m \limsup s_{n_j} = +\infty$. Since $\limsup s_{n_j}t_{n_j} = +\infty$, we must have $\limsup s_n t_n = +\infty$
AI: This is a red herring, but there are a few mistakes in this line:
"...because $\limsup s_n=+\infty$, there exists $N_2\in\mathbb{N}$ and $M>0$ such that $\sup s_n>\frac{M}{m}$ for $n>N_2$."
First of all, if by $\sup s_n$ you mean $\sup\{s_n: n \in \mathbb{N}\}$ then clearly this number is $\infty$, so your statement is vacuous. If by $\sup s_n$ you are referring to $\sup\{s_n:n>N_2\}$, then this number is still $+\infty$.
Second, this $N_2$ will depend on $M$; the correct order of quantification is "for any $M>0$ there exists $N_2\in\mathbb{N}$ such that..."
I say this is a red herring because this line is irrelevant to the proof. You are done once you are able to show that $\limsup s_nt_n \geq m\limsup s_n$, since the right hand side is equal to $+\infty$. You have not quite done this yet. Instead of doing this strange business with the $\limsup$, try going back to the original definition:
By $\limsup a_n=+\infty$ we mean that there exists a subsequence $(a_{n_k})_{k=1}^\infty$ such that $a_{n_k}\to+\infty$ as $k \to \infty$.
Getting the inequality $s_nt_n\geq s_nm$ is a good idea (in fact, this is the only hard part of the proof), but you need to ensure $s_n \geq 0$ for this inequality to work. Since you're working with $\limsup$, you don't need the entire sequence to be nonnegative, but only a subsequence. Can you extract such a subsequence from the definition? |
H: Proof on showing $\frac{(b-a)}{2}(f(a) + f(b)) \leq \int_a^b f \leq (b-a) f(\frac{a+b}{2})$ for class $C^2$ function $f$
The task is as follows:
Given:
(a) function $f \in C^2$
(b) $f \geq 0$ and (c) $f'' \leq 0$ on $[a,b]$
Goal:
Show
$$\frac{(b-a)}{2}(f(a) + f(b)) \leq \int_a^b f \leq (b-a) f(\frac{a+b}{2})$$
To get an understanding of the problem, I tried the specific function $f(x) = \sqrt x$ on the interval $[1, 4]$.
(1) For the first area (trapezoid with base $b-a$):
$\frac{(b-a)}{2}(f(a) + f(b))$ = $\frac{(4-1)}{2}(f(1) + f(4))$ = $\frac{9}{2}$
(2) For the second area (integral):
$\int_1^4 \sqrt x$ = $\frac{14}{3}$
I also tried adding up areas of sub-rectangles for this one, using right rectangles.
(3) For the third area = rectangle with length $b-a$ and width $f(\frac{a+b}{2})$ = $4.7$ (approximately)
So the conclusion clearly holds for this specific case.
But I have an issue with how to generalize my example >_<
Well, by the given information, I break the function $f$ into 3 cases:
Case 1: If $f = 0$, i.e. the zero function.
Then there is nothing to prove, since the area is always 0.
Case 2: If $f = c$, i.e. a constant function.
Then the proof is quite easy, since all 3 areas "shrink" down to the area of the "big rectangle" with base $b-a$ and width $c$.
Case 3: $f$ is convex or concave
This is the part where I don't know how to generalize what I found from my example.
My thoughts:
When I do the first area, I'm dealing with a trapezoid, thus I'm going below (or exactly on) the function $f$.
When I do the second area, I'm thinking about the upper Darboux sum. Thus the sub-rectangles exceed the original curve by some little fractional area, namely at the upper left of each rectangle.
When I do the third area, I also exceed the original curve by some fractional area, but I think this extra part is a bit more than the fractional areas formed by the sub-rectangles. Or, thinking another way, if I double up this rectangle, I get an area which is way bigger than the other two areas.
But then... how should I generalize all these ideas ?
Would someone please help me on this question?
Thank you in advance ^^
AI: Since $f'' \leq 0$, the function is concave (concave down), so on $[a,b]$ its graph lies on or above the chord joining $(a,f(a))$ and $(b,f(b))$; the area of the trapezoid under that chord is therefore at most the integral:
$$\frac{(b-a)}{2}(f(a) + f(b)) \leq \int_a^b f$$
Now the second part. Write $m=\frac{a+b}{2}$. Because $f\in C^2$, Taylor's theorem with the Lagrange remainder gives, for each $x\in[a,b]$, some $\xi$ between $m$ and $x$ with
$$f(x)=f(m)+f'(m)(x-m)+\frac{f''(\xi)}{2}(x-m)^2\leq f(m)+f'(m)(x-m),$$
since $f''\leq 0$; in other words, a concave graph lies below each of its tangent lines. Integrating this inequality over $[a,b]$ and using $\int_a^b (x-m)\,dx=0$ gives
$$\int_a^b f \leq (b-a)\,f(m)+f'(m)\int_a^b (x-m)\,dx=(b-a)\,f\!\left(\frac{a+b}{2}\right).$$
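As a quick sanity check on your own example $f(x)=\sqrt{x}$ on $[1,4]$ (a minimal sketch):

```python
from math import sqrt

a, b = 1.0, 4.0
f = lambda x: sqrt(x)

trapezoid = (b - a) / 2 * (f(a) + f(b))  # 4.5
integral = 14 / 3                        # exact value of the integral
midpoint = (b - a) * f((a + b) / 2)      # ≈ 4.743

assert trapezoid <= integral <= midpoint
print(trapezoid, integral, midpoint)
```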
H: Can an algebraic structure have indistinguishable elements?
Sometimes, a topological space has indistinguishable points - we call those spaces non-$T_0$. But given such a space, we can always identify indistinguishable points, thereby yielding a $T_0$ space. (Technically, we've taken the Kolmogorov quotient.)
Does this sort of thing ever happen in abstract algebra?
Here's two more examples.
A preordered set can have comparable, distinct points - in other words it can fail to be antisymmetric. But that's cool, we can identify comparable points to obtain a partially ordered set.
Sometimes a pseudometric space has distinct points that are zero distance apart. But that's okay, we can identify zero-distance points to obtain a metric space.
Edit: It would be nice to see a definition of 'indistinguishable' for the elements of arbitrary structures. It would then be a consequence of this more general definition that for an arbitrary preordered set $X$ (order relation $\leq$) it holds that $x,y \in X$ are indistinguishable iff $x \leq y$ and $y \leq x$.
Here's an example. Consider the function $f : \mathbb{N} \rightarrow \mathbb{N}$, with $f(n)=0$ for all $n \in \mathbb{N}$. The associated notion of indistinguishability for the structure $(\mathbb{N},f)$ should probably be the relation $\sim$ such that $a \sim b$ iff both $a$ and $b$ equal $0$, or both $a$ and $b$ are distinct from $0$.
Edit2: On the other hand, perhaps it does not make sense to speak of 'the natural notion of indistinguishability in a structure $X$' without first situating that structure in a category. After all, if we're going to quotient out by the indistinguishability relation, epimorphisms will probably show up at some point.
AI: Consider the field $\Bbb Q(t,s)$ where $t,s$ are two algebraically independent transcendental numbers.
Then these two numbers are completely inseparable by a first-order formula in the language of fields.
Generally speaking, if $\cal L$ is some first-order language of some structure, then there are at most $\aleph_0\cdot|\cal L|$ definable elements in any given structure. If by "indistinguishable" we mean "inseparable by a first-order formula with limited parameters$^1$", then any sufficiently large structure will invariably contain a lot of indistinguishable elements.
One good place to learn about these things is model theory, and in particular the concept of "type".
Edit: To your last edit, about $(\Bbb N,f)$ note that $0$ is a definable element of the structure with the formula $x=f(x)$. And since we don't have any other symbols in the language it's really impossible to express anything else. Therefore it's very easy to see that over the empty set, every two non-zero elements satisfy the same formulas with one free variable.
(To see that we can't express anything else, at least without parameters, note that if $m,n$ are non-zero then there is an automorphism which exchanges between the two. Therefore every two non-zero elements must satisfy the same formulas [in one free variable].)
Footnotes:
Of course if we allow any parameter then $\varphi(x,y)$ defined as $\lnot(x=y)$ is sufficient to distinguish between any two members. But if, like in the first example, we allow no parameters - or parameters from a small substructure - then if the universe of the structure is large enough, there will be many indistinguishable elements. |