H: question about summation?
Are there any general rules to find
$???\leqslant \sum_{n=t}^{m}f(n)\leqslant ???$
when $m$ and $t \in \mathbb{R}$?
AI: A trivial, yet sometimes useful, inequality is $$(m-t+1)\min_{i\in\{t,t+1,...,m\}}f(i)\leq\sum_{n=t}^{m}\ f(n) \leq(m-t+1)\max_{i\in\{t,t+1,...,m\}}f(i)$$
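For instance (a small worked check): with $f(n)=\frac1n$, $t=1$, $m=3$, the bounds give $3\cdot\frac13=1\leq 1+\frac12+\frac13=\frac{11}{6}\leq 3\cdot 1=3$. |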
H: Proper constants for $\alpha, \beta$
Here is the problem:
For what values of $\alpha$ and $\beta$, the function
$$\mu(x,y)=x^{\alpha}y^{\beta}$$
is an integrating factor for the ODE $$ydx+x(1-3x^2y^2)dy=0.$$ I am working on it just knowing the definition. :(
AI: Let $$M(x,y)dx+N(x,y)dy=0$$
If there exists $F$ such that $$dF=M(x,y)dx+N(x,y)dy$$
then
$$ F(x,y)=C$$
for some $C \in \mathbb{R}$ defines an implicit solution of the ODE
When does such an $F$ exist? It exists if
$$\frac{\partial M}{\partial y}=\frac{\partial N}{\partial x} $$
and $$M(x,y)=\frac{\partial F}{\partial x}, N(x,y)=\frac{\partial F}{\partial y} $$
Multiply your equation by $\mu(x,y)=x^{\alpha}y^{\beta}$
$$ x^{\alpha}y^{\beta+1}dx+x^{\alpha+1}y^{\beta}(1-3x^2y^2)dy=0$$
Identify $M(x,y)$ and $N(x,y)$ and apply the above theory. I found that $\alpha=\beta=-3$
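As a quick check of this value (carrying out the computation suggested above): here $M=x^{\alpha}y^{\beta+1}$ and $N=x^{\alpha+1}y^{\beta}-3x^{\alpha+3}y^{\beta+2}$, so
$$\frac{\partial M}{\partial y}=(\beta+1)x^{\alpha}y^{\beta},\qquad \frac{\partial N}{\partial x}=(\alpha+1)x^{\alpha}y^{\beta}-3(\alpha+3)x^{\alpha+2}y^{\beta+2}.$$
Equating coefficients forces $\beta+1=\alpha+1$ and $\alpha+3=0$, i.e. $\alpha=\beta=-3$. |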
H: Sequence $(a_n)$ s.t $\sum\sqrt{a_na_{n+1}}<\infty$ but $\sum a_n=\infty$
I am looking for a positive sequence $(a_n)_{n=1}^{\infty}$ such that $\sum_{n=1}^{\infty}\sqrt{a_na_{n+1}}<\infty$ but $\sum_{n=1}^{\infty} a_n=\infty$.
Thank you very much.
AI: The simplest example I can think of is $\{1,0,1,0,...\}$. If you want your elements to be strictly positive, use some fast-converging sequence such as $n^{-4}$ in place of the zeroes.
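Concretely (one way to realize this): take $a_{2k-1}=1$ and $a_{2k}=k^{-4}$. Every term of $\sum\sqrt{a_na_{n+1}}$ is then of the form $k^{-2}$, each value occurring twice, so the series is at most $2\sum_k k^{-2}<\infty$, while $\sum a_n\geq\sum_k 1=\infty$. |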
H: Vectors that form a triangle!
I have a problem here.
How can I prove that sum of vectors that form a triangle is equal to 0 $(\vec {AB}+\vec {BC}+\vec {CA}=\vec 0)$ ?
AI: From the triangle law: $\vec {AB}+\vec{BC}=\vec{AC}$.
$\vec {AC}$ is the resultant vector of the addition of the other two vectors.
$\vec {AB}+\vec{BC}=\vec{AC}$
$\vec {AB}+\vec{BC}+\vec{CA}=\vec{AC}+\vec{CA}$ (add $\vec{CA}$ on both side)
$\vec {AB}+\vec{BC}+\vec{CA}=\vec{AC}-\vec{AC}$
(because $\vec{AC}$ and $\vec{CA}$ are equal in magnitude but opposite in direction, so $\vec{CA}=-\vec{AC}$)
$\vec {AB}+\vec{BC}+\vec{CA}=\vec0$
Hence proved. |
H: Given that the family has at least one girl, determine the probability that the family has at least one boy.
Suppose that a family has exactly n children (n ≥ 2). Assume that the probability that any child will be a girl is 1/2 and that all births are independent. Given that the family has at least one girl, determine the probability that the family has at least one boy.
This is how I went about it:
Since we know that the family has at least one girl, we now have to determine the probability that the family has at least one boy from a pool of n-1 children.
Thus, probability is equal to:
1-Probability of having no boys
=1-(1/2)^(n-1)
But the answer is (1-(1/2)^(n-1)) / (1-(1/2)^(n))
The structure of the answer suggests that Bayes theorem is being used in the correct answer, but even the Bayes theorem should lead to my answer. I do not know where I am going wrong?
AI: Define $Y$ to be the number of boys, $X$ the number of girls. You need:
$$
P(Y \geq 1 |X \geq 1)=1-P(Y=0|X \geq 1) = 1- \frac{P(Y=0 \cap X \geq 1)}{P(X \geq 1)}\\
=1 -\frac{P(Y=0 \cap X \geq 1)}{1-P(X=0)}=1-\frac{\frac{1}{2^n}}{1-\frac{1}{2^n}}=\frac{1-\frac{1}{2^{n-1}}}{1-\frac{1}{2^n}}
$$
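Sanity check for $n=2$ (worked out): the formula gives $\frac{1-1/2}{1-1/4}=\frac23$, the classical two-children answer. Your $1-(1/2)^{n-1}=\frac12$ instead answers a different question, the one in which a specific child is known to be a girl; knowing only that at least one child is a girl does not reduce the problem to a pool of $n-1$ independent children. |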
H: Integrating: $\int_0^\infty \frac{\sin (ax)}{e^x + 1}dx$
I am trying to evaluate the following integral using the method of contour which I am not being able to. Can anyone point out what mistake I am making?
$$\int_0^\infty \frac{\sin ax}{e^x + 1}dx$$
I am considering the following contour, and the function $\displaystyle f(z):= \frac{e^{iaz}}{e^z + 1}$.
The poles of order $1$ occur at the odd multiples of $i\pi$. The contour (a rectangle of height $2\pi$ with a small semicircular indentation $\gamma$ around $i\pi$) encloses no singularity. The integral can be broken down into six parts.
$$\int_0^R \frac{e^{iax}}{e^x + 1} dx + i \int_0^{2\pi} \frac{e^{ia(R + iy)}}{e^{R + iy} + 1} dy + \int_{R}^{0}\frac{e^{ia(x+2\pi i)}}{e^{x + 2 \pi i } + 1} dx + \\ i \int_{2 \pi }^{\pi + \epsilon} \frac{e^{ai( iy)}}{e^{ iy } + 1}dy + \int_\gamma \frac{e^{iaz}}{e^z + 1} dz + i \int_{ \pi -\epsilon}^{0} \frac{e^{ia( iy)}}{e^{ iy } + 1}dy$$
The first and third give $\displaystyle (1 - e^{-2 a\pi})\int_0^R\frac{e^{iax}}{e^x + 1} dx$. The second goes to $0$ as $R \to \infty$
For fifth integral, $$\int_\gamma \frac{e^{iaz}}{e^z + 1} dz = \int_{-\pi/2}^{\pi/2} \frac{e^{ia\pi + a\epsilon e^{i\theta}}}{e^{i\pi + \epsilon e^{i\theta}+1}}i \epsilon i e^{i\theta }d\theta \to 0 \text{ as } \epsilon \to 0$$
The real parts of the fourth and sixth integrals do not converge. But since my original integral is the imaginary part, it suffices to take imaginary parts. As $\epsilon \to 0$, I get
$$i\int_{2\pi }^0 \Re \left [\frac{e^{-ay}}{e^{iy} + 1} \right] dy = i \int_{2\pi}^0 \frac{e^{-ay}}{2}dy = i \frac{e^{-2\pi a} - 1}{2a}$$
Finally, using the residue theorem, I am getting the following, which is incorrect:
$$(1 - e^{-2 a\pi})\int_0^\infty \Im \left [\frac{e^{iax}}{e^x + 1} \right ] dx +\frac{e^{-2\pi a} - 1}{2a} = 0$$
Can anyone point out my mistake or give a worked-out solution? Thanks in advance!
ADDED::
I evaluated the fifth integral incorrectly:
$$\int_\gamma \frac{e^{iaz}}{e^z + 1} dz = \int_{-\pi/2}^{\pi/2} \frac{e^{ia(i\pi + \epsilon e^{i\theta})}}{e^{i\pi + \epsilon e^{i\theta}}+1}i \epsilon e^{i\theta }d\theta = ie^{-a\pi}\int_{\pi/2}^{-\pi/2}\frac{e^{ia\epsilon e^{i\theta}}}{-e^{\epsilon e^{i\theta}} + 1} \epsilon e^{i\theta}d\theta = i \pi e^{-a\pi}$$
So the total sum should be
$$(1 - e^{-2 a\pi})\int_0^\infty \Im \left [\frac{e^{iax}}{e^x + 1} \right ] dx +\frac{e^{-2\pi a} - 1}{2a} +\pi e^{-a\pi}= 0 $$
After slight manipulation we find that
$$\int_0^\infty \frac{\sin ax}{e^x + 1}dx = -\frac{\pi}{2\sinh (\pi a)} +\frac{1}{2a}$$
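This closed form is easy to sanity-check numerically; a minimal SciPy sketch (names illustrative, with the integrand rewritten to avoid overflow):

```python
import numpy as np
from scipy.integrate import quad

a = 1.0
# sin(ax)/(e^x + 1) written as sin(ax) e^{-x} / (1 + e^{-x}) to avoid overflow
lhs, _ = quad(lambda x: np.sin(a*x) * np.exp(-x) / (1 + np.exp(-x)), 0, np.inf)
rhs = 1/(2*a) - np.pi / (2*np.sinh(np.pi*a))
print(lhs, rhs)  # both approximately 0.364
```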
AI: Combine the 4th and 6th integrals to get the Cauchy principal value:
$$PV \int_{2\pi}^0 dy \frac{e^{-a y}}{1+e^{i y}} = PV \int_{2\pi}^0dy \frac{e^{-a y}}{2 \cos{(y/2)}} e^{-iy/2} $$
You still need to evaluate the imaginary part of the integral. |
H: Problem involving the computation of the following integral
I was solving the past exam papers and stuck on the following problem:
Compute the integral $\displaystyle \oint_{C_1(0)} {e^{1/z}\over z} dz$,where $C_1(0)$ is the circle of radius $1$ around $z=0.$
Here, $z=0$ is a pole of order $1$, and so Res$(f,0)=\lim_{z \to 0 } z f(z)=\lim_{z \to 0}e^{1/z}=?$, where $f(z)={e^{1/z}\over z}$.
Can someone point me in the right direction with some explanation?
AI: The function $zf(z)=e^{1/z}$ also has a singularity at $0$; the Laurent series of $f(z)$ is $\sum_{k=0}^\infty \frac{1}{k!}\frac{1}{z^{k+1}}=\frac{1}{z}+\sum_{k=1}^\infty \frac{1}{k!}\frac{1}{z^{k+1}}$. The residue is therefore $1$, and by the residue theorem the integral equals $2\pi i$. |
H: sum of monotonic increasing and monotonic decreasing functions
I have a question regarding sums of monotonic increasing and decreasing functions. Would appreciate very much any help/direction:
Consider an interval $x \in [x_0,x_1]$. Assume there are two functions $f(x)$ and $g(x)$ with $f'(x)\geq 0$ and $g'(x)\leq 0$. We know that $f(x_0)\leq 0$, $f(x_1)\geq 0$, but $g(x)\geq 0$ for all $x \in [x_0,x_1]$. I want to show that $q(x) \equiv f(x)+g(x)$ will cross zero only once. We know that $q(x_0)\leq 0$ and $q(x_1)\geq 0$.
Is there a ready result that shows it or how to proceed to show that? Many thanks!!!
AI: Alas, the answer is no.
$$f(x)=\begin{cases}-4& x\in[0,2]\\ -2& x\in [2,4]\\0& x\in[4,6]\end{cases}$$
$$g(x)=\begin{cases}5 & x\in [0,1]\\3& x\in[1,3]\\1& x\in[3,5]\\ 0 & x\in[5,6]\end{cases}$$
$$q(x)=\begin{cases} 1 & x\in [0,1]\\ -1& x\in[1,2]\\ 1 & x\in[2,3]\\ -1 & x\in[3,4]\\ 1& x\in[4,5]\\ 0 & x\in[5,6]\end{cases}$$
This example could be made continuous and strictly monotone with some tweaking. |
H: Formulas for calculating pythagorean triples
I'm looking for formulas or methods to find Pythagorean triples. I only know one formula for calculating a Pythagorean triple and that is Euclid's, which is:
$$\begin{align}
&a = m^2-n^2 \\
&b = 2mn\\
&c = m^2+n^2
\end{align}$$
with integer parameters $m > n > 0$.
So are there other formulas/methods?
AI: It doesn't seem cumbersome to me. Just loop on $m$ starting at $2$. Loop on $n$ starting at $1$ or $2$ depending on the parity of $m$ and going up to $m-1$. Check the GCD using Euclid's algorithm. If it is $1$ you have a primitive set, so calculate $a,b,c$. If you want all sets up to some $N$, multiply $a,b,c$ by all values up to $\lfloor \frac Nc \rfloor$. Stop the $m$ loop at $\sqrt N$.
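A minimal Python sketch of this loop (the function name is illustrative):

```python
from math import gcd, isqrt

def pythagorean_triples(N):
    """All Pythagorean triples (a, b, c) with c <= N, via Euclid's formula."""
    triples = []
    for m in range(2, isqrt(N) + 1):
        # n runs over values of opposite parity to m, up to m - 1
        for n in range(1 if m % 2 == 0 else 2, m, 2):
            if gcd(m, n) != 1:
                continue  # m, n not coprime: the triple is not primitive
            a, b, c = m*m - n*n, 2*m*n, m*m + n*n
            # scale the primitive triple by every k with k*c <= N
            for k in range(1, N // c + 1):
                triples.append((k*a, k*b, k*c))
    return triples

print(pythagorean_triples(30))
```
|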
H: Question about limits with variable on exponent
So I have to find the following limit $$\lim_{n\to\infty}\left(1+\frac{2}{n}\right)^{1/n}.$$I said that this is $$\lim_{n\to\infty}\left[\left(1+\frac{2}{n}\right)^n\right]^{1/n^2}=\left(e^2\right)^{\lim_{n\to\infty}1/n^2}=1.$$Now I know that the final answer is correct, but my method seems to me to be wrong. Can I separate a limit in the way I did - i.e. is my working correct? Is there any other more elegant way to find the limit?
Thanks in advance.
AI: You don't actually need to go through this all:
$$\lim_{n\rightarrow\infty} 1+\frac{2}{n}=1$$ and
$$\lim_{n\rightarrow\infty} \frac{1}{n}=0$$
therefore, by continuity of $(x,y)\mapsto x^y$ at $(1,0)$,
$$\lim_{n\rightarrow\infty} \left(1+\frac{2}{n}\right)^{1/n}=1^0=1$$ |
H: The restriction of a covering map on the connected component of its definition domain
Suppose $p:Y\to X$ is a covering map, $X,Y$ are manifolds and $X$ is connected. If $Z$ is a connected component of $Y$, I wonder if the restriction of $p$ to $Z$ is also a covering map? If not, what conditions should be added to guarantee the restriction is a covering map? (except assuming that $Y$ is compact.)
What I do: I know I only need to show $p(Z)=X$. $p$ is a local homeomorphism and thus an open map. $Z$ is an open set since $Y$ is locally connected, and of course a closed set, so $p(Z)$ is an open set. Then I want to show $p(Z)$ is also closed; thus $p(Z)$ is open and closed in the connected space $X$, so $p(Z)=X$. But I cannot show $p(Z)$ is closed; perhaps I am trying a wrong way.
AI: I'll use the definition of covering map as it appears in Hatcher's Algebraic Topology: A continuous map $p:Y\to X$ is called a covering map, if for every $x\in X$ there is an open neighborhood $U$ around $x$ whose preimage is a (possibly empty) disjoint union of open sets, each of which is mapped homeomorphically onto $U$ via $p$. Note that this definition does not require a covering map to be surjective.
Still, if the codomain is connected, the map must be surjective by the following argument:
Assume $x\notin p(Y)$. Then its preimage $p^{-1}(x)$ is empty. There is an open $U$ containing $x$ such that $p^{-1}(U)$ equals $\bigsqcup_{\alpha\in I}U_\alpha$ where $U_\alpha\approx U$ and all $U_\alpha$ are open. But since $x$ is not in the image of $p$, the disjoint union must be the empty union. This means that $U$ does not intersect $p(Y)$, so $p(Y)$ is closed.
On the other hand, $p$ is an open map. If $V\subset Y$ is an open set containing $y$ and $U$ is the evenly covered neighborhood of $p(y)$, then $y$ is contained in some $U_\alpha$. Since $V\cap U_\alpha$ is open, $p(V\cap U_\alpha)$ is an open subset of $U$, thus an open set in $X$ which is contained in $p(V)$, so $p(V)$ is open.
If $X$ is connected, then $p(Y)$ must be all of $X$, being a clopen subset of a connected space.
To prove that the restriction of the $p$ in your problem to the connected component $Z$ is also a covering map, take an $x\in X$ and an open neighborhood $U$ such that $p^{-1}(U)$ equals $\bigsqcup_{\alpha\in I}U_\alpha$ where $U_\alpha\stackrel p\approx U$ and all $U_\alpha$ are open. Since $X$ is locally connected, there is an open connected $V$ such that $x\in V\subset U$. Its preimage is $\bigsqcup_{\alpha\in I}V_\alpha$ where $V_\alpha$ is simply the preimage of $V$ in the particular $U_\alpha$. Each $V_\alpha$ is connected, so it is either entirely in $Z$ or it is disjoint to $Z$. If you delete all the $V_\alpha$'s which are not in $Z$ from the union, you obtain the preimage of $V$ under the restriction of $p$. This means that this restricted $p$ still is a covering map. |
H: Let $G$ be a finite group with $|G|>2$. Prove that ${\rm Aut}(G)$ contains at least two elements.
Let $G$ be a finite group with $|G|>2$. Prove that ${\rm Aut}(G)$ contains at least two elements.
We know that ${\rm Aut}(G)$ contains the identity function $f: G \to G: x \mapsto x$.
If $G$ is non-abelian, look at the map $G \to G: x \mapsto gxg^{-1}$ for some $g$ not in the center of $G$ (such a $g$ exists since $G$ is non-abelian). This is an inner automorphism unequal to the identity function, so we have at least two elements in ${\rm Aut}(G).$
Now assume $G$ is abelian. Then the only inner automorphism is the identity function. Now look at the mapping $\varphi: G \to G : x \mapsto x^{-1}$. This is an homomorphism because $\varphi (xy) = (xy)^{-1} = y^{-1} x^{-1} = x^{-1} y^{-1} = \varphi (x) \varphi (y)$. Here we use the fact that $G$ is abelian. This mapping is clearly bijective, and thus an automorphism.
This automorphism is unequal to the identity function only if there exists an element $x \in G$ such that $x \neq x^{-1}$. In other words, there must be an element of order greater than $2$.
Now assume $G$ is abelian and every non-identity element has order $2$. By Cauchy's theorem we know that the group must have order $2^n$.
I got stuck at this point. I've looked at this other post, $|G|>2$ implies $G$ has non trivial automorphism, but I don't know what they do in the last part (when they start talking about vector spaces). How should this proof be finished, without resorting to vector spaces if possible?
Thanks in advance
AI: The reason why that last case is done separately is because in that case we cannot describe a non-trivial automorphism with operations intrinsic to the group. In the non-abelian case we could use an inner automorphism, and in other abelian cases we can use negation. Using vector space structure is the "next best thing" in the sense that then everybody will immediately accept multiplication by an invertible matrix as an automorphism.
That last case is about groups that are isomorphic to a direct product of finitely many copies of $C_2$
$$
G=C_2\times C_2\times\cdots\times C_2.
$$
Any permutation of the components is then obviously an automorphism. But proving that such a group has this direct product structure, while not too tricky, is something the answerers did not want to do, because it is much more convenient to observe that a finite abelian group where all the non-trivial elements have order two is a vector space over $GF(2)$.
Here's another way of doing this step. We inductively find elements $x_1,x_2,\ldots,x_\ell\in G$ such that at all steps $x_k$ is not in the subgroup generated by $x_1,x_2,\ldots,x_{k-1}$. Because $G$ is finite, at some point we will have exhausted all of $G$, and thus $G=\langle x_1,x_2,\ldots,x_\ell\rangle$. The claim is then that
$$
G=\langle x_1\rangle\times\langle x_2\rangle\times\cdots\times\langle x_\ell\rangle
$$
is a way of writing $G$ as a direct product of subgroups of order two. This is more or less obvious by construction.
But you should notice that this is essentially the same argument proving that a finitely generated vector space over a field has a finite basis. |
H: Multivariate normal distribution density function
I was just reading the wikipedia article about Multivariate normal distribution: http://en.wikipedia.org/wiki/Multivariate_normal_distribution
I use a little bit different notation. If $X_1,\ldots,X_n$ are independent $\mathcal{N}(0,1)$ random variables, $X=(X_1,\ldots,X_n)$ and $m=(m_1, \ldots, m_n)$ are $n$-dimensional vectors and $B$ is a $n\times n$ matrix, then $Y=m+BX$ has density function
$$f_Y(x)=\frac{1}{\sqrt{(2\pi)^n|\boldsymbol\Sigma|}}
\exp\left(-\frac{1}{2}({x}-{m})^T{\boldsymbol\Sigma}^{-1}({x}-{m})
\right)$$
where $\Sigma=BB^T$
Question: In the Wikipedia article it's part of the definition that $\Sigma$ is the covariance matrix of $Y$, i.e. $\Sigma_{ij}=\operatorname{Cov}(Y_i,Y_j)$, but why? How can this be proved?
AI: By definition, any covariance matrix $\Sigma$ consists of $\Sigma_{i,j} = Cov(X_i, X_j) = E[(X_i - \mu_i)(X_j - \mu_j)]$, where the random variables $X_i$ and $X_j$ have expectations $E[X_i] = \mu_i$ and $E[X_j] = \mu_j$ accordingly. So your problem is to prove $Cov(m_i + \mathbf{b}_i \mathbf{X}, m_j + \mathbf{b}_j \mathbf{X}) = (B \cdot \Sigma \cdot B^\top)_{i,j}$, where $B$ is a non-singular matrix with rows $\mathbf{b}_i = (b_{i,1}, \ldots, b_{i,n})$, $1 \le i \le n$, $\mathbf{X} = (X_1, \ldots, X_n)$ is an $n$-dimensional random vector, and $(m_1, \ldots, m_n)$ is a vector of scalars.
Let $\mathbf{X}$ have the covariance $n \times n$-matrix $\Sigma$, which is symmetric.
Note that $E[m_i + \mathbf{b}_i \mathbf{X}] = m_i + \mathbf{b}_i \mathbf{\mu}$, where $\mathbf{\mu} = (\mu_1, \ldots, \mu_n)$, and the expectation operator $E$ is linear. So, the proof:
$Cov(m_i + \mathbf{b}_i \mathbf{X}, m_j + \mathbf{b}_j \mathbf{X}) = E[(\mathbf{b}_i (\mathbf{X} - \mathbf{\mu}))(\mathbf{b}_j (\mathbf{X} - \mathbf{\mu}))] = E[\sum_{t=1}^n \sum_{p=1}^n b_{i,t} b_{j,p} (X_t - \mu_t)(X_p - \mu_p)] = \sum_{t=1}^n \sum_{p=1}^n b_{i,t} b_{j,p} E[(X_t - \mu_t)(X_p - \mu_p)] = \sum_{t=1}^n b_{i,t} \left(\sum_{p=1}^n b_{j,p} \Sigma_{p,t}\right) = \sum_{t=1}^n b_{i,t} \left( B \cdot \Sigma \right)_{j,t} = \left(B \cdot \Sigma \cdot B^\top\right)_{i,j}$
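This identity is easy to check numerically; a minimal NumPy sketch (names illustrative; here $\Sigma=I$, matching the question's iid $\mathcal{N}(0,1)$ components):

```python
import numpy as np

rng = np.random.default_rng(0)
n, samples = 3, 200_000
B = rng.normal(size=(n, n))        # arbitrary mixing matrix
m = rng.normal(size=n)             # shift vector
X = rng.normal(size=(samples, n))  # rows are iid N(0, I) vectors
Y = m + X @ B.T                    # Y = m + B X, row by row

print(np.cov(Y, rowvar=False))     # empirical covariance of Y
print(B @ B.T)                     # theoretical B Sigma B^T (Sigma = I)
```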
So the covariance matrix of $\mathbf{m} + B \cdot \mathbf{X}$ is $B \cdot \Sigma \cdot B^\top$, where $Cov(\mathbf{X}) = \Sigma$. |
H: Embedding of Tree
Q.
Prove that every tree can be embedded into the plane.
Conditions.
We cannot use the Euler formula for planar graphs. We can use the definition of a tree: $V-E=1$, no cycles, every edge is critical, there is a vertex of degree at most one.
Attempt.
We prove this by induction. True for $V=1$: trivial. Assume it is true for $V<n$, and consider a tree $T$ with $n$ vertices. Now, we know it has a vertex with at most one edge incident to it. Consider $T\setminus\{v,l\}$, the tree with this last vertex $v$ and its edge $l$ removed. This can be embedded by the inductive hypothesis. For the $v$ and $l$ that are left, the embedding is easily extended.
The last part seems sloppy. I would appreciate other proofs or improvement on this proof.
AI: You may prefer to choose an embedding where all edges are mapped to straight line segments.
As you did, let $v$ be a vertex of degree $1$, let $w$ be its only neighbour.
As you are not allowed to use very much, I assume you already know/can use that $v\ne w$ and removing vertex $v$ and edge $vw$ from $T$ does indeed produce a tree $T'$.
By induction hypothesis, there is an embedding $f\colon T'\to\mathbb R^2$ of $T'$ where all edges are mapped to straight line segments.
Especially, the image of any edge $xy$ where $w\notin\{x,y\}$ is compact, hence has positive distance to $f(w)$.
Since there are only finitely many such edges, for some $r>0$ the open ball $B_r(f(w))$ intersects only the finitely many straight line segments $f(wx)$ for edges incident with $w$.
Select $f(v)$ in $B_r(f(w))\setminus\bigcup_{x\text{ neighbour of } w}f(wx)$ and map $vw$ to the straight line segment from $f(v)$ to $f(w)$. |
H: Proving integrability in integration by parts in Rudin's text
Integration by parts, as stated in W. Rudin's Principles of Mathematical Analysis, Theorem 6.22, goes as follows:
Suppose F and G are differentiable functions in $[a,b]$, $F'=f\in \mathcal{R}$, and $G'= g\in \mathcal{R}$. Then $\int_a^bF(x)g(x)dx = F(b)G(b) - F(a)G(a) - \int_a^bf(x)G(x)dx$.
The proof is to put $H(x)=F(x)G(x)$ and apply the fundamental theorem of calculus to H and its derivative. The fundamental theorem of calculus is stated as:
If $f \in \mathcal{R}$ on $[a,b]$ and if there is a differentiable function $F$ on $[a,b]$ s.t. $F'=f$, then $\int_a^bf(x)dx=F(b)-F(a)$.
We also have, from an earlier theorem (6.13), that:
If $f \in \mathcal{R}$ and $g \in \mathcal{R}$ on $[a,b]$, then $fg \in \mathcal{R}$.
Rudin notes that "$H' \in \mathcal{R}$, by Theorem 6.13" (above). Is that theorem really enough to prove this? It doesn't seem to be. After all, if $H(x)=F(x)G(x)$ then $H'(x)=F(x)g(x)+f(x)G(x)$, and we only have $f(x)g(x) \in \mathcal{R}$, while $F(x)g(x) \neq f(x)g(x)$. Where do we get integrability of $F(x)g(x)$ and $f(x)G(x)$?
I'd be grateful for any pointers. Thanks!
AI: As $F,G$ are given to be differentiable, they are continuous. All continuous functions on a closed interval are integrable, so $F,G$ are integrable. Integrability of $Fg$ and $Gf$ then follows from Theorem 6.13. |
H: Find Aut$(G)$, Inn$(G)$ and $\dfrac{\text{Aut}(G)}{\text{Inn}(G)}$ for $G = \mathbb{Z}_2 \times \mathbb{Z}_2$
Find Aut$(G)$, Inn$(G)$ and $\dfrac{\text{Aut}(G)}{\text{Inn}(G)}$ for $G = \mathbb{Z}_2 \times \mathbb{Z}_2$.
Here is what I have here:
Aut$(G)$ consists of 6 bijective functions, which map $G$ to itself, since Aut$(\mathbb{Z}_2 \times \mathbb{Z}_2) \approx S_3$.
I think the next part goes wrong. For Inn$(G)$, letting $\kappa_x : G \rightarrow G$ be the conjugation map, I found $\{e, \kappa_{(0,1)}, \kappa_{(1,0)}, \kappa_{(1,1)}\}$.
I can't determine $\dfrac{\text{Aut}(G)}{\text{Inn}(G)}$ yet since I need to correctly determine Inn$(G)$.
Any advice or comments?
AI: First note that $G$ is abelian, therefore $Z(G) = G$
($Z$ is the Center of $G$).
Then you can use a little proposition (Humphreys pg.73-74) that tells you $\text{Inn}(G) \approx \frac{G}{\text{Z}(G)}$
The proof of this prop in a few words:
define an automorphism (prove it) $\varphi_x(g)=xgx^{-1}$, then define $\phi : G \to \text{Aut}(G)$ by $x \mapsto \varphi_x$ and find its image and its kernel; then apply the first isomorphism theorem and you can conclude. In particular, here $\text{Inn}(G) \approx G/G$ is trivial, so $\frac{\text{Aut}(G)}{\text{Inn}(G)} \approx \text{Aut}(G) \approx S_3$. |
H: Differentiation problem of power to infinity by using log property
Problem:
Find $\frac{dy}{dx}$ if $y =\left(\sqrt{x}\right)^{x^{x^{x^{\dots}}}}$
Let ${x^{x^{x^{\dots}}}} =t. (i)$ Taking $\log$ on both sides $ \implies {x^{x^{x^{\dots}}}}\log x = \log t$
This can further be written as $t\cdot\log x = \log t$.
Differentiating w.r.t. $t$ we get :
$\frac{dx}{dt}=\left(\frac{1-\log t}{t^2}\right)x$. (ii)
Now taking $\log$ on both sides of original equation :
$\log y = {x^{x^{x^{\dots}}}} \log\sqrt{x} \implies \log y = t \log \sqrt{x}$
Now differentiating both sides w.r.t. $x$ we get :
$\frac{1}{y} \frac{dy}{dx} = \frac{d}{dx}\left(t\log\sqrt{x}\right) = t \frac{1}{2\sqrt{x}} + \log\sqrt{x}\frac{dt}{dx}$. (iii)
Now putting the values of $y$, $\frac{dt}{dx} $, and $t$ from (i), (ii) and (iii) we get:
$$\frac{dy}{dx} = \left(\sqrt{x}\right)^{x^{x^{x^{\dots}}}}\left[\frac{{x^{x^{x^{\dots}}}}}{2\sqrt{x}}+\frac{\log\sqrt{x} \cdot{x^{x^{x^{\dots}}}}}{(1-\log{x^{x^{x^{\dots}}}})x}\right]$$
Please confirm whether it is the correct answer or not. Thanks.
AI: HINT:
If $y =\left(\sqrt{x}\right)^{x^{x^{x^{\dots}}}}=\left(\left(x\right)^{x^{x^{x^{\dots}}}}\right)^\frac12$
$$\implies y^2=\left(x\right)^{x^{x^{x^{\dots}}}}=(x)^{y^2}$$
Now, taking logs (base $e$): $$2\ln y=y^2\ln x$$
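One way to finish from the hint (implicit differentiation; this step is not in the original answer): differentiating $2\ln y = y^2\ln x$ with respect to $x$,
$$\frac{2}{y}\frac{dy}{dx} = 2y\ln x\,\frac{dy}{dx} + \frac{y^2}{x} \implies \frac{dy}{dx} = \frac{y^3}{2x\left(1 - y^2\ln x\right)}.$$ |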
H: Is the the number of generators of a group the number of different generators that one finds if one counts over every generating set of the group?
Consider the additive group of integers as an example, as mentioned at the bottom of the Wikipedia article. There are two generating sets that are mentioned: the set consisting of the number 1, {1}, and the two-element set, {3,5}. So when we talk about the number of generators of a group, do we mean the number of different elements within every generating set, all added together? So far we've counted 3 generators above, 1, 3, and 5, although there are clearly more than this in total since we haven't mentioned all the generating sets of the group.
As another example, in SU(N) we have $N^{2}-1$ generators. Would this be the total number of generating matrices that we'd get if we counted every single generating set?
On a similar note, would we only count a generator once if it appeared in more than one generating set? So for example if some group had generating sets {1,5}, {5,9}, {3,4,12}, would we say this group had 6 generators, rather than 7?
AI: A group $G$ typically has many different generating sets, that is, subsets $S\subseteq G$ with $\langle S\rangle = G$, and among these are sets of different cardinality (for example, trivially $S=G$ is a generating set).
Therefore it makes little sense to speak of the number of generators of a group (or even of the set of generators).
We do speak of the free group $F_S$ generated by the set $S$ and for such a free group, the set $S$ is a canonical choice of generators. Nevertheless, the free group over $S=\{a,b\}$ is also generated by $\{a,b,1,a^{-2},bab\}$.
In general, when we speak of a group generated by $n$ elements, we mean a group $G$ that allows an epimorphism $f\colon F_S\to G$ where $|S|=n$, typically given by a presentation $G=\langle a_1,\ldots,a_n\mid\ldots\rangle$. Depending on the author, it may additionally be understood that $f|_S$ is injective, that is, that the generators of $G$ obtained this way are actually an $n$-element set. But I guess it is often not the case that this distinction is made.
In summary, "a group generated by $n$ elements" should usually be more precisely called "a group that has at least one generating set of at most $n$ elements".
For example, a cyclic group is a group generated by a single element. We do however also count the trivial group as cyclic, even though it can in fact be generated by zero elements. |
H: What is the difference between a term, constant and variable in first-order logic languages?
In the text, the author says that the language contains parentheses, sentential connectives, n-place functions, n-place predicates, the equality sign =, terms, constants and variables.
I have two questions:
1. What is the definition of a term, constant and variable,
and what is the difference between them?
2. What is the definition of an n-place predicate?
Thanks!
AI: A constant is a symbol. A variable is also a symbol. The difference comes from how they can be used (syntactically). For example, a symbol $v$ can occur with a quantifier (as in $\forall v$) only if $v$ is a variable, not if it is a constant.
Terms are words (symbol sequences) in the language, again obeying certain rules; for example, constants and variables are the simplest terms, and new terms are obtained from simpler terms by applying function symbols.
As a matter of fact, the details of these distinctions should be apparent from what I assume the author writes in the next few paragraphs after these remarks. |
H: Sequence of continuous functions, integral, series convergence
Let $f_k$ be a sequence of continuous functions on $[0,1]$ such that $\int _0 ^1 f_k(x)x^ndx = \int _0^1 x^{n+k} dx$ for all $n \in \mathbb{N}$.
Is $\sum _{k=1} ^{\infty}f_k(x)$ convergent?
Could you tell me how to solve this? I would appreciate all the hints.
Thank you.
AI: No. Hint: The right-hand side $\int_0^1 x^{n+k} dx=c_k$ is a constant for each $k$. You can make $f_k(x)$ completely arbitrary on the interval $[0,9/10]$, and then adjust it on the interval $[9/10,1]$ to give the integral $\int_0^1 f_k(x)x^n dx$ the value $c_k$. Choose $f_k$ on the interval $[0,9/10]$ in such a way that the series diverges for any $0\leq x\leq 9/10$. If, instead of always "adjusting" $f_k$ on the interval $[9/10,1]$, you adjust it somewhere else for different $k$, you can construct a sequence $f_k$ which makes the series diverge for all $x\in[0,1]$.
EDIT: I'll flesh out the hint. For the first case: Let $f_k(x)=1$ for $x\in[0,9/10]$, and let $f_k(x)$ on the interval $[9/10,1]$ be a straight line from the point $(9/10,1)$ to a point $(1,a_k)$. If $a_k$ were $1$, then $f_k$ would be identically 1 on the entire interval, and we would get $\int_0^1 f_k(x) x^n dx = \int_0^1 x^n dx > \int_0^1 x^{n+k} dx=c_k$. On the other hand, if we let $a_k\to-\infty$, then the integral $\int_0^1 f_k(x) x^n dx$ will tend to $-\infty$, which is less than $c_k>0$, so by the intermediate value theorem, there must exist some $a_k$ such that $\int_0^1 f_k(x) x^n dx$ is equal to $c_k$. Now the series $\sum_{k=1}^\infty f_k(x)$ will diverge for all $x\leq 9/10$ since it will just be the series $\sum 1=\infty$.
If we want the series to diverge for all $x$, then we could repeat the idea above, but instead of letting $f_k$ be $1$ on $[0,9/10]$, we could let it be $1$ on $[0,1-1/k]$ instead. Then the series would diverge for all $x<1$, since for each such $x$, all but finitely many terms of the series will be $1$. For $x=1$ one gets the series $\sum a_k$, and one needs to check that this series necessarily diverges. I will leave that as an exercise to you. |
H: For every prime of the form $2^{4n}+1$, 7 is a primitive root.
What I want to show is the following statement.
For every prime of the form $2^{4n}+1$, 7 is a primitive root.
What I get is that
$$7^{2^{k}}\equiv1\pmod{p}$$
$$7^{2^{k-1}}\equiv-1\equiv2^{4n}\pmod{p}$$
$$7^{2^{k-2}}\equiv(2^{n})^2\pmod{p}$$
Thus $(\frac{2^n}{p})=(\frac{7^{2^{k-2}}}{p})=1$.
I think that $7$ is important because $7$ is a primitive root but I don't know how to use $7$.
AI: Assume that $p=2^{4n}+1$ is a prime. We know that the order of $7$ is a factor of $p-1$, so it is a power of two. The claim is equivalent to saying that the order is exactly $p-1$. Assume that this is not the case. Then the order is a factor of $(p-1)/2$ meaning that
$$
\left(\frac7p\right)\equiv 7^{(p-1)/2}\equiv 1\pmod p.
$$
The case $n=0$ is easy, so we can assume that $n>0$. Then $p\equiv1\pmod4$, so the law of quadratic reciprocity tells us that
the claim is equivalent to
$$
\left(\frac{p}7\right)=1.
$$
As $2^3\equiv1\pmod7$, $p\equiv 2^n+1\pmod7.$ The residue class of $2^n+1$ modulo $7$ can be either $2,3$ or $5$, when $n\equiv 0,1,2\pmod3$ respectively. Of these, only $2$ is a quadratic residue modulo $7$. This means that we must have $3\mid n$. I leave it to you to prove that in that case $p$ cannot be a prime unless $n=0$ and $p=2$.
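Since $p-1=2^{4n}$ is a power of two, $7$ is a primitive root mod $p$ exactly when $7^{(p-1)/2}\equiv-1\pmod p$, which is easy to check for the known primes of this form (a small Python sketch):

```python
# 2**(4n) + 1 is prime for n = 1, 2, 4 (namely 17, 257, 65537)
for n in (1, 2, 4):
    p = 2**(4*n) + 1
    # 7 is a primitive root iff 7^((p-1)/2) = -1 mod p, as p - 1 is a power of 2
    print(p, pow(7, (p - 1) // 2, p) == p - 1)  # True for each
```
|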
H: Combinations, Expected Values and Random Variables
A community consists of $100$ married couples ($200$ people). If during a given year, $50$ of the members of the community die, what is the expected number of marriages that remain intact?
Assume that the set of people who die is equally likely to be any of the ${200 \choose 50}$ groups of size $50$
Hint: For $i = 1, ..., 100$ let $X_i$ $=$$\{1$, if neither member of couple $i$ dies; and $0$, otherwise$\}$
It's a question from "Probability and Statistics for Engineers and Scientists" by Sheldon Ross ... and I have no idea how to go about it. Any help would be greatly appreciated.
AI: Use the linearity of expectation. Let $X$ be the rv of the number of couples that survive. Then $$X = \sum_{i=1}^{100} X_i.$$
Thus,
$$E[X] = E[\sum_{i=1}^{100} X_i] = \sum_{i=1}^{100} E[X_i] = 100 E[X_1]. $$
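To finish (a step the answer leaves to the reader): $E[X_1]=P(\text{neither member of couple }1\text{ dies})=\binom{198}{50}\big/\binom{200}{50}=\frac{150\cdot 149}{200\cdot 199}$, so $E[X]=100\cdot\frac{150\cdot 149}{200\cdot 199}\approx 56.16$. |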
H: Finding sequence in a set $A$ that tends to $\sup A$
I have been reading the book at http://www.neunhaeuserer.de/short.pdf, and have noticed that in the proof of the intermediate value theorem (Theorem 5.8 in the book), it seems to be quietly assumed that you can always find a sequence of points in a set that tend to the supremum of the set (in $\mathbb{R}$).
I presume this is true, but how would one go about proving it? I can't seem to find the result easily, but maybe I'm searching for the wrong thing. What does one call this property (of there being a sequence converging to you)? It seems "limit point" would be the natural term, but I've discovered that the definition doesn't quite mean that; e.g. an isolated point $a$ cannot be a limit point, but the sequence $a,a,a,a,a,a \dots$ converges to it (or is that a matter of how we define convergence?).
AI: This is a consequence of the Axiom of Countable Choice, which the author is taking for granted (actually, the author is taking stronger assumptions for granted, based on section 6 of the book).
The idea is as follows:
If $\xi\in A$, then there is a constant sequence of points of $A$ converging to $\xi$. Otherwise, since $\xi$ is the supremum of $A$, then for each positive integer $n$, we have that $A\cap[\xi-\frac1n,\xi)$ is non-empty, and we simultaneously choose $x_n\in A\cap[\xi-\frac1n,\xi)$ for each such $n$. This gives us a sequence of points of $A$ converging to $\xi$. |
H: Last non zero digit of $n!$
What is the last non zero digit of $100!$?
Is there a method to do the same for $n!$?
All I know is that we can find the number of zeroes at the end using a certain formula.However I guess that's of no use over here.
AI: This question gets asked fairly frequently. There was originally a problem on the AMC asking for the last two digits. Here is my post on the single digit case, which is not all that much simpler than the two digit case.
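For $100!$ specifically, a brute-force check is trivial in Python (illustrative sketch):

```python
from math import factorial

def last_nonzero_digit(n):
    """Last non-zero digit of n! by direct computation (fine for small n)."""
    f = factorial(n)
    while f % 10 == 0:
        f //= 10
    return f % 10

print(last_nonzero_digit(100))  # prints 4
```
|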
H: quadratic equation precalculus
from Stewart, Precalculus, 5th, p56, Q. 79
Find all real solutions of the equation
$$\dfrac{x+5}{x-2}=\dfrac{5}{x+2}+\dfrac{28}{x^2-4}$$
my solution
$$\dfrac{x+5}{x-2}=\dfrac{5}{x+2}+\dfrac{28}{(x+2)(x-2)}$$
$$(x+2)(x+5)=5(x-2)+28$$
$$x^2+2x-8=0$$
$$\dfrac{-2\pm\sqrt{4+32}}{2}$$
$$\dfrac{-2\pm6}{2}$$
$$x=-4\text{ or }2$$
official answer at the back of the book has only one real solution of $-4$
where did I go wrong?
AI: You multiplied both sides by $(x-2)(x+2)$. If this is zero, you may introduce extra solutions, hence you need to check your final answer to see if you have any extraneous solutions. In this case, you do! For $x=2$, two of the three fractions in the original equation are undefined. |
H: Euler lagrange equation is a constant
I'm working through exercises which require me to find the Euler-Lagrange equation for different functionals.
I've just come across a case where the Euler Lagrange equation simplifies to
$$1=0.$$
Please could someone explain what can be concluded about the set of extremals to problems where this is the case.
AI: Notation $$L[u]=\int(f(u)u_{x}+u)dx=\int F(u,u_{x})dx $$
$$\frac{d}{dx}\left( \frac{\partial F}{\partial u_{x}}\right)-\frac{\partial F}{\partial u}=0 $$
As $F(u,u_{x})=f(u)u_{x}+u$ we have
$$\frac{\partial F}{\partial u_{x}}=f(u) $$
$$ \frac{\partial F}{\partial u}=\frac{df}{du}\frac{du}{dx}+1$$
Hence $$\frac{d}{dx}\,f(u)-\left(\frac{df}{du}\frac{du}{dx}+1\right)=0 $$
$$\frac{df}{du}\frac{du}{dx}-\frac{df}{du}\frac{du}{dx}-1=0 \rightarrow 1=0$$
The Euler-Lagrange equation reduces to a false statement, so it is satisfied by no $u$ at all: the set of extremals is empty. (Indeed, $f(u)u_x$ is a total derivative and contributes only boundary terms, while $\int u\,dx$ has no stationary points.) |
H: Finding the Fourier Series of $\sin(x)^2\cos(x)^3$
I'm currently struggling to calculate the Fourier series of the given function
$$\sin(x)^2 \cos(x)^3$$
Given Euler's identity, I thought that using the exponential approach would be the easiest way to do it.
What I found was: $$\frac{-1}{32}((\exp(2ix)-2\exp(2ix)+\exp(-2ix))(\exp(3ix)+3\exp(ix)+3\exp(-ix)+\exp(-3ix)))$$
Transforming it back, the result is:
$$ -\frac{1}{18}(\cos(5x)+\cos(3x)+2\cos(x))$$
(I've checked my calculations multiple times, I'm pretty sure it's correct.)
Considering the point $x = 0$ however, one can see that the series I found doesn't match the original function.
Could someone help me find my mistake?
AI: 1) Trigonometric identities:
$$
\sin^2 x\cos^3x=(\sin x\cos x)^2\cos x=\left(\frac{\sin 2x}{2}\right)^2\cos x=\frac{1}{4}\sin^22x\cos x
$$
$$
=\frac{1}{4}\left(\frac{1-\cos 4x}{2}\right)\cos x=\frac{\cos x}{8}-\frac{\cos 4x\cos x}{8}
$$
$$
=\frac{\cos x}{8}-\frac{\cos 5x+\cos 3x}{16}
$$
$$
=\frac{\cos x}{8}-\frac{\cos 3x}{16}-\frac{\cos 5x}{16}
$$
2) Complex exponential:
$$
\sin^2x\cos^3x=\left(\frac{e^{ix}-e^{-ix}}{2i}\right)^2\left(\frac{e^{ix}+e^{-ix}}{2}\right)^3
$$
$$
=-\frac{1}{32}(e^{2ix}-2+e^{-2ix})(e^{3ix}+3e^{ix}+3e^{-ix}+e^{-3ix})
$$
$$
=-\frac{1}{32}(e^{5ix}+e^{3ix}-2e^{ix}-2e^{-ix}+e^{-3ix}+e^{-5ix})
$$
$$
=-\frac{1}{32}(2\cos 5x+2\cos 3x-4\cos x)
$$
$$
=\frac{1}{16}(2\cos x-\cos 3x-\cos 5x)
$$
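A quick numerical check of the result (an illustrative sketch):

```python
import numpy as np

x = np.linspace(0, 2*np.pi, 7)
lhs = np.sin(x)**2 * np.cos(x)**3
rhs = np.cos(x)/8 - np.cos(3*x)/16 - np.cos(5*x)/16
print(np.allclose(lhs, rhs))  # True
```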
Note: you made a mistake when you expanded $(e^{ix}-e^{-ix})^2$. I have no idea how you ended up with this $18$. You probably meant $16$. |
H: Is a Relationship Quadratic?
I have a relationship $y=f(x)$ for which I can obtain data through simulation.
I have good reason to suspect that this relationship is quadratic (rather than, say, exponential), and would like to provide evidence for this.
I was thinking of the following method, and I would like to ask if there is anything wrong with it:
a) Obtain $y$ data for a limited interval of $x$, say $x\in (0, x_a)$.
b) Fit a quadratic function to this data using least squares regression.
c) Obtain data for a larger value of $x$ (say $10x_a$) and check the percentage error that the model leads to when extrapolated to this value.
Does this prove anything?
AI: A better method is to plot the points $(\log x,\log y)$ for all data points $(x,y)$. If the relationship is roughly quadratic, $y\approx Cx^2$, then $\log y\approx 2\log x+\log C$, which is a linear equation in $\log x$ and $\log y$. Straight lines are easier to detect in a point plot, and if you get a straight line with slope $2$, then you can be pretty sure your relationship is roughly quadratic.
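A minimal NumPy sketch of this check, on hypothetical simulated data (all names and the noise model are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(1.0, 100.0, 50)
y = 3.0 * x**2 * np.exp(rng.normal(0.0, 0.05, x.size))  # roughly y = C x^2

slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
print(slope)              # close to 2 for a quadratic relationship
print(np.exp(intercept))  # estimate of C (close to 3)
```
|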
H: Showing it is a joint probability density function
I have two random variables $X,Y$ with a joint density function $f_{X,Y}(x,y)=x+y$ if $(x,y)\in[0,1]\times [0,1]$ and otherwise $f_{X,Y}(x,y)=0$
I want to analyze this in several steps. First of all, I want to show it is a probability density function. I think the best way is to calculate $f_{X,Y}(x,y) = f_{Y\mid X}(y|x)f_X(x) = f_{X\mid Y}(x\mid y)f_Y(y)$, but how can we evaluate the conditional and marginal distributions?
Another idea is to show $\int_x \int_y f_{X,Y}(x,y) \; dy \; dx= 1.$
In the next step I would like to determine the cumulative distributions $F_X(x), F_Y(y)$
AI: Yes, you should check that
$$\int_0^1\int_0^1(x+y)\,dydx=1\,.$$
By definition, $F_X(t)=P(X<t)$, and this can be expressed by the integral:
$$F_X(t)=\int_0^t\int_0^1 (x+y)\,dydx\,.$$
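Carrying out the integrals (a step left implicit above): the inner integral is $\int_0^1 (x+y)\,dy = x+\tfrac12$, so
$$\int_0^1\int_0^1(x+y)\,dydx=\int_0^1\left(x+\tfrac12\right)dx=1,\qquad F_X(t)=\int_0^t\left(x+\tfrac12\right)dx=\frac{t^2+t}{2}\quad\text{for }t\in[0,1].$$ |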
H: What would $f(T)$ look like if ...
I know the result that if $T:V_F\to V_F$ is a linear operator then for any polynomial $f(x)\in F[x],~f(T)$ is a linear operator. Now my question is what $f(T)$ would look like if
$f(x)$ is the zero polynomial
$f(x)$ is a constant polynomial other than $0$
AI: If $f(x)$ is the zero polynomial, then $f(T)$ is the zero operator, i.e. it sends every vector to the zero vector.
If $f(x)$ is the constant polynomial $c$, then $f(T)$ is the operator $cI$, i.e. it sends a vector $\vec v$ to $c\vec v$. |
H: Prove that if $G$ is abelian, then $H = \{a \in G \mid a^2 = e\}$ is subgroup of $G$
Let $G$ be an abelian group. Prove that $H = \{a \in G \mid a^2 = e\}$ is subgroup of $G$, where
$e$ is the neutral element of $G$.
I need some help to approach this question.
AI: Hint: To show it is a subgroup you must show the following three things:
$1)$ $e \in H$.
$2)$ If $a$ and $b$ are in $H$ then so is $ab$.
$3)$ If $a$ is in $H$, then so is $a^{-1}$.
To show these three things, think about what it means for a particular element to be in $H$ and try and use this to show that the elements you want to be in $H$ actually are.
Once you have shown this is true, you might like to think about why we needed the group to be abelian, and see if you can find a (non-abelian) group where $H$ is not a subgroup.
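(For reference, the closure step $2)$ is exactly where commutativity enters: if $a^2=b^2=e$, then $(ab)^2 = abab = a(ba)b = a(ab)b = a^2b^2 = e$, using $ba=ab$.) |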
H: How to prove that: $\phi^{2}(0) \leq \|\phi\|^2_{L^{2}}+\|\phi'\|^{2}_{L^{2}}$
Let $\phi$ be a function and $\phi \in C^{\infty}(\mathbb{R}_{+},\mathbb{R})$ with compact support and $\mbox{supp }{\phi} \subset [0, \infty)$.
I want to prove that: $$\phi^{2}(0) \leq \|\phi\|^2_{L^{2}}+\|\phi'\|^{2}_{L^{2}}.$$
Someone give an indication that I should begin with:
$$\phi(x)-\phi(0)=\int_{0}^{x}{\phi'(t)}\mbox{dt}$$ and then to prove that:
$$\phi^{2}(0) \leq \int_{0}^{\infty}{|2\phi(x)\phi'(x)|dx} \mbox{ . }$$
Thanks :)
AI: First notice $2\phi \phi' = (\phi^2)'$:
$$\phi^2(\infty) - \phi^2(0) = \int^\infty_0 2\phi(x)\phi'(x)\,dx.$$
Compact support forces $\phi(x)\to 0$ as $x\to \infty$, so:
$$
\phi^2(0) = \left|\int^\infty_0 2\phi(x)\phi'(x)\,dx\right| \leq \int_{0}^{\infty}|2\phi(x)\phi'(x)|\,dx
$$
Lastly simply using $2ab\leq a^2 + b^2$:
$$
\int_{0}^{\infty}|2\phi(x)\phi'(x)|\,dx \leq \int_{0}^{\infty}(|\phi(x)|^2 + |\phi'(x)|^2)\,dx = \|\phi\|^2_{L^{2}}+\|\phi'\|^{2}_{L^{2}}.
$$ |
H: Area of a circle is $A = \pi r^2$. Is it possible that both $A$ and $r$ are perfect integers.
Can you produce an example where both the area of a circle and its radius are integers?
AI: $$r\neq 0\;,\quad r,\ \pi r^2\in\Bbb Z\implies \pi=\frac{\pi r^2}{r^2}\in\Bbb Q\;,\;\text{which is false.}$$ |
H: Rationalizing quotients
I have
$$\frac{\sqrt{10}}{\sqrt{5} - 2}$$
I have no idea what to do. I know some tricks for splitting square roots up and pulling out whole numbers: for instance, $\sqrt{27}$ is just $\sqrt{3\cdot 9}$, so I can pull out the nine, which becomes a $3$. Here though I have no such options; what can I possibly do?
AI: Hint
$$\frac{1}{\sqrt{5}-2}=\frac{\sqrt{5}+2}{(\sqrt{5}+2)}\frac{1}{(\sqrt{5}-2)} $$
and $(a+b)(a-b)=a^2-b^2$
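Applying the hint to the original quotient (spelled out): $$\frac{\sqrt{10}}{\sqrt{5}-2}=\frac{\sqrt{10}(\sqrt{5}+2)}{(\sqrt{5})^2-2^2}=\frac{\sqrt{50}+2\sqrt{10}}{1}=5\sqrt{2}+2\sqrt{10}.$$ |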
H: Trigonometrical Question
The question is to solve the following equation in the interval
$$0<\theta\leq 360$$
$$\tan(\theta) = \tan(\theta)(2+3\sin(\theta))$$
I got 199.5 and 340.5 as my answers like so:
$\tan(\theta) = \tan(\theta)(2+3\sin(\theta))$
$1=2+3\sin(\theta)$
$\sin\theta = -1/3 \implies \theta = 199.5$ or $340.5$
However in the answer scheme it gives 180, 199.5, 340.5 and 360
How do they get the 2 extra values?
AI: If $\tan\theta=0$, the equation obviously holds. That gives the solutions $180^\circ$ and $360^\circ$. We can only "cancel" $\tan\theta$ if $\tan\theta\ne 0$. When we cancel, we are potentially throwing away some roots.
When we cancel, we get $1=2+3\sin\theta$, or equivalently $\sin\theta=-\frac{1}{3}$. This has the solutions that you found.
Let's look at a related example, $x=x(x^2-3)$. This has the obvious solution $x=0$. For $x\ne 0$, the equation is equivalent to $1=x^2-3$, which has the solutions $x=\pm 2$.
Remark: Instead of immediately cancelling, we could rewrite the equation as $\tan\theta(2+3\sin\theta)-\tan\theta=0$, or equivalently as $\tan\theta(1+3\sin\theta)=0$. A product is $0$ if and only if one of the factors is $0$. This gives us the possibilities $\tan\theta=0$ and $1+3\sin\theta=0$.
"Cancelling" is quicker, but carries a risk of throwing away some solutions. |
H: Proving an inequality: $|1-e^{i\theta}|\le|\theta|$
We have been using this result without proof in my class, but I don't know how to prove it. Could someone point me in the right direction?
$$|1-e^{i\theta}|\le|\theta|$$
I believe this is true for all $\theta\in\mathbb{R}$. It is easy to show that the left side is bounded by 2 (triangle inequality), but I'm stuck otherwise.
AI: Hint:
$$
\mathrm e^{\mathrm i\theta}-1=\int_0^\theta\mathrm i\mathrm e^{\mathrm ix}\mathrm dx,\qquad|\mathrm i\mathrm e^{\mathrm ix}|\leqslant1\ (x\in\mathbb R).
$$
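Completing the hint: $$|1-\mathrm e^{\mathrm i\theta}|=\left|\int_0^\theta\mathrm i\mathrm e^{\mathrm ix}\,\mathrm dx\right|\leqslant\left|\int_0^\theta 1\,\mathrm dx\right|=|\theta|.$$ (Geometrically, $|1-\mathrm e^{\mathrm i\theta}|=2|\sin(\theta/2)|$ is the chord subtending an arc of length $|\theta|$ on the unit circle, and a chord is never longer than its arc.) |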
H: limit of $e^z$ at $\infty$
What's the limit of $e^z$ as $z$ approaches infinity?
I am given that the answer is "There is no such limit."
Is this correct, and if so, am I correct to demonstrate this by showing that as $y$ tends to infinity along the $y$-axis, the magnitude of $e^z$ remains $1$, i.e. it doesn't have infinite magnitude, thus it cannot be tending to infinity? And does this mean $e^z$ has an essential singularity at infinity?
AI: Use the fact: $f$ has a pole of order $k$ at $z=z_0$ if and only if
$$
\lim_{z\to z_0}(z-z_0)^k f(z)=L(\ne 0).
$$
Replacing $z$ by $\frac 1 z$, consider the function $f(z)=e^{\frac 1 z}$.
For each $k=0, 1, 2, \ldots $, the limit
$$
\lim_{z\to 0}z^ke^{\frac 1 z}
$$
does not exist (it suffices to check when $z$ is real).
Thus, $f$ has no pole (nor a removable singularity) at the origin; it has an essential singularity at the origin.
(Or more simply, this can be checked by expanding $f$
as a Laurent series, using the Taylor series formula.)
Therefore, $e^z$ has an essential singularity at $\infty$. |
H: Proving a $Z$-transform
I am having trouble demonstrating the $Z$ transform of $a^{n-1}u(n-1)$ is $\frac{1}{(z-a)}$, as it says in this table.
I try using the definition of the z transform, but it comes out different than what the table says:
$$\sum_{k=-\infty}^{\infty}a^{k-1}u(k-1)z^{-k}=\sum_{k=1}^{\infty}a^{k-1}z^{-k}=\frac{1}{a}\sum_{k=1}^{\infty}(\frac{a}{z})^k$$
This is a geometric series, so the result should be:
$$\frac{1}{a}\frac{1}{1-\frac{a}{z}}=\frac{z}{a(z-a)}$$
Why is my answer coming out different than what it says in the table?
AI: The geometric series $\sum_{k\geq k_0} r^k$ converges if and only if $|r|<1$. You have a confusion regarding its actual sum, which depends on where the summation starts. In particular
$$
\sum_{k\geq 1}r^k=\frac{r}{1-r}\qquad\mbox{and}\qquad\sum_{k\geq 0}r^k=\frac{1}{1-r}.
$$
More generally,
$$
\sum_{k\geq k_0}r^k=\frac{r^{k_0}}{1-r}.
$$
To see where this all comes from, you just need to observe that the partial sums satisfy, for $K\geq k_0$:
$$
\sum_{k=k_0}^Kr^k=r^{k_0}(1+r+\ldots+r^{K-k_0})=r^{k_0}\left(\frac{1-r^{K-k_0+1}}{1-r}\right)=\frac{r^{k_0}-r^{K+1}}{1-r}.
$$
In your case, you do get
$$
\frac{1}{a}\sum_{k\geq 1}\left(\frac{a}{z}\right)^k=\frac{1}{a}\cdot \frac{\frac{a}{z}}{1-\frac{a}{z}}=\frac{1}{z-a}.
$$ |
H: Central limit theorem - std dev away from mean
I was reading about the CLT and found something that I think people use interchangeably. On one hand I found that 68% of the means are 1 standard deviation away and 95% are 2 std devs away. On the other hand, if I take a look at the Z-table I found that 68% is approx 1 std dev whereas 95% is 1.96 std dev away.
Can someone clarify this please? Are these values of "std devs away" interchangeable? Or better yet, what am I confusing?
AI: Both are approximations, though the second is a very good one. The actual percentage of the area under the normal curve that is within two standard deviations of the mean is about $95.44\%$; the cutoff within which you get $95\%$ of the area is very close to $1.96$ standard deviations on each side of the mean.
The rule of thumb that says that about $68\%$ of the area is within one standard deviation, about $95\%$ is within two, and about $99.7\%$ is within three is just that: a rule of thumb. These are approximate numbers that are easy to remember.
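Both figures are easy to reproduce with SciPy (illustrative check):

```python
from scipy.stats import norm

print(norm.cdf(2) - norm.cdf(-2))  # area within 2 std devs: ~0.9545
print(norm.ppf(0.975))             # cutoff for the central 95%: ~1.96
```
|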
H: What is the perimeter of a sector?
I don't understand this.
So we have:
\begin{align}
r &= 12 \color{gray}{\text{ (radius of circle)}} \\
d &= 24 \text{ (r}\times2) \color{gray}{\text{ (diameter of circle)}} \\
c &= 24\pi \text{ (}\pi\times d) \color{gray}{\text{ (circumference of circle)}} \\
a &= 144\pi \text{ (}\pi\times r^2) \color{gray}{\text{ (area of circle)}}
\end{align}
And we have:
\begin{align}
ca &= 60^\circ \color{gray}{\text{ (Central Angle of sector)}} \\
ratio &= \frac{60}{360} = \frac{1}{6} \color{gray}{\text{ (ratio of ca to circle angle which is 360 degrees)}}
\end{align}
So now we can calculate:
\begin{align}
al &= \frac{1}{6} \times 24\pi = 4\pi \color{gray}{\text{ (arc length of sector = ratio X circumference of circle)}} \\
sa &= \frac{1}{6} \times 144\pi = 24\pi \color{gray}{\text{ (sector area = ratio X area of circle)}}
\end{align}
So my question is: what is meant by the perimeter of a sector? Is it the arc length or the area of a sector? And what is $24 + 4\pi$?
AI: The perimeter of the sector includes the length of the radius $\times 2$, as well as the arc length. So the perimeter is the length "around" the entire sector, the length "around" a slice of pizza, which includes its edges and its curved arc.
The arc length is just the curved portion of the circumference; the sector perimeter is the length of line $\overline{AC} = r$ plus the length of line $\overline{BC} = r$, plus the length of the arc ${AOC}$.
The circumference of the circle is the total arc length of the circle.
Length is one-dimensional, the length of a line wrapped around the circle. Area is two-dimensional: all of what's inside the circle. In your example, the sector's perimeter is $2r + 4\pi = 24 + 4\pi$, which is exactly the quantity you asked about. |
H: Integrate: $\int_0^{\infty}\frac{\sinh (ax)}{\sinh x} \cos (bx) dx$
Q: If $|a|< 1$ and $b>0$, show that
$$\int_0^{\infty}\frac{\sinh (ax)}{\sinh x} \cos (bx) dx = \frac{\pi \sin (\pi a)}{2 (\cos (\pi a)+\cosh (\pi b))}$$
I need to evaluate the above integral by the method of contours. I tried to use this contour on this question, but at $2\pi i$, $\sinh(ax)$ changes to $\sinh(ax+2a\pi i)$ and I have difficulty taking out $\sinh(ax)$. Please give hints on which contour to use. Thanks in advance!
ADDED::
Considering $-R \to R \to R + \pi i \to -R + \pi i \to -R$ with bumps at $0$ and $\pi i$ to avoid the singularities.
$$(1 + e^{(a+ib)\pi i}) \int_{-\infty}^{\infty}\frac{e^{(a+bi)x}}{\sinh x}dx = -\pi i(1 - e^{(a+ib)\pi i}) \hspace{1 cm}(1)$$
$$(1 + e^{(-a+ib)\pi i}) \int_{-\infty}^{\infty}\frac{e^{(-a+bi)x}}{\sinh x}dx = -\pi i(1 - e^{(-a+ib)\pi i}) \hspace{1 cm}(2)$$
With a bit of algebra, we get
\begin{align*}
\int_{-\infty}^{\infty}\frac{e^{ax}-e^{-ax}}{\sinh x}e^{ibx}dx &= 2\pi i \left( \frac{1}{1 + e^{(a+bi)\pi i}} - \frac{1}{1 + e^{(-a+bi)\pi i}} \right)\\
&= 2 \pi \frac{\sin (a\pi)}{\cosh (b\pi) + \cos(a\pi)}
\end{align*}
From which we get the desired result.
AI: Use parity to extend the domain of integration from $-\infty$ to $\infty$. Shift the contour of integration down by $\frac{i\pi}{2}$ to avoid pole $x=0$. Then compute separately four integrals (actually, it suffices to compute one of them and then to change parameters accordingly)
$$\int_{-\infty-\frac{i\pi}{2}}^{\infty-\frac{i\pi}{2}}\frac{e^{(\pm a\pm ib)x}dx}{\sinh x}$$
using the rectangle $$-R-\frac{i\pi}{2}\rightarrow R-\frac{i\pi}{2}\rightarrow R+\frac{i\pi}{2} \rightarrow -R+\frac{i\pi}{2} \rightarrow -R-\frac{i\pi}{2}$$ with $R\rightarrow\infty$.
For example, let us compute
\begin{align}
\int_{\text{rectangle}}\frac{e^{(a+ib)z}dz}{\sinh z}\substack{R\rightarrow\infty\\=}\int_{-\infty-\frac{i\pi}{2}}^{\infty-\frac{i\pi}{2}}\frac{e^{(a+ib)z}dz}{\sinh z}-
\int_{-\infty+\frac{i\pi}{2}}^{\infty+\frac{i\pi}{2}}\frac{e^{(a+ib)z}dz}{\sinh z}=\\
=\left(1+e^{\pi i (a+ib)}\right)\int_{-\infty-\frac{i\pi}{2}}^{\infty-\frac{i\pi}{2}}\frac{e^{(a+ib)z}dz}{\sinh z}
\end{align}
On the other hand, the integral over rectangle is equal to $2\pi i$ (the only pole inside is $z=0$ and the residue is $1$). Therefore,
$$\int_{-\infty-\frac{i\pi}{2}}^{\infty-\frac{i\pi}{2}}\frac{e^{(a+ib)z}dz}{\sinh z}=\frac{2\pi i}{1+e^{\pi i (a+ib)}},$$
and the initial integral evaluates to
$$\frac{2\pi i}{8}\left(\frac{1}{1+e^{\pi i (a+ib)}}-\frac{1}{1+e^{\pi i (-a+ib)}}+\frac{1}{1+e^{\pi i (a-ib)}}-\frac{1}{1+e^{\pi i (-a-ib)}}\right)=\frac{\pi\sin\pi a}{2\left(\cos\pi a+\cosh\pi b\right)}.$$
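As with any contour computation, a numerical spot check is cheap (illustrative SciPy sketch):

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.5, 1.0  # any |a| < 1, b > 0
# sinh(ax)/sinh(x) rewritten as exp((a-1)x) * expm1(-2ax)/expm1(-2x)
# to avoid overflow of sinh for large x
f = lambda x: np.exp((a - 1)*x) * np.expm1(-2*a*x) / np.expm1(-2*x) * np.cos(b*x)
lhs, _ = quad(f, 0, np.inf)
rhs = np.pi*np.sin(np.pi*a) / (2*(np.cos(np.pi*a) + np.cosh(np.pi*b)))
print(lhs, rhs)  # both approximately 0.1355
```
|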
H: If $f$ is differentiable at $x = x_0$ then $f$ is continuous at $x = x_0$.
Claim: if $f$ is differentiable at $x = x_0$ then $f$ is continuous at $x = x_0$.
Please, see if I made some mistake in the proof below. I mention some theorems in the proof:
The condition for $f(x)$ to be continuous at $x=x_0$ is $\lim\limits_{x\to x_0} f(x)=f(x_0)$.
(1) If $f(x)$ is differentiable at $x=x_0$, then $f'(x_0)=\lim\limits_{x\to x_0} \dfrac{f(x)-f(x_0)}{x-x_0}$ exists and the function is defined at $x=x_0$.
(2) Therefore, by the Limit Linearity Theorem, $\lim\limits_{x\to x_0} f(x)$ exists and we'll show it equals $f(x_0)$.
(3) We'll do this by the precise limit definition: given $\epsilon>0$, there exists $\delta>0$ such that if $0<|x-x_0|<\delta$, then $|f(x)-f(x_0)|<\epsilon$. As this limit exists by (2), we can make $f(x)$ as close to $f(x_0)$ as one wishes, therefore $\lim\limits_{x\to x_0} f(x)=f(x_0)$, which satisfies the condition for $f(x)$ to be continuous at $x=x_0$. The end.
AI: $$\lim\limits_{x\to x_0}f(x)=\lim\limits_{x\to x_0}\left(f(x_0)+(x-x_0)\cdot\dfrac {f(x)-f(x_0)}{x-x_0}\right)=$$
$$\lim\limits_{x\to x_0}f(x_0)+\lim\limits_{x\to x_0}(x-x_0)\cdot\lim\limits_{x\to x_0}\dfrac {f(x)-f(x_0)}{x-x_0}=$$
$$f(x_0)+0\cdot f'(x_0)=f(x_0)$$ |
H: On boundedly invertible
Let $T:X\to X$ be a bounded and invertible linear operator. Show that $\inf_{x: \|x\|=1}\|T(x)\|\geq M$ if and only if $\sup_{x: \|x\|=1}\|T^{-1}(x)\|\leq N$, where $M, N>0$ with $N=1/M$.
AI: Let's show it in one direction:
$\displaystyle \sup_{x:\|x\|=1}\|T^{-1}(x)\|= \sup_{x:\|x\|>0}\frac{\|T^{-1}(x)\|}{\|x\|}.$ Now replace $x$ by $T(y)$ - we can do it because our operator is invertible; furthermore, for each $x$ such $y$ is unique. Hence our equality continues
$\displaystyle = \sup_{y:\|y\|>0}\frac{\|y\|}{\|T(y)\|}=\sup_{y:\|y\|=1}\frac{1}{\|T(y)\|}=\frac{1}{\inf_{y:\|y\|=1} \|T(y)\| }=\frac{1}{M} $.
The proof in the other direction is done likewise. |
H: Completing the square with simple polys
I am supposed to rewrite $x^2 + x + 1$ by completing the square. I don't really know what that means, but I know that if I add 3 at the end of this I get
$$(x + 2) (x - 1) + 3.$$ This is the same as the original now, but the answer isn't right. What is wrong with what I did? It seems a lot cleaner than the answer.
AI: In general, $$\color{purple}{(ax+b)^2= a^2x^2+2abx+b^2}.$$
What you have is $x^2 + x + 1$. To complete the square, what you want to do is find $a$, $b$, $c$ in such a way that $x^2 + x + 1 = (ax + b)^2 + c$.
Now,
$$\begin{align}
x^2 + x + 1 & = (ax + b)^2 + c \\
x^2 + x + 1 & = a^2x^2+2abx+b^2 + c \\
\end{align}$$
Now let's color the last equation. We then have $$ \color{red}{x^2} + \color{orange}{x} + \color{blue}{1} = \color{red}{a^2x^2}+\color{orange}{2abx}+\color{blue}{b^2 + c}.$$
Then, for the colored equation above, we equate the coefficients of left hand side to the right hand side of the equation. Then we have three things:
$$\begin{align} \tag 1
\color{red}{x^2} & \equiv \color{red}{a^2x^2} \\ \tag2
\color{orange}x & \equiv \color{orange}{2abx} \\ \tag3
\color{blue}1 & \equiv \color{blue}{b^2 + c} \\
\end{align}$$
Solving equation $(1)$, we have
$x^2 \equiv a^2x^2 \Rightarrow a^2 = 1 \Rightarrow a= 1$ or $a= -1$,
Solving equation $(2)$, we have
$x \equiv 2abx \Rightarrow 1 = 2ab \Rightarrow b = \dfrac{1}{2a}$. From above, if $a= 1$ then $b= \dfrac{1}{2}$. If $a= -1$ then $b= -\dfrac{1}{2}$.
Solving equation $(3)$, we have $1 \equiv b^2 + c \Rightarrow c= 1 - b^2$. From above, if $b= \dfrac{1}{2}$ then $c= \dfrac{3}{4}$. If $b= -\dfrac{1}{2}$ then $c= \dfrac{3}{4}$ as well.
In conclusion, we have the set of solutions $\color{navy}{a=1, b= \dfrac{1}{2}, c= \dfrac{3}{4}}$ or $\color{maroon}{a=-1, b= -\dfrac{1}{2}, c= \dfrac{3}{4}}$.
Check the answer, plug the values of $a$, $b$, $c$ into $(ax+b)^2+c$ and expand it. If you get $x^2 + x + 1$, then it is correct.
For $\color{navy}{a=1, b= \dfrac{1}{2}, c= \dfrac{3}{4}}$, we have $$\left(x+\frac{1}{2}\right)^2+ \frac{3}{4}= x^2 + 2(1)\left(\frac{1}{2}\right)x + \left(\frac{1}{2}\right)^{2} + \frac{3}{4} = x^2 + x + 1.$$
For $\color{maroon}{a=-1, b= -\dfrac{1}{2}, c= \dfrac{3}{4}}$, we have $$\left(-x-\frac{1}{2}\right)^2+ \frac{3}{4}= (-x)^2 + 2(-1)\left(-\frac{1}{2}\right)x + \left(\frac{1}{2}\right)^{2} + \frac{3}{4} = x^2 + x + 1.$$
So the answer to your question is either $$\left(x+\dfrac{1}{2}\right)^2+ \dfrac{3}{4}$$ or $$\left(-x-\frac{1}{2}\right)^2+ \frac{3}{4}.$$
The first one looks prettier. I would go with that. Hope this helps.
Some more examples. Let's start with an easy one.
Example 1
Suppose someone asks you to complete the square of $$x^2+2x+3. \tag{A1}$$ Now consider $(x+1)^2$. Expanding that, we have $$(x+1)^2 = x^2 + 2x+ 1. \tag{A2}$$ You want to complete the square of $x^2+2x+3$. Notice that to get $x^2+2x+3$ from equation $(A2)$, all you have to do is add the number $2$ to both sides of equation $(A2)$, since both $(A1)$ and $(A2)$ both have $x^2$ and $2x$. Then
$$\begin{align}(x+1)^2 +\color{red}2 &= x^2 + 2x+ 1 +\color{red}2 \\
(x+1)^2 +2 & = x^2 + 2x+ 3.
\end{align}$$ And there you have it, the answer is $(x+1)^2 +2$.
Example 2
Now, back to your question. You want to complete the square of $$x^2+x+1. \tag{B}$$ Now consider $(x+1)^2$. Expanding that, we have $$(x+1)^2 = x^2 + 2x+ 1. \tag {C}$$ Equation $(B)$ and $(C)$ both have $x^2$, but equation $(B)$ has $x$ in it, while equation $(C)$ has $2x$. We then have to try another expansion. So you consider $\left(x+\frac{1}{2}\right)^2$. We then get $$\left(x+\frac{1}{2}\right)^2= x^2 + x + \frac{1}{4} \tag{D}$$ Aha! They now both have $x$. All you have to do now is add a number to both sides of equation $(D)$ so that the right hand side equals $x^2+x+1$. So you add $\frac{3}{4}$ to both sides of equation $(D)$. Then you have
$$\begin{align}
\left(x+\frac{1}{2}\right)^2 & = x^2 + x + \frac{1}{4} \\
\left(x+\dfrac{1}{2}\right)^2+ \color{red}{\dfrac{3}{4}} & = x^2 + x + \frac{1}{4} + \color{red}{\dfrac{3}{4}} \\
\left(x+\dfrac{1}{2}\right)^2+ {\dfrac{3}{4}} & = x^2 + x + 1.
\end{align}$$
And so the answer is $\left(x+\dfrac{1}{2}\right)^2+ {\dfrac{3}{4}}$.
Example 3
Suppose you want to complete the square of $$9x^2+3x+1. \tag{E}$$ You know $(3x)^2=9x^2$, so you consider $(3x+1)^2$. Expanding that, we have $$(3x+1)^2 = 9x^2 + 6x+ 1. \tag {F}$$ Equation $(E)$ and $(F)$ both have $9x^2$, but equation $(E)$ has $3x$ in it, while equation $(F)$ has $6x$. We then have to try another expansion. So you consider $\left(3x+\frac{1}{2}\right)^2$. We then get $$\left(3x+\frac{1}{2}\right)^2= 9x^2 + 3x + \frac{1}{4} \tag{G}$$ They now both have $3x$. All you have to do now is add a number to both sides of equation $(G)$ so that the right hand side equals $9x^2+3x+1$. So you add $\frac{3}{4}$ to both sides of equation $(G)$. Then you have
$$\begin{align}
\left(3x+\frac{1}{2}\right)^2 & = 9x^2 + 3x + \frac{1}{4} \\
\left(3x+\dfrac{1}{2}\right)^2+ \color{red}{\dfrac{3}{4}} & = 9x^2 + 3x + \frac{1}{4} + \color{red}{\dfrac{3}{4}} \\
\left(3x+\dfrac{1}{2}\right)^2+ {\dfrac{3}{4}} & = 9x^2 + 3x + 1.
\end{align}$$
And so the answer is $\left(3x+\dfrac{1}{2}\right)^2+ {\dfrac{3}{4}}$.
With practice, it will get easier. |
H: Math question efficiency
A solar collector has 1000 Btu/min of radiant energy available on a clear sunny day. The collector can transfer 450 Btu/min to a storage tank. What is the efficiency of the system?
I used the formula $\text{efficiency} = \dfrac{\text{energy output}}{\text{energy input}} \times 100\%$.
I plugged it all in: $\;\dfrac{450}{1000}\times 100\%$.
I got $45\%$, however on my answer choices it says the answer is $48\%$?
Exact Question. Maybe I used the wrong formula?
A solar collector has 1000 Btu/min of radiant energy available on a clear sunny day. The collector can transfer 450 Btu/min to a storage tank. What is the efficiency of the system?
A. 17%
B. 25%
C. 37%
D. 48%
AI: You used the correct formula, and your calculation is the correct answer: $45\%$.
There must be a misprint/typo in the solution manual/text. |
H: How to sum numerator and denominator of a fraction?
I want to do sum over this. Can apply the summation to top and bottom separately?
$$\sum\limits_{i=1}^{n} \frac{-a(x_i-\mu)^2}{x_i}$$
$$=\frac{\sum\limits_{i=1}^{n}-a(x_i-\mu)^2}{\sum\limits_{i=1}^{n}x_i}$$
Is this correct?
Where can I find the rules to summations of a division, product, or addition?
Edit:
I want to solve for y,
$$\frac{5}{y}=\sum\limits_{i=1}^{n} \frac{-a(x_i-\mu)^2}{x_i}$$
$$\frac{y}{5}=\sum\limits_{i=1}^{n} \frac{x_i}{-a(x_i-\mu)^2}$$
Is this correct?
AI: No. For example $ \frac{3 + 2}{1 + 1} = \frac{5}{2} \neq 5 = \frac{3}{1} + \frac{2}{1} $. There are no basic rules for division or products. Of course, $ \sum (a_n + b_n) = \sum a_n + \sum b_n $, which follows from the definition.
One trick to use with products though is that $ \ln \prod a_n = \sum \ln a_n $. |
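The same warning applies to your edit: you cannot take reciprocals term by term, i.e. in general $$\left(\sum_{i=1}^{n} \frac{a_i}{b_i}\right)^{-1} \neq \sum_{i=1}^{n} \frac{b_i}{a_i}.$$ For instance $\left(\frac{1}{1}+\frac{1}{2}\right)^{-1}=\frac{2}{3}$, while $\frac{1}{1}+\frac{2}{1}=3$. To solve for $y$, simply invert the whole sum at once: $$y = 5\Bigg/\sum_{i=1}^{n} \frac{-a(x_i-\mu)^2}{x_i}.$$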
H: Packing circles on a line
On today's TopCoder Single-Round Match, the following question was posed (the post-contest write-up hasn't arrived yet, and their explanations often leave much to be desired anyway, so I thought I'd ask here):
Given a maximum of 8 marbles and their radii, how would you put them next to each other on a line so that the distance between the lowest point on the leftmost marble and the lowest point on the rightmost marble is as small as possible?
8! is a small enough number for brute forcing, so we can certainly try all permutations. However, could someone explain to me, preferably in diagrams, how to calculate that distance value given a configuration?
Also, any kind of background information would be appreciated.
AI: If I understand well, the centers of the marbles are on the line.
In that case, we can fix a coordinate system such that the $x$-axis is the line, and the center $C_1$ of the first marble is the origin. Then, its lowest point is $P=(0,-r_1)$. Calculate the coordinates of the centers of the next circles:
$$C_2=(r_1+r_2,0),\ C_3=(r_1+2r_2+r_3,0),\ \dots,\ \\C_n=(r_1+2r_2+\ldots+2r_{n-1}+r_n,\ 0)$$
The lowest point of the last circle is $Q=(r_1+2r_2+..+2r_{n-1}+r_n,-r_n)$. Now we can use the Pythagorean theorem to calculate the distance $PQ$:
$$PQ^2=(r_1+2r_2+..+2r_{n-1}+r_n)^2+(r_1-r_n)^2=
\\=\left(2\sum_{k=1}^n r_k-(r_1+r_n)\right)^2+(r_1-r_n)^2\,.$$
It follows that the distance is independent of the order of the intermediate marbles, only the first and the last one counts. Hopefully from here you can calculate the minimum of this expression. |
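If you still want to brute-force it (say, to sanity-check the formula above), here is a minimal Python sketch; the function names are my own, not anything from the contest:

    from itertools import permutations
    from math import hypot

    def span(radii):
        # Distance between the lowest points of the first and last marble,
        # with all centers on the x-axis and consecutive marbles tangent.
        width = 2 * sum(radii) - radii[0] - radii[-1]
        return hypot(width, radii[0] - radii[-1])

    def min_span(radii):
        # At most 8! = 40320 permutations, cheap to enumerate.
        return min(span(p) for p in permutations(radii))

Consistent with the computation above, span only depends on which two radii end up on the outside, so the full enumeration is really just a check.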
H: Differential equation (2nd order) with divergent coefficients.
I have this equation:
$$x(x-1)y''+6x^2y'+3y=0$$
I try to get the series for the solution around $x=0$, using Frobenius (however it's written). The first solution must be of the form:
$$y_1=\sum_{n=0}^\infty c_nx^{n+1}$$
If I try to get the coefficients, I get a divergent sequence of coefficients, in which I can't see any sense. The first one is $c_1=\frac{3}{2}c_0$; I calculated the first 1000 by computer and they never go down, although I think you can deduce that from the expression for the coefficients:
$$c_{n+2}=\frac{[(n+2)(n+1)+3]c_{n+1}+6(n+1)c_{n}}{(n+3)(n+2)}$$
since the numerator grows "like" a factorial in $c_{n+1}$ and $c_{n}$.
So... how would you get the solutions by series for this? What does it mean that the coefficients are divergent?
AI: Without looking at the actual form of your specific coefficients: the fact that the coefficients of a power series go to infinity is not a "curse". It is perfectly possible, although it implies that the radius of convergence will be at most $1$ (only a radius of $0$ would be a problem).
For instance, consider $a_n = n$: the radius of convergence is $R=1$, as $\sum_{n=1}^\infty nx^n$ is well-defined for all $|x|<1$ (a characterization is actually $R=\sup \{r \geq 0: a_n r^n \to 0\}$).
Even more striking, $a_n = 2^n$: the radius of convergence is $R=\frac{1}{2}$. |
H: $\sum \frac{\ln(n)}{\sqrt{n^5}}$ test for convergence
Let $\sum a_{n}=\sum \frac{\ln(n)}{\sqrt{n^5}}$. To determine whether the series converges or not, I had some difficulty finding the proper series against which to test the given one.
After some work, I found the sequence $b_{n}=\frac{1}{\sqrt{n^3}}$, whose series converges!
Then I applied the limit comparison test.
$$\lim_{n \to \infty} \frac{\ln(n)}{\sqrt{n^5}}\cdot \sqrt{n^3}=\lim_{n \to \infty} \frac{\ln(n)}{\sqrt{n^{2}}}=\lim_{n \to \infty} \frac{\ln(n)}{n}=0$$
If the limit equals $0$ and $\sum b_{n}$ converges, then $\sum a_{n}$ converges too.
My main difficulty was finding the proper $b_{n}$. Is there an easy way to find it? Thanks
AI: You should have in your repertoire the fact that exponential functions grow faster than any power, and conversely that logarithms grow slower than any power, including fractional powers. That is, $\ln n <\sqrt n$ for sufficiently big $n$. This leaves you simply with $\frac 1{n^2}$ (or even $\frac1{n^{5/2-\epsilon}}$ for any positive $\epsilon$).
H: If $0<a<1$, $0<b<1$, $a+b=1$, prove that $a^{2b} + b^{2a} \le 1$
I have been really struggling with this problem ... please help!
Let a,b be real numbers. If $0<a<1, 0<b<1, a+b=1$, then prove that $a^{2b} + b^{2a} \le 1$
What I have thought so far:
without loss of generality we can assume that $a \le b$, since $a^{2b} + b^{2a}$ is symmetric in $a$ and $b$. This gives us $0<a \le 1/2, 1/2 \le b<1$. But then I am stuck.
I also thought of solving by Lagrange's multiplier method, but it produces huge calculations.
Any help is welcome :)
AI: Not having a good day with websites. I have downloaded what seems to be the source of the question, a 2009 paper by Vasile Cirtoaje which is about 14 pages. Then a short answer, in a four-page document by Yin Li, probably from the same time or not much later. The question was posted on MO by a selfish guy who knew the status of the problem but was hoping for a better answer; a complete answer was also given there in 2010 by fedja: https://mathoverflow.net/questions/17189/is-there-a-good-reason-why-a2b-b2a-1-when-ab1
I have both pdfs by Cirtoaje and Li, email me if you cannot find them yourself.
This is not a reasonable homework question, so I would like to know more of the story, what course for example.
========================
Yin Li of Binzhou University, Shandong, 2009 or 2010 [excerpts from his note appeared here as images]. I believe he just spelled Jensen's incorrectly; see JENSEN.
======================== |
H: Trigonometric problem
I'm trying to find the complex roots of $x^2+1$.
$x^2+1=0\rightarrow x^2=-1 \rightarrow x = \sqrt{-1} \rightarrow i$
So, $w^2 = 0 + 1i$
$p = \sqrt{0^2+1^2} = 1$
$\theta = \tan^{-1} \left( \frac{1}{0} \right )$
But I don't know what I can do to get the $\tan^{-1}(\frac10)$
AI: $\theta = \tan^{-1}(\infty) =\dfrac{\pi}2$ here, because the real part is $0$ and the imaginary part is positive, so the point $0+1i$ lies on the positive imaginary axis. If the real part were $0$ and the imaginary part negative, we would instead have $\theta=-\dfrac{\pi}{2}$.
H: What does $\vdash s \rightarrow (\neg s\rightarrow t)$ mean?
What does this statement mean $\vdash s \rightarrow (\neg s\rightarrow t)$?
And how can I prove it?
AI: Writing $T\vdash\varphi$ means that if we assume $T$ then we can prove $\varphi$. If $T$ is omitted then this means that without any assumptions we can prove $\varphi$, that is to say that $\varphi$ is logically true.
In this case it means that $s\rightarrow(\lnot s\rightarrow t)$ is true, regardless to the truth values of $s$ and $t$.
I'll leave you with the task of verifying that this statement is indeed true, but I'm going to give you a small hint:
Hint: Use the truth table of $\rightarrow$ and calculate the possible truth values of the statement based on the possible combinations of truth values of $s$ and $t$. |
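For checking your work afterwards, the completed table looks like this:
$$\begin{array}{cc|c|c}
s & t & \lnot s\rightarrow t & s\rightarrow(\lnot s\rightarrow t)\\\hline
T & T & T & T\\
T & F & T & T\\
F & T & T & T\\
F & F & F & T
\end{array}$$
Since the last column is $T$ in every row, the statement is a tautology, which is exactly what $\vdash s \rightarrow (\neg s\rightarrow t)$ asserts.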
H: How do you determine the particular solution to a non-homogeneous DE by undetermined coefficients?
I am asked to solve
$y'' +2y' = 2x + 5 -e^{-2x}$
I can find the general solution easily, but the particular solution in this case is hard to find. Here's the answer. I don't know why they got $Ax^2 + Bx + Cxe^{-2x}$
I know the $x$ in $Cxe^{-2x}$ is due to the $e^{-2x}$ being found in the general solution, but I don't know where the $Ax^2 + Bx$ part comes from.
Shouldn't it be $Ax + B + Cxe^{-2x}$, instead?
AI: The characteristic equation of the homogeneous problem is $r^2+2r=0$, with roots $r=0$ and $r=-2$, so the homogeneous solution is $y_h=c_1+c_2e^{-2x}$. For the right-hand side $2x+5$ the usual guess would be $Ax+B$, but the constant $B$ duplicates the homogeneous solution $c_1$, so the whole polynomial guess must be multiplied by $x$, giving $Ax^2+Bx$. Likewise $e^{-2x}$ duplicates $c_2e^{-2x}$, so the guess $Ce^{-2x}$ becomes $Cxe^{-2x}$.
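To see this work, substitute $y_p = Ax^2+Bx+Cxe^{-2x}$ into the equation (a quick sketch of the computation):
$$y_p' = 2Ax + B + Ce^{-2x} - 2Cxe^{-2x},\qquad y_p'' = 2A - 4Ce^{-2x} + 4Cxe^{-2x},$$
so
$$y_p'' + 2y_p' = 4Ax + (2A+2B) - 2Ce^{-2x} = 2x + 5 - e^{-2x},$$
giving $A=\tfrac12$, $B=2$, $C=\tfrac12$, i.e. $y_p = \tfrac12 x^2 + 2x + \tfrac12 x e^{-2x}$.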
H: Why the terms "unit" and "irreducible"?
I'm trying to understand why in a ring we choose the names unit to an invertible element and irreducible element in this definition
Maybe historical reasons?
For example, I suppose the second definition are named as prime elements because of the analogy to prime numbers.
Thanks in advance.
AI: Firstly, units behave "like $1$", which explains their name.
Historically, prime numbers were originally defined as what is here called irreducible. And for integers, "irreducible" and "prime" coincide.
So for general ring theory, where they do not coincide in general, one had to coin at least one new name. Since "irreducible" was established for (rings of) polynomials, and this matches well the notion that these elements cannot be (nontrivially) split into several factors, the name "prime" could be used for the other notion. (Alternatively, one would have had to coin a name for if-it-divides-a-product-it-divides-a-factor numbers.)
H: How to show that a valid inner product on V is defined with the formula $[x, y] = \langle Ax, Ay\rangle $?
Let $A \in L(V,W)$ be an injection and $W$ an inner product space with the inner product $\langle \cdot,\cdot\rangle $. Prove that a valid inner product on $V$ is defined with the formula $[x, y] = \langle Ax, Ay\rangle $
$L(V, W)$ = The set of all linear mappings (linear operators) from V to W
To prove this, if I am correct, I need to show that the four properties of an inner products space apply on this formula:
1. $\langle x, y \rangle = \overline{\langle y,x\rangle }$
2. $\langle \alpha x, y\rangle = \alpha\langle x,y\rangle $
3. $\langle x+y,z\rangle = \langle x,z\rangle + \langle y,z\rangle $
4. $\langle x,x\rangle \space \ge 0 \space \space \forall x$
4.' $\langle x,x\rangle \space = 0 \Longleftrightarrow x=0$
4.
$ [x, x] = \langle Ax, Ax\rangle , Ax \in W$, and since $W$ is an inner product space, $\langle Ax, Ax\rangle \space \ge 0$ implies $[x, x] \ge 0$.
4.'
Since $W$ is an inner product space, $[x,x] = 0 \implies \langle Ax, Ax \rangle = 0 \implies Ax = 0$, and since $A$ is an injection, $Ax = 0 \implies x = 0$. Conversely, $x = 0$ gives $[x,x] = \langle A0, A0\rangle = 0$, so $[x,x] = 0 \Longleftrightarrow x=0$.
3.
$[x+y,z] = \langle A(x+y), Az\rangle = \langle Ax + Ay, Az\rangle$ (since $A$ is linear) $= \langle Ax, Az\rangle + \langle Ay, Az\rangle$ (since $Ax, Ay, Az \in W$ and $W$ is an inner product space) $= [x,z] + [y,z]$
2.
$[\alpha x, y] = \langle A(\alpha x), Ay\rangle = \langle \alpha (Ax), Ay \rangle$ (since $A$ is linear) $= \alpha \langle Ax, Ay\rangle$ (since $W$ is an inner product space) $= \alpha [x, y]$
1.
$[x, y] = \langle Ax, Ay \rangle = \overline{\langle Ay, Ax\rangle }$ (since $W$ is an inner product space) $= \overline{[y, x]}$
But this seems to me a little too easy; did I maybe conclude something that can't be concluded so easily, or is my approach to proving this completely wrong?
AI: Yes, that's it all. So called 'routine verification'. $A$ embeds $V$ into $W$, and $[-,-]$ is just the inherited inner product from $W$, along this embedding. |
H: Closed form for n-th anti-derivative of $\log x$
Is it possible to write a closed-form expression with free variables $x, n$ representing the n-th anti-derivative of $\log x$?
AI: $$\log^{(-n)}x=\frac{x^n}{n!}(\log x-H_n),$$
where $H_n$ is the harmonic number: $H_n=\sum_{k=1}^n k^{-1}=\gamma+\psi(n+1)=\gamma+\frac{\Gamma'(n+1)}{n!}$
Proof: By induction. |
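Concretely, the induction step is just differentiation of the claimed formula:
$$\frac{d}{dx}\left[\frac{x^n}{n!}\left(\log x-H_n\right)\right]=\frac{x^{n-1}}{(n-1)!}\left(\log x-H_n\right)+\frac{x^{n-1}}{n!}=\frac{x^{n-1}}{(n-1)!}\left(\log x-H_n+\frac1n\right)=\frac{x^{n-1}}{(n-1)!}\left(\log x-H_{n-1}\right),$$
with base case $n=1$: $\dfrac{d}{dx}\,x(\log x-1)=\log x$.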
H: Definition of open set/metric space
On Proof Wiki, the definition of an open set is stated as
Let $(M,d)$ be a metric space and let $U\subset M$, then $U$ is open iff for all $y\in U$, there exists $\epsilon \in \mathbb{R}_{>0}$ such that $B_M(y;\epsilon) \subset U$
Then there's a remark that states
It is important to note that, in general, the values of $\epsilon$ depend on $y$. That is, it is not required that there exists a single $\epsilon \in \mathbb{R}_{>0}$ such that for all $y \in U$ we have $B_M(y;\epsilon) \subset U$
I don't understand quite understand the remark, could someone please clearly explain this and perhaps provide an example? I believe I understand the definition, but the remark is throwing me off.
AI: As an example consider for instance $\mathbb R^2$ with its usual (Euclidean) metric structure (so distances look like what we are used to in the plane). Now consider the set $\{x\in \mathbb R^2\mid d(x,(0,0))<1\}$. This is the interior of the circle with centre the origin and unit radius. Now, this set is open, since for any point $p$ in this set, there is a radius $\epsilon>0$ such that the ball with centre $p$ and radius $\epsilon$ is entirely contained in the set. But the radius $\epsilon$ depends on the position of the point $p$. When $p$ is very close to the circumference of the circle, the value of $\epsilon$ must be chosen to be very small.
It should be noted that it is actually extremely rare that in a metric space a set will be 'uniformly' open, in the sense that there will be a single $\epsilon$ that fits for all points in the set. As an extra exercise, try to find examples of such situations. |
H: Is any compact metric totally disconnected space homeomorphic to a compact subspace of a Cantor space?
Every compact metric totally disconnected perfect space is homeomorphic to a Cantor space.
Is every compact metric totally disconnected space homeomorphic to a compact subspace of a Cantor space?
In other words, if you have a compact metric totally disconnected space can you embed it in a compact metric totally disconnected perfect space?
AI: Yes: first show that every zero-dimensional Polish space is homeomorphic to a closed subset of the Baire space; a compact metric totally disconnected space is such a space, since compact metric spaces are Polish and, for them, totally disconnected coincides with zero-dimensional. Since the Baire space is a $G_\delta$ subset of the Cantor space, we have that every zero-dimensional Polish space is a $G_\delta$ subspace of the Cantor space. Note that if our space was compact then the image of the homeomorphism is compact, as well.
The proof of this can be found in Kechris' Classical Descriptive Set Theory, Theorem 7.8. |
H: Relationship between three matrices
I think this might be an odd question, and a little vague. But here goes.
This is related to coordinate transformations. Three matrices are given: $G_1 , G_2$, and $\Lambda$. $G_1$ and $G_2$ are symmetric. (They are metrics, actually.) $\Lambda$ is the matrix in question. The three matrices satisfy
$\Lambda^{-1} = G_1^{-1} \Lambda^T G_2$
Is there anything we can say about the properties of $\Lambda$, such as whether it is symmetric, or whether it is either symmetric or anti-symmetric, or ...?
Thanks.
AI: If you rewrite this as $$G_1 = \Lambda^T G_2 \Lambda\,,$$
this is just the change-of-basis formula for a symmetric bilinear form. That is, $\Lambda$ is the matrix that interpolates from the expression for the metric in one coordinate system to the expression in another coordinate system. As it stands, $\Lambda$ could be any invertible matrix. |
H: Why is $\varphi$ called "the most irrational number"?
I have heard $\varphi$ called the most irrational number. But numbers are either irrational or not; one cannot be more "irrational" in the sense of not being representable as a ratio of integers. So what is meant by "most irrational"? Define what we mean by saying one number is more irrational than another, and then prove that there is no $x$ such that $x$ is more irrational than $\varphi$.
Note: I have heard about defining irrationality by how well the number can be approximated by rational ones, but that would need to formalized.
AI: How well can a number $\alpha$ be approximated by rationals?
Trivially, we can find infinitely many $\frac pq$ with $|\alpha -\frac pq|<\frac 1q$, so something better is needed to talk about a good approximation.
For example, if $d>1$, $c>0$ and there are infinitely many $\frac pq$ with $|\alpha-\frac pq|<\frac c{q^d}$, then we can say that $\alpha$ can be approximated better than another number if it allows a higher $d$ than that other number. Or for equal values of $d$, if it allows a smaller $c$.
Intriguingly, numbers that can be approximated exceptionally well by rationals are transcendental (and at the other end of the spectrum, rationals can be approximated exceptionally poorly - if one ignores the exact approximation by the number itself). On the other hand, for every irrational $\alpha$, there exists $c>0$ so that for infinitely many rationals $\frac pq$ we have $|\alpha-\frac pq|<\frac c{q^2}$. The infimum of allowed $c$ may differ among irrationals and it turns out that it depends on the continued fraction expansion of $\alpha$.
Especially, terms $\ge 2$ in the continued fraction correspond to better approximations than those for terms $=1$. Therefore, any number with infinitely many terms $\ge 2$ allows a smaller $c$ than a number with only finitely many terms $\ge2$ in the continued fraction. But if all but finitely many of the terms are $1$, then $\alpha$ is simply a rational transform of $\phi$, i.e. $\alpha=a+b\phi$ with $a\in\mathbb Q, b\in\mathbb Q^\times$. |
H: $\iint f(x,y)\,dxdy$ and $\iint f(x,y)\,dydx$ exist but $f$ not integrable on $[0,1]\times[0,1]$
I want to look for a function $f(x,y)$, whose support is inside $[0,1]\times[0,1]$, such that $\int_0^1\!\int_0^1\!f(x,y)\,dxdy$ and $\int_0^1\!\int_0^1\!f(x,y)\,dydx$ both exist, but $f(x,y)$ is not Riemann-integrable (or Darboux-integrable) on $[0,1]\times[0,1]$.
By the Riemann–Lebesgue theorem, I know that the set of discontinuities of $f$ in $[0,1]\times[0,1]$ cannot be contained in a set of measure $0$ in $\mathbb{R}^2$, but for each fixed $x$ (or fixed $y$), the set of discontinuities of $f_x(y)$ or $f_y(x)$ must be contained in a set of measure $0$ in $\mathbb{R}^1$.
I'm unable to find a set of discontinuities in $[0,1]\times[0,1]$ satisfying this, and thus unable to find such a function. Please help. Thank you very much.
Further question: is it possible that $\int_0^1\!\int_0^1\!f(x,y)\,dxdy=\int_0^1\!\int_0^1 \!f(x,y)\,dydx$ but $f(x,y)$ is not Riemann-integrable on $[0,1]\times[0,1]$?
AI: There are less complicated examples (at least this won't require a page! :)). Try
$$f(x,y) = \begin{cases} 1\,, & x = k/q \text{ and } y = \ell/q \text{ for some integers } k,\ell \text{ with $q$ prime} \\ 0\,, & \text{otherwise}\end{cases}\,.$$
Then each horizontal/vertical line segment contains (at most) finitely many discontinuities of $f$. But you need to do the exercise of showing the set $\{(k/q,\ell/q): k,\ell\in\mathbb N,\ q \text{ prime}\}$ is dense in $[0,1]\times [0,1]$.
Oh, and this function answers your second question. Both iterated integrals are $0$. |
H: How do I find the series expansion of the meromorphic function $\frac{1}{e^z+1}$?
in a theoretical physics book, the author makes the following claim:
$$\frac{1}{e^z + 1} = \frac{1}{2} + \sum_{n=-\infty}^\infty \frac{1}{(2n+1) i\pi - z}$$
and justifies this as
These series can be derived from a theorem which states that any meromorphic function may be expanded as a summation over its poles and residues at those poles
What's the name of that theorem? It's not really a Laurent series, since a Laurent series is an expansion around one particular point only. I can see that the poles occur whenever $z = (2n+1)i\pi$ for $n \in \mathbb{Z}$, but then where does that constant $1/2$ come from?
EDIT: Well, it appears that the general claim isn't valid, so now I'd be interested in a justification for the expansion in my particular example...
AI: Mittag-Leffler's theorem guarantees the existence of a meromorphic function $g(z)$ whose poles and principal parts are given by any values specified. Then, if $f(z)$ is a meromorphic function, then $f(z) - g(z)$ is holomorphic, and it remains to compute this difference. In practice this is probably nontrivial, because $g(z)$ is not uniquely determined, but for functions with nice poles and principal parts, this is possible.
Such a possibility applies in your case with $f(z) = 1/(e^z + 1)$. We can justify the formula you gave in your question by using an approach based on a discussion between me and one of my friends, so I do not claim the credit for these ideas.
In order to properly handle the convergence of the infinite sum, we should first symmetrize the infinite sum you gave, so instead let
$$ g(z) = \sum_{k > 0 \text{, odd}} \left( \frac{1}{k i \pi - z} - \frac{1}{k i \pi + z} \right) = -\sum_{k > 0 \text{, odd}} \frac{2z}{z^2 + k^2 \pi^2}$$
We can check that $g(z)$ is a meromorphic function whose poles and principal parts match that of $f(z) = 1/(e^z + 1)$, so it follows that $f(z)- g(z)$ is entire. It remains to compute this difference. First, notice that both $f(z)$ and $g(z)$ are $2 \pi i$ periodic. So to check the growth of $f(z), g(z)$, we need only check the behavior as $\mathrm{Re}(z) \rightarrow \pm \infty$. Notice that $f(z) \rightarrow 1,0$ as $\mathrm{Re}(z) \rightarrow -\infty, +\infty$, respectively. Thus it follows that $f(z)$ is in fact uniformly bounded away from its poles. To check $g(z)$, split the sum as
$$ g(z) = \sum_{0<k<2|z|/\pi, \text{ odd}} + \sum_{k \ge 2|z|/\pi, \text{ odd}} \frac{-2z}{z^2 + k^2 \pi^2} = S_1(z) + S_2(z)$$
Now, notice that for $\mathrm{Re}(z)$ sufficiently large,
\begin{align}
|S_1(z)| & = \left|\frac{-2}{z} \sum_{0<k<2|z|/\pi, \text{ odd}} \frac{1}{1 + k^2 \pi^2/z^2} \right| \le \frac{2}{|z|} \frac{2|z|}{\pi} = 4,\\
|S_2(z)| & = \left| \sum_{k \ge 2|z|/\pi, \text{ odd}} \frac{-2z}{z^2 + k^2 \pi^2} \right| \\
& \le \frac{8}{3} |z| \sum_{k \ge 2|z|/\pi, \text{ odd}} \frac{1}{ \pi^2 k^2} \\
& \le \frac{8}{3} |z| \int_{-1 + 2|z|/\pi}^{\infty} \frac{1}{\pi^2 s^2} \, ds \\
& \le \frac{8}{3} \frac{|z|}{-1 + 2|z|/\pi} \le C
\end{align}
for $C > 0$ a constant. Thus $g(z)$ is also uniformly bounded away from its poles. Then, it follows that the difference $f(z) - g(z)$ is uniformly bounded, and being entire, by Liouville's theorem it must be constant. Now, we compute that
$$ f(0) - g(0) = \frac{1}{2} - 0 = \frac{1}{2} $$
and hence $f(z) - g(z) \equiv 1/2$. |
H: Is the Dirac delta a function?
Is Dirac delta a function? What is its contribution to analysis?
What I know about it:
It is infinite at 0 and 0 everywhere else. Its integration is 1 and I know how does it come.
AI: To have a better understanding of what the Dirac delta "function" is, it is good to know what a distribution is (but this is not necessary). Let $\Omega\subset\mathbb{R}^N$ be an open set and $\mathcal{D}(\Omega)=C_0^\infty(\Omega)$. We define in $\mathcal{D}(\Omega)$ the following notion of convergence: we say that $\phi_n\to 0$ in $\mathcal{D}(\Omega)$ if
I - $\operatorname{spt}(\phi_n)\subset K\subset\Omega$, where $K$ is a fixed compact set ($\operatorname{spt}(\phi_n)$ is the support of $\phi_n$),
II - For all $j$, $D_j\phi_n\to 0$ uniformly in $\Omega$.
We denote by $\mathcal{D}'(\Omega)$ the set of linear transformations $F:\mathcal{D}(\Omega)\to\mathbb{R}$ that are continuous with respect to the pseudo topology (the notion of convergence) that we defined, i.e. if $\phi_n\to 0$ in $\mathcal{D}(\Omega)$, then $\langle T,\phi_n\rangle=T(\phi_n)\to 0$. We call the elements of $\mathcal{D}'(\Omega)$ distributions.
Now we are able to define the famous delta Dirac "distribution". It is a distribution $\delta_{x}:\mathcal{D}(\Omega)\to\mathbb{R}$ defined by the formula ($x\in\Omega$) $$\langle\delta_{x},\phi\rangle=\phi(x)$$
You can easily check by using the definition that $\delta_x\in\mathcal{D}'(\Omega)$. Also, any function $u\in L^1_{loc}(\Omega)$ defines a distribution by the formula $$\langle T_u,\phi\rangle=\int_\Omega u\phi$$
On the other hand, we can define a notion of convergence in $\mathcal{D}'(\Omega)$, to wit, $T_n\to T$ in $\mathcal{D}'(\Omega)$ if $\langle T_n,\phi\rangle\to\langle T,\phi\rangle $. You can check by using this definition that if $h_\epsilon$ is a mollifier sequence, then $T_{h_\epsilon}\to \delta _x$ in the sense of distributions.
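(For concreteness: one standard choice is $h_\epsilon(y)=\epsilon^{-N}h\!\left(\frac{y-x}{\epsilon}\right)$ with $h\in C_0^\infty(\mathbb{R}^N)$, $h\geq 0$, $\int h=1$; then $\langle T_{h_\epsilon},\phi\rangle=\int h(z)\,\phi(x+\epsilon z)\,dz\to\phi(x)$, which is exactly $\langle\delta_x,\phi\rangle$.)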
From the last paragraph you can see why it is usually said that the Dirac delta "function" is one that satisfies $\delta(x)=\infty$, $\delta(y)=0$ if $y\neq x$ and $\int \delta=1$. It is obvious that in the standard sense this is impossible, but if we were to take the limit of the sequence $h_\epsilon$ pointwise, we would get something like this.
Also, there are a lot of places in mathematics where the Dirac distribution is used. For example, it is known that the fundamental solution $\Gamma$ of the Laplace equation satisfies $-\Delta\Gamma=\delta$ in the sense of distributions; hence, if you want to solve the problem $-\Delta u=f$, you can write (at least formally and with some assumptions) $u=\Gamma\ast f$, where $\ast$ stands for convolution.
There are a bunch of other applications of this distribution, but this space is too small to write them all.
H: Proof on showing function $f \in C^1$ on an open & convex set $U \subset \mathbb R^n$ is Lipschitz on compact subsets of $U$
The question is as follows:
Given:
(1) a function $f: U \subset \mathbb R^n \to \mathbb R$
(2) $U$ is open and convex set
(3) $f \in C^1$ in $U$
Goal: Show that $f$ is Lipschitz on any compact subset of $U$
By now, I have various ideas come to mind, but I can't connect the dots >_<
Here are my thoughts so far:
(1) Recall definitions:
(i) $f \in C^k$ means all partial derivatives up to (and including) order $k$ exist and are continuous. Here $k = 1$
(ii) a set $U$ is convex if for any 2 points $x, y$ in $U$, the segment joining $x$ and $y$ is totally inside $U$
(iii) function $f$ is Lipschitz if there is a bound $M$ such that $|f(x) - f(y)| \leq M |x-y|$
(2)By a theorem in my book:
if function $f \in C^1$ on an open & convex set $U$, then for any 2
points $x$ and $y$ in $U$, there is a point $s$ lying on the segment
joining $x$ and $y$ such that $f(x) - f(y) = Df(s) * (x - y)$
I firstly have a feeling that I may need this theorem in the proof, but I can't see the connection between my desired bound $M$ with $Df(s)$. So I tried to think of other ideas and ...
(3) It turns out that, with all the given information, I think if I can show the function $f$ is convex, then I'm done, because there is a theorem which says: a convex function on an open, convex set $U$ should be Lipschitz on $U$; thus I think it must also be Lipschitz on any subset of $U$, whether or not that subset is compact.
However, how can I prove function $f$ is convex, based on given information? I have a feeling that I have to use the fact that set $U$ is convex, but then I'm stuck on how to proceed further >_>
*Would someone please help me on this question?
Thank you very much ^_^*
AI: You need to use the Mean Value theorem in several variables, as you already stated it:
For $x,y\in U$ there exists $s$ which lies on the line segment between $x$ and $y$ ( this line segment is contained in $U$, since it is convex), such that:
$$f(x)-f(y)=\nabla f(s)\cdot (x-y)\Rightarrow\text{( Cauchy-Schwarz )}$$
$$|f(x)-f(y)|\leq|\nabla f(s)|\cdot |x-y|$$
Now, take a compact subset $S$ of $U$, and $x,y\in S$. Then take a convex compact set $S'$ such that $S\subset S'\subset U$; for instance $S'=\operatorname{conv}(S)$ works, since the convex hull of a compact set in $\mathbb R^n$ is compact, and it lies in $U$ because $U$ is convex. We need $S'$ because we want $s\in S'$.
Now, since $f\in C^1$ , the first partial derivatives are continuous, and therefore, bounded in the compact set $S'$. This implies that $|\nabla f(s)|\leq M$
Concluding, for each $x,y\in S$ we have that :
$$|f(x)-f(y)|\leq M\cdot |x-y|$$
which proves that $f$ is Lipschitz in $S$. |
H: Calculating $\sqrt{28\cdot 29 \cdot 30\cdot 31+1}$
Is it possible to calculate $\sqrt{28 \cdot 29 \cdot 30 \cdot 31 +1}$ without any kind of electronic aid?
I tried to factor it using equations like $(x+y)^2=x^2+2xy+y^2$ but it didn't work.
AI: \begin{align}
&\text{Let }x=30
\\ \\
\therefore&\ \ \ \ \ \sqrt{(x-2)(x-1)x(x+1)+1}
\\ \\
&=\sqrt{[(x-2)(x+1)][(x-1)x]+1}
\\ \\
&=\sqrt{(x^2-x-2)(x^2-x)+1}
\\ \\
&=\sqrt{(x^2-x)^2-2(x^2-x)+1}
\\ \\
&=\sqrt{(x^2-x-1)^2}
\\
&=x^2-x-1
\\
&=30^2-30-1
\\
&=\boxed{869}
\end{align} |
H: Composition of Partial Isometry
Let $T$ be a linear operator in $H$, a Hilbert space.
An operator $T \in L(H)$ is said to be a partial isometry if the restriction of $T$ to $ker(T)^{\perp}$ is an isometry. I would like to prove that given a partial isometry $T$, then $T^{2}$ is a partial isometry if and only if $(T^{*}TTT^{*})$=$(TT^{*}T^{*}T)$. All I could do so far was to prove the classic result : $T=TT^{*}T$ iff $T$ is a partial isometry iff $T^{*}=T^{*}TT^{*}$, that might be useful.
Thank you for help :)
AI: You can prove this using the following observation: Let $T$ be a bounded linear operator on a Hilbert space. Then $T$ is a partial isometry if and only if the spectrum of $T^{\ast} T$ is contained in $\{0,1\}$. Furthermore, let $A$ be a bounded, self-adjoint operator. Then $\sigma(A) \subset \{0,1\}$ if and only if $A = A^2$; this follows from the spectral theorem.
So our general strategy to prove that an operator is a partial isometry should be to prove the equivalent relation $T^{\ast} T = (T^{\ast} T)^2$. If you apply this directly, you get your "classic" result $T = TT^{\ast} T$. So all that remains for you, to prove the partial isometry condition for $T^2$, is to check all this with $(T^2)^{\ast}T^2 = \left((T^2)^{\ast}T^2\right)^2$, that is, $T^* T^* TT = (T^*T^* TT)^2$.
H: Simplifying $\sqrt{\underbrace{11\dots1}_{2n\ 1's}-\underbrace{22\dots2}_{n\ 2's}}$
How do I simplify:
$$\sqrt{\underbrace{11\dots1}_{2n\ 1's}-\underbrace{22\dots2}_{n\ 2's}}$$
Should I use modulos or should I factor them? Or any I suppose to use combinatorics? Any one have a clue?
AI: Nice question there!
Let $x=\underbrace{11\cdots1}_{n\ 1's}$, then
\begin{align}
\therefore\underbrace{11\cdots1}_{2n\ 1's}-\underbrace{22\cdots2}_{n\ 2's}&=\underbrace{11\cdots1}_{n\ 1's}\times10^n+\underbrace{11\cdots1}_{n\ 1's}-2\times\underbrace{11\cdots1}_{n\ 1's}
\\
&=x\times10^n+x-2x
\\ \\
&=x\times10^n-x
\\ \\
&=x(10^n-1)
\\ \\
&=x\times\underbrace{99\cdots9}_{n\ 9's}
\\
&=x\times(9\times\underbrace{11\cdots1}_{n\ 1's})
\\
&=9x^2
\end{align}
\begin{align}
\therefore\text{The original expression is equal to: }\sqrt{9x^2}=3x=\boxed{\underbrace{33\cdots3}_{n\ 3's}}
\end{align} |
H: Integration of function help
I'm having problems integrating this function $\displaystyle E(X)=\int^ \infty_0 x\lambda e^{-\lambda x} dx$. I did the integration by parts and had $-xe^{-\lambda x}- \lambda e^{-\lambda x}$. However the solution gives $-xe^{-\lambda x} - \dfrac{1}{\lambda}e^{-\lambda x}$. I can't find any mistakes. What am I doing wrong?
AI: Integrating by parts using $u=x \Rightarrow du=dx$ and $dv= e^{-\lambda x} dx \Rightarrow v = \dfrac{e^{-\lambda x}}{-\lambda}$, we have
$$\begin{align}
\int x\lambda e^{-\lambda x} dx & = \lambda \int x e^{-\lambda x} dx \\
& = \lambda \left [\frac{xe^{-\lambda x}}{-\lambda} - \int \frac{e^{-\lambda x}}{(-\lambda)} dx\right] \\
& = \lambda \left [\frac{-xe^{-\lambda x}}{\lambda} + \int \frac{e^{-\lambda x}}{\lambda} dx\right] \\
& = \lambda \left [\frac{-xe^{-\lambda x}}{\lambda} + \frac{e^{-\lambda x}}{\lambda(-\lambda)} dx\right] \\
& = \lambda \left [\frac{-xe^{-\lambda x}}{\lambda} - \frac{e^{-\lambda x}}{\lambda^2} \right] \\
& = -xe^{-\lambda x} - \frac{e^{-\lambda x}}{\lambda}. \\
\end{align}$$
Therefore we have
$$\begin{align}
\int_0^\infty x\lambda e^{-\lambda x} dx & = \left[-xe^{-\lambda x} - \frac{e^{-\lambda x}}{\lambda}\right]_0^\infty \\
& = \left[\left(0 - 0\right) - \left(0 - \frac{1}{\lambda}\right) \right] \\
& = \frac{1}{\lambda}.
\end{align}$$ |
H: Proving that $\frac{1}{\sqrt{1}}+\frac{1}{\sqrt{2}}+\dots+\frac{1}{\sqrt{100}}<20$
How do I prove that:
$$\frac{1}{\sqrt{1}}+\frac{1}{\sqrt{2}}+\dots+\frac{1}{\sqrt{100}}<20$$
Do I use induction?
AI: Prove the following claim using induction on $n$:
$$\sum_{k=1}^n \dfrac1{\sqrt{k}} < 2 \sqrt{n}$$
In the induction, you will essentially need to show that
$$2\sqrt{n} +\dfrac1{\sqrt{n+1}} < 2 \sqrt{n+1} \tag{$\star$}$$
To prove $(\star)$, note that
$$\sqrt{n} < \sqrt{n+1} \implies \sqrt{n} + \sqrt{n+1} <2 \sqrt{n+1} \implies \dfrac1{\sqrt{n+1}} < \dfrac2{\sqrt{n} + \sqrt{n+1}}$$
Multiplying and divding the right hand side by $(\sqrt{n+1} - \sqrt{n})$, we get
$$\dfrac1{\sqrt{n+1}} < \dfrac2{\sqrt{n} + \sqrt{n+1}}\cdot \dfrac{\sqrt{n+1} - \sqrt{n}}{\sqrt{n+1} - \sqrt{n}} = 2({\sqrt{n+1} - \sqrt{n}})$$
which gives us $(\star)$. |
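Finally, taking $n=100$ in the claim gives
$$\sum_{k=1}^{100} \dfrac1{\sqrt{k}} < 2\sqrt{100} = 20,$$
which is the desired inequality.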
H: A basic doubt on Lebesgue integration
Can anyone tell me at a high level (I am not aware of measure theory much) about Lebesgue integration and why measure is needed in case of Lebesgue integration? How the measure is used to calculate the horizontal strip mapped for a particular range?
AI: The idea of (exterior) measure is required to define the integral of the characteristic function of a measurable set. Without this basic building block you cannot even define the Lebesgue integral. |
H: Order of operations (BODMAS)
$$40-20/2+15\times1.5\\\hspace{.1in}\\40-20/2+15\times1.5=\\ 40-10+22.5=7.5$$
I'm studying and this is from an example.
In BODMAS, aren't addition and subtraction have same level?
So, in the 3rd line, it should be from left to right, correct?
AI: $$40-10+22.5=52.5$$
You're correct: subtraction and addition have the same level of priority, so when both appear, you operate from left to right. If you were puzzled when you saw the "answer" $7.5$, good for you, since $40-10+22.5\neq 7.5$!
Now, note: $$40-(10+22.5)=7.5$$
because operations in "brackets" (or parentheses) have greatest priority.
So let's be forgiving: perhaps the author of the problem statement/review notes forgot parentheses! |
H: how to tell whether x and y are independent or not
Suppose that $f_{x,y}(x,y) = \lambda^2 e^{\displaystyle-\lambda(x+y)}, 0\leq x , 0\leq y.$ Find $\operatorname{Var(X+Y)}$.
I'm having trouble with this problem the way to find $\operatorname{Var(X+Y)} = \operatorname{Var(X)}+\operatorname{Var(Y)}+2\operatorname{Cov(X,Y)}$, however if $X$ and $Y$ are independent, then $\operatorname{Cov(X, Y)}=0$, the answers indicated that $X$ and $Y$ are independent since they just used $\operatorname{Var(X+Y)} = \operatorname{Var(X)}+\operatorname{Var(Y)}+0$, my question is how do I tell whether $X$ and $Y$ are independent or not, based on looking only at $f_{x,y}(x,y) = \lambda^2 e^{\displaystyle-\lambda(x+y)}, 0\leq x , 0\leq y.$
AI: $X$ and $Y$ are independent exactly when the joint density factors as $f_{X,Y}(x,y)=f_X(x)\,f_Y(y)$. Here it does:
$$\lambda^2 e^{-\lambda(x+y)}=\left(\lambda e^{-\lambda x}\right)\left(\lambda e^{-\lambda y}\right),\qquad x,y\geq 0,$$
and the two factors are exactly the marginal densities, so $X$ and $Y$ are independent and $\operatorname{Cov}(X,Y)=0$.
H: Having trouble using eigenvectors to solve differential equations
The question asked to solve $$\frac{dx}{dt} = \begin{pmatrix}
5 & 4 \\
-1 & 1\\
\end{pmatrix}x$$ ,where $$ x = \begin{pmatrix} x_1 \\
x_2 \\ \end{pmatrix}$$
I went ahead and found the determinant of the matrix $$ |A - I\lambda| = \lambda^2 - 6\lambda + 9$$
And found $\lambda = 3$
Then the $\alpha$ matrix was found to be
$$ \begin{pmatrix}
5 & 4\\
-1 & 1\\
\end{pmatrix} \alpha = 3\alpha$$ where $$\alpha = \begin{pmatrix}
\alpha_1 \\
\alpha_2 \\
\end{pmatrix}$$
Ultimately $-\alpha_1 = 2\alpha_2$ so I write $$\alpha = \begin{pmatrix}
-1 \\
2 \\
\end{pmatrix}$$
Then because $\lambda$ is a repeated root I know the solution is supposed to look something like this:
$$x = c_1\alpha e^{\lambda t} + c_2( \alpha t + \beta) e^{\lambda t}$$
And then this is where it gets tricky for me. I know we find the $\beta$ matrix by figuring this out:
$$(A - I\lambda)\beta = \alpha$$
Now when I multiply all that out I get
$$2\beta_1 + 4\beta_2 = -1$$
$$\beta_1 + 2\beta_2 = -2$$
This is the system of equations I can't seem to solve to get a suitable $\beta$. One option I have is to make $$\beta = \begin{pmatrix}
0\\
-1\\
\end{pmatrix}$$
But this doesn't work for the second system of equations. Help please.
AI: The characteristic polynomial is:
$$|A - \lambda I| = 0 \rightarrow \lambda^2-6 \lambda+9 = 0 \rightarrow \lambda_{1,2} = 3$$
Substituting in the first eigenvalue to find the first eigenvector:
$$[A - \lambda I]v_1 = 0 \rightarrow \begin{pmatrix}
2 & 4 \\-1 & -2\\\end{pmatrix}v_1 = 0$$
After RREF, for the first eigenvector, I would have chosen:
$$a + 2b = 0 \rightarrow b = 1 \rightarrow a = -2$$
So, the first eigenvector is $v_1 = (-2, 1)$.
Since we have a repeated eigenvalue, we need a generalized eigenvector and you did the right approach, we have:
$$[A - \lambda I]v_2 = v_1$$
$$\begin{pmatrix}
2 & 4 \\-1 & -2 \end{pmatrix}v_2 = \begin{pmatrix}-2\\1 \end{pmatrix}$$
The RREF is:
$$\begin{pmatrix}1 & 2 \\ 0 & 0 \end{pmatrix}v_2 = \begin{pmatrix}-1\\0 \end{pmatrix} $$
This yields:
$$a + 2b = -1 \rightarrow b = 0 \rightarrow a = -1$$
From this, the second eigenvector is $v_2 = (-1, 0)$.
As a final remark on where your attempt went astray: $\alpha = (-1,2)$ is not actually an eigenvector, since $\begin{pmatrix} 2 & 4 \\ -1 & -2 \end{pmatrix}\begin{pmatrix} -1 \\ 2\end{pmatrix} = \begin{pmatrix} 6 \\ -3\end{pmatrix} \neq 0$; the condition $\alpha_1 = -2\alpha_2$ gives $\alpha = (-2,1)$, not $(-1,2)$. That is why your system for $\beta$ came out inconsistent.
H: Polynomials - The sum of two roots
If the sum of two roots of
$$x^4 + 2x^3 - 8x^2 - 18x - 9 = 0$$ is
$0$, find
the roots of the equation
AI: Generally, if there's two roots whose sum is zero, then it means that two factors are $x-a$ and $x+a$, which means that $x^2-a^2$ must be a factor. So clearly
$$
(x^2-a^2)(x^2+bx+c)=x^4+2x^3-8x^2-18x-9=0
$$
Find the values of $a^2$, $b$, and $c$ that satisfy the left equality, and you'll have found factors that you can then solve for all roots. (This works even if there's no rational root) |
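Carrying this out for your equation (a sketch): expanding,
$$(x^2-a^2)(x^2+bx+c)=x^4+bx^3+(c-a^2)x^2-a^2b\,x-a^2c,$$
so matching coefficients gives $b=2$, $-a^2b=-18\Rightarrow a^2=9$, and $-a^2c=-9\Rightarrow c=1$ (and indeed $c-a^2=1-9=-8$ checks out). Hence
$$x^4+2x^3-8x^2-18x-9=(x^2-9)(x^2+2x+1)=(x-3)(x+3)(x+1)^2,$$
and the roots are $3$, $-3$, $-1$, $-1$.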
H: Methods for determining the convergence of $\sum\frac{\cos n}{n}$ or $\sum\frac{\sin n}{n}$
As far as I know, the textbook approach to determining the convergence of series like $$\sum_{n=1}^\infty\frac{\cos n}{n}$$ and $$\sum_{n=1}^\infty\frac{\sin n}{n}$$ uses Dirichlet's test, which involves bounding the partial sums of the cosine or sine terms. I have two questions:
Are there any other approaches to seeing that these series are convergent? I'm mostly just interested to see what other kinds of arguments might be made.
What's the best way to show that these two series are only conditionally convergent? I don't even know the textbook approach to that question.
AI: Hint for 2)
$$\sum_{n=1}^{\infty} \frac{|\cos n|}{n} \geq \sum_{n=1}^{\infty} \frac{\cos^2 n}{n}=\sum_{n=1}^{\infty} \frac{1+\cos {2n}}{2n}$$
Convergence of $\sum_{n=1}^{\infty}\frac{\cos{2n}}{2n}$, and divergence of $\sum_{n=1}^{\infty}\frac{1}{2n}$ gives the divergence.
The same method applies to $\sum_{n=1}^{\infty}\frac{|\sin n|}{n}$. |
H: Application of derivative - how to calculate change in error
Problem: If the error committed in measuring the radius of the circle is $0.05\%$ then find the corresponding error in calculating the area.
Solution: Let the error be denoted by $\delta r = 0.05\%$, therefore the corresponding error in calculating the area is:
$A = \pi r^2 $
$\frac{\delta A}{\delta r} = 2\pi r$
$\Rightarrow \delta A = 2\pi r \delta r$
Please suggest if this is the right approach. If not, please help me solve this problem along with the concepts.
AI: You are on the right track. When you see $\delta r = 0.05\%$ it really means that $r$ is measured to within $0.05\%$, so the proper reading is $\frac {\delta r}r=0.05\%=5\cdot 10^{-4}$. I don't know if this is in your text, but when errors are quoted as percents, they are relative, not absolute, errors. Similarly, you are being asked for $\frac {\delta A}A$. So divide your equation by $A=\pi r^2$ to get $\frac {\delta A}A=2\frac {\delta r}r=2(0.05\%)=0.1\%$
H: Evalutate $\int\frac{dx}{x\sqrt{1-\frac{4}{3}x^4}}$
How to integrate
$$\int\frac{dx}{x\sqrt{1-\frac{4}{3}x^4}}$$
Mathematica found
$$\ln x-\frac{1}{2}\ln\left(1+\sqrt{1-\frac{4}{3}x^4}\right)$$
but I can't find a method to arrive at this solution.
AI: Try $u=\sqrt{1-(4/3)x^4}$, $u^2=1-(4/3)x^4$, $2u\,du=-(16/3)x^3\,dx$, together with $dx/x=x^3\,dx/x^4$. |
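Carrying the substitution through (a quick sketch): from $u^2=1-\frac43x^4$ we get $x^4=\frac34(1-u^2)$ and $x^3\,dx=-\frac38u\,du$, so
$$\int\frac{dx}{x\sqrt{1-\frac43x^4}}=\int\frac{x^3\,dx}{x^4\,u}=\int\frac{-\frac38u\,du}{\frac34(1-u^2)\,u}=-\frac12\int\frac{du}{1-u^2}=-\frac14\ln\frac{1+u}{1-u}+C.$$
This agrees with Mathematica's answer up to an additive constant, since $4\ln x=\ln\frac34+\ln(1-u)+\ln(1+u)$, so $\ln x-\frac12\ln(1+u)=-\frac14\ln\frac{1+u}{1-u}+\text{const}$.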
H: Finding the MLE of a multinomial distribution (uneven probabilities)
I am trying to simulate loaded die where the face probabilities are:
$$
p_1=p_2=p_3=p_4=1/6+\theta\text{ and }p_5=p_6=1/6-2\theta
$$
And so using the multinomial distribution I have:
$$
\binom{n}{x_i}\prod_{i=1}^6 p_i^{\displaystyle x_i}=\binom{n}{x_i}\left ( \frac{1}{6} + \theta \right)^{\displaystyle\sum_{i=1}^4 x_i} \left(\frac{1}{6}-2\theta \right)^{ \displaystyle\sum_{i=5}^6 x_i}
$$
How do I find the MLE w.r.t $\theta$?
AI: Your notation is rather confused. You have $\displaystyle\prod_{i=1}^6 p_i^{x_i}$, and that means
$$
p_1^{x_1} p_2^{x_2} p_3^{x_3} p_4^{x_4} p_5^{x_5} p_6^{x_6}.
$$
In the first factor, $p_1^{x_1}$ you have $i=1$, in the next you have $i=2$, and so on. But what is $i$ equal to in the factor $\dbinom{n}{x_i}$ that you've written in front of the product??
What needs to be there is not $\dbinom{n}{x_i}$, but $\dbinom{n}{x_1,\ldots,x_6}=\dfrac{n!}{x_1!\cdots x_6!}$, i.e. a multinomial coefficient.
You need the value of $\theta\in[0,1/12]$ that maximizes the product. First check that $\theta$ must be between $0$ and $1/12$. The multinomial coefficient does not depend on $\theta$. So you can just maximize
$$
L(\theta)=\left(\frac16+\theta\right)^{x_1+x_2+x_3+x_4}\left(\frac16-2\theta\right)^{x_5+x_6}.
$$
Since $\log$ is an increasing function this is the same as maximizing
$$
\ell(\theta)=\log L(\theta) = (x_1+x_2+x_3+x_4)\log\left(\frac16+\theta\right) + (x_5+x_6)\log\left(\frac16-2\theta\right).
$$
We have
\begin{align}
\ell\,'(\theta) & = \frac{x_1+x_2+x_3+x_4}{\frac16+\theta} - \frac{2(x_5+x_6)}{\frac16-2\theta} \\[8pt]
& = 6\left( \frac{x_1+x_2+x_3+x_4}{1+6\theta} - \frac{2(x_5+x_6)}{1-12\theta} \right) \\[8pt]
& = (\text{a positive number}) \cdot \left( (1-12\theta)(x_1+x_2+x_3+x_4)-(1+6\theta)(x_5+x_6) \right) \\[8pt]
& = (\text{pos.}) \cdot (x_1+x_2+x_3+x_4-(x_5+x_6) - 6\theta(2(x_1+x_2+x_3+x_4)+x_5+x_6)).
\end{align}
Find the value of $\theta\in[0,1/12]$ for which $\ell\,'(\theta)=0$, and show that for smaller values of $\theta\in[0,1/12]$ this derivative is positive and for larger ones it is negative. |
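Explicitly, since $x_1+x_2+\cdots+x_6=n$, the derivative vanishes at
$$\hat\theta=\frac{x_1+x_2+x_3+x_4-2(x_5+x_6)}{12n}.$$
The last factor above is a decreasing linear function of $\theta$, so $\ell\,'$ is positive to the left of $\hat\theta$ and negative to the right, making $\hat\theta$ the maximizer (if it falls outside $[0,1/12]$, the maximum is attained at the nearer endpoint).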
H: Symmetric Matrices of $I_{2}$
Find $10$ symmetric matrices $ A = \begin{pmatrix}
a &b \\
c&d
\end{pmatrix}$ such that $A^{2}=I_{2}$
(I'm going to call matrix A the "square root" of $A^{2}$. If this is the incorrect name for it, may someone please tell me what it is actually called?)
My professor posed this question in class and told us there is an infinite number of square roots (assuming I understood him correctly). However, I don't see how there could be many of these, as I was under the impression that a matrix has only one inverse, since $A A^{-1}=I_{n}$. If someone could tell me whether I misunderstood the professor or I'm thinking about something incorrectly, please correct me.
My other question is other than blatant guess and check, is there a method to think of these symmetric square roots?
Thanks in advance.
AI: Your professor is right, there's an infinite number of square roots, kind of like how there's two square roots of $1$ (namely, $1$ and $-1$). This does not conflict with the uniqueness of inverses: uniqueness says a fixed matrix has at most one inverse, while here we are asking which matrices $A$ satisfy $A^2=I_2$, i.e. which matrices are their own inverse, and many different matrices can have that property.
To see how to get it in general, notice that, for a symmetric matrix, you have
$$
\begin{pmatrix}a&b\\b&c\end{pmatrix}^2 = \begin{pmatrix}a^2+b^2&b(a+c)\\b(a+c)&b^2+c^2\end{pmatrix}
$$
So, for the right side to be equal to the identity, you must have
$$
a^2+b^2=1\\
b^2+c^2=1\\
b(a+c) = 0
$$
What solutions does this system of equations admit?
To demonstrate that multiple solutions exist directly, consider that
$$
\begin{pmatrix}0&1\\1&0\end{pmatrix}^2 = \begin{pmatrix}1&0\\0&1\end{pmatrix}
$$
and so is a square root of the identity matrix. |
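Working out the system above (a sketch): the third equation forces $b=0$ or $c=-a$. If $b=0$, then $a=\pm1$ and $c=\pm1$, giving the four diagonal square roots $\pm I_2$, $\operatorname{diag}(1,-1)$, $\operatorname{diag}(-1,1)$. If $c=-a$, both remaining equations reduce to $a^2+b^2=1$, so
$$A=\begin{pmatrix}a&b\\b&-a\end{pmatrix},\qquad a^2+b^2=1,$$
is a square root of $I_2$ for every point $(a,b)$ on the unit circle; indeed $A^2=(a^2+b^2)I_2=I_2$. That is the promised infinite family, from which you can pick as many examples as you like, e.g. $a=\tfrac35$, $b=\tfrac45$.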
H: Integration by parts disconnect
I'm trying to integrate $\displaystyle E(Y^2) = \int^\infty_0 y^2\lambda e^{-\lambda y} dy$
doing it by parts this is my logic.
$\displaystyle E(Y^2) = \int^\infty_0 y^2\lambda e^{-\lambda y} dy$ where $u=y^2$, $du=2y\,dy$, $dv=\lambda e^{-\lambda y}\, dy$ and $v = -e^{-\lambda y}$ solving by parts we get $\displaystyle (y^2)(-e^{-\lambda y})-\int^\infty_0 -e^{-\lambda y} 2y \,dy$ which then simplifies to $\displaystyle 0 + 2\int^\infty_0y e^{-\lambda y} dy$ which is the same as $0 + 2 E(Y) = \dfrac{2}{\lambda}$ where $E(Y)=\dfrac{1}{\lambda}$
however the solutions shows a different answer of $\dfrac{2}{\lambda^2}$ can someone explain why this is?
http://oi40.tinypic.com/b5fwxs.jpg
AI: Integrating by parts, take $u= y^2 \Rightarrow du= 2y \, dy$ and $dv=e^{-\lambda y}\,dy \Rightarrow v = \dfrac{e^{-\lambda y}}{-\lambda}.$
$$\begin{align}
\int y^2\lambda e^{-\lambda y} dy & = \lambda \int y^2 e^{-\lambda y} dy \\
& = \lambda \left [\frac{y^2e^{-\lambda y}}{-\lambda} - \int \frac{2y e^{-\lambda y}}{(-\lambda)} dy\right] \\
& = \lambda \left [\frac{-y^2e^{-\lambda y}}{\lambda} + \color{red}{\frac{2}{\lambda}\int ye^{-\lambda y} dy}\right]. \\
\end{align}$$
Integrating the red colored part by parts again, let $u= y \Rightarrow du= dy$ and $dv=e^{-\lambda y}\,dy \Rightarrow v = \dfrac{e^{-\lambda y}}{-\lambda}.$
$$\begin{align}
\color{red}{\frac{2}{\lambda}\int ye^{-\lambda y} dy}
& = \frac{2}{\lambda} \left[\frac{ye^{-\lambda y}}{-\lambda} - \int \dfrac{e^{-\lambda y}}{(-\lambda)} dy \right] \\
& = \frac{2}{\lambda} \left[\frac{-ye^{-\lambda y}}{\lambda} + \int \dfrac{e^{-\lambda y}}{\lambda} dy \right] \\
& = \frac{2}{\lambda} \left[\frac{-ye^{-\lambda y}}{\lambda} + \dfrac{e^{-\lambda y}}{(-\lambda)\lambda} \right] \\
& = \color{blue}{\frac{2}{\lambda} \left[\frac{-ye^{-\lambda y}}{\lambda} - \dfrac{e^{-\lambda y}}{\lambda^2} \right]}.
\end{align}$$
Put the blue part back, we have
$$\begin{align}
\int y^2\lambda e^{-\lambda y} dy
& = \lambda \left [\frac{-y^2e^{-\lambda y}}{\lambda} + \color{red}{\frac{2}{\lambda}\int ye^{-\lambda y} dy}\right] \\
& = \lambda \left [\frac{-y^2e^{-\lambda y}}{\lambda} + \color{blue}{\frac{2}{\lambda} \left[\frac{-ye^{-\lambda y}}{\lambda} - \dfrac{e^{-\lambda y}}{\lambda^2} \right]}\right] \\
& = e^{-\lambda y} \left(-y^2 - \frac{2y}{\lambda} - \frac{2}{\lambda^2} \right).
\end{align}$$
Therefore
$$\begin{align}
\int_0^\infty y^2\lambda e^{-\lambda y} dy
& = \left[e^{-\lambda y} \left(-y^2 -\frac{2y}{\lambda} - \frac{2}{\lambda^2} \right) \right]_0^\infty \\
& = 0 - \left(- \dfrac{2}{\lambda^2}\right) \\
& = \dfrac{2}{\lambda^2}.
\end{align}$$
(By the way, the slip in your own computation: $\int_0^\infty y e^{-\lambda y}\, dy$ is not $E(Y)$, because the density carries a factor $\lambda$. Rather, $\int_0^\infty y e^{-\lambda y}\, dy = \frac{1}{\lambda}\int_0^\infty y \lambda e^{-\lambda y}\, dy = \frac{1}{\lambda}E(Y)$, so your $2\int_0^\infty y e^{-\lambda y}\, dy$ equals $\frac{2}{\lambda}E(Y) = \frac{2}{\lambda^2}$, in agreement with the solution.)
H: Prove the edges of a multigraph may be oriented such that the net-degree of any vertex is $\leq 1$.
The net-degree of a vertex $v$, denoted $\text{netdeg}(v)$, in a digraph $G$ is defined by
$$ \text{netdeg}(v)=| ~ \text{outdeg}(v) - \text{indeg}(v) ~| $$
where $\text{outdeg}(v)$ and $\text{indeg}(v)$ are the out-degree and in-degree of $v$.
Show that the edges of any multigraph may be oriented so that $\text{netdeg}(v) \leq 1$ for all its vertices $v$.
My first thought was to order the vertices and alternate edges in and out beginning at the first vertex then move on to the next vertex and orient the remaining edges depending on the net-degree of the remaining vertices, but this approach does not seem to work.
AI: For any graph $G$, perhaps you could add an auxiliary vertex $x$ and make it adjacent to all vertices of odd degree. Call this new graph $G'$. Since there must be an even number of such vertices (so as to keep the degree sum even), every vertex of $G'$ has even degree, and hence each connected component of $G'$ has an Eulerian circuit. Assign the orientation to $G'$ realizing these circuits (for the component containing $x$, start the circuit from $x$). Removing $x$ yields an orientation for $G$ where for each vertex $v\in V(G)$, $\text{netdeg}(v)$ is $0$ if $\deg(v)$ is even and $\text{netdeg}(v)$ is $1$ if $\deg(v)$ is odd.
H: Does $\,x>0\,$ hint that $\,x\in\mathbb R\,$?
Does $x>0$ suggest that $x\in\mathbb R$?
For numbers not in $\,\mathbb R\,$ (e.g. in $\mathbb C\setminus \mathbb R$), their sizes can't be compared.
So can I omit "$\,x\in\mathbb R\,$" and just write $\,x>0\,$?
Thank you.
AI: It really depends on context. But be safe; just say $x > 0, x\in \mathbb R$.
Omitting the clarification can lead to misunderstanding it. Including the clarification takes up less than a centimeter of space. Benefits of clarifying the domain greatly outweigh the consequences of omitting the clarification.
Besides one might want to know about rationals greater than $0$, or integers greater than $0$, and we would like to use $x \gt 0$ in those contexts, as well.
ADDED NOTE: That doesn't mean that after having clarified the context, and/or defined the domain, you should still use the qualification "$x\in \mathbb R$" every time you subsequently write $x \gt 0$, in a proof, for example. But if there's any question in your mind about whether or not to include it, err on the side of inclusion.
H: When two polynomials $f(x),g(x)$ over a field $F$ are said to be relatively prime?
When two polynomials $f(x),g(x)$ over a field $F$ are said to be relatively prime?
Following the definition given for the integers, I guess it is when the two of them have no factors in common other than $1.$ But if we follow such a definition, it implies $-1$ and $x+1$ are not relatively prime.
My second question is: when are $f,g$ relatively prime, provided one of them is $0?$
AI: The relevant property here is that any field is a GCD domain, which means that any two nonzero elements have a GCD (ordered by divisibility). And once $F$ is a GCD domain, $F[x]$ is as well. "Relatively prime" means that their GCD is 1 (up to units, of course).
In this context $\gcd(-1,x+1)=1$, up to units; so in some sense every element of $F\setminus \{0\}$ (i.e. every unit) is a gcd. Also, for nonzero $\alpha$, $\gcd(\alpha,0)=\gcd(\alpha,\alpha)=\alpha$, again up to units.
H: Group homomorphisms into a field
Let $G$ be a finite group, and let $k$ be a field, which should be algebraically closed, I think. How to describe all homomorphisms $G\rightarrow k^*$ (i.e. one-dimensional representations: $k^*=\mathrm{GL}_1$) or just find the number of them?
I've got the following:
First, we assume $G$ is abelian. Since each finitely generated abelian group is a product of cyclic groups ($C_{k}$ or $\mathbb{Z}$), $G=C_{k_1}\times C_{k_2}\times\cdots \times C_{k_l}$. Let $n=|G|=k_1 k_2\cdots k_l$. So, for each homomorphism $\phi: G\rightarrow k^*$ we have $\phi(G)\subset Z(t^n-1)=H$ (I mean zeros of $t^n-1\in k[t]$).
Since each finite subgroup of $k^*$ is cyclic (let $H=C_m$) we are to investigate homomorphisms $$C_{k_1}\times C_{k_2}\times\cdots\times C_{k_l}\rightarrow C_m.$$ Is it necessarily the case that $m=n$? I know that in $\mathbb{Z}/p\mathbb{Z}$, we have $Z(x^p-1)=\{1\}$, so $m\ne n$. But how about algebraically closed fields?
However that may be, are there any other ways to describe such homomorphisms? To describe homomorphisms $C_{k_1}\times C_{k_2}\times\cdots\times C_{k_l}\rightarrow C_m$?
In conclusion, let me note that the case of nonabelian $G$ reduces to the abelian one.
AI: Every homomorphism $C_{k_1}\times \cdots \times C_{k_l}\to k^\times$ factors as a product of homomorphism $C_{k_{\large i}}\to k^\times$, since the original is determined by where it sends the coordinates' generators. Each such coordinate homomorphism may be chosen independently. In particular the image of the full homomorphism is at most the $\ell={\rm lcm}(k_1,\cdots,k_l)$th roots of unity in $k^\times$ (which exist because $k$ is alg. closed), and it is indeed possible to obtain this as a surjective image.
To see that $\mu_\ell$, the $\ell$th roots of unity in $k$, are an "upper bound" for the image, note that everything in the image must be $\ell$-torsion since everything in $G$ is $\ell$-torsion itself. To see that this bound can be reached, it is simply a matter of checking that the subgroups of $C_{\large \ell}$ of orders $k_i$ ($1\le i\le l$) are a generating set for the entire cyclic group $C_{\large\ell}$ (a nice elementary number theory exercise).
The full collection of homomorphisms into $k^\times$ is itself a group under pointwise multiplication, called the dual group, and it is (noncanonically) isomorphic to the original finite abelian group.
If the original group $G$ is nonabelian then the homomorphisms factor through the abelianization $G^{\rm ab}:=G/[G,G]$, as you allude to, and there is no nice dual group for the full nonabelian group. |
H: Finding side of rectangle using given information
Really simple question but I am stuck. The following information is given:
$$BD=8,\quad AB = 6,\quad ED =5,\quad EF = EC$$
and we want to find $AF$.
If we have three $90^\circ$, what does that really mean, and how I can find $AF$?
AI: Hint:
$$\begin{align*}
AB=6,\;BD=8,\;\text{Pythagorean theorem}&\implies AD=\mathbin{?}\\\\
ED=5&\implies AE=\mathbin{?}\\\\
EF+EC=AB=6,\;EF=EC&\implies EF=\mathbin{?}\\\\
AE=\mathbin{?},\;EF=\mathbin{?},\;\text{Pythagorean theorem}&\implies AF=\mathbin{?}\\\\
\end{align*}$$ |
H: Composition of systems of equations
Suppose $$2x + 3y = u$$ $$x - 4y = v$$
and further that
$$3u - 5v = c$$ $$2u + 3v = d$$
Express c and d in terms of $x$ and $y$ by matrix multiplication.
It's quite easy by direct substitution but I can't work out how to use matrix multiplication. Any ideas? Thanks in advance!
AI: $$\begin{align*}
2x + 3y& = u\\
x - 4y &= v
\end{align*}\quad \underset{\substack{\text{convert to}\\ \text{matrix language}}}{\leadsto}\quad \begin{bmatrix} 2 & \hphantom{-}3\\ 1 &-4\end{bmatrix}\begin{bmatrix} x\\ y\end{bmatrix}=\begin{bmatrix}u\\ v\end{bmatrix}$$
$$\begin{align*}
3u - 5v &= c\\
2u + 3v &= d
\end{align*}\quad \underset{\substack{\text{convert to}\\ \text{matrix language}}}{\leadsto}\quad \begin{bmatrix} 3 & -5\\ 2 &\hphantom{-}3\end{bmatrix}\begin{bmatrix} u\\ v\end{bmatrix}=\begin{bmatrix}c\\ d\end{bmatrix}$$
$$\begin{bmatrix} 3 & -5\\ 2 &\hphantom{-}3\end{bmatrix}\begin{bmatrix} u\\ v\end{bmatrix}=\begin{bmatrix} 3 & -5\\ 2 &\hphantom{-}3\end{bmatrix}\Bigg(\begin{bmatrix} 2 & \hphantom{-}3\\ 1 &-4\end{bmatrix}\begin{bmatrix} x\\ y\end{bmatrix}\Bigg)=\begin{bmatrix}c\\ d\end{bmatrix}\implies\begin{bmatrix} p & q\\ r &s\end{bmatrix}\begin{bmatrix} x\\ y\end{bmatrix}=\begin{bmatrix}c\\ d\end{bmatrix}\\[0.4in]
\quad\text{ where }\quad \begin{bmatrix} p & q\\ r &s\end{bmatrix}=\begin{bmatrix} 3 & -5\\ 2 &\hphantom{-}3\end{bmatrix}\begin{bmatrix} 2 & \hphantom{-}3\\ 1 &-4\end{bmatrix}.\\[0.4in]
\begin{bmatrix} p & q\\ r &s\end{bmatrix}\begin{bmatrix} x\\ y\end{bmatrix}=\begin{bmatrix}c\\ d\end{bmatrix}\quad\underset{\substack{\text{convert to}\\ \text{equation language}}}{\leadsto}\quad \begin{align*}
px + qy& = c\\
rx + sy &= d
\end{align*}$$ |
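Carrying out the multiplication:
$$\begin{bmatrix} p & q\\ r & s\end{bmatrix}=\begin{bmatrix} 3 & -5\\ 2 &\hphantom{-}3\end{bmatrix}\begin{bmatrix} 2 & \hphantom{-}3\\ 1 &-4\end{bmatrix}=\begin{bmatrix} 1 & 29\\ 7 & -6\end{bmatrix},$$
so $c=x+29y$ and $d=7x-6y$, which you can confirm agrees with direct substitution.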
H: Proving that length of a curve is $\infty$
Let $f$ be a differentiable and continuous function on $(0,1]$ with $\lim_{x\to 0^+}f(x)=\infty$. Prove that the length of the curve on $(0,1]$ is $\infty$.
Steps I tried: $L=\int_0^1 \sqrt{1+f'(x)^2}\,dx\geq\int_0^1\sqrt{f'(x)^2}\,dx=\int_0^1f'(x)\,dx=f(1)-\lim_{x\to0^+}f(x)=-\infty$ (but I have to prove $L=\infty$, not $-\infty$).
AI: Note that $f$ is only required to be continuous on $(0,1]$, not at $0$ itself, so $\lim_{x \rightarrow 0^+}f(x)=\infty$ does not contradict the hypotheses, and the statement makes sense as written. Your computation is almost right; it needs just one correction.
P.S. In one of your steps, $\sqrt{f'(x)^2} \neq f'(x)$ in general; rather, it is $|f'(x)|$. So I think you just need to change that equality into an inequality.
Edit in response to edited question:
Then, once we have $\int_0^1 \sqrt{f'(x)^2}\,dx=\int_0^1 |f'(x)|\,dx \geq \left|\lim_{t \rightarrow 0^+}\int_t^1 f'(x)\,dx\right|=|f(1)-\lim_{x \rightarrow 0^+}f(x)|=|-\infty|=\infty$
This corrects and finishes the proof. |
H: Factorize in R[x]
I have the polynomial $x^8+1$. I know that it has no roots in $\Bbb R$, but I want to factorize it as far as possible in $\Bbb R[x]$. Is this possible, or is it irreducible?
AI: Fun fact:
$$x^4+1=(x^2+\sqrt{2}x+1)(x^2-\sqrt{2}x+1).$$
This can be derived by setting $x^4+1$ equal to a product of two monic quadratics with unknown coefficients and then solving for said coefficients little by little (or simply checked: the product is $(x^2+1)^2-(\sqrt{2}\,x)^2=x^4+1$). As a consequence,
$$x^8+1=(x^4+\sqrt{2}x^2+1)(x^4-\sqrt{2}x^2+1).$$
We can go further. For example, set
$$x^4+\sqrt{2}x^2+1=(x^2+ax+b)(x^2-ax+1/b),$$
yielding
$$\begin{cases}-a^2+b+1/b & =\sqrt{2} \\ a/b-ab & = 0 \end{cases} $$
Rule out $a=0$ to obtain $b=\pm1$ from the second equation, then plug those candidates into the first equation, yielding $b=1$ and $a=\sqrt{2-\sqrt{2}}$ (the choice $b=-1$ would force $a^2=-2-\sqrt{2}<0$). The second factor of $x^8+1$ that is written above is similarly reducible. I leave the details as an exercise. The full factorization is
$$\begin{array}{lll} x^8+1 & = & \left(x^2+\sqrt{2-\sqrt{2}}x+1\right) \times \left(x^2-\sqrt{2-\sqrt{2}}x+1\right) \\ & \times & \left(x^2+\sqrt{2+\sqrt{2}}x+1\right) \times\left(x^2-\sqrt{2+\sqrt{2}}x+1\right). \end{array}$$
over the real numbers $\bf R$. With the quadratic formula applied to the above you can get the roots to $x^8+1$ exactly (they are precisely the primitive $16$th roots of unity) in the form of nested radicals, and hence the full factorization in $\bf C$.
By the way, I should mention that the only nonlinear polynomials over $\bf R$ that are irreducible are quadratics with negative discriminant. This is because the nonreal roots of any real-coefficient polynomial can be paired off into conjugates, and then each conjugate pair of linear factors can be put together to obtain a real quadratic, thus every real-coefficient polynomial can be factored into real linear and quadratic factors.
Over $\bf Q$, as Belgi notes, a simple shift allows Eisenstein's criterion to apply. |
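If you want to sanity-check the displayed factorization, here is a minimal SymPy sketch (my own verification script, not part of the original argument):

```python
import sympy as sp

x = sp.symbols('x')
am = sp.sqrt(2 - sp.sqrt(2))   # coefficient in the first pair of quadratics
ap = sp.sqrt(2 + sp.sqrt(2))   # coefficient in the second pair

product = sp.expand(
    (x**2 + am*x + 1) * (x**2 - am*x + 1) *
    (x**2 + ap*x + 1) * (x**2 - ap*x + 1)
)
# Should print 0, confirming the product equals x^8 + 1
print(sp.simplify(product - (x**8 + 1)))
```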
H: How to find area of triangle from its medians
The lengths of the three medians of a triangle are $9$, $12$ and $15$ cm. The area (in sq. cm) of the triangle is
a) $48$
b) $144$
c) $24$
d) $72$
I don't want the whole solution; just give me a hint for how to solve it. Thanks.
AI: You know that the medians divide a triangle into $6$ triangles of equal area. If you find one of them, multiplying by $6$ gives you the area of the whole triangle. Let's denote one such area by $S$; now see the figure (omitted here):
I guess you saw the right triangle. |
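For the record, here is one standard way to finish (a well-known fact, not spelled out in the hint): since $9^2+12^2=15^2$, the medians themselves form a right triangle, of area $\frac{1}{2}\cdot 9\cdot 12=54$; and the area of a triangle is $\frac{4}{3}$ times the area of the triangle built from its medians, so the answer is $\frac{4}{3}\cdot 54=72$, option (d).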
H: Need to show an antiderivative exists
Let $U$ be a simply connected open set, let $z_1,\dots, z_n$ be points of $U$, and let $U^*=U\setminus \{z_1,\dots,z_n\}$. Let $f$ be analytic on $U^*$. Let $\gamma_k$ be a small circle centered at $z_k$ and let $$a_k={1\over 2\pi i}\int_{\gamma_k} f(\xi)\,d\xi.$$ Let $$h(z)=f(z)-\sum_k \frac{a_k}{z-z_k}.$$ We need to show that there exists an analytic function $H$ such that $H'=h$.
AI: Hints: 1) what is the residue of $h$ at $z_j$?
2) Show that path integrals $\int_\Gamma h(z)\ dz$ in $U \backslash \{z_1,\ldots,z_n\}$ with the same endpoints are always equal. |
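One way to flesh out these hints (my sketch, not part of the original answer): by construction the residue of $h$ at each $z_j$ is $a_j-a_j=0$, so integrals of $h$ over small loops around the punctures vanish; together with the simple connectivity of $U$, this makes $\int_\Gamma h(z)\,dz$ depend only on the endpoints of $\Gamma$. Fixing a base point $z_0\in U^*$ and setting $$H(z)=\int_{z_0}^{z} h(\xi)\,d\xi$$ along any path in $U^*$ then gives a well-defined analytic function with $H'=h$.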
H: Definition of simple spectrum
From the book "Spinning Tops" by Audin: given a Lax equation $\dot A_{\lambda}=[A_{\lambda},B_{\lambda}]$, where $\lambda$ is a parameter (the so-called spectral parameter), she claims that we have a spectral curve $P(\lambda,\nu)=0$, where $P(\lambda,\nu)$ is the characteristic polynomial of $A_{\lambda}$. Then she talks about the case when $A_{\lambda}$ has a simple spectrum. I tried to look for a definition of it, but I couldn't find one. Can you please tell me what it means for a spectrum to be simple?
AI: I suspect she means that the eigenvalues are all simple, i.e. the characteristic polynomial has no multiple roots. |
H: When is it solvable: $10^a+10^b\equiv -1 \pmod p$?
If $p$ is a prime and $(a,p)=1$, denote $\operatorname{ord}(a,p)=d$, where $d$ is the smallest positive integer solution to the equation $a^d\equiv 1 \pmod p$. We can prove that $$10^n\equiv -1 \pmod p\tag1$$ is solvable iff $\operatorname{ord}(10,p)$ is even.
Now,consider this equation,$$10^a+10^b\equiv -1 \pmod p.\tag2$$
If $10$ is a primitive root modulo $p$, then for every $b\not\equiv\dfrac{p-1}{2}\pmod{p-1}$ there is an integer $a$ such that $a,b$ satisfy $(2)$.
My question is,what's the necessary and sufficient condition that $(2)$ has at least $1$ solution?
If we are given a prime $p$, how can we determine whether $(2)$ is solvable?
Here is one way, but not an efficient one: for every positive integer $b\leq\frac{1}{2}\operatorname{ord}(10,p)$, check whether $(2)$ is solvable for $a$. By this method, I found that $(2)$ is solvable for the following primes, for which $10^n\equiv -1 \pmod p$ is not solvable:
$3,31,37,43,53,67,71,83,107,151,163,173,191,199,227,277,283,307,311,317,347,359,397,431,439,443,467,479,523,547,563,587,599,613,631,643,683,719,751,757,773,787,797,827,839,853,883,907,911,919,947,991,\cdots$
My original problem is: at least how many digits "$1$" are needed in a decimal number, consisting only of the digits "$0$" and "$1$", that is divisible by $p$? The present question is to find the primes for which at least three "$1$"s are needed.
Thank you.
AI: The first few primes for which it is not solvable are
$$5, 11, 41, 73, 79, 101, 137, 239, 271, 281, 641, 733, 859, 1321, 1409, 2531, 2791, 3191, 3541, 4013, 4093, 4637, 4649, 5051, 5171, 5237, 6163, 6299, 7253, 7841, 8779, 9091, 9161, 11831, 12517, 12671$$
The sequence does not seem to be in the OEIS.
On the other hand, if you replace $10$ by $2$ there is https://oeis.org/A179113
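For reference, here is a minimal brute-force sketch (my own, assuming exponents $a,b\geq 1$) that should reproduce the list above by scanning the residues of powers of $10$ modulo $p$:

```python
from sympy import isprime

def solvable(p):
    """Does 10**a + 10**b == -1 (mod p) have a solution with a, b >= 1?"""
    # The residues of 10**a (mod p) are eventually periodic, so p steps
    # suffice to collect all of them.
    powers, r = set(), 10 % p
    for _ in range(p):
        powers.add(r)
        r = (r * 10) % p
    return any((-1 - u) % p in powers for u in powers)

# Primes for which (2) is NOT solvable; expected to match the list above.
print([p for p in range(3, 3000) if isprime(p) and not solvable(p)])
```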
EDIT: I doubt that there's a simple necessary and sufficient condition. But here might be the start of a heuristic analysis which might suggest there should be infinitely many.
Heuristically, if the order of $10$ mod $p$ is $m$, we evaluate
$10^a + 10^b \mod p$ for approximately $m^2/2$ unordered pairs $(a,b)$, so the
probability that none of those is congruent to $-1 \mod p$ should be approximately
$\exp(-m^2/(2p))$. So if you want to find primes $p$ for which your equation is not solvable,
you might look at those where the order of $10 \mod p$ is less than about $\sqrt{p}$.
The order of $10 \mod p$ is $m$ or one of its divisors if $p$ divides $10^m - 1$.
So we might look for primes $ p > m^2$ dividing $10^m - 1$. Nearly all positive integers $x$ will have at least one prime factor greater than $\log_{10}(x)^2$ (see Dickman's theorem). |
H: Are there real numbers a and b such that $f(x,y,t) = x^a t^b$ satisfies the heat equation?
The question is in the title. The heat equation is as follows:
$$
\frac{\partial f}{\partial t} = k \left( \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} \right),\quad k\in\mathbb{R}
$$
Attempt at solution
Plugging the requested form into the above equation yields:
$$
\frac{\partial f}{\partial t} = b\ t^{b-1} x^a\\
\frac{\partial^2 f}{\partial x^2} = a (a-1)\ t^b x^{a-2}\\
\frac{\partial^2 f}{\partial y^2} = 0
$$
Which leads to showing that:
$$
b\ x^2 = k\ a(a-1)\ t
$$
I'm not sure how to proceed from this point. Is this the correct procedure to solve this problem? Thanks!
AI: Note that for your condition to be satisfied for all $x,t$, and since $x$ and $t$ are independent variables, the only way for it to hold is for both sides to equal zero, leading to:
$$b = 0,\qquad a = 0 \text{ or } 1.$$
So the only solutions of the requested form are the trivial ones $f=1$ and $f=x$.
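A quick symbolic check of this conclusion (my own sketch using SymPy, not part of the original answer):

```python
import sympy as sp

x, y, t, a, b, k = sp.symbols('x y t a b k')
f = x**a * t**b

# Residual f_t - k*(f_xx + f_yy) of the heat equation for this ansatz
residual = sp.simplify(sp.diff(f, t) - k * (sp.diff(f, x, 2) + sp.diff(f, y, 2)))
print(residual)

# Forcing both sides of b*x**2 = k*a*(a-1)*t to vanish:
print(sp.solve([sp.Eq(b, 0), sp.Eq(a * (a - 1), 0)], [a, b]))  # [(0, 0), (1, 0)]
```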
H: Quadratic residues mod $n$ of $n-1$
While investigating something about square numbers, I started to wonder: for which numbers $n$ is $n-1$ a quadratic residue mod $n$? Obviously all $n$ of the form $m^2+1$ will work, but there are other numbers, such as $13$. Is there a pattern to them? Any insights on this would be great.
AI: In very short:
The equation $x^2\equiv-1\pmod p$, with $p$ a prime, has a solution iff $p\equiv 1\pmod 4$ or $p=2$ (this is already a very nice exercise), and from here, using the prime factorization of $n\in\Bbb N$ together with the Chinese remainder theorem, you get the corresponding condition for $x^2\equiv-1\pmod n$: it is solvable iff $4\nmid n$ and no prime $p\equiv 3\pmod 4$ divides $n$.
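As a quick empirical check of this criterion (a small script of my own):

```python
def neg_one_is_square(n):
    """Is n - 1 (i.e. -1) a quadratic residue mod n?"""
    return any(x * x % n == n - 1 for x in range(1, n))

# Should print 2, 5, 10, 13, 17, 25, 26, 29, 34, 37, 41, 50, 53, 58:
# exactly the n <= 59 with no prime factor = 3 (mod 4) and 4 not dividing n.
print([n for n in range(2, 60) if neg_one_is_square(n)])
```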
H: Torus interior homeomorphic to torus exterior
Let $T^2 \subset \mathbb{R^3}$, and let $X_i$ be its interior and $X_e$ its exterior. By computing homotopy groups of $X_i \cup T^2$ and $X_e \cup T^2$ and the corresponding isomorphisms between them, one can show with Whitehead's theorem that $X_i$ is homotopy equivalent to $X_e$.
But how could I show that $X_e$ is homeomorphic to $X_i$?
AI: Here's how I think of it. The one point compactification of $\mathbb R^3$ is $S^3$. If you dig out a standard ball from $S^3$ it's believable that the complement is a ball, but you can also prove this directly using stereographic projection. If you remove a standard ball, the complement is a union of rays emanating from a sphere out to infinity. That is, it is a cone on a sphere, which is a ball.
So now I have $S^3$ written as the union of $2$ balls, one of which contains $\infty$. Drill a straight hole through the center of the bounded ball. This has the effect of creating a bounded solid torus. What happens to the unbounded ball is that a solid tube gets added to it along its boundary. Abstractly, if you add a solid tube to a $3$-ball to get an orientable $3$-manifold, the result is a solid torus.
H: $e^z$ is entire yet has an essential singularity (at $\infty$)
Is there no inconsistency? Or does the property of being entire exclude the point $z=\infty$?
p.s. following up from my previous question limit of $e^z$ at $\infty$
AI: The complex plane doesn't contain the point at infinity. (Infinity is *not* a number, not even a complex one.) Entire means holomorphic on $\mathbb{C}$. In fact, the only functions that are holomorphic on the whole Riemann sphere (i.e. on $\mathbb{C}\cup \{\infty\}$) are the constant functions.
H: Set Theory Notation Crises
For those who are familiar with the following notation, could you explain it in plain English? I picked up a set theory textbook, but the book assumes the reader is familiar with the notation without giving a formal explanation anywhere.
1) $$\{x:\mathscr{P}x\}$$
2) $$\{\mathscr{a}_x: \mathscr{P}x\}$$
3) $$\left(\bigcup_{i \in I} \mathscr{a}_i\right)^{c}$$
4) $$x \in \{y: y>2\}$$
5) (distributive laws) $$B \cap \left(\bigcup_{i \in I} \mathscr{a}_i\right)$$
6) $$\{x: x \not\in x\}$$
The $\mathscr{P}x$ is like a $P$ that is curly and looks like old English writing, or it might be Greek, and the same goes for $\mathscr{a}_i$. The $\mathscr{a}$ is like a fat curly $a$, which I'm thinking might stand for "for all." Any input with an explanation would be great, or any reference would work as well. The book I'm reading is 'Set Theory: An Introduction' by Robert Vaught.
AI: 1) and 2) are explained on page 8. ${\scr P} x$ indicates that $x$ satisfies some predicate or property (for example, if $x$ satisfies $\scr P$ precisely when $x$ is a positive integer, then $2.23$ does not satisfy $\scr P$ and is therefore not an element of $\{x:{\scr P} x\}$), and $a _x$ is a function of $x$, so you can just substitute it by $f(x)$ for some fixed $f$ defined whenever $x$ satisfies $\scr P$.
6) is an instance of 1): it is the set formed out of all the sets $x$ such that $x$ is not a member of itself. It is generally used to exhibit the paradox (Russell's paradox) which arises when one allows oneself to form sets $\{x:{\scr P} x\}$ without any restriction on the predicate $\scr P$.
$\bigcup_{i\in I} a_i$ is the set formed by all elements $x$ such that $x\in a_i$ for at least one $i\in I$; here the $a_i$ are said to form a family of sets indexed by the set $I$. So 5) is just the intersection of $B$ with the union of this family, that is, the elements that lie both in $B$ and in some $a_i$. The (distributive law) refers to the property
$$ B\cap (\bigcup_{i\in I} a_i) = \bigcup_{i\in I} (B\cap a_i)$$
Now, given a set $a$, $a^c$ usually refers to the complement of the set (which is understood to be contained in some omitted larger set); thus 3) is defined if all of the $a_i$ lie inside the same set $A$, in which case it expands to
$$(\bigcup_{i\in I}a_i)^c = A\setminus (\bigcup_{i\in I} a_i) $$
4) is, again, an instance of 1), where by saying that $x\in \{y:y>2\}$ we mean $x>2$ (note that in order for this to make sense $x$ must be a real number).
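A tiny concrete illustration of 3) and 5) (my own example): take $A=\{1,2,3,4\}$, $I=\{1,2\}$, $a_1=\{1,2\}$, $a_2=\{2,3\}$, and $B=\{2,4\}$. Then $\bigcup_{i\in I}a_i=\{1,2,3\}$, so $(\bigcup_{i\in I}a_i)^c=\{4\}$ relative to $A$, and both sides of the distributive law equal $\{2\}$: indeed $B\cap\{1,2,3\}=\{2\}$ and $(B\cap a_1)\cup(B\cap a_2)=\{2\}\cup\{2\}=\{2\}$.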
H: Level Sets Questions
1) In the following link, question 1:
http://mathquest.carroll.edu/libraries/MVC.student.14.01.pdf
Is it true that both partial derivatives are negative? If so, can someone help me find an example of a contour plot with $f_x<0,\ f_y>0$ at one of the points? I just want to verify.
2) In the following (question 4, with $t=1$ instead of $t=2$):
http://mathquest.carroll.edu/libraries/MVC.student.14.06.pdf
At $t=1$ I think that the derivative is positive; this is because $x'(1)=0$, $y'(1)>0$ and $z_y(3,0)>0$. Is that correct?
Thanks in advance
AI: Honestly, I can only offer a guessed function for part one. The following function may help us see what happens at such a point:
$$f(x,y)=\frac{(x-1)^2}{2}+(y-2)$$
At any point with $x<1$ it has $f_x=x-1<0$ and $f_y=1>0$, which is exactly the sign pattern you asked about.
H: question about Riemann zeta $\zeta (0)$
I know that
$$\zeta (m)=\sum_{n=1}^\infty n^{-m}$$
so
$$\zeta (0)=\sum_{n=1}^\infty n^0=1+1+1+1+1+1+\cdots=\infty $$
but actually
$$\zeta (0)=-0.5.$$
Where is the mistake? Please help. Thanks to all.
AI: I also struggled with this for a while. Your definition of the Riemann zeta function is only its definition when the real part of $ m $ is greater than $ 1 $.
The domain of $ \zeta $ though is $ \mathbb{C} $, so the question is: how do we move from $ \{z \mid \Re(z) > 1 \} $ to $ \mathbb{C} $? The answer is analytic continuation.
Using the functional equation for $\sum\limits_{n = 1}^\infty \frac{1}{n^z}$, we can extend the domain of $\zeta$ to all of the complex numbers except the simple pole at $z = 1$.
If you want to know the details, I suggest looking at the Wikipedia page on the function. |
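To illustrate (a standard computation, not in the original answer), the functional equation
$$\zeta(s)=2^s\pi^{s-1}\sin\!\left(\frac{\pi s}{2}\right)\Gamma(1-s)\,\zeta(1-s),$$
together with the Laurent expansion $\zeta(1-s)=-\frac{1}{s}+\gamma+O(s)$ near $s=0$, gives
$$\zeta(0)=\lim_{s\to 0} 2^s\,\pi^{s-1}\cdot\frac{\pi s}{2}\cdot\Gamma(1-s)\left(-\frac1s+\gamma+O(s)\right)=1\cdot\frac{1}{\pi}\cdot\frac{\pi}{2}\cdot 1\cdot(-1)=-\frac12,$$
which is where the value $-0.5$ comes from.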
H: Using a semicircle to find the side of a triangle
The figure above shows a bicycle path. If the semicircular portion $ABC$ has length $100\pi$ and $CD$ is $100$ ft, then what is $AD$?
I have tried to find the diameter of the circle and to apply the Pythagorean theorem, but I am not able to figure out the answer.
AI: The ratio of a circle's circumference to its diameter is $\pi$:
$$\frac{\text{circumference}}{\text{diameter}}=\pi.$$The length of the semicircle arc is half the circumference of what would be the full circle. Thus,
$$\frac{\text{semicircle}}{\text{diameter}}=\frac{\frac{1}{2}\cdot\text{circumference}}{\text{diameter}}=\frac{1}{2}\left(\frac{\text{circumference}}{\text{diameter}}\right)=\frac{\pi}{2}$$
Using that $\text{semicircle}=100\pi$, solve for the length of the diameter of the circle. Note that this is the same as the length of the line segment $AC$. Lastly, now apply the Pythagorean theorem. |
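Carrying the hint through (the figure is not shown here, so I am assuming, as the Pythagorean step suggests, that $AC\perp CD$): from $\frac{100\pi}{d}=\frac{\pi}{2}$ we get $d=AC=200$ ft, and then
$$AD=\sqrt{200^2+100^2}=\sqrt{50000}=100\sqrt{5}\approx 223.6\text{ ft}.$$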