H: Is there a smarter way to integrate?
My textbook uses the following technique to integrate. Let us take the following as an example, evaluate the integral of:
$$f(x) = 6x(x^2 + 1)^5$$
Notice that $[x^2+1]' = 2x$, so $6x\space dx = 3 \space d (x^2+1)$
$$6x(x^2+1)^5 \space dx = \dfrac{6}{2} (x^2+1)^5 \space d(x^2+1) = 3u^5 \space du = d \dfrac{1}{2} u^6 = d \dfrac{1}{2} (x^2+1)^6$$
So $$F(x) = \dfrac{1}{2} (x^2 +1)^6 + c$$
This method seems very confusing (what is the meaning of the $d$ for example?) and besides, it seems like it would work only in textbook-cases. Is there a 'better' way of integrating these types of basic functions?
p.s. - I looked in my textbook for reference, but the toughest types of functions we have to evaluate are the integrals of for example $h(x) = \dfrac{2x-3}{x^2+6x+10}$ and $k(x) = \dfrac{\arcsin^2(x)}{\sqrt{1-x^2}}$. So the method wouldn't of course have to work for integrals tougher than these types.
AI: First of all, the following boils down to exactly the same as your textbook does, but it might make more sense to you. Make the substitution $u = x^2 +1$. Then $du= 2xdx$. Hence $\int 6x(x^2+1)^5dx=\int 3 u^5du$. Now integrate and substitute back for $u$.
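As a quick sanity check (a sketch using the sympy library; my own addition, not part of the textbook method), differentiating the antiderivative recovers the integrand:

    import sympy as sp

    x = sp.symbols('x')
    f = 6*x*(x**2 + 1)**5
    F = sp.Rational(1, 2) * (x**2 + 1)**6   # antiderivative from the substitution u = x^2 + 1

    # d/dx F must equal f identically
    assert sp.simplify(sp.diff(F, x) - f) == 0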
Notice that it was a very plausible idea to make this substitution because we see that the integral involves a product of a certain function ($u$) multiplied by the derivative of this function ($du$). |
H: Why define vector spaces over fields instead of a PID?
In my few years of studying abstract algebra I've always seen vector spaces over fields, rather than other weaker structures. What are the differences of having a vector space (or whatever the analogous structure is called[module?]) defined over a principal ideal domain which is not also a field? What properties of vector spaces break down when defining it over a PID? Please give an example (apart from letting the vector space be the field/PID itself) where the vector space over a field has properties that the module over a PID does not.
AI: There is a very nice structure theory for finitely generated modules over a PID, which is frequently discussed here at MSE.
However, the general theory is much more complicated already in the case of (not finitely generated) abelian groups, i.e., $\Bbb{Z}$-modules.
The first thing that's missing is that not all modules over a PID have a basis. Take for instance the $\Bbb{Z}$-module $\Bbb{Z}/2\Bbb{Z}$: every element $x$ satisfies $2x=0$, so no nonempty subset is linearly independent over $\Bbb{Z}$. When dealing with vector spaces, the only invariant is the dimension. With modules over a PID, this is not the case anymore. |
H: What is the factor group $\mathbb{Z}/5\mathbb Z$?
I am trying to understand the concept of factor group. The definition of factor group I know is the following: Let $G$ be a group and $H$ be a subgroup of $G$. Then the group of cosets denoted by $G/H$ is called the factor group of $G$ by $H$. Now I am looking at an example: Let $G=\mathbb{Z}$ and $H=5\mathbb{Z}$. Then I find the cosets of $5\mathbb{Z}$. Here they are:
Cosets of $5\mathbb{Z}$ containing $0$, $\{\ldots,-10,-5,0,5,10,\ldots\}$
Cosets of $5\mathbb{Z}$ containing $1$, $\{\ldots,-9,-4,1,6,11,\ldots\}$
Cosets of $5\mathbb{Z}$ containing $2$, $\{\ldots,-8,-3,2,7,12,\ldots\}$
Cosets of $5\mathbb{Z}$ containing $3$, $\{\ldots,-7,-2,3,8,13,\ldots\}$
Cosets of $5\mathbb{Z}$ containing $4$, $\{\ldots,-6,-1,4,9,14,\ldots\}$
My question is, is the factor group $\mathbb{Z}/5\mathbb{Z}$ the union of all these $5$ cosets? Then I think $G/5\mathbb{Z} =\mathbb{Z}$ since the union of these cosets is equal to $\mathbb{Z}$. Can anyone help me with this?
Thanks.
AI: The factor group is not the union of cosets, it's the set of cosets, so
$${\bf Z}/5{\bf Z}=\{5{\bf Z},1+5{\bf Z},2+5{\bf Z},3+5{\bf Z},4+5{\bf Z}\}$$
The union of the cosets is the original group. And $H$ has to be a normal subgroup for the set of cosets to be a group (this is trivial if $G$ is abelian, of course). |
H: Why is $\langle \operatorname{grad} f, X\rangle_g$ independent of the metric on a Riemannian manifold?
Let $(M,g)$ be a Riemannian manifold and let $f \in C^{\infty}(M)$. Let $X$ be a smooth vector field on $M$. In smooth local coordinates $(x^i)$ on $M$, we can write $g = g_{ij} dx^i \otimes dx^j$ as well as $X = X^i \partial_{x^i}$. Now I have computed that $\langle\operatorname{grad} f, X \rangle_g = X^i\frac{\partial f}{\partial x^i}$ as follows. If we let $g^{ij}$ be the inverse matrix of $g_{ij}$, we have
$$\begin{eqnarray*} \langle\operatorname{grad} f, X \rangle_g &=& \left\langle g^{ij} \frac{\partial f}{\partial x^i}\partial_{x^j} , X^k\partial_{x^k} \right\rangle_g\\
&=& g^{ij} \frac{\partial f}{\partial x^i}X^k\left\langle \partial_{x^j},\partial_{x^k} \right\rangle_g\\
&=& g_{jk}g^{ij} \frac{\partial f}{\partial x^i}X^k \\
&=& g_{kj}g^{ji}\frac{\partial f}{\partial x^i}X^k \hspace{1cm} (\text{by using that $g_{ij}$ is a symmetric matrix})\\
&=& X^i\frac{\partial f}{\partial x^i}. \end{eqnarray*}$$
My question is: Is there any geometric reason as to why $\langle \operatorname{grad} f, X \rangle_g$ should be independent of the metric $g$?
AI: Absolutely. The exterior differential of a function, $df$, is a one-form which is defined independently of metric: $df(X) = X(f)$. In coordinates, $df = \partial_if dx^i$.
Choice of a nondegenerate metric induces an isomorphism between vector fields and $1$-forms by pairing: $g(X)(V) = \langle X,V\rangle$. In coordinates, this pairing corresponds to multiplying by the metric, or "lowering indices." This isomorphism is also known as a musical isomorphism: if $X$ is a vector field, then the one-form $X^\flat$ is defined by $X^\flat(V) = \langle X,V\rangle$.
The gradient $\operatorname{grad}(f)$ is defined to be the metric dual of the one-form $df$, i.e., $\operatorname{grad}(f) = (df)^\sharp$, so if you pair $\operatorname{grad}(f)$ with an arbitrary vector field $X$, you get $df(X) = X(f)$, which does not depend at all on the metric. The gradient is constructed so that, in pairing, all mentions of the metric cancel. |
H: longest path two nodes in common
Let $G$ be a connected graph. Prove that two longest paths in $G$ have at least one node in common. Note that two longest paths do not necessarily have the same length.
I began by defining two paths, namely $P_1 = \langle v_1,\ldots,v_k\rangle$ and $P_2 =\langle u_1,\ldots,u_m\rangle$ where $m\neq k$ because of the note given. Now I state the definition of a connected graph.
A graph $G$ is connected if for each pair of vertices $u, v$, there is a $u,v$-walk in $G$.
Now my first intuition is to consider the fact that if no node were shared, then connecting the two paths would give a (slightly?) longer path, which would make them not the longest paths. Given the fact that they are the two longest (say $n$ and $n-1$), they should be connected via at least one node.
However, it seems odd to me to now state that this is a proof.
AI: Let $P_1 = \langle v_0, \dots, v_m \rangle$ and $P_2 = \langle u_0, \dots, u_n \rangle$, such that $P_1$ and $P_2$ have no common vertices, be two different longest inextensible paths in a graph. Let us assume that $m \ge n$. Furthermore, let $v_k$ and $u_l$ be connected (in the original graph) by a path $\langle v_k, w_1, \dots, w_p, u_l \rangle$ such that none of the $w_i$ belong to either $P_1$ or $P_2$ (the length of such a path is, obviously, $p+1$).
Without loss of generality, we can assume that $k \ge m/2$ and $l \ge n/2$. This is just a matter of notation: we're labeling vertices in a way that $\langle v_0, \dots, v_k \rangle$ is longer than or equally long as $\langle v_k, \dots, v_m \rangle$ and that $\langle u_0, \dots, u_l \rangle$ is longer than or equally long as $\langle u_l, \dots, u_n \rangle$.
We construct a new path
$$P := \langle v_0, \dots, v_k, w_1, \dots, w_p, u_l, \dots u_0 \rangle$$
which has length
$$k + (p + 1) + l \ge m/2 + p + 1 + n/2 \ge n/2 + p + 1 + n/2 = n + p + 1 > n, $$
so $P$ is strictly longer than $P_2$ (and obviously different from $P_1$), which contradicts the assumption that $P_2$ is a longest inextensible path in the graph. |
H: Finding global max./min.
my task is to figure out the critical points of $f(x,y)=e^y(x^4-x^2+y)$, $f:\mathbb{R}^2 \rightarrow \mathbb{R}$, and show which of them is a maximum or minimum. As far as I got, I've shown that the critical points are:
1.: $(0,-1)$ which is neither max. nor min. (char. pol. of Hessian is indefinite)
2.: $\left(\frac{1}{ \sqrt2},-\frac{3}{4}\right)$, which is a local minimum and
3.: $\left(-\frac{1}{ \sqrt2},-\frac{3}{4}\right)$ which is the second local minimum.
Moving towards my question: is there any way to easily show whether a global maximum or minimum exists (in general and/or concerning this example)? I've used the char. poly. of the Hessian; is there any faster way to find local min./max.?
AI: First find the limit as $x$ and $y$ approach infinity. In single variable functions, you only have to check the two "ends", $-\infty$ and $\infty$. In functions of two variables, there are four: $x\to-\infty$, $x\to\infty$, $y\to-\infty$, and $y\to\infty$.
$$\lim_{x\to-\infty}f(x,y)=\infty$$
$$\lim_{x\to\infty}f(x,y)=\infty$$
$$\lim_{y\to-\infty}f(x,y)=-\lim_{y\to\infty}\frac{e^{-y}}{\frac{1}{y}}=0$$
$$\lim_{y\to\infty}f(x,y)=\infty$$
Evaluate $f(x,y)$ at the critical points you calculated and compare them to these values.
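Carrying that out numerically (a sketch completing the recipe; my own addition):

    import math

    def f(x, y):
        return math.exp(y) * (x**4 - x**2 + y)

    # the three critical points found in the question
    points = [(0.0, -1.0), (1 / math.sqrt(2), -0.75), (-1 / math.sqrt(2), -0.75)]
    for p in points:
        print(p, f(*p))   # -1/e ≈ -0.368 at the saddle, -e^(-3/4) ≈ -0.472 at the two minima

Since $-e^{-3/4}\approx -0.472$ lies below all of the boundary limits above ($0$ and $+\infty$), the two local minima are in fact global minima, while $f$ is unbounded above, so no global maximum exists. |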
H: How to solve the irrational system of equations?
Solve the system of equations $$\begin{cases} \sqrt{x+2y+3}+\sqrt{9 x+10y+11}=10,&\\[10pt] \sqrt{12 x+13y+14}+\sqrt{28 x+29y+30}=20. \end{cases} $$
AI: Hint: Try $t=x+y+1$ to get
$$\begin{cases} \sqrt{t+y+2}+\sqrt{9t +y+2}=10,&\\[10pt] \sqrt{12t+y+2}+\sqrt{28t+y+2}=20. \end{cases}$$
And solve by squaring.
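Working the hint through (squaring twice; this completion is my own, not part of the original hint) gives $t=5$ and $y=2$, hence $(x,y)=(2,2)$. A quick numerical check:

    from math import sqrt, isclose

    x, y = 2, 2   # candidate solution obtained from t = x + y + 1 = 5
    assert isclose(sqrt(x + 2*y + 3) + sqrt(9*x + 10*y + 11), 10)
    assert isclose(sqrt(12*x + 13*y + 14) + sqrt(28*x + 29*y + 30), 20)

Both equations check out. |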
H: What do we call the negation of logical equivalence?
The statement that '$x$, $y$ and $z$ are equivalent' just means
all of $x$, $y$ and $z$ are false, or
all of $x$, $y$ and $z$ are true.
Now suppose it's not the case that $x$, $y$ and $z$ are equivalent. That is, suppose:
at least one of $x$, $y$ and $z$ is false, and
at least one of $x$, $y$ and $z$ is true.
Under these circumstances, we say that $x$, $y$ and $z$ are what? Inequivalent? This seems wrong, because some of them may still be equivalent.
AI: There is no "special term": We can simply say "not all of $x, y, z$ are equivalent" or "$x, y, z$ are not all equivalent."
As you note, that's not the same thing as all of them being inequivalent, because two of them might still be equivalent, and in the case of logical equivalence, we would need one pair and not all pairs logically equivalent.
Negation of the statement of mutual logical equivalence involves DeMorgan's:
We can say "$x, y, z$" are all equivalent by stating $$(x \iff y) \land (y \iff z)\tag{A}$$ and by transitivity we have also conveyed in $(A)$ that $(x \iff z)$.
And so to negate the entire conjunction, we arrive at the disjunction of the negated conjuncts $$\lnot(x \iff y) \lor \lnot(y \iff z): \tag{$\lnot $ A}$$
In disjunctive normal form, $(\lnot A)$ is equivalent to: $$(\lnot x \land y) \lor (\lnot y \land z) \lor (x \land \lnot z)\tag{DNF}$$
And in conjunctive normal form, as you describe in your bullets: we have that $(\lnot A)$ is equivalent to $$(x \lor y\lor z) \land (\lnot x \lor \lnot y \lor \lnot z)\tag{CNF}$$
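These equivalences are easy to confirm by brute force over all eight truth assignments (a small Python sketch, my own addition):

    from itertools import product

    for x, y, z in product([False, True], repeat=3):
        not_A = not ((x == y) and (y == z))
        dnf = (not x and y) or (not y and z) or (x and not z)
        cnf = (x or y or z) and (not x or not y or not z)
        assert not_A == dnf == cnf

All three formulas agree on every assignment. |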
H: What makes the $\operatorname{Var}(X)+\operatorname{Var}(Y)=\operatorname{Var}(X+Y)$ property important?
What makes the $\operatorname{Var}(X)+\operatorname{Var}(Y)=\operatorname{Var}(X+Y)$ property important?
It was taught in my statistics class
AI: In the 18th century Abraham de Moivre considered this problem (his account of which you can find in his book The Doctrine of Chances): If you toss a fair coin $1800$ times, what is the probability that the number of heads is in a specified interval, for example at least $880$ but no more than $906$? In the course of solving that problem, he first introduced the "bell-shaped curve"
$$
\varphi(x) = \text{constant}\cdot e^{-x^2/2}
$$
where the "constant" is chosen so as to make this a probability density function. (De Moivre found the constant numerically, and James Stirling later showed that it's $1/\sqrt{2\pi\,{}}$. This is the "normal distribution" with expectation $0$ and variance $1$. It follows that the normal density with expectation $\mu$ and variance $\sigma^2$ has density
$$
x\mapsto \frac1\sigma\varphi\left(\frac{x-\mu}{\sigma}\right).
$$
De Moivre found that the distribution of the number of heads can be made as close as desired to the normal distribution with the right values of $\mu$ and $\sigma$ by making the number of tosses large enough, and $1800$ was plenty.
But how do you know the right values of $\mu$ and $\sigma$?
In the case of $\sigma$, you just use the fact that the variance of a sum of independent random variables is the sum of the variances, i.e. the property of variances that you learned in your statistics class. Other simple measures of statistical dispersion lack that property, and so cannot be applied to this coin-toss problem.
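To see the property at work on de Moivre's own problem (a numerical sketch, my own addition): each toss contributes variance $\frac14$, so $1800$ independent tosses give $\sigma^2 = 450$, and the normal approximation follows immediately:

    import math

    n, p = 1800, 0.5
    mu = n * p                            # expectations add: 900
    sigma = math.sqrt(n * p * (1 - p))    # variances add: sqrt(450) ≈ 21.2

    def Phi(z):                           # standard normal CDF
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    # normal approximation (with continuity correction) to P(880 <= heads <= 906)
    print(Phi((906.5 - mu) / sigma) - Phi((879.5 - mu) / sigma))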
It's used in statistics all the time. How do you know that if you take a sample of size $n$ from a population that's not necessarily normally distributed, you can use the two numbers
$$
\bar x \pm 1.96\frac{S}{\sqrt{n}}
$$
as endpoints of a $95\%$ confidence interval for the population mean, where $S$ is the sample standard deviation? In particular, where does "$S/\sqrt{n}$" come from? The answer is that it comes from that same property of variances that you were taught. |
H: Calculate the limit: $\lim_{n\to+\infty}\sum_{k=1}^n\frac{\sin{\frac{k\pi}n}}{n}$ Using definite integral between the interval $[0,1]$.
Calculate the limit: $\lim_{n\to+\infty}\sum_{k=1}^n\frac{\sin{\frac{k\pi}n}}{n}$ Using definite integral between the interval $[0,1]$.
It seems to me like a Riemann integral definition:
$\sum_{k=1}^n\frac{\sin{\frac{k\pi}{n}}}{n}=\frac1n(\sin{\frac{\pi}n}+...+\sin\pi)$
So $\Delta x=\frac 1n$ and $f(x)=\sin(\frac{\pi x}n)$ (Not sure about $f(x))$
How do I proceed from this point?
AI: Ideas: take the partition $\,P_n\,$ of the unit interval
$$P_n:=\left\{0=x_0\,,\,x_1=\frac1n\,,\ldots,\,x_k=\frac kn\,,\ldots,\,x_n=\frac nn=1\right\}$$
and choose points
$$c_i:=\frac in\;,\;\;1\le i\le n$$
Since we already know that $\,\sin \pi x\,$ is continuous everywhere and Riemann integrable in any closed, finite interval, we can apply the Riemann integral definition for the particular partitions and particular points $\,c_i\,$ as above in each subinterval, and get:
$$\lim_{n\to\infty}\frac 1n\sum_{k=1}^n \sin\frac{\pi k}n=\int\limits_0^1 \sin\pi x\,dx=\ldots\ldots$$
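For reference (completing the dots, which the answer deliberately leaves as an exercise): $\int_0^1 \sin\pi x\,dx=\left[-\frac{\cos\pi x}{\pi}\right]_0^1=\frac{2}{\pi}$. A numerical sanity check:

    import math

    n = 10**6
    s = sum(math.sin(math.pi * k / n) for k in range(1, n + 1)) / n
    print(s, 2 / math.pi)   # both ≈ 0.63662

The finite sum and the integral agree closely. |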
H: What am I doing wrong; evaluating integrals?
$$f(x) = \dfrac{2x}{2-3x^2}$$
$$- \dfrac{1}{3} d (2-3x^2) = 2x \, dx$$
$$ \dfrac{2x}{2-3x^2} dx = -\dfrac{1}{3} d (2-3x^2) \cdot \dfrac{1}{2-3x^2} = - \dfrac{1}{3} du \cdot u^{-1} = -\dfrac{1}{3} u^{-1} \cdot du$$
And here I'm stuck, since you can't divide by $0$. What am I doing incorrectly here? I've applied the method my book gave me perfectly as far as I know.
AI: $$f(x) = \dfrac{2x}{2-3x^2}$$
$$2-3x^2=t\implies-6x=\dfrac{dt}{dx}\implies dx=\dfrac{dt}{-6x}$$
$$\int \dfrac {2x}{2-3x^2}\,dx$$
$$\int \dfrac {2x}{t}\,\dfrac{dt}{-6x}$$
$$-\int \dfrac {1}{3t}\,dt$$
$$-\dfrac13\cdot \log |t|+C$$
$$-\dfrac13\cdot \log |2-3x^2|+C$$
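As a check (my own addition), differentiating the result recovers the integrand:
$$\frac{d}{dx}\left[-\frac13 \log|2-3x^2|\right] = -\frac13\cdot\frac{-6x}{2-3x^2} = \frac{2x}{2-3x^2}.$$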
When you have a situation like this: $\int \dfrac 1x \,dx$, don't write it as $x^{-1}$.
$\int x^n\, dx=\dfrac {x^{n+1}}{n+1}+C$; this is not applicable for $n=-1$.
$\int x^{-1}\, dx$ is solved as $\int \dfrac 1x\, dx=\log |x|+C$ |
H: How do I find the limit $\lim_{x \to 1} \frac{1}{1 - x} - \frac{3}{1 - x^3}$
How do I calculate the following limit? $$\lim_{x \to 1} \frac{1}{1 - x} - \frac{3}{1 - x^3}$$
I already transformed this into
$$\lim_{x \to 1} \frac{(1-x)(1-x)(x+2)}{(1-x)(1-x^3)} = \frac{(1-x)(x+2)}{(1-x^3)}$$
but this still has zeroes up- and downstairs. L'Hospital's rule is not allowed. Any advice?
EDIT: Sorry, I got confused. There is a 3 in the second numerator.
EDIT 2: Wolfram Alpha says the limit is $-1$
AI: Recall that $1-x^3=(1-x)(1+x+x^2)$, so
\begin{align*}
\frac1{1-x}-\frac3{1-x^3} &= \frac1{1-x}\left[1-\frac3{1+x+x^2}\right] = \frac1{1-x}\cdot\frac{1+x+x^2-3}{1+x+x^2} \\ &= \frac1{1-x}\cdot\frac{(x+2)(x-1)}{1+x+x^2} =
\frac{x-1}{1-x}\cdot\frac{x+2}{1+x+x^2} = -\frac{x+2}{1+x+x^2}
\,.
\end{align*}
From this we see that $\lim\limits_{x\to 1} \left(\dfrac1{1-x}-\dfrac3{1-x^3}\right) = -1$.
P.S. You could have gotten it from where you were by using the factoring on my first line to simplify your fraction. |
H: Kinematics stone thrown upwards past a point, show the following.
I know I should be able to do this, but I have tried for 3 hours and can't do it. I know it's simple but it's driving me mad.
A particle is projected vertically upwards with speed $ u_{0}$ and passes through a point
that is a distance $ h $ above the point of projection at time $t_{1}$ going up and $t_{2}$ coming down. Show that $g t_{1} t_{2} = 2 h$.
I am assuming the time taken for the stone to go from point $h$ up to the max height is equal to the time taken for the stone to fall from the max height down to the point $h$. This time being $\frac{t_{2}-t_{1}}{2}$. I've used the SUVAT equations many times, and the answer won't deliver. Any help appreciated.
AI: The kinetic and potential energies are given by
$$T=\frac 12 m \dot h^2$$
$$V=m\,g\,h$$
By applying Lagrangian mechanics the equation of motion can be found as
$$\ddot h=-g\Rightarrow h=-\frac g2 t^2+a\,t+b$$
By initial conditions $\dot h(0)=u_0$ and $h(0)=0$
$$h=-\frac g2 t^2+u_0\,t$$
It can be described as a quadratic equation
$$\frac g2 t^2-u_0\,t+h=0$$
which has the following roots
$$t_1=\frac {u_0+\sqrt{u_0^2-2\,h\,g}}{g}\qquad t_2=\frac {u_0-\sqrt{u_0^2-2\,h\,g}}{g}$$
and
$$t_1 t_2=\frac {u_0+\sqrt{u_0^2-2\,h\,g}}{g}\frac {u_0-\sqrt{u_0^2-2\,h\,g}}{g}=\frac{u_0^2-u_0^2+2\,h\,g}{g^2}$$
$$t_1\,t_2=\frac{2h}g\Rightarrow g\,t_1\,t_2=2h$$ |
H: Totient-like function
I have a number written as factors, for instance: $n = 2 \cdot 3 \cdot 3 \cdot 5$. What I have to do is find how many numbers in $[1, n)$ are coprime to $n$, which means $\gcd = 1$. It can simply be done using Euler's totient. But what if $\gcd = 2$ or more? Is there any totient-like function?
UPDATE:
I am seeking how many numbers $a_i \in [1, n)$ will return $\gcd(a_i,n) = 2$. For $\gcd(a_i, n) = 1$ it's Euler's totient; what about higher GCDs?
AI: Let $n > 1$, and let $d < n$ be a positive divisor of $n$. You want to count the number of elements of the set
$$
A = \{ a : 0 \le a < n, \gcd(a, n) = d \}.
$$
Note that if $a \in A$, then $\gcd\left(\dfrac{a}{d}, \dfrac{n}{d}\right) = 1$, so $\dfrac{a}{d} \in B$, where
$$
B = \left\{ b : 0 \le b < \frac{n}{d}, \gcd\left(b, \frac{n}{d}\right) = 1 \right\},
$$
and $B$ has $\varphi(n/d)$ elements. Conversely, if $b \in B$, then $b d \in A$, as
$$
\gcd(b d, n) = \gcd \left(b d, \frac{n}{d} d \right) = \gcd \left( b, \frac{n}{d} \right) d = d.
$$
So $A$ has $\varphi(n/d)$ elements.
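A brute-force check of this count, using the $n = 2\cdot3\cdot3\cdot5 = 90$ from the question (a sketch relying on sympy's totient):

    from math import gcd
    from sympy import totient

    n = 90
    for d in [1, 2, 3, 5, 6, 9, 10]:
        count = sum(1 for a in range(n) if gcd(a, n) == d)
        assert count == totient(n // d)
        print(d, count)

Each divisor $d$ indeed contributes $\varphi(n/d)$ values of $a$. |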
H: Prove that the degrees lie in a range
Let $G=(V,E)$ be a graph with $|V|=n$ and $|E|=m$ prove that
$$
\min_{u\in V} \{d(u)\}\leq 2\frac{m}{n}\leq \max_{v\in V} \{ d(v)\}
$$
Now my first intuition is to assume that $\min\limits_{u\in V} \{d(u)\} =0$ holds because it is not a connected graph. And my second assumption is to use $\max\limits_{v\in V} \{ d(v)\} =n -1$ (connected to everything, except itself), and from another exercise we got
$$
d(v)\geq \frac{n-1}{2}
$$
which could be interpreted as
$$
n-1\leq 2d(v)
$$
which would help in getting the right side, the only thing baffling me at the moment is the $\frac{m}{n}$ part, any intuition on this?
AI: How about we just sum up all the degrees? We know:
$$\min_{v \in V} d(v) \le d(u) \le \max_{v \in V} d(v).$$
Applying this to all elements in the sum $2m = \sum_{u \in V} d(u)$, we get that
$$n \min_{v \in V} d(v) \le \sum_{u \in V} d(u) \le n\max_{v \in V} d(v),$$
so
$$\min_{v \in V} d(v) \le \frac{2m}{n} \le \max_{v \in V} d(v).$$ |
H: Does "pointwise equicontinuous implies uniformly equicontinuous" imply compactness?
If every sequence of pointwise equicontinuous functions $M \rightarrow \mathbb{R}$ is uniformly equicontinuous, does this imply that $M$ is compact?
AI: Consider the subspace $M=\mathbb{Z}$ of the reals, an infinite discrete space, certainly not compact. But every sequence of functions is uniformly equicontinuous on that space: taking $\delta < 1$, the condition $|x-y|<\delta$ forces $x=y$, so the defining inequality holds trivially. |
H: help me with this regarding hypothesis using chi square distribution
The rope used in a lift produced by a certain manufacturer is known to have a mean tensile breaking strength of 1700 kg and standard deviation 10.5kg. A new component is added to the material which will, it is claimed, decrease the standard deviation without altering the tensile strength. Random samples of 20 pieces of the new rope are tested to destruction and the tensile strength of each piece is noted. The results are used to calculate unbiased estimates of the mean strength and standard deviation of the population of new rope. These were found to be 1724 kg and 8.5kg.
(i) Test, at the 5% level, whether or not the variance has been reduced.
(ii) What recommendation would you make to the manufacturer?
My answer:
I simply want to know whether, when doing the hypothesis test, it is possible to use
Null hypothesis: population variance $\sigma^2 = 10.5^2$
Alternative hypothesis: the variance has been reduced $\implies$ population variance $\sigma^2 < 10.5^2$
Then the test statistic $= \chi^2 = (n-1)s^2/\sigma^2 = (20-1)\cdot 8.5^2/10.5^2 = 12.4512$.
Degrees of freedom $= 20-1 = 19$.
I have a problem with choosing the critical chi-square value.
I have chosen the critical chi-square value $=$ the chi-square value for .05 sl and df of 19 $= 10.1$.
Since the test statistic doesn't fall in the region of rejection, we don't reject the null hypothesis; hence the variance has not been reduced.
please help me with this problem
AI: Not sure what you mean by "sl". You want a number $c$ such that $\Pr(\chi^2_{19}<c)=0.05$. If the test statistic is less than that, then you reject $H_0$; otherwise you don't. (The software I'm using gives $c=10.11701$.)
You're OK, except that as your bottom line, instead of concluding that the variance has not been reduced, you should conclude that you haven't shown that it has been reduced. That could be because you need a more sensitive test to detect a small reduction, and that more sensitive test could come from just using a larger sample.
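For instance (a sketch with scipy standing in for "the software"; my own addition):

    from scipy.stats import chi2

    c = chi2.ppf(0.05, df=19)        # ≈ 10.117, the critical value
    stat = 19 * 8.5**2 / 10.5**2     # ≈ 12.451
    print(c, stat, stat < c)         # stat is not below c -> do not reject H0

The statistic is not below the critical value, so $H_0$ stands, as in your calculation. |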
H: About completeness of $l^{\infty}$ with respect to sup norm
Let $l^{\infty}$ be the space of all bounded sequences of real numbers $(x_n)_{n =1}^{\infty}$ with the sup norm. I have to show that $l^{\infty}$ is complete with respect to this norm.
Proof: In the proof below I am confused by the sequence $x^n = (x_1^{n},x_2^{n},\ldots )$. I am not able to visualize this sequence. How can this sequence be formed? Is $(x^{n})$ a collection of Cauchy sequences?
If $x_1^{n}$, $x_2^{n}$ are different Cauchy sequences, they may be converging to different points, so how can we assume that $(x^{n})$ will converge to a fixed point $x$?
Please help me to understand this. Any numerical example supporting this will be very much helpful. Thanks
AI: Elements of $l^\infty$ are sequences. You have to show that a Cauchy sequence in $l^\infty$ converges.
Using a different notation, consider a Cauchy sequence $(x_n)$ in $l^\infty$. Each $x_n$ is itself a sequence; let's denote it by
$$
x_n(0), x_n(1), \dots
$$
So, if $x_n$ is the sequence $1, 1/3, 1/5,\dotsc$ (the reciprocals of the odd integers), you'd have $x_n(0)=1$, $x_n(1)=1/3$ and so on. Thus $x_n(k)$ means "the $k$-th term in the $n$-th element of the sequence we're starting with". The book page you show simply writes this as $x^n_k$.
So every $x_n$ (or $x^n$ in the book's notation) is a bounded sequence, not a Cauchy one, because it's an element of $l^\infty$. It's not at all assumed that $x_n$ converges (most elements of $l^\infty$ don't). It's the sequence (of sequences) $x_0,x_1,\dotsc$ that is a Cauchy sequence under the sup norm.
The fact that $(x_n)$ is a Cauchy sequence means that, for all $\def\eps{\varepsilon}\eps>0$, there is $N$ such that, for $n,m>N$
$$
\|x_n-x_m\|<\eps
$$
that is,
$$\sup_k|x_n(k)-x_m(k)|<\eps,$$
which implies
$$
|x_n(k)-x_m(k)|<\eps, \text{ for all } k.
$$
(it would be equivalent if we used $\le\eps$ throughout). |
H: Prove that if $ h \circ f = g $ then $ h $ is an $A$-algebra homomorphism.
Let $f:A\rightarrow B,\ g:A\rightarrow C$ be ring homomorphisms. An $A$-algebra homomorphism $h:B\rightarrow C$ is a ring homomorphism which is also an $A$-module homomorphism.
Please prove that if $h \circ f = g$ then $h$ is an $A$-algebra homomorphism.
AI: We have $h(ax)=h(f(a)x)=h(f(a))h(x)=g(a)h(x)=ah(x)$ for all $a\in A$ and $x\in B$. (Note that $B$ is an $A$-module via $f$, that is, $ax:=f(a)x$ for all $a\in A$ and $x\in B$, while $C$ is an $A$-module via $g$.)
Edit. The question was unclear and I assumed that $h$ is a ring homomorphism; otherwise the claim is (almost) obviously wrong. |
H: What is $\;\int xe^{-x^2} \,dx\;?$
What is $$\int xe^{-x^2} dx\quad?$$
I used substitution to rewrite it as $$\int -\dfrac{1}{2}e^u\, du$$ but this is too hard for me to evaluate. When I used Wolfram Alpha for $\int e^{-x^2} dx$ I got a weird answer involving a so-called error function and pi and such (I'm guessing it has something to do with Euler's identity, but I'm fairly certain this is above my textbook's level).
AI: Your substitution was spot on. You put $$u = -x^2 \implies du = -2x\;dx \implies x\,dx = -\frac 12\, du$$
And having substituted, you obtained $$\int -\dfrac{1}{2}e^u\, du$$
Great work. But I think you gave up too early!: $$-\frac 12\int e^u \, du = -\frac 12 e^u + C,\tag{1}$$ and recall, $u = f(x), \;du = f'(x)\,dx$, so $(1)$ is equivalent to $$-\frac 12 \int e^{f(x)}\,f'(x)\,dx = -\frac 12 e^{f(x)} + C$$
So we can integrate as follows, and then back-substitute:
$$\int -\dfrac{1}{2}e^u\, du = -\frac 12 \int e^u \,du = -\frac 12 e^u + C = -\frac 12 e^{-x^2} + C\tag{2}$$
Now, to remove all doubts, simply differentiate the result given by $(2)$: $$\frac{d}{dx}\left(-\frac 12 e^{-x^2} + C\right) = -\frac 12(-2x)e^{-x^2} = xe^{-x^2}$$
which is what you set out to integrate: $\color{blue}{\bf x}e^{-x^2} \neq e^{-x^2}$ |
H: Example of a group which is abelian and has finite (except the $e$) and infinite order elements.
Exercise 7: Show that the elements of finite order in an abelian
group $G$ form a subgroup of $G$
I just solved this exercise but I can't find an example of an abelian group that has elements of finite order (other than $e$) and elements of infinite order. Are there any well-known such groups?
For non-abelian groups with other properties, I know groups of matrices.
AI: What about the group $\mathbb{Z}_2\times\mathbb{Z}$ ? $(1,0)$ has finite order and $(1,1)$ has infinite order. |
H: Total probabilities of being admitted to any university
Let's consider a hypothetical situation in which a student applies to 10 different universities, whose number of applicants, admissions and admission rate you can see in the table below.
------------------------------------------------------
| Name | Applicants | Admitted | Admission rate |
------------------------------------------------------
| Columbia | 329281 | 22885 | 6.95% |
| Stanford | 280915 | 19945 | 7.10% |
| MIT | 111963 | 10894 | 9.73% |
| CalTech | 17471 | 2231 | 12.77% |
| Cornell | 117590 | 21131 | 17.97% |
| Berkeley | 167324 | 36142 | 21.60% |
| UCLA | 159635 | 40675 | 25.48% |
| Virginia | 73030 | 24297 | 33.27% |
| Rochester | 30261 | 10319 | 34.10% |
| UCSD | 80544 | 28593 | 35.50% |
------------------------------------------------------
How can we estimate the total probability of being admitted to any of these 10 universities? We also have to take into account that some of the other applicants may have also applied to more than one university.
I'd be grateful if you could redirect me to any similar question or provide me any useful formula to solve this problem. Thanks for your interest.
Source: USNews Education
AI: This is generally easier to solve via the complementary situation, that is: What's the probability of NOT being accepted by any university? Then the one you're looking for will be $1-x$. Now that's easier to calculate because you just have to use the product rule: the probabilities for each of them multiplied together, as there is only one way of not being accepted anywhere, namely all universities rejecting you; that is:
$$P(\text{NO})=\prod_i(1-p_i)$$
where $p_i$ is the admission rate divided by $100$. The result is $0.0926$, which is $9.26\%$, so the probability of being admitted to some university is $100-9.26=90.74\%$.
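The arithmetic, for reference (a small sketch reproducing the product from the table's rates):

    rates = [0.0695, 0.0710, 0.0973, 0.1277, 0.1797,
             0.2160, 0.2548, 0.3327, 0.3410, 0.3550]
    p_none = 1.0
    for p in rates:
        p_none *= 1 - p
    print(p_none, 1 - p_none)   # ≈ 0.0926 and 0.9074

(Note that this treats the ten admission decisions as independent, which, as the question hints, is only an approximation.) |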
H: Calculate the limit using definite integrals: $\lim_{n\to\infty}2n\sum_{k=1}^n\frac1{(n+2k)^2}$
Calculate the limit using definite integrals: $\lim_{n\to\infty}2n\sum_{k=1}^n\frac1{(n+2k)^2}$
Well, I started like this:
$\lim_{n\to\infty}2n\sum_{k=1}^n\frac1{(n+2k)^2}=\lim_{n\to\infty}2n[\frac1{(n+2)^2}+...+\frac1{(n+2n)^2}]$
From this point on I'm always getting stuck. I know I should use the Riemann integral definition, but I don't know how I can recognize which function I am looking at, or how I can recognize the correct $\Delta x$, or at least the "borders" of the definite integral.
Say I presume $\Delta x=\frac1n$ (it's incorrect; it should be $\frac 2n$ - but why?), then: $\lim_{n\to\infty}\frac1n[\frac{2n^2}{(n+2)^2}+...+\frac{2n^2}{(n+2n)^2}]$. Using the limit on both of the 'edges' of the sum should give me the 'borders' of the definite integral?
As you can see I'm pretty much lost in this kind of question. BTW: An explanation will be of much more use than only an answer to this one.
AI: Factor out an $n^2$ from the denominator of the summand. You then get
$$\frac{2}{n} \sum_{k=1}^n \frac{1}{(1+2 k/n)^2}$$
This is the form of a Riemann sum; take the limit as $n \to \infty$ and identify the result as a definite integral over a variable $x$.
A Riemann sum takes the form
$$\int_a^b f(x) \, dx = \lim_{n \to \infty} \frac{b-a}{n} \sum_{k=1}^n f\left(a + (b-a)\frac{k}{n}\right)$$
What is $a$? $b$?
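For reference (answering the closing question, which the original leaves to the reader): matching the sum above against the formula gives $a=0$, $b=2$ and $f(x)=\frac{1}{(1+x)^2}$, so the limit is $\int_0^2\frac{dx}{(1+x)^2}=\left[-\frac{1}{1+x}\right]_0^2=\frac23$. A numerical sanity check:

    n = 10**6
    s = (2 / n) * sum(1 / (1 + 2 * k / n)**2 for k in range(1, n + 1))
    print(s, 2 / 3)   # both ≈ 0.66667

The sum and the integral agree. |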
H: Trigonometric function integration: $\int_0^{\pi/2}\frac{dx}{(a^2\cos^2x+b^2 \sin^2x)^2}$
How to integrate $$\int_0^{\pi/2}\dfrac{dx}{(a^2\cos^2x+b^2 \sin^2x)^2}$$
What's the approach to it?
Being a high school student, I don't know things like contour integration (at least it is not taught in high school in India). I just know simple elementary results of definite and indefinite integration. Substitutions and all those work fine. :)
AI: In this case, the following trick also works: Dividing both the numerator and the denominator by $\cos^4 x$, we can use the substitution $ t = \tan x$ to obtain
\begin{align*}
\int_{0}^{\frac{\pi}{2}} \frac{dx}{(a^2 \cos^2 x + b^2 \sin^2 x)^2}
&= \int_{0}^{\frac{\pi}{2}} \frac{1 + \tan^2 x}{(a^2 + b^2 \tan^2 x)^2} \sec^2 x \, dx \\
&= \int_{0}^{\infty} \frac{1 + t^2}{(a^2 + b^2 t^2)^2} \, dt \\
&= \frac{1}{a^2}\int_{0}^{\infty} \left( \frac{1}{a^2 + b^2 t^2} + \frac{(a^2 - b^2) t^2}{(a^2 + b^2 t^2)^2} \right) \, dt.
\end{align*}
The first one can be evaluated as follows: Let $bt = a \tan\varphi$. Then
$$ \int_{0}^{\infty} \frac{dt}{a^2 + b^2 t^2} = \frac{1}{ab} \int_{0}^{\frac{\pi}{2}} d\varphi = \frac{\pi}{2ab}. $$
For the second one, we perform the integration by parts:
\begin{align*}
\int_{0}^{\infty} \frac{t^2}{(a^2 + b^2 t^2)^2} \, dt
&= \left[ - \frac{1}{b^2}\frac{1}{a^2 + b^2 t^2} \cdot \frac{t}{2} \right]_{0}^{\infty} + \int_{0}^{\infty} \frac{1}{2b^2}\frac{dt}{a^2 + b^2 t^2} \\
&= \frac{1}{2b^2} \int_{0}^{\infty} \frac{dt}{a^2 + b^2 t^2} \\
&= \frac{\pi}{4ab^3}.
\end{align*}
Putting together, the answer is
$$ \frac{(a^2 + b^2)\pi}{4(ab)^3}. $$
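A numerical check of the closed form (a midpoint-rule sketch with arbitrary sample values $a=2$, $b=3$; my own addition):

    from math import pi, cos, sin

    def check(a, b, n=200000):
        h = (pi / 2) / n
        s = h * sum(1 / (a**2 * cos(x)**2 + b**2 * sin(x)**2)**2
                    for x in (h * (k + 0.5) for k in range(n)))
        return s, (a**2 + b**2) * pi / (4 * (a * b)**3)

    print(check(2.0, 3.0))

Both numbers in the printed pair coincide to high accuracy. |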
H: Convergence of coordinates to zero
Consider a normed finite-dimensional vector space $V$ with some norm $|| .||$
Say a sequence of vectors in this vector space $v_m \rightarrow 0$ where $0$ is the zero vector.
Let $\{b_1,b_2,\ldots,b_n\}$ be a basis for $V$.
Therefore $v_m = \sum_{i=1}^{n} c_i ^m b_i$ for some scalars $c_i^m$.
I am trying to prove that for each $i$ the sequence $\{ c^m_i \}_{m=1}^{\infty}$
converges to the scalar $0$.
This seems like an 'obvious' theorem, but I am having difficulty proving it.
Obviously some triangle inequality has to be used here but I don't know how.
Using both the inequalities $ | ||x||- ||y|| |\leq ||x-y|| \leq ||x|| + ||y||$
on $||\sum_{i=1}^{n} c_i ^m b_i || < \epsilon$ does not give me an upper bound
on each of the $c^m_i$ sequences.
Any hints?
AI: You can do that by first showing that norms are equivalent in a finite dimensional space (see https://math.stackexchange.com/a/345661/10385) and then you have $M\in \Bbb R, \| \cdot\|_\infty \le M\|\cdot\|$ so if for a sequence of vectors $\left(v_n\right)_{n\in \Bbb N}$, $v_n\underset{n\to +\infty}{\longrightarrow}0$ for the norm $\|\cdot\|$, that is $\|v_n\|\underset{n\to +\infty}{\longrightarrow}0$, you also have $\|v_n\|_\infty\underset{n\to +\infty}{\longrightarrow}0$ and since $0 \le |c_i^n| \le \|v_n\|_\infty\underset{n\to +\infty}{\longrightarrow}0$, $|c_i^n|\underset{n\to +\infty}{\longrightarrow}0$ so $c_i^n\underset{n\to +\infty}{\longrightarrow}0$.
And no, it doesn't work in infinite dimensional spaces.
Take $\Bbb R\left[X\right]$ the space of polynomials.
Take $\forall P \in \Bbb R\left[X\right],\left\|P\right\|=\int\limits_0^1 \left|P(t)\right|\space dt$
And $\left(\left(1-X\right)^n\right)_{n\in \Bbb N}\in \Bbb R\left[X\right]^\Bbb N$
$\left(\left\|\left(1-X\right)^n\right\|\right)_{n\in \Bbb N}$ is decreasing and bounded below so it converges to $l$. We also have $0\le l$.
$\forall \varepsilon > 0,\left\|\left(1-X\right)^n\right\|=\int\limits_0^1 \left|\left(1-t\right)^n\right|\space dt=\int\limits_0^\varepsilon \left|\left(1-t\right)^n\right|\space dt+\int\limits_\varepsilon^1 \left|\left(1-t\right)^n\right|\space dt \le \varepsilon +\int\limits_\varepsilon^1 \left|\left(1-t\right)^n\right|\space dt\underset{n\to +\infty}{\longrightarrow}\varepsilon$
So $\forall \varepsilon >0, l \le \varepsilon$ which means $l=0$.
So $\left(1-X\right)^n\underset{n\to +\infty}{\longrightarrow}\Bbb 0$
But $\left(1-X\right)^n = \sum\limits_{k=0}^n {n \choose k} 1^{n-k}\left(-X\right)^k = 1 + \sum\limits_{k=1}^n {n \choose k} \left(-X\right)^k = 1 - X\sum\limits_{k=0}^{n-1} {n \choose k+1}\left(-X\right)^k$
So since we know that $\Bbb R\left[X\right]=\Bbb R_0\left[X\right] \oplus X\Bbb R\left[X\right]$, in the canonical basis $\left(X^n\right)_{n\in \Bbb N}$, the first coordinate will remain 1. |
H: Equation with Logarithm in Exponent
How do I solve the following exercise with a logarithm? I've forgotten the "trick" for doing so:
$x^{\log_{10} x} =10^4$
AI: To start, take $\log_{10}$ of both sides to get:
$$\log_{10}(x)\log_{10}(x) = \log_{10}(10^{4}) = 4$$
Then just solve $$(\log_{10}(x))^{2} = 4$$
$$\implies \log_{10}x = \pm 2$$
Exponentiating both sides, we get:
$$x = 10^{2} = 100$$ OR
$$x = 10^{-2} = \frac{1}{100}$$ |
H: Given two sets, finding two non trivial homomorphisms that are not isomorphisms
Is it possible to have two nontrivial homomorphisms that are not isomorphisms between two given groups?
I am especially interested in the additive group of integer remainders and the multiplicative group (arithmetic multiplication as the group operation) of complex roots of unity.
Thank You.
AI: If the groups are isomorphic to start with and you’re asking about isomorphisms between them, you are in essence just asking about the set (group, actually) of automorphisms of a single group. This automorphism group will ordinarily be nontrivial, as is the case with the group of $n$-th roots of unity for a fixed $n>2$. If you were asking about the (countable) group of all roots of unity (in the complex numbers, for example), then the automorphism group is not even countable. So the quantitative answer to your question is, Yes, Many, Many. |
H: Curve defined by a vector
https://i.stack.imgur.com/tD4Bn.png
I'm studying line integrals with a curve as a vector, but I couldn't understand the 'dr' part.
First of all: the curve isn't really a curve, it's like some points where a vector points from the origin. So this 'curve' does not really exist, right?
And then, when we take the integral with respect to $dr$, shouldn't we take it with respect to $\|dr\|$ (the magnitude of $dr$)? I really didn't understand, so I need some better and more intuitive explanation...
Thank you so much!
AI: Well, intuitively $dr$ is an infinitesimal tangent vector to the curve. If you imagine that the curve is the trajectory of some particle, then the derivative of $r$ is the velocity. If a small amount of time $dt$ passes, then you've moved through the vector $dr =r '(t)dt$.
I could stop there and pretend that this is all fine, but I won't. Elementary books insist on making people learn to think with those infinitesimals, and when one needs to move to the modern approach one gets really confused, with questions like "but hey, the other book told me it was fine to think like that! What's the problem?".
So I'll tell you the true idea. A curve in $n$-dimensional space is a map $\alpha : I \subset \mathbb{R} \to \mathbb{R}^n$ and its tangent vector at the point $\alpha(t)$ is the derivative $\alpha'(t)$. Now, $\alpha'(t)$ already defines the tangent direction of the curve, so you don't really need that $dt$ multiplying. Now, the line integral is a measure of how much some vector field $F:\mathbb{R}^n \to \mathbb{R}^n$ projects on the direction of the curve.
If you recall a little about the dot product you'll soon be convinced that if we fix some point $t \in I$ and consider $\alpha(t) \in \mathbb{R}^n$, then $F(\alpha(t))\cdot \alpha'(t)$ is a good measure of how $F$ projects onto $\alpha$. We are considering the "local" direction of $\alpha$, also it'll be zero if $F$ is orthogonal to $\alpha$ and it'll be maximum if $F$ is in the direction of $\alpha$. Since this is local, we can compute at each point and sum everything up, this will be a measure of how $F$ projects onto $\alpha$ in general (try constructing the partitions and so on, it'll be a good exercise). So we define the line integral of $F$ over $\alpha$ to be:
$$\int_\alpha F=\int_a^b F(\alpha(t))\cdot \alpha'(t)dt$$
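A tiny numerical illustration of this definition (my own example: the field $F(x,y)=(-y,x)$ integrated around the unit circle):

    import numpy as np

    t = np.linspace(0, 2 * np.pi, 100001)
    alpha = np.stack([np.cos(t), np.sin(t)], axis=1)       # alpha(t) on the unit circle
    d_alpha = np.stack([-np.sin(t), np.cos(t)], axis=1)    # alpha'(t)
    F = np.stack([-alpha[:, 1], alpha[:, 0]], axis=1)      # F(alpha(t))
    integrand = (F * d_alpha).sum(axis=1)                  # F(alpha(t)) . alpha'(t)
    print((integrand[:-1] * np.diff(t)).sum())             # ≈ 2*pi

Here $F$ is everywhere tangent to the circle, so the line integral is as large as it can be: the full circumference $2\pi$.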
Then, the people that love that thing of infinitesimals set $\alpha'(t)dt = dr$ and just write this as:
$$\int_a^b F\cdot dr$$
Personally I prefer the first notation because it makes clear that you have to calculate $F$ at $\alpha(t)$, dot it with $\alpha'(t)$ and integrate this thing that will simply be a function from $\mathbb{R}$ to $\mathbb{R}$. |
H: Dynamical limitation and minimization
I came across the following question: under what conditions (on the sequence of functions $f_n$, or perhaps the domain of minimization) does the following hold
$$
\lim_{n\to\infty}\min_{x_1,\ldots,x_n}f_n(x_1,\ldots,x_n) = \min_{\left\{x_i\right\}_i}\lim_{n\to\infty}f_n(x_1,\ldots,x_n)
$$
I can show that the above holds true when the minimization is carried over a finite number of variables independently of $n$, namely,
$$
\lim_{n\to\infty}\min_{x_1,\ldots,x_M}f_n(x_1,\ldots,x_M) = \min_{x_1,\ldots,x_M}\lim_{n\to\infty}f_n(x_1,\ldots,x_M)$$
But, I'm not sure how to approach the first question.
Thanks
AI: Consider functions $F_n:\mathbb{X}\subset\mathbb{R}^{\mathbb{N}}\to \mathbb{R}$ such that $F_n(x_1,\ldots, x_n,\ldots )=f_n(x_1,\ldots, x_n )$ for each $f_n:\Pi_n(\mathbb{X})\subset\mathbb{R}^n\to \mathbb{R}$ of the sequence $\{f_n\}_{n\in\mathbb{N}}$. Here $\Pi_n$ is the projection $(x_i)_{i\in\mathbb{N}}\mapsto (x_1,\ldots, x_n)$. Most of the time, when the minimum of a function $F:\mathbb{X}\subset\mathbb{R}^{\mathbb{N}}\to\mathbb{R}$ is achieved, the following conditions can be admitted.
Condition: suppose there is $x^*_n=(x_1^*,\ldots, x_n^*,\ldots)$ such that $$F_n(x^*_n)=\min_{x\in \mathbb{X} }F_n(x).$$
Condition: suppose there is a sequence $\mathbb{R}^{\mathbb{N}} \ni x_{nk}\to x_{n\infty}\in \mathbb{R}^{\mathbb{N}}$ such that $$\lim_{k\to \infty}F_n(x_{nk})=F_n(x_{n\infty})=\min_{x\in \mathbb{X} }F_n(x).$$
Under these conditions the problem can be stated as
$$
\lim_{k\to \infty}\lim_{n\to \infty}F_n(x_{nk}) \mathop{=}^{?}\lim_{n\to \infty}\lim_{k\to \infty}F_n(x_{nk})
$$
For example, for $\mathbb{X}\subset\ell^p$ with the usual topology of the norm $\|\,\cdot\,\|_p$, we could use the classical theorem
Theorem. Let $\{ F_t ; t\in T\}$ be a family of functions $F_t : Y \rightarrow \mathbb{C}$ depending on a parameter $t$; let $\mathcal{B}_Y$ be a base in $Y$ and $\mathcal{B}_{T}$ a base in $T$. If the family converges uniformly on $Y$ over the base $\mathcal{B}_{T}$ to a function $F : Y \rightarrow \mathbb{C}$ and the limit $\lim_{\mathcal{B}_{Y}} F_t(y)=A_t$ exists for each $t\in T$, then both repeated limits $\lim_{\mathcal{B}_{Y}}(\lim_{\mathcal{B}_{T}}F_t(y))$ and $\lim_{\mathcal{B}_{T}}(\lim_{\mathcal{B}_{Y}}F_t(y))$ exist and the equality holds
$$
\lim_{\mathcal{B}_{Y}}(\lim_{\mathcal{B}_{T}}F_t(y))=\lim_{\mathcal{B}_{T}}(\lim_{\mathcal{B}_{Y}}F_t(y)).
$$
This theorem can be found in Zorich's book (Mathematical Analysis II, p. 381). |
H: Automorphism group of $\mathbb{Z}_p\times \mathbb{Z}_p$
How to determine the automorphism group of $\mathbb{Z}_p\times \mathbb{Z}_p$ where $p$ is a prime? Or more specifically, how to determine the element of order $2$ in this group?
I got stuck here, since I only know that if two finite groups $H$ and $K$, where $(|H|,|K|)=1$, then $\text{Aut}(H\times K) = \text{Aut}(H) \times \text{Aut}(K)$. But $p$ and $p$ are not relatively prime.
Any hints or solutions are welcomed, thanks!
AI: As suggested by vadim123, the automorphism group is isomorphic to $\operatorname{GL}(2, p)$, the group of $2 \times 2$ invertible matrices over $\Bbb{Z}_p$.
If $p = 2$, the group $\operatorname{GL}(2, 2)$ is isomorphic to $S_{3}$, so you should be able to determine the involutions, which are
$$
\begin{bmatrix}0&1\\1&0\end{bmatrix}, \quad
\begin{bmatrix}1&1\\0&1\end{bmatrix}, \quad
\begin{bmatrix}1&0\\1&1\end{bmatrix}.
$$
If $p > 2$, you will have
$$
c = \begin{bmatrix}-1&0\\0&-1\end{bmatrix},
$$
a central element, and then the conjugacy class of
$$
b = \begin{bmatrix}1&0\\0&-1\end{bmatrix},
$$
which has size
$$
\frac{\lvert \operatorname{GL}(2, p) \rvert}{\lvert C_{\operatorname{GL}(2, p)}(b) \rvert}
=
\frac{(p^{2} - 1)(p^{2} - p)}{(p-1)^{2}}
=
(p+1)p.
$$
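A brute-force confirmation of these counts for small $p$ (a sketch, my own addition; it enumerates the invertible $2\times2$ matrices over $\Bbb{Z}_p$ and counts elements of order $2$):

    from itertools import product

    def involutions(p):
        count = 0
        for a, b, c, d in product(range(p), repeat=4):
            if (a * d - b * c) % p == 0:
                continue                      # singular matrix, skip
            sq = ((a*a + b*c) % p, (a*b + b*d) % p,
                  (a*c + c*d) % p, (b*c + d*d) % p)
            if sq == (1, 0, 0, 1) and (a, b, c, d) != (1, 0, 0, 1):
                count += 1
        return count

    for p in [2, 3, 5]:
        print(p, involutions(p))   # 3 for p = 2; 1 + p(p+1) = 13, 31 for p = 3, 5

This matches the three matrices listed for $p=2$ and, for odd $p$, the central involution $c$ together with the $p(p+1)$ conjugates of $b$. |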
H: Maximization of minimum difference
Suppose we have a function of the form: $(x_1 - x_2) + (x_3 - x_4) + (x_5 - x_6)$ and we have maximized this summation using linprog (using some constraints which are not important for this matter). This provides us with a value for the different x variables.
The problem I now want to solve is the maximization of the minimum $(x_i - x_j)$, with the constraint that the solution $(x_1, x_2, ...)$ substituted into the original summation should have a higher or the same value (constraint).
This would distribute the difference between the variables $x_1,x_2; x_3,x_4, ...$
How can this problem be solved in Matlab (maximization of the minima)?
AI: In order to maximize a minimum using linear programming techniques, you can introduce an additional variable $\delta$ and add constraints of the form
$$x_i-x_j\ge \delta$$
The variable $\delta$ must then be maximized. For your problem to be well-defined there should of course be additional constraints on the variables $x_i$, probably like the ones you used in the first problem.
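A sketch of the resulting formulation (written with Python's scipy, whose linprog is analogous to Matlab's; the value V of the first objective and the box bounds are placeholder assumptions standing in for your real constraints):

    import numpy as np
    from scipy.optimize import linprog

    V = 3.0                                 # optimum of the first solve (assumed)
    pairs = [(0, 1), (2, 3), (4, 5)]        # the (i, j) index pairs

    c = np.zeros(7); c[6] = -1.0            # variables x1..x6, delta; maximize delta

    A_ub, b_ub = [], []
    for i, j in pairs:                      # delta - (x_i - x_j) <= 0
        row = np.zeros(7); row[i] = -1.0; row[j] = 1.0; row[6] = 1.0
        A_ub.append(row); b_ub.append(0.0)

    row = np.zeros(7)                       # keep the original objective >= V
    for i, j in pairs:
        row[i] = -1.0; row[j] = 1.0
    A_ub.append(row); b_ub.append(-V)

    bounds = [(0, 10)] * 6 + [(None, None)] # placeholder box constraints
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    print(res.x, -res.fun)                  # solution and the maximized minimum

In Matlab the same arrays plug into linprog(f, A, b, [], [], lb, ub) with the identical objective vector. |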
H: Classifying map
Let $\xi=(E,p,B)$ be a principal $G$-bundle and $\eta=(P,\pi,Q)$ a real vector bundle such that $\operatorname{rank}(\eta)=n$. We can consider a classifying space $BG$. What is the classifying map $f:X \rightarrow BG$? Why the name "classifying"? How can one build a classifying map for $\eta$?
AI: Given any principal $G$-bundle $E' \to X'$, and a map $X \to X'$, the pullback $E' \times_{X'} X \to X$ is a map that's easily seen to be a principal $G$-bundle. The idea of the classifying space is that it's a space $BG$, with a principal $G$-bundle $EG \to BG$, such that any principal $G$-bundle over any space $X$ is a pullback of this bundle. Thus, the principal $G$-bundles over $X$ (up to isomorphism) are in bijective correspondence with the maps $X \to BG$ (up to homotopy). That is, maps to $BG$ 'classify' principal $G$-bundles.
In many cases, once you prove that such a space exists (which isn't at all obvious), you don't pay much attention to the explicit construction of classifying maps. Indeed, part of the point of the thing is to replace a somewhat opaque theory -- the study of vector bundles and principal bundles -- with a theory which has more machinery developed for it -- the study of homotopy classes of maps. This is just my opinion, though -- your mileage may vary.
One way to construct $BG$ is to let $EG$ be any contractible space with a free $G$-action, and $BG = EG/G$. Clearly $EG \to BG$ is a principal $G$-bundle, and the contractibility of and freeness of the $G$-action on $EG$ implies that there are no obstructions to defining a $G$-map from a principal $G$-bundle to $EG$, which descends to a map $X \to BG$. For more details, see e.g. these notes.
Now, this answers your questions about principal $G$-bundles. The secret to the last question is that real vector bundles really are principal bundles as well -- they're principal $O(n)$-bundles! To turn a vector bundle $E \to X$ into a principal $O(n)$-bundle, replace it by the frame bundle $\mathrm{Fr}(E) \to X$, which is a bundle whose fiber over a point $x \in X$ is the set of orthonormal bases for $E_x$. (You have to pick a metric on the vector bundle first, but this always exists if $X$ is, for example, paracompact.) Conversely, given a principal $O(n)$-bundle $P \to X$, you can replace it by the bundle whose fibers are $P_x \times \mathbb{R}^n$ mod its diagonal $O(n)$-action, which is a real vector bundle. These mappings are mutually inverse, so when we talk about real vector bundles, we might as well be talking about principal $O(n)$-bundles.
So real rank $n$ vector bundles are classified by $BO(n)$, which there actually is a nice geometric construction of. Let $EO(n)$ be the space of sequences of $n$ orthonormal vectors in an infinite-dimensional inner product space. This clearly has a free $O(n)$-action, and you can check that it's contractible. The quotient $EO(n)/O(n)$ is the space of $n$-dimensional planes (we've forgotten their bases) in this infinite-dimensional space, also called the Grassmannian $\mathrm{Gr}_n(\mathbb{R}^\infty)$. In particular, one-dimensional vector bundles are classified by $\mathrm{Gr}_1(\mathbb{R}^\infty) = \mathbb{R}P^\infty$.
Similar remarks apply to other sorts of vector bundles. For example, complex rank $n$ vector bundles are the same as principal $U(n)$-bundles, and these are classified by the Grassmannian of copies of $\mathbb{C}^n$ in $\mathbb{C}^\infty$. |
H: How to prove that sequence spaces $l^{p}, l^{\infty}$ and function space $C[a. b]$ are of infinite dimension
I am studying about the sequence space $l^{p}, l^{\infty}$ and function space $C[a. b]$. It is mentioned in the book that all of these spaces are of infinite dimension. I want to prove that these spaces are of infinite dimension. I am not finding a way to proceed.
Any help and suggestions would be helpful to me.
Thanks
AI: You just need to construct infinitely many linearly independent vectors. For $l_p$ and $l_{\infty}$ take the vectors $e_n=(0,\ldots,0,1,0,\ldots)$ with the $1$ in the $n$-th position. For $C[a,b]$ the polynomials $x^n$ will do. |
H: Tell me the ideal selling price to get back a specified number
If I am buying something at xxx,
what is the price to sell it at if I want a profit of $2.50 after subtracting the 0.63% broker fee from the selling price?
I need to make this into an Excel formula, but this is the first step... if you know the Excel formula that would be a great help as well!
Thanks!
AI: $$\begin{align*}
\mathsf{profit}&=(\mathsf{selling\;price}) - (\mathsf{broker\;fee})-(\mathsf{price\;you\;bought\;it\;for})\\
\end{align*}$$
So you want to solve
$$\begin{align*}
2.50&=(\mathsf{selling\;price}) - (0.63\%\mathsf{\;of\;selling\;price})-(\mathsf{price\;you\;bought\;it\;for})\\\\
2.50&=(\mathsf{selling\;price})-(0.0063)(\mathsf{selling\;price})-(\mathsf{price\;you\;bought\;it\;for})\\\\
2.50&=(0.9937)(\mathsf{selling\;price})-(\mathsf{price\;you\;bought\;it\;for})\\\\
(0.9937)(\mathsf{selling\;price})&=2.50+(\mathsf{price\;you\;bought\;it\;for})\\\\
(\mathsf{selling\;price})&=\frac{2.50+(\mathsf{price\;you\;bought\;it\;for})}{0.9937}\\\\
\end{align*}$$
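As for the Excel side (assuming, as a hypothetical layout, that the price you bought it for sits in cell B2), the last line above becomes the formula =(2.5+B2)/0.9937. |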
H: How can this property of definite integrals be true?
In this question, people are saying that the definite integral of $f(x)$ from $0$ to $a$ is equal to the integral of $f(a-x)$ from $0$ to $a$. How can that be true? Simple examples don't work.
AI: Put $z=a-x$ then $dz=-dx$. When $x=0, z=a$ and when $x=a,z=0$. Now $$\int_0^af(a-x)dx=\int_a^0f(z)(-dz)=\int_0^af(z)dz$$
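For a concrete example (my own, since the question says simple examples don't seem to work): with $f(x)=x^2$ and $a=1$, $\int_0^1 x^2\,dx=\frac13$ and likewise $\int_0^1 (1-x)^2\,dx=\left[-\frac{(1-x)^3}{3}\right]_0^1=\frac13$. |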
H: Linear Approximation
I have an exercise, giving this question.
Find the linear approximation $Y$ to $f(x)$ near $x=a$.
$$
f(x) = x + x^4,\quad a=0
$$
I can see in my result list that it says $Y=x$; however, after multiple tries I can only convince myself that it gives $Y=0$.
Can anyone explain why it gives $Y=x$ and not $Y=0$?
AI: For any differentiable function $f$, the best linear approximation to $f(x)$ at the point $x=a$ is the line defined by
$$y=f'(a)(x-a)+f(a).$$
(Intuitively, this makes sense: $f'(a)$ is like "the slope of $f$ at $a$", and the line with slope $f'(a)$ that goes through the point $(a,f(a))$ is precisely the one specified above).
Can you find the derivative of $f(x)=x+x^4$ with respect to $x$? |
H: What is the relation between vectors in physics and algebra?
Vector math is something I find very interesting. However, we have never been told the link between vectors in physics (usually represented as arrows, e.g. a force vector) and in algebra (e.g. represented like a column matrix). It was really never explained well in classes.
Here are the things I can't wrap my head around:
How can a vector (starting from the algebraic definition) be represented as an arrow? Is it correct to assume that a vector (in a 2-dimensional space) $v = [1,1]$ could be represented as an arrow from the origin $[0,0]$ to the point $[1,1]$?
If the above assumption is correct, what does it mean in the physics representation to normalize a vector?
If I have a vector $[1,1]$, would the vector $[-1,1]$ be orthogonal to that first vector? (Because if you draw the arrows they are perpendicular).
How can one translate an object along a vector? Is that simply scalar addition?
These questions probably sound really odd, but they come from a lack of decent explanation in both physics and algebra.
AI: Physicists tend to emphasize a geometric interpretations of vectors, which mathematicians need not do. This is because one of the main uses of vectors in physics is to talk about the geometry of some system, while vectors in general can be used in more abstract senses (and perhaps with no geometric interpretation at all).
Concerning your first bullet point, your interpretation is correct: this is a model of a vector space as an algebra of directions. Each arrow has a starting point and an ending point, which determines both length and orientation. These are the properties of a "direction".
Normalizing vectors is a way to lose the length information while maintaining directionality; it makes vectors unit length.
Yes, you're correct in saying those vectors would be orthogonal. This is part of the geometric interpretation.
I'm not sure what you refer to by "translating an object along a vector". It could be taking the location of the object and simply moving every point in the direction of the vector. This is better described by vector addition. |
H: Two algorithms A and B solve the same problem
Two algorithms $A$ and $B$ solve the same problem. $A$ solves a problem of size $n$ with $n^2\,2^n$ operations. $B$ solves it with $n!$ operations. As $n$ grows, which algorithm uses fewer operations?
AI: I believe Stirling is your friend.
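Filling in the hint (my own elaboration): Stirling's approximation gives $n!\sim\sqrt{2\pi n}\,(n/e)^n$, and $(n/e)^n$ eventually dwarfs $n^2 2^n$, so for large $n$ algorithm $A$ uses fewer operations. A quick table shows the crossover happens at $n=8$:

    from math import factorial

    for n in range(5, 11):
        print(n, n**2 * 2**n, factorial(n))

From $n=8$ onward, $n!$ is the larger count. |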
H: Prove $\omega^{\omega_1}=\omega_1$ and $2^{\omega_1}=\omega_1$
Prove that $\omega^{\omega_1}=\omega_1$ and $2^{\omega_1}=\omega_1$.
I also found an exercise asking to compute the ordinal number $\omega_1^{\omega}$, but I do not even understand what I am supposed to do, any help?
AI: Ordinal exponentiation is not cardinal exponentiation.
Recall the definition of $\alpha^\beta$:
$\alpha^0=1$.
$\alpha^{\beta+1}=\alpha^\beta\cdot\alpha$.
$\alpha^\beta=\sup\{\alpha^\gamma\mid\gamma<\beta\}$ for a limit ordinal $\beta$.
So now we note the following:
If $\alpha,\beta$ are countable ordinals then $\alpha^\beta$ is countable. We can prove this by induction.
If $\beta>0$ then $\alpha<\alpha^\beta$.
From this it is somewhat easy to calculate $2^{\omega_1}=\omega^{\omega_1}$, since for every $\beta<\omega_1$ both $2^\beta$ and $\omega^\beta$ are countable ordinals, and these form a strictly increasing sequence of length $\omega_1$ whose supremum is therefore $\omega_1$.
As for the additional exercise for $\omega_1^\omega$, I'm not clear about what it means "to compute the ordinal number", because $\omega_1^\omega$ is an ordinal number.
If one interprets "the ordinal number" as the Cantor normal form of $\omega_1^\omega$, then one can note that:
$$\omega_1^\omega=\left(\omega^{\omega_1}\right)^\omega=\omega^{\omega_1\cdot\omega}$$
I don't know if that's simpler, though. |
H: How do you respond to "I was always bad at math"?
Here in the U.S., it is my experience that over 75% of adults I meet socially will volunteer that phrase or a variation upon learning that I am a mathematician. I find this frustrating, since almost nobody would brag about being bad at history or English or really any other subject. Normally I change the subject, or perhaps say they must have had a bad classroom experience, but that feels unsatisfying and turns the conversation in an ugly direction.
Surely some of the bright minds here on Math.SE have found a good response. Note that I'm not looking to alienate a new acquaintance, so let's keep it positive.
AI: Math is hard for everyone. It takes long hours of practice to become competent. I think that's what people want to hear; and fortunately, it's true.
This reply has the advantage of steering the conversation in a somewhat more positive direction---in my experience, usually towards whatever it is that the person does like to do and has practiced long hours at. This is usually more interesting than having a conversation about poorly taught math courses. |
H: How should I go about this approaching-infinite limit problem?
I'm doing some exercises on limits approaching infinity; most are simple polynomials where only the highest degree term will matter in the end, but for this one I couldn't find a solution (not a correct one, at least).
$$\lim_{x\to-\infty}\frac{x^2+x+1}{(x+1)^3-x^3}$$
How should I proceed to get the correct answer? ($\frac{1}{3}$)
Also, while simplifying some exercises, this question came to my mind: is it correct to say that $\sqrt{a+b}$ is equal to $\sqrt{a}+\sqrt{b}$?
Ps.: Without using l'Hôpital's rule.
AI: Start by expanding:
$$\lim_{x\to-\infty}\frac{x^2+x+1}{(x+1)^3-x^3}= \lim_{x\to-\infty}\frac{x^2+x+1}{x^{3} + 3x^{2} + 3x + 1 -x^3}$$
Simplify to obtain:
$$\lim_{x\to-\infty}\frac{x^2+x+1}{3x^{2}+3x +1}$$
Now divide every term by $x^{2}$:
$$\lim_{x\to-\infty}\frac{1+\frac{1}{x}+\frac{1}{x^{2}}}{3+\frac{3}{x}+\frac{3}{x^{2}}}$$
Now take the limit to obtain $\frac{1}{3}$. |
H: Prove that if $a \in \mathbb{N}$ then $a+1 \in \mathbb{N}$
I need to prove the following:
"Let $ \mathbb{N}$, if $a \in \mathbb{N} \to a+1 \in \mathbb{N}$"
thanks in advance!!
AI: There is actually something to prove here. $a+1$ is defined in the Peano axioms as $a+S(0)=S(a+0)=S(a)$. Hence, if $a\in\mathbb{N}$, $a+1=S(a)\in \mathbb{N}$.
We can go further; $a+2=a+S(1)=S(a+1)=S(S(a))$. Hence, if $a\in \mathbb{N}$, $a+2=S(S(a))\in \mathbb{N}$. In fact, for any specific $k$, we can equally prove that if $a\in \mathbb{N}$ then $a+k\in \mathbb{N}$. |
H: Linear Regression - Proof Sum Adds to Zero
In linear regression, why is $\sum(X_{i} - \mu_{X}) = 0$? I understand that $\sum(Y_{i} - \hat{Y}_{i}) = \sum e_{i} = 0$ is true, but why is this other fact true?
AI: Just break up the sum into two sums, and substitute the definition of
$\mu_X$. More explicitly, $$\sum_{i=1}^n (X_i - \mu_X) = \sum_{i=1}^n X_i - \sum_{i=1}^n \mu_X
= \left[\sum_{i=1}^n X_i\right] - n\mu_X
= \left[\sum_{i=1}^n X_i\right] - n\times \left[\frac{1}{n}\sum_{i=1}^n X_i\right]
= 0.$$ |
H: Understanding the differential $dx$ when doing $u$-substitution
I just finished taking my first year of calculus in college and I passed with an A. I don't think, however, that I ever really understood the entire $\frac{dy}{dx}$ notation (so I just focused on using $u'$), and now that I'm going to be starting calculus 2 and learning newer integration techniques, I feel that understanding the differential in an integral is important.
Take this problem for an example:
$$ \int 2x (x^2+4)^{100}dx $$
So solving this...
$$u = x^2 + 4 \implies du = 2x dx \longleftarrow \text{why $dx$ here?}$$
And so now I'd have: $\displaystyle \int (u)^{100}du$ which is $\displaystyle \frac{u^{101}}{101} + C$ and then I'd just substitute back in my $u$.
I'm so confused by all of this. I know how to do it from practice, but I don't understand what is really happening here. What happens to the $du$ between rewriting the integral and taking the anti-derivative? Why is writing the $dx$ so important? How should I be viewing this in when seeing an integral?
AI: Okay, so this is a slightly tricksy problem, because at first people think "Oh, it's fine, I just multiply through by this thing and it's fine" but then they think "Hang on, we've never actually defined what we meant by this... it's just some shorthand trickery" - and then if they eventually do differential geometry they think "Aha! So it did make sense all along!"
The main message I want you to take away is that things like $\mathrm d x$ are actually well defined things called differential forms which you don't really need to understand in any detail at all to get how they work.
The way they end up working in integration and changes of variable is roughly the following: they come together to make a volume form which just tells you how much volume a small range of your parameters corresponds to. (I say "come together" because if you are doing many integrations like in $\int \int f(x,y) \;\mathrm d x \mathrm d y$ then you get a bunch of the $\mathrm d [...]$ things all together.) More precisely, remember how you can roughly define integration as a limit of a sum like
$$\int_a^b f(x) \mathrm d x \equiv \lim_{N\to\infty}\sum_{n=1}^N f(x_n) \left(x_n-x_{n-1}\right)$$
where $x_0=a,x_N=b$ and the other points $x_i$ are chosen in between, such that all gaps $\delta_n = x_n-x_{n-1} \to 0$ (say, uniformly) as $N\to \infty$.
Here, $\delta_n = x_n-x_{n-1}$ is providing some measure of how important the bit of space between $x_{n-1},x_n$ is in computing the integral. The $\mathrm d x$ is what keeps track of that information.
Suppose you then try $u=x^2$ or $x=\sqrt u$. Then in general
$$\int_{a^2}^{b^2} f(\sqrt u) \mathrm d u \equiv \lim_{N\to\infty}\sum_{n=1}^N f(\sqrt{u_n}) \left(u_n-u_{n-1}\right) = \lim_{N\to\infty}\sum_{n=1}^N f(x_n) \left(x_n^2-x_{n-1}^2\right)\neq \int f \mathrm d x$$
because the weight is different!
But notice that $x_n^2-x_{n-1}^2 = (x_n-x_{n-1})(x_n+x_{n-1}) \approx (x_n-x_{n-1})(2x_n)$ in the limit of fine spacing, so
$$\int_{a^2}^{b^2} f(\sqrt u) \frac{\mathrm d u}{2x} = \int_a^b f(x) \; \mathrm d x$$
We're really analyzing the difference between the volumes of the little patches of space when we play with the differentials. The trick is to realize that in general, just like here, $\mathrm d u = u'(x) \mathrm dx$. In higher dimensional integrals, you will discover that the generalization to e.g.
$$\int f(x,y) \;\mathrm d x\mathrm d y = \int f(u,v) \; J \; \mathrm d u\mathrm d v$$
where $u=u(x,y),v=v(x,y)$ involves a quantity $J$ called the Jacobian (determinant) which uses all the possible derivatives of $u,v$ with respect to $x,y$ in a particular way.
The notation
$$\frac{\mathrm d u}{\mathrm d x} = \lim_\text{fine spacing}\frac{\delta u}{\delta x} = \lim_{x_n-x_{n-1}\to 0}\frac{u_n-u_{n-1}}{x_n-x_{n-1}} = u'(x)$$
is now seen to be just a suggestive notation which works for the case of only one variable changing. It's used because it makes it clear how the volume form should be replaced.
When there are many variables, this notation breaks down because the factors are all mixed up together and people write partial derivatives, which you'll see soon if you haven't already, instead. It turns out that it makes sense to use a generalization of the
$$\mathrm d u = u'(x) \mathrm d x$$
law called the chain rule in which, for $u=u(x,y)$ for example
$$\mathrm d u = u_x \mathrm d x + u_y \mathrm d y$$
where $u_x(x,y)$ is the derivative of $u$ with respect to $x$ when we just think of $y$ as a constant.
You'll have to wait until differential geometry courses to see how to use this to get the Jacobian factor; it turns out that rather than just writing the forms together, you should technically define something called a wedge product such that $a\wedge b = -b\wedge a$ for one-forms like $\mathrm d x$; then you get
$$\mathrm d u \wedge \mathrm d v = (u_x \mathrm d x + u_y \mathrm d y)\wedge(v_x \mathrm d x + v_y \mathrm d y) = (u_x v_y-u_y v_x) \mathrm d x \wedge \mathrm d y$$
so that the Jacobian is (one over) $(u_x v_y-u_y v_x)=\det \pmatrix{u_x & u_y \\ v_x & v_y}$.
You can get this result directly from thinking about little patches of volume, however, so you'll see this far earlier than any differential form stuff. I just thought that, since you were curious, you should have had the full story mentioned to you along the way. |
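To make the limit-of-sums picture concrete, here is a small Python sketch (the setup is mine) comparing midpoint Riemann sums for $\int_0^1 2x(x^2+4)^{100}\,dx$ and for the substituted integral $\int_4^5 u^{100}\,du$ against the exact value $(5^{101}-4^{101})/101$:

    N = 100_000                       # number of midpoint subintervals
    dx = 1.0 / N                      # both intervals have length 1, so du = dx
    lhs = sum(2*x*(x*x + 4)**100 * dx
              for x in (dx*(i + 0.5) for i in range(N)))
    rhs = sum(u**100 * dx
              for u in (4 + dx*(i + 0.5) for i in range(N)))
    exact = (5**101 - 4**101) / 101
    print(lhs, rhs, exact)            # all three agree to several digits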
H: Group equals union of three subgroups
Suppose $G$ is finite and $G=H\cup K\cup L$ for proper subgroups $H,K,L$. Show that $|G:H|=|G:K|=|G:L|=2$.
What I did: so if some of $H,K,L$ is contained in another, then we have $G$ being a union of two proper subgroups, which is impossible due to another result. So none of $H,K,L$ is contained is each other. If some element $h$ belongs to only $H$ and $k$ belongs to only $K$, then its product $hk$ cannot be in $H$ or $K$, so must belong to only $L$.
AI: At least one of $H, K$ and $L$ must have index $2$ in $G$ by counting. Let's say $|G:H| = 2$. Then, $H$ is normal in $G$. Then, $|K:K\cap H| = 2 = |L:L\cap H|$.
Now, for $G = H \cup (K - K\cap H) \cup (L - L\cap H)$, this union must be disjoint by counting and we must have $|G:K| = 2 = |G:L|$. |
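For a concrete instance (a small Python check, with elements of the Klein four-group encoded as pairs mod $2$), $\Bbb Z_2\times\Bbb Z_2$ really is the union of its three subgroups, each of index $2$:

    G = {(0, 0), (1, 0), (0, 1), (1, 1)}   # Z/2 x Z/2
    H = {(0, 0), (1, 0)}
    K = {(0, 0), (0, 1)}
    L = {(0, 0), (1, 1)}
    print(H | K | L == G)                  # True
    print(len(G)//len(H), len(G)//len(K), len(G)//len(L))   # 2 2 2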
H: Absolute convergence of a real series
I need to show that the following series:
$$\sum_{n=1}^\infty (-1)^n\dfrac{x^2+n}{n^2}$$
is uniformly convergent on any bounded interval, but not absolutely convergent for any real $x$. My first thought was to use the Weierstrass M-Test; however, this is pointless, since if the above series "passed" the test it would have to be absolutely convergent.
In assessing uniform convergence, I considered partial sums. Is it possible to show that the partial sums can get arbitrarily close together, (in a Cauchy sense) which would then imply the series converges uniformly?
AI: Note that in absolute value we have (since everything except the factor $(-1)^n$ is positive):
$$\sum_{n=1}^N\frac{x^2+n}{n^2}$$
But this is
$$\sum_{n=1}^N\frac{x^2}{n^2}+\sum_{n=1}^N\frac{1}{n}$$
The first term converges, but the second term doesn't. Note this tells us the $M$-test is not applicable.
On the other hand, your sum is $${f_N}\left( x \right) = \sum\limits_{n = 1}^N {{{( - 1)}^n}} \frac{{{x^2} + n}}{{{n^2}}} = \sum\limits_{n = 1}^N {{{( - 1)}^n}} \frac{{{x^2}}}{{{n^2}}} + \sum\limits_{n= 1}^N {{{( - 1)}^n}} \frac{1}{n}$$
This converges pointwise to
$$\mathop {\lim }\limits_{N \to \infty } {f_N}\left( x \right) = \sum\limits_{n = 1}^\infty {{{( - 1)}^n}} \frac{{{x^2}}}{{{n^2}}} + \sum\limits_{n = 1}^\infty {{{( - 1)}^n}} \frac{1}{n} = - \frac{{{x^2}{\pi ^2}}}{{12}} + \log 2$$
Now, look at the difference; we have $$\displaylines{
\left| {f\left( x \right) - {f_N}\left( x \right)} \right| = \left| {\sum\limits_{n = N + 1}^\infty {{{( - 1)}^n}} \frac{{{x^2}}}{{{n^2}}} + \sum\limits_{n = N + 1}^\infty {{{( - 1)}^n}} \frac{1}{n}} \right| \cr
\leqslant \left| {\sum\limits_{n = N + 1}^\infty {{{( - 1)}^n}} \frac{{{x^2}}}{{{n^2}}}} \right| + \left| {\sum\limits_{n = N + 1}^\infty {{{( - 1)}^n}} \frac{1}{n}} \right| \cr
\leqslant {x^2}\left| {\sum\limits_{n = N + 1}^\infty {\frac{1}{{{n^2}}}} } \right| + \left| {\sum\limits_{n = N + 1}^\infty {{{( - 1)}^n}} \frac{1}{n}} \right| \cr
\leqslant M\varepsilon + \varepsilon = \left( {1 + M} \right)\varepsilon \cr} $$
where $M$ is a bound for $x^2$ on any bounded interval, for sufficiently large $N$. |
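A numeric sketch of the pointwise limit (function names are mine; convergence of the alternating $1/n$ part is slow, so many terms are used):

    import math

    def f_N(x, N):
        return sum((-1)**n * (x*x + n) / n**2 for n in range(1, N + 1))

    limit = lambda x: -x*x * math.pi**2 / 12 + math.log(2)
    for x in (0.0, 1.0, 2.5):
        print(x, f_N(x, 10**5), limit(x))   # agree to ~5 digits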
H: Is the set theory (ZF) a structure?
According to the definition, generally speaking, a structure $\langle A;R;F;C\rangle$ is such that $A$ is a non-empty set, $R$ is the set of relations, $F$ is the set of functions, and $C$ is a set of constants. For example $\langle\mathbb{R};\,;+,\cdot,{}^{-1};0,1\rangle$ (with no relations) would be the field of the real numbers. Now, in the construction of the set theory (ZF), we need to make a structure. But then $A$ would be the set of sets, which is impossible... What am I missing?
AI: This is a very delicate point in set theory.
First of all we need to point out that the concept of "set" is not an absolute one. Different models of set theory, will think that different mathematical objects are indeed sets.
This is an external point of view. We consider the universes of set theory as sets, from the outside. The Russell paradox, and indeed all the classical paradoxes of set theory, state that the universe cannot be a set from its own point of view. This is where internal and external points of view are important.
Furthermore, if in a universe of set theory which satisfies the axioms of $\sf ZF$ we can find a set $M$ and a relation $E$ such that $\langle M,E\rangle$ is a model of $\sf ZF$, then by the completeness theorem we have proved that $\sf ZF$ is consistent - in that particular universe. But from the incompleteness theorem we know that a theory like $\sf ZF$ cannot prove its own consistency, therefore we can never even prove that such a set structure exists.
If $V$ is the universe of all sets that mathematics has, and suppose that it satisfies all the axioms of $\sf ZF$, then we know that (1) it is not a set itself; (2) we cannot prove that there is a set which is a structure satisfying the axioms of $\sf ZF$; (3) if there is such a set, then we can talk about objects which are in that structure, which that particular structure thinks of as sets, and we can contrast them to actual sets.
This is close to the Skolem paradox, by the way.
In any case, as I said, this is a very delicate point and one has to study quite a lot of logic and set theory in order to understand it completely and become comfortable with it. |
H: Closure and limit of a sequence
Let $E$ be a subset of a metric space $(S,d)$. I'm trying to show that an element is in $\overline{E}$ if and only if it is the limit of some sequence of points in $E$.
Suppose there is a sequence $(p_n) \subseteq E$ with $p_n \to p$. Then because $E \subseteq \overline{E}$, I know that $(p_n) \subseteq \overline{E}$. Since $\overline{E}$ is closed, $\overline{E}$ contains the limit of every convergent sequence of points in $\overline{E}$. So $p \in \overline{E}.$
This is how I approached the converse: I assumed $p \in \overline{E}$ and that there does not exist a sequence $(p_n) \subseteq E$ such that $p_n \to p$. Then I concluded that there exists $\varepsilon > 0$ such that $d(p, x) \geq \varepsilon \ \forall x \in E$.
Then there exists an open ball centered at $p$ that does not contain a point of $E$ (I wasn't sure how to make this statement rigorous). So $p \notin \overline{E}$.
Could I get some feedback on my proof?
AI: You haven’t actually justified the conclusion that there is an $\epsilon>0$ such that $d(p,x)\ge\epsilon$ for each $x\in E$. Justifying it properly requires essentially the same work as a direct proof that if $p\in\operatorname{cl}E$, then there is a sequence $\langle p_n:n\in\Bbb Z^+\rangle$ in $E$ converging to $p$, so you might as well give the direct proof.
There are two cases.
If $p\in E$, there’s a trivial sequence in $E$ that converges to $p$; what is it?
If $p\in(\operatorname{cl}E)\setminus E$, you have to work just a little bit. For each $n\in\Bbb Z^+$ there’s a point $p_n\in E$ such that $d(p,p_n)<\frac1n$; why? What does this get you? |
H: Prove that $e^a e^b = e^{a+b}$
I've read the argument in Rudin, but I think I need a little clarification
\begin{align}
e^a e^b &= \sum_{k=0}^{\infty} \frac{a^k}{k!} \sum_{m=0}^{\infty} \frac{b^m}{m!}\\ &= \sum_{n=0}^{\infty} \frac{ n!}{n!} \sum_{k=0}^{n} \frac{a^k}{k!} \frac{b^{n-k}}{(n-k)!} \\
&= \sum_{n=0}^{\infty} \frac{(a+b)^n}{n!}\\
&=e^{a+b}\\
\end{align}
I'd like to understand how we get from the first line to the second. Of course $m =n-k$, but how do we ensure that we haven't missed any addends in the limit changing process?
Changing limits: $m=0 \rightarrow n-k=0 \rightarrow k=n$ and $m=\infty \rightarrow n-k=\infty \rightarrow k=0$, assuming $n \to \infty$.
\begin{align}
\frac{b^m}{m!} \sum_{k=0}^{\infty} \frac{a^k}{k!}
&= \sum_{k=0}^{\infty} \frac{a^k}{k!} \frac{b^{m}}{m!} \\
&=\sum_{k=0}^{?} \frac{a^k}{k!} \frac{b^{n-k}}{(n-k)!} \\
\end{align}
how to justify that "?"$=n$ ?
AI: Suppose we have two series
$$p(x)=\sum_{k=1}^\infty a_kx^k$$
$$q(x)=\sum_{k=1}^\infty b_kx^k$$
Then multiplying and collecting terms gives $$(p\cdot q)(x)=a_0b_0+(a_1b_0+b_1a_0)x+(a_2b_0+a_1b_1+a_0b_2)x^2+\dots \\=\sum_{k=0}^\infty \left(\sum_{j+i=k}a_jb_i\right)x^k\\=\sum_{k=0}^\infty \left(\sum_{j=0}^k a_jb_{k-j}\right)x^k$$
This informal approach actually works whenever one of the series converges and the other converges absolutely (this is Mertens' theorem). Try to see what happens when we choose the series to be those of $e^a,e^b$.
ADD The case when the series are not power series, but just plain series gives
$$\sum\limits_{k = 0}^\infty {\left( {\sum\limits_{j = 0}^k {{a_j}} {b_{k - j}}} \right)} = \sum\limits_{k = 0}^\infty {{a_k}} \cdot \sum\limits_{k = 0}^\infty {{b_k}} $$
This gives when $$\eqalign{
& {a_k} = \frac{{{a^k}}}{{k!}} \cr
& {b_k} = \frac{{{b^k}}}{{k!}} \cr} $$ that
$$\displaylines{
{e^a}{e^b} = \sum\limits_{k = 0}^\infty {\left( {\sum\limits_{j = 0}^k {\frac{{{a^j}}}{{j!}}} \frac{{{b^{k - j}}}}{{\left( {k - j} \right)!}}} \right)} \cr
= \sum\limits_{k = 0}^\infty {\frac{1}{{k!}}\left( {\sum\limits_{j = 0}^k {\frac{{k!}}{{j!\left( {k - j} \right)!}}{a^j}} {b^{k - j}}} \right)} \cr
= \sum\limits_{k = 0}^\infty {\frac{1}{{k!}}{{\left( {a + b} \right)}^k}} \cr
= {e^{a + b}} \cr} $$ |
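One can watch the Cauchy product at work numerically (a rough Python sketch with arbitrary values of $a,b$):

    import math

    def exp_product(a, b, N=30):
        # sum over k of sum_{j=0..k} a^j/j! * b^(k-j)/(k-j)!, truncated at N terms
        return sum(a**j / math.factorial(j) * b**(k - j) / math.factorial(k - j)
                   for k in range(N) for j in range(k + 1))

    print(exp_product(1.2, -0.7), math.exp(1.2 - 0.7))   # agree to ~15 digits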
H: Question regarding the proof of a topological claim
The lecturer in the Topology course I'm taking defined the following:
Given a topological space $X$
we say that:
$X$ is weakly locally compact if for all $x\in X$
there exists a compact nbhd.
$X$ is strongly locally compact if every nbhd of $x$
contains a compact nbhd of $x$
.
We then made the following claim: A weakly locally compact (wlc) Hausdorff space is strongly locally compact.
Briefly the proof went as follows:
Given $U$ a nbhd of $x\in X$, since $X$ is wlc there is a compact nbhd of $x$, $C\subseteq X$.
Since $U,C$ are both nbhds of $x$, $U\cap C$ is also a nbhd of $x$, and thus there is an open set $V\subseteq X$ such that $x\in V\subseteq U\cap C$.
Since $C$ is a compact Hausdorff space (Hausdorff being hereditary) we know $C$ is regular.
Since regularity is hereditary and $V\subseteq C$, we know $V$ is also regular, and thus there is an open set $W\subseteq V$ such that $x\in W\subseteq\overline{W}\subseteq V\subseteq C$.
Since $C$ is compact and $\overline{W}$ is closed in $C$, we know that $\overline{W}$ is also compact.
Finally $x\in W\subseteq\overline{W}\subseteq V\subseteq U$, and thus $\overline{W}$ is a compact nbhd of $x$ contained in $U$.
The lecturer then noted that it's important to notice the proof hangs on the fact that the closure of $W$ in $V$ and in $C$ is the same. That is since we can only deduce compactness of $\overline{W}$ because it is closed in $C$. However, we used the regularity of $V$ in order to find $W$, and thus the closure of $W$ is relative to the topology on $V$ and not on $C$. He also noted that in fact, from the way we carried out the construction, the closure of $W$ is the same in all the spaces in which it is contained, that is $\overline{W}_{X}=\overline{W}_{C}=\overline{W}_{U}=\overline{W}_{V}$ (the subscript marking the space relative to which the closure is taken).
My question is why is it in fact true that $\overline{W}_{X}=\overline{W}_{C}=\overline{W}_{U}=\overline{W}_{V}$ ?
AI: There is a useful formula: If $A$ is a subset of $X$ and $W$ is another subset, then $\text{cl}_A(W\cap A)=\text{cl}_X(W\cap A)\cap A$. So let $A$ be the $V$ in your example.
If we take the closure of $W$ with respect to $C$, then $\text{cl}_C(W)=\text{cl}_X(W)$ since $C$ is closed in $X$.
On the other hand $\text{cl}_V(W)=\text{cl}_X(W)\cap V$ which is just $\text{cl}_X(W)$ since $W$ was chosen to have its closure contained in $V$.
Finally, $\text{cl}_U(W)=\text{cl}_X(W)\cap U=\text{cl}_X(W)$ since $\text{cl}_X(W)\subset U$.
I should also mention that the above formula simplifies one more time if $A$ is open. In this case we have $\text{cl}_A(W\cap A)=\text{cl}_X(W\cap A)\cap A=\text{cl}_X(W)\cap A$. Using this the last two equalities follow automatically since $U$ and $V$ were open. |
H: Proving that commensurability is transitive
We have that two groups $\Gamma$ and $\Gamma'$ are commensurable if there exist finite index subgroups $G \leq \Gamma$ and $G' \leq \Gamma'$ such that $G \cong G'$. We denote this $\Gamma \approx \Gamma'$.
I am trying to prove that this gives a transitive relation, but I just don't see why it needs to be. Clearly if $\Gamma \approx \Gamma'$ and $\Gamma' \approx \Gamma''$ we have finite index subgroups $G \leq \Gamma$, $G' \leq \Gamma'$, $H' \leq \Gamma'$ and $H'' \leq \Gamma''$ with $G \cong G'$ and $H' \cong H''$ but this doesn't mean that $G \cong H''$...
I feel that I must be missing something obvious, but I can't see why we have to have finite index subgroups of $\Gamma$ and $\Gamma''$ which are isomorphic to each other.
AI: An idea:
$$G\cong G'\implies \,\forall\,H'\le \Gamma'\;\exists\, H\le \Gamma\;\;s.t.\;\;G\cap H\cong G'\cap H'$$
because isomorphic groups have isomorphic subgroups. But then in fact
$$G'\cap H'\cong K'\cap H''\;,\;\;\text{for some finite index}\;\;K'\le\Gamma'\;\text{(same argument as above)} $$
and now remember that $\,[G:H]\,,\,[G:K]<\infty\implies [G:H\cap K]<\infty$ |
H: Does half-life mean something can never completely decay?
Caffeine has a half-life of approximately six hours. I understand this to mean that every six hours, the amount of caffeine in the body is half of what it was six hours prior. Does that mean that caffeine never completely leaves the body? It just keeps reducing to half after a fixed time ad infinitum.
I guess a broader statement of my question is: When calculating the half life of a thing, is it true that it will never reach zero? And how is this used/reconciled in a more practical setting like when calculating the amount of caffeine in the body?
AI: There is always a whole number of molecules, and at some point in time there will be zero molecules left. With time, it becomes increasingly likely that all the caffeine molecules have completely disappeared.
However, in chemistry the "concentration" is the mean, or expected, amount of molecules. This is not a whole number. This is much like the way the expected roll of a die is $3\frac{1}{2}$ even though there is no such number on the die - the average of your rolls over many rolls will be $3\frac{1}{2}$.
This expected number gets smaller and smaller but never reaches zero.
Perhaps this simulation works on your computer.
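In case the linked simulation is unavailable, here is a rough sketch of one (all constants are arbitrary): each of a finite population of molecules survives a half-life with probability $1/2$, and the population does reach exactly zero in finite time, typically after roughly $\log_2$ of the initial count of half-lives:

    import random

    molecules = 10**6                 # arbitrary starting count
    half_lives = 0
    while molecules > 0:
        molecules = sum(1 for _ in range(molecules) if random.random() < 0.5)
        half_lives += 1
    print("all gone after", half_lives, "half-lives")   # typically ~20-25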
H: Implicit differentiation: $x^2y - 2x^3 - y^3 + 1 = 0$
Hi, I'm stuck on one problem in my study guide. It's the only one with an $=0$ at the end of it.
Differentiate:
$x^2y - 2x^3 - y^3 + 1 = 0$
AI: Implicit differentiation:
$$2xy\,dx+x^2\,dy-6x^2\,dx-3y^2\,dy=0\implies (x^2-3y^2)dy=(6x^2-2xy)dx\implies$$
$$y'=\frac{dy}{dx}=\frac{6x^2-2xy}{x^2-3y^2}\;,\;\;x\neq\pm\sqrt 3\, y$$ |
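For anyone who wants to check the computation symbolically, a brief sympy sketch (assuming sympy is available):

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')(x)
    dydx = sp.solve(sp.diff(x**2*y - 2*x**3 - y**3 + 1, x),
                    sp.Derivative(y, x))[0]
    print(sp.simplify(dydx))   # equivalent to (6x^2 - 2xy)/(x^2 - 3y^2)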
H: Telescoping sum of powers
$$
\begin{array}{rclll}
n^3-(n-1)^3 &= &3n^2 &-3n &+1\\
(n-1)^3-(n-2)^3 &= &3(n-1)^2 &-3(n-1) &+1\\
(n-2)^3-(n-3)^3 &= &3(n-2)^2 &-3(n-2) &+1\\
\vdots &=& &\vdots & \\
3^3-2^3 &= &3(3^2) &-3(3) &+1\\
2^3-1^3 &= &3(2^2) &-3(2) &+1\\
1^3-0^3 &= &3(1^2) &-3(1) &+1\\
\underline{\hphantom{(n-2)^3-(n-3)^3}} & &\underline{\hphantom{3(n-2)^2}} &\underline{\hphantom{-3(n-2)}} &\underline{\hphantom{+1}}\\
n^3-0^3 &= & 3f_2(n) &-3f_1(n) &+n
\end{array}
$$
Can somebody explain intuitively how these identities are arranged? I didn't understand why $$(n-1)^3 -(n-2)^3$$ became equal to $$3(n-1)^2 - 3(n-1) + 1$$
How was this transformation done?
Thanks!
AI: $(x-1)^3=x^3-3x^2+3x-1$, i.e. $x^3-(x-1)^3=3x^2-3x+1$; now set $x=n-1$.
H: Piecewise continuous differentiable
If I have a piecewise continuously differentiable function, how do I see that on each open interval where the derivative is continuous, there is a continuous extension to the larger closed interval?
AI: This is true when $f$ satisfies the condition that the one-sided limits exist, and false in other cases.
Let $f:[a,b]\to\mathbb{R}$ be a piecewise continuously differentiable function. Then there is a partition $P=\{x_i\}_{i=0}^n$ of $[a,b]$ (i.e. $a=x_0<x_1<\ldots<x_n=b$) such that each $I_i=(x_{i-1},x_i)$ is a maximal interval where $f$ is continuously differentiable.
We want to extend $g_i=f|_{I_i}$ to the whole closed interval $\overline{I_i}=[x_{i-1},x_i]$. Define $\tilde g_i:\overline{I_i}\to\mathbb{R}$ by $\tilde g_i|_{I_i}=g_i$ and
$$\tilde g_i(x_{i-1})=\lim_{x\to x_{i-1}^+}g_i(x),\quad \tilde g_i(x_i)=\lim_{x\to x_i^-}g_i(x),$$
where the limits are taken from inside $I_i$ and exist since the one-sided limits exist.
It is easy to see that $\tilde g_i$ is a continuous extension of $g_i$. Note that you can't always extend $f$ continuously to the whole of $[a,b]$, since $\lim_{x\to x_i^-}g_i(x)$ is not necessarily the same value as $\lim_{x\to x_i^+}g_{i+1}(x)$; in that case we have $\tilde g_i(x_{i})\neq \tilde g_{i+1}(x_i)$.
If $f$ is not bounded this is false: note that $f(x)=\tan{x}$ where $\tan$ is defined, and $f(x)=0$ where $\tan$ is not defined, is a piecewise differentiable function, but if $P$ is a partition that maximizes the intervals where $f$ is continuously differentiable, then $\lim_{x\to x_i^-}g_i(x)$ and $\lim_{x\to x_i^+}g_{i+1}(x)$ don't exist ($g_i$ defined as before).
If $f$ is discontinuous at some point of a maximal differentiable partition but bounded, this is also false: consider the function $f(x)=\sin{\frac{1}{x}}$ defined on $[-1,1]\setminus\{0\}$ with $f(0)=0$; then $x_0=-1$, $x_1=0$, $x_2=1$, and $\lim_{x\to 0^-}g_1(x)$ and $\lim_{x\to 0^+}g_2(x)$ don't exist ($g_i$ defined as before). In other cases it is easy to give examples.
H: Evaluating a summation of inverse squares over odd indices
$$ \sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}$$
I want to evaluate this sum when $n$ takes only odd values.
AI: Note that
$$\sum_{n \text{ is even}} \dfrac1{n^2} = \sum_{k=1}^{\infty} \dfrac1{(2k)^2} = \dfrac14 \sum_{k=1}^{\infty} \dfrac1{k^2} = \dfrac{\zeta(2)}4$$
Also,
$$\sum_{k=1}^{\infty} \dfrac1{k^2} = \sum_{k \text{ is odd}} \dfrac1{k^2} + \sum_{k \text{ is even}} \dfrac1{k^2}$$
Hence,
$$\sum_{k \text{ is odd}} \dfrac1{k^2} = \dfrac34 \zeta(2)$$ |
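So the odd-index sum is $\frac34\zeta(2)=\frac{\pi^2}8$, which a one-line numeric check confirms (a Python sketch):

    import math

    partial = sum(1 / k**2 for k in range(1, 10**6, 2))   # odd k only
    print(partial, math.pi**2 / 8)                        # both ~1.233700...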
H: Does the 80-20 Rule apply to the Buffon process?
Whether the needle crosses a line depends greatly on the angle the needle makes with the line, 90 degrees being of course the most favorable for a line-crossing. Does the 80-20 Rule apply to these angles? That is, since 20% of 90 degrees is 18 degrees, are about 80% of the line-crossings (for a large number of needle-tosses) accounted for by about 20% instances, in which the angle of the needle relative to the line was between 72 and 90 degrees? Has anyone done a statistical study or simulation of this?
AI: The angle $\Theta$ is uniformly distributed betwen $0$ and $\pi/2$ radians.
It actually took me a while to figure out that the question is this: If the number $c$ is so chosen that $\Pr(\Theta>c\mid\text{crossing})=0.8$ then is it true that $\Pr(\Theta >c)=0.2$?
I.e. do $80\%$ of crossings occur for only the $20\%$ of values of $\Theta$ most favorable to crossings? Short answer: no.
The prior density of $\Theta$ is
$$
f_\Theta(\theta) = 2/\pi\quad\text{for }\theta\text{ between }0\text{ and }\pi/2
$$
and $=0$ for $\theta$ not in that interval.
The likelihood function is
$$
L(\theta\mid\text{crossing}) = \Pr(\text{crossing}\mid\Theta=\theta) = \sin\theta.
$$
Bayes tells us to multiply the prior by the likelihood function and then normalize to get the posterior probability density $f_{\Theta\mid\text{crossing}} (\theta)$.
Thus the posterior is
$$
f_{\Theta\mid\text{crossing}} (\theta) = 1\cdot\sin\theta,
$$
since $\int_0^{\pi/2}\sin\theta\,d\theta=1$.
Then we have, for example, $\Pr(\Theta>\pi/4)=1/2$ and
$$
\Pr(\Theta>\pi/4\mid\text{crossing}) = \int_{\pi/4}^{\pi/2}\sin\theta\,d\theta = \cos\frac\pi4=\frac{\sqrt{2}}{2} \approx 0.707\ldots.
$$
So the most crossing-productive half of the population produces about $70.7\%$ of all crossings.
Plug in numbers to get other results. |
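A Monte Carlo sketch of this (needle length equal to the line spacing, both set to $1$; variable names are mine) reproduces the $\approx 70.7\%$ figure:

    import math, random

    crossings = above = 0
    for _ in range(10**6):
        theta = random.uniform(0, math.pi / 2)
        d = random.uniform(0, 0.5)           # distance from needle center to nearest line
        if d <= 0.5 * math.sin(theta):       # crossing condition for unit needle/spacing
            crossings += 1
            if theta > math.pi / 4:
                above += 1
    print(above / crossings, math.sqrt(2) / 2)   # both ~0.707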
H: An inflection point where the second derivative doesn't exist?
A point $x=c$ is an inflection point if the function is continuous at that point and the concavity of the graph changes at that point.
And a list of possible inflection points will be those points where the second derivative is zero or doesn't exist.
But if continuity is required in order for a point to be an inflection point, how can we consider points where the second derivative doesn't exist as inflection points?
Also, an inflection point is like a critical point except it isn't an extremum, correct?
So why do we consider points where the second derivative doesn't exist as inflection points?
thanks.
AI: Take for example $$
f(x) = \begin{cases}
-x^2 &\text{if $x < 0$} \\
x^2 &\text{if $x \geq 0$.}
\end{cases}
$$
For $x<0$ you have $f''(x) = -2$ while for $x > 0$ you have $f''(x) = 2$. $f$ is continuous at $0$, since $\lim_{x\to0^-} f(x) = \lim_{x\to0^+} f(x) = 0$, but since the second-order left-derivative $-2$ is different from the second-order right-derivative $2$ at zero, the second-order derivative doesn't exist there.
For your second question, maybe things are clearer if stated like this
If the second derivative is greater than zero or less than zero at some point $x$, that point cannot be an inflection point
This is quite reasonable - if the second derivative exists and is positive (negative) at some $x$, then the first derivative is continuous at $x$ and strictly increasing (decreasing) around $x$. In both cases, $x$ cannot be an inflection point, since at such a point the first derivative needs to have a local maximum or minimum.
But if the second derivative doesn't exist, then no such reasoning is possible, i.e. for such points you don't know anything about the possible behaviour of the first derivative. |
H: Does the Law of Factor Sparsity apply to factorization?
Does the 80-20 Rule (also known as the Law of Factor Sparsity, the Law of the Vital Few, and the Pareto Principle) apply to factorization? That is, if x is a large positive integer, then are about 20% of the primes not exceeding x sufficient for factorizing about 80% of the positive integers not exceeding x? Has anyone done a statistical study of this?
AI: Yes. Hint: The set of primes less than 20% of $x$ constitute 20% of the set of primes less than $x$ for large $x$ (use the prime number theorem; using some explicit bounds you can show that the set constitutes at least 19% of the set of primes less than $x$ for all sufficiently large $x$). If any integer $n\leq x$ has a prime factor $p$ larger than 20% of $x$, then that integer $n$ must be one of $p,2p,3p,4p,5p$, so the number of such integers is less than 6 times the number of primes larger than 20% of $x$, and in particular, less than $6\pi(x)$. Thus since $\pi(x)/x\to0$ as $x\to\infty$, the set of primes less than 20% of $x$ are sufficient for factorizing almost all integers $\leq x$. |
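One can test this empirically with a largest-prime-factor sieve (a Python sketch; the cutoff and bound are arbitrary). Already at $N=10^5$, well over $80\%$ of the integers up to $N$ factor completely over the primes below $N/5$:

    N = 100_000
    largest = list(range(N + 1))      # largest[n] ends up as the largest prime factor of n
    for p in range(2, N + 1):
        if largest[p] == p:           # p is prime: untouched by smaller primes
            for m in range(p, N + 1, p):
                largest[m] = p
    smooth = sum(1 for n in range(2, N + 1) if largest[n] <= N // 5)
    print(smooth / (N - 1))           # ~0.86 for N = 10^5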
H: A simple yet hard task for (theoretically) Poisson distribution
Sorry if I don't use the words properly, I haven't learnt these things in English, only some of the words.
Anyway, I'm practicing for one of my exams and sadly this task seemed more challenging for me than it should be. Some kind of explanation would help a lot!
10 meters of cloth has 6 holes in it.
a) What kind of distribution does the number of holes per meter follow? (I think it must be Poisson distribution)
b) What's the probability that there's more than 10 holes in 5 meters of cloth?
AI: The Poisson model is not unreasonable. So if $X$ is the number of holes in a $1$ meter piece of cloth, then it is reasonable to assume that $X$ has a more or less Poisson distribution with parameter $\lambda =\dfrac{6}{10}$.
Let $Y$ be the number of holes in a random $5$ meters of cloth. Then it is reasonable to assume that $Y$ has more or less Poisson distribution with parameter $\lambda=6\cdot\frac{5}{10}=3$.
The probability there are more than $10$ holes is $1$ minus the probability there are $10$ or fewer holes. The probability there are $10$ or fewer holes is
$$\sum_{i=0}^{10} e^{-3}\frac{3^i}{i!}.$$
The calculation is a little unpleasant. The last few terms are pretty small, and are not really terribly relevant. One should realize that the Poisson model is only a model. It fits reality modestly well, but one should not expect a high accuracy fit. |
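For the record, the numeric value is tiny; a short Python computation (a sketch) gives it directly:

    import math

    lam = 3.0
    p_at_most_10 = sum(math.exp(-lam) * lam**i / math.factorial(i)
                       for i in range(11))
    print(1 - p_at_most_10)   # ~0.000293: more than 10 holes is very unlikely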
H: Is pi lying on the ground, and on TV? - and on the sun?
Consider the leaves from a bunch of trees in a terraced plaza in the Autumn. It may well happen that the tiles of the terrace are squares whose length easily exceeds the length of the stem of the leaves (assuming leaves all of the same kind). Cannot this be regarded as a Buffon process? That is, if you took a photograph of it and counted the line crossing of the leaf-stems with the lines determined by the tiles, you would get Buffon's calculation for pi. Furthermore, you could do this even in the absence of the tiles, simply by imposing an appropriate tiling over the photograph later, right? And if we can impose a tiling later, what about imposing it on a TV program, with respect to some recurring object? Would that, properly done, not constitute a Buffon process as well?
edit: It might apply to sunspots as well, because sunspots, during solar min, typically have low latitudes, i.e., they occur near the equator of the sun, which can be considered as approximately a flat surface, and each sunspot approximates a highly non-smooth Jordan curve, which therefore almost always is going to have a unique greatest diameter, which can be considered as the needle. Sunspots vary greatly in size, so we could restrict our attention to ones of a given size (most likely the smaller ones, so as to maximize the number of sunspots considered), and then consider the circles of latitude, starting at the equator, separated by a distance a little bit greater than the needle length (perhaps, such that the needle length is 5/6 that of the circle separations, in imitation of what Mario Lazzarini did in 1901.) Of course, since we’re considering solar min, we might want to consider all sunspots over, say, a 22-year period, so as to get a lot of data points (trial tosses of the needle). So, how good an approximation to pi would be get? – and is there any correlation between how good an approximation we get and certain solar events such as flares?
AI: Let's do an experiment! Here's a deck with leaves on it (picture of a leaf-covered wooden deck):
Counting the leaves, I see about 100 leaves. Now lets assume all the stems are the same size, even for the larger and smaller leaves. Also let's try to visualize stems on some leaves which don't seem to have stems. Interpolating, I see about 30 crossings.
Now the Buffon needle problem says that the probability of a crossing is
$$P(\mbox{crossing})=\frac{2L_{stem}}{L_{deck}\pi}$$
Now the rough ratio I'm seeing from the image is $L_{stem}/L_{deck}\approx 1/2$, which gives:
$$P(\mbox{crossing})\approx \frac{1}{\pi}$$
or with around 100, leaves, we'd expect to see $100/\pi\approx 32$ crossings. Not bad, since we counted 30, which gives our experimental approximation of $\pi$ as 3.33, and since I probably counted pretty poorly, I probably have a standard error of around three leaves, which puts $\pi$ within our experimental error. By the way, it can be shown that we used an unbiased estimator for $\pi$, whose mean-square error scales something like $1/n$, where $n$ is the number of leaves, so we should get a somewhat accurate guess of $\pi\approx 3.14$ past around 1000 leaves. I leave that experiment to you. |
H: How to show $\mathbb{Z}[w]/(2,w) \simeq \mathbb{Z}_2$?
Let $\mathbb{Z}[w]=\mathbb{Z}[\frac{1+\sqrt{-15}}{2}]$ be the quadratic integers.
I want to show that $\mathbb{Z}[w]/(2,w) \simeq \mathbb{Z}_2$. It seems very clear, but how can I show the isomorphism rigorously? I found something similar using the 3rd isomorphism theorem (here), but I think there should be a much easier way.
I tried $\mathbb{Z}[w]/(2,w) \simeq (\mathbb{Z}[X]/(f(X)))/(2+(f(X)),X+(f(X)))$, but I'm not sure it is then isomorphic to $\mathbb{Z}[X]/(2,X)\simeq \mathbb{Z}_2$.
I also tried $\mathbb{Z}[X] \to \mathbb{Z}[w] \to \mathbb{Z}[w]/(2,w)$, but I'm not sure what the kernel is. Obviously $2$ and $w$ go to $0$, but isn't it possible that $1 \mapsto 0$?
AI: If it's not obvious that $\mathbb{Z}[X]/(2,X) \cong \mathbb{F}_2$, then quotient out by one element at a time:
$$ \mathbb{Z}[X]/(2,X) = \left( \mathbb{Z}[X] / (2) \right) / (X) $$
or
$$ \mathbb{Z}[X]/(2,X) = \left( \mathbb{Z}[X] / (X) \right) / (2) $$
and maybe it will be easier. |
H: How can I prove this closed form for $\sum_{n=1}^\infty\frac{(4n)!}{\Gamma\left(\frac23+n\right)\,\Gamma\left(\frac43+n\right)\,n!^2\,(-256)^n}$
How can I prove the following conjectured identity?
$$\mathcal{S}=\sum_{n=1}^\infty\frac{(4\,n)!}{\Gamma\left(\frac23+n\right)\,\Gamma\left(\frac43+n\right)\,n!^2\,(-256)^n}\stackrel?=\frac{\sqrt3}{2\,\pi}\left(2\sqrt{\frac8{\sqrt\alpha}-\alpha}-2\sqrt\alpha-3\right),$$
where
$$\alpha=2\sqrt[3]{1+\sqrt2}-\frac2{\sqrt[3]{1+\sqrt2}}.$$
The conjecture is equivalent to saying that $\pi\,\mathcal{S}$ is the root of the polynomial
$$256 x^8-6912 x^6-814752 x^4-13364784 x^2+531441,$$
belonging to the interval $-1<x<0$.
The summand came as a solution to the recurrence relation
$$\begin{cases}a(1)=-\frac{81\sqrt3}{512\,\pi}\\\\a(n+1)=-\frac{9\,(2n+1)(4n+1)(4 n+3)}{32\,(n+1)(3n+2)(3n+4)}a(n)\end{cases}.$$
The conjectured closed form was found using computer based on results of numerical summation. The approximate numeric result is $\mathcal{S}=-0.06339748327393640606333225108136874...$
AI: According to Mathematica, the sum is
$$ \frac{3}{\Gamma(\frac13)\Gamma(\frac23)}\left( -1 + {}_3F_2\left(\frac14,\frac12,\frac34; \frac23,\frac43; -1\right) \right). $$
This form is actually quite straightforward if you write out $(4n)!$ as
$$ 4^{4n}n!(1/4)_n (1/2)_n (3/4)_n $$
using rising powers ("Pochhammer symbols") and then use the definition of a hypergeometric function.
The hypergeometric function there can be handled with equation 25 here: http://mathworld.wolfram.com/HypergeometricFunction.html:
$$ {}_3F_2\left(\frac14,\frac12,\frac34; \frac23,\frac43; y\right)=\frac{1}{1-x^k},$$
where $k=3$, $0\leq x\leq (1+k)^{-1/k}$ and
$$ y = \left(\frac{x(1-x^k)}{f_k}\right)^k, \qquad f_k = \frac{k}{(1+k)^{(1+1/k)}}. $$
Now setting $y=-1$, we get the polynomial equation in $x$
$$ \frac{256}{27} x^3 \left(1-x^3\right)^3 = -1,$$
which has two real roots, neither of them in the necessary interval $[0,(1+k)^{-1/k}]=[0,4^{-1/3}]$, since one is $-0.43\ldots$ and the other $1.124\ldots$. However, one of those roots, $x_1=-0.436250\ldots$, just happens to give the (numerically at least) right answer, so never mind that.
Also, note that
$$ \Gamma(1/3)\Gamma(2/3) = \frac{2\pi}{\sqrt{3}}. $$
The polynomial equation above is in terms of $x^3$, so we can simplify that too a little,
so the answer is that the sum equals
$$ \frac{3^{3/2}}{2\pi} \left(-1+(1-z_1)^{-1}\right), $$
where $z_1$ is a root of the polynomial equation
$$ 256z(1-z)^3+27=0, \qquad z_1=-0.0830249175076244\ldots $$
(The other real root is $\approx 1.42$.)
How did you find the conjectured closed form? |
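For what it's worth, the conjecture checks out numerically; a Python sketch using the recurrence for $a(n)$ from the question and the conjectured closed form (variable names are mine):

    import math

    a, S = -81 * math.sqrt(3) / (512 * math.pi), 0.0   # a(1)
    for n in range(1, 10**5):
        S += a
        a *= -9 * (2*n + 1) * (4*n + 1) * (4*n + 3) / (32 * (n + 1) * (3*n + 2) * (3*n + 4))

    r = (1 + math.sqrt(2)) ** (1 / 3)
    alpha = 2 * r - 2 / r
    closed = math.sqrt(3) / (2 * math.pi) * (
        2 * math.sqrt(8 / math.sqrt(alpha) - alpha) - 2 * math.sqrt(alpha) - 3)
    print(S, closed)   # both ~ -0.063397483274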
H: Proof of matrix inequality involving trace and max-operator
Let $C\in\mathbb R^{N\times N}$ be a real positive-semidefinite matrix, $I\in\mathbb R^{N\times N}$ the identity matrix and $W,V\in\mathbb R^{N\times N}$ two real-valued random matrices. Is it true that
$$tr\left[(C-I)WCV^T\right]\leq \max\left\{tr\left[(C-I)WCW^T\right],tr\left[(C-I)VCV^T\right]\right\}$$
holds for all $W, V$? My simulations suggest the inequality is true, but I have no idea how to prove it.
Any comments and ideas are greatly appreciated! Thanks!
AI: The inequality does not hold in general.
Let
$$
C=1/2\,I=\begin{bmatrix}1/2&0\\0&1/2\end{bmatrix},\ \ W=\begin{bmatrix}0&0\\1&0\end{bmatrix},\ \ V=W^T=\begin{bmatrix}0&1\\0&0\end{bmatrix}.
$$
Then
$$
\mbox{tr}((C-I)WCV^T)=-\frac14\,\mbox{tr}(WV^T)=-\frac14\mbox{tr}(W^2)=0.
$$
And
$$
\mbox{tr}((C-I)WCW^T)=-\frac14\,\mbox{tr}(WW^T)=-\frac14,
$$$$
\mbox{tr}((C-I)VCV^T)=-\frac14\,\mbox{tr}(VV^T)=-\frac14,
$$ |
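For completeness, the counterexample is easy to verify numerically (a numpy sketch):

    import numpy as np

    I = np.eye(2)
    C = 0.5 * I
    W = np.array([[0., 0.], [1., 0.]])
    V = W.T
    lhs = np.trace((C - I) @ W @ C @ V.T)
    rhs = max(np.trace((C - I) @ W @ C @ W.T),
              np.trace((C - I) @ V @ C @ V.T))
    print(lhs, rhs)   # 0.0 and -0.25: the claimed inequality fails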
H: A basic proof on Morse Function
The question is to show: if $f_t$ is a homotopic family of functions on $R^k$, and $f_0$ is Morse in some neighborhood of a compact set $K$, then so is every $f_t$ for sufficiently small $t$.
I gather the previous exercise leads to the proof, but I couldn't connect it to this question (here $H$ is the Hessian of $f$):
$$\det(H)^2 + \sum_{i=1}^k (\partial f/ \partial x_i)^2 > 0$$
And also, on a compact manifold:
$$\det(H)^2 + \sum_{i=1}^k (\partial f/ \partial x_i)^2 >\epsilon$$
Thanks for your help!
AI: G&P have you prove that the first inequality is equivalent to $f$ Morse. If the quantity is $>\epsilon$ for $f_0$, then by smoothness, for small enough $t$ it'll be $>\epsilon/2$ for $f_t$. |
H: What happens to the infinite monkey theorem when there are an infinite number of keys on the typewriter?
What happens to the infinite monkey theorem when there are an infinite number of keys on the typewriter? That is, what is the probability that a finite string of keys, like the works of Shakespeare, gets typed? Thanks for any insights.
AI: You need to pick a probability distribution over the keys. If the number of keys is at most countable and each key occurs with positive probability, then the infinite monkey theorem continues to hold and the proof is the same (any finite string only uses finitely many keys so you can ignore the rest). If one of the keys occurs with probability zero and it also occurs in your string, then the string occurs with probability zero. |
H: What set theory axioms do I need to believe in uncountable ordinals?
Math people:
The title is the question. I am convinced that uncountable sets exist, thanks to Cantor's diagonal proof. It is not intuitively clear to me that uncountable ordinals or cardinals should exist. What axioms of set theory are needed to accept their existence? I think I have read here that the Axiom of Choice is not necessary. I am reluctant to accept as "true" any theorems that rely on the Axiom of Choice.
EDIT: This question may be a duplicate: I just found No uncountable ordinals without the axiom of choice? . I apologize if this is so.
AI: As [1], [2], [3], and [4] may tell you, the axiom of choice is not needed to define $\omega_1$ (in particular [1] and [4]).
Two key axioms are the power set axiom and the replacement axiom schema. In [2] and [3] you can see why the axiom of power set is needed. It is consistent with $\sf ZF$ without the power set axiom that there are only countable ordinals. In particular the set of hereditarily countable sets satisfies that. In fact, it shows that without the power set axiom we cannot prove the existence of any uncountable sets.
But the replacement schema is also essential. We use it in order to map a certain subset of $\mathcal P(\omega\times\omega)$ onto ordinals, and we need the replacement schema to show that the result is a set. Indeed if we consider $\sf ZF$ without the replacement schema, then $V_{\omega+\omega}$ is a model of these axioms, and there are only countable ordinals in that model. It should be remarked that there may still be well-ordered sets of length $\omega_1$ in $V_{\omega+\omega}$, but the von Neumann ordinal, a transitive set ordered by $\in$, does not exist there. That is to say, it is consistent that the axiom of choice holds, and every set can be well-ordered, but the von Neumann ordinals don't exist beyond $\omega+\omega$. In such a model we separate between ordinals as we think about them today (von Neumann's definition), and as equivalence classes of order types (which are proper classes, of course).
Of course one uses the axiom of union all the time, as well as extensionality. It should be remarked that regularity is not necessary, because we can always limit ourselves to the part of the well-founded sets, where it holds, at least if we have replacement.
The Links:
How do we know an $ \aleph_1 $ exists at all?
Uncountable ordinals without power set axiom
Does the definition of countable ordinals require the power set axiom?
No uncountable ordinals without the axiom of choice? |
H: Unique largest normal pi-subgroup
Let $\pi$ be a set of prime numbers. A finite group is said to be a $\pi$-group if every prime that divides its order lies in $\pi$. If $G$ is finite, show that $G$ has a unique largest normal $\pi$-subgroup (which may be trivial and may be all of $G$).
What I did: suppose $|G|=p_1^{k_1}p_2^{k_2}\ldots p_n^{k_n}$. If $\pi$ contains a prime outside of the $p_i$'s, then the only $\pi$-subgroup is the trivial group. Then consider when $\pi$ contains a subset of the $p_i$'s, say $p_1,p_2,\ldots,p_m$. I don't know what I can say about a subgroup whose order is divisible by $p_1,p_2,\ldots,p_m$.
AI: There exists a maximal normal $\pi$-subgroup of $G$ since $G$ is finite. If $H$ and $K$ are two such subgroups what can you say about $HK$? |
H: Recurrence Relations with single roots
I have the following recurrence: $a_{n+3}=3a_{n+2}-3a_{n+1}+a_n$
with initial values $a_1 = 1, a_2 = 4, a_3 = 9$
I have found the characteristic equation to be $x^3-3x^2+3x-1=0$ and its only root to be $1$ (with multiplicity $3$).
My text book is not helpful in how I should go about solving this when I have a single root and don't have the $a_0$ value given.
Any tips on how I could move forward to solve this?
AI: If the characteristic equation has distinct roots $x_1,x_2,\ldots,x_k$ (so $x_i \neq x_j$ for all $i \neq j$), where the root $x_j$ has multiplicity $m_j$, then the solution is of the form
$$a_n = \sum_{j=1}^k P_j(n)x_j^n$$
where $P_{j}(n)$ is a polynomial in $n$ of degree $m_j-1$. Hence, in your case,
$$a_n = (c_0 + c_1 n + c_2 n^2) 1^n, \text{i.e., } a_n = c_0 + c_1 n + c_2 n^2$$ With the initial conditions, we get that
$$a_n = n^2$$ |
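It is easy to confirm that $a_n=n^2$ reproduces the recurrence (a tiny Python check):

    a = [1, 4, 9]                                # a_1, a_2, a_3
    for _ in range(7):
        a.append(3*a[-1] - 3*a[-2] + a[-3])
    print(a)                                     # 1, 4, 9, 16, 25, ..., 100
    print(all(a[n] == (n + 1)**2 for n in range(len(a))))   # True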
H: Good calculus exercises/problems?
I can't enroll in a university this year, so I'm studying calculus at home, but the only exercises about calculus that I find are the easy ones. Do you know a great page where I can find not only calculus exercises, but problems as well?
I want to find about:
hard limits,
derivatives and integrals,
and some problems
AI: Barring any more information, I'd suggest you acquire Michael Spivak's Calculus, and begin working your way through the material. There are plenty of exercises and examples, but you'll also deepen your conceptual understanding at the same time, which will help ensure the material "sticks with you" over the long-run, which I presume is a goal of yours, as you seem committed to enrolling at a university, perhaps even to pursuing math, and/or math-related fields.
Spivak will prepare you well. |
H: Sum of Logarithm Arguments
This is a very simple question I suspect but I just cannot seem to nail it...
I have values for $X,Y,Z $, where $X =\log (x)$, $Y = \log (y)$ and $Z = \log (z)$ and I need to calculate $x + y + z$, well actually $\log(x + y + z)$ would suffice. Is there a clever way of doing this other than simply doing $e^X+e^Y+e^Z$?
It's for an algorithm where I am trying to avoid underflow - $x=e^X,y=e^Y,z=e^Z$ are likely to be very small.
Any pointers much appreciated.
With answer given by @response I ended up using $\log(x+y+z)=\log(e^R+e^S+e^T)-C$ where $R=X+C$, $S=Y+C$, $T=Z+C$
AI: Suppose we add a constant $C$ to each one of the values $X, Y, Z$, chosen so that exponentiating them no longer underflows. Then we get:
$X' = X + C$
$Y' = Y + C$
$Z' = Z + C$
Thus, we have:
$x' = e^{X'}$
$y' = e^{Y'}$
$z' = e^{Z'}$
Adding them, we have:
$x'+y'+z' = e^{X'} + e^{Y'} + e^{Z'}$
But, we know that:
$e^{X'} + e^{Y'} + e^{Z'} = e^{X+C} + e^{Y+C} + e^{Z+C} = e^C (e^{X} + e^{Y} + e^{Z})$
Thus, we have:
$x+y+z = (x'+y'+z') e^{-C}$ |
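This is the standard "log-sum-exp" trick; here is a minimal Python sketch of it, taking $C$ to be minus the largest input so that the biggest shifted exponent becomes $0$ (the function name is mine):

    import math

    def log_sum_exp(values):
        """log(sum(exp(v) for v in values)), computed stably via a max shift."""
        C = -max(values)                        # largest shifted exponent is 0
        return math.log(sum(math.exp(v + C) for v in values)) - C

    X, Y, Z = -1000.0, -1001.0, -1002.0         # e^X etc. underflow to 0.0 directly
    print(log_sum_exp([X, Y, Z]))               # ~ -999.5923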
H: Redundance in statement of second morphism theorem
The standard statement of the Second Morphism Theorem found in my textbook and Wikipedia is as follows:
Let $N$ be a normal subgroup of $G$ and $H$ be any subgroup of $G$.
$HN = \{hn | h \in H, n \in N\}$ is a subgroup of $G$
$H \cap N$ is a normal subgroup of $H$
$H / (H \cap N) \cong HN / N$
Isn't the third part redundant? In particular, isn't $HN / N$ just $H/N$? That is, cosets in $HN/N$ are given by $hnN$. But by the closure of $N$, a group, this is equal to $hN$. Also, since $N$ is a normal subgroup, (since we are taking the quotient?) right cosets should equal left cosets, so we don't have to look at $nHN$.
Perhaps I've gone wrong somewhere, but if not, I'd like some insight on why this is stated "redundantly" -- is it just convention?
AI: Let us consider a specific example: Take $G = D_4 = \{1,x,x^2,x^3,y,xy,x^2y,x^3y\}$ and set $H = \{1,y\}$ and $N = \{1,x,x^2,x^3\}$. Being an index $2$ subgroup of $G$, $N$ is normal in $G$. But now you can't even form $H/N$ because $N$ is not even contained in $H$.
If $N$ is a normal subgroup of $G$ that is contained in $H$ then the second isomorphism theorem says that $H \cap N = N$ is a normal subgroup of $H$. Then indeed
$$H/(H \cap N) = H/N \cong HN/N.$$ |
H: Probability AND/OR
Suppose we have a bag of $10$ balls, and each ball is a unique colour.
If we randomly select $3$ balls from this bag, without replacement, I want to find out the chances of correctly guessing the colour of all $3$ balls (if we have chosen $3$ colours beforehand).
I have come down to two possible equations:
Work out the chances of not correctly guessing any ball and negate it:
$9/10 * 8/9 * 7/8 = 0.7$, where the complement is $0.3$
The chances of correctly guessing all balls:
$1/10 + 1/9 + 1/8 = 0.3361$
Firstly:
If these are both correct, which one is more accurate?
Secondly:
Is there a better way of calculating this?
I have a feeling I'm going about the whole thing in the wrong way, but I'm not too sure how I can check to be sure I'm heading in the right direction.
AI: Suppose out of the ten colours, we predicted that we were going to pull out pink, red, and blue.
Then,
P(pink,red,blue)= P(pink on first pick)P(red on second pick|pink on first pick)P(blue on third pick|pink and red on first two picks)$=(\frac {1}{10})(\frac {1}{9})(\frac {1}{8})=\frac{1}{720}$
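A quick simulation of this ordered-guess event (colours encoded as $0,\dots,9$; a rough sketch):

    import random

    trials = 10**6
    hits = sum(tuple(random.sample(range(10), 3)) == (0, 1, 2)   # pink, red, blue in order
               for _ in range(trials))
    print(hits / trials, 1 / 720)    # both ~0.00139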
H: Is the compactness theorem (from mathematical logic) equivalent to the Axiom of Choice?
Or more importantly, is it independent of the axiom of choice? The compactness theorem states that given a set of sentences $T$ in a first order language $L$, $T$ has a model iff every finite subset of $T$ has a model. So for any natural number $n$, $T(n)$ is a finite subset of $n$ sentences. Now if every finite subset has a model, then adding a sentence $r$ to $T(n)$ gives us $T(n+1)$ which also has a model, so there is some transfinite induction going on here when $T$ is countable. This seems to cry out for the application of Zorn's Lemma, which is equivalent to the axiom of choice. So to summarize, is the compactness theorem consistent with ~AC (the negation of the axiom of choice, which is independent of ZF set theory)?
AI: In general, the compactness theorem is equivalent to the ultrafilter lemma, which is known to be strictly weaker than the axiom of choice (so is consistent with its negation) but independent of ZF. The models the compactness theorem asserts exist can be constructed using ultraproducts.
Over a countable alphabet, the compactness theorem is provable in ZF. This is because it can be proven from the completeness theorem, which over a countable alphabet is also provable in ZF. |
H: distribute m pennies to n people, what is the expectation of coins one would obtain
Assume there are $m$ pennies and $n$ people. We want to distribute the pennies to the people by uniformly picking a vector $(x_1,...,x_n)$ from the set of all vectors satisfying $x_1+...+x_n=m$, where $x_i$ is the number of coins given to person $i$.
What is the expectation of the number of pennies given to player 1?
Some analysis: I figured out that the probability of getting a large $x_1$ is lower when compared to a smaller value of $x_1$, although the solution is uniformly picked. I guess every person has the same likelihood to get a certain number of coins, which means no one has an advantage. Therefore the expectation for player 1 is just $m\over n$. Is my analysis and answer correct?
AI: Your reasoning is correct, you are using symmetry. Here is another approach.
Suppose the pennies are distinguishable. Let $X_1, X_2,\ldots, X_m$ each be random variables, with $X_i=1$ if penny $i$ is given to person 1, and $X_i=0$ if penny $i$ is not given to person 1. We have $E(X_i)=\frac{1}{n}$, and $E(X_1+\cdots+X_m)=E(X_1)+\cdots+E(X_m)=\frac{m}{n}$, where the linearity of expectation is used in the first equality. Note that $E(X_1+\cdots+X_m)$ denotes the expected total number of pennies person 1 receives. |
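Note that this labelled-pennies model is a different distribution from picking the vector uniformly, but symmetry gives $m/n$ in both. A stars-and-bars simulation of the uniform-vector model (a Python sketch; the values of $m,n$ are arbitrary) agrees:

    import random

    def random_split(m, n):
        # Uniform (x_1,...,x_n), x_i >= 0, summing to m, via stars and bars
        bars = sorted(random.sample(range(m + n - 1), n - 1))
        cuts = [-1] + bars + [m + n - 1]
        return [cuts[i + 1] - cuts[i] - 1 for i in range(n)]

    m, n, trials = 20, 7, 10**5
    avg = sum(random_split(m, n)[0] for _ in range(trials)) / trials
    print(avg, m / n)    # both ~2.857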
H: Big-O notation, prove the following: $\sum\limits_{k=3}^n(k^2 - 2k)$ is $O(n^3)$.
Use the definition of Big-O notation to prove the following:
$\sum\limits_{k=3}^n(k^2 - 2k)$ is $O(n^3)$.
Can someone please give me some hints on how to expand $\sum\limits_{k=3}^n(k^2 - 2k)$?
AI: The easiest way to conclude that it is $\mathcal{O}(n^3)$ is to note that
$$k^2 - 2k < k^2 \leq n^2, \,\,\,\,\, \forall k \in \{3,4,\ldots,n\}$$
You could in fact evaluate the sum exactly and then prove that it is $\mathcal{\Omega}(n^3)$. Make use of the fact that
$$\sum_{k=1}^n k^2 = \dfrac{n(n+1)(2n+1)}6 \text{ and } \sum_{k=1}^n k = \dfrac{n(n+1)}2$$
The complete solution: note that
$$\sum_{k=3}^n (k^2-2k) = \sum_{k=3}^n k^2 - 2\sum_{k=3}^nk = \dfrac{n(n+1)(2n+1)}6-1^2-2^2 - 2 \left(\dfrac{n(n+1)}2-1-2\right),$$
which simplifies to
$$\dfrac{(n-2)(n-1)(2n+3)}6.$$
H: Related rates, calculus
Suppose that $k^{2} + h^{3} = 9$. Find $\frac{dh}{dt}$ when $k=1$ and $\frac{dk}{dt} = 3$ (the stated answer is $\frac{1}{2}$). I'm differentiating with respect to $t$ but I cannot get the answer; if you could show me the steps, or how to approach this question, that would be very helpful :)
AI: Treating $k$ and $h$ as functions of $t$, we take the derivative of each side of the given equation with respect to $t$ by using implicit differentiation and chain rule to obtain:
$$\begin{align*}
2k\frac{dk}{dt}+3h^2\frac{dh}{dt} &= 0 \\
3h^2\frac{dh}{dt} &= -2k\frac{dk}{dt} \\
\frac{dh}{dt} &= \frac{-2k\frac{dk}{dt}}{3h^2} \\
\end{align*}$$
Note that when $k=1$, we have $1^2+h^3=9 \iff h^3=8 \iff h=2$. Thus, substitution yields:
$$\frac{dh}{dt} = \frac{-2(1)(3)}{3(2)^2} = \frac{-2}{2^2} = \boxed{\dfrac{-1}{2}}$$ |
H: Ratio of corresponding sides of similar triangles, given the areas.
The areas of two similar triangles are $72$ and $162$. What is the ratio of their corresponding sides?
AI: When linear dimensions are scaled by the factor $\lambda$, area is scaled by the factor $\lambda^2$. Here, we have $\lambda^2=\dfrac{162}{72}=\dfrac{9}{4}$. So $\lambda=\dfrac{3}{2}$.
H: Geometry of the space $C[a, b]$ respect to the norm $\lVert x \rVert_{\infty} = \max_{t\in [a,b]}\lvert x(t)\rvert$.
I have studied that the space $C[a, b]$ of all scalar-valued (real or complex)
continuous functions defined on $[a, b]$ is a Banach space with respect to the norm $\lVert x \rVert_{\infty} = \max_{t\in [a, b]}\lvert x(t)\rvert$.
I want to understand the geometry of the space $C[a, b]$ for the norm defined above. Is this possible? I am not able to visualize this space with respect to this norm. Could anybody explain?
Thanks
AI: This question is rather broad. As soon as you go from finite dimensional spaces to infinite dimensional ones, visualizing becomes much more complicated in general. One needs to develop certain intuition, which comes from practice. There is nothing special about $C[a,b]$; you have to get used to infinite dimensional spaces first. When you are dealing with a concrete problem you can try to apply your "euclidean" intuition first. Roughly speaking, there are two major differences to keep in mind:
$1.$ There is no Pythagorean theorem $\|x\|^2+\|y\|^2=\|x+y\|^2$ whenever $x,y$ are "orthogonal." The reason is that we cannot define orthogonality in the usual way, because $C[a,b]$ is not a Hilbert space. One can, however, choose "almost" orthogonal vectors if necessary.
$2.$ The unit ball is not compact, but this is a common thing for all infinite dimensional spaces. One has to be very careful when choosing convergent subsequences etc. |
H: Let $f(z)=\frac{z^3}{(z-\pi)^3(z+5)^2}$ and let $C$ be $|z|=3$.Then $\int_C{f(z)dz}=0$ because :
Which of the following options are true?
Let $f(z)=\dfrac{z^3}{(z-\pi)^3(z+5)^2}$ and let $C$ be $|z|=3$.Then $\int_C{f(z)dz}=0$ because :
(A) the residue is $0$ at its only pole within $C$.
(B) the sum of the residues at its two poles within $C$ is $0$.
(C) $f(z)$ is analytic on and within $C$.
(D) $|f(z)|< \frac {M}{|z|^2}$ for some constant $M$.
AI: (C) is the correct answer here: the given function has poles at $z=\pi$ and $z=-5$, but neither of them lies on or within the circle $|z|=3$ (note $\pi>3$ and $|-5|=5>3$), so $f(z)$ is analytic on and within $C$.
H: Cylindrical Shell method conceptual question
I am self-teaching myself calculus for the summer to get ready for the actual class. Let us say that we have a region bounded above by the curve $y = 2 - x^2$ and below by the curve $y=x^2$ from $x=0$ to $x=1$. Suppose that the region is revolved around the $y-$axis.
Why do we use this formula instead of the washer method?:
$ 2 \pi \int_a^b x(f(x) - g(x)) dx$
The surface area of a cylinder is $2\pi rh$ (without the top and bottom) and the volume is $\pi r^2 h$, so how can the volume of a cylinder be $2 \pi rh dr$? I don't understand that way of thinking.
I thought that we're adding infinitesimally thin cylinders as we go up and down, so why not use the washer method?
AI: If we want, we could use the washer method (in fact, this is the disk method since the $y$-axis is a boundary of the region) and integrate with respect to $y$. However, this would involve two separate integrals, since the radius of each disk is defined by two different curves. Thus, the volume using the disk method is:
$$
V= \left[ \pi \int_0^1 (\sqrt{y})^2 dy \right] + \left[ \pi \int_1^2 (\sqrt{2-y})^2 dy \right] = \dfrac{\pi}{2} + \dfrac{\pi}{2} = \boxed{\pi}
$$
The computations in this example actually turned out to not be that bad, but sometimes it just isn't feasible this way if the inverse functions of the boundaries can't be easily found.
Instead, it would be easier to add infinitesimally thin cylindrical shells ("hollow pipes"). If "unrolled" into a rectangular prism, the dimensions of an arbitrary shell would be $2\pi r \times h \times dx$, where the radius is $r=x$ and the height is $h=(2-x^2)-x^2$. Note that $2\pi r$ corresponds to the circumference of the original cylindrical shell's circular base and $dx$ corresponds to the shell's infinitesimally thin thickness. This yields the integral:
$$
V= \int_0^1 2\pi x ((2-x^2)-x^2)dx = \boxed{\pi}
$$ |
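Both computations are easy to confirm with sympy (a short sketch, assuming sympy is available):

    import sympy as sp

    x, y = sp.symbols('x y')
    shells = sp.integrate(2*sp.pi*x*((2 - x**2) - x**2), (x, 0, 1))
    disks = (sp.pi*sp.integrate(sp.sqrt(y)**2, (y, 0, 1))
             + sp.pi*sp.integrate(sp.sqrt(2 - y)**2, (y, 1, 2)))
    print(shells, disks)   # pi pi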
H: First Homology Group and Abelianization
On the Wolfram Mathworld article for Commutator Subgroup, it states that the first homology group is the abelianization, $$H_{1}(G) = G \big/ [G,G]$$
which totally blows my mind because I've only seen the commutator subgroup in the context of Lie algebra representation theory, and I've only seen the first homology group in the context of simplicial homology. This leads me to the following question,
Can you give an explanation (rigorous, or intuitive) for why the first homology group is the abelianization? Furthermore, if you happen to know of how simplicial homology may be related to Lie algebra representation theory (or know of a reference regarding it), please tell!
AI: $H_1(G)$ is the first homology group of the corresponding Eilenberg-MacLane space $K(G,1)=X$ (which is by definition the space $X$ such that $\pi _1(X)=G$ and all other homotopy groups of $X$ are trivial). Denote $X=K(G,1)$. Then your question reduces to the following:
$H_1(X)=\pi_ 1(X)/[\pi _1(X),\pi _1(X)].$
This is a standard algebraic topology fact which you can find in any standard algebraic topology textbook (e.g. Algebraic Topology by Hatcher, Theorem 2A.1).
H: Pages 6 and 27 are on the same (double) sheet of a newspaper, how many pages are there in the news paper altogether
I was never really good at maths, trying to get back into it.
I have this question:
Pages 6 and 27 are on the same (double) sheet of a newspaper, what are the page numbers on opposite sides of the sheet and how many pages are there in the newspaper altogether
This seems like a trick question... I am not too sure how to take the logic, but I suppose pages 5 and 28 are the numbers on opposite sides of the page?
How do I get the total pages of the newspaper altogether? I am assuming we add the missing pages, 1 thru 4 (4 pages in total) to 27 to get 31?
Not sure how to solve this, and I have no answer key.
AI: Assume the paper is formed by taking a pile of (printed) sheets of paper and folding it in half.
Look at the outside cover - this pairs up the first page and the last page on the outside. If there are $n$ pages altogether, the sum is $n+1$.
The other side of the same sheet has pages $2$ and $n-1$ - sum $n+1$.
In fact it is easy to see that the two pages on the same side of any of the original pieces of paper have numbers which add to $n+1$.
Here $n+1=33$ so $n=32$ - and the paper consists of $8$ folded sheets.
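For concreteness, here is a small sketch (assuming the usual saddle-stitched layout, in which sheet $k$ of an $n$-page paper carries pages $2k-1$, $2k$, $n+1-2k$ and $n+2-2k$) listing the pages on each sheet:

    n = 32
    for k in range(1, n // 4 + 1):
        front = (2 * k - 1, n + 2 - 2 * k)   # two pages on one side, summing to n + 1
        back = (2 * k, n + 1 - 2 * k)        # the other side, also summing to n + 1
        print(k, front, back)
    # sheet 3 prints (5, 28) (6, 27): pages 6 and 27 share a sheet, with 5 and 28 opposite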
H: Probability that a divisor of $10^{99}$ is a multiple of $10^{96}$
What is the probability that a divisor of $10^{99}$ is a multiple of $10^{96}$? How to solve this type of question. I know probability but I'm weak in number theory.
AI: HINT: $10^n=2^n5^n$, so the divisors of $10^n$ are the numbers of the form $2^k5^m$, where $0\le k,m\le n$. Since there are $n+1$ choices for each of $k$ and $m$, $10^n$ has $(n+1)^2$ divisors. And $10^\ell\mid 2^k5^m$ if and only if $\ell\le\min\{k,m\}$.
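To see where the hint leads, here is a brute-force sketch over the exponent pairs (assuming, as the problem implicitly does, that a divisor is chosen uniformly at random):

    n, l = 99, 96
    divisors = [(k, m) for k in range(n + 1) for m in range(n + 1)]   # exponents of 2^k 5^m
    multiples = [(k, m) for (k, m) in divisors if k >= l and m >= l]  # divisible by 10^96
    print(len(multiples), len(divisors), len(multiples) / len(divisors))
    # prints: 16 10000 0.0016, i.e. the probability is (4/100)^2 = 1/625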
H: The maximal ideal space of $A$ is contained in the unit ball of $A^\ast$
On page 281 of Rudin's book Functional Analysis, let $\Delta$ be the maximal ideal space of a commutative Banach algebra $A$, $K$ be the norm-closed unit ball of $A^*$, then $\Delta\subset K$ (by Theorem 10.7). Can someone give me more detail?
AI: This is true after identification of closed maximal ideals with non-zero characters on $A$. More preciesly each maxiamal closed ideal in $A$ is a kernel of uniqely defined character. One can show that its norm is $1$. For details see this answer. |
H: Formula for series $\frac{\sqrt{a}}{b}+\frac{\sqrt{a+\sqrt{a}}}{b}+\cdots+\frac{\sqrt{a+\sqrt{a+\sqrt{\cdots+\sqrt{a}}}}}{b}$
All variables are positive integers.
For:
$$a_1\qquad\frac{\sqrt{x}}{y}$$
$$a_2\qquad\frac{\sqrt{x\!+\!\sqrt{x}}}{y}$$
$$\cdots$$
$$a_n\qquad\frac{\sqrt{x\!+\!\sqrt{\!x+\!\sqrt{\!\cdots\!+\sqrt{x}}}}}{y}$$
Is there a formula of an unconditional form to describe series $a_n$?
I thought of something along the lines of:
$$\sum _{k=1}^{n } \left(\sum _{j=1}^k \frac{\sqrt{x}}{y}\right)$$
but, I quickly realized that it was very incorrect; Then I thought of:
$$\sum _{k=1}^{n} \frac{\sum _{j=1}^k \sqrt{x}}{y}$$
which I also concluded as very incorrect...
I'm blank, but I would like to see an example of something along the lines of:
$$\sum _{k=1}^{n } \frac{\sqrt{x+\sqrt{x+\sqrt{\cdot\cdot\cdot+\sqrt{x}}}}}{y}$$
where each $\sqrt{x+\sqrt{\cdots}}$ addition, repeats $k$ times. (i.e $k=3 \Rightarrow \sqrt{x+\sqrt{x+\sqrt{x}}}$);
If it is possible...
Cheers!
AI: If all you are looking for is a compact representation, let
$$
s_{k}=\begin{cases}
0 & \text{if }k=0\\
\sqrt{a+s_{k-1}} & \text{if }k>0
\end{cases}.
$$
Then
\begin{align*}
S_n & =\left(\frac{\sqrt{a}}{b}\right)+\left(\frac{\sqrt{a+\sqrt{a}}}{b}\right)+\ldots+\left(\frac{\sqrt{a+\sqrt{a+\ldots+\sqrt{a}}}}{b}\right)\\
& =\frac{1}{b}\left[\sqrt{a}+\sqrt{a+\sqrt{a}}+\ldots+\sqrt{a+\sqrt{a+\ldots+\sqrt{a}}}\right]\\
& =\frac{1}{b}\sum_{k=1}^{n}s_{k}.
\end{align*}
Assume $a\in\mathbb{R}$ (you don't have to do this). We can show that the recurrence is
stable everywhere (weakly stable at $a=-\frac{1}{4}$). Particularly,
the fixed point is given by
$$
s^{2}-s-a=0,
$$
which has roots
$$
\frac{1\pm\sqrt{1+4a}}{2}.
$$
In particular, the locally stable fixed point is the solution in which the $\pm$ sign is taken to be $+$.
So, for large enough $k$,
$$
s_k\approx\frac{1+\sqrt{1+4a}}{2}.
$$
This is as good an answer as you can hope for, save for error bounds on the above expression.
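If you want to see the convergence numerically, here is a minimal Python sketch (taking $a=2$, for which the fixed point is exactly $2$):

    import math

    def s(a, k):
        """k-th term of the recurrence s_0 = 0, s_k = sqrt(a + s_{k-1})."""
        val = 0.0
        for _ in range(k):
            val = math.sqrt(a + val)
        return val

    a = 2.0
    print((1 + math.sqrt(1 + 4 * a)) / 2)      # fixed point: 2.0
    for k in (1, 2, 5, 10):
        print(k, s(a, k))                      # 1.414..., 1.847..., 1.9997..., ~2.0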
H: Conditional Expectation with independent sub-sigma fields
Let X and Y be bounded random variables on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Consider two independent sub-$\sigma$ fields $\mathcal{G}$ and $\mathcal{H}$ of $\mathcal{F}$. We assume X is $\mathcal{G}$-measurable and Y is $\mathcal{H}$-measurable. Prove the following equality holds:
$$\mathbb{E} (XY\,\, |\, \sigma (\mathcal{H},\mathcal{G})) = \mathbb{E} (X\,\, |\mathcal{H}) \mathbb{E} (Y\,\, |\mathcal{G})$$
AI: It doesn't.
If $X$ is $\mathcal G$-measurable and
$Y$ is $\mathcal H$-measurable then $XY$ is $\sigma(\mathcal G, \mathcal H)$-measurable. Therefore $\mathbb E(XY|\sigma(\mathcal G, \mathcal H))=XY$.
But if $\mathcal G$ and $\mathcal H$ are independent then
$\mathbb E(X|\mathcal H)= \mathbb E(X)$ and $\mathbb E(Y|\mathcal G)= \mathbb E(Y)$.
So the claimed identity would force $XY = \mathbb E(X)\mathbb E(Y)$ almost surely, which fails in general: it holds only in degenerate cases, for instance when $X$ and $Y$ are almost surely constant.
H: Let $f$ be a bounded uniformly continuous function in $R^1$. Then $X_n\to 0$ in pr. implies $E(f(X_n)) \to f(0)$.
The following is problem 4 from Section 4.2 of "A Course in Probability Theory" by Kai Lai Chung.
Let $f$ be a bounded uniformly continuous function in $R^1$. Then $X_n\to 0$ in pr. implies $E(f(X_n)) \to f(0)$.
I don't understand where the condition uniformly continuous is needed. Where is the flaw in the following argument that uses just $f$ is continuous at 0?
Consider an $\epsilon >0$.
Continuity of $f$ says that there exists $\delta > 0$ such that if $|x| \le \delta$ then $|f(x)-f(0)|<\epsilon/3$
That $f$ is bounded says that there exists $B$ such that $|f|<B$.
Because $X_n\to 0$ in pr. we know that there exists $N$ such that if $n> N$ then $P(|X_n| > \delta) < \epsilon/3B$.
Then for each $n > N$ we have that
$$\begin{align*}
|E(f(X_n))-f(0)| &= \left|\int f(X_n)\,dP - f(0)\right| \\
&= \left|\int_{|X_n|>\delta} f(X_n)\,dP + \int_{|X_n|\le \delta}(f(X_n)-f(0))\,dP - \int_{|X_n|>\delta}f(0)\,dP\right| \\
&\le \int_{|X_n|>\delta} |f(X_n)|\,dP + \int_{|X_n|\le \delta}|f(X_n)-f(0)|\,dP + \int_{|X_n|>\delta}|f(0)|\,dP \\
&\le \int_{|X_n|>\delta} B\,dP + \int_{|X_n|\le \delta}\frac{\epsilon}{3}\,dP + \int_{|X_n|>\delta}B\,dP \\
&\le B\cdot\frac{\epsilon}{3B} + \frac{\epsilon}{3} + B\cdot\frac{\epsilon}{3B} = \epsilon
\end{align*}$$
AI: You're correct. The reason the author has chosen the uniform continuity condition is that this is a step in the proof of a stronger statement
Let $f$ be a bounded uniformly continuous function in $R^1$. Then $X_n\to X$ in pr. implies $E(f(X_n)) \to E(f(X))$.
For this statement you need the uniform continuity condition. So I'd guess that the author is trying to keep the conditions consistent to avoid confusing the reader.
H: Help with functions
How many functions $f\ne0$, $f:\mathbb{Z}\to\mathbb{C}$, periodic with an integer period $a$, are there such that
$$f(x+y)=f(x)f(y)\,?$$
What I have sofar is that
$$f(a)=f(2a)=f(a)f(a)$$
So that $f(a)=1$,
Also if I multiply $f(x)$ together with itself $a$ times I get that $$f(x)^a=f(x)f(x)\cdots f(x)=f(ax)=1$$
So that $f(x)^a=1$
I would appreciate any help
AI: By your argument, we can conclude that $f(1)$ is an $a$th root of unity. Once you know $f(1)$, then $f(n)$ is determined for positive $n\in \mathbb{N}$, as $f(n)=f(1)^n$.
We can also figure out $f(0)=1$ as $1=f(a)=f(0+a)=f(0)\cdot f(a)=f(0)$. That means that $1=f(0)=f(1+(-1))=f(1)f(-1)$, so we also know $f(-1)=\frac{1}{f(1)}$ and from there we know $f(-n)=f(1)^{-n}$ for positive $n\in \mathbb{N}$.
Therefore the function $f$ is determined by a choice of an $a$th root of unity for $f(1)$. Conversely, for any $a$th root of unity $\omega$, $n\mapsto \omega^n$ satisfies the requirement for $f$. However, to guarantee the period is exactly $a$ and no less, we must choose an $a$th root of unity $\omega$ with order $a$. There are $\phi(a)$ such choices.
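As a sanity check, here is a small numerical sketch (taking $a=6$ for illustration) that counts the admissible choices by computing the multiplicative order of each $a$th root of unity:

    import cmath

    a = 6
    count = 0
    for j in range(1, a + 1):
        w = cmath.exp(2 * cmath.pi * 1j * j / a)          # an a-th root of unity
        order = min(p for p in range(1, a + 1)
                    if abs(w ** p - 1) < 1e-9)            # multiplicative order of w
        if order == a:
            count += 1
    print(count)   # prints 2 = phi(6): the number of functions with exact period 6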
H: Diamond diagram for Correspondence Theorem
This paragraph appears in Isaacs' Algebra (chapter on homomorphisms).
We comment briefly on the interpretation of the Correspondence theorem in terms of lattice diagrams, at least in the case where $\phi$ is the canonical homomorphism $G\rightarrow G/N$. If we have a lattice diagram for some of the subgroups of $G$, including $N$, then the part of the diagram above $N$ is a valid lattice diagram for $G/N$. In fact, diamonds go over to diamonds, since if $N\subseteq U$ and $N\subseteq V$ and $UV$ is a group, then in $G/N$ we have $(U/N)(V/N)=(UV)/N$.
I get the last part that if $N\subseteq U$ and $N\subseteq V$ and $UV$ is a group, then in $G/N$ we have $(U/N)(V/N)=(UV)/N$, but I don't understand what he means in terms of the diagram and diamonds, and that diamonds go over to diamonds. Maybe this part of algebra is too "abstract" for me to understand...
AI: In the original diagram with $G$ at the top, the groups $UV,U,V$, and $N$ form a diamond with $UV$ at the top and $N$ at the bottom. The map $\varphi$ carries $UV$ to $(U/N)(V/N)$, $U$ to $U/N$, $V$ to $V/N$, and $U\cap V$ to $(U\cap V)/N$, which form a diamond in the lattice diagram for $G/N$. More generally, every diamond in the part of the $G$-lattice lying above $N$ is carried in similar fashion to a diamond in the $(G/N)$-lattice. Writing $W$ for $U\cap V$, we have this correspondence of diamonds:
    UV            UV/N
   /  \           /  \
  U    V        U/N  V/N
   \  /           \  /
    W             W/N
    |              |
    N              1
H: Find a measurable function such that $f(x)\le \alpha$ for $x\in E_\alpha$
Theorem: Given $\{ E_\alpha \}_{\alpha \in \mathbb{R}}\subset \mathcal{M}$ such that $E_\alpha \subset E_\beta$ for $\alpha < \beta$. We have also that $\bigcup_{\alpha \in \mathbb{R}}E_\alpha=X$ and $\bigcap_{\alpha \in \mathbb{R}}E_\alpha=\emptyset$. Then there exists a measurable function $f:X\rightarrow \mathbb{R}$ such that $f(x)\le \alpha$ if $x\in E_\alpha$ and $f(x)\ge \alpha$ if $x\notin E_\alpha$.
I proved this by defining, $$f(x) = \inf \{ q\in \mathbb{Q} : x\in E_q \}$$
Then it is easy to show that $f$ has the desired properties and $f(x) = \inf_{q \in \mathbb{Q}} f_q(x)$, where $f_q(x)$ is $q$ if $x\in E_q$ and $\infty$ otherwise. Then since each $f_q$ is measurable we have that $f$ is measurable.
I don't feel very confident about this proof because I don't use the fact that $\bigcap_{\alpha \in \mathbb{R}}E_\alpha=\emptyset$ and the book hints that I should use the fact that $g:X\rightarrow \overline{\mathbb{R}}$ is measurable if $g^{-1} \left( (r,\infty] \right)\in\mathcal{M}$ for $r\in \mathbb{Q}$. And I don't use this fact, so I'd appreciate if someone could tell me if I'm making a bad mistake here.
AI: Stuart, since your question calls for an $f:X\rightarrow \mathbb{R}$, you need to ensure that $f$, as you have defined it, is never $\pm \infty$. This is where your two conditions ($\bigcup_{\alpha} E_\alpha=X$ and $\bigcap_{\alpha} E_\alpha = \emptyset$) come in.
To use the fact about $g^{-1}((r,\infty])$ being measurable for $r \in \mathbb{Q}$, just show that $f$ satisfies that condition, and then show that $f$ takes only real values.
H: A product identity involving the gamma function
I have reduced this problem (thanks @Mhenni) to the following (which needs to be proved):
$$\prod_{k=1}^n\frac{\Gamma(3k)\Gamma\left(\frac{k}{2}\right)}{2^k\Gamma\left(\frac{3k}{2}\right)\Gamma(2k)}=\prod_{k=1}^n\frac{2^k(1+k)\Gamma(k)\Gamma\left(\frac{3(1+k)}{2}\right)}{(1+3k)\Gamma(2k)\Gamma\left(\frac{3+k}{2}\right)}.$$
As you see it's quite a mess. Hopefully one can apply some gamma-identities and cancel some stuff out. I have evaluated both products for large numbers and I know that the identity is true, I just need to learn how to manipulate those gammas.
AI: Put all gamma functions to one side. Then
$\Gamma(2k)$ cancels out.
Using gamma function duplication formula, one can replace
$$\frac{1}{1+k}\Gamma\left(\frac{k}{2}\right)\Gamma\left(\frac{k+3}{2}\right)=\frac12\Gamma\left(\frac{k}{2}\right)\Gamma\left(\frac{k}{2}+\frac12\right)=2^{-k}\sqrt{\pi}\,\Gamma(k).$$
Similarly,
$$\frac{1}{1+3k}\Gamma\left(\frac{3k}{2}\right)\Gamma\left(\frac{3k+3}{2}\right)=
\frac12\Gamma\left(\frac{3k}{2}\right)\Gamma\left(\frac{3k}{2}+\frac12\right)=
2^{-3k}\sqrt{\pi}\,\Gamma(3k).
$$
The identity follows immediately.
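For what it's worth, the identity can also be cross-checked numerically, e.g. with a sketch along these lines (assuming mpmath is available):

    from mpmath import mp, mpf, gamma

    mp.dps = 30  # work with 30 significant digits

    def lhs(n):
        p = mpf(1)
        for k in range(1, n + 1):
            p *= gamma(3 * k) * gamma(mpf(k) / 2) / (2 ** k * gamma(mpf(3 * k) / 2) * gamma(2 * k))
        return p

    def rhs(n):
        p = mpf(1)
        for k in range(1, n + 1):
            p *= (2 ** k * (1 + k) * gamma(k) * gamma(mpf(3 * (1 + k)) / 2)
                  / ((1 + 3 * k) * gamma(2 * k) * gamma(mpf(3 + k) / 2)))
        return p

    print(lhs(5), rhs(5))  # the two products agree to working precision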
H: Check proof for a statement of linear independence involving 5 matrices in $M_2(\mathbb{R})$
Let $A,B,C,D,E \in M_2(\mathbb{R})$
I'm asked to prove or disprove that if the set $S = \{EA,EB,EC,ED\}$ is linearly independent, then the set $\{A,B,C,D\}$ is linearly independent.
I was having trouble with matrix algebra, but this is my shot. I'm not sure I'm allowed to factor out $E$, but here it is:
The set $\{EA,EB,EC,ED\}$ is linearly independent if and only if:
$$\alpha\cdot EA + \beta\cdot EB + \gamma\cdot EC + \delta\cdot ED = 0$$
$$ \Rightarrow \alpha = \beta = \gamma = \delta = 0 $$
Thus, if we factor out $E$ out of the first equation we get that
$$E\cdot(\alpha\cdot A + \beta\cdot B + \gamma\cdot C + \delta\cdot D) = 0$$
We know that $S$ is linearly independent so $E \neq 0$, thus we get that:
$$ \alpha\cdot A + \beta\cdot B + \gamma\cdot C + \delta\cdot D = 0 $$
We also know that
$$\alpha = \beta = \gamma = \delta = 0$$
Thus, $\{A,B,C,D\}$ is linearly independent.
$\blacksquare$
AI: The correct argument goes like this. Assume
$$
a A + b B + c C + d D = 0,
$$
with $a, b, c, d \in \Bbb{R}$.
You have to prove that this implies $a = b = c = d = 0$.
Multiply by $E$ on the left to get
$$
a E A + b E B + c E C + d E D = 0.
$$
By assumption $EA, EB, EC, ED$ are linearly independent, so this implies $a = b = c = d = 0$, done.
H: Nonsingular affine curve which is not unmixed
Let $C$ be any nonsingular curve in $A^3_{\mathbb C}$. Can a point be an irreducible component of $C$? I am not able to find an example of such $C$.
AI: First, note that a point is an irreducible component of a variety if and only if its complement is closed, i.e. if and only if the point is isolated. So any example would be a disjoint union of another non-singular curve and a point.
The following equations cut out the disjoint union of a line and a point in the affine plane:
\begin{align}
x y & = 0 \\
x^2 - x & = 0
\end{align}
Thus it is seen that the coordinate ring of this affine variety is isomorphic to $\mathbb{C}[y] \times \mathbb{C}$, which is certainly of dimension 1. It is a regular variety in the sense that every local ring is regular, but perhaps you would not count it as non-singular, because the tangent space at the isolated point is 0-dimensional instead of 1-dimensional.
H: Rigorous proof that $\int_{\Omega}X\;dP=\int_{-\infty}^{\infty}xf(x)\;dx$
I'm trying to prove rigorously that $\int_{\Omega}X\;dP=\int_{-\infty}^{\infty}xf(x)\;dx$. Where $f$ is the pdf of the random variable $X$.
I can't find a proof on the wikipedia article, or if it's there then it's disguised enough that I can't recognize it. Basically what I've got is a sort of semi-rigorous (and probably incorrect) proof of the equality, but maybe somebody could help me flesh out the details.
Proof:
Beginning from the definition that $E[X]:=\int_{\Omega}X\;dP$.
Given a random variable $X:\Omega\rightarrow\mathbb{R}$, we have the induced measure on $(\mathbb{R}, \mathscr{B}(\mathbb{R}))$ given by $P(\{X\in A\})$. Then by the Radon-Nikodym theorem there exists a measurable function $f:\mathbb{R}\rightarrow [0,\infty)$ such that $$P(\{X\in A\}) = \int_Afd\mu,$$
where $d\mu$ is the Lebesgue measure. From here it gets a bit hand-wavy. Basically since this measure was induced by the random variable $X$, then on $\mathbb{R}$ this random variable is simply given by the identity function $g(x)=x$. And thus by definition we write the expected value of $g$ with respect to our Radon-Nikodym produced measure in the form $$\int_{-\infty}^{\infty}x\;d\Big(\int_Afd\mu\Big).$$
Now by the Fundamental Theorem of Calculus, this becomes $$\int_{-\infty}^{\infty}xf(x)\;dx.$$
What does everyone think about this?
AI: I'll try to provide a little more general result. Let $(\Omega,\ \mathcal{E},\ P)$ be a probability space and let $X\colon \Omega\longrightarrow \mathbb{R}$ be a random variable, i.e for each $I\in \mathcal{B}$, $X^{-1}(I)\in\mathcal{E}$, where $\mathcal{B}$ is the usual Borel $\sigma-$algebra on $\mathbb{R}$. Let us write $\mu:=\mu_{X}$ for the probability distribution of $X$, i.e for the measure defined on $\mathcal{B}$ by $\mu(I):=P(X^{-1}(I))$ for each $I\in\mathcal{B}$. Then the following holds.
Theorem (Abstract-Concrete Formula): Let $\phi\colon\mathbb{R}\longrightarrow \mathbb{R}$ be a borelian function, i.e $\phi^{-1}(I)\in\mathcal{B}$ for every $I\in\mathcal{B}$, and write $\phi(X)$ for the composition $\phi\circ X$. Suppose at least one between the integrals $$\int_{\Omega} \phi(X)\ dP\quad\text{and}\quad \int_{\mathbb{R}}\phi (x) \ d\mu $$
exists (resp. exists and it is finite). Then also the other one exists (resp. exists and it is finite) and it holds that $$\int_{\Omega} \phi(X)\ dP=\int_{\mathbb{R}}\phi (x) \ d\mu\ .$$
In particular, $\phi(X)$ is summable with respect to $P$ if and only if $\phi$ is summable with respect to $\mu$.
(When I say that a Lebesgue integral for a measurable function exists, I allow that it is not finite). The proof of this fact is quite straightforward but it requires some measure theory results such as the approximation theorem with simple functions and the Lebesgue's monotone convergence theorem. Indeed, suppose first that $\phi$ is a (finitely) simple and positive function. Then also $\phi(X)$ is simple (and positive) and (therefore) both mentioned integrals always exist. Writing $\phi=\sum_{i=1}^{n} c_{i}1_{E_{i}}$, where $n=\vert \phi(\mathbb{R})\vert$, $\phi (\mathbb{R})=\{c_{1},\cdots,c_{n}\}$ and $E_{i}:=\phi^{-1}(\{c_{i}\})$, we get $$\int_{\mathbb{R}}\phi (x) \ d\mu=\sum_{i=1}^{n}c_{i}\mu (E_{i})=\sum_{i} c_{i}P(X^{-1}(E_{i}))=\sum_{i} c_{i}P(X^{-1}(\phi^{-1}(\{c_{i}\})))=\int_{\Omega} \phi(X)\ dP.$$ Assume now $\phi$ is a non-negative borelian function. Then there exists a non-decreasing sequence $(\phi_{n})_{n\in \mathbb{N}}$ of simple, positive functions such that $\lim\limits_{n\to \infty}\phi_{n}(x)=\phi (x)$ for every $x\in\mathbb{R}$. By monotone convergence theorem we get immediately that $$\int_{\mathbb{R}}\phi (x) \ d\mu=\int_{\mathbb{R}}(\lim_{n\to\infty}\phi_{n} (x)) \ d\mu =\lim_{n\to\infty} \int_{\mathbb{R}}\phi_{n} (x) \ d\mu=\lim_{n\to\infty} \int_{\Omega}\phi_{n} (X) \ dP=\int_{\Omega} \phi(X)\ dP.$$ Finally, suppose only $\phi\colon \mathbb{R}\longrightarrow\mathbb{R}$ is a borelian function, with no further restrictions. Then one can write $\phi=\phi^{+}-\phi^{-}$ (here for notation). Suppose $\int_{\mathbb{R}}\phi (x) \ d\mu$ exists: then at least one between $\int_{\mathbb{R}}\phi^{+} (x) \ d\mu$ and $\int_{\mathbb{R}}\phi^{-}(x) \ d\mu$ (say, the first one) must be finite and hence also $\int_{\Omega}(\phi(X))^{+} \ dP$ is finite, i.e $\int_{\Omega}\phi(X)\ dP$ exists. It is now clear that we can conclude with our thesis.
Corollary: In the previous situation, if $E[X]<+\infty$, then $$E[X]:=\int_{\Omega} X\ dP=\int_{\mathbb{R}} x\ d\mu.$$ In particular, if $\mu$ is absolutely continuous with respect to Lebesgue's Measure (on $\mathbb{R}$) and has density $f$, then we get $E[X]=\int_{\mathbb{R}} xf(x)\ dx$ (by (a straightforward consequence of) Radon-Nikodym theorem).
H: Cardinality of the set of divergent sequences
How to find the cardinality of the set of divergent sequences? Let's name this set $A$.
I know that cardinality of the set of sequences equals $2^{\lvert \mathbb{N}\rvert}$, so $\lvert A\rvert\le 2^{\lvert \mathbb{N}\rvert}$.
How to strictly prove to the other side?
Thank you for your time.
AI: Pick one divergent sequence, $a_n$, now for every $r\in\Bbb R$ let $r_n=\begin{cases} r & n=1\\ a_n & n>1\end{cases}$.
Show that this is an injection from $\Bbb R$ into $A$.
Alternatively, pick one sequence whose elements are pairwise different (e.g. $a_n=n$), and for every $K\subseteq\Bbb N$ define the sequence: $k_n=\begin{cases} a_n & n\in K\\ -a_n & n\notin K\end{cases}$
Show that this is an injection from $\mathcal P(\Bbb N)$ into $A$.
H: Proving that none of these elements 11, 111, 1111, 11111...can be a perfect square
How can i prove that no number in set S
S = {11, 111, 1111, 11111...}
Is a perfect square.
I have absolutely no idea how to tackle this problem i tried rewriting it in powers of 10 but that didn't really get me anywhere...
Thanks in advance.
AI: First notice that each element of $S$ can be rewritten as $4(25m+2)+3 = 100m+11$ for some non-negative integer $m$. The next step is to realise that every square leaves a remainder of either $0$ or $1$ when divided by $4$.
As $(2n)^2 = 4n^2$ which clearly leaves a remainder of 0 when divided by 4
And for odd squares: $(2n-1)^2 = 4n^2 -4n +1$ which clearly leaves a remainder of 1 upon division by 4.
But by writing each element of S as $4(25m+2) +3$ we see that it leaves a remainder of 3 when divided by 4, therefore it can never be a perfect square.
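A short Python sketch confirming both the congruence and the conclusion for the first few elements of $S$:

    import math

    for digits in range(2, 12):
        n = int("1" * digits)              # 11, 111, 1111, ...
        assert n % 4 == 3                  # n = 100m + 11 = 4(25m + 2) + 3
        assert math.isqrt(n) ** 2 != n     # hence n is not a perfect square
    print("all checks pass")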
H: Construction of the projective plane over $\mathbb{F}_3$
I have a question about constructing the projective plane over $\mathbb{F}_3$. We first establish seven equivalence classes $P= \{ [0,0,1], [0,1,0], [1,0,0], [0,1,1], [1,1,0], [1,0,1], [1,1,1] \}$.
Given a triple $(a_0, a_1, a_2) \in \mathbb{F}^3_3 \setminus (0, 0, 0)$ we define the line
$L(a_0, a_1, a_2)$ as follows: $L(a_0, a_1, a_2) := \{ [x_0; x_1; x_2] \in P : a_0x_0 + a_1x_1 + a_2x_2 = 0 \}$.
It is quite easy for $L(0,0,1), L(0,1,0), L(1,0,0)$, because here we take the points which have zero on the first, second and third coordinate.
It gets more difficult for me for $L(0,1,1)$, because here we need to have $x_1+x_2=0$. Do we treat the coordinates of the points as elements of $\mathbb{F}_2$ or $\mathbb{F}_3$?
There are $26$ nonzero triples in $\mathbb{F}^3_3 $.
Do we check all 26 $L(x_0, x_1, x_2) $ sets?
Please help, because I really want to understand it.
Thank you.
AI: You are describing a method for getting the Fano plane, which by definition is the projective plane over $\mathbb F_2$.
For getting the projective plane over $\mathbb F_3$ in a similar way, do the following:
Select a set of projective representatives of the nonzero elements of $\mathbb F_3^3$. One way to do this is to select only the vectors whose first nonzero entry is a $1$.
Since $\mathbb F_3$ has $2$ units, you end up with $(3^3 - 1) / 2 = 13$ vectors. Those vectors are called coordinate vectors and give you the $13$ points of the projective plane.
You can also use the coordinate vectors for the description of the lines: Each coordinate vector $v$ corresponds to the line containing all the points $w$ such that the scalar product of $v$ and $w$ equals $0$ (so $\langle v,w\rangle = 0$).
In this way, there are $13$ lines containing $4$ points each.
Typically, the projective plane over a field $K$ is defined in this way:
The subspaces of $K^3$ of dimension $1$ are the points, and the subspaces of $K^3$ of dimension $2$ are the lines.
It may be worthwhile to convince yourself that the above construction is compatible with this description.
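If it helps, here is a short enumeration sketch of the construction above:

    from itertools import product

    # Projective representatives: nonzero vectors in F_3^3 whose first nonzero entry is 1
    points = [v for v in product(range(3), repeat=3)
              if v != (0, 0, 0) and next(x for x in v if x) == 1]
    assert len(points) == 13                              # (3^3 - 1) / 2 points

    def dot(v, w):
        return sum(a * b for a, b in zip(v, w)) % 3

    lines = {v: [w for w in points if dot(v, w) == 0] for v in points}
    assert all(len(pts) == 4 for pts in lines.values())   # 13 lines with 4 points each
    print(len(points), "points and", len(lines), "lines")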
H: Exponential Growth, half-life time
An exponential growth function has at time $t = 5$
a) the growth factor (I guess that is just the "$\lambda$") of $0.125$ - what is the half life time?
b) A growth factor of $64$ - what is the doubling time ("Verdopplungsfaktor")?
For a), as far as I know the half life time is $\displaystyle T_{1/n} = \frac{ln(n)}{\lambda}$ but how do I use the fact that we are at $t = 5$?
I don't understand the (b) part.
Thanks
AI: The growth factor tells you the relative growth between $f(t)$ and $f(t+1)$, i.e. it's $$
\frac{f(t+1)}{f(t)} \text{.}
$$
If $f$ grows exactly exponentially, i.e. if $$
f(t) = \lambda\alpha^t = \lambda e^{\beta t} \quad\text{($\beta = \ln \alpha$ respectively $\alpha = e^\beta$)} \text{,}
$$
then $$
\frac{f(t+1)}{f(t)} = \frac{\lambda\alpha^{t+1}}{\lambda\alpha^t} = \alpha = e^\beta \text{,}
$$
meaning that, as you noticed, the grow factor doesn't depend on $t$ - it's constant.
The half-life time is the time $h$ it takes to get from $f(t)$ to $f(t+h)=\frac{f(t)}{2}$. For a strictly exponential $f$, you have $$
f(t+h) = \frac{f(t)}{2} \Rightarrow \lambda\alpha^{t+h} = \frac{\lambda}{2}\alpha^t
\Rightarrow \alpha^h = \frac{1}{2} \Rightarrow h = \log_\alpha \frac{1}{2} = -\frac{\ln 2}{\ln \alpha} = -\frac{\ln 2}{\beta} \text{.}
$$
Similarly, the doubling-time is the time $d$ it takes to get from $f(t)$ to $f(t+d) = 2f(t)$, and you have $$
f(t+d) = 2f(t) \Rightarrow \lambda\alpha^{t+d} = 2\lambda\alpha^t
\Rightarrow \alpha^d = 2 \Rightarrow d = \log_\alpha 2 = \frac{\ln 2}{\ln \alpha} = \frac{\ln 2}{\beta} \text{.}
$$
Thus, you always have that $d = -h$ for doubling-time $d$ and half-time $h$, which of course makes sense. If you go forward $d$ units of time to double the value, then going backwards $d$ units halves the value, similarly for going forward respectively backward $h$ units to half respectively double the value.
In your case, you get that the doubling time for (b) is $\frac{\ln 2}{\ln 64} = \frac{1}{6}$. For (a) you get that the half-life time is $-\frac{\ln 2}{\ln \frac{1}{8}} = \frac{1}{3}$.
You can also derive those by observing that a growth factor of one-eighth means that going forward one unit of time makes the value decrease to one-eighth of the original value. Thus, after going forward one-third of a unit, the value decreases to one-half, since if it decreases to one-half three times in a row, it overall decreases to one-eighth.
Similarly, if the value increases to $64$ times the original value when going forward one unit of time, you have to go forward one-sixth of a unit of time to have it increase to twice the value, since $2^6 = 64$.
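Finally, a tiny sketch of the two formulas applied to your numbers:

    import math

    def half_life(alpha):
        return -math.log(2) / math.log(alpha)

    def doubling_time(alpha):
        return math.log(2) / math.log(alpha)

    print(half_life(0.125))    # 0.3333... = 1/3, part (a)
    print(doubling_time(64))   # 0.1666... = 1/6, part (b)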
H: Ideals of quotient algebras.
Suppose $I$ and $J$ are ideals of a Lie Algebra L.
I know that we have the fact that:
$\frac{I+J}{J} \cong \frac{I}{I\cap J}$
Prove that the ideals of $\frac{L}{I}$ (the quotient algebra of $L$, whose elements are the cosets $x + I$, $x \in L$) are of the form $\frac{K}{I}$, where $K$ is an ideal of $L$ containing $I$.
AI: If $f\colon L\to L'$ is a surjective homomorphism of Lie algebras, then, for any ideal $K$ of $L$,
$$
f^\to(K)=\{f(x):x\in K\}
$$
is an ideal of $L'$.
For any ideal $K'$ of $L'$,
$$
f^\gets(K')=\{x\in L:f(x)\in K'\}
$$
is an ideal of $L$ (here surjectivity is not needed), containing $\ker f$.
If an ideal $K$ of $L$ contains $\ker f$, then $f^\gets(f^\to(K))=K$. Moreover, for any ideal $K'$ of $L'$, $f^\to(f^\gets(K'))=K'$. The proofs of these two statements are easy.
Therefore there is a bijection, given by $f^\to$ and $f^\gets$ between the ideals of $L$ containing $\ker f$ and the ideals of $L'$.
Apply this to the canonical homomorphism $L\to L/I$, when $I$ is an ideal of $L$.