H: Why is $\frac{987654321}{123456789} = 8.0000000729?!$
Many years ago,
I noticed that $987654321/123456789 = 8.0000000729\ldots$.
I sent it in to Martin Gardner at Scientific American
and he published it in his column!!!
My life has gone downhill since then:)
My questions are:
Why is this so?
What happens beyond the "$729$"?
What happens in bases other than $10$?
AI: In base $n$ the numerator is $$p = n^{n-1} - \frac{n^{n-1}-1}{(n-1)^2}$$ and the denominator is $$q = \frac{n(n^{n-1}-1)}{(n-1)^2}-1.$$
Note that $p = (n-2)q + n-1$ and for the quotient we get
\begin{align}
\frac{p}{q} &= n-2 + \frac{(n-1)^3}{n^n} \frac{1}{1 - \frac{n^2-n+1}{n^n}} \\
&= n-2 + \frac{(n-1)^3}{n^n} \sum_{k=0}^{\infty} \left(\frac{n^2-n+1}{n^n}\right)^k.
\end{align}
Indeed for $n=10$ this is
$$\frac{987654321}{123456789} = 8 + \frac{729}{10^{10}}\sum_{k=0}^{\infty}\left(\frac{91}{10^{10}}\right)^k
$$
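A quick numerical check of these formulas (a sketch; `p_q` is just a helper name for the base-$n$ numerator and denominator):

```python
from fractions import Fraction

def p_q(n):
    # (n^(n-1) - 1)/(n-1)^2 is an integer (it is 12345679 for n = 10)
    F = (n**(n - 1) - 1) // (n - 1)**2
    return n**(n - 1) - F, n * F - 1

for n in (8, 10, 16):
    p, q = p_q(n)
    # p/q minus (n - 2) should be close to the first correction term (n-1)^3 / n^n
    print(n, p, q, float(Fraction(p, q)) - (n - 2), (n - 1)**3 / n**n)
```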
H: Rudin: Real & Complex Analysis Thm 1.10
$\textbf{Theorem:}$ If $\mathcal{F}$ is any collection of subsets of $X$, there exists a smallest $\sigma$-algebra $\mathcal{M}^{*}$ in $X$ such that $\mathcal{F} \subset \mathcal{M}^{*}$.
$\textbf{Proof:}$ Let $\Omega$ be the family of all $\sigma$-algebras $\mathcal{M}$ in $X$ which contain $\mathcal{F}$. Since the collection of all subsets of $X$ is such a $\sigma$-algebra, $\Omega$ is not empty. Let $\mathcal{M}^{*}$
be the intersection of all $\mathcal{M} \in \Omega$. It is clear that $\mathcal{F} \subset \mathcal{M}^{*}$ and that $\mathcal{M}^{*}$ lies in
every $\sigma$-algebra in $X$ which contains $\mathcal{F}$.
To complete the proof, we have to
show that $\mathcal{M}^{*}$ is itself a $\sigma$-algebra.
If $A_n \in \mathcal{M}^{*}$ for $n = 1,2,3, \dots $ and if $\mathcal{M} \in \Omega$, then $A_n \in \mathcal{M}$, so $ \cup A_n \in \mathcal{M}$,
since $\mathcal{M}$ is a $\sigma$-algebra. Since $\cup A_n \in \mathcal{M}$ for every $\mathcal{M} \in \Omega$, we conclude that
$\cup A_n \in \mathcal{M}^{*}$. The other two defining properties of a $\sigma$-algebra are verified in the same manner. $\square$
I understand $\mathcal{M}^{*}$ being an intersection of all the $\sigma$-algebras $\mathcal{M}$ on $X$ that contain $\mathcal{F}$. Then comes the proof that $\mathcal{M}^{*}$ is a $\sigma$-algebra: since it is an intersection of all $\mathcal{M} \in \Omega$, if the sets $A_n$ are in $\mathcal{M}^{*}$, they must also be in all of the $\mathcal{M}$'s, so the countable union is there as well (the same applies to the complements, $\varnothing$ and the whole space).
Can we therefore generalize, that any intersection of $\sigma$-algebras must also be a $\sigma$-algebra?
AI: There is nothing to generalize: that is precisely what the proof shows (namely, that any intersection of $\sigma$-algebras is again a $\sigma$-algebra). Given any nonempty family $\Omega$ of $\sigma$-algebras, the same argument applies to $\bigcap_{\mathcal M \in \Omega} \mathcal M$. The meaning of "smallest" is in the sense of inclusion, i.e. the minimal $\sigma$-algebra is contained in every other $\sigma$-algebra that contains $\mathcal F$.
H: need to show image of $f$ contains the unit disk.
Let $f$ be non-constant and analytic on the closed unit disk, with $|f|=1$ if $|z|=1$. We need to show the image of $f$ contains the unit disk.
My thoughts:
Whenever $|\omega|<1$, if I show that $g(z)=f(z)-\omega$ has a root in $D$ then we are done.
On $|z|=1$, $|f(z)-g(z)|=|\omega|<1=|f(z)|$. Hence by Rouché's theorem $g(z)$ has the same number of roots in $D$ as $f(z)$.
AI: Assume $f$ does not have a zero. Applying the maximum modulus principle to $1/f(z)$ gives a contradiction ($f$ constant, contradicting the hypothesis), so we find that any non-constant analytic function that has modulus $1$ on $|z|=1$ has a zero in the unit disk.
Fix $a$ in the open unit disk. Consider the Blaschke factor switching $0$ and $a$. Composing $f(z)$ with this factor gives another analytic function with modulus $1$ on the unit circle, so the composition has a zero in the unit disk. But this means $f(z)$ takes on the value $a$ in the unit disk.
H: The elements of $\mathbb{Z}[\sqrt{-5}]/(2,\sqrt{-5}+1)$
I'm really confused with this one...
How can I determine the elements of the module $\mathbb{Z}[\sqrt{-5}]/(2,\sqrt{-5}+1)$? Or its cardinality?
Does $$\mathbb{Z}[\sqrt{-5}]/(2,\sqrt{-5}+1)=\{\bar{0},\bar{1},\overline{0+\sqrt{-5}}\}?$$
AI: Recall that
$$\mathbb{Z}[\sqrt{-5}]=\{a+b\sqrt{-5}\mid a,b\in\mathbb{Z}\}.$$
Therefore, as an abelian group (i.e., forgetting about the multiplicative structure for a second), we can make an isomorphism.
$$\mathbb{Z}[\sqrt{-5}]\cong\mathbb{Z}^2,\quad 1\mapsto (1,0),\quad \sqrt{-5}\mapsto (0,1).$$
Under this isomorphism, the ideal $(2)\subset\mathbb{Z}[\sqrt{-5}]$ corresponds to the subgroup of $\mathbb{Z}^2$ generated by $(2,0)$ and $(0,2)$. Therefore (again, as abelian groups)
$$\mathbb{Z}[\sqrt{-5}]/(2)\cong \mathbb{Z}^2/\langle (2,0),(0,2)\rangle\cong (\mathbb{Z}/2\mathbb{Z})^2,$$
and because representatives for $\mathbb{Z}^2/\langle (2,0),(0,2)\rangle$ are
$$\overline{(0,0)},\quad \overline{(1,0)},\quad \overline{(0,1)},\quad \overline{(1,1)},$$
we can undo our isomorphism and conclude that representatives for $\mathbb{Z}[\sqrt{-5}]/(2)$ are
$$\overline{0},\quad \overline{1},\quad \overline{\sqrt{-5}},\quad \overline{1+\sqrt{-5}}.$$
Now, $\mathbb{Z}[\sqrt{-5}]/(2,1+\sqrt{-5})$ is just $\mathbb{Z}[\sqrt{-5}]/(2)$ further quotiented by $1+\sqrt{-5}$ (or rather $\overline{1+\sqrt{-5}}$).
You could either determine which of
$$\overline{0},\quad \overline{1},\quad \overline{\sqrt{-5}},\quad \overline{1+\sqrt{-5}}$$
differ by a multiple of $1+\sqrt{-5}$ (remember, this means a $\mathbb{Z}[\sqrt{-5}]$-multiple, not a $\mathbb{Z}$-multiple), or determine what the ideal $(2,1+\sqrt{-5})$ corresponds to as a subgroup of $\mathbb{Z}^2$, or compute representatives modulo $1+\sqrt{-5}$ of these four elements.
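A sketch of the second suggestion done by machine: viewing $(2,1+\sqrt{-5})$ as a subgroup of $\mathbb{Z}^2$, its index there (which is the cardinality of the quotient) equals the gcd of the $2\times 2$ minors of a matrix of generators, since the subgroup has full rank.

```python
from math import gcd
from itertools import combinations

# generators of the ideal (2, 1 + sqrt(-5)) as a subgroup of Z^2 = Z*1 + Z*sqrt(-5):
# 2*1, 2*sqrt(-5), (1 + sqrt(-5))*1, and (1 + sqrt(-5))*sqrt(-5) = -5 + sqrt(-5)
gens = [(2, 0), (0, 2), (1, 1), (-5, 1)]

index = 0
for (a, b), (c, d) in combinations(gens, 2):
    index = gcd(index, abs(a * d - b * c))
print(index)  # 2: the quotient has two elements, with representatives 0 and 1
```

Indeed, $\overline{\sqrt{-5}}=\overline{1}$ (their difference is $(1+\sqrt{-5})-2$) and $\overline{1+\sqrt{-5}}=\overline{0}$, leaving only $\overline{0}$ and $\overline{1}$.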
H: To show that function is constant
Let $f$ be defined on $\mathbb{R}$ and suppose that $|f(x) - f(y)| \leq (x-y)^2$ for all $x,y \in\mathbb{R}$. Here I have to show that $f$ is a constant function.
I think I have to show that $f'(x)$ = 0 for all $x$.
But I don't know where to start. I tried rewriting it as $\frac{|f(x) - f(y)|}{|x-y|} \leq |x - y|$. Am I right in doing so? Any hint or suggestion will be helpful. Thanks
AI: Yes, it suffices to prove that $f'(x) = 0$ for all $x \in \mathbb{R}$. You can use an $\epsilon-\delta$ approach to prove this statement.
Let $\epsilon > 0$ be given; then you need to show that $\exists$ $\delta > 0$ such that
$|\frac{f(x+h) -f(x)}{h} - 0| = |\frac{f(x+h) -f(x)}{h} | < \epsilon$ when $|h|<\delta$
but, by the definition of the function given above we have that
$|\frac{f(x+h) -f(x)}{h}| \leq \frac{h^2}{|h|} = |h|$ . So choose $\delta = \epsilon$, and if
$|h|< \delta$ we have $|\frac{f(x+h) -f(x)}{h} - 0| < \epsilon$. Since $\epsilon > 0$ is arbitrary, we have $f'(x) = 0$ for every $x$; a function with vanishing derivative on all of $\mathbb{R}$ is constant by the mean value theorem.
H: Is it possible to write any bounded continuous function as a uniform limit of smooth functions
Is $C^\infty(\mathbb{R})\subset C_b(\mathbb{R})$ dense? I.e. is any continuous bounded function $f:\mathbb{R}\to\mathbb{R}$ the uniform limit of smooth functions?
On any bounded interval this is true since by Stone-Weierstrass polynomials are smooth and dense in the continuous functions. I guess the result is still true for $\mathbb{R}$ but I don't know how to prove it. Since I don't have any integrability conditions on $f$, the technique of convolving with mollifiers is probably not applicable.
AI: Given continuous $f$ and $\epsilon>0$, find for each $n\in\mathbb Z$ a smooth approximation $g_n\colon [n-1,n+1]\to\mathbb R$ with $\left\lVert f|_{[n-1,n+1]}-g_n\right\rVert_\infty<\epsilon$.
Glue these together with a smooth partition of unity.
For example: Start with a smooth function $\phi\colon[0,1]\to [0,1]$ with $\phi(x)=1$ for $x\le\frac13$ and $\phi(x)=0$ for $x\ge \frac23$.
Then $$g(x):=\phi(x-\lfloor x\rfloor)g_{\lfloor x\rfloor}(x)+(1-\phi(x-\lfloor x\rfloor))g_{\lceil x\rceil}(x)$$
is smooth (the trick is that $g(x)=g_n(x)$ if $|x-n|<\frac13$) and at each $x$ we have $|f(x)-g(x)|<\epsilon$ because it is a convex combination. Therefore, $\lVert f-g\rVert_\infty\le\epsilon$.
H: Neumann problem, stuck on a boundary condition.
I am stuck on a problem that I am trying for exam practice and I would very much appreciate a hint to help me out, here is the section where I am stuck:
A solution is sought to the Neumann problem for $\nabla^2 u = 0$ in the half plane $z > 0$:
$$u = O(|x|^{-a}),\qquad \frac{\partial u}{\partial r} = O(|x|^{-a-1}) \quad \mathrm{as}\ |x| \to \infty,\qquad \frac{\partial u}{\partial z} = p(x, y)\ \text{ on } z = 0,$$
where $a > 0$. It is assumed that
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} p(x, y)\, dx\, dy = 0.$$
Explain why this condition is necessary.
My feeling is that this is to do with Green's third identity and that we need the normal derivative in the $x$-$y$ plane to be integrable in order to find our solution with a Green's function; am I correct?
EDIT:
the divergence theorem sorts this out.
AI: Let $f(z)=\iint u_z\,dx\,dy$. Then $f'(z)=\iint u_{zz} =\iint (-u_{xx}-u_{yy})$. Consider this quantity and also $\lim _{z\to\infty} f(z)$. That should give you the result you want.
H: Question about inverse with respect to convolution product.
Let $\mathcal{I}(X)$ be the collection of real valued functions $f:X\times X\to \mathbb{R}$ with the property that $f(x,y)=0$ when $x>y$. The convolution product $f*g$ for $f,g\in \mathcal{I}(X)$ is defined by $$ f*g(x,y)=\begin{cases}
\sum_{\lbrace z:x\le z\le y\rbrace}f(x,z)g(z,y) & x\le y \\
0 & \text{otherwise} \end{cases}$$
From page 14 of http://www.math.ust.hk/~mabfchen/Math232/Inclusion-Exclusion.pdf, I don't follow the last step.
If the last sum, where $\delta$ is the Kronecker delta, is equal to $\delta(x,y)$, wouldn't that imply the sum representing $g(x,y)$ is $0$?
AI: No. It means, if $x<y$ then
$$\sum_{x\le z\le y} g(x,z)\,f(z,y)=0\,.$$
And, also $g(x,x)\,f(x,x)=1$.
For example, if $y$ covers $x\ $ ($y>x$ and there is no element in between), we have
$$g(x,y)=-\frac1{f(y,y)}\big(g(x,x)\,f(x,y)\big)=-\frac{f(x,y)}{f(x,x)\,f(y,y)}\,.$$
Then
$$g(x,x)\,f(x,y)\ +\ g(x,y)\,f(y,y)\ =\ \frac{f(x,y)}{f(x,x)}\ +\ \frac{-f(x,y)}{f(x,x)}\ =\ 0$$
indeed.
H: Irreducible polynomials have distinct roots?
I know that irreducible polynomials over fields of zero characteristic have distinct roots in its splitting field.
Theorem 7.3 page 27
seems to show that irreducible polynomials over $\Bbb F_p$ have distinct roots in its splitting field (and all the roots are powers of one root). Is the proof correct? I have never seen this result anywhere else. The proof is very convincing to me.
Does the result hold for $\Bbb F_q$ where $q$ is a power of prime? I don't think it holds because I've heard there are irreducible polynomials with repeated roots?
Please help.
AI: Consider a field $F$ of characteristic $p$.
A polynomial has multiple roots only if it has a root in common with its (formal) derivative; that is, the multiple roots of $f$ are the roots of $\gcd(f,f')$. Since $f$ is irreducible, multiple roots can occur only if the $\gcd$ is $f$ itself, that is $f'$ is a multiple of $f$. And that is only possible if $f'=0$, that is, all monomials in $f$ have degree a multiple of $p$, so $f(x)=g(x^p)$ for some polynomial $g$.
If $F$ is finite, then $\phi\colon a\mapsto a^p$ is an automorphism of $F$ (and also of the splitting field $E$ of our polynomial), and there exists $h(x)$ such that $\phi(h)=g$. Then for $\alpha\in E$ with $f(\alpha)=0$ also $h(\alpha)=0$ (because $\phi(h(\alpha))=\phi(h)(\phi(\alpha))=g(\alpha^p)=f(\alpha)=0$). Since $h$ is of smaller degree than $f$, $f$ is not irreducible.
As this proof shows, one has to look for cases where $\phi$ is not an automorphism to find a counterexample (such as in Andreas Caranti's comment).
H: Deducing a coefficient from a cubic polynomial given a divisor and remainder?
I got this question which I don't understand:
"Suppose $x^3 - 2x^2 + a = (x + 2) Q(x) + 3$ where $Q(x)$ is a polynomial. Find the value of a."
I know the identity: $P(x)=A(x)Q(x)+R(x)$, but I'm not sure how to apply it in this question.
Any help would be appreciated, thanks.
AI: Well, when the polynomial is divided by $x+2$ the remainder is $3$, so substitute $-2$ for each $x$ value, and it will equal $3$. Then, you will be able to find $a$.
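For a machine check of this method (a sketch assuming sympy):

```python
from sympy import symbols, div, solve

x, a = symbols('x a')
P = x**3 - 2*x**2 + a
# the remainder on division by x + 2 is P(-2); setting it equal to 3 determines a
quotient, remainder = div(P, x + 2, x)
print(solve(remainder - 3, a))
```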
H: Is a maximal-square-covering unique?
Let X be a shape in 2-dimensional space.
Define a square covering of X as a set of axis-aligned squares, whose union exactly equals X.
Note that some shapes don't have a finite square covering, for example, a circle or a triangle.
Define a maximal square covering of X as a square covering where each square is needed, i.e.:
For every square, there is a point in X that is covered by that square only.
No two squares can be replaced by a larger square that is contained in X.
Is a maximal square covering, defined in this way, unique? (i.e. can there be two different maximal square coverings of the same shape?)
AI: Let $Q(a,b,c,d)$ denote the square having $(a,b)$ and $(c,d)$ as opposite vertices.
Then
$$ \begin{align}X&=Q(-10,1,10,21)\cup Q(-10,-21,10,-1)\cup Q(-2,-1,2,3)\\& = Q(-10,1,10,21)\cup Q(-10,-21,10,-1)\cup Q(-2,-3,2,1)\end{align}$$
shows that uniqueness does not hold.
H: Conditional series convergence guess; Prove/ Disprove
I ran into this question:
Prove or disprove:
If $\sum_{n=1}^{\infty}a_{n}$ is a converging series, but the series $\sum_{n=1}^{\infty}a_{n}^2$ diverges, then $\sum_{n=1}^{\infty}a_{n}$ is conditionally convergent.
I'm pretty sure it's true because I couldn't find any example that shows otherwise, but still, I can't find the proof.
Thanks in advance,
Yaron.
AI: We want to show that if the series $\sum_{n=1}^{\infty}a_{n}^2$ diverges, then $\sum_{n=1}^{\infty}|a_{n}|$ diverges.
To prove this, note that since $\sum a_n$ converges we have $a_n\to 0$, so $a^2_n<1$ for all $n>N$ for some $N\in\mathbb N$, and therefore $|a_n|>a_n^2$ for all $n>N$. Hence if $\sum|a_n|$ converged, $\sum a_n^2$ would converge by comparison, a contradiction.
H: Propositional Logic "Riddle/Puzzle"
I have this kind of 'riddle' as a question that I need to complete; however, I'm not sure what to make of it.
This is the question:
Determine who out of the following is guilty of doping. The suspects are: Sam, Michael, Bill, Richard, Matt.
1) Sam said: Michael or Bill took drugs, but not both.
2) Michael said: Richard or Sam took drugs, but not both.
3) Bill said: Matt or Michael took drugs, but not both.
4) Richard said: Bill or Matt took drugs, but not both.
5) Matt said: Bill or Richard took drugs, but not both.
^ Of these 5 statements, 4 are true, one is false.
6) Tom said: If Richard took drugs, then Bill took drugs.
^ This statement is guaranteed to be true.
From this information, I deduced:
p : Michael took drugs
q : Bill took drugs
r : Richard took drugs
s : Sam took drugs
t : Matt took drugs
So given this I came up with this:
1) (p ^ ~q) v (~p ^ q)
2) (r ^ ~s) v (~r ^ s)
3) (t ^ ~p) v (~t ^ p)
4) (q ^ ~t) v (~q ^ t)
5) (q ^ ~r) v (~q ^ r)
6) (~r v q)
However, I'm not sure where to go from here. I suppose I could connect each statement with an ^, as that seems like the next step. Then that entire formula would essentially tell me who is guilty? The next step would obviously be to simplify and come up with a name. However, I'm not sure how to do this.
Could anyone please shed some light and give me some tips on how to do this?
Thanks.
AI: There are two things you need to notice.
First, statements one, three and four cannot all be true: each of Michael, Bill and Matt appears in exactly two of these three exclusive-or claims, so the three claims XOR to false, whereas three true statements would XOR to true. So the false statement must be one of those three.
So both statements 5 and 6 are true. If Richard took drugs then Bill both did and didn't take drugs. Therefore Richard did not and Bill did.
So by statement 2 (which we know to be true) Sam also took drugs.
There's no way of pinning down Matt and Michael from here: we can declare any one of statements 1, 3 and 4 to be false and get a consistent scenario (respectively: Michael but not Matt, neither of them, or Matt but not Michael).
So all you can say for sure is that Bill and Sam took drugs and Richard did not; whether Matt or Michael did is undetermined.
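A brute-force check over all $2^5$ guilt assignments (a sketch) confirms this: exactly three assignments survive, and in all of them Bill and Sam are guilty and Richard is not, while Matt and Michael vary.

```python
from itertools import product

names = ["Sam", "Michael", "Bill", "Richard", "Matt"]
for sam, mic, bil, ric, mat in product([False, True], repeat=5):
    claims = [mic ^ bil, ric ^ sam, mat ^ mic, bil ^ mat, bil ^ ric]
    tom = (not ric) or bil                     # statement 6, guaranteed true
    if sum(claims) == 4 and tom:               # exactly four of statements 1-5 hold
        print([n for n, g in zip(names, (sam, mic, bil, ric, mat)) if g])
```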
H: Definition of Sobolev Space
I have a definition that says that the space of functions that satisfy$$\|u\|_{H^m}^2=\sum_{k\in\mathbb{Z}}(1+|k|^2)^m|\hat{u}_k|^2<\infty$$is called Sobolev Space and when $m=1$, this is equivalent to saying,
$$\int_0^{2\pi}(|u(x)|^2+|u'(x)|^2)dx<\infty$$
I have a few questions. The Sobolev space is the space of functions whose derivatives are well behaved in some sense (??). Or at least the norm incorporates some information about the derivative.
What is $m$ in this context?
I do not see any information about derivative in first statement.
That is obvious in second case. So the natural questions is:
How does second become equivalent to first when $m=1$?
P.S. The last statement is perfectly clear and makes perfect sense, since it says both the derivative and the function are bounded in a certain sense.
AI: In this context, $m$ can be a real number.
We can find a link between the Fourier coefficients of a function and of its derivative. Hence in the case $m=1$ it's a consequence of Parseval's equality.
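To spell out the case $m=1$ (a sketch, assuming the normalization $\hat{u}_k=\frac{1}{2\pi}\int_0^{2\pi}u(x)e^{-ikx}\,dx$ for $2\pi$-periodic $u$): differentiation multiplies the $k$-th Fourier coefficient by $ik$, so Parseval's equality gives
$$\int_0^{2\pi}\left(|u(x)|^2+|u'(x)|^2\right)dx=2\pi\sum_{k\in\mathbb{Z}}\left(|\hat{u}_k|^2+|k|^2|\hat{u}_k|^2\right)=2\pi\sum_{k\in\mathbb{Z}}(1+|k|^2)|\hat{u}_k|^2,$$
which is finite exactly when $\|u\|_{H^1}<\infty$.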
H: Why doesn't the Weierstrass approximation theorem imply that every continuous function can be written as a power series?
I hope that my question in the title is well formulated.
I am a little bit confused with the next exercise from a book:
Argue that there exist functions $f \in C[0, \frac{1}{2}]$ that cannot be written in the form
$$
f(x) = \sum_{k=0}^{\infty}c_kx^k, x\in(0, \frac{1}{2}).
$$
But Weierstrass' theorem states that for every $\epsilon>0$ there is a polynomial $P$ with $||f(x)-P(x)||_{\infty} < \epsilon$ for $x\in[a,b]$.
The statement of the exercise refers to an open and bounded interval, and Weierstrass to a closed and bounded interval for the polynomial. Is this the key? To be honest, I do not see why.
AI: Weierstrass' theorem tells us that on a compact interval, given a continuous function, there is a sequence of polynomials converging uniformly to this function. But these polynomials are not the partial sums of a single power series $\sum_{k=0}^{\infty}c_kx^k$ with fixed coefficients: they are of the form $\sum_{k=0}^{N_n}c_{n,k}x^k$, where the coefficients depend on $n$.
To get a concrete example, consider $\left|x-\frac 14\right|$. This cannot be written as such a power series, because a convergent power series is differentiable at every interior point of its interval of convergence, in particular at $1/4$, while this function is not.
H: Evaluating $\lim _{n \rightarrow \infty} (2n+1) \int_0 ^{1} x^n e^x dx$
Could you help me evaluate $\lim _{n \rightarrow \infty} (2n+1) \int_0 ^{1} x^n e^x dx$?
I've calculated that the recurrence relation for this integral is:
$\int_0 ^{1} x^n e^x dx = x^ne^x | ^{1} _{0} - n \cdot \int_0 ^{1} x^{n-1} e^x dx$
So if we let $I_n = \int_0 ^{1} x^n e^x \ dx$, we get $I_n = \left.x^ne^x \,\right |^1 _0 - n \cdot I_{n-1}$.
Can this be useful here?
I would appreciate all your help.
AI: A related technique. You can use the integration by parts technique by letting $u=e^{x}$, which leads to
$$ I_n = \left( 2\,n+1 \right) \left( \frac{e}{n+1}-\frac{e}{(n+1)(n+2)}+\int _{0}^{1}\frac {x^{n+2}e^{x}}{ \left( n+2 \right) \left( n+1 \right) }\,dx \right) .$$
$$\implies \lim_{n\to \infty}I_n = 2 \,e + 0 + \lim_{n\to \infty } \int _{0}^{1}\frac {(2n+1)\,x^{n+2}e^{x}}{ \left( n+2 \right) \left( n+1 \right) }\,dx $$
$$ \implies \lim_{ n\to \infty } I_n = 2 e . $$
Note that the interchange of the limit with the integral is due to the uniform convergence to $0$ on $[0,1]$ of the sequence $$ \frac {(2n+1){x}^{n+2}}{ \left( n+2 \right) \left( n+1 \right) }. $$
Here is a technique for proving uniform convergence.
Added: Integration by parts,
$$ \int u\, dv = u\,v -\int v \,du. $$
So, in your case $ u = e^{x} $ and $ dv = x^n dx $.
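A numerical sanity check of the limit (a sketch using mpmath):

```python
import mpmath

for n in (10, 100, 1000):
    I_n = (2*n + 1) * mpmath.quad(lambda x: x**n * mpmath.exp(x), [0, 1])
    print(n, I_n)
print("2e =", 2 * mpmath.e)
```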
H: what does "in wide sense" mean?
I came across the statement "the sequence increases(in wide sense)".
So my doubt is: what does the author mean by "wide sense"? I came across this in a number theory book.
AI: "The sequence increases in wide sense" means that it is not necessarily strictly increasing ($a_n>a_{n-1}$), but it is non-decreasing: $a_n\geq a_{n-1}$.
H: Is there an extension of a function from a curve to the whole space preserving smoothness?
Assume that $\alpha: (a,b) \rightarrow \mathbb R^3$ and $f: (a,b) \rightarrow \mathbb R$ are given smooth functions. Let $t_0 \in (a,b)$.
Do there exist a $\delta>0$ and a smooth function $V: \mathbb R^3 \rightarrow \mathbb R$ such that
$V(\alpha(t))=f(t)$ for $t\in (t_0-\delta, t_0+\delta)$ ?
Edit. Assume additionally that $\alpha'(t)\neq 0$ for all $t\in (a,b)$.
AI: Yes. It is enough to have $v:=\alpha'(t_0)\ne 0$.
Then in a neighbourhood of $t_0$, the map $t\mapsto \langle \alpha(t)-\alpha(t_0),v\rangle$ is injective and smooth and has a smooth inverse around $0$ with $0\mapsto t_0$. Let $g\colon \mathbb R\to (a,b)$ be a smooth function that coincides with this inverse near $0$.
Define $$V(x)=f(g(\langle x-\alpha(t_0),v\rangle)).$$
Then for $t\approx t_0$, we have
$$ V(\alpha(t))=f(g(\langle \alpha(t)-\alpha(t_0),v\rangle))=f(t).$$
H: Why does the substitution method not work for $\int (x-\frac{1}{2x} )^2\, \mathrm dx$?
Why is $$\int \ \left(x-\frac{1}{2x} \right)^2 \, \mathrm dx$$ easy to integrate once $$\left(x-\frac{1}{2x} \right)^2$$ is expanded, but seemingly impossible using the substitution method? (I tried 5 different substitutions, but of course that is not a proof that there is no suitable substitution.) If mathematical results are independent of the logical methods used to derive them, why does something so simple work one way but not the other?
AI: The situation you describe is not too dissimilar to the following: imagine standing in front of a wall with a few doors in it. You want to go in. At first you don't notice any doors so you wander up and down in front of the wall, until eventually you realize there was a door right in front of you all along. Then since you believe there's only one room inside, you are surprised that there wasn't a door everywhere else on the wall outside. Does this make clearer why your surprise is unfounded?
Put another way: the fact that all correct routes lead to the same place doesn't mean that every proposed route exists.
H: About $\mathcal{L}(V,W)$
Let $V,W$ be two vector spaces and let $S\subseteq V$. Define: $$S^{0}=\{T\in\mathcal{L}(V,W)\mid~T(x)=0, \forall x\in S\}$$ The problem asks me to verify that $S^{0}$ is a subspace of $\mathcal{L}(V,W)$ and that if $V_1,V_2$ are two subspaces, then $$(V_1+V_2)^{0}=V^0_1\cap V^0_2$$ Please help me to overcome the second part. I did the first one. Thank you!
AI: You can do this by a double inclusion. Suppose that $T\in(V_1+V_2)^0$; then for every $x\in V_1$ we have that $T(x)=0$, and similarly for $x\in V_2$, therefore $T\in V_1^0$ and $T\in V_2^0$, so $T$ is in the intersection.
The opposite direction is similar, and I leave it to you. Note that you will have to use the definition of $V_1+V_2$ and the linearity of $T$ here.
H: Prove divisibility using linear congruences
I need to prove that:
$$10|(53^{53} - 33^{33})$$
I can and should only use linear congruences ($a \equiv b \mod n$) - how can I do this?
AI: $$53^{53}-33^{33}\equiv 3^{53}- 3^{33}\pmod {10}$$
Now, $$3^4=81\equiv 1\pmod {10}\implies 3^{53}- 3^{33}\equiv 3^1-3^1\pmod {10}\equiv0$$ as $33\equiv1\pmod 4, 53\equiv1\pmod 4$ (the exponents can be reduced modulo $4$, the multiplicative order of $3$ modulo $10$)
Alternatively, $$3^{53}- 3^{33}\equiv 3^{33}(3^{20}-1)\pmod {10}$$
Now, $3^4\equiv1\pmod {10}\implies 3^{20}=(3^4)^5\equiv1^5\pmod {10}\equiv1$
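This is easy to confirm with modular exponentiation:

```python
print(pow(53, 53, 10), pow(33, 33, 10))  # both 3, so the difference is 0 mod 10
print((53**53 - 33**33) % 10)            # 0
```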
H: Solving modular equations that gives GCD = 1
I have problems with understanding modular equations where the GCD is 1. For example:
$$3x \equiv 59 \mod 100$$
So I'm getting $GCD(3, 100) = 1$. Now:
$1 = -33*3 + 100$
That's where the first question appears: I always guess those $-33$ and $1$ (here) numbers... is there a systematic way to find them?
And the second question: the answer to that equation, at least according to the book, is:
{$-33*59 + 100k$} ($k$ an integer). Why is that so? Where did the $59$ come from?
AI: So, $3x=59+100y$ where $y$ is an integer
$\implies 3x=100y+59(100-3\cdot33)$
$\implies 3(x+59\cdot 33)=100(y+59)$
$\implies \frac{3(x+59\cdot 33)}{100}=y+59$ which is an integer
$\implies 100$ divides $3(x+59\cdot 33)$
$\implies 100$ divides $(x+59\cdot 33)$ as $(3,100)=1$
So, $x+59\cdot 33=100\cdot k$ where $k$ is an integer, i.e. $x=-33\cdot 59+100k$, which is the book's answer.
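To answer the "I always guess those numbers" part: the coefficients in $1 = -33\cdot3 + 100$ come from the extended Euclidean algorithm. A sketch (`egcd` is an ad hoc helper name):

```python
def egcd(a, b):
    # returns (g, u, v) with u*a + v*b == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, u, v = egcd(b, a % b)
    return g, v, u - (a // b) * v

g, u, v = egcd(3, 100)
print(g, u, v)         # 1 -33 1, i.e. 1 = -33*3 + 1*100
print((u * 59) % 100)  # 53, the least non-negative solution; check: 3*53 = 159 = 59 + 100
```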
H: What does this dollar sign over arrow in function mapping mean?
In a certain function mapping like this,
$x \xleftarrow{\$} \{0,1\}^k$
(Lecture Notes on Cryptography by
S. Goldwasser and M. Bellare, page 18)
I fail to understand what exactly this \$ sign means. It has been put
here without any explanation or further elaboration.
It may be very trivial or may be very silly of me asking it like this, but
I want to understand what it means.
Google search (function, dollar, arrow) has not returned any result.
http://en.wikipedia.org/wiki/List_of_mathematical_symbols does not mention anything as well.
AI: It is not standard notation as far as I know. I searched for dollar signs in the document, and found that he defines the notation 45 pages later, on page 63 (there it means that $x$ is sampled uniformly at random from $\{0,1\}^k$).
H: What is $-i$ exactly?
We all know that $i$ doesn't have any sign: it is neither positive nor negative. Then how can people use $-i$ for anything?
Also, we define $i$ a number such that $i^2 = -1$. But it can also be seen that $(-i)^2 = -1$. Then in what way is $-i$ different from $i$? If it isn't, then why do we even have something like a $-i$?
AI: The vector $(1,-2)$ is also neither positive nor negative, yet there is value in having the operation $-(1,-2)=(-1,2)$.
Note: there is no natural meaning for "positive" for vectors, this is just an analogy.
H: convergence of series with absolute value
prove or show false:
if $\sum_{n=1}^{\infty}\left |a_{n} \right |$ converges, then $\sum_{n=1}^{\infty}\frac{n+1}{n}a_{n}$ converges as well.
Thank you very much in advance,
Yaron.
AI: Just notice that
if $\sum_{n\geq 1}|a_n|<\infty$, then $\sum_{n\geq 1}a_n$ converges. Also, if $\sum_{n\geq 1}a_n$ converges, then $\sum_{n\geq 1}\frac{a_n}{n}$ converges too by Dirichlet's test, so we get
$$ \sum_{n \geq1}\frac{n+1}{n}a_n = \sum_{n \geq 1}a_n + \sum_{n \geq 1}\frac{a_n}{n},$$
which converges.
H: Approximating a rational with a fraction with the "smallest numerator and denominator possible"
For example $0.795=\frac{159}{200}$. But is there a way to find a fraction with smaller numerator and denominator that will represent a number $0.795xyz\ldots$, i.e. it will approximate our given number?
I need an algorithm or some procedure for this.
AI: The smallest denominator $q$ is $q=44$ corresponding to $$\frac pq=\frac{35}{44}=0.795\overline{45}$$
To see this, note that
$$ \frac ab=\frac{31}{39}=0.79487\ldots$$
is too small and
$$\frac cd=\frac{4}{5}=0.8$$
is too big.
For any fraction with $\frac ab<\frac pq<\frac cd$, we have
$$\frac pq-\frac ab=\frac{bp-aq}{bq}>0\qquad\frac cd-\frac pq=\frac{cq-dp}{dq}>0,$$
hence $$ bp-aq\ge 1\qquad cq-dp\ge 1$$
and finally (because $bc-ad=1$)
$$ q=(bc-ad)q=d(bp-aq)+b(cq-dp)\ge b+d=44.$$
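A direct search confirms the answer (a sketch: for each denominator $q$, try the smallest $p$ with $p/q \ge 0.795$):

```python
from fractions import Fraction

lo, hi = Fraction(795, 1000), Fraction(796, 1000)  # "0.795xyz..." means lo <= p/q < hi
q = 1
while True:
    p = -((-lo.numerator * q) // lo.denominator)   # ceil(lo * q)
    if Fraction(p, q) < hi:
        print(p, q)                                # 35 44
        break
    q += 1
```

For a procedure that scales, this interval search is exactly what continued fractions (equivalently, a walk down the Stern-Brocot tree) accomplish without trying every denominator.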
H: Trigonometric Identity Problem - Cos Tan and Sin
I have been going through my lecture notes for a structures question (the solution of a 2nd order ode for a buckling problem) when I came across a very weird trigonometric simplification which I just cannot get my head around. Could anyone shed any light on this?
Further down the page, there is another similar simplification, but this one seems to be incorrect:
Testing some random values, the simplification is incorrect unless I replace the C/2 with a C.
AI: This simplification is correct as
$$\cos t+\tan 2t\sin t=\cos t+\frac{\sin 2t}{\cos 2t}\sin t=\frac{\cos t\cos 2t+\sin t\sin 2t}{\cos 2t}=\frac{\cos(2t-t)}{\cos2t} $$
Similarly, $$-\sin t+\tan2t\cos t=-\sin t+\frac{\sin2t\cos t}{\cos2t}=\frac{\sin2t\cos t-\sin t\cos2t}{\cos2t}=\frac{\sin(2t-t)}{\cos2t}$$
Put $t=\frac{\mu'L}4$
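Both simplifications can be checked symbolically (a sketch assuming sympy):

```python
from sympy import symbols, sin, cos, tan, simplify

t = symbols('t')
print(simplify(cos(t) + tan(2*t)*sin(t) - cos(t)/cos(2*t)))   # 0
print(simplify(-sin(t) + tan(2*t)*cos(t) - sin(t)/cos(2*t)))  # 0
```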
H: Cohomology $H^ {i}(\mathbb{R}P^n, \mathbb{Z}_2)$
I have to compute $H^{i}(\mathbb{R}P^2,\mathbb{Z}_2)$. I know that it is $\mathbb{Z}_2$ for $i=0,1,2$, but I'm looking for a proof without the universal coefficient theorem. Do you have some ideas?
AI: If you look at Hatcher page 144 he writes down the cellular chain complex for $\Bbb{R}P^n$ for both even and odd $n$. Consider the case now when $n$ is even and our coefficient theory is in $\Bbb{Z}/2\Bbb{Z}$. Then your cellular chain complex is looking like
$$0 \to \Bbb{Z}/2\Bbb{Z} \stackrel{2\cdot}{\longrightarrow} \Bbb{Z}/2\Bbb{Z} \stackrel{0}{\longrightarrow} \ldots \stackrel{0}{\longrightarrow} \Bbb{Z}/2\Bbb{Z} \to 0.$$
However because our chain groups are now $\Bbb{Z}/2\Bbb{Z}$ the multiplication by two map is the zero map. If you now apply the functor $\hom(-,\Bbb{Z}/2\Bbb{Z})$ you will get the exact same chain complex but with arrows going the other way. Now what happens when you take the homology of a chain complex where all the maps are the zero maps?
H: Method for partial fractions.
For Example:
$$\frac{ax^2 + bx+c}{(dx+e)(fx^2+g)}\equiv\frac{A}{dx+e}+\frac{Bx+C}{fx^2+g}$$
and
$$\frac{ax^4 + bx^3+cx^2+dx+e}{(x+f)(x^2+g)}\equiv Ax+B+\frac{C}{x+f}+\frac{Dx+E}{x^2+g}$$
How do you know how to format the right-hand side of the equation below in partial fractions? (What is the 'pattern'?)
$$\frac{2ax}{(x-2a)(x^2+a^2)}$$
where a is a non zero constant
Could you look at it as the top of the fraction being a simple derivative of the bottom? If so how would you know to add the Ax+B in the latter example?
AI: Step 1: Look at the orders of the numerator and the denominator. For instance, in the second example, the order of the numerator is 4 ($x^4$ is the largest power) and the order of the denominator is 3 ($x\cdot x^2=x^3$ is the largest power). If $n\geq m$, where $n$ is the order of the numerator and $m$ is the order of the denominator, then you need to add polynomial terms up to order $n-m$. In this case, up to order $1$, so you have
$$
Ax+B+\frac??+\frac??
$$
Step 2: Separate out the factors in the denominator into separate fractions. So in your question case, you have
$$
\frac{2ax}{(x-2a)(x^2+a^2)} = \frac?{x-2a}+\frac?{x^2+a^2}
$$
Step 3: Give each numerator powers up to, but not including the order of the denominator. So the denominator of the first fraction is order 1, so you have up to order 0, meaning just a constant, $A$. The denominator of the second fraction is order 2, so you have up to order 1, so it's $Bx+C$, giving
$$
\frac{2ax}{(x-2a)(x^2+a^2)} = \frac{A}{x-2a}+\frac{Bx+C}{x^2+a^2}
$$
If a denominator had a cubic, order 3, then the numerator would be up to order 2, so $Ax^2+Bx+C$.
EDIT: Note that you may include a fourth step if you have a repeated factor. If you have $(x-a)^2$ as a factor of the denominator, then you could write
$$
\frac{Ax+B}{(x-a)^2}
$$
But you could also write it as
$$
\frac{A}{x-a}+\frac{B}{(x-a)^2}
$$
which is equivalent for different values of $A$ and $B$. It is often viewed as better to use the latter expression.
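sympy's `apart` carries out exactly this bookkeeping; a sketch on the fraction from the question:

```python
from sympy import symbols, apart

x, a = symbols('x a', positive=True)
print(apart(2*a*x / ((x - 2*a) * (x**2 + a**2)), x))
# expect 4/(5*(x - 2*a)) + (2*a - 4*x)/(5*(x**2 + a**2))
```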
H: Solving an Euler-equation-like 2nd order DE with a forcing RHS
For a homework problem, I have to solve
$$ t^2 \ddot{x} - 3 t \dot{x} + 3x = t^2 $$
which seems quite similar to the Euler Equation, which I would know how to solve, apart from the disturbing $ t^2 $ on the RHS instead of a constant $0$.
How does one approach such types of DE? Would it make a difference if there were a constant $ a \neq 0$ on the RHS?
Thanks for any directions!
AI: In general, an equation of the form:
$$\mathcal{L}(x)=f(t)$$
Where $\mathcal{L}$ denotes some differential operator can be solved by finding the form of a general solution "$y_h(t)$"to the homogeneous equation (where the sum of any two solutions is again a solution):
$$\mathcal{L}(y_h)=0$$
And then finding any particular solution $y_p(t)$ that satisfies the inhomogeneous equation (where the sum of any two solutions is not necessarily another solution):
$$\mathcal{L}(y_p)=f(t)$$
Then the full solution to the original problem will be $x(t) = y_h(t)+y_p(t)$.
Whilst solving the homogeneous problem can usually be approached in a formulaic way, finding particular solutions is usually more of a case-by-case business. We frequently call the function $f$ the "forcing" of the equation, and it is frequently best to attempt to find a particular solution by looking for something that "looks like" the forcing, plug it into the left hand side and see if it works.
When the forcing is a polynomial, try plugging in a general polynomial of the same degree into the left hand side $(x=at^2+bt+c)$, and then compare coefficients with the right hand side to determine if there exist some real numbers $(a,b,c)$ such that the equation works.
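For the equation at hand, the Euler ansatz $x=t^r$ for the homogeneous part gives $r^2-4r+3=0$, so $y_h=c_1t+c_2t^3$, and plugging $y_p=At^2$ into the left hand side gives $(2-6+3)At^2=-At^2$, so $A=-1$. A symbolic check (a sketch assuming sympy):

```python
from sympy import Function, Eq, dsolve, symbols

t = symbols('t', positive=True)
x = Function('x')
ode = Eq(t**2 * x(t).diff(t, 2) - 3*t*x(t).diff(t) + 3*x(t), t**2)
print(dsolve(ode, x(t)))  # expect x(t) = C1*t + C2*t**3 - t**2
```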
H: Showing that a function has a certain absolute minimum.
Suppose we have the function $$f(x) = \frac{x}{p} + \frac{b}{q} - x^{\frac{1}{p}}b^{\frac{1}{q}}$$ where $x,b \geq 0$, $p,q > 1$ and $\frac{1}{p}+\frac{1}{q} = 1$.
I am trying to show that $f$ attains its absolute minimum at $x=b$.
I proceeded as follows:
$$\frac{df(x)}{dx} = \frac{1}{p} - \frac{x^{\frac{1}{p}-1}}{p} b^{\frac{1}{q}} = \frac{x - x^{\frac{1}{p}} b^{\frac{1}{q}}}{px}$$
Now I will look for critical points by searching for the zeros of this function.
$$\frac{x - x^{\frac{1}{p}} b^{\frac{1}{q}}}{px} = 0 \iff x - x^{\frac{1}{p}} b^{\frac{1}{q}} = 0 \iff x = x^{\frac{1}{p}} b^{\frac{1}{q}}$$.
Now I can see that $b$ is a critical point.
However, when I continue my calculations to check whether there are any other critical points,
$$x = x^{\frac{1}{p}} b^{\frac{1}{q}} \implies x^p = b^{\frac{p}{q}}x \implies x^{p-1} = b^{\frac{p}{q}} \implies x = b^{\frac{p}{(p-1)q}}$$
But this could not be equal to $b$, where did I go wrong?
AI: Note that $$\dfrac{p}{q(p-1)}=1$$ since $\dfrac1p + \dfrac1q = 1$. This is because we have $\dfrac1q = 1 - \dfrac1p = \dfrac{p-1}p \implies \dfrac{p}{q(p-1)} = 1$.
H: What is the sum of this infinite series? Which one is it, Taylors? Binomial?
I am trying to figure out which formula to use for this one.
$$\displaystyle\sum\limits_{x=-1}^{-\infty} -x(1-y)p(1-p)^{-x}+\sum\limits_{x=1}^\infty xyp(1-p)^x$$
where $0<y<1$, and $0<p<1$.
My work:
$$=\displaystyle\sum\limits_{x=1}^{\infty} -x(1-y)p(1-p)^{x}+\sum\limits_{x=1}^\infty xyp(1-p)^x$$
next step,
$$=\displaystyle\sum\limits_{x=1}^{\infty} p(1-p)^{x}(2xy-x)$$
This doesn't look like anything. Does anyone recognize this?
AI: We have, provided $|a|<1$, that $$\sum_{n=0}^\infty na^n=\frac{a}{(a-1)^2}$$
In your problem, we may rewrite $$\sum_{x=1}^\infty p(1-p)^x(2xy-x)=(2y-1)p\sum_{x=1}^\infty x(1-p)^x$$
Applying that first identity, we find the sum to be $$\frac{(2y-1)p(1-p)}{p^2}=\frac{(2y-1)(1-p)}{p}$$
Edit: Deriving the formula, per request.
We start with the geometric series $$\sum_{n=0}^\infty a^n=\frac{1}{1-a}$$
We take $\frac{d}{da}$ of both sides to get $$\sum_{n=0}^\infty na^{n-1}=\frac{1}{(1-a)^2}$$
We multiply both sides by $a$ to get $$\sum_{n=0}^\infty na^{n}=\frac{a}{(1-a)^2}$$
H: When does $\int |f|=\left|\int f\right|$ hold?
I was just wondering when the equality holds in the following inequality:
$$\left|\int_{R^d}f(x)\, d x\right|\leq\int_{R^d}|f(x)|\, d x$$
where $f:R^d\to R$ is Lebesgue integrable on $R^d$.
Obviously, if $f\geq0$ a.e. on $R^d$, the equality will hold trivially, but I don't know whether it is the only case.
AI: Let $E_+$ be the set where $f\geq 0$ and let $E_-$ be the set where $f<0$. Then $\int f=\int_{E_+}f+\int_{E_-}f=\int_{E_+}|f|-\int_{E-}|f|$. This is equal to $\int|f|=\int_{E_+}|f|+\int_{E_-}|f|$ in absolute value if and only if either $\int_{E_+}f$ or $\int_{E_-}f$ is $0$. An integral $\int_{E_+}f$ of a non-negative function is $0$ if and only if $f=0$ almost everywhere on $E_+$. Thus you have equality if and only if $f\geq 0$ almost everywhere or $f\leq 0$ almost everywhere.
H: HINT for summing digits of a large power
I recently started working through the Project Euler challenges, but I've got stuck on #16 (http://projecteuler.net/problem=16)
$2^{15} = 32768$ and the sum of its digits is $3 + 2 + 7 + 6 + 8 = 26$. What is the sum of the digits of the number $2^{1000}$?
(since I'm a big fan of generality, my interpretation is to find a solution to the sum of digits of $a^b$ in base $c$, and obviously I'm trying to solve it without resorting to "cheats" like arbitrary-precision numbers).
I guess this is simpler than I'm making it, but I've got no interest in being told the answer so I haven't been able to do a lot of internet searching (too many places just give these things away). So I'd appreciate a hint in the right direction.
I know that $2^{1000} = 2^{2*2*2*5*5*5} = (((((2^2)^2)^2)^5)^5)^5$, and that the repeated sum of digits of powers of 2 follows the pattern $2, 4, 8, 7, 5, 1$, and that the last digit can be determined by an efficient pow-mod algorithm (which I already have from an earlier challenge), but I haven't been able to get further than that… (and I'm not even sure that those are relevant).
AI: In this case, I'm afraid you just have to go ahead and calculate $2^{1000}$. There are various clever ways to do this, but for numbers this small a simple algorithm is fast enough.
The very simplest is to work in base 10 from the start. Faster is to work in binary, and convert to base 10 at the end, but this conversion is not completely trivial. A compromise (on a 32-bit machine) would be to work in base 1000000000, so that you can double a number without overflow. Then at the end the conversion to base 10 is much simpler.
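A sketch of that compromise (limbs in base $10^9$, so that doubling never exceeds what a 32-bit register could hold; the names are ad hoc):

```python
BASE = 10**9

def double(limbs):
    # double a little-endian list of base-BASE digits
    out, carry = [], 0
    for limb in limbs:
        carry, digit = divmod(2 * limb + carry, BASE)
        out.append(digit)
    if carry:
        out.append(carry)
    return out

n = [1]
for _ in range(1000):
    n = double(n)

# conversion to base 10 is now easy: most significant limb first, the rest zero-padded
decimal = str(n[-1]) + "".join(str(limb).zfill(9) for limb in reversed(n[:-1]))
print(sum(int(d) for d in decimal))  # prints the digit sum (not spoiled here)
```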
H: Sample $x$ from $g(x)$
I got confused with all this randomness and these probability functions. I was trying to implement the rejection sampling method, which (apparently) is really simple. I was reading from Rejection Sampling on Wikipedia, and the first step says sample $x$ from $g(x)$ and $u$ from $U(0,1)$. What does this mean?
If I have a uniform distribution, when someone says "Generate/Sample/Draw a sample from $U(0,1)$", it means that I have to take a random value from the uniform distribution, right? In Matlab terms, using the function rand will generate a sample from $U(0,1)$, right?
Now, take any pdf $g(x)$ defined over $(a, b)$. If I want to sample from $g(x)$, do I have to pick a value of $x$ between $a$ and $b$ and evaluate the function $g(x)$? Is this correct?
Then to pick the value of $x$ I can use a uniform distribution $U(a,b)$ to sample from, is this true?
I got confused easily. Thanks in advance.
UPDATE
I appreciate your help. But this is still not clear to me. Say I have a probability density function $g(x)=3x+1$. What would be the result of drawing a sample from $g(x)$, or of picking an $x_0$ distributed according to $g(x)$?
UPDATE
I think I got it. I was confused by the graph of the PDF. When someone asks "draw a sample from a PDF, say $g(x)$", it should return a value for $x$, and this value has to be distributed according to the function; that means that more samples will be drawn where the value of $g(x)$ is bigger. Does this sound about right?
Finally, if I want to use the function I mentioned before, $g(x)=3x+1$, I have to use for instance the inverse method to draw samples from that distribution properly.
I think the confusion arose because I thought of the value of $g(x)$ as the probability, but this was wrong; the value of $g(x)$ is some kind of density (is this correct?). Hope you can correct me or confirm my hypotheses.
AI: I think the Wikipedia article is misleading here.
Rather than saying sample from $g(x)$ which doesn't strictly speaking make sense it should read sample from a distribution whose probability density function is $g$.
A distribution is a way of describing a random number. To specify the distribution of $X$ I need to be able to calculate the probability $\mathbb P(a < X \leq b)$ for every pair $a \le b$. For a continuous random variable it's enough to specify its probability density function $g$.
So if $X$ has probability density function $g$ then
$$\mathbb P(a < X \leq b) = \int_a^b g(x) dx$$
Alternatively I can specify the cumulative distribution function
$$G(x) = \mathbb P(X\leq x) = \int_{-\infty}^x g(y)dy.$$
Now I can specify $\mathbb P(a < X \leq b) = G(b) - G(a)$.
The reason we use PDF's is that sometimes we can't write down the cumulative distribution function, because not all integrals have nice results.
Now if $X$ has cumulative distribution function $G$ I can sample from $X$ simply by taking a uniform $(0,1)$ random variable $U$ and setting $X = G^{-1}(U)$. Because $G$ must be increasing we have
$$\begin{array}{rl}\mathbb P(X<x) &= \mathbb P (G(X) <G(x)) \\&=\mathbb P(U < G(x)) \\&=G(x)\end{array}$$
So if $X$ has a probability density function $g$ we need to integrate $g$ and find the inverse. Then I can sample using the uniform distribution.
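Pulling the pieces together for the asker's example (a sketch; normalizing $g(x)=3x+1$ on $[0,1]$ is my assumption, since the question never fixed an interval): the pdf is $(3x+1)/2.5$, the cdf is $G(x)=(1.5x^2+x)/2.5$, and $G^{-1}$ comes from the quadratic formula.

```python
import math
import random

def sample():
    u = random.random()                    # u from U(0, 1)
    return (-1 + math.sqrt(1 + 15*u)) / 3  # x = G^{-1}(u)

xs = [sample() for _ in range(100_000)]
# sanity check: the true mean is the integral of x*(3x+1)/2.5 over [0,1], which is 0.6
print(sum(xs) / len(xs))
```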
H: Finding the steady state error in the Laplace domain
I have the following block diagram:
Now I'd like to find the steady state error for $\theta_{ref}$ being a step input and for several values of $n$, $T_d$, $K_1$ and $K_2$.
For the moment we can assume all gains are simple scalars, so no vectors and matrices involved.
I'd like to find an expression for $e(s)$ so that I can apply the final value theorem.
I came up with:
$e(s) = \theta_{ref}(s) - \theta(s)$
$\theta(s) = (e(s)K_2-n\cdot M)N^{-1}....$
But I don't know how to handle the inner loop.
I have Matlab / Simulink on my computer, so if I can obtain steady state error information using this software it is fine with me.
AI: If you're looking for tracking error, you can use Mason's formula to find the transfer function between $\Theta_{ref}$ and $\Theta$. First find the forward path gains. There is only one forward path:
$$F_1 = K_{2}*\frac{1}{N}*K_1\frac{1}{s}*N*\frac{1}{s}=\frac{K_1K_2}{s^2}$$
Now find the loop gains. There are two loops, the inner loop ($L_1$) and the outer loop ($L_2$):
$$L_1=-1*K_1*\frac{1}{s}=-\frac{K_1}{s}$$
$$L_2=-1*K_2*\frac{1}{N}*K_1*\frac{1}{s}*N*\frac{1}{s}=-\frac{K_1K_2}{s^2}$$
Finding the determinant and plugging into Mason's formula, you will obtain:
$$\frac{\Theta}{\Theta_{ref}}=\frac{K_1K_2}{s^2+K_1s+K_1K_2}$$
$$T_e(s)=1-\frac{\Theta}{\Theta_{ref}}$$
You can do steady state analysis on this transfer function to find the tracking error, and you could use the same process to find the transfer function for the noise and disturbances and find the steady state error for those inputs as well.
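The final value theorem then gives the steady-state tracking error for a unit step as $\lim_{s\to0}s\,T_e(s)\frac{1}{s}=T_e(0)$; a symbolic check (a sketch assuming sympy):

```python
from sympy import symbols, limit, simplify

s, K1, K2 = symbols('s K1 K2', positive=True)
T = K1*K2 / (s**2 + K1*s + K1*K2)
print(simplify(limit(1 - T, s, 0)))  # 0: no steady-state error for a step reference
```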
H: Questions on Nilpotent Operator
I've some questions from the following theorem:
Let $T:V\to V$ be a nilpotent linear operator with index of nilpotency $k.$ Then $T$ has a block diagonal matrix representation where each block $N$ is a Jordan nilpotent matrix. Moreover there is at least one such $N$ of order $k$ and all other $N$ are of order $\le k.$ Moreover the number of $N$ of order $i(\le k)$ is uniquely determined by $T$, and the total number of $N$ of all orders is $\dim\ker T.$
Now my questions are:
Does such representation of $T$ look like this?
What is meant by "the number of $N$ of order $i(\le k)$ is uniquely determined by $T$"? Does it mean that if $T$ is represented by two different such (Jordan nilpotent) block diagonal matrices, then the number of blocks of order $i$ remains the same in the two?
AI: Yes, the matrix looks like that. Note that at least one of the blocks is not just of size at most $k\times k$ but in fact exactly $k\times k$.
And yes, two different such block matrices do have the "same" blocks, but possibly permuted.
The claims can be easily verified by looking at the numbers $\dim \ker T^i$ for $i=1, \ldots, k$.
H: Equivalence Relations and functions on partitions of Sets
let $f$ be a function on $A$ onto $B$. Define an equivalence relation $E$ in $A$ by: $aEb$ if and only if $f(a)=f(b)$.
Define a function $\phi$ on $A/E$ by $\phi([a]_{E})=f(a)$.
Hint: Verify that $\phi([a]_{E})=\phi([a']_{E}) $ if $[a]_{E} = [a']_{E}$
I know that A/E is a partition of A.
I know that a binary relation $F$ is called a function if $aFb$ and $aFc$ implies $b=c$ for any $a, b, \text { and } c$
AI: To show that $\phi$ is a function, you must follow the hint. If $[a]_E=[a']_E$, then what can we say about $f(a)$ and $f(a')$? What then can be said about $\phi\left([a]_E\right)$ and $\phi\left([a']_E\right)$? You'll need to keep the definitions of $E,\phi$ in mind.
H: Is this $\mathbb{Z}_2^n$?
What group is formed by binary strings of a fixed length, $n$, and the XOR operation (^)?
For example, we have:
For $n=1$:
A^B = B^A = B
A^A = B^B = A
For $n=2$:
AA ^ AB = AB = AB ^ AA
BB ^ AB = BA = AB ^ BB
For $n=3$:
ABA ^ BAA = BBA
etc
I reckon it is written $\mathbb{Z}_2^n$ but I would like someone to either confirm this or tell me why I am wrong.
AI: Considering bit strings of length $n$ as $n$-tuples of elements of $\mathbb Z/2\mathbb Z$, the XOR operation is simply the addition in $(\mathbb Z/2\mathbb Z)^n$.
So, yes, this "is" just the group $(\mathbb Z/2\mathbb Z)^n$.
H: Dimension of Bil(V)
Let $V$ be a vector space of finite dimension $n$, and let $\operatorname{Bil}(V)$ be the vector space of all bilinear forms on $V$.
In some notes by Keith Conrad, he says in an exercise that $\operatorname{Bil}(V)$ has dimension $n$ as well. I am confused by this, because it seems to me that the dimension ought to be $n^2$. Here is my reasoning: we have isomorphisms $\operatorname{Bil}(V)\cong (V\otimes V)^* \cong V \otimes V$ since $V$ is finite dimensional. The dimension of $V\otimes V$ is $n^2$. Where have I gone wrong here?
AI: You have not gone wrong at all. More concretely, a bilinear form $B$ is associated with an $n\times n$ matrix [given a basis $\{v_i\}$, its $ij$-entry is, of course $B(v_i,v_j)$].
H: About two equinumerous partitions of the same set.
Let $\mathcal {A,B}$ be partitions of a set $X$ into $m$ subsets. Suppose that for any $k\leq m$ and any $A_1,\ldots,A_k \in\mathcal A$ there are at most $k$ elements of $\mathcal B$ contained in $\bigcup_{i=1}^kA_i.$ Does it imply that for any $k\le m$ and any $A_1,\ldots,A_k\in\mathcal A$, there are at least $k$ elements of $\mathcal B$ whose intersection with $\bigcup_{i=1}^kA_i$ is non-empty?
I'm thinking about a problem which a positive answer to the above would solve. And the above appears plausible to me.
Both the hypothesis and the conclusion say something about any $A_1,\ldots,A_k.$ My first thought was to take any $A_1,\ldots,A_k$ and getting the conclusion from the hypothesis restricted to those particular $A_i$. But this doesn't work, and I don't have any ideas.
AI: I assume that $X$ is non-empty and $m>0$. The implication does, indeed hold. In the case $k=m$, this is obvious, since $\mathcal A,\mathcal B$ are both partitions of $X$ into $m$ subsets. Now, suppose $0<k<m$, and take any $k$ distinct elements $A_1,...,A_k$ of $\mathcal A$. Put $j=m-k$ and let $A_1',...,A_j'$ be the other elements of $\mathcal A$. By hypothesis, there are at most $j$ elements of $\mathcal B$ contained in $$\bigcup_{n=1}^jA_n'=X\smallsetminus\left(\bigcup_{n=1}^kA_n\right),$$ so there are at least $m-j=k$ elements of $\mathcal B$ whose intersection with $$ \bigcup_{n=1}^kA_n$$ is non-empty.
H: If $\lVert f(t) \rVert:[0,T] \to \mathbb{R}$ is measurable, is $f$ measurable?
Let $f:[0,T] \to X$ be a mapping to a Banach space $X$. If its norm $\lVert f(t) \rVert$ is measurable, is $f$ itself measurable? The converse is true.
AI: No. We don't even need to leave Euclidean space. Let $V$ be a nonmeasurable set in $[0,T]$. Consider the map $f: [0,T] \rightarrow \mathbb R$ that is $1$ on $V$ and $-1$ on $V^c$. Then $|f|$ is a constant function and surely measurable, but $f$ is not.
H: Which real functions have their higher derivatives tending pointwise to zero?
Let $\mathrm C^\infty\!(\Bbb R)$ be the space of infinitely differentiable functions $f:\Bbb R\rightarrow\Bbb R$, and define the subspace$$A:=\{f\in\mathrm C^\infty\!(\Bbb R):(\forall x\in \Bbb R)\lim_{n\rightarrow\infty} f^{(n)}(x)=0\},$$where $f^{(n)}$ is the $n$th derivative of $f\;(n=0,1,\dots).$ Clearly all polynomial functions are in $A$. Are any others?
Edit: Alfonso has answered this question well, but is there any characterization of $A$ in terms of familiar types of function?
AI: Consider an analytic function
$$\tag0f(x)=\sum_{n=0}^\infty \frac{a_n}{n!}x^n.$$
As $f^{(n)}(0)=a_n$, a necessary condition for such $f$ to be an example is that $a_n\to 0$.
But $a_n\to 0$ is also sufficient.
Indeed, with $b_n:=\sup_{k\ge n} |a_k|$ we have $b_n\to 0$ and hence
$$\begin{align}|f^{(n)}(x)|&= \left|\sum_{k=0}^\infty \frac{a_{n+k}}{k!}x^k\right|\\
&\le\sum_{k=0}^\infty\frac{|a_{n+k}|}{k!}|x|^k\\
&\le b_n\sum_{k=0}^\infty\frac{1}{k!}|x|^k\\&=b_ne^{|x|}\to 0. \end{align} $$
(Note that the calculation for $f^{(0)}(x)$ shows that $f$ is entire to begin with).
H: Using square areas to find a quadrilateral's area
Area of square ABCD is 169 and that of square EFGD is 49. Find area of quadrilateral FBCG
I am stuck and wondering which approach would help me find the area of quadrilateral FBCG. Instead of a hint, could you please suggest a complete answer, so that I can use the idea to solve other questions that are similar.
AI: By symmetry, the quadrilaterals $FBCG$ and $FBAE$ are congruent, so they have the same area. Their combined area is the area of the region which is in the big square but outside the little square. So their combined area is $169-49$, that is, $120$. It follows that quadrilateral $FBCG$ has area $60$.
Remark: The above is the quickest way to find the answer.
But we can do it in various other ways, that may be useful to you in other problems. For example, the little square has area $49$, so it has sides $7$ each. The big square has area $169$, so has sides $13$.
It follows that the height $CG$ of the trapezoid $FBCG$ is $13-7=6$. The two bases are $7$ and $13$. So by a standard formula for the area of a trapezoid, the area is $(6)\left(\frac{7+13}{2} \right)$.
Or else you can think of the region $FBCG$ as made up of a $6\times 7$ rectangle, with half of a $6\times 6$ square on top. That gives area $(6)(7)+\left(\frac{1}{2}\right)(6)(6)$.
Another computational approach is by using triangles. Because of your picture, we calculate the area of $ABFE$, which is the same as the area of our target region. This is made up, as your picture shows, of two triangles. Each has height $13-7=6$. Their bases are $7$ and $13$. Now use the standard formula for the area of a triangle.
H: Using induction to verify a statement
I have to prove that this statement is true.
For $n = 1, 2, 3, ...,$ we have $ 1² + 2² + 3² + ... + n² = n(n + 1)(2n + 1)/6$
Basically I thought I'd use induction to prove this. Setting $n = p+1$, I got this so far:
Left hand side: $1² + 2² + 3² +...+ p² + (p+1)² = p² + (p+1)² = 2p² + 2p + 1$
Right hand side: $(p + 1)(p + 1 + 1)(2 * (p + 1) +1)/6$
This is all I've got. I've never worked with induction so I don't even know if this correct to begin. Any insight or solution to my problem is appreciated.
AI: For proving an expression through induction, first check the base case $n=1$.
Assume $P(n)$ is true: $\sum_1^nk^2=1+4+9+ \dots +n^2=\dfrac{n(n+1)(2n+1)}{6}$.
You need to prove $P(n+1)$: $\sum_1^{n+1} k^2=1+4+9+ \dots +(n+1)^2=\dfrac{(n+1)(n+2)(2n+3)}{6}.$
Hint: $(k+1)^2=k^2+2k+1$ and $\sum_1^n k=\dfrac{n(n+1)}{2}, \sum_1^n 1=n$
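An empirical check of the closed form is not a proof, but it is reassuring while working on the induction step:

```python
for n in range(1, 100):
    assert sum(k*k for k in range(1, n + 1)) == n*(n + 1)*(2*n + 1) // 6
print("formula holds for n = 1, ..., 99")
```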
H: Estimating the sum $\sum_{k=2}^{\infty} \frac{1}{k \ln^2(k)}$
By integral test, it is easy to see that
$$\sum_{k=2}^{\infty} \frac{1}{k \ln^2(k)}$$
converges. [Here $\ln(x)$ denotes the natural logarithm, and $\ln^2(x)$ stands for $(\ln(x))^2$]
I am interested in proving the following inequality (preferably using
integral calculus): $$\sum_{k=2}^{\infty} \frac{1}{k \ln^2(k)}>2$$
By Wolfram Alpha, the actual value of the sum is about 2.10974. Since $\frac{1}{k \ln^2(k)}$ is decreasing, we have
$$
\sum_{k=2}^{\infty} \frac{1}{k \ln^2(k)}\ge \int_2^{\infty} \frac{1}{x \ln^2(x)}
dx=\frac{1}{\ln(2)}\approx 1.4427$$
So this lower bound is weaker than desired.
My motivation for asking this question is that by being able to estimate this particular sum will hopefully teach me a general technique which I may try applying to sums of the form $\sum_{k=1}^{\infty} f(k)$.
Thanks!
AI: \begin{align*}
\sum_{k=2}^{\infty} \frac1{k\ln^2k} &= \frac1{2\ln^22} + \frac1{3\ln^23} + \sum_{k=4}^{\infty} \frac1{k\ln^2k} \\
&\ge \frac1{2\ln^22} + \frac1{3\ln^23} + \int_4^{\infty} \frac{dx}{x\ln^2x} \\
&= \frac1{2\ln^22} + \frac1{3\ln^23} + \frac{1}{\ln4} > 2.038.
\end{align*}
(I see Andrew just wrote this in a comment.)
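The same bound, checked numerically (a sketch using mpmath; the tail integral evaluates to $1/\ln 4$):

```python
import mpmath

head = sum(1 / (k * mpmath.log(k)**2) for k in (2, 3))
tail = mpmath.quad(lambda x: 1 / (x * mpmath.log(x)**2), [4, mpmath.inf])
print(head + tail)  # about 2.038, which is > 2
```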
H: If we know the eigenvalues of a matrix $A$, and the minimal polynomial $m_A(\lambda)$, how do we find the Jordan form of $A$?
We have just learned the Jordan Form of a matrix, and I have to admit that I did not understand the algorithm.
Given $A = \begin{pmatrix} 1 & 1 & 1 & -1 \\ 0 & 2 & 1 & -1 \\ 1 & -1 & 2 & -1 \\ 1 & -1 & 0 & 1 \end{pmatrix} $, find the Jordan Form $J(A)$ of the matrix.
So what I did so far:
(I) Calculate the characteristic polynomial: $P_A(\lambda) = (\lambda - 1)^2(\lambda -2)^2$.
(II) Calculate the minimal polynomial: $m_A(\lambda) = P_A(\lambda) =(\lambda - 1)^2(\lambda -2)^2 $
But I am stuck now, how do we exactly calculate the Jordan Form of $A$?
And an extra question that has been confusing me. In this case, does $A$ have $4$ eigenvalues or $2$ eigenvalues?
AI: Since the minimal polynomial has degree $4$, which is the same as the order of the matrix, you know that the Smith normal form of $\lambda I - A$ is
$\begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & m_A(\lambda )\end{pmatrix}$.
Therefore the elementary divisors (I'm not sure this is the correct term in English) are $(\lambda -1)^2$ and $(\lambda -2)^2$.
Theory tells you that one jordan block is $\color{grey}{(\lambda -1)^2\to} \begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}$ and the other is $\color{grey}{(\lambda -2)^2\to} \begin{pmatrix} 2 &1\\ 0 & 2\end{pmatrix}$.
Therefore a possible JNF for $A$ is $\begin{pmatrix} 1 & 1 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 2 &1\\ 0 & 0 & 0 & 2\end{pmatrix}$.
Regarding the extra question: $A$ has two distinct eigenvalues, $1$ and $2$, each with algebraic multiplicity two, so it has four eigenvalues only if counted with multiplicity.
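This matches a CAS computation (a sketch assuming sympy, whose `jordan_form` returns the change-of-basis matrix together with $J$):

```python
from sympy import Matrix

A = Matrix([[1, 1, 1, -1],
            [0, 2, 1, -1],
            [1, -1, 2, -1],
            [1, -1, 0, 1]])
P, J = A.jordan_form()
print(J)  # expect one 2x2 block for eigenvalue 1 and one 2x2 block for eigenvalue 2
```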
H: Two ramified covers $\Rightarrow$ reducible ramification divisor?
Let $X,X',X''$ be algebraic varieties and let $X''\to X'$ and $X'\to X$ be two ramified covers. Is the ramification divisor of the composition $X''\to X'\to X$ reducible?
AI: Let $X=X'=X''=\Bbb{A}^1$ be the affine line, and let the two covers be $z \mapsto z^2$. Their composite is the (ramified) cover $z \mapsto z^4$, and the ramification divisor of this morphism is supported at $0$ only. This shows in particular that the ramification divisor can be irreducible.
H: How many functions $ f: \{1, 2, 3, \dots, 10\} \to \{0,1\}$ satisfy $f(1) + f(2) + \dots + f(10) = 2$?
How many functions $ f: \{1, 2, 3, \dots, 10\} \to \{0,1\}$ have this property: $$f(1) + f(2) + \dots + f(10) = 2.$$
I understand that exactly $2$ of the function values can be $1$ and the rest have to be $0$; in total there are $2^{10}$ functions, but how can I find out how many of them have this property?
AI: HINT: As you said in the question, exactly two of the function values must be $1$, and the other eight must be $0$. How many ways are there to choose $2$ things from a set of $10$ things (i.e., to choose which two function values are to be $1$)?
H: Understanding when convergence implies uniform convergence for sequences of non-continuous functions
I am working on the following problem:
Let $(f_n)$ be a sequence of functions $[a,b] \rightarrow \mathbb{R}$ such that: (i) $f_n(x)≤0$ if $n$ is even, $f_n(x)≥0$ if $n$ is odd; (ii) $|f_n(x)|≥|f_{n+1}(x)|$ for all $x$; (iii) $f_n$ converges to $0$ uniformly. Prove that then $\sum_n f_n$ is uniformly convergent.
I take as given the following theorem (3.43 in Rudin's Principles of Mathematical Analysis):
Suppose (a) $|c_1|≥|c_2|≥|c_3|≥...$; (b) $c_{2m-1}≥0$, $c_{2m}≤0$ ($m=1,2,3,...$); (c) $lim_{n\rightarrow\infty} c_n=0$ Then $\sum_n c_n$ converges.
Now, it follows immediately that for each given $x$, $\sum_n f_n(x)$ converges, say to $F(x)$.
I am wondering how to show rigorously that this convergence is in fact uniform. We are dealing with a compact set, but the $\sum_n f_n(x)$ are not continuous (are they?), so we can't use Dini's theorem.
I have tried saying something along these lines:
Choose $\epsilon>0$. Then $\forall x \in [a,b]$, $\exists N_x$ s.t. $n≥N_x \implies |\sum_{k=1}^n f_k(x)-F(x)|<\epsilon.$
Put $M=\sup_{x\in [a,b]}N_x$. Then $n≥M \implies|\sum_{k=1}^n f_k(x)-F(x)|<\epsilon$ $\forall x \in [a,b]$.
But I worry that the reasoning is faulty. ($\forall x \exists N$ does not usually imply $\exists N \forall x$ - but I wonder if it does in this case because we're working on a compact set?) If anyone has any thoughts I'd be very grateful. Thanks!
AI: Following David's hint, we have:
We are given that $f_n \rightarrow 0$ uniformly, so for any $\epsilon > 0$ $\exists N$ s.t. $n≥N \implies |f_n(x)|<\epsilon$ $\forall x$.
We have, then, for $m>n≥N$, $|\sum_{k=1}^n f_k(x) - \sum_{k=1}^m f_k(x)|=|\sum_{k=n+1}^m f_k(x)|≤|f_{n+1}(x)|≤\epsilon$ $\forall x$, where the penultimate inequality follows immediately from conditions (i) and (ii): an alternating tail with decreasing terms is bounded by its first term.
Thus the partial sums are uniformly Cauchy, and $\sum_n f_n$ is uniformly convergent.
H: What is the functional inverse (with respect to $h$ (!)) of $f^{\circ h}(x)={F(h) +x F(h-1) \over F(1+h) +x F(h) }$?
I've considered the fractional iteration of $f(x) = {1 \over 1+x} $ for which the general expression depending on the iteration-height parameter $h$ might be given by the formula
$$ f^{\circ h}(x) = f(x,h)={F(h) +x \cdot F(h-1) \over F(1+h) +x \cdot F(h) }$$ where
$ \displaystyle \qquad \small F(h)=\operatorname{fibonacci}(h) = {\varphi^h - (1-\varphi)^h \over \sqrt5 }$ and
$ \displaystyle \qquad \small \varphi = {1+\sqrt5 \over 2} \sim 1.618... $
(Note, that for fractional iterates we need $x \in \mathbb C$)
Today I tried to find a formulation for the functional inverse with respect to h, but don't find a good starting point so:
Q: How would look a function $ h = \operatorname{hgh}(x_0,x_h) $ which would indicate the required iteration-height $h$ given $x_0$ and $x_h = f(x_0,h)$ ? Is there even a closed form for it (I'd call it still closed-form if it possibly includes e.g. the Lambert-W function)
[update]: for more background see the older exercise of mine
AI: The fractional linear transformation $f$ can be represented by the $2\times 2$ matrix $\displaystyle \left(\begin{array}{cc} 0 & 1 \\ 1 & 1\end{array}\right)$, which is diagonalized as
$$\left(\begin{array}{cc} 0 & 1 \\ 1 & 1\end{array}\right)=
P\left(\begin{array}{cc} \varphi & 0 \\ 0 & 1-\varphi\end{array}\right)P^{-1},\qquad
P=\left(\begin{array}{cc} \varphi-1 & -\varphi \\ 1 & 1\end{array}\right).$$
Therefore the matrix corresponding to $h$th iteration is
$$P\left(\begin{array}{cc} \varphi^h & 0 \\ 0 & (1-\varphi)^h\end{array}\right)P^{-1}=
\frac{1}{\sqrt{5}}\left(\begin{array}{cc} (\varphi-1)\cdot\varphi^h +\varphi\cdot(1-\varphi)^h & \varphi^h -(1-\varphi)^h \\ \varphi^h -(1-\varphi)^h &
\varphi\cdot\varphi^h +(\varphi-1)\cdot(1-\varphi)^h \end{array}\right)$$
and we obtain
$$\left(\frac{1-\varphi}{\varphi}\right)^{h}=\frac{x_hx_0+\varphi x_h-(\varphi-1)x_0-1}{x_hx_0-(\varphi-1)x_h+\varphi x_0-1}.$$
Taking the logarithm, one finds the expression
$$\mathrm{hgh}(x_h,x_0)=\ln\frac{x_hx_0+\varphi x_h-(\varphi-1)x_0-1}{x_hx_0-(\varphi-1)x_h+\varphi x_0-1}\Bigl/\Bigr.\ln\frac{1-\varphi}{\varphi}.$$
Note, however, that the expressions in the logarithms can become negative or even complex (in the denominator logarithm it is always negative), so some care should be taken to choose correct branches. |
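As a numeric sanity check of the displayed identity (my own sketch, assuming $f(x)=\frac{1}{1+x}$ and restricting to integer heights $h$, where no branch issues arise):

    # phi = golden ratio; compare ((1-phi)/phi)^h against the stated ratio
    phi = (1 + 5**0.5) / 2

    def ratio(xh, x0):
        num = xh*x0 + phi*xh - (phi - 1)*x0 - 1
        den = xh*x0 - (phi - 1)*xh + phi*x0 - 1
        return num / den

    x0 = 0.7
    xh = x0
    for h in range(1, 6):
        xh = 1 / (1 + xh)                            # h-th iterate of f at x0
        print(h, ((1 - phi)/phi)**h, ratio(xh, x0))  # last two columns agree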
H: The unity of a subdomain
Let $D$ be an integral domain and $R$ a subdomain , I'm trying to show that both $R$ and $D$ have the same unity. I did it using the fact that $1$ and $0$ are the only idempotent elements inside any integral domain. I'm trying to do it in another way which is the following:
Since $R$ is a subdomain it has a unity, call it $c$. Let $0\neq a\in R$; then $ac=a$, and since we can cancel (inside the integral domain $D$) we get that $c$, which is in $R$, is equal to $1$; hence $1\in R$.
I'm not really confident this right. If this is wrong what other way is there that does not use the fact that there are two idempotent elements in an integral domain.
AI: Assuming that your definition of subdomain requires that the unity is nonzero, you have for this unity $c\in R$
$$
cc=ce
$$
where $e$ is the unity of $D$, because both, by definition, give $c$. Hence
$$
c(c-e)=0
$$
and, since $c\ne0$, $c=e$.
Of course, this is just using that $c$ is idempotent. Which it is, so why bother? |
H: how many abelian groups of order $8$ and $21$ exist?
I have to find how many abelian groups of order $8$ and order $21$ exist. There is a theorem that states that every finite abelian group is a direct sum of cyclic groups. So what I did is this: $8=2\cdot2\cdot2$; now I look at all possible divisors of $8$, that is $2$, $4$ and $8$. Then $G$ is isomorphic to $\mathbb{Z}/8\mathbb{Z}$ or $\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ or $\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$. For $G$ of order $21=7\cdot 3$ we have $G$ isomorphic to $\mathbb{Z}/21\mathbb{Z}$ or $\mathbb{Z}/7\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}$. Thus there are $3$ abelian groups of order $8$ and $2$ abelian groups of order $21$.
My question is: is this the right way to attack this problem (if it's correct)? Thank you.
AI: The second answer needs to be modified, since the two groups you mention are isomorphic. The direct sum is cyclic of order $21$, the element $(1,1)$ is a generator.
For completeness, one should prove that the three groups of order $8$ that you list are non-isomorphic. One quick way is to observe that the first group has an element of order $8$. The other two groups do not. And the second group has an element of order $4$, while the third does not. |
H: Function such that $f(x) = -1$ for $x < 0,$ and $f(x)=1$ for $x > 0$?
What is a function that returns $-1$ if the number is negative, $1$ if it is positive, and $0$ if the number is equal to $0$?
for example:
$$
f(-8) = -1
$$
$$
f(8) = 1
$$
$$
f(0) = 0
$$
for $x < 0$ maybe?
$$ f(x) = (-x-(-x-1))\cdot-1 $$
AI: This function is the $\text{sgn}$ (sign) function. You can write it as $$\text{sgn}(x)=\begin{cases}\frac x{|x|} &x\neq 0\\ 0 & x=0\end{cases}$$ |
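In code this is a direct translation of the piecewise definition (a minimal Python sketch; the function name is mine):

    def sgn(x):
        """Return -1, 0 or 1 according to the sign of x."""
        if x > 0:
            return 1
        if x < 0:
            return -1
        return 0

    print(sgn(-8), sgn(0), sgn(8))  # -1 0 1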
H: If $\lim\limits_{x \to \pm\infty}f(x)=0$, does it imply that $\lim\limits_{x \to \pm\infty}f'(x)=0$?
Suppose $f:\mathbb{R} \rightarrow \mathbb{R}$ is everywhere differentiable and
$\lim_{x \to \infty}f(x)=\lim_{x \to -\infty}f(x)=0$,
there exists $c \in \mathbb{R}$ such that $f(c) \gt 0$.
Can we say anything about $\lim_{x \to \infty}f'(x)$ and $\lim_{x \to -\infty}f'(x)$?
I am tempted to say that $\lim_{x \to \infty}f'(x)$ = $\lim_{x \to -\infty}f'(x)=0$.
I started with the following, but I'm not sure this is the correct approach, $$\lim_{x \to \infty}f'(x)= \lim_{x \to \infty}\lim_{h \to 0}\frac{f(x+h)-f(x)}{h}.$$
AI: No. Try $f(x)=\sin(x^a)/x$ for various values of $a$. For instance, with $a=2$ we get $f'(x)=2\cos(x^2)-\frac{\sin(x^2)}{x^2}$, which oscillates without a limit as $x\to\infty$ even though $f(x)\to 0$.
H: What's the asymptotic distribution of $p^n$ (powers of primes)?
We know by the prime number theorem that
$\lim_{n\to\infty}\frac{\pi(n)}{n\,/\ln n} = 1$
An even better approximation is
$\lim_{n\to\infty}\frac{\pi(n)}{\int_2^n\frac{1}{\ln t}\mathrm{d}t} = 1$.
Is there a similar formula that approximates the distribution of natural numbers of the form $p^n$ where $p$ is a prime? That is, an approximation of
$$\pi'(x)=\left|\,\Pi'\cap\{1,\ldots,x\}\,\right|
\qquad\mbox{ where }\quad
\Pi'=\{p^n\;|\;p\mbox{ is prime}, n\in\mathbb{N}\}$$
(There is already a question that asks if $\lim_{n\to\infty}\frac{\pi'(n)}{n}=0$, but I'm interested in a more precise approximation.)
AI: The same asymptotics hold, and the intuitive explanation for that is that the number of prime powers up to $x$ is negligible in comparison to the number of primes up to $x$. In fact, the usual proofs of the prime number theorem actually prove the asymptotics for $\pi'$ first, and then one shows that these coincide with the asymptotics of $\pi$.
Since $\pi'(x)$ for a real $x$ counts the number of primes $p$ up to $x$, the number of prime squares $p^2$ up to $x$, the number of prime cubes $p^3$ up to $x$, and so on, we can write
$$\pi'(x)=\pi(x)+\pi(x^{1/2})+\pi(x^{1/3})+\cdots+\pi(x^{1/m})$$
where $m=\lfloor\log_2 x\rfloor$ (a prime power $p^k\le x$ forces $k\le\log_2 x$). Now, using the prime number theorem, we get
$$\pi'(x)-\pi(x)\sim x^{1/2}/\log(x^{1/2})+\cdots+x^{1/m}/\log(x^{1/m})$$
which can be bounded above by $x^{1/2}/\log(x^{1/2})\cdot \log_2 x\leq 4 x^{1/2}$, and thus $\pi'(x)=\pi(x)+O(x^{1/2})$, so $\pi'(x)$ is asymptotic to $x/\log x$.
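For a numeric illustration of how small the correction is (a plain-Python sketch of my own, not from the answer):

    import math

    def sieve(n):
        is_p = bytearray([1]) * (n + 1)
        is_p[0:2] = b"\x00\x00"
        for p in range(2, int(n**0.5) + 1):
            if is_p[p]:
                is_p[p*p::p] = bytearray(len(range(p*p, n + 1, p)))
        return [p for p in range(n + 1) if is_p[p]]

    N = 10**6
    primes = sieve(N)
    extra = 0                      # proper prime powers p^k <= N with k >= 2
    for p in primes:
        q = p * p
        while q <= N:
            extra += 1
            q *= p
    print(len(primes))             # pi(N)  = 78498
    print(len(primes) + extra)     # pi'(N): only a few hundred more
    print(N / math.log(N))         # ~72382, the common asymptotic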
H: How many functions $f:\{1,2,3,4\}→\{1,2,3,4\}$ satisfy $f(1)=f(4)$?
I just need a hint or a way to think about this problem: $f(1)$ can be $1, 2, 3, 4$ and $f(4)$ can be $1,2,3,4.$
AI: To give a function between these sets you need to give the data: for each number in $\{1,2,3,4\}$, give a number in that same set to map it to.
This gives you a way to count all the functions. Now once you have decided where to send $1$, you have 4 choices for each of $\{2,3,4\}$ to map it to. However, the image of $4$ is decided by the image of $1$. Do you see how to proceed? |
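A brute-force confirmation of the count (a hypothetical Python snippet, not part of the hint):

    from itertools import product

    # tuples (f(1), f(2), f(3), f(4)) with values in {1,2,3,4}
    count = sum(1 for f in product(range(1, 5), repeat=4) if f[0] == f[3])
    print(count)  # 64 = 4^3: pick f(1)=f(4), then f(2) and f(3) freely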
H: Bilinear forms on C[0,1]
Let $C[0,1]$ be the vector space of real-valued continuous functions on $[0,1]$. Then
$$B(f,g) = \int_0^1{f(x)g(x)\, dx}$$
is a bilinear form on $C[0,1]$. More generally, if $k:[0,1]^2\rightarrow \mathbb{R}$ is continuous, then
$$B_k(f,g) = \int_{[0,1]^2}{f(x)g(y)k(x,y)\,dx\, dy}$$
is a bilinear form.
In Keith Conrad's notes on bilinear forms, he asks if there is a $k$ corresponding to the first example. My intuition first suggested the characteristic function on the diagonal, but that is not continuous. Surely it must be something like this, but I can't think of what it should be.
There is another question where he asks us to find conditions on $k$ under which $B_k$ will be a symmetric bilinear form. I sketched a proof that its both necessary and sufficient for $k(x,y) = k(y,x)$, but my proof is not pretty. Is there a natural proof?
AI: 1) No. If such a $k$ existed, denote $M:=\sup |k|$. Then
$$
\Big|\int_0^1 f^2\Big|=\Big|\iint f(x)f(y)k(x,y)\,dx\,dy\Big|\leq \iint|f(x)||f(y)|M\,dx\,dy=M\left(\int_0^1 |f|\right)^2
$$
for every continuous function $f$ on $[0,1]$. Then for each $n\geq 1$, consider the continuous function $f_n$ defined by $f_n(x):=\frac{1}{\sqrt{x}}$ on $[1/n,1]$ and $f_n(x):=\sqrt{n}$ on $[0,1/n]$. We get
$$
\log n=\int_{1/n}^1\frac{1}{x}dx\leq \int_0^1f_n^2\leq M\left(\int_0^1 |f_n|\right)^2\leq M\left(\int_0^1 \frac{1}{\sqrt{x}}dx\right)^2=4M.
$$
Let $n$ tend to $+\infty$ to obtain the desired contradiction.
2) If $B_k(f,g)=B_k(g,f)$, set $u(x,y):=k(x,y)-k(y,x)$. You get, as observed by Davide Giraudo,
$$
\iint_{[0,1]^2} f(x)g(y)u(x,y)dxdy=0
$$
for every $f,g$ continuous on $[0,1]$. From here, there are several ways to conclude that $u=0$. Here are two possible ones.
a) Applying the latter to $f(x)=x^n$ and $g(y)=y^m$, we get, by linearity, $\iint p(x,y)u(x,y)dxdy=0$ for every polynomial in two variables $p(x,y)$. By Stone-Weierstrass, $u$ can be uniformly approximated by such polynomials. So $\iint u^2=0$ whence $u^2=0$ since $u^2$ is continuous and nonnegative.
b) Approximating uniformly characteristic functions of intervals by continuous functions, we get $\iint 1_I(x)1_J(y)u(x,y)dxdy=\iint 1_{I\times J}\cdot u=0$ for every intervals $I,J$ in $[0,1]$. If $u$ was not $0$, by continuity, there would exist non trivial intervals $I,J$ such that $u\geq \alpha>0$ on $I\times J$ (or $u\leq -\alpha<0$). Then $0=\iint 1_{I\times J}\cdot u\geq \alpha \lambda(I)\lambda(J)>0$ (or $<0$). Contradiction. |
H: Topological manifolds (dimension)
I am taking an introductory course in topology, and the professor defined a topological manifold of dimension $n$ as a space which is Hausdorff and such that for every point $x$ there exists an open set $U$ around $x$ with $U$ homeomorphic to $\mathbb{R}^n$. My question is: by the way she defined it, could one (a priori) have a topological space $X$ being a manifold of dimensions $n$ and $m$? Meaning I could find open sets $U$ and $V$ around every $x$ with $U$ homeomorphic to $\mathbb{R}^n$ and $V$ homeomorphic to $\mathbb{R}^m$, and this would make the notion of dimension not well defined. My guess is that since it is called "topological manifold of dimension $n$", what I described probably can't happen, but I don't see why not.
AI: You are correct.
There are several ways to show that it is impossible, and it basically boils down to the fact that no open subset of ${\bf R}^n$ is homeomorphic to an open subset of ${\bf R}^m$ if $n<m$ (as $U\cap V$ would be in your case). You can assume without loss of generality that the sets in question are connected.
One way to show it is to use the fact that you can find an $(n-1)$-dimensional sphere which disconnects any open subset of ${\bf R}^n$, but it is a general fact that if a compact set disconnects ${\bf R}^m$, then it must admit a non-nullhomotopic map onto $S^{m-1}$ (in fact, it is an equivalent condition), while on the other hand, no proper compact subset of $S^{m-1}$ admits such a map.
So, summing it all up, a set homeomorphic to $S^{n-1}$ can't disconnect an open subset of ${\bf R}^m$ if $m>n$ and we're done.
(The results used can be shown by combinatorial or algebraic topology, but are not simple enough to show here.) |
H: Show groups of symmetries of a cube and a tetrahedron are not conjugate in isometry group.
I've shown that the symmetry group of a cube and a tetrahedron are both isomorphic to $S_4$, but I am now trying to show that they are not conjugate when considered as subgroups of isometries of 3D space.
I cannot think of any kind of criteria that gives when two groups are not conjugate?
EDIT: I meant to say the rotations of a cube! So I've shown the full symmetry group of the tetrahedron is isomorphic to the group of rotations of a cube; however, I need to show that they are not conjugate.
AI: Something's wrong with your first statement. If you consider orientation preserving 3D movements, the symmetry group of the cube is twice as big as that of the tetrahedron. And the same holds if you disregard orientation for both solids.
Note that taking $4$ out of $8$ vertices of a cube, you get a tetrahedron. Any symmetry of that tetrahedron induces a symmetry of the cube. However, the cube allows additional symmetries that map the tetrahedron to the "other" tetrahedron (the one with the other four vertices).
In other words, the fact that half the vertices of a cube form a tetrahedron shows that the tetrahedron group is a subgroup of index 2 in the cube group.
However, you do get $S_4$ if you count
arbitrary isometries of the tetrahedron (that is, just permutations of the four vertices) and
orientation preserving isometries of the cube (that can be identified with permutations of the four spatial diagonals).
These two subgroups of the group of isometries cannot be conjugate to each other because the conjugate of an orientation preserving isometry is orientation preserving. |
H: Two terms that I want to understand: weakest topology and jointly continuous (in the following context).
I was reading an article online, please help me to understand the following lines (in bold letters). -
Topological structure:
If (V, ‖·‖) is a normed vector space, the norm ‖·‖ induces a metric and therefore a topology on V: the distance between two vectors u and v is given by ‖u−v‖.
This topology is precisely the weakest topology which makes ‖·‖ continuous and which is compatible with the linear structure of V in the following sense:
1.The vector addition + : V × V → V is jointly continuous with respect to this topology. This follows directly from the triangle inequality.
2.The scalar multiplication · : K × V → V, where K is the underlying scalar field of V, is jointly continuous. This follows from the triangle inequality and homogeneity of the norm.
Please explain me the Weak topology and how does it makes norm ‖·‖ continuous. what does it mean by "Addition + : V × V → V is jointly continuous with respect to this topology"
Thank you so much in advance.
AI: The norm $\|\cdot\|$ is a function from $V$ to $\Bbb R$. There are many topologies that can be placed on $V$ that make this function continuous. Let $\mathscr{T}$ be the set of all such topologies. Then it turns out that there is a topology $\tau_0\in\mathscr{T}$ such that $\tau_0\subseteq\tau$ for all $\tau\in\mathscr{T}$. That is, every topology on $V$ that makes the norm a continuous function has to include all of the open sets in $\tau_0$. This, by the way, is equivalent to saying that $\tau_0=\bigcap\mathscr{T}$. Weakest topology here means coarsest topology, i.e., the one with the absolute minimum of open sets needed to make the norm function continuous.
Vector addition is a function $+$ from $V\times V$ to $V$. Once we impose the topology $\tau_0$ on $V$, we automatically get a product topology on $V\times V$, and we can ask whether the function $+:V\times V\to V$ is continuous with respect to that product topology on $V\times V$ and the topology $\tau_0$ on $V$. It turns out that it is. In this context jointly continuous is just a synonym for continuous with respect to the product topology, so (1) is just saying that vector addition is a continuous function. Similarly, the field $K$ has a natural topology $\tau_K$, so $K\times V$ has a product topology defined from $\tau_K$ and $\tau_0$, and (2) is just the claim that the scalar multiplication function
$$\cdot:K\times V\to V:\langle\alpha,v\rangle\mapsto \alpha v$$
is continuous with respect to this product topology in $K\times V$ and the topology $\tau_0$ on $V$.
The reason for the term jointly continuous is that it is possible for a function $f:X\times Y\to Z$ that is not continuous nevertheless to be continuous in each variable separately. That is, it’s possible for each function
$$f_x:Y\to Z:y\mapsto f(x,y)$$
with $x\in X$ to be continuous and for each function
$$f^y:X\to Z:x\mapsto f(x,y)$$
with $y\in Y$ to be continuous, without $f:X\times Y\to Z$ being continuous as a function of two variables.
When each function
$$f_x:Y\to Z:y\mapsto f(x,y)$$
with $x\in X$ is continuous, we say that $f$ is continuous in the second variable, and when each function
$$f^y:X\to Z:x\mapsto f(x,y)$$
with $y\in Y$ is continuous, we say that $f$ is continuous in the first variable. If both of these are the case, $f$ is separately continuous, but, as I said, this does not guarantee that it is actually continuous as a function from the product space $X\times Y$ to the space $Z$.
H: Evaluating $\int_0^{\infty}e^{-\alpha x^2 \cos \beta} \cos(\alpha x^2 \sin \beta) dx$
Q: Suppose $\alpha>0$ and $|\beta|<\pi/2$, show that
\begin{align*}
\textbf{(1)} \; \int_0^{\infty}e^{-\alpha x^2 \cos \beta} \cos(\alpha x^2 \sin \beta) dx &= \frac 1 2 \sqrt{\pi/\alpha}\cos(\beta/2)\\
\textbf{(2)} \; \int_0^{\infty}e^{-\alpha x^2 \cos \beta} \sin(\alpha x^2 \sin \beta) dx &= \frac 1 2 \sqrt{\pi/\alpha}\sin(\beta/2)
\end{align*}
How can I integrate the above with the method of contour?
The integral can be changed into $\displaystyle \int_0^{\infty}e^{-\alpha x^2 \cos \beta} e^{i(\alpha x^2 \sin \beta) }dx = \int_0^{\infty} e^{x^2\alpha e^{i (\pi - \beta)}}dx$. This is similar to $\displaystyle \int_0^{\infty} e^{-x^2}dx$ which has been discussed here except that it has complex coefficients. How do I modify it?
AI: Use a wedge contour $C$ of angle $-\beta/2$, i.e., below the real axis. That is, consider
$$\oint_C dz \, e^{-a e^{i \beta} z^2} = \int_0^R dx \, e^{-a e^{i \beta} x^2} + i R \int_0^{-\beta/2} d\theta \, e^{i\theta}\, e^{-a R^2 e^{i (\beta+2 \theta)}} + e^{-i \beta/2} \int_R^0 dx \, e^{-a x^2}$$
Note that by using this contour, we get a pure Gaussian integrand along the sloped line to the origin.
That the second integral vanishes in the limit as $R \to \infty$ may be seen by noting that $\cos{(\beta+2 \theta)} \gt 0$ within the integration interval. Therefore,
$$\int_0^{\infty} dx \, e^{-a e^{i \beta} x^2} = e^{-i \beta/2} \int_0^{\infty} dx \, e^{-a x^2} = \frac12 e^{-i \beta/2} \sqrt{\frac{\pi}{a}}$$
The stated answers come from taking real and imaginary parts of the above. |
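A quick numerical spot-check of both stated identities (my own sketch; it assumes SciPy is available):

    import numpy as np
    from scipy.integrate import quad

    a, b = 1.3, 0.9   # alpha > 0 and |beta| < pi/2, chosen arbitrarily
    f = lambda x: np.exp(-a*x**2*np.cos(b)) * np.cos(a*x**2*np.sin(b))
    g = lambda x: np.exp(-a*x**2*np.cos(b)) * np.sin(a*x**2*np.sin(b))
    print(quad(f, 0, np.inf)[0], 0.5*np.sqrt(np.pi/a)*np.cos(b/2))  # agree
    print(quad(g, 0, np.inf)[0], 0.5*np.sqrt(np.pi/a)*np.sin(b/2))  # agree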
H: How to derive the sum of an arithmetic sequence?
I'm attempting to derive a formula for the sum of all elements of an arithmetic series, given the first term, the limiting term (the number that no number in the sequence is higher than), and the difference between each term; however, I am unable to find one that works. Here is what I have so far:
Let $a_0$ be the first term, $a_n$ be the last term, and $x$ be the difference between each term.
If we have the sequence $a_0, a_0 + x, a_0 + 2x ... a_n$, then we can add the first and last term, the second and second last term, etc., to quickly find the sum based on the number of terms. Thus, the sum of these terms is $n\frac{(a_n + a_0)}{2}$, where $n$ is the total number of terms.
The number of terms must be the number of times the first term was increased by $x$ plus one (to account for the first term), and so $n = \frac{(a_n - a_0)}{x} + 1$.
Thus, the sum is equal to $(\frac{(a_n - a_0)}{x} + 1)\frac{(a_n + a_0)}{2}$.
However, I am unable to integrate the limiting term in the place of $a_n$; any ideas for how to make this work?
In case my definition of a limiting term is ambiguous; an example would be if there was a set $3, 6, 9$; I'd like to be able to replace $a_n$ (which is $9$ in this case) with any number above $9$, and below $12$, and still get the same answer.
AI: Essentially, you want to "round" $a_n$ down to the greatest number which can be expressed as $a_0 + kx$ for some integer $k$. One way to do so is to replace $a_n$ by $a_0+x\left\lfloor\frac{a_n-a_0}{x}\right\rfloor$ in your summation formula.
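Concretely, the resulting formula might be implemented like this (a minimal Python sketch; the names are mine):

    def arith_sum(a0, limit, x):
        """Sum of a0, a0+x, a0+2x, ... up to the largest term <= limit."""
        k = (limit - a0) // x      # "round down" to the last attainable term
        an = a0 + k * x            # effective last term
        n = k + 1                  # number of terms
        return n * (a0 + an) / 2

    # 3 + 6 + 9 = 18, and any limit in [9, 12) gives the same answer
    print(arith_sum(3, 9, 3), arith_sum(3, 11, 3))  # 18.0 18.0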
H: If $f(x)\to 0$ as $x\to\infty$ and $f''$ is bounded, show that $f'(x)\to0$ as $x\to\infty$
Let $f\colon\mathbb R\to\mathbb R$ be twice differentiable with $f(x)\to 0$ as $x\to\infty$ and $f''$ bounded.
Show that $f'(x)\to0$ as $x\to\infty$.
(This is inspired by a comment/answer to a different question)
AI: Let $|f''|\le 2M$ on $\mathbb R$ for some $M>0$. By Taylor's expansion, for every $x\in\mathbb R$ and every $\delta>0$, there exists $y\in[x,x+\delta]$, such that
$$f(x+\delta)=f(x)+f'(x)\delta+\frac{1}{2}f''(y)\delta^2.$$
It follows that
$$|f(x+\delta)-f(x)-f'(x)\delta|\le M\delta^2.\tag{1}$$
Since $\lim_{x\to\infty}f(x)=0$, fixing $\delta>0$ and letting $x\to\infty$ in $(1)$, we have
$$\limsup_{x\to\infty}|f'(x)|\le M\delta.$$
Since $\delta>0$ is arbitrary, the conclusion follows. |
H: What is $\int x^re^xdx$?
Is there any simple way to get integral of $e^{x}x^{r}, r \in \mathbb{R}$?
Basically I want to solve this:
$$\displaystyle \int \frac{e^t(4t^2+1)}{2t \sqrt{t}}dt$$
so I will appreciate any help in both of the above problems.
AI: Hint: Break up the integral into two integrals
$$
\int \frac{e^t(4t^2 + 1)}{2t \sqrt{t}} dt = \int 2e^t \sqrt{t} dt + \int \frac{e^t}{2t^{3/2}}dt
$$
Then use integration by parts on one of them until you have a constant multiple of the other, and see what happens.
Note: In general, we can't find $\int e^x x^r dx$ in elementary terms. The definite integral $\int_0^\infty x^r e^{-x}\, dx$ is (almost) the gamma function.
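For what it's worth, carrying out the suggested integration by parts leads (by my computation, worth double-checking) to $2e^t\sqrt t-\frac{e^t}{\sqrt t}+C$; a SymPy verification:

    import sympy as sp

    t = sp.symbols('t', positive=True)
    F = 2*sp.exp(t)*sp.sqrt(t) - sp.exp(t)/sp.sqrt(t)    # candidate antiderivative
    integrand = sp.exp(t)*(4*t**2 + 1) / (2*t*sp.sqrt(t))
    print(sp.simplify(sp.diff(F, t) - integrand))        # 0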
H: Topology - Dunce Cap Homotopy Equivalent to $S^2$
So I'm trying to find two spaces with isomorphic homology groups but where the spaces aren't homotopy equivalent.
From my work so far, taking the Dunce Cap as a triangle with the edges identified as $aa(a^{-1})$ if that makes sense, the homology group would be $\Bbb Z$ for $n=0,2$ and $0$ elsewhere.
However how do I prove that it isn't homotopy equivalent to $S^2$? I'm relatively confident it isn't as it's homotopy equivalent to $D^2$?
AI: A contractible space has no higher homology groups, so your example doesn't work. A working example would be $X=\mathbb CP^2$ and $Y=S^2\vee S^4$. They have the same homology, but are not homotopy equivalent. One way of seeing this, is looking at the cohomology. |
H: Rational Expression Question. (Word problem)
Joe got a mark of $\dfrac{44}{50}$ on one test and $\dfrac{32}{x}$ on another test. If the average mark on the two tests was 80%, what value was the second test out of?
My revised attempt:
Still confused, is this correct? Seems odd that the answer I am getting is a repeating decimal:
$$\dfrac 12 \left(\dfrac{44}{50}+\dfrac{32}{x}\right)=.8$$
$$\dfrac{44}{50}+\dfrac{32}{x}=\dfrac{.8}{.5}$$
$$\dfrac{32}{x}=\dfrac{8}{5}-\dfrac{44}{50}$$
$$32=\dfrac{8}{5}x-\dfrac{44}{50}x$$
$$50\bigg(32=\dfrac{8}{5}x-\dfrac{44}{50}x\bigg)$$
$$1600=80x-44x$$
$$\dfrac{1600}{36} = \dfrac{36x}{36}$$
$$\boxed{x=44.\overline{4}}$$
AI: There are unfortunately two possible interpretations of "average mark." Some teachers would think of the average mark as $\frac{44+32}{50+x}$. Essentially they would be computing a weighted average. In that case, solving $\frac{44+32}{50+x}=0.8$, we get $x=45$.
It is quite possible that this is the intended solution, for the alternative $\frac{1}{2}\left(\frac{44}{50}+\frac{32}{x}\right)=0.8$ does not yield a nice $x$. |
H: Writing a 2nd order PDE as a system of equations
I want to turn this 2nd order equation into a system of first order equations but I am unsure about whether I can get rid of the $u$ or not
$$u_{xy}-u_x+u_y+10u\, u_{xx}=0$$
To write this as a system of equations so I can determine whether its semi-linear, quasilinear or nonlinear I thought that
$\xi\equiv u_x$
$\eta \equiv u_y$
$$\xi_y-\xi+\eta+10u \xi_x=0$$
$$\xi_y=\eta_x$$
Form a system of 2 equations.
But $u$ is still in it? Does this say something about whether it is quasi-linear or not? Is $u\equiv u(x,y)$ still? If it is then the equation is in the form
$\xi(x,y)_y+f(x,y)\xi(x,y)_x+g(\xi,\eta)=0$
I cannot tell what classification these equations have.
AI: As far as I can tell, "quasilinear" means "linear in the highest-order derivatives".
Here the highest-order derivatives are $u_{xy}$ and $u_{xx}$, and the equation is indeed linear in those. The fact that the coefficient of $u_{xx}$ involves $u$ is no problem for quasilinear. On the other hand, since this coefficient does involve $u$ rather than depending only on the independent variables $x,y$, your equation is not semi-linear. |
H: How to solve these?
Inverse Trigonometric Functions
They are incomplete and I don't know how to complete them.
Who can help me?
1st
$$
\int\frac 1{ x \sqrt{x^{6} - 4}}dx
$$
I tried with:
$$u = x^3 $$
$$du= 3x^2dx$$
but I could not complete it.
2nd
oops, it is not $x^2$, it is only $x$:
$$
\int \frac 1{ x \sqrt{x-1}}dx
$$
$$u = \sqrt x $$
$$du= \frac 1{\sqrt x} dx$$
where is the $\sqrt x$ in $du$?
3
$$
\int \frac {e^x}{ \sqrt{4-e^x}}dx
$$
I think in this
$$u=e^x $$
$$du = e^xdx$$
My English is not great, but I can read English just fine.
AI: For $\int\frac 1{ x \sqrt{x^{6} - 4}}dx$, you didn't go far enough. Write it as
$$\int\frac{x^2}{ x^3 \sqrt{x^{6} - 4}}dx.$$
Make your substitution of $y = x^3$, so $dy = 3 x^2 dx$, and we get
$$\int \frac{dy}{3 y \sqrt{y^2-4}},$$
and this is essentially the same as $\int \frac 1{ x \sqrt{x^2-1}}dx$ with $4$ instead of $1$ (but write $4$ as $2^2$).
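Finishing the reduction with the standard $\int\frac{dy}{y\sqrt{y^2-2^2}}=\frac12\operatorname{arcsec}\frac y2+C$ suggests the candidate antiderivative $\frac16\operatorname{arcsec}\frac{x^3}{2}$ for the original integral; here is a numeric check of that guess (my computation, using $\operatorname{arcsec} u=\arccos\frac1u$):

    import math

    F = lambda x: math.acos(2 / x**3) / 6        # candidate antiderivative
    f = lambda x: 1 / (x * math.sqrt(x**6 - 4))  # original integrand
    for x in (1.4, 2.0, 3.0):
        h = 1e-6
        print((F(x + h) - F(x - h)) / (2*h), f(x))  # the two columns agree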
H: ZF Extensionality axiom
To familiarize myself with axiomatic set theory, I am reading Kenneth Kunen's The Foundations of Mathematics that presents ZF set theory. I haven't gotten really far since I am stuck at the axiom of Extensionality, stated as follows:
$$\forall x,y \; [\forall z(z \in x \leftrightarrow z \in y) \rightarrow x = y]$$
As far as I understood it, the purpose of this axiom is to state that every two sets that have exactly the same members are the same set. In terms of the above formula, when antecedent is true, the consequent needs to be true. What confuses me here is the case when the antecedent is false. Then, the consequent can be false or true, i.e., we can't say much about the consequent. But, don't we really want to say that in that case those two sets are not equal? More precisely, shouldn't we use equivalence instead of implication? If not, what is the reason?
AI: The reason the converse is not part of the axiom is that it already follows from the axioms of first-order logic: $x = y \implies (\varphi(x) \iff \varphi(y))$ for any formula $\varphi$. This is known as the substitution property of equality. |
H: What is $2 - 1 + 1$?
$2-1+1$; a fairly straightforward question, but I (well, not me, but Henry Reich) found something strange.
Most people would evaluate it as $2+(-1)+1 = 2$; however, this goes against the famed, and fairly standard B.E.D.M.A.S./P.E.D.M.A.S., which states that addition goes first, and then subtraction.
If this is the case, then the answer is $2 - (1 + 1) = 2-2 =0$.
Which is the correct answer, and why is the conventional way (B.E.D.M.A.S./P.E.D.M.A.S.) so ambiguous?
AI: This is why those abbreviations are poor. PEMA is a better one for that purpose.
By the way, normally it is PEMDAS (not PEDMAS) and it stands for Parentheses, Exponentiation, Multiplication and Division, Addition and Subtraction. It's just that the "and"s get lost in abbreviation.
H: Evaluating $\int_0^{\infty}\frac{2}{x^2-8x+15}dx$
I am trying to evaluate $\int_0^{\infty}\frac{2}{x^2-8x+15}dx$. Factoring and using partial fraction decomposition, I have found that the indefinite integral is:
$$\ln{|x-5|}-\ln{|x-3|} + C$$
But when evaluating the improper integral, I don't know how to deal with the difference between two diverging limits, or even whether or not it even converges. My intuition tells me that since the denominator of the original integral is a second degree polynomial, it converges, but I'm not entirely sure.
$$\lim_{b\to\infty}{(\ln{(b-5)}-\ln(b-3))}$$
How do you evaluate this limit?
AI: Combine and you have
$$ \log(b -5) - \log(b - 3) = \log\left({b - 5 \over b - 3}\right).$$
Exploit the continuity of the log function at 1 to finish.
This is only useful if your start point for integrating is beyond the nasty places $x = 3 $ and $x = 5$. Otherwise integrating across these points will render the integral nonexistent. |
H: $a^b = c$, is it possible to express $b$ without logarithms?
$ a^b = c $
is it possible to express b without logarithms?
AI: Not without more information, since finding $b$ in this case is what logarithms are. Maybe in some special cases something can be said. |
H: Subset of $GL(n,R)$
I'm trying to understand why the subset of $GL(n,\mathbb{R})$ formed by the block-matrices of the following type:
$$\begin{pmatrix} A & B \\ 0 & C \end{pmatrix}$$ where
$$A \in GL(k,\mathbb{R}),~ C \in GL(n-k,\mathbb{R}),~ B \in M(k,n-k,\mathbb{R}).$$
is closed in $GL(n,\mathbb{R})$. I first tried to see this subset as inverse image of a closed subset of $\mathbb{R}$ via the determinant function, and then it would be closed (since $\det$ is continuous), but I couldn't do it. Can you help me?
AI: First, one needs to be specific about what topology you're interested in. I'm going to assume it is the Zariski topology (although this answer also works for the topology induced by the standard topology on $\mathbb{R}^{n^2}$). The set $\mathrm{GL}(n,\mathbb{R})$ is considered as a subset of $\mathbb{R}^{n^2}$. Then note that your set can be defined as the vanishing set of a set of polynomials. Precisely, it is the vanishing set of the coordinates corresponding to the entries of the lower left block matrix. (Note that if the lower left block is zero and the matrix is invertible, then the blocks on the diagonal must automatically be invertible, so your other conditions are automatic.) Since the vanishing set of a set of polynomials is closed, you're done. |
H: Infinite Series Problem Using Residues
Show that $$\sum_{n=0}^{\infty}\frac{1}{n^2+a^2}=\frac{\pi}{2a}\coth\pi a+\frac{1}{2a^2}, a>0$$
I know I must use the summation theorem, and I calculated the residue, which is:
$$\operatorname{Res}\left(\frac{\pi\cot(\pi z)}{z^2+a^2}, \pm ai\right)=-\frac{\pi}{2a}\coth\pi a$$
Now my question is: how do I get the last term $+\frac{1}{2a^2}$ after using the summation theorem?
AI: The method of residues applies to sums of the form
$$\sum_{n=-\infty}^{\infty} f(n) = -\sum_k \text{res}_{z=z_k} \pi \cot{\pi z}\, f(z)$$
where the $z_k$ are the poles of $f$ that are not integers. So when $f$ is even in $n$, you may express the two-sided sum as follows:
$$2 \sum_{n=1}^{\infty} f(n) + f(0)$$
For this case, $f(z)=1/(z^2+a^2)$ and the poles $z_{\pm}=\pm i a$ and using the fact that $\sin{i a} = i \sinh{a}$, we get
$$\sum_{n=-\infty}^{\infty} \frac{1}{n^2+a^2} = \frac{\pi}{a} \text{coth}{\pi a}$$
The rest is just a little more algebra. |
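To spell out that algebra (my own completion): since $f(0)=\frac{1}{a^2}$ and the two-sided sum equals $2\sum_{n=1}^{\infty}f(n)+f(0)$,
$$\sum_{n=0}^{\infty}\frac{1}{n^2+a^2}=\sum_{n=1}^{\infty}\frac{1}{n^2+a^2}+\frac{1}{a^2}=\frac{1}{2}\left(\frac{\pi}{a}\coth\pi a-\frac{1}{a^2}\right)+\frac{1}{a^2}=\frac{\pi}{2a}\coth\pi a+\frac{1}{2a^2}.$$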
H: an example to show separability of a Banach space does not imply separability of the dual space
$X$ is a Banach space and it is separable, is there any simple counterexample to show the dual space $X^\ast$ is not separable?
AI: The dual of $\ell^1$ (clearly separable by finite sequences of rational numbers) is $\ell^\infty$ (clearly not separable, as you have $\mathfrak c$-many disjoint balls of radius $1/2$ around indicator functions). |
H: Kernel of the evaluation map on a power series ring
Let $R$ be a commutative ring with unity and $r \in R$ a nilpotent element. Is it true that if $f \in R[[\epsilon]]$ satisfies $f(r) = 0$, then $(\epsilon - r) | f$ in $R[[\epsilon]]$? I tried solving for the coefficients of $f/(\epsilon - r)$ inductively and got myself confused.
AI: Yes. Since $r$ is nilpotent, the map $\phi:f(\epsilon) \mapsto f(\epsilon-r)$ is an automorphism of the power series ring (the substitution makes sense because only finitely many terms contribute to each coefficient), and it is definitely true that if $f(0)=0$ then $\epsilon$ divides $f$. Now apply this to $\phi^{-1}(f)=f(\epsilon+r)$, which vanishes at $0$: pushing the factorization $f(\epsilon+r)=\epsilon\, h(\epsilon)$ back through $\phi$ gives $f(\epsilon)=(\epsilon-r)\,h(\epsilon-r)$.
H: The sum of the integration of g and $g^{-1}$
Let $g$ be a strictly increasing continuous function mapping $[a,b]$ onto
$[A,B]$, and, as usual, let $g^{-1}: [A,B] \to [a,b]$ denote its inverse function.
Use geometric insight to visualize the equation $\int_a^b g + \int_A^B g^{-1} = bB - aA$.
Apply this to the function $g(\theta) = \tan \theta$ on the interval $[0,b]$ for a fixed $b \in (0,\pi/2)$.
Can someone give me some help on how to set this up?
AI: If you draw the graph of $g$ on the $x,y$-coordinate system, it contains exactly the $(x,g(x))$ points, $x\in [a,b]$. But these are the same as the $(g^{-1}(y),y)$ points, for $y\in [A,B]$.
So that, $\int_A^Bg^{-1}=\int_A^Bg^{-1}(y)\,dy$ appears on the picture horizontally, as the area from the $y$-axis to the graph of $g$.
Subtracting the two rectangles, $(\int_a^bg\,-(b-a)A)\ +\ (\int_A^Bg^{-1}\,-(B-A)a )$ we get the area of the rectangle $[a,b]\times [A,B]$. |
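For the requested application (my own computation, as a sketch): take $g(\theta)=\tan\theta$ on $[a,b]=[0,b]$, so $A=0$ and $B=\tan b$, and the equation reads
$$\int_0^b\tan\theta\,d\theta+\int_0^{\tan b}\arctan t\,dt=b\tan b.$$
Since $\int_0^b\tan\theta\,d\theta=\ln\sec b$, this yields $\int_0^{\tan b}\arctan t\,dt=b\tan b-\ln\sec b$.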
H: $\mathcal{P}(\omega \times \omega)$ contains a copy of every countable ordinal
I am trying to understand this proof of the existence of an uncountable ordinal. I don't see why $\mathcal{P}(\omega \times \omega)$ contains a copy of every countable ordinal as it is said.
For example, what element of $\mathcal{P}(\omega \times \omega)$ would correspond to $\omega\cdot 2$ ?
AI: Let $\alpha$ be a countable ordinal, and let $f\colon\omega\to\alpha$ be a bijection. Then the relation $\{\langle m,n\rangle\mid f(m)\in f(n)\}$ is a well-ordering of $\omega$ with order type $\alpha$ (with $f$ being the isomorphism), and it is an element of $\mathcal P(\omega\times\omega)$.
If you want a particular relation which is isomorphic to $\omega\cdot2$, take the following: $$\{\langle m,n\rangle\mid (m<n\text{ and }m\equiv n\pmod 2)\lor m\text{ is odd, and }n\text{ is even}\}.$$ |
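To see on an initial segment that this relation really puts the odd numbers first (in their usual order) and then the evens, giving order type $\omega+\omega$, here is a small Python check of my own:

    def rel(m, n):
        return (m < n and m % 2 == n % 2) or (m % 2 == 1 and n % 2 == 0)

    N = 12
    listing = sorted(range(N), key=lambda n: (n % 2 == 0, n))
    print(listing)  # [1, 3, 5, 7, 9, 11, 0, 2, 4, 6, 8, 10]
    for i in range(N):              # rel agrees with this listing
        for j in range(N):
            if i != j:
                assert rel(listing[i], listing[j]) == (i < j)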
H: mixture problem
From Stewart, Precalculus, $5$th ed, p.$71$, q.$55$
The radiator in a car is filled with a solution of $60\%$ antifreeze and $40\%$ water. The manufacturer of the antifreeze suggests that, for summer driving, optimal cooling of the engine is obtained with only $50\%$ antifreeze. If the capacity of the radiator is $ 3.6$L, how much coolant should be drained and replaced with water to reduce the antifreeze concentration to the recommended level?
I don't think there is a need to set up a model, since the currently concentration of antifreeze is at $60\%$ and desired contraction is $50\%$, the difference is $10\%$ of the total capacity of the radiator.
$10\%\times3.6L = 0.36L$
$0.36$L to be drained from the original $60\%$ concentration of $\dfrac{2.16}{3.6}L$ and replaced with water would give $50\%$ concentration at $\dfrac{1.8}{3.6}L$
The answer at the back of the book says $0.6$L to be drained. I don't understand why this is so, if $0.6$L is drained the new concentration would be $43.333\%$.
AI: Let $x$ be the amount in litres of coolant replaced by water.
Amount of water originally in the coolant: $0.4 \times 3.6$L
Amount of water removed when $x$ amount of coolant is removed: $(0.4\times x)$ L
Note that the removal of $x$ amount of coolant would result in the removal of fluids according to their proportion.
Amount of water when $x$ amount of water is added: $\left(0.4 \times 3.6-(0.4\times x)+x\right)$L $=(1.44+0.6x)$L
Amount of antifreeze originally in the coolant: $0.6 \times 3.6$L
Amount of antifreeze removed when $x$ amount of coolant is removed: $(0.6\times x)$ L
Amount of antifreeze when $x$ amount of water is added: $\left(0.6 \times 3.6-(0.6\times x)\right)$L $=(2.16-0.6x)$L
For equal fluid proportions, the amount of fluids should be equal.
$\therefore 1.44+0.6x=2.16-0.6x$
or $x=0.6$L |
H: Let $f :\mathbb{R}→ \mathbb{R}$ be a function such that $f^2$ and $f^3$ are differentiable. Is $f$ differentiable?
Let $f :\mathbb{R}→ \mathbb{R}$ be a function such that $f^2$ and $f^3$ are differentiable. Is $f$ differentiable?
Similarly, let $f :\mathbb{C}→ \mathbb{C}$ be a function such that $f^2$ and $f^3$ are analytic. Is $f$ analytic?
AI: In short: not necessarily in the real case, but yes in the complex case.
In the real case like in the complex one, $f$ is differentiable whenever it does not vanish and
$$
f'=\left(\frac{f^3}{f^2}\right)'=\frac{(f^3)'f^2-f^3(f^2)'}{f^4}.
$$
Real case: if $f$ vanishes, it no longer needs to be differentiable. For an alternative example to $f(x)=|x|$, observe that
$$
f(x)=x\sin\left(\frac{1}{x}\right)\quad \forall x\neq 0\qquad f(0)=0
$$
is not differentiable at $0$, while $f^2$ and $f^3$ are differentiable everywhere.
Complex case: the zeros of $f^2$ and those of $f$ coincide. If $f$ is constant equal to $0$, the result is clear. Otherwise, its zeros are isolated since $f^2$ is holomorphic non constant equal to $0$. On the open set which is the complement of these zeros, the argument above shows that $f$ is holomorphic. And $f^2$, whence $f$, is bounded near each zero. So these are removable singularities. Hence $f$ is holomorphic on its domain.
Edit: for a different argument, see this thread.
Note: if you replace the assumptions by $f^2$ (or $f^n$, $n\geq 2$, more generally) differentiable and $f$ continuous, you get the same conclusions in both cases. Indeed, where $f(x_0)\neq 0$, we have
$$
\frac{f(x)-f(x_0)}{x-x_0}=\frac{f^2(x)-f^2(x_0)}{x-x_0}\cdot\frac{1}{f(x)+f(x_0)}\longrightarrow \frac{(f^2)'(x_0)}{2f(x_0)}.
$$
But if you remove the continuity assumption on $f$, the complex case can fail as well. Just pick a random square root of $z$. |
H: Linear polynomials of finite fields
I have a final tomorrow, and I was looking over some exercises in my textbook. However, I can't seem to work this problem out.
Let $F$ be a field of $p^n$ elements and let $\alpha \in F^*$, where $F^*=F-\{0\}$.
Show $(x- \alpha)(x- \alpha^p)(x- \alpha^{p^2})\cdots(x- \alpha^{p^{n-1}}) \in \mathbb{Z}_p[x]$. Also, my text has some weird notation, $\mathbb{Z}_p$ is shorthand for $\mathbb{Z}/p\mathbb{Z}$.
It hints that I need to show both $\alpha+\alpha^{p}+\alpha^{p^2}+\cdots+\alpha^{p^{n-1}}\in\mathbb{Z}_p$ and $\alpha\alpha^{p}\alpha^{p^2}\cdots\alpha^{p^{n-1}}\in\mathbb{Z}_p$, but I'm having trouble with it. Help would be much appreciated.
AI: In a field of characteristic $p$, $(a-b)^p = a^p-b^p$.
Now,
$$\left[\prod_{i=0}^{n-1}\left(x-\alpha^{p^i}\right)\right]^p
= \prod_{i=0}^{n-1}\left(x-\alpha^{p^i}\right)^p
= \prod_{i=0}^{n-1}\left(x^p-\alpha^{p^{i+1}}\right)
= \prod_{i=1}^{n}\left(x^p-\alpha^{p^i}\right)
= \prod_{i=0}^{n-1}\left(x^p-\alpha^{p^i}\right)$$
where the rightmost equality follows from the fact that $\alpha^{p^n} = \alpha$ for all $\alpha \in \mathbb F_{p^n}$. Thus we have that
$$f(x) = \prod_{i=0}^{n-1}\left(x-\alpha^{p^i}\right) = \sum_{j=0}^n f_jx^j$$
enjoys the property that $[f(x)]^p = f(x^p)$. But,
$\displaystyle [f(x)]^p = \left[\sum_{j=0}^n f_jx^j\right]^p = \sum_{j=0}^n f_j^px^{jp}$
equals
$\displaystyle f(x^p) = \sum_{j=0}^n f_jx^{jp}$ if and only if $f_j^p = f_j, 0 \leq j \leq n$, that is, $f_j \in \mathbb F_p$ and hence $f(x) \in \mathbb F_p[x]$. |
H: How to frame this set of linear equations?
I have the following set of equations, as an example
$2x + 1y + 2z = A$
$0x + 2y + 2z = A$
$1x + 2y + 1z = A$
I assume this can be rewritten as a matrix? How can I check if a solution exists such that x, y, and z are nonnegative? In this case I don't believe a solution exists but how can I verify it without manually testing values?
AI: Use an augmented coefficient matrix, and obtain row-echelon form (using elementary row operations), to see if a solution exists, and/or if the system is inconsistent. If inconsistent, then no solution exists.
$2x + 1y + 2z = A$
$0x + 2y + 2z = A$
$1x + 2y + 1z = A$
$$
M = \begin{pmatrix}
2 & 1 & 2 & A \\
0 & 2 & 2 & A \\
1 & 2 & 1 & A
\end{pmatrix}
$$
Subtract 1/2 × (row 1) from row 3
Multiply row 3 by 2
Swap row 2 with row 3
Subtract 2/3 × (row 2) from row 3
Multiply row 3 by 3
Subtract 1/3 × (row 3) from row 1
Subtract 1/3 × (row 2) from row 1
Divide row 1 by 2
Divide row 2 by 3
Divide row 3 by 6
$$\text{Result}:\quad
\begin{pmatrix}
1 & 0 & 0 & A/6\\
0 & 1 & 0 & A/3\\
0 & 0 & 1 & A/6
\end{pmatrix}
$$
If you row reduce carefully (and you should attempt this, so you can gauge your own technique), you should obtain the following:
$$x = A/6,\; y = A/3, \; z = A/6$$
So for any given value of A, you will have a unique solution for $\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} A/6 \\ A/3 \\ A/6 \end{pmatrix}$.
So long as $A\geq 0$, the solution will be non-negative.
For review: see row echelon form and row operations. |
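A quick machine check of the reduction (a NumPy sketch of my own, with $A=1$; scale linearly for other values of $A$):

    import numpy as np

    M = np.array([[2., 1., 2.],
                  [0., 2., 2.],
                  [1., 2., 1.]])
    b = np.ones(3)                    # right-hand side for A = 1
    print(np.linalg.solve(M, b))      # [0.1666..., 0.3333..., 0.1666...]
    print(np.linalg.matrix_rank(M))   # 3, so the solution is unique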
H: Rectangle is divided into 14 identical smaller rectangles of width $x$. What is its perimeter in terms of $x$?
Rectangular region $ABCD$ shown below is partitioned into 14 identical small rectangles, each of which has width $x$.
What is the perimeter of $ABCD$ in terms of $x$?
I have used the rectangle perimeter formula but my answer comes out wrong. What is the shortest method to find the answer?
AI: If each small rectangle has width $x$, then we can see that the length of the small rectangles is 7 widths of the small rectangle, thus $7x$. Therefore, sides $AP$, $DQ$ and $BC$ have length $7x$. Finally, $AD$, $QC$, and $PB$ have length $7x$. Thus, we have six sides with length $7x$, so the perimeter has length $6\cdot 7x = 42x$. |
H: How to find the factorial of a fraction?
From what I know, the factorial function is defined as follows:
$$n! = n(n-1)(n-2) \cdots(3)(2)(1)$$
And $0! = 1$. However, this page seems to be saying that you can take the factorial of a fraction, like, for instance, $\frac{1}{2}!$, which they claim is equal to $\frac{1}{2}\sqrt\pi$ due to something called the gamma function. Moreover, they start getting the factorial of negative numbers, like $-\frac{1}{2}! = \sqrt{\pi}$
How is this possible? What is the definition of the factorial of a fraction? What about negative numbers?
I tried researching it on Wikipedia and such, but there doesn't seem to be a clear-cut answer.
AI: The gamma function is defined by the following integral, which converges for real $s>0$:
$$\Gamma(s)=\int_0^\infty t^{s-1}e^{-t}dt.$$
The function can also be extended into the complex plane, if you're familiar with that subject. I'll assume not and just let $s$ be real.
This function is like the factorial in that when $s$ is a positive integer, say $s=n$, it satisfies $\Gamma(n)=(n-1)!$. It generalizes the factorial in the sense that it is the factorial for positive integer arguments, and is also well-defined for positive rational (and even real) numbers. This is what it means to take a "rational factorial," but I would hesitate to call it that. Many functions have those two properties, and $\Gamma$ is chosen out of all of them because it is the most useful in other applications. Rather than the notation used in that article you refer to, it would be more accurate for you to say that "the gamma function takes these values for these arguments." Gamma is not a function that intends to generalize factorials; rather, generalizing factorials came along as something of an accident following the definition. Its true purpose is deeper.
As for why $\Gamma(1/2)=\sqrt{\pi}$, this comes out of an interesting property of the $\Gamma$ function: some of them are here http://en.wikipedia.org/wiki/Gamma_function#Properties. The property you are interested in is the reflection formula: $$\Gamma(1-z)\Gamma(z)=\frac{\pi}{\sin(\pi z)}.$$ Set $z=1/2$ in the formula to get the desired identity.
If you want to learn more about the gamma function, the hard way is to learn a lot more math, in particular real and complex analysis. An easier way is to read this excellent set of notes: http://www.sosmath.com/calculus/improper/gamma/gamma.html. |
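If you want to experiment, the gamma function ships with Python's standard library; a small illustration of the values discussed above:

    import math

    print(math.gamma(0.5), math.sqrt(math.pi))      # Gamma(1/2) = sqrt(pi)
    print(math.gamma(1.5), 0.5*math.sqrt(math.pi))  # "(1/2)!" = Gamma(3/2)
    print(math.gamma(5))                            # 24.0 = 4!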
H: Question about notation in a theorem about Galois theory from Lang's Algebra (chapter 6 §1, corollary 1.16)
I have a question about the notation in an assertion in Lang's Algebra, chapter 6 §1, corollary 1.16:
Let $K/k$ be finite Galois with group $G$, and assume that $G$ can be written as a direct product $$G=G_1 \times \cdots \times G_n.$$ Let $K_i$ be the fixed field of $G_1 \times \cdots \times \{1 \} \times \cdots \times G_n$ where the group with $1$ element occurs in the $i$th place. Then $K_i/k$ is Galois, and $K_{i+1} \cap (K_1 \cdots K_i) = k$...
Very well then; suppose $n=3$ in the above situation. Then $K_1/k$ is Galois and $K_2 \cap K_1 = k$. Also, $K_3/k$ is Galois and... Well, what would $K_{3+1}$ be? In the case $K_1$ I suppose is meant that $K_2 \cap K_1 K_3 = k$, but in the case $K_3$ I have no idea. What does he mean? Thank you.
AI: The index $i$ should be restricted to $1 \le i < n$ (so there is no $i=3$ in the case $n=3$).
In the case $n=3$, the two cases are $i=1$ and $i=2$. If $i=1$, the theorem is saying that $K_2 \cap K_1 = k$. If $i=2$, the theorem is saying that $K_3 \cap K_1 K_2 = k$.
The result is a consequence of the correspondence theorem between subgroups of the Galois group of $K/k$, and subfields of $K$ containing $k$. If subgroups $H, H'$ have fixed fields $E, E'$ then the intersection $H \cap H'$ has fixed field $EE'$ and the subgroup $\langle H, H' \rangle$ has fixed field $E \cap E'$.
H: Dominant morphism, equal dimensions: always finite?
Let $f:X\to Y$ be a dominant morphism of varieties (integral separated schemes of finite type over an algebraically closed field) such that dim $X$ = dim $Y$.
Question: must f be finite?
It seems that f must have finite fibers by looking at dimensions. So perhaps it is the same question to ask if $f$ must be proper, because finite fibers + proper = finite. I am not familiar enough with standard non-examples of properness to have a good intuition on this.
Finally, if it makes any difference to assume X and Y are quasi-projective, please do so.
Edit: After Steve's answer to the original question, I would like to ask the same question for a self-morphism $f:X\to X$.
AI: No, $f$ might be an inclusion of an open set; for example, the morphism corresponding to the (not module finite) extension $\mathbb{C}[x] \subseteq \mathbb{C}[x,x^{-1}]$.
Edited in light of edit to question:
Here's another type of thing that can happen: look at the map $(x,y) \mapsto (x,xy)$ from $\mathbb{C}^2$ to itself. This is evidently dominant, but it does not have finite fibers (since the fiber over the origin is a line). I bet one can find an example with finite fibers which is not finite, but I can't think of one at the moment. |
H: Right triangles with integer sides
Most of you know these triples:
$3: 4 :5$
$5: 12 :13$
$8: 15 :17$
$7: 24 :25$
$9: 40 :41$
More generally we can construct such triangles such as
$$2x:x^2-1:x^2+1$$
My question is: why does one of the sides seem to always be prime? (When there is no common divisor.)
AI: It's great that you are interested and looking for patterns, so don't get discouraged. As one of my teachers is fond of saying,
If all your conjectures are true, you aren't trying hard enough.
So, here are the first ten counterexamples:
$$\begin{array}{c|c}
x& 2x:x^2-1:x^2+1\strut\\\hline
8 & 16:63: 65\strut\\
12 & 24:143: 145\strut\\
18& 36:323: 325\strut\\
22& 44: 483: 485\strut\\
28& 56: 783: 785\strut\\
30& 60: 899: 901\strut\\
32& 64: 1023: 1025\strut\\
34& 68: 1155: 1157\strut\\
38& 76: 1443: 1445\strut\\
42& 84: 1763: 1765
\end{array}$$
Mathematica code:
    (* collect the first ten primitive triples whose side x^2+1 is composite;
       for x > 2, the sides 2x and x^2-1 = (x-1)(x+1) are always composite *)
    listofcounterexamples = {};
    For[x = 1, x < 100, x++,
      If[Length[listofcounterexamples] == 10, Break[]];  (* stop at ten *)
      If[GCD[2x, x^2-1, x^2+1] == 1 && Not[PrimeQ[x^2+1]],
        AppendTo[listofcounterexamples, {x, 2x, x^2-1, x^2+1}]]];
    listofcounterexamples
H: Prove that the index of the set of left coset in a cyclic group is finite
If $G$ is cyclic, show $|G/H| < \infty$ for any subgroup $H$ except the identity subgroup.
I already know that any subgroup of a cyclic group is also cyclic, but I have no idea how to prove the quotient of $G$ is finite, especially since Lagrange's theorem only applies to finite groups.
Thanks everyone for your helps, truly appreciate it!
AI: I will outline a solution. I think you should be able to fill in the details.
Let $G$ be cyclic and generated by $g$. Then, as we know any subgroup is also cyclic, if $H$ is a subgroup, it is generated by $g^n$ for some $n\in \mathbb N$.
I claim that the index of $H$ in $G$ is $n$, with cosets $H, gH, \cdots, g^{n-1}H$. It suffices to show that any element lies in one of those cosets. Fix an element $g^a$ and find the remainder when $a$ is divided by $n$. Can you see how this tells you which coset it belongs to? |
H: Find the volume of the region bounded by $z = x^2 + y^2$ and $z = 10 - x^2 - 2y^2$
So these are two paraboloids. My guess is I would want to find the intersection of these two which would be $2x^2 + 3y^2 = 10$ and construct a triple integral based on its projection. No idea how to do this but the answer comes out to be $\dfrac{50\pi}{\sqrt{6}}$.
AI: $f(x, y) = x^2 + y^2$ and $g(x, y) = 10- x^2 -2y^2$. Now, as you said, you need to find out where they intersect. That means we want $x^2 + y^2 = 10- x^2 -2y^2$, which means $2x^2+3y^2 = 10$. If you graph this in 2D you will see that it is an ellipse. Following the analogy with the 2D case, we have to compute
$\iint_D \big(g(x, y) - f(x, y)\big)\,dx\,dy = \iint_D (10 - 2x^2 -3 y^2)\, dx\,dy$, where $D$ is the interior of the ellipse. Now all that you need to find out is the limits of integration. Can you do it?
Here is the intersection of $f$ and $g$ (figure omitted; it projects onto the ellipse $2x^2+3y^2=10$ in the $xy$-plane).
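In case it helps, here is one way to finish (my own computation, so treat it as a sketch): substitute $x=\sqrt5\,r\cos\theta$, $y=\sqrt{10/3}\,r\sin\theta$, so that $2x^2+3y^2=10r^2$, the region $D$ becomes $0\le r\le 1$, and the Jacobian is $\sqrt{50/3}\,r$. Then
$$\iint_D(10-2x^2-3y^2)\,dx\,dy=\int_0^{2\pi}\!\!\int_0^1 10(1-r^2)\sqrt{\tfrac{50}{3}}\,r\,dr\,d\theta=10\sqrt{\tfrac{50}{3}}\cdot2\pi\cdot\tfrac14=\frac{50\pi}{\sqrt6},$$
matching the stated answer.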
H: Help to understand the ring of polynomials terminology in $n$ indeterminates
In Hungerford's Algebra, page 150, the author defines a ring of polynomials in $n$ indeterminates; he then defines the operations in this ring with a theorem (Theorem 5.3), and after that gives another definition of a ring of polynomials in $n$ indeterminates. (These excerpts appear as images in the original post and are omitted here.)
My problem is with the terminology in the theorem which comes after those definitions:
If I understood well, the author defines a function $x_i$ which, when applied to $\epsilon_i$, gives the identity in $R$. Following this reasoning, what is $x_i^k(k\epsilon_i)$? Is this the composition of $x_i$ with itself?
I'm very confused by this terminology; I need help.
Thanks a lot!
AI: It's not the composition, but just the multiplication of $x_i$ with itself $k$ times (as usual). Note that Theorem 5.3 defines a multiplication operation on $R[x_1,\ldots,x_n]$, via
$$(fg)(u)=\sum_{\large\substack{v+w=u\\ v,w\in\mathbb{N}^n}}f(v)g(w).$$
So for example, writing $x_i^2$ for the product $x_ix_i$,
$$(x_i^2)(2\epsilon_i)=\sum_{\substack{v+w=2\epsilon_i\\ v,w\in\mathbb{N}^n}}x_i(v)x_i(w)=\bigg[x_i(0)x_i(2\epsilon_i)\bigg]+\bigg[x_i(\epsilon_i)x_i(\epsilon_i)\bigg]+\bigg[x_i(2\epsilon_i)x_i(0)\bigg]$$
$$=[0\cdot 0]+[1\cdot 1]+[0\cdot 0]=1$$
(where we have used the defining property $x_i(\epsilon_i)=1$ and $x_i(u)=0$ for any other $u$).
The correspondence with our usual conception of polynomials is as follows: for any polynomial $F\in R[x_1,\ldots,x_n]$, the corresponding function $\mathbf{F}:\mathbb{N}^n\to R$ is defined by
$$\mathbf{F}(k_1\epsilon_1+\cdots+k_n\epsilon_n)=\text{the coefficient of }x_1^{k_1}\cdots x_n^{k_n}\text{ in the polynomial }F.$$
Let's do an example in $\mathbb{Z}[s,t]$ (I'm using $s$ and $t$ to avoid confusion with the functions $x_1,x_2$):
$$F=2+t+5s^2$$
corresponds to the function $\mathbf{F}:\mathbb{N}^2\to \mathbb{Z}$ defined by
$$\mathbf{F}(u)=\begin{cases}
2 & \text{if }u=(0,0)\\
1 & \text{if }u=(0,1),\\
5 & \text{if }u=(2,0),\\
0 & \text{otherwise},
\end{cases}$$
and we can see that (as functions from $\mathbb{N}^2$ to $\mathbb{Z}$),
$$\mathbf{F}=2+x_1+5x_2^2.$$ |
H: Proving one function is greater than another
How can I prove $f(x)$ $>$ $g(x)$ for all $x > 0$ given $f(x) = (x+1)^{2}$ and $g(x) = 4qx$ where $q$ is a constant in $(0, 1)$?
My approach was to show that $(x+1)^2 > 4qx$ for the interval endpoints, e.g. $q=0$ and $q=1$. E.g. $(x+1)^2 \geq 4x$ for all $x$ and $(x+1)^2 > 0$ for all $x$. However, $q \neq 0,1$ so $f(x) > g(x)$ for all $x$. However, I'm looking for something more mathematically rigorous. Any suggestions?
AI: Since $q \in (0,1)$ and $x > 0$, we know $4xq < 4x$, so it suffices to prove the stronger claim $(x+1)^2 \geq 4x$ for all $x > 0$. The latter is equivalent to showing $(x+1)^2 - 4x \geq 0$. To that end, we have
\begin{align*}
(x+1)^2 - 4x &= x^2 + 2x + 1 - 4x\\
&= x^2 - 2x + 1\\
&= (x - 1)^2\\
&\geq 0.
\end{align*} |
H: Self Adjoint operator $\Rightarrow$ Idempotent Operator?
If $P\in\mathcal{L}(H,H)$, with $H$ a Hilbert space, is such that $P = P^*$, is it possible to show that $P^2 = P$?
If that is possible, then $P$ is a projection operator, right?
Thanks in advance.
AI: An operator $P$ satisfying $P = P^{\ast}$ is called self-adjoint. There are plenty of self-adjoint operators that do not satisfy $P^2 = P$; for example, $P = 2I$, as proximal mentioned.
An operator $P$ satisfying $P = P^2$ is called a projection. Not all projections are self-adjoint, such as $\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}$.
An operator that satisfies both is called an orthogonal projection. |
H: If ${ x }^{ 4 }+{ y }^{ 2 }=1$ then $x$ and $y$ can be both rational numbers?
Can you give two numbers $x,y\in\mathbb{Q}$ such that ${ x }^{ 4 }+{ y }^{ 2 }=1$?
I don't know whether such numbers exist or not. I derived this equation by asking: if $\sin { \alpha } ={ x }^{ 2 }$ for some $x\in \mathbb{Q}$, then for which $\alpha$ is $\cos{ \alpha }=y$ with $y\in \mathbb{Q}$?
Edit: $x,y\neq0$
AI: Write $x=\dfrac{p}{q}$ and $y=\dfrac{m}{n}$ in lowest terms, so that $\gcd (p,q)=\gcd(m,n)=1$; since $x,y\neq0$ we may also take $q \neq 1$ and $n \neq 1$ (if $n=1$ then $y=\pm1$ forces $x=0$, and similarly for $q=1$).
$\left(\dfrac{p}{q}\right)^4=1-\left(\dfrac{m}{n}\right)^2=\left(1-\dfrac{m}{n}\right)\left(1+\dfrac{m}{n}\right)=\left(\dfrac{n-m}{n}\right)\left(\dfrac{n+m}{n}\right)=\dfrac{(n-m)(n+m)}{n^2}.$
Let $d>1$ be a divisor of $n$; using the fact that $\gcd(m,n)=1$, we get $d \nmid(n+m)$ and $d \nmid (n-m)$, so the fraction $\dfrac{(n-m)(n+m)}{n^2}$ is in lowest terms. Comparing with $\dfrac{p^4}{q^4}$, which is also in lowest terms, this forces $(n-m)(n+m)=p^4$ and $n^2 =q^4$, i.e. $n=q^2$ and $q^4-p^4=m^2$. But Fermat proved by infinite descent that $a^4-b^4=c^2$ has no solutions in positive integers (this is equivalent to his result that the area of a right triangle with integer sides is never a perfect square), so no such rational $x,y$ exist.
H: Approximating a Poisson distribution to a Normal distribution
I have the following problem I'm trying to solve:
I know that the number of complaints in a call center is a Poisson variable with $\lambda=18$ customers/hour, and that the probability of being able to solve a complaint is $0.35$. They ask me about the probability of having more than $50$ complaints without solution in $8$ hours.
I tried to solve it by obtaining a new Poisson process with $\lambda=0.35\times18$, which describes the number of unresolved complaints in a given amount of time.
So, to solve the problem I would have to find $F_x (x)$ and do this:
\begin{align}
P(\text{more than 50 complaints in 8 hours}) &= 1 - P (\text{at most 50 complaints in 8 hours}) \\&= 1 - F_X(50)
\end{align}
My problem is that I need to sum $51$ terms of this Poisson distribution (from $k=0$ to $50$) in order to find $F_X(50)$, which is exhausting, and I quite doubt this is the objective of the problem.
I think the problem has more to do with using a Normal distribution to get some sort of approximation, but I don't know how many calls there will be (I can't find how many terms I should solve, if I think that the variables I'm adding are each one of the calls, that I can think as being Bernoulli Variables, each one with a probability of $0.35$ of being solved).
I'd appreciate any help. Thank you very much.
AI: The number of unresolved complaints in $1$ hour does indeed (under our assumptions) have Poisson distribution with parameter $(0.35)(18)$. This assumes essentially that a complaint is either resolved instantly or remains unresolved, which is not quite reasonable.
Let $X$ be the number of unresolved complaints in $8$ hours. This then also has Poisson distribution, with parameter $\lambda=(8)(0.35)(18)=50.4$.
The normal distribution with mean $\lambda$ and variance $\lambda$ is a reasonably good approximation to the Poisson with the large parameter $\lambda=50.4$. So now we have a standard normal calculation to do. |
H: Summation involving subfactorial function
Inspired by this post:
Does the following series converge; if so, to what value does it converge? $$ \sum_{n = 2}^\infty \left|\frac{!n \cdot e}{n!} - 1\right|$$ I am looking for a closed form for the second question.
Note: !n denotes the subfactorial, also known as the number of derangments within $ \mathrm{S}_n $.
AI: (This is a corrected version)
I get that the sum converges, but do not know the value.
Since $!n = n!\sum_{k=0}^n\frac{(-1)^k}{k!}$,
$$\frac{!n \cdot e}{n!} = e \sum_{k=0}^n\frac{(-1)^k}{k!} = e \Big(\frac1{e}-\sum_{k=n+1}^{\infty}\frac{(-1)^k}{k!}\Big) = 1-e\sum_{k=n+1}^{\infty}\frac{(-1)^k}{k!}.$$
Starting as Carl Najafi did,
$\begin{align*}
\sum_{n=2}^{\infty}\left|\frac{!n\cdot e}{n!}-1\right|
&=\sum_{n=2}^{\infty}\left|\Big(1-e\sum_{k=n+1}^{\infty}\frac{(-1)^k}{k!}\Big)-1\right|\\
&=e\sum_{n=2}^{\infty}\left|\sum_{k=n+1}^{\infty}\frac{(-1)^{k+1}}{k!}\right|\\
&=e\sum_{n=2}^{\infty}\left|(-1)^{n}\sum_{k=n+1}^{\infty}\frac{(-1)^{k-n+1}}{k!}\right|\\
&=e\sum_{n=2}^{\infty}\left|(-1)^{n}\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{(n+k)!}\right|\\
&=e\sum_{n=2}^{\infty}\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{(n+k)!}\\
\end{align*}
$
The inner sum
$S(n)
= \sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{(n+k)!}
$
is an alternating series
with terms decreasing in absolute value to $0$,
so its sum lies between any two consecutive partial sums.
In particular,
taking the first two partial sums,
$\frac1{(n+1)!}
> S(n) > \frac1{(n+1)!} - \frac1{(n+2)!}
> \frac1{2(n+1)!}
$.
Therefore
$\sum_{n=2}^{\infty}\left|\frac{!n\cdot e}{n!}-1\right|
< e\sum_{n=2}^{\infty}\frac1{(n+1)!}
$
which easily converges.
My original answer
actually looked at
$\sum_{n=2}^{\infty}\left|!n\cdot e-n!\right|
= e\sum_{n=2}^{\infty} n!\,S(n)
$.
Since
$\frac1{n+1}
> n!\,S(n) > \frac1{n+1} - \frac1{(n+1)(n+2)}
\ge \frac1{2(n+1)}
$,
this sum diverges, but it can be shown to converge
if the absolute value signs are removed,
since the resulting series is alternating
with terms decreasing to $0$.
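As a numerical sanity check (my addition; the precision setting and the cutoff $n=50$ are arbitrary choices), the following Python/mpmath sketch tabulates partial sums of both series:

```python
from mpmath import mp, e, factorial

mp.dps = 80  # high precision: in doubles the tiny terms would drown in rounding noise

def subfactorial(n):
    """Number of derangements, via the recurrence !n = n*!(n-1) + (-1)^n."""
    d = 1  # !0 = 1
    for k in range(1, n + 1):
        d = k * d + (-1) ** k
    return d

convergent = 0  # partial sums of  sum |!n*e/n! - 1|
divergent = 0   # partial sums of  sum |!n*e - n!|
for n in range(2, 51):
    term = abs(subfactorial(n) * e / factorial(n) - 1)
    convergent += term
    divergent += factorial(n) * term

print(convergent)  # stabilizes quickly, near 0.476
print(divergent)   # still growing, roughly like e times a harmonic sum
```

The first partial sum settles almost immediately, while the second keeps creeping upward, matching the bounds above.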
H: Application of Urysohn's lemma
I am working on the following homework problem: if $X$ is a compact Hausdorff space with an open cover $\{U_\alpha\}_{\alpha\in A}$, then we can find a finite number of continuous functions $f_1,\dots,f_k$, with $f_i:X\to [0,1]$, such that $f_1(x)+\dots+f_k(x)=1$ for all $x$, and for each $i$ there exists an $\alpha_i$ such that $\overline{f_i^{-1}((0,1])}\subset U_{\alpha_i}$.
So these are my thoughts at the moment. Since $X$ is Hausdorff and compact, it is normal, so Urysohn's lemma applies: given any two disjoint closed sets, I can find a continuous function that is $0$ on one of the sets and $1$ on the other.
My intuition is telling me to use compactness and find $U_{\alpha_1},..., U_{\alpha_k}$ that cover $X$, and then by the last requirement of the problem we would need $f_i(x)=0$ if $x\notin U_{\alpha_i}$.
The problem would be very clear if I could write my space $X$ as a disjoint union of finitely many closed sets, because then I could just apply Urysohn's lemma and get my functions very easily, but I do not see how to apply the lemma using the given open cover. Any hints would be greatly appreciated.
AI: Such a family of functions is called a partition of unity (where unity = 1, because of the summing-to-$1$ property), and these exist for all paracompact Hausdorff spaces.
For a complete proof see, e.g., this: a compact Hausdorff space is paracompact Hausdorff (we can work with finite covers everywhere, which makes the proof only slightly easier).
Chapter 2 of the linked document handles the finite case, without going to paracompactness. |
H: Regular $T_2$ space which is not completely regular.
Theorem 10. of
Pontryagin's Topological Groups says that:
Every Hausdorff topological group is completely regular.
But does there exist a regular $T_2$ space which is not completely regular?
AI: This answer has a complete description of such a space, due to John Thomas and published in A Regular Space, Not Completely Regular, The American Mathematical Monthly, Vol. 76, No. 2 (Feb., 1969), pp. 181-182. |
H: Bounded sequence in Hilbert space contains weak convergent subsequence
In a Hilbert space $H$, if $\{x_n\}$ is a bounded sequence, then it has a weakly convergent subsequence.
Is there any short proof? Thanks a lot.
AI: Suppose $M$ bounds the sequence. Thinking of $H$ as sitting inside $H^{\ast\ast}$, for any $T \in H^\ast$ with $\|T\| \leq 1$ we have $|x_n(T)| = |Tx_n| \leq \|T\|\,\|x_n\| \leq M$, so the norms of the $x_n$, viewed as functionals on $H^\ast$, are bounded by $M$. Now apply Banach-Alaoglu to the ball of radius $M$ in $H^{\ast\ast}$. (To extract a convergent subsequence rather than just a subnet, restrict attention to the closed linear span of the $x_n$, which is separable, so the weak* topology on its bounded sets is metrizable; and since Hilbert spaces are reflexive, weak* convergence in $H^{\ast\ast}$ is the same as weak convergence in $H$.)
H: Is this an exact differential or not?
I have the 1-form
$$dz=2xy\, dx+(x^{2}+2y)\, dy$$
And I want to integrate it from $(x_{1},y_{1})$, to $(x_{2},y_{2})$.
If I'm not drunk, checking mixed partials I find that $dz$ is an exact differential. BUT, when I explicitly calculate the integral
$$\int_{\sigma}dz$$
where $\sigma$ is
i) $(x_{1},y_{1})\to(x_{2},y_{1})\to(x_{2},y_{2})$
ii) $(x_{1},y_{1})\to(x_{1},y_{2})\to(x_{2},y_{2})$
I find that the integral has a different value if I take trajectory ii) instead of i). Why does this happen?
Calculating the integral for i)
$\int dz=\int_{x_{1}}^{x_{2}}2xy_{1}\, dx+\int_{y_{1}}^{y_{2}}(x_{2}^{2}+2y)\, dy=y_{1}(x_{2}^{2}-x_{1}^{2})+x_{2}^{2}(y_{2}-y_{1})+y_{2}^{2}-y_{1}^{2}$
Calculating the integral for ii)
$\int dz=\int_{y_{1}}^{y_{2}}(x_{1}^{2}+2y)\, dy+\int_{x_{1}}^{x_{2}}2xy_{2}\, dx=x_{1}^{2}(y_{2}-y_{1})+y_{2}^{2}-y_{1}^{2}+y_{2}(x_{2}^{2}-x_{1}^{2})$
AI: The two values are in fact equal; matching colors show corresponding terms, and the remaining (purple and teal) terms cancel within each side:
$$\begin{array}{lr} & \color{Purple}{y_1x_2^2}-\color{Blue}{y_1x_1^2}+\color{Magenta}{x_2^2y_2}-\color{Purple}{x_2^2y_1}+\color{Green}{y_2^2-y_1^2} \\ & = \color{Teal}{x_1^2y_2}-\color{Blue}{x_1^2y_1}+\color{Green}{y_2^2-y_1^2}+\color{Magenta}{y_2x_2^2}-\color{Teal}{y_2x_1^2} \end{array}$$
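If you want a machine check, here is a small sympy sketch (my addition, not from the original answer) that verifies exactness and evaluates both broken-line paths; the result is consistent with the potential $z = x^2y + y^2$:

```python
import sympy as sp

x, y, x1, y1, x2, y2 = sp.symbols('x y x1 y1 x2 y2')
P, Q = 2*x*y, x**2 + 2*y  # dz = P dx + Q dy

# Equal mixed partials: the form is closed, hence exact on the plane.
assert sp.diff(P, y) == sp.diff(Q, x)

# Path i): (x1,y1) -> (x2,y1) -> (x2,y2)
I1 = (sp.integrate(P.subs(y, y1), (x, x1, x2))
      + sp.integrate(Q.subs(x, x2), (y, y1, y2)))

# Path ii): (x1,y1) -> (x1,y2) -> (x2,y2)
I2 = (sp.integrate(Q.subs(x, x1), (y, y1, y2))
      + sp.integrate(P.subs(y, y2), (x, x1, x2)))

print(sp.simplify(I1 - I2))  # 0: the two paths agree
print(sp.expand(I1))  # -x1**2*y1 + x2**2*y2 - y1**2 + y2**2, i.e. z = x**2*y + y**2 + C
```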