H: How to calculate $\sum_{i= 0}^{k-1}\left \{ \frac{ai+b}{k} \right \}$
How to calculate: $$F=\sum_{i= 0}^{k-1}\left \{ \frac{ai+b}{k} \right \}$$
where $a,b \in \mathbb Z$ and $(a,k)=1$; $\left \{ \frac{ai+b}{k} \right \} =$ fraction part of $\frac{ai+b}k$
This kind of problem is strange to me. Please help me solve it: give me a solution or a hint.
Thanks
AI: Since $(a,k)=1$, the numbers $ai$, and hence also the numbers $ai+b$, run through all residue classes modulo $k$ exactly once, and $\left\{\frac{m}{k}\right\}=\frac{r}{k}$ where $r$ is the remainder of $m$ upon division by $k$. Therefore, we can simplify
$$ F=\sum_{i=0}^{k-1}\frac ik=\frac1k\frac{k(k-1)}{2}=\frac{k-1}2.$$
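As a quick numeric sanity check of this identity (a sketch, using exact rational arithmetic via Python's `fractions`; the values of $a$, $b$, $k$ are arbitrary choices with $(a,k)=1$):

```python
from fractions import Fraction
from math import gcd, floor

def F(a, b, k):
    # sum of fractional parts of (a*i + b)/k for i = 0, ..., k-1
    total = Fraction(0)
    for i in range(k):
        t = Fraction(a * i + b, k)
        total += t - floor(t)
    return total

a, b, k = 7, 3, 12
assert gcd(a, k) == 1
print(F(a, b, k), Fraction(k - 1, 2))  # both are 11/2
```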
H: (simple) Expectation of random variable as a multipart function
Let the random variable $Y \in [0, \infty)$, a real number $\theta >0$, and the random variable $X$ such that $X = \theta - \min(\theta,Y)$, thus, $X \in [0, \theta]$. That is, $X = 0$ if $Y > \theta$ and $X = \theta-Y$ if $Y \leq \theta$. I would like to know if the expectation of $X$, $E[X] = E[\theta-Y|Y \leq \theta] P(Y \leq \theta) = (\theta-E[Y|Y\leq \theta])P(Y \leq \theta)=\theta P(Y \leq \theta)-\int_{0}^{\theta}y f_{Y}(y)dy$ is valid.
If so, since $E[X] >0$, then, $\theta P(Y \leq \theta) > \int_{0}^{\theta}y f_{Y}(y)dy$ (?) Thanks!
AI: Your computations are valid. The inequality at the end is direct since
$$
\theta P[Y\leqslant\theta]=\int_0^\theta \theta f_Y(y)\mathrm dy\geqslant\int_0^\theta y f_Y(y)\mathrm dy=E[Y;Y\leqslant\theta].
$$
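A Monte Carlo sketch of this identity, assuming $Y\sim\mathrm{Exp}(1)$ and $\theta=1$ (both arbitrary choices); for this case both sides equal $e^{-1}\approx 0.3679$:

```python
import random

random.seed(0)
theta, n = 1.0, 10**6
ys = [random.expovariate(1.0) for _ in range(n)]

lhs = sum(theta - min(theta, y) for y in ys) / n   # E[X]
p = sum(y <= theta for y in ys) / n                # P(Y <= theta)
tail = sum(y for y in ys if y <= theta) / n        # E[Y; Y <= theta]
print(lhs, theta * p - tail)  # both close to exp(-1)
```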
H: Symmetric Matrices Using Pythagorean Triples
Find symmetric matrices A =$\begin{pmatrix} a &b \\ c&d
\end{pmatrix}$ such that $A^{2}=I_{2}$.
Alright, so I've posed this problem earlier but my question is in regard to this problem.
I was told that $\frac{1}{t}\begin{pmatrix}\mp r & \mp s \\ \mp s & \pm r \end{pmatrix}$ is a valid matrix $A$ as $A^{2}=I_{2}$, given the condition that $r^{2}+s^{2}=t^{2}$, that is, (r,s,t) is a Pythagorean Triple.
Does anybody know why this works?
AI: It works because $$A^2 = \frac{1}{t^2}\begin{pmatrix}r & s\\s & -r\end{pmatrix}\begin{pmatrix}r&s\\s&-r\end{pmatrix} = \frac{1}{t^2}\begin{pmatrix}r^2+s^2 & 0\\0 & r^2 + s^2\end{pmatrix}.$$
and you want the diagonals to be 1, i.e. $\frac{r^2 + s^2}{t^2} = 1$.
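A concrete check with the triple $(r,s,t)=(3,4,5)$, using exact rational arithmetic to avoid floating-point noise (a sketch; any Pythagorean triple works):

```python
from fractions import Fraction

r, s, t = 3, 4, 5
A = [[Fraction(r, t), Fraction(s, t)],
     [Fraction(s, t), Fraction(-r, t)]]

# A squared, by the usual 2x2 matrix product
A2 = [[sum(A[i][m] * A[m][j] for m in range(2)) for j in range(2)]
      for i in range(2)]
print(A2)  # [[1, 0], [0, 1]] as Fractions: the identity matrix
```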
H: Is the following set empty?
$$
sp\left \{
\begin{pmatrix}
1 \\
-1 \\
1 \\
-1
\end{pmatrix} , \begin{pmatrix}
4\\
-2 \\
4 \\
-2
\end{pmatrix} , \begin{pmatrix}
1\\
1\\
1\\
1
\end{pmatrix}
\right \} \bigcap \left \{ \begin{pmatrix}
x_1\\
x_2\\
x_3\\
x_4
\end{pmatrix} | \begin{matrix}
x_1 + x_2 = 0\\
x_3 = 2x_4
\end{matrix} \right \}$$
I was asked to find a basis of the following set, but the vectors of the LHS set never satisfy both requirements of the RHS set, or maybe I'm totally wrong and the connective between the requirements is OR?
AI: I would interpret it as you did. It’s not empty, though: it’s the trivial subspace of $\Bbb R^4$ containing only the zero vector. (And while these objects are groups under vector addition, the fact that you’re talking about the span in the first term makes it clear that they are to be understood as vector spaces, not as groups.)
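A numeric sketch of this (assumes `numpy`): write a general element of the span with coefficients $c$, impose the two linear conditions, and check that the resulting vector is zero:

```python
import numpy as np

# Columns of U are the three spanning vectors.
U = np.array([[1, 4, 1],
              [-1, -2, 1],
              [1, 4, 1],
              [-1, -2, 1]], dtype=float)

# The RHS set is the nullspace of C: x1 + x2 = 0 and x3 - 2*x4 = 0.
C = np.array([[1, 1, 0, 0],
              [0, 0, 1, -2]], dtype=float)

# Coefficients c with U @ c in that nullspace solve (C @ U) c = 0.
_, _, Vt = np.linalg.svd(C @ U)
c = Vt[-1]                  # C @ U is 2x3, so a nullspace direction exists
print(np.round(U @ c, 10))  # [0. 0. 0. 0.] -- only the zero vector
```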
H: Accumulation points of accumulation points of accumulation points
Let $A'$ denote the set of accumulation points of $A$.
Find a subset $A$ of $\Bbb R^2$ such that $A, A', A'', A'''$ are all distinct.
I can find a set $A$ such that $A$ and $A'$ are distinct, but not one where $A,A',A'',A'''$ are all distinct.
AI: Here’s a schematic of what you need to get distinct $A,A'$, and $A''$; it generalizes easily.
$$\begin{array}{rr}
\bullet&\bullet&\bullet&\bullet&\bullet&\bullet&\longrightarrow&\bullet\\
\uparrow&\uparrow&\uparrow&\uparrow&\uparrow&\uparrow\\
\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet\\
\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet\\
\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet\\
\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet\\
\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet&\color{red}\bullet\color{red}\bullet\color{red}\bullet\color{red}\to\bullet
\end{array}$$
H: The successor of a set
Definition: The successor of a set $x$ is the set $S(x) = x \cup \{x\}$
Prove that $x \subseteq S(x)$ and there is no $z$ such that $ x \subset z \subset S(x)$
I really battle with proofs :(
Here is what I have:
let $ y \in x $
then $y \in S(x)$
therefore $ x \subseteq S(x) $
I think I have oversimplified something here and am not really proving anything :(
The second thing to prove is that there is no $z$ such that $ x \subset z \subset S(x) $
I really don't know how to approach this one.
Would it be by contradiction?
Assume that such $z$ exists, and then show that it can't exist?
I know that the statement $ x \subset z \subset S(x) $ means there is at least one element in $z$ that is not in $x$ and at least one element in $S(x)$ that is not in $z$ which implies that there are elements in $S(x)$ which are not in $x$. Doesn't this contradict the definition of $S(x)$?
AI: Your first step could be okay or not, depending on whether you understand why it’s justified. The point is that since $S(x)=x\cup\{x\}$, $x\subseteq S(x)$, simply because it’s always true that $a\subseteq a\cup b$, no matter what sets $a$ and $b$ are.
Now suppose that $x\subsetneqq z\subsetneqq S(x)$. Then there is some $u\in z\setminus x$. On the other hand, $z\subseteq S(x)$, so $u\in S(x)$. There is only one element of $S(x)$ that is not an element of $x$, so $u$ must be that element; what is it? Why does that contradict the assumption that $z\subsetneqq S(x)$?
H: $R$ is an integral domain iff $R[x_1,...,x_n]$ is an integral domain
I would like to prove that $R$ is an integral domain iff $R[x_1,...,x_n]$ is an integral domain. The converse is trivial, since $R$ can be viewed as a subring of $R[x_1,...,x_n]$. In order to prove the first implication, I'm trying to use the same argument we use when we prove the case $n=1$, looking at the highest-degree coefficient; am I on the right track?
Thanks in advance
AI: Did you prove the case $n=1$? If so, then you're done. Suppose $R$ is an integral domain, and inductively assume $R[x_1,\ldots,x_n]$ is an integral domain. Then $R[x_1,\ldots,x_n,x_{n+1}] = R[x_1,\ldots,x_n][x_{n+1}]$ is an integral domain since $R[x_1,\ldots,x_n]$ is.
H: A set $E$ is closed if and only if $E = E^-$
Let $E$ be a subset of a metric space $(S,d)$. I want to show that the set $E$ is closed if and only if $E = E^-$ where $E^-$ is the closure of a set $E$.
First I assumed $E = E^-$. Then since I know $E^-$ is the intersection of all closed sets containing $E$, $E^-$ must be closed because the intersection of any closed sets is a closed set. So it follows that $E$ is closed.
Now if $E$ is closed, then $S \setminus E$ is open. That also means $E^- \setminus E$ is open since $E \subseteq E^-$ by definition. Suppose $E \subsetneq E^-$. Let $x$ be on the boundary of $E^-$. Then there does not exist an $r>0$ such that $\{s \in S \ | \ d(s, x) < r \} \subseteq E^- \setminus E$. That means $E^- \setminus E$ is not open which gives a contradiction. So it must be the case that $E^- = E$.
Could someone give me feedback on my proof? And is there a better/shorter proof?
AI: The second implication could be simpler I believe. It seems that your definition of $E^-$ is
$$
E^-=\bigcap_{C\supseteq E\ ;\ C\text{ closed}}C.
$$
If $E$ is closed, then $E$ is indeed a closed set containing $E$. Then $E$ is a member of the above intersection, so $E^-\subseteq E$. Of course, $E\subseteq E^-$, so $E=E^-$.
H: Does taking the direct limit of chain complexes commute with taking homology?
Suppose I have a directed system $C_i$, $i\in\mathbb{N}$, of chain complexes of free abelian groups (bounded below at degree $0$)
$$C_i=0\rightarrow C^{0}_{(i)}\rightarrow C^{1}_{(i)}\rightarrow\cdots\rightarrow C^{n-1}_{(i)}\rightarrow C^n_{(i)}\rightarrow \cdots$$
with chain maps $f_i\colon C_i\rightarrow C_{i+1}$. Can I say that
$$H_*\left(\lim_{\rightarrow}(C_i,f_i)\right)\cong\lim_{\rightarrow}\left(H_*(C_i),(f_i)_*\right),$$
where $\displaystyle\lim_{\rightarrow}$ is the direct limit (colimit) in the respective category, $H_*(C)$ is the $*$th homology of the chain complex $C$, and $f_*$ is the induced homomorphism in homology of the chain map $f$?
I imagine the answer will involve some categorical property of the functor $H_*$.
AI: Any exact functor between abelian categories will preserve homology, and colimits indexed by filtered or directed diagrams are exact in $\mathbf{Ab}$.
The first claim is straightforward, because homology is computed using kernels and cokernels. Indeed, given a chain complex $C_{\bullet}$, we form the object of cycles as a kernel,
$$0 \longrightarrow Z_n \longrightarrow C_n \longrightarrow C_{n-1}$$
and we form the object of boundaries as a cokernel,
$$0 \longrightarrow Z_{n+1} \longrightarrow C_{n+1} \longrightarrow B_n \longrightarrow 0$$
and then the homology object is also a cokernel:
$$0 \longrightarrow B_n \longrightarrow Z_n \longrightarrow H_n \longrightarrow 0$$
The second claim is best checked by hand using the concrete description of filtered/directed colimits in $\mathbf{Ab}$. By general nonsense, colimits are additive and preserve cokernels, so it is enough to check that kernels are preserved by filtered/directed colimits.
H: proof of $ A\otimes M\cong M $
Let $M$ be an $A$-module. Can someone prove the $A$-module isomorphism:
$$A\otimes M \cong M?$$
(By $\otimes$ I mean tensor product.)
AI: First define a map $A\times M\rightarrow M$ such that $(a,m) \mapsto am$. This is clearly bilinear, so it induces a homomorphism $A\otimes M\rightarrow M$ such that $a\otimes m \mapsto am$ (by the universal property of tensor products). Surjectivity is straightforward: just take $a=1$, $m$ arbitrary. To show that the homomorphism is injective, note that any element in $A\otimes M$ can be expressed as $1\otimes m$. But this maps to $m$. If $m=0$, then $1\otimes m = 0$, hence the map is injective.
Edit: the hard part of this problem is getting a well-defined homomorphism. This is the appeal of the universal property. It is much easier to define (bilinear) maps on $A\times M$ than it is to define homomorphisms from the tensor product (without appealing to the universal property).
H: Cardinality of the set of surjective functions on $\mathbb{N}$?
I know that the set of all surjective mappings of ℕ onto ℕ (let's call this set F) should have cardinality |ℝ|.
How to strictly prove that?
From the fact that the cardinality of the set of all functions from ℕ to ℕ is |ℝ|, we get |F| <= |ℝ|.
I saw a similar question on this site, but I need an explicitly defined function that shows that |F| >= |ℝ|.
Thank you for your time.
AI: HINT: Let $M=\{2n+1:n\in\Bbb N\}$. For each $A\subseteq M$ define
$$f_A:\Bbb N\to\Bbb N:n\mapsto\begin{cases}
n/2,&\text{if }n\in\Bbb N\setminus M\\
1,&\text{if }n\in A\\
0,&\text{if }n\in M\setminus A\;.
\end{cases}$$
H: Strong mathematical induction: Prove Inequality for a provided recurrence relation $a_n$
The sequence $a_1,a_2,a_3,\dots$ is defined by: $a_1=1$, $a_2=1$, and $a_n=a_{n-1}+a_{n-2}-n+4$ for all integers $n\ge 3$. Prove using strong mathematical induction that $a_n\ge n$ for all integers $n\ge 3$.
I'm comfortable solving questions that require mathematical induction but I always struggle with strong mathematical induction. I've read the steps provided in my textbook and looked at examples but I still don't get it
AI: The only difference is that strong induction gives you more to work with at the induction step: you get to assume that the result is true for all smaller values of $n$, not just for the immediately preceding value.
Here your induction hypothesis should be that $n>4$ and that $a_k\ge k$ for all integers $k$ such that $3\le k<n$, and the induction step is then to prove from this that $a_n\ge n$. As it happens, you don’t even need the full strength of the induction hypothesis: you need only that $a_{n-1}\ge n-1$ and $a_{n-2}\ge n-2$. From these you can argue that
$$a_n=a_{n-1}+a_{n-2}-n+4\ge(n-1)+(n-2)-n+4=n+1>n\;.$$
This is actually more than you were asked to show, but that’s okay: if $a_n>n$, then certainly $a_n\ge n$.
Note that the induction step does require knowing the result for the next two smaller values, $n-1$ and $n-2$, so you have to start with an $n$ that’s at least $5$, and your basis step has to verify two things, namely, that $a_3\ge 3$ and that $a_4\ge 4$.
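A quick check of the first several values, assuming the recurrence exactly as stated:

```python
a = {1: 1, 2: 1}
for n in range(3, 21):
    a[n] = a[n - 1] + a[n - 2] - n + 4
    assert a[n] >= n, (n, a[n])
print([a[n] for n in range(1, 11)])  # [1, 1, 3, 4, 6, 8, 11, 15, 21, 30]
```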
H: Why is $\mathbb{Z}[\alpha ]$ not finitely generated as $\mathbb{Z}$-module?
Assume that $\alpha \in \mathbb{C}$ is an algebraic number which is not an algebraic integer. My question is why $\mathbb{Z}[\alpha]$ is not finitely generated as $\mathbb{Z}$-module.
Clearly there exist $a_i \in \mathbb{Z}$, not all zero, and $n>0$ such that $a_n\alpha^n+a_{n-1}\alpha^{n-1}+\dots + a_1\alpha + a_0=0$. Shouldn't this imply that $\mathbb{Z}[\alpha]$ is finitely generated? The motivation for this question is that it may be shown that $\alpha \in \mathbb{C}$ is an algebraic integer if and only if $\mathbb{Z}[\alpha ]$ is finitely generated as a $\mathbb{Z}$-module. Why does the coefficient $a_n$ have to be one for the module to be finitely generated? And, to check that I haven't misunderstood the terminology, $\mathbb{Z}[\alpha ]$ is equal to all polynomials in $\alpha$ over $\mathbb{Z}$, right?
AI: The standard proof that $\mathbb Z[\alpha]$ is finitely generated if $\alpha$ is an algebraic integer uses the fact that if
$\alpha^n+a_{n-1}\alpha^{n-1}+\cdots+a_0=0$ then $\alpha^n=-(a_{n-1}\alpha^{n-1}+\cdots+a_0)$, so for any $m\geq n$ we have
$$\alpha^m=-(a_{n-1}\alpha^{m-1}+\cdots+a_0\alpha^{m-n})$$
i.e. $\alpha^m$ is in the module generated by $1,\alpha,\ldots,\alpha^{m-1}$. By induction this shows us that $\alpha^m$ is in the module generated by $1,\alpha,\cdots,\alpha^{n-1}$, hence $\mathbb Z[\alpha]$ is generated by these elements.
If $a_n$ is not a unit this all breaks down, because we no longer have a way to write $\alpha^n$ in terms of $a_{n-1}\alpha^{n-1}+\cdots+a_0$.
H: Understanding Bayes' Theorem
I worked through some examples of Bayes' Theorem and now was reading the proof.
Bayes' Theorem states the following:
Suppose that the sample space S is partitioned into disjoint subsets $B_1, B_2,...,B_n$. That is, $S = B_1 \cup B_2 \cup \cdots \cup B_n$, $\Pr(B_i) > 0$ $\forall i=1,2,...,n$ and $B_i \cap B_j = \varnothing$ $\forall i\ne j$. Then for an event A,
$\Pr(B_j \mid A)=\cfrac{\Pr(B_j \cap A)}{\Pr(A)}=\cfrac{\Pr(B_j) \cdot \Pr(A \mid B_j)}{\sum\limits_{i=1}^{n}\Pr(B_i) \cdot \Pr(A \mid B_i)}\tag{1}$
The numerator is just from definition of conditional probability in multiplicative form.
For the denominator, I read the following:
$A= A \cap S= A \cap (B_1 \cup B_2 \cup \cdots \cup B_n)=(A \cap B_1) \cup (A\cap B_2) \cup \cdots \cup(A \cap B_n)\tag{2}$
Now this is what I don't understand:
The sets $A \cup B_i$ are disjoint because the sets $B_1, B_2, ..., B_n$ form a partition.$\tag{$\clubsuit$}$
I don't see how that is inferred or why that is the case. What does $B$ forming a partition have to do with it being disjoint with $A$? Can someone please explain this conceptually or via an example?
I worked one example where you had 3 coolers and in each cooler you had either root beer or soda. So the first node would be which cooler you would choose and the second nodes would be whether you choose root beer or soda. But I don't see why these would be disjoint. If anything, I would say they weren't disjoint because each cooler contains both types of drinks.
Thank you in advance! :)
AI: As Tharsis pointed out, and was clarified in the comments, it is all of the sets of the form $\;(A \cap B_i),\; 1 \leq i \leq n\;$ that are pairwise disjoint.
$$\;(A \cap B_i)\cap (A \cap B_j) = \varnothing,\;\;\;\forall i, j,\;\;\text{s.t.}\;\;1 \leq i, j\leq n\;\;\text{and}\;\;i\neq j$$
e.g., $(A\cap B_1)$ is disjoint from $(A\cap B_2)$, but $(A\cup B_1)$ is certainly not disjoint from $(A\cup B_2)$, etc.
Pay attention to the distinction between $\cap$ and $\cup$.
H: Factorization in Gaussian integers
Let $p$ be a natural number, suppose $p$ prime. Show that the following conditions are equivalent:
1) the polynomial $x^2+1\in\mathbb{Z}_p[x]$ has roots in $\mathbb{Z}_p$
2) $p$ is reducible in the ring $\mathbb{Z}[i]$
3) there exist $a,b\in\mathbb{Z}$ such that $p=a^2+b^2$
My attempt: suppose $x$ is a solution of $x^2\equiv-1 \pmod p$. Raising both sides to the power $(p-1)/2$ gives $x^{p-1}\equiv(-1)^{(p-1)/2} \pmod p$. Suppose $p$ is congruent to $3$ modulo $4$. Then $(p-1)/2$ is odd and $(-1)^{(p-1)/2}=-1$, while $x^{p-1}\equiv 1$, a contradiction. Hence $p$ must be congruent to $2$ modulo $4$, i.e. $p=2$, or congruent to $1$ modulo $4$, i.e. $p=1+4n$ for some $n$.
If $p=2$ then $p=(1+i)(1-i)$ is reducible. If $p=1+4n$ I can't see how to show reducibility.
For $2)\rightarrow 3)$, suppose $p=\alpha\cdot\beta$ is a nontrivial factorization with $\alpha,\beta$ Gaussian integers. Then the norm of $\alpha$, say $a^2+b^2$, divides $N(p)=p^2$; it can't be $1$ or $p^2$, because the factorization $p=\alpha\beta$ is not trivial, hence $p=a^2+b^2$.
Again, for $3)\rightarrow 1)$, I really don't know what to do. Thanks for any help.
AI: We deal first with the $p$ of shape $4n+1$ case you had trouble with. Suppose that the congruence has a solution $s$. Then $p$ divides $(s+i)(s-i)$. If $p$ were irreducible, it would be a Gaussian prime, so it would divide one of $s-i$ or $s+i$. But it doesn't.
For the other question, suppose $p=a^2+b^2$. Clearly $p$ divides neither $a$ nor $b$. Multiply $a^2+b^2$ by the square of the inverse $c$ of $b$ modulo $p$. We get $(ca)^2+1\equiv 0\pmod{p}$.
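A brute-force sketch of both steps for a small prime $p\equiv 1\pmod 4$ (here $p=13$, an arbitrary choice): find a square root $s$ of $-1$ modulo $p$, and a representation $p=a^2+b^2$:

```python
p = 13
s = next(x for x in range(2, p) if (x * x + 1) % p == 0)
a, b = next((a, b) for a in range(1, p) for b in range(a, p)
            if a * a + b * b == p)
print(s, (a, b))  # 5, (2, 3): 5^2 + 1 = 26 is divisible by 13, and 13 = 4 + 9
```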
H: What's so well in having a least element in a set?
What's so well about the principle of well-ordering? Why is this pattern of order named like this? I am unable to trace a connection between well-ordering and a set that has a least element, so I presume that there must be some mathematical background I'm unaware of. I've searched some Wikipedia links but they describe only what the principle is, not the reason why it's named like this.
AI: Remember that Cantor was led to introduce the basic concepts of set theory (especially ordinal and cardinal numbers) by the need for transfinite inductive definitions in treating some aspects of trigonometric series. So it's not surprising that he should call a set well-ordered (wohlgeordnet) when it serves his purpose of supporting transfinite inductions.
H: Correlation between multiplied numbers?
I do not have a strong math background, but I'm curious as to what this pattern is from a mathematical standpoint.
I was curious how many minutes there were in a day, so I said "24*6=144, add a 0, 1440." Then it immediately struck me that 12*12=144, 6 is half of 12, and 12 is half of 24. So, I checked to make sure it worked in other circumstances:
4*16=64
8*8=64
11*44=484
22*22=484
9*36=324
18*18=324
So what exactly is going on here from a logical standpoint to create that pattern? Thanks in advance for satiating my curiosity!
AI: Think about multiplication as having piles of rocks. $6*24$ represents 6 piles of 24 rocks.
Now what happens if you split each pile into two halves? The number of rocks in each pile halves, but the number of piles doubles. Thus, you get 12 piles of 12 rocks, or $12*12$.
You didn't change the number of rocks, you only rearranged them in a different way.
Thus you have $6*24=12*12$ rocks.
And the same holds with any numbers. If you split each pile into halves, the number of rocks per pile halves, and the number of piles doubles. But in total you have the same number of rocks... Thus, if you multiply two numbers, halve one and double the other, you get the same product. [Note that this intuitive explanation works for whole numbers, but can also be made to work easily for fractions].
H: Can this sum ever converge?
If I have a strictly increasing sequence of positive integers, $n_1<n_2<\cdots$, can the following sum converge?
$$ \sum_{i=1}^\infty \frac{1}{n_i} (n_{i+1}-n_{i}) $$
I suspect (and would like to prove) that it always diverges. Haven't made much progress so far, though.
On a related note, is there any characterization of which subsequences of $1/n$ have a convergent sum?
AI: To expand on my comment (oops, and even more on Achille Hui's comment which appeared when I was typing):
$$
\frac{n_{i+1}-n_i}{n_i}\geq \int_{n_i}^{n_{i+1}}\frac{1}{t}dt\;\Rightarrow \; \sum_{i=1}^K\frac{n_{i+1}-n_i}{n_i}\geq \int_{n_1}^{n_{K+1}}\frac{1}{t}dt=\log\left(\frac{n_{K+1}}{n_1}\right)\rightarrow +\infty.
$$
Of course, $n_{K+1}$ tends to $+\infty$, since $n_K\geq K$ for every $K$ by induction.
For your other question, a partial answer can be given with the help of the notion of natural/asymptotic density. If $\liminf \frac{k}{n_k}>0$, then there exists $m>0$ and $K$ such that $\frac{k}{n_k}\geq m$ for every $k\geq K$, whence $\sum_{k\geq K}\frac{1}{n_k}\geq m\sum_{k\geq K}\frac{1}{k}=+\infty$. That is, if the set $\{n_k\;;\;k\geq 1\}$ has positive asymptotic lower density, then the series $\sum_{k\geq 1}\frac{1}{n_k}$ diverges. But the converse is false. For instance, the set of prime numbers $\mathcal P$ has null asymptotic density and yet $\sum_{p\in\mathcal{P}}\frac{1}{p}=+\infty$.
H: Number of Sylow bases of a certain group of order 60
We have $G=\langle a,b \rangle$ where $a=(1 2 3)(4 5 6 7 8)$, $b=(2 3)(5 6 8 7)$. $G$ is soluble of order 60 so has a Hall $\pi$ subgroup for all possible $\pi\subset\{2,3,5\}$. $\langle a\rangle$ is a normal subgroup.
A Hall $\{2,3\}$ subgroup is $\langle a^5,b \rangle$, a Hall $\{2,5\}$ subgroup is $\langle a^3,b\rangle$ and a Hall $\{3,5\}$ subgroup is $\langle a\rangle$. This means a Sylow basis is $\langle a^3 \rangle$, $\langle a^5\rangle$ and $\langle b\rangle$.
I am trying to determine the number of Sylow bases. Here is my attempt:
$\langle a^3\rangle$ and $\langle a^5 \rangle$ are normal as characteristic subgroups of the normal subgroup $\langle a\rangle$, and so as all Sylow bases are conjugate, their number is equal to the number of Sylow 2 subgroups, which is 1 or 5 by Sylow's theorem and $\langle b\rangle$ is not normal so it is $5$.
The answer we were given is 15, so I wondered if anyone else could spot where I'm going wrong. Thanks.
AI: The number of Sylow bases is exactly equal to the number of Sylow complement bases. A Sylow basis has a complicated definition involving permuting subgroups, but a Sylow complement basis has a very simple definition: a choice of one Sylow $p$-complement for each prime $p$. Thus we just need to count the number of Sylow $p$-complements. This is just finding the index of their normalizer, and since the Sylow 2-complement is normal, but the other complements are not and have prime index, we can easily count them.
$p=2$: The Sylow $p$-complements are the Hall $\{3,5\}$-subgroups, the conjugates of the normal subgroup $\langle a \rangle$. So just one.
$p=3$: The Sylow $p$-complements are the Hall $\{2,5\}$-subgroups, $\langle a^3,b \rangle$ and its 3 conjugates (not normal, but has index 3)
$p=5$: The Sylow $p$-complements are the Hall $\{2,3\}$-subgroups, $\langle a^5, b \rangle$ and its 5 conjugates (not normal, but has index 5)
All told that is $1 \times 3 \times 5 = 15$ possibilities.
Another method is to find a system normalizer. In this case it is a Sylow 2-subgroup, which can be seen from Carter's work on abnormal subgroups (given the large number of normal subgroups).
H: Generating Laguerre polynomials using gamma functions
An exercise given by my complex analysis assistant goes as follows:
For $n \in \mathbb{N}$ and $x>0$ we define
$$P_n(x) = \frac{1}{2\pi i} \oint_\Sigma \frac{\Gamma(t-n)}{\Gamma(t+1)^2}x^tdt$$
where $\Sigma$ is a closed contour in the $t$-plane that encircles the points $0,1, \dots, n$ once in the positive direction.
Now, we have to prove some things like that $P_n(x)$ is a polynomial of degree $n$, which is no problem. However, the assistant claims that $P_n(x)$ "is known as a Laguerre polynomial". However, calculating $P_n(x)$ for certain $n$ and comparing to values of the Laguerre polynomials $L_n(x)$ on the internet, I find that
$$(-1)^n \cdot n! \cdot P_n(x) = L_n(x)$$
Could my assistant have made a typo which explains this missing factor; or could he mean that $P_n(x)$ has similar properties to the Laguerre polynomials, perhaps?
AI: All you need to do is to multiply the integral representation by $(-1)^n n!$, so you will have the right integral representation
$$ P_n(x) = \frac{(-1)^nn!}{2\pi i} \oint_\Sigma \frac{\Gamma(t-n)}{\Gamma(t+1)^2}x^tdt. $$
For instance,
$$ P_5(x) = 1-5\,x+5\,{x}^{2}-\frac{5}{3}\,{x}^{3}+{\frac {5}{24}}\,{x}^{4}-{\frac {1}{120}}\,{x}^{5},$$
which agrees with the Laguerre polynomial $L_5(x)$.
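A symbolic check of the corrected representation against the classical Laguerre polynomial (a sketch assuming `sympy`, whose `laguerre(n, x)` gives $L_n(x)$):

```python
from sympy import symbols, laguerre, Rational, expand

x = symbols('x')
P5 = (1 - 5*x + 5*x**2 - Rational(5, 3)*x**3
      + Rational(5, 24)*x**4 - Rational(1, 120)*x**5)
print(expand(laguerre(5, x) - P5))  # 0
```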
H: If $17 \mid \frac{n^m - 1}{n-1}$ find the values of $n$ where $m$ is even but not divisible by $4$
Let $m, n \in \mathbb{Z}_+$ with $n > 2$, and let $\frac{n^m-1}{n-1}$ be divisible by $17$. Show that either $m$ is even, or $m \equiv 0 \mod 17$ and $n \equiv 1 \mod 17$. Find all possible values of $n$ in the cases when $m$ is even but not divisible by $4$, or divisible by $4$ but not divisible by $8$.
So far, I have done this: First, let's assume that $n \equiv 1 \mod 17$. Also, we know that we can write
$$\frac{n^m-1}{n-1} = \sum_{i = 0}^{m-1} n^i.$$
As $n \equiv 1 \mod 17 \implies n^i \equiv 1 \mod 17$, and so $\sum_{i = 0}^{m-1} n^i \equiv \underbrace{1 + 1 + \cdots + 1}_{m\text{ times}} \equiv m \mod 17$.
Also, we know that $\sum_{i = 0}^{m-1} n^i \equiv 0 \mod 17 \implies m \equiv 0 \mod 17$.
Now, let's assume that $n \not\equiv 1 \mod 17$. Then, $\frac{n^m - 1}{n - 1}$ is divisible by $17$ if $n^m - 1$ is divisible by $17$, i.e $n^m - 1 \equiv 0 \mod 17 \implies n^m \equiv 1 \mod 17$. Fermat's little theorem tells us that $n^{p-1} \equiv 1 \mod p \implies n^{16} \equiv 1 \mod 17$. From here, the order of $n$ modulo $17$ divides both $m$ and $16$; since $n \not\equiv 1 \mod 17$, this order is a nontrivial power of $2$, hence even, which tells us that $m$ must be even.
However I am now stuck on the next bit, I'm not sure how to find those values of $n$. Can someone help me please. Thank you.
AI: You have made considerable progress, indeed you are near the end. If $n\equiv 1\pmod{17}$, then by what you wrote we have $m$ must be divisible by $17$, and any $m$ divisible by $17$ will do. The condition $m$ divisible by $2$ but not by $4$ is easy to meet. We need $m\equiv{34}\pmod{68}$. Then any $n\equiv 1\pmod{17}$ will do the job.
Now let us look at the case $n\not\equiv 1\pmod{17}$. So $n-1$ and $17$ are relatively prime. So our ratio is divisible by $17$ if and only if $n^m\equiv 1\pmod{17}$. This automatically holds if $16$ divides $m$. But our condition on $m$ is that $m$ is divisible by $2$ but not by $4$, so we are far from $16$ divides $m$.
The order of $n$ modulo $17$ divides $16$, so is a power of $2$. We want the order to be exactly $2$. This is the case if and only if $n\equiv -1\pmod{17}$.
For the second part ($m$ divisible by $4$ but not by $8$), there will be some work left to do. You need to determine which $n$ have order $4$ modulo $17$.
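For the first case, a brute-force check over a range of $n$, assuming $m=34$ (divisible by $2$ but not $4$, and by $17$): the $n$ that work are exactly those with $n\equiv\pm1\pmod{17}$:

```python
m = 34
good = [n for n in range(3, 36) if ((n**m - 1) // (n - 1)) % 17 == 0]
print(good)                    # [16, 18, 33, 35]
print([n % 17 for n in good])  # [16, 1, 16, 1], i.e. n = +/-1 mod 17
```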
H: Proof that $dx/|x|$ is a Haar measure on non-zero reals?
Most importantly, what is the meaning of this notation $\lambda = dx/|x|$? How do I compute say $\lambda(0,1)$ for example?
AI: This is the Haar measure on the multiplicative group ${\bf R}^\times$ (or the group of positive reals under multiplication too, but then there's no reason to use the absolute value symbol). Note that the "$dx$" part actually corresponds to the additive Haar measure. In symbols,
$$\mu^\times(A):=\int_Ad^\times x=\int_A\frac{d^+x}{|x|}.$$
(The $d^+x$ means we perform the integral as usual in calculus, because our measure there is in fact the additive Haar measure.)
For example, let's look at an interval $I=(a,b)$ with $0<a<b$. Then
$$\mu^\times(I)=\int_{(a,b)}d^\times x=\int_a^b\frac{dx}{|x|}=\log b-\log a=\log\left(\frac{b}{a}\right).$$
Notice, then, that if $cI=(ca,cb)$ where $c>0$ is some multiplicative scaling factor,
$$\mu^\times(cI)=\log\left(\frac{cb}{ca}\right)=\log\left(\frac{b}{a}\right)=\mu^\times(I).$$
This is an example of how the multiplicative Haar measure is translation-invariant, where translation in the multiplicative group amounts to uniform multiplication by a fixed scalar.
More generally, if $A\subset(0,\infty)$, we have
$$\mu^\times(cA)=\int_{cA}d^\times x=\int_{x\in cA}\frac{dx}{|x|}=\int_{u\in A}\frac{cdu}{|cu|}=\int_A\frac{du}{|u|}=\mu^\times(A).$$
This follows from the simple substitution $x=cu$, $dx=cdu$, $x=cu\in cA\iff u\in A$.
H: Prove the equivalence of presentation of a free group with a free product
I want to prove that the following presentation of a free group (generators and relations):
$$\left(\begin{array}{c|c}
x_0,a_0&a_0a_1x_2=x_0a_1\\
x_1,a_1&a_1a_2x_0=x_1a_0\\
x_2,a_2&a_2a_0x_1=x_2a_2
\end{array}\right)$$
is equivalent to the free product
$$(a_0,a_1)\star(x_2,A:A^2x_2=x_2A), \quad A=a_2a_0a_1.$$
I have written the presentation that way in case it is suggestive of the solution. This is an example (#11) from A Quick Trip Through Knot Theory by R. Fox, which is a chapter in Topology of 3-Manifolds, Ed. M.K. Fort.
Mostly I am just overwhelmed by the idea of trying to show that any combination of the six generators can always be written as alternating products of the generators of the two free groups in the product. I can show some things (for instance, I can easily show the relation $A^2x_2=x_2A$ holds), but how does one really prove things like this?
This is not a homework but since I am asking it rather like a homework, I will take hints first if that's what some people would like to offer.
AI: Having the answer makes this calculation less nasty than it might otherwise be. Since the desired presentation doesn't involve $x_0$ or $x_1$, it's reasonable to use the first and third of the given relations to eliminate these, expressing them in terms of $x_2$ and the three $a$'s. So your group is generated by $\{a_0,a_1,a_2,x_2\}$ subject to just the one remaining relation, the second on the original list, which becomes, after you plug in the expressions for $x_0$ and $x_1$,
$$
a_1a_2a_0a_1x_2a_1^{-1}=a_0^{-1}a_2^{-1}x_2a_2a_0.
$$
Transpose all the factors of the form $a_i^{-1}$ to the other sides, and you get $A^2x_2=x_2A$. Finally, since $a_2$ can be expressed in terms of the other two $a$'s and $A$, namely as $a_2=Aa_1^{-1}a_0^{-1}$, we can introduce $A$ as a new generator and eliminate $a_2$. The result is that the group is generated by $\{a_0,a_1,A,x_2\}$ subject to $A^2x_2=x_2A$, as desired.
H: What type of input does trigonometric functions take in
I see in my book that 45 deg is equivalent to π/4. I can also do the conversion myself if I simply convert degrees into radians like this
45* π/180 = π/4 radians
and back again
π/4 * 180/π = 45 deg.
So I think that I finally grasp the idea that π/4 is another way of saying 45 degrees. Now if I use my calculator and enter sin(45), where I assume that 45 is in degrees, I get the number 0.7071067812. If I do the calculation sqrt(2)/2 I get the same number, hence sin(45) = sqrt(2)/2. But when I input sin(π/4) I get the number 0.0137073546. So I tell myself that this function probably does not take that type of input. But then again I see in my book that sin(π/4) = sqrt(2)/2. So this forces me to ask the question, "What type of input do trigonometric functions take in?" Is the calculator not working properly?
Excuse me if this looks simple to you, but I am bad at math and the calculator is the only validator I have at the moment, and this confuses me a lot.
AI: When you use $\sin(45)$ in your calculator, you mean $\sin(45^o)$ (notice the degree symbol).
But when you input $\dfrac{\pi}{4}$, you are still telling your calculator that it's in degrees: $\sin\left(\dfrac{\pi}{4}^o\right)$.
To resolve this, you have to change the mode of your calculator from degrees to radians, and reperform the calculation.
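The same pitfall exists in most programming languages, where `sin` expects radians; this sketch reproduces all three calculator readings from the question:

```python
import math

print(math.sin(math.radians(45)))           # 0.7071067811865476  (45 degrees)
print(math.sin(math.pi / 4))                # 0.7071067811865476  (pi/4 radians)
print(math.sin(math.radians(math.pi / 4)))  # 0.0137073546...     (pi/4 *degrees*)
```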
H: complex analysis- Liouville's theorem
1). Let $f: \mathbb{C}\to \mathbb{C}$ be a holomorphic function. Prove that if $f(0) = f(1) = 0$ and $\lvert f'(z) \rvert\leq 1$ for all $z\in \mathbb{C}$, then $f(z)=0$ for all $z\in \mathbb{C}$.
2).True or false: If $f:\mathbb{C}\to\mathbb{C}$ is holomorphic, then f must be either a constant function or must be surjective.
I know that Liouville's theorem states that Every bounded entire function is constant.
If $\lvert f'(z)\rvert \leq 1$, then $f(z) = cz+D$, where $c$ and $D$ are constants.
By the condition $f(0) = f(1) = 0$, substituting $0$ and $1$ for $z$, I found that $c$ and $D$ are equal to $0$. So would it be true that $f(z)=0$ for all $z\in \mathbb{C}$?
Please correct me if I'm heading in the wrong direction.
Not sure of Part 2: by Liouville's theorem I think it's true for a constant, but I cannot identify a surjective function.
AI: If $f'$ is bounded, then Liouville says $f'$ is constant, and the fact that $f(0)=f(1)$ tells you WHICH constant it is.
For your second question, consider that $\exp$ is not surjective since there is one number that is not in its image.
What I wrote above should tell you the answers to these questions. But if you write them verbatim and hand them in as homework solutions, you won't get much if any credit, if the person grading them has any sense. Take them as "hints".
H: Split multiplicative reduction question
Let $E/\mathbb{Q}$ be an elliptic curve and $E_{d}$ be the quadratic twist of $E$ by a squarefree integer $d$. Let $\ell$ be a prime of multiplicative reduction for $E$. If $(d, \ell) = 1$, then $\ell$ is a prime of multiplicative reduction for $E_{d}$. I read that $\ell$ is of split multiplicative reduction for $E$ if and only if $\ell$ is of split multiplicative reduction for $E_{d}$. Are there any sources for this?
AI: The statement is false.
An elliptic curve $E$ with multiplicative reduction at $\ell$ has split reduction if and only if $-c_6$ is a square in $\mathbb{F}_\ell$. Now suppose that $\gcd(d,\ell)=1$, so that $E'=E_d$, the quadratic twist by $d$, also has multiplicative reduction. The corresponding coefficient for $E'=E_d$ is given by $c_6'= d^3 c_6$. Hence, $E'$ has split reduction if and only if $-c_6' = -d^3c_6$ is a square in $\mathbb{F}_\ell$. In particular, if $d$ is not a square mod $\ell$, then $d^3$ is not a square either, and $-c_6'$ is a square if and only if $-c_6$ is not a square.
For instance, let $E:y^2=x^3-x^2+35$. Then $E$ has split multiplicative reduction at $p=5$. Now:
Let $d=11$. Since $11\equiv 1 \bmod 5$ is a square, the quadratic twist by $11$ also has split multiplicative reduction at $5$.
Let $d=13$. Since $13\equiv 3 \bmod 5$ is not a square, the quadratic twist by $13$ has non-split multiplicative reduction at $5$.
H: Quaternion group associativity
Consider the eight objects $\pm 1, \pm i, \pm j, \pm k$ with multiplication rules:
$ij=k,jk=i,ki=j,ji=-k,kj=-i,ik=-j,i^2=j^2=k^2=-1$,
where the minus signs behave as expected and $1$ and $-1$ multiply as expected. Show that these objects form a group containing exactly one involution.
Well, it is easy to determine closure from the definition. $1$ is clearly the identity, and the inverses can also be determined $(i,-i),(j,-j),(k,-k)$, $-1$ with itself, and $1$ with itself. The only involution is $-1$. Is there an easy way to check associativity of this group? There are too many possible combinations $(ab)c=a(bc)$ to check directly. ($8^3$ possible combinations)
AI: One idea that saves some work, but still might make your eyes cross is to realize that negatives work as expected, so you don't need to test associativity with 1, -1, or any of the negative i,j,k. That leaves only $\{i,j,k\}^3$ which is 27 tests, each requiring 4 very easy multiplications, 108 easy lookups. If you notice that $i \mapsto j \mapsto k$ is an automorphism, then you can say that $a=i$ in $a(bc) \stackrel{?}{=} (ab)c$ reducing it to 9 checks, 36 easy multiplications.
However, a method with more fringe benefits is to find matrices that multiply as expected. It is easiest if you know complex numbers, since $i$ and $j$ are both like $\sqrt{-1}$, but they don't commute.
Hopefully you can discover the matrices by yourself, but maybe it is tricky without a lot of experience. Here are matrices that work ($\sqrt{-1}$ is any fixed square root of $-1$ in a field of characteristic not 2, say $\mathbb{C}$ for instance; take 1 to be the identity matrix, and if $a$ corresponds to the matrix $A$, then $-a$ corresponds to $-A$, so that negatives work just like expected).
$$i = \begin{bmatrix} \sqrt{-1} & 0 \\ 0 & -\sqrt{-1} \end{bmatrix}, \qquad j = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \qquad k = \begin{bmatrix} 0 & \sqrt{-1} \\ \sqrt{-1} & 0 \end{bmatrix}$$
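A numeric sketch of these relations, with `1j` playing the role of $\sqrt{-1}$ (assumes `numpy`); associativity of the eight quaternions is then inherited from associativity of matrix multiplication:

```python
import numpy as np

I2 = np.eye(2)
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)
k = np.array([[0, 1j], [1j, 0]])

assert np.allclose(i @ j, k) and np.allclose(j @ k, i) and np.allclose(k @ i, j)
assert all(np.allclose(m @ m, -I2) for m in (i, j, k))
print("quaternion relations hold")
```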
H: Spheres in different dimension are not homotopy equivalent
Is there a way to prove that $\textbf{S}^n$ and $\textbf{S}^m$ are not homotopy equivalent if $n\neq m$ without using the machinery of homology or higher homotopy groups?
AI: Just for fun, here's an answer that doesn't use the machinery you're talking about, at the expense of using crazier machinery. Suppose you had a homotopy equivalence $S^n \stackrel{\sim}{\to} S^{n+k}$ for $k > 0$. By smashing this with $S^{mk}$, you get homotopy equivalences $S^{n+mk} \stackrel{\sim}{\to} S^{n + (m+1)k}$ for each $m$. Composing these gives you a sequence
$$ S^n \stackrel{\sim}{\to} S^{n+k} \stackrel{\sim}{\to} S^{n+2k} \stackrel{\sim}{\to} \dotsb $$
and taking the colimit gives you a weak equivalence $S^n \stackrel{\sim}{\to} S^\infty$. But $S^\infty$ is contractible. (And as with Andreas's answer, you'd then have to prove that spheres aren't contractible.)
H: Definition of a dominating function and the Dominated Convergence Theorem.
I apologise if this is a rather simplistic or even silly question, but I am confused with the word "dominated" in Lebesgue's Dominated Convergence Theorem (DCT) since I can find no definition of a dominating function in the textbook I am following, rather I can only find a definition in relation to measures.
In the DCT it is assumed we have a sequence $\{f_{n}\}$ of real measurable functions where $|f_{n}|\leq g$ almost everywhere for all $n$, and where $g$ is integrable. Now I can clearly see from various discussions on the DCT that $g$ is considered to be the dominating function, but I cannot find any definition to confirm this. The only definition I can find is related to measures and absolute continuity:
The measure $v$ is absolutely continuous with respect to the measure $\mu$ if for all $A\in\mathscr{F}$, $\mu(A)=0$ implies $v(A)=0$. In the text I am following this is written as $v<<\mu$ which means $v$ is dominated by $\mu$.
Am I right in assuming there is nothing linking these two uses of the word dominating?
AI: Given two positive functions $f$ and $g$ such that $f$ dominates $g$, let
$$\nu(A) = \int_A g d\mu, \qquad \eta(A) = \int_A f d\mu.$$
We note that $\nu(A) \leq \eta(A)$ for all measurable sets $A$. In particular, $\nu(A)=0$ for all $\eta$-null sets $A$, so that $\nu \ll \eta$. In this sense,
Domination of functions implies absolute continuity of the induced measures.
Of course, the converse is wildly false. For example, the measures induced by $f=2$ and $g=1$ are absolutely continuous with respect to each other, whereas $f$ dominates $g$ (and not conversely).
I suppose you could use the Radon-Nikodym theorem to further analyze this situation.
H: Identify all $a \mod 35$ such that $35$ is a pseudoprime to base $a$
Identify all $a \mod 35$ such that $35$ is a pseudoprime to base $a$.
By defintion, $\gcd(a,n) = 1$ for $a,n \in \mathbb{Z}$, then $a^{n-1} \equiv 1 \mod n$ means that $n$ is a pseudoprime to base $a$.
$G_{35}$ is the group of elements all coprime to $35$. So I have said that
$$G_{35} \cong G_5 \times G_7.$$
The orders of the two factor groups are $4$ and $6$, and so the exponent of $G_{35}$ is $\mathrm{lcm}(4,6) = 12$. That means all divisors of $12$ will be potential orders of $a$. I want all the $a$ such that $a^{34} = 1 \mod 35$. By Euler's theorem, this tells me that the order of such an element divides $\gcd(12,34) = 2$. So I want all elements of order dividing $2$ in $G_5$ and $G_7$. Clearly, this is only $\pm 1 \mod 5$ and $\pm 1 \mod 7$ and so all values of $a$ are of the form $\pm1 \mod 7, \pm 1 \mod 5$?
Have I done this correctly? I think I might have cheated a little with the Euler theorem bit, because normally I would say $34 \mid 12$ but seeing as $34$ doesn't divide $12$ and $12$ doesn't divide $34$, I got a little stuck.
EDIT: Actually, I don't think I've finished the question as the answer should be $\mod 35$. So would the final answer just be $\pm 1 \mod 35$?
AI: We will use standard elementary number theory terminology, not group terminology, but that makes no difference.
We want to characterize those $a$ such that $a^{34}\equiv 1\pmod{35}$. For such an $a$, the order of $a$ modulo $35$ must divide $34$. It must also divide $\varphi(5)$ and $\varphi(7)$, giving the possibilities $1$ or $2$. And it is clear that if $a$ has order $1$ or $2$ then $a^{34}\equiv 1\pmod{35}$.
So we want to find all elements of order $1$ or $2$ modulo $35$. So we want to solve the $4$ systems of congruences $x\equiv \pm 1\pmod{5}$, $x\equiv \pm 1\pmod{7}$.
If the signs match, the solutions are obvious. For $x\equiv 1\pmod{5}$, $x\equiv -1\pmod{7}$, a short search gives $x\equiv 6\pmod{35}$. The remaining congruence then has solution $x\equiv -6$.
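A brute-force sketch confirming the four solutions, i.e. the bases $a$ with $a^{34}\equiv 1\pmod{35}$ are $a\equiv\pm1,\pm6\pmod{35}$:

```python
from math import gcd

bases = [a for a in range(1, 35) if gcd(a, 35) == 1 and pow(a, 34, 35) == 1]
print(bases)  # [1, 6, 29, 34], i.e. +/-1 and +/-6 mod 35
```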
H: Finding the sum of a Taylor expansion
I want to find the following sum:
$$
\sum\limits_{k=0}^\infty (-1)^k \frac{(\ln{4})^k}{k!}
$$
I decided to substitute $x = \ln{4}$:
$$
\sum\limits_{k=0}^\infty (-1)^k \frac{x^k}{k!}
$$
The first thing I noticed is that this looks an awful lot like the series expansion of $e^x$:
$$
e^x = \sum\limits_{k=0}^\infty \frac{x^k}{k!}
$$
The only obstacle is the $(-1)^k$ term. I tried getting rid of it by rewriting:
\begin{align*}
\sum\limits_{k=0}^\infty (-1)^k \frac{x^k}{k!} &= 1 - x + \frac{x^2}{2!} - \frac{x^3}{3!} + \frac{x^4}{4!} - \frac{x^5}{5!} + \dots\\
&= (1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \dots) - (x + \frac{x^3}{3!} + \frac{x^5}{5!} + \dots)\\
&= \sum\limits_{k=0}^\infty \frac{x^{2k}}{(2k)!} - \sum\limits_{k=0}^\infty \frac{x^{2k+1}}{(2k+1)!}
\end{align*}
These sums look a lot like the series expansions of $\sin(x)$ and $\cos(x)$:
\begin{align*}
\sin(x) &= \sum\limits_{k=0}^\infty (-1)^k \frac{x^{2k+1}}{(2k+1)!}\\
\cos(x) &= \sum\limits_{k=0}^\infty (-1)^k \frac{x^{2k}}{(2k)!}
\end{align*}
However, these sums do have a $(-1)^k$ term, just when I got rid of it! So now I'm stuck. Can someone help me in the right direction?
AI: Note that $$e^{-x}=\sum_{k=0}^\infty \frac{(-x)^k}{k!}=\sum_{k=0}^\infty (-1)^k\frac{x^k}{k!}$$
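So with $x=\ln 4$ the sum is $e^{-\ln 4}=\tfrac14$. A quick numeric sketch, accumulating terms via the recurrence $t_{k+1}=t_k\cdot(-x)/(k+1)$:

```python
from math import log

x, term, total = log(4), 1.0, 0.0
for k in range(30):
    total += term
    term *= -x / (k + 1)
print(total)  # 0.25 (to machine precision)
```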
H: Group of mappings containing an injective map is a subset of symmetric group
Let $G$ be a group of mappings on a set $X$ with respect to function composition. Show that if $G$ contains some injective function, then $G\subseteq \text{Sym}(X)$.
What I did: If $X$ is finite, then the injective mapping $g\in G$ is also bijective. Suppose the identity $i\in G$ is not bijective. Then $gi=g$ is not bijective, impossible, so $i$ must be bijective. Then any $h\in G$ must be bijective because of the existence of $h^{-1}$ such that $hh^{-1}=i$. So $G\subseteq\text{Sym}(X)$ indeed.
But when $X$ is infinite, the injective mapping is no longer necessarily bijective, and the whole argument breaks down.
AI: Here's a recipe for how you can do it:
Prove that $i$ is injective
Prove every $h\in G$ is injective
Prove that $i$ is surjective (consider $i=i^2$)
Prove every $h\in G$ is surjective
At each stage, employ proof by contradiction. Spoiler:
Suppose $g\in G$ is injective. Suppose that $i$ is not injective. Then $gi=g$ is not injective, absurd, so $i$ is injective. Suppose $h\in G$ is not injective. Then $h^{-1}h=i$ is not injective, absurd, so every element of $G$ is an injective function. Suppose $i\in G$ is not surjective. Then $\exists x\in X$ such that $x\not\in i(X)$. But then $i(x)\in i(X)$, but $i(x)=i^2(y)\implies x=i(y)$ hence $i(x)\not\in i^2(X)$, hence $i\ne i^2$, a contradiction. Hence $i$ is surjective. Suppose now that $h\in G$ is not surjective. Then $h h^{-1}=i$ is not surjective, a contradiction. Hence every element is surjective as well as injective.
H: Classification of nonzero prime ideals of $\mathbb{Z}[i]$
I know the classification of Gaussian primes: let $u$ be a unit of $\mathbb{Z}[i]$. Then the following are all Gaussian primes:
1) $u(1+i)$
2) $u(a+ib)$ where $a^2+b^2=p$ for some prime number p congruent to $1$ modulo $4$
3) $uq$ where $q$ is a prime number congruent to $3$ modulo $4$.
If I knew that $\mathbb{Z}[i]$ is a principal ideal domain, I could say: a nonzero ideal $P$ of $\mathbb{Z}[i]$ is prime iff it is principal prime, say $P=p\mathbb{Z}[i]$, iff $p$ is a Gaussian prime, and conclude that every nonzero prime ideal $P$ of $\mathbb{Z}[i]$ belongs to one of the following families:
1) $P=(1+i)\mathbb{Z}[i]$
2) $P=(a+ib)\mathbb{Z}[i]$, where $a,b\in\mathbb{Z}$ and $a^2+b^2=p$ is an odd prime congruent to $1$ modulo $4$
3) $P=q\mathbb{Z}[i]$ where $q$ is an odd prime congruent to $3$ modulo $4$.
Now, suppose that for some reason I don't know that $\mathbb{Z}[i]$ is a PID. Can I still prove the classification above? I can't say an ideal is prime iff its generator is prime, so how could I proceed in this case?
AI: To prove this result, you must prove along the way that $\mathbb{Z}[i]$ is a PID, to guarantee that you have not missed any non-principal ideals. One cheap way to get around your restriction is to implicitly use the implication
$$\text{Euclidean domain} \Longrightarrow \text{PID}.$$
In the case at hand, we obtain a Euclidean algorithm on $\mathbb{Z}[i]$ as follows. Given non-zero $x,y \in \mathbb{Z}[i]$, let $z \in \mathbb{Z}[i]$ denote the lattice point nearest $x/y$. Geometrically, it's clear that
$$\vert z - x/y \vert \leq 1/\sqrt{2},$$ in which the metric above is the standard metric on the plane. Then $x-yz \in \mathbb{Z}[i]$ satisfies $\vert x-yz \vert < \vert y \vert$, so that $\vert \cdot \vert$ defines a Euclidean function on $\mathbb{Z}[i]$. As in the case with the integers, a Euclidean algorithm can be used to find GCD (up to units).
Now, suppose that $\mathfrak{p}=(a,b) \subset \mathbb{Z}[i]$ is a prime ideal. If $c$ denotes any GCD of $a$ and $b$, we have $(c)=(a,b)$. Induction (and the fact that integer rings are Noetherian) implies that all ideals are principal. For your problem, it would then suffice to classify all the principal ideals in $\mathbb{Z}[i]$, which is standard practice (look at norms).
Note: while this approach seems contrary to the question asked, I nevertheless believe that the best way to classify the prime ideals in $\mathbb{Z}[i]$ is to first prove that such ideals must be principal. Other techniques would simply reprove this result, without the benefit of machinery.
Edit: This edit includes a proof of the fact that $\mathbb{Z}[i]$ is Noetherian. Let $$\mathfrak{p}_1 \subsetneq \mathfrak{p}_2 \subsetneq \ldots $$
be an infinite chain of ideals in $\mathbb{Z}[i]$. We note that $a_k:=\#\mathbb{Z}[i]/\mathfrak{p}_k$ is finite for all $k$, because both $\mathbb{Z}[i]$ and $\mathfrak{p}_k$ are abelian groups of rank $2$. The sequence of positive integers $\{a_k\}$ is decreasing and does not stabilize, a contradiction. Thus no such chains exist.
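A sketch of the Euclidean step in code, representing Gaussian integers as Python complex numbers and rounding $x/y$ to the nearest lattice point (the helper names are mine):

```python
def nearest(z: complex) -> complex:
    return complex(round(z.real), round(z.imag))

def gauss_gcd(x: complex, y: complex) -> complex:
    # Euclidean algorithm in Z[i]; the result is a gcd up to a unit.
    while y != 0:
        q = nearest(x / y)
        x, y = y, x - q * y
    return x

print(gauss_gcd(5 + 0j, 2 + 1j))  # (2+1j), reflecting 5 = (2+i)(2-i)
```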
H: Algebraic expression in its most simplified form
I am trying to simplify the algebraic expression:
$$\bigg(x-\dfrac{4}{(x-3)}\bigg)\div \bigg(x+\dfrac{2+6x}{(x-3)}\bigg)$$
I am having trouble though. My current thoughts are:
$$=\bigg(\dfrac{x}{1}-\dfrac{4}{(x-3)}\bigg)\div \bigg(\dfrac{x}{1}+\dfrac{2+6x}{(x-3)}\bigg)$$
$$=\bigg(\dfrac{x(x-3)}{1(x-3)}-\dfrac{4}{(x-3)}\bigg)\div \bigg(\dfrac{x(x-3)}{1(x-3)}+\dfrac{2+6x}{(x-3)}\bigg)$$
$$=\bigg(\dfrac{x(x-3)+(-4)}{(x-3)}\bigg)\div \bigg(\dfrac{x(x-3)+2+6x}{(x-3)}\bigg)$$
$$=\dfrac{x(x-3)+(-4)}{(x-3)}\times \dfrac{(x-3)}{x(x-3)+2+6x}$$
$$=\dfrac{x(x-3)+(-4)(x-3)}{(x-3)x(x-3)+2+6x} $$
$$\boxed{=\dfrac{-4(x-3)}{2(1+3x)} }$$ which does not appear to be the answer. Am I close? Where exactly did I go wrong? I have tried this question multiple times.
Edit: Figured it out!
$\dfrac{x(x-3) - 4}{x(x - 3) + 2(1 + 3x)}\implies\dfrac{x^2-3x-4}{x^2+3x+2}\implies \dfrac{(x-4)(x+1)}{(x+2)(x+1)}$
$(x+1)$'s cancel leaving us with: $\boxed{\dfrac{x-4}{x+2}}$
AI: You did not distribute the term $(x - 3)$ in the denominator when you wrote:
$$\begin{align} & =\dfrac{x(x-3)+(-4)}{(x-3)}\times \dfrac{(x-3)}{x(x-3)+2+6x} \\ \\ & =\dfrac{x(x-3)+(-4)(x-3)}{(x-3)x(x-3)+2+6x} \end{align}$$
What would be correct is the following denominator:
$$\begin{align} & \quad\color{blue}{(x-3)}[x(x-3)+2+6x] \\ \\ & = \color{blue}{(x-3)}x(x-3)+\color{blue}{(x-3)}(2+6x)\end{align} $$
But note
$$\dfrac{x(x-3)+(-4)}{\color{blue}{\bf (x-3)}}\times \dfrac{\color{blue}{\bf(x-3)}}{x(x-3)+2+6x}$$
The highlighted terms cancel, leaving you:
$$\begin{align} & =\dfrac{x(x-3)+(-4)}{1}\times \dfrac{1}{x(x-3)+2+6x} \\ \\
& = \frac{x(x-3) - 4}{x(x - 3) + 2 + 6x} \\ \\
& = \dfrac{x^2-3x-4}{x^2+3x+2} \tag{$\diamondsuit$}
\end{align} $$
Now, both the numerator and denominator of $\diamondsuit$ factor very nicely, and in fact share a common factor, and hence can be further simplified.
Recall: $$\frac{[b + c]a}{a[d+e]} = \frac{a[b+c]}{a[d+e]} = \frac{b+c}{d+e}$$
Or: $$\frac{a[b+c]}{a[d+e]} = \frac{ab+ac}{ad+ae}= \frac{b+c}{d+e}$$
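A symbolic confirmation of the final simplification (a sketch assuming `sympy`):

```python
from sympy import symbols, cancel

x = symbols('x')
expr = (x - 4/(x - 3)) / (x + (2 + 6*x)/(x - 3))
print(cancel(expr))  # (x - 4)/(x + 2)
```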
H: Describe the graph locus represented by this equation
I want to know the shape of the region described by
$$ Im(z^2) = 4 $$
so I did the following:
$$ z=x + iy $$
$$z^2 = x^2 + 2xiy -y^2 $$then
$$Im(z^2) = 2xy
$$
then the locus is $$ 2xy = 4 $$
$$ xy = 2 $$
then it's the line $$ xy = 2 $$
What does this equation describe?
AI: We get the equation $2xy=4$. This is a hyperbola, not a line. You are undoubtedly familiar with its close relative, $y=\dfrac{1}{x}$.
You can put the hyperbola $xy=2$ into more familiar form by letting $x=\frac{u+v}{\sqrt{2}}$, $y=\frac{u-v}{\sqrt{2}}$. This is just a rotation. Then $xy=2$ becomes $u^2-v^2=4$.
H: How does one show that two functors are *not* isomorphic?
Let $C$ be the category of finite-dimensional vector spaces over some field. It is easy to construct pairs of endofunctors $F, G$ of $C$, of the same variance, such that $F(V)$ and $G(V)$ have the same dimension for every $V \in C$, yet are not naturally isomorphic.
For example, we could take $G(V) := (V^*)^{\otimes 2}$, and $F(V) := (V^{\otimes 2})^*$, where $*$ denotes the dual. Or, I'm sure many of us have, at some time, tried showing that $(\Lambda V)^* \simeq \Lambda(V^*)$ canonically, before realizing that it is false. A convincing reason why they are not canonically isomorphic is that there is no obvious reason, in terms of universal properties, why they should be. In other words: if you try proving that $(\Lambda V)^* \simeq \Lambda(V^*)$ canonically, you're going to have a bad time.
However, how does one explicitly show that $F$ and $G$ are not isomorphic? Is there a slick, systematic way of doing this?
Edit: Well, my examples couldn't be worse, as Mariano points out. But the gist of my question still stands (perhaps just get rid of the finite-dimensionality assumption ;))
AI: Here is one way to construct families of endofunctors $F, G$ of $\text{FinVect}$ that can't be distinguished on the basis of dimension counts. First, consider the family of endofunctors $V \mapsto S^n(V)$, and second, consider the family of endofunctors $V \mapsto \Lambda^n(V)$. The first sends vector spaces of dimension $d$ to vector spaces of dimension ${d+n-1 \choose n}$ while the second sends vector spaces of dimension $d$ to vector spaces of dimension ${d \choose n}$. Both of these form bases for the space of integer-valued polynomials in one variable ($d$), hence together there are many linear dependences between them. The smallest one is that
$${d+1 \choose 2} = {d \choose 2} + d$$
which implies that $F(V) = S^2(V)$ and $G(V) = \Lambda^2(V) \oplus V$ can't be distinguished by their dimension counts.
Nevertheless, they are not isomorphic. The reason is that for any endofunctor $F$, the vector space $F(V)$ naturally admits the structure of a $\text{GL}(V)$-representation, and $S^2(V)$ and $\Lambda^2(V) \oplus V$ are not isomorphic as $\text{GL}(V)$-representations. More simply, you can consider the action of a diagonal matrix on $S^2(V)$ and on $\Lambda^2(V) \oplus V$ and verify that the traces (which are symmetric functions) are different.
In fact I think every endofunctor (edit: defining a polynomial function between Hom spaces, see Steve's comment) of $\text{FinVect}$ is equivalent to a direct sum of Schur functors, so they can all be distinguished in this way up to isomorphism. (I need some mild hypothesis on the base field; I think characteristic zero suffices.)
H: Letters for complex numbers
Suppose that I am writing a proof or some other piece of mathematical writing, and wish to introduce $n$ distinct complex numbers, for some positive integer $n$. What are the complex numbers called?
If $n=1$, then clearly the (unique) complex number I am interested in is called $z$.
If $n=2$, the two complex numbers are called $z$ and $w$.
It is the case $n\ge3$ that i am concerned about. There seem to be no other standard letters for complex numbers, and all the other letters around that end of the alphabet ($x,y,u,v,t$ etc.) are reserved for real numbers. Indeed we often write $z=x+iy$ and $w=u+iv$ where $x,y,u,v$ are stipulated to be real.
What am I missing here? I can't think of any way to get around this problem. Is there a theorem in mathematics that states that every problem parametrized by $n$ complex numbers is equivalent (in some precise sense) to a problem involving just two complex numbers?
AI: Subscripts!
$$
z_1 = x_1 + i y_1, \dots, z_n = x_n + i y_n
$$
H: Proving that $\int_0^{2\pi}\frac{ 1-a\cos\theta}{1-2a\cos\theta + a^2} \mathrm d\theta = 0$ for $|a|>1$
Let $|a|>1$. Show that $$\int_0^{2\pi}\frac{ 1-a\cos\theta}{1-2a\cos\theta + a^2} \mathrm d\theta = 0$$
I'm also trying to solve this problem. I wasn't sure if I was supposed to start a new thread or contribute to previously asked questions about this integral.
I have some ideas that we could do something like this
$$ -i \oint _{|z|=1} \frac{1-az}{(z-2a) }dz = \int_{\theta=0}^{\theta=2\pi} \frac{1-a e^{i\theta}}{e^{i\theta}-2a } e^{i\theta}d\theta = \int_{\theta=0}^{\theta=2\pi} \frac{1-a e^{i\theta}}{1-2ae^{-i\theta} }d\theta \quad (2)$$
Then we have that the real part of $(2)$ is almost equal to the desired integral, except for the $a^2$ in the denominator.
An inequality to the right for the desired integral is then given by the real part of $(2)$. All we want to find is an inequality to the left for the desired integral. That hopefully could easily be calculated with the help of
$$ \oint _{|z-z_0|=r}(z-z_0)^n\,dz = \begin{cases} 0, & n \neq -1\\ 2\pi i, & n = -1 \end{cases} $$
Any suggestions?
AI: I'm not sure you stated things correctly, but I can tell you the best way to work with such an integrand. Basically,$$\frac{1-a\cos \theta}{1-2a\cos\theta + a^2}=\text{Re}\left(\frac{z}{z-a^2}\right)$$ when $z=a e^{i\theta}$, so you should choose the circle of radius $a$ centered at the origin as your contour to perform the integration. This should help you answer any further questions you have about these types of integrals.
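A numerical sanity check that the integral vanishes for $|a|>1$ (midpoint rule; $a=3$ is an arbitrary choice):

```python
from math import cos, pi

a, N = 3.0, 200_000
total = sum((1 - a * cos(2 * pi * (k + 0.5) / N))
            / (1 - 2 * a * cos(2 * pi * (k + 0.5) / N) + a * a)
            for k in range(N))
print(total * 2 * pi / N)  # ~ 0.0
```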
H: Is there a special term for an array consisting only of ones?
Is there a special term for an array consisting only of ones?
Sorry for the rather elementary question. I am getting into MapReduce programming and am trying to frame my code to be nice and neat.
AI: I have used, and seen used, the term "all-ones vector". It is typically notated however other vectors are being notated in your writing. For instance, if $\vec{x}$ is a vector, then the all-ones vector would be $\vec{1}$. Likewise, if $\mathbf{x}$ is a vector, then you'd write $\mathbf{1}$.
As an example, consider the average of a list of numbers. If the list is in vector $\mathbf{x}$, then the average can be written:
$\mathrm{average}(\mathbf{x}) = \dfrac{\mathbf{x}\cdot\mathbf{1}}{\mathbf{1}\cdot\mathbf{1}}$
where "$\cdot$" represents the standard dot product. |
H: easy question about conditional expectations
I got a quite easy exercise I just don't get.
Let P be a probability measure, $\frac{dQ}{dP}=Z$, $Z>0$ a.s. and $E[Z]=1$, hence Q is an equivalent proba-measure to P. Then I shall prove that for a sub-sigma field G we have:
$E_{Q}[X|G](\omega)=\frac{E_{P}[XZ|G](\omega)}{E_{P}[Z|G](\omega)}=:h(\omega)$, which seems easy as pie, but when using the definition, I need to show that $E_Q[X\,\mathbb{1}_A]=E_Q[h\,\mathbb{1}_A]$ for all $A\in G$, which makes me confused... these double integrals and the fraction of integrals...
Thx for your help!!
AI: Let $A\in G$. We have
\begin{align*}
\int_A \frac{E_P[XZ\mid G]}{E_P[Z\mid G]}\, dQ &= \int_A \frac{E_P[XZ \mid G]}{E_P[Z\mid G]}\,Z \, dP\\
&= \int_A E_P\left[\frac{E_P[XZ \mid G]}{E_P[Z\mid G]}\,Z\Biggm | G \right]\, dP\\
&= \int_A \frac{E_P[XZ \mid G]}{E_P[Z\mid G]}\,E_P[Z\mid G] \, dP & \text{the fraction is $G$-measurable}\\
&= \int_A E_P[XZ\mid G]\,dP\\
&= \int_A XZ\, dP\\
&= \int_A X\, dQ.
\end{align*}
Hence
$$ E_Q[X \mid G] = \frac{E_P[XZ\mid G]}{E_P[Z\mid G]} $$
as wished. |
H: Proof that a linear transformation is continuous
I got started recently on proofs about continuity and so on. So to start working with this on $n$-spaces I've selected to prove that every linear function $f: \mathbb{R}^n \to \mathbb{R}^m$ is continuous at every $a \in \mathbb{R}^n$. Since I'm just getting started with this kind of proof I just want to know if my proof is okay or if there's any inconsistency. My proof is as follows:
Since $f$ is linear, we know that there's some $k\in \mathbb{R}$ such that $|f(x)|\leq k|x|$ for every $x\in \mathbb{R}^n$, in that case let $a\in \mathbb{R}^n$ and let $\varepsilon >0$. Consider $\delta = \varepsilon /k$ and suppose $|x-a|<\delta$, in that case we have:
$$|f(x)-f(a)|=|f(x-a)|\leq k |x-a|<k \frac{\varepsilon}{k}=\varepsilon$$
And since $|x-a|<\delta$ implies $|f(x)-f(a)|<\varepsilon$ we have that $f$ is continuous at $a \in \mathbb{R}^n$. Since $a$ was arbitrary, $f$ is continuous on $\mathbb{R}^n$. Is this proof fine? Or is there something I've missed along the way?
AI: This proof is correct modulo the result you stated at the beginning, i.e.
$$
\text{there exist $k\in\mathbb{R}$ such that $|f(x)|\leq k|x|$ for all $x\in\mathbb{R}^n$}
$$
The proof of this fact is much more interesting and uses the compactness of the unit ball in $\mathbb{R}^n$.
H: How should one think to get the radius of the resulting curve?
For example, the curve C is given as the intersection between
$$
C: x^2+y^2+z^2=1, \quad x+y+z=0
$$
Radius: 1
Another one:
$$
C: x^2+y^2+z^2=1, \quad x+y=1
$$
Radius:
$$
\frac{1}{\sqrt{2}}
$$
How should I think to get these? I know they're ellipses, but substituting them into each other yields "hard" expressions.
AI: A plane at distance $d$ from the center of a sphere of radius $R$ intersects the sphere in a circle of radius $\sqrt{R^2-d^2}$ (this is just the good old Pythagorean theorem).
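For the two curves in the question this gives exactly the stated radii: the plane $x+y+z=0$ passes through the center of the sphere, so $d=0$ and the radius is $\sqrt{1-0}=1$; the plane $x+y=1$ lies at distance
$$d=\frac{|0+0-1|}{\sqrt{1^2+1^2}}=\frac{1}{\sqrt{2}}$$
from the origin, so the radius is $\sqrt{1-\tfrac12}=\frac{1}{\sqrt{2}}$.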
H: How to evaluate $\lim_{x\to 0} (1+2x)^{1/x}$
Good night guys! I'm having some trouble with this:
$$\lim_{x\to 0} (1+2x)^{1/x}$$
I know that $\lim_{x\to\infty} (1 + 1/x)^x = e$
but I don't know if i should take $h=1/(2x)$ or $h=1/x$
Can someone please help me? Thanks!
AI: We first find the limit as $x$ approaches $0$ from the right.
Let $y=\frac{1}{2x}$. Then $2x=\frac{1}{y}$ and $\frac{1}{x}=2y$. We want
$$\lim_{y\to\infty}\left(1+\frac{1}{y}\right)^{2y}.$$
This is the square of the familiar
$$\lim_{y\to\infty}\left(1+\frac{1}{y}\right)^{y}.$$
We conclude that
$$\lim_{x\to 0^+}\left(1+2x\right)^{1/x}=e^2.$$
The limit from the left is more troublesome with this approach. Let $1+2x=\frac{1}{1+t}$, so that $t\to 0^+$ as $x\to 0^-$. Then $\frac{1}{x}=-2\frac{1+t}{t}$. So we are looking for
$$\lim_{t\to 0^+}\frac{1}{\left(1+t\right)^{-2(1+t)/t}},$$
which is
$$ \lim_{t\to 0^+}\left(1+t\right)^{2(1+t)/t}.$$
Now an argument much like the one before works. The limit from the left is also $e^2$. |
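A quick numerical check of both one-sided limits (my addition, plain Python):

```python
import math

# (1 + 2x)^(1/x) should approach e^2 ≈ 7.389056 from both sides of 0.
for x in (1e-3, 1e-6, -1e-6, -1e-3):
    print(x, (1 + 2 * x) ** (1 / x))
print("e^2 =", math.exp(2))
```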
H: Where does the function $f(x) = \frac{2x}{x - 7}$ have an increasing slope?
Where does the function $f(x) = \frac{2x}{x - 7}$ have an increasing slope?
$a. x \le 0, x > 7$
$b. x<7$
$c. x > 7$
$d. x \in \Bbb R, x \neq 7$
This question is from a test of mine in a pre-calculus course (so no calculus allowed in answering the question). I have no idea as to how to solve this problem. I can tell where the function is negative and where it is positive, but that's about it, and I'm fairly sure that's no use here.
Any ideas?
AI: I'd go for $$\frac{2x}{x-7}=\frac{2(x-7)}{x-7} +\frac{14}{x-7}$$
The first term is constant, and the second one is just the graph of $1/x$ shifted and rescaled. |
H: What is the actual geometric meaning of trigonometric operations such as adding cos,sine,tan
$$\sin(\pi/4)+\cos(\pi/4)=\frac{\sqrt{2}}{2}+\frac{\sqrt{2}}{2}= \frac{2\sqrt{2}}{2}=\sqrt{2}$$
Thinking of trig components (cosine, sine) that I used to produce the result using the mechanics of algebra, makes me wonder what is the geometric representation of
$$\sin(π/4)+\cos(π/4)$$
The sine function corresponds to the shadow projection on $y$-axis (opposite) and the cosine function to the shadow projection on $x$-axis (adjacent).
In the previous operations I actually added those lines shadowed on the Cartesian axes. In other words, I added those sides of the triangle that form a $45$-$45$-$90$ triangle.
What is the actual geometric meaning of trigonometric operations such as adding cosine, sine, tangent, etc., or subtracting them? Am I just adding those sides and lines in order to get one new line with length $$\sqrt{2}$$ Is that all?
AI: Well, in the case of adding sines and cosines, you can think of it algebraically:
$$\sin\theta=\frac{y}{r};\cos\theta=\frac{x}{r}$$
Therefore, when you're taking $\sin\theta+\cos\theta$ you're really finding an expression for $\displaystyle\frac{x+y}{r}$. Similarly:
$$\cos\theta+\tan\theta=\frac{x}{r}+\frac{y}{x}=\frac{x^2+ry}{rx}$$
$$\sin\theta+\tan\theta=\frac{y}{r}+\frac{y}{x}=\frac{xy+ry}{rx}$$
In the case of a 45-45-90 triangle, represented by $\theta=\frac{\pi}{4}$, you have $x=y=\sqrt{2}$ and $r=2$, therefore:$$\sin\theta+\cos\theta=\frac{\sqrt{2}+\sqrt{2}}{2}=\sqrt{2}$$
$$\cos\theta+\tan\theta=\frac{2+2\sqrt{2}}{2\sqrt{2}}=\frac{2+\sqrt{2}}{2}$$
And the same for $\sin\theta+\tan\theta$. |
H: If $y<x$, is there an equivalent statement of the form $y>\text{something}$?
For instance I know $y<x$ implies $-y > -x$ but is there a way to phrase it in terms of $y>$ something (that does not itself contain $y$)?
AI: No.
For each $x\in\Bbb R$ we have that the set $L_x:=\{y\,\mid\,y<x\}$ is downward closed meaning that $u<v\in L_x \,\implies\,u\in L_x$.
The other kind of sets, $G_x:=\{y\,\mid\,y>x\}$, don't have this property: pick $u<x<v$; then $v\in G_x$ but $u\notin G_x$. Since $L_x$ is downward closed, nonempty, and not all of $\Bbb R$, it cannot equal any $G_s$.
H: Computing the expected value of a matrix?
This question is about finding a covariance matrix and I wasn't sure about the final step.
Given a standard $d$-dimensional normal RVec $X=(X_1,\ldots,X_d)$ has i.i.d components $X_j\sim N(0,1)$. Here we treat $X$ as a row vector. Now consider $Y:=\mu+XA$, where $\mu\in\mathbb{R}^m$, $A\in\mathbb{R}^{d\times m}$, so that $Y\in\mathbb{R}^m$. Then (we write the covariance matrix of Y as $C_Y^2$), $$C_Y^2=\mathbb{E}(Y-\mathbb{E}Y)^\intercal(Y-\mathbb{E}Y)=\mathbb{E}(XA)^\intercal XA$$ $$=\mathbb{E}A^\top X^\top XA=A^\top(\mathbb{E}X^\top X)A=A^\top IA=A^\top A.$$
I wasn't sure about how to compute $\mathbb{E}X^\top X$. Since $X$ is a row vector then $X^\top X$ is a column vector multiplied by a row vector which resulted in a matrix, then how can we take the expected value of a matrix? Moreover how can we get the identity matrix as the product?
Also, how should one choose the dimensions $d$ and $m$? It is said (without reason) that it is not good if $m>d$.
May anyone explain it to me? Thanks.
AI: You do it term by term, which you can do because expectation is a linear operator. You get the identity matrix because the $X_i$ are independent and $N(0,1)$, so $E[X_i \cdot X_i]=1$ while $E[X_i \cdot X_j]=0$ for $i\neq j$.
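As for the dimensions: if $m>d$, then $A^\top A$ has rank at most $d<m$, so the covariance matrix is singular and the distribution of $Y$ is degenerate; presumably that is why $m>d$ is considered "not good". A Monte Carlo sanity check (a sketch with arbitrary constants, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, N = 3, 2, 200_000
A = rng.standard_normal((d, m))
mu = rng.standard_normal(m)
X = rng.standard_normal((N, d))      # rows are i.i.d. N(0, I_d) row vectors
Y = mu + X @ A
print(np.cov(Y, rowvar=False))       # empirically close to A.T @ A
print(A.T @ A)
```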
H: If $2^x=0$, find $x$.
If $2^x=0$, find $x$.
Solution: I know the range of the function $2^x$ is $(0,\infty)$.
So $2^x=0$ is not possible for any real value of $x$.
Hence the equation has no solution; we can't find such a value of $x$. Am I right?
Please help me.
Can $x$ be in $[-\infty,\infty]$?
i.e is $2^x=0$ possible for $x=-\infty$?
AI: Yes. You are right, there is no $x \in \mathbb{R}$ such that $2^x = 0$. However, note that
$$\lim_{x \to -\infty} 2^x = 0$$
However, on the extended real line, i.e., $\mathbb{R} \cup \{-\infty\} \cup \{\infty\}$, there exists a solution, since one of the ways of defining "$2^{-\infty}$" on the extended real line is to define "$2^{-\infty}$" as the limit of $\lim_{x \to -\infty} 2^x$, which is $0$. |
H: Boundary and closure of a set
I'm trying to show that a point is in the boundary of $E$ if and only if it belongs to the closure of both $E$ and its complement.
Let $\overline{E}$ denote the closure of $E$ and $E^\circ$ be the interior of $E$. Then the boundary is defined to be $\overline{E} \setminus E^\circ$.
So what I need to show is $\overline{E} \setminus E^\circ = \overline{E} \cap \overline{(S \setminus E)}$.
After looking at this link: A point is in the boundary of $E$ if and only if it belongs to the closure of both $E$ and its complement. I am confused about how the OP concluded that he/she needed to show $S \setminus E^\circ = \overline{(S \setminus E)}$.
Could someone explain why?
AI: $\newcommand{\cl}{\operatorname{cl}}\newcommand{\int}{\operatorname{int}}$Suppose that you can show that $S\setminus\int E=\cl(S\setminus E)$. Then
$$(\cl E)\setminus\int E=\big((\cl E)\cap S\big)\setminus\int E=(\cl E)\cap(S\setminus\int E)=(\cl E)\cap\cl(S\setminus E)\;,$$
which is exactly what you want.
To see where the idea comes from, note that both $(\cl E)\setminus\int E$ and $(\cl E)\cap\cl(S\setminus E)$ are subsets of $\cl E$. If $A$ is any set disjoint from $\cl E$, therefore,
$$(\cl E)\setminus\int E=(\cl E)\cap\cl(S\setminus E)\quad\text{iff}\quad A\cup\Big((\cl E)\setminus\int E\Big)=A\cup\Big((\cl E)\cap\cl(S\setminus E)\Big)\;,$$
and a clever choice of $A$ might give us sets that are easier to think about than $(\cl E)\setminus\int E$ and $(\cl E)\cap\cl(S\setminus E)$. In particular, if we let $A=S\setminus\cl E$, we can compare the set,
$$(S\setminus\cl E)\cup\Big((\cl E)\setminus\int E\Big)\;,$$
which is simply all of $S$ except $\int E$, or $S\setminus\int E$, with the set
$$(S\setminus\cl E)\cup\Big((\cl E)\cap\cl(S\setminus E)\Big)\;,$$
which is simply $\cl(S\setminus E)$, and try to show that these are equal. |
H: Simplifying Differentiation
I'm working off my textbook and I've followed the steps easily enough until it gets to this
$ \dfrac {dy}{dx} = \dfrac{(x^2 + 1)^3}{2 \sqrt{x - 1}} + \sqrt{x-1}~(6x)~(x^2 + 1)^2$
$= \dfrac{(x^2 + 1)^2}{2\sqrt{x - 1}}[(x^2 + 1) + 12x(x - 1)]$
$= \dfrac{(x^2 + 1)^2(13x^2 - 12x + 1)}{2\sqrt{x - 1}}$
How do they go from the first line onwards?
Thanks in advance (sorry about formatting)
AI: I will assume that we are looking at
$$\frac{(x^2+1)^3}{2\sqrt{x-1}} +\sqrt{x-1}(6x)(x^2+1)^2.$$
Take out the common factor $(x^2+1)^2$, and multiply and divide by $2\sqrt{x-1}$. We get
$$\frac{(x^2+1)^2}{2\sqrt{x-1}}\left((x^2+1)+(2)(x-1)(6x) \right).$$
Now simplify the expression on the right.
That seems to be the way it was thought of. I would prefer to first bring the expression to the common denominator $2\sqrt{x-1}$. We then get
$$\frac{(x^2+1)^3 +2(x-1)(6x)(x^2+1)^2}{2\sqrt{x-1}}.$$
Now take out the common factor
$\dfrac{(x^2+1)^2}{2\sqrt{x-1}}$. Perhaps that was the way it was done, with a step skipped.
Remark: If you have seen the (natural) logarithm already, the following approach, called logarithmic differentiation, may appeal to you. It is of at most marginal utility in this problem, but could be useful when differentiating a longer product.
We were differentiating $(x^2+1)^3 \sqrt{x-1}$. Let
$$y=(x^2+1)^3\sqrt{x-1}.$$
Take the natural logarithm of both sides. Using "laws of logarithms," we get
$$\ln y=3\ln(x^2+1)+\frac{1}{2}\ln(x-1).$$
Differentiate, using the Chain Rule. We get
$$\frac{1}{y}\frac{dy}{dx}=\frac{6x}{x^2+1} +\frac{1}{2(x-1)}.$$
Finally, multiply by $y$ to get $\frac{dy}{dx}$. |
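If you want to double-check the algebra by machine, here is a small sympy verification (my addition, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x', real=True)
y = (x**2 + 1)**3 * sp.sqrt(x - 1)
dy = sp.diff(y, x)
target = (x**2 + 1)**2 * (13*x**2 - 12*x + 1) / (2 * sp.sqrt(x - 1))
print(sp.simplify(dy - target))   # 0, so the simplified derivative is correct
```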
H: What is the exact definition of polynomial functions?
I'm trying to understand the difference between polynomial functions and the evaluation homomorphisms. I noticed that I don't know what's the exact definition of a polynomial function, although I've used it since I was in school.
I'm confused.
Thanks in advance
AI: A polynomial with coefficients in a commutative ring $R$ is a formal sum $r_n x^n + \ldots + r_1 x + r_0$. It is not a function. It is merely a finite sum of terms of the form $r_i x^i$, where $x$ is some abstract symbol. The ring structure of $R$ lets us make these symbols into a ring, that we denote $R[x]$.
For each $r \in R$, there is a ring homomorphism $e_r:R[x] \rightarrow R$. These are the evaluation maps, defined by replacing each instance of $x$ in the formal sum of a polynomial with $r$ and considering this as an element of $R$. We sometimes denote $e_r(f(x))$ as $f(r)$, but this is not the same as the formal polynomial $f(x)$. $f(r)$ for $r \in R$ is actually an element of the ring, while the formal polynomial $f(x)$ is not.
For each polynomial $f(x) \in R[x]$, there is a map $R \rightarrow R$ defined by $r \mapsto e_r(f(x)) = f(r)$. This is not a ring homomorphism! (For instance, the polynomial function on $\mathbb{R}$ defined by $f(x) = x^2$ is not a homomorphism of rings.) It is a function determined by a polynomial, and so we call such a function a polynomial function.
All of these things can be generalized to polynomials in multiple variables, as well.
A note: Why the fuss about formal polynomials and the functions they define? Typically, you're safe not making a distinction between the formal sum and the function it defines. However, some issues can arise and you need to be aware of them. For instance, consider the polynomial $x^2 - x = (x-1)x$ with coefficients in the ring $\mathbb{F}_2$, the field with two elements. This polynomial, formally, is not zero... it has nonzero coefficients! However, the function it defines on $\mathbb{F}_2$ is the zero function, since there are only two points in $\mathbb{F}_2$ and the evaluation homomorphism for each point sends the function to zero. |
H: $\liminf, \limsup$, Measure Theory, show: $\lim \int n \ln(1+(f/n)^{1/2})\mathrm{d}\mu=\infty$
Let $(X,\Omega,\mu)$ be a measure space and $M^+(X,\Omega)$ denote the set of all non-negative real valued measurable functions.
If $f \in M^+(X,\Omega)$ and $0< \int f \mathrm{d}\mu < \infty$ then $$\lim_{n\rightarrow\infty} \int n \ln\left(1+\left(\frac{f}{n}\right)^{\frac{1}{2}}\right)\mathrm{d}\mu = \infty.$$
The hint given says use Fatou's Lemma. I am not really too sure exactly what Fatou's Lemma will do to help me solve this:
If I let $$f_n = n \ln\left(1+\left(\frac{f}{n}\right)^{\frac{1}{2}}\right)$$ then $f_n \in M^+(X,\Omega)$ and so Fatou's Lemma says:
$$\int \liminf f_n \mathrm{d}\mu \leq \liminf \left( \int f_n \mathrm{d}\mu \right).$$
To be honest, $\liminf f_n$ looks daunting to me as I know it as
$$\sup_{m \in \mathbb{N}}\left( \inf_{n \geq m} f_n \right)=\sup_{m \in \mathbb{N}}\left( \inf_{n \geq m}\left[ n \ln\left(1+\left(\frac{f}{n}\right)^{\frac{1}{2}}\right)\right] \right).$$
My actual attempt at solving the problem was looking at the set $E = \{ x \in X : 1+\left(\frac{f}{n}\right)^{\frac{1}{2}} \geq e \}$ and then breaking
$$\int f_n \mathrm{d}\mu = \int_E f_n \mathrm{d}\mu + \int_{X\setminus E}f_n\mathrm{d}\mu$$
and then we have
$$\int_E n \ln\left(1+\left(\frac{f}{n}\right)^{\frac{1}{2}}\right) \mathrm{d}\mu \geq \int_{E} n \mathrm{d}\mu=n\mu(E)$$
and assuming we have a space with non-zero measure (Do some axioms include just looking at measures that give the space positive measure?) then we'd have $\int f_n \mathrm{d}\mu \geq n\mu(E)$ and by taking limits of both sides (as long as $\mu(E) \neq 0$) we'd get $\lim \int f_n \mathrm{d}\mu = \infty$ as needed. The problem is that I wasn't sure what to do in the case that $\mu(E)=0$.
I would like to add I am interested in exercises to help work on improving my comfort and understanding of $\limsup$ and $\liminf$ if possible.
Thanks very much!
AI: You can check that for $a>0$ the function
$$
\varphi(t)=t\log\left(1+\left(\frac{a}{t}\right)^{1/2}\right)
$$
is strictly increasing on $[1,+\infty)$ and tends to infinity. Unfortunately you have to dig down to the second derivative to prove this. Thus for each $x\in X$ with $f(x)>0$ you have a sequence $\{f_n(x):n\in\mathbb{N}\}$ that increases strictly to infinity, so
$$
\liminf\limits_{n\to\infty} f_n(x)=\lim\limits_{n\to\infty} f_n(x)=+\infty
$$
The rest is clear: since $\int f \,\mathrm{d}\mu > 0$, the set $\{f>0\}$ has positive measure, so $\int \liminf_n f_n \,\mathrm{d}\mu = \infty$, and Fatou's lemma gives $\liminf_n \int f_n \,\mathrm{d}\mu \geq \int \liminf_n f_n \,\mathrm{d}\mu = \infty$.
H: Given a triangle with points in $\mathbb{R}^3$, find the coordinates of a point perpendicular to a side
Consider the triangle ABC in $\mathbb{R}^3$ formed by the point $A(3,2,1)$, $B(4,4,2)$, $C(6,1,0)$.
Find the coordinates of the point $D$ on $BC$ such that $AD$ is perpendicular to $BC$.
I believe this uses projections, but I can't seem to get started. I tried the projection of $AC$ onto $BD$ and $AB$ onto $BC$, but to no avail.
Any help is loved! Thanks.
AI: Here is one way; it is presumably not the intended approach.
The direction vector of $BC$ is $(2,-3,-2)$.
A generic point $D_t$ on $BC$ is given by $(4,4,2)+t(2,-3,-2)$.
The direction vector of $AD_t$ is $(1,2,1)+t(2,-3,-2)$. The dot product of this with $(2,-3,-2)$ is $-6+17t$. This dot product must be $0$.
We end up with $D=\left(\frac{80}{17},\frac{50}{17},\frac{22}{17} \right)$. |
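The same computation in projection form, $D = B + \frac{(A-B)\cdot u}{u\cdot u}\,u$ with $u$ the direction of $BC$, can be checked numerically (my addition, assuming numpy):

```python
import numpy as np

A = np.array([3., 2., 1.])
B = np.array([4., 4., 2.])
C = np.array([6., 1., 0.])
u = C - B                                  # direction of BC
t = np.dot(A - B, u) / np.dot(u, u)        # t = 6/17
D = B + t * u
print(D)                                   # ≈ [4.7059, 2.9412, 1.2941] = (80/17, 50/17, 22/17)
print(np.dot(A - D, u))                    # ≈ 0, so AD is perpendicular to BC
```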
H: $J$ be a $3\times 3$ matrix with all entries $1$ Then $J$ is
$J$ be a $3\times 3$ matrix with all entries $1$ Then $J$ is
Diagonalizable
Positive semidefinite
$0,3$ are only eigenvalues of $J$
Is positive definite
$J$ has minimal polynomial $x(x-3)$, so 1, 2, 3 are true; am I right?
AI: You are correct. This is a multiple duplicate but I can't find any.
The matrix has rank $1$, which makes $0$ an eigenvalue of multiplicity $2$. Then the trace tells us what the other eigenvalue is, whence diagonalizability, since it adds the missing dimension. Since $0$ is an eigenvalue, it can't be positive definite. But it is indeed positive semidefinite by diagonalization in an orthonormal basis ($J$ is symmetric, eigenvalues are nonnegative). The minimal polynomial is obvious by diagonalization.
Alternative: we can observe that $J^2=3J$, so $J$ is annihilated by $X^2-3X=X(X-3)$, which the minimal polynomial therefore divides. Since $J$ is neither $0$ nor $3I_3$, the latter must be the minimal polynomial. It splits and has simple roots. Whence 1 and 3.
H: Determining the group homomorphism in semidirect product
We know that if $N$ is a normal subgroup, $H$ is a subgroup, and $\varphi$ is the group homomorphism such that $\varphi:H\to$Aut$(N)$. And this gives a unique group, called the outer semidirect product of $N$ and $H$.
But when a group and two of its subgroups are known, how to determine the homomorphisms? For instance, from here we know
$$D_3\times \mathbb{Z}_3=(\mathbb{Z}_3\times \mathbb{Z}_3) \rtimes \mathbb{Z}_2$$
And from here
$$\langle a,b,c| a^3=c^3=b^2=1, ac=ca,ab=ba^{-1},cb=bc^{-1}\rangle=(\mathbb{Z}_3\times \mathbb{Z}_3)\rtimes \mathbb{Z}_2$$
The group homomorphism $\varphi:\mathbb{Z}_2 \to$Aut$(\mathbb{Z}_3\times \mathbb{Z}_3)$ are surely different. But I have no idea on how to determine this homomorphism. Any hint or solutions are welcomed! Thanks!
AI: If $N\trianglelefteq G$ and $H$ is a subgroup such that $HN=G$ and $H\cap N=1$, then $G$ is an internal semi-direct product $N\rtimes H$. This means it can be constructed as an external semidirect product $N\rtimes H$ with the data of a homomorphism $H\to{\rm Aut}(N)$. If we want to characterize this homomorphism when we already know $G$ full well, we think of conjugation. That is, for $h\in H$ and $n\in N$ and homomorphism $\alpha:H\to{\rm Aut}(N)$, then $\alpha(h)$ applied to $n$ yields the element $hnh^{-1}\in N\subseteq G$.
I like to think of semidirect products as special quotients of free products. Recall that relations are equations of the form $\rm blah=blah$, and we can quotient by relations $\rm \color{Blue}{blah}=\color{Purple}{blah},\color{Teal}{blah}=\color{Magenta}{blah},$... by quotienting a group by the normal subgroup generated by $\rm \color{Blue}{blah}\color{Purple}{blah}^{-1}$, $\rm \color{Teal}{blah}\color{Magenta}{blah}^{-1}$, etc. (The normal subgroup generated by a set $S$ is the normal closure of $\langle S\rangle$, i.e. the group generated by the union of all conjugates of $S$.) Then $N\rtimes H$ is isomorphic to $N*H$ modulo $h(n)=hnh^{-1}$ for all elements $h\in H$ and $n\in N$ (the expression $h(n)$ is interpreted as the result of $h$'s action on $n$). |
H: If one of $n$ coins is fair, then find the probability that the total number of heads is even
A collection of $n$ coins is flipped. The outcomes are independent, and the $i$-th coin comes up heads with probability $\alpha_i, i=1, \dots, n$. Suppose that for some value of $j, 1 \leq j \leq n, \alpha_j=\frac{1}{2}$. Find the probability that the total number of heads to appear on the $n$ coins is an even number.
Now, I don't understand the importance of the statement "Suppose that for some value of $j, 1 \leq j \leq n, \alpha_j=\frac{1}{2}$"
AI: That one coin guarantees that the probability is exactly $1/2$.
If the sum from the other coins is odd and that one turns up "heads", then the total is even; otherwise it's odd.
$$
\begin{align}
& {}\quad\Pr(\text{even}) = \mathbb E(\Pr(\text{even} \mid \text{other coins' outcomes})) \\[8pt]
& = \sum_{\text{other coins' outcomes}} \Pr(\text{even}\mid\text{particular other coins' outcome})\cdot\Pr(\text{particular other}\ldots) \\[8pt]
& = \sum_\text{outcomes} \frac12\cdot \Pr(\cdots) \\[8pt]
& = \frac12,\text{ since the sum of those probabilities is }1.
\end{align}
$$
Slightly later edit: Just to be concrete: Suppose $\Pr(\text{1st coin H}) = 9/10$ and $\Pr(\text{2nd coin H})=1/2$. Then
$$
\begin{align}
\Pr(\text{even}) & = \Pr(\text{1st H})\cdot\Pr(\text{even}\mid\text{1st H}) + \Pr(\text{1st
T})\cdot\Pr(\text{even}\mid\text{1st T}) \\[8pt]
& = \frac9{10}\cdot\frac12 + \frac1{10}\cdot\frac12 \quad = \frac12\left(\frac9{10}+\frac1{10}\right) \quad = \frac12\cdot1.
\end{align}
$$ |
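A short simulation backs this up (my addition; the bias values other than the fair coin are arbitrary):

```python
import random

alphas = [0.9, 0.5, 0.3, 0.7]   # one coin has alpha = 1/2
N = 100_000
even = sum(1 for _ in range(N)
           if sum(random.random() < a for a in alphas) % 2 == 0)
print(even / N)                 # ≈ 0.5, regardless of the other biases
```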
H: $A$ be a $2\times 2$ real matrix with trace $2$ and determinant $-3$
$A$ be a $2\times 2$ real matrix with trace $2$ and determinant $-3$, consider the linear map $T:M_2(\mathbb{R})\to M_2(\mathbb{R}):=B\to AB$ Then which of the following are true?
$T$ is diagonalizable
$T$ is invertible
$2$ is an eigen value of $T$
$T(B)=B$ for some $0\ne B\in M_2(\mathbb{R})$
if $ A=\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}$ then I have calculated that the matrix of $T$ will be \begin{pmatrix}a_{11}&a_{12}&0&0\\a_{21}&a_{22}&0&0\\0&0&a_{11}&a_{12}\\0&0&a_{21}&a_{22}\end{pmatrix} so I can say $T$ is invertible, since $\det T=(\det A)^2=9$. Could anyone help me decide whether the others are true or false?
AI: Nicely done so far. Don't forget to specify that you took the matrix of $T$ with respect to the basis $\{E_{11},E_{12},E_{21},E_{22}\}$ if you are to write this down.
By assumptions, the characteristic polynomial of $A$ is $$X^2-(\mbox{tr} \,A)X+\det A=X^2-2X-3=(X-3)(X+1).$$
So the eigenvalues of $A$, namely $-1$ and $3$, are distinct. Hence $A$ is diagonalizable. This should help you answer 1, 3, and 4 by block considerations. Note that 4 says that $1$ is an eigenvalue of $T$.
In case you are not used to matrix block computations, if $P$ is invertible in $M_2$, then
$$
\pmatrix{P&0\\0&P}\pmatrix{A&0\\0&A}\pmatrix{P^{-1}&0\\0&P^{-1}}=\pmatrix{PAP^{-1}&0\\0&PAP^{-1}}
$$
in $M_4$. So if $P$ diagonalizes $A$... |
H: prove $ F(x)=\int_0^\infty {\sin(tx)\over(t+1)\sqrt t} dt \in C^\infty(\mathbb R^*) $
prove that :
$$ F(x)=\int_0^\infty {\sin(tx)\over(t+1)\sqrt t} \, dt \in C^\infty(\mathbb R^*) $$
I end up proving that $F(x)\in C^\infty(\mathbb R^{*+})$, not $\mathbb R^*$, and I studied the case with:
$F(x)= \sqrt x G(x)$ where $G(x)=\int_0^\infty {\sin(t)\over(t+x)\sqrt t} \, dt$ ($\sqrt x , G(x) \in C^\infty(R^{*+}))$
Is my answer wrong? Which is the correct domain for $C^\infty$?
AI: Assume $x > 0$ and write
$$ F(x) = \sqrt{x} G(x), \quad \text{where} \quad G(x) = \int_{0}^{\infty} \frac{\sin t}{(t + x)\sqrt{t}} \, dt. $$
The proof is divided into two steps, where we first improve the decaying behavior of the integral representation of $G(x)$ and then we prove that $G(x)$ is infinitely differentiable.
Step 1. We introduce the function
$$ A(x) = \int_{0}^{x} \frac{\sin t}{\sqrt{t}} \, dt = \frac{1-\cos x}{\sqrt{x}} + \frac{1}{2} \int_{0}^{x} \frac{1-\cos t}{t^{3/2}} \, dt. $$
Clearly $A(x)$ converges as $x \to \infty$. In particular, $A(x)$ is uniformly bounded on $[0, \infty)$. Then we have
\begin{align*}
G(x)
= \int_{0}^{\infty} \frac{\sin t}{(t + x)\sqrt{t}} \, dt
&= \int_{0}^{\infty} \frac{A'(t)}{t + x} \, dt
= \left[ \frac{A(t)}{t+x} \right]_{0}^{\infty} + \int_{0}^{\infty} \frac{A(t)}{(t + x)^{2}} \, dt \\
&= \int_{0}^{\infty} \frac{A(t)}{(t + x)^{2}} \, dt.
\end{align*}
Step 2. Now we introduce the notation
$$ G_{k}(x) = (-1)^{k} (k+1)! \int_{0}^{\infty} \frac{A(t)}{(t + x)^{k+2}} \, dt, $$
so that $G(x) = G_{0}(x)$. We claim that $G_{k}'(x) = G_{k+1}(x)$. Once this is proved, it follows from the induction that $G(x)$ is infinitely differentiable. Indeed, by the Mean Value Theorem,
$$ \frac{1}{h} \left( \frac{1}{(t + x + h)^{k+2}} - \frac{1}{(t + x)^{k+2}} \right) = -\frac{k+2}{(t + x + \theta h)^{k+3}} $$
for some $0 \leq \theta \leq 1$, with the choice of $\theta$ depending on $t$, $x$ and $h$. Thus for $h$ satisfying $0 < |h| \ll x$, we have
\begin{align*}
&\left| \frac{G_{k}(x+h) - G_{k}(x)}{h} - G_{k+1}(x) \right| \\
&= (k+1)! \left| \int_{0}^{\infty} A(t) \left\{ \frac{1}{h} \left( \frac{1}{(t + x + h)^{k+2}} - \frac{1}{(t + x)^{k+2}} \right) + \frac{k+2}{(t+x)^{k+3}} \right\} \, dt \right| \\
&= (k+2)! \left| \int_{0}^{\infty} A(t) \left( \frac{1}{(t+x)^{k+3}}-\frac{1}{(t + x + \theta h)^{k+3}} \right) \, dt \right|
\end{align*}
Since the integrand of the last integral is dominated by the integrable function
$$ |A(t)| \left( \frac{1}{(t+x)^{k+3}} + \frac{1}{(t + x - |h|)^{k+3}} \right), $$
we can apply the dominated convergence theorem to conclude that
$$ \lim_{h\to 0} \frac{G_{k}(x+h) - G_{k}(x)}{h} = G_{k+1}(x) $$
as desired. Therefore the proof is complete. |
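Added: to pass from $(0,\infty)$ to all of $\mathbb{R}^{*}$, as the original question asks, note that $F$ is odd: $F(-x) = \int_0^\infty \frac{\sin(-tx)}{(t+1)\sqrt{t}}\,dt = -F(x)$. Hence smoothness on $(0,\infty)$ immediately gives smoothness on $(-\infty,0)$ as well.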
H: Infinite series and its upper and lower limit.
I am learning analysis on my own and I am puzzled with the following question.
Consider the series $$\frac{1}{2}+\frac{1}{3}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{2^3}+\frac{1}{3^3}+\cdots$$ Indicate whether this series converges or diverges.
The following is what I was able to get.
First I noticed that $$a_n = \frac{1}{2^n}+\frac{1}{3^n}$$ would be a simple and nice form of this sequence, so according to the ratio test, $$\lim_{n \to \infty} \frac{1}{6} \left( \frac{3+2(\frac{2}{3})^n}{1+(\frac{2}{3})^n}\right) = \frac{1}{2}$$ therefore the series must converge.
However, I later understood the concept of upper an lower limits a little, so taking $$ a_n =
\begin{cases}
\frac{1}{2^n}, & \text{if $n$ is odd} \\
\frac{1}{3^{n-1}}, & \text{if $n$ is even} \\
\end{cases}$$ I figured out the following $$\lim_{n \to \infty} \frac{1}{2}\left(\frac{2}{3}\right)^n = 0$$ when $n$ is odd and $$\lim_{n \to \infty} \frac{1}{2}\left(\frac{3}{2}\right)^n = +\infty$$ when $n$ is even.
Through this experience I realized and really understood how a subsequence of a sequence may have multiple limits, and the supremum and the infimum of those limits give us the upper and lower limits, respectively. But the book does something different than I did, and I would like to have the following questions answered.
1),$\quad$ From my understanding, the ratio test tells us that $\Sigma a_n$ diverges when the upper limit is greater than 1. Clearly one of the limits goes to $+\infty$, so why cannot we conclude that the series diverge ?
2),$\quad$ I learned that by choice of the subsequence, the limit may or mayn't be different. In the case that it is different, how do we know what is the supremum and infimum? It would be clear that if they were $\pm \infty$ but in my case, why is it guaranteed that there is no other subsequence that has a negative limit ? Is there an algebraic way to guarantee them ? Or do we always have to conceptually understand all of the subsequential limits.
3),$\quad$ The book says the ratio test is inconclusive, which turns me back to question 1), but it used the ratio test to determine that the series converges. According to my even-odd wise definition of $a_n$, the "larger limit" (I am purposefully not saying supremum because I don't know how it's guaranteed that it is the supremum) using the ratio test can be found as $$\lim_{n \to \infty} \sqrt[n]{a_n} = \lim_{n \to \infty} \sqrt[n]{\frac{1}{2^n}} = {1 \over 2}$$ But the book says $$\lim_{n \to \infty} \sqrt[n]{a_n} = \lim_{n \to \infty} \sqrt[2n]{\frac{1}{2^n}} = {1 \over \sqrt{2}}.$$ Where did the $2n$ come from ?
Can someone help me out ?
Edit: Thanks for helping me out everyone! I really appreciate it.
AI: By pairing adjacent terms, you’ve actually changed the series with which you’re working: you’re working with $\sum_{n\ge 1}\left(\frac1{2^n}+\frac1{3^n}\right)=\sum_{n\ge 1}\frac{2^n+3^n}{6^n}$, the series whose $n$-th term is $\frac{2^n+3^n}{6^n}$. The ratio test works fine on this series:
$$\begin{align*}
\lim_{n\to\infty}\frac{\frac{2^{n+1}+3^{n+1}}{6^{n+1}}}{\frac{2^n+3^n}{6^n}}&=\lim_{n\to\infty}\frac{2^{n+1}+3^{n+1}}{6(2^n+3^n)}\\
&=\lim_{n\to\infty}\frac{2^{n+1}}{6(2^n+3^n)}+\lim_{n\to\infty}\frac{3^{n+1}}{6(2^n+3^n)}\\
&=\lim_{n\to\infty}\frac{2^n}{3(2^n+3^n)}+\lim_{n\to\infty}\frac{3^n}{2(2^n+3^n)}\\
&=\lim_{n\to\infty}\frac1{3\left(1+\left(\frac32\right)^n\right)}+\lim_{n\to\infty}\frac1{2\left(\left(\frac23\right)^n+1\right)}\\
&=0+\frac12\\
&=\frac12\;,
\end{align*}$$
which is essentially the calculation that you made. However, if you apply the ratio test to the original series, you find that if the $n$-th term is $a_n$, then
$$a_n=\begin{cases}
\frac1{2^{(n+1)/2}},&\text{if }n\text{ is odd}\\\\
\frac1{3^{n/2}},&\text{if }n\text{ is even}\;,
\end{cases}$$
so that
$$\begin{align*}
\frac{a_{n+1}}{a_n}&=\begin{cases}
\frac{1/3^{(n+1)/2}}{1/2^{(n+1)/2}},&\text{if }n\text{ is odd}\\\\
\frac{1/2^{(n+2)/2}}{1/3^{n/2}},&\text{if }n\text{ is even}
\end{cases}\\\\
&=\begin{cases}
\left(\frac23\right)^{(n+1)/2},&\text{if }n\text{ is odd}\\\\
\frac12\left(\frac32\right)^{n/2},&\text{if }n\text{ is even}\;.
\end{cases}
\end{align*}$$
Thus, $\lim_{n\to\infty}\frac{a_{n+1}}{a_n}$ does not exist: the terms with odd indices are approaching $0$, but those with even indices are increasing without bound. Since the limit does not exist, the ratio test is inconclusive.
The ratio test applied to your modified series gives the correct answer because the original series is absolutely convergent; this permits you to combine the adjacent terms as you did, but since you don’t know ahead of time that the series converges, you can’t make use of the fact to show that it converges. |
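Added, regarding question 3: the $2n$ comes from the position of the terms. The term $\frac{1}{2^k}$ sits at index $n = 2k-1$ of the original series, so the root test computes $\sqrt[2k-1]{2^{-k}} \approx \sqrt[2k]{2^{-k}} = 2^{-1/2}$; the book writes this loosely as $\sqrt[2n]{1/2^n}$. Indeed, for odd $n$ we get $a_n^{1/n} = 2^{-(n+1)/(2n)} \to \frac{1}{\sqrt{2}}$, and for even $n$ we get $a_n^{1/n} = 3^{-1/2}$, so $\limsup_n \sqrt[n]{a_n} = \frac{1}{\sqrt 2} < 1$ and the root test does prove convergence.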
H: Manipulating Vector Identity
I would like to expand then simplify (if possible) the following quantity.
$\nabla (a \cdot (C\, a))$
Where $a = a(x)$ is a vector valued function of $x$ and $C$ is a constant matrix.
AI: No need to use index notation here; use the chain rule.
$$\begin{align*}b \cdot \nabla [a \cdot \underline C(a)] = (b \cdot \nabla a) \cdot \nabla_a [a \cdot \underline C(a)]\end{align*}$$
Let's call $\underline a(b) = b \cdot \nabla a$ and abstract that out for a moment. $\underline a(b) \cdot \nabla_a [a \cdot \underline C(a)]$ can be attacked using the product rule.
$$\underline a(b) \cdot \nabla_a [a \cdot \underline C(a)] = ([\underline a(b) \cdot \nabla_a]a) \cdot \underline C(a) + a \cdot (\underline a(b) \cdot \nabla_a \underline C(a))$$
You get for the first term $\underline a(b) \cdot \underline C(a) = \overline a \underline C(a) \cdot b$, where $\overline a$ is the transpose. For the second term, you get $\overline{aC}(a) \cdot b$, so the result is
$$b \cdot \overline a(\underline C + \overline C)(a)$$
$b$ is arbitrary, and we can pull it out to get
$$\nabla(a \cdot \underline C(a)) = \overline a(\underline C + \overline C)(a) = \dot \nabla(\dot a \cdot [\underline C + \overline C](a))$$
The dots denote what is to be differentiated; the second $a$ should not be differentiated and should instead be held constant. In practice, I would compute the Jacobian transpose $\overline a$ in terms of a basis and just use that. |
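In matrix terms the result reads $\nabla(a\cdot Ca) = J^\top(C + C^\top)a$, where $J_{ij} = \partial a_i/\partial x_j$ is the Jacobian of $a$. A finite-difference check (my addition; the particular $a(x)$ and $C$ below are arbitrary choices):

```python
import numpy as np

def a(x):
    return np.array([x[0]**2, x[0]*x[1], np.sin(x[2])])

def jacobian(x, h=1e-6):
    # columns are the partial derivatives of a with respect to each coordinate
    return np.column_stack([(a(x + h*e) - a(x - h*e)) / (2*h) for e in np.eye(3)])

C = np.array([[1., 2., 0.], [0., 3., 1.], [4., 0., 1.]])
x0 = np.array([0.3, -1.2, 0.8])
f = lambda x: a(x) @ C @ a(x)
grad = np.array([(f(x0 + 1e-6*e) - f(x0 - 1e-6*e)) / 2e-6 for e in np.eye(3)])
J = jacobian(x0)
print(np.allclose(grad, J.T @ (C + C.T) @ a(x0), atol=1e-5))   # True
```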
H: Group equals union of two subgroups
Suppose $G=H\cup K$, where $H$ and $K$ are subgroups. Show that either $H=G$ or $K=G$.
What I did: For finite $G$, if $H\neq G$ and $K\neq G$, then $|H|,|K|\le |G|/2$. But they clearly share the identity element, so $H\cup K\neq G$.
How can I extend it to $G$ infinite?
AI: We show that if $H\cup K$ is a group, then either $H\subseteq K$ or $K\subseteq H$.
If neither $H\subseteq K$ nor $K\subseteq H$ is true, then there is an $h\in H$ which is not in $K$, and a $k\in K$ which is not in $H$.
If $H\cup K$ is a group, then the product $hk$ is in $H\cup K$. Thus the product is in $H$ or in $K$. If $hk\in H$, then $k\in H$, contradicting the choice of $k$. We get a similar contradiction if $hk\in K$. |
H: Set of homomorphisms form a group?
Given vector spaces $V, W$ over field $F$, the set of all linear maps $V \to W$ forms a vector space over $F$ under pointwise addition.
Is there an analogue for groups? Can the set of all homomorphisms from groups $G \to K$ be given a group structure?
AI: The set of all homomorphisms between two groups naturally forms a groupoid rather than a group; the objects of the groupoid are the homomorphisms and the morphisms are given by pointwise conjugation, so if $\varphi_1, \varphi_2 : G \to H$ are two homomorphisms then a morphism between them is an element $h \in H$ such that $\varphi_1(g) = h \varphi_2(g) h^{-1}$ for all $g \in G$.
This is a special case of the construction of functor categories, thinking of groups as one-object categories.
Some other relevant keywords from category theory: enriched category, preadditive category (what I would prefer to call an $\text{Ab}$-enriched category). The category $\text{Vect}$ of vector spaces is enriched over itself, as is the category $\text{Ab}$ of abelian groups, the category $\text{Cat}$ of (small) categories, and its subcategory $\text{Gpd}$ of groupoids.
I realize I've dodged the original question to some extent. There's a natural extra condition to ask for when you put additional structure on homsets, namely composition ought to respect that structure. So if you want to enrich the category of groups over itself, then it would be nice if composition
$$\text{Hom}(G, H) \times \text{Hom}(H, K) \to \text{Hom}(G, K)$$
were a group homomorphism. But this implies that $\text{Hom}(G, G)$ comes equipped with two monoid structures, one given by the enrichment and one given by composition, which satisfy the compatibility relation from the Eckmann-Hilton argument. It then follows that the monoid structures must be isomorphic and commutative, but now it follows that every group homomorphism $G \to G$ is an isomorphism and also that $\text{Aut}(G)$ is always commutative, both of which are clearly false.
(So why can we enrich the category of abelian groups over itself? The reason is that we give $\text{Ab}$ a different monoidal structure, namely the tensor product, which changes the condition we want on composition to
$$\text{Hom}(A, B) \otimes \text{Hom}(B, C) \to \text{Hom}(A, C)$$
being a group homomorphism, which can happen. Now the compatibility relation on $\text{Hom}(A, A)$ coming from its two monoid structures turns it into a ring. But the category of groups, unlike the category of abelian groups, doesn't have a natural notion of tensor product.) |
H: For any arrangment of numbers 1 to 10 in a circle, there will always exist a pair of 3 adjacent numbers in the circle that sum up to 17 or more
I set out to solve the following question using the pigeonhole principle
Regardless of how one arranges numbers $1$ to $10$ in a circle, there will always exist a pair of three adjacent numbers in the circle that sum up to $17$ or more.
My outline
[1] There are $10$ triplets consisting of adjacent numbers in the circle, and since each number appears thrice, the total sum of these adjacent triplets for all permutations of the number in the circle, is $3\cdot 55=165$.
[2] If we consider that all the adjacent triplets sum to 16 , and since there are $10$ such triplets, the sum accordingly would be $160$, but we just said the invariant sum is $165$ hence there would have to be a triplet with sum of $17$ or more.
My query
Could someone polish this into a mathematical proof and also clarify if I did make use of the pigeonhole principle.
AI: Yes, you used the Pigeonhole Principle. As a very mild correction, you should say that if all sums of three consecutive numbers are $\le 16$, then the total sum is $\le 160$.
The proof (with the small correction) is already fully mathematical. Conceivably you might want to explain the $55$ further. It is clear to you and to most users of this site why $55$, but imagine the reader to be easily puzzled.
The part about "all the permutations" is vague, and technically incorrect. You are finding the sum of a consecutive triple, and summing all these sums. Permutations have nothing to do with it, we are talking about a particular fixed arrangement of the numbers.
Remark: We could use a lot of symbols. Starting at a particular place, and going counterclockwise, let our numbers be $a_1,a_2,\dots,a_{10}$. For any $i$, where $1\le i\le 10$, let $S_i=a_i+a_{i+1}+a_{i+2}$, where we use the convention that $a_{10+1}=a_1$, $a_{10+2}=a_2$, and $a_{9+2}=a_1$.
Then $S_1+S_2+\cdots+S_{10}=165$, since each of $1$ to $10$ appears in $3$ of the $S_i$, and $1+2+\cdots+10=55$.
But if all the $S_i$ are $\le 16$, then $\sum_{i=1}^{10}S_i\le 160$. This contradicts the fact that $\sum_{i=1}^{10}S_i=165$.
I prefer your proof, mildly modified. |
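The claim can also be confirmed by brute force (my addition; fixing $1$ in the first position quotients out rotations, which do not change the triple sums):

```python
from itertools import permutations

ok = True
for rest in permutations(range(2, 11)):
    circle = (1,) + rest
    if max(circle[i] + circle[(i+1) % 10] + circle[(i+2) % 10]
           for i in range(10)) < 17:
        ok = False
        break
print(ok)   # True: every arrangement has three adjacent numbers summing to 17 or more
```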
H: Help Understanding Complex Roots
I was reading a graphical explanation of complex roots, and between Figures 7 and 8 I became confused.
The roots appear in the imaginary plane, but I don't understand why the original function must be inverted before the graphical representation fits. I realize that inversion is necessary for the graphical representation of imaginary roots to fit, but the lack of explanation bothers me. I would like to understand this inversion better than "we do it because it fits the data."
AI: Comment: In general, a nonzero quadratic has the form $y = A(x-a)^2+b^2$ where $A,a,b$ are real constants, but I set $A=1$ for the sake of clarity.
The given graph extends $y=f(x) = (x-a)^2+b^2$ to $y=\operatorname{Re}(f(z)) = \operatorname{Re}((z-a)^2+b^2)$, where $z = x+it$ is a complex variable. The graph shows all points in $(t,x,y)$ space for which the equation $y=\operatorname{Re}(f(z))$ is satisfied. In particular,
$$ y = Re((x+it-a)^2+b^2) = (x-a)^2-t^2+b^2 $$
Considering $x=a$ which is a vertical plane in the pictures,
$$ y = -t^2+b^2 $$
This is a parabola which opens downward in the $x=a$ plane. It has $y=0$ when $t = \pm b$. This is why the graph opens downward. On the other hand, the plane $t=0$ (otherwise known as the $xy$-plane) we have $y = (x-a)^2+b^2$ which is a parabola with vertex $(a,b^2)$ which opens upward with no $x$-intercept.
The function is not inverted; it is extended to a complex domain, and in this larger context the three-dimensional graph looks as pictured. However, beware: the real picture casts $y = \operatorname{Re}(f(x+it))$ as a surface. It is one equation in three variables, which gives two free parameters at most points of its solution set. The picture you post focuses on two interesting curves on this surface (it has parametrization $X(u,v) = (u,v,\operatorname{Re}(f(u+iv)))$).
Also, I was tempted to say it was the graph of $y = f(x+it)$, but that is more difficult to graph directly... think about it. |
H: Non-self-mapping automorphism implies abelian
Suppose $\sigma\in\text{Aut}(G)$. If $\sigma^2=1$ and $x^{\sigma}\neq x$ for $1\neq x\in G$, show that if $G$ is finite, it must be abelian.
There's a hint to show that the set $\{x^{-1}x^{\sigma}\mid x\in G\}$ is the whole group $G$. I have already proved this hint ($x^{-1}x^{\sigma} = y^{-1}y^{\sigma}\rightarrow (xy^{-1})^{\sigma} = xy^{-1}\rightarrow x=y$ since $x^{\sigma}=x$ only for $x=1$), but I could not use it to solve the problem.
AI: For each $x\in G$ we have $(x^{-1}x^\sigma)^\sigma=(x^{-1})^\sigma x=(x^{-1}x^\sigma)^{-1}$, but $x^{-1}x^\sigma$ runs over $G$ as $x$ does, so for each $x\in G$ we have $x^\sigma=x^{-1}$. Thus, for any $x,y\in G$ we have
$$(xy)^\sigma=x^\sigma y^\sigma=x^{-1}y^{-1}=(yx)^{-1}=(yx)^\sigma$$
and hence $xy=yx$. |
H: If $x$ and $y$ are in this given sequence, can $2^x+2^y+1$ be prime?
The sequence:
$3, 11, 13, 17, 19, 29, 37, 41, 53, 59, 61, 67, 83, 97, 101, 107, 113, 131, 137, 139, 149, 163, 173, 179, 181, 193, 197, 211, 227, 257, 269, 281, 293, 313, 317, 347, 349, 353, 373, 379, 389, 401, 409, 419, 421, 443, 449, 461, 467, 491, 509, 521, 523, 541, 547, 557, 563, 569, 577, 587, 593, 613, 617, 619, 653, 659, 661, 677, 701, 709, 757, 761, 769, 773, 787, 797, 809, 821, 827, 829, 853, 857, 859, 877, 883, 907, 929$...
$n = 2^x + 2^y + 1$
$x,y$ in this sequence.
Is there a case where $n$ is prime, with $x,y$ taken from this sequence?
AI: Such pairs do exist, and they are precisely the columns of the following matrix:
$$\left(
\begin{array}{ccccccccccccccc}
3 & 3 & 11 & 37 & 53 & 59 & 179 & 179 & 197 & 227 & 353 & 421 & 449 & 467 & 853 \\
37 & 317 & 17 & 67 & 83 & 797 & 293 & 509 & 227 & 509 & 587 & 787 & 659 & 653 & 907
\\
\end{array}
\right)$$
The minimal prime obtained in this way is, by inspection, $2^{11}+2^{17}+1=133121$.
Edit: I forgot to check the diagonal! That adds the single pair $(3,3)$, corresponding to $$2^3+2^3+1=17.$$
This brings us to a total of $16$ solutions. |
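For reference, here is a sketch of how the search can be reproduced (my addition; it assumes sympy is available, and the large exponents mean it takes a little while):

```python
from itertools import combinations_with_replacement
from sympy import isprime

seq = [3, 11, 13, 17, 19, 29, 37, 41, 53, 59, 61, 67, 83, 97, 101, 107, 113,
       131, 137, 139, 149, 163, 173, 179, 181, 193, 197, 211, 227, 257, 269,
       281, 293, 313, 317, 347, 349, 353, 373, 379, 389, 401, 409, 419, 421,
       443, 449, 461, 467, 491, 509, 521, 523, 541, 547, 557, 563, 569, 577,
       587, 593, 613, 617, 619, 653, 659, 661, 677, 701, 709, 757, 761, 769,
       773, 787, 797, 809, 821, 827, 829, 853, 857, 859, 877, 883, 907, 929]

pairs = [(x, y) for x, y in combinations_with_replacement(seq, 2)
         if isprime(2**x + 2**y + 1)]
print(len(pairs))                                # expect 16, per the answer above
print(min(2**x + 2**y + 1 for x, y in pairs))    # 17, from the pair (3, 3)
```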
H: How to find remainder?
$$a\equiv r \pmod{r+1} \ \ \forall r\in\{2,3,4,\dots,9\}$$
Then how do we find $x$ if $$a\equiv x\pmod{11}$$
I get $$2a\equiv 9\pmod{11}$$ but that does not help.
Please keep the solution simple; I don't know number theory.
The above is the crux what I got from the question:
Let $n_1,n_2,\dots$ be an increasing sequence of natural numbers, each of which leaves remainder $r$ when divided by $r+1$ for every $r\in\{2,3,\dots,9\}$. Find the remainder when $n_{2008}$ is divided by $11$.
AI: $$2a\equiv9\pmod {11}\equiv -2\implies a\equiv-1\pmod {11}\equiv10$$
Alternately, as $2\cdot 6=12\equiv1\pmod{11}\implies 2^{-1}\equiv6$
$2a\equiv 9\pmod{11}\implies a\equiv2^{-1}\cdot9\equiv 6\cdot9\equiv 10\pmod {11} $
If $a\equiv r\pmod{r+1}\equiv-1\implies (r+1)$ divides $a+1 \ \ \forall r\in\{2,3,4,\dots,9\} $
If $m_i$ divides $b,$ we know lcm$(m_i)$ will divide $b$
EDIT: to answer the changed question
Here lcm$(3,4,\cdots,9,10)=5\cdot 8\cdot7\cdot 9=2520$
So, $n_i\equiv-1\pmod{2520}$, i.e., $n_i=2520c_i-1$ for some integers $c_i$
For $n_i$ to be a natural number, $n_i=2520c_i-1>0\implies c_i>0\implies (c_i)_{\text{min}}=1$
So, if we choose, any arbitrary set of increasing positive integers for $c_i,$ we shall get an increasing sequence of natural numbers satisfying the given condition and solution will depend on the choice of $c_i$s
Now if we take $c_i=i$ for $i\ge1,$
$n_{2008}=2520\cdot 2008-1$
$\equiv1\cdot6-1\pmod {11}$ as $2520\equiv1\pmod {11}$ and $2008\equiv6\pmod {11}$
$\implies n_{2008}\equiv5\pmod{11}$ |
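With this choice of $c_i$, the answer is easy to verify directly (my addition, plain Python):

```python
n = 2520 * 2008 - 1
print(all(n % (r + 1) == r for r in range(2, 10)))   # True: remainder r mod r+1
print(n % 11)                                        # 5
```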
H: $A\ne 0:V\to V$ be linear,real vec space $V$
Let $A\ne 0:V\to V$ be linear, where $V$ is a real vector space with $\dim V=n$; let $V_0=A(V)$ with $\dim V_0=k<n$, and suppose that $A^2=\lambda A$ for some $\lambda\in\mathbb{R}$.
Then
$\lambda=1$
$|\lambda|^n=1$
$\lambda$ is the only eigen value of $A$
There is a subspace $V_1\subseteq V$ such that $Ax=0\forall x\in V_1$
$4$ is definitely true, as I can choose $V_1$ to be the kernel of $A$, which is a non-trivial subspace of $V$. Assuming $\lambda\ne 0$, the polynomial $x(x-\lambda)$ annihilates $A$, so the only possible eigenvalues are $0$ and $\lambda$ (with some multiplicities); hence $3$ is false. I have no idea about $1$ and $2$; could anyone tell me?
AI: Both (1) and (2) are not true.
We'll choose:
$n=2 $
$ A=\left(\begin{array}{cc}
2 & 0\\
0 & 0
\end{array}\right) $
$ V = Sp\left\{(1,0),(0,1) \right\} $
then:
$ \dim(V)=2 $
$ \dim(AV)=1<2 $ (an easy multiplication)
and we get:
$ A^{2}=2A $, so $\lambda=2$. Therefore both statements are false: $\lambda\ne1$ and $|\lambda|^n=4\ne1$.
H: this is regarding exponentials distribution
In an office building, the lift breaks down randomly at a mean rate of 3 times per week. The random variable X represents the time in days between successive lift breakdowns.
(i) Calculate the probability that the time interval between successive lift breakdowns is
between 2 and 3 days.
(ii) Find the probability that, after a breakdown has just occurred, at least 1 week will pass
without another breakdown occurring.
AI: The time $X$ between breakdowns, measured in days, has mean $\frac{7}{3}$, so we are dealing with an exponential with parameter $3/7$, and hence density function $\frac{3}{7}e^{-3x/7}$, for $x\ge 0$.
We want the probability that $2\le X\le 3$. This probability is
$$\int_{2}^{3} \frac{3}{7}e^{-3x/7}\,dx.$$
Now it is just a matter of integrating.
The second question is in a sense simpler. We want the probability that $X\ge 7$. This is
$$\int_{7}^{\infty} \frac{3}{7}e^{-3x/7}\,dx.$$
Remark: In some courses, for people with little knowledge of calculus, people are just told that if $X$ has exponential distribution with parameter $\lambda$, then $\Pr(X\le x)=F_X(x)=1-e^{-\lambda x}$ (for $x\ge 0$).
In such a course, one would note that the probability that $X$ is between $a$ and $b$, where $0\le a\lt b$ is $F_X(b)-F_X(a)$. This is $(1-e^{-\lambda b})-(1-e^{-\lambda a})$, which simplifies to $e^{-\lambda a}-e^{-\lambda b}$.
Similarly, the probability that $X\gt a$ is $e^{-\lambda a}$. |
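Numerically (my addition, plain Python):

```python
import math

lam = 3 / 7                           # breakdowns per day
cdf = lambda x: 1 - math.exp(-lam * x)
print(cdf(3) - cdf(2))                # (i)  P(2 <= X <= 3) ≈ 0.148
print(math.exp(-lam * 7))             # (ii) P(X >= 7) = e^{-3} ≈ 0.0498
```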
H: The value of a limit of a power series: $\lim\limits_{x\rightarrow +\infty} \sum_{k=1}^\infty (-1)^k \left(\frac{x}{k} \right)^k$
What is the answer to the following limit of a power series?
$$\lim_{x\rightarrow +\infty} \sum_{k=1}^\infty (-1)^k \left(\frac{x}{k} \right)^k$$
AI: A simple calculation shows that
\begin{align*}
\sum_{k=1}^{\infty} (-1)^{k} \left( \frac{x}{k} \right)^{k}
&= \sum_{k=1}^{\infty} \frac{(-1)^{k} x^{k}}{(k-1)!} \int_{0}^{\infty} t^{k-1} e^{-kt} \, dt
= \int_{0}^{\infty} \sum_{k=1}^{\infty} \frac{(-1)^{k} x^{k} t^{k-1} e^{-kt}}{(k-1)!} \, dt \\
&= -x \int_{0}^{\infty} \exp \left\{ - t \left( 1 + x e^{-t} \right) \right\} \, dt
= - \int_{0}^{1} x \cdot u^{x u} \, du,
\end{align*}
where $u = e^{-t}$. Now we claim that
$$ \lim_{x\to\infty} \int_{0}^{1} x \cdot u^{x u} \, du = 1. $$
To find the limit, we prove the following lemma:
Lemma. Let $f : [0, \delta] \to [0, 1]$ be a measurable function. Suppose there exist $A > B > 0$ such that
$$ 1 - Ax \leq f(x) \leq 1 - Bx. $$
Then we have
$$ \frac{1}{A} \leq \liminf_{x\to\infty} \left( x \int_{0}^{\delta} f(t)^{x} \, dt \right) \leq \limsup_{x\to\infty} \left( x \int_{0}^{\delta} f(t)^{x} \, dt \right) \leq \frac{1}{B}. $$
Assume this lemma holds. Let $f(u) = u^{u}$. Then we observe that
$f(u)$ decreases for $[0, 1/e]$ and increases for $[1/e, 1]$.
For any small $\epsilon > 0$, there exists small $\delta > 0$ such that
$$ 1 - (1+\epsilon)(1-u) \leq f(u) \leq 1 - (1-\epsilon)(1-u) $$
for $0 < u < \delta$.
For any large $M > 0$, we can choose small $\delta > 0$ such that
$$f'(u) = u^{u}(1 + \log u) \leq -M$$
for $0 < u < \delta$. In particular, $f(u) \leq 1 - Mu$ on this range.
Let $\epsilon > 0$ be small and $M > 0$ be large. Let $\delta > 0$ be a sufficiently small number satisfying the conditions simultaneously. Then we have
$$ 0 \leq x \int_{\delta}^{1-\delta} u^{xu} \, du \leq x \max\{ f(\delta)^{x}, f(1-\delta)^{x} \} \xrightarrow{x\to\infty} 0. $$
Also, Lemma shows that
$$ \frac{1}{1+\epsilon} \leq \liminf_{x\to\infty} \left( x \int_{1-\delta}^{1} u^{xu} \, du \right) \leq \limsup_{x\to\infty} \left( x \int_{1-\delta}^{1} u^{xu} \, du \right) \leq \frac{1}{1-\epsilon} $$
and
$$ 0 \leq \limsup_{x\to\infty} \left( x \int_{0}^{\delta} u^{xu} \, du \right) \leq \frac{1}{M}. $$
Putting together, we have
$$ \frac{1}{1+\epsilon} \leq \liminf_{x\to\infty} \left( x \int_{0}^{1} u^{xu} \, du \right) \leq \limsup_{x\to\infty} \left( x \int_{0}^{1} u^{xu} \, du \right) \leq \frac{1}{1-\epsilon} + \frac{1}{M}. $$
Therefore, letting $M \to \infty$ and $\epsilon \to 0^{+}$, we obtain
$$ \lim_{x\to\infty} \left( x \int_{0}^{1} u^{xu} \, du \right) = 1 $$
as desired.
Proof of Lemma. For any $0 < \eta < \delta$, we have
$$ 0 \leq x \int_{\eta}^{\delta} f(t)^{x} \, dt \leq x \int_{\eta}^{\delta} \max \{ 1- B\eta, 0 \}^{x} \, dt \leq \max \{ x \delta (1- B\eta)^{x}, 0 \} \xrightarrow[]{x\to\infty} 0. $$
Thus we may assume that $\delta$ is sufficiently small so that $1 - A\delta \geq 0$. Then
\begin{align*}
x \int_{0}^{1/A} (1 - At)^{x} \, dt + o(1)
&= x \int_{0}^{\delta} (1 - At)^{x} \, dt \\
&\leq x \int_{0}^{\delta} f(t)^{x} \, dt \\
&\leq x \int_{0}^{\delta} (1 - Bt)^{x} \, dt \leq x \int_{0}^{1/B} (1 - Bt)^{x} \, dt + o(1).
\end{align*}
Evaluating, we obtain
$$ \frac{x}{A(x+1)} + o(1) \leq x \int_{0}^{\delta} f(t)^{x} \, dt \leq \frac{x}{B(x+1)} + o(1), $$
proving the lemma. |
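Putting everything together: by Step 1 the series equals $-\int_0^1 x\, u^{xu}\,du$, and we have just shown that $x\int_0^1 u^{xu}\,du \to 1$. Hence
$$ \lim_{x\to+\infty} \sum_{k=1}^{\infty} (-1)^{k} \left( \frac{x}{k} \right)^{k} = -1. $$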
H: 2-colorable belongs to $\mathsf P$
To show that 2-colorable belongs to $\mathsf P$, I have a straightforward mental description in mind that I don't think will be considered as a formal proof. Hence I am interested to know how this must be said as an answer to this question. Here's what I think: Say we have 2 colors and n vertices. 2-coloring will be applied such that no two adjacent vertices have the same color. If a vertex is in color A, the other is B and so on. Is my reasoning correct?
Appendix: A graph G is said to be k-colourable if and only if a k-colouring of G exists.
A k-colouring is an assignment of k colours to the vertices of a graph G such that no edge joins two vertices of the same colour.
AI: You have the idea, but "the other" what?
One should present a more explicit algorithm and then investigate if it is polynomial. So, to elaborate your idea
Prepare an empty queue $Q$ and assign colour $0$ to each vertex.
If $Q$ is empty go to step 7; otherwise pop $(v,c)$ from $Q$
If the current colour of $v$ is $c$, go to step 2; if it is $-c$, the algorithm terminates: No 2-colouring exists
Assign colour $c$ to vertex $v$
Enumerate all neighbours of $v$ (that is all edges $vw$ with one end $v$). For each neighbour $w$ of $v$, push $(w,-c)$ to $Q$
Go to step 2.
Select any vertex $x$ that has not been coloured yet. If no such $x$ exists the algorithm terminates and the graph is 2-coloured.
push $(x,+1)$ to the queue $Q$ and go to step 2.
Considering memory, a little refinement of the above can get along with one bit plus one pointer per vertex to realize the queue. Can you determine the time complexity of the algorithm? |
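Here is a direct translation of the algorithm into Python (my addition; `adj` maps each vertex to its neighbour list). Each vertex is coloured once and each edge is examined a bounded number of times, so the running time is $O(V+E)$, which is polynomial as required:

```python
from collections import deque

def two_colourable(adj):
    colour = {v: 0 for v in adj}          # step 1: 0 means "uncoloured"
    for start in adj:                     # step 7: pick an uncoloured vertex
        if colour[start]:
            continue
        q = deque([(start, 1)])           # step 8
        while q:                          # steps 2-6
            v, c = q.popleft()
            if colour[v] == c:
                continue
            if colour[v] == -c:
                return False              # step 3: no 2-colouring exists
            colour[v] = c                 # step 4
            for w in adj[v]:              # step 5
                q.append((w, -c))
    return True

print(two_colourable({1: [2], 2: [1, 3], 3: [2]}))        # True  (a path)
print(two_colourable({1: [2, 3], 2: [1, 3], 3: [1, 2]}))  # False (a triangle)
```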
H: Ask book to deeply understand partially ordered sets
I have learned a little about orderings and posets before, but I don't think it's enough, and I want to learn more about them. Can anyone please recommend some good books on this topic? I really appreciate it. Thanks so much.
AI: E. Harzheim, Ordered Sets, Springer Verlag, 2005.
H: I am confused with the def of derivative.
The problem given :
Let $f:\Bbb R\rightarrow \Bbb R$ and $F:\Bbb R^2\rightarrow \Bbb R$ be differentiable and satisfy $F(x,f(x))=0$ and $\displaystyle \frac {dF}{dy}$ is not zero.
Prove that $f'(x)= -\dfrac {dF}{dx}/\frac {dF}{dy}$ where $y=f(x)$
I am so confused here.
Since $F(x,f(x))=0$, I think, $\displaystyle \frac {dF}{dx}$ must be $0$.
But when I see the $f'(x)$, it seems not.
Is there any difference between $\displaystyle \frac {dF(x,f(x))}{dx}$ and $\displaystyle \left.\frac {dF}{dx} \right|_{(x,f(x))}$ ?
AI: Taking $f(x) = y$, note that by the chain rule:
$$0 = \frac{dF}{dx} = \frac{\partial F}{\partial x} + \frac{\partial F}{\partial y} \frac{dy}{dx}$$
The result you want follows immediately from this observation.
So to recap:
You are correct in supposing that $dF / dx = 0$. This only works because we are taking the total derivative: it emphatically does not mean that $\partial F / \partial x = 0$. |
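A concrete check with sympy (my addition; $F(x,y)=x^2+y^2-1$ and $f(x)=\sqrt{1-x^2}$ are a hypothetical choice satisfying $F(x,f(x))=0$):

```python
import sympy as sp

x, y = sp.symbols('x y')
F = x**2 + y**2 - 1
f = sp.sqrt(1 - x**2)                  # upper branch: F(x, f(x)) = 0
lhs = sp.diff(f, x)
rhs = -(sp.diff(F, x) / sp.diff(F, y)).subs(y, f)
print(sp.simplify(lhs - rhs))          # 0, confirming f'(x) = -F_x / F_y
```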
H: A simple riddle related to addition of odd numbers
I'm not sure if this type of question can be asked here, but if it can then here goes:
Is it possible to get to 50 by adding 9 positive odd numbers? The odd numbers can be repeated, but they should all be positive numbers and all 9 numbers should be used.
PS : The inception of this question is a result of a random discussion that I was having during the break hour :)
AI: A direct approach:
Any given integer is either odd or even. If $n$ is even, then it is equal to $2m$ for some integer $m$; and if $n$ is odd, then it is equal to $2m+1$ for some integer $m$. Thus, adding up nine odd integers looks like
$${(2a+1)+(2b+1)+(2c+1)+(2d+1)+(2e+1)\atop +(2f+1)+(2g+1)+(2h+1)+(2i+1)}$$
(the integers $a,b,\ldots,i$ may or may not be the same).
Grouping things together, this is equal to
$$2(a+b+c+d+e+f+g+h+i+4)+1.$$
Thus, the result is odd.
A simpler approach would be to prove these three simple facts:
$$\begin{align*}
\mathsf{odd}+\mathsf{odd}&=\mathsf{even}\\
\mathsf{odd}+\mathsf{even}&=\mathsf{odd}\\
\mathsf{even}+\mathsf{even}&=\mathsf{even}
\end{align*}$$
Thus, starting from
$$\mathsf{odd}+\mathsf{odd}+\mathsf{odd}+\mathsf{odd}+\mathsf{odd}+\mathsf{odd}+\mathsf{odd}+\mathsf{odd}+\mathsf{odd}$$
and grouping into pairs,
$$\mathsf{odd}+(\mathsf{odd}+\mathsf{odd})+(\mathsf{odd}+\mathsf{odd})+(\mathsf{odd}+\mathsf{odd})+(\mathsf{odd}+\mathsf{odd})$$
we use our facts to see that this is
$$\mathsf{odd}+\mathsf{even}+\mathsf{even}+\mathsf{even}+\mathsf{even}.$$
Grouping again,
$$\mathsf{odd}+(\mathsf{even}+\mathsf{even})+(\mathsf{even}+\mathsf{even})$$
becomes
$$\mathsf{odd}+\mathsf{even}+\mathsf{even}$$
becomes
$$\mathsf{odd}+(\mathsf{even}+\mathsf{even})=\mathsf{odd}+\mathsf{even}=
\mathsf{odd}$$ |
H: Quotient of abelian groups of rank $2$
Let $A, B$ be abelian groups, $B$ is contained in $A$, both $A$ and $B$ are assumed to have rank $2$. Is there a standard way to show that the quotient group $A/B$ is finite? I think there exists some general theorem about modules finitely generated over a PID, but i can't find the precise theorem i need to apply, so even a simple reference would be very appreciated. Thank you.
AI: I think the theorem you want is that there is a generating set $x,y$ for $A$ and there are positive integers $e$ and $f$ such that $ex,fy$ is a generating set for $B.$ Then $[A:B] = ef$. This should be proved (in somewhat greater generality) in any algebra text that treats modules over a PID.
H: trigonometric identity related to $ \sum_{n=1}^{\infty}\frac{\sin(n)\sin(n^{2})}{\sqrt{n}} $
As homework I was given the following series to check for convergence:
$ \displaystyle \sum_{n=1}^{\infty}\dfrac{\sin(n)\sin(n^{2})}{\sqrt{n}} $
and the tip was "use the appropriate identity".
I'm trying to use Dirichlet's test and show that it's the product of a null monotonic sequence and a bounded series, but I can't figure out which trig. identity is needed.
Can anyone point me towards the right direction?
Many thanks.
AI: Hint: You can show that
$$
\sum\limits_{n=1}^N\sin(n)\sin(n^2)=\frac{1}{2}(1-\cos(N^2+N))
$$
To do this use identity
$$
\sin(\alpha)\sin(\beta)=\frac{1}{2}(\cos(\alpha-\beta)-\cos(\alpha+\beta))
$$ |
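Added: with this identity, $\sin(n)\sin(n^2)=\frac12\big(\cos(n^2-n)-\cos(n^2+n)\big)$, and since $n^2+n=(n+1)^2-(n+1)$ the sum telescopes, which gives the closed form above. The partial sums are therefore bounded by $1$, and Dirichlet's test applies with the monotone null sequence $1/\sqrt{n}$.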
H: What is the difference between a surface and the graph of a function?
When I was studying a book, Elementary classical analysis (Jerrold.E.Marsden), there was a confusing sentence.
"The unit sphere $x^2+y^2+z^2=1$ in $\mathbb R^3$ is a surface of the form $F(x,y,z)=c$ which is not the graph of a function."
What does this sentence mean?
Is there any relevance between above sentence and following problems?
1). find the tangent plane to the graph of $z=x^2 + y^2$ at $(0,0)$
2). Find the equation of the tangent plane to the surface $x^2 - y^2 + xyz = 1$ at $(1,0,1)$
I think I have some confusion between a surface and a graph. right?
please help me~
AI: The equation $x^2 + y^2 + z^2 = 1$ has two solutions for every $(x,y)$ satisfying $x^2 + y^2 < 1$. This means that we can't find a function $f$, such that the sphere is the graph $z = f(x,y)$. (A function has only one value for every choice of variables.)
A similar argument shows that the sphere is not the graph of a function of any other pair of the variables either.
This is similar to the one-variable setting: the circle $x^2 + y^2 = 1$ is not the graph of a function of $x$ or a function of $y$. Go through this simpler example first. |
H: Given a spanning set, what is the span of the 'transpose' of the set?
Given $$sp\left \{
\begin{pmatrix}
a_1\\
a_2\\
a_3
\end{pmatrix}
,\begin{pmatrix}
b_1\\
b_2\\
b_3
\end{pmatrix}
,\begin{pmatrix}
c_1\\
c_2\\
c_3
\end{pmatrix}
\right \} = \mathbb{R}^3$$
What is
$$sp\left \{
\begin{pmatrix}
a_1\\
b_1\\
c_1
\end{pmatrix}
,\begin{pmatrix}
a_2\\
b_2\\
c_2
\end{pmatrix}
,\begin{pmatrix}
a_3\\
b_3\\
c_3
\end{pmatrix}
\right \}\subseteq \mathbb{R}^3$$
I'm thinking of row spaces and column spaces, but without much luck; a leading hint would be appreciated more than the final answer.
AI: Hint: The row rank and the column rank of a matrix is the same.
Another solution
Hint: A set of $n$ vectors spans $\mathbb{R}^n$ if and only if the determinant of the matrix having them as columns is non-zero.
Hint: The determinant of matrix $A$ is equal to the determinant of matrix $A^t$. |
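To illustrate the hints numerically (a sketch assuming NumPy; the sample vectors are arbitrary):

```python
import numpy as np

# Put the three spanning vectors as columns of A; the "transposed" triple
# consists of the columns of A^T. Since det(A) = det(A^T), one triple spans
# R^3 exactly when the other does.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 0.0, 1.0]])
print(np.linalg.det(A), np.linalg.det(A.T))                   # equal, nonzero
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))   # both 3
```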
H: Prime divisibility in a prime square bandtwidth
I am seeking your support for proving (or fail) formally the following homework:
Let $p_j\in\Bbb P$ a prime, then any $q\in\Bbb N$ within the interval $p_j<q<p_j^2$ is prime, if and only if all:
$$p_{i}\nmid q$$ with $1\le i<j$
There should be a simple sieving argument behind this that I cannot quite catch.
I hope I got it now correct.
Your help is welcome
AI: The condition $q \nmid p_{j-1}$ (from an earlier version of the question) is absurd, since $q > p_{j-1}$.
So let's assume that we flip the order.
Even then, the statement is not true if only the single prime $p_{j-1}$ is used: for $p_4 = 7$, $q = 8$ is a counterexample, since $5 \nmid 8$ yet $8$ is not prime.
I believe that the statement you want is:
$q$ is a prime if and only if $ p_i \nmid q$ for all values of $i<j$.
This is a true statement that is easy to show.
Hint: If $q$ is not a prime number, then it must have a prime factor that is at most $\sqrt{q}$, and $\sqrt{q} < p_j$. If $q$ is a prime, then clearly no smaller prime divides it.
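The criterion is also easy to confirm by brute force (a minimal Python sketch; the choice $p_j = 11$ is arbitrary):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

p_j = 11
smaller_primes = [p for p in range(2, p_j) if is_prime(p)]  # the p_i with i < j

for q in range(p_j + 1, p_j**2):
    passes_sieve = all(q % p for p in smaller_primes)
    assert passes_sieve == is_prime(q)  # criterion matches actual primality

print("Criterion verified for all q with p_j < q < p_j^2.")
```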
H: Simple Expected Value Exercise
Question: We have $9$ coins, $1$ of them is false (lighter). We divide them up in pairs (with one left) and weigh them (that is taking two in a balance and seeing if one of them is lighter). What is the expected value of number of weighings to find the false one?
I suspect this is a rather elementary problem, yet whenever I solve these (or rather, when I think I have solved them), I am never sure that my result is correct.
My solution The probability of the false coin being in any pair is $2/9$, therefore the probability $\mathbb{P}(X=t)$ that the false coin will be found is $2/9$ for $t=1,2,3$ and $3/9$ for $t=4$ since if it didn't show up in the last weighing, it must be the one left. Therefore:
$$\mathbb{E}[X]=\frac{2}{9}(1+2+3)+4\frac{3}{9}=\frac{24}{9}$$
Is that correct and if not, why?
Thanks for any help!
AI: Your solution is correct for the technique you're using to find the coin.
The nine coins problem is a classic puzzle; it's usually phrased as a challenge to find the light coin in as few weighings as possible.
Your strategy is not optimal for that problem.
Instead, for your first weighing put three coins on each side of the scale.
You then have three groups of three coins, and the first weighing tells you which group the light coin is in. A second weighing (one coin of that group on each pan) then finds it.
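Both the asker's computation and the comparison can be checked by direct enumeration (a hedged Python sketch using exact fractions):

```python
from fractions import Fraction

# Pairing strategy: weigh pairs (1,2), (3,4), (5,6), (7,8) in order; if the
# fake coin is in pair t, it is found at weighing t. If it survives the
# first three weighings (3 of the 9 positions), 4 weighings are used in all.
E_pairs = sum(Fraction(2, 9) * t for t in (1, 2, 3)) + Fraction(3, 9) * 4
print(E_pairs)  # 8/3, i.e. 24/9, matching the asker's answer

# Three-group strategy: the first weighing isolates a group of 3, the second
# finds the coin, so exactly 2 weighings always suffice.
```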
H: Quasicompact over affine scheme
Let $X$ be a scheme and $f : X \rightarrow \mathrm{Spec}\, A$ a quasicompact morphism. Are there any easy conditions on $A$ under which we can say that $X$ is quasicompact?
Quasicompact morphism means only that there is an affine cover $\cup_{i \in I} \mathrm{Spec}\, A_i$ where $f^{-1}(\mathrm{Spec}\, A_i)$ is quasicompact. It doesn't seem to be enough.
AI: $X$ is always quasi-compact. This is a standard result which can be found in any good and complete introduction to algebraic geometry. But you can also prove it yourself. In the end, it is just an exercise in general topology. Hint: Affine schemes are quasi-compact. So choose a finite subcover of $\{\mathrm{Spec} A_i \to \mathrm{Spec} A\}$ and observe that $X$ is a finite union of quasi-compact open subspaces.
H: Counterexample to inverse Leibniz alternating series test
The alternating series test is a sufficient condition for the convergence of a numerical series. I am searching for a counterexample to its converse: i.e. a series (alternating, of course) which converges, but for which the hypotheses of the theorem fail.
In particular, if one writes the series as $\sum (-1)^n a_n$, then $a_n$ should not be monotonically decreasing (since it must be infinitesimal, for the series to converge).
AI: Put:
$$
b_n = \begin{cases}
n^{-2} &: n \text{ odd} \\
2^{-n} &: n \text{ even}
\end{cases}
$$
$b_n$ is not monotonically decreasing. Still, $\sum (-1)^n b_n$ converges. |
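A quick numeric illustration (a sketch; note that the convergence here is in fact absolute, since both subsequences are summable):

```python
# b_n = 1/n^2 for odd n and 1/2^n for even n: not monotone, yet the
# alternating series sum (-1)^n b_n converges.
def b(n):
    return 1.0 / (n * n) if n % 2 else 0.5**n

s = 0.0
for n in range(1, 100_001):
    s += (-1)**n * b(n)
print(s)  # partial sums stabilize near 1/3 - pi^2/8 ≈ -0.9004
```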
H: Uniform convergence for $f_n(x)=x^n-x^{2n}$
The functions $f_n(x)=x^n-x^{2n}$ converge to $f(x)=0$ on $(-1,1]$. Intuitively the convergence is not uniform on $(-1,1]$. How can I prove it?
I tried using the definition $\lim \limits_{n\to\infty}\sup \limits_{ x\in (-1,1]}|f_n(x)-f(x)|$. The function is continuous and differentiable on $[-1,1]$, and $x=0,(\frac 1 2 )^{\frac 1 n}$ are the roots of the derivative. I found that the second derivative is negative at the second point, so $\sup=1/4$ and the function does not converge uniformly?
AI: Choose an arbitrarily large odd value of $n$. There exists some $0<x<1$ such that $x^n>\dfrac 12$.
Then $$\begin{array}{rl}f_n(-x) &= (-x)^n - (-x)^{2n}
\\ &= -\left(x^n + x^{2n}\right) \\ &\leq -\frac 34 \end{array}$$
So $f_n$ does not converge uniformly on $(-1,1]$. |
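Numerically, the obstruction is easy to see (a sketch; $x$ is chosen so that $x^n = 3/4 > 1/2$):

```python
# For odd n and x = (3/4)^(1/n), f_n(-x) = -(x^n + x^{2n}) = -(3/4 + 9/16),
# so sup |f_n| on (-1,1] never drops below 3/4: no uniform convergence.
for n in (11, 101, 1001):
    x = 0.75 ** (1.0 / n)
    print(n, (-x)**n - (-x)**(2 * n))  # approximately -1.3125 for every n
```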
H: Laurent Series Expansion Problems
Expand in a laurent Series :
1- $f_{1} (z) = \frac{z^{2} - 2z +5 }{(z^{2}+1) (z-2)}$
in the ring : $1 < |z| < 2 $
2- $ f_{2} (z) = \frac{1 }{(z-3) (z+2)}$
In :
$i. 2 < |z| < 3
\\ ii. 0 < |z+2| < 5$
I managed to solve the second one but not sure if it is correct
For i. $2 < |z| < 3$ :
$ \frac{-1}{5} \cdot \frac{1}{z(1+ \frac{2}{z}) } + \frac{1}{5} \cdot \frac{1}{-3(1- \frac{z}{3}) }
= \frac{-1}{5} \sum_{n=0}^ \infty (-1)^{n} \frac{2^{n}}{z^{n+1}} - \frac{1}{15}\sum_{n=0}^ \infty \left(\frac{z}{3}\right)^{n}$
For ii. $0 < |z+2| < 5$ :
$ \frac{-1}{5} \cdot \frac{1}{z+2} + \frac{1}{5} \cdot \frac{1}{(z+2) -5 }
= \frac{-1}{5} \cdot \frac{1}{z+2} - \frac{1}{25} \cdot \frac{1}{1- \frac{z+2}{5} } \\ = \frac{-1}{5} \cdot \frac{1}{z+2}- \frac{1}{25} \sum_{n=0}^ \infty \left(\frac{z+2}{5}\right)^{n} $
AI: In case 1, note that $1\lt|z|\lt2$ means that $1/|z|\lt1$ and $|z|/2\lt1$, hence the well known expansion
$1/(1-u)=\sum\limits_{n\geqslant0}u^n$, valid for every $|u|\lt1$, applied twice, yields
$$
\frac1{z^2+1}=\frac1{z^2(1+1/z^2)}=\sum\limits_{n\geqslant0}\frac{(-1)^n}{z^{2n+2}},$$
and
$$
\frac1{z-2}=-\frac12\frac1{1-z/2}=-\sum\limits_{n\geqslant0}\frac{z^n}{2^{n+1}}.
$$
The expansion of $f_1(z)$ as a Laurent series follows from the decomposition
$$
f_1(z)=\frac1{z-2}-\frac2{z^2+1}.$$ |
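A numeric spot-check of the resulting expansion at a point of the annulus (a hedged Python sketch; the truncation order is arbitrary):

```python
# f1(z) = 1/(z-2) - 2/(z^2+1); test the two series at z = 1.5j (1 < |z| < 2).
z = 1.5j
N = 200
s_reg = -sum(z**n / 2**(n + 1) for n in range(N))        # 1/(z-2), valid |z| < 2
s_pri = sum((-1)**n / z**(2 * n + 2) for n in range(N))  # 1/(z^2+1), valid |z| > 1
f1 = (z**2 - 2 * z + 5) / ((z**2 + 1) * (z - 2))
print(abs(f1 - (s_reg - 2 * s_pri)))  # near machine precision
```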
H: Show that $\left( \frac{q}{p} \right) \equiv q^{(p-1) / 2} \mod p$, where $\left( \frac{q}{p} \right)$ is the Legendre Symbol
Show that if $p$ is any odd prime then
$$\left( \frac{q}{p} \right) \equiv q^{\frac{p-1}{2}} \mod p.$$
stating any theory that you use. In particular, you may assume the existence of a primitive element in $G_p$.
Here $\left( \frac{q}{p} \right)$ is the Legendre Symbol and $G_p$ is the group of elements $g \mod p$ such that $\gcd(g,p) = 1$. I said that for some $a \in \mathbb{Z}$, we have
$$a \equiv q^{(p-1) / 2} \mod p \implies a^2 \equiv q^{(p-1)} \equiv 1 \mod p$$
by Fermat's little theorem. And so, by definition of the Legendre symbol, we have that if $q$ is a quadratic residue mod $p$ then $\left( \frac{q}{p} \right) \equiv q^{(p-1) / 2} \mod p$. I'm now stuck on how to show that it is $\equiv -1 $ if it isn't a quadratic residue. Obviously the hint with primitive elements comes into play somehow, but I can't see how it does.
Can someone help me please.
AI: Let $g$ be a primitive root of $p$, i.e. a generator of the group of units, which has order $p-1$. Suppose that $a$ is a QR. Then $a\equiv b^2\pmod{p}$ for some $b$. But $b$ is congruent to a power $g^e$ of $g$. So $a$ is congruent to $g^{2e}$. And by Fermat's Theorem, $(g^{2e})^{(p-1)/2}\equiv 1\pmod{p}$.
Note that all $(p-1)/2$ numbers congruent to an even power of $g$ are automatically QR.
Since there are $(p-1)/2$ QR, the numbers congruent to an odd power of $g$ are all NR. An odd power of $g$ cannot be congruent to $1$ modulo $p$. Let $x=a^{(p-1)/2}$. Then $x^2\equiv 1\pmod{p}$, so $x\equiv \pm 1\pmod{p}$. If $a$ is an NR, then $a^{(p-1)/2}\not\equiv 1 \pmod{p}$, so it is $\equiv -1\pmod{p}$. |
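Euler's criterion is easy to verify computationally (a minimal Python sketch; $p = 31$ is an arbitrary odd prime):

```python
p = 31
squares = {pow(b, 2, p) for b in range(1, p)}  # nonzero quadratic residues

for q in range(1, p):
    legendre = 1 if q in squares else -1
    assert pow(q, (p - 1) // 2, p) == legendre % p  # -1 shows up as p - 1

print("Euler's criterion holds for every unit mod", p)
```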
H: Absolute convergance of function series
The question is for which values of $x\in \mathbb R$ the following series converges absolutely/conditionally: $$\sum_{n=1}^{\infty}\frac{x^n}{(1+x)(1+x^2)...(1+x^n)}$$ I have no idea how to solve it except the Weierstrass M-test, but I don't know how to bound it. Forgive me if tagging the question as 'power-series' was wrong.
AI: Let
$$u_n=\frac{x^n}{(1+x)(1+x^2)...(1+x^n)}$$
so we have
$$\left|\frac{u_{n+1}}{u_n}\right|=\left|\frac{x}{1+x^{n+1}}\right|\to\begin{cases}
|x| &\text{if } -1<x<1\\
\frac{1}{2} &\text{if } x=1\\
0 &\text{if } x<-1 \text{ or } x>1
\end{cases}$$
so we can see by the ratio test that the series converges absolutely for all $x\neq -1$ |
H: Topology on Integers such that set of all Primes is open
In my topology homework we are asked to describe a topology on the Integers such that:
set of all Primes is open.
for each $x\in\mathbb Z$, the set $\{x\}$ is not open.
$\forall x,y \in\mathbb Z$ distinct, there is an open $U\ni x$ and an open $V\ni y$ such that $U\cap V=\emptyset$
I was looking at Furstenberg's topology as in this proof:
For $m, b \in\mathbb Z$ with $m > 0$ define $N(m,b):=\{mx + b : x \in \mathbb Z\}$, an arithmetic progression stretching towards infinity in both directions. A set $U$ is open if either:
$U = \emptyset$; or
For each $b\in U$ there is an $m>0$ such that $N(m,b)\subseteq U$.
but as I understand the set of Primes is not open in this topology. Now I'm not sure what should I do: is there a way to modify this topology to make set of Primes open or should I think of something completely different.
Any hints are appreciated!
thanks!
AI: A cheap way to go about it is the following:
Let $\mathbb{P}$ denote the set of prime numbers, and $\mathbb{P}^\prime = \mathbb{Z} \setminus \mathbb{P}$.
As both $\mathbb{P}$ and $\mathbb{P}^\prime$ are countably infinite, pick bijections $f : \mathbb{P} \to \mathbb{Q}$, $g : \mathbb{P}^\prime \to \mathbb{Q}$.
Define $U \subseteq \mathbb{Z}$ to be open iff $f [ U \cap \mathbb{P} ]$ and $g [ U \cap \mathbb{P}^\prime ]$ are both open subsets of $\mathbb{Q}$ (under the subspace topology). |
H: Subrings and homomorphisms of unitary rings
Let $(R,+,\cdot)$ be a ring (in the definition i use multiplication is associative operation and it's not assumed that there is unity in the ring).
I've seen two definitions of subring.
1) non-empty subset $S \subset R$ is called subring of ring $(R,+,\cdot)$ iff $(S,+,\cdot)$ is a ring
2) Let $(R,+,\cdot)$ be unitary ring with unity $e$. Non-empty subset $S \subset R$ is called subring of ring $(R,+,\cdot)$ iff $(S,+,\cdot)$ is a ring and $e \in S$.
I'm looking for an example of a ring $R$ and a subset $S$ such that $(S,+,\cdot)$ is a ring but not a unitary ring.
I will be also very grateful for an example of such UNITARY rings $(R_1,+_1,\cdot_1)$, $(R_2,+_2,\cdot_2)$ and function $f: ~~ R_1 \longrightarrow R_2$ that
$(1) \forall a,b \in R_1 ~~~ f(a+_1 b) = f(a) +_2 f(b) $
$(2) \forall a,b \in R_1 ~~~ f(a\cdot_1 b) = f(a) \cdot_2 f(b)$
$(3) f(e_1) \neq e_2$
where $e_1$ is unity in $R_1$; $e_2$ - in $R_2$.
Thanks in advance.
AI: $R = \mathbb Z$ and $S = 2\mathbb Z$
$R_1 = R_2 = \mathbb Z$ and $f(a) = 0$.
More generally let $R$ be a unitary ring. Any proper ideal $S$ provides an example for 1. For arbitrary unitary rings $R_1$ and $R_2\neq\{0\}$, the zero map is always an example for 2. |
H: Listing subgroups of a group
I made a program to list all the subgroups of any group, and I came up with a satisfactory result for $\operatorname{SymmetricGroup}[3]$:
$\left\{\{\text{Cycles}[\{\}]\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left(
\begin{array}{cc}
1 & 2 \\
\end{array}
\right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left(
\begin{array}{cc}
1 & 3 \\
\end{array}
\right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left(
\begin{array}{cc}
2 & 3 \\
\end{array}
\right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left(
\begin{array}{ccc}
1 & 2 & 3 \\
\end{array}
\right)\right],\text{Cycles}\left[\left(
\begin{array}{ccc}
1 & 3 & 2 \\
\end{array}
\right)\right]\right\}\right\}$
It excludes the whole group itself, though it can be added separately. But in the case of $\operatorname{SymmetricGroup}[4]$ I am getting the following:
$\left\{\{\text{Cycles}[\{\}]\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left(
\begin{array}{cc}
1 & 2 \\
\end{array}
\right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left(
\begin{array}{cc}
1 & 3 \\
\end{array}
\right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left(
\begin{array}{cc}
1 & 4 \\
\end{array}
\right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left(
\begin{array}{cc}
2 & 3 \\
\end{array}
\right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left(
\begin{array}{cc}
2 & 4 \\
\end{array}
\right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left(
\begin{array}{cc}
3 & 4 \\
\end{array}
\right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left(
\begin{array}{cc}
1 & 2 \\
3 & 4 \\
\end{array}
\right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left(
\begin{array}{cc}
1 & 3 \\
2 & 4 \\
\end{array}
\right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left(
\begin{array}{cc}
1 & 4 \\
2 & 3 \\
\end{array}
\right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left(
\begin{array}{ccc}
1 & 2 & 3 \\
\end{array}
\right)\right],\text{Cycles}\left[\left(
\begin{array}{ccc}
1 & 3 & 2 \\
\end{array}
\right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left(
\begin{array}{ccc}
1 & 2 & 4 \\
\end{array}
\right)\right],\text{Cycles}\left[\left(
\begin{array}{ccc}
1 & 4 & 2 \\
\end{array}
\right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left(
\begin{array}{ccc}
1 & 3 & 4 \\
\end{array}
\right)\right],\text{Cycles}\left[\left(
\begin{array}{ccc}
1 & 4 & 3 \\
\end{array}
\right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left(
\begin{array}{ccc}
2 & 3 & 4 \\
\end{array}
\right)\right],\text{Cycles}\left[\left(
\begin{array}{ccc}
2 & 4 & 3 \\
\end{array}
\right)\right]\right\}\right\}$
The matrix form shows double transposition. Can someone please check for me if I am getting appropriate results? I doubt I am!!
AI: I have the impression that you only list cyclic subgroups, and not even all of those.
For $S_3$, the full group $S_3$ is missing as a subgroup (you mention that in your question).
For $S_4$, several subgroups are missing. In total, there should be $30$ of them, of which $17$ are cyclic; your list contains only $14$. For example, the cyclic subgroup generated by the $4$-cycle $(1\,2\,3\,4)$ is absent, as is the famous Klein four subgroup
$$\{\operatorname{id},(12)(34),(13)(24),(14)(23)\}.$$
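For instance, the Klein four subgroup can be verified directly (a sketch representing permutations of $\{0,1,2,3\}$ as tuples; the encoding is an arbitrary choice):

```python
from itertools import product

def compose(p, q):
    """Composition p∘q: apply q first, then p."""
    return tuple(p[q[i]] for i in range(4))

e = (0, 1, 2, 3)
a = (1, 0, 3, 2)  # (12)(34)
b = (2, 3, 0, 1)  # (13)(24)
c = (3, 2, 1, 0)  # (14)(23)
V = {e, a, b, c}

# A nonempty finite subset closed under the operation is a subgroup.
assert all(compose(p, q) in V for p, q in product(V, V))
print("The Klein four group is a subgroup of S4.")
```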
H: Shortcut in calculating examples of elements of a given order?
My question is:
Find all possible orders of elements of the group of units $G_{31}$. Give an example of an elememt of each possible order.
I did the question, but I felt I did it a long way. As $31$ is prime, the elements of $G_{31}$ are $\{1, 2, \ldots, 30\}$. The possible orders are the divisors of $\varphi(31) = 30$, namely $1,2,3,5,6,10,15,30$. So, to get an example of each order, I literally went through the elements checking their orders. I didn't have to go through ALL of them, because I noticed after a while whether I was going to get the order I wanted or not, but it still took a while calculating all of this.
Is there a shortcut to finding all this? So, I think I'm asking, is there a way to solve
$$a^{10} \equiv 1 \mod 31,$$
$$a^{15} \equiv 1 \mod 31$$
$$\mathrm{etc}$$
without trial and error?
AI: Since $31$ is prime, the ring $\mathbb Z/31\mathbb Z$ is a field and hence its multiplicative group $G_{31}$ is cyclic. Therefore, you can find an element $g$ of order $30$. Then the elements of order $d\mid 30$ are precisely the $g^k$ with $\gcd(k,30)=\frac{30}d$.
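A short computation produces a generator and an example element of each order (a hedged Python sketch; it finds, for instance, that $3$ is a primitive root mod $31$):

```python
p = 31

def order(a):
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

g = next(a for a in range(2, p) if order(a) == p - 1)  # a primitive root
print("generator:", g)  # 3
for d in (1, 2, 3, 5, 6, 10, 15, 30):
    print(d, pow(g, 30 // d, p))  # g^(30/d) is an element of order exactly d
```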
H: Equation $f(x,y) f(y,z) = f(x,z)$
How to solve the functional equation $f(x,y) f(y,z) = f(x,z)$?
AI: Set $g(x)=f(x,0)$ and $h(z)=f(0,z)$. Then, we have $f(x,z)=f(x,0)f(0,z)=g(x)h(z)$ for all $x$ and $z$. Apply this to the original equation to obtain $g(x)h(y)g(y)h(z)=g(x)h(z)$.
There are three possibilities now:
Either $g(x)=0$ for all $x$ (and thus $f(x,z)=0$ for all $x$ and $z$).
Or $h(z)=0$ for all $z$ (and thus $f(x,z)=0$ for all $x$ and $z$ again).
Or $g(x)\not=0$ and $h(z)\not=0$ for some $x$ and $z$. Then, we can cancel those two out to obtain $h(y)g(y)=1$ for all $y$. Thus, $h(y)=\frac{1}{g(y)}$ and we get $f(x,z)=\frac{g(x)}{g(z)}$. This can easily be checked to be a solution for any suitable $g$.
In summary, there are two families of solutions: $f(x,y)=0$ for all $x,y$, and $f(x,y)=\frac{g(x)}{g(y)}$ with $g$ any function which is never equal to zero.
H: why is an annulus close to it's boundary when it's boundary curves are close?
This is the motivating question for the rather vague question here: when is the region bounded by a Jordan curve "skinny"?
Suppose we are given two Jordan curves in the plane, one inside the other and each contained in an epsilon neighborhood of the other. How can we conclude that the annular region between the two curves is itself contained in the two epsilon neighborhoods of the original curves?
Note: by epsilon neighborhood of a curve I mean the set of all points which lie within a distance epsilon from some point in the image of the curve.
Follow up: Since the statement above is false (see Hagen's answer) we might ask whether it is true if we assume point-wise closeness of chosen parametrizations of the curves. This is answered in the affirmative by user7...
AI: We can't. Consider an arbitrarily large closed disk. At two points of its circumference let two thin strips emerge that first wind around back and forth very close to the disk boundary and then meet. The contour around this figure consists of two Jordan curves making a counterexample.
H: Inequality for embedding in Sobolev space
For $\Omega=(0,1). $Prove that there exists $M>0$ such that
$$||u||_{C^0(\overline{\Omega})}\le M||u||_{H^1(\Omega)}$$
for all $u\in H^1(\Omega).$
AI: Assume that $u(0)=0$. Then
$$
u(x)=\int_0^x u'(s)ds=\int_0^1 \textbf{1}_{(0,x)}u'(s)ds.
$$
Now use the Cauchy-Schwarz inequality:
$$
u(x)=\int_0^1 \textbf{1}_{(0,x)}(s)\,u'(s)\,ds\leq \|\textbf{1}_{(0,x)}\|_{L^2}\,\|u'\|_{L^2}\leq \left(\int_0^1 (u'(s))^2\, ds\right)^{1/2}.
$$
Right? |
H: Axiom of Regularity - Transitive set
I just managed to confuse myself completely while studying for Set Theory.
We have the Axiom of regularity:
$$\forall S (S\not= \emptyset \rightarrow (\exists x\in S)(S\cap x=\emptyset))$$
Now a set is transitive, if $x\in T$ implies $x\subset T$.
I don't understand anymore how that can be while we have the Axiom of regularity.
Doesn't it follow from the definition of a transitive set that $x\cap T = x$?
Maybe if someone knows a good source for a more detailed explanation of the Axiom of regularity or has the time to explain me some more?
Thanks and best,
Luca
AI: You are right. If $x$ is transitive, and $y\in x$ then $x\cap y=y$.
Hint: Show that a transitive set $x$ must either be empty, or $\varnothing\in x$. |
H: Comparison between the limits of two real functions
I know
If $f:D(\subset\mathbb R)\to\mathbb R,c$ is a limit point of $D,$ and $f(x)\ge(\text{resp.}\le)~a~\forall~x\in D-\{c\},$ then $\displaystyle\lim_{x\to c}f(x)\ge(\text{resp.}\le)~a.$ (Provided the limit exists)
If $f,g,h:D(\subset\mathbb R)\to \mathbb R,c$ is a limit point of $D,$ and $f(x)\le g(x)\le h(x)~\forall~x\in D-\{c\},$ then $\displaystyle\lim_{x\to c}f(x)=\displaystyle\lim_{x\to c}h(x)\implies \displaystyle\lim_{x\to c}f(x)=\displaystyle\lim_{x\to c}g(x)=\displaystyle\lim_{x\to c}h(x).$
I wonder whether the following result holds:
$\diamond$ Let $f,g:D(\subset\mathbb R)\to\mathbb R,c$ is a limit point of $D,f(x)\ge g(x)~\forall~x\in D-\{c\}.$ Do all these imply $\displaystyle\lim_{x\to c}f(x)\ge\displaystyle\lim_{x\to c}g(x)?$ (Provided the limits exist)
For then it will be more powerful result from which both 1 and 2 follow.
My Attempt:
Let $F=\displaystyle\lim_{x\to c}f(x)$ and $G=\displaystyle\lim_{x\to c}g(x).$ Given $\epsilon>0,$
$F-\epsilon<f(x)<F+\epsilon~\forall~x\in D\cap N'(\delta_1)$
$G-\epsilon<g(x)<G+\epsilon~\forall~x\in D\cap N'(\delta_2)$
Set $\delta=\min\{\delta_1,\delta_2\}.$ Then $G-\epsilon<g(x)\le f(x)<F+\epsilon~\forall~x\in D\cap N'(\delta)$, i.e. $G<F+2\epsilon.$
Since $\epsilon>0$ is arbitrary, $G\le\displaystyle\inf_{\epsilon>0}(F+2\epsilon)=F.$
Is my attempt correct?
AI: Basically, you can just use 1. for the function $h = f - g$ and $a=0$. |
H: Deducing a result about entire functions
We need to show that for an entire function $f$ on $\mathbb{C}$, there are constants $a_0,a_1,a_2,\ldots$ such that $\int^{2 \pi}_{0}|f(re^{i \theta})|^2 d\theta=2\pi\sum_{n=0}^{\infty}|a_n|^2r^{2n}$.
Thoughts so far are that we can find a power series expansion about $0$ by Taylor's Theorem. Given the result it seems natural to use $f(z)^2$ somewhere but I can't see how.
Any help appreciated.
AI: Let $f(z)=\sum\limits_{n=0}^\infty a_nz^n$. For $z=re^{i\theta} $ we have $$f(re^{i\theta })=\sum\limits_{n=0}^\infty a_nr^ne^{in\theta }$$
We have that $$|f(re^{i\theta})|^2=f(re^{i\theta})\cdot \overline{f(re^{i\theta})}=$$
$$\sum\limits_{n=0}^\infty a_nr^ne^{in\theta }\cdot \sum\limits_{m=0}^\infty \overline {a_m}r^me^{-im\theta }=$$
$$\sum\limits_{n=0}^\infty \sum\limits_{m=0}^\infty a_n \overline {a_m}r^{n+m}e^{i(n-m)\theta }$$
Therefore $$\int_0^{2\pi }|f(re^{i\theta})|^2d\theta =$$
$$\int_0^{2\pi}\sum\limits_{n=0}^\infty \sum\limits_{m=0}^\infty a_n \overline {a_m}r^{n+m}e^{i(n-m)\theta }d\theta =(*)$$
$$\sum\limits_{n=0}^\infty \sum\limits_{m=0}^\infty \int_0^{2\pi} a_n \overline {a_m}r^{n+m}e^{i(n-m)\theta }d\theta=(*)$$
$$\sum_{n=0}^\infty a_n\overline{a_n}r^{n+n}\cdot 2\pi= $$
$$2\pi\sum_{n=0}^{\infty}|a_n|^2r^{2n}$$
$(*)$ Here we use uniform convergence (on the circle $|z|=r$) to switch integral and sums, and also note that $$\int_0^{2\pi}e^{i(n-m)\theta}d\theta=\begin{cases}0&n\neq m\\2\pi&n=m\end{cases}$$
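The identity can be sanity-checked numerically, e.g. for $f(z)=e^z$, where $a_n=1/n!$ (a sketch assuming NumPy; the grid size and truncation are arbitrary):

```python
import numpy as np
from math import factorial, pi

r = 1.7
theta = np.linspace(0.0, 2.0 * pi, 200_000, endpoint=False)
# |e^{r e^{i theta}}|^2 = e^{2 r cos(theta)}; a Riemann sum over one period
# converges spectrally fast for smooth periodic integrands.
integrand = np.abs(np.exp(r * np.exp(1j * theta)))**2
integral = integrand.mean() * 2.0 * pi
series = 2.0 * pi * sum(r**(2 * n) / factorial(n)**2 for n in range(40))
print(integral, series)  # should agree to many digits
```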
H: Books on computational complexity
Can anyone recommend a good book on the subjects of computability and computational complexity? What are the de facto standard texts (say, for graduate students) in this area?
I've heard a thing or two on these subjects from "CS people" back at the university. With lots of hand-waving they talked about the halting problem, about P versus NP, and so on.
What I would like to find now is an actual mathematics textbook (as opposed to programming and/or popular textbooks). With rigorous definitions and proofs. I'm no stranger to mathematical logic, including things like partial recursive functions and Gödel's theorem. I suspect I would benefit from reading a standard graduate-level text on computational complexity. All I need now is some references to such texts. Any suggestions?
AI: You should take a look at Computational Complexity: A Modern Approach by Arora and Barak if you haven't already. I have also heard good things about Goldreich's Computational Complexity: A Conceptual Perspective but I haven't read it. |
H: What is rigorous notation for functions?
I have seen many ways to denote a function: $f(x)=x^2, y=x^2, f: x\mapsto x^2$ and so on. What is exact notation for functions? Please include lethal doses of rigor, set theory, and of course notational exactness.
Note: I am very familiar with functions in general. I just know that a lot of mathematical literature abuses notation when it comes to functions.
AI: $$f:E\to F,\quad x\mapsto f(x)$$
For example, the function $f:\mathbb R\to\mathbb R$, $x\mapsto x^2$, is not the function $g:\mathbb R_+\to\mathbb R$, $x\mapsto x^2$, but the functions $h:\{-1,0,1\}\to\mathbb R$, $x\mapsto x^2$ and $k:\{-1,0,1\}\to\mathbb R$, $x\mapsto |x|$ are equal. |
H: $\frac{a}{1-a}+\frac{b}{1-b}+\frac{c}{1-c}+\frac{d}{1-d}+\frac{e}{1-e}\ge\frac{5}{4}$
I tried to solve this inequality:
$$\frac{a}{1-a}+\frac{b}{1-b}+\frac{c}{1-c}+\frac{d}{1-d}+\frac{e}{1-e}\ge\frac{5}{4}$$
with
$$a+b+c+d+e=1$$
I am stuck at this. I don't want the full solution, a hint would be enough.
AI: Hint:
You have $1-a=b+c+d+e$ and
$a=1-(b+c+d+e)$
$\dfrac{1-(b+c+d+e)}{b+c+d+e}=\dfrac{1}{b+c+d+e}-1$ and $b+c+d+e=1-a$
Using AM $\ge$ HM
$(\dfrac{1}{1-a}+\dfrac{1}{1-b}+\dfrac{1}{1-c}+\dfrac{1}{1-d}+\dfrac{1}{1-e}) \ge \dfrac{5^2}{1+1+1+1+1-(a+b+c+d+e)} $ |
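A randomized numerical check of the inequality (a minimal Python sketch):

```python
import random

# Sample random positive (a, b, c, d, e) with a + b + c + d + e = 1.
for _ in range(10_000):
    cuts = sorted(random.random() for _ in range(4))
    parts = [right - left for left, right in zip([0.0] + cuts, cuts + [1.0])]
    lhs = sum(x / (1.0 - x) for x in parts)
    assert lhs >= 5.0 / 4.0 - 1e-9

print("Bound held in all samples; equality occurs at a = b = c = d = e = 1/5.")
```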
H: Approximation of beam
Assume that there is a simply supported beam subjected to concentrated moments $M_0$ at each end. The governing equation is
$$EI\frac{d^2y}{dx^2}-M(x)=0$$
with the boundary conditions $y(0)=0$ and $y(H)=0$. I know that there is an exact solution in the form of $y(x)=\frac{M_0\,x}{2EI}(x-H)$ but I must find an approximate solution for the deflection of the beam using trigonometric functions via Least Squares Method. The least squares method is known to me in general; but how to use it in this case?
AI: Suppose that $\hat y(x)$ is an approximate solution then it follows that
$$EI\frac{d^2 \hat y}{dx^2}-M(x)=R(x)\ne 0$$
since the approximate solution does not satisfy the equation. In LSM the error term is defined as
$$I=\int_a^b R(x)^2 dx$$
and minimized with respect to the coefficients in $\hat y(x)$. To satisfy the boundary conditions, the approximating function can be taken as (note that one can also include more parameters and enforce the boundary conditions later)
$$\hat y(x)=a\sin\bigg(\frac{\pi\,x}H\bigg)$$
By noting that $M(x)$ has a constant value $M_0$
$$I=\int_0^H \bigg(EI\frac{d^2 \hat y}{dx^2}-M_0\bigg)^2 dx=\int_0^H \bigg(-EI\frac{a\pi^2}{H^2}\sin\bigg(\frac{\pi\,x}H\bigg)-M_0\bigg)^2 dx$$
Now integrate (e.g. with Wolfram Alpha, or use the result below):
$$I=\frac{E^2\,I^2\,\pi^4}{2\,H^3}a^2+\frac{4\,M_0\pi\,E\,I}Ha+M_0^2\,H$$
For minimization use the derivative wrt $a$
$$\frac{d\,I}{d\,a}=\frac{E^2\,I^2\,\pi^4}{H^3}a+\frac{4\,M_0\pi\,E\,I}H=0\Rightarrow a=-\frac{4\,M_0\,H^2}{\pi^3\,E\,I}$$
and the approximate function becomes
$$\hat y(x)=-\frac{4\,M_0\,H^2}{\pi^3\,E\,I}\sin\bigg(\frac{\pi\,x}H\bigg)$$ |
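The whole computation is easy to reproduce symbolically (a sketch assuming SymPy; the symbol names mirror the derivation above):

```python
import sympy as sp

x, H, E, I, M0 = sp.symbols('x H E I M0', positive=True)
a = sp.symbols('a', real=True)

y_hat = a * sp.sin(sp.pi * x / H)              # trial deflection
R = E * I * sp.diff(y_hat, x, 2) - M0          # residual of the ODE
J = sp.integrate(sp.expand(R**2), (x, 0, H))   # least-squares error
a_star = sp.solve(sp.Eq(sp.diff(J, a), 0), a)[0]
print(sp.simplify(a_star))                     # -4*H**2*M0/(pi**3*E*I)
```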