H: Probability, making a selection of 5 people from 10, with two married couples with restrictions
10 people; we must make a committee of 5 people.
So the restrictions are:
1) Mr and Mrs Q can't be separated.
2) Mr and Mrs P can't be in the same committee.
So how many possible committees can you form?
4 CASES:
CASE 1: No P AND no Q's
$$ \binom{6}{5}=6 $$
CASE 2: Q's AND NO P
$$ \binom{6}{3} = 20 $$
CASE 3: 1P and Q's
$$ \binom{6}{2} * 2=30$$
CASE 4: 1P and NO Q's
$$ \binom{6}{4}*2=30 $$
86 possible committees
Am I doing this correctly?
AI: The title on case 1 should be no P's or Q's. Some of the others may be married, but you don't care. The logic is fine.
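For what it's worth, the case count is easy to sanity-check by brute force. A minimal Python sketch, with made-up labels for the ten people:
from itertools import combinations

# Count 5-person committees where the Q's are together or both absent
# and the P's never serve together.
people = ['MrP', 'MrsP', 'MrQ', 'MrsQ'] + [f'p{i}' for i in range(6)]
count = 0
for committee in combinations(people, 5):
    members = set(committee)
    if ('MrQ' in members) != ('MrsQ' in members):  # Q's separated
        continue
    if 'MrP' in members and 'MrsP' in members:     # P's together
        continue
    count += 1
print(count)  # 86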
|
H: Need help applying the root test for: $\sum\limits_{n=1}^{\infty}\left(\frac{2}{e^{-8n}-1}\right)^n$
I'm not sure if I am doing something wrong, or not... I've got an answer but it doesn't look right to me.
Given the following series, determine if it is convergent or divergent using the root or ratio test. If the test is inconclusive, use another test.
$$\sum_{n=1}^{\infty}\left(\frac{2}{e^{-8n}-1}\right)^n$$
Here's my step by step process. Maybe I left something out.
$$\lim_{n\to\infty}\left|\left(\frac{2}{e^{-8n}-1}\right)^n\right|^{\frac{1}{n}}$$
$$=\lim_{n\to\infty}\left(\frac{2}{e^{-8n}-1}\right)$$
$$=-2$$
But, even though $-2<1$, I'm very hesitant to claim that the series is convergent. I somehow feel like the answer should be positive, and I don't know if I should take the absolute value of the limit and say that the series is divergent.
AI: You take the limit of the absolute value of the general term, $\;\sqrt[\large n]{|a_n|}$, and after taking the $n$th root, what remains is still an absolute value. Taking the limit of that absolute value, in your case, will give you $$\lim_{n\to\infty}\left|\left(\frac{2}{e^{-8n}-1}\right)^n\right|^{\frac{1}{n}}\;=\;\;\lim_{n\to\infty}\left|\frac{2}{e^{-8n}-1}\right| = 2$$
Hence, since $2>1$, the series will indeed be divergent.
|
H: Prove that one of the following sets is a subspace and the other isn't?
OK, here goes another.
Prove that $ W_1 = ${$(a_1, a_2, \ldots, a_n) \in F^n : a_1 + a_2 + \cdots + a_n = 0$} is a subspace of $F^n$ but $ W_2 = ${$(a_1, a_2, \ldots, a_n) \in F^n : a_1 + a_2 + \cdots + a_n = 1$} is not.
OK. Any subspace has to contain the zero vector and be closed under addition and scalar multiplication, by definition. So to prove this we first see whether the set $W_1$ meets those criteria. Plugging in $0$ for every $a_i$ obviously works, so the first condition is met.
Is it closed under addition? Let $b_i$ be the components of an arbitrary vector in $W_1$. So $(b_1, b_2, \ldots, b_n) \in W_1$, and if we add it to $(a_1, a_2, \ldots, a_n)$ we get a vector whose components sum to $(a_1 + b_1) + (a_2 + b_2) + \cdots +(a_n + b_n) = 0$. That vector is pretty clearly part of $W_1$, and thus $W_1$ is closed under addition.
Next we see if it is closed under multiplication by a scalar. We pick an arbitrary scalar $c$ and multiply it by $(a_1, a_2, \ldots, a_n)$ to get $(ca_1, ca_2, \ldots, ca_n)$ and plugging that into the original condition we find that it doesn't matter what c is, because $ca_1 + ca_2 + \cdots + ca_n = 0$ and that's still in $W_1.$ Therefore $W_1$ is a subspace of $F^n$.
If we do the same procedure with $W_2$, though, we find that the $0$ vector is not in the set, because plugging it into $a_1 + a_2 + \cdots + a_n = 1$ gives the contradiction $0 = 1$.
Further, we can see that it isn't closed under multiplication either. $ca_1 + ca_2 + \cdots + ca_n = c$ and that will only equal 1 if c=1, so the equation does not hold with an arbitrary $c$.
Therefore $W_2$ is NOT a subspace of $F^n$.
Any holes in this proof?
(Yeah, I have been bothering folks here a lot but I finally feel that I am getting the hang of this and I have an exam tomorrow night).
AI: Your proof is fine. Also, to prove that a subset $S$ is not subspace, it suffices to show that at least one of the conditions ($0 \in S$, closure under addition and scalar multiplication) fails. So as soon as you show that $0 \notin S$, your proof is complete.
|
H: Enveloping Algebra $U(L \oplus L')$
I'm having trouble understanding part of a proof of the following statement
Let $L,L'$ be Lie algebras and $L \oplus L'$ their direct sum. Then $$ U(L \oplus L') \cong U(L) \otimes U(L')$$
Let $i_L : L \to U(L)$ denote the natural inclusion into the enveloping algebra, and similarly for $i_{L'}$ and $i_{L \oplus L'}$. The proof begins by defining a morphism
$\varphi:U(L \oplus L') \to U(L) \otimes U(L')$ by first defining a Lie algebra morphism $f:L \oplus L' \to U(L) \otimes U(L')$ by
$$f(x,x') = i_L(x) \otimes 1 + 1 \otimes i_{L'}(x')$$
Then, $\varphi$ is defined so that $\varphi \circ i_{L \oplus L'} = f$ using the universality of the enveloping algebra.
The inverse map is defined $\psi:U(L) \otimes U(L') \to U(L \oplus L')$ by
$$\psi(a \otimes a') = \psi_1(a)\psi_2(a')$$ where $\psi_1(x) = i_{L \oplus L'}(x,0)$ and $\psi_2(x) = i_{L \oplus L'}(0,x)$
The part I have the problem with is when they show that $\psi \circ \varphi$ is the identity on the image of $L \oplus L'$ in $U(L \oplus L')$. The calculation given is, for $x \in L, x' \in L'$,
$$\psi(\varphi(x,x')) = \psi(x \otimes 1) + \psi(1 \otimes x') = i_{L \oplus L'}((x,0) + (0,x')) = i_{L \oplus L'}(x,x')$$
I don't really get how the calculation of $\psi(x \otimes 1)$ is done. If I attempt myself, it comes out
$$\psi(x \otimes 1) = \psi_1(x)\psi_2(1) = i_{L \oplus L'}((x,0) \otimes (0,1)) = ~?$$
so I guess I don't understand how the tensor product $(x,0) \otimes (0,1)$ is simplified in $U(L \oplus L')$. I'm sure it's something completely basic, but I'd appreciate if someone could explain it.
AI: The maps $\psi_1$ and $\psi_2$ are supposed to be thought of as the obvious inclusions
$$\begin{cases}\psi_1:U(L)\,\hookrightarrow U(L\oplus L') \\ \psi_2: U(L')\hookrightarrow U(L\oplus L')\end{cases}$$
Thus e.g. $\psi(x\otimes 1_{U(L')})=\psi_1(x)\psi_2(1_{U(L')})=i_{L\oplus L'}(x,0)1_{U(L\oplus L')}=i_{L\oplus L'}(x,0)$.
Of course, the isomorphism $U(L\oplus L')\cong U(L)\otimes U(L')$ should be "obvious": an element in the first of the form $a_1\cdots a_n$ can be decomposed as $(u_1+v_1)\cdots(u_n+v_n)$ with $u_i\in L$, $v_i\in L'$, which can be expanded and rearranged into monomials of the form $u_{i_1}\cdots u_{i_l} v_{j_1}\cdots v_{j_r}$ (because we have the relation $[u,v]=0$ for all $u\in L,v\in L'$, allowing $u$s and $v$s to be slid across each other without issue), which corresponds to $u_{i_1}\cdots u_{i_l}\otimes v_{j_1}\cdots v_{j_r}$ in $U(L)\otimes U(L')$. How to go back from $U(L)\otimes U(L')$ to $U(L\oplus L')$ should be even more clear: just erase the $\otimes$ symbol everywhere. (We have been slightly abusive, viewing $L$ and $L'$ as subsets of $L\oplus L'$.)
Note this is very near to the fact that direct sums distribute through tensor products (why?).
|
H: Limit points and interior points in relative metric
Let $M$ be a metric space and let $X$ be a subset of $M$ with the relative metric. If $Y$ is a subset of $X$, let $\overline{Y}^{(X)}$ denote the closure of $Y$ in the metric space $X$. Prove that $\overline{Y}^{(X)}=\overline{Y}\cap X$. State and prove a corresponding result for $Y^o$.
Suppose $y\in \overline{Y}^{(X)}$. Since $\overline{Y}^{(X)}$ is defined in the metric space $X$, we must have $y\in X$. Also, there exists a sequence $y_1,y_2,\ldots\in Y$ converging to $y$. So $y\in \overline{Y}$, meaning that $y\in \overline{Y}\cap X$. On the other hand, suppose $y\in \overline{Y}\cap X$. Then there exists a sequence $y_1,y_2,\ldots\in Y$ converging to $y$ in the metric space $M$. Since $Y\subset X$, that same sequence also shows that $y$ is a limit point in the metric space $X$.
Also, $Y^{o(X)} = Y^o\cap X$. Since $Y\subset X$, any interior point of $Y$ in the metric space $X$ must be an interior point in the metric space $M$, and vice versa.
I feel like this is too straightforward... am I misunderstanding something?
Edit: Okay I see where I went wrong. Let me try again on the interior points.
\begin{align*}Y^{o(X)} &= \bigcup\{Z\mid Z\subset Y, Z \text{ open in } X\} \\
&=\bigcup\{Z\cap X\mid (Z\cap X)\subset Y, Z \text{ open in } M\} \\
&=\bigcup\{Z\cap X\mid Z\subset (Y\cup X'), Z \text{ open in } M\} \\
&=\bigcup\{Z\mid Z\subset (Y\cup X'), Z \text{ open in } M\} \cap X \\
&=(Y\cup X')^o\cap X
\end{align*}
Can we simplify $(Y\cup X')^o$ further? Does the $^o$ distribute inside?
Edit 2 Seems like the answer to the line above is no :)
AI: The closure argument looks good; maybe one thing to be a little more careful about is that when you say a sequence converges, say it converges as a sequence in the desired space (differentiating between convergence in $X$ and in $M$). You started doing this towards the end, but the beginning could use it. (But this is really just me searching really hard for ways to improve.)
Here is another way of doing it that doesn't rely on the use of sequences (this is useful when working with an arbitrary topological space). One characterization of closure is the following:
Let $A$ be a topological space (e.g. a metric space), and $B\subset A$. Then $\overline{B} = \bigcap\{Z|Z\supset B, Z \text{ closed in } A\}$. That is to say, $\overline{B}$ is the smallest closed set containing $B$.
So
\begin{align*}\overline{Y}^{(X)} &= \bigcap\{Z|Z\supset Y, Z \text{ closed in } X\} \\
&=\bigcap\{Z\cap X|Z\supset Y, Z \text{ closed in } M\} \\
&=\bigcap\{Z|Z\supset Y, Z \text{ closed in } M\} \cap X \\
&=\overline{Y} \cap X.
\end{align*}
Edit: Wow, I was way off on the relative interior bit. Thank you @copper.hat for your example.
|
H: The simplest nontrivial (unstable) integral cohomology operation
By an integral cohomology operation I mean a natural transformation $H^i(X, \mathbb{Z}) \times H^j(X, \mathbb{Z}) \times ... \to H^k(X, \mathbb{Z})$, where we restrict $X$ to some nice category of topological spaces such that integral cohomology $H^n(-, \mathbb{Z})$ is represented by the Eilenberg-MacLane spaces $K(\mathbb{Z}, n)$. The Yoneda lemma shows that such operations are in natural bijection with elements of $H^k(K(\mathbb{Z}, i) \times K(\mathbb{Z}, j) \times ... , \mathbb{Z})$.
Altogether, these cohomology operations determine a (multisorted) Lawvere theory. There is an obvious subtheory of this theory generated by zero, negation, addition, the cup product, and composition. Based on the table in this paper, it looks like the simplest integral cohomology operation not in this obvious subtheory is a cohomology operation $H^3(X, \mathbb{Z}) \to H^8(X, \mathbb{Z})$ coming from the generator of $H^8(K(\mathbb{Z}, 3), \mathbb{Z}) \cong \mathbb{Z}/3\mathbb{Z}$, but I don't know enough algebraic topology to extract an explicit description of this cohomology operation from the paper.
So: what is this cohomology operation? Where does it come from? What can you do with it? (I know there are some cohomology operations coming from the $\text{Tor}$ terms in the Künneth formula; is this one of them?)
AI: Suppose $\alpha$ is an integral $3$-cocycle on a space $X$. Then $2\alpha\cup\alpha$ is a coboundary, because the cup product is graded-commutative, and there exists a $5$-cochain $\beta$ such that $\mathrm d\beta=2\alpha\cup\alpha$. One can easily check that $\alpha\cup\beta+\beta\cup\alpha$ is an $8$-cocycle. I think that the class of this cocycle does not depend on the choice of $\beta$, so we get a well defined mapping $H^3\to H^8$. This map is natural because there is a natural proof that the cup product is graded-commutative, so $\beta$ depends naturally on $\alpha$.
This gives an operation :-)
|
H: About rationalizing expressions
For example, rationalizing expressions like
$$\frac{1}{\pm \sqrt{a} \pm \sqrt{b}}$$
is straightforward. Moreover, cases like
$$\frac{1}{\pm \sqrt{a} \pm \sqrt{b} \pm \sqrt{c}}$$
and
$$\frac{1}{\pm \sqrt{a} \pm \sqrt{b} \pm \sqrt{c} \pm \sqrt{d}}$$
are still easy to rationalize. But my question is about the more general case
$$\frac{1}{\pm \sqrt{a_1} \pm \sqrt{a_2} \cdots \pm \sqrt{a_n}}$$
Where $n \ge 5$
Are they always rationalizable? If so, what would an algorithm to rationalize them look like? If not, then a proof of impossibility must exist.
From my point of view, I can't find an obvious way to rationalize the case $n=5$, since grouping the radicals into a group of 3 and a group of 2 radicals and then applying the identity
$$(a-b)(a+b)=a^2-b^2$$
just modifies the denominator from
$$\pm \sqrt{a} \pm \sqrt{b} \pm \sqrt{c} \pm \sqrt{d} \pm \sqrt{e}$$
to
$$\pm v \pm \sqrt{w} \pm \sqrt{x} \pm \sqrt{y} \pm \sqrt{z}$$
Will this help with something? Or is a different method or identity needed?
AI: Consider
$$P_n(t_1,\ldots,t_n) = \prod_{\sigma \in \{-1,1\}^n}\sum_i \sigma_i t_i$$
You probably don't want to expand this explicitly for symbolic $t_i$, but
it is a polynomial in the $t_i$ whose terms involve only even powers (because it's invariant under $t_i \to -t_i$). Thus $P_n(\sqrt{a_1},\ldots,\sqrt{a_n})$ is a polynomial in $a_1, \ldots, a_n$. Then for any $\rho \in \{-1,1\}^n$,
$$ \frac{1}{\sum_i \rho_i \sqrt{a_i}} = \dfrac{\prod_{\sigma \ne \rho} \sum_i \sigma_i \sqrt{a_i}}{P_n(\sqrt{a_1},\ldots,\sqrt{a_n})}$$
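A small sympy sanity check of this construction for $n=3$ with $a=(2,3,5)$ (the variable names are illustrative):
from functools import reduce
from itertools import product
from operator import mul
from sympy import sqrt, expand

# One factor per sign pattern sigma in {-1,1}^3.
a = (2, 3, 5)
factors = {s: sum(si * sqrt(ai) for si, ai in zip(s, a))
           for s in product((-1, 1), repeat=len(a))}
rho = (1, 1, 1)
P = expand(reduce(mul, factors.values()))  # P_3: only even powers survive, so rational
numer = reduce(mul, (f for s, f in factors.items() if s != rho))
print(P)                                   # an integer
print(expand(numer * factors[rho] - P))    # 0, confirming the identity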
|
H: Is there an axiom of ZFC expressing that GCH fails as badly as possible?
The GCH axiom basically says that for all infinite cardinal numbers $\kappa$, the number of cardinals lying strictly between $\kappa$ and $2^\kappa$ is as small as possible. Namely, there are none.
Is there an axiom which claims the opposite, in other words that the number of cardinal numbers lying strictly between $\kappa$ and $2^\kappa$ is as large (in some sense) as possible?
Edit. For example - and I don't know if this is a silly suggestion, I know very little set theory - is the following axiom for infinite cardinals $\kappa$ consistent with ZFC? And if so, is it interesting? $$|\{\mbox{cardinals } \nu \mid \kappa<\nu<2^\kappa\}|=2^\kappa$$
AI: Note that there is no "largest possible distance" between even $\aleph_0$ and $\mathfrak{c} = 2^{\aleph_0}$. It is an old result that as long as $\aleph_\alpha$ has uncountable cofinality, then it is relatively consistent with $\mathsf{ZFC}$ that $\mathfrak{c} = \aleph_\alpha$. As every infinite successor cardinal has uncountable cofinality, this implies that there is no bound on the number of cardinals strictly between $\aleph_0$ and $\mathfrak{c}$.
Easton's Theorem goes even further, and says that except for certain basic restrictions, the function $\aleph_\alpha \mapsto 2^{\aleph_\alpha} = \aleph_{G(\alpha)}$ restricted to the regular cardinals can be pretty much arbitrary. (As Andrés Caicedo notes in his comment below, under the assumption of certain large cardinal hypotheses, the arbitrariness is further restricted. As a basic example, the least (infinite) cardinal at which $\sf{GCH}$ fails cannot be measurable.)
The answers to these questions might also be of interest:
A problem with Cantor's continuum hypothesis
Counterexamples to the continuum hypothesis
|
H: Quadratic form $\mathbb{R}^n$ homogeneous polynomial degree $2$
Could you help me with the following problem?
My definition of a quadratic form is: it is a mapping $h: \ V \rightarrow \mathbb{R}$ such that there exists a bilinear form $\varphi: \ V \times V \rightarrow \mathbb{R}$ such that $h(v)=\varphi(v,v)$.
Could you tell me how, based on that definition, I can prove that a quadratic form on $V=\mathbb{R}^n$ is a homogeneous polynomial of degree $=2$?
AI: Choose a basis for $V$, call it $\{v_1,\dots,v_n\}$. Then, letting $v = \sum_{i=1}^n a_i v_i$,
$$
h(v) = \varphi(v,v) = \sum_{i=1}^n \sum_{j=1}^n a_i a_j \varphi(v_i,v_j).
$$
Therefore $h(v)$ is a polynomial whose terms are of the form $a_i a_j$ (i.e. degree $2$ in the coefficients of $v$), and the coefficient in front of $a_i a_j$ is $\varphi(v_i,v_j)$. This means $h(v)$ is a homogeneous polynomial of degree $2$ in the variables $\{a_1,\dots,a_n\}$.
Hope that helps,
|
H: Find minimum value of a
Find the minimum value of $a$ if there's a differentiable function on $\mathbb{R}$ for which:
$$e^{f'(x)}= a {\frac{|(f(x))|}{|(1+f(x)^2)|}}$$ for every $x$
I'm pretty much stuck. I think the minimum value should be $1$, but I'm not sure.
AI: Let $f(x)$ be any solution of the DE for any $a$ over all $\mathbb{R}$:
$$e^{f'(x)} = a \left|\frac{f(x)}{1+f(x)^2}\right|\tag{*}$$
Notice the L.H.S $> 0$ for any choice of $f(\cdot)$ and $x$. This means we must have $a > 0$ and $f(x) \ne 0$ for any $x$ and hence $f(\cdot)$ is having the same sign over all $\mathbb{R}$. Let us consider the case $f(0) > 0$ first. We have:
$$\begin{align} &1 + f'(x) \le e^{f'(x)} = a \frac{f(x)}{1+f(x)^2} = \frac{a}{f(x) + f(x)^{-1}} \le \frac{a}{2}\\
\implies & f'(x) \le \frac{a-2}{2}, \text{ for all } x \in \mathbb{R}\\
\implies & f(x) \le f(0) + \frac{a-2}{2} x, \text{ for all } x \in [0,\infty)
\end{align}$$
If $a < 2$, this leads to the contradiction that $f(x)$ changes sign before $2 f(0)/(2 - a)$.
When $f(0) < 0$ and $a < 2$, apply the same argument above to $-f(-x)$, we see $f(x)$ changes sign somewhere between $0$ and $2 f(0)/(2-a)$ again. From these we can conclude if the DE $(*)$ has a solution, we must have $a \ge 2$.
If $a$ is indeed $\ge 2$, it is trivial to see $(*)$ has a constant solution.
This implies the desired minimum value of $a$ is $2$.
|
H: What increases faster: $(\log n)^2$ or $n^{1/3} + \log n$
What increases faster: $(\log n)^2$ or $n^{1/3} + \log n$ and why? And also, what increases faster $\log n$ or $n^{x}$, where $x$ is a random positive constant number?
AI: Recall that if the limit exists, then:
$$
\lim_{n\to\infty} \dfrac{f(n)}{g(n)} = C \text{ for some finite } C\ge0 \implies f(n)=O(g(n))
$$
Hence, $n^{1/3}+\log n$ grows faster (that is, we have $(\log n)^2 = O(n^{1/3}+\log n)$), since by L'Hôpital's Rule, we have:
$$ \begin{align*}
\lim_{n\to\infty} \dfrac{(\log n)^2}{n^{1/3}+\log n} &= \lim_{n\to\infty} \dfrac{\dfrac{2\log n}{n}}{\dfrac{1}{3}n^{-2/3}+\dfrac{1}{n}}\\
&= \lim_{n\to\infty} \dfrac{2\log n}{\dfrac{1}{3}n^{1/3}+1}\\
&= \lim_{n\to\infty} \dfrac{\dfrac{2}{n}}{\dfrac{1}{9}n^{-2/3}}\\
&= \lim_{n\to\infty} \dfrac{2}{\dfrac{1}{9}n^{1/3}}\\
&= 18\lim_{n\to\infty} \dfrac{1}{n^{1/3}}\\
&= 18(0)=0\ge0
\end{align*} $$
In general, notice that for any constant $x>0$, $n^x$ will grow faster than a logarithm (that is, $\log n = O(n^x)$), since:
$$
\lim_{n\to\infty} \dfrac{\log n}{n^x} = \lim_{n\to\infty} \dfrac{\dfrac{1}{n}}{xn^{x-1}}= \lim_{n\to\infty} \dfrac{1}{xn^x}= \dfrac{1}{x}\lim_{n\to\infty} \dfrac{1}{n^x}= \dfrac{1}{x}(0)=0\ge0
$$
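A quick numeric illustration of this in Python; the polynomial term only overtakes around $n \approx 1.5\times 10^7$, so small inputs can be misleading:
import math

# Compare (log n)^2 against n^(1/3) + log n at a few sizes.
for n in (10**3, 10**6, 10**9):
    print(n, math.log(n)**2, n**(1/3) + math.log(n))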
|
H: Proving replacement theorem?
I want to see if I am understanding the proof of the replacement theorem correctly.
Let $V$ be a vector space that is spanned by a set $G$ containing $n$ vectors.
Let $L \subseteq V$ be a linearly independent subset containing $m$ vectors.
Then $m\leq n$ and there exists a subset H of G containing exactly $n-m$ vectors s.t. $L \cup H$ generates $V$.
The proof in the book says to start with $m=0$. In that case $L=\varnothing$, the empty set.
Let's assume this theorem is true for some integer $m \geq 0$.
Let $L = \{v_j\}_{j=1}^{m+1}$ be a linearly independent subset of $V$ consisting of $m+1$ vectors.
Since any subset of a linearly independent set is linearly independent as well ($S_1 \subseteq S_2 \subseteq V$), then $\{v_j\}_{j=1}^m$ is linearly independent also.
It then says to use the induction hypothesis to say that $m \leq n$. Well, ok, but that seems sort of pulled out of nowhere to me. The next step is to say that there is therefore another subset, $\{u_k\}_{k=1}^{n-m}$ of $G$ such that $\{v_j\}_{j=1}^m \cup \{u_k\}_{k=1}^{n-m}$ spans $V$.
That being the case there are scalars $(a_j)_{j=1}^m$ and $(b_k)_{k=1}^{n-m}$ which you can multiply by the vectors $v_j$ and $u_k$. You can then add the two sets of vectors, yielding $$\sum_{j=1}^m a_j v_j + \sum_{k=1}^{n-m} b_k u_k = v_{m+1}$$
At this point I am not sure I understand things. Because we are assuming $n-m>0$ -- otherwise $v_{m+1}$ is linearly dependent. But then it says not only is $n>m $ but $n>m+1$, and that is where I am losing the plot to get to the next step.
AI: I'm assuming you're working through the proof in Friedberg et al. as it's very similar. I'd really encourage you to double-check what you've written because there are many typos above and it probably isn't coherent enough for someone without a copy of the text.
The proof in the book says to start with m=0. In that case L=$\varnothing$, the empty set.
Okay, and I hope you realize that this also proves the inequality $m \leq n$ for the base case.
Let's assume this theorem is true for some integer m $\ge$ 0.
Let L = ${v_1, v_2, .... v_m+1}$ and define it as a linearly independent subset of V consisting of m+1 vectors.
So we're going to use induction on $m$ to prove the theorem. Also, it should be $L = \{v_1, \ldots, v_{m+1} \}$.
Since any subset of a linearly independent set is linearly independent as well ($S_1 \subseteq S_2 \subseteq V$), then {$v_1, v_2, .... v_m$} is linearly independent also.
That's correct. I'll call this set $L'$.
It then says to use the induction hypothesis to say that m$\le$n. Well, ok, but that seems sort of pulled out of nowhere to me. The next step is to say that there is therefore another subset, {$u_1, u_2, \ldots, u_{n-m}$} of G s.t. {$v_1, v_2, \ldots, v_m$} $\cup$ {$u_1, u_2, \ldots, u_{n-m}$} generates V.
It isn't pulled out of nowhere because it follows from the induction hypothesis. As we have assumed it for some $m \geq 0$, we can apply the theorem to $\{v_1, \ldots, v_m\}$ which gives us the inequality $m \leq n$ and the set $H' = \{u_1, \ldots, u_{n-m}\} \subset G$ such that $L' \cup H'$ generates $V$.
That being the case there are scalars $a_1, a_2, \ldots, a_m$ and $b_1, b_2, \ldots, b_{n-m}$ which you can multiply by the vectors $v_i$ and $u_i$. You can then add the two sets of vectors, yielding $$a_1 v_1 + a_2v_2 + \cdots + a_mv_m + b_1u_1 + b_2u_2 + \cdots + b_{n-m}u_{n-m} = v_{m+1} \tag{*}$$
We can do this as $L' \cup H'$ generates $V$.
At this point I am not sure I understand things. Because we are assuming n-m>0 -- otherwise $v_{m+1}$ is linearly dependent. But then it says not only is n>m but n>m+1, and that is where I am losing the plot to get to the next step.
What follows is this:
Note that $n - m > 0$, lest $v_{m +1}$ is a linear combination of $v_1, \ldots, v_m$ which by Theorem 1.7 (pg 39) contradicts the assumption that $L$ is linearly independent.
Do you understand why it must be $n - m >0$? Well if we had $n - m = 0$ then $H' = \emptyset$ and this would mean that $(\text{*})$ is only composed of vectors of $L'$ and therefore, $L' \cup \{v_{m+1}\} = L$ would be linearly dependent as $v_{m+1} \in \operatorname{span}(L')$, but that would be a contradiction.
Hence $n > m$; that is, $n \geq m +1$.
Why? Because another way of expressing $n > m$, as $n$ and $m$ are integers, is $n \geq m +1$. That's all there is to it.
Moreover, some $b_i$, say $b_1$ is nonzero, for otherwise we obtain the same contradiction. Solving $(\text{*})$ for $u_1$ gives
$$u_1 = (-b_1^{-1}a_1)v_1 + \cdots + (-b_1^{-1}a_m)v_m + (b_1^{-1})v_{m + 1} + (-b_1^{-1}b_2)u_2 + \cdots + (-b_1^{-1}b_{n-m})u_{n-m} $$
Of course. If $b_i$ were all zero in $(\text{*})$ it would again only be composed of vectors of $L'$ and we know that that can't be the case for the same reason as above.
Let $H = \{u_2, \ldots, u_{n-m}\}$. Then $u_1 \in \operatorname{span}(L \cup H)$ and because $v_1, \ldots, v_m,u_2,\ldots,u_{n-m}$ are clearly in $\operatorname{span}(L \cup H)$, it follows that
$$L' \cup H' = \{v_1, \ldots, v_m, u_1, \ldots, u_{n-m}\} \subseteq \operatorname{span}(L \cup H).$$
This should be clear.
Because $L' \cup H'$ generates $V$, Theorem 1.5 (pg 30) implies that $\operatorname{span}(L \cup H) = V$; that is, $L \cup H$ generates $V$.
To see why you can apply Theorem 1.5 you must remember that if $S \subseteq V$ then $\operatorname{span}(S)$ is a subspace of $V$ and therefore, by Theorem 1.5, if $W \subseteq \operatorname{span}(S)$ then $\operatorname{span}(W) \subseteq \operatorname{span}(S)$. So we have that $V = \operatorname{span}(L' \cup H') \subseteq \operatorname{span}(L \cup H)$ and since obviously, $\operatorname{span}(L \cup H) \subseteq V$ we have $\operatorname{span}(L \cup H) = V$.
Since $H$ is a subset of $G$ which contains $(n - m) - 1 = n - (m +1)$ vectors the theorem is true for $m + 1$. This completes the induction.
This should be clear.
Hope this helps!
|
H: If $p(2x+1)=p(x^2)$ for all $x\in\mathbb{R}$, then $p\equiv\text{const.}$
Let $p\in \Bbb{R}[x]$ (polynomial) with $\deg(p)=n$. Suppose that $p(2x+1)=p(x^2)$ for all $x\in\mathbb{R}$. Prove that $p\equiv\text{const.}$
AI: Hint: If $p(2x + 1) = p\left(x^2\right)$ for all $x\in\Bbb{R}$, $\deg(p(2x + 1)) = \deg\left(p\left(x^2\right)\right)$. You know $\deg p$, so what can you deduce about $\deg\left(p\left(x^2\right)\right)$?
|
H: Which integral theorem to use to evaluate this triple integral?
Take the normal pointing outwards from the surface. Use an appropriate integral theorem
$$\iint_S \textbf{F}\cdot d\textbf{S}, \quad\text{where}\quad \textbf{F}(x,y,z)=(x^3,3yz^2,3y^2z+10) $$ and $S$ is the surface $z=-\sqrt{4-x^2-y^2}$
My attempt
I have tried using Gauss' divergence theorem, which gives me $$ \iiint_V (3x^2+3y^2+3z^2)\, d\textbf{V}$$ and I end up with the integral $$ \int_0^2 \int_0^{2\pi} \int_{\frac{\pi}{2}}^\pi 3 r ^{4} \sin(\phi) \; d\theta \, d\phi \, dr$$ The extra $r^2\sin(\phi)$ is the Jacobian in spherical coordinates.
Now when I integrate the above I get the incorrect answer. The correct answer is $\frac{-8\pi}{5}$. Where am I going wrong?
AI: In the Gauss theorem one needs closed surface, so you have to subtract from the volume integral $\displaystyle I_V=\iiint_V3(x^2+y^2+z^2)d\mathbf{V}=\frac{192\pi}{5}$ the integral $\displaystyle\iint_D \mathbf{F}\cdot d\mathbf{S}$, where $D$ is the disc $x^2+y^2\leq4$ in the $xy$-plane with normal vector $\mathbf{n}=(0,0,1)$. Now since in this case
$\mathbf{F}\cdot\mathbf{n}=10$,
the answer will be $$I_V-10\cdot \mathrm{area}(D)=\frac{192\pi}{5}-10\cdot 4\pi=-\frac{8\pi}{5}.$$
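Both numbers are easy to verify symbolically; a minimal sympy sketch:
from sympy import symbols, sin, pi, integrate

r, theta, phi = symbols('r theta phi', positive=True)

# Volume integral of div F = 3r^2 over the lower half-ball of radius 2,
# then subtract the flux 10 * area(D) through the closing disc.
I_V = integrate(3*r**4*sin(phi), (theta, 0, 2*pi), (phi, pi/2, pi), (r, 0, 2))
print(I_V)          # 192*pi/5
print(I_V - 40*pi)  # -8*pi/5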
|
H: I think that the limit of $\frac{x-1}{xy-2x-y+2}$ as $(x,y) \rightarrow (1,-2)$ exists, and equals $-1/4$. How to confirm this?
This question is on a practice exam; unfortunately, I have no idea how to solve it!
Let $$f(x,y)=\frac{x-1}{xy-2x-y+2}.$$
Does the limit of $f(x,y)$ as $(x,y) \rightarrow (1,-2)$ exist? I think it does, because I've considered a few paths and they all go to $-1/4$. But, how can I prove this?
AI: $f(x,y)=\frac{x-1}{xy-2x-y+2}=\frac{x-1}{x(y-2)-(y-2)}=\frac{x-1}{(x-1)(y-2)}=\frac{1}{y-2}$ for $x\neq 1$, so along any path
$\lim_{(x,y)\to(1,-2)}f(x,y)=\frac{1}{-2-2}=-\frac{1}{4}$
|
H: Convert two points to line eq (Ax + By +C = 0)
Say one has two points in the x,y plane. How would one convert those two points to a line? Of course I know you could use the slope-point formula & derive the line as following:
$$y - y_0 = \frac{y_1-y_0}{x_1-x_0}(x-x_0)$$
However, this manner obviously doesn't hold when $x_1-x_0 = 0$ (a vertical line). The more generic approach should, however, be capable of defining every line (a vertical line would simply mean $B = 0$):
$$Ax+By +C = 0$$
But how to deduce A, B, C given two points?
AI: Let $P_1:(x_1,y_1)$ and $P_2:(x_2,y_2)$. Then a point $P:(x,y)$ lies on the line connecting $P_1$ and $P_2$ if and only if the area of the parallellogram with sides $P_1P_2$ and $P_1P$ is zero. This can be expressed using the determinant as
$$
\begin{vmatrix}
x_2-x_1 & x-x_1 \\
y_2-y_1 & y-y_1
\end{vmatrix} = 0 \Longleftrightarrow
(y_1-y_2)x+(x_2-x_1)y+x_1y_2-x_2y_1=0,
$$
so you get (up to scale) $A=y_1-y_2$, $B=x_2-x_1$ and $C=x_1y_2-x_2y_1$.
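In code this is a direct transcription of the determinant formula; a minimal Python sketch (the function name is illustrative):
def line_coefficients(p1, p2):
    # (A, B, C) with A*x + B*y + C = 0 through p1 and p2, up to scale.
    (x1, y1), (x2, y2) = p1, p2
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

print(line_coefficients((1, 1), (1, 5)))  # (-4, 0, 4): the vertical line x = 1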
|
H: Solving Pell's equation(or any other diophantine equation) through modular arithmetic.
Let us take a solution of Pell's equation ($x^2 - my^2 = 1$) and take any prime $p$. Then we have found a solution of the Pell's equation mod $p$.
Now, conversely, for any prime $p$, we can find a solution of Pell's equation. My question is whether we can use these solutions mod $p$ to build a solution in the integers.
AI: About the general matter of deriving the existence of solutions over $\Bbb Q$ from solutions modulo every prime it is worth recalling the classical theorem in Number Theory that goes under the name of Hasse's Principle that a quadratic form
$$
F(X_1,..X_n)=\sum_{1\leq i\leq j\leq n}a_{ij}X_iX_j\in\Bbb Q[X_1,...,X_n]
$$
represents $0$ (i.e. admits a non-trivial solution of $F(X_1,..X_n)=0$ in $\Bbb Q^n$) if and only if it represents $0$ over $\Bbb R$ and all the $p$-adic fields $\Bbb Q_p$.
In turn, to go from a solution mod $p$ to a solution in $\Bbb Q_p$ one has to apply Hensel's Lemma.
Mind, though, that the proofs are not constructive. E.g., see Serre's book A Course in Arithmetic, Springer GTM 7.
These general results do apply to Pell's equation, because one can homogenize it, i.e. solving $X^2-mY^2=1$ is equivalent to find a representation of $0$ of
$$
X^2-mY^2-Z^2.
$$
Nonetheless, they are not too conclusive, in the sense that they do not allow one to say that there exist solutions of Pell's equation different from the trivial ones, namely $X=\pm1$, $Y=0$.
|
H: Convergence in $\mathbb{Z}_p$
Here is my question:
Let $\alpha_0, \dots, \alpha_{p-1} \in \mathbb{Z}_p$ be such that $\alpha_i \equiv i \pmod{p}$ for all $i = 0,\dots, p-1$. Show that, for any $x\in \mathbb{Z}_{p}$, you can find an infinite sequence $(a_{n})_{n\geq 0}$, where each $a_n \in \lbrace\alpha_0,\alpha_1,\dots,\alpha_{p-1} \rbrace$, such that
$$x = \sum_{n=0}^{\infty}a_np^{n}.$$
Any suggestions?
AI: Take the class $\bar x$ of $x\bmod p^n$. Since $\Bbb Z_p/p^n\Bbb Z_p\simeq\Bbb Z/p^n\Bbb Z$, you can find
$$
m_n=a_0+a_1p+a_2p^2+...+a_{n-1}p^{n-1}\in\Bbb Z
$$
such that $\bar x=\bar m_n$, where each $a_i\in\{0,1,...,p-1\}$.
Now observe that
$$
m_{n+1}=m_n+a_np^n
$$
so that the sequence $\{m_n\}$ converges to $x=\sum_{k=0}^\infty a_kp^k$ in $\Bbb Z_p$.
|
H: multiplication in GF(256) (AES algorithm)
I'm trying to understand the AES algorithm in order to implement this (on my own) in Java code.
In the algorithm, all byte values will be presented as the concatenation of their individual bit values (0 or 1) between braces, with the most significant bit first.
So bytes are interpreted as finite field elements using a polynomial representation:
$$b_7x^7+b_6x^6+b_5x^5+b_4x^4+b_3x^3+b_2x^2+b_1x+b_0$$
Example: ${01100011} \equiv x^6 + x^5 + x +1$
These finite field elements can easily be added with the XOR operator.
But multiplication is a little bit more complex, and I don't understand it.
In the polynomial representation, multiplication in GF(256) corresponds with the multiplication of polynomials modulo an irreducible polynomial of degree 8
This irreducible polynomial is: $m(x)=x^8 +x^4 +x^3 +x+1$
Here is my example:
$$01010111 * 00010011$$
$$01010111\equiv (x^6+x^4+x^2+x+1)$$
$$00010011\equiv(x^4+x+1)$$
$$(x^6+x^4+x^2+x+1) * (x^4+x+1) = x^{10}+x^8+x^7+x^3$$
$$(x^{10}+x^8+x^7+x^3) \mod m(x)$$
Dividing $x^{10}+x^8+x^7+x^3$ by $m(x) = x^8 +x^4 +x^3 +x+1$ gives the quotient $x^2 + 1$; step by step (addition and subtraction coincide in GF(2)):
$$x^{10}+x^8+x^7+x^3 \;+\; x^2\,m(x) \;=\; x^8+x^7+x^6+x^5+x^2$$
$$x^8+x^7+x^6+x^5+x^2 \;+\; m(x) \;=\; x^7+x^6+x^5+x^4+x^3+x^2+x+1$$
So here $x^7+x^6+x^5+x^4+x^3+x^2+x+1$ is my solution, that leads to a binary representation: $11111111$
But in the literature the correct solution is: $11111110$
I would be happy if you could tell me where my mistake is.
AI: $$(x^6+x^4+x^2+x+1) * (x^4+x+1) = x^{10}+x^8+x^7+x^3$$
should make you suspicious. What happened to the cross-term $1*1$?
I make that product to evaluate to $x^{10}+x^8+x^7+x^3+1$, and the difference of $+1$ should carry through the remaining computation to give the difference you sought.
The modulo operation is equivalent to assuming $x^8 = x^4+x^3+x+1$, which gives $x^{10} = x^6+x^5+x^3+x^2$ so $$\begin{eqnarray*}x^{10}+x^8+x^7+x^3+1 & = & (x^6+x^5+x^3+x^2) + (x^4+x^3+x+1) + x^7+x^3+1\\
& = & x^7+x^6+x^5+x^4+x^3+x^2+x\end{eqnarray*}$$
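If you want to check such computations programmatically, here is a minimal Python sketch of GF(256) multiplication (the function name is made up; the reduction applies the same substitution $x^8 = x^4+x^3+x+1$ described above, with $m(x)$ encoded as the bit pattern 0x11B):
def gf256_mul(a, b, m=0x11B):
    # Carry-less ("XOR") multiplication of the two polynomials.
    product = 0
    for i in range(8):
        if (b >> i) & 1:
            product ^= a << i
    # Reduce modulo m(x), cancelling bits of degree 14 down to 8.
    for deg in range(14, 7, -1):
        if (product >> deg) & 1:
            product ^= m << (deg - 8)
    return product

print(format(gf256_mul(0b01010111, 0b00010011), '08b'))  # 11111110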
|
H: How to diagonalize $f(x,y,z)=xy+yz+xz$
Could you tell me how to diagonalize $f(x,y,z)=xy+yz+xz$.
I know I can rewrite it as $(x+ \frac{1}{2}y + \frac{1}{2}z)^2 - x^2 - \frac{1}{4}(y-z)^2$
What do I do next?
Could you help me?
AI: I have explained the general procedure in my comments in your previous thread. So, I will take this as a concrete example. Assume the characteristic of the underlying field is not $2$. The matrix in question is
$$A=\pmatrix{0&\tfrac12&\tfrac12\\ \tfrac12&0&\tfrac12\\ \tfrac12&\tfrac12&0}$$
and we want to diagonalise it by using congruence transform.
The first thing we notice is that $A$ has a zero diagonal. This is bad, because we cannot use Gaussian elimination to kill the off-diagonal entries. So, we try to "borrow" numbers from other entries so as to make the $(1,1)$-th entry nonzero. In particular,
$$
P=\pmatrix{1&1&0\\ 0&1&0\\ 0&0&1}
\Rightarrow B=PAP^T=\pmatrix{1&\tfrac12&1\\ \tfrac12&0&\tfrac12\\ 1&\tfrac12&0}.
$$
We may now use simultaneous row and column reduction to kill all entries below or on the right of the $(1,1)$-th entry of $B$:
$$
Q=\pmatrix{1&0&0\\ -\tfrac12&1&0\\ -1&0&1}
\Rightarrow D=QBQ^T=\pmatrix{1&0&0\\ 0&-\tfrac14&0\\ 0&0&-1}.
$$
It happens that $D$ is diagonal. So we may stop here and the diagonalisation is given by $D=(QP)A(QP)^T$ and
$$\pmatrix{u\\ v\\ w}=(QP)^{-T}\pmatrix{x\\ y\\ z}=\pmatrix{\tfrac12&\tfrac12&1\\ -1&1&0\\ 0&0&1}\pmatrix{x\\ y\\ z}.$$
That is, if you define $u=\frac x2+\frac y2+z,\,v=y-x$ and $w=z$, then $f(x,y,z)=u^2-\frac14v^2-w^2$.
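A quick numerical check of this congruence transform, using numpy with the matrices above:
import numpy as np

A = np.array([[0, .5, .5], [.5, 0, .5], [.5, .5, 0]])
P = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1.]])
Q = np.array([[1, 0, 0], [-.5, 1, 0], [-1, 0, 1.]])
S = Q @ P
print(S @ A @ S.T)  # diag(1, -1/4, -1)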
|
H: Defining infinitesimals
Can such definition of infinitesimals hold?
$$\mathrm{d} x :=a:(a>0 \;\And\; \forall b \in \mathbb{R}^+\backslash \{ a \}\;(a<b))$$
And, if the above definition works, then obviously
$$\mathrm{d} f(x) := f(x+{\mathrm{d} x})-f(x)$$
This definition is basically an attempt not to delve into the realms of non-standard analysis and other sophisticated branches of mathematics, which seem redundant to me for defining such an intuitive concept.
AI: The original problem with the infinitesimal approach to analysis/calculus was the existence of these objects. If $a = \mathrm{d}x$ were a real number, then so would be $\frac{a}{2}$. As $a > 0$, basic arithmetic says that $0 < \frac{a}{2} < a$, which means that $a$ cannot have the defining property of being an infinitesimal!
What the sophisticated machinery of Abraham Robinson did was establish a logically consistent and rigourous footing for the intuitive notion of an infinitesimal. After this was done, one can do non-standard analysis/calculus without worrying at all about the inner details. (Much like most mathematicians think of the real number $\pi$ as a single discrete object, when it is actually an infinite set of rational numbers! Or maybe it is an equivalence class of Cauchy sequences of rational numbers. I forget.)
|
H: Finite generation of Hom between cyclic and artinian module
Let $R$ be a Noetherian ring with unit, and $I$ be a nonzero ideal of $R$. Let $M$ be an artinian $R$-module. Is $\operatorname{Hom}(R/I, M)$ finitely generated?
Thanks.
AI: There are many counterexamples. If $IM=0$, then $\hom(R/I,M)=M$. Now take any artinian $S$-module $M$ which is not finitely generated (for example $\mathbb{Z}/p^{\infty}$ over $\mathbb{Z}$), $R:=S \times \mathbb{Z}$, $I=0 \times \mathbb{Z}$.
|
H: Create Fourier-Series of f(x) = x if 0 < x < Pi and 0 if Pi < x < 2*Pi
I tried the following to create the Fourier-series of the function:
$$ f(x) = \begin{cases} x & 0<x<\pi \\ 0 & \pi < x < 2 \pi \end{cases}$$
This is what I tried:
$$a_0 = \frac{1}{\pi}\int\limits_0^\pi x dx = \frac{1}{\pi}\cdot\left(\frac{x^2}{2}|_0^\pi\right) = \frac{1}{\pi}\cdot\frac{\pi^2}{2} = \frac{\pi^2}{2\cdot\pi} = \frac{\pi}{2}$$
Is that right?
$$a_n = \frac{1}{\pi}\int\limits_0^\pi x \cdot \cos(nx) dx = \frac{1}{\pi}\cdot\left(\frac{x \cdot \sin(nx)}{n} + \frac{\cos(nx)}{n^2}\right)|_0^\pi$$
Assuming that $\sin(nx)$ is always $0$ and $\cos(nx)$ is $(-1)^n$. Is that ok??
$$ = \frac{1}{\pi}\cdot\left(\frac{0}{n} + \frac{(-1)^n}{n^2}\right) = \frac{1}{\pi}\cdot\left(\frac{(-1)^n}{n^2}\right) = \frac{(-1)^n}{\pi n^2}$$
That's already wrong I'm sure!
$$b_n = \frac{1}{\pi}\int\limits_0^\pi x \sin(nx) dx = \frac{1}{\pi}\cdot\left(\frac{x \cdot \cos(nx)}{n} + \frac{\sin(nx)}{n^2}\right)|_0^\pi$$
Assuming that $\sin(nx)$ is always $0$ and $\cos(nx)$ is $(-1)^n$. Is that ok??
$$ = \frac{1}{\pi}\cdot\left(\frac{\pi \cdot (-1)^n}{n}\right) = \frac{\pi\cdot(-1)^n}{\pi n} = \frac{(-1)^n}{n}$$ voilà, that's wrong again :-)
A solution would be great, but I also need to understand what was my mistake.
Thank you in advance!
AI: Plugging in $x=0$ and $x=\pi$ in $\sin (nx)$ indeed gives you $0$.
Plugging in $x=\pi$ in $\cos (nx)$ indeed gives you $(-1)^n$.
But don't forget that plugging in $x=0$ in $\cos (nx)$ gives you $1$.
Also, in the integral for $b_n$, there seems to be a minus sign missing after the integration by parts. Recheck your calculation.
|
H: Does $X_n \stackrel{Prob}{\longrightarrow} X$, $X \in L^2$ imply $X_n \stackrel{L^2}{\longrightarrow} X$?
Let $X_n$, $n\in \mathbb{N}$ be a sequence of random variables which converges in probability to $X$, i.e. $X_n \stackrel{Prob}{\longrightarrow} X$. Furthermore it is known that $X \in L^2$. Does this imply $X_n \stackrel{L^2}{\longrightarrow} X$?
AI: No. For example, consider $X_n=\sqrt n\cdot\chi_{[0,1/n]}$ in $L_2([0,1])$. $(X_n)$ converges to the zero function in probability and $\Vert X_n\Vert_{L_2}=1$ for each $n$.
|
H: Hartshorne 8.9.1 $\mathcal O_{\Delta X}$-module structure on $\mathcal I$ := the kernel of the diagonal morphism
Hartshorne asserts in 8.9.1 that $\mathcal I$, the kernel of the diagonal morphism $X \to X \times_Y X$, has a natural $\mathcal O_{\Delta X}$-module structure.
My problem is that $\mathcal I$ is an ideal sheaf on the whole ${X \times_Y X}$, whereas $\mathcal O_{\Delta X}$ is the restriction of the sheaf $\mathcal O_{X \times_Y X}$ to $\Delta X$. So there are only two possibilities: Either we restrict $\mathcal I$ to $\Delta X$, where it is zero, or we extend $\mathcal O_{\Delta X}$ to $\mathcal O_{X \times_Y X}$ by zeros. In either way we have a trivial action, which I highly doubt to be the natural $\mathcal O_{\Delta X}$-module structure mentioned by Hartshorne.
AI: The point is that the discussion is about $\mathcal{I} / \mathcal{I}^{2}$. The module structure comes from the right factor in the isomorphism
$\mathcal{I} / \mathcal{I} ^{2} \simeq \mathcal{I} \otimes _{\mathcal{O}_{X}} \mathcal{O} _{X} / \mathcal{I} \simeq \mathcal{I} \otimes _{\mathcal{O}_{X}} \mathcal{O} _{\Delta(X)}$,
since $\mathcal{O} _{\Delta(X)} \simeq \mathcal{O} _{X} / \mathcal{I}$.
(Notice I have ignored the subtle fact that $\Delta(X) \subseteq X$ is possibly a non-closed subscheme in the non-separated case, because it seems it is not the source of your confusion.)
|
H: Solve : $x^2-92 y^2=1$
As some of you might know, this is Brahmagupta's equation. How to find a solution for this?
I mean an integral solution? How to solve it using programming?
I tried something like $x^2=1+92y^2$
$x=\sqrt{1+92y^2}$
Use a brute-force approach to check every $y$? Is there any better answer?
AI: As for the programming part, I thought I'd put up some simple brute force Python just as an example.
# Start at (1, 1) because (1, 0) is a trivial solution
x, y = 1, 1
z = x**2 - 92*(y**2)
while z != 1:
    if z > 1:
        y += 1  # z too large: increasing y decreases z
    else:
        x += 1  # z too small: increasing x increases z
    z = x**2 - 92*(y**2)
print(x, y)
This outputs the first solution
1151 120
Of course, as a brute force solution this code won't get you very far if, for example, you replaced 92 with larger numbers (or even if you replaced it by 61, for that matter).
|
H: Solve $-1+B^{\prime}(r)r+B(r)=\frac{Q^2}{4 \pi r^2}$ analytically
I need to solve $$-1+B^{\prime}(r)r+B(r)=\frac{Q^2}{4 \pi r^2},$$ where $Q=\text{const}$. The boundary condition is $B(r)\to 1$ as $r \to \infty$. I am faced with this equation while solving for the spherically symmetric metric with a charge $Q$. Though I can find the solution using Mathematica or Wolfram Alpha, I would like to get an analytic solution. I am familiar with techniques for solving ODEs but am not able to recall them.
AI: It is a linear equation. Rewrite it as
$$
(r\,B)'=1+\frac{Q^2}{4\,\pi\,r^2}.
$$
Integrate to obtain
$$
r\,B=r-\frac{Q^2}{4\,\pi\,r}+C
$$
and
$$
B(r)=1-\frac{Q^2}{4\,\pi\,r^2}+\frac{C}{r}.
$$
The boundary condition is satisfied for any value of the constant $C$.
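For reference, this kind of first-order linear ODE is also easy to check with sympy's dsolve; a minimal sketch (variable names are illustrative):
from sympy import Function, Eq, dsolve, symbols, pi

r, Q = symbols('r Q', positive=True)
B = Function('B')
ode = Eq(-1 + B(r).diff(r)*r + B(r), Q**2/(4*pi*r**2))
print(dsolve(ode, B(r)))  # equivalent to B(r) = 1 - Q**2/(4*pi*r**2) + C1/r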
|
H: How is the real number line a second axiom space?
Out of all the stupid questions I may have asked, this surely may be the stupidest. Any help would be appreciated.
My book says "The real number line is a second axiom space"
How??
Sure one might say $(-n,n), n\in\mathbb{Z}$ could be described as the sets in the base. But can every open set like $(-\frac{1}{2},\frac{1}{2})$ be generated this way? Shouldn't a base be able to generate every open set in a space by taking unions of base sets?
I can't think of any other formulation for a countable base.
I've never dealt with the real number line in topological terms before. My textbook has only minimal algebraic examples until this point. So I'm possibly missing out on something important.
AI: Hint: The rational numbers are countable and dense. Therefore $(a,b)$ of the form $a,b\in\Bbb Q$ make a countable basis for the topology.
|
H: Show the eigenspaces of $T$ are all $1$-dimensional
Let $V$ be a finite-dimensional complex vector space and $T:V\to V$ a linear transformation. Suppose there exists $v\in V$ such that $\{v,Tv,T^2v,\ldots,T^{n-1}v\}$ is a basis for $V$. Show that the eigenspaces of $T$ are all $1$-dimensional.
I want to use the minimal polynomial to prove the above statement. So I need to find $T^nv$. Any suggestions?
AI: Suppose there is some eigenspace of dimension at least$~2$, say for$~\lambda$. Then $\ker(T-\lambda I)$ having dimension at least$~2$, the image $W$ of $T-\lambda I$ has dimension $d\leq n-2$ (rank-nullity). The subspace$~W$ is $T$-stable, and is annihilated by some polynomial $P[T]$ of$~T$ with $\deg(P)\leq d$ (one could take for $P$ the minimal or the characteristic polynomial of the restriction of $T$ to$~W$). Then $P(X-\lambda)$ (multiplication, not substitution!) is a polynomial of degree strictly less than$~n$ that, when evaluated in$~T$, annihilates the whole vector space: in this evaluation $P[T]\circ(T-\lambda I)$ the rightmost (first acting) factor maps $V\to W$ and the other factor vanishes on$~W$.
This shows that the minimal polynomial of$~T$ has degree strictly less than$~n$, and this contradicts the existence of a cyclic vector for$~T$. Indeed, if the minimal polynomial is $c_0+c_1X+\cdots c_mX^m$ with $m<n$, then the linear relation $0=c_0v+c_1T(v)+\cdots c_mT^m(v)$ holds for all vectors$~v\in V$, and $v,T(v),\ldots,T^{n-1}(v)$ cannot be a basis of$~V$.
|
H: how to work out $14^{293}-12^{26}\pmod{13}$
How can I work this out without a calculator?
$$14^{293}-12^{26} \pmod{13}$$
I just couldn't figure out a way to do this.
AI: $14\equiv 1\pmod{13}$ and $12\equiv -1\pmod{13}$, so
$14^{293}\equiv 1\pmod{13}$ and $12^{26}\equiv (-1)^{26}=1\pmod{13}$. Hence
$(14^{293}-12^{26})\equiv 0\pmod{13}$
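This is easy to confirm with Python's built-in modular exponentiation:
print((pow(14, 293, 13) - pow(12, 26, 13)) % 13)  # 0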
|
H: 17! mod 13, How do I do this without a calculator
So I know $$17! = 17 \times16\times15\times\cdots\times1$$
So I was thinking maybe go $$17 \bmod 13 \equiv 4 \quad\text{and}\quad 16 \bmod 13 \equiv 3 \ \ldots$$
add all that together but that is too much work so I went
17-13 = 4
$$4! \bmod 13 \equiv 4 \bmod 13,\ 3 \bmod 13,\ \ldots \text{ etc.}$$
4+3+2+1 = 10
Is this correct?
Edit: Is this at least a correct answer?
EDIT 2: I just checked Google calc: $17! \bmod 13 = 0$, so what did I do wrong?
AI: $17!=17\times 16\times \dots \times 13\times \dots \times 1$. So $17!$ is divisible by $13$, implying that $17!\equiv 0\pmod{13}$.
Note: If $m\ge n$ then $m!\equiv 0\pmod{n}$.
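A one-line check in Python:
from math import factorial
print(factorial(17) % 13)  # 0, since 13 is one of the factors of 17!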
|
H: Homeomorphism example
I'm reading some notes about topology and homeomorphisms and there is an example of a homeomorphism from the unit ball in $\mathbb R^n$ to $\mathbb R^n$: the map $x \mapsto {x \over 1 - |x|^2}$. I assume absolute value means Euclidean norm here. What I don't understand is the square in the definition. Wouldn't $x \mapsto {x \over 1 - |x|}$ also be a homeomorphism, and a simpler example? Is there a reason why one would want to square the absolute value to get some sort of better homeomorphism?
AI: The only reason I can see to use $\dfrac{x}{1-|x|^2}$ is that it is differentiable, so that when you later discuss diffeomorphisms, you can refer back to this function.
BTW, a standard map is $x\to \dfrac{x}{\sqrt{1-|x|^2}}$.
This might seem more complicated, but there is actually a deep geometric meaning to this map.
Take the point $x=(x_1,x_2,\dots,x_n)$ in the open unit disk. Then $(x_1,\dots,x_n,\sqrt{1-|x|^2})$ is in $S^n$, and we can find the point on the hyperplane $x_{n+1}=1$ projected from the center of $S^n$. This gives $\dfrac{x}{\sqrt{1-|x|^2}}$.
|
H: Height of a prime ideal and number of generators of its localization
This question is very related to this one: generators of a prime ideal in a noetherian ring.
Let $\mathfrak{p}$ be a prime ideal in a Noetherian ring and let $k$ be its height. Further suppose that $f_{1},\dots, f_{k} \in \mathfrak{p}$ generate the maximal ideal in the localization $R_{\mathfrak{p}}$; more precisely, $f_{1}, \dots, f_{k}$ generate $\mathfrak{p} R_{\mathfrak{p}}$ in $R_{\mathfrak{p}}$. This situation appears, for example, in the Jacobian criterion in local analytic geometry (see for example the book by DeJong/Pfister).
My question: Is it possible to conclude that $f_{1},\dots, f_{k}$ generate $\mathfrak{p}$?
AI: No - unless I'm misunderstanding your question, here's a simple counterexample. Let $R = \mathbb Z$, and $\mathfrak p = (2)$. Then the element $f = 10$ generates the ideal $(2) \cdot \mathbb Z_{(2)}$, since 5 is invertible there, but clearly doesn't generate $\mathfrak p$ in $\mathbb Z$.
|
H: The probability that an event with exponential distribution will happen before an event with a Poisson distribution
I have two variables depicting arrival. One (let's call it $A$) has a Poisson distribution, so the probability of $n$ elements arriving in time period $\tau$ is: $P_n(\tau)=\frac{\left(\lambda \tau\right)^ne^{-\lambda \tau}}{n!}$.
One (let's call it $B$) has an exponential distribution, so the probability of an element arriving in under time $\tau$ is $P(\tau)=1-e^{-\mu \tau}$.
$A$ and $B$ are unrelated/independent.
I'm asked what is the probability that an element from group $B$ will arrive before an element from group $A$.
The correct answer is given as $\frac{\mu}{\mu+\lambda}$
What I did is:
$$\int_0^{\infty}P\left( T_B<t\right)\cdot P^A_0(t)dt=\int_0^{\infty}\left( 1-e^{-\mu t}\right)\cdot e^{-\lambda t}dt=\int_0^{\infty}e^{-\lambda t}-e^{-(\lambda +\mu) t}dt=\frac{1}{\lambda}-\frac{1}{\mu + \lambda}=\frac{\mu}{\lambda (\mu + \lambda)}$$
Explanation: I tried to calculate the probability that $B$ arrives before time $t$ and that during that time 0 elements of $A$ arrive.
Where is my mistake?
AI: My mistake was that I took the cumulative distribution function for $B$ instead of the probability density function ($f(t)=\mu e^{-\mu t}$), which, as @Shreevatsar explains, causes over-counting. With the correct function the result is:
$$\int_0^{\infty}f_B\left(t\right)\cdot P^A_0(t)dt=\int_0^{\infty}\mu e^{-\mu t}\cdot e^{-\lambda t}dt=\int_0^{\infty}\mu e^{-(\mu +\lambda)t}dt=\frac{\mu}{\mu + \lambda}$$
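A quick Monte Carlo sanity check of this result, with made-up rates $\mu=2$ and $\lambda=3$ (the first arrival of a rate-$\lambda$ Poisson process is Exponential($\lambda$)):
import random

mu, lam = 2.0, 3.0
trials = 10**6
wins = sum(random.expovariate(mu) < random.expovariate(lam)
           for _ in range(trials))
print(wins / trials, mu / (mu + lam))  # both close to 0.4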
|
H: congruence proof: Prove that there is no integer such that $x≡2 \pmod 6$ and $x≡3 \pmod 9$ are both true.
Prove that there is no Integer such that $x≡2 \pmod 6$ and $x≡3 \pmod 9$ are both true.
How should I approach this question?
I attempted a proof by contradiction, so $x=6p+2$ and $x=9q+3$ where $p,q$ are integers.
Then $6p+2=9q+3$.
AI: $x=6k+2=9l+3$ for some $k,l\in \mathbb{Z}$,
$3$ divides $9l+3\Rightarrow 3|6k+2$
What does $3|(6k+2)$ imply?
Doesn't this imply the impossible fact $3\mid 2$?
So it contradicts the given facts, and no such integer exists.
|
H: Programmatically solving a system of nonlinear equations over GF(2)
I have the following relatively large system of nonlinear equations over $GF(2)$:
$
0 = w_7x_7 + w_7x_5 + w_7x_4 + w_7x_0 + w_6x_6 + w_6x_5 + w_6x_1 + w_5x_7 + w_5x_6 + w_5x_2 + w_4x_7 + w_4x_3 + w_3x_4 + w_2x_5 + w_1x_6 + w_0x_7 \\
0 = w_7x_6 + w_7x_4 + w_7x_3 + w_6x_7 + w_6x_5 + w_6x_4 + w_6x_0 + w_5x_6 + w_5x_5 + w_5x_1 + w_4x_7 + w_4x_6 + w_4x_2 + w_3x_7 + w_3x_3 + w_2x_4 + w_1x_5 + w_0x_6 \\
0 = w_7x_5 + w_7x_3 + w_7x_2 + w_6x_6 + w_6x_4 + w_6x_3 + w_5x_7 + w_5x_5 + w_5x_4 + w_5x_0 + w_4x_6 + w_4x_5 + w_4x_1 + w_3x_7 + w_3x_7 + w_3x_6 + w_3x_2 + w_2x_7 + w_2x_3 + w_1x_4 + w_0x_5 \\
0 = w_7x_7 + w_7x_4 + w_7x_2 + w_7x_1 + w_6x_5 + w_6x_3 + w_6x_2 + w_5x_6 + w_5x_4 + w_5x_3 + w_5x_3 + w_4x_7 + w_4x_5 + w_4x_4 + w_4x_0 + w_3x_6 + w_3x_5 + w_3x_1 + w_2x_7 + w_2x_6 + w_2x_2 + w_1x_7 + w_1x_3 + w_0x_4 \\
0 = w_7x_7 + w_7x_6 + w_7x_5 + w_7x_4 + w_7x_3 + w_7x_1 + w_6x_7 + w_6x_6 + w_6x_5 + w_6x_4 + w_6x_2 + w_5x_7 + w_5x_6 + w_5x_5 + w_5x_3 + w_4x_7 + w_4x_6 + w_4x_4 + w_3x_7 + w_3x_5 + w_3x_0 + w_2x_6 + w_1x_7 + w_1x_2 + w_0x_3 \\
0 = w_7x_6 + w_7x_3 + w_7x_2 + w_6x_7 + w_6x_4 + w_6x_3 + w_5x_5 + w_5x_4 + w_4x_6 + w_4x_5 + w_3x_7 + w_3x_6 + w_2x_7 + w_2x_0 + w_1x_1 + w_0x_2 \\
0 = w_7x_7 + w_7x_5 + w_7x_2 + w_7x_1 + w_6x_6 + w_6x_3 + w_6x_2 + w_5x_7 + w_5x_4 + w_5x_3 + w_4x_5 + w_4x_4 + w_3x_6 + w_3x_5 + w_2x_7 + w_2x_6 + w_1x_7 + w_1x_0 + w_0x_1 \\
1 = w_7x_6 + w_7x_5 + w_7x_1 + w_6x_7 + w_6x_6 + w_6x_2 + w_5x_7 + w_5x_3 + w_4x_4 + w_3x_5 + w_2x_6 + w_1x_7 + w_0x_0 \\
$
I would like to solve the system for the 8 $w$ variables in terms of $x$. I know Sage and Magma implement relatively new techniques for converting this system to its CNF equivalent and then solving using popular SAT solvers like Minisat, but such techniques do not allow one to "fix" (for lack of a better word) variables (i.e. they solve for exact values of $w_0\dots w_7$ and $x_0 \dots x_7$).
Does anyone know how I can accomplish this task using a CAS? The system above is really only a small example; I have larger systems with more variables whose equations have more monomials, so doing this task by hand becomes very time consuming.
AI: In general you could use Groebner basis methods, but it turns out that your system is much easier: it's linear in $w$.
(although you may still want to solve it using Groebner basis methods even though it's linear so as to easily keep things reduced modulo the relations like $x_7^2 + x_7$)
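As a toy illustration of the Groebner-basis route, here is a minimal sympy sketch on a made-up two-variable system (not the system above); it assumes sympy's groebner accepts the modulus option for GF(2) coefficients:
from sympy import symbols, groebner

# Lex order with the w's first eliminates them in favour of the x's;
# the field equations x_i**2 + x_i keep everything reduced over GF(2).
w0, w1, x0, x1 = symbols('w0 w1 x0 x1')
eqs = [w0*x0 + w1*x1,
       w0*x1 + w1*x0 + w1 + 1,
       x0**2 + x0, x1**2 + x1]
for g in groebner(eqs, w0, w1, x0, x1, order='lex', modulus=2):
    print(g)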
|
H: Why is $\sum_{g \in G} \rho(g) =0$ for any nontrivial irreducible representation
Let $F$ be an arbitrary field, and $(\rho, V)$ be an irreducible representation of $G$. Then $$\sum_{g \in G} \rho(g) = \begin{cases}
0 & \text{ if } \rho \neq 1_G, \\
|G|1_V & \text{ if } \rho = 1_G.
\end{cases}$$
The case when $\rho =1_G$ is clear. But why is the sum $0$ for nontrivial irreducible representations?
Thanks very much.
AI: Here's a hint:
Suppose that $x = \rho(\sum g) v \neq 0$ for some $v$. What happens if you act $g \in G$ on $x$? What does that mean for the subspace spanned by $x$?
|
H: Inverse of a polynomial function
I want to find the inverse of $f(x)=\frac{3}{4}x^2-\frac{1}{4}x^3 $ when $0<x<2$.
According to wolfram the answer is inverse
I would like to know how I can find Wolfram's inverse.
AI: The correct formula is $$(f^{-1})'(y) = \dfrac{1}{f'(f^{-1}(y))}$$
|
H: Prove $A\in \mathbb R^{n\times n}$ is antisymmetric iff...
Prove that $A\in \mathbb R^{n\times n}$ is antisymmetric iff $ \forall v\in\mathbb R^n:\langle v,Av\rangle=0 $
$\langle \cdot,\cdot\rangle$ is just the dot product.
I'm a little stumped by this problem. I'm fairly sure that the $\Leftarrow$ part of the proof can be proven by a judicious choice of vectors $v$. For example, we can show that the diagonals of a matrix that follows this property are zero since $\langle \vec e_j,A \vec e_j \rangle=0$, where $\vec e_j = (0,\dots,0,1,0,\dots,0)^T$ is the $j^{th}$ standard basis vector of $\mathbb R^n$. I still need to show, however, that the "diagonally opposite" entries add to zero.
Not sure where to start with $\Rightarrow$.
Thanks!
AI: $\Rightarrow$: If $A$ is antisymmetric, then $A = -A^T$ and so $$(v,Av) = v^T Av = -v^T A^T v = -(Av,v) = -(v,Av),$$ so $(v,Av) = 0.$
$\Leftarrow$: Your way seems to work. Also consider the vectors $v = e_i + e_j$; since you know that the diagonals vanish, you are left with $A_{ij} = -A_{ji}$ by the fact that $(v,Av) = 0.$
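A quick numerical illustration of the forward direction, with a made-up $3\times 3$ example:
import numpy as np

A = np.array([[0., 2., -1.],
              [-2., 0., 3.],
              [1., -3., 0.]])  # antisymmetric: A = -A.T
for _ in range(3):
    v = np.random.randn(3)
    print(v @ A @ v)           # ~0 up to floating-point error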
|
H: In how many ways can you rearrange CANADA?
I'm trying to solve the following question which is in the permutations unit:
In how many ways can all the letters of the word CANADA be arranged if the consonants must always be in the order in which they occur in the word itself?
I have no idea where to start, any hints would be appreciated!
Thanks
AI: Select three out of six positions for the consonants, fill them in order, then fill the rest with "A"s. So that's ${6\choose 3}=20$.
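A brute-force confirmation in Python, if you want to convince yourself:
from itertools import permutations
from math import comb

# Distinct arrangements of CANADA in which C, N, D keep their
# original relative order.
valid = {p for p in permutations('CANADA')
         if [c for c in p if c in 'CND'] == ['C', 'N', 'D']}
print(len(valid), comb(6, 3))  # 20 20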
|
H: Symmetric and exterior powers of a projective (flat) module are projective (flat)
Assume that $R$ is a commutative ring with unity and $P$ a projective (flat) $R$-module. Why $\mathrm{Sym}^n(P)$ and $\Lambda^n(P)$ are projective (flat) for every $n$?
AI: Here's one approach using the fact that projective modules are direct summands of free modules. Suppose $P\oplus Q\cong R^k$. Then $$R^{k^n}=(P\oplus Q)^{\otimes n}=P^{\otimes n}\oplus\cdots ,$$ so $P^{\otimes n}$ is projective. But then notice that $\Lambda^n(P)$ and $\mathrm{Sym}^n(P)$ are both direct summands of $P^{\otimes n}$.
Edit: As Darij Grinburg points out, and I only just now noticed, this only works for some rings $R$, such as $\mathbb Q$ algebras.
|
H: Calculating interior angles of quadrilateral
stupid question... but:
I have a polygon with the points $(a_x,a_y),(b_x,b_y),(c_x,c_y), (d_x,d_y)$.
How can I calculate each interior angle of this quadrilateral? I know that, in sum, they have to be $360^\circ$.
Thanks
AI: If you know about vectors, the dot product can help you out here.
Recall $\mathbf{a}\cdot\mathbf{b}=|\mathbf{a}|\,|\mathbf{b}|\cos(\theta)$. Rearrange this to get your desired result.
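A sketch of that computation in Python, assuming a convex polygon (at a reflex vertex, acos would return the wrong angle):
import math

def interior_angles(pts):
    # Interior angle at each vertex, via the dot product of the two
    # edge vectors meeting there.
    n = len(pts)
    out = []
    for i in range(n):
        (ax, ay), (bx, by), (cx, cy) = pts[i - 1], pts[i], pts[(i + 1) % n]
        u, v = (ax - bx, ay - by), (cx - bx, cy - by)
        cos_t = (u[0]*v[0] + u[1]*v[1]) / (math.hypot(*u) * math.hypot(*v))
        out.append(math.degrees(math.acos(cos_t)))
    return out

print(interior_angles([(0, 0), (2, 0), (2, 1), (0, 1)]))  # four 90.0 angles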
|
H: Limit problem in the process of integration?
I am trying to solve this integral
$$\int_{0}^{+\infty }\frac{x\ln(x)}{(1+x^2)^2}\,dx$$
where
$$F(x)=\int\frac{x\ln(x)}{(1+x^2)^2}\,dx=\frac{1}{4}\left( \frac{-2\ln(x)}{1+x^2}+\ln\frac{x^2}{1+x^2} \right).$$
I think the best way to solve our integral $\int_{0}^{+\infty }\frac{x\ln(x)}{(1+x^2)^2}$ is to use this formula:
$$[F(x)]_{a}^{b}=\lim_{x\to b-}F(x)-\lim_{x\to a+}F(x)$$
so
$$\int_{0}^{+\infty }\frac{x\cdot \ln(x)}{(1+x^2)^2}\,dx=[F(x)]_{0}^{\infty}=\lim_{x\to \infty}F(x)-\lim_{x\to 0+}F(x)$$
$$\int_{0}^{+\infty }\frac{x\cdot \ln(x)}{(1+x^2)^2}\,dx=\lim_{x\to \infty}\frac{1}{4}\left( \frac{-2\ln(x)}{1+x^2}+\ln\frac{x^2}{1+x^2} \right)-\lim_{x\to 0+}\frac{1}{4}\left( \frac{-2\ln(x)}{1+x^2}+\ln\frac{x^2}{1+x^2} \right)$$
Would you please guide me through how to solve the following limits?
$$\lim_{x\to \infty}F(x)=\lim_{x\to \infty}\frac{1}{4}\left( \frac{-2\ln(x)}{1+x^2}+\ln\frac{x^2}{1+x^2} \right)=\quad ?$$
$$\lim_{x\to 0+}F(x)=\lim_{x\to 0+}\frac{1}{4}\left( \frac{-2\ln(x)}{1+x^2}+\ln\frac{x^2}{1+x^2} \right)=\quad ?$$
Thank you
AI: We answer your question, and at the end make a suggestion that makes the integration work, and to a fair degree the limit computation, unnecessary.
The behaviour as $x\to\infty$ is the more straightforward one. Your expression has two components, which we can treat separately. The least problematic is
$$\lim_{x\to\infty}\frac{x^2}{1+x^2}.$$
For as $x\to\infty$, $\frac{x^2}{1+x^2}\to 1$, and so by continuity the logarithm approaches $\ln 1$, which is $0$.
Finding
$$\lim_{x\to\infty} -\frac{2\ln x}{1+x^2}$$
is not difficult, particularly if we operate informally, since $x^2$ grows much faster than $\ln x$. There are various formal calculations we can make to confirm. For instance, we can use L'Hospital's Rule.
Dealing with the limit as $x\to 0^+$ is more complicated. This is because each of the two components "blows up." So to find the limit we will somehow have to combine them.
Our expression is equal to
$$\frac{1}{4}\left(-2\frac{\ln x}{1+x^2} +2\ln x-\ln(1+x^2)\right).$$
The $\ln(1+x^2)$ part is totally harmless. Getting rid of irrelevant constants, we want to find
$$\lim_{x\to 0^+} \left(-\ln x +\frac{\ln x}{1+x^2}\right),$$
which simplifies to
$$\lim_{x\to 0^+} -\frac{x^2\ln x}{1+x^2}.$$
The denominator $1+x^2$ is harmless. We want
$$\lim_{x\to 0^+} -x^2\ln x.$$
This limit is $0$. There are many ways to show this. For example, we can rewrite our expression as $\dfrac{-\ln x}{1/x^2}$ and use L'Hospital's Rule. Or else we can write $x=e^{-w}$, and study $\frac{w}{e^{2w}}$ as $w\to\infty$.
All this work for nothing!
Remark: To do the problem with a lot less work, break up the integral into $0$ to $1$ and $1$ to $\infty$. For the $1$ to $\infty$ part, make the change of variable $x=1/t$. Very nice things will happen!
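If you want to double-check the two limits symbolically, a minimal sympy sketch:
from sympy import symbols, log, limit, oo, Rational

x = symbols('x', positive=True)
F = Rational(1, 4) * (-2*log(x)/(1 + x**2) + log(x**2/(1 + x**2)))
print(limit(F, x, oo))      # 0
print(limit(F, x, 0, '+'))  # 0, so the integral is 0 - 0 = 0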
|
H: How many divisors of $4725$ are there?
I need to solve the following problem:
How many divisors of $4725$ are there?
I found the divisors between $0$ and $9$ that can divide $4725$, which are $3,5,7,9$, but how do I find the others? Also, what is a good way to approach such problems?
Thanks!
AI: Here's a proof I made for you which should let you finish the problem:
Theorem: If the prime factorization of $n$ is ${p_1}^{a_1} {p_2}^{a_2}{p_3}^{a_3} \ldots {p_r}^{a_r}$, then the number of factors of $n$ is the product of one plus each of the exponents, that is:
$$ \prod_{k=1}^r (a_k + 1) $$
Proof: For each $p_k$, we have $a_k + 1$ choices, we can include it $0$ times, $1$ time, $2$ times, all the way up until $a_k$ times. Since prime factorizations are unique, this is the only way to form these factors. By the counting principle, the number of choices is the product of the ways to choose each individual factor.
For the second part of the problem, consider this: we no longer have a choice to put $0$ fives. We must now put at least $1$ five. What effect will this have on the number of choices? Well there are still $a_k +1$ choices for all other $p_k$, but for $5$ we will only have $a_k$ choices, since there is no choice of putting $0$.
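As a quick computational check of the theorem, here is a small Python sketch (illustrative only, not part of the original answer):
def count_divisors(n):
    # multiply (exponent + 1) over the prime factorization of n
    count, p = 1, 2
    while p * p <= n:
        exp = 0
        while n % p == 0:
            n //= p
            exp += 1
        count *= exp + 1
        p += 1
    if n > 1:              # one leftover prime factor, exponent 1
        count *= 2
    return count

print(count_divisors(4725))   # 4725 = 3^3 * 5^2 * 7, so (3+1)(2+1)(1+1) = 24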
|
H: Polytopes characterization in $\mathbb R^n$
Given $P = \{x \in\mathbb R^n \mid a_1x_1 + \ldots + a_nx_n = \text{constant}\}$, $(a_1, \ldots , a_n) \ne 0$.
Can $P$ be a polytope? I think that with $n = 1$, $P$ is a point. Can a point in $\mathbb R^1$ be a polytope?
Thank you all!
AI: This is essentially an issue of semantics. There are two ways of defining polytopes: the V-representation (the convex hull of a finite set of points) and the H-representation (the intersection of half-spaces).
V-representations are always compact, but H-representations don't have to be. A lot of people would reserve the word "polytope" for these objects when they are compact. Can you see how $P$ is the intersection of two halfspaces?
|
H: Area between $y=e^{-x}$ and $y=e^{-x}\sin x$
Let $x_n$ denote the $x$-coordinate of the $n$th point of contact between the curves $y=e^{-x}$ and $y=e^{-x}\sin x$, with $0<x_1<x_2<\cdots$, and let $A_n$ denote the area of the region enclosed by the two curves between $x_n$ and $x_{n+1}$. Show that $$\sum_{n=1}^\infty A_n=\frac{1}{2}e^{-\pi/2}.$$
My attempt:
The two curves meet at $\sin x= 1 \iff x=\pi/2 + 2\pi n,~n\in\mathbb{Z}$. So $$x_n=\frac{(4n+1)\pi}{2}$$and $$x_{n+1}=x_n+2\pi.$$Now $e^{-x}\geqslant e^{-x}\sin x$ since $\sin x\leqslant 1$ by definition. So we have, $$A_n=\int_{x_n}^{x_{n+1}}\left(e^{-x}-e^{-x}\sin x\right)~\mathrm{d}x=-e^{-x}\bigg|_{x_n}^{x_{n+1}}-I,$$where $$I=\int_{x_n}^{x_{n+1}}e^{-x}\sin x~\mathrm{d}x=-e^{-x}\sin x\bigg|_{x_n}^{x_{n+1}}+\int_{x_n}^{x_{n+1}}e^{-x}\cos x~\mathrm{d}x$$
$$=-e^{-x}\sin x\bigg|_{x_n}^{x_{n+1}}-e^{-x}\cos x\bigg|_{x_n}^{x_{n+1}}-I$$$$\Longrightarrow I=-\frac{1}{2}\left[e^{-x}\sin x\bigg|_{x_n}^{x_{n+1}}+e^{-x}\cos x\bigg|_{x_n}^{x_{n+1}}\right].$$It is clear that $\cos x_{n+1}=\cos x_n = 0$ and that $\sin x_{n+1}=\sin x_n = 1$ and so we have $$A_n=-e^{-x_{n+1}}+e^{-x_n}+\frac{1}{2}e^{-x_{n+1}}-\frac{1}{2}e^{-x_n}=\frac{1}{2}\left(e^{-x_n}-e^{-x_{n+1}}\right)$$$$=\frac{1}{2}e^{-x_n}(1-e^{-2\pi}).$$
Now for the sum $$\sum_{n=1}^\infty A_n=\frac{1}{2}(1-e^{-2\pi})\sum_{n=1}^\infty e^{-x_n}=\frac{1}{2}e^{-\pi/2}(1-e^{-2\pi})\sum_{n=1}^\infty e^{-2\pi n }$$$$=\frac{1}{2}e^{-\pi/2}(1-e^{-2\pi})\cdot \frac{e^{-2\pi}}{1-e^{-2\pi}}=\frac{1}{2}e^{-5\pi/2},$$which is not what I had to show.
So, can anybody point out exactly where my working went wrong? A friend suggested that using $$x_n=\frac{(4n-3)\pi}{2}$$ instead of $$x_n=\frac{(4n+1)\pi}{2}$$ and following the same procedure I followed yields the required result. But I cannot see the difference between choosing either of the two as $x_n$. Shouldn't both give the same result anyway?
Thanks in advance.
AI: The problem asks you to consider the $n$th point of contact with $x>0$, starting at the first one. Quite simply, the first formula you used starts at $5\pi/2$, while the one your friend suggested starts at $\pi/2$. You simply missed the first term of the series when you started summing from $5\pi/2$!
|
H: Find maximum and minimum of $f(x, y) = xy$ on $D = \left\{ (x,y) \in \mathbb{R}^2: x^2+2y^2 \leq 1 \right\}$
I'm kinda stuck on this one :
Find the minimum and maximum of the given function $f$ on $D$, where $$f(x, y) = xy$$ and $$D = \left\{(x,y) \in \mathbb{R}^2 : x^2+2y^2 \leq 1 \right\}$$
I don't know what to do with this domain $D$. I counted the first derivatives and got only point $(0,0)$ as a possible maximum/minimum inside $D$ but what about the boundary of $D$? Should I parametrize this ellipse or how should I approach this? Thanks for your tips!
AI: Obviously the function $f$ has no relative extremum inside the region $D$. In fact, using the routine method we find that $(0,0)$ is a saddle point, at which $f_{xx}f_{yy}-f_{xy}^2<0$. Now consider the boundary of $D$, on which $$x=\pm\sqrt{1-2y^2}$$ Treating the cases $x=+\sqrt{1-2y^2}$ and $x=-\sqrt{1-2y^2}$ separately, we get two one-variable functions $$f(y)=+y\sqrt{1-2y^2}, ~~~f(y)=-y\sqrt{1-2y^2}$$ I think you can find the relative extrema of these functions.... You get $4$ points.
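As a numerical sanity check (a sketch assuming NumPy; the parametrization $x=\cos\theta$, $y=\sin\theta/\sqrt2$ traces the boundary $x^2+2y^2=1$):
import numpy as np

theta = np.linspace(0, 2 * np.pi, 100001)
x, y = np.cos(theta), np.sin(theta) / np.sqrt(2)   # boundary of D
f = x * y
print(f.max(), f.min())   # approximately +/- sqrt(2)/4 = +/- 0.35355...
The four extremal points are $(\pm 1/\sqrt{2},\,\pm 1/2)$, giving maximum $\sqrt2/4$ and minimum $-\sqrt2/4$.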
|
H: Solving Systems of Equations Question
Saw this question and have been unsure of how to solve it properly. Any help would be appreciated!
A pilot of a downed airplane fires the emergency flare into the sky. The path of the flare is modeled by the equation $y =-0.096(x-25)^2+60$, where $y$ is the height of the flare in meters when its horizontal distance from where it is propelled is $x$ meters. An emergency helicopter equipped with special binoculars has a line of sight to the spot where the flare was launched. The line of sight from the binocular is modeled by the equation $9x-10y=-14$.
Solve the system and give answers rounded to two decimal points.
The line of sight from the binoculars spots the flare twice. How high was the flare closest to the ground?
AI: Hint: simplify and solve the following:
$$9x-10(-0.096(x-25)^2+60)=-14$$
|
H: Conversion Calculations
I'm trying to create an app for a workshop that I'll be running.
It's going to be a variation of the temperature converter app that tends to be the "Introduction to programming" default.
Anyway, I was wondering about conversions.
If you take, temperature, time, pressure, distance... and the rest of the categories from this site... http://www.digitaldutch.com/unitconverter/electric_current.htm
Do all the conversions follow the pattern...
y = mx + c (i.e. the formula of a straight line).
Can you give me an example that doesn't use this calculation? I can't think of one. In fact, with the exception of fahrenheit, most of them are just y = mx. Oh and Celsius to Kelvin is just a y = x + c.
Is that correct?
AI: The reason you see the pattern you do, y = mx + c, is that either
in many cases, there is a precise conversion factor m being used, along with adjustment of a constant c, where needed (as in Fahrenheit-Celsius conversions). E.g., this is the case in degrees to radians, feet-yards, min-hours, etc.
or else an approximation of a conversion factor (m) is being used (especially between systems) - an approximation to however many significant figures for increasing accuracy. E.g., pounds-kilograms, feet-meters.
See, e.g., the National Institute of Standards and Technology's website for such approximating factors, and the conversion processes you observe.
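So for the converter app, a single helper of the form y = mx + c covers every case. A minimal Python sketch (the factors shown are standard, but treat the table as illustrative):
CONVERSIONS = {                          # name: (m, c) in y = m*x + c
    "celsius_to_fahrenheit": (9 / 5, 32),
    "celsius_to_kelvin": (1, 273.15),
    "feet_to_meters": (0.3048, 0),       # exact by definition
}

def convert(name, x):
    m, c = CONVERSIONS[name]
    return m * x + c

print(convert("celsius_to_fahrenheit", 100))   # 212.0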
|
H: Find the limit $\displaystyle\lim_{n\rightarrow\infty}{(1+1/n)^{n^2}e^{-n}}$?
Find the limit $\displaystyle\lim_{n\rightarrow\infty}{(1+1/n)^{n^2}e^{-n}}$?
I found the limit as $e^{-1/2}$ using l'Hospital rule. I guess I made a mistake. Because the limit seems to be 1. Also, can we find the limit without L'Hospital rule?
AI: Let $$y=\left(1+\frac1n\right)^{n^2}\cdot e^{-n}$$
$$\implies \ln y=n^2\ln\left(1+\frac1n\right)-n$$
$$\text{Putting }n=\frac1h, \lim_{n\to\infty}\ln y=\lim_{h\to0}\frac{\ln(1+h)-h}{h^2}\text{ which is of the form } \frac00$$
Applying L'Hospital's Rule, $$\lim_{h\to0}\frac{\ln(1+h)-h}{h^2}=\lim_{h\to0}\frac{\frac1{1+h}-1}{2h}=\lim_{h\to0}\frac{1-(1+h)}{2h(1+h)}=-\frac12 $$ since $h\ne0$ as $h\to0$
Alternatively, using Taylor expansion of $\ln(1+x)$
$$\lim_{h\to0}\frac{\ln(1+h)-h}{h^2}=\lim_{h\to0}\frac{h-\frac{h^2}2+O(h^3)-h}{h^2}=-\frac12$$
|
H: trigonometric inequality - how to prove it?
Let $ 0 < x < \frac {\pi}{2}$
How to prove it?
$$2 \sin x \le x- \frac {\pi}{3} + \sqrt {3} $$
AI: The following is a standard mechanical approach. (But mechanical is often not best.) Let
$$f(x)=x-\frac{\pi}{3}+\sqrt{3} -2\sin x.$$
Then $f'(x)=1-2\cos x$.
The derivative is negative until $\cos x=\frac{1}{2}$, that is, until $x=\frac{\pi}{3}$, and then is positive.
So $f(x)$ reaches a minimum in our interval at $x=\frac{\pi}{3}$. Evaluate $f(\pi/3)$. We get $0$. So in our interval, $f(x)\ge 0$, with equality only at $x=\frac{\pi}{3}$.
|
H: Using FFT in matlab
I am not completely sure if this is where a MatLab question belongs, so if not, please direct me where I should ask.
But onto my question. I am working on trying to deconvolve a signal with noise. So I have $h(x)=f(x) \ast g(x) +n(x) $. I want to find the function $g$; $n(x)$ is white Gaussian noise. I am using the Wiener Deconvolution Method. The function I am using for $h$ is a box:
x=-20:.01:20;
h=zeros(size(x));
h(1900:2100)=1;
I know that the Fourier transform of the box should be a Sinc function, but when I use fft(h), I get nothing that resembles this. Additionally, this contains zeros, so when I use the linked method, I end up dividing by zero. So I am wondering how I can go about doing this correctly. (Note: I do not have the symbolic toolbox which was recommended me by a colleague).
AI: If you want to keep the length of $h$ unchanged (4001 samples) you can define it like
h=[ones(101,1);zeros(3800,1);ones(100,1)]
This will guarantee that its FFT is real-valued (up to numerical noise). The reason for this is that Matlab assumes that the first value of your signal, i.e. $h(1)$, corresponds to the actual index $0$ (i.e. the center of symmetry for a signal with a real-valued FFT). So if you want a signal which is symmetric, you need to shift the part corresponding to negative indices to the right (the 'end' of the signal). This is because the FFT implies a periodic continuation of the signal. Note that you still have to take the real part of the FFT of $h$ because due to small numerical errors, the imaginary part of the FFT of $h$ is in the order of $10^{-14}$, which corresponds to Matlab's floating point accuracy.
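In case it helps to experiment outside MATLAB, here is the same idea in NumPy (my own translation of the suggestion above, keeping the 4001-sample grid):
import numpy as np

# 'negative-index' half of the box moved to the end, so index 0 is the center
h = np.concatenate([np.ones(101), np.zeros(3800), np.ones(100)])
H = np.fft.fft(h)
print(abs(H.imag).max())   # ~1e-12: the imaginary part is numerical noise
H = H.real                 # a real, sinc-like spectrum, as expected for a box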
|
H: If totally disconnectedness does not imply the discrete topology, then what is wrong with my argument?
Assume that $X$ is a totally disconnected space. Then every two-point set is disconnected, which implies that every singleton is open in the topology of $X$ (because the one-point subsets of two-point sets form a separation). Isn't the collection of those singletons a basis for the discrete topology? So isn't the topology discrete?
AI: Note that being totally disconnected means that the only connected nonempty subsets are the singletons. To see that, for example, $\{ a , b \}$ is not connected, it suffices to find open subsets $U , V \subseteq X$ with the following properties:
$U \cap \{ a,b \} = \{ a \}$;
$V \cap \{ a,b \} = \{ b \}$;
This would not imply, however, that either $\{ a \}$ or $\{ b \}$ is an open subset of $X$. This only means that $\{ a \}$ and $\{ b \}$ are open subsets of the subspace $\{ a,b \}$ of $X$.
To give more details with regards to an actual example, consider the rationals $\mathbb{Q}$ as a subspace of the real line. Given any subset $A \subseteq \mathbb{Q}$ of size $> 1$ pick $p,q \in A$ with $p < q$. Then there is an irrational number $x$ such that $p < x < q$. Note that $U = ( - \infty , x ) \cap \mathbb{Q}$ and $V = ( x , + \infty ) \cap \mathbb{Q}$ are open subsets of $\mathbb{Q}$ which have the following properties:
$U \cap A \neq \emptyset \neq V \cap A$;
$A \subseteq U \cup V$; and
$( U \cap V ) \cap A = \emptyset$.
This demonstrates that $A$ cannot be a connected subset of $\mathbb{Q}$; and in general, the only nonempty connected subsets of $\mathbb{Q}$ are the singletons.
Note, however, that all nonempty open subsets of $\mathbb{Q}$ are infinite.
|
H: Totally geodesic immersions
Let $ x: M \rightarrow \overline{M} $ be a totally geodesic immersion, where $ M $ is a $ k- $ dimensional Riemannian manifold and $ \overline{M} $ is a $ n- $ dimensional Riemannian manifold. Is it true that $ x $ is an embedding?
Thanks
AI: Not as it stands. Consider the usual irrational-slope line on a torus, which is the standard example of a one-to-one immersion that is not an embedding. Put the flat metric on the torus, and this dense curve is a geodesic.
|
H: Draw two or three balls from an urn with ten balls
My urn contains two black balls and eight white balls. What is the probability that I get the two black balls
a) after two draws
b) after three draws?
My approach is to draw a decision tree.
draw 1: W B
draw 2: W B W B
draw 3: W B W B W B W -
With two draws, the chance to get the black ball in the first draw is 2/10. Now I have 8 white balls and 1 black ball left, therefore the chance to draw the second black ball is 1/9.
Solution for a):
$$\frac{2}{10} * \frac{1}{9}$$
With three draws, I have two more ways of obtaining two black balls (W B B and B W B), therefore the solution for b) is: $$\frac{2}{10} * \frac{1}{9} + \frac{8}{10} * \frac{2}{9} * \frac{1}{8} + \frac{2}{10} * \frac{8}{9} * \frac{1}{8}$$.
Are these solutions correct? (Sorry for the awkward method).
AI: Your answers are correct.
Note that each of the terms in your second answer can be written, after re-arranging, as:
$$\frac{8\cdot 2\cdot 1}{10\cdot 9\cdot 8}$$
So the total answer there can be written as:
$$\frac{3\cdot 8\cdot 2\cdot 1}{10\cdot 9\cdot 8}$$
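If you want to double-check the arithmetic, here is a quick Monte Carlo simulation in Python (illustrative only):
import random

def both_black(draws):
    urn = ["B"] * 2 + ["W"] * 8
    return random.sample(urn, draws).count("B") == 2

N = 200_000
print(sum(both_black(2) for _ in range(N)) / N)   # ~ 2/90 = 0.0222...
print(sum(both_black(3) for _ in range(N)) / N)   # ~ 6/90 = 0.0666...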
|
H: Proportion and inverse proportion
It is given that $x$ is directly proportional to $y^2$ and $y$ is inversely proportional to $z$. If $x=20$ and $y=2$ when $z=5$
(A) the value of $y$ when $z=20$
(B) the value of $x$ when $y=3$
(C) an equation relating $x$ and $z$
(D) the value of $z$ when $x = \frac 5 4$, given that $z>0$
AI: So you are given the two relations to start:
$$x \propto y^{2} \implies x=cy^{2} \text{ for some }c\in\mathbb{R} \\ y \propto \frac{1}{z} \implies y=\frac{k}{z} \text{ for some } k\in\mathbb{R}$$
You are then given that when $z=5$, $y=2$ and $x=20$. Therefore, starting with our $y$ relation to find the value of $k$, we get:
$$2=\frac{k}{5} \implies k = 10$$
Therefore we have found our constant of proportionality for our $y$ variable. Next we have our $x$ relation to find the constant of proportionality for:
$$20=c(2)^{2}=4c \implies c=\frac{20}{4}=5$$
Now you have found these you should be able to work through the rest of the problems applying similar logic. Leave a comment if you get stuck!
|
H: What is limit superior and limit inferior?
I've looked at the Wikipedia article, but it seems like gibberish. The only thing I was able to pick out of it was the concept of infimum (greatest lower bound) and supremum (least upper bound), as I had learned them previously in an intro discrete math course.
The limit inferior of a sequence ($x_n$) is defined by
$\displaystyle\liminf_{n\to\infty}x_n := \lim_{n\to\infty}\Big(\inf_{m\geq n}x_m\Big)$
or
$\displaystyle\liminf_{n\to\infty}x_n := \sup_{n\geq 0}\,\inf_{m\geq n}x_m=\sup\{\,\inf\{\,x_m:m\geq n\,\}:n\geq 0\,\}.$
Similarly, the limit superior of ($x_n$) is defined by
$\displaystyle\limsup_{n\to\infty}x_n := \lim_{n\to\infty}\Big(\sup_{m\geq n}x_m\Big)$
or
$\displaystyle\limsup_{n\to\infty}x_n := \inf_{n\geq 0}\,\sup_{m\geq n}x_m=\inf\{\,\sup\{\,x_m:m\geq n\,\}:n\geq 0\,\}.$
Can anybody provide any examples of its use, and why it's used in that context?
AI: A very prominent application of $\limsup$ is the Cauchy-Hadamard formula for the radius of convergence:
Given a power series $\sum_{n=0}^\infty a_n x^n$, its radius of convergence $R$ can be obtained from
$$\frac1R=\limsup_{n\to\infty}\sqrt[n]{|a_n|}.$$
Without going into details why that is so, let's ask:
Why not the $\lim$? Because it may not even exist (e.g. if $a_n=1+(-1)^n$).
Why not the $\sup$? Because a single large $|a_n|$ would then spoil the value whereas a single summand $a_nx^n$ does not influence the convergence.
Nevertheless, if $L:=\lim_{n\to\infty}\sqrt[n]{|a_n|}$ happens to exist, we see that the power series is dominated by $\sum |L' x|^n$ for any $L'>L$, and this converges (to $\frac1{1-|L'x|}$) provided $|L'x|<1$, and by suitable choice of $L'$ we obtain convergence whenever $|Lx|<1$. But if the limit does not exist, we use the $\limsup$ $L$ instead: there is a subsequence along which $\sqrt[n]{|a_n|}$ converges to $L$, and all but finitely many of the remaining terms $\sqrt[n]{|a_n|}$ are less than any fixed $L'>L$. These small terms don't hurt the convergence if $|xL|<1$. And yet, since our subsequence has infinitely many terms, it is enough to spoil convergence when $|xL|>1$.
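As a concrete instance, take the coefficients $a_n=1+(-1)^n$ mentioned above:
$$\sqrt[n]{|a_n|}=\begin{cases}2^{1/n}&n\text{ even},\\0&n\text{ odd},\end{cases}\qquad\text{so}\qquad\limsup_{n\to\infty}\sqrt[n]{|a_n|}=1,\quad R=1,$$
even though $\lim_{n\to\infty}\sqrt[n]{|a_n|}$ does not exist. Indeed, $\sum_n(1+(-1)^n)x^n=\sum_k 2x^{2k}$ converges exactly for $|x|<1$.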
|
H: What defines the dimension of a representation?
For example, if I have trivial representation of $S_3$, why does it have dimension 1? Why can't I take a vector space of dimension 2 and map all the vectors identically so I would have a representation of dimension 2?
Thanks in advance!
AI: A trivial representation does not need to have dimension $1$; if you have $S_3$ acting on a two-dimensional vector space, that would indeed be a trivial representation of dimension $2$. I think what's confusing you is that the trivial representation only needs to have dimension $1$ if it's an irreducible representation. In your example, there is an invariant subspace, (since any subspace will be invariant), so we can decompose the representation as two trivial subrepresentations, each of dimension 1. You can see that if we have a trivial representation of dimension $>1$, since any subspace will be invariant under our representation, we can always decompose our representation into one-dimensional trivial subrepresentations.
|
H: Definition of compact set/subset
I found one exercise in my book
Let $X$ be a compact subset of a metric space $M$. Prove that $X$ is closed.
In the definitions, the book only mentions compact space and never compact set.
An open cover of a metric space $M$ is a collection $U$ of open subsets of $M$ such that $M=\bigcup U$. A subcover of $U$ is a subcollection $U^*$ of $U$ such that $M=\bigcup U^*$. A metric space $M$ is said to be compact if every open cover of $M$ has a finite subcover.
So I wonder: When it says $X$ is a compact subset, are the open subsets in the cover open subsets of $X$, or open subsets of $M$? (If $X=[0,1]$ and $M=\mathbb{R}$, then $X$ is an open subset of $X$ but not an open subset of $M$.)
AI: For $Y\subseteq X$, this means that the subset $Y$ is a compact space when considered as a space with the subspace topology coming down from $X$.
To jog your memory, recall that the subspace topology works this way: the open sets of $Y$ are just the intersections of $Y$ with open sets of $X$.
This turns out to be equivalent to the following condition: given any open covering of $Y$ by open sets of $X$, there is a finite subcollection covering $Y$.
I think you'll be able to prove that equivalence rather easily. I am also unsure of how the terminology of "covers" jives with what you learned. I can clear up whatever questions you have about what I meant in the comments. Just ask :)
I hope this takes care of your question about definitions. If you need further help with the question you're working on, I can suggest another problem/answer posted at this site: Does a compact subspace have to be closed in an arbitrary metric space?.
|
H: Finding the definite integral $\int_0^1 \log x\,\mathrm dx$
$$\int_{0}^1 \log x \,\mathrm dx$$
How to solve this? I am having problems with the limit $0$ to $1$. Because $\log 0$ is undefined.
AI: Yet another approach:
\begin{align}
\int_0^{1}\ln x\,dx=\left[\frac{\partial}{\partial s}\left(\int_0^1x^s dx\right)\right]_{s=0^+}=\left(\frac{\partial}{\partial s}\left[\frac{x^{s+1}}{s+1}\right]_0^1\right)_{s=0}=\left[-\frac{1}{(s+1)^2}\right]_{s=0}=-1.
\end{align}
|
H: Derived functors - how is natural transformation between $L_0T$ and $T$ constructed?
For simplicity's sake, consider the categories $R\text{-Mod}, S\text{-Mod}$ of left $R$-modules and left $S$-modules, respectively, and let $\mathcal{F}$ be some precovering class in $R\text{-Mod}$. Then given a functor $T:R\text{-Mod} \rightarrow S\text{-Mod}$, left derived functors $L_nT$ can be defined.
In a book I recently read (Relative homological Algebra by Enochs and Jenda), a canonical natural transformation $L_0T\rightarrow T$ is mentioned but not described.
When I tried to construct such a transformation, it occurred to me that there is a natural transformation going in the opposite direction:
Given a left $R-$module $M$, it is easy to see that $L_0T(M)$ is simply a factor of $T(M)$ modulo $ImT(f)$ for some morphism $f:F\rightarrow M$ (the "beginning" of left $\mathcal{F}$-resolution of $M$). Then it is easy to check that $\tau_M:T(M) \rightarrow L_0T(M)$ defined via $\tau_M(x)=x+ImT(f)$ is a natural transformation.
So my question is:
Is it possible, that this is the intended natural transformation and the direction was just reversed by mistake? If not, is there a general way of defining a natural transformation $\sigma: L_0T\rightarrow T$, such that, assuming $T$ is right exact, it is an isomorphism?
AI: It seems to me that this is the right direction. To make sure we are on the same page let me recall how I remember the relevant definitions.
In the context of module categories, one can define the left derived functors as $L^{i}F(M) = H^{i}(F(P^{\bullet}))$, where $P^{\bullet}$ is a projective resolution of $M$. We say $P^{\bullet}$ is a projective resolution if it is a complex vanishing in negative degrees such that all its objects are projective, together with a map $q: P^{\bullet} \rightarrow M$ that induces isomorphisms in homology. (Where we see $M$ as a complex concentrated in degree $0$).
Hence, the induced map $H^{0}(T(q)): L^{0}F(M) \simeq H^{0}(T(P^{\bullet})) \rightarrow H^{0}(T(M)) \simeq T(M)$ seems to be the required natural transformation.
|
H: Splitting fields of symmetric groups
Is it true that $k$ is a splitting field of $S_n$ if and only if the characteristic $p$ of $k$ is zero or larger than $n$? The fact that the character table (over $\mathbb C$) has only integer entries smaller or equal to $n$, seems to imply this, or am I mistaken?
If the statement is true, could someone give a citation?
Edit: $k$ is a splitting field of $S_n$ if the $k$-algebra $kS_n$ splits over $k$, i.e. if for every simple ($=$ irreducible) $kS_n$-left-module $M$, we have $\mathrm{End}_{kS_n}(M) \cong k$.
AI: The irreducible $S_n$-modules are all realizable over the integers. Specifically, there is a family of $\mathbb{Z} S_n$-modules $S^\lambda$ indexed by partitions $\lambda$ of $n$, called Specht modules, together with symmetric $\mathbb{Z}$-bilinear forms, such that over a field $k$, the quotient by the radical of the form is either zero or irreducible, and the set of non-zero irreducibles obtained this way is a complete set of representatives for the isoclasses of irreducibles. What is true is that the group algebra is semisimple exactly when the characteristic $p$ is zero or bigger than $n$. In general, the blocks are in bijection with the set of $p$-cores of partitions of $n$.
You can read about this in chapter 4 of James' book "The representation theory of the symmetric group".
|
H: Ordinary Differential Equation
Consider a system of differential equation
$y'(t) = By(t)$, where $B$ is the $2\times2$ matrix $B = \begin{bmatrix}\alpha & -\beta\\ \beta & \alpha\end{bmatrix}$
where $\alpha,\beta \in \mathbb{R}$ and $(\alpha,\beta) \neq (0,0)$.
A) If the roots of $B$ are purely imaginary, verify that the solution of this system $y=(y_1, y_2)$ is of the form $y_1(t) = c_1 \cos(c_2 + \beta t)$ and $y_2(t) = c_1 \sin(c_2 + \beta t)$.
B) Characterise the orbit of $y$, namely $y(\mathbb{R}_+)$. Graph the orbit in $\mathbb{R}^2$.
C) Comment on the stability properties of $y$ when the roots of $B$ are purely imaginary.
D) If the roots are not purely imaginary, then verify that the solution of this system $y=(y_1,y_2)$ is of the form $y_1(t) = c_1 e^{\alpha t}\cos(c_2 + \beta t)$ and $y_2(t) = c_1 e^{\alpha t}\sin(c_2 + \beta t)$.
E) Derive and graph the orbit of $y$ when the roots of $B$ are not purely imaginary.
F) Comment on the stability properties of $y$ when the roots of $B$ are not purely imaginary.
G) What determines the direction of rotation of the orbit as $t \uparrow \infty$?
AI: Hints (given that this is from an entrance exam).
Part A
We have: $B = \begin{bmatrix}\alpha & -\beta\\\beta & \alpha\end{bmatrix}$
We can find the eigenvalues and eigenvectors of this matrix as:
$\lambda_1 = \alpha - i \beta, ~~v_1 = (-i, 1)$
$\lambda_2 = \alpha + i \beta, ~~v_2 = (i, 1)$
Since the roots are purely imaginary, that is $\alpha = 0$, we can write the solution $y(t)$ as follows:
$y(t) = \begin{bmatrix} y_1(t) \\ y_2(t)\end{bmatrix} = \begin{bmatrix} c_1 \cos \beta t - c_2 \sin \beta t \\ c_1 \sin \beta t + c_2 \cos \beta t \end{bmatrix} = \begin{bmatrix} c_1 \cos(c_2 + \beta t) \\c_1 \sin(c_2 + \beta t) \end{bmatrix}$
Note: the last part uses the fact that we can shift $\cos$ and $\sin$ terms to combine them. We can easily verify this solution by replacing it in the original equation and verifying it satisfies it.
Part B
The orbits are circles centered at the origin (a center) when the eigenvalues are purely imaginary, as a phase portrait shows. (The portrait itself is not reproduced here; see the plotting sketch at the end of this answer.)
Part C
These are stable (but not asymptotically stable); sometimes it is referred to as neutrally stable.
Part D
From the eigenvalue/eigenvector pairs determined above, we have:
$y(t) = \begin{bmatrix} y_1(t) \\ y_2(t)\end{bmatrix} = \begin{bmatrix} c_1 e^{\alpha t} \cos (\beta t) - c_2 e^{\alpha t} \sin (\beta t) \\ c_1 e^{\alpha t} \sin (\beta t) + c_2 e^{\alpha t} \cos (\beta t) \end{bmatrix} = \begin{bmatrix} c_1 e^{\alpha t} \cos(c_2 + \beta t) \\c_1 e^{\alpha t} \sin(c_2 + \beta t) \end{bmatrix}$
Part E
We can draw a phase portrait for various cases (we already did $\alpha = 0$, so have to see $\alpha > 0$ and $\alpha < 0$).
For $\alpha > 0$, the trajectories spiral outward from the origin; for $\alpha < 0$, they spiral inward toward the origin. (Phase portraits omitted; the sketch at the end of this answer reproduces them.)
Part F
$\alpha > 0$, we have an unstable spiral point at the origin.
$\alpha < 0$, we have stable spiral point at the origin.
Part G
The sign of $\beta$: the rotation enters through the $\cos(c_2+\beta t)$ and $\sin(c_2+\beta t)$ terms, so $\beta>0$ gives counterclockwise rotation and $\beta<0$ clockwise, and the magnitude of $\beta$ determines the rate of rotation. The sign of $\alpha$ only controls growth or decay via the exponential factor.
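Since the phase portraits are not reproduced above, here is a minimal Python/Matplotlib sketch (my own illustration; it just plots the closed-form solution, with $\beta=1>0$ giving counterclockwise rotation):
import numpy as np
import matplotlib.pyplot as plt

def orbit(alpha, beta=1.0, c1=1.0, c2=0.0, t_max=20.0):
    # closed-form solution derived above: r(t)*(cos, sin)(c2 + beta*t)
    t = np.linspace(0, t_max, 2000)
    r = c1 * np.exp(alpha * t)
    return r * np.cos(c2 + beta * t), r * np.sin(c2 + beta * t)

for alpha, label in [(0.0, "center"), (0.1, "unstable spiral"), (-0.1, "stable spiral")]:
    plt.plot(*orbit(alpha), label=f"alpha = {alpha} ({label})")
plt.legend(); plt.axis("equal"); plt.show()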
|
H: How to find the coordinates of the point on a sphere closest to another point?
Take the sphere $x^2 + y^2 + z^2 = 4$ and find the point on it that is closest to the point $(3,1,-1)$ without using calculus.
AI: The point on a surface closest to a point not on the surface (called an "external point") lies on a normal line from the surface point to the external point. All radii of a sphere are normal (perpendicular) to the sphere's surface. So the closest point to $ \ (3,1,-1) \ $ will lie along the radius from the origin (the center of the sphere) connecting to that external point. You need to find the point on the sphere that is found on that line.
Maybe even easier: the point on the sphere has to be 2 units away from the origin. Find the line $ ^* $ passing through $ \ (0,0,0) \ $ and $ \ (3,1,-1) \ $ , then locate the point on that line which is 2 units from the origin.
$ ^* $ EDIT: To remove the ambiguity that I clumsily left, which Thomas Andrews properly points out, perhaps I should say "the line segment" connecting the two points.
|
H: Congruence relation possible typo?
Is the following a typo? If $a \equiv b \pmod{m}$, then for some scalar $c>0$, $ac \equiv bc \pmod{mc}$
Or should it be $\pmod{m}$?
AI: No typo in the congruence equations or implication:
Theorem: $$\text{If}\; a \equiv b \pmod{m},\;\text{ then for any scalar }\;c \neq 0,\; ac \equiv bc \pmod{mc} \tag{$\dagger$}$$
$$a \equiv b \pmod m \quad\iff (a - b) \equiv 0 \pmod m \quad\iff\; (a - b) = km, \;k\in \mathbb Z.$$
$$(a-b) = km \;\iff\; c(a-b) = c(km),\; (c\neq 0)\quad \iff \;(ac - bc) = k(mc),\;k\in \mathbb Z.$$
$$\iff (ac - bc)\equiv 0\pmod{mc} \quad \iff \;ac\equiv bc \pmod{mc}, mc \in \mathbb Z.$$
For your second question: "should it be $\pmod{m}$?"
That would certainly be true, as well, but is not as strong a statement. But we do indeed have that for $c\neq 0$: $$ac \equiv bc \pmod{mc} \implies ac\equiv bc \pmod m$$ since $$ac \equiv bc \pmod{mc} \iff mc\mid (ac - bc) \implies m\mid (ac - bc) \iff ac\equiv bc \pmod m$$
|
H: Is a bounded sequence Cauchy if the element come closer?
MISSED THE CONDITION ON THE SUP....
I try to prove the existence of a limit in a Banach space. I have a sequence $\{x_n\}$ and I have managed to prove that $\limsup_{n\to\infty}\|x_n\|= C<\infty$ and $\|x_n-x_{n-1}\|>\|x_{n+1}-x_n\|$, $\forall n\in\mathbb{N}$. Is this enough to prove that the sequence is Cauchy? If not, what else does it need? I hope I don't have to prove that it is Cauchy by using the definition.
EDIT:I'll give some more information as David suggested.
My problem actually is to prove the existence of the separatrix of a map. Not to bother you with details: I have an asymptotic series for the separatrix (which I don't yet know exists). I have a real analytic map $F$, so there is a series, up to any $N\in\mathbb{N}$,
$$w_N(t)=\sum_{j=1}^N w_j t^{-j}$$
with the property
$$w_N(t)-F(w_N(t-1))\in O(t^{-N-2}).$$
My conjecture is that there exists a $w(t)$ such that $w(t)=F(w(t-1))$ and moreover
$$\lim_{n\to\infty}F^n(w_N(t-n))=w(t)$$
for large enough $N$. I have proven so far that
$$w_N(t)-F^n(w_N(t-n))\in O(t^{-N-1}),\;\forall n\in\mathbb{N}.$$
It is easy to show that $|F^{n}(w_N(t-n))-F^{n-1}(w_N(t-n+1))|>|F^{n+1}(w_N(t-n-1))-F^{n}(w_N(t-n))|$.
I have to clarify some things that are hidden by my notation. $F$ is extended to a map $F:\mathbb{C}^2\to\mathbb{C}^2$ and the real part of $t$ is smaller than a negative number of my choice. So what I want to prove that the sequence of these analytic functions converge uniformly but for a start pointwise convergence will make me happy.
AI: No, that isn't enough. Look at $\mathbb{R}$ and the sequence $x_{2n} = \frac{1}{4} - \frac{1}{8^{2n}}$, $x_{2n+1} = \frac{3}{4} + \frac{1}{8^{2n+1}};$ we approach $\frac{1}{4}$ and $\frac{3}{4}$ alternatingly and successively lower the distance, but clearly we don't converge.
|
H: Simpsons rule & Lagrange?
What is the relation between Lagrange interpolation and Simpson's rule to integrate some function with some points $x_0,f(x_0)$; ... $x_n, f(x_n)$ ?
AI: There are various approaches to deriving Simpson's Rule. A common one uses a special case of Lagrange interpolation. Recall that we use evenly spaced points $a=x_0,x_1, \dots, x_{2m}=b$.
Then for $i=0$ to $m-1$, we find a polynomial $P_i(x)$ of degree $\le 2$ that passes through the three points $A_{2i}=(x_{2i}, f(x_{2i}))$, $A_{2i+1}=(x_{2i+1}, f(x_{2i+1}))$ and $A_{2i+2}=(x_{2i+2}, f(x_{2i+2}))$.
This is ordinary three point Lagrange interpolation, though the term is ordinarily not used in this very special context.
We then integrate $P_i(x)$ from $x_{2i}$ to $x_{2i+2}$, and add up over all $i$ from $0$ to $m-1$. When the calculation is done, we end up with the ordinary Simpson's Rule formula.
Many calculus books do a detailed derivation.
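A compact Python version of the resulting composite rule (a sketch assuming NumPy; integrating each interpolating parabola and summing produces exactly the familiar $1,4,2,\dots,2,4,1$ weights used below):
import numpy as np

def simpson(f, a, b, m):
    # 2m evenly spaced subintervals; weights h/3 * (1, 4, 2, ..., 2, 4, 1)
    x = np.linspace(a, b, 2 * m + 1)
    h = (b - a) / (2 * m)
    y = f(x)
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-2:2].sum())

print(simpson(np.sin, 0, np.pi, 10))   # ~2.0; the exact integral is 2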
|
H: How to show convergence or divergence of a series when the ratio test is inconclusive?
Use the ratio or the root test to show convergence or divergence of the following series. If inconclusive, use another test:
$$\sum_{n=1}^{\infty}\frac{n!}{n^{n}}$$
So my first instinct was to try the ratio test due to the existence of the factorial. This is my working:
Using the Ratio Test:
\begin{align*}
\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_{n}}\right|&=\lim_{n\to\infty}\left|\frac{(n+1)\cdot n!\cdot n^{n}}{n!\cdot n\cdot n^{n}}\right|\\
&=\lim_{n\to\infty}\left|\frac{n+1}{n}\right|\\
&=1
\end{align*}
The Ratio Test is inconclusive.
I decided then to try the root test due to the presence of the $n^n$, but I think that's problematic and won't work (unless I'm looking at something the wrong way). I end up with the following:
\begin{align}
\lim_{n\to\infty}\left|a_n\right|^{1/n}&=\lim_{n\to\infty}\left|\frac{n!}{n^n}\right|\\
&=\lim_{n\to\infty}\left|\frac{(n!)^{1/n}}{n}\right|=\frac{\infty}{\infty}
\end{align}
So my problem here is that I can't apply L'Hopital's rule. If I expand the numerator I get the following:
$$(n!)^{1/n}=\sqrt[n]{n}\cdot\sqrt[n]{n-1}\cdot\sqrt[n]{n-2}\cdot\sqrt[n]{n-3}\cdots\sqrt[n]{3}\cdot\sqrt[n]{2}\cdot\sqrt[n]{1}$$
Which then would only allow me to cancel the $\sqrt[n]{n}$ and get $n^{(n-1)/n}$ in the denominator. Still gives me the indeterminate form of $\infty/\infty$.
So how can I approach this? Or was I on the right track and did something wrong?
AI: $$
\frac{n^n}{(n+1)^{n+1}} = \frac{1}{n+1}\cdot\frac{n^n}{(n+1)^n}.
$$
The $n+1$ in the denominator is canceled by the $n+1$ in the other part of the whole expression. Then we have
$$
\frac{n^n}{(n+1)^n}\to \frac1e\text{ as }n\to\infty.
$$
Since this limit is $\frac1e<1$, the ratio test shows that the series converges.
|
H: How can evaluate $\lim_{x\to0}\frac{3x^2}{\tan(x)\sin(x)}$?
I know this
$$\lim_{x\to0}\frac{3x^2}{\tan(x)\sin(x)}$$
But I have no idea how make a result different of:
$$\lim_{x\to0}\frac{3x}{\tan(x)}$$
I would like understand this calculation without using derivation or L'hôpital's rule. Thank you.
AI: Since $\tan x = \dfrac{\sin x}{\cos x}$, we have the identity:
$$\frac{3x^2}{\tan x\sin x} = \frac{3x^2\cos x}{\sin^2 x}$$
valid in at least an open, punctured neighbourhood around $0$.
Now using the product rule for limits, and that $\lim\limits_{x\to 0}\dfrac{\sin x}x = 1$ (proofs here, also some without differentiation):
$$\lim_{x\to0}\frac{3x^2}{\tan x\sin x} = \lim_{x\to0}\cos x\cdot\lim_{x\to 0}\frac{3x^2}{\sin^2 x} = 3$$
|
H: Algebra and skills needed for Hatcher
A couple of months ago I asked a professor by e-mail to mentor me on topology during the summer. He advised me to study general topology (Hausdorff spaces, connectedness, compactness) and algebraic topology (from Hatcher) before we meet, as time permits.
Yesterday I finished chapter 2 of Munkres (I am planning to study up to chapter 3) but I have no knowledge of abstract and linear algebra. I took analysis and set theory, this is my background.
I consider myself as a beginner (if you look at my questions:-)). Although I study up to 5 hours a day, I feel I don't accomplish much. Also, I am particularly bad at finding counterexamples (perhaps because I have not taken calculus).
Coming back to my question, how much algebra do I need to cover to understand Hatcher? I have about 1 month to cover? Would you recommend some books/notes? How can I develop my skills at finding counterexamples?
AI: I imagine you're either going to look at the section on fundamental group, or the section on homology. In either case, all I would recommend as background for algebra is an understanding of what it means for a subgroup to be normal and basic properties of homomorphisms (specifically the first isomorphism theorem).
I think using algebraic topology as a motivating factor for learning algebra is a good idea, but (in my experience) it's better to learn the concepts as you go. It's difficult to see what algebraic concepts are relevant beforehand. Just read about the appropriate topics as you see them in Hatcher.
As far as the general topology is concerned, the only other thing I would add to your list is to try to understand quotient spaces. A lot of the general topology can be swept under the carpet, but quotients are extremely important. Klaus Jänich's book simply titled "Topology" has some excellent examples.
Lastly, don't feel deterred by these feelings of "not accomplishing much". Learning mathematics is highly nonlinear: sometimes one can be stuck for days or weeks, and then suddenly a moment of clarity occurs and you jump ahead in understanding. The important thing is to be constantly working.
|
H: Parent function of $\sqrt{x^2 - 4}$?
Does this particular function($\sqrt{x^2 - 4}$) have a parent, such that it can be represented as a translation, compression, rotation, stretching, etc, of the parent graph?
AI: $y=\sqrt{x^2-4}$ is the top half of the two branches of the hyperbola $x^2-y^2 = 4$, with standard form $\dfrac{x^2}{2^2} - \dfrac{y^2}{2^2}= 1$. It could be thought of as a vertical and horizontal stretch by a factor of $2$ from the "parent" hyperbola $x^2-y^2 = 1$. In that case, the parent function is $f(x)=\sqrt{x^2-1}$, and your function is $g(x)=\sqrt{x^2-4}$, with $g(x)=2f(x/2)$.
|
H: Clarification on quotient groups
I've only recently started looking at quotient groups, so I don't know if this question will make sense...
In this wiki article, $G/H$ is defined as the set of left cosets of $H$ in $G$, without any reference to whether or not $H$ is normal. In the quotient group article, however, the definition is stated only for when $H$ is a normal subgroup of $G$.
What accounts for the discrepancy between the two articles?
Are quotient groups only defined when $H \triangleleft G$?
If one is trying to determine what $G/H$ is explicitly, will this be affected if $H$ is normal or not? Are there any caveats I should anticipate if $H$ is not normal?
I do want to note that I came across a similar article, but I don't understand it all.
AI: $G/H$ is a quotient group if and only if $H$ is a normal subgroup of $G$. However, the notation $G/H$ denotes the set of left cosets of $H$ in $G$, and it does not necessarily denote a quotient group. If it also happens to be the case that $H \triangleleft G$, then $G/H$ is not just the set of left cosets of $H$ in $G$, but also a group, namely the quotient group, sometimes referred to as the factor group: the group of cosets under "coset multiplication". This multiplication is well-defined if and only if the left cosets of $H$ equal the right cosets of $H$ in $G$, i.e., if and only if $H$ is a normal subgroup of $G$.
|
H: Path of particle under gravity
If a particle is subjected to gravity then
$$\frac{\partial^2 u}{\partial\theta^2} +u = \frac{GM}{h^2} $$
where
$$ u = \frac{1}{r}$$
and
$$h = r^2\dot{\theta}.$$
If you solve this you get
$$u = A\sin\theta+B\cos\theta + \frac{GM}{h^2}.$$
But the general solution for this is just in terms of $\cos\theta$, because then you have the equation of a conic. So why is $A = 0$?
AI: You can also write the general solution to this differential equation (under a fixed set of polar coordinates) as
$$u = C_1 \cos(\theta-C_2) + \frac{GM}{h^2}$$
by defining $C_1=\sqrt{A^2+B^2}$ and $C_2=\tan^{-1}\left(\dfrac{A}B\right)$.
However, by redefining $\theta=0$ appropriately, we can have the same solution with $C_2=0$, leaving us with only a cosine term.
|
H: Archimedean Proof?
I've been struggling with a concept concerning the proof of the Archimedean property, that is, showing by contradiction that for all $x$ in the reals, there exists $n$ in the naturals such that $n>x$.
Okay so we assume that the naturals is bounded above and show a contradiction.
If the naturals is bounded above, then it has a least upper bound (supremum) say $u$
Now consider $u-1$. Since $u=\sup(\mathbb N)$ , $u-1$ is an element of $\mathbb N$. (here is my first hiccup, not entirely sure why we can say $u-1$ is in $\mathbb N$)
This implies (again not confident with this implication) that there exists a $m$ in $\mathbb N$ such that $m>u-1$. A little bit of algebra leads to $m+1>u$.
$m+1$ is in $\mathbb N$ and $m+1>u=\sup(\mathbb N)$ thus we have a contradiction.
Can anyone help clear up these implications that I'm not really comfortable with? Thanks!
AI: $u-1$ may not be an element of $\mathbb{N}$, but we can be certain that $u-1 < u$. Since $u$ is the supremum of $\mathbb{N}$ then $u-1$ cannot be an upper bound for $\mathbb{N}$. This means that there is an $n \in \mathbb{N}$ such that $u-1 < n$. But then $n+1 \in \mathbb{N}$ and $u = (u-1)+1 < n+1$, which is our contradiction! ($u$ is no longer an upper bound of $\mathbb{N}$.)
|
H: Converting Ranges
I am writing a program for the arduino that takes a number as input and displays colors based on that input. At any given time, I know the value of the variables min and max where min is the minimum value of the input range and max is the maximum value of the input range. The problem is, in order for my program to work I have to convert this range to a range of 0-80. Is there a mathematical equation I can use for this?
AI: Denote min and max by $m$ and $M$ and use
$$f(x) = 80\cdot \frac{x-m}{M-m}$$
Then, $f$ proportionally maps the interval $[m,M]$ to $[0,80]$ as you asked. (In other words, as $x$ moves from $m$ to $M$, if it covered $a\%$ of the interval $[m,M]$, the result, $f(x)$ will have covered $a\%$ of the resulting interval $[0,80]$.
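In code this is a one-liner. A Python sketch of the same mapping (the names are illustrative; on the Arduino itself, the built-in map() function performs the integer version of exactly this calculation):
def rescale(x, lo, hi, new_lo=0.0, new_hi=80.0):
    # proportional map of [lo, hi] onto [new_lo, new_hi]
    return new_lo + (new_hi - new_lo) * (x - lo) / (hi - lo)

print(rescale(50, 0, 100))      # 40.0: halfway in, halfway out
print(rescale(-10, -40, 120))   # 15.0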
|
H: limit of expression with first and second derivatives
I got the expression: $$\frac{X'(t) X''[t]+Z'(t) Z''(t)}{\sqrt{X'(t)^2+Z'(t)^2}}$$
How do I find the limit when I get to a point with $X'(t)=Z'(t)=0$? It seems that every time this happens, the limit is $\pm\sqrt{X^{\prime\prime }(t)^2+Z^{\prime\prime }(t)^2}$; is this explainable?
AI: I'd suggest the following. First, let $t_0$ be a value such that $X'(t_0)=Z'(t_0)=0$. Evaluate the following limit :
$$\begin{eqnarray} L^2 &=& \lim_{t\to t_0} \left(\frac{X'(t)\, X''(t) + Z'(t)\, Z''(t)}{\sqrt{X'(t)^2 + Z'(t)^2}} \right)^2 \\ &=& \lim_{t\to t_0} \frac{\left(X'(t)\, X''(t) + Z'(t)\, Z''(t)\right)^2}{X'(t)^2 + Z'(t)^2}\end{eqnarray}$$
Apply l'Hospital rule to get :
$$\begin{eqnarray} L^2 &=& \lim_{t\to t_0} \frac{2\, \left(X'(t)\, X''(t) + Z'(t)\, Z''(t)\right)\left(X''(t)^2 + X'(t)\, X'''(t) + Z''(t)^2 + Z'(t)\, Z'''(t)\right)}{2 X'(t)\, X''(t) + 2 Z'(t)\, Z''(t)} \\ &=& \lim_{t\to t_0} \left(X''(t)^2 + X'(t)\, X'''(t) + Z''(t)^2 + Z'(t)\, Z'''(t)\right) \end{eqnarray}$$
Knowing that $X'(t_0)=Z'(t_0)=0$, you can conclude that $L^2 = X''(t_0)^2 + Z''(t_0)^2$.
|
H: Finding distance from point to line
Knowing the position of 3 points($A, B, C$) , how can I get the distance from $A$ to the line $\overline {BC}$ if I know the angle?
AI: If you have the positions as vectors, compute $\frac{|(A-B)\times (C-B)|}{|C-B|}$.
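In NumPy this formula is a one-liner (an illustrative sketch, with the points given as 3-vectors):
import numpy as np

def dist_to_line(A, B, C):
    # |(A-B) x (C-B)| / |C-B|: parallelogram area divided by its base
    A, B, C = map(np.asarray, (A, B, C))
    return np.linalg.norm(np.cross(A - B, C - B)) / np.linalg.norm(C - B)

print(dist_to_line([0, 0, 1], [0, 0, 0], [1, 0, 0]))   # 1.0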
|
H: Characterization of $T+T^*\geq 0$, for $T$ a bounded operator on Hilbert space
(This is Exercise 3.2.1 in Pedersen's book Analysis Now.) Let $T$ be a bounded operator on a complex Hilbert space $H$. I want to prove that $T+T^*\geq 0$ if and only if $T+I$ is invertible in $\mathcal{B}(H)$ with $\|(T-I)(T+I)^{-1}\|\leq 1$.
There is a hint, namely to prove that $T+T^*\geq 0$ if and only if $\|(T+I)x\|\geq\|x\|$ and $\|(T+I)x\|\geq\|(T-I)x\|$.
I can prove $\Rightarrow$ of the hint, and I can use this to prove the conclusion of the original statement. I can also show that $\|(T-I)(T+I)^{-1}\|\leq 1$ implies $\|(T+I)x\|\geq\|(T-I)x\|$, but I am having trouble with the rest.
That is, I'm trying to show that
(1) if $T+I$ invertible with $\|(T-I)(T+I)^{-1}\|\leq 1$, then $\|(T+I)x\|\geq\|x\|$, and
(2) if $\|(T+I)x\|\geq\|x\|$ and $\|(T+I)x\|\geq\|(T-I)x\|$, then $T+T^*\geq 0$.
AI: For part (1), a proof by contradiction: suppose that $(I+T)x=y$ and $\|y\|<\|x\|.$
Then $\|(T-I)(I+T)^{-1}y\|=\|(T-I)x\|=\|y-2x\|\ge 2\|x\|-\|y\|>\|y\|,$ which contradicts our norm condition. For the second part,
suppose that $Tx=y.$ The condition $\|(I+T)x\|\ge \|x\|$ implies $(x+y,x+y)\ge (x,x)$, i.e. $\|y\|^2+(x,y)+(y,x)\ge 0.$ The second condition is equivalent to $(x+y,x+y)\ge (y-x,y-x)$, i.e. $(x,y)+(y,x)\ge 0.$ The only thing left is to check that $T+T^*\ge 0$ is equivalent to $((T+T^*)x,x)\ge 0$ for all $x$, i.e. $(y,x)+(x,y)\ge 0.$
|
H: Consequence of injectivity of projections from covering spaces
We have the theorem which says that the induced homomorphism $p_* : \pi_1(\tilde X,\tilde x_0)\rightarrow \pi_1(X,x_0)$ is injective (hence a monomorphism). Here $\tilde X$ is a covering space of $X$.
I am just trying to digest this fact (the injectivity), since we know many spaces (e.g. wedge of two circles $S^1 \vee S^1$) have covers whose fundamental group is free group on many (or even infinite) generators.
My understanding of Injectivity (as it relates to groups) is that it implies that $\pi_1(X)$ is at least as "big" as $\pi_1(\tilde X)$, so I am confused here.
Confusion: The $\pi_1(X=S^1\vee S^1)$ is free group on two generators. $\pi_1(\tilde X)$ can be free group on many generators. How is the induced map injective then ?
I am not very experienced with group theory, though I have read upon the basics before reading Hatcher/Munkres for Algebraic Topology.
AI: The free group on two generators contains subgroups which are free on as many generators as you want (and one particularly nice way of proving this is noticing that $S^1\vee S^1$ has finite wedges of circles as covering spaces :-) )
Indeed, if $L$ is a free group on two generators, one can check that the derived subgroup $[L,L]$ is in fact a free group of countably infinite rank!
|
H: Inequality holding for complex numbers in the unit disk
In Nehari's book Conformal Mapping he gives it as an exercise to prove that for $a,b\in \mathbb{C}$, $|a|, |b| <1$ we have $$\frac{|a|-|b|}{1-|ab|} \leq \left|\frac{a-b}{1-\overline{a}b}\right| \leq \frac{|a|+|b|}{1+|ab|}.$$I've been trying to prove this inequality for a while without success. Obviously the trouble is that the triangle inequality is making both numerator and denominator smaller (larger) on the left (right), making the inequality inconclusive at first glance. Things I have tried include:
Multiplying each expression by one of the denominators to get rid of the fractions
Exploiting the mapping $z\mapsto \frac{z-a}{1-\overline{a}z}$, which preserves the unit disk and takes $a\mapsto 0$.
So far no success. Does someone have some advice?
AI: Why not a straightforward approach? E.g.
\begin{align}
\left|\frac{a-b}{1-\bar{a}b}\right|^2-\left(\frac{|a|-|b|}{1-|ab|}\right)^2&=
\frac{|a|^2+|b|^2-(\bar{a}b+a\bar{b})}{1+|ab|^2-(\bar{a}b+a\bar{b})}-\frac{|a|^2+|b|^2-2|ab|}{1+|ab|^2-2|ab|}=\\
&=\frac{(1-|a|^2)(1-|b|^2)(2|ab|-\bar{a}b-a\bar{b})}{|1-\bar{a}b|^2(1-|ab|)^2}\geq0,
\end{align}
since $|a|,|b|<1$ and $|ab|\geq \mathrm{Re}(\bar{a}b)$.
Similarly,
\begin{align}
\left(\frac{|a|+|b|}{1+|ab|}\right)^2-\left|\frac{a-b}{1-\bar{a}b}\right|^2&=\frac{|a|^2+|b|^2+2|ab|}{1+|ab|^2+2|ab|}-
\frac{|a|^2+|b|^2-(\bar{a}b+a\bar{b})}{1+|ab|^2-(\bar{a}b+a\bar{b})}=\\
&=\frac{(1-|a|^2)(1-|b|^2)(2|ab|+\bar{a}b+a\bar{b})}{|1-\bar{a}b|^2(1+|ab|)^2}\geq0.
\end{align}
|
H: Lines and planes at space
First of all, sorry for my poor English.
Can someone please help me?
How can I find the parametric equation of the line that passes through $A=(1, -2, -1)$ and meets both of the skew lines
$r:$
$ x= z -1$
$ y= 2z - 3 $
$s:$
$x= z-2$
$y= -z + 1$
Thanks!
AI: The parametric equations for a line passing through $ \ (1, -2, -1) \ $ are $ \ x = 1 + at \ , \ y = -2 + bt \ , \ z = -1 + ct \ . $ This will hold true at the points where this line meets lines $ \ r \ $ and $ \ s \ $ (assuming it does intersect them both -- I presumed that the problem is constructed so that this happens, though it shouldn't be expected to do so in general).
Looking at line $ \ r \ $ , the intersection is at some value of the parameter $ \ r \ $ such that
$$ \begin{array}{cc}1 + ar \ = \ (-1 + cr) -1 \\-2 + br \ = \ 2 \cdot (-1 + cr) -3\end{array} \ \Rightarrow \ \begin{array}{cc}ar \ = \ cr -3 \\ br \ = \ 2cr -3\end{array} \ .$$
Doing the same for line $ \ s \ $ produces
$$ \begin{array}{cc}1 + as \ = \ (-1 + cs) -2 \\-2 + bs \ = \ -(-1 + cs) + 1\end{array} \ \Rightarrow \ \begin{array}{cc}as \ = \ cs -4 \\ bs \ = \ -cs +4\end{array} \ , $$
from which we see that $ \ a = -b \ $ , since $ \ s \neq 0 \ . $ Inserting this result back into the first pair of equations yields
$$ \begin{array}{cc}ar \ = \ cr -3 \\ -ar \ = \ 2cr -3\end{array} \ \Rightarrow \ 3cr = 6 \ \Rightarrow \ c \ = \frac{2}{r} \ . $$
From this, we find $ \ ar \ = \ 2 - 3 \ = \ -1 \ $ and $ \ br \ = \ 4 - 3 \ = \ 1 \ . $ We now have the relation among the coefficients
$$ r \ = \ \frac{-1}{a} \ = \ \frac{1}{b} \ = \ \frac{2}{c} \ \Rightarrow \ 2a \ = -2b \ = -c . $$
We cannot pin down specific numerical values for these coefficients, but it is not actually necessary, since any non-zero scalar multiplication of values for them is equivalent to a non-zero scalar multiplication of the parameter $ \ t \ $ . Another way of saying this is that a direction vector for the line in question is $ \langle 1, -1, -2 \rangle \ . $
ADDENDUM: If we use the scaling for $ \ a = 1 \ $ (and thus $ \ b = -1 \ , \ c = -2 \ $ ), we can quickly determine that $ \ r = -1 \ $ , with the intersection point on that line being $ \ (0, -1, 1 ) \ , $ and that $ \ s = -\frac{4}{3} \ $ , with that intersection point located at $ \ ( -\frac{1}{3} , -\frac{2}{3} , \frac{5}{3} ) \ . $ These results are compatible with the relations given for lines $ \ r \ $ and $ \ s \ . $
|
H: What is combinatorics? How is it related to Ramsey theory? What is the background needed to study it?
What is combinatorics? How is it related to Ramsey theory? What is the background needed to study it?
When I was reading some reviews of books on Ramsey theory, many people mentioned this branch of mathematics, said that it's related to Ramsey theory, and said that it's divided into two parts: commutative combinatorics and non-commutative combinatorics.
So, what is combinatorics? How is it related to Ramsey theory? Is it related to groups or mathematical logic? And what is the required background to study it?
AI: Check out the Wikipedia entry on Combinatorics. It will include a good distillation of the field(s), since "Combinatorics" is huge and encompasses seemingly unrelated "subfields". The entry will also point to some good references and links. See also Gentle Introduction to Ramsey's Theory , and once you've done that, review Ramsey's Theory, more technical. See also this earlier post: What is Ramsey's Theory? What is its own importance in maths?
As Alex suggests, one of the nice things about combinatorics is that it doesn't require much in the way of prerequisites: you can "dig in" and run with it. You can study sub-topics within Combinatorics that are fairly well-contained, like graph theory, and doing so doesn't require knowledge of every thing that counts as "Combinatorics." See also this earlier post: What is Combinatorics?.
One of my favorite texts in Combinatorics is Peter Cameron's text Combinatorics. But if you search Math.SE using the tags "reference-request" and "Combinatorics", you'll find a host of previously recommended texts.
|
H: Relation between differentiable,continuous and integrable functions.
I have been doing lots of calculus these days and I want to confirm with you my understanding of an important concept of calculus.
Basically, in the initial phase, students assume that integration and differentiation are always associated with each other, i.e., that a function which is integrable is also differentiable at the same time. But having explored it more, I found out that's not true at all and does not always hold. Many times (or should I say infinitely many times) a function can be integrable on an interval while not being differentiable on that same interval (and vice versa).
What I want to ask is this: I recently read a conclusion on the above-mentioned concept, which is:
{Differentiable functions} $\subset$ {Continuous functions} $\subset$ {Integrable functions}
i.e., each is a proper subset of the next. Now, "Differentiable functions set" is a proper subset of "Continuous functions set"... that is very well understood without a doubt, since every differentiable function is continuous but a continuous function may or may not be differentiable. I have a problem with the next relation, which is:
"Continuous functions set" is a proper subset of "Integrable functions set"...Why is this so???
I am just not able to visualize this. I know that a bounded continuous function on a closed interval is integrable, well and fine, but there are unbounded continuous functions too, with domain $\mathbb{R}$, which we can't say will be integrable or not.
So, my question is simple. Why are there more elements in the "Integrable functions set" than in the "Continuous functions set" (here by elements I mean integrable and continuous functions, of course)? So, this is it... Can anyone please help me understand this in as simple words as possible? I know I need some kind of visualization, which I guess is easy, but I could not make it out on my own, so I turned to you. Thanks for any help.
AI: Let $g(0)=1$ and $g(x)=0$ for all $x\ne 0$. It is straightforward from the definition of the Riemann integral to prove that $g$ is integrable over any interval, however, $g$ is clearly not continuous.
The conditions of continuity and integrability are very different in flavour. Continuity is something that is extremely sensitive to local and small changes. It's enough to change the value of a continuous function at just one point and it is no longer continuous. Integrability on the other hand is a very robust property. If you make finitely many changes to a function that was integrable, then the new function is still integrable and has the same integral. That is why it is very easy to construct integrable functions that are not continuous.
|
H: A finite-type quiver has no self-loops
I am reading through Etingof et al's notes on representation theory, and they assert in Exercise 5.4(c) on page 80 that a finite-type quiver has no self-loops. I think the way to show this is to consider the simplest case: a quiver with one vertex and one edge - the self-loop. Representations of this quiver correspond to a choice of vector space $V$ and any linear endomorphism $T: V \to V$. I should be able to simply write down an infinite family of nonisomorphic indecomposable representations, but I'm not very good at seeing which representations are indecomposable/nonisomorphic. Can someone give me a hint?
AI: Here is one infinite family of pairwise non-isomorphic indecomposables. Take $V$ to be $F^n$ (where $F$ is your base field) and take for $T$ a nilpotent matrix with a single Jordan block: any submodule must contain the (unique up to scalars) vector in the kernel, so these are indecomposable.
Here is another, if the base field is infinite: take for $T$ the scalar matrix $T=a$ for $a \in F$, acting on a one-dimensional $V=F$. These are simple and non-isomorphic. The real issue here is the infinite dimensionality of the path algebra...
|
H: Help with implicit differentiation: $e^{9x}= \sin(x+9y)$
Find $\;\dfrac{dy}{dx}\;$ given $\;e^{9x}= \sin(x+9y)$
the answer is $\;\displaystyle\frac{e^{9x}}{\cos(x+9y)}- \frac{1}{9}$.
Can you show the process of how this is worked?
thanks.
AI: Differentiate both sides of your equation using chain rule
$$
\left( e^{9x} \right )' = \left ( \sin (x+9y) \right )'\\
9e^{9x} = \cos (x+9y) \cdot (x + 9y)' = \cos (x+9y) \cdot (1+9y')
$$
From which you can find that
$$
1+9y' = \frac {9e^{9x}}{\cos (x+9y)} \\
y' = \frac {e^{9x}}{\cos (x+9y)}-\frac 19
$$
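As a check, the implicit derivative can be verified with SymPy (a sketch using $dy/dx=-F_x/F_y$ for $F(x,y)=e^{9x}-\sin(x+9y)$):
import sympy as sp

x, y = sp.symbols('x y')
F = sp.exp(9 * x) - sp.sin(x + 9 * y)   # F(x, y) = 0 defines y implicitly
dydx = -sp.diff(F, x) / sp.diff(F, y)   # = (9 e^{9x} - cos(x+9y)) / (9 cos(x+9y))
print(sp.simplify(dydx))                # equivalent to e^{9x}/cos(x+9y) - 1/9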
|
H: Derivation of Pythagorean Triple General Solution Starting Point:
I was reading on proof wiki about the derivation of the general solution to the pythagorean triple diophantine equation:
$$
x^2 + y^2 = z^2,
$$
where $x,y,z > 0$ are integers.
I came across the following general solution to the primitive function:
\begin{align*}
x &= 2mn\\
y &= (m^2 - n^2)\\
z &= (m^2 + n^2)\\
\end{align*}
for coprime $m,n$.
I looked at the proof that it works (if you square $x$ and $y$ and add them, the sum does indeed equal $z^2$).
My one qualm was, how the hell did they start with that? For example, is there a natural way by starting with the original problem that you end up with the expression above?
I noticed that when attempting to derive the general solution myself, from start to finish,
I would begin by noting I can find all pairs such that $z$ and $y$ differ by a constant $k$... but I cannot make that final leap to end up with the equation above so that for any given $k$ you can find a solution.
Thanks ahead of time!
AI: There are a number of ways to find those expressions organically. One way is via complex numbers (and this method can be found in the first chapter of Marcus' Number Fields).
Suppose $x^2 + y^2 = z^2$, with $x,y,z\in\Bbb N$, and also suppose that $x$, $y$, and $z$ have no common factor. Looking mod $4$, we can tell that $z$ must be odd: the only squares mod $4$ are $0$ and $1$, so the only possible equations are $0^2 + 0^2\equiv 0^2\mod 4$ and $0^2 + 1^2 \equiv 1^2\mod 4$. But in the first case, everything is divisible by $4$, so we don't have a primitive triple.
Proceeding, we can factor the left hand side and get $(x + iy)(x - iy) = z^2$. So, we now have a problem concerning $\Bbb Z[i] = \{a + bi\mid a,b\in\Bbb Z\}$. One can show that $\Bbb Z[i]$ has unique factorization of elements into primes (for a precise definition of all this, look to any abstract algebra text), so suppose some prime $p$ divides $x + iy$. $p$ clearly divides $z^2$ an even number of times, so we want to show that $p$ does not divide $x - iy$. If it did, we would have $p\mid (x + yi) + (x - yi)$, or $p\mid 2x$. Since $p$ divides $z$ as well, and $x$ and $z$ are relatively prime, we know that we can find $n,m\in\Bbb{Z}$ such that $2xm + zn = 1$ (recall that $z$ is odd). However, this implies that $p$ divides $1$, but the only elements dividing $1$ in $\Bbb{Z}[i]$ are $\pm 1$, $\pm i$, none of which are primes - so we have a contradiction. Therefore, by unique factorization, we must have $x + iy = u\alpha^2$, where $u\in\{\pm1,\pm i\}$. Writing $\alpha = m + in$, we have $\alpha^2 = m^2 - n^2 + i2mn $, so $x = \pm \left(m^2 - n^2\right)$, $y = \pm 2mn$, and solving for $z$, we get $z = \pm\left(m^2 + n^2\right)$.
If $m$ and $n$ aren't coprime, we don't obtain a primitive triple, as each of $x$, $y$, and $z$ will have a factor in common.
There are numerous other ways of deriving these ways of representing $x$, $y$, and $z$ as well, some of which include parameterizing the unit circle or moving $y^2$ to the other side and writing $x^2 = (z + y)(z - y)$.
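As an aside, the parameterization is easy to put to work computationally. A small sketch (my own; it adds the standard opposite-parity condition on $m,n$, without which $m,n$ both odd make every coordinate even, hence non-primitive):

```python
from math import gcd

def primitive_triples(bound):
    """Yield (x, y, z) = (2mn, m^2 - n^2, m^2 + n^2) with z <= bound,
    for coprime m > n >= 1 of opposite parity."""
    m = 2
    while m * m + 1 <= bound:
        for n in range(1, m):
            if gcd(m, n) == 1 and (m - n) % 2 == 1:
                z = m * m + n * n
                if z <= bound:
                    yield 2 * m * n, m * m - n * n, z
        m += 1

for x, y, z in primitive_triples(50):
    assert x * x + y * y == z * z
    print(x, y, z)  # 4 3 5, 12 5 13, 8 15 17, 24 7 25, ...
```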
|
H: prime notation clarification
When I first learned calculus, I was taught that $'$ for derivatives was only a valid notation when used with function notation: $f'(x)$ or $g'(x)$, or when used with the coordinate variable $y$, as in $y'$.
But I have seen on a number of occasions, both here and in the classroom, that it will be used with an expression, e.g. $(x+\frac{1}{x})'$ to mean $\frac{d}{dx}(x+\frac{1}{x})$. It has always been my understanding that this notation is not considered valid because it doesn't indicate which independent variable the expression is being differentiated with respect to. E.g. in $(ax+bx^2)'$, the variable could be $a$, $b$, or $x$. This problem also exists with $y'$, but I figured this was an exception because $y$ and $x$ usually represent the coordinate axes, so it can be assumed that the independent variable is $x$ when taking $y'$.
So is this notation valid, just putting a $'$ at the end of an expression?
AI: What you're seeing is a "shorthand" an instructor may use in the process of computing the derivative of a function with respect to $x$. Usually when you see something like $(ax + bx^2)'$, it's assumed from the context that we are taking the derivative of the expression with respect to $x$. That is, "$(ax + bx^2)'$" is taken to mean "evaluate $\,\frac d{dx}(ax + bx^2)$", just as one assumes from context that $y'$ refers to the derivative, $f'(x)$, of $y = f(x)$.
I prefer to stick with $f'(x)...$ or $y'$, using $\frac d{dx}(\text{some function of x})$ when evaluating the derivative of a function with respect to $x$, particularly when trying to convey information to another person. (On scratch paper, or in my own work, I might get a little informal and slip into using a "prime" to abbreviate what I'm doing.) But I would prefer the more formal or "official" conventions/notations were used in "instructive contexts", to avoid confusion or possible ambiguity.
|
H: Pointwise convergence and uniform convergence
Suppose that $f$ has a uniformly continuous derivative. We define $\ f_n: \Bbb R\to\Bbb R $ by
$$\ f_n(x) = n \left( f \left(x + \frac{1}{n}\right) - f(x)\right) $$
Find the pointwise limit of $f_n$. Prove that the sequence $f_n$ converges uniformly to this limit.
AI: What have you done so far?
Hint: for the pointwise convergence, write
$$
f_n(x) = \frac{f(x+h_n)-f(x)}{h_n}
$$
with $h_n=\frac{1}{n}\to 0^+$
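As a numerical sanity check (not a substitute for the proof), one can watch the sup-norm distance between $f_n$ and $f'$ shrink for a concrete $f$ with uniformly continuous derivative; the choice $f=\sin$ here is my own:

```python
import numpy as np

f, fprime = np.sin, np.cos            # f' = cos is uniformly continuous
x = np.linspace(-10, 10, 2001)

for n in (10, 100, 1000):
    fn = n * (f(x + 1.0 / n) - f(x))  # f_n(x) = n*(f(x + 1/n) - f(x))
    print(n, np.abs(fn - fprime(x)).max())
# the sup-norm error shrinks (roughly like 1/n), as uniform convergence predicts
```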
|
H: Systems of Quadratic Equations Question
looking for help on this question.
Solve the following systems of equations algebraically using the quadratic formula.
$$\begin{align} y& =-x^2+2x+9\\ y& =-5x^2+10x+12\end{align}$$
Any help would be appreciated!
AI: Hint: put both equations equal to one another. You'll have a quadratic equation.
$$-x^2+2x+9 = -5x^2+10x+12$$
Now simplify (combine "like terms": get all terms on one side (say left hand side) equal to $0$ (on the right-hand side).
Factor and/or use the quadratic formula to find any solution(s), if they exist, and they do exist: there are two solutions for $x$, each of which is a "zero" of the equation.
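Once simplified, the quadratic can also be handed to a CAS; a small SymPy sketch (mine, not part of the hint):

```python
import sympy as sp

x = sp.symbols('x')
sols = sp.solve(sp.Eq(-x**2 + 2*x + 9, -5*x**2 + 10*x + 12), x)
print(sols)  # [1 - sqrt(7)/2, 1 + sqrt(7)/2]

# y then comes from either original equation (both roots give y = 33/4):
for s in sols:
    print(sp.simplify(-s**2 + 2*s + 9))
```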
|
H: Prove the Equality of Two Integrals
This is what I've done so far:
$V_1 = \pi\int_0^af(x)^2dx = -\pi\int_0^by^2 (1/f'(x))\ dy = -\pi\int_0^by^2 (1/f'(g(y)))\ dy = -\pi\int_0^by^2 g'(y)\ dy$
Integrating by parts:
$u=y^2,\ du=2y\ dy, \ v=g(y), \ dv = g'(y)\ dy$
$y^2g(y)|_0^b -2\pi\int_0^by\ g(y)\ dy = -ab^2 -2\pi\int_0^by\ g(y)\ dy = -ab^2-V_2$
so it looks like $V_1 = -ab^2-V_2$ which looks wrong.
I've also tried other ways of integrating, substituting and I always end up with a proof that $V_1 \ne V_2$. Can someone point me to the right direction?
AI: You've made two mistakes. $y^2g(y)|_0^b=b^2\cdot0-0^2\cdot a=0$, not $b^2a$. So that leaves you with $V_1=-V_2$ after your integration by parts. But it also seems you're neglecting the minus sign before integrating by parts. Actually, you are neglecting the $\pi$ in places too, as the expression should be $-\pi\, y^2g(y)|_0^b+2\pi\int_0^by\,g(y)\,dy$.
Make these two corrections and you have it.
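If it helps to see the corrected identity numerically, here is a small check (my own sketch; the concrete $f$ is an assumption, chosen to match the setup the boundary terms suggest: $f$ decreasing on $[0,a]$ with $f(0)=b$, $f(a)=0$, and $g=f^{-1}$):

```python
import numpy as np
from scipy.integrate import quad

a, b = 2.0, 3.0
f = lambda x: b * (1 - x / a)    # decreasing, f(0) = b, f(a) = 0
g = lambda y: a * (1 - y / b)    # its inverse

V1, _ = quad(lambda x: np.pi * f(x) ** 2, 0, a)      # disk integral
V2, _ = quad(lambda y: 2 * np.pi * y * g(y), 0, b)   # shell integral
print(V1, V2)  # both ~ 18.8495..., i.e. 6*pi
```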
|
H: Radius of convergence of $\sum_{n = 0}^{\infty} (a_1^n + \dots + a_k^n)z^n$, where $|a_1| = |a_2| = \dots = |a_k| = 1$
Here's the problem: Find the radius of convergence of $f(z) = \sum_{n = 0}^{\infty} (a_1^n + \dots + a_k^n)z^n$, where $|a_1| = |a_2| = \dots = |a_k| = 1$, and $a_i \in \mathbb{C}$.
Since the series in question is the sum, as $j$ ranges from $1$ to $k$, of the series $f_j(z) = \sum_{n = 0}^{\infty} a_j^n z^n$, you see right away that the radius of convergence $R$ of $f(z)$ is at least $1$.
I'm guessing that maybe the radius of convergence $R$ is exactly $\displaystyle 1$.
I think you might be able to get the answer if you can show that the set $\{a_1^n + \dots + a_k^n : n \ge 0\}$ either has a non-zero limit point inside $\{ z : |z| \le k \}$, or else attains the same non-zero value infinitely many times, because then in either case you can find a subsequence $\{n_j\}$ with $|a_1^{n_j} + \dots + a_k^{n_j}|$ bounded away from $0$, and so $|a_1^{n_j} + \dots + a_k^{n_j}|^{1/n_j} \to 1$ as $n_j \to \infty$.
Any thoughts about how to proceed with this idea? Or, is there a far better way to look at this?
Edit
(*) For $a_j$ and any $N > 0$, you can find a $n > N$ such that $a_j^n$ is arbitrarily close to $1$. You can do this by writing $a_j^n = e^{i n \theta_j}$ and looking at $n \theta _j$ modulo $2 \pi$.
Let $b_i = a_i / a_1$.
It is not true that we have any of the limits $\lim \limits_{n \to \infty}a_1^n + a_2^n = 0, -1, -2, \dots$. This is because we can find a subsequence $\{n_k\}$ such that $\lim \limits_{n \to \infty}a_1^{n_k} = 1$. Assuming to the contrary for a contradiction, for this subsequence one would have to have $\lim \limits_{n \to \infty} a_1^{n_k}(1 + b_2^{n_k}) = 0, -1, -2, \dots$ so that $\lim \limits_{n \to \infty} (1 + b_2^{n_k}) = 0, -1, -2, \dots$ and therefore $\lim \limits_{n \to \infty} b_2^{n_k} = -1, -2, -3, \dots$ which is impossible by (*).
Similarly, it is not true that we have any of the limits $\lim \limits_{n \to \infty}a_1^n + a_2^n + a_3^n= 0, -1, -2, \dots$. Because otherwise, we can find a subsequence $\{n_k\}$ such that $\lim \limits_{n \to \infty}a_1^{n_k} = 1$, and then $\lim \limits_{n \to \infty} (1 + b_2^{n_k} + b_3^{n_k}) = 0, -1, -2, \dots$, and in this case $\lim \limits_{n \to \infty} b_2^{n_k} + b_3^{n_k}= -1, -2, -3, \dots$ which is impossible.
Continuing like this, we see that it is not true that we have any of the limits $\lim \limits_{n \to \infty}a_1^n + a_2^n + \dots + a_k^n= 0, -1, -2, \dots$. In particular we either have a non-zero limit point, or a non-zero value is attained infinitely many times.
Does this appear to be correct?
AI: The radius of convergence is $1$. Note that
$$
\sum_{n = 0}^{\infty} (a_1^n + \dots + a_k^n)z^n \;=\; \frac{1}{1-a_1z} + \cdots + \frac{1}{1-a_kz}
$$
whenever $|z| < 1$. Since this function has poles at $z = 1/a_i$ for each $i$, it cannot be extended holomorphically to a disk of radius larger than one, so the radius of convergence is precisely one.
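To see the root test land on $1$ concretely (a numerical sketch; the specific choice of the $a_j$ as $k$-th roots of unity is mine):

```python
import numpy as np

k = 5
a = np.exp(2j * np.pi * np.arange(k) / k)  # the k-th roots of unity

# c_n = a_1^n + ... + a_k^n equals k when k | n and 0 otherwise, so
# |c_n|^(1/n) = k^(1/n) -> 1 along the multiples of k:
for n in (50, 100, 200):
    cn = np.sum(a ** n)
    print(n, abs(cn) ** (1.0 / n))
# -> 1.0327..., 1.0162..., 1.0080...: the limsup is 1, so R = 1
```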
|
H: Lines and Planes
can someone please help me? How can I find the equation of the plane which contains the line
$r:$ $ x= 2 + 2\lambda$
$ y= 3 - \lambda$
$ z= -3\lambda$
($\lambda$ is a Real number)
and makes an angle of $\pi$/4 with the line
$ s:$ $x= 1 - 2t $
$y= 2 - t$
$z=3 - t$
(t is a Real number)
Thanks a lot!
AI: Let $\vec n$ be a vector normal to the plane.
$r$ is parallel to the vector $\vec v_r = (2,-1,-3)$.
$\vec n\cdot \vec v_r = 0$.
$s$ is parallel to the vector $\vec v_s = (-2,-1,-1)$.
$\displaystyle |\vec n \cdot \vec v_s| = \frac{\Vert \vec n\Vert \Vert \vec v_s\Vert}{\sqrt 2}=\Vert \vec n\Vert \sqrt 3$, since $\Vert \vec v_s\Vert = \sqrt 6$ and the angle between $s$ and $\vec n$ is $\frac\pi2 - \frac\pi4 = \frac\pi4$.
Breaking $\vec n$ down into its components turns this into a system of two equations with three unknowns, which is exactly what you need.
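A sketch of how one might finish by machine (SymPy; fixing $c=1$ is my own normalization, valid because $\vec n$ is only determined up to scale, and squaring the angle condition avoids the sign ambiguity):

```python
import sympy as sp

a, b, c = sp.symbols('a b c', real=True)
n = sp.Matrix([a, b, c])
vr = sp.Matrix([2, -1, -3])    # direction of r
vs = sp.Matrix([-2, -1, -1])   # direction of s

eq1 = sp.Eq(n.dot(vr), 0)                    # n is perpendicular to r
eq2 = sp.Eq(n.dot(vs) ** 2, 3 * n.dot(n))    # squared form of |n.vs| = |n|*sqrt(3)
print(sp.solve([eq1, eq2, sp.Eq(c, 1)], [a, b, c], dict=True))
# two real normals: a = -10 +/- sqrt(126), with c = 1
```

Each normal, together with the point $(2,3,0)$ on $r$, then pins down one plane $ax+by+cz=d$.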
|
H: Finding the derivative of a relational problem
I am self studying some calculus and I have gotten really stuck! I thought I had the right idea but I keep getting the answer totally wrong. I am sure I am missing something important. Here is the problem:
For the equation $6x^{\frac{1}{2}}+12y^{-\frac{1}{2}} = 3xy$, find an equation of the tangent line at the point $(1, 4)$.
Here is my work:
$$6x^{\frac{1}{2}}+12y^{-\frac{1}{2}} = 3xy$$
$$6(x^{\frac{1}{2}}+2y^{-\frac{1}{2}}) = 3xy$$
$$2(x^{\frac{1}{2}}+2y^{-\frac{1}{2}}) = xy$$
$$2x^{\frac{1}{2}}+4y^{-\frac{1}{2}} = xy$$
$$\ln(2x^{\frac{1}{2}})+\ln(4y^{-\frac{1}{2}}) = \ln(xy)$$
$$\ln2 + \ln x^{\frac{1}{2}}+\ln4 + \ln y^{-\frac{1}{2}} = \ln x + \ln y$$
$$\ln2 + \frac{1}{2}\ln x+\ln4 -\frac{1}{2} \ln y = \ln x + \ln y$$
$$\ln2 + \ln4 + \frac{1}{2}\ln x - \ln x = \ln y + \frac{1}{2} \ln y$$
$$\frac{2}{3}\ln2 + \frac{2}{3}\ln4 - \frac{1}{3}\ln x = \ln y$$
$$e^{\frac{2}{3}\ln2 + \frac{2}{3}\ln4 - \frac{1}{3}\ln x} = y$$
$$\frac{2}{3}\ln2 + \frac{2}{3}\ln4 - \frac{1}{3}\ln x = \ln y$$
$$\frac{1}{3x} = y'\frac{1}{y}$$
$$y' = \frac{y}{3x}$$
$$y' = \frac{e^{\frac{2}{3}\ln2 + \frac{2}{3}\ln4 - \frac{1}{3}\ln x}}{3x}$$
$$y - 4 = \frac{4}{3}(x-1)$$
$$y = \frac{4}{3}x-\frac{4}{3} + 4$$
$$y = \frac{4}{3}x+\frac{8}{3}$$
The Solution I found was: $y = \frac{4}{3}x+\frac{8}{3}$ but this is wrong! Can you tell me where I have gone wrong?
AI: From the fourth line to the fifth, there is a mistake. You assumed that $\ln(a+b)=\ln(a)+\ln(b)$. This is not true.
How to do it:
Start from $6x^{1/2}+12y^{-1/2}=3xy$. It is probably a good idea to do no algebraic manipulation. But let's divide through by $3$, getting $2x^{1/2}+4y^{-1/2}=xy$.
Don't wait, differentiate, using implicit differentiation.
We get
$$2\cdot\frac{1}{2}x^{-1/2}-4\cdot\frac{1}{2}y^{-3/2}\frac{dy}{dx}=x\frac{dy}{dx}+y.$$
Now substitute our values of $x$ and $y$ to find $\frac{dy}{dx}$ at the point $(1,4)$.
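For those who want to check the arithmetic, here is a short SymPy version of the implicit differentiation (my own sketch; the relation and the point $(1,4)$ are from the problem):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')(x)

F = 6 * sp.sqrt(x) + 12 / sp.sqrt(y) - 3 * x * y   # the relation, as F = 0
yprime = sp.solve(sp.diff(F, x), sp.Derivative(y, x))[0]

slope = sp.simplify(yprime.subs(y, 4).subs(x, 1))
print(slope)  # -12/5, so the tangent at (1, 4) is y = 4 - (12/5)*(x - 1)
```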
|
H: What is some application of variance in actuarial science/insurance risk?
What is some application of variance in actuarial science/insurance risk?
I learn that a lot of applied math book of actuarial science have variance of probability distribution frequently.
Don't the books already have its probability for different problems?
AI: Variance is one of the simplest measures of dispersion we have. Dispersion is important because it is a measure of how far off our random variable can be from the mean, and its calculation is nice and smooth (as opposed to, say, absolute deviation).
Given a measure of dispersion, you can start to think about risk loads. Since insurance contracts usually do not trade in liquid markets, insurance companies have to set a fixed price for them. This usually entails an estimate of expected losses, plus expenses, plus a "reasonable" provision for profit. A multiple of some measure of dispersion, like variance, can be a good starting point for a profit provision.
Note that current theory is moving away from dispersion and towards tail risk. This idea says that it is the extreme losses, "in the tail", which matter more for thinking about risk and profit. In liquid markets, dispersion makes sense because you can trade around the price, and so volatility (i.e. dispersion) is your main concern. But if you cannot reprice the risk, it makes sense to focus on the outcomes that do you the most harm, and that would be the tail risk.
EDIT: For example, if $X$ is a positive random variable representing an insurance loss, then:
$E(X)+\alpha Var(X)$
is a possible formula for a risk loaded pure premium (excluding expenses), where:
$E(X)$ is the expected loss of the contract
$Var(X)$ is the variance of the contract, and
$\alpha$ is a load factor to apply to the variance to change variance to profit load.
Note that there will be no consensus on what $\alpha$ should be, beyond what will sell in the market. Also, this model is disputable: some people will prefer using standard deviation (the square root of variance) because its unit are the same as the mean. But this gets us into a complex discussion on risk theory that is beyond the scope of your question.
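As a toy numerical illustration of the formula above (all numbers here are made up for the example, including the loss distribution and $\alpha$):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.lognormal(mean=8.0, sigma=1.2, size=100_000)  # simulated losses

alpha = 5e-6  # load factor: in practice set by the market, not by theory
premium = X.mean() + alpha * X.var()
print(X.mean(), alpha * X.var(), premium)  # expected loss, profit load, premium
```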
|
H: $M$ compact iff every open additive cover of $M$ contains $M$
Call an open cover $\mathscr{U}$ of a metric space $M$ an additive cover if whenever $U,V\in\mathscr{U}$, we have $U\cup V\in \mathscr{U}$. Prove that $M$ is compact if and only if every additive open cover of $M$ contains $M$.
To begin with, I'm quite confused with the phrase "every additive open cover of $M$ contains $M$". Doesn't every open cover of $M$ contain $M$ by definition?
AI: The cover is a set of sets. To say that it contains $M$ in this context means that $M$ is one of the elements of $\mathscr U$, i.e., one of the sets in the cover. The definition of cover only says that the union of the elements (sets) of the cover contains $M$ as a subset (not as an element).
One problem is the ambiguity of the word "contains," which sometimes refers to set membership and other times refers to containment of subsets, a very important distinction to make.
As for the problem itself, here's how one direction can be proved: You can use induction to prove that $\mathscr U$ contains (as an element) every finite union of elements from $\mathscr U$. You can use the open cover definition of compactness to show that $M$ is a finite union of elements from $\mathscr U$.
Added: For the other direction, if you start with an arbitrary cover $\mathcal A$, let $\mathscr U$ be the set of finite unions of elements of $\mathcal A$.
|
H: Help with differentiation of natural logarithm
Find $\;\dfrac{dy}{dx}\;$ given $y=\frac{\ln(8x)}{8x}$.
The answer is $\;\dfrac{1-\ln(8x)}{8x^2}\;$.
Can you show the process of how this is worked?
Thanks.
AI: Here we can use the quotient rule and the chain rule:
Quotient rule: $\quad$ If $y = \dfrac{f(x)}{g(x)}$, then $$\dfrac{dy}{dx} = \dfrac{f'(x)g(x) - f(x)g'(x)}{g^2(x)}\tag{(1) quotient rule}$$
$\displaystyle f(x) = \ln(8x), \; f'(x) = \frac 1{8x}\cdot \frac d{dx}(8x) = \dfrac 8{8x} = \dfrac 1x\quad\text{by chain rule}$
$\displaystyle g(x) = 8x, \; g'(x) = 8.$
Now, just substitute each of $f(x)$, $g(x)$, $f'(x)$, $g'(x)$, and $g^2(x) = (8x)^2$ into $(1)$, simplify, and you're done!
$$\frac {dy}{dx} = \frac{(8x)\frac{1}{x} - \left(\ln(8x)\right)\cdot 8}{64x^2} = \dfrac{(1 - \ln(8x))}{8x^2}$$
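A one-line symbolic check (my own SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.simplify(sp.diff(sp.log(8 * x) / (8 * x), x)))
# equivalent to (1 - log(8x))/(8x^2)
```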
|
H: What could be a homeomorphism from the circle to a triangle?
I'm looking for a bijection from a circle to a triangle that is continuous with a continuous inverse. What could be one?
AI: I will assume that by circle you mean the curve, not the disk, and also that by triangle you mean the "curve" consisting of three line segments. In a remark at the end, we mention how to modify things if we are dealing with a disk and a triangle-with-interior.
We are given a circle $C$ and a triangle $T$. Draw the incircle $C^\ast$ of $T$. There is a natural bijection $\varphi$ from $C$ to $C^\ast$ (scale and translate).
We will produce a bijection from $C^\ast$ to $T$. Let $O$ be the centre of $C^\ast$, and let $P$ be any point on $C^\ast$. Draw the half-infinite ray $OP$. This ray meets the triangle at a uniquely defined point. Let $\psi(P)$ be that point. Then $\psi$ is a continuous bijection with continuous inverse from $C^\ast$ to $T$, so $\psi\circ\varphi$ is a continuous bijection from $C$ to $T$, with continuous inverse.
Remark: Using a similar idea, we can find a continuous bijection with continuous inverse from any disk $C$ to the figure made up of a triangle and its interior. All we need to do is to modify the definition of $\psi$. Let $O$ be the centre of the disk $C^\ast$, and let $P$ be a point on the boundary of that disk. Suppose that the ray $OP$ meets the boundary of the triangle at $Q$, and let $X$ be a point in the disk and on the line segment $OP$. Map $X$ to the point $Y$ on the line segment $OP$ such that $\frac{OY}{OX}=\frac{OQ}{OP}$.
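For the curious, here is a small numerical sketch of $\psi$ (my own; the triangle's vertices are arbitrary, chosen so the origin is inside). Writing the triangle as an intersection of half-planes $\vec n_i\cdot p\le c_i$ with $c_i>0$, the ray from the origin in direction $d$ meets the boundary at the smallest positive $c_i/(\vec n_i\cdot d)$:

```python
import numpy as np

verts = np.array([[1.5, 0.0], [-1.0, 1.2], [-0.5, -1.5]])  # origin inside

# Each edge lies on a line n.p = c; orient so the origin gives n.0 = 0 < c.
ns, cs = [], []
for i in range(3):
    p, q = verts[i], verts[(i + 1) % 3]
    n = np.array([q[1] - p[1], p[0] - q[0]])  # a normal to edge pq
    c = n @ p
    if c < 0:
        n, c = -n, -c
    ns.append(n); cs.append(c)
ns, cs = np.array(ns), np.array(cs)

def psi(t):
    """Map the circle point at angle t radially onto the triangle boundary."""
    d = np.array([np.cos(t), np.sin(t)])
    dots = ns @ d
    r = np.min(cs[dots > 0] / dots[dots > 0])  # first edge the ray hits
    return r * d

for t in np.linspace(0.0, 2 * np.pi, 7):
    print(np.round(psi(t), 3))
```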
|
H: What is wrong with my thinking, simple groups order $168$
How many elements of order $7$ are there in a simple group of order $168$?
I will work on this more but I have seen some solutions out there. My only question is regarding what is wrong what my thinking here:
An element of order $7$ produces a cyclic group of order $7$ which is abelian and thus would be normal in $G$ and $G$ is simple so the only normal subgroups are itself $G$ and the trivial subgroup $\{e\}$ but the cyclic group is neither of those so there couldn't be an element of order $7$?
AI: The main problem with your reasoning is that abelian subgroup and normal subgroup are different concepts with similar definitions.
A subgroup $H \leq G$ is abelian if $gh=hg$ for every $g,h \in H$. It has nothing to do with $G$.
A subgroup $H \leq G$ is normal if $gH=Hg$ for every $g \in G$. It has $g$ coming from $G$, not just form $H$. Sometimes I like to write the condition $H = g^{-1}Hg$.
For example:
Let $G= \{ (), (12), (23), (13), (123), (132) \}$ be the symmetric group of degree 3. Then $H=\{ (), (12) \}$ is an abelian subgroup because $() \cdot () = () \cdot ()$, $() \cdot (12) = (12) \cdot ()$, and $(12) \cdot (12) = (12) \cdot (12)$. However $H$ is not normal, since $(23)^{-1} H (23) = \{ (), (13) \} \neq H$.
You know something must be wrong, because Cauchy's theorem claims that if $7$ divides $|G|$ (and it does when $|G|=168=7\cdot24$), then $G$ has at least one element of order $7$.
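The $S_3$ example above is small enough to check by machine; here is a pure-Python sketch (permutations on $\{0,1,2\}$ as tuples, so $(12)$ becomes $(1,0,2)$):

```python
def compose(p, q):                 # (p*q)(i) = p[q[i]]: apply q, then p
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

e, s12, s23 = (0, 1, 2), (1, 0, 2), (0, 2, 1)
H = {e, s12}

conj = {compose(compose(inverse(s23), h), s23) for h in H}
print(conj == H)  # False: conj = {(0,1,2), (2,1,0)}, i.e. {(), (13)}
```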
|
H: Normal Vector to a Sphere
I'm having kind of a problem on calculating the normal vector to a sphere using a parameterization. Consider a unit-radius sphere centered at the origin.
One can parameterize it using the following:
$$P(\phi, \theta)=(\sin(\phi)\cos(\theta),\,\sin(\phi)\sin(\theta),\,\cos(\phi)) $$
My Vector Calculus book says that the Vector Product between the two partial derivates of the parameterized surface gives a Normal Vector to the surface.
I found that
$$\frac {\partial P(\phi, \theta)}{\partial \phi} \times \frac {\partial P(\phi, \theta)}{\partial \theta} = (\sin^2(\phi)\cos(\theta))\hat i + (\sin^2(\phi)\sin(\theta))\hat j + (\sin(\phi)\cos(\phi))\hat k$$
Which, as one can easily verify, yields $(0, 0,0)$ for $(\phi,\theta) = (0,0)$ which means that the sphere is not regular at the point $(0,0,1)$. That is obviously wrong, so I would like to know where is my mistake.
AI: For a sphere, the surface normal is exactly your $P(\phi,\theta)$, since the normal is just the vector from the origin. Your vector $\vec N$ is indeed normal to the surface, but it's not normalized:
$$|\vec N| = |\sin \phi \cdot \vec P| \neq1$$
The correct definition of the surface normal, coupled with the limit $\phi \to 0$, should give you the correct result:
$$\vec N=\frac{\vec a \times \vec b}{\sqrt{|\vec a|^2|\vec b|^2-|\vec a\cdot\vec b|^2}}$$
Where:
$$\vec a = \vec P_\phi, \ \ \vec b =\vec P_\theta$$
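A symbolic check of the computation above (my own SymPy sketch):

```python
import sympy as sp

phi, theta = sp.symbols('phi theta', real=True)
P = sp.Matrix([sp.sin(phi) * sp.cos(theta),
               sp.sin(phi) * sp.sin(theta),
               sp.cos(phi)])

N = P.diff(phi).cross(P.diff(theta))
print(sp.simplify(N - sp.sin(phi) * P))  # zero vector, so N = sin(phi)*P
print(sp.simplify(N.norm()))             # |N| = |sin(phi)|, vanishing at phi = 0
```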
|
H: Chain of compact subsets and their intersection
Let $\{X_n\}$ be a sequence of compact subsets of a metric space $M$ with $X_1\supset X_2\supset X_3\supset\dotsm$. Prove that if $U$ is an open set containing $\bigcap X_n$, then there exists $X_n\subset U$.
So suppose for contradiction that there doesn't exist $X_n\subset U$, i.e. every $X_n$ has a part outside $U$. Since $\bigcap X_n$ is a subset of an open set $U$, every element of $\bigcap X_n$ has an open ball within $U$. To use compactness, I want to find an open cover for $X_n$. But I don't see any open cover to use...
Edit: Okay, following the hint that someone posted (and deleted), I think I got it.
Compact sets are closed, so $X_n$ is closed for all $n$, so $X_n'$ is open for all $n$ (where $'$ denotes complement.) Note that $X_n$ has an open cover consisting of $U,X_{n+1}',X_{n+2}',\ldots$ since $X_{n+1}'\cup X_{n+2}'\cup\ldots = (X_{n+1}\cap X_{n+2}\cap \ldots)' = (\bigcap X_i)'$. Since $X_n$ is compact, there exists a finite subcover $U,X_{a_1}',X_{a_2}',\ldots,X_{a_k}'$ where $a_1<a_2<\ldots<a_k$. But this is a contradiction, since the part of $X_n$ which is the part of $X_{a_k}$ outside $U$ is not covered.
AI: The collection $\{U, X_2^c, X_3^c,... \}$ is an open cover of $X_1$, which is compact. To see this, suppose $x \in X_1$. Then either $x$ is in a finite number of the $X_n$, or $x \in \cap_n X_n \subset U$
Hence there is a finite subcover. We may take the subcover to be $U,X_{i_1}^c,...,X_{i_k}^c$, and since they are nested, the cover can be $U,X_{\max(i_1,...,i_k)}^c$. Since $X_{\max(i_1,...,i_k)} \subset X_1$, it follows that $X_{\max(i_1,...,i_k)} \subset U$.
|