H: Prove $\sum_{k=0}^n \binom{n}{k}(-1)^k \frac{x}{x+k} = \prod_{k=1}^n \frac{k}{x+k}$ and more
The current issue (vol. 120, no. 6)
of the American Mathematical Monthly
has a proof by probabilistic means
that
$$\sum_{k=0}^n \binom{n}{k}(-1)^k \frac{x}{x+k} = \prod_{k=1}^n \frac{k}{x+k}
$$
for all $x > 0$
and all
$n \in \mathbb{N}$.
The article also mentions two other ways to prove this,
one using hypergeometric functions and
the Chu-Vandermonde formula,
and the other using the Rice integral formula
and complex contour integration.
This made me wonder if there were
more elementary ways to prove this result,
and my question is a challenge
to find the most elementary proof.
Two ideas that have occurred to me
are (1) using Lagrange interpolation
and (2) induction.
I have not yet completed a proof,
so I am putting the problem out here.
The article states, with hints of proofs,
the following results:
$$\sum_{k=0}^n \binom{n}{k}(-1)^k \big(\frac{x}{x+k}\big)^2
= \big(\prod_{k=1}^n \frac{k}{x+k}\big)\big(1+\sum_{k=1}^n \frac{x}{x+k}\big)
$$
and,
for $m\in \mathbb{N}$,
$$\sum_{k=0}^n \binom{n}{k}(-1)^k \big(\frac{x}{x+k}\big)^m
= \big(\prod_{k=1}^n \frac{k}{x+k}\big)
\big(1+\sum_{j=1}^{m-1} \sum_{1\le k_1 \le k_2 \le \cdots \le k_j \le n}
\frac{x^j}{\prod_{i=1}^j (x+k_i)}\big)
$$
What (relatively) elementary proofs
of these are there?
AI: For $0<y<1$, the binomial theorem gives
$$\sum_{k=0}^n \dbinom{n}k (-1)^k y^{x+k-1} = y^{x-1} (1-y)^n.$$
Hence,
$$\int_0^1\sum_{k=0}^n \dbinom{n}k (-1)^k y^{x+k-1}\,dy = \int_0^1 y^{x-1} (1-y)^n\, dy.$$
The left-hand side is
$$\sum_{k=0}^n \dbinom{n}k (-1)^{k}\int_0^1 y^{x+k-1}\,dy = \sum_{k=0}^n \dbinom{n}k (-1)^{k} \dfrac1{x+k},$$
while the right-hand side is the Beta integral
$$\int_0^1 y^{x-1} (1-y)^n\, dy = \beta(x,n+1)=\frac{n!}{x(x+1)\cdots(x+n)}.$$
Hence,
$$\sum_{k=0}^n \dbinom{n}k (-1)^{k} \dfrac1{x+k} = \beta(x,n+1),$$
and multiplying both sides by $x$ yields the identity in question.
|
H: $M_1,M_2$ not homeomorphic but $G(M_1),G(M_2)$ isomorphic.
Let $M$ be a metric space, and let $G(M)$ denote the set of homeomorphisms of $M$ onto $M$. Then $G(M)$ is a group under composition. Show that if $M_1,M_2$ are homeomorphic metric spaces, then $G(M_1)$ is isomorphic to $G(M_2)$. Give an example to show that the converse does not hold.
I settled the first part routinely. For the example, I tried to think about simple examples, like when $M_1, M_2$ are finite and have the same size. In this case, the homeomorphisms are just permutations. However, $M_1,M_2$ are homeomorphic, since any bijection from $M_1$ to $M_2$ is continuous. For other choices of $M_1,M_2$, the groups $G(M_1),G(M_2)$ seem much harder to identify.
AI: A rigid space is one whose only homeomorphism onto itself is the identity. The case $\kappa=\omega$ of Theorem $2$ of this old paper of mine immediately implies that $[0,1]$ has $2^\omega=\mathfrak{c}$ pairwise non-homeomorphic rigid subspaces, which of course all have trivial autohomeomorphism group. (This is admittedly using a piledriver to swat a gnat.)
Added: Here’s a more accessible example. Let $M_1=\left\{\frac1n:n\in\Bbb Z^+\right\}$ with the usual metric, and let $M_2=M_1\cup\{0\}$, also with the usual metric. The autohomeomorphisms of $M_1$ are clearly the permutations of $M_1$. Now suppose that $h:M_2\to M_2$ is an autohomeomorphism; clearly $h(0)$ must be $0$, since $0$ is the only non-isolated point of $M_2$. Thus, $h\upharpoonright M_1$ is an autohomeomorphism of $M_1$. To finish the example, just show that if $f:M_1\to M_1$ is any bijection, the map
$$h:M_2\to M_2:x\mapsto\begin{cases}
f(x),&\text{if }x\in M_1\\
0,&\text{if }x=0
\end{cases}$$
is an autohomeomorphism of $M_2$; it follows immediately that $G(M_1)$ and $G(M_2)$ are both the permutation group on a countably infinite set.
|
H: After subdividing a painted cube, how many smaller cubes have paint on exactly 2 sides?
A solid cube of side 6 is first painted pink and then cut into smaller cubes of side 2. How many of the smaller cubes have paint on exactly 2 sides?
Answer with illustrations will be helpful for me.
Thanks in advance.
AI: Hint: Think of a Rubik's cube (just imagine that the center is also a cube, not a turning mechanism like it is in real life).
How many cubes have two stickers?
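If the hint is not enough: the side-6 cube cut into side-2 pieces gives a $3\times3\times3$ arrangement, and a brute-force count (a Python sketch) confirms the answer.

```python
from itertools import product

n = 3  # 6 / 2 = 3 small cubes along each edge
count = 0
for x, y, z in product(range(n), repeat=3):
    # each coordinate on the boundary contributes one painted face
    painted_faces = sum(c in (0, n - 1) for c in (x, y, z))
    if painted_faces == 2:
        count += 1
print(count)  # 12: the middle cube of each of the 12 edges
```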
|
H: Finding coordinates
The slope of the line passing through the point $(5,5)$ is $\dfrac 56$. All of the following points could be on the line except
A. $(2.5, 2) $
B. $(11, 10) $
C. $(8, 7.5) $
D. $(-1, 0) $
E. $(-7, -5)$
Will I use the slope formula $\dfrac{y-y_2}{y-y_1} = \dfrac{x-x_2}{x-x_1}$? Or is there any method to solve this problem quickly?
AI: Use the slope formula:
$$m=\dfrac {y_2-y_1}{x_2-x_1}$$
$$\dfrac 56=\dfrac {y_2-5}{x_2-5}$$
Put each option into the equation above; every choice satisfies it except (A), since $\frac{2-5}{2.5-5}=\frac{6}{5}\ne\frac 56$.
So the answer is (A).
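A quick check of all five options (a Python sketch):

```python
options = {"A": (2.5, 2.0), "B": (11.0, 10.0), "C": (8.0, 7.5),
           "D": (-1.0, 0.0), "E": (-7.0, -5.0)}
for label, (x, y) in options.items():
    slope = (y - 5) / (x - 5)
    print(label, slope, abs(slope - 5 / 6) < 1e-12)
# only option A gives a slope (6/5) that differs from 5/6
```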
|
H: Infinite Parity Function
I was looking at this problem, and I have a solution for a finite board with $2^n$ squares, that I want to extend to a countably infinite board.
Label the squares from $0$ to $2^n-1$. Consider the set of all squares with a $1$ at the $i$th position in their binary expansion. If the number of heads in the set is odd, let $p(i) = 2^i$, else $0$. Compute $X = \sum_{i = 0}^{n-1} p(i)$.
XOR the index of the Magic Square with your result, and flip the coin with the resulting index. Thus, when your friend enters the room, when he computes $X^\prime$, by the same approach, he will get the correct index.
When I try to extend this to an infinite board, I have trouble with defining parity. There are many functions that map $\{0, 1\}^\infty$ to $\{0, 1\}$, so it doesn't seem unreasonable that there is such a function that obeys one rule: flipping any bit changes the value. Is there?
And would this strategy still work on an infinite board, assuming your friend can examine infinitely many coins? (Pretty sure the devil wins when your friend may only examine an arbitrarily large but finite number.)
Pardon the abuses of notation; I suspect there's a more accurate way to write "all infinite bitstrings" than putting an $\infty$ up there.
AI: There are such functions, but I'm not sure one can actually construct one. To obtain such a function, note that $\{0,1\}$ with addition and multiplication is the field $\mathbb F_2$. Then $\mathbb F_2^{\mathbb N}$, the set of infinite bitstrings, is a vector space over $\mathbb F_2$. The set $B' := \{e_n \mid n \in \mathbb N\}$ of the unit vectors $e_n = (0, \ldots, 0, 1, 0, \ldots)$ forms a linearly independent subset. Choose a basis $B\supset B'$ of $\mathbb F_2^{\mathbb N}$. Define $f$ on the basis by
$$ f\colon b \mapsto \begin{cases} 1, & b = e_n \text{ for some } n \in \mathbb N\\
0, & b \in B\setminus B'\end{cases} $$
and extend $f$ linearly to an $\mathbb F_2$-linear function $f \colon \mathbb F_2^{\mathbb N} \to \mathbb F_2$.
Then $f$ is as wished: Flipping the $n$-th bit corresponds to adding $e_n$ and for $x \in \mathbb F_2^{\mathbb N}$ we have
$$ f(x + e_n) = f(x) + f(e_n) = f(x) + 1 = \begin{cases} 1, & f(x) = 0\\ 0, & f(x) = 1.\end{cases} $$
|
H: Hartshorne III.7.6b) (ii) $\Rightarrow$ (i) "Duality for a projective scheme"
Let $X$ be a closed subscheme of dimension $n$ in $P = \mathbb{P}^N_k$, where $k$ is an algebraically closed field. Let $\omega_P$ denote the canonical bundle and $A$ the local ring $\mathcal O_{P,x}$.
Then Hartshorne argues on p. 244 that the condition $$\mathcal {Ext}^i_P(\mathcal O_X, \omega_P) = 0 $$ for $i>N-n$ implies that $$\operatorname{Ext}^i_A(\mathcal O_{X,x}, A) = 0. $$
Why? I can see how he got from $\mathcal Ext$ to $Ext$. The former is the derived functor of the sheaf $\mathcal {Hom}$, while the latter is the derived functor of just Hom. Obviously, if $\mathcal Ext$ is zero so is any localization of Ext. But how does one get from $\omega_P$ to A?
AI: $\omega_P$ is invertible, hence its stalks are free of rank $1$.
|
H: How to calculate $\Pr[\max(X,Y)<4]$?
Suppose the joint PDF of $X,Y$ is $f(x,y)=1/40$ for $0 < x < 5$ and $0 < y < 8$.
How to calculate $\Pr[\max(X,Y)<4]$?
AI: Hint:
$$
P(\max(X,Y)<4)=P(X<4,Y<4)=P((X,Y)\in (-\infty,4)\times(-\infty,4)).
$$
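Following the hint: since the density is $1/40$ on $(0,5)\times(0,8)$, the probability is the area of $(0,4)\times(0,4)$ times $1/40$. A Monte Carlo sketch in Python agrees:

```python
import random

random.seed(0)
N = 10**6
hits = sum(1 for _ in range(N)
           if max(random.uniform(0, 5), random.uniform(0, 8)) < 4)
print(hits / N)  # close to 16/40 = 0.4
```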
|
H: Determining Linear Combinations
Let $\vec{u}=[2,2,3]^T$ and $\vec{v}=[3,2,1]^T$. Find a vector $\vec{w}$ that is NOT a linear combination of $\vec{u}$ and $\vec{v}$.
My work thus far/ my line of thinking:
Since the vectors are in the form $[x,y,z]^T$, that is, there are three (not sure what to call them; components?) within the brackets, it means that the vector is in $\mathbb{R}^3$. Since the vector is in $\mathbb{R}^3$, it follows that at least one of the standard basis vectors, $\vec{i}=[1,0,0]^T, \vec{j}=[0,1,0]^T, \vec{k}=[0,0,1]^T$, is not a linear combination of $\vec{u}$ and $\vec{v}$.
I chose to deal with $\vec{k}$ in this particular problem. So, is $(0,0,1)=x(2,2,3)+y(3,2,1)$, for some $x,y$? To test it out, I solved the system, which yielded no solutions.
Does that mean there are no vectors $\vec{w}$ that are a linear combination of $\vec{u}$ and $\vec{v}$?
AI: Your reasoning is correct: you showed that the equation $(0,0,1)=x(2,2,3)+y(3,2,1)$ has no solutions, so this means that the vector $(0,0,1)$ is not a linear combination of the given $\vec{u}$ and $\vec{v}$.
Does that mean there are no vectors $\vec{w}$ that are a linear combination of $\vec{u}$ and $\vec{v}$
No. You have shown that the vector $(0,0,1)$ is not a linear combination of $\vec{u}$ and $\vec{v}$. But there might be some other vector $\vec{w}$ that is a linear combination of $\vec{u}$ and $\vec{v}$. For example the vector $\vec{w} = (5,4,4)$ is certainly a linear combination of $\vec{u}$ and $\vec{v}$ --- in fact, it is $\vec{u} + \vec{v}$.
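For completeness, a numpy sketch that decides membership in the span by least squares:

```python
import numpy as np

u = np.array([2.0, 2.0, 3.0])
v = np.array([3.0, 2.0, 1.0])
A = np.column_stack([u, v])

for w in (np.array([0.0, 0.0, 1.0]), np.array([5.0, 4.0, 4.0])):
    coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
    residual = np.linalg.norm(A @ coeffs - w)
    print(w, "in span" if residual < 1e-10 else "not in span")
```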
|
H: $2$ out of $3$ property of the unitary group
I am trying to understand the $2$ out of $3$ property of the unitary group. I have almost got it, but I am not completely sure about the interaction between an inner product and a symplectic form to obtain an almost complex structure.
Let $V$ be a real vector space.
An inner product on $V$ is a positive definite symmetric bilinear form $g$. An endomorphism $T \in \operatorname{End}(V)$ preserves $g$ if $g(T(u), T(v)) = g(u, v)$ for all $u, v \in V$; the collection of all such endomorphisms forms a group $O(V, g)$ called the orthogonal group.
An almost complex structure on $V$ is an endomorphism $J \in \operatorname{End}(V)$ such that $J^2 = -\operatorname{id}_V$. An endomorphism $T \in \operatorname{End}(V)$ is complex linear if $T \circ J = J\circ T$; the collection of all such endomorphisms forms a group $GL(V, J)$ called the complex general linear group.
A symplectic form on $V$ is a skew-symmetric non-degenerate bilinear form $\omega$. An endomorphism $T \in \operatorname{End}(V)$ preserves $\omega$ if $\omega(T(u), T(v)) = \omega(u, v)$ for all $u, v \in V$; the collection of all such endomorphisms forms a group $Sp(V, \omega)$ called the symplectic group.
Almost Complex Structure & Inner Product
For an inner product $g$ and a compatible almost complex structure $J$ (i.e. $J \in O(V, g)$), we obtain a symplectic form by defining $\omega(u, v) := g(u, J(v))$.
It follows that $O(V, g)\cap GL(V, J) \subseteq Sp(V, \omega)$.
Almost Complex Structure & Symplectic Form
For a symplectic form $\omega$ and a compatible almost complex structure $J$ (i.e. $J \in Sp(V, \omega)$) which tames $\omega$ (i.e. $\omega(u, J(u)) > 0$ for all $u \in V\setminus\{0\}$) we obtain an inner product by defining $g(u, v) := \omega(J(u), v)$.
It follows that $Sp(V, \omega)\cap GL(V, J) \subseteq O(V, g)$.
Inner Product & Symplectic Form
This is the part I am unsure about.
Denote by $\Phi_g$ the isomorphism $V \to V^*$ induced by $g$; that is $\Phi_g(v) \in V^*$ is defined by $\Phi_g(v)(u) = g(u, v)$. Likewise, denote the isomorphism $V \to V^*$ induced by $\omega$ by $\Phi_{\omega}$; that is $\Phi_{\omega}(v) \in V^*$ is defined by $\Phi_{\omega}(v)(u) = \omega(u, v)$.
Is there any compatibility restriction that we must impose on $\Phi_g$ and $\Phi_{\omega}$?
Is $J = \Phi_g^{-1}\circ\Phi_{\omega}$ an almost complex structure on $V$?
How do we use this to deduce $O(V, g)\cap Sp(V, \omega) \subseteq GL(V, J)$?
For question 3, I can use the previous relationships between the three groups, but I'd like to be able to deduce it from the structures themselves.
AI: For question 1, the compatibility relation we are looking for is
$$ \Phi_g^{-1} \circ \Phi_{\omega} = -\Phi_{\omega}^{-1} \circ \Phi_g.$$
Define $J = \Phi_g^{-1} \circ \Phi_{\omega}$. Then, unrolling the definitions, we get the relation $\Phi_g(Jv)(u) = \omega(u,v)$, i.e., $$g(u,Jv) = \omega(u,v). $$
Since we also have the relation $-J = \Phi_{\omega}^{-1} \circ \Phi_g$, we have $$\omega(u, -Jv) = g(u,v).$$
To see that $J$ defines an almost complex structure, note that, $$g(u, J^2v) = \omega(u, Jv) = -g(u,v)$$ so by the nondegeneracy of $g$, we have $J^2 = -1$. This answers question 2.
For question 3, let $T$ be any automorphism of $V$ preserving both $g$ and $\omega$. Then $$g(Tu, JTv) = \omega(Tu, Tv) = \omega(u,v) = g(u, Jv) = g(Tu, TJv)$$ and hence by nondegeneracy of $g$, we have $JT = TJ$, i.e., $T \in GL(V,J)$.
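A small numerical illustration of these answers in $\mathbb R^2$ (a numpy sketch; here $g$ is the standard inner product, so $\Phi_g$ is the identity matrix, and $\omega$ is the standard symplectic form with matrix $\Omega$, meaning $\omega(u,v)=u^T\Omega v$):

```python
import numpy as np

G = np.eye(2)                       # matrix of g (standard inner product)
Omega = np.array([[0.0, 1.0],       # matrix of omega: omega(u, v) = u^T Omega v
                  [-1.0, 0.0]])
J = np.linalg.inv(G) @ Omega        # J = Phi_g^{-1} o Phi_omega

print(np.allclose(J @ J, -np.eye(2)))   # True: J^2 = -id (question 2)

theta = 0.7                         # any rotation preserves both g and omega
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(T @ J, J @ T))    # True: T is complex linear (question 3)
```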
|
H: Reference request: Tensor product of DG-modules
Let $A$ be a (skew-) commutative DG-algebra, and let $M,N$ be two DG $A$-modules.
I am looking for a reference which describes the functor $-\otimes_A -$ and its basic properties (associativity, identity element, etc.).
Assuming $A$ is a DG $K$-algebra, where $K$ is a commutative ring, most sources I find describe a tensor product which is simply a tensor product of complexes over $K$, but this is a wrong object from the point of view of DG-modules over $A$, because then we will not have $M\otimes A \cong M$.
Can anyone please point me to a reference where the "correct" notion is discussed?
Thanks!
AI: A very good book on homological algebra is Osborne's "Basic Homological Algebra". You can find information on Hom-tensor products and projective/injective/flat objects in the very first chapters. If you want to focus your attention on algebras, I would start with Jacobson's "Basic Algebra I-II". The homological algebra book by Bourbaki is a good reference as well.
On your concept of a "correct notion": I really believe that everything depends on applications and the theoretical framework. The isomorphism $M\otimes_A A\simeq M$ can be useful in your specific context, but there are others where the tensor product over the ground field is important as well. Consider for example the bar resolution $\mathcal B_M(A)$ of a left $A$-dg module $M$, where $A$ is a (possibly augmented) dg algebra over the ground field $\mathbb K$.
Very briefly:
$$\mathcal B_M(A):\ A\otimes A[1]^{\otimes n}\otimes M \rightarrow A\otimes A[1]^{\otimes (n-1)}\otimes M \rightarrow\dots\rightarrow A\otimes M\rightarrow M,$$
with $\otimes=\otimes_{\mathbb K}$ and $[1]$ denotes suspension.
This construction is quite important in both algebraic topology and homological algebra.
|
H: How can we prove that $\displaystyle \limsup_{n \to \infty } b_n \le \limsup_{n \to \infty } a_n$
Let $(a_n)_{n\ge1}$ be a bounded sequence in $\mathbb{R}$
Set $\displaystyle B_n=\left\{\sum_{j=n}^{\infty}(\theta_j\cdot a_j)\;:\theta_j\ge 0 \;,\;\sum_{j=n}^{\infty}\theta_j=1\right\}$ for $n\ge1$
Let $(b_n)_{n\ge 1}$ be a sequence such that $b_n \in B_n$ , $\forall \;n\ge1$
How can we prove that $\displaystyle \limsup_{n \to \infty } b_n \le \limsup_{n \to \infty } a_n$ ?
Any hints would be appreciated.
AI: Let $n$ be fixed; then for all $j \ge n$, we have $a_j \le \sup\limits_{m \ge n} a_m$. Therefore:
$$b_n = \sum_{j=n}^\infty \theta_j a_j \le \sum_{j=n}^\infty \theta_j \sup_{m \ge n} a_m = \sup_{m \ge n} a_m$$
Moreover, since $\sup\limits_{m \ge n} a_m$ is decreasing as $n$ increases, we have for any $n' \ge n$ that:
$$b_{n'} \le \sup\limits_{m\ge n'} a_m \le \sup\limits_{m \ge n} a_m$$
Can you take it from here?
|
H: Trigonometry problem, using COS
Let's say two right angled triangles share a common hypotenuse which measures 10 in length and share an angle which measures $20^\circ$ in total. How do I work out the value of x (the side adjacent to the $20^\circ$ angle)? Using $\cos$ looks like the right strategy to apply but not sure how to proceed...
AI: As has been commented, your question would benefit greatly from a diagram for clarity, but I believe I can answer it anyway.
Recall $\cos(\theta)=\frac{A}{H}$, so rearranging we get $A=H\cos(\theta)$, and now you just have to substitute in the appropriate values. In your diagram, you should be able to see, by symmetry, that the angle inside each triangle will be half of the $20^\circ$, so we have $H=10,\ \theta=10^\circ$, giving $x=10\cos(10^\circ)\approx 9.85$.
|
H: A question about how to take a -1 out of mutiple-valued analytic function $z^{\alpha } $ $0<\alpha<1 $
I encountered a question about multiple-valued analytic functions.
Under some circumstance I have to take a -1 out of a power function $z^{\alpha } $ $0<\alpha<1$
suppose $\alpha=\frac{1}{2}$, let the branch cut be $[0,\infty)$, and choose the analytic branch of $z^{\frac{1}{2}}$ with $1^{\frac{1}{2}}=1$; then I get the two following results: $$z^{\frac{1}{2}}=((-1)(-z))^{\frac{1}{2}}=(-1)^{\frac{1}{2}}(-z)^{\frac{1}{2}}=i(-z)^{\frac{1}{2}}$$
and
$$z^{\frac{1}{2}}=(\frac{1}{-1}(-z))^{\frac{1}{2}}=(\frac{1}{-1})^{\frac{1}{2}}(-z)^{\frac{1}{2}}=\frac{1^{\frac{1}{2}}}{(-1)^{\frac{1}{2}}}(-z)^{\frac{1}{2}}=\frac{1}{i}(-z)^{\frac{1}{2}}=-i(-z)^{\frac{1}{2}}$$
Which manipulation is right, why? How about $z^{-\frac{1}{2}}$? Is it the same?
AI: The definition of $z^\alpha$ is $e^{\alpha \log z}$, so let's look at
$\log(-z)$. If your branch cut for the logarithm (and thus also for the $\alpha$ power) is on the positive real axis,
with real limits as you approach that axis from above, i.e. you're using the branch where $0 \le \text{Im} \log(z) < 2 \pi$, then
$\log(-z) = \log(z) + \pi i$ if $0 \le \text{Im} \log(z) < \pi$ (i.e. $z$ is on the positive real axis or in the upper half plane),
$\log(z) - \pi i$ if $z$ is on the negative real axis or in the lower half plane.
And so
$ (-z)^\alpha = e^{i\alpha \pi + \alpha \log z} = e^{i\alpha \pi} z^\alpha$ if
$z$ is on the positive real axis or in the upper half plane,
$e^{-i \alpha \pi} z^\alpha$ if $z$ is on the negative real axis or in the lower half plane.
|
H: Usefulness of induced representations.
I am learning representation theory from Serre's book by myself. Currently I am reading about induced representations, but I don't understand the importance. The concept looks strange and the definition appears quite complicated compared to the topics discussed before it. Can someone briefly exposit its importance?
AI: You may think of it like this; maybe it helps a bit. If you've got a representation for a larger group$~G$, you can always restrict to a subgroup$~H$ to get a representation of the latter. In particular you can restrict all irreducibles for$~G$ to$~H$, and see what they do there; some may remain irreducible, some may decompose after restriction. But if you already know things about the subgroup$~H$ (for instance you may have classified its irreducibles), and wish to find out things about$~G$, all this does not help much. You need a way to go in the other direction: take a representation of$~H$ and build one of$~G$ out of it. Since $G$ is more complicated than$~H$, it is too naive to assume that you can always extend a given $H$-representation to a $G$-representation by just defining the action of the new elements (in other words, view your old representation as the restriction of a newly defined one); you may need to increase the dimension in order to extend the action. Induction does precisely this in a way that only uses the old representation and the group structure of $H$ within $G$ as ingredients; no choices are required. It is not the inverse operation of restriction, but they are "adjoint" in a sense made precise by Frobenius reciprocity (for instance the irreducibles $\rho$ of $G$ that show up when decomposing an induced irreducible $H$-representation $\sigma$ are precisely those for which $\sigma$ shows up in the decomposition of the restriction to$~H$ of$~\rho$; in fact the multiplicities of occurrence will be identical).
To see the use of induced representations, you might want to study easy examples, such as the (complex) representations of the dihedral group, using the easily described structure of the representations of its subgroup of rotations (as the subgroup is commutative, its irreducible representations are all of dimension$~1$). In general induction from relatively "easy" (but not too small) subgroups is often an important tool in studying representations of a given group.
An analogy is change of base fields for vector spaces. Every complex vector space can be seen as a real vector space by restriction (forget the multiplication by$~\mathbf i$), but not every real vector space can be so obtained (the dimension has to be even). However every real vector space can be complexified to a complex vector space, making its complex dimension equal to the original real dimension (and the real dimension has doubled). This is what implicitly goes on behind the scenes when they tell you to interpret a real square matrix as a complex one for the purpose of finding eigenvalues (the corresponding eigenspaces have no direct meaning for the original real space, since they live in its complexification). In this analogy, induction of representations corresponds to complexification.
|
H: Notation for function being differentiable at a certain point
This question describes a notation for a function $f(x)$ being (continuously) differentiable on some domain $A$.
Often, I see the requirement that some function $f(x)$ be differentiable only (or rather, at least) at a certain point $\tilde{x}$. This is usually written in plain text (e.g. "let $f: X \rightarrow Y$ be differentiable at $\tilde{x}$").
I was wondering whether there is some more formal notation for such (local) differentiability. Thanks in advance!
AI: You can just say 'where' your function is differentiable, because anyway differentiability is always a local concept. So let $f : X \to Y$ where $Y$ is some 'numerical' set (e.g. $\mathbb R^n$ or $\mathbb C^n$...) and let $A \subseteq X$ be the set of points where $f$ is differentiable. I think that's just fine. There's no need to be more precise unless you do something reaaaaally specific.
Hope that helps,
|
H: Determine the number of iteration to find solutions accurate to within $10^{-2}$ for $f(x)=x^3-7x^2+14x-6=0$ on $[a,b]=[1,3.2]$
I got the number of iterations, $n$, to achieve the accuracy $\epsilon=10^{-2}$ as $n=5.5\approx 6$.
But in the answer script, $n=8$.
My procedure is
$\frac{(b-a)}{2^n}<\epsilon$
$\Rightarrow\frac{(3.2-1)}{2^{n}}<10^{-2}$
$\Rightarrow (2.2)\,2^{-n}<10^{-2}$
Taking $\log_{10}$
$\log_{10}(2.2)-n\log_{10}(2)<-2$
$\Rightarrow n>\frac{2-\log_{10}(2.2)}{\log_{10}(2)}\approx 5.5\approx 6$
Where is the mistake?
AI: You want $\frac{2.2}{2^n} < 10^{-2}$, or in other words, $220 < 2^n$. Then $\log_2 220 < n$. Since $\log_2 220 \approx 7.7814\cdots$, we see that the smallest such $n$ is $n=8$.
Alternatively, you could notice that $f(3) = 0$, and so $n=0$ :-).
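In Python:

```python
from math import ceil, log2

eps = 1e-2
n = ceil(log2(2.2 / eps))   # need (b - a) / 2**n < eps, i.e. 2**n > 220
print(log2(220), n)         # 7.781..., hence n = 8
```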
|
H: Find the natural number $n$ satisfy the condition
Find the natural number $n$ satisfy the condition
$$\dfrac{1}{2}C_{2n}^1 - \dfrac{2}{3} C_{2n}^2 + \dfrac{3}{4} C_{2n}^3 - \dfrac{4}{5} C_{2n}^4 + \cdots - \dfrac{2n}{2n+1} C_{2n}^{2n} =\dfrac{1}{2013}.$$
AI: HINT:
$$\frac r{r+1}\binom {2n}r=\binom {2n}r-\frac{(2n)!}{r!(2n-r)!(r+1)}$$
$$=\binom {2n}r-\frac1{2n+1}\cdot \binom{2n+1}{r+1}$$
$$\text{So, }\sum_{1\le r\le 2n}\frac r{r+1}(-1)^{r-1}\binom {2n}r$$
$$=\sum_{1\le r\le 2n}(-1)^{r-1}\binom{2n}r-\frac1{2n+1}\cdot \sum_{1\le r\le 2n}(-1)^{r-1}\binom{2n+1}{r+1}$$
$$\text{ Now, }\sum_{1\le r\le 2n}(-1)^{r-1}\binom{2n}r=1-\left(\binom{2n}0-\binom{2n}1+\binom{2n}2-\cdots+\binom{2n}{2n}\right)=1-(1-1)^{2n}=1$$
$$\text{ and } \sum_{1\le r\le 2n}(-1)^{r-1}\binom{2n+1}{r+1}$$
$$=\binom{2n+1}0-\binom{2n+1}1+\binom{2n+1}2-\cdots+\binom{2n+1}{2n}-\binom{2n+1}{2n+1}-\binom{2n+1}0+\binom{2n+1}1$$
$$=(1-1)^{2n+1}-\binom{2n+1}0+\binom{2n+1}1=2n+1-1=2n$$
So the result will be $$1-\frac{2n}{2n+1}=\frac1{2n+1},$$ and setting $\frac1{2n+1}=\frac1{2013}$ gives $n=1006$.
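A numerical check of the hint's conclusion (a Python sketch):

```python
from math import comb

def s(n):
    return sum(r / (r + 1) * (-1)**(r - 1) * comb(2 * n, r)
               for r in range(1, 2 * n + 1))

for n in (1, 2, 3, 10):
    print(s(n), 1 / (2 * n + 1))   # the two values agree for each n
```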
|
H: Surface integrals of second kind
In the formula for calculating surface integrals of second kind, we have:
But, this integral is denoted by $\int \int _S \vec{F}\cdot \hat{n}dS $ . So, should we always normalize the expression $ \frac{\partial \vec r }{\partial v} \times \frac{\partial \vec r}{\partial u} $ before substituting it into the formula?
Thanks in advance!
AI: Here's a little derivation of the formula - hopefully it'll help you to answer your own question.
The definition of the surface integral tells us that
$$
\iint_S \vec F \cdot d\vec S = \iint_S \vec F \cdot \hat n \, dS
$$
where $\hat n$ is the unit normal vector to the surface. This equivalence is well described in chapter 16 of Stewart's book. Moving on, we know that
$$
\vec r_u \times \vec r_v
$$
is a vector normal to the surface parametrized as $\vec r(u,v)$. This vector must be normal to the surface because $\vec r_u$ and $\vec r_v$ are both tangent to the surface and their cross product gives a vector that is perpendicular to each of them. To make this a unit vector, we simply divide it by its length $\left| \vec r_u \times \vec r_v \right|$ and obtain
$$
\hat n = \frac{\vec r_u \times \vec r_v}{\left| \vec r_u \times \vec r_v \right|}
$$
Substituting this into the above integral, we obtain
$$
\iint_S \vec F \cdot \left( \frac{\vec r_u \times \vec r_v}{\left| \vec r_u \times \vec r_v \right|} \right) \, dS
$$
Furthermore, we know that $\left| \vec r_u \times \vec r_v \right|\, dA$ is the infinitesimal surface area element $dS$ (this is also well described in Stewart). Plugging this in, we get
$$
\iint_D \vec F \cdot \left( \frac{\vec r_u \times \vec r_v}{\left| \vec r_u \times \vec r_v \right|} \right) \, \left| \vec r_u \times \vec r_v \right| \, dA
$$
where $D$ is the domain of the parameters $u$ and $v$. This simplifies to
$$
\iint_D \vec F \cdot \left( \vec r_u \times \vec r_v \right) \, dA
$$
Any of these expressions can be used in computing the surface integral. However, using any of them before the final one would just be a roundabout implementation of the final formula, as they all simplify to the same thing. If you're trying to compute the surface integral over $D$, the domain of the parameters $u$ and $v$, then no, you do not need to scale the normal vector to length one.
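As a concrete illustration of the final formula (a Python sketch, with an example of my own choosing): the flux of $\vec F(x,y,z)=(x,y,z)$ through the unit sphere is $4\pi$, since here $\vec F\cdot(\vec r_u\times\vec r_v)=\sin u$.

```python
import numpy as np

# unit sphere: r(u, v) = (sin u cos v, sin u sin v, cos u), F(x, y, z) = (x, y, z)
def r_u(u, v):
    return np.array([np.cos(u) * np.cos(v), np.cos(u) * np.sin(v), -np.sin(u)])

def r_v(u, v):
    return np.array([-np.sin(u) * np.sin(v), np.sin(u) * np.cos(v), 0 * u])

def F(u, v):  # F evaluated on the surface equals r(u, v) itself
    return np.array([np.sin(u) * np.cos(v), np.sin(u) * np.sin(v), np.cos(u)])

nu, nv = 200, 200
du, dv = np.pi / nu, 2 * np.pi / nv
U, V = np.meshgrid((np.arange(nu) + 0.5) * du,
                   (np.arange(nv) + 0.5) * dv, indexing="ij")
normal = np.cross(r_u(U, V), r_v(U, V), axis=0)   # r_u x r_v, NOT normalized
flux = (F(U, V) * normal).sum(axis=0).sum() * du * dv
print(flux, 4 * np.pi)                            # both about 12.566
```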
Hope this helps!
|
H: Does $\lim \frac {a_n}{b_n}=1$ imply $\lim \frac {f(a_n)}{f(b_n)}=1$?
I wanted to prove the seemingly simple statement:
If $\lim \frac {a_n}{b_n}=1$ and $f$ continuous with $f(b_n)\neq0$ then $\lim \frac {f(a_n)}{f(b_n)}=1.$
I started promptly with
\begin{align}
\\ \lim \frac {f(a_n)}{f(b_n)} &= \lim \frac { f(b_n \times \frac{a_n}{b_n})}{f(b_n)}
\\ &= \frac { f(b_n \times \lim\frac{a_n}{b_n})}{f(b_n)}
\\ &= \frac { f(b_n \times 1)}{f(b_n)}
\\ &= 1
\end{align}
Yet two seconds later I realized what a nonsense it was and that I fell victim of one of the freshman's dreams.
I would greatly appreciate a hint for a proof or a counterexample if the statement turns out to be false.
AI: This is not true. Let $a_n = n + \log n$, $b_n = n$, $f(x) = e^x$. Then $\lim \frac{a_n}{b_n} = 1$, but $\frac{f(a_n)}{f(b_n)} = e^{\log n} = n$ does not converge to 1. This is an issue with uniform continuity, I think.
Edit: Scratch that, uniform continuity is not sufficient either. But if $f$ is continuous and $a_n, b_n$ are bounded, and $f$ is bounded away from zero, it may be possible, but that's a lot of conditions.
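Numerically (a quick Python sketch; $\exp(a)/\exp(b)$ is computed as $\exp(a-b)$ to avoid overflow):

```python
import math

for n in (10, 100, 1000, 10000):
    a, b = n + math.log(n), float(n)
    print(n, a / b, math.exp(a - b))   # a/b -> 1 while f(a)/f(b) grows like n
```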
|
H: How could I define this $\mathrm{nw}(X)$ by using only one sentence?
A family $\mathcal N$ of subsets of a topological space $X$ is a network for $X$ if for every point $x\in X$ and any neighbourhood $U$ of $x$ there exists an $M \in \mathcal N$ such that $x\in M \subset U$.
$$\mathrm{nw}(X)=\min \{|\mathcal N|: \mathcal N \text{ a network for }X\} +\omega.$$
How could I define this $\mathrm{nw}(X)$ by using only one sentence?
For example:
The Lindelof number $l(X)$ of a topological space $X$ is the
smallest number $\kappa$ such that every open cover of $X$ has a
subcover the cardinality of which is at most $\kappa$.
Thanks for your time.
AI: There is no real advantage to using just one sentence; in fact, the definition is probably clearer if you use two. However, it is possible to state it in one:
The net weight of a topological space $X$ is the smallest infinite cardinality $\kappa$ such that $X$ has a family $\mathscr{N}$ of subsets, not necessarily open, such that every non-empty open set in $X$ is the union of members of $\mathscr{N}$, and the cardinality of $\mathscr{N}$ is at most $\kappa$; such a family is called a network for $X$.
|
H: Anagrams with sequences inside
I need some help with this exercise:
Find the number of anagrams of the word "MONOCROMO" containing at least one of the sequences "OMO", "MON", "CRO".
Normally I know what to do, but in this one there are some cases I don't know how to handle. Please give me a hand; I need to be able to explain this tomorrow and I'm not sure if this works the way I think it does.
The idea is to use the principle of inclusion-exclusion, so I call
$A$ = "OMO"
$B$ = "CRO"
$C$ = "MON"
$|A\cup B\cup C| = |A| + |B| + |C| - |A\cap B| - |A\cap C| - |B\cap C| + |A\cap B\cap C|$
So, the total number of anagrams of MONOCROMO is $\frac{9!}{4!2!}$, since there are 4 "O"s and 2 "M"s.
I'll write just the cases I don't know how to solve, to spare you a longer and boring read.
$|A|$ = OMO, N, C, R, $O$, $M$, $O$ = $\frac{7!}{2!}$
but I can see another "OMO" that is free. I am not sure how to handle this case.
I think I would do something like
OMO, OMO, N, C, R = $-3!{5 \choose 2}$.
Ok, so what I understand here is that I have OMO, OMO, N, C, R.
${5 \choose 2}$ is the number of words that contain 2 sequences of "OMO",
$3!$ is the number of other letters which I can permute. So I get $\frac{7!}{2!} -3!{5 \choose 2}$; is this correct?
$|A\cap B|$ = OMO, CRO, M, N, O = $5!$
also CROMO, M, O, N, O = $\frac{5!}{2!}$
but also CROMO, OMO, N = $3!$ ?
and then OMOMO, CRO, N =
and CROMOMO, N, O =
I am not sure at all about this one; I hope it works this way, but it is really messy in my head whether I have to subtract what I find or add it...
$|A\cap C|$ = OMO, MON, C, R, O = $5!$
Also OMON, C, R, $O$, $M$, $O$ = $\frac{6!}{2!}$
And also OMON, OMO, C, R = $4!$ ?
And OMOMON, C, R, O =
Again, the same case.
I hope this is understandable; I am new to Math StackExchange and I tried to find the right syntax and words to write. Also, I am not a native English speaker, so I am sorry if I made any mistakes.
Also, I deleted my previous post and made this new one.
Thanks again for the help!
AI: a) Sequences containing OMO at least once: OMO, M, N, C, R, O, O: $\frac{7!}{2!}$.
b) Sequences with two disjoint OMOs: OMO, OMO, N, C, R: $\frac{5!}{2!}$.
c) Sequences with overlapping OMOs, i.e. OMOMO, N, C, R, O: $5!$.
In summary, there are $\frac{7!}{2!} -\frac{5!}{2!}-5!$ sequences with at least one OMO. But there are awfully many more special cases to count, and to check which are double counted where, which to add, which to subtract.
All in all I rely more on a quick programmatic count that gives me $4574$ as the answer (so there's something to check against).
d) CRO, N, M, M, O, O, O:$\frac{7!}{2!3!}$
e) CRO, OMO, N, M, O: $5!$
f) CRO, MON, M, O, O: $\frac{5!}{2!}$
g) CRO, MON, OMO: $3!$
h) MON, C, R, M, O, O, O: $\frac{7!}{3!}$
i) MON, OMO, C, R, O: $5!$
j) CROMO, N, M, O, O: $\frac{5!}{2!}$
k) CROMO, MON, O: $3!$
l) CRO, OMON, M, O: $4!$
m) CRO, OMOMO, N: $3!$
n) CRO, OMOMON: $2!$
o) CROMON, M, O, O: $\frac{4!}{2!}$
p) CROMON, OMO: $2!$
q) CROMOMO, N, O: $3!$
r) CROMOMON, O: $2!$
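The quick programmatic count mentioned above could look like this (a Python sketch):

```python
from itertools import permutations

words = {"".join(p) for p in permutations("MONOCROMO")}
hits = sum(any(s in w for s in ("OMO", "MON", "CRO")) for w in words)
print(len(words), hits)   # 7560 distinct anagrams, of which 4574 qualify
```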
|
H: General solution of a simple PDE
What is the general solution of this equation:
$Q \frac{\partial C}{\partial V} + r = \frac{\partial C}{\partial t}$
where $Q$ and $r$ are constants?
I tried to use the method of separation of variables but it doesn't work. Any help will be really appreciated.
AI: Rewriting your equation a little, we have
$$ \partial_t C - Q\partial_V C = r. $$
This can be written as
$$ DC\begin{pmatrix} 1\\ -Q\end{pmatrix} = r $$
That is the directional derivative in direction $(1,-Q)^t$ is $r$. So let us consider $C$ in this direction, define $f \colon [0,\infty) \to \mathbb R$ by $f(t) := C(t, V_0 - tQ)$, we have
\begin{align*}
f'(t) &= \partial_t C(t, V_0 - tQ) - Q\partial_V C(t, V_0 - tQ)\\
&= r
\end{align*}
for any $t \ge 0$. Hence, by integrating
\begin{align*}
f(t) &= f(0) + \int_0^t f'(s)\, ds\\
&= C(0, V_0) + rt
\end{align*}
So, the solution is given by
$$ C(t, V) = C(t, V + tQ - tQ) = C(0, V+tQ) + rt $$
for some initial function $C(0,\cdot)$.
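One can verify with sympy that every function of this form solves the PDE (a sketch; $g$ stands for the arbitrary initial profile $C(0,\cdot)$):

```python
import sympy as sp

t, V, Q, r = sp.symbols("t V Q r")
g = sp.Function("g")               # arbitrary initial profile C(0, .)
C = g(V + t * Q) + r * t           # proposed general solution
residual = sp.diff(C, t) - Q * sp.diff(C, V) - r
print(sp.simplify(residual))       # 0, so the PDE is satisfied
```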
|
H: To what extent the statement "Data is normally distributed when mode, mean and median scores are all equal" is correct?
I read that normally distributed data have equal mode, mean and median. However in the following data set, Median and Mean are equal but there is no Mode and the data is "Normally Distributed":
$ 1, 2, 3, 4, 5 $
I am wondering to how extent the statement is correct? Is there a more accurate definition for "normal distribution"?
AI: It is not correct at all. Any unimodal probability distribution symmetric about the mode (for which the mean exists) will have mode, mean and median all equal.
For the definition of normal distribution, see e.g. Wikipedia.
Strictly speaking, data can't be normally distributed, but it can be a
sample from a normal distribution. In a sample of $3$ or more points from a continuous distribution such as the normal distribution, with probability $1$ the data points will all be distinct (so there is no mode), and the mean will not be exactly the same as the median. It is only the probability distribution the data is taken from that can have mode, mean and median equal.
|
H: Is population standard deviation useful in the computers era?
The sample standard deviation is used to approximate the population standard deviation. I can imagine this was very handy when you were dealing with a large population in the pre-computer era.
Now, thanks to computers, one can calculate the population standard deviation quite easily. I am wondering: is the population standard deviation still useful? Why?
AI: You could calculate the population standard deviation if you had data on the whole population. But in many cases of practical interest you don't. All you have to work with is a sample. Computers have not changed that fact.
|
H: Question concerning maximum increment of Brownian motion
Suppose $B$ is the standard Brownian motion, how to calculate
$$\mathbb{P}\left((\max_{0\le s\le t}B(s))-B(t)<a\right)$$
I tried $B(t)-B(s)=B(t-s)$,
$$P(B(t-s)>-a)=\int_{-a}^{\infty}\frac{1}{\sqrt{2\pi(t-s)}}e^{-\frac{y^2}{2(t-s)}}dy$$
And then I integrate it via $s$, that is
$$\mathbb{P}\left((\max_{0\le s\le t}B(s))-B(t)<a\right)=\int_{0}^{t}\int_{-a}^{\infty}\frac{1}{\sqrt{2\pi(t-s)}}e^{-\frac{y^2}{2(t-s)}}dy ds$$
Am I right in this process? I am novice to stochastic process, and I don't know if these manipulations are legal. Also, I don't know how to integrate this stuff. Thanks for your attention!
AI: You need a couple of tricks for this one.
First use the fact that if $s\mapsto B(s)$ is a Brownian motion then
$s\mapsto B(t-s)-B(t)$ is also a Brownian motion. Hence
$$\mathbb{P}\left((\max_{0\le s\le t}B(s))-B(t)<a\right)=
\mathbb{P}\left(\max_{0\le s\le t}B(s)<a\right)$$
Now we can use the reflection principle.
Let $T_a$ be the first hitting time of $a$ and set
$$B_a(s) = \begin{cases}B(s) &\text{if } s\leq T_a \\
2a-B(s) &\text{if } s\geq T_a
\end{cases}$$
Now $B_a(s)$ is also a Brownian motion and $\max_{0\le s\le t}B(s)<a$ if and only if both $B(t) < a$ and $B_a(t) < a$.
As the events $B(t) \geq a$ and $B_a(t) \geq a$ are mutually exclusive (up to an event of probabililty $0$) and have identical probabilities we have that
$$\mathbb{P}\left((\max_{0\le s\le t}B(s))-B(t)<a\right)= 1 - 2\mathbb P\left( B(t)\geq a\right).$$
Which is easy as $B(t)$ is normally distributed.
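A Monte Carlo sanity check of the final formula (a numpy/scipy sketch; the values $t=1$ and $a=0.8$ are my own choice for illustration):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
t, a, steps, paths = 1.0, 0.8, 500, 20_000
dB = rng.normal(0.0, np.sqrt(t / steps), size=(paths, steps))
B = np.cumsum(dB, axis=1)
running_max = np.maximum(B.max(axis=1), 0.0)   # include B(0) = 0 in the max
estimate = np.mean(running_max - B[:, -1] < a)

exact = 1 - 2 * (1 - norm.cdf(a / np.sqrt(t)))  # 1 - 2 P(B(t) >= a)
print(estimate, exact)                          # both roughly 0.576
```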
|
H: Show that $\frac 1 2 <\frac{ab+bc+ca}{a^2+b^2+c^2} \le 1$
If $a,b,c$ are sides of a triangle, then show that $$\dfrac 1 2 <\dfrac{ab+bc+ca}{a^2+b^2+c^2} \le 1$$
Trial: $$(a-b)^2+(b-c)^2+(c-a)^2 \ge 0\\\implies a^2+b^2+c^2 \ge ab+bc+ca $$ But how I prove $\dfrac 1 2 <\dfrac{ab+bc+ca}{a^2+b^2+c^2}$ . Please help.
AI: Adding the triangle inequalities:$$a^{2}>(b-c)^{2}\\b^{2}>(a-c)^{2}\\c^{2}>(b-a)^{2}$$
We get $$a^{2}+b^{2}+c^{2}>2a^{2}+2b^{2}+2c^{2}-2ab-2bc-2ca$$
Now divide by $a^{2}+b^{2}+c^{2}$ to get,$$1>2-2\left(\frac{ab+bc+ca}{a^2+b^2+c^2}\right)\\\rightarrow \frac{ab+bc+ca}{a^2+b^2+c^2}>\frac{1}{2}$$
|
H: Sequences of i.i.d. subgaussian RVs and uniform integrability
Consider a sequence of i.i.d. subgaussian RVs $\{a_{j}\}^{n-1}_{j = 0}$; is $\{a^2_{j}\}^{n-1}_{j = 0}$ uniformly integrable (UI)?
Intuitively it appears to be so; if we take for example $a_j$ i.i.d. symmetric Bernoulli (which is subgaussian) then UI of $a^2_j$ is easy to show since the expectation $\mathbf{E}[|a^2_j|] < +\infty$ and by applying directly the definition of UI in Billingsley, "Probability and Measure", (16.21), p. 220 the result is immediate.
Actually, in these notes example 6 claims that all zero-mean, unit-variance RVs are UI. However, the generalization to the initial statement of this question does not seem immediate, i.e., is it true that the square of a subgaussian RV is uniformly integrable?
Since this question arises in an engineering application of a particular CLT and I must keep things as simple as possible, I would like to know if anyone is aware of a quotable result in this direction, or if this statement is trivial enough that a reference to the definition of uniform integrability suffices.
AI: Uniform integrability is concerned only with the distributions of the random variables hence every collection of i.i.d. random variables is equivalent to any single random variable in the collection, as far as uniform integrability is concerned.
In particular every collection of i.i.d. integrable random variables is uniformly integrable because a single integrable random variable is uniformly integrable.
|
H: Show that $2^n>1+n\sqrt{2^{n-1}}$
If $n$ be a positive integer greater than $1$, then prove that $$2^n>1+n\sqrt{2^{n-1}}$$
I found this problem under $AM \ge GM$ chapter. Help me to solve this problem using $AM \ge GM$. Thanks in advance.
AI: Hint:$1+2+2^2+\dots 2^{n-1}=2^n-1$
Edit:
Solution:
Applying A.M.-G.M. to the $n$ terms of $\displaystyle \sum_{k=0}^{n-1}2^{k}$, we have $2^n-1=\displaystyle \sum_{k=0}^{n-1}2^{k}> n\Big(\prod_{k=0}^{n-1}2^{k}\Big)^{\frac{1}{n}}=n\,2^{\frac{1}{n}\sum_{k=0}^{n-1}k}=n\,2^{\frac{n-1}{2}}=n\sqrt{2^{n-1}}$, where the inequality is strict because the terms are not all equal for $n>1$. Hence $2^n>1+n\sqrt{2^{n-1}}$.
|
H: What "whisker" means in box-and-whisker plot?
This is a bit off-topic but I can't help thinking about the reason behind naming box-and-whisker plot.
"Whisker" according to dictionary is "any of the long stiff hairs that grow near the mouth of a cat, mouse, etc." To me it seems very irrelevant. Why the word "whisker" chose for this particular type of plot. Does "whisker" have another mathematical meaning?
AI: Perhaps it's more evocative if we draw it slightly differently. Think of the single line and horizontal lines as an abstract version of this (Kazimir Malevich would be so proud).
In any case, the further you go in mathematics the lazier mathematicians get in naming things. Shout outs to "admissible", "pseudo / quasi / weak", and the horrors of "normal" and "regular".
|
H: In the proof for Urysohn's lemma, why isn't "$x\in {U_{r}}$, then $f(x)\leq r$, and if $x\notin {U_{r}}$, then $f(x)\geq r$" true?
This proof of Urysohn's lemma states that if $x\in \overline{U_{r}}$, then $f(x)\leq r$, and if $x\notin \overline{U_{r}}$, then $f(x)\geq r$. This portion is given on page 4.
Isn't this also true for open sets $U$?
Isn't "if $x\in U_{r}$, then $f(x)\leq r$, and if $x\notin U_{r}$, then $f(x)\geq r$" also true, where $U_r$ is an open set that is not clopen?
EDIT: Justification:
If $x\in U_{r}$, then $f(x)\leq r$ quite clearly.
If $x\notin U_{r}$ but $x$ is in every open set whose index is higher than $r$, then $f(x)=r$ (as $f(x)$ is the greatest lower bound of those indices). Otherwise, $f(x)>r$.
$x,U_{r},\overline{U}$, all belong to normal space $X$.
EDIT: Why this is relevant in the proof is:
Let $(a,b)$ be an open interval in $\mathbb{R}$. Then, if $f$ is continuous, $f^{-1}(a,b)$ should also be open. $f^{-1}(a,b)$ contains points that are present in open sets $U_{r}\setminus U_{a}$ such that $a<r<b$. $f^{-1}(a,b)$ can't contain points in $\overline{U_{a}}\setminus U_{a}$ as then $\{a\}$ would be included in the mapping.
If what I'm saying is true (that if $x\notin U_{r}$ but $x$ is in every open set whose index is higher than $r$, then $f(x)\geq r$), then the mapping $f$ will also contain $\{a\}$.
So what exactly is $f^{-1}(a,b)$?? This seems to be a direct contradiction of the fact that $f^{-1}(a,b)$ contains points that are present in open sets $U_{r}\setminus U_{a}$ such that $a<r<b$.
$f^{-1}(a,b)$ seems to contain points in sets satisfying the following condition: the greatest lower bound of the indices of the sets is not $a$.
Thanks in advance!
AI: It is true that if $x\in U_r$, then $f(x)\le r$, and if $x\notin U_r$, then $f(x)\ge r$; this just isn't very useful for the proof of Uryson's lemma.
Added after OP’s edit of the question: Note that $U_a$ is defined only for $a\in\Bbb Q$, so you want to start with an open interval $(a,b)$ such that $a,b\in\Bbb Q$. Recall that
$$f:X\to\Bbb R:x\mapsto\inf\{p\in\Bbb Q:x\in U_p\}\;.$$
Suppose that $f(x)\in(a,b)$, so that $a<f(x)<b$. Clearly $a<f(x)$ implies that $x\notin U_a$. However, I claim that we can say more: $$x\notin\bigcap\{\operatorname{cl}U_p:p\in\Bbb Q\text{ and }p>a\}\;.$$
Suppose, on the contrary, that $$x\in\bigcap\{\operatorname{cl}U_p:p\in\Bbb Q\text{ and }p>a\}\;.$$ For any rational $p>a$ there is a rational $q\in(a,p)$, so $x\in\operatorname{cl}U_q\subseteq U_p$. Thus, $$\{p\in\Bbb Q:x\in U_p\}\supseteq\Bbb Q\cap(a,\to)\;.$$ On the other hand, $x\notin U_a$, and for each rational $p\le a$ we have $U_p\subseteq U_a$ and hence $x\notin U_p$, so in fact
$$\{p\in\Bbb Q:x\in U_p\}=\Bbb Q\cap(a,\to)\;,$$ and $f(x)=\inf\{p\in\Bbb Q:x\in U_p\}=\inf\big(\Bbb Q\cap(a,\to)\big)=a\notin(a,b)$. This is impossible, so $x$ cannot belong to $\bigcap\{\operatorname{cl}U_p:p\in\Bbb Q\text{ and }p>a\}$.
Since $b>\inf\{p\in\Bbb Q:x\in U_p\}$, there is a $p\in\Bbb Q$ such that $p<b$ and $x\in U_p$; thus, $$x\in\bigcup\{U_p:p\in\Bbb Q\text{ and }p<b\}\;.$$
Putting the pieces together, we see that
$$x\in\left(\bigcup\{U_p:p\in\Bbb Q\text{ and }p<b\}\right)\setminus\bigcap\{\operatorname{cl}U_p:p\in\Bbb Q\text{ and }p>a\}\;.$$
Conversely, suppose that
$$x\in\left(\bigcup\{U_p:p\in\Bbb Q\text{ and }p<b\}\right)\setminus\bigcap\{\operatorname{cl}U_p:p\in\Bbb Q\text{ and }p>a\}\;.$$
Since $x\in\bigcup\{U_p:p\in\Bbb Q\text{ and }p<b\}$, there is a rational $p<b$ such that $x\in U_p$, and it follows at once that $f(x)\le p<b$. Since $x\notin\bigcap\{\operatorname{cl}U_p:p\in\Bbb Q\text{ and }p>a\}$, there is a rational $p>a$ such that $x\notin\operatorname{cl}U_p$ and hence $x\notin U_p$, and it follows that $f(x)\ge p>a$. Thus, $f(x)\in(a,b)$, and we’ve shown that
$$f^{-1}\big[(a,b)\big]=\left(\bigcup\{U_p:p\in\Bbb Q\text{ and }p<b\}\right)\setminus\bigcap\{\operatorname{cl}U_p:p\in\Bbb Q\text{ and }p>a\}\;.$$
$\bigcup\{U_p:p\in\Bbb Q\text{ and }p<b\}$ is open, and $\bigcap\{\operatorname{cl}U_p:p\in\Bbb Q\text{ and }p>a\}$ is closed, so their difference, $f^{-1}\big[(a,b)\big]$, is open, just as it should be. Since this is the case for all open intervals $(a,b)$ with rational endpoints, and these form a base for the topology of $\Bbb R$, it follows that $f$ is continuous.
|
H: How does this expression arise: $\pi(10.5) = \phi (-z_{1-\alpha} + \sqrt{n} \frac{\mu_0-\mu}{2})$?
$X_i$ is $N(\mu,\sigma^2)$ distributed, and the following is given: $H_0: \mu \geq 12$, $H_a: \mu < 12$, and $\alpha=0.01$. I'm asked to calculate $\beta=P[\text{TII}]$ if in fact $\mu=10.5$.
Now this is the first step that the solutions provide:
Now I don't understand why $\pi(10.5) = \phi (-z_{1-\alpha} + \sqrt{n} \frac{\mu_0-\mu}{2})$ ($\pi(\cdot)$ denotes the power function here). This doesn't seem obvious, and I don't see how they have derived this expression. (I do understand how they have standardized $\bar{X}$, but I still don't see the obvious reason for this expression.) Could anyone please explain this to me?
AI: Here's how you get to that:
$$
P(\text{Type II error})=1-P(\text{Being in the Rejection Region}\mid\mu=10.5)\\=1-P\left(\frac{\bar X-\mu_0}{\sigma/\sqrt{n}}<-z_{1-\alpha}\right)=1-P\left(\frac{\bar X-\mu}{\sigma/\sqrt{n}}<-z_{1-\alpha}+\frac{\mu_0-\mu}{\sigma/\sqrt{n}}\right)\\
=1-P\left(Z<-z_{1-\alpha}+\frac{\mu_0-\mu}{\sigma/\sqrt{n}}\right)=1-\phi\left(-z_{1-\alpha}+\frac{\mu_0-\mu}{\sigma/\sqrt{n}}\right)
$$
So in your example, $\sigma=2$. A recommendation is to draw the densities and compare them and think about what the probability you are calculating actually means.
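A sketch of the computation with scipy; the sample size $n$ does not appear in the excerpt, so $n=10$ is assumed here purely for illustration:

```python
from math import sqrt
from scipy.stats import norm

alpha, mu0, mu, sigma = 0.01, 12.0, 10.5, 2.0
n = 10                      # assumed; not given in the excerpt
z = norm.ppf(1 - alpha)
power = norm.cdf(-z + sqrt(n) * (mu0 - mu) / sigma)   # pi(10.5)
beta = 1 - power                                      # P(Type II error)
print(power, beta)
```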
|
H: Existence of projective curve such that...
Is it true that for any two different integers $d,d'>1$ there exist two projective curves, of degrees $d$ and $d'$, that are not isomorphic, and two projective curves that are birational?
AI: The answer to the second question is "yes".
Consider the affine curves given by the equations $y^n-x$ over some field $k$. Their function field $F$ is rational: $F=k(x,y)=k(y)$. Hence the projective closure of all of these curves is birational to $\mathbb{P}^1_k$.
However these projective closures are pairwise non-isomorphic: for $n>1$ the curves possess a singularity at $[1:0:0]$. This is the only singularity of these curves. As Matt pointed out, the singularity is given by the equation $y^n=z^{n-1}$, and thus the local ring at the singularity is $O_n:=\bigl(k[y,z]/(y^n-z^{n-1})\bigr)_{(y,z)}$. Since $(\frac{z}{y})^{n-1}=y$, the element $\frac{z}{y}$ is integral over $O_n$.
Claim: the ring $O:=O_n[\frac{z}{y}]$ is local and the integral closure of $O_n$ in $F$.
Proof: $O$ contains $k[\frac{z}{y}]$ and is therefore integrally closed, hence it is the integral closure of $O_n$. $(y,z)$ is the maximal ideal of $O_n$ and $(y,z)O=\frac{z}{y}O$. Consequently $O/(y,z)O=O_n/(y,z)$, that is $(y,z)O$ is the only maximal ideal of $O$ lying over $(y,z)$.
One concludes that $O$ is a discrete valuation ring. For the valuation $v$ attached to it one has $v(y)=n-1$ and $v(z)=n$. It follows that
$\mu_n:=\min (v(a) : a\in (y,z))=n-1$.
For isomorphic local rings $O_n$ the integral closures in $F$ are isomorphic, and the numbers $\mu_n$ must therefore be equal. Thus the rings $O_n$ for $n>1$ cannot be isomorphic.
For $n=1$ the curve is regular.
|
H: Dirac delta applied to Dirichlet function
Let's denote Dirichlet function as $d(x)$:
$$d(x)=\begin{cases}1, & x \in \mathbb{Q}\\ 0, & x\not\in \mathbb{Q}\end{cases}$$
I know that Lebesgue integral of Dirichlet function on any finite domain is zero:
$$\int_a^b d(x) dx=0$$
On the other hand, integral of Dirac function is 1 on any set including $0$ ($a<0<b$):
$$\int_a^b\delta(x)dx=1$$
So, I wonder, what would this integral evaluate to:
$$\int_a^b\delta(d(x))dx$$
? Does $\delta(d(x))$ make sense at all?
AI: The "function" $\delta$ is in reality the measure
$$ \delta(A) = 1 \Leftrightarrow 0 \in A$$
which is not absolutely continuous with respect to the Lebesgue measure. So the expression
$$ \delta(d(x))$$
does not make sense, since it assumes that the "function" $\delta$ is the density of the measure $\delta$ with respect to the Lebesgue measure.
If we interpret $\delta(x)$ as the "function" which is infinite at $0$ and $0$ otherwise (while integrating to $1$), then $\delta(d(x))$ is a "function" which is infinite at every rational $x$ and $0$ otherwise, while integrating to ... I think the interpretation has reached its limit.
If there is any mathematical object that could represent $\delta(d(x))$, it is a countable sum of Dirac masses, one at each rational, where $\delta_y$ is the generalization of $\delta$ putting its mass at an arbitrary point $y$ (in terms of the 'function' delta, $\delta_y(x) = \delta(x - y)$); that is
$$ ``\delta(d(x)) \text{''} = \sum_{y \in \mathbb{Q}} \delta_y $$
and so
$$ ``\int_a^b \delta(d(x)) dx \text{''} = \sum_{y \in \mathbb{Q}} \delta_y([a,b]) = \sum_{y \in [a,b] \cap \mathbb{Q}} 1 = \infty$$
For $\hat{d}(x) = 1 - d(x)$ it is even worse, since we would have an uncountable sum of non-zero terms, and that is not even well-defined in general.
|
H: Trigonometry Problem Solving
How can we estimate the height (h) of a castle surrounded by a moat, using the info below?
AI: I assume that's supposed to be $50^\circ,$ not just $50$. Let $x$ be the distance (in meters) from the closer viewing point to the castle wall, so that the distance to the further viewing point is $x+45$ meters. Use SOHCAHTOA on the two right triangles and solve the system of two equations in $x$ and $h$ for the variable $h$. (Don't estimate until the very end.)
|
H: I am not understanding what is asked to be computed in the following exercise.
let $f(x)=(x+2)(x+1)x(x-1)^3(x-2)$.
To which zero of $f$ does the Bisection method converge when applied on the interval $[-3,2.5]$?
Am I asked to find a root of $f(x)$?
AI: Do you know how to implement the bisection method? Every zero of $f$ lies in the initial interval (how can we tell?), but after a few iterations, there will be only one remaining. That is the zero to which the method will converge.
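A short bisection run (a Python sketch) makes the answer visible:

```python
def f(x):
    return (x + 2) * (x + 1) * x * (x - 1)**3 * (x - 2)

a, b = -3.0, 2.5
for _ in range(50):
    m = (a + b) / 2
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
print((a + b) / 2)   # the iterates settle on the zero at x = 2
```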
|
H: Number of solutions for $\sum_{i=1}^{4} x_i < 22$ with condition.
I'm looking for the number of integer solutions to $\displaystyle\sum_{i=1}^{4} x_i < 22$ where $x_i > i$.
Any help is appreciated.
I tried solving it using combinations to do $C(11,4)$ but that doesn't seem to be the right answer.
AI: Hint:
$x_1+x_2+x_3+x_4+y=21$, where $y\geq 0$.
$(x_1-2)+(x_2-3)+(x_3-4)+(x_4-5)+y=7$
in which every element is greater than or equal to zero.
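Following the hint, the count is $\binom{7+4}{4}=\binom{11}{4}=330$; a brute-force check (Python sketch):

```python
from itertools import product

count = sum(1 for x in product(*(range(i + 1, 22) for i in range(1, 5)))
            if sum(x) < 22)
print(count)   # 330 = C(11, 4)
```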
|
H: Is there an algorithm to read this mathematical definition?
I'm reading Gemignami's Calculus and Statistics.
I'm a little stuck in this definition:
Definition 7: Suppose $f$ is a function from a set $T$ of real numbers into $R$. Then $f$ is said to be continuous at a point $a$ of $T$ if, given any positive number $p$, there is a positive number $q$ such that whenever $x$ is a point of $T$ and $|x-a|<q$, then $|f(x)-f(a)|<p$. The function $f$ is said to be continuous if it is continuous at every point of $T$.
The question may seem naive, but I really miss an algorithm to read this definition. I usually need to look at examples, analogies, wikipedia and other sources to form a intuitive conception of the concept, and I guess that definition should be self-contained (right?). The problem for me is that there are a lot of premises that are assumed at the same time and I feel confused with that. I am unable to figure out what's happening in such definitions and to make things a little worse, I'm learning alone at the moment. So, Is there an algorithm/way to read this mathematical definition?
AI: To me, the big thing is to figure out how to translate the definition in to English sentences that are not as precise, but are more telling.
The big, unsatisfying thing about my answer is this: in order to do this, you need to have some sense of what the definition is trying to tell you. But, I think, it doesn't have to be a very LARGE sense; plus, this is something that you get much better at as time passes. A lot of it depends on having access to a teacher who really understands it themselves (or, a community like this one, where there are lots of people who understand!)
For instance:
Given any definition of 'small' (given any positive number $p$)
we can define 'close' (there is a positive number $q$)
such that whenever $x$ is close to $a$, (such that whenever $x$ is a point of $T$ and $\lvert x-a\rvert<q$,)
the difference between $f(x)$ and $f(a)$ is small. ($\lvert f(x)-f(a)\rvert<p$.)
|
H: Limit of sequence $u_1,u_3,u_5,\dots$ with $u_{n+1}=1+\frac{1}{u_n}$
We have a sequence of numbers defined recursively by $$u_{n+1}=1+\frac{1}{u_n},$$for $n\geqslant 1$. It is also given that $u_1=1$. Find the limit $l$ of the sequence $u_1,u_3,u_5,\dots$.
So I said, $u_1,u_3,u_5,\dots$ is given by $u_{2n+1}$ for $n\geqslant 0$. Then, $$\lim_{n\to\infty}u_{2n+1}=1+\frac{1}{\lim_{n\to\infty}u_{2n}}.$$Now, as $n\to\infty$, $u_{2n}=u_{2n+1}=l$, and so $$l=1+1/l \Rightarrow l=\frac{1+\sqrt{5}}{2},$$since $l>0$.
Can somebody please explain to me what's wrong with my reasoning (because it really 'feels' wrong)?
Thanks.
AI: Here's a more solid reasoning:
$$u_{2 n+1} = 1+\frac{1}{u_{2 n}} = 1+\frac{1}{1+\frac{1}{u_{2 n-1}}}$$
which means that
$$u_{2 n+1} = \frac{1+2 u_{2 n-1}}{1+u_{2 n-1}}$$
Now assuming the limit exists, apply the same reasoning as above and the same result emerges.
|
H: Difficulty to solve the exercise of Bisection method.
Find an approximation to $ {25}^{\frac{1}{3}}$ correct to within $10^{-4}$ using the Bisection algorithm.
How to solve it?
Where are the function and interval here?
AI: The most natural function is $f(x)=x^3-25$. You need an interval $[a,b]$ such that $f(a)$ and $f(b)$ have different signs. Take $a=0$ and $b=25$ for instance.
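As a side remark on the accuracy requirement (assuming this $a=0$, $b=25$ bracket): the standard bound gives
$$\frac{25-0}{2^n}<10^{-4}\iff 2^n>250000\iff n\ge 18,$$
since $\log_2 250000\approx 17.93$. A tighter starting interval such as $[2,3]$ (note $2^3<25<3^3$) would need only $14$ iterations.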
|
H: Bottom to top explanation of the Mahalanobis distance?
I'm studying pattern recognition and statistics, and in almost every book I open on the subject I bump into the concept of Mahalanobis distance. The books give sort of intuitive explanations, but still not good enough ones for me to actually really understand what is going on. If someone would ask me "What is the Mahalanobis distance?" I could only answer: "It's this nice thing, which measures distance of some kind" :)
The definitions usually also contain eigenvectors and eigenvalues, which I have a little trouble connecting to the Mahalanobis distance. I understand the definition of eigenvectors and eigenvalues, but how are they related to the Mahalanobis distance? Does it have something to do with changing the basis in Linear Algebra etc.?
I have also read these former questions on the subject:
https://stats.stackexchange.com/questions/41222/what-is-mahanalobis-distance-how-is-it-used-in-pattern-recognition
Intuitive explanations for Gaussian distribution function and mahalanobis distance
http://www.jennessent.com/arcview/mahalanobis_description.htm
The answers are good and the pictures nice, but still I don't really get it... I have an idea, but it's still in the dark. Can someone give a "How would you explain it to your grandma" explanation, so that I could finally wrap this up and never again wonder what the heck a Mahalanobis distance is? :) Where does it come from, what, why?
I will post this question on two different forums so that more people have a chance of answering it; I think many other people might be interested besides me :)
Thank you in advance for help!
AI: As a starting point, I would see the Mahalanobis distance as a suitable deformation of the usual Euclidean distance $d(x,y)=\sqrt{\langle x-y,x-y \rangle}$ between vectors $x$ and $y$ in $\mathbb R^{n}$. The extra piece of information here is that $x$ and $y$ are actually random vectors, i.e. 2 different realizations of a vector $X$ of random variables, lying in the background of our discussion. The question that the Mahalanobis distance tries to address is the following:
"how can I measure the "dissimilarity" between $x$ and $y$, knowing that they are realizations of the same multivariate random variable?"
Clearly the dissimilarity of any realization $x$ with itself should be equal to 0; moreover, the dissimilarity should be a symmetric function of the realizations and should reflect the existence of a random process in the background. This last aspect is taken into consideration by introducing the covariance matrix $C$ of the multivariate random variable.
Collecting the above ideas we arrive quite naturally at
$$D(x,y)=\sqrt{\langle (x-y),C^{-1}(x-y)\rangle} $$
If the components $X_i$ of the multivariate random variable $X=(X_1,\dots,X_n)$ are uncorrelated, with, for example, $C_{ij}=\delta_{ij}$ (we "normalized" the $X_i$'s in order to have $Var(X_i)=1$), then the Mahalanobis distance $D(x,y)$ is the Euclidean distance between $x$ and $y$. In the presence of non-trivial correlations, the (estimated) covariance matrix $C$ "deforms" the Euclidean distance.
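A minimal numerical illustration (a numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])              # covariance of the background process
X = rng.multivariate_normal([0, 0], C, size=1000)
C_inv = np.linalg.inv(np.cov(X, rowvar=False))  # inverse estimated covariance

def mahalanobis(x, y):
    d = x - y
    return np.sqrt(d @ C_inv @ d)

x, y = X[0], X[1]
print(mahalanobis(x, y), np.linalg.norm(x - y))  # deformed vs Euclidean
```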
|
H: $\int\sin^2(Cx)\,dx$ from a manual - need proof
In a book on quantum mechanics I came across an integral which was supposed to be from a manual ($C$ is a constant):
\begin{align}
\int\limits_{0}^d \sin^2\left( C x \right)\, d x = \left.\left(\frac{x}{2}- \frac{\sin(2Cx)}{4C}\right)\right|_0^d
\end{align}
Where can I read more about this? I would be glad if anyone could provide me a proof.
AI: $$\sin ^2 Cx=\dfrac{1-\cos 2Cx}{2}$$
$$\int \sin ^2 Cx\;dx=\int\dfrac{1-\cos 2Cx}{2}\;dx$$
$$\int\left(\dfrac 12-\dfrac{\cos 2Cx}{2}\right)dx$$
$$\dfrac {x}{2}-\dfrac{\sin 2Cx}{2\cdot 2C}$$
$$\dfrac {x}{2}-\dfrac{\sin 2Cx}{4C}$$
Now just put in the given limits:
$$\left[\dfrac {x}{2}-\dfrac{\sin 2Cx}{4C}\right]_0^d$$
$$\dfrac {d}{2}-\dfrac{\sin 2Cd}{4C}$$
|
H: Compactness and Normed Linear spaces
If the set $S=\{ x \in X : ||x||=1 \}$ in the normed linear space $X$ is compact, how can it be shown that $X$ is finite dimensional?
AI: Following up on Martin's comment, if you know Riesz's lemma, you can use it, supposing that $X$ is infinite-dimensional, to inductively create a sequence $\{x_n\}$ of unit vectors with the property that $x_n$ has distance at least $\tfrac{1}{2}$ to $\mathrm{span} \{x_1,\dots,x_{n-1}\}$ and thus show that no subsequence of $\{x_n\}$ is Cauchy. See this blog post for the full argument.
|
H: Given that $ x + \frac 1 x = r $ what is the value of: $ x^3 + \frac 1 {x^2}$ in terms of $r$?
Given that $$ x + \cfrac 1 x = r $$
what is the value of: $$ x^3 + \cfrac 1 {x^2}$$ in terms of $r$?
NOTE: it is $\cfrac 1 {x^2}$ and not $ \cfrac 1 {x^3} $
How far I have gotten:
$$ \Big(x^3 + \cfrac 1 {x^2}\Big) + \cfrac 1 x \cdot\Big(x^3 + \cfrac 1 {x^2}\Big) = r^3 + r^2 -3r - 2 $$
Any hints??
AI: If $g$ is a function such that $g(x)=f(x+\frac{1}{x})$ then $g(1/x)=g(x)$.
Your $g(x)=x^3+x^{-2}$ therefore cannot be written as $f(x+1/x)$: for instance $g(2)=8+\tfrac14$ while $g(1/2)=\tfrac18+4$, even though $2+\tfrac12=\tfrac12+2$.
|
H: What does ($\ln x$) or ($\log x$) mean?
How is a logarithm followed by a variable, such as $\ln x$ or $\log x$, read? Is it $\log$ times $x$ or the $\log$ of $x$? I'm a little confused by this...?
AI: "Log" is a function; hence, interpreting $\log x$ as "$\log$ times $x$" doesn't make any sense - $\log$ needs an input before it can be interpreted as a number. The notations $\log x$ and $\ln x$ mean the exact same thing as $\log(x)$ and $\ln(x)$; they are just used as short-hand when leaving off the parentheses will not cause confusion.
|
H: Vector space with multiplication
A vector space is a commutative group and I am wondering if it can be extended to be a ring by defining a multiplication. I tried $v \cdot w = (v_1 w_1, ..., v_nw_n)$ componentwise, but then not every nonzero vector has an inverse. Is it possible to construct a multiplication?
AI: The construction you described is a perfectly fine way to define a multiplication operation on a vector space, which is just viewing the vector space as the ring product of $n$ copies of the field.
There are other ways too, but they are not always possible for every $n$. Here's what I mean: suppose $n=k^2$. Then you can just rewrite the vectors as $k\times k$ matrices, and then you have multiplication given by matrix multiplication. That would be an entirely different multiplication than the one you proposed.
There are still more ways that a vector space can have a multiplication, but there are not a lot that work for generic vector spaces (the coordinatewise product above is an exception.) The study of vector spaces that are rings is just the study of algebras over fields.
I think there is another construction you should be interested in, but it is not exactly what you're asking for. This is purely for your information.
The tensor algebra $T(V)$ of a vector space $V$ is "the biggest algebra generated by $V$". In some sense this means it is a ring that contains $V$ and doesn't contain superfluous stuff, and that every other algebra like that is a quotient of it.
This produces a multiplication on a much larger set ($T(V)$) and when you multiply things in $V$ together, you do not generally get something back in $V$, so it is not really a multiplication on $V$. Still, this is a rather interesting thing to study. The tensor algebra itself, and several quotients of it (like the symmetric algebra, the exterior algebra, and Clifford algebras) are very interesting algebras.
|
H: Complex analysis knowledge that required to understand material in Riemann Surface
I have taken a course on complex analysis at university; at that time the instructor chose the book "Complex Analysis" by Serge Lang. Now I am participating in a seminar on Riemann surfaces which is based on the book Lectures on Riemann Surfaces by Forster. With my simple background in complex analysis, I almost did not understand the material in lecture 2: elementary properties of holomorphic mappings (which I did not find in Lang's book).
So, my question is: which book on complex analysis should I read to prepare for that seminar? Please help me. Thanks
Edit: What I want to say is: with my background in complex analysis, I could not understand the proofs of some theorems, corollaries, ... in section 2 of Forster's book. I really want to know which book I should read to understand them. Section 1 does not use much complex analysis, so it is not my problem now.
AI: I would use Bak & Newman's "Complex Analysis" for an introduction to the basics. The book by Forster is a good one; personally I loved Farkas & Kra's "Riemann Surfaces". It contains the complex analysis which is used in the book and the section on theta functions is simply great. Miranda's "Algebraic curves and Riemann Surfaces" is a nice one, too.
|
H: Cross product in $\mathbb R^n$
I read that the cross product can't be generalized to $\mathbb R^n$. Then I found that in $n=7$ there is a Cross product: https://en.wikipedia.org/wiki/Seven-dimensional_cross_product
Why is it not possible to define a cross product for other dimensions $ \ge 4$?
AI: The issue is the problem of choice.
Given two linearly independent vectors in $\mathbb R^3$, the dimension of the space perpendicular on both is $1$. This means that up to scalar multiplication, you know the perpendicular direction.
The only issue is choosing one of the two opposing directions and a magnitude, and there is a simple way of doing this, the known way, which in some sense comes out naturally from Cramer's rule. Moreover, this choice works nicely in the case of linearly dependent vectors.
In higher dimensions the problem becomes much more complicated, since the space perpendicular to two vectors has higher dimension. Then, if one tries to define the cross product, one has to choose one of infinitely many directions in a consistent way.
Also, $\mathbb R^3$ can be identified in a "natural" way with a subspace of $\mathbb R^4$ in many ways, for example $\mathbb R^3 \times \{0\}$ or $\{0\} \times \mathbb R^3$. But no matter how you define the cross product in $\mathbb R^4$, it won't be consistent with one of these identifications...
|
H: Hilbert polynomial of disjoint union of lines in $\Bbb{P}^3$
Let $X$ be the disjoint union of the two lines in $\Bbb{P}^3$ given by $Z(x,y)$ and $Z(z,w)$. Letting $R = k[x,y,z,w]$, I have computed the following free resolution for the homogeneous coordinate ring $S(X)$:
$$0 \to R(-4) \stackrel{\varphi_0}{\longrightarrow} R(-3)^4 \stackrel{\varphi_1}{\longrightarrow} R(-2)^4 \stackrel{\varphi_2}{\longrightarrow} R \longrightarrow S(X) \to 0.$$
$\varphi_0$ is the map that sends $r \in R(-4)$ to the $4$ - tuple $(ry,-rz,rw,-rx)$, $\varphi_1$ is multiplication by the matrix $$\left( \begin{array}{cc} z & y & 0 & 0 \\ -w & 0 &y & 0 \\ 0& 0& -x & w \\ 0 & -x & 0 &-z \end{array}\right)$$
and $\varphi_2$ is multiplication by the row matrix $\left(\begin{array}{cccc} xw & xz & yw & yz\end{array}\right)$. Note I have used that $I(X)= (x,y) \cap (z,w) = (xw,xz,yw,yz)$. From the free resolution above I then computed the Hilbert polynomial $H_X(t)$ for $X$:
$$H_X(t) = 2(t+1).$$
My questions are:
Is the free resolution for $S(X)$ above a minimal resolution for $S(X)$? If not how can I find the minimal one?
What is the significance of the Hilbert polynomial in this case, for example how does it encode information about the dimension of $X$? What if I have a general projective variety?
AI: The degree of the polynomial is one; this encodes the fact that your variety is of dimension $1$. The leading term is $2$ (or, perhaps better, $2/1!$); this encodes the fact that your variety is of degree $2$ (a generic plane meets it in two points).
The meaning of the constant term is a bit more subtle, but here is one way to think about it:
Suppose first instead you have two lines meeting in a point; equivalently, two co-planar lines. The two co-planar lines are a (degenerate) conic in the plane, and all planar conics have the same Hilbert polynomial, namely $2t+1$.
Now two disjoint lines have $1$ more point than two co-planar lines (because the two co-planar lines share a point in common), and this is reflected in the fact that the Hilbert polynomial is $2t + 2 = (2t+1) +1$; so that gives some significance to the constant term.
(This example is discussed somewhere in Hartshorne, maybe in the discussion of flat families near the end of Chapter III; if you degenerate your two skew lines into two lines that meet in a point in a flat family, you don't simply get two co-planar lines, but rather two co-planar lines with a non-reduced structure at the intersection point, reflecting the fact that the "extra point" can't disappear.)
|
H: Interchange of partial derivative and limit
Consider the following expression:
$$\frac{\partial}{\partial m} \lim_{T \rightarrow \infty} \gamma(T,m)$$
where $\gamma$ is a function of $T$ and $m$.
My question is just: can I permute the partial derivative and the limit operators? I suppose that I can, given that the concerned variable is different for each operator but I still need a confirmation.
AI: In general: No, you can't. Consider
$$ \gamma(T, m) = \frac 1T \sin(T^2 m) $$
Then $\lim_T \gamma(T, m) = 0$, hence $\partial_m \lim_T \gamma(T, m) = 0$. But $\partial_m \gamma(T,m) = T\cos(T^2m)$ and $\lim_T \partial_m \gamma(T,m)$ does not exist.
|
H: Show that $|f(p_n)|<10^{-3}$ whenever $n>1$ but that $|p-p_n|<10^{-3}$ requires that $n>1000$.
Let $f(x)=(x-1)^{10}$.
The root of the equation , $p=1$.
The approximations of the root: $p_n=1+\frac{1}{n}$
Show that $|f(p_n)|<10^{-3}$ whenever $n>1$ but that $|p-p_n|<10^{-3}$ requires that $n>1000$.
I started to solve it by the following way:
$|p-p_n|<10^{-3}\Rightarrow |1-(1+\frac{1}{n})|<10^{-3}\Rightarrow |\frac{1}{n}|<10^{-3}\Rightarrow |n|>1000\Rightarrow n>1000$
And
$f(p_n)=[(1+\frac{1}{n})-1]=\frac{1}{n}$; whenever $n>1$
But how to show
$|f(p_n)|<10^{-3}$ ?
AI: You forgot the power of $10$ when you computed $f(p_n)$: in fact $f(p_n)=\left[\left(1+\frac1n\right)-1\right]^{10}=\frac1{n^{10}}$, and for integers $n>1$ we have $\frac1{n^{10}}\le\frac1{2^{10}}<10^{-3}$.
|
H: trace of $\operatorname{End}(E)$-valued $p$-form
$\eta\,$ is $\,\operatorname{End}(E)$-valued $p$-form. $\;d_D\,$ is exterior covariant derivative. $\,E\,$ is Bundle.
How can I prove $$\,\operatorname{tr}(d_D\eta)=d(\operatorname{tr}(\eta))\quad ?$$
AI: Proofs like this entail "unpacking" the respective definitions (recall your earlier post: Exterior covariant derivative of $\operatorname{End}(E)$-valued $p$-form, e.g., and see also Taking trace of vector-valued differential forms) and then showing that, by definition, $$\,\operatorname{tr}(d_D\eta)=d(\operatorname{tr}(\eta)).$$ Concretely, in a local trivialization with connection form $\omega$ one has $d_D\eta = d\eta + \omega\wedge\eta-(-1)^p\,\eta\wedge\omega$; the trace of this graded commutator vanishes, and the trace commutes with $d$, so $\operatorname{tr}(d_D\eta)=\operatorname{tr}(d\eta)=d(\operatorname{tr}\eta)$.
|
H: Infinite Sets in Compact Spaces
Given that:
$S$ is a compact set in the topological space $(X, \mathcal T)$
$T\subset S$ has no accumulation points in S
How do I show that $T$ is finite?
AI: For every $s \in S$ we find a neighbourhood $U_s$ that intersects $T$ in at most finitely many points, so $U_s \cap T$ is finite (possibly empty). This is because no point of $S$ is an accumulation point of $T$, by assumption.
As the $U_s$ ($ s \in S$) form an open cover of $S$ by compactness we find a finite subcover $U_{s_1},\ldots,U_{s_n}$.
So $$T= T \cap S \subset T \cap (\cup_{i=1}^n U_{s_i}) = \cup_{i=1}^n (T \cap U_{s_i})$$
where the first equality follows from $T \subset S$, the inclusion from the fact that the $U_{s_i}$ cover $S$, and the last equality is the standard distributivity of intersection over unions.
It follows that $T$ is a subset of a union of finitely many finite sets, so itself finite.
|
H: Find the least next N-digit number with the same sum of digits.
Given an $N$-digit number $A$, I want to find the next least $N$-digit number $B$ having the same sum of digits as $A$, if such a number exists. The original number $A$ can start with a $0$. For example: $A=111$ gives $B=120$; $A=09999$ gives $B=18999$; for $A=999$, no such $B$ exists.
AI: You get the required number by adding $9$, $90$, $900$ etc to $A$, depending on the digits of $A$.
First Case: If $A$ does not end in a row of $9$s, find the first non-zero digit (starting at the units end). Write a $9$ under that digit and $0$s under all digits to the right of it. Add the two and you get $B$.
Example: $A=3450$. The first non-zero digit is the 5 so we write a $9$ under that and a $0$ to its right and add:
\begin{align}
3450\\
90\\
\hline
3540
\end{align}
There is a problem if the digit to the left of the chosen non-zero digit is a $9$. In this case we write a $9$ under that $9$ and $0$s to its right. And if there are several $9$s, we put our $9$ under the highest one.
Example: $A=3950$. The first non-zero digit is the 5 but there is a $9$ to its left. We write a $9$ under that $9$ instead and $0$s to its right and add:
\begin{align}
3950\\
900\\
\hline
4850
\end{align}
Second Case: If $A$ does end in a row of $9$s, write a $9$ under the highest of the row of $9$s and $0$s under all digits to the right of it. Add the two and you get $B$. As you say, if $A$ is entirely $9$s there is no solution.
Example: $A=3999$. The highest $9$ is in the hundreds column, so we write a $9$ under that and $0$s to its right and add:
\begin{align}
3999\\
900\\
\hline
4899
\end{align}
Edit This method does not always give the correct result. I will try and fix it in due course.
Edit 2 I have corrected my method and posted it in a new answer, so that this question's comment remain relevant.
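Since the stepwise rule above still needs fixing, here is a hedged brute-force reference implementation (my own sketch) that any candidate rule can be tested against; it simply scans upward through the $N$-digit strings:

```python
def next_same_digit_sum(a):
    """a is an N-digit string, possibly with leading zeros."""
    n = len(a)
    target = sum(int(c) for c in a)
    for m in range(int(a) + 1, 10**n):   # stay within N digits
        s = str(m).zfill(n)
        if sum(int(c) for c in s) == target:
            return s
    return None                          # e.g. for '999'

print(next_same_digit_sum('111'))    # 120
print(next_same_digit_sum('09999'))  # 18999
print(next_same_digit_sum('999'))    # None
```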
|
H: Is it possible to calculate sine by hand?
Without a calculator, how can I calculate the sine of an angle, for example 32(without drawing a triangle)?
AI: You can use the first-order approximation $\sin(x+h)\approx\sin(x)+\sin'(x)h=\sin(x)+\cos(x)h$,
where $x$ is the point nearest to $x+h$ at which you already know the value of the $\sin$ function and of its derivative, the $\cos$ function.
For example, $\sin(32^\circ)\approx\sin(30^\circ)+\cos(30^\circ)\cdot\frac{\pi}{90}$.
Here you need to take $h$ in radians, which is $\frac{\pi}{90}$ for $32^\circ-30^\circ=2^\circ$.
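A quick numeric check of this (my own; values rounded):

```python
import math

x = math.radians(30)   # a point where sin and cos are known exactly
h = math.radians(2)    # 2 degrees = pi/90 radians

approx = math.sin(x) + math.cos(x) * h
print(approx)                       # 0.53023...
print(math.sin(math.radians(32)))   # 0.52992..., so the error is ~3e-4
```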
|
H: Sum of N numbers whose sum is M
In how many ways can we sum $N$ nonnegative integers (that is, taking values $0, 1, 2, \dots$) such that their sum is $M$? I found this problem doing convolutions of series, and combinatorics has never been my strong point.
Thank you!
Precision: The precise question is, I have a sum:
$$
\sum_{i_1,...,i_N=0}^\infty f\left(\sum_k i_k\right) = \sum_M C^N_M f(M)
$$
and I want to know what the coefficient is.
AI: If you distinguish summands by order, this is ${M+N-1\choose N-1}$ for nonnegative summands (and ${M-1\choose N-1}$ if the summands are required to be positive), by a classical stars-and-bars argument.
|
H: Let $p_n$ be the sequence defined by $p_n=\sum_{k=1}^n\frac{1}{k}$. Show that $p_n$ diverges even though $\lim_{n\to\infty}(p_n-p_{n-1})=0$
I have tried this as :
$$p_n=\sum_{k=1}^n\frac{1}{k}=1+\frac{1}{2}+\frac{1}{3}+\ldots+\frac{1}{n-1}+\frac{1}{n}$$
$$p_{n-1}=\sum_{k=1}^{n-1}\frac{1}{k}=1+\frac{1}{2}+\frac{1}{3}+\ldots+\frac{1}{n-1}$$
$$(p_n-p_{n-1})=\frac{1}{n}$$
$$\lim_{n\to\infty}(p_n-p_{n-1})=\lim_{n\to\infty}\frac{1}{n}=0$$
But dont know how to show $p_n$ diverges?
AI: Using some idea from the comments, with $\,a_n:=\frac1n\;$:
Proof 1: Cauchy's Condensation test:
$$2^na_{2^n}=\frac{2^n}{2^n}=1\;,\;\text{and since }\;\;\sum_{n=1}^\infty 1\;\;\text{diverges so does our series}$$
Proof 2: Integral test:
$$\int\limits_1^\infty\frac{dx}x=\lim_{b\to\infty}\Big[\log x\Big]_1^b=\lim_{b\to\infty}\log b=\infty\implies \;\text{ also our series diverges}$$
Proof 3: Subsequence of sequence of partial sums:
$$p_{2^n}=1+\frac12+\ldots+\frac1{2^n}=$$
$$=\left(1+\frac12\right)+\left(\frac13+\frac14\right)+\left(\frac15+\ldots+\frac18\right)+\ldots+\left(\frac1{2^{n-1}+1}+\frac1{2^{n-1}+2}+\ldots+\frac1{2^n}\right)\ge$$
$$\ge\underbrace{ \frac12+\frac12+\ldots+\frac12}_{n\text{ times}}=\frac n2\xrightarrow[n\to\infty]{}\infty$$
and thus our series diverges.
|
H: Gauß-Jordan algorithm - 'reading' the solution
Disclaimer: I'm not really sure how to do a proper coefficient matrix in LaTeX; if someone could edit it to look proper, I'd be really thankful ;)
Given the following system of linear equations, determine the solution
set using the Gauß-Jordan-Algorithm
$$ (I):3x_1 +2x_2-x_3-2x_4=0 $$ $$(II):2x_1+3x_2-4x_3+2x_4=0$$
$$(III):x_1+3x_2-5x_3+4x_4=0$$ $$(IV):x_1+4x_2-7x_3+6x_4=0$$
So to solve this I used the Gauß-Jordan-Algorithm as asked by the task and ended up with these last two steps:
$$
\begin{pmatrix}
1 & 4 & -7 & 6 &|&0 \\
0 & 1 & -2 &2 &|&0 \\
0 & 1 & -2 & 2 &|&0 \\
0 & 1 & -2 & 2 &|&0 \\ \end{pmatrix}
\text{
(III & IV$-$II)}=
\begin{pmatrix}
1 & 4 & -7 & 6 &|&0 \\
0 & 1 & -2 &2 &|&0 \\
0 & 0 & 0 & 0 &|&0 \\
0 & 0 & 0 & 0 &|&0 \\ \end{pmatrix}
$$
Would someone of you mind explaining how exactly to continue from this point? How exactly do I 'read/see' the solution in this last matrix?
P.S.: I'm from Germany and therefore I'm only familiar with the german terminology, please bear with me if something was lost in translation
AI: You have reduced it to row echelon form. You can go further to reduced row echelon form as follows:
$$
\begin{pmatrix}
1 & 0 & 1 & -2 &|&0 \\
0 & 1 & -2 &2 &|&0 \\
0 & 0 & 0 & 0 &|&0 \\
0 & 0 & 0 & 0 &|&0 \\ \end{pmatrix}
$$
You can read this matrix as:
$$x_1+x_3-2x_4=0\\x_2-2x_3+2x_4=0\\x_3~\mathrm{and}~x_4~\text{are free}$$
This system has an infinite number of solutions. You can parameterize:
$$x_4=t, ~x_3=s, ~x_2=2s-2t,~x_1=2t-s$$
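If you want to double-check a reduction like this, here is a small SymPy sketch (my own, not part of the exercise):

```python
from sympy import Matrix

A = Matrix([[3, 2, -1, -2],
            [2, 3, -4,  2],
            [1, 3, -5,  4],
            [1, 4, -7,  6]])

R, pivots = A.rref()
print(R)        # rows (1, 0, 1, -2) and (0, 1, -2, 2), then zero rows
print(pivots)   # (0, 1): the third and fourth variables are free
```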
|
H: A trouble about the topology of pointwise convergence $({\mathbb{R}}^M,\tau)$
Let $(M,d)$ be a separable metric space and $F=\{f_{\lambda}:M\longrightarrow\mathbb{R}:\lambda\in L \}\subset \mathbb{R}^M $ be an equicontinuous family of uniformly bounded functions on $M$.
How can we prove that:
$\boxed{1}\;\; \overline{F}^{\tau}\subset ({\mathbb{R}}^M,\tau)$ is an equicontinuous and uniformly bounded family
$\boxed{2}\;$ $(\overline{F}^{\tau},\tau) \subset ({\mathbb{R}}^M,\tau)$ is metrizable
$L:$ set of indices
$\tau :$ topology of pointwise convergence $({\mathbb{R}}^M,\tau)$
Is the proof of this fact analogous to that of Proposition $3.22$?
Any hints would be appreciated.
AI: You asked for hints, so I'm being "hintish..."
The definition of uniformly bounded is that $|f_\lambda(x)| \le N$ for all $x\in M$ and all $\lambda$, for some real number $N$, correct? If so, all pointwise limits of sequences in $F$ are bounded by $N$, since every element in the sequence is bounded by $N$.
A similar argument holds for equicontinuous.
I don't immediately see how to metrize it, although my first gut instinct is to hack out a supremum norm. That would definitely work for $M$ compact.... Ah, I bet you can metrize it by something like $d(f_1,f_2) = \sum_{i=1}^\infty |f_1(s_i) - f_2(s_i)|/2^i$, where the $s_i$ form a countable dense subset of $M$.
I'm not using the fact that $M$ is a metric space.... so I am probably missing something...
|
H: Index of a subgroup of $\mathbb{Z}\times\mathbb{Z}$
Let $p\in\mathbb{Z}$ be a prime and $u\in\mathbb{Z}$ be such that $u^2\equiv -1\pmod{p}$. Now define an additive subgroup $S$ of $\mathbb{Z}\times\mathbb{Z}$ by following, $$S:=\{ (a,b)\in\mathbb{Z}\times\mathbb{Z}: b\equiv ua\pmod{p}\}$$ Then what is the index of $S$ in $\mathbb{Z}\times\mathbb{Z}$ ?
AI: Consider the group homomorphism
$$\begin{align}\mathbb Z\times \mathbb Z&\to\mathbb Z/p\mathbb Z,\\(x,y)&\mapsto y-ux+p\mathbb Z.\end{align}$$
What is its kernel and image? (Side question: Do we care about special properties of $u$ or $p$ at all?)
|
H: Let $f(x)=x^3+2x^2+1,\,\,g(x)=2x^2+x+2.$ Then over $\,\left(\Bbb Z/3\Bbb Z\right)[x]\;$......
I am stuck on the following problem:
Let $f(x)=x^3+2x^2+1,\,\,g(x)=2x^2+x+2.$ Then over $\,\left(\Bbb Z/3\Bbb Z\right)[x]\;$, show that $f(x)$ is irreducible ,but $g(x)$ is not.
Can someone explain how to tackle it? Thanks in advance for your time.
AI: Hints:
== Over any field, a polynomial of degree $2$ or $3$ is irreducible iff it has no root in that field
== $\,1^2-4\cdot2\cdot2=-15\equiv 0^2\pmod 3\;$, so the discriminant vanishes and $\,g\,$ has a double root; in fact $\,g=2(x+1)^2\,$ in $\,\left(\Bbb Z/3\Bbb Z\right)[x]\;$
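For what it's worth, a SymPy check of both claims (my own verification; SymPy prints coefficients mod $3$ in a symmetric range, so the exact output format may differ):

```python
from sympy import Poly, symbols

x = symbols('x')
f = Poly(x**3 + 2*x**2 + 1, x, modulus=3)
g = Poly(2*x**2 + x + 2, x, modulus=3)

print(f.factor_list())  # expect a single irreducible cubic factor
print(g.factor_list())  # expect a repeated linear factor, i.e. 2*(x + 1)**2
```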
|
H: Random Variables from $[0,1]$ - Integration Limits
I was wondering if someone could help me understand the first steps I should take for solving the next problem:
Let $U$, $V$ be random numbers chosen independently from the interval $[0, 1]$ with uniform distribution. Find the cumulative distribution and density for $Y=U+V$.
Y has to be in the interval $[0,2]$
Doing this in terms of $U=Y-V$ For the CDF I we have:
$$P(Y\le y)=P(U+V\le y)=P(U\le y-V)$$
But from there I got stuck (or am I doing something wrong?) Some tip might be useful. Thanks!
AI: I will change notation, for reasons that may become clear. Let our two uniform random variables be called $X$ and $Y$. Call their sum $W$.
We need to assume that $X$ and $Y$ are independent. Then the joint density function of $X$ and $Y$ is $1$ in the square with corners $(0,0)$, $(1,0)$, $(1,1)$, and $(0,1)$. The joint density is $0$ elsewhere.
Draw the square.
We want to find $F_W(w)=\Pr(W\le w)$. For $w\le 0$ we have $F_W(w)=0$. For $w\ge 2$ we have $F_W(w)=1$. Now look at the case $0\lt w\le 2$.
For such a $w$, draw the line $x+y=w$. There are geometrically two distinct cases, (i) $0\lt w\le 1$ and (ii) $1\le w\lt 2$. In each case we want to determine the probability that we end up in the part of the square "below" $x+y=w$.
(i) If $0\lt w\le 1$, the target region is an isosceles right triangle with legs $w$. This region has area $\frac{w^2}{2}$. So for $0\lt w\le 1$ we have $F_W(w)=\frac{w^2}{2}$.
(ii) If $1\lt w\lt 2$, our target region is the full square with an isosceles right triangle removed. The legs of that triangle have length $2-w$, so it has area $\frac{(2-w)^2}{2}$. Thus the rest of the square has area $1-\frac{(2-w)^2}{2}$.
For the density function, differentiate.
Remark: Since our density function is constant, "integration" was just a matter of finding an area. But if we wish to integrate, it can be done. However, the picture is still very useful as a guide.
For $0\lt w\le 1$, we are integrating over a triangle. Integrate first from $y=0$ to $y=w-x$, and then from $x=0$ to $x=w$.
For $1\lt w\lt 2$, we need to break up the integral into two parts. For $x=0$ to $w-1$, we integrate with respect to $y$ from $0$ to $1$, then with respect to $x$ from $x=0$ to $x=w-1$. Then for the rest, we integrate with respect to $y$ from $y=0$ to $y=w-x$, and then with respect to $x$ from $x=w-1$ to $1$. Finally, we add the two results.
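A small Monte Carlo check of the resulting CDF (my own sketch, not part of the argument above):

```python
import numpy as np

rng = np.random.default_rng(1)
w_samples = rng.uniform(size=1_000_000) + rng.uniform(size=1_000_000)

def F(w):   # the CDF derived above
    return w**2 / 2 if w <= 1 else 1 - (2 - w)**2 / 2

for w in (0.5, 1.0, 1.5):
    print(w, np.mean(w_samples <= w), F(w))   # empirical vs. exact
```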
|
H: Why are projective schemes $\mathbb P_A^n$ over a ring not affine for $n\geq 1$?
I recently posted a very similar question, but I hid the question I really wanted answered in it. I'm posting this to make that question explicit.
Let $A$ be a nonzero commutative ring with unit. Define $\mathbb P_A^n$ to be the scheme $\operatorname {Proj} A[T_0,\dots,T_n]$, where the grading on the polynomial ring is by degree.
I would like to show that the only $n$ for which $\mathbb P_A^n$ is affine is $n=0$. This is exercise $3.11$ in chapter $2$ of Liu's book. I can show that for any $\mathbb P_A^n$, the ring of global sections is $A$. (This is the hint given in the book.) So it is easy to see that if $\mathbb P_A^n$ is affine, the only possibility is that it is isomorphic to $\operatorname{Spec} A$. One can directly verify that they are isomorphic for $n=0$. I would like to show that for larger $n$, $\mathbb P_A^n$ is not affine.
One way to do this is to proceed as in the question I linked and show that the $\mathbb P_A^n$ of different dimensions are non-isomorphic. Unfortunately, this seems to require cohomology theory, and Liu poses this problem before discussing cohomology.
Another way is to note that if $\mathbb P_A^n$ is affine, it is isomorphic to $\operatorname{Spec} A$ as a scheme over $A$. One can then use a base change argument, as sketched in a comment to the answer on the question I linked. I like this solution, but again, Liu has presented this problem before discussing base change.
Is there another, more elementary way to show $\mathbb P_A^n$ is not affine for $n\geq 1$, using the fact about global sections I noted above? I normally hate questions of the form "Can you prove $X$ with your hands tied behind your back?", so it pains me a little to be asking one, but I really have no idea what Liu intended. If it is not possible to do this in a more elementary way, I would appreciate it if someone experienced could confirm that. Thank you.
AI: Take one of the affine pieces, say $D_+(T_0) \cong \mathbb{A}_A^n$. The inclusion $D_+(T_0) \subset \mathbb{P}_A^n$ would then be the morphism of affine schemes corresponding to the ring homomorphism $A \to A[T_1/T_0, \dots, T_n/T_0]$. If $A$ is non-zero then there should be some issue here.
|
H: Confusing Trigonometry Problem
Lets say at an intersection the words "STOP HERE" are painted on the road in red letters 2.5m high. It is important that drivers using this lane can read the letters. How can I find the angle subtended by the letters to the eyes of a driver 20m from the base of the letters and 1.25m above the road?
Is it right to use tan, so $\tan\theta=1.25/20$? Or am I missing something?
AI: I find it most helpful to start any problem like this by drawing a diagram. The most useful perspective to consider is that of an observer on the sidewalk. Take a 2-dimensional cross-section of what she sees. Such a drawing will include a triangle. The bottom edge of this triangle represents the letters, so that the two vertices of this edge are the top and the bottom of the letters. The last vertex represents the driver's eyes. You can then add the other information you have. For example, you know how far away the driver is from the lettering as measured down the road.
Once you have this picture, you can use trigonometric / geometric identities to find relevant lengths and angles. An important question to ask yourself is whether you have enough information given to you to find what you need. Are you comfortable using trigonometric identities and laws?
If this description of the picture is not clear, please let me know and I will attach a sample diagram.
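For instance, assuming the letters lie flat on the road with their near edge $20$ m and their far edge $20+2.5=22.5$ m from the point directly below the driver's eyes (this is just one sample reading of the setup), the subtended angle is the difference of two angles of depression:
$$\theta=\arctan\frac{1.25}{20}-\arctan\frac{1.25}{22.5}\approx 3.58^\circ-3.18^\circ\approx 0.40^\circ.$$
Note that $\tan\theta=1.25/20$ alone gives only one of these angles, not the angle the letters themselves subtend.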
|
H: for $ F(x,y) = 10$, what is $ y'$?
For an input $x$ and output $y$ of a system it is know that $x,y$ always satisfy
$$ F(x,y) = 10 $$
At a certain point, $x=1$ and $y=1$. The question is how $y$ responds to a small decrease in $x$, e.g. to $0.999$ (that is, what is $y'(x)$ of the function $y(x)$ near $x=1$?).
Now this somehow asks for an application of the Implicit Function Theorem, of a Taylor polynomial, or both of them.
Intuitively, as $F(x,y) = 10$, can I not simply conclude that each decrease in $x$ has to be compensated by a change $\Delta y =-\Delta x$, in order to keep $F(x,y)$ constant?
However, on economic grounds, a decrease in inputs resulting in an increase in output sounds kind of awkward.
So I summed up what I know again:
$$F(1,1) = 10 $$
$$ F(0.999, 1 + y'(1)) = 10 $$
$$ y'(1) = - \frac{\frac{\partial F}{\partial x}(1,1)}{\frac{\partial F}{\partial y}(1,1)}
$$
But there I'm stuck - how to solve when both $F$ and $y$ are unknown?
Thanks for all hints!
AI: You can use total differential. If $F(x,y)$ is constant it means that
$$dF=\frac{\partial F}{\partial x}dx+\frac{\partial F}{\partial y}dy=0$$
$$\Rightarrow \frac{dy}{dx}=y'=-\frac{\frac{\partial F}{\partial x}}{\frac{\partial F}{\partial y}}$$
In economics it makes sense since it is an implicit function. Assume that the production function is given such as
$$y=x^2+10 \Rightarrow F(x,y)=y-x^2=10$$
and
$$y'=-\frac{\frac{\partial F}{\partial x}}{\frac{\partial F}{\partial y}}=-\frac{-2x}{1}=2x$$
which shows output will increase as a result of an input increase.
|
H: Prove: If in all subgraphs of $G$ there is a vertex of degree $<2$ then $G$ is a forest
I need help proving this:
Given a graph $G$, prove that if in all subgraphs of $G$ there is a vertex of degree less than $2$ ($1$ or $0$) then $G$ is a forest.
AI: Have you tried to prove the contrapositive? Like Damian said, assume that G is not a forest. Then it has a cycle. This cycle is a subgraph. What is the minimum degree of any vertex in a cycle?
To do this proof you just need to make sure you know the definitions of forest, cycle, and vertex, and that you know how to use contraposition. Wikipedia has a definition and some examples if you need help with how to use this.
|
H: Easy way to compute the area between $f(x)=x$ and $g(x)=x^2\ln(x)$
Is there an easy to compute the area between $f(x)=x$ and $g(x)=x^2\ln(x)$ without refering to the Lambert W-function?
AI: By letting $x_0$ be the positive point that satisfies $x_0 \log x_0 = 1$, we get that the relevant integral equals a polynomial in $x_0$ (of degree $3$). This is probably the simplest expression you can get for this value. Since $x_0 = W(1)^{-1}$, I would guess the answer to your question is "no".
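For what it's worth, a quick numeric check (my own computation; the cubic below is what I get for the area after substituting $\log x_0 = 1/x_0$, so treat it as an illustration rather than part of the original answer):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

x0 = brentq(lambda x: x * np.log(x) - 1, 1, 2)             # x0 ~ 1.763
area_num, _ = quad(lambda x: x - x**2 * np.log(x), 0, x0)  # direct integration
area_poly = x0**2 / 6 + x0**3 / 9                          # my closed form

print(x0, area_num, area_poly)   # the last two agree
```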
|
H: $E$ is closed $\iff\partial E$ (boundary of set $E$) $\subseteq E$
I am studying topology of euclidean space from William Wade's text book.
I saw this question. But I cannot come up with any ideas.
Please show me the solution in an instructive and clear way.
Thank you for your help.
$E$ is closed $\iff\partial E$ (boundary of set $E$) $\subseteq E$
AI: Prove (if you haven't already) that a set is closed $\iff E=\overline E$.
Since $\partial E=\overline{E}\cap\overline{E^c}\subseteq \overline E=E$, the result follows. On the other hand, note (prove it) that $\overline E=E\cup \partial E$, so if $\partial E\subseteq E$, $E=\overline E$, so $E$ is closed.
ADD Given a set $E$ on a space $(X,\mathscr T)$, one can define the closure of a set to be the intersection of all closed sets that contain $E$, that is $$\overline E=\bigcap\{F\subseteq X:E\subset F\text{ and } F \text{ is closed}\}$$
That is why we usually say $\overline E$ is the smallest set (w.r.t. inclusion) that contains $E$. Because $\overline E$ is the intersection of closed sets, it is closed. Thus, if $E=\overline E$, $E$ is seen to be closed. On the other hand, if $E$ is closed, $E$ itself is a closed set containing $E$, so $\overline E\subseteq E$. Since by definition, we always have $E\subseteq\overline E$, it follows $E=\overline E$.
|
H: What's the meaning of computing an integral at a given point?
Let $f$ be a function. If one finds $\displaystyle \frac{\mathrm d}{\mathrm dx}f$ and computes it at $x=a$, then one gets the rate of change of $f$ at $a$. That can be useful in some situations. But if one finds $\int f \space \mathrm dx$ and computes it at $x=a$, what information can we obtain about the function $f$?
AI: You use the indefinite-integral tag, so I'll assume you are talking about indefinite integrals. In that case, recall that when integrating an integrable function $f(x)$, $\displaystyle \int f \,dx$, we obtain a family of functions: $F(x) + C$, where $C$ can be any constant.
So we can not evaluate $F(x) + C$ at $a$ and obtain a distinct value for $a$, without knowing $C$.
|
H: Alternative unconditional form of $\sqrt{n -\sqrt{n -\sqrt{n -\cdots}}}$?
Consider $a_n$, where
$$\begin{align} a_n &=\small{\sqrt{n -\!\!\!\sqrt{n -\!\!\!\sqrt{n -\!\!\sqrt{n -\!\!\sqrt{n -\!\!\sqrt{n -\!\sqrt{n - \cdots}}}}}}}}\end{align}$$
Using a recursive solution, such that:
$$a_n = f(n) = \sqrt{n - f(n)}$$
is too slow, while an iterated form doesn't fit my usage.
Is there an unconditional form of $a_n$ which doesn't rely as heavily on self-reference or recursion? Maybe an approximation?
AI: $a_n =\small{\sqrt{n -\!\!\!\sqrt{n -\!\!\!\sqrt{n -\!\!\sqrt{n -\!\!\sqrt{n -\!\!\sqrt{n -\!\sqrt{n - \cdots}}}}}}}}$
$a_n^2 =\small{n -\!\!\!\sqrt{n -\!\!\!\sqrt{n -\!\!\sqrt{n -\!\!\sqrt{n -\!\!\sqrt{n -\!\sqrt{n - \cdots}}}}}}}$
$a_n^2 -n =\small{ -\!\!\!\sqrt{n -\!\!\!\sqrt{n -\!\!\sqrt{n -\!\!\sqrt{n -\!\!\sqrt{n -\!\sqrt{n - \cdots}}}}}}}$
$a_n^2 -n =-a_n$
$a_n^2+a_n -n =0$
Using the quadratic formula and taking the positive root:
$a_n=\dfrac{-1+\sqrt{1+4n}}{2}$
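A quick numeric check of this closed form against the recursion (my own sketch):

```python
import math

def a_closed(n):
    return (-1 + math.sqrt(1 + 4 * n)) / 2

def a_iter(n, steps=60):
    a = 0.0
    for _ in range(steps):     # iterate a <- sqrt(n - a)
        a = math.sqrt(n - a)
    return a

for n in (2, 5, 10):
    print(n, a_closed(n), a_iter(n))   # the two columns agree
```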
|
H: Learning to read complex math formulas
could anybody point me to a book or article where I could learn how to read formulas like this one:
I have no idea what that means.
AI: The following points may be helpful:
$i$ is used to index the various numerical values $x_i$ you have. Usually, unless specified otherwise, it is understood that $i$ ranges from $1$ to some finite value $n$.
Thus, in your example, you have $n$ observations, each of which is denoted by $x_i$.
$\bar{x}$ denotes the mean of the $n$ observations i.e.,
$$\bar{x} = \frac{x_1+x_2+\ldots+x_n}{n}$$
The same interpretation holds for $y_i$ and $\bar{y}$.
$\Sigma$ stands for sum and hence we could have re-written the mean as follows:
$$\bar{x} = \frac{x_1+x_2+\ldots+x_n}{n}=\frac{\Sigma_i{x_i}}{n}$$
I hope that helps decipher what is going on in the equation.
|
H: Compute this limit $\lim_{x\to0}\frac{\sin(x^2+\frac{1}{x})-\sin\frac{1}{x}}{x}$ using L'Hôpital's rule
I have asked about this problem before, but I can't understand the explanation: I don't see how the difference of sines becomes a product of a sine and a cosine, with arguments $\frac{A-B}{2}$ and $\frac{A+B}{2}$: $$\sin(A)-\sin(B)=2\sin\left(\frac{A-B}{2}\right)\cos\left(\frac{A+B}{2}\right)$$ and I don't understand in the next step how/why $\frac{A-B}{2}$ and $\frac{A+B}{2}$ were replaced by $\frac{x^2}{2}$ and $\frac{x^2}{2}+\frac{1}{x}$:
$$\lim_{x\to0}\frac{\sin\left(x^2+\frac1x\right)-\sin\left(\frac1x\right)}{x}= \lim_{x\to0}\frac{2\sin\left(\frac{x^2}{2}\right)\cos\left(\frac{x^2}{2}+\frac1x\right)}{x}.$$
AI: Hint: it's easy to prove $$\sin(x+y)+\sin(x-y)=2\sin(x)\cos(y),$$
then put $y=\frac{A+B}{2}$, $x=\frac{A-B}{2}$; with $A=x^2+\frac1x$ and $B=\frac1x$ this gives exactly $\frac{A-B}2=\frac{x^2}2$ and $\frac{A+B}2=\frac{x^2}2+\frac1x$.
Moreover $$\sin(x)\sim x,\qquad \cos(x)\sim 1-\frac{x^2}{2},$$ because $$\lim_{x\to0}\frac{\sin x}{x}=\lim_{x\to0}\frac{\cos x}{1-\frac{x^2}{2}}=1.$$ In particular $2\sin\left(\frac{x^2}{2}\right)\sim x^2$, so the limit equals $\lim_{x\to0}x\cos\left(\frac{x^2}{2}+\frac1x\right)=0$, the cosine factor being bounded.
|
H: Is $\ln(x)$ uniformly continuous?
Let $x\in[1,\infty)$. Is $\ln x$ uniformly continuous? I took this function to be continuous and wrote the following proof which I'm not entirely sure of.
Let $\varepsilon>0 $, $x,y\in[1, ∞)$ and $x>y$.
Then, $\ln x< x$ and $\ln y< y$ and this follows that $0<|\ln x-\ln y|<|x-y|$ since $x> y$.
Choose $δ=ϵ$. Now suppose $|x-y|< δ$. Then, $|\ln x-\ln y|<|x-y|<\varepsilon$
It would be much appreciated if someone could validate my proof
AI: You can prove something more general:
PROP Suppose $f:[a,\infty)\to\Bbb R$ has bounded derivative. Then $f$ is uniformly continuous on its domain.
Proof. Pick $x,y\in[a,\infty)$ arbitrarily. By the mean value theorem, we can write $$|f(x)-f(y)|=|f'(\xi)||x-y|$$
Let $M=\sup\limits_{x\in[a,\infty)}|f'(x)|$. Then $$|f(x)-f(y)|\leq M|x-y|$$
Thus, for any $\epsilon$ we may take $\delta=\frac{\epsilon}{2M}$. Note that in your case $M=1$. I only divide by $2$ to turn $\leq$ into $<$.
ADD This means, for example, that $\log x$ (over $[a,\infty)$, $a>0$), $\sin x$, $\cos x$, $x$, and similar functions are all uniformly continuous. Note, for example, that $\sin(x^2)$ is not uniformly continuous. Note that we actually prove $f$ is Lipschitz with constant $M$, so this might be of interest.
|
H: Can I always extend a selfadjoint Operator in $L^2$?
Assume that we have a self-adjoint operator $T\colon D \to D$ where $D \subset L^2$ is some finite-dimensional subspace. Can I conclude that a self-adjoint operator $S \colon L^2 \to L^2$ exists with $S=T$ on $D$?
Hope someone can help me.
Best regards,
Adam
AI: Since $D$ is finite dimensional, it is closed. Let $\Pi$ be the orthogonal projection onto $D$, then let $S = \Pi T \Pi$. (Note that the first $\Pi$ is superfluous, but makes it obvious that $S$ is self-adjoint.)
|
H: hard time with series convergence or divergence
I'm having real hard time with this series
I can't prove that the series converges and also I can't prove that the series diverges:
$$\sum_{n=1}^\infty\frac{\sin^2(n)}{n}.$$
any help would be appreciated.
AI: An idea:
$$\sin^2n=\frac12(1-\cos 2n)$$
Now, using Dirichlet's test we get that
$$\sum_{n=1}^\infty\frac{\cos2n}n\;\;\text{converges}$$
and since $\sum_{n=1}^\infty\frac1{2n}$ diverges (half the harmonic series) while the cosine series converges, the difference, which is our series, diverges as well.
|
H: If $\gamma$ is an uncountable ordinal, then $\gamma$ contains uncountably many successor ordinals.
Suppose $\gamma$ is an uncountable ordinal, i.e. $\gamma$ has uncountably many elements.
We write
$$
\gamma = \{0\} \cup\{\alpha \in \gamma: \alpha \text{ is a successor ordinal}\} \cup\{\alpha \in \gamma: \alpha \text{ is a limit ordinal}\} =: Z \cup S \cup L.
$$
Since $\gamma$ is uncountable, we must have at least one of $Z,S,L$ uncountable. I would like to show that no matter what, $S$ is uncountable.
Clearly $Z$ cannot be uncountable, and if $S$ is uncountable we are done, so we consider the case where $L$ is uncountable. That is, there are uncountably many limit ordinals in $\gamma$. From this can we show that $S$ must also be uncountable?
AI: Consider the ordinals $\alpha+1$ where $\alpha$ is a limit ordinal in $\gamma$. How many of those are in $\gamma$?
|
H: For a group $G$: show that $p$ is a divisor of $\# \mathcal{Z}$
Given a prime number $p$, an integer $n>0$, and a group $G$, where $\#G=p^n$.
Let $\mathcal{Z}(G)$ be the center of the group: $\mathcal{Z}(G)= \{a\in G; xa=ax, \text{ }\forall x\in G \}$
Now I have to show that $p$ is a divisor of $\# \mathcal{Z}(G)$.
Hint: Write $G$ as a union of conjugacy classes $C_g$ and check that $C_{e}$ is not the only conjugacy class in $G$ that contains just one element.
Can somebody explain this exercise? How does the hint relate to the problem exactly? Should I use the Sylow theorems?
AI: Assuming you already know actions and stuff, define an action of $\;G\;$ on itself as follows
$$\forall\;g,x\in G\;,\;\;g\cdot x:=x^g:=x^{-1}gx$$
Check the above indeed is an action of $\,G\,$ on itself.
Since $\;\mathcal Orb(x)=[G:G_x]\;,\;\;G_x=Stab(x)=\{g\in G\;;\;x^g=x\}\;$ , then all the orbits have order a power of $\,p\,$.
Check that $\,x\in Z(G)\iff \mathcal Orb(x)=\{x\}\;$
We thus can write
$$|G|=\sum_{x\in G-Z(G)}'|\mathcal Orb(x)|+|Z(G)|$$
where $\;\sum'\,$ means sum over different, and thus disjoint, orbits.
Deduce that $\,p\mid|Z(G)|\;$ ...
|
H: Plase explain me the statment related to interior point in $\Bbb R^n$
I wrote down this definition in class. I understand the definition, but my teacher also said "needs proving!", and I underlined it with blue pencil. I could not understand what my teacher wants from us, or why he wrote such a proof requirement. Please explain it to me: how do I prove this, and is it important?
AI: To prove that $E^o$ is open, note that if $x \in E^o$, there exists $\epsilon > 0$ such that $B_\epsilon(x) \subseteq E$. For any $y \in B_\epsilon(x)$, let $$\delta_y = \frac{\epsilon - d(x,y)}{2}$$
Then $B_{\delta_y}(y) \subseteq B_{\epsilon}(x)\subseteq E$, so $y\in E^o$; hence $B_{\epsilon}(x) \subseteq E^o$, which shows $E^o$ is open.
|
H: Why does $(-2^2)^3$ equal $-64$ and not $64$?
The title says it all. Why does $(-2^2)^3$ equal $-64$ and not $64$? This was on my algebra final, and I am completely stuck on how it works.
AI: The negative sign ($-$) applies to the quantity $2^2$, so that $-2^2$ means $-(2^2)=-4$, not $(-2)^2=4$. $$(-2^2)^3=(-4)^3=-64$$
|
H: Number of indecomposable $\mathbb{Z}_p[G]$-modules finite
Is there a theorem, like those of Jones, which tells when the number of indecomposable $\mathbb{Z}_p[G]$-modules is finite, where $G$ is a finite group and $\mathbb{Z}_p$ is the ring of $p$-adic integers?
AI: Yes, Heller and Reiner proved this in the early 60s.
Proposition: $\hat{\mathbb{Z}}_p[G]$ has finitely many indecomposable representations if and only if $G$ has a cyclic Sylow $p$-subgroup of order at most $p^2$.
Higman (1954) proves the corresponding classification for fields of characteristic $p$ (finitely many indecomposable modules iff Sylow $p$-subgroup is cyclic). In particular, his technique of inducing from a Sylow to the whole group (which we learn about in terms of Green's theory of “sources”) applies to $p$-adic representations as well: If $M$ is an indecomposable $\hat{\mathbb{Z}}_p[G]$ module, then there is some indecomposable $\hat{\mathbb{Z}}_p[P]$ summand $V$ of $M_P$ ($P$ a Sylow $p$-subgroup of $G$) so that $M$ is a direct summand of the induced module $V^G$. In particular, if the number of indecomposable $\hat{\mathbb{Z}}_p[P]$ modules is finite, so is the number of indecomposable $\hat{\mathbb{Z}}_p[G]$ modules. Similarly the converse holds, so it suffices to consider $p$-groups only. I believe Higman already settles the case of non-cyclic $p$-groups, but Reiner credits the following:
Borevič–Faddeev (1959) proved (though I have not verified) that if $G$ has a non-cyclic Sylow $p$-subgroup, then $\hat{\mathbb{Z}}_p[G]$ has infinitely many indecomposable representations. This is not surprising as $\mathbb{Z}/p\mathbb{Z}[G]$ has infinitely many indecomposable modules, and they all lift to $\hat{\mathbb{Z}}_p[G]$.
Heller–Reiner (1962), theorems 2.6 and 3.1, show that if $G$ is cyclic of order $p$ or $p^2$, then $\hat{\mathbb{Z}}_p[G]$ has exactly $3$ or $4p+1$ (respectively) indecomposable representations. The case of order $p$ is due to Diederichsen (1940).
Heller-Reiner (1963), theorem page 327, show that if $G$ is cyclic of order $p^3$ then $\hat{\mathbb{Z}}_p[G]$ has infinitely many indecomposable representations. Since every indecomposable $\hat{\mathbb{Z}}_p[G/N]$ is an indecomposable $\hat{\mathbb{Z}}_p[G]$ representation, it follows that if $G$ is cyclic of order $p^k$ for any $k \geq 3$, then $\hat{\mathbb{Z}}_p[G]$ has infinitely many indecomposable representations.
This completes the proof for integral $p$-adic representations. The proof for (rational) integral representations was completed by Jones (1963), results from his dissertation.
Diederichsen, Fritz-Erdmann
“Über die Ausreduktion ganzzahliger Gruppendarstellungen bei arithmetischer Äquivalenz.”
Abh. Math. Sem. Hansischen Univ. 13, (1940). 357–412.
MR2133
Higman, D. G.
“Indecomposable representations at characteristic $p$.”
Duke Math. J. 21, (1954). 377–381
MR67896
DOI:10.1215/S0012-7094-54-02138-9
Borevič, Z. I.; Faddeev, D. K.
“Theory of homology in groups II,”
Proc. Leningrad Univ., 14 (1959) no 7, 72-87.
ZBL0171.28301
MR106234
Heller, A.; Reiner, I.
“Representations of cyclic groups in rings of integers. I.”
Ann. of Math. (2) 76 (1962) 73–92.
MR140575 DOI:10.2307/1970266
Heller, A.; Reiner, I.
“Representations of cyclic groups in rings of integers. II.”
Ann. of Math. (2) 77 (1963) 318–328.
MR144980
DOI:10.2307/1970218
Jones, Alfredo
“Groups with a finite number of indecomposable integral representations.”
Michigan Math. J. 10 (1963) 257–261.
MR153737
DOI:10.1307/mmj/1028998908
|
H: Logistic functions - how to find the growth rate
We have the formule for a model with logistic growth:
$$ N_t = N_{t-1} + g\, N_{t-1}\left( 1 -\dfrac{N_{t-1}}{K}\right)$$
where $g$ defines the growth rate and $K$ is the carrying capacity.
Let's say we have the following data:
$N_0 = 10$,
$N_1 = 18$,
$N_2 = 29$, $N_3 = 47$,
$N_4 = 71$,
$N_5 = 119$,
$N_6 = 175$,
$N_7 = 257$,
$N_8 = 351$,
$N_9 = 441$,
$N_{10} = 513$,
$N_{11} = 560$,
$N_{12} = 595$
,$N_{13} = 630$
,$N_{14} = 641$
,$N_{15} = 651$
,$N_{16} = 656$
,$N_{17} = 660$
,$N_{18} = 662$.
How can we get a good approximation for what $g$ should be using this data? I just tried to fill in 2 consecutive points but it obviously didn't work out because the function isn't linear and you get a different $g$ between every 2 consecutive points. So how should I approach this problem?
AI: If $g$ is presumed to be independent of $N$, then your data as such do not fit a logistic progression over the whole range $0 \leq t \leq 18$ (it leads to a contradiction). It would probably fit on certain segments, where the equation can be solved for constant $g$ and $K$.
For example:
$$18=10 a - 100 b$$
$$29=18 a - 18^2 b$$
gives a solution for $a=1+g$ and $b=g/K$.
So what you did is correct, but $g$ does not seem to be constant over the whole range of $N$ for $0 \leq t \leq 18$.
What you could do instead is to proceed stepwise and find $g$ for each progression, and possibly apply a regression that gives an approximate relation $N \mapsto g$, in other words $g$ as a function of $N$; a single overall least-squares fit is sketched below.
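Here is a minimal least-squares sketch of such an overall fit (my own; it uses the linearization $N_{t+1}-N_t = gN_t - (g/K)N_t^2$ of the recurrence above):

```python
import numpy as np

N = np.array([10, 18, 29, 47, 71, 119, 175, 257, 351, 441, 513,
              560, 595, 630, 641, 651, 656, 660, 662], dtype=float)

dN = N[1:] - N[:-1]
X = np.column_stack([N[:-1], -N[:-1]**2])
(g, g_over_K), *_ = np.linalg.lstsq(X, dN, rcond=None)
K = g / g_over_K

print(g, K)   # one compromise fit; per-step values of g will still vary
```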
|
H: A question on faithfully flat extension
This question arose while reading page 116 of Red Book by Mumford.
Let $B$ be a faithfully flat extension of $A$. Can I claim that $b \otimes 1 = 1 \otimes b$ in $B\otimes_A B$ if and only if $b\in A$?
AI: You are asking if the sequence
$0 \to A \to B \to B\otimes_A B$, where the second map is given by $b \mapsto b\otimes 1 - 1 \otimes b$, is exact.
You can test this by tensoring up with $B$ over $A$ (since $B$ is faithfully flat over $A$). Write $B' = B\otimes_A B$. Then extending scalars to $B$ we get
$0 \to B \to B' \to B'\otimes_B B'$ (the same sequence but with $A$ replaced by $B$ and $B$ by $B'$). (To check this is just a manipulation with tensor products.) Again, $B'$ is faithfully flat over $B$.
You might think that nothing has improved by doing this, but it has: unlike in the general faithfully flat context, we have a homomorphism $B' \to B$ (given by $b_1 \otimes b_2 \mapsto b_1 b_2$); geometrically, Spec $B' \to $ Spec $B$ admits a section.
(This reduction to the case where the faithfully flat morphism admits a section
is the key technical tool in Grothendieck's flat descent arguments.)
Writing $B' = B \oplus I,$ where $I$ is the kernel of $B' \to B$, we get that
$B'\otimes_B B' = B \oplus I \oplus I \oplus I\otimes_B I$,
and in terms of this isomorphism it is easily seen that the morphism $b' \mapsto b' \otimes 1 - 1 \otimes b'$ is injective when restricted to $I$ (since for $b' \in I$ the two terms lie in the two distinct "$I$ summands" in the direct sum description
of $B'\otimes_B B'$). Thus your question has a positive answer in this base-changed context, and hence in the original context.
The general philosophy behind this kind of argument is that for answering these kinds of questions when $A \to B$ is faithfully flat, it is usually no loss of generality to assume in addition that $A$ is a direct summand of $B$ as an $A$-module.
|
H: Why do we define a linear transformation to have the property that $f(cW)=c f(W)$?
Why do we define a linear transformation to have the property that $f(cW)=c f(W)$?
let $V,T$ be any two vector spaces and
let $f:V\rightarrow T$ be a linear transformation between $V$ and $T$.
Why do we assume this condition?
I think you would say: to preserve the structure of the spaces! And this is right,
but I find it unnecessary. Why?
We know that $f(A+B)=f(A)+f(B)$ by the first condition of a linear transformation,
so we can, using induction, prove that
$f(A_1 + A_2 +... +A_n ) = f(A_1) + f(A_2) + ... + f(A_n)$
Putting $A_1 = A_2 = \ldots = A_n = A$,
we conclude that $f(cA)=cf(A)$ for $c=n$.
So, why do mathematicians assume the second condition although it follows from the first?
Isn't this a repeat? I think mathematicians don't like repeating things!
AI: Hint: you are assuming in your demonstration that $c \in \mathbb Z$. The underlying field may not be the set of integers. So we have a problem when $c\notin \mathbb Z$, and we'd need to extend this to cover, e.g., rationals. But then what do we do, e.g, if $c$ is Irrational? Imaginary?
So in the general scenario, we need that a linear transformation $f$ on $W$ satisfies the constraint that $f(cW) = cf(W)$.
I am quite confident that the "extra baggage" of this requirement would have fallen by the wayside if it were not needed to characterize what conditions, exactly, must be satisfied by any and every linear transformation. In fact, using a Hamel basis of $\mathbb R$ over $\mathbb Q$ one can construct additive maps $\mathbb R\to\mathbb R$ that are not $\mathbb R$-linear, so the second condition really is independent of the first.
|
H: Question about Switching Between Random Variables
Find the density function of $Y = aX$, where $a > 0$, in terms of the density function of $X$.
Show that the continuous random variables $X$ and $-X$
have the same distribution function if and only if $f_X(x) = f_X(-x)$ for all $x \in \mathbb R$.
I got the first part already. $$F_Y(y) = \mathbb P(Y = aX \le y) = \mathbb P(X \le y/a) = F_X(y/a)$$
Hence, $$f_Y(y) = {1 \over a} f_X(y/a)$$
But for the second part, I cannot get the subscripts to match.
This is what I mean ... $( \rightarrow)$ Let $Y = -X$.
$$F_Y(y) = \mathbb P(Y = -X \le y) = \mathbb P(X \gt -y) = 1 - F_X(x)$$
$$f_Y(y) = -f_X(x) * {{dx} \over {dy}} = f_X(x)$$
So $f_{-X}(-x) = f_X(x)$, which is different from what the text claimed.
I always have trouble understanding the subscripts in $F_Y$ or $f_X$.
A lot of times I have difficulty switching from one random variable to another.
So, in this example, the text is claiming that
$$\mathbb P(X(\omega) \le -x) = \mathbb P(X(\omega) \le x)$$
That makes me think that $f_X = 0$ between $-x$ and $x$.
AI: If $Y = -X$, $f_{-X}(-x) = f_X(x)$ is the same as $f_Y(t) = f_X(-t)$ where $t = -x$. Saying $X$ and $Y$ have the same distribution says that $f_Y(t) = f_X(t)$.
So this is exactly the same.
And how could you possibly argue with
$ {\mathbb P}(X(\omega)\le -x) = {\mathbb P}(X(\omega)\le -x)$?
|
H: Is there a tree $T$ such that $\text{diam}(T) \geq k$, where $k$ is the number of vertices with degree less than 3?
Let $T$ be an undirected tree, let $d$ be the diameter of $T$, and let $s$ be the number of vertices in $T$ with degree less than 3. Recall the diameter of a graph is the length of the longest shortest path in $T$.
Consider the star graph $S_n$ on $n$ vertices. It is easy to see that the difference between $d$ and $s$ can be arbitrarily large. In this case, $d$ is always 2, but we can make $s$ how ever large we wish. In a path graph, $s-d=1$.
Is it possible to construct a tree where $d$ is greater than or equal to $s$? I don't think it is, but is there a simple argument to show it?
AI: Direct proof:
Consider any of the longest paths in the tree (it consists of $d$ edges and $(d+1)$ nodes).
If we remove the edges of this path from the tree, we get a forest of $(d+1)$ trees, each of which has at least one leaf.
Removing the path edges decreases the degree of each path node by at most $2$ and leaves all other degrees unchanged. In each component, pick a leaf: if the component has at least two vertices, it has at least two leaves, so we may pick one that is not on the path, and such a leaf has degree $1$ in the original tree. If the component is a single vertex, it is a path node all of whose tree edges were removed, so its degree in the original tree is at most $2$. Either way we obtain $d+1$ distinct vertices of degree less than $3$.
Thus, $s\geq d+1$.
(and the path graph shows that this bound is tight).
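If you want to convince yourself empirically, here is a small randomized check (my own sketch, pure Python): it builds random trees, computes the diameter by double BFS, counts vertices of degree less than $3$, and asserts $s\ge d+1$.

```python
import random
from collections import deque

def random_tree(n):
    # attach each new node to a random earlier node (child -> parent)
    return {i: random.randrange(i) for i in range(1, n)}

def check(n):
    adj = {i: [] for i in range(n)}
    for c, p in random_tree(n).items():
        adj[c].append(p); adj[p].append(c)

    def bfs_far(src):              # farthest node from src and its distance
        dist = {src: 0}; q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1; q.append(v)
        far = max(dist, key=dist.get)
        return far, dist[far]

    u, _ = bfs_far(0)
    _, d = bfs_far(u)              # double BFS gives the tree diameter
    s = sum(1 for v in adj if len(adj[v]) < 3)
    assert s >= d + 1, (s, d)

for _ in range(1000):
    check(random.randrange(2, 60))
print("s >= d + 1 held in all trials")
```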
|
H: $\sum_{n=1}^{\infty }\left(\frac{2n+5}{7n+6}\right)^{n\log(n+1)} $ converges or diverges?
I am trying to determine whether this series converges or diverges: $\sum_{n=1}^{\infty }\left(\frac{2n+5}{7n+6}\right)^{n\log(n+1)}$.
Here is my solution: I called: $a_{n}=\left(\frac{2n+5}{7n+6}\right)^{n\log(n+1)}$. Then, I used the root test as follows: $\lim_{n \to \infty }\left | a_{n} \right |^{\frac{1}{n}}=\lim_{n \to \infty}\left(\frac{2n+5}{7n+6}\right)^{\log(n+1)}$. Then I called $x_{n}=(\frac{2n+5}{7n+6})^{\log(n+1)}$, Instead of computing $\lim_{n \to \infty}x_{n}$, I computed first $$\lim_{n \to \infty}\log(x_{n})=\lim_{n \to \infty}\log(n)\log\left(\frac{2n+5}{7n+6}\right)=\log(\frac{2}{7})\lim_{n \to \infty}\log(n)=-\infty,$$ therefore: $\lim_{n \to \infty}x_{n}=\lim_{n \to \infty}e^{\log(x_{n})}=0$. Therefore: $\lim_{n \to \infty }\left | a_{n} \right |^{\frac{1}{n}}=0< 1$. So by the root test, the series converges.
Can you please let me know whether my solution is correct (especially the last steps) or not? if there is a mistake, please let me know how I should fix it. Also, if you are aware of a better way of solving the problem , please do let me know. Thanks!
AI: Another way to prove the convergence would be to study the integral
$$\int_0^\infty \left(\frac{2x+5}{7x+6} \right)^{x\ln x}\, dx.$$
The integrand extends continuously to $x=0$ and at infinity is majorized by $c(2/7)^x$ for some positive constant $c$; hence the integral converges, and by the integral criterion, so does the series.
Your method works, too.
|
H: Does $\alpha + \beta = \alpha$ imply $\beta \le \aleph_0$
Just like in title, my question is : Does $\alpha + \beta = \alpha$ imply $\beta \le \aleph_0$ where, $\alpha$ and $\beta$ are cardinals?
P.S. I actually have to prove $\alpha + \beta = \alpha$ $\iff$ $\alpha + \aleph_0 \cdot \beta = \alpha$. Any ideas?
AI: If you mean $\alpha+\beta= \alpha$ as in addition of cardinals, then no. Consider $\alpha=\aleph_2, \beta=\aleph_1$.
|
H: Calculating $\lim_{x \to a} \frac{x^2 - (a+1)x + a}{x^3-a^3}$ using L'Hospital
I tried to calculate this limit:
$$\lim_{x \to a} \frac{x^2 - (a+1)x + a}{x^3-a^3}$$
Using L'Hospital's rule I get:
$$\lim_{x \to a} \frac{2x - (a+1)}{3x^2} = \frac{2a - (a+1)}{3a^2} = 0$$
But actually the limit is
$$\lim_{x \to a} \frac{2x - (a+1)}{3x^2} = \frac{a-1}{3a^2}$$
Can you help me find my mistake?
AI: $$2a-(a+1)=2a-a-1=a-1.$$ I think this is the only mistake you have.
|
H: Reference request for complex variables
My curriculum for math has the first chapter on complex variables. It is as stated below:
Functions of complex variables:
Continuity and derivability of a function
Analytic functions
Necessary condition for $f(z)$ to be analytic, sufficient conditions (without
proof)
Cauchy-Riemann equations in polar form
Harmonic functions
Orthogonal trajectories
Analytical and Milne-Thomson method to find $f(z)$ from its real or
imaginary parts.
Complex integration
Taylor’s and Laurent’s series (without proof)
Cauchy’s residue theorem (statement & application)
I have been able to locate some of it on MIT OpenCourseware but I am not sure if that will be enough.
Can someone please point me out to more resources for this chapter ?
AI: I would use Bak & Newman's "Complex Analysis" for an introduction to the above topics except for "CR in polar form", "harmonic functions" and "Milne-Thomson".
|
H: Do endomorphisms of quotients always lift?
Let $H \to G$ be an injective homomorphism of Abelian groups and let $\varphi$ be an endomorphism of $H$. Must $\varphi$ extend to an endomorphism of $G$? The answer is no; a counterexample is the endomorphism of the subgroup $2\mathbb{Z} \times \mathbb{Z}_2$ of $\mathbb{Z} \times \mathbb{Z}_2$ given by $(2,0) \to (0,1)$ and $(0,1) \to (0,0)$.
I am interested in the dual question. Let $G \to H$ be a surjective homomorphism of Abelian groups and let $\varphi$ be an endomorphism of $H$. Must $\varphi$ lift to an endomorphism of $G$?
I strongly suspect the answer to be no, but I have not been able to find a counterexample. An easy counterexample like the one above would be ideal, although any would be good. Note that if the map $G \to H$ splits then $\varphi$ must be induced by an endomorphism of $G$ (and similarly in the dual case). The two questions make sense for other objects also, and so if there is anything interesting to be said on this matter in categorical terms then I would be happy to hear it.
AI: Consider the standard surjection $\mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} \to \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$.
More specifically we have $$G=\langle x , y : x^4 = y^2 =1 , yx=xy\rangle \quad\text{and}\quad H=\langle \bar x, \bar y : \bar x^2 = \bar y^2 = 1, \bar y \bar x = \bar x \bar y \rangle$$
The elements of $G$ that have order 4 are $\{ x, xy, x^3, x^3y \}$, the elements of order 2 are $\{ y, x^2, x^2 y \}$, and the element of order 1 is just called $1$. The elements of $H$ are just $\{ \bar x, \bar y, \bar x \bar y \}$ of order 2, and $\bar 1$ of order 1.
An endomorphism of $G$ cannot take $y$ (of order 2) to any element of order 4. But the image in $H$ of the elements of $G$ of order dividing 2 is only $\{ \bar 1 = \bar x^2, \bar y = \bar x^2 \bar y \}$. Hence an endomorphism of $H$ that is induced by an endomorphism of $G$ can only take $\bar y$ to $\bar y$ or $\bar 1$.
The endomorphisms of $H$ are most easily represented by $2\times 2$ matrices over $\mathbb{Z}/2\mathbb{Z}$. There are 16 such endomorphisms, but only 8 of them send $\bar y$ to $\bar y$ or $\bar 1$. The other 8 send $\bar y$ to $\bar x$ or $\bar x \bar y$. None of these latter 8 lift as shown in the previous paragraph. All of the former 8 do lift, though this doesn't matter very much.
In particular, 8 of the endomorphisms of $H$ lift (4 ways each), and 8 do not lift.
|
H: $-1 = 0$ by integration by parts of $\tan(x)$
I had a calculus final yesterday, and in a question we had to find a primitive of $\tan(x)$ in order to solve a differential equation.
A friend of mine forgot that such a primitive could easily be found, tried to integrate $\tan(x)$ by parts... and then arrived to the result $0 = -1$.
The kind of thing you're pretty satisfied to "prove", except during an important exam. :-°
So afterwards I tried to do the same :
$$\begin{align*}
\int \tan(x)dx &= \int \sin(x) \times \frac{1}{\cos(x)}dx \\[0.1in]
&= -\frac{\cos(x)}{\cos(x)} - \int - \frac{\cos(x) \times \sin(x)}{\cos(x)^2}dx \\[0.1in]
&= -1 + \int \tan(x)dx
\end{align*}$$
And therefore we get :
$$ \int \tan(x)dx = -1 + \int \tan(x)dx \implies 0 = -1$$
What? The reasoning sounds about right to me.
Could someone explain where something went wrong?
Thanks,
Christophe.
AI: Without even reading I answer: the antiderivatives of a function are equal only up to an additive constant, that is, any two antiderivatives will always differ by a constant on an interval.
Edit: Ok, having now read the question I confirm my suspicion, note that the symbol $\int f(x)dx$ is not a well defined function. You should interpret the symbol $\int f(x)dx$ as being an undertermined differentiable function which, once you differentiate, yields $f(x)$. Although more formally I believe it's more common to define $\int f(x)dx$ as the set of functions described above, that is, for some non degenerate interval $I$,
$$\int f(x)dx=\{F\in \Bbb R^I: \text{F is differentiable and }(\forall x\in I)(F'(x)=f(x))\}.$$
Using this definition one has to be very careful about what one means by $\int f(x)dx=g(x)$, because it doesn't mean what one would initially suspect.
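A complementary way to see that nothing has actually gone wrong: redo the computation with definite integrals. The boundary term $-\cos x/\cos x$ then gets evaluated at both endpoints and contributes $(-1)-(-1)=0$, not $-1$. A numerical sanity check (a sketch using scipy; the endpoints $0$ and $1$ are arbitrary points of $(-\pi/2,\pi/2)$):

```python
import numpy as np
from scipy.integrate import quad

a, t = 0.0, 1.0  # arbitrary endpoints inside (-pi/2, pi/2)

lhs, _ = quad(np.tan, a, t)

# Integration by parts with the same choices as above: the boundary
# term [-cos(x)/cos(x)] equals -1 at BOTH endpoints, so it contributes
# (-1) - (-1) = 0 rather than -1.
boundary = (-np.cos(t) / np.cos(t)) - (-np.cos(a) / np.cos(a))
rhs = boundary + quad(np.tan, a, t)[0]

print(lhs, rhs, boundary)  # lhs == rhs, and the boundary term is 0.0
```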
|
H: Choosing random number $[1,n]$. What is the expected value of $f(x) = x^2$?
We have just started learning discrete probability and this question came up:
We choose a random number from $[1,n]$, and we let $f(x) = x^2$ and $g(x) = 2^{-x}$.
I) What is the expected value of $f$?
II) What is the expected value of $g$?
III) What is the expected value of $f \circ g$?
Now I am very new to this, but, isn't expected value defined as $E[f] = \sum_{x \in \omega} {x \cdot Pr(x)}?$
Our $\omega$ here is $\{1,2,...,n\}$ and of size $n$ with each element of probability $Pr(x \in \omega ) = \frac{1}{n}$.
But then, what is the meaning of $f(x) = x^2$?
AI: No, $E[f]=\sum_{x \in \omega} f(x)\cdot Pr(x)$. So to find $E[f]$ you need $\sum_{i=1}^ni^2\frac 1n$ and similarly for $g$.
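Concretely, here is a short Python sketch with exact rational arithmetic. Note that $(f\circ g)(x)=(2^{-x})^2=4^{-x}$, and that $E[f]$ has the closed form $\frac{(n+1)(2n+1)}{6}$:

```python
from fractions import Fraction

def expectation(h, n):
    # E[h] = sum over x in {1, ..., n} of h(x) * (1/n)
    return sum(h(x) for x in range(1, n + 1)) * Fraction(1, n)

n = 10
f = lambda x: Fraction(x) ** 2
g = lambda x: Fraction(1, 2) ** x
fog = lambda x: f(g(x))          # (2^-x)^2 = 4^-x

print(expectation(f, n))    # 77/2, i.e. (n+1)(2n+1)/6 for n = 10
print(expectation(g, n))    # (1 - 2^-n)/n = 1023/10240
print(expectation(fog, n))  # (1 - 4^-n)/(3n)
```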
|
H: Regular Pentagon is the Unique Largest Two-Distance Set in the Plane
A two-distance set is a collection of points for which only two distinct distances appear among pairs of points. (That is, the distance between any pair of points is either $x$ or $y$, and these values may be whatever you want.)
The unique (up to similarity) largest two-distance set in the plane is the regular pentagon. The result seems to have become folklore, and I cannot find a proof anywhere online. I am primarily interested in the bound of five on the size of the two-distance set, but a proof of uniqueness would be a welcome bonus.
AI: Suppose there is a two-distance set of size $n=6$. Label each edge according to whether it has length $d_1$ or $d_2$ (where $d_1 \neq d_2$). From Ramsey theory (in particular $R(3,3)=6$), we know that a monochromatic triangle exists.
Let the 3 vertices of the monochromatic triangle be $A, B, C$, WLOG distance $d_1$. Now, consider the intersections of circles with radius $d_1$ and $d_2$ about centers $A$ and $B$, which have to give us the 3 other points.
If circles of radius $d_1$ intersect at a vertex that is not $C$, then we have $d_2 = \sqrt{3} d_1$.
If circles of radius $d_2$ intersect at a vertex, there are 2 possibilities for vertices. By considering the distance to $C$, we must have $ d_2 = \frac{\sqrt{3}}{3} d_1$ or $ d_2 = 2 \frac{ \sqrt{3}}{3} d_1$.
Notice that all of these $d_2$ are different, and hence we can have at most 1 of these points as a vertex.
If a circle of radius $d_1$ and a circle of radius $d_2$ intersect at a vertex, there are 4 possible points, with 2 on either side of $AB$. However, the points on either side of $AB$ will result in different values of $d_2$, hence we can have at most 2 of these points as a vertex.
Hence, we must have 1 point from the first set, and 2 from the second set. However, it is easy to see that none of the distances from the first set will result in valid points in the second set.
Uniqueness:
Suppose that we have a set of 5 points that are two-distance. Color each edge as before.
Claim: It doesn't contain a monochromatic triangle.
Suppose it does. Let the vertices be $A, B, C$, and the other 2 points be $D, E$. Take the circles with centers $A$ and $B$. Then from the above, we know that the other 2 points must be from the 3 possible intersection points, so $C, D, E$ are collinear. The only 2-distance set of collinear points is the one where one of the points is the midpoint of the other two. By considering the possible positions, we must have (WLOG) $D$ as the other intersection of the circles that $C$ lies on. Then, (WLOG) $ED = DC = \sqrt{3}\, CA$. But $EA \neq CA$ and $EA \neq CE$, hence we have a contradiction.
Claim: No monochromatic 4-cycle exists.
Suppose it does. Then, we must have a rhombus. Consider the lengths of the diagonals.
If they are distinct, then one of them is the side length of the rhombus, so we have a monochromatic triangle, contradicting the previous claim.
If they are equal and distinct from the side length of the rhombus, we have a square. It's quick to argue that no 5th point works.
From the above claims, the complete graph on 5 vertices must be formed from 2 5-cycles. Hence, we must have the regular pentagon.
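As a sanity check on the extremal configuration itself, one can verify numerically that the regular pentagon really is a two-distance set (a small Python sketch, taking the 5th roots of unity as vertices):

```python
import cmath

# Vertices of a regular pentagon: the 5th roots of unity.
pts = [cmath.exp(2j * cmath.pi * k / 5) for k in range(5)]

# Collect the distinct pairwise distances (rounded to dedupe floats).
dists = sorted({round(abs(p - q), 12)
                for i, p in enumerate(pts) for q in pts[i + 1:]})
print(dists)  # exactly two values: side 2*sin(36deg) ~ 1.17557
              # and diagonal 2*sin(72deg) ~ 1.90211
```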
|
H: Two tangent closed discs connected
Let $X$ be a connected subset of a metric space $M$. Show that $X^0$ (the interior of $X$) is not necessarily connected.
So the example I'm thinking of is $X$ being two closed discs, tangent at a point. The interior is clearly not connected, since there exist two disjoint, non-empty open sets whose union is the interior (the two sets are exactly the two open discs).
But I don't know how I can prove that $X$ is connected. Using the definition of connectedness, I must prove either
(i) The only subsets of $X$ which are both open and closed are $\emptyset$ and $X$
or
(ii) If $A,B$ are disjoint open subsets of $X$ whose union is $X$, one of them contains $X$.
AI: Connectedness can be a bit of an abstruse concept to work with. It's often easier to work with the stronger concept of path-connectedness (a space is path-connected if any two points can be joined by a continuous path in the space). Not every connected space is path-connected, but for those that are, this is generally the easiest way to prove connectedness.
In this case, for example, it's almost trivial to see that $X$ is path-connected: two points in the same disc can be joined by a straight-line path, while two points in different discs can be joined by a composite path formed of two line segments meeting at the tangency point.
|
H: Calculus Complicated Substitution Derivative
When,
$$y=6u^3+2u^2+5u-2 \ , \ u= \frac{1}{w^3+2} \ , \ w=\sin x -1 $$find what the derivative of $ \ y \ $equals when $ \ x = \pi \ . $
Tried it many times, still can't seem to get the right answer (81)
AI: $$\frac{dy}{dx}=\frac{dy}{du}\cdot\frac{du}{dw}\cdot\frac{dw}{dx}$$
$\frac{dy}{du}=18u^2+4u+5$, $\frac{du}{dw}=\frac{-3w^2}{(w^3+2)^2}$ , $\frac{dw}{dx}=\cos x$
Can you now make the substitutions?
Note that when $x=\pi $, then $w=-1$, and $u=1$
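If you want to check the value $81$ independently, here is a short sympy sketch of the whole chain rule computation:

```python
import sympy as sp

x = sp.symbols('x')
w = sp.sin(x) - 1
u = 1 / (w**3 + 2)
y = 6*u**3 + 2*u**2 + 5*u - 2

# At x = pi: dy/du = 27, du/dw = -3, dw/dx = -1, so the product is 81.
print(sp.diff(y, x).subs(x, sp.pi))  # 81
```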
|
H: Weak topology on an infinite-dimensional normed vector space is not metrizable
I've been pondering over this problem for a while now, but I can't come up with a proof or even a useful approach...
Let $X$ be an infinite-dimensional normed vector space over $\mathbb{K}$ (that is, either $\mathbb{R}$ or $\mathbb{C}$).
Then the weak topology $\sigma(X,X^*)$ is not metrizable, i.e. there is no metric $d$ such that the induced topology of $d$ coincides with $\sigma(X,X^*)$.
Can anyone help me with this?
AI: Let $X$ be a normed space. You can show that if the weak topology of $X$ admits a countable base of open sets at $0$, then $X$ is finite dimensional:
Prove the existence of a countable set $\{\zeta_n\}$ in $X^*$ such that every $\zeta \in X^*$ is a finite linear combination of the $\zeta_n$.
Derive from this that $X^*$ is finite dimensional.
Deduce that $X$ is finite dimensional.
|
H: Characteristic Primes of repunits
First off, we're working in base 10. A repunit is a number of the form $111111...1$ ($n$ ones).
For some integer sequence $(a_n)$, a characteristic prime $p$ of $a_n$ is a prime which divides $a_n$, but none of $a_1,\dots,a_{n-1}$. Does every repunit have a characteristic prime?
AI: The answer is yes for every repunit $R_n$ with $n \ge 2$; note that $R_1 = 1$ has no prime divisors at all. This follows from the Bang–Zsigmondy theorem: for $n \ge 2$ there is a prime $p$ dividing $10^n - 1$ but none of $10^k - 1$ with $k < n$ (the theorem's exceptional cases do not occur here, since $10+1$ is not a power of $2$ and the base is not $2$). Such a $p$ is not $3$, because $3 \mid 10^1 - 1$, so $p$ divides $R_n = (10^n - 1)/9$ but no earlier repunit.
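Here is a quick empirical check (a Python sketch using sympy's `factorint`; the empty list at $n=1$ reflects the caveat above):

```python
from sympy import factorint

seen = set()
for n in range(1, 16):
    primes = set(factorint((10**n - 1) // 9))  # prime divisors of R_n
    print(n, sorted(primes - seen))            # characteristic primes of R_n
    seen |= primes
```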
|
H: In a random graph of $n$ vertices, what is the expected value of the number of simple paths?
I am very new to discrete probability and was asked this question:
In a random graph $G$ on $n$ vertices (each edge is present independently with probability $\frac{1}{2}$), what is the expected value of the number of paths between a vertex $v$ and a vertex $u$? (The answer might be a summation.)
How do we exactly begin this? I know we have to define $f(u,v) = \text{number of simple paths between v and u}$, and we need to calculate $E[f(u,v)] = \sum_{u,v \in \omega} {f(u,v) \cdot Pr(u,v)}$. But what exactly is $f(u,v)$ here and what is our $\omega$?
AI: This is where you should use the linearity of expectation. Instead of trying to count the number of simple paths in a given configuration, you count the number of times a given path is in a configuration.
Given any simple path between $u,v$ of length $l$, the expected number of times that it will be a path is $\left(\frac{1}{2}\right)^l$. Hence we need to sum over all such paths between $2$ vertices $uv$.
Fix $k$. How many simple paths of length $k+1$ are there from $u$ to $v$? There must be $k+2$ vertices involved, of which the initial and final vertices are $u$ and $v$. Next, we have to pick any $k$ out of the remaining $n-2$ vertices. The order in which the vertices are picked matters, hence there are $(n-2)^{\underline k} = k!{n-2\choose k}$ simple paths of length $k+1$ between vertices $u, v$.
Summing the bracketed term (the expected number of simple paths between one fixed pair) over all $\binom{n}{2}$ unordered pairs gives the expected total:
$$E[X] = {n \choose 2} \left[ \sum_{k=0}^{n-2} k!{n-2\choose k} \times \frac{1}{2^{k+1}}\right]$$
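As a sanity check, the formula can be compared against a Monte Carlo estimate for small $n$ (a Python sketch; for $n=5$ the bracketed sum is $2.375$, so the formula gives $\binom{5}{2}\cdot 2.375 = 23.75$):

```python
import random
from itertools import combinations, permutations
from math import comb, factorial

n, trials = 5, 4000
rng = random.Random(0)

def count_simple_paths(adj):
    # Count unordered simple paths (at least one edge) by enumerating
    # ordered vertex sequences and dividing by 2.
    total = 0
    for k in range(2, n + 1):
        for seq in permutations(range(n), k):
            if all(adj[seq[i]][seq[i + 1]] for i in range(k - 1)):
                total += 1
    return total // 2

avg = 0
for _ in range(trials):
    adj = [[False] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        adj[i][j] = adj[j][i] = rng.random() < 0.5
    avg += count_simple_paths(adj)
avg /= trials

exact = comb(n, 2) * sum(factorial(k) * comb(n - 2, k) / 2**(k + 1)
                         for k in range(n - 1))
print(avg, exact)  # Monte Carlo estimate vs. 23.75
```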
|
H: Possible mistake in exercise in Hartshorne exercise II.2.18b
I'm trying to solve Exercise II.2.18b in Hartshorne, and I've constructed what appears to be a counterexample to its statement. Can someone tell me where I've gone wrong?
The statement is as follows. Let $\phi : A \rightarrow B$ be a ring homomorphism, let $X = \text{Spec } A$ and $Y = \text{Spec } B$, and let $f : Y \rightarrow X$ be the induced map. Then $\phi$ is injective if and only if the map of sheaves $f^{\#} : \mathcal{O}_X \rightarrow f_{\ast} \mathcal{O}_Y$ is injective.
My counterexample is as follows. Let $A = k[x]$ and $B = k[x,y]/(xy)$, and let $\phi : A \hookrightarrow B$ be the obvious injection. Then I claim that the associated map of sheaves is not injective. Indeed, let $p = (x) \in \text{Spec } B$ (this is a prime ideal since $B/(x) \cong k[y]$ is a domain) and $q = (x) \in \text{Spec } A$. Then $f(p) = q$. But the induced map of stalks $A_q \rightarrow B_p$ is not injective; indeed, $\frac{x}{1} \in A_q$ is nonzero but $\frac{x}{1} \in B_p$ is zero since in $B_p$ we have $\frac{x}{1} = \frac{xy}{y} = 0$.
AI: The stalk at $\mathfrak{q}$ of $f^\sharp:\mathscr{O}_X\rightarrow f_*(\mathscr{O}_Y)$ is a map $\mathscr{O}_{X,\mathfrak{q}}\rightarrow f_*(\mathscr{O}_Y)_\mathfrak{q}$, and the latter ring cannot always be identified with $\mathscr{O}_{Y,\mathfrak{p}}$, even though it does map to it naturally.
The point is, while it is true that a map of sheaves of abelian groups on a topological space is injective if and only if the maps it induces on stalks are injective at all points, for a morphism of locally ringed spaces $f:Y\rightarrow X$, injectivity of $f_y^\sharp:\mathscr{O}_{X,f(y)}\rightarrow\mathscr{O}_{Y,y}$ is not always equivalent to injectivity of $f^\sharp$, because $f_y^\sharp$ is not the same as the map $\mathscr{O}_{X,f(y)}\rightarrow f_*(\mathscr{O}_Y)_{f(y)}$. It is the composite of this map with the natural map $f_*(\mathscr{O}_Y)_{f(y)}\rightarrow\mathscr{O}_{Y,y}$, which can fail to be injective, for example. In fact, the map denoted $f_y^\sharp$ is the stalk of the map $f^{-1}\mathscr{O}_X\rightarrow\mathscr{O}_Y$ corresponding to $f^\sharp$ via the adjunction between the inverse image and direct image functors, so $f^\sharp_y$ being injective for all $y\in Y$ is the same as injectivity of $f^{-1}\mathscr{O}_X\rightarrow\mathscr{O}_Y$.
Your map of sheaves is a map $\tilde{A}\rightarrow\tilde{B_A}$, where $B_A$ denotes $B$ regarded as an $A$-module via $A\rightarrow B$. The kernel of this map, by basic properties of the functor associating a sheaf of modules to a module, is $\tilde{\ker(\varphi)}$, which is zero if and only if $\ker(\varphi)$ is zero.
|
H: Find all different integer exponents
Find all pairs of distinct integers $m$, $n$ that satisfy the following equality:
$m(\sin^{n}x + \cos^{n} x- 1) = n(\sin^{m}x + \cos^{m}x - 1), (\forall) x\in\mathbb{R}.$
Case 1: $m$ is odd, $n$ is even. Putting $x=180^\circ$ gives $m=n=0$, a contradiction.
Case 2: $m$ and $n$ are both odd. Putting $x=180^\circ$ gives $m=n$, a contradiction.
Case 3: $m$ and $n$ are both even, so $m=2a$, $n=2b$ where $a$, $b$ are integers.
Putting $x=45^\circ$ gives
$2^{a-b}a(2^{b-1}-1)=b(2^{a-1}-1)$.
Putting $x=60^\circ$ we have
$4^{a-b}a(4^b-3^b-1)=b(4^a-3^a-1)$. How can I continue?
AI: In case 3, substitute $u=\sin^2 x$ (and $1-u=\cos^2 x$) to transform the equality into $$a\left(u^b + (1-u)^b - 1\right) = b\left(u^a + (1-u)^a - 1\right)$$
Since the original equality held for all values of $x$, the new one must hold for all values of $u\in [0,1]$ and thus the polynomials (in the variable $u$) on the left-hand and right-hand sides must be identical.
The degree of the left-hand side is either $b$ (if $b$ is even) or $(b-1)$ (if it is odd, since the highest term cancels out). Similarly, the degree of the right-hand side is either $a$ or $(a-1)$. Without loss of generality, we can assume $a<b$ and since the degrees of the polynomials must be equal, we have $b-1=a$ (and $a$ is even and $b$ odd).
The coefficient of $u^{b-1}$ (which is the same as $u^a$) on the left-hand side is then $ab$, and on the right-hand side it is $2b$; equating them yields $a=2$ and $b=3$. It's easy to check that these values satisfy the equality for all $u$, so the only solution to the original problem is $\{m,n\}=\{4,6\}$.
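The conclusion is easy to verify with a short sympy sketch, checking both the polynomial identity for $a=2$, $b=3$ and the original trigonometric identity for $m=4$, $n=6$:

```python
import sympy as sp

u, x = sp.symbols('u x')

# Polynomial identity for a = 2, b = 3:
a, b = 2, 3
print(sp.expand(a * (u**b + (1 - u)**b - 1)
                - b * (u**a + (1 - u)**a - 1)))  # 0

# Original trigonometric identity for m = 4, n = 6:
m, n = 4, 6
expr = (m * (sp.sin(x)**n + sp.cos(x)**n - 1)
        - n * (sp.sin(x)**m + sp.cos(x)**m - 1))
print(sp.simplify(expr))                         # 0
```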
|
H: Simple partial differentiation
I have a simple partial differentiation question here, given:
$u = x^2 - y^2$ and $v = x^2 -y$, find $x_u$ and $x_v$ in terms of $x$ and $y$.
What is the easiest way to go about this?
Thanks
AI: HINT: Think $x = x(u,v)$ and $y= y(u,v)$ and use implicit differentiation.
First we consider the partial w.r.t. $u$
$$\left\{\begin{aligned}
\partial_u u &= \partial_u (x^2) - \partial_u (y^2),
\\
\partial_u v &= \partial_u (x^2) - \partial_u (y),
\end{aligned}\right.$$
this is:
$$\left\{\begin{aligned}
1 &= 2x \partial_u x - 2y\partial_u y,
\\
0 &= 2x \partial_u x - \partial_u y.
\end{aligned}\right.$$
Now solve this system for $\partial_u x$ and $\partial_u y$ in terms of $x$ and $y$:
$$
\partial_u x =\frac{1}{2x(1-2y)},\quad \partial_u y = \frac{1}{1-2y}.
$$
Now try to mimic the above for the partial derivative w.r.t. $v$.
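As an independent check, the matrix of partials $\begin{pmatrix} x_u & x_v \\ y_u & y_v\end{pmatrix}$ is the inverse of the Jacobian of $(u,v)$ with respect to $(x,y)$, which sympy can compute directly (a sketch; this also yields $x_v$ and $y_v$ for the second part):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = x**2 - y**2
v = x**2 - y

# Jacobian of (u, v) with respect to (x, y); by the inverse function
# theorem, its inverse is the matrix [[x_u, x_v], [y_u, y_v]].
J = sp.Matrix([[sp.diff(u, x), sp.diff(u, y)],
               [sp.diff(v, x), sp.diff(v, y)]])
print(sp.simplify(J.inv()))
# First column: x_u = 1/(2x(1 - 2y)) and y_u = 1/(1 - 2y), as derived above.
```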
|