H: Understanding Conditional Expectation
I just want to make sure I'm understanding conditional expectation correctly:
Let $X_1,X_2,X_3$ denote three independent coin flips with probability of heads $\frac{1}{4}$ and probability of tails $\frac{3}{4}$, and $X_i=2$ if heads and $X_i=0$ if tails.
Then I'm looking to determine the conditional expectation
$$\mathbb{E}[X_1+X_2+X_3|F_1],$$
where $F_1$ is the sigma algebra generated by $X_1$, namely $\{\varnothing,\Omega,\{HHH,HHT,HTH,HTT\},\{THH,THT,TTH,TTT\}\}$, i.e. the sigma algebra generated by the partition of $\Omega$ into the heads-first and tails-first events.
So I compute this by taking $P(\{HHH,HHT,HTH,HTT\})=\frac{1}{64}+\frac{3}{64}+\frac{3}{64}+\frac{9}{64}=\frac{1}{4}$
and then weighting these according to the various sums: $6\frac{1}{64}+4\frac{3}{64}+4\frac{3}{64}+2\frac{9}{64}=\frac{3}{4}.$
And then doing the same thing for the other set
$P(\{THH,THT,TTH,TTT\})=\frac{3}{64}+\frac{9}{64}+\frac{9}{64}+\frac{27}{64}=\frac{3}{4}$
$4\frac{3}{64}+2\frac{9}{64}+2\frac{9}{64}+0\frac{27}{64}=\frac{3}{4}.$
And then taking $\frac{\frac{3}{4}}{\frac{1}{4}}=3$ and $\frac{\frac{3}{4}}{\frac{3}{4}}=1$, to obtain the random variable:
$$\mathbb{E}[X_1+X_2+X_3|F_1](\omega)=3 \text{ for } \omega\in\{HHH,HHT,HTH,HTT\}$$
$$\mathbb{E}[X_1+X_2+X_3|F_1](\omega)=1 \text{ for } \omega\in\{THH,THT,TTH,TTT\}.$$
Can anyone please tell me is this correct? Thanks.
AI: More quickly, and for the same result, note the following:
The random variable $X_1$ is $F_1$-measurable hence $E[X_1\mid F_1]=X_1$.
For every $k\ne1$, the random variable $X_k$ is independent of $F_1$ hence $E[X_k\mid F_1]=E[X_k]$.
For every $k$, $E[X_k]=\frac12$.
Thus,
$$E[X_1+X_2+X_3\mid F_1]=X_1+\tfrac12+\tfrac12=X_1+1$$ |
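As a quick sanity check against the computation in the question: $X_1$ equals $2$ on $\{HHH,HHT,HTH,HTT\}$ and $0$ on $\{THH,THT,TTH,TTT\}$, so $X_1+1$ takes the values $3$ and $1$ on those two sets, exactly as found above.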
H: If $f:\mathbb{R} \to \mathbb{Z}$, why can't we form $g \circ f$?
In the lecture notes for a course I'm taking, it is stated that:
Unlike multiplication however, we can't reverse order. First of all,
in general it doesn't even make sense to reverse composition. For
example, if we have $f:\mathbb{R} \to \mathbb{Z}$ defined by $f(x)
=\left \lfloor x \right \rfloor$ and $g:\mathbb{R} \to \mathbb{R}$ defined by $g(x) = x^2$. Then $f \circ g$ makes perfect sense since the
codomain of $g$ and the domain of $f$ are equal (both are $\mathbb{R}$),
but we can't technically do $g \circ f$, since the codomain of $f$ is
$\mathbb{Z}$ and the domain of $g$ is $\mathbb{R}$.
I'm confused by the reasoning for this. Since the implication is $(g\circ f):\mathbb{R} \to \mathbb{Z}$, and we have that $f:\mathbb{R} \to \mathbb{Z}$ is true, why is it that $(g \circ f)$ is not true as well?
AI: We can form $g\circ f$ only by (silently) inserting a third function, namely the inclusion $\iota\colon \mathbb Z\to\mathbb R$, i.e. strictly we can only form $g\circ\iota\circ f$:
$$ \mathbb R\stackrel f\longrightarrow\mathbb Z\stackrel \iota\longrightarrow \mathbb R\stackrel g\longrightarrow \mathbb R.$$
Alternatively, we can consider the restriction of $g$ to $\mathbb Z\subseteq \mathbb R$, that is $g|_{\mathbb Z}\colon\mathbb Z\to\mathbb R$, $x\mapsto x^2$. Then we can form $g|_{\mathbb Z}\circ f$:
$$\mathbb R\stackrel f\longrightarrow\mathbb Z\stackrel {g|_{\mathbb Z}}\longrightarrow \mathbb R.$$
(Well, it's not really an alternative as one may define the restriction via $\iota$).
Maybe the example would have been more instructive if it had used "totally different" sets instead of one being a subset of the other, because in practice this silent assumption is usually made in the presence of subset inclusions (or other canonical inclusion maps/restrictions). |
H: Solving the 'easy' differential equation $(1 - \phi^2)\phi'' + \phi(\phi')^2 =0$.
I need to solve the following:
$$(1 - \phi^2)\phi'' + \phi(\phi')^2 =0.$$
Is there any standard method I can use?
AI: Just a lot of pattern matching and manipulation. Rewrite the equation as
$$\frac{\phi''}{\phi'} = -\frac{\phi \, \phi'}{1-\phi^2}$$
This can be written as
$$\frac{d}{dx} \log{\phi'} = \frac12 \frac{d}{dx} \log{(1-\phi^2)}$$
This may be integrated and subsequently exponentiated to produce
$$\phi' = A \left (1-\phi^2\right)^{1/2}$$
where $A$ is a constant of integration. We may then rewrite this equation in differential form as
$$\frac{d\phi}{\left (1-\phi^2\right)^{1/2}} = A \, dx$$
which integrates to
$$\arcsin{\phi} = A x + B$$
where $B$ is another constant of integration. The solution to the above equation is then
$$\phi(x) = \sin{(A x+B)}$$
You may verify that this is indeed the solution by plugging it back into the original equation. |
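For instance, carrying out that check: with $\phi=\sin(Ax+B)$ we get $\phi'=A\cos(Ax+B)$ and $\phi''=-A^2\sin(Ax+B)$, so
$$(1-\phi^2)\phi''+\phi(\phi')^2=\cos^2(Ax+B)\bigl(-A^2\sin(Ax+B)\bigr)+\sin(Ax+B)\,A^2\cos^2(Ax+B)=0.$$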
H: Ring homomorphisms $\mathbb{R} \to \mathbb{R}$.
I got this question in a homework:
Determine all ring homomorphisms from $\mathbb{R} \to \mathbb{R}$. Also prove that the only ring automorphism of $\mathbb{R}$ is the identity.
I know that $\mathbb{R}$ is a field, so the only ideals are $\mathbb{R}$ and $\{0\}$. Therefore the homomorphisms must be the identity and the function $f(x)=0$ where $x \in \mathbb{R}$.
But how do I prove these are the only two homomorphisms?
Also, I was told to use the fact that $\mathbb{Q}$ is dense in $\mathbb{R}$, how can I use this hint?
AI: $f\colon \mathbb R\to\mathbb R$ is uniquely determined by $f(1)$. Why?
By induction, $f(n)=n\cdot f(1)$ for $n\in\mathbb N$. Then by additivity, $f(x)=x\cdot f(1)$ for $x\in \mathbb Z$ and finally also for $x\in\mathbb Q$.
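For instance, for a rational $x=p/q$ (with $p\in\mathbb Z$, $q\in\mathbb N$): by additivity $q\cdot f(p/q)=f(q\cdot p/q)=f(p)=p\cdot f(1)$, hence $f(p/q)=\frac pq\,f(1)$.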
We can make use of the density of $\mathbb Q$ if we show that $f$ is continuous.
Indeed, if $x\ge0$ then $x=y\cdot y$ for some $y\in\mathbb R$, hence $f(x)=f(y)f(y)\ge 0$, therefore $f$ preserves $\ge$ and hence $|y-x|\le \frac 1n$ implies $|f(y)-f(x)|\le \frac1n|f(1)|$, that is $f$ is continuous. We conclude that $f(x)=x\cdot f(1)$ for all $x\in\mathbb R$.
What values of $f(1)$ are allowed? We must have $f(1)=f(1\cdot 1)=f(1)\cdot f(1)$, hence $f(1)=0$ or $f(1)=1$. |
H: Linear Transformation - orthogonal projection and orthogonal symmetry compositions
Let $\vec{a}$ be a nonzero vector of $V^2$.
I know that orthogonal projection of $\vec{u}$ onto the line generated by $\vec{a} \text{ is}$
$P_{\vec{a}}(\vec{u}) = \frac{(\vec{u}\cdot\vec{a})}{\vec{a}\cdot\vec{a}}\vec{a}$
And the orthogonal symmetry relative to line generated by $\vec{a} \text{ is }$
$S_{\vec{a}}(\vec{u}) = 2P_{\vec{a}}(\vec{u}) - \operatorname{id}(\vec{u})$
So, how can I calculate the following?
$P_{\vec{a}} \circ P_{\vec{a}} \text{ and } S_{\vec{a}} \circ S_{\vec{a}}$
Thanks in advance
AI: Orthogonal projection is an idempotent operation. This means applying it twice is the same as applying it once. Think intuitively about why this is true.
Then, using this fact, you should be able to show that the orthogonal symmetry you've written down is its own inverse. There is also intuition as to why this is true (what is the reflection of a reflection?)
To do this computationally, notice that $P_a(u)\cdot a=\frac{u\cdot a}{a\cdot a}a\cdot a=u\cdot a$ so that we have:
$$P_a\circ P_a (u)=\frac{P_a(u)\cdot a}{a\cdot a}a=\frac{u\cdot a}{a\cdot a}a=P_a(u)$$
Using this we have:
$$S_a\circ S_a=(2P_a-\operatorname{id})\circ(2P_a-\operatorname{id})=4P_a\circ P_a-4P_a+\operatorname{id}=\operatorname{id}$$ |
H: How to find the solution coset x in a equation involving cosets?
Is it possible to kindly tell me the steps necessary to find $\overline{x}$ for the following equation?
$$
\overline{3}\overline{x} = \overline{2} \text{ in } \mathbb{Z}_5 \text{ where } \mathbb{Z}_5 \text{ is quotient ring } \mathbb{Z} / \langle 5 \rangle \text{ and } \overline{3}, \overline{x} \text{ and } \overline{2} \text{ are element cosets of } \mathbb{Z}_5
$$
The answer is given: $\overline{x} = \overline{4}$
AI: We want to find some $x$ satisfying the following congruence:
$$3x\equiv 2\operatorname{mod}5$$
In other words, we need $3x-2$ to be divisible by $5$, so we have $3x-2=5k$ for some integer $k$. Solving for $x$ gives:
$$x=\frac{5k+2}{3}$$
Plug in values for $k$ until $x$ is an integer. You will find that $k=2$ gives $x=4$.
Of course, plugging in $k=2+3m$ also gives $x=5m+4\equiv 4\operatorname{mod}5$. |
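Alternatively, one can avoid the trial step by multiplying both sides by an inverse of $\overline{3}$ in $\mathbb{Z}_5$: since $3\cdot 2=6\equiv 1\pmod 5$, the inverse is $\overline{2}$, so $\overline{x}=\overline{2}\cdot\overline{2}=\overline{4}$.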
H: Differentiable functions without an antiderivative
Specifically, why is there no antiderivative, or any possible method of integrating (except numerically) say $\;e^{\csc(x)}$?
(I don't have my computer handy right now so I cant format the formula, sorry about that!)
AI: It's kind of similar to saying: we can FOIL two linear terms to obtain a quadratic, but not every quadratic can be factored into (real) linear terms (anti-FOIL, if you wish). That's just how it is. The vast majority of continuous functions do not have an antiderivative in terms of elementary functions (which is, I think, what you mean here), and thus we need to resort to numerical methods to find the area under the curve, or whatever the integral stands for. |
H: prove that , there is no element $a , b$ of the group $G$ which satisfy this property
let $G=(x) \times (y) $ where $(t)$ is the group generated by $t$ , $|x|= 8 , |y|=4$
let $H=(x^2y , y^2 )$ be isomorphic to $Z_4 \times Z_2 $
prove that , there are no elements a,b of G such that $G=(a) \times (b)$ and
$H = (a^2) \times (b^2)$
any hints ?
AI: Hint: $G/H$ is isomorphic to $\mathbb Z_4$ but $\big((a)\times(b)\big)/\big((a^2)\times(b^2)\big)$ is isomorphic to $\mathbb Z_2\times\mathbb Z_2$. |
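A sketch of why the hint works: $|G|=8\cdot 4=32$ and $|H|=8$, so $|G/H|=4$; moreover $x^2\notin H$ (every element of $H$ has the form $x^{2a}y^{a+2b}$, and $x^2$ would force $a\equiv 1$ and $a+2b\equiv 0\pmod 4$, which is impossible), so $xH$ has order $4$ and $G/H$ is cyclic. On the other hand $\big((a)\times(b)\big)/\big((a^2)\times(b^2)\big)\cong\big((a)/(a^2)\big)\times\big((b)/(b^2)\big)$, and each factor has order at most $2$, so every element of that quotient has order at most $2$.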
H: Prove $T$ has at most two distinct eigenvalues
The question is from Axler's Linear Algebra text. The $\mathcal{L}(V)$ stands for the space of linear operators on the vector space $V$.
Suppose that V is a complex vector space with dim $V=n$ and $T \in \mathcal{L}(V)$ is such that
$$\text{null} \ T^{n-2} \neq \text{null} \ T^{n-1}
$$
Prove that $T$ has at most two distinct eigenvalues.
I first thought of solving this by contradiction. That is, I thought, suppose there were three distinct eigenvalues. Then, there would be an equation like
$$(x-\lambda_1I)^{d_1}(x-\lambda_2I)^{d_2}(x-\lambda_3I)^{d_3}
$$
where the $d_i$'s are positive integers that sum to dim $V$. Call this polynomial $q(x)$ the characteristic poly. Thus, by the Cayley-Hamilton theorem,
$$q(T)=(T-\lambda_1I)^{d_1}(T-\lambda_2I)^{d_2}(T-\lambda_3I)^{d_3}=0
$$
Then multiplying out and setting dim $V = d_1+d_2+d_3 = n$, I could then get a poly with the various powers of $T$ (this is a little tricky to write down). In particular, I wanted to see the powers $n-2$ and $n-1$. I thought, ok, so now, rewrite the poly in terms of each of these and using the fact that $\text{null} \ T^{n-2} \neq \text{null} \ T^{n-1}$ you get some vector $v \in V$ such that,
$$T^{n-1}v = (\text{poly}_1)v = 0
$$
but
$$T^{n-2}v = (\text{poly}_2)v = k \neq 0
$$
I can think of some interesting things about $(\text{poly})_1$ and $(\text{poly})_2$, in particular, each has the monic term $T^n$. At this point, I'm not sure any of this helped.
Well, anyways, I'm sure someone has a much better approach! Thanks in advance to anyone who read this.
AI: Hints: consider the chain of subspaces $\{0\}=\ker T^0\subseteq\ker T\subseteq \ker T^2\subseteq \ldots...$ and think about what happens if $\ker T^{k-1}=\ker T^k$ at some point. Then prove that the assumption $\mbox{null} T^{n-2}\neq \mbox{null} T^{n-1}$ yields $\mbox{null} T^{n-1}=n-1$ or $\mbox{null} T^{n-1}=n$. In the latter case $T^{n-1}=0$, so it is easy to conclude. In the former case, take a basis of $\ker T^{n-1}$, complete it into a basis of $V$, and consider the matrix of $T$ in the latter.
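To expand the first hint slightly: if $\ker T^{k-1}=\ker T^k$ for some $k$, then $\ker T^k=\ker T^{k+1}=\cdots$ (the chain stabilizes from that point on), so the hypothesis $\operatorname{null} T^{n-2}\neq \operatorname{null} T^{n-1}$ forces the chain $\{0\}\subsetneq\ker T\subsetneq\cdots\subsetneq\ker T^{n-1}$ to be strict at every step, and hence $\dim\ker T^{n-1}\ge n-1$.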
Note: there is nothing special about $\mathbb{C}$ with this approach. It could be over any field. |
H: property of equality
The property of equality says:
"The equals sign in an equation is like a scale: both sides, left and right, must be the same in order for the scale to stay in balance and the equation to be true."
So for example in the following equation, I want to isolate the x variable. So I cross-multiply both sides by 3/5:
5/3x = 55
x = 3/5*55
What I did to one side, I had to to do the other.
However, take a look at the following problem:
y - 10/3 = -5/6(x + 2)
y = -5/6x - 10/6 + 10/3
So far we just used the distributive property of multiplication to distribute -5/6 to the quantity x + 2. Then, since we want to isolate y, we add 10/3 to both sides.
However, now in order to add the two fractions, we find the lowest common denominator is 6, so we multiply 10/3 by 2:
y = -5/6x - 10/6 + 20/6.
There is my question. Why can we multiply a term on the one side by 2, without having to do it on the other side?
After writing this question here, I'm now thinking that because 10/3 is equal to 20/6 we really didn't actually add anything new to the one side, and that's why we didn't have to add it to the other side.
AI: You actually multiplied one term on the right-hand side by $\;1 = \frac 22.\;$ That's valid! $\;1\cdot x = x,\;$ whatever the value of $\,x\,$. In doing so, you haven't changed the value of the original fraction $10/3$, nor have you changed the quantity on either side of the equation. $$\dfrac{10}{3} = \dfrac {10}{3}\cdot \dfrac 22 = \frac {20}{6}$$
$$y = -\frac 56 - \frac{10}6 + \frac{10}{3} = -\frac 56 - \frac{10}6 + \frac{20}{6}$$
You are correct, though, that it would not be valid to multiply the fraction $10/3$ by $2$: resulting in $2 \cdot \dfrac {10}{3} = \dfrac {20}3,\;$ unless you also multiplied $y$ by $2$, and multiplied each of the other terms in the sum on the right-hand side of the equation by $2$. |
H: Solve Modular Equation $5x \equiv 6 \bmod 4$
Here is an modular equation
$$5x \equiv 6 \bmod 4$$
And I can solve it, $x = 2$.
But what if each side of the above equation times 8, which looks like this
$$40x \equiv 48 \bmod 4$$
Apparently now, $x = 0$. Why is that? Am I not solving the modular equation in a right way, or should I divide both side with their greatest-common-divisor before solving it?
P.S.
To clarify, I was solving a system of modular equations, using Gaussian Elimination, and after applying the elimination on the coefficient matrix, the last row of the echelon-form matrix is :
$$0, \dots, 40 | 48$$
but I think each row in the echelon-form should have been divided by its greatest common divisor, that turns it into :
$$0, \dots, 5 | 6$$
But apparently they result in different solutions: one is $x = 0,1,2,3,\ldots$, the other $x = 2$. Why is that? Am I applying Gaussian elimination wrong?
AI: The congruence $40x\equiv48\pmod4$ means that $4\mid40x-48$. But $40x-48=4(10x-12)$, so this is always true: $40x\equiv48\pmod4$ for all integers $x$. Thus, $x\equiv0\pmod4$ is not the only solution.
Added: If you have the congruence $ax\equiv b\pmod m$, and $d$ is a common divisor of $a$ and $b$, you cannot simply divide through by $d$ and say that
$$\frac{a}dx\equiv\frac{b}d\!\!\!\!\pmod m\;;$$
it’s not generally true. However, if $d$ is a common divisor of $a,b$, and $m$, the original congruence is equivalent to the congruence
$$\frac{a}dx\equiv\frac{b}d\left(\bmod \frac{m}d\right)\;.$$
Here you can take $d=4$ to reduce the original problem to solving $10x\equiv12\pmod1$, and since all integers are congruent to one another mod $1$, you again arrive at the conclusion that $x$ can be any integer. |
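For a concrete instance of the failure when $d$ is not also divided out of the modulus: $2x\equiv 2\pmod 4$ holds exactly when $x$ is odd, i.e. $x\equiv 1$ or $3\pmod 4$, whereas dividing only $a$ and $b$ by $2$ gives $x\equiv 1\pmod 4$ and loses the solution class $x\equiv 3$.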
H: Eigenvalues of a block matrix
For $X=\left(\begin{array}{cc} A & B\\ C & 0\end{array}\right)$, how are eigenvalues of $X$ related to the eigenvalues of $A$?
AI: Not much can be said. However, if $A$ is square and $X$ is Hermitian (hence $A$ is Hermitian and $C=B^\ast$) and $\lambda_1(M)\le\lambda_2(M)\le\lambda_3(M)\cdots$ denote the eigenvalues of a Hermitian matrix $M$ arranged in increasing order, we have the following interlacing inequality:
$$
\lambda_k(X)\le\lambda_k(A)\le\lambda_{k+n-r}(X)
$$
for $1\le k\le r$, when $A$ is $r\times r$ and $X$ is $n\times n$.
Also, if the four blocks have equal sizes, the characteristic polynomial of $X$ can be simplified as follows:
$$
\det\pmatrix{xI-A&-B\\ -C&xI} = \det(x^2I - xA - BC).
$$
Such simplification is valid because the two blocks at the bottom of the LHS commute. You can see that $\det(x^2I - xA - BC)$ has little resemblance to $\det(\lambda I-A)$ and we don't expect any relationship between the eigenvalues of $X$ and $A$ in general. |
H: About series representations
Can $\displaystyle \frac{1}{1+\frac{z}{n}}$ have the following series representation?
$\displaystyle \sum_{k=0}^{\infty}\frac{(-z)^k}{n^k}$
AI: In general for any $w\in \mathbb C$ with $|w|<1$ it holds that $\frac{1}{1-w}=\sum_{k=0}^\infty w^k$ (this is the sum of a geometric series). Thus, for $w=-\frac{z}{n}$, if $|w|<1$, then plugging that $w$ into the equation above yields the affirmative answer to your question. |
H: Nonsingularity of a block matrix
Let $X=\left(\begin{array}{cc}
A & B\\
C & 0
\end{array}\right)$
and:
If $X$ is non-singular, is $A$ non-singular when $B$ is full column rank and $C$ is full row rank?
AI: Counterexample:
$$\begin{pmatrix}0&1\\1&0\end{pmatrix}$$ |
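To spell out how this fits the question: here $A=(0)$, $B=(1)$, $C=(1)$ are the $1\times1$ blocks; $X$ is non-singular and $B$, $C$ trivially have full column and row rank, yet $A$ is singular.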
H: condition number of product of matrices
Let $\Phi$ be a $n \times m$ matrix and $C$ be a $n \times n$ diagonal matrix. Let $A = \Phi^{T}C\Phi$ (an $m \times m$ matrix). I am wondering if there is a theorem that relates the condition number of $A$, $\kappa(A)$ to the condition number (or singular values) of $\Phi$ and the elements of $C$?
Thank you!!
AI: I'm afraid you cannot consider $\Phi$ and $C$ separately. For example, let $\Phi^T=(1,0)$, $C_1=\pmatrix{1\\ &2}$ and $C_2=\pmatrix{2\\ &1}$. The matrices $C_1$ and $C_2$ have identical singular values and identical condition numbers, but $\Phi^TC_1\Phi=1$ and $\Phi^TC_2\Phi=2$ do not have identical singular values, so the spectrum of $A$ depends on how $\Phi$ and $C$ interact and not only on their separate singular values. |
H: complex exponential equation
I am trying to solve the following exponential equation: $z^{1+i} = 4$ where the argument of $z$ is between $-\pi$ and $\pi$. Here is what I have gotten so far: If $z = a + bi$ then the magnitude of $z$ is $2$ and $arctan(\frac{b}{a}) = -2$, therefore the two solutions of $z$ are in the 4th and 2nd quadrant respectively.
Can anyone confirm this? If this is right then I know how to proceed with the rest of this problem. Thanks
AI: Actually there are infinitely many solutions.
Let $z = re^{i\theta}$, with $r,\theta \in \mathbb{R}$. Then:
\begin{align}
z^{i+1} &= 4\\
\left( re^{i\theta} \right)^{i+1} &= 4\\
r^{i+1}e^{(i^2+i)\theta} &= 4\\
re^{-\theta} e^{i(\theta + \ln r)} &= 4\\
\end{align}
So
\begin{align}
\theta + \ln r &\equiv 0\ (\operatorname{mod}\ 2\pi)\\
re^{-\theta} &= 4.\\
\end{align}
The latter of these implies
\begin{align}
\ln r - \theta &= 2\ln 2\\
2\theta &\equiv -2\ln 2\ (\operatorname{mod}\ 2\pi)\\
\theta &\equiv -\ln 2\ (\operatorname{mod}\ \pi)\\
\end{align}
Let $\theta = k\pi - \ln 2$ with $k \in \mathbb{Z}$. Then
\begin{align}
\theta &= k\pi -\ln 2\\
\ln r &= \ln 2 + k \pi\\
r &= e^{ \ln 2 + k\pi }\\
r &= 2 e^{k\pi}\\
z &= 2e^{k\pi + i\left(k \pi -\ln(2) \right)}\\
z &= 2^{1-i} e^{k\pi \left( 1+i \right)}\\
\end{align}
One can verify the solution holds $\forall k \in \mathbb{Z}$:
\begin{align}
z^{1+i} &= 2^{ (1-i)(1+i) }e^{ k\pi(1+i)^2 }\\
&= 2^{ 1-i^2 }e^{ 2k\pi i }\\
&= 4.
\end{align}
One can tell that the solutions $z$ are distinct for distinct $k$ because the moduli $|z|$ are distinct. |
H: Poisson process (simple question)
Imagine you have two events starting at the same time. The duration time for each event is exponential, with different parameters. Knowing that one of the events is finished (we don't know which) at instant t, how does one get the distribution of the time between t and the moment the other event ends?
Thank you!
AI: Let the parameters of the exponentials $X$ and $Y$ be $a$ and $b$. Think of them as the lifetimes of two components. We will assume that at some time $t$, we test the system and discover that precisely one of the components is dead.
This has probability $(1-e^{-ta})e^{-tb}+(1-e^{-tb})e^{-ta}$. The (conditional) probability that it is $X$ is
$$\frac{(1-e^{-ta})e^{-tb}}{(1-e^{-ta})e^{-tb}+(1-e^{-tb})e^{-ta}}.$$
There is a similar expression for the probability it is $Y$.
If it is $X$, then by memorylessness, the probability that the additional time is $\le s$ is $1-e^{-sb}$. If it is $Y$, then the probability is $1-e^{-sa}$. Now one puts the pieces together. |
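Putting the pieces together explicitly (writing $p$ for the conditional probability displayed above that the dead component is $X$): the conditional distribution of the remaining time $S$ is the mixture
$$P(S\le s)=p\,(1-e^{-sb})+(1-p)\,(1-e^{-sa}),\qquad s\ge 0.$$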
H: Can a sequence of a function with a single variable be thought about as a function with two variables?
Long title, but first off is it logically ok to think of $\{f_n(x)\}$ as $f(n,x)$ where $n$ is restricted to a natural number?
Second, would this at all be useful? Thus far in my study of sequences of functions you deal strictly with convergence (similar to sequences in general) - either pointwise or uniform convergence.
It seems interesting to me because then we can consider some function of two variables as converging to a function with one. It seems as though this would be useful...
So it's as if we could have the definition of convergence be:
$$
\forall \; \epsilon > 0 \\
\exists \; N \in \mathbb{N} : \forall \; n \ge N, \; \lvert f(n,x) - f(x) \rvert < \epsilon
$$
and then could this possibly be extrapolated out to an $n$ with any domain one wishes?
EDIT: as Andre reminded me, sequences can be thought of as functions, i.e.
$$
a : \mathbb{N} \to \mathbb{R}
$$
however it seems to be then that this becomes the concatenation of functions, i.e.
$$
g : \mathbb{R} \to \mathbb{R}\quad
f : \{ \mathbb{R} , \mathbb{N} \} \to \mathbb{R}
$$
and $f_n (x) = f(g(x),n)$
again, would this be an ok way of thinking about sequences of functions, is it useful, and could this be extrapolated so that we can view any $n$ variable function as the concatenation of $n$ functions, and the variables dont need to have any specific domain?
AI: What you are describing can certainly be done and can be useful. It is akin to realizing that an ordinary sequence of real numbers is nothing but a function $\mathbb N \to \mathbb R$. So the study of sequences is subsumed by the theory of general functions. Moreover, given any space $X$ a sequence in $X$ is nothing but a function $\mathbb N \to X$, so now take $X$ to be the space of functions $\{f:\mathbb R \to \mathbb R\}$, then a sequence of elements in $X$ is nothing but a sequence $\{f_n(x)\}$ of single variable functions, and this amounts to a function $F:\mathbb N \to X$, with $F(n)=f_n$. We may now define $G:\mathbb N \times \mathbb R \to \mathbb R$ by $G(n,x)=F(n)(x)=f_n(x)$, so yes a sequence of functions can be seen as a function of two variables.
Now, to get a relation to limits you need to add a new point to $\mathbb N$ and topologize appropriately and, with some care, existence of the limit of a sequence becomes equivalent to the extendability of the corresponding function to this new point. This can be a useful way to look at things, though I don't think it is terribly useful.
Lastly, instead of considering sequences you can consider wilder domains to replace $\mathbb N$ by. It is common for instance to consider nets, which is what happens if instead of the natural numbers with their natural ordering you allow any directed sets. In the theory of metric spaces, sequences suffice for all topological investigations of any metric space, but the same does not hold for arbitrary topological spaces, where nets become necessary. |
H: Question in Hungerford's book
I'm trying to solve this question in Hungerford's Algebra
I didn't use the corollary:
And I used this map: $g:S^{-1}R_1\to S^{-1}R_2$, $g(r/s)=f(r)/f(s)$. I'm wondering how to prove using the corollary, is it hard?
Second, does the map I have chosen indeed extend $R$? I identified the elements $r/1$ of $S^{-1}R$ with the elements $r$ of $R$; is that right?
Any help is welcome
Thanks a lot
AI: You actually don't even need to use an explicit construction of the $F_i=\mathrm{Frac}(R_i)$. If $R_1\hookrightarrow R_2$ is an injection of domains, then, by considering the composite $R_1\hookrightarrow R_2\hookrightarrow F_2$, you get a ring map which, by injectivity, sends non-zero elements of $R_1$ to units (since every non-zero element of $F_2$ is a unit). So the universal property of $R_2\hookrightarrow F_2$ yields a unique ring map $\varphi:F_1\rightarrow F_2$ compatible with the original map $R_1\hookrightarrow R_2$. If the map $R_1\hookrightarrow R_2$ is in fact an isomorphism, then, by similar considerations, you get a unique ring map $\psi:F_2\hookrightarrow F_1$ compatible with $R_2\cong R_1$. Now consider $\psi\circ\varphi:F_1\rightarrow F_1$. It is compatible with $R_1\hookrightarrow F_1$ by construction, so, by the uniqueness part of the universal property of $R_1\hookrightarrow F_1$, it must be the identity. Similarly $\varphi\circ\psi$ is the identity, so $\varphi$ and $\psi$ are mutually inverse isomorphisms.
More generally, if $R$ is a ring, $S$ a multiplicative subset of $R$, and $S^{-1}R$ the localization, with $\alpha:R\rightarrow S^{-1}R$ the canonical map, then the only ring map $\varphi:S^{-1}R\rightarrow S^{-1}R$ such that $\varphi\circ\alpha=\alpha$ is $\varphi=\mathrm{id}_{S^{-1}R}$ (this follows from the uniqueness part of the universal property of $\alpha$). Put succinctly, localizations of $R$ have no non-identity $R$-algebra endomorphisms. This is a particular case of the fact that a localization map is an epimorphism in the category of commutative rings with identity (again this is because of the uniqueness clause in the universal property which characterizes $S^{-1}R$).
Of course, to prove the existence of $S^{-1}R$, one uses an explicit construction, most commonly as a set of equivalence class of fractions. Then, for a ring map $f:R\rightarrow R^\prime$ such that $f(S)$ is contained in the units of $R^\prime$, the unique map $\varphi:S^{-1}R\rightarrow R^\prime$ such that $\varphi\circ\alpha=f$ is given by the formula the OP gives: $\varphi(r/s)=f(r)f(s)^{-1}$. It is well-defined by the definition of the equivalence relation used in the construction of $S^{-1}R$ and the fact that $f(s)\in (R^\prime)^\times$. In the case $\varphi$ is an isomorphism of domains, once the maps between the fields of fractions are written down, it can be checked by hand (using fractions) that it is an isomorphism. So you don't have to use the universal property if you don't want to; the explicit construction as a ring of fractions is sufficient. Note also that, in the notation of the first paragraph, you apply the universal property of localization to the map $R_1\hookrightarrow R_2\hookrightarrow F_2$, not directly to $R_1\hookrightarrow R_2$, since there is no reason that non-zero elements in $R_1$ should be mapped to units in $R_2$; they are mapped to non-zero elements by injectivity, and because $R_2$ injects into $F_2$, the image of a non-zero element in $R_1$ under $R_1\hookrightarrow R_2\hookrightarrow F_2$ is a non-zero element of the field $F_2$, hence invertible, so you can apply the universal property. |
H: Solve $2\tan (x)\cos (x)=\tan (x)$, algebraically where $0 \leq x < 2\pi$
Please help me correct this if anything is wrong, or confirm it if I'm right.
Solve $\quad 2\tan x\cos x=\tan x,$ algebraically where $0≤x<2π$
$$2\tan(x) \cos (x) - \tan (x) = 0$$
$$\tan (x)(2\cos (x) - 1) = 0$$
$$\text{So, either}\;\;\tan (x) = 0 \Longrightarrow x \in \{0, π, 2π\} $$
$$\text{Or}\;\;2\cos(x)-1 = 0, \cos (x) = 1/2 \implies x \in \{-π/3, π/3\}$$
$$\text{Solutions}\;\;: x \in \{0, \pi/3, -\pi/3, \pi, 2\pi\}$$
2) Solve $2\sin^2(x)-\sin(x)-1=0$ where $0≤x<2π$
$$2\sin^2(x)-\sin(x)-1=0$$
$$\iff(2\sin(x)+1)(\sin(x)-1) = 0$$
Thus, either $\;2\sin(x) +1 = 0 \iff \sin x = -1/2 \implies x = -π/6, x = 7π/6$
or $\;\sin(x)-1 = 0 \iff \sin x = 1 \implies x = π/2$
$$\text{Solutions}\;\;x\in \{-π/6, π/2, 7π/6\}$$
AI: Almost!
You might want to give your first answer as inclusive of both possible sets of solutions - the solutions being those satisfying $\tan x = 0$ and/or satisfying $\cos x = 1/2$ - rather than as two answers.
Oops, just noted that you include $x = 2\pi$ as a solution to $\;\tan x = 0$, and it seems as though your domain for $x$ is written as $0\leq x \lt 2\pi$. If that's the correct domain for $x$, then omit $x = 2\pi$ as a solution to $(1)$.
And so you'll also need to omit $x = -\pi/3$, and instead include $x = 5 \pi/3$ as solutions to $\cos x = 1/2, $ in $(1)$. Same angle position, different representation, but this representation within the interval $0 \leq x\lt 2\pi$.
Finally, in the last problem, you list one solution as $x = - \pi/6$. Again this needs to be in your interval $0 \leq x \lt 2\pi$. So instead of $-\pi/6$, represent that angle as $2\pi - \pi/6 = 11\pi/6$. |
H: What am I missing here?
That's an idiot question, but I'm missing something here. If $x'= Ax$ and $A$ is linear operator in $\mathbb{R}^n$, then $x'_i = \sum_j a_{ij} x_j$ such that $[A]_{ij} =a_{ij} = \frac{\partial x'_i}{\partial x_j}$, therefore $\frac{\partial}{\partial x_i'} = \sum_j \frac{\partial x_j}{\partial x'_i} \frac{\partial}{\partial x_j}$. However $\frac{\partial}{\partial x_i'} = \frac{\partial}{\partial \sum_j a_{ij} x_j} = \sum_j a_{ij} \frac{\partial}{\partial x_j} = \sum_j \frac{\partial x'_i}{\partial x_j} \frac{\partial}{\partial x_j}$ !!! What's wrong here?
Thanks in advance.
AI: When you wrote $\frac{\partial}{\partial x_i'} = \frac{\partial}{\partial \sum_j a_{ij} x_j} = \sum_j a_{ij} \frac{\partial}{\partial x_j} $ the $a_{ij}$ somehow climbed from the denominator to the numerator |
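To spell the point out (assuming $A$ is invertible, as needed for $x'$ to serve as coordinates): from $x'=Ax$ we get $x=A^{-1}x'$, so the chain rule gives
$$\frac{\partial}{\partial x_i'}=\sum_j \frac{\partial x_j}{\partial x_i'}\frac{\partial}{\partial x_j}=\sum_j (A^{-1})_{ji}\,\frac{\partial}{\partial x_j},$$
which coincides with $\sum_j a_{ij}\frac{\partial}{\partial x_j}$ only when $A^{-1}=A^{T}$, i.e. when $A$ is orthogonal.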
H: Different kinds of systems
I got interested in learning more about Logic recently. The first thing I noticed is that this topic is a lot bigger than I expected. As I'm trying to make sense of it all (seeing the big picture) before delving into it, I'm having a lot of questions, and I thought of asking for help from people who are already acquainted with it.
I'm easily grasping some concepts (like formal language, language grammar, rules of inference, axioms, theorems) and they seem to have a common definition across many authors.
Other terms are giving me a really hard time. My question is:
Do the following terms have agreed definitions among logicians?
Formal System, Logical/Logic System, Proof Systems, Axiom System, Natural Deduction System
If not all of them are universally defined, are some of them pretty much agreed upon?
If the latter answer is positive, I would like to know where (books/sites/resources) I could find information regarding the definitions of those terms that are almost completely agreed upon. (Wikipedia doesn't seem to cover it all.)
Thanks a lot in advance
AI: The latter three you give are certainly common enough, I would think, that you could use them and logicians would know what you meant. These three terms all belong to a field called proof theory, and there are several introductory texts (see third paragraph). However, "formal system" and "logical system" are just too broad as terms to have explicit definitions. If anything, it's a philosophical question what counts as a logical system, or what counts as a formal system, and certainly there will be shades of grey in between.
But the fact of the matter is that, ultimately, people will use a variety of notations/conventions, and you'll inevitably find disparate uses of the same term coming from different authors. Your best option, I would think, is to just pick something and stick with it when you're first learning, and not worry about it. Afterwards, when you're more comfortable with the notions, you can decide for yourself what it makes sense to call them.
You may have come across this already, but Peter Smith has a nice "Teach Yourself Logic" guide with book recommendations. The terms that you mentioned seem mostly to involve proof-theoretic terms. The basic idea is that "axiom systems" (or "Hilbert systems") and "natural deduction systems" are two different kinds of proof systems. My opinion: if you want to learn more about proof systems, and you've already seen a bit of introductory mathematical logic, I would say Negri and von Plato is a good place to start. If you're completely new to logic, I would recommend reading some Barwise and Etchemendy to get a feel for how natural deduction systems work, and then reading Enderton's introduction to mathematical logic. |
H: Trigonometry: Applications
A helicopter is flying due west over level ground at an altitude of 222 m, and at a constant speed. From point A, which is due west of the helicopter, two measurements of the angle between the ground and the helicopter are taken. The first angle measurement is 6 degrees and the second measurement, taken one minute later, is 75 degrees. If the helicopter has not yet passed over Point A, how fast is the helicopter travelling, to the nearest kilometer per hour?
AI: Hints: Have you drawn a picture? Perhaps with two right triangles, both of which have adjacent side $222$? Do you know what SOHCAHTOA means? Do you know how to convert between meters and kilometers? What about minutes and hours? |
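For reference, carrying the hints through (a sketch, using the flat-ground right-triangle model): the horizontal distances from A are $222/\tan 6^\circ\approx 2112\ \text{m}$ and $222/\tan 75^\circ\approx 59\ \text{m}$, so the helicopter covers about $2053\ \text{m}$ in one minute, i.e. roughly $2053\times 60/1000\approx 123$ km/h.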
H: Trigonometry: Find the smallest Angles
Calculate the measure of the smallest angle in the triangle formed by the points A (-2, -3), B(2, 5) and C(4, 1).
AI: Hint: Use the distance formula to get the lengths of the three sides and then apply the Law of Cosines. |
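Carrying that out as a sketch: $AB=\sqrt{80}$, $BC=\sqrt{20}$, $CA=\sqrt{52}$, so the smallest angle is the one opposite the shortest side $BC$, namely the angle at $A$, and the Law of Cosines gives $\cos A=\frac{80+52-20}{2\sqrt{80}\sqrt{52}}=\frac{56}{\sqrt{4160}}\approx 0.868$, i.e. $A\approx 29.7^\circ$.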
H: Is a real number the limit of a Cauchy sequence, the sequence itself, a shrinking closed interval of rational numbers, or what?
I've been studying a collection of analysis books (one of them Bishop's Constructive version) and contemplating the reals. Correct me if I'm wrong, but I feel that I have seen the Cauchy sequence itself in some places and its limit in other places described as the real number. I do understand that nested closed intervals have a point as the limit of their countably infinite intersection. I can visualize an arbitrarily small interval of rationals, any of which has an equal claim (its seem to me, at least until other considerations are brought in) to being an "approximation" of this point. Unless we know the limit point (via the geometric series, for instance), what are we approximating if not a yet better approximation of a yet better approximation? (Maybe the "approximation" terminology works better for cuts.) I suppose I'm asking not only for the clarification of the dominant convention but also for "real talk" about the mathematical imagining of real numbers.
AI: Real numbers are the "equivalence class" of Cauchy sequences of rational numbers. You take the set of all Cauchy sequences of rational numbers and put an equivalence relation on it. When do you say two Cauchy sequences of rationals say $x_{n}$ and $y_{n}$ are related? They are related if for every given $\epsilon > 0$ ($\epsilon$ is rational here), there exists a $n_{0}$ such that $n > n_{0}$ implies
$$ |x_{n} - y_{n}| < \epsilon $$.
Now you say that each equivalence class is a real number. How do you see the rational numbers inside the real numbers then? So, suppose $r$ is a rational number; then the constant sequence $x_{n} = r$ is a Cauchy sequence, and the equivalence class it belongs to is the rational number $r$ in the set of real numbers. This is the identification of the rationals inside the reals.
This is one way of thinking about the real numbers. The other way is through Dedekind Cuts. You will find a wonderful exposition to Dedekind cuts in Walter Rudin Principles of Mathematical Analysis.
Yet another axiomatic way to think about the real numbers is that it is a complete ordered field, the natural numbers are the smallest inductive subset of the real numbers, the integers are the subgroup generated by the natural numbers, and the rationals are the field of fractions of the integers.
Pick and choose which you like. Any further discussion is welcome. |
H: plot a point which is in form of lat and long in a pixel map
I need to plot a point which is given in the form of lat and long.
This point is equivalent to a screen coordinate in pixels on my map.
I want to make a relation between these two kinds of points so that, if you want to plot a lat, long,
I can know what this position is on the map.
Here is a graphical representation of the markers [little wheels] on the corners of the map and the values for X and Y, and the equivalent points in lat, long.
So here are the X and Y pixel coordinates on my map and the relationship to lat, long.
I have some equations to represent this relationship, but I haven't figured out how to make it work, so that given any long, lat coordinate I know how to represent it in my pixel x and y.
How to establish this relationship in an equation?
thanks!
AI: The practical answer is to linearly interpolate it. The error for a spherical earth will not matter, and as the user zooms in the approximation gets more accurate. At small scales the earth is flat, and at large scales you don't care. The fact that you think a rectangle has corners that match supports this (this ignores the fact that distance east-west for a given range of longitude decreases as the latitude increases). So given $long=116.7, x=20+\frac {long-116.671221}{116.741259-116.671221}(1117-20)$ |
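In general, if two reference corners give $(\text{long}_0,\text{lat}_0)\mapsto(x_0,y_0)$ and $(\text{long}_1,\text{lat}_1)\mapsto(x_1,y_1)$, the same template is
$$x=x_0+\frac{\text{long}-\text{long}_0}{\text{long}_1-\text{long}_0}(x_1-x_0),\qquad y=y_0+\frac{\text{lat}-\text{lat}_0}{\text{lat}_1-\text{lat}_0}(y_1-y_0),$$
where the corner values themselves take care of the pixel $y$-axis usually pointing downward.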
H: Is there a contradiction in this exercise?
The following exercise was a resolution to this problem
Let $\displaystyle\frac{2x+5}{(x-3)(x-7)}=\frac{A}{(x-7)}+\frac{B}{(x-3)}\space \forall \space x \in \mathbb{R}$. Find the values for $A$ and $B$
The propose resolution was:
In order to isolate $A$ on the right side, multiply all the equation by $x-7$
$\displaystyle\frac{(2x+5)(x-7)}{(x-3)(x-7)}=\frac{A(x-7)}{(x-7)}+\frac{B(x-7)}{(x-3)}$
Now is my doubt. The resolution suggests that $x-7$ cancel out.
$\displaystyle\frac{(2x+5)}{(x-3)}=A+\frac{B(x-7)}{(x-3)}$
But $x-7$ can be equal to zero. In this situation, is it allowed to perform this operation? One can say "for every $x\neq7$", but in the next step the resolution says "for $x=7$ we have".
$\displaystyle\frac{(14+5)}{(7-3)}=A+\frac{B(0)}{(7-3)} \Leftrightarrow A=\frac{19}{4}$
I think there is a contradiction in this resolution.
AI: The incorrect bit is "$\forall x\in\Bbb R$." We must indeed rule out $x=7$ and $x=3$ in order to avoid problems. What we can still do in that case is take the limit as $x$ approaches $7$ (we can't just plug $x=7$ in, if we've declared that $x\ne 7$), and get the same result. |
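Another way to see why plugging in $x=7$ is harmless: clearing denominators, the requirement is $2x+5=A(x-3)+B(x-7)$ for all $x\neq 3,7$; two polynomials that agree at infinitely many points are identical, so this holds for every $x$, including $x=7$ and $x=3$.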
H: Expressing Vectors In Terms of Other Vectors
My professor asked us a few questions in class and asked us to think about them. He's going to reveal the solutions on Thursday, but I want to understand it before he talks about it on Thursday in greater detail. If anyone could explain any of the things below, it would be appreciated. Thanks in advance. (and if you can't, don't worry- I can ask the professor after class on Thursday)
AI: For the first, we want $w-u=a(v-u)$, or more pleasantly $w=(1-a)u+av$.
This will be of use in the second problem. Compute the point in the desired $2/3$ to $1/3$ position, and note that the answer is symmetrical. |
H: Is there a (real) number which gives a rational number both when multiplied by $\pi$ and when multiplied by $e$?
Besides $0$ of course. What about addition and exponentiation? I would think there's no such number, but I'm not sure if I could prove it.
AI: It is open whether there is such a number. The reason is that we do not know whether $e/\pi$ is rational, but this is equivalent to your question (for a recent accessible reference, see for example here and the links provided there):
Note that if $xe$ and $x\pi$ are rational, then so is their quotient, $e/\pi$. Conversely, if $e/\pi$ is rational, then take $x=1/\pi$ and note that $xe$ and $x\pi$ are rational. |
H: Relating properties of $H$ to properties of $G/H$
$G$ is a group and $H$ is a normal subgroup of $G$. Prove that $G/H$ is cyclic iff there is an element $a \in G$ with the following property: for every $x \in G$, there is some integer $n$ such that $xa^n \in H$.
Can anyone help, since I have no idea how to work on this question! Thanks sooooo much!
AI: Since $H$ is a normal subgroup of $G$ so $G/H$ defines a group itself. Let $G/H$ be cyclic group, so there is an element $gH\in G/H, ~~g\in G$ that $$G/H=\langle gH\rangle$$ What does this latter thing mean? It means that if $g'\in G$ and so $g'H\in G/H$, we can have: $$g'H=(gH)^k$$ for some integers $k$. Equivalently, we have $$g'H=g^kH$$ (Why?). Because $H$ is normal in $G$. Well, so we have: $$g'(g^k)^{-1}H=H$$ so we have $$g'(g^k)^{-1}\in H$$ for some integers $k$. Conversely,... |
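For the converse direction the same idea works in reverse: if such an $a$ exists, then for every $x\in G$ there is an integer $n$ with $xa^n\in H$, i.e. $xH=(a^{-1}H)^{n}=(aH)^{-n}$, so every coset is a power of $aH$ and $G/H=\langle aH\rangle$ is cyclic.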
H: A numerical aptitude problem
I have got this problem that says : Given digits $2,2,3,3,3,4,4,4,4$ how many distinct $4$ digit numbers greater than $3000$ can be formed.
The options are: $50,51,52,54.$
Is there any way I can logically solve the problem instead of manually counting ?
Also, almost the same problem has been discussed here, but I am not satisfied with the answers.
AI: Clearly the only allowed numbers have to have either a $4$ or a $3$ in the thousands place.
If it's a $4$, there are three possible digits to put in the remaining three positions. So the number of possibilities here are $3\times 3\times 3 - 1$ ($-1$ to account for the number $4222$ which is not possible).
Similarly if $3$ is in the thousands place, there are $3\times 3 \times 3-2$ possibilties ($3333$ and $3222$ being excluded).
So the total is $27+27-1-2 = 51$. |
H: How find the $\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}(-1)^{m+n}\frac{1}{n(m+2n)}$
find the value
$$\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}(-1)^{m+n}\dfrac{1}{n(m+2n)}$$
I think this is a good problem. Thank you, everyone.
I find
$$\int_{0}^{1}\dfrac{\ln{(1+x^2)}}{1+x}dx=\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}(-1)^{m+n}\dfrac{1}{n(m+2n)}$$
AI: \begin{align*}
\sum_{n=1}^{\infty} \sum_{m=1}^{\infty} (-1)^{m+n} \frac{1}{n(m+2n)}
&= \sum_{n=1}^{\infty} \sum_{m=1}^{\infty} \int_{0}^{1} (-x)^{m-1} \cdot \frac{(-1)^{n-1} x^{2n}}{n} \, dx\\
&= \int_{0}^{1} \frac{\log(1+x^2)}{1+x} \, dx.
\end{align*}
For the last integral, you can refer to this site. |
H: How to prove $\sum_1^k W_i$ is the smallest subspace of $V$ that contains $W_1\bigcup\cdots\bigcup W_k$
Definition: Let $W_1, \cdots , W_k$ be subspaces of a vector space $V$. Their sum is
$\sum_1^k W_i = \{x_1+ \cdots + x_k \mid x_i\in W_i \}$
Proposition:
$\sum_1^k W_i$ is a subspace
$W_i \subseteq \sum_1^k W_i\ \forall\ i$
$\sum_1^k W_i$ is the smallest subspace that contains $W_1 \bigcup \cdots \bigcup W_k$
Question:
I am trying to prove these simple propositions and am getting a little stuck on number 3. Just to check for number one I would choose two sums like $\sum_1^k W_i$ and $\sum_1^k N_i$ and say let them both $b\subset V$ and then show they are closed under addition and scalar multiplication. If I say $\sum_1^k W_i = \sum_1^k x_i$ and $\sum_1^k N_i = \sum_1^k m_i$ then is it okay to say $\sum_1^k \left( x_i + m_i \right)$ is contained in $V$? I believe so because summing the elements of a subspace should still be contained in the vector space.
My real problem is getting started with 3. I believe I want to show $\sum_1^k W_i$ generates $V$ and therefore is the smallest subspace that contains the union $W_1 \bigcup \cdots \bigcup W_k$, but I am not sure how to go about this or justify it (just what I believe to be the right direction). A little guidance on this simple problem would be a huge help, thank you.
AI: To show something is the smallest such thing that satisfies a certain condition, show that it is contained in anything else satisfying the condition.
In this problem, we want to show that $W_1+\ldots+W_k$ is the smallest subspace that contains $W_1\cup\ldots \cup W_k$. So suppose $U$ is some other subspace that contains $W_1\cup\ldots\cup W_k$, and show that this necessarily implies $W_1+\ldots+W_k\subset U$.
Consider any element $x_1+\ldots+x_k\in W_1+\ldots+W_k$. Is this element in $U$? Since $U$ is a subspace, it is. See if you can show this fact.
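One way to fill in that step: each $x_i$ lies in $W_i\subseteq W_1\cup\ldots\cup W_k\subseteq U$, and $U$ is closed under addition, so $x_1+\ldots+x_k\in U$.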
It will follow that $W_1+\ldots+W_k\subset U$, which is exactly what we wanted to show. |
H: "Goldbach's other conjecture" and Project Euler - writing 1 as a sum of a prime and twice a square
From Problem 46 of Project Euler :
It was proposed by Christian Goldbach that every odd composite number can be written as the sum of a prime and twice a square.
$$9 = 7 + 2 \cdot 1^2$$
$$15 = 7 + 2 \cdot 2^2$$
$$21 = 3 + 2 \cdot 3^2$$
$$25 = 7 + 2 \cdot 3^2$$
$$27 = 19 + 2 \cdot 2^2$$
$$33 = 31 + 2 \cdot 1^2$$
It turns out that the conjecture was false.
What is the smallest odd composite that cannot be written as the sum of a prime and twice a square?
As soon as I read this problem, I immediately thought that the smallest odd number, 1, was not on the list of examples. Then I thought that 1 could be the answer. The only way I could write 1 in a form close to the problem's is $1=1+2 \cdot 0^2$, which of course is not a valid representation. I also tried to give 1 as the answer, but the site didn't accept it. So how can we write 1 as a sum of a prime and twice a square?
AI: If we take primes to be positive, it cannot be done. This is because primes are greater than $1$, and twice a square number is positive.
This is not contradictory to the problem though, because $1$ is not composite.
If we allow primes to be negative (considering prime elements in the ring of integers), we can write $1=-7+2\cdot2^2$. |
H: Group of inner automorphisms of a group $G$
Let $G$ be a group. By an automorphism of $G$ we mean an isomorphism $f: G\to G$
By an inner automorphism of $G$ we mean any function $\Phi_a$ of the following form:
For every $x\in G$, $\Phi_a(x)=a x a^{-1}$.
Prove that every inner automorphism of $G$ is an automorphism of $G$
which means I should prove $\Phi_a$ is an isomorphism? Any suggestions? Thanks.
AI: $\phi_a(xy)=a(xy)a^{-1}=axa^{-1}aya^{-1}=\phi_a(x)\phi_a(y)$
$\phi_a(x)=\phi_a(y)\implies axa^{-1}=aya^{-1}\implies x=y$
$\phi_a$ is also surjective since for each $y \in G $ there exists $x=a^{-1}ya$ s.t. $\phi_a(x)=y$ |
H: the angle between $Ax$ and $x = [0, 1]^T$ is
If the trace and the determinant of an orthogonal $2 \times 2$ matrix A are 1, then
the angle between $Ax$ and $x = [0, 1]^T$ is
(i) $25^{\circ}$
(ii) $30^{\circ}$
(iii) $45^{\circ}$
(iv) $60^{\circ}$
I can just see that the characteristic equation is $x^2-x+1=0$; could anyone tell me how to solve this one?
AI: An orthogonal matrix is either a reflection or a rotation.
The determinant is $+1$, so it is a rotation.
A $2\times 2$ rotation matrix has the form
$$ \left( \begin{array}{cc} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{array} \right).$$
The trace is also $+1$, i.e. $2\cos\theta = 1$. |
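So $\cos\theta=\tfrac12$, i.e. $\theta=\pm 60^{\circ}$, and a rotation through $60^{\circ}$ makes an angle of $60^{\circ}$ with every nonzero vector, in particular with $x=[0,1]^T$; hence the answer is (iv).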
H: Proving that the medians of a triangle are concurrent
I was wondering how to prove Euclid's theorem: The medians of a triangle are concurrent.
My work so far:
First of all, my interpretation of the theorem is that if a line segment is drawn from the midpoint of each of the 3 sides to the vertex opposite it, the three segments intersect at one point.
Since a triangle has three sides and each side must have a median, I figure that at least 2 of them have to intersect as the lines can't be parallel.
May anyone explain further? Thank you!
AI: I could do that by using Thales's Theorem. Sorry if I did it on a paper. It is really hard to do on this page. |
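Independently of the argument done on paper, here is a standard short argument in vectors: the point $G=\tfrac13(A+B+C)$ lies on the median from $A$, since $A+\tfrac23\left(\tfrac{B+C}{2}-A\right)=\tfrac13(A+B+C)$, and by symmetry the same point lies on the other two medians, so all three pass through $G$.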
H: Help with understanding a derivation in a book
I'm reading a book on time series analysis and I came across with a derivation I cannot understand. The following picture is a part from my book and I have highlighted with red color the part I'm confused with. I have used other colors to highlight some other points in the picture. I tried to include as much information as possible so it would help answering:
Example $\textbf{A.2}$ Time Invariant Linear Filter
As an important example of the use of the Riesz-Fisher Theorem and the properties of mean square convergent series given in $\text{(A.5)-(A.7)}$, a time-invariant linear filter is defined as a convolution of the form
$$y_t=\sum_{j=-\infty}^\infty a_jx_{t-j}\tag{A.9}$$
for each $t=0,\pm1,\pm2,\ldots,$ where $x_t$ is a weakly stationary input series with mean $\mu_x$ and autocovariance function $\gamma_x(h)$, and $a_j$, for $j=0,\pm1,\pm2,\ldots$ are constants satisfying
$$\color{#FF7F27}{\boxed{\color{black}{\displaystyle\,\,\sum_{j=-\infty}^\infty |a_j|<\infty.}}}\tag{A.10}$$
The output series $y_t$ defines a filtering or smoothing of the input series that changes the character of the time series in a predictable way. We need to know the conditions under which the outputs $y_t$ in $\text{(A.9)}$ and the linear process $\text{(1.29)}$ exist.
Considering the sequence
$$y_t^n=\sum_{j=-n}^na_jx_{t-j},\tag{A.11}$$
$n=1,2,\ldots,$ we need to show first that $y_t^n$ has a mean square limit. By Theorem A.$1$, it is enough to show that
$$E|y_t^n-y_t^m|^2\to0$$
as $m$,$n\to\infty$. For $n>m>0$,
$$\begin{align}E|y_t^n-y_t^m|^2&= E\,\left|\sum_{m<|j|\le n}a_jx_{t-j}\right|^2\\
&=\sum_{m<|j|\le n}\sum_{m\le|k|\le n}a_ja_kE(x_{t-j}x_{t-k})\\
&\le\sum_{m<|j|\le n}\sum_{m\le|k|\le n}|a_j||a_k||E(x_{t-j}x_{t-k})|\\
&\le\sum_{m<|j|\le n}\sum_{m\le|k|\le n}|a_j||a_k|(E|x_{t-j}|^2)^{1/2}(E|x_{t-k}|^2)^{1/2}\\
&\color{#ED1C24}{\boxed{\color{black}{=\color{#FFF200}{\boxed{\color{black}{\gamma_x(0)}}}\,\color{#FF7F27}{\boxed{\color{black}{\displaystyle\left(\sum_{m\le|j|\le n}|a_j|\right)^2}}}\to 0}}}\color{#ED1C24}{\boldsymbol{\longleftarrow} \text{Why does this part converge to zero?}}
\end{align}$$
as $m$,$n\to\infty$, because $\gamma_x(0)$ $\color{#FFF200}{\underline{\color{black}{\text{is a constant}}}}$ and $\{a_j\}$ $\color{#FF7F27}{\underline{\color{black}{\text{is absolutely summable}}}}$ (the second inequality follows from the Cauchy-Schwarz inequality).
Although we know that the sequence $\{y_t^n\}$ given by $\text{(A.11)}$ converges in mean square, we have not established its mean square limit. It should be obvious, however, that $y_t^n\to^{ms}y_t$ as $n\to\infty$, where $y_t$ is given by $\text{(A.9)}$.$^1$
$\text{_________}$
$^1$ If $S$ denotes the mean square limit of $y_t^n$, then using Fatou's Lemma, $E|S-y_t|^2=E\lim \inf_{n\to\infty}|S-y_t^n|^2\le\lim\inf_{n\to\infty}E|S-y_t^n|^2=0$, which establishes that $y_t$ is the mean square limit of $y_t^n$.
My question is: Why does the red part converge to zero? We know that $\gamma_x(0)$ is a constant and that $\{a_j\}$ is absolutely summable, but couldn't it be that the sum $\left(\sum_{m\leq|j|\leq n}|a_j|\right) \geq 1$? In this case the red part wouldn't converge to zero right?
It is absolutely summable, but it doesn't have to be $< 1$ right? If the sum $\left(\sum_{m\leq|j|\leq n}|a_j|\right)$ would be less than $1$ or $\gamma_x(0)$ would equal $0$, I could understand this. What obvious am I missing? =)
Thank you for any help!
AI: If the $a_j$ are absolutely summable, then the tail sums of the convergent series $\sum_j|a_j|$ must tend to zero, i.e.
$$\lim_{m\rightarrow\infty}\sum_{|j|\ge m}|a_j|=0$$
(this is the Cauchy criterion for the convergent series; note that $|a_j|\to 0$ alone would not suffice). Since $\sum_{m\le|j|\le n}|a_j|\le\sum_{|j|\ge m}|a_j|$ for $n>m$, this implies
$$\lim_{m,n\rightarrow\infty}\left ( \sum_{m\le|j|\le n}|a_j|\right )^2=0,$$
even though the inner sum may well be $\ge 1$ for small $m$; multiplying by the constant factor $\gamma_x(0)$ does not affect the limit. |
H: Let S, T be two subspaces of $\mathbb{R}^{24}$ such that $\dim(S) = 19 $ and $\dim(T ) = 17.$
Let S, T be two subspaces of $\mathbb{R}^{24}$ such that $\dim(S) = 19 $ and $\dim(T ) = 17.$
Then
(i) the smallest possible value of $\dim(S ∩ T )$ is 2
(ii) the smallest possible value of $\dim(S + T )$ is 19
(iii) the largest possible value of $\dim(S ∩ T )$ is 18.
(iv) the largest possible value of $\dim(S + T )$ is 22.
Well, from the formula $\dim(S+T)=\dim S+\dim T-\dim (S\cap T)$, what can be done? I am not able to conclude; do I have to use something else?
AI: HINTS
$\operatorname{dim}(S\cap T)=\operatorname{dim}(S)+\operatorname{dim}(T)-\operatorname{dim}(S+T)$ will be minimum when $\operatorname{dim}(S+T)$ is maximum, that is, when $S+T=\mathbb{R}^{24}$.
Again, the maximum value of $\operatorname{dim}(S\cap T)$ is $17$, so the minimum possible value of $\operatorname{dim}(S+T)$ is $19+17-17=19$. |
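Putting the two hints together: $\dim(S\cap T)$ ranges from $19+17-24=12$ to $17$, and $\dim(S+T)$ ranges from $19$ to $24$, so of the four statements only (ii) is correct.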
H: need to find the rank of $T$
$B=\begin{pmatrix}2&-2\\-2&4\end{pmatrix}$, $T:M_2(\mathbb{R})\to M_2(\mathbb{R})$ is defined by $T(A)=BA$, and we need to find the rank of $T$. I have calculated it by hand and got that the rank is $4$; could anyone tell me if there is a way to see this without calculating?
AI: $T$ is a linear map of spaces of the same dimension, and it is injective (an inverse is given by multiplying by $B^{-1}$ on the left) and therefore an isomorphism. So it is surjective, hence rank 4. |
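For completeness, $B$ is indeed invertible: $\det B = 2\cdot 4-(-2)(-2)=4\neq 0$.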
H: Can anybody validate this WolframAlpha computation?
Can anybody validate this WolframAlpha computation?
http://www.wolframalpha.com/input/?i=GCD%5BDivisorSigma%5B1%2Cx%5D%2C+DivisorSigma%5B1%2Cx%5E2%5D%5D
Thank you!
AI: PolynomialGCD is a simple operation: it takes the GCD of polynomials. It doesn't know about transcendental functions or number-theoretic operations or anything: anything it sees that isn't a number is considered a formal variable.
So, it is treating DivisorSigma[1,x] as one indeterminate variable, and DivisorSigma[1,x^2] as another indeterminate variable. As these are different indeterminate variables, their polynomial gcd is 1, no different from asking for the gcd of x and y.
If there are any algebraic relationships between $\sigma_1(x)$ and $\sigma_1(x^2)$, you have to work them out and do an appropriate substitution before plugging into the gcd function.
GCD is even simpler: it acts on integers. If you don't give it integers, it can't do anything. WolframAlpha tried to be helpful here, and it recognized that you didn't give it integers, so it assumed you meant to take the gcd as polynomials.
Mathematica (and by extension WolframAlpha) can't even (directly) tell you, for example, that $\gcd(x(x+1), 2) > 1$ for every integer value you plug in for $x$. |
H: How can I create a presentation of a group?
In Dummit and Foote, the notion of a presentation is introduced in section 1.2, which discusses the dihedral group of order $2n$.
After this, presentations rarely come up in the exercises or in the material of the sections, but suddenly in chapter 4 it asks to create a presentation of particular groups, and I don't know the right methods and strategies (if any exist) to follow.
So, any help with that? I think that presentations and relations play a fundamental role at more advanced levels of group theory, so it's good to develop my skills with them from now on (just a feeling! Am I wrong?)
AI: There is no general strategy that you can follow to obtain a short presentation for a given group. Some things to look for are elements of particular orders that you can easily identify in the group. If you have an element of order $k$, then add a generator $g$ and the relation $g^k=e$. Next, among such elements, or powers of such elements, look for conjugation relations and similar things. That is, suppose you have two generators $g,h$ in your presentation and you want these to correspond to two elements in the group that you have. Then check for their orders and the way they multiply with each other and with powers of each other. Among these try to find those that look like the most important ones (those from which the others follow). For small groups this will soon enough lead to a presentation after enough trial and error and your intuition will improve. However, this presentation may be shortened by noticing that some relations are redundant as well as by choosing different generators. |
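As a concrete illustration of this process, the example from section 1.2 that you mention: in the dihedral group of order $2n$ one picks a rotation $r$ of order $n$ and a reflection $s$ of order $2$, observes the relation $srs=r^{-1}$, and checks that these already determine the whole multiplication table, giving the presentation $\langle r,s\mid r^n=s^2=1,\ srs=r^{-1}\rangle$.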
H: Every normal matrix is diagonalizable?
Sorry, I mistyped the question. The real question is:
Every normal matrix is diagonalizable?
and the answer is False.
Can you give me a counter example?
AI: Let
$$A=\left(\begin{matrix}1&i\\
i&-1\end{matrix}\right)$$
then $A$ is a symmetric complex matrix and its characteristic polynomial
$$\chi_A(x)=\det(A-xI)=x^2$$
then if $A$ is diagonalizable then it's similar to the zero matrix which is clearly false.
Added: A complex matrix $A$ is normal if $AA^*=A^*A$, and any such matrix is diagonalizable; see Normal matrix. |
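Note also that the matrix $A$ above is complex symmetric but not normal: a direct computation gives $AA^*=\pmatrix{2&-2i\\ 2i&2}$ while $A^*A=\pmatrix{2&2i\\ -2i&2}$, so it does not contradict the fact that normal matrices are diagonalizable.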
H: Proving divisibility of numbers
Let us take a two digit number and add it to its reverse. We have to prove that the result is divisible by 11.
In the same way, if we subtract one of the numbers from the other, the result is divisible by 9. How can we explain these facts?
AI: Hint: any two digit number $ab$ can be represented in base 10 as: $$ab=10a+b$$with $1\le a\le 9$ and $0\le b \le 9$. Example: $$43=10(4)+3$$ |
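From there the claims follow quickly: $(10a+b)+(10b+a)=11(a+b)$, which is divisible by $11$, and $(10a+b)-(10b+a)=9(a-b)$, which is divisible by $9$.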
H: This tower of fields is being ridiculous
Suppose $K\subseteq F\subseteq L$ as fields. Then it is a fact that $[L:K]=[L:F][F:K]$. No other hypotheses are needed (I'm looking at you, Hungerford V.1.2).
Now obviously $[\mathbf{C}:\mathbf{R}]=2$. But consider the fact that the algebraic closure of $\mathbf{R}(t)$ has cardinality $2^{\aleph_0}$---this implies that $\overline{\mathbf{R}(t)}\cong\mathbf{C}$, so in particular we can embed $\mathbf{R}(t)$ into $\mathbf{C}$.
If we embed $\mathbf{R}$ into $\mathbf{R}(t)$ in the natural way, we get
$$\mathbf{R}\subset\mathbf{R}(t)\subset\mathbf{C}.$$
So our good fact at the beginning would have us believe
$$2=[\mathbf{C}:\mathbf{R}(t)][\mathbf{R}(t):\mathbf{R}].$$
What is the meaning of this? Either these two degrees really are both finite or (more likely) I've made a huge mistake. Perhaps it would all be clear if I were more precise about "embedding" $\mathbf{R}(t)$ in $\mathbf{C}$.
AI: Your mistake is that $[\mathbb{C} : \mathbb{R}] \neq 2$!
To define the degree of a field extension is not enough to know the two fields involved (except in special cases): you actually have to know what the field extension is. In this case, the field extension $\mathbb{R} \to \mathbb{C}$ you constructed is not the field extension that comes from the inclusion $\mathbb{R} \subseteq \mathbb{C}$, and therefore it can, and does, have different degree.
A simpler (and more dramatic!) example of this phenomenon is the field extension $F(x) / F(x)$ given by the embedding $F(x) \to F(x)$ that sends $x \to x^2$. In this case, we have
$$ [F(x) : F(x)] = 2 $$
The supposition "$K \subseteq F \subseteq L$ as fields" usually implies more than it says: it also implies that the letter $F$ will sometimes be used not for a field, but for the field extension defined by the inclusion $K \to F$. Occasionally we might disambiguate by writing $F/K$ rather than $F$. Similarly, $L$ will sometimes mean a field, and it will sometimes mean $L/F$ and it will sometimes mean $L/K$.
This sort of technicality is the price we pay for the greater flexibility of allowing extensions to be any injective map $F \to E$, rather than requiring field extensions to come from actual subset relationships $|F| \subseteq |E|$ among sets. (In that last expression, $|F|$ means the underlying set, and $\subseteq$ has its usual set-theoretic meaning.)
P.S. if you're ever in the situation where you consider the field extension $F(x) \to F(x)$ above, do yourself a favor and rename the indeterminate variable in one of the two copies of $F(x)$ rather than blithely forge ahead as I did above for dramatic effect. Similarly, it's probably wise to add a decoration to $\mathbb{R}$ to indicate when you are using it in a way inconsistent with its canonical inclusion into $\mathbb{C}$. (or decorate $\mathbb{C}$) |
H: recurrence relation with induction
The following recurrence relation:
$T(n)=T(n-1)+n$, which solves to $T(n)=1+\frac{n^2+n}{2}=\Theta(n^2)$, so this means that:
there are $c_1, c_2, n_0 > 0$ such that $c_1n^2\le 1+\frac{n^2+n}{2}\le c_2n^2$ for all $n\ge n_0$;
the second inequality is easy, but how does one prove the first by induction:
$c_1n^2\le 1+\frac{n^2+n}{2}$?
AI: Choose $c_1=1/2$, then you can directly see that
$$c_1n^2\le 1+\frac{n^2+n}{2}$$
If you want to use induction, you get with $c_1=1/2$
$$\frac{(n+1)^2}{2}\le 1+\frac{(n+1)^2}{2}+\frac{(n+1)}{2}$$
This is equivalent to
$$0\le 1+\frac{(n+1)}{2}$$
which is obviously satisfied for all $n\ge 0$. |
H: Approximate Nash Equilibrium
I am sort of confused by the notion of approximate Nash equilibrium. I will try to express my confusion in the following exercise.
Problem. Is it true that for every two-player game where every player has $n$ available actions and all payoffs $\in [0,1]$, there exists an approximate $\epsilon$-Nash equilibrium where all players' probabilities are integer multiples of $\Omega(\epsilon/n)$, for all $\epsilon$?
$\sigma$ is an $\epsilon$-Nash equilibrium if $\forall$ player $i$ and $\forall$ action $j$ of $i$, $u_i(\sigma_{-i},\sigma_{j}) - u_i(\sigma) \leq \epsilon$. In other words, by deviating to any other action no player can gain more than $\epsilon$.
The problem is to prove that for every 2-player game with $n$ actions, there exists an $\epsilon$-Nash equilibrium where all probabilities are bounded below by $\frac{\epsilon \cdot k}{n}$, for some positive $k>0$, so there is no probability that equals $0$.
First of all every game has at least one mixed Nash equilibrium, therefore the problem reduced to show that there exists some strategy for players with probability of actions is bounded below by $\frac{\epsilon \cdot k}{n}$ and difference between payoff of Nash equilibrium and the payoff of strategy is at most $\epsilon$.
Let's assume the worst case when the given game has pure Nash equilibrium, such that the single action has probability $1$ and $n-1$ others are $0$, therefore in $\epsilon$-Nash equilibrium it's required to increase every probability by $\frac{\epsilon}{n}$ and there are $n-1$ such actions and the maximum payoff of the action is 1. Therefore we change the payoff at most by $\frac{\epsilon}{n} \cdot (n-1) \cdot 1 < \epsilon$.
It looks like a right reasoning regarding the problem, but still it feels like not enough rigorous proof, in addition the same is applied to the second player and then we have the difference $>\epsilon$.
If you have any idea how to proceed with the proof please share it with us.
AI: I think you proved it nicely. As you say, a Nash equilibrium in mixed strategies always exists. Without loss of generality you can check only games where the Nash equilibria are joint strategies in which some players play a pure strategy, that is, give weight $1$ to one strategy and weight $0$ to all other strategies.
Now, a joint strategy is $\epsilon$-Nash equilibrium if for every player holds that when he deviates to another strategy his payoff increases at most $\epsilon$.
We are claiming that the joint strategy $s$ where:
1. you put weight $\frac{\epsilon}{n}$ on the strategies to which a player who played a pure strategy gave weight $0$;
2. you put weight $\frac{\epsilon}{n}$ on every strategy that had weight less than $\frac{\epsilon}{n}$ in a player's mixed equilibrium strategy;
is indeed an $\epsilon$-Nash equilibrium.
1. By deviating, such a player gains at most $\frac{\epsilon}{n} \cdot (n-1) < \epsilon$. And this holds for every player who played the worst-case-scenario strategy, i.e. a pure strategy.
2. By deviating, such a player can gain even less than the players from 1.
Therefore no player can gain more than $\epsilon$ and you have it. |
H: How does this strange phenomenon happen in quotients of groups?
In my question, here, I learned a strange fact from the comments on the answer of Landscape, which was a surprise for me!
The surprise is this:
if $G$ is a group, and $H$ and $K$ are two normal subgroups of $G$ such that $H$ is isomorphic to $K$, then
it is NOT necessarily true that $G/H$ is isomorphic to $G/K$.
I always thought that two isomorphic groups have the same algebraic structure and so we can treat them as the same thing, and in fact use them interchangeably!
But an example from the member Landscape broke down this belief.
So, why does this happen for quotient groups? Why can two subgroups of the same isomorphism type lead to different groups under quotients?
Also, in which cases may two isomorphic groups lead to different things, and in which cases is it fine to treat them as the same thing?
Up to now, I have thought this phenomenon may happen ONLY for infinite groups, not finite groups (and it may also not happen for infinitely generated groups, I think!).
Any clarification please?
AI: The quotient of two groups is an operation on a group and one of its subgroups. The way that the subgroup sits inside the bigger group is relevant here. Looking just at the subgroup and replacing it with an isomorphic copy of it inside the bigger group may change the way that the smaller group sits inside the bigger one. As a simple example, consider $G=\mathbb Z_3\times S_3$. Now, consider the two subgroups $H=\mathbb Z_3\times \{e\}$ and $K=\{e\}\times \{e,(123),(132)\}$. Both are normal and isomorphic to each other. But $G/H\cong S_3$ while $G/K\cong \mathbb Z_3\times \mathbb Z_2$.
In short, isomorphic objects can safely be interchanged (always with your eyes open though) inside the category where they are isomorphic. Once a construction is occurring outside of that category you can't just treat things that are isomorphic in some category as the same. When the category changes, the notion of isomorphism changes. In the case above, if you consider the category whose objects are $(G,H)$ where $G$ is a group and $H$ is a normal subgroup of $G$, and arrows $(G,H)\to (G',H')$ are group homomorphisms $\psi :G\to G'$ such that the restriction of $\psi$ to $H$ lands in $H'$, then isomorphic objects here will have isomorphic quotients. The quotient operation here becomes a functor from this category $C$ to the category $Grp$ of groups.
In this category, for $(G,H)$ and $(G',H')$ to be isomorphic means that there is a group isomorphism $\psi:G\to G'$ such that the restriction of it to $H$ is an isomorphism onto $H'$. This says exactly that the isomorphism does not just change the names of the elements of the subgroups, but also the names of the elements of the bigger groups, and in such a way that everything is consistent. The counter example I gave above shows that $(\mathbb Z_3\times S_3, \mathbb Z_3\times \{e\})$ is not isomorphic to $(\mathbb Z_3\times S_3,\{e\}\times \{e,(123),(132)\})$. |
H: Invariance under transition of integral
I am reading a calculus book, and in section on invariance under transition of integral I get the following passage I don't quite understand:
Suppose we have a function $s(x)$. Then we introduce another function $t(x)$, such that $t(x)=s(x)+c$, i.e. $t(x)$ is $s(x)$ shifted to the left by some arbitrary $c$.
Now the textbook says that ordinate sets of these two functions are related by the equation $t(x)=s(x-c)$. And if $s(x)$ is defined on $[a,b]$, then $t(x)$ is defined on $[a+c,b+c]$.
I can understand that if $s(x)$ is defined on $[a,b]$, then $t(x)$ is defined on $[a+c,b+c]$, what I don't understand is that why do we have $t(x)=s(x-c)$ if $t(x)=s(x)+c$?
I know this is a quite stupid question, but I can't figure this out on my own.
AI: It’s not a stupid question: you’ve been victimized by two bad typos. The function $t(x)$ is supposed to be $t(x)=s(x+c)$, not $s(x)+c$; with that definition it’s no problem to check that $s(x)=t(x-c)$ (note the interchanging of the $s$ and $t$ compared to the book). Note that replacing $s(x)$ with $s(x)+c$ would actually translate the graph up by $c$ (so down if $c<0$), not to the left. |
H: Problem in computing of integral by substitution.
I want to compute an integral like
$\int_0^{+\infty} \ln(1+x)e^{-x}\,\mathrm dx$.
Then denote $\mu = 1-e^{-x}$, so $x=-\ln(1-\mu)$. Substitute this into the integral, we get
$$\int_0^1 \ln(1-\ln(1-\mu))(1-\mu)\,\mathrm d(-\ln(1-\mu)) = \int_0^1 \ln(1-\ln(1-\mu))\,\mathrm d\mu$$
However, $\ln(1-\ln(1-\mu))$ goes to infinity if $\mu$ goes to 1, which makes the latter integral uncomputable. But the original integral has no such problem. Is there any error in my substitution?
AI: You reduce the integral over the infinite interval to one over a finite interval. You should not be surprised that the function under the integral may become unbounded. The point is that you may think of the value of the integral as an area under the graph of the function. Now, when you change a variable, you make a corresponding change of the coordinate system in which the area is defined. That should give you an intuition for why the function may (but doesn't have to) become unbounded. However, it still stays integrable.
Your operations with $\mu$ seem fine to me. To support this, I computed both integrals in Mathematica; they're both well-defined and have the same value.
For some reason, Wolfram Alpha computes only an approximate value of the second integral. |
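(Added numerical check, assuming SciPy is available; the digits quoted in the comment are approximate.) Both forms of the integral can be evaluated by adaptive quadrature and agree, which supports the substitution:
import numpy as np
from scipy.integrate import quad

original, _ = quad(lambda x: np.log(1 + x) * np.exp(-x), 0, np.inf)
substituted, _ = quad(lambda m: np.log(1 - np.log(1 - m)), 0, 1)
print(original, substituted)   # both are roughly 0.596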
H: Well definition of algebra homomorphism
Suppose we have unitary $\mathbb{K}$-algebras $A,B$ and $A$ is generated by $\{a_1,...,a_n\}$. Let $f: A \rightarrow B$ be an unitary algebra homomorphism satisfying $f(a_i) = b_i$ where $b_i$ are any elements of $B$. When is $f$ well defined?
I know that $f$ does not have to be well defined:
Suppose $A = \{a+eb: a,b \in \mathbb{R}\}$ where $ee=0$. So $A$ is generated $\{1,e\}$
$B = \mathbb{R}$
$f(1) = 1,
f(e) = 1 $
than $0 = f(ee) = f(e)f(e) = 1 \cdot 1 = 1$
So I guess that a sufficient condition for $f$ to be well defined is:
if there are $i,j \in \{1,\dots,n\}$ and $r_k\in \mathbb{K}$ such that $a_i a_j =\sum_{k=1}^n r_k a_k$, then $f(a_i)f(a_j) =\sum_{k=1}^n r_k f(a_k)$ must hold.
AI: If the $K$-algebra $A$ has generators $a_1,...,a_r$, a $K$-algebra homomorphism $f:A\rightarrow B$ with $f(a_i)=b_i$ exists if and only if the following condition is satisfied:
For every $P(X)\in K[X_1,...,X_r]$ such that $P(a_1,...,a_r)=0$, one has $P(b_1,...,b_r)=0$.
Sometimes, it is easier to use other information, even when you have explicit generators. For instance, in your case you are considering a homomorphism
$$
f:A=\frac{\Bbb R[x]}{(x^2)}\longrightarrow B=\Bbb R
$$
In this case the target ring is exactly the algebra $K=\Bbb R$, so $f$ is either the null map or surjective. In the latter case, its kernel must be a maximal ideal of which there's only one, namely the ideal $Ax$. |
H: Robin problem (Laplace equation)
Let
$\Delta u = 0 $,
$ \frac{\partial u}{\partial v}(x) + \alpha u(x) = 0 $
be the Laplace equation with Robin conditions.
How do I prove that it has at most one solution?
If I could prove that any two solutions differ only by a constant, I would prove this using the uniqueness of the solution of the Dirichlet problem...
But I haven't been able to prove this so far.
Thank you
AI: I suppose that $\alpha$ is a positive constant. Take two solutions $u,v$. Then, test both weak formulations with the difference $u-v$ and take the difference of these weak formulations. You should end with
$$
\int_\Omega \lvert \nabla (u - v)\rvert^2 \, \mathrm{d}x + \alpha \int_{\partial\Omega} (u - v)^2 \, \mathrm{d} s = 0.
$$
This shows (I do not reveal the precise arguments here) $u = v$. |
H: Integral related to harmonic functios
It's supposed to be an easy task, but I couldn't solve it (I guess I can't learn much analysis).
If $u$ is harmonic, then in the middle of my problem I need to prove that the integral
$ \int _ {\partial \Omega } \displaystyle\frac{\partial u}{\partial v} d s_x =0 $
This $v$ denotes the unit exterior normal.
AI: Using Green's first identity,
$$
\int_{\partial\Omega}\frac{\partial u}{\partial \nu}\,ds =
\int_{\Omega} (\Delta u + \nabla 1 \cdot \nabla u)\,dV = 0,
$$
since $\Delta u = 0$ by assumption and $\nabla 1 = \mathbf{0}$. |
H: Proper Subsets of Real Numbers
Every non-empty proper subset of Real Numbers is either open or closed. true/false
AI: It is false.
For example, $[0,1)$ is not open, since the point $0 \in [0,1)$ has no neighbourhood $U$ such that $0 \in U\subset [0,1)$. Also, $[0,1)$ is not closed, since the point $1$ is an accumulation point of $[0,1)$, yet $1$ is not in $[0,1)$. |
H: A particular proof that I don't follow.
In Rudin Thm.3.20, along the proof of (d), there is an inequality that says for $n > 2k$
$${n\choose {k}}p^k \ge \frac{n^k}{2^kk!}p^k $$
I simply don't get this, especially why $n$ is restricted as such.
From the looks of it we just need to show
$$n(n-1)\cdots(n-k+1) \ge \left(\frac{n}{2}\right)^k $$
no matter what I do I cannot show that it has to be true for $n > 2k$.
Can someone give me an easy-to-follow explanation ?
AI: \begin{align}
\binom{n}{k} & = \frac{(n)\ldots(n-k+1)}{k!} \\
& = \frac{n^k}{k!} \left(\frac{n}{n}\right) \ldots \left(\frac{n-k+1}{n} \right) \\
& \geq \frac{n^k}{k!} \left(\frac{n-k+1}{n} \right)^k \\
& \geq \frac{n^k}{k!} \left(\frac{n - n/2 +1}{n} \right)^k \qquad \text{(using } n > 2k, \text{ i.e. } k < n/2) \\
& \geq \frac{n^k}{k!} \frac{1}{2^k}
\end{align} |
H: Methods for assessing convergence of $\sum\limits_{k=1}^\infty (-1)^k a_k$ when $a_k$ is complicated
Think of the problem of convergence of the series
$$ \sum_{k=1}^\infty (-1)^k a_k$$
Is it possible to instead consider the convergence of $\sum\limits_{k=1}^\infty (-1)^k b_k$ when $\lim \limits_{k\rightarrow\infty } \dfrac{a_k}{b_k}$ is finite and non-zero?
If not, is there any theorem that helps to facilitate the analysis of convergence when $a_k$ is a complicated function of $k$?
AI: In the following I will assume that $a_n\ge0$; otherwise, as nick points out in a comment, you are asking about convergence criteria for arbitrary sequences.
The only general convergence criterion I know of is Leibniz's criterion.
There is no comparison theorem like the one you ask about. The series
$$
\sum_{n=1}^\infty\frac{(-1)^n}{n}\quad\text{and}\quad\sum_{n=1}^\infty\frac{(-1)^n}{\sqrt n}
$$
are both convergent by Leibniz criterion, but
$$
\sum_{n=1}^\infty\frac{(-1)^n}{n-(-1)^n}
$$
converges, while
$$
\sum_{n=1}^\infty\frac{(-1)^n}{\sqrt n-(-1)^n}
$$
diverges. |
H: How has this derivation been achieved?
This is a step in a derivation I am to know, but I can't figure out how to achieve it - specifically, how we end up with $1 + e^{-X\theta}$ instead of $1 + e^{X\theta}$ as the denominator in the applicable terms. Working it through I seem to get the latter.
These steps are given in the literature, not derived by me:
$$\frac{\delta}{\delta\theta}\left[-y\log(1 + e^{-X\theta}) + (1 - y)\left[-X\theta - \log(1 + e^{-X\theta})\right]\right]$$
$$= -y\left(\frac{-X}{1 + e^{-X\theta}}\right) + (1 - y)\left(-X - \frac{-X}{1 + e^{-X\theta}}\right)$$
(see http://cl.ly/image/2o2R1P1d290u for all steps)
To offer some context factoring out $-X$ would leave in the first term, for example, $\frac{1}{1 + e^{-X\theta}}$ which is the sigmoid function, and this is indeed what is required for the subsequent steps.
I suppose my problem must lie in my calculation:
$$\frac{\delta}{\delta\theta}\left(-y\log(1 + e^{-X\theta})\right) = -y\frac{-X\frac{1}{e^{X\theta}}}{1 + e^{-X\theta}} = -y\frac{-X}{1 + e^{X\theta}}$$
Thanks in advance.
AI: You have $1+e^{-X\theta}$ inside of logarithm. After derivative is taken, the content of the logarithm moves into denominator, because $\frac{d}{du}\log u=\frac{1}{u}$. You also get the derivative of $1+e^{-X\theta}$ in the numerator (chain rule), which is $-X\,e^{-X\theta}$. So,
$$\frac{\partial}{\partial\theta}\left[-y \log(1 + e^{-X\theta})\right]
=-y \frac{-X\,e^{-X\theta}}{1 + e^{-X\theta}}$$
and
$$\frac{\partial}{\partial\theta}\left[ (1 - y)\left[-X\theta - \log(1 + e^{-X\theta})\right]\right] =
(1 - y)\left\{-X -\frac{-X\,e^{-X\theta}}{1 + e^{-X\theta}}\right\}
$$
One can then simplify a bit, using
$$
\frac{-X\,e^{-X\theta}}{1 + e^{-X\theta}} = \frac{-X }{ e^{X\theta}+1}
$$ |
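(Added symbolic check with SymPy, assuming it is available.) The two derivatives above, and the final simplification, can be verified mechanically:
import sympy as sp

theta, X, y, x = sp.symbols('theta X y x', real=True)
E = sp.exp(-X * theta)
term1 = -y * sp.log(1 + E)
term2 = (1 - y) * (-X * theta - sp.log(1 + E))
claimed1 = -y * (-X * E) / (1 + E)
claimed2 = (1 - y) * (-X - (-X * E) / (1 + E))
print(sp.simplify(sp.diff(term1, theta) - claimed1))   # 0
print(sp.simplify(sp.diff(term2, theta) - claimed2))   # 0
# the simplification used at the very end:
print(sp.simplify(sp.exp(-x) / (1 + sp.exp(-x)) - 1 / (sp.exp(x) + 1)))   # 0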
H: Farkas' lemma application
Farkas' lemma: Let $A$ be $m \times n$ matrix and $b \in \mathbb{R}^m$ $m$-dimensional vector.
Then, exactly one of the following holds:
(a) there exists some $x \in \mathbb{R}^n$, $x \geq 0$, such that $Ax = b$
(b) there exists some vector $y \in \mathbb{R}^m$ such that $y^TA \geq 0$ and $y ^Tb < 0$.
where $y^TA \geq 0$ means that every component of $y^TA$ is non-negative.
The task:
Let $A$ be a $m \times n$ matrix, let $C$ be a $m \times k$ matrix and let $b \in \mathbb{R}^m$. Prove that exactly one of the following holds
(a) there exist $x \in \mathbb{R}^n$ and $u \in \mathbb{R}^k$ such that $Ax + Cu \leq b$ and $x \geq 0$;
(b) there exists $y \in \mathbb{R}^m$ such that $y \geq 0$, $y^TA \geq 0$, $y^TC=0$ and $y^Tb < 0$.
I think you have to introduce vector of slack variables $s = [s_1,s_2,\dots,s_m]^T$, $s_i \geq 0$ such that
$$
Ax + Cu + s = b
$$
ant then somehow apply Farkas' lemma but i don't know how. Any help most welcome.
Edit: with the help of the answer i came up with the following...
(a) there exist $x \in \mathbb{R}^n$ and $u \in \mathbb{R}^k$ such that $Ax + Cu \leq b$ and $x \geq 0$;
is equivalent to
(a') there exist $x \in \mathbb{R}^n$, $u \in \mathbb{R}^k$ and $s \in \mathbb{R}^m$ such that $Ax + Cu + s = b$, $x \geq 0$ and $s \geq 0$;
We define $m \times (n+2k+m)$ matrix
$$ D = \begin{pmatrix} A & C & -C & I
\end{pmatrix}
$$
and apply standard Farkas' lemma to matrix $D$ and vector $b \in \mathbb{R}^m$. We get that exactly one of the following holds
(a') there exists $x' \in\mathbb{R}^{n+2k+m}$, $x' \geq 0$, such that $Dx'=b$. We denote the first $n$ components of $x'$ as a vector $x \in \mathbb{R}^n$, the following $k$ components as a vector $u'$, the following $k$ components as a vector $u''$ and the last $m$ components of $x'$ as a vector $s$. $Dx' = b$ implies $Ax + Cu' - Cu'' + s = b$. Picking $u=u'-u''$ we get $Ax + Cu + s = b$.
(b) there exists $y \in \mathbb{R}^m$ such that $y^TD \geq 0$ and $y^Tb < 0$. Now $y^TD \geq 0$ implies $y^T A \geq 0$, $y^T C \geq 0$, $-y^T C \geq 0$ (hence $y^TC=0$) and $y \geq 0$, which is exactly what we wanted to prove.
AI: Hint: Your idea almost works, but you will have to deal with the $C$ part also, let
$$ D = \begin{pmatrix} A & C & -C & \mathrm{Id}
\end{pmatrix} \in \mathbb R^{m \times (n+2k+m)}. $$
Then apply the first form you gave to $D$ and $b$. |
H: Jordan form and an invertible $P$ for $A =\left( \begin{smallmatrix} 1&1&1 \\ 0 & 2 & 2 \\ 0 & 0 & 2 \end{smallmatrix}\right)$
$A = \begin{pmatrix} 1&1&1 \\ 0 & 2 & 2 \\ 0 & 0 & 2 \end{pmatrix}$, find the jordan form and the invertible $P$ such that: $A = P J P^{-1}$.
Now I found the characteristic polynomial and minimal polynomials:
$P_A(x) = (x-1)(x-2)^2 = m_A(x)$.
And from the minimal polynomial I found out that the maximal block size for the eigenvalue $1$ is $1$ so we have one block of size $1$ for that eigenvalue. And in the same way that the maximal jordan block size for eigenvalue $2$ is $2$ and I calculated $N=A-2I$ and figured that there is only one block of size $2$ for eigenvalue $2$. And so I found the Jordan Form:
$$J_A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}$$
Now what I am having trouble with is finding $P$. I know that $\ker(N) = \ker(A-2I) = \operatorname{span}\{(1,1,0)^T\}$ and $\ker(A-I) = \operatorname{span}\{(1,0,0)^T\}$. But how exactly do I calculate the spans to find the basis for the Jordan form if I have two eigenvalues? This is an algorithm that I was not taught!
Any help will be appreciated
AI: We have that $v_1=(1,0,0)^T\in \ker(A-I)$ is an eigenvector associated to the eigenvalue $1$;
moreover, since $\ker(A-2I)=\mathrm{span}((1,1,0)^T)$, $v_2=(1,1,0)^T$ is an eigenvector of $A$ associated to the eigenvalue $2$ and the matrix $A$ isn't diagonalizable. Finally we look for a vector $v_3$ s.t. $Av_3=v_2+2v_3$ and we find $v_3= (0,\frac{1}{2},\frac{1}{2})^T$, hence with
$$P=(v_1\ v_2\ v_3)=\left(\begin{matrix}1&1&0\\
0&1&\frac{1}{2}\\
0&0&\frac{1}{2}\end{matrix}\right)\quad\text{and}\quad J=\left(\begin{matrix}1&0&0\\
0&2&1\\
0&0&2\end{matrix}\right)$$
we have
$$J=P^{-1}AP$$ |
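(Added numerical verification, assuming NumPy.) One can confirm that the matrices found above indeed satisfy $P^{-1}AP=J$:
import numpy as np

A = np.array([[1., 1., 1.], [0., 2., 2.], [0., 0., 2.]])
P = np.array([[1., 1., 0.], [0., 1., 0.5], [0., 0., 0.5]])
J = np.array([[1., 0., 0.], [0., 2., 1.], [0., 0., 2.]])
print(np.allclose(np.linalg.inv(P) @ A @ P, J))   # True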
H: How would you best describe the rate of growth of the function $f(x) = cxr^x$?
Consider the function $f(x) = cxr^x$, where both $r$ and $c$ are constants and we have cases: (a) $r<1$, (b) $r>1$. Regarding terminology, how would you best describe the asymptotic growth of $f(x)$ in cases (a) and (b)? Though you could state that exponential growth and decay will dominate the asymptotics of the function, strictly speaking it would be incorrect to call (a) exponential growth and (b) exponential decay, right?
AI: Exponential growth and exponential decay when $x\to+\infty$ are often defined by the fact that
$$
\lim_{x\to+\infty}\frac{\log f(x)}x
$$
exists and is not zero. In this context, every function $f:x\mapsto cx^a(\log x)^br^x$ with $c\gt0$ and $r\gt0$, $r\ne1$, has exponential growth or exponential decay when $x\to+\infty$. |
H: Proving a graph is connected iff it has no isolated points
How do I prove that a graph is connected iff it has no isolated points?
I can do the first half; if the graph is connected, any pair of vertices have a walk between them. Suppose there is an isolated point. Since an isolated point has no edges, it is impossible for it to have a walk between another vertex, a contradiction. Hence, a connected graph cannot have an isolated point.
How do I prove the converse?
I imagine I'm supposed to construct a procedure like going from marking one vertices one at a time, and say at some point the vertex which is the intended finish of the walk has to be marked since there are only finite vertices in the graph.
But I can't go further than this rough sketch.
AI: This is false. Consider for example the graph consisting of vertex set $\{1,2,3,4\}$ and edge set $\{(1,2),(3,4)\}$. |
H: For what values of $a$ $\int^2_0 \min(x,a)dx=1$
I want to check for what values of $a$ the integral of this function is equal to $1$
$$\int^2_0 \min(x,a)dx=1$$
What I did is to check 2 cases, and I am not sure it is enough:
case 1:
$$a<x \rightarrow \int^2_0 a = ax|^{2}_{0}=2a=1 \rightarrow a=\frac{1}{2}$$
Now I don't know if it's enough to say that; do I need to check whether $a<2$, or is what I did enough?
case 2:
$$a>x \rightarrow \int^2_0 x = \frac{x^2}{2}|^{2}_{0}=2$$
How do I need to continue from here?
Thanks!
AI: There are three cases:
If $a\leq 0$ then
$$\int^2_0 \min(x,a)\,dx=a\int_0^2 dx=2a\leq 0\neq 1.$$
If $a\geq 2$ then
$$\int^2_0 \min(x,a)\,dx=\int_0^2 x\,dx=2\neq 1.$$
If $0<a<2$ then
$$\int^2_0 \min(x,a)\,dx= \int_0^a x\,dx+a\int_a^2dx=\frac{a^2}{2}-a^2+2a=-\frac{a^2}{2}+2a,$$
and $-\frac{a^2}{2}+2a=1\iff a=2\pm\sqrt 2$; only $2-\sqrt 2$ lies in $(0,2)$, so the desired value of $a$ is $2-\sqrt 2$. |
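(Added numerical confirmation, assuming SciPy.) The value $a=2-\sqrt 2$ indeed makes the integral equal to $1$:
import numpy as np
from scipy.integrate import quad

a = 2 - np.sqrt(2)
value, _ = quad(lambda x: min(x, a), 0, 2, points=[a])   # split at the kink x = a
print(value)   # approximately 1.0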
H: What are two continuous maps from $S^1$ to $S^1$ which are not homotopic?
This is an exam question I encountered while studying for my exam for our topology course:
Give two continuous maps from $S^1$ to $S^1$ which are not homotopic. (Of course, provide a proof as well.)
The only continuous maps from $S^1$ to $S^1$ I can think of are rotations, and I thought rotations on a circle can be continuously morphed into one another.
AI: The identity is not homotopic to a constant map; otherwise, $S^1$ would be contractible, which would imply $\pi_1(S^1)=0$. |
H: Limit of $\lim\limits_{x\to 0}\left(\frac{1+x\sin(2x)-\cos(x)}{\sin^2(x)}\right)$
I want to evaluate this limit and I faced with one issue.
for this post I set $L`$ as L'Hôpital's rule
$$\lim\limits_{x\to 0}\left(\frac{1+x\sin(2x)-\cos(x)}{\sin^2(x)}\right)$$
Solution One:
$$\lim\limits_{x\to 0}\left(\frac{1+x\sin(2x)-\cos(x)}{\sin^2(x)}\right)=\frac{0}{0}L`=\frac{\sin(2x)+2x\cos(2x)+\sin(x)}{\sin(2x)}$$
at this step I decided to evaluate each fraction so I get :
$$\lim\limits_{x\to 0}\frac{\sin(2x)}{\sin(2x)}+\frac{2x\cos(2x)}{\sin(2x)}+\frac{\sin(x)}{\sin(2x)} = \frac{3}{2}$$
Solution Two:
$$\lim\limits_{x\to 0}\left(\frac{1+x\sin(2x)-\cos(x)}{\sin^2(x)}\right)\frac{0}{0}L`=\frac{\sin(2x)+2x\cos(2x)+\sin(x)}{\sin(2x)}=\frac{0}{0}L`$$
$$\frac{2\cos(2x)+2\cos(2x)-4x\sin(2x)+\cos(x)}{2\cos(2x)}=\frac{5}{2}$$
I would like to get some idea where I did wrong, Thanks.
AI: As mentioned, your first solution is incorrect; the reason is that $$\lim_{x\to0}\frac{2x\cos(2x)}{\sin(2x)}\neq 0.$$ You can apply L'Hôpital again:
$$\lim_{x\to0}\frac{2x\cos(2x)}{\sin(2x)}=\lim_{x\to0}\frac{2\cos(2x)-4x\sin(2x)}{2\cos(2x)}=\lim_{x\to0}\bigl(1-2x\tan(2x)\bigr)=1-0=1,$$ so, having computed this limit, in the first solution after $$\lim\limits_{x\to 0}\left(\frac{1+x\sin(2x)-\cos(x)}{\sin^2(x)}\right)=\lim\limits_{x\to 0}\frac{\sin(2x)}{\sin(2x)}+\frac{2x\cos(2x)}{\sin(2x)}+\frac{\sin(x)}{\sin(2x)}$$ I'd write that the limit equals $$1+1+\frac 1 2=\frac 5 2,$$ and that's the correct answer. |
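(Added numerical sanity check, assuming NumPy.) Evaluating the original expression at small $x$ shows the values approaching $5/2$:
import numpy as np

for x in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(x, (1 + x * np.sin(2 * x) - np.cos(x)) / np.sin(x) ** 2)
# the printed values tend to 2.5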
H: How to prove that this particular function is continuous?
This was an exam question from the topology course of a previous year.
Let $(X,d)$ be a compact metric space and $f: X \to X$ be a continuous map. Show that the map $g: X \to \mathbb{R} , x \mapsto d(x,f(x))$ is continuous.
I know several definitions of continuity and I know that the continuous image of a compact set is compact (and, by Heine-Borel, closed and bounded). I can't seem to apply these definitions and results to prove that $g$ is continuous, though.
AI: The function you consider is the composition of $d:X\times X\to \mathbb R$ and the function $X\to X\times X$ given by $x\mapsto (x,f(x))$. Show that each of these is continuous. Notice that compactness is not used anywhere. |
H: Normalize a negative range
How can I convert a range of -12 to 12 as a value between 0 and 1 ?
I'm guessing this is a relatively simple problem, I just can't figure it out.
float value=[valuestring floatValue];
float absval =fabsf(value);
if (value<0) {
//0-0.5
value=(absval/12);
value=value*.5;
value-=.5;
}else if (value>0)
{
//0.5-1
value=(absval/12);
value=value*.5;
value+=.5;
}else if(value==0)
{
//0.5
value=0.5;
}
AI: If $$-12\le x\le 12$$
Note that we need to change the range difference from $12-(-12)=24,$ to $1-0=1$
So, dividing throughout by $24,$
$$-\frac12\le \frac x{24}\le \frac12$$
Now, we need to shift by $\frac12$
$$0\le \frac12+ \frac x{24}\le 1$$
So, if we put $$y=\frac12+ \frac x{24}=\frac{12+x}{24}, -1\le y\le1$$ |
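(Added code version of the answer; the function name and defaults are just illustrative.) The whole branching in the question collapses to one linear map:
def normalize(value, lo=-12.0, hi=12.0):
    """Map value from [lo, hi] linearly onto [0, 1]; with lo=-12, hi=12 this is (value+12)/24."""
    return (value - lo) / (hi - lo)

print(normalize(-12.0), normalize(0.0), normalize(12.0))   # 0.0 0.5 1.0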
H: Prove path P4 and the cycle C5 are self-complementary
I can "show" that the two graphs are in fact self-complementary by making a drawing.
How do I "prove" this?
How can I rigorously put in words that the complement of P4 is P4 itself?
In other words, how is an isomorphism of a graph proven?
Is it possible to do this with a degree sequence? (Does a degree sequence uniquely determine a graph?)
AI: No, a degree sequence does not determine the graph uniquely (consider two copies of $K_3$ vs. $C_6$).
In order to prove the self-complementarity explicitly, you only need to provide an isomorphism between the graphs -- a bijective mapping between their vertices such that the vertices in original graph are adjacent if and only if their images are adjacent in the other graph. In case of $C_5$ (say, the vertices are named $0,1,2,3,4$ and they're connected in this order), one such isomorphism could be $f(i)=3i+2$. |
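(Added check with the networkx library, assuming it is available.) Both self-complementarity claims, and the degree-sequence counterexample, can be verified directly:
import networkx as nx

P4 = nx.path_graph(4)    # path on 4 vertices
C5 = nx.cycle_graph(5)   # cycle on 5 vertices
print(nx.is_isomorphic(P4, nx.complement(P4)))   # True
print(nx.is_isomorphic(C5, nx.complement(C5)))   # True
# two copies of K3 vs. C6: same degree sequence, but not isomorphic
G1 = nx.disjoint_union(nx.complete_graph(3), nx.complete_graph(3))
G2 = nx.cycle_graph(6)
print(sorted(d for _, d in G1.degree()) == sorted(d for _, d in G2.degree()))   # True
print(nx.is_isomorphic(G1, G2))                                                 # False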
H: How to show that $\int_{a}^{b}\left \lfloor x \right \rfloor dx+ \int_{a}^{b}\left \lfloor -x \right \rfloor dx=a-b$?
How to show that
$$
\int_{a}^{b}\left \lfloor x \right \rfloor dx+ \int_{a}^{b}\left \lfloor -x \right \rfloor dx=a-b
$$
Where $\left \lfloor x \right \rfloor$ means greatest integer $\leqslant x$.
AI: What happens when you add $\lfloor x \rfloor$ and $\lfloor -x \rfloor$? Can you see how this relates to the bound of the integral and the answer you want? |
H: Find the unknown $n$
If $A$ and $B$ are invertible matrices (with 2013 rows and columns) such that
$A^9$ = $1$ and $ABA^{-1}$ = $B^2$, then prove that there exists a natural number
$n$ such that $B^n = 1$. Find the smallest such $n$.
AI: We have $AB = B^2A$, from which it immediately follows that $AB^n = B^{2n}A$. Thus, it can be shown by induction that $A^n B = B^{2^n}A^n$.
Now consider $B = A^9 B = B^{2^9}A^9 = B^{2^9}$, since $B$ is invertible this means that $B^{2^9-1} = I$. Hence such an $n$ is $2^9-1 = 511$
The smallest such $n$ must now divide $511$, hence $n\in\{1,7,73,511\}$. |
H: If $\int^{1}_{0} \frac{\tan^{-1}x}{x} dx = k \int^{\pi/2}_{0} \frac {x}{\sin x} dx$, find $k$.
Problem: If $$\int^{1}_{0} \frac{\tan^{-1} x}{x} dx = k \int^{\pi/2}_{0} \frac{x}{\sin x} dx,$$ find $k$.
Solution: If we put $x=\tan t$ in $$\int^{1}_{0} \frac{\tan^{-1}x}{x} dx$$ then integral becomes $$\int^{\pi/4}_{0} \frac{t}{\tan t} \sec^2t \,dt$$ or may be we can do integration by parts.
I have no idea about $\int^{\pi/2}_0 \frac {x}{\sin x} dx$ as $\int_{0}^{a} f(x) \ dx = \int_{0}^{a} f(a-x) \ dx$ method is not looking valid in above case.
AI: HINT:
From where you have left,
$$I=\int^{\frac{\pi}4}_0 \frac t {\tan t} \sec^2t dt=\int_0^{\frac\pi4}\frac t{\sin t\cos t}dt=\int_0^{\frac\pi4}\frac{2t}{\sin2t}dt$$ as $\sin2t=2\sin t\cos t$
Put $2t=u$ |
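(Added numerical check of where the hint leads, assuming SciPy.) The ratio of the two integrals gives the value of $k$:
import numpy as np
from scipy.integrate import quad

I1, _ = quad(lambda x: np.arctan(x) / x, 0, 1)
I2, _ = quad(lambda x: x / np.sin(x), 0, np.pi / 2)
print(I1, I2, I1 / I2)   # the ratio is 0.5, i.e. k = 1/2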
H: Find the largest number having this property.
The $13$-digit number $1200549600848$ has the property that for any $1 \le n \le 13$, the number formed by the first $n$ digits of $1200549600848$ is divisible by $n$ (e.g. 1|2, 2|12, 3|120, 4|1200, 5|12005, ..., 13|1200549600848 using divisor notation).
Question 1: Find the largest computed number having this property.
Question 2: Is there a theoretical upper bound on the largest possible number with this property?
Edit: Added Question 2 as I believe it is more insightful as compared to brute force computer calculations.
AI: The On-Line Encyclopedia of Integer Sequences lists this sequence as A109783 and states that 3608528850368400786036725 works for 25 digits, but there is no such 26 digit number.
A thread titled divisor problem at The Math Forum suggests the following argument:
3608528850368400786036725 is the largest number with such a property.
Of course any substring from the left of this has the same property.
Let N be a number with such a property.
to be searched a digit d such that 10N+d has the property.
I call a "terminal" number a number N that can't be "expanded"
that is for which no digit d exists with 10N+d having the property.
The only "number" which can indefinitely be expanded is
0000...
All solutions are then given by the set of these "terminal" numbers.
This set is finite and there are only 2492 terminal numbers.
Interesting is a number with all digits differents.
The only solution is 3816547290
Edit: to answer question 2 explicitly, the largest is the 25 digit number given above. There is no such 26 digit number and therefore no 27, 28, 29, 30,... digit number.
We can prove this by contradiction. Suppose there was a 30 digit number, then we could chop off the last 4 digits and we'd get a 26 digit number which satisfies the required property, but we know no such number exists. Proof by contradiction. |
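(Added verification.) The prefix-divisibility property of the quoted 25-digit number is easy to confirm by machine:
n = "3608528850368400786036725"
print(all(int(n[:k]) % k == 0 for k in range(1, len(n) + 1)))   # True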
H: Write the 2nd degree equation which have the following roots
$y_1$=${(x_1+x_2\varepsilon+x_3\varepsilon^2)}^3$
$y_2$=${(x_1+x_2\varepsilon^2+x_3\varepsilon)}^3$
where $x_1,x_2,x_3$ roots for the $x^3+ax^2+bx+c=0$ and $\varepsilon$ = $\frac{-1}{2}$+$i$$\frac{\sqrt{3}}{2}$. ${\varepsilon}^3=1$
AI: Hint: The coefficients of the required equation are $${(x_1+x_2\varepsilon+x_3\varepsilon^2)}^3{(x_1+x_2\varepsilon^2+x_3\varepsilon)}^3$$ and $$-{(x_1+x_2\varepsilon+x_3\varepsilon^2)}^3-{(x_1+x_2\varepsilon^2+x_3\varepsilon)}^3.$$
They are symmetric functions of $x_1,x_2,x_3$. So it remains to express them by elementary symmetric polynomials $x_1+x_2+x_3=-a$, $x_1x_2+x_2x_3+x_1x_3=b$ and $x_1x_2x_3=-c$.
Addendum: It is known that the roots of a cubic equation are of the form:
$$x_1=u+v, \ \ x_2=u\varepsilon+v\varepsilon^2, \ \ x_3=u\varepsilon^2+v\varepsilon, $$
where $u,v$ are from http://en.wikipedia.org/wiki/Cubic_function#Cardano.27s_method .
Then $$x_1+x_2\varepsilon+x_3\varepsilon^2=u(1+\varepsilon+\varepsilon^2)+3v=3v$$
and similarly for $x_1+x_2\varepsilon^2+x_3\varepsilon$.
$2$nd Addendum: To calculate $u^3$ and $v^3$ one has to reduce the equation $x^3+ax^2+bx+c=0$ to the form $y^3+py+q=0$ by the substitution $x=y-\frac{a}{3}$. Then
$$
p=-\frac{a^2}{3}+b, \ \ \ q=\frac{2a^3}{27}-\frac{ab}{3}+c,
$$
$$u^3=-\frac{q}{2}+\sqrt{D}, \ \ v^3=-\frac{q}{2}-\sqrt{D}, \qquad D=\frac{q^2}{4}+\frac{p^3}{27}.$$
Hence $u^3+v^3=-q, \ u^3v^3=\frac{q^2}{4}-D=-\frac{p^3}{27}$. Since $y_1=(3v)^3=27v^3$ and $y_2=(3u)^3=27u^3$, we get $y_1+y_2=-27q$ and $y_1y_2=-27p^3$, so the required equation is
$$y^2+27qy-27p^3=0.$$ |
H: Is the number of quadratic nonresidues modulo $p^2$, greater than the number of quadratic residues modulo $p^2$?
Let $p$ be a prime. The number of quadratic nonresidues modulo $p^2$, is greater than the number of quadratic residues modulo $p^2$.
Is that statement true or false? Why?
Thank you.
AI: Let's count the number of squares modulo $p^2$. I'll assume that $p$ is odd. The case $p=2$ is left as an exercise...
If $x^2$ is a multiple of $p$, then $x$ is a multiple of $p$, and $x^2 \equiv 0 \bmod {p^2}$.
If $x^2$ is not a multiple of $p$, then $x$ is not a multiple of $p$, and $x^2$ is a unit modulo $p^2$. There are $\phi(p^2)=p^2-p$ such units. Since there is a primitive root modulo $p^2$, half of the units are squares. Indeed, if $g$ is a primitive root, then all units are of the form $g^k$ and the squares are of the form $g^{2k}$.
So, the number of squares is $(p^2-p)/2+1$ and the number of nonsquares is $(p^2+p)/2-1$; the latter exceeds the former by $p-2>0$, so for an odd prime the nonsquares are indeed more numerous.
Alternatively, consider the map $x \mapsto x^2$ in the unit group. This map is a homomorphism and its image is the set of nonzero squares. The kernel is the set of solutions $x^2=1$, which is $\{\pm 1\}$. (There is something to prove here!) Hence the image has size $(p^2-p)/2$. |
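(Added brute-force count, as a sanity check of the formulas above.) For a few odd primes, counting squares modulo $p^2$ directly reproduces $(p^2-p)/2+1$:
for p in [3, 5, 7, 11]:
    m = p * p
    squares = {x * x % m for x in range(m)}
    print(p, len(squares), m - len(squares), len(squares) == (m - p) // 2 + 1)
# prints the number of squares, the number of nonsquares, and True for each p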
H: A question arising from some misunderstanding involving the Laplace Transform of the Heaviside function
I am trying to compute the Laplace Transform of the Heaviside Step-function.
I define the Heaviside step-function with the half-maximum convention:
$H(t-t_0) = 0$ for $t < t_0$ ; $H(t-t_0) = 1/2$ for $t=t_0$ ; $H(t-t_0) = 1$ for $t > t_0$.
Performing the integrations necessary to find the Laplace transform of the Heaviside step-function, I found that: (I use the symbol $L$ to denote the laplace transform.)
$L[ 0 ] = 0 $ ; $L[1/2] = \frac{1}{2s} $ ; $L[1] = \frac{1}{s} $ .
I thought that this is the correct Laplace Transform for the different intervals on which the Heaviside Step-function is defined. However, my book mentions that: $$L [ H(t - t_0) f(t - t_0) ] = e^{-t_0 s } F(s) , $$ where $F(s) = L [f(t) ] $. So if we look at the interval in which $t > t_0$, we find that $$L [ H(t - t_0) f(t - t_0) ] = \frac{ e^{-t_0 s } }{s} .$$
This doesn't agree with my calculations. Can you please point who goes wrong, and where and how?
AI: The unilateral Laplace transform $F(s)$ of a function $f(t)$ is defined by
$$F(s)=\int_{0}^{\infty}f(t)e^{-st} \,dt$$
For the shifted step function $H(t-t_0)$ this gives
$$F(s)=\int_{0}^{\infty}H(t-t_0)e^{-st} \,dt=\int_{t_0}^{\infty}e^{-st} \,dt=\frac{e^{-st_0}}{s}$$
So I guess your book is correct. |
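(Added numerical check, assuming SciPy.) Integrating $e^{-st}$ from $t_0$ to $\infty$ for sample values of $s$ and $t_0$ reproduces $e^{-st_0}/s$:
import numpy as np
from scipy.integrate import quad

s, t0 = 1.3, 2.0
lhs, _ = quad(lambda t: np.exp(-s * t), t0, np.inf)
print(lhs, np.exp(-s * t0) / s)   # both approximately 0.0571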
H: What is cofinality of $(\omega^\omega,\le)$?
Let us consider the set $\omega^\omega$ of all maps $\omega\to\omega$ with the pointwise ordering. By cofinality of $(\omega^\omega,\le)$ I mean the smallest cardinality of the subfamily $\mathcal B$ such that for each $f\in\omega^\omega$ there is a $g\in\mathcal B$ such that $f\le g$, i.e., $\mathcal B$ is cofinal.
I understand that every cofinal family in $(\omega^\omega,\le)$ is dominating. (A dominating family is basically the same thing as cofinal in $(\omega^\omega,\le^*)$, perhaps with the slight distinction that we are talking about equivalence classes, when dealing with $\le^*$.)
I do not think that the opposite implication is true (i.e., dominating $\Rightarrow$ cofinal).
Is it known whether cofinality of $(\omega^\omega,\le)$ is $\mathfrak d$? (Here $\mathfrak d$ denotes the dominating number.)
AI: It is true that the cofinality of $(\omega^\omega,\leq)$ is $\mathfrak{d}$. While a dominating family $\mathcal{D}$ might not be cofinal in this sense, we can obtain a cofinal family from it by taking all finite modifications of the functions in $\mathcal{D}$, expanding all of the equivalence classes of these functions, as it were. The new family
$$\mathcal{D}'=\{f;\exists g\in\mathcal{D}\colon g=f\text{ almost everywhere}\}$$
is a cofinal family in $(\omega^\omega,\leq)$ and clearly has the same cardinality as $\mathcal{D}$. |
H: Problem related to GCD
I was solving a question on GCD. The question was calculate to the value of $$\gcd(n,m)$$
where $$n = a+b$$$$m = (a+b)^2 - 2^k(ab)$$
$$\gcd(a,b)=1$$
So far I have shown that when $n$ is odd, $\gcd(n,m)=1$.
So I would like to get a hint or direction to proceed for the case when $n$ is even.
AI: $$\gcd(n,m)=\gcd(a+b,(a+b)^2-2^k(ab))=\gcd(a+b,2^k(ab))=\gcd(a+b,2^k)$$ where the last equality uses the fact that $\gcd(a+b,ab)=1$, easily proved from $\gcd(a,b)=1$. |
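(Added randomized check of the chain of equalities above, in plain Python.)
from math import gcd
from random import randint

checked = 0
while checked < 1000:
    a, b = randint(1, 10**6), randint(1, 10**6)
    if gcd(a, b) != 1:
        continue
    k = randint(0, 20)
    n, m = a + b, (a + b) ** 2 - 2**k * a * b
    assert gcd(n, abs(m)) == gcd(n, 2**k)   # gcd is insensitive to the sign of m
    checked += 1
print("identity verified on", checked, "random coprime pairs")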
H: Multiplying two summations together exactly.
Consider the integral: $$\int_0^1 \frac{\sin(\pi x)}{1-x} dx$$ I want to do this via power series and obtain an exact solution.
In power series, I have $$\int_0^1 \left( \sum_{n=0}^{\infty} (-1)^n \frac{(\pi x)^{2n+1}}{(2n+1)!} \cdot \sum_{n=0}^{\infty} x^n \right)\,\,dx$$ My question is: how do I multiply these summations together? I have searched online, however, in all cases I found they simply truncated the series and found an approximation.
Many thanks
AI: Let's take a more abstract case, trying to multiply $\sum_{n=0}^\infty a_n$ and $\sum_{n=0}^\infty b_n$. Note that in the resulting sum, we will have $a_i b_j$ for all possibilities of $i,j \in \mathbb{N}$.
One way to make it compact is to sum across diagonals. Think about an integer lattice in the first quadrant of $\mathbb{R}^2$. Drawing diagonals (origin, then along $x+y=1$ then along $x+y=2$, etc), note that the one along the line $x+y=n$ will contain $n+1$ integer points, and the sum of the indices at each point there will be $n$ - i.e. $(n,0),(n-1,1),\ldots,(k,n-k),\ldots,(0,n)$. So we can renumber the summation based on these diagonals, getting
$$
\left(\sum_{n=0}^\infty a_n\right) \left(\sum_{n=0}^\infty b_n \right)
= \sum_{n=0}^\infty \sum_{j,k\text{ along } x+y=n} a_k b_j
= \sum_{n=0}^\infty \sum_{k=0}^n a_k b_{n-k}.
$$ |
H: Cumulative distribution function (of NHPP inter-arrival time) not tending to 1?
According to this website, for a non-homogeneous Poisson process with mean $m(t) = \int^t_0 \lambda(u) \, du$, the cumulative distribution function (CDF) for the inter-arrival time to the first event is,
$$F_0(t) = 1 - e^{- m(t)}.$$
As $t \rightarrow 0$, $m(t) \rightarrow 0$, so $F_0(t) \rightarrow 1-e^{0} = 1- 1 = 0.$ So that is fine.
However, as $t \rightarrow \infty$, $m(t) \rightarrow \int^{\infty}_0 \lambda(u) \, du$, so $F_0(t) \rightarrow 1 - e^{-\int^{\infty}_0 \lambda(u) \, du}$ which is $1$ only if $\int^{\infty}_0 \lambda(u) \, du$ is $\infty$.
Question
Does this mean that the intensity function $\lambda(u)$ of a non-homogeneous Poisson process must be defined such that $m(t) \rightarrow \infty$ as $t \rightarrow \infty$? We cannot have an intensity function that drops to $0$ and remain $0$ after a certain point?
AI: If the limit $m(+\infty)$ of $m(t)$ when $t\to+\infty$ is finite, the total number of points is Poisson with parameter $m(+\infty)$, hence there is no point at all with positive probability $\mathrm e^{-m(+\infty)}$. When this happens, there is no first event, in other words the time to wait until the first event is $+\infty$. Hence the defective distribution.
This may happen even when $\lambda(t)\gt0$ for every $t\geqslant0$, and does happen if and only if $\lambda$ is integrable, for example $\lambda(t)=\frac1{1+t^2}$ for every $t\geqslant0$. |
H: Interval of convergence for $\sum_{n=1}^\infty({1 \over 1}+{1 \over 2}+\cdots+{1 \over n})x^n$
What is the interval of convergence for $\sum_{n=1}^\infty({1 \over 1}+{1 \over 2}+\cdots+{1 \over n})x^n$?
How do I calculate it? Sum of sum seems a bit problematic, and I'm not sure what rules apply for it.
Thanks in advance.
AI: The coefficient of $x^n$ is the $n$-th harmonic number $H_n$. One can show that there are positive constants $a$ and $b$ such that $a\ln n\lt H_n \lt b\ln n$. For much more detail than necessary, see the Wikipedia article on harmonic numbers. There will be no "endpoint" issue, since the coefficients go to $\infty$.
More simply, one can use the Ratio Test directly on $H_n$. Since $H_n\to\infty$ as $n\to\infty$, one can show quickly that
$$\lim_{n\to\infty} \frac{H_{n+1}}{H_n}=1.$$ |
H: probability question
In a class of 60 students, 15 students failed exam A, 25 students failed exam B, and 8 students failed both. What is the probability of a student passing A and failing B?
when I solve it using Venn diagrams the probability is 17/60
but when I solve it using P(Passing A & failing B)=P(A' intersection B)=P(A')*P(B)
the result =18.75/60
Shouldn't the two events be mutually exclusive (independent)? Passing A shouldn't affect passing B and vice versa, so the intersection equals their product. So what is it that I'm doing wrong?
AI: Mutually exclusive and independent are two different things. Mutually exclusive means they cannot both happen, but one can certainly pass A and fail B. Independent here means the probability of passing A is the same for those students that passed B as those that failed B. We are not given that. Your Venn diagram solution is correct, while the second formula assumes independence, which is not the case here. |
H: Probability of consecutive dice rolls
This is probably quite a simple question but here I go..
Suppose you are going to roll a six-sided (fair) die N times, what is the probability that you will get at least one set of three consecutive numbers in a row?
Hopefully that's clear enough but as an example, in nine rolls you may get:
1, 4, 2, 6, 4, 4, 4, 4, 3
With the 4s making two sets of consecutive rolls.
Thanks!
AI: Let $T$ denote the first time when this happens and, for every $|s|\leqslant1$ and $k$ in $\{0,1,2\}$, $u_k(s)=E_k[s^T]$ where $k$ is the number of identical results just produced. One is asking for $P_0[T\leqslant N]$. A one-step Markov conditioning yields the usual linear system $u_0=su_1$, $u_1=s\left(\frac16u_2+\frac56u_1\right)$ and $u_2=s\left(\frac16+\frac56u_1\right)$.
Solving this yields $u_0(s)=\frac{s^3}{36-30s-5s^2}$. There exists two positive real numbers $a$ and $b$ such that $36-30s-5s^2=36(1-as)(1+bs)$, thus $u_0(s)=\frac1{36}\frac1{a+b}s^3\left(\frac{a}{1-as}+\frac{b}{1+bs}\right)$ and, for every $n\geqslant1$, $P_0[T=n+2]=\frac1{36}\frac1{a+b}(a^{n}-(-1)^nb^{n})$. The value of $P_0[T\leqslant N]$ follows.
Numerically, $a=\frac1{12}(5+3\sqrt5)=0.97568$, $b=\frac1{12}(-5+3\sqrt5)=0.14235$, for every $n\geqslant1$, $P_0[T=n+2]=\frac1{18\sqrt{5}}(a^{n}-(-1)^nb^{n})$, and, when $n\to\infty$,
$$
P_0[T=n]\sim\frac{7-3\sqrt5}{5\sqrt5}a^n,\qquad P_0[T\geqslant n]\sim\frac{12}{5\sqrt5}a^n.
$$ |
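(Added exact computation by dynamic programming, plus the asymptotic formula from the answer, as a cross-check; plain Python.)
from math import sqrt

def prob_triple(N):
    # probability of seeing three equal consecutive results within N rolls;
    # state = current run length (1 or 2), starting after the first roll
    p1, p2, hit = 1.0, 0.0, 0.0
    for _ in range(N - 1):
        p1, p2, hit = (p1 + p2) * 5 / 6, p1 / 6, hit + p2 / 6
    return hit

a = (5 + 3 * sqrt(5)) / 12
for N in [3, 10, 50, 100]:
    exact = prob_triple(N)
    asymptotic = 1 - (12 / (5 * sqrt(5))) * a ** (N + 1)
    print(N, exact, asymptotic)   # the two columns agree closely as N grows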
H: $\mathbb{C}[f(x)]$ is not a maximal subring of $\mathbb{C}[x]$
Prove that $\mathbb{C}[f(x)]$ is not a maximal subring of $\mathbb{C}[x]$ for all $f\in\mathbb C[x] $.
I managed to prove it in a straightforward way by taking $f(x)=a_0 + a_1x+\cdots+ a_nx^n$ and by means of contradiction. But I am looking for a more elegant solution that is not brute force. (It must be extremely easy but without sleep my brain is in a standstill).
AI: Since for $\deg(f)=0,1$ we have $\mathbb{C}[f] =\mathbb{C}$ or $\mathbb{C}[f]=\mathbb{C}[x]$, we will only worry about the case $\deg(f) \geq 2$.
Let $I$ be the ideal generated by $f(x)$. Then $\mathbb C[f(x)]$ is a subring of $\mathbb C[f(x)]+I$.
Since $\deg(f) \geq 2$ it is easy to prove that $x \notin \mathbb C[f(x)]+I$, thus $\mathbb C[f(x)]+I \neq \mathbb C[x]$. Also $xf(x) \in I$ thus $xf(x) \in \mathbb C[f(x)]+I$ but not in $\mathbb C[f(x)]$. |
H: Wild automorphisms of the complex numbers
I read about so called "wild" automorphisms of the field of complex numbers (i.e. not the identity nor the complex conjugation). I suppose they must be rather weird and I wonder whether someone could explain in the simplest possible way (please) how I could imagine such wild automorphisms.
E.g. I suppose they are completely discontinuous.
E.g. are the real rational numbers fixed or any other set of complex numbers?
Can such an automorphism be pictured in a model?
AI: Given any field automorphism of $\mathbb C$, the rational numbers are fixed. In fact, any number that is explicitly definable in $\mathbb C$ (in the first order language of fields) is fixed. (Actually, this means that we can only ensure that the rationals are fixed, I expand on this below.)
Any construction of a wild automorphism uses the axiom of choice. See here for a related open problem. In fact, there is a model of set theory first considered by Solovay (in this model the axiom of choice fails, but the model satisfies the axiom of "dependent choice", which suffices for classical analysis) where all sets of reals are Lebesgue measurable and have the property of Baire, and there the only automorphisms are the identity and complex conjugation.
Wild automorphisms are indeed far from continuous. Since choice is used explicitly in their construction, I am not sure there is an easy way to "imagine" them, though the example described below is in principle not too elaborate, given these caveats.
The first explicit construction in print seems to be in a paper by Kestelman,
H. Kestelman. Automorphisms of the field of complex numbers, Proceedings of the London Mathematical Society (2), 53, (1951), 1-12.
His paper however traces the first proof as being "implicitly" given by Steinitz, using a transcendence basis, call it $T$, of $\mathbb C$ (called $Z$ in the paper) over $\mathbb Q$ (called $R$ in the paper), so $\mathbb C$ is algebraic over $\mathbb Q(T)$. (Note that this is where choice is used, in verifying the existence of $T$ via, for example, Zorn's lemma.)
The point is that any such basis contains two points $x_0, x_1$ with $x_0\notin\{x_1,\bar x_1\}$. One can then consider any permutation $\pi$ of $T$ mapping $x_0$ to $x_1$, and there is a unique extension of $\pi$ to a field automorphism of $\mathbb Q(T)$, which then can be lifted to an automorphism of $\mathbb C$. Pages 4, 5 in the linked paper gives a few more details. The outline itself was pointed out by Rado.
After this is done, the paper discusses how very weak regularity conditions on an automorphism (continuity at a point, for example), trivialize it.
Let me close with some remarks. In particular, I want to expand on the remark on fixed points in the first paragraph.
The argument above indicates we can produce an automorphism by starting with a permutation of $T$, which gives rise to an automorphism of $\mathbb Q(T)$, and then lift this to an automorphism of $\mathbb C$. Note that different permutations of $T$ give rise to different automorphisms, that $|T|=\mathfrak c=2^{\aleph_0}$, and that there are $2^\mathfrak c$ permutations of $T$. This means that there are at least $2^\mathfrak c$ wild automorphisms. On the other hand, there are only $\mathfrak c^\mathfrak c=2^{\mathfrak c}$ functions from $\mathbb C$ to itself, regardless of whether they are field automorphisms or not. This means that there are precisely $2^{\mathfrak c}$ (wild) field automorphisms of $\mathbb C$.
The next thing to note is that there is some leeway here. We do not need to start with $T$. We could just as well take any subfield $\mathbb F$ of $\mathbb C$, take a transcendence basis over $\mathbb F$, and repeat the argument above. In fact, we see this way that given any automorphism of $\mathbb F$, there is a field automorphism of $\mathbb C$ that extends it. This is explained in more detail in the paper linked by kahen in a comment below,
Paul B. Yale. Automorphisms of the complex numbers, Math. Mag. 39 (1966), 135-141. (Lester R. Ford Award, 1967.)
From basic field theory we know that for any irrational algebraic $\alpha$ we can take $\mathbb F$ to be the smallest subfield of $\mathbb C$ containing all roots of the minimal polynomial of $\alpha$ over $\mathbb Q$, and that there are automorphisms of $\mathbb F$ that move $\alpha$. Since any such automorphism can be extended to one of $\mathbb C$, this shows that no irrational algebraic number is fixed by all automorphisms of $\mathbb C$.
Similarly, if $\alpha$ and $\beta$ are transcendental and algebraically independent, then there is a transcendence basis $T$ with $\alpha,\beta\in T$, and there is an automorphism of $\mathbb Q(T)$ that maps $\alpha$ to $\beta$. Again, this extends to an automorphism of $\mathbb C$, so no transcendental number is fixed by all automorphisms of $\mathbb C$.
It follows that only the rational numbers are fixed by all automorphisms. On the other hand, again from basic field theory, we have that if $\alpha$ is algebraic, then any automorphism must map $\alpha$ to one of its conjugates, that is, to a root of the minimal polynomial of $\alpha$ over $\mathbb Q$. This means that there are only finitely many possible values the image of $\alpha$ may take.
Finally, a technical remark that I had on a comment, but probably deserves better visibility: To construct Solovay's model referred to above requires an inaccessible cardinal. On the other hand, as shown by Shelah, no additional consistency strength is required to show that there are models of set theory without choice, where all sets of reals have the property of Baire. Now, if a field automorphism of $\mathbb C$ is Lebesgue measurable, then it is trivial (the identity, or complex conjugation). Same if it is Baire measurable. In any model of set theory where all sets of reals have the property of Baire, all functions $f:\mathbb C\to \mathbb C$ are Baire measurable. It follows that in these models, the only automorphisms of $\mathbb C$ are trivial. |
H: Proving an equality on $\arctan x$
To demonstrate the following equality
\begin{equation}
\arctan x=\pi/2- \arctan(1/x)
\end{equation}
I have proceeded in this way. I know that
\begin{equation}
\arctan x= \int \frac{1}{1+x^2} dx + C
\end{equation}
But:
\begin{equation}
\int \frac{1}{1+x^2} dx=\int \frac{1}{x^2(1+1/x^2)} dx=-\int \frac{1}{1+t^2} dt
\end{equation}
with $t=1/x$. Then:
\begin{equation}
\arctan x= -\arctan\left(\frac{1}{x}\right)+C
\end{equation}
And if we apply the limit
\begin{equation}
\lim_{x\rightarrow0}\ \arctan x=\lim_{x\rightarrow0}\ -\arctan\left(\frac{1}{x}\right)+C
\end{equation}
we obtain: $C=\pi/2$.
What do you think of my proof? Write down your ideas, I'm curious and I want to compare myself with you.
Thank you very much.
AI: $$\tan(\pi/2- \arctan(1/x))=\frac{\sin(\pi/2- \arctan(1/x))}{\cos(\pi/2- \arctan(1/x))}=\frac{\cos(\arctan(1/x))}{\sin(\arctan(1/x))}$$
$$=\frac{1}{\tan(\arctan(1/x))}=\frac{1}{1/x}=x$$
Note: The proof doesn't end here, since $\tan(x)$ is not 1-1.
But $\arctan(1/x) \in (-\frac{\pi}{2}, \frac{\pi}{2}) \Rightarrow \frac{\pi}{2}-\arctan(1/x) \in (0, \pi)$.
One needs to split the problem now into $\frac{\pi}{2}-\arctan(1/x) $ into QI and QII, that is into $x>0$ and $x<0$. |
H: An inequality for symmetric matrices: $ \mbox{trace}(XY)\leq \lambda(X)^T\lambda(Y) $.
Let the vector of eigenvalues of a $n\times n$ matrix $U$
is denoted by
$$
\lambda(U)=\big(\lambda_1(U),\lambda_2(U),\ldots,\lambda_i(U),\ldots\lambda_{n-1}(U),\lambda_n(U)\big)^T.
$$
where the eigenvalues are ordered as $\lambda_1(U)\leq\lambda_2(U)\leq\ldots\leq\lambda_i(U)\leq\ldots\lambda_{n-1}(U)\leq\lambda_n(U)$.
I would like to prove ( or get a counterexample) the following inequality for symmetric matrices $X$ and $Y$
$$
\mbox{trace}(XY)\leq \lambda(X)^T\lambda(Y).
$$
Thanks in advance.
AI: The proof is essentially the same as in the case where $X$ and $Y$ are positive semi-definite.
Let $d_1\le\ldots\le d_n$ and $s_1\le\ldots\le s_n$ be the eigenvalues of $X$ and $Y$ respectively and let $D=\operatorname{diag}(d_1,\ldots,d_n),\ S=\operatorname{diag}(s_1,\ldots,s_n)$. Then $\operatorname{trace}(XY)=\operatorname{trace}(DQ^TSQ)$ for some real orthogonal matrix $Q$. Note that the Hadamard product $Q\circ Q=(q_{ij}^2)$ is a doubly stochastic matrix. Therefore
\begin{align}
\max_{QQ^T=I} \operatorname{trace}(DQ^TSQ)
&=\max_{QQ^T=I} \sum_{i,j}s_id_jq_{ij}^2\\
&\le\max_M \sum_{i,j}s_id_jm_{ij};\ M=(m_{ij}) \textrm{ is doubly stochastic}.\tag{1}
\end{align}
As $\sum_{i,j}s_id_jm_{ij}$ is a linear function in $M$ and the set of all doubly stochastic matrices is compact and convex, $\max\limits_M \sum_{i,j}s_id_jm_{ij}$ occurs at an extreme point of this convex set. By Birkhoff's theorem, the extreme points of doubly stochastic matrices are exactly those permutation matrices. Since permutation matrices are real orthogonal, it follows that equality holds in $(1)$ and
$\max\limits_{QQ^T=I} \operatorname{trace}(DQ^TSQ)=\max\limits_\sigma\sum_i d_is_{\sigma(i)}$, where $\sigma$ runs over all permutations of the indices $1,2,\ldots,n$. However, by mathematical induction on $n$, one can prove that $\sum_i d_is_{\sigma(i)}\le\sum_i d_is_i$ (note that the base case here is $n=2$, not $n=1$). Hence the assertion follows. |
H: Find the legs of isosceles triangle, given only the base
Is it possible to find the legs of isosceles triangle, given only the base length? I think that the info is insufficient. Am I right?
AI: You are correct that it is impossible. Given only the base length of an isosceles triangle, we cannot determine the length of its sides: one would need to have the measure of the angle opposite the base in order to determine the lengths of the sides.
If the base of a triangle is fixed, an angle of smaller measure $m$ opposite the base would give longer congruent sides than would an angle of greater measure. See, for example, the following nested triangles:
For the same base $\overline{AC},\;m(\angle E) \lt m(\angle D) \lt m(\angle B)$, and $|CE|>|DE|> |BE|$.
You can experiment with a triangle of a given base, to see how the angle opposite the base determines the length of its sides. |
H: Find the limit : $\lim\limits_{n\rightarrow\infty}\int_{n}^{n+7}\frac{\sin{x}}{x}\,\mathrm dx$
I have this exercise I don't know how to approach :
Find the limit : $$\lim_{n\rightarrow\infty}\int_{n}^{n+7}\frac{\sin x}{x}\,\mathrm dx$$
I can see that with $n\rightarrow\infty$ the area under the graph of this function becomes really small as $\sin{x} \leq 1$ so $\dfrac{\sin{n}}{n}\rightarrow_{\infty}0$ but can I get something from it?
AI: Hint: $\def\abs#1{\left|#1\right|}$
$$\abs{\int_n^{n+7}\frac{\sin x} x \, dx}\le \int_n^{n+7}\frac{\abs{\sin x}}{x}\, dx\le \frac 1n \int_n^{n+7}\abs{\sin x}\, dx $$ |
H: prove this $\int_{0}^{2}f^2(x)dx\le\int_{0}^{2}f'^2(x)dx$
let $f\in C^1[0,2]$ be such that $\int_{0}^{2}f(x)\,dx=0$ and $f(0)=f(2)$;
show that
$$\int_{0}^{2}f^2(x)dx\le\int_{0}^{2}f'^2(x)dx$$
I think we must use the Cauchy inequality.
My idea: I have seen this
let $f(x)\in C^1([a,b],R)$,and $f(a)=f(b)=0$,show that:$$\displaystyle\int_{a}^{b}f^2(x)dx\le\dfrac{(b-a)^2}{8}\displaystyle\int_{a}^{b}[f'(x)]^2dx$$
pf:
$$|f(x)|=|f(x)-f(a)|\le\sqrt{x-a}\left(\displaystyle\int_{a}^{x}[f'(t)]^2dt\right)^{\frac{1}{2}}$$
then $$f^2(x)\le(x-a)\displaystyle\int_{a}^{x}[f'(t)]^2dt\le(x-a)\displaystyle\int_{a}^{b}[f'(t)]^2dt$$
Integrating from $a$ to $b$, we have:
$$\displaystyle\int_{a}^{b}f^2(x)dx\le\displaystyle\int_{a}^{b}\left[(x-a)\displaystyle\int_{a}^{b}[f'(t)]^2dt\right]dx=\dfrac{(b-a)^2}{2}\displaystyle\int_{a}^{b}[f'(x)]^2dx$$
Then, applying this with $b$ replaced by $\dfrac{a+b}{2}$, we get
$$\displaystyle\int_{a}^{\frac{a+b}{2}}f^2(x)dx\le\dfrac{(b-a)^2}{8}\displaystyle\int_{a}^{\frac{a+b}{2}}[f'(x)]^2dx$$
On the other hand, for any $x\in[\frac{a+b}{2},b]$ we have $f(x)=-\displaystyle\int_{x}^{b}f'(t)dt$ (since $f(b)=0$), so
$$f^2(x)=\left(\displaystyle\int_{x}^{b}f'(t)dt\right)^2\le(b-x)\displaystyle\int_{x}^{b}[f'(t)]^2dt$$
Integrating from $\dfrac{a+b}{2}$ to $b$, we have:
\begin{align}
\int_{\frac{a+b}{2}}^{b}f^2(x)dx&\le\int_{\frac{a+b}{2}}^{b}(b-x)\left(\int_{x}^{b}[f'(t)]^2dt\right)dx\le
\int_{\frac{a+b}{2}}^{b}(b-x)\left(\int_{\frac{a+b}{2}}^{b}[f'(t)]^2dt\right)dx\\
&=\left(\int_{\frac{a+b}{2}}^{b}(b-x)dx\right)\left(\int_{\frac{a+b}{2}}^{b}[f'(x)]^2dx\right)\\
&=\dfrac{(b-a)^2}{8}\int_{\frac{a+b}{2}}^{b}[f'(x)]^2dx
\end{align}
then
$$\displaystyle\int_{a}^{b}f^2(x)dx\le\dfrac{(b-a)^2}{8}\displaystyle\int_{a}^{b}[f'(x)]^2dx$$
Letting $b=2$ and $a=0$, we then have
$$2\int_{0}^{2}f^2(x)dx\le\int_{0}^{2}f'^2(x)dx$$
AI: The hypotheses allow us to view $f$ as a continuous $2$-periodic function, piecewise $C^1$. Working in $L^2(0,2)$, the Fourier coefficients of $f$ and $f'$ are
$$
c_n(f)=\frac{1}{2}\int_0^2f(x)e^{-i\pi nx}dx\qquad c_n(f')=i\pi nc_n(f)
$$
where the second formula follows from an integration by parts. In particular, $c_0(f')=0$.
Note that we have $c_0(f)=0$ by assumption. Hence Parseval for $f$ and $f'$ yields
$$
\frac{1}{2}\int_0^2|f(x)|^2dx=\sum_{n\neq 0}|c_n(f)|^2\leq \sum_{n\neq 0}\pi^2n^2|c_n(f)|^2=\frac{1}{2}\int_0^2|f'(x)|^2dx.
$$
Note that this is strict as soon as there exists $n$ such that $c_n(f)\neq 0$. Since $f$ is piecewise $C^1$, it is equal to its Fourier series which converges normally. So we have equality if and only if $f=0$ under the given assumptions.
Finally, note that this argument yields the sharper inequality
$$
\int_0^2|f(x)|^2dx\leq \frac{1}{\pi^2}\int_0^2|f'(x)|^2dx.
$$
And this is optimal, considering $f(x)=\sin (\pi x)$. |
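A numerical illustration of the sharper inequality (my own sketch, not part of the answer): the first test function is the equality case $f(x)=\sin(\pi x)$, the second is another admissible choice (it satisfies $f(0)=f(2)$ and $\int_0^2 f=0$) for which the inequality is strict.

```python
import numpy as np

N = 200000
x = (np.arange(N) + 0.5) * 2.0 / N   # midpoints of a partition of [0, 2]
dx = 2.0 / N

def check(f, fp, label):
    lhs = np.sum(f(x) ** 2) * dx
    rhs = np.sum(fp(x) ** 2) * dx / np.pi ** 2
    print(label, lhs, "<=", rhs)

# equality case
check(lambda t: np.sin(np.pi * t),
      lambda t: np.pi * np.cos(np.pi * t), "sin(pi x):")
# a strict case, also satisfying f(0)=f(2) and integral zero
check(lambda t: np.sin(2 * np.pi * t) + np.cos(3 * np.pi * t),
      lambda t: 2 * np.pi * np.cos(2 * np.pi * t) - 3 * np.pi * np.sin(3 * np.pi * t),
      "sin(2 pi x) + cos(3 pi x):")
```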
H: Derivatives constants basic
I'm struggling with basic rules for derivatives.
So $\dfrac{d}{dx} 2x = 2$
Because you factor out the constant to $2\times \dfrac{d}{dx}x$ and that is $2\times1 = 2$
But $\dfrac{d}{dx}2 = 0$
Again factor out to $2\times \dfrac{d}{dx}1(*)$ and that is $2\times0 = 0$
(*) Is that the right way to think about it that there should be a $1$?
Greetings
Jeff
AI: It's certainly a way to think about it. Remember a derivative essentially tells us how much a function is changing. Constants will essentially scale how fast the function changes. $2x$ grows twice as fast as $x$, same as $2x^5$ grows twice as fast as $x^5$ and yes, $2\times 1$ grows twice as fast as $1$ but remember both $2$ and $1$ as functions don't grow at all, they're constant and hence their derivative is zero. |
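A quick check with a computer algebra system (my own sketch, using SymPy):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.diff(2 * x, x))          # 2
print(sp.diff(sp.Integer(2), x))  # 0: a constant does not change
print(sp.diff(2 * x**5, x))       # 10*x**4, twice the derivative of x**5
```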
H: Cauchy inequality $\Rightarrow$ Schwarz's integral inequality. Why can't the limit of the prior be used to deduce the latter?
Given the Cauchy-Schwarz inequality and the Riemann definition for the integral,
$$\sum_{k=1}^{n}a_kb_k\le\sqrt{\sum_{k=1}^{n}a_k^2}\sqrt{\sum_{k=1}^{n}b_k^2}$$
$$\int_a^bf(x)dx=\lim_{n\rightarrow \infty}\left(\sum_{k=1}^{n}f\left(a+k\frac{b-a}{n}\right)\frac{b-a}{n}\right)$$
why can you not simply put the two together
$$\lim_{n\rightarrow \infty} \sum_{k=1}^{n}a_kb_k\le\sqrt{\lim_{n\rightarrow \infty}\sum_{k=1}^{n}a_k^2}\sqrt{\lim_{n\rightarrow \infty}\sum_{k=1}^{n}b_k^2}$$
$$a_k=f \left(a+k\frac{b-a}{n}\right)\sqrt{\frac{b-a}{n}},b_k=g \left(a+k\frac{b-a}{n}\right)\sqrt{\frac{b-a}{n}}$$
($a_k,b_k$ depends on $n$) to get
$$\int_a^bg(x)f(x)dx\le \sqrt{\int_a^bf(x)^2dx}\sqrt{\int_a^bg(x)^2dx}$$
? It appears that this simple-minded method is incorrect, for example, here's a passage from page 11 of Michael Steele's book:
[This] approach to [the integral inequality] via Cauchy’s inequality would have been
problematical for several reasons, including the fact that the strictness
of a discrete inequality can be lost in the limiting passage to integrals.
Could some kind person explain why this is: in general and in this particular case (at which point did the above 'derivation' run afoul)?
AI: The proof is right, the issue is the following:
If
$$\sum_{k=1}^{n}a_kb_k < \sqrt{\sum_{k=1}^{n}a_k^2}\sqrt{\sum_{k=1}^{n}b_k^2}$$
you only get
$$\lim_{n \to \infty} \sum_{k=1}^{n}a_kb_k\le\sqrt{\lim_{n\rightarrow \infty}\sum_{k=1}^{n}a_k^2}\sqrt{\lim_{n\rightarrow \infty}\sum_{k=1}^{n}b_k^2} $$
It is possible to have $c_n < d_n$ for every $n$ and still $\lim c_n = \lim d_n$ (for instance $c_n=0$ and $d_n=1/n$, which both tend to $0$).
That paragraph refers to the fact that Schwarz needed to find/use the equality case in the Cauchy–Schwarz inequality.
So, you can extend the standard Cauchy–Schwarz inequality to integrals via Riemann sums, but you cannot characterize the equality case this way.... |
H: Collinear points in $\mathbb{K}^n$, $k_1k_2k_3 = (k_1-1)(k_2-1)(k_3-1)$
Could you tell me how to prove that given three non collinear points in $\mathbb{K}^n$: $A, \ B, \ C$, the following three points:
$A_1 = k_1B + (1-k_1)C, \ \ B_1=k_2C + (1-k_2)A, \ \ C_1=k_3A + (1-k_3)B$
are collinear $\iff \ \ \ k_1k_2k_3= (k_1-1)(k_2-1)(k_3-1)$?
I've tried equating lines generated by $A_1$ and $B_1$, $A_1$ and $C_1$ and $B_1$ and $C_1$ but I got lost in the calculations.
Could you help me with that?
AI: Hint 1: If the ambient dimension is $3$ (show that one can always assume wlog that this is the case, and, after a translation, that the plane through $A,B,C$ does not pass through the origin $O$), then $A_1,B_1,C_1$ are collinear iff
the determinant $d=[\overrightarrow{OA_1},\overrightarrow{OB_1},\overrightarrow{OC_1}]$ is zero.
Hint 2: Show that
$$
\Bigg|
\begin{matrix}
0 & k_1 & 1-k_1 \\
1-k_2 & 0 & k_2 \\
k_3 & 1-k_3 & 0 \\
\end{matrix}
\Bigg|=k_1k_2k_3-(k_1-1)(k_2-1)(k_3-1).
$$ |
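One can confirm the identity in Hint 2 symbolically (my own sketch, using SymPy):

```python
import sympy as sp

k1, k2, k3 = sp.symbols('k1 k2 k3')
M = sp.Matrix([[0,      k1,     1 - k1],
               [1 - k2, 0,      k2    ],
               [k3,     1 - k3, 0     ]])
# should print 0, i.e. det M = k1*k2*k3 - (k1-1)*(k2-1)*(k3-1)
print(sp.simplify(M.det() - (k1*k2*k3 - (k1 - 1)*(k2 - 1)*(k3 - 1))))
```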
H: Space $H^1([0, T], H^{-1}(U))$
My teacher uses the space $H^1([0, T], H^{-1}(U))$, where $H^1 = W^{1,2}$ (Sobolev space) and $H^{-1}(U)$ is the dual space of $H_0^1(U)$.
So we have $u \in H^1([0, T], H^{-1}(U))$ if $u(t)$ is in $H^{-1}(U)$ for all $t \in [0, T]$, $u \in L^2([0, T], H^{-1}(U))$ (which means that $\displaystyle\int_0^T||u(t)||_{H^{-1}(U)}^2dt < +\infty$) and $u' \in L^2([0, T], H^{-1}(U))$.
But I actually don't understand how $u'$, the weak derivative of $u$, is defined in this context. When we work on $H^1([0,T], \mathbb{R})$, we have that $u'$ is the weak derivative of $u$ if
$$\int_0^T u(t) \phi'(t) dt = -\int_0^T u'(t) \phi(t) dt$$
for all $\phi \in C_0^\infty([0, T], \mathbb{R})$
Here, I would like to say something like
$$\int_0^T ||u(t)||_{H^{-1}(U)} ||\phi'(t)||_{H^{-1}(U)} dt = \int_0^T ||u'(t)||_{H^{-1}(U)} ||\phi(t)||_{H^{-1}(U)} dt$$
for all $\phi \in C_0^\infty([0, T], H^{-1}(U))$ but it seems to be a total non-sense since we only use the norm of $u'$ in $H^{-1}$... $u$ would then have plenty of derivatives!
So I suppose it can't be that, but I don't know what else it should be. I also think about something like :
$$\int_0^T \langle u(t), \phi'(t) \rangle dt = -\int_0^T \langle u'(t), \phi(t) \rangle dt$$
for all $\phi \in C_0^\infty([0, T], H_0^1(U))$ (where $\langle \cdot , \cdot \rangle$ denotes the action of the first element (which is in $H^{-1}$) on the second one (which is in $H_0^1$)), but I don't see why this should be the right definition... It doesn't have much of an analogy with the definition for $H^1([0, T], \mathbb{R})$ that I recalled above.
AI: I believe your first definition is essentially correct. $u'$ is defined in the usual weak sense:
if $u \in L^1(0,T;X)$, then $v \in L^1(0,T;X)$ is the weak derivative, $u'=v$, if
$$
\int_0^T \varphi'(t)u(t) \, dt = -\int_0^T \varphi(t)v(t) \, dt
$$
$\forall \varphi \in C_0^{\infty}(0,T)$. This is the definition Evans gives. |
H: $m$ balls into $n$ urns
Assume that there are $m$ balls and $n$ urns with $m\gt n$. Each ball is thrown randomly and uniformly into urns. That is, each ball goes into each urn with probability $\dfrac1n$.
What is the probability that there are exactly $r$ urns with at least one ball in it? In other words, what is the probability that there are $n-r$ empty urns?
(I am not sure whether it makes any difference whether the balls and urns are distinguishable or not. If there is a difference, assume that both balls and urns are distinguishable.)
AI: The result should not depend on whether the balls and urns are distinguishable, only the derivation does. Let's assume they are distinguishable.
The number of ways of placing $m$ (distinguishable) balls in $r$ (distinguishable) urns with no empty urn is
$$r! \, S_{m,r}$$
where $S_{m,r}$ is the Stirling number of the second kind. Hence the total number of ways of occupying $r$ urns out of $n$ is
$$ {n \choose r} r! \, S_{m,r}$$
and the probability then is
$$ p= \frac{{n \choose r} r! \, S_{m,r}}{n^m} = \frac{n! \, S_{m,r} }{(n-r)!\, n^m} = {n \choose r} \sum_{j=0}^r (-1)^{r-j} {r \choose j} \left(\frac{j}{n}\right)^m$$
(BTW, this is basically the same as a recent question) |
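A small numerical check of the final formula (my own sketch): compare the closed form with a brute-force count over all $n^m$ equally likely assignments for small $m$ and $n$.

```python
from itertools import product
from math import comb

def p_formula(m, n, r):
    # P(exactly r occupied urns), via the inclusion-exclusion form above
    return comb(n, r) * sum((-1) ** (r - j) * comb(r, j) * (j / n) ** m
                            for j in range(r + 1))

def p_bruteforce(m, n, r):
    hits = sum(1 for balls in product(range(n), repeat=m)
               if len(set(balls)) == r)
    return hits / n ** m

m, n = 5, 3
for r in range(1, n + 1):
    print(r, p_formula(m, n, r), p_bruteforce(m, n, r))
```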
H: Verify my attempt to diagonalize matrix
Decide whether matrix $A$ is diagonalizable. If so, find $P$ such that $P^{-1}AP$ is diagonal.
We are given: $A = \begin{bmatrix}1 & 0 & 0 \\ 1 & 2 & 1 \\ 0 & 0 & 1\end{bmatrix}$
We set up and solve $|A - \lambda I| = 0$, which yields:
$$\left|\begin{matrix}1-\lambda & 0 & 0 \\ -1 & 2-\lambda & -1 \\ 0 & 0 & 1-\lambda\end{matrix}\right| = 0$$
This yields a characteristic polynomial and eigenvalues as:
$$-\lambda^3 + 4 \lambda^2 - 5 \lambda + 2 = (1-\lambda)^2 (2-\lambda) = 0 ~~~\rightarrow ~~~ \lambda_1 = 1, \lambda_{2,3} = 2$$
We have multiplicities of $2$ and $1$ for those eigenvalues.
So, for $\lambda_1 = 1$, we have:
$[A- I]v_1 = \begin{bmatrix}0 & 0 & 0 \\ -1 & 1 & -1 \\ 0 & 0 & 0\end{bmatrix}v_1 = 0$
Thus, $v_1 = (1,1,0)$ and $v_2 = (0,1,1)$ .
Repeating this same process for the second eigenvalue, we have as RREF:
$\begin{bmatrix}1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1\end{bmatrix}v_3 = 0$
This yields an eigenvector of $v_3 = (0,1,0)$.
Putting all of this together, we have the eigenvalue/eigenvector pairs:
$\lambda_1 = 1, v_1 = (1,1,0)$
$\lambda_2 = 1, v_2 = (0,1,1)$
$\lambda_3 = 2, v_3 = (0,2,0)$
So $P=\begin{bmatrix}1 & 0 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 0\end{bmatrix}$.
Is this right?
Do I need to show more work?
Do I need to take any additional steps after I find $P$?
AI: I think you have a mistake, since $|A-\lambda I|=\left|\begin{matrix}1-\lambda & 0 & 0 \\ 1 & 2-\lambda & 1 \\ 0 & 0 & 1-\lambda\end{matrix}\right| = 0$. Expanding by minors, the characteristic polynomial is $p(\lambda)=(1-\lambda)^2(2-\lambda)$, so $\lambda_{1,2}=1$, $\lambda_3=2$. As a result of this small mistake in the matrix, your eigenvectors are not correct. For $\lambda=1$ we can take $$v_{1}=(-1,0,1),\quad v_2=(0,1,-1)$$ and for $\lambda=2$ we get $$v_3=(0,1,0),$$ so $$P=\begin{bmatrix}-1 & 0 & 0 \\ 0 & 1 & 1 \\ 1 & -1 & 0\end{bmatrix}.
$$ |
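A quick numerical check of this $P$ (my own sketch, using NumPy):

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [1., 2., 1.],
              [0., 0., 1.]])
P = np.array([[-1., 0., 0.],
              [ 0., 1., 1.],
              [ 1., -1., 0.]])
# columns of P are eigenvectors, so P^{-1} A P should be diag(1, 1, 2)
print(np.round(np.linalg.inv(P) @ A @ P, 10))
```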
H: On the definition of cofinite.
I am having some difficulty comprehending the definition of a cofinite set. I am seeking confirmation of whether my understanding is correct and some clarification on the definition.
Wikipedia provides the following definition:
Definition: A cofinite subset of a set $X$ is a subset $A$ whose complement in $X$ is a finite set. In other words, $A$ contains all but finitely many elements of $X$.
Based upon my understanding, I have come up with a definition I believe is equivalent.
Definition': Suppose there are two sets, $X$ and $A$ where $A \subset X$. $A$ is cofinite in $X$ if $A^c \cap X$ is finite, where the $c$ superscript denotes the complement.
AI: Yes. It's fine.
Note that $A^c\cap X=X\setminus A$, which may be simpler to read. |
H: Optimization of entropy for fixed distance to uniform
Suppose that I know that a probability distribution with $n$ outcomes is very close to being uniform (that is, to $p_i=\frac{1}{n}$ for all $i$), and in particular, for $n\epsilon\ll 1$, the distribution satisfies
$$\sum_{i=1}^n|\frac{1}{n}-p_i|=\epsilon$$
Now consider the Shannon entropy function:
$$H(p_1,\ldots,p_n)=-\sum_{i=1}^np_i\log_2p_i$$
My question is: what are the probability distributions that, verifying the constraint, maximize and minimize the entropy? My guess is that $(\underbrace{\frac{1}{n},\ldots,\frac{1}{n}}_{n-2\text{ times}},\frac{1}{n}+\frac{\epsilon}{2},\frac{1}{n}-\frac{\epsilon}{2})$ minimizes the entropy and $(\underbrace{\frac{1}{n}-\frac{\epsilon}{n},\ldots,\frac{1}{n}-\frac{\epsilon}{n}}_{n/2\text{ times}},\underbrace{\frac{1}{n}+\frac{\epsilon}{n},\ldots,\frac{1}{n}+\frac{\epsilon}{n}}_{n/2\text{ times}})$ maximizes the entropy but I have not been able to prove any of them.
AI: For the first, you have $H=-(n-2)\frac 1n \log_2 \frac 1n-(\frac 1n + \frac {\epsilon}2)\log_2 (\frac 1n + \frac {\epsilon}2)- (\frac 1n - \frac {\epsilon}2)\log_2 (\frac 1n - \frac {\epsilon}2)
\\=-(n-2)\frac 1n \log_2 \frac 1n-(\frac 1n + \frac {\epsilon}2)[\log_2 \frac 1n +\log_2 (1+\frac {n\epsilon}2)]- (\frac 1n - \frac {\epsilon}2)[\log_2 \frac 1n + \log_2 (1-\frac {n\epsilon}2)]
\\ \approx-\log_2\frac 1n-\frac {n\epsilon^2}{4 \ln 2}$
For the second you have $H=-\frac n2(\frac {1-\epsilon}n \log_2 \frac {1-\epsilon}n + \frac {1+\epsilon}n \log_2 \frac {1+\epsilon}n)
\\=-\frac n2(\frac {1-\epsilon}n (\log_2 \frac 1n + \log_2(1-\epsilon)) + \frac {1+\epsilon}n (\log_2 \frac 1n + \log_2(1+\epsilon)))
\\\approx-\log_2 \frac 1n-\frac {\epsilon^2}{2 \ln 2}$
So the second has a smaller change in entropy, by a factor of $n/2$. |
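A numerical sanity check of the two approximations above (my own sketch; the chosen $n$ and $\epsilon$ are arbitrary):

```python
import numpy as np

def H(p):
    return -np.sum(p * np.log2(p))

n, eps = 50, 1e-3
u = np.full(n, 1.0 / n)

p1 = u.copy()            # two entries perturbed by +/- eps/2
p1[-2] += eps / 2
p1[-1] -= eps / 2

# all entries perturbed by +/- eps/n
p2 = u + (eps / n) * np.concatenate([-np.ones(n // 2), np.ones(n // 2)])

print(np.log2(n) - H(p1), n * eps**2 / (4 * np.log(2)))   # first distribution
print(np.log2(n) - H(p2), eps**2 / (2 * np.log(2)))       # second distribution
```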
H: open subset of $G\times G$
If $O$ be an open symmetric subset of topological group $G$ such that $e\in O$, then is $V_O=\{(a,b)\in G\times G: a^{-1}b\in O\}$ open in $G\times G$?
AI: Yes. $f\colon G\times G\to G$, $(x,y)\mapsto x^{-1}y$ is continuous, hence $V_O=f^{-1}(O)$ is open. (We do not need $e\in O$ or symmetry of $O$ for this) |
H: solving for an argument of arctan()
In the course of writing software to do computer vision I'm trying to "calibrate" the assembly with as little user interaction as possible. As a result I'm getting a few numbers from the point of view of my hardware and then trying to analyze those and figure out the relative spatial positions and angular divergence of the pieces.
Of course I'm doing one plane at a time $(X, Y, Z)$, but even then I'm left with a system of three equations that I can reduce down to two, and that's where I get stuck: I cannot solve it for either $x$ or $y$ (the only two unknowns in this formula):
$$\arctan\left( \frac{ax}{d - y} \right) + \arctan\left(\frac{bx}{d - y} \right) = k$$
What do I do with this? One way for me would be to "try" different $x$ and $y$ and adjust them until the two equations are satisfied (within a given precision) but I'd rather solve it "cleanly", TBH.
Any help would be appreciated.
AI: Use the fact that
$$\arctan P + \arctan Q = \arctan\frac{P+Q}{1-PQ}$$
Then your equation becomes, after a little algebra
$$\frac{(a+b)(d-y) x}{(d-y)^2-a b x^2} = \tan k$$ |
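In code (a sketch of my own, not the asker's software): for a fixed $y$ this relation is a quadratic in $x$, namely $ab\tan k\,x^2+(a+b)(d-y)x-\tan k\,(d-y)^2=0$, so it can be solved directly; all numeric values below are made-up placeholders.

```python
import math

# made-up placeholder values for a, b, d, k and the fixed y
a, b, d, k, y = 0.8, 1.3, 10.0, 0.35, 2.0

t = math.tan(k)
A = a * b * t                  # coefficient of x^2
B = (a + b) * (d - y)          # coefficient of x
C = -t * (d - y) ** 2          # constant term
x = (-B + math.sqrt(B * B - 4 * A * C)) / (2 * A)   # root of interest

# check against the original equation
print(x, math.atan(a * x / (d - y)) + math.atan(b * x / (d - y)), k)
```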
H: What is the notation for the set of all $m\times n$ matrices?
Given that $\mathbb{R}^n$ is the notation used for $n$-dimensional vectors, is there an accepted equivalent notation for matrices?
AI: If $A$ is an $m\times n$ matrix, then $$A \in \mathbb{R}^{m\times n}$$ |
H: Question involving absolute function
I saw this interesting problem in a math puzzle forum:-
Find all integral values of $t$ such that the equation $|s-1| - 3|s+1| + |s+2| = t $ has no solutions.
How does one approach these kind of problems?
AI: Divide into regions like so:
Case 1: Assume $s\ge 1$
Your equation reduces to:
$s-1-3(s+1)+(s+2)=t$
Case 2: Assume $-1 \le s \le 1$
Your equation reduces to:
$-(s-1)-3(s+1)+(s+2)=t$
and so on. |
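Carrying the case analysis through (a quick numerical sketch of my own, not part of the original hint): the function is piecewise linear, unbounded below on the two outer pieces, and attains its maximum value $3$ at $s=-1$, so the equation has no solution exactly for the integers $t\ge 4$.

```python
import numpy as np

s = np.linspace(-10, 10, 200001)
f = np.abs(s - 1) - 3 * np.abs(s + 1) + np.abs(s + 2)
print(f.max())   # 3.0, attained at s = -1; the outer linear pieces go to -infinity
```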