H: diagonal matrix with $k$ 1's and $n-k$ 0's
This is from an old qualifying exam.
Let $V$ be a vector space of dimension $n$ and let $P$ be a projection on $V$ with
$\dim(P(V))=k$. Prove that there is an ordered basis for $V$ such that the matrix representation of $P$ with respect to this basis is a diagonal matrix with $k$ $1$'s and $n-k$ $0$'s on the diagonal.
My solution: Since $P$ is a projection, I can show that the only eigenvalues of
$P$ are $0$ and $1$. Since $\operatorname{rank}(P)=k$ we have $\operatorname{nullity}(P)=n-k$. And so the characteristic polynomial of $P$ is $f(x)=x^{n-k}(x-1)^{k}$ <---( Can I immediately say this
or do I have to prove this fact???). The minimal polynomial $p(x)$ of $P$ must divide $x^2-x$ (because $x^2-x$ annihilates $P$). And so by Cayley Hamilton we must have that $p(x)=x(x-1)$. Since $p(x)$ factors completely into distinct linear factors, we must have that $P$ is diagonalizable. And so there exists a basis in which the matrix representation has the desired form.
Is this a good solution? Is there anything more I have to prove or be clearer about? :)
Thanks for all your help.
AI: It is a bit unclear.
"I can show that the only eigenvalues of $P$ are $0$ and $1$." Taking it literally, it means if $P$ has some eigenvalues, they must be either $0$ or $1$. Note that this is different from saying that "$P$ has $n$ eigenvalues and all of them are either $0$ or $1$." You see, when the field is not algebraically closed (e.g. $\mathbb{R}$), some matrices may not even possess a single eigenvalue.
"And so by Cayley Hamilton ... ." I don't see how Cayley-Hamilton theorem is relevant here. C-H thm says that the minimal polynomial must divide the characteristic polynomial, but here you first establish that $x^2-x$ is an annihilating polynomial, and then argue that the minimal polynomial must divide $x^2-x$. The characteristic polynomial is not used at all.
"... we must have that p(x)=x(x−1)." Why? $I$ and $0$ are projections, but their minimal polynomials are not $x(x−1)$.
On the whole, I think that the arguments you have shown look quite good. However, since the essential step that $P$ has $n$ eigenvalues in $\{0,1\}$ is not shown, I am not sure if you really have a correct proof. |
H: Confidence Interval
After taking 90 observations, you construct a 90% CI for μ. You are told that your interval is 3 times too wide (i.e. your interval is 3 times wider than what was required). Your sample size should have been: (a) 30 (b) 270 (c) 810 (d) 10
The correct answer is (c). But why is it (c)?
AI: Often a confidence interval for a population mean $\mu$ has endpoints $\bar x \pm A \dfrac{S}{\sqrt{n}}$, where $A$ is a number you'd find in a table of values of the t-distribution or the normal distribution, and $n$ is the sample size. Its length would then be some number divided by the square root of $n$. If you want to make the length $1/3$ as big, you'd need to make the denominator $3$ times as big. So you need to make $\sqrt{n}$ three times as big. That means $n$ must be $9$ times as big, since $\sqrt{9}=3$. |
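A quick numerical illustration of the scaling (a minimal sketch, assuming SciPy is available; the function name and the choice sigma = 1 are mine, not part of the answer):

    import math
    from scipy import stats

    def ci_half_width(sigma, n, conf=0.90):
        """Half-width of a normal-based CI for the mean: z * sigma / sqrt(n)."""
        z = stats.norm.ppf(0.5 + conf / 2)
        return z * sigma / math.sqrt(n)

    sigma = 1.0  # hypothetical population standard deviation
    # the n = 810 interval is one third as wide as the n = 90 interval
    print(ci_half_width(sigma, 90) / ci_half_width(sigma, 810))  # 3.0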
H: Evaluate $\int_{D} \frac{dw}{w \cdot (1-w)}$ where $D$ is the rectangle
How can I evaluate the below integral:: $$\int_{D} \frac{dw}{w \cdot (1-w)}$$ using the $\textbf{Cauchy Integral Formula}$, where $D$ is the rectangle with vertices at the points $3 \pm{i}$ and $-1 \pm{i}$.
I know that the Cauchy Integral formula states $$f(a) = \frac{1}{2\pi i}\oint_{C} \frac{f(z)}{z-a} \ dz$$ but not sure as to how to apply.
A solution will atleast help me in knowing how to work with these type of problems in future.
Thanks.
AI: One way would be to start with partial fractions
$$
\frac{1}{w(1-w)} = \frac{A}{w} + \frac{B}{1-w}.
$$
The formula tells you how to handle $\displaystyle\int_D \frac{dw}{w-0}$ and also $\displaystyle\int_D \frac{dw}{w-1}$, and you've got a sum of those two.
Notice that both $0$ and $1$ are inside the region bounded by $D$. Without that, you wouldn't do this the same way.
Notice that if you have a curve $C$ that winds around $a$ but not around $b$, then you could write
$$
\int_C \frac{dw}{(w-b)(w-a)} = \int_C \frac1{w-b}\cdot\frac{dw}{w-a},
$$
and then you could say that that's $\dfrac{1}{w-b}$ evaluated at $w=a$, then multiplied by $\displaystyle\int_C\frac{dw}{w-a}$. The reason is that as the curve shrinks to a point at $a$, the value of the integral does not change (Cauchy's formula tells you that), but $1/(w-b)$ approaches a limit.
If the curve winds around more than one point at which there's a pole, the integral is just the sum of integrals each winding around just one of those points. That's a fact you'll probably see pretty soon if you haven't yet. |
H: Why does the XOR operator work?
In many coding problems, I see applying XOR to the set of values gives the result.
For example :
In the game of nim
Let n1, n2, … nk, be the sizes of the piles. It is a losing position
for the player whose turn it is if and only if n1 xor n2 xor .. xor nk
= 0.
Can somebody explain me why this works? Please give me an idea, so that in the future I'll be able to solve these type of problems.
AI: In nim (in fact, in any finite combinatorial game), the following facts are true:
(1) From a losing position, all moves lead to a winning position.
(2) From a winning position, there is at least one move to a losing position.
(The terms "winning position" and "losing position" are to be interpreted from the point of view of the person whose turn it is to move from the position.)
It is also true that for any finite combinatorial game, there is only one way to decompose the (finite!) set of positions into two classes which satisfy (1) and (2) above.
So, the reason the XOR rule works is that it satisfies these conditions (I write $\oplus$ for XOR):
(1') From a position where $a_1 \oplus a_2 \oplus \ldots \oplus a_k = 0$, all moves lead to
a position where $a_1 \oplus a_2 \oplus \ldots \oplus a_k \ne 0$.
(2') From a position where $a_1 \oplus a_2 \oplus \ldots \oplus a_k \ne 0$, there is at least one move to a position where $a_1 \oplus a_2 \oplus \ldots \oplus a_k = 0$.
To prove (1'), notice that any move can only change one of the $a_i$. If you change a single bit in an XOR of bits, you will change the final result. So if you start with an XOR total of 0, you must end up with an XOR total of $\ne 0$.
To prove (2'), let $a = a_1 \oplus a_2 \oplus \ldots \oplus a_k \ne 0$. In $a$, there is a 1 bit in somewhere. Take the leftmost 1 bit in the total $a$, and look for a 1 bit in that column in one of the $a_i$'s. Say $a_1$ has a 1 bit in that column. Then flip all the bits in $a_1$ corresponding to the 1 bits in $a$. This will make the new XOR total 0, and it is a legal nim move because the highest order bit changed in $a_1$ was changed from 1 to 0, hence the value of $a_1$ decreased as required by the rules of nim. An example with 5 piles:
*
1 1 0 0 1
0 1 1 1 1
1 0 0 1 1
0 0 0 1 1
0 1 1 0 1
---------
0 1 0 1 1 (the bitwise XOR)
The leftmost 1 bit is in the position indicated by *. The first pile (11001 binary) has a 1 in the * position. We flip all bits in which there is a 1 in the total. So the new first pile is (10010 binary). You can check that this move makes the new XOR total equal to 0, and also we decreased the size of the pile (from 25 to 18). |
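For experimentation, here is a small Python sketch of the move described in the answer (the function name and pile encoding are my own, not from the answer):

    def winning_move(piles):
        """Return (index, new_size) for a move to XOR-total 0, or None if already losing."""
        total = 0
        for p in piles:
            total ^= p
        if total == 0:
            return None  # losing position: every move leaves a nonzero XOR total
        for i, p in enumerate(piles):
            # a pile with a 1 in the leading bit of `total` can be lowered:
            # p ^ total flips exactly the bits set in total, and decreases p
            if p ^ total < p:
                return i, p ^ total

    # the 5-pile example from the answer (11001, 01111, 10011, 00011, 01101 in binary)
    print(winning_move([25, 15, 19, 3, 13]))  # (0, 18): replace the first pile, 25, by 18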
H: Find the intersection (vector) of two 2D planes
Wall #1 is defined by [1,1,6]^T and [2,0,7]^T. Wall #2 is defined by [1,1,2]^T and [3,2,-1]^T. Perform the steps that a CAD program might do to find a vector which represents the corner created where these two walls intersect.
Hint: the left nullspace is perpendicular to the column space.
Any help would be greatly appreciated. This problem has me confused. Maybe because of the 'hint' given.
AI: Assuming coordinates given are for vectors parallel to walls.
Let $A$ be the cross product of the first two vectors. This is perpendicular to the first wall.
Then make $B$ similarly for the second wall.
Then $A\times B$ is in direction of the intersection of the two walls.
Alternatively assume the first wall is $ax+by+cz=0$ then fit the first two vectors in it ie.
$ 1a+1b+6c=0$ and $2a+0b+7c=0$ solve these to get the equation of first wall $W_1$.
Then do the same for the second wall
Solve $1a+1b+2c=0$ and $3a+2b-1c=0$ to get equation of second wall $W_2$.
Finally put the two equations $W_1$ and $W_2$ together and solve the resulting system.
the solution will be the equation of the
line of intersection. |
H: Statistic question
1) A student took a chemistry exam where the exam scores were mound-shaped with a mean score of 90 and a standard deviation of 64. She also took a statistics exam where the scores were mound-shaped, the mean score was 70 and the standard deviation was 16. If the student's grades were 102 on the chemistry exam and 77 on the statistics exam, then:
a. the student did relatively better on the chemistry exam than on the statistics exam, compared to the other students in each class
b. the student's scores on both exams are comparable, when accounting for the scores of the other students in the two classes
c. it is impossible to say which of the student's exam scores indicates the better performance
d. the student did relatively better on the statistics exam than on the chemistry exam, compared to the other students in the two classes
e. the student did relatively the same on both exams
The correct answer is D but I don't know why.
AI: In Chemistry, she was $\frac{102-90}{64}$ standard deviation units above the mean.
In Statistics, she was $\frac{77-70}{16}$ standard deviation units above the mean. Note that $\frac{7}{16}$ is quite a bit larger than $\frac{12}{64}$, so the performance in Statistics is (comparatively) better than in Chemistry.
Measuring in standard deviation units above (or below) the mean takes account of the different means and different amounts of variability in the class scores in the two subjects.
"Mound-shaped" presumably means more or less normally distributed. The probability that a normal random variable with mean $90$ and standard deviation $64$ is $\ge 102$ is equal to the probability that a standard normal is $\ge \frac{102-90}{64}$. If we look at a table of the standard normal, this is approximately $0.43$. The corresponding result for Statistics is about $0.33$. Informally, that means that about $43\%$ of the students were above her in Chemistry, and only about $33\%$ were above her in Statistics.
Remark: In my experience, unmanipulated marks from Mathematics exams have a very far from normal distribution. They are often consistent with being drawn from $2$ or more populations with radically different means. |
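To reproduce the numbers in the answer, a minimal SciPy computation (the variable names are mine):

    from scipy import stats

    z_chem = (102 - 90) / 64   # 0.1875
    z_stat = (77 - 70) / 16    # 0.4375

    # fraction of students (under a normal model) scoring above her
    print(1 - stats.norm.cdf(z_chem))  # about 0.43 in Chemistry
    print(1 - stats.norm.cdf(z_stat))  # about 0.33 in Statistics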
H: $(a_n+a_{n+1})$ is convergent implies $(a_n/n)$ converges to $0$
Let $(a_n)$ be a sequence such that $(a_n+a_{n+1})$ is convergent. Prove that $(a_n/n)$ converges to $0$.
$(a_n+a_{n+1})$ convergent to $L$ means that for all $\epsilon$, there exists $N$ such that for all $n\geq N$, $|L-a_n-a_{n+1}|<\epsilon$, or in other words $L-\epsilon<a_n+a_{n+1}<L+\epsilon$. How to proceed from here?
AI: Hint: $(a_n+a_{n+1})$ convergent implies $(a_{n+2}-a_n)\to 0$ (since $a_{n+2}-a_n=(a_{n+1}+a_{n+2})-(a_n+a_{n+1})$), so there exists $N$ such that, for any $n>N$, $|a_{n+2}-a_n|<\epsilon$. Note that $a_n=(a_n-a_{n-2})+(a_{n-2}-a_{n-4})+\ldots+(a_{N+2}-a_N)+a_N$ (here I assume $n$ and $N$ are both even or both odd; in the other case we can do this similarly), then try to prove the claim by definition (using $\epsilon$-$N$ language). |
H: recurrence relation for proportional division
Consider the following recurrence relation, for a function $D(x,n)$, where x is a positive real number and n is a positive integer:
$$ D(x,1) = x $$
$$ D(x,n) = \min_{k=1..n-1}{D(xk/n,k)} \ \ \ \ [n>1] $$
This formula can be interpreted as describing a process of dividing a value of x to n people: if there is a single person ($n=1$), then he gets all of the value x, and if there are more people, they divide x in a proportional way.
By induction on n, it is possible to prove that:
$$ D(x,n) = x/n $$
PROOF: For $n=1$, this is given. Assume it is true for $1, 2, ... n-1$. Then:
$$ D(x,n) = \min_{k=1..n-1}{D(xk/n,k)} = \min_{k=1..n-1}{x/n} = x/n $$
Now consider a slight modification of the formula:
$$ E(x,1) = x $$
$$ E(x,2) = x/2 $$
$$ E(x,n) = \min_{k=2..n-1}{E(x(k-1)/n,k)} \ \ \ \ [n>2] $$
The modified version models division with loss - in each division process, we lose some value: we give to $k$ people, only $(k-1)/n$ of the original value. What is the solution to this formula?
AI: By induction on n, it is possible to prove that, for $n \geq 2$:
$$ E(x,n) = \frac{x}{n(n-1)} $$
PROOF: For $n=2$, this is given. Assume it is true for $2, ... n-1$. Then:
$$ E(x,n) = \min_{k=2..n-1}{E(x(k-1)/n,k)} = \min_{k=2..n-1}\frac{x}{nk} = \frac{x}{n(n-1)} $$
This is interesting - even a small loss in the division process, leads to a large reduction of the value each person gets, from $O(1/n)$ to $O(1/n^2)$. |
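A small Python check of the claimed closed form against the recurrence (the function name is mine; it only samples a few values of $n$, so it is a sanity check rather than a proof):

    def E(x, n):
        if n == 1:
            return x
        if n == 2:
            return x / 2
        return min(E(x * (k - 1) / n, k) for k in range(2, n))

    # compare the recurrence with the closed form x / (n (n - 1)) for n >= 2
    for n in range(2, 8):
        print(n, E(1.0, n), 1.0 / (n * (n - 1)))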
H: Clarification needed on finding last two digits of $9^{9^9}$
I stumbled across this problem here. In the answer given by the user Gone, I don't see how he makes use of the second line in the last line. Could someone explain why he calculated $9^{10}$ via binomial theorem and where it is applied?
AI: The idea is that he wanted to calculate a value of $N$ such that $9^N \equiv 1 \pmod{100}$. (He determined that $N = 10$ would work.)
Once we found such an $N$, in order to calculate $9^M \pmod{100}$, we merely need to look at $M \pmod{N}$, and then we know that
$9 ^ M \equiv 9^{M \pmod{N}} \pmod{100}$.
In particular, this gives us that
$$9^{9^9} \equiv 9^{ 9 ^ 9 \pmod{10} } \pmod{100}.$$
As to how it is proven, you can read the answer by anon (to the original question). |
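The whole computation is easy to check numerically in Python with three-argument pow (my own cross-check, not part of the linked answer):

    # last two digits of 9^(9^9): exponents of 9 matter only mod 10, since 9^10 = 1 (mod 100)
    print(pow(9, 10, 100))             # 1
    print(pow(9, 9, 10))               # 9, i.e. 9^9 leaves remainder 9 mod 10
    print(pow(9, pow(9, 9, 10), 100))  # 89
    print(pow(9, 9**9, 100))           # 89, computed directly as a cross-check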
H: Convergence of nth root iteration?
I'm on problem 18 of baby Rudin, which asks you to describe the behavior of the sequence defined by
$$x_{n+1}=\frac{p-1}{p}x_n+\frac{\alpha}{p} x_n^{1-p}$$
with $x_1>\sqrt{\alpha}$, for given $\alpha>1$ and positive integer $p$. From what I can tell it's a common formula for calculating the pth root of alpha, and can be found from applying Newton's method to the equation $x^p-\alpha=0$.
(the restriction on $x_1$ can be changed to be $x_1>(\alpha)^{1/p}$; It's loosely worded. Since I proved in an earlier problem convergence for $p=2$, we can assume $p \ge 3$.)
I can prove directly that $x_{n+1}<x_n$ if and only if $x_n>(\alpha)^{1/p}$, just by expanding the first statement. I've had trouble proving that $x_{n+1}>(\alpha)^{1/p}$ whenever $x_n>(\alpha)^{1/p}$ but it appears to be true.
I have a feeling that the following identity $$x_{n+1}=x_n-\frac{x_n}{p} (1-\frac{\alpha}{x_n^p})$$
is useful, but I'm not sure how I could make use of it.
Does anyone know (hints welcome) how I can prove that this sequence converges to $\alpha^{1/p}$?
(So, monotonicity can be proven as described above, and given that the sequence converges I can prove that it converges to $\alpha^{1/p}$, but I still haven't proved boundedness)
AI: Hint: You have a monotonically decreasing sequence.
Hint: A monotonically decreasing sequence that is bounded below, converges to a limit. Show that your sequence is bounded below by using AM-GM
Hint: The limit is unique. Prove that it must satisfy a certain equation which has a unique real root. Hence the limit is $\alpha ^ { \frac{1}{p} } $.
Hint 2's AM-GM on $p-1$ terms of $x_n$ and 1 term of $\alpha x_n^{1-p}$.
$$x_{n+1} = \frac { (p-1) \times x_n + 1 \times (\alpha x_n^{1-p} ) } {p} \geq \sqrt[p]{x_n^{p-1} \times (\alpha x_n^{1-p} ) } = \sqrt[p]{\alpha}$$ |
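A quick numerical sanity check of the iteration (a sketch with values I picked; not part of the hints above):

    def nth_root_iter(alpha, p, x1, steps=30):
        """Iterate x_{n+1} = ((p-1)/p) x_n + (alpha/p) x_n^(1-p)."""
        x = x1
        for _ in range(steps):
            x = (p - 1) / p * x + alpha / p * x ** (1 - p)
        return x

    alpha, p = 5.0, 3
    print(nth_root_iter(alpha, p, x1=2.0))  # ~1.70998, starting above alpha^(1/p)
    print(alpha ** (1 / p))                 # 5^(1/3) is about 1.70998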
H: Find an orthogonal matrix $Q$ so that the matrix $QAQ^{-1} $ is diagonal.
The question is as follows:
$A=\left( \begin{array}{ccc}
1 &1& 1 \\
1 & 1 & 1 \\
1 & 1 & 1 \end{array} \right) $ Find an orthogonal matrix $Q$ so that the matrix $QAQ^{-1} $ is diagonal. Verify this by direct computation.
My friend knows how to find the eigenvector for the eigenvalue of 3, but doesn't know how to find the eigenvector for the eigenvalue of 0: when he computes it he gets a different answer than given by the solution.
He found the vectors $\left( \begin{array}{ccc}
-1 \\
1 \\
0 \end{array} \right) $ and $\left( \begin{array}{ccc}
-1 \\
0 \\
1 \end{array} \right) $ by computing $Ker(A-0I)$.
However, according to the solutions, the eigenvectors for the eigenvalue 0 are $\left( \begin{array}{ccc}
0 \\
1 \\
-1 \end{array} \right) $
and $\left( \begin{array}{ccc}
-1 \\
1/2 \\
1/2 \end{array} \right) $
AI: When you are talking about finding vectors as solutions to the equation $Ax = \lambda x$, the solution is a subspace, not simply a finite set of vectors.
Hence, the statement that the 'solution' is making is the same as what your friend is saying, since the subspace generated by those sets of vectors is the same (verify this).
It doesn't matter which set of vectors you use (though of course, it will change your $Q$). |
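If it helps to see the whole thing numerically, here is a minimal NumPy check (a numerical verification, not the by-hand computation the exercise asks for):

    import numpy as np

    A = np.ones((3, 3))
    w, V = np.linalg.eigh(A)          # columns of V: orthonormal eigenvectors of the symmetric A
    Q = V.T                           # Q is orthogonal, and Q A Q^{-1} = Q A Q^T
    print(np.round(w, 10))            # eigenvalues 0, 0, 3
    print(np.round(Q @ A @ Q.T, 10))  # the diagonal matrix diag(0, 0, 3)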
H: Why do Artinian rings have dimension 0 and not 1?
One of the properties of an Artinian ring $R$ is that every prime ideal is maximal. So, if $\mathfrak{m}$ is a nonzero prime ideal, $(0)\subseteq \mathfrak{m}$ is a length-$1$ chain of prime ideals, meaning that $\dim R\geq 1$. Since every prime ideal is maximal, no longer chains exist, meaning $\dim R=1$. But Artinian rings are supposed to have dimension $0$. What am I missing?
AI: Being an Artin ring does not require that $(0)$ is a prime ideal. In fact, if $(0)$ is prime in the Artin ring $R\neq 0$, then $(0)$ is the only maximal ideal in $R$. Therefore $R$ is a field since every nonzero element is not contained in a maximal ideal, hence they are all units. |
H: What is continuity correction in statistics
Can someone please explain to me the idea behind continuity correction and when is it necessary to add or subtract $\dfrac{1}{2}$ from the desired number (how do we tell whether we need to add or subtract), how do we tell when we need to use continuity correction?
AI: The continuity correction comes up most often when we are using the normal approximation to the binomial. It comes up sometimes when we are approximating a Poisson distribution with large $\lambda$ by a normal.
Let $X$ be a binomially distributed random variable that represents the number of successes in $n$ independent trials, where the probability of success on any trial is $p$. Let $Y$ be a normal random variable with the same mean and the same variance as $X$.
Suppose that $npq$ is not too small. Then if $k$ is an integer, $\Pr(X\le k)$ is reasonably well-approximated by $\Pr(Y\le k)$. It is ordinarily better approximated by $\Pr(Y\le k+\frac{1}{2})$. The difference can be significant when $n$ is not large. When $np(1-p)$ is big, say bigger than $100$, the continuity correction makes little practical difference.
The continuity correction is less important than it used to be. For with modern software, we can compute $\Pr(X\le k)$ essentially exactly.
It is easy to get confused when using the continuity correction. In particular, the question that you asked comes up: when do we add $\frac{1}{2}$, and when do we subtract? I deal with that by remembering only one rule. To repeat,
Rule: If $k$ is an integer, then $\Pr(X\le k)\approx \Pr(Y\le k+\frac{1}{2})$, where $Y$ is a normal with the same mean and variance as $X$.
Let us look at a couple of examples. Let $X$ have a binomial distribution. Approximate the probability that $X\lt k$, where $k$ is an integer. This doesn't quite look like our Rule. Note we have $\lt k$, not $\le k$. But $X\lt k$ if and only if $X\le k-1$. Now we have the right shape. The answer is, approximately, $\Pr(Y\le (k-1)+\frac{1}{2})$, where $Y$ is the appropriate normal.
This is $\Pr(Y\le k-\frac{1}{2})$, so in a sense we subtracted. But it all came from the one Rule, where we always add, but pay close attention to the difference between $\lt$ and $\le$.
What is the probability that $X\gt k$? This is $1-\Pr(X\le k)$. Thus we get that the result is approximately $1-\Pr(Y\le k+\frac{1}{2})$.
A numerical example: Toss a fair coin $100$ times. Approximate the probability that the number of heads is $\le 55$.
By working directly with the binomial, and software, I get this is, to $6$ figures, $0.864373$. That's the "right" answer.
Using $\Pr(Y\le 55)$, where $Y$ is normal mean $50$, standard deviation $5$, no continuity correction, I get the approximation $0.8413$.
Using the continuity correction, I get the approximation $0.8643$. I should really do a few other examples, the continuity correction is too good here! |
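The numbers quoted above can be reproduced with a few lines of SciPy (a minimal sketch; the variable names are mine):

    from scipy import stats

    n, p, k = 100, 0.5, 55
    exact = stats.binom.cdf(k, n, p)                      # 0.864373 to six figures
    mu, sigma = n * p, (n * p * (1 - p)) ** 0.5           # mean 50, standard deviation 5
    no_correction = stats.norm.cdf(k, mu, sigma)          # ~0.8413
    with_correction = stats.norm.cdf(k + 0.5, mu, sigma)  # ~0.8643
    print(exact, no_correction, with_correction)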
H: A recursive formula for $a_n$ = $\int_0^{\pi/2} \sin^{2n}(x)dx$, namely $a_n = \frac{2n-1}{2n} a_{n-1}$
Where does the $\frac{2n-1}{2n}$ come from?
I've tried using integration by parts and got $\int \sin^{2n}(x)dx = \frac {\cos^3 x}{3} +\cos x +C$, which doesn't have any connection with $\frac{2n-1}{2n}$.
Here's my derivation of $\int \sin^{2n}(x)dx = \frac {\cos^3 x}{3} +\cos x +C$:
$\int\sin^{2n+1}x\,dx=\int(1-\cos^2x)\sin xdx=\int -(1-u^2)du=\int(u^2-1)du=\frac{u^3}{3}+u+C=\frac{\cos^3x}{3}+\cos x +C$
where
$u=\cos x$;$du=-\sin x dx$
Credits to Xiang, Z. for the question.
AI: HINT:
Using this, $$\int_0^{\frac\pi2}\sin^{2n}xdx$$
$$=\int_0^{\frac\pi2}\sin^{2n}\left(\frac\pi2+0-x\right)dx$$ as $\int_a^bf(x)dx=\int_a^bf(a+b-x)dx$
$$=\int_0^{\frac\pi2}\cos^{2n}xdx$$
$$=\frac{\cos^{2n-1}x\sin x}{2n}\big|_0^{\frac\pi2}+\frac{2n-1}{2n}\int_0^{\frac\pi2}\cos^{2n-2}xdx$$
$$=0+\frac{2n-1}{2n} \int_0^{\frac\pi2}\cos^{2n-2}\left(\frac\pi2+0-x\right)dx\text{ for } n-1>0\iff n>1$$
$$=\frac{2n-1}{2n} \int_0^{\frac\pi2}\sin^{2n-2}xdx$$ |
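A quick numerical check of the recursion $a_n=\frac{2n-1}{2n}a_{n-1}$ (my own sketch, assuming SciPy; it just compares both sides for a few $n$):

    import numpy as np
    from scipy import integrate

    def a(n):
        val, _ = integrate.quad(lambda x: np.sin(x) ** (2 * n), 0, np.pi / 2)
        return val

    for n in range(1, 5):
        print(n, a(n), (2 * n - 1) / (2 * n) * a(n - 1))  # the two columns agree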
H: Partial fractions on complex function for Laurent series
Whats the proper way to calculate the partial fractions for functions like these
$$ f(z)=\frac{z}{z^2(z+1)} \\
f(z)=\frac{z}{(2-z)^3z}
$$
before calculating the Laurent series?
AI: $$\frac z{z^2(z+1)}=\frac1{z(z+1)}=\frac1z-\frac1{z+1}\;--\text{almost no need for calculations: just good "eye"}$$
$$\frac z{(2-z)^3z}=-\frac1{(z-2)^3}\;,\;\text{and this is even more boring than the first one...}$$ |
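If you want to double-check the two decompositions by machine, here is a short SymPy verification (my own check, not part of the answer):

    import sympy as sp

    z = sp.symbols('z')
    f1 = z / (z**2 * (z + 1))
    f2 = z / ((2 - z)**3 * z)
    print(sp.cancel(f1 - (1/z - 1/(z + 1))))  # 0: confirms the first decomposition
    print(sp.cancel(f2 - (-1/(z - 2)**3)))    # 0: confirms the second identity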
H: Differential manipulation
Let $v \equiv F(y, z)$. The partial derivatives of $F$ are $$F_1 \equiv \frac{\partial F(y,z)}{\partial y} = \frac{H(v)}{H(y)},$$ $$F_2 \equiv \frac{\partial F(y,z)}{\partial z} = r\frac{H(v)}{H(z)},$$ where $H$ is an arbitrary function. This is taken from the book Probability Theory by E.T. Jaynes.
The author goes on saying that the relation $ \mathrm{d}v = \mathrm{d}F(y, z) = F_1\mathrm{d}y + F_2\mathrm{d}z$ takes the form of
$$\frac{\mathrm{d}v}{H(v)} = \frac{\mathrm{d}y}{H(y)} + r\frac{\mathrm{d}z}{H(z)}$$
or, on integration
$$w[F(y, z)] = w(v) = w(y)w^r(z),$$
where
$$w(x) \equiv \exp\left\{\int^x \frac{\mathrm{d}x}{H(x)}\right\}.$$
I did undergraduate calculus; however, we didn't do this kind of differential manipulation (expansion of $\mathrm{d}v$) and I find it quite confusing. Is there a way to arrive at the same result using "standard" steps like integration by substitution etc.?
AI: You can try thinking of the equation
$$
dv = F_1 dy + F_2 dz
$$
in this way : if $y$ and $z$ were to depend on some variable $t$, then $v$ would depend on this variable $t$ as well because $v = F(y,z)$. Therefore, by the chain rule,
$$
\frac{dv}{dt} = F_1 \frac{dy}{dt} + r F_2 \frac{dz}{dt} = \frac{H(v)}{H(y)} \frac{dy}{dt} + r \frac{H(v)}{H(z)} \frac{dz}{dt}.
$$
By putting $H(v)$ on the left side, you get
$$
\frac 1{H(v)} \frac {dv}{dt} = \frac 1{H(y)} \frac{dy}{dt} + r \frac 1{H(z)} \frac{dz}{dt}.
$$
Integrate this equation with respect to $t$ :
$$
\int \frac 1{H(v)} \frac {dv}{dt} \, dt = \int \frac 1{H(y)} \frac{dy}{dt} \, dt + r \int \frac 1{H(z)} \frac{dz}{dt} \, dt.
$$
These three integrals are all 'the same' (besides the fact that the name of the variable changes), and the change of variables is written down for you! Take $v(t)$ as the new variable :
$$
\int \frac 1{H(v)} \frac{dv}{dt} \, dt = \int \frac{dv}{H(v)} \overset{def}{=} w(v(t)).
$$
In other words, the function $w(v)$ is defined as this integral because we have no general means of computing it. Since we now have
$$
w(v(t)) = w(y(t)) + r w(z(t)),
$$
writing $v(t) = F(y(t), z(t))$, we are basically done. (Note that we assumed that $H$ was not arbitrary, but rather a non-zero function for which $1/H$ has primitive, and this is a very strong assumption compared to 'arbitrary'!)
Hope that helps, |
H: Sperner's Lemma in infinite-dimensional spaces?
I've been looking at Sperner's Lemma for a little while and have managed to come to grips with some of the combinatorial proofs. Some descriptions I have encountered claim to prove it for "simplices" and some for "$n$-simplices", and there didn't seem to be any particularly large differences between the proofs. On the other hand, it seems nontrivial to formulate Sperner's Lemma in infinite dimensions.
It doesn't seem like there should be anything preventing us from defining simplices in arbitrary real infinite-dimensional topological vector spaces (RIDTVSs). But having little experience with them, I don't know how Sperner's Lemma would work there. In particular I know that closed balls in RIDTVSs are not compact, which could reasonably be hiding in the background of the combinatorial proofs. It certainly plays a role in the equivalent Brouwer's Fixed Point Theorem.
Does anyone know of a formulation of Sperner's Lemma in RIDTVSs, or at least have a reference to an analytic proof in finite dimensions that might be illuminating?
AI: We can just consider $\mathbb R^\infty $, the set of all infinite sequences of real numbers with almost all entries equal to $0$. Then the infinite dimensional simplex is most naturally defined to be the closure of the convex hull of all the standard basis vectors (i.e., all zeros with only one $1$). Things basically work. See this article. |
H: Simple question of maximum value a part can have?
We have to partition n chocolates among m children.
Children will be happy if the difference between the max and the min a child has got is less than 2.
What is the max a child can get??
For n=6 m=3 ,the partition will be 2 2 2
Maximum a child can get is 2
For n=7 m=3 ,the partition will be 2 3 2
Maximum a child can get is 3
How to get maximum without knowing the partition?
AI: Start by giving each child $\left\lfloor\frac{n}m\right\rfloor$ pieces; if there are any left over, give $n-m\left\lfloor\frac{n}m\right\rfloor$ of the children one more piece each. This is the only way to keep the difference between the minimum and maximum less than $2$. The maximum will then be $\left\lceil\frac{n}m\right\rceil$.
If the floor and ceiling functions are unfamiliar, let $n=mq+r$, where $0\le r<m$. If $r=0$, each child gets $q$ chocolates. (Giving any child more than that forces you to give some child fewer than $q$, making the difference between maximum and minimum greater than $1$.) If $r>0$, give $r$ children one more chocolate each; the maximum will then be $q+1$ and the minimum $q$. |
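In code this is just a ceiling (a trivial sketch, with my own function name):

    import math

    def max_chocolates(n, m):
        """Largest share when n chocolates are split among m children with max - min < 2."""
        return math.ceil(n / m)

    print(max_chocolates(6, 3))  # 2
    print(max_chocolates(7, 3))  # 3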
H: Simple problem on restricted partition
When finding number of ways to partition n distinct chocolates among m children such that each child has at most
$$\left\{\begin{matrix}
\left \lfloor \frac{n}{m} \right \rfloor & \text{if} \ \ n\ \ \left (mod \ \ m \right )\equiv 0 \\
\\
\left \lceil \frac{n}{m} \right \rceil & \text{if} \ \ n\ \ \left (mod \ \ m \right )\not\equiv 0
\end{matrix}\right.$$
How can find the number of unique partitions be found?
For example n=5 and m=3 the answer is 90
AI: This is a bit messy. Suppose that $r$ children get $q+1$ chocolates each, and the remaining $m-r$ get $q$ chocolates each. There are $\binom{m}{m-r}=\binom{m}r$ ways to choose which children get $q$ chocolates. The first of those can get any of $\binom{n}q$ sets of $q$ chocolates; the second can then get any of the $\binom{n-q}q$ remaining sets of $q$ chocolates; and so on. Thus, there are
$$\binom{m}r\binom{n}{q,q,\dots,q}$$
ways to choose and accommodate the $m-r$ children who get $q$ chocolates each, where the multinomial coefficient has $m-r$ $q$’s.
That leaves $r(q+1)$ chocolates for the $r$ lucky children who get $q+1$ each; they can be distributed in
$$\binom{r(q+1)}{q+1,q+1,\dots,q+1}$$
ways, where the multinomial coefficient has $r$ lower numbers. The total number of distributions is therefore
$$\binom{m}r\binom{n}{q,q,\dots,q}\binom{r(q+1)}{q+1,q+1,\dots,q+1}=\binom{m}r\binom{n}{q,\dots,q,q+1,\dots,q+1}\;,$$
where the last multinomial coefficient has $m-r$ $q$’s and $r$ $(q+1)$’s. In your example of $n=5,m=3$ we have $q=1$ and $r=2$, so this becomes
$$\binom32\binom5{1,2,2}=3\cdot\frac{5!}{1!2!2!}=3\cdot30=90\;.$$ |
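The final formula is easy to evaluate in Python, e.g. to reproduce the $90$ (the helper below is my own, not part of the answer):

    from math import comb, factorial

    def count_distributions(n, m):
        """Ways to hand n distinct chocolates to m children with shares differing by < 2."""
        q, r = divmod(n, m)
        # choose the r children who get q+1 chocolates, then distribute via a multinomial
        return comb(m, r) * factorial(n) // (factorial(q) ** (m - r) * factorial(q + 1) ** r)

    print(count_distributions(5, 3))  # 90, as in the worked example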
H: Analytic continuation of Riemann Zeta function.
I am reading about zeta function from book by Ingham. In that book the following theorem is given. I am unable to understand what does he mean by finite part of plane in the statement.
AI: By finite part of the plane he probably means the complex plane itself. The infinite part is probably the point of infinity of the Riemann sphere, where the zeta function has an essential singularity. |
H: Some questions on outer region and outer solution of a boundary problem
There is this problem below that I have some doubts and confusions, I will appreciate if anyone could provide some clarifications and explanations. I am new to the Boundary Layer Theory, this question involves the Boundary Layer Theory, it is a collection of perturbation methods for solving differential equations whose solutions exhibit a boundary layer structure.
Consider the boundary value problem $\epsilon y''+(1+\epsilon)y'+y=0$, $y(0)=0,y(1)=1$. Keeping $x$ fixed and taking the limit $\epsilon\to0$ (these are called outer regions) we approximate the solution of the differential equation by the solution to the outer equation $eq_{out}=y_{out}'(x)+y_{out}(x)=0$. Since this is a first order equation it cannot satisfy both boundary conditions. The outer solution satisfies only $y_{out}(1)=1$.
My understanding is that an outer region is a region outside the boundary layer. Now comes my question, by taking the limit $\epsilon\to0$, we get $y'+y=0$, then how can we tell where the outside region lies? How can we know the outer solution satisfies only $y_{out}(1)=1$?
I think understanding this step is crucial to proceed, and that is the reason I want to get the ideas correct.
Thanks in advance for any helps!
AI: Let's pretend we know nothing at all about boundary layers and just try to solve this equation by expanding $y$ in powers of $\epsilon$, i.e.
$$ y(x) = y_{0}(x) + \epsilon y_{1}(x) + \epsilon^{2}y_{2}(x) + \ldots$$
Substitute this expression for $y(x)$ into the given differential equation and then collect the like powers of $\epsilon$ to recover the following:
$$ \begin{array}{ll} \epsilon^{0}: & y_{0}' + y_{0} = 0 \\ \epsilon^{1}: & y_{0}'' + y_{0}' + y_{1}' + y_{1} = 0 \\ \epsilon^{2}: & y_{1}'' + y_{1}' + y_{2}' + y_{2} = 0 \end{array}$$
and so on. We solve the first equation and find that $y_{0}(x) = Ae^{-x}$. Substitute into the second equation and solve to find that $y_{1}(x) = Be^{-x}$. Substitute into the third equation to find that $y_{2}(x) = Ce^{-x}$ and so on.
We realise that all these solutions are effectively the same, so we confidently claim that
$y(x) = Ae^{-x}$ and now we just have to fix $A$ by using the boundary conditions. Unfortunately, we experience a sinking feeling when we realise that we have two boundary conditions, $y(0) = 0$ and $y(1) = 1$, but only one free variable.
Our solution cannot satisfy the boundary condition at $x = 0$ (unless $A = 0$, but then we will recover the trivial solution), so we pick $A = e$ so that $y(1) = 1$. To satisfy the boundary condition at zero we introduce a boundary layer of thickness $\delta$ at zero, i.e. we rescale $x = \delta X$ and $y(x) = Y(X)$. Substitute into our equation
$$ \frac{\epsilon}{\delta^{2}} Y'' + \frac{1+\epsilon}{\delta}Y' + Y = 0.$$
To ensure that the terms in this equation are balanced we choose $\delta = \epsilon$, then
$$ Y'' +(1+\epsilon) Y' + \epsilon Y = 0.$$
Note that the first two terms are both order one, and the third term is negligibly small, i.e. the first two terms can balance each other out so that the entire expression is equal to zero.
If we had chosen some other width for the boundary layer, e.g. $\delta = \sqrt{\epsilon}$ then our equation would become
$$ Y'' + \frac{1+\epsilon}{\sqrt{\epsilon}} Y' + Y = 0. $$
This is unbalanced - although the $Y''$ and $Y$ terms are both order one, the middle term is no longer negligible, instead it blows up as $\epsilon \rightarrow 0$.
What remains is to solve the rescaled equation by expanding $Y(X) = Y_{0}(X) + \epsilon Y_{1}(X) + \ldots$, and then to match the rescaled (or inner solution) to the outer solution using Van Dyke's rule. I may add those details later. |
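To see the outer solution at work numerically, here is a small sketch of my own (it uses the exact solution, which is available here because the characteristic polynomial $\epsilon m^2+(1+\epsilon)m+1$ factors as $(\epsilon m+1)(m+1)$):

    import numpy as np

    eps = 0.01
    x = np.linspace(0, 1, 11)

    # exact solution of eps*y'' + (1+eps)*y' + y = 0, y(0)=0, y(1)=1:
    # y = C*(exp(-x) - exp(-x/eps)), with C fixed by y(1) = 1
    C = 1 / (np.exp(-1) - np.exp(-1 / eps))
    exact = C * (np.exp(-x) - np.exp(-x / eps))

    outer = np.exp(1 - x)  # outer solution A*exp(-x) with A = e, i.e. y_out(1) = 1
    print(np.max(np.abs(exact - outer)[x > 0.1]))  # tiny away from the layer at x = 0
    print(exact[0], outer[0])                      # at x = 0 the outer solution misses the BC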
H: In ZF, does there exist an ordinal of provably uncountable cofinality?
Question is in the title. In ZFC, one can prove that $\aleph_{\alpha+1}$ is regular, so there is a large source of cardinals with uncountable cofinality, but in ZF, it is consistent that ${\rm cf}(\aleph_1)=\aleph_0$, and most conceivable limit alephs also have cofinality $\aleph_0$. I recall reading in a book that it is "unknown" if there are any ordinals that are provably of uncountable cofinality in ZF, so this is really a reference request for progress on this problem. Is this a "provably unprovable" problem? Are there any large cardinal hypotheses (other than AC) that shed some light on the problem?
AI: No. As Miha quotes, it is consistent (relative to some very very large cardinals) that no initial ordinal (read: $\aleph$ number) has an uncountable cofinality. Since the cofinality of an ordinal is always an initial ordinal, this finishes the proof.
Note that very large cardinals are necessary. If $\operatorname{cf}(\omega_1)=\operatorname{cf}(\omega_2)=\omega$ then there is an inner model with a Woodin cardinal. So to have all the cardinals with countable cofinality you have to expect some proper class of very large cardinals.
Gitik proved this from a proper class of strongly compact cardinals, which is quite a large assumption. (Note, however that "proper class of ..." is quite scary, but still weaker from something like "inaccessible cardinal which is a limit of ...") |
H: Find $f(5)$ of a non-constant polynomial function $f(x)$
Suppose $f(x)$ is a non-constant polynomial such that $f(x^3) - f(x^3 - 2) = f(x)\cdot f(x) + 12$ for all $x$. Find $f(5)$.
I found this problem on Quora just now, and I tried to solve it but do not know where to start (every time I substitute a number $a$ into the equation, I get three more unknown numbers). Would anyone give me a clue? Is there a general method to deal with this kind of problem?
AI: Let $d$ be the degree of $f(x)$, $d > 0$. The degree of the RHS is $2d$. Now let's study the top degree terms of the LHS:
Write $f(x) = a x^d + b x^{d-1} + \dots$, $a\neq0$. then the LHS is (where $\dots$ are terms with degree $<3d-3$):
$$\begin{align}
ax^{3d} + b x^{3d-3} - \bigl( a (x^3-2)^d + b (x^3-2)^{d-1} \bigr) + \dots
& = ax^{3d} + bx^{3d-3} - \bigl( ax^{3d} - 2dax^{3d-3} + bx^{3d-3} \bigr) + \dots\\
&= 2dax^{3d-3} + \dots
\end{align}$$
So the LHS has degree 3d-3. Equating this with the degree of the RHS, the degree of $f(x)$ is 3. Now it's simply a matter of writing $f(x)$ explicitly, plugging into the original equation and solving for the coefficients. |
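If you want to carry out that last step by machine, here is a SymPy sketch of my own: it takes the degree-3 ansatz from the argument above, matches coefficients, and evaluates the resulting polynomial at $5$.

    import sympy as sp

    x, a, b, c, e = sp.symbols('x a b c e')
    f = a*x**3 + b*x**2 + c*x + e          # degree 3, from the degree argument

    lhs = f.subs(x, x**3) - f.subs(x, x**3 - 2)
    rhs = f * f + 12
    eqs = sp.Poly(sp.expand(lhs - rhs), x).all_coeffs()
    for sol in sp.solve(eqs, [a, b, c, e], dict=True):
        fsol = f.subs(sol)
        if not fsol.is_constant():         # the problem asks for a non-constant f
            print(fsol, fsol.subs(x, 5))   # the polynomial and its value at 5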
H: Sandwich measurable set between Borel sets
If $B$ is Lebesgue measurable then we can sandwich it closely between Borel sets in the sense that there is Borel $A , C$ such that
$A \subset B \subset C$ and $m(C\setminus A) = 0$.
does anyone know a reference for this statement or how to go about a proof? this is something that was skipped over in class with no proof, not graded homework, but I marked it homework anyway =)
AI: If you know that $E$ is Lebesgue Measurable if and only if $\forall \delta > 0$, there is an open set $E\subset U$ such that $m(U\setminus E)<\frac{\delta}{2}$, it's easy to prove it : First, note that if $E$ is Lebesgue Measurable, then $E^{c}$ is also Lebesgue Measurable and so, there is another open set $E^{c}\subset V$ such that $m(V\setminus E^{c})<\frac{\delta}{2}$. Now, it's easy to see that there is an open set ($U$) and a closed set ($V^{c}$), such that $V^{c}\subset E\subset U$ and such that $m(U\setminus V^{c})<\delta$. At this point, just choose for each $n \in \mathbb{N}$ open sets $U_{n}$ and closed sets $F_{n}$ such that $F_{n}\subset E \subset U_{n}$ and that $m(U_{n}\setminus F_{n})<\frac{1}{n}$. Now, just take $A = \cup F_{n}$ and $B= \cap U_{n}$, both Borel sets, and it's trivial to see that $A \subset E\subset B$ and that $m(B\setminus A)=0$. |
H: Are two independent events $A$ and $B$ also conditionally independent given the event $C$?
If we know that two events $A$ and $B$ are independent, can we say that $A$ and $B$ are also conditionally independent given an arbitrary event $C$?
$$P(A\cap B) = P(A)P(B) \overset{?}{\Rightarrow} P(A\cap B|C) = P(A|C)P(B|C)$$
AI: Hint: Consider the example of
$${{\text{Event }A}\atop \begin{array}{|c|c|c|}
\hline\strut& & \\\hline
\strut& & \\\hline
\strut\Large\color{red}{\bullet} & \Large\color{red}{\bullet} & \Large\color{red}{\bullet}\\\hline
\end{array}}\qquad {{\text{Event }B}\atop \begin{array}{|c|c|c|}
\hline\strut& & \Large\color{blue}{\bullet}\\\hline
\strut\;\;&\;\; & \Large\color{blue}{\bullet}\\\hline
\strut & & \Large\color{blue}{\bullet}\\\hline
\end{array}}\qquad {{\text{Event }C}\atop \begin{array}{|c|c|c|}
\hline\strut& & \\\hline
\strut\;\;&\Large\color{green}{\bullet} & \\\hline
\strut & \Large\color{green}{\bullet} & \Large\color{green}{\bullet}\\\hline
\end{array}}$$ |
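Reading the three pictures as subsets of a uniform $3\times 3$ sample space, the hint can be checked mechanically (my own encoding of the cells as (row, column) pairs):

    from fractions import Fraction
    from itertools import product

    cells = list(product(range(3), repeat=2))  # uniform 3x3 sample space, (row, col)
    A = {(2, 0), (2, 1), (2, 2)}               # bottom row
    B = {(0, 2), (1, 2), (2, 2)}               # right column
    C = {(1, 1), (2, 1), (2, 2)}               # the green cells

    def P(E, given=None):
        space = cells if given is None else [w for w in cells if w in given]
        return Fraction(sum(1 for w in space if w in E), len(space))

    print(P(A & B) == P(A) * P(B))             # True: A and B are independent
    print(P(A & B, C) == P(A, C) * P(B, C))    # False: not independent given C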
H: Can there be a repeated edge in a path?
I was just brushing up on my discrete mathematics specifically graph theory and read the following definition of a walk in a graph
"A walk in a graph is an alternating sequence of vertices and edges from a start vertex to an end vertex where start and the end vertices are not necessarily distinct"
and after this I read up the definition of a path in a graph which says
"A path in a graph is a walk in the graph with no repeated vertices"
The point of confusion is that the definition of a path doesn't mention anything about the repetition of edges, although the idea of a repeated edge in a path sounds absurd to me, because it would mean taking the same road I traversed previously and hence repeating the same destinations, i.e. the vertices. I am curious to know whether there is any case of a path with a repeated edge but no repeated vertex.
One more thing I would like to know: can there exist an ordering of edges in a graph such that the path from one vertex to another is infinite? In short, can there be an infinite path in a finite graph?
AI: The terminology varies a bit from textbook to textbook, so it's always a good idea to check the conventions used by that particular author.
I assume that the definitions you work with are the ones you state. Then there can not be a repeated edge in a path. If an edge occurs twice in the same path, then both of its endpoints would also occur twice among the visited vertices.
For the second question, a finite graph has a finite number of edges and a finite number of vertices, so as long as no repetition are allowed, a path would have to be finitely long. |
H: How to find $\lim_{n\rightarrow\infty} \int_{0}^{1} f(t)\phi(nt) dt$ for $1$-periodic $\phi$?
How to solve the following problem:
Let $\phi\in L^{\infty}(0,1)$ be a $1$-periodic function and $f\in L^{1}(0,1)$. Find $\lim_{n\rightarrow\infty} \int_{0}^{1} f(t)\phi(nt) dt$.
Thanks in advance.
AI: Claim: $$\lim_{n\to\infty}\int_0^1 f(t)\phi(nt)dt=\int_0^1f(s)ds\cdot \int_0^1\phi(s)ds.$$
Proof:
Since $\phi$ is of period $1$, for $s=nt$,
$$\int_0^1 f(t)\phi(nt)dt=\frac{1}{n}\int_0^n f(\frac{s}{n})\phi(s)ds=\frac{1}{n}\sum_{k=0}^{n-1}\int_0^1 f(\frac{s+k}{n})\phi(s)ds.\tag{1}$$
First let us assume that $f\in C[0,1]$, i.e. $f$ is continuous on $[0,1]$. Then
$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f(\frac{s+k}{n})=\int_0^1f(t)dt,\quad \forall s\in[0,1].\tag{2}$$
Combining $(1)$ and $(2)$, by dominated convergence theorem,
$$\lim_{n\to\infty}\int_0^1 f(t)\phi(nt)dt=\int_0^1f(s)ds\cdot \int_0^1\phi(s)ds.\tag{3}$$
Now given $f\in L^1(0,1)$, for any $\epsilon>0$, there exists $g\in C[0,1]$, such that
$$\int_0^1|f(t)-g(t)|dt<\epsilon.\tag{4}$$
It follows that
$$\int_0^1 \big|(f(t)-g(t))\phi(nt)\big|dt\le\epsilon\|\phi\|_\infty, \quad \forall n\ge 1.\tag{5}$$
Since $(3)$ holds when $f$ is replaced with $g$, with the help of $(4)$ and $(5)$, we have:
$$\limsup_{n\to\infty}\big|\int_0^1 f(t)\phi(nt)dt-\int_0^1f(s)ds\cdot \int_0^1\phi(s)ds\big|\le 2\epsilon\|\phi\|_\infty.\tag{6}$$
Since $\epsilon>0$ in $(6)$ is arbitrary , we can conclude that $(3)$ holds for every $f\in L^1(0,1)$.$\quad\square$ |
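A numerical illustration of the claim (my own choices of $f$ and $\phi$; the grid average is a crude Riemann sum, good enough to see the limit emerge):

    import numpy as np

    f = lambda t: np.exp(t)          # an L^1 test function (my choice)
    phi = lambda t: t - np.floor(t)  # bounded, 1-periodic, with mean 1/2

    t = np.linspace(0, 1, 2_000_000, endpoint=False)
    for n in (1, 10, 100, 1000):
        print(n, np.mean(f(t) * phi(n * t)))  # Riemann-sum approximation of the integral

    print((np.e - 1) / 2)            # the claimed limit: (integral of f) * (integral of phi)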
H: Arc length of $y = x^{2/3}$
Find the arc length of $y=x^{2/3}$ from $x=-1$ to $x=8$.
I have tried applying the arc length formula but for some reason I keep getting $7.63$, but the answer is $10.51$.
AI: The complex outcome of Maazul seems to be caused by the definition of $x^{1/3}$ for $x < 0$. Many computer algebra packages do not take it to be the "natural choice" $-|x|^{1/3}$ (but one of the other complex roots), while it is almost certain that this is what is intended.
Therefore, the following calculation (which should be reminiscent of what OP did) is tempting:
$$\begin{align}
\int_{-1}^8 \sqrt{1+\frac49x^{-2/3}}\,\mathrm dx &=\int_{-1}^8\sqrt{\frac{x^{2/3}+\frac49}{x^{2/3}}}\,\mathrm dx\\{\text{tentative}}&= \int_{-1}^8 \frac{\sqrt{x^{2/3}+\frac49}}{x^{1/3}}\,\mathrm dx \\&= \left.\left(\frac49+x^{2/3}\right)^{3/2}\right|_{x=-1}^8 = \left(\frac{40}9\right)^{3/2}-\left(\frac{13}9\right)^{3/2} \approx 7.63
\end{align}$$
but $\sqrt{x^{2/3}} \ne x^{1/3}$ for $x < 0$, at which point a correcting minus sign is required.
Therefore, we have to split the integral into two parts: $\int_{-1}^0$ and $\int_0^8$ (which will have different primitives). Since this is a homework assignment, I'll stop here.
Addendum after seeing OP evaluates using substitution: Similar problems to above arise when using $u = x^{2/3}$ because this $u$ is always positive. Therefore, splitting the interval of integration is again necessary. Then both integrals can be solved using (suitable variants of) this substitution. |
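A numerical cross-check of the split integral (my own sketch, assuming SciPy; it confirms the $10.51$ rather than doing the homework symbolically):

    from scipy import integrate

    # ds = sqrt(1 + (y')^2) dx with y' = (2/3) x^(-1/3); use |x|^(1/3) and split at 0
    f = lambda x: (1 + 4 / (9 * abs(x) ** (2 / 3))) ** 0.5
    left, _ = integrate.quad(f, -1, 0)
    right, _ = integrate.quad(f, 0, 8)
    print(left + right)  # ~10.51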
H: Composition $R \circ R$ of a partial ordering $R$ with itself is again a partial ordering
If $R$ is a partial ordering then $R\circ R$ is a partial ordering.
I cannot seem to prove this can anyone help ?
AI: Denote $R$ with $\le$, and $R \circ R$ with $\mathrel{\underline\ll}$. Expanding what reflexivity, transitivity, and antisymmetry of $R \circ R$ mean:
Reflexivity: $x \mathrel{\underline\ll} x$ iff there is a $y$ with $x \le y$ and $y \le x$.
Transitivity: $x \mathrel{\underline\ll} y$ and $y \mathrel{\underline\ll} z$ should imply $x \mathrel{\underline\ll} z$. The premises imply there exist $v,w$ with $x \le v \le y \le w \le z$.
Antisymmetry: $x \mathrel{\underline\ll} y$ and $y \mathrel{\underline\ll} x$ should imply $x = y$. The premises imply there exist $v,w$ with $x \le v \le y \le w \le x$.
I leave it to you to conclude by using that $\le$ is a partial ordering. |
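For a concrete sanity check: for any partial order one in fact has $R\circ R=R$ (reflexivity gives $R\subseteq R\circ R$, transitivity the converse), which in particular makes $R\circ R$ a partial ordering. A tiny Python check on a sample poset (divisibility on a small set, my own choice):

    from itertools import product

    elems = [1, 2, 3, 4, 6, 12]
    R = {(a, b) for a, b in product(elems, repeat=2) if b % a == 0}  # divisibility order

    RR = {(a, c) for a, b1 in R for b2, c in R if b1 == b2}          # the composition R o R
    print(RR == R)  # True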
H: How to simplify summation $(1\cdot2) + (2\cdot3) + (3\cdot4) + (4\cdot5) + (5\cdot6) + ... + (N\cdot(N+1))$ in terms of N?
I have function f(n) like:
N=1 result = 2
N=2 result = 8
N=3 result = 20
N=4 result = 40
N=5 result = 70
N=6 result = 112
N=7 result = 168
N=8 result = 240
N=9 result = 330
N=10 result = 440
I could understand that its something like:
N = 1, (1 + 1) = 2
N = 2, (1 + 1) + (3 + 3) = 8
N = 3, (1 + 1) + (3 + 3) + (4 + 4 + 4)= 20
N = 4, (1 + 1) + (3 + 3) + (4 + 4 + 4) + (5 + 5 + 5 + 5) = 40
N = 5, (1 + 1) + (3 + 3) + (4 + 4 + 4) + (5 + 5 + 5 + 5) + (6 + 6 + 6 + 6 + 6) = 70
N = 6, (1 + 1) + (3 + 3) + (4 + 4 + 4) + (5 + 5 + 5 + 5) + (6 + 6 + 6 + 6 + 6) + (7 + 7 + 7 + 7 + 7 + 7)= 112
Finally, I understood that the sum for N (from the three loops) is:
[Question]
(1*2) + (3*2) + (4*3) + (5*4) + (6*5) + ... + (N * (N-1))
Now, I want to simplify this expression in terms of N.
I have asked a similar kind of question previously Here; if I compare the results of both series, it looks like the result in terms of N is ((N) * (N+1) * (N+2)) / 3, but I am not sure.
Can someone help me simplify this further?
Edit: After Mr.Mark Bennet's comment I recheck my equation it should be actually following (notice the small change):
(1*2) + (2*3) + (3*4) + (4*5) + (5*6) + ... + (N * (N+1))
AI: By the comment, your sum is:
$$
\sum_{1 \le k \le n} k (k + 1)
= \sum_{1 \le k \le n} k^2 + \sum_{1 \le k \le n} k
= \frac{n (n + 1) (2 n + 1)}{6} + \frac{n (n + 1)}{2}
= \frac{n (n + 1) (n + 2)}{3}
$$
Or even simpler, with $k^{\overline{m}} = k (k + 1) \ldots (k + m - 1)$, you have
$$
\sum_{1 \le k \le n} k^{\overline{m}}
= \frac{n^{\overline{m + 1}}}{m + 1}
$$
In your case $m = 2$. |
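A brute-force check of the closed form against the values listed in the question (my own snippet):

    def s(n):
        return sum(k * (k + 1) for k in range(1, n + 1))

    for n in range(1, 11):
        print(n, s(n), n * (n + 1) * (n + 2) // 3)  # both columns match 2, 8, 20, ..., 440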
H: How to find the minimum of the expression?
Let $a$, $b$, $c$ be three positive real numbers with $a^2 + b^2 + c^2 =3$. Find the minimum of the expression
$$P = \dfrac{a^2}{b + 2c} +\dfrac{b^2}{c + 2a}+ \dfrac{c^2}{a + 2b}.$$
I tried
$$\dfrac{a^2}{b + 2c} +\dfrac{b^2}{c + 2a}+ \dfrac{c^2}{a + 2b} \geqslant \dfrac{(a + b+c)^2}{3(a + b + c)}$$
or $$P \geqslant \dfrac{a + b + c}{3}$$
AI: $P\geq \frac{9}{a^2(b+2c)+b^2(c+2a)+c^2(a+2b)}$ by Cauchy-Schwarz inequality.
$a^2b+b^2c+ac^2\leq\sqrt{3(a^2b^2+a^2c^2+b^2c^2)}$ and $2(a^2c+ab^2+bc^2)\leq 2\sqrt{3(a^2b^2+a^2c^2+b^2c^2)}$, again by Cauchy-Schwarz inequality and $P\geq \frac{\sqrt{3}}{\sqrt{(a^2b^2+a^2c^2+b^2c^2)}}$. Arithmetic-geometric inequality gives $(a^2b^2+a^2c^2+b^2c^2)\leq \frac{(a^2+b^2+c^2)^2}{3}=3$, so finally $P\geq 1$. |
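A crude numerical sanity check of the bound $P\ge 1$ (random sampling on the constraint surface; my own snippet, obviously not a proof):

    import random

    def P(a, b, c):
        return a*a/(b + 2*c) + b*b/(c + 2*a) + c*c/(a + 2*b)

    best = float('inf')
    for _ in range(100000):
        a, b, c = (random.uniform(0.01, 3) for _ in range(3))
        s = (3 / (a*a + b*b + c*c)) ** 0.5  # rescale so that a^2 + b^2 + c^2 = 3
        best = min(best, P(a*s, b*s, c*s))

    print(best, P(1, 1, 1))  # the sampled minimum stays >= 1; equality at a = b = c = 1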
H: Cyclotomic integers: Why do we have $x^n+y^n=(x+y)(x+\zeta y)\dots (x+\zeta ^{n-1}y)$?
Why do we have the factorization $$x^n+y^n=(x+y)(x+\zeta y)\dots (x+\zeta ^{n-1}y)$$ for $\zeta$ a primitive $n^{\text{th}}$ root of unity where $n$ is an odd prime?
AI: If $y = 0$, the factorisation reads $x^n = \underbrace{x\,.x \dots x}_{n\ \text{times}}$.
Consider $y\neq 0$. If $\zeta \in \mathbb{C}$ is a primitive $n^{\text{th}}$ root of unity then $\zeta^n = 1$ and $\zeta^k \neq 1$ for $k = 1, \dots, n-1$. In particular, $\zeta, \zeta^2, \dots, \zeta^{n-1}, \zeta^n=1$ are all distinct. Now consider $x^n + y^n$ as a polynomial in $x$ over the complex numbers (treat $y$ as a fixed constant). Then by the Fundamental Theorem of Algebra, $x^n + y^n$ has precisely $n$ zeroes. For each $k = 1, \dots, n$, $x = -\zeta^ky$ is a zero as
\begin{align*}
x^n + y^n &= (-\zeta^ky)^n + y^n\\
&= (-1)^n(\zeta^k)^ny^n + y^n\\
&= -(\zeta^n)^ky^n + y^n\qquad \text{as $n$ is odd, $(-1)^n = -1$}\\
&= -y^n + y^n\\
&= 0.
\end{align*}
As $y \neq 0$, $x = -\zeta^ky$ for $k=1, \dots, n$ are all distinct, so they are all of the zeroes of $x^n + y^n$. By the factor theorem, each zero corresponds to a linear factor, so we have
\begin{align*}
x^n + y^n &= a(x+\zeta y)(x+\zeta^2 y)\dots(x+\zeta^{n-1}y)(x+\zeta^ny)\\
&= a(x+\zeta y)(x+\zeta^2 y)\dots(x+\zeta^{n-1}y)(x+y)\\
&= a(x+y)(x+\zeta y)(x+\zeta^2 y)\dots(x+\zeta^{n-1}y)
\end{align*}
for some $a \in \mathbb{C}$. Comparing the coefficients of $x^n$, we deduce that $a = 1$ and arrive at the desired factorisation
$$x^n+y^n = (x+y)(x+\zeta y)(x+\zeta^2 y)\dots(x+\zeta^{n-1}y).$$ |
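A numerical spot-check of the factorization for $n=5$ (my own choice of test values):

    import cmath, math

    n = 5
    zeta = cmath.exp(2j * math.pi / n)  # a primitive n-th root of unity
    x, y = 1.3, -0.7                    # arbitrary test values
    prod = 1.0
    for k in range(n):
        prod *= (x + zeta**k * y)
    print(prod, x**n + y**n)            # agree up to floating-point rounding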
H: Simple inequality with unknown in the exponent
Let $0<\alpha\ll1$. I have the following inequality:
$$
2\alpha^2x\geq \alpha^{2x}
$$
It looks trivial, but I wasn't able to find the $x$ that satisfy the condition. Does anyone have any clue?
AI: Here is a general approach.
Let $$f(x) = \alpha^{2x} \text{ and } g(x) = 2 \alpha^2 x,$$
then $$f'(x) = 2\log(\alpha) \alpha^{2x} \text{ and } g'(x) = 2 \alpha^2.$$
Note that $f$ is always decreasing and $g$ is always increasing. When $x = 1$ we have $g(x) > f(x)$, and hence the inequality holds (in particular) for all $x \geq 1$. |
H: If $f^2$ is differentiable, how pathological can $f$ be?
Apologies for what's probably a dumb question from the perspective of someone who paid better attention in real analysis class.
Let $I \subseteq \mathbb{R}$ be an interval and $f : I \to \mathbb{R}$ be a continuous function such that $f^2$ is differentiable. It follows by elementary calculus that $f$ is differentiable wherever it is nonzero. However, considering for instance $f(x) = |x|$ shows that $f$ is not necessarily differentiable at its zeroes.
Can the situation with $f$ be any worse than a countable set of isolated singularities looking like the one that $f(x) = |x|$ has at the origin?
AI: I'd try something along these lines:
Consider $K$, the Cantor set on $[0,1]$, and $f(x)=\inf_{y\in K} |x-y|$. This function is continuous and has a continuum of zeros: $f^{-1}(0)=K$. The only question is whether $f^2$ is everywhere differentiable. The intuition suggests that, up to some tinkering with the definition of $f$, it is.
Edit. reflected the result from my comment below. |
H: How do I find the probability of these independent events
The following is taken from the ETS math review for the GRE:
Let A, B, C, and D be events for which P(A or B)=0.6, P(A)=0.2, P(C or D)=0.6, and P(C)=0.5. The events A and B are mutually exclusive, and the events C and D are independent.
Find P(D).
I'd appreciate it if someone could show me how to solve this. The answer is given as 0.2, but I don't know how it's arrived at. Thanks!
AI: Using the independence of $C$ and $D$ we have that
$$
P(C)P(D)=P(C\cap D)=P(C)+P(D)-P(C\cup D).
$$
Use this to find $P(D)$. You don't need to involve $A$ and $B$ to find $P(D)$. |
H: Is each space filling curve everywhere self-intersecting?
Consider a continuous surjection $f:[0,1]\to [0,1]\times[0,1]$. Is
$$\{x:\exists(t_1\not=t_2) f(t_1)=f(t_2)=x\}=[0,1]\times[0,1]?$$
AI: No. Consider for instance the Hilbert space-filling curve, on which Brian Hayes wrote a nice popular article recently (HTML, PDF).
This mapping from $[0, 1]$ to $[0, 1] \times [0, 1]$ can be viewed as a mapping from digit sequences $0.d_1d_2d_3\dots$ in base-$4$ (where $0 \le d_i < 4$) to $[0, 1] \times [0, 1]$. The first digit tells us which quadrant a point lies in, the second digit tells us which sub-quadrant of that it lies in, and so on. (See the second image on page 4.)
Conversely, given a point in $[0, 1] \times [0, 1]$, we can invert the curve -- find all $t$ in $[0, 1]$ that are mapped to this point -- by noting down which quadrant it lies in, then which sub-quadrant of that it lies in, and so on, recursively.
The only points that are the image of multiple $t$ are those that, at some granularity, lie in multiple quadrants. These are precisely those points $(a, b)$ such that either $a$ or $b$ can be written as $\dfrac{m}{2^k}$ for some integers $k \ge 1$ and $1 \le m < 2^k$. Every other point is the image of exactly one $t$.
So for the Hilbert curve at least, the set of points at which the curve is "self-intersecting" in your sense is, far from being every point, actually of measure $0$. |
H: What is a place?
In Specialization of Quadratic
and Symmetric Bilinear
Forms (page: 3) the author writes "Let also $\lambda: K \to L \cup \infty$ be a place, $\mathfrak o = \mathfrak o_\lambda$ the valuation ring associated to $K$ and $\mathfrak m$ be the maximal ideal of $\mathfrak o$."
What is a place and what is $L$? This is the first page of the book so that knowledge of the definition seems to be assumed.
AI: You can find some definitions in:
Wikipedia
Algebraic Number Theory, J. Neukirch - chapter III starts with a definition of places. Valuations are defined in chapter II.
Algebraic Number Theory, E. Weiss - places are defined much earlier in the book (in the first few pages), but I personally like this book a little less than Neukirch. |
H: $f,\overline f$ are both analytic in a domain $\Omega$ then $f$ is constant?
Is it true that if $f,\overline f$ are both analytic in a domain $\Omega$ then $f$ is constant? I am not able to find out what property of holomorphic maps I need to apply. Please help. Thank you.
$f(z)=u(x,y)+iv(x,y)$, $\bar{f}(z)=u(x,y)-iv(x,y)$
AI: Let $f(z) = u(x,y) + i v(x,y)$, then $\bar f(z) = u(x,y) - iv(x,y)$.
Cauchy-Riemann equations for $f$: $(1) \quad \partial_xu = \partial_yv$, $(2) \quad \partial_yu = -\partial_xv$
Cauchy-Riemann equations for $\bar f$: $(3) \quad \partial_xu = -\partial_yv$, $(4) \quad \partial_yu = \partial_xv$
Combine $(1)+(3)$ to get $\partial_xu = \partial_yv = - \partial_xu$, therefore $\partial_xu = \partial_yv = 0$. Similarly, $(2)+(4)$ imply that $\partial_yu = \partial_xv = 0$. $u$ and $v$ have all their partial derivatives vanishing, $\Omega$ is connected (being a domain), therefore $u$ and $v$ are constant, and $f$ is a constant. |
H: Union of two partial orderings
Suppose S and R are partial orderings. Does is necessarily mean that $R \cup S$ (union) is a partial ordering? If not what conditions would have to be met for it to be a partial ordering?
AI: As exitingcorpse remarks, antisymmetry may fail.
Transitivity can be a problem too, for example on domain $\{a,b,c\}$, with $S=\{(a,a), (b,b), (a,b),(c,c)\}, R= \{(a,a),(b,c),(c,c),(b,b)\}$. Then the union $S \cup R$ is not transitive: $(a,c)$ should be in it as $(a,b)$ and $(b,c)$ are... |
H: On one representation of Green's function
The Green's function for heat equation on finite interval is well known (with Dirichlet conditions):
$$
G(x,x', t) = \frac{2}{l}\sum\limits_{n=1}^{\infty}
\exp\left(-\frac{\pi^2n^2}{l^2}at\right)\sin\left(\frac{x\pi n}{l}\right)
\sin\left(\frac{x'\pi n}{l}\right)
$$
But recently I have encountered another representation. It is claimed that we can use method of images to obtain following representation of same Greens function:
$$
G(x,x',t) = \frac{1}{2\sqrt{\pi a t}}\sum\limits_{k=-\infty}^{+\infty}(-1)^k
\exp\left(-\frac{(x-x_k)^2}{4\pi t}\right)
$$
where $x_k$ are the places of the images (or, as it is sometimes said, ''fake sources'')
It is not obvious at all...
Can anyone explain or advise some materials where I could find explanation with proof?
Thanks in advance!
AI: You have the Green function corresponding to the whole real line, we write $G(x,y,t)$ for this function and we denote the initial data as $f$. Assume now that the heat equation is posed on the interval (-L,L) with homogeneous Dirichlet BC. Then, you can't use the gaussian because the BC are not fulfilled.
The idea now is, for each time $t$ (let's say that the time is a parameter), to subtract something that ensures that the BC are fulfilled. If you use $G$ you have
$$
G(-L,y,t)\approx\exp\left(\frac{(-L-y)^2}{4t}\right), y\in(-L,L)
$$
Notice that you can use
$$
G_1(x,y,t)\approx\exp\left(\frac{(x+2L+y)^2}{4t}\right), y\in(-L,L),
$$
and you have
$$
G(-L,y,t)-G_1(-L,y,t)=\exp\left(\frac{(-L-y)^2}{4t}\right)-\exp\left(\frac{(-L+2L+y)^2}{4t}\right)=0.
$$
This is the term corresponding to one of the "images", i.e. a symmetric $\delta(x)$ in the appropriate point, in this case $2L$. The point is that you need infinitely many of these terms. For instance, $G_1$ influences the value at the other boundary $x=L$, thus, when we deal with the BC at $x=L$ we need a function $G_2$ which depends on $G_1$ but then an extra $G_3$ should be taken into account... Finally you obtain infinitely many $G_i$.
I hope this helps you. |
H: Probability Independence to reduce terms.
Is the following probability reduction correct?
p(A,B|C,D) = p(A,B|C), where (A,B) is independent of D
Is it because of the following proof:
p(A,B|C,D)
= p(A,B,C,D) / p(C,D)
= p(D)*p(A,B,C) / p(C)*p(D)
= p(A,B,C) / p(C)
= P(A,B|C)
If so, then isn't it (A,B,C) is independent of D rather than (A,B) is independent of D?
AI: To get a simple counterexample, throw a fair coin twice, call $C$ the event that the first throw was a head, $D$ the event that the second throw was a head and $A=B$ the event that both throws coincide.
Then $A$ is independent of $D$ while $P(A\mid C,D)=1$ and $P(A\mid C)=\frac12$.
On the other hand, if $D$ is independent of $(A,B,C)$, the proof you suggest is correct and shows indeed that $P(A,B\mid C,D)=P(A,B\mid C)$. |
H: Is this graph coloring problem solved correctly?
On this Wikipedia page about Burnside's lemma, it is calculated that there are 57 rotationally distinct colorings of the faces of a cube with three colors. I'm confused by the way it is done.
They apply the lemma to the set of all $3^6$ functions assigning one color to every face. Is it correct? The page about graph colorings says that a face coloring is one in which no two neighboring faces have the same color. It seems to me that the proof here disregards this definition. How should I understand this? What is calculated here? And how do I calculate the other thing?
AI: Let's call a face-coloring "proper" if no two neighboring faces have the same color, "improper" otherwise. The $57$ rotationally distinct face-colorings of the cube include improper colorings and colorings which do not use all three of the available colors, e.g., they include the $3$ colorings in which all faces have the same color.
If you want to count the rotationally distinct proper face-colorings of the cube with $3$ colors, you can use Burnside's theorem if you want to, or you can count them by hand. In fact, there is only one way to do it, because opposite faces must have the same color.
On the Wikipedia page about Burnside's lemma, the number of rotationally distinct colorings (not necessarily proper) is calculated as
$$\frac 1{24}(1\times3^6+6\times 3^3+3\times 3^4+8\times 3^2+6\times 3^3)=57.$$
For proper colorings this becomes
$$\frac 1{24}(1\times 6+6\times 0+3\times 6+8\times 0+6\times 0)=1.$$
In the first term of the "improper" formula, the factor $1$ is the number of identity rotations, the factor $3^6$ is the number of colorings which are invariant under the identity rotation, which means all of them. In the corresponding term of the "proper" formula, the factor $3^6$ is replaced by the number of proper colorings, which is $3\times 2=6$ since the whole coloring is determined once two neighboring faces have been colored.
In the second term of the "improper" formula, the factor $6$ is the number of $90$ degree face rotations, and the factor $3^3$ is the number of colorings invariant under a given rotation of that type. E.g., for a 90 degree rotation about the vertical axis through the center of the cube, the invariant colorings are those in which the four vertical faces have the same color. None of these are proper colorings, so the corresponding factor in the "proper" formula is $0$. |
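If you want to double-check both counts without trusting the formulas, here is a brute-force sketch (the face labelling and the two generating rotations are my own choices, not part of the Wikipedia computation):

```python
from itertools import product

# faces: 0=up, 1=down, 2=front, 3=back, 4=left, 5=right; a rotation sends face i to p[i]
r1 = (0, 1, 5, 4, 2, 3)     # 90 degrees about the vertical axis: front->right->back->left
r2 = (2, 3, 1, 0, 4, 5)     # 90 degrees about the left-right axis: up->front->down->back

compose = lambda p, q: tuple(p[q[i]] for i in range(6))
group = {tuple(range(6)), r1, r2}
changed = True
while changed:                                   # close under composition
    changed = False
    for g in list(group):
        for h in (r1, r2):
            gh = compose(g, h)
            if gh not in group:
                group.add(gh)
                changed = True
assert len(group) == 24                          # rotation group of the cube

colorings = list(product(range(3), repeat=6))
opposite = {frozenset(p) for p in [(0, 1), (2, 3), (4, 5)]}
proper = [c for c in colorings
          if all(c[i] != c[j] for i in range(6) for j in range(i + 1, 6)
                 if frozenset((i, j)) not in opposite)]

orbits = lambda cols: sum(sum(all(c[i] == c[g[i]] for i in range(6)) for c in cols)
                          for g in group) // len(group)
print(orbits(colorings), orbits(proper))         # 57 1
```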
H: limit with quaternion
Let $v\in \mathbb{H}$ and $q:t\in\mathbb{R}\rightarrow q(t)\in \mathbb{H}$. Assume that $q(t) \neq 0$ for all $t\in \mathbb{R}$ and write $q^{-1}(t) = \frac{1}{q(t)}$, so don't confuse $q^{-1}(t)$ with the inverse function (as pointed out by rschwieb).
Then we have the limit:
$$\lim_{\Delta t \rightarrow 0} \frac{1-q(t+\Delta t) q^{-1}(t)}{\Delta t} =\lim_{\Delta t \rightarrow 0} \frac{q(t)-q(t+\Delta t)}{\Delta t} q^{-1}(t) = - q'(t)q^{-1}(t) $$
But I have trouble to find limit of this:
$$\lim_{\Delta t \rightarrow 0} \frac{v-q(t+\Delta t) q^{-1}(t)vq(t) q^{-1}(t +\Delta t)}{\Delta t} = ?$$
Does it even exist? Can it be written in terms of $v,q,q'$?
Context: I developed a system of recurrence equations, and I want to find the corresponding differential equations:
This is the system: $i \in \{1,\dots,n\}$
\begin{align*}
v_i(t_{n+1})& = p_i(t_{n+1})-p_0(t_{n+1}) = R_0(t_n,\Delta t) v_i(t_n) R_0^{-1}(t,\Delta t)\\
p_0(t_{n+1}) &= p_0(t_n) + \sum_{i=1}^n v_i(t_n) - R_i(t_n,\Delta t) v_i(t_n) R_i^*(t_n,\Delta t) \\
q_0(t_{n+1}) &= \{R_i(t_n,\Delta t)\}_i q_0(t_n) \\
p_i(t_n) &= p_0(t_n) + R_0(t_0,t_n-t_0)v_i(t_0)R_0^{-1}(t_0,t_n-t_0) \\
R_i(t,\Delta t) &= q_i(t+\Delta t) q^{-1}_i(t)\\
\{a_i\}_i &=\frac{\sum_{ \sigma \in \Pi_n } a_{\sigma(1)} \dots a_{\sigma(n)}}{||\sum_{ \sigma \in \Pi_n } a_{\sigma(1)} \dots a_{\sigma(n)}||}
\end{align*}
here is how I came up with it
AI: This can be evaluated with an invocation of the product rule for quaternion functions:
$$(fg)'(t) = f'(t)g(t)+f(t)g'(t)$$
namely by considering $f(\tau) = -q(\tau)q^{-1}(t)$ and $g(\tau) = vq(t)q^{-1}(\tau)$, differentiating at $\tau = t$. This means the result is $-q'(t)q^{-1}(t)v-vq(t)(q^{-1})'(t)$. Note that we have assumed $q^{-1}$ to be differentiable.
A derivation of the product rule goes along the standard lines, exploiting the identity:
$$f(t+\Delta t)g(t+\Delta t) - f(t)g(t) = f(t+\Delta t)\left(g(t+\Delta t)-g(t)\right)+\left(f(t+\Delta t)-f(t)\right)g(t)$$ |
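If one is also willing to use $(q^{-1})'(t)=-q^{-1}(t)\,q'(t)\,q^{-1}(t)$, which follows by applying the same product rule to $q(t)q^{-1}(t)=1$, the answer can be written more compactly:
$$-q'(t)q^{-1}(t)\,v-v\,q(t)(q^{-1})'(t)=-q'(t)q^{-1}(t)\,v+v\,q'(t)q^{-1}(t)=\big[v,\,q'(t)q^{-1}(t)\big],$$
the commutator of $v$ with $q'q^{-1}$.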
H: Properties of a one to one continuous function from $[0, 1]$ onto itself
Let $f$ be a one to one continuous function from $[0, 1]$ onto itself. Show that
(i) $f$ is a homeomorphism.
(ii) $f$ is strictly monotone on $[0, 1]$
(iii) Is it true that if $f$ is strictly monotone on $[0, 1]$ and onto $[0, 1]$, then $f$ is continuous?
For (iii) I am sure it will be true, as a monotone function has only jump discontinuities, so if it is onto it has to be continuous as well. This is my intuition only; a more rigorous argument is welcome, thank you.
AI: i) Because $[0,1]$ is compact and Hausdorff, $f$ is a homeomophism.
ii) Any one-one continuous function (on $\Bbb R$ to $\Bbb R$) is strictly monotone.
iii) If $f$ is not continuous, at some point $a\in (0,1)$ we have:
$$\lim_{x\to a^-}f(x)\ne \lim_{x\to a^+}f(x)$$
$$\Rightarrow \sup_{x<a}f(x)< \inf_{a<x}f(x)$$
$$\Rightarrow \sup f([0,a))< \inf f((a,1])$$
So $f([0,1])\ne [0,1]$ . |
H: antipodal map from sphere to projective space is immersion and submersion?
Define a map $\mathbb{S}^n \to \mathbb{RP}^n$ given by $x \mapsto \{ x,-x\}.$
Clearly this is not a diffeomorphism, but how can one show that it is an immersion and submersion?
AI: This map is a local diffeomorphism: each point of $\mathbb{RP}^n$ has a neighborhood over which the quotient map restricts to a diffeomorphism on each of the two sheets covering it. Hence its differential at every point of $\mathbb{S}^n$ is an isomorphism, so the map is both an immersion and a submersion. |
H: Representation problem from Serre's book
I asked this question yesterday on the setting of an exercise problem (Ex 2.8) from Serre's book "Linear representations of Finite Groups" (I'm teaching myself representation theory...)
Now that that is sorted, I'm still stuck on the actual problem: Let $\rho: G\to GL(V)$ be a linear representation of a finite-dimensional complex vector space $V$, and suppose $V = W_1 \oplus \dotsb \oplus W_1 \oplus W_2 \oplus \dotsb \oplus W_2 \oplus \dotsb \oplus W_l \oplus \dotsb \oplus W_l$ is a decomposition into irrreducible representations, and let $V_i := W_i\oplus \dotsb \oplus W_i$.
Let $H_i$ be the vector space of linear mappings $h: W_i \to V_i$ such that $\rho_s h = h \rho_s$ for all $s\in G$.
Show that $\dim H_i = \dim V_i / \dim W_i$. [Hint: Use Schur's lemma.]
What I've got so far: if $h\neq0$ then $h(W_i)$ must be an irreducible subrepresentation of $\rho$ isomorphic to $W_i$, and such $h$ mapping to the same image form a one-dimensional subspace by Schur's lemma. For the given explicit decomposition $V_i := W_i\oplus \dotsb \oplus W_i$ with $k$ summands, let $h_\alpha$ send the first summand to the $\alpha$-th summand. These $h_\alpha$ are linearly independent, but I wasn't able to show that they span...
Please give me some hints!
AI: You are asked to compute the dimension of the vector space $\mathrm{Hom}_G(W_i,V_i)$ of $G$-module homomorphisms from $W_i$ to $V_i$; explicitly, this consists of linear maps $\phi$ from $W_i$ to $V_i$ such that for all $g \in G$ and $w \in W_i$ one has $\phi(g w)=g \phi(w)$. The right way to think about this is to use the fact that Hom is additive (in both variables, for any representations---this is the fact that a map to a direct sum is uniquely determined by its compositions with the projections, which can be arbitrary):
$$\mathrm{Hom}_G(U,V \oplus W) \cong \mathrm{Hom}_G(U,V) \oplus \mathrm{Hom}_G(U,W),$$ so that
$$\mathrm{Hom}_G(W_i,V_i)=\mathrm{Hom}_G(W_i,W_i \oplus \cdots \oplus W_i) \cong \mathrm{Hom}_G(W_i,W_i) \oplus \cdots \oplus \mathrm{Hom}_G(W_i,W_i) .$$ Now Schur's lemma implies that the dimension of $\mathrm{Hom}_G(W_i,W_i)$ is $1$, so the dimension you are after is just the number of summands---in other words, the quotient $\mathrm{dim} (V_i) / \mathrm{dim}(W_i)$. |
H: Dropping the modulus sign in integrals of the form $\int 1/t ~dt$
In the process of solving a DE and imposing the initial condition I came up with the following question.
I've reached the stage that
$$\ln y + C = \int\left(\frac{2}{x+2}-\frac{1}{x+1}\right)dx$$
$$\Rightarrow \ln y +C=2\ln|x+2|-\ln|x+1|$$
$$\Rightarrow y=A\frac{(x+2)^2}{|x+1|}.$$
Now I had also found that the curve passes through $(-4,-3)$. Susbstituting in the expression above, we find $A=-9/4$. However, the solution in the markscheme (the problem was from a past exam) drops the modulus sign and so it gives $A=9/4$.
So my question is, why do they drop the modulus sign and when is one allowed to do so in dealing with integrals of the form $\int 1/t ~dt$?
Thanks in advance.
AI: First of all, you should presumably have had
$$\ln |y| + C = 2\ln |x+2| - \ln |x+1|\,, \quad\text{leading to}\quad |y| = A\frac{(x+2)^2}{|x+1|}\,.$$
Since a solution curve through $(-4,-3)$ cannot cross the lines $x=-1$ and $y=0$, we infer that throughout $y<0$ and $x<-1$, so $|y|=-y$ and $|x+1|=-(x+1)$. Thus, the solution is
$$y=\frac94\frac{(x+2)^2}{x+1}\,, \quad x<-1,\ y<0\,.$$
Check the markscheme carefully! :) |
H: $1500=P \times { (1 + 0.02) }^{ 24 }$, what is the value of $P$?
Hey guys, could you please tell me the fastest way to solve this equation? It's a compound interest equation and I'm stuck at the ${ (1 + 0.02) }^{ 24 }$; I really don't know how to proceed with this part. I thought about logs, but how can I apply a logarithm without touching the $1500$?
Any help is appreciated, thanks.
AI: $1500=P \times { (1 + 0.02) }^{ 24 }$
If you don't want to touch $1500$, then just find the value of ${ (1 + 0.02) }^{ 24 }$. You can do it as in your selected answer, or via a log table like this:
$$x={ (1 + 0.02) }^{ 24 }$$
take log on both side
$$\log x=\log { (1 + 0.02) }^{ 24 }$$
$$\log x=24\log { (1.02) }$$
$$\log x=24\times 0.0086$$
$$\log x=0.2064$$
$$x=Antilog(0.2064)$$
Antilog is the inverse function of $\log$; there is also an antilog table to look up the value. To calculate it via calculator or manually, it is like this:
$$x=10^{0.2064}$$
$$x=1.6084$$
Now you have value of $(1.02)^{24}\;$which is $1.6084$ just put this value in equation :
$$P=\dfrac{1500}{1.6084}\implies P=932.58$$ |
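As a quick sanity check of the arithmetic (a one-off computation, not part of the method):

```python
P = 1500 / 1.02**24
print(round(1.02**24, 4), round(P, 2))   # 1.6084 932.58
```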
H: Showing $\varphi(t)\neq 0$ when $\varphi$ is a characteristic function of an infinitely divisible distribution
Let $\varphi$ be a characteristic function of an infinitely divisible random variable. Show that $\varphi(t) \neq 0$ for all $t$.
Sorry, I have no clue how to do it, because if the exponential is not real, then it can turn around at the origin.
AI: Since $\varphi$ is the characteristic function of an infinitely divisible distribution we have that
$$
\varphi(t)=\varphi_n(t)^n,\quad n\in\mathbb{N},\tag{1}
$$
for a sequence of characteristic functions $\varphi_n$. Now we use that $|\varphi_n|^2$ is also a characteristic function for each $n$, and thus by $(1)$ we have that $|\varphi|^{2/n}$ is a characteristic function for each $n$. Define $\psi$ by $\psi(t)=\lim\limits_{n\to \infty}|\varphi(t)|^{2/n}$, then
$$
\psi(t)=
\begin{cases}
1,\quad &\text{if }\varphi(t)\neq 0,\\
0,&\text{if }\varphi(t)=0.
\end{cases}
$$
Since $\psi$ is continuous at $0$ we know that $\psi$ is also a characteristic function and hence it is continuous. Using that $\psi(0)=1$ and that $\psi$ only takes on the values $0$ and $1$ we must have that $\psi(t)=1$ for all $t$ meaning that $\varphi(t)\neq 0$ for all $t$. |
H: Is this integral improper? If yes - why?
Is this integral improper? If yes - why?
$$
\int\limits^2_0 \,\frac{1}{x-1} dx
$$
AI: Definition:
The integral $\int_a^b f(x)dx$ is called improper integral if:
$a=-\infty$ or $b=+\infty$, or both.
$f(x)$ is unbounded at one or more points of $a\le x\le b$.
As @Git suggested, check which of the above conditions is satisfied here. You'll get the answer. ;-) |
H: What is the ratio of the perimeter of $OPRQ$ to the perimeter of $OPSQ$?
The area of circle $O$ is $64\pi$. What is the ratio of the perimeter of $OPRQ$ to that of $OPSQ$ (take $\pi = 3$)?
Okay, I have tried a couple of things but it seems it's not working. Please suggest a proper solution to this example so that I can solve similar questions.
AI: arc length=$\dfrac {\theta}{360^\circ}\times2\pi r\;\;\;,$here $\theta\;$is angle made by arc on centre of circle.
$$m\widehat {PRQ}=\dfrac{120}{360}\times 2\pi\cdot8$$
$$m\widehat {PRQ}=\dfrac{16\pi}{3}$$
$$ m\widehat{PRQ}=16$$
$C=2\pi r=48$, so $m\widehat{PSQ}=C-m\widehat{PRQ}=48-16=32$
so ratio=$\dfrac {m\widehat{OPRQ}}{m\widehat{OPSQ}}=\dfrac{16}{32}\implies\dfrac 12$ |
H: Proving the origin is a saddle point.
I have the function $g(x,y) = x^6 -y^6x^2$ and want to prove that the origin is a saddle point.
I know that a critical point with an indefinite Hessian matrix is a saddle point, but this is only a sufficient condition.
$(0,0)$ is indeed a critical point of $g$, but the Hessian matrix is everywhere $0$ and hence it is positive and negative semi-definite and so not indefinite.
How would I go about concluding that the origin is indeed a saddle point here? Unfortunately, the only definitions of saddle points that I could find gave the usual sufficient condition of an indefinite Hessian.
Many thanks.
AI: On the line $y=0$, $g(x,y)=x^6$, which is concave up.
On the curve $x=y^2$, $g(x,y)=y^{12}-y^{10}=y^{10}(y^2-1)$, which is concave down.
More details, as requested:
A saddle point is stationary, but neither a local max nor a local min. $g(x,y)$ is stationary at the origin, because both partials are zero. $(0,0)$ is not a local max by the first observation above, it is not a local min by the second one. |
H: How to find $x^{2000}+x^{-2000}$ when $x + x^{-1} = \frac{1}{2}(1 + \sqrt{5})$
Let $x+x^{-1}=\dfrac{1+\sqrt{5}}{2}$. Find $x^{2000}+x^{-2000}$.
How many nice methods do you know for solving this problem? Thank you everyone.
My method: because $x+\dfrac{1}{x}=2\cos{\dfrac{\pi}{5}}$, we can take $x=e^{i\pi/5}$, so $$x^{2000}+\dfrac{1}{x^{2000}}=2\cos{\dfrac{2000\pi}{5}}=2.$$
Can you think of other nice methods? Or this problem has not used Euler's theorem: $(\cos{x}+i\sin{x})^n=\cos{nx}+i\sin{nx}$
AI: Here is an algebraic way avoiding trig functions: note that your number $x$ satisfies
$$x^2-(\frac{1+\sqrt{5}}{2})x+1=0 \quad \implies \quad \text{by multiplying by the conjugate} \quad x^4-x^3+x^2-x+1=0$$ and then use the factorization
$$x^{10}-1=(x^6+x^5-x-1)(x^4-x^3+x^2-x+1)$$ to see that $x^{10}=1$. Hence $x^{2000}=(x^{10})^{200}=1$ and $x^{2000}+x^{-2000}=2$. |
H: How to show $d(x,A)=0$ iff $x$ is in the closure of $A$?
This is a problem form Topology by Munkres:
Let $X$ be a metric space with metric $d$ and $A$ is a nonempty subset of $X$. Show that $d(x,A)=0$ if and only if $x$ is in the closure of $A$.
I think this problem is quite easy to understand emotionally but I don't know how to express the proof in standard math language. Thanks in advance!
AI: We prove the result by equivalence:
$$x\in \operatorname{cl}(A)\iff \forall \epsilon>0:\ B(x,\epsilon)\cap A\neq\emptyset\iff\forall\epsilon>0\ \exists a\in A:\ d(x,a)<\epsilon\iff \inf_{a\in A}d(x,a)=0\iff d(x,A)=0$$ |
H: Find coordinates of vertex of equilateral triangle
$ABC$ is an equilateral triangle , $AC = 2 $
What is the value of $p$ and $q$ ?
AI: HINT:
So, $C$ has to be $(2,0)$
Now, equating the squares of lengths of the sides $$(p-0)^2+(q-0)^2=(p-2)^2+(q-0)^2$$
Solve for $p$ and find $q$ from $p^2+q^2=2^2$ |
H: Contravariant Metric Tensor
Okay, so I have exactly ZERO experience with tensors and this project I am working on involves tensors. I have looked through a bunch of online resources, and attempted to look for textbooks (not available to me) and I am getting really confused. The extent to which I have picked up is that I understand how to find the metric tensor for the spherical coordinate transformation. So I was hoping someone could run me through the following example or give some advice on how to approach this.
Curvilinear coordinate system:
$v=x$
$u = y-a(x)$ $a(x)$ arbitrary
$w=z$
I am looking for the contravariant metric tensor for this (and possibly what the interpretation is). I have the result:
$$\begin{pmatrix}1 & -a'(x) & 0 \\
-a'(x) & 1+[a'(x)]^2 & 0 \\
0 & 0 & 1\end{pmatrix}$$
Thanks
AI: This is a process that can feel very arbitrary, but using geometric principles, you should be able to develop an intuition about these problems.
Imagine the coordinate functions $v, u, w$ as scalar fields on the 3d space, assigning their respective coordinates to a given position. For all these coordinates, there are associated gradients: $\nabla v$ for $v$, and so on. These tell us the direction of greatest increase for each coordinate.
What we do then is use these gradient vectors as a basis for our space: a set of vectors $g^v, g^u, g^w$ such that $g^v = \nabla v$ and so on. The contravariant metric tensor just measures the dot products of these vectors, so we can have an idea of how to measure lengths with them.
For instance, take $u = y - a(x)$ as you gave us. Taking the gradient of
$u$, we get
$$g^u = \nabla u = (g^x \partial_x + g^y \partial_y + g^z \partial_z) u(x,y,z) = g^y -a'(x) g^x$$
where $g^x, g^y, g^z$ are a Cartesian basis (thus, they are orthonormal), so it's easy to take the dot product:
$$g^{uu} = g^u \cdot g^u = [g^y - a'(x) g^x] \cdot [g^y - a'(x) g^x] = 1 + [a'(x)]^2$$
as you found. So if we have two vectors expressed using this basis of gradients, we can find the overall dot product using the contravariant metric, rather than having to go back and figure out the relationships between those gradients all over again. |
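The remaining entries come out the same way; since $v=x$ and $w=z$ give $g^v=\nabla v=g^x$ and $g^w=\nabla w=g^z$, we get for instance
$$g^{uv}=g^u\cdot g^v=\left[g^y-a'(x)\,g^x\right]\cdot g^x=-a'(x),\qquad g^{vv}=1,\qquad g^{ww}=1,$$
which reproduces the matrix in the question.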
H: Let $f : \mathbb{R} \to \mathbb{ R}$ be such that $f' (x)$ exists
Let $f : \mathbb{R} \to \mathbb{ R}$ be such that $f' (x)$ exists for all non zero $x$ and $\lim_{x\to 0} f' (x) = 0$.
Then
(i) $f$ is continuous but not differentiable at $0$.
(ii) $f$ is differentiable at $0$ and $f' (0) = 0.$
(iii) $f$ has either a local maximum or a local minimum at $0$.
(iv) None of the above.
I am totally clueless. Thank you for help and discussion.
AI: In my answer I suppose that $f$ is continuous at $0$
Let $x>0$ then by the mean value theorem there's $\xi_x\in(0,x)$ s.t.
$$f(x)-f(0)=xf'(\xi_x)$$
so, passing to the limit $x\to0$ (which forces $\xi_x\to 0$), and using that $\lim_{x\to 0}f'(x)$ exists, we have
$$\lim_{x\to0}f'(\xi_x)=0=\lim_{x\to0}\frac{f(x)-f(0)}{x-0}=f'(0)$$
hence $f'(0)=0$.
Added In the case where $f$ isn't continuous at $0$ you can see the answer of cooper.hat for a counterexample. |
H: How can i solve a differential equation like this one?
My Problem is: this given differential equation
$$x^3+y^3+x^2y-xy^2y^{\prime}=0$$ $$(x\neq 0,\ y\neq 0)$$
My approach was: I had the idea to bring it into this form:
$$x^3+y^3+x^2y-xy^2y^{\prime}=0$$
$$x^3+y^3+x^2y=xy^2y^{\prime}$$
$$\frac{x^3}{xy^2}+\frac{y^3}{xy^2}+\frac{x^2y}{xy^2}=\frac{xy^2y^{\prime}}{xy^2}$$
$$\frac{x^3}{xy^2}+\frac{y^3}{xy^2}+\frac{x^2y}{xy^2}=y^{\prime}$$
$$\frac{x^2}{y^2}+\frac{y}{x}+\frac{x}{y}=y^{\prime}$$
But this is the point where I get stuck. It seems the expression is getting more and more complex, and is not leading to any solution... How can I solve this?
AI: Hint: Make the change of variable $y=zx$. You will get a separable equation. You can do that from the point you reached, but it is quite a bit simpler to start all over again. For general information (which you will not need in this case) search for homogeneous differential equation.
More: Let $y=zx$. Then $y'=z+xz'$. Substituting in the equation, we get
$$x^3+z^3x^3 +x^3z-x^3 z^2(z+xz')=0.$$
Divide through by $x^3$. There is some nice cancellation, and we end up with
$$xz^2z'=1+z.$$
This is a separable equation, which I expect you can handle. There is a complication in that we will end up with an implicit solution. |
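For completeness, one way to finish from here (just the remaining routine steps): separating variables and integrating,
$$\int\frac{z^2}{1+z}\,dz=\int\frac{dx}{x}\quad\Longrightarrow\quad \frac{z^2}{2}-z+\ln|1+z|=\ln|x|+C,$$
and substituting back $z=y/x$ gives the implicit solution.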
H: Neyman-Pearson lemma proof
$\mathbf{Theorem}$ Suppose that for every $\alpha \in (0,1)$ there exists $k$ such that, for $W_0 = \{ x: p_1(x) \ge k p_0(x) \}$, $$\int_{W_0} p_0(x) d\mu(x)=\alpha$$
where $p_i(x)$ is the likelihood function under the hypothesis $i=0,1$. Then $\forall W$, such that
$$\int_{W_0} p_0(x) d\mu(x) = \int_W p_0(x) d\mu(x) = \alpha$$
the following inequality is true:
$$\int_{W_0} p_1(x) d\mu(x) \ge \int_W p_1(x) d\mu(x)$$ That is: an $H_0$ test based on the set $W_0$ is the most powerful test.
$\mathbf{Proof}$
This is taken directly from my textbook:
\begin{align}
\int_{W_0} p_1(x) d\mu(x) - \int_{W} p_1(x) d\mu(x) &= \int_{W_0-W} p_1(x) d\mu(x)- \int_{W-W_0} p_1(x) d\mu(x)\\
& \ge \int_{W_0-W} k p_0(x) d\mu(x)- \int_{W-W_0} k p_0(x) d\mu(x)\\
&= \int_{W_0} k p_0(x) d\mu(x) - \int_{W} k p_0(x) d\mu(x) \\
&= k \alpha - k \alpha\\
&=0
\end{align}
I don't understand the magic in the first line.
$$\int_{W_0-W} p_1(x) d\mu(x)- \int_{W-W_0} p_1(x) d\mu(x)$$
What is the meaning of $W_0 - W$? These are sets, so shouldn't we have $W_0 \setminus W$ instead?
I know we can't directly say that
$$\int_{W_0} p_1(x) d\mu(x) - \int_{W} p_1(x) d\mu(x) \ge \int_{W_0} k p_0(x) d\mu(x) - \int_{W} k p_0(x) d\mu(x) $$
because we only know that $\forall x \in W_0$
$$\int_{W_0} p_1(x) d\mu(x) \ge \int_{W_0} k p_0(x) d\mu(x)$$ and the same is not necessarily true for $x \in W$.
But how is the intermediate step helping us? (i.e. what's going on there?)
AI: First of all, $A-B$ is often used instead of $A\setminus B$, see e.g. the book of Durrett on probability. Second, we know that if $x\in W_0$ then $p_1(x)\geq k p_0(x)$ and if $x\notin W_0$ then $p_1(x)\leq kp_0(x)$ which implies
$$
-p_1(x)\geq-kp_0(x)\quad\forall x\notin W_0.
$$
I think it should now be clear how we get the magic inequality. |
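As for the first line itself, it is just set-splitting written out: since $W_0=(W_0\cap W)\cup(W_0\setminus W)$ and $W=(W_0\cap W)\cup(W\setminus W_0)$ are disjoint unions, the integrals over the common part $W_0\cap W$ cancel,
$$\int_{W_0} p_1\,d\mu-\int_{W} p_1\,d\mu=\left(\int_{W_0\cap W}+\int_{W_0\setminus W}\right)p_1\,d\mu-\left(\int_{W_0\cap W}+\int_{W\setminus W_0}\right)p_1\,d\mu=\int_{W_0\setminus W} p_1\,d\mu-\int_{W\setminus W_0} p_1\,d\mu.$$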
H: Find the coefficient of $x^m$ in the expansion $(1 + ax + bx^2)^n$
Find the coefficient of $x^m$ in the expansion of $(1 + ax + bx^2)^n$
One approach would be:
Let $p = 1+ax$
and $q = bx^2$
Now expand $(p+q)^n$ and then expand $p$ and $q$ individually. But that is so clumsy. Can we derive a direct formula?
AI: Hint: use the multinomial theorem:
$$(a+b+c)^n=\sum_{i+j+k=n}\frac{n!}{i!j!k!}a^ib^jc^k$$ |
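Spelling the hint out for this particular case: writing the terms of the expansion as $1^i(ax)^j(bx^2)^k$ with $i+j+k=n$, the ones contributing to $x^m$ are exactly those with $j+2k=m$, so
$$[x^m]\,(1+ax+bx^2)^n=\sum_{k=\max(0,\,m-n)}^{\lfloor m/2\rfloor}\frac{n!}{(n-m+k)!\,(m-2k)!\,k!}\,a^{m-2k}\,b^{k}.$$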
H: Solve the second order differential equation.
Find a general solution to the equation:
$u''-e^tu'-e^tu=1$.
AI: Note that you can rewrite the equation as
$$\frac{d}{dt} (u \,e^t) = u''-1$$
which is equivalent to
$$u'-e^t u = t+C$$
where $C$ is a constant of integration. This equation has an integrating factor of $e^{-e^t}$, and may be written as
$$\frac{d}{dt}\left[ u \,e^{-e^t}\right] = e^{-e^t} (t+C)$$
I hope you can take it from here. |
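Just to show the shape of the general solution (the remaining integral has no elementary closed form):
$$u(t)=e^{e^t}\left(\int e^{-e^t}\,(t+C)\,dt+D\right),$$
with two arbitrary constants $C$ and $D$.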
H: Joint and individual probability independence
If (A,B,C) is independent of D, are the following true?
A then also independent of D
B then also independent of D
C then also independent of D
AI: Due to the symmetry of the argument, it is equivalent to ask whether independence of $A\cap B \cap C$ and $D$ implies the independence of $A$ and $D$. It's not true: suppose that $A = B = D$ and $C = D^c$. Then $A\cap B\cap C = \emptyset\perp D$, but none of $A,B,C$ is itself independent of $D$. |
H: Standard Approach to the Fundamental Counting Principal
I'm trying to teach myself combinatorics from a textbook. The last question of the first chapter is as follows:
If A is a finite set, its cardinality $o(A)$, is the number of elements in $A$. Compute
(a) $o(A)$ when $A$ is the set consisting of all five-digit integers, each digit of which is 1, 2 or 3.
(b) $o(B)$, where $B = \{x \in A : \text{ each of 1, 2 and 3 is among the digits of $x$}\}$ and $A$ is the set in part (a).
The notation is kind of new to me, but I understand (b) to mean that values such as 33333 or 21211 are not allowed because each value must contain at least one of the digits 1, 2 and 3. Therefore I represented the values as follows:
xxabc, xaxbc, xabxc, xabcx, axxbc, axbxc, axbcx, abxxc, abxcx, abcxx
Where the values for x are independently 1, 2 or 3, and the values for a, b and c are dependently 1, 2 or 3. So the combinations of x are $3^2$, and those of abc are $3!$. Since there are 10 ways of representing these combinations, I believe the answer for (b) is:
$$ 9\cdot6\cdot10 = 540$$
Although I'd like someone to verify that that is true, my real question has to do with my approach at solving it. I had a girlfriend in college who was very good at looking at algebraic equations and figuring out the answer in her head, even though she understood very little about algebra itself. I'm suspecting that I have the same problem here, because there is a hint in the back of the book that doesn't make any sense to me:
Hint: The answer can be expressed as a sum of six multinomial coefficients.
Therefore my question is: Can someone explain the hint, and if there is a better or more common way to represent the problem, what would that be? Even though I found ten ways to list the combinations, I didn't do it mathematically, but rather by carefully listing them out on a piece of paper. That, in itself, is a good indication that there is something wrong with my approach.
AI: You overcounted quite a bit. The answer I get is 150.
To see why your count is too high, observe that the number 12321 was counted four times: as xxabc, xabxc, axbcx, and abcxx.
The idea of the hint in the book is to divide the things you want to count into six nonoverlapping classes depending on the distribution of digits: (a) numbers with 3 1's, a 2 and a 3, i.e., rearrangements of 11123; (b) rearrangements of 12223; (c) rearrangements of 12333; (d) rearrangements of 11223; (e) rearrangements of 11233; (f) rearrangements of 12233. It's easy to see that all possibilities are covered with no overlaps. So we just have to count how many 5-digit numbers satisfy each of the six cases, and add. Also it's clear that cases (a)-(c) all have the same count (just swapping digits), likewise cases (d)-(f) all have the same count.
Now, do you know what multinomial coefficients are? If not, look them up in your book. The number of arrangements of 3 ones, 1 two and 1 three is the multinomial coefficient $\dfrac{5!}{3!1!1!}=20$; the number of arrangements of 2 ones, 2 twos and 1 three is $\dfrac{5!}{2!2!1!}=30$, so the answer to your problem is $20+20+20+30+30+30=150$ if I did all that arithmetic right.
By the way, the number of arrangements of xxaaa is ${5\choose 2}=\dfrac{5!}{2!3!}=10$, the number of ways to choose 2 things from 5. This is a binomial coefficient, which is a special case of the multinomial coefficient; the binomials are the ones you will have the most use for. |
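If you want to check the count (and see how far the original approach overcounts) by brute force, a quick sketch:

```python
from itertools import product

all_words = list(product('123', repeat=5))
uses_all = [w for w in all_words if set(w) == {'1', '2', '3'}]
print(len(all_words), len(uses_all))   # 243 150
```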
H: $\bigcup_{n}V_n$ is dense in $V$ (Hilbert spaces)
I read:
$\bigcup_{n}V_n$ is dense in $V$ (Hilbert spaces)
Does this mean: for every $v \in V$, there is a sequence $\{v_n\}$ with $v_n \in V_n$ for each $n$ such that $|v_n - v|_V \to 0$?
I guess so. But then I also read
$V_n$ is dense in $V$
Does the author mean by this that given $v$, we can pick an $N$ such that there is an element $v_N \in V_N$ that is arbitrary close to $v$?
Are these two statements equivalent then?
AI: Definition. A subset $U \subseteq V$ is called dense in $V$ iff for each $v \in V$ there is a sequence $(u_k) \in {}^{\mathbb N}U$ with $u_k \to u$.
Lets apply this to your two cases: In the first case we are given that for any $v \in V$ there is a sequence $(u_k)$ with $u_k \in \bigcup_n V_n$ and $u_k \to v$. If moreover the $V_n$ are increasing, that is $V_1 \subseteq V_2 \subseteq \cdots$ we can arrange that $u_n \in V_n$ as you say. In general this is not possible (think of $V_{2n} = \{0\}$ for all $n$ or something like that).
The second one is easier: $V_n$ being dense means that for each $v \in V$ there is a sequence $(u_k)$ of elements from $V_n$ converging to $v$, or in $V_n$ there are elements arbitrarily close to $v$. |
H: How to show that if $A$ is compact, $d(x,A)= d(x,a)$ for some $a \in A$?
I really think I have no talents in topology. This is a part of a problem from Topology by Munkres:
Show that if $A$ is compact, $d(x,A)= d(x,a)$ for some $a \in A$.
I always have the feeling that it is easy to understand the problem emotionally but hard to express it in math language. I am a student in Economics and I DO LOVE MATH. I really want to learn math well, could anyone give me some advice. Thanks so much!
AI: Hints:
(1) By definition of distance between points and set:
$$D:=d(x,A):=\inf_{a\in A}d(x,a)$$
(2) By definition of infimum:
$$\forall\,n\in\Bbb N\;\exists\,a_n\in A\;\;s.t.\;\;D\le d(x,a_n)\le D+\frac1n$$
(3) The sequence $\,\{a_n\}\,$ has a subsequence
$$\,\{a_{n_k}\}\;\;s.t.\;\;a_{n_k}\xrightarrow[k\to\infty]{}a\in A$$
(4) The function distance $\,d(x,.): X\to\Bbb R_+\;$ from a metric space $\,X\,$ is continuous |
H: Every morphism in Set is regular
I am trying to prove that every morphism in the category Set is regular, that is, that for every set-function $f:A\to B$ there exists a function $g:B\to A$ such that $f\circ g\circ f=f$. The assumption is that $A\neq\varnothing $, because otherwise $B=\varnothing$.
Define an equivalence relation on $A$: for any $x,y\in A$, $x\sim y\Leftrightarrow f(x)=f(y)$. In the set $A/\sim$ choose an equivalence class representative for each class $[x]$, then define a function $g:B\to A, g(f(x))=x'$, where $x\sim x'$ and if $b\in B$ is not in the image of $f$ then just map it anywhere. That way I can track equivalent elements of $A$ and eventually map them into the right element of $B$.
The way I have 'constructed' $g$ is not unique, but is this construction valid at all and have I actually proved the proposition? Did I assume some form of axiom of choice?
AI: Yes, it's valid, and proves the proposition.
Yes, you have implicitly used the axiom of choice when simultaneously fixed one $x'$ in each equivalence class of $\sim$, which is needed for the definition of $g$. |
H: Can a set of non self-intersection points of a space-filling curve contain an arc?
Consider a continuous surjection $f:[0,1]\to[0,1]\times[0,1]$.
It can be proved that set of self-intersection points must be dense.
In the Hilbert curve, the set of self-intersections are points (a,b) such that either a or b can be written as $\frac{m}{2^k}$ for some integers $k≥1$ and $1≤m<2^k$ (see this explanation).
So the set of self-intersections is dense and of measure $0$, but you cannot draw any vertical or horizontal line without intersecting it.
This leads me to the question: can there exist, for some space-filling curve, an arc (homeomorphic to a non-degenerate closed interval)
$\tau\subset [0,1]\times[0,1]$ such that $\forall_{t_1,t_2\in [0,1]} f(t_1) = f(t_2) \in \tau \implies t_1 = t_2$? That is, such that $\tau$ includes only non-self-intersection points?
AI: No, there can be no such arc.
Suppose for the sake of contradiction that $A$ is an arc in $[0,1]^2$ that contains no self-intersections of $f$.
Let $g \colon [0,1] \to A$ be a homeomorphism.
Then $A$ is compact and connected.
Since $A$ is compact and $[0,1]^2$ is a Hausdorff space, $A$ is closed.
Since $f$ is continuous, $B = f^{-1}[A]$ is also closed, and therefore compact.
Now the restriction of $f$ to $B$, $f \restriction B$, is a continuous bijection.
A continuous bijection from a compact space to a Hausdorff space is a homeomorphism, so $B$ is homeomorphic to $A$.
Thus $B$ is also connected.
A closed, connected set of real numbers is a closed interval, so $B$ is a closed interval $[p, q]$.
Let $D = ([0,1] \setminus B) \cup \{ \min B, \max B \} = [0,p] \cup [q, 1]$.
Then $D$ is obviously compact, but $f(D)$ is not:
$f(D)$ is the square with the arc $A$ removed and then just two points put back, so any other element of $A$ is a limit point of $f(D)$. |
H: Is there a way to factor $uv-u-v-1$?
$uv-u-v-1$. I tried $(u+1)(v-1)$ and it's almost correct but I can't quite get it. This is part of a limit problem I'm doing. Thanks!
AI: This polynomial in $u$ and $v$ is irreducible (not factorable). Observe that since it is degree 2, it can only factor nontrivially as the product of two linear polynomials in $u$ and $v$. So $$uv-u-v-1=(au+bv+c)(du+ev+f)$$ Without loss of generality we can rescale so that $a=1$. Then $d$ must be $0$, so that the product has no $u^2$ term. Since the product is nontrivial, $e$ must now be nonzero, and therefore $b=0$ so that there is no $v^2$ term. Now
$$uv-u-v-1=(u+c)(ev+f)$$ forcing $e$ to equal 1 to match the coefficient of $uv$. Lastly, to match coefficients on $u$ and $v$, we have $c=f=-1$, but this is in contradiction with the constant term. |
H: Pumping Lemma Proof for $ww$
Most of the solutions I found for proving that the language $F = \{ww\mid w ∈ \{0,1\}^*\}$ is not regular using the pumping lemma use the string $0^p10^p1$. I understand the proof using that. But in Michael Sipser's Introduction to the Theory of Computation book he mentions that using the string $0^p0^p$ will fail to prove that $F$ is not a regular language. But from what I understood, if we take the string as $0^{p-r}0^{r}0^{p}$, where $p$ is the pumping length, then when $r= 0$ the string that is generated is $0^{p}10^p$. So this is clearly a string that is not generated by this language. By using this we can prove $F$ is not a regular language. Is this correct?
thanks.
AI: No it's not, because $0^r$ means "$0$ repeated $r$ times". The operation that is repeated is concatenation and not multiplication. In particular, $0^0$ is the empty word $\varepsilon$ (because it is the neutral element of concatenation), and certainly not $1$. |
H: Some questions about $f(z) = \frac 1 {e^z -1}$
Let $f(z):= \frac 1 {e^z-1}$.
First question: Why has $f$ a pole of order $1$ in $z = 0$? Second question: How can we determine the radius of convergence of $\sum_{n} a_n z^n$ which is the Laurent-series of $f$ around $z=0$ ? I have computed that
$$
\forall n > 0: b_n = - \sum_{i=0}^{n-1} \frac{b_i}{(n+1-i)!}
$$ where $a_n = b_{n+1}$ and $a_{-1} = 1$.
AI: Hint: What would the radius of convergence of $g(z)=\dfrac{z}{e^z-1}$ be around $z=0$?
Second hint: When is $e^z-1=0$? |
H: Unitarily equivalent?
I'm confused about that notion.
In my textbook there are two examples.
(1) $A=\begin{pmatrix} 1&1&0\\0&2&3\\0&0&3\end{pmatrix}$ and $B=\begin{pmatrix} 1&0&0\\0&2&0\\0&0&3\end{pmatrix}$
They have same eigenvalues but not unitarily equivalent because one is symmetric and the other is not.
(2) $A=\begin{pmatrix} 0&1&0\\-1&0&0\\0&0&1\end{pmatrix}$ and $B=\begin{pmatrix} 1&0&0\\0&i&0\\0&0&-i\end{pmatrix}$
They have same eigenvalues and unitarily equivalent.
I can't understand.
The second one has the same form as the first one (I mean, one is symmetric and the other is not), but why is it unitarily equivalent?
AI: $A$ and $B$ are unitarily equivalent $\iff$ $\exists\, U:\, U^{\ast}=U^{-1}$, $U^\ast A U=B$.
In the first case you always have $(U^\ast B U)^\ast = U^\ast B U$, while $A^\ast\ne A$, hence not unitarily equivalent to $B$.
But the same argument fails in the second case, because here neither matrix is Hermitian (self-adjoint), so there is no such obstruction. When complex numbers come into play, you need to use adjoint matrices, not transposed ones.
So, in order to prove that in second case the matrices are unitarily equivalent, it's sufficient to find an orthonormal basis in $\mathbb C^3$ which consists of eigenvectors of the matrix $A$, which is quite easy to do. |
H: Convergence of the series $\sqrt[n]n-1$
Let $a_n=\sqrt[n]n-1$. Does $\sum_{n=1}^\infty a_n$ converge?
AI: Hint: $$\sqrt[n]n = e^{\frac{\log n}{n}} > 1+\frac{\log n}{n}$$ |
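In other words $a_n>\frac{\log n}{n}\geq\frac1n$ for $n\geq 3$, so by comparison with the harmonic series, $\sum_{n=1}^\infty a_n$ diverges.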
H: lim sup and sup inequality
Is it true that for a sequence of functions $f_n$
$$\limsup_{n \rightarrow \infty }f_n \leq \sup_{n} f_n$$
I tried to search for this result, but I couldn't find, so maybe my understanding is wrong and this does not hold.
AI: The inequality
$$\limsup_{n\to\infty}a_n\leq\sup_{n\in\mathbb{N}}a_n$$
holds for any real numbers $a_n$, because the definition of $\limsup$ is
$$\limsup_{n\to\infty}a_n:=\lim_{m\to\infty}\left(\sup_{n\geq m}a_n\right)$$
and for any $m\in\mathbb{N}$, we have
$$\left(\sup_{n\geq m}a_n\right)\leq\sup_{n\in\mathbb{N}}a_n$$
(if the numbers $a_1,\ldots,a_{m-1}$ are less than or equal to the supremum of the others, both sides are equal, and if not, then the right side is larger). Therefore
$$\limsup_{n\to\infty}f_n(x)\leq \sup_{n\in\mathbb{N}}f_n(x)$$
holds for any real number $x$, which is precisely what is meant by the statement
$$\limsup_{n\to\infty}f_n\leq \sup_{n\in\mathbb{N}}f_n.$$ |
H: Proof derivative using Cauchy-Riemann
Using the Cauchy–Riemann equations I have to prove that $f(z)=e^{iz}$ is analytic and its derivative is $ie^{iz}$
Using $z=x+iy$ this is what I have done:
$$e^{iz}=e^{i(x+iy)}=e^{ix-y}= \frac{e^{ix}}{e^{y}}=\frac{\cos x+i\sin x}{e^{y}}$$ so, $u(x,y) = \frac{\cos x}{e^y}$ and $v(x,y) = \frac{\sin x}{e^y}$
I've made all the derivatives and $u_x(x,y)=v_y(x,y)$ and $u_y(x,y)=-v_x(x,y)$
I've then made
$$f'(z)=u_x(x,y)+iv_x(x,y)=\frac{-\sin x}{e^y}+\frac{i\cos x}{e^y} = \frac{-\sin x+i\cos x}{e^y}$$
but now I'm stuck. How can I go from here to $ie^{iz}$?
AI: $$
\frac{-\sin x + i\cos x}{e^y} = e^{-y}(i(i\sin x+\cos x)) = i\left(e^{-y}e^{ix}\right) = i e^{i(x+iy)} = i e^{iz}.
$$ |
H: Expand log function with two terms
How can I expand ln(1+2/(A-1))? I think I need to use a Taylor series but the 1 is messing me up.
Should I just ignore the 1?
AI: You can expand it iff $|\frac{1}{2(A-1)}|<1$.
The expansion is $\displaystyle\ln(1+w)=w-\frac{w^2}{2}+\frac{w^3}{3}\dots=\sum_{n=1}^{\infty}(-1)^{n+1}\frac{w^n}{n}$ ,for $|w|<1$
By the way this is maclaurin series. |
H: Mathematics Essence
I started reading History of Philosophy and readily noticed that the origins of our present natural sciences were due to the proper use of inductive logic. Our Physics, Chemistry and Biology are all known to have started with the revolution of Thales and the pre-Socratic thinkers, conscious human beings who decided to stop relying on mythical and supernatural explanations for all phenomena and started using inductive logic, especially in cosmology.
I was trying to devise the same understanding about Mathematics. I read in History of Mathematics that the first accounts of something similar to mathematics were the activity of our ancestors (in the prehistoric period) in perceiving that different collections of objects could actually have the same property: they noticed they might have the same "number" of elements, as we now know. From there, the Babylonians, Indians, Chinese, Egyptians and Muslims developed this idea further. I'm trying to see if there was anything essential (inductive/deductive logic, Wittgenstein's language-games on collections of objects?) that led to the idea of numbers, and especially how the latter led to the development, by those societies, of Arithmetic, Geometry and Algebra, some foundations of our modern Mathematics.
As inductive logic led to the development of the natural sciences, is there anything essential in our human existence (brain, mind, etc.) that led to the development of Mathematics?
Highly appreciate all kind of answers !
AI: Maybe this question is more philosophy than mathematics, but possibly mathematicians can contribute some relevant things that philosophers cannot.
It is mentioned that physics, biology, and chemistry rely on inductive reasoning.
I would hold that mathematics does as well.
In empirical sciences, inductive reasoning on which the science rests is from observations to general propositions. (E.g.: the buoyancy equals the weight of the displaced fluid.)
In mathematics, inductive reasoning on which the discipline rests is from examples to general concepts. (E.g.: One observes groups of motions in geometry and groups of permutations of roots of algebraic equations, and then forms the algebraic concept of a group.)
One of these goes from empirical observations to general propositions. The other goes from examples to general concepts. |
H: Does a lower bounded set always have an infimum?
Let $A$ be a partially ordered subset of $X$. If $A$ is bounded below, does $\inf(A)$ exist?
AI: Let us assume $A$ is nonempty to avoid pathologies.
The statement does not have to be true even in the nice case that the ambient space $X$ is totally ordered:
Let $\Bbb Q^\times$ be the set of nonzero rational numbers, and let $\Bbb Q_{>0}$ be the set of positive rationals; then $\Bbb Q_{>0}$ has every negative rational as a lower bound, but there is no largest such lower bound.
When we do not assume the space is totally ordered, three elements suffice: put $X = \{a,b,c\}$, $A = \{c\}$ and set $a \preceq c, b\preceq c$ as the only nontrivial comparisons. Then $a$ and $b$ are both lower bounds of $A$, but since they are incomparable, neither is the largest lower bound, $\inf(A)$ does not exist. |
H: The Abelianization of $\langle x, a \mid a^2x=xa\rangle$
I wish to verify the following statement (which comes from Fox, "A Quick Trip Through Knot Theory", although that is probably not important).
"$\Gamma=\pi_1 (M)=\langle x, a \mid a^2x=xa\rangle$ so the homology of $M$ is infinite cyclic."
So, I need to find the Abelianization of the fundamental group. Using the relations I get
$$y_1:=[x,a]=x^{-1}ax,\qquad y_2:=[a,x]=x^{-1}a^{-1}x$$
generate $[\Gamma,\Gamma]$. Now thinking of $\Gamma/[\Gamma,\Gamma]$ as left cosets I find
$$xy_1=ax,\qquad ay_1=ax^{-1}ax$$
$$xy_2=a^{-1}x,\qquad ay_2=ax^{-1}a^{-1}x$$
but I don't see how this is infinite cyclic. Using various combinations of the relations don't seem to get me to a single coset. Is there some trick I am missing related to $[\Gamma,\Gamma]$ apparently being the conjugates of $a$? Or did I just screw up something else?
AI: To go from a group presentation to a presentation of the abelianisation, you have to add the commutator relation for each pair of generators. In this case, it says $ax = xa$. From your relation you get $axa = xa$, and so $a = 1$. Thus the abelianisation is just $\mathbb{Z}$ (as it should be for a knot complement). |
H: Must an ideal contain the kernel for its image to be an ideal?
I'm trying to learn some basic abstract algebra from Pinter's A Book of Abstract Algebra and I find myself puzzled by the following simple question about ring homomorphisms:
Let $A$ and $B$ be rings. If $f : A \to B$ is a homomorphism from $A$ onto $B$ with kernel $K$, and $J$ is an ideal of $A$ such that $K \subseteq J$, then $f(J)$ is an ideal of $B$.
I'm clearly missing the obvious, but I don't see where the requirement $K \subseteq J$ comes into the proof. Since $f$ is also a homomorphism of additive groups, the image $f(J)$ must be closed under addition and negatives (correct?). Then for $f(J)$ to be an ideal we have to show that it is closed under multiplication by an arbitrary element $b \in B$. Since $f$ is onto, there is some $a \in A$ such that $b = f(a)$. Let $j'$ be any element of $f(J)$, so $j' = f(j)$ for some $j \in J$. Then $bj' = f(a)f(j) = f(aj)$, and $aj \in J$ since $J$ is an ideal. Then it seems that $f(J)$ is closed under multiplication by $B$.
What mistaken assumption am I making?
AI: You are right, it is not required. However maybe later he wants to point out the following fact: There is a bijection of ideals of $A$ containing $K$ and ideals of $B$. You give the one direction: If $J$ is an ideal of $A$ then $f(J)$ is an ideal of $B$. The inverse of this map is given by: If $I$ is an ideal of $B$ then $f^{-1}(I)$ is an ideal of $A$ that contains $K$.
Edit: Using this we can answer to Jon O's question in the comment below: If $B$ is a field and $I$ is an ideal of $A$ that contains $\ker f$ properly, then $f(I)$ is an ideal of $B$ that contains $0$ properly. Since every non zero element of $B$ is a unit $f(I)$ must be the whole of $B$. By the above we have $I=f^{-1}(f(I))=f^{-1}(B)=A$ so every ideal properly containing $\ker f$ is the whole of $A$, i.e. $\ker f$ is maximal. |
H: Number of sides a regular polygon has.
The question is "Both tile A and B are regular polygons.
Work out the number of sides A has."
For this I put B is equilateral ∴ all angles are 60.
However, I have no idea where to go from this.
Could anyone give me any tips for solving this and similar questions? Thanks.
AI: Hint Look around a $60^\circ$ angle. You see a $60^\circ$ angle of tile $B$ and two angles of tile $A$. So what is the angle of tile $A$?
What is the angle between two sides of a regular $n$-gon? For which $n$ is exactly the above value? |
H: FInd the number of pairs $(A,B)$
Let $n,r,s$ be given, where $n\geq 1$,$1\leq r\leq n$ and $1\leq s \leq n$.
a) determine the number of pairs $(A,B)$ with $A\subseteq N_n, |A|=r,B\subseteq N_n, \text{and} |B|=s $
Now my intuition says that these two could intersect, but they don't have to. So I would define it as being
$$
A \cup B = 2 \leq r+s\leq 2n
$$
But my intuition seems to be too easy. Any pointers?
AI: It seems to me that you're making things more difficult for yourself by considering the union of $A$ and $B$ rather than considering the sets separately, since each of the sets are chosen independently.
The question can be broken down as follows:
1. how many valid choices are there for $A$
2. how many valid choices are there for $B$
3. how many pairs of $A$ and $B$ are there
For 1, we just need the number of distinct subsets of $N_n$ that are of size $r$. You can compute this as
$$
\binom{n}{r}=\frac{n!}{r!(n-r)!}
$$
2 amounts to the same except with $s$ instead of $r$
$$
\binom{n}{s}=\frac{n!}{s!(n-s)!}
$$
Finally, since we made these choices independently, the total number of pairs is just the product of these two numbers. So, our final answer is
$$
\text{Total}=\binom{n}{r}\binom{n}{s}=\frac{(n!)^2}{r!s!(n-r)!(n-s)!}
$$ |
H: Confusion related to smoothness of a function
I just came across the claim that $\operatorname{trace}(AB)$, where $A$ and $B$ are two matrices, is a smooth function. I don't understand how it is a smooth function. Any suggestions?
AI: A smooth function has derivatives of all orders. In this case, trace$(AB)$ is a product and sum of entries. The partial derivative with respect to any of the entries is again a product and sum of entries, hence well-defined.
Example as requested:
Let $A=[x], B=[y]$. Then $AB=[xy]$ and $tr(AB)=xy$. $\frac{\partial}{\partial x} tr(AB)=y$, and $\frac{\partial}{\partial y} tr(AB)=x$. Further partial derivatives will be constants, then zero, so all partial derivatives exist of all orders. |
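In general, for $n\times n$ matrices the same thing happens all at once:
$$\operatorname{trace}(AB)=\sum_{i=1}^{n}\sum_{k=1}^{n}A_{ik}B_{ki},$$
a polynomial in the $2n^2$ entries of $A$ and $B$, and polynomials have partial derivatives of all orders.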
H: Show that an algebraically closed field must be infinite.
Show that an algebraically closed field must be infinite.
Answer
If F is a finite field with elements $a_1, ... , a_n$ the polynomial $f(X)=1 + \prod_{i=1}^n (X - a_i)$ has no root in F, so F cannot be algebraically closed.
My Question
Could we not use the same argument if F was countably infinite? Couldn't we say that if F was a field with elements $a_1, a_2, ... $ then the polynomial $f(X) = 1 + \prod_{i=1}^{\infty} (X - a_i)$ does not split over F?
Thank you in advance
AI: We can't, because $\prod_{i=1}^\infty (X-a_i)$ is not a polynomial. Infinite combinations of finitary operations are not defined; when we do talk about them, what is really going on is that we are taking some sort of limit, but to do that we need topology to be present, and even then, the infinitary operation will be subtle. A priori, one cannot talk about an infinite sum or an infinite product of elements. |
H: show that $|\cup^n_{i=0} X_i| = \sum \binom{n}{r}\binom{r}{r-i}\binom{n-r}{s-i}$
Define $$
X_i = \{(D,E,F) : D \subseteq N_n, |D| = r,E\subseteq D, |E|=r-i,F\subseteq N_n - D, |F|=s-i\}
$$
where $N_n$ denotes the set $\{1,2,\ldots,n\}$.
$$|X_0 \cup X_1 \cup \ldots\cup X_n| = \sum_{i=0}^n \binom{n}{r}\binom{r}{r-i}\binom{n-r}{s-i}$$
Now my first intuition was to draw a Venn diagram to visualize the problem. This helped a lot and makes it clear to me that this is true. At first I noticed that the $X_i$ are pairwise disjoint, and as such the LHS can be written as:
$$
\begin{align}
|X_0 \cup X_1 \cup \ldots\cup X_n| &=\sum_{i=0}^n|X_i|\\
&= \dots
\end{align}
$$
Now the only thing I can think of is: it's true, so just write it out, because the first binomial coefficient counts the choices for $D$, the second the choices for $E$, and the last the choices for $F$, and then we add over all $i$. It seems really obvious, but actually writing it out is the hard part.
AI: This goes exactly like the accepted answer for this question.
For $X_i$, there are $\binom{n}{r}=\frac{n!}{r!(n-r)!}$ choices for $D$, $\binom{r}{r-i}=\frac{r!}{(r-i)!(r-(r-i))!}$ choices for $E$ and $\binom{n-r}{s-i}=\frac{(n-r)!}{(s-i)!(n-r-(s-i))!}$ choices for $F$. Hence $|X_i|=\binom{n}{r}\binom{r}{r-i}\binom{n-r}{s-i}$ and $$|X_0 \cup X_1 \cup \ldots\cup X_n| =\sum_{i=0}^n|X_i|=\sum_{i=0}^n \binom{n}{r}\binom{r}{r-i}\binom{n-r}{s-i}$$ |
H: what are necessary conditions for $\mathbb{E}[X_n] \to \mathbb{E}[X]$?
Say $\{X_n\}$ is a sequence of random variables. What type of convergence to $X$ (or additional conditions) is required to ensure convergence of the means?
I think convergence in probability is not enough in general. However, convergence in probability implies convergence in distribution (or law) and that implies that for any $g$ bounded and continuous:
$$ \lim_{n \to \infty} \mathbb{E}[g(X_n)] = \mathbb{E}[g(X)]. $$
So, in particular, if $X_n$ and $X$ can only take a finite set of values, taking $g$ to be the identity, the convergence would follow. Is that true?
Thanks!
AI: what you wrote about g having a finite set of values is correct. also if you're interested in the convergence in probability thing - if you additionally assume that the family $X_n$ is uniformly integrable then what you wrote will be true
actually you'll even get something better, you'll get $L_1$ convergence, ie $E |X_n - X|$ will converge to 0 - it's a well known theorem |
H: How did we know to invent homological algebra?
Update: Qiaochu Yuan points out in the comments that the title of the question is misleading, as homological algebra did not begin with long exact sequences as I'd thought.
(Original question follows.)
I want to understand how anyone knew that the long exact sequence in homology was something to go looking for. It seems like every time I push it back a step I just end up with more questions, though...
I know that the long exact sequence in homology comes from short exact sequences of chain complexes. The motivating example I've generally seen for this is something like this: Let $B$ be a (simplicial, CW, etc.) complex and let $A$ be a subcomplex of $B$. The inclusion induces a map of chain complexes $C_\bullet A \to C_\bullet B$, and so we can get a short exact sequence $0 \to C_\bullet A \to C_\bullet B \to C_\bullet B / C_\bullet A \to 0$. If we write $H_\bullet(B, A) := H_\bullet(C_\bullet B / C_\bullet A)$ then we get induced maps $H_n(A) \to H_n(B) \to H_n(B, A)$.
Ignore for the moment that it seems to be a pretty far stretch to get from there to the idea that a long exact sequence might exist in homology. So far as I can tell $H_n(B, A)$ is not a natural object to study from a geometric perspective, and the only purpose of defining it is that it (tautologically) gives us a short exact sequence.
But then how do we know that short exact sequences are something we want to study? The only motivation I know of for wanting to look at a short exact sequence is that it can lead to a long exact sequence in homology, but how do you discover that fact without knowing to start by looking at short exact sequences?
Of course even once we've started looking at short exact sequences, the existence of a connecting map seems far from obvious.
Can anyone give some insight as to how people ever came up with this stuff?
AI: In my opinion, the relative singular homology group $H_n(B, A)$ is naturally a geometric object. As Stefan H. says in the comments, the idea of considering the boundary of a relative cycle in $(A, B)$ as a cycle of one dimension less in $B$ is quite natural.
But why is this connected to the short exact sequence? A quotient of chain groups (the relative group) ought to be related to a quotient of the underlying spaces.
In the relative group $C_n(B, A)$, the condition for a chain to be a cycle is relaxed from the boundary being $0$ to the boundary being a chain in the subspace $A$. Under mild assumptions on the pair $(A, B)^\dagger$, the quotient map $B \to B/A$ induces an isomorphism
$$
H_n(B, A) \overset{\sim}{\longrightarrow} \tilde{H}_n(B/A).
$$
When the subspace $A$ is quotiented to a point, the boundary of a chain in $B$ maps to that point in $H_n(B/A)$, or $0$ in the reduced group.
$^\dagger$ $A$ is closed and is a deformation retract of a neighborhood $A'$ in $B$ |
H: On the ramification index in Dedekind extensions
Citing from my textbook...
Definition If $R\subseteq R'$ are Dedekind domains and $\mathfrak{P}$ is a nonzero prime ideal of $R'$ and $\mathfrak{p}=\mathfrak{P}\cap R$ then the ramification index of $\mathfrak{P}$ over $R$ is the power of $\mathfrak{P}$ appearing in the prime factorization of $\mathfrak{p}R'$.
My question is: it couldn't happen that $\mathfrak{P}\cap R=0$, the zero ideal? If yes, could you provide some (easy) examples? If yes is the answer, how should the definition be modified? (i.e. what is the ramification index in this case?)
AI: Yes, it could happen - for example with $R=\mathbb{Z}$, $R'=\mathbb{C}[x]$, and $\mathfrak{P}=(x)$. However, the setting for ramification theory is (as far as I have seen) when $\mathrm{Frac}(R)\subseteq\mathrm{Frac}(R')$ is a finite, or at the very least algebraic, extension. In this setting, there is an induced injective map
$$R/\mathfrak{p}\hookrightarrow R'/\mathfrak{P},$$
$R'/\mathfrak{P}$ is a field, and the extension $R'/\mathfrak{P}$ over $R/\mathfrak{p}$ is integral, so that $R/\mathfrak{p}$ must be a field as well by Proposition 5.7 of Atiyah-Macdonald. |
H: The system of equations $x^2 + y^2 - x - 2y = 0$ and $x + 2y = c$
I have
$(1.) \quad x^2 + y^2 - x - 2y = 0 \\
(2.) \quad x + 2y = c$
Solving for $y$ in $(2.)$ gives
$y = (c - x) / 2$
Is there a way to simplify equation $(1.)$?
Because at the end I arrive at
$c^2 - 2x - c = 0$
and can't proceed. Need to get typical form of square equation $Ax^2 + Bx + C = 0$.
The solution for $c$ is $0$ or $5$.
EDIT 1:
For what real values of the parameter do the common solutions of the following pairs of simultaneous equations become identical?
(g) $ \quad x^2 + y^2 - x - 2y = 0, \quad x + 2y = c$, Ans. c = 0 or 5.
The process is to solve for $y$, then to substitute that into the other equation.
From that we get $A$, $B$ and $C$. With the discriminant $\Delta = B^2 - 4AC$ we can get the parameter.
So
$y = (c - x) / 2$
$x^2 + y^2 - x -2y = 0$
$x^2 + ((c-x)/2)^2 - x - 2 * ((c-x)/2) = 0$
and I got lost...
AI: If you substitute correctly, you will get the equation $5\,{x}^{2}-2\,c\,x+{c}^{2}-4\,c=0$.
If you solve this, you will get $x = \frac{1}{5}(c \pm 2\sqrt{c(5-c)})$. The other value is given by $y=\frac{1}{2}(c-x)$, but you don't need this to answer your question.
If the two solutions are to be identical (a double root), then we must have $c(5-c)=0$, hence $c=0$ or $c=5$.
Addendum: To get the above equation, replace $y$ in $x^2+y^2-x-2y=0$ by $y = \frac{1}{2}(c-x)$ (from the second equation). That is, the equation
\begin{eqnarray}
x^2+\frac{1}{4}(c-x)^2 -x -(c-x) &=& \frac{1}{4}(4x^2+(c-x)^2 -4c) \\
&=& \frac{1}{4}(5 x^2-2 c x +c^2-4c)
\end{eqnarray}
Then the solutions are (ignoring the $\frac{1}{4}$, of course):
\begin{eqnarray}
x &=& \frac{1}{10}\left(2c \pm \sqrt{4 c^2-20(c^2-4c)} \right) \\
&=& \frac{1}{10}\left(2c \pm \sqrt{16 c(5-c)} \right) \\
&=& \frac{1}{5}\left(c \pm 2\sqrt{ c(5-c)} \right)
\end{eqnarray} |
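As a quick check of the two parameter values: for $c=5$ the double root is $x=1$, $y=\frac12(5-1)=2$, and indeed $1+4-1-4=0$ and $1+2\cdot 2=5$; for $c=0$ it is $x=0$, $y=0$, which also satisfies both equations.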
H: How to prove the properties derived from a matrix's signature
We've recently learned about metric signatures following the proof of Sylvester's law of inertia but we didn't quite say which properties does the signature of a given matrix $A\in \mathcal{M}_n\left(\mathbb{R}\right) $ have. Let's call it $(n_{_+} ,n_{_-} ,n_{_0} )$. What I'm asking is what are the main things I can conclude based on a signature?
For example given $B:\mathbb{R}^6\times\mathbb{R}^6\to \mathbb{R}$ and a singature $(3 ,3 ,0 )$ what does it tell me?
Is there a subspace $U\subseteq \mathbb{R}^6$ on which $B\Bigl|_{U} \equiv0$ ?
What's the maximal (dimension-wise) subspace of $\mathbb{R}^6$ on which $ B$ is positive/negative definite ?
When reading Wikipedia it seems like this is the kind of information one can conclude from a signature, but unfortunately I could not find a list of properties nor proofs to those properties.
Edit: It seems that the $n_{_0}$ is not like the others in the sense that for example considering $\left[\mathbf{B^{\phantom{}}}\right] =\left(\begin{matrix}1 & 0 & 0\\ 0 & -1 & 0 \\ 0 &0 &1\end{matrix}\right)$, the signature is $(2,1,0)$ but if we take $ U = \mbox{span} \left \lbrace e_1 + e_2 \right \rbrace$ in that basis we get that:
$v,u\in U \Rightarrow v=\alpha\left(e_1 + e_2 \right) \land v=\beta\left(e_1 + e_2 \right),\ \forall v,u\ \ \ B(v,u) = B(\alpha(e_1 + e_2 ),\beta(e_1 + e_2 ))= \alpha \beta B(e_1 + e_2 ,e_1 + e_2 ) = \alpha \beta \left(B(e_1,e_1) + \overbrace{B(e_1,e_2)}^0 + \overbrace{B(e_2,e_1)}^0 + B(e_2,e_2)\right) = \alpha \beta (1-1) = 0 \Rightarrow B\left(U\right)\equiv 0$
Proposition $\min(n_{_+},n_{_-}) + n_{_0} $ is the maximal dimension of $U\subseteq \mathbb{R}^n$ on which $B\Bigl|_{U} \equiv0$ ?
Is there a simple proof or counter example to this property ?
AI: Let's take a matrix $A=A^\ast$ with signature $(n_+,n_-,n_0)$. It means we have an eigenvalue $0$ of multiplicity $n_0$; this allows to conclude on the existence of your subspace $U$ ($U\ne 0\iff n_0>0$). In the same spirit, if $n_+>0$, then we have positive eigenvalues, and on the subspace generated by corresponding eigenvectors we have our matrix $A$ as positive definite; apparently, the maximum dimension is equal to $n_+$, because that's exactly the number of eigenvectors for positive eigenvalues. Same goes for the case $n_->0$.
Edit A more formal approach.
1) $A=A^\ast$, hence our matrix is diagonalisable, has an orthonormal basis of eigenvectors. It's signature is $(n_+,n_-,n_0)$. The eigenvectors $\vec e_k^+$ correspond to positive eigenvalues, similarly for $\vec e_k^-$ and $\vec e_k^0$.
2) Let $E_+$ be a subspace on which $A$ is positive definite, of dimension $N$. Take $N$ linearly independent vectors $\vec u_j\in E_+$; note that every nonzero vector of $E_+$ has a non-zero component in $\text{span}\{\vec e_k^+\}$ (otherwise $(Av,v)\le 0$, contradicting positive definiteness). Call $\vec u^+_j$ the orthogonal projection of $\vec u_j$ on $\text{span}\{\vec e_k^+\}$. Suppose the family $\vec u^+_j$ were linearly dependent; then some nontrivial linear combination of these vectors is zero, and the same linear combination of the vectors $\vec u_j$ would be a nonzero vector of $E_+$ with components only in $\text{span}\{\vec e_k^-\}+\text{span}\{\vec e_k^0\}$, which is impossible in $E_+$. This implies that the family $\vec u^+_j$ is independent. We can have a family of at most $n_+$ linearly independent vectors in $\text{span}\{\vec e_k^+\}$, thus $N\le n_+$.
Taking $E_+=\text{span}\{\vec e_k^+\}$ shows that $\dim E_+$ can indeed equal $n_+$.
3) Same reasoning applies for $E_-$.
4) Let's define $E_0$ as a maximal subspace on which $A$ vanishes as a bilinear form. Clearly, all $\vec e_k^0$ are in it. This allows us to write $E_0=\text{span}\{\vec e^0_k\}\oplus E$, where $E$ is spanned by vectors with no $\vec e^0_k$ component. Take a basis $\vec u_j$ of $E$, and its orthogonal projections $\vec u^+_j$, $\vec u^-_j$ onto $\text{span}\{\vec e^+_k\}$ and $\text{span}\{\vec e^-_k\}$. If the family $\vec u^+_j$ were not linearly independent, there would exist a nontrivial linear combination of the $\vec u^+_j$ equal to zero; the same combination $v$ of the $\vec u_j$ is nonzero (the $\vec u_j$ form a basis) and lies in $\text{span}\{\vec e^-_k\}$, which can't happen in $E_0$ since then $(Av,v)<0$. Hence, the family $\vec u^+_j$ is linearly independent; similarly, the family $\vec u^-_j$ is independent, too. Given that $\dim \text{span}\{\vec e^\pm_k\} =n_\pm$, we conclude that $\dim \text{span}\{\vec u_j\}\le \min(n_-,n_+)$ and $\dim E_0\le \min(n_-,n_+)+n_0$. It is quite easy to build such an $E_0$ with its dimension precisely equal to $\min(n_-,n_+)+n_0$ (pair each negative-eigenvalue direction with a positive one, suitably scaled, and add the kernel), which concludes the proof. |
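A small NumPy sketch may help make both points concrete — reading off $(n_+,n_-,n_0)$ from the eigenvalues and exhibiting a subspace of dimension $\min(n_+,n_-)+n_0$ on which the form vanishes. The example matrix below is an arbitrary diagonal one chosen purely for illustration:

```python
import numpy as np

# hypothetical example: a symmetric matrix with signature (2, 1, 1) on R^4
A = np.diag([3.0, 1.0, -2.0, 0.0])

evals, evecs = np.linalg.eigh(A)
tol = 1e-9
n_plus  = int(np.sum(evals >  tol))
n_minus = int(np.sum(evals < -tol))
n_zero  = len(evals) - n_plus - n_minus
print(n_plus, n_minus, n_zero)                    # 2 1 1

# a subspace of dimension min(n+, n-) + n0 on which x^T A y vanishes:
# pair each negative direction with a positive one (scaled so the two
# contributions cancel) and add all kernel vectors.
pos = [evecs[:, i] / np.sqrt(evals[i])  for i in range(len(evals)) if evals[i] >  tol]
neg = [evecs[:, i] / np.sqrt(-evals[i]) for i in range(len(evals)) if evals[i] < -tol]
ker = [evecs[:, i]                      for i in range(len(evals)) if abs(evals[i]) <= tol]
U   = [p + m for p, m in zip(pos, neg)] + ker
G   = np.array([[u @ A @ v for v in U] for u in U])
print(len(U), np.allclose(G, 0))                  # 2 True   (min(2,1) + 1 = 2)
```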
H: questions about proof of reverse formula of Fourier transform
We had the following theorem in class for a fourier transform $\widehat f$:
Let $\widehat f$ be the restriction to $\mathbb R$ of a function $F$ that is meromorphic on $\mathbb C$. Let $F$ have a finite number of poles and let $z\cdot F(z)$ be bounded for large $|z|$. Then for $t>0$\begin{align*}
f(t)=i\cdot\sum_{\Im(z)>0}\textrm{res}_z\left(F(z)e^{i zt}\right)\end{align*}
My proof:
Consider $\gamma_r(s):=re^{i s}$:
The residue theorem gives us
for big $r$\begin{align*}
\int_{\gamma_r}F(z)e^{i zt}\,d z+\int_{-r}^r\widehat f(\omega)e^{i \omega t}\,d \omega=2\pi i\cdot\sum_{\Im(z)>0}\textrm{res}_z\left(F(z)e^{i zt}\right)
\tag{$\ast$}\end{align*}
Now $|z\cdot F(z)|\leq M$ for $|z|\geq R$, so for $r\geq R$\begin{align*}
\left|\int_{\gamma_r}F(z)e^{i zt}\,d z\right|&=\left|\int_0^\pi F(re^{i s})i re^{i s}\cdot e^{i \gamma_r(s)t}\,d s\right|\\
&\leq \int_0^\pi r\left|F\left(re^{i s}\right)\right|e^{-rt\sin s}\,d s\\
&\leq M\int_0^\pi e^{-rt\sin s}\,d s
\end{align*}
For $0\leq s\leq \frac{\pi}{2}$ we have $\sin s\geq\frac{2s}{\pi}$, and since $\sin(\pi-s)=\sin s$ the integral over $[0,\pi]$ equals twice the integral over $[0,\frac{\pi}{2}]$, so
\begin{align*}
2M\int_0^{\frac{\pi}{2}}e^{-rt\sin s}\,d s&\leq2M\int_0^{\frac{\pi}{2}}e^{-rt\frac{2s}{\pi}}\,d s\\
&=2M\left[\left(\frac{-\pi}{2rt}\right)\cdot e^{-rt\cdot\frac{2s}{\pi}}\right]_0^{\frac{\pi}{2}}\\
&=\frac{M\pi}{rt}\cdot\left(1-e^{-rt}\right)\\
&\xrightarrow[]{r\rightarrow\infty}0
\end{align*}
So we get\begin{align*}
f(t)=\frac1{2\pi}\lim_{r\rightarrow\infty}\int_{-r}^r\widehat f(\omega)e^{i\omega t}\,d\omega=i\cdot\sum_{\Im(z)>0}\textrm{res}_z\left(F(z)e^{i zt}\right)
\end{align*}
So ($\ast$) was given in class, but I don't understand why this equation holds. How does it follow from the residue theorem?
And second, why is $|e^{i\gamma_r(s)t}|=e^{-rt\sin s}$? (I checked it with Wolfram Alpha because I couldn't get a nice result by hand.)
Thanks.
AI: 1) It seems like you have assumed that $0 \leq s \leq \pi$ for $\gamma_r(s):=re^{i s}$, so that it is the arc of an upper half-circle. Together with the segment from $-r$ to $r$ this gives a closed semicircular contour in the upper half plane, and for large $r$ all poles in the upper half plane lie inside it (since there are finitely many). Then ($\ast$) is just the residue theorem applied to $F(z)e^{izt}$ on this contour.
2) Remember that $\gamma_r(s):=re^{i s} = r \cos(s) + ir \sin(s)$, so $e^{i\gamma_r(s)t}=e^{irt\cos s}\,e^{-rt\sin s}$; since $|e^{ik}|=1$ when $k$ is real, this gives $|e^{i\gamma_r(s)t}|=e^{-rt\sin s}$. |
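As a numerical sanity check of the inversion formula (with the convention $f(t)=\frac1{2\pi}\int\widehat f(\omega)e^{i\omega t}\,d\omega$ used above), take the concrete example $F(z)=\frac1{1+z^2}$: its only pole in the upper half plane is $z=i$, so the theorem predicts $f(t)=i\,\operatorname{res}_{z=i}\frac{e^{izt}}{1+z^2}=\frac12 e^{-t}$ for $t>0$. A short SciPy sketch comparing the two sides:

```python
import numpy as np
from scipy.integrate import quad

t = 1.7  # any t > 0

# (1/2π) ∫_{-∞}^{∞} e^{iωt}/(1+ω²) dω : the sine part is odd and drops out,
# and the cosine part is even, so integrate over [0, ∞) and double.
val, _ = quad(lambda w: 1.0 / (1 + w**2), 0, np.inf, weight='cos', wvar=t)
lhs = 2 * val / (2 * np.pi)

rhs = 0.5 * np.exp(-t)   # i * res_{z=i} e^{izt}/(1+z²)
print(lhs, rhs)          # agree to quadrature accuracy
```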
H: the product of a matrix and a permutation matrix
Can a permutation matrix ($P$) be used to change the rank of another matrix ($M$)?
Is there any literature to this effect, or to the contrary?
I've tried a few small examples and the resulting matrix ($M_2$) seems to always have the same rank as the input matrix ($M$)
$M_2 = M P$
AI: Hint: The rank of a matrix is the number of linearly independent row vectors, or of linearly independent column vectors. Now think about what a permutation matrix does to the row or column vectors in the matrix if you multiply it from left or right. |
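For what it's worth, here is a quick NumPy experiment in the spirit of the hint (a permutation matrix is invertible, and multiplying by an invertible matrix cannot change the rank); the sizes and the random seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.integers(-3, 4, size=(4, 6)).astype(float)   # an arbitrary test matrix
P = np.eye(6)[rng.permutation(6)]                     # a random 6x6 permutation matrix

# multiplying by P on the right just reorders the columns of M
print(np.linalg.matrix_rank(M), np.linalg.matrix_rank(M @ P))   # equal
```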
H: Suppose that $T: \mathbb R^2 \to \mathbb R^2$ is the linear transformation that rotates a vector by 90°.
Suppose that $T: \mathbb R^2 \to \mathbb R^2$ is the linear transformation
that rotates a vector by 90°.
(a) What is the null space of T?
(b) Is T one-to-one?
(c) What is the range of T?
(d) Is T onto?
Well, I'm thinking this involves the rotation matrix. However, all of these things require a standard matrix $A$ to evaluate. How do I get a standard matrix from the rotation matrix?
AI: The rotation (around the origin) being linear means geometrically that it takes triangles to triangles (if two sides of a triangle are the vectors $a$ and $b$, then the third is $a+b$), and that it commutes with scaling ($v\mapsto\lambda v$ for any $\lambda$).
(a) The nullspace contains exactly those vectors $v$ which go to $0$ (under the rotation now), that is, $Tv=0$. Can you find all such vectors?
(b) Suppose $Tv=Tw$, that is, by linearity, $T(v-w)=0$. Then, can we conclude that $v=w$?
(c) What vectors on the plane can arise as a rotated vector?
(d) Being onto means that the answer on (c) is 'all' the vectors. |
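If you do want the matrix picture the question asks about: applying $T$ to the standard basis gives $T(e_1)=(0,1)$ and $T(e_2)=(-1,0)$ (assuming the counterclockwise direction; the clockwise case is analogous), so the standard matrix is $A=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$. A small NumPy check of the four answers:

```python
import numpy as np

# standard matrix of the 90° counterclockwise rotation: columns are T(e1), T(e2)
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

print(np.linalg.det(A))        # 1.0: A is invertible, so the null space is {0} and T is one-to-one
b = np.array([3.0, -2.0])
print(np.linalg.solve(A, b))   # the unique preimage of b; every b has one, so the range is all of R^2 and T is onto
```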
H: If $f:I\to\mathbb{R}$ is $1{-}1$ and continuous, then $f$ is strictly monotone on $I$.
Suppose that $I\subseteq\mathbb{R}$ is nonempty. If $f:I\to\mathbb{R}$ is $1{-}1$ and continuous, then $f$ is strictly monotone on $I$.
The answer in the back of the book$^1$, which I found after writing the following proof, says this is false (no explanation). However, I cannot find an error in my proof or think of a viable counterexample. What am I overlooking?
Suppose not, and there exists some $x_1,x_2,x_3\in I$ with $x_1<x_2<x_3$ such that $f(x_1)\leq f(x_2)$ and $f(x_2)\geq f(x_3)$, or $f(x_1)\geq f(x_2)$ and $f(x_2)\leq f(x_3)$. Without loss of generality, we may assume that $f(x_1)\leq f(x_2)$ and $f(x_2)\geq f(x_3)$.
If $f(x_1)=f(x_2)$ or $f(x_2)=f(x_3)$ then $f$ would not be $1{-}1$; thus, $f(x_1)<f(x_2)$ and $f(x_2)>f(x_3)$. We are left with two possible cases.
Case 1. Suppose $f(x_1)<f(x_3)$. Then $f(x_1)<f(x_3)<f(x_2)$. By the Intermediate Value Theorem, there exists point $x_0\in (x_1, x_2)$ such that $f(x_0)=f(x_3)$ with $x_0\neq x_3$ since $x_3\notin (x_1,x_2)$, a contradiction since $f$ is $1{-}1$.
Case 2. Suppose $f(x_1)>f(x_3)$. Then $f(x_3)<f(x_1)<f(x_2)$. By the Intermediate Value Theorem, there exists point $x_0\in (x_2,x_3)$ such that $f(x_0)=f(x_1)$ with $x_0\neq x_1$ since $x_1\notin (x_2,x_3)$, a contradiction since $f$ is $1{-}1$.
If we assume that $f(x_1)\geq f(x_2)$ and $f(x_2)\leq f(x_3)$ we are lead to similar contradictions; thus, $f$ is strictly monotone. $\blacksquare$
1: An Introduction to Analysis by William R. Wade, 4th edition
AI: The trick in this exercise is that $I$ is not assumed to be an interval. Your proof is valid on an interval.
But, for example, let $I:=[0,1]\cup[2,3]$ and let $f(x):=\begin{cases}x, &\text{ if }\,x\le 1,\\ 5-x, &\text{ if }\, x\ge 2.\end{cases}$ |
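A throwaway numerical illustration of this counterexample — sampling points of $I$ and checking that $f$ is injective on the sample yet not monotone:

```python
import numpy as np

def f(x):                      # the counterexample above, defined on I = [0,1] ∪ [2,3]
    return x if x <= 1 else 5 - x

xs = np.concatenate([np.linspace(0, 1, 50), np.linspace(2, 3, 50)])
ys = np.array([f(x) for x in xs])

print(len(np.unique(ys)) == len(ys))            # True: no repeated values, injective on the sample
d = np.diff(ys)
print(bool((d > 0).any() and (d < 0).any()))    # True: f both increases and decreases, so it is not monotone
```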
H: Linearly independent subset under linear 1-to-1 transformation
Suppose that $T:\mathbb{R}^n \to \mathbb{R}^m$ is linear and one-to-one. Let $\{v_1, v_2, \ldots, v_k\}$ be a linearly independent subset of $\mathbb{R}^n$.
Prove that the set $\left\{T(v_1), T(v_2), \ldots, T(v_k)\right\}$ is a linearly
independent subset of $\mathbb{R}^m$.
I don't even understand what this is asking me to do.
AI: This is asking you to write up and verify the definition of 'linearly independent', using what 'one-to-one' and 'linear map' mean.
$\sum_i(\lambda_i\cdot Tv_i)\,=0\ \implies\ T\left(\sum_i \lambda_iv_i\right)=0\ \implies\ \left(\sum_i \lambda_iv_i\right)=0\ \implies\ \forall i:\lambda_i=0$; the first implication uses the linearity of $T$, the second uses that $T$ is one-to-one (so $Tv=0$ forces $v=0$), and the last uses the linear independence of $v_1,\dots,v_k$. |
H: A planar graph problem
The problem is this:
Let $G$ be a connected, planar graph whose number of vertices is a multiple of $8$. $5/8$ of the vertices have degree $3$, $1/4$ have degree $4$, and $1/8$ have degree $5$. All faces of $G$ are triangles or quadrilaterals. Find the number of triangular faces, the number of quadrilateral faces, the number of vertices and the number of edges of $G$. Draw at least one such graph.
It is a problem from an old exam. (I'm trying to prepare for mine.)
I've seen people solve problems like this using a formula that had some kind of a weighted sum in it (and I think the weights were degrees, probably of vertices), and it looked like it was linked to Euler's formula. I remember people saying that they were no-brainers, with an algorithmic procedure for solving them. Unfortunately, I didn't understand the formula then, and I can't find it now, either in my notes or on the internet.
I've been trying to derive something useful from what I know about Euler's theorem, but I'm failing badly.
Could you please help me with this?
AI: From counting vertex-edge incidences, we have
$$\tag1 2e = 3\cdot\frac58v+4\cdot\frac14v+5\cdot \frac18v=\frac72v.$$
From counting face-edge incidences, we have
$$\tag2 2e = 3f_3+4f_4.$$
From Euler's $v+f=e+2$, we have
$$\tag3 v+f_3+f_4=e+2.$$
Multiplying $(3)$ by $7$ and using $(1)$ and $(2)$:
$ 7e+14=7v+7f_3+7f_4=4e+7f_3+7f_4=4e+(3f_3+4f_4)+(4f_3+3f_4)=6e+4f_3+3f_4$
or
$$ e=4f_3+3f_4-14.$$
Using $(2)$ again, we find
$3f_3+4f_4=2e=8f_3+6f_4-28$ or
$$ 5f_3+2f_4=28.$$
As $f_3,f_4\ge0$, this allows only
$(f_3,f_4,e,v)\in\{(0,14,28,16),(2,9,21,12),(4,4,14,8)\}$.
An example with the last tuple is obtained by taking a cube graph and adding two face diagonals with a common endpoint (which thereby acquires degree $5$).
Rereading the original problem statement (as opposed to equation $(1)$ we obtained from it), we see that $v$ must be a multiple of $8$, hence $(2,9,21,12)$ does not work.
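If you like, the case analysis can be mechanized. A throwaway Python sketch that enumerates the nonnegative solutions of $5f_3+2f_4=28$, recovers $e$ and $v$ from $2e=3f_3+4f_4$ and $2e=\frac72 v$, and applies the multiple-of-$8$ condition:

```python
# enumerate 5*f3 + 2*f4 = 28 with f3, f4 >= 0, then e from 2e = 3*f3 + 4*f4
# and v from 2e = (7/2) v, i.e. v = 4e/7; finally require v ≡ 0 (mod 8)
for f3 in range(0, 28 // 5 + 1):
    rem = 28 - 5 * f3
    if rem % 2:
        continue
    f4 = rem // 2
    twice_e = 3 * f3 + 4 * f4
    if twice_e % 2 or (2 * twice_e) % 7:
        continue
    e, v = twice_e // 2, 2 * twice_e // 7
    status = "ok" if v % 8 == 0 else "excluded: v not a multiple of 8"
    print(f3, f4, e, v, status)
```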
Alright, "pics, or it didn't happen": |
H: Is $G $ contained in $ G *_H K $ if $ H\rightarrow G $ and $ H\rightarrow K$ are injections?
Given injections $ H\rightarrow G $ and $ H\rightarrow K $, is the canonical morphism $ G\rightarrow G *_H K $ of G into the free product with amalgamation also injective?
AI: Yes, since every element $x\in G_1 *_H G_2$ has a (unique) canonical form
$x=ab_1\ldots b_r$ where $a\in H, b_i\in G_1$ or $b_i\in G_2$, $b_i$ are coset representatives of $G_j$'s by $H$, and $b_i,b_{i+1}$ ($i\le r-1$) belong to different $G_j$'s. See M.Hall, The Theory of groups, Theorem 17.2.1. |
H: Let $F$ be a finite subset of the natural numbers and consider the sum
$$\sum_{\ell\in F}(-1)^{\ell}\tag{1}$$
Define $F_{\text{even}} = \{n \in F : n\text{ is even}\}$ and $F_{\text{odd}} = \{n \in F : n\text{ is odd}\}$
(a) Suppose that $\#F$ is odd. Show that $\#F_{\text{even}}\ne \#F_{\text{odd}}$
(b) Suppose that $\#F$ is odd. Show that $(1)$ is nonzero
AI: HINT: $(-1)^\ell$ is $1$ when $\ell$ is even and $-1$ when $\ell$ is odd. When you take the sum of all $(-1)^\ell$ with $\ell\in F$, you’re adding one for each even number in $F$ and subtracting $1$ for each odd number. Under what circumstances can the result of that operation possibly be $0$? |
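If you want to convince yourself empirically before writing the argument, here is a tiny brute-force check over random odd-sized subsets (the assertion never fails):

```python
import random

for _ in range(10_000):
    k = 2 * random.randint(0, 4) + 1           # an odd cardinality between 1 and 9
    F = random.sample(range(100), k)
    assert sum((-1) ** n for n in F) != 0      # the sum (1) is never zero when #F is odd
print("no counterexample found")
```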
H: Is there NO solution to this linear system of 3 equations, $3$ unknowns?
I have the following linear system:
$$\begin{align}
&x + y + 2z + 2 = 0 \\
&3x - y + 14z -6 = 0 \\
&x + 2y +5 = 0
\end{align}$$
I immediately noticed that there was no $z$ term in the last equation and thus determined that I will end with $0z = \text{some number}$ and therefore, came to the conclusion that there is no solution to the linear system. To see if I was right, I checked with echelon form and that also suggested that there was no solution.
$$\begin{align}
&L_1: x + y + 2z + 2 = 0 \\
&L_2: 3x - y + 14z -6 = 0 \\
&L_3: x + 2y +5 = 0 \\
\end{align}$$
Then $-3L_1 + L_2 \rightarrow L_2$
$$\begin{align}
&x + y + 2z + 2 = 0 \\
&0x + 3y - 42z - 18 = 0 \\
&x + 2y + 0z + 5 = 0
\end{align}$$
Then $-L_1 + L_3 \rightarrow L_3$
$$\begin{align}
&x + y + 2z + 2 = 0 \\
&0x + 3y - 42z - 18 = 0 \\
&0x + 1y + 0z + 3 = 0
\end{align}$$
Then $-L_2/3 + L_3 \rightarrow L_3$
$$\begin{align}
&x + y + 2z + 2 = 0 \\
&0x + 3y - 42z - 18 = 0 \\
&0x + 0y + 0z + 3 = 0
\end{align}$$
Firstly, is my answer correct? If so, then can I make the same conclusions for other linear systems that are similar?
AI: First of all, you "jumped to" an erroneous conclusion, based on inspection. In a system of equations, one or more variables may fail to be present in one or more equations.
A most extreme example would be the three equations in three unknowns:
$$\begin{align}
&x - 1 = 0 \\
&y - 1= 0 \\
&z - 1 = 0
\end{align}$$
From which we can "read off" the unique solution: $\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 1\end{pmatrix}$
There would be a problem (and no solution would exist) if you had the following (say, reduced) linear system of equations:
$$\begin{align}
&x + y + 2z + 2 = 0 \\
&3x - y + 14z -6 = 0 \\
&0x + 0y + 0z + 5 = 0
\end{align}$$
Note that in the above system, we have the absurd equation $5 = 0$: such a system is called inconsistent, and clearly, no solution exists.
Finally, if you end up with an equation of all "zeros", $0x + 0y + 0z = 0$, then infinitely many solutions exist. Depending on how many such equations appear in your system, the solution set is described by one or more free parameters; the parameter(s) define a "family" of infinitely many solutions, and the remaining equations constrain exactly which solutions are valid.
Now, back to your good idea to "check out" your initial conclusion:
In your last elementary row operation, note that you didn't operate on the $-42 z - 18$ of $L_2$:
We go from:
$$\begin{align}
&x + y + 2z + 2 = 0 \\
&0x + 3y - 42z - 18 = 0 \\
&0x + 1y + 0z + 3 = 0
\end{align}$$
Then applying, correctly, $-L_2/3 + L_3 \rightarrow L_3$
$$\begin{align}
&x + y + 2z + 2 = 0 \\
&0x + 3y - 42z - 18 = 0 \\
&0x + 0y + 14z + 9 = 0
\end{align}$$
Now, you'll see that a unique solution exists: $14 z = -9 \implies z = -\frac{9}{14}$.
To solve the system at this point, you might want to write your equations as follows:
$$\begin{align}
&x + y + 2z = -2 \\
&0x + 3y - 42z =18 \\
&0x + 0y + 14z = -9
\end{align}$$ |
H: Consequence of the Chinese Remainder Theorem
We want to prove the following:
For any $n+1$ distinct real numbers $a_0, a_1, ..., a_n$ and any $n+1$ real numbers $b_0, b_1, ..., b_n$, there exists a polynomial of degree at most $n$ taking the value $b_i$ at $a_i$ for all $i=0, 1, ..., n$.
This was discussed in a class as a follow-on from the Chinese Remainder Theorem, but I can't see how so.
Real numbers have unique prime factorisations, so I thought about considering ideals of the form $p_i \mathbb{Z}$ where $p_i$ is the $i$th prime.
Then let $p_n$ be the greatest prime that features in the prime factorisations of the $a_i$.
Then applying the CRT, but I'm not sure if this approach is misguided.
I'd be interested to see an answer/guidance on how this can be done.
AI: Hint: apply CRT to the system $\ f(x) \equiv b_i \pmod{x-a_i},\,$ using that the $\,x-a_i\,$ are pairwise comaximal, since the $\,a_i\,$ are distinct.
It deserves to be much better known that Lagrange interpolation is a special case of CRT. |
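For the record, the CRT solution can be written out explicitly as the Lagrange basis construction. A short Python sketch with made-up nodes and values (any distinct $a_i$ work); $L_i$ below is the polynomial that is $\equiv 1 \pmod{x-a_i}$ and $\equiv 0 \pmod{x-a_j}$ for $j\ne i$:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# hypothetical data: n+1 = 4 distinct nodes a_i and target values b_i
a = [0.0, 1.0, 2.0, 4.0]
b = [3.0, -1.0, 2.0, 5.0]

f = np.zeros(1)                                   # coefficients, lowest degree first
for i, (ai, bi) in enumerate(zip(a, b)):
    Li = np.ones(1)
    for j, aj in enumerate(a):
        if j != i:
            Li = P.polymul(Li, [-aj, 1.0])        # multiply by (x - a_j)
            Li = Li / (ai - aj)                   # normalize so that L_i(a_i) = 1
    f = P.polyadd(f, bi * Li)

print(np.round(f, 6))                             # a polynomial of degree <= n
print([float(P.polyval(x, f)) for x in a])        # reproduces b at the nodes
```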
H: Example of a martingale and a stopping time with $E(T)<\infty$ but $E(X_T) \neq E(X_0)$
Is there an example of a martingale in discrete time $X_0, X_1, X_2,\ldots$ and a stopping time $T$ so that $E(T) <\infty$ but $E(X_T) \neq E(X_0)$?
With added assumptions on how $X_n$ behaves, you can prove that $E(X_T)=E(X_0)$ For example if $|X_{n+1} - X_{n}| \leq c$ for some constant $c$, then we can show $E(X_T) = E(X_0)$.
AI: How about this: let $Y_0,Y_1,\ldots$ be independent RVs, where
$$
P(Y_n=2^n)=\frac{1}{2},\qquad P(Y_n=-2^n)=\frac{1}{2}.
$$
For $n\geq 0$, let $X_n=\sum_{i=0}^{n}Y_i=X_{n-1}+Y_n$. This is a martingale:
$$
\mathbb{E}[X_n\,\mid\,X_{n-1}]=X_{n-1}+\mathbb{E}[Y_n]=X_{n-1},
$$
since $X_{n-1}$ is measurable with respect to $Y_0,\ldots,Y_{n-1}$, while $Y_n$ is independent of these. Let $T$ be the minimum $n$ such that $X_n>0$, if such an $n$ exists, and $\infty$ otherwise.
Note that $T$ is also the first $n$ such that $Y_n>0$, since $2^n>1+2+\cdots+2^{n-1}$. So,
$$
P(T=n)=\frac{1}{2^{n+1}}\qquad\text{and}\qquad\mathbb{E}[T]=\sum_{n=0}^{\infty}\frac{n}{2^{n+1}}=1.
$$
Since $\mathbb{E}[T]<\infty$, $T$ is almost surely finite, and $X_T$ is almost surely well-defined. But, since $X_n\geq 1$ if $X_n>0$, it is always the case that $X_T\geq 1$ if $T<\infty$. Combining these, $\mathbb{E}[X_T]\geq1$, whereas $\mathbb{E}[X_0]=0$.
(It should be noted here that, as you would hope, our situation doesn't fit any of the various conditions for the optional stopping theorem; our steps aren't almost surely bounded, our stopping time isn't almost surely bounded, etc.) |
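A quick Monte-Carlo illustration of this example — estimating $\mathbb{E}[T]$ and $\mathbb{E}[X_T]$; both should come out close to $1$ (in fact $X_T=1$ always here), while $\mathbb{E}[X_0]=0$:

```python
import random

def run_once():
    # Y_n = ±2^n with probability 1/2 each; stop at the first n with X_n > 0
    n, x = 0, 0
    while True:
        y = 2**n if random.random() < 0.5 else -(2**n)
        x += y
        if x > 0:
            return n, x      # (T, X_T)
        n += 1

N = 100_000
samples = [run_once() for _ in range(N)]
print(sum(t for t, _ in samples) / N)   # ≈ E[T] = 1
print(sum(x for _, x in samples) / N)   # ≈ E[X_T] = 1, while E[X_0] = 0
```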