H: How can I solve this definite integral?
$$ \int (t^2-1)^3 t\, dt$$
$$ \int_a^b (t^2-1)^3 t\, dt$$
if $$ a = -1$$
$$ b=1$$
then
$$ \frac 1 2\int (t^2-1)^3 2t dt$$
$$ \frac 1 2 \frac {u^{3+1}}{3+1}$$
$$ \frac 1 8 u^{4}$$
$$ \frac 1 8 (t^2-1)^{4}$$
$$ \frac 1 8 \left[(1^2-1)^4-((-1)^2-1)^4\right]$$
$$ \frac 1 8 [(0)-(0)]=0$$
or
$$ \int (t^2-1)^3 t dt$$
$$ \int (t^4-2t^2+1)t dt$$
$$ \int t^5dt-2\int t^2dt+\int tdt$$
$$\frac {t^6}6 - \frac{2t^4} 4 + \frac {t^2} 2$$
a=-1, b=1
$$\frac {1^6}6 - \frac{2(1)^4} 4 + \frac {1^2} 2 -\left[\frac {(-1)^6}6 - \frac{2(-1)^4} 4 + \frac {(-1)^2} 2\right]$$
$$\frac 1 6 - \frac 1 6 = 0$$
Is it an error?
I think that the area cannot be zero.
I already tried the change-of-variable method;
with that substitution both new limits become $0$ (i.e. $a$ and $b$ both map to $0$).
AI: The function you are integrating is an odd function. (A function $f(t)$ is odd if $f(-t)=-f(t)$ for all $t$.)
If $f(t)$ is an odd (and say continuous) function, then
$$\int_{-a}^a f(t)\,dt=0$$
for all $a$. So you did not even need to integrate.
Remark: If you find the result troubling for your particular application, it may be that the integral was not set up properly. In particular, it is easy to make a mistake calculating the area between two curves $y=g(x)$ and $y=h(x)$, if in part of the domain $g(x)\gt h(x)$, and in another part of the domain the inequality runs the other way. The integral that gives the area is $\int_a^b |g(x)-h(x)|\,dx$. |
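For a quick symbolic check of the value itself, here is a minimal SymPy sketch (the variable name is mine):

```python
import sympy as sp

t = sp.symbols('t')
f = (t**2 - 1)**3 * t                 # the integrand, an odd function of t
print(sp.integrate(f, (t, -1, 1)))    # 0
```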
H: Prove that the class of well-founded sets is a proper class
Does anyone have an "elementary" proof of the following claim:
If $A$ is a class such that
$$(*)\qquad\forall x(x\subseteq A\to x\in A),$$ then $A$ is a proper class, i.e. $\forall y\ y\ne A$.
The reason that the title refers to well-founded sets is that if we define the standard cumulative hierarchy $V_\alpha=\bigcup_{\delta<\alpha}{\cal P}(V_\delta)$ with ${\rm WF}=\bigcup_{\alpha\in{\sf On}}V_\alpha$, then it can be proven that $A\supseteq{\rm WF}$ if $A$ satisfies $(*)$. Now I know a proof of the claim using this and ${\rm WF}\supseteq{\sf On}$, together with the Burali-Forti paradox and the subset axiom, but that seems a bit contrived. I feel like there should be a simple proof by contradiction in the vein of Russell's paradox or the Burali-Forti paradox. (Don't assume regularity; it is not necessary, but the proof is trivial assuming it, because any set satisfying $(*)$ is an element of itself.)
AI: The condition given allows us to directly imitate Russell’s paradox.
Suppose $A$ were a set. Then take $R := \{x \in A\ |\ x \notin x \}$; $R$ is a set by the Separation axiom, and certainly $R \subseteq A$, so $R \in A$ by hypothesis. But now, just as in Russell’s paradox, $R \in R \Leftrightarrow R \notin R$; contradiction. |
H: Finding root of equation
This question was asked in one of the entrance tests for mathematics in India, and it is:
For the equation $1+2x+x^{3}+4x^{5}=0$, which of the following is true?
(A) It does not possess any real root
(B) It possesses exactly one real root
(C) It possesses exactly two real roots
(D) It possesses exactly three real roots.
AI: HINTS:
$1$. A polynomial with odd degree has at least one real root.
$2$. A strictly monotone increasing function can have at most one real root.
$3$. If a function's derivative is strictly positive then it is monotonically increasing. |
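A quick numerical check of where these hints lead (a small NumPy sketch; the tolerance is my choice):

```python
import numpy as np

# 1 + 2x + x^3 + 4x^5, coefficients listed from the highest degree down
roots = np.roots([4, 0, 1, 0, 2, 1])
real_roots = roots[np.abs(roots.imag) < 1e-9].real
print(len(real_roots), real_roots)   # 1 real root, near -0.43, so option (B)
```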
H: Connected Metric Space Exercise
Let $E$ be a connected metric space, in which the distance is not bounded. Show that in
$E$ every sphere is nonempty.
AI: I’m assuming that by sphere you mean a set of the form
$$S(x,r)=\{y\in E:d(x,y)=r\}\;,$$
the sphere of radius $r$ centred at $x$.
HINT: Let $S(x,r)$ be any sphere. By hypothesis there is a $y\in E$ such that $d(x,y)>r$. Show that if $S(x,r)=\varnothing$, then $E$ is not connected. In particular, $x$ and $y$ are in different components. |
H: Find triangle area with Cavalieri's principle
A triangle is given by $A=(0,0), B=(5,1)$ and $C=(2,4)$. I already know $\lambda^2(\Delta ABC)=9$. Now I want to compute the area by using Cavalieri's principle.
I know how to start when I have to use the principle for volumes, but I don't know how to start here. I thought about intercept theorem, but then I don't know how to find the ratios.
Any hints are appreciated.
AI: If we are allowed to use "The Area of a Triangle as Half a Rectangle", one side of the rectangle has the length $\sqrt{(5-0)^2+(1-0)^2}=\sqrt26$ unit
The equation of the line containing the side is $$\frac{y-0}{x-0}=\frac{1-0}{5-0}\implies x-5y=0$$
The other side contains $(2,4),$ so the length of the perpendicular side will be $$\left|\frac{2-5\cdot4}{\sqrt{5^2+1^2}}\right|=\frac{18}{\sqrt{26}}$$
So, the area of the rectangle will be $\sqrt{26}\cdot \frac{18}{\sqrt{26}}=18$ sq. units, and the triangle's area is half of that, i.e. $9$ sq. units. |
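As a cross-check on the value $9$, a tiny sketch using the shoelace formula rather than Cavalieri's principle:

```python
import numpy as np

A, B, C = np.array([0, 0]), np.array([5, 1]), np.array([2, 4])
# shoelace formula for the triangle ABC
area = abs((B[0] - A[0]) * (C[1] - A[1]) - (B[1] - A[1]) * (C[0] - A[0])) / 2
print(area)   # 9.0 = half of the 18 square-unit rectangle above
```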
H: How to prove that $\frac{1}{\sqrt{1+x}}+\frac{1}{\sqrt{1+y}}\le\frac{2}{\sqrt{1+\sqrt{xy}}}$
Let $x,y>0$ and $xy\le 1$. Show that
$$\dfrac{1}{\sqrt{1+x}}+\dfrac{1}{\sqrt{1+y}}\le\dfrac{2}{\sqrt{1+\sqrt{xy}}}.$$
Can this inequality be proved by a method similar to the following one?
I saw this:
Let $x,y>0, xy\le 1$.
$$\dfrac{1}{1+x}+\dfrac{1}{1+y}\le\dfrac{2}{1+\sqrt{xy}},$$
because we have
\begin{align}
\dfrac{1}{1+x}+\dfrac{1}{1+y}-\dfrac{2}{1+\sqrt{xy}}&=\left(\dfrac{1}{1+x}-\dfrac{1}{1+\sqrt{xy}}\right)+\left(\dfrac{1}{1+y}-\dfrac{1}{1+\sqrt{xy}}\right)\\
&=\dfrac{(\sqrt{x}-\sqrt{y})^2(\sqrt{xy}-1)}{(1+x)(1+y)(1+\sqrt{xy})}\le 0
\end{align}
Thank you everyone. Are there other nice methods?
AI: As you showed,
$$\frac{1}{1+x}+\frac{1}{1+y}\le\frac{2}{1+\sqrt{xy}},$$
so by the Cauchy–Schwarz inequality (in the form $(u+v)^2\le 2(u^2+v^2)$) it follows that
$$\left(\frac{1}{\sqrt{1+x}}+\frac{1}{\sqrt{1+y}}\right)^2\le 2 \left(\frac{1}{1+x}+\frac{1}{1+y}\right)\le \frac{4}{1+\sqrt{xy}},$$
and taking square roots gives the claim. |
H: Derivative of $-e^y = 0$?
I stumbled upon this on wolfram alpha and still wonder why $-e^x$ equals $0$ (third step).
AI: It is assuming that $x$ and $y$ are independent variables, and since you're differentiating with respect to $y$, that term vanishes because it doesn't contain any $y$'s. |
H: Not able to solve $({\frac{1}{2}})^p + ({\frac{1}{3}})^p + ({\frac{1}{7}})^p - 1 = 0.$
I'm not able to solve $$({\frac{1}{2}})^p + ({\frac{1}{3}})^p + ({\frac{1}{7}})^p - 1 = 0.$$
If you put values of $p$ (like $\frac{1}{2}$ or 2) back in the equation it doesn't satisfy! So please check your values also.
What I found is:
If $p = 1$ then the left-hand side becomes $ - \frac{1}{42}$.
That means if $p >1$ then the values of all the fractions will decrease further and the left-hand side will become even more negative.
So surely $p < 1$. Similarly, putting $p=0$, the left-hand side becomes $1+1+1-1 = 2 > 0$.
Hence $ 0 < p < 1. $
Now I'm wondering if there are generic techniques to evaluate $p$? [i.e. without a guessing technique like "Let's assume $p =\frac14(1\pm\sqrt{13})$"]
AI: Not a solution. Just an existence proof:
Firstly, $f(p) =\left(\frac{1}{2}\right)^p + \left(\frac{1}{3}\right)^p + \left(\frac{1}{7}\right)^p -1$ is monotone decreasing in $p$. Next, it is continuous in $p$.
Now we have $f(0) = 2$ and $f(1) = -1/42$. By the Intermediate value theorem, there is a $0 < \hat{p} < 1$ such that $f(\hat{p}) = 0$. Mvcouwen has verified this via Wolfram Alpha. Moreover as it is monotone decreasing, there are no more solutions in $\mathbb{R}$. |
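For a concrete value of $\hat{p}$, a quick numerical root-find (a sketch assuming SciPy is available) agrees with this bracketing:

```python
from scipy.optimize import brentq

f = lambda p: (1/2)**p + (1/3)**p + (1/7)**p - 1
p_hat = brentq(f, 0, 1)   # the root is bracketed since f(0) = 2 > 0 > f(1) = -1/42
print(p_hat)              # approximately 0.976; no closed form is apparent
```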
H: Why do we restrict the range of the inverse trig functions?
I understand why we restrict the domain, but why do we restrict the range? Why do we necessarily care so much for the inverse trig relations to be functions? Thanks!
AI: Basically since it is easier and simpler to work with functions than it is to work with relations. What does it even mean for a relation to be continuous, for instance, let alone differentiable? It's thus convenient to consider a function rather than a relation if possible. In the case of the trigonometric functions it does very little harm to restrict them to become functions. |
H: Why is $e^x$ the only nontrivial function with a repeating derivative?
Why is $e^x$ the only nontrivial function with a repeating derivative, i.e. is its own derivative?
It says so in the Wikipedia article about $e$. Is there a proof of this that I (a calculus AB student) could understand? Thanks!
AI: Technically $y=0$ is also a function with a repeating derivative but in answer to your question, what you're essentially doing is solving the differential equation $\frac{\mathrm{d}y}{\mathrm{d}x}=y$ to which the only non-trivial solution is $y=A e^x$. (The differential equation is not hard to solve) |
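For completeness, one standard way to solve that differential equation: if $\frac{\mathrm{d}y}{\mathrm{d}x}=y$, then $\frac{\mathrm{d}}{\mathrm{d}x}\left(ye^{-x}\right)=y'e^{-x}-ye^{-x}=0$, so $ye^{-x}$ is a constant $A$, and hence $y=Ae^x$; the trivial solution $y=0$ is just the case $A=0$.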
H: What is the probability of a multidimensional rectangle?
Assume given a probability measure $P$ on $(\mathbb{R}^p,\mathcal{B}_p)$, where $\mathcal{B}_p$ denotes the $p$-dimensional Borel-$\sigma$-algebra. Let $F$ denote the $p$-dimensional CDF for $P$, given by $F:\mathbb{R}^p\to[0,1]$ with
$$
F(x_1,\ldots,x_p) = P((-\infty,x_1]\times\cdots\times(-\infty,x_p])
$$
I would like to know how to express a probability of the form
$$
P((x_1,y_1]\times\cdots\times(x_p,y_p])
$$
in terms of $F$. For example, in two dimensions, it holds that
$$
P((x_1,y_1]\times(x_2,y_2])
=F(y_1,y_2) - F(x_1,y_2) - F(y_1,x_2) + F(x_1,x_2).
$$
I assume that the general formula is somehow related to the inclusion-exclusion principle, and I assume that the general formula is quite well-known as well, but it's not immediately obvious to me what the correct answer is...
AI: Let $x$ and $y$ be two elements of $\Bbb R^p$ such that $x_i< y_i$ for all $i$. If $I$ is a subset of $[p]$ then $v_I$ is the element of $\Bbb R^p$ whose coordinates with index $i\in I$ are $x_i$ and those with index $j\in I^c$ are $y_j$. For example, with $p=3$, $v_{\{1,3\}}=(x_1,y_2,x_3)$. Then
$$\mathbb P\left(\prod_{j=1}^p(x_j,y_j]\right)=\sum_{I\subset [p]}(-1)^{|I|}F(v_I).$$
This can be proved by induction on $p$. |
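A quick numerical sanity check of this formula (a sketch for $p=3$ with the uniform distribution on the unit cube, where $F(z)=z_1z_2z_3$ for $z\in[0,1]^3$; the specific numbers are mine):

```python
import itertools
import numpy as np

p = 3
x = np.array([0.2, 0.1, 0.3])
y = np.array([0.7, 0.9, 0.6])

def F(z):
    # CDF of the uniform distribution on the unit cube
    return np.prod(np.clip(z, 0.0, 1.0))

# direct probability of the box (x, y]: the product of the side lengths
direct = np.prod(y - x)

# inclusion-exclusion sum over all subsets I of {0, ..., p-1}
total = 0.0
for r in range(p + 1):
    for I in itertools.combinations(range(p), r):
        v = y.copy()
        v[list(I)] = x[list(I)]
        total += (-1) ** len(I) * F(v)

print(direct, total)   # both print 0.5*0.8*0.3 = 0.12
```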
H: Proving something is not differentiable
I am looking for confirmation so that I can be sure I understand what is being asked here. I need to show that the following function $f(x,y)$ is not differentiable at $(0,0)$ but that $g(x,y)=yf(x,y)$ is:
$$
f(x,y) = \left \{
\begin{array}{ll}
{\frac {x^2 y} {x^2 + y^2} } & \mbox{ if $(x,y)\neq (0,0)$ } \\
0 & \mbox{if $x=y=0$}
\end{array}
\right.
$$
If the differential exists at $(x,y)$, then there is a linear mapping $f'(x,y):\mathbb{R}^2 \rightarrow \mathbb{R}$ such that
$$f(x+tu,y+tv) - f(x,y) = f'(x,y)(tu,tv) + r(tu,tv)$$
where $r(tu,tv)/t \rightarrow 0$ as $t \rightarrow 0$. At $(x,y)=(0,0)$, this implies that
$$
\frac{tu^2 v}{u^2 + v^2} = f(tu,tv) = f'(0,0)(tu,tv) + r(tu,tv)
$$
since $f$ is nonlinear, $f'(0,0)$ cannot exist.
However, following the same line of reasoning for $g(x,y)$ gives:
$$
\frac{t^2u^2 v^2}{u^2 + v^2} = g'(0,0)(tu,tv) + r(tu,tv)
$$
Thus, as $t \rightarrow 0$, we have that $g'(0,0)(tu,tv) \rightarrow 0$, which means that $g'(0,0)$ is simply the linear operator defined by $g'(0,0)(x,y)=0$.
AI: Since $f = 0$ on both coordinate axes, the partial derivatives of $f$ (if they exist) must be $0$. Hence, if $f$ is differentiable at $(0,0)$ it would have to be the case that
$$
f(x,y) - f(0,0) = \frac{x^2y}{x^2+y^2} = r(x,y),
$$
where $r(x,y)/\sqrt{x^2+y^2} \to 0$ as $(x,y) \to (0,0)$. Analyzing this limit, by passing to polar coordinates, we get
$$
\frac{r(x,y)}{\sqrt{x^2+y^2}} = \frac{x^2y}{(x^2+y^2)^{3/2}} =
\frac{r^3 \cos^2\theta \sin\theta}{r^3} = \cos^2\theta\sin\theta
$$
and it's clear that this does not tend to $0$ (for most values of $\theta$) as $r \to 0$.
I'll leave the corresponding analysis of $yf(x,y)$ to you. |
H: Prove inequality: $74 - 37\sqrt 2 \le a+b+6(c+d) \le 74 +37\sqrt 2$ without calculus
Let $a,b,c,d \in \mathbb R$ be such that $a^2 + b^2 + 1 = 2(a+b)$ and
$c^2 + d^2 + 6^2 = 12(c+d)$. Prove the following inequality without calculus (or
Lagrange multipliers): $$74 - 37\sqrt 2 \le a+b+6(c+d) \le 74
+37\sqrt 2$$
The original problem is find max and min of $a+b+6(c+d)$ where ...
Using some calculus, I found it, but could you solve it without calculus.
AI: Hint: You can split this problem to find max and min of $a+b$ and $c+d$. |
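One way to carry out this hint (a sketch filling in the details it leaves out): the first condition says $(a-1)^2+(b-1)^2=1$, so by the Cauchy–Schwarz inequality $|(a-1)+(b-1)|\le\sqrt 2$, i.e. $2-\sqrt 2\le a+b\le 2+\sqrt 2$. Likewise the second condition says $(c-6)^2+(d-6)^2=36$, so $12-6\sqrt 2\le c+d\le 12+6\sqrt 2$. The two constraints share no variables, so the bounds simply add: $74-37\sqrt 2\le a+b+6(c+d)\le 74+37\sqrt 2$, with both extremes attained at $a=b$, $c=d$.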
H: How to show orthonormal basis?
Let $A$ be an $n\times n$ matrix with complex entries.
Prove that $AA^*=I$ iff the rows of $A$ form an orthonormal basis of $\mathbb C^n$.
I know that $(AA^*)_{ij}=\langle a_i, a_j \rangle= \delta_{ij}$, so the rows are orthonormal.
But why does it mean that they are basis?
AI: For the rows to be a basis they must be linearly independent, but in this case orthonormality implies linear independence; and $n$ linearly independent vectors in the $n$-dimensional space $\mathbb{C}^n$ automatically span it, so they form a basis. |
H: What is wrong with treating $\dfrac {dy}{dx}$ as a fraction?
If you think about the limit definition of the derivative, $dy$ represents $$\lim_{h\rightarrow 0}\dfrac {f(x+h)-f(x)}{h}$$, and $dx$ represents
$$\lim_{h\rightarrow 0}h$$
. So you have $\dfrac{\text{a number}}{\text{another number}}=\text{a fraction}$, so why can't you treat it as one? Thanks! (By the way, if possible please keep the answers at a Calc AB level.)
AI: The derivative, when it exists, is a real number (I'm restricting here to real values functions only for simplicity). Not every real number is a fraction (i.e., $\pi$ is not a fraction), but every real number is trivially a quotient of two real numbers (namely, $x=\frac{x}{1}$). So, in which sense is the derivative a fraction? answer: it's not. And now, in which sense is the derivative a quotient to two numbers? Ahhh, let's try to answer that then: By definition $f'(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$. Well, that is not a quotient of two numbers, but rather it is a limit. A limit, when it exists, is a number. This particular limit is a limit of quotients of a particular form (still, not of fractions in general, but quotients of real numbers).
The meaning of the derivative $f'(x)$ is the instantaneous rate of change of the value of $f$ at the point $x$. It is defined as a certain limit. If you now intuitively think of $h$ as an infinitesimal number (warning: infinitesimals do not exist in $\mathbb R$, but they exist in certain extensions of the reals) then you can consider the single expression $\frac{f(x+h)-f(x)}{h}$. In systems where infinitesimals really do exist one can show that this single expression, when the derivative exists, is actually infinitesimally close to the actual derivative $f'(x)$. That is, when $h\ne 0$ is infinitesimal, $f'(x)-\frac{f(x+h)-f(x)}{h}$ is itself infinitesimal. One can then compute with this expression as if it were the derivative (with some care). This can be done informally, and to some extent this is how the creators of early calculus (prior to Cauchy) argued, or it can be done rigorously using any one of a number of different techniques to introduce infinitesimals into calculus. However, getting infinitesimals into the picture comes with a price. There are logical/set-theoretical issues with such models rendering all of them not very explicit. |
H: A question on the symbolic powers of a prime ideal
In I. Swanson's notes about primary decomposition the author wrote:
The smallest $P$-primary ideal containing $P^n$ is called the $n$th symbolic power of $P$, where $P$ here is a prime ideal of a ring $R$.
She in fact gave this result as a consequence of a theorem on primary decomposition, I found that it's quite complicated. In others textbooks the symbolic power of a prime ideal $P$ is defined as the contraction of $P^{n}R_{P}$ to $R$. So I tried to prove the converse, let $P^{(n)}$ be the contraction of $P^{n}R_{P}$ to $R$ and prove that it is primary. But I have got no idea.
Could you please give me some hints? Thanks.
AI: Lemma. Let $I\lhd R$. If the radical $r(I)$ is maximal, then $I$ is primary.
Proof. Let $M=r(I)$. Then $M/I=\mathcal{N}(R/I)$ the nilradical, and $M/I$ is prime so $R/I$ has a unique prime ideal, namely $M/I$. So every non-nilpotent element of $R/I$ is a unit, and hence is not a zero-divisor. QED.
In particular, if $M$ is maximal, then $M^n$ is $M$-primary since $r(M^n)=M$. So since $PR_P$ is the unique maximal ideal of $R_P$, we know that $P^nR_P$ is $PR_P$-primary.
Lemma. The contraction of a primary ideal is primary.
Proof. Let $f:A\rightarrow B$ be a ring homomorphism, $Q\lhd B$ primary ideal. Then $1\notin Q^c$ the contraction of $Q$ to $A$ since $f(1)=1\notin Q$, so $A/Q^c\ne 0$. Also, $f$ induces an injective map $A/Q^c\hookrightarrow B/Q$, so $A/Q^c$ must also have the property that zero-divisors are nilpotent since $B/Q$ does. QED.
So the contraction of $P^nR_P$ to $R$ is primary.
The reason why this is the smallest $P$-primary ideal containing $P^n$ is because
$$
P^{(n)}=P^nR_P\cap R=\lbrace r\in R\mid sr\in P^n\text{ for some }s\in R\setminus P\rbrace
$$
Now clearly $P^n\subset P^{(n)}$ since $1\in R\setminus P$. So let $Q$ be another $P$-primary ideal with $P^n\subset Q$, and let $r\in P^{(n)}$. Then $sr\in P^n$ for some $s\in R\setminus P$. Since $P^n\subset Q$, $sr\in Q$, and either $s\in r(Q)=P$, or $r\in Q$. Clearly $s\notin r(Q)=P$, since we chose $s\in R\setminus P$, so we must have $r\in Q$, and hence $P^{(n)}\subset Q$. |
H: What are the possible ranges of a metric
Given a metric $d$ on a space $X$, what can we say about $d(X\times X)$? What possible range can $d$ have?
More precisely, consider the set $D=\{ A \subset [0,\infty) | A = d(X\times X), \textrm{d is a metric on $X$} \} $ What properties does $D$ have?
For instance, all finite sets containing $0$ are in $D$: If $A=\{0,a_1,a_2...a_n \}$, where $a_i<a_j$ if $i<j$, take $X=\{ 0,1... n \}$ and $d(i,j)= a_{max(i,j)} $ (for $i\neq j$). Any well-ordered countable set is also in $D$, by a similar construction.
AI: Your construction works much more generally:
First note that $\varnothing\in D$ by taking $X=\varnothing$.
Next, note that if $A\in D$ is not empty, then $0\in A$.
Now let $A\subseteq [0,\infty)$ such that $0\in A$ and let $d\colon A\times A\to A$ be defined by
$$d(x,y) = \begin{cases} \max(x,y) & x\neq y \\ 0 & x=y \end{cases}.$$
It is not hard to see that $d$ is a metric and the range of $d$ is $A$.
Hence $D = \{A\subseteq [0,\infty)|0\in A\mbox{ or }A=\varnothing\}$. |
H: Prove that the preimage of a prime ideal is also prime.
Let $f: R \rightarrow S$ be a ring homomorphism, with $R$ and $S$ commutative and $f(1)=1$. If $P$ is a prime ideal of $S$, show that the preimage $f^{-1}(P)$ is a prime ideal of $R$.
Define $g: S \rightarrow S/P$ to be the natural projection, whose kernel is $P$. Let $h = g \circ f: R \rightarrow S/P$. Since $h$ is a ring homomorphism, its kernel is an ideal of $R$. Also, from the first isomorphism theorem, we know that $R/\ker(h)$ is isomorphic to the image of $h$, a subring of $S/P$ containing $1$. Since $P$ is a prime ideal of $S$, we know that $S/P$ is an integral domain, and so is any subring of it containing $1$. Since $R/\ker(h)$ is isomorphic to such a subring, it must also be an integral domain, which implies that the kernel of $h$ (which is the preimage of $P$) is a prime ideal of $R$.
Do you think my answer is correct? The reason why I was a bit skeptical of my answer is because I did not use the fact that $R$ and $S$ are commutative. So I'm wondering if I missed something $\dots$
Thank you in advance.
AI: What you've done is correct. The definition for prime ideals in commutative rings relies on commutativity.
For a non-commutative ring $R$, we have a different definition, and say that $P$ is a prime ideal if whenever the product of two ideals $IJ\subset P$, then either $I\subset P$ or $J\subset P$. |
H: A question about an infinite sum
Let $\{ a_n \}, \{ b_n \}$ be a sequence of nonnegative real numbers. Assume that $\sum_{n=1}^\infty b_n $ converges(let $\sum_{n=1}^\infty b_n = C)$ and assume that $$ a_n \le a_1 \;(\forall n \in \mathbb N).$$ Then can I conclude that $$\sum_{n=1}^\infty a_n b_n \leq a_1 \sum_{n=1}^\infty b_n = a_1 C$$
AI: Yes, as long as all your numbers are nonnegative, there is no problem with this reasoning.
You can write $$\sum_{n=1}^\infty a_nb_n\leq\sum_{n=1}^\infty a_1b_n= a_1\sum_{n=1}^\infty b_n=a_1 C$$ |
H: Determine if $\sum\limits_{n=0}^{\infty}\frac{1}{2^{\sqrt{n}}}$ converges
Apparently it can be proven with a comparison, but I've tried to compare it to $\frac{1}{n^{p}}$ with no results.
I've also tried comparing $\sqrt{n}$ with $\ln{n}$ but $\frac{1}{2^{\ln n}}$ diverges so that doesn't give me anything useful.
AI: For the comparison, it is enough to show that after a while $2^{\sqrt{n}}\gt n^2$.
Equivalently, we want to prove that after a while $(\ln 2)\sqrt{n}\gt 2\ln n$. There are various ways to do this. For example by using L'Hospital's Rule we can show that
$$\lim_{x\to \infty} \frac{2\ln x}{(\ln 2)\sqrt{x}}=0.$$ |
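To see how long "after a while" is in this case, here is a quick numerical check (a small sketch):

```python
import math

# 2**sqrt(n) > n**2  is equivalent to  log(2)*sqrt(n) > 2*log(n)
for n in range(2, 1000):
    if math.log(2) * math.sqrt(n) > 2 * math.log(n):
        print(n)   # 257; at n = 256 the two sides are exactly equal (2**16 = 256**2)
        break
```

So $\frac{1}{2^{\sqrt n}}\le\frac{1}{n^2}$ holds for all $n\ge 256$, and the comparison with $\sum\frac1{n^2}$ applies from there on.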
H: Given $a_1,a_2,...,a_n>0$ where $n\in\mathbb N$ and $a_1+a_2+...+a_n=n$. Is this true? $a_1a_2+a_2a_3+...+a_na_1\leq n$
Given $a_1,a_2,...,a_n>0$ where $n\in\mathbb N$ and $a_1+a_2+...+a_n=n$. Is this true?
$$a_1a_2+a_2a_3+...+a_na_1\leq n$$
By observing:
When $n=1$, this is trivial;
When $n=2$, $ab\leq(\frac {a+b} 2)^2=1\leq2$;
When $n=3$, $ab+bc+ca\leq(\frac {a+b} 2)^2+(\frac {b+c} 2)^2+(\frac {c+a} 2)^2=\frac 1 2(a+b+c)^2-\frac 1 2(ab+bc+ca)$
$\Rightarrow ab+bc+ca\leq3$;
When $n=4$, $ab+bc+cd+da=(a+c)(b+d)\leq(\frac{a+b+c+d} 2)^2=4$.
But I can't find a more general way to prove these at once. Please help.
Thank you.
AI: The inequality does not hold in general. Indeed, take $a_1=a_2=\frac{n}{2}-1$ and
$a_3=\cdots=a_n=\frac{2}{n-2}.$ Then the left-hand side of our inequality is greater than $n^2/4-n+1$, which can be made greater than $n.$ |
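A concrete instance of this counterexample (a quick sketch with $n=20$):

```python
n = 20
a = [n / 2 - 1, n / 2 - 1] + [2 / (n - 2)] * (n - 2)
print(sum(a))                                          # 20.0, so the constraint holds
cyclic = sum(a[i] * a[(i + 1) % n] for i in range(n))
print(cyclic)                                          # about 83.2 > 20
```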
H: How to prove that some set has the quotient topology for a function?
Let $f: X \to Y$ be a continuous function between topological spaces. Let $S$ be a set and $g: Y \to S$ a function. Assume that $g \circ f$ is surjective, and that $S$ has the quotient topology for $g \circ f$. Assume that $g$ is continuous. How does one show that $S$ has the quotient topology for $g$?
The definition of the quotient topology for $g$ on $S$ is (I think): $T = \{ U \subset S | g^{-1}(U) \space \text{open in} Y \} $.
This was a question from a previous exam for our topology course. I don't really know how to prove this.
AI: Assume that $h:S\to T$ is a set map between the spaces $S$ and $T$ such that $h\circ g$ is continuous. Then $h\circ g\circ f$ is also continuous, and since $S$ has the quotient topology with respect to $g\circ f$, this implies that $h$ is continuous. Hence $S$ has the universal property characterizing the quotient topology. |
H: Calculate the limit of $\lim\limits_{x\to 1^-}\left(\frac{1}{1-x^2} -\frac{1}{1-x^3}\right)$
I want to calculate this limit and wonder what is the best way to calculate it.
$$\lim\limits_{x\to 1^-}\left(\frac{1}{1-x^2} -\frac{1}{1-x^3}\right)$$
I tried to do the following thing
$$\lim\limits_{x\to 1^-}\left(\frac{1-x^3-(1-x^2)}{(1-x^2)(1-x^3)}\right)$$
What I get is $0$ but the answer is $+\infty$. What am I doing wrong?
Thanks!
AI: Starting from where you had stopped.
$$\lim_{x \rightarrow 1^{-}}\frac{x^{2}-x^{3}}{(1-x^{2})(1-x^{3})}=\lim_{x \rightarrow 1^{-}}\frac{x^{2}(1-x)}{(1-x^{2})(1-x^{3})}=\lim_{x \rightarrow 1^{-}}\frac{x^{2}}{(1+x)(1-x^{3})}=\frac{1}{2\cdot 0^{+}}=+\infty$$ |
H: Evaluate $ \int^4_1 e^ \sqrt {x}dx $
Evaluate $ \int^4_1 e^ \sqrt {x}dx $
solution:-
Here $1<x<4$
$1<\sqrt x<2$
$e<e^ \sqrt {x}<e^ 2$
$\int^4_1 e\, dx<\int^4_1 e^{\sqrt {x}}\,dx<\int^4_1 e^{2}\,dx$
$3e <\int^4_1 e^ \sqrt {x}dx<3 e^ 2 $
But in this objective question
Options are
a)$e $
b)$e^2 $
c)$2e $
d)$2e^2 $
AI: Hint: Make the substitution $x=t^2$ to get the integral $2\int_1^2 te^t \mathrm{d}t$. |
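Carrying the hinted substitution through (a quick SymPy check):

```python
import sympy as sp

t = sp.symbols('t')
# after x = t**2 the integral becomes 2*t*exp(t) dt over [1, 2]
print(sp.simplify(sp.integrate(2 * t * sp.exp(t), (t, 1, 2))))   # 2*exp(2), i.e. option (d)
```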
H: What is the easiest/most efficient way to find the taylor series expansion of $e^{1-cos(x)}$ up to and including degrees of four?
So I have $$e^{1-cos(x)}$$ and want to find the taylor series expansions up to and including the fourth degree in the form of $$c_{0} \frac{x^0}{0!} + c_{1} \frac{x^1}{1!} + c_{2} \frac{x^2}{2!} + c_3 \frac{x^3}{3!} + c_{4} \frac{x^4}{4!} + HOT$$ or just $$1 + c_{1} x^1 + c_{2} x^2 + c_3 x^3 + c_{4} x^4 +HOT$$ at that.
I know of two ways to do this, by substitution in the series summation or by differentiating the original four times. When substituting I am left with cosines in my result, which is not sufficient, but differentiating this formula will take a $very$ long time. Is there a pattern in differentiating this function that I'm missing, or is there a way to get rid of the $\cos$ when substituting?
I did eventually go for differentiating everything and it took me forever, much longer than I think was intended for this assignment; that is why I think I am missing the point. The derivatives to the fourth degree fill an entire page in my notebook so I won't type them over here, but my result was $$1+\frac{x^2}{2}+\frac{x^4}{12}+HOT$$ I have yet to find out if I was correct, but nevertheless I think I should learn an insight here about how to get this answer an easier way. Hopefully someone knows what I'm missing here. Thanks in advance!
AI: We have
$$1-\cos x=\frac{x^2}{2!}-\frac{x^{4}}{4!}+ \text{higher order terms}.$$ Substitute in the power series expansion of $e^t$. The only relevant part will be $1+t+\frac{t^2}{2}$. |
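To confirm the result without differentiating four times by hand, a quick SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.exp(1 - sp.cos(x)).series(x, 0, 5))   # 1 + x**2/2 + x**4/12 + O(x**5)
```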
H: Do the following functions exist?
While practicing for my topology exam, I stumbled upon the following question from a previous exam:
Give a proof of your answer:
(a) Is there a continuous surjective map from $\mathbb{C}$ to $\mathbb{C} - \{0 \} $?
(b) Is there a continuous surjective map from $\mathbb{R}$ to $\mathbb{R} - \{0 \} $?
(c) Is there a continuous surjective map from $\mathbb{R} - \{ 0 \}$ to $\mathbb{R} $?
(d) Is there a continuous surjective map from $\mathbb{S^1}$ to $\mathbb{S^1} - \{1 \} $?
I think the answer to (a) and (b) is "Yes, there is. Those are the identity functions $id_{\mathbb{C} }$ and $id_{\mathbb{R} }$". Is this true? And what about (c) and (d)? And how do you prove it?
Edit: I now know that my proposed functions are no good (they aren't even functions!). So I guess the questions remain: do the continuous surjective maps exist?
AI: For (a) and (c), we can do it, with (for example) the maps $z\mapsto e^z$ and $x\mapsto x-\frac1x.$ For (b), it is impossible, since the continuous image of any real-valued function on $\Bbb R$ is a singleton, interval, ray, or all of $\Bbb R$--more generally, continuous images of (non-empty) connected sets are again (non-empty) connected. For (d) it is impossible, since a continuous image of a compact set is compact, but $\Bbb S^1-\{1\}$ is not compact. |
H: Consistency of definition of weak derivative with classical derivative
I know the definitions of both weak and classical derivative. But I am trying to see the classical derivative as a weak derivative. When we have
$\int f' \varphi = -\int f\varphi'$ for all $\varphi\in C^\infty$ with compact support. Is this definition consistent with the one for classical derivative, I mean, if $f\in C^1$ how can we obtain $f'$(classical) from the def. above?
AI: Integration by parts:
$$
\int_{-\infty}^\infty f\phi'\,dx = \lim_{\substack{a\to-\infty\\b\to\infty}} \left[ f\phi \right]_a^b - \int_{-\infty}^\infty f'\phi\,dx = - \int_{-\infty}^\infty f'\phi\,dx
$$
since $\phi(a) = \phi(b) = 0$ if $|a|$ and $|b|$ are sufficiently large. (Remember that $\phi$ has compact support.) |
H: Is it true that Quadratic residue was published and discovered before Legendre symbol and Euler's Criteria?
So is it true that quadratic residues were published and discovered before the Legendre symbol and Euler's criterion?
Quadratic residues came in 1801 from Gauss (1).
Can you put these concepts in chronological order, and say by whom these concepts were first introduced or developed in paper form?
AI: I'd say yes. My reason being that the Legendre symbol simply defines if a number $\mod p$ is a quadratic residue to $p$ or not. There is no way to define a number being a quadratic residue $\mod p$ if you don't even know what a quadratic residue is.
Regarding the Euler criterion, that's a little bit trickier. That too would've come after the discovery of quadratic residues, but I'm not sure if that was before or after the Legendre symbol. |
H: question about Ito's formula
I'm currently learning about the Ito's lemma / formula
In my textbook, a direct application of the formula is to compute quantities like that :
(W is a Brownian motion)
While trying to prove these results I am finding that these computations are really not direct.
Am I missing something and they're actually trivial with Ito's formula ?
For instance, for the first one I couldn't directly compute the integral of $W_t\,dW_t$; I had to apply Ito's formula to $W_t^2$.
For the second one I managed to get the result but it was also very convoluted.
So my question : using Ito's lemma, what's the direct approach to compute these formulas?
thanks
AI: Since by Ito's lemma you have for $\phi(t,x) \in \mathcal C ^{1,2}$
$$ \phi(t, X_t) = \phi(0,X_0) + \int _0 ^t (\partial_t +b\partial_x +\frac{\sigma^2}{2}\partial^2_{xx})\phi(s,X_s) ~ ds + \int_0^t\sigma\,\partial_x\phi(s,X_s) ~dW_s$$
if $X$ is a Ito's diffusion
$$ X_t = X_0 + \int _0 ^t b~ ds + \int_0^t\sigma ~dW_s,\ t \geq 0$$
you must search for the simplest $\phi$ such that $\partial_x \phi =x $, with $X =W$ (so $b=0$ and $\sigma =1$), for the first integral.
Indeed, we have
$$ \int_0^T W_s ~dW_s=\frac{1}{2}(W^2_T-W^2_0 -T).$$
For the second formula a direct application of Ito's lemma gives the result. Indeed, if you consider $\phi(t,x) = \exp(\lambda x-\frac{\lambda^2}{2}t) $, then since $\partial_t\phi(t,x)= -\frac{\lambda^2}{2}\phi(t,x),\ \partial_x\phi(t,x)=\lambda \phi(t,x)$ and $\partial_{xx}^2\phi(t,x)=\lambda^2\phi(t,x)$, we have
\begin{align}\exp(\lambda W_t-\frac{\lambda^2}{2}t)=\phi(t,W_t)&=\phi(0,W_0)+ \int _0 ^t (\partial_t +\frac{1}{2}\partial^2_{xx})\phi(s,W_s) ~ ds + \int_0^t\partial_x\phi(s,W_s) ~dW_s \\&=1+\lambda\int_0^t \exp(\lambda W_s-\frac{\lambda^2}{2}s) ~dW_s\end{align}
Note that $(\phi(t,W_t))_{t \geq0}=(\exp(\lambda W_t-\frac{\lambda^2}{2}t))_{t \geq0}$ satisfies the SDE
$$ L_t = 1 +\lambda \int_0^t L_s dW_s ,\ t \geq 0$$
and not $$ L_t = 1 + \int_0^t L_s dW_s ,\ t \geq 0$$ as you said in the question (probably a typo). Also, since $\mathbb E \left\{ \exp(2\lambda W_t-\lambda^2 t)\right\}< +\infty$ it is a (positive) exponential martingale. |
H: Composition of a convex function
If $f:[a,b]\rightarrow R$ is convex function and $f'(x)\geq 0$ for all $x\in [a,b]$ and $g:U\rightarrow [a,b]$ is convex function, how to show that $f(g(u)), u\in U$ is convex function?
AI: Note that $f$ is increasing, since $f'(x)\geq 0$. Let $x,y\in U, \lambda\in(0,1)$. From the convexity of $g$, the monotonicity of $f$, and then the convexity of $f$ (Jensen's inequality), it follows that
$$f(g(\lambda x + (1-\lambda)y))\leq f(\lambda g(x) + (1-\lambda)g(y))\leq
\lambda f(g(x)) + (1-\lambda)f(g(y)),$$
which establishes the convexity of $f(g(u))$. |
H: Calculate a $\infty^0$ limit using `de l'hopital` rule.
I have to calculate the following limit:
$$\lim_{x\to \infty} 2x^{1/\ln x}$$
So I tried to start:
$$\lim_{x\to \infty} 2x^{1/ \ln x} = \infty^0 $$
From here on I figured that I have to use l'Hôpital's rule, but I don't really know how, and I need help.
If the math representation isn't clear, the exercise is:
2x^(1/ln x) as x $\to \infty$
thanks in advance.
AI: This is almost trivial and doesn't require l'Hospital:
$$2x^{\frac1{\log x}}=2e^{\frac1{\log x}\log x}$$
$$\lim_{x\to\infty}\frac{\log x}{\log x}=1$$
and thus
$$\lim_{x\to\infty}2x^{\frac1{\log x}}=2e$$
Perhaps your function is mistyped? |
H: A function $f$ is increasing on the closed interval $[a,b]$
Let the function $f$ be increasing on the closed interval $[a,b]$. If $a \le f(a)$ and $f(b)\le b$, prove that:
$\exists x_0\in [a,b]$, such that $f(x_0)=x_0$.
Thanks for your help.
Note that $f$ need not be continuous.
AI: Assume otherwise. Especially, $f(a)>a, f(b)<b$. Let $u_0=a,v_0=b$.
Given an interval $[u_n,v_n]\subseteq [a,b]$ with $f(u_n)>u_n, f(v_n)<v_n$ we can find a subinterval $[u_{n+1},v_{n+1}]\subset [u_n,v_n]$ with
$$f(u_{n+1})>u_{n+1}, f(v_{n+1})<v_{n+1}\ \text{ and }\ |v_{n+1}-u_{n+1}|=\frac12|u_n-v_n|.$$
To do so let $w_n=\frac{u_n+v_n}{2}$. If $f(w_n)>w_n$, let $u_{n+1}=w_n$, $v_{n+1}=v_n$; otherwise let $u_{n+1}=u_n$, $v_{n+1}=w_n$.
Then these nested intervals converge towards a number $x_0\in[a,b]$.
Then $u_n<f(u_n)\le f(x_0)\le f(v_n)<v_n$ for all $n$ implies $f(x_0)=x_0$. |
H: Extreme values of a function with conditions
What is a way to find extreme values of a function $u(x,y,z)=xy+yz+xz$ with conditions $x+y=2, y+z=1$?
AI: $$u(x,y,z)=(2-y)y+y(1-y)+(2-y)(1-y)=y(2+1-1-2)+y^2(-1-1+1)+2=2-y^2$$
Alternately, $$u(x,y,z)=(x+y)(z+y)-y^2=2-y^2$$
So $u=2-y^2$ attains its maximum, $2$, at $y=0$ (and it has no minimum, since it is unbounded below).
Using Lagrange multipliers,
$$g_1(x,y,z)=x+y-2=0$$
$$g_2(x,y,z)=y+z-1=0$$
$$u(x,y,z)=xy+yz+xz$$
$$\nabla u= \lambda _1 \nabla g_1+\lambda _2 \nabla g_2$$
$$(y+z,x+z,x+y)= a (1,1,0)+b (0,1,1)$$
Giving
$$y+z=a$$
$$x+z=a+b$$
$$x+y=b$$
Substituting:
$$x+z=x+2y+z$$
$$y=0$$ |
H: Contructing a $\delta$-fine tagged partition from the old ones
Let $[a,b]\subset \mathbb{R}$. A tagged partition of $[a,b]$ is a set
$D=\{(t_i,I_i)\}_{i=1}^m$ where $\{I_i\}_{i=1}^m$ is a partition of $[a,b]$ consisting of closed non-overlapping subintervals of $[a,b]$ and $t_i\in I_i$; $t_i$ is called the tag associated with $I_i$.
Suppose that $D=\{(t_i,I_i)\}_{i=1}^m$ and $D'=\{(s_j,J_j)\}_{j=1}^n$
are any two tagged partitions of $[a,b]$. We say that $D'$ is finer than $D$ if for each $j\in\{1,2,\cdots,n\}$, there exists $i\in\{1,2,\cdots,m\}$ such that $J_j\subset I_i$ and every tag in $D$ is a tag in $D'$.
Let $\delta:[a,b]\to (0,+\infty)$ be a positive function. We say that a tagged partition $D=\{(t_i,I_i)\}_{i=1}^m$ is $\delta$-fine if
$$I_i\subset(t_i-\delta(t_i),t_i+\delta(t_i))\quad\mbox{for } i=1,2,\cdots,m.$$
Question:
Suppose $D$ and $D'$ are $\delta$-fine tagged partitions of $[a,b]$. How do we construct a third $\delta$-fine tagged partition $E$ of $[a,b]$ such that $E$ is both finer than $D$ and $D'$?
AI: There are two things to keep track of:
The partitions;
The tags.
On an intuitive level, they can be taken care of by the family $(I_i \cap J_j)_{i,j}$, placing the old tags as appropriate, and making up some new ones for those intervals not containing an old tag (this may require further splitting of the intervals, depending on $\delta$). We also need to split those $I_i \cap J_j$ that contain both $t_i$ and $s_j$ (presuming they are distinct, of course).
A more detailed version of the sketch above:
Denote $K_{ij} = I_i \cap J_j$; then $(K_{ij})_{ij}$ is a partition of $[a,b]$ of the required form.
Note that if we produce a tagged partition $E_{ij}$ for each $K_{ij}$, we can amalgamate these into a tagged partition $E$ of $[a,b]$. So fix a $K_{ij}$, and distinguish four cases:
$t_i, s_j \in K_{ij}$: if $t_i = s_j$, use the tagged partition $\{(K_{ij},t_i)\}$. Otherwise, use the partition $\{(K_{ij}\cap[a,r],t_i \wedge s_j),(K_{ij}\cap[r,b],t_i \vee s_j)\}$, where $r = \frac12(t_i+s_j)$, and $\wedge$ and $\vee$ denote the binary $\min$ and $\max$ operations, respectively.
$t_i \in K_{ij}$, $s_j \notin K_{ij}$: use the tagged partition $\{(K_{ij}, t_i)\}$.
$t_i \notin K_{ij}$, $s_j \in K_{ij}$: use the tagged partition $\{(K_{ij}, s_j)\}$.
$t_i, s_j \notin K_{ij}$: For each $r \in K_{ij}$, let $L_r = (r-\frac12\delta(r),r+\frac12\delta(r)) \cap K_{ij}$. Then each $L_r$ is open in the subspace topology, and $\bigcup\limits_{r \in K_{ij}} L_r = K_{ij}$. Since $K_{ij}$ is compact, find a finite subset $r_k, 1\le k \le \ell$ such that $\bigcup_k L_{r_k} = K_{ij}$ and $k<k'$ implies $r_k < r_{k'}$. Denote with $\bar L_r$ the closed interval corresponding to $L_r$. Use the tagged partition $$\{(\bar L_{r_k}, r_k):1\le k\le\ell\}$$ where $L_{r_0} = \varnothing$ (we will let rest the boring details for dealing with the non-trivial overlapping of $L_{r_k}$).
This construction yields a tagged partition $E_{ij}$ of each $K_{ij}$, and hence of $[a,b]$; call this latter tagged partition $E$. It is to be verified that $E$ refines $D$ and $D'$.
Now each $K_{ij}$ is contained in both $I_i$ and $J_j$, and hence all intervals used for $E_{ij}$ are, too; furthermore, each tag $t_i, s_j$ occurs in some $K_{ij}$, and by construction remains a tag in $E$. Thus $E$ refines $D$ and $D'$.
Finally, it remains to establish that $E$ is $\delta$-fine. We restrict attention to the tags within one $E_{ij}$. If this $E_{ij}$ was formed using one of the first three constructions, it is immediate from the $\delta$-fineness of $D$ and $D'$ that $E_{ij}$ is $\delta$-fine. For the fourth construction, observe that $L_{r_k}$ is strictly contained in $(r_k-\delta(r_k),r_k+\delta(r_k))$, since $\delta(r_k)>0$; therefore, $\bar L_{r_k}$ is also contained in this set, and hence so are the intervals used in the partition. In conclusion, all $E_{ij}$, and consequently $E$, are $\delta$-fine. |
H: For what values of $m$ the function $y=x^m\sin(x)$ have horizontal asymptote
I want to figure for what values of $m$ the function have horizontal asymptote.$$y=x^m\sin(x)$$
What I understand from that is that the function doesn't have a vertical asymptote, so I will look for an oblique asymptote $y=ax+b$ and require that its slope be equal to $0$:
$$a=\lim\limits_{x\to \infty} \frac{f(x)}{x}$$
I need some advice how to continue from here.
Thanks!
AI: $$\frac{f(x)}x=x^{m-1}\sin x\xrightarrow[x\to\pm\infty]{}\begin{cases}0&,\,\,m-1<0\\{}\\\not\exists&,\;\;m-1\ge0\end{cases}$$ |
H: Not able to solve $\int\limits_1^n \frac{g(x)}{x^{p+1}} \mathrm dx $
If $p=\frac{7}{8}$ then what should be the value of $\displaystyle\int\limits_1^n \frac{g(x)}{x^{p+1}} \mathrm dx $
when $$g(x) = x \log x \quad \text{or} \quad g(x) = \frac{x}{\log x}? $$
Wondering which way to proceed?
an algebraic substitution,
partial fractions,
integration by parts, or
reduction formulae.
Please don't suggest something like ("Learn basic Calculus first" etc).
Kindly help by solving if possible because I'm out of touch with calculus for nearly 15 yrs.
AI: $$\frac{g(x)}{x^{p+1}}=\frac{x\log x}{x^{p+1}}=\frac{\log x}{x^p}$$
By parts:
$$u=\frac1{x^p}\;,\;\;u'=-\frac p{x^{p+1}}\\v'=\log x\;,\;\;v=x\log x-x$$
Thus:
$$\int\limits_1^n\frac{\log x}{x^p}dx=\left.\left(\frac{\log x}{x^{p-1}}-\frac1{x^{p-1}}\right)\right|_1^n+p\int\limits_1^n\frac{\log x}{x^p}dx-p\int\limits_1^n\frac1{x^p}dx\ldots\ldots$$ |
H: Is a diagonal matrix times a matrix A a linear combination of A?
Say there is a set of $m$ vectors in $\mathbb R^n$, represented as a matrix $A$ in $\mathbb R^{n\times m}$; and an m-by-n diagonal matrix $D =\operatorname{diag}(i_1, i_2,...,i_m)$.
Is $D$ times $A$ a linear combination of that set of vectors? Is $0A$ the trivial linear combination?
edit OK, I realized I'm definitely wrong, since a diagonal matrix is square. But consider a matrix m-by-n, with $\lambda_n$ real numbers in each row on the matching columns.
AI: If you have $n$ column vectors $\textbf{v}_{1}, \textbf{v}_{2}, \ldots \textbf{v}_{n}$, each from $\mathbb{R}^{m}$, you can write this as an $m \times n$ matrix $A$.
$$A = \left( \begin{array}{c} \textbf{v}_{1} | \textbf{v}_{2} | \cdots|\textbf{v}_{n} \end{array} \right).$$
Now let $\textbf{c}$ be an $n\times 1$ column vector whose entries are constants $c_{1}, c_{2},\ldots,c_{n}$, i.e.
$$\textbf{c} = \left( \begin{array}{c} c_{1} \\ c_{2} \\ c_{3} \\ \vdots \\ c_{n} \end{array} \right)$$
then
$$\sum_{i = 1}^{n}c_{i}\textbf{v}_{i} = A\textbf{c}.$$ |
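In concrete terms, a tiny NumPy sketch of the identity $\sum_i c_i\textbf{v}_i = A\textbf{c}$ (the numbers are made up):

```python
import numpy as np

# columns of A are the vectors v_1, v_2, v_3 in R^2
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])
c = np.array([2.0, -1.0, 0.5])

by_matrix = A @ c
by_hand = sum(c[i] * A[:, i] for i in range(3))
print(by_matrix, by_hand)   # both [3.  0.5]
```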
H: Find push down automata and context free grammar
I have the following language:
$$
L = \{a^nb^{2n+1} \mid n \ge 0\}
$$
I must find the push down automaton and a context free grammar for the language.
For the push down I have no idea how to approach the problem.
For the context free grammar I think I know the solution:
$$
S \rightarrow Sb \\
S \rightarrow aSbb \\
S \rightarrow \lambda
$$
AI: Here's the basic idea: Start by pushing a marker on the stack. Then, as long as the input character is $a$, push two markers on the stack. Then, for each $b$ read, pop a marker from the stack. If, after having read all the input, the stack is empty, then accept the input.
I've left it to you to complete this PDA to deal with, for example, incorrect inputs like $aba$ and $abbbb$. |
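To sanity-check that description, here is a small Python simulation of the PDA, with a counter standing in for the stack (the function name and the test strings are mine):

```python
def accepts(s: str) -> bool:
    stack = 1                      # the initial marker
    i = 0
    while i < len(s) and s[i] == 'a':
        stack += 2                 # two markers per 'a'
        i += 1
    while i < len(s) and s[i] == 'b':
        if stack == 0:             # more b's than markers: reject
            return False
        stack -= 1                 # one marker per 'b'
        i += 1
    return i == len(s) and stack == 0

print(accepts("b"), accepts("abbb"), accepts("aabbbbb"))   # True True True
print(accepts(""), accepts("abbbb"), accepts("aba"))       # False False False
```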
H: Proving $\sum_{k=1}^m{k^n}$ is divisible by $\sum_{k=1}^m{k}$ for $ n=2013$
I got an interesting new question, it's about number theory and algebra precalculus. Here is the question:
a positive integer $n$ is called valid if $1^n+2^n+3^n+\dots+m^n$ is divisible by $1+2+3+\dots+m$ for every positive integer $m$.
Prove that 2013 is valid
Prove that there are infinite positive integers which are not valid
Every little hint, contribution and recommendation would be very helpful. Sorry for my bad English. Thanks in advance.
AI: If $n$ is odd, then modulo $m+1$ we have $2(1^n + 2^n + \ldots + m^n) = (1^n+m^n) + (2^n+(m-1)^n) + \ldots + (m^n+1^n) \\ \equiv (1^n-1^n) + (2^n-2^n) + \ldots + (m^n-m^n) = 0 \pmod {m+1}$.
Also, since $m^n \equiv 0 \pmod m$, $2(1^n + 2^n + \ldots + m^n) \equiv 2(1^n + 2^n + \ldots + (m-1)^n) \equiv 0 \pmod m$
Since $m$ and $m+1$ are coprime, this shows that $2(1^n + \ldots + m^n)$ is a multiple of $m(m+1)$, and since $m(m+1)$ is even, $1^n + \ldots + m^n$ is a multiple of $m(m+1)/2 = 1+2+\ldots+m$.
If $n$ is even then $1+2^n \equiv 2 \pmod 3$, which is not a multiple of $1+2 = 3$ |
H: Continuity of the function $f=1/x$
How do I show that the function $f(x)=\frac{1}{x}$ is continuous using the $\epsilon - \delta$ definition?
I have been trying for quite a while now without success.
My attempts
Suppose that $\left |x-x_0 \right| < \delta$ for some $\delta >0$ then $\left |f(x)-f(x_0)\right| =\left |\frac{1}{x}-\frac{1}{x_0} \right|=\left |\frac{x_0-x}{x \cdot x_0} \right | $
But what do I do when x=0?
AI: Let's assume we are looking at the fuction $f:\mathbb{R}\setminus \{0\} \to \mathbb{R}$ with $x\mapsto \frac{1}{x}$. A function $f:X\to Y$ is continuous in $x$ iff
\[ \forall \varepsilon >0 \exists \delta > 0 : \forall y\in X \text{ with } |x-y|< \delta \text{ the following holds } |f(x)-f(y)|< \varepsilon\]
A function is continuous if it is continuous in every point of its domain.
Now lets take a look at $|f(x)-f(y)|$
\begin{align*}
|f(x)-f(y)|&= | \frac{1}{x}-\frac{1}{y}| \\
&= |\frac{y-x}{xy}| \\
&= |x-y| \cdot \frac{1}{|xy|}
\end{align*}
This doesn't look that helpful, but now we use a little trick. We require that $\delta < \frac{|x|}{2}$; then you know that $\frac{|x|}{2} \leq |y|\leq \frac{3|x|}{2}$, so you can bound $\frac{1}{|xy|}$ and you are done.
H: Closed operator
I've got a very straightforward question : if $T : B \rightarrow B$ is a linear continuous operator and $B$ is a Banach space, is $T$ a closed operator?
This is obviously true in finite dimension, but I'm not sure what can happen in infinite dimension. Maybe it can't get "too bad" if $B$ is a Banach.
AI: 1) If closed means the image of every closed set is closed: No.
If this was true, any injective bounded operator with dense range would have to be surjective, whence invertible on a Banach space by the Banach isomorphism theorem. The compact diagonal operator $T(x_n)=(x_n/n)$ on $\ell^2$ is a natural counterexample.
2) If closed means the operator is closed, that is its graph is closed in $B\times B$: Yes.
Assume $(x_n,Tx_n)$ converges to $(x,y)$ in $X\times X$, that is $x_n\rightarrow x$ and $Tx_n\rightarrow y$ in $X$. By continuity of $T$, $Tx_n\rightarrow Tx$. By uniqueness of a limit in a metric space, $Tx=y$. So $(x,y)=(x,Tx)$ belongs to the graph of $T$. |
H: A problem about mollification
The problem is :
Given $M > 0$ a constant, show that there exists $\phi \in C^{\infty}(\mathbb{R})$ with the following properties:
i) $\phi(x) = x , \forall x \in [-M,M] $
ii) $ 0 \leq\phi^{'}(x) \leq 1, \forall \ x $
This question arises from my question in the link.
In the previous link the user 79635 says :
let $M$ be a constant; mollifying the function $f(x) = \min ( \max (x, -M-1), M+1 )$, you obtain a function $\phi$ with the properties said above.
I am trying to do the mollification, but I am not getting anywhere. Can someone give me a hand?
Thanks in advance.
AI: Fix some $\epsilon>0$ and define for $x\in\mathbb{R}$: $h_1(x)=x$, $h_2(x)=M+\epsilon$ and $h_3(x)=-M-\epsilon$. Let $U_1=(-\epsilon-M,M+\epsilon)$, $U_2=(M,\infty)$ and $U_3=(-\infty,-M)$.
Let $\{\phi_i,U_i\}_{i=1}^3$ be a partition of unit associated with $U_i$, i.e.
I - $\phi_i\in C^{\infty}(\mathbb{R})$,
II - $\operatorname{spt}\phi_i\subset U_i$ and $\sum_{i=1}^3\phi_i=1$
Define for $x\in\mathbb{R}$ $$h(x)=\phi_1h_1+\phi_2h_2+\phi_3h_3$$
Note that $h$ is the desired function.
Remark: To make things more clear, note that it is possible to choose $\phi_i$ in such a way that $\phi_1$ and $\phi_3$ are strictly decreasing in $(M,M+\epsilon)$ and $(-\epsilon-M,-M)$ respectively and $\phi_1$ and $\phi_2$ are strictly increasing in $(-\epsilon-M,-M)$ and $(M,M+\epsilon)$ respectively. |
H: On convergence of $\prod (1 - \alpha_n)$
Suppose $\{ \alpha_n \}$ is a decreasing sequence of real numbers such
that $0 < \alpha_n < 1$ and $\alpha_n$ goes to $0$ as $n$ goes to infinity.
I was wondering if there is a known condition for $\{ \alpha_n \}$ so that
the product $\prod (1- \alpha_n)$ will not be $0$?
Thanks!
AI: Whenever the product converges, its limit won't be zero, because by definition an infinite product converges only when its limit is not zero.
Using that
$$\log\Big( \prod_{i=1}^n a_i \Big) = \sum_{i=1}^n \log(a_i) $$
one just can use the well known results for series to test the convergence of products. |
H: Integrating $\ln x$ by parts
I am asked to integrate by parts $\int \ln(x) dx$. But I'm at a loss: aren't there supposed to be two functions in the integral for you to be able to integrate by parts?
AI: Hint: Write $\log(x)$ as $1 \cdot \log(x)$ and use integration by parts. |
H: CDF of the distance of two random points on (0,1)
Let $Y_1 \sim U(0,1)$ and $Y_2 \sim U(0,1)$.
Let $X = |Y_1 - Y_2|$.
Now the solution for the CDF in my book looks like this:
$P(X < t) = P(|Y_1 - Y_2| < t) = P(Y_2 - t < Y_1 < Y_2 + t) = 1-(1-t)^2$
They give this result without explanation. How do they come up with the $1-(1-t)^2$ part? Can you help me find the explanation?
AI: I want to change notation. Call the random variable called $Y_1$ in the problem by the name $X$. Call the random variable called $Y_2$ in the problem by the name $Y$. And finally, call the random variable called $X$ in
the problem by the name $T$. Trust me, these name changes are a good idea!
We need to assume that $X$ and $Y$ are independent.
Fix $t$ between $0$ and $1$. In the usual coordinate plane, draw the square with corners $(0,0)$, $(1,0)$, $(1,1)$, and $(0,1)$. Now draw the two lines $y=x+t$ and $y=x-t$. By independence, the joint distribution of $(X,Y)$ is uniform in our square.
Draw the lines $y=x-t$ and $y=x+t$. You know well what these look like. Remember that $0\le t \le 1$ when drawing the lines. For a nice picture, you could for example pick $t$ around $\frac{1}{3}$. (Without drawing a picture, you are unlikely to understand what is really going on.)
Note that $T\le t$ if and only if $|X-Y|\le t$ if and only if the pair $(X,Y)$ lands between our two lines. The probability that this happens is the area of the region between the two lines, divided by the area of the whole square, which is $1$. So we need to find the area of the region between the two lines.
Now we find that area. The part of the square which is outside our region consists of two isosceles right-angled triangles. Each of these triangles has legs $1-t$, so together they make up a $(1-t)\times (1-t)$ square, with area $(1-t)^2$.
Thus the area of the region between the two lines is $1-(1-t)^2$. |
H: Where I can find the Pythagorean theorem deduced from Hilbert's axioms?
Hilbert took years to make a rigorous revision and formalization of Euclidean geometry in his Foundations of Geometry. As he intended to organize only the most basic aspects of the theory, he didn't write about things like the Pythagorean Theorem or the sum of the angles of a triangle. He would say "it is easily deduced from the previous theorems." Even though it can be easy I don't seem to find any book where there is the Euclidean geometry presented as in the Euclid's Elements, but using Hilbert's axioms.
What I mean is: Is there an Elements of geometry like Euclid's, but deduced from Hilbert's rigorous axiomatics?
AI: Try Robin Hartshorne's book http://www.amazon.com/Geometry-Euclid-Beyond-Robin-Hartshorne/dp/0387986502. |
H: Points from an affine subspace with equal distance from given points
Given vector space $\mathbb{R}^3$ with dot product defined as $x \cdot y = 2x_1y_1 + 3x_2y_2 + x_3y_3$ where $x = (x_1,x_2,x_3),y = (y_1,y_2,y_3)$ and given an affine subspace $W: x - y - z - 2 = 0$ .
I need to find all points from W which have the same distance from $p_1 = (0,1,1),\enspace p_2 = (-1,1,0),\enspace p_3 = (0, 0, 0)$.
Am I correct that this will be an intersection of three paraboloids in $\mathbb{R}^3$ ? (Following the fact that parabola is the set of all points having the same distance from a point and a line in $\mathbb{R}^2$)
My approach would be, that I need to find $x = (x,y,z)$ satisfying following conditions:
1) $x \in W$ thus I get first equation $(1, -1, -1) \cdot (x,y,z) = 2$
Since $W$ is a hyperplane in $\mathbb{R}^3$ and distance of point $p_i$ from a hyperplane W is given by formula $$ \rho(p_i,W) = \frac{|p_i \cdot n_W - 2|}{||n_W||}$$ where $n_W$ is normal vector of $W$. Im aware that the norm in the denominator is induced by the given dot product.
I get 3 more conditions:
$\rho(p_1,W) = \rho(p_2,W) = \rho(p_3,W)$
Im not sure how to put all these 4 conditions all together, can anybody give me some hint? Or is my approach even correct or is there a faster (better) approach than mine?
AI: You want a (possibly several, though unlikely) point $A=(a,b,c)$ such that
$a-b-c-2 = 0$.
$2(a-0)^2+3(b-1)^2+(c-1)^2 = 2(a+1)^2 + 3(b-1)^2 + (c-0)^2 = 2(a-0)^2 + 3(b-0)^2 + (c-0)^2 $
The second equation gives us
$-2c+1 = 4a + 2$,
$-6b + 3 -2c+1= 0$ |
H: MODULAR problem
What will be the remainder when 64! is divided by 71?
Do we need to solve this problem by using MOD theorem or need to expands the factorial?
AI: Hint $\displaystyle\ \ {\rm mod}\ 71\!:\,\ 64! = \frac{70!}{\color{#c00}{70}\cdots \color{#0a0}{65}}\!\!\stackrel{\rm\ Wilson_{\phantom{ I_I}}}\equiv\!\!\!\! \frac{-1}{\color{#c00}{(-1)}\cdots \color{#0a0}{(-6)}}\equiv \frac{-1}{720}\equiv\frac{70}{10}\equiv 7$ |
H: Solving trigonometric identity with condition.
Problem : If $\sin\theta +\sin^2\theta +\sin^3\theta=1$ Then prove $\cos^6\theta -4\cos^4\theta +8\cos^2\theta =4$
My working :
As $\sin\theta +\sin^2\theta +\sin^3\theta=1 \Rightarrow \sin\theta +\sin^3\theta = \cos^2\theta$
Now the given equation : $\cos^6\theta -4\cos^4\theta +8\cos^2\theta$ can be written as
$(\sin\theta +\sin^3\theta)^3-4(\sin\theta+\sin^3\theta )^2+8(\sin\theta +\sin^3\theta)$
= $\sin^3\theta +\sin^6\theta +3\sin^5\theta +3\sin^7\theta -4\sin^2\theta -4\sin^6\theta -8\sin^4\theta + 8\sin\theta + 8\sin^3\theta$
But I think this is not the right way of doing this...Please suggest other alternative.. Thanks...
AI: Let $x = \sin \theta$. We are given that $x^3 + x^2 + x - 1 = 0$.
We want to show that $(1-x^2)^3 -4(1-x^2)^2 +8(1-x^2) = 4$. Expanding and comparing terms, this is equivalent to
$$ x^6 + x^4 + 3x^2 - 1 = 0. $$
This is true because $$x^6 + x^4 + 3x^2 - 1 = (x^3 + x^2 + x -1 ) ( x^3 - x^2 + x +1).$$ |
H: What is "every inductive set"?
In Apostol's «Calculus I», on page 22 there is the following definition:
A set of real numbers is called an inductive set if it has the following two properties:
(a) The number 1 is in the set
(b) For every x in the set, the number x + 1 is also in the set.
Next there is a definition of positive integers:
A real number is called a positive integer if it belongs to every inductive set.
So my question is what is meant by «every inductive set». I don't quite understand this definition.
AI: Thinking about the definition of "inductive set", you'll find that there are lots of inductive sets, for example: the set of all real numbers, the set of positive real numbers, the set of integers, the set of rational numbers, and lots more. "Every inductive set" means all of these, not just the ones I listed but also all other sets that satisfy the definition.
Notice that $1$ is in all these sets --- because the definition of "inductive set" says that it has to be there. The (b) clause in the same definition (applied with $1$ as the value of $x$) then ensures that $2$ is in all of the inductive sets. Continuing this way, you can see that $3$, $4$, etc. are also forced to be in every inductive set. On the other hand, $0$ is only in some of the inductive sets, not in all of them (for example, not in the set of positive real numbers). Similarly, $1/2$ is in some but not all of the inductive sets. After thinking about more examples like these, you'll see that the positive integers are in all inductive sets, but all other numbers are in only some, not all, of the inductive sets.
That observation is what Apostol is using to define what he means by positive integer. |
H: Contronominal property
Let $P$ be a set, $\leq$ a binary relation on $P$, reflexive, antisymmetric and transitive. Let $\wedge$ and $\vee$ be two binary operations, both commutative and associative and distributive one to each other. Let $0$ be the minimum element of $P$, $1$ the maximum. Suppose that for every $a\in P$ there exists $a'\in P$ such that $a\wedge a'=0$ and $a\vee a'=1$.
Now take $a,b\in P$ and assume $a\leq b$. Can i prove that $b'\leq a'$? How?
AI: No, you haven't assumed enough connections between the $\land$ and $\lor$ operations on the one hand and the partial order $\leq$ on the other. Consider, for example, the set $P$ of all subsets of $\{p,q,r\}$, let $\land$ and $\lor$ be intersection and union, and let $0$ be the empty set and $1$ be $\{p,q,r\}$, so $a'$ is the complement of $a$. Now you can define $\leq$ to be, for example, the linear order
$$
\varnothing<\{p\}<\{q\}<\{r\}<\{q,r\}<\{p,r\}<\{p,q\}<\{p,q,r\},
$$
and find that complementation does not reverse this order relation. |
H: If $\langle T(x),y \rangle=0$ then $T=T_0$ - Prove this result if the equality holds for all $x,y$ in some basis for $V$
Let $T$ be a linear operator on an inner product space $V$. If $\langle T(x),y \rangle=0$ for all $x,y \in V$, then $T=T_0$, where $T_0$ denotes the zero transformation.
Prove this result if the equality holds for all $x,y$ in some basis for $V$.
The hint says use the theorem below.
[let $V$ and $W$ be vector spaces over $F$ and suppose that {$v_1,\dots,v_n$} is a basis for $V$. For $w_1,\dots,w_n$ in $W$, there exists exactly one linear transformation $T:V \rightarrow W$ such that $T(v_i)=w_i$.]
But I don't know how to apply this.
Actually the true meaning of this question is still ambiguous to me.
It means that $x,y \in \beta$ for some basis $\beta$? And $\langle T(x),y \rangle=0$?
AI: Presumably what the question means is that if $v_k$ is a basis for $V$ and $w_k$ is a basis for $W$, then $\langle Tv_i, w_j \rangle = 0$ for all $i,j$.
Let $v_k$ be a basis for $V$ and $w_k$ be a basis for $W$. Then if $x \in V$ we can write $x = \sum_i \alpha_i v_i$ and similarly for any $y \in W$, we can write $y = \sum_j \beta_j w_j$. Then $\langle Tx, y \rangle = \sum_i \sum_j \alpha_i \beta_j \langle Tv_i, w_j \rangle = 0$. Hence $\langle Tx, y \rangle = 0$ for all $x,y$.
If $\langle Tx, y \rangle = 0$ for all $x,y$, then for each $x$, choose $y=Tx$, then you have $\langle Tx, Tx \rangle = \|Tx\|^2 = 0$. Hence $Tx = 0$ for all $x$, from which it follows that $T=0$. |
H: Prove that $f(x)=0$ has no rational solutions
$f(x)$ $\in$ $Z[X]$ monic polynomial of degree $n$
$k,p$ $\in$ $N$
If none of the numbers $f(k), f(k+1), \ldots , f(k+p)$ is divisible by $p+1$, then $f(x)=0$ has no rational solutions.
AI: Hint $\ $ Suppose not, so $\,f(x)\,$ has a rational root $\,r.\,$ By the Rational Root Test, a rational root of a $\rm{\color{#c00}{monic}}$ polynomial $\in\Bbb Z[x]\,$ must be an integer, since the test implies that the denominator of a reduced rational root divides the leading coefficient $\color{#c00}{(= 1)}.\,$ Since the $\,m = p+1\,$ consecutive integers $\,k,k\!+\!1,\ldots,k\!+\!p\,$ form a complete system of residues mod $\,m,\,$ the integer root $\,r\,$ must be congruent to one them, say $\, r \equiv k\!+\!i\pmod m.\, $ Therefore
$\quad{\rm mod}\ m\!:\,\ k\!+\!i\equiv r\ \Rightarrow\ f(k\!+\!i)\equiv f(r) \equiv 0,\ $ i.e. $\ p\!+\!1 = m\mid f(k\!+\!i),\, $ contra hypothesis. |
H: Rank of a matrix with an added all-1 row
As part of a proof I have the following statement, $A$ being an $n × n$ matrix:
Let us assume that $rank(A) ≤ n − 2$. If we add an extra row consisting of all $1$s to $A$, the resulting $(n+1) × n$ matrix still has rank at most $n − 1$.
I don't understand how adding an extra row can reduce the rank of a matrix.
AI: Suppose $A$ has $r$ linearly independent rows $R_1,\ldots ,R_r$ and $r\leq n-2$. Let $R$ be the row of all $1$'s. Now if $R_1,\ldots ,R_r , R$ are linearly independent, then the rank becomes $r+1$ and if they are linearly dependent then rank is $r$. So in either case rank is $\leq r+1\leq n-1$. |
H: Evaluate $\sum_{k=1}^nk\cdot k!$
I discovered that the summation $\displaystyle\sum_{k=1}^n k\cdot k!$ equals $(n+1)!-1$.
But I want a proof. Could anyone give me one please? Don't worry if it uses very advanced math, I can just check it out on the internet. :)
AI: HINT: $k(k!)=(k+1-1)(k!)=(k+1)!-k!$. Now do the summation and most of the terms will cancel. |
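If you want to convince yourself numerically before writing out the telescoping argument, here is a quick Python check (purely illustrative):

```python
from math import factorial

# Check sum_{k=1}^n k*k! = (n+1)! - 1 for small n.
for n in range(1, 10):
    lhs = sum(k * factorial(k) for k in range(1, n + 1))
    rhs = factorial(n + 1) - 1
    assert lhs == rhs, (n, lhs, rhs)
print("identity verified for n = 1..9")
```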
H: Norm-paradox of normal endomorphism
Let $A$ be a normal endomorphism $A:V\rightarrow V$, where $V$ is a unitary vector space. Now every normal endomorphism is unitarily diagonalizable, meaning $A=QDQ^{-1}$ for some unitary matrix $Q$ and a diagonal matrix $D$.
Now, if $||.||$ is an arbitrary operator norm, we would have $||A||=||QDQ^{-1}||=||D||=spec(A)$, where $spec(A)$ is the spectral radius of the matrix and $||Q||=1$ as $Q$ is unitary.
But if I have a look at $A=
\begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{pmatrix},$ then it has $spec(A)=3$ but obviously $||A||_1=4$, which contradicts the argument I gave above, although $A$ is symmetric and therefore also normal. Where am I wrong?
AI: There is no contradiction, because $\|\cdot\|_1$ is not unitarily invariant, therefore $\|QDQ^\ast\|_1\neq\|D\|_1$ in general. Among operator norms, only the $2$-norm is unitarily invariant. |
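A quick numerical illustration (Python/NumPy, just to make the point concrete): for the matrix above, the spectral radius and the operator $2$-norm agree, while the $1$-norm does not.

```python
import numpy as np

A = np.array([[2, -1, -1],
              [-1, 2, -1],
              [-1, -1, 2]])

print(max(abs(np.linalg.eigvals(A))))   # spectral radius: 3
print(np.linalg.norm(A, 2))             # operator 2-norm: 3
print(np.linalg.norm(A, 1))             # operator 1-norm (max column sum): 4
```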
H: Am I understanding vectors and matrices properly?
So, here is my understanding of a Vector:
A vector is an ordered set of real numbers that lie in the space $R^n$ where $n$ is the size of the vector.
So if $n$ equals 4, the vector is of size 4.
I understand matrices to be a set of vectors - row vectors and column vectors.
So, given a matrix $A_{3\times 2}$, the row vectors are in the space $R^2$ because each row has only 2 components, and the columns are in the space $R^3$ because each column has 3 components.
And finally here is my understanding of $R^n$:
It means an ordered set of $n$ numbers such that each one of the numbers is a real number. Each ordered set is called a $tuple$, and a tuple cannot have more than $n$ elements/components in it.
In the most basic way, is my understanding correct?
I was very bad at math in high school and now I am taking an effort to make it up and improve my math skills.
AI: The word "size" is not really used for the length of a vector, (normally we would just go all the way and say "number of components" or "coordinates" of the vector). We often say that the dimension of $R^n$ is $n$.
Note that, in higher generality, a vector space does not have to be presented in the form $R^n$ (i.e. it does not have to be comprised of coordinate vectors), and it does not have to be a space over the real numbers. (It could be a space defined over complex numbers, rational numbers, finite fields, etc. as the set of scalars.) A space simply satisfies some list of abstract axioms.
The size, also called magnitude, of a vector is measured by some kind of norm $\|\cdot\|$, which is often induced by some sort of inner product $\langle\cdot,\cdot\rangle$. For instance, in the Cartesian plane ${\bf R}^2$ with the standard inner product, the size of the vector $(3,4)$ is $\sqrt{3^2+4^2}=5$.
Indeed an $m\times n$ matrix has $n$ column vectors and $m$ row vectors. Sometimes a matrix is thought of as an array and not an ordered set of vectors.
The key use of matrices is that we can multiply them together, and every linear map on a vector space can be encoded as multiplication by some matrix (using a basis to represent the vectors as coordinate vectors); this will probably turn up later in your study.
The notation ${\bf R}^n$ does not itself denote an ordered $n$-tuple. Rather it denotes the space of all ordered $n$-tuples with real number entries, where addition is defined "componentwise" and scalar multiplication is defined in the obvious way, etc.
More generally, vector spaces do not need to be finite-dimensional, but infinite-dimensional vector spaces verge into different territories of mathematics (functional analysis, set theory, ..) |
H: Prime generating functions
I'm studying prime numbers at school and I've seen some functions that generate mostly prime numbers. I'm talking about : $$\text{Euler's polynomial : } n^2+n+41$$ $$\text{Legendre's polynomial : } 2n^2+29$$ $$\text{Ruby's polynomial : } 103n^2-3945n+34381$$ $$\text{Mersenne numbers : }2^n-1$$
I've noticed a strange correspondence between these functions: they are all second degree polynomials (except for Mersenne numbers). But Mersenne numbers can also be related to exponentiation with base 2. So my question is: what is the link between 2 and prime numbers that makes these functions generate primes? I mean, why doesn't using a 3rd or 4th degree polynomial, or anything else, work?
AI: There is no reason to assume that there is a dependence between primes, prime generating functions/polynomials and the number 2 as such.
For example http://mathworld.wolfram.com/Prime-GeneratingPolynomial.html gives examples of polynomials of various degrees generating primes.
Another interesting page to take a look at is http://en.wikipedia.org/wiki/Formulas_for_primes |
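As a small experiment (Python with sympy, for illustration): Euler's polynomial $n^2+n+41$ is prime for $n=0,\dots,39$ and fails for the first time at $n=40$, where $40^2+40+41=41^2$.

```python
from sympy import isprime

values = [n * n + n + 41 for n in range(50)]
composites = [n for n, v in enumerate(values) if not isprime(v)]
print(composites)          # the first composite value occurs at n = 40
print(values[40], 41**2)   # 1681 = 41^2
```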
H: If $\frac{\cos^4\theta}{\cos^2\phi}+\frac{\sin^4\theta}{\sin^2\phi}=1$, show $\frac{\cos^4\phi}{\cos^2\theta} +\frac{\sin^4\phi}{\sin^2\theta}=1$
If $\dfrac{\cos^4\theta}{\cos^2\phi}+\dfrac{\sin^4\theta}{\sin^2\phi}=1$, prove that $\dfrac{\cos^4\phi}{\cos^2\theta} +\dfrac{\sin^4\phi}{\sin^2\theta}=1$.
Unable to move further ...request you to please suggest how to proceed ..Thanks..
AI: Hint: Let $ x = \cos \theta$, $y = \cos \phi$.
Show by expansion (and clearing denominators) that both equations are equivalent to $x^4 - 2x^2 y^2 + y^4 =0$, hence these statements are equivalent to each other.
Note: This shows that the condition is satisfied iff $x = \pm y$. This is not required, but very strongly hinted at in the question. |
H: How can I find the smallest number of whole miles that equals a whole number of kilometers?
$1 \textrm{mile} = 1.609344 \textrm{km}$
I know that using $1000000$ miles I can move the decimal point and get a full number of $1609344 $km.
But how can I find the smallest number of whole miles that equals a whole number of kilometers, and do the same for other conversions?
AI: $$1.609344=\frac{25146}{15625}$$
So $15625$ miles is exactly $25146$ kilometers, and these are the smallest natural numbers because the above fraction is reduced. |
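In general you can read off such a minimal pair from the conversion factor written as a reduced fraction, e.g. in Python (illustrative):

```python
from fractions import Fraction

km_per_mile = Fraction(1609344, 1000000)    # 1.609344, as an exact fraction
print(km_per_mile)                          # 25146/15625 in lowest terms
print(km_per_mile.denominator, "miles =", km_per_mile.numerator, "km")
```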
H: Find gradient of this implicit function
How to find a gradient of this implicit function?
$$
xz+yz^2-3xy-3=0
$$
AI: EDIT: Abhinav pointed out a mistake, which has been corrected for posterity.
To find $\frac{\partial z}{\partial x}$ by implicit differentiation means to differentiate both sides of the equation with respect to $x$, while remembering that $z$ is implicitly a function of $x$ (we didn't write it with function notation such as $z(x)$). Don't forget to use the product rule when differentiating a product!
$$
\begin{align}
xz + yz^2 - 3xy - 3 &= 0 \\
\tfrac{\partial}{\partial x} \left[ xz + yz^2 - 3xy - 3 \right] &= \tfrac{\partial}{\partial x} \left[ 0 \right] \\
\tfrac{\partial}{\partial x} \left[ xz \right] + \tfrac{\partial}{\partial x} \left[ yz^2 \right] - \tfrac{\partial}{\partial x} \left[ 3xy \right] - \tfrac{\partial}{\partial x} \left[ 3 \right] &= 0 \\
\left[ 1 \cdot z + x \cdot \tfrac{\partial z}{\partial x} \right] + y \cdot 2z \tfrac{\partial z}{\partial x} - 3y - 0 &= 0 \\
z + x \tfrac{\partial z}{\partial x} + 2yz \tfrac{\partial z}{\partial x} - 3y &= 0
\end{align}
$$
This yields
$$
\frac{\partial z}{\partial x} = \frac{3y - z}{x + 2yz}.
$$
Try to find $\tfrac{\partial z}{\partial y}$ in an analogous way, and post your result in the comments. |
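If you want to double-check the implicit differentiation with a computer algebra system, here is a short sympy sketch (illustrative only):

```python
import sympy as sp

x, y = sp.symbols('x y')
z = sp.Function('z')(x, y)

F = x*z + y*z**2 - 3*x*y - 3

# Differentiate F = 0 with respect to x (resp. y); sympy applies the chain
# rule to z(x, y) automatically, and we then solve for the partial derivative.
dz_dx = sp.solve(sp.Eq(sp.diff(F, x), 0), sp.diff(z, x))[0]
dz_dy = sp.solve(sp.Eq(sp.diff(F, y), 0), sp.diff(z, y))[0]
print(sp.simplify(dz_dx))   # (3*y - z(x, y))/(x + 2*y*z(x, y))
print(sp.simplify(dz_dy))
```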
H: Exercise over the connected component of a point $x$ in a metric space $E$
In a metric space $E$, how to prove that the connected component of a point $x\in E$ is contained in every open and closed set containing $x$.
AI: The fact that $E$ is metric is really irrelevant, what matters is that it is a topological space.
Let $x \in E$ and let $C(x)$ be the connected component of $x$. If $O$ is closed and open ("clopen") and $x \in O$, then, as $O$ and $E \setminus O$ are both open, $C(x)$ cannot intersect both $O$ and $E \setminus O$ in non-empty sets, or else
$$ C(x) = ( C(x) \cap O ) \cup (C(x) \cap (E \setminus O)) $$
would be a non-trivial decomposition of the connected set $C(x)$ by relative open sets.
So one intersection is empty, and it cannot be $C(x) \cap O$, as it contains $x$. Alternatively stated: $C(x) \cap O$ is clopen in $C(x)$, and non-empty so must be $C(x)$ by connectedness.
So $C(x) \cap O = C(x)$, or $C(x) \subset O$.
This shows that $C(x)$ is contained in any clopen that contains $x$.
(the intersection of all clopen sets that contain $x$ (there is always at least one: $E$) is called the pseudo-component of $x$, so this proves that the component of $x$ is always a subset of the pseudocomponent of $x$; they can differ in general.) |
H: Is this function "$h$" symmetric about the plane $x=y$?
$h=\left\{\begin{matrix}
f,x<y\\ g,x\geq y
\end{matrix}\right.$
$g=f(y,x)$.
Is $h$ symmetric about $x=y$? Here $g$ is the function that changes all $x$ to $y$ and changes all $y$ to $x$ in $f(x,y)$.
For example, $h=\left\{\begin{matrix}
x^2-y^2, x<y\\ y^2-x^2,x\geq y
\end{matrix}\right.$.
Is $h$ symmetric about the plane $x=y$?
AI: Yes.
Suppose that $h$ is a function defined as
$$
h(x,y) = \begin{cases} f(x, y), & x < y \\ f(y, x), & x \ge y. \end{cases}
$$
Notice that $h$ only "sees" the half-plane in $(x, y)$-coordinates where $x < y$. For example, $h(2, 3) = f(2, 3)$ and $h(3, 2) = f(2, 3)$, as well. The value $f(3, 2)$ is never called upon, for instance.
This easily generalizes to a proof that $h$ has symmetry about the line $y = x$. If $x > y$, then $h(x, y) = f(y, x)$. In this case, $y < x$, so $h(y, x) = f(y, x)$. In other words, we can conclude that
$$
h(y, x) = h(x, y)
$$
for any $(x, y)$. |
H: Finding indefinite integral by partial fractions
$$\displaystyle \int{dx\over{x(x^4-1)}}$$
Can this integral be calculated using the Partial Fractions method?
AI: HINT:
We need to use Partial Fraction Decomposition
Method $1:$
As $x^4-1=(x^2-1)(x^2+1)=(x-1)(x+1)(x^2+1),$
$$\text{Put }\frac1{x(x^4-1)}=\frac Ax+\frac B{x-1}+\frac C{x+1}+\frac {Dx+E}{x^2+1}$$
Method $2:$
$$I=\int \frac1{x(x^4-1)}dx=\int \frac{xdx}{x^2(x^4-1)} $$
Putting $x^2=y,2xdx=dy,$
$$I=\frac12\int \frac{dy}{y(y^2-1)}$$
$$\text{ Now, put }\frac1{y(y^2-1)}=\frac A y+\frac B{y-1}+\frac C{y+1}$$
Method $3:$
$$I=\int \frac1{x(x^4-1)}dx=\int \frac{x^3dx}{x^4(x^4-1)} $$
Putting $x^4=z,4x^3dx=dz,$
$$I=\frac14\int \frac{dz}{z(z-1)}$$
$$\text{ Now, put }\frac1{z(z-1)}=\frac Az+\frac B{z-1}$$
$$\text{ or by observation, }\frac1{z(z-1)}=\frac{z-(z-1)}{z(z-1)}=\frac1{z-1}-\frac1z$$
Observe that the last method is susceptible to generalization.
$$J=\int\frac{dx}{x(x^n-a)}=\int\frac{x^{n-1}dx}{x^n(x^n-a)}$$
Putting $x^n=u,nx^{n-1}dx=du,$
$$J=\frac1n\int \frac{du}{ u(u-a)}$$
$$\text{ and }\frac1{u(u-a)}=\frac1a\cdot\frac{u-(u-a)}{u(u-a)}=\frac1a\left(\frac1{u-a}-\frac1u\right)$$ |
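If you just want to check the decomposition or the final antiderivative, sympy will do both (shown as an illustration, not a substitute for the hand computation):

```python
import sympy as sp

x = sp.symbols('x')
integrand = 1 / (x * (x**4 - 1))

print(sp.apart(integrand))          # the partial fraction decomposition
print(sp.integrate(integrand, x))   # antiderivative, matching the hand computation
```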
H: Proof using natural deduction
Prove that
$$\lnot r\Rightarrow \lnot p,\lnot(q\lor r),s\Rightarrow(p\lor q)\models\lnot s
$$
I'm completely stuck on this one. Only natural deduction inference rules can be used, no de morgan's law etc. The premises given all seem to be really irrelevant, and since we can't use transformational proof techniques, all the implications turn the first and third expressions into essentially garbage, and without de Morgan's law the second one seems useless too. How should I proceed?
AI: Hint: After listing your given premises, start by assuming $\lnot\lnot s$ or $s$, and derive a contradiction, using your premises. E.g.
premises
$\quad\vdots\quad$
$|$ Assume $s$
$\quad p\lor q$
EDIT: To finish: from the premise $\lnot(q\lor r)$ we can derive both $\lnot q$ and $\lnot r$ (assume $q$, infer $q\lor r$ by $\lor$-introduction, contradicting $\lnot(q\lor r)$; similarly for $r$). From $\lnot r$ and the premise $\lnot r\Rightarrow\lnot p$, modus ponens gives $\lnot p$. Now, under the assumption $s$, we have $p\lor q$, and each disjunct contradicts something already derived; so the assumption $s$ yields a contradiction, and we may conclude $\lnot s$. |
H: Proving Inner Product Space
Let $E=C^1 [a,b]$ be the space of all continuously differentiable functions. For $f,g \in E$ define $$ \langle f,g \rangle \ = \ \int_a^b f'(x) \ g'(x) \ dx$$
Is $\langle f,g \rangle$ an inner product space?
I'm just checking the four conditions from Kreyszig pg 129.
I have a few questions. I know the first few conditions are true but I'm unsure of the wording for my justification. Because these are continuously differentiable I do not need to incorporate any sort of measure or Lebesgue integral, correct? So scalars pull out due to properties of the Riemann integral, constructed as a limit of Riemann sums? Is this a real vector space? The functions are real valued, so is the inner product Hermitian symmetric from this fact?
AI: If $f(x)=c$ is a nonzero constant, then $f'\equiv 0$, so $\langle f,g\rangle=0$ for all $g$; in particular $\langle f,f\rangle=0$ although $f\neq 0$, so positive definiteness fails and this is not an inner product. |
H: Subgroups of a cyclic group and their order.
Lemma $1.92$ in Rotman's textbook (Advanced Modern Algebra, second edition) states,
Let $G = \langle a \rangle$ be a cyclic group.
(i) Every subgroup $S$ of $G$ is cyclic.
(ii) If $|G|=n$, then $G$ has a unique subgroup of order $d$ for each divisor $d$ of $n$.
I understand how every subgroup must be cyclic and that there must be a subgroup for each divisor of $n$. But how is that subgroup unique? I'm having trouble understanding this intuitively. For example, if we look at the cyclic group $\Bbb{Z}_7$, we know that there are $6$ elements of order $7$. So we have six different cyclic subgroups of order $7$, right?
Thanks in advance.
AI: To help you understand where you're going wrong, why not try writing out these "six different subgroups": if $G$ is a cyclic group of order $7$, and $a$ is a generator of $G$, then
$$\begin{array}{c|c}
\mathsf{\text{Subgroup of }}G\mathsf{\text{ generated by}} & \mathsf{\text{consists of}}\\\hline
a \strut & a,\; a^2,\;a^3,\; a^4,\; a^5,\;a^6,\;a^7=e\\\hline
a^2 \strut& \\\hline
\vdots \strut&\\\hline
a^6\strut & \\\hline
\end{array}$$ |
H: Problem of convolution.
If we are given a polynomial $\mathcal P$ and a compactly supported distribution $g$, can we prove that their convolution will be a polynomial again?
AI: It should be somewhat easy to show the following:
Lemma
If $f:\Bbb{R}^n\rightarrow\Bbb{R}$ is $m$ times continuously differentiable and $\partial^m_{x_j}f\equiv 0$ for all $1\leq j\leq n$, then $f$ is a polynomial of degree at most $m-1$.
Then, you should be able to differentiate under the integral sign to obtain your result. |
H: Test for convergence of improper integrals $\int_{0}^{1}\frac{\sqrt{x}}{(1+x)\ln^3(1+x)}dx$ and $\int_{1}^{\infty}\frac{\sqrt{x}}{(1+x)\ln^3(1+x)}dx$
I need to test if, integrals below, either converge or diverge:
1) $\displaystyle\int_{0}^{1}\frac{\sqrt{x}}{(1+x)\ln^3(1+x)}dx$
2) $\displaystyle\int_{1}^{\infty}\frac{\sqrt{x}}{(1+x)\ln^3(1+x)}dx$
I tried comparing with $\displaystyle\int_{0}^{1}\frac{1}{(1+x)\ln^3(1+x)}dx$, $\displaystyle\int_{0}^{1}\frac{\sqrt{x}}{(1+x)}dx$ but ended up with nothing.
Do you have any suggestions? Thanks!
AI: Near $0$, $\log(1+x)=x(1+O(x))$ so
$$
\frac{\sqrt{x}}{(1+x)\log^3(1+x)}=x^{-5/2}(1+O(x))
$$
and because $-5/2\le-1$, the integral in 1) does not converge.
$$
\left(\frac{\sqrt{x}}{\log^3(1+x)}\right)^{1/3}=\frac{x^{1/6}}{\log(1+x)}
$$
By L'Hospital,
$$
\begin{align}
\lim_{x\to\infty}\frac{x^{1/6}}{\log(1+x)}
&=\lim_{x\to\infty}\frac{\frac16x^{-5/6}}{1/(1+x)}\\
&=\lim_{x\to\infty}\frac16\left(x^{-5/6}+x^{1/6}\right)\\[12pt]
&=\infty
\end{align}
$$
Therefore,
$$
\lim_{x\to\infty}\frac{\sqrt{x}}{\log^3(1+x)}=\infty
$$
Thus, the integral in 2) does not converge by comparison to $\int_1^\infty\frac1{1+x}\,\mathrm{d}x$ |
H: My proof of $I \otimes N \cong IN$ is clearly wrong, but where have I gone wrong?
Ok, I'm reading the theses of some former students, and came across this proof, but it doesn't really look right to me. So I guess it should be wrong somewhere. So, here it goes:
Let $R$ be a unitary commutative ring, and $I$ be an ideal, and $N$ is an $R$-module. I'll now show that $$I \otimes_R N \cong IN\,.$$
First, since $1 \in R$, and $I$ is an ideal of $R$, we must have that $I = RI$. So then $$I \otimes_R N = RI \otimes_R N = R \otimes_R IN \cong IN\,.$$
But this must be so wrong, since if this proof is true, I could prove that any $R$-module is flat. So how come this proof is wrong?
AI: The equality $RI\otimes_R N=R\otimes_R IN$ is very subtly false: the point is that it does not hold in $I\otimes_RN$, which is the only place where it could hold.
But, since tensor product is $R-$bilinear, can't we write (for example) $1\cdot i\otimes n=1\otimes i\cdot n \:$?
No, we can't! Because $1\otimes i\cdot n$ does not make sense in $I\otimes_RN$, since $1\notin I$ and thus $1$ may not be put on the left-hand side of $\otimes_R$ in $I\otimes_RN$. |
H: For given $n\times n$ matrix $A$ singular matrix, prove that $\operatorname{rank}(\operatorname{adj}A) \leq 1$
For given $n\times n$ matrix $A$ singular matrix, prove that $\operatorname{rank}(\operatorname{adj}A) \leq 1$
So from the properties of the adjugate matrix we know that
$$ A \cdot \operatorname{adj}(A) = \operatorname{det}(A)\cdot I$$
Since $A$ is singular we know that $\operatorname{det}(A) = 0$, thus
$$ A \cdot \operatorname{adj}(A) = 0$$
This is where I'm getting lost, I think I should say that for the above to happen one of the two, $A$ or $\operatorname{adj}(A)$ would have to be the $0$ matrix, but if $A = 0$ then $\operatorname{adj}(A) = 0$ for sure, which means I said nothing.
A leading hint is needed.
AI: Since $A$ is singular, $\mbox{rank}A\leq n-1$.
Case 1: $\mbox{rank}A\leq n-2$. Then $A$ contains no invertible submatrix of order $n-1$. So every minor of order $n-1$ is zero. What can you conclude about $\mbox{adj}(A)$?
Case 2: $\mbox{rank}A= n-1$. By rank-nullity, we get $\dim\ker A=1$. Now $A\cdot \mbox{adj}(A)=0$ means that the range of $\mbox{adj}(A)$ is contained in $\ker A$. So... |
H: Deducing Euler Equation
From Sydsaeter / Hammond (Further Mathematics for Economic Analysis, 2008, 2nd ed., p. 293):
$$ \max \int\limits_{0}^T [N(\dot{x}(t)) + \dot{x}(t)f(x(t))] e^{-rt} dt $$
where N and f are $C^1$ functions, r and T positive constants, $x(0) =
x_0$, and $x(T)=x_T$. Deduce the Euler Equation: $\frac{d}{dt}
N'(\dot{x}) = r [N'(\dot{x}) + f(x)] $
Now, first of all, I'm not really sure how to understand 'deduce' here. Does it mean that I should bring the equation into the general Euler-equation form
$$ \frac{\delta F}{\delta x} = \frac{d}{dt} \frac{\delta F}{\delta \dot{x}}$$
?
If so, I end up with
$$ \dot{x} e^{-rt} f'(x) = \frac{d}{dt} N'(\dot{x})$$
which doesn't look too promising.
Am I totally misunderstanding the task here, or is my derivative wrong? Thanks!
AI: I think you did it wrong. The left side is correct. The right side is $\frac{d}{dt}\left\{[N'(\dot{x})+f(x)]e^{-rt}\right\}$, where $\frac{d}{dt}$ is applied to the whole expression. This gives
$$e^{-rt}\frac{d}{dt}[N'(\dot{x})+f(x)]-re^{-rt}[N'(\dot{x})+f(x)],$$
hence
$$\frac{d}{dt}[N'(\dot{x})+f(x)]-r[N'(\dot{x})+f(x)]=\dot{x}f'(x).$$
Using the chain rule on the left side, this is
$$\frac{d}{dt}N'(\dot{x})+\dot{x}f'(x)-r[N'(\dot{x})+f(x)]=\dot{x}f'(x),$$
hence
$$\frac{d}{dt}N'(\dot{x})=r[N'(\dot{x})+f(x)].$$ |
H: Sum of a countable dense set and a set of positive measure
Assume $A$ is a countable dense set in $\mathbb{R}$, and set $B$ has positive (Lebesgue) measure. Prove that $A+B=\{a+b:a\in A, b\in B\}=\mathbb{R}\backslash N$, where $N$ is a set of measure zero.
I haven't come up with a good idea.
Thanks in advance!
AI: This follows from the following Steinhaus theorem: if $A,B$ are subsets of the real line of positive measure then $A-B$ contains an interval. |
H: Using the beta function
Show that $\displaystyle \int_{0}^{\frac{\pi }{2}}\cos^{n} \theta d\theta=\int_{0}^{\frac{\pi }{2}}\sin^{n} \theta d\theta=\frac{\sqrt{\pi}[\frac{(n-1)}{2}]!}{2(\frac{n}{2})!}$
AI: First substitute $u=\cos{\theta}$ in the first integral to get
$$\int_0^1 du \, (1-u^2)^{-1/2} u^n$$
Now sub $v=u^2$ (that is, $u=\sqrt{v}$) to get
$$\frac12 \int_0^1 dv \, (1-v)^{-1/2} v^{(n-1)/2}$$
Use the definition of a beta function to get
$$\frac{\Gamma{\left(\frac12\right)}\Gamma{\left(\frac{n+1}{2}\right)}}{2\Gamma{\left(\frac{n}{2}+1\right)}}$$
It should be clear that the same substitution holds for the second integral. |
H: Finding the SVD from eigenvalues and unit eigenvectors
Suppose $A$ is a 2 by 2 symmetric matrix with unit eigenvectors $u_1$ and $u_2$. If its eigenvalues are $\lambda_1=3$ and $\lambda_2=-2$, what are the matrices $U,\Sigma,V^T $ in its SVD?
How to do this?
Does it have something to do with the matrix being symmetric?
David
AI: Since $A$ is symmetric, $A^*A=A^2$ and its singular values are simply the absolute values of its eigenvalues. So a SVD of $A$ is of the form $A=U\Sigma V^*$, with $U, V$ unitary and $\Sigma=\mbox{diag}(3,2)$. With the usual convention of ordering the singular values in descending order, only $\Sigma$ is unique. We will now see how to find some $U,V$ which work.
Since you are given an orthonormal basis $\{u_1,u_2\}$ of diagonalization, take $U$ to be the change of basis matrix from $\{u_1,u_2\}$ to the canonical basis. That is the columns of $U$ are $u_1$ and $u_2$ expressed in the canonical basis.
$$U^*AU=\pmatrix{3&0\\0&-2}\quad\Rightarrow\quad U^*AU\pmatrix{1&0\\0&-1}=\pmatrix{3&0\\0&2}=\Sigma.$$
Therefore
$$
A=U\Sigma V^*\quad\mbox{with}\quad V=U\pmatrix{1&0\\0&-1}.
$$
Note: this works in general if you are given a symmetric (or normal, more generally) matrix with an orthonormal basis of diagonalization. Then the singular values are just the moduli of the eigenvalues, so it suffices to multiply $U$ by the appropriate diagonal unitary matrix to get $V$. If $A$ is not normal, this no longer works. |
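A small NumPy sketch of the construction (with an arbitrarily chosen orthonormal eigenbasis, just to illustrate the recipe above):

```python
import numpy as np

theta = 0.7                                   # arbitrary choice of eigenbasis
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = U @ np.diag([3.0, -2.0]) @ U.T            # symmetric, eigenvalues 3 and -2

Sigma = np.diag([3.0, 2.0])
V = U @ np.diag([1.0, -1.0])
print(np.allclose(A, U @ Sigma @ V.T))        # True: A = U Sigma V^T
```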
H: Is there a proof of this that does not use idempotents?
I am going to present a statement and a proof. The proof makes use of idempotents which makes it a little cumbersome.
Is there a proof that does not use idempotents?
(using well-known theorems is OK even if their proofs do use idempotents)
Statement:
Let $k=\mathbb{F}_p$. Let $G$ be a finite group. Consider the group ring $kG$. Let $I$ be the augmentation ideal of $kG$. Denote the Jacobson radical of $kG$ by $J=J(kG)$. Let $U$ be the indecomposable projective $kG$-module such that $U/UJ$ is isomorphic to the trivial $kG$-module $k$. Write $kG=U\oplus W$ ($W$ is the direct sum of the other indecomposable modules in the decomposition of $kG$).
Then $I=UJ\oplus W$.
My proof:
The augmentation map $\epsilon:kG\rightarrow k$ factors through $kG/J$ because $J\subset I$ because $kG/I$ is simple. Write $\epsilon=\epsilon'\circ\pi$ (that is, $\epsilon$ factors as $kG\overset{\pi}{\longrightarrow}kG/J\overset{\epsilon'}{\longrightarrow}k$).
Let $U'=\ker\epsilon'$ and let $W'\subset kG/J$ be a direct complement of $U$' in $kG/J$. That is $kG/J=U'\oplus W'$. Write $U'=e_1kG/J$ and $W'=e_2kG/J$ with $e_1,e_2$ orthogonal idempotents. Since $J$ is a nil ideal (even nilpotent), we can lift $e_1,e_2$ to a pair of orthogonal idempotents $f_1,f_2\in kG$. Write $U=f_1kG$ and $W=f_2kG$. Then $kG=U\oplus W$.
Now, it's easy but a little cumbersome to prove that $W \subset I$ and $U \cap I = UJ$. Thus $I=UJ\oplus W$.
Finally, we have to show that our $U$ is the same $U$ as in the statement. It is enough to show that $U/UJ=k$. This is true because $UJ=U \cap J$ and thus $U/UJ=U/(U\cap J)=(U+J)/J$. That last expression equals $k$, but to show it I need to use idempotents and again it's easy but a little cumbersome.
AI: Roughly, $kG$ only has one part that can surject onto $k$; the rest is in the kernel.
In more detail: $kG$ is a direct sum of projective indecomposable modules $P_i$ each with a simple top factor $P_i/JP_i = S_i$ (with $d_i = \left|\{ j : S_i \cong S_j\}\right|$ the dimension of $S_i$). Since $k$ has dimension 1, there is only one such p.i.m. with top quotient $k$, and it is usually called $P_1$.
Consider the restriction of $\epsilon$ to each $P_i$. The kernel of the restriction certainly contains $JP_i$, but for $i > 1$ it must also contain the top factor too, since $S_i \not\cong S_1 = k$.
The kernel contains $JP_1 \oplus P_2 \oplus \ldots \oplus P_n$ but the quotient by that subgroup is already $k$, so the kernel of $\epsilon$ must be exactly $JP_1 \oplus P_2 \oplus \ldots \oplus P_n$. Here $U=P_1$ and $W=P_2 \oplus \ldots \oplus P_n$. |
H: Prove that $1, x, x^2, \dots , x^n$ are linearly independent in $C[-1,1]$
As it states in the title, I'd like to prove that $1, x, x^2, \ldots , x^n$ are linearly independent in $C[-1,1]$.
Should I use an induction argument or integrate for $x^m$ and $x^n$ with cases $m=n$ and $m \neq n$?
The inner product is $$ \langle f,g \rangle = \int_{-1}^1 f(x)g(x)dx.$$ Do both methods work?
AI: Suppose they aren't linearly independent in $[-1,1]$. Then $a_0+a_1x^1+\cdots+a_nx^n=0$ for some set of coefficients, where not all of them are zero. But an $n$ degree polynomial can have at most $n$ roots, but this one has infinitely many, a contradiction. |
H: How to prove $\mathbb Q$ is closed in the following metric space
Assume $(\mathbb R, d)$ is a metric space, where $$d:\mathbb R\times \mathbb R \to [0,\infty)$$$$d(x,y)=
\begin{cases}
0, & \text{if x=y} \\
max\{|x|,|y|\}, & \text{if x$\neq$y} \\
\end{cases}$$
How can I prove that $\mathbb Q$ is closed in this metric space and that $\mathbb Q$ is not open in this metric space?
Thanks for any hint.
AI: HINT: For any $x\in\Bbb R$ and $\epsilon>0$, the open ball of radius $\epsilon$ centred at $x$ is
$$\begin{align*}
B(x,\epsilon)&=\{y\in\Bbb R:d(x,y)<\epsilon\}\\
&=\big\{y\in\Bbb R:y=x\text{ or }\max\{|x|,|y|\}<\epsilon\big\}\;.
\end{align*}$$
Now $\max\{|x|,|y|\}\ge|x|$, so if $x\ne 0$ and $\epsilon\le|x|$, then $\max\{|x|,|y|\}\ge|x|\ge\epsilon$ for any $y\ne x$, and therefore $B(x,\epsilon)=\{x\}$. In other words, every non-zero real number is an isolated point in this space. To show that $\Bbb Q$ is closed, just use the fact that $0$ is rational.
To show that $\Bbb Q$ is not open, determine exactly what $B(0,\epsilon)$ looks like for $\epsilon>0$. |
H: Simple number theory problem
I found this question in a textbook on number theory:
For which integer c will $\;\displaystyle{\frac{c^6 - 3}{c^2 + 2}}\;$ also be an integer?
I wonder if there is a solution which is not based on trial and error.
AI: If $(c^6 - 3)/(c^2 + 2)$ is an integer, then so is $$\frac{c^6 - 3}{c^2 + 2} - (c^4 - 2c^2 + 4) = \frac{c^6 - 3}{c^2 + 2} - \frac{c^6 + 8}{c^2 + 2} = \frac{-11}{c^2 + 2},$$ that is, $c^2 + 2$ divides $11$. The only way this can happen is if $c^2 = 9$, so $c = \pm 3$. |
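A brute-force check (Python, illustrative) agrees with this:

```python
# (c**6 - 3) is divisible by (c**2 + 2) only for c = +-3 in this range,
# and the divisibility argument above shows no other integer can work.
solutions = [c for c in range(-1000, 1001) if (c**6 - 3) % (c**2 + 2) == 0]
print(solutions)   # [-3, 3]
```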
H: Proof for $\displaystyle\sum_{k=1}^n k^a$ equaling a sum of fractions
I know $\displaystyle\sum_{k=1}^n k^2$ equals $n/6+n^2/2+n^3/3$, but... why?
And I also know that $\displaystyle\sum_{k=1}^n k^3$ equals $n^2/4+n^3/2+n^4/4$, but... is there a pattern so I can easily get $\displaystyle\sum_{k=1}^n k^a$? And could you give me a proof if so?
AI: Assuming that we know the expression of
$$\sum_{k=1}^n k,\sum_{k=1}^n k^2,\ldots,\sum_{k=1}^n k^{p-1}$$
then since by telescoping
$$\sum_{k=1}^n \left[(k+1)^{p+1}-k^{p+1}\right]=(n+1)^{p+1}-1$$
and
$$(k+1)^{p+1}-k^{p+1}=\sum_{s=0}^{p}{p+1\choose s}k^s$$
hence we can find
$$\sum_{k=1}^n k^{p}=\frac{1}{p+1}\left((n+1)^{p+1}-\sum_{s=0}^{p-1}{p+1\choose s}\sum_{k=1}^nk^s-1\right)$$ |
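The recursion above is easy to turn into a small program; here is a sympy sketch (illustrative, helper name arbitrary) that reproduces the formulas for $\sum k^2$ and $\sum k^3$ quoted in the question:

```python
import sympy as sp

n = sp.symbols('n')

def power_sum(p):
    """Closed form of sum_{k=1}^n k^p via the telescoping recursion."""
    if p == 0:
        return n
    expr = (n + 1)**(p + 1) - 1
    for s in range(p):
        expr -= sp.binomial(p + 1, s) * power_sum(s)
    return sp.expand(expr / (p + 1))

print(power_sum(2))   # n**3/3 + n**2/2 + n/6
print(power_sum(3))   # n**4/4 + n**3/2 + n**2/4
```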
H: Cumulative distribution function of the generalized beta distribution.
Suppose $Z$ has a beta distribution on the interval $(0,1)$ and its probability density function is $f_Z(x)$. I know that the cumulative distribution function is
$$F_Z(x) = \mathbb{P}(Z \leq x) = \int_{0}^x f_Z(u) \, du.$$
I also know that if $X = cZ$, then $X$ has a beta distribution generalized to the interval $(0,c)$. But what is the cumulative distribution function of $X$?
$$F_X(x) = \mathbb{P}(X\leq x) = \mathbb{P}(cZ \leq x) = \, ?$$
Is it possible to write this in terms of the pdf or cdf of $Z$?
AI: Based on your derivation, wouldn't it be just $\mathbb{P}(cZ \leq x) = \mathbb{P}(Z \leq \frac{x}{c}) = F_Z(\frac{x}{c})$? |
H: An operator between $\mathcal{L}(X, Y)$ and $\mathcal{L}(Y, X)$
Please, I need help with this problem.
Let $X$, $Y$ be two vector normed spaces. Let $A_0\in\mathcal{L}(X, Y)$ such that $A^{-1}_0\in\mathcal{L}(Y,X)$. Show that there's an operator $\mathcal{T}_0\in\mathcal{L}(\mathcal{L}(X, Y), \mathcal{L}(Y, X))$ such that $\mathcal{T}_0A_0 = A_0^{-1}$ and $\|\mathcal{T}_0\| = \|A^{-1}_0\| / \|A_0\|$.
Thanks in advance.
AI: It's clear that $A_0 \neq 0$ ($A_0$ is injective), so by the Hahn-Banach Theorem on $\mathcal{L}(X,Y)$, there's $F\in[\mathcal{L}(X,Y)]^{\prime}$ such that
$$F(A_0)\ =\ \|A_0\|_{\mathcal{L}(X,Y)}\quad \mbox{ and }\quad \|F\|_{[\mathcal{L}(X,Y)]^{\prime}}\ =\ 1.$$
So, define the operator
$$\begin{array}{lcrcl}
\mathcal{T}_0 & : & \mathcal{L}(X,Y) & \longrightarrow & \mathcal{L}(Y,X)\\
& & B & \longmapsto & \mathcal{T}_0(B)\ :=\ \frac{F(B)}{\|A_0\|_{\mathcal{L}(X,Y)}}A_0^{-1}
\end{array}$$
Note that $\mathcal{T}_0$ is linear ($F$ is linear) and also $\mathcal{T}_0(A_0) = A_0^{-1}$. Besides,
$$\|\mathcal{T}_0(B)\| = \frac{\|A_0^{-1}\|_{\mathcal{L}(Y,X)}}{\|A_0\|_{\mathcal{L}(X,Y)}}|F(B)| \leq \frac{\|A_0^{-1}\|_{\mathcal{L}(Y,X)}}{\|A_0\|_{\mathcal{L}(X,Y)}}\|F\|\cdot\|B\| = \frac{\|A_0^{-1}\|_{\mathcal{L}(Y,X)}}{\|A_0\|_{\mathcal{L}(X,Y)}}\|B\|$$
then $\mathcal{T}_0\in\mathcal{L}(\mathcal{L}(X,Y),\mathcal{L}(Y,X))$.
Finally,
\begin{eqnarray*}
\|\mathcal{T}_0\| & = & \sup_{B\neq 0}\frac{\|\mathcal{T}_0(B)\|}{\|B\|}\ =\ \sup_{B\neq 0}\frac{\left\|\frac{F(B)}{\|A_0\|_{\mathcal{L}(X,Y)}}A_0^{-1}\right\|}{\|B\|}\\
& = & \frac{\|A_0^{-1}\|_{\mathcal{L}(Y,X)}}{\|A_0\|_{\mathcal{L}(X,Y)}}\sup_{B\neq 0}\frac{|F(B)|}{\|B\|}\\
& = & \frac{\|A_0^{-1}\|_{\mathcal{L}(Y,X)}}{\|A_0\|_{\mathcal{L}(X,Y)}}\cdot\|F\|\\
& = & \frac{\|A_0^{-1}\|_{\mathcal{L}(Y,X)}}{\|A_0\|_{\mathcal{L}(X,Y)}}.
\end{eqnarray*} |
H: Distribution with singularities.
I need some help to prove that $f$ defined by $\langle f,\psi\rangle:= \sum_{n=0} ^\infty
\psi^{(n)}(n)$ is a distribution which has singularities of infinite
order. Here $\psi$ is a test function that belongs to $ \mathcal D(\Bbb R)$.
Thanks.
AI: I will give you the main idea. Then I believe you can deal with the technical details.
You may already suspect that
$$
f=\sum_{n=0}^{\infty} (-1)^n \delta^{(n)}_n = \mathcal D - \lim_{N\rightarrow \infty}\sum_{n=0}^{N} (-1)^n \delta^{(n)}_n
$$
where the equality and the limit are taken in the sense of distributions. You can verify that the distribution $S_N = \sum_{n=0}^{N} (-1)^n \delta^{(n)}_n$ is well defined, and so is its limit $f$.
Then, since $\delta^{(n)}_x$ is a distribution of order $n$, you automatically have the desired result. |
H: Wedge product $S^1 \vee S^2$
I am trying to compute $\pi_1(S^1 \vee S^2$) by Van Kampen. I know Hatcher has a solution but I need to verify if my approach is correct and rigorous. I have seen a previous post on this topic, but I am using a different decomposition of $X=S^1 \vee S^2$, so please bear with me.
Let $z$ be the common point (wedge point).
Let $x_0$ be another point on $S^2$, and $x_1$ be another point on $S^1$.
Then let $Q=X\backslash x_0$, and $P=X\backslash x_1$.
Clearly, $X=Q\cup P$, and $Q,P$ both open.
Now, $\pi_1(Q)=\mathbb Z$, since the punctured sphere is homeomorphic to $R^2$, which def. retracts to the point $z$ and we are left with just $S^1$.
Similarly, $\pi_1(P)$ is trivial, since punctured $S^1$ def. retracts to $z$, and we are left with $S^2$, which is simply connected.
Also, $\pi_1(P\cap Q)$ is wedge of punctured sphere with punctured circle, both of which def. retract to $z$, and hence the wedge is simply connected.
So, now by Van Kampen, we obtain that $\pi_1(P\cup Q)$ is isomorphic to $\pi_1(P)*_{\pi_1(P\cap Q)}\pi_1(Q)$, which is just $\mathbb Z$.
Is this proof rigorous? Have I made some assumptions that have not been justified?
AI: It's correct. Even intuition suggests that in this case the only paths that are not in the equivalence class of the trivial one are the paths that wind around $S^1$. Moreover, it does not matter that they may also move around the sphere, because that part of the path can be homotoped away, so they can be retracted to move only around the circle.
In fact you have just applied a more general corollary of van Kampen's theorem:
$\pi_1(X \vee Y)$ is the group whose generators are the union of the generators of $\pi_1(X)$ and $\pi_1(Y)$, and whose relations are the union of the relations of the two groups.
The proof is easy: just choose the two open sets to be (small thickenings of) $X$ and $Y$, whose intersection is a simply connected neighborhood of the wedge point. |
H: Finding Root of an Equation with Variables Dependent on Each other
Sorry for the title. I'm sure there is better terminology. I'd be interested to hear what that terminology is haha.
Here is my problem:
If x < 40, y = 0.01
If x > 40, y = 0.02
If x > 50, y = 0.03
0 = -1000 + (x - 30 - [x*y])*50
Solve for $x$ and $y$.
This is a breakeven revenue equation that would normally only include $x$ (price) but now also includes $y$ (a royalty % of price).
I need to solve this in a way that can be accomplished programmatically with R or VBA.
It was suggested to me that I could make a guess for $y$, solve for $x$, and then again solve for $x$ and so on until I came to an answer for each. Would it not be an infinite loop?
AI: As you said - the best way to solve this would be to loop over all values of $y$, solve for $x$, and then see if $x$ falls within the correct bin. For instance, if you try $y=0.01$, you'll find that $x>50$, so that cannot be a solution. There is no infinite loop. |
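For instance, a direct translation of that idea (sketched in Python; the same loop is easy to write in R or VBA, and the bracket boundaries at $x=40$ and $x=50$ are interpreted one plausible way here):

```python
def royalty(x):
    # one reading of the brackets in the question
    if x < 40:
        return 0.01
    elif x <= 50:
        return 0.02
    else:
        return 0.03

for y in (0.01, 0.02, 0.03):
    # 0 = -1000 + (x - 30 - x*y) * 50  =>  x * (1 - y) = 50
    x = 50 / (1 - y)
    if royalty(x) == y:
        print("consistent solution: x = %.2f, y = %.2f" % (x, y))
```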
H: Continuity of a function $f$ in a metric space from the continuity of $f$ on every compact subset of $E$
Let $E$, $E'$ be two metric spaces,$f$ a mapping of $E$ into $E'$. Show that if the restriction of $f$ to any compact subspace of $E$ is continuous, then $f$ is continuous in E.
AI: HINT: If $\langle x_n:n\in\Bbb N\rangle$ is a sequence in $E$ converging to some $x\in E$, then $\{x_n:n\in\Bbb N\}\cup\{x\}$ is a compact set. |
H: Invertibility in a finite-dimensional inner product space
Let $T$ be an invertible linear operator on a finite-dimensional inner product space. I just want a hint as to how I should prove that $T^{*}$ is also invertible and $( T^{-1} )^{*} = ( T^{*} )^{-1}$.
$$ \circ \circ \circ ~ Answer ~ from ~ Below ~ \circ \circ \circ $$
$$
\langle(T^{-1})^*(T^*(v))\mid w\rangle\overset{1}{=}
\langle T^*(v)\mid T^{-1}(w)\rangle\overset{2}{=}
\langle v\mid T(T^{-1}(w))\rangle\overset{3}{=}
\langle v\mid w\rangle
$$
Could somebody explain steps $1$ through $3$, please?
Actually, I think @egreg is using this property:
$\langle Ax,y \rangle = \langle x,A^*y \rangle$
AI: The transpose $T^*$ of an endomorphism $T$ is characterized by the property that
$$
\langle v\mid T(w)\rangle=\langle T^*(v)\mid w\rangle
$$
(where I denote by $\langle x\mid y\rangle$ the inner product of the two vectors $x,y\in V$). The other thing to note is that if $S_1$ and $S_2$ are endomorphisms of $V$, then $S_1=S_2$ if and only if $\langle S_1(v)\mid w\rangle=\langle S_2(v)\mid w\rangle$, for all $v,w\in V$.
By definition, for all $v,w\in V$,
$$
\langle(T^{-1})^*(T^*(v))\mid w\rangle=
\langle T^*(v)\mid T^{-1}(w)\rangle=
\langle v\mid T(T^{-1}(w))\rangle=
\langle v\mid w\rangle
$$
and therefore $(T^{-1})^*\circ T^*$ is the identity.
Similarly,
$$
\langle T^*((T^{-1})^*(v))\mid w\rangle=
\langle (T^{-1})^*(v)\mid T(w)\rangle=
\langle v\mid T^{-1}(T(w))\rangle=
\langle v\mid w\rangle
$$
so also $T^*\circ (T^{-1})^*$ is the identity.
Therefore
$$
(T^{-1})^*=(T^*)^{-1}.
$$
as we wanted to prove.
Note that no hypothesis of finite dimensionality is needed, but just the existence of the transpose endomorphism, which is true for Hilbert spaces. |
H: Find closest vector to A which is perpendicular to B
To start, I would like to apologize if the answer to my question was easily googled, I am quite new to this and googling "Find closest vector to A which is perpendicular to B" gave me no results.
My problem:
I am a procedural generation programmer looking for a way to do something slightly similar to a cross product:
The cross product returns a vector which is perpendicular to two other vectors.
I need a vector which is only perpendicular to one vector. However, I need this vector to be the closest vector to another vector.
In other words, I would like to find a Vector C, with the smallest amount of difference between its self and Vector A -- But this vector MUST be perpendicular to Vector B.
Is there any way to do this using a series of Dot/Cross products (or some other sort of vector arithmetic)?
I am, sadly, not familiar with linear algebra, so I won't be able to solve for C in an answer in the form of an equation, so if you do decide to post an equation, please also post how I might make a computer solve for C using that equation.
Extra details:
I am working with unit vectors (Magnitude = 1)
I am working in 3D (not 2D... or 4D for that matter)
By "Closest Vector to A" I mean "Dot product between C and A closest to 1"
AI: You can do this with elementary vector algebra. Call $D = A \times B$, and then $C = B \times D$. $C$ is automatically orthogonal to $B$.
Of course, it's a little difficult to know that this is indeed the vector most like $A$. I reasoned this out using geometric algebra: there is a unique plane denoted $iB$ that is orthogonal to $B$ (and thus contains all vectors orthogonal to $B$). The vector in $iB$ closest to $A$ is just the projection of $A$ onto this subspace. This projection is denoted $[A \cdot (iB)](iB)^{-1}$, and this is equivalent to the prescription I have given using the cross product above. Geometric algebra is ideally suited to formulating problems like these, as it naturally lets you work with orthogonal planes and relationships between vectors and planes. |
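In code this is just two cross products (a NumPy sketch, assuming $A$ is not parallel to $B$; the helper name is arbitrary):

```python
import numpy as np

def closest_perpendicular(a, b):
    """Unit vector orthogonal to b that is as close as possible to a."""
    d = np.cross(a, b)                 # D = A x B
    c = np.cross(b, d)                 # C = B x D, automatically orthogonal to B
    return c / np.linalg.norm(c)       # fails only if a is parallel to b

a = np.array([1.0, 2.0, 3.0]); a /= np.linalg.norm(a)
b = np.array([0.0, 0.0, 1.0])
c = closest_perpendicular(a, b)
print(c, np.dot(c, b))                 # dot(c, b) is (numerically) 0
```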
H: Convergent Series in a dual space
I don't know how solve this problem.
Please I need help.
Let $X =\mathcal{C}[0,1]$ with the uniform norm and let
$\{p_j\}_{j\in\mathbb{N}}$, $\{q_j\}_{j\in\mathbb{N}}\subseteq X$ such
that the series $\sum\limits_{j=1}^{\infty}p_j(s)q_j(t)$ uniformly
converge to a continuous function $K:[0,1]\times [0,1]\rightarrow
\mathbb{R}$, i.e., $$\lim_{N\rightarrow +\infty}\left\{\max_{(s,t)\in [0,1]\times [0,1]}\left|K(s,t) - \sum_{j=1}^Np_j(s)q_j(t)\right|\right\}\ =\ 0.$$ Besides, let $F_j\in
X^{\prime}$ defined by $F_j(u) := \int_0^1q_j(t)u(t)dt$, $\forall\
u\in X$. Show that, for all $G\in X^{\prime}$, the series
$\sum\limits_{j=1}^{\infty}G(p_j)F_j$ is convergent in $X^{\prime}$.
Identify the limit value, in terms of the adjoint of a convenient
operator.
Thanks in advance.
AI: For the first question, notice that for a fixed $u\in X$, we have
$$\sum_{j=1}^NG(p_j)F_j(u)=\sum_{j=1}^NG((F_j(u))\cdot p_j)=G\left(\sum_{j=1}^NF_j(u)p_j\right),$$
hence given integers $M$ and $N$, we get
$$\left|\sum_{j=M}^{M+N}G(p_j)F_j(u)\right|\leqslant \lVert G\rVert\cdot\max_x\left|\sum_{j=M}^{M+N}p_j(x)\int_0^1q_j(t)u(t)\,dt\right|.$$
Recall that a dual space is complete. |
H: Scheffe’s Theorem
I saw a statement of Scheffe’s Theorem as follows ([1, p84]):
... we need a simple result called Scheffe’s Theorem. Suppose we have probability densities $f_n$ , $1 \le n \le \infty$, and $f_n \to f_\infty$ pointwise as $n \to \infty$. Then for all Borel sets $B$
$$
\left| \int_B f_n(x) \, \mathrm dx - \int_B f_\infty(x) \, \mathrm dx \right| \le
\int |f_n(x) - f_\infty(x)| \, \mathrm dx
=\int 2 (f_\infty(x) - f_n(x))^+ \, \mathrm dx \to 0
$$
by the dominated convergence theorem, the equality following from the fact that the
$f_n ≥ 0$ and have integral $= 1$.
I feel that to use dominated convergence theorem, and to make the last equality work, we need one more condition that $\int f_\infty(x) \, \mathrm dx < \infty$. Or in this case actually this condition is not necessary?
AI: The condition that $\int f_\infty < \infty$ can be deduced from the other assumption. By assumption, we have $f_n \to f_\infty$ pointwise and each $f_n$ is a density. Hence $\int f_n = 1$ for all $n$. From Fatou's lemma it follows that
$$\int f_\infty = \int \liminf f_n \le \liminf \int f_n = 1.$$ |
H: Fold, Gather, Cut
Here's a mathematical puzzle I've been thinking about. Let's say you have a strip of fabric, of length $N$ units ($N$ being an integer), which has regular markings on it every 1 unit along its length. Your task is to cut the fabric into $N$ lengths of 1 unit each, but to do so using the fewest operations possible. The operations available to you are:
Cut - you may cut through any number of overlapping layers in a single operation. All cuts must be through the 1-unit markings on the fabric, and the cut must be in a straight line. (No using a wavy cut to do the whole thing in 1 operation.)
Gather - you may gather any number of already-cut lengths together in a bundle. They must all line up on one end. (If they are the same lengths, they will of course line up on both ends.)
Fold - You may fold any number of layers of fabric in either direction. Multiple folds can only be counted as 1 operation if the places to be folded are already lined up, either via a previous Gather or another Fold step. Unlike Cuts, a Fold may occur between the 1-unit markings. Unfolding does not require an additional operation.
Now it can be trivially shown that for any $N$ which is $2^k$, an ideal solution would be to have $k$ Cuts alternating with $k-1$ Gathers, for a total of $2k - 1$ operations. This is by no means the only ideal solution. For example, consider $N=8$. The steps could be:
Cut into 2 lengths of 4.
Gather lengths.
Cut into 4 lengths of 2.
Gather lengths.
Cut into 8 lengths of 1.
Or you could do the following:
Fold at the 3rd marking.
Fold at the 5th marking the other direction.
Cut down the middle to get 4 lengths of 2 (2 of which are folded).
Gather lengths.
Cut into 8 lengths of 1.
Still 5 steps either way. It gets trickier when you consider other numbers however. Say, for $N=9$, you could take any $N=8$ solution, and then add 1 more cut for that last piece that will be 1 unit too long. But you can do 9 in 5 steps as well:
Fold in half.
Cut at the 3rd and 6th markings (which should be lined up) to get 3 lengths of 3.
Gather lengths.
Fold all 3 in half again.
Cut into 9 lengths of 1.
So my question is, with the given operations, can you compute the minimum number of operations for any given $N$?
AI: Here is an algorithm that does it in $2+\lceil\log_2 n\rceil$ steps. This doesn't match the OP's algorithm for $n=9$, unfortunately, but it's "k+2" rather than "2k-1" when $n=2^k$.
Fold the fabric into a pile that is exactly 2 inches wide. This takes $\lceil\log_2 n\rceil-1$ steps.
Cut down the middle. This will leave many pieces of fabric, almost all 2 inches wide (perhaps one piece 1 inch wide).
Gather it all into a single pile, 2 inches wide.
Cut.
For example, if $n=16$. Fold (8"), fold (4"), fold (2"). Cut, gather, cut. 6 steps.
Second example, if $17\le n\le 32$. Pretend it's $32$, the next higher power of 2. Fold (16"), fold (8"), fold (4"), fold (2"). Cut, gather, cut. 7 steps. |
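For comparison, here is a tiny Python sketch of the step count $2+\lceil\log_2 n\rceil$ of this algorithm for a few values of $n$:

```python
from math import ceil, log2

for n in (8, 9, 16, 17, 32, 100):
    print(n, ceil(log2(n)) + 2)
# e.g. n = 8 -> 5 steps (same as the OP), n = 9 -> 6, n = 16 -> 6.
```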
H: Rudin Theorem 2.47 - Connected Sets in $\mathbb{R}$
I need help with the proof of the converse, as given by Rudin in Principles of Mathematical Analysis, to the following theorem:
Theorem 2.47: A subset $E$ of the real line $\mathbb{R}^1$ is connected if and only if it has the following property: If $x \in E$, $y \in E$, and $x < z < y$ then $z \in E$.
Proof: To prove the converse suppose that $E$ is not connected. Then there are nonempty separated sets $A \text { and } B$ such that $A \cup B = E$. Pick $x \in A, y \in B$ and assume (without loss of generality) $x < y$. Define
$$z = \sup(A \cap [x,y])$$
By Theorem 2.28, $z \in \overline{A}$; hence $z \not\in B$. In particular, $x \leq z < y.$ If $z \not \in A$, it follows that $x < z < y$, and $z \not\in E$.
If $z \in A$, then $z \not\in \overline{B}$, hence there exists $z_1$ such that $z < z_1 < y \text { and } z_1 \not\in B$. Then $x < z_1 < y$ and $z_1 \not \in E$. $_\Box$
What I need help with:
i) I need help understanding what $z$ is and if there is any relationship to $\sup(A)$. I initially thought they were equal, but I constructed an example where they weren't.
ii) I need help understanding why Theorem 2.28 implies that $z \in \overline{A}$. It is my understanding that Theorem 2.28 implies that $z \in \overline{A \cap [x,y]}$. I initially thought that $\overline{A \cap [x,y]} = \overline{A} \cap \overline{[x,y]}$, but I've come to realize that this isn't generally the case and therefore I'm having troubles seeing the implication.
iii) I suppose that I should be able to understand the rest of the proof if I understand i) and ii), but any further added detail would be greatly appreciated.
Thank-you.
For reference:
Definition: Two subsets $A$ and $B$ of a metric space $X$ are said to be separated if $\overline{A} \cap B = \emptyset = A \cap \overline{B}$.
Definition: A set $E \subset X$ is said to be connected if $E$ is not a union of two nonempty separated sets.
Theorem 2.28: Let $E$ be a nonempty set of real numbers which is bounded above. Let $y = \sup(E)$. Then $y \in \overline{E}$. Hence $y \in E$ if $E$ is closed.
AI: i) $z$ is the supremum of $A$, restricted to the region $[x,y]$. This is necessary because we need $z\in [x,y]$.
ii) $\overline{A\cap [x,y]}\subseteq \overline{A}$, so if $z\in \overline{A\cap [x,y]}$ then... |
H: Inner Product Spaces : $N(T^{\star}\circ T) = N(T)$ (A PROOF)
Let $T$ be a linear operator on an inner product space. I really just want a hint as to how I should prove that $N(T^{\dagger}\circ T) = N(T)$, where "$^\dagger$" stands for the conjugate transpose.
Just as an aside, how should I read to myself the following symbolism:
AI: Hint: Let $V$ denote your inner product space. Clearly $N(T)\subseteq N(T^* T)$, so you really want to show that $N(T^* T)\subseteq N(T)$. Suppose $x\in N(T^* T)$. Then $T^* Tx = 0$, so we have $\langle T^* Tx, y\rangle = 0$ for all $y\in V$. Can you see where to go from here? |
H: variational question
Let $\Omega$ be a bounded, connected and regular domain, and let $f \in L^2(\Omega).$ Consider the variational problem: find $u \in H^1(\Omega)$ such that
$$\int_{\Omega} \nabla u \nabla v dx + (\int_{\Omega} u dx)(\int_{\Omega} v dx) = \int_{\Omega} f v dx, \forall v \in H^1(\Omega)$$
1- Prove that this variational problem admits a unique solution in $H^1(\Omega).$
2- Deduce the boundary value problem associated with this variational problem (study the two cases $u \in H^2(\Omega)$ and $u \notin H^2(\Omega)$).
I don't understand, in question 2, why we must study the two cases $u \in H^2$ and $u \notin H^2$.
AI: Case 1: $u\in H^2(\Omega)$
In this case, we have that $$\tag{1}\int_\Omega \nabla u\nabla v=-\int_\Omega v\Delta u+\int_{\partial\Omega}\frac{\partial u}{\partial\nu}v,\ \forall\ v\in H^1(\Omega)$$
From $(1)$, we conclude that $$\tag{2}-\int_\Omega v\Delta u+\int_{\partial\Omega}\frac{\partial u}{\partial\nu}v+\int_\Omega (\int_\Omega u)v=\int f v,\ \forall\ v\in H^1(\Omega)$$
If we take $v\in C_c^\infty(\Omega)$ in $(2)$, we can conclude by the Fundamental Lemma of Calculus of Variation that $$\tag{3}-\Delta u(x)+\int_\Omega u=f(x),\ a.e.\ x\in\Omega$$
By using $(3)$, we conclude from $(2)$ that $$\tag{4}\int_{\partial\Omega} \frac{\partial u}{\partial\nu}v=0,\ \forall\ v\in H^1(\Omega)$$
$(4)$ implies that $\frac{\partial u}{\partial\nu}=0$ in $\partial\Omega$, so your boundary value problem is
$$
\left\{ \begin{array}{rl}
-\Delta u+\int_\Omega u=f, &\mbox{ in $\Omega$} \\
\frac{\partial u}{\partial\nu}=0 &\mbox{ in $\partial\Omega$}
\end{array} \right.
$$
Case 2: $u\notin H^2(\Omega)$
I don't know how to treat this case. It is worth noting that the solution is unique, so I think that it is possible to prove that $u\in H^2(\Omega)$, by using difference quotient methods, and then we are in the first case again, but this is just a guess.
Update: Suppose that $u\in H^1(\Omega)$ satisfies $$\tag{5}\int_\Omega \nabla u\nabla v+\left(\int_\Omega u\right)\left(\int_\Omega v\right)=\int_\Omega fv,\ \forall\ v\in H^1(\Omega)$$
Take $v=1$ in $(5)$ and get $$\tag{6}\int_\Omega u=\frac{1}{|\Omega|}\int_\Omega f$$
From $(5)$ and $(6)$ we conclude that $$\tag{7}\int_\Omega\nabla u\nabla v=\int_\Omega\left(\frac{1}{|\Omega|}\int_\Omega f-f\right)v,\ \forall\ v\in H^1(\Omega)$$
As you can verify in Brezis's book chapter 9, equation $(7)$ implies that $u\in H^2(\Omega)$, so the same argument as above can be used.
Remark: After the update, we note that the first part of the proof could be carried out without distinguishing two separate cases. |
H: Calculate double integral of function over triangle
Find the limits for integrals $\int\int f(x,y) \,dy \, dx$ and $\int\int f(x,y)\,dx\,dy$ and compute the integral over the region, based on the function $f(x,y) = 3x^2y$.
Region = triangle inside the lines $x=0$, $y=1$, $y=2x$.
To find what the limits of my inner integral should be, I tried to sketch it. My problem was, how do I sketch a triangle when the only information I have is $x=0, y=1, y=2x$. I have no boundary for $x$?
AI: I take it that you need to compute
$$\iint_T dx dy \, f(x,y)$$
where $T$ is the triangle you described above. To see what to do, draw a picture. You'll notice that $T$ actually consists of the triangle above the line $y=2 x$, and bounded by the $y$ axis on the left, and the line $y=1$ from above. In this case, I find it easier to integrate over $x$ first, which means that the limit in $x$ is $[0,y/2]$. The integral looks like
$$\int_0^1 dy \, \int_0^{y/2} dx \, f(x,y)$$
When $f(x,y) = 3 x^2 y$ then you have
$$3 \int_0^1 dy \, y \, \int_0^{y/2} dx \, x^2$$
Evaluate from right to left. I take it you can handle the rest. |
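You can confirm the number with sympy if you like (illustrative):

```python
import sympy as sp

x, y = sp.symbols('x y')
# inner integral over x from 0 to y/2, then y from 0 to 1
print(sp.integrate(3*x**2*y, (x, 0, y/2), (y, 0, 1)))   # 1/40
```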
H: rock, paper, scissors, well
Everyone knows rock, paper, scissors. Now a long time ago, when I was a child, someone claimed to me that there were not only those three, but also, as a fourth option, the well. The well wins against rock and scissors (because both fall into it) but loses against paper (because the paper covers it).
Now I wonder: What would be the ideal playing strategy for rock, paper, scissors, well?
It's obvious that now the different options are no longer on equal footing. The well wins against two of the three other options, and also the paper now wins over two options, namely rock and well. On the other hand, rock and scissors only win on one of their three possible opponents.
Moreover, the scissors seem to have an advantage to the rock, as it wins against a "strong" symbol, namely paper, while the rock only wins against the "weak" symbol scissors.
Only playing "strong" symbols is obviously not a good idea because of those two, the paper always wins, so if both players only played strong symbols, the clear winning strategy would be to play paper each time; however if you play paper each time, you're predictable, and your opponent can beat you by selecting scissors.
So what if you play only well, paper and scissors, but all with the same probability? If your opponent knows or guesses it, it's obviously undesirable to choose rock, because in two of three cases he'd lose, while with any other symbol, he'd lose only in one of three cases. But if nobody plays rock, we are effectively at the original three-symbol game, except that the rock is now replaced by the well.
Therefore my hypothesis is: The ideal strategy for this game is to never play rock, and play each other symbol with equal probability.
Is my hypothesis right? If not, what is the ideal strategy?
AI: Mixing evenly between paper, scissors, and well is indeed an equilibrium.
Starting with Vadim's condition:
$$ p-s+(1-r-p-s)=\\-r+s-(1-r-p-s)=\\r-p+(1-r-p-s)=\\-r+p-s$$
If Rock receives no weight, we have:
$$s-(1-p-s)=\\-p+(1-p-s)=\\p-s$$
Which gives $p=s=(1-p-s)=\frac{1}{3}$
Further, Rock is dominated by any combination of the other three strategies against this mixture. Thus, any mixture of the three is indeed a best response to an equal mixture and so the equal mixture is a Nash equilibrium.
To see there is no other equilibrium, we can use the fact that in a symmetric zero-sum game, any strategy optimal for one player is optimal for the other. Note that when rock receives weight in the opponent's strategy, rock is strictly dominated by well. Thus, rock cannot be part of an equilibrium, since it would imply that rock is part of an optimal strategy against a strategy that includes positive weight on rock. |
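A quick numerical check of the equilibrium (Python sketch): against the mixture $(0,\tfrac13,\tfrac13,\tfrac13)$ on (rock, paper, scissors, well), rock earns $-\tfrac13$ and every other pure strategy earns $0$, so no deviation is profitable.

```python
names = ["rock", "paper", "scissors", "well"]
beats = {("rock", "scissors"), ("paper", "rock"), ("paper", "well"),
         ("scissors", "paper"), ("well", "rock"), ("well", "scissors")}

def payoff(i, j):
    # +1 if i beats j, -1 if it loses, 0 on a tie
    return 0 if i == j else (1 if (i, j) in beats else -1)

mix = {"rock": 0, "paper": 1/3, "scissors": 1/3, "well": 1/3}
for i in names:
    print(i, round(sum(mix[j] * payoff(i, j) for j in names), 3))
```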
H: Show $|X\times X| =$ cardinality of set of all functions $2\subseteq \omega \to X.$
Show that the Cartesian product $X\times X$ has the same cardinality as the set of all functions from the set $2 \subseteq \omega$ to the set $X.$
I wonder what strategy should work for this problem.
For example, how can I find a function from one of these two sets to the other? I understand that the Cartesian product is a set of ordered pairs, as well as the other set. However, all I know about the last set is that it's contained in the power set of the Cartesian product of 2 with X (according to Halmos's book, Naive Set Theory).
Maybe if I can better define the last set I would be able to find a function?
AI: HINT: The function $f:2\to X$ that takes $0$ to $x$ and $1$ to $y$ should correspond to the ordered pair $\langle x,y\rangle\in X\times X$. |
H: Zeta function and probability
I know that $\zeta(n) = \displaystyle\sum_{k=1}^\infty \frac{1}{k^n}$ (Where $\zeta(n)$ is the Riemann zeta function)
But the reciprocal of $\zeta(n)$ for $n$ a positive integer is equal to the probability that $n$ numbers chosen at random are relatively prime. But why? Can you give a proof?
AI: Here is a very rough sketch of the idea :
We have the following :
$$
\zeta(n) = \sum_{k \ge 1} \frac 1{k^n} = \sum_{k \ge 1} \prod_{p^{k_p} || k} \frac 1{(p^n)^{k_p} } = \prod_{p} \sum_{k \ge 0} \left( \frac 1{p^n} \right)^k = \prod_{p} \frac 1{1-\frac 1{p^n}}.
$$
(You need to work out the details for all the convergence issues and these are treated in pretty much all good elementary number theory books.)
Now if we choose $k_1, \dots, k_n$ integers independently and uniformly over the interval $[1,x]$, one roughly expects that $p | k_i$ with probability $1/p$. The fact that $(k_1,\dots,k_n) = 1$ means that there is no prime which divides all those integers at once. $p$ divides $k_1, \dots, k_n$ with probability $1/p^n$ assuming independence, hence the probability we are looking for is roughly
$$
\prod_{p \le x} \left( 1 - \frac 1{p^n} \right) \underset{x \to \infty}{\longrightarrow} \prod_p \left( 1 - \frac 1{p^n} \right) = \frac 1{\zeta(n)}.
$$
You probably need to understand better what happens when $p$ is relatively large compared to $x$ to work out the error terms, but the basic ideas are all here.
Hope that helps, |
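A Monte Carlo experiment (Python sketch, illustrative) makes the heuristic quite convincing; for $n=2$ the empirical proportion is close to $1/\zeta(2)=6/\pi^2\approx 0.6079$:

```python
import math
import random
from functools import reduce
from math import gcd

n, x, trials = 2, 10**6, 100_000
hits = sum(1 for _ in range(trials)
           if reduce(gcd, [random.randint(1, x) for _ in range(n)]) == 1)
print(hits / trials, 6 / math.pi**2)
```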
H: Classifying groups of order 12.
I was trying to classify groups of order 12 and I ended up with 5 different groups:
$\bullet$ $\Bbb{Z}_{12}$
$\bullet$ $\Bbb{Z}_2 \times \Bbb{Z}_6$
$\bullet$ $(\Bbb{Z}_2 \times \Bbb{Z}_2) \rtimes_{\alpha} \Bbb{Z}_3$ where $\alpha: (1,1) \rightarrow \bar{-1}$
$\bullet$ $(\Bbb{Z}_2 \times \Bbb{Z}_2) \times \Bbb{Z}_3$
$\bullet$ $\Bbb{Z}_3 \rtimes \Bbb{Z}_4$ where $\alpha$ sends the generator to $\bar{-1}$
I want to show that $(\Bbb{Z}_2 \times \Bbb{Z}_2) \times \Bbb{Z}_3 \cong D_{12}$ and $\Bbb{Z}_3 \rtimes \Bbb{Z}_4 \cong Q_{12}$.
My notes state that:
There is a homomorphism $f: D_{2n} \rightarrow G$ with f(a)=x and f(b)=y if and only if:
1) $x^2 = e$
2) $y^n = e$
3) $x^{-1}yx=y^{-1}$
If such a homomorphism exists, it is unique as $f(b^k) = y^k$ and $f(b^ka) = y^kx$. If |x|=2 and |y|=n f is 1-1.
There is a homomorphism $f:Q_{4n} \rightarrow G$ with f(a)=x and f(b)=y if and only if:
1) $x^4 = e$
2) $y^{2n} = e$
3) $x^{-1}yx = y^{-1}$
4) $x^2 = y^n$
If n>1, then f is 1-1 if and only if |x|=4 and |y|=2n.
Case 1: $(\Bbb{Z}_2 \times \Bbb{Z}_2) \times \Bbb{Z}_3 \cong D_{12}$
We know that:
1) $\bar{(1,1)} \in \Bbb{Z}_2 \times \Bbb{Z}_2$ has order 2
2) $\bar{1} \in \Bbb{Z}_3$ has order 3
3) $(\bar{(1,1)}, \bar{-1})(\bar{(0,0)}, \bar{1}) = (\bar{(1,1)}+ \alpha(\bar{-1})(\bar{(0,0)}), 0 )$ But I'm not sure if I know what $\alpha(\bar{-1})(\bar{(0,0)})$ is. We know that $\alpha$ sends the generator to $\bar{-1}$ But I'm not sure how to actually do this calculation. What exactly is the operation, do we just add $\bar{-1}$ to $\bar{(0,0)}$ and then add $\bar{1}$ (since that's the inverse of $\bar{-1}$?)
However, we are also supposed to have $|\bar{(1,1)}|=n=6$, but that is not true in this case, right?
Case 2: $\Bbb{Z}_3 \rtimes \Bbb{Z}_4 \cong Q_{12}$
Let $\Bbb{Z}_3 = \langle b \rangle$ and $\Bbb{Z}_4 = \langle a \rangle$
1) $a^4 = e$
2) $b^6 = e$
3) $a^{-1}ba = b^{-1}$
4) Here I'm stuck, because we are supposed to have $a^2 = b^3 = e$ but that's not true, right?
I also have the same problem here about the order of b since $|b|=3 \not= 6$, so how can it be isomorphic to $Q_{4n}$?
Thank you in advance
AI: Not quite right: your list of "five" groups of order $12$: There are indeed five groups of order $12$, but you've omitted one, and you've included a duplicate:
Note that $$(\mathbb Z_2 \times \mathbb Z_2) \times \mathbb Z_3 = \mathbb Z_2 \times (\mathbb Z_2 \times \mathbb Z_3) \cong \mathbb Z_2 \times \mathbb Z_6$$
since $\mathbb Z_2\times \mathbb Z_3 \cong \mathbb Z_6$, because $\gcd(2, 3) = 1$. |
H: Simple projector problem
Please, consider this ("sub")problem:
Let $S$ be a two-dimensional subspace of a Hilbert space $H$ and let $Q\in\mathcal{L}(S,S)$, $Q\neq 0$ and $Q\neq I$, such that $Q^2 = Q$. Show that $\mbox{Im}(Q)\oplus\mbox{Im}(I-Q)$ and there are $p,q,r,s\in S$, none zero, such that $\langle p,q\rangle = \langle r,s\rangle = 1$ and
$$Q(v)\ =\ \langle q, v\rangle p\quad \mbox{ and }\quad (I-Q)(v)\ =\ \langle s,v\rangle r, \quad \forall\ v\in S.$$
I have already proved that $\mbox{Im}(Q)\oplus\mbox{Im}(I-Q)$, but I have had trouble justifying the last part. Please, somebody help me. Thanks in advance.
AI: If what you mean is that you proved that $S=\mbox{Im}(Q)\oplus\mbox{Im}(I-Q)$ (it's not written like that in your question), then you already know that the image of $Q$ is one dimensional.
Now let $p\in\mbox{Im}(Q)$ with $p\ne0$. For any $v\in S$, there exists $\lambda_v\in\mathbb C$ with $Qv=\lambda_v\,p$. It is easy to see that this assignment is unique and linear, i.e. $v\mapsto\lambda_v$ is a linear functional on $S$.
By the Riesz Representation Theorem, there exists $q\in S$ such that $\lambda_v=\langle q,v\rangle$ (assuming your convention is that the inner product is linear in the second coordinate; mathematicians tend to choose the opposite convention). So
$$
Qv=\langle q,v\rangle p.
$$
The reasoning for $I-Q$ is similar. |